repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
jumpserver/jumpserver | django | 14,946 | [Bug] | ### Product Version
Connecting to Windows over SSH with a key reports an error
### Product Edition
- [x] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [x] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Target machine:
SSH server: v9.8.1.0p1-Preview
OS version: Win10 1809
### 🐛 Bug Description

I have tested with other clients, and logging in with the private key works fine.
Logging in over SSH with a password also works; the problem only occurs when logging in with the key pair.
### Recurrence Steps
Log in over SSH with a key pair
### Expected Behavior
_No response_
### Additional Information
_No response_
### Attempted Solutions
_No response_ | closed | 2025-02-28T02:42:29Z | 2025-02-28T02:50:36Z | https://github.com/jumpserver/jumpserver/issues/14946 | [
"🐛 Bug"
] | wellsyu | 1 |
PeterL1n/BackgroundMattingV2 | computer-vision | 87 | training question | Hello! I trained the code 'train_refine.py' without any revise using my own dataset to get the pth file.However,I find the network structure I have got is different from the pretrained model you have provide:pytorch_mobilenetv2.The following image1 is your pretrained model structure and the second one image2 is mine

image1

image2
Could you please tell me why my network structure has 10 more layers than yours, and if I want to change the code to get a model like yours, what should I do?
2. Another question: I tested both your model and mine on CPU, and I found that your model's FLOPs are about half of mine, even though I used the same training code. I don't know if this is because my network structure has 10 more layers than yours, as in the first question.
My English is not good, sorry, and I hope you can resolve my doubts. Thank you! | closed | 2021-04-15T07:03:38Z | 2021-04-22T15:21:06Z | https://github.com/PeterL1n/BackgroundMattingV2/issues/87 | [] | PROMISELVKR | 4 |
ageitgey/face_recognition | python | 1,531 | 【BUG】Video example code is not working... | * face_recognition version: 1.3.0
* Python version: 3.9.16
* Operating System: Rocky Linux 9.2 (Blue Onyx) x86_64
### Description
The example `facerec_from_video_file.py` is not running.
Here's the terminal output...
```bash
$ /bin/python /home/ander/face_recognition/examples/facerec_from_video_file.py
Traceback (most recent call last):
File "/home/ander/face_recognition/examples/facerec_from_video_file.py", line 50, in <module>
face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
File "/home/ander/.local/lib/python3.9/site-packages/face_recognition/api.py", line 214, in face_encodings
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
File "/home/ander/.local/lib/python3.9/site-packages/face_recognition/api.py", line 214, in <listcomp>
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
TypeError: compute_face_descriptor(): incompatible function arguments. The following argument types are supported:
1. (self: _dlib_pybind11.face_recognition_model_v1, img: numpy.ndarray[(rows,cols,3),numpy.uint8], face: _dlib_pybind11.full_object_detection, num_jitters: int = 0, padding: float = 0.25) -> _dlib_pybind11.vector
2. (self: _dlib_pybind11.face_recognition_model_v1, img: numpy.ndarray[(rows,cols,3),numpy.uint8], num_jitters: int = 0) -> _dlib_pybind11.vector
3. (self: _dlib_pybind11.face_recognition_model_v1, img: numpy.ndarray[(rows,cols,3),numpy.uint8], faces: _dlib_pybind11.full_object_detections, num_jitters: int = 0, padding: float = 0.25) -> _dlib_pybind11.vectors
4. (self: _dlib_pybind11.face_recognition_model_v1, batch_img: List[numpy.ndarray[(rows,cols,3),numpy.uint8]], batch_faces: List[_dlib_pybind11.full_object_detections], num_jitters: int = 0, padding: float = 0.25) -> _dlib_pybind11.vectorss
5. (self: _dlib_pybind11.face_recognition_model_v1, batch_img: List[numpy.ndarray[(rows,cols,3),numpy.uint8]], num_jitters: int = 0) -> _dlib_pybind11.vectors
Invoked with: <_dlib_pybind11.face_recognition_model_v1 object at 0x7f8bc98bb2b0>, array([[[ 8, 3, 0],
[12, 7, 4],
[18, 13, 10],
...,
[24, 9, 0],
[24, 9, 0],
[24, 9, 0]],
[[ 8, 3, 0],
[12, 7, 4],
[18, 13, 10],
...,
[24, 9, 0],
[24, 9, 0],
[24, 9, 0]],
[[ 7, 2, 0],
[11, 6, 3],
[16, 11, 8],
...,
[24, 9, 0],
[24, 9, 0],
[24, 9, 0]],
...,
[[ 7, 3, 0],
[ 7, 3, 0],
[ 9, 3, 0],
...,
[17, 1, 0],
[11, 1, 2],
[11, 1, 2]],
[[ 7, 3, 0],
[ 7, 3, 0],
[ 9, 3, 0],
...,
[17, 2, 0],
[11, 1, 0],
[11, 1, 0]],
[[ 7, 3, 0],
[ 7, 3, 0],
[ 9, 3, 0],
...,
[17, 2, 0],
[11, 1, 0],
[11, 1, 0]]], dtype=uint8), <_dlib_pybind11.full_object_detection object at 0x7f8ba3186e70>, 1
```
### What I Did
```bash
git clone https://github.com/ageitgey/face_recognition.git
cd face_recognition/examples
python facerec_from_video_file.py
```
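A workaround often suggested for this dlib argument-type error (a sketch, not verified against this exact setup) is to make sure the frame passed to face_recognition is a C-contiguous uint8 RGB array, e.g.:
```python
import cv2
import numpy as np
import face_recognition

input_movie = cv2.VideoCapture("input.mp4")  # placeholder path
ret, frame = input_movie.read()              # BGR frame from OpenCV

# A reversed slice like frame[:, :, ::-1] gives a non-contiguous view that some
# dlib builds reject; force a contiguous RGB copy instead.
rgb_frame = np.ascontiguousarray(frame[:, :, ::-1])
# Equivalent alternative: rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

face_locations = face_recognition.face_locations(rgb_frame)
face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
```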
| open | 2023-09-03T04:51:53Z | 2023-12-09T11:58:05Z | https://github.com/ageitgey/face_recognition/issues/1531 | [] | andersprenger | 2 |
glumpy/glumpy | numpy | 34 | WebGL backend | How complicated it would be to implement a WebGL backend? Assuming we have a `gloo.js` that implements gloo in JavaScript, we'd at least need to export an entire window to a structure with the list of programs, GLSL, variables, data, etc.
Alternatively, we could have another gloo implementation which creates such structure instead of displaying something, or it could generate GLIR commands like in VisPy.
For interactivity we can always reimplement it in JavaScript on top of gloo.js.
| closed | 2015-03-20T12:25:57Z | 2015-08-18T19:36:35Z | https://github.com/glumpy/glumpy/issues/34 | [] | rossant | 2 |
tortoise/tortoise-orm | asyncio | 1,332 | Actual query results and Tortoise results inconsistent with filtering using custom SQL annotations | **Describe the bug**
The following demo model outputs SQL that returns multiple results. However, when actually executing the query, Tortoise for unknown reasons only returns one result.
Here is the model:
```python
class Demo(Model):
    active = fields.BooleanField(default=True)
    user_id = fields.BigIntField(index=True)
    start = fields.DatetimeField(auto_now=True)
    end = fields.DatetimeField(default=None, null=True)

    @staticmethod
    async def get_active():
        return Demo.filter(consumed__gt=0.0)\
            .filter(active=True)\
            .annotate(active_seconds=RawSQL(f'{int(time.time())} - UNIX_TIMESTAMP(`start`)'))\
            .filter(active_seconds__lt=90000)\
            .sql()
```
Here is the query it outputs:
```sql
SELECT `user_id`,`end`,`id`,`active`,`start`,1675345393 - UNIX_TIMESTAMP(`start`) `active_seconds` FROM `demo` WHERE `active`=true AND 1675345393 - UNIX_TIMESTAMP(`start`)<90000
```
This query returns 4 results,

Running the query through Tortoise now, it returns nothing unless I increase active_seconds to 100,000, in which case it returns 2 results again.
**To Reproduce**
You'll likely need to re-create your own demo tables and run a query with a filter similar to what is demonstrated above.
**Expected behavior**
It is doing something outside of the query and filtering results that it should not be filtering. Any results that are returned by the database backend should be processed. For some reason, that is not happening here. I have no idea what it could be doing between running the query, which works and returns the correct number of results, and processing the results into objects. | closed | 2023-02-02T13:48:40Z | 2023-02-02T14:05:47Z | https://github.com/tortoise/tortoise-orm/issues/1332 | [] | ghost | 2 |
pyg-team/pytorch_geometric | deep-learning | 9,181 | torch::jit::load error | ### 🐛 Describe the bug
I have scripted my model in Python and want to load it in C++. It can be confirmed that torch_scatter and torch_sparse have been successfully compiled.
I can run the following example and get the correct result.
````
#include <torch/script.h>
#include <torch/torch.h>
#include <pytorch_scatter/scatter.h>
#include <pytorch_sparse/sparse.h>
#include <iostream>
int main()
{
torch::Tensor src = torch::tensor({ 0.5, 0.4, 0.1, 0.6 });
torch::Tensor index = torch::tensor({ 0, 0, 1, 1 });
std::cout << src << std::endl;
std::cout << index << std::endl;
std::cout << torch::cuda::cudnn_is_available() << std::endl;
std::cout << torch::cuda::is_available() << std::endl;
std::cout << scatter_sum(src, index, 0, torch::nullopt, torch::nullopt) << std::endl;
torch::Tensor tensor = torch::tensor({ 0, 0, 0, 0, 1, 1, 1});
std::cout << tensor << std::endl;
std::cout << ind2ptr(tensor, 2) << std::endl;
}
````
````
output:
0.5000
0.4000
0.1000
0.6000
[ CPUFloatType{4} ]
0
0
1
1
[ CPULongType{4} ]
1
1
0.9000
0.7000
[ CPUFloatType{2} ]
0
0
0
0
1
1
1
[ CPULongType{7} ]
0
4
7
[ CPULongType{3} ]
````
But when I load my scripted model, an error is thrown.
````
torch::jit::script::Module model;
std::string file_name = "D:\\TESTProgram\\Libtorch_1.13_cu116\\TorchTest_1.13+cu116\\Dualgnn_module_0409.pt";
try {
model = torch::jit::load(file_name);
}
catch (std::exception& e)
{
std::cout << e.what() << std::endl;
return -1;
}
````
````
Unknown builtin op: torch_scatter::segment_sum_csr.
Could not find any similar ops to torch_scatter::segment_sum_csr. This op may not exist or may not be currently supported in TorchScript.
:
File "code/__torch__/torch_scatter/segment_csr.py", line 35
indptr: Tensor,
out: Optional[Tensor]=None) -> Tensor:
_10 = ops.torch_scatter.segment_sum_csr(src, indptr, out)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return _10
def segment_mean_csr(src: Tensor,
'segment_sum_csr' is being compiled since it was called from 'segment_csr'
Serialized File "code/__torch__/torch_scatter/segment_csr.py", line 5
out: Optional[Tensor]=None,
reduce: str="sum") -> Tensor:
_0 = __torch__.torch_scatter.segment_csr.segment_sum_csr
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_1 = __torch__.torch_scatter.segment_csr.segment_mean_csr
_2 = __torch__.torch_scatter.segment_csr.segment_min_csr
'segment_csr' is being compiled since it was called from 'segment'
Serialized File "code/__torch__/torch_geometric/utils/segment.py", line 4
ptr: Tensor,
reduce: str="sum") -> Tensor:
_0 = __torch__.torch_scatter.segment_csr.segment_csr
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return _0(src, ptr, None, reduce, )
'segment' is being compiled since it was called from 'MeanAggregation.reduce'
Serialized File "code/__torch__/torch_geometric/nn/aggr/basic.py", line 22
reduce: str="sum") -> Tensor:
_1 = __torch__.torch_geometric.nn.aggr.base.expand_left
_2 = __torch__.torch_geometric.utils.segment.segment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_3 = __torch__.torch_geometric.utils.scatter.scatter
_4 = uninitialized(Tensor)
'MeanAggregation.reduce' is being compiled since it was called from 'MeanAggregation.forward'
File "E:\Anaconda\envs\Mesh_Net\lib\site-packages\torch_geometric\nn\aggr\basic.py", line 34
ptr: Optional[Tensor] = None, dim_size: Optional[int] = None,
dim: int = -2) -> Tensor:
return self.reduce(x, index, ptr, dim_size, dim, reduce='mean')
~~~~~ <--- HERE
Serialized File "code/__torch__/torch_geometric/nn/aggr/basic.py", line 12
dim_size: Optional[int]=None,
dim: int=-2) -> Tensor:
_0 = (self).reduce(x, index, ptr, dim_size, dim, "mean", )
~~~~~~ <--- HERE
return _0
def reduce(self: __torch__.torch_geometric.nn.aggr.basic.MeanAggregation,
````
See same question at https://github.com/pyg-team/pytorch_geometric/issues/1718#issuecomment-1072448621
But it's not clear where the problem lies.
What other work can I do? Or do you have any suggestions to solve this problem?
### Versions
pytorch: 1.13.0
libtorch:1.13.0
cuda: 11.6
pyg: 2.3.1
torch_scatter: 2.1.1
torch_sparse: 0.6.18 | closed | 2024-04-10T10:32:36Z | 2025-03-17T02:00:49Z | https://github.com/pyg-team/pytorch_geometric/issues/9181 | [
"bug"
] | qiuqiouba | 4 |
qubvel-org/segmentation_models.pytorch | computer-vision | 695 | MixVisionTransformer in combination with PAN fails with "encoder does not support dilated mode" | ```
import segmentation_models_pytorch as smp
smp.PAN(encoder_name="mit_b0")
```
raises the exception:
```
ValueError: MixVisionTransformer encoder does not support dilated mode
```
Since the default PAN uses dilation, this config is incompatible at the moment?
If we use a configuration of PAN that does not use dilation, the error, of course, does not appear:
```
smp.PAN(encoder_name="mit_b0", encoder_output_stride=32)
```
I have not yet tested whether an output stride of 32 still delivers comparable results. My guess would be that the default stride of 16 encodes a lot more information, which might be beneficial for better performance.
Is there any way to get it to work with dilation? | closed | 2022-12-13T09:19:30Z | 2023-02-20T02:05:23Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/695 | [
"Stale"
] | Daniel451 | 4 |
pydata/xarray | numpy | 9,137 | open_dict_of_datasets function to open any file containing nested groups | ### Is your feature request related to a problem?
In https://github.com/pydata/xarray/issues/9077#issuecomment-2161622347 I suggested the idea of a function which could open any netCDF file with groups as a dictionary mapping group path strings to `xr.Dataset` objects.
The motivation is as follows:
- People want the new `xarray.DataTree` class to support inheriting coordinates from parent groups,
- This can only be done if the coordinates align with the variables in the child group (i.e. using `xr.align`),
- The best time to enforce this alignment is at `DataTree` construction time,
- This requirement is not enforced in the netCDF/Zarr model, so this would mean some files can no longer be opened by `open_datatree` directly, as doing so would raise an alignment error,
- _But_ we still really want users to have some way to open an arbitrary file with xarray and see what's inside (including displaying all the groups #4840).
- A simpler intermediate structure of a dictionary mapping group paths to `xarray.Dataset` objects doesn't enforce alignment, so can represent any file.
- We should add a new opening function to allow any file to be opened as this dict-of-datasets structure.
- Users can then use this to inspect "untidy" data, and make changes to the dict returned before creating an aligned `DataTree` object via `DataTree.from_dict` if they like.
### Describe the solution you'd like
Add a function like this:
```python
def open_dict_of_datasets(
    filename_or_obj: str | os.PathLike[Any] | BufferedIOBase | AbstractDataStore,
    engine: T_Engine = None,
    group: Optional[str] = None,
    **kwargs,
) -> dict[str, Dataset]:
    """
    Open and decode a file or file-like object, creating a dictionary containing one xarray Dataset for each group in the file.

    Useful when you have e.g. a netCDF file containing many groups, some of which are not alignable with their parents and so the file cannot be opened directly with ``open_datatree``.

    It is encouraged to use this function to inspect your data, then make the necessary changes to make the structure coercible to a `DataTree` object before calling `DataTree.from_dict()` and proceeding with your analysis.

    Parameters
    ----------
    filename_or_obj : str, Path, file-like, or DataStore
        Strings and Path objects are interpreted as a path to a netCDF file or Zarr store.
    engine : str, optional
        Xarray backend engine to use. Valid options include `{"netcdf4", "h5netcdf", "zarr"}`.
    group : str, optional
        Group to use as the root group to start reading from. Groups above this root group will not be included in the output.
    **kwargs : dict
        Additional keyword arguments passed to :py:func:`~xarray.open_dataset` for each group.

    Returns
    -------
    dict[str, xarray.Dataset]

    See Also
    --------
    open_datatree()
    DataTree.from_dict()
    """
    ...
```
This would live inside `backends.api.py`, and be exposed publicly as a top-level function along with the rest of `open_datatree`/`DataTree` etc. as part of #9033.
The actual implementation could re-use the code for opening many groups of the same file performantly from #9014. Indeed we could add a `open_dict_of_datasets` method to the `BackendEntryPoint` class, which uses pretty much the same code as the existing `open_datatree` method added in #9014 but just doesn't actually create a `DataTree` object.
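As a rough usage sketch of the proposed function (hypothetical API, assuming `DataTree` is exposed at the top level as planned; the store path and group names below are made up):

```python
import xarray as xr

# Open every group of an "untidy" store without enforcing alignment (proposed API).
groups = xr.open_dict_of_datasets("store.zarr", engine="zarr")

# Inspect and tidy the offending groups, e.g. drop a conflicting coordinate.
groups["/child"] = groups["/child"].drop_vars("time", errors="ignore")

# Once the groups are alignable, build the DataTree explicitly.
tree = xr.DataTree.from_dict(groups)
```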
### Describe alternatives you've considered
Really the main alternative to this is not to have coordinate inheritance in `DataTree` at all (see [9077](https://github.com/pydata/xarray/issues/9077)), in which case `open_datatree` would be sufficient to open any file.
---
The name of the function is up for debate. I prefer nothing with the word "datatree" in it since this doesn't actually create a `DataTree` object at any point. (In fact we could and perhaps should have implemented this function years ago, even without the new `DataTree` class.) The reason for not calling it "`open_as_dict_of_datasets`" is that we don't use "as" in the existing `open_dataset`/`open_dataarray` etc.
### Additional context
cc @eni-awowale @flamingbear @owenlittlejohns @keewis @shoyer @autydp | closed | 2024-06-19T17:08:16Z | 2024-09-07T23:34:22Z | https://github.com/pydata/xarray/issues/9137 | [
"topic-backends",
"enhancement",
"topic-DataTree",
"io"
] | TomNicholas | 7 |
apify/crawlee-python | automation | 1,106 | In certain situations, it is impossible to enter the method under @crawler.router.default_handler, and the execution ends directly. | code :
```py
async def crawling(urls: List, output_path: str, unique_filter: set) -> None:
    # initialize crawler configuration
    concurrency_settings = ConcurrencySettings(
        max_concurrency=50,
        max_tasks_per_minute=200,
    )
    session_pool = SessionPool(max_pool_size=100)
    crawler = BeautifulSoupCrawler(
        max_request_retries=3,
        request_handler_timeout=timedelta(
            seconds=30,
        ),
        max_requests_per_crawl=100,
        max_crawl_depth=1,
        concurrency_settings=concurrency_settings,
        session_pool=session_pool,
    )

    # Define the default request handler, which will be called for every request
    @crawler.router.default_handler
    async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
        url = context.request.url
        logger.info(f'Processing {url} ...')
        depth = context.request.crawl_depth
        logger.info(f'The depth of {url} is: {depth}.')
        await context.enqueue_links(
            strategy='same-hostname',
            include=init_filters(),
            transform_request_function=transform_request,
        )

    await crawler.run(urls)
```
Issue 1: In the above code, debugging revealed that after executing the crawling method, when it reaches await crawler.run(urls), it does not enter the method under @crawler.router.default_handler (sometimes it works, sometimes it doesn't).
Issue 2: If crawling a URL fails due to network issues, attempting to crawl the same URL again also does not enter the @crawler.router.default_handler. | open | 2025-03-19T07:17:18Z | 2025-03-24T11:21:11Z | https://github.com/apify/crawlee-python/issues/1106 | [
"t-tooling"
] | CodeDan-CN | 5 |
tflearn/tflearn | data-science | 882 | About loss in Tensorboard | Hello everyone,
I ran the multi-layer perceptron example and visualized the loss in TensorBoard.
Does "Loss" refer to the training loss on each batch? Does "Loss/Validation" refer to the loss on the validation set? And what does "Loss_var_loss" refer to?

| open | 2017-08-22T14:57:32Z | 2017-08-26T07:15:47Z | https://github.com/tflearn/tflearn/issues/882 | [] | zhao62 | 3 |
python-gitlab/python-gitlab | api | 2,564 | I can't use user.save() to update the identities of a user | ## Description of the problem, including code/CLI snippet
I get a user and change its identities, but after calling user.save() nothing changes.
When I update the user's name, it works, but when I update the identities, it does not.
The identities are correct; I can update them by hand.
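Since no snippet was included, here is a minimal sketch of the flow being described (assumed, not taken from the report; URL, token, and identity values are placeholders):
```python
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="TOKEN")
user = gl.users.get(123)

# Updating a plain attribute works:
user.name = "New Name"
user.save()

# Updating identities this way is what reportedly has no effect:
user.identities = [{"provider": "ldapmain", "extern_uid": "uid=someone,ou=users"}]
user.save()
```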
## Expected Behavior
After user.save(), the identities are updated successfully.
## Actual Behavior
It does not work, and I don't get any error.
## Specifications
- python-gitlab version:3.14.0
- API version you are using (v3/v4):v4
- Gitlab server version (or gitlab.com):15.0.1
| closed | 2023-05-06T11:38:06Z | 2023-05-28T09:16:02Z | https://github.com/python-gitlab/python-gitlab/issues/2564 | [] | queyijiangnan | 2 |
wkentaro/labelme | computer-vision | 1,212 | labelme2coco.py crashes when a file contains only the __ignore__ label | ### Provide environment information
(DET2) D:\FPCs>python --version
Python 3.9.13
(DET2) D:\FPCs>pip list labelme
Package Version Editable project location
----------------------- ------------------ --------------------------
labelme 5.0.5
### What OS are you using?
windows11
### Describe the Bug
(DET2) D:\FPCs>python D:/DeepLearning/labelme/examples/instance_segmentation/labelme2coco.py --labels classes.txt 000003 labels
Creating dataset: labels
Generating dataset from: 000003\dog.json
Traceback (most recent call last):
File "D:\DeepLearning\labelme\examples\instance_segmentation\labelme2coco.py", line 209, in <module>
main()
File "D:\DeepLearning\labelme\examples\instance_segmentation\labelme2coco.py", line 184, in main
labels, captions, masks = zip(
ValueError: not enough values to unpack (expected 3, got 0)
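For context, the failure reduces to unpacking an empty `zip(...)`: when every shape in a label file is `__ignore__`, the filtered list is empty. A small self-contained demo plus a possible guard (a sketch, not the actual labelme2coco.py code):
```python
items = []  # no non-ignored shapes were collected for this label file

try:
    labels, captions, masks = zip(*items)
except ValueError as exc:
    print(exc)  # not enough values to unpack (expected 3, got 0)

# A guard like this would skip such files instead of crashing:
if items:
    labels, captions, masks = zip(*items)
else:
    print("Skipping label file with no usable shapes")
```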
### Expected Behavior
Skip the label file
### To Reproduce
1. Set only the __ignore__ label on an image
2. Create classes.txt file
3. run labelme2coco.py | closed | 2022-11-06T04:44:18Z | 2024-04-08T00:58:34Z | https://github.com/wkentaro/labelme/issues/1212 | [
"issue::bug"
] | withkun | 4 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 57 | What are the VRAM requirements for LLaMA-2-7B? | ### Items that must be checked before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is recommended to look for solutions in the corresponding projects.
### Issue Type
Model training and fine-tuning
### Base Model
LLaMA-2-7B
### Operating System
Linux
### Detailed Problem Description
```
# Pre-training on 6x A10 (24 GB) GPUs with the block size set to 512, without adding the lm_head and embedding layers; ZeRO-2 offload is enabled but it still reports an out-of-memory error
```
### Run Logs or Screenshots
```
# torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 130.00 MiB (GPU 0; 22.20 GiB total capacity; 20.60 GiB already allocated; 126.12 MiB free; 20.98 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
0%| | 0/3 [00:00<?, ?it/s
```
| closed | 2023-08-03T03:35:33Z | 2023-08-03T06:03:05Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/57 | [] | AlexasXu | 2 |
jupyter/nbviewer | jupyter | 895 | The error was: Failed to connect to localhost port 8988: Connection refused | Hi! Thanks for using Jupyter Notebook Viewer (nbviewer) and taking the time to report a bug you've encountered. Please use the template below to tell us about the problem.
If you've found a bug in a different Jupyter project (e.g., [Jupyter Notebook](http://github.com/jupyter/notebook), [JupyterLab](http://github.com/jupyterlab/jupyterlab), [JupyterHub](http://github.com/jupyterhub/jupyterhub), etc.), please open an issue using that project's issue tracker instead.
If you need help using or installing Jupyter Notebook Viewer, please use the [jupyter/help](https://github.com/jupyter/help) issue tracker instead.
**Describe the bug**
Should be a working notebook however the error that is stated in the title shows up
**To Reproduce**
Steps to reproduce the behavior:
Copy URL and paste into nbviewer
**Expected behavior**
Should see the notebook
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: Windows 10
- Browser Firefox
- Version Not sure
**Additional context**
Add any other context about the problem here.
| closed | 2020-02-04T20:31:55Z | 2020-03-04T02:34:02Z | https://github.com/jupyter/nbviewer/issues/895 | [] | Mespn520 | 5 |
lmcgartland/graphene-file-upload | graphql | 57 | Option for starting development server | | closed | 2021-03-03T18:54:34Z | 2021-03-03T18:54:58Z | https://github.com/lmcgartland/graphene-file-upload/issues/57 | [] | Waidhoferj | 0 |
pydantic/pydantic | pydantic | 11,157 | make typecheck failing | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
I'm trying to commit a change and the pre-commit hook make typecheck is failing on various bits of v1 code. I reverted my changes and ran the check - same result, so not sure how others are managing to commit!
There are 144 errors, so I'll spare everyone the whole output, but a couple of samples are below. I've checked that I've got the latest versions of all dependencies, such as mypy.
Typecheck................................................................Failed
- hook id: typecheck
- exit code: 3
Pyproject file parse attempt 1 error: {}
Pyproject file parse attempt 2 error: {}
Pyproject file parse attempt 3 error: {}
Pyproject file parse attempt 4 error: {}
Pyproject file parse attempt 5 error: {}
Pyproject file parse attempt 6 error: {}
Config file "/Users/Nick/Code/github/pydantic/pyproject.toml" could not be parsed. Verify that format is correct.
/Users/Nick/Code/github/pydantic/pydantic/mypy.py
/Users/Nick/Code/github/pydantic/pydantic/mypy.py:630:54 - error: Argument of type "SemanticAnalyzerPluginInterface" cannot be assigned to parameter "api" of type "CheckerPluginInterface" in function "error_extra_fields_on_root_model"
"SemanticAnalyzerPluginInterface" is not assignable to "CheckerPluginInterface" (reportArgumentType)
/Users/Nick/Code/github/pydantic/pydantic/_internal/_std_types_schema.py
/Users/Nick/Code/github/pydantic/pydantic/_internal/_std_types_schema.py:111:20 - error: Cannot instantiate abstract class "PathLike"
"PathLike.__fspath__" is not implemented (reportAbstractUsage)
/Users/Nick/Code/github/pydantic/pydantic/_internal/_std_types_schema.py:111:20 - error: Cannot instantiate Protocol class "PathLike" (reportAbstractUsage)
/Users/Nick/Code/github/pydantic/pydantic/_internal/_std_types_schema.py:111:35 - error: Expected 0 positional arguments (reportCallIssue)
/Users/Nick/Code/github/pydantic/pydantic/v1/_hypothesis_plugin.py
/Users/Nick/Code/github/pydantic/pydantic/v1/_hypothesis_plugin.py:33:8 - error: Import "hypothesis.strategies" could not be resolved (reportMissingImports)
...
/Users/Nick/Code/github/pydantic/pydantic/v1/validators.py:81:32 - error: Union requires two or more type arguments (reportInvalidTypeArguments)
### Example Code
_No response_
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.3
pydantic-core version: 2.27.1
pydantic-core build: profile=release pgo=false
install path: /Users/Nick/Code/github/pydantic/pydantic
python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 10:37:40) [Clang 14.0.6 ]
platform: macOS-15.2-arm64-arm-64bit
related packages: fastapi-0.115.6 mypy-1.13.0 pyright-1.1.391 pydantic-settings-2.7.0 typing_extensions-4.12.2 pydantic-extra-types-2.10.1
commit: a915c7cd
```
| closed | 2024-12-20T00:08:35Z | 2024-12-20T09:18:30Z | https://github.com/pydantic/pydantic/issues/11157 | [
"bug V2",
"pending"
] | nickyoung-github | 1 |
google-research/bert | nlp | 562 | Extract features of a word given a text | I am interested in getting the features of only one word in a text, but the current implementation gives the features of all the words in the text. I guess this makes the computations much slower, so I would like to simplify the implementation. Is this possible?
Thanks!!! | open | 2019-04-08T13:25:17Z | 2019-04-09T11:25:27Z | https://github.com/google-research/bert/issues/562 | [] | RodSernaPerez | 1 |
zihangdai/xlnet | tensorflow | 199 | How to use xlnet to guess a word in a sentence | I am wondering how to use XLNet to guess a word in a sentence. | open | 2019-08-01T03:34:30Z | 2019-08-01T03:34:30Z | https://github.com/zihangdai/xlnet/issues/199 | [] | syu0000 | 0 |
jonra1993/fastapi-alembic-sqlmodel-async | sqlalchemy | 15 | IUserUpdate should have id of UUID | https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/blob/a11c40077ec0bad51508974704d3343d187557a1/fastapi-alembic-sqlmodel-async/app/schemas/user_schema.py : 37 | closed | 2022-10-08T19:10:38Z | 2022-10-09T16:10:17Z | https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/issues/15 | [] | bazylhorsey | 1 |
seleniumbase/SeleniumBase | pytest | 2,497 | Update script to download chromedriver from the newer location (on version 121+) | ## Update script to download chromedriver from the newer location (on version 121+)
The Chromedriver team is starting to switch `chromedriver` storage from `https://edgedl.me.gvt1.com/edgedl/chrome/chrome-for-testing/` to `https://storage.googleapis.com/chrome-for-testing-public/`.
This caused issues here: https://github.com/seleniumbase/SeleniumBase/issues/2495, but thankfully the Chromedriver team quickly made a change to at least temporarily use both locations. That was probably a warning shot so that frameworks are made aware to make changes soon.
If downloading `chromedriver` 121 (or newer), SeleniumBase should grab `chromedriver` from the newer location: `https://storage.googleapis.com/chrome-for-testing-public/`.
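A sketch of the version switch being described (illustrative only, not actual SeleniumBase code; the function name is made up):
```python
def chromedriver_base_url(major_version: int) -> str:
    # Chromedriver 121+ should come from the newer Chrome-for-Testing bucket;
    # older versions keep the previous download location.
    if major_version >= 121:
        return "https://storage.googleapis.com/chrome-for-testing-public/"
    return "https://edgedl.me.gvt1.com/edgedl/chrome/chrome-for-testing/"
```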
| closed | 2024-02-15T16:30:20Z | 2024-02-16T02:45:48Z | https://github.com/seleniumbase/SeleniumBase/issues/2497 | [
"requirements"
] | mdmintz | 1 |
koxudaxi/datamodel-code-generator | fastapi | 1,421 | Ability to add custom imports | **Is your feature request related to a problem? Please describe.**
Can't customize my template to add new imports
**Describe the solution you'd like**
New argument `additional_imports` in `generate` function which adds additional imports to final rendered template
**Describe alternatives you've considered**
-
**Additional context**
- | closed | 2023-07-12T07:32:25Z | 2023-07-14T07:45:49Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1421 | [] | skonik | 0 |
ultralytics/ultralytics | machine-learning | 19,724 | The prediction results of the segmentation model are strange | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
## Background
To obtain contour data of a specific object, I created a segmentation model using the following steps:
1. Generated a mask for the object using `sam2.1_b.pt` and created labels using `result.masks.xyn.pop()`.
```python
normalized_contour = result.masks.xyn.pop()
with open(os.path.join(output_label_path, img_file.replace('.jpg', '.txt')), 'w') as f:
    class_id = 0
    points_str = " ".join([f"{x:.4f} {y:.4f}" for x, y in normalized_contour])
    output_str = f"{class_id} {points_str}"
    f.write(output_str)
```
2. Fine-tuned `yolo11n-seg.pt` using the generated labels.
```python
model = YOLO('yolo11n-seg.pt')
model.train(data="dataset.yaml", epochs=1000, patience=50, batch=16, imgsz=1024, device=0)
```
3. Performed predictions using the fine-tuned model (`best.pt`).
```python
result = model(img_path, device=0)[0]
```
## Issue
The prediction results are shown in the image below.
When examining the boundary between the mask and the bounding box (circled areas), certain parts of the mask protrude in a square-like shape.
## Question
Why is this happening?
Are there any possible solutions to this issue?

### Additional
_No response_ | closed | 2025-03-16T12:13:09Z | 2025-03-17T09:32:27Z | https://github.com/ultralytics/ultralytics/issues/19724 | [
"question",
"segment"
] | family36 | 5 |
LibrePhotos/librephotos | django | 1,531 | Add option to scrub all region tags in Exif information when using "delete face" | **Describe the enhancement you'd like**
I have some pictures that have (wrong) face areas defined from a previous version of digikam - would it be possible to scrub the EXIF info from all face info when using "delete face"? It seems right now only librephoto tags are scrubbed
After using the "delete face" I still have region and name information in the file
**Describe why this will benefit the LibrePhotos**
Deleting face regions that are wrong would improve the data quality and training date for face recognition
**Additional context**
It seems Librephotos is using the "Region Rectangle" to define the face area.
Digikam tags look like this (real example)
Region Applied To Dimensions H : 2304
Region Applied To Dimensions Unit: pixel
Region Applied To Dimensions W : 3456
Region Name : Personname
Region Type : Face, Face
Region Area X : 0.330874, 0.161892
Region Area Y : 0.771701, 0.498047
Region Area W : 0.175637, 0.128762
Region Area H : 0.315972, 0.193142
Region Area Unit : normalized, normalized
Tags List : People/Personname
Adding an option in settings to scrub this when deleting faces would make sense. This should only be applied to face regions, which are identified by the "Region Type" tag.
"enhancement"
] | ChrisFab16 | 1 |
sherlock-project/sherlock | python | 1,665 | Error while trying to even install numpy | error: subprocess-exited-with-error
× Building wheel for numpy (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [271 lines of output]
Running from numpy source directory. setup.py:67: DeprecationWarning:
`numpy.distutils` is deprecated since NumPy 1.23.0, as a result of the deprecation of `distutils` itself. It will be removed for
Python >= 3.12. For older Python versions it will remain present. It is recommended to use `setuptools < 60.0` for those Python versions.
For more details, see:
https://numpy.org/devdocs/reference/distutils_status_migration.html
import numpy.distutils.command.sdist
Processing numpy/random/_bounded_integers.pxd.in
Processing numpy/random/_bounded_integers.pyx.in
Processing numpy/random/_common.pyx
Processing numpy/random/_generator.pyx
Processing numpy/random/_mt19937.pyx
Processing numpy/random/_pcg64.pyx
Processing numpy/random/_philox.pyx
Processing numpy/random/_sfc64.pyx
Processing numpy/random/bit_generator.pyx
Processing numpy/random/mtrand.pyx
Cythonizing sources
INFO: blas_opt_info:
INFO: blas_armpl_info:
INFO: customize UnixCCompiler
INFO: libraries armpl_lp64_mp not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: blas_mkl_info:
INFO: libraries mkl_rt not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: blis_info:
INFO: libraries blis not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: openblas_info:
INFO: libraries openblas not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: accelerate_info:
INFO: NOT AVAILABLE
INFO:
INFO: atlas_3_10_blas_threads_info:
INFO: Setting PTATLAS=ATLAS
INFO: libraries tatlas not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: atlas_3_10_blas_info:
INFO: libraries satlas not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: atlas_blas_threads_info:
INFO: Setting PTATLAS=ATLAS
INFO: libraries ptf77blas,ptcblas,atlas not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: atlas_blas_info:
INFO: libraries f77blas,cblas,atlas not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/system_info.py:2077: UserWarning:
Optimized (vendor) Blas libraries are not found.
Falls back to netlib Blas library which has worse performance.
A better performance should be easily gained by switching
Blas library.
if self._calc_info(blas):
INFO: blas_info:
INFO: libraries blas not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/system_info.py:2077: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
if self._calc_info(blas):
INFO: blas_src_info:
INFO: NOT AVAILABLE
INFO:
/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/system_info.py:2077: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
if self._calc_info(blas):
INFO: NOT AVAILABLE
INFO:
non-existing path in 'numpy/distutils': 'site.cfg'
INFO: lapack_opt_info:
INFO: lapack_armpl_info:
INFO: libraries armpl_lp64_mp not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: lapack_mkl_info:
INFO: libraries mkl_rt not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: openblas_lapack_info:
INFO: libraries openblas not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: openblas_clapack_info:
INFO: libraries openblas,lapack not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: flame_info:
INFO: libraries flame not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: atlas_3_10_threads_info:
INFO: Setting PTATLAS=ATLAS
INFO: libraries tatlas,tatlas not found in /data/data/com.termux/files/usr/lib
INFO: <class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
INFO: NOT AVAILABLE
INFO:
INFO: atlas_3_10_info:
INFO: libraries satlas,satlas not found in /data/data/com.termux/files/usr/lib
INFO: <class 'numpy.distutils.system_info.atlas_3_10_info'>
INFO: NOT AVAILABLE
INFO:
INFO: atlas_threads_info:
INFO: Setting PTATLAS=ATLAS
INFO: libraries ptf77blas,ptcblas,atlas not found in /data/data/com.termux/files/usr/lib
INFO: <class 'numpy.distutils.system_info.atlas_threads_info'>
INFO: NOT AVAILABLE
INFO:
INFO: atlas_info:
INFO: libraries f77blas,cblas,atlas not found in /data/data/com.termux/files/usr/lib
INFO: <class 'numpy.distutils.system_info.atlas_info'>
INFO: NOT AVAILABLE
INFO:
INFO: lapack_info:
INFO: libraries lapack not found in ['/data/data/com.termux/files/usr/lib']
INFO: NOT AVAILABLE
INFO:
/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/system_info.py:1902: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
INFO: lapack_src_info:
INFO: NOT AVAILABLE
INFO:
/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/system_info.py:1902: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
INFO: NOT AVAILABLE
INFO:
INFO: numpy_linalg_lapack_lite:
INFO: FOUND:
INFO: language = c
INFO:
Warning: attempted relative import with no known parent package
/data/data/com.termux/files/usr/lib/python3.11/distutils/dist.py:274: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
running bdist_wheel
running build
running config_cc
INFO: unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
INFO: unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
INFO: build_src
INFO: building py_modules sources
creating build
creating build/src.linux-aarch64-3.11
creating build/src.linux-aarch64-3.11/numpy
creating build/src.linux-aarch64-3.11/numpy/distutils
INFO: building library "npymath" sources
WARN: Could not locate executable armflang
WARN: Could not locate executable gfortran
WARN: Could not locate executable f95
WARN: Could not locate executable ifort
WARN: Could not locate executable ifc
WARN: Could not locate executable lf95
WARN: Could not locate executable pgfortran
WARN: Could not locate executable nvfortran
WARN: Could not locate executable f90
WARN: Could not locate executable f77
WARN: Could not locate executable fort
WARN: Could not locate executable efort
WARN: Could not locate executable efc
WARN: Could not locate executable g77
WARN: Could not locate executable g95
WARN: Could not locate executable pathf95
WARN: Could not locate executable nagfor
WARN: Could not locate executable frt
WARN: don't know how to compile Fortran code on platform 'posix'
creating build/src.linux-aarch64-3.11/numpy/core
creating build/src.linux-aarch64-3.11/numpy/core/src
creating build/src.linux-aarch64-3.11/numpy/core/src/npymath
INFO: conv_template:> build/src.linux-aarch64-3.11/numpy/core/src/npymath/npy_math_internal.h
INFO: adding 'build/src.linux-aarch64-3.11/numpy/core/src/npymath' to include_dirs.
INFO: conv_template:> build/src.linux-aarch64-3.11/numpy/core/src/npymath/ieee754.c
INFO: conv_template:> build/src.linux-aarch64-3.11/numpy/core/src/npymath/npy_math_complex.c
INFO: None - nothing done with h_files = ['build/src.linux-aarch64-3.11/numpy/core/src/npymath/npy_math_internal.h']
INFO: building library "npyrandom" sources
INFO: building extension "numpy.core._multiarray_tests" sources
creating build/src.linux-aarch64-3.11/numpy/core/src/multiarray
INFO: conv_template:> build/src.linux-aarch64-3.11/numpy/core/src/multiarray/_multiarray_tests.c
INFO: building extension "numpy.core._multiarray_umath" sources
Traceback (most recent call last):
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
main()
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 249, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/tmp/pip-build-env-goe15s6p/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 230, in build_wheel
return self._build_with_temp_dir(['bdist_wheel'], '.whl',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/tmp/pip-build-env-goe15s6p/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 215, in _build_with_temp_dir
self.run_setup()
File "/data/data/com.termux/files/usr/tmp/pip-build-env-goe15s6p/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 268, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/tmp/pip-build-env-goe15s6p/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 479, in <module>
setup_package()
File "setup.py", line 471, in setup_package
setup(**metadata)
File "/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/core.py", line 169, in setup
return old_setup(**new_attr)
^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/tmp/pip-build-env-goe15s6p/overlay/lib/python3.11/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.11/distutils/core.py", line 148, in setup
dist.run_commands()
File "/data/data/com.termux/files/usr/lib/python3.11/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/data/data/com.termux/files/usr/lib/python3.11/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/data/data/com.termux/files/usr/tmp/pip-build-env-goe15s6p/overlay/lib/python3.11/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/data/data/com.termux/files/usr/lib/python3.11/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/data/data/com.termux/files/usr/lib/python3.11/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/command/build.py", line 62, in run
old_build.run(self)
File "/data/data/com.termux/files/usr/lib/python3.11/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/data/data/com.termux/files/usr/lib/python3.11/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/data/data/com.termux/files/usr/lib/python3.11/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/command/build_src.py", line 144, in run
self.build_sources()
File "/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/command/build_src.py", line 161, in build_sources
self.build_extension_sources(ext)
File "/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/command/build_src.py", line 318, in build_extension_sources
sources = self.generate_sources(sources, ext)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/distutils/command/build_src.py", line 378, in generate_sources
source = func(extension, build_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/core/setup.py", line 506, in generate_config_h
check_math_capabilities(config_cmd, ext, moredefs, mathlibs)
File "/data/data/com.termux/files/usr/tmp/pip-install-uc8hi5p2/numpy_0f93abc54da747c3833d8fbd6cec679c/numpy/core/setup.py", line 192, in check_math_capabilities
raise SystemError("One of the required function to build numpy is not"
SystemError: One of the required function to build numpy is not available (the list is ['sin', 'cos', 'tan', 'sinh', 'cosh', 'tanh', 'fabs', 'floor', 'ceil', 'sqrt', 'log10', 'log', 'exp', 'asin', 'acos', 'atan', 'fmod', 'modf', 'frexp', 'ldexp', 'expm1', 'log1p', 'acosh', 'asinh', 'atanh', 'rint', 'trunc', 'exp2', 'copysign', 'nextafter', 'strtoll', 'strtoull', 'cbrt', 'log2', 'pow', 'hypot', 'atan2', 'creal', 'cimag', 'conj']).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
Failed to build numpy
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects | closed | 2023-01-17T03:37:11Z | 2023-01-25T12:16:11Z | https://github.com/sherlock-project/sherlock/issues/1665 | [] | Onionbread35 | 2 |
pywinauto/pywinauto | automation | 668 | Implement WindowWrapper for UIA backend with move_window() method | Found this while reading this StackOverflow question: https://stackoverflow.com/q/54475685/3648361 | closed | 2019-02-01T21:18:14Z | 2020-05-23T21:46:54Z | https://github.com/pywinauto/pywinauto/issues/668 | [
"enhancement",
"Priority-Critical",
"UIA-related",
"good first issue",
"refactoring_critical"
] | vasily-v-ryabov | 1 |
littlecodersh/ItChat | api | 41 | msg['ActualNickName'] is garbled | It was working fine yesterday, but today it turned into garbled text. I can't find the cause, so I'm asking for help.
| closed | 2016-07-20T02:43:09Z | 2016-07-20T11:00:16Z | https://github.com/littlecodersh/ItChat/issues/41 | [
"bug"
] | xzjs | 5 |
marcomusy/vedo | numpy | 879 | Cutter tools create leftovers in plotter | It seems some (all?) cutter tools create some leftovers in the plotters that get picked up by subsequent cutting operations | open | 2023-06-08T14:58:40Z | 2023-08-29T10:32:31Z | https://github.com/marcomusy/vedo/issues/879 | [
"bug"
] | jo-mueller | 0 |
AutoViML/AutoViz | scikit-learn | 80 | Filename is an empty string or file not able to be loaded | I get this error with even the simplest csv input file
```
a,b
1,2
3,4
```
providing full path to the filename, using and specifying different separators... nothing seems to work. | closed | 2023-01-17T12:04:02Z | 2023-01-23T23:08:06Z | https://github.com/AutoViML/AutoViz/issues/80 | [] | delocalizer | 2 |
kizniche/Mycodo | automation | 946 | Cannot save MQTT input configuration changes | I am using Mycodo 8.9.0. Whenever I add an MQTT input, click the (+) sign and then click "Save" - no changes made at all - I receive the following error :
Error: Modify Input: '<' not supported between instances of 'NoneType' and 'float'
The error happens regardless of which fields I change or attempt to modify (even when no fields are modified as I mentioned above). I'm new to mycodo so please do let me know if I'm doing something wrong. Thanks in advance for your help! | closed | 2021-03-10T16:31:38Z | 2021-03-16T02:41:56Z | https://github.com/kizniche/Mycodo/issues/946 | [
"bug",
"Fixed and Committed"
] | danielfppps | 5 |
qubvel-org/segmentation_models.pytorch | computer-vision | 997 | Mention NVIDIA non-commercial in top LICENSE section | Could you mention NVIDIA license, which is written in encoders/mix_transformer.py, in the top LICENSE section?
This can cause a problem for commercial use including Kaggle, in which the competition rule often requires commercial use. I misunderstood that segFormer had become completely MIT.
For example, mmcv has a LICENSES page, which lists files with NVIDIA license.
https://github.com/open-mmlab/mmcv/blob/main/LICENSES.md
Thanks for developing and maintaining this great library! I am using a lot for Kaggle.
| closed | 2024-12-05T00:39:25Z | 2024-12-15T00:37:46Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/997 | [] | junkoda | 3 |
python-restx/flask-restx | api | 196 | Swagger's preauthorize_apikey feature | Currently there is no "flask_restx" way to use the Swagger ``preauthorize_apikey`` feature. (You can always patch the javascript returned by apidoc using the Api.documentation decorator, but it seems to be the worst way to do it)
We needed this feature in order to display Swagger documentation already "populated" with the user's apiKey. Our use case:
- a user logs in with a login/password pair in order to fetch an API token and see the Swagger doc
- when displaying the Swagger documentation for this authenticated user, the curl example should contain the token argument
I wrote two commits implementing this feature in two ways:
- https://github.com/yweber/flask-restx/commit/4c6955eadaa4c93ec8fe6d14d322e1ec01ecc9e3
- a straight-forward template modification using a global Flask.config item referencing a function returning preauth informations
- https://github.com/yweber/flask-restx/commit/85310b3eadbb3d94b18bfecf083fc8cb9c5c6753
- adding a decorator to ``flask_restx.Api`` class in order to register a function returning preauth informations
Both PRs will be created soon.
I don't know which one is better nor which one fits best with the flask_restx philosophy.
- the global configuration solution is.... global: you register a function once and it will be used for all Api instances. It will be messy if different apiKey Swagger authorization names are used
- the ``Api.apikey_preauthorization`` decorator allows registering a preauth information function per Api: you have to do it for each Api instance with Swagger ``preauthorize_apikey`` enabled (see the usage sketch after this list)
- Right now, registered functions do not take any arguments. With the decorator solution it seems trivial to pass the ``flask_restx.Api`` instance as an argument. I'm not a Flask expert, but it looks like it is unnecessary: the Flask application context is accessible globally (with ``Flask.current_app`` or ``Flask.session`` etc.). Is there a use case where this function would want to access the current ``flask_restx.Api`` instance?
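As an illustration only (the decorator below is the one proposed in the second commit, not part of released flask-restx, and the returned structure is assumed), per-Api registration might look like:
```python
from flask import Flask, session
from flask_restx import Api

app = Flask(__name__)
authorizations = {"apikey": {"type": "apiKey", "in": "header", "name": "X-API-KEY"}}
api = Api(app, authorizations=authorizations, security="apikey")

@api.apikey_preauthorization  # proposed decorator, not an existing flask_restx API
def swagger_preauth():
    # Return whatever preauthorizeApiKey needs: the authorization name and the
    # current user's token (the exact structure here is an assumption).
    return {"auth_name": "apikey", "api_key": session.get("api_token")}
```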
Finally, a mixed solution is possible :
- merging the configuration and the decorator : allowing both with a priority on the function registered with the decorator
Thanks for your time reading this issue (and the associated PRs :) )! | open | 2020-08-13T13:07:10Z | 2020-08-13T13:07:10Z | https://github.com/python-restx/flask-restx/issues/196 | [
"enhancement"
] | yweber | 0 |
TheKevJames/coveralls-python | pytest | 41 | Missing 'url' key raises uncaught KeyError on coveralls.io 503 response | Could not reproduce with `coveralls debug` as coveralls.io had evidently fixed its server-side error by this time. Traceback from original run below:
```
Submitting coverage to coveralls.io...
Coverage submitted!
Failure to submit data. Response [503]: <!DOCTYPE html>
<html>
<head>
<style type="text/css">
html, body, iframe { margin: 0; padding: 0; height: 100%; }
iframe { display: block; width: 100%; border: none; }
</style>
<title>Application Error</title>
</head>
<body>
<iframe src="https://s3.amazonaws.com/assets.coveralls.io/maintenance.html">
<p>Application Error</p>
</iframe>
</body>
</html>
Traceback (most recent call last):
File "/home/rof/.virtualenv/bin/coveralls", line 9, in <module>
load_entry_point('coveralls==0.4.1', 'console_scripts', 'coveralls')()
File "/home/rof/.virtualenv/local/lib/python2.7/site-packages/coveralls/cli.py", line 52, in main
log.info(result['url'])
KeyError: 'url'
```
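A sketch of the kind of guard in `cli.py` that would avoid the uncaught KeyError (hypothetical; the real code layout differs):
```python
import logging

log = logging.getLogger('coveralls')

def report_result(result: dict) -> None:
    # Guard the missing 'url' key instead of assuming a successful submission.
    if 'url' in result:
        log.info(result['url'])
    else:
        log.error("Coverage submission did not return a URL: %s", result)

report_result({'message': 'Service unavailable (503)'})
```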
| closed | 2014-02-13T19:26:56Z | 2014-05-05T17:47:00Z | https://github.com/TheKevJames/coveralls-python/issues/41 | [] | isms | 7 |
allure-framework/allure-python | pytest | 236 | Change folder strucutre from `src` to module name | [//]: # (
. Note: for support questions, please use Stackoverflow or Gitter**.
. This repository's issues are reserved for feature requests and bug reports.
.
. In case of any problems with Allure Jenkins plugin** please use the following repository
. to create an issue: https://github.com/jenkinsci/allure-plugin/issues
.
. Make sure you have a clear name for your issue. The name should start with a capital
. letter and no dot is required in the end of the sentence. An example of good issue names:
.
. - The report is broken in IE11
. - Add an ability to disable default plugins
. - Support emoji in test descriptions
)
#### I'm submitting a ...
- [ ] bug report
- [x] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
Folder structure is like this:
```
- allure-pytest\
- src\
- source files (...)
- setup.py
- other files (...)
```
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
Modules are not available for import when installing with the `pip install --editable .` command.
#### What is the expected behavior?
We could just rename the `src` folder with the module name:
```
- allure-pytest\
- allure_pytest\
- source files (...)
- setup.py
- other files (...)
```
Sure, it is redundant (and aesthetically I don't like it myself), but this will ease the development process until the problem with `pip --editable` is solved (see below).
#### What is the motivation / use case for changing the behavior?
The pip editable install does not work with the current structure, which uses the `package_dir` option:
https://github.com/pypa/pip/issues/3160
#### Please tell us about your environment:
- Allure version: 2.6.0
- Test framework: pytest@3.6
- Allure adaptor: allure-pytest@2.3.3b1
#### Other information
[//]: # (
. e.g. detailed explanation, stacktraces, related issues, suggestions
. how to fix, links for us to have more context, eg. Stackoverflow, Gitter etc
)
| closed | 2018-05-25T10:14:35Z | 2022-12-05T17:32:14Z | https://github.com/allure-framework/allure-python/issues/236 | [] | Sup3rGeo | 8 |
mljar/mljar-supervised | scikit-learn | 650 | Invalid comparison between dtype=timedelta64[ns] and float64 | Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/pandas/core/arrays/datetimelike.py", line 935, in _cmp_method
other = self._validate_comparison_value(other)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/arrays/datetimelike.py", line 571, in _validate_comparison_value
raise InvalidComparison(other)
pandas.errors.InvalidComparison: 3.4028234663852886e+38
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/supervised/base_automl.py", line 1195, in _fit
trained = self.train_model(params)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/supervised/base_automl.py", line 401, in train_model
mf.train(results_path, model_subpath)
File "/opt/conda/lib/python3.11/site-packages/supervised/model_framework.py", line 197, in train
].fit_and_transform(
^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/supervised/preprocessing/preprocessing.py", line 298, in fit_and_transform
X_train[numeric_cols] = X_train[numeric_cols].clip(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/frame.py", line 11457, in clip
return super().clip(lower, upper, axis=axis, inplace=inplace, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/generic.py", line 8215, in clip
return self._clip_with_scalar(lower, upper, inplace=inplace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/generic.py", line 8024, in _clip_with_scalar
subset = self <= upper
^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/ops/common.py", line 81, in new_method
return method(self, other)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/arraylike.py", line 52, in __le__
return self._cmp_method(other, operator.le)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/frame.py", line 7445, in _cmp_method
new_data = self._dispatch_frame_op(other, op, axis=axis)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/frame.py", line 7484, in _dispatch_frame_op
bm = self._mgr.apply(array_op, right=right)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/internals/managers.py", line 350, in apply
applied = b.apply(f, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/internals/blocks.py", line 329, in apply
result = func(self.values, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/ops/array_ops.py", line 279, in comparison_op
res_values = op(lvalues, rvalues)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/ops/common.py", line 81, in new_method
return method(self, other)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/arraylike.py", line 52, in __le__
return self._cmp_method(other, operator.le)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/arrays/datetimelike.py", line 937, in _cmp_method
return invalid_comparison(self, other, op)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/pandas/core/ops/invalid.py", line 36, in invalid_comparison
raise TypeError(f"Invalid comparison between dtype={left.dtype} and {typ}")
TypeError: Invalid comparison between dtype=timedelta64[ns] and float64
Please set a GitHub issue with above error message at: https://github.com/mljar/mljar-supervised/issues/new
| open | 2023-09-12T15:11:19Z | 2023-09-12T15:13:49Z | https://github.com/mljar/mljar-supervised/issues/650 | [] | adrienpacifico | 1 |
microsoft/qlib | machine-learning | 1,428 | Has anyone successfully run a US benchmark yet? There seems to be a bug with the US version: the instrument variable gets evaluated to NaN. | ## 🐛 Bug Description
## To Reproduce
Steps to reproduce the behavior:
1. take any yaml file under examples/benchmark. I'll take workflow_config_lightgbm_Alpha158.yaml for example.
2. Change all fields from the China version to the US version (consolidated in the yaml sketch after step 3):
2.1 change (provider_uri: "~/.qlib/qlib_data/cn_data") to (provider_uri: "~/.qlib/qlib_data/us_data")
2.2 change (region: cn) to (region: us)
2.3 change (market: &market csi300) to (market: &market sp500)
2.4 change (benchmark: &benchmark SH000300) to (benchmark: &benchmark ^GSPC)
3. Run
qrun workflow_config_lightgbm_Alpha158.yaml
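For reference, the changed fields consolidated into one sketch (this assumes the usual layout of the Alpha158 benchmark yaml; every other field stays as in the original file):
```yaml
qlib_init:
    provider_uri: "~/.qlib/qlib_data/us_data"
    region: us
market: &market sp500
benchmark: &benchmark ^GSPC
```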
## Expected Behavior
AttributeError: 'float' object has no attribute 'lower'
Upon closer debugging, the instrument variable was evaluated to NaN.
## Screenshot
<img width="1045" alt="image" src="https://user-images.githubusercontent.com/1435138/215260135-671ad6bc-690e-4651-9314-4ecf138ae5b2.png">
## Environment
**Note**: User could run `cd scripts && python collect_info.py all` under project directory to get system information
and paste them here directly.
Windows
AMD64
Windows-10-10.0.22621-SP0
10.0.22621
Python version: 3.8.15 (default, Nov 24 2022, 14:38:14) [MSC v.1916 64 bit (AMD64)]
Qlib version: 0.9.0.99
numpy==1.23.5
pandas==1.5.2
scipy==1.9.3
requests==2.28.1
sacred==0.8.2
python-socketio==5.7.2
redis==4.4.0
python-redis-lock==4.0.0
schedule==1.1.0
cvxpy==1.2.3
hyperopt==0.1.2
fire==0.5.0
statsmodels==0.13.5
xlrd==2.0.1
plotly==5.11.0
matplotlib==3.6.2
tables==3.8.0
pyyaml==6.0
mlflow==1.30.0
tqdm==4.64.1
loguru==0.6.0
lightgbm==3.3.3
tornado==6.2
joblib==1.2.0
fire==0.5.0
ruamel.yaml==0.17.21
## Additional Notes
| open | 2023-01-28T10:01:03Z | 2023-07-22T11:45:39Z | https://github.com/microsoft/qlib/issues/1428 | [
"bug"
] | tianlongwang | 4 |
coqui-ai/TTS | pytorch | 3,731 | [Feature request] GPU Mac Silicon Chip | I would like to use the GPU from my Mac Silicon Chip. :) | closed | 2024-05-11T04:27:35Z | 2024-07-19T05:21:06Z | https://github.com/coqui-ai/TTS/issues/3731 | [
"wontfix",
"feature request"
] | kabelklaus | 3 |
Johnserf-Seed/TikTokDownload | api | 247 | Downloading maxes out the CPU | Downloading maxes out the CPU; could a numeric input box be added to limit how much CPU it uses? | closed | 2022-11-11T15:44:25Z | 2022-11-27T11:46:45Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/247 | [
"额外求助(help wanted)",
"无效(invalid)"
] | HelloZhou3301 | 1 |
aminalaee/sqladmin | asyncio | 700 | Add messages support | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
I would like to display feedback to users after a form submission, not just error messages. This would allow for warnings and success messages.
### Describe the solution you would like.
Django Admin uses this
https://docs.djangoproject.com/en/dev/ref/contrib/messages/#django.contrib.messages.add_message
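For context, this is roughly how that framework is used inside a Django view (standard Django API, shown only to illustrate the kind of feedback being requested):
```python
from django.contrib import messages

def save_view(request):
    ...
    messages.add_message(request, messages.SUCCESS, "Item saved.")
    # shortcut for add_message(request, messages.WARNING, ...)
    messages.warning(request, "Saved, but some fields were ignored.")
```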
### Describe alternatives you considered
_No response_
### Additional context
I may be willing to work on this, if there is interest. | open | 2024-01-19T23:25:17Z | 2024-05-15T20:07:55Z | https://github.com/aminalaee/sqladmin/issues/700 | [] | jonocodes | 11 |
sergree/matchering | numpy | 32 | [Discussion] Desktop Application 🖥 | I've been looking into different technologies for making Rust desktop applications because that's personally something I want to get into. I think that [Flutter](https://flutter.dev) is one of the most promising GUI frameworks around right now, and I just yesterday discovered [nativeshell](https://github.com/nativeshell/nativeshell) which is a way to build desktop applications with Rust and Flutter. I was thinking about what might be a good demo application for me to build using nativeview and then I thought of Matchering.
If I get the time, I might try to make a native desktop application for Matchering, but I've still got to figure out the best way to embed Matchering in a desktop application without requiring Python to be installed. I heard that you had experimented with a Rust version of matchering, which would be the easiest way to get matchering embedded, but if that isn't ready yet then that won't be an option.
The other option is to embed the Python interpreter in Rust. This should work, and I've done it before in smaller use-cases, I just have to look into it more.
I wanted to open this issue to start the discussion and ask whether or not the Rust version of matchering is anywhere close to usable or if I should just try to embed the Python interpreter. | closed | 2021-06-08T16:23:36Z | 2024-10-23T19:28:22Z | https://github.com/sergree/matchering/issues/32 | [
"enhancement"
] | zicklag | 3 |
waditu/tushare | pandas | 1,538 | Requesting access to the public fund NAV data interface (fund_nav) | Hello administrator,
I recently registered on Tushare and, thanks to your kindness, received a bit over a thousand initial credits as a PhD student. Still, I'd like to cheekily make one more request.
I'm doing some personal investing and would like to pull the NAV data of over-the-counter public funds to do a bit of moving-average analysis. Embarrassingly, I'm one of the retail investors who entered the market at the end of 2020. This particular interface requires 2000 credits, so I'd like to ask whether it would be possible to top up another 500 credits, or whether there is another way around this.
ID: 435618. | open | 2021-04-11T14:53:07Z | 2021-05-20T11:08:53Z | https://github.com/waditu/tushare/issues/1538 | [] | LiuGuanTing | 1 |
opengeos/leafmap | plotly | 927 | Problem with save_draw_features | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.38.5
- Python version: 3.10.13
- Operating System: Ubuntu 22.04
### Description
```
{
"name": "ValueError",
"message": "Assigning CRS to a GeoDataFrame without a geometry column is not supported. Use GeoDataFrame.set_geometry to set the active geometry column.",
"stack": "---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~/GeoSegmentation/.venv/lib/python3.10/site-packages/geopandas/geodataframe.py:517, in GeoDataFrame.crs(self)
516 try:
--> 517 return self.geometry.crs
518 except AttributeError:
File ~/GeoSegmentation/.venv/lib/python3.10/site-packages/pandas/core/generic.py:6299, in NDFrame.__getattr__(self, name)
6298 return self[name]
-> 6299 return object.__getattribute__(self, name)
File ~/GeoSegmentation/.venv/lib/python3.10/site-packages/geopandas/geodataframe.py:253, in GeoDataFrame._get_geometry(self)
    247             msg += (
    248                 "\nThere are no existing columns with geometry data type. You can "
    249                 "add a geometry column as the active geometry column with "
    250                 "df.set_geometry. "
    251             )
--> 253 raise AttributeError(msg)
254 return self[self._geometry_column_name]
AttributeError: You are calling a geospatial method on the GeoDataFrame, but the active geometry column to use has not been set.
There are no existing columns with geometry data type. You can add a geometry column as the active geometry column with df.set_geometry.
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
File ~/GeoSegmentation/.venv/lib/python3.10/site-packages/pandas/core/generic.py:6325, in NDFrame.__setattr__(self, name, value)
6324 try:
-> 6325 existing = getattr(self, name)
6326 if isinstance(existing, Index):
File ~/GeoSegmentation/.venv/lib/python3.10/site-packages/pandas/core/generic.py:6299, in NDFrame.__getattr__(self, name)
6298 return self[name]
-> 6299 return object.__getattribute__(self, name)
File ~/GeoSegmentation/.venv/lib/python3.10/site-packages/geopandas/geodataframe.py:519, in GeoDataFrame.crs(self)
518 except AttributeError:
--> 519 raise AttributeError(
520 \"The CRS attribute of a GeoDataFrame without an active \"
521 \"geometry column is not defined. Use GeoDataFrame.set_geometry \"
522 \"to set the active geometry column.\"
523 )
AttributeError: The CRS attribute of a GeoDataFrame without an active geometry column is not defined. Use GeoDataFrame.set_geometry to set the active geometry column.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
File /home/user/GeoSegmentation/notebook/esempio.py:6
3 m = leafmap.Map()
4 m
----> 6 m.save_draw_features("data.geojson")
File ~/GeoSegmentation/.venv/lib/python3.10/site-packages/leafmap/leafmap.py:3889, in Map.save_draw_features(self, out_file, indent, crs, **kwargs)
3883 geojson = {
3884 \"type\": \"FeatureCollection\",
3885 \"features\": self.draw_features,
3886 }
3888 gdf = gpd.GeoDataFrame.from_features(geojson)
-> 3889 gdf.crs = \"epsg:4326\"
3890 gdf.to_crs(crs).to_file(out_file, **kwargs)
File ~/GeoSegmentation/.venv/lib/python3.10/site-packages/geopandas/geodataframe.py:223, in GeoDataFrame.__setattr__(self, attr, val)
221 object.__setattr__(self, attr, val)
222 else:
--> 223 super().__setattr__(attr, val)
File ~/GeoSegmentation/.venv/lib/python3.10/site-packages/pandas/core/generic.py:6341, in NDFrame.__setattr__(self, name, value)
6333 if isinstance(self, ABCDataFrame) and (is_list_like(value)):
6334 warnings.warn(
6335 \"Pandas doesn't allow columns to be \"
6336 \"created via a new attribute name - see \"
(...)
6339 stacklevel=find_stack_level(),
6340 )
-> 6341 object.__setattr__(self, name, value)
File ~/GeoSegmentation/.venv/lib/python3.10/site-packages/geopandas/geodataframe.py:529, in GeoDataFrame.crs(self, value)
527 \"\"\"Sets the value of the crs\"\"\"
528 if self._geometry_column_name is None:
--> 529 raise ValueError(
530 \"Assigning CRS to a GeoDataFrame without a geometry column is not \"
531 \"supported. Use GeoDataFrame.set_geometry to set the active \"
532 \"geometry column.\",
533 )
535 if hasattr(self.geometry.values, \"crs\"):
536 if self.crs is not None:
ValueError: Assigning CRS to a GeoDataFrame without a geometry column is not supported. Use GeoDataFrame.set_geometry to set the active geometry column."
}
```
### What I Did
I use this code:
```
import leafmap
m = leafmap.Map()
m
m.save_draw_features("data.geojson")
```
| closed | 2024-10-21T15:43:45Z | 2024-10-21T19:58:41Z | https://github.com/opengeos/leafmap/issues/927 | [
"bug"
] | automataIA | 1 |
deepset-ai/haystack | pytorch | 8,089 | Mermaid Crashes If trying to draw a large pipeline | Thanks in advance for your help :)
**Describe the bug**
I was building a huge pipeline, 30 components and 35 connections, and for debugging purposes I wanted to display the diagram, but both the .draw() and .show() methods failed. It still works with small pipelines, by the way.
**Error message**
```
Failed to draw the pipeline: https://mermaid.ink/img/ returned status 400
No pipeline diagram will be saved.
Failed to draw the pipeline: could not connect to https://mermaid.ink/img/ (400 Client Error: Bad Request for url: https://mermaid.ink/img/{place holder for 2km long data}
No pipeline diagram will be saved.
Traceback (most recent call last):
File "/Users/carlosfernandezloran/Desktop/babyagi-classic-haystack/.venv/lib/python3.10/site-packages/haystack/core/pipeline/draw.py", line 87, in _to_mermaid_image
resp.raise_for_status()
File "/Users/carlosfernandezloran/Desktop/babyagi-classic-haystack/.venv/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://mermaid.ink/img/{another placeholder}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/carlosfernandezloran/Desktop/babyagi-classic-haystack/babyagi.py", line 188, in <module>
pipe.draw(path=Path("pipe"))
File "/Users/carlosfernandezloran/Desktop/babyagi-classic-haystack/.venv/lib/python3.10/site-packages/haystack/core/pipeline/base.py", line 649, in draw
image_data = _to_mermaid_image(self.graph)
File "/Users/carlosfernandezloran/Desktop/babyagi-classic-haystack/.venv/lib/python3.10/site-packages/haystack/core/pipeline/draw.py", line 95, in _to_mermaid_image
raise PipelineDrawingError(
haystack.core.errors.PipelineDrawingError: There was an issue with https://mermaid.ink/, see the stacktrace for details.
```
**Expected behavior**
I expect the .show() and .draw() methods to work for all pipelines, no matter the size.
This might be a Mermaid problem and not strictly Haystack's, but we would need to work on implementing a local diagram generator, as said in #7896
**To Reproduce**
I will not add all the 200 lines of add_component, connect statements, but you can imagine how it goes.
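Purely to illustrate the shape of it, here is a made-up sketch that builds a similarly large pipeline from a trivial custom component (the component and socket names are invented, not the real pipeline):
```python
from haystack import Pipeline, component

@component
class Echo:
    @component.output_types(text=str)
    def run(self, text: str):
        return {"text": text}

pipe = Pipeline()
for i in range(30):
    pipe.add_component(f"echo_{i}", Echo())
for i in range(29):
    pipe.connect(f"echo_{i}.text", f"echo_{i + 1}.text")

pipe.show()  # or pipe.draw(path=...) — both end up in the mermaid.ink request above
```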
**System:**
- OS: macOS
- GPU/CPU: M1
- Haystack version (commit or version number): 2.3.0
| closed | 2024-07-25T22:08:43Z | 2025-01-28T11:18:55Z | https://github.com/deepset-ai/haystack/issues/8089 | [
"P3"
] | CarlosFerLo | 10 |
Lightning-AI/pytorch-lightning | data-science | 20,027 | [Fabric Lightning] Named barriers | ### Description & Motivation
To prevent ranks from losing alignment due to user error, it would be beneficial to have named barriers in Lightning, allowing ranks to move forward only when they all hit a barrier with the same name.
### Pitch
For example:
```
if fabric.global_rank == 0:
fabric.barrier("rank_0")
else:
fabric.barrier("not_rank_0")
```
will fail in this case, and upon timeout each rank will raise an error with the barrier at which it is held up.
This is as opposed to potential user error where due to incorrect logic the various ranks might go different paths, reach some other barrier which in turn enables the whole flow to continue.
An issue that will likely repeat itself is with `fabric.save`. It is not obvious to new users (who don't dig into the documentation) that this should be called on all ranks, as it implements its own internal barrier call.
A typical mistake would be to construct
```
if fabric.global_rank == 0:
fabric.save(...)
fabric.barrier()
do_training_stuff
fabric.barrier()
```
In this case, rank 0 will start to lag behind as it performs an additional barrier call.
If `fabric.save` implemented `fabric.barrier("save")`, then the above program would exit, printing that there is an alignment issue.
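For contrast, a corrected sketch (assuming the standard Fabric API, with `state` being whatever gets checkpointed) has every rank enter `fabric.save`, so the internal collective lines up:
```python
state = {"model": model, "optimizer": optimizer}
fabric.save("checkpoint.ckpt", state)  # called on ALL ranks; Fabric does the rank-0-only writing internally
fabric.barrier()
do_training_stuff
fabric.barrier()
```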
### Alternatives
_No response_
### Additional context
https://github.com/Lightning-AI/pytorch-lightning/issues/19780
cc @borda @awaelchli | open | 2024-06-28T11:14:00Z | 2024-06-28T12:25:44Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20027 | [
"feature",
"help wanted",
"distributed"
] | tesslerc | 1 |
awesto/django-shop | django | 579 | Payment module | I have a question about creating a payment module for one of the operators.
The operator I want to integrate requires that I POST data to its transaction log, after which it returns a token for further authorization.
One of these data fields is the OrderId, which can be sent to the payment system only once.
After that, authorization and status checks are carried out using the token given when the payment is registered.
Should I implement the payment as it is with the "Pay in advance" method?
Where an order is created with the possibility of later payment.
Or is it possible to give the orderID in advance to the operator and go to his site to pay for the order?
I have looked at how this works for PayPal payments: there, the orderID is created only after returning from PayPal with confirmation that the order was paid correctly. | closed | 2017-05-15T08:12:25Z | 2017-08-25T12:03:14Z | https://github.com/awesto/django-shop/issues/579 | [] | maltitco | 21 |
python-restx/flask-restx | flask | 535 | Choices Property Does Not Work For JSON List | When defining choices for a list in the JSON data input, validation does not work. This is true if type is `list` or if type is `str` and action is `"append"`.
### **Code**
```python
import flask
from flask_restx import Api, Namespace, Resource
from flask_restx import reqparse
from flask_restx import Api
parser = reqparse.RequestParser()
parser.add_argument(
"argList",
dest="arg_list",
type=list,
location="json",
default=[],
choices=[
"one",
"two",
"three",
],
required=False,
help=f"An argument list",
)
# Our Flask app and API
app = flask.Flask(__name__)
api = Api(
app,
version="1.0.0",
title="Tester",
description="Test parsing arguments",
)
class RouteWithArgs(Resource):
@api.expect(parser)
def put(
self,
):
args = parser.parse_args()
return {"data": "Args look good!"}, 200
# routes
api.add_resource(RouteWithArgs, "/args")
if __name__ == "__main__":
app.run(debug=True)
```
### **Repro Steps** (if applicable)
1. Run Flask application for code above with `python <file-name>.py`
2. Send a request with either allowed or disallowed values
3. Observe that you receive an error message either way
### **Expected Behavior**
I would expect to receive an error message with a disallowed parameter and no error message when providing allowed parameters.
### **Actual Behavior**
An error is returned no matter what is present in the request.
### **Error Messages/Stack Trace**
```
>>> response = requests.put("http://localhost:5000/args", headers={"Content-Type": "application/json"}, data=json.dumps({"argList": ["a"]}))
>>> response.json()
{'errors': {'argList': "An argument list The value '['a']' is not a valid choice for 'argList'."}, 'message': 'Input payload validation failed'}
>>> response = requests.put("http://localhost:5000/args", headers={"Content-Type": "application/json"}, data=json.dumps({"argList": ["one"]}))
>>> response.json()
{'errors': {'argList': "An argument list The value '['one']' is not a valid choice for 'argList'."}, 'message': 'Input payload validation failed'}
```
### **Environment**
- Python 3.8.12
- Flask 2.0.1
- Flask-RESX 0.5.1
- Flask Cors 3.0.10
| open | 2023-04-03T19:13:31Z | 2023-04-12T18:54:06Z | https://github.com/python-restx/flask-restx/issues/535 | [
"bug"
] | pype-leila | 2 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,163 | vocoder.pt drive link is not working. | Getting 404 error while trying to download the vocoder model for the drive link: https://drive.google.com/uc?ixd=1cf2NO6FtI0jDuy8AV3Xgn6leO6dHjIgu | open | 2023-02-14T14:57:09Z | 2023-02-14T14:57:09Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1163 | [] | seriesscar | 0 |
google/trax | numpy | 1,065 | ConvTranspose Layer | It would be nice to have a **Transpose Convolution Layer** added to the ```trax.layers.convolution``` class. | closed | 2020-10-03T06:18:03Z | 2020-12-10T16:18:00Z | https://github.com/google/trax/issues/1065 | [
"enhancement",
"good first issue"
] | SauravMaheshkar | 3 |
ymcui/Chinese-BERT-wwm | nlp | 144 | transformers 2.2.2 fails to load the parameters | Loading it directly raises an error:
OSError: Model name '/Users/wonbyron/bert/chinese_roberta_wwm_large_ext_pytorch/' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed '/Users/wonbyron/bert/chinese_roberta_wwm_large_ext_pytorch/config.json' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
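For reference, the call that triggers this expects a `config.json` inside the directory; a minimal sketch (using the local path from the error above) looks like:
```python
from transformers import BertModel, BertTokenizer

model_dir = "/Users/wonbyron/bert/chinese_roberta_wwm_large_ext_pytorch/"
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertModel.from_pretrained(model_dir)  # transformers 2.2.2 looks for config.json in this directory
```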
After checking, it turns out the config file has to be renamed to bert.config for it to work. | closed | 2020-09-15T02:04:20Z | 2020-09-21T07:38:11Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/144 | [] | RoacherM | 4 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 505 | using pytorch training reference script with pytorch-metric-learning | Hi @KevinMusgrave !
I have recently been using the reference scripts pytorch provides to train my models (which is wonderful btw) BUT, I would love to use pytorch-metric-learning with this reference script.
The training script and blog re this is here:
https://github.com/pytorch/vision/tree/main/references/classification
https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/
I am particularly interested in classification:
https://github.com/pytorch/vision/blob/main/references/classification/train.py
However, the issue of course is that they all use CE loss, which requires the logits - but I am not sure how to use, say, ArcFace loss with this training reference script. Essentially, I would need the logits to make this work, but at the moment all my ArcFace loss models work with an embedder output and distance metrics.
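For reference, the usual pytorch-metric-learning pattern looks roughly like this (model and sizes are made up), with the loss taking embeddings and labels rather than logits:
```python
from pytorch_metric_learning import losses

loss_func = losses.ArcFaceLoss(num_classes=100, embedding_size=512)
embeddings = model(images)            # embedder/trunk defined elsewhere
loss = loss_func(embeddings, labels)  # ArcFace margin + softmax handled internally
```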
I was wondering if you could provide some guidance/advice on how to proceed to include metric learning in the reference training script.
Thank you!
| closed | 2022-07-28T07:07:13Z | 2022-08-02T18:24:20Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/505 | [
"question"
] | abaqusstudent | 2 |
amidaware/tacticalrmm | django | 1,406 | Feature Request: Expose agent errors in the TRMM console | **Is your feature request related to a problem? Please describe.**
The `agent.log` has some errors that indicate possible problems but you don't know they are problems until you work with that specific agent.
**Describe the solution you'd like**
It would be nice if these system level or agent level errors were exposed in the TRMM console. If the agent can't connect to the server, then of course it can't report the errors. However, the majority of the errors can be reported to the server. Out of the 4 errors below, 3 could be reported to the server. The "MeshAgent.exe: file does not exist" error will be found when you go to remote to the agent and the "Connect" button is greyed out. In this case, repairing the agent did not do anything; Mesh needed to be reinstalled. The other 2 errors indicate possible problems with the agent and it's better to fix the issue before they become major problems, or before you try to troubleshoot a problem and run into the specific scenario which caused the error.
**Describe alternatives you've considered**
One alternative is to use a check on an agent to parse the logs and report the errors. Using a script check has one major problem:
1. TRMM checks do not have a memory. They cannot know the last timestamp that was scanned to know where to pick up in the log to prevent duplicate/missing alerts.
**Additional context**
IMHO this should be implemented as part of the core functionality. The reporting should be done as part of the frontend interface to view "TRMM errors", not agent-specific errors that you would expect for script checks for a human to fix. TRMM errors are generally programming issues (i.e. adding extra error handling) and should be reported/treated as such.
```text
time="2023-01-18T08:30:03-05:00" level=error msg="SyncMeshNodeID() getMeshNodeID() exec: \"C:\\\\Program Files\\\\Mesh Agent\\\\MeshAgent.exe\": file does not exist: "
```
```text
time="2022-11-26T19:02:03-05:00" level=error msg="error creating NewUpdateSession: ole.CoInitializeEx(0, ole.COINIT_MULTITHREADED): Cannot change thread mode after it is set."
```
```text
time="2022-03-25T15:09:49-04:00" level=error msg="x509: certificate has expired or is not yet valid: "
```
```text
time="2022-10-18T01:37:13-04:00" level=error msg="Checkrunner RunChecks exit status 2: Exception 0xc0000005 0x0 0xc000618000 0x7ff81957600f
PC=0x7ff81957600f
runtime.cgocall(0x9f57e0, 0xc000056ac0)
C:/Program Files/Go/src/runtime/cgocall.go:157 +0x4a fp=0xc000373660 sp=0xc000373628 pc=0x993f6a
syscall.SyscallN(0xc0003737b0?, {0xc0003736f8?, 0x74006e00650076?, 0x1acaf300001?})
C:/Program Files/Go/src/runtime/syscall_windows.go:556 +0x109 fp=0xc0003736d8 sp=0xc000373660 pc=0x9f0a49
syscall.Syscall9(0xc00060a500?, 0x0?, 0x3?, 0xc0003737d0?, 0xd921e6?, 0x0?, 0x0?, 0x0?, 0x0?, 0x0,...)
C:/Program Files/Go/src/runtime/syscall_windows.go:506 +0x78 fp=0xc000373750 sp=0xc0003736d8 pc=0x9f0758
github.com/amidaware/rmmagent/agent.FormatMessage(0x3800, 0xe7a662?, 0x80001779, 0x0, 0x2?, 0x10000, 0x0?)
C:/users/public/documents/agent/agent/syscall_windows.go:69 +0xc5 fp=0xc0003737e0 sp=0xc000373750 pc=0xd91fe5
github.com/amidaware/rmmagent/agent.getResourceMessage({0xc0002092e0?, 0x2ac?}, {0xc000209bb0, 0x8}, 0xb858bf7d?, 0x11f2d80?)
C:/users/public/documents/agent/agent/eventlog_windows.go:169 +0x1d8 fp=0xc0003838b8 sp=0xc0003737e0 pc=0xd81d98
github.com/amidaware/rmmagent/agent.(*Agent).GetEventLog(0xc00022e4e0, {0xc0002092e0, 0x6}, 0x1)
C:/users/public/documents/agent/agent/eventlog_windows.go:92 +0x5b0 fp=0xc000383ae8 sp=0xc0003838b8 pc=0xd815b0
github.com/amidaware/rmmagent/agent.(*Agent).EventLogCheck(_, {{{0x0, 0x0}, {0x0, 0x0}, 0x0}, {0x0, 0x0, 0x0}, 0xd0, ...}, ...)
C:/users/public/documents/agent/agent/checks.go:259 +0x77 fp=0xc000383d18 sp=0xc000383ae8 pc=0xd7e3d7
github.com/amidaware/rmmagent/agent.(*Agent).RunChecks.func7(0xc000209370, 0x0?)
C:/users/public/documents/agent/agent/checks.go:152 +0x148 fp=0xc000383fc0 sp=0xc000383d18 pc=0xd7bfe8
github.com/amidaware/rmmagent/agent.(*Agent).RunChecks.func14()
C:/users/public/documents/agent/agent/checks.go:154 +0x2e fp=0xc000383fe0 sp=0xc000383fc0 pc=0xd7be6e
runtime.goexit()
C:/Program Files/Go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000383fe8 sp=0xc000383fe0 pc=0x9f3f21
created by github.com/amidaware/rmmagent/agent.(*Agent).RunChecks
C:/users/public/documents/agent/agent/checks.go:149 +0x81b
goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc00020d160?)
C:/Program Files/Go/src/runtime/sema.go:56 +0x25
sync.(*WaitGroup).Wait(0xe5d920?)
C:/Program Files/Go/src/sync/waitgroup.go:136 +0x52
github.com/amidaware/rmmagent/agent.(*Agent).RunChecks(0xc00022e4e0, 0x0)
C:/users/public/documents/agent/agent/checks.go:156 +0x828
main.main()
C:/users/public/documents/agent/main.go:112 +0xe5b
goroutine 43 [syscall, locked to thread]:
syscall.SyscallN(0x7ff819434ad0?, {0xc000077888?, 0x3?, 0x0?})
C:/Program Files/Go/src/runtime/syscall_windows.go:556 +0x109
syscall.Syscall(0xc000027620?, 0x0?, 0x2030000?, 0x20?, 0x2030000?)
C:/Program Files/Go/src/runtime/syscall_windows.go:494 +0x3b
syscall.WaitForSingleObject(0x1ac215b6216?, 0xffffffff)
C:/Program Files/Go/src/syscall/zsyscall_windows.go:1145 +0x65
os.(*Process).wait(0xc000090330)
C:/Program Files/Go/src/os/exec_windows.go:18 +0x65
os.(*Process).Wait(...)
C:/Program Files/Go/src/os/exec.go:132
os/exec.(*Cmd).Wait(0xc0000c42c0)
C:/Program Files/Go/src/os/exec/exec.go:510 +0x54
os/exec.(*Cmd).Run(0xc000238340?)
C:/Program Files/Go/src/os/exec/exec.go:341 +0x39
github.com/amidaware/rmmagent/agent.(*Agent).RunPythonCode(0xc00022e4e0, {0xe976ff?, 0x0?}, 0xd, {0xc000281c50, 0x0, 0x0?})
C:/users/public/documents/agent/agent/agent.go:483 +0x58d
github.com/amidaware/rmmagent/agent.(*Agent).GetCPULoadAvg(0xc00022e4e0)
C:/users/public/documents/agent/agent/agent.go:328 +0x3e
github.com/amidaware/rmmagent/agent.(*Agent).CPULoadCheck(_, {{{0x0, 0x0}, {0x0, 0x0}, 0x0}, {0x0, 0x0, 0x0}, 0x59, ...}, ...)
C:/users/public/documents/agent/agent/checks.go:231 +0x3b
github.com/amidaware/rmmagent/agent.(*Agent).RunChecks.func2({{{0x0, 0x0}, {0x0, 0x0}, 0x0}, {0x0, 0x0, 0x0}, 0x59, {0xc0002091f0, ...}, ...}, ...)
C:/users/public/documents/agent/agent/checks.go:105 +0xa5
created by github.com/amidaware/rmmagent/agent.(*Agent).RunChecks
C:/users/public/documents/agent/agent/checks.go:103 +0x10fe
goroutine 45 [syscall, locked to thread]:
syscall.SyscallN(0x7ff819434ad0?, {0xc00007b6d8?, 0x3?, 0x0?})
C:/Program Files/Go/src/runtime/syscall_windows.go:556 +0x109
syscall.Syscall(0x0?, 0xc0002c0440?, 0x35?,0xc000096000?, 0x1ac215c8014?)
C:/Program Files/Go/src/runtime/syscall_windows.go:494 +0x3b
syscall.WaitForSingleObject(0x20?, 0xffffffff)
C:/Program Files/Go/src/syscall/zsyscall_windows.go:1145 +0x65
os.(*Process).wait(0xc000496ba0)
C:/Program Files/Go/src/os/exec_windows.go:18 +0x65
os.(*Process).Wait(...)
C:/Program Files/Go/src/os/exec.go:132
os/exec.(*Cmd).Wait(0xc0000c4000)
C:/Program Files/Go/src/os/exec/exec.go:510 +0x54
github.com/amidaware/rmmagent/agent.(*Agent).RunScript(0xc00022e4e0, {0xc000202300?, 0x1ac215b3332?}, {0xc000209204, 0xa}, {0x1248e08, 0x0, 0xc000283c78?}, 0x5a, 0x0)
C:/users/public/documents/agent/agent/agent_windows.go:178 +0xd34
github.com/amidaware/rmmagent/agent.(*Agent).ScriptCheck(_, {{{0xc000209204, 0xa}, {0xc000202300, 0x16a}, 0x0}, {0x0, 0x0, 0x0}, 0xab, ...}, ...)
C:/users/public/documents/agent/agent/checks.go:172 +0xbf
github.com/amidaware/rmmagent/agent.(*Agent).RunChecks.func5({{{0xc000209204, 0xa}, {0xc000202300, 0x16a}, 0x0}, {0x0, 0x0, 0x0}, 0xab, {0xc000209210, ...}, ...}, ...)
C:/users/public/documents/agent/agent/checks.go:126 +0xc8
created by github.com/amidaware/rmmagent/agent.(*Agent).RunChecks
C:/users/public/documents/agent/agent/checks.go:123 +0xdfd
goroutine 20 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc000281c70?, 0x0?, 0x0?})
C:/Program Files/Go/src/runtime/syscall_windows.go:556 +0x109
syscall.Syscall6(0x10?, 0x1ac21dd38e0?, 0x35?, 0xc000281d10?, 0x99c89e?, 0x1ac21dd38e0?, 0x35?, 0x0?)
C:/Program Files/Go/src/runtime/syscall_windows.go:500 +0x50
syscall.ReadFile(0xc000281d35?, {0xc000400000?, 0x200, 0x800000?}, 0x7ffff800000?, 0x2?)
C:/Program Files/Go/src/syscall/zsyscall_windows.go:1024 +0x94
syscall.Read(0xc0000b2c80?, {0xc000400000?,0x99a43d?, 0xc000281db0?})
C:/Program Files/Go/src/syscall/syscall_windows.go:380 +0x2e
internal/poll.(*FD).Read(0xc0000b2c80, {0xc000400000, 0x200, 0x200})
C:/Program Files/Go/src/internal/poll/fd_windows.go:427 +0x1b4
os.(*File).read(...)
C:/Program Files/Go/src/os/file_posix.go:31
os.(*File).Read(0xc00008c058, {0xc000400000?, 0x1ac4a7e0028?, 0xc000281ea0?})
C:/Program Files/Go/src/os/file.go:119 +0x5e
bytes.(*Buffer).ReadFrom(0xc000089590, {0xf589a0, 0xc00008c058})
C:/Program Files/Go/src/bytes/buffer.go:204 +0x98
io.copyBuffer({0xf580a0, 0xc000089590}, {0xf589a0, 0xc00008c058}, {0x0, 0x0, 0x0})
C:/Program Files/Go/src/io/io.go:412 +0x14b
io.Copy(...)
C:/Program Files/Go/src/io/io.go:385
os/exec.(*Cmd).writerDescriptor.func1()
C:/Program Files/Go/src/os/exec/exec.go:311 +0x3a
os/exec.(*Cmd).Start.func1(0x0?)
C:/Program Files/Go/src/os/exec/exec.go:444 +0x25
created by os/exec.(*Cmd).Start
C:/Program Files/Go/src/os/exec/exec.go:443 +0x845
goroutine 21 [syscall, locked to thread]:
syscall.SyscallN(0xa2f6c5?, {0xc000285c70?, 0xe75aa2?, 0x8?})
C:/Program Files/Go/src/runtime/syscall_windows.go:556 +0x109
syscall.Syscall6(0xc000027350?, 0xc000220820?, 0xc000089470?, 0xc00009e0a0?, 0xc00009e0b4?, 0xc00009e0b0?, 0xc00008a180?, 0x0?)
C:/Program Files/Go/src/runtime/syscall_windows.go:500 +0x50
syscall.ReadFile(0x0?, {0xc000304200?, 0x200, 0x800000?}, 0x7ffff800000?, 0x2?)
C:/Program Files/Go/src/syscall/zsyscall_windows.go:1024 +0x94
syscall.Read(0xc0000b3180?, {0xc000304200?, 0xebc720?, 0xc000285db0?})
C:/Program Files/Go/src/syscall/syscall_windows.go:380 +0x2e
internal/poll.(*FD).Read(0xc0000b3180, {0xc000304200, 0x200, 0x200})
C:/Program Files/Go/src/internal/poll/fd_windows.go:427 +0x1b4
os.(*File).read(...)
C:/Program Files/Go/src/os/file_posix.go:31
os.(*File).Read(0xc00008c070, {0xc000304200?, 0xc000276300?, 0xc000285ea0?})
C:/Program Files/Go/src/os/file.go:119 +0x5e
bytes.(*Buffer).ReadFrom(0xc0000895c0, {0xf589a0, 0xc00008c070})
C:/Program Files/Go/src/bytes/buffer.go:204 +0x98
io.copyBuffer({0xf580a0, 0xc0000895c0}, {0xf589a0, 0xc00008c070}, {0x0, 0x0, 0x0})
C:/Program Files/Go/src/io/io.go:412 +0x14b
io.Copy(...)
C:/Program Files/Go/src/io/io.go:385
os/exec.(*Cmd).writerDescriptor.func1()
C:/Program Files/Go/src/os/exec/exec.go:311 +0x3a
os/exec.(*Cmd).Start.func1(0xc000276300?)
C:/Program Files/Go/src/os/exec/exec.go:444 +0x25
created by os/exec.(*Cmd).Start
C:/Program Files/Go/src/os/exec/exec.go:443 +0x845
goroutine 22 [select]:
os/exec.(*Cmd).Start.func2()
C:/Program Files/Go/src/os/exec/exec.go:452 +0x75
created by os/exec.(*Cmd).Start
C:/Program Files/Go/src/os/exec/exec.go:451 +0x82a
goroutine 73 [IO wait]:
internal/poll.runtime_pollWait(0x1ac21856558, 0x72)
C:/Program Files/Go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0001bf505?, 0xc0001bf505?, 0x0)
C:/Program Files/Go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.execIO(0xc00011aa18, 0xebc568)
C:/Program Files/Go/src/internal/poll/fd_windows.go:175 +0xe5
internal/poll.(*FD).Read(0xc00011aa00, {0xc0001bf500, 0x13b8, 0x13b8})
C:/Program Files/Go/src/internal/poll/fd_windows.go:441 +0x25f
net.(*netFD).Read(0xc00011aa00, {0xc0001bf500?, 0xc0000704a0?, 0xc0001bf505?})
C:/Program Files/Go/src/net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc00008c048, {0xc0001bf500?, 0xa97ebeacd5d1a31f?, 0x1224?})
C:/Program Files/Go/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc000210708, {0xc0001bf500?, 0x0?, 0xc00048d8a0?})
C:/Program Files/Go/src/crypto/tls/conn.go:785 +0x3d
bytes.(*Buffer).ReadFrom(0xc0000b8cf8, {0xf58140, 0xc000210708})
C:/Program Files/Go/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0000b8a80, {0x1ac21856828?, 0xc00008c048}, 0x13b8?)
C:/Program Files/Go/src/crypto/tls/conn.go:807 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc0000b8a80, 0x0)
C:/Program Files/Go/src/crypto/tls/conn.go:614 +0x116
crypto/tls.(*Conn).readRecord(...)
C:/Program Files/Go/src/crypto/tls/conn.go:582
crypto/tls.(*Conn).Read(0xc0000b8a80, {0xc0000d3000, 0x1000, 0x21010401?})
C:/Program Files/Go/src/crypto/tls/conn.go:1285 +0x16f
bufio.(*Reader).Read(0xc000065560, {0xc00029e660, 0x9, 0xc00031cc00?})
C:/Program Files/Go/src/bufio/bufio.go:236 +0x1b4
io.ReadAtLeast({0xf58040, 0xc000065560}, {0xc00029e660, 0x9, 0x9}, 0x9)
C:/Program Files/Go/src/io/io.go:331 +0x9a
io.ReadFull(...)
C:/Program Files/Go/src/io/io.go:350
net/http.http2readFrameHeader({0xc00029e660?, 0x9?, 0xc000188ad0?}, {0xf58040?, 0xc000065560?})
C:/Program Files/Go/src/net/http/h2_bundle.go:1566 +0x6e
net/http.(*http2Framer).ReadFrame(0xc00029e620)
C:/Program Files/Go/src/net/http/h2_bundle.go:1830 +0x95
net/http.(*http2clientConnReadLoop).run(0xc00048df98)
C:/Program Files/Go/src/net/http/h2_bundle.go:8820 +0x130
net/http.(*http2ClientConn).readLoop(0xc000188a80)
C:/Program Files/Go/src/net/http/h2_bundle.go:8716 +0x6f
created by net/http.(*http2Transport).newClientConn
C:/Program Files/Go/src/net/http/h2_bundle.go:7444 +0xa65
goroutine 114 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc000283c70?, 0xc000066640?, 0x0?})
C:/Program Files/Go/src/runtime/syscall_windows.go:556 +0x109
syscall.Syscall6(0x10?, 0x1ac21dd15d0?, 0x35?, 0xc000283d10?, 0x99c89e?, 0x1ac21dd15d0?, 0xc000283d35?, 0xd97491?)
C:/Program Files/Go/src/runtime/syscall_windows.go:500 +0x50
syscall.ReadFile(0x135?, {0xc000304000?, 0x200, 0x800000?}, 0x7ffff800000?, 0x2?)
C:/Program Files/Go/src/syscall/zsyscall_windows.go:1024 +0x94
syscall.Read(0xc0000b3680?, {0xc000304000?, 0x0?, 0xc000283db0?})
C:/Program Files/Go/src/syscall/syscall_windows.go:380 +0x2e
internal/poll.(*FD).Read(0xc0000b3680, {0xc000304000, 0x200, 0x200})
C:/Program Files/Go/src/internal/poll/fd_windows.go:427 +0x1b4
os.(*File.read(...)
C:/Program Files/Go/src/os/file_posix.go:31
os.(*File).Read(0xc000006040, {0xc000304000?, 0xd7c5e0?, 0xc000283ea0?})
C:/Program Files/Go/src/os/file.go:119 +0x5e
bytes.(*Buffer).ReadFrom(0xc00026a2d0, {0xf589a0, 0xc000006040})
C:/Program Files/Go/src/bytes/buffer.go:204 +0x98
io.copyBuffer({0xf580a0, 0xc00026a2d0}, {0xf589a0, 0xc000006040}, {0x0, 0x0, 0x0})
C:/Program Files/Go/src/io/io.go:412 +0x14b
io.Copy(...)
C:/Program Files/Go/src/io/io.go:385
os/exec.(*Cmd).writerDescriptor.func1()
C:/Program Files/Go/src/os/exec/exec.go:311 +0x3a
os/exec.(*Cmd).Start.func1(0x0?)
C:/Program Files/Go/src/os/exec/exec.go:444 +0x25
created by os/exec.(*Cmd).Start
C:/Program Files/Go/src/os/exec/exec.go:443 +0x845
goroutine 115 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc000489c70?, 0x1ac215be721?, 0x0?})
C:/Program Files/Go/src/runtime/syscall_windows.go:556 +0x109
syscall.Syscall6(0x10?, 0x1ac21dd38e0?, 0x35?, 0xc000489d10?, 0x99c89e?, 0x1ac21dd38e0?, 0xc000489d35?, 0x99d0e5?)
C:/Program Files/Go/src/runtime/syscall_windows.go:500 +0x50
syscall.ReadFile(0xac21570a35?, {0xc000400200?, 0x200, 0x800000?}, 0x7ffff800000?, 0x2?)
C:/Program Files/Go/src/syscall/zsyscall_windows.go:1024 +0x94
syscall.Read(0xc0000b3b80?, {0xc000400200?, 0xc0000b8e00?, 0xc000489db0?})
C:/Program Files/Go/src/syscall/syscall_windows.go:380 +0x2e
internal/poll.(*FD).Read(0xc0000b3b80, {0xc000400200, 0x200, 0x200})
C:/Program Files/Go/src/internal/poll/fd_windows.go:427 +0x1b4
os.(*File).read(...)
C:/Program Files/Go/src/os/file_posix.go:31
os.(*File).Read(0xc0000060f0, {0xc000400200?, 0x0?, 0xc000489ea0?})
C:/Program Files/Go/src/os/file.go:119 +0x5e
bytes.(*Buffer).ReadFrom(0xc00026a300, {0xf589a0, 0xc0000060f0})
C:/Program Files/Go/src/bytes/buffer.go:204 +0x98
io.copyBuffer({0xf580a0, 0xc00026a300}, {0xf589a0, 0xc0000060f0}, {0x0, 0x0, 0x0})
C:/Program Files/Go/src/io/io.go:412 +0x14b
io.Copy(...)
C:/Program Files/Go/src/io/io.go:385
os/exec.(*Cmd).writerDescriptor.func1()
C:/Program Files/Go/src/os/exec/exec.go:311 +0x3a
os/exec.(*Cmd).Start.func1(0x0?)
C:/Program Files/Go/src/os/exec/exec.go:444 +0x25
created by os/exec.(*Cmd).Start
C:/Program Files/Go/src/os/exec/exec.go:443 +0x845
goroutine 116 [chan receive]:
github.com/amidaware/rmmagent/agent.(*Agent).RunScript.func1(0x0?)
C:/users/public/documents/agent/agent/agent_windows.go:172 +0x39
created by github.com/amidaware/rmmagent/agent.(*Agent).RunScript
C:/users/public/documents/agent/agent/agent_windows.go:170 +0xd27
rax 0x0
rbx 0xc000373884
rcx 0x0
rdi 0x198c9ffb10
rsi 0xc000373882
rbp 0x198c9ff370
rsp 0x198c9ff270
r8 0x0
r9 0x7fde8cc33301
r10 0xff01
r11 0x0
r12 0x1acaf31b884
r13 0x0
r14 0xc000618000
r15 0x0
rip 0x7ff81957600f
rflags 0x10202
cs 0x33
fs 0x53
gs 0x2b
"
``` | open | 2023-01-18T14:36:30Z | 2023-02-05T21:50:21Z | https://github.com/amidaware/tacticalrmm/issues/1406 | [
"enhancement"
] | NiceGuyIT | 0 |
ydataai/ydata-profiling | jupyter | 981 | TypeError From ProfileReport in Google Colab | ### Current Behaviour
In Google Colab the `.to_notebook_iframe` method on `ProfileReport` throws an error:
```Python
TypeError: concat() got an unexpected keyword argument 'join_axes'
```
This issue has been spotted in other contexts and there are questions in StackOverflow: https://stackoverflow.com/questions/61362942/concat-got-an-unexpected-keyword-argument-join-axes
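A guess at the cause (untested): Colab preinstalls a very old pandas-profiling (1.4.1, as the dependency list below shows) that still passes the `join_axes` argument removed from newer pandas, so upgrading the package before importing may avoid that call:
```python
!pip install -U pandas-profiling  # run in a Colab cell, then restart the runtime
```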
### Expected Behaviour
This section not applicable. Reporting bug that throws an error.
### Data Description
You can reproduce the error with this data:
```Python
https://projects.fivethirtyeight.com/polls/data/favorability_polls.csv
```
### Code that reproduces the bug
```Python
import pandas as pd
from pandas_profiling import ProfileReport
df = pd.read_csv('https://projects.fivethirtyeight.com/polls/data/favorability_polls.csv')
profile = ProfileReport(df)
profile.to_notebook_iframe()
```
### pandas-profiling version
Version 1.4.1
### Dependencies
```Text
absl-py==1.0.0
alabaster==0.7.12
albumentations==0.1.12
altair==4.2.0
appdirs==1.4.4
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arviz==0.12.0
astor==0.8.1
astropy==4.3.1
astunparse==1.6.3
atari-py==0.2.9
atomicwrites==1.4.0
attrs==21.4.0
audioread==2.1.9
autograd==1.4
Babel==2.10.1
backcall==0.2.0
beautifulsoup4==4.6.3
bleach==5.0.0
blis==0.4.1
bokeh==2.3.3
Bottleneck==1.3.4
branca==0.5.0
bs4==0.0.1
CacheControl==0.12.11
cached-property==1.5.2
cachetools==4.2.4
catalogue==1.0.0
certifi==2021.10.8
cffi==1.15.0
cftime==1.6.0
chardet==3.0.4
charset-normalizer==2.0.12
click==7.1.2
cloudpickle==1.3.0
cmake==3.22.4
cmdstanpy==0.9.5
colorcet==3.0.0
colorlover==0.3.0
community==1.0.0b1
contextlib2==0.5.5
convertdate==2.4.0
coverage==3.7.1
coveralls==0.5
crcmod==1.7
cufflinks==0.17.3
cvxopt==1.2.7
cvxpy==1.0.31
cycler==0.11.0
cymem==2.0.6
Cython==0.29.28
daft==0.0.4
dask==2.12.0
datascience==0.10.6
debugpy==1.0.0
decorator==4.4.2
defusedxml==0.7.1
descartes==1.1.0
dill==0.3.4
distributed==1.25.3
dlib @ file:///dlib-19.18.0-cp37-cp37m-linux_x86_64.whl
dm-tree==0.1.7
docopt==0.6.2
docutils==0.17.1
dopamine-rl==1.0.5
earthengine-api==0.1.307
easydict==1.9
ecos==2.0.10
editdistance==0.5.3
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz
entrypoints==0.4
ephem==4.1.3
et-xmlfile==1.1.0
fa2==0.3.5
fastai==1.0.61
fastdtw==0.3.4
fastjsonschema==2.15.3
fastprogress==1.0.2
fastrlock==0.8
fbprophet==0.7.1
feather-format==0.4.1
filelock==3.6.0
firebase-admin==4.4.0
fix-yahoo-finance==0.0.22
Flask==1.1.4
flatbuffers==2.0
folium==0.8.3
future==0.16.0
gast==0.5.3
GDAL==2.2.2
gdown==4.4.0
gensim==3.6.0
geographiclib==1.52
geopy==1.17.0
gin-config==0.5.0
glob2==0.7
google==2.0.3
google-api-core==1.31.5
google-api-python-client==1.12.11
google-auth==1.35.0
google-auth-httplib2==0.0.4
google-auth-oauthlib==0.4.6
google-cloud-bigquery==1.21.0
google-cloud-bigquery-storage==1.1.1
google-cloud-core==1.0.3
google-cloud-datastore==1.8.0
google-cloud-firestore==1.7.0
google-cloud-language==1.2.0
google-cloud-storage==1.18.1
google-cloud-translate==1.5.0
google-colab @ file:///colabtools/dist/google-colab-1.0.0.tar.gz
google-pasta==0.2.0
google-resumable-media==0.4.1
googleapis-common-protos==1.56.0
googledrivedownloader==0.4
graphviz==0.10.1
greenlet==1.1.2
grpcio==1.44.0
gspread==3.4.2
gspread-dataframe==3.0.8
gym==0.17.3
h5py==3.1.0
HeapDict==1.0.1
hijri-converter==2.2.3
holidays==0.10.5.2
holoviews==1.14.8
html5lib==1.0.1
httpimport==0.5.18
httplib2==0.17.4
httplib2shim==0.0.3
humanize==0.5.1
hyperopt==0.1.2
ideep4py==2.0.0.post3
idna==2.10
imageio==2.4.1
imagesize==1.3.0
imbalanced-learn==0.8.1
imblearn==0.0
imgaug==0.2.9
importlib-metadata==4.11.3
importlib-resources==5.7.1
imutils==0.5.4
inflect==2.1.0
iniconfig==1.1.1
intel-openmp==2022.1.0
intervaltree==2.1.0
ipykernel==4.10.1
ipython==5.5.0
ipython-genutils==0.2.0
ipython-sql==0.3.9
ipywidgets==7.7.0
itsdangerous==1.1.0
jax==0.3.8
jaxlib @ https://storage.googleapis.com/jax-releases/cuda11/jaxlib-0.3.7+cuda11.cudnn805-cp37-none-manylinux2014_x86_64.whl
jedi==0.18.1
jieba==0.42.1
Jinja2==2.11.3
joblib==1.1.0
jpeg4py==0.1.4
jsonschema==4.3.3
jupyter==1.0.0
jupyter-client==5.3.5
jupyter-console==5.2.0
jupyter-core==4.10.0
jupyterlab-pygments==0.2.2
jupyterlab-widgets==1.1.0
kaggle==1.5.12
kapre==0.3.7
keras==2.8.0
Keras-Preprocessing==1.1.2
keras-vis==0.4.1
kiwisolver==1.4.2
korean-lunar-calendar==0.2.1
libclang==14.0.1
librosa==0.8.1
lightgbm==2.2.3
llvmlite==0.34.0
lmdb==0.99
LunarCalendar==0.0.9
lxml==4.2.6
Markdown==3.3.6
MarkupSafe==2.0.1
matplotlib==3.2.2
matplotlib-inline==0.1.3
matplotlib-venn==0.11.7
missingno==0.5.1
mistune==0.8.4
mizani==0.6.0
mkl==2019.0
mlxtend==0.14.0
more-itertools==8.12.0
moviepy==0.2.3.5
mpmath==1.2.1
msgpack==1.0.3
multiprocess==0.70.12.2
multitasking==0.0.10
murmurhash==1.0.7
music21==5.5.0
natsort==5.5.0
nbclient==0.6.2
nbconvert==5.6.1
nbformat==5.3.0
nest-asyncio==1.5.5
netCDF4==1.5.8
networkx==2.6.3
nibabel==3.0.2
nltk==3.2.5
notebook==5.3.1
numba==0.51.2
numexpr==2.8.1
numpy==1.21.6
nvidia-ml-py3==7.352.0
oauth2client==4.1.3
oauthlib==3.2.0
okgrade==0.4.3
opencv-contrib-python==4.1.2.30
opencv-python==4.1.2.30
openpyxl==3.0.9
opt-einsum==3.3.0
osqp==0.6.2.post0
packaging==21.3
palettable==3.3.0
pandas==1.3.5
pandas-datareader==0.9.0
pandas-gbq==0.13.3
pandas-profiling==1.4.1
pandocfilters==1.5.0
panel==0.12.1
param==1.12.1
parso==0.8.3
pathlib==1.0.1
patsy==0.5.2
pep517==0.12.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==7.1.2
pip-tools==6.2.0
plac==1.1.3
plotly==5.5.0
plotnine==0.6.0
pluggy==0.7.1
pooch==1.6.0
portpicker==1.3.9
prefetch-generator==1.0.1
preshed==3.0.6
prettytable==3.2.0
progressbar2==3.38.0
prometheus-client==0.14.1
promise==2.3
prompt-toolkit==1.0.18
protobuf==3.17.3
psutil==5.4.8
psycopg2==2.7.6.1
ptyprocess==0.7.0
py==1.11.0
pyarrow==6.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycocotools==2.0.4
pycparser==2.21
pyct==0.4.8
pydata-google-auth==1.4.0
pydot==1.3.0
pydot-ng==2.0.0
pydotplus==2.0.2
PyDrive==1.3.1
pyemd==0.5.1
pyerfa==2.0.0.1
pyglet==1.5.0
Pygments==2.6.1
pygobject==3.26.1
pymc3==3.11.4
PyMeeus==0.5.11
pymongo==4.1.1
pymystem3==0.2.0
PyOpenGL==3.1.6
pyparsing==3.0.8
pyrsistent==0.18.1
pysndfile==1.3.8
PySocks==1.7.1
pystan==2.19.1.1
pytest==3.6.4
python-apt==0.0.0
python-chess==0.23.11
python-dateutil==2.8.2
python-louvain==0.16
python-slugify==6.1.2
python-utils==3.1.0
pytz==2022.1
pyviz-comms==2.2.0
PyWavelets==1.3.0
PyYAML==3.13
pyzmq==22.3.0
qdldl==0.1.5.post2
qtconsole==5.3.0
QtPy==2.1.0
regex==2019.12.20
requests==2.23.0
requests-oauthlib==1.3.1
resampy==0.2.2
rpy2==3.4.5
rsa==4.8
scikit-image==0.18.3
scikit-learn==1.0.2
scipy==1.4.1
screen-resolution-extra==0.0.0
scs==3.2.0
seaborn==0.11.2
semver==2.13.0
Send2Trash==1.8.0
setuptools-git==1.2
Shapely==1.8.1.post1
simplegeneric==0.8.1
six==1.15.0
sklearn==0.0
sklearn-pandas==1.8.0
smart-open==6.0.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
SoundFile==0.10.3.post1
soupsieve==2.3.2.post1
spacy==2.2.4
Sphinx==1.8.6
sphinxcontrib-serializinghtml==1.1.5
sphinxcontrib-websupport==1.2.4
SQLAlchemy==1.4.36
sqlparse==0.4.2
srsly==1.0.5
statsmodels==0.10.2
sympy==1.7.1
tables==3.7.0
tabulate==0.8.9
tblib==1.7.0
tenacity==8.0.1
tensorboard==2.8.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow @ file:///tensorflow-2.8.0-cp37-cp37m-linux_x86_64.whl
tensorflow-datasets==4.0.1
tensorflow-estimator==2.8.0
tensorflow-gcs-config==2.8.0
tensorflow-hub==0.12.0
tensorflow-io-gcs-filesystem==0.25.0
tensorflow-metadata==1.7.0
tensorflow-probability==0.16.0
termcolor==1.1.0
terminado==0.13.3
testpath==0.6.0
text-unidecode==1.3
textblob==0.15.3
Theano-PyMC==1.1.2
thinc==7.4.0
threadpoolctl==3.1.0
tifffile==2021.11.2
tinycss2==1.1.1
tomli==2.0.1
toolz==0.11.2
torch @ https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp37-cp37m-linux_x86_64.whl
torchaudio @ https://download.pytorch.org/whl/cu113/torchaudio-0.11.0%2Bcu113-cp37-cp37m-linux_x86_64.whl
torchsummary==1.5.1
torchtext==0.12.0
torchvision @ https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp37-cp37m-linux_x86_64.whl
tornado==5.1.1
tqdm==4.64.0
traitlets==5.1.1
tweepy==3.10.0
typeguard==2.7.1
typing-extensions==4.2.0
tzlocal==1.5.1
uritemplate==3.0.1
urllib3==1.24.3
vega-datasets==0.9.0
wasabi==0.9.1
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==1.0.1
widgetsnbextension==3.6.0
wordcloud==1.5.0
wrapt==1.14.0
xarray==0.18.2
xgboost==0.90
xkit==0.0.0
xlrd==1.1.0
xlwt==1.3.0
yellowbrick==1.4
zict==2.2.0
zipp==3.8.0
```
### OS
Google Colab
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Frequent Issues](https://pandas-profiling.ydata.ai/docs/master/rtd/pages/support.html#frequent-issues). | closed | 2022-05-13T19:00:16Z | 2022-05-16T18:06:54Z | https://github.com/ydataai/ydata-profiling/issues/981 | [
"documentation 📖"
] | adamrossnelson | 3 |
deepset-ai/haystack | machine-learning | 8,171 | Outdated documentation | Most of the examples provided in your documentation do not seem to be functioning correctly. Even on your website’s first page, under the “Quick Start” section (https://haystack.deepset.ai/overview/quick-start), there appears to be an error regarding the “PredefinedPipeline.” The line “from haystack import Pipeline, PredefinedPipeline” results in an error indicating that “PredefinedPipeline” cannot be found. Where can I find the correct and up-to-date documentation? | closed | 2024-08-08T04:13:33Z | 2024-09-07T22:52:48Z | https://github.com/deepset-ai/haystack/issues/8171 | [
"type:documentation",
"community-triage"
] | dariush-saberi | 4 |
autogluon/autogluon | data-science | 3,856 | [BUG]image classification doesn't work | Ubuntu 20.04
autogluon 1.0.0
My original train_df had column labels 'img' and 'lbl'. This code failed:
```
from autogluon.multimodal import MultiModalPredictor
predictor = MultiModalPredictor(label='lbl', problem_type='multiclass', presets='medium_quality', path='models/mq')
predictor.fit(train_data=train_df, presets='medium_quality', column_types={'img':'image_path'})
```
That code resulted in the first 3 checkpoints getting saved and no improvement thereafter, no matter which backbone or preset I used. At inference time, predict_proba assigned equal probabilities to all classes for each row. Note that I had to specify the column type, otherwise the text predictor would get used, since the img column is detected as text instead of file paths.
Tested that the Quick Start shopee example does work. Initially I thought changing the column names helped but turns out that was just reverting to a text predictor because I hadn't specified the column type. | closed | 2024-01-11T20:34:06Z | 2024-01-11T21:49:06Z | https://github.com/autogluon/autogluon/issues/3856 | [
"bug: unconfirmed",
"Needs Triage"
] | rxjx | 1 |
Avaiga/taipy | automation | 1,685 | [BUG] Investigate Azure issue | ### What would you like to share or ask?
From a user feedback:
We’re having some odd issues with Taipy App deployment. The Taipy App uses the Taipy framework and has an external connection (i.e., Azure Cosmos).
1. Create WebApp and Deploy Taipy App using Azure CLI
a. Create WebApp resource and Deploy Taipy App ‘taipyapp2-DEV’ using the command ‘az webapp up’.
b. Results: OK. The deployment succeeds and the webapp runs without error.
2. Deploying a Taipy App using Azure CLI to a pre-created WebApp resource.
a. Deploy to ‘taipyapp-DEV’. (Note this is the WebApp I asked you to create yesterday. I assume the WebApp was created via Azure Portal)
b. The Azure CLI command ‘az web app up’ (the same as 1) is used to deploy, and we specify the name of the WebApp to deploy to.
c. Results: Fails during deployment because resource not found. Error states that the WebApp resource cannot be found using Azure CLI ‘az webapp up’ command. It is odd because I can list WebApp via the ‘az webapp list’ command.
3. Deploying a Taipy App using Azure CLI to a pre-created WebApp
a. Deploy to ‘webapp-DEV’. Note this was created a long time ago. I assume the WebApp was created via Azure Portal
b. Azure CLI command ‘az webapp up’ (same as 1) is used to deploy and we specify the name of the WebApp to deploy to.
c. Results: Fails during deployment with a build failure.
4. Deploying a Taipy App using DevOps pipeline to a pre-created WebApp
a. Deploy to ‘webapp-DEV’. Note this was created a long time ago and the deployment uses the build and release pipelines that you set up for us.
b. Results: Build / Deploy succeeds but App throw ‘Monkey Patch Error’ (the one I showed you before). This is an odd error because the Deployment using 1 above uses the exact same code, requirements.txt file, etc. so the only difference is the deployment method and the way the WebApp was created. Likely we need to look at the build and deploy script too.
So, we think it’s a combination of two issues:
- There is something different about the App created via ‘az webapp up’ command and the one’s created separately. On the surface, I didn’t see any major differences.
- There is some adjustment needed for the build and/or deploy script to match what ‘az webapp up’ is doing.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | open | 2024-08-20T10:39:43Z | 2025-02-07T13:33:25Z | https://github.com/Avaiga/taipy/issues/1685 | [
"🖧 Devops",
"💥Malfunction",
"🆘 Help wanted",
"🟧 Priority: High"
] | FlorianJacta | 0 |
MentatInnovations/datastream.io | jupyter | 33 | Use correct library versions in requirements.txt | Datastream.io is not working because some of the library dependencies are not set to the correct version (tornado and elasticsearch in my case).
Here is the pip freeze (python3.5) of the fully working datastream.io:
```
bokeh==1.3.0
dateparser==0.7.1
-e git+https://github.com/MentatInnovations/datastream.io@a243b89ec3c4e06473b5004c498c472ffd37ead2#egg=dsio
elasticsearch==5.5.3
Jinja2==2.10.1
joblib==0.13.2
kibana-dashboard-api==0.1.2
MarkupSafe==1.1.1
numpy==1.17.0
packaging==19.0
pandas==0.24.2
Pillow==6.1.0
pyparsing==2.4.1.1
python-dateutil==2.8.0
pytz==2019.1
PyYAML==5.1.1
regex==2019.6.8
scikit-learn==0.21.2
scipy==1.3.0
six==1.12.0
tornado==4.5.3
tzlocal==2.0.0
urllib3==1.25.3
```
 | open | 2019-07-31T07:21:04Z | 2019-07-31T07:21:04Z | https://github.com/MentatInnovations/datastream.io/issues/33 | [] | Aid91 | 0 |
OpenInterpreter/open-interpreter | python | 616 | azure openai api version support in yaml config file | ### Is your feature request related to a problem? Please describe.
Currently using open-interpreter version 0.1.7 together with azure openai, I need to specify the api_version as an environment
variable in my shell configuration file, e.g:
export: AZURE_API_VERSION=2023-08-01-preview
### Describe the solution you'd like
I would like to be able to specify the api version in open-interpreter's config.yaml just like the api_key and api_base.
An argument like api_version would be appreciated.
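Something like this in the config file is what I have in mind (a hypothetical sketch — the key names besides api_key/api_base are assumptions, not the actual schema):
```yaml
model: azure/<deployment-name>
api_key: <AZURE_API_KEY>
api_base: https://<resource>.openai.azure.com
api_version: 2023-08-01-preview   # the new field being requested
```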
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2023-10-10T08:55:20Z | 2024-03-18T19:51:13Z | https://github.com/OpenInterpreter/open-interpreter/issues/616 | [
"Enhancement",
"Good First issue"
] | rasmusstrong | 2 |
kizniche/Mycodo | automation | 652 | Internet not detected | ## Mycodo Issue Report:
- Specific Mycodo Version: 7.4.2
#### Problem Description
Internet is not detected even though the internet is accessible.

Eth0 connection is straight to a laptop to interface with Mycodo.
Wlan0 connection is the one with access to the internet.
### Errors


### Additional Notes
Routing table has Wlan0 as the main gateway.

| closed | 2019-04-25T19:46:15Z | 2019-06-07T19:13:25Z | https://github.com/kizniche/Mycodo/issues/652 | [] | pigelb | 3 |
horovod/horovod | deep-learning | 3,866 | How to create a tensor variable on the chief worker? | I need a scalar variable to count something. In parameter server mode, I created it on the first ps node and all the workers can run `add_op` to update it. It works fine.
```python
with tf.device('/job:ps/task:0/cpu:0'):
var_for_count = tf.get_variable('count_variable', (), tf.int32, initializer=tf.zeros_initializer)
add_op = var_for_count.assign_add(1, use_locking=True)
```
In Horovod mode, there are only worker nodes. So I created the scalar variable on the chief worker only and expected all the workers to also be able to use `add_op` to update it, like this:
```python
with tf.device('/job:worker/task:0/cpu:0'):
var_for_count = tf.get_variable('count_variable', (), tf.int32, initializer=tf.zeros_initializer)
add_op = var_for_count.assign_add(1, use_locking=True)
```
However, it caused an error.
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation count_variable: node count_variable was explicitly assigned to /job:worker/task:0/device:CPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:GPU:0]
``` | closed | 2023-03-20T10:46:39Z | 2023-03-20T18:01:01Z | https://github.com/horovod/horovod/issues/3866 | [] | formath | 0 |
PokemonGoF/PokemonGo-Bot | automation | 5,816 | How to run bot with mutliple users ? | Hello,
I installed both the "Bot" and the "Web UI", and they work pretty well now. Since the top-right corner of [my web page](http://imgur.com/UhKua95) shows "Bots", I wonder if we can run more bots and display them on the same page? | closed | 2016-11-15T17:49:25Z | 2016-11-16T08:59:39Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5816 | [] | jamiekuo | 3 |
sgl-project/sglang | pytorch | 3,769 | sgl-kernel for aarch64 | Hello,
Thank you very much for your great work on SGLang!
I was wondering if it would be possible to release wheels for `sgl-kernel` for aarch64 (the one on pypi right now only supports x86_64). Alternatively, it would be very helpful if you could provide instructions on how to build `sgl-kernel` from source as well! | open | 2025-02-21T17:19:00Z | 2025-03-12T16:10:08Z | https://github.com/sgl-project/sglang/issues/3769 | [
"help wanted"
] | GeorgiosSmyrnis | 2 |
tfranzel/drf-spectacular | rest-api | 1,251 | Wrong Type for FileField based fields in POST type bodies | **Describe the bug**
If I have a model with a models.FileField field, the schema type of this field is a URI string, since a GET request would include a URL to the image, which is right. But when doing POST requests, you don't provide a URL but a file.
**To Reproduce**
It would be most helpful to provide a small snippet to see how the bug was provoked.
```python
class Bar(models.Model):
Image = models.FileField(upload_to="events/", null=True, blank=True)
... Boilerplate with serializers.HyperlinkedModelSerializer Router
```
The generated schema then contains
```yml
post:
operationId: bars_create
tags:
- bars
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/Bar'
```
and
`#/components/schemas/Bar` says:
```yml
Bar:
type: object
properties:
image:
type: string
format: uri
nullable: true
```
**Expected behavior**
If I understand https://swagger.io/docs/specification/describing-request-body/file-upload/ correctly, it should be `format: binary` | closed | 2024-06-05T16:44:11Z | 2024-06-08T09:47:44Z | https://github.com/tfranzel/drf-spectacular/issues/1251 | [] | autoantwort | 2 |
OpenInterpreter/open-interpreter | python | 611 | download language model | ### Is your feature request related to a problem? Please describe.
I need to download a language model when using local mode, but there is not enough disk space available on the default drive.
How can I change the download path?
Download to `C:\Users\z\AppData\Local\Open Interpreter\Open Interpreter\models`?
[?] (Y/n): y
You do not have enough disk space available to download this model.
Open Interpreter will require approval before running code.
### Describe the solution you'd like
How can I change the download path?
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2023-10-09T20:21:50Z | 2023-10-10T00:14:08Z | https://github.com/OpenInterpreter/open-interpreter/issues/611 | [
"Enhancement"
] | While941 | 3 |
jofpin/trape | flask | 396 | Validation of User Input for Port and URL (Lines 138, 139) | https://github.com/jofpin/trape/blob/6baae245691997742a51979767254d7da580eadd/core/trape.py#L138C4-L138C37
**Potential Issue:** User inputs for the `port` and `URL` fields are currently not validated, which could lead to errors or potential security risks.
**Suggestion:** Add validation checks for port ranges and URL format. This ensures input safety and reduces the likelihood of invalid configurations.
**Code Suggestion:**
```python
try:
port = int(options.port)
if port < 1 or port > 65535:
raise ValueError("Port out of range")
except ValueError as e:
print(f"Invalid port: {e}")
sys.exit(1)
if not options.url.startswith(('http://', 'https://')):
print("Invalid URL format. URL must start with 'http://' or 'https://'")
sys.exit(1)
```
**Explanation:** This input validation strengthens security and ensures the application receives expected input formats. | open | 2024-11-06T22:49:25Z | 2024-11-06T22:49:25Z | https://github.com/jofpin/trape/issues/396 | [] | nitish-yaddala | 0 |
ploomber/ploomber | jupyter | 418 | Building online APIs from pipelines with scripts/notebooks | we only support exporting online APIs from pipelines with Python functions | closed | 2021-11-12T16:37:24Z | 2022-08-19T19:26:38Z | https://github.com/ploomber/ploomber/issues/418 | [] | edublancas | 0 |
sebp/scikit-survival | scikit-learn | 94 | What is the origin of time for predict function? | Based on the docs:
> If samples are ordered according to their predicted risk score (in ascending order), one obtains the sequence of events, as predicted by the model. This is the return value of the predict() method of all survival models in scikit-survival.
In my use case I need to predict the order in which individuals will die (in absolute/global time frame) given that they survived till the end (last day) of study.
I interpret predicted risk scores (using `predict()` function) as relative time to events.
However, it's not clear to me what is the origin of time for them?
Is it the time of birth for each individual (which would require adjusting risk scores in my use case), or is it the end of study time? | closed | 2020-02-20T13:45:01Z | 2021-04-01T09:58:19Z | https://github.com/sebp/scikit-survival/issues/94 | [
"question"
] | mateuszbuda | 4 |
home-assistant/core | python | 141,038 | Fritzboxtool: wlan with Fritzbox 5690pro | ### The problem
Some actions don't work correctly:
Switching the 5 GHz Wi-Fi on or off applies to the 6 GHz band instead.
Actions for the 5 GHz band are missing. This box supports the 2.4, 5 and 6 GHz bands, but you can only switch two of them.
Further feature request: I would like to limit the Wi-Fi to 50% during the night, but there is no action for that.
### What version of Home Assistant Core has the issue?
2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Fritzboxtools
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
alias: Nachtschaltung Fritzbox
description: ""
triggers:
- trigger: time
at: "23:00:00"
- type: turned_off
device_id: 0f95a9f999b2383f933599333e279e5a
entity_id: 25f533c2cc809888e9e3964d51b79b6d
domain: remote
trigger: device
conditions:
- condition: time
after: "23:00:00"
before: "04:59:00"
- condition: device
type: is_off
device_id: 0f95a9f999b2383f933599333e279e5a
entity_id: 25f533c2cc809888e9e3964d51b79b6d
domain: remote
actions:
- type: turn_off
device_id: 1963e78c4301cb8c1160fdb4fc1c8558
entity_id: a41d918ab06b7cb7e3584c8e824cf01a
domain: switch
- type: turn_off
device_id: 1963e78c4301cb8c1160fdb4fc1c8558
entity_id: dba2c72a9586466a2a53617a2a0157f8
domain: switch
- wait_for_trigger:
- trigger: time
at: "05:00:00"
continue_on_timeout: false
- type: turn_on
device_id: 1963e78c4301cb8c1160fdb4fc1c8558
entity_id: a41d918ab06b7cb7e3584c8e824cf01a
domain: switch
- type: turn_on
device_id: 1963e78c4301cb8c1160fdb4fc1c8558
entity_id: dba2c72a9586466a2a53617a2a0157f8
domain: switch
mode: single
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-21T04:46:11Z | 2025-03-23T10:57:22Z | https://github.com/home-assistant/core/issues/141038 | [
"integration: fritz",
"feature-request"
] | fif81 | 4 |
tartiflette/tartiflette | graphql | 209 | Exception raised in arguments coercer doesn't returns expected error | * **Tartiflette version:** 0.8.3
* **Python version:** 3.7.1
* **Executed in docker:** No
* **Is a regression from a previous versions?** No
SDL example:
```graphql
directive @validateLimit(
limit: Int!
) on ARGUMENT_DEFINITION | INPUT_FIELD_DEFINITION
type Query {
aList(
nbItems: Int! @validateLimit(limit: 2)
): [String!]
}
```
Python:
```python
class LimitReachedException(Exception):
def coerce_value(self, *_args, path=None, locations=None, **_kwargs):
computed_locations = []
try:
for location in locations:
computed_locations.append(location.collect_value())
except AttributeError:
pass
except TypeError:
pass
return {
"message": "Limit reached",
"path": path,
"locations": computed_locations,
"type": "bad_request",
}
@Directive("validateLimit", schema_name="test_issue209")
class ValidateLimitDirective(CommonDirective):
@staticmethod
async def on_argument_execution(
directive_args, next_directive, argument_definition, args, ctx, info
):
value = await next_directive(argument_definition, args, ctx, info)
if value > directive_args["limit"]:
raise LimitReachedException("Limit has been reached")
return value
@Resolver("Query.aList", schema_name="test_issue209")
async def resolver_query_a_list(parent, args, ctx, info):
nb_items = args["nbItems"]
return [f"{nb_items}.{index}" for index in range(nb_items)]
```
Query:
```graphql
query {
aList(nbItems: 3)
} == {
"data": {
"aList": null
},
"errors": [
{
"message": "Limit has been reached",
"path": [
"aList"
],
"locations": [
{
"line": 3,
"column": 15
}
]
}
]
}
```
Expected:
```json
{
"data": {
"aList": null
},
"errors": [
{
"message": "Limit reached",
"path": [
"aList"
],
"locations": [
{
"line": 3,
"column": 15
}
],
"type": "bad_request"
}
]
}
``` | closed | 2019-04-16T12:39:32Z | 2019-04-16T12:52:58Z | https://github.com/tartiflette/tartiflette/issues/209 | [
"bug"
] | Maximilien-R | 0 |
sanic-org/sanic | asyncio | 2,664 | Dark mode / custom CSS for Sanic's own output | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
Sanic currently creates error pages (and, assuming that PR #2662 is merged, will also produce file listings) with a bright white background.
The Internet is moving to dark mode, with more or less all sites implementing a dark background via `@media (prefers-color-scheme: dark)`, if not as the only option. Bright white hurts the eyes on modern screens, which often output 200 nits of light for it, and many people are used to working in all-dark-background environments, coders all too often in all-dark rooms.
### Describe the solution you'd like
Sanic generates its output from Python source code where the CSS is included as a string. Adding the media selector for automatic dark mode would be simple, or if minimalism is preferred, Sanic could even only implement a dark mode (but it is more polite to implement both so that users' browsers choose the one matching their desktop preference).
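For illustration, a minimal sketch of what this could look like in the embedded CSS string (the variable name and colour values below are placeholders, not Sanic's actual styles):

```python
# Hypothetical sketch: a dark-mode override appended to the CSS string that
# Sanic embeds in its Python source (name and colours are placeholders).
ERROR_PAGE_CSS = """
body { background: #fff; color: #111; }
@media (prefers-color-scheme: dark) {
    body { background: #111; color: #ddd; }
}
"""
```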
However, there is legitimate use for applications and enterprises using Sanic wanting to customize the pages further such that even the errors at least to a degree agree with their general visual style. This would be far harder to implement. Fetching the CSS as an extra file on an error page would be a bad idea, so basically Sanic would instead need to load those strings from some external source (e.g. during server startup).
I am opening this issue for discussion on the matter, which would serve as a basis for a PR to come.
### Additional context
_No response_ | closed | 2023-01-25T13:22:48Z | 2023-03-21T18:53:34Z | https://github.com/sanic-org/sanic/issues/2664 | [
"feature request"
] | Tronic | 2 |
litestar-org/litestar | pydantic | 3,646 | Bug: order of types in openapi spec is not consistent in json rendering | ### Description
We are seeing the order of types change in OpenAPI generation, which makes comparing golden versions of the OpenAPI spec problematic. I think the specific problem we are seeing comes from https://github.com/litestar-org/litestar/blob/ffaf5616b19f6f0f4128209c8b49dbcb41568aa2/litestar/_openapi/schema_generation/schema.py#L160 where we use the `set` operation to uniquify the list of types. The order doesn't matter to the correctness of the OpenAPI spec, so perhaps the responsibility for ensuring a deterministic spec file could also come from the serializer, but either way it would be helpful if we could always render the same OpenAPI spec the same way.
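For illustration, a small sketch of the non-determinism involved and one possible fix (not the actual Litestar code, just the idea):

```python
# Sketch only: str hashing is randomized per process, so iterating over a set of
# type names can yield a different order on every run of the spec generator.
types = ["string", "null", "integer", "string"]

unique = list(set(types))      # order may differ between runs
stable = sorted(set(types))    # deterministic order -> reproducible spec output

print(unique, stable)
```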
### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.9.1
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-07-25T13:57:31Z | 2025-03-20T15:54:50Z | https://github.com/litestar-org/litestar/issues/3646 | [
"Bug :bug:",
"OpenAPI"
] | atom-andrew | 2 |
mirumee/ariadne | api | 478 | Update project docs to point to Github discussions instead of Spectrum | Starting August 2021, Spectrum will become read-only and its spirit lives on as GitHub Discussions.
The readme, issue template and website should be updated to direct users to the discussions page, and an announcement about this should be made on Spectrum. | closed | 2021-03-01T22:59:35Z | 2021-03-19T13:20:11Z | https://github.com/mirumee/ariadne/issues/478 | [
"docs",
"meta"
] | rafalp | 0 |
sloria/TextBlob | nlp | 267 | Center word with negative polarity | Hello,
Why does the word "center" have negative polarity? | open | 2019-05-22T17:08:34Z | 2019-05-22T17:08:34Z | https://github.com/sloria/TextBlob/issues/267 | [] | jahnavipatel2 | 0 |
sqlalchemy/alembic | sqlalchemy | 634 | Server default not matching for `func.now()` with SQLite | My model has
```python
created = Column(DateTime, server_default=func.now())
```
And every time I invoke `autogenerate` with `compare_server_default=True` on sqlite, it generates the same migrations that do nothing:
```python
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table('visitors', schema=None) as batch_op:
batch_op.alter_column('created',
existing_type=sa.DATETIME(),
server_default=sa.text('(CURRENT_TIMESTAMP)'),
existing_nullable=True)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table('visitors', schema=None) as batch_op:
batch_op.alter_column('created',
existing_type=sa.DATETIME(),
server_default=sa.text('(CURRENT_TIMESTAMP)'),
existing_nullable=True)
# ### end Alembic commands ###
``` | closed | 2019-12-10T12:57:40Z | 2020-03-11T16:45:52Z | https://github.com/sqlalchemy/alembic/issues/634 | [
"question"
] | tdaff | 10 |
deepinsight/insightface | pytorch | 1,736 | CUDA version | When importing insightface I keep getting a
`libcudart.so.11.0: cannot open shared object file: No such file or directory` error.
I think this is because of the CUDA version.
So my question is: which version of insightface works under CUDA 10? | open | 2021-08-30T16:27:19Z | 2021-08-31T00:40:45Z | https://github.com/deepinsight/insightface/issues/1736 | [] | dlwjddms | 2 |
mckinsey/vizro | plotly | 770 | How to custom action for selector feature | ### Question
Dear all,
I hope to use click data on the scatter plot to control the input of another figure. I can use the control component to achieve the function of control the figure. I am also able to extract clickData from the action. However, not sure how to pass the information in the custom action to another figure.
This works
```
vm.Parameter(
targets=["my_bio_figure.material"],
selector=vm.Dropdown(options=list(df_perovskite.material), value="YTcO3", multi=False),
)
```
This doesn't work
```
actions=[vm.Action(function=select_interaction(), inputs=["scatter_chart.clickData"],outputs=["my_bio_figure.material"])]
```
The full code
```
import vizro.models as vm
import vizro.plotly.express as px
from vizro import Vizro
from vizro.tables import dash_ag_grid
import pandas as pd
from vizro.models.types import capture
import dash_bio as dashbio
import dash_bio.utils.ngl_parser as ngl_parser
import ase.db
from pymatgen.io.ase import AseAtomsAdaptor
import nvcs
from pymatgen.core import Structure
import nglview
from vizro.actions import filter_interaction
df_perovskite = pd.read_csv("cubic.csv")
structure_db = ase.db.connect("perovskites.db")
data_path = "https://raw.githubusercontent.com/plotly/datasets/master/Dash_Bio/Molecular/"
@capture("figure")
def custom_bio_molecule_figure(data_frame, material):
atoms = structure_db.get_atoms(material)
pmg_structure = AseAtomsAdaptor().get_structure(atoms)
sites_displayed = nvcs.structure._get_displayed(pmg_structure)
pmg_structure_displayed = Structure.from_sites([si.site for si in sites_displayed])
atoms_displayed = AseAtomsAdaptor().get_atoms(pmg_structure_displayed)
ngl_ase_adaptor = nglview.ASEStructure(atoms_displayed)
#data_list = [ngl_ase_adaptor.get_structure_string()]
content = ngl_ase_adaptor.get_structure_string()
data = {
'filename': material,
'ext': 'pdb',
'selectedValue': '1',
'chain': 'ALL',
'aaRange': 'ALL',
'chosen': {'atoms':'', 'residues':''},
'color': '#e41a1c',
'config': {'type': 'text/plain', 'input': content},
'resetView': True,
'uploaded': True
}
data_list = [data]
print(data_list)
molstyles_dict = {
"representations": ["ball+stick", 'unitcell'],
}
return dashbio.NglMoleculeViewer(
id="ngl_molecule_viewer_id",
data=data_list,
molStyles=molstyles_dict,
)
@capture("action")
def select_interaction(clickData):
"""Returns the input value."""
material = clickData['points'][0]['customdata'][0]
print(material)
#print(clickData["custom_data"])
#return clickData["custom_data"]
return material
page = vm.Page(
title="Perovskites",
layout=vm.Layout(grid=[[0, 0, 1],
[0, 0, 2]]),
components=[vm.AgGrid(figure=dash_ag_grid(data_frame=df_perovskite)),
vm.Graph(
id = "scatter_chart",
figure = px.scatter(df_perovskite, x="lattice_constant (AA)", y="bulk_modulus (eV/AA^3)", custom_data = ["material"]),
actions=[vm.Action(function=select_interaction(), inputs=["scatter_chart.clickData"],outputs=["my_bio_figure.material"])]
),
vm.Figure(id="my_bio_figure", figure=custom_bio_molecule_figure(data_frame=pd.DataFrame(), material="YTcO3")),
],
controls=[
vm.Parameter(
targets=["scatter_chart.x"],
selector=vm.Dropdown(options=list(df_perovskite.columns), value="lattice_constant (AA)", multi=False),
),
vm.Parameter(
targets=["scatter_chart.y"],
selector=vm.Dropdown(options=list(df_perovskite.columns), value="bulk_modulus (eV/AA^3)", multi=False),
),
vm.Parameter(
targets=["my_bio_figure.material"],
selector=vm.Dropdown(options=list(df_perovskite.material), value="YTcO3", multi=False),
),
]
)
dashboard = vm.Dashboard(pages=[page], theme="vizro_light")
if __name__ == "__main__":
Vizro().build(dashboard).run()
```
Thank you very much for your help.
### Code/Examples
_No response_
### Which package?
vizro
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-10-01T23:02:16Z | 2024-11-26T08:38:58Z | https://github.com/mckinsey/vizro/issues/770 | [
"General Question :question:"
] | yaoyi92 | 6 |
nschloe/tikzplotlib | matplotlib | 261 | Percent in Axis label produces hard to read error. need to escape '%' or Warning? | Regarding #50
When assigning an axis label, I was describing a percentage and used the percent symbol. Everything was produced fine in Python and went as expected.
However, I could not figure out why it kept crashing in LaTeX a week later. It took me around an hour and a half, as the LaTeX error output was not helpful at all.
Here's reproducible code:
```python
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib2tikz import save as tikz_save
#RANDOM DATA
des = [1,2,3,4]
perc_below = [1,2,3,4]
df3 = pd.DataFrame({'one':perc_below, 'two':des})
df3.plot(fontsize=20, kind='line',y='one',x='two')
plt.xlabel('example',fontsize=20)
# ISSUE LIES BELOW
plt.ylabel('example %',fontsize=20)
tikz_save('example.tex')
```
Tex output is as expected...
In my opinion, this should be escaped, as the workaround is adding a backslash when assigning the label. However, this would lead to different behavior from matplotlib. (I also export to PNGs, which would end up with a backslash in them.)
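For reference, the manual escaping workaround looks roughly like this:

```python
import matplotlib.pyplot as plt

plt.ylabel(r'example \%', fontsize=20)  # compiles fine in LaTeX, but PNG export shows a literal backslash
```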
I understand if you do not agree. In that case, a warning when the graph is produced would be great, as again, the LaTeX error was completely unhelpful.
I love the library, BTW! | closed | 2018-12-08T01:16:36Z | 2019-03-17T13:27:18Z | https://github.com/nschloe/tikzplotlib/issues/261 | [] | RichardLettich | 4 |
PeterL1n/BackgroundMattingV2 | computer-vision | 207 | How can the V2 ResNet model be converted to an ONNX model? | I have been trying the conversion for a long time, but it never goes through. | open | 2024-02-22T06:23:55Z | 2024-02-22T06:23:55Z | https://github.com/PeterL1n/BackgroundMattingV2/issues/207 | [] | cnywt | 0 |
sinaptik-ai/pandas-ai | pandas | 961 | Add Faker into white list or add ability to expand white list. | ### 🚀 The feature
Add Faker to the whitelist, or add the ability to expand the whitelist.
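For illustration, something along these lines would cover our case, assuming `df` and `llm` are already set up (the whitelist option name below is only a sketch of what we would like, not necessarily an existing parameter):

```python
from pandasai import SmartDataframe

# Hypothetical: let the user opt additional libraries into the import whitelist
sdf = SmartDataframe(df, config={
    "llm": llm,
    "custom_whitelisted_dependencies": ["faker"],  # desired option, name is illustrative
})
sdf.chat("Generate me 10 new synthetic rows based on provided examples")
```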
### Motivation, pitch
I've asked question "Generate me 10 new synthetic rows based on provided examples" and got error:
Generated code includes import of faker which is not in whitelist.
Log:
```
Question: Generate me 10 new synthetic rows based on provided examples
Running PandasAI with openai LLM...
Prompt ID: eb749e24-762f-4807-828c-a52f772666be
<class 'pandasai.helpers.output_types._output_types.DefaultOutputType'> is going to be used.
<class 'pandasai.helpers.viz_library_types._viz_library_types.NoVizLibraryType'> is going to be used.
Executing Step 0: CacheLookup
Executing Step 1: PromptGeneration
Using prompt: <dataframe>
dfs[0]:891x12
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
2,1,1,"Cumings, Mrs. John Bra...",female,38.0,1,0,PC 17599,71.2833,C85,C
3,1,3,"Heikkinen, Miss. Laina...",female,26.0,0,0,STON/O2. 3101282,7.925,,S
1,0,3,"Braund, Mr. Owen Harri...",male,22.0,1,0,A/5 21171,7.25,,S
</dataframe>
Update this initial code:
python
# TODO: import the required dependencies
import pandas as pd
# Write code here
# Declare result var: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" }
Q: Generate me 10 new synthetic rows based on provided examples
Variable `dfs: list[pd.DataFrame]` is already declared.
At the end, declare "result" variable as a dictionary of type and value.
Generate python code and return full updated code:
Executing Step 2: CodeGenerator
Code generated:
# TODO: import the required dependencies
import pandas as pd
from faker import Faker
# Write code here
fake = Faker()
# Generate 10 new synthetic rows based on provided examples
new_rows = []
for _ in range(10):
new_row = {
"PassengerId": fake.random_int(min=4, max=1000),
"Survived": fake.random_int(min=0, max=1),
"Pclass": fake.random_int(min=1, max=3),
"Name": fake.name(),
"Sex": fake.random_element(elements=["male", "female"]),
"Age": fake.random_int(min=1, max=80),
"SibSp": fake.random_int(min=0, max=4),
"Parch": fake.random_int(min=0, max=2),
"Ticket": fake.random_int(min=10000, max=99999),
"Fare": round(fake.random_number(digits=4, fix_len=True), 2),
"Cabin": fake.random_element(elements=[None, "A1", "B2", "C3"]),
"Embarked": fake.random_element(elements=["S", "C", "Q"])
}
new_rows.append(new_row)
# Create a new DataFrame with the original data plus the 10 new synthetic rows
new_df = pd.concat([dfs[0], pd.DataFrame(new_rows)], ignore_index=True)
# Declare result var
result = {"type": "dataframe", "value": new_df}
Executing Step 3: CachePopulation
Executing Step 4: CodeExecution
Failed to execute code with a correction framework [retry number: 1]
Failed with error: Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 186, in execute_code
code_to_run = self._clean_code(code, context)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 375, in _clean_code
self._check_imports(node)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 456, in _check_imports
raise BadImportError(library)
pandasai.exceptions.BadImportError: Generated code includes import of faker which is not in whitelist.
. Retrying
Using prompt: <dataframe>
dfs[0]:891x12
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
2,1,1,"Cumings, Mrs. John Bra...",female,38.0,1,0,PC 17599,71.2833,C85,C
3,1,3,"Heikkinen, Miss. Laina...",female,26.0,0,0,STON/O2. 3101282,7.925,,S
1,0,3,"Braund, Mr. Owen Harri...",male,22.0,1,0,A/5 21171,7.25,,S
</dataframe>
The user asked the following question:
Q: Generate me 10 new synthetic rows based on provided examples
You generated this python code:
# TODO: import the required dependencies
import pandas as pd
from faker import Faker
# Write code here
fake = Faker()
# Generate 10 new synthetic rows based on provided examples
new_rows = []
for _ in range(10):
new_row = {
"PassengerId": fake.random_int(min=4, max=1000),
"Survived": fake.random_int(min=0, max=1),
"Pclass": fake.random_int(min=1, max=3),
"Name": fake.name(),
"Sex": fake.random_element(elements=["male", "female"]),
"Age": fake.random_int(min=1, max=80),
"SibSp": fake.random_int(min=0, max=4),
"Parch": fake.random_int(min=0, max=2),
"Ticket": fake.random_int(min=10000, max=99999),
"Fare": round(fake.random_number(digits=4, fix_len=True), 2),
"Cabin": fake.random_element(elements=[None, "A1", "B2", "C3"]),
"Embarked": fake.random_element(elements=["S", "C", "Q"])
}
new_rows.append(new_row)
# Create a new DataFrame with the original data plus the 10 new synthetic rows
new_df = pd.concat([dfs[0], pd.DataFrame(new_rows)], ignore_index=True)
# Declare result var
result = {"type": "dataframe", "value": new_df}
It fails with the following error:
Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 186, in execute_code
code_to_run = self._clean_code(code, context)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 375, in _clean_code
self._check_imports(node)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 456, in _check_imports
raise BadImportError(library)
pandasai.exceptions.BadImportError: Generated code includes import of faker which is not in whitelist.
Fix the python code above and return the new python code:
Failed to execute code with a correction framework [retry number: 2]
Failed with error: Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 186, in execute_code
code_to_run = self._clean_code(code, context)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 375, in _clean_code
self._check_imports(node)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 456, in _check_imports
raise BadImportError(library)
pandasai.exceptions.BadImportError: Generated code includes import of faker which is not in whitelist.
. Retrying
Using prompt: <dataframe>
dfs[0]:891x12
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
2,1,1,"Cumings, Mrs. John Bra...",female,38.0,1,0,PC 17599,71.2833,C85,C
3,1,3,"Heikkinen, Miss. Laina...",female,26.0,0,0,STON/O2. 3101282,7.925,,S
1,0,3,"Braund, Mr. Owen Harri...",male,22.0,1,0,A/5 21171,7.25,,S
</dataframe>
The user asked the following question:
Q: Generate me 10 new synthetic rows based on provided examples
You generated this python code:
# TODO: import the required dependencies
import pandas as pd
from faker import Faker
# Write code here
fake = Faker()
# Generate 10 new synthetic rows based on provided examples
new_rows = []
for _ in range(10):
new_row = {
"PassengerId": fake.random_int(min=4, max=1000),
"Survived": fake.random_int(min=0, max=1),
"Pclass": fake.random_int(min=1, max=3),
"Name": fake.name(),
"Sex": fake.random_element(elements=["male", "female"]),
"Age": fake.random_int(min=1, max=80),
"SibSp": fake.random_int(min=0, max=4),
"Parch": fake.random_int(min=0, max=2),
"Ticket": fake.random_int(min=10000, max=99999),
"Fare": round(fake.random_number(digits=4, fix_len=True), 2),
"Cabin": fake.random_element(elements=[None, "A1", "B2", "C3"]),
"Embarked": fake.random_element(elements=["S", "C", "Q"])
}
new_rows.append(new_row)
# Create a new DataFrame with the original data plus the 10 new synthetic rows
new_df = pd.concat([dfs[0], pd.DataFrame(new_rows)], ignore_index=True)
# Declare result var
result = {"type": "dataframe", "value": new_df}
It fails with the following error:
Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 186, in execute_code
code_to_run = self._clean_code(code, context)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 375, in _clean_code
self._check_imports(node)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 456, in _check_imports
raise BadImportError(library)
pandasai.exceptions.BadImportError: Generated code includes import of faker which is not in whitelist.
Fix the python code above and return the new python code:
Pipeline failed on step 4: Generated code includes import of faker which is not in whitelist.
```
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-02-26T19:04:48Z | 2024-03-03T13:56:58Z | https://github.com/sinaptik-ai/pandas-ai/issues/961 | [] | PavelAgurov | 2 |
ageitgey/face_recognition | python | 735 | Does Zeropadding requires a memory in neural network? | Hello guys, I want to know: does zero padding require memory in a neural network?
I read the **VGGNet in detail** section at http://cs231n.github.io/convolutional-networks/
But it doesn't cover zero padding.
Anybody? | closed | 2019-02-04T22:20:24Z | 2019-02-04T22:26:46Z | https://github.com/ageitgey/face_recognition/issues/735 | [] | flyingduck92 | 0 |
Teemu/pytest-sugar | pytest | 222 | invalid reports when used with `-rA`, repeats 3 times with instafail | pytest-sugar reports strange things when used with `-rA`
#### Command used to run pytest
````pytest test_example.py````
#### Test file
````python
def test_example():
print(1)
pass
````
#### Output
````
Test session starts (platform: linux, Python 3.8.5, pytest 6.2.2, pytest-sugar 0.9.4)
rootdir: /home/stas/hf/transformers-stas
plugins: forked-1.3.0, xdist-2.2.0, sugar-0.9.4, instafail-0.4.2
collecting ...
test_example.py ✓ 100% ██████████
Results (0.02s):
1 passed
````
### with -rA
```
$ pytest -rA test_example.py
Test session starts (platform: linux, Python 3.8.5, pytest 6.2.2, pytest-sugar 0.9.4)
rootdir: /home/stas/hf/transformers-stas
plugins: forked-1.3.0, xdist-2.2.0, sugar-0.9.4, instafail-0.4.2
collecting ...
test_example.py ✓ 100% ██████████
====================================================================== PASSES =======================================================================
___________________________________________________________________ test_example ____________________________________________________________________
--------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------
1
___________________________________________________________________ test_example ____________________________________________________________________
--------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------
1
============================================================== short test summary info ==============================================================
PASSED test_example.py::test_example
PASSED test_example.py::test_example
PASSED test_example.py::test_example
Results (0.02s):
1 passed
```
2 problems:
1. stdout is dumped **twice**
2. PASSED report is printed **three times**
### If combined with `instafail`
It now also reports the test 3 times! ✓✓✓
```
pytest -rA test_example.py --instafail
====================================================================== test session starts =======================================================================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /home/stas/hf/transformers-stas
plugins: forked-1.3.0, xdist-2.2.0, sugar-0.9.4, instafail-0.4.2
collected 1 item
test_example.py ✓✓✓ [100%]
============================================================================= PASSES =============================================================================
__________________________________________________________________________ test_example __________________________________________________________________________
---------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------
1
__________________________________________________________________________ test_example __________________________________________________________________________
---------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------
1
==================================================================== short test summary info =====================================================================
PASSED test_example.py::test_example
PASSED test_example.py::test_example
PASSED test_example.py::test_example
======================================================================= 3 passed in 0.06s ========================================================================
```
W/o `-rA`
```
pytest test_example.py --instafail
====================================================================== test session starts =======================================================================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /home/stas/hf/transformers-stas
plugins: forked-1.3.0, xdist-2.2.0, sugar-0.9.4, instafail-0.4.2
collected 1 item
test_example.py ✓✓✓ [100%]
======================================================================= 3 passed in 0.01s ========================================================================
```
Thanks. | closed | 2021-04-05T01:52:56Z | 2022-11-14T16:45:11Z | https://github.com/Teemu/pytest-sugar/issues/222 | [] | stas00 | 4 |
RomelTorres/alpha_vantage | pandas | 89 | Date range for intraday data | Hello everyone! I wondered if it is possible to change the date range for intraday data of a stock or currency. I've already used outputsize='full', but I need something larger in order to use machine learning methods. It's for my thesis, in which I'm building a trading algorithm based on neural networks, but I need hundreds of thousands of data points for this to work.
Thank you for your time.
@RomelTorres | closed | 2018-10-11T21:33:00Z | 2019-02-11T21:59:42Z | https://github.com/RomelTorres/alpha_vantage/issues/89 | [
"question"
] | CarlosT93 | 0 |
jina-ai/serve | deep-learning | 5,916 | Streaming for gRPC for Deployment | closed | 2023-06-19T13:01:35Z | 2023-07-25T10:07:37Z | https://github.com/jina-ai/serve/issues/5916 | [] | alaeddine-13 | 0 |
|
Avaiga/taipy | data-visualization | 1,982 | Add back to top functionality | ### Description
As the total height of the website is long, a back-to-top button is much needed for easy navigation.
Please assign this issue under Hacktoberfest.
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
- [ ] Check if a new demo could be provided based on this, or if legacy demos could be benefit from it.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional) | closed | 2024-10-09T09:16:21Z | 2024-10-09T12:24:12Z | https://github.com/Avaiga/taipy/issues/1982 | [
"✨New feature"
] | Rajput-xv | 1 |
kaliiiiiiiiii/Selenium-Driverless | web-scraping | 292 | [Feature request] add request history to network_interceptor | Why this feature is needed:
We need it to parse requests for certain actions without resorting to hacks and global variables.
We open the interceptor with request history enabled,
execute our code, and then simply read the request history to analyze what happened during the execution of that code.
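A rough usage sketch of the workflow we have in mind (all names here are illustrative, not the current API):

```python
# Hypothetical usage of the proposed option; NetworkInterceptor, run_actions
# and analyze are placeholders for whatever the real code would use.
async with NetworkInterceptor(driver, request_history=True) as interceptor:
    await run_actions(driver)                 # code whose requests we want to inspect
    for request in interceptor.request_history:
        analyze(request)                      # post-hoc analysis, no globals needed
```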
Currently you can also make an empty query at the beginning of the async iterator to trigger certain actions,
which works, but is not a very nice solution.
```python
# proposed constructor parameter:
request_history: bool = False
self.request_history = [] if request_history else request_history
self.request_changed_history = [] if request_history else request_history

# in _paused_handler:
if isinstance(self.request_history, list):
    self.request_history.append(request)
``` | closed | 2024-12-08T12:41:57Z | 2024-12-09T07:21:36Z | https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/292 | [
"question",
"wontfix"
] | Toxenskiy | 5 |
aiortc/aioquic | asyncio | 18 | Can server create stream? | Hi,
I am using aioquic to develop some software. I want to create a unidirectional stream from the server to transfer data in the existing session, but I can't seem to find a way to create a stream on the server side.
Can anyone suggest a solution?
Thanks. | closed | 2019-07-29T17:38:47Z | 2019-09-04T06:14:02Z | https://github.com/aiortc/aioquic/issues/18 | [] | lRoccoon | 2 |
fastapi-users/fastapi-users | asyncio | 751 | Cannot extend the User model to use nested objects with Ormar | `OrmarUserDatabase.create()` will not save declared nested models/relationships. They do still appear in the OpenAPI schema docs, but the relationships cannot be saved or retrieved.
Steps to reproduce the behavior:
1. Extend your models with a relational field. I have used roles:
```python
class PublicMeta(ormar.ModelMeta):
'''For use with the Public Postgres schema'''
metadata = metadata
database = database
class Role(ormar.Model):
class Meta(PublicMeta):
tablename = 'role'
id: int = ormar.Integer(primary_key=True)
name: str = ormar.String(max_length=50)
description: str = ormar.String(max_length=255, nullable=True)
class UserModel(OrmarBaseUserModel):
class Meta(PublicMeta):
tablename = 'user'
roles: Optional[List[Role]] = ormar.ManyToMany(Role)
```
Extend Pydantic models:
```python
class User(models.BaseUser):
roles: List[Role]
(UserCreate etc. omitted)
```
2. Your registration route should look like this:

3. POST to `/registration`
4. In the response, the role value is empty:
```json
{
"id": "c020a52e-3355-4066-bed2-aa13287305ff",
"email": "user@example.com",
"is_active": true,
"is_superuser": false,
"is_verified": false,
"roles": []
}
```
## Expected behavior
Nested relationships should be stored.
## Configuration
- Python version : 3.8
- FastAPI version : 0.68.1
- FastAPI Users version : 8.1.0
### FastAPI Users configuration
Shown above
## Additional context
It looks like this can be solved by using Ormar's `save_related()` and possibly `select_all()` (for reading) methods.
I managed to store the relationship by calling `save_related` in `OrmarUserDatabase.create()` as follows:
```python
async def create(self, user: UD) -> UD:
oauth_accounts = getattr(user, "oauth_accounts", [])
model = await self.model(**user.dict(exclude={"oauth_accounts"})).save()
await model.save_related()
if oauth_accounts and self.oauth_account_model:
await self._create_oauth_models(model=model, oauth_accounts=oauth_accounts)
user_db = await self._get_user(id=user.id)
return cast(UD, user_db)
```
I suspect that the `_get_db_user()` function would need to call Ormar's [select_all()](https://collerek.github.io/ormar/queries/joins-and-subqueries/#select_all) method in order to return the relationship when querying users, but I've not had success with that, as I'm a bit new to Ormar's API.
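For what it's worth, the read side I was experimenting with looks roughly like this, using the `UserModel` defined above (a sketch only, not the actual adapter internals):

```python
# Rough sketch: eagerly follow all declared relations (e.g. roles) when fetching a user.
async def get_user_with_relations(user_id):
    return await UserModel.objects.select_all().get_or_none(id=user_id)
```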
I'm happy to do a pull request but I need some guidance here. | closed | 2021-10-04T12:59:46Z | 2022-05-05T13:21:24Z | https://github.com/fastapi-users/fastapi-users/issues/751 | [
"bug"
] | LonelyVikingMichael | 0 |
ultralytics/ultralytics | pytorch | 19,751 | How to use my own class name when predict? | ### Search before asking
- [ ] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
How can I use my own class names when predicting?
### Additional
_No response_ | open | 2025-03-18T03:59:20Z | 2025-03-18T06:38:03Z | https://github.com/ultralytics/ultralytics/issues/19751 | [
"question"
] | ChineseFootball10 | 3 |
roboflow/supervision | deep-learning | 1,463 | Notebook not found: Serialise Detections to a CSV File | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
The Colab in this [cookbook](https://supervision.roboflow.com/develop/notebooks/serialise-detections-to-csv/) is not found.
<img width="1118" alt="Screenshot 2024-08-19 at 1 19 21 PM" src="https://github.com/user-attachments/assets/07b23e28-0ccc-456d-a496-631e3600bb57">
```
Notebook not found
There was an error loading this notebook. Ensure that the file is accessible and try again.
Ensure that you have permission to view this notebook in GitHub and authorize Colab to use the GitHub API.
https://github.com/roboflow/supervision/blob/develop/docs/notebooks/detections-to-jsonsink.ipynb
Could not find detections-to-jsonsink.ipynb in https://api.github.com/repos/roboflow/supervision/contents/docs/no
```
### Environment
Browser only error: https://supervision.roboflow.com/develop/notebooks/serialise-detections-to-csv/
### Minimal Reproducible Example
Steps:
1. Open the cookbook https://supervision.roboflow.com/develop/notebooks/serialise-detections-to-csv/
2. Click on "Open in Colab"
3. Get the 404 error
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-08-19T20:22:59Z | 2024-08-20T14:08:37Z | https://github.com/roboflow/supervision/issues/1463 | [
"bug"
] | ediardo | 2 |
yt-dlp/yt-dlp | python | 11,766 | --keep-fragments doesn't work with livestreams | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Provide a description that is worded well enough to be understood
I was trying to keep the fragments along with the single output file for a live news stream with the command below:
`yt-dlp --downloader "m3u8:native" --keep-fragments -f 300 -P hls-live/ -v -o "%(title)s.%(ext)s" YDfiTGGPYCk`
However, there were no fragments in the specified folder, while the single output file was produced as specified. Format 300 contained both video and audio when testing.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
[debug] Command-line config: ['-vU']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.06 from yt-dlp/yt-dlp [4bd265539] (zip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.0-125-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: certifi-2020.06.20, requests-2.25.1, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-1.26.20
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.06 from yt-dlp/yt-dlp)
++++++++++++++++++++++++ Below is the output from the download ++++++++++++++++++++
[debug] Command-line config: ['--downloader', 'm3u8:native', '--keep-fragments', '-f', '300', '-P', 'hls-live/', '-v', '-o', '%(title)s.%(ext)s', 'YDfiTGGPYCk']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.06 from yt-dlp/yt-dlp [4bd265539] (zip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.0-125-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: certifi-2020.06.20, requests-2.25.1, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-1.26.20
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[youtube] Extracting URL: YDfiTGGPYCk
[youtube] YDfiTGGPYCk: Downloading webpage
[youtube] YDfiTGGPYCk: Downloading ios player API JSON
[youtube] YDfiTGGPYCk: Downloading mweb player API JSON
[youtube] YDfiTGGPYCk: Downloading m3u8 information
[youtube] YDfiTGGPYCk: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[info] YDfiTGGPYCk: Downloading 1 format(s): 300
[debug] Invoking ffmpeg downloader on "https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8"
[download] Destination: hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4
[debug] ffmpeg command line: ffmpeg -y -loglevel verbose -headers 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.41 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Sec-Fetch-Mode: navigate
' -i https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8 -c copy -f mpegts 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4.part'
ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
[tcp @ 0x558771f1bac0] Starting connection attempt to 172.217.19.238 port 443
[tcp @ 0x558771f1bac0] Starting connection attempt to 2a00:1450:4006:80b::200e port 443
[tcp @ 0x558771f1bac0] Connected attempt failed: Network is unreachable
[tcp @ 0x558771f1bac0] Successfully connected to 172.217.19.238 port 443
[hls @ 0x558771f16c80] Skip ('#EXT-X-VERSION:3')
[hls @ 0x558771f16c80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x558771f16c80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-12-08T13:58:00.008+00:00')
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32920/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[hls @ 0x558771f16c80] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32920/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[tcp @ 0x558772246200] Starting connection attempt to 203.66.182.18 port 443
[tcp @ 0x558772246200] Successfully connected to 203.66.182.18 port 443
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32921/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[hls @ 0x558771f16c80] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32921/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[tcp @ 0x558772286100] Starting connection attempt to 203.66.182.18 port 443
[tcp @ 0x558772286100] Successfully connected to 203.66.182.18 port 443
[h264 @ 0x5587725afdc0] Reinit context to 1280x720, pix_fmt: yuv420p
Input #0, hls, from 'https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8':
Duration: N/A, start: 69155.644478, bitrate: N/A
Program 0
Metadata:
variant_bitrate : 0
Stream #0:0: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp
Metadata:
variant_bitrate : 0
Stream #0:1: Video: h264 (Main), 1 reference frame ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, left), 1280x720 [SAR 1:1 DAR 16:9], 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
Metadata:
variant_bitrate : 0
[mpegts @ 0x5587725d8a40] service 1 using PCR in pid=256, pcr_period=83ms
[mpegts @ 0x5587725d8a40] muxrate VBR, sdt every 500 ms, pat/pmt every 100 ms
Output #0, mpegts, to 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4.part':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: h264 (Main), 1 reference frame ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, left), 1280x720 (0x0) [SAR 1:1 DAR 16:9], q=2-31, 59.94 fps, 59.94 tbr, 90k tbn, 90k tbc
Metadata:
variant_bitrate : 0
Stream #0:1: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp
Metadata:
variant_bitrate : 0
Stream mapping:
Stream #0:1 -> #0:0 (copy)
Stream #0:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32922/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x558772243040] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32922/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[tcp @ 0x558772c030c0] Starting connection attempt to 172.217.19.238 port 443ts/s speed=2.43x
[tcp @ 0x558772c030c0] Starting connection attempt to 2a00:1450:4006:80b::200e port 443
[tcp @ 0x558772c030c0] Connected attempt failed: Network is unreachable
[tcp @ 0x558772c030c0] Successfully connected to 172.217.19.238 port 443
[hls @ 0x558771f16c80] Skip ('#EXT-X-VERSION:3')
[hls @ 0x558771f16c80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x558771f16c80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-12-08T13:58:10.018+00:00')
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32923/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x55877227ab80] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32923/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32924/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x558772243040] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32924/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[https @ 0x558772b7c240] Opening 'https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8' for reading
[hls @ 0x558771f16c80] Skip ('#EXT-X-VERSION:3')
[hls @ 0x558771f16c80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x558771f16c80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-12-08T13:58:20.028+00:00')
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32925/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x558772243040] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32925/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32926/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x55877227ab80] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32926/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[https @ 0x558772b7c240] Opening 'https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8' for reading
[hls @ 0x558771f16c80] Skip ('#EXT-X-VERSION:3')
[hls @ 0x558771f16c80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x558771f16c80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-12-08T13:58:25.033+00:00')
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32927/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x558772243040] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32927/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
[https @ 0x558772b7c240] Opening 'https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8' for reading
[hls @ 0x558771f16c80] Skip ('#EXT-X-VERSION:3')
[hls @ 0x558771f16c80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x558771f16c80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-12-08T13:58:30.038+00:00')
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32928/goap/lmt%3D18/govp/lmt%3D18/dur/4.938/file/seg.ts', offset 0, playlist 0
[https @ 0x558772243040] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32928/goap/lmt%3D18/govp/lmt%3D18/dur/4.938/file/seg.ts' for reading
[hls @ 0x558771f16c80] HLS request for url 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32929/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts', offset 0, playlist 0
[https @ 0x55877227ab80] Opening 'https://rr7---sn-ipoxu-un56.googlevideo.com/videoplayback/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8/sq/32929/goap/lmt%3D18/govp/lmt%3D18/dur/5.005/file/seg.ts' for reading
^Cframe= 2544 fps= 74 q=-1.0 Lsize= 7181kB time=00:00:42.44 bitrate=1385.9kbits/s speed=1.24x
video:5895kB audio:676kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 9.275200%
Input file #0 (https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1733687906/ei/AqZVZ9jeHvzVmLAPpb_viA0/ip/211.21.22.118/id/YDfiTGGPYCk.47/itag/300/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D298/rqh/1/hdlc/1/hls_chunk_host/rr7---sn-ipoxu-un56.googlevideo.com/xpc/EgVo2aDSNQ%3D%3D/playlist_duration/30/manifest_duration/30/vprv/1/playlist_type/DVR/initcwndbps/802500/met/1733666316,/mh/4y/mm/44/mn/sn-ipoxu-un56/ms/lva/mv/m/mvi/7/pl/24/rms/lva,lva/dover/11/pacing/0/keepalive/yes/fexp/51326932,51331020,51335594,51347746/mt/1733665944/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,sgoap,sgovp,rqh,hdlc,xpc,playlist_duration,manifest_duration,vprv,playlist_type/sig/AJfQdSswRgIhAPyShYm4xwg8o_0KoGLx0duKkm1kdBH7D5AaH1tKsvkHAiEAlZoD45Ffe9jZmybNkGKQB9TaZqeJafDIOIQIP1s0_As%3D/lsparams/hls_chunk_host,initcwndbps,met,mh,mm,mn,ms,mv,mvi,pl,rms/lsig/AGluJ3MwRQIgAJm5XMlGSveDGl1BD6gwVbjBIXoAXzk1eikOeMPRsi4CIQCwORNk8gI_IsK19mhtyt50AS33O5vr2cMpeGmJKue4Fg%3D%3D/playlist/index.m3u8):
Input stream #0:0 (audio): 1829 packets read (692344 bytes);
Input stream #0:1 (video): 2544 packets read (6036934 bytes);
Total: 4373 packets (6729278 bytes) demuxed
Output file #0 (file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4.part):
Output stream #0:0 (video): 2544 packets muxed (6036934 bytes);
Output stream #0:1 (audio): 1829 packets muxed (692344 bytes);
Total: 4373 packets (6729278 bytes) muxed
[AVIOContext @ 0x558772c00900] Statistics: 0 seeks, 29 writeouts
[AVIOContext @ 0x558772288fc0] Statistics: 4942332 bytes read, 0 seeks
[AVIOContext @ 0x5587728d3780] Statistics: 2506792 bytes read, 0 seeks
[AVIOContext @ 0x558772a21b80] Statistics: 28559 bytes read, 0 seeks
[AVIOContext @ 0x558772257200] Statistics: 6860 bytes read, 0 seeks
Exiting normally, received signal 2.
[ffmpeg] Interrupted by user
[download] 100% of 7.01MiB in 00:00:39 at 183.07KiB/s
[debug] ffprobe command line: ffprobe -hide_banner -show_format -show_streams -print_format json 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4'
[debug] ffmpeg command line: ffprobe -show_streams 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4'
[FixupM3u8] Fixing MPEG-TS in MP4 container of "hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.mp4' -map 0 -dn -ignore_unknown -c copy -f mp4 -bsf:a aac_adtstoasc -movflags +faststart 'file:hls-live/LIVE NEWS: LiveNOW FOX 24⧸7 LIVE STREAM 2024-12-08 21_58.temp.mp4'
```
| closed | 2024-12-08T14:00:32Z | 2024-12-09T14:05:18Z | https://github.com/yt-dlp/yt-dlp/issues/11766 | [
"question"
] | billsenxu | 4 |
tensorflow/tensor2tensor | machine-learning | 1,506 | t2t_decoder hangs when dot_product_relative_v2 is used | Hello,
I am trying to train a custom decoder-only Transformer model (with a custom `bottom['targets']`) for sequence generation. I was able to train and generate from the model before specifying any special hparams. However, the generated sequences frequently had a failure mode where certain tokens repeated too often.
I then added the following two hparams and am training a new model:
```python
hparams.self_attention_type = "dot_product_relative_v2"
hparams.max_relative_position = 256
```
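For illustration, a minimal sketch of how these two settings can be packaged as a custom hparams_set in tensor2tensor (the set name `transformer_relative_decoder` is an assumption made for this sketch, not something from this issue; it would then be passed via `--hparams_set`):
```python
# Sketch only: register an hparams_set that layers the relative-attention settings
# discussed above on top of transformer_base.
from tensor2tensor.models import transformer
from tensor2tensor.utils import registry


@registry.register_hparams
def transformer_relative_decoder():
    hparams = transformer.transformer_base()
    hparams.self_attention_type = "dot_product_relative_v2"
    hparams.max_relative_position = 256
    return hparams
```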
However, now when I run t2t_decoder, it hangs and does not generate any output (it is also hard to kill with ^C; I have to use kill -9).
I run the decoder in interactive mode and simply press return at the '>' prompt:
```bash
t2t_decoder --data_dir="${DATA_DIR}" --decode_hparams="${DECODE_HPARAMS}" --decode_interactive --hparams="sampling_method=random" --hparams_set=${HPARAMS_SET} --model=${MODEL} --problem=${PROBLEM} --output_dir=${TRAIN_DIR}
```
where:
```bash
DECODE_HPARAMS="alpha=0,beam_size=1,extra_length=2048"
MODEL=transformer
```
OS: macOS, High Sierra
```
$ pip freeze | grep tensor
Error [Errno 20] Not a directory: '/Users/vida_vakil/miniconda3/lib/python3.6/site-packages/magenta-1.0.2-py3.6.egg' while executing command git rev-parse
Exception:
....
NotADirectoryError: [Errno 20] Not a directory: '/Users/vida_vakil/miniconda3/lib/python3.6/site-packages/magenta-1.0.2-py3.6.egg'
```
The model I am using is based on Score2Perf (https://github.com/tensorflow/magenta/tree/master/magenta/models/score2perf), and I have installed it using instructions from their page, and here: https://github.com/tensorflow/magenta
It looks like the pip error above is related to the magenta egg installation.
```
$ python -V
Python 3.6.6 :: Anaconda, Inc.
tensorflow 1.12.0
tensor2tensor 1.13.0
```
Thanks in advance | closed | 2019-03-20T18:20:27Z | 2019-03-22T03:03:26Z | https://github.com/tensorflow/tensor2tensor/issues/1506 | [] | vidavakil | 2 |
youfou/wxpy | api | 63 | AttributeError: 'Bot' object has no attribute 'enable_puid' | Is the enable_puid method no longer available now?
```
Traceback (most recent call last):
File "C:/Users/Administrator/PycharmProjects/example/weixin/1.py", line 7, in <module>
bot.enable_puid('wxpy_puid.pkl')
AttributeError: 'Bot' object has no attribute 'enable_puid'
```
My code is:
```
# -*- coding:utf-8 -*-
from wxpy import *
# Initialize the bot and log in by scanning the QR code
bot = Bot(cache_path=True)
# Enable the puid attribute and specify the save/load path for the puid mapping data
bot.enable_puid('wxpy_puid.pkl')
``` | closed | 2017-05-25T02:09:50Z | 2017-05-27T07:55:54Z | https://github.com/youfou/wxpy/issues/63 | [] | seozed | 1 |
comfyanonymous/ComfyUI | pytorch | 6,314 | ComfyUI seems to ignore the --reserve-vram and/or --disable-smart-memory ? Is there anything going wrong ? | ### Your question
I am using every available option to reduce VRAM usage:
`ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram --force-fp16 --reserve-vram 2.4 --disable-smart-memory`
However, I keep seeing the memory usage of the loaded model increase:

and then I get memory errors in later generations, as shown in the logs, and generation slows down significantly. Sometimes I get an OOM error and the generation dies.
I am basically doing an XY generation of pictures with different guidance values and base_shifts. I am using the flux-q4-ks.gguf model (with the clip models t5_v1.1_q8 and vit-l-text-detail) and this [lora](https://civitai.com/models/730373/hyper-realism-lora-by-aidma-flux).
I can provide the workflow if needed for reproduction.
The issue seems very similar to #5958, #5385 and #4318.
However, I am using SamplerCustomAdvanced and an NVIDIA card, so I am wondering whether this is a different problem or whether I am doing something wrong.
### Logs
```powershell
[2025-01-01 20:36:33.946]
[ARequested to load Flux
[2025-01-01 20:36:34.009] 0 models unloaded.
[2025-01-01 20:36:34.118] loaded partially 6320.525634765625 6320.427978515625 0
[2025-01-01 20:36:34.123] Attempting to release mmap (44)
[2025-01-01 20:36:34.259]
[2025-01-01 20:36:34.260] [A
[2025-01-01 20:36:34.261] [AERROR lora diffusion_model.single_blocks.0.linear2.weight Allocation on device
[2025-01-01 20:38:46.045]
[2025-01-01 20:38:46.045] [A
[2025-01-01 20:40:26.966] [A
[2025-01-01 20:42:00.008] [A
[2025-01-01 20:43:26.563] [A
[2025-01-01 20:45:06.902] [A
[2025-01-01 20:46:33.755] [A
[2025-01-01 20:48:04.298] [A
[2025-01-01 20:49:35.220] [A
[2025-01-01 20:52:40.229] [A
[2025-01-01 20:55:56.443] [A
[2025-01-01 20:58:42.092] [A
[2025-01-01 21:01:50.056] [A
[2025-01-01 21:02:17.609] [A
bosh3: 100%|█████████████████████████████████████████████████████████████████| 1/1 [25:43<00:00, 1543.35s/it]
bosh3: 100%|█████████████████████████████████████████████████████████████████| 1/1 [25:43<00:00, 1543.35s/it]
[2025-01-01 21:02:17.612]
[2025-01-01 21:02:17.613]
[ARequested to load Flux
[2025-01-01 21:02:17.679] 0 models unloaded.
[2025-01-01 21:02:17.759] loaded partially 6384.427978515625 6383.381103515625 0
[2025-01-01 21:02:17.765] Attempting to release mmap (40)
[2025-01-01 21:02:17.869]
[2025-01-01 21:02:17.869] [A
[2025-01-01 21:02:17.870] [AERROR lora diffusion_model.single_blocks.0.linear2.weight Allocation on device
[2025-01-01 21:04:24.271]
```
### Other
Total VRAM 8192 MB, total RAM 40352 MB
pytorch version: 2.5.1+cu124
Forcing FP16.
Set vram state to: LOW_VRAM
Disabling smart memory management
Device: cuda:0 NVIDIA GeForce RTX 3070 Laptop GPU : cudaMallocAsync
Using pytorch cross attention
### Loading: ComfyUI-Manager (V2.55.5)
### ComfyUI Version: v0.3.7-13-g44db978 | Released on '2024-12-10' | open | 2025-01-02T03:20:01Z | 2025-03-17T22:56:43Z | https://github.com/comfyanonymous/ComfyUI/issues/6314 | [
"User Support"
] | SagnikDe2024 | 5 |
OWASP/Nettacker | automation | 702 | x_powered_by_vuln module to log/show the header value | we now have the ability to log response results - need to update the module to log/display the x-powered-by header value | closed | 2023-07-04T17:17:51Z | 2023-07-04T17:41:23Z | https://github.com/OWASP/Nettacker/issues/702 | [] | securestep9 | 0 |
gradio-app/gradio | deep-learning | 9,972 | Issue with Gradio assets not loading through Nginx reverse proxy | ### Describe the bug
When accessing a Gradio application through an Nginx reverse proxy, the main page loads, but static assets (JS/CSS) fail with 404 errors when the page attempts to fetch them automatically.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
### gradio code
```python
import gradio as gr
import time
def test(x):
    time.sleep(4)
    return x

gr.Interface(test, "textbox", "textbox").queue().launch(
    root_path="/webui",  # I also tried pointing nginx at http://0.0.0.0:7861/ with no root_path="/webui"
    server_name="0.0.0.0",
    server_port=7861
)
```
### Current Behavior
- `localhost:9999/webui` loads successfully and returns the Gradio web interface
- When the page tries to fetch its assets, the following requests return 404:
- `localhost:9999/assets/index-Dj1xzGVg.js`
- `localhost:9999/assets/index-Bmd1Nf3q.css`
I manually tried accessing with the /webui prefix, but still got 404:
- `localhost:9999/webui/assets/index-Dj1xzGVg.js`
- `localhost:9999/webui/assets/index-Bmd1Nf3q.css`
However, accessing directly through port 7861 works:
- `localhost:7861/webui/assets/index-Dj1xzGVg.js`
- `localhost:7861/webui/assets/index-Bmd1Nf3q.css`
### Expected Behavior
Static assets should load correctly when accessing the application through the Nginx reverse proxy at `localhost:9999/webui`.
### Question
Is there something wrong with my configuration? How can I properly serve Gradio's static assets through the Nginx reverse proxy?
### Additional Notes
- The main application interface loads correctly
- Static assets (JS/CSS) fail to load when the page tries to fetch them automatically
- Direct access to the Gradio server works as expected
### Nginx Configuration
```
server {
listen 9999;
server_name _;
root /root;
index index.php index.html index.htm;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ [^/]\.php(/|$) {
try_files $uri =404;
fastcgi_pass unix:/run/php/php8.1-fpm.sock;
fastcgi_index index.php;
set $path_info $fastcgi_path_info;
set $real_script_name $fastcgi_script_name;
if ($fastcgi_script_name ~ "^(.+?\.php)(/.+)$") {
set $real_script_name $1;
set $path_info $2;
}
fastcgi_param SCRIPT_FILENAME $document_root$real_script_name;
fastcgi_param SCRIPT_NAME $real_script_name;
fastcgi_param PATH_INFO $path_info;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
}
location /webui {
proxy_pass http://0.0.0.0:7861/webui/; # and i also try to use http://0.0.0.0:7861/ with no root_path="/webui"
proxy_buffering off;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
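For comparison, the location block in Gradio's Nginx deployment guide proxies to the bare app URL (no path suffix on `proxy_pass`) while `root_path="/webui"` stays set in `launch()`. A sketch along those lines — the host and port below are assumptions, and this is not taken from the report above:

```nginx
location /webui/ {
    proxy_pass http://127.0.0.1:7861/;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```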
### Screenshot
Gradio is running normally on port 7861, and 7861/webui can also be accessed.
The following is the situation on localhost:9999:
The webpage opens up as a blank page.
However, the returned HTML contains Gradio content.

### Logs
```shell
None
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.6.0
gradio_client version: 1.4.3
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.3.0
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.4.3 is not installed.
httpx: 0.27.0
huggingface-hub: 0.26.2
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.0
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.1
ruff: 0.7.4
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.13.0
typing-extensions: 4.11.0
urllib3: 2.2.1
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.0
huggingface-hub: 0.26.2
packaging: 24.0
typing-extensions: 4.11.0
websockets: 12.0
```
### Severity
Blocking usage of gradio | open | 2024-11-16T18:34:18Z | 2025-02-28T17:53:19Z | https://github.com/gradio-app/gradio/issues/9972 | [
"bug",
"cloud"
] | shiertier | 3 |
airtai/faststream | asyncio | 1,713 | docs: add How-To section placeholder to all brokers pages | I think we should add such a placeholder to all specific broker sections: RabbitMQ, NATS, Redis.
Also, we should edit the current Kafka placeholder and add detailed information on exactly how the user should add the section to the navigation. | open | 2024-08-21T19:27:06Z | 2024-08-21T19:27:06Z | https://github.com/airtai/faststream/issues/1713 | [
"documentation"
] | Lancetnik | 0 |
ray-project/ray | tensorflow | 50,939 | Release test long_running_many_tasks.aws failed | Release test **long_running_many_tasks.aws** failed. See https://buildkite.com/ray-project/release/builds/34295#01954657-cbe0-487c-b8a5-d54e567f3856 for more details.
Managed by OSS Test Policy | closed | 2025-02-27T08:15:02Z | 2025-02-28T06:09:26Z | https://github.com/ray-project/ray/issues/50939 | [
"bug",
"P0",
"triage",
"core",
"release-test",
"ray-test-bot",
"weekly-release-blocker",
"stability"
] | can-anyscale | 1 |
postmanlabs/httpbin | api | 376 | Difficult response after request from a TLS client which does not support TLS SNI | We did some tests with `openssl s_client -connect 23.23.115.5:443` to simulate our embedded client.
We can't establish a connection successfully with this method and always get "Internal_error". After digging deeper we found that `openssl s_client -connect 23.23.115.5:443 -servername www.httpbin.org` works fine.
RFC 6066 (https://tools.ietf.org/html/rfc6066) states:
> If the server understood the ClientHello extension but does not recognize the server name, the server SHOULD take one of two actions: either abort the handshake by sending a fatal-level unrecognized_name(112) alert or continue the handshake.
It would be very nice to change the response to "unrecognized_name" instead of "Internal_error".
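For illustration of what that would mean at the TLS layer — a sketch only, not httpbin's actual stack, and the host names are made up — Python's `ssl` module, for example, lets an SNI callback return the `unrecognized_name` alert explicitly:

```python
# Sketch: send unrecognized_name(112) for unknown SNI values instead of internal_error.
import ssl

KNOWN_HOSTS = {"httpbin.org", "www.httpbin.org"}  # assumed names, for illustration

def sni_callback(ssl_socket, server_name, context):
    # server_name is None when the client sent no SNI extension at all
    if server_name is not None and server_name not in KNOWN_HOSTS:
        return ssl.ALERT_DESCRIPTION_UNRECOGNIZED_NAME
    return None  # continue the handshake

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.sni_callback = sni_callback
```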
This will help others to find the root cause much faster ;-) | open | 2017-08-17T11:54:01Z | 2018-08-05T07:35:44Z | https://github.com/postmanlabs/httpbin/issues/376 | [] | wodo | 4 |
home-assistant/core | python | 140,883 | Emulated Hue error since 2025.3.x | ### The problem
Emulated Hue devices show as offline in the Harmony Hub, and repair or refresh doesn't fix it.
Accessing http://haip:8300/api/v2/lights throws a 500 Internal Server Error.
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
core-2024.x.x
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
Emulated Hue
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/emulated_hue/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
Logger: aiohttp.server
Source: /usr/local/lib/python3.13/site-packages/aiohttp/web_protocol.py:451
First occurred: 6:56:39 PM (3 occurrences)
Last logged: 6:56:57 PM
Error handling request from 192.168.0.54
Error handling request from 192.168.0.20
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/aiohttp/web_protocol.py", line 480, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/aiohttp/web_app.py", line 569, in _handle
return await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/http.py", line 75, in handle
result = handler(request, **request.match_info)
File "/usr/src/homeassistant/homeassistant/components/emulated_hue/hue_api.py", line 247, in get
return self.json(create_list_of_entities(self.config, request))
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/emulated_hue/hue_api.py", line 899, in create_list_of_entities
config.entity_id_to_number(entity_id): state_to_json(config, state)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/emulated_hue/hue_api.py", line 775, in state_to_json
state_dict = get_entity_state_dict(config, state)
File "/usr/src/homeassistant/homeassistant/components/emulated_hue/hue_api.py", line 666, in get_entity_state_dict
return _build_entity_state_dict(entity)
File "/usr/src/homeassistant/homeassistant/components/emulated_hue/hue_api.py", line 740, in _build_entity_state_dict
data[STATE_BRIGHTNESS] = round(percentage * HUE_API_STATE_BRI_MAX / 100)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
TypeError: unsupported operand type(s) for /: 'str' and 'int'
```
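The traceback boils down to the brightness percentage arriving as a string; a minimal reproduction of that expression (the constant value below is illustrative, not taken from the Home Assistant source):

```python
# str * int is valid string repetition, so the failure only surfaces at the division.
percentage = "50"                    # a string, as the traceback implies
HUE_API_STATE_BRI_MAX = 254          # illustrative value
round(percentage * HUE_API_STATE_BRI_MAX / 100)
# TypeError: unsupported operand type(s) for /: 'str' and 'int'
```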
### Additional information
_No response_ | open | 2025-03-18T17:06:05Z | 2025-03-18T17:06:12Z | https://github.com/home-assistant/core/issues/140883 | [
"integration: emulated_hue"
] | aurelmarius | 1 |
SYSTRAN/faster-whisper | deep-learning | 1,179 | BatchedInferencePipeline degrades transcription quality heavily | At first the new `BatchedInferencePipeline` seems great. It produces around a 2X speed improvement compared to the normal pipeline. But after some more testing I discovered that for some audio files the transcription quality is highly degraded. Whole segments are missing compared to the normal pipeline, and some segments switch language midway for long periods.
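For reference, a minimal sketch of how the two code paths are typically invoked — the model size, device, audio path and prompt below are placeholders, not details from this report:

```python
from faster_whisper import WhisperModel, BatchedInferencePipeline

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

# normal (sequential) pipeline
segments, info = model.transcribe("audio.mp3", language="nl", initial_prompt="...")

# batched pipeline, wrapping the same model
batched = BatchedInferencePipeline(model=model)
batched_segments, batched_info = batched.transcribe(
    "audio.mp3", language="nl", initial_prompt="...", batch_size=16
)
```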
Example:
Segment A has 30 seconds of audio, fully in Dutch. It does contain a few English words. Halfway through the transcribed segment the text becomes English, translating the Dutch audio, and at the end of the segment the `initial_prompt` is displayed. This happens in multiple places.
So this makes the `BatchedInferencePipeline` not suited for a production application. | open | 2024-11-28T11:22:07Z | 2024-12-10T15:54:30Z | https://github.com/SYSTRAN/faster-whisper/issues/1179 | [] | Appfinity-development | 11 |
neuml/txtai | nlp | 749 | Support `<pre>` blocks with Textractor | Update the Textractor pipeline to handle `<pre>` blocks. These blocks should be converted to Markdown code blocks.
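One possible shape for this — purely an illustrative sketch, not txtai's implementation, and the helper name is made up — is to rewrite `<pre>` nodes into fenced blocks before the HTML is flattened to text:

```python
from bs4 import BeautifulSoup

def pre_to_markdown(html):
    fence = "`" * 3  # Markdown code fence
    soup = BeautifulSoup(html, "html.parser")
    for pre in soup.find_all("pre"):
        code = pre.get_text()
        # replace the <pre> node with a fenced Markdown code block
        pre.replace_with(f"\n{fence}\n{code}\n{fence}\n")
    return str(soup)
```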
| closed | 2024-07-21T16:18:16Z | 2024-07-21T16:22:53Z | https://github.com/neuml/txtai/issues/749 | [] | davidmezzetti | 0 |
feature-engine/feature_engine | scikit-learn | 295 | add functionality to detect introduction of NAN in transform method of discretizers | For the arbitrary, equal-width and equal-frequency discretisers, we should include in the transform method a check to see whether NaNs are being introduced accidentally.
This can happen if a value falls outside the boundaries entered by the user in the arbitrary discretiser, or if the variable is too skewed for the other transformers.
The check should raise a warning to begin with, perhaps so as not to break backwards compatibility, and should inform the user exactly in which variables the NaNs were introduced.
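As an illustration only — this sketch is not feature_engine code and the helper name is made up — the check could look roughly like this:

```python
import warnings

def _warn_if_nan_introduced(X_in, X_out, variables):
    # variables where transform produced NaNs that were not present in the input
    nan_columns = [
        var for var in variables
        if X_out[var].isnull().any() and not X_in[var].isnull().any()
    ]
    if nan_columns:
        warnings.warn(
            f"NaN values were introduced during transform in the following variables: {nan_columns}"
        )
```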
We should also expand the tests of all the transformers to make sure this functionality works. As it is the same functionality for all transformers, and also for the categorical encoders as per issue #294, I wonder if we should make a shared master function for this. I think it depends on the amount of code; if it is a one-liner, probably not. | closed | 2021-07-18T09:08:10Z | 2022-01-04T13:29:39Z | https://github.com/feature-engine/feature_engine/issues/295 | [
"priority"
] | solegalli | 7 |