repo_name (string) | topic (string, 30 classes) | issue_number (int64) | title (string) | body (string) | state (string, 2 classes) | created_at (string) | updated_at (string) | url (string) | labels (sequence) | user_login (string) | comments_count (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|
serengil/deepface | machine-learning | 1,108 | Fine-tune the model. | Thank you very much for your guidance. I would like to fine-tune the model using my own custom dataset. Could you please provide the relevant training code? | closed | 2024-03-13T02:05:59Z | 2024-03-13T08:37:32Z | https://github.com/serengil/deepface/issues/1108 | [
"question"
] | KxuanZhang | 1 |
hpcaitech/ColossalAI | deep-learning | 6,028 | How can I train two models at the same time? | ### Is there an existing issue for this bug?
- [X] I have searched the existing issues
### 🐛 Describe the bug
The official documentation gives an example of training a single model:
```python
colossalai.launch(...)
plugin = GeminiPlugin(...)
booster = Booster(precision='fp16', plugin=plugin)
model = GPT2()
optimizer = HybridAdam(model.parameters())
dataloader = plugin.prepare_dataloader(train_dataset, batch_size=8)
lr_scheduler = LinearWarmupScheduler()
criterion = GPTLMLoss()
model, optimizer, criterion, dataloader, lr_scheduler = booster.boost(model, optimizer, criterion, dataloader, lr_scheduler)
for epoch in range(max_epochs):
    for input_ids, attention_mask in dataloader:
        outputs = model(input_ids.cuda(), attention_mask.cuda())
        loss = criterion(outputs.logits, input_ids)
        booster.backward(loss, optimizer)
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```
In my training there are two models, model1 and model2, that both need to be trained. How should I set this up?
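A minimal sketch of one possible pattern (not from the official docs, and not verified against every plugin): give each model its own `Booster`, mirroring the single-model example above.
```python
# Hedged sketch: one Booster (and one plugin instance) per model. model1/model2
# and their optimizers, criteria, dataloaders and schedulers are assumed to be
# defined exactly as in the single-model example.
colossalai.launch(...)
booster1 = Booster(precision='fp16', plugin=GeminiPlugin(...))
booster2 = Booster(precision='fp16', plugin=GeminiPlugin(...))

model1, optimizer1, criterion1, dataloader1, lr_scheduler1 = booster1.boost(
    model1, optimizer1, criterion1, dataloader1, lr_scheduler1)
model2, optimizer2, criterion2, dataloader2, lr_scheduler2 = booster2.boost(
    model2, optimizer2, criterion2, dataloader2, lr_scheduler2)

# Each model is then trained with its own booster, e.g.
# booster1.backward(loss1, optimizer1) and booster2.backward(loss2, optimizer2).
```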
### Environment
_No response_ | closed | 2024-08-23T03:22:27Z | 2024-08-23T09:22:07Z | https://github.com/hpcaitech/ColossalAI/issues/6028 | [
"question"
] | wangqiang9 | 4 |
Sanster/IOPaint | pytorch | 598 | [Feature Request] add briaai/RMBG-2 | add briaai/RMBG-2
https://huggingface.co/briaai/RMBG-2.0 | closed | 2024-11-14T21:54:21Z | 2024-11-23T13:41:26Z | https://github.com/Sanster/IOPaint/issues/598 | [] | wolfkingal2000 | 0 |
numpy/numpy | numpy | 28,002 | DOC: dtype member docstrings are not tested | ### Issue with current documentation:
Over at #28001 we discovered that `np.dtype.kind` is not being tested via doctests. I think the problem is in doctest itself, where [it only checks certain items in `obj.__dict__`](https://github.com/python/cpython/blob/7900a85019457c14e8c6abac532846bc9f26760d/Lib/doctest.py#L1064):
- staticmethod, classmethod, property
- inspect.isroutine, inspect.isclass
In the case at hand, `np.dtype.kind` is a member, so it is not collected for testing.
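A quick check (a sketch; the comments state the expected results) illustrating why the docstring is skipped:
```python
import doctest
import inspect
import numpy as np

attr = np.dtype.__dict__["kind"]
print(type(attr))  # a C-level getset/member descriptor, not a property
print(isinstance(attr, (staticmethod, classmethod, property)))  # False
print(inspect.isroutine(attr) or inspect.isclass(attr))         # False

# Consequently DocTestFinder never yields a test for np.dtype.kind:
tests = doctest.DocTestFinder().find(np.dtype)
print(any(t.name.endswith(".kind") for t in tests))             # False
```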
### Idea or request for content:
We should find a work-around, as doctest is a part of the python stdlib, so we cannot simply upgrade the version. cc @ev-br | closed | 2024-12-15T13:21:31Z | 2024-12-22T15:27:19Z | https://github.com/numpy/numpy/issues/28002 | [
"04 - Documentation"
] | mattip | 3 |
nolar/kopf | asyncio | 216 | [PR] Speed up e2e tests and make them exit gracefully | > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2019-10-28 17:19:13+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/216
> Merged by [nolar](https://github.com/nolar) at _2019-11-06 14:31:06+00:00_
> Issue : #13 #59
## Description
Improve the e2e tests to wait for stop-words in the logs instead of just waiting for a fixed time. It was quite common that the e2e tests did not fit into the empirically guessed timings, so the timings had to be increased far above what was normally needed — thus slowing the e2e tests.
This became even more important for the tests that contain the artificial delays, such as sleep, temporary errors with delays, or arbitrary exceptions with the default retry delay (even if mocked).
Now, the default delay is 10 seconds, but the tests continue as soon as they see the specially defined stop-words for each stage (creation, deletion; later: startup, cleanup).
In addition, the `KopfRunner` was improved to stop the background operator gracefully instead of the forced cancellation (which had no graceful period).
## Types of Changes
- Refactor/improvements
| closed | 2020-08-18T20:00:56Z | 2020-08-23T20:51:01Z | https://github.com/nolar/kopf/issues/216 | [
"archive",
"automation"
] | kopf-archiver[bot] | 0 |
graphql-python/graphene-django | graphql | 464 | Graphene Django is incompatible with django-filters 2.0.0 | When using graphene-django along with `django-filter` 2.0.0 I get an error trying to use the graphql endpoint.
A brief example:
```python
class Query(object):
    projects = DjangoFilterConnectionField(
        MyProjectNode, filterset_class=MyProjectFilterSet)
```
This is the error:
```
`GrapheneMyProjectFilterSet.filter_for_reverse_field` has been removed. `GrapheneMyProjectFilterSet.filter_for_field` now generates filters for reverse fields. See: https://django-filter.readthedocs.io/en/master/guide/migration.html
``` | closed | 2018-07-13T14:31:24Z | 2018-09-05T23:22:37Z | https://github.com/graphql-python/graphene-django/issues/464 | [] | synasius | 8 |
mitmproxy/mitmproxy | python | 6,468 | Server TLS handshake failed. Certificate verify failed: unable to get local issuer certificate | #### Problem Description
IE and some apps encounter the "Server TLS handshake failed. Certificate verify failed: unable to get local issuer certificate" error, but it works in Chrome.
#### Steps to reproduce the behavior:
1. Download the certificates through mitm.it. Import the certificates into "Trusted Root certificates".
2. Set the proxy in the network settings. The proxy address is 127.0.0.1:8080.
3. Start the Mitmproxy by "mitmweb"
4. I can get the https record when I open the website in Chrome
5. But when I open IE or another app, the log shows "Server TLS handshake failed. Certificate verify failed: unable to get local issuer certificate"
#### System Information
Windows 10
Mitmproxy 10.1.3



| open | 2023-11-08T02:06:19Z | 2023-12-11T06:13:54Z | https://github.com/mitmproxy/mitmproxy/issues/6468 | [
"kind/triage"
] | shenping0324 | 3 |
pennersr/django-allauth | django | 3,875 | Headless Logout should return 200 instead of 401 | I find it a bit unusual that the Headless Logout endpoint returns 401 on a successful logout. Shouldn't it return 200 instead? I am not an expert on this topic by any means - so please feel free to enlighten me! :) | closed | 2024-06-09T17:52:39Z | 2024-06-09T20:08:44Z | https://github.com/pennersr/django-allauth/issues/3875 | [] | semihsezer | 2 |
albumentations-team/albumentations | deep-learning | 1,481 | Adding intermediate information in a custom augmentation | Apologies in advance if a version of this has been asked before, but I wasn't able to find any info. I have a custom augmentation that takes an image and a bounding box, expands the bbox randomly within limits and then crops. If I want to also access the expanded bbox that was used, how can I get that information from the output? For reference, here's the basic code skeleton. Assume `crop`, `expand` and `jitter_bbox` functions exist, and that cases where expansions protrude beyond image boundaries are handled:
```python
class RandomExpansion(A.DualTransform):
    def __init__(self,
                 expansion_limits=[0.0, 0.5],
                 always_apply=False,
                 p=0.5,
                 ):
        super(RandomExpansion, self).__init__(always_apply, p)
        self.expansion_limits = expansion_limits

    def apply(self, np_img, x_min, x_max, y_min, y_max, **params):
        h, w = np_img.shape[:2]
        exp_bbox = np.array([x_min, y_min, x_max, y_max])
        return crop(np_img, exp_bbox)

    def apply_to_bbox(self, bbox, **params):
        x_min = np.clip(bbox[0], 0.0, 1.0)
        y_min = np.clip(bbox[1], 0.0, 1.0)
        x_max = np.clip(bbox[2], 0.0, 1.0)
        y_max = np.clip(bbox[3], 0.0, 1.0)
        return (x_min, y_min, x_max, y_max)

    @property
    def targets_as_params(self):
        return ["image", "bboxes"]

    def get_params_dependent_on_targets(self, params):
        h, w = params["image"].shape[:2]
        norm_bbox = params["bboxes"][0]
        bbox = denormalize_bbox(norm_bbox, h, w)[:4]
        # jitter the bbox randomly
        bbox = jitter_bbox(bbox)
        # expand randomly
        exp_factor = np.random.uniform(
            self.expansion_limits[0], self.expansion_limits[1]
        )
        exp_bbox = expand(bbox, exp_factor)
        ex1, ey1, ex2, ey2 = exp_bbox
        return {
            "x_min": ex1,
            "y_min": ey1,
            "x_max": ex2,
            "y_max": ey2,
        }
```
Ideally, after wrapping this with `A.Compose`, I'd do something like `out = tfm(image=<np_img>, bboxes=<bbox>)` and would want `out` to also contain the `exp_bbox` referenced above. Is there a way to do this?
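One possible answer (a hedged sketch, not verified against every albumentations version): recent releases ship `A.ReplayCompose`, which records the parameters each transform actually applied, so the sampled expansion box can be read back from the output. The transform and parameter keys below are the ones defined in the skeleton above.
```python
# Sketch: ReplayCompose stores the resolved params of every applied transform
# under out["replay"], including values sampled in get_params_dependent_on_targets().
tfm = A.ReplayCompose(
    [RandomExpansion(expansion_limits=[0.0, 0.5], p=1.0)],
    bbox_params=A.BboxParams(format="albumentations"),
)
out = tfm(image=np_img, bboxes=[bbox])
applied = out["replay"]["transforms"][0]["params"]
exp_bbox = (applied["x_min"], applied["y_min"], applied["x_max"], applied["y_max"])
```
An alternative that avoids `ReplayCompose` is simply stashing the sampled values on the instance (e.g. `self.last_params = {...}` inside `get_params_dependent_on_targets`) and reading them back after the call.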
| open | 2023-09-13T16:36:24Z | 2023-09-13T16:36:24Z | https://github.com/albumentations-team/albumentations/issues/1481 | [] | avasbr | 0 |
ckan/ckan | api | 7,760 | Project: UI Revamp | ### Short Description
UI isn't up to date. Reworking the UX is complex; it will take time and effort. If we see value in modernizing the UI so it's prettier, without getting deep into rethinking flows and interactions, it's comparably low-hanging fruit.
### Problem hypothesis
An old-looking UI can scare off newcomers, as it signals that the project is lagging. Users would feel better working with a modern UI.
### Value
1. Good [first impression](https://thestory.is/en/journal/good-first-impression-website/) is easier to achieve with modernized UI.
2. Customer satisfaction will grow as it's a pleasure to work with modern looking UI. ([Aesthetics role in user satisfaction](https://www.researchgate.net/publication/221325046_User_Satisfaction_Aesthetics_and_Usability_Beyond_Reductionism))
3. With nice looking UI we'd be able to present CKAN better to new users, maintainers.
### Desired outcomes
- Increase in customer satisfaction for data publishers
- Increase in conversion rate from Prospect to Customer
### User needs
TBD
### Technical needs & known limitations
😎 Need your input on what a designer should know before designing new screens. Or even better - what are the prerequisites for designing UI that will be easier to implement.
### Costs
TBD
### Validation
TBD
| open | 2023-08-25T11:05:16Z | 2024-10-15T10:34:33Z | https://github.com/ckan/ckan/issues/7760 | [
"UX"
] | thegostev | 6 |
LAION-AI/Open-Assistant | machine-learning | 3,741 | chat frontend no longer active, fix readme | From Readme:
> How To Try It Out
> Chatting with the AI
>
> The chat frontend is now live [here](https://open-assistant.io/chat). Log in and start chatting! Please try to react with a thumbs up or down for the assistant's responses when chatting.
but the link now leads to the `OpenAssistant has finished!` page, without allowing you to try the model | open | 2023-12-30T17:35:25Z | 2024-10-14T22:43:22Z | https://github.com/LAION-AI/Open-Assistant/issues/3741 | [] | tubbadu | 2 |
ydataai/ydata-profiling | jupyter | 865 | Increase supported categorical descriptions | closed | 2021-10-23T10:35:14Z | 2021-10-23T10:42:39Z | https://github.com/ydataai/ydata-profiling/issues/865 | [] | chanedwin | 0 |
wkentaro/labelme | deep-learning | 1,273 | labelme2coco and labelme2voc: there are problems with both | ### Provide environment information
labelme 5.2.0.post4
### What OS are you using?
WIN11
### Describe the Bug
When converting polygon annotations, the output contains no coordinates.
### Expected Behavior
_No response_
### To Reproduce
_No response_ | open | 2023-05-08T01:18:16Z | 2023-05-08T01:18:16Z | https://github.com/wkentaro/labelme/issues/1273 | [
"issue::bug"
] | monkeycc | 0 |
GibbsConsulting/django-plotly-dash | plotly | 328 | Parsing Dash initial_arguments broken in v1.6.1 | I've identified a regression in release 1.6.1 that causes a bug when parsing `initial_arguments` as a serialized string.
We have an app that uses `initial_arguments` as follows. The `dash_initial_arguments` is a *string* of serialized JSON that is generated via `views.py` and passed to the Django template via `context`.
```
{% plotly_app name="MyApp" ratio=1 initial_arguments=dash_initial_arguments %}
```
Prior to v1.6.1, this worked as intended. However, there was a [change to dash_wrapper.py](https://github.com/GibbsConsulting/django-plotly-dash/compare/v1.6.0...v1.6.1#diff-7b3d671859d84ee8816d9e86a0705b5e13e3f3b49dc12a2a6aa4caa7a290f89aR466) in v1.6.1 that removed the JSON deserialization logic. Specifically, this change occurred in https://github.com/GibbsConsulting/django-plotly-dash/commit/cddf57559a8dcd12d1cdbb42d95c48b29678ee11.

`initial_arguments` is still a string, but since the parsing has been removed, this now results in the following error:
```
ValueError: dictionary update sequence element #0 has length 1; 2 is required
```
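A possible interim workaround (a sketch, assuming the tag now expects an actual dict rather than a JSON string): build the dict in the view and pass it through the context unserialized. The component id and value below are hypothetical.
```python
# views.py sketch: skip json.dumps() and hand the template a real dict.
from django.shortcuts import render

def my_view(request):
    dash_initial_arguments = {"my-component": {"value": 42}}  # hypothetical ids
    return render(request, "my_template.html",
                  {"dash_initial_arguments": dash_initial_arguments})
```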
@sdementen or @sebastiendementen do you have any context for why this parsing logic was removed? | closed | 2021-03-15T18:17:26Z | 2021-03-18T21:52:31Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/328 | [] | gabegrand | 9 |
jina-ai/serve | machine-learning | 5,358 | Move the old Jina/Docarray relation docs from DocArray documenation to the jina one | # Context
DocArray is moving into its own organization, and references to the Jina project are slowly being removed.
Therefore we will lose this documentation at some point in DocArray: https://docarray.jina.ai/fundamentals/jina-support/
We need to move the content to the Jina documentation, as it is still relevant.
It is more than copy-pasting: the wording needs work, because the result is not Jina info in the DocArray docs but DocArray info in the Jina docs. The content is mostly the same, though | closed | 2022-11-07T10:43:00Z | 2022-11-11T15:40:55Z | https://github.com/jina-ai/serve/issues/5358 | [] | samsja | 0 |
ray-project/ray | machine-learning | 51,596 | [CG, Core] Illegal memory access with Ray 2.44 and vLLM v1 pipeline parallelism | ### What happened + What you expected to happen
We got the following errors when running vLLM v1 PP>1 with Ray 2.44. It was working fine with Ray 2.43.
```
ERROR 03-21 10:34:30 [core.py:343] File "/home/ray/default/vllm/vllm/v1/worker/gpu_model_runner.py", line 1026, in execute_model
ERROR 03-21 10:34:30 [core.py:343] self.intermediate_tensors[k][:num_input_tokens].copy_(
ERROR 03-21 10:34:30 [core.py:343] RuntimeError: CUDA error: an illegal memory access was encountered
ERROR 03-21 10:34:30 [core.py:343] CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
ERROR 03-21 10:34:30 [core.py:343] For debugging consider passing CUDA_LAUNCH_BLOCKING=1
ERROR 03-21 10:34:30 [core.py:343] Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions / Dependencies
- Python 3.11
- CUDA 12.4
- NVIDIA L4 / L40S GPUs
- Ray 2.44
- vLLM 0.8.1 (or any newer commits)
### Reproduction script
```python
from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.0, max_tokens=50)
# Create an LLM.
llm = LLM(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    distributed_executor_backend="ray",
    pipeline_parallel_size=2,
    enforce_eager=False,
)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
Run the script with
```
VLLM_USE_V1=1 python run.py
```
### Issue Severity
High: It blocks me from completing my task. | open | 2025-03-21T17:37:42Z | 2025-03-21T21:35:18Z | https://github.com/ray-project/ray/issues/51596 | [
"bug",
"P0",
"core"
] | comaniac | 0 |
huggingface/transformers | deep-learning | 36,059 | Code for VisionT5Model | ### Feature request
So right now you can't use T5 as the decoder block in VisionEncoderDecoderModel. I wrote code here which almost does that; I'm trying to get some help checking whether it covers everything I need and can be used directly. I am planning to use it for an OCR code base.
```python
import torch
import torch.nn as nn
from torch.nn import CrossEntropyLoss
from typing import Optional, Tuple, Union
from transformers import (
    PreTrainedModel,
    GenerationMixin,
    VisionEncoderDecoderConfig,
    T5Config,
    ViTModel,
)
# T5Stack is not exported at the top level of `transformers`;
# import it from the T5 modeling module instead.
from transformers.models.t5.modeling_t5 import T5Stack
from transformers.modeling_outputs import Seq2SeqLMOutput


class VisionT5Model(PreTrainedModel, GenerationMixin):
    """
    A vision-text model using a ViT-like encoder and a T5 decoder stack.
    It mimics the design of VisionEncoderDecoderModel but replaces the decoder
    with a T5 decoder. Useful for tasks like OCR, image captioning, etc.
    """

    config_class = VisionEncoderDecoderConfig
    base_model_prefix = "vision_t5"
    main_input_name = "pixel_values"

    def __init__(self, config: VisionEncoderDecoderConfig):
        """
        Args:
            config (VisionEncoderDecoderConfig):
                Configuration for the vision-encoder–text-decoder model.
                - config.encoder should be a vision config (e.g. ViTConfig)
                - config.decoder should be a T5Config
        """
        super().__init__(config)

        # ----------------------
        # 1) Load the Vision Encoder
        # ----------------------
        self.encoder = ViTModel(config.encoder)
        # Make sure it does NOT have a "head" for classification etc.
        if self.encoder.get_output_embeddings() is not None:
            raise ValueError("The encoder should not have a LM head; please use a bare vision backbone.")

        # ----------------------
        # 2) Build the T5 decoder stack (no encoder part!)
        # ----------------------
        # We copy the T5 config from config.decoder
        # Then ensure is_decoder=True, is_encoder_decoder=False, etc.
        t5_decoder_config = T5Config.from_dict(config.decoder.to_dict())
        t5_decoder_config.is_decoder = True
        t5_decoder_config.is_encoder_decoder = False
        t5_decoder_config.num_layers = config.decoder.num_layers
        # If you want cross-attention in T5, it must have `add_cross_attention=True`.
        # Usually T5's is_decoder implies that anyway, but just to be safe:
        t5_decoder_config.add_cross_attention = True
        self.decoder = T5Stack(t5_decoder_config)

        # Optionally, if the hidden sizes differ, we need a projection:
        if self.encoder.config.hidden_size != t5_decoder_config.d_model:
            self.enc_to_dec_proj = nn.Linear(
                self.encoder.config.hidden_size, t5_decoder_config.d_model, bias=False
            )
        else:
            self.enc_to_dec_proj = None

        # ----------------------
        # 3) Final LM head (same as T5's)
        # ----------------------
        self.lm_head = nn.Linear(t5_decoder_config.d_model, t5_decoder_config.vocab_size, bias=False)
        if t5_decoder_config.tie_word_embeddings:
            self.lm_head.weight = self.decoder.embed_tokens.weight
        self.model_dim = t5_decoder_config.d_model  # keep track if we want the T5 scaling

        # Initialize weights, etc.
        self.post_init()

    def get_encoder(self):
        return self.encoder

    def get_decoder(self):
        return self.decoder

    def get_input_embeddings(self):
        """By convention, the 'input embeddings' come from the decoder if needed."""
        return self.decoder.embed_tokens

    def set_input_embeddings(self, new_embeddings):
        self.decoder.set_input_embeddings(new_embeddings)

    def get_output_embeddings(self):
        return self.lm_head

    def set_output_embeddings(self, new_embeddings):
        self.lm_head = new_embeddings

    def forward(
        self,
        pixel_values: torch.FloatTensor,
        decoder_input_ids: Optional[torch.LongTensor] = None,
        decoder_attention_mask: Optional[torch.BoolTensor] = None,
        encoder_outputs: Optional[Tuple[torch.FloatTensor]] = None,
        past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        **decoder_kwargs
    ) -> Union[Seq2SeqLMOutput, Tuple[torch.FloatTensor]]:
        """
        pixel_values: (batch, channels, height, width)
            The images to encode (e.g. from ViTFeatureExtractor).
        decoder_input_ids: (batch, tgt_seq_len)
            Input tokens to the T5 decoder.
        labels: (batch, tgt_seq_len)
            If given, we compute LM loss by teacher-forcing and produce CrossEntropyLoss.
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        use_cache = use_cache if use_cache is not None else self.config.decoder.use_cache

        # 1) Run the vision encoder if needed
        if encoder_outputs is None:
            encoder_outputs = self.encoder(pixel_values=pixel_values, return_dict=True)
        # encoder_outputs.last_hidden_state shape => (batch, seq_len, hidden_size)
        hidden_states = encoder_outputs.last_hidden_state

        # Possibly project to match T5 dimension
        if self.enc_to_dec_proj is not None:
            hidden_states = self.enc_to_dec_proj(hidden_states)

        # 2) Prepare decoder inputs
        # If we have labels but no decoder_input_ids, shift-right internally
        if labels is not None and decoder_input_ids is None:
            # Standard T5 shift-right:
            decoder_input_ids = self._shift_right(labels)

        # T5 decoder forward
        decoder_outputs = self.decoder(
            input_ids=decoder_input_ids,
            attention_mask=decoder_attention_mask,
            encoder_hidden_states=hidden_states,
            encoder_attention_mask=None,  # If you want to mask out padding in hidden_states, pass it here
            past_key_values=past_key_values,
            use_cache=use_cache,
            return_dict=True,
            **decoder_kwargs,
        )
        sequence_output = decoder_outputs[0]  # (batch, tgt_len, d_model)

        # 3) Final LM head
        # T5 typically scales by d_model^-0.5 if tie_word_embeddings = True,
        # but you can do that if needed.
        if self.config.decoder.tie_word_embeddings:
            sequence_output = sequence_output * (self.model_dim ** -0.5)
        logits = self.lm_head(sequence_output)

        loss = None
        if labels is not None:
            # Compute standard LM loss
            loss_fct = CrossEntropyLoss(ignore_index=-100)
            loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))

        if not return_dict:
            # Return (loss, logits, past, decoder_outputs, encoder_outputs)
            out = (logits,) + decoder_outputs[1:] + (encoder_outputs,)
            return ((loss,) + out) if loss is not None else out

        return Seq2SeqLMOutput(
            loss=loss,
            logits=logits,
            past_key_values=decoder_outputs.past_key_values,
            decoder_hidden_states=decoder_outputs.hidden_states,
            decoder_attentions=decoder_outputs.attentions,
            cross_attentions=decoder_outputs.cross_attentions,
            encoder_last_hidden_state=hidden_states,
            encoder_hidden_states=encoder_outputs.hidden_states,
            encoder_attentions=encoder_outputs.attentions,
        )

    def prepare_inputs_for_generation(
        self,
        decoder_input_ids,
        past_key_values=None,
        encoder_outputs=None,
        **kwargs,
    ):
        """
        During generation, the `generate()` method calls this to assemble the inputs for each step.
        """
        if past_key_values is not None:
            # we only need to pass the last token of decoder_input_ids
            decoder_input_ids = decoder_input_ids[:, -1:].clone()
        return {
            "pixel_values": None,  # not needed if `encoder_outputs` is already computed
            "decoder_input_ids": decoder_input_ids,
            "past_key_values": past_key_values,
            "encoder_outputs": encoder_outputs,
            "use_cache": kwargs.get("use_cache"),
        }

    def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor) -> torch.Tensor:
        return self._shift_right(labels)

    def _reorder_cache(self, past_key_values, beam_idx):
        # if decoder past is not included in output
        # speedy decoding is disabled and no need to reorder
        if past_key_values is None:
            print("You might want to consider setting `use_cache=True` to speed up decoding")
            return past_key_values

        reordered_decoder_past = ()
        for layer_past_states in past_key_values:
            # get the correct batch idx from layer past batch dim
            # batch dim of `past` is at 2nd position
            reordered_layer_past_states = ()
            for layer_past_state in layer_past_states:
                # need to set correct `past` for each of the four key / value states
                reordered_layer_past_states = reordered_layer_past_states + (
                    layer_past_state.index_select(0, beam_idx.to(layer_past_state.device)),
                )
            if reordered_layer_past_states[0].shape != layer_past_states[0].shape:
                raise ValueError(
                    f"reordered_layer_past_states[0] shape {reordered_layer_past_states[0].shape} and layer_past_states[0] shape {layer_past_states[0].shape} mismatched"
                )
            if len(reordered_layer_past_states) != len(layer_past_states):
                raise ValueError(
                    f"length of reordered_layer_past_states {len(reordered_layer_past_states)} and length of layer_past_states {len(layer_past_states)} mismatched"
                )
            reordered_decoder_past = reordered_decoder_past + (reordered_layer_past_states,)
        return reordered_decoder_past

    def _shift_right(self, labels: torch.LongTensor) -> torch.LongTensor:
        """
        Same shifting that T5 does: pad -> start token -> ... -> y[0..-2]
        """
        # In T5, the decoder_start_token_id is often the same as pad_token_id
        # But check or override as needed.
        decoder_start_token_id = self.config.decoder.decoder_start_token_id
        if decoder_start_token_id is None:
            # default fallback
            decoder_start_token_id = self.config.decoder.pad_token_id
        pad_token_id = self.config.decoder.pad_token_id

        # create shifted ids
        shifted = labels.new_zeros(labels.shape)
        shifted[..., 1:] = labels[..., :-1].clone()
        shifted[..., 0] = decoder_start_token_id
        # replace -100 with pad_token_id
        shifted.masked_fill_(shifted == -100, pad_token_id)
        return shifted
```
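A hypothetical smoke test for the class above (the config values and shapes are assumptions, not part of the original snippet):
```python
import torch
from transformers import ViTConfig, T5Config, VisionEncoderDecoderConfig

cfg = VisionEncoderDecoderConfig.from_encoder_decoder_configs(
    ViTConfig(), T5Config(decoder_start_token_id=0, pad_token_id=0)
)
model = VisionT5Model(cfg)

pixel_values = torch.randn(1, 3, 224, 224)
labels = torch.randint(0, cfg.decoder.vocab_size, (1, 8))
out = model(pixel_values=pixel_values, labels=labels)
print(out.loss, out.logits.shape)  # expected: scalar loss, (1, 8, vocab_size)
```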
### Motivation
For OCR Project
### Your contribution
T5 can be used as a decoder block for vision models | closed | 2025-02-06T05:51:51Z | 2025-02-06T14:49:37Z | https://github.com/huggingface/transformers/issues/36059 | [
"Feature request"
] | SaiMadhusudan | 1 |
zwczou/weixin-python | flask | 25 | Please check the code - what on earth is basestring? |
```
if isinstance(text, basestring):
```
Just these few lines - are you sure it should be `basestring` and not `str`?? | closed | 2018-06-02T10:29:27Z | 2018-07-25T01:48:00Z | https://github.com/zwczou/weixin-python/issues/25 | [] | allphfa | 1 |
plotly/dash | data-visualization | 2,528 | [BUG] Support React.memo() equal function in React functional component development | When I develop a Dash component as a React functional component, if I set the `equal` function to prevent some unnecessary redraws, the component won't be generated after the build:


@T4rk1n | closed | 2023-05-12T06:05:50Z | 2023-05-13T09:53:02Z | https://github.com/plotly/dash/issues/2528 | [] | CNFeffery | 4 |
chezou/tabula-py | pandas | 57 | python Tabula : FileNotFoundError: [WinError 2] The system cannot find the file specified | # Summary of your issue
I'm getting an error while reading a pdf file via tabula
# Environment
Write and check your environment.
- [ ] `python --version`:3 ?
- [ ] `java -version`: 8?
- [ ] OS and its version: Win7 32bit ?
- [ ] Your PDF URL:
# What did you do when you faced the problem?
//write here
below is the code used
## Example code:
```python
import tabula
df = tabula.read_pdf("D:/Users/rag/Documents/GE_Confidential/Projects/GE_Health_Care/pdf/test.pdf")
```
## Output:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-11-1c72e9de1c11> in <module>()
----> 1 df = tabula.read_pdf("D:/Users/rag/Documents/GE_Confidential/Projects/GE_Health_Care/pdf/test.pdf")
D:\Users\rag\AppData\Local\Continuum\Anaconda3\lib\site-packages\tabula\wrapper.py in read_pdf(input_path, output_format, encoding, java_options, pandas_options, multiple_tables, **kwargs)
73
74 try:
---> 75 output = subprocess.check_output(args)
76 finally:
77 if is_url:
D:\Users\rag\AppData\Local\Continuum\Anaconda3\lib\subprocess.py in check_output(timeout, *popenargs, **kwargs)
334
335 return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
--> 336 **kwargs).stdout
337
338
D:\Users\rag\AppData\Local\Continuum\Anaconda3\lib\subprocess.py in run(input, timeout, check, *popenargs, **kwargs)
401 kwargs['stdin'] = PIPE
402
--> 403 with Popen(*popenargs, **kwargs) as process:
404 try:
405 stdout, stderr = process.communicate(input, timeout=timeout)
D:\Users\rag\AppData\Local\Continuum\Anaconda3\lib\subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors)
705 c2pread, c2pwrite,
706 errread, errwrite,
--> 707 restore_signals, start_new_session)
708 except:
709 # Cleanup if the child failed starting.
D:\Users\rag\AppData\Local\Continuum\Anaconda3\lib\subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_start_new_session)
988 env,
989 cwd,
--> 990 startupinfo)
991 finally:
992 # Child is launched. Close the parent's copy of those pipe
FileNotFoundError: [WinError 2] The system cannot find the file specified
```
## What did you intend to be?
I want to read a PDF table and convert it to a DataFrame for further analysis...
If there is any other alternative, please let me know how to do it.
Many thanks in advance...
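A likely cause (hedged): tabula-py shells out to the `java` executable, and a `FileNotFoundError: [WinError 2]` raised from `subprocess.Popen` before any PDF is opened usually means that executable cannot be found. A quick check:
```python
import shutil

# None here means Java is not installed or not on PATH, which is exactly what
# makes subprocess raise "[WinError 2] The system cannot find the file specified".
print(shutil.which("java"))
```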
| closed | 2017-09-28T06:49:36Z | 2020-07-18T23:41:17Z | https://github.com/chezou/tabula-py/issues/57 | [] | Raghav1990 | 7 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,996 | Why is undetected_chromedriver automatically updated? | Why is undetected_chromedriver automatically updated? | open | 2024-08-22T06:17:46Z | 2025-01-15T11:06:17Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1996 | [] | subingxi | 3 |
igorbenav/fastcrud | pydantic | 165 | FastCRUD seems to only be compatible with fastapi>=0.100.0,<0.112.0, is it intentional? | When installing fastcrud via uv I get the following:
```
$ uv add fastcrud==0.15.0
  × No solution found when resolving dependencies:
  ╰─▶ Because fastcrud==0.15.0 depends on fastapi>=0.100.0,<0.112.0 and your project depends on fastapi==0.114.0, we can conclude that your project and
      fastcrud==0.15.0 are incompatible.
      And because your project depends on fastcrud==0.15.0, we can conclude that your project's requirements are unsatisfiable.
```
| closed | 2024-09-18T16:55:16Z | 2024-09-19T01:56:14Z | https://github.com/igorbenav/fastcrud/issues/165 | [] | Mumbawa | 3 |
python-gino/gino | asyncio | 712 | AttributeError: 'asyncpg.pgproto.pgproto.UUID' object has no attribute 'replace' | * GINO version: 1.0.1
* Python version: 3.8.2
* asyncpg version: 0.20.1
* PostgreSQL version: 12.3 (Ubuntu 12.3-1.pgdg20.04+1)
### Description
I'm trying to use a UUID value as the unique ID in my model:
```python
from . import db
from uuid import uuid4
from sqlalchemy.dialects.postgresql import UUID


class User(db.Model):
    __tablename__ = "users"

    id = db.Column(UUID(as_uuid=True), primary_key=True, unique=True, index=True, nullable=False, default=uuid4)
    login = db.Column(db.String(255), nullable=False, unique=True)
    password = db.Column(db.String(255), nullable=True)
    full_name = db.Column(db.String(255))
    last_login = db.Column(db.DateTime, nullable=True)
    is_superuser = db.Column(db.Boolean, nullable=False, default=False)
    is_staff = db.Column(db.Boolean, nullable=False, default=True)
    remark = db.Column(db.String)
```
My controller is
```python
class UserModel(BaseModel):
    login: str
    password: str
    full_name: str
    is_superuser: bool = False
    is_staff: bool = True
    remark: str = None


@router.post("/users")
async def add_user(user: UserModel):
    rv = await User.create(login=user.login,
                           password=user.password,
                           full_name=user.full_name,
                           is_superuser=user.is_superuser,
                           is_staff=user.is_staff,
                           remark=user.remark
                           )
    return rv.to_dict()
```
### What I Did
When I try to post a new user to the DB via the Swagger UI, I get this error:
```
INFO: 127.0.0.1:38548 - "POST /users HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/petr/crm/.venv/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 386, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/fastapi/applications.py", line 181, in __call__
await super().__call__(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/gino_starlette.py", line 79, in __call__
await self.app(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
await route.handle(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
await self.app(scope, receive, send)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 196, in app
raw_response = await run_endpoint_function(
File "/home/petr/crm/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 147, in run_endpoint_function
return await dependant.call(**values)
File "./src/crm/views/users.py", line 30, in add_user
rv = await User.create(login=user.login,
File "/home/petr/crm/.venv/lib/python3.8/site-packages/gino/crud.py", line 444, in _create_without_instance
return await cls(**values)._create(bind=bind, timeout=timeout)
File "/home/petr/crm/.venv/lib/python3.8/site-packages/gino/crud.py", line 478, in _create
for k, v in row.items():
File "/home/petr/crm/.venv/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 207, in items
return [(key, self[key]) for key in self.keys()]
File "/home/petr/crm/.venv/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 207, in <listcomp>
return [(key, self[key]) for key in self.keys()]
File "/home/petr/crm/.venv/lib/python3.8/site-packages/sqlalchemy/dialects/postgresql/base.py", line 1328, in process
value = _python_UUID(value)
File "/usr/lib/python3.8/uuid.py", line 166, in __init__
hex = hex.replace('urn:', '').replace('uuid:', '')
AttributeError: 'asyncpg.pgproto.pgproto.UUID' object has no attribute 'replace'
```
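One workaround some users report for this asyncpg/SQLAlchemy combination (a hedged sketch, not an official fix): drop `as_uuid=True` so SQLAlchemy's result processor does not try to re-parse asyncpg's native UUID object.
```python
# Sketch: the column still maps to PostgreSQL's UUID type, but no Python-side
# uuid.UUID(...) conversion is applied to the value asyncpg returns.
id = db.Column(
    UUID(),  # instead of UUID(as_uuid=True)
    primary_key=True, unique=True, index=True, nullable=False,
    default=lambda: str(uuid4()),
)
```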
| open | 2020-07-24T12:01:49Z | 2024-10-17T20:02:15Z | https://github.com/python-gino/gino/issues/712 | [
"question"
] | PetrMixayloff | 3 |
mwaskom/seaborn | pandas | 3,252 | Figure level plot BUG | When I set the fontdict in g.set_yticklabels(fontdict={'fontsize': 16, 'fontweight': 'bold'}), the y tick labels are lost, while with g.set_xticklabels(fontdict={'fontsize': 16, 'fontweight': 'bold'}) they are not. Is this a BUG?

| closed | 2023-02-09T01:17:55Z | 2023-02-18T12:57:42Z | https://github.com/mwaskom/seaborn/issues/3252 | [] | ryrl9703 | 2 |
aiortc/aiortc | asyncio | 105 | JSON.parse error in examples/server | I am trying out this example: https://github.com/jlaine/aiortc/tree/master/examples/server
I can get to the server. But when I try to start either audio/video I get the following error:
**SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data**
I am seeing this error on my system (Debian), I suspect it might have something to do with that:
**av.AVError: [Errno 1094995529] Invalid data found when processing input: 'demo-instruct.wav'**
Extra remarks:
- demo_instruct.wav is present
Can somebody help me further with this? | closed | 2018-12-04T11:35:25Z | 2018-12-11T10:19:24Z | https://github.com/aiortc/aiortc/issues/105 | [] | yschermer | 5 |
ydataai/ydata-profiling | jupyter | 1,323 | Bug Report: Kurtosis at constant columns values | ### Current Behaviour
I am trying to generate a report but it throws an error.
```
187 descriptive_statistics = Table(
188 [
189 {
190 "name": "Standard deviation",
191 "value": fmt_numeric(summary["std"], precision=config.report.precision),
192 },
193 {
194 "name": "Coefficient of variation (CV)",
195 "value": fmt_numeric(summary["cv"], precision=config.report.precision),
196 },
197 {
198 "name": "Kurtosis",
--> 199 "value": fmt_numeric(
200 summary["kurtosis"], precision=config.report.precision
201 ),
202 },
File /opt/conda/lib/python3.10/site-packages/ydata_profiling/report/formatters.py:232, in fmt_numeric(value, precision)
221 @list_args
222 def fmt_numeric(value: float, precision: int = 10) -> str:
223 """Format any numeric value.
224
225 Args:
(...)
230 The numeric value with the given precision.
231 """
--> 232 fmtted = f"{{:.{precision}g}}".format(value)
233 for v in ["e+", "e-"]:
234 if v in fmtted:
TypeError: unsupported format string passed to NoneType.__format__
```
I think it is because pyspark.sql.functions.kurtosis function returns None for constant columns
```
df.select(kurtosis(df.column_name)).show()
+--------------+
|kurtosis(column_name)|
+--------------+
| null |
+--------------+
```
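Until this is fixed, a possible workaround (a sketch, under the assumption that only constant columns trigger the `None` kurtosis): profile the DataFrame without them.
```python
# Sketch: drop constant columns so the null Spark kurtosis never reaches fmt_numeric.
constant_cols = [c for c in df.columns if df.select(c).distinct().count() <= 1]
report_df = ProfileReport(df.drop(*constant_cols))
```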
### Expected Behaviour
It was expected to generate the report.
### Data Description
My data has two columns in which all the values are constant.
### Code that reproduces the bug
```Python
report_df = ProfileReport(df)
```
### pandas-profiling version
4.1.2
### Dependencies
```Text
pyspark==3.3.2
```
### OS
Linux
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2023-05-10T16:01:58Z | 2024-10-24T11:34:32Z | https://github.com/ydataai/ydata-profiling/issues/1323 | [
"bug 🐛",
"spark :zap:"
] | pedro-tofani | 3 |
JaidedAI/EasyOCR | machine-learning | 489 | The link "https://www.jaided.ai/custom_model.md" is broken; could you provide it again? | closed | 2021-07-13T13:01:48Z | 2021-07-21T07:20:06Z | https://github.com/JaidedAI/EasyOCR/issues/489 | [] | neverstoplearn | 1 |
thunlp/OpenPrompt | nlp | 172 | [Tutorial 2.1 error] TypeError: where(): argument 'other' (position 3) must be Tensor, not int | It happened in tutorial 2.1. Details are as follows:
```
Traceback (most recent call last):
  File "condional_prompt.py", line 112, in <module>
    loss = prompt_model(inputs)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/openprompt/pipeline_base.py", line 449, in forward
    return self._forward(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/openprompt/pipeline_base.py", line 467, in _forward
    logits, labels = self.shift_logits_and_labels(logits, batch['loss_ids'], reference_ids)
  File "/opt/conda/lib/python3.7/site-packages/openprompt/pipeline_base.py", line 434, in shift_logits_and_labels
    shift_input_ids = torch.where(shift_loss_ids>0, shift_input_ids, -100)
TypeError: where(): argument 'other' (position 3) must be Tensor, not int
``` | open | 2022-07-07T02:52:51Z | 2022-07-07T02:52:51Z | https://github.com/thunlp/OpenPrompt/issues/172 | [] | canghongjian | 0 |
alteryx/featuretools | data-science | 2,119 | Adding spark to the latest dependency checker lowered the checker's reported pandas version | Making an issue to track this for now. Possibly pyspark 3.3 will allow us to use pandas 1.4 with pyspark | closed | 2022-06-16T22:19:38Z | 2022-07-11T18:58:21Z | https://github.com/alteryx/featuretools/issues/2119 | [] | rwedge | 1 |
chaoss/augur | data-visualization | 2,998 | New key management hard-crashes on a bad key | Following on from #2994 discussions...
The new key management in v0.8.1 is handy, but it hard-crashes if it encounters a bad key:
```
beat: Starting...
2025-02-14 14:18:13 da9ef9a76ffe augur[7] INFO Retrieved 16 github api keys for use
WARNING: The key 'redacted' is not a valid key. Hint: If valid in past it may have expired
WARNING: The key 'redacted' is not a valid key. Hint: If valid in past it may have expired
WARNING: The key 'redacted' is not a valid key. Hint: If valid in past it may have expired
Protocol Error: <class 'httpx.ProtocolError'>
augur backend start command setup failed
You are not connected to the internet.
Please connect to the internet to run Augur
Consider setting http_proxy variables for limited access installations.
```
Given a long-lived instance is pretty much *guaranteed* to hit a bad or expired key during its lifetime, this should be handled and reported to the user, rather than causing a crash. | closed | 2025-02-17T11:17:11Z | 2025-02-18T11:41:38Z | https://github.com/chaoss/augur/issues/2998 | [] | GregSutcliffe | 11 |
inducer/pudb | pytest | 329 | C-c (KeyboardInterrupt) hangs pudb when running event loop | I am using a library (https://github.com/getsenic/gatt-python/blob/master/gatt/gatt_linux.py) that implements its own event loop. When I call the .run() method in this library, eventually I would like to interrupt it, but it seems that pudb does not either pass through the interrupt or gets hung up itself. This is printed to the console when running an app using this library after it has called .run(), when running with pudb:
```
^CTraceback (most recent call last):
File "/home/clayton/src/ride_track/venv/lib/python3.7/site-packages/gi/_ossighelper.py", line 107, in signal_notify
if condition & GLib.IO_IN:
File "/home/clayton/src/ride_track/venv/lib/python3.7/site-packages/gi/_ossighelper.py", line 107, in signal_notify
if condition & GLib.IO_IN:
File "/usr/lib64/python3.7/bdb.py", line 88, in trace_dispatch
return self.dispatch_line(frame)
File "/home/clayton/src/ride_track/venv/lib/python3.7/site-packages/pudb/debugger.py", line 189, in dispatch_line
raise bdb.BdbQuit
bdb.BdbQuit
```
I basically have to C-z to background the task and then send SIGKILL to get pudb to quit (which is obviously not ideal).
I'll try to poke around some more to figure out what might be going on here, but figured I would file this issue just in case I am doing something obviously incorrect. | open | 2019-02-13T03:32:52Z | 2019-04-17T20:42:46Z | https://github.com/inducer/pudb/issues/329 | [] | craftyguy | 2 |
gradio-app/gradio | deep-learning | 10,839 | Bash API description for Image component is wrong | ### Describe the bug
The `Image` component generates an incorrect description in the bash API documentation. Instead of using the `url` field, it uses the `path` field with a URL.
The provided gradio sketch produces the following example bash message:
```bash
curl -X POST http://127.0.0.1:7860/gradio_api/call/predict -s -H "Content-Type: application/json" -d '{
"data": [
{"path":"https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png"}
]}' \
| awk -F'"' '{ print $4}' \
| read EVENT_ID; curl -N http://127.0.0.1:7860/gradio_api/call/predict/$EVENT_ID
```
First of all, the URL has to be in the `url` field. However, if we do so, the URL is not a base64 data URL and fails to be parsed, with this (incorrect) error message:
```
Image path is None.
```
Either we should have a better error message, or we should implement automatic image download (which would be possible using PIL). Is this not done due to security measures?
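A client-side workaround implied by the behavior above (a sketch; it assumes, as stated, that base64 data URLs are accepted in the `url` field):
```python
# Sketch: inline the image as a base64 data URL instead of a plain http(s) URL.
import base64
import urllib.request

raw = urllib.request.urlopen(
    "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png"
).read()
data_url = "data:image/png;base64," + base64.b64encode(raw).decode()
payload = {"data": [{"url": data_url}]}
```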
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import numpy as np
import gradio as gr
def greet(image: np.ndarray):
return f"Thanks for the image: {image.shape}"
demo = gr.Interface(fn=greet, inputs=gr.Image(), outputs="text")
demo.launch(share=False)
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.22.0
gradio_client version: 1.8.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.3.1
gradio-client==1.8.0 is not installed.
groovy: 0.1.2
httpx: 0.25.0
huggingface-hub: 0.29.3
jinja2: 3.1.2
markupsafe: 2.1.3
numpy: 1.26.0
orjson: 3.9.7
packaging: 23.1
pandas: 2.1.1
pillow: 10.0.1
pydantic: 2.4.2
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.1
ruff: 0.11.0
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.1
tomlkit: 0.13.2
typer: 0.15.2
typing-extensions: 4.8.0
urllib3: 2.0.5
uvicorn: 0.23.2
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2023.9.2
httpx: 0.25.0
huggingface-hub: 0.29.3
packaging: 23.1
typing-extensions: 4.8.0
websockets: 11.0.3
```
### Severity
I can work around it | closed | 2025-03-19T20:28:49Z | 2025-03-21T12:29:05Z | https://github.com/gradio-app/gradio/issues/10839 | [
"bug",
"API"
] | cansik | 2 |
python-gitlab/python-gitlab | api | 2,832 | Feature: add `pages` support in `project` | ## Description of the problem, including code/CLI snippet
I was not able to determine how to access [`/api/v4/projects/:id/pages`](https://docs.gitlab.com/ee/api/pages.html) with python-gitlab. I'm not sure I searched well, but the [`Project` object](https://python-gitlab.readthedocs.io/en/stable/api/gitlab.v4.html#gitlab.v4.objects.Project) does not seem to provide such access.
Would it be possible to add it?
The use case behind this is that I want to provide a public script that adds a badge linking to the deployed pages, which requires such API access.
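For reference, an interim approach that should already work (a sketch using python-gitlab's low-level HTTP helper; the URL, token, and project id are placeholders):
```python
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="TOKEN")
# Low-level call until a typed manager for the Pages endpoint exists:
pages = gl.http_get("/projects/123/pages")  # 123 is a placeholder project id
print(pages)
```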
## Specifications
- python-gitlab version: 4.4.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): 16.9.2
Thanks for your great API wrapper, it's a real improvement over the standard GitLab API! | closed | 2024-03-28T16:27:21Z | 2024-10-08T13:49:40Z | https://github.com/python-gitlab/python-gitlab/issues/2832 | [] | Karreg | 2 |
kizniche/Mycodo | automation | 720 | Error 500 upgrading from 8.2.0 | Error (Full Traceback):
```
Traceback (most recent call last):
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask_restplus/api.py", line 584, in error_router
return original_handler(e)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask_login/utils.py", line 261, in decorated_view
return func(*args, **kwargs)
File "/home/pi/Mycodo/mycodo/mycodo_flask/routes_admin.py", line 539, in admin_upgrade
is_internet=is_internet)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/templating.py", line 140, in render_template
ctx.app,
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/asyncsupport.py", line 76, in render
return original_render(self, *args, **kwargs)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/_compat.py", line 37, in reraise
raise value.with_traceback(tb)
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/admin/upgrade.html", line 2, in top-level template code
{% set active_page = "upgrade" %}
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 1005, in render
return concat(self.root_render_func(self.new_context(vars)))
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/admin/upgrade.html", line 18, in root
objDiv.scrollTop = objDiv.scrollHeight;
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/layout.html", line 335, in root
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/admin/upgrade.html", line 119, in block_body
{{_('No upgrade is available. You are running the latest release, version')}} <a href="https://github.com/kizniche/Mycodo/releases/tag/v{{current_release}}" target="_blank">{{ current_release }}</a>
TypeError: '>' not supported between instances of 'NoneType' and 'int'
```
| closed | 2019-12-09T11:58:52Z | 2019-12-09T15:12:22Z | https://github.com/kizniche/Mycodo/issues/720 | [] | SAM26K | 2 |
youfou/wxpy | api | 38 | Is there a way to send voice messages? | The rough requirement is to send a local audio file as a voice message? | closed | 2017-04-27T07:00:17Z | 2017-04-27T07:22:54Z | https://github.com/youfou/wxpy/issues/38 | [] | RyanKung | 3 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 212 | [BUG] Brief and clear description of the problem | The API is not working now. Please fix this 🥲 | closed | 2023-06-10T05:07:24Z | 2023-06-10T10:01:47Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/212 | [
"BUG",
"enhancement"
] | nimesh-akalanka | 1 |
holoviz/panel | plotly | 7,174 | Feature request: Include Tooltip in Input style widgets | When one is creating a form and wants to include a `pn.widgets.TooltipIcon`, they are required to manually set a label for that input for it to be properly aligned with the input.
Consider the code below. A set of inputs is created, where some of them have tooltips. I have noticed that the label is also part of the widget, and therefore aligning the tooltip to a widget with a label results in it being vertically misaligned compared to the input itself.
Code for creating a `pn.widgets.TooltipIcon` and the current workaround:
```python
import panel as pn

pn.extension()

my_label = pn.widgets.StaticText(
    value="Input with a tooltip",
    align=('start', 'end'),
    height_policy="min",
    margin=(0, 0, 0, 10)
)

my_input = pn.widgets.FloatInput(
    step=1e-2,
    start=0.0,
    end=1.0,
    value=0.3,
    sizing_mode='scale_width',
    height_policy="min",
    margin=(0, 0, 0, 10)
)

my_tooltip = pn.widgets.TooltipIcon(
    value="Very useful tooltip.",
    align="center"
)

pn.WidgetBox(
    pn.widgets.FloatInput(name="First input", sizing_mode='scale_width'),
    pn.Column(my_label, pn.Row(my_input, my_tooltip)),
    pn.Row(
        pn.widgets.TextInput(name="Some other input", sizing_mode='scale_width'),
        pn.widgets.TooltipIcon(
            value="Another tooltip.",
            align="center"
        )
    ),
    width=200
)
```

My proposal would be to take a tooltip parameter in the initializer of these widgets where it could display a properly aligned tooltip icon.
```python
my_input = pn.widgets.FloatInput(
    step=1e-2,
    start=0.0,
    end=1.0,
    value=0.3,
    tooltip="super useful tooltip"
)
``` | closed | 2024-08-20T14:00:18Z | 2024-08-21T06:24:46Z | https://github.com/holoviz/panel/issues/7174 | [] | ambrustorok | 2 |
flasgger/flasgger | api | 584 | Add a CONTRIBUTING file | https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/setting-guidelines-for-repository-contributors | open | 2023-06-20T19:43:39Z | 2023-06-20T19:43:39Z | https://github.com/flasgger/flasgger/issues/584 | [] | reedy | 0 |
samuelcolvin/watchfiles | asyncio | 305 | 'Venv' is not used for some reason? | ### Description
For some reason, `watchfiles` does not use `python` from activated `venv`?
Is it expected or not?
### Steps to reproduce
Yes, I'm sure I have activated venv, and it's being used.
Steps to reproduce:
1. Create `venv`
2. Activate it
3. Install `rich` (or any other dependency)
4. Create the following file (`mre.py`):
```python
import sys
print("Python executable:", sys.executable)
import rich
print(rich.__all__)
```
5. Run file from console to be sure everything's okay:
```
$ python mre.py
Python executable: D:\Code\project\.venv\Scripts\python.exe
['get_console', 'reconfigure', 'print', 'inspect', 'print_json']
```
6. Run same command via `watchfiles`:
```
$ watchfiles "python mre.py" .
[05:21:25] watchfiles v0.23.0 👀 path="D:\Code\project" target="python mre.py" (command) filter=DefaultFilter...
Python executable: C:\Program Files\Python312\python.exe
Traceback (most recent call last):
File "D:\Code\project\mre.py", line 4, in <module>
import rich
ModuleNotFoundError: No module named 'rich'
```
### Operating System & Architecture
Windows-10-10.0.19045-SP0
10.0.19045
### Environment
I tested with both `cmd` and `bash`, and have the same output everywhere. I use `venv`, created via `PyCharm`
### Python & Watchfiles Version
python: 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)], watchfiles: 0.23.0
### Rust & Cargo Version
_No response_
### Additional info:
When I use `watchfiles ".venv/Scripts/python mre.py" .`, everything works as expected, since the python from `venv` is getting used.
`where python` command result:
```
$ where python
D:\Code\project\.venv\Scripts\python.exe
C:\Program Files\Python312\python.exe
``` | open | 2024-10-18T02:23:12Z | 2024-10-18T02:26:29Z | https://github.com/samuelcolvin/watchfiles/issues/305 | [
"bug"
] | Danipulok | 0 |
comfyanonymous/ComfyUI | pytorch | 6,624 | Blank Screen when load ComfyUI on M2 Mac | ### Expected Behavior
Start ComfyUI
### Actual Behavior
When I start ComfyUI I get a blank screen.
### Steps to Reproduce
Start comfyUI
### Debug Logs
```powershell
No errors
To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
In the console of the browser there is this error
Removing unpermitted intrinsics lockdown-install.js:1:52832
Same problem with all browsers
```
### Other
This happened after trying to update ComfyUI, which gave an error and was unable to update | open | 2025-01-28T00:51:36Z | 2025-01-28T17:30:21Z | https://github.com/comfyanonymous/ComfyUI/issues/6624 | [
"Potential Bug",
"MacOS"
] | Creative-comfyUI | 1 |
absent1706/sqlalchemy-mixins | sqlalchemy | 24 | [ActiveRecordMixin] model not persisted unless I call db.session.commit() | When I call `model.create(...)`, it isn't persisted in the database unless I call `db.session.commit()` after. What am I doing wrong? | closed | 2019-09-17T13:02:11Z | 2020-03-09T07:59:00Z | https://github.com/absent1706/sqlalchemy-mixins/issues/24 | [] | jonathanronen | 3 |
donnemartin/system-design-primer | python | 655 | Ggg | open | 2022-03-30T21:05:41Z | 2022-04-23T13:17:58Z | https://github.com/donnemartin/system-design-primer/issues/655 | [
"needs-review"
] | nabpreab4 | 5 |
ray-project/ray | data-science | 51,419 | [Core] map_batches cannot guarantee a stable batch_size, but if drop_last=True is set, it can be guaranteed (although data will be lost). Can we consider adding this parameter to map_batches? | ### Description
map_batches cannot guarantee a stable batch_size, but if drop_last=True is set, it can be guaranteed (although data will be lost). Can we consider adding this parameter to map_batches?
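(A hedged aside: if fixed-size batches are only needed when iterating over the results, `Dataset.iter_batches` already exposes `drop_last`; the snippet below reuses the `ds` built in the use case that follows.)
```python
# Sketch: consumption-side batching with a guaranteed batch size.
for batch in ds.iter_batches(batch_size=16, drop_last=True):
    assert len(batch["value"]) == 16
```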
### Use case
```python
import ray
import numpy as np

ray.init()

def square_root_batch(batch):
    print("len", len(batch["value"]))
    batch["sqrt_value"] = np.sqrt(batch["value"])
    return batch

data = [{"value": float(np.random.randint(1, 100))} for _ in range(600004)]
ds = ray.data.from_items(data)
ds = ds.map_batches(
    square_root_batch,
    concurrency=4,
    batch_size=16,
)
ds.take_all()
```
Many batches do not have length 16.
But when I change `drop_last = True` in `blocks_to_batches` (`data/_internal_block_batching/util.py`), each batch is 16.
Can we add `drop_last` to `map_batches`? | open | 2025-03-17T09:35:39Z | 2025-03-18T17:21:19Z | https://github.com/ray-project/ray/issues/51419 | [
"enhancement",
"triage",
"data"
] | Hanyu96 | 0 |
Avaiga/taipy | automation | 2,441 | [OTHER] Taipy unable to assign a state variable, apparent leak between clients? | ### What went wrong? 🤔
I've been stuck on a really critical bug for the past week. It's preventing pages from loading, and the only current workaround is restarting the server. It occurs seemingly at random, even with automated tests.
The issue arises when a nested dict-like data structure object is saved to a state variable. This object is used to save datasets, figures, values, etc. The size of this data structure is dynamic and is different for each client session, defined by a query parameter in the URL called `client_handle` (important). The data structure is constructed during `on_init()` of the page without any issue, and then assigned to a state variable in a simple line: `state.risk_tree = tree`, which causes the (occasional) error.
The first client to connect to the server (since server startup) always initializes without a problem. Subsequent clients can have the issue, *if they use a different `client_handle`*. For example, assuming each client connects from an isolated browser and creates its own Taipy session:
1. `client_handle = vm_groningen_dev` (initialization OK)
2. `client_handle = vm_gelderland_dev` (initialization OK)
3. `client_handle = vm_limburg_dev` (ERROR)
The error raised by during the initialization of connection #3 (`vm_limburg_dev`) is:
```
File "taipy/gui/utils/_attributes.py", line 26, in _getscopeattr_drill
return attrgetter(name)(gui._get_data_scope())
AttributeError("'types.SimpleNamespace' object has no attribute 'tp_TpExPr_gui_get_adapted_lov_risk_tree_vm_gelderland_dev_root_0_thema_2_rp_0_ind_0_var_CHART_AGG_LOV_str_TPMDL_9_0'")
```
This long attribute id refers to the key which is used to retrieve an element from the data structure (in this case an LOV).
**Note:** in the middle of the attribute id is a part `vm_gelderland_dev` (the `client_handle`). The currently connected client is `vm_limburg_dev`. **This indicates Taipy is trying to bind a callback from another client session, which it obviously cannot find in this session.**
### Expected Behavior
The state variable `state.risk_tree` should be set. 9/10 times this works without a problem.
### Steps to Reproduce Issue
The biggest difficulty is that this bug is not consistent. 9/10 times it works fine, even when I reproduce client page-visit combinations that previously caused an error. So the only way to debug this is by inspecting the logs. I realize this is very little to go on, but since I can't even reliably reproduce the error, creating a minimal example is pretty much impossible.
See [isolated_error.log](https://github.com/user-attachments/files/18706548/bug951_isolated_error.log), which also contains the variables. The actions to produce this:
- The first 5 entries show how a client with `client_handle = vm_gelderland_dev` initializes without an issue.
- Browser is then closed.
- New browser is opened, client with `client_handle = vm_limburg_dev` fails to initialize.
- *Note that the error contains the `client_handle` of the previous client*
The issue occurs in the page's `on_init()`. Constructing the object works without a problem; saving the object to a state variable triggers the error.
```python
<imports>
# Declare page state variable
risk_tree = None
def on_init(state):
# Create the basic tree structure based on client settings. Data is added later in Long-Running Callback
state_var_name = "risk_tree"
tree = TreeRoot(
client_subniveau=state.client_settings["subniveau"],
client_alt_subniveaus=state.client_settings["subniveau_alt"],
state_var_name=state_var_name,
client_handle=state.client_handle,
children=[
ThemaNode(
id=idx,
thema_naam=thema,
risico_profielen=risicos,
state_var_name=state_var_name,
)
for idx, (thema, risicos) in enumerate(state.client_settings["risico_themas"].items())
],
client_settings=state.client_settings,
client_data_ops_fn=apply_client_allpages_data_operations,
)
state.risk_tree = tree # ERROR OCCURS HERE
tpgui.invoke_long_callback(
state=state,
user_function=LRCB_load_risicotree_data,
user_function_args=[
state.get_gui(),
tpgui.get_state_id(state),
tree,
state.client_handle,
state.pg_uri,
],
user_status_function=status_LRCB_tree_loading, # invoked at the end of and possibly during the runtime of user_function
period=10000, # Interval in milliseconds to check the status of the long callback
)
```
After the Long-Running Callback is complete, a Taipy Partial generates the content of the page. As seen below, the GUI elements reference attributes inside the data structure.
```python
def create_taipy_partial_content(tree_node, page_context=None):
"""Create the content for the current node in the Taipy Page format
"""
class_name = f"{tree_node.css_class_name} content-block"
# All other nodes (RisicoNode, IndicatorNode, VariabeleNode)
with tgb.part(class_name=class_name) as content:
if is_thema_node:
# Just title
tgb.text(
f"## {tree_node.get_label_text().upper()}",
mode="markdown",
class_name="card-title",
)
else:
# Title and link to docs
with tgb.layout(columns="2 1"):
tgb.text(
f"**{tree_node.get_label_text().title()}**",
mode="markdown",
class_name="card-title",
)
tgb.button(
"{docs_icon}", # docs_icon is defined in main.py
on_action="{navigate_to_docs}", # navigate_to_docs is defined in main.py
class_name="docs-button",
hover_text="Open documentatie",
)
# Description
tgb.text(f"{tree_node.help}", mode="markdown")
# Figure
if tree_node.figure is not None and tree_node.selected:
tgb.toggle(
value="{"
+ f"{tree_node.state_var_name}['{tree_node.id}']" #client_handle is part of the tree_node.id
+ ".chart_agg_toggle}",
lov="{"
+ f"{tree_node.state_var_name}['{tree_node.id}']"
+ ".CHART_AGG_LOV}",
on_change=callback_chart_agg_toggle,
)
tgb.chart(
figure="{"
+ f"{tree_node.state_var_name}['{tree_node.id}']"
+ ".figure}"
)
# Build children in nested content blocks
for child in tree_node.children:
if child.count_selected() == 0:
continue
create_taipy_partial_content(child)
return content
```
### Runtime Environment
Docker Container: python:3.12-slim-bookworm
### Browsers
Chrome, Firefox, Safari
### Version of Taipy
4.0.2
### Additional Context
```bash
```
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [x] I am willing to work on this issue (optional) | closed | 2025-02-07T12:45:51Z | 2025-03-11T08:58:49Z | https://github.com/Avaiga/taipy/issues/2441 | [
"🟥 Priority: Critical",
"🖰 GUI",
"💥Malfunction",
"Gui: Back-End"
] | LMaiorano | 29 |
pydata/pandas-datareader | pandas | 985 | web.DataReader + "fred": Failed Downloads | Running this in jupyter notebook:
```python
import datetime
import pandas_datareader.data as web

start = datetime.date.today() - datetime.timedelta(days=5 * 365)
end = datetime.date.today()
df = web.DataReader(["sp500", "NASDAQCOM", "CBBTCUSD"], "fred", start, end)
```
Gives me this error:
```
[*********************100%%**********************] 3 of 3 completed
3 Failed downloads:
['CBBTCUSD', 'NASDAQCOM', 'SP500']: Exception('%ticker%: No timezone found, symbol may be delisted')
```
What can I do to fix it? | open | 2023-12-20T03:49:12Z | 2023-12-20T03:49:12Z | https://github.com/pydata/pandas-datareader/issues/985 | [] | dhruvsingh14 | 0 |
tensorflow/tensor2tensor | machine-learning | 1,579 | IndexError: Cannot choose from an empty sequence | ### Description
When I tried to train a deterministic model in the reinforcement learning project, I suddenly got this IndexError at 60000 steps. I didn't change any code in the T2T project. Now I can't continue training. It shows:
```
File "/home/guest/tensor2tensor/tensor2tensor/data_generators/gym_env.py", line 186, in start_new_epoch
    self._load_epoch_data(load_data_dir)
File "/home/guest/tensor2tensor/tensor2tensor/data_generators/gym_env.py", line 531, in _load_epoch_data
    raise ValueError("Some data is missing, the experiment might've been "
ValueError: Some data is missing, the experiment might've been interupted during generating data.
```
### Environment information
```
Linux version 4.18.0-18-generic (buildd@lcy01-amd64-006) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #19~18.04.1-Ubuntu SMP Fri Apr 5 10:22:13 UTC 2019
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.13.1
tensorboard==1.13.1
tensorflow==1.13.1
tensorflow-datasets==1.0.1
tensorflow-estimator==1.13.0
tensorflow-metadata==0.13.0
tensorflow-probability==0.6.0
$ python -V
Python 3.6.8 :: Anaconda, Inc.
```
```
INFO:tensorflow:Timing: 2:35:05.578154
INFO:tensorflow:Setting T2TModel mode to 'infer'
INFO:tensorflow:Setting hparams.dropout to 0.0
INFO:tensorflow:Setting hparams.label_smoothing to 0.0
INFO:tensorflow:Setting hparams.layer_prepostprocess_dropout to 0.0
INFO:tensorflow:Setting hparams.symbol_dropout to 0.0
INFO:tensorflow:Setting hparams.residual_dropout to 0.0
INFO:tensorflow:Using variable initializer: uniform_unit_scaling
INFO:tensorflow:Transforming feature 'input_action' with symbol_modality_6_64.bottom
INFO:tensorflow:Transforming feature 'input_reward' with symbol_modality_3_64.bottom
INFO:tensorflow:Transforming feature 'inputs' with video_modality.bottom
INFO:tensorflow:Transforming feature 'target_action' with symbol_modality_6_64.targets_bottom
INFO:tensorflow:Transforming feature 'target_reward' with symbol_modality_3_64.targets_bottom
INFO:tensorflow:Transforming feature 'targets' with video_modality.targets_bottom
INFO:tensorflow:Building model body
INFO:tensorflow:Transforming body output with video_modality.top
INFO:tensorflow:Transforming body output with symbol_modality_3_64.top
INFO:tensorflow:Restoring checkpoint /home/guest/t2t_train/mb_det_pong_random/world_model/model.ckpt-60000
INFO:tensorflow:Restoring parameters from /home/guest/t2t_train/mb_det_pong_random/world_model/model.ckpt-60000
Traceback (most recent call last):
File "/home/guest/miniconda3/envs/tensor2tensor/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/guest/miniconda3/envs/tensor2tensor/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/guest/tensor2tensor/tensor2tensor/rl/trainer_model_based.py", line 389, in <module>
tf.app.run()
File "/home/guest/miniconda3/envs/tensor2tensor/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/guest/tensor2tensor/tensor2tensor/rl/trainer_model_based.py", line 384, in main
training_loop(hp, FLAGS.output_dir)
File "/home/guest/tensor2tensor/tensor2tensor/rl/trainer_model_based.py", line 356, in training_loop
env, hparams, directories["world_model"], debug_video_path
File "/home/guest/tensor2tensor/tensor2tensor/rl/rl_utils.py", line 158, in evaluate_world_model
subsequence_length + frame_stack_size
File "/home/guest/tensor2tensor/tensor2tensor/rl/rl_utils.py", line 336, in random_rollout_subsequences
return [choose_subsequence() for _ in range(num_subsequences)]
File "/home/guest/tensor2tensor/tensor2tensor/rl/rl_utils.py", line 336, in <listcomp>
return [choose_subsequence() for _ in range(num_subsequences)]
File "/home/guest/tensor2tensor/tensor2tensor/rl/rl_utils.py", line 328, in choose_subsequence
rollout = random.choice(rollouts)
File "/home/guest/miniconda3/envs/tensor2tensor/lib/python3.6/random.py", line 260, in choice
raise IndexError('Cannot choose from an empty sequence') from None
IndexError: Cannot choose from an empty sequence
```
 | open | 2019-05-20T19:30:01Z | 2019-05-20T19:30:01Z | https://github.com/tensorflow/tensor2tensor/issues/1579 | [] | vfleon | 0 |
aio-libs/aiopg | sqlalchemy | 748 | State of the library | Hello,
I notice that there have been only 2 commits merged in the past 2 years. There is nothing wrong with this, especially if nobody is being paid to maintain the library. However, if this library is not likely to see much future development for whatever reason, it might be good to document this prominently, with the following outcomes in mind:
* Allowing potential users to properly set their expectations, since most of the other projects under this organization are very actively maintained
* Attracting attention of anyone who might be willing to financially sponsor work on the library
* Attracting attention of anyone who might be willing to help maintain this library
* Promoting forks
Thank you @asvetlov for all of your work on this library, and elsewhere! You have had an incredible impact on the Python ecosystem. | closed | 2020-11-12T00:22:26Z | 2021-04-05T13:31:44Z | https://github.com/aio-libs/aiopg/issues/748 | [] | dralley | 3 |
slackapi/bolt-python | fastapi | 362 | Running Bolt for Python apps on IBM Cloud Functions (FaaS) | I am trying to use IBM Cloud Functions (FaaS). I tried some steps, but failed :(. Any help with an example would be appreciated.
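Roughly what I tried (an untested sketch; the `main(params)` entry point and the `__ow_body`/`__ow_headers` keys are my assumptions about OpenWhisk "raw" web actions, and I'm not sure `BoltRequest`/`app.dispatch` is the supported pattern):
```python
from slack_bolt import App
from slack_bolt.request import BoltRequest

# finish all work before responding, as recommended for FaaS runtimes
app = App(process_before_response=True)

@app.event("app_mention")
def handle_mention(say):
    say("Hello from IBM Cloud Functions!")

def main(params):  # IBM Cloud Functions (OpenWhisk) action entry point
    bolt_req = BoltRequest(
        body=params.get("__ow_body", ""),
        headers=params.get("__ow_headers", {}),
    )
    bolt_resp = app.dispatch(bolt_req)
    return {
        "statusCode": bolt_resp.status,
        "headers": bolt_resp.headers,
        "body": bolt_resp.body,
    }
```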
IBM Cloud Functions documentation: https://cloud.ibm.com/docs/openwhisk?topic=openwhisk-prep#prep_python_local_virtenv | closed | 2021-06-02T22:21:05Z | 2021-06-15T23:14:31Z | https://github.com/slackapi/bolt-python/issues/362 | [
"question",
"area:adapter"
] | ac427 | 2 |
dynaconf/dynaconf | django | 792 | [docs] Pyinstaller with Dynaconf raising `UnicodeDecodeError` | PyInstaller compiles the Dynaconf modules and loaders, so when _dynaconf.loader.py_loader_ tries to load files from the inspect stack trace, it tries to read compiled .pyc files and fails with a UnicodeDecodeError. The fix is to package dynaconf and python-dotenv[cli] without compiling them, by using the **--collect-all** argument of PyInstaller.
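For example (the entry-script name is hypothetical):
```bash
pyinstaller --collect-all dynaconf --collect-all dotenv your_app.py
```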
_Originally posted by @OmmarShaikh01 in https://github.com/dynaconf/dynaconf/issues/770#issuecomment-1193254565_ | closed | 2022-08-19T14:47:48Z | 2023-03-30T19:50:33Z | https://github.com/dynaconf/dynaconf/issues/792 | [
"Docs"
] | rochacbruno | 1 |
openapi-generators/openapi-python-client | rest-api | 295 | Support for Free-Form Query Parameters | **Is your feature request related to a problem? Please describe.**
Given an API that accepts arbitrarily-named query parameters like:
/my-endpoint/?dynamic_param1=value&dynamic_param2=value2
We'd like to be able to append arbitrary key/values to the query string search.
Given a current YAML snippet like:
```yaml
parameters:
- in: query
name: dynamicFields
schema:
type: object
additionalProperties: true
```
The parameter generated is `schema_field: Union[Unset, ModelNameSchemaField] = UNSET`, and it's also sent as the parameter named `schema_field` instead of using the arbitrary keys.
**Describe the solution you'd like**
Generate the parameter above with `additionalProperties` as `schema_field: Union[Unset, None, Dict[str, Any]] = UNSET`. When `schema_field` is a `dict`, it will then send values for all keys in the query parameters instead of as `schema_field`.
If multiple parameters are defined having `additionalProperties`, it will treat all of them as arbitrary keys. If two parameters were to define the same dynamically named key, we make no guarantees about which one is sent. I imagine it would be the last parameter encountered with `additionalProperties`. Alternatively, we could raise an exception instead of making silent assumptions about the collision.
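For illustration, a call could then look like this (all module, class, and function names here are hypothetical generated code):
```python
from my_api_client import Client
from my_api_client.api.default import get_my_endpoint

client = Client(base_url="https://api.example.com")
get_my_endpoint.sync(
    client=client,
    schema_field={"dynamic_param1": "value", "dynamic_param2": "value2"},
)
```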
**Describe alternatives you've considered**
Rather than modeling the field in Open API, allow every GET method to accept an `additional_properties: Dict[str, str]` parameter which would append all the keys as query parameters. The name `additional_properties` might need to be configurable to avoid collision with APIs using that parameter name already.
| closed | 2021-01-12T16:14:40Z | 2021-02-10T23:40:23Z | https://github.com/openapi-generators/openapi-python-client/issues/295 | [
"✨ enhancement"
] | bowenwr | 8 |
paperless-ngx/paperless-ngx | machine-learning | 8,063 | [BUG] possible bug with trash and filename templates | ### Description
With the latest 2.13.0, I've encountered a few issues after configuring a custom storage path with an expression referencing a custom field. I'm running Paperless on Proxmox/bare metal, installed via tteck's script. There are two issues I've encountered:
1. files aren't being properly deleted from the media directory after deleting a document and emptying the trash
2. export/import ends with the error `documents.models.Document.DoesNotExist: Problem installing fixture '/root/export/manifest.json': Document matching query does not exist.`
### Steps to reproduce
In order to reproduce the issue
1. import any pdf document
2. create new document type magazines
3. create custom field path
4. add new storage path with an expression {{ document_type }} / {{ custom_fields|get_cf_value('path')|replace("-", "/", 2) }} / {{ title }}
5. finally assign the document with
- document type
- add custom field path and set it to aaa/bbb
- configure storage path
So far everything works fine. In the media folder, document is renamed as expected in both archive and originals subdirectory.
Now perform an export using:
`python3 manage.py document_export /root/export`
Then delete the document and empty trash
-> the document is not deleted; instead, it gets moved from the `aaa/bbb` subdirectory to a `None` subdirectory
Also, the import ends with the above-mentioned error message:
-> `documents.models.Document.DoesNotExist: Problem installing fixture '/root/export/manifest.json': Document matching query does not exist`
### Webserver logs
```bash
--
```
### Browser logs
```bash
--
```
### Paperless-ngx version
2.13.0
### Host OS
Debian 12
### Installation method
Bare metal
### System status
```json
{
"pngx_version": "2.13.0",
"server_os": "Linux-6.8.12-2-pve-x86_64-with-glibc2.36",
"install_type": "bare-metal",
"storage": {
"total": 10464022528,
"available": 2987577344
},
"database": {
"type": "postgresql",
"url": "paperlessdb",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0011_remove_mailrule_assign_tag_squashed_0024_alter_mailrule_name_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://localhost:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-10-27T21:14:47.000542Z",
"index_error": null,
"classifier_status": "WARNING",
"classifier_last_trained": null,
"classifier_error": "Classifier file does not exist (yet). Re-training may be pending."
}
}
```
### Browser
Firefox
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-10-27T22:02:44Z | 2024-11-27T03:16:18Z | https://github.com/paperless-ngx/paperless-ngx/issues/8063 | [
"bug",
"backend"
] | horvatkm | 3 |
clovaai/donut | nlp | 57 | ValueError: Unknown split "validation". Should be one of ['train']. | I only run:
```
python train.py --config config/train_cord.yaml --pretrained_model_name_or_path "naver-clova-ix/donut-base" --dataset_name_or_paths '["naver-clova-ix/cord-v2"]' --exp_version "test_experiment"
```
It raises the following error. Why does this happen? Thanks for your help!
```
Traceback (most recent call last):
File "train.py", line 150, in <module>
train(config)
File "train.py", line 87, in train
sort_json_key=config.sort_json_key,
File "/home/donut/donut/util.py", line 64, in __init__
self.dataset = load_dataset(dataset_name_or_path, split=self.split)
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 1644, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 793, in as_dataset
disable_tqdm=False,
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 206, in map_nested
return function(data_struct)
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 817, in _build_single_dataset
in_memory=in_memory,
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 889, in _as_dataset
in_memory=in_memory,
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 213, in read
files = self.get_file_instructions(name, instructions, split_infos)
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 187, in get_file_instructions
name, split_infos, instruction, filetype_suffix=self._filetype_suffix
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 110, in make_file_instructions
absolute_instructions = instruction.to_absolute(name2len)
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 618, in to_absolute
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 618, in <listcomp>
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr
raise ValueError('Unknown split "{}". Should be one of {}.'.format(split, list(name2len)))
ValueError: Unknown split "validation". Should be one of ['train'].
``` | closed | 2022-09-20T07:23:03Z | 2024-06-03T15:17:35Z | https://github.com/clovaai/donut/issues/57 | [] | NearStart | 3 |
slackapi/bolt-python | fastapi | 697 | Can I reuse a single client object for each event? | Hi, I'm trying to build a Slack bot in an OOP-fashioned way.
```python
@app.event('reaction_added')
def handle_reaction_added_event(ack, say, event, client):
ack()
eventHandler.run(client, say, event)
class EventHandler:
def run(self, web_client: WebClient, say: Say, event: Dict[str, Any]):
obj_a = A(web_client)
obj_b = B(say, obj_a)
obj_c = C(event, obj_b)
# work with those objects
```
As seen above, every time I get an event from Slack, I inject the `client`, `say`, and `event` objects into my handler.
In the handler, I build some objects from them. Every time.
So my question is: can I reuse the client created for the first event and use it for later events?
In short, **I want some singleton objects that hold the slack_bolt arguments.**
Below is what I want to make:
```python
@app.event('reaction_added')
def handle_reaction_added_event(ack, say, event, client):
ack()
eventHandler.run(event) # client, say are already injected somehow
class EventHandler:
def run(self, event: Dict[str, Any]):
obj_c = C(event, obj_b) # obj_b is singleton
# ...
```
#### The `slack_bolt` version
slack-bolt==1.14.3
#### Python runtime version
3.9.13
#### OS info
ProductName: macOS
ProductVersion: 12.5
BuildVersion: 21G72
Darwin Kernel Version 21.6.0: Sat Jun 18 17:07:22 PDT 2022; root:xnu-8020.140.41~1/RELEASE_ARM64_T6000
| closed | 2022-08-07T03:26:19Z | 2022-08-07T13:37:36Z | https://github.com/slackapi/bolt-python/issues/697 | [
"question"
] | roeniss | 2 |
MaartenGr/BERTopic | nlp | 2,000 | Loading of saved model returns Error: "This BERTopic instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator." | **Edit: Nvm, I just forgot to actually save the loading into a variable...
Previously saved several models using the following code:
```python3
from sklearn.feature_extraction.text import CountVectorizer
from bertopic.representation import KeyBERTInspired, PartOfSpeech, MaximalMarginalRelevance
main_representation_model = KeyBERTInspired()
aspect_representation_model1 = PartOfSpeech("en_core_web_sm")
aspect_representation_model2 = [KeyBERTInspired(top_n_words=30), MaximalMarginalRelevance(diversity=.5)]
representation_model = {
"Main": main_representation_model,
"Aspect1": aspect_representation_model1,
"Aspect2": aspect_representation_model2
}
vectorizer_model = CountVectorizer(min_df=5, stop_words = 'english')
topic_mdl = BERTopic(nr_topics = 'auto', vectorizer_model = vectorizer_model,
representation_model = representation_model, verbose=True)
apps = ['Assetto Corsa', 'Assetto Corsa Competizione', 'Beat Saber', 'CarX Drift Racing Online', 'DCS World Steam Edition', 'DEVOUR', 'Golf It!', 'Gorilla Tag', 'Hand Simulator', 'Microsoft Flight Simulator 40th Anniversary Edition',
'No_Mans_Sky', 'Paint the Town Red', 'Pavlov VR', 'Phasmophobia', 'Rec Room', 'STAR WARS™: Squadrons', 'Tabletop Simulator', 'VRChat', 'VTOL VR', 'War Thunder']
app = apps[5]
docs = dfs_reviews[app]
topic, ini_probs = topic_mdl.fit_transform(docs)
topics_info = get_topic_stats(topic_mdl)
# Saving model
topic_mdl.save(f'./topic_models/{app}', serialization='safetensors', save_ctfidf=True)
```
Then, when I tried loading it and visualising it using the bar chart:
```python3
topic_mdl.load(f'./topic_models/{app[3]}')
topic_mdl.visualize_barchart(top_n_topics = 16, n_words = 10)
```
it gives the following error:
```
{
"name": "ValueError",
"message": "This BERTopic instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[25], line 5
1 app = ['Assetto Corsa', 'Assetto Corsa Competizione', 'Beat Saber', 'CarX Drift Racing Online', 'DCS World Steam Edition', 'DEVOUR', 'Golf It!', 'Gorilla Tag', 'Hand Simulator', 'Microsoft Flight Simulator 40th Anniversary Edition',
2 'No_Mans_Sky', 'Paint the Town Red', 'Pavlov VR', 'Phasmophobia', 'Rec Room', 'STAR WARS™: Squadrons', 'Tabletop Simulator', 'VRChat', 'VTOL VR', 'War Thunder']
4 topic_mdl.load(f'./topic_models/{app[3]}')
----> 5 topic_mdl.get_topic_info()
File ~/Uni Codes/Thesis/Web-Scraper/env/lib/python3.10/site-packages/bertopic/_bertopic.py:1514, in BERTopic.get_topic_info(self, topic)
1499 def get_topic_info(self, topic: int = None) -> pd.DataFrame:
1500 \"\"\" Get information about each topic including its ID, frequency, and name.
1501
1502 Arguments:
(...)
1512 ```
1513 \"\"\"
-> 1514 check_is_fitted(self)
1516 info = pd.DataFrame(self.topic_sizes_.items(), columns=[\"Topic\", \"Count\"]).sort_values(\"Topic\")
1517 info[\"Name\"] = info.Topic.map(self.topic_labels_)
File ~/Uni Codes/Thesis/Web-Scraper/env/lib/python3.10/site-packages/bertopic/_utils.py:76, in check_is_fitted(topic_model)
72 msg = (\"This %(name)s instance is not fitted yet. Call 'fit' with \"
73 \"appropriate arguments before using this estimator.\")
75 if topic_model.topics_ is None:
---> 76 raise ValueError(msg % {'name': type(topic_model).__name__})
ValueError: This BERTopic instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator."
}
```
I can't seem to find anything related to this online. I read in issue #1584 that I shouldn't be fitting it again, as that would defeat the whole point of saving it in the first place.
Any help would be greatly appreciated.
Thank you | closed | 2024-05-21T09:52:37Z | 2024-05-21T10:17:15Z | https://github.com/MaartenGr/BERTopic/issues/2000 | [] | HeinzJS | 0 |
pydata/bottleneck | numpy | 437 | use move_mean and set window&min_count =1, diff result | ```
import bottleneck as bn
a = [0.008196721311475436, -0.01626016260162607, 0.012396694214876205, -0.016326530612245076, 0.008298755186722151, 0.004115226337448442, 0.0, -0.008196721311475436, -0.008264462809917252, -0.00416666666666668, -0.012552301255230165, -0.012711864406779568, 0.017167381974248982, 0.008438818565400736, 0.004184100418410055, -0.00833333333333336, 0.008403361344537843, -0.01666666666666672, 0.0, 0.016949152542372937, -0.00416666666666668, 0.0, 0.004184100418410055, -0.012499999999999907, -0.004219409282700569, 0.004237288135593369, -0.004219409282700569, -0.025423728813559268, -0.02173913043478254, 0.013333333333333234, 0.0, 0.039473684210526445, 0.05063291139240508, 0.012048192771084246, -0.007936507936507962, -0.003999999999999886, -0.00803212851405625, 0.01619433198380546, -0.01593625498007948, 0.0, -0.016194331983805717, 0.008230452674897144, 0.008163265306122474, -0.012145748987854418, -0.012295081967213154, -0.024896265560165793, -0.012765957446808685, 0.004310344827586358, -0.008583690987124627, 0.0, 0.05194805194805212, 0.008230452674897144, -0.016326530612245076, -0.008298755186721888, 0.0, -0.01673640167364009, 0.02127659574468078, -0.012499999999999907, 0.0, -0.004219409282700569, 0.004237288135593369, -0.012658227848101439, 0.004273504273504423, 0.008510638297872367, -0.00843881856540087, -0.012765957446808685, 0.0, -0.008620689655172306, 0.017391304347826004, -0.004273504273504152, 0.008583690987124491, 0.004255319148936049, 0.004237288135593369, 0.0, 0.004219409282700301, 0.004201680672268921, -0.004184100418410055, 0.008403361344537843, -0.020833333333333266, -0.012765957446808685, 0.004310344827586358, 0.008583690987124491, -0.008510638297872367, 0.0042918454935621094, -0.004273504273504152, -0.004291845493562381, 0.004310344827586358, -0.004291845493562381, 0.017241379310344883, 0.012711864406779702, -0.008368200836819977, 0.021097046413501908, 0.012396694214876205, 0.012244897959183583, 0.0, 0.0, -0.008064516129032284, -0.004065040650406388, -0.012244897959183841, -0.004132231404958691, 0.008298755186722151, -0.01646090534979429, 0.004184100418410055, 0.004166666666666548, 0.0, 0.008298755186722151, 0.041152263374485465, -0.02371541501976267, -0.016194331983805717, -0.024691358024691305, -0.02109704641350231, 0.0, 0.0129310344827588, 0.017021276595744598, -0.012552301255230165, 0.0, 0.012711864406779702, 0.03347280334728044, 0.0]
b = bn.move_mean(a, window=1, min_count=1)
for item1, item2 in zip(a[-60:], b[-60:]):
print(item1, item2)
``` | open | 2023-09-06T02:47:21Z | 2024-04-09T00:02:10Z | https://github.com/pydata/bottleneck/issues/437 | [
"bug"
] | gucasbrg | 0 |
dot-agent/nextpy | pydantic | 123 | Introduce @xt.method Decorator for AI Code Generation Compatibility |
We could introduce `@xt.method` decorator in Nextpy for defining event handlers within state classes. This feature is intended to enhance code readability, standardize the declaration of methods handling state changes, and align with AI code generation practices.
## Current Behavior
Currently, Nextpy requires methods within state classes to be defined directly, without specific decorators. This approach is functional but does not distinguish between regular methods and event handlers explicitly designed to modify the state.
## Proposed Behavior
The introduction of the `@xt.method` decorator would allow developers to clearly mark methods in the state class as event handlers. This not only improves code readability but also aligns with AI code generation patterns, where such decorators are often included by default. It could also facilitate additional framework optimizations or checks.
For example:
```python
@xt.method(ToDoState)
def delete_todo(state, todo):
state.todos.remove(todo)
```
## Benefits
- **Improved Code Readability and Maintainability**: Clearly distinguishes state-modifying methods from regular class methods.
- **Alignment with AI Code Generation**: Aligns with default practices of AI code generation tools, which often include method decorators in their outputs.
| open | 2024-01-16T10:40:55Z | 2024-01-16T10:40:55Z | https://github.com/dot-agent/nextpy/issues/123 | [
"AI Code Gen"
] | anubrag | 0 |
scikit-hep/awkward | numpy | 2,377 | `ak.flatten` raises `np.AxisError` for `unknown[unknown]`, but not for `unknown` | ### Version of Awkward Array
2.1.1
### Description and code to reproduce
To reproduce the problem;
```python3
[ins] In [1]: import awkward as ak
...: ak.__version__
Out[1]: '2.1.1'
[ins] In [3]: empty = ak.Array([])
...: ak.flatten(empty[empty])
---------------------------------------------------------------------------
AxisError Traceback (most recent call last)
Cell In[3], line 1
----> 1 ak.flatten(empty[empty])
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/operations/ak_flatten.py:164, in flatten(array, axis, highlevel, behavior)
12 """
13 Args:
14 array: Array-like data (anything #ak.to_layout recognizes).
(...)
158 999]
159 """
160 with ak._errors.OperationErrorContext(
161 "ak.flatten",
162 {"array": array, "axis": axis, "highlevel": highlevel, "behavior": behavior},
163 ):
--> 164 return _impl(array, axis, highlevel, behavior)
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/operations/ak_flatten.py:232, in _impl(array, axis, highlevel, behavior)
229 return wrap_layout(out, behavior, highlevel, like=array)
231 else:
--> 232 out = ak._do.flatten(layout, axis)
233 return wrap_layout(out, behavior, highlevel, like=array)
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/_do.py:253, in flatten(layout, axis)
252 def flatten(layout: Content, axis: int = 1) -> Content:
--> 253 offsets, flattened = layout._offsets_and_flattened(axis, 1)
254 return flattened
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/contents/numpyarray.py:415, in NumpyArray._offsets_and_flattened(self, axis, depth)
412 return self.to_RegularArray()._offsets_and_flattened(axis, depth)
414 else:
--> 415 raise ak._errors.wrap_error(
416 np.AxisError(f"axis={axis} exceeds the depth of this array ({depth})")
417 )
AxisError: while calling
ak.flatten(
array = <Array [] type='0 * int64'>
axis = 1
highlevel = True
behavior = None
)
Error details: axis=1 exceeds the depth of this array (1)
```
where as by contrast, in the older versions;
```python3
[ins] In [1]: import awkward as ak
...: print(ak.__version__)
1.8.0
[nav] In [2]: empty = ak.Array([])
...: ak.flatten(empty[empty])
Out[2]: <Array [] type='0 * unknown'>
```
This does seem like a bug to me; we can't guarantee that every list we call flatten on will have items in it.
It's not impossible that it's related to this bug: https://github.com/scikit-hep/awkward/issues/2207
I will check out the repo some time and see if that fix solves it.
| open | 2023-04-08T13:13:10Z | 2023-07-02T18:04:35Z | https://github.com/scikit-hep/awkward/issues/2377 | [
"bug"
] | HenryDayHall | 2 |
Lightning-AI/pytorch-lightning | data-science | 20,220 | Can no longer install versions 1.5.10-1.6.5 | ### Bug description
Hey everyone,
I have been working on the same server for the past few months (w/ an RTX 6000) without issue.
Recently, I tried to re-install lightning 1.5.10 (new virtual environment, Python 3.9.18) and got the error below.
I tried versions up to 1.6.5 with the same error.
I can't use the newest version, as that would require a torch upgrade (currently using 1.13.1 due to specific versioning issues).
This popped up in the last month; I'm wondering if anyone else is seeing this problem, or if it is to be expected for some reason?
Thanks,
Jonathan
### What version are you seeing the problem on?
v1.x
### How to reproduce the bug
```python
Create a virtual environment with python 3.9.18
Activate
pip install pytorch-lightning==1.5.10
```
### Error messages and logs
```
ERROR: Could not find a version that satisfies the requirement pytorch-lightning==1.5.10 (from versions: 0.0.2, 0.2, 0.2.2, 0.2.3, 0.2.4, 0.2.4.1, 0.2.5, 0.2.5.1, 0.2.5.2, 0.2.6, 0.3, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.4.1, 0.3.5, 0.3.6, 0.3.6.1, 0.3.6.3, 0.3.6.4, 0.3.6.5, 0.3.6.6, 0.3.6.7, 0.3.6.8, 0.3.6.9, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.5.0, 0.5.1, 0.5.1.2, 0.5.1.3, 0.5.2, 0.5.2.1, 0.5.3, 0.5.3.1, 0.5.3.2, 0.5.3.3, 0.6.0, 0.7.1, 0.7.3, 0.7.5, 0.7.6, 0.8.1, 0.8.3, 0.8.4, 0.8.5, 0.9.0, 0.10.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, 1.1.8, 1.2.0rc0, 1.2.0rc1, 1.2.0rc2, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.2.8, 1.2.9, 1.2.10, 1.3.0rc1, 1.3.0rc2, 1.3.0rc3, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.3.7.post0, 1.3.8, 1.4.0rc0, 1.4.0rc1, 1.4.0rc2, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.7.0rc0, 1.7.0rc1, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.8.0rc0, 1.8.0rc1, 1.8.0rc2, 1.8.0, 1.8.0.post1, 1.8.1, 1.8.2, 1.8.3, 1.8.3.post0, 1.8.3.post1, 1.8.3.post2, 1.8.4, 1.8.4.post0, 1.8.5, 1.8.5.post0, 1.8.6, 1.9.0rc0, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 2.0.0rc0, 2.0.0, 2.0.1, 2.0.1.post0, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.9.post0, 2.1.0rc0, 2.1.0rc1, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.2.0rc0, 2.2.0, 2.2.0.post0, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.3.0, 2.3.1, 2.3.2, 2.3.3, 2.4.0)
ERROR: No matching distribution found for pytorch-lightning==1.5.10
```
### Environment
<details>
<summary>Current environment</summary>
```
* CUDA:
- GPU:
- NVIDIA RTX 6000 Ada Generation
- available: True
- version: 11.6
* Lightning:
- pytorch-tabnet: 3.0.0
- torch: 1.13.1+cu116
- torchaudio: 0.13.1+cu116
- torchmetrics: 0.11.0
- torchvision: 0.14.1+cu116
* Packages:
- absl-py: 1.3.0
- aiohttp: 3.8.3
- aiosignal: 1.3.1
- alembic: 1.13.2
- aniso8601: 9.0.1
- antlr4-python3-runtime: 4.9.3
- association-metrics: 0.0.1
- asttokens: 2.2.1
- async-timeout: 4.0.2
- attrs: 22.2.0
- autocommand: 2.2.2
- backcall: 0.2.0
- backports.tarfile: 1.2.0
- brotlipy: 0.7.0
- cachetools: 5.2.0
- category-encoders: 2.2.2
- certifi: 2020.6.20
- cffi: 1.17.0
- charset-normalizer: 2.1.1
- click: 8.1.3
- cloudpickle: 3.0.0
- comm: 0.1.2
- configparser: 5.3.0
- contourpy: 1.0.6
- cycler: 0.11.0
- databricks-cli: 0.17.4
- databricks-sdk: 0.30.0
- datasets: 2.10.1
- debugpy: 1.6.5
- decorator: 5.1.1
- deprecated: 1.2.14
- dill: 0.3.6
- docker: 7.1.0
- docker-pycreds: 0.4.0
- einops: 0.3.0
- entrypoints: 0.4
- executing: 1.2.0
- filelock: 3.9.0
- flask: 2.2.3
- fonttools: 4.38.0
- frozenlist: 1.3.3
- fsspec: 2022.11.0
- future: 0.18.2
- gitdb: 4.0.10
- gitpython: 3.1.30
- google-auth: 2.15.0
- google-auth-oauthlib: 0.4.6
- gputil: 1.4.0
- graphene: 3.3
- graphql-core: 3.2.3
- graphql-relay: 3.2.0
- greenlet: 3.0.3
- grpcio: 1.51.1
- gunicorn: 22.0.0
- huggingface-hub: 0.13.0
- idna: 3.4
- importlib-metadata: 6.0.0
- importlib-resources: 6.4.0
- inflect: 7.3.1
- ipykernel: 6.19.4
- ipython: 8.8.0
- ipywidgets: 8.0.4
- itsdangerous: 2.1.2
- jaraco.context: 5.3.0
- jaraco.functools: 4.0.1
- jaraco.text: 3.12.1
- jedi: 0.18.2
- jinja2: 3.1.4
- joblib: 1.2.0
- jupyter-client: 7.4.8
- jupyter-core: 5.1.2
- jupyterlab-widgets: 3.0.5
- kiwisolver: 1.4.4
- kornia: 0.7.3
- kornia-rs: 0.1.5
- llvmlite: 0.43.0
- mako: 1.3.5
- markdown: 3.4.1
- markupsafe: 2.1.1
- matplotlib: 3.6.2
- matplotlib-inline: 0.1.6
- mlflow: 2.15.1
- mlflow-skinny: 2.15.1
- more-itertools: 10.3.0
- multidict: 6.0.4
- multiprocess: 0.70.14
- nest-asyncio: 1.5.6
- numba: 0.60.0
- numpy: 1.24.2
- oauthlib: 3.2.2
- omegaconf: 2.3.0
- opentelemetry-api: 1.26.0
- opentelemetry-sdk: 1.26.0
- opentelemetry-semantic-conventions: 0.47b0
- ordered-set: 4.1.0
- packaging: 22.0
- pandas: 1.1.5
- parso: 0.8.3
- patsy: 0.5.3
- pexpect: 4.8.0
- pickleshare: 0.7.5
- pillow: 9.4.0
- pip: 24.2
- platformdirs: 2.6.2
- plotly: 4.14.3
- ply: 3.11
- promise: 2.3
- prompt-toolkit: 3.0.36
- protobuf: 3.20.3
- psutil: 5.9.4
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- pyarrow: 11.0.0
- pyasn1: 0.4.8
- pyasn1-modules: 0.2.8
- pycparser: 2.22
- pydeprecate: 0.3.1
- pygments: 2.14.0
- pyjwt: 2.6.0
- pyparsing: 3.0.9
- pyqt5-sip: 12.11.0
- python-dateutil: 2.8.2
- pytorch-tabnet: 3.0.0
- pytz: 2022.7
- pyyaml: 5.4.1
- pyzmq: 24.0.1
- querystring-parser: 1.2.4
- regex: 2022.10.31
- requests: 2.28.1
- requests-oauthlib: 1.3.1
- responses: 0.18.0
- retrying: 1.3.4
- rsa: 4.9
- scikit-learn: 1.2.0
- scipy: 1.10.0
- seaborn: 0.12.2
- sentry-sdk: 1.12.1
- setuptools: 72.1.0
- shap: 0.45.0
- shortuuid: 1.0.11
- six: 1.16.0
- slicer: 0.0.7
- smmap: 5.0.0
- sqlalchemy: 2.0.32
- sqlparse: 0.5.1
- stack-data: 0.6.2
- statsmodels: 0.13.5
- subprocess32: 3.5.4
- tabulate: 0.9.0
- tensorboard: 2.11.0
- tensorboard-data-server: 0.6.1
- tensorboard-plugin-wit: 1.8.1
- threadpoolctl: 3.1.0
- tokenizers: 0.13.2
- tomli: 2.0.1
- torch: 1.13.1+cu116
- torchaudio: 0.13.1+cu116
- torchmetrics: 0.11.0
- torchvision: 0.14.1+cu116
- tornado: 6.2
- tqdm: 4.64.1
- traitlets: 5.8.0
- transformers: 4.26.1
- typeguard: 4.3.0
- typing-extensions: 4.12.2
- urllib3: 1.26.13
- wandb: 0.10.11
- watchdog: 2.2.1
- wcwidth: 0.2.5
- webencodings: 0.5.1
- werkzeug: 2.2.2
- wheel: 0.43.0
- widgetsnbextension: 4.0.5
- wrapt: 1.16.0
- xxhash: 3.2.0
- yarl: 1.8.2
- zipp: 3.11.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.9.18
- release: 6.2.0-37-generic
- version: #38~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 2 18:01:13 UTC 2
```
</details>
### More info
_No response_ | open | 2024-08-21T20:09:48Z | 2025-02-02T23:52:06Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20220 | [
"bug",
"dependencies"
] | JonathanBhimani-Burrows | 14 |
flairNLP/flair | pytorch | 3,444 | [Bug]: optimizer state not saved | ### Describe the bug
Thank you for developing and maintaining this invaluable module!
We would like to save the state of the optimizer at the end of each epoch.
The `save_optimizer_state` parameter of the `fine_tune` function seems to be designed for this purpose.
However, the state of the optimizer is not saved even if we set `save_optimizer_state=True`.
Thank you!
### To Reproduce
```python
%pip install scipy==1.10.1 datasets transformers torch==2.0 flair==0.13.1
import torch
import flair
from flair.data import Corpus
from flair.datasets import TREC_6
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer
# 1. get the corpus
corpus: Corpus = TREC_6()
# 2. what label do we want to predict?
label_type = 'question_class'
# 3. create the label dictionary
label_dict = corpus.make_label_dictionary(label_type=label_type)
# 4. initialize transformer document embeddings (many models are available)
document_embeddings = TransformerDocumentEmbeddings('distilbert-base-uncased', fine_tune=True)
# 5. create the text classifier
classifier = TextClassifier(document_embeddings, label_dictionary=label_dict, label_type=label_type)
# 6. initialize trainer
trainer = ModelTrainer(classifier, corpus)
# 7. run training with fine-tuning
trainer.fine_tune('resources/taggers/question-classification-with-transformer',
learning_rate=5.0e-5,
mini_batch_size=4,
max_epochs=10,
save_optimizer_state=True,
save_model_each_k_epochs=1
)
checkpoint = torch.load('resources/taggers/question-classification-with-transformer/model_epoch_1.pt', map_location=flair.device)
```
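A quick way to confirm what is (not) saved is to inspect the checkpoint contents (no particular key name is assumed here):
```python
print(checkpoint.keys())  # expected to include some optimizer state entry, but none is present
```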
### Expected behavior
When `save_optimizer_state` is `true`, the checkpoint contains the state_dict of the optimizer.
### Logs and Stack traces
_No response_
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
#### Versions:
##### Flair
0.13.1
##### Pytorch
2.0.0+cu117
##### Transformers
4.40.0
#### GPU
True | open | 2024-04-19T08:55:10Z | 2025-03-13T00:52:17Z | https://github.com/flairNLP/flair/issues/3444 | [
"bug"
] | chelseagzr | 2 |
yihong0618/running_page | data-visualization | 208 | TODO: 3D heat map | Hey, one more small suggestion: could this stats chart be made 3D, with the height of each bar defined by the distance of each run, like below?

I generated this kind of 3D stats chart using this author's script. Of course, my skills are limited and I don't understand how it works internally; it's just a small suggestion 🙈
——————> https://github.com/yoshi389111/yoshi389111 <———————— | closed | 2022-02-16T12:35:43Z | 2022-02-18T07:37:27Z | https://github.com/yihong0618/running_page/issues/208 | [] | sun0225SUN | 3 |
harry0703/MoneyPrinterTurbo | automation | 74 | Some files conflict when the web page is accessed from multiple clients; is running multiple tasks in parallel currently unsupported? | I opened two pages and ran two tasks at the same time; one succeeded and the other reported:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'final-1.mp4.tempTEMP_MPY_wvf_snd.mp3'
Traceback:
So running multiple tasks in parallel is not currently supported? | closed | 2024-03-27T09:09:02Z | 2024-04-10T09:33:31Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/74 | [
"bug"
] | HuangXihuang | 4 |
gee-community/geemap | jupyter | 248 | Add support for creating cartoee maps with an interactive GUI |

 | open | 2020-12-31T13:59:14Z | 2020-12-31T13:59:14Z | https://github.com/gee-community/geemap/issues/248 | [
"Feature Request"
] | giswqs | 0 |
babysor/MockingBird | pytorch | 440 | [Long-term] Cross-language support | ### Existing discussions
#142
#197
### Q & A
[LxnChan](https://github.com/LxnChan)
Given enough samples, I'd like to train a Japanese model myself; I'm not sure whether that's feasible.
Answer: Just like turning a Chinese sentence such as "你好" into "ni2 hao3", you only need to find a TTS front end that processes Japanese into phonemes.
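A minimal sketch of that idea (library choices are my assumptions, e.g. `pypinyin` for Chinese and `pykakasi` for Japanese romanization; MockingBird's actual front end may differ):
```python
from pypinyin import lazy_pinyin, Style
import pykakasi

# Chinese -> numbered pinyin (without tone sandhi this prints "ni3 hao3")
print(" ".join(lazy_pinyin("你好", style=Style.TONE3)))

# Japanese -> romaji, which a front end could then map to phonemes
kks = pykakasi.kakasi()
print(" ".join(item["hepburn"] for item in kks.convert("こんにちは")))
```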
> The author has recently been limited in bandwidth and can only handle some small bugs single-handedly. I've also seen quite a few enthusiasts and developers in the issues area who want to learn or adapt the project to better fit their own needs, but the discussion is scattered and hard to develop. To let the project and the AI keep providing more value to everyone, and to learn together, I'm creating long-term discussion channels in the issues area by topic; if a thread gets more than 20 comments, a corresponding chat group will also be created.
> - How to tune parameters for more realistic cloning results 435
> - How to modify the model for better results 436
> - Training to clone a specific person's voice & fine-tuning 437
> - Academic/paper discussion/training analysis 438
> - Cross-language support 440
> - Engineering/new-scenario discussion (absolutely no evil & legal compliance) 439
| open | 2022-03-07T15:23:58Z | 2022-05-16T04:15:04Z | https://github.com/babysor/MockingBird/issues/440 | [
"discussion"
] | babysor | 11 |
PeterL1n/BackgroundMattingV2 | computer-vision | 137 | ##If I have no background image, this algorithm is broken!!! | Thanks for your code~ But I have a serious question. If I have only one image and no background image, how can I get the foreground objects? | closed | 2021-07-23T05:17:53Z | 2021-07-23T23:59:59Z | https://github.com/PeterL1n/BackgroundMattingV2/issues/137 | [] | junleen | 1 |
keras-team/keras | deep-learning | 20,542 | model.fit - class_weight broken | It seems `argmax` returns dtype=int64 in the true case while int32 is returned in the false case, so the two `tf.cond` branch outputs do not match.
https://github.com/keras-team/keras/blob/a503a162fc5b4120a96a1f7203a1de841f0601e2/keras/src/trainers/data_adapters/tf_dataset_adapter.py#L129-L133
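A possible direction for a fix (a sketch only; the exact upstream code at those lines may differ) is to force both branches onto a common dtype, e.g. via `output_type=tf.int32` on the `argmax` branch:
```python
import tensorflow as tf

y = tf.constant([[0.1, 0.9], [0.8, 0.2]])  # illustrative batch of soft labels

y_classes = tf.__internal__.smart_cond.smart_cond(
    y.shape.rank == 2 and y.shape[1] > 1,
    lambda: tf.argmax(y, axis=1, output_type=tf.int32),  # argmax defaults to int64
    lambda: tf.cast(tf.round(tf.squeeze(y, axis=-1)), tf.int32),
)
print(y_classes)  # tf.Tensor([1 0], shape=(2,), dtype=int32)
```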
Stacktrace:
```Python traceback
Traceback (most recent call last):
File "/home/example/workspace/fir/trainer/train.py", line 122, in <module>
model.fit(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 113, in error_handler
return fn(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 282, in fit
epoch_iterator = TFEpochIterator(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 664, in __init__
super().__init__(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/epoch_iterator.py", line 64, in __init__
self.data_adapter = data_adapters.get_data_adapter(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/data_adapters/__init__.py", line 56, in get_data_adapter
return TFDatasetAdapter(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/data_adapters/tf_dataset_adapter.py", line 30, in __init__
dataset = dataset.map(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 2341, in map
return map_op._map_v2(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/map_op.py", line 43, in _map_v2
return _MapDataset(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/map_op.py", line 157, in __init__
self._map_func = structured_function.StructuredFunctionWrapper(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 265, in __init__
self._function = fn_factory()
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 1251, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 1221, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 696, in _initialize
self._concrete_variable_creation_fn = tracing_compilation.trace_function(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 178, in trace_function
concrete_function = _maybe_define_function(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 283, in _maybe_define_function
concrete_function = _create_concrete_function(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 310, in _create_concrete_function
traced_func_graph = func_graph_module.func_graph_from_py_func(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1059, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 599, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 231, in wrapped_fn
ret = wrapper_helper(*args)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 161, in wrapper_helper
ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 690, in wrapper
return converted_call(f, args, kwargs, options=options)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 377, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in _call_unconverted
return f(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/data_adapters/tf_dataset_adapter.py", line 129, in class_weights_map_fn
y_classes = tf.__internal__.smart_cond.smart_cond(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/framework/smart_cond.py", line 57, in smart_cond
return cond.cond(pred, true_fn=true_fn, false_fn=false_fn,
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/ops/cond_v2.py", line 880, in error
raise TypeError(
TypeError: true_fn and false_fn arguments to tf.cond must have the same number, type, and overall structure of return values.
true_fn output: Tensor("cond/Identity:0", shape=(2048,), dtype=int64)
false_fn output: Tensor("cond/Identity:0", shape=(2048,), dtype=int32)
Error details:
Tensor("cond/Identity:0", shape=(2048,), dtype=int64) and Tensor("cond/Identity:0", shape=(2048,), dtype=int32) have different types
``` | closed | 2024-11-23T21:29:58Z | 2024-12-27T02:01:47Z | https://github.com/keras-team/keras/issues/20542 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | GICodeWarrior | 4 |
benlubas/molten-nvim | jupyter | 96 | [Bug] Exporting to ipynb with comments at the end of the line | MacOS
nvim 9.4
## Description
Exporting outputs that have code cells with comments at the end of the line causes cells to not match up. Need to investigate a little further to figure out the cause of that.
## Reproduction Steps
create a jupyter notebook with this cell
```python
print("hi") # problem
```
open in nvim, run with molten, try to export the cell. Cell contents don't match.
Print-statement debugging to the rescue. Though I'm not sure why this would be a problem right now.
| closed | 2023-12-20T21:50:31Z | 2023-12-20T22:25:12Z | https://github.com/benlubas/molten-nvim/issues/96 | [
"bug"
] | benlubas | 0 |
JoeanAmier/TikTokDownloader | api | 117 | Please add support for Docker deployment and proxy configuration | | open | 2023-12-27T00:57:24Z | 2024-03-26T09:42:18Z | https://github.com/JoeanAmier/TikTokDownloader/issues/117 | [
"功能优化(enhancement)"
] | ghost | 4 |
hbldh/bleak | asyncio | 677 | Scanning BLE devices returns empty metadata | * bleak version: 0.10.0
* Python version: Python 3.6.13 Anaconda, Inc.
* Operating System: Windows 10 Pro
### Description
Hi,
I wanted to test scanning devices and print the details of the scanned device (name, metadata, etc.).
The devices are detected (name, address); nevertheless, the metadata field `metadata = {'uuids': [], ...}` does not contain the service UUIDs, which are expected to be a list of one element, in my case `'uuids': ['680c21d9-c946-4c1f-9c11-baa1c21329e7']`.
The reason is that I want to filter the scan output, application-side, on the device's service UUID to only report devices of interest.
Also, I checked issue 437 and the "Fix KeyError: 'delegate' in CoreBluetooth backend" commit. I am using current libraries (updated ones, after issue 437). Am I doing something wrong?
### What I Did
The python code:
```python
async def run():
scanner = BleakScanner()
scanner.register_detection_callback(detection_callback)
await scanner.start()
await asyncio.sleep(sleep_time)
await scanner.stop()
scanned_devices = scanner.discovered_devices
for d in scanned_devices:
if d.address in Dict:
print(d)
print(d.metadata.values())
```
```
The output of this code:
E3:EB:F8:F2:9E:9D: Nordic_UART
dict_values([[], {89: b''}])
```
| closed | 2021-10-22T14:38:39Z | 2021-10-25T13:21:37Z | https://github.com/hbldh/bleak/issues/677 | [] | huseyinege | 4 |
flaskbb/flaskbb | flask | 519 | CONTRIBUTING.md references wrong requirements file | CONTRIBUTING.md says to run `pip install -r requirements-dev.txt` but I believe it should be `pip install -r requirements-test.txt`.
Also, is it worth mentioning that `pip install -r requirements.txt` or `pip install -r requirements-dev.txt` has to be run before `pip install -r requirements-test.txt`, or making requirements-test.txt reference requirements.txt so it's mildly more straightforward to install and test? | closed | 2019-03-12T02:32:03Z | 2019-04-25T08:17:43Z | https://github.com/flaskbb/flaskbb/issues/519 | [] | chadat23 | 1 |
horovod/horovod | pytorch | 3,982 | Horovod on spark>=2.4 Barrier Execution Mode supporting | **Is your feature request related to a problem? Please describe.**
When I use Horovod on Spark to train a distributed DL model, Horovod performs some additional steps to transfer data to the Horovod processes:
- saving DataFrame partitions to some distributed storage using Petastorm (for example, HDFS)
- reading the partitions back from this storage to deliver the data to the Horovod processes using a client (for example, Hadoop).
These actions can decrease processing speed when we work with big data.
**Describe the solution you'd like**
I suggest using Barrier Execution Mode, which was introduced in Spark 2.4. Horovod can repartition the DataFrame to the number of executors and use `mapInPandas()` to convert the Spark DataFrame partition representation into an iterator of `pd.DataFrame`. Enabling Arrow will increase the conversion speed. Horovod can then convert the iterator of `pd.DataFrame` into a DL-framework-specific dataloader. This logic could be wrapped into the [Torch|Keras|Lightning]Estimator classes, or exposed by adding a special function `horovod.spark.run_on_dataframe()` analogous to `horovod.spark.run()`.
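A very rough sketch of the idea (the `run_on_dataframe` name and signature follow the proposal; this uses the RDD barrier API available since Spark 2.4 rather than `mapInPandas()`, which gained a barrier flag only in later Spark releases, and it is not a working Horovod integration):
```python
from pyspark import BarrierTaskContext
from pyspark.sql import DataFrame

def run_on_dataframe(df: DataFrame, train_fn, num_proc: int):
    """Run train_fn once per partition, starting all workers together."""
    df = df.repartition(num_proc)

    def train_partition(rows):
        ctx = BarrierTaskContext.get()
        ctx.barrier()  # every Horovod worker reaches this point before training
        # rows is an iterator over the partition; train_fn would wrap it
        # into the DL framework's dataloader (pd.DataFrame batches, etc.)
        train_fn(rows, rank=ctx.partitionId())
        return [ctx.partitionId()]

    return df.rdd.barrier().mapPartitions(train_partition).collect()
```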
**Describe alternatives you've considered**
As I understand it, Databricks uses a similar design for [HorovodRunner](https://docs.databricks.com/en/machine-learning/train-model/distributed-training/horovod-runner.html). And [XGBoost on PySpark](https://github.com/dmlc/xgboost/blob/v1.7.5/python-package/xgboost/spark/core.py#L855) uses a similar approach.
**Additional context**
This feature can increase Horovod on spark popularity. So, in this [presentation](https://youtu.be/gMT_ONmI9RM?t=562) Uber engineers describe this problem.
If you support this idea but don't have time to implement it, I can start implementing it as a contribution to Horovod.
I'll be waiting for your feedback. | open | 2023-09-13T17:23:31Z | 2023-09-13T17:23:31Z | https://github.com/horovod/horovod/issues/3982 | [
"enhancement"
] | max-509 | 0 |
joeyespo/grip | flask | 346 | Feature request: support for copy to clipboard for code blocks | Looks like github has recently added a nice 'copy to clipboard' button for every code block rendered from a markdown. Without knowing a lot about how this is implemented or if it is easy/hard for grip to do, it would be awesome if this could be incorporated into the grip output as well.
Thanks for a great tool! | open | 2021-10-07T15:08:53Z | 2024-01-25T22:37:29Z | https://github.com/joeyespo/grip/issues/346 | [] | theimpostor | 1 |
aminalaee/sqladmin | fastapi | 784 | Ckeditor <TypeError: Cannot convert undefined or null to object> | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Following the guide [sqladmin ckeditor](https://aminalaee.dev/sqladmin/cookbook/using_wysiwyg/), I got an error:
```
TypeError: Cannot convert undefined or null to object
at Function.keys (<anonymous>)
at ou.init (datacontroller.ts:344:42)
at ou.<anonymous> (observablemixin.ts:277:33)
at ou.fire (emittermixin.ts:241:31)
at <computed> [as init] (observablemixin.ts:281:17)
at classiceditor.ts:227:31
```
### Steps to reproduce the bug
Follow the ckeditor embedding documentation
### Expected behavior
_No response_
### Actual behavior
```
TypeError: Cannot convert undefined or null to object
at Function.keys (<anonymous>)
at ou.init (datacontroller.ts:344:42)
at ou.<anonymous> (observablemixin.ts:277:33)
at ou.fire (emittermixin.ts:241:31)
at <computed> [as init] (observablemixin.ts:281:17)
at classiceditor.ts:227:31
```
### Debugging material
_No response_
### Environment
Sqladmin 0.17.0
### Additional context
Everything starts working fine if you add the following code
```html
<script src="https://cdn.ckeditor.com/ckeditor5/39.0.1/classic/ckeditor.js"></script>
<script>
    ClassicEditor
        .create(document.querySelector('#content'))
        .catch(error => {
            console.error(error);
        });
</script>
```
straight into the editor.html template before the last closing block; however, I don't think this is the correct behavior. | closed | 2024-06-24T17:13:29Z | 2024-06-25T10:03:29Z | https://github.com/aminalaee/sqladmin/issues/784 | [] | A-V-tor | 1 |
python-visualization/folium | data-visualization | 1,318 | Request Adding Leaflet.TileLayer.MBTiles Plugin Support | **Is your feature request related to a problem? Please describe.**
No. This feature request adds support for displaying raster tile base maps in .mbtiles format.
**Describe the solution you'd like**
1. Make [Leaflet.TileLayer.MBTiles](https://gitlab.com/IvanSanchez/Leaflet.TileLayer.MBTiles#) its own Folium plugin.
2. Put the above plugin on a CDN so users aren't forced to store it locally. (The original author's CDN link is broken, I think.)
I used roughly the following snippet to override `TileLayer._template`
```python
from jinja2 import Template
import folium

# override defaults to use the plugin
folium.raster_layers.TileLayer._template = Template(u"""
{% macro script(this, kwargs) %}
    var {{ this.get_name() }} = L.tileLayer.mbTiles(  // <-- uses the plugin instead of L.tileLayer
        {{ this.tiles|tojson }},
        {{ this.options|tojson }}
    ).addTo({{ this._parent.get_name() }});
{% endmacro %}
""")

# make a Map with an .mbtiles basemap (attr is defined elsewhere)
m = folium.Map(
    location=[35.650787, -117.661728],
    tiles='my_tiles.mbtiles',
    attr=attr
)

# add OpenStreetMap
layer = folium.TileLayer(
    tiles='https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png',
    name='OpenStreetMap Online',
    attr='© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> '
         'contributors',
    overlay=True,
    control=True,
    show=False
).add_to(m)

# helper (defined elsewhere) that restores _template to its default after the fact
set_tile_layer_default(layer)
```
Even then, you have to override `TileLayer._template` back to its default before `m.save('index.html')` for each additional (non-mbtiles) basemap you add.
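For completeness, the restore helper referenced above could be as simple as re-assigning folium's stock template (a sketch; `set_tile_layer_default` is my own helper and its `layer` argument isn't actually needed):
```python
# Sketch of the helper used above; it simply restores folium's stock template.
DEFAULT_TILELAYER_TEMPLATE = Template(u"""
{% macro script(this, kwargs) %}
    var {{ this.get_name() }} = L.tileLayer(
        {{ this.tiles|tojson }},
        {{ this.options|tojson }}
    ).addTo({{ this._parent.get_name() }});
{% endmacro %}
""")

def set_tile_layer_default(layer):
    # `layer` is accepted for symmetry with the call site but is unused here.
    folium.raster_layers.TileLayer._template = DEFAULT_TILELAYER_TEMPLATE
```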
It would be really nice to:
```python
from folium.plugins import MBTiles

m = MBTiles.Map(
    location=[14.0, 15.0],
    tiles='your_tiles_url.mbtiles',
    attr='attr text'
)
```
**Describe alternatives you've considered**
The only alternative I can see is something like the above snippet. There was some discussion in #351.
**Additional context**
For those unfamiliar, one open source way to make your own tiles is by using [TileMill](https://github.com/tilemill-project/tilemill). TileMill exports the usual {z}{x}{y} directory structure, as well as the Mapbox raster .mbtiles format. Mbtiles are preferable in some cases, because you only have to manage 1 file instead of (no joke) millions of .png files.
Also, it's great when one open source project ties cleanly into another.
**Implementation**
folium is maintained by volunteers. Can you help make a PR if we want to implement this feature?
I would be happy to write this PR. I have working code which accomplishes the task; I'm just not sure how best to turn it into a plugin.
If somebody could please comment:
1. Is this idea acceptable and feasible?
2. A general best practices and implementation list.
Thanks.
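For discussion, here is a rough sketch of how the override above could be packaged as a plugin class (the class name, render hook, and CDN URL are my assumptions, not a finalized design):
```python
from branca.element import Element
from jinja2 import Template
import folium


class MBTiles(folium.TileLayer):
    """Sketch of a TileLayer rendered via Leaflet.TileLayer.MBTiles."""

    _template = Template(u"""
    {% macro script(this, kwargs) %}
        var {{ this.get_name() }} = L.tileLayer.mbTiles(
            {{ this.tiles|tojson }},
            {{ this.options|tojson }}
        ).addTo({{ this._parent.get_name() }});
    {% endmacro %}
    """)

    def render(self, **kwargs):
        # Inject the plugin JS once; the URL is a placeholder until the plugin
        # is actually hosted on a CDN (see point 2 above).
        figure = self.get_root()
        figure.header.add_child(
            Element('<script src="https://some-cdn.example/Leaflet.TileLayer.MBTiles.js"></script>'),
            name='leaflet-tilelayer-mbtiles',
        )
        super(MBTiles, self).render(**kwargs)
```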
| open | 2020-04-30T23:30:11Z | 2023-02-16T09:39:46Z | https://github.com/python-visualization/folium/issues/1318 | [
"enhancement",
"plugin"
] | and-viceversa | 4 |
apify/crawlee-python | automation | 1,076 | Add support for request to use a specific session | I'm scraping a single site with 2 different logins. I don't see a way to specify which session/cookies a request should use?
The documentation for this is lacking.
Thanks | open | 2025-03-12T16:55:09Z | 2025-03-13T09:45:54Z | https://github.com/apify/crawlee-python/issues/1076 | [
"enhancement",
"t-tooling"
] | dickermoshe | 2 |
skforecast/skforecast | scikit-learn | 378 | Replace, or complement, `pmdarima` with `statsforecast.AutoARIMA` | The Nixtla folks claim that their `statsforecast.AutoARIMA` is faster than `pmdarima` and `prophet`:

([source](https://nixtla.github.io/statsforecast/examples/autoarima_vs_prophet.html))
This is the API: https://nixtla.github.io/statsforecast/models.html#autoarima-1
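Usage looks roughly like this (a sketch based on the linked docs; the exact API may differ between statsforecast versions):
```python
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA

# `train_df` is assumed to be in Nixtla's long format:
# columns `unique_id`, `ds` (timestamp) and `y` (target).
sf = StatsForecast(models=[AutoARIMA(season_length=12)], freq="M")
sf.fit(train_df)
forecast_df = sf.predict(h=12)
```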
My understanding is that both pmdarima and statsforecast implement Hyndman & Khandakar (2008), "Automatic Time Series Forecasting: The forecast Package for R".
Paging @mergenthaler, @FedericoGarza, and @kdgutier 👋🏽😊 | open | 2023-03-27T20:30:14Z | 2023-03-31T10:35:01Z | https://github.com/skforecast/skforecast/issues/378 | [
"enhancement"
] | astrojuanlu | 1 |
adamerose/PandasGUI | pandas | 65 | pandasgui shutdown out of interactive cmd, pandasgui.show(df, settings={'block': True}) is needed. | Hi @adamerose ,
I am trying pandasgui in some python script, unlike in interactive cmd, it shutdown immediately after running.
I find `settings={'block': True}` is necessary, would you please figure it out in the demo examples?
Thank you and pandasgui is nice tool!
| closed | 2020-11-08T05:01:43Z | 2020-11-11T02:08:49Z | https://github.com/adamerose/PandasGUI/issues/65 | [
"bug"
] | forhonourlx | 9 |
ultrafunkamsterdam/undetected-chromedriver | automation | 804 | Which version of selenium? | Is there a specific version required to run UC?
I want to use selenium version 4.1.3 instead of the latest version. | closed | 2022-08-31T16:36:51Z | 2022-08-31T17:12:16Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/804 | [] | MacMarde | 2 |
OpenBB-finance/OpenBB | python | 6,902 | [🕹️] Copilot for Terminal Code Side-Quest | ### What side quest or challenge are you solving?
Copilot for Terminal
### Points
300-750
### Description
Create a custom copilot that integrates a new language model (e.g., Cohere, Llama3.2, etc.) into OpenBB's Terminal.
### Provide proof that you've completed the task
... | closed | 2024-10-28T14:27:36Z | 2024-10-30T14:45:19Z | https://github.com/OpenBB-finance/OpenBB/issues/6902 | [] | FloatinggOnion | 6 |
jpadilla/django-rest-framework-jwt | django | 35 | request.user returns AnonymousUser | Is there a method to return the authorised user when using JSONWebTokenAuthentication in DEFAULT_AUTHENTICATION_CLASSES? I'm calling a view very similar to this http://stackoverflow.com/a/20569205 but this doesn't work as request.user returns AnonymousUser.
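For context, the configuration in question looks roughly like this (a sketch of standard DRF settings, not my exact file):
```python
# settings.py (sketch)
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework_jwt.authentication.JSONWebTokenAuthentication',
    ),
}
```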
| closed | 2014-09-17T04:37:20Z | 2018-11-03T16:19:50Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/35 | [] | rob-balfre | 8 |
freqtrade/freqtrade | python | 10974 | Docker proxy problem | ## Describe your environment
* Python Version: Python 3.12.7
* CCXT version: ccxt==4.4.24
* Freqtrade Version: freqtrade 2024.10
## Your question
Here is my config.json:
```json
{
"$schema": "https://schema.freqtrade.io/schema.json",
"max_open_trades": 3,
"stake_currency": "USDT",
"stake_amount": "unlimited",
"tradable_balance_ratio": 0.99,
"fiat_display_currency": "USD",
"dry_run": true,
"dry_run_wallet": 1000,
"cancel_open_orders_on_exit": false,
"trading_mode": "futures",
"margin_mode": "isolated",
"unfilledtimeout": {
"entry": 10,
"exit": 10,
"exit_timeout_count": 0,
"unit": "minutes"
},
"entry_pricing": {
"price_side": "same",
"use_order_book": true,
"order_book_top": 1,
"price_last_balance": 0.0,
"check_depth_of_market": {
"enabled": false,
"bids_to_ask_delta": 1
}
},
"exit_pricing": {
"price_side": "same",
"use_order_book": true,
"order_book_top": 1
},
"exchange": {
"name": "okx",
"key": "",
"secret": "",
"ccxt_config": {
"httpsProxy": "http://127.0.0.1:7890"
},
"ccxt_async_config": {},
"pair_whitelist": [],
"pair_blacklist": []
},
"pairlists": [
{
"method": "VolumePairList",
"number_assets": 20,
"sort_key": "quoteVolume",
"min_value": 0,
"refresh_period": 1800
}
],
"telegram": {
"enabled": true,
"token": "",
"chat_id": ""
},
"api_server": {
"enabled": true,
"listen_ip_address": "0.0.0.0",
"listen_port": 8080,
"verbosity": "error",
"enable_openapi": false,
"jwt_secret_key": "",
"ws_token": "",
"CORS_origins": [],
"username": "freqtrader",
"password": "freqtrader"
},
"bot_name": "freqtrade",
"initial_state": "running",
"force_entry_enable": false,
"internals": {
"process_throttle_secs": 5
}
}
```
When I run with Docker:

I got this log
```
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
freqtrade | return future.result()
freqtrade | ^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 630, in _api_reload_markets
freqtrade | raise TemporaryError(
freqtrade | freqtrade.exceptions.TemporaryError: Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade | 2024-11-23 08:25:32,145 - freqtrade - ERROR - Could not load markets, therefore cannot start. Please investigate the above error for more details.
freqtrade | Exception ignored in: <module 'threading' from '/usr/local/lib/python3.12/threading.py'>
freqtrade | Traceback (most recent call last):
freqtrade | File "/usr/local/lib/python3.12/threading.py", line 1624, in _shutdown
freqtrade | lock.acquire()
freqtrade | File "/freqtrade/freqtrade/commands/trade_commands.py", line 18, in term_handler
freqtrade | raise KeyboardInterrupt()
freqtrade | KeyboardInterrupt:
freqtrade | 2024-11-23 08:31:34,355 - freqtrade - INFO - freqtrade 2024.10
freqtrade | 2024-11-23 08:31:35,013 - numexpr.utils - INFO - NumExpr defaulting to 16 threads.
freqtrade | 2024-11-23 08:31:38,106 - freqtrade.worker - INFO - Starting worker 2024.10
freqtrade | 2024-11-23 08:31:38,107 - freqtrade.configuration.load_config - INFO - Using config: /freqtrade/user_data/config.json ...
freqtrade | 2024-11-23 08:31:38,115 - freqtrade.loggers - INFO - Verbosity set to 0
freqtrade | 2024-11-23 08:31:38,115 - freqtrade.configuration.configuration - INFO - Runmode set to dry_run.
freqtrade | 2024-11-23 08:31:38,116 - freqtrade.configuration.configuration - INFO - Parameter --db-url detected ...
freqtrade | 2024-11-23 08:31:38,116 - freqtrade.configuration.configuration - INFO - Dry run is enabled
freqtrade | 2024-11-23 08:31:38,117 - freqtrade.configuration.configuration - INFO - Using DB: "sqlite:////freqtrade/user_data/tradesv3.sqlite"
freqtrade | 2024-11-23 08:31:38,117 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 3 ...
freqtrade | 2024-11-23 08:31:38,181 - freqtrade.configuration.configuration - INFO - Using user-data directory: /freqtrade/user_data ...
freqtrade | 2024-11-23 08:31:38,183 - freqtrade.configuration.configuration - INFO - Using data directory: /freqtrade/user_data/data/okx ...
freqtrade | 2024-11-23 08:31:38,184 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
freqtrade | 2024-11-23 08:31:38,191 - freqtrade.exchange.check_exchange - INFO - Exchange "okx" is officially supported by the Freqtrade development team.
freqtrade | 2024-11-23 08:31:38,192 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
freqtrade | 2024-11-23 08:31:38,229 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy SampleStrategy from '/freqtrade/user_data/strategies/sample_strategy.py'...
freqtrade | 2024-11-23 08:31:38,230 - freqtrade.strategy.hyper - INFO - Found no parameter file.
freqtrade | 2024-11-23 08:31:38,231 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: USDT.
freqtrade | 2024-11-23 08:31:38,232 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: unlimited.
freqtrade | 2024-11-23 08:31:38,232 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'unfilledtimeout' with value in config file: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}.
freqtrade | 2024-11-23 08:31:38,233 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'max_open_trades' with value in config file: 3.
freqtrade | 2024-11-23 08:31:38,233 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {'60': 0.01, '30': 0.02, '0': 0.04}
freqtrade | 2024-11-23 08:31:38,234 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using timeframe: 5m
freqtrade | 2024-11-23 08:31:38,234 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.1
freqtrade | 2024-11-23 08:31:38,234 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: False
freqtrade | 2024-11-23 08:31:38,235 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0
freqtrade | 2024-11-23 08:31:38,236 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False
freqtrade | 2024-11-23 08:31:38,236 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_custom_stoploss: False
freqtrade | 2024-11-23 08:31:38,236 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: True
freqtrade | 2024-11-23 08:31:38,237 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'entry': 'limit', 'exit': 'limit', 'stoploss': 'market', 'stoploss_on_exchange': False}
freqtrade | 2024-11-23 08:31:38,237 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'entry': 'GTC', 'exit': 'GTC'}
freqtrade | 2024-11-23 08:31:38,238 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: USDT
freqtrade | 2024-11-23 08:31:38,238 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: unlimited
freqtrade | 2024-11-23 08:31:38,239 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 200
freqtrade | 2024-11-23 08:31:38,239 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using unfilledtimeout: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}
freqtrade | 2024-11-23 08:31:38,239 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_exit_signal: True
freqtrade | 2024-11-23 08:31:38,240 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_only: False
freqtrade | 2024-11-23 08:31:38,240 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_entry_signal: False
freqtrade | 2024-11-23 08:31:38,241 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_offset: 0.0
freqtrade | 2024-11-23 08:31:38,241 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using disable_dataframe_checks: False
freqtrade | 2024-11-23 08:31:38,242 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_buying_expired_candle_after: 0
freqtrade | 2024-11-23 08:31:38,242 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using position_adjustment_enable: False
freqtrade | 2024-11-23 08:31:38,242 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_entry_position_adjustment: -1
freqtrade | 2024-11-23 08:31:38,243 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_open_trades: 3
freqtrade | 2024-11-23 08:31:38,243 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
freqtrade | 2024-11-23 08:31:38,246 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
freqtrade | 2024-11-23 08:31:38,247 - freqtrade.exchange.exchange - INFO - Using CCXT 4.4.24
freqtrade | 2024-11-23 08:31:38,248 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}, 'httpsProxy': 'http://127.0.0.1:7890'}
freqtrade | 2024-11-23 08:31:38,254 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}, 'httpsProxy': 'http://127.0.0.1:7890'}
freqtrade | 2024-11-23 08:31:38,261 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap', 'brokerId': 'ffb5405ad327SUDE'}, 'httpsProxy': 'http://127.0.0.1:7890'}
freqtrade | 2024-11-23 08:31:38,268 - freqtrade.exchange.exchange - INFO - Using Exchange "OKX"
freqtrade | 2024-11-23 08:31:38,296 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT". Retrying still for 3 times.
freqtrade | 2024-11-23 08:31:38,785 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT". Retrying still for 2 times.
freqtrade | 2024-11-23 08:31:39,301 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT". Retrying still for 1 times.
freqtrade | 2024-11-23 08:31:39,816 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT". Giving up.
freqtrade | 2024-11-23 08:31:39,817 - freqtrade.exchange.exchange - ERROR - Could not load markets.
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1091, in _wrap_create_connection
freqtrade | sock = await aiohappyeyeballs.start_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 104, in start_connection
freqtrade | raise first_exception
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 82, in start_connection
freqtrade | sock = await _connect_sock(
freqtrade | ^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 174, in _connect_sock
freqtrade | await loop.sock_connect(sock, address)
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 651, in sock_connect
freqtrade | return await fut
freqtrade | ^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 691, in _sock_connect_cb
freqtrade | raise OSError(err, f'Connect call failed {address}')
freqtrade | ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 7890)
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 210, in fetch
freqtrade | async with session_method(yarl.URL(url, encoded=True),
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 1359, in __aenter__
freqtrade | self._resp: _RetType = await self._coro
freqtrade | ^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 663, in _request
freqtrade | conn = await self._connector.connect(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 563, in connect
freqtrade | proto = await self._create_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1030, in _create_connection
freqtrade | _, proto = await self._create_proxy_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1391, in _create_proxy_connection
freqtrade | transport, proto = await self._create_direct_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1366, in _create_direct_connection
freqtrade | raise last_exc
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1335, in _create_direct_connection
freqtrade | transp, proto = await self._wrap_create_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1106, in _wrap_create_connection
freqtrade | raise client_error(req.connection_key, exc) from exc
freqtrade | aiohttp.client_exceptions.ClientProxyConnectionError: Cannot connect to host 127.0.0.1:7890 ssl:default [Connect call failed ('127.0.0.1', 7890)]
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 626, in _api_reload_markets
freqtrade | return await self._api_async.load_markets(reload=reload, params={})
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 287, in load_markets
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 283, in load_markets
freqtrade | result = await self.markets_loading
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 273, in load_markets_helper
freqtrade | markets = await self.fetch_markets(params)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1416, in fetch_markets
freqtrade | promises = await asyncio.gather(*promises)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1580, in fetch_markets_by_type
freqtrade | response = await self.publicGetPublicInstruments(self.extend(request, params))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 868, in request
freqtrade | return await self.fetch2(path, api, method, params, headers, body, config)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 864, in fetch2
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 855, in fetch2
freqtrade | return await self.fetch(request['url'], request['method'], request['headers'], request['body'])
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 248, in fetch
freqtrade | raise ExchangeNotAvailable(details) from e
freqtrade | ccxt.base.errors.ExchangeNotAvailable: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 181, in wrapper
freqtrade | return f(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 638, in _load_async_markets
freqtrade | markets = self.loop.run_until_complete(self._api_reload_markets(reload=reload))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
freqtrade | return future.result()
freqtrade | ^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 630, in _api_reload_markets
freqtrade | raise TemporaryError(
freqtrade | freqtrade.exceptions.TemporaryError: Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade |
freqtrade | During handling of the above exception, another exception occurred:
freqtrade |
freqtrade | [... the identical ConnectionRefusedError -> ClientProxyConnectionError -> ExchangeNotAvailable -> TemporaryError chain is repeated verbatim for the second and third retries; duplicate repetitions elided ...]
freqtrade |
freqtrade | During handling of the above exception, another exception occurred:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1091, in _wrap_create_connection
freqtrade | sock = await aiohappyeyeballs.start_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 104, in start_connection
freqtrade | raise first_exception
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 82, in start_connection
freqtrade | sock = await _connect_sock(
freqtrade | ^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohappyeyeballs/impl.py", line 174, in _connect_sock
freqtrade | await loop.sock_connect(sock, address)
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 651, in sock_connect
freqtrade | return await fut
freqtrade | ^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 691, in _sock_connect_cb
freqtrade | raise OSError(err, f'Connect call failed {address}')
freqtrade | ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 7890)
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 210, in fetch
freqtrade | async with session_method(yarl.URL(url, encoded=True),
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 1359, in __aenter__
freqtrade | self._resp: _RetType = await self._coro
freqtrade | ^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/client.py", line 663, in _request
freqtrade | conn = await self._connector.connect(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 563, in connect
freqtrade | proto = await self._create_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1030, in _create_connection
freqtrade | _, proto = await self._create_proxy_connection(req, traces, timeout)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1391, in _create_proxy_connection
freqtrade | transport, proto = await self._create_direct_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1366, in _create_direct_connection
freqtrade | raise last_exc
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1335, in _create_direct_connection
freqtrade | transp, proto = await self._wrap_create_connection(
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/aiohttp/connector.py", line 1106, in _wrap_create_connection
freqtrade | raise client_error(req.connection_key, exc) from exc
freqtrade | aiohttp.client_exceptions.ClientProxyConnectionError: Cannot connect to host 127.0.0.1:7890 ssl:default [Connect call failed ('127.0.0.1', 7890)]
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 626, in _api_reload_markets
freqtrade | return await self._api_async.load_markets(reload=reload, params={})
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 287, in load_markets
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 283, in load_markets
freqtrade | result = await self.markets_loading
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 273, in load_markets_helper
freqtrade | markets = await self.fetch_markets(params)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1416, in fetch_markets
freqtrade | promises = await asyncio.gather(*promises)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/okx.py", line 1580, in fetch_markets_by_type
freqtrade | response = await self.publicGetPublicInstruments(self.extend(request, params))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 868, in request
freqtrade | return await self.fetch2(path, api, method, params, headers, body, config)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 864, in fetch2
freqtrade | raise e
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 855, in fetch2
freqtrade | return await self.fetch(request['url'], request['method'], request['headers'], request['body'])
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 248, in fetch
freqtrade | raise ExchangeNotAvailable(details) from e
freqtrade | ccxt.base.errors.ExchangeNotAvailable: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade |
freqtrade | The above exception was the direct cause of the following exception:
freqtrade |
freqtrade | Traceback (most recent call last):
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 665, in reload_markets
freqtrade | self._markets = retrier(self._load_async_markets, retries=retries)(reload=True)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 193, in wrapper
freqtrade | return wrapper(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 193, in wrapper
freqtrade | return wrapper(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 193, in wrapper
freqtrade | return wrapper(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 196, in wrapper
freqtrade | raise ex
freqtrade | File "/freqtrade/freqtrade/exchange/common.py", line 181, in wrapper
freqtrade | return f(*args, **kwargs)
freqtrade | ^^^^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 638, in _load_async_markets
freqtrade | markets = self.loop.run_until_complete(self._api_reload_markets(reload=reload))
freqtrade | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
freqtrade | File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
freqtrade | return future.result()
freqtrade | ^^^^^^^^^^^^^^^
freqtrade | File "/freqtrade/freqtrade/exchange/exchange.py", line 630, in _api_reload_markets
freqtrade | raise TemporaryError(
freqtrade | freqtrade.exceptions.TemporaryError: Error in reload_markets due to ExchangeNotAvailable. Message: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT
freqtrade | 2024-11-23 08:31:39,838 - freqtrade - ERROR - Could not load markets, therefore cannot start. Please investigate the above error for more details.
freqtrade | Exception ignored in: <module 'threading' from '/usr/local/lib/python3.12/threading.py'>
freqtrade | Traceback (most recent call last):
freqtrade | File "/usr/local/lib/python3.12/threading.py", line 1624, in _shutdown
freqtrade | lock.acquire()
freqtrade | File "/freqtrade/freqtrade/commands/trade_commands.py", line 18, in term_handler
freqtrade | raise KeyboardInterrupt()
freqtrade | KeyboardInterrupt:
freqtrade exited with code 2
```
What should I try next?
| closed | 2024-11-23T08:45:29Z | 2024-11-23T11:10:43Z | https://github.com/freqtrade/freqtrade/issues/10974 | [
"Question",
"Docker"
] | YangShuai-uestc | 3 |
aimhubio/aim | tensorflow | 3,065 | Potential security issue | Hello 👋
I run a security community that finds and fixes vulnerabilities in OSS. A researcher (@fa2y) has found a potential issue, which I would be eager to share with you.
Could you add a `SECURITY.md` file with an e-mail address for me to send further details to? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) a security policy to ensure issues are responsibly disclosed, and it would help direct researchers in the future.
Looking forward to hearing from you 👍
(cc @huntr-helper) | open | 2023-11-20T17:17:33Z | 2023-12-29T00:43:14Z | https://github.com/aimhubio/aim/issues/3065 | [] | psmoros | 5 |
encode/databases | asyncio | 405 | "Connection is already acquired" error | Hi. I have a daemon with several asyncio tasks, each of which executes some requests. All of the tasks share the same connection.
After some time one of them begins to produce this error.
The failing code is simple :)
In main.py I have:
```python
self.engine = Database(config.Config.SQLALCHEMY_DATABASE_URI)
await self.engine.connect()
```
And in the broken task:
```python
await wait_for(self.engine.execute(query=statement), 10)
```
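Put together, the pattern is roughly this (a condensed sketch; the statements and task bodies are placeholders):
```python
import asyncio
from asyncio import wait_for

from databases import Database

engine = Database("postgresql://...")  # config.Config.SQLALCHEMY_DATABASE_URI in reality

async def task(statement):
    # several tasks like this run concurrently, all sharing `engine`
    while True:
        await wait_for(engine.execute(query=statement), 10)
        await asyncio.sleep(1)

async def main():
    await engine.connect()
    await asyncio.gather(task("SELECT 1"), task("SELECT 2"))
```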
What could be wrong? | closed | 2021-10-09T13:22:23Z | 2021-12-01T11:15:46Z | https://github.com/encode/databases/issues/405 | [] | bokolob | 11 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 281 | [BUG] Douyin videos can no longer be parsed | ***On which platform did the error occur?***
e.g.: Douyin/TikTok
***On which endpoint did the error occur?***
e.g.: API-V1/API-V2/Web APP
***What input value was submitted?***
e.g.: a short video link
***Did you try again?***
e.g.: Yes, the error still persists X amount of time after it occurred.
***Have you checked this project's README or the API documentation?***
e.g.: Yes, and I'm fairly certain the problem is caused by the program.
| closed | 2023-09-27T09:31:34Z | 2023-09-29T10:47:04Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/281 | [
"BUG"
] | zhilinzheng0809 | 1 |
inducer/pudb | pytest | 297 | Feature request: threading support | Has support for multi-threaded processes been considered in the past? Any pointers on how one would go about adding that to pudb? | open | 2018-04-12T08:50:14Z | 2023-07-14T16:35:35Z | https://github.com/inducer/pudb/issues/297 | [] | andre-merzky | 5 |
ipython/ipython | jupyter | 14,158 | Feature Request: magic command to paste code from file and execute | There are multiple instances where one would want to send code to a running IPython terminal from an editor (a la Spyder). An example of this would be this extension for VSCode https://github.com/hoangKnLai/vscode-ipython which works really well in its current state. However, it's using the "%run -i" command to run code that resides in some temporary file.
The "%load" magic command can be used for such a task, but it does not execute the code, it simply pastes it in the terminal. This requires sending multiple "enter" keys to the terminal to execute the code, and it also keeps the "%load ...." tag at the top which contaminates the history in case the user wants to re-run the same code block from history using the up arrow. It looks like this command was mainly designed for injecting code from a file into a jupyter notebook cell.
There is another magic command, "%paste", that loads code from the clipboard, but the clipboard tends to be a little sluggish and act up, especially when firing multiple commands from the editor (think running code line after line).
A potential solution is using "%run -i path/to/file.py", which the aforementioned extension implements. However, looking at the source code for this magic function, it seems to perform some manipulation of the user namespace, among other things. We are not sure whether it's "safe" to be using the %run command for this.
Would it be possible to add a new, simple magic command, call it "%fpaste", that does the same thing as "%paste" but loads code from a file instead of the clipboard? There would be no namespace checks of any sort; it would simply load the code into the current session and execute it. It would simplify things a lot for extension developers, since we could just send "%fpaste path/to/file.py" to IPython and it would automatically load and execute the code. This way we don't have to send multiple enter keys to the terminal.
Here is a simple implementation that I am using currently to create a custom magic command that does exactly this:
```python
# ~\.ipython\profile_default\startup\fpaste.py
from IPython.core.magic import Magics, magics_class, line_magic


# The class MUST call this class decorator at creation time
@magics_class
class MyMagics(Magics):

    @line_magic
    def fpaste(self, line):
        import sys
        from pathlib import Path
        # `line` is whatever follows the magic, e.g. %fpaste path/to/pycode.py
        contents = Path(line.strip()).read_text()
        sys.stdout.write(self.shell.pycolorize(contents))
        self.shell.run_cell(contents, store_history=True)


get_ipython().register_magics(MyMagics)
```
This works pretty well. I had to keep the import statements for pathlib and sys inside the function; otherwise I have to import them again after doing a hard reset with "%reset -f".
The current workflow in VSCode would look like this:
[in editor]
Trigger the keybinding that sends text to IPython: VSCode writes the contents of the selected code or code cell to the pycode.py file, then sends "%fpaste path/to/pycode.py" to the integrated terminal running IPython, and finally sends one enter key to execute the whole statement.
[switch to the integrated terminal running IPython]
IPython executes the code, because that's what the %fpaste implementation does; the execution is separate from printing the code to stdout. Maybe a better implementation would skip writing the contents to stdout, in case that's not the desired behavior; we could just execute the code.
Note:
Since history is set to true in the fpaste function, we can easily hit the up arrow and rerun the code if needed.
For some reason that I don't understand, the %fpaste tag does not show up in the history. Is this by design? It is the desired behavior, but it would be nice to understand why it's happening. As mentioned, using the "%load" command tends to inject the magic command tag into history as well, which pollutes the code a little when the user wants to re-run it.
Here is a screenshot of what this would look like (using IPython classic prompt):

Same snippet ran using the "%load" command:

| closed | 2023-09-08T20:40:54Z | 2025-02-23T20:46:08Z | https://github.com/ipython/ipython/issues/14158 | [] | AmerM137 | 3 |
kizniche/Mycodo | automation | 612 | Crash report for output | ## Mycodo Issue Report:
7.1.3
#### Problem Description
Please list:
Hi all, I ran into this error when setting up outputs.
After pressing "Add Output" for an Atlas I2C pump (signal), a crash error occurred.
Restarting did not solve it; pressing Output in Setup generates the same error.
Any suggestions? Many thanks!
### Errors
```
Error 500: Internal Server Error
Something bad happened but it's probably not your fault. Letting the developers know about these issues is crucial to supporting Mycodo. Please submit a new issue on GitHub with the following error traceback (copy the entire traceback):
Error (Full Traceback):
Traceback (most recent call last):
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask_login/utils.py", line 261, in decorated_view
return func(*args, **kwargs)
File "/home/pi/Mycodo/mycodo/mycodo_flask/routes_page.py", line 1546, in page_output
user=user)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/templating.py", line 135, in render_template
context, ctx.app)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/templating.py", line 117, in _render
rv = template.render(context)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/_compat.py", line 37, in reraise
raise value.with_traceback(tb)
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/output.html", line 3, in top-level template code
{% set help_page = ["output", dict_translation['output']['title']] %}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/layout.html", line 260, in top-level template code
{%- block body %}{% endblock -%}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/output.html", line 230, in block "body"
{% include 'pages/output_options/'+each_output_template %}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/output_options/atlas_ezo_pmp.html", line 4, in top-level template code
{{form_mod_output.i2c_location.label(class_='control-label')}}
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/environment.py", line 430, in getattr
return getattr(obj, attribute)
jinja2.exceptions.UndefinedError: 'mycodo.mycodo_flask.forms.forms_output.OutputMod object' has no attribute 'i2c_location'
```
### Steps to Reproduce the issue:
1. Open Setup -> Output.
2. Add an output for the Atlas EZO-PMP (I2C) pump.
3. The Error 500 page above appears; after a restart, opening the Output page reproduces it.
### Additional Notes
Is there anything that should be added to make it easier to address this issue? | closed | 2019-01-25T19:25:41Z | 2019-01-27T19:35:24Z | https://github.com/kizniche/Mycodo/issues/612 | [] | alonrab | 5 |
litestar-org/litestar | asyncio | 3,631 | Enhancement: Allow parameter of type `Iterable` to be processed as `list` | ### Description
Litestar does not treat an `Iterable` parameter as an array.
The original [discussion](https://discord.com/channels/919193495116337154/1260559977081471058) was about default parameter values, but the problem is actually with how Litestar handles `Iterable` itself.
### URL to code causing the issue
https://github.com/litestar-org/litestar/blob/8c4c15bb501879dabaecfbf0af541ac571c08cf3/litestar/_kwargs/parameter_definition.py#L67C9-L67C60
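For context, the linked check only matches concrete sequence types, so an `Iterable` annotation falls through and is treated as a scalar. A rough illustration of that distinction (a sketch of the typing behavior, not Litestar's actual code):
```python
from collections.abc import Sequence
from typing import Iterable, get_origin


def looks_like_sequence(annotation) -> bool:
    # Sequence[str] and list[str] resolve to Sequence subclasses;
    # Iterable[str] does not, even though every sequence is iterable.
    origin = get_origin(annotation) or annotation
    return isinstance(origin, type) and issubclass(origin, Sequence)


print(looks_like_sequence(Sequence[str]))  # True  -> parsed as an array
print(looks_like_sequence(Iterable[str]))  # False -> parsed as a scalar
```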
### MCVE
```python
from typing import Iterable, Sequence

from litestar import Litestar, get


@get('/sequence')
async def sequence(foo: Sequence[str]) -> Sequence[str]:
    return foo


@get('/iterable')
async def iterable(foo: Iterable[str]) -> Iterable[str]:
    return foo


app = Litestar(
    route_handlers=(
        sequence,
        iterable,
    )
)
```
### Steps to reproduce
```bash
$ curl 'localhost:8000/sequence?foo=s1&foo=s2'
["s1","s2"]
$ curl 'localhost:8000/iterable?foo=s1&foo=s2'
s2
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.9.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-07-16T11:57:47Z | 2025-03-20T15:54:49Z | https://github.com/litestar-org/litestar/issues/3631 | [
"Enhancement"
] | rafalkrupinski | 3 |
thunlp/OpenPrompt | nlp | 50 | Is there any standard configuration of soft prompts on few-shot tasks to match the performance of the papers? | closed | 2021-11-15T03:07:06Z | 2022-01-12T07:15:08Z | https://github.com/thunlp/OpenPrompt/issues/50 | [] | yh351016 | 1 |
|
kymatio/kymatio | numpy | 583 | Include Keras, sklearn in docs and structure | Right now, the documentation is one big block of text. We should split it into subsections for the different frontends. At the top, we should have a quick text summarizing each frontend framework and describing how to import them dynamically (using `kymatio.Scattering*D` and specifying `frontend` when creating). | closed | 2020-02-25T19:06:35Z | 2020-03-06T18:49:06Z | https://github.com/kymatio/kymatio/issues/583 | [] | janden | 5
keras-team/autokeras | tensorflow | 1,904 | Bug: ModuleNotFoundError: No module named 'tensorflow.keras.layers.experimental' | ### Bug Description
The error occurs when importing autokeras:
import autokeras as ak
### Bug Reproduction
Code for reproducing the bug:
https://colab.research.google.com/github/keras-team/autokeras/blob/master/docs/ipynb/structured_data_classification.ipynb
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python: 3
- autokeras:
- keras-tuner:
- scikit-learn:
- numpy:
- pandas:
- tensorflow:
### Additional context
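Note: one plausible cause, given the issue date and the default Colab runtime (this is an assumption, since the TensorFlow version above is not filled in), is that TensorFlow 2.16 switched to Keras 3, which removed `tf.keras.layers.experimental`; that would explain the traceback below. A quick check and workaround sketch:
```python
import tensorflow as tf

print(tf.__version__)  # TF >= 2.16 ships Keras 3, which drops layers.experimental

# Possible workaround (sketch): pin an older TensorFlow before importing
# autokeras, e.g.  pip install "tensorflow<2.16"
```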
<ipython-input-4-4e35e895c450> in <cell line: 4>()
      2 import tensorflow as tf
      3
----> 4 import autokeras as ak
7 frames
/usr/local/lib/python3.10/dist-packages/autokeras/keras_layers.py in <module>
     18 from tensorflow import nest
     19 from tensorflow.keras import layers
---> 20 from tensorflow.keras.layers.experimental import preprocessing
     21
     22 from autokeras.utils import data_utils
ModuleNotFoundError: No module named 'tensorflow.keras.layers.experimental' | open | 2024-03-12T11:51:45Z | 2025-03-20T17:32:10Z | https://github.com/keras-team/autokeras/issues/1904 | [
"bug report"
] | veiro | 8 |
mwaskom/seaborn | pandas | 3,434 | narrow gaps in 2d kernel density diagram | Hi there!
When I plot a 2D kernel density diagram, there are narrow gaps between adjacent contours (not white lines).
I am wondering if there is any way to fill these gaps. Thanks!
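For reference, a minimal sketch of the kind of call that produces such a plot (an assumption, since the original code isn't shown; `fill=True` draws the stacked filled contours where the hairline gaps appear):
```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 500))

# fill=True draws filled contour levels; narrow gaps between adjacent
# levels can show up as a rendering artifact of the backend
sns.kdeplot(x=x, y=y, fill=True, levels=10)
plt.show()
```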

| closed | 2023-08-02T03:02:57Z | 2023-08-02T11:19:22Z | https://github.com/mwaskom/seaborn/issues/3434 | [] | merry200603 | 1 |
jmcnamara/XlsxWriter | pandas | 616 | Feature request: Streaming | As per [my comment in #11](https://github.com/jmcnamara/XlsxWriter/issues/11#issuecomment-479149512), it is possible, if not completely straightforward, to stream ZIP files on output, for example using [`zipstream`](https://github.com/allanlei/python-zipstream). The current ZIP member must be finished before the next one can be started, however. The current `zipstream` also requires all the members to be listed beforehand, but this is an artifact of the current implementation that shouldn’t be too hard to fix.
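For illustration, the streaming pattern being proposed could look roughly like this (a sketch assuming allanlei's `zipstream`; the generator contents are placeholders, not real worksheet XML):
```python
import zipstream  # allanlei/python-zipstream


def sheet_xml_chunks():
    # Illustrative placeholder: in the proposed design, xlsxwriter would
    # yield each worksheet's XML incrementally, one ZIP member at a time.
    yield b'<worksheet>'
    yield b'<sheetData/>'
    yield b'</worksheet>'


zf = zipstream.ZipFile(mode='w', compression=zipstream.ZIP_DEFLATED)
zf.write_iter('xl/worksheets/sheet1.xml', sheet_xml_chunks())

with open('streamed.xlsx', 'wb') as f:
    for chunk in zf:  # ZIP bytes stream out as each member is consumed
        f.write(chunk)
```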
Does this mean `xlsxwriter` could work in `constant_memory` mode without creating any temporary files, as long as write operations to different worksheets are not interleaved?
I would be willing to implement this if necessary and possible, but I'm not at all familiar with the internal structure of `xlsxwriter`, so some design help would be appreciated. | closed | 2019-04-02T20:26:29Z | 2019-04-07T00:33:50Z | https://github.com/jmcnamara/XlsxWriter/issues/616 | [
"feature request"
] | alexshpilkin | 2 |
explosion/spaCy | machine-learning | 13,622 | 3.7.6 does not have wheel for linux aarch64 / arm64 | - https://pypi.org/project/spacy/3.7.6/#files
- https://pypi.org/project/spacy/3.7.5/#files
3.7.5 provides wheels for linux aarch64 for various python versions, but 3.7.6 does not have any wheels for linux aarch64.
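A quick way to verify which files a release published (a sketch using PyPI's JSON API):
```python
import json
import urllib.request

# Lists every file published for the 3.7.6 release
url = "https://pypi.org/pypi/spacy/3.7.6/json"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for f in data["urls"]:
    print(f["filename"])  # no *manylinux*aarch64* wheels appear for 3.7.6
```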
Is this intended? I couldn't find any related info in the changelogs. | open | 2024-09-10T01:39:12Z | 2025-03-19T15:19:02Z | https://github.com/explosion/spaCy/issues/13622 | [] | chulkilee | 5
slackapi/python-slack-sdk | asyncio | 1,037 | Update PythOnBoardingBot tutorial to use slack-bolt instead of slackeventsapi package | The PythOnBoardingBot tutorial should be updated to use bolt-python instead of slackeventsapi, as bolt-python is the latest recommended way to use the Events API in Python.
You don't need to change any part of the tutorial contents. All you need to do is:
* Modify the app under `tutorial/PythOnBoardingBot/` to use [slack-bolt](https://pypi.org/project/slack-bolt/) instead of [slackeventsapi](https://pypi.org/project/slackeventsapi/)
* Once you verify that the app works in the tutorial scenario, update the corresponding code snippets in the `tutorial/*.md` files too (a minimal starting sketch follows this list)
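For orientation, a minimal slack-bolt app handling a single Events API event might look like this (a rough sketch, not the tutorial's final code; the event type and message text are placeholders):
```python
import os

from slack_bolt import App

# Bolt replaces the slackeventsapi adapter: one App object handles
# both event routing and Web API calls.
app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)


@app.event("team_join")
def onboard_new_member(event, say):
    # The tutorial's onboarding flow would hook in here.
    say(text="Welcome to the team!", channel=event["user"]["id"])


if __name__ == "__main__":
    app.start(port=3000)
```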
### The page URLs
- https://github.com/slackapi/python-slack-sdk/tree/main/tutorial
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2021-06-15T00:05:54Z | 2021-06-18T05:24:48Z | https://github.com/slackapi/python-slack-sdk/issues/1037 | [
"docs",
"Version: 3x",
"good first issue"
] | seratch | 2 |
flasgger/flasgger | flask | 295 | how to configure security on the top level | I want to use security at the top level, but how do I configure it? | open | 2019-04-15T11:13:35Z | 2019-04-15T11:13:35Z | https://github.com/flasgger/flasgger/issues/295 | [] | fanyebo | 0 |
Yorko/mlcourse.ai | pandas | 396 | Typo in 4.1 | After defining linearity of weights, there is a mathematical expression that cannot be displayed clearly, due to a typo I believe. Presumably *∀k* is meant.
Best
> where $\forall\ k\ $, | closed | 2018-10-22T09:17:32Z | 2018-11-10T16:18:38Z | https://github.com/Yorko/mlcourse.ai/issues/396 | [
"minor_fix"
] | kazimanil | 3 |
dmlc/gluon-cv | computer-vision | 1,066 | Error on mxnet/gluon after upgrading mxnet-cu100 and gluoncv, then running the script "train_faster_rcnn.py" | My virtual environment broke, so I created a new one named "huf_mx", as follows.

When trying to run the script "train_faster_rcnn.py" in the Spyder IDE, I get the error.
As shown in the image, there is some problem with mxnet/gluon, or maybe with gluoncv.
How can I fix it? Should I install another mxnet version?
I have tried 1.4.0, but it does not work.
Thanks a lot. | closed | 2019-11-28T10:44:44Z | 2020-03-26T05:58:38Z | https://github.com/dmlc/gluon-cv/issues/1066 | [] | HuFBH | 3 |
ARM-DOE/pyart | data-visualization | 762 | Issue with file handling rainbow files using wradlib | https://github.com/ARM-DOE/pyart/blob/master/pyart/aux_io/rainbow_wrl.py#L153
proposed by our friends at MeteoSwiss
> I propose to solve the issue by passing the file handle, not the file name, to wradlib, i.e.:
```python
from wradlib.io import read_rainbow  # import needed for this snippet

with open(filename, 'rb') as fid:
    rbf = read_rainbow(fid, loaddata=True)
```
> Greetings from Switzerland!
| closed | 2018-08-22T16:47:23Z | 2019-04-25T18:09:12Z | https://github.com/ARM-DOE/pyart/issues/762 | [
"Easy Fix",
"good first issue"
] | scollis | 4 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 273 | There are many errors in the code; what is netE in pix2pixHD_model_DA.py? | There are many errors in the code. What is `netE` in pix2pixHD_model_DA.py? It appears out of nowhere. | open | 2023-05-11T06:37:18Z | 2023-05-11T06:37:18Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/273 | [] | sibangde | 0 |
slackapi/bolt-python | fastapi | 1,180 | Slack Action `type_select` Not Triggering After Modal is Opened | Hello everyone, I'm seeing an issue with the SDK for a particular action, `type_select`; I've provided more details below. Any help would be appreciated. Also, please let me know if more information is required from my side.
### Reproducible in:
#### The `slack_bolt` version
```bash
slack_bolt==1.20.1
slack_sdk==3.33.1
```
#### Python runtime version
```bash
Python 3.9.6
```
#### OS info
```bash
ProductName: macOS
ProductVersion: 15.0
BuildVersion: 24A335
Darwin Kernel Version 24.0.0: Mon Aug 12 20:51:54 PDT 2024; root:xnu-11215.1.10~2/RELEASE_ARM64_T6000
```
#### Steps to reproduce:
1. Open the modal by clicking the “Create Broadcast” button.
2. Try to select either “Task” or “FYI” from the dropdown.
3. Observe that the modal is not dynamically updated, and no logging occurs for the type_select action.
### Expected result:
I was working on a dynamic form extension in Slack using the Bolt framework. The form allows users to select between different broadcast types (such as Task or FYI) using a dropdown. Based on the selected value, additional fields should be dynamically shown in the modal. For example:
- If the user selects Task, fields related to task details (like start date, end date, number of reminders) should appear.
- If the user selects FYI, these fields would not be shown.
Similarly, I have another dropdown for selecting the broadcast method (Channel or DM). If DM is selected, a field should appear for selecting multiple users who will receive the broadcast. However, the action for the dropdown selection `type_select` is not being triggered, preventing the dynamic form behavior from working as expected.
1. The modal should open after the button is clicked.
2. When the user selects Task or FYI in the dropdown, the type_select action should be triggered, and the modal should be updated with additional fields based on the selection.
```python
# In broadcast_modal_content.py
def get_initial_blocks():
return [
{
"type": "input",
"block_id": "broadcast_type",
"label": {"type": "plain_text", "text": "Is this a Task or FYI?"},
"element": {
"type": "static_select",
"action_id": "type_select", # This is the action ID not being triggered
"options": [
{"text": {"type": "plain_text", "text": "Task"}, "value": "task"},
{"text": {"type": "plain_text", "text": "FYI"}, "value": "fyi"}
]
}
}
]
# In broadcast_action.py
@app.action("type_select")
def update_modal_on_type_selection(ack, body, client, logger):
ack() # Acknowledge the action
logger.info("type_select action triggered") # This log is never seen
# Further processing for modal update
```
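For reference, the modal update the handler is meant to perform ("Further processing for modal update" above) would look roughly like this (a hedged sketch: the view title, `callback_id`, and the extra task field are illustrative, and `get_initial_blocks` is the helper defined above):
```python
@app.action("type_select")
def update_modal_on_type_selection(ack, body, client, logger):
    ack()
    selected = body["actions"][0]["selected_option"]["value"]

    blocks = get_initial_blocks()
    if selected == "task":
        # Illustrative task-only field; the real form adds dates/reminders
        blocks.append({
            "type": "input",
            "block_id": "task_start_date",
            "label": {"type": "plain_text", "text": "Start date"},
            "element": {"type": "datepicker", "action_id": "start_date_select"},
        })

    client.views_update(
        view_id=body["view"]["id"],
        hash=body["view"]["hash"],
        view={
            "type": "modal",
            "callback_id": "broadcast_form",
            "title": {"type": "plain_text", "text": "Create Broadcast"},
            "blocks": blocks,
        },
    )
```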
### Actual result:
- The open_broadcast_form action triggers correctly, and the modal opens.
- However, the type_select action is not triggered when a selection is made in the dropdown.
## Additional Details
<img width="740" alt="Screenshot 2024-10-15 at 12 38 41 PM" src="https://github.com/user-attachments/assets/161fcef6-930d-4c39-8354-2beaaa100531">
| closed | 2024-10-15T07:10:11Z | 2024-10-15T09:56:45Z | https://github.com/slackapi/bolt-python/issues/1180 | [
"question"
] | maniparas | 5 |