repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
napari/napari | numpy | 7,129 | Widgets with return type annotation `List[LayerDataTuple]` don't get their layers added to the `Viewer` | ### 🐛 Bug Report
In `0.5.0` we dropped support for Python 3.8, which included some typing changes. Notably, the builtin `list` type no longer needs to be imported via `from typing import List` and can be used directly in annotations.
This led to a change in the types we register with `magicgui` - notably, we now use the builtin `list` type for `LayerDataTuple` [here](https://github.com/napari/napari/pull/6738/files#diff-0d09c2e8083dd5acfebfeffc8e34e4c44d781e246d736d6e7db79feae78194d6R160).
Now, any magicgui widget whose return type is annotated with the imported `List[LayerDataTuple]` no longer works, i.e. the layers are not added to the viewer.
### 💡 Steps to Reproduce
1. Run the following script
```python
import numpy as np
import napari
from magicgui import magic_factory
from napari.types import LayerDataTuple
from typing import List


@magic_factory
def layer_return(
    first_layer: 'napari.types.ImageData',
    # ) -> list[LayerDataTuple]:
) -> List[LayerDataTuple]:
    layer_tuple = (first_layer, {}, 'image')
    layer_tuple_list = [layer_tuple]
    return layer_tuple_list


viewer = napari.Viewer()
viewer.add_image(np.random.rand(20, 20))
viewer.window.add_dock_widget(layer_return())
napari.run()
```
2. Click the `Run` button on the widget
3. Nothing happens.
4. Swap which return type annotation is uncommented
5. Run the script
6. Click `Run` button on the widget
7. Layer gets added
### 💡 Expected Behavior
I expected the layer to be added to the viewer regardless of whether the builtin `list` type is used or whether we import `from typing import List`.
### 🌎 Environment
```
napari: 0.5.0
Platform: macOS-10.16-x86_64-i386-64bit
System: MacOS 14.5
Python: 3.10.14 (main, May 6 2024, 14:47:20) [Clang 14.0.6 ]
Qt: 5.15.2
PyQt5: 5.15.10
NumPy: 1.26.4
SciPy: 1.14.0
Dask: 2024.7.1
VisPy: 0.14.3
magicgui: 0.8.3
superqt: 0.6.7
in-n-out: 0.2.1
app-model: 0.2.8
npe2: 0.7.6
OpenGL:
- GL version: 2.1 INTEL-22.5.11
- MAX_TEXTURE_SIZE: 16384
- GL_MAX_3D_TEXTURE_SIZE: 2048
Screens:
- screen 1: resolution 1440x900, scale 2.0
- screen 2: resolution 3840x2160, scale 1.0
Optional:
- numba: 0.60.0
- triangle not installed
Settings path:
- /Users/ddoncilapop/Library/Application Support/napari/stardist_d7f2585946fc58f34534dbaf8ce99a60b9039489/settings.yaml
Plugins:
- napari: 0.5.0 (81 contributions)
- napari-console: 0.0.9 (0 contributions)
- napari-svg: 0.2.0 (2 contributions)
- stardist-napari: 2022.12.6 (8 contributions)
```
### 💡 Additional Context
We can bandaid fix this by changing line #160 in [this file](https://github.com/napari/napari/blob/main/napari/types.py#L160) to
```python
for type_ in (LayerDataTuple, list[LayerDataTuple], List[LayerDataTuple]):
```
But it's not clear that this should be the final solution - maybe we should be doing some disambiguating in magicgui?
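If the disambiguation lived on the napari side, a minimal sketch of the idea could look like the following (my own illustration; the normalization helper is hypothetical and not an existing napari or magicgui function):
```python
from typing import List, get_args, get_origin

def normalize_list_annotation(annotation):
    """Collapse typing.List[X] and builtin list[X] into one canonical form."""
    if get_origin(annotation) is list:
        args = get_args(annotation)
        if len(args) == 1:
            return list[args[0]]
    return annotation

# Both spellings normalize to the same key before registration:
assert normalize_list_annotation(List[int]) == normalize_list_annotation(list[int])
```
Whether that normalization belongs in napari's type registration or in magicgui's return-callback lookup is exactly the open question above.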
I also haven't checked whether other types are affected. | closed | 2024-07-26T04:50:31Z | 2024-11-25T20:50:59Z | https://github.com/napari/napari/issues/7129 | [
"bug",
"priority:high",
"triage:probably solved"
] | DragaDoncila | 11 |
pytorch/pytorch | machine-learning | 149,774 | bound_sympy() produces incorrect result for mod | ### 🐛 Describe the bug
`bound_sympy(s0 - (s0 % 8))` produces an incorrect range of [-5, inf], when the correct answer is [0, inf] (s0 has a bound of [2, inf]).
My guess is this happens because each term is evaluated individually, with s0 resolving to [2, inf], and -(s0 % 8) resolving to [-7, 0], combining for a range of [-5, inf]. Not sure what the efficient fix is.
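As a standalone illustration of that guess (plain Python, not calling `bound_sympy` itself), treating the two terms as independent intervals reproduces the loose bound even though the combined expression can never be negative:
```python
import math

s0_range = (2, math.inf)    # assumed bound of s0
neg_mod_range = (-7, 0)     # bound of -(s0 % 8), since s0 % 8 lies in [0, 7]

# Endpoint-wise addition of the two intervals gives the reported loose bound:
print(s0_range[0] + neg_mod_range[0], s0_range[1] + neg_mod_range[1])  # -5 inf

# But the terms are correlated: s0 - (s0 % 8) == 8 * (s0 // 8) >= 0 for all s0 >= 0.
assert all(s0 - (s0 % 8) == 8 * (s0 // 8) >= 0 for s0 in range(2, 100))
```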
xref: https://fb.workplace.com/groups/pytorch.edge2.team/posts/1163036018285582/?comment_id=1163038158285368&reply_comment_id=1164412728147911
```
import torch
from torch.export import Dim, export
from torch.utils._sympy.value_ranges import bound_sympy


class Foo(torch.nn.Module):
    def forward(self, x):
        expr = x.shape[0] - (x.shape[0] % 8)  # s0 - (s0 % 8)
        return torch.empty(expr)


ep = export(
    Foo(),
    (torch.randn(13),),
    dynamic_shapes={"x": (Dim("dim", min=2),)},
)
val = [node for node in ep.graph.nodes][-2].meta["val"]
expr = val.shape[0].node.expr
var_to_ranges = val.shape[0].node.shape_env.var_to_range
print(bound_sympy(val.shape[0], var_to_ranges))  # [-5, inf], should be [0, inf]
```
### Versions
.
cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | open | 2025-03-21T23:07:36Z | 2025-03-24T09:40:47Z | https://github.com/pytorch/pytorch/issues/149774 | [
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"oncall: export"
] | pianpwk | 2 |
mwaskom/seaborn | pandas | 3,484 | How to show all x-tick labels with seaborn.objects? | How do I make it so that it shows all x ticks from 0 to 9?
```
import pandas as pd
import seaborn.objects as so
from seaborn import axes_style  # used by the .theme() call below

diff_df = pd.DataFrame({'bin': [0,1,9,3,4,2,3,4,7,5,6,7,8,9], 'diff': [1,0,1,1,1,3,2,4,1,2,3,0,2,1]})

(
    so.Plot(x='bin', y='diff', data=diff_df)
    .theme({**axes_style("whitegrid"), "grid.linestyle": ":"})
    .add(so.Dots())
    .add(so.Range(color='orange'), so.Est())
    .add(so.Dot(color='orange'), so.Agg())
    .add(so.Line(color='orange'), so.Agg())
    .label(
        x="Image Similarity Bin", y="Difference",
        color=str.capitalize,
    )
)
```
I tried to set xticks in .label, but it doesn't do anything.
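One direction I have been experimenting with (an assumption on my part, not verified as the intended usage) is configuring the tick locator on the x scale instead of the labels:
```python
(
    so.Plot(x='bin', y='diff', data=diff_df)
    .add(so.Dots())
    # assumption: a tick every 1 unit should label 0 through 9 for this data
    .scale(x=so.Continuous().tick(every=1))
)
```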
SO: https://stackoverflow.com/questions/77137092/how-to-show-all-x-tick-labels-with-seaborn-objects
| closed | 2023-09-19T19:10:04Z | 2023-09-19T21:12:56Z | https://github.com/mwaskom/seaborn/issues/3484 | [] | anya-ji | 5 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,818 | [Bug]: Error when using --precision full | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
A1111 reports an error on generation.
### Steps to reproduce the problem
- Add `--precision full` to command line arg
- Load a half precision checkpoint
- Click generate
- Observe error message
### What should have happened?
Generation without error.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-05-16-19-47.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15340122/sysinfo-2024-05-16-19-47.json)
### Console logs
```Shell
0%| | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(1ztcgh7sjo0if7m)', <gradio.routes.Request object at 0x0000017608058AF0>, '', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[],
batch_image_files=[]), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[]), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[]), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[]), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[]), False, 1, False, False, 3, 0.1, 0, 0, '', 0, 25, False, False, False, 'BREAK', '-', 0.2, 10, False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False,
False, False, 0, False, None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '') {}
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "D:\stable-diffusion-webui\modules\processing.py", line 845, in process_images
res = process_images_inner(p)
File "D:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\stable-diffusion-webui\modules\processing.py", line 981, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\stable-diffusion-webui\modules\processing.py", line 1328, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 218, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs)) File "D:\stable-diffusion-webui\modules\sd_samplers_common.py", line 272, in launch_sampling
return func()
File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 218, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs)) File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_models_xl.py", line 44, in apply_model
return self.model(x, t, cond)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 18, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 32, in __call__
return self.__orig_func(*args, **kwargs)
File "D:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
return self.diffusion_model(
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "D:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 984, in forward
emb = self.time_embed(t_emb)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 215, in forward
input = module(input)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 503, in network_Linear_forward
return originals.Linear_forward(self, input)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half
```
### Additional information
_No response_ | open | 2024-05-16T19:48:40Z | 2024-06-09T20:09:51Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15818 | [
"bug-report"
] | huchenlei | 2 |
blb-ventures/strawberry-django-plus | graphql | 187 | Permissions: applying IsAuthenticated directive to entire schema | Hi there,
Instead of applying the `IsAuthenticated()` directive to individual fields, I'm looking to apply this to an entire schema.
I'd wondered if this might work, but it doesn't:
```python
authenticated_schema = gql.Schema(
    query=AuthenticatedQueries,
    mutation=AuthenticatedMutations,
    extensions=[SchemaDirectiveExtension],
    directives=[IsAuthenticated()]
)
```
I get a type error:
`Expected type 'Iterable[StrawberryDirective]', got 'list[IsAuthenticated]' instead`
And an actual error:
`AttributeError: 'IsAuthenticated' object has no attribute 'arguments'`.
Any thoughts? | open | 2023-03-14T13:55:19Z | 2023-03-22T18:35:18Z | https://github.com/blb-ventures/strawberry-django-plus/issues/187 | [] | gghdev | 3 |
matplotlib/matplotlib | data-visualization | 29,487 | [Bug]: LinearSegmentedColormap returns different results for int/float when used as a function | ### Bug summary
When invoking a `LinearSegmentedColormap` object as a function, the output can differ based on whether you pass an integer or a float.
For example, in the code snippet below, `cmap(1)` returns a completely different result than `cmap(1.0)`. While this behavior might be expected given how the colormap is implemented, it feels unintuitive.
IMO the provided reprex demonstrates the issue clearly, but please let me know if more details are needed.
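For context, my understanding of why the two calls differ, based on the documented behavior that integer inputs index the lookup table while floats map the [0, 1] range (a small sketch of mine, not part of the reprex below):
```python
from matplotlib.colors import LinearSegmentedColormap

cmap = LinearSegmentedColormap.from_list(name="reprex", colors=["red", "blue"])

print(cmap.N)            # 256 lookup-table entries by default
print(cmap(1))           # integer: entry 1 of 256, still essentially red
print(cmap(cmap.N - 1))  # integer: last entry, same color as cmap(1.0)
print(cmap(1.0))         # float: fraction 1.0 of the range, blue
```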
### Code for reproduction
```Python
from matplotlib.colors import LinearSegmentedColormap
cmap = LinearSegmentedColormap.from_list(name="reprex", colors=["red", "blue"])
print("cmap(0):", cmap(0))
print("cmap(1):", cmap(1))
print("cmap(1.0):", cmap(1.0))
```
### Actual outcome
`cmap(0): (np.float64(1.0), np.float64(0.0), np.float64(0.0), np.float64(1.0))` (red)
`cmap(1): (np.float64(0.996078431372549), np.float64(0.0), np.float64(0.00392156862745098), np.float64(1.0))` (red)
`cmap(1.0): (np.float64(0.0), np.float64(0.0), np.float64(1.0), np.float64(1.0))` (blue)
### Expected outcome
`cmap(0): (np.float64(1.0), np.float64(0.0), np.float64(0.0), np.float64(1.0))` (red)
`cmap(1): (np.float64(0.0), np.float64(0.0), np.float64(1.0), np.float64(1.0))` (blue)
`cmap(1.0): (np.float64(0.0), np.float64(0.0), np.float64(1.0), np.float64(1.0))` (blue)
### Additional information
_No response_
### Operating system
MacOS Sonoma 14.6.1
### Matplotlib Version
3.10.0
### Matplotlib Backend
module://positron_ipykernel.matplotlib_backend
### Python version
Python 3.13.1
### Jupyter version
/
### Installation
pip | closed | 2025-01-20T10:45:34Z | 2025-01-20T11:51:12Z | https://github.com/matplotlib/matplotlib/issues/29487 | [
"status: duplicate"
] | JosephBARBIERDARNAL | 1 |
modelscope/data-juicer | streamlit | 138 | [Bug]: I added a random-sampling operator and it passed its standalone test, but it reports the error below when used from the config file. Why? | ### Before Reporting 报告之前
- [X] I have pulled the latest code of main branch to run again and the bug still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you can ask a question using the Question template) 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引,并且在安装过程中没有错误发生。(否则,我们建议您使用Question模板向我们进行提问)
### Search before reporting 先搜索,再报告
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的bug报告。
### OS 系统
ubuntu
### Installation Method 安装方式
source
### Data-Juicer Version Data-Juicer版本
_No response_
### Python Version Python版本
3.9
### Describe the bug 描述这个bug
<img width="903" alt="截屏2023-12-14 下午6 45 04" src="https://github.com/alibaba/data-juicer/assets/116297296/9158252c-308e-432d-879d-cee63deded36">
The above is the output of the standalone test.
Then the config file is as follows:
<img width="416" alt="截屏2023-12-14 下午6 45 57" src="https://github.com/alibaba/data-juicer/assets/116297296/649ec6d2-0cdb-446d-a8d6-0c715a86ee69">
It reports an error; the output is:
<img width="1022" alt="截屏2023-12-14 下午6 46 25" src="https://github.com/alibaba/data-juicer/assets/116297296/e1657b5f-a7d2-49ec-af31-583c91a8337e">
### To Reproduce 如何复现
```python
import sys
import random  # newly added module
from jsonargparse.typing import PositiveFloat  # changed import
from data_juicer.utils.availability_utils import AvailabilityChecking
from data_juicer.utils.constant import Fields, StatsKeys
from data_juicer.utils.model_utils import get_model, prepare_model

from ..base_op import OPERATORS, Filter
from ..common import get_words_from_document


@OPERATORS.register_module('random_sample_filter')
class RandomSampleFilter(Filter):
    """Filter to randomly sample a percentage of samples."""

    def __init__(self,
                 tokenization: bool = False,
                 sample_percentage: PositiveFloat = 0.1,  # changed parameter
                 *args,
                 **kwargs):
        """
        Initialization method.
        :param hf_tokenizer: the tokenizer name of Hugging Face tokenizers.
        :param sample_percentage: The percentage of samples to keep.
        :param args: extra args
        :param kwargs: extra args
        """
        super().__init__(*args, **kwargs)
        self.sample_percentage = sample_percentage
        self.model_key = None

    def compute_stats(self, sample):
        # token counts are no longer computed
        return sample

    def process(self, sample):
        # keep or drop the sample with a random probability
        if random.uniform(0, 1) <= self.sample_percentage:
            return True
        else:
            return False
```
This is my random_sample_filter.py file.
### Configs 配置信息
_No response_
### Logs 报错日志
_No response_
### Screenshots 截图
_No response_
### Additional 额外信息
_No response_ | closed | 2023-12-14T10:47:11Z | 2023-12-15T02:51:22Z | https://github.com/modelscope/data-juicer/issues/138 | [
"bug"
] | hitszxs | 5 |
lexiforest/curl_cffi | web-scraping | 125 | Only version 0.1.5 can be installed | python:3.6.8
os: Linux ecom-darwin-eip 5.4.119-1-tlinux4-0009-eks #1 SMP Sat Apr 15 20:30:49 CST 2023 x86_64 x86_64 x86_64 GNU/Linux
By default, only this version can be installed; attempting to install a newer version fails.
<img width="941" alt="image" src="https://github.com/yifeikong/curl_cffi/assets/29711470/37669a14-f121-44fb-be54-721af3936437">
| closed | 2023-09-19T05:10:23Z | 2023-09-19T05:38:33Z | https://github.com/lexiforest/curl_cffi/issues/125 | [] | crazyxw | 1 |
ExpDev07/coronavirus-tracker-api | fastapi | 165 | US State Timelines | Not sure if it's something wrong with what I am doing, but I seem to have lost the ability to get US state-based timelines from the API?
| open | 2020-03-24T15:13:34Z | 2020-03-25T06:09:47Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/165 | [
"question"
] | fsa317 | 7 |
microsoft/nni | tensorflow | 5,561 | issue list | ### need reply issue ###
@J-shang
https://github.com/microsoft/nni/issues/5555
https://github.com/microsoft/nni/issues/5524
https://github.com/microsoft/nni/issues/5499
@ultmaster
https://github.com/microsoft/nni/issues/5547
@super-dainiu
https://github.com/microsoft/nni/issues/5536
@liuzhe-lz
https://github.com/microsoft/nni/issues/3496
```[tasklist]
### Tasks
```
| closed | 2023-05-15T02:26:29Z | 2023-05-18T07:04:38Z | https://github.com/microsoft/nni/issues/5561 | [] | Lijiaoa | 1 |
apify/crawlee-python | web-scraping | 968 | JSONDecodeError: Expecting value: line 1 column 1 (char 0) while opening RequestQueue | ### Issue description
Hi crawlee team. Thank you for the great work.
I encounter the following error while I try to run the crawler for the second time:
```
Traceback (most recent call last):
File "/home/sadaf/store_crawler/stores_crawler/d/dookcollection.py", line 401, in <module>
asyncio.run(main())
File "/usr/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/sadaf/store_crawler/stores_crawler/d/dookcollection.py", line 377, in main
request_queue = await RequestQueue.open(name="dookcollection")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storages/_request_queue.py", line 165, in open
return await open_storage(
^^^^^^^^^^^^^^^^^^^
File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storages/_creation_management.py", line 170, in open_storage
storage_info = await resource_collection_client.get_or_create(name=name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storage_clients/_memory/_request_queue_collection_client.py", line 35, in get_or_create
resource_client = await get_or_create_inner(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storage_clients/_memory/_creation_management.py", line 143, in get_or_create_inner
found = find_or_create_client_by_id_or_name_inner(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storage_clients/_memory/_creation_management.py", line 102, in find_or_create_client_by_id_or_name_inner
storage_path = _determine_storage_path(resource_client_class, memory_storage_client, id, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sadaf/store_crawler/store_crawler_venv/lib/python3.11/site-packages/crawlee/storage_clients/_memory/_creation_management.py", line 412, in _determine_storage_path
metadata = json.load(metadata_file)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/__init__.py", line 293, in load
return loads(fp.read(),
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
I removed the related directory in storage/request_queues and re-ran it, but I still have the same problem.
I'd appreciate it if you could help! Thanks!
### Package version
crawlee==0.5.0
| closed | 2025-02-09T12:45:43Z | 2025-02-25T09:54:33Z | https://github.com/apify/crawlee-python/issues/968 | [
"bug",
"t-tooling"
] | sadaffatollahy | 4 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,607 | Is there a sample I can use to paint an image without cutting it? | I more or less understood the test, but is there any way to paint images (I trained a small model with references on how to do it) without having to lower the quality so much? if the image is 256 you can hardly see anything even if you raise the quality. | open | 2023-10-29T23:20:33Z | 2023-10-29T23:20:33Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1607 | [] | Keiser04 | 0 |
Yorko/mlcourse.ai | data-science | 720 | Data for assignment 4 | Thanks for the course. I've been working my way through it via cloning the repo off of Github. I can't seem to find the data set for assignment 4 on sarcasm detection. If it is indeed included, apologies; if not, how would you suggest to get it? Via Kaggle or some other means? Thanks again. | closed | 2022-09-12T19:22:28Z | 2022-09-13T23:01:54Z | https://github.com/Yorko/mlcourse.ai/issues/720 | [] | jonkracht | 1 |
SciTools/cartopy | matplotlib | 2,304 | Add type hints for mypy | ### Description
I would like to propose adding type hints to cartopy so that mypy can be used with the project.
#### Code to reproduce
Using the following code (adapted from the [global map](https://scitools.org.uk/cartopy/docs/latest/gallery/lines_and_polygons/global_map.html) tutorial):
```python
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.Robinson())
ax.set_global()
ax.stock_img()
ax.coastlines()
ax.plot(-0.08, 51.53, 'o', transform=ccrs.PlateCarree())
ax.plot([-0.08, 132], [51.53, 43.17], transform=ccrs.PlateCarree())
ax.plot([-0.08, 132], [51.53, 43.17], transform=ccrs.Geodetic())
plt.show()
```
If you run:
```console
> mypy --strict plot.py
```
you'll see several errors.
#### Traceback
```
test.py:2: error: Skipping analyzing "cartopy.crs": module is installed, but missing library stubs or py.typed marker [import-untyped]
test.py:2: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
test.py:2: error: Skipping analyzing "cartopy": module is installed, but missing library stubs or py.typed marker [import-untyped]
test.py:8: error: "Axes" has no attribute "set_global" [attr-defined]
test.py:9: error: "Axes" has no attribute "stock_img" [attr-defined]
test.py:10: error: "Axes" has no attribute "coastlines" [attr-defined]
Found 5 errors in 1 file (checked 1 source file)
```
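As a user-side stopgap (my own workaround sketch, not a substitute for shipping hints and a `py.typed` marker), the `attr-defined` errors can be silenced by telling mypy what `add_subplot` returns for a cartopy projection:

```python
from typing import cast

import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from cartopy.mpl.geoaxes import GeoAxes

fig = plt.figure(figsize=(10, 5))
# add_subplot is typed as returning a plain Axes, so cast for mypy's benefit
ax = cast(GeoAxes, fig.add_subplot(1, 1, 1, projection=ccrs.Robinson()))
ax.set_global()
ax.coastlines()
```

The `import-untyped` errors would still remain until the package ships stubs or a `py.typed` marker.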
<details>
<summary>Full environment definition</summary>
### Operating system
macOS and Linux
### Cartopy version
0.22.0
### conda list
N/A
### pip list
```
Package Version
----------------------------- -----------
absl-py 1.4.0
aenum 3.1.12
affine 2.1.0
aiohttp 3.8.4
aiosignal 1.2.0
alabaster 0.7.13
altgraph 0.17.2
antlr4-python3-runtime 4.9.3
anyio 4.0.0
appdirs 1.4.4
appnope 0.1.3
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
arrow 1.2.3
asttokens 2.4.0
astunparse 1.6.3
async-lru 1.0.3
async-timeout 4.0.2
attrs 23.1.0
Babel 2.12.1
backcall 0.2.0
beautifulsoup4 4.12.2
black 23.11.0
bleach 6.0.0
Bottleneck 1.3.7
build 1.0.3
cachetools 5.2.0
Cartopy 0.22.0
certifi 2023.5.7
cffi 1.15.1
cftime 1.0.3.4
charset-normalizer 3.1.0
click 8.1.3
click-plugins 1.1.1
cligj 0.7.2
cmocean 3.0.3
colorama 0.4.6
comm 0.1.3
contourpy 1.0.7
coverage 7.2.6
cycler 0.11.0
debugpy 1.6.7
decorator 5.1.1
defusedxml 0.7.1
docstring-parser 0.15
docutils 0.18.1
editables 0.3
efficientnet-pytorch 0.7.1
einops 0.7.0
et-xmlfile 1.0.1
executing 1.2.0
fastjsonschema 2.16.3
filelock 3.12.4
Fiona 1.9.4
flake8 6.1.0
fonttools 4.39.4
fqdn 1.5.1
frozenlist 1.3.1
fsspec 2023.1.0
future 0.18.2
GDAL 3.8.0
geocube 0.3.2
geopandas 0.11.1
gevent 23.7.0
google-auth 2.20.0
google-auth-oauthlib 0.5.2
greenlet 2.0.2
grpcio 1.52.0
h5py 3.8.0
hatch-jupyter-builder 0.8.3
hatchling 1.18.0
huggingface-hub 0.14.1
hydra-core 1.3.1
idna 3.4
imageio 2.30.0
imagesize 1.4.1
importlib-metadata 6.6.0
importlib-resources 5.12.0
iniconfig 2.0.0
ipykernel 6.23.1
ipython 8.14.0
ipywidgets 8.0.2
isoduration 20.11.0
isort 5.12.0
jaraco.classes 3.2.3
jedi 0.18.2
Jinja2 3.0.3
joblib 1.2.0
json5 0.9.14
jsonargparse 4.25.0
jsonpointer 2.0
jsonschema 4.17.3
jupyter_client 8.2.0
jupyter_core 5.3.0
jupyter-events 0.6.3
jupyter-lsp 2.2.0
jupyter_server 2.6.0
jupyter_server_terminals 0.4.4
jupyterlab 4.0.1
jupyterlab-pygments 0.2.2
jupyterlab_server 2.22.1
jupyterlab-widgets 3.0.3
keyring 23.13.1
kiwisolver 1.4.4
kornia 0.7.0
laspy 2.2.0
lazy_loader 0.1
lightly 1.4.18
lightly-utils 0.0.2
lightning 2.1.2
lightning-utilities 0.8.0
macholib 1.15.2
Markdown 3.4.1
markdown-it-py 3.0.0
MarkupSafe 2.1.3
matplotlib 3.8.2
matplotlib-inline 0.1.6
mccabe 0.7.0
mdurl 0.1.2
mistune 2.0.5
more-itertools 9.1.0
mpmath 1.2.1
multidict 6.0.4
munch 2.5.0
mypy 1.7.0
mypy-extensions 1.0.0
nbclient 0.6.7
nbconvert 7.4.0
nbformat 5.8.0
nbmake 1.4.3
nbsphinx 0.8.8
nest-asyncio 1.5.6
netCDF4 1.6.2
networkx 3.1
notebook_shim 0.2.3
numexpr 2.8.4
numpy 1.26.2
oauthlib 3.2.1
odc-geo 0.1.2
omegaconf 2.3.0
openpyxl 3.1.2
overrides 7.3.1
packaging 23.1
pandas 2.1.3
pandocfilters 1.5.0
parso 0.8.3
pathspec 0.11.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 10.0.0
pip 21.2.4
pkginfo 1.9.6
planetary-computer 0.4.9
platformdirs 3.10.0
pluggy 1.0.0
pooch 1.7.0
pretrainedmodels 0.7.4
prometheus-client 0.17.0
prompt-toolkit 3.0.38
protobuf 3.20.3
psutil 5.9.5
ptyprocess 0.7.0
pure-eval 0.2.2
pyasn1 0.4.8
pyasn1-modules 0.2.8
pybind11 2.11.0
pycocotools 2.0.6
pycodestyle 2.11.0
pycparser 2.21
pydantic 1.10.9
pydocstyle 6.2.1
pyflakes 3.1.0
pygeos 0.14
Pygments 2.16.1
pyparsing 3.0.9
pyproj 3.2.1
pyproject_hooks 1.0.0
pyrsistent 0.19.3
pyshp 2.1.0
pystac 1.4.0
pystac-client 0.5.1
pytest 7.3.2
pytest-cov 4.0.0
python-dateutil 2.8.2
python-dotenv 0.19.2
python-json-logger 2.0.7
pytorch-lightning 2.0.0
pytorch-sphinx-theme 0.0.24
pytz 2023.3
pyupgrade 3.3.1
pyvista 0.42.3
PyWavelets 1.4.1
PyYAML 6.0
pyzmq 25.0.2
radiant-mlhub 0.3.1
rarfile 4.1
rasterio 1.3.8
readme-renderer 37.3
requests 2.31.0
requests-oauthlib 1.3.1
requests-toolbelt 1.0.0
rfc3339-validator 0.1.4
rfc3986 2.0.0
rfc3986-validator 0.1.1
rich 13.4.2
rioxarray 0.4.1.post0
rsa 4.9
Rtree 1.1.0
safetensors 0.3.1
scikit-image 0.20.0
scikit-learn 1.3.2
scipy 1.11.4
scooby 0.5.7
segmentation-models-pytorch 0.3.3
Send2Trash 1.8.0
setuptools 63.4.3
Shapely 1.8.1
six 1.16.0
sniffio 1.3.0
snowballstemmer 2.2.0
snuggs 1.4.1
soupsieve 2.4.1
Sphinx 5.3.0
sphinx-copybutton 0.2.12
sphinx_design 0.4.1
sphinx-rtd-theme 1.2.2
sphinxcontrib-applehelp 1.0.4
sphinxcontrib-devhelp 1.0.2
sphinxcontrib-htmlhelp 2.0.1
sphinxcontrib-jquery 4.1
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-programoutput 0.15
sphinxcontrib-qthelp 1.0.3
sphinxcontrib-serializinghtml 1.1.9
stack-data 0.6.2
sympy 1.11.1
tensorboard 2.14.1
tensorboard-data-server 0.7.0
terminado 0.17.1
threadpoolctl 3.1.0
tifffile 2023.8.30
timm 0.9.2
tinycss2 1.2.1
tokenize-rt 4.2.1
torch 2.1.1
torchmetrics 1.2.0
torchvision 0.16.1
tornado 6.3.3
tqdm 4.66.1
traitlets 5.9.0
trove-classifiers 2023.8.7
twine 4.0.2
typeshed-client 2.1.0
typing_extensions 4.8.0
tzdata 2023.3
uri-template 1.2.0
urllib3 1.26.12
vermin 1.5.2
wcwidth 0.2.7
webcolors 1.11.1
webencodings 0.5.1
websocket-client 1.6.3
Werkzeug 3.0.0
wheel 0.41.2
widgetsnbextension 4.0.3
xarray 2023.7.0
yarl 1.9.2
zipfile-deflate64 0.2.0
zipp 3.17.0
zope.event 4.6
zope.interface 5.4.0
```
</details>
| open | 2023-12-21T15:25:29Z | 2023-12-21T20:11:09Z | https://github.com/SciTools/cartopy/issues/2304 | [] | adamjstewart | 3 |
graphql-python/graphene-django | django | 909 | iterable gets refiltered by resolve_queryset but iterable might be promise | I'm trying to use DataLoader but I got a problem in DjangoConnectionField.
According to the comment, does that mean I can't use DataLoader here? My iterable here is a Promise.
https://github.com/graphql-python/graphene-django/blob/0da06d4d54d3e73d43d88534259f55733ab7609b/graphene_django/fields.py#L176
| closed | 2020-03-19T13:23:17Z | 2022-04-22T10:16:54Z | https://github.com/graphql-python/graphene-django/issues/909 | [
"wontfix"
] | frankchen211 | 2 |
JoeanAmier/TikTokDownloader | api | 399 | Cover images are downloaded repeatedly | When "original_cover": true is enabled,
the cover images of videos that have already been downloaded are re-downloaded an unlimited number of times. After deleting the cover image files, the next time main.py is run in 611q mode the program still re-downloads the covers of all previously downloaded videos every time.
Has anyone run into this situation? How did you solve it? Thanks for any pointers. | open | 2025-01-31T11:26:48Z | 2025-01-31T11:28:00Z | https://github.com/JoeanAmier/TikTokDownloader/issues/399 | [] | 9ihbd2DZSMjtsf7vecXjz | 1 |
SALib/SALib | numpy | 372 | SyntaxWarning with python 3.8 | Hello,
a SyntaxWarning occurs when using SALib with Python 3.8:
```
\lib\site-packages\SALib\util\__init__.py:222: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif row['group'] is 'NA':
\lib\site-packages\SALib\util\results.py:15: SyntaxWarning: "is not" with a literal. Did you mean "!="?
return pd.DataFrame({k: v for k, v in self.items() if k is not 'names'},
``` | closed | 2020-10-12T16:32:46Z | 2020-10-12T23:52:56Z | https://github.com/SALib/SALib/issues/372 | [] | xavArtley | 2 |
horovod/horovod | pytorch | 3,850 | Docker build horovod-nvtabular fails | `pip` installing `cudf-cu11` results in an error:
```
#12 [ 7/37] RUN pip install --no-cache-dir cudf-cu11 dask-cudf-cu11 --extra-index-url=https://pypi.ngc.nvidia.com/
#12 1.247 Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com/
#12 2.285 Collecting cudf-cu11
#12 2.391 Downloading cudf_cu11-23.2.0.tar.gz (6.5 kB)
#12 2.525 ERROR: Command errored out with exit status 1:
#12 2.525 command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-lwkt7jbg/cudf-cu11/setup.py'"'"'; __file__='"'"'/tmp/pip-install-lwkt7jbg/cudf-cu11/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-lwkt7jbg/cudf-cu11/pip-egg-info
#12 2.525 cwd: /tmp/pip-install-lwkt7jbg/cudf-cu11/
#12 2.525 Complete output (5 lines):
#12 2.525 Traceback (most recent call last):
#12 2.525 File "<string>", line 1, in <module>
#12 2.525 File "/tmp/pip-install-lwkt7jbg/cudf-cu11/setup.py", line 137, in <module>
#12 2.525 raise RuntimeError(open("ERROR.txt", "r").read())
#12 2.525 FileNotFoundError: [Errno 2] No such file or directory: 'ERROR.txt'
#12 2.525 ----------------------------------------
```
https://github.com/horovod/horovod/actions/runs/4192929617/jobs/7277248027 | closed | 2023-02-16T18:34:51Z | 2023-02-17T09:35:01Z | https://github.com/horovod/horovod/issues/3850 | [
"bug"
] | maxhgerlach | 1 |
joerick/pyinstrument | django | 119 | timeline in interactive html view | Hi! I love the timeline feature, it really helps me understand the order of operation and get my mind wrapped around execution flow. I get an error when using `timeline=True` from inside any of the output / open functions except for text output. I guess if this is the place to put a feature request then this is it. I would be happy to submit a PR if it doesn't exist, but I'd want to know what level of effort you would estimate for this. | closed | 2021-02-10T22:57:22Z | 2021-02-19T09:29:14Z | https://github.com/joerick/pyinstrument/issues/119 | [] | startakovsky | 1 |
iperov/DeepFaceLab | deep-learning | 771 | Deepface | closed | 2020-06-05T07:41:45Z | 2020-06-05T07:42:07Z | https://github.com/iperov/DeepFaceLab/issues/771 | [] | kiim-wong | 0 |
|
mage-ai/mage-ai | data-science | 5,589 | [BUG] Passing empty dataframe from Data Transformer to Data Exporter clears/removes the columns (headers) | ### Mage version
v0.9.74
### Describe the bug
We have a use case where the Data Transformer maps an incoming list to a pandas dataframe. In some cases the incoming list is empty, resulting in an empty dataframe being output, but we still want the `columns` part of the "dataframe object" to be part of the output. The resulting dataframe object is then passed to a Data Exporter, where we export the dataframe as a csv to S3.
The issue is that sometimes an empty pandas dataframe object is passed from the Transformer to the Exporter. In the Exporter, the dataframe part of the "pandas dataframe object" is empty (which is correct), but the `columns` part gets removed/cleared or replaced with an empty list (which is likely incorrect). We need the columns (headers) in the Exporter so that it can export the dataframe to a csv with just the headers (an empty file with headers only).
### To reproduce
1. In a Data transformer, create an empty dataframe with columns:
```
@transformer
def transform(data, *args, **kwargs):
    df = pd.DataFrame(columns=['A','B','C','D','E','F','G'])
    print(df)
    return df
```
Print result:
```
Empty DataFrame
Columns: [A, B, C, D, E, F, G]
Index: []
```
2. Output that dataframe from the Transformer and input that data into a Data Exporter:
```
@data_exporter
def export_data_to_s3(data, **kwargs) -> None:
    print(data)
```
Print result:
```
Empty DataFrame
Columns: []
Index: []
```
### Expected behavior
Even if the dataframe is empty, the columns part of the dataframe object should still be passed on. In Data Exporter, when I print the incoming df, it should show this:
```
Empty DataFrame
Columns: [A, B, C, D, E, F, G]
Index: []
```
### Screenshots
_No response_
### Operating system
v0.9.74
python 3.12.3
### Additional context
_No response_ | open | 2024-11-22T13:21:11Z | 2024-11-22T13:21:11Z | https://github.com/mage-ai/mage-ai/issues/5589 | [
"bug"
] | fltfx | 0 |
graphql-python/graphene-sqlalchemy | graphql | 319 | How to tweak query structure from relationships | I'm working on a simple CRUD REST API to learn GraphQL & SqlAlchemy. I have a Movie table
```
class Movie(Base, Serializer):
    __tablename__ = 'movie'
    id = Column(Integer, primary_key=True, index=True)
    movie = Column(String(50), nullable=False, unique=True)
    budget = Column(Float, nullable=False)
    genre_id = Column(Integer, ForeignKey('genre.id'), nullable=False)
    rating = Column(Float, nullable=False)
    studio_id = Column(Integer, ForeignKey('studio.id'), nullable=False)
    director_id = Column(Integer, ForeignKey('director.id'), nullable=False)
    director = relationship(
        Director,
        backref=backref('movies', uselist=True, cascade='delete,all')
    )
    genre = relationship(
        Genre,
        backref=backref('movies', uselist=True, cascade='delete,all')
    )
    studio = relationship(
        Studio,
        backref=backref('movies', uselist=True, cascade='delete,all')
    )
    actors = relationship(
        Actor,
        secondary=movie_actor_association_table,
        backref='movies',
        uselist=True
    )
```
that has its own properties (movie, budget, rating) but also 4 foreign keys (genre, studio, director, actors).
my GraphQL types are simple
```
class Movie(SQLAlchemyObjectType):
    class Meta:
        model = MovieModel
        interfaces = (relay.Node,)

class Director(SQLAlchemyObjectType):
    class Meta:
        model = DirectorModel
        interfaces = (relay.Node,)

class Genre(SQLAlchemyObjectType):
    class Meta:
        model = GenreModel
        interfaces = (relay.Node,)

class Studio(SQLAlchemyObjectType):
    class Meta:
        model = StudioModel
        interfaces = (relay.Node,)

class Actor(SQLAlchemyObjectType):
    class Meta:
        model = ActorModel
        interfaces = (relay.Node,)
```
However, when I query data, I have to nest repeated key/value pairs for the relationship tables just to get simple values:
```
movies {
  edges {
    node {
      id
      movie
      budget
      genre {
        genre
      }
      rating
      studio {
        studio
      }
      director {
        director
      }
      actors {
        edges {
          node {
            actor
          }
        }
      }
    }
  }
}
```
i.e. can I avoid using genre {genre}, studio {studio}, etc. and just retrieve genre directly inside the movie?
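One direction I have been considering for this (a sketch only; the flattened field names are my own invention) is exposing scalar fields with resolvers that read through the relationship:
```python
import graphene

class Movie(SQLAlchemyObjectType):
    class Meta:
        model = MovieModel
        interfaces = (relay.Node,)

    # flattened convenience fields (hypothetical names)
    genre_name = graphene.String()
    studio_name = graphene.String()

    def resolve_genre_name(self, info):
        return self.genre.genre

    def resolve_studio_name(self, info):
        return self.studio.studio
```
Is that the idiomatic way, or is there something built in for this?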
**bonus question**: adding filters to these relationships doesn't work
I have a movie filter
```
class MovieFilter(FilterSet):
    class Meta:
        model = MovieModel
        fields = {
            'id': ['eq'],
            'movie': ['eq', 'ilike'],
            'rating': ['eq', 'gt', 'gte']
        }
```
that I can use like so
```
class Query(graphene.ObjectType):
    node = relay.Node.Field()
    movies = FilterableConnectionField(Movie.connection, filters=MovieFilter())
```
to have filtering available for my `movie` table. However, the filters only work for the fields defined in the `movie` table itself, i.e. `movie name`, `rating`, `budget`. Does anyone know how I can use `graphene-sqlalchemy-filter` to filter for all fields (director/actor/genre/studio)? It seems to me that GraphQL doesn't handle relationships all that well. | closed | 2021-10-01T01:13:06Z | 2023-02-25T00:48:46Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/319 | [
"question"
] | shlomi84 | 2 |
Yorko/mlcourse.ai | scikit-learn | 703 | patreon payment | Hi, I paid the $17 for the bonus assignment, but I have no way to access it. Please help. | closed | 2022-03-16T08:40:31Z | 2022-03-16T19:07:14Z | https://github.com/Yorko/mlcourse.ai/issues/703 | [] | vahuja4 | 1 |
zappa/Zappa | flask | 855 | [Migrated] How to update app without downtime? | Originally from: https://github.com/Miserlou/Zappa/issues/2103 by [xncbf](https://github.com/xncbf)
In the case of AWS Elastic Beanstalk, zero-downtime deployment is possible through environment replication and URL swap. Is it possible to do something similar with Zappa? | closed | 2021-02-20T12:52:32Z | 2024-04-13T19:10:31Z | https://github.com/zappa/Zappa/issues/855 | [
"no-activity",
"auto-closed"
] | jneves | 3 |
miguelgrinberg/Flask-Migrate | flask | 234 | Migrate to multiple databases simultaneously | Greetings. I am doing a project and I need to migrate to several databases simultaneously. For example, I have bind 2 databases in SQLALCHEMY_BINDS like that :
```python
app.config['SQLALCHEMY_BINDS'] = {
    'bobkov1': 'postgresql://postgres:zabil2012@localhost:5431/bobkov1',
    'bobkov' : 'postgresql://postgres:zabil2012@localhost:5431/bobkov'
}
```
Now I want to migrate the models to both of these databases. I tried to do it like this:
```python
class User(BaseModel, db.Model):
    __tablename__ = 'user'
    __bind_key__ = {'bobkov','bobkov1'}
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(120), index=True, unique=True)
    password_hash = db.Column(db.String(128))
    posts = db.relationship('Post', backref='author', lazy='dynamic')

class Post(db.Model):
    __tablename__ = 'post'
    __bind_key__ = {'bobkov','bobkov1'}
    id = db.Column(db.Integer, primary_key=True)
    body = db.Column(db.String(140))
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
```
When trying to run this code, the model migrates only to the main database, which is defined in SQLALCHEMY_DATABASE_URI.
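For what it's worth, the direction I am currently reading about (based on my understanding of Flask-Migrate's multi-database support, not yet verified) is initializing migrations with the multidb template and giving each model a single string bind key:
```python
# run once instead of the plain init (assumption based on the docs):
#   flask db init --multidb
class Post(db.Model):
    __tablename__ = 'post'
    __bind_key__ = 'bobkov'  # one string bind per model, not a set
    id = db.Column(db.Integer, primary_key=True)
```
As far as I can tell this still maps each table to exactly one bind, so creating identical tables in both databases may need a different setup.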
Help me please, how to configure that? | closed | 2018-10-30T12:45:57Z | 2020-10-08T13:58:22Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/234 | [
"question"
] | BobkovS | 22 |
geex-arts/django-jet | django | 216 | Image uploading and updating is not working with django-filer | I have integrated django-jet for all it's beautiful design and customized functionalities.
I'm also using django-filer for file and image uploading.
But I'm facing this issue when using django-filer with django-jet.
-
> I'm unable to change image after uploading it for the first time. Image upload popup is also not opening.
Simply put, I can select the image for upload the first time, but after that I cannot update that image.
Please check below screenshot.

Has anybody encountered same problem? Help me. | open | 2017-05-23T12:04:37Z | 2017-10-10T15:43:55Z | https://github.com/geex-arts/django-jet/issues/216 | [] | mjrulesamrat | 6 |
deeppavlov/DeepPavlov | nlp | 1,329 | pymorphy2 0.9.1 is released | Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first.
**What problem are we trying to solve?**:
Current `pymorphy2` requirement [is obsolete](https://github.com/deepmipt/DeepPavlov/blob/0.12.1/requirements.txt#L11) in DeepPavlov.
`pymorphy2 0.9.1` [was released](https://github.com/kmike/pymorphy2/releases/tag/0.9.1).
See also: https://github.com/kmike/pymorphy2/issues/125, https://github.com/kmike/pymorphy2/issues/133.
**How can we solve it?**:
```
pymorphy2==0.9.1
```` | closed | 2020-10-10T13:58:17Z | 2022-04-01T13:02:20Z | https://github.com/deeppavlov/DeepPavlov/issues/1329 | [
"enhancement"
] | kuraga | 1 |
alirezamika/autoscraper | web-scraping | 1 | Progression of errors while installing | ### 1
santiago@santiago-Aspire-A515-51:~$ pip install git+https://github.com/alirezamika/autoscraper.git
Defaulting to user installation because normal site-packages is not writeable
Collecting git+https://github.com/alirezamika/autoscraper.git
Cloning https://github.com/alirezamika/autoscraper.git to /tmp/pip-req-build-zjd5pn9g
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-zjd5pn9g/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-zjd5pn9g/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-du5i409j
cwd: /tmp/pip-req-build-zjd5pn9g/
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-zjd5pn9g/setup.py", line 7, in <module>
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
File "/usr/lib/python3.6/codecs.py", line 897, in open
file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-req-build-zjd5pn9g/README.rst'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
### 2 - Git clone manually, installed setuptools manually, then Readme.rst not found
santiago@santiago-Aspire-A515-51:~/Devel/autoscraper$ python3.8 -m pip install setuptools
Collecting setuptools
Cache entry deserialization failed, entry ignored
Downloading https://files.pythonhosted.org/packages/b0/8b/379494d7dbd3854aa7b85b216cb0af54edcb7fce7d086ba3e35522a713cf/setuptools-50.0.0-py3-none-any.whl (783kB)
100% |████████████████████████████████| 788kB 615kB/s
Installing collected packages: setuptools
Successfully installed setuptools-50.0.0
santiago@santiago-Aspire-A515-51:~/Devel/autoscraper$ python setup.py install
Traceback (most recent call last):
File "setup.py", line 7, in <module>
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
File "/usr/lib/python3.8/codecs.py", line 905, in open
file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '/home/santiago/Devel/autoscraper/README.rst'
### 3 - Renamed Readme.md to README.rst
santiago@santiago-Aspire-A515-51:~/Devel/autoscraper$ python setup.py install
running install
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/usr/lib/python3.8/site-packages'
The installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/usr/lib/python3.8/site-packages/
This directory does not currently exist. Please create it and try again, or
choose a different installation directory (using the -d or --install-dir
option).
| closed | 2020-08-31T17:33:31Z | 2020-08-31T20:08:38Z | https://github.com/alirezamika/autoscraper/issues/1 | [] | santiagodemierre | 3 |
matplotlib/mplfinance | matplotlib | 150 | Version 0.12.5 | Hi, I noticed there are comments regarding Version 0.12.5. When will this version be released ? | closed | 2020-06-05T06:15:12Z | 2020-06-05T10:29:59Z | https://github.com/matplotlib/mplfinance/issues/150 | [
"question"
] | Shuffydog | 3 |
kizniche/Mycodo | automation | 571 | setup>functions>conditional measurement stops working all the time | ## Mycodo Issue Report:
- Specific Mycodo Version: 6.4.5
#### Problem Description
Setup conditional measurement control to turn on a relay. Conditional control stops working after some time. Works for hours sometimes. Only way to fix is change some parameters in the conditional measurement control and save the changes.
- what were you trying to do
using analog sensors to turn a relay on and off depending on the voltage from the sensors.
| closed | 2018-11-25T09:48:30Z | 2018-12-22T17:51:24Z | https://github.com/kizniche/Mycodo/issues/571 | [] | SAM26K | 89 |
NullArray/AutoSploit | automation | 445 | Unhandled Exception (495ff691e) | Autosploit version: `3.0`
OS information: `Linux-4.15.0-45-generic-x86_64-with-Ubuntu-18.04-bionic`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/home/peerles/源代码/Autosploit/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/home/peerles/源代码/Autosploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
| closed | 2019-02-08T14:07:27Z | 2019-02-19T04:22:45Z | https://github.com/NullArray/AutoSploit/issues/445 | [] | AutosploitReporter | 0 |
mwaskom/seaborn | data-visualization | 2,992 | PolyFit is not robust to missing data | ```python
import seaborn.objects as so

so.Plot([1, 2, 3, None, 4], [1, 2, 3, 4, 5]).add(so.Line(), so.PolyFit())
```
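As a user-side workaround sketch (dropping the missing pairs before they reach the stat; this is not a fix for `PolyFit` itself):

```python
import seaborn.objects as so

x = [1, 2, 3, None, 4]
y = [1, 2, 3, 4, 5]

# keep only complete (x, y) pairs before fitting
pairs = [(xi, yi) for xi, yi in zip(x, y) if xi is not None and yi is not None]
xs, ys = zip(*pairs)

so.Plot(list(xs), list(ys)).add(so.Line(), so.PolyFit())
```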
<details><summary>Traceback</summary>
```python-traceback
---------------------------------------------------------------------------
LinAlgError Traceback (most recent call last)
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/IPython/core/formatters.py:343, in BaseFormatter.__call__(self, obj)
341 method = get_real_method(obj, self.print_method)
342 if method is not None:
--> 343 return method()
344 return None
345 else:
File ~/code/seaborn/seaborn/_core/plot.py:265, in Plot._repr_png_(self)
263 def _repr_png_(self) -> tuple[bytes, dict[str, float]]:
--> 265 return self.plot()._repr_png_()
File ~/code/seaborn/seaborn/_core/plot.py:804, in Plot.plot(self, pyplot)
800 """
801 Compile the plot spec and return the Plotter object.
802 """
803 with theme_context(self._theme_with_defaults()):
--> 804 return self._plot(pyplot)
File ~/code/seaborn/seaborn/_core/plot.py:822, in Plot._plot(self, pyplot)
819 plotter._setup_scales(self, common, layers, coord_vars)
821 # Apply statistical transform(s)
--> 822 plotter._compute_stats(self, layers)
824 # Process scale spec for semantic variables and coordinates computed by stat
825 plotter._setup_scales(self, common, layers)
File ~/code/seaborn/seaborn/_core/plot.py:1110, in Plotter._compute_stats(self, spec, layers)
1108 grouper = grouping_vars
1109 groupby = GroupBy(grouper)
-> 1110 res = stat(df, groupby, orient, scales)
1112 if pair_vars:
1113 data.frames[coord_vars] = res
File ~/code/seaborn/seaborn/_stats/regression.py:41, in PolyFit.__call__(self, data, groupby, orient, scales)
39 def __call__(self, data, groupby, orient, scales):
---> 41 return groupby.apply(data, self._fit_predict)
File ~/code/seaborn/seaborn/_core/groupby.py:109, in GroupBy.apply(self, data, func, *args, **kwargs)
106 grouper, groups = self._get_groups(data)
108 if not grouper:
--> 109 return self._reorder_columns(func(data, *args, **kwargs), data)
111 parts = {}
112 for key, part_df in data.groupby(grouper, sort=False):
File ~/code/seaborn/seaborn/_stats/regression.py:30, in PolyFit._fit_predict(self, data)
28 xx = yy = []
29 else:
---> 30 p = np.polyfit(x, y, self.order)
31 xx = np.linspace(x.min(), x.max(), self.gridsize)
32 yy = np.polyval(p, xx)
File <__array_function__ internals>:180, in polyfit(*args, **kwargs)
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/numpy/lib/polynomial.py:668, in polyfit(x, y, deg, rcond, full, w, cov)
666 scale = NX.sqrt((lhs*lhs).sum(axis=0))
667 lhs /= scale
--> 668 c, resids, rank, s = lstsq(lhs, rhs, rcond)
669 c = (c.T/scale).T # broadcast scale coefficients
671 # warn on rank reduction, which indicates an ill conditioned matrix
File <__array_function__ internals>:180, in lstsq(*args, **kwargs)
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/numpy/linalg/linalg.py:2300, in lstsq(a, b, rcond)
2297 if n_rhs == 0:
2298 # lapack can't handle n_rhs = 0 - so allocate the array one larger in that axis
2299 b = zeros(b.shape[:-2] + (m, n_rhs + 1), dtype=b.dtype)
-> 2300 x, resids, rank, s = gufunc(a, b, rcond, signature=signature, extobj=extobj)
2301 if m == 0:
2302 x[...] = 0
File ~/miniconda3/envs/seaborn-py39-latest/lib/python3.9/site-packages/numpy/linalg/linalg.py:101, in _raise_linalgerror_lstsq(err, flag)
100 def _raise_linalgerror_lstsq(err, flag):
--> 101 raise LinAlgError("SVD did not converge in Linear Least Squares")
LinAlgError: SVD did not converge in Linear Least Squares
```
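The root of the traceback is `np.polyfit` receiving NaN once the `None` coordinate is coerced to a missing value. A minimal guard sketch (not the patch that actually landed in seaborn; the short-circuit condition is approximated) would drop incomplete rows inside `PolyFit._fit_predict` before fitting:

```python
import numpy as np
import pandas as pd

def _fit_predict_sketch(data: pd.DataFrame, order: int = 2, gridsize: int = 100) -> pd.DataFrame:
    # Drop rows where either coordinate is missing before calling polyfit.
    data = data.dropna(subset=["x", "y"])
    x, y = data["x"], data["y"]
    if x.nunique() <= order:
        xx = yy = []
    else:
        p = np.polyfit(x, y, order)
        xx = np.linspace(x.min(), x.max(), gridsize)
        yy = np.polyval(p, xx)
    return pd.DataFrame(dict(x=xx, y=yy))
```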
</details> | closed | 2022-09-03T17:35:22Z | 2022-09-12T00:24:04Z | https://github.com/mwaskom/seaborn/issues/2992 | [
"bug",
"objects-stat"
] | mwaskom | 0 |
zappa/Zappa | flask | 561 | [Migrated] Release plan for Zappa | Originally from: https://github.com/Miserlou/Zappa/issues/1480 by [efimerdlerkravitz](https://github.com/efimerdlerkravitz)
This is not an actual bug, unfortunately I don't know exactly where to ask it.
Any release plan for Zappa? When is the next version supposed to be released? | closed | 2021-02-20T12:22:47Z | 2022-07-16T07:06:10Z | https://github.com/zappa/Zappa/issues/561 | [] | jneves | 1 |
gunthercox/ChatterBot | machine-learning | 2,211 | is it possible to train chatterbot on memes? | I couldnt find anything on my light google search so I thought id ask.
I was wondering if I can train chatterbot on a CSV with a meme in the message and response fields.
I'm new to this whole machine learning thing, so sorry if it's a dumb question
Thank you! | closed | 2021-10-27T23:59:50Z | 2025-02-26T11:46:41Z | https://github.com/gunthercox/ChatterBot/issues/2211 | [] | jhmauritz | 2 |
yzhao062/pyod | data-science | 36 | LOCI fails on MacOS with Python 2.7 (caused by np.count_nonzero) | It is noted running **LOCI** model on **MacOS** with **Python 2.7** may fail. One potential cause is the following code, as np.count_nonzero returns **int** instead of **array**.
I am currently investigating how to fix it. Please stay tuned.
```
def _get_alpha_n(self, dist_matrix, indices, r):
"""Computes the alpha neighbourhood points.
Parameters
----------
dist_matrix : array-like, shape (n_samples, n_features)
The distance matrix w.r.t. to the training samples.
indices : int
Subsetting index
r : int
Neighbourhood radius
Returns
-------
alpha_n : array, shape (n_alpha, )
Returns the alpha neighbourhood points.
"""
if type(indices) is int:
alpha_n = np.count_nonzero(
dist_matrix[indices, :] < (r * self._alpha))
return alpha_n
else:
alpha_n = np.count_nonzero(
dist_matrix[indices, :] < (r * self._alpha), axis=1)
return alpha_n
```
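A version-robust alternative (a sketch, not necessarily the project's eventual fix) is to sum a boolean mask instead: summing over the last axis yields an array of counts for a 2-D selection and a plain scalar for a single row, on any NumPy release.

```python
import numpy as np

def count_within_radius(dist_matrix, indices, r, alpha):
    # mask.sum(axis=-1) behaves consistently across NumPy versions:
    # a scalar count for a single row, an array of counts for several rows.
    mask = dist_matrix[indices, :] < (r * alpha)
    return mask.sum(axis=-1)
```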
The error message looks like below:
> (test27) bash-3.2$ python loci_example.py
> /anaconda2/envs/test27/lib/python2.7/site-packages/pyod/models/loci.py:199: RuntimeWarning: divide by zero encountered in double_scalars
> outlier_scores[p_ix] = mdef/sigma_mdef
> /Users/zhaoy9/.local/lib/python2.7/site-packages/numpy/core/_methods.py:101: RuntimeWarning: invalid value encountered in subtract
> x = asanyarray(arr - arrmean)
> On Training Data:
> Traceback (most recent call last):
> File "loci_example.py", line 133, in <module>
> evaluate_print(clf_name, y_train, y_train_scores)
> File "/anaconda2/envs/test27/lib/python2.7/site-packages/pyod/utils/data.py", line 159, in evaluate_print
> roc=np.round(roc_auc_score(y, y_pred), decimals=4),
> File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/metrics/ranking.py", line 356, in roc_auc_score
> sample_weight=sample_weight)
> File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/metrics/base.py", line 77, in _average_binary_score
> return binary_metric(y_true, y_score, sample_weight=sample_weight)
> File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/metrics/ranking.py", line 328, in _binary_roc_auc_score
> sample_weight=sample_weight)
> File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/metrics/ranking.py", line 618, in roc_curve
> y_true, y_score, pos_label=pos_label, sample_weight=sample_weight)
> File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/metrics/ranking.py", line 403, in _binary_clf_curve
> assert_all_finite(y_score)
> File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/utils/validation.py", line 68, in assert_all_finite
> _assert_all_finite(X.data if sp.issparse(X) else X, allow_nan)
> File "/anaconda2/envs/test27/lib/python2.7/site-packages/sklearn/utils/validation.py", line 56, in _assert_all_finite
> raise ValueError(msg_err.format(type_err, X.dtype))
> ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). | closed | 2018-12-04T04:17:45Z | 2018-12-13T01:37:00Z | https://github.com/yzhao062/pyod/issues/36 | [
"bug"
] | yzhao062 | 1 |
dask/dask | pandas | 11,389 | mode on `axis=1` | The `mode` method in a `dask` `DataFrame` does not allow for the argument `axis=1`. It would be great to have since it seems that in `pandas`, that operation is very slow and seems straightforward to parallelize.
I would like to be able to do this in dask.
```
import pandas as pd
import numpy as np
import dask.dataframe as dd
np.random.seed(0)
N_ROWS = 1_000
df = pd.DataFrame({'a':np.random.randint(0, 100, N_ROWS),
'b':np.random.randint(0, 100, N_ROWS),
'c':np.random.randint(0, 100, N_ROWS)})
df['d'] = df['a'] #ensure mode is column 'a', unless b=c, then there are two modes
df.mode(axis=1)
```
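Until a built-in `axis=1` mode exists, one hedged workaround sketch is to compute the row-wise mode per partition, since each row's mode depends only on that row. This reuses the `df` built above:

```python
import dask
import dask.dataframe as dd
import pandas as pd

ddf = dd.from_pandas(df, npartitions=4)  # df as defined in the example above

# Row-wise mode is embarrassingly parallel: run the pandas implementation on each
# partition independently and stitch the results back together.
parts = [dask.delayed(lambda pdf: pdf.mode(axis=1))(part) for part in ddf.to_delayed()]
result = pd.concat(dask.compute(*parts))
```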
For reference, in pandas with `N_ROWS = 100_000`, the mode operation takes 20 seconds, and the time seems to grow linearly with number of observations. | open | 2024-09-16T14:55:33Z | 2025-03-10T01:51:04Z | https://github.com/dask/dask/issues/11389 | [
"dataframe",
"needs attention",
"enhancement"
] | marcdelabarrera | 4 |
lanpa/tensorboardX | numpy | 97 | About RNN | I wrote an RNN program in which I didn't use nn.model, so that I could see the structure. But I found some problems. The code and the datasets are as follows:
https://github.com/VeritasXu/RNN
Can you run and see the structure? I think my program is correct, but the input type of add_graph function results in the strange structure. Could you help me ? | closed | 2018-03-09T15:04:40Z | 2018-03-12T04:45:00Z | https://github.com/lanpa/tensorboardX/issues/97 | [] | VeritasXu | 2 |
davidteather/TikTok-Api | api | 560 | It does not support mobile links [BUG] - Your Error Here | # Read Below!!! If this doesn't fix your issue delete these two lines
**You may need to install chromedriver for your machine globally. Download it [here](https://sites.google.com/a/chromium.org/chromedriver/) and add it to your path.**
**Describe the bug**
A clear and concise description of what the bug is.
**The buggy code**
Please insert the code that is throwing errors or is giving you weird unexpected results.
```
# Code Goes Here
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Error Trace (if any)**
Put the error trace below if there's any error thrown.
```
# Error Trace Here
```
**Desktop (please complete the following information):**
- OS: [e.g. Windows 10]
- TikTokApi Version [e.g. 3.3.1] - if out of date upgrade before posting an issue
**Additional context**
Add any other context about the problem here.
| closed | 2021-04-13T15:20:30Z | 2021-04-13T15:27:34Z | https://github.com/davidteather/TikTok-Api/issues/560 | [
"bug"
] | ghost | 0 |
piccolo-orm/piccolo | fastapi | 1,099 | Objects accept node parameter for choosing extra node | Objects accept node parameter for choosing extra node | closed | 2024-10-14T15:23:34Z | 2024-10-16T08:05:00Z | https://github.com/piccolo-orm/piccolo/issues/1099 | [] | erhuabushuo | 2 |
TencentARC/GFPGAN | deep-learning | 530 | Gfpgan Not working on colab |
![Uploading Screenshot_20240321_084829.jpg…]()
I'm regularly using GFPGAN on Colab to upscale my AI-generated images, but for the last two weeks I have been facing a problem: the image is not upscaled. Please check and correct that. I tried many times to solve it myself but couldn't. Please help. | closed | 2024-03-21T02:58:40Z | 2024-03-21T03:20:16Z | https://github.com/TencentARC/GFPGAN/issues/530 | [] | christopherdisho | 0 |
lucidrains/vit-pytorch | computer-vision | 141 | Should init scale matrix as diagonal form? | Hi, Phil:
I noticed the LayerScale part in the `CaiT`, in the original paper the scale matrix is a diagonal form `(b,d,d)`, but in this implement, it just initialized in a form of vector(maybe can broadcast afterwards, but would it be better just initialize as a diagonal form?)
https://github.com/lucidrains/vit-pytorch/blob/3f754956fbfb1f97ae4f1e244a7ecb16eab79296/vit_pytorch/cait.py#L41
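For what it's worth, multiplying by a learned vector that broadcasts over the feature dimension is numerically identical to multiplying by the diagonal matrix built from that vector, so materializing the full `(d, d)` diagonal form would only add memory and compute. A small check sketch:

```python
import torch

b, n, d = 2, 5, 8
x = torch.randn(b, n, d)
gamma = torch.full((d,), 1e-5)        # per-dimension LayerScale parameters

broadcast = x * gamma                 # what the implementation does
diagonal = x @ torch.diag(gamma)      # the explicit diag form from the paper

assert torch.allclose(broadcast, diagonal)
```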
Best, | closed | 2021-08-17T03:56:24Z | 2021-08-20T02:22:09Z | https://github.com/lucidrains/vit-pytorch/issues/141 | [] | CiaoHe | 3 |
apache/airflow | python | 48,076 | Add support for active session timeout in Airflow Web UI | ### Description
Currently, Airflow only support inactive session timeout via the `session_lifetime_minutes` config option. This handles session expiration after a period of inactivity, which is great - but it doesn't cover cases where a session should expire regardless of activity (i.e, an active session timeout).
This is a common requirement in environments with stricter security/compliance policies (e.g, session must expire after x hours, even if user is active)
### Use case/motivation
Introduce a new configuration option (e.g, `session_max_lifetime_minutes`) that defines the maximum duration a session can remain valid from the time of login, regardless of user activity.
This feature will help admins better enforce time-based access control.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-21T18:49:07Z | 2025-03-22T21:09:21Z | https://github.com/apache/airflow/issues/48076 | [
"kind:feature",
"area:UI",
"needs-triage"
] | bmoon4 | 2 |
PokeAPI/pokeapi | api | 743 | Error with trying to get the evolution chain | So Im trying to get the evolution chain for pokemon using this link:
https://pokeapi.co/api/v2/pokemon-species/2
But it keeps telling me KeyError
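A likely cause (an assumption, since the failing code isn't shown) is that the species payload only contains a link to the evolution chain rather than the chain itself, so a second request is needed. A minimal sketch with `requests`:

```python
import requests

# The species endpoint returns an "evolution_chain" entry holding a URL,
# which has to be fetched separately to get the actual chain.
species = requests.get("https://pokeapi.co/api/v2/pokemon-species/2").json()
chain_url = species["evolution_chain"]["url"]
chain = requests.get(chain_url).json()["chain"]

while chain:
    print(chain["species"]["name"])
    chain = chain["evolves_to"][0] if chain["evolves_to"] else None
```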
Im using discord.py to make this into a command btw | closed | 2022-08-07T00:34:21Z | 2022-08-07T04:03:22Z | https://github.com/PokeAPI/pokeapi/issues/743 | [] | Necrosis000 | 3 |
onnx/onnx | tensorflow | 6,772 | Introduction of https://www.conventionalcommits.org/ for PullRequest Titles? | I would consider it useful to introduce https://www.conventionalcommits.org/ at least at PullRequest title level.
We could only recommend it, or check it directly with e.g. the following Github Action Use. https://github.com/marketplace/actions/conventional-commit-in-pull-requests
I think the advantages are obvious, a better commit history in the main would make it easier for us in terms of release notes etc.
A next step would probably be:
* Define “Commit Types”, or do we need other than predefined?
* Scopes” do we need any? or what could they be?
What do you think about this? | open | 2025-03-08T05:12:22Z | 2025-03-08T15:56:17Z | https://github.com/onnx/onnx/issues/6772 | [] | andife | 1 |
encode/uvicorn | asyncio | 1,297 | Feature request: Ability to import uvicorn in django to enable websocket support | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
- [ ] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive.
### Is your feature related to a problem? Please describe.
When we do `python manage.py runserver` we have a line in our manage.py file `import daphne.server` which enables websocket support with runserver. If we could do the same thing with uvicorn that would let us get rid of daphne entirely.
### Describe the solution you would like.
websocket support for django runserver
### Describe alternatives you considered
* continue using daphne for runserver (downside: extra dependency)
* use uvicorn with autoreload feature. (downside: devs prefer using runserver)
### Additional context
_No response_ | closed | 2021-12-22T01:59:52Z | 2023-02-03T08:14:27Z | https://github.com/encode/uvicorn/issues/1297 | [] | caleb15 | 6 |
mlfoundations/open_clip | computer-vision | 17 | Loss is constant | I'm using CLIP to train on my custom dataset with the following params:
Dataset size : 50k image-text pairs
Batch size : 128
Image Size : 224
Gpus : 1
Epochs : 500
It's been running for a while now, I'm on my 15th epoch, and the loss hasn't changed at all. It isn't a constant number, but it's constantly at 4.8xxx. Should I be concerned? I'm not sure why this is happening.
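One plausible reading (an assumption, not a diagnosis): the contrastive loss of a model still at chance level equals the log of the per-GPU batch size, and with a batch of 128 that is almost exactly the value reported, which usually means the model has not started learning rather than that the logging is wrong.

```python
import math

# InfoNCE / CLIP loss at chance level is -log(1/N) = log(N) for batch size N.
print(math.log(128))  # 4.852030263919617, i.e. the observed "4.8xxx" plateau
```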

| closed | 2021-09-13T20:47:23Z | 2022-04-06T00:11:30Z | https://github.com/mlfoundations/open_clip/issues/17 | [] | tarunn2799 | 14 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 737 | MAP and AP50 | If my dataset has only one class, shouldn't the reported MAP and AP50 be roughly the same? Why is MAP only 0.3 while AP50 reaches 0.7? Also, how do I modify the reported metrics if I want to output other metrics, such as precision, recall, or some custom ones? | open | 2023-05-20T06:12:37Z | 2023-05-20T06:12:37Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/737 | [] | thestars-maker | 0 |
iMerica/dj-rest-auth | rest-api | 466 | You're accessing the development server over HTTPS, but it only supports HTTP. | You're accessing the development server over HTTPS, but it only supports HTTP.
This error always shows up while accessing a dj-rest-auth view | open | 2023-01-06T08:00:50Z | 2023-01-17T12:34:06Z | https://github.com/iMerica/dj-rest-auth/issues/466 | [] | Danimoz | 1 |
asacristani/fastapi-rocket-boilerplate | pydantic | 22 | Testing: add mypy and pylint to the pre-commit | A lot of lines to fix. | closed | 2023-10-11T10:59:46Z | 2024-04-04T22:00:48Z | https://github.com/asacristani/fastapi-rocket-boilerplate/issues/22 | [
"enhancement",
"improvement"
] | asacristani | 0 |
pydata/pandas-datareader | pandas | 63 | Yahoo Finance Options tests raises ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y' | Hello,
some Yahoo Finance Options tests raises
```
ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y'
```
I can see this exception using
```
$ nosetests -s -v
======================================================================
ERROR: test_get_all_data (test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates
expiry_dates = self._expiry_dates
AttributeError: 'Options' object has no attribute '_expiry_dates'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 358, in test_get_all_data
data = self.aapl.get_all_data(put=True)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1197, in get_all_data
expiry_dates = self.expiry_dates
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates
expiry_dates, _ = self._get_expiry_dates_and_links()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp>
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y'
======================================================================
ERROR: test_get_all_data_calls_only (test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates
expiry_dates = self._expiry_dates
AttributeError: 'Options' object has no attribute '_expiry_dates'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 372, in test_get_all_data_calls_only
data = self.aapl.get_all_data(call=True, put=False)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1197, in get_all_data
expiry_dates = self.expiry_dates
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates
expiry_dates, _ = self._get_expiry_dates_and_links()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp>
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y'
======================================================================
ERROR: test_get_call_data (test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates
expiry_dates = self._expiry_dates
AttributeError: 'Options' object has no attribute '_expiry_dates'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 337, in test_get_call_data
calls = self.aapl.get_call_data(expiry=self.expiry)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 901, in get_call_data
expiry = self._try_parse_dates(year, month, expiry)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1061, in _try_parse_dates
expiry = [self._validate_expiry(expiry)]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1085, in _validate_expiry
expiry_dates = self.expiry_dates
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates
expiry_dates, _ = self._get_expiry_dates_and_links()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp>
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y'
======================================================================
ERROR: test_get_data_with_list (test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates
expiry_dates = self._expiry_dates
AttributeError: 'Options' object has no attribute '_expiry_dates'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 365, in test_get_data_with_list
data = self.aapl.get_call_data(expiry=self.aapl.expiry_dates)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates
expiry_dates, _ = self._get_expiry_dates_and_links()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp>
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y'
======================================================================
ERROR: test_get_expiry_dates (test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 351, in test_get_expiry_dates
dates, _ = self.aapl._get_expiry_dates_and_links()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp>
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y'
======================================================================
ERROR: test_get_near_stock_price (test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates
expiry_dates = self._expiry_dates
AttributeError: 'Options' object has no attribute '_expiry_dates'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 330, in test_get_near_stock_price
expiry=self.expiry)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1005, in get_near_stock_price
expiry = self._try_parse_dates(year, month, expiry)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1061, in _try_parse_dates
expiry = [self._validate_expiry(expiry)]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1085, in _validate_expiry
expiry_dates = self.expiry_dates
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates
expiry_dates, _ = self._get_expiry_dates_and_links()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp>
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y'
======================================================================
ERROR: test_get_options_data (test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates
expiry_dates = self._expiry_dates
AttributeError: 'Options' object has no attribute '_expiry_dates'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 322, in test_get_options_data
options = self.aapl.get_options_data(expiry=self.expiry)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 750, in get_options_data
self.get_call_data)]).sortlevel()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 749, in <listcomp>
for f in (self.get_put_data,
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 964, in get_put_data
expiry = self._try_parse_dates(year, month, expiry)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1061, in _try_parse_dates
expiry = [self._validate_expiry(expiry)]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1085, in _validate_expiry
expiry_dates = self.expiry_dates
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates
expiry_dates, _ = self._get_expiry_dates_and_links()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp>
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y'
======================================================================
ERROR: test_get_put_data (test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates
expiry_dates = self._expiry_dates
AttributeError: 'Options' object has no attribute '_expiry_dates'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 344, in test_get_put_data
puts = self.aapl.get_put_data(expiry=self.expiry)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 964, in get_put_data
expiry = self._try_parse_dates(year, month, expiry)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1061, in _try_parse_dates
expiry = [self._validate_expiry(expiry)]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1085, in _validate_expiry
expiry_dates = self.expiry_dates
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates
expiry_dates, _ = self._get_expiry_dates_and_links()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp>
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y'
======================================================================
ERROR: test_get_underlying_price (test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates
expiry_dates = self._expiry_dates
AttributeError: 'Options' object has no attribute '_expiry_dates'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 381, in test_get_underlying_price
url = options_object._yahoo_url_from_expiry(options_object.expiry_dates[0])
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates
expiry_dates, _ = self._get_expiry_dates_and_links()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp>
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data 'September 18, 2015' does not match format '%B %d, %Y'
======================================================================
ERROR: test_month_year (test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1226, in expiry_dates
expiry_dates = self._expiry_dates
AttributeError: 'Options' object has no attribute '_expiry_dates'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/tests/test_data.py", line 421, in test_month_year
data = self.aapl.get_call_data(month=self.month, year=self.year)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 901, in get_call_data
expiry = self._try_parse_dates(year, month, expiry)
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1075, in _try_parse_dates
expiry = [expiry for expiry in self.expiry_dates if expiry.year == year and expiry.month == month]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1228, in expiry_dates
expiry_dates, _ = self._get_expiry_dates_and_links()
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in _get_expiry_dates_and_links
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "/Users/femto/github/others/pandas-datareader/pandas_datareader/data.py", line 1250, in <listcomp>
expiry_dates = [dt.datetime.strptime(element.text, "%B %d, %Y").date() for element in links]
File "//anaconda/lib/python3.4/_strptime.py", line 500, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "//anaconda/lib/python3.4/_strptime.py", line 337, in _strptime
(data_string, format))
ValueError: time data 'August 28, 2015' does not match format '%B %d, %Y'
```
but
```
$ nosetests -s -v pandas_datareader/tests/test_data.py:TestYahooOptions.test_get_all_data
```
doesn't raise any error!
Any idea?
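One thing worth ruling out (an assumption, not a confirmed diagnosis): `%B` parses month names according to the current `LC_TIME` locale, so an English string such as 'August 28, 2015' can fail to parse on an interpreter running under a non-English locale even though the format string looks correct. A quick check sketch:

```python
import locale
from datetime import datetime

# Under a non-English LC_TIME locale, %B expects localized month names
# ("août" rather than "August"), which reproduces the ValueError above.
try:
    locale.setlocale(locale.LC_TIME, "fr_FR.UTF-8")  # illustrative locale; may not be installed
    datetime.strptime("August 28, 2015", "%B %d, %Y")
except (locale.Error, ValueError) as exc:
    print(exc)

locale.setlocale(locale.LC_TIME, "C")
print(datetime.strptime("August 28, 2015", "%B %d, %Y"))  # parses fine under the C locale
```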
| closed | 2015-08-22T06:57:39Z | 2017-01-09T16:37:37Z | https://github.com/pydata/pandas-datareader/issues/63 | [] | femtotrader | 7 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 589 | Provide a way to configure the SA engine | Hi all,
Unless I'm mis-reading the code, there is no way to provide engine creation options. One of them that appears with SA 1.2 is [pool_pre_ping](http://docs.sqlalchemy.org/en/latest/core/pooling.html#pool-disconnects-pessimistic).
I'm not sure I can provide extra parameters via flask-sqlalchemy parameters. Should I create the engine out of band?
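For readers landing here later: newer Flask-SQLAlchemy releases (2.4 and up) expose engine keyword arguments through the `SQLALCHEMY_ENGINE_OPTIONS` config key, so something like the following sketch covers `pool_pre_ping` without creating the engine out of band (the database URI is a placeholder):

```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///example.db"  # placeholder URI
# Passed straight through to sqlalchemy.create_engine(**options).
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {"pool_pre_ping": True}

db = SQLAlchemy(app)
```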
Thanks, | closed | 2018-01-28T16:00:01Z | 2021-04-03T16:28:27Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/589 | [] | Lawouach | 6 |
deepinsight/insightface | pytorch | 2,104 | why is training on 1 machine (TITAN RTX) + 1 machine (RTX 3060) slower than on any one machine | python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=0 --master_addr="192.168.8.131" --master_port=12581 train.py configs/ms1mv2_mbf
python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr="192.168.8.131" --master_port=12581 train.py configs/ms1mv2_mbf
/home/pc/anaconda3/envs/face19/lib/python3.9/site-packages/torch/distributed/launch.py:163: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(
The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run
WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases.
Please read local_rank from `os.environ('LOCAL_RANK')` instead.
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
entrypoint : train.py
min_nodes : 2
max_nodes : 2
nproc_per_node : 1
run_id : none
rdzv_backend : static
rdzv_endpoint : 192.168.8.131:12581
rdzv_configs : {'rank': 0, 'timeout': 900}
max_restarts : 3
monitor_interval : 5
log_dir : None
metrics_cfg : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_4a5rychg/none__fkba0g3
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
/home/pc/anaconda3/envs/face19/lib/python3.9/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future.
warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=0
master_addr=192.168.8.131
master_port=12581
group_rank=0
group_world_size=2
local_ranks=[0]
role_ranks=[0]
global_ranks=[0]
role_world_sizes=[2]
global_world_sizes=[2]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_4a5rychg/none__fkba0g3/attempt_0/0/error.json
0
0
Training: 2022-09-15 11:06:52,012-rank_id: 0
Training: 2022-09-15 11:06:55,830-: margin_list [1.0, 0.5, 0.0]
Training: 2022-09-15 11:06:55,830-: network mbf
Training: 2022-09-15 11:06:55,834-: resume False
Training: 2022-09-15 11:06:55,834-: save_all_states False
Training: 2022-09-15 11:06:55,834-: output work_dirs/ms1mv2_mbf
Training: 2022-09-15 11:06:55,834-: embedding_size 512
Training: 2022-09-15 11:06:55,834-: sample_rate 1.0
Training: 2022-09-15 11:06:55,834-: interclass_filtering_threshold0
Training: 2022-09-15 11:06:55,834-: fp16 True
Training: 2022-09-15 11:06:55,834-: batch_size 256
Training: 2022-09-15 11:06:55,834-: optimizer sgd
Training: 2022-09-15 11:06:55,834-: lr 0.1
Training: 2022-09-15 11:06:55,834-: momentum 0.9
Training: 2022-09-15 11:06:55,834-: weight_decay 0.0001
Training: 2022-09-15 11:06:55,834-: verbose 2000
Training: 2022-09-15 11:06:55,834-: frequent 10
Training: 2022-09-15 11:06:55,834-: dali False
Training: 2022-09-15 11:06:55,834-: gradient_acc 1
Training: 2022-09-15 11:06:55,834-: seed 2048
Training: 2022-09-15 11:06:55,834-: num_workers 4
Training: 2022-09-15 11:06:55,834-: rec /home/pc/faces_webface_112x112
Training: 2022-09-15 11:06:55,834-: num_classes 10572
Training: 2022-09-15 11:06:55,834-: num_image 494194
Training: 2022-09-15 11:06:55,834-: num_epoch 40
Training: 2022-09-15 11:06:55,835-: warmup_epoch 0
Training: 2022-09-15 11:06:55,835-: val_targets ['lfw', 'cfp_fp', 'agedb_30']
Training: 2022-09-15 11:06:55,835-: total_batch_size 512
Training: 2022-09-15 11:06:55,835-: warmup_step 0
Training: 2022-09-15 11:06:55,835-: total_step 38600
loading bin 0
loading bin 1000
loading bin 2000
loading bin 3000
loading bin 4000
loading bin 5000
loading bin 6000
loading bin 7000
loading bin 8000
loading bin 9000
loading bin 10000
loading bin 11000
torch.Size([12000, 3, 112, 112])
loading bin 0
loading bin 1000
loading bin 2000
loading bin 3000
loading bin 4000
loading bin 5000
loading bin 6000
loading bin 7000
loading bin 8000
loading bin 9000
loading bin 10000
loading bin 11000
loading bin 12000
loading bin 13000
torch.Size([14000, 3, 112, 112])
loading bin 0
loading bin 1000
loading bin 2000
loading bin 3000
loading bin 4000
loading bin 5000
loading bin 6000
loading bin 7000
loading bin 8000
loading bin 9000
loading bin 10000
loading bin 11000
torch.Size([12000, 3, 112, 112])
/home/pc/fc/face/insightface/recognition/arcface_torch/train.py:163: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior.
torch.nn.utils.clip_grad_norm_(backbone.parameters(), 5)
/home/pc/anaconda3/envs/face19/lib/python3.9/site-packages/torch/optim/lr_scheduler.py:129: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
Training: 2022-09-15 11:07:37,277-Reducer buckets have been rebuilt in this iteration.
Training: 2022-09-15 11:07:55,067-Speed 518.42 samples/sec Loss 44.2595 LearningRate 0.099902 Epoch: 0 Global Step: 20 Fp16 Grad Scale: 8192 Required: 13 hours
Training: 2022-09-15 11:08:04,952-Speed 517.94 samples/sec Loss 45.0456 LearningRate 0.099850 Epoch: 0 Global Step: 30 Fp16 Grad Scale: 8192 Required: 12 hours
Training: 2022-09-15 11:08:14,893-Speed 515.12 samples/sec Loss 45.5388 LearningRate 0.099798 Epoch: 0 Global Step: 40 Fp16 Grad Scale: 8192 Required: 12 hours
Training: 2022-09-15 11:08:24,767-Speed 518.53 samples/sec Loss 45.7875 LearningRate 0.099746 Epoch: 0 Global Step: 50 Fp16 Grad Scale: 8192 Required: 12 hours
Training: 2022-09-15 11:08:34,667-Speed 517.22 samples/sec Loss 45.5845 LearningRate 0.099695 Epoch: 0 Global Step: 60 Fp16 Grad Scale: 8192 Required: 11 hours
Training: 2022-09-15 11:08:44,533-Speed 518.98 samples/sec Loss 45.6968 LearningRate 0.099643 Epoch: 0 Global Step: 70 Fp16 Grad Scale: 8192 Required: 11 hours
(face19) ubuntu@ubuntu-X10SRA:~/fc/face/insightface/recognition/arcface_torch$ python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr="192.168.8.131" --master_port=12581 train.py configs/ms1mv2_mbf
/home/ubuntu/anaconda3/envs/face19/lib/python3.9/site-packages/torch/distributed/launch.py:163: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(
The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run
WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases.
Please read local_rank from `os.environ('LOCAL_RANK')` instead.
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
entrypoint : train.py
min_nodes : 2
max_nodes : 2
nproc_per_node : 1
run_id : none
rdzv_backend : static
rdzv_endpoint : 192.168.8.131:12581
rdzv_configs : {'rank': 1, 'timeout': 900}
max_restarts : 3
monitor_interval : 5
log_dir : None
metrics_cfg : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_bcc_b24k/none_nbf6ckxx
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
/home/ubuntu/anaconda3/envs/face19/lib/python3.9/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future.
warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=0
master_addr=192.168.8.131
master_port=12581
group_rank=1
group_world_size=2
local_ranks=[0]
role_ranks=[1]
global_ranks=[1]
role_world_sizes=[2]
global_world_sizes=[2]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_bcc_b24k/none_nbf6ckxx/attempt_0/0/error.json
sgd
/home/ubuntu/fc/face/insightface/recognition/arcface_torch/train.py:166: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior.
torch.nn.utils.clip_grad_norm_(backbone.parameters(), 5)
/home/ubuntu/anaconda3/envs/face19/lib/python3.9/site-packages/torch/optim/lr_scheduler.py:129: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
| open | 2022-09-15T03:30:26Z | 2022-09-15T03:30:26Z | https://github.com/deepinsight/insightface/issues/2104 | [] | wavelet2008 | 0 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 194 | Without any api key | Can I use this library without keys? Because gpt has a free version, can I use the same? | closed | 2024-05-09T15:47:54Z | 2024-05-09T16:25:30Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/194 | [] | progeroffline | 1 |
matplotlib/mplfinance | matplotlib | 111 | Change font size of title. | If the length of title is too big , cropping occurs instead of reducing the font size | closed | 2020-04-29T08:04:02Z | 2020-04-30T01:45:04Z | https://github.com/matplotlib/mplfinance/issues/111 | [
"question"
] | abhisheksharma26jan | 1 |
microsoft/unilm | nlp | 924 | [layoutlmv3]: Issue with label format? Inference yields boundary boxes that are too short. | Hi,
I am working on object detection with layoutlmv3.
I am using the publaynet fine tuned model and have a training set with about 600 documents.
The issue I am facing is that the predicted bounding boxes are only roughly correct. In most of the documents the predicted boxes are "too short", meaning that the lower y coordinate is usually too small.
As an example, I have attached a sample from my evaluation dataset. This happens in almost every single inference picture, so I am trying to get some ideas for troubleshooting.
I double-checked that the bounding boxes are drawn correctly in the inference visualization.


Any ideas would be greatly appreciated.
| closed | 2022-11-20T00:32:52Z | 2023-06-06T12:40:43Z | https://github.com/microsoft/unilm/issues/924 | [] | OGiesecke | 2 |
Kav-K/GPTDiscord | asyncio | 425 | How to change model for indexing? | gpt3 sucks at math and code! I'm trying to use gpt4 for indexing, but with no luck. It'd be great if there were a model parameter for the indexing commands; currently a model can only be chosen while querying, which is not helpful if the context was written using gpt3.
I also tried setting the settings parameter model to gpt-4 but it didn't seem to work.
| open | 2023-11-17T21:50:31Z | 2023-11-17T22:27:49Z | https://github.com/Kav-K/GPTDiscord/issues/425 | [
"enhancement",
"help wanted",
"high-prio"
] | ashra-main | 14 |
tatsu-lab/stanford_alpaca | deep-learning | 60 | Plan to release the web demo code | Hi, thanks for sharing your work, this is amazing!
Do you plan to release the web demo code ?
| closed | 2023-03-16T14:11:30Z | 2023-03-16T16:18:34Z | https://github.com/tatsu-lab/stanford_alpaca/issues/60 | [] | testplop | 1 |
ultralytics/ultralytics | pytorch | 19,826 | How to Freeze Detection Head Layers in YOLOv8m-segment and Train Only Segmentation Head? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi all,
I'm working with yolov8m-seg.pt and want to freeze the detection head layers (bounding box/class prediction) while training only the segmentation head (mask prediction). The goal is to fine-tune the segmentation capability without updating the detection part. Has anyone done this before?
I’m thinking of freezing layers by setting requires_grad = False for detection-related params, but I’m unsure how to precisely identify them in the head (e.g., model.22). Here’s my tentative code—can someone confirm if this approach works or suggest a better way?
### Additional
```python
from ultralytics import YOLO

# Load model
model = YOLO("yolov8m-seg.pt")

# Freeze detection head layers (guessing these are related to 'detect')
for name, param in model.model.named_parameters():
    if "detect" in name.lower():  # Is this the right way to target detection head?
        param.requires_grad = False

# Train only segmentation head
model.train(data="path/to/data.yaml", epochs=50, imgsz=640)
```
Questions:
Does detect correctly target the detection head, or should I use a different identifier (e.g., specific layer indices)?
Will this setup ensure the segmentation head (e.g., mask coefficients/Proto) still trains properly?
Any pitfalls to watch out for?
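One way to settle the first question empirically, without assuming anything about Ultralytics' internal layer names, is to print the head's parameter names and freeze by what you actually see (a sketch; the substring filters are placeholders to replace after inspection):

```python
from ultralytics import YOLO

model = YOLO("yolov8m-seg.pt")

# 1) Inspect: list the head's parameter names (the "22" index comes from the
#    `model.22` mentioned above) before deciding what to freeze.
for name, _ in model.model.named_parameters():
    if ".22." in name:
        print(name)

# 2) Freeze by the names observed in step 1. The substrings below are
#    placeholders, not guaranteed identifiers; replace them after inspecting.
freeze_substrings = ("placeholder_box_branch", "placeholder_cls_branch")
for name, param in model.model.named_parameters():
    if any(s in name for s in freeze_substrings):
        param.requires_grad = False

# 3) Verify what is still trainable before calling model.train(...).
trainable = [n for n, p in model.model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} parameter tensors remain trainable")
```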
Thanks for any insights! | open | 2025-03-23T04:49:45Z | 2025-03-24T16:47:11Z | https://github.com/ultralytics/ultralytics/issues/19826 | [
"question",
"segment"
] | Wang-taoshuo | 3 |
rougier/numpy-100 | numpy | 22 | 17. add `print(np.nan in set([np.nan])) # True` | print(np.nan == np.nan) # False
print(np.nan in set([np.nan])) # True
| closed | 2016-09-09T15:26:39Z | 2020-03-13T13:39:41Z | https://github.com/rougier/numpy-100/issues/22 | [] | qeatzy | 2 |
marimo-team/marimo | data-science | 4,069 | Loading indicator needs to be shown for longer | I have a notebook that I've published via github pages. It's very nice, and marimo does a wonderful job. But a number of people who have visited have said that they thought it was broken because it initially showed a spinning circle saying "Initializing...", etc., but that circle disappeared, leaving just a white page. The problem is that there's a lag of up to 8 seconds between the spinning circle and the spinning hourglass (which is followed by actual content). And I guess that's just long enough for people to think the page must be broken. We all have pretty beefy machines with high-speed internet.
If that spinning circle could just be kept on the screen until other elements start to load, I think it would be perfect.
For reference, my published notebook is [here](https://moble.github.io/sxscatalog/), and the raw notebook itself is [here](https://github.com/moble/sxscatalog/blob/main/scripts/catalog_notebook.py).
(And thanks again for the wonderful package. It's really amazing.) | closed | 2025-03-12T18:46:00Z | 2025-03-13T01:37:01Z | https://github.com/marimo-team/marimo/issues/4069 | [] | moble | 1 |
Lightning-AI/pytorch-lightning | machine-learning | 20,598 | ModelSummary does not account for every type of precision strings | ### Bug description
The `precision_to_bits` dictionary in https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/utilities/model_summary/model_summary.py#L219 does not account for every type of precision, e.g., `bf16-true`.
This will fail in getting the proper key from the dictionary and will default to 32.
### What version are you seeing the problem on?
v2.5, master
### How to reproduce the bug
In `lightning/pytorch/utilities/model_summary/model_summary.py:L219`, just add the following when `self._model.trainer.precision="bf16-true"`:
```python
...
precision_to_bits = {"64": 64, "32": 32, "16": 16, "bf16": 16}
print(precision_to_bits.get(self._model.trainer.precision, 32))
raise
...
```
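A sketch of a more tolerant lookup (an illustration, not the project's actual patch): derive the bit width from the token before the first dash, so strings like `bf16-true` or `16-mixed` resolve correctly instead of silently falling back to 32.

```python
def precision_to_bits(precision) -> int:
    # "bf16-true" -> 16, "16-mixed" -> 16, "32-true" -> 32, "64" or 64 -> 64
    token = str(precision).split("-")[0]
    return {"64": 64, "32": 32, "16": 16, "bf16": 16}.get(token, 32)

assert precision_to_bits("bf16-true") == 16
assert precision_to_bits("16-mixed") == 16
assert precision_to_bits("32-true") == 32
assert precision_to_bits(64) == 64
```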
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.5.0): master
#- PyTorch Version (e.g., 2.5): 2.5.1
#- Python version (e.g., 3.12): 3.10
#- OS (e.g., Linux): Ubuntu 22.04
#- CUDA/cuDNN version: 12.4
#- GPU models and configuration: 4xNVIDIA H100 NVL
#- How you installed Lightning(`conda`, `pip`, source): pip
```
</details>
### More info
_No response_ | open | 2025-02-20T14:56:27Z | 2025-02-20T14:58:14Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20598 | [
"bug",
"needs triage",
"ver: 2.5.x"
] | gugarosa | 0 |
deepset-ai/haystack | pytorch | 8,410 | Create a version of DLAI lesson "Self-Reflecting Agents with Loops" (entity extraction) using ChatGenerator | We need to better understand how complex and difficult to understand Haystack example code would get if we used ChatGenerator instead of the regular Generators. For that purpose, let's create a version of https://learn.deeplearning.ai/courses/building-ai-applications-with-haystack/lesson/6/self-reflecting-agents-with-loops using ChatGenerator.
| closed | 2024-09-26T06:09:47Z | 2024-10-11T06:52:47Z | https://github.com/deepset-ai/haystack/issues/8410 | [
"P1"
] | julian-risch | 2 |
pandas-dev/pandas | pandas | 60,645 | DOC: pandas.DataFrame.aggregate return value | ### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.aggregate.html#pandas.DataFrame.aggregate
### Documentation problem
The documentation of pandas.DataFrame.aggregate() method says:
The return can be:
* scalar : when Series.agg is called with single function
* Series : when DataFrame.agg is called with a single function
* DataFrame : when DataFrame.agg is called with several functions
But
df = pd.DataFrame([[1]]) ; type(df.agg(lambda x: 3*x))
returns pandas.core.frame.DataFrame even though .agg() was called with a single function
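A short illustration of the distinction the docs gloss over (a sketch of observed behaviour, not an official clarification): the return type depends on whether the single function reduces each column to a scalar, not only on how many functions are passed.

```python
import pandas as pd

df = pd.DataFrame([[1]])

print(type(df.agg("sum")))            # <class 'pandas.core.series.Series'>   (reducing function)
print(type(df.agg(lambda x: 3 * x)))  # <class 'pandas.core.frame.DataFrame'> (non-reducing function)
```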
### Suggested fix for documentation
I'd love to offer a fix, but the reason I was looking up the docs was that I'd like to know what .agg() does exactly... | closed | 2025-01-02T12:12:12Z | 2025-01-05T18:19:52Z | https://github.com/pandas-dev/pandas/issues/60645 | [
"Docs",
"Duplicate Report"
] | sa42bme | 2 |
mljar/mercury | data-visualization | 119 | Enhancement: Add new apps via uploading jupyter notebooks via drag and drop in the browser | It would be nice, if users could create new apps via uploading their custom jupyter notebooks via drag and drop in the browser on the home screen of mercury.
Due to security concerns, this feature should only be enabled in trustworthy environments, e.g. via explicitly submitting an additional command-line argument `mercury run --enable-app-upload`.
I am curious about your thoughts. | closed | 2022-07-01T11:46:12Z | 2023-02-20T09:11:29Z | https://github.com/mljar/mercury/issues/119 | [] | jonaslandsgesell | 3 |
LibreTranslate/LibreTranslate | api | 477 | exclude db from .gitignore | Hello
Please exclude db from .gitignore, because it doesn't work with CI. The image can't start:
```
Traceback (most recent call last):
File "/app/./venv/bin/libretranslate", line 8, in <module>
Loaded support for 3 languages (4 models total)!
sys.exit(main())
File "/app/venv/lib/python3.10/site-packages/libretranslate/main.py", line 189, in main
app = create_app(args)
File "/app/venv/lib/python3.10/site-packages/libretranslate/app.py", line 220, in create_app
os.mkdir(default_mp_dir)
FileNotFoundError: [Errno 2] No such file or directory: '/app/db/prometheus'
``` | open | 2023-08-04T12:43:11Z | 2023-10-19T18:25:59Z | https://github.com/LibreTranslate/LibreTranslate/issues/477 | [
"possible bug"
] | superset1 | 1 |
gradio-app/gradio | deep-learning | 9,939 | Dropdown and LinePlot buggy interaction | ### Describe the bug
Interactive dropdowns (```gr.Dropdown(options, interactive=True)```) do not work if a LinePlot (probably similar with ScatterPlot and others, but untested) is provided in the same block. This also happens if the plot is in other columns and rows. I did not check if it also happens with other components, but below you can find a very minimal reproducer, in which the dropdown is not interactible. If the plot is removed, the dropdown works (as shown in [this comment](https://github.com/gradio-app/gradio/issues/6103#issuecomment-1790205932)
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
my_list = ["World", "Gradio", "World2", "abc ", "You"]
with gr.Blocks() as demo:
drop1 = gr.Dropdown(choices=my_list, label="simple", value=my_list[0], interactive=True)
plt = gr.LinePlot() # Comment this out and the dropdown can be interacted with
demo.launch(share=True)
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
I am using gradio 5.5.0, I'll paste the environment output:
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.5.0
gradio_client version: 1.4.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts: 0.2.1
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.4.2 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.7.3
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.13.0
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
Can work around using other components (but not with LinePlots) | closed | 2024-11-11T14:51:32Z | 2025-02-07T18:16:33Z | https://github.com/gradio-app/gradio/issues/9939 | [
"bug"
] | nestor98 | 3 |
tableau/server-client-python | rest-api | 693 | server.jobs.get_by_id failing inconsistently with 401002: Unauthorized Access error | Hello,
I am writing a python script to trigger and monitor extract refreshes for a given set of datasource IDs.
First, I trigger the refresh using `server.datasources.refresh(datasource)` for all the given datasource IDs using multi threading.
Then, I monitor the progress of these refreshes and print out a message accordingly.
Do note that my Tableau server is configured to run only 2 extract refreshes at once, all others go into a pending state.
But, what I'm seeing is that every once in a while, one of the threads will throw a 401002 error when checking the status of the refresh job. Here's my code snippet:
```
def monitor_refresh_progress(self, job_id, datasource):
    # Get initial job status value, will be -1 if in progress
    with self.server.auth.sign_in(self.tableau_auth):
        job_status = self.server.jobs.get_by_id(job_id)
    # Keep polling until success or failure, added random to avoid multiple simultaneous hits
    while int(job_status.finish_code) not in [0, 1]:
        time.sleep(randint(110, 130))
        with self.server.auth.sign_in(self.tableau_auth):
            job_status = self.server.jobs.get_by_id(job_id)
    if int(job_status.finish_code) == 0:
        self.logger.info("Extract Refresh successfully completed for datasource: {}".format(datasource.name))
    else:
        slack.post_message(text=":: ERROR :: Tableau Extract Refresh failed for datasource {}.".format(datasource.name))
        self.logger.error("Extract Refresh failed for datasource: {}".format(datasource.name))
        raise Exception("Extract Refresh failed for datasource: {}".format(datasource.name))
```
Right now, I have added a retry decorator for this monitor_refresh_progress() method, but I'm not too sure about its efficacy since I'm using multithreading.
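For reference, a rough sketch of the kind of retry decorator referred to above (illustrative only; the actual attempt counts, delays, and caught exceptions may differ):
```python
import time
from functools import wraps

def retry(attempts=3, delay=60, exceptions=(Exception,)):
    """Illustrative retry decorator: re-run the wrapped call when it raises."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == attempts:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

# applied as @retry(attempts=3, delay=60) on monitor_refresh_progress
```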
Am I doing something wrong? Any help would be appreciated.
Thanks | closed | 2020-09-16T12:34:24Z | 2023-04-20T18:38:03Z | https://github.com/tableau/server-client-python/issues/693 | [] | quenchua | 5 |
roboflow/supervision | machine-learning | 1,016 | Regarding zone problem | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Currently, I am training two YOLOv8 models: one for person detection and the other for object detection. The main problem is an automated self-checkout based on zone logic: we check whether a person holding an object crosses the zone from left to right or from right to left, and then prepare the receipt accordingly. I need guidance on the logic: in this case, should I combine the detections for the person and the object, or should I handle the logic for each separately?
Below is my code:
#Define empty lists to keep track of labels
original_labels = []
final_labels = []
person_bbox = []
p_items = []
purchased_items = set(p_items)
a_items = []
added_items = set(a_items)
hand_bbox = []
combined_detections = []
#Save result as det_tracking_result
with sv.VideoSink("new_det_tracking_result.mp4", video_info) as sink:
#Iterate through model predictions and tracking results
for index, (result, result1) in enumerate(zip(model.track(source=VID_PATH, show=False, stream=True, verbose=True, persist=True),
model1.track(source=VID_PATH, show=False, stream=True, verbose=True, persist=True))):
#Define variables to store interactions that are refreshed per frame
interactions = []
person_intersection_str = ""
# Obtain predictions from model1
frame1 = result1.orig_img
detections_objects1 = sv.Detections.from_ultralytics(result1)
detections_objects1 = detections_objects1[detections_objects1.class_id == 0]
bboxes1 = result1.boxes
#print(detections_objects1)
#Obtain predictions from yolov8 model
frame = result.orig_img
detections = sv.Detections.from_ultralytics(result)
detections = detections[detections.class_id < 10]
bboxes = result.boxes
# Apply mask over the single Zone
mask1, mask2 = zone.trigger(detections=detections_objects1), zone.trigger(detections=detections)
detections_filtered1, detections_filtered2 = detections_objects1[mask1], detections[mask2]
if detections_objects1 and len(detections_objects1) > 0:
label1 = label_map1[detections_objects1.class_id[0]] # Get the label for the class_id
combined_detections.append((detections_objects1, label1))
for detection, label in combined_detections:
print("Detections:", detection)
print("Label:", label)
if bboxes1.id is not None:
detections_objects1.tracker_id = bboxes1.id.cpu().numpy().astype(int)
labels = [
f'#{tracker_id} {label_map1[class_id]} {confidence:0.2f}'
for _, _, confidence, class_id, tracker_id
in detections_objects1
]
#Print labels for detections from model1
for _, _, confidence, class_id, _ in detections_objects1:
print(f"Label: {label_map1[class_id]} with confidence: {confidence:.2f}")
print(detections)
# Apply mask over the single Zone
mask = zone.trigger(detections=detections)
detections_filtered = detections[mask]
print("mask", mask)
print("Detection", detections_filtered)
if detections and len(detections) > 0:
label = label_map[detections.class_id[0]] # Get the label for the class_id
combined_detections.append((detections, label))
if bboxes.id is not None:
detections.tracker_id = bboxes.id.cpu().numpy().astype(int)
labels = [
f'#{tracker_id} {label_map[class_id]} {confidence:0.2f}'
for _, _, confidence, class_id, tracker_id
in detections
]
frame = box_annotator.annotate(scene=frame, detections=detections_filtered, labels=labels)
frame = zone_annotator.annotate(scene=frame)
objects = [f'#{tracker_id} {label_map[class_id]}' for _, _, confidence, class_id, tracker_id in detections]
# for _, _, confidence, class_id, _ in detections:
# print(f"Label: {label_map[class_id]} with confidence: {confidence:.2f}")
# # Combine detections from both models
# # combined_detections = np.concatenate((detections_objects1, detections))
# print(combined_detections)
# # Extract xyxy attributes from combined detections
# combined_detections_xyxy = [detection[0].xyxy for detection in combined_detections]
# print(combined_detections_xyxy)
# # Check if combined_detections_xyxy is not empty and contains non-empty arrays
# if combined_detections_xyxy and all(arr.size > 0 for arr in combined_detections_xyxy):
# # Concatenate xyxy arrays into a single array
# combined_xyxy_array = np.concatenate(combined_detections_xyxy, axis=0)
# else:
# combined_xyxy_array = np.empty((0, 4)) # Create an empty array
# # Create a Detections object with the concatenated xyxy array
# combined_detections_detections = sv.Detections(xyxy=combined_xyxy_array)
# # Apply mask over the combined detections
# mask = zone.trigger(detections= combined_detections_detections)
# # Filter combined detections based on the mask
# combined_detections_filtered = [combined_detections[i] for i in range(len(combined_detections)) if mask[i]]
# # Print the mask and filtered detections
# #print("Combined Detections mask:", mask)
# #print("Combined Detections filtered:", combined_detections_filtered)
# # Iterate through combined detections to create labels
# combined_labels = []
# for detection in combined_detections_filtered:
# detections, label = detection
# for _, _, confidence, class_id, tracker_id in detections:
# combined_labels.append(f'#{tracker_id} {label_map1[class_id]} {confidence:.2f}')
# # Print labels for combined detections
# for label in combined_labels:
# print("combined_labels", label)
# frame = box_annotator.annotate(scene=frame, detections=combined_detections_filtered, labels=combined_labels)
# frame = zone_annotator.annotate(scene=frame)
# objects = [f'#{tracker_id} {label_map[class_id]}' for _, _, confidence, class_id, tracker_id in combined_detections_filtered]
# print("Combined Objects:", objects)
#If this is the first time we run the application,
#store the objects' labels as they are at the beginning
if index == 0:
original_labels = objects
original_dets = len(detections_filtered)
else:
#To identify if an object has been added or removed
#we'll use the original labels and identify any changes
final_labels = objects
new_dets = len(detections_filtered)
#Identify if an object has been added or removed using Counters
removed_objects = Counter(original_labels) + Counter(final_labels)
added_objects = Counter(final_labels) - Counter(original_labels)
#Create two variables we can increment for drawing text
draw_txt_ir = 1
draw_txt_ia = 1
#Check for objects being added or removed
#if new_dets - original_dets != 0 and len(removed_objects) >= 1:
if new_dets != original_dets or removed_objects:
#An object has been removed
for k,v in removed_objects.items():
#For each of the objects, check the IOU between a designated object
#and a person.
if 'person' not in k:
removed_object_str = f"{v} {k} purchased"
removed_action_str = intersecting_bboxes(bboxes, bboxes1, person_bbox, removed_object_str)
print("Removed Action String:", removed_action_str) # Add this line
if removed_action_str is not None:
log.info(removed_action_str)
#Add the purchased items to a "receipt" of sorts
item = removed_action_str.split()
if len(item) >= 3:
item = f"{item [0]} {item [1]} {item [2]}"
removed_label = item.split(' ')[-1]
if any(removed_label in item for item in purchased_items):
purchased_items = {f"{int(item.split()[0]) + 1} {' '.join(item.split()[1:])}" if removed_label in item else item for item in purchased_items}
else:
purchased_items.add(f"{v} {k}")
p_items.append(f" - {v} {k}")
print("New_Purchased_Items:", purchased_items)
print("Removed_Objects:")
#Draw the result on the screen
draw_text(frame, text=removed_action_str, point=(50, 50 + draw_txt_ir), color=(0, 0, 255))
draw_text(frame, "Receipt: " + str(purchased_items), point=(50, 800), color=(30, 144, 255))
draw_txt_ir += 80
if len(added_objects) >= 1:
#An object has been added
for k,v in added_objects.items():
#For each of the objects, check the IOU between a designated object
#and a person.
if 'person' not in k:
added_object_str = f"{v} {k} returned"
added_action_str = intersecting_bboxes(bboxes, bboxes1, person_bbox, added_object_str)
print("Added Action String:", added_action_str) # Add this line
if added_action_str is not None:
#If we have determined an interaction with a person,
#log the interaction.
log.info(added_action_str)
item = added_object_str.split()
if len(item) >= 3:
item = f"{item [0]} {item [1]} {item [2]}"
item = item.split(' ')[-1]
if any(item in item for item in purchased_items):
purchased_items = {f"{int(item.split()[0]) - 1} {' '.join(item.split()[1:])}" if item in item else item for item in purchased_items}
if any(item.startswith('0 ') for item in purchased_items):
purchased_items = {item for item in purchased_items if not item.startswith('0 ')}
print("Updated_Purchased_Items:", purchased_items)
#p_items.remove(item)
added_items.add(added_object_str)
a_items.append(added_object_str)
print("Added_Objects:")
#Draw the result on the screen
draw_text(frame, text=added_action_str, point=(50, 300 + draw_txt_ia), color=(0, 128, 0))
draw_text(frame, "Receipt: " + str(purchased_items), point=(50, 800), color=(30, 144, 255))
draw_txt_ia += 80
# Clear the combined_detections list
combined_detections.clear()
draw_text(frame, "Receipt: " + str(purchased_items), point=(50, 800), color=(30, 144, 255))
sink.write_frame(frame)
### Additional
_No response_ | closed | 2024-03-18T04:04:29Z | 2024-03-18T08:46:23Z | https://github.com/roboflow/supervision/issues/1016 | [
"question"
] | Abhijeet241093 | 1 |
hyperspy/hyperspy | data-visualization | 3,464 | Cannot navigate signal with 1D or 2D navigator with keyboard on macOS | #### Describe the bug
Hi everyone, not sure what I'm doing wrong here...
I cannot navigate a signal on my macOS v15.1 with HyperSpy v2.2. I've tried left/right with all modifier keys (shift, control, option, and command) and combinations thereof.
Navigating with the mouse works as before.
#### To Reproduce
```python
import numpy as np
import hyperspy.api as hs
s = hs.signals.Signal2D(np.random.random((10, 10, 10, 10)))
s.plot()
# Try to navigate but cannot
```
#### Expected behavior
To navigate the signal as usual.
#### Python environment:
- HyperSpy version: 2.2
- Python version: 3.12.7
#### Additional context | open | 2024-11-17T11:35:48Z | 2024-11-18T15:19:43Z | https://github.com/hyperspy/hyperspy/issues/3464 | [
"type: bug"
] | hakonanes | 9 |
plotly/dash | data-visualization | 2,966 | performance issues when building custom components using dash-component-boilerplate |
When using the [dash-component-boilerplate] to build my custom React component, the component becomes very sluggish. This component is related to rendering graphics on a canvas.
Running it in the plain React framework shows this:

and when I use it in Dash, it shows this:

| closed | 2024-08-27T13:02:42Z | 2024-08-28T02:08:20Z | https://github.com/plotly/dash/issues/2966 | [
"performance",
"bug",
"P3"
] | manyuemeiquqi | 3 |
Kanaries/pygwalker | plotly | 125 | Cannot load more than | When I try to embed pygwalker in `streamlit`, I get the following error:
```
Dataframe is too large for ipynb files. Only 14862 sample items are printed to the file.
```
Is it a known issue that pygwalker cannot handle large datasets?
Thanks a lot for the work, the project looks super cool 😄
Best,
Adrien | closed | 2023-06-05T08:41:15Z | 2023-07-06T02:02:19Z | https://github.com/Kanaries/pygwalker/issues/125 | [
"fixed but needs feedback",
"P1"
] | ruaultadrien | 2 |
ipython/ipython | jupyter | 14,635 | selene_sdk issue |
I got this error message and couldn't find where the issue is:
```
ValueError Traceback (most recent call last)
<ipython-input-3-c4059c3098d2> in <module>
----> 1 parse_configs_and_run(configs, lr=0.01)
2 print("Fin de Exécussion")
~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/selene_sdk/utils/config_utils.py in parse_configs_and_run(configs, create_subdirectory, lr)
349 "Using a random seed ensures results are reproducible.")
350
--> 351 execute(operations, configs, current_run_output_dir)
~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/selene_sdk/utils/config_utils.py in execute(operations, configs, output_dir)
190 "evaluate" in operations:
191 train_model.create_test_set()
--> 192 train_model.train_and_validate()
193
194 elif op == "evaluate":
~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/selene_sdk/train_model.py in train_and_validate(self)
428 for step in range(self._start_step, self.max_steps):
429 self.step = step
--> 430 self.train()
431
432 if step % self.nth_step_save_checkpoint == 0:
~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/selene_sdk/train_model.py in train(self)
461
462 predictions = self.model(inputs.transpose(1, 2))
--> 463 loss = self.criterion(predictions, targets)
464
465 self.optimizer.zero_grad()
~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
601
602 def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 603 return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
604
605
~/anaconda3/envs/selene-gpu/lib/python3.6/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
2906 raise ValueError(
2907 "Using a target size ({}) that is different to the input size ({}) is deprecated. "
-> 2908 "Please ensure they have the same size.".format(target.size(), input.size())
2909 )
2910
ValueError: Using a target size (torch.Size([64, 12])) that is different to the input size (torch.Size([64, 11])) is deprecated. Please ensure they have the same size.
```
| closed | 2024-12-29T20:32:35Z | 2025-01-01T21:18:22Z | https://github.com/ipython/ipython/issues/14635 | [] | syrine-27 | 2 |
pyjanitor-devs/pyjanitor | pandas | 1,068 | Discussion: arguments `old_min` and `old_max` should be removed from `min_max_scale` | ```python
>>> import pandas as pd
>>> import janitor
# Use one column dataframe to avoid scaling the entire data or the column data problem
>>> df = pd.Series([0, 1, 2]).to_frame()
# use the minimum and maximum value of data
>>> df.min_max_scale()
0
0 0.0
1 0.5
2 1.0
# Overwrite the minimum and maximum of the data. The result seems weird for the user, but it is fine from the formula's point of view.
# Question 1: should 0 be scaled or not? 0 is outside the range [old_min=1, old_max=2].
# Question 2: I already defined new_min (0) and new_max for the values. Why is there a -1 in the output?
# Question 3: The API differs from sklearn.preprocessing.MinMaxScaler
# Min-Max normalization formula
# X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
# X_scaled = X_std * (max - min) + min
>>> df.min_max_scale(old_min=1, old_max=2)
0
0 -1.0
1 0.0
2 1.0
```
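For comparison (an illustration added here, not part of the original report), sklearn's `MinMaxScaler` exposes only a `feature_range=(min, max)` argument and always rescales with the fitted data's own min/max, so there is no `old_min`/`old_max` analogue:
```python
from sklearn.preprocessing import MinMaxScaler
import pandas as pd

df = pd.Series([0, 1, 2]).to_frame()
MinMaxScaler(feature_range=(0, 1)).fit_transform(df)
# array([[0. ],
#        [0.5],
#        [1. ]])
```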
In the end, it's hard to trace the `min_max_scale` [commit history](https://github.com/pyjanitor-devs/pyjanitor/commits/dev/janitor/functions/min_max_scale.py) to know why these options were added. | closed | 2022-04-27T03:52:39Z | 2022-06-02T01:10:30Z | https://github.com/pyjanitor-devs/pyjanitor/issues/1068 | [] | Zeroto521 | 1 |
alteryx/featuretools | data-science | 1,885 | No code coverage on __main__.py | https://github.com/alteryx/featuretools/pull/1882 Is passing for all CI checks except for code coverage, where suddenly there's no coverage of `__main__.py`. That PR's scipy update could be to blame, or it could be some untracked change (setuptools 60.8 vs setuptools 60.7).
We should determine why coverage was lost--though the "coverage" was just an import--and improve our tests so that `__main__.py` is truly covered. | closed | 2022-02-08T16:30:35Z | 2022-03-02T21:46:42Z | https://github.com/alteryx/featuretools/issues/1885 | [] | tamargrey | 0 |
graphql-python/graphene-django | django | 949 | Error in GraphQL Mutation Expected value of type ID | Model
```python
class Series(models.Model):
    title = models.CharField(max_length=255, unique=True, db_index=True)
    desc = RichTextUploadingField(verbose_name="Description", default="Coming Soon...", max_length=10000)
    series_type = models.ForeignKey(SeriesType, on_delete=models.CASCADE)
    SERIES_STATUS = (
        (0, 'Not Yet Released'),
        (1, 'Done')
    )
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    status = models.PositiveSmallIntegerField(choices=SERIES_STATUS, default=0)
```
Schema
```python
class SeriesNode(DjangoObjectType):
    class Meta:
        model = models.Series
        filter_fields = ['title', 'alt']
        interfaces = (relay.Node, )

class SeriesMutation(DjangoModelFormMutation):
    series = graphene.Field(SeriesNode)

    class Meta:
        form_class = forms.AdvancedAddSeries

class Mutation(graphene.ObjectType):
    create_series = SeriesMutation.Field()
```
Query Mutation
```gql
mutation CreateSeries($input: SeriesMutationInput!){
  createSeries(input: $input){
    series{
      title
      desc
      seriesType{
        id
      }
    }
    errors{
      field
      messages
    }
  }
}
```
Query Variables
```json
{
"input": {
"title": "Series1",
"desc": "to be updated",
"seriesType": {
"id": "U2VyaWVzVHlwZU5vZGU6Mg=="
},
"user": {
"id": "VXNlck5vZGU6MQ=="
},
"status": "A_0"
}
}
```
Image of Error

Reply
```json
{
"data": {
"createSeries": {
"series": null,
"errors": [
{
"field": "series_type",
"messages": [
"Select a valid choice. That choice is not one of the available choices."
]
},
{
"field": "status",
"messages": [
"Select a valid choice. A_0 is not one of the available choices."
]
},
{
"field": "user",
"messages": [
"Select a valid choice. That choice is not one of the available choices."
]
}
]
}
}
}
``` | closed | 2020-04-28T09:20:52Z | 2020-05-02T19:40:35Z | https://github.com/graphql-python/graphene-django/issues/949 | [] | modbender | 3 |
voila-dashboards/voila | jupyter | 1,377 | User facing changelog for the 0.5.0 release | <!--To help us understand and resolve your issue, please fill out the form to the best of your ability.-->
<!--You can feel free to delete the sections that do not apply.-->
### Problem
We should highlight the major changes landing in `0.5.0` instead of just pointing users to the raw changelog: https://github.com/voila-dashboards/voila/blob/main/CHANGELOG.md.
### Suggested Improvement
Follow JupyterLab 4 and Notebook 7 changelogs and create a "Highlights" section in the changelog for user facing changes.
- Update to JupyterLab 4
- `--classic-tree` from https://github.com/voila-dashboards/voila/pull/1374
- more | closed | 2023-08-10T13:20:21Z | 2023-08-16T12:48:12Z | https://github.com/voila-dashboards/voila/issues/1377 | [
"documentation"
] | jtpio | 0 |
Esri/arcgis-python-api | jupyter | 1,435 | Branch editing error | **Describe the bug**
Branch editing in python is not working properly.
**Screenshots to reproduce**



error:
Exception: Unexpected operation
(Error Code: 0)
**Expected behavior**
It works through the REST API, so I expect the same behaviour here; I do not intend to build this workflow myself if the arcgis module intends to implement it.
```
with vms.get('version name', "edit") as version:
    #version.start_editing()
    update_result = version.edit(<>)
    # I expected this to work
```
**Platform (please complete the following information):**
- OS: Win Server 2019
- Browser chrome
-
| Name| Version| Build| Channel|
|-|-|-|-|
|arcgis| 2.0.1 | py39_2825| esri|
|arcgispro| 3.0 | 0 | esri|
**Additional context**
Add any other context about the problem here, attachments etc.
| open | 2023-01-17T14:39:01Z | 2024-03-12T14:00:24Z | https://github.com/Esri/arcgis-python-api/issues/1435 | [
"bug"
] | hildermesmedeiros | 5 |
KaiyangZhou/deep-person-reid | computer-vision | 536 | Testing Result | How do I get the image files names for the visrank_topk during testing for the query images? I want to show the file names from the gallery set that have high match with the query image. | open | 2023-03-05T12:56:26Z | 2023-03-05T12:56:26Z | https://github.com/KaiyangZhou/deep-person-reid/issues/536 | [] | abhaykumart12 | 0 |
agronholm/anyio | asyncio | 178 | bidirectional buffered stream? | Hello,
since the API rewrite, it looks like I need to use a `BufferedByteReceiveStream` to use `receive_exactly`. But the class is only for receiving, not writing.
Is it intentional that I need to carry around two objects if I want both `receive_exactly` and `send`?
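For illustration, a minimal sketch of carrying both by hand today (the wrapper class below is hypothetical, not part of anyio; only `BufferedByteReceiveStream` and the underlying stream's `send()`/`aclose()` are the real APIs):
```python
from anyio.streams.buffered import BufferedByteReceiveStream

class HandRolledBufferedByteStream:
    """Hypothetical helper bundling send() and receive_exactly() in one object."""

    def __init__(self, stream):
        self._stream = stream                               # the original bidirectional byte stream
        self._buffered = BufferedByteReceiveStream(stream)  # buffered view of the receive side

    async def receive_exactly(self, nbytes: int) -> bytes:
        return await self._buffered.receive_exactly(nbytes)

    async def send(self, data: bytes) -> None:
        await self._stream.send(data)

    async def aclose(self) -> None:
        await self._stream.aclose()
```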
Thanks! | closed | 2020-12-17T17:19:32Z | 2020-12-17T18:38:11Z | https://github.com/agronholm/anyio/issues/178 | [] | joernheissler | 1 |
django-oscar/django-oscar | django | 3,978 | Dashboard-->vouchersets--> sorting "Num orders", "Num baskets". Cannot resolve keyword 'num_basket_additions' into field | https://latest.oscarcommerce.com/en-gb/dashboard/vouchers/sets/?sort=num_basket_additions | open | 2022-09-09T10:10:09Z | 2024-02-12T10:56:57Z | https://github.com/django-oscar/django-oscar/issues/3978 | [
"☁ Bug",
"Good first issue"
] | martinsrudzroga | 4 |
ivy-llc/ivy | tensorflow | 27,999 | Fix Ivy Failing Test: paddle - elementwise.not_equal | closed | 2024-01-23T07:51:14Z | 2024-01-23T11:41:51Z | https://github.com/ivy-llc/ivy/issues/27999 | [
"Sub Task"
] | MuhammadNizamani | 0 |
litestar-org/litestar | pydantic | 3,822 | Bug: `litestar run` CLI has several readability issues | ### Description
First problem: low contrast on light theme:
<img width="788" alt="Снимок экрана 2024-10-16 в 23 04 11" src="https://github.com/user-attachments/assets/7cb84b46-ea93-4b53-84d8-a7fd7f05d6f2">
I can hardly read what the grey and yellow text says.
One can argue that this is a problem of my setup / theme, but I've never seen this before in other apps.
Second problem:
<img width="788" alt="Снимок экрана 2024-10-16 в 23 04 06" src="https://github.com/user-attachments/assets/8af38089-6d61-43c9-bc73-c1e6ad189ba1">
The option name `--create-self-signed-c...` (certificate?) is cut off. I think the option names are the most important part of the help here, and they should not be truncated.
The same happens with `--unix-domain-so…`
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
```bash
1. Run `litestar run -h`
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
main
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-10-16T20:11:01Z | 2025-03-20T15:55:00Z | https://github.com/litestar-org/litestar/issues/3822 | [
"Bug :bug:",
"CLI"
] | sobolevn | 3 |
roboflow/supervision | pytorch | 1,274 | [KeyPoints] - extend `from_mediapipe` with Google MediaPipe FaceMesh | # Description
Much like #1174 and #1232 adding pose landmark support, we'd also like to add face detection support to the `from_mediapipe` method.
* Add `Skeleton.FACEMESH_TESSELETION` of size `468` to the [Skeleton](https://github.com/roboflow/supervision/blob/447ef41fc45353130ec4dccdc7eeaf68b622fb7e/supervision/keypoint/skeletons.py#L7) enum.
* The nodes can be found here: https://github.com/google-ai-edge/mediapipe/blob/8cb99f934073572ce73912bb402a94f1875e420a/mediapipe/python/solutions/face_mesh_connections.py#L74
* Docs can be found here: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md
* Add the code to the `from_mediapipe` function in [`KeyPoints`](https://github.com/roboflow/supervision/blob/447ef41fc45353130ec4dccdc7eeaf68b622fb7e/supervision/keypoint/core.py#L16) object that is introduced in #1232.
* We'd like to support responses from both the legacy and the modern way of calling the face mesher - see the links below and the rough sketch after the figure.

# Links:
- Google Mediapipe repository: https://github.com/google/mediapipe
- Google Mediapipe face landmarker: https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker
- Python Guide (Modern): https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker/python
- Legacy: https://colab.research.google.com/github/googlesamples/mediapipe/blob/main/examples/face_landmarker/python/%5BMediaPipe_Python_Tasks%5D_Face_Landmarker.ipynb
- Skeletons: https://github.com/google-ai-edge/mediapipe/blob/8cb99f934073572ce73912bb402a94f1875e420a/mediapipe/python/solutions/face_mesh_connections.py#L74
# Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | closed | 2024-06-11T11:27:49Z | 2024-07-05T09:56:51Z | https://github.com/roboflow/supervision/issues/1274 | [
"enhancement",
"api:keypoints"
] | LinasKo | 8 |
cleanlab/cleanlab | data-science | 735 | Extend label issue detection in Datalab to work even without pred_probs input | Goal: extend the label issue check in Datalab to work even if user only provided: `features`, `labels` to `Datalab.find_issues()`.
There are multiple ways this can be achieved:
Option 1 (easiest): Use sklearn's `KNeighborsClassifier` (or `LogisticRegression`) applied to `X=features, y=labels` to produce out-of-sample `pred_probs` and then continue as usual.
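A rough sketch of what Option 1 could look like (illustrative; the exact `Datalab` arguments may need adjusting, and `features`/`labels` are placeholders for the user-provided arrays):
```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from cleanlab import Datalab

# features: (N, d) array, labels: (N,) array of class ids, both supplied by the user
clf = KNeighborsClassifier(n_neighbors=10)
pred_probs = cross_val_predict(clf, features, labels, cv=5, method="predict_proba")

lab = Datalab(data={"labels": labels}, label_name="labels")
lab.find_issues(features=features, pred_probs=pred_probs)
```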
Option 2: Use methods from other papers like these (requires benchmarking them first):
- [SelfClean: A Self-Supervised Data Cleaning Strategy](https://arxiv.org/abs/2305.17048)
- [Detecting Corrupted Labels Without Training a Model to Predict](https://arxiv.org/abs/2110.06283)
| closed | 2023-05-31T22:02:27Z | 2023-07-27T19:43:36Z | https://github.com/cleanlab/cleanlab/issues/735 | [
"enhancement",
"help-wanted"
] | jwmueller | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,607 | Is there a sample I can use to paint an image without cutting it? | I more or less understood the test, but is there any way to paint images (I trained a small model with references on how to do it) without having to lower the quality so much? If the image is 256, you can hardly see anything even if you raise the quality. | open | 2023-10-29T23:20:33Z | 2023-10-29T23:20:33Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1607 | [] | Keiser04 | 0 |
yt-dlp/yt-dlp | python | 11,928 | PBS - Unable to extract | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
US
### Provide a description that is worded well enough to be understood
https://www.pbs.org/video/take-a-chance-wdZQCx/
[pbs] Downloading JSON metadata
Extracting cookies from firefox
Extracted 2912 cookies from firefox
[pbs] Extracting URL: https://www.pbs.org/video/take-a-chance-wdZQCx/
[pbs] take-a-chance-wdZQCx: Downloading webpage
[pbs] Downloading widget/partnerplayer page
[pbs] Downloading portalplayer page
ERROR: An extractor error has occurred. (caused by KeyError('title')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\pbs.py", line 689, in _real_extract
KeyError: 'title'
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-o', 'lidia', 'https://www.pbs.org/video/take-a-chance-wdZQCx/']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.12.26.232815 from yt-dlp/yt-dlp-nightly-builds [0b6b7742c] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 2020-11-04-git-cfdddec0c8-full_build-www.gyan.dev, ffprobe 2020-11-04-git-cfdddec0c8-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.12.26.232815 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.12.26.232815 from yt-dlp/yt-dlp-nightly-builds)
[debug] Using fake IP 6.66.219.101 (US) as X-Forwarded-For
[pbs] Downloading JSON metadata
[pbs] Extracting URL: https://www.pbs.org/video/take-a-chance-wdZQCx/
[pbs] take-a-chance-wdZQCx: Downloading webpage
[pbs] Extracting URL: https://www.pbs.org/video/take-a-chance-wdZQCx/
[pbs] take-a-chance-wdZQCx: Downloading webpage
[pbs] Downloading widget/partnerplayer page
[pbs] Downloading portalplayer page
ERROR: An extractor error has occurred. (caused by KeyError('title')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\pbs.py", line 689, in _real_extract
KeyError: 'title'
```
| closed | 2024-12-27T21:23:08Z | 2025-01-03T16:51:58Z | https://github.com/yt-dlp/yt-dlp/issues/11928 | [
"duplicate",
"site-bug"
] | wallyps | 5 |
keras-team/autokeras | tensorflow | 1,120 | Multi-label classification with two labels | ### Bug Description
<!---
A clear and concise description of what the bug is.
-->
The ImageClassifier classification head treats multi-label classification with 2 labels as multi-class classification with one-hot-encoded labels.
### Bug Reproduction
Code for reproducing the bug:
----
import autokeras as ak
from sklearn.datasets import make_multilabel_classification

X, Y = make_multilabel_classification(n_samples=100, n_features=64,
                                      n_classes=2, n_labels=1,
                                      allow_unlabeled=False,
                                      random_state=1)
X = X.reshape((100, 8, 8))
clf = ak.ImageClassifier(max_trials=2, multi_label=True)
clf.fit(X, Y, epochs=3, verbose=2)
----
Data used by the code:
synthetic data created with scikit-learn
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python: 3.6
- autokeras: 1.0.2
- keras-tuner:
- scikit-learn:
- numpy:
- pandas:
- tensorflow: 2.1.0
### Additional context
<!---
If applicable, add any other context about the problem.
-->
| closed | 2020-05-05T18:50:55Z | 2020-06-01T18:48:34Z | https://github.com/keras-team/autokeras/issues/1120 | [
"bug report",
"pinned"
] | qingquansong | 0 |
sigmavirus24/github3.py | rest-api | 336 | Proxy attributes to stored JSON | This way, as the GitHub API expands, even if we don't explicitly set an attribute, people can still do things like
``` py
pr = github3.pull_request('user', 'project', number)
pr.merged
```
It won't be documented in our docs but they'll be able to use it at least
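Roughly, the idea could look something like this (sketch only; names are illustrative, not the final implementation):
```py
class GitHubCore(object):
    """Sketch: fall back to the stored JSON payload for unknown attributes."""

    def __init__(self, json):
        self._json_data = json  # the raw dict returned by the GitHub API

    def __getattr__(self, attribute):
        # Only called when normal attribute lookup fails, so explicitly
        # defined attributes and properties keep working unchanged.
        try:
            return self._json_data[attribute]
        except KeyError:
            raise AttributeError(attribute)
```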
| closed | 2015-01-07T17:44:47Z | 2015-12-27T16:56:19Z | https://github.com/sigmavirus24/github3.py/issues/336 | [] | sigmavirus24 | 8 |
plotly/dash | plotly | 2,295 | Dropdown Options Extending Beyond Container | For a space-limited dashboard, it's common to have dropdown options with names that are much longer than the space allocated for the dropdown button. Additionally, for my application assume that:
- Each option needs to be a single line
- The full option text should be visible when the dropdown is open (i.e. no ellipses)
- The size of the dropdown and its container cannot be increased
Dash Bootstrap's dbc.Select component handles this well by treating the dropdown as a pop-up that can extend beyond its container when open. However, dbc.Select lacks the advanced features of dcc.Dropdown and is not an option for me. Thanks!
 | open | 2022-11-01T16:01:59Z | 2024-08-13T19:22:08Z | https://github.com/plotly/dash/issues/2295 | [
"feature",
"P3"
] | TGeary | 2 |
holoviz/panel | plotly | 7,334 | Less readable for panel.pane.DataFrame in Jupyter Dark Theme | <!--
Thanks for contacting us! Please read and follow these instructions carefully, then you can delete this introductory text. Note that the issue tracker is NOT the place for usage questions and technical assistance; post those at [Discourse](https://discourse.holoviz.org) instead. Issues without the required information below may be closed immediately.
-->
#### ALL software version info
<details>
<summary>Software Version Info</summary>
```plaintext
altair 5.4.1
anyio 4.6.0
appnope 0.1.4
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asttokens 2.4.1
astunparse 1.6.3
async-lru 2.0.4
attrs 24.2.0
babel 2.16.0
beautifulsoup4 4.12.3
black 24.8.0
bleach 6.1.0
bokeh 3.5.2
bqplot 0.12.43
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.3.2
click 8.1.7
comm 0.2.2
contourpy 1.3.0
cycler 0.12.1
debugpy 1.8.6
decorator 5.1.1
defusedxml 0.7.1
executing 2.1.0
fastjsonschema 2.20.0
fonttools 4.54.1
fqdn 1.5.1
gast 0.4.0
h11 0.14.0
httpcore 1.0.5
httpx 0.27.2
idna 3.10
ipydatagrid 1.3.2
ipyflow 0.0.200
ipyflow-core 0.0.200
ipykernel 6.29.5
ipympl 0.9.4
ipython 8.27.0
ipython-genutils 0.2.0
ipywidgets 8.1.5
isoduration 20.11.0
itable 0.0.1
jedi 0.19.1
Jinja2 3.1.4
joblib 1.4.2
json5 0.9.25
jsonpointer 3.0.0
jsonschema 4.23.0
jsonschema-specifications 2023.12.1
jupyter 1.1.1
jupyter_client 8.6.3
jupyter-console 6.6.3
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-lsp 2.2.5
jupyter_server 2.14.2
jupyter_server_terminals 0.5.3
jupyterlab 4.2.5
jupyterlab-lsp 5.1.0
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.3
jupyterlab_widgets 3.0.13
kiwisolver 1.4.7
linkify-it-py 2.0.3
Markdown 3.7
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.9.2
matplotlib-inline 0.1.7
mdit-py-plugins 0.4.2
mdurl 0.1.2
mistune 3.0.2
mypy-extensions 1.0.0
narwhals 1.8.3
nbclassic 1.1.0
nbclient 0.10.0
nbconvert 7.16.4
nbformat 5.10.4
nest-asyncio 1.6.0
notebook 7.2.2
notebook_shim 0.2.4
numpy 2.1.1
overrides 7.7.0
packaging 24.1
pandas 2.2.3
pandas-flavor 0.6.0
pandocfilters 1.5.1
panel 1.5.0
param 2.1.1
parso 0.8.4
pathspec 0.12.1
patsy 0.5.6
pexpect 4.9.0
pillow 10.4.0
pingouin 0.5.5
pip 24.2
platformdirs 4.3.6
prometheus_client 0.21.0
prompt_toolkit 3.0.48
psutil 6.0.0
ptyprocess 0.7.0
pure_eval 0.2.3
py2vega 0.6.1
pyccolo 0.0.54
pycparser 2.22
Pygments 2.18.0
pyparsing 3.1.4
python-dateutil 2.9.0.post0
python-json-logger 2.0.7
pytz 2024.2
pyviz_comms 3.0.3
PyYAML 6.0.2
pyzmq 26.2.0
referencing 0.35.1
requests 2.32.3
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rpds-py 0.20.0
scikit-learn 1.5.2
scipy 1.14.1
seaborn 0.13.2
Send2Trash 1.8.3
setuptools 75.1.0
six 1.16.0
sniffio 1.3.1
soupsieve 2.6
stack-data 0.6.3
statsmodels 0.14.3
tabulate 0.9.0
terminado 0.18.1
threadpoolctl 3.5.0
tinycss2 1.3.0
tornado 6.4.1
tqdm 4.66.5
traitlets 5.14.3
traittypes 0.2.1
types-python-dateutil 2.9.0.20240906
typing_extensions 4.12.2
tzdata 2024.2
uc-micro-py 1.0.3
uri-template 1.3.0
urllib3 2.2.3
voila 0.5.7
wcwidth 0.2.13
webcolors 24.8.0
webencodings 0.5.1
websocket-client 1.8.0
websockets 13.1
wheel 0.44.0
widgetsnbextension 4.0.13
xarray 2024.9.0
xyzservices 2024.9.0
```
</details>
#### Description of expected behavior and the observed behavior
Is there any argument to set the background to the default HTML render background (gray/dark gray) for a `pd.DataFrame` in the Jupyter dark theme? The default black-and-white background is less readable, especially combined with `pandas.DataFrame.style`. This is not a problem with the light theme.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel as pn
import pingouin as pg
import pandas as pd
from pandas.io.formats.style import Styler
import numpy as np
pn.extension()
```
```python
data = pg.read_dataset("mixed_anova")
data_style: Styler = data.style
data = data_style.background_gradient(cmap='Blues', subset='Scores')
data_pn = pn.pane.DataFrame(data, max_height=200, sizing_mode="stretch_both")
data_pn
```
```python
data
```
#### Screenshots or screencasts of the bug in action
<img width="811" alt="image" src="https://github.com/user-attachments/assets/2a6b44d7-f420-44ff-a5ce-f1797c979237">
<img width="835" alt="image" src="https://github.com/user-attachments/assets/41b7d30f-0aa7-4bde-b0ff-de820d7f935f">
| open | 2024-09-27T16:30:43Z | 2024-10-24T15:19:05Z | https://github.com/holoviz/panel/issues/7334 | [] | YongcaiHuang | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,320 | [Feature Request]: Add Hypernetwork Refresh API for API Mode. | ### Is there an existing issue for this?
- [x] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Hello,
I've recently been working with Stable Diffusion and my project is deployed on a server, necessitating operation via API mode. I noticed that the API includes functions like refresh-checkpoints / reload-checkpoint. However, I've found there's no API for updating the hypernetwork list. This absence means that when new .pt files are added during service operation, they cannot be immediately read, and a complete service restart is required.
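For illustration, this is roughly how such an endpoint would be used from a client, mirroring the existing refresh-checkpoints call (the `/sdapi/v1/refresh-hypernetworks` path below is hypothetical; it does not exist today, which is the point of this request):
```python
import requests

base_url = "http://127.0.0.1:7860"  # adjust to your server

# Existing endpoint: re-scan checkpoint files on disk.
requests.post(f"{base_url}/sdapi/v1/refresh-checkpoints")

# Hypothetical endpoint requested here: re-scan hypernetwork .pt files
# without restarting the whole service.
requests.post(f"{base_url}/sdapi/v1/refresh-hypernetworks")
```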
As an aside, I noticed there's a refresh button in the web API, but I couldn't find a corresponding API endpoint.
<img width="599" alt="image" src="https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/83401245/d3ef66d1-4a65-4259-b613-12cfaa3ad8e4">
Lastly, I apologize as my English is not very strong and my coding skills are somewhat limited. I appreciate any guidance or advice.
Thank you.
my stable-diffusion-webui version : 1.4.1
| closed | 2024-03-19T07:14:35Z | 2024-04-12T03:02:42Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15320 | [
"enhancement"
] | FEXAQAQ | 1 |
numba/numba | numpy | 9,225 | python312: when can it be used? | <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the change log (https://github.com/numba/numba/blob/main/CHANGE_LOG).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
| closed | 2023-10-05T01:49:50Z | 2023-10-05T07:44:49Z | https://github.com/numba/numba/issues/9225 | [
"duplicate"
] | Franklyn1987 | 1 |
exaloop/codon | numpy | 368 | Upstreaming OpenMP changes discussion | This is just an attempt to start a discussion about what it would take to upstream the changes (or perhaps some another solution) for codon. | open | 2023-04-24T06:10:16Z | 2024-11-10T06:05:58Z | https://github.com/exaloop/codon/issues/368 | [
"enhancement"
] | seanfarley | 5 |
vchaptsev/cookiecutter-django-vue | graphql | 40 | Don't npm install on every serve | No reason to install all the node modules every time you run the app, it adds a ton of time to startup needlessly. There should be a dockerfile for the Frontend app that does this and finally just runs the serve command. | open | 2019-09-08T19:34:12Z | 2020-04-08T22:05:29Z | https://github.com/vchaptsev/cookiecutter-django-vue/issues/40 | [
"refactor"
] | mekhami | 0 |
Kludex/mangum | asyncio | 236 | [Question] How to get logs like Zappa | Hi there
I followed this tutorial to get FastAPI up into a Lambda function: https://adem.sh/blog/tutorial-fastapi-aws-lambda-serverless
It seems to be working, but when I tail the logs (`sls logs --function app --stage test`), I see my 'hello' INFO log in there, but it's enclosed in a large block of other logging. It looks like the following:
```
START
2022-02-13 13:58:36,501 Event received.
2022-02-13 13:58:36,501 Waiting for application startup.
2022-02-13 13:58:36,501 LifespanCycleState.STARTUP: 'lifespan.startup.complete' event received from application.
2022-02-13 13:58:36,501 Application startup complete.
2022-02-13 13:58:36,501 HTTP cycle starting.
2022-02-13 13:58:36,502 hello
2022-02-13 13:58:36,502 HTTPCycleState.REQUEST: 'http.response.start' event received from application.
2022-02-13 13:58:36,502 HTTPCycleState.RESPONSE: 'http.response.body' event received from application.
2022-02-13 13:58:36,503 Waiting for application shutdown.
2022-02-13 13:58:36,503 LifespanCycleState.SHUTDOWN: 'lifespan.shutdown.complete' event received from application.
END Duration: 3.55 ms Billed Duration: 4 ms Memory Size: 1024 MB Max Memory Used: 79 MB
```
What would I need to do to tail nice coloured logs like I'm used to with Flask/Zappa, with the option to filter them? Ideally, calls to each endpoint would be logged on a single line, my own log statements would be single lines, and uncaught exceptions would also be visible. Basically, I'd like to tail the cloud logs so that they look as similar to the local FastAPI logs as possible. | closed | 2022-02-13T14:04:26Z | 2022-03-05T04:40:02Z | https://github.com/Kludex/mangum/issues/236 | [] | dsmurrell | 5 |
feature-engine/feature_engine | scikit-learn | 195 | test code in rst files | For each transformer, and also in the quickstart, we have code in rst files. I would like to introduce tests, so that when we make changes the tests highlight if something is broken and needs fixing. At the moment, we need to check manually; one possible approach is sketched below.
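A minimal sketch of how this could work with Sphinx's doctest builder (one option among several; pytest's `--doctest-glob='*.rst'` would be an alternative for doctest-style examples):
```python
# docs/conf.py (sketch): enable Sphinx's doctest builder so code in the
# .rst files can be executed and checked, e.g. with
# `sphinx-build -b doctest docs docs/_build`.
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.doctest",  # runs >>> examples and .. testcode:: blocks in the rst files
]
```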
This will get worse when we add more complicated tutorials on rst files. | closed | 2020-12-09T10:04:30Z | 2024-08-25T16:58:32Z | https://github.com/feature-engine/feature_engine/issues/195 | [
"docs",
"code quality"
] | solegalli | 0 |
aio-libs/aiomysql | asyncio | 440 | AttributeError: '_WindowsSelectorEventLoop' object has no attribute 'acquire' | async with pool.acquire() as conn:
    async with conn.cursor() as cur:
        # await cur.execute("SELECT 42;")
        insert_sql = "insert into article_test(title) values('{}') ".format(title)
        await cur.execute(insert_sql) | closed | 2019-09-20T07:42:23Z | 2019-09-20T14:44:04Z | https://github.com/aio-libs/aiomysql/issues/440 | [] | hubinggg | 1 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,208 | Flagged by Imperva | I've been getting "Error 15" after trying to login to "https://driverpracticaltest.dvsa.gov.uk/login".
This is the same issue as #690, however the suggested workaround on that thread no longer works as you don't get an instant captcha, so no cookies to grab.
I'm able to get the login page fine but as soon as I click login I get flagged.
Any suggestions?
Here is my requirements.txt:
`undetected-chromedriver==3.4.6
anticaptchaofficial==1.0.29
backcall==0.2.0
cachetools==4.2.0
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
decorator==4.4.2
Flask==1.1.2
google-api-core==1.25.0
google-api-python-client==1.12.8
google-auth==1.24.0
google-auth-httplib2==0.0.4
google-auth-oauthlib==0.4.2
googleapis-common-protos==1.52.0
gunicorn==20.0.4
httplib2==0.18.1
idna==2.10
ipython==7.16.1
ipython-genutils==0.2.0
itsdangerous==1.1.0
jedi==0.18.0
Jinja2==2.11.3
MarkupSafe==1.1.1
mysql-connector==2.2.9
oauthlib==3.1.0
parso==0.8.1
pexpect==4.8.0
pickleshare==0.7.5
prompt-toolkit==3.0.13
protobuf==3.14.0
ptyprocess==0.7.0
py==1.10.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
Pygments==2.7.4
PyJWT==1.7.1
python-dotenv==0.15.0
pytz==2020.5
random-user-agent==1.0.1
requests==2.25.1
requests-oauthlib==1.3.0
rsa==4.7
selenium==3.141.0
six==1.15.0
SQLAlchemy==1.3.22
traitlets==4.3.3
twilio==6.53.0
uritemplate==3.0.1
urllib3==1.26.2
wcwidth==0.2.5
Werkzeug==1.0.1
seleniumwire==4.6.1`
| open | 2023-04-19T02:53:53Z | 2023-06-25T13:40:59Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1208 | [] | JacobHobday | 4 |
jina-ai/clip-as-service | pytorch | 80 | Does this service support multiple GPU? | closed | 2018-11-30T10:16:43Z | 2018-11-30T10:31:05Z | https://github.com/jina-ai/clip-as-service/issues/80 | [] | jiqiujia | 1 |
ultralytics/yolov5 | deep-learning | 12,801 | The reasoning result is abnormal | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
_No response_
### Bug
I'm training on a custom dataset that has only one category. The training is all normal, and the final val result visualization is also normal.

But when I use the trained model for inference:
> python detect.py --weights runs/train/exp/weights/last.pt --data data/bdd100k.yaml --source /root/yolov5/datasets/bdd100k/images/val
log:
(base) root@autodl-container-23c2469e43-849f7d78:~/yolov5# python detect.py --weights runs/train/exp/weights/last.pt --data data/bdd100k.yaml --source /root/yolov5/datasets/bdd100k/images/train --conf_thres=0.5 --iou_thres=0.1
usage: detect.py [-h] [--weights WEIGHTS [WEIGHTS ...]] [--source SOURCE] [--data DATA] [--imgsz IMGSZ [IMGSZ ...]] [--conf-thres CONF_THRES] [--iou-thres IOU_THRES] [--max-det MAX_DET] [--device DEVICE] [--view-img] [--save-txt] [--save-csv] [--save-conf] [--save-crop] [--nosave]
[--classes CLASSES [CLASSES ...]] [--agnostic-nms] [--augment] [--visualize] [--update] [--project PROJECT] [--name NAME] [--exist-ok] [--line-thickness LINE_THICKNESS] [--hide-labels] [--hide-conf] [--half] [--dnn] [--vid-stride VID_STRIDE]
detect.py: error: unrecognized arguments: --conf_thres=0.5 --iou_thres=0.1
(base) root@autodl-container-23c2469e43-849f7d78:~/yolov5# python detect.py --weights runs/train/exp/weights/last.pt --data data/bdd100k.yaml --source /root/yolov5/datasets/bdd100k/images/val
detect: weights=['runs/train/exp/weights/last.pt'], source=/root/yolov5/datasets/bdd100k/images/val, data=data/bdd100k.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 v7.0-290-gb2ffe055 Python-3.8.10 torch-1.9.0+cu111 CUDA:0 (NVIDIA GeForce RTX 4090, 24217MiB)
Fusing layers...
As a result, many bboxes appeared that should not have appeared:



How should I deal with this problem?
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-03-10T04:40:25Z | 2024-10-20T19:41:05Z | https://github.com/ultralytics/yolov5/issues/12801 | [
"bug"
] | Bin-ze | 7 |