repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
tflearn/tflearn | data-science | 499 | histogram_summary etc. are deprecated in Tensorflow r0.12 | Shall we update to reflect the latest tensorflow API?
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/deprecated
| closed | 2016-12-06T09:17:51Z | 2016-12-06T20:44:24Z | https://github.com/tflearn/tflearn/issues/499 | [] | bowang | 1 |
itamarst/eliot | numpy | 394 | Logging tracebacks more easily | Currently I need to explicitly include a traceback whenever I want one, but since tracebacks are so useful, I usually do want one. Does it make sense to make this easy by wrapping up the exception, some exception data, and a traceback all together in a way that makes it easy for users to enable?
```python
import inspect
import traceback
import eliot
def _exception_lines(exc: BaseException):
return traceback.format_exception(type(exc), exc, exc.__traceback__)
def _exception_data(exc: BaseException):
# Exclude the attributes that appear on a regular exception,
# aside from a few interesting ones.
exclude = set(dir(Exception())) - {"args", "__cause__", "__context__"}
return {k: v for k, v in inspect.getmembers(exc) if k not in exclude}
def summarize_exception(exc: BaseException):
return {
"exception_lines": _exception_lines(exc),
"exception_data": _exception_data(exc),
}
eliot.register_exception_extractor(Exception, summarize_exception)
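# --- Illustrative usage sketch (my addition; MyError and the log path are made up) ---
# With the extractor registered above, a failed action's log entry should carry the
# extra "exception_lines" / "exception_data" fields produced by summarize_exception.
class MyError(Exception):
    def __init__(self, code):
        super().__init__(code)
        self.code = code

eliot.to_file(open("example.log", "w"))
try:
    with eliot.start_action(action_type="example:do_thing"):
        raise MyError(42)
except MyError:
    pass  # example.log should now contain the action failure with the summarized exception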
``` | open | 2019-03-23T20:50:10Z | 2019-06-14T21:17:51Z | https://github.com/itamarst/eliot/issues/394 | [
"enhancement"
] | jtrakk | 2 |
PokeAPI/pokeapi | graphql | 344 | Import error | When I try to import the CSV data into Postgres I get:
```
bin/python manage.py shell --settings=config.local
Python 3.5.5 (default, Feb 6 2018, 10:57:32)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from data.v2.build import build_all
Traceback (most recent call last):
File "/var/www/subdomains/django/pokeapi/lib/python3.5/site-packages/django/core/management/commands/shell.py", line 69, in handle
self.run_shell(shell=options['interface'])
File "/var/www/subdomains/django/pokeapi/lib/python3.5/site-packages/django/core/management/commands/shell.py", line 61, in run_shell
raise ImportError
ImportError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/var/www/subdomains/django/pokeapi/data/v2/__init__.py", line 1, in <module>
from build import * # NOQA
ImportError: No module named 'build'
```
Any idea why?
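In case it helps, this looks like the usual Python 3 implicit relative import problem (my guess from the traceback, not something confirmed against the repo): `data/v2/__init__.py` does `from build import *`, which Python 3 only treats as an absolute import. A hedged sketch of the typical fix:
```python
# data/v2/__init__.py -- hypothetical fix, assuming build.py lives in the same package
from .build import *  # NOQA  (explicit relative import, valid on Python 3)
```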
| closed | 2018-06-30T23:37:56Z | 2020-08-12T20:20:32Z | https://github.com/PokeAPI/pokeapi/issues/344 | [] | boberski666 | 4 |
keras-team/keras | data-science | 20,608 | `keras.ops.image.map_coordinates` fails on `uint8` input with TensorFlow backend | Consider the following simple example
```python
import keras
image = keras.ops.ones((1, 1, 3), dtype='uint8')
coordinates = keras.ops.convert_to_tensor([-1., 0., 0.])[..., None, None]
interp = keras.ops.image.map_coordinates(image, coordinates, order=1, fill_mode='constant')
```
that is expected to yield `[[0]]`. However, with `KERAS_BACKEND=tensorflow` this code snippet results in
```console
2024-12-08 16:04:24.790791: W tensorflow/core/framework/op_kernel.cc:1841] OP_REQUIRES failed at gather_nd_op.cc:65 : INVALID_ARGUMENT: indices[0,0] = [-1, 0, 0] does not index into param shape [1,1,3], node name: GatherNd
2024-12-08 16:04:24.790814: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: INVALID_ARGUMENT: indices[0,0] = [-1, 0, 0] does not index into param shape [1,1,3], node name: GatherNd
Traceback (most recent call last):
File "<home>/tfmapc.py", line 11, in <module>
interp = keras.ops.image.map_coordinates(image, coordinates, order=1, fill_mode='constant')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<home>/.env/lib/python3.12/site-packages/keras/src/ops/image.py", line 787, in map_coordinates
return backend.image.map_coordinates(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<home>/.env/lib/python3.12/site-packages/keras/src/backend/tensorflow/image.py", line 485, in map_coordinates
contribution = tf.cond(tf.reduce_all(validities), fast_path, slow_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<home>/.env/lib/python3.12/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "<home>/.env/lib/python3.12/site-packages/keras/src/backend/tensorflow/image.py", line 481, in slow_path
tf.transpose(tf.gather_nd(input_arr, indices)),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__GatherNd_device_/job:localhost/replica:0/task:0/device:CPU:0}} indices[0,0] = [-1, 0, 0] does not index into param shape [1,1,3], node name: GatherNd [Op:GatherNd] name:
```
The problem does not occur if I change the `dtype` of `image` from `uint8` to `float32` or switch either to the `jax` or `torch` backends. Also changing the `fill_mode` from `constant` to `nearest` avoids the issue.
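For reference, a possible workaround sketch until this is fixed (my own guess, not an official recommendation): do the interpolation in `float32` and cast back afterwards.
```python
# hypothetical workaround sketch: interpolate in float32, then cast back to uint8
image_f = keras.ops.cast(image, "float32")
interp = keras.ops.image.map_coordinates(image_f, coordinates, order=1, fill_mode="constant")
interp = keras.ops.cast(interp, "uint8")
```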
Keras version: 3.7.0 | closed | 2024-12-08T15:18:28Z | 2025-01-17T18:12:00Z | https://github.com/keras-team/keras/issues/20608 | [
"stat:awaiting response from contributor",
"type:Bug"
] | sergiud | 4 |
fastapi-users/fastapi-users | fastapi | 1,462 | 500 Response on jwt.exception.ExpiredSignatureError | ## Describe the bug
When using the `oauth_router`, the `state` jwt token has an expiration time.
When the OAuth provider calls the `callback` endpoint with an expired state token, an Internal Server Error (500) is returned because a `jwt.ExpiredSignatureError` is raised, while only the `jwt.DecodeError` case is handled in the code.
## To Reproduce
1. Call the `oauth_routers` `/authorize` endpoint.
2. Wait for the `state` token to expire.
3. Call the `/callback` with an otherwise valid request (except for an expired token).
4. See error
## Expected behavior
When calling the `callback` endpoint with an Invalid token:
`400 BAD REQUEST` or similar, should be the response instead of `500`
## Solution Proposal
Catch `jwt.InvalidTokenError` instead of just `jwt.DecodeError`.
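A minimal sketch of what the proposed change might look like (illustrative only; the surrounding code is paraphrased, not the actual library source):
```python
import jwt
from fastapi import HTTPException, status

try:
    state_data = decode_state_jwt(state)  # hypothetical stand-in for the real decode call
except jwt.InvalidTokenError:  # base class of both DecodeError and ExpiredSignatureError
    raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST)
```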
| open | 2024-11-08T09:45:04Z | 2024-11-08T09:45:04Z | https://github.com/fastapi-users/fastapi-users/issues/1462 | [
"bug"
] | alexanderlazarev0 | 0 |
lorien/grab | web-scraping | 279 | No module 'lxml' | Traceback (most recent call last):
File "E:/Projects/playgrounds/webscraper-grab/graber.py", line 2, in <module>
from grab import Grab
File "C:\Users\240\AppData\Local\conda\conda\envs\webscraper-grab\lib\site-packages\grab\__init__.py", line 5, in <module>
from grab.base import Grab # noqa
File "C:\Users\240\AppData\Local\conda\conda\envs\webscraper-grab\lib\site-packages\grab\base.py", line 24, in <module>
from grab.document import Document
File "C:\Users\240\AppData\Local\conda\conda\envs\webscraper-grab\lib\site-packages\grab\document.py", line 23, in <module>
from lxml.html import HTMLParser
ImportError: No module named 'lxml'
When I try to install lxml (Windows 10 64bit)
> (webscraper-grab) E:\Projects\playgrounds\webscraper-grab>pip install lxml
it returns an error:
Failed building wheel for lxml | closed | 2017-09-12T05:04:48Z | 2017-09-25T07:07:12Z | https://github.com/lorien/grab/issues/279 | [] | crawlerabc | 1 |
pydata/pandas-datareader | pandas | 497 | Add Investopedia as a Data Source? | Example URL, and note there is no API key required:
https://www.investopedia.com/markets/api/partial/historical/?Symbol=HMSF.L&Type=Historical+Prices&Timeframe=Daily&StartDate=Jan+10%2C+2018
This should return daily data for the following fields: Date, Open, High, Low, Adjusted Close, Volume
Importantly, this data set differs from Morningstar, IEX, et al because it provides Adjusted Close, which is adjusted for dividends and splits. The others provide a Close price, with no adjustments.
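For reference, a rough sketch of how this could be pulled straight into pandas (untested; as far as I can tell the endpoint returns an HTML table fragment, and that layout may change):
```python
# rough, untested sketch -- the response format is an assumption
import pandas as pd

url = (
    "https://www.investopedia.com/markets/api/partial/historical/"
    "?Symbol=HMSF.L&Type=Historical+Prices&Timeframe=Daily&StartDate=Jan+10%2C+2018"
)
history = pd.read_html(url)[0]  # parse the first table in the returned fragment
print(history.head())
```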
Edit: while I am here ... thanks to all contributors for the pandas-datareader package! | closed | 2018-02-25T19:25:23Z | 2018-03-10T23:27:09Z | https://github.com/pydata/pandas-datareader/issues/497 | [] | DennisStoller | 7 |
SCIR-HI/Huatuo-Llama-Med-Chinese | nlp | 83 | Hoping to get in touch | Dear Huatuo-Llama-Med-Chinese developers, I am Jianmi, an InternLM community developer & volunteer. Your open-source work has been very inspiring to me, and I would like to discuss the possibility of, and a path towards, implementing Huatuo-Llama-Med-Chinese with InternLM. My WeChat is mzm312; I hope we can get in touch for a deeper exchange. | closed | 2023-08-06T12:51:34Z | 2023-08-12T15:24:53Z | https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese/issues/83 | [] | JimmyMa99 | 1 |
iterative/dvc | data-science | 9,818 | `dvc repro --dry --allow-missing`: fails on missing data | ## I tried to update our dvc ci pipeline
Currently we got the following commands (among others).
`dvc pull` to check if everything is pushed
`dvc status` to check that the DVC status is clean, in other words that no stage would be re-run if one ran `dvc repro`.
But pulling takes a long time, and with the new --allow-missing feature I thought I could skip that with
```
dvc data status --not-in-remote --json | grep -v not_in_remote
dvc repro --allow-missing --dry
```
The first works as expected: it fails if data was forgotten to be pushed and succeeds if it was.
But the latter just fails on missing data.
### Reproduce
Example: Failure/Success on Machine Two and Three should be synced
Machine One:
1. dvc repro -f
2. git add . && git commit -m "repro" && dvc push && git push
3. dvc repro --allow-missing --dry
--> doesnt fail, nothing changed (as expected)
Machine Two:
4. dvc data status --not-in-remote --json | grep -v not_in_remote
--> does not fail, everything is pushed and would be pulled
5. dvc repro --allow-missing --dry
--> fails on missing data (unexpected)
Machine Three
4. dvc pull
5. dvc status
--> succeeds
### Expected
On a machine where I did not `dvc pull`, given a clean git state and a clean `dvc data status --not-in-remote --json | grep -v not_in_remote` state, I would expect `dvc repro --allow-missing --dry` to succeed and show me that no stage had to run.
### Environment information
Linux
**Output of `dvc doctor`:**
```console
$ dvc doctor
09:16:47 DVC version: 3.13.2 (pip)
09:16:47 -------------------------
09:16:47 Platform: Python 3.10.11 on Linux-5.9.0-0.bpo.5-amd64-x86_64-with-glibc2.35
09:16:47 Subprojects:
09:16:47 dvc_data = 2.12.1
09:16:47 dvc_objects = 0.24.1
09:16:47 dvc_render = 0.5.3
09:16:47 dvc_task = 0.3.0
09:16:47 scmrepo = 1.1.0
09:16:47 Supports:
09:16:47 azure (adlfs = 2023.4.0, knack = 0.11.0, azure-identity = 1.13.0),
09:16:47 gdrive (pydrive2 = 1.16.1),
09:16:47 gs (gcsfs = 2023.6.0),
09:16:47 hdfs (fsspec = 2023.6.0, pyarrow = 12.0.1),
09:16:47 http (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
09:16:47 https (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
09:16:47 oss (ossfs = 2021.8.0),
09:16:47 s3 (s3fs = 2023.6.0, boto3 = 1.28.17),
09:16:47 ssh (sshfs = 2023.7.0),
09:16:47 webdav (webdav4 = 0.9.8),
09:16:47 webdavs (webdav4 = 0.9.8),
09:16:47 webhdfs (fsspec = 2023.6.0)
09:16:47 Config:
09:16:47 Global: /home/runner/.config/dvc
09:16:47 System: /etc/xdg/dvc
09:16:47 Cache types: <https://error.dvc.org/no-dvc-cache>
09:16:47 Caches: local
09:16:47 Remotes: ssh
09:16:47 Workspace directory: ext4 on /dev/nvme0n1p2
09:16:47 Repo: dvc, git
```
| closed | 2023-08-08T09:21:56Z | 2023-10-17T12:12:21Z | https://github.com/iterative/dvc/issues/9818 | [
"bug",
"awaiting response",
"p1-important",
"A: pipelines"
] | Otterpatsch | 24 |
ContextLab/hypertools | data-visualization | 210 | interference with seaborn/matplotlib settings | When the plot function is called, hypertools modifies seaborn/matplotlib settings, causing plots run after a hypertools plot function call to change. I believe it's caused by calling `set_palette` and `set_style` inside the plot function, which we should remove and modify the plot instance directly | closed | 2018-05-22T12:12:56Z | 2022-10-29T02:48:14Z | https://github.com/ContextLab/hypertools/issues/210 | [
"bug"
] | andrewheusser | 1 |
widgetti/solara | flask | 803 | Feature request: Enhancements to FileDrop | Hi there!
After using FileDrop{Multiple} for a few months, a few limitations keep arising that are difficult to patch:
- Reuploads of the same file are ignored, and it is quite difficult to "clear" the state of the FileDrop to listen for reuploads
- Customizing the dropzone appearance is impossible; ideally we can provide our own element from Solara land
- Users can drop files while the previous drop is still uploading, leading to undefined behavior
- Most file uploaders allow dropping _or_ clicking, while FileDrop only allows dropping
I ended up implementing a `DropOrInputFile{s}` which combines logic from `FileDrop` and `InputFile` widgets to resolve all these points:
- Clicking and dropping behaviors are added to a solara `activator` element passed to the function
- Clicking the widget brings up a file browser
- Much easier to use than `InputFile`, which doesn't allow custom widgets and complicates the process of reading file data
- Minor improvements like allowing a reupload of the same file name and disabling the widget during upload, though these changes are easily provided as a PR to the existing FileDrop
**My question is this:** If I were to open source the implementation, would you prefer it as a contrib element in a third-party package, new `DropOrInputFile{s}` elements within core solara, or a modification of FileDrop/FileDropMultiple?
<details>
<summary>Source Code for video</summary>
```python
@sl.component
def Page():
DefaultPageStyle()
text, set_text = sl.use_state(b"")
def on_file(file: FileInfo):
set_text(file["file_obj"].read(100))
with sl.Column(style="width: 350px"):
DropOrInputFile(on_file=on_file)
DropOrInputFile(
sl.Card("Drag onto a custom element", style="background-color: green"),
on_file=on_file,
)
if text:
sl.HTML("h2", "First 100 bytes of the file:")
sl.Text(str(text))
```
</details>
https://github.com/user-attachments/assets/3520ebd7-81a4-45d5-ac65-ec2ba42772bb
| open | 2024-09-27T16:25:27Z | 2024-10-01T19:39:23Z | https://github.com/widgetti/solara/issues/803 | [] | ntjess | 1 |
jupyter/nbgrader | jupyter | 1,910 | Formgrader not loading | Hello everyone,
First of all, big thanks for the good work!
I just did a fresh install of JupyterHub (TLJH) and nbgrader; unfortunately, formgrader does not load. In the console I see the error:
Refused to frame 'mydomain.com' because an ancestor violates the following Content Security Policy directive: "frame-ancestors 'none'".
### Operating system
Ubuntu 22.04.4 LTS
### `nbgrader --version`
nbgrader version 0.9.3
### `jupyterhub --version` (if used with JupyterHub)
4.1.6
### `jupyter notebook --version`
7.2.1
### Expected behavior
Formgrader to load
### Actual behavior
Formgrader not loading
and
console Error:
Refused to frame 'mydomain.com' because an ancestor violates the following Content Security Policy directive: "frame-ancestors 'none'".
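In case it is useful, one commonly suggested workaround (my own hedged sketch, not an official fix) is to relax the `frame-ancestors` directive on the single-user server so the Formgrader page may be framed:
```python
# hedged sketch for the single-user server config -- adjust names/domains to your setup
c.ServerApp.tornado_settings = {
    "headers": {
        "Content-Security-Policy": "frame-ancestors 'self' https://mydomain.com",
    }
}
```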
### Steps to reproduce the behavior
Install TLJH
Install nbgrader
Load formgrader | closed | 2024-08-02T21:30:16Z | 2024-08-03T15:30:01Z | https://github.com/jupyter/nbgrader/issues/1910 | [
"bug"
] | henry-goluss | 3 |
huggingface/transformers | nlp | 36,723 | trainer.train() | ### System Info
Hello,
I still get the error below with distilbert-base-uncased on Jupyter!
Please help me.
--------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[90], line 9
2 trainer = Trainer(
3 model=model,
4 args=training_args,
5 train_dataset=train_dataset,
6 eval_dataset=val_dataset
7 )
8 # شروع آموزش
----> 9 trainer.train()
File ~\anaconda3\Lib\site-packages\transformers\trainer.py:2241, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
2239 hf_hub_utils.enable_progress_bars()
2240 else:
-> 2241 return inner_training_loop(
2242 args=args,
2243 resume_from_checkpoint=resume_from_checkpoint,
2244 trial=trial,
2245 ignore_keys_for_eval=ignore_keys_for_eval,
2246 )
File ~\anaconda3\Lib\site-packages\transformers\trainer.py:2548, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2541 context = (
2542 functools.partial(self.accelerator.no_sync, model=model)
2543 if i != len(batch_samples) - 1
2544 and self.accelerator.distributed_type != DistributedType.DEEPSPEED
2545 else contextlib.nullcontext
2546 )
2547 with context():
-> 2548 tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
2550 if (
2551 args.logging_nan_inf_filter
2552 and not is_torch_xla_available()
2553 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
2554 ):
2555 # if loss is nan or inf simply add the average of previous logged losses
2556 tr_loss = tr_loss + tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File ~\anaconda3\Lib\site-packages\transformers\trainer.py:3698, in Trainer.training_step(self, model, inputs, num_items_in_batch)
3695 return loss_mb.reduce_mean().detach().to(self.args.device)
3697 with self.compute_loss_context_manager():
-> 3698 loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
3700 del inputs
3701 if (
3702 self.args.torch_empty_cache_steps is not None
3703 and self.state.global_step % self.args.torch_empty_cache_steps == 0
3704 ):
File ~\anaconda3\Lib\site-packages\transformers\trainer.py:3759, in Trainer.compute_loss(self, model, inputs, return_outputs, num_items_in_batch)
3757 loss_kwargs["num_items_in_batch"] = num_items_in_batch
3758 inputs = {**inputs, **loss_kwargs}
-> 3759 outputs = model(**inputs)
3760 # Save past state if it exists
3761 # TODO: this needs to be fixed and made cleaner later.
3762 if self.args.past_index >= 0:
File ~\anaconda3\Lib\site-packages\torch\nn\modules\module.py:1739, in Module._wrapped_call_impl(self, *args, **kwargs)
1737 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1738 else:
-> 1739 return self._call_impl(*args, **kwargs)
File ~\anaconda3\Lib\site-packages\torch\nn\modules\module.py:1845, in Module._call_impl(self, *args, **kwargs)
1842 return inner()
1844 try:
-> 1845 return inner()
1846 except Exception:
1847 # run always called hooks if they have not already been run
1848 # For now only forward hooks have the always_call option but perhaps
1849 # this functionality should be added to full backward hooks as well.
1850 for hook_id, hook in _global_forward_hooks.items():
File ~\anaconda3\Lib\site-packages\torch\nn\modules\module.py:1793, in Module._call_impl.<locals>.inner()
1790 bw_hook = BackwardHook(self, full_backward_hooks, backward_pre_hooks)
1791 args = bw_hook.setup_input_hook(args)
-> 1793 result = forward_call(*args, **kwargs)
1794 if _global_forward_hooks or self._forward_hooks:
1795 for hook_id, hook in (
1796 *_global_forward_hooks.items(),
1797 *self._forward_hooks.items(),
1798 ):
1799 # mark that always called hook is run
File ~\anaconda3\Lib\site-packages\transformers\models\distilbert\modeling_distilbert.py:977, in DistilBertForSequenceClassification.forward(self, input_ids, attention_mask, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
969 r"""
970 labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
971 Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
972 config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
973 `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
974 """
975 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
--> 977 distilbert_output = self.distilbert(
978 input_ids=input_ids,
979 attention_mask=attention_mask,
980 head_mask=head_mask,
981 inputs_embeds=inputs_embeds,
982 output_attentions=output_attentions,
983 output_hidden_states=output_hidden_states,
984 return_dict=return_dict,
985 )
986 hidden_state = distilbert_output[0] # (bs, seq_len, dim)
987 pooled_output = hidden_state[:, 0] # (bs, dim)
File ~\anaconda3\Lib\site-packages\torch\nn\modules\module.py:1739, in Module._wrapped_call_impl(self, *args, **kwargs)
1737 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1738 else:
-> 1739 return self._call_impl(*args, **kwargs)
File ~\anaconda3\Lib\site-packages\torch\nn\modules\module.py:1845, in Module._call_impl(self, *args, **kwargs)
1842 return inner()
1844 try:
-> 1845 return inner()
1846 except Exception:
1847 # run always called hooks if they have not already been run
1848 # For now only forward hooks have the always_call option but perhaps
1849 # this functionality should be added to full backward hooks as well.
1850 for hook_id, hook in _global_forward_hooks.items():
File ~\anaconda3\Lib\site-packages\torch\nn\modules\module.py:1793, in Module._call_impl.<locals>.inner()
1790 bw_hook = BackwardHook(self, full_backward_hooks, backward_pre_hooks)
1791 args = bw_hook.setup_input_hook(args)
-> 1793 result = forward_call(*args, **kwargs)
1794 if _global_forward_hooks or self._forward_hooks:
1795 for hook_id, hook in (
1796 *_global_forward_hooks.items(),
1797 *self._forward_hooks.items(),
1798 ):
1799 # mark that always called hook is run
File ~\anaconda3\Lib\site-packages\transformers\models\distilbert\modeling_distilbert.py:776, in DistilBertModel.forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
774 input_shape = inputs_embeds.size()[:-1]
775 else:
--> 776 raise ValueError("You have to specify either input_ids or inputs_embeds")
778 device = input_ids.device if input_ids is not None else inputs_embeds.device
780 head_mask_is_none = head_mask is None
ValueError: You have to specify either input_ids or inputs_embeds
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification
import torch
import os
os.environ["HF_HUB_DISABLE_SYMLINKS_WARNING"] = "1"
# Load the tokenizer and the pretrained model
model_name = "distilbert-base-uncased"
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
# Load the model
model = DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=2) # example with two classes
print("Model and tokenizer loaded successfully!")
sample_text = "این یک تست است"
tokens = tokenizer(sample_text, truncation=True, padding=True, max_length=64)
print(tokens)
# Load and prepare the data
import pandas as pd
file_path = 'C:\\Users\\saada\\ARS\\tasnim_cleaned_label_encoded_label.xlsx' # path to the input file with the data
df = pd.read_excel(file_path)  # assumed: read the spreadsheet into df (df is used below)
# Show the first few rows of the data
print("A sample of the data:")
print(df.head())
# We assume the file contains 'text' and 'label' columns
texts = df['title'].tolist() # the texts
labels = df['label'].tolist() # the labels
# Convert the data to a Dataset
from datasets import Dataset
train_dataset = Dataset.from_dict({"text": train_texts, "label": train_labels})
val_dataset = Dataset.from_dict({"text": val_texts, "label": val_labels})
# Tokenize the texts
encodings = tokenizer(texts, truncation=True, padding=True, max_length=64, return_tensors="pt")
# Show the tokenized data
print("Texts tokenized successfully!")
print(encodings)
from torch.utils.data import DataLoader, Dataset
import torch
# Define the Dataset class
class CustomDataset(Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __len__(self):
return len(self.labels)
def __getitem__(self, idx):
return {
'input_ids': self.encodings['input_ids'][idx],
'attention_mask': self.encodings['attention_mask'][idx],
'labels': torch.tensor(self.labels[idx])
}
# Build the custom Dataset
dataset = CustomDataset(encodings, labels)
print(dataset[0]) # inspect the first sample
from sklearn.model_selection import train_test_split
train_texts, val_texts, train_labels, val_labels = train_test_split(
texts, labels, test_size=0.2, random_state=42
)
# Tokenize the split data
train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=64, return_tensors="pt")
val_encodings = tokenizer(val_texts, truncation=True, padding=True, max_length=64, return_tensors="pt")
from transformers import Trainer, TrainingArguments
import os
# Disable WandB
os.environ["WANDB_MODE"] = "disabled"
os.environ["WANDB_PROJECT"] = "disabled"
os.environ["WANDB_DISABLE"] = "true"
# Model and data
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=8)
# Training settings
training_args = TrainingArguments(
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
num_train_epochs=3,
learning_rate=2e-5,
output_dir="./results",
evaluation_strategy="epoch"
)
# Run the model
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset
)
# Start training
trainer.train()
### Expected behavior
run properly huggingface/transformers/distilbert-base-uncased | open | 2025-03-14T14:10:36Z | 2025-03-14T14:10:36Z | https://github.com/huggingface/transformers/issues/36723 | [
"bug"
] | ARS1979ie | 0 |
JaidedAI/EasyOCR | machine-learning | 471 | How to use Transformer model for recognition? | Hi ,
I wanted to use EasyOCR for my use case. Can you help me use the Transformer model for recognition? I saw a line of code in the description, i.e. reader = easyocr.Reader(['en'], detection='DB', recognition = 'Transformer') | closed | 2021-06-24T15:44:07Z | 2021-06-24T20:45:35Z | https://github.com/JaidedAI/EasyOCR/issues/471 | [] | karndeepsingh | 1 |
WeblateOrg/weblate | django | 13,856 | Bugs and improvements related to the “Multiple failing checks” check | ### Describe the issue
1. False positive in the “Multiple failing checks” check when only the translation in one target language has more than one failing check
Example 1:
- Source string: https://hosted.weblate.org/translate/weblate/documentation/en/?checksum=cf04270623cfca2e
- Target string: https://hosted.weblate.org/translate/weblate/documentation/ta/?checksum=cf04270623cfca2e
2. The description of “Multiple failing checks” in the “Things to check” panel wrongly shows the language name multiple times, the number of repetitions corresponding to the number of failing checks for that language.
Example 2:
- Source string: https://hosted.weblate.org/translate/weblate/documentation/en/?checksum=f252b7caf6adda19
- Target string:
- https://hosted.weblate.org/translate/weblate/documentation/ar/?checksum=f252b7caf6adda19
- https://hosted.weblate.org/translate/weblate/documentation/ta/?checksum=cf04270623cfca2e
### I already tried
- [x] I've read and searched [the documentation](https://docs.weblate.org/).
- [x] I've searched for similar filed issues in this repository.
### Steps to reproduce the behavior
1. Situation 1:
1. Find a source string whose translation in only one target language has more than one failing checks.
2. Open it in the full editor.
2. Situation 2:
1. Find a source string whose multiple target languages all have more than one failing checks.
2. Open it in the full editor.
### Expected behavior
1. The “Multiple failing checks” check is not failing.
2. The description of “Multiple failing checks” in the “Things to check” panel correctly listing all failing checks and all target languages of each failing check:
1. Correctly listing all failing checks without omission in a _bullet list_.
2. Correctly listing all target languages of each failing check without repetition.
### Screenshots
1. Example 1:
- Source string: https://hosted.weblate.org/translate/weblate/documentation/en/?checksum=cf04270623cfca2e

- Target string: https://hosted.weblate.org/translate/weblate/documentation/ta/?checksum=cf04270623cfca2e

2. Example 2:
- Source string: https://hosted.weblate.org/translate/weblate/documentation/en/?checksum=f252b7caf6adda19

- Target string:
- https://hosted.weblate.org/translate/weblate/documentation/ar/?checksum=f252b7caf6adda19

- https://hosted.weblate.org/translate/weblate/documentation/ta/?checksum=cf04270623cfca2e

### Exception traceback
```pytb
```
### How do you run Weblate?
weblate.org service
### Weblate versions
Weblate 5.10-dev
### Weblate deploy checks
```shell
```
### Additional context
BTW, is it possible to localize the trailing semicolon of each failing check and the separator (`, `) between each target language in the description of “Multiple failing checks” in the “Things to check” panel? | closed | 2025-02-14T00:26:15Z | 2025-02-24T15:41:18Z | https://github.com/WeblateOrg/weblate/issues/13856 | [
"bug"
] | Geeyun-JY3 | 5 |
nltk/nltk | nlp | 2,837 | word_tokenize splits Pokémon into "pok" and 'émon" | This is without specifying a language, so I assume English by default.
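For reference, a minimal snippet to reproduce it (assuming the default English tokenizer):
```python
from nltk.tokenize import word_tokenize

print(word_tokenize("Pokémon"))  # reportedly comes back split as ['pok', 'émon']
```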
Is there a workaround to this? | closed | 2021-10-02T08:18:05Z | 2021-10-13T14:00:42Z | https://github.com/nltk/nltk/issues/2837 | [] | SeeTwoDev | 3 |
litestar-org/litestar | api | 3,773 | Docs: pypi.org description leading to a 404 error | ### Summary
Hello
New to using the project, I found out that the pypi home for litlestar currently links to a 404
URL: https://pypi.org/project/litestar/
The "Example Applications" section's first item points to [litestar-pg-redis-docker](https://github.com/litestar-org/litestar-pg-redis-docker), which is currently showing a 404 error | closed | 2024-10-01T15:29:30Z | 2025-03-20T15:54:57Z | https://github.com/litestar-org/litestar/issues/3773 | [
"Documentation :books:"
] | romuald | 1 |
mars-project/mars | pandas | 3,020 | [BUG] Mars import took too long | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
The first `import mars` takes about 4~5 seconds, which is pretty time-consuming for users

**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version: 3.7.9
2. The version of Mars you use: master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
`import mars` should take less than 1 second, just like pandas:

**Additional context**
Add any other context about the problem here.
| closed | 2022-05-11T08:02:46Z | 2022-05-13T07:53:37Z | https://github.com/mars-project/mars/issues/3020 | [] | chaokunyang | 2 |
akfamily/akshare | data-science | 5,859 | About the pagination problem: using total/200 to compute the number of pages is wrong |
**详细问题描述**
**关于分页问题; total/200 获取分页是错误的;
从接口中获取 "total": 3027, 若用 3027/200 = 17 获取分页总值是错误的;实际要大很多;实际第26 27 分页仍然哟数据;可能和内部排序和不同时间的排序问题;所以应该查询到 没有具体数据为止,才是获取到所有数据;当然是有重复的;**
| closed | 2025-03-11T06:19:50Z | 2025-03-11T09:09:35Z | https://github.com/akfamily/akshare/issues/5859 | [
"bug"
] | sunnnnnner | 3 |
ultralytics/ultralytics | deep-learning | 19,148 | Yolov8 OBB output for documentation purposes | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello guys,
I'm trying to find the output parameters for the OBB model. It should be something similar to this example from a paper:

As far as I know, YOLOv8 OBB outputs an angle [0...90°], the class, and a bounding box.
How do I properly map these outputs, with the corresponding dimensions (e.g. 512x256xN for the output), back to the image?
Thanks
### Additional
_No response_ | open | 2025-02-09T19:11:31Z | 2025-02-21T05:19:09Z | https://github.com/ultralytics/ultralytics/issues/19148 | [
"documentation",
"question",
"OBB"
] | Petros626 | 10 |
sepandhaghighi/samila | matplotlib | 188 | Pillow dependency | #### Description
Add Pillow to `meta.yaml`
| closed | 2023-04-06T00:15:47Z | 2024-06-25T17:50:39Z | https://github.com/sepandhaghighi/samila/issues/188 | [
"dependencies"
] | sepandhaghighi | 0 |
koaning/scikit-lego | scikit-learn | 561 | [FEATURE] GroupTimeSeriesSplit for unequal groups' lengths | I am dealing with a time-series dataset in which the number of data points in each group is not equal. Please check the attachment. I want to have TimeSeriesSplit for each group. The problem with the current implementation is that the groups' lengths must be equal.
I appreciate any suggestion or adding such a feature.

| open | 2023-05-24T15:36:01Z | 2023-05-24T15:36:18Z | https://github.com/koaning/scikit-lego/issues/561 | [
"enhancement"
] | msat59 | 0 |
StratoDem/sd-material-ui | dash | 407 | Update raised button | https://material-ui.com/components/buttons/ | closed | 2020-08-11T14:32:22Z | 2020-08-17T16:44:28Z | https://github.com/StratoDem/sd-material-ui/issues/407 | [] | coralvanda | 0 |
dask/dask | numpy | 11,356 | Add option to automatically compute chunk sizes in dask | When testing out dask using array API compat in scikit-learn and scipy, one annoyance is the unknown chunk sizes that result from doing boolean indexing and calls to unique.
These cause problems later for scikit-learn and scipy, e.g. when you do boolean indexing with a lazy array or you get the length of a unique call.
(If one were to write code for dask only, we could call ``compute_chunk_sizes`` afterwards to get the shape, but this doesn't work for someone that wants to write array library agnostic code)
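For context, a small sketch of that dask-only workaround (illustrative; the array and mask are made up):
```python
# illustrative sketch of the current dask-only workaround
import dask.array as da

x = da.ones(10_000, chunks=1_000)
y = x[x > 0]              # boolean indexing -> chunk sizes become unknown (nan)
y.compute_chunk_sizes()   # dask-specific: materializes the chunk sizes in place
print(y.shape)            # now a concrete shape, but this call isn't array-API-agnostic
```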
To work around this problem, we can add a mode to dask.array, e.g. ``compute_chunks_sizes_on_demand``, where if it is on, and we are in a method that needs chunks to be known, we will call ``compute_chunk_sizes`` for the user, instead of raising the error to tell the user to call ``compute_chunk_sizes``.
Then, in array-api-compat, I think we can set the flag to auto compute chunk sizes when a library requests the array API namespace from array-api-compat, and unset it when the array API namespace gets garbage collected.
This was briefly mentioned in the Python Array API standard (and I believe the PR implementing the Array API standard for dask does this).
https://github.com/dask/community/issues/109
cc @lucascolley | closed | 2024-08-29T16:01:38Z | 2024-09-05T20:56:43Z | https://github.com/dask/dask/issues/11356 | [
"needs triage"
] | lithomas1 | 4 |
ClimbsRocks/auto_ml | scikit-learn | 31 | impute missing values as one of the steps for gridsearchcv to tune | | closed | 2016-08-17T03:18:31Z | 2017-03-12T00:40:19Z | https://github.com/ClimbsRocks/auto_ml/issues/31 | [] | ClimbsRocks | 2 |
microsoft/nni | pytorch | 5,783 | WARNING: GPU found but will not be used. Please set `experiment.config.trial_gpu_number` to the number of GPUs you want to use for each trial. | Hello! While using NAS I ran into the following problem: WARNING: GPU found but will not be used. Please set `experiment.config.trial_gpu_number` to the number of GPUs you want to use for each trial.
| open | 2024-05-16T14:40:12Z | 2024-05-29T02:27:43Z | https://github.com/microsoft/nni/issues/5783 | [] | xutongpure | 1 |
Gozargah/Marzban | api | 1,245 | Maximum number of concurrent users allowed on a node server | Hello and best regards,
Please add the ability to control the maximum number of concurrent users allowed on a node server. For example, I have 5 nodes and I want no more than 200 users active on each node; once a node reaches its allowed capacity, I should no longer be able to create new users or open new connections on that node. This makes it possible to manage user connections per node and keep the nodes from getting overloaded. | closed | 2024-08-15T05:38:21Z | 2024-08-17T14:37:45Z | https://github.com/Gozargah/Marzban/issues/1245 | [
"Feature"
] | Pezhman5252 | 6 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,252 | sipbuild.pyproject.PyProjectOptionException during pip install | While running
pip install -r requirements.txt
got
```
╰─> [25 lines of output]
Traceback (most recent call last):
File "/Users/marco/.pyenv/versions/3.11.5/envs/rtvc/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/Users/marco/.pyenv/versions/3.11.5/envs/rtvc/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/marco/.pyenv/versions/3.11.5/envs/rtvc/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 152, in prepare_metadata_for_build_wheel
whl_basename = backend.build_wheel(metadata_directory, config_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/sipbuild/api.py", line 46, in build_wheel
project = AbstractProject.bootstrap('wheel',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/sipbuild/abstract_project.py", line 87, in bootstrap
project.setup(pyproject, tool, tool_description)
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/sipbuild/project.py", line 586, in setup
self.apply_user_defaults(tool)
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-install-pm6odyos/pyqt5_381e2a1d4cbc42e5906a2b57d17ae409/project.py", line 63, in apply_user_defaults
super().apply_user_defaults(tool)
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/pyqtbuild/project.py", line 70, in apply_user_defaults
super().apply_user_defaults(tool)
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/sipbuild/project.py", line 237, in apply_user_defaults
self.builder.apply_user_defaults(tool)
File "/private/var/folders/py/9j3vxq9x44q9k9bc1_tb13bc0000gp/T/pip-build-env-avz9esk8/overlay/lib/python3.11/site-packages/pyqtbuild/builder.py", line 69, in apply_user_defaults
raise PyProjectOptionException('qmake',
sipbuild.pyproject.PyProjectOptionException
[end of output]
```
python 3.11.5 | open | 2023-09-22T21:03:23Z | 2024-06-14T08:18:41Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1252 | [] | marcobazzani | 1 |
roboflow/supervision | pytorch | 1,209 | ValueError when loading COCO dataset with multiple segmentation masks for one class | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
My current COCO dataset includes annotations with more than one segmentation mask for the same class. A rough analogy is as follows: one eye of a cat is segmented as a whole, but when exported from FiftyOne two polygons are produced (turned into segmentation masks):

As a result, when the COCO dataset is loaded into my program using supervision, the program crashes with the following error:
```
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (7,) + inhomogeneous part.
```
After some research, I discovered Ultralytics' [JSON2YOLO](https://github.com/ultralytics/JSON2YOLO) repository on GitHub, and adapted the library's `merge_multi_segment()` (seen [here](https://github.com/ultralytics/JSON2YOLO/blob/e6dd7784b77db4f1d8c3aa95ed93de2890b3ac23/general_json2yolo.py#L330)) function in supervision's `coco.py` file which then allows the COCO dataset to be loaded.
### Environment
- Supervision: 0.20.0
- OS: OpenSUSE Tumbleweed 20240423
- Python: 3.12.2
### Minimal Reproducible Example
The following code is used to load the COCO dataset with the annotations_path being the path to a .json file containing the paths and annotations for all images in the dataset:
```Python
ds = sv.DetectionDataset.from_coco(
images_directory_path=images_directory_path,
annotations_path=annotations_path,
force_masks=True,
)
```
The following is an example of a class/category containing multiple segmentation masks:
```JSON
{
...
{
"id": 41,
"image_id": 6,
"category_id": 0,
"bbox": [
694.801517364719,
278.90263698033465,
161.52883212628387,
282.881946369456
],
"segmentation": [
[
694,
560.5,
764.5,
407,
759.5,
400,
765.5,
397,
765.5,
393,
760.5,
391,
759.5,
384,
754.5,
381,
763,
376.5,
767.5,
370,
764.5,
363,
768.5,
354,
735,
278.5,
741.5,
284,
776,
356.5,
782,
359.5,
794.5,
348,
806.5,
321,
809.5,
321,
799.5,
346,
800,
348.5,
806.5,
349,
798.5,
364,
800.5,
387,
808.5,
400,
818.5,
408,
811,
413.5,
802,
405.5,
802.5,
416,
855.5,
530,
851.5,
529,
787.5,
395,
779,
394.5,
714.5,
531,
715.5,
522,
729,
490.5,
694,
560.5
],
[
713,
534.5,
713,
531.5,
713,
534.5
]
],
"area": 45693.59042666829,
"iscrowd": 0,
"ignore": 0
}
}
```
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | open | 2024-05-20T09:47:11Z | 2024-05-24T13:39:32Z | https://github.com/roboflow/supervision/issues/1209 | [
"bug"
] | DancinParrot | 13 |
aleju/imgaug | machine-learning | 30 | Import imgaug error | When I was running "from imgaug import augmenters as iaa"
It has an error:
Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.
But the libmkl_avx2.so and libmkl_def.so are in the folder ~/anaconda2/lib/. | closed | 2017-04-10T16:10:49Z | 2017-04-13T04:53:24Z | https://github.com/aleju/imgaug/issues/30 | [] | tianzq | 2 |
pandas-dev/pandas | data-science | 60,396 | CI/BUG: `comment_commands.yml` failing due to invalid `trim()` | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
comment anything on an issue in this sub, workflow will return as FAIL
```
### Issue Description
We have recently merged a past MR (#60359) regarding using `trim()` on `comment_commands.yml`. Turns out there are no "Trim" or similar related command on github workflows file...
> The workflow is not valid. .github/workflows/comment-commands.yml (Line: 14, Col: 9): Unrecognized function: 'trim'. Located at position 39 within expression: (!github.event.issue.pull_request) && trim(github.event.comment.body) == 'take'
Failed workflow example: https://github.com/pandas-dev/pandas/actions/runs/11973824956/workflow
Discussion regarding split or trim command on github workflow: https://stackoverflow.com/questions/64049306/github-actions-how-to-trim-a-string-in-a-condition
### Expected Behavior
comment_commands workflow should work properly
### Installed Versions
NA | closed | 2024-11-22T13:56:14Z | 2024-11-22T18:56:42Z | https://github.com/pandas-dev/pandas/issues/60396 | [
"Bug",
"Needs Triage"
] | KevsterAmp | 1 |
streamlit/streamlit | machine-learning | 10,514 | toml.decoder.TomlDecodeError: Key name found without value. Reached end of line. | ### Summary
Hello, I created a Google OpenID Connect client and tried to implement it in Streamlit, but the following error occurred:
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/__main__.py", line 20, in <module>
main(prog_name="streamlit")
File "/usr/lib/python3/dist-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/cli.py", line 240, in main_run
_main_run(target, args, flag_options=kwargs)
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/cli.py", line 276, in _main_run
bootstrap.run(file, is_hello, args, flag_options)
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/bootstrap.py", line 349, in run
asyncio.run(main())
File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/bootstrap.py", line 341, in main
await run_server()
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/bootstrap.py", line 319, in run_server
await server.start()
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/server/server.py", line 295, in start
app = self._create_app()
^^^^^^^^^^^^^^^^^^
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/server/server.py", line 451, in _create_app
cookie_secret=get_cookie_secret(),
^^^^^^^^^^^^^^^^^^^
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/server/server_util.py", line 82, in get_cookie_secret
if secrets_singleton.load_if_toml_exists():
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/runtime/secrets.py", line 222, in load_if_toml_exists
self._parse()
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/runtime/secrets.py", line 378, in _parse
path_secrets, found_secrets_file_in_path = self._parse_file_path(path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/runtime/secrets.py", line 336, in _parse_file_path
return self._parse_toml_file(path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/runtime/secrets.py", line 276, in _parse_toml_file
secrets.update(toml.loads(secrets_file_str))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ugroon/.local/lib/python3.12/site-packages/toml/decoder.py", line 213, in loads
raise TomlDecodeError("Key name found without value."
toml.decoder.TomlDecodeError: Key name found without value. Reached end of line. (line 9 column 67 char 346)
```
Config file that I used (.streamlit/secrets.toml)
```
[auth]
redirect_uri = "http://localhost:8501/oauth2callback"
cookie_secret = "hebelehubelecartcurt"
[auth.google]
client_id = "7760*********-k**************************q.apps.googleusercontent.com"
client_secret = "G*****-*r*__-***********************A"
server_metadata_url = (
"https://accounts.google.com/.well-known/openid-configuration"
)
```
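(A guess at the cause, in case it helps: TOML has no parenthesized line continuation, so the `server_metadata_url = (` line is parsed as a key with no value, which would explain the decode error. Writing the value on a single line, e.g. `server_metadata_url = "https://accounts.google.com/.well-known/openid-configuration"`, may avoid it.)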
Code that I used
```Python
import streamlit as st
st.button("Log in with Google", on_click=st.login, args=["google"])
``` | closed | 2025-02-19T11:56:06Z | 2025-02-25T18:24:57Z | https://github.com/streamlit/streamlit/issues/10514 | [] | ug0x01 | 3 |
fastapi/sqlmodel | sqlalchemy | 6 | Flexibly Create Nested Database Entries from Incoming Pydantic/SQLModels | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
# Pseudo-code based on the examples in the docs
class Team(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
name: str
headquarters: str
heroes: List["Hero"] = Relationship(back_populates="team")
class Hero(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
name: str
secret_name: str
age: Optional[int] = None
team_id: Optional[int] = Field(default=None, foreign_key="team.id")
team: Optional[Team] = Relationship(back_populates="heroes")
payload = {
    "name": "Team Name",
    "headquarters": "Whereever",
    "heroes": [
        {
            "name": "Name 1",
            # Other required fields... 👇
        }
    ]
}
with Session(engine) as session:
Team.create_all_nested(session, payload) # or something?
```
### Description
I would like to do what is described in [FastAPI issue #2194](https://github.com/tiangolo/fastapi/issues/2194)
> How to make nested sqlalchemy models from nested pydantic models (or python dicts) in a generic way and write them to the database in "one shot".
In the example above, I'd like to pass in the payload to a method and the following to occur.
* Create new Team entry
* Create Hero entry and/or Relate the existing Hero to the Team
Similarly, I'd like the same to happen on update. Effectively making writing to the SQL database akin to writing to MongoDB
I don't believe this is supported or haven't gotten it to work, but my main questions are.
1. Is this supported?
2. If no, is this a use-case you've thought of?
3. Are you interested in a PR to support this either as a utility method or some sort of decorator?
Loving working with this so far, thanks for all your hard work!
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.3
### Python Version
3.9.6
### Additional Context
I have accomplished this with SQLAlchemy in the past by using an `auto_init` decarator.
```python
from functools import wraps
from typing import Union
from sqlalchemy.orm import MANYTOMANY, MANYTOONE, ONETOMANY
def handle_one_to_many_list(relation_cls, all_elements: list[dict]):
elems_to_create = []
updated_elems = []
for elem in all_elements:
elem_id = elem.get("id", None)
existing_elem = relation_cls.get_ref(match_value=elem_id)
if existing_elem is None:
elems_to_create.append(elem)
else:
for key, value in elem.items():
setattr(existing_elem, key, value)
updated_elems.append(existing_elem)
new_elems = []
for elem in elems_to_create:
new_elems = [relation_cls(**elem) for elem in all_elements]
return new_elems
def auto_init(exclude: Union[set, list] = None): # sourcery no-metrics
"""Wraps the `__init__` method of a class to automatically set the common
attributes.
Args:
exclude (Union[set, list], optional): [description]. Defaults to None.
"""
exclude = exclude or set()
exclude.add("id")
def decorator(init):
@wraps(init)
def wrapper(self, *args, **kwargs): # sourcery no-metrics
"""
Custom initializer that allows nested children initialization.
Only keys that are present as instance's class attributes are allowed.
These could be, for example, any mapped columns or relationships.
Code inspired from GitHub.
Ref: https://github.com/tiangolo/fastapi/issues/2194
"""
cls = self.__class__
model_columns = self.__mapper__.columns
relationships = self.__mapper__.relationships
session = kwargs.get("session", None)
for key, val in kwargs.items():
if key in exclude:
continue
if not hasattr(cls, key):
continue
# raise TypeError(f"Invalid keyword argument: {key}")
if key in model_columns:
setattr(self, key, val)
continue
if key in relationships:
relation_dir = relationships[key].direction.name
relation_cls = relationships[key].mapper.entity
use_list = relationships[key].uselist
if relation_dir == ONETOMANY.name and use_list:
instances = handle_one_to_many_list(relation_cls, val)
setattr(self, key, instances)
if relation_dir == ONETOMANY.name and not use_list:
instance = relation_cls(**val)
setattr(self, key, instance)
elif relation_dir == MANYTOONE.name and not use_list:
if isinstance(val, dict):
val = val.get("id")
if val is None:
raise ValueError(f"Expected 'id' to be provided for {key}")
if isinstance(val, (str, int)):
instance = relation_cls.get_ref(match_value=val, session=session)
setattr(self, key, instance)
elif relation_dir == MANYTOMANY.name:
if not isinstance(val, list):
raise ValueError(f"Expected many to many input to be of type list for {key}")
if len(val) > 0 and isinstance(val[0], dict):
val = [elem.get("id") for elem in val]
instances = [relation_cls.get_ref(elem, session=session) for elem in val]
setattr(self, key, instances)
return init(self, *args, **kwargs)
return wrapper
return decorator
```
## Usage
```python
class AdminModel(SqlAlchemyBase, BaseMixins):
name = Column(String, index=True)
email = Column(String, unique=True, index=True)
password = Column(String)
is_superuser = Column(Boolean(), default=False)
@auto_init(exclude={'is_superuser'})
def __init__(self, **_):
        self.is_superuser = False
@classmethod
def get_ref(cls, match_value: str, match_attr: str = "id"):
with SessionLocal() as session:
eff_ref = getattr(cls, match_attr)
return session.query(cls).filter(eff_ref == match_value).one_or_none()
```decorator | open | 2021-08-24T23:12:28Z | 2025-03-20T11:31:19Z | https://github.com/fastapi/sqlmodel/issues/6 | [
"question"
] | hay-kot | 24 |
Guovin/iptv-api | api | 875 | [Bug]: The generated m3u file is empty | ### Don't skip these steps | 不要跳过这些步骤
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field | 我明白,如果我“故意”删除或跳过任何强制性的\*字段,我将被**限制**
- [x] I am sure that this is a running error exception problem and will not submit any problems unrelated to this project | 我确定这是运行报错异常问题,不会提交任何与本项目无关的问题
- [x] I have searched and double-checked that there are no similar issues that have been created | 我已经通过搜索并仔细检查过没有存在已经创建的类似问题
### Occurrence environment | 触发环境
- [x] Workflow | 工作流
- [ ] GUI | 软件
- [ ] Docker
- [ ] Command line | 命令行
### Bug description | 具体描述
The generated m3u file is empty; it contains no channel links or program information.
### Error log | 报错日志
📺央视频道 (CCTV channels): ❌ Write channel to file failed: '<' not supported between instances of 'int' and 'str' | closed | 2025-01-25T15:56:35Z | 2025-01-26T01:17:56Z | https://github.com/Guovin/iptv-api/issues/875 | [
"bug"
] | Wisof-Young | 1 |
sktime/pytorch-forecasting | pandas | 1,519 | Issue with TFT.forward() method | Hi guys,
My TFT hasn't been working and I think I've found the reason why. Apologies if I've misunderstood anything, please feel free to tell me a fix or explain what I'm doing wrong.
In this line of the .forward() method
```
embeddings_varying_decoder, decoder_sparse_weights = self.decoder_variable_selection(
embeddings_varying_decoder,
static_context_variable_selection[:, max_encoder_length:],
)
```
which looks like
```
def forward(self, x: Dict[str, torch.Tensor], context: torch.Tensor = None):
if self.num_inputs > 1:
# transform single variables
var_outputs = []
weight_inputs = []
for name in self.input_sizes.keys():
# select embedding belonging to a single input
variable_embedding = x[name]
if name in self.prescalers:
variable_embedding = self.prescalers[name](variable_embedding)
weight_inputs.append(variable_embedding)
var_outputs.append(self.single_variable_grns[name](variable_embedding))
var_outputs = torch.stack(var_outputs, dim=-1)
# calculate variable weights
flat_embedding = torch.cat(weight_inputs, dim=-1)
sparse_weights = self.flattened_grn(flat_embedding, context)
sparse_weights = self.softmax(sparse_weights).unsqueeze(-2)
outputs = var_outputs * sparse_weights
outputs = outputs.sum(dim=-1)
else: # for one input, do not perform variable selection but just encoding
name = next(iter(self.single_variable_grns.keys()))
variable_embedding = x[name]
if name in self.prescalers:
variable_embedding = self.prescalers[name](variable_embedding)
outputs = self.single_variable_grns[name](variable_embedding) # fast forward if only one variable
if outputs.ndim == 3: # -> batch size, time, hidden size, n_variables
sparse_weights = torch.ones(outputs.size(0), outputs.size(1), 1, 1, device=outputs.device) #
else: # ndim == 2 -> batch size, hidden size, n_variables
sparse_weights = torch.ones(outputs.size(0), 1, 1, device=outputs.device)
return outputs, sparse_weights
```
the line
```
name = next(iter(self.single_variable_grns.keys()))
```
raises a StopIteration error when `self.num_inputs` is 0. The exception gets caught 26 stack frames down :laughing: in `_TrainingEpochLoop.run()` (at the `self.advance(data_fetcher)` line), which terminates the training prematurely (I think?).
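For what it's worth, the failure mode is easy to reproduce in isolation (a self-contained sketch of the mechanism, not library code):
```python
# Self-contained sketch of the mechanism (not library code): with num_inputs == 0 the dict of
# single-variable GRNs is empty, so next() on its key iterator raises StopIteration straight away.
grns = {}                        # stands in for self.single_variable_grns
name = next(iter(grns.keys()))   # raises StopIteration, which Lightning's loop later swallows
```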
A fix would be hugely appreciated. Thanks a lot!
| open | 2024-02-20T17:48:13Z | 2024-02-20T17:55:56Z | https://github.com/sktime/pytorch-forecasting/issues/1519 | [] | Bruno-TT | 0 |
mwaskom/seaborn | data-visualization | 3,263 | Color not changing | Hi! I'm trying to change colors in seaborn but it's having no effect?
<img width="706" alt="Captura de pantalla 2023-02-15 a la(s) 21 53 02" src="https://user-images.githubusercontent.com/119420090/219230518-91d40820-51b1-4d11-b724-408e0b5525e1.png">
| closed | 2023-02-16T00:58:04Z | 2023-02-16T02:23:52Z | https://github.com/mwaskom/seaborn/issues/3263 | [] | pablotucu | 1 |
nerfstudio-project/nerfstudio | computer-vision | 2,821 | Cameras[0].attribute has wrong shape. | Looks like I found a bug in the Cameras class. If I have a variable `cameras` shaped `[128]`, then `cameras[0].fx.shape` is `[1, 1]` instead of `[1]`: an extra dimension appears. This causes an error when simply running `ns-train nerfacto --data <my-data> colmap`. The error message is:
```bash
File "/home/why/miniconda3/envs/ns/lib/python3.10/site-packages/nerfstudio/viewer/viewer.py", line 440, in init_scene
R = vtf.SO3.from_matrix(c2w[:3, :3])
File "/home/why/miniconda3/envs/ns/lib/python3.10/site-packages/viser/transforms/_so3.py", line 170, in from_matrix
assert matrix.shape == (3, 3)
AssertionError
```
Because `matrix.shape` is `[1, 3, 3]` instead of `[3, 3]`
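A quick way to see the discrepancy (a sketch; `cameras` is assumed to be the parsed dataset's Cameras object, and the link to `c2w` is my assumption):
```python
# Sketch for inspecting the reported shapes (assumes `cameras` is the Cameras object of shape [128]).
cam0 = cameras[0]
print(cam0.fx.shape)   # observed: torch.Size([1, 1]); expected: torch.Size([1])
# the extra leading dim propagates to c2w, so c2w[:3, :3] ends up [1, 3, 3] and trips viser's assert
```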
**To Reproduce**
Simply run `ns-train nerfacto --data <my-data> colmap`
| closed | 2024-01-25T11:15:18Z | 2024-01-25T14:10:35Z | https://github.com/nerfstudio-project/nerfstudio/issues/2821 | [] | onpix | 0 |
jupyter/docker-stacks | jupyter | 2,128 | Include netbase Ubuntu package in images | ### What docker image(s) is this feature applicable to?
base-notebook, docker-stacks-foundation, minimal-notebook
### What change(s) are you proposing?
Add the [netbase Ubuntu package](https://ubuntu.pkgs.org/22.04/ubuntu-main-amd64/netbase_6.3_all.deb.html) to images.
### How does this affect the user?
> This package provides the necessary infrastructure for basic TCP/IP based networking.
> In particular, it supplies common name-to-number mappings in /etc/services, /etc/rpc, /etc/protocols and /etc/ethertypes
[/etc/protocols](https://man7.org/linux/man-pages/man5/protocols.5.html) is part of the POSIX spec and required by the C functions getprotobyname and getprotobynumber. Some reading here: https://unix.stackexchange.com/questions/680494/what-is-the-significance-of-etc-protocols-in-linux
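A quick way to see this from Python (my own illustration, not from the package description):
```python
# Minimal illustration (my own): Python's socket module goes through getprotobyname(3),
# which reads /etc/protocols; without netbase installed the lookup fails.
import socket
print(socket.getprotobyname("tcp"))   # 6 when /etc/protocols exists, OSError otherwise
```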
### Anything else?
If this proposal is accepted, it's as easy as adding `netbase` to an apt-get install somewhere in the chain of images. | closed | 2024-07-30T17:28:41Z | 2024-08-04T19:11:51Z | https://github.com/jupyter/docker-stacks/issues/2128 | [
"type:Enhancement"
] | AlexHill | 2 |
Significant-Gravitas/AutoGPT | python | 8,955 | Marketplace - Reduce margin between the search bar and the chips to 20px | ### Describe your issue.
Reduce margin between the search bar and the chips to 20px
The current margin is too big
<img width="950" alt="Screenshot 2024-12-13 at 16 49 15" src="https://github.com/user-attachments/assets/5c5d4d9d-d069-4f1e-ad4b-e9ef58d43fca" />
| open | 2024-12-13T08:50:15Z | 2024-12-13T10:18:16Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8955 | [
"good first issue",
"UI",
"platform/frontend"
] | ograce1421 | 0 |
ultralytics/ultralytics | computer-vision | 19,808 | Segmentation classification | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
a. May I increase the cls weight in the YOLOv8 instance segmentation task? b. The right and left pages of the book do not perform well; how can I optimize this?
### Additional
_No response_ | open | 2025-03-21T03:19:00Z | 2025-03-22T03:46:44Z | https://github.com/ultralytics/ultralytics/issues/19808 | [
"question",
"segment"
] | Keven-Don | 2 |
allenai/allennlp | pytorch | 5,045 | Coreference resolution model performance | I have trained a coreference resolution model with the spanbert-base-cased model, but only obtained an F1 score of 69%. The following is my training code.
```
transformer_model = "SpanBERT/spanbert-base-cased"
max_length = 512
feature_size = 20
max_span_width = 30
transformer_dim = 768 # uniquely determined by transformer_model
span_embedding_dim = 3*transformer_dim + feature_size
span_pair_embedding_dim = 3*span_embedding_dim + feature_size
token_indexer = PretrainedTransformerMismatchedIndexer(model_name=transformer_model, max_length=max_length)
reader = ConllCorefReader(max_span_width, {'bert_tokens': token_indexer}, max_sentences=110)
train_dataset = reader.read(train_filepath)
validation_dataset = reader.read(valid_filepath)
vocab = Vocabulary()
train_dataset.index_with(vocab)
validation_dataset.index_with(vocab)
train_sampler = BucketBatchSampler(train_dataset, batch_size=1, sorting_keys=["text"], padding_noise=0.0)
train_loader = DataLoader(train_dataset, batch_size=1, batch_sampler=train_sampler, collate_fn=allennlp_collate)
dev_sampler = BucketBatchSampler(validation_dataset, batch_size=1, sorting_keys=["text"], padding_noise=0.0)
dev_loader = DataLoader(validation_dataset, batch_size=1, batch_sampler=dev_sampler, collate_fn=allennlp_collate)
embedding = PretrainedTransformerMismatchedEmbedder(transformer_model, max_length=max_length)
embedder = BasicTextFieldEmbedder(token_embedders={'bert_tokens': embedding})
encoder = PassThroughEncoder(input_dim=transformer_dim)
mention_feedforward = FeedForward(span_embedding_dim, 2, [1500, 1500], torch.nn.ReLU(), dropout=0.3)
antecedent_feedforward = FeedForward(span_pair_embedding_dim, 2, [1500, 1500], torch.nn.ReLU(), dropout=0.3)
normal_initial = XavierNormalInitializer()
orthogonal_initial = OrthogonalInitializer()
initial_para = [[".*_span_updating_gated_sum.*weight", normal_initial],
[".*linear_layers.*weight", normal_initial],
[".*scorer.*weight", normal_initial],
["_distance_embedding.weight", normal_initial],
["_span_width_embedding.weight", normal_initial],
["_context_layer._module.weight_ih.*", normal_initial],
["_context_layer._module.weight_hh.*", orthogonal_initial]
]
initializer = InitializerApplicator(regexes=initial_para)
corefer = CoreferenceResolver(vocab, text_field_embedder=embedder, context_layer=encoder,
mention_feedforward=mention_feedforward, antecedent_feedforward=antecedent_feedforward,
feature_size=feature_size, max_span_width=max_span_width, spans_per_word=0.4,
max_antecedents=50, coarse_to_fine=True, inference_order=2,
lexical_dropout=0.5, initializer=initializer).to("cpu")
def build_trainer(
model: Model,
serialization_dir: str,
train_loader: DataLoader,
dev_loader: DataLoader
) -> Trainer:
parameters = [
(n, p)
for n, p in model.named_parameters()
]
grouppara = [
[[".*transformer.*"], {"lr": 1e-5}]
]
optimizer = HuggingfaceAdamWOptimizer(parameters, grouppara, lr=3e-4)
learning_rate_scheduler = SlantedTriangular(optimizer, num_epochs=40, cut_frac=0.06)
checkpoint = Checkpointer(serialization_dir=serialization_dir)
trainer = GradientDescentTrainer(
model=model,
serialization_dir=serialization_dir,
checkpointer=checkpoint,
data_loader=train_loader,
validation_data_loader=dev_loader,
validation_metric="+coref_f1",
patience=10,
num_epochs=40,
cuda_device=-1,
learning_rate_scheduler=learning_rate_scheduler,
optimizer=optimizer
)
return trainer
serialization_dir = os.path.join(dirpath, "spanbert_check")
trainer = build_trainer(
corefer,
serialization_dir,
train_loader,
dev_loader
)
print("Starting training")
trainer.train()
print("Finished training")
```
| closed | 2021-03-08T20:08:10Z | 2021-03-31T17:21:27Z | https://github.com/allenai/allennlp/issues/5045 | [
"question"
] | yqw-vicki | 18 |
babysor/MockingBird | deep-learning | 211 | Error during training: RuntimeError: Error(s) in loading state_dict for Tacotron: | Arguments:
run_id: mandarin
syn_dir: k:/mockingbird/datame/SV2TTS/synthesizer
models_dir: synthesizer/saved_models/
save_every: 1000
backup_every: 25000
log_every: 200
force_restart: False
hparams:
Checkpoint path: synthesizer\saved_models\mandarin\mandarin.pt
Loading training data from: k:\mockingbird\datame\SV2TTS\synthesizer\train.txt
Using model: Tacotron
Using device: cpu
Initialising Tacotron Model...
Trainable Parameters: 32.866M
Loading weights at synthesizer\saved_models\mandarin\mandarin.pt
Traceback (most recent call last):
File "synthesizer_train.py", line 37, in <module>
train(**vars(args))
File "K:\MockingBird\synthesizer\train.py", line 114, in train
model.load(weights_fpath, optimizer)
File "K:\MockingBird\synthesizer\models\tacotron.py", line 536, in load
self.load_state_dict(checkpoint["model_state"], strict=False)
File "f:\anaconda3\envs\mockingbird\lib\site-packages\torch\nn\modules\module.py", line 1482, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Tacotron:
size mismatch for encoder_proj.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([128, 1024]).
size mismatch for decoder.attn_rnn.weight_ih: copying a param with shape torch.Size([384, 768]) from checkpoint, the shape in current model is torch.Size([384, 1280]).
size mismatch for decoder.rnn_input.weight: copying a param with shape torch.Size([1024, 640]) from checkpoint, the shape in current model is torch.Size([1024, 1152]).
size mismatch for decoder.stop_proj.weight: copying a param with shape torch.Size([1, 1536]) from checkpoint, the shape in current model is torch.Size([1, 2048]).
I have already changed that line of characters in symbols to the old version, but I still get this error. I am using my own data here, arranged to mimic the aishell3 structure, and I have already run the pre.py preprocessing. The error occurs right at the start of training. | open | 2021-11-11T17:14:11Z | 2021-11-12T02:54:05Z | https://github.com/babysor/MockingBird/issues/211 | [] | dsyrock | 2 |
mlfoundations/open_clip | computer-vision | 420 | Can not import open_clip_torch in colab [Critical] | [Example colab notebook](https://colab.research.google.com/drive/1fAG6G46qB5YyPd3_-z8zFnKgt2LgfgKP?usp=sharing)
```
!pip install open_clip_torch==2.12.0
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting open_clip_torch
Downloading open_clip_torch-2.12.0-py3-none-any.whl (1.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 15.4 MB/s eta 0:00:00
Requirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from open_clip_torch) (4.64.1)
Collecting protobuf==3.20.*
Downloading protobuf-3.20.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.0/1.0 MB 14.7 MB/s eta 0:00:00
Collecting ftfy
Downloading ftfy-6.1.1-py3-none-any.whl (53 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.1/53.1 KB 3.1 MB/s eta 0:00:00
Requirement already satisfied: regex in /usr/local/lib/python3.8/dist-packages (from open_clip_torch) (2022.6.2)
Collecting huggingface-hub
Downloading huggingface_hub-0.12.0-py3-none-any.whl (190 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 190.3/190.3 KB 2.6 MB/s eta 0:00:00
Requirement already satisfied: torch>=1.9.0 in /usr/local/lib/python3.8/dist-packages (from open_clip_torch) (1.13.1+cu116)
Collecting timm
Downloading timm-0.6.12-py3-none-any.whl (549 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 549.1/549.1 KB 24.9 MB/s eta 0:00:00
Collecting sentencepiece
Downloading sentencepiece-0.1.97-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 37.4 MB/s eta 0:00:00
Requirement already satisfied: torchvision in /usr/local/lib/python3.8/dist-packages (from open_clip_torch) (0.14.1+cu116)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.8/dist-packages (from torch>=1.9.0->open_clip_torch) (4.4.0)
Requirement already satisfied: wcwidth>=0.2.5 in /usr/local/lib/python3.8/dist-packages (from ftfy->open_clip_torch) (0.2.6)
Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.8/dist-packages (from huggingface-hub->open_clip_torch) (23.0)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.8/dist-packages (from huggingface-hub->open_clip_torch) (6.0)
Requirement already satisfied: filelock in /usr/local/lib/python3.8/dist-packages (from huggingface-hub->open_clip_torch) (3.9.0)
Requirement already satisfied: requests in /usr/local/lib/python3.8/dist-packages (from huggingface-hub->open_clip_torch) (2.25.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.8/dist-packages (from torchvision->open_clip_torch) (1.21.6)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.8/dist-packages (from torchvision->open_clip_torch) (7.1.2)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.8/dist-packages (from requests->huggingface-hub->open_clip_torch) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests->huggingface-hub->open_clip_torch) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.8/dist-packages (from requests->huggingface-hub->open_clip_torch) (2022.12.7)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.8/dist-packages (from requests->huggingface-hub->open_clip_torch) (4.0.0)
Installing collected packages: sentencepiece, protobuf, ftfy, huggingface-hub, timm, open_clip_torch
Attempting uninstall: protobuf
Found existing installation: protobuf 3.19.6
Uninstalling protobuf-3.19.6:
Successfully uninstalled protobuf-3.19.6
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.11.0 requires protobuf<3.20,>=3.9.2, but you have protobuf 3.20.3 which is incompatible.
Successfully installed ftfy-6.1.1 huggingface-hub-0.12.0 open_clip_torch-2.12.0 protobuf-3.20.3 sentencepiece-0.1.97 timm-0.6.12
```
# Error
```
import open_clip
```
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
[<ipython-input-2-0270bfd9aba4>](https://localhost:8080/#) in <module>
----> 1 import open_clip
1 frames
[/usr/local/lib/python3.8/dist-packages/open_clip/coca_model.py](https://localhost:8080/#) in <module>
30
31 GENERATION_TYPES = {
---> 32 "top_k": TopKLogitsWarper,
33 "top_p": TopPLogitsWarper,
34 "beam_search": "beam_search"
NameError: name 'TopKLogitsWarper' is not defined
```
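My guess at the root cause (an assumption, since the traceback alone doesn't prove it): `coca_model.py` pulls these logits warpers from the `transformers` package inside a try/except, and `transformers` is absent from the environment below, so the names are never defined.
```python
# Assumed root cause (not confirmed): the warpers live in the `transformers` package, which is
# missing from the pip freeze below. Installing it makes the import resolvable:
#   pip install transformers
from transformers import TopKLogitsWarper, TopPLogitsWarper
```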
# Env
Google colab
```
!python --version
Python 3.8.10
```
```
!pip freeze
absl-py==1.4.0
aeppl==0.0.33
aesara==2.7.9
aiohttp==3.8.3
aiosignal==1.3.1
alabaster==0.7.13
albumentations==1.2.1
altair==4.2.2
appdirs==1.4.4
arviz==0.12.1
astor==0.8.1
astropy==4.3.1
astunparse==1.6.3
async-timeout==4.0.2
atari-py==0.2.9
atomicwrites==1.4.1
attrs==22.2.0
audioread==3.0.0
autograd==1.5
Babel==2.11.0
backcall==0.2.0
beautifulsoup4==4.6.3
bleach==6.0.0
blis==0.7.9
bokeh==2.3.3
branca==0.6.0
bs4==0.0.1
CacheControl==0.12.11
cachetools==5.3.0
catalogue==2.0.8
certifi==2022.12.7
cffi==1.15.1
cftime==1.6.2
chardet==4.0.0
charset-normalizer==2.1.1
click==7.1.2
clikit==0.6.2
cloudpickle==2.2.1
cmake==3.22.6
cmdstanpy==1.1.0
colorcet==3.0.1
colorlover==0.3.0
community==1.0.0b1
confection==0.0.4
cons==0.4.5
contextlib2==0.5.5
convertdate==2.4.0
crashtest==0.3.1
crcmod==1.7
cufflinks==0.17.3
cvxopt==1.3.0
cvxpy==1.2.3
cycler==0.11.0
cymem==2.0.7
Cython==0.29.33
daft==0.0.4
dask==2022.2.1
datascience==0.17.5
db-dtypes==1.0.5
dbus-python==1.2.16
debugpy==1.0.0
decorator==4.4.2
defusedxml==0.7.1
descartes==1.1.0
dill==0.3.6
distributed==2022.2.1
dlib==19.24.0
dm-tree==0.1.8
dnspython==2.3.0
docutils==0.16
dopamine-rl==1.0.5
earthengine-api==0.1.340
easydict==1.10
ecos==2.0.12
editdistance==0.5.3
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.4.1/en_core_web_sm-3.4.1-py3-none-any.whl
entrypoints==0.4
ephem==4.1.4
et-xmlfile==1.1.0
etils==1.0.0
etuples==0.3.8
fa2==0.3.5
fastai==2.7.10
fastcore==1.5.28
fastdownload==0.0.7
fastdtw==0.3.4
fastjsonschema==2.16.2
fastprogress==1.0.3
fastrlock==0.8.1
feather-format==0.4.1
filelock==3.9.0
firebase-admin==5.3.0
fix-yahoo-finance==0.0.22
Flask==1.1.4
flatbuffers==23.1.21
folium==0.12.1.post1
frozenlist==1.3.3
fsspec==2023.1.0
ftfy==6.1.1
future==0.16.0
gast==0.4.0
GDAL==3.0.4
gdown==4.4.0
gensim==3.6.0
geographiclib==1.52
geopy==1.17.0
gin-config==0.5.0
glob2==0.7
google==2.0.3
google-api-core==2.11.0
google-api-python-client==2.70.0
google-auth==2.16.0
google-auth-httplib2==0.1.0
google-auth-oauthlib==0.4.6
google-cloud-bigquery==3.4.2
google-cloud-bigquery-storage==2.18.1
google-cloud-core==2.3.2
google-cloud-datastore==2.11.1
google-cloud-firestore==2.7.3
google-cloud-language==2.6.1
google-cloud-storage==2.7.0
google-cloud-translate==3.8.4
google-colab @ file:///colabtools/dist/google-colab-1.0.0.tar.gz
google-crc32c==1.5.0
google-pasta==0.2.0
google-resumable-media==2.4.1
googleapis-common-protos==1.58.0
googledrivedownloader==0.4
graphviz==0.10.1
greenlet==2.0.2
grpcio==1.51.1
grpcio-status==1.48.2
gspread==3.4.2
gspread-dataframe==3.0.8
gym==0.25.2
gym-notices==0.0.8
h5py==3.1.0
HeapDict==1.0.1
hijri-converter==2.2.4
holidays==0.19
holoviews==1.14.9
html5lib==1.0.1
httpimport==0.5.18
httplib2==0.17.4
httpstan==4.6.1
huggingface-hub==0.12.0
humanize==0.5.1
hyperopt==0.1.2
idna==2.10
imageio==2.9.0
imagesize==1.4.1
imbalanced-learn==0.8.1
imblearn==0.0
imgaug==0.4.0
importlib-metadata==6.0.0
importlib-resources==5.10.2
imutils==0.5.4
inflect==2.1.0
intel-openmp==2023.0.0
intervaltree==2.1.0
ipykernel==5.3.4
ipython==7.9.0
ipython-genutils==0.2.0
ipython-sql==0.3.9
ipywidgets==7.7.1
itsdangerous==1.1.0
jax==0.3.25
jaxlib @ https://storage.googleapis.com/jax-releases/cuda11/jaxlib-0.3.25+cuda11.cudnn805-cp38-cp38-manylinux2014_x86_64.whl
jieba==0.42.1
Jinja2==2.11.3
joblib==1.2.0
jpeg4py==0.1.4
jsonschema==4.3.3
jupyter-client==6.1.12
jupyter-console==6.1.0
jupyter_core==5.2.0
jupyterlab-widgets==3.0.5
kaggle==1.5.12
kapre==0.3.7
keras==2.11.0
keras-vis==0.4.1
kiwisolver==1.4.4
korean-lunar-calendar==0.3.1
langcodes==3.3.0
libclang==15.0.6.1
librosa==0.8.1
lightgbm==2.2.3
llvmlite==0.39.1
lmdb==0.99
locket==1.0.0
logical-unification==0.4.5
LunarCalendar==0.0.9
lxml==4.9.2
Markdown==3.4.1
MarkupSafe==2.0.1
marshmallow==3.19.0
matplotlib==3.2.2
matplotlib-venn==0.11.7
miniKanren==1.0.3
missingno==0.5.1
mistune==0.8.4
mizani==0.7.3
mkl==2019.0
mlxtend==0.14.0
more-itertools==9.0.0
moviepy==0.2.3.5
mpmath==1.2.1
msgpack==1.0.4
multidict==6.0.4
multipledispatch==0.6.0
multitasking==0.0.11
murmurhash==1.0.9
music21==5.5.0
natsort==5.5.0
nbconvert==5.6.1
nbformat==5.7.3
netCDF4==1.6.2
networkx==3.0
nibabel==3.0.2
nltk==3.7
notebook==5.7.16
numba==0.56.4
numexpr==2.8.4
numpy==1.21.6
oauth2client==4.1.3
oauthlib==3.2.2
okgrade==0.4.3
open-clip-torch==2.12.0
opencv-contrib-python==4.6.0.66
opencv-python==4.6.0.66
opencv-python-headless==4.7.0.68
openpyxl==3.0.10
opt-einsum==3.3.0
osqp==0.6.2.post0
packaging==23.0
palettable==3.3.0
pandas==1.3.5
pandas-datareader==0.9.0
pandas-gbq==0.17.9
pandas-profiling==1.4.1
pandocfilters==1.5.0
panel==0.12.1
param==1.12.3
parso==0.8.3
partd==1.3.0
pastel==0.2.1
pathlib==1.0.1
pathy==0.10.1
patsy==0.5.3
pep517==0.13.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==7.1.2
pip-tools==6.6.2
platformdirs==3.0.0
plotly==5.5.0
plotnine==0.8.0
pluggy==0.7.1
pooch==1.6.0
portpicker==1.3.9
prefetch-generator==1.0.3
preshed==3.0.8
prettytable==3.6.0
progressbar2==3.38.0
prometheus-client==0.16.0
promise==2.3
prompt-toolkit==2.0.10
prophet==1.1.2
proto-plus==1.22.2
protobuf==3.20.3
psutil==5.4.8
psycopg2==2.9.5
ptyprocess==0.7.0
py==1.11.0
pyarrow==9.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycocotools==2.0.6
pycparser==2.21
pyct==0.5.0
pydantic==1.10.4
pydata-google-auth==1.7.0
pydot==1.3.0
pydot-ng==2.0.0
pydotplus==2.0.2
PyDrive==1.3.1
pyemd==0.5.1
pyerfa==2.0.0.1
Pygments==2.6.1
PyGObject==3.36.0
pylev==1.4.0
pymc==4.1.4
PyMeeus==0.5.12
pymongo==4.3.3
pymystem3==0.2.0
PyOpenGL==3.1.6
pyparsing==3.0.9
pyrsistent==0.19.3
pysimdjson==3.2.0
PySocks==1.7.1
pystan==3.3.0
pytest==3.6.4
python-apt==2.0.1
python-dateutil==2.8.2
python-louvain==0.16
python-slugify==8.0.0
python-utils==3.5.0
pytz==2022.7.1
pyviz-comms==2.2.1
PyWavelets==1.4.1
PyYAML==6.0
pyzmq==23.2.1
qdldl==0.1.5.post3
qudida==0.0.4
regex==2022.6.2
requests==2.25.1
requests-oauthlib==1.3.1
requests-unixsocket==0.2.0
resampy==0.4.2
rpy2==3.5.5
rsa==4.9
scikit-image==0.18.3
scikit-learn==1.0.2
scipy==1.7.3
screen-resolution-extra==0.0.0
scs==3.2.2
seaborn==0.11.2
Send2Trash==1.8.0
sentencepiece==0.1.97
shapely==2.0.1
six==1.15.0
sklearn-pandas==1.8.0
smart-open==6.3.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
soundfile==0.11.0
spacy==3.4.4
spacy-legacy==3.0.12
spacy-loggers==1.0.4
Sphinx==3.5.4
sphinxcontrib-applehelp==1.0.4
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
SQLAlchemy==1.4.46
sqlparse==0.4.3
srsly==2.4.5
statsmodels==0.12.2
sympy==1.7.1
tables==3.7.0
tabulate==0.8.10
tblib==1.7.0
tenacity==8.2.0
tensorboard==2.11.2
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.11.0
tensorflow-datasets==4.8.2
tensorflow-estimator==2.11.0
tensorflow-gcs-config==2.11.0
tensorflow-hub==0.12.0
tensorflow-io-gcs-filesystem==0.30.0
tensorflow-metadata==1.12.0
tensorflow-probability==0.19.0
termcolor==2.2.0
terminado==0.13.3
testpath==0.6.0
text-unidecode==1.3
textblob==0.15.3
thinc==8.1.7
threadpoolctl==3.1.0
tifffile==2023.2.3
timm==0.6.12
toml==0.10.2
tomli==2.0.1
toolz==0.12.0
torch @ https://download.pytorch.org/whl/cu116/torch-1.13.1%2Bcu116-cp38-cp38-linux_x86_64.whl
torchaudio @ https://download.pytorch.org/whl/cu116/torchaudio-0.13.1%2Bcu116-cp38-cp38-linux_x86_64.whl
torchsummary==1.5.1
torchtext==0.14.1
torchvision @ https://download.pytorch.org/whl/cu116/torchvision-0.14.1%2Bcu116-cp38-cp38-linux_x86_64.whl
tornado==6.0.4
tqdm==4.64.1
traitlets==5.7.1
tweepy==3.10.0
typeguard==2.7.1
typer==0.7.0
typing_extensions==4.4.0
tzlocal==1.5.1
uritemplate==4.1.1
urllib3==1.24.3
vega-datasets==0.9.0
wasabi==0.10.1
wcwidth==0.2.6
webargs==8.2.0
webencodings==0.5.1
Werkzeug==1.0.1
widgetsnbextension==3.6.1
wordcloud==1.8.2.2
wrapt==1.14.1
xarray==2022.12.0
xarray-einstats==0.5.1
xgboost==0.90
xkit==0.0.0
xlrd==1.2.0
xlwt==1.3.0
yarl==1.8.2
yellowbrick==1.5
zict==2.2.0
zipp==3.12.1
``` | closed | 2023-02-12T19:02:54Z | 2023-02-12T23:05:20Z | https://github.com/mlfoundations/open_clip/issues/420 | [] | mrk-andreev | 13 |
ivy-llc/ivy | tensorflow | 28,428 | fix `ivy.less_equal` to support `complex` dtype and all dtype at `paddle backend` | closed | 2024-02-26T19:41:39Z | 2024-02-27T13:48:38Z | https://github.com/ivy-llc/ivy/issues/28428 | [
"Sub Task"
] | samthakur587 | 0 |
|
jupyter/nbviewer | jupyter | 743 | PR 728 broke our nbviewer | In #728, a new check whether "other" has read rights on files was introduced, as well as a change from abspath to realpath.
Both changes are breaking our nbviewer deployment. What is the reasoning behind these decisions and if they are desirable for whatever reason, what can be done about this (e.g. submit a "rawfile" handler that exhibits the previous functionality)? | closed | 2017-12-05T15:59:25Z | 2018-01-03T12:49:44Z | https://github.com/jupyter/nbviewer/issues/743 | [] | MarkusTeufelberger | 7 |
vitalik/django-ninja | pydantic | 480 | How to disable the interactive API documentation? | Since we import the urls such as this:
```
from . import api, views
urlpatterns = [
path('', views.index, name='index'),
path('api/', api.ninja_api.urls)
]
```
I'd like to try two things:
- disable the interactive input on the docs page (i.e. just read mode)
- disable the docs url at least for production
How do I get to do that? | closed | 2022-06-22T01:57:47Z | 2025-03-21T16:53:29Z | https://github.com/vitalik/django-ninja/issues/480 | [] | edugmes | 4 |
microsoft/nni | pytorch | 4,799 | Error: Dispatcher stream error, tuner may have crashed. | **Describe the issue**:
When I run it in PyCharm, it crashes immediately every time. We do not know why.
- Log is
```
[2022-04-24 19:49:06] Creating experiment, Experiment ID: mfk49db5
[2022-04-24 19:49:06] Starting web server...
[2022-04-24 19:49:06] (urllib3.connectionpool) Starting new HTTP connection (1): localhost:8080
[2022-04-24 19:49:07] (urllib3.connectionpool) Starting new HTTP connection (1): localhost:8080
[2022-04-24 19:49:07] (urllib3.connectionpool) http://localhost:8080 "GET /api/v1/nni/check-status HTTP/1.1" 200 36
[2022-04-24 19:49:07] Setting up...
[2022-04-24 19:49:07] (urllib3.connectionpool) Starting new HTTP connection (1): localhost:8080
[2022-04-24 19:49:08] (urllib3.connectionpool) http://localhost:8080 "POST /api/v1/nni/experiment HTTP/1.1" 200 28
/usr/bin/python3: No module named nni
Error: Dispatcher stream error, tuner may have crashed.
at EventEmitter.<anonymous> (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/core/nnimanager.js:651:32)
at EventEmitter.emit (node:events:390:28)
at Socket.<anonymous> (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/core/ipcInterface.js:70:72)
at Socket.emit (node:events:390:28)
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
```
- Our config is as follows:
```python
search_space = {"iters": {"_type": "choice", "_value": [4, 6, 8, 10, 12, 14]},
"lr": {"_type": "loguniform", "_value": [1e-6, 2e-4]},
"gamma": {"_type": "quniform", "_value": [800, 1400, 50]}}
experiment = Experiment('local')
experiment.config.experiment_name = "ot_orl"
experiment.config.trial_command = 'python main.py --loss_fn L2 --epochs 300 --alpha 0 --data orl'
experiment.config.trial_code_directory = '..'
experiment.config.experiment_working_directory = '../log'
experiment.config.search_space = search_space
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
experiment.config.max_trial_number = 300
experiment.config.trial_concurrency = 1
experiment.run(8080,debug=True)
```
Environment:
NNI version: 2.7
NNI mode (local|remote|pai): local
Client OS: Ubuntu 18.04
Server OS (for remote mode only):
Python version: 3.9
PyTorch/TensorFlow version: Pytorch 1.9
Is conda/virtualenv/venv used?: Anaconda
Is running in Docker?: No
Log message:
**nnimanager.log:**
```
[2022-04-24 19:37:33] INFO (nni.experiment/MainThread) Creating experiment, Experiment ID: zailfnmp
[2022-04-24 19:37:33] INFO (nni.experiment/MainThread) Starting web server...
[2022-04-24 19:37:33] DEBUG (urllib3.connectionpool/MainThread) Starting new HTTP connection (1): localhost:8080
[2022-04-24 19:37:34] DEBUG (urllib3.connectionpool/MainThread) Starting new HTTP connection (1): localhost:8080
[2022-04-24 19:37:34] DEBUG (urllib3.connectionpool/MainThread) http://localhost:8080 "GET /api/v1/nni/check-status HTTP/1.1" 200 36
[2022-04-24 19:37:34] INFO (nni.experiment/MainThread) Setting up...
[2022-04-24 19:37:34] DEBUG (urllib3.connectionpool/MainThread) Starting new HTTP connection (1): localhost:8080
[2022-04-24 19:37:34] DEBUG (urllib3.connectionpool/MainThread) http://localhost:8080 "POST /api/v1/nni/experiment HTTP/1.1" 200 28
[2022-04-24 19:37:34] INFO (nni.experiment/MainThread) Web portal URLs: http://127.0.0.1:8080 http://183.174.229.150:8080
[2022-04-24 19:37:44] DEBUG (urllib3.connectionpool/MainThread) Starting new HTTP connection (1): localhost:8080
[2022-04-24 19:37:44] DEBUG (urllib3.connectionpool/MainThread) http://localhost:8080 "GET /api/v1/nni/check-status HTTP/1.1" 200 80
[2022-04-24 19:37:44] INFO (nni.experiment/MainThread) Stopping experiment, please wait...
```
**dispatcher.log:**
```
2022-04-24 19:37:33] DEBUG (main) start() returned.
[2022-04-24 19:37:34] DEBUG (NNIRestHandler) GET: /check-status: body: {}
[2022-04-24 19:37:34] DEBUG (NNIRestHandler) POST: /experiment: body: {
experimentName: 'ot_orl',
searchSpace: {
iters: { _type: 'choice', _value: [Array] },
lr: { _type: 'loguniform', _value: [Array] },
gamma: { _type: 'quniform', _value: [Array] }
},
trialCommand: 'python main.py --loss_fn L2 --epochs 300 --alpha 0 --data orl',
trialCodeDirectory: '/home/fengjiao_gong/code/mvc',
trialConcurrency: 1,
maxTrialNumber: 300,
useAnnotation: false,
debug: false,
logLevel: 'info',
experimentWorkingDirectory: '/home/fengjiao_gong/code/mvc/log',
tuner: { name: 'TPE', classArgs: { optimize_mode: 'maximize' } },
trainingService: {
platform: 'local',
trialCommand: 'python main.py --loss_fn L2 --epochs 300 --alpha 0 --data orl',
trialCodeDirectory: '/home/fengjiao_gong/code/mvc',
debug: false,
maxTrialNumberPerGpu: 1,
reuseMode: false
}
}
[2022-04-24 19:37:34] INFO (NNIManager) Starting experiment: zailfnmp
[2022-04-24 19:37:34] INFO (NNIManager) Setup training service...
[2022-04-24 19:37:34] INFO (LocalTrainingService) Construct local machine training service.
[2022-04-24 19:37:34] INFO (NNIManager) Setup tuner...
[2022-04-24 19:37:34] DEBUG (NNIManager) dispatcher command: python3 -m nni --exp_params eyJleHBlcmltZW50TmFtZSI6Im90X29ybCIsInRyaWFsQ29tbWFuZCI6InB5dGhvbiBtYWluLnB5IC0tbG9zc19mbiBMMiAtLWVwb2NocyAzMDAgLS1hbHBoYSAwIC0tZGF0YSBvcmwiLCJ0cmlhbENvZGVEaXJlY3RvcnkiOiIvaG9tZS9mZW5namlhb19nb25nL2NvZGUvbXZjIiwidHJpYWxDb25jdXJyZW5jeSI6MSwibWF4VHJpYWxOdW1iZXIiOjMwMCwidXNlQW5ub3RhdGlvbiI6ZmFsc2UsImRlYnVnIjpmYWxzZSwibG9nTGV2ZWwiOiJpbmZvIiwiZXhwZXJpbWVudFdvcmtpbmdEaXJlY3RvcnkiOiIvaG9tZS9mZW5namlhb19nb25nL2NvZGUvbXZjL2xvZyIsInR1bmVyIjp7Im5hbWUiOiJUUEUiLCJjbGFzc0FyZ3MiOnsib3B0aW1pemVfbW9kZSI6Im1heGltaXplIn19LCJ0cmFpbmluZ1NlcnZpY2UiOnsicGxhdGZvcm0iOiJsb2NhbCIsInRyaWFsQ29tbWFuZCI6InB5dGhvbiBtYWluLnB5IC0tbG9zc19mbiBMMiAtLWVwb2NocyAzMDAgLS1hbHBoYSAwIC0tZGF0YSBvcmwiLCJ0cmlhbENvZGVEaXJlY3RvcnkiOiIvaG9tZS9mZW5namlhb19nb25nL2NvZGUvbXZjIiwiZGVidWciOmZhbHNlLCJtYXhUcmlhbE51bWJlclBlckdwdSI6MSwicmV1c2VNb2RlIjpmYWxzZX19
[2022-04-24 19:37:34] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2022-04-24 19:37:34] INFO (NNIManager) Add event listeners
[2022-04-24 19:37:34] DEBUG (NNIManager) Send tuner command: INITIALIZE: [object Object]
[2022-04-24 19:37:34] DEBUG (IpcInterface) ipcInterface command type: [IN], content:[{"iters":{"_type":"choice","_value":[4,6,8,10,12,14]},"lr":{"_type":"loguniform","_value":[0.000001,0.0002]},"gamma":{"_type":"quniform","_value":[800,1400,50]}}]
[2022-04-24 19:37:34] DEBUG (IpcInterface) ipcInterface command type: [PI], content:[]
[2022-04-24 19:37:34] INFO (LocalTrainingService) Run local machine training service.
[2022-04-24 19:37:34] ERROR (NNIManager) Dispatcher error: read ECONNRESET
[2022-04-24 19:37:34] ERROR (NNIManager) Error: Dispatcher stream error, tuner may have crashed.
at EventEmitter.<anonymous> (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/core/nnimanager.js:651:32)
at EventEmitter.emit (node:events:390:28)
at Socket.<anonymous> (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/core/ipcInterface.js:70:72)
at Socket.emit (node:events:390:28)
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
[2022-04-24 19:37:34] INFO (NNIManager) Change NNIManager status from: RUNNING to: ERROR
[2022-04-24 19:37:44] DEBUG (NNIRestHandler) GET: /check-status: body: {}
[2022-04-24 19:37:44] DEBUG (NNIRestHandler) DELETE: /experiment: body: {}
[2022-04-24 19:37:44] INFO (NNIManager) Change NNIManager status from: ERROR to: STOPPING
[2022-04-24 19:37:44] INFO (NNIManager) Stopping experiment, cleaning up ...
[2022-04-24 19:37:44] DEBUG (IpcInterface) ipcInterface command type: [TE], content:[]
[2022-04-24 19:37:44] WARNING (IpcInterface) Commands jammed in buffer!
[2022-04-24 19:37:44] INFO (LocalTrainingService) Stopping local machine training service...
[2022-04-24 19:37:44] INFO (NNIManager) Change NNIManager status from: STOPPING to: STOPPED
[2022-04-24 19:37:44] INFO (NNIManager) Experiment stopped.
[2022-04-24 19:37:44] DEBUG (NNIExperimentsManager) Stopping experiment manager.
[2022-04-24 19:37:44] DEBUG (NNIExperimentsManager) Experiment manager: all clean up
[2022-04-24 19:37:44] DEBUG (NNIExperimentsManager) Experiment manager stopped.
[2022-04-24 19:37:44] INFO (NNITensorboardManager) Forced stopping all tensorboard task.
[2022-04-24 19:37:44] INFO (NNITensorboardManager) All tensorboard task stopped.
[2022-04-24 19:37:44] INFO (NNITensorboardManager) Tensorboard manager stopped.
[2022-04-24 19:37:44] ERROR (NNIManager) TypeError: Cannot read properties of undefined (reading 'startsWith')
at new RestServer (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/rest_server/index.js:50:37)
at Object.get (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/node_modules/typescript-ioc/es5.js:202:108)
at SingletonScope.resolve (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/node_modules/typescript-ioc/es5.js:290:33)
at ConfigImpl.getInstance (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/node_modules/typescript-ioc/es5.js:252:30)
at Function.IoCContainer.get (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/node_modules/typescript-ioc/es5.js:147:23)
at Function.Container.get (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/node_modules/typescript-ioc/es5.js:93:29)
at Object.get (/home/fengjiao_gong/anaconda3/envs/pytorch190_new/lib/python3.9/site-packages/nni_node/common/component.js:33:26)
```
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/Nnictl.md#nnictl%20log%20stdout
-->
**How to reproduce it?**:
Every time I press the Run button in PyCharm.
So how I should solve this issue? | closed | 2022-04-24T12:19:27Z | 2022-07-08T09:06:06Z | https://github.com/microsoft/nni/issues/4799 | [] | redLinmumu | 8 |
biolab/orange3 | pandas | 6,216 | python script widget - problem with target variable | Hi all
In order to use FreeViz into notebooks, I am using the python script widget. The problem is that FreeViz is giving me an error `AttributeError: 'NoneType' object has no attribute 'is_discrete'`. I believe the reason is that I have no target variable defined -I was able to make it work with iris-.
Question: I have been checking the documentation for hours but have not found any clear way to define one column as the target value.
Here is a copy of my code with a simplified dataset
```
>>> Table('iris').domain
[sepal length, sepal width, petal length, petal width | iris]
# I believe that ' | iris' is defined here as target
>>> data = Table.from_file('dataset.csv')
>>> data.domain
[condicion, N_branches, Branch_total_length, Branch_mean_length]
>>> df.domain[:]
(DiscreteVariable(name='condicion', values=('B003', 'HDF', 'P129', 'P130', 'P131')), ContinuousVariable(name='N_branches', number_of_decimals=0), ContinuousVariable(name='Branch_total_length', number_of_decimals=3), ContinuousVariable(name='Branch_mean_length', number_of_decimals=3))
>>> freeviz = FreeViz()
>>> model = freeviz(data)
AttributeError: 'NoneType' object has no attribute 'is_discrete'
```
| closed | 2022-11-21T17:11:12Z | 2023-01-13T12:12:01Z | https://github.com/biolab/orange3/issues/6216 | [] | Alecampoy | 2 |
vanna-ai/vanna | data-visualization | 721 | Support for MCP (Model Context Protocol) in Vanna.AI | I am exploring the Model Context Protocol (MCP).
Does Vanna.AI currently support MCP, or are there plans to implement it?
If not supported, how feasible would it be to integrate MCP with Vanna.AI?
| open | 2024-12-04T04:10:37Z | 2024-12-15T08:25:04Z | https://github.com/vanna-ai/vanna/issues/721 | [] | sinjup | 1 |
pyro-ppl/numpyro | numpy | 1,691 | [FR] add affine-invariant ensemble sampling (including DE-MCMC) and ensemble slice sampling | as implemented in [emcee](https://emcee.readthedocs.io/en/stable/) and [zeus](https://zeus-mcmc.readthedocs.io/en/latest/index.html) - both methods are gradient free. For likelihood free models, differential evolution monte carlo (DE-MCMC) tends to be [robust](https://www.sciencedirect.com/science/article/abs/pii/S0022249616301663).
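For concreteness, the DE-MCMC proposal is just the textbook move sketched below (my own illustration, not taken from the drafts linked next):
```python
# Textbook DE-MCMC proposal (illustration only, not the linked draft implementation).
import numpy as np

def de_proposal(walkers, k, gamma=None, eps_scale=1e-4, rng=np.random):
    """Propose a new position for walker k from the difference of two other walkers."""
    n_walkers, dim = walkers.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * dim)   # commonly used default step size
    i, j = rng.choice([m for m in range(n_walkers) if m != k], size=2, replace=False)
    return walkers[k] + gamma * (walkers[i] - walkers[j]) + eps_scale * rng.standard_normal(dim)
```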
I have minimal working drafts of the two classes [here](https://github.com/amifalk/numpyro/blob/demc/numpyro/infer/ensemble.py) with utility functions [here](https://github.com/amifalk/numpyro/blob/demc/numpyro/infer/ensemble_util.py). I tried to make the api align with emcee and zeus where possible.
Also, ```batch_ravel_pytree()``` in ensemble_util.py should make it easier to implement future mcmc methods that share information between chains.
Let me know what you think! | closed | 2023-11-30T01:52:32Z | 2024-01-26T13:17:25Z | https://github.com/pyro-ppl/numpyro/issues/1691 | [
"enhancement"
] | amifalk | 1 |
chaoss/augur | data-visualization | 2,619 | Project Popularity metric API | The canonical definition is here: https://chaoss.community/?p=3573 | open | 2023-11-30T18:04:06Z | 2023-11-30T18:20:19Z | https://github.com/chaoss/augur/issues/2619 | [
"API",
"first-timers-only"
] | sgoggins | 0 |
akfamily/akshare | data-science | 5,065 | Python3.10 pip install AKShare error | [Background]
1. akshare is imported in a Python 3.10 virtual environment;
[Error messages]
1.
2.
[Troubleshooting steps attempted]
1. Got 'Failed connect to chromium.goolesource.com:443; connection timed out'; tried `pip install akshare -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com --upgrade`, which returned
.[0:45:12] Started.
[0:45:12]
________ running 'git -c core.deltaBaseCacheLimit=2g clone --no-checkout --progress https://chromium.googlesource.com/external/github.com/google/fuzztest.git /tmp/pip-install-tpnl40du/mini-racer_cb0f83585df940f09a0a78cd0cdb2748/v8_workspace/v8/third_party/fuzztest/_gclient_src_asxzj_ek' in '/tmp/pip-install-tpnl40du/mini-racer_cb0f83585df940f09a0a78cd0cdb2748/v8_workspace'
[0:45:12] Cloning into '/tmp/pip-install-tpnl40du/mini-racer_cb0f83585df940f09a0a78cd0cdb2748/v8_workspace/v8/third_party/fuzztest/_gclient_src_asxzj_ek'...
[0:45:15] error: RPC failed; result=22, HTTP code = 400
[0:45:15] fatal: The remote end hung up unexpectedly
3. Tried using a proxy, which did not take effect; still got the error
4. Tried setting git's cache size, which also errored
| closed | 2024-07-23T03:33:49Z | 2024-07-23T08:27:31Z | https://github.com/akfamily/akshare/issues/5065 | [
"bug"
] | changjialiBRY | 1 |
vllm-project/vllm | pytorch | 15,403 | [Feature]: JSON based tool calling for Gemma 3 | ### 🚀 The feature, motivation and pitch
I have tried the tool calling features in Qwen 2.5 and Mistral models using the `hermes` and `mistral` JSON templates and I really enjoy their consistency and how well they work with langgraph. Therefore I'd like to request JSON-based tool calling for the Gemma 3 models. It would help developers get the full potential of these models.
### Alternatives
_No response_
### Additional context
A straightforward serving like
`vllm serve Qwen2.5-72B-Instruct-GPTQ-Int4 --dtype auto --tool-call-parser hermes --enable-auto-tool-choice`
would be fantastic for Gemma 3 series.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-24T15:30:16Z | 2025-03-24T15:32:34Z | https://github.com/vllm-project/vllm/issues/15403 | [
"feature request"
] | venki-lfc | 0 |
encode/databases | sqlalchemy | 439 | Backends crash when the user selects duplicate columns | This is to mirror the [upstream issue in SQLAlchemy 1.4](https://github.com/sqlalchemy/sqlalchemy/issues/7504). Those folks happily discarded our problem because we were using the private API at first, but later showed a humane attitude and advised on changing to the new public API that appeared in 1.4. As if we had a choice in 1.3.
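For reference, a hypothetical minimal trigger in 1.4-style Core (my construction, not taken from either tracker):
```python
# Hypothetical minimal trigger (my construction): selecting the same column twice is what the
# backends' column mappings currently cannot handle.
import sqlalchemy as sa

users = sa.table("users", sa.column("id"), sa.column("name"))
query = sa.select(users.c.id, users.c.id)   # duplicate column in the select list
```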
Somebody has to study the new 1.4 code and update the backends' column mappings to support selecting duplicate columns. | open | 2021-12-24T17:58:15Z | 2021-12-27T17:51:25Z | https://github.com/encode/databases/issues/439 | [] | vmarkovtsev | 2 |
ultralytics/ultralytics | pytorch | 19,167 | Can NOT get result.boxes with yolo 11 | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I have a Python code block to check whether a specified product image is qualified (OK). After labeling the images and training the model with YOLO 11,
I run the following code block to check the specified product (datasets/images/val/image1_ok.png), but got "**No target found**".
It seems that it can NOT get result.boxes:
```
from ultralytics import YOLO
import cv2
def check_product_quality(image_path):
model = YOLO("best.pt")
image = cv2.imread(image_path)
results = model(image)
result = results[0]
qualified_class_id = 0
if result.boxes is not None and len(result.boxes) > 0:
for box in result.boxes:
class_id = int(box.cls[0])
if class_id != qualified_class_id:
return False
else:
print("No target found")
return False
return True
image_path = "datasets/images/val/image1_ok.png"
is_qualified = check_product_quality(image_path)
if is_qualified:
print("OK")
else:
print("KO")
```
The image even shows up in val_batch0_labels (attached in a comment).
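One debugging tweak that may be worth trying (an assumption on my side, not a confirmed fix):
```python
# Debugging sketch (assumption, not a confirmed fix): lower the confidence threshold and print the
# raw boxes to check whether detections exist but are filtered out by the default threshold.
results = model(image, conf=0.1, verbose=True)
print(results[0].boxes)
```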
Thanks in advance for any comments!
### Additional
_No response_ | open | 2025-02-10T15:04:50Z | 2025-02-15T06:38:53Z | https://github.com/ultralytics/ultralytics/issues/19167 | [
"question",
"detect"
] | LingPiao | 12 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 844 | Multi-GPU training loss computation | Hello, in data-parallel multi-GPU training, should the order of the two lines of code in the red box in the figure be swapped?
My understanding: the gradients should be updated using the average loss across the multiple GPUs?

| closed | 2024-12-04T11:36:02Z | 2024-12-05T15:02:21Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/844 | [] | dongyihua543 | 0 |
miguelgrinberg/Flask-SocketIO | flask | 821 | Websockets not handling response | When I start the gunicorn (eventlet) process, the first few connections work fine. However, after a minute or so, the issue arises.
From my nginx error log, I first see lots of:
`2018/10/26 20:41:33 [error] 31364#31364: *610983 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 87.88.8.24, server: titanembeds.com, request: "GET /gateway/?EIO=3&transport=websocket HTTP/1.1", upstream: "http://unix:/var/www/titanembeds_ws.sock:/gateway/?EIO=3&transport=websocket", host: "titanembeds.com"`
Then after a while it becomes more of
`2018/10/26 20:42:43 [error] 31364#31364: *635544 connect() to unix:/var/www/titanembeds_ws.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 75.27.24.47, server: titanembeds.com, request: "GET /gateway/?EIO=3&transport=websocket HTTP/1.1", upstream: "http://unix:/var/www/titanembeds_ws1.sock:/gateway/?EIO=3&transport=websocket", host: "titanembeds.com"`
I am using multiple services to handle /gateway/ endpoint
Nginx sites enabled config includes:
```
upstream uwsgititanws {
ip_hash;
server unix:/var/www/titanembeds_ws.sock max_fails=0;
server unix:/var/www/titanembeds_ws1.sock max_fails=0;
server unix:/var/www/titanembeds_ws2.sock max_fails=0;
}
```
The command to run the gunicorn process is as follows:
`/usr/bin/python3 /usr/local/bin/gunicorn --worker-connections 4096 --worker-class eventlet -w 1 -b unix:/var/www/titanembeds_ws.sock titanembeds.app:app`
I'd also like to add that this issue only appears during peak times. (Hundreds of connections, for a few hours a day)
When the services/processes are restarted, I can see upwards of hundreds of connections on my counter. But when the errors show up in the logs, the connection count drops from hundreds down to tens.
I have monkey patched the code in the [first line](https://github.com/TitanEmbeds/Titan/blob/master/webapp/titanembeds/app.py) of `titanembeds.app:app`.
(By the way @miguelgrinberg, do you offer service to personally take a look at the server and resolve the issue?) | closed | 2018-10-26T20:49:56Z | 2019-04-07T10:09:12Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/821 | [
"question"
] | EndenDragon | 14 |
saulpw/visidata | pandas | 1,773 | Unable to replay commandlog when loading an HTML table | **Small description**
After saving the commandlog, it's not possible to replay it
**Expected result**
open the row and get the HTML table
**Actual result with screenshot**

**Steps to reproduce with sample data and a .vd**
1. Open the attached HTML
2. Shift-D and save the commandlog
3. Try to replay `vd -p table_0_cmdlog.vdj table.html`
4. Get the error as in the screenshot
**Additional context**
- +: syntax works ok `vd table.html +:table_0:0:0`
- Latest develop branch
[table.html.gz](https://github.com/saulpw/visidata/files/10853404/table.html.gz)
[table_0_cmdlog.vdj.gz](https://github.com/saulpw/visidata/files/10853492/table_0_cmdlog.vdj.gz)
| closed | 2023-02-28T18:15:28Z | 2023-03-04T02:24:28Z | https://github.com/saulpw/visidata/issues/1773 | [
"bug",
"fixed"
] | mokalan | 6 |
lucidrains/vit-pytorch | computer-vision | 166 | ViT-Dino for Medical images | Hi!
I would like to first thank you for such a good and up-to-date repo on Vision Transformers.
I want to know whether I can pretrain the ViT using 3D medical images. Do I need to make any changes to the sample code you shared?
Thanks | open | 2021-11-02T04:27:23Z | 2024-07-29T11:33:44Z | https://github.com/lucidrains/vit-pytorch/issues/166 | [] | Mushtaqml | 3 |
ivy-llc/ivy | tensorflow | 28,291 | Fix Ivy Failing Test: jax - shape.shape__rmul__ | closed | 2024-02-15T14:20:54Z | 2024-02-21T06:42:17Z | https://github.com/ivy-llc/ivy/issues/28291 | [
"Sub Task"
] | fnhirwa | 0 |
|
miguelgrinberg/microblog | flask | 165 | Section 6: More Interesting profiles, Operational error occurs after applying Upgrade. | INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 36694d345787 -> 58b088d8fad0, new fields in user model
Traceback (most recent call last):
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\engine\base.py", line 1244, in _execute_context
cursor, statement, parameters, context
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\engine\default.py", line 550, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: duplicate column name: timestamp
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\flask\__main__.py", line 14, in <module>
main(as_module=True)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\flask\cli.py", line 906, in main
cli.main(args=args, prog_name=name)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\flask\cli.py", line 569, in main
return super(FlaskGroup, self).main(*args, **kwargs)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 717, in main
rv = self.invoke(ctx)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 555, in invoke
return callback(*args, **kwargs)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\click\decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\flask\cli.py", line 419, in decorator
return __ctx.invoke(f, *args, **kwargs)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\click\core.py", line 555, in invoke
return callback(*args, **kwargs)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\flask_migrate\cli.py", line 134, in upgrade
_upgrade(directory, revision, sql, tag, x_arg)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\flask_migrate\__init__.py", line 95, in wrapped
f(*args, **kwargs)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\flask_migrate\__init__.py", line 280, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\command.py", line 276, in upgrade
script.run_env()
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\script\base.py", line 475, in run_env
util.load_python_file(self.dir, "env.py")
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\util\pyfiles.py", line 90, in load_python_file
module = load_module_py(module_id, path)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\util\compat.py", line 156, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "migrations\env.py", line 96, in <module>
run_migrations_online()
File "migrations\env.py", line 90, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\runtime\environment.py", line 839, in run_migrations
self.get_context().run_migrations(**kw)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\runtime\migration.py", line 361, in run_migrations
step.migration_fn(**kw)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\migrations\versions\58b088d8fad0_new_fields_in_user_model.py", line 21, in upgrade
op.add_column('post', sa.Column('timestamp', sa.DateTime(), nullable=True))
File "<string>", line 8, in add_column
File "<string>", line 3, in add_column
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\operations\ops.py", line 1904, in add_column
return operations.invoke(op)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\operations\base.py", line 345, in invoke
return fn(self, operation)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\operations\toimpl.py", line 131, in add_column
operations.impl.add_column(table_name, column, schema=schema)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\ddl\impl.py", line 230, in add_column
self._exec(base.AddColumn(table_name, column, schema=schema))
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\alembic\ddl\impl.py", line 134, in _exec
return conn.execute(construct, *multiparams, **params)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\engine\base.py", line 988, in execute
return meth(self, multiparams, params)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\sql\ddl.py", line 72, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\engine\base.py", line 1050, in _execute_ddl
compiled,
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\engine\base.py", line 1248, in _execute_context
e, statement, parameters, cursor, context
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\engine\base.py", line 1466, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\util\compat.py", line 383, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\util\compat.py", line 128, in reraise
raise value.with_traceback(tb)
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\engine\base.py", line 1244, in _execute_context
cursor, statement, parameters, context
File "C:\Users\nooria.ali\AppData\Local\Programs\Python\Python37\lib\site-packages\sqlalchemy\engine\default.py", line 550, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) duplicate column name: timestamp
[SQL: ALTER TABLE post ADD COLUMN timestamp DATETIME]
(Background on this error at: http://sqlalche.me/e/e3q8) | closed | 2019-06-11T05:22:07Z | 2019-06-14T04:38:18Z | https://github.com/miguelgrinberg/microblog/issues/165 | [
"question"
] | NooriaAli | 4 |
mirumee/ariadne-codegen | graphql | 355 | websockets.client.connect is deprecated in async_base_client.py | We are encountering a DeprecationWarning due to the use of websockets.client.connect in async_base_client.py, which is deprecated in recent versions of the websockets library.
```
<project_dir>/generated/async_base_client.py:21
<project_dir>/generated/async_base_client.py:21: DeprecationWarning: websockets.client.connect is deprecated
from websockets.client import ( # type: ignore[import-not-found,unused-ignore]
```
Steps to Reproduce:
1. Run the code with the latest version of websockets installed.
2. Observe the deprecation warning in logs.
Proposed Fix:
Modify the import statement in async_base_client.py from:
```python
from websockets.client import ( # type: ignore[import-not-found,unused-ignore]
WebSocketClientProtocol,
connect as ws_connect,
)
```
to
```python
from websockets.sync.client import ( # type: ignore[import-not-found,unused-ignore]
WebSocketClientProtocol,
connect as ws_connect,
)
``` | closed | 2025-02-18T07:58:29Z | 2025-02-26T10:32:16Z | https://github.com/mirumee/ariadne-codegen/issues/355 | [] | Liavshab | 0 |
mwaskom/seaborn | pandas | 3,780 | Allow setting other scales in log_scale | The addition of `log_scale` has been great, especially for violins. I know you can get the base using a number, but would it be possible to allow setting a different scale? For example, I use `symlog` and `logit` quite a bit in my work. | closed | 2024-11-08T04:35:33Z | 2024-11-10T23:12:43Z | https://github.com/mwaskom/seaborn/issues/3780 | [] | mbhall88 | 2 |
coqui-ai/TTS | pytorch | 2,641 | Fail to run PhonemeCoverage.ipynb in Notebooks. | ### Describe the bug
Thank you for sharing your code for phoneme coverage analysis using TTS. It has been very helpful. However, I have encountered some issues while trying to execute the notebook due to the lack of certain functions provided in the https://github.com/mozilla/TTS/ repository.
To be specific, I have noticed the following problems:
- The` load_config` function is not available in `TTS.utils.io`.
- The `load_tts_samples `function is not available in `TTS.tts.datasets.formatters`.
- The notebook requires a config file as input, but no example config files have been provided.
- The `load_tts_samples` function requires a parameter `formatter` to be specified.
- The `phoneme_to_sequence()` and `sequence_to_phoneme()` functions are not available in `TTS.tts.utils.text`.
- It is necessary to specify the parameter `language` in `tokenize.text_to_ids()`.
Although I was eventually able to resolve these issues, the process was quite cumbersome. It would be great if these issues could be addressed to make the notebook more user-friendly.
### To Reproduce
```
cd ~/TTS/notebooks/
jupyter notebook
execute "phoneme coverage" notebook
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
- TTS version 1.3.20
```
### Additional context
_No response_ | closed | 2023-05-30T02:11:49Z | 2023-07-17T02:02:58Z | https://github.com/coqui-ai/TTS/issues/2641 | [
"bug",
"help wanted",
"good first issue",
"wontfix"
] | Hide-A-Pumpkin | 1 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 2,738 | Update profile_list dynamically | We are attempting to make our profile_list dynamic. We are trying to automate this and avoid running `helm upgrade` whenever we create, update or delete a profile.
The end goal is to have an interface where a user can create profiles and then submit them to Kubernetes or KubeSpawner.
Is this currently possible? If so, any explanations or examples would be greatly appreciated.
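For reference, a minimal sketch of one possible approach, assuming KubeSpawner accepts a callable for `profile_list` (its documentation suggests it does) and using a hypothetical `load_profiles()` helper that reads whatever store your interface writes to:
```python
# jupyterhub_config.py (or hub.extraConfig in the Helm values)
import json
from pathlib import Path

def load_profiles():
    # Hypothetical helper: read profiles your UI wrote to a file/ConfigMap,
    # database, or API -- anything that can change without a helm upgrade.
    return json.loads(Path("/srv/jupyterhub/profiles.json").read_text())

def dynamic_profiles(spawner):
    # Evaluated when the spawn options form is rendered, so new or
    # updated profiles show up without redeploying the chart.
    return load_profiles()

c.KubeSpawner.profile_list = dynamic_profiles
```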
Thanks in advance!
-Tim
| closed | 2022-06-02T18:02:22Z | 2022-06-02T18:17:16Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/2738 | [
"support"
] | oitTim | 3 |
plotly/dash | jupyter | 2,234 | [BUG] multi_page layout (use_pages=True) won't recognize the pages if they are compiled .pyc files. | As mentioned in the title, it looks like Dash only picks up .py files. Am I doing something wrong, or are .pyc files just not supported "yet"? | closed | 2022-09-17T17:47:59Z | 2024-07-24T15:12:38Z | https://github.com/plotly/dash/issues/2234 | [] | TheBubblePopped | 3 |
streamlit/streamlit | deep-learning | 10,743 | Add download action to `st.image` toolbar | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Add an action to the `st.image` toolbar that makes it easy to download the image.
### Why?
_No response_
### How?
This is a pure frontend change and does not require any changes to the API.
### Additional Context
_No response_ | open | 2025-03-12T15:02:34Z | 2025-03-12T15:03:11Z | https://github.com/streamlit/streamlit/issues/10743 | [
"type:enhancement",
"feature:st.image"
] | lukasmasuch | 1 |
sinaptik-ai/pandas-ai | data-science | 918 | Code generation crash on question "Give me full names of people was in class 3 grouped by second name" | ### System Info
pandas AI 1.5.17
### 🐛 Describe the bug
I have a table with a `Name` column and ask the question "Give me full names of people was in class 3 grouped by second name", and I see this crash.
The data is the Titanic dataset.
[titanic.csv](https://github.com/gventuri/pandas-ai/files/14113732/titanic.csv)
Log:
```
Question: Give me full names of people was in class 3 grouped by second name
Running PandasAI with openai LLM...
Prompt ID: 388d3d15-3cbc-43be-b26b-c59b1b56fa9c
<class 'pandasai.helpers.output_types._output_types.DefaultOutputType'> is going to be used.
<class 'pandasai.helpers.viz_library_types._viz_library_types.NoVizLibraryType'> is going to be used.
Executing Step 0: CacheLookup
Executing Step 1: PromptGeneration
Using prompt: <dataframe>
dfs[0]:891x12
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
3,1,3,"Heikkinen, Miss. Laina",female,26.0,0,0,STON/O2. 3101282,7.925,,S
2,1,1,"Cumings, Mrs. John Bradley (Florence Briggs Thayer)",female,38.0,1,0,PC 17599,71.2833,C85,C
1,0,3,"Braund, Mr. Owen Harris",male,22.0,1,0,A/5 21171,7.25,,S
</dataframe>
Update this initial code:
python
# TODO: import the required dependencies
import pandas as pd
# Write code here
# Declare result var: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" }
Q: Give me full names of people was in class 3 grouped by second name
Variable `dfs: list[pd.DataFrame]` is already declared.
At the end, declare "result" variable as a dictionary of type and value.
Generate python code and return full updated code:
Executing Step 2: CodeGenerator
Code generated:
# TODO: import the required dependencies
import pandas as pd
# Write code here
df = dfs[0]
class_3_names = df[df['Pclass'] == 3]['Name']
full_names = class_3_names.apply(lambda x: x.split(',')[1].strip())
grouped_names = full_names.groupby(full_names).apply(lambda x: ', '.join(x))
result = {"type": "dataframe", "value": grouped_names}
result
Executing Step 3: CachePopulation
Executing Step 4: CodeExecution
Code running:
df = dfs[0]
class_3_names = df[df['Pclass'] == 3]['Name']
full_names = class_3_names.apply(lambda x: x.split(',')[1].strip())
grouped_names = full_names.groupby(full_names).apply(lambda x: ', '.join(x))
result = {'type': 'dataframe', 'value': grouped_names}
result
Failed to execute code with a correction framework [retry number: 1]
Failed with error: Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 134, in execute_func
step_data = self._generate_exec_step(tag, result)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 183, in _generate_exec_step
self._response = self._format_response(result)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 198, in _format_response
df_dict = self.convert_dataframe_to_dict(result["value"])
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 93, in convert_dataframe_to_dict
return {"headers": json_data["columns"], "rows": json_data["data"]}
KeyError: 'columns'
. Retrying
Using prompt: <dataframe>
dfs[0]:891x12
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
3,1,3,"Heikkinen, Miss. Laina",female,26.0,0,0,STON/O2. 3101282,7.925,,S
2,1,1,"Cumings, Mrs. John Bradley (Florence Briggs Thayer)",female,38.0,1,0,PC 17599,71.2833,C85,C
1,0,3,"Braund, Mr. Owen Harris",male,22.0,1,0,A/5 21171,7.25,,S
</dataframe>
The user asked the following question:
Q: Give me full names of people was in class 3 grouped by second name
You generated this python code:
# TODO: import the required dependencies
import pandas as pd
# Write code here
df = dfs[0]
class_3_names = df[df['Pclass'] == 3]['Name']
full_names = class_3_names.apply(lambda x: x.split(',')[1].strip())
grouped_names = full_names.groupby(full_names).apply(lambda x: ', '.join(x))
result = {"type": "dataframe", "value": grouped_names}
result
It fails with the following error:
Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 134, in execute_func
step_data = self._generate_exec_step(tag, result)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 183, in _generate_exec_step
self._response = self._format_response(result)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 198, in _format_response
df_dict = self.convert_dataframe_to_dict(result["value"])
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 93, in convert_dataframe_to_dict
return {"headers": json_data["columns"], "rows": json_data["data"]}
KeyError: 'columns'
Fix the python code above and return the new python code:
Code running:
df = dfs[0]
class_3_names = df[df['Pclass'] == 3]['Name']
full_names = class_3_names.apply(lambda x: x.split(',')[1].strip())
grouped_names = full_names.groupby(full_names).apply(lambda x: ', '.join(x))
result = {'type': 'dataframe', 'value': grouped_names.reset_index()}
result
Failed to execute code with a correction framework [retry number: 2]
Failed with error: Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 212, in execute_code
exec(code_to_run, environment)
File "<string>", line 5, in <module>
File "/home/adminuser/venv/lib/python3.9/site-packages/pandas/util/_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandas/core/series.py", line 1581, in reset_index
return df.reset_index(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandas/util/_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandas/core/frame.py", line 6361, in reset_index
new_obj.insert(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandas/core/frame.py", line 4817, in insert
raise ValueError(f"cannot insert {column}, already exists")
ValueError: cannot insert Name, already exists
. Retrying
Using prompt: <dataframe>
dfs[0]:891x12
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
3,1,3,"Heikkinen, Miss. Laina",female,26.0,0,0,STON/O2. 3101282,7.925,,S
2,1,1,"Cumings, Mrs. John Bradley (Florence Briggs Thayer)",female,38.0,1,0,PC 17599,71.2833,C85,C
1,0,3,"Braund, Mr. Owen Harris",male,22.0,1,0,A/5 21171,7.25,,S
</dataframe>
The user asked the following question:
Q: Give me full names of people was in class 3 grouped by second name
You generated this python code:
# TODO: import the required dependencies
import pandas as pd
# Write code here
df = dfs[0]
class_3_names = df[df['Pclass'] == 3]['Name']
full_names = class_3_names.apply(lambda x: x.split(',')[1].strip())
grouped_names = full_names.groupby(full_names).apply(lambda x: ', '.join(x))
result = {"type": "dataframe", "value": grouped_names}
result
It fails with the following error:
Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 53, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandasai/helpers/code_manager.py", line 212, in execute_code
exec(code_to_run, environment)
File "<string>", line 5, in <module>
File "/home/adminuser/venv/lib/python3.9/site-packages/pandas/util/_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandas/core/series.py", line 1581, in reset_index
return df.reset_index(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandas/util/_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/pandas/core/frame.py", line 6361, in reset_index
new_obj.insert(
File "/home/adminuser/venv/lib/python3.9/site-packages/pandas/core/frame.py", line 4817, in insert
raise ValueError(f"cannot insert {column}, already exists")
ValueError: cannot insert Name, already exists
Fix the python code above and return the new python code:
Code running:
df = dfs[0]
class_3_names = df[df['Pclass'] == 3]['Name']
full_names = class_3_names.apply(lambda x: x.split(',')[1].strip())
grouped_names = full_names.groupby(full_names).apply(lambda x: ', '.join(x)).reset_index()
result = {'type': 'dataframe', 'value': grouped_names}
result
Pipeline failed on step 4: cannot insert Name, already exists
``` | closed | 2024-01-31T15:38:50Z | 2024-04-11T16:06:46Z | https://github.com/sinaptik-ai/pandas-ai/issues/918 | [] | PavelAgurov | 9 |
explosion/spaCy | data-science | 13,633 | CVE in dependency (black==22.3.0) | `black==22.3.0` is a dependency and the version is pinned in spaCy's `requirements.txt`. There is a CVE affecting `black` versions prior to `24.3.0`, specifically CVE-2024-21503 (https://nvd.nist.gov/vuln/detail/CVE-2024-21503).
Impact: Although not a run-time vulnerability in most scenarios (unless untrusted code is being processed), it still shows up in security scans that are the norm for any enterprise grade software, thus triggering processes for handling vulnerabilities / exceptions.
Please evaluate what it would take to migrate to the latest version of `black` so this detection would clear up.
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
To reproduce: in our pipeline we are using Wiz for scans, but even a "visual/manual" check in `requirements.txt` in the installed python package will show the reference to `black==22.3.0`.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: not relevant (linux based)
* Python Version Used: not relevant (3.8 / 3.9)
* spaCy Version Used: not relevant (at least one of our models uses `3.6.0` but the issue is also affecting `master`)
* Environment Information: not relevant (building various docker based images in linux and/or Windows VMs)
| open | 2024-09-25T06:28:28Z | 2024-11-06T11:24:50Z | https://github.com/explosion/spaCy/issues/13633 | [] | sstefanov78 | 2 |
coqui-ai/TTS | deep-learning | 3,322 | [Bug] Core dumped on windows WSL2 | ### Describe the bug
$ python3
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> from TTS.api import TTS
>>> tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to("cpu")
> tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
> Using model: xtts
Segmentation fault (core dumped)
### To Reproduce
$ python3
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> from TTS.api import TTS
>>> tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to("cpu")
> tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
> Using model: xtts
Segmentation fault (core dumped)
### Expected behavior
Normal behaviour, as on the latest Ubuntu.
### Logs
_No response_
### Environment
```shell
$ python3 collect_env_info.py
{
"CUDA": {
"GPU": [],
"available": false,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.1+cu121",
"TTS": "0.21.1",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.12",
"version": "#3570-Microsoft Fri Sep 29 17:00:00 PST 2023"
}
```
### Additional context
_No response_ | closed | 2023-11-28T00:28:17Z | 2023-11-28T10:35:04Z | https://github.com/coqui-ai/TTS/issues/3322 | [
"bug"
] | Zibri | 1 |
deepinsight/insightface | pytorch | 2,119 | JMLR code couldn't get the correct result | Hello, when I used your JMLR code (without any modification) to train on the WCPA dataset, I couldn't get the correct result, especially when the face is sideways; the effect is shown below. I have done a complete walkthrough of your code, but have found no problems. Can you give me some suggestions? Or does the code have any special requirements for the environment? Thank you very much.
| closed | 2022-09-29T09:11:15Z | 2023-01-05T03:31:14Z | https://github.com/deepinsight/insightface/issues/2119 | [] | lcaikk1314 | 2 |
koxudaxi/datamodel-code-generator | fastapi | 1,853 | Broken backwards compatibility with black | **Describe the bug**
It seems that this PR #1829 has broken the compatibility with some older versions of black.
**To Reproduce**
The CLI breaks when importing all modules, no need to add any flag to reproduce the behaviour.
Used commandline:
```
$ datamodel-codegen
```
**Expected behavior**
See the CLI options
**Error**
```
Traceback (most recent call last):
File "/Users/ricardomartinez/opt/anaconda3/bin/datamodel-codegen", line 5, in <module>
from datamodel_code_generator.__main__ import main
File "/Users/ricardomartinez/opt/anaconda3/lib/python3.9/site-packages/datamodel_code_generator/__init__.py", line 32, in <module>
from datamodel_code_generator.format import PythonVersion
File "/Users/ricardomartinez/opt/anaconda3/lib/python3.9/site-packages/datamodel_code_generator/format.py", line 10, in <module>
import black.mode
ModuleNotFoundError: No module named 'black.mode'; 'black' is not a package
```
**Version:**
- OS: MacOS 12.2.1
- Python version: 3.9.12 (also reproduced in python 3.11)
- datamodel-code-generator version: [e.g. 22] 0.25.3
**Additional context**
black version: 19.10b0 | closed | 2024-02-11T23:22:53Z | 2024-02-13T18:06:58Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1853 | [
"bug"
] | rmargar | 0 |
Urinx/WeixinBot | api | 48 | msgType 43 may indicate a short video. | When a short video is sent, msgType is 43 and it cannot be saved. I handle 43 together with 62 to save short videos, but another problem is that recalled messages also get saved; putting the handling statement for 10002 together with the 62 and 43 handling solves it.
What other kinds of message could msgType 43 be? From running it so far, I have not encountered any other message types.
| open | 2016-04-28T10:43:06Z | 2016-04-28T10:43:06Z | https://github.com/Urinx/WeixinBot/issues/48 | [] | Zcc | 0 |
hankcs/HanLP | nlp | 1,396 | ModuleNotFoundError: No module named 'regex' | <!--
Please carefully fill out this form to bypass our spam filter. Please make sure that this is a bug. We only address bugs and feature requests issues on GitHub. Other questions should be posted on stackoverflow or https://bbs.hankcs.com/
以下必填,否则直接关闭。
-->
**Describe the bug**
**ModuleNotFoundError: No module named 'regex'**
**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
```python
Python 3.7.5 (default, Jan 6 2020, 17:18:04)
[Clang 11.0.0 (clang-1100.0.33.16)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import hanlp
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/user/.pyenv/versions/3.7.5/lib/python3.7/site-packages/hanlp/__init__.py", line 6, in <module>
import hanlp.common
File "/Users/user/.pyenv/versions/3.7.5/lib/python3.7/site-packages/hanlp/common/__init__.py", line 4, in <module>
from . import component
File "/Users/user/.pyenv/versions/3.7.5/lib/python3.7/site-packages/hanlp/common/component.py", line 17, in <module>
from hanlp.common.structure import SerializableDict
File "/Users/user/.pyenv/versions/3.7.5/lib/python3.7/site-packages/hanlp/common/structure.py", line 6, in <module>
from hanlp.utils.io_util import save_json, save_pickle, load_pickle, load_json, filename_is_json
File "/Users/user/.pyenv/versions/3.7.5/lib/python3.7/site-packages/hanlp/utils/__init__.py", line 5, in <module>
from . import rules
File "/Users/user/.pyenv/versions/3.7.5/lib/python3.7/site-packages/hanlp/utils/rules.py", line 3, in <module>
from hanlp.utils.english_tokenizer import tokenize_english
File "/Users/user/.pyenv/versions/3.7.5/lib/python3.7/site-packages/hanlp/utils/english_tokenizer.py", line 12, in <module>
from regex import compile, DOTALL, UNICODE, VERBOSE
ModuleNotFoundError: No module named 'regex'
```
**Describe the current behavior**
ModuleNotFoundError: No module named 'regex'
**Expected behavior**
No error, no warning.
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Darwin MacBook 19.2.0
- Python version: 3.7.5
- HanLP version: 2.0.0a10
**Other info / logs**
**Solved**
```
pip install regex
``` | closed | 2020-01-10T04:59:29Z | 2020-01-10T05:33:38Z | https://github.com/hankcs/HanLP/issues/1396 | [
"bug",
"auto-replied"
] | butlerwilson | 24 |
FactoryBoy/factory_boy | sqlalchemy | 473 | How to achieve RelatedFactory(factory, 'field') on a non-nullable field | How can you achieve a RelatedFactory on a field which is not nullable?
When I try this, I receive Django ValidationErrors: {'field': ['This field cannot be null.']} | closed | 2018-04-25T13:34:29Z | 2018-04-27T06:17:21Z | https://github.com/FactoryBoy/factory_boy/issues/473 | [] | jorenvh1 | 0 |
ndleah/python-mini-project | data-visualization | 7 | New Project - Digital Clock | # Description
Add New Project - Digital Clock
## Type of issue
- [X] Feature (New Script)
- [ ] Bug
- [ ] Documentation
## Checklist:
- [X] I have read the project guidelines.
- [X] I have checked previous issues to avoid duplicates.
- [X] This issue will be meaningful for the project.
<!-- Uncomment this in case you have a issue related to a bug in existing code.-->
<!--
- [ ] I have added screenshots of the bug
- [ ] I have added steps to reproduce the bug
- [ ] I have proposed a possible solution for the bug
-->
| closed | 2021-11-11T08:23:47Z | 2021-11-11T08:37:05Z | https://github.com/ndleah/python-mini-project/issues/7 | [] | AnishLohiya | 0 |
Sanster/IOPaint | pytorch | 85 | I opened http://localhost:8080 with Firefox and it showed a blank page with nothing on it. | I opened http://localhost:8080 with Firefox and it showed a blank page with nothing on it.
Here are the log messages:
lama-cleaner --model=lama --devi=cpu --port=8080
/home/a/.local/lib/python3.9/site-packages/torch/amp/autocast_mode.py:198: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
/home/a/.local/lib/python3.9/site-packages/torch/amp/autocast_mode.py:198: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
2022-10-10 09:30:54.027 | INFO | lama_cleaner.model.lama:init_model:30 - Load LaMa model from: /home/a/.cache/torch/hub/checkpoints/big-lama.pt
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
127.0.0.1 - - [10/Oct/2022 09:33:12] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [10/Oct/2022 09:33:12] "GET /static/css/main.466cfae4.chunk.css HTTP/1.1" 304 -
127.0.0.1 - - [10/Oct/2022 09:33:12] "GET /static/js/2.cf5073aa.chunk.js HTTP/1.1" 304 -
127.0.0.1 - - [10/Oct/2022 09:33:12] "GET /static/js/main.54107436.chunk.js HTTP/1.1" 304 -
127.0.0.1 - - [10/Oct/2022 09:33:12] "GET /static/media/WorkSans-Regular.bb287b89.ttf HTTP/1.1" 304 -
| closed | 2022-10-10T01:45:46Z | 2022-10-11T00:12:40Z | https://github.com/Sanster/IOPaint/issues/85 | [] | popdog0 | 2 |
alteryx/featuretools | scikit-learn | 2,165 | Add series_library argument to transform primitives | - https://github.com/alteryx/featuretools/pull/2111 | closed | 2022-06-30T21:36:41Z | 2022-09-28T15:46:48Z | https://github.com/alteryx/featuretools/issues/2165 | [
"enhancement"
] | gsheni | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 1,322 | HUH?? | After I finish an ensemble, when it outputs the ensemble, it crashes right at the end. How do I fix this?
| closed | 2024-05-05T08:57:58Z | 2024-05-05T12:25:46Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1322 | [] | RaduMihaiEnache | 0 |
JoeanAmier/TikTokDownloader | api | 24 | Notes on Python and third-party module versions | # Python Version
The author develops with the latest Python, `3.11`, and uses some new syntax, so older Python versions will raise errors.
If you use Python `3.9` or later, everything works normally once the required third-party modules are installed.
If you use a Python version below `3.9`, replace the contents of the `src/StringCleaner.py` file with the following code and it will work normally.
```
from platform import system
from string import whitespace
from typing import Dict  # typing.Dict keeps the annotation valid below Python 3.9


class Cleaner:
    def __init__(self):
        """
        Replace illegal characters contained in a string. By default the
        illegal-character dictionary is generated according to the OS type,
        but a custom dictionary can also be set.
        """
        self.rule = self.default_rule()  # default illegal-character dictionary

    @staticmethod
    def default_rule():
        """Generate the default illegal-character dictionary for the current OS"""
        s = system()  # plain assignment instead of the walrus operator, for older Pythons
        if s in ("Windows", "Darwin"):
            rule = {
                "/": "",
                "\\": "",
                "|": "",
                "<": "",
                ">": "",
                "\"": "",
                "?": "",
                ":": "",
                "*": "",
                "\x00": "",
            }  # Windows and macOS
        elif s == "Linux":
            rule = {
                "/": "",
                "\x00": "",
            }  # Linux
        else:
            print("Unsupported operating system type; illegal characters may not be removed correctly!")
            rule = {}
        cache = {i: "" for i in whitespace[1:]}  # add newlines and other whitespace as illegal characters
        return {**rule, **cache}

    def set_rule(self, rule: Dict[str, str], update=False):
        """
        Set the illegal-character dictionary
        :param rule: replacement rules as a dict; keys are illegal characters, values are their replacements
        :param update: if True, merge with the existing rule dictionary; otherwise replace it
        """
        self.rule = {**self.rule, **rule} if update else rule

    def filter(self, text: str) -> str:
        """
        Remove illegal characters
        :param text: the string to process
        :return: the string after replacement, or None if the result is empty
        """
        if not text:
            return text
        for i in self.rule:
            text = text.replace(i, self.rule[i])
        return text or None
```
# Third-Party Module Versions
The third-party modules the author uses are all the latest versions: `Flask 2.3.2`, `requests 2.31.0`, `openpyxl 3.1.2`, `PyExecJS2 1.6.1`. They can be installed with the following commands.
```
pip install requests
pip install openpyxl
pip install Flask
pip install PyExecJS2
``` | closed | 2023-07-05T13:14:55Z | 2023-07-20T03:55:09Z | https://github.com/JoeanAmier/TikTokDownloader/issues/24 | [
"文档补充(docs)"
] | JoeanAmier | 1 |
miguelgrinberg/Flask-Migrate | flask | 478 | Is it possible to upgrade a single schema within a multitenant database using Flask-Migrate? | My Flask app runs a multitenant database with each schema reserved for a user. On registration of a new user I create a new schema and run an upgrade to populate the schema with the required tables/data. I've scaled to a point where the performance of the registration route has become unacceptable (~10 s), since the upgrade goes through every single schema.
Is there a way to override the upgrade() method from Flask-Migrate to do the upgrade only for a specific schema? | closed | 2022-08-12T05:00:52Z | 2022-08-12T08:55:49Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/478 | [
"question"
] | jaseel-cognicept | 1 |
ray-project/ray | pytorch | 51,310 | [Dashboard] Job start and end time are not updated for daylight savings | ### What happened + What you expected to happen
The displayed start and end times for jobs are not correct. The logs show the correct time. Note that our submission ID includes the local datetime as a suffix as well, so we can see this doesn't match. This persists even if we tear down/redeploy the cluster.


### Versions / Dependencies
Running Ray 2.40.0. Python 2.12.3.
### Reproduction script
1. Spin up ray cluster
2. Submit job
3. Look at the displayed start and end times
### Issue Severity
None | open | 2025-03-12T15:42:24Z | 2025-03-18T02:08:21Z | https://github.com/ray-project/ray/issues/51310 | [
"bug",
"dashboard",
"triage",
"observability"
] | bhmiller | 5 |
huggingface/datasets | computer-vision | 6,951 | load_dataset() should load all subsets, if no specific subset is specified | ### Feature request
Currently load_dataset() forces users to specify a subset. Example:
`from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")`
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-10-c0cb49385da6>](https://localhost:8080/#) in <cell line: 2>()
1 from datasets import load_dataset
----> 2 dataset = load_dataset("m-a-p/COIG-CQIA")
3 frames
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _create_builder_config(self, config_name, custom_features, **config_kwargs)
582 if not config_kwargs:
583 example_of_usage = f"load_dataset('{self.dataset_name}', '{self.BUILDER_CONFIGS[0].name}')"
--> 584 raise ValueError(
585 "Config name is missing."
586 f"\nPlease pick one among the available configs: {list(self.builder_configs.keys())}"
ValueError: Config name is missing.
Please pick one among the available configs: ['chinese_traditional', 'coig_pc', 'exam', 'finance', 'douban', 'human_value', 'logi_qa', 'ruozhiba', 'segmentfault', 'wiki', 'wikihow', 'xhs', 'zhihu']
Example of usage:
`load_dataset('coig-cqia', 'chinese_traditional')`
```
This means a dataset cannot contain all the subsets at the same time. I guess one workaround is to manually specify the subset files like in [here](https://huggingface.co/datasets/m-a-p/COIG-CQIA/discussions/1#658698b44bb41498f75c5622), which is clumsy.
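A hedged sketch of another workaround, using `get_dataset_config_names` to enumerate the subsets and concatenate them (this assumes every subset shares a compatible schema and a `train` split, which may not hold for all datasets):
```python
from datasets import concatenate_datasets, get_dataset_config_names, load_dataset

repo = "m-a-p/COIG-CQIA"
configs = get_dataset_config_names(repo)
# Load every subset's train split and stitch them into one dataset.
merged = concatenate_datasets(
    [load_dataset(repo, name, split="train") for name in configs]
)
```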
### Motivation
Ideally, if no subset is specified, the API should just try to load all subsets. This would make it much easier to handle datasets with subsets.
### Your contribution
Not sure since I'm not familiar w/ the lib src. | closed | 2024-06-04T11:02:33Z | 2024-11-26T08:32:18Z | https://github.com/huggingface/datasets/issues/6951 | [
"enhancement"
] | windmaple | 5 |
tensorpack/tensorpack | tensorflow | 880 | Can not restore moving_mean, moving_variance and ExponentialMovingAverage from checkpoint | When I restore params from a checkpoint, I get the following warnings:
Actually, I can restore the `BatchNorm/beta` and `BatchNorm/gamma`, but I don't know whether it affect the model performance if don't restore the `ExponentialMovingAverage`, `moving_variance`, and `moving_mean`.
And I can change the name https://github.com/tensorpack/tensorpack/blob/fd19f4e21493b8c6dead47aefeaf574879c2ef4b/tensorpack/models/batch_norm.py#L35
to the corresponding name to load the `moving_mean/ExponentialMovingAverage:0`. But I don't know whether it is ok.
```
[0831 07:13:22 @sessinit.py:90] WRN The following variables are in the checkpoint, but not found in the graph:
resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/beta/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/gamma/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/weights/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/beta/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/gamma/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/weights/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/beta/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/gamma/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/weights/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/beta/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance/ExponentialMovingAverage:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance:0,
resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/weights/ExponentialMovingAverage:0,
```
```
[0831 06:36:41 @sessinit.py:90] WRN **The following variables are in the graph, but not found in the checkpoint:**
resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block1/unit_2/bottleneck_v1/conv1/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block1/unit_2/bottleneck_v1/conv1/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block1/unit_2/bottleneck_v1/conv2/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block1/unit_2/bottleneck_v1/conv2/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block1/unit_3/bottleneck_v1/conv1/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block1/unit_3/bottleneck_v1/conv1/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block1/unit_3/bottleneck_v1/conv2/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block1/unit_3/bottleneck_v1/conv2/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block2/unit_1/bottleneck_v1/conv1/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block2/unit_1/bottleneck_v1/conv1/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block2/unit_1/bottleneck_v1/conv2/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block2/unit_1/bottleneck_v1/conv2/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block2/unit_2/bottleneck_v1/conv1/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block2/unit_2/bottleneck_v1/conv1/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block2/unit_2/bottleneck_v1/conv2/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block2/unit_2/bottleneck_v1/conv2/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block2/unit_3/bottleneck_v1/conv1/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block2/unit_3/bottleneck_v1/conv1/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block2/unit_3/bottleneck_v1/conv2/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block2/unit_3/bottleneck_v1/conv2/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block2/unit_4/bottleneck_v1/conv1/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block2/unit_4/bottleneck_v1/conv1/BatchNorm/variance/ExponentialMovingAverage,
resnet_v1_101/block2/unit_4/bottleneck_v1/conv2/BatchNorm/mean/ExponentialMovingAverage,
resnet_v1_101/block2/unit_4/bottleneck_v1/conv2/BatchNorm/variance/ExponentialMovingAverage,
``` | closed | 2018-08-31T07:32:10Z | 2018-09-06T20:07:06Z | https://github.com/tensorpack/tensorpack/issues/880 | [
"unrelated"
] | fanq15 | 3 |
apify/crawlee-python | web-scraping | 306 | Better format statistics logging | It seems we have 8 spaces indentation at the beginning:
```text
[crawlee.statistics.statistics] INFO crawlee.beautifulsoup_crawler.beautifulsoup_crawler request statistics {
"requests_finished": 0,
"requests_failed": 0,
"retry_histogram": [
0
],
"request_avg_failed_duration": null,
"request_avg_finished_duration": null,
"requests_finished_per_minute": 0,
"requests_failed_per_minute": 0,
"request_total_duration": 0.0,
"requests_total": 0,
"crawler_runtime": 0.007741
}
```
```text
[crawlee.beautifulsoup_crawler.beautifulsoup_crawler] INFO Final request statistics: {
"requests_finished": 32,
"requests_failed": 0,
"retry_histogram": [
32
],
"request_avg_failed_duration": null,
"request_avg_finished_duration": 0.349596,
"requests_finished_per_minute": 320,
"requests_failed_per_minute": 0,
"request_total_duration": 11.187069,
"requests_total": 32,
"crawler_runtime": 6.007066
}
``` | closed | 2024-07-15T14:00:46Z | 2024-08-06T14:49:00Z | https://github.com/apify/crawlee-python/issues/306 | [
"t-tooling"
] | vdusek | 3 |
plotly/dash-cytoscape | plotly | 171 | FR: 3D Networks | Hi @xhlulu,
I am massively impressed by the work you have done here. Cytoscape is incredibly useful and I love it!
I wondered if it might be possible to add the ability to render networks in 3D. As you know, Plotly already offers 3D plotting (https://plotly.com/python/3d-charts/). It would be awesome if that support were added to Cytoscape as well!
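In the meantime, a hedged workaround sketch with plain Plotly (go.Scatter3d for edges and nodes; this is not Cytoscape styling, just to show that a 3D network can be drawn today, with a tiny made-up graph):
```python
import plotly.graph_objects as go

# Hypothetical graph: node positions and an edge list.
nodes = {"a": (0, 0, 0), "b": (1, 0, 1), "c": (0, 1, 1)}
edges = [("a", "b"), ("b", "c")]

edge_x, edge_y, edge_z = [], [], []
for u, v in edges:
    for axis, store in zip(range(3), (edge_x, edge_y, edge_z)):
        store.extend([nodes[u][axis], nodes[v][axis], None])  # None breaks the line

fig = go.Figure([
    go.Scatter3d(x=edge_x, y=edge_y, z=edge_z, mode="lines", name="edges"),
    go.Scatter3d(
        x=[p[0] for p in nodes.values()],
        y=[p[1] for p in nodes.values()],
        z=[p[2] for p in nodes.values()],
        mode="markers+text", text=list(nodes), name="nodes",
    ),
])
fig.show()
```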
Best,
Peter | open | 2022-04-16T10:50:22Z | 2022-04-19T16:36:22Z | https://github.com/plotly/dash-cytoscape/issues/171 | [] | ghost | 1 |
piskvorky/gensim | data-science | 3,016 | Update unittests to work with the newest version of scikit.learn | closed | 2020-12-27T06:52:31Z | 2020-12-27T15:43:08Z | https://github.com/piskvorky/gensim/issues/3016 | [
"housekeeping"
] | mpenkov | 4 |
plotly/dash | jupyter | 2,585 | dcc.Location unable to handle path separator '/' | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.8.1
dash-ag-grid 2.0.0a1
dash-bootstrap-components 1.3.1
dash-core-components 2.0.0
dash-daq 0.5.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
Bug exists in all browsers I have.
**Describe the bug**
I have the following code:
```
@callback(
Output('url-output', 'pathname'),
[Input('tabs', 'active_tab'), Input('site-table', 'selectionChanged')],
prevent_initial_call=True)
def url_manger(tab_id, site_selection):
path = tab_id
if site_selection:
path += '/' + site_selection[0]['Site_id']
return path
```
When site selection is activated, the new url is appended to the tab section of existing path: "site_tab/:site_id:" -> "site_tab/site_tab/:site_id:"; if tab is activated instead (site_selection = None), things work as expected.
If I change the delimiter from '/' (slash) to any other character, for example '_' (underscore), things work as expected: nothing gets appended to the path and the path is exactly the same as the return value.
**Expected behavior**
I think I should be able to use '/' in the Output value for dcc.Location.pathname? If I cannot, how should I delimit paths, or what should I do? The documentation says nothing about this. As far as I know there is no way to attach dcc.Link to the values of components, so I can't avoid relying on callbacks here.
I use a quite basic Dash setup with stylesheets and a Redis cache. I do not use the Page Registry, because I am refactoring someone else's code from a tab-oriented single-page app towards a url-stateful multi-page app. I am trying to do this without the Page Registry, as it would demand more refactoring. Everything works if I use '_' (underscore) as the "path separator"; it just looks silly.
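One hedged guess (not verified against the Dash source): a pathname returned without a leading '/' may be treated as a relative URL and resolved against the current location, which would explain the appending. A minimal sketch of the callback always returning an absolute path:
```python
from dash import Input, Output, callback

@callback(
    Output('url-output', 'pathname'),
    [Input('tabs', 'active_tab'), Input('site-table', 'selectionChanged')],
    prevent_initial_call=True)
def url_manger(tab_id, site_selection):
    # Build an absolute path so the browser cannot resolve it
    # relative to the current pathname.
    parts = [tab_id]
    if site_selection:
        parts.append(site_selection[0]['Site_id'])
    return '/' + '/'.join(parts)
```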
**Screenshots**
N/A | open | 2023-07-06T14:20:34Z | 2024-08-13T19:34:52Z | https://github.com/plotly/dash/issues/2585 | [
"bug",
"P3"
] | AhtiAhdeElisa | 0 |
wger-project/wger | django | 1,347 | The total energy (kcal) is not the approximate sum of the energy provided by protein, carbohydrates and fat (kcal +/-15%) | ## Steps to Reproduce
Hello !
When trying to enter some ingredients (ham) I got the following message
The total energy (109kcal) is not the approximate sum of the energy provided by protein, carbohydrates and fat (55.2kcal +/-15%)
The ham I try to enter is like this one https://www.herta.fr/produits/jambons-blancs/bon-paris/herta-bon-paris-jambon-a-etouffee-x4-170g
**Expected results:**
I can bypass the message (at least on self-hosted instances) and create the ingredient
**Actual results:**
The total energy (109kcal) is not the approximate sum of the energy provided by protein, carbohydrates and fat (55.2kcal +/-15%)
| open | 2023-06-08T09:10:18Z | 2024-06-03T18:11:05Z | https://github.com/wger-project/wger/issues/1347 | [] | daufinsyd | 3 |
bigscience-workshop/petals | nlp | 386 | Official website disappeared | Hi everyone,
I'm new to this fascinating project and I'm eager to explore more documents on the official website. Unfortunately, it seems that https://petals.ml/ is currently inaccessible. Could someone please look into this issue and either correct the link or consider removing it from the project since it appears to be missing or unavailable?
Thank you for your attention. | closed | 2023-07-19T22:45:31Z | 2023-07-20T17:34:40Z | https://github.com/bigscience-workshop/petals/issues/386 | [] | edsonke | 3 |
pywinauto/pywinauto | automation | 936 | Waiting for the element availability & Visibility | ## Expected Behavior
After an action such as a button click, we need to wait for the next element to be loaded and available before acting on it.
## Actual Behavior
When a search operation is triggered by clicking the 'Search' button, I add sleep(seconds) with a guessed duration to delay the next line of code. If the search takes longer than that and the sleep(seconds) runs out, execution simply continues and goes ahead to click the 'Reset' button.
## Steps to Reproduce the Problem
N/A
## Short Example of Code to Demonstrate the Problem
```
mainPanel.child_window(auto_id='searchButton', control_type='Button').click_input()
sleep(6)
ImageGrab.grab().save("results.png")
mainPanel.child_window(auto_id='resetButton', control_type='Button').click_input()
```
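For reference, a hedged sketch of replacing the fixed sleep with pywinauto's built-in waiting; `resultsGrid` is a hypothetical `auto_id`, so substitute whatever element actually appears when the search finishes:
```python
mainPanel.child_window(auto_id='searchButton', control_type='Button').click_input()

# Block until the results element exists and is visible/enabled (up to 30 s)
# instead of sleeping for a guessed duration.
results = mainPanel.child_window(auto_id='resultsGrid')
results.wait('exists visible enabled', timeout=30, retry_interval=0.5)

ImageGrab.grab().save("results.png")
mainPanel.child_window(auto_id='resetButton', control_type='Button').click_input()
```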
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.7.4 64 bit
- Platform and OS: Windows 10
| open | 2020-05-21T10:15:58Z | 2020-05-31T06:14:30Z | https://github.com/pywinauto/pywinauto/issues/936 | [
"question",
"documentation"
] | jjbright | 9 |
nltk/nltk | nlp | 3,035 | Acronyms with periods at the end of the sentence are tokenized incorrectly | If acronyms with periods are at the end of the sentence, `TreebankWordTokenizer` and `NLTKWordTokenizer` would split the last period (which serves both as part of the acronym and as a full stop):
```
>>> import nltk
>>> nltk.TreebankWordTokenizer().tokenize('I have been to U.S.A.')
['I', 'have', 'been', 'to', 'U.S.A', '.']
>>> nltk.NLTKWordTokenizer().tokenize('I have been to U.S.A.')
['I', 'have', 'been', 'to', 'U.S.A', '.']
```
Sentences with acronyms at the start or middle are handled correctly:
```
>>> nltk.TreebankWordTokenizer().tokenize('U.S.A. is a north American country.')
['U.S.A.', 'is', 'a', 'north', 'American', 'country', '.']
>>> nltk.NLTKWordTokenizer().tokenize('U.S.A. is a north American country.')
['U.S.A.', 'is', 'a', 'north', 'American', 'country', '.']
```
| closed | 2022-08-23T08:50:04Z | 2022-12-13T21:41:31Z | https://github.com/nltk/nltk/issues/3035 | [
"bug",
"tokenizer",
"need-help"
] | BLKSerene | 6 |
fastapi-users/fastapi-users | asyncio | 83 | Default values when call endpoint /me with custom User Model | Hi Frankie567
First of all thanks for this promising plugin !
I have added new fields to the User model class, as mentioned in the documentation:
```
class User(BaseUser):
fullname: Optional[str] = None
creation_date: Optional[datetime] = datetime.utcnow()
```
When I call the register endpoint, the User is correctly saved with all fields completed.
_fullname = 'test'_
_creation_date = '2020-01-02T08:23:34.014678'_
But when, after logging in, I call the `/me` endpoint, only the 'base' fields (email, password, is_active) are populated from the database; the new fields (fullname, creation_date) are shown with their default values.
`{"id":"8ce5f915-3218-41b0-a75f-a18055b11176","email":"test@plop.fr","is_active":true,"is_superuser":true,"fullname":null,"creation_date":"2020-01-03T09:36:27.023413"}`
So maybe I'm missing something, or do I need to override other methods in my User model class?
| closed | 2020-01-03T09:52:32Z | 2020-01-04T17:19:57Z | https://github.com/fastapi-users/fastapi-users/issues/83 | [
"bug"
] | MariusMez | 4 |
huggingface/datasets | computer-vision | 6,863 | Revert temporary pin huggingface-hub < 0.23.0 | Revert temporary pin huggingface-hub < 0.23.0 introduced by
- #6861
once the following issue is fixed and released:
- huggingface/transformers#30618 | closed | 2024-05-03T05:53:55Z | 2024-05-27T10:14:41Z | https://github.com/huggingface/datasets/issues/6863 | [] | albertvillanova | 0 |
plotly/plotly.py | plotly | 4,660 | Choropleth maps only render the first feature with a matching featureidkey, not all matching features | When px.choropleth() is called with a GeoJSON feature collection containing multiple features matching a single featureidkey value, only the first feature is rendered. I believe all features should be so rendered.
The following code generates two images: the first renders all four features, while the second renders only three. The first uses a featureidkey unique to all features, the second uses a featureidkey that is not unique -- only the first feature to match is rendered. :-(
```python
#!/usr/bin/env python3
import pandas as pd
import plotly.express as px
# A simple GeoJSON with four features.
# Each feature has a district.
# One district has two features.
geojson = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {
"name": "Alpha",
"district": "One",
},
"geometry": {
"type": "Polygon",
"coordinates": [[[0, 0], [0, 5], [5, 5], [5, 0], [0, 0]]],
},
},
{
"type": "Feature",
"properties": {
"name": "Bravo",
"district": "One",
},
"geometry": {
"type": "Polygon",
"coordinates": [[[5, 0], [5, 5], [10, 5], [10, 0], [5, 0]]],
},
},
{
"type": "Feature",
"properties": {
"name": "Charlie",
"district": "Two",
},
"geometry": {
"type": "Polygon",
"coordinates": [[[0, 5], [0, 10], [5, 10], [5, 5], [0, 5]]],
},
},
{
"type": "Feature",
"properties": {
"name": "Delta",
"district": "Three",
},
"geometry": {
"type": "Polygon",
"coordinates": [[[5, 5], [5, 10], [10, 10], [10, 5], [5, 5]]],
},
},
],
}
districts = {
f["properties"]["name"]: f["properties"]["district"] for f in geojson["features"]
}
# A simple dataframe relating names and districts from the above GeoJSON data.
# i.e., [{'name': 'Alpha', 'district': 'One'}, ...]
data = [f['properties'] for f in geojson['features']]
df = pd.DataFrame(data)
# This code generates two maps from the data:
# * The name map shows all four squares in the same color (count = 1)
# * The district map shows _three_ squares, one with one color (count = 2) and two with another color (count = 1)
# What I expected:
# * The district map should show all four squares, two with light color and two with dark
for key in ["name", "district"]:
region_counts = df.groupby([key], observed=False).size().reset_index(name="count")
fig = px.choropleth(
region_counts,
geojson=geojson,
fitbounds='geojson',
locations=key,
color="count",
featureidkey=f"properties.{key}",
)
fig.write_image(f"mwe-{key}.png")
```
| open | 2024-07-11T00:06:34Z | 2024-08-13T13:22:25Z | https://github.com/plotly/plotly.py/issues/4660 | [
"bug",
"P3"
] | mathuin | 0 |
frappe/frappe | rest-api | 31,824 | Multiple Blogger support in "Social" module | A blog could be written by multiple bloggers (table multi-select?) | open | 2025-03-20T09:28:32Z | 2025-03-20T09:28:32Z | https://github.com/frappe/frappe/issues/31824 | [
"feature-request"
] | NagariaHussain | 0 |
aleju/imgaug | machine-learning | 113 | CropAndPad cannot be deterministic ? | I use this code to generate some images and corresponding masks, but it turns out the generated images and masks are not consistent.
```
import numpy as np
import imgaug.augmenters as iaa

seq = iaa.Sequential(
    iaa.CropAndPad(percent=(-0.5, -0.2)),
)
seq_det = seq.to_deterministic()
images_aug = seq_det.augment_images(images)  # `images` is my list of images
masks_aug = seq_det.augment_images(masks)    # `masks` is my list of masks
# combine masks into one image
mask_ = np.zeros_like(masks_aug[0])
for m in masks_aug:
    mask_ = np.maximum(mask_, m)
plot_list([images[0], images_aug[0]], [np.squeeze(mask_)])  # plot_list is my own plotting helper
```
Any help is appreciated! | open | 2018-03-28T07:02:23Z | 2020-12-14T18:19:39Z | https://github.com/aleju/imgaug/issues/113 | [] | GuangsZuo | 10 |
matplotlib/matplotlib | data-visualization | 29,507 | [Bug]: Duplicating the labels in the `height`/`width` argument in `barh()`/`bar` leads to undrawn bars | ### Bug summary
When there are duplicate labels in my label array (for example: `['first label', 'second label’, ‘third label', 'second label']`), `ax.bar()` and `ax.barh()` ignore the duplicates (both bars and labels).
### Code for reproduction
```Python
import matplotlib.pyplot as plt
name = [
"first label",
"second label",
"third label",
"second label",
]
value = [1, 2, 3, 4]
fig, ax = plt.subplots(layout="tight")
ax.barh(name, value)
for i in range(len(name)):
n = name[i]
v = value[i]
ax.text(x=v + 1, y=i, s=f"{n} ({i})", va="center")
```
### Actual outcome

### Expected outcome
A bar chart with 4 bars (the second ‘second label’ would be above the bar of the ‘third label’).
What's more, for some reason the bar on the ‘second label’ has a value of 4, whereas I was intuitively expecting it to have a value of 2?
### Additional information
You might expect there to be only single labels, but as this isn't explained in the documentation, I've pointed it out just in case. It's quite hard to spot when you're drawing a lot of bars.
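For anyone who needs duplicated labels today, a hedged workaround sketch is to plot against numeric positions and only use the (possibly duplicated) strings as tick labels:
```python
import matplotlib.pyplot as plt

name = ["first label", "second label", "third label", "second label"]
value = [1, 2, 3, 4]

fig, ax = plt.subplots(layout="tight")
pos = range(len(name))           # one bar per row, duplicates allowed
ax.barh(pos, value)
ax.set_yticks(pos, labels=name)  # labels are purely cosmetic here
```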
### Operating system
MacOS Sonoma
### Matplotlib Version
3.10.0
### Matplotlib Backend
module://positron_ipykernel.matplotlib_backend
### Python version
3.13.1
### Jupyter version
/
### Installation
pip | closed | 2025-01-23T16:28:52Z | 2025-01-25T00:38:36Z | https://github.com/matplotlib/matplotlib/issues/29507 | [
"topic: units and array ducktypes"
] | JosephBARBIERDARNAL | 7 |
RomelTorres/alpha_vantage | pandas | 10 | outputsize='full' in get_daily_adjusted() traceback | Hello, when I use the optional argument outputsize='full' with the get_daily_adjusted() function I get:
TypeError: __init__() got an unexpected keyword argument 'outputsize'
The code snippet I use:
```
import alpha_vantage
from alpha_vantage.timeseries import TimeSeries
ts = TimeSeries(key='my API key', outputsize='full')
data = ts.get_daily_adjusted('VOO')
```
What am I doing wrong?
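One hedged guess: `outputsize` looks like a per-request option rather than a constructor argument, so something like the following may be what was intended (a sketch, not verified against this exact library version):
```python
from alpha_vantage.timeseries import TimeSeries

ts = TimeSeries(key='my API key')
# Pass outputsize to the data call itself instead of the constructor.
data, meta = ts.get_daily_adjusted('VOO', outputsize='full')
```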
Another question: is there a way to retrieve a defined timeframe of data (for example, the last year)? | closed | 2017-06-18T13:38:18Z | 2017-06-19T06:04:35Z | https://github.com/RomelTorres/alpha_vantage/issues/10 | [
"invalid"
] | stnatter | 1 |
davidsandberg/facenet | computer-vision | 1,127 | Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor | Running training
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-25-462d105cd07e> in <module>()
total_loss, train_op, summary_op, summary_writer, regularization_losses,learning_rate_schedule_file,
stat, cross_entropy_mean, accuracy, learning_rate,
--> prelogits, prelogits_center_loss, random_rotate,random_crop, random_flip, prelogits_norm, prelogits_hist_max,use_fixed_image_standardization)
stat['time_train'][epoch-1] = time.time() - t
1 frames
<ipython-input-21-b568c1e15095> in train(sess, epoch, image_list, label_list, index_dequeue_op, enqueue_op, image_paths_placeholder, labels_placeholder, learning_rate_placeholder, phase_train_placeholder, batch_size_placeholder, control_placeholder, step, loss, train_op, summary_op, summary_writer, reg_losses, learning_rate_schedule_file, stat, cross_entropy_mean, accuracy, learning_rate, prelogits, prelogits_center_loss, random_rotate, random_crop, random_flip, prelogits_norm, prelogits_hist_max, use_fixed_image_standardization)
batch_number = 0
----> if learning_rate > 0.0:
lr = learning_rate
else:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in __bool__(self)
`TypeError`.
"""
raise TypeError("Using a `tf.Tensor` as a Python `bool` is not allowed. "
"Use `if t is not None:` instead of `if t:` to test if a "
"tensor is defined, and use TensorFlow ops such as "
TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor. | closed | 2020-01-07T06:12:24Z | 2020-01-07T08:40:17Z | https://github.com/davidsandberg/facenet/issues/1127 | [] | KowsalyaR97 | 0 |