repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
saulpw/visidata | pandas | 2,272 | [tsv] add CLI option to use NUL as delimiter | It's useful to parse output from GNU grep's `-Z` option. That produces lines that in Python are `f'{filename}\0{line}\n'`, instead of the usual `f'{filename}:{line}\n'`.
Right now the command line can't be used to specify a NUL delimiter, as in `vd --delimiter="\0"`, because `sys.argv` strings are NUL-terminated and can't ever contain NUL.
My workarounds for now are to use .visidatarc, and either add a temporary line:
`vd.option('delimiter', '\x00', 'field delimiter to use for tsv/usv filetype', replay=True)`.
or add a new filetype to allow `vd -f nsv`:
```
@VisiData.api
def open_nsv(vd, p):
    tsv = TsvSheet(p.base_stem, source=p)
    tsv.delimiter = '\x00'
    tsv.reload()
    return tsv
```
Can `open_nsv()` be written without `reload()` right now? I couldn't think of another way to set `delimiter` for TsvSheet. | closed | 2024-01-26T00:29:50Z | 2024-05-25T06:18:08Z | https://github.com/saulpw/visidata/issues/2272 | [
"wishlist",
"wish granted"
] | midichef | 8 |
ultralytics/ultralytics | deep-learning | 18,711 | Why does the mAP increase by only 0.001 percent every epoch? Any suggestions on how to make it faster? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
Hello,
I’ve been training a YOLO model on a custom dataset and have noticed that the mean Average Precision (mAP) increases by approximately 0.001% with each epoch. The training process doesn't provide clear guidance on when to stop, and I'm concerned that the model might be overfitting. However, the confusion matrix at epoch 400 doesn't seem to indicate overfitting.
Do you have any suggestions on how to determine the optimal stopping point or strategies to prevent potential overfitting?
Thank you!
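For illustration, a minimal sketch of one way to pick the stopping point automatically, assuming the standard `ultralytics` Python API (the weights file, dataset path and patience value here are placeholders):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder weights
model.train(
    data="custom_dataset.yaml",  # placeholder dataset config
    epochs=400,
    patience=50,  # early stopping: halt once val fitness stops improving for 50 epochs
)
```
With `patience` set, training stops on its own well before the full epoch budget if the validation metrics plateau, which also limits how long the model can keep drifting toward overfitting.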
<img width="855" alt="Image" src="https://github.com/user-attachments/assets/3cd039bc-5ed8-4ea2-b646-1b47bfd0c1f5" />
Thanks
### Additional
_No response_ | open | 2025-01-16T12:15:37Z | 2025-01-16T13:59:07Z | https://github.com/ultralytics/ultralytics/issues/18711 | [
"question",
"detect"
] | khandriod | 2 |
polakowo/vectorbt | data-visualization | 651 | Bug: Stop loss results are very different between from_signals(sl_stop=0.1) and generate_ohlc_stop_exits(sl_stop=0.1) | I'm trying to compare the stop loss mechanisms implemented in different ways, such as model 1: `from_signals(sl_stop=0.1, sl_trail=False)`, model 2: `generate_ohlc_stop_exits(sl_stop=0.1, sl_trail=False)` and model 3: `vbt.OHLCSTX.run(sl_stop=[0.1])`.
I found that model 2 and model 3 gave the exact same results (i.e., total return = 459.05%), but for model 1, the results are very different (i.e., total return = 47.47%), as displayed below:
Model 1: `from_signals(sl_stop=0.1, sl_trail=False)`

Model 2: `generate_ohlc_stop_exits(sl_stop=0.1, sl_trail=False)`

Model 3: `vbt.OHLCSTX.run(sl_stop=[0.1])`

Is it a bug? Or is it my implementation error?
And btw, here is where I got the reference from: https://github.com/polakowo/vectorbt/issues/181
Below are the codes for these 3 models:
--------------------------------------------------
Model 1: Using `from_signals(sl_stop=0.1, sl_trail=False)`
```
# Reference: stop exits with RANDENEX indicator: https://github.com/polakowo/vectorbt/issues/181
import vectorbt as vbt
ohlcv = vbt.YFData.download(
"BTC-USD",
start='2017-01-01 UTC',
end='2020-01-01 UTC'
).concat()
# Random enter signal generator based on the number of signals.
rand = vbt.RAND.run(ohlcv["Close"].shape, n=10, seed=42)
# Random exit signal generator based on the number of signals.
randx = vbt.RANDX.run(rand.entries, seed=42)
pf1 = vbt.Portfolio.from_signals(ohlcv["Close"],
rand.entries,
randx.exits,
open=ohlcv["Open"],
high=ohlcv["High"],
low=ohlcv["Low"],
sl_stop=0.1,
sl_trail=False,
)
pf1.stats()
```
Model 2: Using `generate_ohlc_stop_exits(sl_stop=0.1, sl_trail=False) `
```
import vectorbt as vbt
ohlcv = vbt.YFData.download(
"BTC-USD",
start='2017-01-01 UTC',
end='2020-01-01 UTC'
).concat()
# Random enter signal generator based on the number of signals.
rand = vbt.RAND.run(ohlcv["Close"].shape, n=10, seed=42)
# Random exit signal generator based on the number of signals.
randx = vbt.RANDX.run(rand.entries, seed=42)
stop_exits = rand.entries.vbt.signals.generate_ohlc_stop_exits(
open=ohlcv["Open"],
high=ohlcv['High'],
low=ohlcv['Low'],
close=ohlcv['Close'],
sl_stop=0.1,
sl_trail=False,
)
exits = randx.exits.vbt | stop_exits # optional: combine exit signals such that the first exit of two conditions wins
entries, exits = rand.entries.vbt.signals.clean(exits) # optional: automatically remove ignored exit signals
pf2 = vbt.Portfolio.from_signals(ohlcv['Close'], entries, exits,
open=ohlcv["Open"],high=ohlcv["High"],
low=ohlcv["Low"])
pf2.stats()
```
Model 3: Using `vbt.OHLCSTX.run(sl_stop=[0.1])`
```
import numpy
import vectorbt as vbt
ohlcv = vbt.YFData.download(
"BTC-USD",
start='2017-01-01 UTC',
end='2020-01-01 UTC'
).concat()
# Random enter signal generator based on the number of signals.
rand = vbt.RAND.run(ohlcv["Close"].shape, n=10, seed=42)
# Random exit signal generator based on the number of signals.
randx = vbt.RANDX.run(rand.entries, seed=42)
stops = [0.1,]
sl_exits = vbt.OHLCSTX.run(
rand.entries,
ohlcv['Open'],
ohlcv['High'],
ohlcv['Low'],
ohlcv['Close'],
sl_stop=list(stops),
stop_type=None,
stop_price=None
).exits
exits = randx.exits.vbt | sl_exits
pf3 = vbt.Portfolio.from_signals(ohlcv['Close'], rand.entries, exits) # with SL
pf3.stats()
```
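As a diagnostic sketch (assuming the three snippets above are run in the same session, so `pf1`, `pf2` and `pf3` all exist), comparing the trade records shows exactly which entries exit at different times or prices under each approach:
```python
# side-by-side view of the trades produced by each stop-loss approach
print(pf1.trades.records_readable)  # stops applied inside from_signals
print(pf2.trades.records_readable)  # stops from generate_ohlc_stop_exits
print(pf3.trades.records_readable)  # stops from vbt.OHLCSTX.run
```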
| closed | 2023-08-27T05:34:07Z | 2024-02-19T01:48:21Z | https://github.com/polakowo/vectorbt/issues/651 | [
"stale"
] | tan-yong-sheng | 4 |
tensorpack/tensorpack | tensorflow | 943 | "buffer_size" error in the middle of training | I ran into a "buffer_size cannot be larger than the size of the DataFlow!" error in the middle of training (e.g., after epoch 10). I'm trying to put together a minimal reproducible example for debugging, but haven't managed to yet.
In the meantime, can I ask your advice about where to look?
### training code
```
df = MyDataFlow(config, 'trainvalminusminival')
df = MultiThreadMapData(df, 10, df.mapf, buffer_size=32, strict=True)
df = PrefetchDataZMQ(df, 10)
df = BatchData(df, config.TRAIN.BATCH_SIZE, remainder=False)
vdf = MyDataFlow(config, 'minival')
vdf = MultiThreadMapData(vdf, 10, vdf.mapf, buffer_size=32, strict=True)
vdf = PrefetchDataZMQ(vdf, 10)
vdf = BatchData(vdf, config.TRAIN.BATCH_SIZE)
model = MyModel(config)
traincfg = get_train_config(model, df, vdf, config)
nr_tower = max(get_num_gpu(), 1)
trainer = SyncMultiGPUTrainerReplicated(nr_tower)
launch_train_with_config(traincfg, trainer)
```
### data flow
```
class MyDataFlow(RNGDataFlow):
    def __init__(self, config, split, path, aug=False):
        super(MyDataFlow, self).__init__()
        self.config = config
        self.image_size = config.DATA.IMAGE_SIZE
        self.aug = aug
        ... tfrecord file grabbing and generator using tf.python_io.tf_record_iterator ...
        logger.info('{}: grabbed {} TFRecords.'.format(split, len(tfrecords)))
        logger.info('{}: grabbed {} examples.'.format(split, self.num_samples))
    def __len__(self):
        return self.num_samples
    def __iter__(self):
        while True:
            example = next(self.generator)
            ... parsing using tf.train.Example.FromString(example) ...
            yield key, points, label
    def mapf(self, example):
        ... some preprocessing ...
```
### log
````
[1021 15:51:58 @base.py:250] Start Epoch 8 ...
[1021 16:12:30 @base.py:260] Epoch 8 (global_step 2500000) finished, time:20 minutes 31 seconds.
[1021 16:12:30 @graph.py:73] Running Op sync_variables/sync_variables_from_main_tower ...
[1021 16:12:30 @saver.py:77] Model saved to train_log/config/model-2500000.
[1021 16:12:31 @misc.py:109] Estimated Time Left: 15 hours 45 minutes 23 seconds
[1021 16:14:30 @monitor.py:459] DataParallelInferenceRunner/QueueInput/queue_size: 25
[1021 16:14:30 @monitor.py:459] GPUUtil/0: 19.745
[1021 16:14:30 @monitor.py:459] QueueInput/queue_size: 49.969
[1021 16:14:30 @monitor.py:459] cost: 5.802
[1021 16:14:30 @monitor.py:459] learning_rate: 0.01
[1021 16:14:30 @monitor.py:459] train-error-top1: 0.98717
[1021 16:14:30 @monitor.py:459] train-error-top3: 0.96963
[1021 16:14:30 @monitor.py:459] val-error-top1: 0.99726
[1021 16:14:30 @monitor.py:459] val-error-top3: 0.99152
[1021 16:14:30 @group.py:48] Callbacks took 119.715 sec in total. DataParallelInferenceRunner: 1 minute 59 seconds
[1021 16:14:30 @base.py:250] Start Epoch 9 ...
[1021 16:35:00 @base.py:260] Epoch 9 (global_step 2812500) finished, time:20 minutes 30 seconds.
[1021 16:35:00 @graph.py:73] Running Op sync_variables/sync_variables_from_main_tower ...
[1021 16:35:00 @saver.py:77] Model saved to train_log/config/model-2812500.
[1021 16:35:01 @misc.py:109] Estimated Time Left: 15 hours 22 minutes 47 seconds
[1021 16:37:01 @monitor.py:459] DataParallelInferenceRunner/QueueInput/queue_size: 25
[1021 16:37:01 @monitor.py:459] GPUUtil/0: 19.735
[1021 16:37:01 @monitor.py:459] QueueInput/queue_size: 49.858
[1021 16:37:01 @monitor.py:459] cost: 5.8078
[1021 16:37:01 @monitor.py:459] learning_rate: 0.01
[1021 16:37:01 @monitor.py:459] train-error-top1: 0.99174
[1021 16:37:01 @monitor.py:459] train-error-top3: 0.9626
[1021 16:37:01 @monitor.py:459] val-error-top1: 0.99711
[1021 16:37:01 @monitor.py:459] val-error-top3: 0.99116
[1021 16:37:01 @group.py:48] Callbacks took 120.659 sec in total. DataParallelInferenceRunner: 2 minutes
[1021 16:37:01 @base.py:250] Start Epoch 10 ...
[1021 16:57:34 @base.py:260] Epoch 10 (global_step 3125000) finished, time:20 minutes 32 seconds.
[1021 16:57:34 @graph.py:73] Running Op sync_variables/sync_variables_from_main_tower ...
[1021 16:57:34 @saver.py:77] Model saved to train_log/config/model-3125000.
[1021 16:57:34 @misc.py:109] Estimated Time Left: 15 hours 36 seconds
[1021 16:59:32 @parallel_map.py:53] [4m [5m [31mERR [0m [MultiThreadMapData] buffer_size cannot be larger than the size of the DataFlow!
[1021 16:59:32 @parallel_map.py:53] [4m [5m [31mERR [0m [MultiThreadMapData] buffer_size cannot be larger than the size of the DataFlow!
````
### error related code: `parallel_map.py`
```
def _fill_buffer(self, cnt=None):
    if cnt is None:
        cnt = self._buffer_size - self._buffer_occupancy
    try:
        for _ in range(cnt):
            dp = next(self._iter)
            self._send(dp)
    except StopIteration:
        logger.error(
            "[{}] buffer_size cannot be larger than the size of the DataFlow!".format(type(self).__name__))
        raise
    self._buffer_occupancy += cnt
```
Is it possible for the data source to become empty (reach the end of its data) during the for loop in `_fill_buffer`?
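For reference, a rough sketch of an `__iter__` that cannot leave an exhausted generator behind between epochs (the `_make_generator` helper is an assumption standing in for the elided TFRecord iterator setup above):
```python
class MyDataFlow(RNGDataFlow):
    # ... __init__, __len__ and mapf as above ...
    def __iter__(self):
        # build a fresh record iterator each time an epoch starts, instead of
        # sharing a single self.generator that eventually runs dry
        generator = self._make_generator()  # assumed helper wrapping tf.python_io.tf_record_iterator
        for example in generator:
            # ... parsing using tf.train.Example.FromString(example) ...
            yield key, points, label
```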
Python version: 3.5
TF version: 1.11.0
Tensorpack version: 0.8.9 | closed | 2018-10-22T06:42:00Z | 2018-10-22T16:21:02Z | https://github.com/tensorpack/tensorpack/issues/943 | [
"duplicate"
] | ywpkwon | 1 |
streamlit/streamlit | streamlit | 10,768 | Support `background` CSS property in `st.dataframe()` | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Support setting the `background` CSS property via `df.style` in `st.dataframe()`.
### Why?
Pandas' beautiful [`Styler.bar()`](https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.bar.html) feature uses the `background` CSS property. It's also possible to add CSS directly via [`Styler.map()`](https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.map.html). See examples below.
I should note that the `background-color` CSS property works as expected, but not `background` or `background-image`. Also, they all work in `st.table()`. Only `background` and `background-image` don't work in `st.dataframe()`.
[](https://issues.streamlitapp.com/?issue=gh-10768)
```py
import pandas as pd
import streamlit as st
df = pd.DataFrame({"solid": [0.1, 0.2, 0.3], "gradient": [0.4, 0.5, 0.6], "bar": [0.7, 0.8, 0.9]})
styler = df.style
styler.format(lambda x: f"{x:.0%}")
styler.map(lambda x: f"background-color: green;", subset="solid")
styler.map(lambda x: f"background-image: linear-gradient(to right, green {x:%}, transparent {x:%});", subset="gradient")
styler.bar(subset="bar", vmin=0, vmax=1, color="green") # Uses a `background: linear-gradient` under the hood.
st.code("st.table()")
st.table(styler) # Both the solid color and the gradient work as expected.
st.divider()
st.code("st.dataframe()")
st.dataframe(styler) # The solid color works as expected, but not the gradient.
```

### How?
_No response_
### Additional Context
_No response_ | open | 2025-03-13T14:56:28Z | 2025-03-21T21:35:03Z | https://github.com/streamlit/streamlit/issues/10768 | [
"type:enhancement",
"feature:st.dataframe"
] | JosephMarinier | 1 |
indico/indico | sqlalchemy | 6,629 | Make the minimum password length configurable | Just 8 characters are not that great anymore according to recent standards. 12 or 15 is a more common minimum nowadays (TODO check the NIST guidelines). However, I could imagine that many Indico instances do not want to enforce such long passwords, so I'd prefer to not change the global default.
- Add `LOCAL_PASSWORD_MIN_LENGTH` setting, default to the current hardcoded value of `8`.
- Do not allow anything shorter unless debug mode is enabled (fail in `IndicoConfig.validate`).
- In `validate_secure_password`, keep the hard check for less than 8 chars (we want to keep forcing a password change for existing users with a shorter password), but also add a check for the new limit when the context is `set-user-password` (a rough sketch follows below this list).
- Maybe populate the config file in the setup wizard with a longer minimum length, so newly installed instances get a better default?
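For illustration, a rough sketch of the two checks described in the list above (the function names and signatures are assumptions, not the actual Indico code):
```python
# illustrative sketch only; names and signatures are assumptions
def validate_min_length_setting(min_length: int, debug: bool) -> None:
    if min_length < 8 and not debug:
        raise ValueError('LOCAL_PASSWORD_MIN_LENGTH must be at least 8')

def validate_secure_password(context: str, password: str, min_length: int) -> None:
    if len(password) < 8:
        raise ValueError('Passwords must be at least 8 characters long')  # hard check kept for existing users
    if context == 'set-user-password' and len(password) < min_length:
        raise ValueError(f'Passwords must be at least {min_length} characters long')
```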
Alternatively, we could just raise the minimum length but still with the context check to avoid forcing an "upgrade" from everyone who has a shorter one that's still 8+ chars right now. Any opinions? | closed | 2024-11-25T14:38:51Z | 2025-02-24T14:44:09Z | https://github.com/indico/indico/issues/6629 | [
"enhancement",
"trivial"
] | ThiefMaster | 0 |
unit8co/darts | data-science | 1,781 | What is the best way to get the predicted values of the training set in Darts? | I am trying to get the **predicted values of my training set** in Darts. In SKlearn, one can simply do:
```
model.fit(training_set)
model.predict(training_set)
```
What is the equivalent method in Darts assuming I have target lags, past covariate lags and future covariate lags?
From what I've tried, the .predict() method is only forward-looking after you fit your data, so I won't be able to get the predictions for my training set.
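For illustration, a sketch of the closest equivalent in Darts (variable names are placeholders), which replays predictions over the fitted range via `historical_forecasts` with `retrain=False` instead of forecasting past the end of the series:
```python
# in-sample predictions over the training range, reusing the already-fitted model
in_sample_preds = model.historical_forecasts(
    series=train_target,
    past_covariates=train_past_covariates,
    future_covariates=train_future_covariates,
    start=0.1,             # skip the earliest points so enough lags are available
    forecast_horizon=1,
    stride=1,
    retrain=False,
    last_points_only=True,
)
```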
Thanks in advance.
| closed | 2023-05-17T15:35:09Z | 2024-04-17T07:12:27Z | https://github.com/unit8co/darts/issues/1781 | [
"question"
] | ETTAN93 | 4 |
keras-team/keras | data-science | 20,873 | Inconsistencies with the behavior of bias initializers, leading to poor performance in some cases | Hello,
I've noticed some (potentially harmful) inconsistencies in bias initializers when running a simple test of the keras package, i.e. using a shallow MLP to learn a sine wave function in the [-1, 1] interval.
# Context
Most of the time (or for deep enough networks), using the default zero-initialization for biases is fine. However, for this simple problem having randomized biases is essential, since without them the neurons end up being too similar (redundant) and training converges to a very poor local optimum.
The [official guide](https://keras.io/api/layers/initializers/#variancescaling-class) suggests using weight initializers for biases as well.
Now:
* The default initialization from _native_ PyTorch leads to good results that improve as expected as the network size grows.
* Several keras initializers are expected to be similar or identical to the PyTorch behavior (i.e. `VarianceScaling` and all its subclasses), but they fail to produce good results, regardless of the number of neurons in the hidden layer.
# Issues
The issues are due to the fact that all [RandomInitializer](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/initializers/random_initializers.py#L10) subclasses in their `__call__` function only have access to the shape they need to fill.
In case of bias vectors for `Dense` layers, this shape is a one element tuple, i.e. `(n,)` where `n` is the number of units in the current layer.
The [compute_fans function](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/initializers/random_initializers.py#L612) in this case reports a fan in of `n`, which is actually the number of units, i.e. the fan out.
Unfortunately, the correct fan in is not accessible, since the number of layer inputs is not included in the shape of the bias vector.
This makes the [official description of the VarianceScaling initializer](https://keras.io/api/layers/initializers/#variancescaling-class) incorrect when applied to neuron biases. The same holds for the description of the Glorot, He, LeCun initializers, which are implemented as `VarianceScaling` subclasses.
In my simple example, as soon as the shallow network has more than a very few neurons, all size-dependent initializers have so little variability that they behave very similarly to a zero initialization (i.e. incredibly poorly). What stumped me (before understanding the problem) is that the larger the network, the worse the behavior.
# About possible fixes
I can now easily fix the issue by computing bounds for `RandomUniform` initializers externally so as to replicate the default PyTorch behavior, but this is not an elegant solution -- and I am worried other users may have encountered similar problems without noticing.
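For reference, a minimal sketch of that external workaround, mirroring PyTorch's default `uniform(-1/sqrt(fan_in), 1/sqrt(fan_in))` bias initialization (the layer width and fan-in value are placeholders and must be supplied by hand):
```python
import math
from keras import layers, initializers

fan_in = 1          # number of inputs feeding the layer (1 for the sine-wave input)
bound = 1.0 / math.sqrt(fan_in)
hidden = layers.Dense(
    64,
    activation="tanh",
    bias_initializer=initializers.RandomUniform(minval=-bound, maxval=bound),
)
```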
If the goal is correctly computing the fan in, I am afraid that I see no easy fix, short of restructuring the `RandomInitializer` API and giving it access to more information.
However, the real goal here is not actually computing the fan in, but preserving the properties that the size-dependent initializers were attempting to enforce. I would need to read more literature on the topic before suggesting a theoretically sound fix from this perspective. I would be willing to do that, in case the keras team is fine with going in this direction.
"type:bug/performance"
] | lompabo | 4 |
allenai/allennlp | nlp | 4,773 | SNLI-VE dataset reader and model | SNLI-VE is here: https://github.com/necla-ml/SNLI-VE
The VQA reader and model should serve as an example, but there will likely be significant differences. | closed | 2020-11-07T00:01:16Z | 2020-12-24T00:31:57Z | https://github.com/allenai/allennlp/issues/4773 | [] | dirkgr | 3 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,347 | About generating images from simulation to the real world | Hi, I'm new to image translation. I hope you can help.
>This is domain A. 1920×1080 images from unity3D.
>
>
>And this is domain B. 1920×1080 underwater images from onboard camera.
>
I trained with CycleGAN using --crop_size 512, and tested with --preprocess none. But the results look bad.
I suspect that the random crop may not include the small target every time, or there may be some other reason. I really don't know why this happens or how to solve it. I hope you can give me some tips or a little inspiration.
>This is the input image.
>
>
>And this is the output with epoch 110.
>
| open | 2021-11-30T09:38:21Z | 2021-12-02T20:49:25Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1347 | [] | julingers | 1 |
aidlearning/AidLearning-FrameWork | jupyter | 180 | hardlink count not equal to hardlinked file count | The hardlink count is less than the hardlinked file count. It seems as if 2 hidden hardlinked files automatically generated by aidlux are not counted; their file names have ".l2s." as a prefix, and one has "0001" as a suffix while the other has "0001.000#" as a suffix. There are no such hardlinked hidden files on an ordinary Linux system.
When a hidden hardlink is deleted, the remaining hardlinks lose their link and become inaccessible. This situation always leads to problems when running apt upgrade on the Debian of aidlux, or when installing packages. Especially when installing packages from source code, the problem shows up more often and ends in failure.
====The following are the steps to reproduce the issue:
root@localhost:/tmp/tmp# date>x
root@localhost:/tmp/tmp# ls -ali
总用量 16
533961 drwx------. 2 root root 3488 8月 2 23:08 .
207221 drwx------. 49 root root 8192 8月 2 23:04 ..
1104231 -rw-------. 1 root root 43 8月 2 23:08 x
root@localhost:/tmp/tmp# ln x y
root@localhost:/tmp/tmp# ls -ali
总用量 28
533961 drwx------. 2 root root 3488 8月 2 23:08 .
207221 drwx------. 49 root root 8192 8月 2 23:04 ..
1104231 -rw-------. 2 root root 43 8月 2 23:08 .l2s.x0001
1104231 -rw-------. 2 root root 43 8月 2 23:08 .l2s.x0001.0002
1104231 -rw-------. 2 root root 43 8月 2 23:08 x
1104231 -rw-------. 2 root root 43 8月 2 23:08 y
root@localhost:/tmp/tmp# ln x z
root@localhost:/tmp/tmp# ls -ali
总用量 32
533961 drwx------. 2 root root 3488 8月 2 23:09 .
207221 drwx------. 49 root root 8192 8月 2 23:04 ..
1104231 -rw-------. 3 root root 43 8月 2 23:08 .l2s.x0001
1104231 -rw-------. 3 root root 43 8月 2 23:08 .l2s.x0001.0003
1104231 -rw-------. 3 root root 43 8月 2 23:08 x
1104231 -rw-------. 3 root root 43 8月 2 23:08 y
1104231 -rw-------. 3 root root 43 8月 2 23:08 z
root@localhost:/tmp/tmp# find . -type l
./.l2s.x0001
./z
./x
./y
root@localhost:/tmp/tmp# cat x
2021年 08月 02日 星期一 23:49:59 UTC
root@localhost:/tmp/tmp# rm .l2s.x0001
root@localhost:/tmp/tmp# cat x
cat: x: 没有那个文件或目录
root@localhost:/tmp/tmp# ls -ali
ls: 无法访问'z': 不允许的操作
ls: 无法访问'x': 不允许的操作
ls: 无法访问'y': 不允许的操作
总用量 16
533961 drwx------. 2 root root 3488 8月 2 23:48 .
207221 drwx------. 49 root root 8192 8月 2 23:04 ..
1104231 -rw-------. 3 root root 43 8月 2 23:08 .l2s.x0001.0003
? l?????????? ? ? ? ? ? x
? l?????????? ? ? ? ? ? y
? l?????????? ? ? ? ? ? z
root@localhost:/tmp/tmp# | closed | 2021-08-03T00:00:08Z | 2021-08-28T13:40:26Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/180 | [] | zxq432 | 2 |
streamlit/streamlit | python | 9,904 | Stale output from a long-running computation erroneously shows as not stale when app rerun | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
This is a bug report on behalf of Thiago in order to keep it tracked.
When a Streamlit app with a long-running computation is stopped and then re-run, old stale data from the previous run shows up as if it were not stale until the old thread completes execution, at which point the data is properly cleaned up.
### Reproducible Code Example
```Python
import platform
import time
import streamlit as st
st.caption(
f"""
Running with Python {platform.python_version()} and Streamlit {st.__version__}.
"""
)
"""
This is an app!!
"""
def slow_computation(x):
st.write("Starting slow computation...")
time.sleep(10)
st.write("Stopping slow computation...")
return f"Done, {x + 1}"
out = slow_computation(1)
st.write(out)
```
### Steps To Reproduce
This is a Playwright test that simulates the current behavior (which is to say, this test currently passes, but should fail once the underlying issue is fixed). There are `NOTE`s inline that describe expected behavior
```py
import time
from playwright.sync_api import Page, expect
def test_threading_behavior(page: Page):
# This uses `localhost:8501` since I was running different versions of Streamlit
# in an attempt to bisect this issue in case it was introduced at some point in time
page.goto("http://localhost:8501/")
page.get_by_text("Stopping slow computation...").wait_for(timeout=20000)
expect(page.get_by_text("Stopping slow computation...")).to_be_visible()
print(page.query_selector_all(".element-container")[0].inner_text())
# At this point, the first run of the thread completed
expect(page.get_by_text("Done, 2")).to_be_visible()
# conditional logic to make it work with older versions of Streamlit
if page.query_selector('div[data-testid="stMainMenu"]'):
page.get_by_test_id("stMainMenu").click()
elif page.query_selector('[id="MainMenu"]'):
main_menu = page.query_selector('[id="MainMenu"]')
if main_menu:
main_menu.click()
# Now we are re-running the app
page.get_by_text("Rerun").click()
# Some time delay so that the new thread is started and some elements are marked as stable
time.sleep(2)
# Expect some of the elements to be marked as stale
assert len(page.query_selector_all('div[data-stale="true"]')) == 2
expect(page.get_by_text("Stopping slow computation...")).to_be_visible()
expect(page.get_by_text("Done, 2")).to_be_visible()
# Stop the new thread
page.get_by_role("button", name="Stop").click()
time.sleep(2)
# NOTE: This should not pass. Stale elements shouldn't suddenly be marked as not stale
# since these are old results from a thread that was supposed to have been stopped
assert len(page.query_selector_all('div[data-stale="true"]')) == 0
# NOTE: This should not pass. It is unexpected that the results from the old thread
# are still showing up, despite us re-running
expect(page.get_by_text("Stopping slow computation...")).to_be_visible()
expect(page.get_by_text("Done, 2")).to_be_visible()
# wait for the thread to complete
time.sleep(10)
# Expect old elements to be cleared out
expect(page.get_by_text("Stopping slow computation...")).not_to_be_visible()
expect(page.get_by_text("Done, 2")).not_to_be_visible()
```
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.0.0 -> 1.40.1 all show this behavior (I didn't test versions before)
- Python version: 3.8
- Operating System: Mac
- Browser: Chrome
### Additional Information
_No response_ | open | 2024-11-22T00:07:11Z | 2024-11-25T19:15:37Z | https://github.com/streamlit/streamlit/issues/9904 | [
"type:bug",
"status:confirmed",
"priority:P3"
] | sfc-gh-bnisco | 1 |
A3M4/YouTube-Report | matplotlib | 15 | Time format error? | Generating Heat Map.....
Traceback (most recent call last):
File "/home/server/Scrivania/Personal-YouTube-PDF-Report-Generator/report.py", line 252, in <module>
visual.heat_map()
File "/home/server/Scrivania/Personal-YouTube-PDF-Report-Generator/report.py", line 46, in heat_map
Mon = html.dataframe_heatmap('Mon')
File "/home/server/Scrivania/Personal-YouTube-PDF-Report-Generator/parse.py", line 97, in dataframe_heatmap
times = self.find_times()
File "/home/server/Scrivania/Personal-YouTube-PDF-Report-Generator/parse.py", line 52, in find_times
dayOfWeek = datetime.datetime.strptime(time[0:12], '%b %d, %Y').strftime('%a')
File "/usr/lib/python3.6/_strptime.py", line 565, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "/usr/lib/python3.6/_strptime.py", line 362, in _strptime
(data_string, format))
ValueError: time data 'Dec 15, 2019' does not match format '%b %d, %Y' | closed | 2019-12-16T10:59:17Z | 2019-12-25T21:11:28Z | https://github.com/A3M4/YouTube-Report/issues/15 | [] | andreaponza | 0 |
huggingface/datasets | nlp | 6,833 | Super slow iteration with trivial custom transform | ### Describe the bug
Dataset is 10X slower when applying trivial transforms:
```
import time
import numpy as np
from datasets import Dataset, Features, Array2D
a = np.zeros((800, 800))
a = np.stack([a] * 1000)
features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")})
ds1 = Dataset.from_dict({"a": a}, features=features).with_format('numpy')
def transform(batch):
    return batch
ds2 = ds1.with_transform(transform)
%time sum(1 for _ in ds1)
%time sum(1 for _ in ds2)
```
```
CPU times: user 472 ms, sys: 319 ms, total: 791 ms
Wall time: 794 ms
CPU times: user 9.32 s, sys: 443 ms, total: 9.76 s
Wall time: 9.78 s
```
In my real code I'm using set_transform to apply some post-processing on-the-fly for the 2d array, but it significantly slows down the dataset even if the transform itself is trivial.
Related issue: https://github.com/huggingface/datasets/issues/5841
### Steps to reproduce the bug
Use code in the description to reproduce.
### Expected behavior
Trivial custom transform in the example should not slowdown the dataset iteration.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.12.2 | open | 2024-04-23T20:40:59Z | 2024-10-08T15:41:18Z | https://github.com/huggingface/datasets/issues/6833 | [] | xslittlegrass | 7 |
mwaskom/seaborn | pandas | 2,966 | Don't apply a layout algorithm by default when provided with matplotlib axes in Plot.on | If `Plot.on` is provided with a matplotlib axes, it probably makes sense to defer the choice of a layout algorithm to the caller.
The expected behavior is less obvious when given a figure or subfigure. | closed | 2022-08-20T20:42:59Z | 2022-08-25T11:50:33Z | https://github.com/mwaskom/seaborn/issues/2966 | [
"objects-plot"
] | mwaskom | 0 |
huggingface/datasets | deep-learning | 7,394 | Using load_dataset with data_files and split arguments yields an error | ### Describe the bug
It seems the list of valid splits recorded by the package becomes incorrectly overwritten when using the `data_files` argument.
If I run
```python
from datasets import load_dataset
load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl")
```
then I get the error
```
ValueError: Unknown split "all_examples". Should be one of ['train'].
```
However, if I run
```python
from datasets import load_dataset
load_dataset("allenai/super", split="train", name="Expert")
```
then I get
```
ValueError: Unknown split "train". Should be one of ['all_examples'].
```
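For what it's worth, pairing up the split names that each error message reports suggests these variants (an untested sketch derived from the messages above):
```python
from datasets import load_dataset

# data_files registers a single "train" split
ds1 = load_dataset("allenai/super", split="train", data_files="tasks/expert.jsonl")
# the "Expert" config registers an "all_examples" split
ds2 = load_dataset("allenai/super", name="Expert", split="all_examples")
```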
### Steps to reproduce the bug
Run
```python
from datasets import load_dataset
load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl")
```
### Expected behavior
No error.
### Environment info
Python = 3.12
datasets = 3.2.0 | open | 2025-02-12T04:50:11Z | 2025-02-12T04:50:11Z | https://github.com/huggingface/datasets/issues/7394 | [] | devon-research | 0 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,165 | AttributeError: 'LogisticRegression' object has no attribute 'classes' | Hi everyone, this is my first help request, so I'm sorry if I get something wrong. So:
**Description**
I used the `ClassificationReport` class to evaluate my classification models and it worked until I set up `imblearn` (I don't think that should change anything, but the error appeared after I set it up); now when I run the program I get the error
`AttributeError: 'LogisticRegression' object has no attribute 'classes'`
Here's the "core" of the code in question:
```
X_train, X_test, y_train, y_test = train_test_split(features, labels,test_size=0.25,shuffle=True, random_state = 0)
from sklearn.linear_model import LogisticRegression
logisticRegr = LogisticRegression(random_state=0)
from yellowbrick.classifier import ClassificationReport
visualizer = ClassificationReport(logisticRegr)
visualizer.fit(X_train, y_train) # Fit the visualizer and the model
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Draw/show the data
```
Can someone help me fix this?
Thank you
**Versions**
scikit-learn 0.24.1
yellowbrick 1.2
python 3.7
Environment Anaconda 1.9.12
<!-- If you have a question, note that you can email us via our listserve:
https://groups.google.com/forum/#!forum/yellowbrick -->
<!-- This line alerts the Yellowbrick maintainers, feel free to use this
@ address to alert us directly in follow up comments -->
@DistrictDataLabs/team-oz-maintainers
| closed | 2021-03-23T10:24:51Z | 2021-04-02T08:11:43Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1165 | [
"type: question"
] | Albembo | 2 |
Yorko/mlcourse.ai | seaborn | 361 | Questions about the format of the course | Should we just read the course website, or are there some videos? Thanks. | closed | 2018-10-04T13:59:17Z | 2018-10-04T14:13:38Z | https://github.com/Yorko/mlcourse.ai/issues/361 | [
"invalid"
] | ZizhenWang | 2 |
ludwig-ai/ludwig | computer-vision | 3,571 | Unpin `transformers` when a newer version > 4.32.1 is released | Ludwig also runs into the same issue flagged here: https://github.com/huggingface/transformers/issues/25805 | closed | 2023-08-31T20:28:34Z | 2023-09-08T07:37:45Z | https://github.com/ludwig-ai/ludwig/issues/3571 | [] | arnavgarg1 | 1 |
aimhubio/aim | data-visualization | 2,573 | RocksIOError: ....../CURRENT: no such file or directory | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To reproduce
When I create hundreds of runs, I sometimes encounter the following error.

The script I run is as follows:

The command is : python create_runs.py -n 900
<!-- Reproduction steps. -->
### Environment
- Aim Version (e.g., 3.0.1): 3.16.1
- Python version: 3.9.15
- pip version: 22.3.1
- OS (e.g., Linux): Centos-8
| open | 2023-03-06T19:26:06Z | 2024-07-08T23:32:48Z | https://github.com/aimhubio/aim/issues/2573 | [
"type / bug",
"help wanted",
"area / SDK-storage"
] | thuzhf | 8 |
mars-project/mars | pandas | 2,636 | Refactor storage service to increase efficiency and stability | # Motivation
Many problems exist with current implementation of Mars storage.
1. No flexible way to control data location
When loading data from other endpoints, we may prefer multiple locations for fallback. Current implementation does not support this and may introduce unnecessary spill operations.
2. No support for remote reader / writer
Remote readers and writers provide flexible way to handle data transfer, enabling shuffle and client-side data manipulation without high memory cost. Current implementation only handles readers and writers locally.
3. Mix of lower-level code and higher-level code
Data transfer and spill should be implemented upon a common IO layer to make the whole storage more maintainable. Current implementation mixes all things up.
4. Race condition exists when spilling data on shuffle
In current implementation, when starting a reader and data spill is launched, it is possible that the data is spilled and we get a KeyError afterward.
5. Unnecessary IPC calls
In current implementation, we need to do quota request, put data info, update quota and deal with spill, all introducing more than one IPC call. The number of calls can be reduced to no more than 2.
# Design
The new design of Mars storage can be divided into two parts: the kernel storage and the user storage. The kernel storage is a thin wrap of storage backends plus necessary access controls. The user storage is constructed over the kernel storage with spill and transfer support.
<img width="745" alt="image" src="https://user-images.githubusercontent.com/8284922/155280541-1e061963-2045-45a6-bca3-89261cca3862.png">
## Kernel Storage
The principle of the kernel storage is to keep things simple. That is, the API does not handle complicated retries or redirections. When encountering storage-full or lock errors, it raises immediately (instead of waiting or retrying). `KernelStorageAPI` will look like
```python
class KernelStorageAPI:
@classmethod
async def create(cls, band_name: str, worker_address: str) -> "KernelStorageAPI":
"""
Create a band-specific API
"""
async def open_reader(
self,
session_id: str,
data_key: str,
level: StorageLevel = None,
) -> KernelStorageFileObject:
"""
Create a reader on a specific file
"""
async def open_writer(
self,
session_id: str,
data_key: str,
size: int,
level: StorageLevel = None,
) -> KernelStorageFileObject:
"""
Create a writer on a specific file
"""
async def delete(self, session_id: str, data_key: str, error: str = "raise"):
"""
Delete a file with specified keys
"""
async def get_capacity(self) -> Dict[StorageLevel, StorageCapacity]:
"""
Get capacities of levels of the band
"""
async def list(
self,
level: StorageLevel,
lock_free_only: bool = False,
) -> List[InternalDataInfo]:
"""
Get information of all data in the band
"""
async def put(
self,
session_id: str,
data_key: str,
obj: Any,
level: StorageLevel = None,
) -> InternalDataInfo:
"""
Put an object into the band storage
"""
async def get(
self,
session_id: str,
data_key: str,
conditions: List = None,
level: StorageLevel = None,
error: str = "raise",
) -> Any:
"""
Get an object into the band storage.
Slicing support is also provided.
"""
async def get_info(
self,
session_id: str,
data_key: str,
level: StorageLevel = None,
) -> List[InternalDataInfo]:
"""
Get internal information of an object
"""
async def pin(
self,
session_id: str,
data_key: str,
level: StorageLevel = None,
error: str = "raise",
):
"""
Pin specific data on a specific level.
The object will get a read-only lock until unpinned
"""
async def unpin(
self,
session_id: str,
data_key: str,
level: StorageLevel = None,
error: str = "raise",
):
"""
Unpin specific data on a specific level
"""
```
A `StorageItemManagerActor` will hold all information necessary for kernel data management. It comprises four separate handlers, namely `QuotaHandler`, `LockHandler`, `MetaHandler` and `ReferenceHandler`, implemented separately to reduce potential call overhead. Note that this actor only deals with data metas, not the data themselves. Data are handled in the caller actors with storage backends.
## User Storage
The user storage API wraps the kernel storage and provides more capabilities, including multi-level handling, spill and transfer. The API can look like
```python
StorageLevels = Optional[List[StorageLevel]]
class UserStorageAPI:
@classmethod
async def create(
cls,
session_id: str,
band_name: str,
worker_address: str,
) -> "UserStorageAPI":
"""
Create a session and band specific API
"""
async def fetch(
self,
data_key: str,
levels: StorageLevels = None,
band_name: str = None,
remote_address: str = None,
error: str = "raise",
):
"""
Fetch object from remote worker or load object from disk
"""
async def open_reader(
self,
data_key: str,
levels: StorageLevels = None,
) -> UserStorageFileObject:
"""
Create a reader on a specific file
"""
async def open_writer(
self,
data_key: str,
size: int,
levels: StorageLevels = None,
band_name: str = None,
) -> UserStorageFileObject:
"""
Create a writer on a specific file
"""
async def delete(self, data_key: str, error: str = "raise"):
"""
Delete a file with specified keys
"""
async def put(
self,
data_key: str,
obj: Any,
levels: StorageLevels = None,
band_name: str = None,
) -> InternalDataInfo:
"""
Put an object into the band storage
"""
async def get(
self,
data_key: str,
conditions: List = None,
levels: StorageLevels = None,
band_name: str = None,
error: str = "raise",
) -> Any:
"""
Get an object into the band storage.
Slicing support is also provided.
"""
async def get_info(
self,
data_key: str,
levels: StorageLevels = None,
band_name: str = None,
) -> List[InternalDataInfo]:
"""
Get internal information of an object
"""
async def pin(
self,
data_key: str,
levels: StorageLevels = None,
band_name: str = None,
error: str = "raise",
):
"""
Pin specific data on a specific level.
The object will get a read-only lock until unpinned
"""
async def unpin(
self,
session_id: str,
data_key: str,
level: StorageLevel = None,
band_name: str = None,
error: str = "raise",
):
"""
Unpin specific data on a specific level
"""
```
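To make the intended call pattern concrete, a brief usage sketch of the proposed API (addresses, keys, the stored object and levels below are placeholders):
```python
# illustrative usage of the proposed UserStorageAPI
api = await UserStorageAPI.create(
    session_id="session_1", band_name="numa-0", worker_address="127.0.0.1:12345"
)
await api.put("chunk_key_1", some_object,
              levels=[StorageLevel.MEMORY, StorageLevel.DISK])
value = await api.get("chunk_key_1")
# pull a chunk produced on another worker, preferring memory and falling back to disk
await api.fetch("chunk_key_2", remote_address="127.0.0.1:23456",
                levels=[StorageLevel.MEMORY, StorageLevel.DISK])
```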
## Spill
To implement spill, we need a `SpillManagerActor` to coordinate spill actions. The spill actor will look like
```python
class SpillManagerActor(mo.StatelessActor):
@classmethod
def gen_uid(cls, band_name: str, storage_level: int) -> str:
pass
def notify_spillable(self, data_key: str, size: int):
"""
Register a spillable data key.
Only called when spill state is True.
"""
async def acquire_spill_lock(self, size: int) -> List[str]:
"""
Acquire certain size for spill and lock the actor
for spill. Keys will be returned for the caller to
spill.
"""
def release_spill_lock(self):
"""
Release the actor when spill ends.
"""
def wait_spill_state_change(self, last_state: bool) -> bool:
"""
Wait until the state of spill changes.
"""
```
Inside the actor, we define a boolean state to indicate whether the storage level is under spill. When the state changes to True, it is broadcast to all subscribers to notify them of data changes. When the storage is about to spill, the caller calls `acquire_spill_lock` and supplies the size it needs. The actor then enters the spill state, locks itself and checks for keys to spill. Once enough spillable size is available, it returns the keys to spill and the spill is carried out by the caller. When the spill ends (finishes or encounters an error), the caller calls `release_spill_lock` to release the spill lock for other callers. When there are no pending callers, the state of the actor turns back to False.
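As an illustration of this flow, a sketch of how a storage handler could drive the protocol (the actor reference and backend calls are placeholders rather than actual Mars code):
```python
async def spill_until_fits(spill_manager_ref, level_backend, lower_level_backend, size_needed: int):
    # ask the SpillManagerActor which keys to move; this also takes the spill lock
    keys = await spill_manager_ref.acquire_spill_lock(size_needed)
    try:
        for key in keys:
            data = await level_backend.get(key)        # read from the level being spilled
            await lower_level_backend.put(key, data)   # write to the next level down
            await level_backend.delete(key)            # free space on the original level
    finally:
        # always release the lock so pending spill callers can proceed
        await spill_manager_ref.release_spill_lock()
```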
## Transfer / Remote IO
To implement data transfer, we propose a two-actor solution. We will add a `SenderManagerActor` and a `RemoteIOActor` to do all the required work. The `SenderManagerActor` manages data transfers initiated between workers, and `RemoteIOActor` handles remote readers and writers, both for inter-worker data transfer and for `UserStorageAPI`.
When an inter-worker transfer starts, a request is sent to the `SenderManagerActor` at the worker hosting the data, asking it to send the data to the calling worker. It calls `RemoteIOActor.create_writer` at the receiver side and then `write_data` with batched calls.
`RemoteIOActor` will look like
```python
class RemoteIOActor(mo.StatelessActor):
@mo.batch
async def create_reader(
self,
session_id: str,
data_key: str,
levels: StorageLevels,
) -> List[str]:
pass
@mo.batch
async def create_writer(
self,
session_id: str,
data_key: str,
data_size: int,
levels: StorageLevels,
) -> List[str]:
pass
@mo.batch
async def read_data(
self,
session_id: str,
reader_key: str,
data_buffer: bytes,
size: int,
):
pass
@mo.batch
async def write_data(
self,
session_id: str,
writer_key: str,
data_buffer: bytes,
is_eof: bool,
):
pass
@mo.batch
async def close(
self,
session_id: str,
key: str,
):
pass
```
And `SenderManagerActor` will look like
```python
class SenderManagerActor(mo.StatelessActor):
@mo.extensible
async def send_batch_data(
self,
session_id: str,
data_keys: List[str],
address: str,
level: StorageLevel,
band_name: str = "numa-0",
block_size: int = None,
error: str = "raise",
):
pass
``` | open | 2022-01-17T11:49:17Z | 2022-02-23T08:01:54Z | https://github.com/mars-project/mars/issues/2636 | [
"type: enhancement",
"mod: storage"
] | wjsi | 1 |
httpie/cli | api | 1,565 | Failed to use {{key}}={{value}} for nested JSON | Hi! I wanna write a function to [create a GitHub gist](https://docs.github.com/en/rest/gists/gists?apiVersion=2022-11-28#create-a-gist). This is what I wrote:
```fish
function gists__new --description "Create a gist for the authenticated user"
argparse l/login= p/pat= d/description= P/public f/file= c/content= -- $argv
set login $_flag_login
set pat $_flag_pat
set description $_flag_description
set public false
set --query _flag_public && set public true
set file $_flag_file
set content $_flag_content
set body "$(jq --null-input '{
"description": $description,
"public": $public,
"files": {
($file): {
"content": $content
}
}
}' \
--arg description $description \
--arg public $public \
--arg file $file \
--arg content $content)"
https --auth "$login:$pat" POST api.github.com/gists \
Accept:application/vnd.github+json \
X-GitHub-Api-Version:$api_version \
--raw $body
end
```
It works, but requires `jq`. According to [HTTPie docs](https://httpie.io/docs/cli/nested-json) I can get rid of it. I tried to use `{{key}}={{value}}` but failed:
``` fish
function gists__new --description "Create a gist for the authenticated user"
argparse l/login= p/pat= d/description= P/public f/file= c/content= -- $argv
set login $_flag_login
set pat $_flag_pat
set description $_flag_description
set public false
set --query _flag_public && set public true
set file $_flag_file
set content $_flag_content
https --auth "$login:$pat" POST api.github.com/gists \
"description=$description" \
"public=$public" \
"files[$file][content]=$content" \
Accept:application/vnd.github+json \
X-GitHub-Api-Version:$api_version \
end
```
The response I get is:
```json
{
"documentation_url": "https://docs.github.com/rest/gists/gists#create-a-gist",
"message": "Invalid request.\n\nInvalid input: object is missing required key: files."
}
```
What am I doing wrong? | closed | 2024-02-27T19:53:40Z | 2024-02-27T20:15:53Z | https://github.com/httpie/cli/issues/1565 | [
"new"
] | EmilyGraceSeville7cf | 1 |
autogluon/autogluon | computer-vision | 3,860 | [BUG] interpretable predictor interpretable_models_summary print_interpretable_rules not available | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
Following usage example from documentation (of version 0.5.1?) training works but no summary and no interpretable rules
https://auto.gluon.ai/0.5.1/tutorials/tabular_prediction/tabular-interpretability.html
**Expected behavior**
as described in documentation
**To Reproduce**
<!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged.
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com.
In short, we are going to copy-paste your code to run it and we expect to get the same result as you. -->
```
from autogluon.tabular import TabularDataset, TabularPredictor
train_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')
subsample_size = 500 # subsample subset of data for faster demo, try setting this to much larger values
train_data = train_data.sample(n=subsample_size, random_state=0)
train_data.head()
predictor = TabularPredictor(label='class')
predictor.fit(train_data, presets='interpretable')
predictor.leaderboard()
predictor.interpretable_models_summary()
predictor.print_interpretable_rules() # can optionally specify a model name or complexity threshold
```
**Screenshots / Logs**
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[3], line 1
----> 1 predictor.interpretable_models_summary()
AttributeError: 'TabularPredictor' object has no attribute 'interpretable_models_summary'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[4], line 1
----> 1 predictor.print_interpretable_rules()
AttributeError: 'TabularPredictor' object has no attribute 'print_interpretable_rules'
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
INSTALLED VERSIONS
------------------
date : 2024-01-14
time : 12:06:25.974741
python : 3.10.12.final.0
OS : Linux
OS-release : 6.2.0-1017-aws
Version : #17~22.04.1-Ubuntu SMP Fri Nov 17 21:07:13 UTC 2023
machine : x86_64
processor : x86_64
num_cores : 128
cpu_ram_mb : 253829.41015625
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 98683
accelerate : 0.21.0
async-timeout : 4.0.3
autogluon : 1.0.0
autogluon.common : 1.0.0
autogluon.core : 1.0.0
autogluon.features : 1.0.0
autogluon.multimodal : 1.0.0
autogluon.tabular : 1.0.0
autogluon.timeseries : 1.0.0
boto3 : 1.34.16
catboost : 1.2.2
defusedxml : 0.7.1
evaluate : 0.4.1
fastai : 2.7.13
gluonts : 0.14.3
hyperopt : 0.2.7
imodels : 1.4.1
jinja2 : 3.0.3
joblib : 1.3.2
jsonschema : 4.20.0
lightgbm : 4.1.0
lightning : 2.0.9.post0
matplotlib : 3.8.2
mlforecast : 0.10.0
networkx : 3.2.1
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.26.3
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
orjson : 3.9.10
pandas : 2.1.4
Pillow : 10.2.0
psutil : 5.9.7
PyMuPDF : None
pytesseract : 0.3.10
pytorch-lightning : 2.0.9.post0
pytorch-metric-learning: 1.7.3
ray : 2.6.3
requests : 2.31.0
scikit-image : 0.20.0
scikit-learn : 1.3.2
scikit-learn-intelex : None
scipy : 1.11.4
seqeval : 1.2.2
setuptools : 60.2.0
skl2onnx : None
statsforecast : 1.4.0
statsmodels : 0.14.1
tabpfn : None
tensorboard : 2.15.1
text-unidecode : 1.3
timm : 0.9.12
torch : 2.0.1
torchmetrics : 1.1.2
torchvision : 0.15.2
tqdm : 4.65.2
transformers : 4.31.0
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
```
</details>
| closed | 2024-01-14T12:22:36Z | 2024-06-24T23:13:35Z | https://github.com/autogluon/autogluon/issues/3860 | [
"bug: unconfirmed",
"Needs Triage"
] | Pagey | 1 |
jina-ai/clip-as-service | pytorch | 648 | 序列过长的问题 | 序列过长时设置max_seq_len None,资料显示会自动根据batch传递,是否会存在截断导致语义信息不全? 语料都是较长文本,根据真实情况设置服务基本运行不动了,若是序列截断了该如何? | open | 2022-01-12T20:40:03Z | 2022-03-09T09:30:51Z | https://github.com/jina-ai/clip-as-service/issues/648 | [] | anoobnewhere | 1 |
Sanster/IOPaint | pytorch | 490 | [Feature Request] python api | Currently, the tool supports inpainting through cli and ui but having a python api is extremely helpful since it gives a better control on the process.
Any possibility of working on a python api? | closed | 2024-03-18T12:27:47Z | 2025-01-13T02:03:32Z | https://github.com/Sanster/IOPaint/issues/490 | [
"stale"
] | LokeshBadisa | 3 |
napari/napari | numpy | 6,907 | Default to adding a newline for everything before the `extra_tooltip_text` when binding an action to a button | Follow up issue from the feedback at https://github.com/napari/napari/pull/6794#discussion_r1592792932
> Nice! Now wondering if we should default to adding a newline for everything before the `extra_tooltip_text`...
_Originally posted by @brisvag in https://github.com/napari/napari/pull/6794#discussion_r1595128310_
> That makes sense! From a quick check seems like the only other button that has some extra text is pan/zoom:
>
> 
>
> Maybe something like `Temporarily re-enable by holding Space` could be used as text in case the extra text is added always in a new line?
>
> 
>
> Also, should an issue be made to tackle that later or maybe is something worthy to be done here?
_Originally posted by @dalthviz in https://github.com/napari/napari/pull/6794#discussion_r1595648027_
> I mean I feel like the new line would be safe as a default...
> but I don't think I would change it in this PR--if at all. It's easy to add the new line, harder to remove it if someone does want a one-liner.
_Originally posted by @psobolewskiPhD in https://github.com/napari/napari/pull/6794#discussion_r1595982004_ | closed | 2024-05-10T15:02:08Z | 2024-06-06T16:17:19Z | https://github.com/napari/napari/issues/6907 | [
"needs:discussion"
] | dalthviz | 5 |
howie6879/owllook | asyncio | 63 | 榜单爬虫失败 | 榜单爬虫失败,这个应该是哪里出问题了呢?
` object async_generator can't be used in 'await' expression` | closed | 2019-03-29T03:24:22Z | 2019-04-01T05:33:35Z | https://github.com/howie6879/owllook/issues/63 | [] | imzhyp | 2 |
biosustain/potion | sqlalchemy | 180 | Query to only return specific fields set | In order to reduce the amount of data being transferred from a resource, is it possible to provide a query args to return a set of fields?
Sometimes, we don't need all attributes of an object but a couple of them.
It would require too many custom routes to expose the different set of attributes we would need.
Here a few examples to describe it:
```
/users?include_fields=['first_name', 'last_name']
/users?include_fields=['email']
/users?include_fields=['city', 'country']
``` | open | 2020-04-28T03:09:08Z | 2020-07-27T12:27:33Z | https://github.com/biosustain/potion/issues/180 | [] | matdrapeau | 1 |
huggingface/datasets | computer-vision | 7,303 | DataFilesNotFoundError for datasets LM1B | ### Describe the bug
Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b
### Steps to reproduce the bug
`dataset = datasets.load_dataset('lm1b', split=split)`
### Expected behavior
`Traceback (most recent call last):
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/word_freq.py", line 13, in <module>
train_data = DiffusionLoader(tokenizer=tokenizer).my_load(task_name='lm1b', splits=['train'])[0]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in my_load
return [self._load(task_name, name) for name in splits]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in <listcomp>
return [self._load(task_name, name) for name in splits]
File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 13, in _load
dataset = datasets.load_dataset('lm1b', split=split)
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2594, in load_dataset
builder_instance = load_dataset_builder(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2266, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1827, in dataset_module_factory
).get_module()
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1040, in get_module
module_name, default_builder_kwargs = infer_module_for_data_files(
File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 598, in infer_module_for_data_files
raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in lm1b`
### Environment info
datasets: 2.20.0 | closed | 2024-11-29T17:27:45Z | 2024-12-11T13:22:47Z | https://github.com/huggingface/datasets/issues/7303 | [] | hml1996-fight | 1 |
521xueweihan/HelloGitHub | python | 2,879 | 【开源自荐】guyuelan - Windy开源:Windy一个便捷式devops平台、支持需求、缺陷、API管理、流水线、自动化测试等功能。 | ## 推荐项目
<!-- 这里是 HelloGitHub 月刊推荐项目的入口,欢迎自荐和推荐开源项目,唯一要求:请按照下面的提示介绍项目。-->
<!-- 点击上方 “Preview” 立刻查看提交的内容 -->
<!--仅收录 GitHub 上的开源项目,请填写 GitHub 的项目地址-->
- 项目地址:https://github.com/languyue/Windy
<!--请从中选择(C、C#、C++、CSS、Go、Java、JS、Kotlin、Objective-C、PHP、Python、Ruby、Rust、Swift、其它、书籍、机器学习)-->
- 类别:Java devops
<!--请用 20 个左右的字描述它是做什么的,类似文章标题让人一目了然 -->
- 项目标题:Windy一个便捷式devops平台、支持需求、缺陷、API管理、流水线、自动化测试等功能。
<!--这是个什么项目、能用来干什么、有什么特点或解决了什么痛点,适用于什么场景、能够让初学者学到什么。长度 32-256 字符-->
- 项目描述:
- 项目类型: 使用Windy是一个强大的devops平台工具,
- 能干什么: 支持需求迭代的完整生命周期以及研发过程看护能力,可以帮助团队或公司规范研发流程,通过自动构建与部署能力提高研发效率以及通过自动化测试提高研发质量。
- 痛点问题:
- 解决研发与需求断层的问题,无法关联联动
- 通过API生成二方包解决产品API随意变更的问题
- 通过流水线简化手动构建与部署的时间成本
- 通过自动化测试,较少测试的成本
- 通过UI编写测试用例,较少编写门槛,研发也可编写服务用例,不需要依赖测试人员介入
- 适用于什么场景: 适用于产品公司/团队迭代开发使用
<!--令人眼前一亮的点是什么?类比同类型项目有什么特点!-->
- 亮点:
- Windy一个平台提供更完整的迭代研发工具链,支持需求迭代、流水线、测试自动化、基于UI编写测试用例测试门槛更低、API管理工具
- 相对于其他产品,仅仅支持需求与缺陷人工维护,Windy通过研发流水线自动变更状态,状态变更维护的更加实时高效
- Windy依赖的工具更少,只需要mysql即可。
- 示例代码:(可选)
- 截图:(可选)gif/png/jpg
登录

需求迭代管理

流水线管理

用例管理

自动化任务

- 后续更新计划:
- 支持api审核机制、支持api生成文档
- 生态集成:
- 消息通知: 对接三方系统消息通知机制(企业微信、钉钉、飞书等)
- 三方系统对接: 对接阐道、JIRA、PingCode等api,将三方系统数据同步至Windy中。
- 代码检查、以及覆盖率校验等
- 指标体系:支持需求、缺陷、研发、测试全流程数字指标建设,完成研发体系可视化,能够查看需求从创建到实现完成的整个生命周期数据
- 战略规划:通过研发体系数据化能力,将组织战略拆分细化能全局查看战略落地情况
- AI建设:
- 通过AI分析研发体系数据,提供优化研发效率手段、梳理研发流程阻塞点等
- AI自动添加测试用例
| open | 2025-01-09T03:02:39Z | 2025-01-09T03:02:39Z | https://github.com/521xueweihan/HelloGitHub/issues/2879 | [] | languyue | 0 |
mars-project/mars | scikit-learn | 2,960 | Need a better way to switch backend |
Currently, it is not easy to switch to the ray execution backend.
We don't want to introduce lots of the new APIs, such as `new_cluster`, `new_ray_session` in the https://github.com/mars-project/mars/blob/master/mars/deploy/oscar/ray.py for the mars on ray.
Instead, we want to reuse the mars APIs.
For Mars,
```python
new_cluster(worker_num=2, worker_cpu=2) # The default backend is mars
```
For Ray,
```python
new_cluster(backend="ray", worker_num=2, worker_cpu=2)
```
| closed | 2022-04-25T07:09:26Z | 2022-04-26T09:37:01Z | https://github.com/mars-project/mars/issues/2960 | [] | fyrestone | 0 |
FlareSolverr/FlareSolverr | api | 1,055 | request.post command with Content-Type set to application/x-www-form-urlencoded expect json from FlareSolverr server | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: I freshly `git pull`
- Last working FlareSolverr version: Only used the `git` version
- Operating system: GNU/Linux
- Are you using Docker: [yes/no] No
- FlareSolverr User-Agent (see log traces or / endpoint): User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
- Are you using a VPN: [yes/no] No
- Are you using a Proxy: [yes/no] No
- Are you using Captcha Solver: [yes/no] No
- If using captcha solver, which one:
- URL to test this issue: https://www3.yggtorrent.qa/user/login
```
### Description
Hello,
I'm trying to perform a `POST` request using the `request.post` command. The parameters of my request are the following:
```
{'cmd': 'request.post', 'url': 'https://www3.yggtorrent.qa/user/login', 'postData': 'id=someuser&pass=somepassword&ci_csrf_token=', 'session': '1234', 'maxTimeout': 60000, 'returnOnlyCookies': False}
```
Before this request I created a session (so I'm using the same `session_id`) and get the challenge.
But while sending the `request.post` command above, I got the following error from FlareSolverr logs:
```
2024-02-03 21:21:00 ERROR 'NoneType' object is not iterable
2024-02-03 21:21:00 INFO 127.0.0.1 POST http://127.0.0.1:8191/v1 500 Internal Server Error
```
So looking at the source code I discovered the following in `FlareSolverr.py` (from line 48):
```
@app.post('/v1')
def controller_v1():
"""
Controller v1
"""
req = V1RequestBase(request.json)
res = flaresolverr_service.controller_v1_endpoint(req)
if res.__error_500__:
response.status = 500
return utils.object_to_dict(res)
```
The issue is the following line:
```
req = V1RequestBase(request.json)
```
So sending the `request.post` command itself as `application/x-www-form-urlencoded` does not produce the expected JSON body. As a result, `request.json` is empty, which explains the error logs.
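For reference, a minimal client-side sketch that avoids this (assuming the endpoint and fields from this report): the command payload itself is sent as JSON, while `postData` stays URL-encoded because that is what gets forwarded to the target site.
```python
import requests

payload = {
    "cmd": "request.post",
    "url": "https://www3.yggtorrent.qa/user/login",
    "postData": "id=someuser&pass=somepassword&ci_csrf_token=",  # forwarded form body
    "session": "1234",
    "maxTimeout": 60000,
}
# Sent with Content-Type: application/json, so request.json is populated server-side.
resp = requests.post("http://127.0.0.1:8191/v1", json=payload, timeout=70)
print(resp.status_code, resp.json().get("status"))
```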
### Logged Error Messages
```text
2024-02-03 21:21:00 ERROR 'NoneType' object is not iterable
2024-02-03 21:21:00 INFO 127.0.0.1 POST http://127.0.0.1:8191/v1 500 Internal Server Error
```
### Screenshots
_No response_ | closed | 2024-02-03T17:33:15Z | 2024-02-05T01:33:48Z | https://github.com/FlareSolverr/FlareSolverr/issues/1055 | [
"duplicate"
] | janemba | 1 |
twopirllc/pandas-ta | pandas | 299 | Roadmap for features, indicators | @twopirllc i'm not sure if this is the place to discuss this, perhaps, hence the ticket. Feel free to correct me if that's not the case.
Do you have a roadmap for the features you would like to see implemented, technical indicators, strategies, examples, notebooks, general roadmap targets for Pandas-TA? On the medium to long-term?
Perhaps this is of interest to other users and developers here. | closed | 2021-05-28T17:43:17Z | 2022-02-09T05:15:14Z | https://github.com/twopirllc/pandas-ta/issues/299 | [
"help wanted",
"info"
] | luisbarrancos | 5 |
xorbitsai/xorbits | numpy | 157 | BLD: Release Xorbits docker images for multi python versions that we support | Note that the issue tracker is NOT the place for general support. For
discussions about development, questions about usage, or any general questions,
contact us on https://discuss.xorbits.io/.
Docker image supports multi python versions.
| closed | 2023-01-10T03:22:25Z | 2023-02-02T04:46:14Z | https://github.com/xorbitsai/xorbits/issues/157 | [
"build"
] | ChengjieLi28 | 0 |
twelvedata/twelvedata-python | matplotlib | 50 | [Feature Request] mic_code instead of exchange as a parameter when fetching data | It would be nice to use the `mic_code` parameter to differentiate between markets when fetching time_series, live & eod prices.
| closed | 2022-06-22T11:23:23Z | 2022-06-22T14:26:03Z | https://github.com/twelvedata/twelvedata-python/issues/50 | [] | SimonDamberg | 5 |
Yorko/mlcourse.ai | scikit-learn | 724 | Topic 6 russian lecture notebook is in english | https://github.com/Yorko/mlcourse.ai/blob/main/jupyter_russian/topic06_features/topic6_feature_engineering_feature_selection_english.ipynb | closed | 2022-10-06T13:26:04Z | 2022-10-07T09:08:44Z | https://github.com/Yorko/mlcourse.ai/issues/724 | [] | mirmozavr | 0 |
ranaroussi/yfinance | pandas | 1,299 | Weekly data - last week missing | Hi all,
The code below gives me data until the very last day of last week (13th of Jan):
```
yfObj = yf.Ticker(stock)
data = yfObj.history(period="3y")
```
But if I want weekly data, using the code below:
```
data = yfObj.history(period="3y", interval="1wk")
```
It gives me data until 9th of January. So for some reason the last weekly data isn't available.
If I look on yahoo manually though (historical data, weekly), I do see the data of 13th of January.
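A possible interim workaround (a sketch; it assumes the daily history does include the last days, and the week anchor may need adjusting to match Yahoo's weekly bars) is to fetch daily bars and resample them with pandas:
```python
daily = yfObj.history(period="3y", interval="1d")
# Aggregate daily bars into weekly bars ourselves.
weekly = daily.resample("W").agg(
    {"Open": "first", "High": "max", "Low": "min", "Close": "last", "Volume": "sum"}
)
print(weekly.tail())
```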
Any idea if there's a setting or parameter that I could include? | closed | 2023-01-14T10:35:16Z | 2023-01-14T12:33:26Z | https://github.com/ranaroussi/yfinance/issues/1299 | [] | Jokke-moose | 2 |
SciTools/cartopy | matplotlib | 1,828 | ModuleNotFoundError: No module named 'cartopy' | ### Description
I failed to install cartopy when I tried
conda install cartopy
I used python in conda3
which python
/home/tools/anaconda3/install/bin/python
python -v
import 'site' # <_frozen_importlib_external.SourceFileLoader object at 0x2b7ae27a82b0>
Python 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0] :: Anaconda, Inc. on linux
The error information is shown below. Any help will be appreciated.
Thanks
Rong
Collecting package metadata (current_repodata.json): done
Solving environment: \
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
- defaults/linux-64::bokeh==2.3.2=py38h06a4308_0
- defaults/linux-64::cython==0.29.23=py38h2531618_0
- defaults/linux-64::nbconvert==6.0.7=py38_0
- defaults/noarch::nbformat==5.1.3=pyhd3eb1b0_0
- defaults/linux-64::anaconda==2021.05=py38_0
- defaults/noarch::jupyterlab==3.0.14=pyhd3eb1b0_1
- defaults/linux-64::clyent==1.2.2=py38_1
- defaults/linux-64::anaconda-navigator==2.0.3=py38_0
- defaults/linux-64::jupyter_server==1.4.1=py38h06a4308_0
- defaults/linux-64::distributed==2021.4.1=py38h06a4308_0
- defaults/linux-64::ipykernel==5.3.4=py38h5ca1d4c_0
- defaults/noarch::flake8==3.9.0=pyhd3eb1b0_0
- defaults/linux-64::gevent==21.1.2=py38h27cfd23_1
- defaults/noarch::pyls-black==0.4.6=hd3eb1b0_0
- defaults/noarch::python-language-server==0.36.2=pyhd3eb1b0_0
- defaults/linux-64::basemap==1.2.0=py38h856778e_4
- defaults/noarch::jupyterlab_pygments==0.1.2=py_0
- defaults/linux-64::astroid==2.5=py38h06a4308_1
- defaults/noarch::anaconda-project==0.9.1=pyhd3eb1b0_1
- defaults/linux-64::zope.event==4.5.0=py38_0
- defaults/noarch::conda-verify==3.4.2=py_1
- defaults/noarch::seaborn==0.11.1=pyhd3eb1b0_0
- defaults/linux-64::astropy==4.2.1=py38h27cfd23_1
- defaults/noarch::networkx==2.5=py_0
- defaults/noarch::pyls-spyder==0.3.2=pyhd3eb1b0_0
- defaults/noarch::nbclient==0.5.3=pyhd3eb1b0_0
- defaults/linux-64::scikit-learn==0.24.1=py38ha9443f7_0
- defaults/noarch::conda-repo-cli==1.0.4=pyhd3eb1b0_0
- defaults/noarch::conda-token==0.3.0=pyhd3eb1b0_0
- defaults/linux-64::spyder-kernels==1.10.2=py38h06a4308_0
- defaults/linux-64::ipython==7.22.0=py38hb070fc8_0
- defaults/noarch::isort==5.8.0=pyhd3eb1b0_0
- defaults/linux-64::matplotlib==3.3.4=py38h06a4308_0
- defaults/noarch::nbclassic==0.2.6=pyhd3eb1b0_0
- defaults/noarch::jupyter-packaging==0.7.12=pyhd3eb1b0_0
- defaults/noarch::flask==1.1.2=pyhd3eb1b0_0
- defaults/noarch::jupyter_console==6.4.0=pyhd3eb1b0_0
- defaults/noarch::jupyterlab_server==2.4.0=pyhd3eb1b0_0
- defaults/noarch::joblib==1.0.1=pyhd3eb1b0_0
- defaults/noarch::backports.functools_lru_cache==1.6.4=pyhd3eb1b0_0
- defaults/linux-64::_ipyw_jlab_nb_ext_conf==0.1.0=py38_0
- defaults/linux-64::anaconda-client==1.7.2=py38_0
- defaults/noarch::pygments==2.8.1=pyhd3eb1b0_0
- defaults/linux-64::jupyter==1.0.0=py38_7
- defaults/linux-64::notebook==6.3.0=py38h06a4308_0
- defaults/linux-64::pylint==2.7.4=py38h06a4308_1
- defaults/linux-64::numba==0.53.1=py38ha9443f7_0
- defaults/noarch::nltk==3.6.1=pyhd3eb1b0_0
- defaults/linux-64::conda-build==3.21.4=py38h06a4308_0
- defaults/linux-64::spyder==4.2.5=py38h06a4308_0
- defaults/noarch::qtconsole==5.0.3=pyhd3eb1b0_0
- defaults/noarch::dask==2021.4.0=pyhd3eb1b0_0
- defaults/noarch::ipywidgets==7.6.3=pyhd3eb1b0_1
- defaults/noarch::sphinx==4.0.1=pyhd3eb1b0_0
- defaults/linux-64::zope.interface==5.3.0=py38h27cfd23_0
- defaults/noarch::jsonschema==3.2.0=py_2
- defaults/linux-64::widgetsnbextension==3.5.1=py38_0
- defaults/linux-64::scikit-image==0.18.1=py38ha9443f7_0
- defaults/noarch::jinja2==2.11.3=pyhd3eb1b0_0
- defaults/noarch::bleach==3.3.0=pyhd3eb1b0_0
- defaults/noarch::numpydoc==1.1.0=pyhd3eb1b0_1
- defaults/linux-64::conda==4.10.3=py38h06a4308_0
failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: /
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
- defaults/linux-64::bokeh==2.3.2=py38h06a4308_0
- defaults/linux-64::cython==0.29.23=py38h2531618_0
- defaults/linux-64::nbconvert==6.0.7=py38_0
- defaults/noarch::nbformat==5.1.3=pyhd3eb1b0_0
- defaults/linux-64::anaconda==2021.05=py38_0
- defaults/noarch::jupyterlab==3.0.14=pyhd3eb1b0_1
- defaults/linux-64::clyent==1.2.2=py38_1
- defaults/linux-64::anaconda-navigator==2.0.3=py38_0
- defaults/linux-64::jupyter_server==1.4.1=py38h06a4308_0
- defaults/linux-64::distributed==2021.4.1=py38h06a4308_0
- defaults/linux-64::ipykernel==5.3.4=py38h5ca1d4c_0
- defaults/noarch::flake8==3.9.0=pyhd3eb1b0_0
- defaults/linux-64::gevent==21.1.2=py38h27cfd23_1
- defaults/noarch::pyls-black==0.4.6=hd3eb1b0_0
- defaults/noarch::python-language-server==0.36.2=pyhd3eb1b0_0
- defaults/linux-64::basemap==1.2.0=py38h856778e_4
- defaults/noarch::jupyterlab_pygments==0.1.2=py_0
- defaults/linux-64::astroid==2.5=py38h06a4308_1
- defaults/noarch::anaconda-project==0.9.1=pyhd3eb1b0_1
- defaults/linux-64::zope.event==4.5.0=py38_0
- defaults/noarch::conda-verify==3.4.2=py_1
- defaults/noarch::seaborn==0.11.1=pyhd3eb1b0_0
- defaults/linux-64::astropy==4.2.1=py38h27cfd23_1
- defaults/noarch::networkx==2.5=py_0
- defaults/noarch::pyls-spyder==0.3.2=pyhd3eb1b0_0
- defaults/noarch::nbclient==0.5.3=pyhd3eb1b0_0
- defaults/linux-64::scikit-learn==0.24.1=py38ha9443f7_0
- defaults/noarch::conda-repo-cli==1.0.4=pyhd3eb1b0_0
- defaults/noarch::conda-token==0.3.0=pyhd3eb1b0_0
- defaults/linux-64::spyder-kernels==1.10.2=py38h06a4308_0
- defaults/linux-64::ipython==7.22.0=py38hb070fc8_0
- defaults/noarch::isort==5.8.0=pyhd3eb1b0_0
- defaults/linux-64::matplotlib==3.3.4=py38h06a4308_0
- defaults/noarch::nbclassic==0.2.6=pyhd3eb1b0_0
- defaults/noarch::jupyter-packaging==0.7.12=pyhd3eb1b0_0
- defaults/noarch::flask==1.1.2=pyhd3eb1b0_0
- defaults/noarch::jupyter_console==6.4.0=pyhd3eb1b0_0
- defaults/noarch::jupyterlab_server==2.4.0=pyhd3eb1b0_0
- defaults/noarch::joblib==1.0.1=pyhd3eb1b0_0
- defaults/noarch::backports.functools_lru_cache==1.6.4=pyhd3eb1b0_0
- defaults/linux-64::_ipyw_jlab_nb_ext_conf==0.1.0=py38_0
- defaults/linux-64::anaconda-client==1.7.2=py38_0
- defaults/noarch::pygments==2.8.1=pyhd3eb1b0_0
- defaults/linux-64::jupyter==1.0.0=py38_7
- defaults/linux-64::notebook==6.3.0=py38h06a4308_0
- defaults/linux-64::pylint==2.7.4=py38h06a4308_1
- defaults/linux-64::numba==0.53.1=py38ha9443f7_0
- defaults/noarch::nltk==3.6.1=pyhd3eb1b0_0
- defaults/linux-64::conda-build==3.21.4=py38h06a4308_0
- defaults/linux-64::spyder==4.2.5=py38h06a4308_0
- defaults/noarch::qtconsole==5.0.3=pyhd3eb1b0_0
- defaults/noarch::dask==2021.4.0=pyhd3eb1b0_0
- defaults/noarch::ipywidgets==7.6.3=pyhd3eb1b0_1
- defaults/noarch::sphinx==4.0.1=pyhd3eb1b0_0
- defaults/linux-64::zope.interface==5.3.0=py38h27cfd23_0
- defaults/noarch::jsonschema==3.2.0=py_2
- defaults/linux-64::widgetsnbextension==3.5.1=py38_0
- defaults/linux-64::scikit-image==0.18.1=py38ha9443f7_0
- defaults/noarch::jinja2==2.11.3=pyhd3eb1b0_0
- defaults/noarch::bleach==3.3.0=pyhd3eb1b0_0
- defaults/noarch::numpydoc==1.1.0=pyhd3eb1b0_1
- defaults/linux-64::conda==4.10.3=py38h06a4308_0
failed with initial frozen solve. Retrying with flexible solve.
Solving environment: \
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
pip list
Package Version Location
---------------------------------- ----------------- ---------------------------------------------------------------
alabaster 0.7.12
anaconda-client 1.7.2
anaconda-navigator 2.0.3
anaconda-project 0.9.1
anyio 2.2.0
appdirs 1.4.4
argh 0.26.2
argon2-cffi 20.1.0
asn1crypto 1.4.0
astroid 2.5
astropy 4.2.1
async-generator 1.10
atomicwrites 1.4.0
attrs 20.3.0
autopep8 1.5.6
Babel 2.9.0
backcall 0.2.0
backports.functools-lru-cache 1.6.4
backports.shutil-get-terminal-size 1.0.0
backports.tempfile 1.0
backports.weakref 1.0.post1
basemap 1.2.0
beautifulsoup4 4.9.3
bitarray 2.1.0
bkcharts 0.2
black 19.10b0
bleach 3.3.0
bokeh 2.3.2
boto 2.49.0
Bottleneck 1.3.2
brotlipy 0.7.0
certifi 2020.12.5
cffi 1.14.5
cftime 1.5.0
chardet 4.0.0
click 7.1.2
cloudpickle 1.6.0
clyent 1.2.2
colorama 0.4.4
conda 4.10.3
conda-build 3.21.4
conda-content-trust 0+unknown
conda-package-handling 1.7.3
conda-repo-cli 1.0.4
conda-token 0.3.0
conda-verify 3.4.2
contextlib2 0.6.0.post1
cryptography 3.4.7
cycler 0.10.0
Cython 0.29.23
cytoolz 0.11.0
dask 2021.4.0
decorator 5.0.6
defusedxml 0.7.1
diff-match-patch 20200713
distributed 2021.4.1
docutils 0.17.1
entrypoints 0.3
et-xmlfile 1.0.1
fastcache 1.1.0
filelock 3.0.12
flake8 3.9.0
Flask 1.1.2
fsspec 0.9.0
future 0.18.2
fv3jeditools 0.0.1
gevent 21.1.2
glmtools 0.1.dev0
glob2 0.7
gmpy2 2.0.8
greenlet 1.0.0
h5py 2.9.0
HeapDict 1.0.1
html5lib 1.1
idna 2.10
imageio 2.9.0
imagesize 1.2.0
importlib-metadata 3.10.0
iniconfig 1.1.1
intervaltree 3.1.0
ipykernel 5.3.4
ipython 7.22.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
isort 5.8.0
itsdangerous 1.1.0
jdcal 1.4.1
jedi 0.17.2
jeepney 0.6.0
Jinja2 2.11.3
joblib 1.0.1
json5 0.9.5
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-console 6.4.0
jupyter-core 4.7.1
jupyter-packaging 0.7.12
jupyter-server 1.4.1
jupyterlab 3.0.14
jupyterlab-pygments 0.1.2
jupyterlab-server 2.4.0
jupyterlab-widgets 1.0.0
keyring 22.3.0
kiwisolver 1.3.1
lazy-object-proxy 1.6.0
libarchive-c 2.9
llvmlite 0.36.0
lmatools 0.6a0
locket 0.2.1
lxml 4.6.3
MarkupSafe 1.1.1
matplotlib 3.2.0
mccabe 0.6.1
mistune 0.8.4
mkl-fft 1.3.0
mkl-random 1.2.1
mkl-service 2.3.0
mock 4.0.3
more-itertools 8.7.0
mpi4py 3.0.0
mpmath 1.2.1
msgpack 1.0.2
multipledispatch 0.6.0
mypy-extensions 0.4.3
navigator-updater 0.2.1
nbclassic 0.2.6
nbclient 0.5.3
nbconvert 6.0.7
nbformat 5.1.3
nest-asyncio 1.5.1
netCDF4 1.5.7
networkx 2.5
nltk 3.6.1
nose 1.3.7
notebook 6.3.0
numba 0.53.1
numexpr 2.7.3
numpy 1.20.1
numpydoc 1.1.0
olefile 0.46
openpyxl 3.0.7
packaging 20.9
pandas 1.2.4
pandocfilters 1.4.3
parso 0.7.0
partd 1.2.0
path 15.1.2
pathlib2 2.3.5
pathspec 0.7.0
patsy 0.5.1
pep8 1.7.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.2.0
pip 21.2.4
pkginfo 1.7.0
pluggy 0.13.1
ply 3.11
prometheus-client 0.10.1
prompt-toolkit 3.0.17
psutil 5.8.0
ptyprocess 0.7.0
py 1.10.0
pycodestyle 2.6.0
pycosat 0.6.3
pycparser 2.20
pycurl 7.43.0.6
pydocstyle 6.0.0
pyerfa 1.7.3
pyflakes 2.2.0
Pygments 2.8.1
pylint 2.7.4
pyls-black 0.4.6
pyls-spyder 0.3.2
pyodbc 4.0.0-unsupported
pyOpenSSL 20.0.1
pyparsing 2.4.7
pyproj 1.9.6
pyrsistent 0.17.3
pyshp 2.1.3
PySocks 1.7.1
pytest 6.2.3
python-dateutil 2.8.1
python-jsonrpc-server 0.4.0
python-language-server 0.36.2
pytz 2021.1
PyWavelets 1.1.1
pyxdg 0.27
PyYAML 5.4.1
pyzmq 20.0.0
QDarkStyle 2.8.1
QtAwesome 1.0.2
qtconsole 5.0.3
QtPy 1.9.0
regex 2021.4.4
requests 2.25.1
rope 0.18.0
Rtree 0.9.7
ruamel.yaml 0.17.10
ruamel.yaml.clib 0.2.6
ruamel-yaml-conda 0.15.100
scikit-image 0.18.1
scikit-learn 0.24.1
scipy 1.6.2
seaborn 0.11.1
SecretStorage 3.3.1
Send2Trash 1.5.0
setuptools 57.4.0
simplegeneric 0.8.1
singledispatch 0.0.0
sip 4.19.13
six 1.15.0
sniffio 1.2.0
snowballstemmer 2.1.0
sortedcollections 2.1.0
sortedcontainers 2.3.0
soupsieve 2.2.1
Sphinx 4.0.1
sphinxcontrib-applehelp 1.0.2
sphinxcontrib-devhelp 1.0.2
sphinxcontrib-htmlhelp 1.0.3
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.3
sphinxcontrib-serializinghtml 1.1.4
sphinxcontrib-websupport 1.2.4
spyder 4.2.5
spyder-kernels 1.10.2
SQLAlchemy 1.4.15
statsmodels 0.12.2
sympy 1.8
tables 3.6.1
tblib 1.7.0
terminado 0.9.4
testpath 0.4.4
textdistance 4.2.1
threadpoolctl 2.1.0
three-merge 0.1.1
tifffile 2020.10.1
toml 0.10.2
toolz 0.11.1
tornado 6.1
tqdm 4.59.0
traitlets 5.0.5
typed-ast 1.4.2
typing-extensions 3.7.4.3
ujson 4.0.2
unicodecsv 0.14.1
urllib3 1.26.4
watchdog 1.0.2
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.37.0
widgetsnbextension 3.5.1
wrapt 1.12.1
wurlitzer 2.1.0
xlrd 2.0.1
XlsxWriter 1.3.8
xlwt 1.3.0
xmltodict 0.12.0
yapf 0.31.0
zict 2.0.0
zipp 3.4.1
zope.event 4.5.0
zope.interface 5.3.0
conda list
# packages in environment at xxxxxxxxxx:
#
# Name Version Build Channel
_ipyw_jlab_nb_ext_conf 0.1.0 py38_0
_libgcc_mutex 0.1 main
alabaster 0.7.12 pyhd3eb1b0_0
anaconda 2021.05 py38_0
anaconda-client 1.7.2 py38_0
anaconda-navigator 2.0.3 py38_0
anaconda-project 0.9.1 pyhd3eb1b0_1
anyio 2.2.0 py38h06a4308_1
appdirs 1.4.4 py_0
argh 0.26.2 py38_0
argon2-cffi 20.1.0 py38h27cfd23_1
asn1crypto 1.4.0 py_0
astroid 2.5 py38h06a4308_1
astropy 4.2.1 py38h27cfd23_1
async_generator 1.10 pyhd3eb1b0_0
atomicwrites 1.4.0 py_0
attrs 20.3.0 pyhd3eb1b0_0
autopep8 1.5.6 pyhd3eb1b0_0
babel 2.9.0 pyhd3eb1b0_0
backcall 0.2.0 pyhd3eb1b0_0
backports 1.0 pyhd3eb1b0_2
backports.functools_lru_cache 1.6.4 pyhd3eb1b0_0
backports.shutil_get_terminal_size 1.0.0 pyhd3eb1b0_3
backports.tempfile 1.0 pyhd3eb1b0_1
backports.weakref 1.0.post1 py_1
basemap 1.2.0 py38h856778e_4
beautifulsoup4 4.9.3 pyha847dfd_0
bitarray 2.1.0 py38h27cfd23_1
bkcharts 0.2 py38_0
black 19.10b0 py_0
blas 1.0 mkl
bleach 3.3.0 pyhd3eb1b0_0
blosc 1.21.0 h8c45485_0
bokeh 2.3.2 py38h06a4308_0
boto 2.49.0 py38_0
bottleneck 1.3.2 py38heb32a55_1
brotlipy 0.7.0 py38h27cfd23_1003
bzip2 1.0.8 h7b6447c_0
c-ares 1.17.1 h27cfd23_0
ca-certificates 2021.4.13 h06a4308_1
cairo 1.16.0 hf32fb01_1
certifi 2020.12.5 py38h06a4308_0
cffi 1.14.5 py38h261ae71_0
cftime 1.5.0 pypi_0 pypi
chardet 4.0.0 py38h06a4308_1003
click 7.1.2 pyhd3eb1b0_0
cloudpickle 1.6.0 py_0
clyent 1.2.2 py38_1
colorama 0.4.4 pyhd3eb1b0_0
conda 4.10.3 py38h06a4308_0
conda-build 3.21.4 py38h06a4308_0
conda-content-trust 0.1.1 pyhd3eb1b0_0
conda-env 2.6.0 1
conda-package-handling 1.7.3 py38h27cfd23_1
conda-repo-cli 1.0.4 pyhd3eb1b0_0
conda-token 0.3.0 pyhd3eb1b0_0
conda-verify 3.4.2 py_1
contextlib2 0.6.0.post1 py_0
cryptography 3.4.7 py38hd23ed53_0
curl 7.71.1 hbc83047_1
cycler 0.10.0 py38_0
cython 0.29.23 py38h2531618_0
cytoolz 0.11.0 py38h7b6447c_0
dask 2021.4.0 pyhd3eb1b0_0
dask-core 2021.4.0 pyhd3eb1b0_0
dbus 1.13.18 hb2f20db_0
decorator 5.0.6 pyhd3eb1b0_0
defusedxml 0.7.1 pyhd3eb1b0_0
diff-match-patch 20200713 py_0
distributed 2021.4.1 py38h06a4308_0
docutils 0.17.1 py38h06a4308_1
entrypoints 0.3 py38_0
et_xmlfile 1.0.1 py_1001
expat 2.3.0 h2531618_2
fastcache 1.1.0 py38h7b6447c_0
filelock 3.0.12 pyhd3eb1b0_1
flake8 3.9.0 pyhd3eb1b0_0
flask 1.1.2 pyhd3eb1b0_0
fontconfig 2.13.1 h6c09931_0
freetype 2.10.4 h5ab3b9f_0
fribidi 1.0.10 h7b6447c_0
fsspec 0.9.0 pyhd3eb1b0_0
future 0.18.2 py38_1
geos 3.8.0 he6710b0_0
get_terminal_size 1.0.0 haa9412d_0
gevent 21.1.2 py38h27cfd23_1
glib 2.68.1 h36276a3_0
glob2 0.7 pyhd3eb1b0_0
gmp 6.2.1 h2531618_2
gmpy2 2.0.8 py38hd5f6e3b_3
graphite2 1.3.14 h23475e2_0
greenlet 1.0.0 py38h2531618_2
gst-plugins-base 1.14.0 h8213a91_2
gstreamer 1.14.0 h28cd5cc_2
h5py 2.10.0 py38h7918eee_0
harfbuzz 2.8.0 h6f93f22_0
hdf5 1.10.4 hb1b8bf9_0
heapdict 1.0.1 py_0
html5lib 1.1 py_0
icu 58.2 he6710b0_3
idna 2.10 pyhd3eb1b0_0
imageio 2.9.0 pyhd3eb1b0_0
imagesize 1.2.0 pyhd3eb1b0_0
importlib-metadata 3.10.0 py38h06a4308_0
importlib_metadata 3.10.0 hd3eb1b0_0
iniconfig 1.1.1 pyhd3eb1b0_0
intel-openmp 2021.2.0 h06a4308_610
intervaltree 3.1.0 py_0
ipykernel 5.3.4 py38h5ca1d4c_0
ipython 7.22.0 py38hb070fc8_0
ipython_genutils 0.2.0 pyhd3eb1b0_1
ipywidgets 7.6.3 pyhd3eb1b0_1
isort 5.8.0 pyhd3eb1b0_0
itsdangerous 1.1.0 pyhd3eb1b0_0
jbig 2.1 hdba287a_0
jdcal 1.4.1 py_0
jedi 0.17.2 py38h06a4308_1
jeepney 0.6.0 pyhd3eb1b0_0
jinja2 2.11.3 pyhd3eb1b0_0
joblib 1.0.1 pyhd3eb1b0_0
jpeg 9b h024ee3a_2
json5 0.9.5 py_0
jsonschema 3.2.0 py_2
jupyter 1.0.0 py38_7
jupyter-packaging 0.7.12 pyhd3eb1b0_0
jupyter_client 6.1.12 pyhd3eb1b0_0
jupyter_console 6.4.0 pyhd3eb1b0_0
jupyter_core 4.7.1 py38h06a4308_0
jupyter_server 1.4.1 py38h06a4308_0
jupyterlab 3.0.14 pyhd3eb1b0_1
jupyterlab_pygments 0.1.2 py_0
jupyterlab_server 2.4.0 pyhd3eb1b0_0
jupyterlab_widgets 1.0.0 pyhd3eb1b0_1
keyring 22.3.0 py38h06a4308_0
kiwisolver 1.3.1 py38h2531618_0
krb5 1.18.2 h173b8e3_0
lazy-object-proxy 1.6.0 py38h27cfd23_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.33.1 h53a641e_7
libarchive 3.4.2 h62408e4_0
libcurl 7.71.1 h20c2e04_1
libedit 3.1.20210216 h27cfd23_1
libev 4.33 h7b6447c_0
libffi 3.3 he6710b0_2
libgcc-ng 9.1.0 hdf63c60_0
libgfortran-ng 7.3.0 hdf63c60_0
liblief 0.10.1 he6710b0_0
libllvm10 10.0.1 hbcb73fb_5
libpng 1.6.37 hbc83047_0
libsodium 1.0.18 h7b6447c_0
libspatialindex 1.9.3 h2531618_0
libssh2 1.9.0 h1ba5d50_1
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.2.0 h85742a9_0
libtool 2.4.6 h7b6447c_1005
libuuid 1.0.3 h1bed415_2
libuv 1.40.0 h7b6447c_0
libwebp-base 1.2.0 h27cfd23_0
libxcb 1.14 h7b6447c_0
libxml2 2.9.10 hb55368b_3
libxslt 1.1.34 hc22bd24_0
llvmlite 0.36.0 py38h612dafd_4
locket 0.2.1 py38h06a4308_1
lxml 4.6.3 py38h9120a33_0
lz4-c 1.9.3 h2531618_0
lzo 2.10 h7b6447c_2
markupsafe 1.1.1 py38h7b6447c_0
matplotlib 3.2.0 pypi_0 pypi
mccabe 0.6.1 py38_1
mistune 0.8.4 py38h7b6447c_1000
mkl 2021.2.0 h06a4308_296
mkl-service 2.3.0 py38h27cfd23_1
mkl_fft 1.3.0 py38h42c9631_2
mkl_random 1.2.1 py38ha9443f7_2
mock 4.0.3 pyhd3eb1b0_0
more-itertools 8.7.0 pyhd3eb1b0_0
mpc 1.1.0 h10f8cd9_1
mpfr 4.0.2 hb69a4c5_1
mpmath 1.2.1 py38h06a4308_0
msgpack-python 1.0.2 py38hff7bd54_1
multipledispatch 0.6.0 py38_0
mypy_extensions 0.4.3 py38_0
navigator-updater 0.2.1 py38_0
nbclassic 0.2.6 pyhd3eb1b0_0
nbclient 0.5.3 pyhd3eb1b0_0
nbconvert 6.0.7 py38_0
nbformat 5.1.3 pyhd3eb1b0_0
ncurses 6.2 he6710b0_1
nest-asyncio 1.5.1 pyhd3eb1b0_0
netcdf4 1.5.7 pypi_0 pypi
networkx 2.5 py_0
nltk 3.6.1 pyhd3eb1b0_0
nose 1.3.7 pyhd3eb1b0_1006
notebook 6.3.0 py38h06a4308_0
numba 0.53.1 py38ha9443f7_0
numexpr 2.7.3 py38h22e1b3c_1
numpy 1.20.1 py38h93e21f0_0
numpy-base 1.20.1 py38h7d8b39e_0
numpydoc 1.1.0 pyhd3eb1b0_1
olefile 0.46 py_0
openpyxl 3.0.7 pyhd3eb1b0_0
openssl 1.1.1k h27cfd23_0
packaging 20.9 pyhd3eb1b0_0
pandas 1.2.4 py38h2531618_0
pandoc 2.12 h06a4308_0
pandocfilters 1.4.3 py38h06a4308_1
pango 1.45.3 hd140c19_0
parso 0.7.0 py_0
partd 1.2.0 pyhd3eb1b0_0
patchelf 0.12 h2531618_1
path 15.1.2 py38h06a4308_0
path.py 12.5.0 0
pathlib2 2.3.5 py38h06a4308_2
pathspec 0.7.0 py_0
patsy 0.5.1 py38_0
pcre 8.44 he6710b0_0
pep8 1.7.1 py38_0
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 8.2.0 py38he98fc37_0
pip 21.2.4 pypi_0 pypi
pixman 0.40.0 h7b6447c_0
pkginfo 1.7.0 py38h06a4308_0
pluggy 0.13.1 py38h06a4308_0
ply 3.11 py38_0
proj4 5.2.0 he6710b0_1
prometheus_client 0.10.1 pyhd3eb1b0_0
prompt-toolkit 3.0.17 pyh06a4308_0
prompt_toolkit 3.0.17 hd3eb1b0_0
psutil 5.8.0 py38h27cfd23_1
ptyprocess 0.7.0 pyhd3eb1b0_2
py 1.10.0 pyhd3eb1b0_0
py-lief 0.10.1 py38h403a769_0
pycodestyle 2.6.0 pyhd3eb1b0_0
pycosat 0.6.3 py38h7b6447c_1
pycparser 2.20 py_2
pycurl 7.43.0.6 py38h1ba5d50_0
pydocstyle 6.0.0 pyhd3eb1b0_0
pyerfa 1.7.3 py38h27cfd23_0
pyflakes 2.2.0 pyhd3eb1b0_0
pygments 2.8.1 pyhd3eb1b0_0
pylint 2.7.4 py38h06a4308_1
pyls-black 0.4.6 hd3eb1b0_0
pyls-spyder 0.3.2 pyhd3eb1b0_0
pyodbc 4.0.30 py38he6710b0_0
pyopenssl 20.0.1 pyhd3eb1b0_1
pyparsing 2.4.7 pyhd3eb1b0_0
pyproj 1.9.6 py38h14380d9_0
pyqt 5.9.2 py38h05f1152_4
pyrsistent 0.17.3 py38h7b6447c_0
pyshp 2.1.3 pyhd3eb1b0_0
pysocks 1.7.1 py38h06a4308_0
pytables 3.6.1 py38h9fd0a39_0
pytest 6.2.3 py38h06a4308_2
python 3.8.8 hdb3f193_5
python-dateutil 2.8.1 pyhd3eb1b0_0
python-jsonrpc-server 0.4.0 py_0
python-language-server 0.36.2 pyhd3eb1b0_0
python-libarchive-c 2.9 pyhd3eb1b0_1
pytz 2021.1 pyhd3eb1b0_0
pywavelets 1.1.1 py38h7b6447c_2
pyxdg 0.27 pyhd3eb1b0_0
pyyaml 5.4.1 py38h27cfd23_1
pyzmq 20.0.0 py38h2531618_1
qdarkstyle 2.8.1 py_0
qt 5.9.7 h5867ecd_1
qtawesome 1.0.2 pyhd3eb1b0_0
qtconsole 5.0.3 pyhd3eb1b0_0
qtpy 1.9.0 py_0
readline 8.1 h27cfd23_0
regex 2021.4.4 py38h27cfd23_0
requests 2.25.1 pyhd3eb1b0_0
ripgrep 12.1.1 0
rope 0.18.0 py_0
rtree 0.9.7 py38h06a4308_1
ruamel-yaml 0.17.10 pypi_0 pypi
ruamel-yaml-clib 0.2.6 pypi_0 pypi
ruamel_yaml 0.15.100 py38h27cfd23_0
scikit-image 0.18.1 py38ha9443f7_0
scikit-learn 0.24.1 py38ha9443f7_0
scipy 1.6.2 py38had2a1c9_1
seaborn 0.11.1 pyhd3eb1b0_0
secretstorage 3.3.1 py38h06a4308_0
send2trash 1.5.0 pyhd3eb1b0_1
setuptools 57.4.0 pypi_0 pypi
simplegeneric 0.8.1 py38_2
singledispatch 3.6.1 pyhd3eb1b0_1001
sip 4.19.13 py38he6710b0_0
six 1.15.0 py38h06a4308_0
sniffio 1.2.0 py38h06a4308_1
snowballstemmer 2.1.0 pyhd3eb1b0_0
sortedcollections 2.1.0 pyhd3eb1b0_0
sortedcontainers 2.3.0 pyhd3eb1b0_0
soupsieve 2.2.1 pyhd3eb1b0_0
sphinx 4.0.1 pyhd3eb1b0_0
sphinxcontrib 1.0 py38_1
sphinxcontrib-applehelp 1.0.2 pyhd3eb1b0_0
sphinxcontrib-devhelp 1.0.2 pyhd3eb1b0_0
sphinxcontrib-htmlhelp 1.0.3 pyhd3eb1b0_0
sphinxcontrib-jsmath 1.0.1 pyhd3eb1b0_0
sphinxcontrib-qthelp 1.0.3 pyhd3eb1b0_0
sphinxcontrib-serializinghtml 1.1.4 pyhd3eb1b0_0
sphinxcontrib-websupport 1.2.4 py_0
spyder 4.2.5 py38h06a4308_0
spyder-kernels 1.10.2 py38h06a4308_0
sqlalchemy 1.4.15 py38h27cfd23_0
sqlite 3.35.4 hdfb4753_0
statsmodels 0.12.2 py38h27cfd23_0
sympy 1.8 py38h06a4308_0
tbb 2020.3 hfd86e86_0
tblib 1.7.0 py_0
terminado 0.9.4 py38h06a4308_0
testpath 0.4.4 pyhd3eb1b0_0
textdistance 4.2.1 pyhd3eb1b0_0
threadpoolctl 2.1.0 pyh5ca1d4c_0
three-merge 0.1.1 pyhd3eb1b0_0
tifffile 2020.10.1 py38hdd07704_2
tk 8.6.10 hbc83047_0
toml 0.10.2 pyhd3eb1b0_0
toolz 0.11.1 pyhd3eb1b0_0
tornado 6.1 py38h27cfd23_0
tqdm 4.59.0 pyhd3eb1b0_1
traitlets 5.0.5 pyhd3eb1b0_0
typed-ast 1.4.2 py38h27cfd23_1
typing_extensions 3.7.4.3 pyha847dfd_0
ujson 4.0.2 py38h2531618_0
unicodecsv 0.14.1 py38_0
unixodbc 2.3.9 h7b6447c_0
urllib3 1.26.4 pyhd3eb1b0_0
watchdog 1.0.2 py38h06a4308_1
wcwidth 0.2.5 py_0
webencodings 0.5.1 py38_1
werkzeug 1.0.1 pyhd3eb1b0_0
wheel 0.37.0 pypi_0 pypi
widgetsnbextension 3.5.1 py38_0
wrapt 1.12.1 py38h7b6447c_1
wurlitzer 2.1.0 py38h06a4308_0
xlrd 2.0.1 pyhd3eb1b0_0
xlsxwriter 1.3.8 pyhd3eb1b0_0
xlwt 1.3.0 py38_0
xmltodict 0.12.0 py_0
xz 5.2.5 h7b6447c_0
yaml 0.2.5 h7b6447c_0
yapf 0.31.0 pyhd3eb1b0_0
zeromq 4.3.4 h2531618_0
zict 2.0.0 pyhd3eb1b0_0
zipp 3.4.1 pyhd3eb1b0_0
zlib 1.2.11 h7b6447c_3
zope 1.0 py38_1
zope.event 4.5.0 py38_0
zope.interface 5.3.0 py38h27cfd23_0
zstd 1.4.5 h9ceee32_0
| closed | 2021-08-18T22:54:55Z | 2021-09-28T15:49:43Z | https://github.com/SciTools/cartopy/issues/1828 | [] | rkong66 | 8 |
flairNLP/flair | nlp | 3,199 | [Bug]: ModuleNotFoundError: 'flair.trainers.plugins.functional' on git-installed master | ### Describe the bug
When installing the flair master branch via pypi & git, we get a ModuleNotFoundError.
### To Reproduce
```python
# pip install git+https://github.com/flairNLP/flair.git
from flair.models import TARSClassifier
```
### Expected behavior
I can import any module and use flair normally.
### Logs and Stack traces
```stacktrace
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\bened\anaconda3\envs\steepmc_clf\lib\site-packages\flair\__init__.py", line 28, in <module>
from . import ( # noqa: E402 import after setting device
File "C:\Users\bened\anaconda3\envs\steepmc_clf\lib\site-packages\flair\trainers\__init__.py", line 2, in <module>
from .trainer import ModelTrainer
File "C:\Users\bened\anaconda3\envs\steepmc_clf\lib\site-packages\flair\trainers\trainer.py", line 19, in <module>
from flair.trainers.plugins import (
File "C:\Users\bened\anaconda3\envs\steepmc_clf\lib\site-packages\flair\trainers\plugins\__init__.py", line 2, in <module>
from .functional.amp import AmpPlugin
ModuleNotFoundError: No module named 'flair.trainers.plugins.functional'
```
### Screenshots
_No response_
### Additional Context
I suppose this happens due to missing `__init__.py` files in https://github.com/flairNLP/flair/tree/master/flair/trainers/plugins/functional and https://github.com/flairNLP/flair/tree/master/flair/trainers/plugins/loggers
leading to those folders not being recognized as packages, so their code is never found/installed.
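If that is the cause, the usual packaging-side fix (a sketch; flair's actual setup.py may differ) is either to add the missing `__init__.py` files or to collect the subpackages with `find_namespace_packages`:
```python
# setup.py sketch (hypothetical): namespace-package discovery picks up
# subfolders even when they have no __init__.py.
from setuptools import setup, find_namespace_packages

setup(
    name="flair",
    packages=find_namespace_packages(include=["flair", "flair.*"]),
)
```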
### Environment
#### Versions:
##### Pytorch
2.0.0+cu117
##### flair
`ModuleNotFoundError`
##### Transformers
4.28.1
#### GPU
True
| closed | 2023-04-18T15:28:44Z | 2023-04-19T20:12:44Z | https://github.com/flairNLP/flair/issues/3199 | [
"bug"
] | helpmefindaname | 1 |
pytorch/pytorch | numpy | 149,324 | Unguarded Usage of Facebook Internal Code? | ### 🐛 Describe the bug
There is a [reference](https://github.com/pytorch/pytorch/blob/c7c3e7732443d7994303499bcb01781c9d59ab58/torch/_inductor/fx_passes/group_batch_fusion.py#L25) to `import deeplearning.fbgemm.fbgemm_gpu.fb.inductor_lowerings`, which we believe to be Facebook internal Python module based on description of this [commit](https://github.com/pytorch/benchmark/commit/e26cd75d042e880676a5f21873f2aaa72e178be1).
It looks like if the module isn't found, `torch` disables some `fbgemm` inductor lowerings.
Is this expected for this code snippet, or should this rely on publicly available `fbgemm`?
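For context, the guarded-import idiom described above looks roughly like this (a generic sketch, not the exact code in `group_batch_fusion.py`):
```python
try:
    # Facebook-internal lowerings; only available in Meta's internal build.
    import deeplearning.fbgemm.fbgemm_gpu.fb.inductor_lowerings as fb_inductor_lowerings

    has_fbgemm_lowerings = True
except ImportError:
    fb_inductor_lowerings = None
    has_fbgemm_lowerings = False
```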
### Versions
Looks like this module is used as described above since torch's transition to open-source (at least).
cc @chauhang @penguinwu | open | 2025-03-17T15:54:24Z | 2025-03-17T20:29:04Z | https://github.com/pytorch/pytorch/issues/149324 | [
"triaged",
"module: third_party",
"oncall: pt2"
] | BwL1289 | 1 |
KaiyangZhou/deep-person-reid | computer-vision | 204 | OSNet training error | I got the below error message when trying to train osnet. Not sure what caused it.
```shell
=> Start training
* Only train ['classifier'] (epoch: 1/10)
Traceback (most recent call last):
File "main.py", line 168, in <module>
main()
File "main.py", line 164, in main
engine.run(**engine_run_kwargs(args))
File "/home/guest/mvb/deep-person-reid/torchreid/engine/engine.py", line 119, in run
self.train(epoch, max_epoch, trainloader, fixbase_epoch, open_layers, print_freq)
File "/home/guest/mvb/deep-person-reid/torchreid/engine/image/softmax.py", line 89, in train
for batch_idx, data in enumerate(trainloader):
File "/home/guest/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 582, in __next__
return self._process_next_batch(batch)
File "/home/guest/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 608, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
File "/home/guest/.local/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/guest/.local/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 68, in default_collate
return [default_collate(samples) for samples in transposed]
File "/home/guest/.local/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 68, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "/home/guest/.local/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 43, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 808 and 552 in dimension 2 at /pytorch/aten/src/TH/generic/THTensor.cpp:711
``` | closed | 2019-07-05T05:40:27Z | 2019-07-05T08:12:13Z | https://github.com/KaiyangZhou/deep-person-reid/issues/204 | [] | johnzhang1999 | 2 |
ultralytics/yolov5 | pytorch | 13,427 | How to add FPS and mAPs evaluation metrics in YOLOv5? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
How can I add FPS and mAPs evaluation metrics in YOLOv5?
### Additional
_No response_ | open | 2024-11-22T03:00:31Z | 2024-11-24T10:09:08Z | https://github.com/ultralytics/yolov5/issues/13427 | [
"question"
] | lqh964165950 | 2 |
voxel51/fiftyone | data-science | 4,761 | [DOCS] When using fob.compute_visualization for object similarity, does it compute the embedding for each object instance? | ### Instructions
Thank you for submitting an issue. Please refer to our [issue policy](https://www.github.com/voxel51/fiftyone/blob/develop/ISSUE_POLICY.md) for information on what types of issues we address.
1. Please fill in this template to ensure a timely and thorough response
2. Place an "x" between the brackets next to an option if it applies. For example:
- [x] Selected option
3. **Please delete everything above this line before submitting the issue**
### URL(s) with the issue
Please provide a link to the documentation entry in question.
### Description of proposal (what needs changing)
Provide a clear description. Why is the proposed documentation better?
### Willingness to contribute
The FiftyOne Community encourages documentation contributions. Would you or another member of your organization be willing to contribute a fix for this documentation issue to the FiftyOne codebase?
- [ ] Yes. I can contribute a documentation fix independently
- [ ] Yes. I would be willing to contribute a documentation fix with guidance from the FiftyOne community
- [ ] No. I cannot contribute a documentation fix at this time
| closed | 2024-08-31T16:34:50Z | 2024-08-31T16:36:19Z | https://github.com/voxel51/fiftyone/issues/4761 | [
"documentation"
] | lyf6 | 0 |
amisadmin/fastapi-amis-admin | fastapi | 163 | S3 support | What is the best way to transparently store files/images to S3? Could anybody share a simple demo, please? | closed | 2024-03-15T14:48:48Z | 2024-03-20T13:46:54Z | https://github.com/amisadmin/fastapi-amis-admin/issues/163 | [] | mmmcorpsvit | 0
deeppavlov/DeepPavlov | nlp | 1,189 | ImportError: cannot import name 'build_model' | Hi team,
I have installed DeepPavlov version 0.1.0 but unable to import build_model. PFA screenshot for details.
Configuration: Windows 10

| closed | 2020-04-27T11:47:49Z | 2020-05-13T09:22:36Z | https://github.com/deeppavlov/DeepPavlov/issues/1189 | [] | SahithiParsi | 4 |
ContextLab/hypertools | data-visualization | 208 | ImportError: numpy.core.multiarray failed to import | I got an ImportError when I import hypertools, and my numpy is 1.12.1 on Windows (or 1.14 on Mac). How can I run it?
But when I import hypertools a second time, the error disappears. | open | 2018-05-04T04:41:36Z | 2018-05-18T06:53:03Z | https://github.com/ContextLab/hypertools/issues/208 | [] | zhouyanasd | 4
huggingface/datasets | nlp | 6,834 | largelisttype not supported (.from_polars()) | ### Describe the bug
The following code fails because LargeListType is not supported.
This is especially a problem for .from_polars since polars uses LargeListType.
### Steps to reproduce the bug
```python
import datasets
import polars as pl
df = pl.DataFrame({"list": [[]]})
datasets.Dataset.from_polars(df)
```
### Expected behavior
Convert LargeListType to list.
### Environment info
- `datasets` version: 2.19.1.dev0
- Platform: Linux-6.8.7-200.fc39.x86_64-x86_64-with-glibc2.38
- Python version: 3.12.2
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.3.1 | closed | 2024-04-24T11:33:43Z | 2024-08-12T14:43:46Z | https://github.com/huggingface/datasets/issues/6834 | [] | Modexus | 0 |
xlwings/xlwings | automation | 1,928 | Conda environments at customized locations | When the VBA code tries to load the XLWings DLL and a Conda Env is specified, it tries to load the DLL from:
```
{Conda Path}\envs\{Conda Env}\xlwings...dll
```
In Windows, `Conda Path` usually is something like `C:\ProgramData\Miniconda3` and the default path for the conda environments is `C:\ProgramData\Miniconda3\envs\`, and so everything works.
But when we customize the path where the conda environments are created (with `conda config --add envs_dirs <path>`), it breaks, because now the DLL is at `<path>\{Conda Env}\xlwings...dll`. | open | 2022-06-02T16:47:55Z | 2022-06-03T06:31:26Z | https://github.com/xlwings/xlwings/issues/1928 | [] | jalexandretoledo | 2
miguelgrinberg/flasky | flask | 241 | post view has a little bug | Hi Miguel,
I am studying Flask using your tutorials. Thanks for your helpful book and code.
I found a little bug in the `post` view in the views.py file of the main blueprint: this view doesn't check whether the current user has `Permission.COMMENT`. I noticed that you remove the comment form from the template when the current user has no comment permission, but I think the `post` view should have its own validation logic. If an anonymous user sends a POST request to this view, an error is raised:
`AttributeError: 'AnonymousUser' object has no attribute '_sa_instance_state'`
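A sketch of the kind of guard that would avoid this (using the book's `Permission`/`current_user.can()` conventions; hypothetical, the real view differs in details):
```python
@main.route('/post/<int:id>', methods=['GET', 'POST'])
def post(id):
    post = Post.query.get_or_404(id)
    form = CommentForm()
    if form.validate_on_submit():
        # Validate in the view, not only in the template: anonymous users and
        # users without Permission.COMMENT are rejected before any DB access.
        if not current_user.can(Permission.COMMENT):
            abort(403)
        ...  # create the Comment as before
    ...
```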
Thanks. | closed | 2017-02-16T07:36:38Z | 2017-12-10T20:06:27Z | https://github.com/miguelgrinberg/flasky/issues/241 | [
"enhancement"
] | luog1992 | 2 |
Avaiga/taipy | automation | 2,308 | Integrate VTK Visualization into Taipy as an Extension | ### Description:
VTK (Visualization Toolkit) provides robust 3D visualization capabilities widely used in domains like medical imaging, computational fluid dynamics, and scientific data visualization. Integrating similar functionality directly into Taipy as an optional extension would greatly expand Taipy’s visualization repertoire, enabling users to build rich 3D interactive graphics within the Taipy environment.
### Proposed Solution:
- Create a Taipy extension or component wrapper that can be embedded directly within Taipy pages.
- Provide a straightforward API for developers to:
- Load 3D datasets.
- Interactively manipulate views (e.g., rotate, zoom).
- Apply filters, color maps, and advanced rendering options.
- Support bidirectional communication between the visualization component and Taipy states/variables, similar to how Taipy integrates with other components.
Example Use Case: A medical researcher might want to visualize MRI scans in 3D, slice through volumetric data, or apply custom segmentations. An engineer might want to display complex CFD simulations, adjusting parameters on the fly and seeing updated 3D renderings without leaving the Taipy interface.
### Acceptance Criteria
- [ ] If applicable, a new demo code is provided to show the new feature in action.
- [ ] Integration tests exhibiting how the functionality works are added.
- [ ] Any new code is covered by a unit test.
- [ ] Check code coverage is at least 90%.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | open | 2024-12-06T15:06:45Z | 2024-12-06T15:09:12Z | https://github.com/Avaiga/taipy/issues/2308 | [
"🖰 GUI",
"🟩 Priority: Low",
"✨New feature"
] | FlorianJacta | 0 |
paulpierre/RasaGPT | fastapi | 8 | An error is reported during installation, indicating that Organization already exists | Traceback (most recent call last):
File "/app/api/seed.py", line 128, in <module>
org_obj = create_org_by_org_or_uuid(
File "/app/api/helpers.py", line 95, in create_org_by_org_or_uuid
raise HTTPException(status_code=404, detail="Organization already exists")
fastapi.exceptions.HTTPException
| closed | 2023-05-10T09:24:40Z | 2023-05-10T13:23:33Z | https://github.com/paulpierre/RasaGPT/issues/8 | [] | Hkaisense | 1 |
gradio-app/gradio | python | 10,869 | Model3D Improvements Tracking Issue | - [x] I have searched to see if a similar issue already exists.
https://github.com/gradio-app/gradio/pull/10847#issuecomment-2745928539
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Additional context**
Add any other context or screenshots about the feature request here.
| open | 2025-03-24T18:11:21Z | 2025-03-24T18:11:21Z | https://github.com/gradio-app/gradio/issues/10869 | [] | dawoodkhan82 | 0 |
deeppavlov/DeepPavlov | tensorflow | 1,483 | `parse_config` doesn't allow to add extra variables | Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first.
**What problem are we trying to solve?**:
```
1. The `parse_config` function from `deeppavlov.core.commands.utils` doesn't allow me to add extra vars or override existing ones. The only way to override vars is to add an environment variable, which is very unhandy. I can rewrite this function so it allows adding extra vars.
2. Variables in config-files are substituted by hand. Why don't you use industry standard template engines like J2?
```
**How can we solve it?**:
```
1. Via adding new parameter to function.
2. Via using jinja
```
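In the meantime, a user-side workaround could look roughly like this (a sketch; it assumes the object returned by `parse_config` can be mutated like a nested dict):
```python
from deeppavlov.core.commands.utils import parse_config

def parse_config_with_overrides(config, overrides):
    cfg = parse_config(config)
    for dotted_key, value in overrides.items():
        node = cfg
        *parents, last = dotted_key.split(".")
        for part in parents:
            node = node[part]
        node[last] = value  # override or add the extra variable
    return cfg
```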
**Are there other issues that block this solution?**:
```
As far as I can see - none.
```
If you are ok with it - I will code it myself and do PR. | open | 2021-09-05T17:23:12Z | 2021-10-24T16:33:24Z | https://github.com/deeppavlov/DeepPavlov/issues/1483 | [
"enhancement"
] | QtRoS | 1 |
modin-project/modin | data-science | 6,879 | The query_compiler.merge reconstructs the Right dataframe for every partition of Left Dataframe | The query_compiler.merge reconstructs the Right dataframe from its partitions for every partition of the Left dataframe; the concat operation results in higher memory consumption when the right dataframe is large.
A possible option is to combine the right dataframe partitions into a single-partition dataframe by calling a remote function. This single-partition dataframe is then passed to each partition of the left dataframe, thus avoiding the reconstruction in every worker during the merge. | closed | 2024-01-25T11:39:01Z | 2024-02-13T12:09:24Z | https://github.com/modin-project/modin/issues/6879 | [
"Internals"
] | arunjose696 | 0 |
dynaconf/dynaconf | flask | 556 | [bug] - Dynaconf stop list variables and does not recognize develop variables in my env | Dynaconf doesn't list variables and throws a validation error.
* I change my config from mac +zsh to manjaro + fish
* I run my venv
* I run dynaconf -i config.settings list
1. Having the following folder structure
> tree 09:10:07
.
├── alembic
│ ├── env.py
│ ├── __pycache__
│ │ └── env.cpython-39.pyc
│ ├── README
│ ├── script.py.mako
│ └── versions
├── alembic.ini
├── api
│ ├── api_v1
│ │ └── __init__.py
│ ├── deps.py
│ ├── __init__.py
│ └── tests
│ ├── __init__.py
│ ├── test_celery.py
│ ├── test_items.py
│ ├── test_login.py
│ └── test_users.py
├── backend_pre_start.py
├── celeryworker_pre_start.py
├── config.py
├── config.py.old
├── conftest.py
├── connections
│ ├── fetcher.py
│ └── __init__.py
├── constants
│ ├── core.py
│ ├── __init__.py
│ └── shopping_cart_checkout.py
├── crud
│ ├── base.py
│ └── tests
│ ├── __init__.py
│ ├── test_item.py
│ └── test_user.py
├── db
│ ├── base_class.py
│ ├── base.py
│ ├── __init__.py
│ └── __pycache__
│ ├── base_class.cpython-39.pyc
│ ├── base.cpython-39.pyc
│ └── __init__.cpython-39.pyc
├── ext
│ ├── celery_app.py
│ ├── config.py
│ ├── database.py
│ ├── __init__.py
│ └── security.py
├── init_db.py
├── initial_data.py
├── __init__.py
├── main.py
├── models
├── __pycache__
│ └── config.cpython-39.pyc
├── schemas
├── settings.toml
├── test.py
├── tests
│ ├── __init__.py
│ └── utils
│ ├── __init__.py
│ ├── item.py
│ ├── user.py
│ └── utils.py
├── tests_pre_start.py
├── utils.py
└── worker.py
2. Having the following config files:
The dynaconf run in app folder and settings.toml and .secrets.toml staying in app folder
**/app/.secrets.toml**
```toml
[development]
# Postgres
POSTGRES_SERVER=172.15.0.2
POSTGRES_USER=user
POSTGRES_PASSWORD=pass
POSTGRES_DB=mydb
```
and
**/app/settings.toml**
```toml
[default]
STACK_NAME="paymentgateway-com-br"
BACKEND_CORS_ORIGINS=["http://dev.domain.com"]
PROJECT_NAME="Microservice Gateway"
SECRET_KEY="dc24d995bf6c"
FIRST_SUPERUSER="email@email.com.br"
FIRST_SUPERUSER_PASSWORD="123"
SMTP_TLS=true
SMTP_PORT=587
SMTP_HOST=""
SMTP_USER=""
SMTP_PASSWORD=""
EMAILS_FROM_EMAIL="email@email.com"
USERS_OPEN_REGISTRATION=false
SENTRY_DSN=""
API_V1_STR="/payment-api/v1"
# 60 minutes * 24 hours * 8 days = 8 days
ACCESS_TOKEN_EXPIRE_MINUTES=11520
[development]
DOMAIN="localhost"
TRAEFIK_PUBLIC_NETWORK="traefik-public"
TRAEFIK_TAG="paymentgateway.app.com.br"
TRAEFIK_PUBLIC_TAG="traefik-public"
DOCKER_IMAGE_BACKEND="docker"
BACKEND_CORS_ORIGINS=["http://localhost", "https://localhost"]
# Postgres
SQLALCHEMY_DATABASE_URI="..."
```
3. Having the following app code:
```python
from typing import Any
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from dynaconf import settings
from loguru import logger
def get_engine():
SQLALCHEMY_DATABASE_URL = settings.SQLALCHEMY_DATABASE_URI
engine = create_engine(
SQLALCHEMY_DATABASE_URL,
)
return engine
def get_session():
_engine = get_engine()
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=_engine)
return SessionLocal
```
**/app/main.py**
```python
from fastapi import FastAPI
from starlette.middleware.cors import CORSMiddleware
from api.api_v1.api import api_router
from dynaconf import settings
import logging
import sys
from loguru import logger
app = FastAPI(
title=settings.PROJECT_NAME,
openapi_url=f"{settings.API_V1_STR}/openapi.json",
docs_url="/payment-api",
redoc_url=None
)
if settings.BACKEND_CORS_ORIGINS:
app.add_middleware(
CORSMiddleware,
allow_origins=[str(origin) for origin in settings.BACKEND_CORS_ORIGINS],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(api_router, prefix=settings.API_V1_STR)
```
4. Executing under the following environment
<development>
```fish
$ cd app
$ dynaconf -i config.settings list
```
</details>
**Expected behavior**
List all variables settings
**Environment (please complete the following information):**
- OS: Manjaro BSPWN 20.0.1
- Dynaconf Version 3.1.2 and 3.1.4
- Frameworks in use (FastAPI - 0.61.2)
**Additional context**
Error:
```
> dynaconf -i config.settings list 09:06:22
Traceback (most recent call last):
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/toml/decoder.py", line 253, in loads
try:n=K.load_line(C,G,T,P)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/toml/decoder.py", line 355, in load_line
if P==A[-1]:raise ValueError('Invalid date or number')
ValueError: Invalid date or number
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/bin/dynaconf", line 8, in <module>
sys.exit(main())
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/click/core.py", line 221, in __call__
def __call__(A,*B,**C):return A.main(*B,**C)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/click/core.py", line 205, in main
H=E.invoke(F)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/click/core.py", line 345, in invoke
with C:return F(C.command.invoke(C))
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/click/core.py", line 288, in invoke
if A.callback is not _A:return ctx.invoke(A.callback,**ctx.params)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/click/core.py", line 170, in invoke
with G:return A(*B,**E)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/cli.py", line 442, in _list
cur_env = settings.current_env.lower()
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/base.py", line 145, in __getattr__
self._setup()
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/base.py", line 195, in _setup
self._wrapped = Settings(
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/base.py", line 259, in __init__
self.execute_loaders()
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/base.py", line 990, in execute_loaders
settings_loader(
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/loaders/__init__.py", line 126, in settings_loader
loader["loader"].load(
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/loaders/toml_loader.py", line 31, in load
loader.load(filename=filename, key=key, silent=silent)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/loaders/base.py", line 62, in load
source_data = self.get_source_data(files)
File "/home/jonatas/workspace/microservice-payment-gateway/.ve
nv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/loaders/base.py", line 83, in get_source_data
content = self.file_reader(open_file)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/toml/decoder.py", line 83, in load
try:return loads(f.read(),B,A)
File "/home/jonatas/workspace/microservice-payment-gateway/.venv/app-payment-gateway-yLI0xW6Q-py3.9/lib/python3.9/site-packages/dynaconf/vendor/toml/decoder.py", line 254, in loads
except ValueError as Y:raise TomlDecodeError(str(Y),I,N)
dynaconf.vendor.toml.decoder.TomlDecodeError: Invalid date or number (line 3 column 1 char 25)
```
| closed | 2021-03-18T12:30:42Z | 2021-03-18T19:53:42Z | https://github.com/dynaconf/dynaconf/issues/556 | [
"bug"
] | jonatasoli | 4 |
nvbn/thefuck | python | 1,300 | [Feature request] Recognise and offer fixes for missing `git clone` when given a url or SSH ending in .git | It would be neat if a fix were offered when I paste a `git` URL intending to clone it but forget to add `git clone` at the start. There's a fix for when `git clone` is present twice, so I think it's reasonable to have one for when it's never present.
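A rough sketch of what such a rule could look like (assuming thefuck's usual `match`/`get_new_command` rule interface):
```python
import re

# A bare repo URL, https or SSH style, ending in .git with no command in front.
URL_RE = re.compile(r"^(https?://|git@)\S+\.git$")

def match(command):
    return URL_RE.match(command.script.strip()) is not None

def get_new_command(command):
    return "git clone " + command.script.strip()
```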
If someone can point me to a getting started page so I can find my way around the project and learn the basics, I'm happy to create an implementation and PR for this myself.
| closed | 2022-05-23T13:12:26Z | 2022-07-04T15:18:10Z | https://github.com/nvbn/thefuck/issues/1300 | [] | MaddyGuthridge | 3 |
lepture/authlib | flask | 130 | Requesting empty scope removes scope from response | When this [request is made](https://httpie.org/doc#forms):
http -a client:secret -f :/auth/token grant_type=client_credentials scope=
I get a response without `scope`, even though it was given in the request.
Code responsible for this is here:
https://github.com/lepture/authlib/blob/master/authlib/oauth2/rfc6750/wrappers.py#L98-L99
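A sketch of the distinction in question (hypothetical code, not the actual authlib implementation): a truthiness check drops an empty-string scope, while a `None` check keeps it.
```python
def token_params(access_token, scope=None):
    params = {"access_token": access_token, "token_type": "Bearer"}
    # if scope:             -> scope="" is silently dropped from the response
    if scope is not None:   #    scope="" is kept; only a missing scope is dropped
        params["scope"] = scope
    return params
```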
Is this a bug? I would expect `scope` present in response since it was given in request, even if given scope was empty string. | closed | 2019-05-13T06:29:32Z | 2019-05-14T05:27:16Z | https://github.com/lepture/authlib/issues/130 | [] | sirex | 2 |
idealo/imagededup | computer-vision | 1 | Get indexation setup complete for benchmarking workflow | - [x] Explore BKTree implementation
- [x] Explore `shelve` implementation
> `shelve` seems to have problems scaling to larger memory collections
>
> If problems persist, we will move to exploring some fast, local database solution
- [x] Explore Fallbacks/Brute Force/(other unoptimized search forms in worst case) | closed | 2019-05-07T09:17:36Z | 2019-07-02T15:42:49Z | https://github.com/idealo/imagededup/issues/1 | [
"enhancement"
] | valiantone | 3 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,116 | Make it possible to require whistleblowers to upload files before proceeding with the completion of the submission | **Describe the bug**
In questionnaires, if an attachment field is set as required, no warning is given and the report can be submitted without the attachment.
**To Reproduce**
Steps to reproduce the behavior:
1. On a questionnaire set an attachment field as required.
2. When you try to file a report, the aforementioned required field is ignored; if there are other errors, it is also missing from the error list.
3. Same if there are multiple files
**Expected behavior**
A warning about the missing attachment is expected.
**Desktop (please complete the following information):**
- OS: w10
- Browser: firefox 94
| closed | 2021-11-22T12:27:05Z | 2021-11-26T12:45:10Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3116 | [
"T: Enhancement",
"C: Client"
] | larrykind | 4 |
saulpw/visidata | pandas | 2,492 | Add friendly view of PyTables structured HDF5 files | pandas uses PyTables for HDF5 outputs. This creates a lot of extra structure (which I don't totally understand) that makes it hard to view idiomatically in visidata.
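For reference, the pandas-level objects inside such a file can be listed with pandas' own reader (a minimal sketch, assuming a file named data.h5):
```python
import pandas as pd

with pd.HDFStore("data.h5", mode="r") as store:
    print(store.keys())          # the logical pandas objects, e.g. ['/df']
    df = store[store.keys()[0]]  # read one of them back as a DataFrame
print(df.head())
```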
"Unpacking" the PyTables schema would make this tool incredibly useful for peeking at HDF5 files created by pandas.
| closed | 2024-08-07T14:54:23Z | 2024-09-22T02:26:40Z | https://github.com/saulpw/visidata/issues/2492 | [
"wishlist"
] | jeffmelville | 1 |
chaoss/augur | data-visualization | 2,916 | Convert api and cli over to using user groups instead of repo groups | open | 2024-10-01T23:07:39Z | 2024-10-01T23:07:39Z | https://github.com/chaoss/augur/issues/2916 | [] | sgoggins | 0 |
|
aimhubio/aim | tensorflow | 2,972 | Support local path when migrating from wandb to aim | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
User can specify the local wandb directory path when migrating from wandb to aim.
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
When using the command `aim convert wandb --entity 'my_team' --project 'test_project'` to migrate, the server needs to be able to access the network.
However, since some private servers cannot be connected to the Internet, it cannot be executed at this time. In this case, it would be more flexible to be able to migrate without accessing the network but given a local path.
### Pitch
<!-- A clear and concise description of what you want to happen. -->
`aim convert wandb --run $LOCAL_PATH_TO_WANDB_RUN`
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| open | 2023-09-01T10:21:43Z | 2023-09-01T10:21:43Z | https://github.com/aimhubio/aim/issues/2972 | [
"type / enhancement"
] | chenshen03 | 0 |
Gozargah/Marzban | api | 918 | Questions about the dev version | Hello
Sorry to bother you; I currently have a large number of users. The only problem I have is with enabling fragment: whenever users update their link or exit the apps, fragment gets disabled and they have to set it up manually again.
I just saw that there is a version called dev.
If I move to this version, could it have bugs at the moment??
I have set up 12 servers as nodes; do I need to do anything on those servers or not??
What usually needs to be done so that my users don't run into problems??
| closed | 2024-04-04T06:56:15Z | 2024-04-28T20:16:58Z | https://github.com/Gozargah/Marzban/issues/918 | [] | hossein2 | 3 |
recommenders-team/recommenders | data-science | 1,366 | [ASK] New Python recommender systems library - LibRecommender | Hi!
just FYI, a new Python library that includes some interesting reco algorithms was recently added to Github: https://github.com/massquantity/LibRecommender
Maybe it would be interesting to include some use cases for some of the included algorithms that are not yet covered by this repo.
thank you!
| closed | 2021-04-03T07:40:10Z | 2021-12-17T10:24:23Z | https://github.com/recommenders-team/recommenders/issues/1366 | [
"help wanted"
] | julioasotodv | 1 |
mitmproxy/mitmproxy | python | 7,092 | DNS Resolver: Add `getaddrinfo` fallback | #### Problem Description
Based on https://github.com/mitmproxy/mitmproxy/issues/7064, hickory's functionality to determine the OS name servers seems to have issues on both Linux and Windows. As much as I prefer hickory, we should have a fallback that uses `getaddrinfo`. This restores at least some basic functionality.
Implementation-wise, this likely means we should change `DnsResolver.name_servers` to return an empty list if it's unable to determine servers. This way it's cached (whereas an exception is not).
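For illustration, a rough sketch of what the fallback path could look like; the function name is hypothetical and not mitmproxy's actual resolver internals, only `socket.getaddrinfo` is the real stdlib call:
```python
import socket

def resolve_with_getaddrinfo(host: str) -> list[str]:
    # getaddrinfo delegates to the OS resolver, so it keeps working even when
    # the name servers could not be read from the system configuration
    # (i.e. when name_servers comes back as an empty list).
    infos = socket.getaddrinfo(host, None, proto=socket.IPPROTO_TCP)
    addresses: list[str] = []
    for family, type_, proto, canonname, sockaddr in infos:
        if sockaddr[0] not in addresses:
            addresses.append(sockaddr[0])
    return addresses
```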
#### Steps to reproduce the behavior:
1. Run mitmproxy in WireGuard mode on a setup where hickory is unable to determine nameservers. | closed | 2024-08-09T12:41:27Z | 2024-08-28T18:37:00Z | https://github.com/mitmproxy/mitmproxy/issues/7092 | [
"kind/bug",
"help wanted",
"area/protocols"
] | mhils | 0 |
LibreTranslate/LibreTranslate | api | 75 | [request] ARM64 image | I'd like to give this a go on my Pi 4. Is there / will there be / could there be a version which runs on ARM64? | closed | 2021-04-09T11:23:10Z | 2022-12-11T07:31:00Z | https://github.com/LibreTranslate/LibreTranslate/issues/75 | [
"enhancement"
] | davidrutland | 2 |
ivy-llc/ivy | tensorflow | 27,972 | Fix Ivy Failing Test: numpy - statistical.sum | closed | 2024-01-20T16:49:46Z | 2024-01-22T12:21:11Z | https://github.com/ivy-llc/ivy/issues/27972 | [
"Sub Task"
] | samthakur587 | 0 |
|
recommenders-team/recommenders | deep-learning | 1,240 | [FEATURE] Mix MIND utils | ### Description
<!--- Describe your expected feature in detail -->
DRY in Mind:
- https://github.com/microsoft/recommenders/blob/master/reco_utils/dataset/mind.py
- https://github.com/microsoft/recommenders/blob/staging/reco_utils/recommender/newsrec/newsrec_utils.py
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
### Other Comments
| open | 2020-11-11T07:38:54Z | 2020-11-11T07:39:07Z | https://github.com/recommenders-team/recommenders/issues/1240 | [
"enhancement"
] | miguelgfierro | 0 |
ufoym/deepo | tensorflow | 50 | OpenCV function not implemented | I got an unspecified error when trying to run OpenCV while following this basic OpenCV [getting started](https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_image_display/py_image_display.html#display-image) guide. The error is:
```
OpenCV(3.4.1) Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage, file /root/opencv/modules/highgui/src/window.cpp, line 636
Traceback (most recent call last):
File "image_get_started.py", line 8, in <module>
cv2.imshow("image", img)
cv2.error: OpenCV(3.4.1) /root/opencv/modules/highgui/src/window.cpp:636: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage
```
I was using the `keras-cpu-py36` image while adding OpenCV to its Dockerfile. Looking back at the OpenCV build, `libgtk2.0-dev` and `pkg-config` were not yet included. | closed | 2018-08-25T07:39:27Z | 2018-08-25T16:47:34Z | https://github.com/ufoym/deepo/issues/50 | [] | syahrulhamdani | 1
Gozargah/Marzban | api | 1,250 | httpguard protocol bug | Hi. When I create a custom config with the inbound generator and add it to Marzban, the subscription link does not bring up the complete config when I create a user; it only loads an incomplete config. It seems to have a problem with the httpguard protocol.

| closed | 2024-08-18T12:22:01Z | 2024-08-18T13:18:10Z | https://github.com/Gozargah/Marzban/issues/1250 | [
"Duplicate",
"Invalid"
] | afraz5 | 1 |
apachecn/ailearning | nlp | 540 | MachineLearning (机器学习) learning roadmap link is broken | The MachineLearning (机器学习) learning roadmap link is broken:
http://www.apachecn.org/map/145.html | closed | 2019-08-05T05:50:26Z | 2019-10-28T03:17:39Z | https://github.com/apachecn/ailearning/issues/540 | [] | gocpplua | 1 |
ipython/ipython | jupyter | 14,590 | replace in confpy source_suffix = {'.rst': 'restructuredtext'} | open | 2024-11-29T10:50:12Z | 2024-11-29T10:50:12Z | https://github.com/ipython/ipython/issues/14590 | [] | Carreau | 0 |
|
koxudaxi/datamodel-code-generator | fastapi | 1,837 | static code to generated models | Hey there, for the maintainers, thanks for that great library.
I would appreciate a heads-up on something I'm trying to do, which is basically adding some static code to a generated model.
I'm not sure Jinja would be suitable for that, since it's just templating. Can someone give me a direction on the best approach for that?
For example, for an Enum class:
```python
class Foo(str, Enum):
foo = 'foo'
bar = 'bar'
```
I would like the generated model to override a special method:
```python
class Foo(str, Enum):
foo = 'foo'
bar = 'bar'
@classmethod
def _missing_(cls, value):
pass
```
I'm generating my models from an `openapi.yml` spec. Appreciate any thoughts or help!
| open | 2024-02-05T12:54:57Z | 2024-03-16T16:57:34Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1837 | [
"answered"
] | aoliveiraenc | 1 |
aws/aws-sdk-pandas | pandas | 2,276 | quicksight.create_athena_dataset/datasource: allow user groups to be passed in allowed_to_use and allowed_to_manage | **Is your idea related to a problem? Please describe.**
Right now, the parameters `allowed_to_use` and `allowed_to_manage` inside the method `quicksight.create_athena_dataset` allow only users to be passed, not user groups. If I want to give those permissions to user groups, I have to make a separate call with boto3 and run `update_data_set_permissions`. The same goes for data sources.
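For reference, the current workaround looks roughly like this; the account ID, dataset ID, group ARN and action list below are placeholders, not values from an actual setup:
```python
import boto3

quicksight = boto3.client("quicksight")
quicksight.update_data_set_permissions(
    AwsAccountId="123456789012",
    DataSetId="my-dataset-id",
    GrantPermissions=[
        {
            # Granting "use"-style permissions to a QuickSight group instead of a user
            "Principal": "arn:aws:quicksight:us-east-1:123456789012:group/default/my-group",
            "Actions": [
                "quicksight:DescribeDataSet",
                "quicksight:DescribeDataSetPermissions",
                "quicksight:PassDataSet",
            ],
        }
    ],
)
```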
**Describe the solution you'd like**
It would be nice if `allowed_to_use` and `allowed_to_manage` also accepted user groups to avoid the workaround with boto.
| closed | 2023-05-15T13:52:21Z | 2023-05-16T22:46:08Z | https://github.com/aws/aws-sdk-pandas/issues/2276 | [
"enhancement"
] | koberghe | 0 |
babysor/MockingBird | pytorch | 980 | The following error occurs when preprocessing the dataset | I got the following error when preprocessing the dataset:
E:\Miniconda3\envs\mockingbird\MockingBird-main>python pre.py E:\Miniconda3\envs\mockingbird\MockingBird-main -d aidatatang_200zh -n 1
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Using data from:
E:\Miniconda3\envs\mockingbird\MockingBird-main\aidatatang_200zh\corpus\train
aidatatang_200zh: 0%| | 0/547 [00:00<?, ?speakers/s]Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
aidatatang_200zh: 100%|████████████████████████████████████████████████████████| 547/547 [01:17<00:00, 7.03speakers/s]
The dataset consists of 0 utterances, 0 mel frames, 0 audio timesteps (0.00 hours).
Traceback (most recent call last):
File "E:\Miniconda3\envs\mockingbird\MockingBird-main\pre.py", line 72, in <module>
preprocess_dataset(**vars(args))
File "E:\Miniconda3\envs\mockingbird\MockingBird-main\models\synthesizer\preprocess.py", line 101, in preprocess_dataset
print("Max input length (text chars): %d" % max(len(m[5]) for m in metadata))
ValueError: max() arg is an empty sequence

Any help from the experts would be greatly appreciated!!! | open | 2024-01-18T13:16:50Z | 2024-01-18T13:16:50Z | https://github.com/babysor/MockingBird/issues/980 | [] | Sunsoar01 | 0
frappe/frappe | rest-api | 31,327 | Phonenumber library does not recognize +592 7 Guyanese phone numbers as valid | ## Description of the issue
The phonenumber library currently does not recognize new Guyanese (GY) phone numbers starting with `+592 7` as valid. Only numbers starting with `+592 6` are being correctly validated. This causes issues when users attempt to submit forms using phone numbers with the updated format.
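For reference, a minimal reproduction outside Frappe, assuming the check ultimately relies on the `phonenumbers` package (the number below is an example):
```python
import phonenumbers

# A new-format Guyanese mobile number starting with +592 7
number = phonenumbers.parse("+592 7004812")
print(phonenumbers.is_valid_number(number))
# Expected: True. With outdated metadata this reportedly returns False,
# while numbers starting with +592 6 validate fine.
```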
## Context information (for bug reports)
**Output of `bench version`**:
15.56.0
## Steps to reproduce the issue
1. Attempt to input a phone number starting with `+592 7` in any form field that uses the Phone fieldtype: select `Guyana`, then enter the number `7004812` (which is a valid GY number).
2. Submit the form.
3. Observe validation error.
### Observed result
Phone numbers starting with `+592 7` are incorrectly marked as invalid.
### Expected result
Phone numbers starting with `+592 7` should be recognized as valid.
### Stacktrace / full error message
```bash
15:23:36 web.1 | Traceback (most recent call last):
15:23:36 web.1 | File "apps/frappe/frappe/app.py", line 114, in application
15:23:36 web.1 | response = frappe.api.handle(request)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/api/__init__.py", line 49, in handle
15:23:36 web.1 | data = endpoint(**arguments)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/api/v1.py", line 36, in handle_rpc_call
15:23:36 web.1 | return frappe.handler.handle()
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/handler.py", line 50, in handle
15:23:36 web.1 | data = execute_cmd(cmd)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/handler.py", line 86, in execute_cmd
15:23:36 web.1 | return frappe.call(method, **frappe.form_dict)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/__init__.py", line 1726, in call
15:23:36 web.1 | return fn(*args, **newargs)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/utils/typing_validations.py", line 31, in wrapper
15:23:36 web.1 | return func(*args, **kwargs)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/desk/form/save.py", line 37, in savedocs
15:23:36 web.1 | doc.submit()
15:23:36 web.1 | File "apps/frappe/frappe/utils/typing_validations.py", line 31, in wrapper
15:23:36 web.1 | return func(*args, **kwargs)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/model/document.py", line 1060, in submit
15:23:36 web.1 | return self._submit()
15:23:36 web.1 | ^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/model/document.py", line 1043, in _submit
15:23:36 web.1 | return self.save()
15:23:36 web.1 | ^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/model/document.py", line 342, in save
15:23:36 web.1 | return self._save(*args, **kwargs)
15:23:36 web.1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
15:23:36 web.1 | File "apps/frappe/frappe/model/document.py", line 381, in _save
15:23:36 web.1 | self._validate()
15:23:36 web.1 | File "apps/frappe/frappe/model/document.py", line 587, in _validate
15:23:36 web.1 | self._validate_data_fields()
15:23:36 web.1 | File "apps/frappe/frappe/model/base_document.py", line 914, in _validate_data_fields
15:23:36 web.1 | frappe.utils.validate_phone_number_with_country_code(phone, phone_field.fieldname)
15:23:36 web.1 | File "apps/frappe/frappe/utils/__init__.py", line 119, in validate_phone_number_with_country_code
15:23:36 web.1 | frappe.throw(
15:23:36 web.1 | File "apps/frappe/frappe/__init__.py", line 603, in throw
15:23:36 web.1 | msgprint(
15:23:36 web.1 | File "apps/frappe/frappe/__init__.py", line 568, in msgprint
15:23:36 web.1 | _raise_exception()
15:23:36 web.1 | File "apps/frappe/frappe/__init__.py", line 519, in _raise_exception
15:23:36 web.1 | raise exc
15:23:36 web.1 | frappe.exceptions.InvalidPhoneNumberError: Phone Number +592-7123345 set in field phone_number is not valid.
15:23:36 web.1 |
15:23:36 web.1 | 172.18.0.1 - - [19/Feb/2025 15:23:36] "POST /api/method/frappe.desk.form.save.savedocs HTTP/1.1" 417 -
```
## Additional information
- **Frappe install method**: Docker
| closed | 2025-02-19T15:32:32Z | 2025-03-06T00:15:35Z | https://github.com/frappe/frappe/issues/31327 | [
"bug"
] | karotkriss | 0 |
streamlit/streamlit | deep-learning | 10,739 | Support a collapsed page navigation menu that only shows the page icons | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Support a collapsed page navigation menu that only shows the page icons similar to the VS Code activity bar:

### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | open | 2025-03-12T14:27:13Z | 2025-03-12T14:27:38Z | https://github.com/streamlit/streamlit/issues/10739 | [
"type:enhancement",
"feature:multipage-apps",
"feature:st.navigation"
] | lukasmasuch | 1 |
tortoise/tortoise-orm | asyncio | 1,905 | Explicit Routers | **Is your feature request related to a problem? Please describe.**
The [documentation](https://tortoise.github.io/router.html?h=router#model-signals) and the implementation of the Router don't seem to match. The documentation suggests that the methods of the router class are explicit (required), while the code suggests that these methods are optional. The current code appears to follow Django's approach of allowing multiple routers to be registered and then processed in order of significance.
**Describe the solution you'd like**
Routers should be implemented according to the documentation. This would also allow for more accurate static type checking through Protocols. An example of the change can be found [here](https://github.com/tortoise/tortoise-orm/compare/develop...i-am-grub:tortoise-orm:explicit-router)
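As a rough illustration of the Protocol idea (the method names are taken from the router documentation and the linked branch and are assumptions, not the final API):
```python
from typing import Optional, Protocol, Type

from tortoise.models import Model


class Router(Protocol):
    """An explicit router interface that type checkers can verify statically."""

    def db_for_read(self, model: Type[Model]) -> Optional[str]: ...

    def db_for_write(self, model: Type[Model]) -> Optional[str]: ...
```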
**Describe alternatives you've considered**
Updating the documentation to clarify the intent of the router implementation.
| closed | 2025-02-28T09:16:02Z | 2025-03-18T14:31:35Z | https://github.com/tortoise/tortoise-orm/issues/1905 | [] | i-am-grub | 4 |
marshmallow-code/flask-marshmallow | sqlalchemy | 98 | Update to marshmallow 3 | Hi, everyone.
I want some information about updating to marshmallow 3. Where can I look for it? | closed | 2018-11-01T14:05:30Z | 2018-11-02T15:26:26Z | https://github.com/marshmallow-code/flask-marshmallow/issues/98 | [] | Bernardoow | 1
plotly/dash-oil-and-gas-demo | plotly | 16 | frontend for reporting OOM errors from the backend, and informing the users to contact admins. | closed | 2023-02-02T18:24:32Z | 2023-02-02T18:24:37Z | https://github.com/plotly/dash-oil-and-gas-demo/issues/16 | [] | eff-kay | 0 |
|
axnsan12/drf-yasg | django | 251 | Parameters to provide a `validate` method | Hi there!
I'm thinking of adding a 'validate' method to the Parameter class in the OpenAPI module. This would check that the given value (e.g. in a query) actually matches the type (and format, if set) of the parameter.
A further possibility would be to provide a `from_request` method which searched for the value in the request structure according to where it would be found (header, query, path, etc), and returned the value - or the parameter's default if the parameter was not found in the request.
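To make the idea concrete, here is a rough sketch using a stand-in class rather than the real `drf_yasg.openapi.Parameter`; attribute names such as `in_`, `type` and `default` are assumed for illustration only:
```python
class ParameterSketch:
    """Stand-in for drf_yasg.openapi.Parameter, just to illustrate the proposal."""

    _CASTS = {
        "integer": int,
        "number": float,
        "boolean": lambda v: str(v).lower() in ("1", "true", "yes"),
        "string": str,
    }

    def __init__(self, name, in_, type="string", default=None):
        self.name, self.in_, self.type, self.default = name, in_, type, default

    def validate(self, value):
        """Return `value` cast to the declared type, or raise ValueError."""
        try:
            return self._CASTS.get(self.type, str)(value)
        except (TypeError, ValueError) as exc:
            raise ValueError(f"{self.name}: {value!r} is not a valid {self.type}") from exc

    def from_request(self, request):
        """Look the value up where the parameter lives, else return the default."""
        sources = {
            "query": getattr(request, "query_params", {}),
            "header": getattr(request, "headers", {}),
        }
        values = sources.get(self.in_, {})
        if self.name not in values:
            return self.default
        return self.validate(values[self.name])
```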
What do you think? Do you want me to write that?
Have fun,
Paul | closed | 2018-11-12T23:01:21Z | 2018-11-28T21:51:54Z | https://github.com/axnsan12/drf-yasg/issues/251 | [] | PaulWay | 1 |
OpenGeoscience/geonotebook | jupyter | 166 | Cannot add VRT raster layer | I am trying to add a layer using a RasterData object initialized with a VRT image.
```
vrt = RasterData("test.vrt")
M.add_layer(vrt)
```
The layer does not appear on the map and the Jupyter server throws this error:
```
RuntimeError: `/test.vrt' does not exist in the file system,
and is not recognised as a supported dataset name.
```
Am I correct in assuming that GeoNotebook supports raster layers from VRTs and if so, have I correctly gone about adding the layer? | closed | 2018-08-06T19:29:36Z | 2018-08-13T13:52:26Z | https://github.com/OpenGeoscience/geonotebook/issues/166 | [] | naterubin | 5 |
mljar/mljar-supervised | scikit-learn | 403 | supervised.exceptions ERROR No models produced | 2021-05-24 18:04:12,100 supervised.exceptions ERROR No models produced.
Please check your data or submit a Github issue at https://github.com/mljar/mljar-supervised/issues/new.
1_Optuna_LightGBM not trained. Stop training after the first fold. Time needed to train on the first fold 1.0 seconds. The time estimate for training on all folds is larger than total_time_limit.
There was an error during 2_Optuna_Xgboost training.
Please check AutoML_22\errors.md for details.
There was an error during 3_Optuna_CatBoost training.
Please check AutoML_22\errors.md for details.
There was an error during 4_Optuna_NeuralNetwork training.
Please check AutoML_22\errors.md for details.
There was an error during 5_Optuna_RandomForest training.
Please check AutoML_22\errors.md for details.
There was an error during 6_Optuna_ExtraTrees training.
Please check AutoML_22\errors.md for details.
Traceback (most recent call last):
File "<ipython-input-3-4182f0ec13ac>", line 1, in <module>
automl.fit(X_train, y_train)
File "C:\Users\bhava\Anaconda3\envs\py38\lib\site-packages\supervised\automl.py", line 337, in fit
return self._fit(X, y, sample_weight, cv)
File "C:\Users\bhava\Anaconda3\envs\py38\lib\site-packages\supervised\base_automl.py", line 1131, in _fit
raise e
File "C:\Users\bhava\Anaconda3\envs\py38\lib\site-packages\supervised\base_automl.py", line 1048, in _fit
raise AutoMLException(
AutoMLException: No models produced.
Please check your data or submit a Github issue at https://github.com/mljar/mljar-supervised/issues/new. | closed | 2021-05-24T12:47:23Z | 2021-06-07T15:11:45Z | https://github.com/mljar/mljar-supervised/issues/403 | [
"bug"
] | Bhavani-Shanker | 9 |
unit8co/darts | data-science | 1,853 | Investigate DirRec for RegressionModels | See the discussion [here](https://github.com/unit8co/darts/issues/1852#issuecomment-1607100247), and a paper [here](https://www.researchgate.net/publication/221165768_Time_Series_Prediction_using_DirRec_Strategy) | open | 2023-06-26T09:46:38Z | 2023-07-21T02:58:28Z | https://github.com/unit8co/darts/issues/1853 | [
"feature request"
] | dennisbader | 1 |
yt-dlp/yt-dlp | python | 12,221 | Parsing YouTube videos with yt-dlp.exe on Windows with VPN | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
I have a problem parsing YouTube videos with yt-dlp.exe on Windows, and I hope someone can help me.
The operating environment is as follows:
64-bit Windows 10 1809
The latest version of yt-dlp.exe (2025.1.26)
VPN: "SoftEther VPN Client Management Tool", with no port specified.
Attempted solutions:
Allowed yt-dlp.exe through the inbound and outbound firewall rules
Ran yt-dlp.exe with administrator privileges
Checked the VPN: its status is normal, and other software can reach the Internet through it without problems.
yt-dlp.exe parses video information normally when it uses a proxy with an explicitly specified port.
Downloaded the yt-dlp Python module and rebuilt the packaged software, then connected to the Internet normally with the VPN.
Result: the problem still exists.
How can yt-dlp.exe connect to the Internet on Windows through a VPN proxy without a specified port and parse YouTube videos? I really hope someone can help me, thank you!
On Windows, after allowing yt-dlp.exe through the inbound and outbound rules, the error output from running it is shown below.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=uantfXeqTHg']
[debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.01.15 from yt-dlp/yt-dlp [c8541f8b1] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.17763-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {'http': 'http://127.0.0.1:808', 'https': 'http://127.0.0.1:808', 'ftp': 'http://127.0.0.1:808'}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
ERROR: Unable to obtain version info (('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562DFCF40>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法 连接。'))); Please try again later or visit https://github.com/yt-dlp/yt-dlp/releases/latest
[youtube] Extracting URL: https://www.youtube.com/watch?v=uantfXeqTHg
[youtube] uantfXeqTHg: Downloading webpage
WARNING: [youtube] Unable to download webpage: ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E740A0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。'))
[youtube] uantfXeqTHg: Downloading iframe API JS
WARNING: [youtube] Unable to download webpage: ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E753F0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。'))
[youtube] uantfXeqTHg: Downloading tv player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E77A30>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。')). Retrying (1/3)...
[youtube] uantfXeqTHg: Downloading tv player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E74EB0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。')). Retrying (2/3)...
[youtube] uantfXeqTHg: Downloading tv player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E77880>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。')). Retrying (3/3)...
[youtube] uantfXeqTHg: Downloading tv player API JSON
WARNING: [youtube] Unable to download API page: ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E53AC0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极 拒绝,无法连接。')) (caused by ProxyError("('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E53AC0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。'))")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[youtube] uantfXeqTHg: Downloading ios player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E76800>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。')). Retrying (1/3)...
[youtube] uantfXeqTHg: Downloading ios player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E538B0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。')). Retrying (2/3)...
[youtube] uantfXeqTHg: Downloading ios player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E75180>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。')). Retrying (3/3)...
[youtube] uantfXeqTHg: Downloading ios player API JSON
WARNING: [youtube] Unable to download API page: ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E763B0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极 拒绝,无法连接。')) (caused by ProxyError("('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E763B0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。'))")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[youtube] uantfXeqTHg: Downloading web player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E755A0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。')). Retrying (1/3)...
[youtube] uantfXeqTHg: Downloading web player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E761A0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。')). Retrying (2/3)...
[youtube] uantfXeqTHg: Downloading web player API JSON
WARNING: [youtube] ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E98820>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。')). Retrying (3/3)...
[youtube] uantfXeqTHg: Downloading web player API JSON
WARNING: [youtube] Unable to download API page: ('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E74BB0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极 拒绝,无法连接。')) (caused by ProxyError("('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000018562E74BB0>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。'))")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: [youtube] uantfXeqTHg: Failed to extract any player response; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\youtube.py", line 4481, in _real_extract
File "yt_dlp\extractor\youtube.py", line 4445, in _download_player_responses
File "yt_dlp\extractor\youtube.py", line 4087, in _extract_player_responses
``` | closed | 2025-01-28T08:29:00Z | 2025-02-03T08:06:19Z | https://github.com/yt-dlp/yt-dlp/issues/12221 | [
"question"
] | busynusleys | 8 |
jpadilla/django-rest-framework-jwt | django | 148 | 1.7 python manage.py cry | Hi,
it's 4:45am here, good time for a lil pip update (and some tears) !
manage.py doesn't love me anymore
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/opt/virtualenvs/diagenv/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/opt/virtualenvs/diagenv/lib/python2.7/site-packages/django/core/management/__init__.py", line 312, in execute
django.setup()
File "/opt/virtualenvs/diagenv/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/opt/virtualenvs/diagenv/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/opt/virtualenvs/diagenv/lib/python2.7/site-packages/django/apps/config.py", line 198, in import_models
self.models_module = import_module(models_module_name)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/opt/projects/diagenv/diagproject/app/models.py", line 13, in <module>
from app import helpers
File "/opt/projects/diagenv/diagproject/app/helpers.py", line 6, in <module>
from rest_framework_jwt import utils
File "/opt/virtualenvs/diagenv/lib/python2.7/site-packages/rest_framework_jwt/utils.py", line 6, in <module>
from rest_framework_jwt.compat import get_username, get_username_field
File "/opt/virtualenvs/diagenv/lib/python2.7/site-packages/rest_framework_jwt/compat.py", line 12, in <module>
class Serializer(rest_framework.serializers.Serializer):
AttributeError: 'module' object has no attribute 'serializers'
```
Any idea ?
| closed | 2015-08-18T02:48:32Z | 2015-09-11T01:16:02Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/148 | [
"bug"
] | madmoizo | 1 |
gevent/gevent | asyncio | 1,149 | are the monkey.patch_* methods idempotent? i.e. what are the implications of calling monkey.patch_all() twice in the same program? | * gevent version: 1.0.2
* Python version: cpython 2.7.9
* Operating System: Please be as specific as possible: "amazon linux"
for example, what is the behavior of monkey.get_original after calling patch_all twice? | closed | 2018-03-22T19:00:56Z | 2018-03-22T20:02:50Z | https://github.com/gevent/gevent/issues/1149 | [
"Type: Question"
] | sulphide0 | 4 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 674 | While updating conda (installing the current version), I found some errors. How do I solve them? | >> conda install conda=23.10.0
Result
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: \
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.31=0
- feature:|@/linux-64::__glibc==2.31=0
Your installed version is: 2.31
Note that strict channel priority may have removed packages required for satisfiability. | closed | 2023-11-17T01:40:30Z | 2023-12-11T07:44:47Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/674 | [
"pip/conda"
] | SC-PIONEER | 2 |
art049/odmantic | pydantic | 163 | Missing a little f-string interpolation | I think an `f` is missing at the beginning of the string on line 266
https://github.com/art049/odmantic/blob/f20f08f8ab1768534c1e743f7539bfe4f8c73bdd/odmantic/model.py#L265-L268 | closed | 2021-07-26T14:41:34Z | 2022-06-01T19:51:30Z | https://github.com/art049/odmantic/issues/163 | [] | supermodo | 0 |
PokemonGoF/PokemonGo-Bot | automation | 6,277 | Is Bot working after Updates and talk.pogodev.org | Hi, I wanted to ask whether the bot is still working with the latest updates, since talk.pogodev.org is no longer available and so no installation is possible.
Best regards | open | 2018-08-17T07:30:21Z | 2018-08-17T07:30:21Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/6277 | [] | ageof | 0 |
rougier/scientific-visualization-book | numpy | 59 | Subplot margins (Chapter 1, Exercise 2) | Hi @rougier,
There is a statement about additional figure margins in the Exercise 2 solution:
https://github.com/rougier/scientific-visualization-book/blob/a6e0607fa8dbcfa5e6251cd7b64b5a094ad4e0f9/code/anatomy/inch-cm.py#L13-L15
And there is code that performs this adjustment:
https://github.com/rougier/scientific-visualization-book/blob/a6e0607fa8dbcfa5e6251cd7b64b5a094ad4e0f9/code/anatomy/inch-cm.py#L27-L38
It seems that in this case we need to have 0.125 inches margin on each side (not 0.25 as in the description), and `margin=0.25` (not 0.125 as in the code) because we multiply this margin by 0.5 when we call `plt.subplots_adjust()`.
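Spelling out the arithmetic behind that reading (purely illustrative, and it ignores the inch-to-figure-fraction conversion the script performs):
```python
margin_suggested = 0.25
print(0.5 * margin_suggested)   # 0.125 -> 0.125 inches on each side, as intended

margin_in_code = 0.125
print(0.5 * margin_in_code)     # 0.0625 -> only 0.0625 inches per side
```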
Is it correct?
Thank you. | closed | 2022-07-10T13:19:11Z | 2022-07-13T18:16:43Z | https://github.com/rougier/scientific-visualization-book/issues/59 | [] | labdmitriy | 1 |
widgetti/solara | jupyter | 772 | Accessing the localStorage of a browser | Does solara provide access to (read/write/modify) the browser's `localStorage`?
Thanks | open | 2024-09-04T22:47:56Z | 2024-11-06T02:29:09Z | https://github.com/widgetti/solara/issues/772 | [] | JovanVeljanoski | 2 |
serengil/deepface | machine-learning | 624 | Analyze API-endpoint failing with example request from Postman-collection | So, first of all, thanks for an amazing project!
I'm running the API with Docker, and when I test the selection of API requests in the Postman collection, Analyze is the only one not working. The posted image data is the default from the collection. Am I missing something here?

| closed | 2023-01-08T23:22:30Z | 2023-01-09T12:46:04Z | https://github.com/serengil/deepface/issues/624 | [
"bug"
] | epiespen | 1 |
horovod/horovod | tensorflow | 3,804 | CI: Builds fail with Numpy 1.24 | Builds with Python versions newer than 3.7 fail because NumPy changed its API in release 1.24 and some package requirements aren't restrictive enough.
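For context, the removed alias and the replacements that still work (a quick illustration, not code from the Horovod tree):
```python
import numpy as np

# np.bool was deprecated in NumPy 1.20 and removed in 1.24:
# accessing it now raises AttributeError, which is what breaks the builds below.
flag = np.bool_(True)   # NumPy's bool scalar type still exists
flag2 = bool(1)         # or simply use the Python builtin
```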
https://github.com/horovod/horovod/actions/runs/3837009090
```
# ...
2023-01-03T19:09:52.2227909Z #29 28.83 File "/usr/local/lib/python3.8/dist-packages/pandas/_testing.py", line 24, in <module>
2023-01-03T19:09:52.2228233Z #29 28.83 import pandas._libs.testing as _testing
2023-01-03T19:09:52.2228836Z #29 28.83 File "pandas/_libs/testing.pyx", line 10, in init pandas._libs.testing
2023-01-03T19:09:52.2229288Z #29 28.83 File "/usr/local/lib/python3.8/dist-packages/numpy/__init__.py", line 284, in __getattr__
2023-01-03T19:09:52.2229635Z #29 28.83 raise AttributeError("module {!r} has no attribute "
2023-01-03T19:09:52.2230033Z #29 28.83 AttributeError: module 'numpy' has no attribute 'bool'
``` | closed | 2023-01-04T19:39:33Z | 2023-01-05T23:11:10Z | https://github.com/horovod/horovod/issues/3804 | [
"bug"
] | maxhgerlach | 1 |
autogluon/autogluon | scikit-learn | 3,813 | DDP issue | **Bug Report Checklist**
import pyarrow.parquet as pq
from autogluon.multimodal import MultiModalPredictor
import os
train_data = pq.read_table('features_with_label.parquet').to_pandas()
metric = 'f1'
time_limit = 180
predictor = MultiModalPredictor(label='label', eval_metric=metric)
predictor.fit(train_data, time_limit=time_limit)
**Describe the bug**
I am trying to use MultiModalPredictor to perform classification on a combination of text and tabular data. I am running my code on an "ml.p3.8xlarge" instance with the "conda_pytorch_py310" kernel. I am getting the error below:
“Lightning can’t create new processes if CUDA is already initialized. Did you manually call `torch.cuda.*` functions, have moved the model to the device, or allocated memory on the GPU any other way? Please remove any such calls, or change the selected strategy. You will have to restart the Python kernel.”
**Screenshots / Logs**
[error_logs.txt](https://github.com/autogluon/autogluon/files/13666797/error_logs.txt)
```python
python version = Python 3.10.13
Lightning version = '2.0.9.post0'
autogluon = 2.21
```
| closed | 2023-12-14T00:01:48Z | 2024-06-27T10:36:23Z | https://github.com/autogluon/autogluon/issues/3813 | [
"bug: unconfirmed",
"Needs Triage",
"module: multimodal"
] | vinayakkarande | 3 |
xinntao/Real-ESRGAN | pytorch | 651 | The l_g_percep loss increased gradually, why? | Did your l_g_percep loss increase during training? | open | 2023-06-28T08:16:07Z | 2023-06-28T08:16:07Z | https://github.com/xinntao/Real-ESRGAN/issues/651 | [] | FengMu1995 | 0
ARM-DOE/pyart | data-visualization | 1,157 | DOC: Put together a blog post pairing SPC reports w/ NEXRAD data | Russ put together a great example of an animation of SPC reports and NEXRAD data - this would make a GREAT blog post
[Link to thread on twitter](https://twitter.com/russ_schumacher/status/1522645682812690432)
[Link to example notebook on Github](https://github.com/russ-schumacher/ats641_spring2022/blob/master/example_notebooks/pyart_nexrad_maps_reports.ipynb)
| closed | 2022-05-09T14:56:18Z | 2022-11-23T19:25:46Z | https://github.com/ARM-DOE/pyart/issues/1157 | [
"good first issue",
"blog-post"
] | mgrover1 | 1 |
tensorpack/tensorpack | tensorflow | 840 | More pre-trained Faster RCNN Model? | Hi, according to this [page](http://models.tensorpack.com/FasterRCNN)
| Model | Size | Checksum |
| -- | -- | -- |
| COCO-R101C4-MaskRCNN-Standard.npz | 196 MiB | (sha256) |
| COCO-R50C4-MaskRCNN-Standard.npz | 128 MiB | (sha256) |
| COCO-R50FPN-MaskRCNN-Standard.npz | 158 MiB | (sha256) |
| ImageNet-R101-AlignPadding.npz | 158 MiB | (sha256) |
| ImageNet-R50-AlignPadding.npz | 91 MiB | (sha256) |
| ImageNet-R50-GroupNorm32-AlignPadding.npz | 88 MiB | (sha256) |
Will you provide more pretrained models, such as `COCO-R101FPN` or `COCO-R152FPN`?
Thank you! | closed | 2018-07-23T21:56:52Z | 2018-07-27T06:24:52Z | https://github.com/tensorpack/tensorpack/issues/840 | [
"examples"
] | PacteraOliver | 5 |
getsentry/sentry | django | 87,321 | [RELEASES] Replace direct links to the release details page with links to open the release flyout | At the time of writing I don't have a complete list of places that link to release details. There is a Release Hovercard thing that should also exist at all the callsites.
In the end we want all links in the app to:
- have a release hovercard, i assume this is already the case
- to open the new release flyout instead of linking directly to release details or the release list page
To help ease the migration the release flyout _could_ have a link along the lines of "Click here to view the old Release Details page" so people can still get back to the old thing. This is probably a good idea, but TBD right now. | open | 2025-03-18T19:55:37Z | 2025-03-18T19:55:37Z | https://github.com/getsentry/sentry/issues/87321 | [] | ryan953 | 0 |
RayVentura/ShortGPT | automation | 78 | 🐛 [Bug]: Some images does not added to video | ### What happened?
I found images added to video are less than the searched result image list.
### What type of browser are you seeing the problem on?
Microsoft Edge
### What type of Operating System are you seeing the problem on?
Windows
### Python Version
3.10
### Application Version
stable branch
### Expected Behavior
images should be added properly
### Error Message
```shell
got the exception stacktrace
Traceback (most recent call last):
File "D:\Codebase\ContentGen\shortGPT\editing_framework\core_editing_engine.py", line 59, in generate_video
clip = self.process_image_asset(asset)
File "D:\Codebase\ContentGen\shortGPT\editing_framework\core_editing_engine.py", line 212, in process_image_asset
clip = ImageClip(asset['parameters']['url'])
File "C:\Users\1\AppData\Local\Programs\Python\Python310\lib\site-packages\moviepy\video\VideoClip.py", line 889, in __init__
img = imread(img)
File "C:\Users\1\AppData\Local\Programs\Python\Python310\lib\site-packages\imageio\__init__.py", line 97, in imread
return imread_v2(uri, format=format, **kwargs)
File "C:\Users\1\AppData\Local\Programs\Python\Python310\lib\site-packages\imageio\v2.py", line 359, in imread
with imopen(uri, "ri", **imopen_args) as file:
File "C:\Users\1\AppData\Local\Programs\Python\Python310\lib\site-packages\imageio\core\imopen.py", line 196, in imopen
plugin_instance = candidate_plugin(request, **kwargs)
File "C:\Users\1\AppData\Local\Programs\Python\Python310\lib\site-packages\imageio\plugins\pillow.py", line 83, in __init__
with Image.open(request.get_file()):
File "C:\Users\1\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL\Image.py", line 2994, in open
im = _open_core(fp, filename, prefix, formats)
File "C:\Users\1\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL\Image.py", line 2980, in _open_core
im = factory(fp, filename)
File "C:\Users\1\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL\ImageFile.py", line 112, in __init__
self._open()
File "C:\Users\1\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL\ImImagePlugin.py", line 153, in _open
s = s + self.fp.readline()
AttributeError: 'SeekableFileObject' object has no attribute 'readline'
```
### Code to produce this issue.
```shell
I added stack-trace printing code to see what exception it throws,
in shortGPT/editing_framework/core_editing_engine.py:
elif asset_type == 'image':
    try:
        print(asset['parameters']['url'])
        clip = self.process_image_asset(asset)
        print(clip)
    except Exception as e:
        traceback.print_exc()
        continue
```
### Screenshots/Assets/Relevant links
[Related imageio issue](https://github.com/imageio/imageio/issues/1007) | open | 2023-08-03T10:26:27Z | 2023-08-04T07:21:50Z | https://github.com/RayVentura/ShortGPT/issues/78 | [
"bug"
] | cnwillz | 1 |