repo_name (stringlengths 9-75) | topic (stringclasses 30 values) | issue_number (int64 1-203k) | title (stringlengths 1-976) | body (stringlengths 0-254k) | state (stringclasses 2 values) | created_at (stringlengths 20-20) | updated_at (stringlengths 20-20) | url (stringlengths 38-105) | labels (sequencelengths 0-9) | user_login (stringlengths 1-39) | comments_count (int64 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
graphql-python/graphene-django | graphql | 1,478 | Options to secure API | **Is your feature request related to a problem? Please describe.**
I used [graphql-cop](https://github.com/dolevf/graphql-cop) to test my graphql API built using Graphene Django and the result is as follows:
```
[HIGH] Alias Overloading - Alias Overloading with 100+ aliases is allowed (Denial of Service - /graphql)
[HIGH] Directive Overloading - Multiple duplicated directives allowed in a query (Denial of Service - /graphql)
[HIGH] Field Duplication - Queries are allowed with 500 of the same repeated field (Denial of Service - /graphql)
[LOW] Field Suggestions - Field Suggestions are Enabled (Information Leakage - /graphql)
[MEDIUM] GET Method Query Support - GraphQL queries allowed using the GET method (Possible Cross Site Request Forgery (CSRF) - /graphql)
[HIGH] Introspection - Introspection Query Enabled (Information Leakage - /graphql)
[HIGH] Introspection-based Circular Query - Circular-query using Introspection (Denial of Service - /graphql)
[MEDIUM] POST based url-encoded query (possible CSRF) - GraphQL accepts non-JSON queries over POST (Possible Cross Site Request Forgery - /graphql)
```
I would like to have options, for example to disable or limit the use of aliases to prevent Alias Overloading, but I can't find any options to mitigate this or the other attacks. A sketch of the kind of option I have in mind is below.
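For illustration only, a hypothetical validation rule limiting aliases could look like this, using graphql-core's `ValidationRule` API (the rule name and the `MAX_ALIASES` limit are my own, not an existing graphene-django option):
```python
from graphql import GraphQLError
from graphql.validation import ValidationRule


class MaxAliasesRule(ValidationRule):
    """Reject queries that use more than MAX_ALIASES field aliases."""

    MAX_ALIASES = 15  # illustrative limit

    def __init__(self, context):
        super().__init__(context)
        self.alias_count = 0

    def enter_field(self, node, *_args):
        if node.alias:
            self.alias_count += 1
            if self.alias_count > self.MAX_ALIASES:
                self.report_error(GraphQLError("Too many aliases in query."))
```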
**Describe the solution you'd like**
Would it be possible to provide options to mitigate those attacks in a future version of graphene-django?
**Describe alternatives you've considered**
...
**Additional context**
... | closed | 2023-11-21T10:44:04Z | 2023-11-22T10:15:19Z | https://github.com/graphql-python/graphene-django/issues/1478 | [
"✨enhancement"
] | lee-pai-long | 5 |
sherlock-project/sherlock | python | 1,797 | Support for Instagram |
## Checklist
- [x] I'm reporting a feature request
- [x] I've checked for similar feature requests including closed ones
## Description
Support for detecting Instagram accounts. I already found closed issues saying that detection had false positives and was removed because of that, but I couldn't find any usernames to verify that, and I found a config that seems to work for it:
```json
{
"Instagram": {
"errorMsg": "This content is no longer available",
"errorType": "message",
"url": "https://www.instagram.com/{}/?__a=1",
"urlMain": "https://www.instagram.com"
}
}
```
The only issue I see with that config is that clicking on the generated link wouldn't show the proper profile; the `?__a=1` would need to be removed to do so. But it seems to do the job of detecting the account.
**EDIT**: I accidentally pressed ctrl and sent the uncompleted issue.
| closed | 2023-05-18T20:21:09Z | 2024-11-26T22:36:00Z | https://github.com/sherlock-project/sherlock/issues/1797 | [
"enhancement"
] | javalsai | 4 |
apify/crawlee-python | automation | 1,020 | Fix rendering of enqueue strategy in API docs | It is a type alias after https://github.com/apify/crawlee-python/pull/1019. | closed | 2025-02-24T16:47:16Z | 2025-02-25T12:11:58Z | https://github.com/apify/crawlee-python/issues/1020 | [
"documentation",
"t-tooling"
] | vdusek | 0 |
ipython/ipython | data-science | 14,240 | Memory leak with %matplotlib qt, with simple fix | When using matplotlib event loop integration (e.g. %matplotlib) with pyqt, there is a memory leak caused by a misunderstanding of the lifespan of QObjects.
I found it because I'm trying to debug slowdowns in long-running IPython sessions with autoreload. I'm not sure that this is actually the cause, but it's certainly ugly. Within a session which has been open for a couple of weeks I found:
```
>>> collections.Counter(map(type, gc.get_objects())).most_common(5)
[(PyQt5.QtCore.QEventLoop, 1718061),
(dict, 1006864),
(list, 702602),
(ast.Name, 267460),
(ast.Attribute, 96929)]
```
There are 1.7m QEventLoop instances kicking around. They are being created by `IPython.terminal.pt_inputhooks.qt.inputhook` on line 58, with `event_loop = QtCore.QEventLoop(app)`. But QEventLoop is a QObject subclass, which means that passing `app` in the constructor parents the instance to `app`, so even when it goes out of scope it won't be deleted, even though it's no longer needed by that point.
The fix is simple: just add a line at the end of that function containing `event_loop.setParent(None)`.
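A minimal sketch of what that looks like, assuming the rough shape of the surrounding `inputhook` function (only the last line is the proposed change):
```python
from PyQt5 import QtCore, QtWidgets

def inputhook(context):
    app = QtWidgets.QApplication.instance()
    # Passing `app` here parents the loop to the application, which keeps
    # every loop instance alive after this function returns; that is the leak.
    event_loop = QtCore.QEventLoop(app)
    # ... existing logic that runs the loop until input is ready ...
    event_loop.exec_()
    # Proposed fix: detach from `app` so the loop can be garbage collected.
    event_loop.setParent(None)
```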
Hopefully it's faster for you to just add that single line of code rather than me forking, making a pull request etc, but please tell me if you need me to do so. | closed | 2023-11-11T16:46:55Z | 2023-11-24T09:37:05Z | https://github.com/ipython/ipython/issues/14240 | [
"bug"
] | pag | 4 |
mljar/mljar-supervised | scikit-learn | 736 | getting TypeError: only integer scalar arrays can be converted to a scalar index on | I am trying to load the saved model using the following commands:
```python
automl = AutoML(mode='Explain', results_path='Auto_Ml_testing')
automl.fit(X_train, y_train)

automl = AutoML(results_path='Auto_Ml_testing')
automl.predict(X_test)
```
but `.predict` is giving the following error:
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[12], line 1
----> 1 automl.predict(X_test)

File c:\Users\narvashi\projects\STERIS\venv\Lib\site-packages\supervised\automl.py:451, in AutoML.predict(self, X)
    434 def predict(self, X: Union[List, numpy.ndarray, pandas.DataFrame]) -> numpy.ndarray:
    435     """
    436     Computes predictions from AutoML best model.
    437
    (...)
    449     AutoMLException: Model has not yet been fitted.
    450     """
--> 451     return self._predict(X)

File c:\Users\narvashi\projects\STERIS\venv\Lib\site-packages\supervised\base_automl.py:1503, in BaseAutoML._predict(self, X)
   1502 def _predict(self, X):
-> 1503     predictions = self._base_predict(X)
   1504     # Return predictions
   1505     # If classification task the result is in column 'label'
   1506     # If regression task the result is in column 'prediction'
   1507     return (
   1508         predictions["label"].to_numpy()
   1509         if self._ml_task != REGRESSION
   1510         else predictions["prediction"].to_numpy()
   1511     )

File c:\Users\narvashi\projects\STERIS\venv\Lib\site-packages\supervised\base_automl.py:1469, in BaseAutoML._base_predict(self, X, model)
   1467     predictions = model.predict(X_stacked)
   1468 else:
-> 1469     predictions = model.predict(X)
   1471 if self._ml_task == BINARY_CLASSIFICATION:
   1472     # need to predict the label based on predictions and threshold
   1473     neg_label, pos_label = (
   1474         predictions.columns[0][11:],
   1475         predictions.columns[1][11:],
   1476     )

File c:\Users\narvashi\projects\STERIS\venv\Lib\site-packages\supervised\model_framework.py:447, in ModelFramework.predict(self, X)
    444 y_predicted = None  # np.zeros((X.shape[0],))
    445 for ind, learner in enumerate(self.learners):
    446     # preprocessing goes here
--> 447     X_data, _, _ = self.preprocessings[ind].transform(X.copy(), None)
    448     y_p = learner.predict(X_data)
    449     y_p = self.preprocessings[ind].inverse_scale_target(y_p)

File c:\Users\narvashi\projects\STERIS\venv\Lib\site-packages\supervised\preprocessing\preprocessing.py:361, in Preprocessing.transform(self, X_validation, y_validation, sample_weight_validation)
    359 for tt in self._text_transforms:
    360     if X_validation is not None and tt is not None:
--> 361         X_validation = tt.transform(X_validation)
    363 for missing in self._missing_values:
    364     if X_validation is not None and missing is not None:

File c:\Users\narvashi\projects\STERIS\venv\Lib\site-packages\supervised\preprocessing\text_transformer.py:36, in TextTransformer.transform(self, X)
     34 ii = ~pd.isnull(X[self._old_column])
     35 x = X[self._old_column][ii]
---> 36 vect = self._vectorizer.transform(x)
     38 for f in self._new_columns:
     39     X[f] = 0.0

File c:\Users\narvashi\projects\STERIS\venv\Lib\site-packages\sklearn\feature_extraction\text.py:2118, in TfidfVectorizer.transform(self, raw_documents)
   2115 check_is_fitted(self, msg="The TF-IDF vectorizer is not fitted")
   2117 X = super().transform(raw_documents)
-> 2118 return self._tfidf.transform(X, copy=False)

File c:\Users\narvashi\projects\STERIS\venv\Lib\site-packages\sklearn\feature_extraction\text.py:1707, in TfidfTransformer.transform(self, X, copy)
   1702     X.data += 1.0
   1704 if hasattr(self, "idf_"):
   1705     # the columns of X (CSR matrix) can be accessed with `X.indices` and
   1706     # multiplied with the corresponding `idf` value
-> 1707     X.data *= self.idf_[X.indices]
   1709 if self.norm is not None:
   1710     X = normalize(X, norm=self.norm, copy=False)

TypeError: only integer scalar arrays can be converted to a scalar index
```
| open | 2024-07-12T15:44:24Z | 2024-08-22T12:31:19Z | https://github.com/mljar/mljar-supervised/issues/736 | [] | vashist1994 | 2 |
aimhubio/aim | data-visualization | 3,278 | Incorrect connection of data points can happen when logging out of order with implicit step value | ## 🐛 Bug
**Summary**: When metrics are logged using only the epoch parameter, the step value is chosen incrementally. When this happens out of order (for example, asynchronous evaluation on a batch system), displaying them in an epoch/value graph connects the lines incorrectly, because step determines the order in which data points are connected.

### To reproduce
<!-- Reproduction steps. -->
Pseudo:
```
run.track(float(train_loss), name='train_loss', epoch=1)
run.track(float(eval_loss), name='eval_loss', epoch=1)
run.track(float(train_loss), name='train_loss', epoch=2)
run.track(float(train_loss), name='train_loss', epoch=3)
run.track(float(train_loss), name='train_loss', epoch=4)
run.track(float(eval_loss), name='eval_loss', epoch=3) # Out of order due to scheduling
run.track(float(eval_loss), name='eval_loss', epoch=2) # Out of order due to scheduling
```
In my specific setting, eval_loss is calculated by a separate process and saved to disk. Periodically, the main process (which also runs the training and the aim logger) picks up the value from disk and logs it.
### Expected behavior
Connection of data points in the graph is dictated by whatever is selected to be on the x axis.
### Environment
- Aim Version: v3.27.0
- Python version: latest
- pip version: latest
- OS: Linux
### Additional context
Workaround: Always also calculate and log current step value:
```run.track(float(eval_loss), name='eval_loss', epoch=epoch, step=len(train_dataloader) * epoch)``` | open | 2025-01-03T10:44:16Z | 2025-01-03T10:44:16Z | https://github.com/aimhubio/aim/issues/3278 | [
"type / bug",
"help wanted"
] | cdalinghaus | 0 |
tqdm/tqdm | jupyter | 636 | tqdm progress bar not showing after leaving notebook (not shut down) | Before:

After:

How can I make it appear again without shutting down? | closed | 2018-11-06T02:59:50Z | 2018-11-08T14:41:18Z | https://github.com/tqdm/tqdm/issues/636 | [] | Jacky97s | 1 |
scikit-multilearn/scikit-multilearn | scikit-learn | 253 | IterativeStratification use in medical and some datasets ValueError: Only one class present in y_true. ROC AUC score is not defined in that case | This means that for some labels, y[train] contains only the zero class, even though I am sure each such label has at least two positive-class samples; IterativeStratification does not work well in this case. | closed | 2022-12-01T14:38:58Z | 2023-03-14T17:04:05Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/253 | [] | CquptZA | 1 |
sktime/pytorch-forecasting | pandas | 1,311 | Error selecting device when executing prediction - M1 | - PyTorch-Forecasting version: 1.0.0
- PyTorch version: 2.0.0
- Python version: 3.10.0
- Operating System: MacOS Ventura 13.3.1 (a) - M1 architecture
I executed the example ar.py on an M1 computer. I set the accelerator in the pl.Trainer to "cpu" to avoid using "mps", as it still does not support certain operations.
When executing the .fit function it worked fine, but when executing the .predict function it throws an exception as if the "mps" accelerator were being used. I cannot find any way to execute predict on the desired device, although I tried to set torch's default device to the CPU.
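As an untested idea: if `predict` forwards trainer kwargs (recent pytorch-forecasting versions appear to accept a `trainer_kwargs` dict, though I haven't verified this on 1.0.0), something like this might pin the prediction trainer to the CPU as well:
```python
# Hedged sketch: assumes predict() accepts trainer_kwargs and forwards them
# to the internally created Trainer used for prediction.
predictions = deepar.predict(
    val_dataloader,
    trainer_kwargs={"accelerator": "cpu", "devices": 1},
)
```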
### Code to reproduce the problem
```
from pathlib import Path
import pickle
import warnings
import lightning.pytorch as pl
from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor
from lightning.pytorch.loggers import TensorBoardLogger
from lightning.pytorch.tuner import Tuner
import numpy as np
import pandas as pd
from pandas.errors import SettingWithCopyWarning
import torch
from pytorch_forecasting import EncoderNormalizer, GroupNormalizer, TimeSeriesDataSet
from pytorch_forecasting.data import NaNLabelEncoder
from pytorch_forecasting.data.examples import generate_ar_data
from pytorch_forecasting.metrics import NormalDistributionLoss
from pytorch_forecasting.models.deepar import DeepAR
from pytorch_forecasting.utils import profile
warnings.simplefilter("error", category=SettingWithCopyWarning)
data = generate_ar_data(seasonality=10.0, timesteps=400, n_series=100)
data["static"] = "2"
data["date"] = pd.Timestamp("2020-01-01") + pd.to_timedelta(data.time_idx, "D")
validation = data.series.sample(20)
max_encoder_length = 60
max_prediction_length = 20
training_cutoff = data["time_idx"].max() - max_prediction_length
training = TimeSeriesDataSet(
data[lambda x: ~x.series.isin(validation)],
time_idx="time_idx",
target="value",
categorical_encoders={"series": NaNLabelEncoder().fit(data.series)},
group_ids=["series"],
static_categoricals=["static"],
min_encoder_length=max_encoder_length,
max_encoder_length=max_encoder_length,
min_prediction_length=max_prediction_length,
max_prediction_length=max_prediction_length,
time_varying_unknown_reals=["value"],
time_varying_known_reals=["time_idx"],
target_normalizer=GroupNormalizer(groups=["series"]),
add_relative_time_idx=False,
add_target_scales=True,
randomize_length=None,
)
validation = TimeSeriesDataSet.from_dataset(
training,
data[lambda x: x.series.isin(validation)],
# predict=True,
stop_randomization=True,
)
batch_size = 64
train_dataloader = training.to_dataloader(
train=True, batch_size=batch_size, num_workers=0
)
val_dataloader = validation.to_dataloader(
train=False, batch_size=batch_size, num_workers=0
)
# save datasets
training.save("training.pkl")
validation.save("validation.pkl")
early_stop_callback = EarlyStopping(
monitor="val_loss", min_delta=1e-4, patience=5, verbose=False, mode="min"
)
lr_logger = LearningRateMonitor()
trainer = pl.Trainer(
max_epochs=100,
accelerator="cpu",
devices="auto",
gradient_clip_val=0.1,
limit_train_batches=30,
limit_val_batches=3,
# fast_dev_run=True,
# logger=logger,
# profiler=True,
callbacks=[lr_logger, early_stop_callback],
)
deepar = DeepAR.from_dataset(
training,
learning_rate=0.1,
hidden_size=32,
dropout=0.1,
loss=NormalDistributionLoss(),
log_interval=10,
log_val_interval=3,
# reduce_on_plateau_patience=3,
)
print(f"Number of parameters in network: {deepar.size()/1e3:.1f}k")
torch.set_num_threads(10)
torch.set_default_device(device=torch.device(type="cpu"))
trainer.fit(
deepar,
train_dataloaders=train_dataloader,
val_dataloaders=val_dataloader,
)
# calculate mean absolute error on validation set
actuals = torch.cat([y for x, (y, weight) in iter(val_dataloader)])
predictions = deepar.predict(val_dataloader)
print(f"Mean absolute error of model: {(actuals - predictions).abs().mean()}")
```
### Traceback
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File ".../.vscode/extensions/ms-python.python-2023.8.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File ".../.vscode/extensions/ms-python.python-2023.8.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File ".../.vscode/extensions/ms-python.python-2023.8.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File ".../.vscode/extensions/ms-python.python-2023.8.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File ".../.vscode/extensions/ms-python.python-2023.8.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File ".../.vscode/extensions/ms-python.python-2023.8.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File ".../ar.py", line 124, in <module>
predictions = deepar.predict(val_dataloader)
File ".../venv/lib/python3.10/site-packages/pytorch_forecasting/models/deepar/__init__.py", line 404, in predict
return super().predict(
File ".../venv/lib/python3.10/site-packages/pytorch_forecasting/models/base_model.py", line 1423, in predict
trainer.predict(self, dataloader)
File ".../venv/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 805, in predict
return call._call_and_handle_interrupt(
File ".../venv/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File ".../venv/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 847, in _predict_impl
results = self._run(model, ckpt_path=ckpt_path)
File ".../venv/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 935, in _run
results = self._run_stage()
File ".../venv/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 973, in _run_stage
return self.predict_loop.run()
File ".../venv/lib/python3.10/site-packages/lightning/pytorch/loops/utilities.py", line 177, in _decorator
return loop_run(self, *args, **kwargs)
File ".../venv/lib/python3.10/site-packages/lightning/pytorch/loops/prediction_loop.py", line 112, in run
self._predict_step(batch, batch_idx, dataloader_idx)
File ".../venv/lib/python3.10/site-packages/lightning/pytorch/loops/prediction_loop.py", line 228, in _predict_step
predictions = call._call_strategy_hook(trainer, "predict_step", *step_kwargs.values())
File ".../venv/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 288, in _call_strategy_hook
output = fn(*args, **kwargs)
File ".../venv/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 396, in predict_step
return self.model.predict_step(*args, **kwargs)
File ".../venv/lib/python3.10/site-packages/pytorch_forecasting/models/base_model.py", line 625, in predict_step
_, out = self.step(x, y, batch_idx, **predict_callback.predict_kwargs)
File ".../venv/lib/python3.10/site-packages/pytorch_forecasting/models/base_model.py", line 777, in step
out = self(x, **kwargs)
File ".../venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File ".../venv/lib/python3.10/site-packages/pytorch_forecasting/models/deepar/__init__.py", line 327, in forward
output = self.decode(
File ".../venv/lib/python3.10/site-packages/pytorch_forecasting/models/deepar/__init__.py", line 296, in decode
output = self.decode_autoregressive(
File ".../venv/lib/python3.10/site-packages/pytorch_forecasting/models/base_model.py", line 2184, in decode_autoregressive
prediction, current_target = self.output_to_prediction(
File ".../venv/lib/python3.10/site-packages/pytorch_forecasting/models/base_model.py", line 2036, in output_to_prediction
prediction = self.loss.sample(prediction_parameters, 1)
File ".../venv/lib/python3.10/site-packages/pytorch_forecasting/metrics/base_metrics.py", line 962, in sample
dist = self.map_x_to_distribution(y_pred)
File ".../venv/lib/python3.10/site-packages/pytorch_forecasting/metrics/distributions.py", line 23, in map_x_to_distribution
distr = self.distribution_class(loc=x[..., 2], scale=x[..., 3])
File ".../venv/lib/python3.10/site-packages/torch/distributions/normal.py", line 56, in __init__
super().__init__(batch_shape, validate_args=validate_args)
File ".../venv/lib/python3.10/site-packages/torch/distributions/distribution.py", line 62, in __init__
raise ValueError(
ValueError: Expected parameter loc (Tensor of shape (64, 100)) of distribution Normal(loc: torch.Size([64, 100]), scale: torch.Size([64, 100])) to satisfy the constraint Real(), but found invalid values:
tensor([[-0.5566, -0.5566, -0.5566, ..., -0.5566, -0.5566, -0.5566],
[-0.5578, -0.5578, -0.5578, ..., -0.5578, -0.5578, -0.5578],
[-0.5585, -0.5585, -0.5585, ..., -0.5585, -0.5585, -0.5585],
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]],
device='mps:0')
``` | open | 2023-05-25T14:45:32Z | 2023-06-08T13:23:02Z | https://github.com/sktime/pytorch-forecasting/issues/1311 | [] | CBeckerUPC | 1 |
fbdesignpro/sweetviz | pandas | 60 | colname is 'index' error | If a DataFrame column name includes 'index', you will get the error:
"The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()."
So delete or skip the 'index' column.
Column names must not include a column called "index", otherwise an error is raised; either delete it directly or add it to skip.
| closed | 2020-09-29T12:12:28Z | 2020-11-24T15:11:20Z | https://github.com/fbdesignpro/sweetviz/issues/60 | [
"bug"
] | guduxingzou | 3 |
MaartenGr/BERTopic | nlp | 1,135 | Fine-Tuning Optional cTF-IDF Representations After Model Fit? | I am training the BERTopic model on ~1.4 million documents, which takes some time to finish (about 12 hours using "all-MiniLM-L6-v2"). I would like to try variations of fine-tuning the cTF-IDF vectors learned by the model using, e.g., `KeyBERTInspired`, `PartOfSpeech`, `MaximalMarginalRelevance`, and `TextGeneration` to get more "coherent" topic representations. However, I don't want to wait 12 hours for the model to run to test each fine-tuning method on my use case. Is there a way to train the topic model without the optional representation and THEN apply various fine-tuning methods to see what the topics look like after each one? This way I can evaluate them without waiting for the model to be fit each time. Pseudo code would look something like this:
```
from bertopic.representation import KeyBERTInspired
from bertopic.representation import MaximalMarginalRelevance
from bertopic import BERTopic
# Train the topic model without optional representations of the cTF-IDF vectors
topic_model = BERTopic(representation_model=None)
topics, probs = topic_model.fit_transform(docs)
# Save the topic model
topic_model.save('my_bert_model.bert')
# Load the topic model to fine-tune
topic_model = BERTopic.load('my_bert_model.bert')
# Try fine-tuning using KeyBert
representation_model = KeyBERTInspired()
<fine-tune the loaded topic model using KeyBert, save fine-tuned model as topic_model_keybert>
# Try fine-tuning using MaximalMarginalRelevance
representation_model = MaximalMarginalRelevance(diversity=0.3)
<fine-tune the loaded topic model using MMR, save fine-tuned model as topic_model_mmr>
```
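For what it's worth, a sketch of the kind of post-hoc API that would solve this; I believe `update_topics` accepts a `representation_model` in recent versions, but I haven't verified it:
```python
from bertopic.representation import KeyBERTInspired, MaximalMarginalRelevance

# Fit once with no optional representation, then fine-tune post hoc without refitting.
# Assumes update_topics(docs, representation_model=...) is supported.
topic_model.update_topics(docs, representation_model=KeyBERTInspired())
# Inspect the topics, then try another representation on the same fitted model:
topic_model.update_topics(docs, representation_model=MaximalMarginalRelevance(diversity=0.3))
```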
Also, is there any guidance on when to choose and/or combine in a chain the respective optional cTF-IDF representations? Specifically between `KeyBERTInspired`, `PartOfSpeech`, and `MaximalMarginalRelevance`?
Thank you | closed | 2023-03-30T03:35:52Z | 2023-05-23T09:24:51Z | https://github.com/MaartenGr/BERTopic/issues/1135 | [] | MarkWClements | 2 |
microsoft/nni | tensorflow | 5,608 | NNI 3.x support for quant_bits |
**Describe the feature**:
NNI 3.x support for [quant_bits](https://nni.readthedocs.io/en/stable/compression/compression_config_list.html#quant-bits)
**Motivations**:
NNI 3.x only supports [quant_dtype](https://nni.readthedocs.io/en/latest/compression/config_list.html#quant-dtype) for the bit type, but I'd like to test the performance of a BNN with parameters of only -1 or +1.
**Alternatives**:
Use NNI 2.x quantization instead.
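For reference, a minimal sketch of the 2.x-style config list I would use (keys follow the 2.x quantization docs as I understand them; exact keys may vary by version):
```python
# NNI 2.x-style quantization config for a BNN-like experiment (1-bit weights).
config_list = [{
    "quant_types": ["weight"],
    "quant_bits": {"weight": 1},  # binary weights, i.e. values in {-1, +1}
    "op_types": ["Conv2d", "Linear"],
}]
```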
**Components that may involve changes**:
Compression Config Specification
nni.contrib.compression.quantization
**Brief description of your proposal if any**:
| open | 2023-06-13T04:11:11Z | 2023-06-14T02:37:59Z | https://github.com/microsoft/nni/issues/5608 | [] | aininot260 | 0 |
deepfakes/faceswap | deep-learning | 757 | Add OpenCL support | **Is your feature request related to a problem? Please describe.**
Not everyone has an NVidia GPU. OpenCL is an open standard that **all** GPU vendors (Intel, AMD, NVidia) support.
**Describe the solution you'd like**
I'd like to increase the speed of the program using AMD and Intel GPUs, benefitting all users since there is always some kind of GPU attached to any computer. CPU version is very slow.
**Describe alternatives you've considered**
[DeepFaceLab](https://github.com/iperov/DeepFaceLab) added OpenCL support a few months ago. Since both projects are GPLv3, code could be borrowed from this project instead of coding from scratch, reducing work time.
| closed | 2019-06-12T07:06:43Z | 2019-06-12T07:09:26Z | https://github.com/deepfakes/faceswap/issues/757 | [] | Nonononoki | 1 |
flasgger/flasgger | flask | 594 | Switch from `pep8` to `pycodestyle` | The `pep8` project was renamed to `pycodestyle` more than seven years ago, but it is [still used](https://github.com/flasgger/flasgger/blob/master/requirements-dev.txt#L6) by `flasgger` as dev dependency. Please switch to `pycodestyle`. Thank you. | open | 2023-09-13T11:58:35Z | 2023-09-13T11:58:35Z | https://github.com/flasgger/flasgger/issues/594 | [] | mtelka | 0 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 248 | SAMPLE is not working with select() | **Describe the bug**
SAMPLE method is not working
`select(<model>).sample(1)` should produce `SELECT <model>.id FROM <model> SAMPLE`, but now it looks like `SELECT <model>.id FROM <model>`.
**To Reproduce**
```python
from sqlalchemy import Column
from sqlalchemy.orm import declarative_base
from clickhouse_sqlalchemy import select, types

Base = declarative_base()

FORMAT_REPRESENTATION_NAME = "ID {id}, Date {name}"
FORMAT_DOMAIN_TABLE = "{id} - {name}"


class FooBar(Base):
    __tablename__ = 'foobar'
    id = Column(types.UInt64, primary_key=True)


print(select(FooBar).sample(0.1))
```
**Expected behavior**
When I call the `sample` method of the select object, I expect exactly what is written in the documentation, but I do not get it: I see that a bindparam is being formed, and the string version of the query does not contain SAMPLE (no other variant produces a SAMPLE either),
like here https://clickhouse-sqlalchemy.readthedocs.io/en/latest/features.html?highlight=sample#sample
**Versions**
- Version of package with the problem: clickhouse-sqlalchemy = "0.2.4"
- sqlalchemy = "1.4.46"
- Python version: Python 3.9 - 3.10
| open | 2023-05-08T18:49:00Z | 2023-05-08T18:49:00Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/248 | [] | Ninefiveblade | 0 |
automagica/automagica | automation | 121 | delete_column() is not working | delete_column() is not working
When I try the delete_column() example below from the Automagica Documentation Release 2 (Apr 02, 2020):
> excel = Excel()
> excel.write_cell(1, 1, 'Filled')
> excel.write_cell(2, 2, 'Filled')
> excel.write_cell(3, 3, 'Filled')
> excel.delete_column('B')
The error message is below:
```
File "C:\automagica\utilities.py", line 17, in wrapper
    return func(*args, **kwargs)
File "C:\automagica\activities.py", line 5046, in delete_column
    column_range = str(column) + "1"
NameError: name 'column' is not defined
```
| closed | 2020-04-13T03:47:58Z | 2020-09-07T21:35:53Z | https://github.com/automagica/automagica/issues/121 | [] | taesikkim | 1 |
slackapi/python-slack-sdk | asyncio | 1,510 | Add `Options` / `initial_options` validation | Suggestion raised in: https://github.com/slackapi/python-slack-sdk/issues/1509
Behavior:
Each `initial_options` entry must be an exact match of one of the `options` provided, or users experience issues with properly updating forms. The suggestion is to update this SDK to notify developers when an `initial_options` selection does not exactly match a provided option.
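A minimal sketch of the failure mode (option texts and values here are illustrative):
```python
from slack_sdk.models.blocks import Option

options = [Option(text="Apple", value="apple")]
# Exact match of a provided option: updating the form works.
initial_ok = [Option(text="Apple", value="apple")]
# Same value but different text, so no exact match: form updates break,
# and validation could warn the developer here.
initial_bad = [Option(text="apple", value="apple")]
```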
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [x] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| open | 2024-06-14T21:06:30Z | 2024-06-14T21:06:54Z | https://github.com/slackapi/python-slack-sdk/issues/1510 | [
"enhancement",
"auto-triage-skip"
] | srajiang | 0 |
lepture/authlib | flask | 525 | Authorization error responses are missing the state parameter | **Describe the bug**
While trying to implement a "silent signing" flow (using `prompt=none`) the library I use refuses to accept the error response (i.e. `login_required`) because the `state` parameter is missing from the response.
[RFC 6749 section-4.1.2.1](https://datatracker.ietf.org/doc/html/rfc6749#section-4.1.2.1) has the following normative text about the required presence of the state parameter on authorisation error responses:
> state
> REQUIRED if a "state" parameter was present in the client authorization request. The exact value received from the client.
>
> For example, the authorization server redirects the user-agent by sending the following HTTP response:
>
> HTTP/1.1 302 Found
> Location: https://client.example.com/cb?error=access_denied&state=xyz
**To Reproduce**
Send an authorization request with `prompt=none&state=xyz` while not authenticated.
**Expected behavior**
The error response contains the given state parameter.
**Environment:**
- OS: Debian 11
- Python Version: 3.8.16
- Authlib Version: 1.1.0
**Additional context**
`OAuth2Error` has a way to include the state parameter, but it's not always used. i.e. `validate_request_prompt` does not set it on the errors it raises. Other places do include the state parameter, i.e. `validate_authorization_redirect_uri`.
For the time being I've implemented a workaround on my side (Django), by setting the `state` parameter on `OAuth2Error`s raised by `get_consent_grant`:
```python
class OAuth2AuthorizeView(AccessMixin, View):
    def get(self, request, *args, **kwargs):
        user = request.user if request.user.is_authenticated else None
        try:
            grant = oauth2_server.get_consent_grant(request, user)
            if grant.prompt == "login":
                # Redirect to login page
                return self.handle_no_permission()
        except OAuth2Error as error:
            logger.exception("Error during authorization request")
            # Make sure the state parameter is reflected in the error response
            error.state = request.GET.get("state")
            return oauth2_server.handle_error_response(
                oauth2_server.create_oauth2_request(request),
                error,
            )
        ....
``` | open | 2023-01-26T08:58:28Z | 2025-02-20T20:43:36Z | https://github.com/lepture/authlib/issues/525 | [
"bug",
"server"
] | jaap3 | 0 |
kevlened/pytest-parallel | pytest | 56 | ci: coverage reporting | Would you like to have coverage reporting setup?
I could do it, using pytest-cov and codecov. | closed | 2019-11-22T19:04:55Z | 2019-11-22T20:54:08Z | https://github.com/kevlened/pytest-parallel/issues/56 | [] | blueyed | 1 |
pytest-dev/pytest-html | pytest | 314 | Garbage date-time is printed in the Captured log. | Garbage date-time is printed in the Captured log.
**pytest-html report:**
```
------------------------------ Captured log setup ------------------------------
[32mINFO [0m root:test_cyclic_switchover.py:26 Inside Setup
[32mINFO [0m root:test_cyclic_switchover.py:54
------------------------------ Captured log call -------------------------------
[32mINFO [0m root:test_cyclic_switchover.py:82
Switchover Process : SCM
```
**pytest console log:**
```
collected 4 items / 2 deselected / 2 selected

test_cyclic_switchover.py::test_process_switchover[SCM]
------------------------------------------------------------ live log setup -------------------------------------------------------------
2020-07-13 21:51:50 [ INFO] Inside Setup
(test_cyclic_switchover.py:26)
2020-07-13 21:51:50 [ INFO]
(test_cyclic_switchover.py:54)
------------------------------------------------------------- live log call -------------------------------------------------------------
2020-07-13 21:51:50 [ INFO]
Switchover Process : SCM (test_cyclic_switchover.py:82)
PASSED [ 50%]

test_cyclic_switchover.py::test_process_switchover[SAM]
------------------------------------------------------------- live log call -------------------------------------------------------------
2020-07-13 21:51:50 [ INFO]
Switchover Process : SAM (test_cyclic_switchover.py:82)
PASSED [100%]
----------------------------------------------------------- live log teardown -----------------------------------------------------------
2020-07-13 21:51:50 [ INFO] Inside Teardown (test_cyclic_switchover.py:60)
```
| closed | 2020-07-13T16:43:22Z | 2020-08-10T12:25:37Z | https://github.com/pytest-dev/pytest-html/issues/314 | [] | pawan7476 | 20 |
facebookresearch/fairseq | pytorch | 4,744 | Regarding the size of RoBERTa's training data in terms of tokens | ## ❓ Questions and Help
Hello,
This is a rather unusual question but I hope that RoBERTa's authors could help me with their answer: How many tokens are there in the training set for RoBERTa?
I was unable to find such information anywhere on the internet. I know from the paper that the training data is about 160GB of text and the vocabulary size is 50K, but it doesn't seem to be possible to infer the total number of tokens.
Thank you very much in advance for your help! | open | 2022-09-26T21:38:17Z | 2022-09-26T21:38:17Z | https://github.com/facebookresearch/fairseq/issues/4744 | [
"question",
"needs triage"
] | netw0rkf10w | 0 |
python-visualization/folium | data-visualization | 1,867 | TagFilterButton - Dropdown overlaps with Button | **Describe the bug**
The Dropdown appearing when using TagFilterButton overlaps with the (then hidden) button itself.
Using multiple Filters becomes challenging, as the text stays hidden behind the other filter buttons.

```
The example page for folium shows this behavior:
https://python-visualization.github.io/folium/latest/user_guide/plugins/tag_filter_button.html#
The example page for Leaflet does not:
https://maydemirx.github.io/leaflet-tag-filter-button/
```
**Expected behavior**
The dropdown should open next to the button, not overlap it. Because of the overlap, using multiple TagFilterButtons is challenging.
**Environment (please complete the following information):**
- Browser: Chrome
- Jupyter Notebook or html files? HTML
- Python version (sys.version_info(major=3, minor=12, micro=1, releaselevel='final', serial=0))
- folium version (0.15.1)
- branca version (0.7.0)
**Possible solutions**
Solution: add a margin between the button and the dropdown. A rough sketch of a possible workaround is below.
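Sketch of a workaround by injecting CSS (the selector below is a placeholder; the plugin's real dropdown class would need to be checked in the rendered page's DOM):
```python
import folium

m = folium.Map()
# Placeholder selector; inspect the rendered page for the actual class name.
css = "<style>.tag-filter-tags-container { margin-left: 40px; }</style>"
m.get_root().header.add_child(folium.Element(css))
```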
folium is maintained by volunteers. Can you help making a fix for this issue?
Sadly no, absolute beginner here. | closed | 2024-01-31T12:22:01Z | 2024-05-06T09:52:19Z | https://github.com/python-visualization/folium/issues/1867 | [
"bug",
"plugin"
] | Merodas | 1 |
rio-labs/rio | data-visualization | 158 | Reconsider Experimental Components | New components frequently get marked as experimental, to allow their API to change for some time as new lessons are learned. That's good, but it has also led to many components still being marked as experimental when in fact nobody intends to change them anymore.
Go through all components and update their metadata. | open | 2024-10-20T11:11:23Z | 2024-11-02T14:36:37Z | https://github.com/rio-labs/rio/issues/158 | [
"enhancement"
] | mad-moo | 1 |
deepfakes/faceswap | machine-learning | 967 | Fedora install | docker build -t deepfakes-gpu -f Dockerfile.gpu .

```
pip list
langtable 0.0.50
libcomps 0.1.14
Mako 1.1.0.dev0
MarkupSafe 1.1.1
matplotlib 3.1.1
ntplib 0.3.3
numpy 1.18.1
olefile 0.46
ordered-set 3.1
```
| closed | 2020-01-27T08:37:57Z | 2020-02-21T11:33:13Z | https://github.com/deepfakes/faceswap/issues/967 | [] | liveinno | 1 |
ray-project/ray | deep-learning | 51,373 | Ray rllib DreamerV3 incompatible with new API? | ### What happened + What you expected to happen
I've tried to use the dreamerV3 agent from rllib for one of my projects and find it quite challenging to configure it correctly so that it would not throw an exception. I apologize in advance if this is entirely a problem on my end, but I have reasons to believe that this is not the case. To simplify things, I tried the frozenlake_2x2 tuned example found [here](https://github.com/ray-project/ray/blob/master/rllib/tuned_examples/dreamerv3/frozenlake_2x2.py)
and it results in the same exception I encountered in my code as well:
```
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/algorithm.py", line 528, in __init__
super().__init__(
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/tune/trainable/trainable.py", line 157, in __init__
self.setup(copy.deepcopy(self.config))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/dreamerv3/dreamerv3.py", line 488, in setup
super().setup(config)
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/algorithm.py", line 748, in setup
self.learner_group = self.config.build_learner_group(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/algorithm_config.py", line 1249, in build_learner_group
learner_group = LearnerGroup(config=self.copy(), module_spec=rl_module_spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/learner/learner_group.py", line 129, in __init__
self._learner.build()
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/learner/tf/tf_learner.py", line 275, in build
super().build()
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/learner/learner.py", line 310, in build
self._module = self._make_module()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/learner/learner.py", line 1586, in _make_module
module = self._module_spec.build()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/multi_rl_module.py", line 611, in build
module = self.multi_rl_module_class(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/multi_rl_module.py", line 127, in __init__
super().__init__(
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 467, in __init__
self.setup()
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/multi_rl_module.py", line 145, in setup
self._rl_modules[module_id] = rl_module_spec.build()
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 113, in build
module = self.module_class(module_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/tf/tf_rl_module.py", line 25, in __init__
RLModule.__init__(self, *args, **kwargs)
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 408, in __init__
deprecation_warning(
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/utils/deprecation.py", line 48, in deprecation_warning
raise ValueError(msg)
ValueError: RLModule(config=[RLModuleConfig]) has been deprecated. Use RLModule(observation_space=.., action_space=.., inference_only=.., learner_only=.., model_config=..) instead.
```
I've tried to adapt the code to the new rl module API, which I'm admittedly not yet familiar with. I came up with this code:
```
from pprint import pprint

import ray
from ray.rllib.algorithms.dreamerv3.dreamerv3_rl_module import DreamerV3RLModule
from ray.rllib.algorithms.dreamerv3 import DreamerV3Config
from ray.rllib.core.rl_module.default_model_config import DefaultModelConfig
from ray.rllib.core.rl_module.rl_module import RLModuleSpec

ray.init(ignore_reinit_error=True)

config = (
    DreamerV3Config()
    .environment('CartPole-v1')
    .training(
        train_batch_size_per_learner=2000,
        lr=0.0004,
    )
    .framework("tf2")  # set framework explicitly
    .rl_module(
        model_config=DefaultModelConfig(),
        rl_module_spec=RLModuleSpec(
            module_class=DreamerV3RLModule
        )
    )
)

# Build the algorithm.
algo = config.build()

# Training loop.
for i in range(100):
    pprint(algo.train())

ray.shutdown()
```
This yields the exception:
```
File "/Users/username/projects/dreamer-cscg/train_dreamer.py", line 98, in <module>
main()
File "/Users/username/projects/dreamer-cscg/train_dreamer.py", line 89, in main
algo = config.build()
^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/utils/deprecation.py", line 128, in _ctor
return obj(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/algorithm_config.py", line 5417, in build
return self.build_algo(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/algorithm_config.py", line 958, in build_algo
return algo_class(
^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/algorithm.py", line 528, in __init__
super().__init__(
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/tune/trainable/trainable.py", line 157, in __init__
self.setup(copy.deepcopy(self.config))
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/dreamerv3/dreamerv3.py", line 488, in setup
super().setup(config)
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/algorithm.py", line 748, in setup
self.learner_group = self.config.build_learner_group(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/algorithm_config.py", line 1249, in build_learner_group
learner_group = LearnerGroup(config=self.copy(), module_spec=rl_module_spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/learner/learner_group.py", line 129, in __init__
self._learner.build()
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/learner/tf/tf_learner.py", line 275, in build
super().build()
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/learner/learner.py", line 310, in build
self._module = self._make_module()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/learner/learner.py", line 1586, in _make_module
module = self._module_spec.build()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/multi_rl_module.py", line 611, in build
module = self.multi_rl_module_class(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/multi_rl_module.py", line 127, in __init__
super().__init__(
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 467, in __init__
self.setup()
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/multi_rl_module.py", line 145, in setup
self._rl_modules[module_id] = rl_module_spec.build()
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 102, in build
module = self.module_class(
^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 467, in __init__
self.setup()
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/dreamerv3/dreamerv3_rl_module.py", line 50, in setup
self.encoder = self.catalog.build_encoder(framework=self.framework)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/tf/lib/python3.11/site-packages/ray/rllib/algorithms/dreamerv3/dreamerv3_catalog.py", line 45, in build_encoder
raise NotImplementedError
NotImplementedError
```
### Versions / Dependencies
MacOS Sequoia 15.3.1
Python: 3.11
```
Package Version
---------------------------- -----------
absl-py 2.1.0
aiohappyeyeballs 2.4.4
aiohttp 3.10.5
aiohttp-cors 0.7.0
aiosignal 1.2.0
annotated-types 0.7.0
astunparse 1.6.3
attrs 24.3.0
blinker 1.9.0
Box2D 2.3.10
Brotli 1.0.9
cachetools 5.5.1
certifi 2025.1.31
cffi 1.17.1
charset-normalizer 3.3.2
click 8.1.7
cloudpickle 3.1.1
colorful 0.5.6
cryptography 44.0.1
decorator 5.2.1
distlib 0.3.9
dm-tree 0.1.9
Farama-Notifications 0.0.4
filelock 3.17.0
flatbuffers 24.3.25
frozenlist 1.5.0
fsspec 2025.3.0
gast 0.4.0
google-api-core 2.24.2
google-auth 2.38.0
google-auth-oauthlib 0.5.2
google-pasta 0.2.0
googleapis-common-protos 1.69.1
grpcio 1.62.2
gymnasium 1.0.0
h5py 3.12.1
idna 3.7
imageio 2.37.0
jinja2 None
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
keras 3.6.0
Keras-Preprocessing 1.1.2
lazy_loader 0.4
libclang 18.1.1
lz4 4.4.3
Markdown 3.4.1
markdown-it-py 2.2.0
MarkupSafe 3.0.2
mdurl 0.1.0
minigrid 3.0.0
ml_dtypes 0.5.1
mpmath 1.3.0
msgpack 1.1.0
multidict 6.1.0
namex 0.0.7
networkx 3.4.2
numpy 2.1.3
oauthlib 3.2.2
opencensus 0.11.4
opencensus-context 0.1.3
opt-einsum 3.3.0
optree 0.14.1
ormsgpack 1.7.0
packaging 24.2
pandas 2.2.3
pillow 11.1.0
pip 25.0.1
platformdirs 4.3.6
prometheus_client 0.21.1
propcache 0.2.0
proto-plus 1.26.1
protobuf 4.25.3
py-spy 0.4.0
pyarrow 19.0.1
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.21
pydantic 2.10.6
pydantic_core 2.27.2
pygame 2.6.1
Pygments 2.15.1
PyJWT 2.10.1
pyOpenSSL 25.0.0
PySocks 1.7.1
python-dateutil 2.9.0.post0
pytz 2025.1
PyYAML 6.0.2
ray 2.43.0
referencing 0.36.2
requests 2.32.3
requests-oauthlib 2.0.0
rich 13.9.4
rpds 5.1.0
rpds-py 0.23.1
rsa 4.7.2
scikit-image 0.25.2
scipy 1.15.1
setuptools 75.8.0
shellingham 1.5.4
six 1.16.0
smart-open 7.1.0
svgwrite 1.4.3
swig 4.3.0
sympy 1.13.1
tensorboard 2.19.0
tensorboard_data_server 0.7.0
tensorboard-plugin-wit 1.8.1
tensorboardX 2.6.2.2
tensorflow 2.19.0
tensorflow_estimator 2.15.0
tensorflow-io-gcs-filesystem 0.37.1
tensorflow-probability 0.25.0
termcolor 2.1.0
tf_keras 2.19.0
tifffile 2025.2.18
torch 2.6.0
torchaudio 2.6.0
torchvision 0.21.0
tqdm 4.67.1
typer 0.15.2
typing_extensions 4.12.2
tzdata 2025.1
urllib3 2.3.0
virtualenv 20.29.3
Werkzeug 3.1.3
wheel 0.35.1
wrapt 1.17.0
yarl 1.18.0
```
### Reproduction script
Run the tuned example found [here](https://github.com/ray-project/ray/blob/master/rllib/tuned_examples/dreamerv3/frozenlake_2x2.py)
as instructed inside the script itself. This means:
1. Download [run_regression_tests.py](https://github.com/ray-project/ray/blob/4ff061b151401cca49dc4484c00c23889974ad5a/rllib/tests/run_regression_tests.py#L4)
2. Download the [example script](https://github.com/ray-project/ray/blob/master/rllib/tuned_examples/dreamerv3/frozenlake_2x2.py)
3. run with `python run_regression_tests.py --dir <absolute_path_to_frozenlake_2x2.py>`
### Issue Severity
DreamerV3 is not usable for me at the moment. | open | 2025-03-14T12:36:11Z | 2025-03-18T19:48:37Z | https://github.com/ray-project/ray/issues/51373 | [
"bug",
"triage",
"rllib"
] | rschiewer | 1 |
tatsu-lab/stanford_alpaca | deep-learning | 151 | Fine-tuning Does not work | ```
Traceback (most recent call last):
File "/home/ubuntu/stanford_alpaca/train.py", line 231, in <module>
train()
File "/home/ubuntu/stanford_alpaca/train.py", line 225, in train
trainer.train()
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1628, in train
return inner_training_loop(
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1715, in _inner_training_loop
model = self._wrap_model(self.model_wrapped)
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1442, in _wrap_model
raise Exception("Could not find the transformer layer class to wrap in the model.")
Exception: Could not find the transformer layer class to wrap in the model.
Traceback (most recent call last):
File "/home/ubuntu/stanford_alpaca/train.py", line 231, in <module>
train()
File "/home/ubuntu/stanford_alpaca/train.py", line 225, in train
trainer.train()
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1628, in train
return inner_training_loop(
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1715, in _inner_training_loop
model = self._wrap_model(self.model_wrapped)
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1442, in _wrap_model
raise Exception("Could not find the transformer layer class to wrap in the model.")
Exception: Could not find the transformer layer class to wrap in the model.
Traceback (most recent call last):
File "/home/ubuntu/stanford_alpaca/train.py", line 231, in <module>
train()
File "/home/ubuntu/stanford_alpaca/train.py", line 225, in train
trainer.train()
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1628, in train
return inner_training_loop(
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1715, in _inner_training_loop
model = self._wrap_model(self.model_wrapped)
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1442, in _wrap_model
raise Exception("Could not find the transformer layer class to wrap in the model.")
Exception: Could not find the transformer layer class to wrap in the model.
[I ProcessGroupNCCL.cpp:844] [Rank 2] NCCL watchdog thread terminated normally
Traceback (most recent call last):
File "/home/ubuntu/stanford_alpaca/train.py", line 231, in <module>
train()
File "/home/ubuntu/stanford_alpaca/train.py", line 225, in train
trainer.train()
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1628, in train
return inner_training_loop(
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1715, in _inner_training_loop
model = self._wrap_model(self.model_wrapped)
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1442, in _wrap_model
raise Exception("Could not find the transformer layer class to wrap in the model.")
Exception: Could not find the transformer layer class to wrap in the model.
[I ProcessGroupNCCL.cpp:844] [Rank 3] NCCL watchdog thread terminated normally
[I ProcessGroupNCCL.cpp:844] [Rank 0] NCCL watchdog thread terminated normally
[I ProcessGroupNCCL.cpp:844] [Rank 1] NCCL watchdog thread terminated normally
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 13697) of binary: /home/ubuntu/anaconda3/envs/lama/bin/python
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/lama/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
``` | open | 2023-03-28T06:31:21Z | 2023-03-30T12:27:54Z | https://github.com/tatsu-lab/stanford_alpaca/issues/151 | [] | akanyaani | 6 |
graphistry/pygraphistry | pandas | 381 | [BUG] dgl type error | Maybe datetime handling is having issues?
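If so, a possible workaround to try before digging into the trace below (an untested sketch; it assumes the `<M8[ns]` datetime column in `edf` is what reaches scikit-learn's `check_array`):
```python
import pandas as pd

def strip_datetimes(edf: pd.DataFrame) -> pd.DataFrame:
    # cast datetime columns to epoch nanoseconds before featurizing / build_gnn
    for col in edf.select_dtypes(include=["datetime64[ns]"]).columns:
        edf[col] = edf[col].astype("int64")
    return edf
```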
`pygraphistry/docker$ ./test-gpu-local.sh ` =>
```
=================================== FAILURES ===================================
_________________ TestDGL.test_build_dgl_with_no_node_features _________________
self = <test_dgl_utils.TestDGL testMethod=test_build_dgl_with_no_node_features>
@pytest.mark.skipif(not has_dependancy, reason="requires DGL dependencies")
def test_build_dgl_with_no_node_features(self):
g = graphistry.edges(edf, src, dst)
g.reset_caches() # so that we redo calcs
#g = g.umap(scale=1) #keep all edges with scale = 100
# should produce random features for nodes
> g2 = g.build_gnn(
use_node_scaler="robust",
use_edge_scaler="robust",
)
graphistry/tests/test_dgl_utils.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
graphistry/dgl_utils.py:493: in build_gnn
res = res._featurize_edges_to_dgl(
graphistry/dgl_utils.py:376: in _featurize_edges_to_dgl
X_enc, y_enc, res = res._featurize_or_get_edges_dataframe_if_X_is_None(
graphistry/feature_utils.py:2414: in _featurize_or_get_edges_dataframe_if_X_is_None
res = res._featurize_edges(
graphistry/feature_utils.py:1973: in _featurize_edges
encoder.fit(src=res._source, dst=res._destination, **nfkwargs)
graphistry/feature_utils.py:1602: in fit
res = self._encode(
graphistry/feature_utils.py:1555: in _encode
res = process_edge_dataframes(
graphistry/feature_utils.py:1296: in process_edge_dataframes
) = process_nodes_dataframes(
graphistry/feature_utils.py:1089: in process_nodes_dataframes
X_enc, y_enc, data_encoder, label_encoder = process_dirty_dataframes(
graphistry/feature_utils.py:875: in process_dirty_dataframes
X_enc = data_encoder.fit_transform(ndf, y)
../conda/envs/rapids/lib/python3.8/site-packages/dirty_cat/super_vectorizer.py:430: in fit_transform
return super().fit_transform(X, y)
../conda/envs/rapids/lib/python3.8/site-packages/sklearn/compose/_column_transformer.py:529: in fit_transform
return self._hstack(list(Xs))
../conda/envs/rapids/lib/python3.8/site-packages/sklearn/compose/_column_transformer.py:588: in _hstack
converted_Xs = [check_array(X,
../conda/envs/rapids/lib/python3.8/site-packages/sklearn/compose/_column_transformer.py:588: in <listcomp>
converted_Xs = [check_array(X,
../conda/envs/rapids/lib/python3.8/site-packages/sklearn/utils/validation.py:63: in inner_f
return f(*args, **kwargs)
../conda/envs/rapids/lib/python3.8/site-packages/sklearn/utils/validation.py:597: in check_array
dtype_orig = np.result_type(*dtypes_orig)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (dtype('int64'), dtype('<M8[ns]'), dtype('float64'), dtype('float64'), dtype('float64'), dtype('float64'), ...)
kwargs = {}
relevant_args = (dtype('int64'), dtype('<M8[ns]'), dtype('float64'), dtype('float64'), dtype('float64'), dtype('float64'), ...)
> ???
E TypeError: The DType <class 'numpy.dtype[datetime64]'> could not be promoted by <class 'numpy.dtype[float64]'>. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is `object`. The full list of DTypes is: (<class 'numpy.dtype[int64]'>, <class 'numpy.dtype[datetime64]'>, <class 'numpy.dtype[float64]'>, <class 'numpy.dtype[float64]'>, <class 'numpy.dtype[float64]'>, <class 'numpy.dtype[float64]'>, <class 'numpy.dtype[int64]'>, <class 'numpy.dtype[int64]'>, <class 'numpy.dtype[int64]'>)
``` | open | 2022-07-26T04:50:02Z | 2022-07-26T04:50:02Z | https://github.com/graphistry/pygraphistry/issues/381 | [
"bug"
] | lmeyerov | 0 |
Miserlou/Zappa | django | 1,874 | unable to delete AWS Certificate after undeploying | This is probably related to AWS internals as much as to Zappa, but maybe someone using Zappa has encountered this.
My AWS account has a limited number of allowed certificates, so I need to delete unused ones to create new ones. However, after undeploying an app, I am unable to delete the AWS certificate used to certify it.
This is what the console tells me:
<img width="651" alt="Снимок экрана 2019-05-16 в 18 44 48" src="https://user-images.githubusercontent.com/7825762/57867948-b8fb6500-780a-11e9-9f7c-75a9f3d121fa.png">
However I am not able to find this cloudfront distribution in my Cloudfront distributions list. Quick search through Stackoverflow reveals that Cloudfront distributions created for API Gateway are not displayed in the admin console.
However, I do need to delete this distribution.
So the question is: what does Zappa do to create and delete the distribution? How can I assist the deletion from the AWS Console?
P.S. I deleted the Route 53 DNS zone - it did not help.
I tried listing this distribution using the cli - it didn't show up:
```
aws cloudfront get-distribution --id E1884EQQC5R1BK
An error occurred (NoSuchDistribution) when calling the GetDistribution operation: The specified distribution does not exist.
```
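One avenue worth checking (my assumption, not something from Zappa's docs): edge-optimized API Gateway custom domains own a hidden CloudFront distribution, so it may only be released by deleting the custom domain name. A boto3 sketch to check whether such a domain owns it:
```python
import boto3

apigw = boto3.client("apigateway")
for item in apigw.get_domain_names().get("items", []):
    # distributionDomainName is the dxxxx.cloudfront.net host backing the domain
    print(item["domainName"], item.get("distributionDomainName"))

# if one of them maps to the stuck distribution, deleting the custom domain
# name should release it (the domain below is a placeholder):
# apigw.delete_domain_name(domainName="api.example.com")
```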
| open | 2019-05-16T15:48:06Z | 2019-08-19T10:24:29Z | https://github.com/Miserlou/Zappa/issues/1874 | [] | kurtgn | 8 |
hankcs/HanLP | nlp | 1,798 | Person-name recognition is not very accurate for the surname "张" | <!--
Thanks for finding a bug. Please fill in the form below carefully:
-->
**Describe the bug**
I extracted some short phrases and found that the surname 张 (Zhang) is especially likely to go unrecognized.
Here are concrete examples:
张先生对接城西 分词: [张先生/nz, 对接/v, 城西/d]
张先生开封 分词: [张先生/nz, 开封/ns]
张阿姨 分词: [张/q, 阿姨/n]
**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
```java
Segment segment = HanLP.newSegment().enableAllNamedEntityRecognize(true);
List<Term> list = segment.seg(str);
log.info("##{} 分词: {}", str, ArrayUtils.toString(list));
CoreStopWordDictionary.apply(list);
```
**Describe the current behavior**
Many of these names are not recognized.
**Expected behavior**
张先生 should be recognized as 张/nr, or as 张先生/nr.
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 11.3.1
- Java version: 8
- HanLP version: portable-1.8.3
**Other info / logs**
None
* [x] I've completed this form and searched the web for solutions.
<!-- ⬆️ This box must be checked, otherwise your issue will be automatically deleted by a bot! --> | closed | 2022-12-30T03:55:50Z | 2022-12-30T04:22:07Z | https://github.com/hankcs/HanLP/issues/1798 | [
"wontfix"
] | watsonwuh | 1 |
huggingface/datasets | nlp | 7,431 | Issues with large Datasets | ### Describe the bug
If the COCO annotation file is too large, the dataset will not be able to load it. I'm not entirely sure where the issue is, but I am guessing the code tries to load it all as one line into a dataframe. This was for object detection.
My current workaround is the following code, though it would be nice not to have to worry about this; there is probably also a better way of doing it:
```python
import json
import pandas as pd
from datasets import Dataset, DatasetDict

dataset_dict = json.load(open("./local_data/annotations/train.json"))
df = pd.DataFrame(columns=['images', 'annotations', 'categories'])
df = df._append({'images': dataset_dict['images'], 'annotations': dataset_dict['annotations'], 'categories': dataset_dict['categories']}, ignore_index=True)
train = Dataset.from_pandas(df)

dataset_dict = json.load(open("./local_data/annotations/validation.json"))
df = pd.DataFrame(columns=['images', 'annotations', 'categories'])
df = df._append({'images': dataset_dict['images'], 'annotations': dataset_dict['annotations'],
                 'categories': dataset_dict['categories']}, ignore_index=True)
val = Dataset.from_pandas(df)

dataset_dict = json.load(open("./local_data/annotations/test.json"))
df = pd.DataFrame(columns=['images', 'annotations', 'categories'])
df = df._append({'images': dataset_dict['images'], 'annotations': dataset_dict['annotations'],
                 'categories': dataset_dict['categories']}, ignore_index=True)
test = Dataset.from_pandas(df)

dataset = DatasetDict({'train': train, 'validation': val, 'test': test})
```
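For what it's worth, the same workaround factored into a small helper (a sketch with the same behavior; `Dataset.from_dict` skips pandas entirely, though I have not verified that it dodges the loader issue):
```python
import json
from datasets import Dataset, DatasetDict

def load_coco_split(path: str) -> Dataset:
    # wrap the whole annotation file in a single one-row dataset,
    # mirroring the dataframe workaround above
    d = json.load(open(path))
    return Dataset.from_dict(
        {"images": [d["images"]], "annotations": [d["annotations"]], "categories": [d["categories"]]}
    )

dataset = DatasetDict(
    {
        "train": load_coco_split("./local_data/annotations/train.json"),
        "validation": load_coco_split("./local_data/annotations/validation.json"),
        "test": load_coco_split("./local_data/annotations/test.json"),
    }
)
```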
### Steps to reproduce the bug
1) Set up a directory with the JSON files in COCO format:
```
-local_data
|-images
|---1.jpg
|---2.jpg
|---....
|---n.jpg
|-annotations
|---test.json
|---train.json
|---validation.json
```
2) Try to load `local_data` into a dataset; if an annotation file is larger than about 300 kB, it will cause an error.
### Expected behavior
That it loads the JSON files, preferably in the same format as it does for smaller files.
### Environment info
- `datasets` version: 3.3.3.dev0
- Platform: Linux-6.11.0-17-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.29.0
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
| open | 2025-02-28T14:05:22Z | 2025-03-04T15:02:26Z | https://github.com/huggingface/datasets/issues/7431 | [] | nikitabelooussovbtis | 4 |
amisadmin/fastapi-amis-admin | fastapi | 140 | Enhancement: Contribution guide | Hello everyone 👋
I would like to contribute to this project with translations, but first I would like to read a Contribution Guide with steps on how to contribute.
I would recommend checking FastAPI and Starlette-Admin Contribution guide as inspiration.
I'm planning to create a locale for Turkish Language. | open | 2023-10-19T13:25:01Z | 2023-10-19T13:25:01Z | https://github.com/amisadmin/fastapi-amis-admin/issues/140 | [] | hasansezertasan | 0 |
fastapi-users/fastapi-users | asyncio | 474 | Reset password validation | Follow-up of discussion #465.
There is currently no way to plug password validation logic into the reset password router. This should be possible.
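A sketch of the kind of hook this could enable; none of these names are an existing fastapi-users API, they only illustrate the shape:
```python
class InvalidPassword(Exception):
    def __init__(self, reason: str) -> None:
        self.reason = reason

async def validate_password(password: str, user) -> None:
    # illustrative policy only; raising would abort the reset with an error response
    if len(password) < 8:
        raise InvalidPassword("password must be at least 8 characters")
    if user.email.split("@")[0] in password:
        raise InvalidPassword("password must not contain the e-mail local part")
```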
 | closed | 2021-02-06T08:12:10Z | 2021-05-20T06:57:15Z | https://github.com/fastapi-users/fastapi-users/issues/474 | [
"enhancement"
] | frankie567 | 1 |
Lightning-AI/pytorch-lightning | pytorch | 19,951 | docker image doesn't have `pytorch_lightning` | ### Bug description
```shell
$ docker run --rm pytorchlightning/pytorch_lightning:base-cuda-py3.10-torch2.2-cuda12.1.0 python -c "import pytorch_lightning"
==========
== CUDA ==
==========
CUDA Version 12.1.0
...
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'pytorch_lightning'
```
is this the intended behavior?
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
_No response_
### Environment
_No response_
### More info
_No response_
cc @borda | closed | 2024-06-05T22:31:11Z | 2024-06-24T08:44:59Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19951 | [] | grisaitis | 1 |
flasgger/flasgger | api | 102 | Import feature misidentifies root path for endpoint | Given `thing_doer.yml`:
```
tags:
- "thing_doer"
summary: "Do a thing"
description: ""
consumes:
- "application/json"
produces:
- "application/json"
parameters:
- in: "body"
name: "body"
required: true
schema:
import: "models/stuff.yml"
responses:
200:
description: "The results of doing a thing"
schema:
type: "object"
properties:
whatever:
type: "integer"
```
with a file structure as follows:
```
specs/
thing_doer.yml
models/
stuff.yml
```
I get the following exception:
```
[2017-05-17 16:08:43,303] ERROR in app: Exception on /apispec_1.json [GET]
Traceback (most recent call last):
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flasgger/utils.py", line 294, in load_from_file
with open(swag_path) as yaml_file:
FileNotFoundError: [Errno 2] No such file or directory: 'models/raw_text.yml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flask/_compat.py", line 33, in reraise
raise value
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flask/views.py", line 149, in dispatch_request
return meth(*args, **kwargs)
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flasgger/base.py", line 229, in get
endpoint=rule.endpoint, verb=verb
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flasgger/utils.py", line 338, in parse_docstring
full_doc = parse_imports(full_doc, root_path)
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flasgger/utils.py", line 420, in parse_imports
imported_doc = load_from_file(filepath, root_path=root_path)
File "/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flasgger/utils.py", line 301, in load_from_file
with open(swag_path) as yaml_file:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flasgger/models/raw_text.yml'
```
The culprit appears to be https://github.com/rochacbruno/flasgger/blob/master/flasgger/utils.py#L362 where `obj.__globals__['__file__']` resolves to `'/Users/dkharlan/venvs/myenv/lib/python3.6/site-packages/flasgger/utils.py'`. I'm not really sure why this would happen (maybe the way the endpoint function is imported influences the value of `__file__`? I'm unclear on Python's semantics here).
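For what it's worth, a self-contained sketch of my guess (not verified against Flasgger's internals): when a view function is wrapped by a decorator defined in another module, `__globals__` belongs to the wrapper's defining module, so `obj.__globals__['__file__']` points at the wrong file, while `inspect.unwrap` recovers the original:
```python
import functools
import inspect

def deco(f):  # imagine this decorator lives in flasgger/utils.py
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@deco
def view():
    pass

# __globals__ is the wrapper's module namespace (functools.wraps does not copy it);
# in this single file both prints match, but across modules they diverge
print(view.__globals__['__file__'])
# following __wrapped__ (set by functools.wraps) gets back to the real function
print(inspect.getfile(inspect.unwrap(view)))
```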
Thank you -- please let me know if you need more details. | open | 2017-05-17T20:22:51Z | 2020-10-01T16:06:11Z | https://github.com/flasgger/flasgger/issues/102 | [
"bug",
"hacktoberfest"
] | dkharlan | 0 |
thp/urlwatch | automation | 172 | Relative path and Absolute path comparison | Hi,
I haven't gotten around to testing the program yet. However, I want to ask: if a webpage's URLs change dynamically (e.g., from /example/index.html to index.html, or from index.html to /example/index.html), does urlwatch detect this type of change and generate a false positive?
Any response is appreciated.
Thank you | closed | 2017-09-20T04:00:33Z | 2017-10-11T19:25:59Z | https://github.com/thp/urlwatch/issues/172 | [] | dotox | 3 |
PaddlePaddle/PaddleHub | nlp | 1,834 | How can the serving functionality be reused? | You are welcome to make suggestions for PaddleHub; thank you very much for your contribution!
When leaving your suggestion, please also provide the following information:
- What new feature would you like to add?
- In what scenario is the feature needed?
paddlehub serving is very convenient, but its request routes are all preconfigured. How can I reuse serving's request handling from Flask or FastAPI, while having other requests call my own code?
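Not an official answer, just a sketch of the pattern I would try for this: load the module once and call it from a plain FastAPI app, so the routing stays fully under your control (the `lac` module and its `cut` method are only an example; other modules expose different inference methods):
```python
from fastapi import FastAPI
import paddlehub as hub

app = FastAPI()
lac = hub.Module(name="lac")  # example module; substitute the one you serve

@app.post("/predict")
def predict(texts: list[str]):
    # `cut` is specific to the lac module; call your module's own
    # inference method here instead
    return {"results": lac.cut(text=texts)}

@app.get("/my-endpoint")
def my_endpoint():
    # non-model routes run your own code as usual
    return {"ok": True}
```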
 | open | 2022-04-08T11:36:31Z | 2022-04-14T03:30:33Z | https://github.com/PaddlePaddle/PaddleHub/issues/1834 | [] | paopjian | 0 |
polakowo/vectorbt | data-visualization | 72 | Cryptic Numba issues while trying to reproduce the documentation examples | Hello.
I'm trying to start working with VectorBT, but I keep getting some very weird errors from Numba while invoking the code. Given that Numba is really bad at error descriptions it is hard to say if it is my mistake while copy-pasting or if it is a version problem.
Here's the proof of concept exhibiting the issue. It is mostly a copy-paste from the documentation and the article on the stop types comparison:
```
#!/usr/bin/env python3
import pandas as pd
import yfinance as yf
import vectorbt as vbt
from datetime import datetime
df = yf.Ticker('BTC-USD').history(interval='1h',
start=datetime(2020, 1, 1),
end=datetime(2020, 12, 1))
# sort of we have multiple data frames, that doesn't seem to influence the bug
# df.columns = pd.MultiIndex.from_tuples(
# (c, 'BTC-USD')
# for c in df.columns
# )
fast_ma = vbt.MA.run(df['Close'], 10, short_name='fast')
slow_ma = vbt.MA.run(df['Close'], 20, short_name='slow')
entries = fast_ma.ma_above(slow_ma, crossed=True)
exits = vbt.ADVSTEX.run(
entries,
df['Open'],
df['High'],
df['Low'],
df['Close'],
ts_stop=[0.1],
stop_type=None, hit_price=None
).exits
```
And its output is the following:
```
Traceback (most recent call last):
File "/home/naquad/projects/tests/strat2/try1.py", line 39, in <module>
tp_exits = df['BUY'].vbt.signals.generate_stop_exits(np.array([0.2]), 0, trailing=True)
File "/home/naquad/.local/lib/python3.9/site-packages/vectorbt/signals/accessors.py", line 469, in generate_stop_exits
exits = nb.generate_stop_ex_nb(
File "/home/naquad/.local/lib/python3.9/site-packages/numba/core/dispatcher.py", line 415, in _compile_for_args
error_rewrite(e, 'typing')
File "/home/naquad/.local/lib/python3.9/site-packages/numba/core/dispatcher.py", line 358, in error_rewrite
reraise(type(e), e, None)
File "/home/naquad/.local/lib/python3.9/site-packages/numba/core/utils.py", line 80, in reraise
raise value.with_traceback(tb)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Internal error at <numba.core.typeinfer.CallConstraint object at 0x7f4cb1d21fd0>.
Failed in nopython mode pipeline (step: analyzing bytecode)
Use of unsupported opcode (LIST_EXTEND) found
File "../../../.local/lib/python3.9/site-packages/vectorbt/signals/nb.py", line 79:
def generate_ex_nb(entries, wait, exit_choice_func_nb, *args):
<source elided>
# Run the UDF
idxs = exit_choice_func_nb(col, from_i, to_i, *args)
^
During: resolving callee type: type(CPUDispatcher(<function generate_ex_nb at 0x7f4cb8545790>))
During: typing of call at /home/naquad/.local/lib/python3.9/site-packages/vectorbt/signals/nb.py (471)
Enable logging at debug level for details.
File "../../../.local/lib/python3.9/site-packages/vectorbt/signals/nb.py", line 471:
def generate_stop_ex_nb(entries, ts, stop, trailing, wait, first, flex_2d):
<source elided>
temp_idx_arr = np.empty((entries.shape[0],), dtype=np.int_)
return generate_ex_nb(entries, wait, stop_choice_nb, ts, stop, trailing, wait, first, temp_idx_arr, flex_2d)
^
```
I've seen other errors like `Use of unsupported opcode (...) found` including CONTAINS_OP and similar ones. The versions I'm using are the following:
Versions:
Python 3.9
numba 0.51.2
numpy 1.19.4
pandas 1.1.4
vectorbt 0.14.4
llvmlite 0.34.0
llvm 10.0.1-3
If there are recommended versions then please state them, so I could try to run it in the docker container.
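One data point that might help triage (an observation, not a confirmed fix): `LIST_EXTEND` is a bytecode instruction introduced in CPython 3.9, so a Numba release that predates 3.9 support cannot compile functions whose bytecode contains it:
```python
import dis
import sys

def f(*args):
    return [*args]  # star-unpacking compiles to LIST_EXTEND on CPython 3.9+

print(sys.version_info)
dis.dis(f)  # on 3.9 this listing contains LIST_EXTEND, the opcode Numba rejects
```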
P. S. Thank you for your work, as the person who wrote a much smaller and limited version of the backtesting engine than VectorBT I acknowledge the amount of work it took. | closed | 2020-12-25T12:21:17Z | 2020-12-25T14:50:45Z | https://github.com/polakowo/vectorbt/issues/72 | [] | naquad | 2 |
slackapi/python-slack-sdk | asyncio | 1,608 | "channel_id" instead of "channel" in "files_upload_v2" code example | There is a mistake in the "files_upload_v2" method description. Using "channel_id" in the "files_upload_v2" code example raises an error:
`TypeError: slack_sdk.web.client.WebClient.files_completeUploadExternal() got multiple values for keyword argument 'channel_id'`
I replaced "channel_id" with "channel" in my code and it works.
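For anyone else hitting this, the call shape that works per the fix above (token, channel ID and filename are placeholders):
```python
from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # placeholder token
# pass `channel`, not `channel_id`: the latter collides with the keyword argument
# that files_upload_v2 forwards to files_completeUploadExternal internally
client.files_upload_v2(channel="C0123456789", file="./report.pdf", title="report")
```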
### The page URLs
- https://tools.slack.dev/python-slack-sdk/web/
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.

| closed | 2024-12-03T21:33:15Z | 2024-12-05T03:56:48Z | https://github.com/slackapi/python-slack-sdk/issues/1608 | [
"bug",
"docs",
"web-client",
"Version: 3x"
] | wefi-nick | 1 |
donnemartin/system-design-primer | python | 322 | Help with MQs | Could someone please provide an explanation of the following?
Redis is useful as a simple message broker but messages can be lost. (Why lost?)
Amazon SQS is hosted but can have high latency and has the possibility of messages being delivered twice. (Why possibly delivered twice?) | closed | 2019-09-11T22:16:59Z | 2019-09-20T20:55:05Z | https://github.com/donnemartin/system-design-primer/issues/322 | [] | Peppershaker | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 466 | Inconsistency in default parameters of Triplet Loss and Triplet Loss miner | Hi,
In `TripletMarginLoss` you have default margin set to **0.05**:
```
class TripletMarginLoss(BaseMetricLossFunction):
"""
Args:
margin: The desired difference between the anchor-positive distance and the
anchor-negative distance.
swap: Use the positive-negative distance instead of anchor-negative distance,
if it violates the margin more.
smooth_loss: Use the log-exp version of the triplet loss
"""
def __init__(
self,
margin=0.05,
swap=False,
smooth_loss=False,
triplets_per_anchor="all",
**kwargs
):
```
While in `TripletMarginMiner` it is **0.2**
```
class TripletMarginMiner(BaseTupleMiner):
"""
Returns triplets that violate the margin
Args:
margin
type_of_triplets: options are "all", "hard", or "semihard".
"all" means all triplets that violate the margin
"hard" is a subset of "all", but the negative is closer to the anchor than the positive
"semihard" is a subset of "all", but the negative is further from the anchor than the positive
"easy" is all triplets that are not in "all"
"""
def __init__(self, margin=0.2, type_of_triplets="all", **kwargs):
super().__init__(**kwargs)
self.margin = margin
self.type_of_triplets = type_of_triplets
self.add_to_recordable_attributes(list_of_names=["margin"], is_stat=False)
self.add_to_recordable_attributes(
list_of_names=["avg_triplet_margin", "pos_pair_dist", "neg_pair_dist"],
is_stat=True,
)
```
I think it's better to set them to some common value, because I guess most people want to use them out of the box.
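In the meantime, a trivial way to avoid the mismatch is to pin the same margin explicitly on both sides (0.2 is an arbitrary shared choice):
```python
from pytorch_metric_learning.losses import TripletMarginLoss
from pytorch_metric_learning.miners import TripletMarginMiner

margin = 0.2  # arbitrary, but shared between loss and miner
loss_func = TripletMarginLoss(margin=margin)
miner = TripletMarginMiner(margin=margin, type_of_triplets="semihard")
```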
| open | 2022-05-01T12:19:53Z | 2022-05-04T15:08:35Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/466 | [
"enhancement"
] | Sorrow321 | 2 |
dpgaspar/Flask-AppBuilder | flask | 1,617 | Back Button Problem | I have the following problem: I created my own templates, but FAB does not redirect correctly. When clicking the back button, FAB does not redirect to the last template if it is a template created by me. | closed | 2021-04-21T21:16:38Z | 2022-04-17T16:24:30Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1617 | [
"question",
"stale"
] | matiasjavierlucero | 5 |
jina-ai/serve | fastapi | 5,804 | chore: draft release note 3.15.0 | # Release Note
This release contains 6 new features, 6 bug fixes and 5 documentation improvements.
## 🆕 Features
### HTTP and composite protocols for Deployment (#5764)
When using a `Deployment` to serve a single Executor, you can now expose it via `HTTP` or a combination of `HTTP` and `gRPC` protocols:
```python
from jina import Deployment, Executor, requests
class MyExec(Executor):
@requests(on='/bar')
def bar(self, docs, **kwargs):
pass
dep = Deployment(protocol=['http', 'grpc'], port=[12345, 12346], uses=MyExec)
with dep:
dep.block()
```
With this, you can also access the OpenAPI schema in `localhost:12345/docs`:

### Force network mode option (#5789)
When using a containerized Executor inside a Deployment or as part of a Flow, under some circumstances you may want to force the network mode to make sure the container is reachable by the Flow or Deployment to ensure readiness. This ensures that the [Docker Python SDK](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.ContainerCollection.run) runs the container with the relevant options.
For this, we have added the argument `force_network_mode` to enable this.
You can set this argument to any of these options:
- `AUTO`: Automatically detect the Docker network.
- `HOST`: Use the host network.
- `BRIDGE`: Use a user-defined bridge network.
- `NONE`: Use `None` as the network.
```python
from jina import Deployment
dep = Deployment(uses='jinaai+docker://TransformerTorchEncoder', force_network_mode='None')
with dep:
dep.block()
```
### Allow disabling thread lock (#5771)
When an Executor exposes a synchronous method (not a coroutine) and exposes this method via the `@requests` decorator, Jina makes sure that each request received is run in a thread.
This thread is however locked with a `threading.Lock` object to protect the user from potential hazards of multithreading while leaving the Executor free to respond to health checks coming from the outside or from orchestrator frameworks such as Kubernetes. This lock can be bypassed if the `allow_concurrent` argument is passed to the Executor.
```python
from jina import Deployment, Executor, requests
class MyExec(Executor):
@requests(on='/bar')
def bar(self, docs, **kwargs):
pass
dep = Deployment(allow_concurrent=True, uses=MyExec)
with dep:
dep.block()
```
### `grpc_channel_options` for custom gRPC options for the channel (#5765)
You can now pass `grpc_channel_options` to allow granular tuning of the gRPC connectivity from the Client or Gateway. You can check the options in [gRPC Python documentation](https://grpc.github.io/grpc/python/glossary.html#term-channel_arguments)
```python
from jina import Client

client = Client(grpc_channel_options={'grpc.max_send_message_length': -1})
```
### Create Deployments from the CLI (#5756)
Now you can use the Jina CLI to create a starter project for deploying a single `Deployment`, in the same way it was already possible for a `Flow`.
Now the `jina new` command accepts a new `type` argument that can be `flow` or `deployment`.
```shell
jina new hello-world --type flow
jina new hello-world --type deployment
```
### Add `replicas` argument to Gateway for Kubernetes (#5711)
To scale the Gateway in Kubernetes or in JCloud, you can now add the `replicas` argument to the `gateway`.
```python
from jina import Flow
f = Flow().config_gateway(replicas=3).add()
f.to_kubernetes_yaml('./k8s_yaml_path')
```
```YAML
jtype: Flow
version: '1'
with: {}
gateway:
replicas: 3
executors:
- name: executor0
```
## 🐞 Bug Fixes
### Retry client gRPC stream and unary RPC methods (#5733)
The retry mechanism parameters were not properly respected by the `Client` in prior releases. This is now fixed and will improve the robustness against transient errors.
```python
from jina import Client, DocumentArray
Client(host='...').post(
on='/',
inputs=DocumentArray.empty(),
max_attempts=100,
)
```
### Allow HTTP timeout (#5797)
When using the `Client` to send data to an HTTP service, the connection timed out after five minutes (the [default setting for aiohttp](https://docs.aiohttp.org/en/latest/client_quickstart.html#timeouts)). This can now be edited for cases where a request may take longer, thus avoiding the Client disconnecting after a longer period.
```python
from jina import Client, DocumentArray
Client(protocol='http').post(
on='/',
inputs=DocumentArray.empty(),
timeout=600,
)
```
### Enable root logging at all times (#5736)
The `JINA_LOG_LEVEL` environment variable controls the log level of the JinaLogger. Previously, debug logging of other dependencies was not respected; now it can be enabled:
```python
import logging

logging.getLogger('urllib3').setLevel(logging.DEBUG)
```
### Fix Gateway tensor serialization (#5752)
In prior releases, when an HTTP Gateway was run without `torch` installed and connected to an Executor returning `torch.Tensor` as part of the Documents, the Gateway couldn't serialize the Documents back to the Client, leading to a `no module torch` error. This is now fixed and works without installing `torch` in the Gateway container or system.
```python
from jina import Flow, Executor, Document, DocumentArray, requests
import torch
class DummyTorchExecutor(Executor):
@requests
def foo(self, docs: DocumentArray, **kwargs):
for d in docs:
d.embedding = torch.rand(1000)
d.tensor = torch.rand(1000)
```
```python
from jina import Flow, Executor, Document, DocumentArray, requests
flow = Flow().config_gateway(port=12346, protocol='http').add(port='12345', external=True)
with flow:
docs = flow.post(on='/', inputs=Document())
print(docs[0].embedding.shape)
print(docs[0].tensor.shape)
```
### Composite Gateway tracing (#5741)
Previously, tracing didn't work for Gateways that exposed multiple protocols:
```python
from jina import Flow
f = Flow(port=[12345, 12346], protocol=['http', 'grpc'], tracing=True).add()
with f:
f.block()
```
### Adapt to DocArray v2 (#5742)
Jina depends on [DocArray](https://github.com/docarray/docarray)'s data structures. This version adds support for DocArray v2's upcoming major changes.
This involves naming conventions:
- `DocumentArray` :arrow_right: `DocList`
- `BaseDocument` :arrow_right: ` BaseDoc`
```python
from jina import Deployment, Executor, requests
from docarray import DocList, BaseDoc
from docarray.documents import ImageDoc
from docarray.typing import AnyTensor
import numpy as np
class InputDoc(BaseDoc):
img: ImageDoc
class OutputDoc(BaseDoc):
embedding: AnyTensor
class MyExec(Executor):
@requests(on='/bar')
def bar(
self, docs: DocList[InputDoc], **kwargs
    ) -> DocList[OutputDoc]:
docs_return = DocList[OutputDoc](
[OutputDoc(embedding=np.zeros((100, 1))) for _ in range(len(docs))]
)
return docs_return
with Deployment(uses=MyExec) as dep:
docs = dep.post(
on='/bar',
inputs=InputDoc(img=ImageDoc(tensor=np.zeros((3, 224, 224)))),
return_type=DocList[OutputDoc],
)
assert docs[0].embedding.shape == (100, 1)
assert docs.__class__.document_type == OutputDoc
```
## 📗 Documentation improvements
- JCloud Flow name customization (#5778)
- JCloud docs revamp for instance (#5759)
- Fix Colab link (#5760)
- Remove docsQA (#5743)
- Misc polishing
## 🤟 Contributors
We would like to thank all contributors to this release:
- Girish Chandrashekar (@girishc13)
- Asuzu Kosisochukwu (@asuzukosi)
- AlaeddineAbdessalem (@alaeddine-13)
- Zac Li (@zac-li)
- nikitashrivastava29 (@nikitashrivastava29)
- samsja (@samsja)
- Alex Cureton-Griffiths (@alexcg1)
- Joan Fontanals (@JoanFM)
- Deepankar Mahapatro (@deepankarm) | closed | 2023-04-13T13:39:52Z | 2023-04-14T09:59:41Z | https://github.com/jina-ai/serve/issues/5804 | [] | alaeddine-13 | 0 |
simple-login/app | flask | 1,521 | The extension icon is not displaying properly | I'm using Simple Login's extension on Firefox and came across this site: [https://fel.cvut.cz/cs](https://fel.cvut.cz/cs). If you scroll to the bottom, you can see there are 2 "layers". The "email box" is on the second layer, therefore the icon is being wrongly displayed on the top layer.
It's not breaking anything, just a styling issue. | open | 2023-01-09T16:57:51Z | 2023-01-12T14:47:56Z | https://github.com/simple-login/app/issues/1521 | [] | Nextross | 1 |
taverntesting/tavern | pytest | 809 | Look at removing pypy tests | Seeing as the pypy tests take a long time to run, and I doubt many people use pypy to run Tavern, since it goes multiple times slower than normal Python for some reason | closed | 2022-10-01T13:01:42Z | 2023-01-09T15:47:48Z | https://github.com/taverntesting/tavern/issues/809 | [] | michaelboulton | 1 |
rthalley/dnspython | asyncio | 207 | to_unicode() error with Python3 and 1.14.0 | Following test case:
```
import dns.name
name = dns.name.from_text('.')
result = name.to_unicode()
assert result == '.'
```
- Python3 and dnspython 1.14.0: FAIL
- Python3 and dnspython3 1.12.0: SUCCESS
- Python2 and dnspython 1.14.0: SUCCESS
As far as I can tell, the problem is the following line in _dns/name.py_, in the **Name.to_unicode()** method.
```
if len(self.labels) == 1 and self.labels[0] == '':
return u'.'
```
self.labels[0] is b'' not '' (i.e. a byte-string not a Unicode string).
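A minimal sketch of the fix I would expect (comparing against the byte-string; untested against the full test suite):
```python
if len(self.labels) == 1 and self.labels[0] == b'':
    return u'.'
```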
| closed | 2016-09-27T09:15:28Z | 2016-09-27T12:14:00Z | https://github.com/rthalley/dnspython/issues/207 | [
"Bug",
"Fixed"
] | omarkohl | 1 |
BoltzmannEntropy/xtts2-ui | streamlit | 14 | How to run on CPU or ROCm? | I'm trying to run this on a Debian VM either in CPU mode or with ROCm, and can't get it together. Has it been successfully run without an Nvidia GPU?
Any tips appreciated.
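For what it's worth, a device-fallback sketch around the line the traceback below points at (`app.py` line 38); `TTS.api.TTS` is my assumption about the import the app uses:
```python
import torch
from TTS.api import TTS  # assumption about the import used by app.py

# fall back to CPU when torch reports no usable CUDA/ROCm device
device = "cuda" if torch.cuda.is_available() else "cpu"
tts = TTS(model_name="tts_models/multilingual/multi-dataset/xtts_v2").to(device)
```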
```
2024-01-02 13:49:34.802 Uncaught app exception
Traceback (most recent call last):
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
exec(code, module.__dict__)
File "/home/sij/TTS/xtts2-ui/app.py", line 38, in <module>
tts = TTS(model_name=params["model_name"]).to(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1160, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 833, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
torch._C._cuda_init()
RuntimeError: No HIP GPUs are available
2024-01-02 13:49:35.232 Uncaught app exception
Traceback (most recent call last):
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
exec(code, module.__dict__)
File "/home/sij/TTS/xtts2-ui/app.py", line 38, in <module>
tts = TTS(model_name=params["model_name"]).to(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1160, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 833, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
torch._C._cuda_init()
RuntimeError: No HIP GPUs are available
/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/streamlit/watcher/local_sources_watcher.py:177: UserWarning: Torchaudio's I/O functions now support par-call bakcend dispatch. Importing backend implementation directly is no longer guaranteed to work. Please use `backend` keyword with load/save/info function, instead of calling the udnerlying implementation directly.
lambda m: [p for p in m.__path__._path],
2024-01-02 13:49:35.521 Uncaught app exception
Traceback (most recent call last):
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
exec(code, module.__dict__)
File "/home/sij/TTS/xtts2-ui/app.py", line 38, in <module>
tts = TTS(model_name=params["model_name"]).to(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1160, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 833, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
torch._C._cuda_init()
RuntimeError: No HIP GPUs are available
2024-01-02 14:00:02.809 Session with id 12d05f5a-ee59-4c2f-aa58-c4b9b5d103d4 is already connected! Connecting to a new session.
> tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
> Using model: xtts
2024-01-02 14:00:22.714 Uncaught app exception
Traceback (most recent call last):
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
exec(code, module.__dict__)
File "/home/sij/TTS/xtts2-ui/app.py", line 38, in <module>
tts = TTS(model_name=params["model_name"]).to(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1160, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 833, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sij/TTS/xtts2-ui/venv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
torch._C._cuda_init()
RuntimeError: No HIP GPUs are available
``` | closed | 2024-01-02T22:24:32Z | 2024-01-03T11:00:04Z | https://github.com/BoltzmannEntropy/xtts2-ui/issues/14 | [] | sij-ai | 1 |
mwaskom/seaborn | matplotlib | 2,785 | histplot stat=count does not count all data points |
```python
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np

sns.set(style="whitegrid")

data_a = [1, 2, 3]
data_b = [2.4, 2.5, 2.6]

sns.histplot(np.array(data_a), color="red", binwidth=0.01, stat="count")
sns.histplot(np.array(data_b), color="blue", binwidth=0.01, stat="count")
plt.savefig("output.png")
```
This produces https://i.stack.imgur.com/TM6al.png
The data point 2.6 is omitted in the output produced by histplot.
The problem also exists if the first sns.histplot command is removed.
Interestingly, it has been pointed out to me that the following command works:
`sns.histplot([data_a, data_b], palette=['red', 'blue'], binwidth=0.01, stat="count")`
but as I said, the single command
`sns.histplot(np.array(data_b), color="blue", binwidth=0.01, stat="count")`
also does not work.
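A possible workaround (untested sketch): pass explicit bin edges instead of `binwidth`, padding the last edge so that 2.6 falls strictly inside the final bin:
```python
import numpy as np
import seaborn as sns

data_b = [2.4, 2.5, 2.6]
eps = 1e-9  # nudge the upper edge past the floating-point cutoff
bins = np.arange(min(data_b), max(data_b) + 0.01 + eps, 0.01)
sns.histplot(np.array(data_b), color="blue", bins=bins, stat="count")
```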
| closed | 2022-04-28T08:18:26Z | 2022-05-18T10:47:26Z | https://github.com/mwaskom/seaborn/issues/2785 | [
"bug",
"mod:distributions"
] | cmayer | 1 |
giotto-ai/giotto-tda | scikit-learn | 327 | Support order=np.inf in Silhouettes | #### Description
According to the theory page and the docstring of `Silhouette`, it should be possible to pass `order=np.inf` to `Silhouette`. In that case, the silhouette should be \lambda_i, with i = argmax_i (d_i - b_i). Currently, the code produces NaNs for orders 'as small as' 1000.
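A small numpy illustration of both points, the overflow at large finite orders and the argmax limit (the `d` and `b` values are made up):
```python
import numpy as np

d = np.array([2.0, 5.0, 3.0])  # death times (made-up)
b = np.array([1.0, 1.0, 2.0])  # birth times (made-up)
w = d - b

p = 1000.0
weights = w ** p / np.sum(w ** p)  # overflows to inf/inf, producing NaNs
print(weights)

# the p -> inf limit keeps only i = argmax(d_i - b_i)
limit = np.zeros_like(w)
limit[np.argmax(w)] = 1.0
print(limit)
```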
 | closed | 2020-02-24T16:34:14Z | 2020-02-26T14:46:58Z | https://github.com/giotto-ai/giotto-tda/issues/327 | [
"bug",
"enhancement"
] | wreise | 2 |
snarfed/granary | rest-api | 228 | Meetup: Add support for publishing events | (Just to capture this as a feature request, as I thought we already had something for it)
I'll be looking at working on this soon. | open | 2021-01-17T17:54:26Z | 2022-10-18T22:44:10Z | https://github.com/snarfed/granary/issues/228 | [] | jamietanna | 1 |
dgtlmoon/changedetection.io | web-scraping | 2,285 | Browserless Timeout 60000ms exceeded after running for 2 days | **Describe the bug**
After about 2 days of running the browserless container, the browser checks start failing with Timeout 60000ms exceeded for every site with Playwright.
**Version**
0.45.16 on linux/docker
**To Reproduce**
Run and leave the browserless container running for at least 2 days.
Steps to reproduce the behavior:
1. Start the changedetection and browserless containers
2. Let them run for at least 2 days
3. The Chrome checks start failing with timeout errors
**Expected behavior**
The checks shouldn't error
**Screenshots**




**Desktop (please complete the following information):**
- Device: Synology DS224+
- OS: Synology DSM 7.2
- Version 7.2.1-69057 Update 4
**Additional context**
Current memory usage is at 60% so it's not a memory issue. The temporary workaround is to restart the browserless container and the checks work again.
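As a stopgap, a sketch that simply automates the manual restart via the Docker SDK for Python (the container name matches the compose file below; this is a workaround, not a fix):
```python
import time
import docker

client = docker.from_env()
while True:
    time.sleep(24 * 60 * 60)  # daily, comfortably inside the ~2-day failure window
    client.containers.get("playwright-chrome").restart()
```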
Browserless logs:
```
2024-04-01T19:18:34.032Z browserless:server 3XK7JPGHF6PCBGCLRS0M7DZLHNW3X4G1: Recording timedout stat.
2024-04-01T19:18:34.032Z browserless:job 3XK7JPGHF6PCBGCLRS0M7DZLHNW3X4G1: Job has timed-out, closing the WebSocket.
2024-04-01T19:18:34.062Z browserless:job 3XK7JPGHF6PCBGCLRS0M7DZLHNW3X4G1: Cleaning up job
2024-04-01T19:18:34.062Z browserless:job 3XK7JPGHF6PCBGCLRS0M7DZLHNW3X4G1: No browser to cleanup, exiting
2024-04-01T19:18:34.062Z browserless:server Current workload complete.
2024-04-01T19:18:34.104Z browserless:job 3XK7JPGHF6PCBGCLRS0M7DZLHNW3X4G1: Websocket closed early, removing from queue and closing.
2024-04-01T19:18:34.104Z browserless:job 3XK7JPGHF6PCBGCLRS0M7DZLHNW3X4G1: Removing job from queue and cleaning up.
2024-04-01T19:18:34.104Z browserless:job 3XK7JPGHF6PCBGCLRS0M7DZLHNW3X4G1: Cleaning up job
2024-04-01T19:18:34.104Z browserless:job 3XK7JPGHF6PCBGCLRS0M7DZLHNW3X4G1: No browser to cleanup, exiting
2024-04-01T19:19:24.784Z browserless:server Health check stats: CPU 8%,6% MEM: 61%,61%
2024-04-01T19:19:24.784Z browserless:server Current period usage: {"date":1711998864804,"error":0,"rejected":0,"successful":0,"timedout":1,"totalTime":300030,"maxTime":300030,"minTime":300030,"meanTime":300030,"maxConcurrent":0,"units":11}
2024-04-01T19:24:24.784Z browserless:server Health check stats: CPU 9%,8% MEM: 61%,61%
2024-04-01T19:24:24.784Z browserless:server Current period usage: {"date":1711999164784,"error":0,"rejected":0,"successful":0,"timedout":0,"totalTime":0,"maxTime":0,"minTime":0,"meanTime":0,"maxConcurrent":0,"units":0}
2024-04-01T19:29:24.803Z browserless:server Health check stats: CPU 8%,9% MEM: 63%,61%
2024-04-01T19:29:24.803Z browserless:server Current period usage: {"date":1711999464784,"error":0,"rejected":0,"successful":0,"timedout":0,"totalTime":0,"maxTime":0,"minTime":0,"meanTime":0,"maxConcurrent":0,"units":0}
2024-04-01T19:33:14.736Z browserless:job 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: /: Inbound WebSocket request.
2024-04-01T19:33:14.824Z browserless:hardware Checking overload status: CPU 8% Memory 64%
2024-04-01T19:33:14.824Z browserless:job 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: Adding new job to queue.
2024-04-01T19:33:14.825Z browserless:server Starting new job
2024-04-01T19:33:14.825Z browserless:system Waiting pre-booted chrome instance
2024-04-01T19:33:14.825Z browserless:job 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: Getting browser.
2024-04-01T19:34:24.827Z browserless:server Health check stats: CPU 7%,8% MEM: 63%,63%
2024-04-01T19:34:24.827Z browserless:server Current period usage: {"date":1711999764803,"error":0,"rejected":0,"successful":0,"timedout":0,"totalTime":0,"maxTime":0,"minTime":0,"meanTime":0,"maxConcurrent":1,"units":0}
2024-04-01T19:38:14.825Z browserless:server 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: Recording timedout stat.
2024-04-01T19:38:14.825Z browserless:job 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: Job has timed-out, closing the WebSocket.
2024-04-01T19:38:14.854Z browserless:job 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: Cleaning up job
2024-04-01T19:38:14.854Z browserless:job 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: No browser to cleanup, exiting
2024-04-01T19:38:14.854Z browserless:server Current workload complete.
2024-04-01T19:38:14.896Z browserless:job 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: Websocket closed early, removing from queue and closing.
2024-04-01T19:38:14.896Z browserless:job 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: Removing job from queue and cleaning up.
2024-04-01T19:38:14.896Z browserless:job 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: Cleaning up job
2024-04-01T19:38:14.896Z browserless:job 98RTW57TRD35EPIAGRTH29S3UW36P7NZ: No browser to cleanup, exiting
2024-04-01T19:39:24.945Z browserless:server Health check stats: CPU 8%,7% MEM: 66%,63%
2024-04-01T19:39:24.945Z browserless:server Current period usage: {"date":1712000064827,"error":0,"rejected":0,"successful":0,"timedout":1,"totalTime":300001,"maxTime":300001,"minTime":300001,"meanTime":300001,"maxConcurrent":0,"units":11}
2024-04-01T19:44:24.893Z browserless:server Health check stats: CPU 9%,8% MEM: 66%,66%
2024-04-01T19:44:24.893Z browserless:server Current period usage: {"date":1712000364945,"error":0,"rejected":0,"successful":0,"timedout":0,"totalTime":0,"maxTime":0,"minTime":0,"meanTime":0,"maxConcurrent":0,"units":0}
```
Docker-compose:
```
version: "3.2"
name: "change-detection"
services:
#https://github.com/dgtlmoon/changedetection.io/wiki/Synology-NAS-setup
changedetection:
image: dgtlmoon/changedetection.io:latest
container_name: changedetection
hostname: changedetection
mem_limit: 1g
cpu_shares: 768
security_opt:
- no-new-privileges:true
volumes:
- /volume1/docker/changedetection:/datastore:rw
# Configurable proxy list support, see https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration#proxy-list-support
# - ./proxies.json:/datastore/proxies.json
environment:
- PUID=1027
- PGID=65536
- TZ=America/Phoenix
- UMASK=002
#- WEBDRIVER_URL=http://localhost:4444/wd/hub
#- PLAYWRIGHT_DRIVER_URL=ws://192.168.0.143:3000/
- PLAYWRIGHT_DRIVER_URL=ws://playwright-chrome:3000
- HIDE_REFERER=true
ports:
- 5555:5000
# links:
# - browser-chrome
restart: always
# networks:
# - changedetection
depends_on:
playwright-chrome:
condition: service_started
logging:
driver: json-file
options:
max-file: 10
max-size: 10m
#https://github.com/dgtlmoon/changedetection.io/wiki/Synology-NAS-setup
playwright-chrome:
container_name: playwright-chrome
hostname: playwright-chrome
image: browserless/chrome:1.60-chrome-stable
#image: browserless/chrome:latest #trying latest to fix the timeout errors
restart: always
mem_limit: 4g
# networks:
# - changedetection
ports:
- 3000:3000
environment:
- PUID=1027
- PGID=65536
- TZ=America/Phoenix
- UMASK=002
- SCREEN_WIDTH=1920
- SCREEN_HEIGHT=1024
- SCREEN_DEPTH=16
- ENABLE_DEBUGGER=false
- PREBOOT_CHROME=true
- KEEP_ALIVE=true
- CONNECTION_TIMEOUT=300000
- MAX_CONCURRENT_SESSIONS=4
- CHROME_REFRESH_TIME=600000
#https://github.com/dgtlmoon/changedetection.io/issues/1536
- DEFAULT_BLOCK_ADS=false
- DEFAULT_STEALTH=true
#Use incognito mode
#- FUNCTION_ENABLE_INCOGNITO_MODE=true
# #Ignore HTTPS errors, like for self-signed certs
- DEFAULT_IGNORE_HTTPS_ERRORS=true
#browserless routinely checks on the health of the image as it's running. Sometimes it's helpful to have it restart automatically when CPU or Memory usage are above 100% for a period of time (by default 5 minutes). In order for browserless to restart on health-failure checks, you'll have to set a parameter of
#EXIT_ON_HEALTH_FAILURE=true.
#dns:
# - 192.168.0.45
logging:
driver: json-file
options:
max-file: 10
max-size: 10m
# playwright-chrome:
# container_name: playwright-chrome
# hostname: playwright-chrome
# image: dgtlmoon/sockpuppetbrowser:latest
# cap_add:
# - SYS_ADMIN
# # SYS_ADMIN might be too much, but it can be needed on your platform https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md#running-puppeteer-on-gitlabci
# restart: unless-stopped
# environment:
# - PUID=1027
# - PGID=65536
# - TZ=America/Phoenix
# - SCREEN_WIDTH=1920
# - SCREEN_HEIGHT=1024
# - SCREEN_DEPTH=16
# - MAX_CONCURRENT_CHROME_PROCESSES=10
# logging:
# driver: json-file
# options:
# max-file: 10
# max-size: 10m
volumes:
changedetection:
``` | closed | 2024-04-01T19:51:10Z | 2024-04-04T09:00:56Z | https://github.com/dgtlmoon/changedetection.io/issues/2285 | [
"triage"
] | mat926 | 3 |
snarfed/granary | rest-api | 860 | XMPP PubSub | Add support for publications that are hosted on XMPP PubSub nodes.
I have written a Python script that does this task.
AOX2HTML - Atom Over XMPP To HTML
- https://portal.mozz.us/gemini/woodpeckersnest.space/~schapps/projects/aox2html.gmi
- gemini://woodpeckersnest.space/~schapps/projects/aox2html.gmi
See also:
Rivista XJP
- https://schapps.woodpeckersnest.eu/rivista/
- https://video.xmpp-it.net/w/vNqcMooy3pqRAZ8Yb8grr1
[Dillo-dev] Re: XMPP PubSub Plugin
- https://lists.mailman3.com/hyperkitty/list/dillo-dev@mailman3.com/message/AZVAIYSQG24GCNAXSSLN24KOT7HNJUJU/ | open | 2025-01-08T10:57:39Z | 2025-01-08T18:15:03Z | https://github.com/snarfed/granary/issues/860 | [] | sjehuda | 2 |
chaos-genius/chaos_genius | data-visualization | 560 | Limit default third party connectors to streamline installation | closed | 2022-01-03T03:42:35Z | 2022-02-08T11:54:06Z | https://github.com/chaos-genius/chaos_genius/issues/560 | [
"🛠️ backend",
"P2"
] | suranah | 1 |
hankcs/HanLP | nlp | 1106 | Custom dictionary has no effect | <!--
The notes and version number are required, otherwise there will be no reply. If you want a quick reply, please fill in the template carefully. Thanks for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
 - [Home page documentation](https://github.com/hankcs/HanLP)
 - [wiki](https://github.com/hankcs/HanLP/wiki)
 - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that the open-source community is a free community gathered out of interest, and bears no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I type x inside these brackets to confirm the items above
## Version
<!-- For release builds, give the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is: 1.7.2
The version I am using is: 1.7.2
<!-- The items above are required; feel free below -->
## My problem

<!-- Please describe the problem in detail; the more detail, the more likely it gets solved -->
## Reproducing the problem
<!-- What did you do to cause the problem? E.g., did you modify the code? The dictionary or a model? -->
### Steps
1. First...
2. Then...
3. Next...
### Triggering code
```java
public static void main(String[] args)
{
    // Add words dynamically
    CustomDictionary.add("攻城狮");
    CustomDictionary.add("娶白富美");
    // Force insert
    CustomDictionary.insert("娶白富美", "nz 1024");
    // Remove a word (try commenting this out)
    CustomDictionary.remove("迎娶");
    System.out.println(CustomDictionary.add("单身狗", "nz 1024 n 1"));
    System.out.println(CustomDictionary.get("单身狗"));
    String text = "攻城狮逆袭单身狗,迎娶白富美,走上人生巅峰"; // As if! Haha!
    // Scan the text for custom words with the AhoCorasickDoubleArrayTrie automaton
    final char[] charArray = text.toCharArray();
    CustomDictionary.parseText(charArray, new AhoCorasickDoubleArrayTrie.IHit<CoreDictionary.Attribute>()
    {
        @Override
        public void hit(int begin, int end, CoreDictionary.Attribute value)
        {
            System.out.printf("[%d:%d]=%s %s\n", begin, end, new String(charArray, begin, end - begin), value);
        }
    });
    // The custom dictionary takes effect in all segmenters
    System.out.println(HanLP.segment(text));
}
```
### Expected output
<!-- What correct result do you expect? -->
```
Expected output
true
nz 1024 n 1
[0:3]=攻城狮 nz 1
[5:8]=单身狗 nz 1024 n 1
[10:14]=娶白富美 nz 1024
[11:14]=白富美 nr 33
[攻城狮/nz, 逆袭/nz, 单身狗/nz, ,/w, 迎/v, 娶白富美/nr, ,/w, 走上/v, 人生/n, 巅峰/n]
```
### Actual output
<!-- What did HanLP actually output? What happened? Where is it wrong? -->
```
Actual output
true
nz 1024 n 1
[0:3]=攻城狮 nz 1
[5:8]=单身狗 nz 1024 n 1
[10:14]=娶白富美 nz 1024
[11:14]=白富美 nr 33
[攻城狮/nz, 逆袭/nz, 单身狗/nz, ,/w, 迎娶/v, 白富美/nr, ,/w, 走上/v, 人生/n, 巅峰/n]
```
## Other information
<!-- Any possibly useful information, including screenshots, logs, config files, related issues, etc. -->
| closed | 2019-03-05T03:36:46Z | 2019-03-28T14:05:51Z | https://github.com/hankcs/HanLP/issues/1106 | [] | li1996heng | 3 |
FlareSolverr/FlareSolverr | api | 370 | [1337x] (testing) Exception (1337x): Challenge detected but FlareSolverr is not configured: Challenge detected but FlareSolverr is not configured | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-04-20T19:05:25Z | 2022-04-21T22:39:33Z | https://github.com/FlareSolverr/FlareSolverr/issues/370 | [
"invalid"
] | ghuman1623 | 1 |
waditu/tushare | pandas | 1,504 | Fund splits need forward- and backward-adjustment factors | The SSE Composite Index ETF 510210.SH did a 1-for-5 split; after the split the data has no adjustment factors, which makes analysis inconvenient. | open | 2021-01-26T22:45:58Z | 2021-01-26T22:45:58Z | https://github.com/waditu/tushare/issues/1504 | [] | skymeteor | 0
nerfstudio-project/nerfstudio | computer-vision | 2,977 | Exported splatfacto cropped | It would be nice if the already-cropped model could be exported to a PLY file, and it would also be great if the Gaussians could be cleaned up.
| open | 2024-03-01T15:48:04Z | 2024-03-01T15:48:04Z | https://github.com/nerfstudio-project/nerfstudio/issues/2977 | [] | Alecapra96 | 0 |
nerfstudio-project/nerfstudio | computer-vision | 3,089 | ValueError: Invalid shape for means3d: torch.Size([0, 3]) | **Describe the bug**
Trained splatfacto until step 600, at which point it reports the error below.
```
Traceback (most recent call last):
File "/nerfstudio/scripts/train.py", line 294, in <module>
entrypoint()
File "/nerfstudio/scripts/train.py", line 285, in entrypoint
main(
File "/nerfstudio/scripts/train.py", line 270, in main
launch(
File "/nerfstudio/scripts/train.py", line 199, in launch
main_func(local_rank=0, world_size=world_size, config=config)
File "/nerfstudio/scripts/train.py", line 106, in train_loop
trainer.train()
File "/nerfstudio/engine/trainer.py", line 273, in train
loss, loss_dict, metrics_dict = self.train_iteration(step)
File "/nerfstudio/utils/profiler.py", line 130, in inner
out = func(*args, **kwargs)
File "/nerfstudio/engine/trainer.py", line 527, in train_iteration
_, loss_dict, metrics_dict = self.pipeline.get_train_loss_dict(step=step)
File "/nerfstudio/utils/profiler.py", line 130, in inner
out = func(*args, **kwargs)
File "/nerfstudio/pipelines/base_pipeline.py", line 326, in get_train_loss_dict
model_outputs = self._model(
File "/opt/python/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/python/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/nerfstudio/models/base_model.py", line 153, in forward
return self.get_outputs(ray_bundle)
File "/nerfstudio/models/splatfacto.py", line 838, in get_outputs
self.xys, depths, self.radii, conics, comp, num_tiles_hit, cov3d = project_gaussians( # type: ignore
File "/opt/python/lib/python3.9/site-packages/gsplat/project_gaussians.py", line 61, in project_gaussians
return _ProjectGaussians.apply(
File "/opt/python/lib/python3.9/site-packages/torch/autograd/function.py", line 539, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/opt/python/lib/python3.9/site-packages/gsplat/project_gaussians.py", line 102, in forward
raise ValueError(f"Invalid shape for means3d: {means3d.shape}")
ValueError: Invalid shape for means3d: torch.Size([0, 3])
```
Default training parameters are used.
NerfStudio: 1.0.3
gsplat: 0.1.6
Thank you for the help! | open | 2024-04-18T04:06:29Z | 2024-05-02T21:43:12Z | https://github.com/nerfstudio-project/nerfstudio/issues/3089 | [] | chengluberkeley | 4 |
dask/dask | pandas | 11,296 | `2024.08.0` array slicing does not preserve masks | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
Since dask 2024.8.0, slicing a masked array does not preserve the mask.
**Minimal Complete Verifiable Example**:
```python
import numpy as np
import dask.array as da
arr = np.ma.array(range(8), mask=[0, 0, 1, 0, 0, 1, 0, 1])
darr = da.from_array(arr, chunks=(4, 4))
print(arr[[2, 6]])
print(darr[[2, 6]].compute())
```
Output with 2024.8.0:
```
[-- 6]
[2 6]
```
This works as expected with 2024.7.0:
```
[-- 6]
[-- 6]
```
**Anything else we need to know?**:
I suspect this line may be at least part of the problem
https://github.com/dask/dask/blob/1ba9c5bf9d0b005b89a04804f29c16e5f0f2dadf/dask/array/_shuffle.py#L190
`np.concatenate` does not preserve masks, although there is an open PR to change that: https://github.com/numpy/numpy/pull/22913. For masked arrays there is [np.ma.concatenate](https://numpy.org/doc/stable/reference/generated/numpy.ma.concatenate.html#numpy-ma-concatenate).
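A minimal illustration of the difference in plain NumPy, independent of dask:

```python
import numpy as np

a = np.ma.array([1, 2], mask=[0, 1])
b = np.ma.array([3, 4], mask=[1, 0])

print(np.concatenate([a, b]))     # mask is lost: [1 2 3 4]
print(np.ma.concatenate([a, b]))  # mask preserved: [1 -- -- 4]
```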
I have not been able to test any further because my internet connection is currently _flakey_ and my attempt to make a dev environment failed.
**Environment**:
- Dask version: 2024.8.0
- Python version: 3.12.5
- Operating System: Ubuntu
- Install method (conda, pip, source): conda
| closed | 2024-08-10T11:06:17Z | 2024-08-12T15:03:30Z | https://github.com/dask/dask/issues/11296 | [
"array",
"needs triage"
] | rcomer | 1 |
graphql-python/graphene | graphql | 643 | Fields with numbers in names do not capitalize correctly | I noticed that fields with names containing numbers (e.g. `field_i18n`) don't capitalize correctly.
For example:
`correct_field` becomes `correctField`, but `field_i18n` becomes `field_I18N` and `t1e2s3t` becomes `t1E2S3T`, which is obviously incorrect.
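The behaviour is easy to reproduce with plain string methods (`to_camel` below is a hypothetical helper mirroring what the converter does):

```python
def to_camel(s, capitalize_word):
    head, *rest = s.split("_")
    return head + "".join(capitalize_word(w) for w in rest)

print(to_camel("field_i18n", str.title))       # fieldI18N (wrong)
print(to_camel("field_i18n", str.capitalize))  # fieldI18n (expected)
print("t1e2s3t".title())                       # T1E2S3T
```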
This is caused by using the `.title` method in `str_converter.py` instead of `.capitalize`. The `title` method is meant for multi-word strings, but each segment of a field name is a single word. | closed | 2018-01-10T14:20:23Z | 2018-01-11T19:43:11Z | https://github.com/graphql-python/graphene/issues/643 | [] | frsv | 0
microsoft/unilm | nlp | 1,458 | Fine-tuning TextDiffuser2 Inpainting | First, I would like to thank you for this great work and for sharing it with the world.
I would like to fine-tune TextDiffuser2's inpainting mode, and I was wondering if you could help me with some questions:
1) Can I only fine-tune using **textdiffuser-2/train_textdiffuser2_inpainting_full.py**, not with LoRA?
2) How do I prepare a dataset for it? Can you give me an example, please?
3) Imagine I want to fine-tune it for better billboard text editing. How many samples should I start with?
4) What GPU do you recommend?
Thank you for your time!
| open | 2024-02-08T17:09:29Z | 2024-02-08T17:22:27Z | https://github.com/microsoft/unilm/issues/1458 | [] | nfrnunes | 0 |
tflearn/tflearn | data-science | 764 | More Examples, Please! | Will there be more examples like GANs covering the field of DL? | open | 2017-05-19T14:34:38Z | 2017-05-23T18:09:20Z | https://github.com/tflearn/tflearn/issues/764 | [] | noeagles | 1 |
thp/urlwatch | automation | 804 | Feature request: Extension of regex filtering to extract data | I couldn't find a way to use `re.sub` to extract data as a filter in urlwatch. Unless I have overlooked something and there is a way to do it, I would like to open a feature request.
Following #401, I would like to request an extension of the regex filtering. Instead of replacing matched strings, it would be useful to have a positive regex filter that extracts specific data. An example of this kind of filtering is even in the [documentation](https://urlwatch.readthedocs.io/en/latest/filters.html#using-a-shell-script-as-a-filter); however, there it is implemented using shellpipe and grep, which turned out to be a bit unstable for me.
```
url: https://example.net/shellpipe-grep.txt
filter:
- shellpipe: "grep -i -o 'price: <span>.*</span>'"
```
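For reference, the requested extraction maps directly onto Python's `re.findall` (hypothetical input string):

```python
import re

html = "foo price: <span>9.99</span> bar"  # hypothetical page content
print(re.findall(r"price: <span>(.*)</span>", html))  # ['9.99']
```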
IMO a better approach would be to have a filter like `re.findall`, which goes along with `re.sub`:
```
url: https://example.net/pricelist.html
filter:
- re.findall: 'price: <span>(.*)</span>'
``` | closed | 2024-03-07T02:44:10Z | 2024-07-30T04:07:48Z | https://github.com/thp/urlwatch/issues/804 | [] | f0sh | 7 |
home-assistant/core | asyncio | 140,475 | Scrape component not working after update to 2025.3.2 | ### The problem
After the upgrade to 2025.3.2, the scrape integration stopped working because `bs4.Tag` is no longer found. After downgrading to 2025.3.1, it started working again.
```
File "/usr/local/lib/python3.13/site-packages/soupsieve/css_match.py", line 98, in is_tag
return isinstance(obj, bs4.Tag)
^^^^^^^
AttributeError: module 'bs4' has no attribute 'Tag'
```
### What version of Home Assistant Core has the issue?
2025.3.2
### What was the last working version of Home Assistant Core?
2025.3.1
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
scrape
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/scrape/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
- resource: "http://192.168.178.53/?s=1,1"
sensor:
- name: WP_Leistung
select: "#content > div:nth-child(1) > table > tr:nth-child(22) > td.value.round-rightbottom"
unit_of_measurement: "kW"
value_template: '{{ value.replace("kW", "") | replace(",",".") | float }}'
```
### Anything in the logs that might be useful for us?
```txt
2025-03-12 20:29:23.717 DEBUG (MainThread) [homeassistant.components.scrape.coordinator] Finished fetching Scrape Coordinator data in 6.592 seconds (success: True)
2025-03-12 20:29:23.718 ERROR (MainThread) [homeassistant.components.sensor] Error adding entity sensor.web_scrape for domain sensor with platform scrape
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 633, in _async_add_entities
await coro
File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 972, in _async_add_entity
await entity.add_to_platform_finish()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 1383, in add_to_platform_finish
await self.async_added_to_hass()
File "/usr/src/homeassistant/homeassistant/components/scrape/sensor.py", line 205, in async_added_to_hass
self._async_update_from_rest_data()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/usr/src/homeassistant/homeassistant/components/scrape/sensor.py", line 210, in _async_update_from_rest_data
value = self._extract_value()
File "/usr/src/homeassistant/homeassistant/components/scrape/sensor.py", line 184, in _extract_value
tag = raw_data.select(self._select)[self._index]
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/bs4/element.py", line 2116, in select
return self.css.select(selector, namespaces, limit, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/bs4/css.py", line 193, in select
"""
return self.api.iselect(
^^^^^^^^^^^^^^^^^^^^^^^^
select, self.tag, self._ns(namespaces, select), limit, flags, **kwargs
^
File "/usr/local/lib/python3.13/site-packages/soupsieve/__init__.py", line 147, in select
return compile(select, namespaces, flags, **kwargs).select(tag, limit)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/soupsieve/css_match.py", line 1564, in select
return list(self.iselect(tag, limit))
File "/usr/local/lib/python3.13/site-packages/soupsieve/css_match.py", line 1569, in iselect
yield from CSSMatch(self.selectors, tag, self.namespaces, self.flags).select(limit)
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/soupsieve/css_match.py", line 493, in __init__
self.assert_valid_input(scope)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/usr/local/lib/python3.13/site-packages/soupsieve/css_match.py", line 87, in assert_valid_input
if not cls.is_tag(tag):
~~~~~~~~~~^^^^^
File "/usr/local/lib/python3.13/site-packages/soupsieve/css_match.py", line 98, in is_tag
return isinstance(obj, bs4.Tag)
^^^^^^^
AttributeError: module 'bs4' has no attribute 'Tag'
```
### Additional information
| open | 2025-03-12T20:05:35Z | 2025-03-12T20:12:48Z | https://github.com/home-assistant/core/issues/140475 | [
"integration: scrape"
] | bong1991 | 1 |
scikit-image/scikit-image | computer-vision | 7,500 | One of our doc dependencies brings in numpy < 2 | closed | 2024-08-19T17:15:46Z | 2024-08-29T09:46:15Z | https://github.com/scikit-image/scikit-image/issues/7500 | [
":wrench: type: Maintenance"
] | stefanv | 3 |
opengeos/leafmap | jupyter | 419 | Integrate Segment Anything | https://github.com/aliaksandr960/segment-anything-eo | closed | 2023-04-17T12:41:00Z | 2023-04-22T16:14:07Z | https://github.com/opengeos/leafmap/issues/419 | [
"Feature Request"
] | giswqs | 1 |
allure-framework/allure-python | pytest | 314 | With robot framework, is xml generated or json? | I installed allure-robotframework using `pip install allure-robotframework`.
When I run a test like:
**robot --listener allure_robotframework lldp.robot**
where lldp.robot is my test script.
An allure-report folder is created in my working directory's output dir; its `ls` contents are:
- some html files
- some json files
- a data folder, which is empty
- a plugins folder, which is empty
Is this fine?
Because now, having installed the allure2 CLI, when I generate a report using the command
allure generate <pwd>/output/allure-report
an index.html is created which, when I open it, looks like this:
<img width="1425" alt="screenshot at nov 20 16-37-00" src="https://user-images.githubusercontent.com/1056793/48811425-88ff5900-ece2-11e8-9b52-4c5dc57c19d0.png"> | closed | 2018-11-21T00:37:18Z | 2018-11-21T09:22:49Z | https://github.com/allure-framework/allure-python/issues/314 | [] | shreyashah | 3 |
sgl-project/sglang | pytorch | 4,338 | [Bug] fix DeepSeek V2/V3 awq | ### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 5. Please use English, otherwise it will be closed.
### Describe the bug
I tried to integrate the awq dequant from sgl-kernel and found that both the main version and the integrated version have issues with the awq of DeepSeek V2 Coder and DeepSeek V3, which need to be fixed.
```
casperhansen/deepseek-coder-v2-instruct-awq
cognitivecomputations/DeepSeek-V3-AWQ
```
### Reproduction
N/A
### Environment
N/A | open | 2025-03-12T09:23:32Z | 2025-03-18T15:14:17Z | https://github.com/sgl-project/sglang/issues/4338 | [
"bug",
"good first issue",
"help wanted",
"quant"
] | zhyncs | 9 |
SciTools/cartopy | matplotlib | 1,682 | Feature request: examples of using different tiles | There are a few examples
https://scitools.org.uk/cartopy/docs/latest/gallery/eyja_volcano.html#sphx-glr-gallery-eyja-volcano-py
https://scitools.org.uk/cartopy/docs/latest/gallery/image_tiles.html#sphx-glr-gallery-image-tiles-py
https://scitools.org.uk/cartopy/docs/latest/gallery/tube_stations.html#sphx-glr-gallery-tube-stations-py
But I would also like to see examples using `GoogleTiles` and `MapboxTiles`.
I understand these may need API keys, so this may be hard to include.
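For `GoogleTiles`, the basic pattern would presumably look like this (an untested sketch; the extent values are made up, and Google's tile usage terms apply):

```python
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.io.img_tiles as cimgt

tiler = cimgt.GoogleTiles(style="satellite")
ax = plt.axes(projection=tiler.crs)
# hypothetical extent over London, given in lon/lat
ax.set_extent([-0.2, 0.0, 51.45, 51.55], crs=ccrs.PlateCarree())
ax.add_image(tiler, 12)  # zoom level 12
plt.show()
```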
I like the way ipyleaflet has examples of different base maps here: https://ipyleaflet.readthedocs.io/en/latest/api_reference/basemaps.html | open | 2020-11-24T06:20:30Z | 2020-12-12T00:22:15Z | https://github.com/SciTools/cartopy/issues/1682 | [
"Type: Enhancement",
"Type: Documentation"
] | raybellwaves | 0 |
jupyter-widgets-contrib/ipycanvas | jupyter | 175 | Improve put_image_data performances | We can improve `put_image_data` by sending a jpeg encoded value instead of the entire matrix: See this code in ipyvtk https://github.com/Kitware/ipyvtk-simple/blob/7a30b215c7d8dce5109330e8c4a197ff2e2c4426/ipyvtk_simple/viewer.py#L165-L171 | open | 2021-01-20T09:50:46Z | 2023-05-18T01:43:35Z | https://github.com/jupyter-widgets-contrib/ipycanvas/issues/175 | [
"enhancement"
] | martinRenou | 3 |
lepture/authlib | django | 369 | "state" is accidently passed to access token request in authorization code flow of OAuth2 client | In the OAuth client, there's a `_retrieve_oauth2_access_token_params()` function that generates some parameters for fetch access token endpoint in the authorization code flow. I'm wondering why `state` is added to the parameter and then passed to the OAuth server's token endpoint. As far as I know, the token endpoint defined by official docs for OAuth2.0 and OpenId1.0 is not expected to receive this parameter.
https://github.com/lepture/authlib/blob/45701441996b18452523105112d018c0f5f43a18/authlib/integrations/base_client/base_app.py#L127-L128
If my understanding is correct, the `state` parameter should be validated in-place instead of passing it to the remote OAuth server while trying to fetch an access token.
Looks like the current v1.0 already removed these two lines (it doesn't pass `state` to the downstream parameter):
https://github.com/lepture/authlib/blob/f17395323555de638eceecf51b535da5b91fcb0a/authlib/integrations/base_client/sync_app.py#L231-L243
Could we remove these two "unexpected" and probably buggy lines in v0.15? I think it's helpful if we could get a quick release to fix this bug. | closed | 2021-07-21T06:57:41Z | 2021-10-18T12:10:48Z | https://github.com/lepture/authlib/issues/369 | [] | imdwq | 0 |
scikit-hep/awkward | numpy | 2,611 | Errors should be exported to public namespace | ### Version of Awkward Array
main
### Description and code to reproduce
Now that we have a subset of custom error types, we should export them so that users can catch them. | closed | 2023-08-03T16:54:24Z | 2023-08-07T14:40:47Z | https://github.com/scikit-hep/awkward/issues/2611 | [
"feature"
] | agoose77 | 0 |
cvat-ai/cvat | tensorflow | 8,322 | CVAT annotation export issue | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1-Go to Tasks tab
2-Click the Actions
3-Click the export task dataset
4-Select Export format (YOLO 1.1)
5-Select save images
6-Assign a name for the zip file
7-Click OK
8-Go to the Requests tab
9-Download the dataset
### Expected Behavior
All of the annotation files (TXT files) should include the numbers related to the annotations.
### Possible Solution
_No response_
### Context
Some of the annotation text files are blank and do not contain the annotation information.
### Environment
_No response_ | closed | 2024-08-19T20:38:34Z | 2024-08-29T14:22:13Z | https://github.com/cvat-ai/cvat/issues/8322 | [
"bug",
"need info"
] | sedaye12 | 4 |
pandas-dev/pandas | data-science | 61,055 | BUG: invalid result of reindex on columns after unstack with Period data #60980 | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
series1 = pd.DataFrame(
[(0, "s2", pd.Period(2022)), (0, "s1", pd.Period(2021))],
columns=["A", "B", "C"]
).set_index(["A", "B"])["C"]
series2 = series1.astype(str)
print(series1.unstack("B").reindex(["s2"], axis=1))
print(series2.unstack("B").reindex(["s2"], axis=1))
```
### Issue Description
The example code prints:
```
B s2
A
0 2021
B s2
A
0 2022
```
### Expected Behavior
Expect the result for both pd.Period and str data to be 2022:
```
B s2
A
0 2022
B s2
A
0 2022
```
(actually observed with older Pandas 2.0.3)
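A possible workaround sketch (untested on the affected versions): since the string path reindexes correctly, round-trip through strings and convert back to Period afterwards.

```python
# hypothetical workaround: unstack a string copy, then restore Period values
wide = series1.astype(str).unstack("B").reindex(["s2"], axis=1)
wide = wide.map(lambda v: pd.Period(v) if pd.notna(v) else v)
```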
### Installed Versions
<details>
INSTALLED VERSIONS
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.10
python-bits : 64
OS : Linux
OS-release : 6.2.16
Version : #1-NixOS SMP PREEMPT_DYNAMIC Tue Jan 1 00:00:00 UTC 1980
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.3
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| closed | 2025-03-04T18:11:53Z | 2025-03-05T00:13:34Z | https://github.com/pandas-dev/pandas/issues/61055 | [
"Bug",
"Reshaping"
] | Pranav970 | 3 |
MaartenGr/BERTopic | nlp | 1,654 | MMR doesn't work/ doesn't make a change | Hi Maarten,
Thank you once again for this amazing package. I used it for my master's thesis and several projects for my job at the university, and it's a lifesaver compared to other topic modeling techniques I tried.
That being said, I have run the model on about 200k tweets and many topics have quite a lot of repeating words. I have used the following code to add a representation model with different parameters (starting from .3 to .9) but the results are still the same.
```python
from bertopic import BERTopic
from bertopic.representation import MaximalMarginalRelevance
from umap import UMAP

mmr = MaximalMarginalRelevance(diversity=.9)
representation_model = {
    "mmr": mmr
}
# hdbscan_model, vectorizer_model and embedding_model are defined elsewhere
topic_model = BERTopic(
    umap_model=UMAP(),
    hdbscan_model=hdbscan_model,
    vectorizer_model=vectorizer_model,
    embedding_model=embedding_model,
    top_n_words=10,
    language='english',
    verbose=True,
    representation_model=representation_model,
)
```
Here are a couple of examples:
> Representation:
> australia,australias,australian,auspol,renewable,nsw,energy,government,queensland,coal
> MMR:
> australia,australias,australian,auspol,renewable,nsw,energy,government,queensland,coal
Not only are the results the same, but plurals such as australia and australians are not merged. Could you please guide me on how to proceed? | open | 2023-11-30T15:26:35Z | 2023-11-30T16:48:57Z | https://github.com/MaartenGr/BERTopic/issues/1654 | [] | daianacric95 | 1
ultrafunkamsterdam/undetected-chromedriver | automation | 1,157 | Showing error ModuleNotFoundError: No module named 'fcntl' while trying to install with pip on Windows | Note: I also tried to install fcntl, but it's not available.

| closed | 2023-03-25T14:21:44Z | 2023-03-25T14:47:58Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1157 | [] | noobSrijon | 0 |
vaexio/vaex | data-science | 1,686 | [BUG-REPORT] Issue converting object to string | **Description**
Trying to convert an "object" column to string fails.
Example code:
```python
import vaex
import numpy as np
if __name__ == "__main__":
arr = np.array([123, "test", None], dtype=object)
df = vaex.from_arrays(test=arr)
df['test'] = df['test'].astype('str')
print(df.head())
```
Exception:
```python
TypeError: to_string(): incompatible function arguments. The following argument types are supported:
1. (arg0: numpy.ndarray[float32]) -> vaex.superstrings.StringList64
2. (arg0: numpy.ndarray[float64]) -> vaex.superstrings.StringList64
3. (arg0: numpy.ndarray[int64]) -> vaex.superstrings.StringList64
4. (arg0: numpy.ndarray[int32]) -> vaex.superstrings.StringList64
5. (arg0: numpy.ndarray[int16]) -> vaex.superstrings.StringList64
6. (arg0: numpy.ndarray[int8]) -> vaex.superstrings.StringList64
7. (arg0: numpy.ndarray[uint64]) -> vaex.superstrings.StringList64
8. (arg0: numpy.ndarray[uint32]) -> vaex.superstrings.StringList64
9. (arg0: numpy.ndarray[uint16]) -> vaex.superstrings.StringList64
10. (arg0: numpy.ndarray[uint8]) -> vaex.superstrings.StringList64
11. (arg0: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
12. (arg0: numpy.ndarray[float32], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
13. (arg0: numpy.ndarray[float64], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
14. (arg0: numpy.ndarray[int64], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
15. (arg0: numpy.ndarray[int32], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
16. (arg0: numpy.ndarray[int16], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
17. (arg0: numpy.ndarray[int8], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
18. (arg0: numpy.ndarray[uint64], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
19. (arg0: numpy.ndarray[uint32], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
20. (arg0: numpy.ndarray[uint16], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
21. (arg0: numpy.ndarray[uint8], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
22. (arg0: numpy.ndarray[bool], arg1: numpy.ndarray[bool]) -> vaex.superstrings.StringList64
Invoked with: array([123], dtype=object)
Process finished with exit code 0
```
**Software information**
- Vaex version (`import vaex; vaex.__version__)`:
```
{'vaex': '4.5.0', 'vaex-core': '4.6.0a5', 'vaex-viz': '0.5.0', 'vaex-hdf5': '0.10.0', 'vaex-server': '0.6.1', 'vaex-astro': '0.9.0', 'vaex-jupyter': '0.6.0', 'vaex-ml': '0.14.0', 'vaex-arrow': '0.4.2'}
```
- Vaex was installed via: pip
- OS: Ubuntu 20.04
**Additional information**
This makes it impossible to perform an export to HDF5 or other operations that do not support the object type. This is especially blocking when loading CSV files.
Should be relevant to #1568.
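A possible workaround, as an untested sketch: stringify in Python first and hand vaex an Arrow string array. This assumes `vaex.from_arrays` accepts pyarrow arrays, which vaex 4 generally does.

```python
import numpy as np
import pyarrow as pa
import vaex

arr = np.array([123, "test", None], dtype=object)
str_arr = pa.array(
    [None if v is None else str(v) for v in arr],  # keep None as missing
    type=pa.string(),
)
df = vaex.from_arrays(test=str_arr)
```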
| open | 2021-11-08T13:25:50Z | 2023-01-10T08:55:47Z | https://github.com/vaexio/vaex/issues/1686 | [
"feature-request",
"priority: low",
"good first issue"
] | grafail | 4 |
ResidentMario/geoplot | matplotlib | 56 | Single-pixel edge inversion in the voronoi plot when clipping is used | The new voronoi plot type has random junk lines on the edges of the plot when both a projection is set (e.g. the underlying axis instance is a `cartopy` object) and a clip is used:

The clipping is done by taking the symmetric difference between a rectangle spanning the extent (corners) of the plot and the given set of geometries used to actually clip the data. This routine is implemented in `_get_clip`.
It appears that there are rounding errors in calculating where the "true edge" of the plot lies, resulting in single-pixel "windows" on the edge of the plot where data, if it exists, can peek through.
This is particularly puzzling because `kdeplot` suffered from the same problem initially, but I solved it by simply scaling the coverage rectangle by a factor of 1.25 (`shapely.affinity.scale(rect, xfact=1.25, yfact=1.25)`), producing something that *ought* to white-wash data even past the plot borders. But clearly this is not working in this case!
This example was generated as follows:
```python
import quilt
from quilt.data.ResidentMario import geoplot_data
import geopandas as gpd
boroughs = gpd.read_file(geoplot_data.nyc_boroughs())
fatal_collisions = gpd.read_file(geoplot_data.nyc_fatal_collisions())
injurious_collisions = gpd.read_file(geoplot_data.nyc_injurious_collisions())
import geoplot.crs as gcrs
import geoplot as gplt
# %matplotlib inline
ax = gplt.voronoi(injurious_collisions.head(1000), projection=gcrs.AlbersEqualArea(),
hue='NUMBER OF PERSONS INJURED', cmap='Blues', categorical=False,
edgecolor='white', linewidth=1, clip=boroughs.geometry
)
gplt.polyplot(boroughs, projection=gcrs.AlbersEqualArea(), linewidth=2, ax=ax)
``` | closed | 2018-05-08T02:06:49Z | 2019-07-05T02:35:32Z | https://github.com/ResidentMario/geoplot/issues/56 | [
"bug"
] | ResidentMario | 1 |
kizniche/Mycodo | automation | 1,021 | Mi Flora Sensor only gets new values once ... on activate/deactivate/reactivate Input | Hi,
I don't know why; there are no errors in the log, and the MiFlora sensor works fine.
But the only way to acquire new measurements is to deactivate and reactivate the sensor on the Input tab.
Mycodo 8.10.1; I also tried the new master 8.11, but then the live graphs no longer work (seems to be a bug).
The sensor keeps reporting the same measurements until it is deactivated and reactivated.
Any suggestion which log I could check?
I'm not quite sure whether this could be a BT initialization error.
Or is there any workaround, such as a Function that deactivates and reactivates an input periodically? I couldn't find something like this, and forcing measurements via a Function doesn't work either.
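A sketch of what such a Python Code function could look like. Caveat: it assumes Mycodo's `DaemonControl` client exposes `controller_deactivate`/`controller_activate` for inputs; please verify against your Mycodo version.

```python
# Assumption: DaemonControl provides controller_(de)activate for inputs.
import time
from mycodo.mycodo_client import DaemonControl

INPUT_ID = "ec1812f7-6661-4afc-82de-ac8a74d1cf59"  # the Mi Flora input

control = DaemonControl()
control.controller_deactivate(INPUT_ID)
time.sleep(5)  # give the controller time to shut down
control.controller_activate(INPUT_ID)
```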
Greets, El-Tonno
PS: could it be a timestamp problem? I always see 'timestamp_utc': None.
Additional log: everything looks fine, but the old values keep being recorded; new values appear only after deactivating and reactivating the input:
```
2021-06-09 22:04:19,900 - DEBUG - mycodo.controllers.controller_input_ec1812f7 - Adding measurements to InfluxDB with ID ec1812f7-6661-4afc-82de-ac8a74d1cf59: {0: {'measurement': 'battery', 'unit': 'percent', 'value': 100.0, 'timestamp_utc': None}, 1: {'measurement': 'electrical_conductivity', 'unit': 'uS_cm', 'value': 0.0, 'timestamp_utc': None}, 2: {'measurement': 'light', 'unit': 'lux', 'value': 4.0, 'timestamp_utc': None}, 3: {'measurement': 'moisture', 'unit': 'unitless', 'value': 0.0, 'timestamp_utc': None}, 4: {'measurement': 'temperature', 'unit': 'C', 'value': 25.3, 'timestamp_utc': None}}
2021-06-09 22:04:49,862 - DEBUG - mycodo.inputs.xiaomi_miflora_ec1812f7 - Acquiring lock
2021-06-09 22:04:49,865 - DEBUG - mycodo.inputs.xiaomi_miflora_ec1812f7 - Releasing lock
2021-06-09 22:04:49,910 - DEBUG - mycodo.controllers.controller_input_ec1812f7 - Adding measurements to InfluxDB with ID ec1812f7-6661-4afc-82de-ac8a74d1cf59: {0: {'measurement': 'battery', 'unit': 'percent', 'value': 100.0, 'timestamp_utc': None}, 1: {'measurement': 'electrical_conductivity', 'unit': 'uS_cm', 'value': 0.0, 'timestamp_utc': None}, 2: {'measurement': 'light', 'unit': 'lux', 'value': 4.0, 'timestamp_utc': None}, 3: {'measurement': 'moisture', 'unit': 'unitless', 'value': 0.0, 'timestamp_utc': None}, 4: {'measurement': 'temperature', 'unit': 'C', 'value': 25.3, 'timestamp_utc': None}}
```
.... and now the new ones, after reactivating:
```
2021-06-09 22:11:49,916 - DEBUG - mycodo.controllers.controller_input_ec1812f7 - Adding measurements to InfluxDB with ID ec1812f7-6661-4afc-82de-ac8a74d1cf59: {0: {'measurement': 'battery', 'unit': 'percent', 'value': 100.0, 'timestamp_utc': None}, 1: {'measurement': 'electrical_conductivity', 'unit': 'uS_cm', 'value': 455.0, 'timestamp_utc': None}, 2: {'measurement': 'light', 'unit': 'lux', 'value': 26.0, 'timestamp_utc': None}, 3: {'measurement': 'moisture', 'unit': 'unitless', 'value': 85.0, 'timestamp_utc': None}, 4: {'measurement': 'temperature', 'unit': 'C', 'value': 24.7, 'timestamp_utc': None}}
2021-06-09 22:12:11,684 - INFO - mycodo.controllers.controller_input_ec1812f7 - Deactivated in 76.3 ms
2021-06-09 22:12:21,141 - INFO - mycodo.inputs.xiaomi_miflora_ec1812f7 - Miflora: Name: Flower care, FW: 3.2.4
2021-06-09 22:12:21,143 - INFO - mycodo.controllers.controller_input_ec1812f7 - Activated in 3005.9 ms
2021-06-09 22:12:21,144 - DEBUG - mycodo.inputs.xiaomi_miflora_ec1812f7 - Acquiring lock
2021-06-09 22:12:21,386 - DEBUG - mycodo.inputs.xiaomi_miflora_ec1812f7 - Releasing lock
2021-06-09 22:12:21,515 - DEBUG - mycodo.controllers.controller_input_ec1812f7 - Adding measurements to InfluxDB with ID ec1812f7-6661-4afc-82de-ac8a74d1cf59: {0: {'measurement': 'battery', 'unit': 'percent', 'value': 100.0, 'timestamp_utc': None}, 1: {'measurement': 'electrical_conductivity', 'unit': 'uS_cm', 'value': 454.0, 'timestamp_utc': None}, 2: {'measurement': 'light', 'unit': 'lux', 'value': 134.0, 'timestamp_utc': None}, 3: {'measurement': 'moisture', 'unit': 'unitless', 'value': 85.0, 'timestamp_utc': None}, 4: {'measurement': 'temperature', 'unit': 'C', 'value': 24.5, 'timestamp_utc': None}}
```
| open | 2021-06-09T20:03:41Z | 2022-07-18T17:12:32Z | https://github.com/kizniche/Mycodo/issues/1021 | [] | El-Tonno | 2 |
babysor/MockingBird | deep-learning | 926 | monotonic-align==0.0.3 can't install | When I install it, it prompts:
ERROR: Could not find a version that satisfies the requirement monotonic-align==0.0.3 (from versions: 1.0.0)
ERROR: No matching distribution found for monotonic-align==0.0.3 | closed | 2023-06-29T05:49:00Z | 2023-07-04T00:14:10Z | https://github.com/babysor/MockingBird/issues/926 | [] | 2265290305 | 1 |
scikit-learn/scikit-learn | data-science | 30,054 | ⚠️ CI failed on linux_arm64_wheel (last failure: Oct 13, 2024) ⚠️ | **CI failed on [linux_arm64_wheel](https://cirrus-ci.com/build/5764259953508352)** (Oct 13, 2024)
| closed | 2024-10-13T03:45:57Z | 2024-10-14T14:06:26Z | https://github.com/scikit-learn/scikit-learn/issues/30054 | [
"Needs Triage"
] | scikit-learn-bot | 0 |
microsoft/hummingbird | scikit-learn | 645 | Deprecation warnings in our GitHub Actions workflow | [Update TVM to 0.10 · microsoft/hummingbird@4f449ea](https://github.com/microsoft/hummingbird/actions/runs/3342225153)
```
Node.js 12 actions are deprecated. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/. Please update the following actions to use Node.js 16: actions/checkout, actions/setup-python, actions/setup-python, actions/checkout
```
```
The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
```
TODOs
- [x] upgrade actions/checkout
- [x] upgrade actions/setup-python | closed | 2022-10-28T06:05:13Z | 2022-11-02T01:04:41Z | https://github.com/microsoft/hummingbird/issues/645 | [] | mshr-h | 0 |
strawberry-graphql/strawberry | asyncio | 3,349 | When I run with python3, an ImportError occurs | <!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
ImportError : cannot import name 'GraphQLError' from 'graphql'
## Describe the Bug
It works well when executed with poetry run app.main:main.
However, when executing with python3 app/main.py, the following ImportError occurs.
_**Line of code where the error occurs**_
<img width="849" alt="image" src="https://github.com/strawberry-graphql/strawberry/assets/10377550/713ab6b4-76c1-4b0d-84e1-80903f8855ea">
**_Traceback_**
```bash
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/evanhwang/dev/ai-hub/hub-api/app/bootstrap/admin/bootstrapper.py", line 4, in <module>
from app.bootstrap.admin.router import AdminRouter
File "/Users/evanhwang/dev/ai-hub/hub-api/app/bootstrap/admin/router.py", line 3, in <module>
import strawberry
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/__init__.py", line 1, in <module>
from . import experimental, federation, relay
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/federation/__init__.py", line 1, in <module>
from .argument import argument
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/federation/argument.py", line 3, in <module>
from strawberry.arguments import StrawberryArgumentAnnotation
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/arguments.py", line 18, in <module>
from strawberry.annotation import StrawberryAnnotation
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/annotation.py", line 23, in <module>
from strawberry.custom_scalar import ScalarDefinition
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/custom_scalar.py", line 19, in <module>
from strawberry.exceptions import InvalidUnionTypeError
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/exceptions/__init__.py", line 6, in <module>
from graphql import GraphQLError
ImportError: cannot import name 'GraphQLError' from 'graphql' (/Users/evanhwang/dev/ai-hub/hub-api/app/graphql/__init__.py)
```
## System Information
- Operating System: Mac Ventura 13.5.1(22G90)
- Strawberry Version (if applicable):
Entered in pyproject.toml as follows:
```bash
strawberry-graphql = {extras = ["debug-server", "fastapi"], version = "^0.217.1"}
```
**_pyproject.toml_**
```toml
##############################################################################
# poetry dependency settings
# - https://python-poetry.org/docs/managing-dependencies/#dependency-groups
# - By default, dependencies are resolved from PyPI.
##############################################################################
[tool.poetry.dependencies]
python = "3.11.*"
fastapi = "^0.103.2"
uvicorn = "^0.23.2"
poethepoet = "^0.24.0"
requests = "^2.31.0"
poetry = "^1.6.1"
sqlalchemy = "^2.0.22"
sentry-sdk = "^1.32.0"
pydantic-settings = "^2.0.3"
psycopg2-binary = "^2.9.9"
cryptography = "^41.0.4"
python-ulid = "^2.2.0"
ulid = "^1.1"
redis = "^5.0.1"
aiofiles = "^23.2.1"
pyyaml = "^6.0.1"
python-jose = "^3.3.0"
strawberry-graphql = {extras = ["debug-server", "fastapi"], version = "^0.217.1"}
[tool.poetry.group.dev.dependencies]
pytest = "^7.4.0"
pytest-mock = "^3.6.1"
httpx = "^0.24.1"
poetry = "^1.5.1"
sqlalchemy = "^2.0.22"
redis = "^5.0.1"
mypy = "^1.7.0"
types-aiofiles = "^23.2.0.0"
types-pyyaml = "^6.0.12.12"
commitizen = "^3.13.0"
black = "^23.3.0" # formatter
isort = "^5.12.0" # sort imports
pycln = "^2.1.5" # clean up unused imports
ruff = "^0.0.275" # linting
##############################################################################
# poethepoet
# - https://github.com/nat-n/poethepoet
# - Task runner configuration via poe
##############################################################################
types-requests = "^2.31.0.20240106"
pre-commit = "^3.6.0"
[tool.poe.tasks.format-check-only]
help = "Check without formatting with 'pycln', 'black', 'isort'."
sequence = [
{cmd = "pycln --check ."},
{cmd = "black --check ."},
{cmd = "isort --check-only ."}
]
[tool.poe.tasks.format]
help = "Run formatter with 'pycln', 'black', 'isort'."
sequence = [
{cmd = "pycln -a ."},
{cmd = "black ."},
{cmd = "isort ."}
]
[tool.poe.tasks.lint]
help = "Run linter with 'ruff'."
cmd = "ruff ."
[tool.poe.tasks.type-check]
help = "Run type checker with 'mypy'"
cmd = "mypy ."
[tool.poe.tasks.clean]
help = "Clean mypy_cache, pytest_cache, pycache..."
cmd = "rm -rf .coverage .mypy_cache .pytest_cache **/__pycache__"
##############################################################################
# isort
# - https://pycqa.github.io/isort/
# - Python import sorting module settings
##############################################################################
[tool.isort]
profile = "black"
##############################################################################
# ruff
# - https://github.com/astral-sh/ruff
# - Rust-based formatter and linter.
##############################################################################
[tool.ruff]
select = [
"E", # pycodestyle errors
"W", # pycodestyle warnings
"F", # pyflakes
"C", # flake8-comprehensions
"B", # flake8-bugbear
# "T20", # flake8-print
]
ignore = [
"E501", # line too long, handled by black
"E402", # line too long, handled by black
"B008", # do not perform function calls in argument defaults
"C901", # too complex
]
[tool.commitizen]
##############################################################################
# mypy settings
# - https://mypy.readthedocs.io/en/stable/
# - Performs static type checking.
##############################################################################
[tool.mypy]
python_version = "3.11"
packages=["app"]
exclude=["tests"]
ignore_missing_imports = true
show_traceback = true
show_error_codes = true
disable_error_code="misc, attr-defined"
follow_imports="skip"
#strict = false
# The following are several options included in --strict.
warn_unused_configs = true # Warns about [mypy-<pattern>] config sections that are not used in the mypy settings. (Requires --no-incremental to turn off incremental mode)
disallow_any_generics = false # Disallows use of generic types that do not specify explicit type parameters. For example, plain x: list is not allowed; you must always write x: list[int].
disallow_subclassing_any = true # Reports an error when a class inherits from type Any. This can happen when the base class is imported from a nonexistent module (when using --ignore-missing-imports) or when the import statement has a # type: ignore comment.
disallow_untyped_calls = true # Reports an error when calling a function whose definition has no type annotations.
disallow_untyped_defs = false # Reports function definitions without type annotations or with incomplete annotations. (A superset of --disallow-incomplete-defs)
disallow_incomplete_defs = false # Reports partially annotated function definitions, while fully annotated definitions are still allowed.
check_untyped_defs = true # Always type-checks the bodies of functions without annotations. (By default, bodies of unannotated functions are not type-checked.) Treats all parameters as Any and always assumes Any as the return value.
disallow_untyped_decorators = true # Reports an error when using a decorator that has no type annotations.
warn_redundant_casts = true # Reports an error when code uses an unnecessary cast. A warning is raised when the cast can safely be removed.
warn_unused_ignores = false # Warns when code has a # type: ignore comment that does not actually suppress an error message.
warn_return_any = false # Warns about functions that return the Any type.
no_implicit_reexport = true # By default, values imported into a module are considered exported, and mypy allows other modules to import them. This flag changes that behavior so they are not exported unless imported with from-as or included in __all__.
strict_equality = true # By default mypy allows always-false comparisons such as 42 == 'no'. This flag forbids such comparisons and reports similar identity and container checks. (e.g., from typing import Text)
extra_checks = true # Enables additional checks that are technically correct but can be inconvenient in real code. In particular, forbids partial overlap in TypedDict updates and creates positional-only arguments via Concatenate.
# Without configuring the pydantic plugin, spurious type errors may occur
# - https://www.twoistoomany.com/blog/2023/04/12/pydantic-mypy-plugin-in-pyproject/
plugins = ["pydantic.mypy", "strawberry.ext.mypy_plugin"]
##############################################################################
# Build system settings
##############################################################################
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
##############################################################################
# virtualenv settings
# - When invoking poetry in the project, create the venv at '.venv' if one does not exist
##############################################################################
[virtualenvs]
create = true
in-project = true
path = ".venv"
```
## Additional Context
- I already ran 'Invalidate Caches' in PyCharm. | closed | 2024-01-19T02:17:47Z | 2025-03-20T15:56:34Z | https://github.com/strawberry-graphql/strawberry/issues/3349 | [] | evan-hwang | 6
ultrafunkamsterdam/undetected-chromedriver | automation | 945 | Cannot load extension after update | I was working fine with my old undetected_chromedriver, but when I updated my Google Chrome it started having issues like in the picture. Could you please help me find a good solution? Thank you.
screen : https://prnt.sc/uRg-kYG86Hfh
my code:
```python
import time
import undetected_chromedriver as uc

chrome_options = uc.ChromeOptions()
chrome_options.add_extension('test.crx')
driver = uc.Chrome(options=chrome_options)
time.sleep(999)
``` | open | 2022-12-12T00:19:54Z | 2023-10-30T20:15:25Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/945 | [] | rotinabox | 2
django-cms/django-cms | django | 7,484 | Docs about color mode doesn't work | ## Description
The problem is that the fixed color scheme doesn't work! It is set with the variables below:
```
CMS_COLOR_SCHEME = "light"
CMS_COLOR_SCHEME_TOGGLE = True
```
set in the `settings.py`
## Additional information (CMS/Python/Django versions)
```
Django==4.1.5
Django-cms==4.1.0rc1
Python 3.10
```
* [x] Yes, I want to help fix this issue and I will join #workgroup-pr-review on [Slack](https://www.django-cms.org/slack) to confirm with the community that a PR is welcome.
* [ ] No, I only want to report the issue.
| closed | 2023-01-22T17:20:03Z | 2023-09-20T21:45:12Z | https://github.com/django-cms/django-cms/issues/7484 | [
"component: frontend",
"status: accepted",
"needs to be forwardported",
"needs contribution",
"4.1"
] | svandeneertwegh | 3 |
alteryx/featuretools | scikit-learn | 1,763 | Bug with parallel chunk calculation and cutoff time df | ### Bug with parallel chunk calculation
#### Bug Description
In my case I used dfs with cutoff time as df and n_jobs == -1
In the previous featuretools version everything worked fine. But after v.1.0.0 release I have found out that dfs breaks when running it with cutoff time which contains extra columns (pass_columns) and n_jobs > 1. The error is the following
`AttributeError: 'NoneType' object has no attribute 'logical_types'`
It happens in the following function `get_ww_types_from_features`
In the following lines
```
if pass_columns:
cutoff_schema = cutoff_time.ww.schema
for column in pass_columns:
logical_types[column] = cutoff_schema.logical_types[column]
semantic_tags[column] = cutoff_schema.semantic_tags[column]
origins[column] = "base"
```
However, when n_jobs == 1 everything works fine. I have investigated the source code and found that it is probably caused by the fact that when cutoff_times is cut into chunks in the `parallel_calculate_chunks` function, the resulting chunks lose the Woodwork extension.
So to solve this issue, I think `.ww.init()` needs to be called on every chunk after the split.
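For example, something along these lines inside `parallel_calculate_chunks` (a hypothetical sketch, not tested):

```python
# hypothetical sketch: re-initialize Woodwork on each chunk after splitting
for chunk in chunks:
    if chunk.ww.schema is None:
        # types could also be copied over from the original cutoff_time schema
        chunk.ww.init()
```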
#### Expected Output
#### Output of ``featuretools.show_info()``
<details>
Featuretools version: 1.0.0
SYSTEM INFO
-----------
python: 3.7.5.final.0
python-bits: 64
OS: Darwin
OS-release: 19.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: ru_RU.UTF-8
LOCALE: ru_RU.UTF-8
INSTALLED VERSIONS
------------------
numpy: 1.21.1
pandas: 1.3.2
tqdm: 4.62.2
PyYAML: 5.4.1
cloudpickle: 1.6.0
dask: 2021.9.0
distributed: 2021.9.0
psutil: 5.8.0
pip: 19.2.3
setuptools: 41.2.0
</details>
| closed | 2021-10-29T13:51:39Z | 2021-11-01T19:16:41Z | https://github.com/alteryx/featuretools/issues/1763 | [
"bug"
] | VGODIE | 1 |
sgl-project/sglang | pytorch | 4,684 | [Feature] support QServe | ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 2. Please use English, otherwise it will be closed.
### Motivation
https://github.com/mit-han-lab/omniserve
QServe is great. We can just use W4A8.
### Related resources
_No response_ | open | 2025-03-22T22:58:38Z | 2025-03-24T03:35:08Z | https://github.com/sgl-project/sglang/issues/4684 | [
"high priority"
] | zhyncs | 2 |
dynaconf/dynaconf | flask | 1,132 | [RFC]typed: Add all the options to Options dataclass | Complete the remaining arguments that need to be defined on `dynaconf.typed.Options`.
Also rename `_trigger_validation` to just `trigger_validation` (and apply the same pattern to the other control args like it); then, instead of filtering by `startswith(_)`, filter from a specified list of attributes. | open | 2024-07-06T14:44:57Z | 2024-07-08T18:38:18Z | https://github.com/dynaconf/dynaconf/issues/1132 | [
"Not a Bug",
"RFC",
"typed_dynaconf"
] | rochacbruno | 0 |
pytorch/vision | computer-vision | 8,437 | Add mobilenetv4 support and pretrained models? | ### 🚀 The feature
Google has published the MobileNetV4 model. When will torchvision support it and release pretrained models?
### Motivation, pitch
I very much hope to use the latest lightweight backbone
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-05-22T06:16:00Z | 2024-06-14T02:01:20Z | https://github.com/pytorch/vision/issues/8437 | [] | LiYufengzz | 5 |
BeanieODM/beanie | pydantic | 185 | Please issue a Changelog with releases | Please can you add change notes with releases? It's impossible to know what was shipped in the last update.
If you create the release on GitHub, use the "Auto-generate release notes" button to create one automatically from all the PRs that have been merged since the last release. | closed | 2022-01-11T02:20:25Z | 2023-02-21T03:07:32Z | https://github.com/BeanieODM/beanie/issues/185 | [
"Stale"
] | tonybaloney | 2 |
onnx/onnx | machine-learning | 5,842 | Shape Inference Error of DepthwiseConv2D |
## Issue Description
I have an ONNX model converted by `tf2onnx`. When I use `onnx.shape_inference.infer_shapes(original_model)`, I encounter the following error:
```
onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] (op_type:Transpose, node name: dscnn_5/depthwise_conv2d_1/depthwise__19): [ShapeInferenceError] Inferred shape and existing shape differ in dimension 1: (1) vs (2)
```
It seems that there is a problem with the final `Transpose` operation. How can I solve it?
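One diagnostic that sometimes helps (a sketch, not a guaranteed fix): drop the pre-existing `value_info` shapes that the converter stored in the graph, so inference is not compared against stale entries, and run in non-strict mode.

```python
import onnx

model = onnx.load("model.onnx")  # hypothetical file name
del model.graph.value_info[:]    # discard stored (possibly stale) shapes
inferred = onnx.shape_inference.infer_shapes(model, strict_mode=False)
```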
| closed | 2024-01-04T13:13:55Z | 2025-01-26T06:42:47Z | https://github.com/onnx/onnx/issues/5842 | [
"topic: converters",
"module: shape inference",
"stale"
] | Azulita0317 | 1 |
graphql-python/graphene-django | graphql | 1,193 | KeyError when making shallow copy of GrapheneObjectType | Making shallow copies of GrapheneObjectType using copy.copy raises `KeyError: 'graphene_type'`
```python
import graphene
import copy
class Query(graphene.ObjectType):
users = graphene.List(graphene.String)
def resolve_users(self, info):
return []
schema = graphene.Schema(query=Query)
copy.copy(schema.graphql_schema.query_type)
```
This was tested using `graphene==3.0b7` with python 3.8, and it's not happening in `graphene<3` | closed | 2021-04-29T02:59:27Z | 2021-04-29T03:22:55Z | https://github.com/graphql-python/graphene-django/issues/1193 | [
"🐛bug"
] | wallee94 | 1 |
ultralytics/ultralytics | deep-learning | 18,882 | Exporting best.pt model to tensorrt engine format on Jetson Orin nano | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi,
I have trained a YOLOv11s model on my Windows machine and am now trying to deploy it on a Jetson Orin Nano. I installed the ultralytics library via pip install ultralytics and am attempting to export the model to a TensorRT engine format.
However, when running the following command:
yolo export model=best.pt format=engine task=detect device=0
I am encountering the following issue:
```
YOLO11s summary (fused): 238 layers, 9,413,574 parameters, 0 gradients, 21.3 GFLOPs
PyTorch: starting from 'best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 6, 8400) (18.3 MB)
ONNX: starting export with onnx 1.17.0 opset 17...
ONNX: slimming with onnxslim 0.1.47...
ONNX: export success ✅ 8.6s, saved as 'best.onnx' (36.2 MB)
requirements: Ultralytics requirement ['tensorrt>7.0.0,!=10.1.0'] not found, attempting AutoUpdate...
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-h834alno/tensorrt-cu12_725ec5de08014dcfaae9e7b005a4b89d/setup.py", line 71, in <module>
raise RuntimeError("TensorRT does not currently build wheels for Tegra systems")
RuntimeError: TensorRT does not currently build wheels for Tegra systems
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Retry 1/2 failed: Command 'pip install --no-cache-dir "tensorrt>7.0.0,!=10.1.0" ' returned non-zero exit status 1.
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-fend8jhw/tensorrt-cu12_b7b9a53f94814a29bc8e20784950f64b/setup.py", line 71, in <module>
raise RuntimeError("TensorRT does not currently build wheels for Tegra systems")
RuntimeError: TensorRT does not currently build wheels for Tegra systems
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Retry 2/2 failed: Command 'pip install --no-cache-dir "tensorrt>7.0.0,!=10.1.0" ' returned non-zero exit status 1.
requirements: ❌ Command 'pip install --no-cache-dir "tensorrt>7.0.0,!=10.1.0" ' returned non-zero exit status 1.
TensorRT: export failure ❌ 35.7s: No module named 'tensorrt'
Traceback (most recent call last):
File "/home/rocklee/radioconda/envs/rocklee_gpu/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 820, in export_engine
import tensorrt as trt # noqa
ModuleNotFoundError: No module named 'tensorrt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/rocklee/radioconda/envs/rocklee_gpu/bin/yolo", line 8, in <module>
sys.exit(entrypoint())
File "/home/rocklee/radioconda/envs/rocklee_gpu/lib/python3.10/site-packages/ultralytics/cfg/__init__.py", line 986, in entrypoint
getattr(model, mode)(**overrides) # default args from model
File "/home/rocklee/radioconda/envs/rocklee_gpu/lib/python3.10/site-packages/ultralytics/engine/model.py", line 738, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "/home/rocklee/radioconda/envs/rocklee_gpu/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 401, in __call__
f[1], _ = self.export_engine(dla=dla)
File "/home/rocklee/radioconda/envs/rocklee_gpu/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 180, in outer_func
raise e
File "/home/rocklee/radioconda/envs/rocklee_gpu/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 175, in outer_func
f, model = inner_func(*args, **kwargs)
File "/home/rocklee/radioconda/envs/rocklee_gpu/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 824, in export_engine
import tensorrt as trt # noqa
ModuleNotFoundError: No module named 'tensorrt'
```
Jetson environment:
Cuda Arch Bin: 8.7
L4T: 36.3.0
Jetpack : 6.0
Machine: aarch64
Distribution: Ubuntu 22.04 Jammy Jellyfish
Python: 3.10.12
Cuda version : 12.2
Given the urgency of the deployment, any help on how to resolve this issue and successfully export the model for the Jetson Orin GPU would be greatly appreciated.
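For reference, the pip failure above says TensorRT publishes no wheels for Tegra; on JetPack the Python bindings ship with the OS image instead. Below is a minimal sketch of the check I am using to see whether those system bindings are reachable from the conda environment. The dist-packages path is the typical JetPack 6.0 location and is an assumption about this image:

```python
import importlib.util
import sys

# Typical location of the JetPack-installed TensorRT bindings (assumption;
# verify with `dpkg -L python3-libnvinfer` on the device).
JETPACK_DIST_PACKAGES = "/usr/lib/python3.10/dist-packages"

if importlib.util.find_spec("tensorrt") is None:
    # Conda envs don't see system dist-packages by default, so expose them.
    sys.path.append(JETPACK_DIST_PACKAGES)

import tensorrt as trt  # should now resolve if JetPack's bindings exist

print(trt.__version__)
```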
@glenn-jocher
### Additional
_No response_ | open | 2025-01-25T15:32:24Z | 2025-01-28T01:20:26Z | https://github.com/ultralytics/ultralytics/issues/18882 | [
"question",
"detect",
"embedded",
"exports"
] | Srikanthv466 | 11 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 637 | Toolbox launches but doesn't work | I execute the demo_toolbox.py and the toolbox launches. I click Browse and pull in a wav file. The toolbox shows the wav file loaded and a mel spectrogram displays. The program then says "Loading the encoder \encoder\saved_models\pretrained.pt
but that is where the program just goes to sleep until I exit out. I verify the pretrained.pt file is there so I don't understand why the program stops. Any help would be greatly appreciated.
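To narrow it down, here is a minimal sketch that loads the checkpoint directly, outside the GUI (it assumes it is run from the repo root with the same Python and torch as the toolbox). If this also hangs, the problem is the file or torch itself, not the toolbox:

```python
import torch

# Load the encoder checkpoint on CPU, bypassing the toolbox entirely.
ckpt = torch.load("encoder/saved_models/pretrained.pt", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:5])  # peek at the first few keys
```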


Edlaster58@gmail.com | closed | 2021-01-22T21:56:38Z | 2021-01-23T00:36:12Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/637 | [] | bigdog3604 | 4 |
lukas-blecher/LaTeX-OCR | pytorch | 340 | [Windows GUI] latexocr: Hide Command Window and close to task bar | Hi,
the program works flawlessly, thanks for sharing it.
Would it be possible to add two features:
1) Hide the command prompt when executing latexocr.exe on Windows machines?
2) Close latexocr to the taskbar, so it can easily be opened again when needed while writing (a rough sketch of the tray idea is below). | open | 2023-11-30T10:15:51Z | 2023-11-30T10:15:51Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/340 | [] | D4Ci0 | 0
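Regarding feature 2 above, here is a rough sketch of the close-to-tray behavior, assuming the GUI is Qt-based; adjust the imports if it uses PySide rather than PyQt5, and the icon path is a placeholder. For feature 1, packaging the entry point under `gui_scripts` (console-less) or launching via `pythonw.exe` is the usual way to suppress the command window on Windows.

```python
from PyQt5.QtGui import QIcon
from PyQt5.QtWidgets import QApplication, QMainWindow, QMenu, QSystemTrayIcon


class Window(QMainWindow):
    def __init__(self):
        super().__init__()
        # Tray icon with a small context menu to restore or quit the app.
        self.tray = QSystemTrayIcon(QIcon("icon.png"), self)
        self.menu = QMenu()  # keep a reference so Qt doesn't discard the menu
        self.menu.addAction("Show").triggered.connect(self.showNormal)
        self.menu.addAction("Quit").triggered.connect(QApplication.quit)
        self.tray.setContextMenu(self.menu)
        self.tray.show()

    def closeEvent(self, event):
        # Hide to the tray instead of quitting when the window is closed.
        event.ignore()
        self.hide()


app = QApplication([])
app.setQuitOnLastWindowClosed(False)  # keep running while only the tray is visible
w = Window()
w.show()
app.exec_()
```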
strawberry-graphql/strawberry | fastapi | 3,542 | Rename modules that have same name as aliases | I'm working on generating API docs for Strawberry, and noticed some issue with how we named some modules. For example we have `strawberry.auto` as an export, but we also have `strawberry.auto` as a module.
(Un)fortunately, this hasn't really caused any issue (as far as I know), but when using `griffe` to get a JSON I can use to generate the API docs, I don't get everything (since some aliases and some modules have the same name). This could potentially be fixed with changes to griffe, but I don't think it is worth it, because we'd still have an issue: for example, if we have `strawberry.auto`, how do we get to its docs?
would it be `https://strawberry.rocks/docs/api/strawberry.auto`? or its full path: `https://strawberry.rocks/docs/api/strawberry.auto.auto`?
Second option could work, but I'd love to make it easier for people to find the docs by just writing the url (so `https://strawberry.rocks/docs/api/strawberry.auto` would be the best option for me).
To achieve this we'd need to change the names of the modules. While working on the site I've already made some changes (in `feature/docstring`); for example, I changed `strawberry.type` to `strawberry.strawberry_type`. But while writing this issue I thought of keeping the same name and making the module "private", e.g. `strawberry._type` (it could also be a good way to tell people that they shouldn't import things from these modules).
What do you all think?
| alias name | proposed module name | alternate module name |
|-----------------------------|------------------------------|----------------------------|
| strawberry.auto | strawberry._auto | |
| strawberry.type | strawberry._type | strawberry.strawberry_type |
| strawberry.directive | strawberry._directive | strawberry.graphql_types |
| strawberry.enum | strawberry._enum | strawberry.graphql_types |
| strawberry.field | strawberry._field | |
| strawberry.mutation | strawberry._mutation | |
| strawberry.schema_directive | strawberry._schema_directive | |
| strawberry.union | strawberry._union | |
/cc @strawberry-graphql/core | closed | 2024-06-18T20:45:35Z | 2025-03-20T15:56:46Z | https://github.com/strawberry-graphql/strawberry/issues/3542 | [] | patrick91 | 16 |
ludwig-ai/ludwig | data-science | 3,161 | auto_train shouldn't require horovod | **Describe the bug**
Using the autoML feature in auto_train seems to require horovod, but it should be possible to leverage autoML on a single machine without requiring horovod (which is very difficult to install). If this is indeed by design, it should be called out as a requirement in the auto_train documentation.
the error is:
ModuleNotFoundError: Horovod isn't installed. To install Horovod with PyTorch support, run 'pip install 'horovod[pytorch]''. To install Horovod with TensorFlow support, run 'pip install 'horovod[tensorflow]''.
**To Reproduce**
Sample code:
```
import pprint

from ludwig.automl import auto_train
from ludwig.datasets import mushroom_edibility
from ludwig.utils.dataset_utils import get_repeatable_train_val_test_split
mushroom_df = mushroom_edibility.load()
mushroom_edibility_df = get_repeatable_train_val_test_split(mushroom_df, 'class', random_seed=42)
auto_train_results = auto_train(
dataset=mushroom_edibility_df,
target='class',
time_limit_s=120,
tune_for_memory=False,
user_config={'preprocessing': {'split': {'column': 'split', 'type': 'fixed'}}},
)
pprint.pprint(auto_train_results)
```
**Expected behavior**
auto_train should run successfully on a machine without Horovod installed.
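For what it's worth, the usual pattern here is a lazy, optional import. A hypothetical sketch (not Ludwig's actual code) of how a backend lookup could degrade gracefully:

```python
def get_backend(name: str = "local"):
    """Illustrative only: import horovod lazily, and only when asked for."""
    if name == "horovod":
        try:
            import horovod.torch as hvd  # optional extra
        except ImportError as err:
            raise ImportError(
                "Horovod backend requested but horovod is not installed; "
                "run `pip install 'horovod[pytorch]'` or use backend='local'."
            ) from err
        return hvd
    return None  # the local backend needs no extra dependencies
```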
**Environment (please complete the following information):**
- OS: Linux
- Python version: 3.8
- Ludwig version: 0.7 installed via pip, along with dask, hummingbird-ml, and lightgbm
thanks | closed | 2023-02-28T06:50:02Z | 2023-03-01T18:16:39Z | https://github.com/ludwig-ai/ludwig/issues/3161 | [
"feature"
] | skunkwerk | 2 |
praw-dev/praw | api | 1,709 | Assertion error when running replace_more on certain submission comments | **Describe the bug**
Assertion error when running `replace_more` on the comments of a particular submission.
**Expected behavior**
`replace_more` runs without error
**Code/Logs**
```
reddit = praw.Reddit("my_config")
print(reddit.read_only) # true
sub1_ = reddit.submission(id='h4tkd')
sub1_.selftext # ''
sub1_.comments.replace_more(limit=None)
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-42-e5c87477cd15> in <module>
----> 1 sub1_.comments.replace_more(limit=None)
~/miniconda3/envs/tensorflow-2.3/lib/python3.7/site-packages/praw/models/comment_forest.py in replace_more(self, limit, threshold)
176 continue
177
--> 178 new_comments = item.comments(update=False)
179 if remaining is not None:
180 remaining -= 1
~/miniconda3/envs/tensorflow-2.3/lib/python3.7/site-packages/praw/models/reddit/more.py in comments(self, update)
65 if self._comments is None:
66 if self.count == 0: # Handle "continue this thread"
---> 67 return self._continue_comments(update)
68 assert self.children, "Please file a bug report with PRAW."
69 data = {
~/miniconda3/envs/tensorflow-2.3/lib/python3.7/site-packages/praw/models/reddit/more.py in _continue_comments(self, update)
42 def _continue_comments(self, update):
43 assert not self.children, "Please file a bug report with PRAW."
---> 44 parent = self._load_comment(self.parent_id.split("_", 1)[1])
45 self._comments = parent.replies
46 if update:
~/miniconda3/envs/tensorflow-2.3/lib/python3.7/site-packages/praw/models/reddit/more.py in _load_comment(self, comment_id)
58 },
59 )
---> 60 assert len(comments.children) == 1, "Please file a bug report with PRAW."
61 return comments.children[0]
62
AssertionError: Please file a bug report with PRAW.
```
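As a stopgap on my side (not an official PRAW recipe), retrying and then skipping the malformed "continue this thread" stub keeps the rest of the crawl alive, since the failure surfaces as a plain `AssertionError`:

```python
import time

for attempt in range(3):
    try:
        sub1_.comments.replace_more(limit=None)
        break
    except AssertionError:
        # Malformed MoreComments stub; wait and retry, then give up.
        time.sleep(1)
```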
**System Info**
- OS: Ubuntu 18.04.5 LTS
- Python: Python 3.7.7
- PRAW Version: praw 7.2.0
| closed | 2021-04-09T05:46:50Z | 2022-12-07T05:40:53Z | https://github.com/praw-dev/praw/issues/1709 | [
"Stale",
"Auto-closed - Stale"
] | wenxichen | 4 |
JaidedAI/EasyOCR | pytorch | 1,305 | Segmentation fault (core dumped) | ...deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/sdb2/Software/andconda3Install/envs/molannotator/lib/python3.11/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet50_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet50_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
/sdb2/Software/andconda3Install/envs/molannotator/lib/python3.11/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1682343970094/work/aten/src/ATen/native/TensorShape.cpp:3483.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/sdb2/Software/andconda3Install/envs/molannotator/lib/python3.11/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn("None of the inputs have requires_grad=True. Gradients will be None")
Segmentation fault (core dumped)
 | open | 2024-09-11T07:27:16Z | 2024-09-11T07:28:13Z | https://github.com/JaidedAI/EasyOCR/issues/1305 | [] | SunNoJJ | 1
coqui-ai/TTS | deep-learning | 3,649 | [Bug] Unable to use xtts_v2 with mps device on Apple Silicon | ### Describe the bug
I have an M1 Max with 32 cores and 64 GB of unified memory. So if MPS is meant to work, it should work quite fast. But currently it doesn't work at all; it just hangs.
I'm running it this way: `tts --device mps --model_name "tts_models/multilingual/multi-dataset/xtts_v2" --speaker_idx 'Daisy Studious' --language_idx en --text "Hello world" --out_path ./output/speech.wav`
If I change the device to cpu, it works just fine and rather quickly. If I change it to mps, it just hangs until I Ctrl-C.
I also have this info:
```
PyTorch version: 2.2.2
Is MPS (Metal Performance Shader) built? True
Is MPS available? True
Using device: mps
```
Am I missing something?
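For completeness, here is the Python-API variant I am testing with PyTorch's MPS CPU fallback enabled; some ops used by XTTS may lack MPS kernels, and the fallback helps distinguish a missing-kernel hang from a real MPS bug. The `speaker` and `language` kwargs below mirror the CLI flags above and may differ across TTS versions; the env var must be set before torch is imported:

```python
import os

os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"  # set before importing torch

import torch
from TTS.api import TTS

print(torch.backends.mps.is_available(), torch.backends.mps.is_built())

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to("mps")
tts.tts_to_file(
    text="Hello world",
    speaker="Daisy Studious",
    language="en",
    file_path="output/speech.wav",
)
```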
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.2.2",
"TTS": "0.22.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Darwin",
"architecture": [
"64bit",
""
],
"processor": "arm",
"python": "3.10.10",
"version": "Darwin Kernel Version 23.4.0: Fri Mar 15 00:10:42 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6000"
}
}
``` | closed | 2024-03-28T16:09:20Z | 2024-10-29T18:03:40Z | https://github.com/coqui-ai/TTS/issues/3649 | [
"bug",
"wontfix"
] | vesper8 | 7 |