repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
httpie/cli | python | 965 | Incomplete responses do not cause an error | It seems that httpie does not validate the number of received bytes.
Here are some explanations: https://blog.petrzemek.net/2018/04/22/on-incomplete-http-reads-and-the-requests-library-in-python/
With this demo script:
```python
#!/usr/bin/env python3
#
# An HTTP server that returns fewer bytes in the body of the response than
# stated in the Content-Length header.
#
import socketserver


class MyTCPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        print(data.decode())
        self.request.sendall(
            b'HTTP/1.1 200 OK\r\n'
            b'Content-Length: 10\r\n'
            b'\r\n'
            b'123456'
        )


class MyTCPServer(socketserver.TCPServer):
    allow_reuse_address = True


with MyTCPServer(('localhost', 8080), MyTCPHandler) as server:
    server.serve_forever()
```
...httpie happily returns the incomplete response without raising an error.
```
$ http GET :8080/
HTTP/1.1 200 OK
Content-Length: 10
123456
$ echo $?
0
```
On the other hand, curl raises an error:
```
$ curl http://127.0.0.1:8080/
curl: (18) transfer closed with 4 bytes remaining to read
123456$
```
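One workaround sketch, assuming the `requests` library (which httpie is built on): manually compare the received byte count against the `Content-Length` header:
```python
import requests

resp = requests.get('http://localhost:8080/')
expected = int(resp.headers.get('Content-Length', len(resp.content)))
if len(resp.content) != expected:
    # the incomplete read that requests (and thus httpie) currently accepts silently
    raise IOError('incomplete read: got %d of %d bytes' % (len(resp.content), expected))
```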
More details can be found in the blog post. It seems that requests 3 will contain an option to force validation of incomplete responses, but it's not yet released. There are also workarounds for the current behavior (one is sketched above). | open | 2020-09-17T15:27:19Z | 2022-01-14T10:23:12Z | https://github.com/httpie/cli/issues/965 | [
"enhancement"
] | dbrgn | 1 |
MagicStack/asyncpg | asyncio | 696 | Can connection pools be used for lots of long lived connections? | This is more of a question than an issue. In my case I have a websocket server using the Python `websockets` package. Since it's async I decided to use this library, and my implementation of it works, at least on the surface level. I did some stress tests and immediately found out that concurrency is an issue: it raises `asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress`. This is not the lib's fault; it's obvious what's going on. I am creating a connection object at the global scope of the script and using that same object throughout the entire program for everything, including all the different connections. These connections are long lived, and ideally I would like to be able to support hundreds to thousands of websocket connections (users). A naive approach is opening a new asyncpg connection for every websocket connection, but I doubt it's a smart idea to open thousands of database connections when working with thousands of websocket connections.
In the documentation I found connection pooling; however, a concern of mine is that in its example it's used for short-lived connections in the context of an HTTP request, not a long-lived socket. My idea was to have a pool object at the global scope and only acquire a connection from the pool for every database operation that happens during the life of that websocket connection, which under peak loads I would say is about once per second (for each connection). My concern with this, however, is the performance. Does it take time to acquire from the pool or is it instant? What happens under high loads with lots of concurrent operations, where the pool is being acquired but some other operation still hasn't finished its `with` block? Can multiple concurrent pool acquisitions happen, and roughly how many?
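For reference, here is a minimal sketch of the pattern I have in mind; one process-wide pool, acquired per operation (the pool sizes, DSN, and query are illustrative placeholders, not from my actual app):
```python
import asyncio
import asyncpg

async def main():
    # one pool for the whole process, shared by every websocket handler
    pool = await asyncpg.create_pool(
        dsn='postgresql://user:pass@localhost/db',  # placeholder DSN
        min_size=5, max_size=20)

    async def on_message(user_id):
        # hold a connection only for the duration of a single query
        async with pool.acquire() as conn:
            return await conn.fetchrow(
                'SELECT * FROM users WHERE id = $1', user_id)

    print(await on_message(1))

asyncio.run(main())
```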
I'm going to attempt this implementation, test it, and respond with my findings. But if someone else can give insight into whether this is a good idea and whether it will scale well, it would be greatly appreciated. | closed | 2021-01-25T19:27:04Z | 2021-01-27T18:38:31Z | https://github.com/MagicStack/asyncpg/issues/696 | [] | Directory | 6 |
browser-use/browser-use | python | 117 | [Node.js port] Created a node.js (typescript) re-write. Contributors are welcome! | Hey, guys!
I didn't find any JS wrappers for this Python lib and didn't have much luck wrapping the original Python code inside a Node module, so I created a TypeScript port of this lib, which is Node.js-compatible!
If anybody is interested in a JS version of this, you're very welcome.
repo: https://github.com/Dankovk/browser-use-js
npm: https://www.npmjs.com/package/browser-use-node | open | 2024-12-24T22:34:20Z | 2024-12-28T07:45:42Z | https://github.com/browser-use/browser-use/issues/117 | [] | Dankovk | 4 |
PokeAPI/pokeapi | api | 576 | Missing legendary Pokemon encounters. | There are no entries for any legendary encounters in the API despite other event encounters (like starters) being present; legendaries should be included. | closed | 2021-02-26T22:38:55Z | 2023-03-17T13:56:32Z | https://github.com/PokeAPI/pokeapi/issues/576 | [] | SimplyBLGDev | 10 |
huggingface/datasets | machine-learning | 6,437 | Problem in training iterable dataset | ### Describe the bug
I am using PyTorch DDP (Distributed Data Parallel) to train my model. Since the data is too large to load into memory at once, I am using load_dataset to read the data as an iterable dataset. I have used datasets.distributed.split_dataset_by_node to distribute the dataset. However, I have noticed that this distribution results in different processes having different amounts of data to train on. As a result, when the earliest process finishes training and starts predicting on the test set, other processes are still training, causing the overall training speed to be very slow.
### Steps to reproduce the bug
```
def train(args, model, device, train_loader, optimizer, criterion, epoch, length):
    model.train()
    idx_length = 0
    for batch_idx, data in enumerate(train_loader):
        s_time = time.time()
        X = data['X']
        target = data['y'].reshape(-1, 28)
        X, target = X.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(X)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        idx_length += 1
        if batch_idx % args.log_interval == 0:
            # print('Train Epoch: {} Batch_idx: {} Process: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
            #     epoch, batch_idx, torch.distributed.get_rank(), batch_idx * len(X), length / torch.distributed.get_world_size(),
            #     100. * batch_idx * len(X) * torch.distributed.get_world_size() / length, loss.item()))
            print('Train Epoch: {} Batch_idx: {} Process: {} [{}/{} ({:.0f}%)]\t'.format(
                epoch, batch_idx, torch.distributed.get_rank(), batch_idx * len(X), length / torch.distributed.get_world_size(),
                100. * batch_idx * len(X) * torch.distributed.get_world_size() / length))
            if args.dry_run:
                break
    print('Process %s length: %s time: %s' % (torch.distributed.get_rank(), idx_length, datetime.datetime.now()))


train_iterable_dataset = load_dataset("parquet", data_files=data_files, split="train", streaming=True)
test_iterable_dataset = load_dataset("parquet", data_files=data_files, split="test", streaming=True)
train_iterable_dataset = train_iterable_dataset.map(process_fn)
test_iterable_dataset = test_iterable_dataset.map(process_fn)
train_iterable_dataset = train_iterable_dataset.map(scale)
test_iterable_dataset = test_iterable_dataset.map(scale)
train_iterable_dataset = datasets.distributed.split_dataset_by_node(
    train_iterable_dataset, world_size=world_size, rank=local_rank).shuffle(seed=1234)
test_iterable_dataset = datasets.distributed.split_dataset_by_node(
    test_iterable_dataset, world_size=world_size, rank=local_rank).shuffle(seed=1234)
print(torch.distributed.get_rank(), train_iterable_dataset.n_shards, test_iterable_dataset.n_shards)

train_kwargs = {'batch_size': args.batch_size}
test_kwargs = {'batch_size': args.test_batch_size}
if use_cuda:
    cuda_kwargs = {'num_workers': 3,  # ngpus_per_node,
                   'pin_memory': True,
                   'shuffle': False}
    train_kwargs.update(cuda_kwargs)
    test_kwargs.update(cuda_kwargs)

train_loader = torch.utils.data.DataLoader(
    train_iterable_dataset, **train_kwargs,
    # sampler=torch.utils.data.distributed.DistributedSampler(
    #     train_iterable_dataset,
    #     num_replicas=ngpus_per_node,
    #     rank=0)
)
test_loader = torch.utils.data.DataLoader(
    test_iterable_dataset, **test_kwargs,
    # sampler=torch.utils.data.distributed.DistributedSampler(
    #     test_iterable_dataset,
    #     num_replicas=ngpus_per_node,
    #     rank=0)
)

for epoch in range(1, args.epochs + 1):
    start_time = time.time()
    train_iterable_dataset.set_epoch(epoch)
    test_iterable_dataset.set_epoch(epoch)
    train(args, model, device, train_loader, optimizer, criterion, epoch, train_len)
    test(args, model, device, criterion2, test_loader)
```
And here's part of the output:
```
Train Epoch: 1 Batch_idx: 5000 Process: 0 [320000/4710975.0 (7%)]
Train Epoch: 1 Batch_idx: 5000 Process: 1 [320000/4710975.0 (7%)]
Train Epoch: 1 Batch_idx: 5000 Process: 2 [320000/4710975.0 (7%)]
Train Epoch: 1 Batch_idx: 5862 Process: 3 Data_length: 12 coststime: 0.04095172882080078
Train Epoch: 1 Batch_idx: 5862 Process: 0 Data_length: 3 coststime: 0.0751960277557373
Train Epoch: 1 Batch_idx: 5867 Process: 3 Data_length: 49 coststime: 0.0032558441162109375
Train Epoch: 1 Batch_idx: 5872 Process: 1 Data_length: 2 coststime: 0.022842884063720703
Train Epoch: 1 Batch_idx: 5876 Process: 3 Data_length: 63 coststime: 0.002694845199584961
Process 3 length: 5877 time: 2023-11-17 17:03:26.582317
Train epoch 1 costTime: 241.72063446044922s . Process 3 Start to test.
3 0 tensor(45508.8516, device='cuda:3')
3 100 tensor(45309.0469, device='cuda:3')
3 200 tensor(45675.3047, device='cuda:3')
3 300 tensor(45263.0273, device='cuda:3')
Process 3 Reduce metrics.
Train Epoch: 2 Batch_idx: 0 Process: 3 [0/4710975.0 (0%)]
Train Epoch: 1 Batch_idx: 5882 Process: 1 Data_length: 63 coststime: 0.05185818672180176
Train Epoch: 1 Batch_idx: 5887 Process: 1 Data_length: 12 coststime: 0.006895303726196289
Process 1 length: 5888 time: 2023-11-17 17:20:48.578204
Train epoch 1 costTime: 1285.7279663085938s . Process 1 Start to test.
1 0 tensor(45265.9141, device='cuda:1')
```
### Expected behavior
I'd like to know how to fix this problem.
### Environment info
```
torch==2.0
datasets==2.14.0
```
| open | 2023-11-20T03:04:02Z | 2024-05-22T03:14:13Z | https://github.com/huggingface/datasets/issues/6437 | [] | 21Timothy | 5 |
rgerum/pylustrator | matplotlib | 70 | Linestyle string shown by default - without "" even if those seems required "-" and changes not saved. | When the linestyle is changed to another value, quotes should be added around the string (as in the screenshot below); otherwise an error is thrown to the console.
In addition, the changes performed to the linestyle are not propagated to the code snippet generated by Pylustrator.

Here is a MWE:
Before Pylustrator execution
```python
import matplotlib.pyplot as plt
import numpy as np
import pylustrator
x = np.linspace(1, 10)
y = x**2
pylustrator.start()
fig, ax = plt.subplots()
ax.plot(x, y)
plt.show()
```
And here is the code snippet added by Pylustrator that does not modify the linestyle.
```python
#% start: automatic generated code from pylustrator
plt.figure(1).ax_dict = {ax.get_label(): ax for ax in plt.figure(1).axes}
import matplotlib as mpl
getattr(plt.figure(1), '_pylustrator_init', lambda: ...)()
#% end: automatic generated code from pylustrator
```
I am using the following package versions:
- matplotlib 3.10.0
- numpy 2.2.2
- spyder 6.0.3
- pylustrator 1.3.0 | open | 2025-01-22T10:40:06Z | 2025-01-22T10:40:06Z | https://github.com/rgerum/pylustrator/issues/70 | [] | FloGom | 0 |
graphql-python/graphene | graphql | 966 | Exposing Mypy-compatible type information | I'd like to use Mypy for checking a Graphene-based app. Is anyone working on stubs or, even better, a Mypy plugin?
What I imagine is writing:
```
class Me(ObjectType[User]):
    full_name = graphene.String()

    def resolve_full_name(self, info):
        return self.get_full_name()
```
And getting a guarantee that User.get_full_name() returns something assignable to Optional[String]. | closed | 2019-05-12T08:05:58Z | 2023-10-16T23:52:27Z | https://github.com/graphql-python/graphene/issues/966 | [
"wontfix"
] | ktosiek | 8 |
DistrictDataLabs/yellowbrick | scikit-learn | 539 | TSNE size & title bug | **Describe the bug**
Looks like our `TSNEVisualizer` might have a bug that causes an error on instantiation if either the `size` or `title` parameters are used.
**To Reproduce**
```python
from yellowbrick.text import TSNEVisualizer
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = load_data('hobbies')
tfidf = TfidfVectorizer()
docs = tfidf.fit_transform(corpus.data)
labels = corpus.target
tsne = TSNEVisualizer(size=(1080, 720))
```
or
```
tsne = TSNEVisualizer(title="My Special TSNE Visualizer")
```
**Dataset**
This bug was triggered using the YB hobbies corpus.
**Expected behavior**
Users should be able to influence the size of the visualizer on instantiation using the `size` parameter and a tuple with `(width, height)` in pixels, and the title of the visualizer using the `title` parameter and a string.
**Traceback**
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-120fbfcec07c> in <module>()
----> 1 tsne = TSNEVisualizer(size=(1080, 720))
2 tsne.fit(labels)
3 tsne.poof()
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in __init__(self, ax, decompose, decompose_by, labels, classes, colors, colormap, random_state, **kwargs)
180
181 # TSNE Parameters
--> 182 self.transformer_ = self.make_transformer(decompose, decompose_by, kwargs)
183
184 def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in make_transformer(self, decompose, decompose_by, tsne_kwargs)
234 # Add the TSNE manifold
235 steps.append(('tsne', TSNE(
--> 236 n_components=2, random_state=self.random_state, **tsne_kwargs)))
237
238 # return the pipeline
TypeError: __init__() got an unexpected keyword argument 'size'
```
or for `title`:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-64-92c88e0bdd33> in <module>()
----> 1 tsne = TSNEVisualizer(title="My Special TSNE Visualizer")
2 tsne.fit(labels)
3 tsne.poof()
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in __init__(self, ax, decompose, decompose_by, labels, classes, colors, colormap, random_state, **kwargs)
180
181 # TSNE Parameters
--> 182 self.transformer_ = self.make_transformer(decompose, decompose_by, kwargs)
183
184 def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in make_transformer(self, decompose, decompose_by, tsne_kwargs)
234 # Add the TSNE manifold
235 steps.append(('tsne', TSNE(
--> 236 n_components=2, random_state=self.random_state, **tsne_kwargs)))
237
238 # return the pipeline
TypeError: __init__() got an unexpected keyword argument 'title'
```
**Desktop (please complete the following information):**
- macOS
- Python Version 3.6
- Yellowbrick Version 0.8
| closed | 2018-08-01T17:43:21Z | 2018-11-02T08:04:26Z | https://github.com/DistrictDataLabs/yellowbrick/issues/539 | [
"type: bug",
"priority: high"
] | rebeccabilbro | 6 |
nteract/papermill | jupyter | 220 | Why does the output of "execute_notebook" return a "nbformat" notebook instead of a `papermill.api.Notebook`? | I noticed that the output of `execute_notebook` returns a "regular" nbformat-style notebook, while `read_notebook` returns a nifty papermill notebook object. Is there a particular reason for this discrepancy? It could be nice to have papermill notebook objects as the output of `execute_notebook`, in case people are executing them interactively. | closed | 2018-09-28T16:57:16Z | 2019-02-15T00:09:13Z | https://github.com/nteract/papermill/issues/220 | [
"question"
] | choldgraf | 3 |
vaexio/vaex | data-science | 2,057 | [BUG-REPORT] Dataframe with virtual columns and specific column names fails export to csv | **Description**
The issue occurs when converting to CSV with specific column names like "description" or "dtypes". I believe this has to do with the attributes on the DataFrame object being overridden when they coincide with a column name.
**Software information**
- Vaex version (`import vaex; vaex.__version__)`: {'vaex': '4.9.1', 'vaex-core': '4.9.1', 'vaex-viz': '0.5.1', 'vaex-hdf5': '0.12.1', 'vaex-server': '0.8.1', 'vaex-astro': '0.9.1', 'vaex-jupyter': '0.7.0', 'vaex-ml': '0.17.0', 'vaex-graphql': '0.2.0'}
- Vaex was installed via: source
- OS: Ubuntu 20.04
**Additional information**
Source code to reproduce:
```python
import vaex
df = vaex.open("test.csv")
df = df.as_arrow()
df.export_csv("test2.csv")
```
test.csv:
```
description,test
a,b
```
```python
Traceback (most recent call last):
File "<Path>/python3.8/site-packages/vaex/scopes.py", line 113, in evaluate
result = self[expression]
File "<Path>/python3.8/site-packages/vaex/scopes.py", line 198, in __getitem__
raise KeyError("Unknown variables or column: %r" % (variable,))
KeyError: "Unknown variables or column: 'as_arrow(____description)'"
``` | closed | 2022-05-20T10:33:22Z | 2022-07-26T12:05:54Z | https://github.com/vaexio/vaex/issues/2057 | [] | grafail | 3 |
plotly/dash | jupyter | 2,502 | [BUG] - Infinite loop - App refresh when mkdocs, ipython, gunicorn all installed | **Describe your context**
I have a minimum reproducible example pushed to a repo [here](https://github.com/schlich/dash-bug) using the Dash tutorial app. I am using WSL to run a local dev container and Poetry for my package manager.
- replace the result of `pip list | grep dash` below
```
dash 0.42.0
dash-core-components 0.47.0
dash-html-components 0.16.0
dash-renderer 0.23.0
dash-table 3.6.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Windows/WSL
- Browser Firefox
- Version 111.0.1
**Describe the bug**
When running the app, the page refreshes in an infinite loop. I am not sure what is triggering the refresh. I added a `print("hello")` statement that prints infinitely to the console.
Similar behavior is reported in [this thread](https://community.plotly.com/t/callback-failed-the-server-did-not-respond-browser-keeps-updating/54496/20) on the forums, in which I have also commented.
Notably, the bug occurs when I have all three of `ipython`, `mkdocs`, and `gunicorn` installed. Dropping `ipython` or `mkdocs` solves the error, while dropping `gunicorn` gives me the error `ModuleNotFoundError: No module named 'pkg_resources'`; however, installing `setuptools` doesn't fix the issue either.
**Screenshots**
Link to screencast [here](https://drive.google.com/file/d/10i_4RAjRVdKqYEEeVVWEfE37UcpkYG24/view?usp=sharing)
| closed | 2023-04-08T06:31:22Z | 2023-06-16T10:47:59Z | https://github.com/plotly/dash/issues/2502 | [] | schlich | 0 |
microsoft/unilm | nlp | 815 | Facing problem while extracting key value pair using LayoutLMV2 model | Hi,
The model I am using is LayoutLMv2.
I am trying to extract key-value pairs from scanned invoices, but I am getting an error. I just want to check how the model extracts key-value pairs from scanned invoices, and whether I need to train the model on a custom dataset for key-value pair extraction. Below is the code which I have tried. I need help with this.
Thank you.
```
# imports added for completeness; note that LayoutLMv2ForRelationExtraction
# comes from the layoutlmft/unilm fork of transformers, not every release
import torch
from PIL import Image
from transformers import (AutoTokenizer, LayoutLMv2FeatureExtractor,
                          LayoutLMv2ForRelationExtraction, LayoutLMv2Processor)

# path_1 is the local model path (defined elsewhere)
feature_extractor = LayoutLMv2FeatureExtractor(apply_ocr=True)
tokenizer = AutoTokenizer.from_pretrained(path_1, pad_token='<pad>')
processor = LayoutLMv2Processor(feature_extractor, tokenizer)
model = LayoutLMv2ForRelationExtraction.from_pretrained(path_1)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

image_file = 'image2.png'
image = Image.open(image_file).convert('RGB')
image.size

encoded_inputs = processor(image, return_tensors="pt")
encoded_inputs.keys()

for k, v in encoded_inputs.items():
    print(k, v.shape)

for k, v in encoded_inputs.items():
    encoded_inputs[k] = v.to(model.device)

# forward pass
outputs = model(**encoded_inputs)
```
This is the error
```
TypeError Traceback (most recent call last)
c:\Users\name\Parallel Project\Trans_LayoutXLM.ipynb Cell 7 in <cell line: 5>()
2 encoded_inputs[k] = v.to(model.device)
4 # forward pass
----> 5 outputs = model(**encoded_inputs)
File c:\Users\name\.conda\envs\layoutlmft\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File c:\Users\name\.conda\envs\layoutlmft\lib\site-packages\transformers\models\layoutlmv2\modeling_layoutlmv2.py:1598, in LayoutLMv2ForRelationExtraction.forward(self, input_ids, bbox, labels, image, attention_mask, token_type_ids, position_ids, head_mask, entities, relations)
1596 sequence_output, image_output = outputs[0][:, :seq_length], outputs[0][:, seq_length:]
1597 sequence_output = self.dropout(sequence_output)
-> 1598 loss, pred_relations = self.extractor(sequence_output, entities, relations)
1600 return RegionExtractionOutput(
1601 loss=loss,
1602 entities=entities,
(...)
1605 hidden_states=outputs[0],
1606 )
File c:\Users\name\.conda\envs\layoutlmft\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
...
-> 1421 batch_size = len(relations)
1422 new_relations = []
1423 for b in range(batch_size):
TypeError: object of type 'NoneType' has no len()
``` | open | 2022-08-04T15:29:37Z | 2022-08-04T15:29:37Z | https://github.com/microsoft/unilm/issues/815 | [] | Laxmi530 | 0 |
explosion/spaCy | machine-learning | 13,449 | SpaCy is not building today | ## How to reproduce the behaviour
I am building the devcontainer for https://github.com/lovellbrian/cpu and spaCy is not building; it may be due to `cpdef` instead of `cdef` usage.
## Your Environment
* Operating System: Ubuntu 22.04
* Python Version Used: 3.10
* spaCy Version Used: spacy-3.0.6.tar.gz
* Environment Information:
* Downloading spacy-3.0.6.tar.gz (7.1 MB)
1055.3 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.1/7.1 MB 6.2 MB/s eta 0:00:00
1056.4 Installing build dependencies: started
1074.9 Installing build dependencies: finished with status 'done'
1074.9 Getting requirements to build wheel: started
1079.7 Getting requirements to build wheel: finished with status 'error'
1079.8 error: subprocess-exited-with-error
1079.8
1079.8 × Getting requirements to build wheel did not run successfully.
1079.8 │ exit code: 1
1079.8 ╰─> [164 lines of output]
1079.8
1079.8 Error compiling Cython file:
1079.8 ------------------------------------------------------------
1079.8 ...
1079.8 int length
1079.8
1079.8
1079.8 cdef class Vocab:
1079.8 cdef Pool mem
1079.8 cpdef readonly StringStore strings
1079.8 ^
1079.8 ------------------------------------------------------------
1079.8
1079.8 spacy/vocab.pxd:28:10: Variables cannot be declared with 'cpdef'. Use 'cdef' instead.
1079.8
1079.8 Error compiling Cython file:
1079.8 ------------------------------------------------------------ | closed | 2024-04-20T04:59:31Z | 2024-07-06T00:02:30Z | https://github.com/explosion/spaCy/issues/13449 | [
"install"
] | lovellbrian | 18 |
graphql-python/gql | graphql | 353 | Modify a query | Hi!
How can I modify a query on exception? Take this code as an example:
```
async with Client(transport=transport) as s:
    query = gql("""
        query example{
            first {example1}
            second {example2}
        }
    """)
    try:
        result = await s.execute(query)
    except TransportQueryError:
        # remove 'second {example2}' in query
        result = await s.execute(query)
```
My query is very long and I don't want to rewrite the same query with fewer lines by recreating a `query = gql()` variable when the error occurs.
I don't know if this is the best way to bypass the error, but I would like to remove 'second {example2}' when TransportQueryError is raised, to retry the request and get the result without the error.
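One idea would be to build the document programmatically instead of editing the query string; here is a sketch assuming gql's DSL module (the `First`/`Second` type names are placeholders for the schema's actual types, and the client must know the schema, e.g. via `fetch_schema_from_transport=True`):
```python
from gql.dsl import DSLQuery, DSLSchema, dsl_gql

ds = DSLSchema(client.schema)

def build_query(include_second: bool):
    # select 'first {example1}' always, 'second {example2}' only on the first try
    fields = [ds.Query.first.select(ds.First.example1)]
    if include_second:
        fields.append(ds.Query.second.select(ds.Second.example2))
    return dsl_gql(DSLQuery(*fields))
```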
Thanks | closed | 2022-08-18T17:36:24Z | 2022-08-18T20:06:31Z | https://github.com/graphql-python/gql/issues/353 | [
"type: question or discussion"
] | Hadevmin | 2 |
microsoft/nni | data-science | 4,937 | KeyError: "Fixed context with None not found. Existing values are: OrderedDict | **Describe the issue**:
I am using hello_nas.py as tutorial code to rewrite my model structure, and when I run it, the trial log gives me this error. How can I fix it? Thank you very much!
**Environment**:
- NNI version: 2.7
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 20.04
- Server OS (for remote mode only): Ubuntu 20.04
- Python version: 3.8
- PyTorch/TensorFlow version: Pytorch 1.11+cu113
- Is conda/virtualenv/venv used?: NO
- Is running in Docker?: Yes

```
@model_wrapper
class ModelSpace(nn.Module):
    def __init__(self):
        # super(BiLSTM, self).__init__()
        super().__init__()
        self.sofa_network_lstm1 = nn.LSTM(input_size=13, hidden_size=13, bidirectional=True, batch_first=True)
        neural_lstm1 = nn.ValueChoice([64, 128, 256])
        self.sofa_network_lstm2 = nn.LSTM(input_size=26, hidden_size=neural_lstm1, bidirectional=True, batch_first=True)
        self.sofa_network_dropout1 = nn.Dropout(nn.ValueChoice([0.25, 0.5, 0.75]))
        self.sofa_network_lstm3 = nn.LSTM(input_size=neural_lstm1 * 2, hidden_size=13, bidirectional=True, batch_first=True)
        # self.sofa_network_fc = nn.Linear(26, 40)
        self.sofa_network_bn = nn.BatchNorm1d(1)
        self.sofa_network_dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(neural_lstm1, 50)
        self.fc2 = nn.Linear(50, 1)
        self.activate = nn.GELU()
        self.softmax = nn.Softmax(dim=1)
        #################
        self.sofa_network_lstm1 = nn.LSTM(input_size=13, hidden_size=13, bidirectional=True, batch_first=True)
        neural_lstm1 = nn.ValueChoice([64, 128, 256])
        self.sofa_network_lstm2 = nn.LSTM(input_size=26, hidden_size=neural_lstm1, bidirectional=True, batch_first=True)
        self.sofa_network_dropout1 = nn.Dropout(nn.ValueChoice([0.25, 0.5, 0.75]))
        self.sofa_network_lstm4 = nn.LSTM(input_size=neural_lstm1 * 2, hidden_size=13, bidirectional=True, batch_first=True)
        self.sofa_network_bn = nn.BatchNorm1d(1)
        self.sofa_network_dropout2 = nn.Dropout(nn.ValueChoice([0.25, 0.5, 0.75]))
        self.fc1 = nn.Linear(200, 50)
        self.fc2 = nn.Linear(50, 1)
        self.activate = nn.GELU()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        # print(sofa, flush=True)
        # print('0{}'.format(x.shape), flush=True)
        x = torch.squeeze(x, dim=0)
        # print('1{}'.format(x.shape), flush=True)
        x, _ = self.sofa_network_lstm1(x)
        # print('2{}'.format(x.shape), flush=True)
        x, _ = self.sofa_network_lstm2(x)
        # print('3{}'.format(x.shape), flush=True)
        x, _ = self.sofa_network_lstm3(x)
        # print('{}'.format(x.shape), flush=True)
        # x = x.contiguous().view(-1, 25600)
        output = self.fc1(x)
        output = self.fc2(output)
        output = self.activate(output)
        output = self.softmax(output)
        return output


model_space = ModelSpace()
model_space
```
strategy
```
import nni.retiarii.strategy as strategy
search_strategy = strategy.Random(dedup=True) # dedup=False if deduplication is not wanted
```
```
import nni
from torchvision import transforms
# from torchvision.datasets import MNIST
from torch.utils.data import DataLoader


def train_epoch(model, device, train_loader, optimizer, epoch):
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % 10 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))


def test_epoch(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    accuracy = 100. * correct / len(test_loader.dataset)
    print('\nTest set: Accuracy: {}/{} ({:.0f}%)\n'.format(
        correct, len(test_loader.dataset), accuracy))
    return accuracy


def evaluate_model(model_cls):
    # "model_cls" is a class, need to instantiate
    model = model_cls()
    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # transf = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
    # train_loader = DataLoader(MNIST('data/mnist', download=True, transform=transf), batch_size=64, shuffle=True)
    # test_loader = DataLoader(MNIST('data/mnist', download=True, train=False, transform=transf), batch_size=64)
    train_dir = './preprocess/train_custom_age.csv'
    x_train = pd.read_csv(train_dir)
    input_batch, target_batch = get_data(x_train)
    dataset = Data.TensorDataset(input_batch, target_batch)
    train_loader = Data.DataLoader(dataset, 16, True)

    test_dir = './preprocess/test_custom_age.csv'
    x_test = pd.read_csv(test_dir)
    test_input_batch, target_batch = get_data(x_test)
    test_dataset = Data.TensorDataset(test_input_batch, target_batch)
    test_loader = Data.DataLoader(test_dataset, 16, True)

    for epoch in range(3):
        # train the model for one epoch
        train_epoch(model, device, train_loader, optimizer, epoch)
        # test the model for one epoch
        accuracy = test_epoch(model, device, test_loader)
        # call report intermediate result. Result can be float or dict
        nni.report_intermediate_result(accuracy)

    # report final test result
    nni.report_final_result(accuracy)
```
```
from nni.retiarii.evaluator import FunctionalEvaluator
evaluator = FunctionalEvaluator(evaluate_model)
from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig
exp = RetiariiExperiment(model_space, evaluator, [], search_strategy)
exp_config = RetiariiExeConfig('local')
exp_config.experiment_name = 'mnist_search'
exp_config.max_trial_number = 4 # spawn 4 trials at most
exp_config.trial_concurrency = 1 # will run one trial at a time
exp_config.trial_gpu_number = 1
exp_config.training_service.use_active_gpu = True
exp.run(exp_config, 21223)
```
**How to reproduce it?**: | closed | 2022-06-15T02:48:38Z | 2022-06-15T09:24:12Z | https://github.com/microsoft/nni/issues/4937 | [] | CYH4157 | 2 |
chiphuyen/stanford-tensorflow-tutorials | tensorflow | 4 | lack of definition of CONTENT_WEIGHT, STYLE_WEIGHT (in style_transfer_sols.py), prev_layer_name (in vgg_model_sols.py) | Hi, thanks for the post, very helpful.
As the title says, I found several variables undefined.
For prev_layer_name, I think it should be `prev_name.name`; however, ':' is not accepted as a scope name, so I changed ':' to '_' and it works.
For CONTENT_WEIGHT and STYLE_WEIGHT, how should they be defined?
(Of course, omitting the weights lets the program keep running.)
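(For reference, a common shape for these constants in style-transfer code; the values below are an illustrative guess, not the repo's actual ones:)
```python
CONTENT_WEIGHT = 0.01  # illustrative value
STYLE_WEIGHT = 1.0     # illustrative value
total_loss = CONTENT_WEIGHT * content_loss + STYLE_WEIGHT * style_loss
```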
Thanks
Larry | closed | 2017-03-01T06:27:02Z | 2017-03-01T22:58:01Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/4 | [] | tcglarry | 1 |
opengeos/leafmap | jupyter | 509 | leafmap Docker container fails to start | ### Environment Information
- leafmap version: latest?
- Python version: Looks like 3.11 from the traceback
- Operating System: Docker
### Description
Run the leafmap Docker image provided in the docs at https://leafmap.org/installation/#use-docker
### What I Did
```
➜ ~/jupyter docker run -it -p 8899:8888 giswqs/leafmap:latest
Entered start.sh with args: jupyter lab
Executing the command: jupyter lab
Traceback (most recent call last):
File "/opt/conda/bin/jupyter-lab", line 6, in <module>
from jupyterlab.labapp import main
File "/opt/conda/lib/python3.11/site-packages/jupyterlab/__init__.py", line 7, in <module>
from .handlers.announcements import ( # noqa
File "/opt/conda/lib/python3.11/site-packages/jupyterlab/handlers/announcements.py", line 14, in <module>
from jupyter_server.base.handlers import APIHandler
File "/opt/conda/lib/python3.11/site-packages/jupyter_server/base/handlers.py", line 23, in <module>
from jupyter_events import EventLogger
File "/opt/conda/lib/python3.11/site-packages/jupyter_events/__init__.py", line 3, in <module>
from .logger import EVENTS_METADATA_VERSION, EventLogger
File "/opt/conda/lib/python3.11/site-packages/jupyter_events/logger.py", line 18, in <module>
from .schema import SchemaType
File "/opt/conda/lib/python3.11/site-packages/jupyter_events/schema.py", line 18, in <module>
from .validators import draft7_format_checker, validate_schema
File "/opt/conda/lib/python3.11/site-packages/jupyter_events/validators.py", line 41, in <module>
JUPYTER_EVENTS_SCHEMA_VALIDATOR = Draft7Validator( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: create.<locals>.Validator.__init__() got an unexpected keyword argument 'registry'
```
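A possible workaround until the image is rebuilt; this is only a guess based on the traceback (`jupyter_events` passing `registry=` seems to need a newer `jsonschema`), so treat the pins as an assumption:
```
docker run -it -p 8899:8888 giswqs/leafmap:latest \
  bash -c "pip install -U 'jsonschema>=4.18' referencing && jupyter lab"
```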
| closed | 2023-08-18T14:26:12Z | 2023-08-19T01:00:55Z | https://github.com/opengeos/leafmap/issues/509 | [
"bug"
] | CloudNiner | 2 |
healthchecks/healthchecks | django | 640 | Origin checking failed error | Receiving this error in the logs:
```
Forbidden (Origin checking failed - https://example.com does not match any trusted origins.)
```
It's running under docker-compose, using traefik as a reverse proxy and keycloak/gatekeeper for SSO. The existing postgres instance is not shown below.
I'm using the REMOTE_USER_HEADER functionality to log in, which works.
To get around the error, I had to add this line:
```
CSRF_TRUSTED_ORIGINS = os.getenv("CSRF_TRUSTED_ORIGINS", "*").split(",")
```
in this file:
./hc/settings.py
And then I added the following env variable to the docker compose below:
CSRF_TRUSTED_ORIGINS
Was there a better way to get around that error?
```
  healthchecks:
    container_name: healthchecks
    build:
      context: $DOCKERDIR/appdata/healthchecks
      dockerfile: $DOCKERDIR/appdata/healthchecks/docker/Dockerfile
    command: bash -c 'while !</dev/tcp/postgres/5432; do sleep 1; done; uwsgi /opt/healthchecks/docker/uwsgi.ini'
    image: healthchecks:local
    restart: always
    networks:
      - t2_proxy
    environment:
      - DEFAULT_FROM_EMAIL=$FROM_EMAIL
      - EMAIL_HOST=$MAILJET_SMTP_SERVER
      - EMAIL_HOST_USER=$MAILJET_USERNAME
      - EMAIL_HOST_PASSWORD=$MAILJET_PASSWORD
      - SECRET_KEY=$HEALTHCHECKS_SECRET_KEY
      - REMOTE_USER_HEADER=HTTP_X_AUTH_EMAIL
      - REMOTE_USER_HEADER_TYPE=EMAIL
      - DB=postgres
      - DB_HOST=$POSTGRES_IP
      - DB_NAME=healthchecks
      - DB_PORT=5432
      - DB_USER=$HEALTHCHECKS_DB_USERNAME
      - DB_PASSWORD=$HEALTHCHECKS_DB_PASSWORD
      - ALLOWED_HOSTS=healthchecks,localhost,healthchecks.$DOMAINNAME,$DOMAINNAME
      - CSRF_TRUSTED_ORIGINS=https://healthchecks.$DOMAINNAME
      - SITE_ROOT=https://healthchecks.$DOMAINNAME

  gatekeeper_healthchecks:
    image: quay.io/gogatekeeper/gatekeeper:1.3.8
    restart: always
    container_name: gatekeeper_healthchecks
    command: --resources $GATEKEEPER_INTERNAL
    networks:
      - t2_proxy
    security_opt:
      - no-new-privileges:true
    entrypoint:
      - /opt/gatekeeper/gatekeeper
    environment:
      - PROXY_DISCOVERY_URL=$DISCOVERY_URL
      - PROXY_CLIENT_ID=$HEALTHCHECKS_CLIENT_ID
      - PROXY_CLIENT_SECRET=$HEALTHCHECKS_CLIENT_SECRET
      - PROXY_ENCRYPTION_KEY=$HEALTHCHECKS_ENCRYPTION_KEY
      - PROXY_LISTEN=:3000
      - PROXY_ENABLE_REFRESH_TOKEN=true
      - PROXY_UPSTREAM_URL=http://healthchecks:8000
    labels:
      - "traefik.enable=true"
      ## HTTP Routers
      - "traefik.http.routers.healthchecks-gate.entrypoints=https"
      - "traefik.http.routers.healthchecks-gate.rule=Host(`healthchecks.$DOMAINNAME`)"
      - "traefik.http.routers.healthchecks-gate.tls=true"
      - "traefik.http.routers.healthchecks-gate.tls.certresolver=dns-cloudflare"
      ## HTTP Services
      - "traefik.http.routers.healthchecks-gate.service=healthchecks-gatesvc"
      - "traefik.http.services.healthchecks-gatesvc.loadbalancer.server.port=3000"
      ## Flame
      - flame.type=application
      - flame.name=Healthchecks
      - flame.url=https://healthchecks.$DOMAINNAME
      - flame.icon=space-station
```
Also, I added this to the uwsgi.ini to fix issues with SSO redirects:
```
buffer-size = 32768
``` | closed | 2022-04-21T13:11:39Z | 2024-03-03T01:16:12Z | https://github.com/healthchecks/healthchecks/issues/640 | [] | nathanielread | 4 |
coqui-ai/TTS | deep-learning | 3,119 | [Feature request] Support voice clone on more languages, like zh-CN | **🚀 Feature Description**
Hi, I am trying to clone a voice in zh-CN, but this exception shows up:
```
Language zh-CN is not in the available languages: dict_keys(['en', 'fr-fr', 'pt-br']).
```
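For context, the call I'm making is roughly of this shape (a sketch assuming the YourTTS multilingual model, which is where the `['en', 'fr-fr', 'pt-br']` list appears to come from):
```python
from TTS.api import TTS

tts = TTS('tts_models/multilingual/multi-dataset/your_tts')
tts.tts_to_file(text='你好,世界', speaker_wav='speaker.wav',
                language='zh-CN',  # raises: not in dict_keys(['en', 'fr-fr', 'pt-br'])
                file_path='out.wav')
```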
How can I enable it, then? Any directions would be appreciated, and I'm happy to help contribute.
| closed | 2023-10-30T01:19:18Z | 2023-10-30T09:09:33Z | https://github.com/coqui-ai/TTS/issues/3119 | [
"feature request"
] | nobody4t | 1 |
xlwings/xlwings | automation | 1,924 | xlwings `Range.value` dropping formula error cells | #### OS (e.g. Windows 10 or macOS Sierra)
- macOS Monterey
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
- xlwings 0.24.9
- Office 365 (Excel v.16.61.1 22052000)
- Python 3.9
#### Describe your issue (incl. Traceback!)
(Read description in the next section for more explanation of how I got to this specific traceback, which is only one of the cases I ran into)
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/e3-work/opt/anaconda3/envs/new-modeling-toolkit/lib/python3.9/site-packages/xlwings/main.py", line 1993, in value
return conversion.read(self, None, self._options)
File "/Users/e3-work/opt/anaconda3/envs/new-modeling-toolkit/lib/python3.9/site-packages/xlwings/conversion/__init__.py", line 32, in read
pipeline(ctx)
File "/Users/e3-work/opt/anaconda3/envs/new-modeling-toolkit/lib/python3.9/site-packages/xlwings/conversion/framework.py", line 66, in __call__
stage(*args, **kwargs)
File "/Users/e3-work/opt/anaconda3/envs/new-modeling-toolkit/lib/python3.9/site-packages/xlwings/conversion/standard.py", line 136, in __call__
elif len(c.value[0]) == 1:
IndexError: list index out of range
```
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
In my case, xlwings silently drops cells from a range if there is a formula error. How I found this error was that I had two ranges that I expected to have the exact same dimensions.
- Doing `Range.shape` showed they had the same dimensions; however, one range had a row with all formula errors (`#REF!`, though it seems to work with other formula errors too, such as my example below).
- Doing `Range.value` on the two ranges, I would get nested lists/dataframes of different sizes (one `m x n` and the other `m-1 x n`), because it seemed like xlwings was silently dropping the row with formula errors.
I haven't tested all the combinations, but I've attached a small example workbook ([book1.xlsx](https://github.com/xlwings/xlwings/files/8783696/book1.xlsx)) to accompany the code below. In my small example, I also uncovered that if I try to do `Range.value` for a range larger than 1 cell with all formula errors, I get an error (see traceback above)
```python
import xlwings as xw
xw.Book("Book1.xlsx").set_mock_caller()
wb = xw.Book.caller()
wb.sheets["Sheet1"].range("A1").value
# >>> Returns nothing (presumably it's a None)
wb.sheets["Sheet1"].range("A1:B2").value
# >>> Returns error (traceback above), but would expect something like [[None, None], [None, None]]
wb.sheets["Sheet1"].range("A1:B3").value
# >>> Returns [2.0, 2.0], but would expect something like [[None, None], [None, None], [2.0, 2.0]]
``` | open | 2022-05-27T03:20:05Z | 2022-05-29T15:26:33Z | https://github.com/xlwings/xlwings/issues/1924 | [
"bug",
"dependency"
] | goroderickgo | 2 |
httpie/cli | python | 1,253 | --chunked with --raw raises an internal error | ```console
$ http --chunked pie.dev/post --raw '{"a": 1}'
http: error: AttributeError: 'bytes' object has no attribute 'encode'
``` | closed | 2021-12-27T09:59:48Z | 2021-12-29T09:41:45Z | https://github.com/httpie/cli/issues/1253 | [
"bug"
] | isidentical | 0 |
marimo-team/marimo | data-visualization | 3,666 | Exporting notebook as ipynb throws exception | ### Describe the bug
I have the latest marimo running on an Apple MacBook with a 1.4 GHz Quad-Core Intel Core i5 and 16 GB 2133 MHz RAM. Running the command `marimo export ipynb <notebook>` throws an exception.
### Environment
<details>
```
marimo export ipynb req_design_test.py
Traceback (most recent call last):
File "/usr/local/bin/marimo", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/marimo/_cli/export/commands.py", line 379, in ipynb
return watch_and_export(MarimoPath(name), output, watch, export_callback)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/marimo/_cli/export/commands.py", line 66, in watch_and_export
result = export_callback(marimo_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/marimo/_cli/export/commands.py", line 377, in export_callback
return export_as_ipynb(file_path, sort_mode=sort)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/marimo/_server/export/__init__.py", line 78, in export_as_ipynb
result = Exporter().export_as_ipynb(file_manager, sort_mode=sort_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/marimo/_server/export/exporter.py", line 179, in export_as_ipynb
graph = file_manager.app.graph
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/marimo/_ast/app.py", line 450, in graph
self._app._maybe_initialize()
File "/usr/local/lib/python3.11/site-packages/marimo/_ast/app.py", line 271, in _maybe_initialize
raise MultipleDefinitionError(
marimo._ast.errors.MultipleDefinitionError: This app can't be run because it has multiple definitions of the name datetime
```
</details>
### Code to reproduce
_No response_ | closed | 2025-02-03T05:18:28Z | 2025-02-05T00:50:40Z | https://github.com/marimo-team/marimo/issues/3666 | [
"bug"
] | himalayahall | 4 |
alteryx/featuretools | data-science | 1,884 | Improve Primitive Documentation to include Search, etc. | The primitives documentation should include some basic search and the ability to view primitives by category, for example Aggregation Primitives vs. Transform Primitives.
A stretch goal would be to embed a Python REPL that allows the user to experiment with primitives, like the [Lodash docs](https://lodash.com/docs/4.17.15) for example. | open | 2022-02-07T19:54:33Z | 2023-06-26T18:52:33Z | https://github.com/alteryx/featuretools/issues/1884 | [
"documentation"
] | dvreed77 | 1 |
plotly/dash | jupyter | 3,118 | alert users that depend on `plotly-latest` that the website is using latest v1 not the latest | > @archmoj we really need to mark plotly-latest with a console warning.
>
> @BPowell76 plotly-latest is the end of the v1.x line. Starting in v2.0 we stopped updating this so that the major update and following updates don't accidentally break existing projects. Please update your project to use a specific version from the CDN.
_Originally posted by @alexcjohnson in [#1794](https://github.com/plotly/dash/issues/1794#issuecomment-2573429370)_ | closed | 2025-01-06T16:18:33Z | 2025-02-21T13:18:09Z | https://github.com/plotly/dash/issues/3118 | [
"feature",
"P2",
"task"
] | archmoj | 10 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 349 | unable to install webrtcvad | ```
C:\Users\UJJWAL RASTOGI\Documents\GitHub\Real-Time-Voice-Cloning>pip3 install webrtcvad
Collecting webrtcvad
Using cached webrtcvad-2.0.10.tar.gz (66 kB)
Using legacy setup.py install for webrtcvad, since package 'wheel' is not installed.
Installing collected packages: webrtcvad
Running setup.py install for webrtcvad ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\ujjwal rastogi\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\UJJWAL RASTOGI\\AppData\\Local\\Temp\\pip-install-3lkfcsuw\\webrtcvad\\setup.py'"'"'; __file__='"'"'C:\\Users\\UJJWAL RASTOGI\\AppData\\Local\\Temp\\pip-install-3lkfcsuw\\webrtcvad\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\UJJWAL RASTOGI\AppData\Local\Temp\pip-record-ciqa_agr\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\ujjwal rastogi\appdata\local\programs\python\python38\Include\webrtcvad'
cwd: C:\Users\UJJWAL RASTOGI\AppData\Local\Temp\pip-install-3lkfcsuw\webrtcvad\
Complete output (9 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.8
copying webrtcvad.py -> build\lib.win-amd64-3.8
running build_ext
building '_webrtcvad' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\ujjwal rastogi\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\UJJWAL RASTOGI\\AppData\\Local\\Temp\\pip-install-3lkfcsuw\\webrtcvad\\setup.py'"'"'; __file__='"'"'C:\\Users\\UJJWAL RASTOGI\\AppData\\Local\\Temp\\pip-install-3lkfcsuw\\webrtcvad\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\UJJWAL RASTOGI\AppData\Local\Temp\pip-record-ciqa_agr\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\ujjwal rastogi\appdata\local\programs\python\python38\Include\webrtcvad' Check the logs for full command output.
```
 | closed | 2020-05-25T03:49:47Z | 2020-07-04T14:52:47Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/349 | [] | UJJWAL-1711 | 3 |
ray-project/ray | data-science | 51,223 | [Data] `Dataset.train_test_split` reads dataset twice | ### What happened + What you expected to happen
I'm following the [Fine-tuning a Torch object detection model](https://docs.ray.io/en/latest/train/examples/pytorch/torch_detection.html) example, and noticed when I call `train_test_split` the dataset is read twice.
I think it's because we call `Dataset.count()` in the method's implementation, and for non-trivial pipelines this causes the pipeline to execute. https://github.com/ray-project/ray/blob/d8835dfa4a0e7c0ea50ea01cc5617635ce2965f5/python/ray/data/dataset.py#L2131
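A sketch of the pattern (the pipeline below is hypothetical; the point is that the `count()` call inside `train_test_split` executes the pipeline once, and consuming either split executes it again):
```python
import ray

ds = ray.data.read_parquet('s3://bucket/data')           # placeholder pipeline
train_ds, test_ds = ds.train_test_split(test_size=0.2)   # Dataset.count() -> read #1
train_ds.take(1)                                         # consuming the split -> read #2
```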
### Versions / Dependencies
902b55a3ae432b3964b042d8e57e9211046e165a
### Reproduction script
See description.
### Issue Severity
Low: It annoys or frustrates me. | open | 2025-03-10T21:02:11Z | 2025-03-10T21:02:11Z | https://github.com/ray-project/ray/issues/51223 | [
"P2",
"performance",
"data"
] | bveeramani | 0 |
pydata/bottleneck | numpy | 385 | Failed to compile on Apple M1 [BUG] | I am trying to compile on Apple M1 with ``pip install .`` and get the following build failure. Any help would be appreciated.
```
DEPRECATION: Configuring installation scheme with distutils config files is deprecated and will no longer work in the near future. If you are using a Homebrew or Linuxbrew Python, please see discussion at https://github.com/Homebrew/homebrew-core/issues/76621
Processing /Users/.../bottleneck
DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Requirement already satisfied: numpy in /opt/homebrew/lib/python3.9/site-packages (from Bottleneck==1.4.0.dev0+117.gf2bc792) (1.20.3)
Building wheels for collected packages: Bottleneck
Building wheel for Bottleneck (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /opt/homebrew/opt/python@3.9/bin/python3.9 /opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /var/folders/jh/255j6mls5nd57kvdpm0hv2qw0000gn/T/tmp0zyrguhb
cwd: /private/var/folders/jh/255j6mls5nd57kvdpm0hv2qw0000gn/T/pip-req-build-x4fx9n_c
Complete output (210 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-11-arm64-3.9
creating build/lib.macosx-11-arm64-3.9/bottleneck
copying bottleneck/_version.py -> build/lib.macosx-11-arm64-3.9/bottleneck
copying bottleneck/__init__.py -> build/lib.macosx-11-arm64-3.9/bottleneck
copying bottleneck/_pytesttester.py -> build/lib.macosx-11-arm64-3.9/bottleneck
creating build/lib.macosx-11-arm64-3.9/bottleneck/benchmark
copying bottleneck/benchmark/bench_detailed.py -> build/lib.macosx-11-arm64-3.9/bottleneck/benchmark
copying bottleneck/benchmark/autotimeit.py -> build/lib.macosx-11-arm64-3.9/bottleneck/benchmark
copying bottleneck/benchmark/__init__.py -> build/lib.macosx-11-arm64-3.9/bottleneck/benchmark
copying bottleneck/benchmark/bench.py -> build/lib.macosx-11-arm64-3.9/bottleneck/benchmark
creating build/lib.macosx-11-arm64-3.9/bottleneck/slow
copying bottleneck/slow/reduce.py -> build/lib.macosx-11-arm64-3.9/bottleneck/slow
copying bottleneck/slow/__init__.py -> build/lib.macosx-11-arm64-3.9/bottleneck/slow
copying bottleneck/slow/nonreduce.py -> build/lib.macosx-11-arm64-3.9/bottleneck/slow
copying bottleneck/slow/move.py -> build/lib.macosx-11-arm64-3.9/bottleneck/slow
copying bottleneck/slow/nonreduce_axis.py -> build/lib.macosx-11-arm64-3.9/bottleneck/slow
creating build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/nonreduce_axis_test.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/scalar_input_test.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/reduce_test.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/util.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/move_test.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/__init__.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/input_modification_test.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/common.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/nonreduce_test.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/list_input_test.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/memory_test.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
copying bottleneck/tests/test_template.py -> build/lib.macosx-11-arm64-3.9/bottleneck/tests
creating build/lib.macosx-11-arm64-3.9/bottleneck/src
copying bottleneck/src/bn_config.py -> build/lib.macosx-11-arm64-3.9/bottleneck/src
copying bottleneck/src/__init__.py -> build/lib.macosx-11-arm64-3.9/bottleneck/src
copying bottleneck/src/bn_template.py -> build/lib.macosx-11-arm64-3.9/bottleneck/src
creating build/lib.macosx-11-arm64-3.9/bottleneck/tests/data
creating build/lib.macosx-11-arm64-3.9/bottleneck/tests/data/template_test
copying bottleneck/tests/data/template_test/truth.c -> build/lib.macosx-11-arm64-3.9/bottleneck/tests/data/template_test
copying bottleneck/tests/data/template_test/test_template.c -> build/lib.macosx-11-arm64-3.9/bottleneck/tests/data/template_test
UPDATING build/lib.macosx-11-arm64-3.9/bottleneck/_version.py
set build/lib.macosx-11-arm64-3.9/bottleneck/_version.py to '1.4.0.dev0+117.gf2bc792'
running build_ext
running config
clang -E -I/opt/homebrew/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/include/python3.9 -I/opt/homebrew/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/include/python3.9 -o _configtest.i _configtest.c
removing: _configtest.c _configtest.i
compiling '_configtest.c':
#pragma clang diagnostic error "-Wattributes"
int __attribute__((optimize("O3"))) have_attribute_optimize_opt_3(void*);
int main(void)
{
return 0;
}
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.sdk -falign-functions=8 -falign-functions=8 -c _configtest.c -o _configtest.o
_configtest.c:4:20: error: unknown attribute 'optimize' ignored [-Werror,-Wunknown-attributes]
int __attribute__((optimize("O3"))) have_attribute_optimize_opt_3(void*);
^
1 error generated.
failure.
removing: _configtest.c _configtest.o
clang -E -I/opt/homebrew/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/include/python3.9 -I/opt/homebrew/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/include/python3.9 -o _configtest.i _configtest.c
removing: _configtest.c _configtest.i
compiling '_configtest.c':
#include <math.h>
int check(void) {
return __builtin_isnan(0.);
}
int main(void)
{
return check();
}
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.sdk -falign-functions=8 -falign-functions=8 -c _configtest.c -o _configtest.o
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest
compiling '_configtest.c':
#include <math.h>
int check(void) {
return isnan(0.);
}
int main(void)
{
return check();
}
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.sdk -falign-functions=8 -falign-functions=8 -c _configtest.c -o _configtest.o
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest
compiling '_configtest.c':
#include <math.h>
int check(void) {
return _isnan(0.);
}
int main(void)
{
return check();
}
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.sdk -falign-functions=8 -falign-functions=8 -c _configtest.c -o _configtest.o
_configtest.c:5:12: error: implicit declaration of function '_isnan' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
return _isnan(0.);
^
1 error generated.
failure.
removing: _configtest.c _configtest.o
compiling '_configtest.c':
#ifndef __cplusplus
static inline int static_func (void)
{
return 0;
}
inline int nostatic_func (void)
{
return 0;
}
#endif
int main(void) {
int r1 = static_func();
int r2 = nostatic_func();
return r1 + r2;
}
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.sdk -falign-functions=8 -falign-functions=8 -c _configtest.c -o _configtest.o
success!
removing: _configtest.c _configtest.o
building 'bottleneck.reduce' extension
creating build/temp.macosx-11-arm64-3.9
creating build/temp.macosx-11-arm64-3.9/bottleneck
creating build/temp.macosx-11-arm64-3.9/bottleneck/src
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.sdk -falign-functions=8 -falign-functions=8 -I/opt/homebrew/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/numpy/core/include -I/opt/homebrew/include -I/opt/homebrew/opt/openssl@1.1/include -I/opt/homebrew/opt/sqlite/include -I/opt/homebrew/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibottleneck/src -Ibottleneck/include -c bottleneck/src/reduce.c -o build/temp.macosx-11-arm64-3.9/bottleneck/src/reduce.o -O2
In file included from bottleneck/src/reduce_template.c:9:
In file included from /Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/x86intrin.h:15:
In file included from /Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/immintrin.h:15:
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:50:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_vec_init_v2si(__i, 0);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:129:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_packsswb((__v4hi)__m1, (__v4hi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:159:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_packssdw((__v2si)__m1, (__v2si)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:189:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_packuswb((__v4hi)__m1, (__v4hi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:216:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_punpckhbw((__v8qi)__m1, (__v8qi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:239:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_punpckhwd((__v4hi)__m1, (__v4hi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:260:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_punpckhdq((__v2si)__m1, (__v2si)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:287:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_punpcklbw((__v8qi)__m1, (__v8qi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:310:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_punpcklwd((__v4hi)__m1, (__v4hi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:331:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_punpckldq((__v2si)__m1, (__v2si)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:352:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_paddb((__v8qi)__m1, (__v8qi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:373:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_paddw((__v4hi)__m1, (__v4hi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:394:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_paddd((__v2si)__m1, (__v2si)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:416:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_paddsb((__v8qi)__m1, (__v8qi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:439:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_paddsw((__v4hi)__m1, (__v4hi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:461:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_paddusb((__v8qi)__m1, (__v8qi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:483:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_paddusw((__v4hi)__m1, (__v4hi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:504:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_psubb((__v8qi)__m1, (__v8qi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/usr/lib/clang/12.0.5/include/mmintrin.h:525:12: error: invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
return (__m64)__builtin_ia32_psubw((__v4hi)__m1, (__v4hi)__m2);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
error: command '/usr/bin/clang' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for Bottleneck
Failed to build Bottleneck
ERROR: Could not build wheels for Bottleneck which use PEP 517 and cannot be installed directly
``` | open | 2021-09-23T10:10:08Z | 2022-09-09T07:27:42Z | https://github.com/pydata/bottleneck/issues/385 | [
"bug"
] | NikZak | 12 |
mkhorasani/Streamlit-Authenticator | streamlit | 119 | Fix the sample code in user registration. | ### Old
```python
try:
if authenticator.register_user(preauthorization=False):
st.success('User registered successfully')
except Exception as e:
st.error(e)
```
### Update
```python
try:
email, username, name = authenticator.register_user(
location='main',
preauthorization=False,
domains=None, # ['gmail.com', 'yahoo.com']
fields={'Form name': 'Register User'}
)
if None not in (email, username, name):
st.success('User registered successfully')
except Exception as e:
st.error(e)
```
Also fix the return value of `register_user()` in the authenticate module. | closed | 2024-01-25T22:15:56Z | 2024-01-26T09:00:30Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/119 | [] | fsmosca | 3 |
tableau/server-client-python | rest-api | 739 | Add docs for Webhooks | Add docs for code changes from #523 | closed | 2020-11-19T00:48:46Z | 2022-09-08T18:15:50Z | https://github.com/tableau/server-client-python/issues/739 | [
"docs"
] | bcantoni | 0 |
django-cms/django-cms | django | 7,515 | Improve French translations by using only one token for action buttons | ## Description

There are three issues with translations in French.
1. "Nouveau Page" is two tokens "New" and "Page". But it's "une page" so it should be "Nouvelle Page". It also can happen anytime an object name is feminine. (quite a lot in the french language)
https://github.com/django-cms/django-cms/blob/9e33167701b65ce4fb7675d61e452e62792556e3/cms/locale/fr/LC_MESSAGES/django.po#L1057-L1062
2. We could improve "Ajouter page maintenant." to "Ajouter *une* page maintenant."
3. We should improve « il n'y a pas encore page » to « il n'y a pas encore *de* page ».
https://github.com/django-cms/django-cms/blob/9e33167701b65ce4fb7675d61e452e62792556e3/cms/locale/fr/LC_MESSAGES/django.po#L1102-L1109
## Steps to reproduce
With Django-CMS configured in French,
Go to Administration > Django CMS > Pages
Note: Adding only one token for new objects will probably also fix those kind of issues in other languages than french if they also distinguish genres.
# More examples

"Ajouter *une* page type", « il n’y a pas encore *de* page type » and "Nouvelle Page Type".
Note 2: This issue probably affects plugins too.
Note 3: This will facilitate the production of accurate translations since the context will be given. | closed | 2023-03-28T16:29:43Z | 2023-04-25T17:57:13Z | https://github.com/django-cms/django-cms/issues/7515 | [] | wasertech | 6 |
roboflow/supervision | deep-learning | 1,325 | How to use the results of this model offline | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
I have a project I'm working on that consists of computing the real dimensions of an egg in mm from its image, and I wanted to know how to integrate this model into my project and how to use its results to get the egg's dimensions in pixels on the image.
### Additional
_No response_ | closed | 2024-07-04T03:04:03Z | 2024-07-05T11:41:13Z | https://github.com/roboflow/supervision/issues/1325 | [
"question"
] | Tkbg237 | 0 |
miguelgrinberg/Flask-Migrate | flask | 181 | missing kwarg engine_name in migrations created with flask db init, after switching to multiple databases | I had a database which already had migrations. I added a second database, re-running `flask db init --multidb` as you suggested at https://github.com/miguelgrinberg/Flask-Migrate/issues/179. Worked fine. But when I want to upgrade or downgrade one of the old migrations (*old* meaning they were created by `flask db migrate` when I had only one database), I get the error
```
Traceback (most recent call last):
File "/[...]/bin/flask", line 11, in <module>
sys.exit(main())
File "/[...]/lib/python3.6/site-packages/flask/cli.py", line 507, in main
cli.main(args=args, prog_name=name)
File "/[...]/lib/python3.6/site-packages/flask/cli.py", line 374, in main
return AppGroup.main(self, *args, **kwargs)
File "/[...]/lib/python3.6/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/[...]/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/[...]/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/[...]/lib/python3.6/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/[...]/lib/python3.6/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/[...]/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/[...]/lib/python3.6/site-packages/flask/cli.py", line 251, in decorator
return __ctx.invoke(f, *args, **kwargs)
File "/[...]/lib/python3.6/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/[...]/lib/python3.6/site-packages/flask_migrate/cli.py", line 132, in upgrade
_upgrade(directory, revision, sql, tag, x_arg)
File "/[...]/lib/python3.6/site-packages/flask_migrate/__init__.py", line 244, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "/[...]/lib/python3.6/site-packages/alembic/command.py", line 174, in upgrade
script.run_env()
File "/[...]/lib/python3.6/site-packages/alembic/script/base.py", line 416, in run_env
util.load_python_file(self.dir, 'env.py')
File "/[...]/lib/python3.6/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
File "/[...]/lib/python3.6/site-packages/alembic/util/compat.py", line 68, in load_module_py
module_id, path).load_module(module_id)
File "<frozen importlib._bootstrap_external>", line 399, in _check_name_wrapper
File "<frozen importlib._bootstrap_external>", line 823, in load_module
File "<frozen importlib._bootstrap_external>", line 682, in load_module
File "<frozen importlib._bootstrap>", line 251, in _load_module_shim
File "<frozen importlib._bootstrap>", line 675, in _load
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "migrations/env.py", line 158, in <module>
run_migrations_online()
File "migrations/env.py", line 138, in run_migrations_online
context.run_migrations(engine_name=name)
File "<string>", line 8, in run_migrations
File "/[...]/lib/python3.6/site-packages/alembic/runtime/environment.py", line 807, in run_migrations
self.get_context().run_migrations(**kw)
File "/[...]/lib/python3.6/site-packages/alembic/runtime/migration.py", line 321, in run_migrations
step.migration_fn(**kw)
TypeError: upgrade() got an unexpected keyword argument 'engine_name'
```
This is because Flask-Migrate is now passing the `engine_name` as a parameter, which it (understandably) didn't do before `flask db init --multidb`. I can fix this by refactoring each instance of `def upgrade()` to `def upgrade(**kwargs)` or `def upgrade(engine_name)`, and analogously for occurrences of `def downgrade()` in the existing migration files. But to me, it feels paradigmatically wrong to change *anything* in a migration. Maybe I am wrong about this, but is there a possibility for Flask-Migrate to spare me from refactoring the migrations?
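For reference, the refactor I mean is just widening the signature in each old revision file, roughly like this (a sketch):
```python
# before (generated while only one database existed)
def upgrade():
    ...  # original migration body, unchanged

# after: tolerate the engine_name kwarg that the multidb env.py now passes
def upgrade(engine_name=None):
    ...  # original migration body, unchanged

# (and the same change for downgrade)
```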
| closed | 2018-01-15T15:51:34Z | 2022-03-25T15:06:02Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/181 | [
"question"
] | jonathan-scholbach | 4 |
ageitgey/face_recognition | python | 1,274 | face_recognition pipeline for multiple sources | * face_recognition version: -
* Python version:3.7
* Operating System: ubuntu 18.04
### Description
Hi,
I have implemented a face recognition pipeline with the face_recognition library on a Jetson Nano and get 7-10 fps from a single 1080p source, which is decent. But to increase the number of sources, I am planning to use this library with DeepStream to process multiple streams.
Please do let me know if you've implemented something similar.
| open | 2021-02-02T08:21:37Z | 2021-04-08T13:27:36Z | https://github.com/ageitgey/face_recognition/issues/1274 | [] | shubham-shahh | 6 |
quantumlib/Cirq | api | 6,182 | Implement consistancy checks for unitary protocol in the presence of ancillas | **Is your feature request related to a use case or problem? Please describe.**
#6101 was created to update the `cirq.unitary` and `cirq.apply_unitaries` protocols to support the case when gates allocate their own ancillas. This was achieved in #6112; however, the fix assumes the decomposition is correct, so a consistency check is needed to verify that. This is the fourth task on https://github.com/quantumlib/Cirq/issues/6101#issuecomment-1568686661.
The consistency check should check that the result is
1. Indeed a unitary
2. CleanQubits are restored to the $\ket{0}$ state.
3. Borrowable Qubits are restored to their original state.
**Describe the solution you'd like**
For the correctness checks we need a `cirq.testing.assert_consistent_unitary` that performs the checks listed above; a rough sketch of the first check follows.
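A minimal sketch of the unitarity part, assuming plain numpy (restoring clean/borrowable qubits needs access to the ancilla sub-system and is not shown here):
```python
import numpy as np

def assert_is_unitary(u: np.ndarray, atol: float = 1e-8) -> None:
    # a square matrix with U @ U^dagger == I
    assert u.ndim == 2 and u.shape[0] == u.shape[1], "matrix must be square"
    np.testing.assert_allclose(u @ u.conj().T, np.eye(u.shape[0]), atol=atol)
```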
**What is the urgency from your perspective for this issue? Is it blocking important work?**
for the first task
P1 - I need this no later than the next release (end of quarter)
for the second and third tasks
P3 - I'm not really blocked by it, it is an idea I'd like to discuss / suggestion based on principle | closed | 2023-07-05T15:54:25Z | 2023-07-14T13:35:14Z | https://github.com/quantumlib/Cirq/issues/6182 | [
"good first issue",
"kind/feature-request",
"triage/accepted"
] | NoureldinYosri | 7 |
mwaskom/seaborn | data-science | 3,681 | `lineplot(..., dashes)` argument does not support string descriptors | When using `lineplot` with `style` set to a categorical variable, the `dashes` argument cannot be specified using the conventional matplotlib strings like `dashed` or `-`, but must take a list of dash tuples like `(1, 0)`, otherwise it fails with a nasty error.
MWE:
```python
import pandas as pd
import seaborn as sns
d = pd.DataFrame([
{"a": 1, "b:": 2.0, "c": "A"},
{"a": 2, "b:": 1.0, "c": "A"},
{"a": 3, "b:": 3.0, "c": "B"},
{"a": 4, "b:": 4.0, "c": "B"},
])
sns.lineplot(data=d, x="a", y="b:", style="c", dashes=["-", "-"])
```
Gives:
```
TypeError: unsupported operand type(s) for +: 'int' and 'str'
```
The intended behaviour can be obtained using `dashes=[(1, 0), (1, 0)]`, as in the shim below.
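A small workaround in the meantime (the dashed/dotted tuples below are rough approximations of matplotlib's defaults, not exact values):
```python
# map common linestyle names to the on/off ink tuples that `dashes=` accepts
LINESTYLE_TO_DASHES = {
    "-": (1, 0),   # solid: no "off" segment, per the workaround above
    "--": (4, 2),  # approximate dashed
    ":": (1, 2),   # approximate dotted
}

sns.lineplot(
    data=d, x="a", y="b:", style="c",
    dashes=[LINESTYLE_TO_DASHES["-"], LINESTYLE_TO_DASHES["-"]],
)
```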
This is because `Line2D.set_dashes` is used instead of `Line2D.set_linestyle`. Perhaps an additional argument `styles=` could be added which would use the `set_linestyle` method instead? | closed | 2024-04-22T08:10:13Z | 2024-04-24T10:56:02Z | https://github.com/mwaskom/seaborn/issues/3681 | [] | JeppeKlitgaard | 3 |
jmcnamara/XlsxWriter | pandas | 1,082 | question: How to add a format to already written cells? | ### Question
I'm trying to generate excel from a pandas styler object which contains font color.
If I simply use `my_styler.to_excel(writer, sheet_name='Sheet1')` the color exist.
If I follow the document and set format to **dataframe**, percent exist.
```
# Add a percent number format.
percent_format = workbook.add_format({"num_format": "0%"})
# Apply the number format to Grade column.
worksheet.set_column(2, 2, None, percent_format)
```
But when I try to set a number format on the workbook generated by `my_styler.to_excel(writer, sheet_name='Sheet1')`, it does not work.
test code:
```python
import pandas as pd
import numpy as np
from io import BytesIO
weather_df = pd.DataFrame(np.random.rand(10,2)*5,
index=pd.date_range(start="2021-01-01", periods=10),
columns=["Tokyo", "Beijing"])
def make_pretty(styler):
styler.set_caption("Weather Conditions")
styler.format_index(lambda v: v.strftime("%A"))
styler.background_gradient(axis=None, vmin=1, vmax=5, cmap="YlGnBu")
return styler
my_styler = weather_df.loc["2021-01-04":"2021-01-08"].style.pipe(make_pretty)
my_styler
writer = pd.ExcelWriter("test.xlsx", engine='xlsxwriter')
my_styler.to_excel(writer, sheet_name='Sheet1')
percent_format = writer.book.add_format({'num_format': '0.00%'})  # this does not take effect
# Now apply the number format to the column
writer.sheets['Sheet1'].set_column(0, 0, 30)
writer.sheets['Sheet1'].set_column(1, 2, 15, percent_format)
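# (workaround sketch I'm considering, not from the docs: re-write the numeric
# cells with an explicit cell format, assuming the per-cell formats written by
# the styler take precedence over set_column(); note this also replaces the
# styler's background gradient on those cells)
sub = weather_df.loc["2021-01-04":"2021-01-08"]
for r, (_, row) in enumerate(sub.iterrows(), start=1):   # row 0 is the header
    for c, value in enumerate(row, start=1):             # col 0 is the index
        writer.sheets['Sheet1'].write_number(r, c, value, percent_format)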
writer.close()
``` | closed | 2024-07-15T05:52:19Z | 2024-07-30T22:42:30Z | https://github.com/jmcnamara/XlsxWriter/issues/1082 | [
"question"
] | PaleNeutron | 3 |
sinaptik-ai/pandas-ai | pandas | 962 | Adding to LLMs Mistral AI Chat Models | ### 🚀 The feature
adding a class for MistralAI chat models
### Motivation, pitch
The Mistral AI Large model performs very well, and its reasoning and code capabilities have been excellent in everything I have tested so far, using multiple datasets and the same prompts as PandasAI.
Below is an example of a prompt used, and the response:
Prompt:
```
Using prompt: <dataframe>
dfs[0]:3x2
Network_Activity_Type_Code,Network_Activity_Type_Name
3,Internet
1,Outgoing_Call
2,Incoming_Call
</dataframe>
<dataframe>
dfs[1]:10000x8
Subscription_Id,Running_Date,Rate_Plan_Product_Id,Call_Gap_Days,Most_Used_Governorate,Most_Used_Qism,Most_Used_Region,Duality_Flag
749,2019-05-23,2,18,Qena,puy,west,N
263,2016-08-31,12,50,Alex,xyz,west,Y
956,2016-05-15,6,18,Giza,puy,north,N
</dataframe>
<dataframe>
dfs[2]:10000x6
Subscription_Id,Rate_Plan_Product_Id,Network_Activity_Type_Code,Duration,Data_Volumne,Running_Date
708,14,1,59,17,2020-08-23
52,1,1,15,89,2016-03-18
211,10,3,95,71,2017-09-08
</dataframe>
<dataframe>
dfs[3]:16x5
Rate_Plan_Product_Id,Rate_Plan_code,Rate_Plan_Description,Rate_Plan_Type,Rate_Plan_Group
2,53,XYZU,Prepaid,Youth
3,52,XYZU,Prepaid,Youth
1,33,XYZU,Prepaid,Ahl
</dataframe>
<dataframe>
dfs[4]:10000x10
Subscription_Id,Running_Date,Total_Revenue,Recharges,Baki_Revenue,Nota_Revenue,Connect_Revenue,Admin_Fees,Tesla_Revenue,Balance_Transfer_Fees
871,2019-10-23,48,39,61,39,49,97,13,85
729,2016-06-21,96,20,16,86,93,22,31,96
483,2020-03-15,47,19,32,47,11,31,15,82
</dataframe>
<dataframe>
dfs[5]:10000x2
Subscription_Id,Rate_Plan_Group
956,Camp
263,Ahl
749,Youth
</dataframe>
Update this initial code:
```python
# TODO: import the required dependencies
import pandas as pd
# Write code here
# Declare result var: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" }
```
Q: what's the total revenue per governorate?
Variable `dfs: list[pd.DataFrame]` is already declared.
At the end, declare "result" variable as a dictionary of type and value.
```
response:
```
# Merge dfs[4] and dfs[1] on Subscription_Id
merged_df = pd.merge(dfs[4], dfs[1], on='Subscription_Id')
# Group the merged dataframe by Most_Used_Governorate and sum the Total_Revenue column
total_revenue_per_governorate = merged_df.groupby('Most_Used_Governorate')['Total_Revenue'].sum().reset_index()
# Declare the result variable
result = {
"type": "dataframe",
"value": total_revenue_per_governorate
}
```
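As a possible starting point, a minimal standalone call to the Mistral chat API could look like the sketch below (endpoint and payload shape follow Mistral's public REST docs; `mistral_chat` is just an illustrative name, and wiring it into PandasAI's LLM base class is left out):
```python
import os
import requests

def mistral_chat(prompt: str, model: str = "mistral-large-latest") -> str:
    # send a single-turn chat request to Mistral's chat completions endpoint
    resp = requests.post(
        "https://api.mistral.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```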
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-02-26T20:59:10Z | 2024-06-08T16:03:52Z | https://github.com/sinaptik-ai/pandas-ai/issues/962 | [] | Ezzaldin97 | 6 |
TheAlgorithms/Python | python | 11,586 | Common errors used in python | ### What would you like to share?
- **IOError**: Raised when an I/O operation fails for an I/O-related reason, e.g., "file not found".
- **ImportError**: Raised when an import statement fails to find the module definition.
- **IndexError**: Raised when a sequence index is out of range.
- **KeyError**: Raised when a dictionary key is not found.
- **NameError**: Raised when a local or global name is not found.
- **TypeError**: Raised when an operation or function is applied to an object of inappropriate type.

For a complete list of all errors, go to https://docs.python.org/3/library/exceptions.html.
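As a quick illustration of one of these, catching a `KeyError`:
```python
d = {"a": 1}
try:
    d["b"]  # "b" is not a key in d
except KeyError as exc:
    print(f"missing key: {exc}")
```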
### Additional information
_No response_ | closed | 2024-09-27T11:45:22Z | 2024-09-30T10:02:23Z | https://github.com/TheAlgorithms/Python/issues/11586 | [
"awaiting triage"
] | Phyhoncoder13 | 6 |
aleju/imgaug | machine-learning | 71 | generate N augmented images from one image | Hi, is it possible to generate N augmented images from one image given as input? If not, could you please put me on the right path on how to tweak `imgaug` to make it possible?
I am only using `Sequential()` as a meta augmenter, and something like the sketch below is what I have in mind.
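A minimal sketch of what I'm after (the augmenters here are just placeholders):
```python
import imgaug.augmenters as iaa

def augment_n(image, n):
    seq = iaa.Sequential([iaa.Fliplr(0.5), iaa.GaussianBlur(sigma=(0, 1.0))])
    # each call re-samples the stochastic parameters, so the n outputs differ
    return [seq.augment_image(image) for _ in range(n)]
```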
Thank you for your effort implementing this cool library. | closed | 2017-10-17T09:40:25Z | 2017-10-17T14:09:09Z | https://github.com/aleju/imgaug/issues/71 | [] | sdikby | 4 |
mage-ai/mage-ai | data-science | 4,871 | Codespace/devcontainer for contributing | **Is your feature request related to a problem? Please describe.**
It will make it easier for contributors to jump straight in without having to do any setup.
(It would also speed up install time if you have a custom image: just spin up the container and it's ready.)
**Describe the solution you'd like**
Create a dev container for VS Code and Codespaces, along the lines of the sketch below.
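A minimal starting point might be (the base image and install command are placeholders, since the exact dev dependencies still need to be confirmed):
```jsonc
// .devcontainer/devcontainer.json -- sketch only
{
  "name": "mage-ai",
  "image": "mcr.microsoft.com/devcontainers/python:3.10",
  "postCreateCommand": "pip install -r requirements.txt", // placeholder command
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}
```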
**Additional context**
I would do it myself, but I don't know the repo well enough yet to define what is needed for development. | open | 2024-04-02T16:05:43Z | 2024-04-03T17:06:39Z | https://github.com/mage-ai/mage-ai/issues/4871 | [] | mike-leuer | 4 |
deepset-ai/haystack | pytorch | 8,147 | Validated and hardened OPEA examples using Haystack | **Is your feature request related to a problem? Please describe.**
OPEA (Open Platform for Enterprise AI) is a newly introduced project by the Linux Foundation. You can find all the details at https://opea.dev/. It provides an open source platform that allows the creation of GenAI applications. They have [multiple samples](https://github.com/opea-project/GenAIExamples) that use the [underlying components](https://github.com/opea-project/GenAIComps). Creating examples that show how Haystack can be used to compose these samples is a feature commonly requested by customers and in public presentations. This feature request will create a Haystack version of all the GenAI Examples in the OPEA repo.
Deepset is a partner and their logo/quote is on the website.
**Describe the solution you'd like**
All GenAI examples validated and hardened using Haystack and contributed back to the OPEA GenAI Examples repo.
**Describe alternatives you've considered**
This has never been tried and needs to be tested/validated using Haystack.
**Additional context**
Customers and partners have asked how to compose these samples using Haystack. This question has been asked in public presentations as well. Validating and hardening OPEA samples using Haystack would fulfill that need.
| open | 2024-08-01T17:10:28Z | 2024-09-09T03:31:20Z | https://github.com/deepset-ai/haystack/issues/8147 | [
"P2"
] | arun-gupta | 0 |
lexiforest/curl_cffi | web-scraping | 430 | When accessing a link on the Amazon website that requires a 302 redirect, it returns a 400 error. |
**Describe the bug**
When accessing a link on the Amazon website that requires a 302 redirect, it returns a 400 error.
**To Reproduce**
```
https://www.amazon.com.br/sspa/click?ie=UTF8&spc=MToxNTE2MDY5ODk3MjUzNTYzOjE3MzEwNDgzOTY6c3BfYnRmOjMwMDQwOTY0MTkyNDIwMjo6MDo6&url=%2FRel%25C3%25B3gio-Seiko-FD462W-comuta%25C3%25A7%25C3%25A3o-anal%25C3%25B3gica%2Fdp%2FB00BH83M86%2Fref%3Dsr_1_58_sspa%3F__mk_pt_BR%3D%25C3%2585M%25C3%2585%25C5%25BD%25C3%2595%25C3%2591%26dib%3DeyJ2IjoiMSJ9.kiEe7C09pDW7u4_0s0Ws3bOu7x97Fl9ruJJU44ZD6pQHik0mB9N80sjGu87zh9wbz3PhJnl6E-RyMfjj4XKsKULwa3BBCiwj_QNKqGO5dCVpC67vX2iE06vyQFAjcLSm_G43rRYUfKb-tA5py5zdiMGDwP8y6pDuwk03Q31GM3LCa2wE2p1qAsSX-LeaMpz3sDq8tFBWjlUJxM_jSVpacFrtB23CgRNZJuigNqMZebB8ar4dIyqwBJh1cxb-kd9Oom64bkAF8NEPIxiXrJm-9kw93Vdr6P_SKGFsw_xFWrE.01cHJVOD9CIlk3VlQmY6OYFyN-2W3AUnKCuCWwOF13E%26dib_tag%3Dse%26keywords%3Dseiko%26qid%3D1731048396%26sr%3D8-58-spons%26ufe%3Dapp_do%253Aamzn1.fos.25548f35-0de7-44b3-b28e-0f56f3f96147%26sp_csd%3Dd2lkZ2V0TmFtZT1zcF9idGY%26psc%3D1
```
**Versions**
- OS: macOS Sequoia 15.0.1
- curl_cffi version 0.7.3 | closed | 2024-11-11T03:08:23Z | 2024-12-03T09:07:42Z | https://github.com/lexiforest/curl_cffi/issues/430 | [
"bug"
] | chicagocong | 3 |
aio-libs/aiomysql | sqlalchemy | 617 | aiomysql can't connect to database when mysql-connector can | This is a little bit weird. I don't know if this is an issue with aiomysql or some kind of dependency not being satisfied.
I have a docker container running a python app. The Dockerfile is:
```
FROM python:3.9.5-buster
WORKDIR /usr/src/app
COPY . .
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install gcc
RUN pip install --no-cache-dir -r requirements.txt
CMD [ "python", "app.py" ]
```
Inside the app, I can run one of two possible code sets. Codeset 1 works; codeset 2 fails with:
```
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on '172.17.0.2'")
```
**codeset 1**
```
import mysql.connector
mydb = mysql.connector.connect(
host="host-ip",
user="username",
password="password",
database="database"
)
mycursor = mydb.cursor()
mycursor.execute("SHOW TABLES")
myresult = mycursor.fetchall()
for x in myresult:
print(x)
```
**codeset 2**
```
import asyncio
import aiomysql
loop = asyncio.get_event_loop()
async def test_example():
try:
conn = await aiomysql.connect(host='host-ip', port=3306,
user='username', password='password', db='database',
loop=loop)
cur = await conn.cursor()
await cur.execute("SELECT * FROM test")
print(cur.description)
r = await cur.fetchall()
print(r)
await cur.close()
conn.close()
except Exception as e:
print(e)
loop.run_until_complete(test_example())
```
I can connect to the MySQL database using mariadb-client and mysql.connector just fine, but not aiomysql.
Edit: the Python version is 3.10. | closed | 2021-09-27T09:31:15Z | 2022-01-13T00:23:27Z | https://github.com/aio-libs/aiomysql/issues/617 | [] | surpriseassociate | 4 |
miguelgrinberg/python-socketio | asyncio | 757 | Raise an Exception when we reach max client reconnection_attempts | **Is your feature request related to a problem? Please describe.**
After the max reconnection attempts I get a log output, but nothing else actionable.
**Describe the solution you'd like**
Raise an exception which I can catch and then close the rest of the loops in my program (AsyncClient),
or
make the client's `attempt_count` attribute readable. But an exception is preferable; maybe that exception could be optionally raisable.
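Roughly the usage I have in mind (the exception-on-exhaustion behavior is hypothetical; it does not exist today):
```python
import asyncio
import socketio

async def main():
    sio = socketio.AsyncClient(reconnection_attempts=5)
    try:
        await sio.connect("http://localhost:5000")
        await sio.wait()
    except socketio.exceptions.ConnectionError:
        # hypothetical: raised once reconnection_attempts is exhausted,
        # so the rest of my loops can be shut down from here
        pass

asyncio.run(main())
```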
**Describe alternatives you've considered**
Monkey-patching the whole `_handle_reconnect` is a lot of work.
Raising an error in the `connect_error` handler works, but that disallows any reconnection attempts.
**Additional context**
This is while using the AsyncClient.
| closed | 2021-07-27T11:39:37Z | 2021-10-05T16:48:09Z | https://github.com/miguelgrinberg/python-socketio/issues/757 | [
"bug"
] | tomharvey | 4 |
ludwig-ai/ludwig | data-science | 3,245 | `torchaudio` incorrectly installing 2.0.0 with `torch==1.13.1` | **Describe the bug**
When installing `torch==1.13.1` with a specified `extra_index_url`, `torchaudio==2.0.0` is installed:
**To Reproduce**
Steps to reproduce the behavior:
```
(test_env) % pip install --no-cache-dir torch==1.13.1 torchaudio --extra-index-url=https://download.pytorch.org/whl/cpu
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cpu
Collecting torch==1.13.1
Downloading https://download.pytorch.org/whl/cpu/torch-1.13.1-cp38-none-macosx_10_9_x86_64.whl (135.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 135.4/135.4 MB 109.0 MB/s eta 0:00:00
Collecting torchaudio
Downloading https://download.pytorch.org/whl/cpu/torchaudio-2.0.0-cp38-cp38-macosx_10_9_x86_64.whl (3.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.9/3.9 MB 91.3 MB/s eta 0:00:00
Requirement already satisfied: typing-extensions in ./test_env/lib/python3.8/site-packages (from torch==1.13.1) (4.5.0)
Installing collected packages: torch, torchaudio
Successfully installed torch-1.13.1 torchaudio-2.0.0
```
**Expected behavior**
`torchaudio==0.13.1` would be installed.
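A workaround that should avoid the mismatch is pinning torchaudio explicitly (0.13.1 is the torchaudio release paired with torch 1.13.1):
```
pip install --no-cache-dir torch==1.13.1 torchaudio==0.13.1 --extra-index-url=https://download.pytorch.org/whl/cpu
```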
**Environment (please complete the following information):**
- OS: MacOS
- Version: 13.1
- Python version: 3.8
- Ludwig version: 0.7.2 | closed | 2023-03-14T21:11:27Z | 2023-03-17T15:49:53Z | https://github.com/ludwig-ai/ludwig/issues/3245 | [] | geoffreyangus | 1 |
HumanSignal/labelImg | deep-learning | 74 | the question about "make all" | zr@zr:~$ make all
make: *** No rule to make target 'all'. Stop.
zr@zr:~$ sudo su
root@zr:/home/zr# pip install lxml
Requirement already satisfied: lxml in /usr/local/lib/python2.7/dist-packages
root@zr:/home/zr# make all
make: *** No rule to make target 'all'. Stop.
Asking for help! Thank you! | closed | 2017-03-30T09:02:10Z | 2017-03-30T12:59:09Z | https://github.com/HumanSignal/labelImg/issues/74 | [] | xiaoheizi123 | 0 |
pytest-dev/pytest-html | pytest | 392 | GitHub Actions step that ensures that SCSS was compiled into CSS for all PRs | @gnikonorov proposed adding a GitHub Actions step that ensures the SCSS was compiled into CSS for all PRs.
See also the discussion here: https://github.com/pytest-dev/pytest-html/pull/385#discussion_r531829718
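A sketch of what the step could look like (the build command is a placeholder for whatever currently compiles the SCSS in this repo):
```yaml
- name: Check that SCSS was compiled into CSS
  run: |
    <scss-build-command>            # placeholder: the repo's SCSS build step
    git diff --exit-code -- '*.css' # fail if the committed CSS is stale
```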
| open | 2020-11-30T17:46:55Z | 2020-12-03T05:08:58Z | https://github.com/pytest-dev/pytest-html/issues/392 | [
"Infrastructure"
] | jkowalleck | 2 |
pytorch/vision | machine-learning | 8,252 | Reading frames from VideoReader freezes eventually | ### 🐛 Describe the bug
The problem only occurs under some specific conditions:
1. The video backend is 'video_reader'
2. The VideoReader object is initialized with raw bytes rather than a path string
3. This byte buffer is loaded from an avi video file
Example code:
```
from torchvision.io import VideoReader
torchvision.set_video_backend('video_reader')
with open('/path/to/video.avi', 'rb') as fp:
data = fp.read()
reader = VideoReader(data)
frames = []
# this loop stops freezes after a while (usually towards the end of the video)
for data in reader:
frames.append(frame['data'])
```
### Versions
Collecting environment information...
PyTorch version: 2.1.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-6ubuntu2) 7.5.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.11.6 | packaged by conda-forge | (main, Oct 3 2023, 10:40:35) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-169-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
GPU 6: NVIDIA A100-PCIE-40GB
GPU 7: NVIDIA A100-PCIE-40GB
Nvidia driver version: 525.147.05
cuDNN version: Probably one of the following:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.0
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7452 32-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 3272.256
CPU max MHz: 2350.0000
CPU min MHz: 1500.0000
BogoMIPS: 4700.22
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] numpy==1.26.2
[pip3] pytorch-lightning==2.1.2
[pip3] torch==2.1.1
[pip3] torchaudio==2.1.1
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==1.2.1
[pip3] torchvision==0.16.1
[pip3] torchvision==0.16.1
[pip3] triton==2.1.0
[conda] blas 1.0 mkl
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] mkl 2021.4.0 h06a4308_640
[conda] numpy 1.26.2 pypi_0 pypi
[conda] pytorch 2.1.1 py3.11_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-lightning 2.1.2 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.1.1 dev_0 <develop>
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchmetrics 1.2.1 pyhd8ed1ab_0 conda-forge
[conda] torchtriton 2.1.0 py311 pytorch
[conda] torchvision 0.16.1 pypi_0 pypi | open | 2024-02-05T01:05:18Z | 2024-02-05T11:03:29Z | https://github.com/pytorch/vision/issues/8252 | [] | treasan | 1 |
matterport/Mask_RCNN | tensorflow | 2,398 | Problem with demo.ipynb (model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config) | Create Model and Load Trained Weights¶
# Create model object in inference mode.
#model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-13-a57dd9ffa939> in <module>
1 # Create model object in inference mode.
2 #model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
----> 3 model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
4
5 # Load weights trained on MS-COCO
NameError: name 'config' is not defined
Please how we solve this config not defind | open | 2020-10-19T01:08:49Z | 2020-12-06T17:46:41Z | https://github.com/matterport/Mask_RCNN/issues/2398 | [] | fatana1 | 1 |
tox-dev/tox | automation | 3,391 | Free threaded Python support | I would like to be able to use ``py313t`` in the envlist to explicitly run ``python3.13t`` as a free-threaded nogil build in addition to ``py313`` which would run ``python3.13`` as an explicitly regular build.
Is there already some way to do that which I missed?
Currently ``py313t`` seems to use ``python``.
Related:
- https://github.com/pypa/virtualenv/issues/2776 | open | 2024-10-04T06:01:05Z | 2025-01-20T18:07:58Z | https://github.com/tox-dev/tox/issues/3391 | [
"help:wanted",
"enhancement"
] | fschulze | 8 |
python-gino/gino | asyncio | 80 | Use anonymous statement when needed | I'm using pgbouncer with transaction pooling enabled which means no prepared statements.
I'd like to see an option to disable prepared statements.
Actually, I think we should disable them at all because asyncpg has its own statements cache. | closed | 2017-09-27T17:43:12Z | 2018-01-31T11:25:03Z | https://github.com/python-gino/gino/issues/80 | [
"task"
] | AmatanHead | 5 |
slackapi/bolt-python | fastapi | 764 | Problem of receiving two or more duplicate messages when sending a message through chat_postMessage() | Problem of receiving two or more duplicate messages when sending a message through chat_postMessage()
I'm using slack_bolt and when I send a message through chat_postMessage(), I get a problem with receiving a message with two or more duplicate messages.
The message is sent as a combination of blocks, and the size of the block exceeds 4000 bytes. I tried debugging, but it doesn't seem to retry.
An example of a block is shown below.
```
ex1) app.client.chat_postMessage(blocks=, channel=)
ex2) blocks = [{'elements': [{'text': '*aaaaaaaaaa*', 'type': 'mrkdwn'}],
'type': 'context'},
{'elements': [{'alt_text': 'cute cat',
'image_url': 'https://cdn-icons-png.flaticon.com/512/752/752664.png',
'type': 'image'},
{'text': f'{"a"*37}',
'type': 'mrkdwn'}],
'type': 'context'},
{'elements': [{'text': f'>```{"a"*3920}```',
'type': 'mrkdwn'}],
'type': 'context'},
{'elements': [{'alt_text': 'cute cat',
'image_url': 'https://cdn-icons-png.flaticon.com/512/752/752665.png',
'type': 'image'},
{'text': f'{"a"*39}',
'type': 'mrkdwn'}],
'type': 'context'},
{'elements': [{'text': f'>```{"a"*100}```', 'type': 'mrkdwn'}],
'type': 'context'}]
```
| closed | 2022-11-10T16:19:07Z | 2024-12-07T20:05:26Z | https://github.com/slackapi/bolt-python/issues/764 | [
"question",
"auto-triage-stale"
] | sh4n3e | 25 |
zappa/Zappa | flask | 1,350 | Use UV to install packages much faster | Use uv, an extremely fast Python package and project manager written in Rust, to speed up the installation of dependencies: https://github.com/astral-sh/uv
## Context
Zappa creates a virtual env with all the app's dependencies for the Lambda handler and zips it. uv is 10-100x faster than pip, so we could see a great speed boost.
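For example, uv exposes a pip-compatible interface, so the swap could be roughly (exact flags depend on how Zappa currently stages the package directory):
```
# today (roughly):
pip install -r requirements.txt --target package_dir
# proposed:
uv pip install -r requirements.txt --target package_dir
```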
| closed | 2024-09-17T21:25:07Z | 2025-01-31T19:36:50Z | https://github.com/zappa/Zappa/issues/1350 | [
"no-activity",
"auto-closed"
] | tugoavenza | 4 |
NullArray/AutoSploit | automation | 470 | Unhandled Exception (06c10795b) | Autosploit version: `3.0`
OS information: `Linux-4.15.0-20-generic-x86_64-with-LinuxMint-19.1-tessa`
Running context: `./autosploit.py`
Error meesage: `argument of type 'NoneType' is not iterable`
Error traceback:
```
Traceback (most recent call):
File "/home/meowcat285/Autosploit/autosploit/main.py", line 117, in main
terminal.terminal_main_display(loaded_tokens)
File "/home/meowcat285/Autosploit/lib/term/terminal.py", line 502, in terminal_main_display
if "help" in choice_data_list:
TypeError: argument of type 'NoneType' is not iterable
```
Metasploit launched: `False`
| closed | 2019-02-15T18:22:37Z | 2019-02-19T04:23:39Z | https://github.com/NullArray/AutoSploit/issues/470 | [] | AutosploitReporter | 0 |
pywinauto/pywinauto | automation | 1,391 | Take ElementUI PATH | I was having some trouble retrieving elements once I had them stored. Pywinauto by default does not offer "support", be it a property or something like that, to store the value of the path to the element in question.
I created the code below; it works in 80% of cases, but it can be improved by someone more experienced.
For example, if you click on a button, the captured path will be different from the path you need, as unfortunately I was unable to "intercept" the mouse click to prevent an action from being taken.
I'm using Windows 11 with pywinauto version 0.6.8.
This class basically allows me to capture an element based on a click:
```python
from time import time
from typing import List

from pynput.mouse import Listener
from pywinauto import Desktop
from pywinauto.controls.uiawrapper import UIAWrapper


class ElementWatch:
    def __init__(self) -> None:
        self.LastSave = time()
        self.ElementWrapper: UIAWrapper = None
        self.Desktop = Desktop(backend="uia")
        move_event = (lambda x, y: self.on_mouse_move(x, y))
        self._mouse_move_listener = Listener(on_move=move_event)
        click_event = (lambda x, y, button, pressed: self.on_mouse_click(x, y, button, pressed))
        self._mouse_click_listener = Listener(on_click=click_event)

    def on_mouse_click(self, x, y, button, pressed):
        if pressed:
            try:
                current_element = self.Desktop.from_point(x, y)
                if isinstance(current_element, UIAWrapper):
                    self.ElementWrapper = current_element
                    self._mouse_click_listener.stop()
                    self._mouse_move_listener.stop()
            except Exception:
                pass

    def on_mouse_move(self, x, y):
        try:
            current_element = self.Desktop.from_point(x, y)
            if isinstance(current_element, UIAWrapper):
                self.ElementUI = DesktopElementUI(current_element)  # DesktopElementUI is my own helper class
                self.ElementWrapper = current_element
        except Exception:
            pass

    def start_listener(self):
        # self._mouse_move_listener.start()
        self._mouse_click_listener.start()
        # self._mouse_move_listener.join()
        self._mouse_click_listener.join()
```
The methods below are helpers to make the calls we need to capture the element and generate the PATH
```python
class Util:
    # call this to start the watcher
    @classmethod
    def find_element_with_click(cls) -> UIAWrapper:
        watch = ElementWatch()
        watch.start_listener()
        return watch.ElementWrapper

    @classmethod
    def get_element_path(cls, element: UIAWrapper) -> str:
        path = []
        current_element: UIAWrapper = element
        while current_element:
            parent: UIAWrapper = current_element.parent() or None
            if parent is None or parent.parent() is None:
                break
            siblings: List[UIAWrapper] = parent.children()
            try:
                index = siblings.index(current_element)
            except ValueError:
                return ''
            path.insert(0, str(index))
            current_element = parent
        return '|'.join(path)
```
This would be an example of use
```python
from time import sleep

if __name__ == '__main__':
    # After 5 seconds it starts monitoring the click; when clicked, it captures the element.
    sleep(5)
    el = Util.find_element_with_click()
    path = Util.get_element_path(el)
    print(path)
```
As I mentioned previously, if it is something that generates an action and causes the application to change the elements, the generated path will not be correct in some cases.
The idea is that when there is a "click" to obtain the element we want, this click should not be transmitted to the system and should only be used for pywinauto to understand that this is the desired element.
I wasn't able to do this, but if anyone can, it's a way to generate a PATH based on the element's click. | open | 2024-05-21T04:55:00Z | 2024-05-21T04:59:18Z | https://github.com/pywinauto/pywinauto/issues/1391 | [] | lucas-fsousa | 0 |
sanic-org/sanic | asyncio | 2,908 | Start-up exception | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
1. DeprecationWarning
(screenshot of the DeprecationWarning)
2. IndexError
Start it with the following CLI command:
```sanic main:app --host 0.0.0.0 --port 9001 --workers 2 --dev```
(screenshot of the IndexError traceback)
### Code snippet
_No response_
### Expected Behavior
_No response_
### How do you run Sanic?
Sanic CLI
### Operating System
Linux
### Sanic Version
23.12.1
### Additional context
_No response_ | closed | 2024-01-24T08:44:06Z | 2024-01-31T14:25:34Z | https://github.com/sanic-org/sanic/issues/2908 | [
"bug"
] | jsonvot | 1 |
lepture/authlib | django | 82 | Version 0.10 | In this version, Authlib will focus on improving API design and RFC implementations.
## API
1. [x] Make Grant extensible
2. [x] CodeChallenge Extension
3. [x] OpenID Extension
## RFC
1. [ ] RFC7516, implement full featured JWE
2. [x] OpenID Connect Discovery 1.0. Check #77
3. [ ] RFC7797 JSON Web Signature (JWS) Unencoded Payload Option
4. [x] RFC8414
## Integrations
Take a look and think about Django integrations.
## Documentation
Rethink (rewrite) about documentation, its folder structure, connections between each specs. | closed | 2018-08-16T15:51:30Z | 2018-10-11T02:07:54Z | https://github.com/lepture/authlib/issues/82 | [
"in future"
] | lepture | 0 |
Guovin/iptv-api | api | 538 | Nothing can be scraped anymore | Apart from the subscription sources, the multicast, hotel, and online-scan methods can no longer find any sources. Could you please update it? | closed | 2024-11-09T03:20:35Z | 2024-11-09T03:52:36Z | https://github.com/Guovin/iptv-api/issues/538 | [
"invalid"
] | mlzlzj | 1 |
docarray/docarray | fastapi | 1,828 | DocList raises exception for type object. | ### Initial Checks
- [X] I have read and followed [the docs](https://docs.docarray.org/) and still think this is a bug
### Description
This [commit](https://github.com/docarray/docarray/commit/2f3b85e333446cfa9b8c4877c4ccf9ae49cae660) introduced a check to verify that DocList is not used with an object:
```
if (
isinstance(item, object)
and not is_typevar(item)
and not isinstance(item, str)
and item is not Any
):
raise TypeError('Expecting a type, got object instead')
```
This is quite a broad condition (it breaks things like `DocList[TorchTensor]` or nested `DocList[DocList[...]]` for me, for instance), as:
- Almost everything is an object, so the first check is almost a catch-all.
- `is_typevar` only checks for TypeVar objects.
Should this not also check for `not isinstance(item, type)` instead of `not is_typevar`, to allow classes?
This way, only non-class objects (instances of `object` that are not classes themselves) would raise the `TypeError`.
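Concretely, the condition could become something like this (a sketch of the suggestion, untested against the docarray test suite):
```python
if (
    not isinstance(item, type)  # reject plain instances, but allow classes such as TorchTensor
    and not is_typevar(item)
    and not isinstance(item, str)
    and item is not Any
):
    raise TypeError('Expecting a type, got object instead')
```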
### Example Code
```Python
from docarray import DocList
from docarray.typing import TorchTensor
test = DocList[TorchTensor]
```
### Python, DocArray & OS Version
```Text
0.39.1
```
### Affected Components
- [ ] [Vector Database / Index](https://docs.docarray.org/user_guide/storing/docindex/)
- [X] [Representing](https://docs.docarray.org/user_guide/representing/first_step)
- [ ] [Sending](https://docs.docarray.org/user_guide/sending/first_step/)
- [ ] [storing](https://docs.docarray.org/user_guide/storing/first_step/)
- [X] [multi modal data type](https://docs.docarray.org/data_types/first_steps/) | open | 2023-10-30T19:17:54Z | 2023-10-31T09:26:38Z | https://github.com/docarray/docarray/issues/1828 | [] | corentinmarek | 3 |
tflearn/tflearn | data-science | 668 | How to use data_flow.FeedDictFlow? | I want to use just the FeedDictFlow, ImagePreprocessing and ImageAugmentation classes, but how do I let the test set and validation set share the same parameters, such as the mean and variance of the training data?
This is an outline of my code:
```python
X_train, y_train, X_test, y_test = dataset.cifar_100()
image_batch = tf.placeholder(tf.float32, [None, 32, 32, 3])  # dtype/shape added for CIFAR images
label_batch = tf.placeholder(tf.int64, [None])
feed_dict = {image_batch: X_train, label_batch: y_train}
test_feed_dict = {image_batch: X_test, label_batch: y_test}
loss, acc = model(image_batch, label_batch)  # define a neural net

img_prep = tflearn.ImagePreprocessing()
img_prep.add_featurewise_zero_center()
img_aug = tflearn.ImageAugmentation()
img_aug.add_random_flip_leftright()

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    df = data_flow.FeedDictFlow(feed_dict, coord,
                                batch_size=batch_size,
                                dprep_dict=img_prep,
                                daug_dict=img_aug,
                                index_array=None,
                                num_threads=1)
    df.start()
    test_df = data_flow.FeedDictFlow(test_feed_dict, coord,
                                     batch_size=batch_size,
                                     dprep_dict=img_prep,
                                     daug_dict=None,  # no augmentation for the test set
                                     index_array=None,
                                     num_threads=1)
    test_df.start()
    for i in range(training_iters):
        train_batch = df.next()
        sess.run([loss, acc], train_batch)
        if i == TEST_ITER:
            test_batch = test_df.next()
            sess.run([loss, acc], test_batch)
    ...
```
Does this approach make the test set and training set share the same image preprocessing parameters, such as the mean of the data set? | open | 2017-03-18T04:58:14Z | 2017-03-18T08:15:50Z | https://github.com/tflearn/tflearn/issues/668 | [] | lfwin | 1 |
daleroberts/itermplot | matplotlib | 33 | Using ITERMPLOT=rv cycles between black and white axis labels | The first plot rendered using ITERMPLOT=rv has white axes and labels, and subsequent plots cycle between black and white. | open | 2018-08-10T20:14:13Z | 2021-07-06T21:47:45Z | https://github.com/daleroberts/itermplot/issues/33 | [] | r-zip | 1 |
pytest-dev/pytest-xdist | pytest | 402 | logging not captured | test_5.py file content
```python
import logging
logger = logging.getLogger(__file__)
def test_1():
logger.info('123123')
def test_2():
logger.info('123123')
def test_3():
logger.info('123123')
def test_4():
logger.info('123123')
```
pytest.ini
```
[pytest]
log_cli=true
log_cli_level=DEBUG
log_format = %(asctime)s %(levelname)s (%(threadName)-10s) %(filename)s:%(lineno)d %(message)s
log_date_format = %Y-%m-%d %H:%M:%S
```
run `pytest test/test_5.py -n auto`
logs:
```
================================================================= test session starts =================================================================
platform darwin -- Python 3.6.5, pytest-4.1.1, py-1.7.0, pluggy-0.8.1 -- /Users/w/.virtualenvs/pytest_xdist_fixture_scope_test/bin/python
cachedir: .pytest_cache
rootdir: /Users/w/pytest_xdist_fixture_scope_test, inifile: pytest.ini
plugins: xdist-1.26.0, forked-1.0.1
[gw0] darwin Python 3.6.5 cwd: /Users/w/pytest_xdist_fixture_scope_test
[gw1] darwin Python 3.6.5 cwd: /Users/w/pytest_xdist_fixture_scope_test
[gw2] darwin Python 3.6.5 cwd: /Users/wpytest_xdist_fixture_scope_test
[gw3] darwin Python 3.6.5 cwd: /Users/wpytest_xdist_fixture_scope_test
[gw0] Python 3.6.5 (default, Apr 25 2018, 14:23:58) -- [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)]
[gw1] Python 3.6.5 (default, Apr 25 2018, 14:23:58) -- [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)]
[gw2] Python 3.6.5 (default, Apr 25 2018, 14:23:58) -- [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)]
[gw3] Python 3.6.5 (default, Apr 25 2018, 14:23:58) -- [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)]
gw0 [4] / gw1 [4] / gw2 [4] / gw3 [4]
scheduling tests via LoadScheduling
test/test_5.py::test_1
test/test_5.py::test_2
[gw0] [ 25%] PASSED test/test_5.py::test_1
test/test_5.py::test_3
[gw1] [ 50%] PASSED test/test_5.py::test_2
[gw2] [ 75%] PASSED test/test_5.py::test_3
test/test_5.py::test_4
[gw3] [100%] PASSED test/test_5.py::test_4
============================================================== 4 passed in 1.47 seconds ===============================================================
```
No logs are printed out.
I saw #256, but it doesn't help. | open | 2019-01-13T03:54:56Z | 2024-09-22T21:17:11Z | https://github.com/pytest-dev/pytest-xdist/issues/402 | [] | zsluedem | 17 |
xonsh/xonsh | data-science | 5,064 | Running the script with binary inside fails with UnicodeDecodeError | ## xonfig
<details>
```
$ xonfig
+------------------+------------------------------------+
| xonsh | 0.13.4 |
| Python | 3.10.9 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.36 |
| shell type | prompt_toolkit |
| history backend | sqlite |
| pygments | 2.14.0 |
| on posix | True |
| on linux | True |
| distro | arch |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib 1 | kitty |
| RC file 1 | /home/nicolas/.config/xonsh/rc.xsh |
+------------------+------------------------------------+
```
</details>
## Expected Behavior
xonsh should run `mill`.
## Current Behavior
When I try to run [mill](https://com-lihaoyi.github.io/mill/mill/Intro_to_Mill.html), instead of it running, I get a `UnicodeDecodeError`, see the traceback for details. I use mill version 0.10.11. This also happens when calling `mill --help` or `--version`.
I can reproduce this on a fresh `archlinux` Docker container with just `xonsh` and `mill` installed.
I've never seen this happen with any other command. Mill works as expected from bash or zsh.
Interestingly, this doesn't happen when using mill via the bootstrapping script (see https://com-lihaoyi.github.io/mill/mill/Installation.html). This might indicate something weird happening in the Arch package specifically, but I'd argue anyway that xonsh shouldn't crash like that regardless of what Arch's mill does.
### Traceback (if applicable)
<details>
```
$ mill
<xonsh-code>:1:0 - mill
<xonsh-code>:1:0 + ![mill]
Traceback (most recent call last):
File "/usr/bin/xonsh", line 33, in <module>
sys.exit(load_entry_point('xonsh==0.13.4', 'console_scripts', 'xonsh')())
File "/usr/lib/python3.10/site-packages/xonsh/main.py", line 470, in main
_failback_to_other_shells(args, err)
File "/usr/lib/python3.10/site-packages/xonsh/main.py", line 417, in _failback_to_other_shells
raise err
File "/usr/lib/python3.10/site-packages/xonsh/main.py", line 468, in main
sys.exit(main_xonsh(args))
File "/usr/lib/python3.10/site-packages/xonsh/main.py", line 534, in main_xonsh
exc_info = run_script_with_cache(
File "/usr/lib/python3.10/site-packages/xonsh/codecache.py", line 166, in run_script_with_cache
code = f.read()
File "/usr/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x98 in position 2836: invalid start byte
```
</details>
## Steps to Reproduce
```
mill
```
Or if you want to try it with Docker:
```
$ sudo docker run -it --rm archlinux
$ pacman -Sy
$ pacman -S xonsh mill
$ xonsh
$ mill
```
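For what it's worth, here is a minimal sketch that should reproduce this without mill at all, assuming the cause is simply non-UTF-8 bytes in the script file (my traceback shows byte 0x98):
```python
import pathlib
import subprocess

# write a script whose tail is raw non-UTF-8 bytes, which is presumably
# what the Arch mill launcher looks like (0x98 appears in my traceback)
script = pathlib.Path("/tmp/binscript")
script.write_bytes(b"echo hello\n" + b"\x98\xff\x00")

# xonsh crashes with UnicodeDecodeError; bash runs the first line fine
subprocess.run(["xonsh", str(script)])
```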
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| open | 2023-02-19T00:42:36Z | 2023-02-23T17:42:43Z | https://github.com/xonsh/xonsh/issues/5064 | [
"integration-with-other-tools",
"executor",
"priority-high"
] | Eisfunke | 1 |
gradio-app/gradio | data-visualization | 10,775 | Python gradio_client can't send files without an extension | ### Describe the bug
When the Python gradio_client sends a file without an extension, e.g., `/tmp/test`, the server throws an exception and fails to return a result. If the client sends a file with an extension, e.g., `/tmp/test.txt`, it works fine. I'm not sure if the problem is on the client or the server side.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
Server:
```python
import gradio as gr
def echo(d, history):
print(f"cool {d} {history}")
return f"You said: {d['text']} and uploaded {len(d['files'])} files"
demo = gr.ChatInterface(fn=echo, multimodal=True, type="messages", examples=["hello", "hola", "merhaba"], title="Echo Bot")
demo.launch()
```
Client:
```python
from gradio_client import Client, handle_file
file_with_extension = handle_file("/tmp/test.txt")
file_without_extension = handle_file("/tmp/test")
client = Client("http://localhost:7860/")
result = client.predict(
message={"text":"Describe this image","files":[file_with_extension]},
api_name="/chat"
)
print(result)
print("We were able to upload a file with an extension")
# this won't work
result = client.predict(
message={"text":"Describe this image","files":[file_without_extension]},
api_name="/chat"
)
print(result)
```
### Screenshot
Client Logs
```
Loaded as API: http://localhost:7860/ ✔
You said: Describe this image and uploaded 1 files
We were able to upload a file with an extension
Traceback (most recent call last):
File "/home/ed/Projects/re-copilot/testing/test.py", line 15, in <module>
result = client.predict(
File "/home/ed/Projects/re-copilot/venv/lib/python3.10/site-packages/gradio_client/client.py", line 478, in predict
).result()
File "/home/ed/Projects/re-copilot/venv/lib/python3.10/site-packages/gradio_client/client.py", line 1539, in result
return super().result(timeout=timeout)
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/ed/Projects/re-copilot/venv/lib/python3.10/site-packages/gradio_client/client.py", line 1158, in _inner
predictions = _predict(*data)
File "/home/ed/Projects/re-copilot/venv/lib/python3.10/site-packages/gradio_client/client.py", line 1270, in _predict
raise AppError(
gradio_client.exceptions.AppError: The upstream Gradio app has raised an exception but has not enabled verbose error reporting. To enable, set show_error=True in launch().
```
### Logs
```shell
Server logs:
* Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
cool {'text': 'Describe this image', 'files': ['/tmp/gradio/c060b44d3dd57019b0d1cadc60331dd4400b7cb15819dbfdf3138450ca6d4db3/test.txt']} []
Traceback (most recent call last):
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio/queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio/blocks.py", line 2092, in process_api
inputs = await self.preprocess_data(
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio/blocks.py", line 1775, in preprocess_data
inputs_cached = await processing_utils.async_move_files_to_cache(
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio/processing_utils.py", line 648, in async_move_files_to_cache
return await client_utils.async_traverse(
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio_client/utils.py", line 1128, in async_traverse
new_obj[key] = await async_traverse(value, func, is_root)
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio_client/utils.py", line 1133, in async_traverse
new_obj.append(await async_traverse(item, func, is_root))
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio_client/utils.py", line 1123, in async_traverse
if is_root(json_obj):
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio_client/utils.py", line 1175, in is_file_obj_with_meta
and d["meta"].get("_type", "") == "gradio.FileData"
AttributeError: 'NoneType' object has no attribute 'get'
```
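As a stop-gap on my side I can copy extension-less files to a suffixed path before upload; a minimal sketch (the `.bin` suffix and the helper name are just my choice, not a real gradio_client API):
```python
import shutil
from gradio_client import handle_file

def handle_file_safe(path: str):
    # hypothetical helper: give extension-less files a ".bin" suffix,
    # since those currently crash the server during preprocessing
    name = path.rsplit("/", 1)[-1]
    if "." not in name:
        shutil.copy(path, path + ".bin")  # copy so the original stays untouched
        path = path + ".bin"
    return handle_file(path)
```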
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.20.0
gradio_client version: 1.7.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.7.2 is not installed.
groovy: 0.1.2
httpx: 0.27.2
huggingface-hub: 0.29.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.11
packaging: 24.1
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.10
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit: 0.12.0
typer: 0.12.5
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.29.2
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | open | 2025-03-10T17:42:28Z | 2025-03-10T18:49:42Z | https://github.com/gradio-app/gradio/issues/10775 | [
"bug"
] | edmcman | 3 |
codertimo/BERT-pytorch | nlp | 68 | How to Output Embedded Word Vectors | I want to output the embedded word vectors. How can I extract them from the model? | open | 2019-07-24T02:50:14Z | 2020-03-12T03:40:31Z | https://github.com/codertimo/BERT-pytorch/issues/68 | [] | enze5088 | 6
kennethreitz/responder | flask | 405 | api attribute of request is None | Yesterday I updated to responder 2.0.3. In my route, I make use of the api attribute. This worked in the past. But after upgrading to version 2.0.0 it is always None.
I checked the implementation. The `__call__` method of `Route` in `routes.py` creates a new `Request` object without providing an api argument, and the default in `models.py` is `None`.
Is this an intentional design change or a bug? Maybe I'm looking at the wrong piece of code. If this is an intentional change, what is the best way to access the api in a Route-class method?
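In the meantime, referencing the module-level `api` object from handlers works; a minimal sketch (class-based route, responder 2.x):
```python
import responder

api = responder.API()

@api.route("/status")
class StatusResource:
    async def on_get(self, req, resp):
        # req.api is None in 2.x, so fall back to the module-level object
        resp.media = {"req_api_is_none": req.api is None, "have_module_api": api is not None}
```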
I found a work around by adding my own middleware. | closed | 2019-10-27T06:38:21Z | 2024-03-31T00:57:29Z | https://github.com/kennethreitz/responder/issues/405 | [] | FirstKlaas | 0 |
autogluon/autogluon | data-science | 4,681 | torch.load Compatibility Issue: Unsupported Global fastcore.foundation.L with weights_only=True | Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
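Following the suggestion in the error text itself, this is the workaround I would try before fitting (only do this if you trust the source of the checkpoint):
```python
import torch
from fastcore.foundation import L

# allowlist fastcore's L so the fastai checkpoint loads under weights_only=True
torch.serialization.add_safe_globals([L])
```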
Detailed Traceback:
```
Traceback (most recent call last):
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\trainer\abstract_trainer.py", line 2103, in _train_and_save
model = self._train_single(**model_fit_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\trainer\abstract_trainer.py", line 1993, in _train_single
model = model.fit(X=X, y=y, X_val=X_val, y_val=y_val, X_test=X_test, y_test=y_test, total_resources=total_resources, **model_fit_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\abstract\abstract_model.py", line 925, in fit
out = self._fit(**kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\stacker_ensemble_model.py", line 270, in _fit
return super()._fit(X=X, y=y, time_limit=time_limit, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\bagged_ensemble_model.py", line 298, in _fit
self._fit_folds(
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\bagged_ensemble_model.py", line 724, in _fit_folds
fold_fitting_strategy.after_all_folds_scheduled()
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 317, in after_all_folds_scheduled
self._fit_fold_model(job)
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 322, in _fit_fold_model
fold_model = self._fit(self.model_base, time_start_fold, time_limit_fold, fold_ctx, self.model_base_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 358, in _fit
fold_model.fit(X=X_fold, y=y_fold, X_val=X_val_fold, y_val=y_val_fold, time_limit=time_limit_fold, num_cpus=num_cpus, num_gpus=num_gpus, **kwargs_fold)
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\abstract\abstract_model.py", line 925, in fit
out = self._fit(**kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\tabular\models\fastainn\tabular_nn_fastai.py", line 365, in _fit
self.model.fit_one_cycle(epochs, params["lr"], cbs=callbacks)
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\callback\schedule.py", line 121, in fit_one_cycle
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd, start_epoch=start_epoch)
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 266, in fit
self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 203, in _with_events
self(f'after_{event_type}'); final()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 174, in __call__
def __call__(self, event_name): L(event_name).map(self._call_one)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastcore\foundation.py", line 159, in map
def map(self, f, *args, **kwargs): return self._new(map_ex(self, f, *args, gen=False, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastcore\basics.py", line 910, in map_ex
return list(res)
^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastcore\basics.py", line 895, in __call__
return self.func(*fargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 178, in _call_one
for cb in self.cbs.sorted('order'): cb(event_name)
^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\callback\core.py", line 64, in __call__
except Exception as e: raise modify_exception(e, f'Exception occured in `{self.__class__.__name__}` when calling event `{event_name}`:\n\t{e.args[0]}', replace=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\callback\core.py", line 62, in __call__
try: res = getcallable(self, event_name)()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\tabular\models\fastainn\callbacks.py", line 116, in after_fit
self.learn.load(f"{self.fname}", with_opt=self.with_opt)
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 422, in load
load_model(file, self.model, self.opt, device=device, **kwargs)
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 53, in load_model
state = torch.load(file, map_location=device, **torch_load_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\torch\serialization.py", line 1455, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Exception occured in `AgSaveModelCallback` when calling event `after_fit`:
Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL fastcore.foundation.L was not an allowed global by default. Please use `torch.serialization.add_safe_globals([L])` or the `torch.serialization.safe_globals([L])` context manager to allowlist this global if you trust this class/function.
```
 | closed | 2024-11-23T11:41:14Z | 2024-11-23T14:04:34Z | https://github.com/autogluon/autogluon/issues/4681 | [
"enhancement"
] | celestinoxp | 0 |
labmlai/annotated_deep_learning_paper_implementations | deep-learning | 257 | Chinese Translation | I noticed that the Chinese translation appears to be machine-generated. Do you have any plans to officially translate this excellent project? If needed, I am willing to take on the responsibility for the Chinese translation! | open | 2024-06-17T13:18:06Z | 2024-06-22T00:02:32Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/257 | [] | pengchzn | 4 |
PokeAPI/pokeapi | api | 710 | Error formatting |
Hello! As you might know, Pokémon Legends is out, which can cause not-found errors in the database. But instead of formatting the error response as JSON, you format it as plain HTML. Why is this? It can break a lot of code. My request is that you kindly change that to JSON.
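For now our clients need a guard like this (a sketch using Python `requests`; the Pokémon name is just an example of a then-missing entry):
```python
import requests

resp = requests.get("https://pokeapi.co/api/v2/pokemon/sprigatito")
ctype = resp.headers.get("Content-Type", "")
if ctype.startswith("application/json"):
    data = resp.json()
else:
    # the 404 page currently comes back as HTML, so guard before parsing
    print(f"Non-JSON response ({resp.status_code}): {ctype}")
```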
Thank you!
Zakaria Aourzag
| open | 2022-04-15T07:25:08Z | 2022-04-19T11:01:04Z | https://github.com/PokeAPI/pokeapi/issues/710 | [] | zaourzag | 8 |
ultralytics/ultralytics | python | 19,305 | Ultralytics memory problem | How can I minimize GPU memory usage while training, other than setting the workers parameter? | closed | 2025-02-19T05:13:32Z | 2025-02-21T05:05:12Z | https://github.com/ultralytics/ultralytics/issues/19305 | [
"bug",
"segment",
"detect"
] | abhishekb-weboccult | 11 |
littlecodersh/ItChat | api | 640 | Add sticker packs to WeChat? | It would be great if it were possible to automatically add a series of GIFs or images to WeChat as stickers. | closed | 2018-04-19T06:02:39Z | 2023-02-06T08:56:06Z | https://github.com/littlecodersh/ItChat/issues/640 | [
"help wanted"
] | foreverlms | 4 |
pydantic/FastUI | fastapi | 262 | Could we build a "plugin" system to expand the components library? | I think it would be great if components that are built as separate Python / JS packages by the community could be brought into the `fastui` framework.
This would make the components much easier to "pick and choose", and many of the components might not need to live in the main framework.
My lack of JS knowledge unfortunately prevents me, for now, from providing more thoughts, as I'm not sure how that could be integrated into the current `prebuilt_html` approach and elsewhere.
NB: I think the `polars` model for plugins is pretty cool, so just sharing for inspiration (albeit with a rust to python approach): [plugins](https://docs.pola.rs/user-guide/expressions/plugins/#community-plugins), [community](https://docs.pola.rs/user-guide/expressions/plugins/#community-plugins) | open | 2024-04-05T08:55:37Z | 2024-04-05T08:55:37Z | https://github.com/pydantic/FastUI/issues/262 | [] | tim-x-y-z | 0 |
OpenInterpreter/open-interpreter | python | 888 | Windows OS features not available? | ### Is your feature request related to a problem? Please describe.
The demo shows a Mac opening YouTube, but I get this response:
I apologize for the confusion. While the demo may have shown a simulated interaction where the assistant operated a
browser and clicked on a play button, in reality, as an AI text-based assistant, I cannot directly access or control
your browser or any other application on your computer. The demo was created to showcase potential capabilities, but
the actual implementation of executing actions on a user's machine would require additional permissions and security
considerations.
However, I can still assist you with providing instructions on how to perform tasks manually or suggest alternative
approaches using available tools or methods. If you have any specific tasks or questions, please let me know, and I
will do my best to provide guidance.
### Describe the solution you'd like
I have Wox installed, but I get the same canned response as quoted above.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2024-01-08T22:32:24Z | 2024-10-31T16:58:22Z | https://github.com/OpenInterpreter/open-interpreter/issues/888 | [
"Enhancement"
] | BishAU | 1 |
plotly/dash | plotly | 2,627 | [Feature Request] --contact-address with multiple workers at specific port range. | Sorry I confused dask with dash!
**Is your feature request related to a problem? Please describe.**
I'm running dask workers in an isolated network A. These workers have an internal IP address (192.168.0.1) and an external IP address (10.0.0.1). There are some other workers in networks B and C; they can only communicate using the external address (there are firewalls and other restrictions, etc.). **So `--contact-address` is required.**
I want to run multiple workers using `--nworkers > 1` (or `--nworkers auto`), **but**:
* if `--nworkers` is specified, `--bind-address` is **NOT** allowed.
* if `--contact-address` is specified, `--bind-address` is **required**.
Even though I can run multiple instances with `--nworkers 1`, if I want to start **hundreds of workers on one node**, things are going to be difficult. Put another way: things could have been much simpler.
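For reference, a sketch of the one-process-per-worker workaround mentioned above (the scheduler address is hypothetical); it works but clearly doesn't scale to hundreds of workers:
```python
import subprocess

scheduler = "tcp://10.0.0.254:8786"  # hypothetical scheduler address
workers = []
for port in range(6000, 6004):  # hundreds of these in the real case
    # one single-worker process per port keeps --bind-address/--contact-address usable
    workers.append(subprocess.Popen([
        "dask-worker", scheduler,
        "--nworkers", "1",
        "--bind-address", f"tcp://192.168.0.1:{port}",
        "--contact-address", f"tcp://10.0.0.1:{port}",
    ]))
for w in workers:
    w.wait()
```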
**Describe the solution you'd like**
* `--bind-address` could accept an address plus a range of ports, so it no longer conflicts with `--nworkers`.
For example: `tcp://192.168.0.1:6000-6256`
**Describe alternatives you've considered**
I couldn't think of a better solution.
| closed | 2023-08-17T12:09:38Z | 2023-08-17T12:13:49Z | https://github.com/plotly/dash/issues/2627 | [] | nanoric | 0 |
biolab/orange3 | pandas | 6,640 | New widget : cleansing data | **What's your use case?**
More than once, the data I work on contains null values, or unwanted spaces at the end of fields; sometimes a whole field is empty, or there are completely empty rows in the middle of the dataset. There aren't a lot of cases, but they happen often enough that a widget to automate this would be great.
**What's your proposed solution?**
A widget with the main cleaning operations. Something like this:

**Are there any alternative solutions?**
Using several widgets in all the concerned workflows to process the data.
| closed | 2023-11-18T21:14:36Z | 2023-12-08T08:06:50Z | https://github.com/biolab/orange3/issues/6640 | [] | simonaubertbd | 4 |
JaidedAI/EasyOCR | deep-learning | 1,128 | Process finished with exit code -1073741795 (0xC000001D) | Description of the problem:
EasyOCR is successfully installed and the program starts, but no text from the picture is output at all.
Code:
```python
import easyocr

reader = easyocr.Reader(['ru', 'en'])  # this needs to be run only once to load the model into memory
result = reader.readtext('12.jpg')
```
Output:
```
Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
Process finished with exit code -1073741795 (0xC000001D)
```
My torment:
I ran into a big problem when using EasyOCR. Working on a laptop with an i5-8250U processor, I installed and tested your library for the first time; it started almost immediately without problems and recognized the text from the image, which made me very happy.
(I am developing a program for classifying PDF files by keywords.) At the end of my internship, I put the virtual environment with this project on a flash drive and then tried to run it on an old laptop (i3-2100M, GT 610M), where the library refused to work. Then I tried to run it on a PC (i7-4960X, RTX 2060, 64 GB RAM). I spent 10 hours trying to get this library to run, and in the end I didn't succeed. The attempts I made during those 10 hours:
- Reinstalling EasyOCR
- Reinstalling PIL, CV2, and Torch
- Poking around in the code (I didn't understand anything)
- Creating a new virtual environment and reinstalling everything; nothing helped
- Installing older and other Python versions
- Changing dependency versions at random
I also tried to install it very carefully several times according to the manual:
```
pip install torch torchvision torchaudio
pip install easyocr
```
And it didn't help; it still outputs "Process finished with exit code -1073741795 (0xC000001D)".
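One pattern I notice (just my guess, not confirmed): exit code 0xC000001D means STATUS_ILLEGAL_INSTRUCTION; the i5-8250U where it works supports AVX2, while the i3-2100M and the i7-4960X where it fails support only AVX, so a dependency built for AVX2 would explain exactly this split. A quick check with the `py-cpuinfo` package:
```python
# pip install py-cpuinfo
from cpuinfo import get_cpu_info

flags = get_cpu_info().get("flags", [])
print("avx:", "avx" in flags, "| avx2:", "avx2" in flags)
```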
I don't know what else to do, so I'm asking for help with this problem. | open | 2023-09-01T21:05:34Z | 2024-11-08T02:52:42Z | https://github.com/JaidedAI/EasyOCR/issues/1128 | [] | Michaelufcb | 6
voxel51/fiftyone | data-science | 5,292 | [BUG] Downloading dataset from HF HUB | Documentation [says](https://docs.voxel51.com/integrations/huggingface.html?highlight=wider#:~:text=(dataset)-,Detection%20Datasets,-Loading%20detection%20datasets):
> ### Detection Datasets
>
> Loading detection datasets from the Hub is just as easy. For example, to load the [MS COCO](https://huggingface.co/datasets/detection-datasets/coco) dataset, you can specify the detection_fields as "objects", which is the standard column name for detection features in Hugging Face datasets:
>
> ```python
> from fiftyone.utils.huggingface import load_from_hub
>
> dataset = load_from_hub(
> "detection-datasets/coco",
> format="parquet",
> detection_fields="objects",
> max_samples=1000,
> )
>
> session = fo.launch_app(dataset)
> ```
>
> The same syntax works for many other popular detection datasets on the Hub, including:
>
> * [CPPE - 5](https://huggingface.co/datasets/rishitdagli/cppe-5) (use "rishitdagli/cppe-5")
> * [WIDER FACE](https://huggingface.co/datasets/CUHK-CSE/wider_face) (use "CUHK-CSE/wider_face")
> * [License Plate Object Detection](https://huggingface.co/datasets/keremberke/license-plate-object-detection) (use "keremberke/license-plate-object-detection")
> * [Aerial Sheep Object Detection](https://huggingface.co/datasets/keremberke/aerial-sheep-object-detection) (use "keremberke/aerial-sheep-object-detection")
But when trying it with `CUHK-CSE/wider_face`, it seems it cannot be consumed like the others. The https://datasets-server.huggingface.co/splits?dataset=detection-datasets/coco endpoint from the example returns properly, but https://datasets-server.huggingface.co/splits?dataset=CUHK-CSE%2Fwider_face does not.
Code to reproduce:
```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
dataset = load_from_hub(
"CUHK-CSE/wider_face",
format="parquet",
detection_fields="objects",
max_samples=1000,
)
if __name__ == "__main__":
session = fo.launch_app(dataset)
```
Throws a `KeyError: 'splits'` error. | open | 2024-12-18T02:42:06Z | 2024-12-18T03:10:05Z | https://github.com/voxel51/fiftyone/issues/5292 | [
"bug"
] | ankandrew | 1 |
plotly/dash | dash | 3,064 | [BUG] Cannot extract virtualRowData from an AG Grid table using pattern matching |
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
async-dash 0.1.0a1
dash 2.18.2
dash_ag_grid 31.2.0
dash-bootstrap-components 1.6.0
dash_canvas 0.1.0
dash-core-components 2.0.0
dash_daq 0.5.0
dash-dndkit 0.0.3
dash-extensions 1.0.15
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-leaflet 1.0.15
dash-loading-spinners 1.0.3
dash-mantine-components 0.12.1
dash-paperdragon 0.1.0
dash-resizable-panels 0.1.0
dash-svg 0.0.12
dash-table 5.0.0
dash_treeview_antd 0.0.1
```
**Describe the bug**
I am creating AG Grid tables dynamically and trying to reference them as Inputs/States dynamically in a Callback (using pattern matching) to get the virtualRowData. However, pattern matching can only return the component IDs.
```
import dash  # needed for dash.no_update below
import dash_bootstrap_components as dbc
from dash import Dash, dcc, html, Input, Output, callback, ALL
import dash_ag_grid as dag
app = Dash(__name__)
def create_grid(opt: str,  # receives labels like "Option 1"
                data: dict | list = None):
columnDefs = [
{'headerName': 'Label',
'field': 'label'},
{'headerName': 'Name',
'field': 'name',
'flex': 2}
]
grid = dag.AgGrid(
id=f"registration-{opt}-slides",
rowData = [
{'label': 'Test Label 1',
'name': 'Test Name 1'},
{'label': 'Test Label 2',
'name': 'Test Name 2'},
{'label': 'Test Label 3',
'name': 'Test Name 3'}
],
columnDefs=columnDefs,
defaultColDef={'resizable': True},
dashGridOptions={
"rowHeight": 70,
"rowDragManaged": True,
"rowDragEntireRow": True,
"rowDragMultiRow": True, "rowSelection": "multiple",
"suppressMoveWhenRowDragging": True,
},
dangerously_allow_code=True,
)
return grid
# _create_grid_div is defined before `content` so the layout below can call it
def _create_grid_div(opt):
    grid = html.Div(
        id={"type": "dynamic-cards", "index": opt},
        children = [
            html.H4(opt),
            create_grid(opt)
        ],
        className='registration-table-div'
    )
    return grid

content = dcc.Tab(
    label='Dynamic Components',
    children = [
        dbc.Row(
            children = [
                dbc.Col(
                    children = [
                        html.Div(
                            id='registration-grids',
                            children = [
                                html.Div(
                                    id = 'grid-container',
                                    children = [_create_grid_div("Option 1")]),
                            ],
                        )
                    ]
                )
            ]
        )
    ]
)
@callback(
Output('dummy-div', 'children'),
Input({'type': "dynamic-cards", 'index': ALL}, 'id'),
Input({'type': "dynamic-cards", 'index': ALL}, 'virtualRowData'),
Input('registration-Option 1-slides', 'virtualRowData'),
prevent_initial_call=True
)
def _explore_virtual_data(id, vrd, opt1_vrd):
print(f"id: {id}")
print(f"vrd: {vrd}")
print(f"opt1_vrd: {opt1_vrd}")
return dash.no_update
dummy_div = html.Div(
children = [],
id='dummy-div'
)
app.layout = html.Div([
dummy_div,
content],
)
if __name__ == "__main__":
app.run_server(port=8888, jupyter_mode="external", debug=True, dev_tools_ui=True)
```
This results in the following output:
```
id: [{'type': 'dynamic-cards', 'index': 'Option 1'}]
vrd: [None]
opt1_vrd: [{'label': 'Test Label 1', 'name': 'Test Name 1'}, {'label': 'Test Label 2', 'name': 'Test Name 2'}, {'label': 'Test Label 3', 'name': 'Test Name 3'}]
```
**Expected behavior**
I expect that using pattern matching will allow me to get the virtualRowData from an AG Grid table just as I can calling the component directly. Can this be done, or is there another way to attain my goal?
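For reference, the workaround I'm considering (a sketch, reusing the imports from the repro above): give each `AgGrid` itself a dict id, and pattern-match on that instead of on the wrapping `Div`:
```python
def create_pm_grid(opt: str):
    # hypothetical variant of create_grid with a pattern-matching id on the grid itself
    return dag.AgGrid(
        id={"type": "dynamic-grid", "index": opt},
        rowData=[{"label": f"{opt} label", "name": f"{opt} name"}],
        columnDefs=[{"field": "label"}, {"field": "name"}],
    )

@callback(
    Output("dummy-div", "children"),
    Input({"type": "dynamic-grid", "index": ALL}, "virtualRowData"),
    prevent_initial_call=True,
)
def _explore_pm(vrd):
    print(f"vrd: {vrd}")  # one entry per matching grid
    return dash.no_update
```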
| closed | 2024-11-07T16:15:18Z | 2024-11-08T15:50:11Z | https://github.com/plotly/dash/issues/3064 | [] | akunihiro | 2 |
matplotlib/matplotlib | data-science | 29,414 | [ENH]: Compute ticks of log scaled axes a bit better? | ### Problem
Taken from [here](https://github.com/matplotlib/matplotlib/issues/8768/#issuecomment-889516769), I found the following ticks (from `plt.semilogy([1.5, 50])`):

Not informative enough. I wish there were an easy way to `set_minor_formatter` such that, say, `2x10^0` and `3x10^0` would be displayed. I tried to use this (full MWE this time):
```python
import matplotlib.pyplot as plt
from matplotlib import ticker
plt.semilogy([1.5, 50])
plt.gca().yaxis.set_minor_formatter(ticker.LogFormatter(base=10, labelOnlyBase=True))
plt.show()
```
But it changed nothing. On the other hand, I was able to obtain my desired output with:
```python
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import ticker
plt.semilogy([1.5, 50])
def get_minor_tick_string(x, pos):
b = np.floor(np.log10(x)).astype(int)
p = pos % 8
if p < 3:
return f"${p+2}\\cdot 10^{b}$"
else:
return ""
plt.gca().yaxis.set_minor_formatter(get_minor_tick_string)
plt.show()
```
But I'm sure it is not very versatile. It produces:

Which is not bad.
### Proposed solution
I'd like, first, for the docs to explain (or the behavior to be fixed for) `set_minor_formatter(ticker.LogFormatter(base=10, labelOnlyBase=True))`. Secondly, I'd like to discuss the possibility of adding tick labels as shown above, because I think I'm not the only one who was not satisfied by this default behavior. | closed | 2025-01-06T18:52:37Z | 2025-01-07T10:47:55Z | https://github.com/matplotlib/matplotlib/issues/29414 | [
"New feature"
] | doronbehar | 2 |
ivy-llc/ivy | pytorch | 28,039 | Fix Frontend Failing Test: torch - tensor.torch.Tensor.reshape | To-do List: https://github.com/unifyai/ivy/issues/27498 | open | 2024-01-25T09:07:11Z | 2024-01-25T09:07:11Z | https://github.com/ivy-llc/ivy/issues/28039 | [
"Sub Task"
] | Purity-E | 0 |
jmcnamara/XlsxWriter | pandas | 940 | question: scale_with_doc when adding pictures in header/footer | ### Question
Hey,
I've been trying to attach a picture in the header and footer of my worksheet and this works.
Unfortunately, the pictures are scaled causing them to appear differently than I would expect.
For example, I attach a picture in the header and when I open the xlsx file, I see the following in the Format Header Picture:
<img width="299" alt="image" src="https://user-images.githubusercontent.com/23147823/212534184-68bbaff0-7582-4ced-8be5-0f8eb0fd0b1c.png">
It scaled to 44%.
Same for the footer, where it got scaled 200%.
I tried using the `scale_with_doc: False` option but that didn't seem to do anything.
Is there a way to fix this?
| closed | 2023-01-15T09:53:52Z | 2023-01-17T10:36:18Z | https://github.com/jmcnamara/XlsxWriter/issues/940 | [
"question"
] | sidfeiner | 5 |
ivy-llc/ivy | pytorch | 28,104 | Fix Frontend Failing Test: torch - tensor.torch.Tensor.int | To-do List: https://github.com/unifyai/ivy/issues/27498 | closed | 2024-01-29T06:12:17Z | 2024-01-29T12:01:27Z | https://github.com/ivy-llc/ivy/issues/28104 | [
"Sub Task"
] | Husienvora | 1 |
flasgger/flasgger | flask | 161 | Accessing response model | I have a .yml file with a response model like this:
```
responses:
200:
description: The customer and its properties are deleted
400:
description: The request contains invalid syntax, value or cannot be fulfilled
403:
description: The customer cannot be deleted because there are still resources associated to the customer
```
I'd like to use that to create the response portion of my API. Something like:
```
if response_code == 200:
result = run_some_function()
return jsonify(result), response_code
elif response_code in responses.keys():
return {'description': responses[response_code]}, response_code
else:
return {'description': "Unspecified Server Error"}, 500
```
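One generic approach I'm considering (not flasgger-specific; it assumes the spec lives in a `responses.yml` next to the view) is to load the yaml myself to build that `responses` dict:
```python
import yaml

# assumption: the same response spec is saved as responses.yml next to the view
with open("responses.yml") as f:
    spec = yaml.safe_load(f)

responses = {code: meta["description"] for code, meta in spec["responses"].items()}
```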
I can see how to access a schema but not the responses; ideally I'd only have to write minimal code. | open | 2017-10-24T00:26:08Z | 2018-10-01T17:31:31Z | https://github.com/flasgger/flasgger/issues/161 | [
"hacktoberfest"
] | currand | 0 |
onnx/onnxmltools | scikit-learn | 599 | Please verify 1.13.0 ONNX release candidate on TestPyPI | Hi,
We have released TestPyPI packages of ONNX 1.9.0: [onnx ](https://test.pypi.org/project/onnx/1.13.0rc1/)(ONNX 1.13.0rc1 is the latest version number for testing now).
Please verify it and let us know about any problems. Thank you for your help! | closed | 2022-11-30T16:40:34Z | 2022-12-12T15:36:50Z | https://github.com/onnx/onnxmltools/issues/599 | [] | p-wysocki | 1 |
SALib/SALib | numpy | 233 | Does SALib support only vector inputs? | The shape of my model's output array, which I am supposed to pass to SALib's Sobol analyze function, does not fit the shape of the array that Sobol needs.
I have a parameter array with a shape of (28, 13) that is produced by SALib's saltelli function.
On the other hand, my model computes a solution at numerous time points, say 200. So in the end I have an output array of shape (28, 200).
I must use this output array as an argument for SALib's Sobol function.
But as far as I understand from the example in SALib's docs, the input argument of Sobol must be a column array such as (28,).
What I want to know is: do I draw the correct conclusion from SALib's documentation that its Sobol function needs only a column array, or can I feed it an array of shape (28, 200)?
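For what it's worth, the per-time-point loop I'm considering looks like this (assuming `problem` is the same dict passed to `saltelli.sample` and `Y` is my (28, 200) output):
```python
from SALib.analyze import sobol

# analyze each of the 200 time points separately, one 1-D slice at a time
Si_per_time = [sobol.analyze(problem, Y[:, t]) for t in range(Y.shape[1])]
```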
PS: I actually tried (28, 200) and got an error, but I am not sure whether the error occurs because SALib does not support an array of shape (28, 200) or because my code has an error. (I first asked this on Stack Overflow but couldn't get an answer:
https://stackoverflow.com/questions/55634208/does-salib-sensitivity-analysis-package-support-only-one-column-vector-input) | closed | 2019-04-13T07:38:13Z | 2019-04-20T12:07:04Z | https://github.com/SALib/SALib/issues/233 | [
"question"
] | kayakml | 5 |
thtrieu/darkflow | tensorflow | 440 | My dataset is tfrecord; I want to know how to train my own dataset | I converted a dataset to tfrecord format and want to know how I can train on it, or whether darkflow can only be used to train on the VOC dataset? | closed | 2017-11-23T06:28:58Z | 2018-09-14T08:35:42Z | https://github.com/thtrieu/darkflow/issues/440 | [] | geroge-gao | 3
amidaware/tacticalrmm | django | 1,359 | Bug: running scripts and sending commands on Microsoft Exchange Servers | **Server Info (please complete the following information):**
- OS: Ubuntu Server 20.04
- Browser: chrome 108
- RMM Version (as shown in top left of web UI): 0.15.3
**Installation Method:**
- [ ] Standard
- [x] Docker
**Agent Info (please complete the following information):**
- Agent version (as shown in the 'Summary' tab of the agent from web UI): 2.4.2
- Agent OS: Windows Server 2016
**Describe the bug**
If I run a PowerShell script on this server, which is an Exchange server, I get the following error:
exec: "Powershell": cannot run executable found relative to current directory
I reinstalled the whole agent; powershell.exe is available in C:\Windows\System32; DISM and SFC checks completed with no problems; and the server has been restarted several times.
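If it helps: this message looks like Go's exec "ErrDot" error, i.e. the agent resolved "Powershell" to a path relative to its current directory instead of an absolute PATH entry. A quick sketch of what I would check on the affected machine (the relative-PATH idea and the file names are my guesses, not confirmed):
```python
import os

# any relative PATH entry (like ".") or a stray Powershell.* file in the
# agent's working directory could trigger exactly this Go error
print([p for p in os.environ["PATH"].split(os.pathsep) if not os.path.isabs(p)])
print([f for f in os.listdir(".") if f.lower().startswith("powershell")])
```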
**To Reproduce**
Steps to reproduce the behavior:
Because I do not know what the procedure looks like for starting the PowerShell script, I cannot reproduce this. On all other systems I have, the scripts start, but not on this one.
**Expected behavior**
The scripts should run.
**Screenshots**
no
**Additional context**
none
| closed | 2022-12-02T08:47:15Z | 2022-12-12T07:50:06Z | https://github.com/amidaware/tacticalrmm/issues/1359 | [] | rawbyne | 26 |
wkentaro/labelme | deep-learning | 904 | The key point belongs to the bbox, how to label the information? | How to mark the key points of the face and the face frame at the same time?How to judge whether the key points of a human face belong to a human face? How to judge whether the key points of a human face are not on the face? | closed | 2021-08-11T08:19:32Z | 2022-06-25T04:38:48Z | https://github.com/wkentaro/labelme/issues/904 | [] | watertianyi | 1 |
pydantic/logfire | pydantic | 674 | API access to the span generated by the @logfire.instrument() decorator | ### Description
(as discussed on Slack)
I'd like to have access to the span generated by the decorator. E.g., here is how I'm hoping to use it:
```
@logfire.instrument()
def lf_example():
result = do_some_things()
if result:
# Set an attribute on the current logfire span from the decorator
logfire.set_attribute(status='OK')
else:
logfire.set_attribute(status='ERROR')
```
An idea explored on Slack was to use otel apis:
```
from opentelemetry.trace.propagation import get_current_span
get_current_span().set_attribute(...)
```
But it seems that the otel APIs don't work with non-primitives.
An idea was to add a `logfire.get_current_span` api that would get the LogfireSpan instance.
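To make that concrete, roughly the shape I'd expect (entirely hypothetical API; neither `get_current_span` nor this `set_attribute` signature exists in logfire today):
```python
import logfire

def do_some_things():  # stub so the sketch is self-contained
    return True

@logfire.instrument()
def lf_example():
    result = do_some_things()
    span = logfire.get_current_span()  # hypothetical: the decorator's LogfireSpan
    span.set_attribute("status", "OK" if result else "ERROR")
```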
| open | 2024-12-13T20:53:18Z | 2024-12-16T11:56:56Z | https://github.com/pydantic/logfire/issues/674 | [
"Feature Request"
] | milest | 0 |
microsoft/nni | data-science | 5,800 | How to display error messages | When I encounter some errors, the process will exit without giving me any information.
Actually, it is common to have some bugs in my code, but after
experiment.run(port=8080, wait_completion=False, debug=True)
the new process does not print my debug info to the terminal.
For example, if I read a non-existent file, the process fails silently, and I have to guess what has happened.
I attempted to read "https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md" but the file doesn't exist.
When running buggy code with NNI, it seems that NNI skips the trial and does not report where the error is. For example, my data-loading path was wrong, and this error was never reported in the NNI terminal.
In addition, "https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md" also cannot be opened.
**Environment**:
- Training service (local|remote|pai|aml|etc): local
- Client OS: Windows 11 / ubuntu 20.04
- Python version: 3.8
- Is conda/virtualenv/venv used?: yes
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**:
You can deliberately introduce a bug, and you will see that nothing about it is reported. | open | 2024-08-01T15:40:29Z | 2024-08-01T15:40:29Z | https://github.com/microsoft/nni/issues/5800 | [] | ayachi3 | 0
youfou/wxpy | api | 101 | sent_message.py raises an error | > File "/usr/local/lib/python3.6/site-packages/wxpy/api/messages/sent_message.py", line 88, in member
> if isinstance(Group, self.receiver):
```
@property
def member(self):
"""
若在群聊中发送消息,则为群员
"""
from wxpy import Group
if isinstance(Group, self.receiver):
return self.receiver.self
```
Aren't the arguments to this `isinstance` call reversed? | closed | 2017-06-28T08:01:43Z | 2017-06-29T02:42:14Z | https://github.com/youfou/wxpy/issues/101 | [] | ourbest | 1
explosion/spacy-streamlit | streamlit | 7 | Cannot use `visualize` multiple times in the same script | When using `visualize` methods multiple times, we get into a `DuplicateWidgetID` error.
```python
import spacy_streamlit
models = ["en_core_web_sm", "en_core_web_md"]
text1 = "Sundar Pichai is the CEO of Google."
text2 = "Randy Zwitch is Head of Developer Relations at Streamlit"
spacy_streamlit.visualize(models, text1)
spacy_streamlit.visualize(models, text2)
```

Adding/generating a `key` parameter to pass to those widgets should do.
[Original forum thread](https://discuss.streamlit.io/t/spacys-entity-visualiser-cant-be-duplicated-out-of-the-box-is-there-a-way-to-get-1-visualiser-per-text-area/4434/4) | closed | 2020-07-23T07:23:23Z | 2020-07-26T19:57:03Z | https://github.com/explosion/spacy-streamlit/issues/7 | [] | andfanilo | 0 |
seleniumbase/SeleniumBase | pytest | 3,533 | Nested calls to `element.query_selector(selector)` fail | I can't figure out why, but `element.query_selector(selector)` seems to only work the first time I call it.
Here is an example on https://www.dezlearn.com/nested-iframes-example/
```python
from seleniumbase import Driver
driver = Driver(uc=True)
driver.uc_activate_cdp_mode("https://www.dezlearn.com/nested-iframes-example/")
driver.sleep(5)
iframe1 = driver.cdp.find_element("iframe#parent_iframe") # this works
print(type(iframe1))
# type(iframe1) is <class 'seleniumbase.undetected.cdp_driver.element.Element'>
iframe2 = iframe1.query_selector("iframe#iframe1") # this works
print(type(iframe2))
# type(iframe2) is <class 'seleniumbase.undetected.cdp_driver.element.Element'>
button = iframe2.query_selector("button#u_5_6") # this doesn't work, but doesn't output any error message either
print(type(button))
# type(button) is <class 'NoneType'>
```
Can you replicate this? I'm on SeleniumBase version `4.34.15` | closed | 2025-02-18T16:39:58Z | 2025-02-18T19:13:14Z | https://github.com/seleniumbase/SeleniumBase/issues/3533 | [
"workaround exists",
"UC Mode / CDP Mode"
] | julesmcrt | 1 |
hankcs/HanLP | nlp | 1,170 | First FAQ item | 
The FAQ claims that no matter how the sentence is segmented, 商品 ("goods"), 和服 ("kimono"), and 服务 ("service") can never appear at the same time; this is not rigorous.

| closed | 2019-05-05T09:04:05Z | 2020-01-01T10:49:52Z | https://github.com/hankcs/HanLP/issues/1170 | [
"ignored"
] | soongp | 1 |
sktime/pytorch-forecasting | pandas | 1,685 | [ENH] enable large data use cases - decouple data input from `pandas`, allow `polars`, `dask`, and/or `spark` | A key limitation of the current architecture seems to be the reliance on `pandas` for the input, which limits usability in large-data cases.
While `torch` with appropriate backends should be able to handle large data, `pandas` as a container choice, in particular the current instantiation, which seems to rely on in-memory data, will prove to be the bottleneck.
We should therefore consider and implement support for data backends that scale better, such as `polars`, `dask`, or `spark`, and see how easy it is to get the `pandas` `pyarrow` integration to work.
Architecturally, I think we should:
* build a more abstract data loader layer
* make `pandas` one of multiple potential data soft dependencies
* try to prioritize the solution that would provide us with the quickest "impact for time invested"
The key entry point for this extension or refactor is `TimeSeriesDataSet`, which requires `pandas` objects to be passed.
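To make the first bullet concrete, a rough sketch of what the abstraction layer could look like (names are placeholders, not a design commitment):
```python
from typing import Any, Iterable, Protocol


class FrameAdapter(Protocol):
    """One adapter per backend (pandas, polars, dask, spark, ...)."""

    def group_ids(self, group_col: str) -> Iterable[Any]: ...

    def slice_group(self, group_id: Any) -> Any: ...

    def to_tensor_batch(self, rows: Any) -> Any: ...
```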
| open | 2024-09-23T09:09:44Z | 2025-01-12T20:38:25Z | https://github.com/sktime/pytorch-forecasting/issues/1685 | [
"enhancement"
] | fkiraly | 7 |
keras-team/keras | deep-learning | 20,274 | Incompatibility in 'tf.GradientTape.watch' of TensorFlow 2.17 in Keras 3.4.1 | I read the issue 19155 (https://github.com/keras-team/keras/issues/19155), but still have problem
I am trying to perform gradient descent on the model's trainable variables, but I get errors involving `model.trainable_variables`.
Tensorflow version is 2.17.0
keras version is 3.4.1
def get_grad(model, X_train, data_train):
with tf.GradientTape(persistent=True) as tape:
# This tape is for derivatives with
# respect to trainable variables
tape.watch(model.trainable_variables.value) ###added .value from issue 19155
loss = compute_loss(model, X_train, data_train)
g = tape.gradient(loss, model.trainable_variables.value) #
del tape
return loss, g
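For comparison, the variant without the `.value` access (a sketch; `trainable_variables` is a plain list in Keras 3, and `tape.watch` accepts a list of variables):
```python
with tf.GradientTape(persistent=True) as tape:
    tape.watch(model.trainable_variables)  # watch the list directly
    loss = compute_loss(model, X_train, data_train)
g = tape.gradient(loss, model.trainable_variables)
del tape
```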
###################
Error:
AttributeError: in user code:
File "<ipython-input-13-fc15d0ce6166>", line 7, in train_step *
loss, grad_theta = get_grad(model, X_train, data_train)
File "<ipython-input-11-cca40e4543b3>", line 6, in get_grad *
tape.watch(model.trainable_variables.value)
AttributeError: 'list' object has no attribute 'value' | closed | 2024-09-20T06:20:02Z | 2024-10-22T02:03:06Z | https://github.com/keras-team/keras/issues/20274 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | yajuna | 4 |
amidaware/tacticalrmm | django | 1,849 | [BUG]: Resolved Alerts are triggered after exiting maintenance mode | **Server Info (please complete the following information):**
- OS: Debian (Kubernetes Node)
- Browser: Chrome / Edge
- RMM Version (as shown in top left of web UI): v0.18.2
**Installation Method:**
- [ ] Standard
- [ ] Standard with `--insecure` flag at install
- [ x ] Docker
**Agent Info (please complete the following information):**
- Agent version (as shown in the 'Summary' tab of the agent from web UI): v2.7.0
- Agent OS: [e.g. Win 10 v2004, Server 2016] Windows Server 2019 Standard
**Describe the bug**
It seems like whenever we turn off maintenance mode we get a ton of “resolved” alerts.
We don’t get alerted during the window which is correct, but we get alerted for service restorations after the window is over, even if those services were resolved during the window.
This has been going on for as long as I remember, I just haven't had time to open an issue for it. I am confident every version of Tactical has done this, at least in our specific environment.
**To Reproduce**
Steps to reproduce the behavior:
1. Put a client in maintenance mode
2. Trigger a reboot
3. Exit maintenance mode (can be an hour later, it still does it)
4. "Service Check: Resolved" gets triggered even though the service was started long before exiting maintenance mode
**Expected behavior**
Maintenance mode should not trigger Resolved alerts after exiting maintenance mode.
**Screenshots**
N/A
**Additional context**
@wh1te909 is familiar with our environment. It is technically "unsupported" but nonetheless if it's something on our side, just need to know what the cause is so we can address and resolve accordingly. I am filing it as a bug report for now per conversation on Discord. | closed | 2024-04-18T18:07:54Z | 2024-06-28T20:30:09Z | https://github.com/amidaware/tacticalrmm/issues/1849 | [] | joeldeteves | 3 |
SciTools/cartopy | matplotlib | 2,212 | stock_img() fails with scipy 1.11 | ### Description
<!-- Please provide a general introduction to the issue/proposal. -->
Calling `ax.stock_img()` on a GeoAxes object fails in an environment with scipy 1.11.1 and Cartopy 0.21.1
The same code works as expected in an environment with scipy 1.10.1 and Cartopy 0.21.1.
<!--
If you are reporting a bug, attach the *entire* traceback from Python.
If you are proposing an enhancement/new feature, provide links to related articles, reference examples, etc.
If you are asking a question, please ask on StackOverflow and use the cartopy tag. All cartopy
questions on StackOverflow can be found at https://stackoverflow.com/questions/tagged/cartopy
-->
#### Code to reproduce
```
import matplotlib.pyplot as plt
from cartopy import crs as ccrs
fig = plt.figure(figsize=(11, 8.5))
ax = plt.subplot(1, 1, 1, projection=ccrs.Mollweide(central_longitude=0))
ax.stock_img()
```
#### Traceback
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[1], line 5
3 fig = plt.figure(figsize=(11, 8.5))
4 ax = plt.subplot(1, 1, 1, projection=ccrs.Mollweide(central_longitude=0))
----> 5 ax.stock_img()
File ~/miniconda3/envs/test-cartopy/lib/python3.11/site-packages/cartopy/mpl/geoaxes.py:1019, in GeoAxes.stock_img(self, name)
1014 source_proj = ccrs.PlateCarree()
1015 fname = os.path.join(config["repo_data_dir"],
1016 'raster', 'natural_earth',
1017 '50-natural-earth-1-downsampled.png')
-> 1019 return self.imshow(imread(fname), origin='upper',
1020 transform=source_proj,
1021 extent=[-180, 180, -90, 90])
1022 else:
1023 raise ValueError('Unknown stock image %r.' % name)
File ~/miniconda3/envs/test-cartopy/lib/python3.11/site-packages/cartopy/mpl/geoaxes.py:318, in _add_transform.<locals>.wrapper(self, *args, **kwargs)
313 raise ValueError(f'Invalid transform: Spherical {func.__name__} '
314 'is not supported - consider using '
315 'PlateCarree/RotatedPole.')
317 kwargs['transform'] = transform
--> 318 return func(self, *args, **kwargs)
File ~/miniconda3/envs/test-cartopy/lib/python3.11/site-packages/cartopy/mpl/geoaxes.py:1331, in GeoAxes.imshow(self, img, *args, **kwargs)
1329 from cartopy.img_transform import warp_array
1330 original_extent = extent
-> 1331 img, extent = warp_array(img,
1332 source_proj=transform,
1333 source_extent=original_extent,
1334 target_proj=self.projection,
1335 target_res=regrid_shape,
1336 target_extent=target_extent,
1337 mask_extrapolated=True,
1338 )
1339 alpha = kwargs.pop('alpha', None)
1340 if np.array(alpha).ndim == 2:
File ~/miniconda3/envs/test-cartopy/lib/python3.11/site-packages/cartopy/img_transform.py:192, in warp_array(array, target_proj, source_proj, target_res, source_extent, target_extent, mask_extrapolated)
186 # XXX Take into account the extents of the original to determine
187 # target_extents?
188 target_native_x, target_native_y, extent = mesh_projection(
189 target_proj, target_res[0], target_res[1],
190 x_extents=target_x_extents, y_extents=target_y_extents)
--> 192 array = regrid(array, source_native_xy[0], source_native_xy[1],
193 source_proj, target_proj,
194 target_native_x, target_native_y,
195 mask_extrapolated)
196 return array, extent
File ~/miniconda3/envs/test-cartopy/lib/python3.11/site-packages/cartopy/img_transform.py:278, in regrid(array, source_x_coords, source_y_coords, source_proj, target_proj, target_x_points, target_y_points, mask_extrapolated)
274 else:
275 # Versions of scipy >= v0.16 added the balanced_tree argument,
276 # which caused the KDTree to hang with this input.
277 kdtree = scipy.spatial.cKDTree(xyz, balanced_tree=False)
--> 278 _, indices = kdtree.query(target_xyz, k=1)
279 mask = indices >= len(xyz)
280 indices[mask] = 0
File _ckdtree.pyx:795, in scipy.spatial._ckdtree.cKDTree.query()
ValueError: 'x' must be finite, check for nan or inf values
```
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
Mac OS Monterey Version 12.6.7
### Cartopy version
0.21.1
### conda list
```
# packages in environment at /Users/br546577/miniconda3/envs/test-cartopy:
#
# Name Version Build Channel
anyio 3.7.1 pyhd8ed1ab_0 conda-forge
appnope 0.1.3 pyhd8ed1ab_0 conda-forge
argon2-cffi 21.3.0 pyhd8ed1ab_0 conda-forge
argon2-cffi-bindings 21.2.0 py311he2be06e_3 conda-forge
asttokens 2.2.1 pyhd8ed1ab_0 conda-forge
attrs 23.1.0 pyh71513ae_1 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 pyhd8ed1ab_3 conda-forge
backports.functools_lru_cache 1.6.5 pyhd8ed1ab_0 conda-forge
beautifulsoup4 4.12.2 pyha770c72_0 conda-forge
bleach 6.0.0 pyhd8ed1ab_0 conda-forge
brotli 1.0.9 h1a8c8d9_9 conda-forge
brotli-bin 1.0.9 h1a8c8d9_9 conda-forge
brotli-python 1.0.9 py311ha397e9f_9 conda-forge
bzip2 1.0.8 h3422bc3_4 conda-forge
c-ares 1.19.1 hb547adb_0 conda-forge
ca-certificates 2023.5.7 hf0a4a13_0 conda-forge
cartopy 0.21.1 py311hbf64cf6_1 conda-forge
certifi 2023.5.7 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py311hae827db_3 conda-forge
charset-normalizer 3.2.0 pyhd8ed1ab_0 conda-forge
comm 0.1.3 pyhd8ed1ab_0 conda-forge
contourpy 1.1.0 py311he4fd1f5_0 conda-forge
cycler 0.11.0 pyhd8ed1ab_0 conda-forge
debugpy 1.6.7 py311ha397e9f_0 conda-forge
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
exceptiongroup 1.1.2 pyhd8ed1ab_0 conda-forge
executing 1.2.0 pyhd8ed1ab_0 conda-forge
flit-core 3.9.0 pyhd8ed1ab_0 conda-forge
fonttools 4.40.0 py311heffc1b2_0 conda-forge
freetype 2.12.1 hd633e50_1 conda-forge
geos 3.11.2 hb7217d7_0 conda-forge
idna 3.4 pyhd8ed1ab_0 conda-forge
importlib-metadata 6.8.0 pyha770c72_0 conda-forge
importlib_metadata 6.8.0 hd8ed1ab_0 conda-forge
importlib_resources 6.0.0 pyhd8ed1ab_1 conda-forge
ipykernel 6.24.0 pyh5fb750a_0 conda-forge
ipython 8.14.0 pyhd1c38e8_0 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
ipywidgets 8.0.7 pyhd8ed1ab_0 conda-forge
jedi 0.18.2 pyhd8ed1ab_0 conda-forge
jinja2 3.1.2 pyhd8ed1ab_1 conda-forge
jsonschema 4.18.1 pyhd8ed1ab_0 conda-forge
jsonschema-specifications 2023.6.1 pyhd8ed1ab_0 conda-forge
jupyter 1.0.0 py311h267d04e_8 conda-forge
jupyter_client 8.3.0 pyhd8ed1ab_0 conda-forge
jupyter_console 6.6.3 pyhd8ed1ab_0 conda-forge
jupyter_core 5.3.1 py311h267d04e_0 conda-forge
jupyter_events 0.6.3 pyhd8ed1ab_0 conda-forge
jupyter_server 2.7.0 pyhd8ed1ab_0 conda-forge
jupyter_server_terminals 0.4.4 pyhd8ed1ab_1 conda-forge
jupyterlab_pygments 0.2.2 pyhd8ed1ab_0 conda-forge
jupyterlab_widgets 3.0.8 pyhd8ed1ab_0 conda-forge
kiwisolver 1.4.4 py311hd6ee22a_1 conda-forge
krb5 1.20.1 h69eda48_0 conda-forge
lcms2 2.15 hd835a16_1 conda-forge
lerc 4.0.0 h9a09cb3_0 conda-forge
libblas 3.9.0 17_osxarm64_openblas conda-forge
libbrotlicommon 1.0.9 h1a8c8d9_9 conda-forge
libbrotlidec 1.0.9 h1a8c8d9_9 conda-forge
libbrotlienc 1.0.9 h1a8c8d9_9 conda-forge
libcblas 3.9.0 17_osxarm64_openblas conda-forge
libcurl 8.1.2 h912dcd9_0 conda-forge
libcxx 16.0.6 h4653b0c_0 conda-forge
libdeflate 1.18 h1a8c8d9_0 conda-forge
libedit 3.1.20191231 hc8eb9b7_2 conda-forge
libev 4.33 h642e427_1 conda-forge
libexpat 2.5.0 hb7217d7_1 conda-forge
libffi 3.4.2 h3422bc3_5 conda-forge
libgfortran 5.0.0 12_2_0_hd922786_31 conda-forge
libgfortran5 12.2.0 h0eea778_31 conda-forge
libjpeg-turbo 2.1.5.1 h1a8c8d9_0 conda-forge
liblapack 3.9.0 17_osxarm64_openblas conda-forge
libnghttp2 1.52.0 hae82a92_0 conda-forge
libopenblas 0.3.23 openmp_hc731615_0 conda-forge
libpng 1.6.39 h76d750c_0 conda-forge
libsodium 1.0.18 h27ca646_1 conda-forge
libsqlite 3.42.0 hb31c410_0 conda-forge
libssh2 1.11.0 h7a5bd25_0 conda-forge
libtiff 4.5.1 h23a1a89_0 conda-forge
libwebp-base 1.3.1 hb547adb_0 conda-forge
libxcb 1.15 hf346824_0 conda-forge
libzlib 1.2.13 h53f4e23_5 conda-forge
llvm-openmp 16.0.6 h1c12783_0 conda-forge
markupsafe 2.1.3 py311heffc1b2_0 conda-forge
matplotlib-base 3.7.2 py311h3bc9839_0 conda-forge
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
mistune 3.0.0 pyhd8ed1ab_0 conda-forge
munkres 1.1.4 pyh9f0ad1d_0 conda-forge
nbclassic 1.0.0 pyhb4ecaf3_1 conda-forge
nbclient 0.8.0 pyhd8ed1ab_0 conda-forge
nbconvert 7.6.0 pyhd8ed1ab_0 conda-forge
nbconvert-core 7.6.0 pyhd8ed1ab_0 conda-forge
nbconvert-pandoc 7.6.0 pyhd8ed1ab_0 conda-forge
nbformat 5.9.1 pyhd8ed1ab_0 conda-forge
ncurses 6.4 h7ea286d_0 conda-forge
nest-asyncio 1.5.6 pyhd8ed1ab_0 conda-forge
notebook 6.5.4 pyha770c72_0 conda-forge
notebook-shim 0.2.3 pyhd8ed1ab_0 conda-forge
numpy 1.25.1 py311hb8f3215_0 conda-forge
openjpeg 2.5.0 hbc2ba62_2 conda-forge
openssl 3.1.1 h53f4e23_1 conda-forge
overrides 7.3.1 pyhd8ed1ab_0 conda-forge
packaging 23.1 pyhd8ed1ab_0 conda-forge
pandoc 3.1.3 hce30654_0 conda-forge
pandocfilters 1.5.0 pyhd8ed1ab_0 conda-forge
parso 0.8.3 pyhd8ed1ab_0 conda-forge
pexpect 4.8.0 pyh1a96a4e_2 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 10.0.0 py311h095fde6_0 conda-forge
pip 23.1.2 pyhd8ed1ab_0 conda-forge
pkgutil-resolve-name 1.3.10 pyhd8ed1ab_0 conda-forge
platformdirs 3.8.1 pyhd8ed1ab_0 conda-forge
pooch 1.7.0 pyha770c72_3 conda-forge
proj 9.2.1 h8fdea58_0 conda-forge
prometheus_client 0.17.1 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.39 pyha770c72_0 conda-forge
prompt_toolkit 3.0.39 hd8ed1ab_0 conda-forge
psutil 5.9.5 py311he2be06e_0 conda-forge
pthread-stubs 0.4 h27ca646_1001 conda-forge
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pygments 2.15.1 pyhd8ed1ab_0 conda-forge
pyobjc-core 9.2 py311hb702dc4_0 conda-forge
pyobjc-framework-cocoa 9.2 py311hb702dc4_0 conda-forge
pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge
pyproj 3.6.0 py311h280d66e_1 conda-forge
pyshp 2.3.1 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.11.4 h47c9636_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-fastjsonschema 2.17.1 pyhd8ed1ab_0 conda-forge
python-json-logger 2.0.7 pyhd8ed1ab_0 conda-forge
python_abi 3.11 3_cp311 conda-forge
pyyaml 6.0 py311he2be06e_5 conda-forge
pyzmq 25.1.0 py311hb1af645_0 conda-forge
readline 8.2 h92ec313_1 conda-forge
referencing 0.29.1 pyhd8ed1ab_0 conda-forge
requests 2.31.0 pyhd8ed1ab_0 conda-forge
rfc3339-validator 0.1.4 pyhd8ed1ab_0 conda-forge
rfc3986-validator 0.1.1 pyh9f0ad1d_0 conda-forge
rpds-py 0.8.10 py311h0563b04_0 conda-forge
scipy 1.11.1 py311h93d07a4_0 conda-forge
send2trash 1.8.2 pyhd1c38e8_0 conda-forge
setuptools 68.0.0 pyhd8ed1ab_0 conda-forge
shapely 2.0.1 py311h7f8cfc4_1 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sniffio 1.3.0 pyhd8ed1ab_0 conda-forge
soupsieve 2.3.2.post1 pyhd8ed1ab_0 conda-forge
sqlite 3.42.0 h203b68d_0 conda-forge
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
terminado 0.17.1 pyhd1c38e8_0 conda-forge
tinycss2 1.2.1 pyhd8ed1ab_0 conda-forge
tk 8.6.12 he1e0b03_0 conda-forge
tornado 6.3.2 py311heffc1b2_0 conda-forge
traitlets 5.9.0 pyhd8ed1ab_0 conda-forge
typing-extensions 4.7.1 hd8ed1ab_0 conda-forge
typing_extensions 4.7.1 pyha770c72_0 conda-forge
typing_utils 0.1.0 pyhd8ed1ab_0 conda-forge
tzdata 2023c h71feb2d_0 conda-forge
urllib3 2.0.3 pyhd8ed1ab_1 conda-forge
wcwidth 0.2.6 pyhd8ed1ab_0 conda-forge
webencodings 0.5.1 py_1 conda-forge
websocket-client 1.6.1 pyhd8ed1ab_0 conda-forge
wheel 0.40.0 pyhd8ed1ab_0 conda-forge
widgetsnbextension 4.0.8 pyhd8ed1ab_0 conda-forge
xorg-libxau 1.0.11 hb547adb_0 conda-forge
xorg-libxdmcp 1.1.3 h27ca646_0 conda-forge
xz 5.2.6 h57fd34a_0 conda-forge
yaml 0.2.5 h3422bc3_2 conda-forge
zeromq 4.3.4 hbdafb3b_1 conda-forge
zipp 3.16.0 pyhd8ed1ab_1 conda-forge
zstd 1.5.2 h4f39d0f_7 conda-forge
```
### pip list
```
Package Version
----------------------------- -----------
anyio 3.7.1
appnope 0.1.3
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
asttokens 2.2.1
attrs 23.1.0
backcall 0.2.0
backports.functools-lru-cache 1.6.5
beautifulsoup4 4.12.2
bleach 6.0.0
Brotli 1.0.9
Cartopy 0.21.1
certifi 2023.5.7
cffi 1.15.1
charset-normalizer 3.2.0
comm 0.1.3
contourpy 1.1.0
cycler 0.11.0
debugpy 1.6.7
decorator 5.1.1
defusedxml 0.7.1
entrypoints 0.4
exceptiongroup 1.1.2
executing 1.2.0
fastjsonschema 2.17.1
flit_core 3.9.0
fonttools 4.40.0
idna 3.4
importlib-metadata 6.8.0
importlib-resources 6.0.0
ipykernel 6.24.0
ipython 8.14.0
ipython-genutils 0.2.0
ipywidgets 8.0.7
jedi 0.18.2
Jinja2 3.1.2
jsonschema 4.18.1
jsonschema-specifications 2023.6.1
jupyter 1.0.0
jupyter_client 8.3.0
jupyter-console 6.6.3
jupyter_core 5.3.1
jupyter-events 0.6.3
jupyter_server 2.7.0
jupyter_server_terminals 0.4.4
jupyterlab-pygments 0.2.2
jupyterlab-widgets 3.0.8
kiwisolver 1.4.4
MarkupSafe 2.1.3
matplotlib 3.7.2
matplotlib-inline 0.1.6
mistune 3.0.0
munkres 1.1.4
nbclassic 1.0.0
nbclient 0.8.0
nbconvert 7.6.0
nbformat 5.9.1
nest-asyncio 1.5.6
notebook 6.5.4
notebook_shim 0.2.3
numpy 1.25.1
overrides 7.3.1
packaging 23.1
pandocfilters 1.5.0
parso 0.8.3
pexpect 4.8.0
pickleshare 0.7.5
Pillow 10.0.0
pip 23.1.2
pkgutil_resolve_name 1.3.10
platformdirs 3.8.1
pooch 1.7.0
prometheus-client 0.17.1
prompt-toolkit 3.0.39
psutil 5.9.5
ptyprocess 0.7.0
pure-eval 0.2.2
pycparser 2.21
Pygments 2.15.1
pyobjc-core 9.2
pyobjc-framework-Cocoa 9.2
pyparsing 3.0.9
pyproj 3.6.0
pyshp 2.3.1
PySocks 1.7.1
python-dateutil 2.8.2
python-json-logger 2.0.7
PyYAML 6.0
pyzmq 25.1.0
referencing 0.29.1
requests 2.31.0
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rpds-py 0.8.10
scipy 1.11.1
Send2Trash 1.8.2
setuptools 68.0.0
shapely 2.0.1
six 1.16.0
sniffio 1.3.0
soupsieve 2.3.2.post1
stack-data 0.6.2
terminado 0.17.1
tinycss2 1.2.1
tornado 6.3.2
traitlets 5.9.0
typing_extensions 4.7.1
typing-utils 0.1.0
urllib3 2.0.3
wcwidth 0.2.6
webencodings 0.5.1
websocket-client 1.6.1
wheel 0.40.0
widgetsnbextension 4.0.8
zipp 3.16.0
```
</details>
| closed | 2023-07-12T22:02:31Z | 2023-07-13T01:24:21Z | https://github.com/SciTools/cartopy/issues/2212 | [] | brian-rose | 1 |
zalandoresearch/fashion-mnist | computer-vision | 124 | Benchmark: CNN with 5 Conv Layers. Accuracy on Fashion-MNIST Dataset: 93.15% | The model details are as follows:
- Preprocessing: inputs are normalized with the dataset mean and standard deviation, computed beforehand.
- Trained with cross-entropy loss and the Adam optimizer.
- The initial learning rate is 0.01 and is decayed to 1/4 of its value every 8 epochs.

The layers in sequence (see the PyTorch sketch below) are:

- Convolutional layer with 32 feature maps of size 3 x 3, stride 1 and padding 0.
- BatchNorm layer followed by ReLU activation.
- Convolutional layer with 64 feature maps of size 3 x 3, stride 1 and padding 0.
- BatchNorm layer followed by ReLU activation.
- Convolutional layer with 128 feature maps of size 3 x 3, stride 1 and padding 1.
- BatchNorm layer followed by ReLU activation.
- Max pooling layer of size 2 x 2 and stride 2.
- Convolutional layer with 256 feature maps of size 3 x 3, stride 1 and padding 0.
- BatchNorm layer followed by ReLU activation.
- Convolutional layer with 512 feature maps of size 3 x 3, stride 1 and padding 0.
- BatchNorm layer followed by ReLU activation.
- Max pooling layer of size 8 x 8 and stride 1.
- Fully connected layer with input size 512 and output size 10.
The accuracy achieved on the Fashion-MNIST test set is 93.15%.
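For reference, here is a minimal PyTorch sketch consistent with the description above. The linked gist is the actual implementation, so the class name and the normalization constants used here are illustrative assumptions, not taken from the gist:

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Normalization with precomputed dataset statistics (approximate
# Fashion-MNIST values; the issue computes them beforehand).
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.2860,), (0.3530,)),
])

class FashionCNN(nn.Module):
    """Five-conv-layer CNN matching the layer list above (input: 1 x 28 x 28)."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=1, padding=0),     # 28x28 -> 26x26
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=1, padding=0),    # 26x26 -> 24x24
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=1, padding=1),   # 24x24 -> 24x24
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.MaxPool2d(2, stride=2),                    # 24x24 -> 12x12
            nn.Conv2d(128, 256, 3, stride=1, padding=0),  # 12x12 -> 10x10
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.Conv2d(256, 512, 3, stride=1, padding=0),  # 10x10 -> 8x8
            nn.BatchNorm2d(512), nn.ReLU(inplace=True),
            nn.MaxPool2d(8, stride=1),                    # 8x8 -> 1x1
        )
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FashionCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# "decreased to 1/4 after every 8 epochs" -> multiply the LR by 0.25 every 8 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=8, gamma=0.25)
```

A training epoch then runs the usual forward/backward loop, with `scheduler.step()` called once per epoch so the learning rate is multiplied by 0.25 after epochs 8, 16, 24, and so on.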
This network has been implemented in PyTorch. The code can be found [here](https://gist.github.com/Noumanmufc1/60f00e434f0ce42b6f4826029737490a) | closed | 2018-07-26T11:10:38Z | 2018-07-27T02:54:40Z | https://github.com/zalandoresearch/fashion-mnist/issues/124 | ["benchmark"] | nouman-10 | 0 |
KaiyangZhou/deep-person-reid | computer-vision | 423 | Pre-trained model training issue | Hi @KaiyangZhou,
I am getting the following error while training the model:
```python
engine.run(
    save_dir='log/osnet_ain_transfer',
    max_epoch=60,
    eval_freq=10,
    print_freq=10,
    test_only=False,
    fixbase_epoch=5,
    open_layers='classifier'
)
```
```
=> Start training
* Only train classifier (epoch: 1/5)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-21-db919c647e64> in <module>
      6     test_only=False,
      7     fixbase_epoch=5,
----> 8     open_layers='classifier'
      9 )
     10 # or open_layers=['fc', 'classifier'] if there is another fc layer that

~/person_reid_new/deep-person-reid/torchreid/engine/engine.py in run(self, save_dir, max_epoch, start_epoch, print_freq, fixbase_epoch, open_layers, start_eval, eval_freq, test_only, dist_metric, normalize_feature, visrank, visrank_topk, use_metric_cuhk03, ranks, rerank)
    191             print_freq=print_freq,
    192             fixbase_epoch=fixbase_epoch,
--> 193             open_layers=open_layers
    194         )
    195 

~/person_reid_new/deep-person-reid/torchreid/engine/engine.py in train(self, print_freq, fixbase_epoch, open_layers)
    243         for self.batch_idx, data in enumerate(self.train_loader):
    244             data_time.update(time.time() - end)
--> 245             loss_summary = self.forward_backward(data)
    246             batch_time.update(time.time() - end)
    247             losses.update(loss_summary)

~/person_reid_new/deep-person-reid/torchreid/engine/image/softmax.py in forward_backward(self, data)
     84 
     85         outputs = self.model(imgs)
---> 86         loss = self.compute_loss(self.criterion, outputs, pids)
     87 
     88         self.optimizer.zero_grad()

~/person_reid_new/deep-person-reid/torchreid/engine/engine.py in compute_loss(self, criterion, outputs, targets)
    437             loss = DeepSupervision(criterion, outputs, targets)
    438         else:
--> 439             loss = criterion(outputs, targets)
    440         return loss
    441 

~/anaconda3/envs/newmmpose/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

~/person_reid_new/deep-person-reid/torchreid/losses/cross_entropy_loss.py in forward(self, inputs, targets)
     44         log_probs = self.logsoftmax(inputs)
     45         zeros = torch.zeros(log_probs.size())
---> 46         targets = zeros.scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
     47         if self.use_gpu:
     48             targets = targets.cuda()

RuntimeError: index 5019 is out of bounds for dimension 1 with size 30
```
| closed | 2021-03-14T17:32:52Z | 2021-04-09T15:44:35Z | https://github.com/KaiyangZhou/deep-person-reid/issues/423 | [] | malpeddinilesh | 5
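For anyone who hits the same traceback: `dimension 1 with size 30` means the loss was built around a 30-way classifier head, while the training labels contain person IDs up to at least 5019, i.e. the model's `num_classes` does not match the number of training identities. Below is a minimal sketch of the usual setup with torchreid's documented pipeline; the dataset name and hyperparameters are placeholders, not taken from the original report:

```python
import torchreid

# Let the data manager load the training set and report how many identities it has.
datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources='market1501',   # placeholder: use the dataset you actually train on
    height=256,
    width=128,
    batch_size_train=32,
)

# The classifier head must be sized to the training identities; a mismatch
# makes the cross-entropy loss raise "index N is out of bounds for dimension 1".
model = torchreid.models.build_model(
    name='osnet_ain_x1_0',
    num_classes=datamanager.num_train_pids,
    loss='softmax',
    pretrained=True,
).cuda()

optimizer = torchreid.optim.build_optimizer(model, optim='adam', lr=0.0003)
engine = torchreid.engine.ImageSoftmaxEngine(
    datamanager, model, optimizer, label_smooth=True
)
engine.run(
    save_dir='log/osnet_ain_transfer',
    max_epoch=60,
    fixbase_epoch=5,
    open_layers='classifier',
)
```

Identity labels also have to be contiguous and zero-indexed (0 … `num_train_pids` − 1), which `ImageDataManager` handles for built-in datasets; raw IDs such as 5019 in a custom dataset must be remapped before training.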