| column | dtype | range |
|---|---|---|
| repo_name | string | lengths 9–75 |
| topic | string | 30 classes |
| issue_number | int64 | 1–203k |
| title | string | lengths 1–976 |
| body | string | lengths 0–254k |
| state | string | 2 classes |
| created_at | string | length 20 (fixed) |
| updated_at | string | length 20 (fixed) |
| url | string | lengths 38–105 |
| labels | sequence | lengths 0–9 |
| user_login | string | lengths 1–39 |
| comments_count | int64 | 0–452 |
SALib/SALib
numpy
178
delta indices not summing up to unity
Hey, I am using the delta analysis for calculating sensitivity indices for a model with 21-30 parameters. The delta values sum up to 1.2-4. As far as I understood the paper of Borgonovo (2007), they should sum up to 1, since the conditional densities are normalized by the unconditional density. However, could it be that this is just the case if it is an additive model? The calculated sensitivity indices seem okay and are less than one. Thanks for helping out!
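For reference, a minimal sketch of the kind of analysis described, assuming SALib's `delta.analyze` API and a toy 3-parameter stand-in for the 21-30 parameter model:

```python
import numpy as np
from SALib.sample import latin
from SALib.analyze import delta

# Toy problem standing in for the reporter's larger model
problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[0, 1]] * 3,
}

X = latin.sample(problem, 1000)
Y = X[:, 0] + 2 * X[:, 1] + np.random.rand(1000) * X[:, 2]  # non-additive term

Si = delta.analyze(problem, X, Y)
print(Si["delta"], Si["delta"].sum())  # inspect the sum the question is about
```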
closed
2017-12-07T07:21:13Z
2019-11-07T22:50:25Z
https://github.com/SALib/SALib/issues/178
[ "question_interpretation" ]
witteire
6
scikit-hep/awkward
numpy
3,236
[CPU/GPU] prod kernel on an empty list of a complex type gives a wrong result
### Version of Awkward Array

master branch (2.6.7)

### Description and code to reproduce

```python
def test_block_boundary_prod_complex13():
    rng = np.random.default_rng(seed=42)
    array = rng.integers(50, size=1000)
    complex_array = np.vectorize(complex)(
        array[0 : len(array) : 2], array[1 : len(array) : 2]
    )
    content = ak.contents.NumpyArray(complex_array)
    assert np.allclose(
        to_list(ak.prod(content, -1, highlevel=False)),
        np.prod(ak.Array(content)),
        equal_nan=True,
    )

    offsets = ak.index.Index64(np.array([0, 5, 996, 1000], dtype=np.int64))
    depth1 = ak.contents.ListOffsetArray(offsets, content)
    print(to_list(ak.prod(depth1, -1, highlevel=False)))
    print([np.prod(ak.Array(depth1[0])), np.prod(ak.Array(depth1[1])), np.prod(ak.Array(depth1[2]))])
```

where `ak.Array(depth1[2])` has `ArrayType(NumpyType('complex128'), 0, None)`. The `ak.prod` result of an empty list should be `(1+0j)`, as correctly produced by NumPy:

```
[(6891360-24365880j), (nan+nanj), 0j]
[(6891360-24365880j), (nan+nanj), (1+0j)]
```
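As a point of comparison, NumPy's convention for an empty product is the multiplicative identity, which is what the last element above should be:

```python
import numpy as np

# The product over an empty array is the multiplicative identity, 1,
# regardless of dtype; for complex128 that is (1+0j).
print(np.prod(np.array([], dtype=np.complex128)))  # (1+0j)
```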
open
2024-09-12T13:05:31Z
2024-09-12T13:07:30Z
https://github.com/scikit-hep/awkward/issues/3236
[ "bug (unverified)" ]
ianna
0
biolab/orange3
numpy
6,461
Data Sets doesn't remember non-English selection
According to @BlazZupan, if one chooses a Slovenian dataset (in the English version of Orange?) and saves the workflow, this dataset is not selected after reloading the workflow. I suspect the problem occurs because the language combo is not a setting and is always reset to English for English Orange (and to Slovenian for Slovenian), so the dataset is not chosen because it is not shown. The easiest solution would be to save the language as a schema-only setting.
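A minimal sketch of the proposed fix, assuming the widget stores the combo state in an attribute named `language` (name hypothetical) and that `Setting` supports the `schema_only` flag:

```python
from orangewidget.settings import Setting


class OWDataSets:  # sketch of the relevant widget; details elided
    # A schema-only setting is saved with the workflow file itself,
    # so the chosen language survives reloading instead of being
    # reset to the interface language.
    language = Setting("English", schema_only=True)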
closed
2023-06-02T12:50:51Z
2023-06-16T08:02:49Z
https://github.com/biolab/orange3/issues/6461
[ "bug" ]
janezd
0
KevinMusgrave/pytorch-metric-learning
computer-vision
239
Why precision_at_1 for an untrained model from the MNIST example is 0.95
For the [TripletMarginLossMNIST example](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/TripletMarginLossMNIST.ipynb), `precision_at_1` measured before training starts is 0.953. Why is it so high? The model was not trained or pretrained on any dataset.

```python
from pytorch_metric_learning import losses, miners, distances, reducers, testers
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

### MNIST code originally from https://github.com/pytorch/examples/blob/master/mnist/main.py ###
from torchvision import datasets
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np


### MNIST code originally from https://github.com/pytorch/examples/blob/master/mnist/main.py ###
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout2d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.fc1 = nn.Linear(9216, 128)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        return x


### MNIST code originally from https://github.com/pytorch/examples/blob/master/mnist/main.py ###
def train(model, loss_func, mining_func, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, labels) in enumerate(train_loader):
        data, labels = data.to(device), labels.to(device)
        optimizer.zero_grad()
        embeddings = model(data)
        indices_tuple = mining_func(embeddings, labels)
        loss = loss_func(embeddings, labels, indices_tuple)
        loss.backward()
        optimizer.step()
        if batch_idx % 20 == 0:
            print("Epoch {} Iteration {}: Loss = {}, Number of mined triplets = {}".format(epoch, batch_idx, loss, mining_func.num_triplets))


### convenient function from pytorch-metric-learning ###
def get_all_embeddings(dataset, model):
    tester = testers.BaseTester()
    return tester.get_all_embeddings(dataset, model)


### compute accuracy using AccuracyCalculator from pytorch-metric-learning ###
def test(dataset, model, accuracy_calculator):
    embeddings, labels = get_all_embeddings(dataset, model)
    print("Computing accuracy")
    accuracies = accuracy_calculator.get_accuracy(embeddings, embeddings, np.squeeze(labels), np.squeeze(labels), True)
    print("Test set accuracy (MAP@10) = {}".format(accuracies["mean_average_precision_at_r"]))
    print("Test set accuracy (MAP@1) = {}".format(accuracies["precision_at_1"]))


device = torch.device("cuda")

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

batch_size = 256
dataset1 = datasets.MNIST('.', train=True, download=True, transform=transform)
dataset2 = datasets.MNIST('.', train=False, transform=transform)
train_loader = torch.utils.data.DataLoader(dataset1, batch_size=256, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset2, batch_size=256)

model = Net().to(device)
optimizer = optim.Adam(model.parameters(), lr=0.01)
num_epochs = 10

### pytorch-metric-learning stuff ###
distance = distances.CosineSimilarity()
reducer = reducers.ThresholdReducer(low = 0)
loss_func = losses.TripletMarginLoss(margin = 0.2, distance = distance, reducer = reducer)
mining_func = miners.TripletMarginMiner(margin = 0.2, distance = distance, type_of_triplets = "semihard")
accuracy_calculator = AccuracyCalculator(include = ("mean_average_precision_at_r","precision_at_1",), k = 10)
### pytorch-metric-learning stuff ###

for epoch in range(1, num_epochs+1):
    test(dataset2, model, accuracy_calculator)
    #train(model, loss_func, mining_func, device, train_loader, optimizer, epoch)
```
closed
2020-11-27T13:34:48Z
2020-11-27T16:20:48Z
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/239
[]
bransGl
2
ultralytics/ultralytics
deep-learning
19,104
vid_stride when input is frame_list
### Search before asking

- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.

### Question

When the input is a list of images (read with cv2.imread and collected in a list), `model.predict(frame_list, vid_stride=n)` does not work: it always behaves as if `vid_stride=1`. In other words, changing `vid_stride` to 2 or 3 is ignored; only 1 takes effect. A workaround sketch follows below.

### Additional

_No response_
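Since `vid_stride` appears to apply only to video sources, a simple workaround for in-memory frame lists is to stride the list in Python before calling `predict` (a sketch; `model` and `frame_list` are the ones from the question):

```python
n = 3  # desired stride

# Keep every n-th frame; equivalent to what vid_stride does for video files.
strided_frames = frame_list[::n]
results = model.predict(strided_frames)
```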
open
2025-02-06T15:37:41Z
2025-02-06T15:43:45Z
https://github.com/ultralytics/ultralytics/issues/19104
[ "question", "detect" ]
ansaricard
2
aio-libs-abandoned/aioredis-py
asyncio
778
ConnectionResetError: [Errno 104] Connection reset by peer
Hi, everybody! I use tornado + aioredis, and recently I ran into this issue; the traceback is below.

My environment:

- aioredis==1.2.0
- tornado==5.1.1

I use `aioredis.create_redis_pool(**args)` to create the pool. Can anybody help? Thanks a lot.

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/tornado/web.py", line 1699, in _execute
    result = await result
  File "/views/notice.py", line 341, in get
    items, page_size, total_page, total_size = await Notice.cache_or_api_list(notice_id_list, page_count, page_size)
  File "models/notice.py", line 136, in cache_or_api_list
    items = await cls.query_list(page_list)
  File "models/notice.py", line 92, in query_list
    items = await asyncio.gather(*[Notice.cache_or_api(notice_id) for notice_id in notice_id_list])
  File "models/notice.py", line 37, in cache_or_api
    info = await redis.execute('get', redis_key)
  File "models/notice.py", line 37, in cache_or_api
    info = await redis.execute('get', redis_key)
  File "models/notice.py", line 37, in cache_or_api
    info = await redis.execute('get', redis_key)
  [Previous line repeated 11 more times]
  File "/usr/local/lib/python3.6/site-packages/aioredis/connection.py", line 183, in _read_data
    obj = await self._reader.readobj()
  File "/usr/local/lib/python3.6/site-packages/aioredis/stream.py", line 94, in readobj
    await self._wait_for_data('readobj')
  File "/usr/local/lib/python3.6/asyncio/streams.py", line 464, in _wait_for_data
    yield from self._waiter
  File "/usr/local/lib/python3.6/asyncio/selector_events.py", line 723, in _read_ready
    data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
```
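A common mitigation while the root cause is investigated is to retry the command when the peer resets the connection; a minimal sketch (the `redis` pool and `redis_key` are the ones from the traceback):

```python
import asyncio


async def get_with_retry(redis, redis_key, retries=3):
    """Retry a GET when the server resets the connection."""
    for attempt in range(retries):
        try:
            return await redis.execute('get', redis_key)
        except ConnectionResetError:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(0.1 * (attempt + 1))  # simple backoff
```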
open
2020-07-16T08:54:31Z
2022-07-07T17:43:45Z
https://github.com/aio-libs-abandoned/aioredis-py/issues/778
[ "bug" ]
zzlpeter
58
Lightning-AI/pytorch-lightning
pytorch
20,620
test: `flaky test_results.py::test_result_reduce_ddp` terminated with signal SIGABRT
### Bug description

The test **`tests/tests_pytorch/core/test_results.py::test_result_reduce_ddp`** fails intermittently, similar to the test addressed in [#20537](https://github.com/Lightning-AI/pytorch-lightning/pull/20537), with the error:

> `torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with signal SIGABRT`

To address this, the test could be marked **flaky**.

### **Background**

We are evaluating how our tool for **test prioritization** could find test failures faster in your project. During a CI rerun for commit `7322d63bef2cf1a0439f8b19b545cd4a89da62b0`, we encountered the failure of this test, which you can see in this log: [GitHub Actions Log](https://github.com/syncpr-user1/pytorch-lightning-random_order/actions/runs/13600237309/job/38024923746).

### What version are you seeing the problem on?

master

### How to reproduce the bug

```python
```

### Error messages and logs

```
# Error messages and logs here please
```

### Environment

<details>
<summary>Current environment</summary>

```
#- PyTorch Lightning Version (e.g., 2.5.0):
#- PyTorch Version (e.g., 2.5):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```

</details>

### More info

_No response_
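One way to mark the test flaky, sketched here with the pytest-rerunfailures plugin (assuming the test suite can take that dependency; the project may prefer its own retry decorator):

```python
import pytest


@pytest.mark.flaky(reruns=3)  # rerun up to 3 times before reporting failure
def test_result_reduce_ddp():
    ...
```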
closed
2025-03-05T18:22:12Z
2025-03-10T13:20:06Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20620
[ "bug", "needs triage", "ver: 2.5.x" ]
kaiyaok2
0
ijl/orjson
numpy
362
Add support for __slots__ classes
Hi 👋 I'd like to ask if it would be possible to add support for [__slots__](https://docs.python.org/3/reference/datamodel.html#slots) python classes? Currently, trying to serialize one yields `Type is not JSON serializable`:

```python
class MySlotsClass:
    __slots__ = ("a",)

    def __init__(self, a):
        self.a = a
```

The expected behaviour would be:

```python
>>> orjson.dumps(MySlotsClass(42))
b'{"a":42}'
```

This type of class allows for major memory savings when dealing with many small objects.
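Until native support lands, orjson's `default` hook can serialize slotted classes; a minimal sketch:

```python
import orjson


class MySlotsClass:
    __slots__ = ("a",)

    def __init__(self, a):
        self.a = a


def slots_default(obj):
    # Build a dict from the declared slots; re-raising TypeError lets
    # orjson report anything else as unserializable.
    if hasattr(obj, "__slots__"):
        return {name: getattr(obj, name) for name in obj.__slots__}
    raise TypeError


print(orjson.dumps(MySlotsClass(42), default=slots_default))  # b'{"a":42}'
```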
closed
2023-03-16T13:34:18Z
2023-03-20T23:03:58Z
https://github.com/ijl/orjson/issues/362
[]
grzegorzme
1
AirtestProject/Airtest
automation
598
On Windows, when running inside a Celery queue, every command shown in the terminal is executed twice, and then everything hangs. What could be the cause?
```
[2019-11-08 16:37:50,870: INFO/MainProcess] Connected to redis://localhost:6379//
[2019-11-08 16:37:51,883: INFO/MainProcess] mingle: searching for neighbors
[2019-11-08 16:37:56,934: INFO/MainProcess] mingle: all alone
[2019-11-08 16:37:57,962: INFO/MainProcess] pidbox: Connected to redis://localhost:6379//.
[2019-11-08 16:37:59,952: INFO/MainProcess] celery@X9ZY4FKMMCJWZD6 ready.
[2019-11-08 16:37:59,956: INFO/MainProcess] Received task: task.tasks.wechat.subscribe_account[f5a9621d-149a-4a6c-85e9-ebe876a85c27]
[04:37:59][DEBUG]<airtest.core.android.adb> g:\soft\python3.6.5\lib\site-packages\airtest\core\android\static\adb\windows\adb.exe -s 127.0.0.1:21503 get-state
[2019-11-08 16:37:59,959: DEBUG/MainProcess] g:\soft\python3.6.5\lib\site-packages\airtest\core\android\static\adb\windows\adb.exe -s 127.0.0.1:21503 get-state
[04:38:00][DEBUG]<airtest.core.android.adb> g:\soft\python3.6.5\lib\site-packages\airtest\core\android\static\adb\windows\adb.exe -s 127.0.0.1:21503 wait-for-device
[2019-11-08 16:38:00,037: DEBUG/MainProcess] g:\soft\python3.6.5\lib\site-packages\airtest\core\android\static\adb\windows\adb.exe -s 127.0.0.1:21503 wait-for-device
[04:38:00][DEBUG]<airtest.core.android.adb> g:\soft\python3.6.5\lib\site-packages\airtest\core\android\static\adb\windows\adb.exe -s 127.0.0.1:21503 shell getprop ro.build.version.sdk
[2019-11-08 16:38:00,119: DEBUG/MainProcess] g:\soft\python3.6.5\lib\site-packages\airtest\core\android\static\adb\windows\adb.exe -s 127.0.0.1:21503 shell getprop ro.build.version.sdk
```
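Reading the log, each adb call appears once from the `airtest.core.android.adb` logger and once re-emitted by Celery's handler on the root logger, which suggests duplicate log propagation rather than the command actually running twice. A hedged sketch of silencing the duplicate (standard `logging` API only):

```python
import logging

# Stop airtest's logger records from also propagating to Celery's
# root handler, so each command is printed only once.
logging.getLogger("airtest").propagate = False
```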
closed
2019-11-08T08:38:37Z
2019-11-11T01:37:28Z
https://github.com/AirtestProject/Airtest/issues/598
[]
hejiaqiang1980
1
Lightning-AI/pytorch-lightning
pytorch
19,994
Logging with Fabric using steps
### Description & Motivation

Logging with Fabric does not consider any steps during training, unlike when using the Lightning Trainer. When using Fabric, a LightningModule calling `self.log` simply passes the logged dictionary, and nothing else, to the Fabric logging code; when using the Trainer, the call goes through grouping/frequency handling (such as aggregating during multi-GPU training or logging every X steps, with a default of 50).

### Pitch

An option to enable logging in Fabric similar to the Lightning Trainer. This could be off by default but could track steps that are submitted with Fabric hooks/calls, such as:

`fabric.call('on_train_step')`

This would allow logged values to be aggregated within the same step, which makes logs more readable. A manual sketch of this idea follows below.

### Alternatives

_No response_

### Additional context

_No response_

cc @borda
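A minimal sketch of what such step-aware logging could look like today, done manually on top of Fabric's existing `log_dict` (`fabric` is an initialized `lightning.fabric.Fabric`; the buffering/averaging policy here is illustrative, not the Trainer's actual behavior):

```python
from collections import defaultdict


class StepLogger:
    """Average metrics and flush them to Fabric every `every_n` steps."""

    def __init__(self, fabric, every_n=50):
        self.fabric = fabric
        self.every_n = every_n
        self.buffer = defaultdict(list)
        self.step = 0

    def log(self, metrics):
        for name, value in metrics.items():
            self.buffer[name].append(float(value))
        self.step += 1
        if self.step % self.every_n == 0:
            averaged = {k: sum(v) / len(v) for k, v in self.buffer.items()}
            self.fabric.log_dict(averaged, step=self.step)
            self.buffer.clear()
```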
open
2024-06-18T22:56:50Z
2024-06-19T19:52:08Z
https://github.com/Lightning-AI/pytorch-lightning/issues/19994
[ "feature", "needs triage" ]
liambsmith
2
fastapi/sqlmodel
sqlalchemy
1,242
How to configure pydantic JSON fields
### Privileged issue

- [X] I'm @tiangolo or he asked me directly to create an issue here.

### Issue Content

![image](https://github.com/user-attachments/assets/a6c9589e-f967-427b-a8c4-76fb605d7cda)
![image](https://github.com/user-attachments/assets/9a95cb60-6a93-4005-af0b-d75a41f9cbf1)
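Since the issue body is only screenshots, here is a hedged sketch of the usual way to store a JSON field in SQLModel, via a SQLAlchemy `JSON` column (model and field names are hypothetical):

```python
from typing import Optional

from sqlalchemy import JSON, Column
from sqlmodel import Field, SQLModel


class Item(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # Arbitrary JSON payload stored in a JSON column.
    data: dict = Field(default_factory=dict, sa_column=Column(JSON))
```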
closed
2024-12-12T02:00:36Z
2025-02-28T01:37:07Z
https://github.com/fastapi/sqlmodel/issues/1242
[]
cjdxhjj
10
QuivrHQ/quivr
api
2,790
Parse celery config from env
Use pydantic settings to parse `.env` celery config
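A minimal sketch with pydantic-settings (the setting names are hypothetical; the project's actual Celery options may differ):

```python
from pydantic_settings import BaseSettings, SettingsConfigDict


class CelerySettings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", env_prefix="CELERY_")

    broker_url: str = "redis://localhost:6379/0"
    result_backend: str = "redis://localhost:6379/1"


settings = CelerySettings()  # reads CELERY_BROKER_URL etc. from .env
```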
closed
2024-07-01T13:51:57Z
2024-10-04T16:06:18Z
https://github.com/QuivrHQ/quivr/issues/2790
[ "Stale", "area: backend" ]
linear[bot]
2
yunjey/pytorch-tutorial
deep-learning
90
language model detach(states)
Why shouldn't states be updated after every training step? I can't understand the line `states = detach(states)`. What on earth does this step do? I am new to PyTorch and I would be very grateful if anyone can help me.

```python
# Some part of the code was referenced from below.
# https://github.com/pytorch/examples/tree/master/word_language_model
import torch
import torch.nn as nn
import numpy as np
from torch.autograd import Variable
from data_utils import Dictionary, Corpus

# Hyper Parameters
embed_size = 128
hidden_size = 1024
num_layers = 1
num_epochs = 5
num_samples = 1000   # number of words to be sampled
batch_size = 20
seq_length = 30
learning_rate = 0.002

# Load Penn Treebank Dataset
train_path = './data/train.txt'
sample_path = './sample.txt'
corpus = Corpus()
ids = corpus.get_data(train_path, batch_size)
vocab_size = len(corpus.dictionary)
num_batches = ids.size(1) // seq_length


# RNN Based Language Model
class RNNLM(nn.Module):
    def __init__(self, vocab_size, embed_size, hidden_size, num_layers):
        super(RNNLM, self).__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)
        self.init_weights()

    def init_weights(self):
        self.embed.weight.data.uniform_(-0.1, 0.1)
        self.linear.bias.data.fill_(0)
        self.linear.weight.data.uniform_(-0.1, 0.1)

    def forward(self, x, h):
        # Embed word ids to vectors
        x = self.embed(x)

        # Forward propagate RNN
        out, h = self.lstm(x, h)

        # Reshape output to (batch_size*sequence_length, hidden_size)
        out = out.contiguous().view(out.size(0)*out.size(1), out.size(2))

        # Decode hidden states of all time steps
        out = self.linear(out)
        return out, h

model = RNNLM(vocab_size, embed_size, hidden_size, num_layers)
model.cuda()

# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Truncated Backpropagation
def detach(states):
    return [state.detach() for state in states]

# Training
for epoch in range(num_epochs):
    # Initial hidden and memory states
    states = (Variable(torch.zeros(num_layers, batch_size, hidden_size)).cuda(),
              Variable(torch.zeros(num_layers, batch_size, hidden_size)).cuda())

    for i in range(0, ids.size(1) - seq_length, seq_length):
        # Get batch inputs and targets
        inputs = Variable(ids[:, i:i+seq_length]).cuda()
        targets = Variable(ids[:, (i+1):(i+1)+seq_length].contiguous()).cuda()

        # Forward + Backward + Optimize
        model.zero_grad()
        states = detach(states)  # <-- the line the question is about
        outputs, states = model(inputs, states)
        loss = criterion(outputs, targets.view(-1))
        loss.backward()
        torch.nn.utils.clip_grad_norm(model.parameters(), 0.5)
        optimizer.step()

        step = (i+1) // seq_length
        if step % 100 == 0:
            print('Epoch [%d/%d], Step[%d/%d], Loss: %.3f, Perplexity: %5.2f' %
                  (epoch+1, num_epochs, step, num_batches, loss.data[0], np.exp(loss.data[0])))

# Sampling
with open(sample_path, 'w') as f:
    # Set initial hidden and memory states
    state = (Variable(torch.zeros(num_layers, 1, hidden_size)).cuda(),
             Variable(torch.zeros(num_layers, 1, hidden_size)).cuda())

    # Select one word id randomly
    prob = torch.ones(vocab_size)
    input = Variable(torch.multinomial(prob, num_samples=1).unsqueeze(1),
                     volatile=True).cuda()

    for i in range(num_samples):
        # Forward propagate rnn
        output, state = model(input, state)

        # Sample a word id
        prob = output.squeeze().data.exp().cpu()
        word_id = torch.multinomial(prob, 1)[0]

        # Feed sampled word id to next time step
        input.data.fill_(word_id)

        # File write
        word = corpus.dictionary.idx2word[word_id]
        word = '\n' if word == '<eos>' else word + ' '
        f.write(word)

        if (i+1) % 100 == 0:
            print('Sampled [%d/%d] words and save to %s' % (i+1, num_samples, sample_path))

# Save the Trained Model
torch.save(model.state_dict(), 'model.pkl')
```
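A tiny self-contained illustration of what `detach` achieves here: it cuts the autograd graph so that backpropagation for the current chunk stops at the chunk boundary instead of unrolling through the whole history (truncated BPTT):

```python
import torch

x = torch.ones(1, requires_grad=True)
y = x * 3                          # part of the autograd graph
z = y.detach().requires_grad_()    # same values, but the graph is cut here
w = (z * 2).sum()
w.backward()
print(z.grad)  # tensor([2.]) - gradient reaches the detach point
print(x.grad)  # None - nothing flows back past it
```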
closed
2017-12-24T13:11:10Z
2020-10-09T18:53:01Z
https://github.com/yunjey/pytorch-tutorial/issues/90
[]
qazwsx74269
2
ivy-llc/ivy
pytorch
28,525
Fix Frontend Failing Test: numpy - math.paddle.diff
To-do List: https://github.com/unifyai/ivy/issues/27497
closed
2024-03-09T20:56:31Z
2024-04-02T09:25:05Z
https://github.com/ivy-llc/ivy/issues/28525
[ "Sub Task" ]
ZJay07
0
vitalik/django-ninja
pydantic
1,418
Accept JSON as a payload field
Here I have this code, where I want to get data from the user with the PATCH method. The issue is that I always get a "missing" error for these fields:

```python
class AnswerSchema(Schema):
    key: str
    value: str

class UserDisease(Schema):
    question: str
    answers: List[AnswerSchema]

class UserMedicalInSchema(Schema):
    ...
    diseases: UserDisease = None
```

And this is the error:

```python
{
  "detail": [
    {
      "type": "missing",
      "loc": [
        "form", "payload", "diseases", "answers", 0, "key"
      ],
      "msg": "Field required"
    },
    {
      "type": "missing",
      "loc": [
        "form", "payload", "diseases", "answers", 0, "value"
      ],
      "msg": "Field required"
    }
  ]
}
```

I sent the request from the Ninja docs page (/api/docs) and always get this error for any field which is a List input.

I also get this error in my terminal (`manage.py runserver`) when I change 'disease' in 'UserMedicalInSchema' to `disease = Optional[UserDisease] = None`:

```python
  File "/home/enriquette/Programming/Projects/work/sarvestan/sarvestan/urls.py", line 22, in <module>
    from .v1.api import api as api_v1
  File "/home/enriquette/Programming/Projects/work/sarvestan/sarvestan/v1/api.py", line 1, in <module>
    from app_api.v1 import api as api_v1
  File "/home/enriquette/Programming/Projects/work/sarvestan/app_api/v1/api.py", line 5, in <module>
    from .user import router as user_router
  File "/home/enriquette/Programming/Projects/work/sarvestan/app_api/v1/user.py", line 109, in <module>
    @router.patch('medical', auth=JWTAuth(), response=UPDATE_RESPONSE, tags=['user'])
  File "/home/enriquette/Programming/Projects/work/sarvestan/.venv/lib/python3.13/site-packages/ninja/router.py", line 268, in decorator
    self.add_api_operation(
        path,
        ...<16 lines>...
        openapi_extra=openapi_extra,
    )
  File "/home/enriquette/Programming/Projects/work/sarvestan/.venv/lib/python3.13/site-packages/ninja/router.py", line 319, in add_api_operation
    path_view.add_operation(
        path=path,
        ...<16 lines>...
        openapi_extra=openapi_extra,
    )
  File "/home/enriquette/Programming/Projects/work/sarvestan/.venv/lib/python3.13/site-packages/ninja/operation.py", line 426, in add_operation
    operation = OperationClass(
        path,
        ...<16 lines>...
        openapi_extra=openapi_extra,
    )
  File "/home/enriquette/Programming/Projects/work/sarvestan/.venv/lib/python3.13/site-packages/ninja/operation.py", line 82, in __init__
    self.signature = ViewSignature(self.path, self.view_func)
  File "/home/enriquette/Programming/Projects/work/sarvestan/.venv/lib/python3.13/site-packages/ninja/signature/details.py", line 87, in __init__
    self.models: TModels = self._create_models()
  File "/home/enriquette/Programming/Projects/work/sarvestan/.venv/lib/python3.13/site-packages/ninja/signature/details.py", line 144, in _create_models
    flatten_map = self._args_flatten_map(args)
  File "/home/enriquette/Programming/Projects/work/sarvestan/.venv/lib/python3.13/site-packages/ninja/signature/details.py", line 181, in _args_flatten_map
    for name, path in self._model_flatten_map(arg.annotation, arg.alias):
  File "/home/enriquette/Programming/Projects/work/sarvestan/.venv/lib/python3.13/site-packages/ninja/signature/details.py", line 205, in _model_flatten_map
    yield from self._model_flatten_map(field.annotation, name)  # type: ignore
  File "/home/enriquette/Programming/Projects/work/sarvestan/.venv/lib/python3.13/site-packages/ninja/signature/details.py", line 201, in _model_flatten_map
    for attr, field in model.model_fields.items():
  File "/usr/lib/python3.13/typing.py", line 1365, in __getattr__
    return getattr(self.__origin__, attr)
  File "/usr/lib/python3.13/typing.py", line 548, in __getattr__
    raise AttributeError(item)
AttributeError: model_fields
```

And this is the API:

```python
@router.patch('medical', auth=JWTAuth(), response=UPDATE_RESPONSE, tags=['user'])
def update_user_medical_info(
    request,
    payload: Form[schema.UserMedicalInSchema] = None,
):
    user = request.user
    if payload:
        data = checks.normalize_medical_info(payload.dict())
        check_result = checks.medial_info(data)
        if check_result != 200:
            return schema_gen.error_response(400, *check_result)

        for field, value in data.items():
            setattr(user, field, value)
        user.save()
        return 204, None
    return schema_gen.error_response(400, 'no data provided')
```
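HTML form encoding has no standard representation for nested lists of objects, so a hedged sketch of the usual alternative is to accept the payload as a JSON request body instead of `Form` (schema and helper names as in the question):

```python
# Sketch: take the nested schema as a JSON body rather than form data,
# so `diseases.answers` can be a real list of objects.
@router.patch('medical', auth=JWTAuth(), response=UPDATE_RESPONSE, tags=['user'])
def update_user_medical_info(request, payload: schema.UserMedicalInSchema = None):
    ...
```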
closed
2025-03-08T11:50:03Z
2025-03-10T15:39:21Z
https://github.com/vitalik/django-ninja/issues/1418
[]
smjt2000
3
huggingface/datasets
nlp
6,534
How to configure multiple folders in the same zip package
How should I write the "configs" section in the README when all the data, such as the train and test splits, is in a single zip file, i.e. data.zip contains a train folder and a test folder?
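A hedged sketch of the README YAML, assuming data.zip is unpacked into a `data/` directory in the repository (the exact pattern syntax for addressing files inside an archive may differ):

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/*
  - split: test
    path: data/test/*
```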
open
2023-12-26T03:56:20Z
2023-12-26T06:31:16Z
https://github.com/huggingface/datasets/issues/6534
[]
d710055071
1
keras-team/keras
python
20,376
AttributeError: 'Functional' object has no attribute 'get_state_tree'
Using keras 3.4.1 in a Colab. A simple model can be queried for `trainable_variables` and `non_trainable_variables`, but `get_state_tree()` fails on `Functional` objects? (`.compile` or `.build` don't make a difference.)

```
import os
os.environ['KERAS_BACKEND'] = 'jax'
import keras
keras.__version__
```
```
3.4.1
```
```
from keras.layers import Input, Dense
from keras.models import Model
input = Input((10, 3))
foo = Dense(3)(input)
model = Model(input, foo)
model.trainable_variables, model.non_trainable_variables
```
```
([<KerasVariable shape=(3, 3), dtype=float32, path=dense/kernel>,
  <KerasVariable shape=(3,), dtype=float32, path=dense/bias>],
 [])
```
```
model.get_state_tree()
```
```
AttributeError: 'Functional' object has no attribute 'get_state_tree'
```
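A workaround sketch that builds a comparable nested mapping by hand from the variables' `path` attributes (plain Python; it assumes nothing about keras internals beyond what the output above shows):

```python
def state_tree(model):
    """Nest variables into a dict keyed by the components of their path."""
    tree = {}
    for v in model.trainable_variables + model.non_trainable_variables:
        node = tree
        *parents, leaf = v.path.split("/")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = v
    return tree
```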
closed
2024-10-18T04:33:50Z
2024-10-18T11:47:44Z
https://github.com/keras-team/keras/issues/20376
[ "stat:awaiting response from contributor", "type:Bug" ]
matpalm
4
marimo-team/marimo
data-visualization
3,893
Datatypes showing up again in mo.ui.table
### Describe the bug

After 0.11.6 there is a regression where datatypes are shown, even if you pass a list of dicts. This was apparently fixed in https://github.com/marimo-team/marimo/pull/2907, but is back again.

### Environment

```json
{
  "marimo": "0.11.6",
  "OS": "Windows",
  "OS Version": "11",
  "Processor": "Intel64 Family 6 Model 140 Stepping 1, GenuineIntel",
  "Python Version": "3.12.9",
  "Binaries": {
    "Browser": "--",
    "Node": "--"
  },
  "Dependencies": {
    "click": "8.1.8",
    "docutils": "0.21.2",
    "itsdangerous": "2.2.0",
    "jedi": "0.19.2",
    "markdown": "3.7",
    "narwhals": "1.25.1",
    "packaging": "24.2",
    "psutil": "6.1.1",
    "pygments": "2.19.1",
    "pymdown-extensions": "10.14.3",
    "pyyaml": "6.0.2",
    "ruff": "0.9.4",
    "starlette": "0.45.3",
    "tomlkit": "0.13.2",
    "typing-extensions": "4.12.2",
    "uvicorn": "0.34.0",
    "websockets": "11.0.3"
  },
  "Optional Dependencies": {
    "pandas": "2.2.3"
  },
  "Experimental Flags": {}
}
```

### Code to reproduce

_No response_
closed
2025-02-24T14:42:18Z
2025-02-24T20:48:38Z
https://github.com/marimo-team/marimo/issues/3893
[ "bug" ]
mrdobalina2k
1
Lightning-AI/pytorch-lightning
machine-learning
20,250
LearningRateMonitor broken on MPS backend with Apple silicon
### Bug description

When the optimizer contains any data of type `float64`, adding a `LearningRateMonitor` causes a TypeError on MPS backends with Apple silicon. See the self-contained and minimal example in "How to reproduce the bug" below. The error is:

```
File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/lr_monitor.py", line 219, in <dictcomp>
    name: torch.tensor(value, device=trainer.strategy.root_device) for name, value in latest_stat.items()
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
```

When removing the `LearningRateMonitor`, the code runs through, thus the optimizer itself is fine.

Note that the quick fix of removing the `lr=np.float64(0.01)` works only for the minimal example. In my case, the optimizer is imported from an external module and has more parameters, making it much harder to change.

I tried out 4 fixes in the pytorch lightning source code; all of them fix the problem, but might have side-effects or not work on other devices or in other configurations. Replace `torch.tensor(value, device=trainer.strategy.root_device)` in [this line](https://github.com/Lightning-AI/pytorch-lightning/blob/f3f10d460338ca8b2901d5cd43456992131767ec/src/lightning/pytorch/callbacks/lr_monitor.py#L217) with one of:

- `torch.tensor(value, device="cpu")`
- `torch.tensor(value, device=value.device)`
- `torch.tensor(value, device=trainer.strategy.root_device, dtype=torch.float32)`
- `value`

### What version are you seeing the problem on?

v2.4

### How to reproduce the bug

```python
import numpy as np
import torch
from torch import nn
from torch.optim import Adam
from torch.utils.data import DataLoader, TensorDataset

import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateMonitor


class SimpleModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(2, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self(x), y)
        return loss

    def configure_optimizers(self):
        optimizer = Adam(self.parameters(), lr=np.float64(0.01))
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
        return [optimizer], [scheduler]


# Data
x = torch.randn(100, 2)
y = torch.randn(100, 1)
dataset = TensorDataset(x, y)
dataloader = DataLoader(dataset, batch_size=2)

# Training
model = SimpleModel()
lr_monitor = LearningRateMonitor(logging_interval='step')
trainer = pl.Trainer(max_epochs=10, callbacks=[lr_monitor])
trainer.fit(model, dataloader)
```

### Error messages and logs

```
Epoch 0:  98%|█████████▊| 49/50 [00:00<00:00, 188.15it/s, v_num=13]
Traceback (most recent call last):
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/test_lr_monitor.py", line 37, in <module>
    trainer.fit(model, dataloader)
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 538, in fit
    call._call_and_handle_interrupt(
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 47, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 574, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 981, in _run
    results = self._run_stage()
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1025, in _run_stage
    self.fit_loop.run()
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 205, in run
    self.advance()
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 363, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 140, in run
    self.advance(data_fetcher)
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 233, in advance
    call._call_callback_hooks(trainer, "on_train_batch_start", batch, batch_idx)
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 218, in _call_callback_hooks
    fn(trainer, trainer.lightning_module, *args, **kwargs)
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/lr_monitor.py", line 173, in on_train_batch_start
    latest_stat = self._extract_stats(trainer, interval)
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/lr_monitor.py", line 216, in _extract_stats
    trainer.callback_metrics.update({
  File "/Users/malteebnerlightly/Documents/GitHub/lightly-train/.venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/lr_monitor.py", line 217, in <dictcomp>
    name: torch.tensor(value, device=trainer.strategy.root_device) for name, value in latest_stat.items()
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.

Epoch 0:  98%|█████████▊| 49/50 [00:00<00:00, 129.92it/s, v_num=13]
```

### Environment

Machine is a MacBook Pro with M1-Pro CPU

<details>
<summary>Current environment</summary>

* CUDA:
  - GPU: None
  - available: False
  - version: None
* Lightning:
  - lightning-utilities: 0.11.7
  - pytorch-lightning: 2.4.0
  - torch: 2.4.1
  - torchmetrics: 1.4.1
  - torchvision: 0.19.1
* Packages:
  - absl-py: 2.1.0
  - aenum: 3.1.15
  - aiohappyeyeballs: 2.4.0
  - aiohttp: 3.10.5
  - aiosignal: 1.3.1
  - annotated-types: 0.7.0
  - antlr4-python3-runtime: 4.9.3
  - async-timeout: 4.0.3
  - atpublic: 4.1.0
  - attrs: 24.2.0
  - autocommand: 2.2.2
  - backports.tarfile: 1.2.0
  - certifi: 2024.7.4
  - charset-normalizer: 3.3.2
  - exceptiongroup: 1.2.2
  - filelock: 3.15.4
  - frozenlist: 1.4.1
  - fsspec: 2024.9.0
  - grpcio: 1.65.5
  - huggingface-hub: 0.24.6
  - hydra-core: 1.3.2
  - idna: 3.8
  - importlib-metadata: 8.0.0
  - importlib-resources: 6.4.0
  - inflect: 7.3.1
  - iniconfig: 2.0.0
  - jaraco.context: 5.3.0
  - jaraco.functools: 4.0.1
  - jaraco.text: 3.12.1
  - jinja2: 3.1.4
  - licenseheaders: 0.8.8
  - lightning-utilities: 0.11.7
  - markdown: 3.7
  - markupsafe: 2.1.5
  - more-itertools: 10.3.0
  - mpmath: 1.3.0
  - multidict: 6.0.5
  - mypy: 1.11.1
  - mypy-extensions: 1.0.0
  - networkx: 3.3
  - numpy: 2.1.1
  - omegaconf: 2.3.0
  - packaging: 24.1
  - pillow: 10.4.0
  - platformdirs: 4.2.2
  - pluggy: 1.5.0
  - protobuf: 5.27.3
  - psutil: 6.0.0
  - pydantic: 1.10.18
  - pydantic-core: 2.20.1
  - pydeprecate: 0.3.2
  - pytest: 8.3.2
  - pytest-mock: 3.14.0
  - python-dateutil: 2.9.0.post0
  - pytorch-lightning: 2.4.0
  - pyyaml: 6.0.2
  - regex: 2024.7.24
  - requests: 2.32.3
  - ruff: 0.6.1
  - safetensors: 0.4.4
  - setuptools: 74.1.2
  - six: 1.16.0
  - sympy: 1.13.2
  - tensorboard: 2.17.1
  - tensorboard-data-server: 0.7.2
  - timm: 1.0.8
  - tomli: 2.0.1
  - torch: 2.4.1
  - torchmetrics: 1.4.1
  - torchvision: 0.19.1
  - tqdm: 4.66.5
  - typeguard: 4.3.0
  - types-tqdm: 4.66.0.20240417
  - typing-extensions: 4.12.2
  - urllib3: 2.2.2
  - werkzeug: 3.0.3
  - wheel: 0.43.0
  - yarl: 1.9.11
  - zipp: 3.19.2
* System:
  - OS: Darwin
  - architecture: 64bit
  - processor: arm
  - python: 3.10.8
  - release: 23.6.0
  - version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000

</details>

### More info

_No response_
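Until the line above is patched, a user-side workaround sketch is to cast the learning rates to plain Python floats before training, which also covers the case where the optimizer comes from an external module:

```python
def ensure_float_lrs(optimizer):
    # Cast np.float64 hyperparameters to Python floats so that
    # torch.tensor(value, device="mps") never requests float64.
    for group in optimizer.param_groups:
        group["lr"] = float(group["lr"])
    return optimizer
```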
open
2024-09-06T08:40:53Z
2025-03-21T11:35:19Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20250
[ "bug", "needs triage", "ver: 2.4.x" ]
MalteEbner
1
SYSTRAN/faster-whisper
deep-learning
432
When using faster-whisper, how to automatically split sentences
This is my code:

```python
segments, info = model.transcribe(videos_directory_path + "/" + file, beam_size=5, without_timestamps=True)

with open(srt_directory_path + "/" + filename_without_extension + ".csv", "w") as output_file:
    for index, segment in enumerate(segments, start=1):
        output_file.write("%s\n\n" % segment.text)
        print(f"Parse {file} done!-------{index}/{length}")
```

If I want the stored content in `segment.text` to be a complete sentence, how should I configure the parameters?
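Independent of the transcription options, one common approach is to concatenate the segment texts and re-split on sentence-final punctuation (plain Python sketch; assumes sentences end with `.`, `!` or `?`):

```python
import re


def to_sentences(segments):
    """Join segment texts, then split on ., ! or ? followed by whitespace."""
    text = " ".join(s.text.strip() for s in segments)
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
```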
closed
2023-08-20T05:19:07Z
2023-08-21T10:45:14Z
https://github.com/SYSTRAN/faster-whisper/issues/432
[]
OlalalalaO
2
hyperspy/hyperspy
data-visualization
3,382
Culling Annoying Warnings
I figured I'd make a thread of annoying warnings that people want to "change or adjust" so we can think about better ways of handling them. I'll start with: https://github.com/hyperspy/hyperspy/blob/b742845d7f606bc4086f5bec4bc0ca84b8e4104d/hyperspy/signal.py#L5320C1-L5323C18 which is quite annoying, especially when chaining together multiple map functions or using a distributed backend, which will multiply this message times 100. There also isn't a way to silence it, even if you handle the thing that it is warning about.
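For context, one generic knob today is the standard library's warnings filter, matched on message text, which is exactly the blunt workaround the thread wants to improve on (the pattern below is hypothetical, not the actual wording of the linked warning):

```python
import warnings

# Hypothetical message pattern; substitute the real warning text.
warnings.filterwarnings("ignore", message=r"^Iterating over.*", category=UserWarning)
```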
open
2024-06-03T13:39:24Z
2024-06-03T13:39:24Z
https://github.com/hyperspy/hyperspy/issues/3382
[]
CSSFrancis
0
ultralytics/yolov5
deep-learning
13,021
No module named 'models'
### Search before asking

- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.

### Question

This is the code:

```python
import cv2
import math
import torch
import pygame
from models.experimental import attempt_load
from utils.general import non_max_suppression, scale_coords
from utils.torch_utils import select_device

# Initialize Pygame and Pygame Mixer
pygame.init()
pygame.mixer.init()
Sound = pygame.mixer.Sound(r"C:\Users\ITC\Downloads\mixkit-alert-alarm-1005.wav")

# Initialize YOLOv5
device = select_device('')
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, force_reload=True)
model = attempt_load(r'C:\Users\ITC\Downloads\best.pt', map_location=device)
stride = int(model.stride.max())  # model stride
names = model.module.names if hasattr(model, 'module') else model.names

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    img = frame.copy()
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    # Inference
    img = torch.from_numpy(img).to(device)
    img = img.float()  # uint8 to fp16/32
    img /= 255.0  # 0 - 255 to 0.0 - 1.0
    if img.ndimension() == 3:
        img = img.unsqueeze(0)

    # Predict
    pred = model(img)[0]
    pred = non_max_suppression(pred, 0.5, 0.4)

    for i, det in enumerate(pred):
        if len(det):
            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], frame.shape).round()
            for *xyxy, conf, cls in reversed(det):
                c = int(cls)
                confidence = conf.item() * 100
                if confidence > 50 and names[c] == 'person':  # Change to 'human' if that's your class name
                    x1, y1, x2, y2 = [int(i) for i in xyxy]
                    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 5)
                    cv2.putText(frame, f'{names[c]} {confidence:.2f}%', (x1 + 8, y1 + 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
                    # Add your Pygame sound logic here
                    pygame.mixer.Sound.play(Sound)
                    # You might need to add logic to stop sound when there's no detection

    cv2.imshow("command", frame)
    if cv2.waitKey(1) == ord('a'):
        break

cap.release()
cv2.destroyAllWindows()
```

And this is the error in Jupyter:

```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[7], line 5
      3 import torch
      4 import pygame
----> 5 from models.experimental import attempt_load
      6 from utils.general import non_max_suppression, scale_coords
      7 from utils.torch_utils import select_device

ModuleNotFoundError: No module named 'models'
```

### Additional

_No response_
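The `models` and `utils` packages live inside the yolov5 repository itself, so a hedged fix is to put a local clone on `sys.path` before the imports (the clone path below is an assumption):

```python
import sys

# Hypothetical location of a local `git clone https://github.com/ultralytics/yolov5`
sys.path.insert(0, r"C:\Users\ITC\yolov5")

from models.experimental import attempt_load  # now resolvable
```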
closed
2024-05-17T13:54:55Z
2024-10-20T19:46:10Z
https://github.com/ultralytics/yolov5/issues/13021
[ "question", "Stale" ]
zyad630
3
ExpDev07/coronavirus-tracker-api
rest-api
318
Server error
Hello, I was doing some tests on the page https://coronavirus-tracker-api.herokuapp.com/ but sometimes a "service is not available" message or an application error appears. I would like to know if it is because you are doing maintenance on the server; on some endpoints an API failure error message also appears.
closed
2020-06-18T19:21:12Z
2020-06-25T12:28:10Z
https://github.com/ExpDev07/coronavirus-tracker-api/issues/318
[ "bug", "down" ]
TheApphiver
0
encode/httpx
asyncio
3,195
502 Error When Using Global Proxy
I'm encountering a 502 Bad Gateway error when using httpx to make requests to my local server, but the same requests succeed when using the requests library. Below are the details of the commands I ran and the outputs I received:

```python
>>> import requests
>>> import httpx
>>> httpx.get("http://localhost:6333/collections")
<Response [502 Bad Gateway]>
>>> httpx.get("https://www.google.com")
<Response [200 OK]>
>>> httpx.get("http://httpbin.org/")
<Response [200 OK]>
>>> requests.get("http://localhost:6333/collections")
<Response [200]>
```

Logs: I enabled logging to further investigate the issue:

```python
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> httpx.get("http://localhost:6333/collections")
DEBUG:httpx:load_ssl_context verify=True cert=None trust_env=True http2=False
DEBUG:httpx:load_verify_locations cafile='C:\\Users\\Lin\\anaconda3\\lib\\site-packages\\certifi\\cacert.pem'
DEBUG:httpx:load_ssl_context verify=True cert=None trust_env=True http2=False
DEBUG:httpx:load_verify_locations cafile='C:\\Users\\Lin\\anaconda3\\lib\\site-packages\\certifi\\cacert.pem'
DEBUG:httpx:load_ssl_context verify=True cert=None trust_env=True http2=False
DEBUG:httpx:load_verify_locations cafile='C:\\Users\\Lin\\anaconda3\\lib\\site-packages\\certifi\\cacert.pem'
DEBUG:httpcore.connection:connect_tcp.started host='127.0.0.1' port=233 local_address=None timeout=5.0 socket_options=None
DEBUG:httpcore.connection:connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x0000020FA7EFE4A0>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'GET']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'GET']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'GET']>
DEBUG:httpcore.http11:receive_response_headers.complete return_value=(b'HTTP/1.1', 502, b'Bad Gateway', [(b'Connection', b'close'), (b'Content-Length', b'0')])
INFO:httpx:HTTP Request: GET http://localhost:6333/collections "HTTP/1.1 502 Bad Gateway"
DEBUG:httpcore.http11:receive_response_body.started request=<Request [b'GET']>
DEBUG:httpcore.http11:receive_response_body.complete
DEBUG:httpcore.http11:response_closed.started
DEBUG:httpcore.http11:response_closed.complete
```

I noticed that the connection is to port 233. I'm using the Clash service mode to globally proxy my network traffic, which seems to be causing the 502 error. However, why do other httpx requests, as well as requests library calls, succeed?

```python
>>> httpx.get("https://www.google.com")
<Response [200 OK]>
>>> httpx.get("http://httpbin.org/")
<Response [200 OK]>
>>> requests.get("http://localhost:6333/collections")
<Response [200]>
```

Thanks~
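One way to take the global proxy out of the equation for localhost is to disable environment-based proxy configuration on the client (real httpx API; whether Clash is actually the cause is for the reporter to confirm):

```python
import httpx

# trust_env=False makes httpx ignore HTTP_PROXY/HTTPS_PROXY/ALL_PROXY,
# so the request goes straight to localhost instead of the proxy.
client = httpx.Client(trust_env=False)
print(client.get("http://localhost:6333/collections"))
```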
closed
2024-05-11T05:29:10Z
2024-05-11T08:09:20Z
https://github.com/encode/httpx/issues/3195
[]
imdoge
0
laurentS/slowapi
fastapi
106
Is it possible to bump the limits package version up to 2.1?
https://limits.readthedocs.io/en/stable/storage.html#async-storage It'll allow us to use async redis
closed
2022-08-11T11:49:33Z
2022-11-08T12:07:28Z
https://github.com/laurentS/slowapi/issues/106
[]
10ourto
1
zama-ai/concrete-ml
scikit-learn
550
how
## Feature request

A clear and concise description of the feature proposal.

## Motivation

Please outline the motivation for the proposal.
closed
2024-03-21T09:07:17Z
2024-03-21T09:08:51Z
https://github.com/zama-ai/concrete-ml/issues/550
[]
1ofvc
0
slackapi/python-slack-sdk
asyncio
1,319
chat.postMessage - not_authed?
Hi - I am starting to create an application for my company to send Slack messages to specific users via their email. To start, I'm just doing testing around chat.postMessage. However, when I try to run it as per the example on https://api.slack.com/methods/chat.postMessage, I get a not_authed error. I'm still fairly new to APIs, so any help would be much appreciated.

#### The Slack SDK version

slack-sdk==3.19.5

#### Python runtime version

Python 3.9.6

#### OS info

ProductName: macOS
ProductVersion: 13.1
BuildVersion: 22C65
Darwin Kernel Version 22.2.0: Fri Nov 11 02:08:47 PST 2022; root:xnu-8792.61.2~4/RELEASE_X86_64

#### Steps to reproduce:

```python
import logging
import os
# Import WebClient from Python SDK (github.com/slackapi/python-slack-sdk)
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

# WebClient instantiates a client that can call API methods
# When using Bolt, you can use either `app.client` or the `client` passed to listeners.
client = WebClient(token=os.environ.get("SLACK_BOT_TOKEN"))
logger = logging.getLogger(__name__)

# ID of the channel you want to send the message to
channel_id = "U04KNHE0LHF"

try:
    # Call the chat.postMessage method using the WebClient
    result = client.chat_postMessage(
        channel=channel_id,
        text="Hello world"
    )
    logger.info(result)

except SlackApiError as e:
    logger.error(f"Error posting message: {e}")
```

### Expected result:

The Slack bot sends the message to the user.

### Actual result:

Error posting message: The request to the Slack API failed. (url: https://www.slack.com/api/chat.postMessage) The server responded with: {'ok': False, 'error': 'not_authed'}
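`not_authed` means no authentication token reached the API; since `os.environ.get` silently returns `None` when the variable is unset, a quick sanity check helps (standard library only):

```python
import os

token = os.environ.get("SLACK_BOT_TOKEN")
# If this prints None, the WebClient was constructed without a token,
# which is exactly what produces {'ok': False, 'error': 'not_authed'}.
print(repr(token))
```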
closed
2023-01-20T17:32:23Z
2023-01-20T18:07:37Z
https://github.com/slackapi/python-slack-sdk/issues/1319
[ "question", "untriaged" ]
Brian-Wilcove
4
huggingface/transformers
python
36,911
Pipeline cannot guess which processor to use with Gemma 3
### System Info Inside a Kaggle kernel: ``` {'platform': 'Linux', 'platform-release': '6.6.56+', 'platform-version': '#1 SMP PREEMPT_DYNAMIC Sun Nov 10 10:07:59 UTC 2024', 'architecture': 'x86_64', 'hostname': 'e3804eb7eb6c', 'ip-address': '172.19.2.2', 'mac-address': '02:42:ac:13:02:02', 'processor': 'x86_64', 'ram': '31 GB'} ``` pip freeze output: ``` absl-py==1.4.0 accelerate==1.2.1 aiofiles==22.1.0 aiohappyeyeballs==2.4.6 aiohttp==3.11.12 aiosignal==1.3.2 aiosqlite==0.21.0 alabaster==1.0.0 albucore==0.0.19 albumentations==1.4.20 alembic==1.14.1 altair==5.5.0 annotated-types==0.7.0 annoy==1.17.3 ansicolors==1.1.8 antlr4-python3-runtime==4.9.3 anyio==3.7.1 argon2-cffi==23.1.0 argon2-cffi-bindings==21.2.0 args==0.1.0 array_record==0.5.1 arrow==1.3.0 arviz==0.20.0 astropy==6.1.7 astropy-iers-data==0.2024.12.16.0.35.48 asttokens==3.0.0 astunparse==1.6.3 async-timeout==5.0.1 atpublic==4.1.0 attrs==25.1.0 audioread==3.0.1 autograd==1.7.0 babel==2.16.0 backcall==0.2.0 bayesian-optimization==2.0.3 beautifulsoup4==4.12.3 betterproto==2.0.0b6 bigframes==1.29.0 bigquery-magics==0.4.0 bitsandbytes==0.45.3 bleach==6.2.0 blinker==1.9.0 blis==0.7.11 blobfile==3.0.0 blosc2==2.7.1 bokeh==3.6.2 Boruta==0.4.3 boto3==1.36.23 botocore==1.36.23 Bottleneck==1.4.2 -e git+https://github.com/SohierDane/BigQuery_Helper@8615a7f6c1663e7f2d48aa2b32c2dbcb600a440f#egg=bq_helper bqplot==0.12.43 branca==0.8.1 CacheControl==0.14.1 cachetools==5.5.0 Cartopy==0.24.1 catalogue==2.0.10 catboost==1.2.7 category_encoders==2.7.0 certifi==2025.1.31 cesium==0.12.1 cffi==1.17.1 chardet==5.2.0 charset-normalizer==3.4.1 Chessnut==0.4.1 chex==0.1.88 clarabel==0.9.0 click==8.1.7 click-plugins==1.1.1 cligj==0.7.2 clint==0.5.1 cloudpathlib==0.20.0 cloudpickle==3.1.0 cmake==3.31.2 cmdstanpy==1.2.5 colorama==0.4.6 colorcet==3.1.0 colorlog==6.9.0 colorlover==0.3.0 colour==0.1.5 comm==0.2.2 community==1.0.0b1 confection==0.1.5 cons==0.4.6 contourpy==1.3.1 coverage==7.6.12 cryptography==44.0.1 cuda-bindings==12.8.0 cuda-python==12.8.0 cudf-cu12==25.2.0 cufflinks==0.17.3 cuml-cu12==25.2.0 cupy-cuda12x==12.2.0 cuvs-cu12==25.2.0 cvxopt==1.3.2 cvxpy==1.6.0 cycler==0.12.1 cymem==2.0.10 Cython==3.0.11 cytoolz==1.0.1 daal==2025.2.0 dacite==1.9.2 dask==2024.12.1 dask-cuda==25.2.0 dask-cudf-cu12==25.2.0 dask-expr==1.1.21 dataclasses-json==0.6.7 datascience==0.17.6 datasets==3.3.1 datashader==0.17.0 db-dtypes==1.3.1 dbus-python==1.2.18 deap==1.4.2 debugpy==1.8.0 decorator==4.4.2 deepdiff==8.2.0 defusedxml==0.7.1 Deprecated==1.2.15 diffusers==0.31.0 dill==0.3.8 dipy==1.10.0 distributed==2024.12.1 distributed-ucxx-cu12==0.42.0 distro==1.9.0 dlib==19.24.2 dm-tree==0.1.8 dnspython==2.7.0 docker==7.1.0 docker-pycreds==0.4.0 docstring-to-markdown==0.15 docstring_parser==0.16 docutils==0.21.2 dopamine_rl==4.1.0 duckdb==1.1.3 earthengine-api==1.4.3 easydict==1.13 easyocr==1.7.2 editdistance==0.8.1 eerepr==0.0.4 einops==0.8.0 eli5==0.13.0 email_validator==2.2.0 emoji==2.14.1 en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl#sha256=86cc141f63942d4b2c5fcee06630fd6f904788d2f0ab005cce45aadb8fb73889 entrypoints==0.4 et_xmlfile==2.0.0 etils==1.11.0 etuples==0.3.9 eval_type_backport==0.2.0 exceptiongroup==1.2.2 execnb==0.1.11 Farama-Notifications==0.0.4 fastai==2.7.18 fastcore==1.7.27 fastdownload==0.0.7 fastjsonschema==2.21.1 fastprogress==1.0.3 fastrlock==0.8.2 fasttext==0.9.3 featuretools==1.31.0 filelock==3.17.0 fiona==1.10.1 firebase-admin==6.6.0 Flask==3.1.0 
flatbuffers==24.3.25 flax==0.8.5 folium==0.19.2 fonttools==4.55.3 fqdn==1.5.1 frozendict==2.4.6 frozenlist==1.5.0 fsspec==2024.12.0 funcy==2.0 fury==0.12.0 future==1.0.0 fuzzywuzzy==0.18.0 gast==0.6.0 gatspy==0.3 gcsfs==2024.10.0 GDAL==3.6.4 gdown==5.2.0 geemap==0.35.1 gensim==4.3.3 geocoder==1.38.1 geographiclib==2.0 geojson==3.2.0 geopandas==0.14.4 geopy==2.4.1 ghapi==1.0.6 gin-config==0.5.0 gitdb==4.0.11 GitPython==3.1.43 glob2==0.7 google==2.0.3 google-ai-generativelanguage==0.6.10 google-api-core==1.34.1 google-api-python-client==2.155.0 google-auth==2.27.0 google-auth-httplib2==0.2.0 google-auth-oauthlib==1.2.1 google-cloud-aiplatform==1.74.0 google-cloud-automl==1.0.1 google-cloud-bigquery==3.25.0 google-cloud-bigquery-connection==1.17.0 google-cloud-bigtable==2.27.0 google-cloud-core==2.4.1 google-cloud-datastore==2.20.2 google-cloud-firestore==2.19.0 google-cloud-functions==1.19.0 google-cloud-iam==2.17.0 google-cloud-language==2.16.0 google-cloud-pubsub==2.27.1 google-cloud-resource-manager==1.14.0 google-cloud-storage==2.14.0 google-cloud-translate==3.12.1 google-cloud-videointelligence==2.16.0 google-cloud-vision==3.10.0 google-colab @ file:///colabtools/dist/google_colab-1.0.0.tar.gz google-crc32c==1.6.0 google-genai==0.2.2 google-generativeai==0.8.3 google-pasta==0.2.0 google-resumable-media==2.7.2 googleapis-common-protos==1.66.0 googledrivedownloader==0.4 gpxpy==1.6.2 graphviz==0.20.3 greenlet==3.1.1 grpc-google-iam-v1==0.13.1 grpcio==1.68.1 grpcio-status==1.48.2 grpclib==0.4.8rc2 gspread==6.0.2 gspread-dataframe==3.3.1 gTTS==2.5.4 gym==0.25.2 gym-notices==0.0.8 gymnasium==0.29.0 h11==0.14.0 h2==4.2.0 h2o==3.46.0.6 h5netcdf==1.4.1 h5py==3.12.1 haversine==2.9.0 hep_ml==0.7.3 hf_transfer==0.1.9 holidays==0.63 holoviews==1.20.0 hpack==4.1.0 html5lib==1.1 htmlmin==0.1.12 httpcore==1.0.7 httpimport==1.4.0 httplib2==0.22.0 httpx==0.28.1 huggingface-hub==0.29.0 humanize==4.11.0 hyperframe==6.1.0 hyperopt==0.2.7 ibis-framework==9.2.0 id==1.5.0 idna==3.10 igraph==0.11.8 ImageHash==4.3.1 imageio==2.36.1 imageio-ffmpeg==0.5.1 imagesize==1.4.1 imbalanced-learn==0.12.4 imgaug==0.4.0 immutabledict==4.2.1 importlib-resources==5.13.0 importlib_metadata==8.5.0 imutils==0.5.4 in-toto-attestation==0.9.3 inflect==7.4.0 iniconfig==2.0.0 intel-cmplr-lib-rt==2024.2.0 intel-cmplr-lib-ur==2024.2.0 intel-openmp==2024.2.0 ipyevents==2.0.2 ipyfilechooser==0.6.0 ipykernel==5.5.6 ipyleaflet==0.19.2 ipympl==0.9.6 ipyparallel==8.8.0 ipython==7.34.0 ipython-genutils==0.2.0 ipython-sql==0.5.0 ipytree==0.2.2 ipywidgets==8.1.5 isoduration==20.11.0 isoweek==1.3.3 itsdangerous==2.2.0 Janome==0.5.0 jax==0.4.33 jax-cuda12-pjrt==0.4.33 jax-cuda12-plugin==0.4.33 jaxlib==0.4.33 jedi==0.19.2 jeepney==0.7.1 jellyfish==1.1.0 jieba==0.42.1 Jinja2==3.1.4 jiter==0.8.2 jmespath==1.0.1 joblib==1.4.2 json5==0.10.0 jsonpatch==1.33 jsonpickle==4.0.1 jsonpointer==3.0.0 jsonschema==4.23.0 jsonschema-specifications==2024.10.1 jupyter-console==6.1.0 jupyter-events==0.12.0 jupyter-leaflet==0.19.2 jupyter-lsp==1.5.1 jupyter-ydoc==0.2.5 jupyter_client==8.6.3 jupyter_core==5.7.2 jupyter_server==2.12.5 jupyter_server_fileid==0.9.3 jupyter_server_terminals==0.5.3 jupyter_server_ydoc==0.8.0 jupyterlab==3.6.8 jupyterlab-lsp==3.10.2 jupyterlab_pygments==0.3.0 jupyterlab_server==2.27.3 jupyterlab_widgets==3.0.13 kaggle==1.6.17 kaggle-environments==1.16.11 kagglehub==0.3.9 keras==3.5.0 keras-core==0.1.7 keras-cv==0.9.0 keras-hub==0.18.1 keras-nlp==0.18.1 keras-tuner==1.4.7 keyring==23.5.0 kiwisolver==1.4.7 kornia==0.8.0 kornia_rs==0.1.8 
kt-legacy==1.0.5 langchain==0.3.12 langchain-core==0.3.25 langchain-text-splitters==0.3.3 langcodes==3.5.0 langid==1.1.6 langsmith==0.2.3 language_data==1.3.0 launchpadlib==1.10.16 lazr.restfulclient==0.14.4 lazr.uri==1.0.6 lazy_loader==0.4 learntools @ git+https://github.com/Kaggle/learntools@010e3b5035354e15c073a0aca9e202c2e2beb742 leven==1.0.4 libclang==18.1.1 libcudf-cu12==25.2.0 libcuml-cu12==25.2.0 libcuvs-cu12==25.2.0 libkvikio-cu12==25.2.0 libpysal==4.9.2 libraft-cu12==25.2.0 librosa==0.10.2.post1 libucx-cu12==1.18.0 libucxx-cu12==0.42.0 lightgbm @ file:///tmp/lightgbm/lightgbm-4.5.0-py3-none-linux_x86_64.whl lightning-utilities==0.12.0 lime==0.2.0.1 line_profiler==4.2.0 linkify-it-py==2.0.3 llvmlite==0.43.0 lml==0.1.0 locket==1.0.0 logical-unification==0.4.6 lxml==5.3.0 Mako==1.3.9 mamba==0.11.3 marisa-trie==1.2.1 Markdown==3.7 markdown-it-py==3.0.0 MarkupSafe==3.0.2 marshmallow==3.26.1 matplotlib==3.7.5 matplotlib-inline==0.1.7 matplotlib-venn==1.1.1 mdit-py-plugins==0.4.2 mdurl==0.1.2 miniKanren==1.0.3 missingno==0.5.2 mistune==0.8.4 mizani==0.13.1 mkl==2025.0.1 mkl-fft==1.3.8 mkl-random==1.2.4 mkl-service==2.4.1 mkl-umath==0.1.1 ml-dtypes==0.4.1 mlcrate==0.2.0 mlxtend==0.23.3 mne==1.9.0 model-signing==0.2.0 more-itertools==10.5.0 moviepy==1.0.3 mpld3==0.5.10 mpmath==1.3.0 msgpack==1.1.0 multidict==6.1.0 multimethod==1.12 multipledispatch==1.0.0 multiprocess==0.70.16 multitasking==0.0.11 murmurhash==1.0.11 music21==9.3.0 mypy-extensions==1.0.0 namex==0.0.8 narwhals==1.18.4 natsort==8.4.0 nbclassic==1.1.0 nbclient==0.5.13 nbconvert==6.4.5 nbdev==2.3.34 nbformat==5.10.4 ndindex==1.9.2 nest-asyncio==1.6.0 networkx==3.4.2 nibabel==5.3.2 nilearn==0.10.4 ninja==1.11.1.3 nltk==3.2.4 nose==1.3.7 notebook==6.5.4 notebook_shim==0.2.4 numba==0.60.0 numba-cuda==0.2.0 numexpr==2.10.2 numpy==1.26.4 nvidia-cublas-cu12==12.6.4.1 nvidia-cuda-cupti-cu12==12.6.80 nvidia-cuda-nvcc-cu12==12.6.85 nvidia-cuda-runtime-cu12==12.6.77 nvidia-cudnn-cu12==9.6.0.74 nvidia-cufft-cu12==11.3.0.4 nvidia-curand-cu12==10.3.7.77 nvidia-cusolver-cu12==11.7.1.2 nvidia-cusparse-cu12==12.5.4.2 nvidia-ml-py==12.570.86 nvidia-nccl-cu12==2.23.4 nvidia-nvcomp-cu12==4.1.0.6 nvidia-nvjitlink-cu12==12.6.85 nvtx==0.2.10 nx-cugraph-cu12 @ https://pypi.nvidia.com/nx-cugraph-cu12/nx_cugraph_cu12-24.10.0-py3-none-any.whl oauth2client==4.1.3 oauthlib==3.2.2 odfpy==1.4.1 olefile==0.47 omegaconf==2.3.0 onnx==1.17.0 openai==1.57.4 opencv-contrib-python==4.10.0.84 opencv-python==4.10.0.84 opencv-python-headless==4.10.0.84 openpyxl==3.1.5 openslide-bin==4.0.0.6 openslide-python==1.4.1 opentelemetry-api==1.29.0 opentelemetry-sdk==1.29.0 opentelemetry-semantic-conventions==0.50b0 opt_einsum==3.4.0 optax==0.2.4 optree==0.13.1 optuna==4.2.1 orbax-checkpoint==0.6.4 orderly-set==5.3.0 orjson==3.10.12 osqp==0.6.7.post3 overrides==7.7.0 packaging==24.2 pandas==2.2.3 pandas-datareader==0.10.0 pandas-gbq==0.25.0 pandas-profiling==3.6.6 pandas-stubs==2.2.2.240909 pandasql==0.7.3 pandocfilters==1.5.1 panel==1.5.4 papermill==2.6.0 param==2.2.0 parso==0.8.4 parsy==2.1 partd==1.4.2 path==17.1.0 path.py==12.5.0 pathlib==1.0.1 pathos==0.3.2 patsy==1.0.1 pdf2image==1.17.0 peewee==3.17.8 peft==0.14.0 pettingzoo==1.24.0 pexpect==4.9.0 phik==0.12.4 pickleshare==0.7.5 pillow==11.0.0 platformdirs==4.3.6 plotly==5.24.1 plotly-express==0.4.1 plotnine==0.14.4 pluggy==1.5.0 ply==3.11 polars==1.9.0 pooch==1.8.2 portpicker==1.5.2 pox==0.3.5 ppft==1.7.6.9 preprocessing==0.1.13 preshed==3.0.9 prettytable==3.12.0 proglog==0.1.10 progressbar2==4.5.0 
prometheus_client==0.21.1 promise==2.3 prompt_toolkit==3.0.48 propcache==0.2.1 prophet==1.1.6 proto-plus==1.25.0 protobuf==3.20.3 psutil==5.9.5 psycopg2==2.9.10 ptyprocess==0.7.0 pudb==2024.1.3 py-cpuinfo==9.0.0 py4j==0.10.9.7 pyaml==25.1.0 PyArabic==0.6.15 pyarrow==19.0.1 pyasn1==0.6.1 pyasn1_modules==0.4.1 pybind11==2.13.6 pyclipper==1.3.0.post6 pycocotools==2.0.8 pycparser==2.22 pycryptodome==3.21.0 pycryptodomex==3.21.0 pyct==0.5.0 pycuda==2025.1 pydantic==2.11.0a2 pydantic_core==2.29.0 pydata-google-auth==1.9.0 pydegensac==0.1.2 pydicom==3.0.1 pydot==3.0.3 pydotplus==2.0.2 PyDrive==1.3.1 PyDrive2==1.21.3 pydub==0.25.1 pyemd==1.0.0 pyerfa==2.0.1.5 pyexcel-io==0.6.7 pyexcel-ods==0.6.0 pygame==2.6.1 pygit2==1.16.0 pygltflib==1.16.3 Pygments==2.19.1 PyGObject==3.42.1 PyJWT==2.10.1 pyLDAvis==3.4.1 pylibcudf-cu12==25.2.0 pylibcugraph-cu12==24.10.0 pylibraft-cu12==25.2.0 pymc==5.19.1 pymc3==3.11.4 pymongo==4.11.1 Pympler==1.1 pymystem3==0.2.0 pynvjitlink-cu12==0.4.0 pynvml==12.0.0 pyogrio==0.10.0 Pyomo==6.8.2 PyOpenGL==3.1.7 pyOpenSSL==25.0.0 pyparsing==3.2.0 pypdf==5.3.0 pyperclip==1.9.0 pyproj==3.7.0 pyshp==2.3.1 PySocks==1.7.1 pyspark==3.5.3 pytensor==2.26.4 pytesseract==0.3.13 pytest==8.3.4 python-apt==0.0.0 python-bidi==0.6.6 python-box==7.3.0 python-dateutil==2.9.0.post0 python-json-logger==3.2.1 python-louvain==0.16 python-lsp-jsonrpc==1.1.2 python-lsp-server==1.12.2 python-slugify==8.0.4 python-utils==3.9.1 pytools==2025.1.1 pytorch-ignite==0.5.1 pytorch-lightning==2.5.0.post0 pytz==2025.1 PyUpSet==0.1.1.post7 pyviz_comms==3.0.3 PyWavelets==1.8.0 PyYAML==6.0.2 pyzmq==24.0.1 qdldl==0.1.7.post4 qgrid==1.3.1 qtconsole==5.6.1 QtPy==2.4.3 raft-dask-cu12==25.2.0 rapids-dask-dependency==25.2.0 ratelim==0.1.6 ray==2.42.1 referencing==0.35.1 regex==2024.11.6 requests==2.32.3 requests-oauthlib==1.3.1 requests-toolbelt==1.0.0 requirements-parser==0.9.0 rfc3161-client==0.1.2 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 rfc8785==0.1.4 rgf-python==3.12.0 rich==13.9.4 rmm-cu12==25.2.0 rpds-py==0.22.3 rpy2==3.4.2 rsa==4.9 Rtree==1.3.0 s3fs==0.4.2 s3transfer==0.11.2 safetensors==0.4.5 scikit-image==0.25.0 scikit-learn==1.2.2 scikit-learn-intelex==2025.2.0 scikit-multilearn==0.2.0 scikit-optimize==0.10.2 scikit-plot==0.3.7 scikit-surprise==1.1.4 scipy==1.13.1 scooby==0.10.0 scs==3.2.7 seaborn==0.12.2 SecretStorage==3.3.1 securesystemslib==1.2.0 segment_anything @ git+https://github.com/facebookresearch/segment-anything.git@dca509fe793f601edb92606367a655c15ac00fdf semver==3.0.4 Send2Trash==1.8.3 sentence-transformers==3.3.1 sentencepiece==0.2.0 sentry-sdk==2.19.2 setproctitle==1.3.4 setuptools-scm==8.1.0 shap==0.44.1 shapely==2.0.7 shellingham==1.5.4 Shimmy==1.3.0 sigstore==3.6.1 sigstore-protobuf-specs==0.3.2 sigstore-rekor-types==0.0.18 simple-parsing==0.1.6 SimpleITK==2.4.1 six==1.17.0 sklearn-pandas==2.2.0 slicer==0.0.7 smart-open==7.0.5 smmap==5.0.1 sniffio==1.3.1 snowballstemmer==2.2.0 sortedcontainers==2.4.0 soundfile==0.12.1 soupsieve==2.6 soxr==0.5.0.post1 spacy==3.7.5 spacy-legacy==3.0.12 spacy-loggers==1.0.5 Sphinx==8.1.3 sphinx-rtd-theme==0.2.4 sphinxcontrib-applehelp==2.0.0 sphinxcontrib-devhelp==2.0.0 sphinxcontrib-htmlhelp==2.1.0 sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==2.0.0 sphinxcontrib-serializinghtml==2.0.0 SQLAlchemy==2.0.36 sqlglot==25.1.0 sqlparse==0.5.3 squarify==0.4.4 srsly==2.5.0 stable-baselines3==2.1.0 stanio==0.5.1 statsmodels==0.14.4 stopit==1.1.2 StrEnum==0.4.15 stringzilla==3.11.1 stumpy==1.13.0 sympy==1.13.1 tables==3.10.1 tabulate==0.9.0 tbb==2022.0.0 
tbb4py==2022.0.0 tblib==3.0.0 tcmlib==1.2.0 tenacity==9.0.0 tensorboard==2.17.1 tensorboard-data-server==0.7.2 tensorflow==2.17.1 tensorflow-cloud==0.1.5 tensorflow-datasets==4.9.7 tensorflow-hub==0.16.1 tensorflow-io==0.37.1 tensorflow-io-gcs-filesystem==0.37.1 tensorflow-metadata==1.13.1 tensorflow-probability==0.24.0 tensorflow-text==2.17.0 tensorflow_decision_forests==1.10.0 tensorstore==0.1.71 termcolor==2.5.0 terminado==0.18.1 testpath==0.6.0 text-unidecode==1.3 textblob==0.17.1 texttable==1.7.0 tf-slim==1.1.0 tf_keras==2.17.0 Theano==1.0.5 Theano-PyMC==1.1.2 thinc==8.2.5 threadpoolctl==3.5.0 tifffile==2024.12.12 tiktoken==0.9.0 timm==1.0.12 tinycss2==1.4.0 tokenizers==0.21.0 toml==0.10.2 tomli==2.2.1 toolz==0.12.1 torch @ https://download.pytorch.org/whl/cu121_full/torch-2.5.1%2Bcu121-cp310-cp310-linux_x86_64.whl torchaudio @ https://download.pytorch.org/whl/cu121/torchaudio-2.5.1%2Bcu121-cp310-cp310-linux_x86_64.whl torchinfo==1.8.0 torchmetrics==1.6.1 torchsummary==1.5.1 torchtune==0.5.0 torchvision @ https://download.pytorch.org/whl/cu121/torchvision-0.20.1%2Bcu121-cp310-cp310-linux_x86_64.whl tornado==6.3.3 TPOT==0.12.1 tqdm==4.67.1 traitlets==5.7.1 traittypes==0.2.1 transformers @ git+https://github.com/huggingface/transformers@0ebd6651acd32c982fee265b23243b89bdb89577 treelite==4.4.1 trx-python==0.3 tsfresh==0.20.2 tuf==5.1.0 tweepy==4.14.0 typeguard==4.4.1 typer==0.15.1 types-python-dateutil==2.9.0.20241206 types-pytz==2024.2.0.20241003 types-setuptools==75.6.0.20241126 typing-inspect==0.9.0 typing_extensions==4.12.2 tzdata==2025.1 tzlocal==5.2 uc-micro-py==1.0.3 ucx-py-cu12==0.42.0 ucxx-cu12==0.42.0 ujson==5.10.0 umf==0.9.1 update-checker==0.18.0 uri-template==1.3.0 uritemplate==4.1.1 urllib3==2.3.0 urwid==2.6.16 urwid_readline==0.15.1 vega-datasets==0.9.0 visions==0.7.6 vtk==9.3.1 wadllib==1.3.6 Wand==0.6.13 wandb==0.19.1 wasabi==1.1.3 watchdog==6.0.0 wavio==0.0.9 wcwidth==0.2.13 weasel==0.4.1 webcolors==24.11.1 webencodings==0.5.1 websocket-client==1.8.0 websockets==14.1 Werkzeug==3.1.3 widgetsnbextension==4.0.13 woodwork==0.31.0 wordcloud==1.9.4 wrapt==1.17.0 wurlitzer==3.1.1 xarray==2024.11.0 xarray-einstats==0.8.0 xgboost==2.0.3 xlrd==2.0.1 xvfbwrapper==0.2.9 xxhash==3.5.0 xyzservices==2024.9.0 y-py==0.6.2 yarl==1.18.3 ydata-profiling==4.12.2 ydf==0.9.0 yellowbrick==1.5 yfinance==0.2.50 ypy-websocket==0.8.4 zict==3.0.0 zipp==3.21.0 ``` ### Who can help? Hi, it is my first issue here. I hope I do everything right. I try to run Gemma 3 inside a Kaggle Kernel. When trying to create the pipeline object I get the error `Impossible to guess which processor to use. 
Please provide a processor instance or a path/identifier to a processor.` ``` def create_pipeline(): quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, ) accelerator = Accelerator() model_path = "/kaggle/input/gemma-3/transformers/gemma-3-4b-it/1/" # Load the processor processor = AutoProcessor.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) # Load the model with quantization configuration model = Gemma3ForConditionalGeneration.from_pretrained( model_path, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config, ) # Create the pipeline with model + tokenizer pipe = pipeline( "image-text-to-text", model=model, tokenizer=tokenizer, # Pass tokenizer explicitly ) pipe, model = accelerator.prepare(pipe, model) return pipe, accelerator pipe, accelerator = create_pipeline() ``` The full error message: ``` Exception Traceback (most recent call last) <ipython-input-9-ac024d156de2> in <cell line: 1>() ----> 1 pipe, accelerator = create_pipeline() <ipython-input-8-124136e0adb8> in create_pipeline() 23 24 # Create the pipeline with model + tokenizer ---> 25 pipe = pipeline( 26 "image-text-to-text", 27 model=model, /usr/local/lib/python3.10/dist-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs) 1134 else: 1135 # Impossible to guess what is the right processor here -> 1136 raise Exception( 1137 "Impossible to guess which processor to use. " 1138 "Please provide a processor instance or a path/identifier " Exception: Impossible to guess which processor to use. Please provide a processor instance or a path/identifier to a processor. ``` The Gemma 3 dataset on Kaggle seems to contain the tokeniser config file though. I am not sure if this is a bug in the library or in my code. ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ## Reproducible example [Here](https://www.kaggle.com/code/thomasmeiner/gemma-3-with-quantisation-debug-notebook-for-hf) is a Kaggle kernel: * copy & edit * add Gemma 3 4b-it dataset * enable T4 x 2 GPU * run all cells ### Expected behavior Expected behaviour would be that the code runs without errors and recognises the tokeniser.
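For reference, the traceback's own signature shows that `pipeline()` accepts a `processor` argument, so a minimal sketch of a workaround is to hand the already-loaded processor over explicitly (whether this is sufficient on the Kaggle side is an assumption, not something verified here):

```python
# Sketch: pass the processor explicitly instead of relying on auto-detection.
# Assumes `model` and `model_path` are set up as in create_pipeline() above.
from transformers import AutoProcessor, pipeline

processor = AutoProcessor.from_pretrained(model_path)

pipe = pipeline(
    "image-text-to-text",
    model=model,
    processor=processor,  # the error message asks for exactly this
)
```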
closed
2025-03-23T09:48:37Z
2025-03-23T11:29:53Z
https://github.com/huggingface/transformers/issues/36911
[ "bug" ]
thomasmeissnercrm
1
FactoryBoy/factory_boy
django
900
SubFactory with a None value by default
#### The problem Occasionally I want to have a SubFactory (or a RelatedFactory) whose value is set to None unless explicitly stated. That is, I would want `FooFactory().bar` to be None but still be able to use `FooFactory(bar__name='test')` or `FooFactory(bar=BarFactory())`. ``` class BarFactory(DjangoModelFactory): name = factory.Faker("first_name") class FooFactory(DjangoModelFactory): bar = factory.SubFactory(BarFactory) ``` Is this achievable somehow? Obviously I could use `FooFactory(bar=None)`, but this is not ideal.
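For reference, a sketch using factory_boy's `Params`/`Trait` machinery gets close to this: the sub-factory stays `None` by default and is only built when an opt-in flag is passed (the flag name `with_bar` is made up here, and customizing still requires passing the flag):

```python
import factory
from factory.django import DjangoModelFactory

class FooFactory(DjangoModelFactory):
    class Meta:
        model = Foo  # assumes the Foo model from this example

    bar = None  # no related Bar unless explicitly requested

    class Params:
        # FooFactory()                              -> bar is None
        # FooFactory(with_bar=True)                 -> builds a Bar
        # FooFactory(with_bar=True, bar__name="x")  -> customized Bar
        with_bar = factory.Trait(bar=factory.SubFactory(BarFactory))
```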
closed
2022-01-03T13:57:05Z
2022-01-04T07:03:35Z
https://github.com/FactoryBoy/factory_boy/issues/900
[ "Q&A" ]
aleehedl
2
ijl/orjson
numpy
304
Support for datetime.timedelta
I tried to serialize a dataclass that had a `timedelta` in it. Unfortunately it is not serializable, unlike `datetime` objects, even though `timedelta` goes hand in hand with `datetime`. ``` TypeError: Type is not JSON serializable: datetime.timedelta ```
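Until native support exists, orjson's `default=` hook is the usual escape hatch; a sketch that encodes a `timedelta` as seconds (the choice of encoding is arbitrary, not something orjson prescribes):

```python
import orjson
from datetime import timedelta

def default(obj):
    if isinstance(obj, timedelta):
        return obj.total_seconds()  # e.g. 90.0 for 1m30s
    raise TypeError  # let orjson report other unsupported types

orjson.dumps({"elapsed": timedelta(seconds=90)}, default=default)
# b'{"elapsed":90.0}'
```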
closed
2022-09-27T15:32:54Z
2022-10-04T19:32:59Z
https://github.com/ijl/orjson/issues/304
[]
TommyDuc
1
kennethreitz/responder
flask
326
Uvicorn 0.5
Uvicorn 0.5 has been released. * Auto-reloading will now take effect, without having to use `uvicorn` from the console. * Multi-worker support is here. I'd suggest the following: * Switch `debug=True` to `reload=True` in `uvicorn.run(...)` * Pin uvicorn to `0.5.*` * If you want to enable multiworker support (it's not the default *yet*) then use `workers=multiprocessing.cpu_count()`.
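Put together, the suggested changes look roughly like this (a sketch; `app` stands in for the responder/ASGI application object):

```python
import multiprocessing
import uvicorn

# debug=True becomes reload=True; workers turns on multi-process serving
uvicorn.run(
    app,
    host="127.0.0.1",
    port=8000,
    reload=True,
    workers=multiprocessing.cpu_count(),
)
```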
closed
2019-03-04T13:58:26Z
2024-03-31T00:57:26Z
https://github.com/kennethreitz/responder/issues/326
[ "bug" ]
tomchristie
5
pytorch/vision
computer-vision
8,874
Some v2 transforms silently ignore numpy arrays.
### 🐛 Describe the bug ```python import torch import PIL.Image import numpy as np import torchvision as tv import torchvision.transforms.v2 img_npy = np.zeros((8, 8, 3), dtype=np.uint8) img_pil = PIL.Image.fromarray(img_npy) img_tch = torch.zeros((3, 8, 8), dtype=torch.uint8) def check_resize(tr, img): try: img = tr.Resize(64)(img) return np.array(img).shape except Exception as ex: return ex for tr in [tv.transforms, tv.transforms.v2]: print(f"{tr.__name__ = }") print(f"{check_resize(tr, img_npy) = }") print(f"{check_resize(tr, img_pil) = }") print(f"{check_resize(tr, img_tch) = }") print() ``` produces the following output: ``` tr.__name__ = 'torchvision.transforms' check_resize(tr, img_npy) = TypeError("Unexpected type <class 'numpy.ndarray'>") check_resize(tr, img_pil) = (64, 64, 3) check_resize(tr, img_tch) = (3, 64, 64) tr.__name__ = 'torchvision.transforms.v2' check_resize(tr, img_npy) = (8, 8, 3) check_resize(tr, img_pil) = (64, 64, 3) check_resize(tr, img_tch) = (3, 64, 64) ``` Notice that `check_resize(tr, img_npy)` with `transforms.v2` doesn't actually resize the image. ### Versions <details> ``` Collecting environment information... PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Manjaro Linux (x86_64) GCC version: (GCC) 14.2.1 20240910 Clang version: 18.1.8 CMake version: version 3.31.2 Libc version: glibc-2.40 Python version: 3.12.7 (main, Oct 1 2024, 11:15:50) [GCC 14.2.1 20240910] (64-bit runtime) Python platform: Linux-5.15.173-1-MANJARO-x86_64-with-glibc2.40 Is CUDA available: True CUDA runtime version: 12.6.85 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti Nvidia driver version: 550.135 cuDNN version: Probably one of the following: /usr/lib/libcudnn.so.9.5.1 /usr/lib/libcudnn_adv.so.9.5.1 /usr/lib/libcudnn_cnn.so.9.5.1 /usr/lib/libcudnn_engines_precompiled.so.9.5.1 /usr/lib/libcudnn_engines_runtime_compiled.so.9.5.1 /usr/lib/libcudnn_graph.so.9.5.1 /usr/lib/libcudnn_heuristic.so.9.5.1 /usr/lib/libcudnn_ops.so.9.5.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz CPU family: 6 Model: 60 Thread(s) per core: 2 Core(s) per socket: 4 Socket(s): 1 Stepping: 3 CPU(s) scaling MHz: 76% CPU max MHz: 3900.0000 CPU min MHz: 800.0000 BogoMIPS: 7039.17 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts md_clear flush_l1d Virtualization: VT-x L1d cache: 128 KiB (4 instances) L1i cache: 128 KiB (4 instances) L2 cache: 1 MiB (4 instances) L3 cache: 8 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-7 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Mitigation; PTE Inversion; VMX 
conditional cache flushes, SMT vulnerable Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Unknown: No mitigations Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Mitigation; Microcode Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==2.2.2 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] torch==2.5.1 [pip3] torchvision==0.20.1 [pip3] triton==3.1.0 [conda] Could not collect ``` </details>
closed
2025-01-22T15:59:04Z
2025-02-19T16:45:50Z
https://github.com/pytorch/vision/issues/8874
[]
ruro
5
chaoss/augur
data-visualization
2,892
GitLab Messages for Reviews Error
Getting this error in `dev` for GitLab reviews: ```bash Traceback (most recent call last): File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 451, in trace_task R = retval = fun(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 734, in __protected_call__ return self.run(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/augur/augur/tasks/gitlab/issues_task.py", line 224, in collect_gitlab_issue_comments process_gitlab_issue_messages(comments, f"{owner}/{repo}: Gitlab issue messages task", repo_id, logger, session) File "/home/ubuntu/github/augur/augur/tasks/gitlab/issues_task.py", line 287, in process_gitlab_issue_messages issues = session.session.query(Issue).filter(Issue.repo_id == repo_id).all() ^^^^^^^^^^^^^^^ AttributeError: 'Session' object has no attribute 'session' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context self.dialect.do_execute( File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute cursor.execute(statement, parameters) psycopg2.errors.UndefinedFunction: operator does not exist: character varying = integer[] LINE 3: WHERE augur_data.repo.repo_git = ARRAY[59,58,57,56,55,54,53,... ^ HINT: No operator matches the given name and argument types. You might need to add explicit type casts. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 468, in trace_task I, R, state, retval = on_error(task_request, exc, uuid) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 379, in on_error R = I.handle_error_state( ^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 178, in handle_error_state return { ^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 231, in handle_failure task.on_failure(exc, req.id, req.args, req.kwargs, einfo) File "/home/ubuntu/github/augur/augur/tasks/init/celery_app.py", line 105, in on_failure self.augur_handle_task_failure(exc, task_id, repo_git, "core_task_failure") File "/home/ubuntu/github/augur/augur/tasks/init/celery_app.py", line 88, in augur_handle_task_failure repo = session.query(Repo).filter(Repo.repo_git == repo_git).one() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/query.py", line 2798, in one return self._iter().one() # type: ignore ^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/query.py", line 2847, in _iter result: Union[ScalarResult[_T], Result[_T]] = self.session.execute( ^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2306, in execute return self._execute_internal( ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2188, in _execute_internal result: Result[Any] = 
compile_state_cls.orm_execute_statement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement result = conn.execute( ^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1416, in execute return meth( ^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 516, in _execute_on_connection return connection._execute_clauseelement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement ret = self._execute_context( ^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context return self._exec_single_context( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context self._handle_dbapi_exception( File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2343, in _handle_dbapi_exception raise sqlalchemy_exception.with_traceback(exc_info[2]) from e File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context self.dialect.do_execute( File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) operator does not exist: character varying = integer[] LINE 3: WHERE augur_data.repo.repo_git = ARRAY[59,58,57,56,55,54,53,... ^ HINT: No operator matches the given name and argument types. You might need to add explicit type casts. 
[SQL: SELECT augur_data.repo.repo_id AS augur_data_repo_repo_id, augur_data.repo.repo_group_id AS augur_data_repo_repo_group_id, augur_data.repo.repo_git AS augur_data_repo_repo_git, augur_data.repo.repo_path AS augur_data_repo_repo_path, augur_data.repo.repo_name AS augur_data_repo_repo_name, augur_data.repo.repo_added AS augur_data_repo_repo_added, augur_data.repo.repo_type AS augur_data_repo_repo_type, augur_data.repo.url AS augur_data_repo_url, augur_data.repo.owner_id AS augur_data_repo_owner_id, augur_data.repo.description AS augur_data_repo_description, augur_data.repo.primary_language AS augur_data_repo_primary_language, augur_data.repo.created_at AS augur_data_repo_created_at, augur_data.repo.forked_from AS augur_data_repo_forked_from, augur_data.repo.updated_at AS augur_data_repo_updated_at, augur_data.repo.repo_archived_date_collected AS augur_data_repo_repo_archived_date_collected, augur_data.repo.repo_archived AS augur_data_repo_repo_archived, augur_data.repo.tool_source AS augur_data_repo_tool_source, augur_data.repo.tool_version AS augur_data_repo_tool_version, augur_data.repo.data_source AS augur_data_repo_data_source, augur_data.repo.data_collection_date AS augur_data_repo_data_collection_date FROM augur_data.repo WHERE augur_data.repo.repo_git = %(repo_git_1)s] [parameters: {'repo_git_1': [59, 58, 57, 56, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]}] (Background on this error at: https://sqlalche.me/e/20/f405) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context self.dialect.do_execute( File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute cursor.execute(statement, parameters) psycopg2.errors.UndefinedFunction: operator does not exist: character varying = integer[] LINE 3: WHERE augur_data.repo.repo_git = ARRAY[59,58,57,56,55,54,53,... ^ HINT: No operator matches the given name and argument types. You might need to add explicit type casts. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/billiard/pool.py", line 362, in workloop result = (True, prepare_result(fun(*args, **kwargs))) ^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 649, in fast_trace_task R, I, T, Rstr = tasks[task].__trace__( ^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 572, in trace_task I, _, _, _ = on_error(task_request, exc, uuid) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 379, in on_error R = I.handle_error_state( ^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 178, in handle_error_state return { ^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 231, in handle_failure task.on_failure(exc, req.id, req.args, req.kwargs, einfo) File "/home/ubuntu/github/augur/augur/tasks/init/celery_app.py", line 105, in on_failure self.augur_handle_task_failure(exc, task_id, repo_git, "core_task_failure") File "/home/ubuntu/github/augur/augur/tasks/init/celery_app.py", line 88, in augur_handle_task_failure repo = session.query(Repo).filter(Repo.repo_git == repo_git).one() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/query.py", line 2798, in one return self._iter().one() # type: ignore ^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/query.py", line 2847, in _iter result: Union[ScalarResult[_T], Result[_T]] = self.session.execute( ^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2306, in execute return self._execute_internal( ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2188, in _execute_internal result: Result[Any] = compile_state_cls.orm_execute_statement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement result = conn.execute( ^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1416, in execute return meth( ^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 516, in _execute_on_connection return connection._execute_clauseelement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement ret = self._execute_context( ^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context return self._exec_single_context( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context self._handle_dbapi_exception( File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 
2343, in _handle_dbapi_exception raise sqlalchemy_exception.with_traceback(exc_info[2]) from e File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context self.dialect.do_execute( File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) operator does not exist: character varying = integer[] LINE 3: WHERE augur_data.repo.repo_git = ARRAY[59,58,57,56,55,54,53,... ^ HINT: No operator matches the given name and argument types. You might need to add explicit type casts. [SQL: SELECT augur_data.repo.repo_id AS augur_data_repo_repo_id, augur_data.repo.repo_group_id AS augur_data_repo_repo_group_id, augur_data.repo.repo_git AS augur_data_repo_repo_git, augur_data.repo.repo_path AS augur_data_repo_repo_path, augur_data.repo.repo_name AS augur_data_repo_repo_name, augur_data.repo.repo_added AS augur_data_repo_repo_added, augur_data.repo.repo_type AS augur_data_repo_repo_type, augur_data.repo.url AS augur_data_repo_url, augur_data.repo.owner_id AS augur_data_repo_owner_id, augur_data.repo.description AS augur_data_repo_description, augur_data.repo.primary_language AS augur_data_repo_primary_language, augur_data.repo.created_at AS augur_data_repo_created_at, augur_data.repo.forked_from AS augur_data_repo_forked_from, augur_data.repo.updated_at AS augur_data_repo_updated_at, augur_data.repo.repo_archived_date_collected AS augur_data_repo_repo_archived_date_collected, augur_data.repo.repo_archived AS augur_data_repo_repo_archived, augur_data.repo.tool_source AS augur_data_repo_tool_source, augur_data.repo.tool_version AS augur_data_repo_tool_version, augur_data.repo.data_source AS augur_data_repo_data_source, augur_data.repo.data_collection_date AS augur_data_repo_data_collection_date FROM augur_data.repo WHERE augur_data.repo.repo_git = %(repo_git_1)s] [parameters: {'repo_git_1': [59, 58, 57, 56, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]}] (Background on this error at: https://sqlalche.me/e/20/f405) ```
closed
2024-08-12T21:45:02Z
2025-03-05T01:49:20Z
https://github.com/chaoss/augur/issues/2892
[ "bug", "server" ]
sgoggins
2
tensorpack/tensorpack
tensorflow
1,148
python-prctl platform specifier is useless when installing from a wheel
When installing `tensorpack[all]` on Mac `python-prctl` is expected to be skipped using a platform specifier. Unfortunately this works only when installing from a source package, not from a wheel. The reason is that the platform is evaluated in [setup.py](https://github.com/tensorpack/tensorpack/blob/master/setup.py#L60) at the machine which built the wheel, not at the target machine. This affects the `master` at the time of creating this issue. ``` mac$ pip install tensorpack[all]==0.9.4 # [...] Collecting python-prctl; extra == "all" (from tensorpack[all]==0.9.4) Using cached https://files.pythonhosted.org/packages/7a/90/61935e2530a76f41e9e4f8ba0fe073d4ad0a3e16c4953156253f939fb057/python-prctl-1.7.tar.gz Complete output from command python setup.py egg_info: This module only works on linux ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/k6/pp54bs857851lpcsrb51fm1h0000gn/T/pip-install-fda0nlll/python-prctl/ ``` This works: ``` mac$ pip install tensorpack[all]==0.9.4 --no-binary tensorpack ``` I'm not sure if the proper platform specifiers work in `install_requires`: ``` # this works well in requirements.txt python-prctl==1.7; 'linux' in sys_platform ```
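As far as I know, they do: PEP 508 markers attached to the requirement string are stored in the package metadata and evaluated by pip on the *target* machine, for wheels as well as sdists. A sketch of the `setup.py` side (the `setup()` call is abbreviated):

```python
from setuptools import setup

setup(
    name="tensorpack",
    # evaluated by pip at install time, so it also holds for wheels
    extras_require={
        "all": ['python-prctl; sys_platform == "linux"'],
    },
)
```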
closed
2019-04-16T10:18:49Z
2019-04-16T14:19:27Z
https://github.com/tensorpack/tensorpack/issues/1148
[ "enhancement" ]
bzamecnik
7
microsoft/qlib
machine-learning
1,691
t-SNE plotting
Help me! I would like to know how exactly Figure 2 in the DDG-DA paper was plotted, and what data was used?
open
2023-11-09T12:56:06Z
2023-11-09T12:56:55Z
https://github.com/microsoft/qlib/issues/1691
[ "question" ]
lianlin666
0
youfou/wxpy
api
218
Get bot-related info in the login_callback after login
In login_callback I would like to get the bot's avatar and nickname; at the moment it seems that only the uuid, status, and QR code can be obtained from qr_callback.
open
2017-10-25T02:58:24Z
2017-10-26T02:28:29Z
https://github.com/youfou/wxpy/issues/218
[]
yanpengzhe
2
iperov/DeepFaceLab
machine-learning
578
The old Optimizer 2 mode gave the best balance between batch size and speed. Is there any chance we can have it on DFL 2.0?
THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS POST ONLY ISSUES RELATED TO BUGS OR CODE ## Expected behavior *Describe, in some detail, what you are trying to do and what the output is that you expect from the program.* ## Actual behavior *Describe, in some detail, what the program does instead. Be sure to include any error message or screenshots.* ## Steps to reproduce *Describe, in some detail, the steps you tried that resulted in the behavior described above.* ## Other relevant information - **Command lined used (if not specified in steps to reproduce)**: main.py ... - **Operating system and version:** Windows, macOS, Linux - **Python version:** 3.5, 3.6.4, ... (if you are not using prebuilt windows binary)
open
2020-01-26T04:17:20Z
2023-06-08T20:31:50Z
https://github.com/iperov/DeepFaceLab/issues/578
[]
pesado1
1
ultralytics/ultralytics
deep-learning
19,518
Metrics all 0 after TensorRT INT8 export for mode val, only INT8 ONNX performs well
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question I succesfully exported my FP32 YOLOv8 OBB (s) model to FP16 and INT8. For FP16 I get nearly the same metrics values like FP32, but the INT8 model performs very bad. My calibration set are 3699 images, I tried with training calibration set (18536 images) too, but the metrics stay all at 0. Different export `batch_sizes=1,8,16` didn't helped. Update: The problem, must be between the conversion from `ONNX` to `engine` format (see below). There must be a bug between the conversion process, which leads to 0 in all metrics using `engine` model. Exporter Code: ```python from ultralytics import YOLO import argparse def export_model(model, export_args): model.export(**export_args) def main(): parser = argparse.ArgumentParser(description='Export YOLOv8 OBB model to TensorRT with user-configurable parameters.') parser.add_argument('--model_path', type=str, required=True, help='Path to the trained YOLOv8 model (.pt file).') parser.add_argument('--export_fp16', type=bool, default=False, help='Export to FP16 TensorRT model.') parser.add_argument('--export_int8', type=bool, default=False, help='Export to INT8 TensorRT model.') parser.add_argument('--format', type=str, default='engine', help="Format to export to (e.g., 'engine', 'onnx').") parser.add_argument('--imgsz', type=int, default=640, help='Desired image size for the model input. Can be an integer for square images or a tuple (height, width) for specific dimensions.') parser.add_argument('--keras', type=bool, default=False, help='Enables export to Keras format for TensorFlow SavedModel, providing compatibility with TensorFlow serving and APIs.') parser.add_argument('--optimize', type=bool, default=False, help='Applies optimization for mobile devices when exporting to TorchScript, potentially reducing model size and improving performance.') parser.add_argument('--half', type=bool, default=False, help='Enables FP16 (half-precision) quantization, reducing model size and potentially speeding up inference on supported hardware.') parser.add_argument('--int8', type=bool, default=False, help='Activates INT8 quantization, further compressing the model and speeding up inference with minimal accuracy loss, primarily for edge devices.') parser.add_argument('--dynamic', type=bool, default=False, help='Allows dynamic input sizes for ONNX, TensorRT and OpenVINO exports, enhancing flexibility in handling varying image dimensions (enforced).') parser.add_argument('--simplify', type=bool, default=False, help='Simplifies the model graph for ONNX exports with onnxslim, potentially improving performance and compatibility.') parser.add_argument('--opset', type=int, default=None, help='Specifies the ONNX opset version for compatibility with different ONNX parsers and runtimes. 
If not set, uses the latest supported version.') parser.add_argument('--workspace', type=int, default=None, help='Sets the maximum workspace size in GiB for TensorRT optimizations, balancing memory usage and performance; use None for auto-allocation by TensorRT up to device maximum.') parser.add_argument('--nms', type=bool, default=False, help='Adds Non-Maximum Suppression (NMS) to the exported model when supported (see Export Formats), improving detection post-processing efficiency.') parser.add_argument('--batch', type=int, default=1, help="Batch size for export. For INT8 it's recommended using a larger batch like batch=8 (calibrated as batch=16))") parser.add_argument('--device', type=str, default='0', help="Device to use for export (e.g., '0' for GPU 0).") parser.add_argument('--data', type=str, default=None, help="Path to the dataset configuration file for INT8 calibration.") args = parser.parse_args() # Load the final trained YOLOv8 model model = YOLO(args.model_path, task='obb') export_args = { 'format': args.format, 'imgsz': args.imgsz, 'keras': args.keras, 'optimize': args.optimize, 'half': args.half, 'int8': args.int8, 'dynamic': args.dynamic, 'simplify': args.simplify, 'opset': args.opset, 'workspace': args.workspace, 'nms': args.nms, 'batch': args.batch, 'device': args.device, 'data': args.data, } if args.export_fp16: # data argument isn't needed for FP16 exports since no calibration is required print('Exporting to FP16 TensorRT model...') fp16_args = export_args.copy() fp16_args['half'] = True fp16_args['int8'] = False export_model(model, fp16_args) print('FP16 export completed.') if args.export_int8: # NOTE: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#enable_int8_c, for INT8 calibration, the kitti_bev.yaml val split with 3769 images is used. print('Exporting to INT8 TensorRT model...') int8_args = export_args.copy() int8_args['half'] = False int8_args['int8'] = True export_model(model, int8_args) print('INT8 export completed.\nThe calibration .cache which can be reused to speed up export of future model weights using the same data, but this may result in poor calibration when the data is vastly different or if the batch value is changed drastically. In these circumstances, the existing .cache should be renamed and moved to a different directory or deleted entirely.') if not args.export_fp16 and not args.export_int8: print('No export option selected. 
Please specify --export_fp16 and/or --export_int8.') if __name__ == '__main__': main() ``` Used export command: ```txt python export_kitti_obb.py --model_path /home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/kitti_bev_yolo/run_94_Adam_88.8_87.2/weights/best.pt --export_int8 True --int8 True --dynamic=True --batch 1 --data /home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/cfg/datasets/kitti_bev.yaml ``` Validation script: ```python from ultralytics import YOLO model = YOLO('/home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/kitti_bev_yolo/run_94_Adam_88.8_87.2/weights/best_1.engine', task='obb', verbose=False) metrics = model.val(data='/home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/cfg/datasets/kitti_bev.yaml', imgsz=640, batch=16, save_json=False, save_hybrid=False, conf=0.001, iou=0.5, max_det=300, half=False, device='0', dnn=False, plots=False, rect=False, split='val', project=None, name=None) ``` Validation output with INT8 TensorRT: ![Image](https://github.com/user-attachments/assets/1998340a-ffbb-4f23-9290-0c39b63c2e30) Validation output with INT8 ONNX: ![Image](https://github.com/user-attachments/assets/237143c2-7a38-400c-b177-5acfe0ae160a) Thank you very much! ### Additional _No response_
open
2025-03-04T17:11:26Z
2025-03-14T01:33:53Z
https://github.com/ultralytics/ultralytics/issues/19518
[ "question", "OBB", "exports" ]
Petros626
19
ydataai/ydata-profiling
jupyter
1,135
pandas-profiling does not support latest version of matplotlib
### Missing functionality Support for latest version of matplotlib, version 3.6.x ### Proposed feature Update dependencies to support latest version of matplotlib ### Alternatives considered _No response_ ### Additional context _No response_
closed
2022-11-02T15:58:16Z
2023-08-08T19:27:27Z
https://github.com/ydataai/ydata-profiling/issues/1135
[ "feature request 💬" ]
tleonhardt
3
blacklanternsecurity/bbot
automation
1,712
Convert is_login_page() to excavate YARA rule
closed
2024-08-27T21:11:29Z
2024-10-14T02:55:35Z
https://github.com/blacklanternsecurity/bbot/issues/1712
[ "enhancement", "low priority" ]
TheTechromancer
1
deepfakes/faceswap
deep-learning
979
Unable to merge alignment files
## Expected behavior Merging two alignment files will generate one file containing alignments from both files. ## Actual behavior Merge fails every time returning: `self.final_alignments.file = filename AttributeError: can't set attribute` ## Steps to reproduce Running the merge command through the GUI. Selecting two alignment files, and their corresponding faces folder. ## Other relevant information crash log attached [crash_report.2020.03.05.124208091265.log](https://github.com/deepfakes/faceswap/files/4294321/crash_report.2020.03.05.124208091265.log)
closed
2020-03-05T17:49:49Z
2020-03-05T17:51:53Z
https://github.com/deepfakes/faceswap/issues/979
[]
michaeldeprospo
1
autokey/autokey
automation
588
Selecting Match phrase case to typed abbreviation also selects its opposite, Ignore case of typed abbreviation
## Classification: Bug ## Reproducibility: Always AutoKey version: 0.95.10 (both front ends) Installed via: debs from GitHub Linux Distribution: kubuntu 18.04 and others ## Summary Selecting Match phrase case to typed abbreviation also selects its opposite, Ignore case of typed abbreviation, in the GUI ## Steps to Reproduce (if applicable) Define a phrase and select Match phrase case to typed abbreviation ## Expected Results Just that one option should be selected ## Actual Results Ignore case of typed abbreviation also becomes automatically selected, which should be mutually exclusive with the selected option ## Notes In 0.95.10, if you define a phrase and select Match phrase case to typed abbreviation, it automatically also selects Ignore case of typed abbreviation, which makes no sense to me. This only works one way: selecting Ignore case of typed abbreviation does not auto-select Match phrase case to typed abbreviation. I recreated this on both front ends. I find it most curious that this bug appears in both front ends; I thought most of that code was disjoint. I did not check which option actually takes effect, but I believe it honors the first option. If it didn't, we would probably have seen numerous error reports starting shortly after 0.95.10 was released (assuming that's where the bug was introduced, which has not been investigated).
open
2021-07-29T08:18:16Z
2023-05-21T18:44:23Z
https://github.com/autokey/autokey/issues/588
[ "bug", "autokey-qt", "autokey-gtk", "help-wanted", "user interface", "easy fix", "good first issue" ]
josephj11
30
mljar/mljar-supervised
scikit-learn
491
Custom validation/test set - turn off cross-validation (CV)
As the title suggests. How do I do it, please?
closed
2021-11-25T11:05:06Z
2023-05-01T13:34:18Z
https://github.com/mljar/mljar-supervised/issues/491
[]
cibic89
3
kizniche/Mycodo
automation
1,092
Internal server error after upgrading from version 8.11.0 to 8.12.6
### Describe the problem/bug Mycodo produces an internal server error on version `8.12.6` (after successfully upgrading from `8.11.0`) with the following log: ``` Fout 500 - Interne Server Fout Something bad happened but it's probably not your fault. Letting the developers know about these issues is crucial to supporting Mycodo. Please submit a new issue on GitHub with the following diagnostic information and error traceback (copy the entire traceback): Version: 8.12.6 Database: 6e394f2e8fec Model: Raspberry Pi 4 Model B Rev 1.4 Release: Distributor ID: Raspbian Description: Raspbian GNU/Linux 10 (buster) Release: 10 Codename: buster Firmware: Aug 3 2021 18:14:56 Copyright (c) 2012 Broadcom version 40787ee5905644f639a2a0f6e00ae12e517a2211 (clean) (release) (start) Error (Full Traceback): Traceback (most recent call last): File "/home/sjoerd/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/home/sjoerd/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/sjoerd/Mycodo/env/lib/python3.7/site-packages/flask_restx/api.py", line 652, in error_router return original_handler(e) File "/home/sjoerd/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/home/sjoerd/Mycodo/env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise raise value File "/home/sjoerd/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/home/sjoerd/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/home/sjoerd/Mycodo/mycodo/mycodo_flask/routes_general.py", line 86, in index_page if not flask_login.current_user.index_page: File "/home/sjoerd/Mycodo/env/lib/python3.7/site-packages/werkzeug/local.py", line 347, in __getattr__ return getattr(self._get_current_object(), name) AttributeError: 'AnonymousUserMixin' object has no attribute 'index_page' ``` ### Versions: - Database: 6e394f2e8fec - Model: Raspberry Pi 4 Model B Rev 1.4 Release: - Distributor ID: Raspbian - Description: Raspbian GNU/Linux 10 (buster) - Release: 10 - Codename: buster Firmware: - Aug 3 2021 18:14:56 - Copyright (c) 2012 Broadcom - version 40787ee5905644f639a2a0f6e00ae12e517a2211 (clean) (release) (start) ### Reproducibility This error occurred after upgrading Mycodo from version `8.11.0` to version `8.12.6`. The updating process itself went fine and did not produce any errors. ### Expected behavior I expect to use the latest version without errors after upgrading. ### Additional context I was able to get Mycodo working again after restoring. I restored immediately since my Shiitake are fruiting at the moment, so I was not able to grab additional logs. I can reproduce the issue if the logs are desired. (Or perhaps they are still stored somewhere?)
closed
2021-09-22T18:26:05Z
2021-10-28T18:26:00Z
https://github.com/kizniche/Mycodo/issues/1092
[]
sjoerdschouten
5
darrenburns/posting
rest-api
174
Global header setting
Setting headers in a global scope for all the requests of a collection would be useful when working with an API that requires authentication. It would make it much easier to manage the headers passed.
open
2025-02-01T03:55:28Z
2025-02-17T17:49:29Z
https://github.com/darrenburns/posting/issues/174
[]
snikoletopoulos
2
flairNLP/flair
pytorch
2,886
Error resuming training of a NER model
I want to resume training a NER model as shown in the tutorials, loading the model checkpoint and running it with: `trainer.resume(trained_model, base_path=path + '-resume', max_epochs=25, )` It simply shows me the metrics of the loaded model and does not perform any training.
closed
2022-08-04T22:50:43Z
2023-01-07T13:48:20Z
https://github.com/flairNLP/flair/issues/2886
[ "bug", "wontfix" ]
fmafelipe
5
flasgger/flasgger
flask
181
yaml syntax broken
Hello, I now have this issue: https://gist.github.com/anonymous/904c4628fc3f9d23870a915cf0111610. It occurs because we have some files like https://zerobin.net/?e3390f9a981a3ccd#mAvv8zX8ItzuQPs6qxIqXVPDc03d4yspKeQ1mQTeYms= and our flasgger version is flasgger==0.8.0 (previously flasgger==0.6.3). It was working fine until I upgraded the flasgger version (maybe I changed something else, but I'm not sure what). Any idea whether I should change something on my side, or whether flasgger should change something?
closed
2018-02-19T09:24:56Z
2018-08-07T14:20:20Z
https://github.com/flasgger/flasgger/issues/181
[ "bug" ]
eregnier
7
qubvel-org/segmentation_models.pytorch
computer-vision
842
Segmentation fault (core dumped) Problem
![image](https://github.com/qubvel/segmentation_models.pytorch/assets/35205776/e0660f21-25c3-4087-9711-acfff0b5dca4) I have a problem running: trainer.fit( model, train_dataloaders=train_dataloader, val_dataloaders=valid_dataloader, )
closed
2023-12-12T09:26:31Z
2024-02-18T01:49:34Z
https://github.com/qubvel-org/segmentation_models.pytorch/issues/842
[ "Stale" ]
sean86428
2
jupyter-incubator/sparkmagic
jupyter
56
Kill all sessions for a given Livy endpoint
When a user adds a new session, the user might find out that leaked/unused Livy sessions are taking resources up and might want to kill some of them.
closed
2015-12-04T23:18:49Z
2015-12-18T22:07:37Z
https://github.com/jupyter-incubator/sparkmagic/issues/56
[ "kind:enhancement" ]
aggFTW
3
voila-dashboards/voila
jupyter
772
Call a python function from an HTML widget
I have created an HTML widget with multiple elements, and I want to be able to call a Python function from the onclick event of one of them using JavaScript. If I run the widgets from Jupyter I am able to do something like ``` onclick="IPython.notebook.kernel.execute(`my_function({some_parameter})`)" ``` That way I can programmatically create the HTML elements and make them call a Python function with whatever parameter I need, but when running this with Voila I get the error `IPython is not defined`. Is there a way to call a function defined in Python from JavaScript in Voila? Thank you!
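One pattern that does work under Voilà is to route the click through the widget comm layer instead of the `IPython.notebook` JavaScript object, e.g. with an `ipywidgets.Button` (a sketch; `my_function` stands in for the real handler):

```python
import ipywidgets as widgets
from IPython.display import display

def my_function(some_parameter):
    print(f"called with {some_parameter}")

out = widgets.Output()
button = widgets.Button(description="Run")

def on_click(_):
    with out:
        my_function("some_parameter")  # executes in the kernel, Voilà included

button.on_click(on_click)
display(button, out)
```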
closed
2020-11-28T10:20:43Z
2020-12-21T10:41:59Z
https://github.com/voila-dashboards/voila/issues/772
[]
pabloppp
5
sammchardy/python-binance
api
1,208
Spot batch orders
I'm familiar with futures_place_batch_order() on futures, but is there a way to set batch orders on Spot? Can't find any information on it
closed
2022-06-26T16:05:14Z
2022-07-01T01:07:39Z
https://github.com/sammchardy/python-binance/issues/1208
[]
Karlheinzniebuhr
1
netbox-community/netbox
django
18,198
Can not create a Duplicate IP-Range with ENFORCE_GLOBAL_UNIQUE set to false
### Deployment Type Self-hosted ### Triage priority N/A ### NetBox Version V4.1.7 ### Python Version 3.12 ### Steps to Reproduce Configuration parameter ENFORCE_GLOBAL_UNIQUE set to false 1. Click on IPAM -> IP Ranges 2. Click on Add 3. Add an IP range (Leave VRF empty) 4. Add the same IP range again (Leave VRF empty) ### Expected Behavior An overlapping IP range should be created ### Observed Behavior Error message: Defined addresses overlap with range xxx.xxx.xxx.xxx-xxx/xx in VRF None
closed
2024-12-10T19:12:45Z
2025-03-12T03:08:47Z
https://github.com/netbox-community/netbox/issues/18198
[]
antonvdl
2
dynaconf/dynaconf
django
407
[bug] Only one CombinedValidator is registered - subsequent are silently ignored
**Describe the bug** If validators are added through `settings.validators.register()`, only first CombinedValidator is registered - subsequent are silently skipped. **Analysis** The root cause is `Validator.__eq__()` method. `ValidatorList.register()` will go through provided validators and add them, but only if they aren't already on the list (`validator not in self`). `in` will return "`True` if an item of *s* is equal to *x*" ([docs](https://docs.python.org/3.8/library/stdtypes.html#common-sequence-operations)). That logic was added in #256 . When `Validator.__eq__()` compares different objects, it looks into various `Validator` properties and compares them in pair. If they are all the same, `__eq__()` will assume these are two instances of effectively the same validation rule. The problem is, `CombinedValidator` doesn't have any of these properties, so two completely different `CombinedValidator` will appear to be the same for `__eq__()` method. **To Reproduce** In python shell: ``` >>> from dynaconf import Validator >>> (Validator("foo.foo") | Validator("foo.bar")) == (Validator("bar.foo") & Validator("bar.bar")) True ``` This should return False, as these two `CombinedValidator`s have nothing in common. **Environment (please complete the following information):** - OS: Linux/Fedora32 - Dynaconf master (6c568d687e29ca5ed9806a74f1f4fb7e4b96be2f), 3.0.1 **Additional context** I might try working on patch, but I'm not sure about best approach. Perhaps we need type comparison inside `__eq__()` (so comparing AndValidator to OrValidator can return early). But how do we compare two `AndValidator`s? Look into combined validators properties recursively?
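To make the last paragraph concrete, a sketch of the kind of guard being suggested (hypothetical code, not dynaconf's actual implementation; `_identity_fields` is an invented helper standing in for the existing property-by-property comparison):

```python
class Validator:
    def __eq__(self, other):
        if self is other:
            return True
        # And/Or CombinedValidators are distinct classes, so a type check
        # stops two unrelated combined validators from comparing equal
        if type(self) is not type(other):
            return False
        return self._identity_fields() == other._identity_fields()
```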
closed
2020-09-11T15:50:54Z
2020-09-16T15:31:31Z
https://github.com/dynaconf/dynaconf/issues/407
[ "bug" ]
mirekdlugosz
1
AntonOsika/gpt-engineer
python
311
How do I make it work on the fish shell
# Issue Running `source venv/bin/activate` in fish fails: ╰─λ source venv/bin/activate venv/bin/activate (line 41): Unsupported use of '='. In fish, please use 'set VIRTUAL_ENV "/home/alex/Desktop/GPT-Plus/gpt-engineer/venv"'. VIRTUAL_ENV="/home/alex/Desktop/GPT-Plus/gpt-engineer/venv" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ from sourcing file venv/bin/activate source: Error while reading file 'venv/bin/activate' ### Steps to Reproduce 1. Open a fish shell and follow the README 2. Try to activate the env with `source venv/bin/activate` 3. You get the error above
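virtualenv ships a fish-specific activation script next to the bash one, so the usual fix is (path assumed from the commands above):

```fish
# fish cannot source the bash script; use the .fish variant instead
source venv/bin/activate.fish
```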
closed
2023-06-22T01:49:30Z
2023-06-29T08:45:34Z
https://github.com/AntonOsika/gpt-engineer/issues/311
[]
ALEX5402
2
JaidedAI/EasyOCR
deep-learning
748
Using both opencv-python and opencv-python-headless
Hi! I already have the `opencv-python` package installed and am using it for my project. But when I install `EasyOCR` it also downloads `opencv-python-headless`, which causes a conflict with the first package: ```bash cv2.imshow('Original Image', img) cv2.error: OpenCV(4.5.4) /tmp/pip-req-build-9vck9bv0/opencv/modules/highgui/src/window.cpp:1274: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage' ``` The workaround in #630 did work for me, so I propose checking in setup.py whether the user has already installed `opencv-python<=4.5.4.60`. Thanks!
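A sketch of what such a setup.py guard might look like (hypothetical; note it only runs for source installs, since prebuilt wheels bake their requirements in at build time):

```python
# setup.py (sketch): only pull in the headless wheel when no
# GUI-capable OpenCV build is already installed.
import pkg_resources

def opencv_requirement():
    for name in ("opencv-python", "opencv-contrib-python"):
        try:
            pkg_resources.get_distribution(name)
            return []  # a suitable build is already present
        except pkg_resources.DistributionNotFound:
            pass
    return ["opencv-python-headless<=4.5.4.60"]
```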
open
2022-06-06T14:25:27Z
2022-06-07T12:36:38Z
https://github.com/JaidedAI/EasyOCR/issues/748
[]
s39674
0
babysor/MockingBird
deep-learning
467
What explains a strange attention plot with a fairly low loss? Also, vocoder training fails
**The attention alignment is, strangely, not diagonal but a straight line sitting at the top, while the loss has dropped to 0.19.** **WaveRNN cannot be trained (it exits without any error), and HiFi-GAN runs out of GPU memory.** **1k** ![step-1000-mel-spectrogram_sample_1](https://user-images.githubusercontent.com/75235761/159192266-af8b1a59-ad6b-4843-8a09-12706578cc31.png) ![attention_step_1000_sample_1](https://user-images.githubusercontent.com/75235761/159192273-9f118433-c232-4238-9998-bccc9070648c.png) I didn't pay attention at first. **3k** ![step-3000-mel-spectrogram_sample_1](https://user-images.githubusercontent.com/75235761/159192289-6c0a32e9-4a20-43e0-9501-d98dd4326318.png) ![attention_step_3000_sample_1](https://user-images.githubusercontent.com/75235761/159192291-1f694d47-a3a6-46a2-8b16-1ab61a792184.png) I had to step out for a while here, and when I came back I noticed the problem. **5k** ![step-5000-mel-spectrogram_sample_1](https://user-images.githubusercontent.com/75235761/159192343-2e635485-9f9f-4bce-95b0-01787058dfd7.png) ![attention_step_5000_sample_1](https://user-images.githubusercontent.com/75235761/159192349-8e9c7619-e428-4327-902d-1ad5e0f2dee0.png) **10k** ![step-10000-mel-spectrogram_sample_1](https://user-images.githubusercontent.com/75235761/159192362-9bb45e00-1f51-4169-80d8-bcb882113a93.png) ![attention_step_10000_sample_1](https://user-images.githubusercontent.com/75235761/159192368-a99d6ddb-8f82-49b1-82c1-98cbd812d59a.png) **16k** ![step-16000-mel-spectrogram_sample_1](https://user-images.githubusercontent.com/75235761/159192381-bb3eb561-f7ac-4e29-9abf-a4fd0032e6a5.png) ![attention_step_16000_sample_1](https://user-images.githubusercontent.com/75235761/159192393-859b52b4-204d-4545-b322-f4d8b1c4f59d.png) **21k** ![step-21000-mel-spectrogram_sample_1](https://user-images.githubusercontent.com/75235761/159192396-59babf99-b58e-46e9-b87e-82643a56435c.png) ![attention_step_21000_sample_1](https://user-images.githubusercontent.com/75235761/159192400-e348f183-88db-4aea-bc94-f5848326e707.png) Then I terminated the training as soon as I got back. I tested it in the toolbox: the output is not noise, but it is not intelligible speech either... I wondered whether the bundled vocoder was the problem and planned to train a vocoder myself, but WaveRNN exits immediately when run, with no error. Batch size is set to 4. ![image](https://user-images.githubusercontent.com/75235761/159192649-b6d96ee5-50f3-4d1f-9696-9ee73bef0122.png) HiFi-GAN cannot be trained either; batch size is set to 4. ![image](https://user-images.githubusercontent.com/75235761/159192570-59417a0c-0348-4e6a-9668-5671a1140cdd.png) While training HiFi-GAN there is also the following message ![image](https://user-images.githubusercontent.com/75235761/159192588-3b6cd419-07b5-4650-9d0c-5f387fa3dfa4.png)
closed
2022-03-21T00:31:36Z
2022-03-29T13:39:05Z
https://github.com/babysor/MockingBird/issues/467
[]
Okimoto-TK
2
ageitgey/face_recognition
python
1,327
Face Detection Accuracy issue
Latest Face_Recognition * 3.9 * Google Colab There are a few issues with this library, which is very well done, I might add. The first is that it somehow confuses Black faces with one another, even ones that are clearly distinct. The other is that it sometimes detects weird things as faces. This is an example of one of those weird detections ![image](https://user-images.githubusercontent.com/78774159/121973946-a7387300-cd7e-11eb-8038-eae421de260f.png)
open
2021-06-15T00:55:27Z
2021-07-27T13:47:12Z
https://github.com/ageitgey/face_recognition/issues/1327
[]
SetuBaru
1
alirezamika/autoscraper
automation
50
WebScraper
closed
2021-02-07T17:32:25Z
2021-02-07T17:33:38Z
https://github.com/alirezamika/autoscraper/issues/50
[]
jidegade
0
albumentations-team/albumentations
deep-learning
2,426
[Feature request] Add apply_to_images to RandomShadow
open
2025-03-11T01:14:40Z
2025-03-11T01:14:46Z
https://github.com/albumentations-team/albumentations/issues/2426
[ "enhancement", "good first issue" ]
ternaus
0
kornia/kornia
computer-vision
2,931
implement a deterministic two view scene
Add a todo or open a ticket to implement a deterministic two-view scene _Originally posted by @edgarriba in https://github.com/kornia/kornia/pull/2930#discussion_r1640805620_ We have a bunch of tests in the suite that use the random two-view scenes, but that's not enough, since some cases will fail if the scene isn't deterministic, similar to: https://github.com/kornia/kornia/blob/ca91494a504d04ff23ef5eff0747cee01cb3bcba/testing/geometry/create.py#L34
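A minimal sketch of what a deterministic variant could look like, using only fixed tensors (this is illustrative, not kornia's API; the pinhole projection is written out by hand):

```python
import torch

def deterministic_two_view_scene(dtype=torch.float64):
    """Fixed cameras and a fixed point grid, so every run sees the same scene."""
    K = torch.tensor([[100.0, 0.0, 32.0],
                      [0.0, 100.0, 24.0],
                      [0.0, 0.0, 1.0]], dtype=dtype)
    # View 1: identity pose at the origin. View 2: translated along x (baseline).
    R1, t1 = torch.eye(3, dtype=dtype), torch.zeros(3, 1, dtype=dtype)
    R2, t2 = torch.eye(3, dtype=dtype), torch.tensor([[-1.0], [0.0], [0.0]], dtype=dtype)
    # 4x4 grid of 3D points, all in front of both cameras
    xs, ys = torch.meshgrid(torch.linspace(-1.0, 1.0, 4, dtype=dtype),
                            torch.linspace(-1.0, 1.0, 4, dtype=dtype), indexing="ij")
    points3d = torch.stack([xs.flatten(), ys.flatten(),
                            torch.full((16,), 5.0, dtype=dtype)], dim=-1)

    def project(R, t):
        cam = points3d @ R.T + t.T   # world -> camera frame
        uv = cam @ K.T               # pinhole projection (homogeneous pixels)
        return uv[:, :2] / uv[:, 2:3]

    return K, (R1, t1), (R2, t2), points3d, project(R1, t1), project(R2, t2)
```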
open
2024-06-15T17:35:16Z
2024-07-02T01:51:11Z
https://github.com/kornia/kornia/issues/2931
[ "help wanted", "feature request", "module: geometry" ]
johnnv1
0
proplot-dev/proplot
data-visualization
198
Show labels of grouped pandas dataframe in one legend
### Description Proplot plots the legend for each label instead of listing labels on one legend. As a result, several legends are stacked together. ### Steps to reproduce ```python import proplot as plot df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'], 'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'], 'C' : np.random.randn(8), 'D' : np.random.randn(8)}) fig, axs = plot.subplots() df.groupby('A')['C'].plot(legend=True, ax=axs) ``` **Expected behavior**: ![test_group_legend_matplotlib](https://user-images.githubusercontent.com/30388627/85844125-cf793b80-b7d4-11ea-81b9-e5a8748fb6f2.jpg) **Actual behavior**: ![test_group_legend](https://user-images.githubusercontent.com/30388627/85843964-9640cb80-b7d4-11ea-94ff-ef2e3c6c7ddf.jpg) ### Equivalent steps in matplotlib ```python import matplotlib.pyplot as plt df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'], 'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'], 'C' : np.random.randn(8), 'D' : np.random.randn(8)}) df.groupby('A')['C'].plot(legend=True) plt.show() ``` ### Proplot version 0.6.3
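A workaround sketch until the integration is fixed: label each group's line manually and draw a single legend (assumes the same `df` as above; `loc='best'` is passed through to matplotlib):

```python
fig, axs = plot.subplots()
for name, group in df.groupby('A'):
    axs.plot(group.index, group['C'], label=name)  # one labeled line per group
axs.legend(loc='best')  # a single legend listing 'bar' and 'foo'
```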
closed
2020-06-26T09:46:52Z
2021-07-03T16:03:56Z
https://github.com/proplot-dev/proplot/issues/198
[ "integration" ]
zxdawn
2
geex-arts/django-jet
django
383
icon-yes, icon-no not showing on themes gray, purple, light green, light blue
It's due to the `$warning-text-color` and `$success-text-color` set to `#fff` on those themes.
open
2019-01-03T10:10:13Z
2019-01-05T16:45:17Z
https://github.com/geex-arts/django-jet/issues/383
[]
aparakian
0
tensorflow/tensor2tensor
deep-learning
1,710
Rebuild T2T with a single thread
I tested a machine translation job on CPU in an interactive way through the command line. It needs about 1 second to decode one sentence. I find that in this mode multiple threads are not used at all. May I rebuild T2T in a single-threaded way, and might that accelerate the decoding speed?
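For comparison, TF 1.x thread pools can usually be capped without rebuilding anything; a sketch of the session configuration involved (how to thread this into t2t-decoder specifically is not asserted here):

```python
import tensorflow as tf

# Pin both thread pools to one thread; pass the config to the Session
# (or via RunConfig(session_config=...) in Estimator-based code).
config = tf.ConfigProto(
    intra_op_parallelism_threads=1,
    inter_op_parallelism_threads=1,
)
sess = tf.Session(config=config)
```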
open
2019-09-24T06:53:47Z
2019-09-24T06:53:47Z
https://github.com/tensorflow/tensor2tensor/issues/1710
[]
Jason-kid
0
fa0311/TwitterInternalAPIDocument
graphql
274
Some i18n data is missing.
https://twitter.com/kaonasi_biwa/status/1692736910089941078 @kaonasi-biwa
closed
2023-08-19T20:28:22Z
2023-08-21T15:00:57Z
https://github.com/fa0311/TwitterInternalAPIDocument/issues/274
[]
fa0311
0
onnx/onnx
machine-learning
6,162
The PixelUnshuffle op Cannot be converted to SpaceToDepth
# Ask a Question ### Question The PyTorch PixelUnshuffle operator is exported as Reshape -> Transpose -> Reshape, but what I expect is a SpaceToDepth node in ONNX. ### Further information ![image](https://github.com/onnx/onnx/assets/23145532/13197f5a-aef0-4af5-9f5d-bfc76dcaff2a)
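For reference, a sketch of the node one would expect instead of the decomposed pattern. Note this is hand-built with the onnx helper, not something the exporter emits today, and the channel ordering of SpaceToDepth has to match pixel_unshuffle's for the swap to be semantics-preserving.

```python
from onnx import helper

# PixelUnshuffle(downscale_factor=r) corresponds to SpaceToDepth(blocksize=r),
# modulo the channel-ordering caveat above; r=2 is an assumed example value.
node = helper.make_node(
    "SpaceToDepth",
    inputs=["x"],
    outputs=["y"],
    blocksize=2,
)
print(node)
```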
closed
2024-06-04T09:45:16Z
2024-08-24T05:30:20Z
https://github.com/onnx/onnx/issues/6162
[ "question" ]
iamweiweishi
3
koxudaxi/fastapi-code-generator
fastapi
207
Add support for tags
It would be great to have support for tags. I volunteer to implement this; I just don't want to start implementing it before https://github.com/koxudaxi/fastapi-code-generator/pull/203 is merged, to avoid merge conflicts. FastAPI docs: https://fastapi.tiangolo.com/tutorial/path-operation-configuration/#tags
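For reference, a minimal sketch of the kind of output the generator would need to emit: the OpenAPI operation's tags passed through to the path decorator (not the generator's actual template).

```python
from fastapi import FastAPI

app = FastAPI()

# `tags` here would come from the `tags` field of the OpenAPI operation.
@app.get("/users/", tags=["users"])
async def list_users():
    return []
```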
closed
2021-10-22T13:28:58Z
2023-01-18T12:28:25Z
https://github.com/koxudaxi/fastapi-code-generator/issues/207
[]
rominf
3
pytorch/pytorch
deep-learning
149,586
UserWarning: Dynamo does not know how to trace the builtin `None.pybind11_object.__new__.`
### 🐛 Describe the bug I'm filing an issue since this is a Python built-in (granted, the error message implies that it is not, since it references PyBind11, but I'm opening an issue anyway since it is caused by returning/using `None` in a compiled function). ### Versions 2.7.0a0+gitebd087e cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @ydwu4 @xmfan @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
open
2025-03-20T00:32:49Z
2025-03-21T19:28:30Z
https://github.com/pytorch/pytorch/issues/149586
[ "triaged", "oncall: pt2", "module: dynamo", "module: higher order operators", "module: compiled autograd", "module: pt2-dispatcher", "module: flex attention" ]
cora-codes
11
mithi/hexapod-robot-simulator
dash
117
pip install problem (on windows)
requirements.txt requests markupsafe 1.1.1, but werkzeug 2.2.3 requires MarkupSafe 2.1.1 or above.
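A hedged fix: either relax the pin, e.g. `pip install 'markupsafe>=2.1.1'` after install or changing the `markupsafe==1.1.1` line in requirements.txt, or pin werkzeug below the version that raised the floor. The exact pins here are assumptions, not tested against this repo.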
open
2023-08-01T14:23:21Z
2023-12-04T11:06:51Z
https://github.com/mithi/hexapod-robot-simulator/issues/117
[]
bestbinaryboi
2
d2l-ai/d2l-en
pytorch
2,632
Equation error in calculus.md
In section [2.4.3. Partial Derivatives and Gradients](https://d2l.ai/chapter_preliminaries/calculus.html#partial-derivatives-and-gradients), the equation seems to be wrong: <img width="477" alt="Image" src="https://github.com/user-attachments/assets/5fb2a401-164c-4ec2-b657-a368380103d6" /> It should be <img width="495" alt="Image" src="https://github.com/user-attachments/assets/bd35af03-e7c1-48c4-9fd9-33c2eddd2e3e" /> If so, I would like to create a PR to fix it. Thanks
open
2025-01-17T14:50:03Z
2025-01-17T14:50:22Z
https://github.com/d2l-ai/d2l-en/issues/2632
[]
wsehjk
0
xinntao/Real-ESRGAN
pytorch
606
How can I customize the resolution of the output image?
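A hedged pointer, since the issue has no body: the repo's inference script exposes an output scale flag, something like `python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs -s 2`, where `-s`/`--outscale` controls the final upsampling scale (flag names from memory; check `python inference_realesrgan.py -h`).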
open
2023-04-12T12:42:37Z
2023-04-12T12:42:37Z
https://github.com/xinntao/Real-ESRGAN/issues/606
[]
TQG1997
0
nvbn/thefuck
python
1,502
fish issue on termux with psutil
Error running `eval $(TF_SHELL=fish thefuck --alias)` on fish shell in termux. Also have psutil err when running normally. The fix was to run the eval command using TF_SHELL=fish but that doesn't seem to work here. I'm using https://github.com/DL909/thefuck/ because this repo is just dead and has err due to imp. Running eval alias command has this error ``` eval $(TF_SHELL=fish thefuck --alias) 21.2s  Fri Mar 7 14:16:28 2025 fish: Expected end of the statement, but found a pipe function fuck -d "Correct your previous console command" set -l fucked_up_command $history[1] env TF_SHELL=fish TF_ALIAS=fuck PYTHONIOENCODING=utf-8 thefuck $fucked_up_command THEFUCK_ARGUMENT_PLACEHOLDER $argv | read -l unfucked_command if [ "$unfucked_command" != "" ] eval $unfucked_command builtin history delete --exact --case-sensitive -- $fucked_up_command builtin history merge end end ^ ``` Running regular command has this error ``` fuck 7.6s  Fri Mar 7 14:14:36 2025 Traceback (most recent call last): File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/psutil/_pslinux.py", line 1646, in wrapper return fun(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/psutil/_pslinux.py", line 1890, in create_time bt = BOOT_TIME or boot_time() ^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/psutil/_pslinux.py", line 1561, in boot_time with open_binary(path) as f: ^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/psutil/_common.py", line 766, in open_binary return open(fname, "rb", buffering=FILE_READ_BUFFER_SIZE) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PermissionError: [Errno 13] Permission denied: '/proc/stat' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/data/data/com.termux/files/usr/bin/fuck", line 5, in <module> from thefuck.entrypoints.not_configured import main File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/thefuck/entrypoints/not_configured.py", line 14, in <module> from ..shells import shell # noqa: E402 ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/thefuck/shells/__init__.py", line 52, in <module> shell = _get_shell_from_env() or _get_shell_from_proc() ^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/thefuck/shells/__init__.py", line 45, in _get_shell_from_proc proc = proc.parent() ^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/psutil/__init__.py", line 596, in parent ctime = self.create_time() ^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/psutil/__init__.py", line 772, in create_time self._create_time = self._proc.create_time() ^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/psutil/_pslinux.py", line 1648, in wrapper raise AccessDenied(pid, name) from err psutil.AccessDenied: (pid=25206, name='fuck') ```
open
2025-03-07T14:26:46Z
2025-03-07T14:26:46Z
https://github.com/nvbn/thefuck/issues/1502
[]
EnderNon
0
STVIR/pysot
computer-vision
81
model size different
I want to train pysot based on MobileNetV2. The siamrpn_mobilev2_l234_dwxcorr model downloaded from the Model Zoo was 44.9MB on an Ubuntu 16 system. But when I train a new model using the config in "experiments/siamrpn_mobilev2_l234_dwxcorr/config.yaml", the model size is 75.6MB. Is this normal?
closed
2019-06-28T16:45:21Z
2019-06-29T00:43:09Z
https://github.com/STVIR/pysot/issues/81
[]
MaxLin86
1
aminalaee/sqladmin
fastapi
714
Support the Aiohttp framework
### Checklist - [X] There are no similar issues or pull requests for this yet. ### Is your feature related to a problem? Please describe. Hi there, I think this project is really cool and it can use some [aiohttp](https://docs.aiohttp.org/en/stable/) support. ### Describe the solution you would like. _No response_ ### Describe alternatives you considered _No response_ ### Additional context _No response_
closed
2024-02-19T00:41:09Z
2024-03-14T10:22:26Z
https://github.com/aminalaee/sqladmin/issues/714
[]
anorprogrammer
2
robotframework/robotframework
automation
5,376
Process: Kill process if Robot's timeout occurs when waiting for process to end
Issue #5345 reported that Robot's timeouts weren't able to stop `Run Process` or `Wait For Process` keywords. That was fixed so that these keywords can be stopped, but processes that the keywords were waiting for were left running. Leaving processes running in the background is likely not a good idea, especially because they have often hung in this case. This issue proposes killing the processes instead. Killing processes when Robot's timeout occurs requires handling the timeout in the library code. That is actually surprisingly easy by catching `robot.errors.TimeoutError` and re-raising it once the process has been killed (a sketch follows below). There could be other libraries that want to do such cleanup as well, and documenting how to do that in the User Guide is probably a good idea. I'll submit a separate issue about that. Notice that killing processes as proposed above doesn't fully prevent processes from being left running. That can still happen if you use `Start Process` and Robot's timeout occurs before `Wait For Process` is called. We could enhance the library by adding some kind of auto-closing functionality to it, but I don't consider that too high priority because the library already has `Terminate All Processes` that can be used in test or suite teardown. Such an enhancement should anyway get its own issue.
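A minimal sketch of the proposed library-side handling, not the final implementation; `process` is assumed to be a `subprocess.Popen` object inside the Process library:

```python
from robot.errors import TimeoutError as RobotTimeoutError

def wait_for_process(process):
    try:
        return process.wait()
    except RobotTimeoutError:
        # Robot's timeout fired while we were waiting: kill the (likely
        # hung) process before letting the timeout propagate normally.
        process.kill()
        process.wait()
        raise
```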
closed
2025-03-21T09:32:26Z
2025-03-21T09:42:11Z
https://github.com/robotframework/robotframework/issues/5376
[ "priority: medium", "effort: small" ]
pekkaklarck
0
cvat-ai/cvat
computer-vision
8,213
LambdaFunction does not map attrs of skeleton sublabels
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)

### Steps to Reproduce
1. Create a skeleton
2. Add a confidence attribute to individual points
3. Have the nuclio function return a skeleton with keypoints that have confidence sublabels
4. In the CVAT UI, the confidence attributes will be unchanged / have the default values.

### Expected Behavior
nuclio functions should be able to return skeletons with sublabels that have attributes.

### Possible Solution
Update views.py: https://github.com/josiahls/cvat/blob/patch-1/cvat/apps/lambda_manager/views.py

```python
def update_mapping(_mapping, _model_labels, _db_labels):
    logger.debug("Starting update_mapping with _mapping: %s, _model_labels: %s, _db_labels: %s", _mapping, _model_labels, _db_labels)
    copy = deepcopy(_mapping)
    for model_label_name, mapping_item in copy.items():
        try:
            logger.debug("Processing model_label_name: %s", model_label_name)
            md_label = next(filter(lambda x: x['name'] == model_label_name, _model_labels))
            db_label = next(filter(lambda x: x.name == mapping_item['name'], _db_labels))
            mapping_item.setdefault('attributes', {})
            mapping_item['md_label'] = md_label
            mapping_item['db_label'] = db_label
            logger.debug("Mapped md_label: %s to db_label: %s", md_label, db_label)
            if md_label['type'] == 'skeleton' and db_label.type == 'skeleton':
                mapping_item['sublabels'] = update_mapping(
                    mapping_item['sublabels'],
                    md_label['sublabels'],
                    db_label.sublabels.all()
                )
                logger.debug("Updated sublabels for label: %s", model_label_name)
                # Ensure sublabel attributes are also mapped
                for sub_md_label in md_label['sublabels']:
                    sub_md_name = sub_md_label['name']
                    sub_db_label = next(filter(lambda x: x.name == sub_md_name, db_label.sublabels.all()), None)
                    if sub_db_label:
                        sublabel_attr_mapping = {
                            attr['name']: attr['name'] for attr in sub_md_label['attributes']
                        }
                        mapping_item['sublabels'][sub_md_name]['attributes'] = sublabel_attr_mapping
                        logger.debug("Mapped sublabel attributes for sublabel: %s - %s", sub_md_name, sublabel_attr_mapping)
        except Exception as e:
            logger.error("Error processing label: %s, Error: %s", model_label_name, e)
    logger.debug("Finished update_mapping with result: %s", copy)
    return copy
```

### Context
nuclio auto-labelling for skeletons that have keypoints with attributes.

### Environment
```Markdown
git log -2
commit c99b4503b3d6b2f04413cc8d5dd666bab1e40ece (HEAD -> patch-1, origin/patch-1)
Merge: d931645b1 ab636fb14
Author: josiahls <josiahls@users.noreply.github.com>
Date: Fri May 31 14:59:42 2024 -0400

    Merge branch 'cvat-ai:develop' into patch-1

commit ab636fb1455a49bb820ee697c394b9dc82d66830 (origin/develop, origin/HEAD)
Author: Boris Sekachev <boris.sekachev@yandex.ru>
Date: Fri May 31 10:38:44 2024 +0300

    Squashed `zoom:image` and `send:exception` client events (#7953)
```
closed
2024-07-23T20:00:51Z
2024-08-08T08:39:14Z
https://github.com/cvat-ai/cvat/issues/8213
[ "bug" ]
josiahls
1
InstaPy/InstaPy
automation
6,579
TypeError: 'module' object is not callable
py bot.py Traceback (most recent call last): File "C:\Users\****\OneDrive\Desktop\bot.py", line 3, in <module> session = instapy (username="****", password="********") TypeError: 'module' object is not callable. I'm not sure what is not working; help wanted. A sketch of the expected usage follows.
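For context, the module `instapy` is not callable, but the `InstaPy` class inside it is. A sketch of the documented usage (credentials are placeholders):

```python
from instapy import InstaPy

session = InstaPy(username="your_username", password="your_password")
session.login()
```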
open
2022-04-10T01:50:58Z
2022-04-10T01:50:58Z
https://github.com/InstaPy/InstaPy/issues/6579
[]
rzrv
0
autokey/autokey
automation
834
Review and update pip-requirements.txt
### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland? Xorg ### Has this issue already been reported? - [X] I have searched through the existing issues. ### Is this a question rather than an issue? - [X] This is not a question. ### What type of issue is this? Enhancement ### Choose one or more terms that describe this issue: - [ ] autokey triggers - [ ] autokey-gtk - [ ] autokey-qt - [ ] beta - [ ] bug - [ ] critical - [X] development - [ ] documentation - [ ] enhancement - [ ] installation/configuration - [ ] phrase expansion - [ ] scripting - [X] technical debt - [ ] user interface ### Other terms that describe this issue if not provided above: _No response_ ### Which Linux distribution did you use? _No response_ ### Which AutoKey GUI did you use? None ### Which AutoKey version did you use? _No response_ ### How did you install AutoKey? _No response_ ### Can you briefly describe the issue? The contents of the [pip-requirements.txt](https://github.com/autokey/autokey/blob/master/pip-requirements.txt) needs to be reviewed and updated. ### Can the issue be reproduced? Always ### What are the steps to reproduce the issue? 1. Examine the [pip-requirements.txt](https://github.com/autokey/autokey/blob/beta/pip-requirements.txt) file on the **beta** branch. 2. Examine the [pip-requirements.txt](https://github.com/autokey/autokey/blob/develop/pip-requirements.txt) file on the **develop** branch. 3. Examine the [pip-requirements.txt](https://github.com/autokey/autokey/blob/master/pip-requirements.txt) file on the **master** branch. ### What should have happened? The contents should be current. ### What actually happened? It's possible the contents are outdated. ### Do you have screenshots? _No response_ ### Can you provide the output of the AutoKey command? _No response_ ### Anything else? The [PyPI](https://pypi.org) page may be useful for searching for each of the libraries/modules listed in the **pip-requirements.txt** file on each branch to find out if any libraries/modules need to be added or removed.
open
2023-04-06T20:40:59Z
2023-05-06T17:22:58Z
https://github.com/autokey/autokey/issues/834
[ "installation/configuration" ]
Elliria
2
CTFd/CTFd
flask
2,193
CTFd Function Question.
hello sir. I would like to ask a second question. --- I'm trying to run CTFd with Docker on a VM with 8 cores and 16GB of RAM. I'd like to modify the options to improve CTFd's performance, but a guide for this doesn't seem to exist. Can you give me some guidance on improving performance? --- I'm using docker challenge mode with the ultimate library for CTFd (this: https://github.com/andyjsmith/CTFd-Docker-Plugin). It works very well, but I have one problem when displaying Docker info in CTFd. Docker information is exposed via elements of `<span class='connection-info'>` within the CTFd template (core/templates/challenge.html). <img width="482" alt="스크린샷 2022-09-29 오전 10 24 20" src="https://user-images.githubusercontent.com/50125695/192917478-90add1b6-1965-4e67-8567-7c66a301e2f2.png"> But with this element you have to hit the `Get Connection info` button every time you open a challenge; it doesn't seem to work dynamically. Is there any solution for this, sir? What I have in mind is clicking the `Get Connection info` button only once, the first time the docker challenge is opened, with the connection information displayed instead of the button while the docker container is open.
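On the performance question, a hedged pointer: if I remember correctly, CTFd's official Docker entrypoint reads a `WORKERS` environment variable that sets the gunicorn worker count, so raising it (e.g. `WORKERS=8` on an 8-core VM) in your compose file's `environment:` section is the usual first knob to turn; a database and cache on adequate hardware matter more beyond that.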
closed
2022-09-29T01:28:11Z
2022-10-09T00:26:31Z
https://github.com/CTFd/CTFd/issues/2193
[]
dhje0ng
0
mlflow/mlflow
machine-learning
14,807
[BUG] Description editor doesn't support dark mode
### MLflow version
2.20.4.dev0

### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: mac
- **Python version**: 3.9
- **yarn version, if running the dev UI**: 1.22

### Describe the problem
<img width="660" alt="Image" src="https://github.com/user-attachments/assets/384e2c05-a989-451b-89a1-ad2937407a3f" />
- The description editor doesn't support dark mode

### Steps to reproduce the bug
- Turn on dark mode
- Open the description for editing
- The editor view doesn't support dark mode

### Code to generate data required to reproduce the bug
_No response_

### Is the console panel in DevTools showing errors relevant to the bug?
_No response_

### Does the network panel in DevTools contain failed requests relevant to the bug?
_No response_
closed
2025-03-03T10:56:24Z
2025-03-04T06:49:00Z
https://github.com/mlflow/mlflow/issues/14807
[ "bug", "area/uiux" ]
Gumichocopengin8
2
pydantic/logfire
pydantic
651
logfire with distributed package like CLI
### Question Hello, I would like to use logfire to collect logs from a distributed pip package that acts as a CLI. The issue is that, for now, the only easy method I see to log into my logfire project would be to share the write_token with everyone through the pip package. Is there a way you would recommend going about that differently? An authentication system could be put in place, but then I would need to be able to create write_tokens programmatically for each new user; is that something you facilitate? Or do you have other suggestions?
closed
2024-12-06T08:02:24Z
2024-12-16T13:51:07Z
https://github.com/pydantic/logfire/issues/651
[ "Question" ]
grll
2
pandas-dev/pandas
pandas
60,343
BUG (string): construction of Series / Index fails from dict keys when "str" dtype is specified explicitly
When not specifying a dtype (inferring the type), construction of `Index` or `Series` from dict keys goes fine:

```python
>>> pd.options.future.infer_string = True
>>> d = {"a": 1, "b": 2}
>>> pd.Index(d.keys())
Index(['a', 'b'], dtype='str')
```

But if you explicitly specify the dtype, then it fails:

```python
>>> pd.Index(d.keys(), dtype="str")
...
File ~/scipy/repos/pandas/pandas/core/arrays/string_arrow.py:206, in ArrowStringArray._from_sequence(cls, scalars, dtype, copy)
    203     return cls(pc.cast(scalars, pa.large_string()))
    205 # convert non-na-likes to str
--> 206 result = lib.ensure_string_array(scalars, copy=copy)
    207 return cls(pa.array(result, type=pa.large_string(), from_pandas=True))

File lib.pyx:727, in pandas._libs.lib.ensure_string_array()

File lib.pyx:822, in pandas._libs.lib.ensure_string_array()

ValueError: Buffer has wrong number of dimensions (expected 1, got 0)
```

The reason is that at that point we pass the data directly to the dtype's array `_from_sequence` instead of first pre-processing the data into a numpy array, and `_from_sequence` calling `ensure_string_array` directly doesn't seem to be able to handle dict keys (although we do call `np.asarray(..)` inside `ensure_string_array`, so not entirely sure what is going wrong)
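A workaround, for what it's worth, is to materialize the keys into a list first, which avoids handing the raw `dict_keys` view to `_from_sequence` (a sketch, not a fix for the underlying bug):

```python
>>> pd.Index(list(d.keys()), dtype="str")
Index(['a', 'b'], dtype='str')
```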
closed
2024-11-17T08:31:05Z
2025-01-26T11:29:27Z
https://github.com/pandas-dev/pandas/issues/60343
[ "Bug", "Strings", "Constructors" ]
jorisvandenbossche
9
ymcui/Chinese-BERT-wwm
nlp
76
Questions about the RoBERTa-wwm-ext-large model
Hello! When using the RoBERTa-wwm-ext-large model, I found that the parameters of the MLM head seem to be missing (predicting a masked character in a sentence gives almost random output). Is it true that this layer's parameters are missing? Could you release a version of RoBERTa-wwm-ext-large with these parameters added?
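A sketch for checking the MLM head from Python, assuming the Hugging Face hub mirror of this checkpoint (`hfl/chinese-roberta-wwm-ext-large`); if the head weights are missing, these predictions come out essentially random:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="hfl/chinese-roberta-wwm-ext-large")
for candidate in fill_mask("今天天气很[MASK]。"):
    print(candidate["token_str"], candidate["score"])
```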
closed
2019-11-19T08:03:26Z
2020-12-29T08:11:56Z
https://github.com/ymcui/Chinese-BERT-wwm/issues/76
[]
AnShengqiang
5
davidteather/TikTok-Api
api
360
ERROR: No matching distribution found for playwright (from TikTokApi)
**Describe the error** My English is not good, as you can see below.

**The buggy code**
pip install TikTokApi --upgrade

**Error Trace (if any)**
```
Collecting TikTokApi
  Using cached TikTokApi-3.7.9.tar.gz (55 kB)
Requirement already satisfied, skipping upgrade: requests in ./anaconda3/envs/aispider/lib/python3.6/site-packages (from TikTokApi) (2.23.0)
ERROR: Could not find a version that satisfies the requirement playwright (from TikTokApi) (from versions: none)
ERROR: No matching distribution found for playwright (from TikTokApi)
```

**Desktop (please complete the following information):**
- OS: CentOS Linux release 7.4.1708 (Core)
- TikTokApi Version 3.7.9

**Additional context**
try1: pip install playwright -- upgrade
```
ERROR: Could not find a version that satisfies the requirement playwright (from versions: none)
ERROR: No matching distribution found for playwright
```
try2: npm i -D playwright
```
> playwright@1.6.1 install /search/odin/meng/node_modules/playwright
> node install.js
(node:121287) UnhandledPromiseRejectionWarning: Error: EACCES: permission denied, mkdir '/search/odin/meng/.cache/ms-playwright'
(Use `node --trace-warnings ...` to show where the warning was created)
(node:121287) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:121287) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
npm WARN saveError ENOENT: no such file or directory, open '/search/odin/meng/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/search/odin/meng/package.json'
npm WARN ws@7.4.0 requires a peer of bufferutil@^4.0.1 but none is installed. You must install peer dependencies yourself.
npm WARN ws@7.4.0 requires a peer of utf-8-validate@^5.0.2 but none is installed. You must install peer dependencies yourself.
npm WARN meng No description
npm WARN meng No repository field.
npm WARN meng No README data
npm WARN meng No license field.
+ playwright@1.6.1
added 37 packages from 83 contributors in 3.811s
3 packages are looking for funding
  run `npm fund` for details
```
try 3: pip install TikTokApi-pyppeteer

I wrote
```
from TikTokApi import TikTokApi

api = TikTokApi(debug=True)

results = 10
trending = api.trending(count=results)

for tiktok in trending:
    # Prints the id of the tiktok
    print(tiktok['id'])

print(len(trending))
```
into test.py. When I run `python test.py`, I get this output:
```
Class initialized
[W:pyppeteer.chromium_downloader] start chromium download. Download may take a few minutes.
The following error occurred, but it was ignored. 'browser' object has no attribute 'timezone_name'
[W:pyppeteer.chromium_downloader] start chromium download. Download may take a few minutes.
Traceback (most recent call last):
  File "test.py", line 6, in <module>
    trending = api.trending(count=results)
  File "/search/odin/meng/anaconda3/envs/aispider/lib/python3.6/site-packages/TikTokApi/tiktok.py", line 222, in trending
    res = self.getData(b, **kwargs)
  File "/search/odin/meng/anaconda3/envs/aispider/lib/python3.6/site-packages/TikTokApi/tiktok.py", line 107, in getData
    query = {"verifyFp": b.verifyFp, "did": b.did, "_signature": b.signature}
AttributeError: 'browser' object has no attribute 'signature'
```
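A note on the likely root cause, judging from the paths in the trace: the environment is Python 3.6 (`.../envs/aispider/lib/python3.6/...`), and playwright only publishes packages for Python 3.7+, which is exactly what pip's `(from versions: none)` message looks like. If that's right, creating a fresh conda env on a newer Python (e.g. `conda create -n tiktok python=3.8`) and reinstalling `TikTokApi` there should let the playwright dependency resolve.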
closed
2020-11-16T08:11:36Z
2020-11-17T05:42:50Z
https://github.com/davidteather/TikTok-Api/issues/360
[ "installation_help" ]
mengguiyouziyi
8
holoviz/panel
jupyter
7,333
Tabulator selection with `pd.MultiIndex` is not working in Panel 1.5.1
Worked in Panel 1.5, Without doing a `git bisect` likely it is https://github.com/holoviz/panel/pull/7304 Two examples: ``` python import panel as pn import pandas as pd pn.extension("tabulator") index = pd.MultiIndex.from_tuples([(i, j) for i in range(10) for j in range(10)], names=["A", "B"]) df = pd.DataFrame(index=index, data={"C": range(100)}) w = pn.widgets.Tabulator(df, pagination="remote") w.on_click(lambda x: print(x)) w.servable() ``` <details> <summary>Traceback </summary> ``` python-traceback message: Message 'PATCH-DOC' content: {'events': [{'kind': 'MessageSent', 'msg_type': 'bokeh_event', 'msg_data': {'type': 'event', 'name': 'cell-click', 'values': {'type': 'map', 'entries': [['model', {'id': 'p1220'}], ['column', 'B'], ['row', 9]]}}}]} error: ValueError('The Tabulator widget expects the provided `value` Pandas DataFrame to have unique indexes, in particular when it has to deal with click or edit events. Found this duplicate index: 9') Traceback (most recent call last): File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/server/protocol_handler.py", line 94, in handle work = await handler(message, connection) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/server/session.py", line 94, in _needs_document_lock_wrapper result = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/server/session.py", line 286, in _handle_patch message.apply_to_document(self.document, self) File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/protocol/messages/patch_doc.py", line 104, in apply_to_document invoke_with_curdoc(doc, lambda: doc.apply_json_patch(self.payload, setter=setter)) File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/document/callbacks.py", line 453, in invoke_with_curdoc return f() ^^^ File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/protocol/messages/patch_doc.py", line 104, in <lambda> invoke_with_curdoc(doc, lambda: doc.apply_json_patch(self.payload, setter=setter)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/document/document.py", line 391, in apply_json_patch DocumentPatchedEvent.handle_event(self, event, setter) File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/document/events.py", line 244, in handle_event event_cls._handle_event(doc, event) File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/document/events.py", line 279, in _handle_event cb(event.msg_data) File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/document/callbacks.py", line 400, in trigger_event model._trigger_event(event) File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/util/callback_manager.py", line 111, in _trigger_event self.document.callbacks.notify_event(cast(Model, self), event, invoke) File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/document/callbacks.py", line 262, in notify_event invoke_with_curdoc(doc, callback_invoker) File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/document/callbacks.py", line 453, in invoke_with_curdoc return f() ^^^ File "/home/shh/miniconda3/envs/holoviz/lib/python3.12/site-packages/bokeh/util/callback_manager.py", line 107, in invoke cast(EventCallbackWithEvent, callback)(event) File 
"/home/shh/projects/holoviz/repos/panel/panel/reactive.py", line 572, in _server_event self._comm_event(doc, event) File "/home/shh/projects/holoviz/repos/panel/panel/reactive.py", line 559, in _comm_event state._handle_exception(e) File "/home/shh/projects/holoviz/repos/panel/panel/io/state.py", line 468, in _handle_exception raise exception File "/home/shh/projects/holoviz/repos/panel/panel/reactive.py", line 557, in _comm_event self._process_bokeh_event(doc, event) File "/home/shh/projects/holoviz/repos/panel/panel/reactive.py", line 494, in _process_bokeh_event self._process_event(event) File "/home/shh/projects/holoviz/repos/panel/panel/widgets/tables.py", line 1343, in _process_event self._validate_iloc(idx, iloc) File "/home/shh/projects/holoviz/repos/panel/panel/widgets/tables.py", line 1300, in _validate_iloc raise ValueError( ValueError: The Tabulator widget expects the provided `value` Pandas DataFrame to have unique indexes, in particular when it has to deal with click or edit events. Found this duplicate index: 9 ``` </details> ``` python import panel as pn import pandas as pd pn.extension("tabulator") index = pd.MultiIndex.from_tuples([(i, j) for i in range(10) for j in range(10)], names=["A", "B"]) df = pd.DataFrame(index=index, data={"C": range(100)}) w = pn.widgets.Tabulator(df, pagination="remote", selectable='checkbox') w.servable() b = pn.widgets.Button(name='Print selected rows', on_click=lambda x: print(w.selection)) b.servable() ``` Will just return an empty list
closed
2024-09-27T15:13:19Z
2024-09-30T10:34:50Z
https://github.com/holoviz/panel/issues/7333
[ "component: tabulator" ]
hoxbro
0
ivy-llc/ivy
tensorflow
27,991
Fix Frontend Failing Test: paddle - tensor.paddle.Tensor.any
To-do List: https://github.com/unifyai/ivy/issues/27500
closed
2024-01-22T16:50:43Z
2024-01-23T15:29:19Z
https://github.com/ivy-llc/ivy/issues/27991
[ "Sub Task" ]
Sai-Suraj-27
0
pytest-dev/pytest-xdist
pytest
230
loadscope and flake8 don't work together with one node
I'm not sure if this is a loadscope bug or a problem with the implementation of `pytest-flake8` but when I try to run `--flake8 --dist=loadscope -n 1` everything hangs: ``` $ pytest -v --dist=loadscope -n 1 --flake8 --fulltrace tests/test_register.py ============================= test session starts ============================== platform darwin -- Python 3.6.0, pytest-3.2.0, py-1.4.34, pluggy-0.4.0 -- /Users/timj/work/lsstsw3/miniconda/bin/python cachedir: .cache rootdir: /Volumes/G-RAID with Thunderbolt/transient/lsstsw3/build/pipe_tasks, inifile: setup.cfg plugins: session2file-0.1.9, forked-0.3.dev0+g1dd93f6.d20170815, xdist-1.19.2.dev0+g459d52e.d20170815, flake8-0.8.1 [gw0] darwin Python 3.6.0 cwd: /Volumes/G-RAID with Thunderbolt/transient/lsstsw3/build/pipe_tasks [gw0] Python 3.6.0 |Continuum Analytics, Inc.| (default, Dec 23 2016, 13:19:00) -- [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] gw0 [5] scheduling tests via LoadScopeScheduling ^C !!!!!!!!!!!!!!!!!!!!!!!!!!!!!! KeyboardInterrupt !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ``` The following commands all work fine: ``` $ pytest -v --dist=loadscope -n 1 --fulltrace tests/test_register.py $ pytest -v --dist=loadscope -n 2 --flake8 --fulltrace tests/test_register.py $ pytest -v -n 1 --flake8 --fulltrace tests/test_register.py $ pytest -v -n 2 --flake8 --fulltrace tests/test_register.py ``` leading to the conclusion that everything hangs only when one subprocess is used and loadscope is enabled and flake8 testing is enabled. ``` $ pytest -v --dist=loadscope -n 2 --flake8 --fulltrace tests/test_register.py ============================= test session starts ============================== platform darwin -- Python 3.6.0, pytest-3.2.0, py-1.4.34, pluggy-0.4.0 -- /Users/timj/work/lsstsw3/miniconda/bin/python cachedir: .cache rootdir: /Volumes/G-RAID with Thunderbolt/transient/lsstsw3/build/pipe_tasks, inifile: setup.cfg plugins: session2file-0.1.9, forked-0.3.dev0+g1dd93f6.d20170815, xdist-1.19.2.dev0+g459d52e.d20170815, flake8-0.8.1 [gw0] darwin Python 3.6.0 cwd: /Volumes/G-RAID with Thunderbolt/transient/lsstsw3/build/pipe_tasks [gw1] darwin Python 3.6.0 cwd: /Volumes/G-RAID with Thunderbolt/transient/lsstsw3/build/pipe_tasks [gw0] Python 3.6.0 |Continuum Analytics, Inc.| (default, Dec 23 2016, 13:19:00) -- [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] [gw1] Python 3.6.0 |Continuum Analytics, Inc.| (default, Dec 23 2016, 13:19:00) -- [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] gw0 [5] / gw1 [5] scheduling tests via LoadScopeScheduling tests/test_register.py::RegisterTestCase::testRegister [gw1] PASSED tests/test_register.py::RegisterTestCase::testRegister tests/test_register.py::RegisterTestCase::testRejection [gw1] PASSED tests/test_register.py::RegisterTestCase::testRejection tests/test_register.py::MyMemoryTestCase::testFileDescriptorLeaks <- ../../../../../../Users/timj/work/lsstsw3/stack/DarwinX86/utils/13.0-9-gf29e843+2/python/lsst/utils/tests.py [gw1] PASSED tests/test_register.py::MyMemoryTestCase::testFileDescriptorLeaks <- ../../../../../../Users/timj/work/lsstsw3/stack/DarwinX86/utils/13.0-9-gf29e843+2/python/lsst/utils/tests.py tests/test_register.py::MyMemoryTestCase::testLeaks <- ../../../../../../Users/timj/work/lsstsw3/stack/DarwinX86/utils/13.0-9-gf29e843+2/python/lsst/utils/tests.py tests/test_register.py [gw1] PASSED tests/test_register.py::MyMemoryTestCase::testLeaks <- ../../../../../../Users/timj/work/lsstsw3/stack/DarwinX86/utils/13.0-9-gf29e843+2/python/lsst/utils/tests.py [gw0] FAILED 
tests/test_register.py ``` (the failure is simply that this particular file has a flake8 issue). I am wondering if the `pytest-flake8` plugin is not correctly returning scoping information to the scheduler in a similar way to it not working properly with `pytest-randomly` (tholo/pytest-flake8#26), even so, how come `-n 2` is fine? When it hangs this is the stack trace: ``` scheduling tests via LoadScopeScheduling ^C !!!!!!!!!!!!!!!!!!!!!!!!!!!!!! KeyboardInterrupt !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! config = <_pytest.config.Config object at 0x10ae57b00> doit = <function _main at 0x10ae2f378> def wrap_session(config, doit): """Skeleton command line program""" session = Session(config) session.exitstatus = EXIT_OK initstate = 0 try: try: config._do_configure() initstate = 1 config.hook.pytest_sessionstart(session=session) initstate = 2 > session.exitstatus = doit(config, session) or 0 ../../stack/DarwinX86/pytest/3.2.0/lib/python/pytest-3.2.0-py3.6.egg/_pytest/main.py:110: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ config = <_pytest.config.Config object at 0x10ae57b00> session = <Session 'pipe_tasks'> def _main(config, session): """ default command line protocol for initialization, session, running tests and reporting. """ config.hook.pytest_collection(session=session) > config.hook.pytest_runtestloop(session=session) ../../stack/DarwinX86/pytest/3.2.0/lib/python/pytest-3.2.0-py3.6.egg/_pytest/main.py:146: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_HookCaller 'pytest_runtestloop'> kwargs = {'__multicall__': <_MultiCall 0 results, 1 meths, kwargs={'session': <Session 'pipe_tasks'>, '__multicall__': <_MultiCall 0 results, 1 meths, kwargs={...}>}>, 'session': <Session 'pipe_tasks'>} def __call__(self, **kwargs): assert not self.is_historic() > return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) ../../stack/DarwinX86/pytest/3.2.0/lib/python/pytest-3.2.0-py3.6.egg/_pytest/vendored_packages/pluggy.py:745: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_pytest.config.PytestPluginManager object at 0x10ac03d68> hook = <_HookCaller 'pytest_runtestloop'> methods = [<_pytest.vendored_packages.pluggy.HookImpl object at 0x10ae666d8>] kwargs = {'__multicall__': <_MultiCall 0 results, 1 meths, kwargs={'session': <Session 'pipe_tasks'>, '__multicall__': <_MultiCall 0 results, 1 meths, kwargs={...}>}>, 'session': <Session 'pipe_tasks'>} def _hookexec(self, hook, methods, kwargs): # called from all hookcaller instances. 
# enable_tracing will set its own wrapping function at self._inner_hookexec > return self._inner_hookexec(hook, methods, kwargs) ../../stack/DarwinX86/pytest/3.2.0/lib/python/pytest-3.2.0-py3.6.egg/_pytest/vendored_packages/pluggy.py:339: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ hook = <_HookCaller 'pytest_runtestloop'> methods = [<_pytest.vendored_packages.pluggy.HookImpl object at 0x10ae666d8>] kwargs = {'__multicall__': <_MultiCall 0 results, 1 meths, kwargs={'session': <Session 'pipe_tasks'>, '__multicall__': <_MultiCall 0 results, 1 meths, kwargs={...}>}>, 'session': <Session 'pipe_tasks'>} self._inner_hookexec = lambda hook, methods, kwargs: \ > _MultiCall(methods, kwargs, hook.spec_opts).execute() ../../stack/DarwinX86/pytest/3.2.0/lib/python/pytest-3.2.0-py3.6.egg/_pytest/vendored_packages/pluggy.py:334: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_MultiCall 0 results, 1 meths, kwargs={'session': <Session 'pipe_tasks'>, '__multicall__': <_MultiCall 0 results, 1 meths, kwargs={...}>}> def execute(self): all_kwargs = self.kwargs self.results = results = [] firstresult = self.specopts.get("firstresult") while self.hook_impls: hook_impl = self.hook_impls.pop() try: args = [all_kwargs[argname] for argname in hook_impl.argnames] except KeyError: for argname in hook_impl.argnames: if argname not in all_kwargs: raise HookCallError( "hook call must provide argument %r" % (argname,)) if hook_impl.hookwrapper: return _wrapped_call(hook_impl.function(*args), self.execute) > res = hook_impl.function(*args) ../../stack/DarwinX86/pytest/3.2.0/lib/python/pytest-3.2.0-py3.6.egg/_pytest/vendored_packages/pluggy.py:614: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <xdist.dsession.DSession object at 0x10b21a1d0> def pytest_runtestloop(self): self.sched = self.config.hook.pytest_xdist_make_scheduler( config=self.config, log=self.log ) assert self.sched is not None self.shouldstop = False while not self.session_finished: > self.loop_once() /Users/timj/work/lsstsw3/stack/DarwinX86/pytest_xdist/1.19.1/lib/python/pytest_xdist-1.19.2.dev0+g459d52e.d20170815-py3.6.egg/xdist/dsession.py:114: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <xdist.dsession.DSession object at 0x10b21a1d0> def loop_once(self): """Process one callback from one of the slaves.""" while 1: try: > eventcall = self.queue.get(timeout=2.0) /Users/timj/work/lsstsw3/stack/DarwinX86/pytest_xdist/1.19.1/lib/python/pytest_xdist-1.19.2.dev0+g459d52e.d20170815-py3.6.egg/xdist/dsession.py:124: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <queue.Queue object at 0x10b21a240>, block = True, timeout = 2.0 def get(self, block=True, timeout=None): '''Remove and return an item from the queue. If optional args 'block' is true and 'timeout' is None (the default), block if necessary until an item is available. If 'timeout' is a non-negative number, it blocks at most 'timeout' seconds and raises the Empty exception if no item was available within that time. Otherwise ('block' is false), return an item if one is immediately available, else raise the Empty exception ('timeout' is ignored in that case). 
''' with self.not_empty: if not block: if not self._qsize(): raise Empty elif timeout is None: while not self._qsize(): self.not_empty.wait() elif timeout < 0: raise ValueError("'timeout' must be a non-negative number") else: endtime = time() + timeout while not self._qsize(): remaining = endtime - time() if remaining <= 0.0: raise Empty > self.not_empty.wait(remaining) /Users/timj/work/lsstsw3/miniconda/lib/python3.6/queue.py:173: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <Condition(<unlocked _thread.lock object at 0x10b1ac4e0>, 0)> timeout = 1.999992159951944 def wait(self, timeout=None): """Wait until notified or until a timeout occurs. If the calling thread has not acquired the lock when this method is called, a RuntimeError is raised. This method releases the underlying lock, and then blocks until it is awakened by a notify() or notify_all() call for the same condition variable in another thread, or until the optional timeout occurs. Once awakened or timed out, it re-acquires the lock and returns. When the timeout argument is present and not None, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). When the underlying lock is an RLock, it is not released using its release() method, since this may not actually unlock the lock when it was acquired multiple times recursively. Instead, an internal interface of the RLock class is used, which really unlocks it even when it has been recursively acquired several times. Another internal interface is then used to restore the recursion level when the lock is reacquired. """ if not self._is_owned(): raise RuntimeError("cannot wait on un-acquired lock") waiter = _allocate_lock() waiter.acquire() self._waiters.append(waiter) saved_state = self._release_save() gotit = False try: # restore state no matter what (e.g., KeyboardInterrupt) if timeout is None: waiter.acquire() gotit = True else: if timeout > 0: > gotit = waiter.acquire(True, timeout) E KeyboardInterrupt /Users/timj/work/lsstsw3/miniconda/lib/python3.6/threading.py:299: KeyboardInterrupt ======================== no tests ran in 65.08 seconds ========================= ```
open
2017-09-03T15:59:06Z
2017-09-03T16:28:02Z
https://github.com/pytest-dev/pytest-xdist/issues/230
[ "bug" ]
timj
0
KaiyangZhou/deep-person-reid
computer-vision
111
Need to download datasets?
Hello, I have a simple question on how to use the pretrained models on a given dataset or my own one. I got the weights "se_resnet50_fc512_market_xent.pth.tar" and ran:

python train_imgreid_xent.py -t market1501 -s market1501 --height 256 --width 128 --test-batch-size 100 --evaluate -a se_resnet50_fc512 --load-weights se_resnet50_fc512_market_xent\se_resnet50_fc512_market_xent.pth.tar --save-dir log\eval-resnet50 --gpu-devices 0

But I get the following error, as if the dataset was not found.

Traceback (most recent call last):
  File "train_imgreid_xent.py", line 257, in <module>
    main()
  File "train_imgreid_xent.py", line 53, in main
    dm = ImageDataManager(use_gpu, **image_dataset_kwargs(args))
  File "C:\Users\César Bouyssi\Desktop\FitnessPlus\re_Identification\deep-person-reid\torchreid\data_manager.py", line 68, in __init__
    cuhk03_classic_split=cuhk03_classic_split, market1501_500k=market1501_500k
  File "C:\Users\César Bouyssi\Desktop\FitnessPlus\re_Identification\deep-person-reid\torchreid\datasets\__init__.py", line 47, in init_imgreid_dataset
    return __imgreid_factory[name](**kwargs)
  File "C:\Users\César Bouyssi\Desktop\FitnessPlus\re_Identification\deep-person-reid\torchreid\datasets\market1501.py", line 45, in __init__
    self._check_before_run()
  File "C:\Users\César Bouyssi\Desktop\FitnessPlus\re_Identification\deep-person-reid\torchreid\datasets\market1501.py", line 68, in _check_before_run
    raise RuntimeError('"{}" is not available'.format(self.dataset_dir))
RuntimeError: "data\market1501" is not available

Should I download it manually and put it in the folder "data\market1501"? And what if I want to try it on a dataset of my own? Looking forward to your answer. Thanks
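If it helps: judging from the `_check_before_run` in `torchreid/datasets/market1501.py`, the datasets are not downloaded automatically, so you would fetch Market-1501 yourself and extract it so that `data/market1501` contains the `bounding_box_train`, `bounding_box_test` and `query` folders (folder names assumed from the usual Market-1501 layout). For your own data, the factory in `torchreid/datasets/__init__.py` suggests registering a new dataset class alongside the existing ones.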
closed
2019-02-07T11:26:55Z
2019-02-12T19:57:05Z
https://github.com/KaiyangZhou/deep-person-reid/issues/111
[]
cbouyssi
6
jmcnamara/XlsxWriter
pandas
573
In-memory workbooks are not compressed on close()
Using XlsxWriter with the `in_memory` option results in files which are not properly compressed. The issue is caused by XlsxWriter constructing its own `ZipInfo` objects and using `ZipFile.writestr()` to write them without specifying the compression type. It's not documented, but from the Python `zipfile` module's source I was able to tell that in this case `ZipInfo` files do not inherit the `ZipFile`'s compression and default to `ZIP_STORED`. As the `ZipFile` is instantiated with `ZIP_DEFLATED` I assume this is not intentional. I've verified the problem occurs with XlsxWriter 1.1.1 (and the latest development version too) both with Python version 2.7.15 and 3.6.6.

A short script that can be used to reproduce the problem:

```python
from xlsxwriter import Workbook

with open("sample.xlsx", "wb") as f, Workbook(f, {"in_memory": True}) as wb:
    ws = wb.add_worksheet()
    ws.write_number("A1", 0)
```

I've first noticed the size differences when comparing generated files with ones created by Excel. But it can be verified with the `zipinfo` command:

```
$ zipinfo sample.xlsx
Archive: sample.xlsx
Zip file size: 13503 bytes, number of entries: 9
?rw------- 2.0 unx 516 b- stor 80-Jan-01 00:00 xl/worksheets/sheet1.xml
?rw------- 2.0 unx 550 b- stor 80-Jan-01 00:00 xl/workbook.xml
?rw------- 2.0 unx 784 b- stor 80-Jan-01 00:00 docProps/app.xml
?rw------- 2.0 unx 592 b- stor 80-Jan-01 00:00 docProps/core.xml
?rw------- 2.0 unx 1031 b- stor 80-Jan-01 00:00 [Content_Types].xml
?rw------- 2.0 unx 867 b- stor 80-Jan-01 00:00 xl/styles.xml
?rw------- 2.0 unx 6994 b- stor 80-Jan-01 00:00 xl/theme/theme1.xml
?rw------- 2.0 unx 587 b- stor 80-Jan-01 00:00 _rels/.rels
?rw------- 2.0 unx 556 b- stor 80-Jan-01 00:00 xl/_rels/workbook.xml.rels
9 files, 12477 bytes uncompressed, 12477 bytes compressed: 0.0%
```

The expected compression level is greater than 0%; also, the files are listed with `STORE` compression.
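A minimal sketch of the underlying zipfile behavior and the one-line fix: when a `ZipInfo` is passed to `writestr()`, its `compress_type` must be set explicitly, otherwise the entry defaults to `ZIP_STORED` regardless of the `ZipFile`'s own compression setting.

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    info = zipfile.ZipInfo("xl/workbook.xml")
    info.compress_type = zipfile.ZIP_DEFLATED  # without this line: ZIP_STORED
    zf.writestr(info, b"<workbook/>" * 100)

print(len(buf.getvalue()))  # noticeably smaller with the line above
```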
closed
2018-10-18T15:41:21Z
2018-10-20T14:14:04Z
https://github.com/jmcnamara/XlsxWriter/issues/573
[ "bug" ]
theag3nt
3
nonebot/nonebot2
fastapi
3,238
Plugin: nonebot_plugin_dingzhen
### PyPI project name
nonebot_plugin_dingzhen
### Plugin import package name
nonebot_plugin_dingzhen
### Tags
[{"label":"丁真","color":"#ff337b"},{"label":"语音合成","color":"#1942ff"},{"label":"QQ","color":"#07ede9"}]
### Plugin configuration
```dotenv
```
### Plugin test
- [ ] To re-run the plugin test, tick the checkbox on the left
closed
2025-01-05T05:36:36Z
2025-01-05T06:12:47Z
https://github.com/nonebot/nonebot2/issues/3238
[ "Plugin", "Publish" ]
Pochinki98
1
huggingface/peft
pytorch
1,504
Feature Request: Integrate Lora+/different learning rates for adapter matrices A and B
### Feature request [LoRA+: Efficient Low Rank Adaptation of Large Models](https://arxiv.org/abs/2402.12354) builds on LoRA "by setting different learning rates for the LoRA adapter matrices A and B with a well-chosen ratio", which the authors argue provides performance improvements and speedups at no increase in computational cost. Code is available at https://github.com/nikhil-ghosh-berkeley/loraplus. ### Motivation If it is true that using a ratio between the learning rates provides improvements at no cost, then having this as a new default could be broadly helpful. ### Your contribution Just wanted to point to https://github.com/nikhil-ghosh-berkeley/loraplus/blob/main/loraplus.py#L31 and https://github.com/nikhil-ghosh-berkeley/loraplus/blob/main/loraplus.py#L131, which seem to provide pretty much drop-in replacements for 🤗 Trainer. They explain usage in the README also, at https://github.com/nikhil-ghosh-berkeley/loraplus?tab=readme-ov-file#usage, showing how to create a Trainer, or an Optimizer, and the new hyperparameters introduced.
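A minimal sketch of the core idea (not the loraplus package's API): build optimizer parameter groups where the LoRA `B` matrices get a learning rate `loraplus_lr_ratio` times larger than the `A` matrices. The `lora_A`/`lora_B` substrings follow PEFT's parameter naming; the base LR and ratio values are placeholders.

```python
import torch

def loraplus_param_groups(model, base_lr=1e-4, loraplus_lr_ratio=16.0):
    a_params, b_params, other_params = [], [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        if "lora_B" in name:
            b_params.append(p)      # gets the boosted learning rate
        elif "lora_A" in name:
            a_params.append(p)
        else:
            other_params.append(p)  # e.g. modules_to_save
    groups = [g for g in (
        {"params": a_params, "lr": base_lr},
        {"params": b_params, "lr": base_lr * loraplus_lr_ratio},
        {"params": other_params, "lr": base_lr},
    ) if g["params"]]
    return torch.optim.AdamW(groups)
```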
closed
2024-02-22T17:49:57Z
2024-07-29T10:51:50Z
https://github.com/huggingface/peft/issues/1504
[]
cleong110
22
deepfakes/faceswap
deep-learning
853
ValueError: Error initializing Aligner
**Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. Download Releases from https://github.com/deepfakes/faceswap/releases/download/v1.0.0/faceswap_setup_x64.exe 2. Install 3. Open FaceSwap and click Extract 4. Get this error **Screenshots** ![image](https://user-images.githubusercontent.com/41262596/64065476-55216e80-cc38-11e9-934c-9313803d4a8e.png) **Expected behavior** The output files should appear in the selected folder **Desktop (please complete the following information):** - OS: [Windows 10.17763] - Browser [Chrome] - Version [76.0.3809.132] **Additional context** 08/31/2019 21:33:52 MainProcess MainThread logger log_setup INFO Log level set to: INFO 08/31/2019 21:33:54 MainProcess MainThread extract __init__ INFO Output Directory: C:\Users\ppepp\Downloads\test 08/31/2019 21:33:54 MainProcess MainThread fsmedia check_input_folder INFO Input Video: C:\Users\ppepp\Desktop\test.mp4 08/31/2019 21:33:54 MainProcess MainThread plugin_loader _import INFO Loading Detect from S3Fd plugin... 08/31/2019 21:33:54 MainProcess MainThread plugin_loader _import INFO Loading Align from Fan plugin... 08/31/2019 21:33:54 MainProcess MainThread pipeline set_parallel_processing WARNING Not enough free VRAM for parallel processing. Switching to serial 08/31/2019 21:33:54 MainProcess MainThread extract process INFO Starting, this may take a while... 08/31/2019 21:33:57 Detector.run MainThread s3fd initialize INFO Initializing S3FD Detector... 08/31/2019 21:33:57 Detector.run MainThread deprecation_wrapper __getattr__ WARNING From C:\Users\ppepp\faceswap\plugins\extract\detect\s3fd.py:142: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.\n 08/31/2019 21:33:57 Detector.run MainThread deprecation_wrapper __getattr__ WARNING From C:\Users\ppepp\faceswap\plugins\extract\detect\s3fd.py:143: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.\n 08/31/2019 21:33:58 Detector.run MainThread deprecation_wrapper __getattr__ WARNING From C:\Users\ppepp\faceswap\plugins\extract\detect\s3fd.py:165: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n 08/31/2019 21:33:58 Detector.run MainThread deprecation_wrapper __getattr__ WARNING From C:\Users\ppepp\faceswap\plugins\extract\detect\s3fd.py:172: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n 08/31/2019 21:34:07 Detector.run MainThread s3fd initialize WARNING You are running s3fd with 1743MB VRAM. The model is optimized for 4224MB VRAM. Detection should still run but you may get warnings/errors 08/31/2019 21:34:07 Detector.run MainThread s3fd initialize INFO Initialized S3FD Detector. 08/31/2019 21:36:04 Aligner.run MainThread fan initialize INFO Initializing Face Alignment Network... 08/31/2019 21:36:04 Aligner.run MainThread deprecation_wrapper __getattr__ WARNING From C:\Users\ppepp\faceswap\plugins\extract\align\fan.py:206: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.\n 08/31/2019 21:36:04 Aligner.run MainThread deprecation_wrapper __getattr__ WARNING From C:\Users\ppepp\faceswap\plugins\extract\align\fan.py:207: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.\n 08/31/2019 21:36:08 Aligner.run MainThread deprecation_wrapper __getattr__ WARNING From C:\Users\ppepp\faceswap\plugins\extract\align\fan.py:219: The name tf.ConfigProto is deprecated. 
Please use tf.compat.v1.ConfigProto instead.\n 08/31/2019 21:36:08 Aligner.run MainThread deprecation_wrapper __getattr__ WARNING From C:\Users\ppepp\faceswap\plugins\extract\align\fan.py:221: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n 08/31/2019 21:36:16 Aligner.run MainThread _base run ERROR Caught exception in child process: 11128 08/31/2019 21:36:16 Aligner.run MainThread _base run ERROR Traceback: Traceback (most recent call last): File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call return fn(*args) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[{{node fa/convolution}}]] [[fa/transpose_647/_3]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[{{node fa/convolution}}]] 0 successful operations. 0 derived errors ignored. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\ppepp\faceswap\plugins\extract\align\_base.py", line 112, in run self.align(*args, **kwargs) File "C:\Users\ppepp\faceswap\plugins\extract\align\_base.py", line 127, in align self.initialize(*args, **kwargs) File "C:\Users\ppepp\faceswap\plugins\extract\align\fan.py", line 47, in initialize raise err File "C:\Users\ppepp\faceswap\plugins\extract\align\fan.py", line 41, in initialize self.model = FAN(self.model_path, ratio=tf_ratio) File "C:\Users\ppepp\faceswap\plugins\extract\align\fan.py", line 199, in __init__ self.session = self.set_session(ratio) File "C:\Users\ppepp\faceswap\plugins\extract\align\fan.py", line 227, in set_session session.run(self.output, feed_dict={self.input: placeholder}) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 950, in run run_metadata_ptr) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1173, in _run feed_dict_tensor, options, run_metadata) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _do_run run_metadata) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1370, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node fa/convolution (defined at C:\Users\ppepp\faceswap\plugins\extract\align\fan.py:211) ]] [[fa/transpose_647/_3]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. 
[[node fa/convolution (defined at C:\Users\ppepp\faceswap\plugins\extract\align\fan.py:211) ]] 0 successful operations. 0 derived errors ignored. Original stack trace for 'fa/convolution': File "<string>", line 1, in <module> File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\multiprocessing\spawn.py", line 105, in spawn_main exitcode = _main(fd) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\multiprocessing\spawn.py", line 118, in _main return self._bootstrap() File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\multiprocessing\process.py", line 258, in _bootstrap self.run() File "C:\Users\ppepp\faceswap\lib\multithreading.py", line 362, in run super().run() File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\multiprocessing\process.py", line 93, in run self._target(*self._args, **self._kwargs) File "C:\Users\ppepp\faceswap\plugins\extract\align\_base.py", line 112, in run self.align(*args, **kwargs) File "C:\Users\ppepp\faceswap\plugins\extract\align\_base.py", line 127, in align self.initialize(*args, **kwargs) File "C:\Users\ppepp\faceswap\plugins\extract\align\fan.py", line 41, in initialize self.model = FAN(self.model_path, ratio=tf_ratio) File "C:\Users\ppepp\faceswap\plugins\extract\align\fan.py", line 196, in __init__ self.graph = self.load_graph() File "C:\Users\ppepp\faceswap\plugins\extract\align\fan.py", line 211, in load_graph self.tf.import_graph_def(graph_def, name="fa") File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func return func(*args, **kwargs) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\importer.py", line 443, in import_graph_def _ProcessNewOps(graph) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\importer.py", line 236, in _ProcessNewOps for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3751, in _add_new_tf_operations for c_op in c_api_util.new_tf_operations(self) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3751, in <listcomp> for c_op in c_api_util.new_tf_operations(self) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3641, in _create_op_from_tf_operation ret = Operation(c_op, self) File "C:\Users\ppepp\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__ self._traceback = tf_stack.extract_stack() 08/31/2019 21:36:18 MainProcess MainThread cli execute_script ERROR Got Exception on main handler: Traceback (most recent call last): File "C:\Users\ppepp\faceswap\lib\cli.py", line 125, in execute_script process.process() File "C:\Users\ppepp\faceswap\scripts\extract.py", line 62, in process self.run_extraction() File "C:\Users\ppepp\faceswap\scripts\extract.py", line 189, in run_extraction self.extractor.launch() File "C:\Users\ppepp\faceswap\plugins\extract\pipeline.py", line 178, in launch self.launch_aligner() File "C:\Users\ppepp\faceswap\plugins\extract\pipeline.py", line 206, in launch_aligner raise ValueError("Error initializing Aligner") ValueError: Error initializing Aligner 08/31/2019 21:36:18 MainProcess MainThread cli execute_script CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\ppepp\faceswap\crash_report.2019.08.31.213618860456.log'. 
Please verify you are running the latest version of faceswap before reporting
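For anyone hitting the same "Failed to get convolution algorithm" cuDNN failure: it is often caused by TensorFlow grabbing all GPU memory up front. A minimal TF 1.x sketch of the usual mitigation (a generic workaround, not faceswap's own fix) is:

```python
import tensorflow as tf

# Let TensorFlow allocate GPU memory on demand instead of all at once,
# which commonly avoids cuDNN initialization failures.
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)
```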
closed
2019-08-31T14:43:42Z
2019-12-02T00:54:44Z
https://github.com/deepfakes/faceswap/issues/853
[]
peppapighs
8
Kanaries/pygwalker
plotly
644
Support for pygwalker in Reflex
Reflex (https://reflex.dev) is an up-and-coming Python framework for web apps with 20k stars on GitHub. I would love to see an integration so we can use pygwalker in Reflex apps.
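For illustration, a rough sketch of what such an integration might look like - assuming pygwalker's `to_html()` output can be embedded through Reflex's `rx.html` component (untested; the CSV path and page name are placeholders):

```python
import pandas as pd
import pygwalker as pyg
import reflex as rx

# Render the pygwalker exploration UI to a raw HTML string
df = pd.read_csv("data.csv")  # placeholder dataset
walker_html = pyg.to_html(df)

def index() -> rx.Component:
    # Embed the generated HTML directly into a Reflex page
    return rx.html(walker_html)

app = rx.App()
app.add_page(index)
```

A first-class wrapper component would presumably handle sizing and state syncing better than raw HTML embedding.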
open
2024-10-15T17:28:38Z
2024-10-20T14:22:10Z
https://github.com/Kanaries/pygwalker/issues/644
[ "enhancement" ]
tgberkeley
0
Significant-Gravitas/AutoGPT
python
9,025
Marketplace - agent page - update font of description header
### Describe your issue. <img width="598" alt="Screenshot 2024-12-17 at 18 53 43" src="https://github.com/user-attachments/assets/32fe7be0-ef3e-400b-a750-372381a8d177" /> Please update the description header font to the "p-ui-medium" style from the typography sheet, i.e. the following: font-family: Geist; font-size: 16px; font-weight: 500; line-height: 24px; text-align: left; text-underline-position: from-font; text-decoration-skip-ink: none; **Update the color to:** background: var(--neutral-800, #262626);
closed
2024-12-17T10:54:58Z
2024-12-20T13:46:28Z
https://github.com/Significant-Gravitas/AutoGPT/issues/9025
[ "good first issue", "UI", "platform/frontend" ]
ograce1421
0
holoviz/panel
jupyter
7,506
AttributeError: 'Button' object has no attribute 'deepcopy' - when running Component Gallery > Widgets > Button in Panel 1.5.4 documentation
#### ALL software version info The error appears in the Panel documentation when trying to run the code examples in https://panel.holoviz.org/reference/widgets/Button.html via the "Run cell" button. At this time the most recent version of Panel is 1.5.4. <details> <summary>Software Version Info</summary> ```plaintext Chrome 130.0.6723.117 (64-bit) Windows 10 Home 22H2 (64-bit) ``` </details> #### Description of expected behavior and the observed behavior I wanted to see how the Button widget would behave with a Python backend behind it, so on the [Component Gallery > Widgets > Button](https://panel.holoviz.org/reference/widgets/Button.html) page in the Panel v1.5.4 user documentation I clicked the "Run cell" button in the first cell, then clicked it again when asked "Click again to proceed". Most cells executed just fine and showed an "Executed successfully" info message. The last three cells, however, instead showed the following error message: "AttributeError: 'Button' object has no attribute 'deepcopy'" #### Example code cell in the documentation that exhibits this behavior ```python pn.Row( pn.widgets.Button(icon='alert-triangle-filled', button_type='warning', name='WARNING'), pn.widgets.Button(icon='bug', button_type='danger', name='Error') ) ``` #### Stack traceback and/or browser JavaScript console output ``` AttributeError: 'Button' object has no attribute 'deepcopy' ``` #### Screenshots or screencasts of the bug in action ![panel_documentation-component_gallery-button-error_running_cells-screenshot.png](https://github.com/user-attachments/assets/0cf82282-3e49-4111-b485-5797eb354cf3) - [ ] I may be interested in making a pull request to address this
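To confirm the widgets themselves are fine outside the documentation's in-page (Pyodide) runtime, the same snippet can be served locally - a sketch with an illustrative filename, run as `panel serve buttons.py`:

```python
import panel as pn

pn.extension()

# Same two buttons as the failing docs cell
row = pn.Row(
    pn.widgets.Button(icon='alert-triangle-filled', button_type='warning', name='WARNING'),
    pn.widgets.Button(icon='bug', button_type='danger', name='Error'),
)
row.servable()
```

If this renders correctly, the `deepcopy` error is likely specific to how the docs runtime clones components between cells rather than to the widgets themselves.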
open
2024-11-19T22:33:45Z
2024-11-19T22:33:45Z
https://github.com/holoviz/panel/issues/7506
[]
jnareb
0
sqlalchemy/alembic
sqlalchemy
1,151
Comparing computed fields throws a warning
I'm not sure if this is a bug or a mistake on my part, but I wanted to share it here because I couldn't find a proper solution. I have a computed `tsvector` field in my database, as you can see in the sections of the migration file and model shared below. ```python def upgrade(): op.create_table( "a_table", sa.Column("Column1", sa.String(), nullable=False), sa.Column("Column2", sa.ARRAY(sa.String(), dimensions=1), nullable=True), sa.Column("Column3", sa.ARRAY(sa.String(), dimensions=1), nullable=True), sa.Column("Column4", sa.ARRAY(sa.String(), dimensions=1), nullable=True), sa.Column("Column5", sa.ARRAY(sa.String(), dimensions=1), nullable=True), sa.Column("Column6", sa.ARRAY(sa.String(), dimensions=1), nullable=True), ) op.execute( """ CREATE OR REPLACE FUNCTION immutable_array_to_string(text[], text) RETURNS text as $$ SELECT array_to_string($1, $2); $$ LANGUAGE sql IMMUTABLE""" ) op.execute( """ ALTER TABLE a_table ADD COLUMN new_column tsvector GENERATED ALWAYS AS (to_tsvector('english', Column1 || ' ' || immutable_array_to_string(coalesce(Column2, '{}'), ' ') || ' ' || immutable_array_to_string(coalesce(Column3, '{}'), ' ') || ' ' || immutable_array_to_string(coalesce(Column4, '{}'), ' ') || ' ' || immutable_array_to_string(coalesce(Column5, '{}'), ' ') || ' ' || immutable_array_to_string(coalesce(Column6, '{}'), ' ') ) ) STORED """ ) ``` ```python class TSVector(TypeDecorator): impl = TSVECTOR class ATable(Base): Column1 = Column(String, nullable=False, index=True) Column2 = Column(ARRAY(String, dimensions=1), nullable=True, index=True) Column3 = Column(ARRAY(String, dimensions=1), nullable=True, index=True) Column4 = Column(ARRAY(String, dimensions=1), nullable=True, index=True) Column5 = Column(ARRAY(String, dimensions=1), nullable=True, index=True) Column6 = Column(ARRAY(String, dimensions=1), nullable=True, index=True) new_column = Column( TSVector(), Computed( """to_tsvector('english', Column1 || ' ' || immutable_array_to_string(coalesce(Column2, '{}'), ' ') || ' ' || immutable_array_to_string(coalesce(Column3, '{}'), ' ') || ' ' || immutable_array_to_string(coalesce(Column4, '{}'), ' ') || ' ' || immutable_array_to_string(coalesce(Column5, '{}'), ' ') || ' ' || immutable_array_to_string(coalesce(Column6, '{}'), ' ') )""", persisted=True, ), nullable=True, index=True, ) ``` I do not have any problems during the upgrade or downgrade, and the system works properly. But whenever I run an autogenerate comparison I see the following warning. ``` alembic/autogenerate/compare.py:1090: UserWarning: Computed default on a_table.new_column cannot be modified ``` I can see the need for a warning when there is a real change, but I have no idea why the comparison thinks there are changes. **Versions** - OS: Monterey 12.1 - Python: 3.10.6 - Alembic: 1.8.1 - SQLAlchemy: 1.4.35 - Database: Postgres 14 Thank you!
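For what it's worth, one way to silence this kind of spurious diff while the comparison issue is investigated is to exclude the generated column from autogenerate via `include_object` in `env.py` - a workaround sketch only, not a fix for the underlying comparison:

```python
# Fragment of env.py (inside run_migrations_online); connection and
# target_metadata follow the standard Alembic template.
def include_object(obj, name, type_, reflected, compare_to):
    # Skip the generated tsvector column during autogenerate comparison
    if type_ == "column" and name == "new_column":
        return False
    return True

context.configure(
    connection=connection,
    target_metadata=target_metadata,
    include_object=include_object,
)
```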
closed
2023-01-10T16:51:37Z
2024-07-18T12:22:39Z
https://github.com/sqlalchemy/alembic/issues/1151
[ "bug", "autogenerate - defaults", "autogenerate - detection", "postgresql", "cant reproduce" ]
hevalhazalkurt
4
gunthercox/ChatterBot
machine-learning
2,080
Failed to install ChatterBot (v1.1.0) through PyCharm.
I am trying to install ChatterBot (v1.1.0) through PyCharm (Community Edition 2019.3.3 x64); the installed Python version is v3.8.6. **D:\Python_project>python --version Python 3.8.6** The installation is failing with the following error -- Collecting ChatterBot==1.1.0 Using cached ChatterBot-1.1.0-py2.py3-none-any.whl (63 kB) Requirement already satisfied: pytz in d:\software\python3.8.6-64\lib\site-packages (from ChatterBot==1.1.0) (2020.4) Requirement already satisfied: nltk<4.0,>=3.2 in d:\software\python3.8.6-64\lib\site-packages (from ChatterBot==1.1.0) (3.5) Requirement already satisfied: mathparse<0.2,>=0.1 in d:\software\python3.8.6-64\lib\site-packages (from ChatterBot==1.1.0) (0.1.2) Requirement already satisfied: pint>=0.8.1 in d:\software\python3.8.6-64\lib\site-packages (from ChatterBot==1.1.0) (0.16.1) Requirement already satisfied: regex in d:\software\python3.8.6-64\lib\site-packages (from nltk<4.0,>=3.2->ChatterBot==1.1.0) (2020.11.13) Requirement already satisfied: joblib in d:\software\python3.8.6-64\lib\site-packages (from nltk<4.0,>=3.2->ChatterBot==1.1.0) (0.17.0) Requirement already satisfied: tqdm in d:\software\python3.8.6-64\lib\site-packages (from nltk<4.0,>=3.2->ChatterBot==1.1.0) (4.54.1) Requirement already satisfied: click in d:\software\python3.8.6-64\lib\site-packages (from nltk<4.0,>=3.2->ChatterBot==1.1.0) (7.1.2) Requirement already satisfied: packaging in d:\software\python3.8.6-64\lib\site-packages (from pint>=0.8.1->ChatterBot==1.1.0) (20.7) Requirement already satisfied: pyparsing>=2.0.2 in d:\software\python3.8.6-64\lib\site-packages (from packaging->pint>=0.8.1->ChatterBot==1.1.0) (2.4.7) Collecting python-dateutil<2.9,>=2.8 Using cached python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB) Requirement already satisfied: six>=1.5 in d:\software\python3.8.6-64\lib\site-packages (from python-dateutil<2.9,>=2.8->ChatterBot==1.1.0) (1.15.0) Collecting pyyaml<5.4,>=5.3 Using cached PyYAML-5.3.1-cp38-cp38-win_amd64.whl (219 kB) Collecting spacy<2.2,>=2.1 Using cached spacy-2.1.9.tar.gz (30.7 MB) Installing build dependencies: started Installing build dependencies: still running... Installing build dependencies: finished with status 'error' DEPRECATION: The -b/--build/--build-dir/--build-directory option is deprecated and has no effect anymore. pip 21.1 will remove support for this functionality. A possible replacement is use the TMPDIR/TEMP/TMP environment variable, possibly combined with --no-clean. You can find discussion regarding this at https://github.com/pypa/pip/issues/8333. 
ERROR: Command errored out with exit status 1: command: 'D:\Python_project\venv\Scripts\python.exe' 'D:\software\python3.8.6-64\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\bgh25154\AppData\Local\Temp\pip-build-env-_yz43l6b\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'wheel>0.32.0,<0.33.0' Cython 'cymem>=2.0.2,<2.1.0' 'preshed>=2.0.1,<2.1.0' 'murmurhash>=0.28.0,<1.1.0' 'thinc>=7.0.8,<7.1.0' cwd: None Complete output (286 lines): Collecting cymem<2.1.0,>=2.0.2 Using cached cymem-2.0.4-cp38-cp38-win_amd64.whl (36 kB) Collecting Cython Using cached Cython-0.29.21-cp38-cp38-win_amd64.whl (1.7 MB) Collecting murmurhash<1.1.0,>=0.28.0 Using cached murmurhash-1.0.4-cp38-cp38-win_amd64.whl (21 kB) Collecting preshed<2.1.0,>=2.0.1 Using cached preshed-2.0.1.tar.gz (113 kB) Collecting setuptools Using cached setuptools-50.3.2-py3-none-any.whl (785 kB) Collecting thinc<7.1.0,>=7.0.8 Using cached thinc-7.0.8.tar.gz (1.9 MB) Collecting wheel<0.33.0,>0.32.0 Using cached wheel-0.32.3-py2.py3-none-any.whl (21 kB) Collecting blis<0.3.0,>=0.2.1 Using cached blis-0.2.4.tar.gz (1.5 MB) Collecting numpy>=1.7.0 Using cached numpy-1.19.4-cp38-cp38-win_amd64.whl (13.0 MB) Collecting plac<1.0.0,>=0.9.6 Using cached plac-0.9.6-py2.py3-none-any.whl (20 kB) Collecting srsly<1.1.0,>=0.0.6 Using cached srsly-1.0.4-cp38-cp38-win_amd64.whl (287 kB) Collecting tqdm<5.0.0,>=4.10.0 Using cached tqdm-4.54.1-py2.py3-none-any.whl (69 kB) Collecting wasabi<1.1.0,>=0.0.9 Using cached wasabi-0.8.0-py3-none-any.whl (23 kB) Building wheels for collected packages: preshed, thinc, blis Building wheel for preshed (setup.py): started Building wheel for preshed (setup.py): finished with status 'error' ERROR: Command errored out with exit status 1: command: 'D:\Python_project\venv\Scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\bgh25154\\AppData\\Local\\Temp\\pip-install-p0xvvzvq\\preshed_ce345c272e0544caae37430a8c27ad64\\setup.py'"'"'; __file__='"'"'C:\\Users\\bgh25154\\AppData\\Local\\Temp\\pip-install-p0xvvzvq\\preshed_ce345c272e0544caae37430a8c27ad64\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\bgh25154\AppData\Local\Temp\pip-wheel-_my3konf' cwd: C:\Users\bgh25154\AppData\Local\Temp\pip-install-p0xvvzvq\preshed_ce345c272e0544caae37430a8c27ad64\ Complete output (23 lines): running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.8 creating build\lib.win-amd64-3.8\preshed copying preshed\about.py -> build\lib.win-amd64-3.8\preshed copying preshed\__init__.py -> build\lib.win-amd64-3.8\preshed creating build\lib.win-amd64-3.8\preshed\tests copying preshed\tests\test_counter.py -> build\lib.win-amd64-3.8\preshed\tests copying preshed\tests\test_hashing.py -> build\lib.win-amd64-3.8\preshed\tests copying preshed\tests\test_pop.py -> build\lib.win-amd64-3.8\preshed\tests copying preshed\tests\__init__.py -> build\lib.win-amd64-3.8\preshed\tests copying preshed\counter.pyx -> build\lib.win-amd64-3.8\preshed copying preshed\maps.pyx -> build\lib.win-amd64-3.8\preshed copying preshed\counter.pxd -> build\lib.win-amd64-3.8\preshed copying preshed\maps.pxd -> build\lib.win-amd64-3.8\preshed copying preshed\__init__.pxd -> build\lib.win-amd64-3.8\preshed warning: build_py: byte-compiling is disabled, 
skipping. running build_ext building 'preshed.maps' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Failed building wheel for preshed Running setup.py clean for preshed Building wheel for thinc (setup.py): started Building wheel for thinc (setup.py): finished with status 'error' ERROR: Command errored out with exit status 1: command: 'D:\Python_project\venv\Scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\bgh25154\\AppData\\Local\\Temp\\pip-install-p0xvvzvq\\thinc_bd51eae97b87487ba0b69752d2e1c682\\setup.py'"'"'; __file__='"'"'C:\\Users\\bgh25154\\AppData\\Local\\Temp\\pip-install-p0xvvzvq\\thinc_bd51eae97b87487ba0b69752d2e1c682\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\bgh25154\AppData\Local\Temp\pip-wheel-8wvak1rd' cwd: C:\Users\bgh25154\AppData\Local\Temp\pip-install-p0xvvzvq\thinc_bd51eae97b87487ba0b69752d2e1c682\ Complete output (168 lines): running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.8 creating build\lib.win-amd64-3.8\thinc copying thinc\about.py -> build\lib.win-amd64-3.8\thinc copying thinc\api.py -> build\lib.win-amd64-3.8\thinc copying thinc\check.py -> build\lib.win-amd64-3.8\thinc copying thinc\compat.py -> build\lib.win-amd64-3.8\thinc copying thinc\describe.py -> build\lib.win-amd64-3.8\thinc copying thinc\exceptions.py -> build\lib.win-amd64-3.8\thinc copying thinc\i2v.py -> build\lib.win-amd64-3.8\thinc copying thinc\loss.py -> build\lib.win-amd64-3.8\thinc copying thinc\misc.py -> build\lib.win-amd64-3.8\thinc copying thinc\rates.py -> build\lib.win-amd64-3.8\thinc copying thinc\t2t.py -> build\lib.win-amd64-3.8\thinc copying thinc\t2v.py -> build\lib.win-amd64-3.8\thinc copying thinc\v2v.py -> build\lib.win-amd64-3.8\thinc copying thinc\__init__.py -> build\lib.win-amd64-3.8\thinc creating build\lib.win-amd64-3.8\thinc\tests copying thinc\tests\conftest.py -> build\lib.win-amd64-3.8\thinc\tests copying thinc\tests\strategies.py -> build\lib.win-amd64-3.8\thinc\tests copying thinc\tests\test_api_funcs.py -> build\lib.win-amd64-3.8\thinc\tests copying thinc\tests\test_util.py -> build\lib.win-amd64-3.8\thinc\tests copying thinc\tests\util.py -> build\lib.win-amd64-3.8\thinc\tests copying thinc\tests\__init__.py -> build\lib.win-amd64-3.8\thinc\tests creating build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_about.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_affine.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_beam_search.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_check_exceptions.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_difference.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_feature_extracter.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_hash_embed.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_imports.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_linear.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_loss.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying 
thinc\tests\unit\test_mem.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_model.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_ops.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_pickle.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_pooling.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_pytorch_wrapper.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_rates.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\test_rnn.py -> build\lib.win-amd64-3.8\thinc\tests\unit copying thinc\tests\unit\__init__.py -> build\lib.win-amd64-3.8\thinc\tests\unit creating build\lib.win-amd64-3.8\thinc\tests\integration copying thinc\tests\integration\test_affine_learns.py -> build\lib.win-amd64-3.8\thinc\tests\integration copying thinc\tests\integration\test_basic_tagger.py -> build\lib.win-amd64-3.8\thinc\tests\integration copying thinc\tests\integration\test_batch_norm.py -> build\lib.win-amd64-3.8\thinc\tests\integration copying thinc\tests\integration\test_feed_forward.py -> build\lib.win-amd64-3.8\thinc\tests\integration copying thinc\tests\integration\test_mnist.py -> build\lib.win-amd64-3.8\thinc\tests\integration copying thinc\tests\integration\test_pickle.py -> build\lib.win-amd64-3.8\thinc\tests\integration copying thinc\tests\integration\test_roundtrip_bytes.py -> build\lib.win-amd64-3.8\thinc\tests\integration copying thinc\tests\integration\test_shape_check.py -> build\lib.win-amd64-3.8\thinc\tests\integration copying thinc\tests\integration\__init__.py -> build\lib.win-amd64-3.8\thinc\tests\integration creating build\lib.win-amd64-3.8\thinc\tests\linear copying thinc\tests\linear\test_avgtron.py -> build\lib.win-amd64-3.8\thinc\tests\linear copying thinc\tests\linear\test_linear.py -> build\lib.win-amd64-3.8\thinc\tests\linear copying thinc\tests\linear\test_sparse_array.py -> build\lib.win-amd64-3.8\thinc\tests\linear copying thinc\tests\linear\__init__.py -> build\lib.win-amd64-3.8\thinc\tests\linear creating build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\__init__.py -> build\lib.win-amd64-3.8\thinc\linear creating build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\mem.py -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\pooling.py -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\train.py -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\util.py -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\vec2vec.py -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\vecs2vec.py -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\vecs2vecs.py -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\_lsuv.py -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\__init__.py -> build\lib.win-amd64-3.8\thinc\neural creating build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\datasets.py -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\hpbff.py -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\load_nlp.py -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\visualizer.py -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\wrappers.py -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\__init__.py -> build\lib.win-amd64-3.8\thinc\extra creating build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\affine.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying 
thinc\neural\_classes\attention.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\batchnorm.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\convolution.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\difference.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\elu.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\embed.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\encoder_decoder.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\feature_extracter.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\feed_forward.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\function_layer.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\hash_embed.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\layernorm.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\maxout.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\model.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\multiheaded_attention.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\relu.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\resnet.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\rnn.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\selu.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\softmax.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\static_vectors.py -> build\lib.win-amd64-3.8\thinc\neural\_classes copying thinc\neural\_classes\__init__.py -> build\lib.win-amd64-3.8\thinc\neural\_classes creating build\lib.win-amd64-3.8\thinc\extra\_vendorized copying thinc\extra\_vendorized\keras_datasets.py -> build\lib.win-amd64-3.8\thinc\extra\_vendorized copying thinc\extra\_vendorized\keras_data_utils.py -> build\lib.win-amd64-3.8\thinc\extra\_vendorized copying thinc\extra\_vendorized\keras_generic_utils.py -> build\lib.win-amd64-3.8\thinc\extra\_vendorized copying thinc\extra\_vendorized\__init__.py -> build\lib.win-amd64-3.8\thinc\extra\_vendorized creating build\lib.win-amd64-3.8\thinc\extra\wrapt copying thinc\extra\wrapt\decorators.py -> build\lib.win-amd64-3.8\thinc\extra\wrapt copying thinc\extra\wrapt\importer.py -> build\lib.win-amd64-3.8\thinc\extra\wrapt copying thinc\extra\wrapt\wrappers.py -> build\lib.win-amd64-3.8\thinc\extra\wrapt copying thinc\extra\wrapt\__init__.py -> build\lib.win-amd64-3.8\thinc\extra\wrapt copying thinc\linalg.pyx -> build\lib.win-amd64-3.8\thinc copying thinc\structs.pyx -> build\lib.win-amd64-3.8\thinc copying thinc\typedefs.pyx -> build\lib.win-amd64-3.8\thinc copying thinc\cpu.pxd -> build\lib.win-amd64-3.8\thinc copying thinc\linalg.pxd -> build\lib.win-amd64-3.8\thinc copying thinc\structs.pxd -> build\lib.win-amd64-3.8\thinc copying thinc\typedefs.pxd -> build\lib.win-amd64-3.8\thinc copying thinc\__init__.pxd -> build\lib.win-amd64-3.8\thinc copying thinc\compile_time_constants.pxi -> build\lib.win-amd64-3.8\thinc copying thinc\linalg.cpp -> build\lib.win-amd64-3.8\thinc copying thinc\structs.cpp -> build\lib.win-amd64-3.8\thinc copying thinc\typedefs.cpp 
-> build\lib.win-amd64-3.8\thinc copying thinc\linear\avgtron.pyx -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\features.pyx -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\linear.pyx -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\serialize.pyx -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\sparse.pyx -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\avgtron.pxd -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\features.pxd -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\serialize.pxd -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\sparse.pxd -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\__init__.pxd -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\avgtron.cpp -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\features.cpp -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\linear.cpp -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\serialize.cpp -> build\lib.win-amd64-3.8\thinc\linear copying thinc\linear\sparse.cpp -> build\lib.win-amd64-3.8\thinc\linear copying thinc\neural\ops.pyx -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\optimizers.pyx -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\_aligned_alloc.pyx -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\cpu.pxd -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\ops.pxd -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\__init__.pxd -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\ops.cpp -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\optimizers.cpp -> build\lib.win-amd64-3.8\thinc\neural copying thinc\neural\_aligned_alloc.cpp -> build\lib.win-amd64-3.8\thinc\neural copying thinc\extra\cache.pyx -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\eg.pyx -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\mb.pyx -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\search.pyx -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\cache.pxd -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\eg.pxd -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\mb.pxd -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\search.pxd -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\__init__.pxd -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\cache.cpp -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\eg.cpp -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\mb.cpp -> build\lib.win-amd64-3.8\thinc\extra copying thinc\extra\search.cpp -> build\lib.win-amd64-3.8\thinc\extra warning: build_py: byte-compiling is disabled, skipping. running build_ext error: Microsoft Visual C++ 14.0 is required. 
Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Failed building wheel for thinc Running setup.py clean for thinc Building wheel for blis (setup.py): started Building wheel for blis (setup.py): finished with status 'error' ERROR: Command errored out with exit status 1: command: 'D:\Python_project\venv\Scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\bgh25154\\AppData\\Local\\Temp\\pip-install-p0xvvzvq\\blis_2b4c26321f554dae9c96c87ec2510fbf\\setup.py'"'"'; __file__='"'"'C:\\Users\\bgh25154\\AppData\\Local\\Temp\\pip-install-p0xvvzvq\\blis_2b4c26321f554dae9c96c87ec2510fbf\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\bgh25154\AppData\Local\Temp\pip-wheel-uu3mi8zn' cwd: C:\Users\bgh25154\AppData\Local\Temp\pip-install-p0xvvzvq\blis_2b4c26321f554dae9c96c87ec2510fbf\ Complete output (23 lines): BLIS_COMPILER? None running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.8 creating build\lib.win-amd64-3.8\blis copying blis\about.py -> build\lib.win-amd64-3.8\blis copying blis\benchmark.py -> build\lib.win-amd64-3.8\blis copying blis\__init__.py -> build\lib.win-amd64-3.8\blis creating build\lib.win-amd64-3.8\blis\tests copying blis\tests\common.py -> build\lib.win-amd64-3.8\blis\tests copying blis\tests\test_dotv.py -> build\lib.win-amd64-3.8\blis\tests copying blis\tests\test_gemm.py -> build\lib.win-amd64-3.8\blis\tests copying blis\tests\__init__.py -> build\lib.win-amd64-3.8\blis\tests copying blis\cy.pyx -> build\lib.win-amd64-3.8\blis copying blis\py.pyx -> build\lib.win-amd64-3.8\blis copying blis\cy.pxd -> build\lib.win-amd64-3.8\blis copying blis\__init__.pxd -> build\lib.win-amd64-3.8\blis warning: build_py: byte-compiling is disabled, skipping. running build_ext error: Microsoft Visual C++ 14.0 is required. 
Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Failed building wheel for blis Running setup.py clean for blis Failed to build preshed thinc blis Installing collected packages: numpy, cymem, wasabi, tqdm, srsly, preshed, plac, murmurhash, blis, wheel, thinc, setuptools, Cython Running setup.py install for preshed: started Running setup.py install for preshed: finished with status 'error' ERROR: Command errored out with exit status 1: command: 'D:\Python_project\venv\Scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\bgh25154\\AppData\\Local\\Temp\\pip-install-p0xvvzvq\\preshed_ce345c272e0544caae37430a8c27ad64\\setup.py'"'"'; __file__='"'"'C:\\Users\\bgh25154\\AppData\\Local\\Temp\\pip-install-p0xvvzvq\\preshed_ce345c272e0544caae37430a8c27ad64\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\bgh25154\AppData\Local\Temp\pip-record-0r2ojr59\install-record.txt' --single-version-externally-managed --prefix 'C:\Users\bgh25154\AppData\Local\Temp\pip-build-env-_yz43l6b\overlay' --compile --install-headers 'C:\Users\bgh25154\AppData\Local\Temp\pip-build-env-_yz43l6b\overlay\include\site\python3.8\preshed' cwd: C:\Users\bgh25154\AppData\Local\Temp\pip-install-p0xvvzvq\preshed_ce345c272e0544caae37430a8c27ad64\ Complete output (8 lines): running install running build running build_py warning: build_py: byte-compiling is disabled, skipping. running build_ext building 'preshed.maps' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'D:\Python_project\venv\Scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\bgh25154\\AppData\\Local\\Temp\\pip-install-p0xvvzvq\\preshed_ce345c272e0544caae37430a8c27ad64\\setup.py'"'"'; __file__='"'"'C:\\Users\\bgh25154\\AppData\\Local\\Temp\\pip-install-p0xvvzvq\\preshed_ce345c272e0544caae37430a8c27ad64\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\bgh25154\AppData\Local\Temp\pip-record-0r2ojr59\install-record.txt' --single-version-externally-managed --prefix 'C:\Users\bgh25154\AppData\Local\Temp\pip-build-env-_yz43l6b\overlay' --compile --install-headers 'C:\Users\bgh25154\AppData\Local\Temp\pip-build-env-_yz43l6b\overlay\include\site\python3.8\preshed' Check the logs for full command output. ---------------------------------------- ERROR: Command errored out with exit status 1: 'D:\Python_project\venv\Scripts\python.exe' 'D:\software\python3.8.6-64\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\bgh25154\AppData\Local\Temp\pip-build-env-_yz43l6b\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'wheel>0.32.0,<0.33.0' Cython 'cymem>=2.0.2,<2.1.0' 'preshed>=2.0.1,<2.1.0' 'murmurhash>=0.28.0,<1.1.0' 'thinc>=7.0.8,<7.1.0' Check the logs for full command output.
closed
2020-12-05T18:47:45Z
2025-02-17T19:23:19Z
https://github.com/gunthercox/ChatterBot/issues/2080
[]
krishnendudas1979
4
Lightning-AI/LitServe
api
426
Allow users to define custom health check logic
<!-- ⚠️ BEFORE SUBMITTING, READ: We're excited for your request! However, here are things we are not interested in: - Decorators. - Doing the same thing in multiple ways. - Adding more layers of abstraction... tree-depth should be 1 at most. - Features that over-engineer or complicate the code internals. - Linters, and crud that complicates projects. --> ---- ## 🚀 Feature <!-- A clear and concise description of the feature proposal --> Right now the health check endpoint returns "ok" once all the processes have started. There could be scenarios where the LitAPI depends on another API/service (such as checking whether Ollama has pulled the model in the background), and the health check needs to account for the liveness of that service too. It would be great to allow users to provide custom health-check logic: ```py class MyAPI(ls.LitAPI): def health(self): if SOMETHING: return A else: return B ``` ### Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too... --> ### Pitch <!-- A clear and concise description of what you want to happen. --> ### Alternatives <!-- A clear and concise description of any alternative solutions or features you've considered, if any. --> ### Additional context <!-- Add any other context or screenshots about the feature request here. -->
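As a concrete illustration of the kind of check this hook would enable - assuming the proposed `health()` override lands on `LitAPI`, and with the Ollama URL used purely as an example:

```python
import requests
import litserve as ls

class MyAPI(ls.LitAPI):
    def setup(self, device):
        self.model = ...  # load or connect to the model as usual

    def health(self) -> bool:
        # Report healthy only if the upstream Ollama server also responds
        try:
            r = requests.get("http://localhost:11434/api/tags", timeout=2)
            return r.status_code == 200
        except requests.RequestException:
            return False
```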
closed
2025-02-12T15:59:06Z
2025-02-19T12:26:03Z
https://github.com/Lightning-AI/LitServe/issues/426
[ "enhancement" ]
aniketmaurya
1
iMerica/dj-rest-auth
rest-api
411
Subclass of RegisterSerializer not saving email when overriding class variable like email = serializers.EmailField(required=False)
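For context, a minimal reproduction of the title plus a possible workaround (a guess, untested) - explicitly carrying the optional email through `get_cleaned_data()`:

```python
from dj_rest_auth.registration.serializers import RegisterSerializer
from rest_framework import serializers

class CustomRegisterSerializer(RegisterSerializer):
    # Overriding the field like this appears to drop the email on save
    email = serializers.EmailField(required=False)

    def get_cleaned_data(self):
        data = super().get_cleaned_data()
        # Make sure the optional email still reaches allauth's user creation
        data["email"] = self.validated_data.get("email", "")
        return data
```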
open
2022-06-08T09:48:29Z
2022-06-08T09:48:29Z
https://github.com/iMerica/dj-rest-auth/issues/411
[]
mateoKutnjak
0
unit8co/darts
data-science
2,186
[BUG] multi_models=False not working for XGBOOST
**Describe the bug** Prediction with future covariates does not work for XGBOOST/CATBOOST/LIGHTGBM when `multi_models=False`. **To Reproduce** Attached is a file with sample data - I spent time going through darts.utils.timeseries_generation to generate dummy data but was not very successful - so I am attaching the sample file and a code snippet. ```python data = pd.read_csv('Book1.csv') forecast_xgboost = pd.DataFrame() train = data.iloc[:len(data)-8] predict = data.iloc[len(data)-8:len(data)] series = darts.TimeSeries.from_series(train['y']) timeseries_flag_past = darts.TimeSeries.from_series(train['weekday']) timeseries_flag_future = darts.TimeSeries.from_series(predict['weekday']) # works fine: multi_models=True model_XGB = XGBModel(lags_future_covariates=[0], output_chunk_length=7, multi_models=True) XGB = model_XGB.fit(series, future_covariates=timeseries_flag_past) pred_xgb = XGB.predict(n=7, future_covariates=timeseries_flag_future) # does not work: multi_models=False model_XGB = XGBModel(lags_future_covariates=[0], output_chunk_length=7, multi_models=False) XGB = model_XGB.fit(series, future_covariates=timeseries_flag_past) pred_xgb = XGB.predict(n=7, future_covariates=timeseries_flag_future) ``` **Expected behavior** In the predict function XGBOOST does not accept the future covariates and throws an error: The corresponding future_covariate of the series at index 0 isn't sufficiently long. Given horizon `n=7`, `min(lags_future_covariates)=0`, `max(lags_future_covariates)=0` and `output_chunk_length=7`, the future_covariate has to range from 95 until 101 (inclusive), but it ranges only from 101 until 108. **System (please complete the following information):** Python version 3.11.5, darts version 0.25.0 [Book1.csv](https://github.com/unit8co/darts/files/14016837/Book1.csv) [Book1.csv](https://github.com/unit8co/darts/files/14016840/Book1.csv) **Additional context** NBEATS - future_covariates: the NBEATS API reference does not list the covariates, yet the fit function still accepts them while the predict function fails: NBEATS_14 = model_nbeats_14.fit(series, past_covariates=timeseries_flag_past) pred_nbeats_14 = NBEATS_14.predict(n=7, past_covariates=timeseries_flag_future)
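Reading the error literally, with `multi_models=False` the model also needs covariate values from *before* the forecast start, so one workaround sketch (untested; it continues the snippet above) is to pass a covariate series that spans the full history plus the horizon:

```python
# Build the weekday covariate over the whole dataset, so darts can slice
# whatever window it needs for either multi_models setting.
full_flag = darts.TimeSeries.from_series(data['weekday'])

model_XGB = XGBModel(lags_future_covariates=[0], output_chunk_length=7, multi_models=False)
XGB = model_XGB.fit(series, future_covariates=full_flag)
pred_xgb = XGB.predict(n=7, future_covariates=full_flag)
```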
closed
2024-01-23T00:33:06Z
2024-02-22T14:41:49Z
https://github.com/unit8co/darts/issues/2186
[ "question" ]
suswamin
1
ansible/awx
automation
15,088
Build of 24.2.0 image failed
### Please confirm the following - [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html). - [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates. - [X] I understand that AWX is open source software provided for free and that I might not receive a timely response. - [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.) ### Bug Summary Build and push of the 24.2.0 release image failed 2 hours ago: https://github.com/ansible/awx/actions/runs/8619697870/job/23626024956#step:8:147 And operator version 2.15.0 is already depending on that image tag: https://github.com/ansible/awx-operator/releases/tag/2.15.0 Do we have an ETA on when the build job will be retriggered? Thanks ### AWX version 24.2.0 ### Select the relevant components - [ ] UI - [ ] UI (tech preview) - [ ] API - [ ] Docs - [X] Collection - [ ] CLI - [X] Other ### Installation method kubernetes ### Modifications no ### Ansible version _No response_ ### Operating system _No response_ ### Web browser _No response_ ### Steps to reproduce Deploy AWX using the AWX operator ### Expected results Image can be pulled ### Actual results Image can't be pulled since tag doesn't exist ### Additional information _No response_
closed
2024-04-09T19:56:03Z
2024-04-10T16:34:12Z
https://github.com/ansible/awx/issues/15088
[ "type:bug", "component:awx_collection", "needs_triage", "community" ]
Nachichuri
9