QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
74,927,662
| 13,219,123
|
Pandas groupby, resample, return NaN not 0
|
<p>I have the following dataframe:</p>
<pre><code>data = {"timestamp": ["2022-12-15 22:00:00", "2022-12-15 22:00:30", "2022-12-15 22:00:47",
"2022-12-15 22:00:03", "2022-12-15 22:00:30", "2022-12-15 22:00:43",
"2022-12-15 22:00:10", "2022-12-15 22:00:34", "2022-12-15 22:00:59"],
"ID": ["A","A","A",
"B", "B", "B",
"C", "C", "C"],
"value": [11, 0, 0,
7, 5, 7,
0, 3.4, 3.4]
}
df_test = pd.DataFrame(data, columns=["timestamp", "ID", "value"])
df_test["timestamp"] = pd.to_datetime(df_test["timestamp"])
</code></pre>
<p>I want to create a new dataframe which for every ID has a row for every second from "2022-12-15 22:00:00" to "2022-12-15 22:01:00" in the same dataframe. So the end dataframe will have 180 rows (60 for each ID, so each rows is one second in the timeinterval.). For the rows which match the <code>timestamp</code> in <code>df_test</code> I want the <code>value</code> and otherwise I want a <code>NaN</code> value.</p>
<p>I have tried using the following code:</p>
<pre><code>df_resampled = df_test.groupby("ID").resample("S", on="timestamp").sum().reset_index()
</code></pre>
<p>But this have the problem that for rows which do not match, 0 is returned instead of <code>NaN</code>.</p>
|
<python><pandas>
|
2022-12-27 09:31:42
| 3
| 353
|
andKaae
|
74,927,506
| 4,953,759
|
how to get active REPL block in PyScript and copy whatever code written in it?
|
<p>I have multiple code blocks in pyscript repl which were created dynamically on button click. I want to know which code block is active. I know library adds <code>cm-activeLine</code> class when line is active but that changes when I click some other button to get the class..
I want to change order of code blocks on button click. what I am thinking is I will copy all code inside editor and Swap it with other code block but I dont know how to get the code inside editor. library has no documentation.</p>
|
<python><react-typescript><pyscript><pyodide>
|
2022-12-27 09:15:55
| 1
| 708
|
Jamshaid Tariq
|
74,927,469
| 15,852,600
|
How do I improve the python function for boxcox transformation?
|
<p>I created a function giving a fair evaluation of lambda coefficient for a given series/list of data, however it takes lot of time when the input has a long size, is there some tips to speed it up ?</p>
<p>This is my code:</p>
<pre><code>from scipy.stats import norm, pearsonr
def get_lambda_coef(series):
x=[series[i] for i in range(len(series))]
for i in range(len(x)-1):
for j in range(len(x)-1):
if x[j]>=x[j+1]:
z=x[j]
x[j]=x[j+1]
x[j+1]=z
i=[j for j in range(1,len(x)+1)]
f=[(i[j]-0.375)/(len(x)+0.25) for j in range(len(x))]
u=[norm.ppf(f[i]) for i in range(len(x))]
lambda_coef=0
width=3
step=width/6
k=lambda_coef-width
iteration=1
while iteration<=15:
r_vector=[]
lambda_vect=[]
while k<=lambda_coef+width:
if k==0:
y=[np.log(i) for i in x]
else:
y=[(i**k-1)/k for i in x]
r_vector.append(pearsonr(y, u)[0])
k+=step
k=lambda_coef-width
while k<=lambda_coef+width:
lambda_vect.append(k)
k+=step
lambda_coef=lambda_vect[r_vector.index(max(r_vector))]
width/=2
step/=3
k=lambda_coef-width
iteration+=1
normalized = [(x**lambda_coef - 1)/lambda_coef for x in series]
return (normalized, lambda_coef)
</code></pre>
<p>Any help from your side will be highly appreciated (I upvote all answers).</p>
<p>Thank you !</p>
|
<python><function><normal-distribution>
|
2022-12-27 09:11:13
| 1
| 921
|
Khaled DELLAL
|
74,927,174
| 7,800,760
|
Networkx: select nodes only if they have a given attribute
|
<p>Here is a sample graph based on the code of a previous question ("How can I select nodes with a given attribute value"):</p>
<pre><code>import networkx as nx
P = nx.Graph()
P.add_node("node1", at=5)
P.add_node("node2", at=5)
P.add_node("node3", at=6)
# You can select like this
selected_data = dict((n, d["at"]) for n, d in P.nodes().items() if d["at"] == 5)
# Then do what you want to do with selected_data
print(f"Node found : {len (selected_data)} : {selected_data}")
</code></pre>
<p>but my case is that only a few nodes possess a given attribute such as:</p>
<pre><code>import networkx as nx
P = nx.Graph()
P.add_node("node1", at=5)
P.add_node("node2")
P.add_node("node3")
# You can select like this
selected_data = dict((n, d["at"]) for n, d in P.nodes().items() if d["at"] == 5)
# Then do what you want to do with selected_data
print(f"Node found : {len (selected_data)} : {selected_data}")
</code></pre>
<p>which as you can see only has node1 having the "at" attribute. The code as above would fail with a "Keyerror".</p>
<p>How would you define a function returning the list of the nodes having a given attribute?</p>
|
<python><networkx>
|
2022-12-27 08:34:00
| 1
| 1,231
|
Robert Alexander
|
74,927,085
| 11,591,931
|
AWS Forecast on Jupyter Notebook - Credentials error
|
<p>I'm following the <a href="https://github.com/aws-samples/amazon-forecast-samples/blob/main/notebooks/basic/Getting_Started/Amazon_Forecast_Quick_Start_Guide.ipynb" rel="nofollow noreferrer">Quick Start Guide</a> Jupyter Notebook from AWS in order to make AWS Forecast run with Python.</p>
<p>For the following cell, I replace :
<a href="https://i.sstatic.net/RVT5z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RVT5z.png" alt="Notebook Cell 1" /></a>
adding my access informations to make it works :</p>
<pre><code>session = boto3.Session(region_name=region, aws_access_key_id=access_key_id, aws_secret_access_key=secret_access_key)
</code></pre>
<p>However, I'm still stuck with the following cell :
<a href="https://i.sstatic.net/UWBaA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UWBaA.png" alt="Where I don't pass" /></a>
where I don't know how to solve the current "NoCredentialsError: Unable to locate credentials".</p>
<p>What should I add in the notebook to make it works ?</p>
|
<python><amazon-web-services><jupyter-notebook><amazon-forecast>
|
2022-12-27 08:23:14
| 1
| 1,327
|
Alex Dana
|
74,926,850
| 10,204,719
|
How to solve ModuleNotFoundError: No module named 'openpyxl.cell._writer'?
|
<p>I am trying to build an exe file for the GUI that I created using python POyqt5. After completing the process, I try to launch the UI and I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "main_3.py", line 14, in <module>
import openpyxl
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "openpyxl\___init__.py", line 6, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "openpyxl\workbook\____init__.py", line 4, in <module>
File "PyInstaller\loader\pyimode2_importers.py", line 499, in exec_module
File "openpyxl\workbook\workbook.py", line 9, in <module>
File "PyInstaller\loader\pyimode2_importers.py", line 499, in exec_module
File "openpyxl\worksheet\_write_only.py", line 13, in <module>
File "openpyxl\worksheet\_writer.py", line 23, in init openpyxl.worksheet._writer
ModuleNotFoundError: No module named 'openpyxl.cell._writer'
[13336] Failed to execute script 'main_3' due to unhandled exception!
</code></pre>
<p>I have openpyxl installed and I have also got it imported in my python script. Still, this error remains. Any leads on solving this will be appreciated.</p>
<p>Thanks!</p>
|
<python><pyinstaller><openpyxl>
|
2022-12-27 07:52:21
| 2
| 344
|
sumitpal0593
|
74,926,735
| 8,124,392
|
The training function is throwing an "index out of range in self" error
|
<p>This is my code:</p>
<pre><code># Extract input and target sequences from data list
input_sequences = []
target_sequences = []
BATCH_SIZE = 64
data = read_csv('gpt-j-data.csv')
for query, rephrases in data:
input_sequences.append(query)
target_sequences.append(rephrases)
# Load the tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# Tokenize the input and target sequences
input_sequences = [tokenizer.encode(sequence, add_special_tokens=True) for sequence in input_sequences]
target_sequences = [tokenizer.encode(sequence, add_special_tokens=True) for sequence in target_sequences]
# Convert the input and target sequences to tensors
input_sequences = [torch.tensor(sequence) for sequence in input_sequences]
target_sequences = [torch.tensor(sequence) for sequence in target_sequences]
input_sequences = ensure_tensor_size(input_sequences, 4)
target_sequences = ensure_tensor_size(target_sequences, 4)
# Create a RephraseDataset object from the input and target sequences
dataset = RephraseDataset(input_sequences, target_sequences)
# Create a DataLoader for the dataset
dataloader = torch.utils.data.DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
# Set the device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = RephraseGenerator(vocab_size=1000, embedding_dim=256, hidden_size=512, num_layers=2, dropout=0.2)
# Move the model to the device
model.to(device)
# Set the optimizer and loss function
optimizer = optim.AdamW(model.parameters())
loss_fn = nn.CrossEntropyLoss()
train(model, dataloader, optimizer, device)
</code></pre>
<p>And this is my train function:</p>
<pre><code># Training loop
def train(model, data_loader, optimizer, device):
model.train()
epoch_loss = 0
for input_sequence, target_sequence in data_loader:
input_sequence = input_sequence.to(device)
target_sequence = target_sequence.to(device)
optimizer.zero_grad()
predictions = model(input_sequence, target_sequence)
loss = rephrase_loss(predictions, target_sequence)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(data_loader)
</code></pre>
<p>This is my ensure_tensor_size() function:</p>
<pre><code>def ensure_tensor_size(tensor_list, size):
"""Ensures that each tensor in the list has the given size.
If a tensor has a different size, it is padded with zeros.
Args:
tensor_list: a list of tensors
size: an integer representing the desired size of the tensors
Returns:
a new list of tensors with the same size
"""
padded_tensor_list = []
for tensor in tensor_list:
if tensor.size(0) < size:
tensor = F.pad(tensor, (0, size - tensor.size(0)), value=0)
elif tensor.size(0) > size:
tensor = tensor[:size]
padded_tensor_list.append(tensor)
return padded_tensor_list
</code></pre>
<p>This is the error that I'm getting:</p>
<pre><code>IndexError Traceback (most recent call last)
<ipython-input-5-1e843ae1e696> in <module>
233 loss_fn = nn.CrossEntropyLoss()
234
--> 235 train(model, dataloader, optimizer, device)
5 frames
/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2208 # remove once script supports set_grad_enabled
2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2211
2212
IndexError: index out of range in self
</code></pre>
<p>This is what my data looks like:</p>
<pre><code>[('Outdoor toys for kids',
[" Kids' outdoor toys",
' Outdoor playthings for children',
" Children's outdoor entertainment",
' Outdoor games for young ones ']),
('ducational toys for kids',
[" Kids' educational toys",
' Educational playthings for children',
" Children's educational entertainment",
' Educational games for young ones']),
('Dolls for girls',
[" Girls' dolls",
' Dolls for little girls',
' Entertainment for young girls',
' Toys for young girls']),
("Kids' swings",
[" Children's swings", ' Swings for kids', ' Swings for young ones', '']),
("Kids' footballs",
[" Children's footballs",
' Footballs for kids',
' Footballs for young ones',
' Entertainment for young kids']),
('Pogo sticks for children',
[" Kids' pogo sticks",
" Children's pogo sticks",
' Pogo sticks for kids',
' Pogo sticks for young ones']),
('Holiday gifts',
[' Gifts for the holidays',
' Gifts for special occasions',
' Gifts for celebrations',
' Gifts for loved ones']),
('Best holiday gifts',
[' Top holiday gifts',
' Highly rated holiday gifts',
' Recommended holiday gifts',
' Best gifts for the holidays']),
('Popular holiday gifts',
[' Best-selling holiday gifts',
' Most sought-after holiday gifts',
' Trending holiday gifts',
' Hot holiday gifts']),
('Holiday gifts for kids',
[' Gifts for children during the holidays',
' Gifts for young ones during the holidays',
' Gifts for little ones during the holidays',
' Gifts for minors during the holidays']),
('Holiday gifts for men',
[' Gifts for men during the holidays',
' Gifts for him during the holidays',
' Gifts for fathers during the holidays',
' Gifts for husbands during the holidays']),
('Holiday gifts for women',
[' Gifts for women during the holidays',
' Gifts for her during the holidays',
' Gifts for mothers during the holidays',
' Gifts for wives during the holidays']),
('Holiday gifts for teens',
[' Gifts for teenagers during the holidays',
' Gifts for adolescents during the holidays',
' Gifts for young adults during the holidays',
' Gifts for older kids during the holidays']),
('Holiday gifts for parents',
[' Gifts for parents during the holidays',
' Gifts for mom and dad during the holidays',
' Gifts for caregivers during the holidays',
' Gifts for adults during the holidays']),
('Holiday gifts for grandparents',
[' Gifts for grandparents during the holidays',
' Gifts for grandpa and grandma during the holidays',
' Gifts for senior citizens during the holidays',
' Gifts for older adults during the holidays']),
('Holiday gifts for friends',
[' Gifts for friends during the holidays',
' Gifts for close friends during the holidays',
' Gifts for companions during the holidays',
' Gifts for peers during the holidays']),
('Holiday gifts for coworkers',
[' Gifts for coworkers during the holidays',
' Gifts for colleagues during the holidays',
' Gifts for associates during the holidays',
' Gifts for professionals during the holidays']),
('Holiday gifts for pets',
[' Gifts for pets during the holidays',
' Gifts for dogs during the holidays',
' Gifts for cats during the holidays',
' Gifts for animals during the holidays']),
('Holiday gifts for gamers',
[' Gifts for gamers during the holidays',
' Gifts for video game enthusiasts during the holidays',
' Gifts for console gamers during the holidays',
' Gifts for PC gamers during the holidays']),
('Holiday gifts for hikers',
[' Gifts for hikers during the holidays',
' Gifts for outdoor enthusiasts during the holidays',
' Gifts for walkers during the holidays',
' Gifts for nature lovers during the holidays']),
('Holiday gifts for book lovers',
[' Gifts for book lovers during the holidays',
' Gifts for readers during the holidays',
' Gifts for bibliophiles during the holidays',
' Gifts for literature enthusiasts during the holidays']),
('Holiday gifts for foodies',
[' Gifts for foodies during the holidays',
' Gifts for gourmet cooks during the holidays',
' Gifts for culinary enthusiasts during the holidays',
' Gifts for epicures during the holidays']),
('Holiday gifts for knitters and crocheters',
[' Gifts for knitters and crocheters during the holidays',
' Gifts for fiber artists during the holidays',
' Gifts for yarn enthusiasts during the holidays',
' Gifts for needlework enthusiasts during the holidays']),
('Holiday gifts for sewers and quilters',
[' Gifts for sewers and quilters during the holidays',
' Gifts for needleworkers during the holidays',
' Gifts for seamstresses during the holidays',
' Gifts for tailors during the holidays']),
('Holiday gifts for DIYers',
[' Gifts for DIYers during the holidays',
' Gifts for home improvement enthusiasts during the holidays',
' Gifts for handymen and handywomen during the holidays',
' Gifts for crafters during the holidays']),
('Holiday gifts for mechanics',
[' Gifts for mechanics during the holidays',
' Gifts for auto mechanics during the holidays',
' Gifts for mechanic enthusiasts during the holidays',
' Gifts for technicians during the holidays']),
('Holiday gifts for handymen and handywomen',
[' Gifts for handymen and handywomen during the holidays',
' Gifts for DIY enthusiasts during the holidays',
' Gifts for home improvement experts during the holidays',
' Gifts for craftspeople during the holidays']),
('Luggage sets',
[' Best luggage',
' Travel luggage',
' Suitcase sets',
' Best luggage sets ']),
('Travel backpacks',
[' Backpacks for travel',
' Best travel backpacks',
' Popular travel backpacks',
' Backpacks for vacation ']),
('Travel pillows',
[' Best travel pillows',
' Top-rated travel pillows',
' Popular travel pillows',
' Best pillows for travel ']),
('Travel neck pillows',
[' Best travel neck pillows',
' Best-selling travel neck pillows',
' Neck pillows for travel',
' Popular travel neck pillows '])]
</code></pre>
<p>Where am I going wrong?</p>
|
<python><machine-learning><pytorch>
|
2022-12-27 07:36:25
| 1
| 3,203
|
mchd
|
74,926,714
| 17,582,019
|
Getting the HTML element using Selenium WebDriver
|
<p>I'm trying to get price of a product on amazon using Selenium:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
url = \
"https://www.amazon.in/Celevida-Kesar-Elaichi-Flavor-Metal/dp/B081WJ6536/ref=sr_1_5?crid=3NRZERQ8H4T8L&keywords=dr+reddys+celevida&qid=1672124472&sprefix=%2Caps%2C5801&sr=8-5"
services = Service(r"C:\Users\Deepak Shetter\chromedriver_win32\chromedriver.exe")
driver = webdriver.Chrome(service=services)
driver.get(url)
price = driver.find_element(By.CLASS_NAME, "a-offscreen")
print("price is "+price.text)
</code></pre>
<p><a href="https://i.sstatic.net/5kvPT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5kvPT.png" alt="enter image description here" /></a></p>
<p>As you can see in this image the html for the price is of <code>class="a-offscreen".</code> But when I run my code on pycharm it return <code>None</code>. How can I get the price string? (btw I checked it using Beautiful soup and it worked fine)</p>
<p>Edit :
This time I used another url : <code>https://www.amazon.in/Avvatar-Alpha-Choco-Latte-Shaker/dp/B08S3TNGYK/?_encoding=UTF8&pd_rd_w=ofFKu&content-id=amzn1.sym.1f592895-6b7a-4b03-9d72-1a40ea8fbeca&pf_rd_p=1f592895-6b7a-4b03-9d72-1a40ea8fbeca&pf_rd_r=PT3Y6GWJ7YHADW09VKNK&pd_rd_wg=lBWZa&pd_rd_r=0a44c278-bcfa-49c2-806b-cf8eb292038a&ref_=pd_gw_ci_mcx_mr_hp_atf_m</code></p>
<p>In this case it has 2 price elements one with the class="a-offscreen" and another one with calss="a-price-whole".</p>
<p>my code :</p>
<pre><code>price = driver.find_element(By.CLASS_NAME, "a-price-whole")
</code></pre>
<p>this time return value is <code>1,580</code>.</p>
|
<python><selenium-webdriver><css-selectors><selenium-chromedriver><webdriverwait>
|
2022-12-27 07:33:03
| 2
| 790
|
Deepak
|
74,926,364
| 13,000,378
|
Detect objects in a video using a yolo model
|
<p>Ive created a simple object detection model using yolo v3 pre-trained model that detects objects in a single image.Below is the python code for the model,</p>
<pre><code>import cv2
import numpy as np
# Load Yolo
net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")
classes = []
with open("coco.names", "r") as f:
classes = [line.strip() for line in f.readlines()]
layer_names = net.getLayerNames()
output_layers= [layer_names[i-1] for i in net.getUnconnectedOutLayers()]
colors = np.random.uniform(0,255,size=(len(classes),3))
img= cv2.imread("heyyy.jpg")
height,width,channels = img.shape
blob= cv2.dnn.blobFromImage(img,0.00392,(416,416),(0,0,0),True,crop=False)
net.setInput(blob)
outs= net.forward(output_layers)
class_ids=[]
confidences=[]
boxes=[]
for out in outs:
for detection in out:
scores = detection[5:]
class_id = np.argmax(scores)
confidence = scores[class_id]
if confidence > 0.5:
center_x = int(detection[0]*width)
center_y = int(detection[1]*height)
w = int(detection[2]*width)
h = int(detection[3]*height)
cv2.circle(img,(center_x,center_y),10,(0,255,0),2)
x = int(center_x-w/2)
y = int(center_y - h/2)
boxes.append([x,y,w,h])
confidences.append(float(confidence))
class_ids.append(class_id)
indexes = cv2.dnn.NMSBoxes(boxes,confidences,0.5,0.4)
print(indexes)
font=cv2.FONT_HERSHEY_PLAIN
for i in range (len(boxes)):
if i in indexes:
x,y,w,h = boxes[i]
label = str(classes[class_ids[i]])
color = colors[i]
cv2.rectangle(img,(x,y),(x+w,y+h),color,2)
cv2.putText(img,label,(x,y+30),font,3,color,3)
cv2.imshow("Image",img)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>For any given image, the model identifies the objects flawlessly. How can I get the model working for video(.mp4) files? Please help!</p>
|
<python><opencv><machine-learning><object-detection><yolo>
|
2022-12-27 06:39:52
| 1
| 661
|
Kavishka Rajapakshe
|
74,926,354
| 3,671,056
|
botot3 with sqs: the address 'QueueUrl' is not valid for this endpoint
|
<p>I am trying to the get queue url from a SQS queue. I have read about other posts <a href="https://stackoverflow.com/questions/36666494/set-the-endpoint-for-boto3-sqs">Set the endpoint for boto3 SQS</a> and <a href="https://stackoverflow.com/questions/65663622/boto3-sqs-incorrect-url-when-not-specified-endpoint-url">boto3 sqs incorrect url when not specified endpoint url</a> but it is still puzzling why the code is not working.</p>
<p>My region being <code>us-east-1</code> and the endpoint seems to be the correct one <a href="https://queue.amazonaws.com/XXXXXXXXXXXX/MyMessages" rel="nofollow noreferrer">https://queue.amazonaws.com/XXXXXXXXXXXX/MyMessages</a> (legacy)</p>
<pre><code>import json
import boto3
from datetime import datetime
def lambda_handler(event, context):
now = datetime.now()
current_time = now.strftime("%H:%M:%S %p")
sqs = boto3.client('sqs', region_name="us-east-1")
#get queue url using queue name
queueurl = sqs.get_queue_url(QueueName='MyMessages')
print(queueurl)
for x in range(5):
sqs.send_message(
QueueUrl=str(queueurl),
MessageBody=current_time
)
return {
'statusCode': 200,
'body': json.dumps(current_time)
}
</code></pre>
<p><strong>Error</strong></p>
<pre><code>Test Event Name
sendMessage
Response
{
"errorMessage": "An error occurred (InvalidAddress) when calling the SendMessage operation: The address {'QueueUrl': 'https://queue.amazonaws.com/XXXXXXXXXXXX/MyMessages', 'ResponseMetadata': {'RequestId': '10fea792-29fa-5484-9bc0-fbb6ad9320b7', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '10fea792-29fa-5484-9bc0-fbb6ad9320b7', 'date': 'Tue, 27 Dec 2022 05:27:05 GMT', 'content-type': 'text/xml', 'content-length': '322'}, 'RetryAttempts': 0}} is not valid for this endpoint.",
"errorType": "ClientError",
"requestId": "ba1f3c72-6180-4308-ad12-0bc6b02c3793",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 18, in lambda_handler\n sqs.send_message(\n",
" File \"/var/runtime/botocore/client.py\", line 391, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
" File \"/var/runtime/botocore/client.py\", line 719, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
]
}
Function Logs
START RequestId: ba1f3c72-6180-4308-ad12-0bc6b02c3793 Version: $LATEST
{'QueueUrl': 'https://queue.amazonaws.com/XXXXXXXXXXXX/MyMessages', 'ResponseMetadata': {'RequestId': '10fea792-29fa-5484-9bc0-fbb6ad9320b7', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '10fea792-29fa-5484-9bc0-fbb6ad9320b7', 'date': 'Tue, 27 Dec 2022 05:27:05 GMT', 'content-type': 'text/xml', 'content-length': '322'}, 'RetryAttempts': 0}}
[ERROR] ClientError: An error occurred (InvalidAddress) when calling the SendMessage operation: The address {'QueueUrl': 'https://queue.amazonaws.com/XXXXXXXXXXXX/MyMessages', 'ResponseMetadata': {'RequestId': '10fea792-29fa-5484-9bc0-fbb6ad9320b7', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '10fea792-29fa-5484-9bc0-fbb6ad9320b7', 'date': 'Tue, 27 Dec 2022 05:27:05 GMT', 'content-type': 'text/xml', 'content-length': '322'}, 'RetryAttempts': 0}} is not valid for this endpoint.
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 18, in lambda_handler
sqs.send_message(
File "/var/runtime/botocore/client.py", line 391, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 719, in _make_api_call
raise error_class(parsed_response, operation_name)END RequestId: ba1f3c72-6180-4308-ad12-0bc6b02c3793
REPORT RequestId: ba1f3c72-6180-4308-ad12-0bc6b02c3793 Duration: 1108.64 ms Billed Duration: 1109 ms Memory Size: 128 MB Max Memory Used: 65 MB Init Duration: 275.81 ms
</code></pre>
|
<python><amazon-web-services><aws-lambda><boto3><amazon-sqs>
|
2022-12-27 06:38:11
| 1
| 412
|
Sri
|
74,926,328
| 219,976
|
Troubleshooting k8s readiness probe failure
|
<p>I'm trying to run my django rest framework application in k8s environment but readiness probe fails. I wonder how to get what is wrong.<br />
When I look at pod logs the app seems to be running. It has unapplied migrations, but it is ok:</p>
<pre><code>C:\Users\user>kubectl logs test-bbdccbc76-8cwg9
Watching for file changes with StatReloader
Performing system checks...
Server initialized for gevent.
System check identified some issues:
WARNINGS:
?: (staticfiles.W004) The directory '/app/staticfiles' in the STATICFILES_DIRS setting does not exist.
System check identified 1 issue (0 silenced).
You have 1 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): auth
Run 'python manage.py migrate' to apply them.
December 27, 2022 - 06:13:28
Django version 4.1.2, using settings 'test.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
</code></pre>
<p>Here is <code>describe pod</code> output:</p>
<pre><code>C:\Users\user>kubectl describe pod test-bbdccbc76-8cwg9
Name: test-bbdccbc76-8cwg9
Namespace: test
Priority: 2000
Priority Class Name: default
Service Account: default
Node: sd2-k8s-stg-n05/10.216.14.52
Start Time: Tue, 27 Dec 2022 11:11:14 +0500
Labels: app=test
pod-template-hash=bbdccbc76
Annotations: checksum/config: f0ee0887d3fd078979831f04d13ade8759ef8a4ee9aad23830c5909300e322b4
cni.projectcalico.org/containerID: ac4c0c57c0a3baa1c15a78aedb36e3e1890f8b9859547807b1e3587835792efb
cni.projectcalico.org/podIP: 10.233.110.84/32
cni.projectcalico.org/podIPs: 10.233.110.84/32
container.apparmor.security.beta.kubernetes.io/test: runtime/default
kubernetes.io/psp: restricted
seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status: Running
IP: 10.233.110.84
IPs:
IP: 10.233.110.84
Controlled By: ReplicaSet/test-bbdccbc76
Containers:
test:
Container ID: containerd://26e3234ba31c09d6d62e3efb999634cb64d26d6ce2e8e734a26104d62b6f2f5f
Image: registry/library/test:latest
Image ID: registry/library/test-@sha256:12345678b16eb2c90324916756846d5dfa557198bd4aeed9e790db677702b1
Port: 8000/TCP
Host Port: 0/TCP
Command:
python
manage.py
runserver
State: Running
Started: Tue, 27 Dec 2022 11:11:15 +0500
Ready: False
Restart Count: 0
Limits:
cpu: 500m
ephemeral-storage: 500Mi
memory: 500Mi
Requests:
cpu: 500m
ephemeral-storage: 500Mi
memory: 500M
Liveness: http-get http://:8000/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8000/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
test-configmap ConfigMap Optional: false
Environment:
CONNECTION_STRING: mongodb://xxx:yyy@zzz:27017
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tpstr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-tpstr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 79s default-scheduler Successfully assigned test/test-bbdccbc76-8cwg9 to xx-k8s-xx
Normal Pulling 78s kubelet Pulling image "registry/library/test:latest"
Normal Pulled 78s kubelet Successfully pulled image "registry/library/test:latest" in 57.066835ms
Normal Created 78s kubelet Created container test
Normal Started 78s kubelet Started container test
Warning Unhealthy 29s (x2 over 39s) kubelet Readiness probe failed: Get "http://10.233.110.84:8000/healthz": dial tcp 10.233.110.84:8000: connect: connection refused
Warning Unhealthy 29s (x2 over 39s) kubelet Liveness probe failed: Get "http://10.233.110.84:8000/healthz": dial tcp 10.233.110.84:8000: connect: connection refused
</code></pre>
<p>Here's service:</p>
<pre><code>C:\Users\user>kubectl describe service test-service
Name: test-service
Namespace: test
Labels: app=test
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/version=0.1.0
helm.sh/chart=my-helm-char-1.0.0
Annotations: meta.helm.sh/release-name: test
meta.helm.sh/release-namespace: test
Selector: app=test
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.233.53.185
IPs: 10.233.53.185
Port: api-port 8000/TCP
TargetPort: api-port/TCP
NodePort: api-port 32426/TCP
Endpoints:
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
|
<python><django><kubernetes><django-rest-framework><kubernetes-helm>
|
2022-12-27 06:33:13
| 0
| 6,657
|
StuffHappens
|
74,926,252
| 8,124,392
|
How to replace the tokenize() and pad_sequence() functions from transformers?
|
<p>I got the following imports:</p>
<pre><code>import torch, csv, transformers, random
import torch.nn as nn
from torch.utils.data import Dataset
import torch.optim as optim
import pandas as pd
from transformers import GPT2Tokenizer, GPT2LMHeadModel, tokenize, pad_squences
</code></pre>
<p>And I'm getting this error:</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-35-e04c63220105> in <module>
4 import torch.optim as optim
5 import pandas as pd
----> 6 from transformers import GPT2Tokenizer, GPT2LMHeadModel, tokenize, pad_squences
ImportError: cannot import name 'tokenize' from 'transformers' (/usr/local/lib/python3.8/dist-packages/transformers/__init__.py)
</code></pre>
<p>This is how I am using the <code>tokenize()</code> and <code>pad_sequence()</code> functions:</p>
<pre><code>class RephraseDataset(Dataset):
def __init__(self, data, tokenizer):
self.data = data
self.tokenizer = tokenizer
def __len__(self):
return len(self.data)
def __getitem__(self, index):
query, rephrases = self.data[index]
tokenized_query = tokenizer.encode(query, add_special_tokens=True)
# tokenized_query = tokenize(self.tokenizer, query)
padded_query = tokenized_query + [tokenizer.pad_token_id] * (max_length - len(tokenized_query))
# padded_query = pad_sequences(self.tokenizer, tokenized_query, max_length=128)
tokenized_rephrases = [tokenize(self.tokenizer, r) for r in rephrases]
padded_rephrases = [pad_sequences(self.tokenizer, r, max_length=128) for r in tokenized_rephrases]
return padded_query, padded_rephrases
# Create the dataset
dataset = RephraseDataset(data, tokenizer)
# Create a dataloader
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=32,
shuffle=True,
)
</code></pre>
<p>How can I fix this problem? I couldn't find anything in the docs. What version should I roll transformers back to?</p>
|
<python><huggingface-transformers><huggingface-tokenizers><gpt-2>
|
2022-12-27 06:19:42
| 1
| 3,203
|
mchd
|
74,925,849
| 3,713,236
|
In Pandas, what is the correct dtype for binary (dummy) variables?
|
<p>In Pandas, what is the correct dtype for binary (dummy) variables?</p>
<p>There are obviously lots of dtypes in Pandas, some of which are:
<code>float64</code>, <code>int64</code>, <code>category</code>, or simply <code>object</code>. To me, these all seem like correct dtypes for binary (dummy) variables. Which one is the right answer?</p>
<p>If the correct dtype is <code>category</code>, I have a follow-up question which is basically I am unable to cast a column in pandas as <code>dtype='category'</code>, but we will cross that bridge when we get there.</p>
|
<python><pandas><binary-data><dummy-variable>
|
2022-12-27 05:04:05
| 0
| 9,075
|
Katsu
|
74,925,843
| 1,056,563
|
Is there any way to check if the declared type of a method parameter were consistent with the declared generic type?
|
<p>Consider a <code>generic type</code> baseclass:</p>
<pre><code>from abc import ABC, abstractmethod
class Reader(ABC, Generic[S]):
@abstractmethod
def get_resource(self) -> S:
pass
</code></pre>
<p>Also let's set up some dummy classes for trying this out:</p>
<pre><code>
class ResourceA():
def uri(self) -> str:
return "uri:ResourceA"
class ResourceB():
def uri(self) -> str:
return "uri:ResourceB"
</code></pre>
<p>Now let's cause some nonsense and mayhem. Yet no warnings/notifications are provided!</p>
<pre><code>from abc import ABC, abstractmethod
from typing import Generic, TypeVar
from typing_extensions import override
class Resource:
@abstractmethod
def uri(self) -> str:
raise NotImplementedError
class ResourceA(Resource):
@override
def uri(self) -> str:
return "uri:ResourceA"
class ResourceB(Resource):
@override
def uri(self) -> str:
return "uri:ResourceB"
class ResourceC(Resource):
@override
def uri(self) -> str:
return "uri:ResourceC"
S = TypeVar("S", bound=Resource)
class Reader(ABC, Generic[S]):
@abstractmethod
def get_resource(self) -> S:
pass
class ReaderA(Reader[ResourceA]):
def __init__(self, resource: ResourceA):
self.resource = resource
@override
def get_resource(self) -> ResourceA:
return self.resource # Incorrect on purpose to kick tires on warnings
class ReaderB(Reader[ResourceB]):
def __init__(self, resource: ResourceA):
self.resource = resource
@override
def get_resource(self) -> ResourceA: # incorrect on purpose to kick tires
return self.resource
</code></pre>
<p>Now let's send in some messy stuff..</p>
<pre><code>fr1 = ReaderA(ResourceB())
print('resourceA', fr1.get_resource().uri())
fr2 = ReaderB(ResourceC())
print('resourceB', fr2.get_resource().uri())
</code></pre>
<p>We get only the following output:</p>
<pre><code>resourceA uri:ResourceB
resourceB uri:ResourceC
</code></pre>
<p>I realize these are generic type <strong>hints</strong> but still.. is there not some way to at least be <em>notified</em> /warned of the typing mismatch? This is after all a compile time visibility situation.</p>
|
<python><generics>
|
2022-12-27 05:03:19
| 0
| 63,891
|
WestCoastProjects
|
74,925,822
| 2,882,380
|
How to shorten the command when filtering data-frame in Python
|
<p>In Python, a common way to filter a data frame is like this</p>
<pre><code>df.loc[(df['field 1'] == 'a') & (df['field 2'] == 'b'), 'field 3']...
</code></pre>
<p>When <code>df</code> name is long, or when there are more filter conditions (only two in the above), the above line will be long naturally. Moreover, it is a bit tedious to have type out <code>df</code> name for each condition. In R or SQL, we don't really need to do that. So, my question is if there is a way to shorten the above line in Python. For example, is there a way that I don't have to write down <code>df</code> name in each condition? Thanks.</p>
|
<python>
|
2022-12-27 04:56:58
| 3
| 1,231
|
LaTeXFan
|
74,925,536
| 499,363
|
How to make Github app connect after approval
|
<p>We have a GitHub app that can be installed on a repository. This works using the <a href="https://docs.github.com/en/developers/apps/building-github-apps/identifying-and-authorizing-users-for-github-apps#web-application-flow" rel="nofollow noreferrer">GitHub app authorization flow</a> that returns back an installation_id that we use to associate a user account on our web app with their GitHub repository. In this case we get a callback to our url: <code>/callback?setup_action=install&installation_id=<installation_id></code></p>
<p>This typically works fine, but there are some scenarios where the authorization flow doesn't complete in a single step. In many GitHub orgs, it requires approval from an admin before the app can be installed. In these cases we don't immediately get the installation_id in the url but a request state: <code>/callback?setup_action=request</code>, and once the admin approves we get the <code>installation_id</code>.</p>
<p>In this case, since the approval step is completed by a different user, we don't have our web app session to associate the user with this <code>installation_id</code>. Is there a way to identify the user / account of the original request when the authorization is approved?</p>
|
<python><authentication><github-api><github-app>
|
2022-12-27 03:45:11
| 1
| 4,840
|
Ankit
|
74,925,359
| 16,906,826
|
Access elements of Python tuple in Matlab
|
<p>I want to run and access the output of a Python function in Matlab. Please find below function. The python function returns a Python tuple as output in Matlab. Can I access elements of tuple in Matlab? I do not want to export output as .mat file and import it in Matlab, which will be computationally expensive for my work. Thanks</p>
<p>Python code:
python_file name: test_file</p>
<pre><code>import numpy as np
def test (x1, x2, x3, x4):
y = x1 + x2 + x3 + x4
z = y**2
return {"y_value":y,"z_value": z}
</code></pre>
<p>Calling python function in Matlab:</p>
<pre><code>f = py.test_file.test(2.2,3.9,4.2,5.1)
</code></pre>
<p>Output received in Matlab as of 1 by 1 tuple.</p>
<pre><code>f =
Python tuple with no properties.
(15.4, 237.16000000000003)
</code></pre>
|
<python><matlab><tuples><iterable-unpacking>
|
2022-12-27 02:53:09
| 1
| 303
|
Husnain
|
74,925,285
| 472,485
|
Django form validation
|
<p>Is there an easier way to write validation for each item in a form? Maybe embed them in model declaration itself?</p>
<pre><code>class InfoUpdateForm(forms.ModelForm):
class Meta:
model = Profile
fields = [
'first_name',
'middle_name',
'last_name'
]
def clean(self):
cleaned_data = super().clean()
first_name = cleaned_data.get('first_name')
if not str(first_name).isalnum():
self._errors['first_name'] = self.error_class(['First name should constain only alpha numeric characters'])
middle_name = cleaned_data.get('middle_name')
if not str(middle_name).isalnum():
self._errors['middle_name'] = self.error_class(['Middle name should constain only alpha numeric characters'])
last_name = cleaned_data.get('last_name')
if not str(last_name).isalnum():
self._errors['last_name'] = self.error_class(['Last name should constain only alpha numeric characters'])
</code></pre>
|
<python><django><django-forms>
|
2022-12-27 02:28:00
| 1
| 22,975
|
Jean
|
74,925,119
| 1,391,466
|
Run N processes but never reuse the same process
|
<p>I like to run a bunch of processes concurrently but never want to reuse an already existing process. So, basically once a process is finished I like to create a new one. But at all times the number of processes should not exceed N.</p>
<p>I don't think I can use multiprocessing.Pool for this since it reuses processes.</p>
<p>How can I achieve this?</p>
<p>One solution would be to run N processes and wait until all processed are done. Then repeat the same thing until all tasks are done. This solution is not very good since each process can have very different runtimes.</p>
<p>Here is a naive solution that appears to work fine:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Process, Queue
import random
import os
from time import sleep
def f(q):
print(f"{os.getpid()} Starting")
sleep(random.choice(range(1, 10)))
q.put("Done")
def create_proc(q):
p = Process(target=f, args=(q,))
p.start()
if __name__ == "__main__":
q = Queue()
N = 5
for n in range(N):
create_proc(q)
while True:
q.get()
create_proc(q)
</code></pre>
|
<python><python-3.x><multiprocessing><python-multiprocessing>
|
2022-12-27 01:40:56
| 1
| 2,087
|
chhenning
|
74,925,074
| 7,796,211
|
How do I equally divide an iterator into N chunks?
|
<p>I want to divide an iterator into N equal chunks. The chunks themselves need to be iterators.</p>
<p>Here's some examples of what I want to achieve:</p>
<pre class="lang-py prettyprint-override"><code>iter1 = iter(range(10))
chunks = split_n(iter1, n=2)
outputs iterator:
[[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]]
iter2 = iter(range(20))
chunks = split_n(iter2, n=4)
outputs iterator:
[[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]]
</code></pre>
<p>If an iterator is not evenly divisible, I want the remaining elements to be spread out so that the chunks are "as equal" as possible.</p>
<pre class="lang-py prettyprint-override"><code>iter3 = iter(range(30))
chunks = split_n(iter3, n=4)
outputs iterator:
[[0, 1, 2, 3, 4, 5, 6, 7],
[8, 9, 10, 11, 12, 13, 14, 15],
[16, 17, 18, 19, 20, 21, 22],
[23, 24, 25, 26, 27, 28, 29]]
</code></pre>
<p>One important caveat is that at no point should an iterator be converted to a list, as the iterators can be very huge and it would simply consume too much memory.</p>
|
<python>
|
2022-12-27 01:23:39
| 0
| 418
|
Thegerdfather
|
74,925,057
| 3,905,546
|
How to redirect the user to another webpage without using JavaScript in a Jinja2 HTML template?
|
<p>I'm using a Jinja2 Template with FastAPI. All I want to know is how to implement a <code>redirect</code> action in Jinja2 template <strong>without using JavaScrpit</strong>?</p>
<p>If the variable I set exists, I would like to force a page redirection.</p>
<pre><code>{% if my_var %}
// What value should I enter here?
{% endif %}
</code></pre>
<p>Case in my titles with java :</p>
<pre><code><c:if test="${!emtpy(my_var)}">
<% response.sendRedirect("/new/url"); %>
</c:if>
</code></pre>
<p>Case in my PHP :</p>
<pre><code>if(!empty($my_var) ){
header('Location:/new/url');
exit;
}
</code></pre>
<p>If there is no way, I have to use JavaScript, but I don't want to use this method at all, as some of people deactivate JavaScript in their browsers.</p>
|
<python><html><http-redirect><jinja2><fastapi>
|
2022-12-27 01:21:14
| 1
| 351
|
Richard
|
74,925,053
| 14,159,985
|
How to cache data into Pyspark before using multiple sql.write functions properly
|
<p>I'm new to pyspark, and I'm trying to create a Insert Update component.</p>
<p>Basically I got a dataframe, "df", with a column named "action", where I can filter the actions between "insert" and "update". The code looks something like that:</p>
<pre><code>df_to_insert = df.filter(df.action == 1)
df_to_upsert = df.filter(df.action == 2)
</code></pre>
<p>What I'm trying to do is chaching the dataframe df, this way I can use a count operation to see if there is any row to update or to insert in a faster way:</p>
<pre><code>cached_dataframe = df.cache()
df_to_insert = df.filter(df.action == 1)
df_to_update = df.filter(df.action == 2)
if df_to_insert.count() != 0:
#insert rows
if df_to_update.count() != 0:
#update rows
</code></pre>
<p>As you should know, the first count is quite slow, once the pyspark applies all the transformations required, but the second one is much faster, since I cached the dataframe df.</p>
<p>After chaching the data and diving it between insert and update I just need to drop the "action" column, then I'm using the io.github.spark_redshift_community.spark.redshift connector to perform a write function into a redshift database. The code looks like this:</p>
<pre><code>df_to_insert.write.format("io.github.spark_redshift_community.spark.redshift").options(
url=f"jdbc:redshift://{HOST_REDSHIFT}:{PORT_REDSHIFT}/{DATABASE_REDSHIFT}",
user=USER_REDSHIFT,
password=PASSWORD_REDSHIFT,
tempdir= f"s3a://{S3_BUCKET}/test_folder",
dbtable=table,
forward_spark_s3_credentials="true",
fetchsize="100000") \
.mode("append").save()
</code></pre>
<p>But performing the insert with cached data to a database is taking to long.</p>
<p>What is the best approach here?
Should I cache the data in this case?
If so, what am I doing wrong?</p>
<p>Thank you in advance!</p>
|
<python><apache-spark><caching><pyspark>
|
2022-12-27 01:19:40
| 0
| 338
|
fernando fincatti
|
74,925,007
| 7,874,234
|
Python, package exists on PyPi but cant install it via pip
|
<p>The package <code>PyAudioWPatch</code> is shown as available on PyPi with a big old green check mark.
<a href="https://pypi.org/project/PyAudioWPatch/" rel="nofollow noreferrer">https://pypi.org/project/PyAudioWPatch/</a></p>
<p>However when I try to install it, I am getting the following error:</p>
<pre><code>% pip install PyAudioWPatch
ERROR: Could not find a version that satisfies the requirement PyAudioWPatch (from versions: none)
ERROR: No matching distribution found for PyAudioWPatch
</code></pre>
<p>For context:</p>
<pre><code>% python -V; pip -V
Python 3.9.13
pip 22.3.1 from /Users/petertoth/Documents/Desktop_record_sum/py/venv/lib/python3.9/site-packages/pip (python 3.9)
</code></pre>
<p>Why is this the case?</p>
|
<python><pip><pypi>
|
2022-12-27 01:02:38
| 1
| 800
|
Peter Toth
|
74,924,901
| 7,317,408
|
Pandas TA lib not working when using group_by
|
<p>I have some OHLC 5m data like so:</p>
<pre><code> timestamp open high ... symbol volume_10_day last_high_volume_high
0 2022-09-09 11:20:00+00:00 1.4000 1.4000 ... AMAM NaN 0.50
1 2022-09-09 13:30:00+00:00 1.4100 1.4100 ... AMAM NaN 0.50
2 2022-09-09 14:05:00+00:00 1.4749 1.4749 ... AMAM NaN 0.50
3 2022-09-09 16:45:00+00:00 1.4700 1.4702 ... AMAM NaN 0.50
4 2022-09-09 17:10:00+00:00 1.4300 1.4300 ... AMAM NaN 0.50
... ... ... ... ... ... ... ...
281476 2022-12-03 00:35:00+00:00 1.3300 1.3300 ... ZH 31921.4 1.07
281477 2022-12-03 00:40:00+00:00 1.3300 1.3300 ... ZH 31921.4 1.07
281478 2022-12-03 00:45:00+00:00 1.3200 1.3300 ... ZH 31921.4 1.07
281479 2022-12-03 00:50:00+00:00 1.3250 1.3250 ... ZH 31921.4 1.07
281480 2022-12-03 00:55:00+00:00 1.3300 1.3300 ... ZH 31921.4 1.07
</code></pre>
<p>I am then attempting to add a 72 ema, grouping everything by the symbol:</p>
<pre><code>import pandas as pd
import pandas_ta as ta
df["EMA72"] = ta.ema(df.groupby('symbol')['close'], length=2) // length is 2 to demonstrate it isn't working
timestamp open high low ... vwap symbol volume_10_day EMA72
0 2022-09-09 11:20:00+00:00 1.4000 1.4000 1.4000 ... 1.400000 AMAM NaN None
1 2022-09-09 13:30:00+00:00 1.4100 1.4100 1.4100 ... 1.410000 AMAM NaN None
2 2022-09-09 14:05:00+00:00 1.4749 1.4749 1.4749 ... 1.474900 AMAM NaN None
3 2022-09-09 16:45:00+00:00 1.4700 1.4702 1.4100 ... 1.445265 AMAM NaN None
4 2022-09-09 17:10:00+00:00 1.4300 1.4300 1.4100 ... 1.413117 AMAM NaN None
... ... ... ... ... ... ... ... ... ...
281476 2022-12-03 00:35:00+00:00 1.3300 1.3300 1.3300 ... 1.330000 ZH 31921.4 None
281477 2022-12-03 00:40:00+00:00 1.3300 1.3300 1.3300 ... 1.330000 ZH 31921.4 None
281478 2022-12-03 00:45:00+00:00 1.3200 1.3300 1.3200 ... 1.322804 ZH 31921.4 None
281479 2022-12-03 00:50:00+00:00 1.3250 1.3250 1.3250 ... 1.325000 ZH 31921.4 None
281480 2022-12-03 00:55:00+00:00 1.3300 1.3300 1.3200 ... 1.326081 ZH 31921.4 None
</code></pre>
<p>What am I doing wrong here?</p>
|
<python><pandas><ta-lib>
|
2022-12-27 00:29:37
| 1
| 3,436
|
a7dc
|
74,924,677
| 3,321,579
|
Is there a way to convert a non zero padded time string into a datetime?
|
<p>I am looking a the <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer">strptime</a> docs. It only specifies that it can read formatted times with zero padded strings like '01:00pm'. Is there a way I can read a time like '1:00am' using the strptime function?</p>
|
<python><datetime>
|
2022-12-26 23:27:43
| 2
| 1,947
|
Scorb
|
74,924,632
| 5,404,647
|
Appending to a list with multithreading ThreadPoolExecutor and map
|
<p>I have the following code</p>
<pre><code>import random
import csv
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import ProcessPoolExecutor
import pandas as pd
def generate_username(id, job_location):
number = "{:03d}".format(random.randrange(1, 999))
return "".join([id, job_location.strip(), str(number)])
def append_to_list(l, idx, job_location):
l.append([idx, generate_username(str(idx), job_location)])
def generate_csv(filepath, df):
rows = [["EMP_ID", "username"]]
ids, locations = df.EMP_ID, df["Job Location"]
for idx, location in zip(ids, locations):
rows.append([idx, generate_username(str(idx), location)])
with open(filepath, 'w') as file:
writer = csv.writer(file)
writer.writerows(rows)
</code></pre>
<p>And this is the multithreading implementation</p>
<pre><code>def generate_csv_threads(filepath, df, n):
rows = [["EMP_ID", "username"]]
ids, locations = df.EMP_ID, df["Job Location"]
with ThreadPoolExecutor(max_workers=n) as executor:
executor.map(append_to_list, rows, ids, locations)
executor.shutdown(wait=True)
with open(filepath, 'w') as file:
writer = csv.writer(file)
writer.writerows(rows)
</code></pre>
<p>I have several questions regarding this. I saw that <code>append</code> is thread safe, so I would not need a <code>lock</code>. However, the <code>csv</code> generated is the following:</p>
<pre><code>[['EMP_ID', 'username', [234687, '234687Oregon696']]]
</code></pre>
<p>(I have more than one user to generate)</p>
|
<python><multithreading><python-multithreading>
|
2022-12-26 23:16:56
| 1
| 622
|
Norhther
|
74,924,400
| 651,174
|
Case-insensitive section of a pattern
|
<p>Does Python have something like vim where it allows inlining a portion of the pattern that may have flags, for example being case-insensitive? Here would be an example:</p>
<pre><code>re.search(r'he\cllo', string)
</code></pre>
<p><code>\c</code> being the case-insensitive inline indicator. Or is it an all or nothing in python with the <code>re.I</code> flag?</p>
|
<python><regex>
|
2022-12-26 22:25:22
| 1
| 112,064
|
David542
|
74,924,277
| 17,945,841
|
How to set seed in python
|
<p>I want to draw the same exact samples from the data, two times, in order to run a different analysis each time. To do so I did</p>
<pre><code>random.seed(10)
data.sample(n = 1000)
</code></pre>
<p>But this is not working, I get different samples each time. I searched for a built-in parameter in the <code>sample()</code> function but I found non. How do I set seed?</p>
<p>Thanks!</p>
|
<python><random-seed>
|
2022-12-26 22:03:09
| 0
| 1,352
|
Programming Noob
|
74,923,932
| 10,779,391
|
stable_baselines3 best observation space for custom environment
|
<p>I'm newbie in RL and I'm learning stable_baselines3. I've create simple 2d game, where we want't to catch as many as possible falling apples. If we don't catch apple, apple disappears and we loose a point, else we gain 1 point. We can move only left or right.
I thought that AI will learn faster when I give him raw data without CNN with PPO and MlpPolicy.</p>
<p>The problem is that I don't know how many apples will be in the game in every moment, only that there will max 10 of them.
So I thought that I will create observation_space like this:</p>
<pre><code>self.observation_space = Box(0, 1, (11, 2))
</code></pre>
<p>Where first element would be position of player, and rest positions of apples. If apple doesn't exists I would push value (0, 0).
I trained it for 100000 steps, but it seems very stupid, and goes to left edge of screen.
How can I improve it?</p>
|
<python><artificial-intelligence><reinforcement-learning><openai-gym><stable-baselines>
|
2022-12-26 20:57:31
| 0
| 313
|
Rozrewolwerowany rewolwer
|
74,923,866
| 14,104,321
|
How to make a class return a value?
|
<p>Consider the following example:</p>
<pre><code>import numpy as np
class Vector:
def __init__(self, x, y, z):
self._vector = np.array([x, y, z])
self._magnitude = np.linalg.norm(self._vector)
self._direction = self._vector/self._magnitude
@property
def magnitude(self) -> float:
return self._magnitude
@property
def direction(self) -> np.ndarray:
return self._direction
vec = Vector(10, 4, 2)
print(vec) # <__main__.Vector object at 0x0000027BECAAFEE0>
print(vec.magnitude) # 10.954451150103322
print(vec.direction) # [0.91287093 0.36514837 0.18257419]
</code></pre>
<p>When I try to <code>print(vec)</code> it returns the allocated memory address and not the value of the array, <code>which should be [10, 4, 2]</code>.</p>
<p>NOTE: I don't want to use <code>__repr__</code> because in that way I would get a string and I need the actual type to be returned. The one above is just a small example.</p>
|
<python><class>
|
2022-12-26 20:46:30
| 2
| 582
|
mauro
|
74,923,841
| 881,603
|
asyncio try to acquire a lock without waiting on it
|
<p>I'm converting some threaded code to asyncio.</p>
<p>In the threaded code, I'm calling threading.RLock.acquire( blocking = False, timeout = 0 )</p>
<p>There doesn't seem to be a way to try to aquire an asyncio.Lock without also waiting on it. Is there a way to do this and if so, what am I missing?</p>
<p>In case it helps, here's my helper function:</p>
<pre><code>@contextlib.contextmanager
def try_acquire_lock ( lock: gevent.lock.RLock ) -> Iterator[bool]:
try:
locked: bool = lock.acquire ( blocking = False, timeout = 0 )
yield locked
finally:
if locked:
lock.release()
</code></pre>
<p>and here's an example of how I use it:</p>
<pre><code> def generate( self, cti: Freeswitch_acd_api ) -> bool:
log = logger.getChild( 'Cached_data._generate' )
if self.data_expiration and tz.utcnow() < self.data_expiration:
log.debug( f'{type(self).__name__} data not expired yet' )
else:
with try_acquire_lock( self.lock ) as locked:
if locked:
log.debug( f'{type(self).__name__} regenerating' )
try:
new_data = self._generate( cti )
except Freeswitch_error as e:
log.exception( 'FS error trying to generate data: %r', e )
return False
else:
self.data = new_data
self.data_expiration = tz.utcnow() + tz.timedelta( seconds=self.max_cache_seconds )
return True
</code></pre>
<p>Because somebody is bound to ask "why would you want to do this", it's because in some scenarios I have 3 different threads (now tasks) that each have a connection to a different server. These tasks are responsible for updating state using information from each of these servers. There is some information that is "global" that I can get from any one of the servers. If one task is already updating that global information, I don't want another task to repeat that effort, so I use a lock to control who's currently doing that process. The reason I need to be able to get the information from all the servers is because sometimes one will be taken down for maintenance and this was the simplest most fool-proof way I could think of to implement it without creating extra connections to the servers.</p>
|
<python><multithreading><locking><python-asyncio>
|
2022-12-26 20:42:26
| 2
| 1,492
|
royce3
|
74,923,838
| 8,372,455
|
pydantic models to reference another class
|
<p>Is it possible on a pydantic model to reference another class? For example below in the <code>ReadRequestModel</code> in <code>point_type</code> I am trying to figure out if its possible reference that only these "types of points" in a <em><strong>string</strong></em> format can be chosen:</p>
<pre><code># type-of-points
# just for reference
multiStateValue
multiStateInput
multiStateOutput
analogValue
analogInput
analogOutput
binaryValue
binaryInput
binaryOutput
</code></pre>
<p>And depending on what <code>point_type</code> is that depics what type of <code>point_id</code> can be chosen that I am trying to reference in the <code>PointType</code> class.</p>
<pre><code>from typing import List, Literal, Optional
from pydantic import BaseModel
BOOLEAN_ACTION_MAPPING = Literal["active", "inactive"]
class ReadRequestModel(BaseModel):
device_address: str
point_type: PointType <--- not correct
point_id: PointType <--- not correct
class PointType(BaseModel):
multiStateValue: Optional[int]
multiStateInput: Optional[int]
multiStateOutput: Optional[int]
analogValue: Optional[int]
analogInput: Optional[int]
analogOutput: Optional[int]
binaryValue: Optional[BOOLEAN_ACTION_MAPPING]
binaryInput: Optional[BOOLEAN_ACTION_MAPPING]
binaryOutput: Optional[BOOLEAN_ACTION_MAPPING]
r = ReadRequestModel({'device_address': '12345:5',
'point_type': 'analogInput',
'point_id': 8})
print(r)
</code></pre>
<p>The idea for the request above <code>r</code> it should be valid because the <code>point_type</code> is correct (per <code>type-of-points</code>) and the <code>point_id</code> for an <code>analogInput</code> is an int type. Hopefully this makes sense not a lot of wisdom here but there is documentation for this on the <a href="https://docs.pydantic.dev/usage/models/#recursive-models" rel="nofollow noreferrer">pydantic website</a> but I am having some difficulties trying to figure it out. Any tips appreciated.</p>
<p>The code above that has some major issues will just print that the <code>point_type: PointType NameError: name 'PointType' is not defined</code></p>
|
<python><pydantic>
|
2022-12-26 20:42:06
| 2
| 3,564
|
bbartling
|
74,923,647
| 7,895,542
|
Automatic GUI for python script with command line arguments?
|
<p>Is there any software that auto generates GUI wrappers around python scripts?</p>
<p>My specific scenario is that i wrote a simple script for my father in law to bulk download some stuff from a given url.</p>
<p>Normally you just run the script via</p>
<p><code>python my_script.py --url https://test.com --dir C:\Downloads</code></p>
<p>and it just downloads all the relevant files from test.com to the Downloads folder.</p>
<p>I think he might be able to handle that but i am not sure and so i was thinking if there is any simple software out there that would allow me to take the script and turn it into an executable that just asks for all arguments and then has a simple <code>run</code> button to execute the script and download the things.</p>
<p>Ideally this would mean that he doesnt have to install python but at the very least allow for easier handling for him.</p>
<p>I am aware that there are libraries that allow for the creation of custom GUIs for python but thought that maybe there already exists something simpler and generic for my very simple and i also think fairly common use case.</p>
|
<python><user-interface><exe>
|
2022-12-26 20:05:39
| 2
| 360
|
J.N.
|
74,923,479
| 1,497,139
|
How to get syncify / asyncify syntactic sugar for python
|
<p>It seems there is a general need in some use cases that async and non async code according to <a href="https://peps.python.org/pep-0492/" rel="nofollow noreferrer">https://peps.python.org/pep-0492/</a> in python should be able to work together in a straightforward way.</p>
<p>It should be possible:</p>
<ol>
<li>To call an async function from a synchronous context</li>
<li>To call a synchronous function from an asynchronous context</li>
</ol>
<p>without having to bother about the complex infrastructure problems involved and the design decisions that have been made by the inventors of the async/await functionality in python which call for such complexities.</p>
<p>see e.g.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/55647753/call-async-function-from-sync-function-while-the-synchronous-function-continues">Call async function from sync function, while the synchronous function continues : Python</a></li>
<li><a href="https://stackoverflow.com/questions/40143289/why-do-most-asyncio-examples-use-loop-run-until-complete">Why do most asyncio examples use loop.run_until_complete()?</a></li>
<li><a href="https://www.joeltok.com/posts/2021-02-python-async-sync/" rel="nofollow noreferrer">https://www.joeltok.com/posts/2021-02-python-async-sync/</a></li>
<li><a href="https://bbc.github.io/cloudfit-public-docs/asyncio/asyncio-part-5.html" rel="nofollow noreferrer">https://bbc.github.io/cloudfit-public-docs/asyncio/asyncio-part-5.html</a></li>
</ul>
<p>Hiding the complexity of the answers with some "syntactic sugar" is what i am looking for.</p>
<p>There is e.g.</p>
<ul>
<li><p>A) <a href="https://github.com/ccorcos/syncify/blob/master/syncify/syncify.py" rel="nofollow noreferrer">https://github.com/ccorcos/syncify/blob/master/syncify/syncify.py</a>
since 2014 - but that project has no stars as of 2022-12.</p>
</li>
<li><p>B) See also <a href="https://gist.github.com/phizaz/20c36c6734878c6ec053245a477572ec" rel="nofollow noreferrer">https://gist.github.com/phizaz/20c36c6734878c6ec053245a477572ec</a> for a gist proposing a similar approach.</p>
</li>
<li><p>C) <a href="https://www.aeracode.org/2018/02/19/python-async-simplified/" rel="nofollow noreferrer">https://www.aeracode.org/2018/02/19/python-async-simplified/</a> which points to <a href="https://github.com/django/asgiref/blob/main/asgiref/sync.py" rel="nofollow noreferrer">https://github.com/django/asgiref/blob/main/asgiref/sync.py</a></p>
</li>
<li><p>D) there is <a href="https://asyncer.tiangolo.com/" rel="nofollow noreferrer">https://asyncer.tiangolo.com/</a> which has almost a thousand stars.</p>
</li>
</ul>
<p><strong>I tried</strong></p>
<p><strong>A</strong></p>
<p>whith python 3.10 and got the error message:</p>
<pre><code> def async(*args):
^^^^^
SyntaxError: invalid syntax
CRITICAL: Exiting due to uncaught exception <class 'SyntaxError'>
</code></pre>
<p><strong>the gist B)</strong></p>
<p>with:</p>
<pre class="lang-py prettyprint-override"><code>scite_entry=force_sync(doi_obj.asScite)()
</code></pre>
<p>and got the error message:</p>
<pre><code>RuntimeError: This event loop is already running
</code></pre>
<p>which leads to the question:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/46827007/runtimeerror-this-event-loop-is-already-running-in-python">RuntimeError: This event loop is already running in python</a></li>
</ul>
<p>so it's not useable as a general syntactic sugar hiding the complex details - you have to know in which context you are running your code.</p>
<p><strong>asgiref C)</strong>
and get the error message:</p>
<pre class="lang-py prettyprint-override"><code>RuntimeError: You cannot use AsyncToSync in the same thread as an async event loop - just await the async function directly.
</code></pre>
<p>so it's again not useable as a general syntactic sugar hiding the complex details - you have to know in which context you are running your code.</p>
<p><strong>the tiangolo library D)</strong></p>
<p>with a single line:</p>
<pre class="lang-py prettyprint-override"><code>scite_entry=syncify(doi_obj.asScite)()
</code></pre>
<p>and got the error message:</p>
<pre><code>RuntimeError: This function can only be run from an AnyIO worker thread
</code></pre>
<p>so it's not generally useable.</p>
<p><strong>What would be a general approach that hides the involved complex infrastructure problems and might make it into a PEP in the future?</strong></p>
|
<python><asynchronous><async-await>
|
2022-12-26 19:37:50
| 0
| 15,707
|
Wolfgang Fahl
|
74,923,420
| 6,147,428
|
VS Code - How to add folders to the search path in a Python project?
|
<p>I've recently switched from Spyder to VS Code to code my Python projects. Spyder is great for me because it uses IPython, i.e., it is based on a REPL (interactive environment), but it still lacks some useful features as code refactoring. In turn, VS Code is superb because it provides a more sophisticated editor, but it is hard to configure properly. In my current project, I have this folder structure:</p>
<pre><code>\SysID
|--\src
|--\app
|--\core
|--\utils
|-- config.py
</code></pre>
<p>The root of my project (i.e., my workspace) is the folder \SysID and all my runnable scripts are stored within the \app folder. The custom functions I use are stored in the \core and \utils folders, so that I can't import them directly. In Spyder, I had a script (configure.py) to setup the environment, shown below:</p>
<pre><code># config.py
# This script configures your environment to run all files in this project
import sys, os
sys.path.append(os.path.join(os.path.dirname(os.getcwd()),'src','core'))
sys.path.append(os.path.join(os.path.dirname(os.getcwd()),'src','utils'))
</code></pre>
<p>Every time I opened the project, I had to run that script first. This solution seems a bit awkward, but it works fine - in Spyder. With VS Code though, it is useless because it opens a new Python session every time you run a script. By researching here and there over the internet (including Stackoverflow) I've tried this:</p>
<p><strong>(1)</strong> added a cwd key to the launch.json file as below:</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true,
"cwd": "${workspaceFolder}"
},
] }
</code></pre>
<p><strong>(2)</strong> added the following lines to the settings.json file as below:</p>
<pre><code>"terminal.integrated.env.windows": {
"PYTHONPATH": "${env:PYTHONPATH};
${workspaceFolder}/src/core;
${workspaceFolder}/src/utils"}
</code></pre>
<p><strong>(3)</strong> created a .env file in the root folder of the project with the following text:</p>
<pre><code>PYTHONPATH=${PYTHONPATH};./src/core # Use path separator ';' on Windows.
</code></pre>
<p>Nothing worked though. So, folks, what do I have to do to setup the search path of my project as I need? I am using Anaconda to run both Spyder and VS Code.</p>
|
<python><visual-studio-code><anaconda><pythonpath>
|
2022-12-26 19:28:01
| 1
| 613
|
Humberto Fioravante Ferro
|
74,923,377
| 7,788,402
|
Python library import fails when executed with 'pOpen' sub process from another Python script
|
<p>I have a Python service that needs to run another Python script (<em>in a completely different folder path</em>) using "pOpen". That file needs to import multiple classes in different folders. 'pOpen' fails to run the script with an error message that import fails as follows :</p>
<pre><code>Traceback (most recent call last):
File "C:\basic_detection\creatorop.py", line 2, in <module> from detection_Inference import * File "C:\basic_detection\detection_Inference.py",
line 13, in <module> from notebook_utils import segmentation_map_to_image, to_rgb ModuleNotFoundError: No module named 'notebook_utils'. Try again.
</code></pre>
<p>I am calling the script to run with the following statement :<br />
process = Popen(command_arg_list, shell=True, stdout=PIPE, stderr=PIPE)</p>
<p>The file I am calling is</p>
<blockquote>
<p>creatorop.py</p>
</blockquote>
<p>The header of the file is as follows :</p>
<pre><code>import argparse
from detection_Inference import *
#import mytracker as m
sys.path.append(".")
sys.path.append("../utils")
sys.path.append("./utils")
from notebook_utils import segmentation_map_to_image, to_rgb
parser = argparse.ArgumentParser()
</code></pre>
<p><strong>Important</strong> : Each folder in this source path includes empty <strong>init</strong>.py files.</p>
<p>The folder structure where creatorop.py resides is as follows :</p>
<pre><code>C:\basic_detection
> creatorop.py
> detection_inference.py
> __initi__py
<utils>
---> notebook_utils.py
---> __init__.py
</code></pre>
<p>I appended all possible paths, including the 'utils' folder, but the system did not see it.</p>
<p>When I run the creatorop.py in its folder, it runs without any issues. This only happens when I run it from 'pOpen' from another application.</p>
<p>Any help ?</p>
|
<python><import><popen>
|
2022-12-26 19:19:43
| 0
| 2,301
|
PCG
|
74,923,308
| 1,842,491
|
How can I keep poetry and commitizen versions synced?
|
<p>I have a <code>pyproject.toml</code> with</p>
<pre><code>[tool.poetry]
name = "my-project"
version = "0.1.0"
[tool.commitizen]
name = "cz_conventional_commits"
version = "0.1.0"
</code></pre>
<p>I add a new feature and commit with commit message</p>
<pre><code>feat: add parameter for new feature
</code></pre>
<p><strong>That's one commit.</strong></p>
<p>Then I call</p>
<pre><code>commitizen bump
</code></pre>
<p>Commitizen will recognize a minor version increase, update my <code>pyproject.toml</code>, and commit again with the updated <code>pyproject.toml</code> and a tag <code>0.2.0</code>.</p>
<p><strong>That's a second commit.</strong></p>
<p>But now my <code>pyproject.toml</code> is "out of whack" (assuming I want my build version in sync with my git tags).</p>
<pre><code>[tool.poetry]
name = "my-project"
version = "0.1.0"
[tool.commitizen]
name = "cz_conventional_commits"
version = "0.2.0"
</code></pre>
<p>I'm two commits in, one tagged, and things still aren't quite right. Is there workflow to keep everything aligned?</p>
|
<python><python-poetry><commitizen>
|
2022-12-26 19:09:05
| 1
| 1,509
|
Shay
|
74,923,172
| 2,365,595
|
Spotify /authorize endpoint
|
<p>I would like to know which is the /authorize endpoint, after some search I see that it's <a href="https://accounts.spotify.com/authorize?client_ID" rel="nofollow noreferrer">https://accounts.spotify.com/authorize?client_ID</a>, but why? where is the documentation about it? I don't see any endpoints list and the guide only tell me to "send a GET request to the /authorize endpoint" without tell me which is the endpoint.
Sorry, I only want to know the logic behind the endpoint address</p>
<p>Thank you</p>
|
<python><web-applications><spotify><webapi><spotify-app>
|
2022-12-26 18:50:35
| 0
| 575
|
Aidoru
|
74,923,140
| 15,649,230
|
Matplotlib memory leak using FigureCanvasTkAgg
|
<p>is there any way to clear matplotlib memory usage from a tkinter application, the following code is taken from <a href="https://matplotlib.org/stable/gallery/user_interfaces/embedding_in_tk_sgskip.html" rel="nofollow noreferrer">Embedding in Tk</a>, i just put it in a loop to make the memory leak more clear.</p>
<pre class="lang-py prettyprint-override"><code>import tkinter
import matplotlib
print(matplotlib._version.version)
matplotlib.use("TkAgg")
from matplotlib.backends.backend_tkagg import (
FigureCanvasTkAgg, NavigationToolbar2Tk)
import gc
# Implement the default Matplotlib key bindings.
from matplotlib.backend_bases import key_press_handler
from matplotlib.figure import Figure
import psutil
import os, psutil
process = psutil.Process(os.getpid())
import numpy as np
import time
root = tkinter.Tk()
frame = tkinter.Frame(root)
root.wm_title("Embedding in Tk")
import matplotlib.pyplot as plt
def my_func():
global root,frame
fig = Figure(figsize=(5, 4), dpi=100)
t = np.arange(0, 3, .01)
ax = fig.add_subplot()
line = ax.plot(t, 2 * np.sin(2 * np.pi * t))
ax.set_xlabel("time [s]")
ax.set_ylabel("f(t)")
canvas = FigureCanvasTkAgg(fig, master=frame) # A tk.DrawingArea.
canvas.draw()
# pack_toolbar=False will make it easier to use a layout manager later on.
toolbar = NavigationToolbar2Tk(canvas, frame, pack_toolbar=False)
toolbar.update()
toolbar.pack(side=tkinter.BOTTOM, fill=tkinter.X)
canvas.get_tk_widget().pack(side=tkinter.TOP, fill=tkinter.BOTH, expand=True)
time.sleep(0.1)
# everything i tried to clear memory
ax = fig.axes[0]
ax.clear()
canvas.get_tk_widget().pack_forget()
toolbar.pack_forget()
canvas.figure.clear()
canvas.figure.clf()
canvas.get_tk_widget().destroy()
toolbar.destroy()
mem = process.memory_info().rss/2**20
print(mem) # in bytes
if mem > 1000:
root.destroy()
frame.destroy()
frame = tkinter.Frame(root)
root.after(10,my_func)
gc.collect()
if __name__ == "__main__":
root.after(1000,my_func)
root.mainloop()
</code></pre>
<p>it just keeps eating memory up to 1000 MBs,</p>
<p>i tried everything to remove this memory leak without hope, i tried the answer here, but it also didn't work <a href="https://stackoverflow.com/questions/28757348/how-to-clear-memory-completely-of-all-matplotlib-plots">How to clear memory completely of all matplotlib plots</a>.</p>
<p>just updating the figure instead of creating a new figure on each loop iteration would "avoid" some of the memory leak, but it doesn't "fix it", how do i reclaim this memory ?</p>
<p>this issue seems related <a href="https://github.com/matplotlib/matplotlib/issues/20490" rel="nofollow noreferrer">https://github.com/matplotlib/matplotlib/issues/20490</a> but i am using version <code>3.6.2</code> which should have it fixed, i can duplicate it on almost all python versions on windows, (but the code in the issue doesn't produce this problem)</p>
<p>tracemalloc only shows around 1 MB was allocated on python side, so the rest of the leak is on C side ... something isn't getting cleaned up.</p>
<p>Edit: this also seems related <a href="https://stackoverflow.com/questions/55053568/tkinter-memory-leak-with-canvas">Tkinter - memory leak with canvas</a>, but the canvases are correctly reclaimed, so it's not a bug in the canvases or tk.</p>
<p>Edit2: the renderer on the C side is not getting freed ... althought there seems to be no reference to it.</p>
|
<python><matplotlib><tkinter><memory-leaks>
|
2022-12-26 18:46:22
| 1
| 23,158
|
Ahmed AEK
|
74,923,076
| 4,898,127
|
Check constraint noninfringement in PuLP
|
<p>I have just created a model in PuLP, though its result seems to have violated its constraints. Is there a quick way to check the value of each constraint?</p>
|
<python><linear-programming><pulp>
|
2022-12-26 18:36:12
| 0
| 351
|
Incognito
|
74,923,024
| 15,875,806
|
proper way to communicate between more than two processes in pyqt5 python
|
<p>I currently have two processes, one is my pyqt5 MainWindow (parent process) and a child process. I mainly use Pipe to communicate between the two. As per my research, a Pipe can have only two end-points which in my case, it has already been given to the two end-points above. But I now am in need of a temporary third/brother process which is created when a user click on pyqt5 button any time. The brother process needs to send pyqtSignal to parent process for result output.</p>
<p>Why not use threads? 1. I don't want the child thread to access parent thread's memory. 2. there is no direct way to abruptly kill a thread unless using <code>try & except</code> method, which I cannot use. Hence, I need to use multiprocessing.</p>
<p>Can anyone tell me If I need a third pipe for my brother process, if so, how do I implement it to my pyqt5 application. Reproducible code is below.</p>
<p>Code <strong>main.py</strong>:</p>
<pre><code>import os
import sys
import PyQt5
from PyQt5 import QtWidgets
from multiprocessing import Process, Queue, Pipe
from untitled import Ui_MainWindow
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyQt5.QtCore import *
from PyQt5.QtCore import pyqtSignal, QObject, Qt
from PyQt5.QtGui import QPixmap, QStandardItemModel, QStandardItem, QIcon
from multiprocessing import Process, Queue, Pipe
import time
sys.path.insert(0, os.path.abspath("."))
class Emitter(QThread):
pcLBL = pyqtSignal(str)
def __init__(self, from_process: Pipe):
super().__init__()
self.data_from_process = from_process
def run(self):
while True:
try:
sd = self.data_from_process.recv()
except EOFError:
break
else:
if(len(sd) > 1 and sd[0] == "pcLBL"):
self.pcLBL.emit(sd[1])
class childProc(Process):
def __init__(self, to_emitter: Pipe, from_mother: Queue, daemon=True):
super().__init__()
self.daemon = daemon
self.to_emitter = to_emitter
self.data_from_mother = from_mother
def run(self):
i = 0
while(True):
print("talking to mommy")
self.to_emitter.send(["pcLBL", "Yollo mommy, I am your first child "+str(i)])
i += 1
time.sleep(10)
class MainWindow(QMainWindow, Ui_MainWindow):
def __init__(self, child_process_queue: Queue, emitter: Emitter):
QMainWindow.__init__(self)
self.process_queue = child_process_queue
self.emitter = emitter
self.emitter.daemon = True
self.emitter.start()
self.setupUi(self)
self.mainCode()
def mainCode(self):
self.someBTN.clicked.connect(self.testFunc)
self.emitter.pcLBL.connect(self.pcLBL.setText)
def testFunc(self):
files = ["c:\\windows\\system32\\cmd.exe"]
mother_pipe, child_pipe = Pipe()
queue = Queue()
emitter = Emitter(mother_pipe)
broProc = brotherProc(files[0], child_pipe, self.process_queue)
broProc.start()
class brotherProc(Process):
def __init__(self, files, to_emitter: Pipe, from_mother: Queue, daemon=True):
super().__init__()
self.daemon = daemon
self.to_emitter = to_emitter
self.data_from_mother = from_mother
self.files = files
def run(self):
print("started")
doStuff(self.to_emitter, self.files, 2)
def doStuff(signaller, file, num):
signaller.send(["pcLBL", "Hello mother"])
def pickleChildProc(child_pipe, queue):
cp = childProc(child_pipe, queue)
cp.start()
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
mother_pipe, child_pipe = Pipe()
queue = Queue()
emitter = Emitter(mother_pipe)
childProc = pickleChildProc(child_pipe, queue)
Dialog = MainWindow(queue, emitter)
Dialog.show()
sys.exit(app.exec_())
</code></pre>
<p><strong>pyqt5Main.py:</strong></p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 600)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.someBTN = QtWidgets.QPushButton(self.centralwidget)
self.someBTN.setGeometry(QtCore.QRect(260, 260, 251, 23))
self.someBTN.setObjectName("someBTN")
self.pcLBL = QtWidgets.QLabel(self.centralwidget)
self.pcLBL.setGeometry(QtCore.QRect(30, 350, 731, 20))
self.pcLBL.setObjectName("pcLBL")
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 22))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.someBTN.setText(_translate("MainWindow", "Create Process"))
self.pcLBL.setText(_translate("MainWindow", "Show results here"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
<p>With above on line <code>signaller.send(["pcLBL", "Hello mother process"])</code> in the brother process, I get <code>BrokenPipeError: [WinError 232] The pipe is being closed</code>.</p>
<p>Can anyone tell me what is wrong with my code?</p>
|
<python><pyqt5><multiprocessing>
|
2022-12-26 18:29:21
| 0
| 305
|
hashy
|
74,922,991
| 1,907,765
|
Can you specify a bidirectional edge in a NetworkX digraph?
|
<p>I'd like to be able to draw a NetworkX graph connecting characters from the movie "Love, Actually" (because it's that time of the year in this country), and specifying how each character "relates" to the other in the story.</p>
<p>Certain relationships between characters are unidirectional - e.g. Mark is in love with Juliet, but not the reverse. However, Mark is best friends with Peter, and Peter is best friends with Mark - this is a bidirectional relationship. Ditto Peter and Juliet being married to each other.</p>
<p>I'd like to specify both kinds of relationships. Using a NetworkX digraph in Python, I seem to have a problem: to specify a bidirectional edge between two nodes, I apparently have to provide the same link twice, which will subsequently create two arrows between two nodes.</p>
<p>What I'd really like is a single arrow connecting two nodes, with heads pointing both ways. I'm using NetworkX to create the graph, and pyvis.Network to render it in HTML.</p>
<p>Here is the code so far, which loads a CSV specifying the nodes and edges to create in the graph.</p>
<pre><code>import networkx as nx
import csv
from pyvis.network import Network
dg = nx.DiGraph()
with open("rels.txt", "r") as fh:
reader = csv.reader(fh)
for row in reader:
if len(row) != 3:
continue # Quick check for malformed csv input
dg.add_edge(row[0], row[1], label=row[2])
nt = Network('500px', '800px', directed=True)
nt.from_nx(dg)
nt.show('nx.html', True)
</code></pre>
<p>Here is the CSV, which can be read as "Node1", "Node2", "Edge label":</p>
<pre><code>Mark,Juliet,in love with
Mark,Peter,best friends
Peter,Mark,best friends
Juliet,Peter,married
Peter,Juliet,married
</code></pre>
<p>And the resulting image:</p>
<p><a href="https://i.sstatic.net/qooel.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qooel.png" alt="enter image description here" /></a></p>
<p>Whereas what I'd really like the graph to look like is this:</p>
<p><a href="https://i.sstatic.net/dnF0f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dnF0f.png" alt="enter image description here" /></a></p>
<p>(Thank you to <a href="https://csacademy.com/app/graph_editor/" rel="nofollow noreferrer">this site for the wonderful graph tool</a> for the above visualisation)</p>
<p>Is there a way to achieve the above visualisation using NetworkX and Pyvis? I wasn't able to find any documentation on ways to create bidirectional edges in a directed graph.</p>
|
<python><networkx><digraphs><pyvis>
|
2022-12-26 18:24:17
| 1
| 2,527
|
Lou
|
74,922,987
| 3,247,006
|
What is "django_admin_log" used for in Django Admin?
|
<p>When adding data (I use PostgreSQL):</p>
<p><a href="https://i.sstatic.net/uBW7q.png" rel="noreferrer"><img src="https://i.sstatic.net/uBW7q.png" alt="enter image description here" /></a></p>
<p><strong>"django_admin_log"</strong> is inserted as shown below. *These below are <strong>the PostgreSQL query logs</strong> and you can check <a href="https://stackoverflow.com/questions/54780698/postgresql-database-log-transaction/73432601#73432601"><strong>how to log PostgreSQL queries</strong></a>:</p>
<p><a href="https://i.sstatic.net/9GVHW.png" rel="noreferrer"><img src="https://i.sstatic.net/9GVHW.png" alt="enter image description here" /></a></p>
<p>And, when changing data:</p>
<p><a href="https://i.sstatic.net/sEJ3X.png" rel="noreferrer"><img src="https://i.sstatic.net/sEJ3X.png" alt="enter image description here" /></a></p>
<p><strong>"django_admin_log"</strong> is inserted as shown below:</p>
<p><a href="https://i.sstatic.net/xVerF.png" rel="noreferrer"><img src="https://i.sstatic.net/xVerF.png" alt="enter image description here" /></a></p>
<p>And, when clicking <strong>Delete</strong> on <strong><code>Change</code> page</strong>:</p>
<p><a href="https://i.sstatic.net/G9e3D.png" rel="noreferrer"><img src="https://i.sstatic.net/G9e3D.png" alt="enter image description here" /></a></p>
<p>Then, clicking <strong>Yes, I'm sure</strong> to delete data:</p>
<p><a href="https://i.sstatic.net/Mp4ni.png" rel="noreferrer"><img src="https://i.sstatic.net/Mp4ni.png" alt="enter image description here" /></a></p>
<p><strong>"django_admin_log"</strong> is inserted as shown below:</p>
<p><a href="https://i.sstatic.net/IqLLb.png" rel="noreferrer"><img src="https://i.sstatic.net/IqLLb.png" alt="enter image description here" /></a></p>
<p>So, what is <strong>"django_admin_log"</strong> used for in Django Admin?</p>
|
<python><python-3.x><django><logging><django-admin>
|
2022-12-26 18:24:06
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
74,922,979
| 17,658,327
|
Defined attributes that are seemingly not used at all for instance objects of some loss classes
|
<p>I will try to explain my question using an example.
Consider the <a href="https://github.com/keras-team/keras/blob/e6784e4302c7b8cd116b74a784f4b78d60e83c26/keras/losses.py#L576" rel="nofollow noreferrer">definition of the BinaryCrossentropy loss class</a> as shown in the following.</p>
<pre><code>@keras_export("keras.losses.BinaryCrossentropy")
class BinaryCrossentropy(LossFunctionWrapper):
def __init__(
self,
from_logits=False,
label_smoothing=0.0,
axis=-1,
reduction=losses_utils.ReductionV2.AUTO,
name="binary_crossentropy",
):
super().__init__(
binary_crossentropy,
name=name,
reduction=reduction,
from_logits=from_logits,
label_smoothing=label_smoothing,
axis=axis,
)
self.from_logits = from_logits
</code></pre>
<p>Why defining the attribute of <code>from_logits</code> for an instance object?
My understanding is that for an instance object, the attributes of <code>fn</code>, <code>name</code>, and <code>reduction</code> are assigned within the <code>__init__</code> method of <code>LossFunctionWrapper</code> (<a href="https://github.com/tensorflow/addons/blob/b2dafcfa74c5de268b8a5c53813bc0b89cadf386/tensorflow_addons/utils/keras_utils.py#L24" rel="nofollow noreferrer">Source code for LossFunctionWrapper</a>) by passing the corresponding arguments to <code>super().__init__</code> from inside the <code>__init__</code> method of the loss class (BinaryCrossentropy in this example).
The rest of the keyword argumets (from_logits, label_smoothing, and axis) are assigned to <code>self._fn_kwargs</code> to be passed on to <code>fn</code> when the instance is called.</p>
<p>Why does an instance object of BinaryCrossentropy need to have the attribute of <code>from_logits</code> while (my understanding is that) this attribute is not used anywhere except within <code>fn</code>, which is already provided with the required value? The BinaryFocalCrossentropy class seems to have a similar issue. I would be grateful if someone could please clarify.</p>
|
<python><tensorflow>
|
2022-12-26 18:22:48
| 0
| 626
|
learner
|
74,922,913
| 17,696,880
|
Why does this capture limit set with regex fail by modifying substrings that it shouldn't?
|
<pre class="lang-py prettyprint-override"><code>import re
def date_to_internal_stored_format(input_text, identify_only_4_digit_years = False, limit_numbers_immediately_after_date = True):
#grupo de captura para fechas en donde el año tiene si o si 4 digitos
if (identify_only_4_digit_years == True):
if(limit_numbers_immediately_after_date == True):
date_capture_pattern = r"([12]\d{3}-[01]\d-[0-3]\d)(\D*?)"
elif(limit_numbers_immediately_after_date == False):
date_capture_pattern = r"([12]\d{3}-[01]\d-[0-3]\d)"
input_text = re.sub(date_capture_pattern, lambda m: m.group().replace("-", "_-_", 2), input_text)
return input_text
input_text = "el dia 2022-12-2344 o sino el dia 2022-09-23 10000-08-23"
input_text = date_to_internal_stored_format(input_text, False, True)
print(repr(input_text))
</code></pre>
<p>Why doesn't the limit imposed behind work so that after the last number of the date there can be no more numbers to be considered a group that must be captured?</p>
<p>I need this output where <code>2022-12-23</code> is not captured I need this output where this is not captured, since there are more than 2 digits at the end of the date <code>(\D*?)</code></p>
<pre class="lang-py prettyprint-override"><code>'el dia 2022-12-2344 o sino el dia 2022_-_09_-_23 10000_-_08_-_23'
</code></pre>
|
<python><python-3.x><regex><replace><regex-group>
|
2022-12-26 18:13:04
| 0
| 875
|
Matt095
|
74,922,897
| 14,584,978
|
Apply str.contains for different in strings on pandas dataframe or groupby object in pandas or dask
|
<p>I would like to preform str.contains() elementwise with some format like:</p>
<pre><code>df['superstring'].str.contains(df['substring'])
</code></pre>
|
<python><pandas><string><dataframe><dask>
|
2022-12-26 18:09:54
| 1
| 374
|
Isaacnfairplay
|
74,922,780
| 18,059,131
|
How to get list of amplitude values from a wave file in python
|
<p>How could I get a list of the amplitudes of each frame in a wave file using python (in the dB unit)?</p>
<p>So far I have this:</p>
<pre><code>samplerate, data = wavfile.read(patternPath)
print(max((data[:, 1]).tolist())))
</code></pre>
<p>but that prints out <code>0.7856917381286621</code>, which doesn't make much sense because I know that the wav file never surpasses 0 dB.</p>
|
<python><audio>
|
2022-12-26 17:53:57
| 2
| 318
|
prodohsamuel
|
74,922,758
| 580,937
|
SnowparkFetchDataException: (1406): Failed to fetch a Pandas Dataframe. The error is: Found non-unique column index
|
<p>While running some code like this:</p>
<pre class="lang-py prettyprint-override"><code> session = ...
return session.table([DB,SCHEMA, MANUAL_METRICS_BY_SIZE]).select("TECHNOLOGY","OBJECTTYPE","OBJECTTYPE","SIZE","EFFORT").to_pandas()
</code></pre>
<p>I got this error.</p>
<p>Any idea of what might be causing this?</p>
|
<python><pandas><snowflake-cloud-data-platform>
|
2022-12-26 17:50:08
| 1
| 2,758
|
orellabac
|
74,922,725
| 3,074,348
|
Can't add "list:xxxxxxxxxxx" StreamRule to StreamingClient of Twitter API v2
|
<p>I can't add "list:XXXXXXXXXX" to my rules (I can add other rules though and they work).
What I'm missing?</p>
<pre><code>import tweepy
class TweetPrinter(tweepy.StreamingClient):
def on_tweet(self, tweet):
print(tweet)
printer = TweetPrinter(bearer_token)
printer.add_rules(tweepy.StreamRule("list:XXXXXXXXXX"))
printer.filter()
</code></pre>
<p>After executing this code instead of creating a StreamRule with an id, I got this (checking the list of the rules with <code>printer.get_rules()</code>:</p>
<p><code>Response(data=None, includes={}, errors=[], meta={'sent': '2022-12-27T00:23:44.073Z', 'result_count': 0}) </code></p>
<p>I'm using <a href="https://docs.tweepy.org/en/stable/streamrule.html" rel="nofollow noreferrer">Tweepy 4.12</a> and <a href="https://developer.twitter.com/en/docs/twitter-api/tweets/search/integrate/build-a-query" rel="nofollow noreferrer">Twitter API v2</a>.</p>
|
<python><tweepy><twitterapi-python><twitter-api-v2><sttwitterapi>
|
2022-12-26 17:45:35
| 0
| 342
|
arthur
|
74,922,611
| 6,594,089
|
Stripe payment failing after trial period
|
<p>I'm attempting to create a monthly subscription with free 7 day trial, but after the trial period the payment fails.</p>
<p>EDIT: It appears to fail because the customer has no default payment method, so despite the payment method being attached to customer, it is not set to default. I can not figure out how to set it to default payment method.</p>
<p>I am setting ConfirmCardSetup in frontend javascript, which I believe is tying the card to the customer. And I am creating the customer, and starting the subscription/trial in my backend django view.</p>
<p>I have found this in Stripe documentation:</p>
<blockquote>
<p>"To use this PaymentMethod as the default for invoice or subscription
payments, set invoice_settings.default_payment_method, on the Customer
to the PaymentMethod’s ID."</p>
</blockquote>
<p>but I am unsure how to get the Payment Method ID from front end, and use it to update the customer.</p>
<p>Here is my subscription create view:</p>
<pre><code>class SubscriptionCreateView(View, SubscriptionCancellationMixin,
CustomerMixin, StripeReferenceMixin, FetchCouponMixin):
"""View to Create subscription and sync with stripe"""
def post(self, request, *args, **kwargs):
context = {}
subscription_payload = {}
price_id = request.POST.get('price_id')
coupon_id = request.POST.get('coupon_id')
customer = self.fetch_customer(request.user)
#Retrieve stripe coupon id or None
if coupon_id != '':
stripe_coupon_id, is_valid = self.is_coupon_valid(coupon_id)
#Set payload to create a subscription
subscription_payload['coupon'] = stripe_coupon_id
#Send invalid coupon response
if not is_valid:
context['invalid_coupon'] = True
response = JsonResponse(context)
response.status_code = 400
return response
if not customer:
customer = self.stripe.Customer.create(
email=request.user.email
)
try:
now = int(time.time())
# Cancel the previous subscription.
self.cancel_subscription(customer)
#Create a new subscription
subscription_payload.update({
'customer':customer.id,
'items': [{
'price': price_id,
},
],
'trial_end': now +30,
},
)
#create a setup intent
setup_intent = self.stripe.SetupIntent.create(
payment_method_types=["card"],
customer=customer.id,
)
subscription = self.stripe.Subscription.create(**subscription_payload)
# Sync the Stripe API return data to the database,
# this way we don't need to wait for a webhook-triggered sync
Subscription.sync_from_stripe_data(subscription)
request.session['created_subscription_id'] = subscription.get('id')
# Note we're sending the Subscription's
# latest invoice and client secret
# to the front end to confirm the payment
context['subscriptionId'] = subscription.id
context['clientSecret'] = setup_intent['client_secret']
except Exception as e:
response = JsonResponse({})
response.status_code = 400
return response
return JsonResponse(context)
</code></pre>
<p>And here's the relevant javascript!</p>
<pre><code>var stripe = Stripe($('#public-key').val());
var elements = stripe.elements();
var style = {
base: {
color: "#32325d",
}
};
var card = elements.create("card", { style: style });
card.mount("#card-element");
//Capture modal payment button click
$(document).on('click', '#submit-payment-btn', function (e) {
e.preventDefault();
//Send an ajax call to backend to create a subscription
$(this).prop('disabled', true);
let spinnerHtml = '<div class="spinner-border text-light"></div>';
let origButtonHtml = $(this).html();
$(this).html(spinnerHtml);
$.ajax({
url: $(this).data('create-sub-url'),
type: 'POST',
data: {
csrfmiddlewaretoken: $('[name="csrfmiddlewaretoken"]').val(),
'price_id': $(this).data('price-id'),
'coupon_id': $("#coupon_id").val()
},
success: function (result) {
if(!result.clientSecret)
{
console.log("result not okay")
window.location.href = '/'
}
else{
$('#client-secret').val(result.clientSecret);
// Confirm payment intent.
console.log('set up confirmed');
stripe.confirmCardSetup(result.clientSecret, {
payment_method: {
card: card,
billing_details: {
name: $('#cardholder-name').val(),
},
}
}).then((result) => {
if (result.error) {
alert('Payment failed:'. result.error.message);
} else {
window.location.href = '/'
}
});
}
},
error: function (result){
$('#submit-payment-btn').html(origButtonHtml);
if(result.responseJSON.invalid_coupon)
{
if($(".coupon-error").length == 0)
$('.coupon-div').append('<p class="coupon-error text-center" style="color:red">Invalid coupon code</p>')
}
else{
$('#payment-modal').modal('hide');
alert('Something Went Wrong!')
}
}
}
);
})
</code></pre>
<p>Thanks for any help or point in the right direction! I've been trying to solve this for hours!</p>
|
<javascript><python><stripe-payments><subscription>
|
2022-12-26 17:31:09
| 0
| 459
|
LBJ33
|
74,922,542
| 4,645,982
|
Update record in DynamoDB for reserved Keyword
|
<p>Following data mapped in DynamoDB for record_id 7, I want to update customer with new value</p>
<pre><code>"customer": {
"value": "id2",
"label": "Customer2"
}
</code></pre>
<p>However, dyanamoDB does not allow to update because of "ValidationException: Invalid UpdateExpression: Attribute name is a reserved keyword; reserved keyword: value".</p>
<p>Record in DynamoDB:</p>
<pre><code>{
"terms": "Terms",
"action": {
"value": "id1",
"label": "In Progress"
},
"line_items": [{
"product": "dd",
"quantity": "10",
}],
"_id": "7",
"customer": {
"value": "id1",
"label": "Customer1"
}
}
updateExpression = "set "
updateValues = dict()
expression_attributes_names = {}
for key, value in dictData.items():
key1 = key.replace('.', '.#')
updateExpression +=" #{key1} = :{key},"
updateValues[f":{key}"] = value
expression_attributes_names[f"#{key1}"] = key
table.update_item(
Key={
'id': item_id
},
UpdateExpression=updateExpression,
ExpressionAttributeValues=updateValues
ExpressionAttributeNames=expression_attributes_names
)
</code></pre>
|
<python><python-3.x><amazon-web-services><amazon-dynamodb><boto3>
|
2022-12-26 17:21:37
| 2
| 2,676
|
Neelabh Singh
|
74,922,497
| 1,245,659
|
post action drops part of the url in django app
|
<p>I'm building my first ecommerce app, while attempting to gain more skills in Django. I have a form problem, where I am either adding a product, or editing one, using the same Template. My problem is where the action call drops part of the url when submitting POST back to the server ( right now, it's just the <code>python manage.py runserver</code>).</p>
<p>When I go to edit, I see the url: <code>http://127.0.0.1:8000/mystore/edit-product/4</code>
When I edit the product and click submit, I get the <code>Request URL: http://127.0.0.1:8000/mystore/edit-product/</code> page not found error.<br />
This error prevents me from returning to the view to determine anything else. please note, I am missing the last part of the url (<code>4</code>), which is the crux of my issue.</p>
<p>It looks like I'm missing something. This is what I have</p>
<p><strong>userprofile/url.py</strong></p>
<pre><code>from django.urls import path
from django.contrib.auth import views as auth_views
from . import views
urlpatterns = [
path('signup/', views.signup, name='signup'),
path('logout/', auth_views.LogoutView.as_view(), name='logout'),
path('login/', auth_views.LoginView.as_view(template_name='userprofile/login.html'), name='login'),
path('myaccount/', views.myaccount, name='myaccount'),
path('mystore/', views.my_store, name='my_store'),
path('mystore/add-product/', views.add_product, name='add-product'),
path('mystore/edit-product/<int:pk>', views.edit_product, name='edit-product'),
path('vendors/<int:pk>/', views.vendor_detail, name='vendor_detail')
]
</code></pre>
<p><strong>store/urls.py</strong></p>
<pre><code>from django.urls import path
from . import views
urlpatterns =[
path('search/', views.search, name='search'),
path('<slug:slug>', views.category_detail, name='category_detail'),
path('<slug:category_slug>/<slug:slug>', views.product_detail, name='product_detail')
]
</code></pre>
<p><strong>core/urls.py</strong> <-- the base urls</p>
<pre><code>from django.urls import path,include
from .views import frontpage, about
urlpatterns =[
path('', include('userprofile.urls')),
path('', frontpage, name='frontpage'),
path('about', about, name='about'),
path('', include('store.urls'))
]
</code></pre>
<p><strong>Views</strong></p>
<pre><code>@login_required
def add_product(request):
if request.method == 'POST':
form = ProductForm(request.POST, request.FILES)
if form.is_valid():
title = request.POST.get('title')
slug = slugify(title)
product = form.save(commit=False)
product.user = request.user
product.slug = slug
product.save()
return redirect('my_store')
else:
form = ProductForm()
return render(request, 'userprofile/add-product.html', {
'title': 'Add Product',
'form':form
})
@login_required
def edit_product(request, pk):
product = Product.objects.filter(user=request.user).get(pk=pk)
if request.method == 'POST':
form = ProductForm(request.POST, request.FILES, instance=product)
if form.is_valid():
form.save()
return redirect('my_store')
else:
form = ProductForm(instance=product)
return render(request, 'userprofile/add-product.html', {
'title': 'Edit Product',
'form': form
})
</code></pre>
<p><strong>add-product.html</strong> (Note: template is used for both add and edit. the variable <code>title</code> distinguishes from the two.)</p>
<pre><code>{% extends 'core/base.html' %}
{% block title %}My Store{% endblock %}
{% block content %}
<h1 class="text-2xl">My Store</h1>
<h2 class="mt-6 text-xl ">{{ title }}</h2>
<form method="post" action="." enctype="multipart/form-data">
{% csrf_token %}
{{ form.as_p }}
<button class="mt-6 py-4 px-8 bg-teal-500 text-white hover:bg-teal-800">Save</button>
</form>
{% endblock %}
</code></pre>
<p>form.py</p>
<pre><code>from django import forms
from .models import Product
class ProductForm(forms.ModelForm):
class Meta:
model = Product
fields = ('category', 'title', 'description', 'price', 'images', )
</code></pre>
<p><strong>logs</strong></p>
<pre><code>[26/Dec/2022 20:27:32] "GET /mystore/edit-product/4 HTTP/1.1" 200 2690
Not Found: /mystore/edit-product/
[26/Dec/2022 20:27:47] "POST /mystore/edit-product/ HTTP/1.1" 404 5121
</code></pre>
<p>The not found indicates the response from pressing the "submit" button.</p>
<p>Thank you.</p>
|
<python><django><django-views><django-forms>
|
2022-12-26 17:14:38
| 3
| 305
|
arcee123
|
74,922,314
| 2,216,953
|
yield from vs yield in for-loop
|
<p>My understanding of <code>yield from</code> is that it is similar to <code>yield</code>ing every item from an iterable. Yet, I observe the different behavior in the following example.</p>
<p>I have <code>Class1</code></p>
<pre><code>class Class1:
def __init__(self, gen):
self.gen = gen
def __iter__(self):
for el in self.gen:
yield el
</code></pre>
<p>and Class2 that different only in replacing <code>yield</code> in for loop with <code>yield from</code></p>
<pre><code>class Class2:
def __init__(self, gen):
self.gen = gen
def __iter__(self):
yield from self.gen
</code></pre>
<p>The code below reads the first element from an instance of a given class and then reads the rest in a for loop:</p>
<pre><code>a = Class1((i for i in range(3)))
print(next(iter(a)))
for el in iter(a):
print(el)
</code></pre>
<p>This produces different outputs for <code>Class1</code> and <code>Class2</code>. For <code>Class1</code> the output is</p>
<pre><code>0
1
2
</code></pre>
<p>and for <code>Class2</code> the output is</p>
<pre><code>0
</code></pre>
<p><a href="https://godbolt.org/z/sjb54zcTx" rel="noreferrer">Live demo</a></p>
<p>What is the mechanism behind <code>yield from</code> that produces different behavior?</p>
|
<python><generator><yield><python-internals><yield-from>
|
2022-12-26 16:48:14
| 2
| 628
|
erzya
|
74,922,197
| 7,333,766
|
Can I safely remove all u-strings from a project that will only use python 3?
|
<p>I work on a project that includes many <code>u"string"</code> in its codebase,</p>
<p>I want to know if I can safely remove the <code>u</code> in front of all those strings knowing that the project will only use Python 3 from now on (it used to use both python 2 and 3)</p>
<p>I have only <a href="https://bugs.python.org/issue33551" rel="nofollow noreferrer">one source</a> that says :</p>
<blockquote>
<p>"The string prefix u is used exclusively for compatibility with Python
2."</p>
</blockquote>
|
<python><python-3.x>
|
2022-12-26 16:30:52
| 3
| 2,215
|
Eli O.
|
74,922,075
| 20,054,635
|
How to specify wildcard filenames for .Zip type files in Python Variables
|
<p>My requirement is</p>
<p>I'm calling a function to extract files/folders from a .Zip file</p>
<pre><code>files_extract_with_structure(file_source, file_dest)
</code></pre>
<p><code>file_source</code> and <code>file_dest</code> are the variables I'm passing to the above function and the value of the <code>file_source</code> variable is defined as below.</p>
<pre><code>file_source = "/dbfs/mnt/devadls/pre/Source_Files/2022-10/767676.XXX.XXX.XXXX.20221010090858.txt.zip"
</code></pre>
<p>where <code>767676.XXX.XXX.XXXX.20221010090858.txt.zip</code> is the zip file name</p>
<p>The above function works fine if I pass the <code>file_source</code> variable value as above (hardcoded the zip file name)</p>
<p>My requirement is instead of hard coding the zip file name, Can we specify the wild card file names as below?</p>
<pre><code>file_source = "/dbfs/mnt/devadls/pre/Source_Files/2022-10/767676.XXX.XXX.XXXX.*.txt.zip"
</code></pre>
<p>because I will receive the same file with a different date in the next month and so on...</p>
<p>But when I specify the wildcard names as <code>"767676.XXX.XXX.XXXX.*.txt.zip"</code>, it is throwing a `No such file or directory error.</p>
<p>Kindly help to resolve this issue.
Thanks.</p>
|
<python><pyspark><azure-databricks>
|
2022-12-26 16:13:36
| 1
| 369
|
Anonymous
|
74,921,877
| 13,707,795
|
Url with umlaut (non ascii), dashes and brackets is not parseable by requests
|
<p>I am trying to get the HTML content of a page with requests, but it results in <code>UnicodeDecodeError</code>. The reproducible code:</p>
<pre class="lang-py prettyprint-override"><code>import requests
import urllib
url = "https://www.unique.nl/vacature/coördinator-facilitair-(v2037635)"
</code></pre>
<p>Attempt 1:</p>
<pre class="lang-py prettyprint-override"><code>requests.get(url)
</code></pre>
<p>Attempt 2:</p>
<pre class="lang-py prettyprint-override"><code>requests.get(requests.utils.requote_uri(url))
</code></pre>
<p>Both result in <code>UnicodeDecodeError</code></p>
<p>Attempt 3:</p>
<pre class="lang-py prettyprint-override"><code>requests.get(urllib.parse.quote(url))
</code></pre>
<p>Attempt 4:</p>
<pre class="lang-py prettyprint-override"><code>requests.get(urllib.parse.quote(url.encode("Latin-1"), ":/"))
</code></pre>
<p>What am I missing here. Also encoding it to <code>utf-8</code>, <code>latin1</code> or <code>unicode_escape</code>, does not work.</p>
<p>Full error message:</p>
<pre><code>File /usr/local/lib/python3.9/site-packages/requests/api.py:75, in get(url, params, **kwargs)
64 def get(url, params=None, **kwargs):
65 r"""Sends a GET request.
66
67 :param url: URL for the new :class:`Request` object.
(...)
72 :rtype: requests.Response
73 """
---> 75 return request('get', url, params=params, **kwargs)
File /usr/local/lib/python3.9/site-packages/requests/api.py:61, in request(method, url, **kwargs)
57 # By using the 'with' statement we are sure the session is closed, thus we
58 # avoid leaving sockets open which can trigger a ResourceWarning in some
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
File /usr/local/lib/python3.9/site-packages/requests/sessions.py:542, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
537 send_kwargs = {
538 'timeout': timeout,
539 'allow_redirects': allow_redirects,
540 }
541 send_kwargs.update(settings)
--> 542 resp = self.send(prep, **send_kwargs)
544 return resp
File /usr/local/lib/python3.9/site-packages/requests/sessions.py:677, in Session.send(self, request, **kwargs)
674 if allow_redirects:
675 # Redirect resolving generator.
676 gen = self.resolve_redirects(r, request, **kwargs)
--> 677 history = [resp for resp in gen]
678 else:
679 history = []
File /usr/local/lib/python3.9/site-packages/requests/sessions.py:677, in <listcomp>(.0)
674 if allow_redirects:
675 # Redirect resolving generator.
676 gen = self.resolve_redirects(r, request, **kwargs)
--> 677 history = [resp for resp in gen]
678 else:
679 history = []
File /usr/local/lib/python3.9/site-packages/requests/sessions.py:150, in SessionRedirectMixin.resolve_redirects(self, resp, req, stream, timeout, verify, cert, proxies, yield_requests, **adapter_kwargs)
146 """Receives a Response. Returns a generator of Responses or Requests."""
148 hist = [] # keep track of history
--> 150 url = self.get_redirect_target(resp)
151 previous_fragment = urlparse(req.url).fragment
152 while url:
File /usr/local/lib/python3.9/site-packages/requests/sessions.py:116, in SessionRedirectMixin.get_redirect_target(self, resp)
114 if is_py3:
115 location = location.encode('latin1')
--> 116 return to_native_string(location, 'utf8')
117 return None
File /usr/local/lib/python3.9/site-packages/requests/_internal_utils.py:25, in to_native_string(string, encoding)
23 out = string.encode(encoding)
24 else:
---> 25 out = string.decode(encoding)
27 return out
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf6 in position 29: invalid start byte
</code></pre>
|
<python><web-scraping><unicode><python-requests><urllib>
|
2022-12-26 15:47:36
| 1
| 975
|
Zal
|
74,921,871
| 12,242,085
|
When import package to use it in Jupyter Notebook with function saved in .py file in Python?
|
<p>I have in Jupyter Lab functions.py file with function which I wrote which use package statsmodels as sm like below:</p>
<pre><code>def my_fk():
x = sm.Logit()
...
</code></pre>
<p>Then I import function from my file in Jupyter Notebook like below and I import needed package statsmodels:</p>
<pre><code>import functions as fck
import statsmodels.api as sm
</code></pre>
<p>But when I try to use function imported from my file in Jupyter Notebook like below:</p>
<pre><code>fck.my_fk()
</code></pre>
<p>I have an error like follow: <code>NameError: name 'sm' is not defined</code></p>
<p>My question is:
Where should I import package <code>import statsmodels.api as sm</code> needed to run my function, so as to be able to use it in Jupyter Notebook ? In Jupyter Notebook or in .py file where I save my function ?</p>
|
<python><function><import><python-import>
|
2022-12-26 15:47:12
| 1
| 2,350
|
dingaro
|
74,921,717
| 4,015,352
|
Keeping the dimensions when using torch.diff on a tensor in pytorch
|
<p>Suppose the following code:</p>
<pre><code>a=torch.rand(size=(3,3,3), dtype=torch.float32)
a_diff=torch.diff(a, n=1, dim= 1, prepend=None, append=None).shape
print(a_diff)
torch.Size([3, 2, 3])
</code></pre>
<p>I would like to keep the dimensions like the original a with (3,3,3). How can I
append 0 to the beginning of the sequence so that the dimensions remain the same?</p>
|
<python><pytorch><diff><tensor>
|
2022-12-26 15:27:38
| 1
| 391
|
freak11
|
74,921,568
| 2,636,579
|
Script doesn't execute when wrapped inside of a function
|
<p>When I execute the script below with <code>python3 ocr-test.py</code>, it runs correctly:</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
import pytesseract
# If you don't have tesseract executable in your PATH, include the following:
pytesseract.pytesseract.tesseract_cmd = r'/opt/homebrew/bin/tesseract'
# Simple image to string
print(pytesseract.image_to_string(Image.open('receipt1.jpg')))
</code></pre>
<p>However, when I excute the below script with <code>python3 ocr-test.py process</code>, the process/function does not get called and nothing happens:</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
import pytesseract
def process():
# If you don't have tesseract executable in your PATH, include the following:
pytesseract.pytesseract.tesseract_cmd = r'/opt/homebrew/bin/tesseract'
# Simple image to string
print(pytesseract.image_to_string(Image.open('receipt1.jpg')))
</code></pre>
<p>Why is this (not) happening?</p>
|
<python><tesseract>
|
2022-12-26 15:09:17
| 2
| 1,034
|
reallymemorable
|
74,921,547
| 9,244,323
|
How to parse a date column as datetimes, not objects in Pandas?
|
<p>I'd like to create DataFrame from a csv with one datetime-typed column.</p>
<p>Follow <a href="https://towardsdatascience.com/4-tricks-you-should-know-to-parse-date-columns-with-pandas-read-csv-27355bb2ad0e" rel="nofollow noreferrer">the article</a>, the code should create needed DateFrame:</p>
<pre><code>df = pd.read_csv('data/data_3.csv', parse_dates=['date'])
df.info()
RangeIndex: 3 entries, 0 to 2
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 3 non-null datetime64[ns]
1 product 3 non-null object
2 price 3 non-null int64
dtypes: datetime64[ns](1), int64(1), object(1)
memory usage: 200.0+ bytes
</code></pre>
<p>But when I do exacly the same steps, I get object-typed date column:</p>
<pre><code>df = pd.read_csv(path, parse_dates=['published_at'])
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 100000 entries, 0 to 99999
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 name 100000 non-null object
1 salary_from 48041 non-null float64
2 salary_to 53029 non-null float64
3 salary_currency 64733 non-null object
4 area_name 100000 non-null object
5 published_at 100000 non-null object
dtypes: float64(2), object(4)
memory usage: 4.6+ MB
</code></pre>
<p>I have tried a couple of various ways to parse datetime column and still can't get a DateFrame with datetime dtype. So how to parse a column with datetime type (not object)?</p>
|
<python><pandas>
|
2022-12-26 15:06:59
| 1
| 316
|
Sergey Kazantsev
|
74,921,335
| 13,118,338
|
How to pass argument to docker-compose
|
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
'--strat',
type=str,
)
args = parser.parse_args()
strat = args.strat
</code></pre>
<p>I would like to right my docker-compose.yml file such as I would just pass my argument from there.
I did</p>
<pre><code>version: "3.3"
services:
mm:
container_name: mm
stdin_open: true
build: .
context: .
dockerfile: Dockerfile
args:
strat: 1
</code></pre>
<p>and my docker file</p>
<pre><code>FROM python:3.10.7
COPY . .
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
CMD python3 main.py
</code></pre>
<p>But it does not work.</p>
<p>Any idea what I should change pelase?</p>
|
<python><python-3.x><docker><docker-compose><dockerfile>
|
2022-12-26 14:39:33
| 2
| 481
|
Nicolas Rey
|
74,921,232
| 12,863,331
|
Download a file that's linked to a button on a website
|
<p>I'm looking for a way to get files such as the one in <a href="https://biobank.ndph.ox.ac.uk/showcase/coding.cgi?id=9" rel="nofollow noreferrer">this link</a>, which can be downloaded by clicking a "download" button. I couldn't find a way despite reading many posts that seemed to be relevant.</p>
<p>The code I got so far:</p>
<pre><code>import requests
from bs4 import BeautifulSoup as bs
with open('ukb49810.html', 'r') as f:
html = f.read()
index_page = bs(html, 'html.parser')
for i in index_page.find_all('a', href=True)[2:]:
if 'coding' in i['href']:
file = requests.get(i['href']).text
download_page = bs(file, 'html.parser').find_all('a', href=True)
</code></pre>
<p>From the <code>download_page</code> variable I got "URLs" with the code</p>
<pre><code>for ii in download_page:
print(ii['href'])
</code></pre>
<p>which printed</p>
<pre><code>http://
index.cgi
browse.cgi?id=9&cd=data_coding
search.cgi
catalogs.cgi
download.cgi
https://bbams.ndph.ox.ac.uk/ams/resApplications
help.cgi?cd=data_coding
field.cgi?id=22001
field.cgi?id=22001
label.cgi?id=100313
field.cgi?id=31
field.cgi?id=31
label.cgi?id=100094
</code></pre>
<p>I tried to use these supposedly-URLs to compose the download URL but the link I got didn't work.<br />
Thanks.</p>
|
<python><beautifulsoup><python-requests>
|
2022-12-26 14:25:50
| 1
| 304
|
random
|
74,921,183
| 498,504
|
numpy.float64 object is not callable error on concrete dataset
|
<p>I'm writing a simple Regression Model in Keras for predicting Strength.</p>
<p>This is my code:</p>
<pre><code>epochs_number = 50
mean_squared_errors = []
number_of_reapeat = 50
for i in range(0, number_of_reapeat):
print(i)
X_train, X_test, y_train, y_test = train_test_split(predictors, target, test_size=0.3, random_state=i)
model.fit(X_train, y_train, epochs=epochs_number, verbose=0)
y_pred = model.predict(X_test)
mean_squared_error = mean_squared_error(y_test, y_pred)
mean_squared_errors.append(mean_squared_error)
</code></pre>
<p>But whenever I run this code , following error shown :</p>
<pre><code>0
10/10 [==============================] - 0s 2ms/step
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [11], line 10
8 model.fit(X_train , y_train , epochs=epochs_number , verbose=0)
9 y_pred= model.predict(X_test)
---> 10 mean_squared_error = mean_squared_error(y_test,y_pred)
11 mean_squared_errors.append(mean_squared_error)
TypeError: 'numpy.float64' object is not callable
</code></pre>
<p>What is problem and How can I solve it??</p>
|
<python><keras>
|
2022-12-26 14:18:18
| 1
| 6,614
|
Ahmad Badpey
|
74,921,132
| 17,176,270
|
How to populate DB from fixture before test
|
<p>I have a FastAPI app and I need to populate a testing DB with some data needed for testing using pyTest.</p>
<p>This is my code for testing DB in conftest.py:</p>
<pre><code>SQLALCHEMY_DATABASE_URL = "sqlite:///./test.db"
engine = create_engine(
SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base.metadata.drop_all(bind=engine)
Base.metadata.create_all(bind=engine)
def override_get_db():
"""Redirect request to use testing DB."""
try:
db = TestingSessionLocal()
yield db
finally:
db.close()
app.dependency_overrides[get_db] = override_get_db
@pytest.fixture(scope="module")
def test_client():
"""Test client initiation for all tests."""
client = TestClient(app)
yield client
</code></pre>
<p>I need to implement something like this:</p>
<pre><code>@pytest.fixture(scope="function")
def test_data(get_db):
waiter = Waiter(
id=1,
username="User name",
password="$12$BQhTQ6/OLAmkG/LU6G2J2.ngFk6EI9hBjFNjeTnpj2eVfQ3DCAtT.",
)
dish = Dish(
id=1,
name="Some dish",
description="Some description",
image_url="https://some.io/fhjhd.jpg",
cost=1.55,
)
get_db.add(waiter)
get_db.add(dish)
get_db.commit()
</code></pre>
<p>And here is a test:</p>
<pre><code>def test_get_waiter(test_client, waiter_data):
"""Test Get a waiter by id."""
response = test_client.get("/waiters/1")
assert response.status_code == 200
</code></pre>
<p>But in this case I get <code>fixture 'get_db' not found</code>. How do I?</p>
|
<python><pytest><fastapi><fixtures>
|
2022-12-26 14:13:03
| 2
| 780
|
Vitalii Mytenko
|
74,921,129
| 13,576,164
|
Web Scraping Table from 'Dune.com' with Python3 and bs4
|
<p>I am trying to web scrape table data from Dune.com (<a href="https://dune.com/queries/1144723" rel="nofollow noreferrer">https://dune.com/queries/1144723</a>). When I 'inspect' the web page, I am able to clearly see the <code><table></table></code> element, but when I run the following code I am returned None results.</p>
<pre><code>import bs4
import requests
data = []
r=requests.get('https://dune.com/queries/1144723/1954237')
soup=bs4.BeautifulSoup(r.text, "html5lib")
table = soup.find('table')
</code></pre>
<p>How can I successfully find this table data?</p>
|
<python><html><beautifulsoup><python-requests>
|
2022-12-26 14:12:40
| 1
| 338
|
spal
|
74,921,104
| 34,935
|
How to properly specify argument type accepting dictionary values?
|
<p>Here are a couple of functions:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Sequence
def avg(vals: Sequence[float]):
return sum(val for val in vals) / len(vals)
def foo():
the_dict = {'a': 1., 'b': 2.}
return avg(the_dict.values())
</code></pre>
<p>PyCharm 2022.3 warns about <code>the_dict.values()</code> in the last line:</p>
<blockquote>
<p>Expected type 'Sequence[float]', got _dict_values[float, str] instead</p>
</blockquote>
<p>But those values can be iterated across and have their length taken.</p>
<p>I tried</p>
<pre><code>from typing import Sequence, Union
def avg(vals: Union[Sequence[float], _dict_values]):
...
</code></pre>
<p>which seems insane, but that also didn't work.</p>
<p>Suggestions?</p>
<p>I can turn off the typing for that argument, but I am curious what the right annotation is.</p>
|
<python><pycharm><python-typing>
|
2022-12-26 14:09:51
| 2
| 21,683
|
dfrankow
|
74,920,923
| 5,161,197
|
Python2 vs Python3 : Exception variable not defined
|
<p>I wrote this small snippet in python2 (2.7.18) to catch the exception in a variable and it works</p>
<pre class="lang-py prettyprint-override"><code>>>> ex = None
>>> try:
... raise Exception("test")
... except Exception as ex:
... print(ex)
...
test
>>> ex
Exception('test',)
>>>
>>> ex2 = None
>>> try:
... raise Exception("test")
... except Exception as ex2:
... print(ex2)
... finally:
... print(ex2)
...
test
test
</code></pre>
<p>When I run the same in python3 (3.10.8), I get NameError</p>
<pre class="lang-py prettyprint-override"><code>>>> ex = None
>>> try:
... raise Exception("test")
... except Exception as ex:
... print(ex)
...
test
>>> ex
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'ex' is not defined. Did you mean: 'hex'?
>>>
>>> ex2 = None
>>> try:
... raise Exception("test")
... except Exception as ex2:
... print(ex2)
... finally:
... print(ex2)
...
test
Traceback (most recent call last):
File "<stdin>", line 6, in <module>
NameError: name 'ex2' is not defined
</code></pre>
<p>What is the reason for this? Is the new python3 compiler doing optimisation where None assignment doesn't mean anything or is the try/except clause doing some magic?</p>
<p>What is the workaround for this which works in both python2 and python3?</p>
|
<python><python-3.x><python-2.7><exception>
|
2022-12-26 13:47:01
| 1
| 363
|
likecs
|
74,920,881
| 607,846
|
Clipping a datatime series along the y-axis
|
<p>I have a list of tuples, where each tuple is a datetime and float. I wish to clip the float values so that they are all above a threshold value. For example if I have:</p>
<pre><code>a = [
(datetime.datetime(2021, 11, 1, 0, 0, tzinfo=tzutc()), 100),
(datetime.datetime(2021, 11, 1, 1, 0, tzinfo=tzutc()), 9.0),
(datetime.datetime(2021, 11, 1, 2, 0, tzinfo=tzutc()), 100.0)
]
</code></pre>
<p>and if I want to clip at 10.0, this would give me:</p>
<pre><code>b = [
(datetime.datetime(2021, 11, 1, 0, 0, tzinfo=tzutc()), 100),
(datetime.datetime(2021, 11, 1, 0, ?, tzinfo=tzutc()), 10.0),
(datetime.datetime(2021, 11, 1, 1, ?, tzinfo=tzutc()), 10.0),
(datetime.datetime(2021, 11, 1, 2, 0, tzinfo=tzutc()), 100.0)
]
</code></pre>
<p>So if I were to plot the <code>a</code> data (before clipping), I would get a V shaped graph. However, if I clip the data at 10.0 to give me the <code>b</code> data, and plot, I will have a \_/ shaped graph instead. There is a bit of math involved in calculating the new times so I'm hoping there is already functionality available to do this kind of thing. The datetimes are sorted in order and are unique. I can fix the data so the difference between consecutive times is equal, should that be necessary.</p>
|
<python><pandas><numpy><scipy>
|
2022-12-26 13:41:49
| 1
| 13,283
|
Baz
|
74,920,856
| 4,645,982
|
Invalid UpdateExpression: Attribute name is a reserved keyword; reserved keyword: value
|
<p>I am trying to update the record in dynamodb using following dictData, I have <strong>RESERVER_KEYWORDS</strong> array which has reserved keyword in dynamoDB. Please check code segment which I am trying to replace with the reserve keyword. Main issue that <code>customer.value</code> and <code>action.value</code> type of keyword used in given record. I am not sure how to replace with #.</p>
<p>However I am getting error</p>
<p>CRITICAL Couldn't update record in table table. Here's why: ValidationException: Invalid UpdateExpression: Attribute name is a reserved keyword; reserved keyword: value</p>
<pre><code> dictData = {
':line_items': [{
'search_product': 'dd',
'quantity': '10'
}]
'email': 'xyz@email',
'poc_name': 'XYZ',
'contact': '90912',
'action': {
'value': 'id1',
'label': 'In Progress'
},
'terms': 'Cash',
'customer': {
'value': 'id1',
'label': 'Customer'
},
}
</code></pre>
<p>In side the function definition:</p>
<pre><code>updateExpression = ["set "]
updateValues = dict()
expression_attributes_names = {}
for key, value in dictData.items():
updateExpression.append(f" #{key}_alias = :{key.replace('.', '_')},")
updateValues[f":{key.replace('.', '_')}"] = value
expression_attributes_names[f"#{key}_alias"] = key
response = table.update_item(
Key={"_id": id},
UpdateExpression="".join(updateExpression)[:-1],
ExpressionAttributeValues=updateValues,
ReturnValues="UPDATED_NEW",
ExpressionAttributeNames=expression_attributes_names,
)
</code></pre>
<p>Above function will give following values of expression_attributes_names, updateExpression, updateValues.</p>
<pre><code> expression_attributes_names: {}
updateExpression: ['set ', ' line_items = :line_items,', ' email = :email', ' poc_name = :poc_name,', ' contact = :contact,', ' action.value = :action_value,',
' action.label = :action_label,', ' terms = :terms,',
' customer.value = :customer_value,', ' customer.label = :customer_label,'
]
updateValues: {
':line_items': [{
'search_product': 'dd',
'quantity': '10'
}],
':email': 'xyz@email',
':poc_name': 'Test',
':contact': '90912',
':action_value': 'id1',
':action_label': 'In Progress',
':terms': 'Cash',
':customer_value': 'id1',
':customer_label': 'Customer'
}
</code></pre>
|
<python><python-3.x><amazon-web-services><amazon-dynamodb><boto3>
|
2022-12-26 13:38:59
| 1
| 2,676
|
Neelabh Singh
|
74,920,765
| 15,144,596
|
How to know which line in a python library raised an exception?
|
<p>I have a program which runs fine but when I use uvicorn, my program raises an <code>asyncio.exceptions.CancelledError</code> when I call <code>asyncio.gather(*tasks)</code>. The problem is I don't know which line in the library (uvicorn or some other library) is rasing that exception.</p>
<p>How can I know which line in which library is rasing the exception?</p>
|
<python><python-3.x><python-asyncio><uvicorn>
|
2022-12-26 13:24:41
| 0
| 549
|
Shakir
|
74,920,750
| 19,838,445
|
Difference between Callable and FunctionType
|
<p>I am trying to properly type hint my code and encountered both <a href="https://docs.python.org/3/library/typing.html#typing.Callable" rel="nofollow noreferrer">Callable</a> and <a href="https://docs.python.org/3/library/types.html?highlight=functiontype#types.FunctionType" rel="nofollow noreferrer">FunctionType</a></p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable
def my_func() -> Callable:
f = lambda: _
return f
result = my_func()
type(result) # <class 'function'>
isinstance(result, Callable) # True
</code></pre>
<p>vs</p>
<pre class="lang-py prettyprint-override"><code>from types import FunctionType
def my_func() -> FunctionType
f = lambda: ...
return f
result = my_func()
type(result) # <class 'function'>
isinstance(result, FunctionType) # True
</code></pre>
<p>One possible case I can think of is to distinguish between regular and class-based callables like this</p>
<pre class="lang-py prettyprint-override"><code>class C:
def __call__(self):
pass
def my_func() -> Callable:
c = C()
return c
result = my_func()
type(result) # <class 'function'>
isinstance(result, Callable) # True
isinstance(result, FunctionType) # False
</code></pre>
<p>What are the differences between those and when I have to use one over the other?</p>
|
<python><python-3.x><python-typing><callable-object>
|
2022-12-26 13:22:43
| 1
| 720
|
GopherM
|
74,920,749
| 4,822,772
|
Pandas cumsum by chunk
|
<p>In dataset, I have two columns</p>
<ul>
<li>N: ID number to identify each row</li>
<li>Indicator: it is either 0 or 1.</li>
</ul>
<p>What I would like to obtain:</p>
<ul>
<li>Cumsum: calculate the cumulative cum of the column Indicator, but only to successive values of 1.</li>
<li>Total: then for each chunk of non-null values, get the total of non-null values (or the max of the cum sum, or the last value) for each chunk</li>
</ul>
<p>How can I get the two columns efficiently?</p>
<p>(A for loop over the rows would not be efficient.)</p>
<p><a href="https://i.sstatic.net/EFUvB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EFUvB.png" alt="enter image description here" /></a></p>
|
<python><pandas><cumsum>
|
2022-12-26 13:22:39
| 2
| 1,718
|
John Smith
|
74,920,637
| 19,826,650
|
Insert from python to mysql
|
<p>How to insert data from code below?
I have a code below</p>
<pre><code>latitude1 = -6.208470935786019
longitude1 = 106.81796891087399
new_data = [[latitude1, longitude1]]
preds = model.predict(new_data)
preds
arr = [latitude1,longitude1]
arrcon = np.concatenate((arr,preds))
print(arrcon) #[-6.208470935786019 106.81796891087399 'Not Categorized']
listarcon= arrcon.tolist()
print(listarcon) #[-6.208470935786019, 106.81796891087399, 'Not Categorized']
#make the list into multi list
singlearcon = np.array(listarcon).reshape(1,3)
print(singlearcon) #[['-6.208470935786019' '106.81796891087399' 'Not Categorized']]
</code></pre>
<p>This is insert into database code</p>
<pre><code>mycursor = conn.cursor()
sql = "INSERT INTO traveldata (Latitude,Longitude,Wisata) VALUES (%s, %s, %s)"
val = (listarcon[0],listarcon[1],listarcon[2])
mycursor.execute(sql, val)
</code></pre>
<p><a href="https://i.sstatic.net/yxLYF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yxLYF.png" alt="example" /></a></p>
<p>How to insert it to database? the data didn't seem to get to the database.</p>
|
<python><mysql>
|
2022-12-26 13:08:55
| 1
| 377
|
Jessen Jie
|
74,920,558
| 9,385,568
|
Lengths must match to compare
|
<p>I'm trying to map values of one dataframe to another.
My dataframes are as follows:</p>
<pre><code>df2.head()
ext_id credit_debit_indicator index_name business_date trench_tag trench_tag_l2
0 4SL19N2YQLCU62TY C ib-prodfulltext-t24-transhist-202208 2022-07-31 XXX9999999 XXX99
1 1EXHR74Y2YXBN4AM D ib-prodfulltext-t24-transhist-202208 2022-07-31 XXX9999999 XXX99
2 OI0001WMRUD C ib-prodfulltext-t24-transhist-202208 2022-07-31 XXX9999999 XXX99
3 OI0001WKKXA C ib-prodfulltext-t24-transhist-202208 2022-07-31 XXX9999999 XXX99
4 SGW7000490024199 C ib-prodfulltext-t24-transhist-202208 2022-07-31 XXX9999999 XXX99
</code></pre>
<p>and</p>
<pre><code>mapping_df.head()
trench_code trench_level fink_code fink_level
0 COM0101001 4 PREPAID_01 2
1 COM0101002 4 PREPAID_01 2
2 COM0101003 4 PREPAID_01 2
3 COM0101099 4 PREPAID_01 2
4 COM0101999 4 PREPAID_01 2
</code></pre>
<p>I tried:</p>
<pre><code>df2['fink_sub_tag_key'] = mapping_df.apply(
mapping_df["fink_code"] if mapping_df["trench_code"] in df2['trench_tag'].values else np.nan,
axis=1,
)
</code></pre>
<p>Which raises:</p>
<blockquote>
<p>ValueError: Lengths must match to compare</p>
</blockquote>
<p>It is the truth that I have values of different lenghts on mapping_df.trench_code. But don't know how to cope around it.</p>
<p>Help would be appreciated.</p>
|
<python><pandas><numpy>
|
2022-12-26 13:00:02
| 1
| 873
|
Stanislav Jirak
|
74,920,470
| 7,806,269
|
PyQt6 program crashes when using two TreeView delegates
|
<p>I've prepared a minimal reproducible example of this problem. There are two <code>TreeView</code> delegates: one for checkboxes and one for progress bars. If I add any one of these delegates, but not the other one, the program doesn't crash. But if I add both, it crashes (upon trying to show the rows and the delegates). It gives the Segmentation Fault error when crashing.</p>
<p><code>delegates.py</code></p>
<pre><code>from PySide6 import QtWidgets
from PySide6.QtCore import Qt, QEvent, QPoint, QRect
from PySide6 import QtCore, QtGui
class CheckBoxDelegate(QtWidgets.QStyledItemDelegate):
"""
A delegate that places a fully functioning QCheckBox cell of the column to which it's applied.
"""
def __init__(self, parent = None):
QtWidgets.QStyledItemDelegate.__init__(self, parent)
def createEditor(self, parent, option, index):
"""
Important, otherwise an editor is created if the user clicks in this cell.
"""
return None
def paint(self, painter, option, index):
checked = bool(index.model().data(index, Qt.DisplayRole))
check_box_style_option = QtWidgets.QStyleOptionButton()
if (index.flags() & Qt.ItemIsEditable):
check_box_style_option.state |= QtWidgets.QStyle.State_Enabled
else:
check_box_style_option.state |= QtWidgets.QStyle.State_ReadOnly
if checked:
check_box_style_option.state |= QtWidgets.QStyle.State_On
else:
check_box_style_option.state |= QtWidgets.QStyle.State_Off
check_box_style_option.rect = self.getCheckBoxRect(option)
QtWidgets.QApplication.style().drawControl(QtWidgets.QStyle.CE_CheckBox, check_box_style_option, painter)
def editorEvent(self, event, model, option, index):
if not (index.flags() & Qt.ItemIsEditable):
return False
# Do not change the checkbox-state
if event.type() == QEvent.MouseButtonRelease or event.type() == QEvent.MouseButtonDblClick:
if event.button() != Qt.LeftButton or not self.getCheckBoxRect(option).contains(event.pos()):
return False
if event.type() == QEvent.MouseButtonDblClick:
return True
elif event.type() == QEvent.KeyPress:
if event.key() != Qt.Key_Space and event.key() != Qt.Key_Select:
return False
else:
return False
# Change the checkbox-state
self.setModelData(None, model, index)
return True
def getCheckBoxRect(self, option):
check_box_style_option = QtWidgets.QStyleOptionButton()
check_box_rect = QtWidgets.QApplication.style().subElementRect(QtWidgets.QStyle.SE_CheckBoxIndicator, check_box_style_option, None)
check_box_point = QPoint (option.rect.x() +
option.rect.width() / 2 -
check_box_rect.width() / 2,
option.rect.y() +
option.rect.height() / 2 -
check_box_rect.height() / 2)
return QRect(check_box_point, check_box_rect.size())
def setModelData (self, editor, model, index):
newValue = not bool(index.model().data(index, Qt.DisplayRole))
model.setData(index, newValue, Qt.EditRole)
class ProgressBarDelegate(QtWidgets.QStyledItemDelegate):
def __init__(self, parent=None):
super().__init__(parent)
def paint(self, painter, option, index):
# Get the data for the item
progress = index.data(QtCore.Qt.ItemDataRole.UserRole)
# Draw the progress bar
painter.save()
rect = option.rect
rect.setWidth(int(rect.width() * progress))
painter.fillRect(rect, QtGui.QColor("#00c0ff"))
painter.restore()
</code></pre>
<p><code>mainwindow.py</code></p>
<pre><code>from ui_form import Ui_MainWindow
from PySide6.QtCore import QThread, SIGNAL, Slot
from PySide6 import QtGui, QtCore
from PySide6.QtWidgets import QApplication, QMainWindow
from delegates import CheckBoxDelegate, ProgressBarDelegate
class MainWindow(QMainWindow):
def __init__(self, parent=None):
super().__init__(parent)
self.ui = Ui_MainWindow()
self.ui.setupUi(self)
self.row_data = []
self.model = QtGui.QStandardItemModel()
self.ui.actionExit.triggered.connect(self.exit)
def reinit_model(self):
self.model.clear()
self.rootItem = self.model.invisibleRootItem()
self.model.setHorizontalHeaderLabels(['Checkbox', 'Title', 'Progress'])
self.ui.treeView.setModel(self.model)
def get_row_data(self):
row_data = []
for i in range(3):
title = f"Col 1 Row {i}"
row_data.append([title])
return row_data
def reset_row_data(self):
self.row_data.clear()
self.row_data = self.get_row_data()
class Show_list(QThread):
def __init__(self, AppWindow):
QThread.__init__(self)
self.AppWindow = AppWindow
def run(self):
self.AppWindow.reset_row_data()
def populate_window_list(self):
self.reinit_model()
for title_link in self.row_data:
item = [QtGui.QStandardItem(),
QtGui.QStandardItem(title_link[0]),
QtGui.QStandardItem()]
progress = 0.5
item[2].setData(progress, QtCore.Qt.ItemDataRole.UserRole)
self.rootItem.appendRow(item)
self.ui.treeView.show()
cb_delegate = CheckBoxDelegate()
pb_delegate = ProgressBarDelegate()
self.ui.treeView.setItemDelegateForColumn(0, cb_delegate)
# self.ui.treeView.setItemDelegateForColumn(2, pb_delegate)
for i in range(3):
self.ui.treeView.resizeColumnToContents(i)
@Slot()
def show_list(self):
self.thread = self.Show_list(self)
self.connect(self.thread, SIGNAL("finished()"),
self.populate_window_list)
self.thread.start()
def exit(self):
QApplication.quit()
</code></pre>
<p><code>main.py</code></p>
<pre><code>import sys
from mainwindow import MainWindow
from PySide6.QtWidgets import QApplication
if __name__ == "__main__":
app = QApplication(sys.argv)
widget = MainWindow()
widget.reinit_model()
getRowDataButton = widget.ui.getRowDataButton
getRowDataButton.clicked.connect(widget.show_list)
widget.show()
sys.exit(app.exec())
</code></pre>
<p><a href="https://pastebin.com/Kf3itP3a" rel="nofollow noreferrer">Here</a>'s also <code>ui_form.py</code>.</p>
<p>I've tried to add <code>cb_delegate.deleteLater()</code> and <code>pb_delegate.deleteLater()</code>, but it didn't help. The program loaded the list and crashed immediately.</p>
|
<python><segmentation-fault><crash><delegates><pyqt6>
|
2022-12-26 12:51:09
| 0
| 862
|
sequence
|
74,920,459
| 4,033,876
|
How to retain datetime column in pandas grouper and group by?
|
<p>I have a pandas dataframe that has a structure as shown in this question <a href="https://stackoverflow.com/questions/74412513/parsing-json-with-number-as-key-usng-pandas">Parsing JSON with number as key usng pandas</a>-</p>
<pre><code> Date Time InverterVoltage Invertercurrent
2021-11-15 14:37:05 219.1 20
2021-11-15 14:38:05 210.2 21
</code></pre>
<p>And so on . Data is available every 1 minute.</p>
<p>I have code like this -</p>
<pre><code>df['inverterConsumption'] = df.inverterVoltage*df.inverterCurrent
</code></pre>
<p>Then I calculate the hourly mean by using this groupby construct</p>
<pre><code>df['Datetime'] = pd.to_datetime(df['Date'].apply(str)+' '+df['Time'].apply(str))
davg_df2 = df.groupby(pd.Grouper(freq='H', key='Datetime')).mean()
</code></pre>
<p>What I want to do is the following - I want to filter the inverterConsumption for only the month of September</p>
<pre><code>davg_df2 = davg_df2[davg_df2['Datetime'].dt.month_name() =='September']
</code></pre>
<p>But I get an error saying</p>
<pre><code>KeyError: Datetime
</code></pre>
<p>So clearly the <code>davg_df2</code> dataframe does not include the Datetime column that is present in <code>df</code>(as it is non numeric). How can I include that in the groupby and grouper clause ?</p>
<p>Pandas version 1.5.2 and Python version 3.8</p>
|
<python><pandas><python-3.8>
|
2022-12-26 12:50:20
| 1
| 1,194
|
gansub
|
74,920,369
| 12,725,674
|
Replace substring with exception
|
<p>I want to replace certain characters in file name of pdf files.
My code so far:</p>
<pre><code>for file in files:
file_ed = file
replace = [",","-", "The "," "]
for item in replace:
file_ed = file_ed.replace(item,"")
</code></pre>
<p>In addition I would like to replace dots "." in the file names. If I would include "." in the replace list though, it will also replace the dot in ".pdf" which obviously is not what I want.
Any help is much appreciated.</p>
|
<python><string><replace>
|
2022-12-26 12:37:36
| 5
| 367
|
xxgaryxx
|
74,920,349
| 8,794,019
|
Gateway Time-out with StreamingResponse and custom Middleware fastapi
|
<p>I wrote a simple custom Middleware as below:</p>
<pre><code>class LoggingMiddleware(BaseHTTPMiddleware):
def __init__(self, app):
super().__init__(app)
async def dispatch(self, request, call_next):
user_token = request.headers.get("x-token")
req_id = time_ns()
try:
user_phone_number = decode(user_token, JWT_SECRET_KEY, algorithms=[HASH_ALGORITHM]).get("phone_number")
except:
user_phone_number = ''
req_body = await get_json(request)
reqb = dumps(req_body.decode('utf-8', 'ignore'), ensure_ascii=False)
self.req_logger.info()
response_time = 0
try:
s = time.time()
resp = await call_next(request)
response_time = round(time.time() - s, 2)
except:
self.error_logger.error()
raise CustomException(
details=ErrorResponseSerializer(
)
),
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
resp_body = b""
async for item in resp.body_iterator:
resp_body += item
respb = dumps(resp_body.decode('utf-8', 'ignore'), ensure_ascii=False)
if resp.status_code != 200:
self.error_logger.error()
return Response(
content=resp_body,
status_code=resp.status_code,
headers=dict(resp.headers),
media_type=resp.media_type
)
</code></pre>
<p>and in one of my endpoints I wrote this functionality:</p>
<pre><code>@router.get("/statistics/contract/count/", response_model=StatisticsCountsResponseSerializer)
def get_contract_count(
province: str = None,
dashboard_status: DashboardContractStatus = None,
status: ContractStatus = None,
csv: int = 0,
context: Context = Depends(get_context)):
repository = PostgresContractRepository(context=context)
result = repository.get_counts(PROVINCES.get(province), dashboard_status, status)
if not csv:
return StatisticsCountsResponseSerializer.parse_obj({"count": result})
else:
filename = f"contract_counts_{int(datetime.datetime.utcnow().timestamp())}.csv"
return StreamingResponse(
# get_csv() yield an string which has csv format: yield "name,fmaily\nemad,emad"
get_csv(),
media_type="text/csv",
headers={"Content-Disposition": f'attachment; filename="{filename}"'}
)
</code></pre>
<p>when I send the request to this endpoint I gave this error in postman:</p>
<pre><code><html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.20.1</center>
</body>
</html>
</code></pre>
<p>When I remove my middleware, everything is right and I got the excpected response. Maybe its useful to know that in other endpoints I return FileResponse and with the Middleware everything works fine. Just with StreamingResponse and Middleware I cannot get response.</p>
<h2>Edit #1</h2>
<p>After reading one of my comments, in this <a href="https://stackoverflow.com/questions/69670125/how-to-log-raw-http-request-response-in-python-fastapi/73464007#73464007">link</a>, I change my code to this one:</p>
<pre><code>async def dispatch(self, request, call_next):
# self.check_authorization(request)
user_token = request.headers.get("x-token")
try:
user_phone_number = decode(user_token, JWT_SECRET_KEY, algorithms=[HASH_ALGORITHM]).get("phone_number")
except:
user_phone_number = ''
details = {
"token": {
"phone_number": user_phone_number
},
"request": {
"id": time_ns(),
"url_path": request.url.path,
"query_params": request.query_params,
"path_params": request.path_params
},
"response": {
"duration": 0.0,
"status_code": 200,
"exception": None
}
}
request_body = await get_json(request)
try:
s = time.time()
response = await call_next(request)
details["response"]["duration"] = round(time.time() - s, 2)
except:
self.error_logger.error(
Logger.message_formatter(
request_id=details.get("request").get("id"),
path=request.url.path,
token_phone_number=user_phone_number,
query_params=request.query_params,
path_params=request.path_params,
body=request_body.decode('utf-8', 'ignore')
)
)
self.error_logger.error(
Logger.message_formatter(
request_id=details.get("request").get("id"),
path=request.url.path,
message=traceback.format_exc(),
status=500
)
)
raise CustomException(
details=ErrorResponseSerializer(
metadata=MessageResponseSerializer(
type=MessageTypeEnum.error,
text=MessageTextResponseSerializer(
text="internal server error",
translate="..."
)
)
),
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
response_body = b""
async for item in response.body_iterator:
response_body += item
return Response(
content=response_body,
status_code=response.status_code,
headers=dict(response.headers),
media_type=response.media_type,
background=BackgroundTask(self.log, request_body, response_body, details)
)
</code></pre>
<p>But the problem does not solve. I still get time out.</p>
|
<python><fastapi>
|
2022-12-26 12:35:44
| 0
| 705
|
Emad Helmi
|
74,919,802
| 12,544,460
|
Parse a Text file to a table using Python or SQL
|
<p>I have a text file like this</p>
<pre><code>{u'Product_id': u'1234567', u'Product_name': u'Apple', u'Product_code': u'2.4.14'}
{u'Product_id': u'1234123', u'Product_name': u'Orange', u'Product_code': u'2.4.20'}
</code></pre>
<p>I have searched it on google but not know yet what kind of string is this, it's not json . How to parse it to table using Python or SQL specifically PL/SQL ? Desired table result have column and row like this:</p>
<pre><code>Product_id Product_name Product_code
1234567 Apple 2.4.14
1234123 Orange 2.4.20
</code></pre>
|
<python><sql><parsing>
|
2022-12-26 11:20:50
| 1
| 362
|
Tom Tom
|
74,919,353
| 13,038,144
|
Setting same frame width in matplotlib subplots with external colorbar element
|
<p>I want to produce two subplots that contain a lot of curves, so I defined a function that produces a colorbar, to avoid having a super long legend that is not readable.
This is the function to create the colorbar:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib as mpl, matplotlib.pyplot as plt
def colorbar (cmap, vmin, vmax, label, ax=None, **cbar_opts):
norm = mpl.colors.Normalize(vmin=vmin, vmax=vmax, clip=False)
cbar = plt.colorbar(mpl.cm.ScalarMappable(norm=norm, cmap=cmap),
label=label, ax=ax, **cbar_opts)
return cbar
</code></pre>
<p>I want to show just one colorbar, since its values are the same for the two plots. So, I place it only on the right of the second axis.
Here is the code.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd, numpy as np
df1 = pd.DataFrame({i: np.linspace(i*i, 10, 10) for i in range(50)})
df2 = pd.DataFrame({i: np.linspace(2*i*i, 10, 10) for i in range(50)})
fig, axs = plt.subplots(1,2, figsize=(5,3))
df1.plot(ax=axs[0], legend=False, cmap='turbo')
df2.plot(ax=axs[1], legend=False, cmap='turbo')
colorbar(ax = axs[1], cmap='turbo', vmin=0, vmax=49, label='My title')
axs[1].set_title('I want this frame \n as large as \n the first one')
plt.tight_layout()
</code></pre>
<p><strong>My problem</strong> is that now the two plots have different width, because the colorbar is considered in the measurement of the width of the second axis. How can I get the two frames to have the same width?</p>
<p><a href="https://i.sstatic.net/IBPp2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IBPp2.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><plot>
|
2022-12-26 10:21:03
| 2
| 458
|
gioarma
|
74,919,257
| 12,085,129
|
Convert a CRON expression from local TZ to UTC in Python
|
<p>I'm building an application in which some tasks are scheduled by using CRON like jobs (with <a href="https://docs.celeryq.dev/en/stable/" rel="nofollow noreferrer">celery</a>). Through an interface, jobs are saved as celery jobs with <a href="https://django-celery-beat.readthedocs.io/en/latest/" rel="nofollow noreferrer">django-celery-beat</a> package.</p>
<p>For that, the schedule the users want is translated into a CRON expression such as <code>30 9 * * 1</code> (Every Monday at 9:30).
It works well. But the celery backend as well as the database are working in UTC timezone. While end users of my application way work from a different timezone.</p>
<p>So if a user is working in UTC+1, and he wants the job to be run at 9:30 each Monday, it's not <code>30 9 * * 1</code> that will have to be used but <code>30 8 * * 1</code> instead.</p>
<p>This use case is kinda easy to solve but what if I want it to be run at 00:00 on each Monday ? It means that I'll have to go a day backwards, so Sunday at 23:00 which would lead to <code>0 23 * * 0</code>. And the same behavior for every possible timezone other than UTC.</p>
<p>Is there an easy way to do so ?</p>
<p>Also I'm worryied about DST...</p>
<p>Thanks in advance !</p>
|
<python><cron><django-celery-beat>
|
2022-12-26 10:08:21
| 1
| 1,259
|
lbris
|
74,919,028
| 19,716,381
|
Center the coordinate system of python pillow
|
<p>I would like to create images using mathematical equations / functions and python pillow. For example,</p>
<p>pseudo code:</p>
<pre><code>iterate through pixels
if x^2 + y^2 <= 25, pixel_color = green.
Draws a solid circle of radius 5
</code></pre>
<p>I am aware python pillow has inbuilt functions to draw circles. I would like to draw other complicated functions as well.</p>
<p>The problem is that (0,0) of a pillow image is at the top left corner.
Can I somehow center the coordinate system of python pillow so that I can draw the whole circle instead of a quadrant?</p>
<p>I am again aware that I can use (x-x1)<sup>2</sup> + (y-y1)<sup>2</sup> <= 25 to center the circle at (x1, y1), but it would be very convenient if I can center the axes like in case of <code>nannou</code>.</p>
|
<python><python-imaging-library>
|
2022-12-26 09:38:13
| 0
| 484
|
berinaniesh
|
74,918,706
| 8,437,546
|
Efficient way to perform rolling operations on tensor with Pytorch
|
<p>I would like to write a function to perform some forward rolling operations over a tensor slice with PyTorch. Is there a way to do efficiently this?</p>
<p>For example, the RollingSum function should take a tensor and add up all values across the specified axis within the rolling slice/window.</p>
<pre><code>X = np.array([1,2,3,4,5,4,3,2,1]).reshape(-1, 1, 1)
X = torch.tensor(X, dtype=torch.float32)
print(X)
tensor([[[1.]],
[[2.]],
[[3.]],
[[4.]],
[[5.]],
[[4.]],
[[3.]],
[[2.]],
[[1.]]])
def RollingSum(X, slice, axis):
'''Return sum of a rolling slice on the tensor over a specified axis'''
Xroll = _i_dont_know_how_to_do_this
return Xroll
def RollingMax(X, slice, axis):
'''Return max of a rolling slice on the tensor over a specified axis'''
Xroll = _i_dont_know_how_to_do_this
return Xroll
# Rolling sum
Xroll = RollingSum(X, slice=3, axis=0)
print(Xroll)
tensor([[[1.]],
[[3.]],
[[6.]],
[[9.]],
[[12.]],
[[13.]],
[[12.]],
[[9.]],
[[6.]]])
# Rolling max
Xroll = RollingMax(X, slice=3, axis=0)
print(Xroll)
tensor([[[1.]],
[[2.]],
[[3.]],
[[4.]],
[[5.]],
[[5]],
[[5.]],
[[4.]],
[[3.]]])
</code></pre>
|
<python><pytorch><rolling-computation>
|
2022-12-26 08:54:42
| 1
| 1,962
|
ProteinGuy
|
74,918,633
| 1,115,237
|
AWS Glue - Getting partition information into a dynamic frame column
|
<p>I am writing a Glue ETL job that takes an array of paths as an argument to create a DynamicFrame. The job will read data from the specified paths and create a DynamicFrame for further processing.</p>
<p>Having the following s3 folder structure:</p>
<pre><code>s3://my_bucket/root/dt=2022-24-12/file.parquet
s3://my_bucket/root/dt=2022-25-12/file.parquet
s3://my_bucket/root/dt=2022-26-12/file.parquet ..
</code></pre>
<p>.</p>
<p>I've created a Glue job script using <code>glueContext.create_dynamic_frame.from_options</code> to load the data like so:</p>
<pre><code>dynamic_frame = glueContext.create_dynamic_frame.from_options(
connection_type="s3",
format='parquet',
connection_options={
"paths": ["s3://my_bucket/root/"], # <---- provided as an argument
"recurse": True,
},
transformation_ctx="S3bucket_node1",
)
</code></pre>
<p>which reads the DF correctly and produce the following DF:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>column_A</th>
<th>Column_B</th>
</tr>
</thead>
<tbody>
<tr>
<td>data</td>
<td>data</td>
</tr>
<tr>
<td>data</td>
<td>data</td>
</tr>
</tbody>
</table>
</div>
<p>How can I add the partition data (dt) to the data frame, <strong>without using a catalog DB</strong> so the output would be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>column_A</th>
<th>Column_B</th>
<th>dt</th>
</tr>
</thead>
<tbody>
<tr>
<td>data</td>
<td>data</td>
<td>2022-24-12</td>
</tr>
<tr>
<td>data</td>
<td>data</td>
<td>2022-24-12</td>
</tr>
</tbody>
</table>
</div>
<p>I.E all data from a specific partition would get the correct date</p>
|
<python><amazon-s3><pyspark><aws-glue><partition>
|
2022-12-26 08:46:05
| 0
| 8,953
|
Shlomi Schwartz
|
74,917,986
| 13,000,378
|
Getting curl-L invalid syntax when running on jupyter notebook
|
<p>I'm trying to create a object detection module using YOLO V5 following this tutorial
<a href="https://www.youtube.com/watch?v=Ciy1J97dbY0&t=352s" rel="nofollow noreferrer">YT Link</a></p>
<p>In this they have used google colab but I want to create it on jupyter note book.
<a href="https://i.sstatic.net/skK8L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/skK8L.png" alt="enter image description here" /></a>
I get the above error when trying to get the data set from roboflow.
Please help!</p>
|
<python><python-3.x><jupyter-notebook><google-colaboratory><yolo>
|
2022-12-26 07:08:41
| 1
| 661
|
Kavishka Rajapakshe
|
74,917,863
| 13,738,079
|
FastAPI - Unable to get auth token from middleware's Request object
|
<p>Following <a href="https://www.starlette.io/requests/" rel="nofollow noreferrer">Starlette documentation</a> (FastAPI uses Starlette for middlewares), <code>response.headers["Authorization"]</code> should allow me to get the bearer token, but I get a <code>KeyError</code> saying no such attribute exists.</p>
<p>When I print <code>response.headers</code>, I get <code>MutableHeaders({'content-length': '14', 'content-type': 'application/json'})</code>.</p>
<p>Why is the authorization attribute not in the header despite of making a request with an <code>auth</code> header?</p>
<pre class="lang-py prettyprint-override"><code>@app.middleware("http")
async def validate_access_token(request: Request, call_next):
response = await call_next(request)
access_token = response.headers["Authorization"].split()
is_valid_signature = jwt.decode(access_token[1], key=SECRET, algorithms=CRYPT_ALGO)
if is_valid_signature:
return response
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail='Invalid access token'
)
</code></pre>
|
<python><rest><authentication><fastapi>
|
2022-12-26 06:49:32
| 1
| 1,170
|
Jpark9061
|
74,917,836
| 17,696,880
|
Why does this replace function fail inside this lambda function but not outside it?
|
<pre class="lang-py prettyprint-override"><code>import re
input_text = "el dia 2022-12-23 o sino el dia 2022-09-23 10000-08-23" #example
date_capture_pattern = r"([12]\d*-[01]\d-[0-3]\d)(\D*?)"
#input_text = re.sub(date_capture_pattern , lambda m: print(repr(m[1])) , input_text)
input_text = re.sub(date_capture_pattern , lambda m: m[1].replace("_-_", "-", 2) , input_text)
print(repr(input_text))
</code></pre>
<p>Using a <code>print()</code> I have noticed that the capture groups <code>m[1]</code> capture them correctly, since it is able to print the 3 dates correctly.</p>
<p>However, I feel that there is something in the syntax (in python) of the lambda function <code>lambda m: m[1].replace("_-_", "-", 2)</code> that does not allow the replacement to be possible, that is, even if the lambda function correctly receives the information, it is not able to return it.</p>
<p>The output that I need is that</p>
<pre><code>"el dia 2022_-_12_-_23 o sino el dia 2022_-_09_-_23 10000_-_08_-_23"
</code></pre>
<p>It should be clarified that the code as it is in the question does not generate any error in the console, however it does not work correctly since it is simply limited to deleting the capture groups within the original string</p>
<p>What is wrong with that lambda function?</p>
|
<python><python-3.x><regex><lambda><replace>
|
2022-12-26 06:44:29
| 3
| 875
|
Matt095
|
74,917,772
| 7,339,624
|
How to make an empty tensor in Pytorch?
|
<p>In python, we can make an empty list easily by doing <code>a = []</code>. I want to do a similar thing but with Pytorch tensors.</p>
<p>If you want to know why I need that, I want to get all of the data inside a given dataloader (to create another customer dataloader). Having an empty tensor can help me gather all of the data inside a tensor using a for-loop. This is a sudo code for it.</p>
<pre><code>all_data_tensor = # An empty tensor
for data in dataloader:
all_data_tensor = torch.cat((all_data_tensor, data), 0)
</code></pre>
<p>Is there any way to do this?</p>
|
<python><pytorch><tensor>
|
2022-12-26 06:30:47
| 2
| 4,337
|
Peyman
|
74,917,608
| 3,878,377
|
How to know what commands you can write in DockerFile?
|
<p>I have been looking at many DockerFile in the docker hub( This is one example: <a href="https://hub.docker.com/layers/library/python/latest/images/sha256-dcd0251df5efeb39af10af998b45d21436d85e2b9facf12a8800e34ad3d84c91?context=explore" rel="nofollow noreferrer">https://hub.docker.com/layers/library/python/latest/images/sha256-dcd0251df5efeb39af10af998b45d21436d85e2b9facf12a8800e34ad3d84c91?context=explore</a>)
I am wondering what the procedure is for identifying what goes to DockerFile. For example, I understand what <code>RUN</code>, <code>Copy</code>, and <code>WorkDIr</code> commands do; however, how do you know what to include as <code>ENV</code> or environmental variables and, more importantly, what commands are accepted? For example, in the above link: How to figure out what commands I can use for after <code>ENV</code>?</p>
<pre><code>ENV PYPY_VERSION=7.3.10
ENV LANG=C.UTF-8
</code></pre>
|
<python><docker><dockerfile>
|
2022-12-26 05:53:36
| 1
| 1,013
|
user59419
|
74,917,596
| 661,716
|
dataframe fill in a value where there is no data
|
<p>I have a data like below.</p>
<p>I need to fill in the 'value' column where where there is no data for each month/name.</p>
<p>The month values are the unique values of df['month']</p>
<pre><code>import pandas as pd
a = [['2020-01',1,'a'], ['2020-02',2,'a']]
b = [['2020-01',1,'b'], ['2020-03',4,'b']]
a.extend(b)
df = pd.DataFrame(a, columns=['month','value','name'])
print(df)
</code></pre>
<p>Below is the original data.</p>
<pre><code> month value name
0 2020-01 1 a
1 2020-02 2 a
2 2020-01 1 b
3 2020-03 4 b
</code></pre>
<p>Below is the expected results when filling in zeros(0). Note that there is a missing month for each of name a and b.</p>
<pre><code> month value name
0 2020-01 1 a
1 2020-02 2 a
2 2020-03 0 a
3 2020-01 1 b
4 2020-02 0 b
5 2020-03 4 b
</code></pre>
<p>What would be the most efficient way?</p>
|
<python><pandas><dataframe>
|
2022-12-26 05:51:05
| 2
| 1,226
|
tompal18
|
74,917,588
| 12,101,201
|
Error code 215 when authenticating Twitter API 2.0 in Python using Authlib and OAuth2
|
<p>I've seen lots of related questions to this one, but none of the answers have helped me.</p>
<p>First, I went to the <a href="https://developer.twitter.com/en/portal/" rel="nofollow noreferrer">Twitter Developer Portal</a> and set up my OAuth2.0 Client ID and Secret:</p>
<p><a href="https://i.sstatic.net/VYe13.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VYe13.png" alt="enter image description here" /></a></p>
<p>Then, I used <a href="https://docs.authlib.org/en/latest/index.html" rel="nofollow noreferrer">Authlib</a> to set up a <code>OAuth2Session</code> with an Authorization Code flow, documented <a href="https://docs.authlib.org/en/latest/client/oauth2.html" rel="nofollow noreferrer">here</a>:</p>
<pre class="lang-py prettyprint-override"><code>authorization_url = 'https://api.twitter.com/2/oauth2/authorize'
scopes = ['tweet.read', 'tweet.write', 'users.read']
scope = ' '.join(scopes)
</code></pre>
<p>Then I set up an OAuth2 client using the ID and secret, create the authorization URL, and print the URI for use:</p>
<pre class="lang-py prettyprint-override"><code>client = OAuth2Session(client_id, client_secret, scope=scope)
uri, state = client.create_authorization_url(authorization_url)
print("> Open this in your browser: " + uri)
</code></pre>
<p>In the terminal, it prints</p>
<pre><code>> Open this in your browser: https://api.twitter.com/2/oauth2/authorize?response_type=code&client_id=[redacted]&scope=tweet.read+tweet.write+users.read&state=L21JiPkirR8awYs7kcDjFC4jrPj68x
</code></pre>
<p>But opening the link in a browser displays the Twitter-produced error</p>
<pre><code>{"errors":[{"message":"Bad Authentication data","code":215}]}
</code></pre>
<p>which is all over Stack Overflow. So I tried</p>
<ul>
<li>logging out of Twitter again,</li>
<li>opening the link on various browsers,</li>
<li>removing the <code>scope</code> parameter from the <code>OAuth2Session</code> object,</li>
<li>changing the <code>scope</code> join character from <code>' '</code> to <code>'%20'</code> and <code>'+'</code>,</li>
<li>double-checking and regenerating Client ID + Secret,</li>
<li>removing Authlib and implementing the <a href="https://pastebin.com/2C9GeNt2" rel="nofollow noreferrer">same code</a> using just Python requests</li>
</ul>
<p>I believe <a href="https://stackoverflow.com/questions/29972426/twitter-error-code-215-bad-authentication-data">another answer</a> (and others like these) fail to help in my situation because Authlib's <code>OAuth2Session</code> <em>should be</em> correctly set up and authenticating via OAuth2 (unless there's a problem with the library).</p>
<p>So, after that, I tried adding the callback URI through the client. I used a local callback URI with a Flask app. This is the same Callback URI I set up in the Twitter developer portal. Although this probably isn't done correctly, I don't believe it's causing Error 215. I thought I'd include this just in case:</p>
<pre class="lang-py prettyprint-override"><code>callback_uri = 'http://127.0.0.1:5000/oauth/callback'
client = OAuth2Session(client_id, client_secret, scope=scope, redirect_uri=callback_uri)
</code></pre>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
app = Flask(__name__)
@app.route("/oauth/callback", methods=["GET"])
def callback():
print('Callback')
</code></pre>
<p>I could be missing something, or perhaps my data is improperly formatted in some way. This has proven to be a frustrating issue.</p>
|
<python><twitter><oauth-2.0><python-requests><authlib>
|
2022-12-26 05:49:44
| 1
| 1,485
|
Ben Myers
|
74,917,543
| 3,878,377
|
Obtain version of packages installed from docker image/container
|
<p>Assume I pull a docker image using <code>docker pull image1</code>. The image is for a python application and its dependencies. I can run a container from this image, and everything works well. I am interested in finding the list of all installed packages and their dependencies from the running container or pulled image. I am wondering if there is a way to get that.</p>
<p>Note: I know if there is a requirements.txt, then I can find all the information but assume there is no such thing.</p>
|
<python><docker><pip><package><dockerfile>
|
2022-12-26 05:41:20
| 0
| 1,013
|
user59419
|
74,917,459
| 1,843,011
|
How to add two derived fields in a single statement?
|
<p>I have a dataframe, and based on certain condition, I need to add two calculated fields to the dataframe. I can do this in two statements, each one at a time.
Is there a way to add more than one fields at the same time?
Is there any performance difference in these two approaches?</p>
<pre><code>import pandas as pd
df=pd.DataFrame([{"Name":"Axe","x":10,"y":20},{"Name":"Tree","x":50,"y":15},{"Name":"Sand","x":-10,"y":-15}])
df.loc[df["x"] > 0, "SUM"] = df["x"] + df["y"]
df.loc[df["x"] > 0, "DIFF"] = df["x"] - df["y"]
df.head()
Name x y SUM DIFF
0 Axe 10 20 30.0 -10.0
1 Tree 50 15 65.0 35.0
2 Sand -10 -15 NaN NaN
</code></pre>
|
<python><pandas><dataframe><numpy>
|
2022-12-26 05:21:40
| 2
| 3,582
|
Remis Haroon - رامز
|
74,917,455
| 11,332,693
|
Remove space between string after comma in python dataframe column
|
<p>df1</p>
<pre><code>ID Col
1 new york, london school of economics, america
2 california & washington, harvard university, america
</code></pre>
<p>Expected output is :</p>
<p>df1</p>
<pre><code>ID Col
1 new york,london school of economics,america
2 california & washington,harvard university,america
</code></pre>
<p>My try is :</p>
<pre><code>df1[Col].apply(lambda x : x.str.replace(", ","", regex=True))
</code></pre>
|
<python><pandas><string><text>
|
2022-12-26 05:21:08
| 4
| 417
|
AB14
|
74,917,247
| 13,097,857
|
Can someone explain why this function returns None?
|
<p>I've been trying to figure out why this function is returning None every time I run it, I would appreciate a lot if someone could explain my why.</p>
<pre><code>x = set([1,2,3])
def inserta(multiconjunto, elemento):
a = multiconjunto.add(elemento)
return a
mc1 = inserta(x, 2)
print(mc1)
</code></pre>
|
<python><arrays><list><function><set>
|
2022-12-26 04:22:59
| 1
| 302
|
Sebastian Nin
|
74,917,129
| 10,829,044
|
Pandas - compute and pivot to get revenue from previous two years
|
<p>I have a dataframe like as below</p>
<pre><code>df = pd.DataFrame(
{'stud_id' : [101, 101, 101, 101,
101, 102, 102, 102],
'sub_code' : ['CSE01', 'CSE01', 'CSE01',
'CSE01', 'CSE02', 'CSE02',
'CSE02', 'CSE02'],
'ques_date' : ['10/11/2022', '06/06/2022','09/04/2022', '27/03/2022',
'13/05/2010', '10/11/2021','11/1/2022', '27/02/2022'],
'revenue' : [77, 86, 55, 90,
65, 90, 80, 67]}
)
df['ques_date'] = pd.to_datetime(df['ques_date'])
</code></pre>
<p>I would like to do the below</p>
<p>a) Compute custom financial year based on our organization FY calendar. Meaning, Oct-Dec is Q1, Jan -Mar is Q2,Apr - Jun is Q3 and July to Sep is Q4.</p>
<p>b) Group by stud_id</p>
<p>c) Compute sum of revenue from previous two custom FYs (from a specific date = 20/12/2022). For example, if we are in the FY-2023, I would like to get the sum of revenue for a customer from FY-2022 and FY-2021 separately</p>
<p>So, I tried the below based on this post <a href="https://stackoverflow.com/questions/74860835/pandas-compute-previous-custom-quarter-wise-total-revenue-and-reshape-table">here</a></p>
<pre><code>df['custom_qtr'] = pd.to_datetime(df['ques_date'], dayfirst=True).dt.to_period('Q-SEP')
date_1 = pd.to_datetime('20-12-2022') # CUT-OFF DATE
df['custom_year'] = df['custom_qtr'].astype(str).str.extract('(?P<year>\d+)')
df['date_based_qtr'] = date_1.to_period('Q-SEP')
df['custom_date_year'] = df['date_based_qtr'].astype(str).str.extract('(?P<year>\d+)')
df['custom_year'] = df['custom_year'].astype(int)
df['custom_date_year'] = df['custom_date_year'].astype(int)
df['diff'] = df['custom_date_year'].sub(df['custom_year'])
df = df[df['diff'].isin([1,2])]
out_df = df.pivot_table("revenue", index=['stud_id'],columns=['custom_year'],aggfunc=['sum']).add_prefix('rev_').reset_index().droplevel(0,axis=1)
</code></pre>
<p>But this results in an inconcistent output column like below</p>
<p><a href="https://i.sstatic.net/cdc0Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cdc0Q.png" alt="enter image description here" /></a></p>
<p>I expect my output to be like as below</p>
<p><a href="https://i.sstatic.net/k0JzB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k0JzB.png" alt="enter image description here" /></a></p>
<p><strong>updated output</strong></p>
<p><a href="https://i.sstatic.net/1nMPe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1nMPe.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe><group-by><pivot-table>
|
2022-12-26 03:44:11
| 1
| 7,793
|
The Great
|
74,917,051
| 3,250,829
|
Tensorflow Error on Macbook M1 Pro - NotFoundError: Graph execution error
|
<p>I've installed Tensorflow on a Macbook Pro M1 Max Pro by first using Anaconda to install the dependencies:</p>
<pre><code>conda install -c apple tensorflow-deps
</code></pre>
<p>Then after, I install the Tensorflow distribution that is specific for the M1 architecture and additionally a toolkit that works with the Metal GPUs:</p>
<pre><code>pip install tensorflow-metal tensorflow-macos
</code></pre>
<p>I then write a very simple feedforward architecture with some dummy training and validation data to see if I can execute a training session:</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import layers
import numpy as np
model = Sequential([layers.Input((3, 1)),
layers.LSTM(64),
layers.Dense(32, activation='relu'),
layers.Dense(32, activation='relu'),
layers.Dense(1)])
model.compile(loss='mse',
optimizer=Adam(learning_rate=0.001),
metrics=['mean_absolute_error'])
X_train = np.random.rand(100,3)
y_train = np.random.rand(100)
X_val = np.random.rand(100,3)
y_val = np.random.rand(100)
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100)
</code></pre>
<p>When I execute this, I get a slew of errors, the origin being a <code>NotFoundError: Graph execution error</code>. I assume this has something to do with the computational graph of the network that Tensorflow is setting up for me, based on my <code>Sequential</code> definition specified before compilation and training:</p>
<pre><code>File ~/test.py:20
18 X_val = np.random.rand(100,3)
19 y_val = np.random.rand(100)
---> 20 model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100)
File ~/anaconda3/envs/cv/lib/python3.8/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File ~/anaconda3/envs/cv/lib/python3.8/site-packages/tensorflow/python/eager/execute.py:52, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
50 try:
51 ctx.ensure_initialized()
---> 52 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
53 inputs, attrs, num_outputs)
54 except core._NotOkStatusException as e:
55 if name is not None:
NotFoundError: Graph execution error:
Detected at node 'StatefulPartitionedCall_7' defined at (most recent call last):
File "/Users/rphan/anaconda3/envs/cv/bin/ipython", line 8, in <module>
sys.exit(start_ipython())
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/__init__.py", line 123, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/traitlets/config/application.py", line 1041, in launch_instance
app.start()
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/terminal/ipapp.py", line 318, in start
self.shell.mainloop()
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py", line 685, in mainloop
self.interact()
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py", line 678, in interact
self.run_cell(code, store_history=True)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2940, in run_cell
result = self._run_cell(
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2995, in _run_cell
return runner(coro)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner
coro.send(None)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3194, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3373, in run_ast_nodes
if await self.run_code(code, result, async_=asy):
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3433, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-0ed839f9b556>", line 1, in <module>
get_ipython().run_line_magic('run', 'test.py')
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2364, in run_line_magic
result = fn(*args, **kwargs)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/magics/execution.py", line 829, in run
run()
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/magics/execution.py", line 814, in run
runner(filename, prog_ns, prog_ns,
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2797, in safe_execfile
py3compat.execfile(
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/utils/py3compat.py", line 55, in execfile
exec(compiler(f.read(), fname, "exec"), glob, loc)
File "/Users/rphan/test.py", line 20, in <module>
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/engine/training.py", line 1650, in fit
tmp_logs = self.train_function(iterator)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/engine/training.py", line 1249, in train_function
return step_function(self, iterator)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/engine/training.py", line 1233, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/engine/training.py", line 1222, in run_step
outputs = model.train_step(data)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/engine/training.py", line 1027, in train_step
self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 527, in minimize
self.apply_gradients(grads_and_vars)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1140, in apply_gradients
return super().apply_gradients(grads_and_vars, name=name)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 634, in apply_gradients
iteration = self._internal_apply_gradients(grads_and_vars)
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1166, in _internal_apply_gradients
return tf.__internal__.distribute.interim.maybe_merge_call(
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1216, in _distributed_apply_gradients_fn
distribution.extended.update(
File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1211, in apply_grad_to_update_var
return self._update_step_xla(grad, var, id(self._var_key(var)))
Node: 'StatefulPartitionedCall_7'
could not find registered platform with id: 0x1056be9e0
[[{{node StatefulPartitionedCall_7}}]] [Op:__inference_train_function_4146]
</code></pre>
<p>I have no further insight on what this <code>Graph execution error</code> means. Has someone seen these errors before? This seems to be a very simple network and I can't seem to understand why the training doesn't execute.</p>
|
<python><macos><tensorflow><deep-learning><metal>
|
2022-12-26 03:15:11
| 1
| 104,825
|
rayryeng
|
74,917,035
| 817,659
|
Don't truncate columns output
|
<p>I am setting the <code>options</code> like this</p>
<pre><code>pd.options.display.max_columns = None
</code></pre>
<p>When I try to print the <code>DataFrame</code>, I get truncated columns:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
Index(['contractSymbol', 'strike', 'currency', 'lastPrice', 'change', 'volume',
'bid', 'ask', 'contractSize', 'lastTradeDate', 'impliedVolatility',
'inTheMoney', 'openInterest', 'percentChange'],
dtype='object')
contractSymbol strike currency \
symbol expiration optionType
TSLA 2022-12-30 calls TSLA221230C00050000 50.00 USD
calls TSLA221230C00065000 65.00 USD
</code></pre>
<p>How do I show all columns in one row?</p>
|
<python><pandas>
|
2022-12-26 03:11:34
| 3
| 7,836
|
Ivan
|
74,916,881
| 11,860,883
|
slicing assignment numpy does not work as expected
|
<pre><code>import numpy as np
array = np.random.uniform(0.0, 1.0, (100, 2))
array[(array[:, 1:2] < 3.0).flatten()][:][1:2] = 4.0
</code></pre>
<p>I want to change the second value of the rows who is less than 3.0 to 4.0, but the above code does not work. I tried to search a little bit, it appears that fancy slicing always just operates on a copy of the original array. Is that true? How to do the correct assignment in this case?</p>
|
<python><numpy>
|
2022-12-26 02:14:18
| 1
| 361
|
Adam
|
74,916,803
| 5,945,518
|
Update programmatically "value" and "delta" attributes of Indicators using Plotly gauge charts
|
<p>From the examples illustrated in <a href="https://plotly.com/python/indicator/" rel="nofollow noreferrer"><strong>How to make gauge charts in Python with Plotly</strong></a>, I wonder whether it is possible to programmatically update the fields, hereinafter:</p>
<ul>
<li><code>value</code></li>
<li><code>delta</code></li>
</ul>
<p>?</p>
<p><strong>N.B:</strong> By updating programmatically, I mean:</p>
<ul>
<li><p>assigning global variables like <code>var1</code>, <code>var2</code> to <code>attributes</code>, while ensuring - in this case - that their values are passed to <code>value</code>, <code>delta</code>:</p>
<pre><code>fig.add_trace(go.Indicator(
value = var1,
delta = {'reference': var2},
gauge = {
'axis': {'visible': False}},
domain = {'row': 0, 'column': 0}))
</code></pre>
</li>
</ul>
|
<python><plotly-dash><plotly>
|
2022-12-26 01:52:01
| 1
| 685
|
dark.vador
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.