QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
75,097,347
| 514,149
|
No preview images for dataset in ClearML web UI
|
<p>After creating a new dataset via CLI for a bunch of images, I closed that dataset and thus uploaded it to our own, newly installed ClearML server. Now in the web UI the new dataset has been created and can be opened, the images are being listed. However, none of the images within that dataset is shown in any of the previews.</p>
<p>Here is a simple script to test this:</p>
<pre class="lang-py prettyprint-override"><code>from clearml import Dataset
IMG_PATH = "/home/mfb/Temp/sample-ds/50-ok.jpg"
# Create dataset and add sample image
ds = Dataset.create(dataset_name="Test", dataset_project="Dataset-Test")
ds.add_files(path=IMG_PATH)
ds.upload()
# Add and report image
logger = ds.get_logger()
logger.report_image("image", "sample image", iteration=0, local_path=IMG_PATH)
logger.flush()
# Finalize the dataset
ds.finalize()
</code></pre>
<p>But there are no sample images in the web UI:</p>
<p><a href="https://i.sstatic.net/LLsay.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LLsay.png" alt="enter image description here" /></a></p>
<p>Any ideas?</p>
|
<python><clearml>
|
2023-01-12 13:51:21
| 1
| 10,479
|
Matthias
|
75,097,288
| 11,868,566
|
Trouble with attachments in outlook using SMTPlib from python
|
<p>I am writing an email system for my application.
Just a simple email with an attachment (pdf file) that needs to be sent upon completing a step.</p>
<p>On every client that I can test (Geary on linux, Outlook on mobile, Outlook on web, our own email system using roundcube) my attachments comes through perfectly and we have no problems.</p>
<p>Except for the Outlook client on windows, our mails just get received without the attachment.</p>
<p>I've tried changing my attachment to .png , .jpg or even .docx and with all of the filetypes it is just the same problem as above.</p>
<p>I'm using this code:</p>
<pre><code>import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email import encoders
import base64
FROM = "sender@mail.com"
data = request.data['data']
base64_string = data['file']
base64_string = base64_string.split(',')[1]
attachment_data = base64.b64decode(base64_string)
PASS = ""
SERVER = 'mail.ourhost.be'
TO = "receiver@mail.com"
msg = MIMEMultipart('alternative')
msg['Subject'] = "subject"
msg['From'] = FROM
msg['To'] = TO
html = f"""\
<html style="font-family: Heebo;">
my email content
</html>
"""
part2 = MIMEText(html, 'html')
msg.attach(part2)
# Add the decoded attachment data to the email
part = MIMEBase('application', 'pdf')
part.set_payload((attachment_data))
encoders.encode_base64(part)
filename = f'name.pdf'
part.add_header('Content-Disposition', "attachment; filename= %s" % filename)
msg.attach(part)
</code></pre>
<p>Is there anything particular that I need to watch out for in Outlook on windows?</p>
|
<python><email><outlook><email-attachments><smtplib>
|
2023-01-12 13:47:08
| 1
| 456
|
YorbjΓΆrn
|
75,097,271
| 3,337,597
|
Python unittest startTestRun to execute setup only once before all tests
|
<p>I have several test files in different directories.</p>
<pre><code>\tests
\subtestdir1
-__init__.py
-test1.py
\subtestdir2
-__init__.py
-test2.py
-__init__.py
-test3.py
</code></pre>
<p>I need to do some setups only once before all tests in all test files.</p>
<p>According to <a href="https://stackoverflow.com/a/66252981">https://stackoverflow.com/a/66252981</a>, the top-level <code>__init__.py</code> looks like this:</p>
<pre><code>import unittest
OLD_TEST_RUN = unittest.result.TestResult.startTestRun
def startTestRun(self):
print('once before all tests')
OLD_TEST_RUN(self)
unittest.result.TestResult.startTestRun = startTestRun
</code></pre>
<p>I've tried this too: <a href="https://stackoverflow.com/a/64892396/3337597">https://stackoverflow.com/a/64892396/3337597</a></p>
<pre><code>import unittest
def startTestRun(self):
print('once before all tests')
setattr(unittest.TestResult, 'startTestRun', startTestRun)
</code></pre>
<p>In both cases, all tests ran successfully, but startTestRun doesn't execute. I couldn't figure out why. I appreciate any clarification.</p>
<p>(I use unittest.TestCase and run my tests by right click on the tests directory and clicking Run 'Python tests in test...')</p>
|
<python><python-3.x><unit-testing><python-unittest>
|
2023-01-12 13:46:13
| 2
| 805
|
Reyhaneh Sharifzadeh
|
75,097,183
| 14,790,056
|
Why is Jupyter suddenly showing dataframe as text-based?
|
<p>I've used jupyter notebook for a while now and if I do <code>df.head()</code> it always returns a nicely formatted table format. now I called the data and I get this!?? why? and how do I fix it?</p>
<pre><code>import pandas as pd
df = pd.read_csv("df.csv")
df.head()
</code></pre>
<pre><code> Unnamed: 0 address date \
0 0 0x0000000000000000000000000000000000000000 2016Q3
1 1 0x0000000000000000000000000000000000000000 2016Q4
2 2 0x0000000000000000000000000000000000000000 2016Q4
3 3 0x0000000000000000000000000000000000000000 2017Q1
4 4 0x0000000000000000000000000000000000000000 2017Q1
token_address balance decimals \
0 0xd8912c10681d8b21fd3742244f44658dba12264e 2.000000e+19 18.0
1 0xd8912c10681d8b21fd3742244f44658dba12264e 2.060000e+19 18.0
2 0xe0b7927c4af23765cb51314a0e0521a9645f0e2a 4.500000e+11 9.0
3 0xd8912c10681d8b21fd3742244f44658dba12264e 2.060000e+19 18.0
4 0xe0b7927c4af23765cb51314a0e0521a9645f0e2a 4.571544e+11 9.0
token_price tokens_USD
0 1.99522 39.904400
1 1.39913 28.822078
2 8.53556 3841.002000
3 1.33777 27.558062
4 19.42000 8877.938149
</code></pre>
<p>I have tried the following but to no avail.</p>
<pre><code>from IPython.display import display
display(df)
</code></pre>
|
<python><dataframe><jupyter-notebook>
|
2023-01-12 13:39:59
| 0
| 654
|
Olive
|
75,097,042
| 10,357,604
|
Why is the module 'ultralytics' not found even after pip installing it in the Python conda environment?
|
<p>In a conda environment with Python 3.8.15 I did
<code>pip install ultralytics</code></p>
<blockquote>
<p>successfully installed ...,ultralytics-8.0.4</p>
</blockquote>
<p>But when running <code>from ultralytics import YOLO</code> , it says</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'ultralytics'</p>
</blockquote>
|
<python><pip><conda><yolo><modulenotfounderror>
|
2023-01-12 13:30:11
| 8
| 1,355
|
thestruggleisreal
|
75,096,739
| 11,809,811
|
copy an PIL image to the macOS clipboard with Python
|
<p>I am trying to copy a image created in PIL to the macOS clipboard.</p>
<p>I found 2 ways:</p>
<ol>
<li><p>Using Pasteboard (pypi.org/project/pasteboard) but that one does not seem to work with Python 3.10 (I get a wheels error when trying to install it).</p>
</li>
<li><p>The other way is from here (<a href="https://stackoverflow.com/questions/54008175/copy-an-image-to-macos-clipboard-using-python">Copy an image to MacOS clipboard using python</a>) and uses the following code:</p>
<pre><code>import subprocess
subprocess.run(["osascript", "-e", 'set the clipboard to (read (POSIX file
'image.jpg') as JPEG picture)'])
</code></pre>
</li>
</ol>
<p>I tried playing around with this second approach but I cannot get it to work with an image created in Pillow. For example:</p>
<pre><code>import subprocess
from PIL import Image
image = Image.open('img.png')
subprocess.run(["osascript", "-e", 'set the clipboard to (read (POSIX file '???') as JPEG picture)'])
</code></pre>
<p>In the original post the '???' was an image.jpg, so how could I insert the PIL data in there? I tried some variations but kept on getting the same error:</p>
<pre><code>CFURLGetFSRef was passed an URL which has no scheme (the URL will not work with other CFURL routines)
22:67: 2023-01-12 13:53:42.227 osascript[1361:27923] CFURLCopyResourcePropertyForKey failed because it was passed an URL which has no scheme
execution error: Canβt make file ":image" into type file. (-1700)
</code></pre>
|
<python><macos><python-imaging-library>
|
2023-01-12 13:02:47
| 1
| 830
|
Another_coder
|
75,096,711
| 3,717,987
|
torch.einsum to matmul conversion
|
<p>I have the following line:</p>
<pre><code>torch.einsum("bfts,bhfs->bhts", decay_kernel, decay_q)
</code></pre>
<p>I have a codebase where I need to convert these einsums to primitive methods like <code>reshape</code>, <code>matmul</code>, <code>multiplication</code>, <code>sum</code>, and so on. Is it even possible? I had bunch of other einsums that I could convert them just fine but I can't seem to understand what's going on in this case. Does anyone have a clue? Thanks in advance.</p>
|
<python><torch>
|
2023-01-12 13:00:52
| 1
| 419
|
majstor
|
75,096,556
| 18,089,995
|
In Django how to convert a single page pdf from html template?
|
<p>I am using <code>puppeteer_pdf</code> package to convert an HTML template into a pdf, but when my content increases, it creates another page in the pdf. I want the whole pdf as a single page.</p>
<p>My main goal is the make a jpg image from that pdf and send the image to the client. If there is another way to make a jpg image from HTML maintaining all CSS and images it will be very helpful.</p>
<p>This is my code :</p>
<pre><code>import os
from puppeteer_pdf import render_pdf_from_template
from django.conf import settings
from django.http import HttpResponse
def generate_pdf_invoice(invoice):
context = {
'invoice': invoice
}
output_temp_file = os.path.join(
settings.BASE_DIR, 'static', 'temp.pdf')
pdf = render_pdf_from_template(
input_template='shop/invoice_email_pdf.html',
header_template='',
footer_template='',
context=context,
cmd_options={
'format': 'A4',
'scale': '1',
'marginTop': '0',
'marginLeft': '0',
'marginRight': '0',
'marginBottom': '0',
'printBackground': True,
'preferCSSPageSize': False,
'output': output_temp_file,
'pageRanges': '1-2',
}
)
filename = 'test.pdf'
# return pdf
response = HttpResponse(pdf, content_type='application/pdf;')
response['Content-Disposition'] = 'inline;filename='+filename
return response
</code></pre>
|
<python><django><pdf>
|
2023-01-12 12:47:54
| 1
| 595
|
Manoj Kamble
|
75,096,530
| 14,790,056
|
How to disply output as a table in jupyter notebook
|
<p>Usually, when i do <code>df.head()</code> on jupyter notebote, it gives me a nice table format, but suddenly it is giving me the following.</p>
<p>I am sure what changed? can you help with displaying output as a table on jupyter notebook?</p>
<p><a href="https://i.sstatic.net/wHhOZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wHhOZ.png" alt="enter image description here" /></a></p>
<p>data</p>
<pre><code> address date \
0 0x0000000000000000000000000000000000000000 2016Q3
1 0x0000000000000000000000000000000000000000 2016Q4
2 0x0000000000000000000000000000000000000000 2017Q1
3 0x0000000000000000000000000000000000000000 2017Q2
4 0x0000000000000000000000000000000000000000 2017Q3
tokens_USD
0 3.990440e+01
1 3.869824e+03
2 8.905545e+03
3 7.586588e+04
4 1.413279e+08
</code></pre>
|
<python><dataframe><jupyter-notebook>
|
2023-01-12 12:46:01
| 0
| 654
|
Olive
|
75,096,511
| 3,368,667
|
Getting rid of special characters and unicode in byte object converted to string in Python
|
<p>I'm having difficulty getting ride of special characters and unicode in Pandas dataframe.</p>
<p>Here is an example of one of the byte objects:</p>
<pre><code>b"Asana Check out your two-week recap. \n \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \n\xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \n\xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \n\n\n View in browser
</code></pre>
<p>I tried using regex to eliminate the repeating special characters with this:</p>
<pre><code>re.sub(r"""|b"|(\\n)+|b|\\xe2\\x80\\x8c '""", "", string.decode('utf-8'))
</code></pre>
<p>When I run this code, I get:</p>
<pre><code>Asana Check out your two-week recap. \n \u200c \u200c
</code></pre>
<p>I can't remove the special unicode characters with regex, so tried doing</p>
<pre><code>string.encode("ascii", "ignore")
</code></pre>
<p>That works! But, then I still have some special characters in the string:</p>
<pre><code>b"Asana Check out your two-week recap. \n
</code></pre>
<p>But, when I try to use</p>
<pre><code>re.sub(r"""|b"|(\\n)+|b'""", "", string)
</code></pre>
<p>I get an error that I can't use .sub on a byte object. If I add <code>.decode('utf-8')</code> the <code>\n</code> don't get replaced.</p>
<p>Can someone help explain what's going on and how to fix this? I'm used to just dealing with plain text strings.</p>
<p>Here's some code below, but I'm not sure how to create the same kind of 'byte' object that I currently have in the pandas dataframe. So, it's a simple string below:</p>
<pre><code>string = 'b"Asana Check out your two-week recap. \n \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \n\xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \n\xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \xe2\x80\x8c \n\n\n View in browser'
string = re.sub(r"""|b"|(\\n)+|b'""", "", string.decode('utf-8'))
string
string = string.encode("ascii", "ignore")
string
re.sub(r"""|b"|\\n|b'""", "", string.decode('utf-8'))
</code></pre>
<p>Thanks and please let me know how I can make this question easier to answer!</p>
|
<python><pandas><regex>
|
2023-01-12 12:44:16
| 1
| 1,077
|
tom
|
75,096,400
| 10,570,372
|
What is the point of using `MISSING` in hydra or pydantic?
|
<p>I am creating a configuration management system in python and are exploring options between hydra/pydantic/both. I get a little confused over when to use <code>MISSING</code> versus just leaving it blank/optional. I will use an example of <a href="https://omegaconf.readthedocs.io/en/2.1_branch/structured_config.html#nesting-structured-configs" rel="nofollow noreferrer">OmegaConf</a> here since
the underlying structure of hydra uses it.</p>
<pre><code>@dataclass
class User:
# A simple user class with two missing fields
name: str = MISSING
height: Height = MISSING
</code></pre>
<p>where it says that this <code>MISSING</code> field will convert to yaml's <code>???</code> equivalent. Can I just leave it blank?</p>
|
<python><config><pydantic><fb-hydra><omegaconf>
|
2023-01-12 12:34:00
| 1
| 1,043
|
ilovewt
|
75,096,386
| 8,081,835
|
Keras TimeDistributed input_shape mismatch
|
<p>I'm trying to build a model with the TimeDistributed Dense layer, but I still get this error.</p>
<pre><code>ValueError: `TimeDistributed` Layer should be passed an `input_shape` with at least 3 dimensions, received: (None, 16)
</code></pre>
<p>Am I missing something? The data has a format like the one provided in the snippet below. The model is simplified, but the error is the same.</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, TimeDistributed, Dense
import numpy as np
data = np.random.random((100, 10, 32))
labels = np.random.randint(2, size=(100, 10, 1))
model = Sequential()
model.add(LSTM(16, input_shape=(10, 32)))
model.add(TimeDistributed(Dense(10, activation='sigmoid')))
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(data, labels, epochs=10, batch_size=32)
</code></pre>
|
<python><tensorflow><keras>
|
2023-01-12 12:32:48
| 1
| 771
|
Mateusz Dorobek
|
75,096,369
| 14,594,208
|
How to count the occurrences of a column's value in a column of lists?
|
<p>Consider the following dataframe:</p>
<pre><code> column_of_lists scalar_col
0 [100, 200, 300] 100
1 [100, 200, 200] 200
2 [300, 500] 300
3 [100, 100] 200
</code></pre>
<p>The desired output would be a Series, representing how many times the scalar value of <code>scalar_col</code> appears inside the list column.</p>
<p>So, in our case:</p>
<pre><code>1 # 100 appears once in its respective list
2 # 200 appears twice in its respective list
1 # ...
0
</code></pre>
<p>I have tried something along the lines of:</p>
<pre><code>df['column_of_lists'].apply(lambda x: x.count(df['scalar_col'])
</code></pre>
<p>and I get it that it won't work because I am asking it to count a Series instead of a single value.</p>
<p>Any help would be welcome!</p>
|
<python><pandas><series>
|
2023-01-12 12:30:37
| 3
| 1,066
|
theodosis
|
75,096,366
| 6,694,814
|
Python folium Clickformarker - 2 functions don't collaborate with each other
|
<p>I would like to use the folium.CLickFormarker macro more than once in my map. Unfortunately it doesn't work, as my function takes only the first one.</p>
<pre><code>fs=folium.FeatureGroup(name="Surveyors")
df = pd.read_csv("survey.csv")
class Circle(folium.ClickForMarker):
_template = Template(u"""
{% macro script(this, kwargs) %}
var circle_job = L.circle();
function newMarker(e){
circle_job.setLatLng(e.latlng).addTo({{this._parent.get_name()}});
circle_job.setRadius(50000);
circle_job.bringToFront();
};
{{this._parent.get_name()}}.on('click', newMarker);
{% endmacro %}
""") # noqa
def __init__(self, popup=None):
super(Circle, self).__init__()
self._name = 'Circle'
job_range2 = Circle()
class ClickForOneMarker(folium.ClickForMarker):
_template = Template(u"""
{% macro script(this, kwargs) %}
var new_mark = L.marker();
function newMarker(e){
new_mark.setLatLng(e.latlng).addTo({{this._parent.get_name()}});
new_mark.setZIndexOffset(-1);
new_mark.on('dblclick', function(e){
{{this._parent.get_name()}}.removeLayer(e.target)})
var lat = e.latlng.lat.toFixed(4),
lng = e.latlng.lng.toFixed(4);
new_mark.bindPopup("<a href=https://www.google.com/maps?layer=c&cbll=" + lat + "," + lng + " target=blank><img src=img/streetview.svg width=150 title=StreetView></img></a>")//.openPopup();
};
{{this._parent.get_name()}}.on('click', newMarker);
{% endmacro %}
""") # noqa
def __init__(self, popup=None):
super(ClickForOneMarker, self).__init__()
self._name = 'ClickForOneMarker'
click_for_marker = ClickForOneMarker()
map.add_child(click_for_marker)
for i,row in df.iterrows():
lat =df.at[i, 'lat']
lng = df.at[i, 'lng']
sp = df.at[i, 'sp']
phone = df.at[i, 'phone']
role = df.at[i, 'role']
rad = int(df.at[i, 'radius'])
popup = '<b>Phone: </b>' + str(df.at[i,'phone'])
order_rng = folium.Circle([lat,lng],
radius=rad * 10.560,
popup= df.at[i, 'sp'],
tooltip = sp + ' - Job limits',
color='black',
fill=True,
fill_color='black',
opacity=0.1,
fill_opacity=0.1
)
if role == 'Contractor':
fs.add_child(
folium.Marker(location=[lat,lng],
tooltip=folium.map.Tooltip(
text='<strong>Contact surveyor</strong>',
style=("background-color: lightgreen;")),
popup=popup,
icon = folium.Icon(color='darkred', icon='glyphicon-user'
)
)
)
fs.add_child (
folium.Marker(location=[lat,lng],
popup=popup,
icon = folium.DivIcon(html="<b>" + sp + "</b>",
class_name="mapText_contractor",
icon_anchor=(30,5))
)
)
fs.add_child(job_range)
</code></pre>
<p>The first one is just what I want to include as the child for the existing feature group. The second one should be applicable to the entire map.
Both don't work when included together. Is it folium limited to one ClickForMarker macro or something?</p>
|
<python><folium>
|
2023-01-12 12:30:11
| 1
| 1,556
|
Geographos
|
75,096,336
| 1,436,800
|
How to fix Assertion error 400 in the following problem?
|
<p>I am working on Django project.
I am writing a test case for the following model:</p>
<pre><code>class Meeting(models.Model):
team = models.OneToOneField("Team", on_delete=models.CASCADE)
created_at = models.DateTimeField()
def __str__(self) -> str:
return self.team.name
</code></pre>
<p>The code for Test case is as follows:</p>
<pre><code>class MeetingTestCase(BaseTestCase):
def test_meeting_post(self):
self.client.force_login(self.owner)
meeting_data = {
"team": self.team.id,
"created_at": "2023-01-12T01:52:00+05:00"
}
response = self.client.post("/api/meeting/", data=meeting_data)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
</code></pre>
<p>Inside .setup() function, I have created a team object:</p>
<pre><code>self.team = factories.TeamFactory(name="Team", organization=self.organization)
</code></pre>
<p>This is my factory class:</p>
<pre><code>class TeamFactory(factory.django.DjangoModelFactory):
class Meta:
model = models.Team
</code></pre>
<p>But My test case is failing and I have following assertion error:</p>
<pre><code>self.assertEqual(response.status_code, status.HTTP_201_CREATED)
AssertionError: 400 != 201
</code></pre>
<p>which means there is something wrong with my request. What wrong am I doing?</p>
|
<python><django><django-rest-framework><django-testing><django-tests>
|
2023-01-12 12:28:03
| 1
| 315
|
Waleed Farrukh
|
75,096,188
| 6,368,217
|
TypeError when creating a marshmallow schema from dataclass inherited from generic class
|
<p>I'm trying to create a marshmallow schema from dataclass which is inherited from generic type:</p>
<p><code>users_schema = marshmallow_dataclass.class_schema(Users)()</code></p>
<p>This is my code:</p>
<pre><code>from dataclasses import dataclass
from typing import Generic, List, TypeVar
T = TypeVar("T")
@dataclass
class Pagination(Generic[T]):
items: List[T]
@dataclass
class User:
pass
@dataclass
class Users(Pagination[User]):
pass
</code></pre>
<p>However I get this traceback:</p>
<pre><code>src/c1client/entities/schemas.py:39: in <module>
users_schema = marshmallow_dataclass.class_schema(Users)()
../../../.venvs/cva-user-service/lib/python3.9/site-packages/marshmallow_dataclass/__init__.py:357: in class_schema
return _internal_class_schema(clazz, base_schema, clazz_frame)
../../../.venvs/cva-user-service/lib/python3.9/site-packages/marshmallow_dataclass/__init__.py:403: in _internal_class_schema
attributes.update(
../../../.venvs/cva-user-service/lib/python3.9/site-packages/marshmallow_dataclass/__init__.py:406: in <genexpr>
field_for_schema(
../../../.venvs/cva-user-service/lib/python3.9/site-packages/marshmallow_dataclass/__init__.py:701: in field_for_schema
generic_field = _field_for_generic_type(typ, base_schema, typ_frame, **metadata)
../../../.venvs/cva-user-service/lib/python3.9/site-packages/marshmallow_dataclass/__init__.py:505: in _field_for_generic_type
child_type = field_for_schema(
../../../.venvs/cva-user-service/lib/python3.9/site-packages/marshmallow_dataclass/__init__.py:719: in field_for_schema
if issubclass(typ, Enum):
E TypeError: issubclass() arg 1 must be a class
</code></pre>
<p>When I print <code>typ</code> variable from the traceback it is printed like <code>~T</code> which is really not a class.</p>
<p>Also when I make Pagination class concrete - not generic, it works. So the problem is related to Generic.</p>
<p>Any ideas what could I do to make it work?</p>
|
<python><generics><python-typing><python-dataclasses><marshmallow-dataclass>
|
2023-01-12 12:15:55
| 0
| 991
|
Alexander Shpindler
|
75,096,055
| 6,386,155
|
Comparison circle plot in Python/Pandas
|
<p>Is there a way to create a comparison circle plot in python on Pandas dataframe, something similar to this <a href="https://docs.tibco.com/pub/spotfire/7.0.1/doc/html/box/box_what_are_comparison_circles.htm" rel="nofollow noreferrer">https://docs.tibco.com/pub/spotfire/7.0.1/doc/html/box/box_what_are_comparison_circles.htm</a></p>
<p><a href="https://i.sstatic.net/LmlfR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LmlfR.png" alt="enter image description here" /></a></p>
<p>preferably using plotly, panda plot, matplotlib etc</p>
|
<python><pandas><dataframe><plot><comparison>
|
2023-01-12 12:02:55
| 0
| 885
|
user6386155
|
75,096,031
| 15,376,262
|
Convert scraped HTML table to pandas dataframe
|
<p>I would like to scrape a webpage containing data regarding all trains that have arrived and departed from Amsterdam Central station on a specific date, and convert it into a pandas dataframe.</p>
<p>I know I can scrape the webpage and convert it into a pandas dataframe like so (see below), but this doesn't give me the correct table.</p>
<pre><code>import pandas as pd
import requests
url = 'https://www.rijdendetreinen.nl/en/train-archive/2022-01-12/amsterdam-centraal'
response = requests.get(url).content
dfs = pd.read_html(response)
dfs[1]
</code></pre>
<p>What I like to achieve is one pandas dataframe containing all data that's under the header "Train services" of the webpage, like so:</p>
<pre><code>Arrival Departure Destination Type Train Platform Composition
02:44 +2Β½ 02:46 +2 Rotterdam Centraal Intercity 1409 4a VIRM-4 9416
03:17 +5 03:19 +4 Utrecht Centraal Intercity 1410 7a ICM-3 4086
03:44 03:46 Rotterdam Centraal Intercity 1413 7a ICM-3 4014
04:17 04:19 Utrecht Centraal Intercity 1414 7a ICM-4 4216
04:44 04:46 Rotterdam Centraal Intercity 1417 7a ICM-3 4086
... ... ... ... ... ... ...
</code></pre>
<p>I hope there's someone able to help me with that.</p>
|
<python><html><pandas><python-requests>
|
2023-01-12 12:00:46
| 1
| 479
|
sampeterson
|
75,096,011
| 2,732,991
|
Python logging file config with variable directory
|
<p>I'm using Python logging and loading it with <code>logging.config.fileConfig(config_file)</code>, from e.g. <code>logging.cfg</code>. It contains a <code>TimedRotatingFileHandler</code>. It works great. For a long time it has been using (simplied):</p>
<pre><code>[handler_file]
class=handlers.TimedRotatingFileHandler
formatter=complex
level=DEBUG
args=('logs/logging.log', 'midnight')
</code></pre>
<p>Now I want to make the logging directory (<code>logs</code> above) configurable, but still use the configuration file, e.g. <code>logging.cfg</code>.</p>
<p>I was hoping to somehow inject/string interpolate in a variable e.g. <code>log_directory</code> to just do:</p>
<pre><code>args=('%(log_directory)s/logging.log', 'midnight')
</code></pre>
<p>Or since logging does an eval maybe something like:</p>
<pre><code>args=(f'{log_directory}/logging.log', 'midnight')
</code></pre>
<p>I have a feeling I'm missing something simple. How can I best do this, while still using a configuration file?</p>
|
<python><logging><configparser>
|
2023-01-12 11:59:26
| 1
| 20,596
|
Halvor Holsten Strand
|
75,095,999
| 7,497,912
|
How to connect to a Lotus-Notes database with Python?
|
<p>I have to extract data from a Notes database automatically for a data pipeline validation.
With HCL Notes I was able to connect to the database, so I know the access works.
I have the following information to access the database:
host (I got both hostname and ip address), domino server name, database name (.nsf filename)
I tried the noteslib library in the below way:</p>
<pre><code>import noteslib
db = noteslib.Database('domino server name','db filename.nsf')
</code></pre>
<p>I also tried adding the host to the server parameter instead, but it does not work.
I receive this error:</p>
<pre><code>Error connecting to ...Double-check the server and database file names, and make sure you have
read access to the database.
</code></pre>
<p>My question is how can I add the host and the domino server name as well (if it is required)?
Notes HCL authenticates me before accessing the database using the domino server name and the .nsf file. I tried adding the password parameter to the end, but also without luck. I am on company VPN, so that also should not be an issue.</p>
|
<python><lotus-notes><hcl-notes>
|
2023-01-12 11:58:24
| 2
| 417
|
Looz
|
75,095,986
| 4,832,010
|
Building a DataFrame recursively in Python with pd.concat
|
<pre><code>def recursive_df (n):
if n==1:
return pd.DataFrame({"A":[1],"B":[1]})
if n>=2:
return pd.concat(recursive_df(n-1),{"A":[n],"B":[n*n]} )
</code></pre>
<p>this is not working, and i can't see a reason "why" and what i should do about it ?</p>
<blockquote>
<p>TypeError: first argument must be an iterable of pandas objects, you
passed an object of type "DataFrame"</p>
</blockquote>
<p>In practice, the real problem i want to solve is that i have created some dataframes as output of some function to store results, and i want to concatenate them.</p>
<p>for elegance, i want to avoid for loops</p>
<p>thanks</p>
|
<python><pandas><recursion>
|
2023-01-12 11:57:00
| 1
| 1,937
|
Fagui Curtain
|
75,095,937
| 9,182,743
|
Convert String "YYYY-MM-DD hh:mm:ss Etc/GMT" to timestamp in UTC pandas
|
<p>I have a pandas column of datetime-like string values like this exampe:</p>
<pre class="lang-py prettyprint-override"><code>exammple_value = "2022-06-24 16:57:33 Etc/GMT"
</code></pre>
<p>Expected output</p>
<pre><code>Timestamp('2022-06-24 16:57:33+0000', tz='UTC')
</code></pre>
<p>Etc/GMT is the timezone, you can get it in python with:</p>
<pre class="lang-py prettyprint-override"><code>import pytz
list(filter(lambda x: 'GMT' in x, pytz.all_timezones))[0]
----
OUT:
'Etc/GMT'
</code></pre>
|
<python><pandas><datetime><timestamp><utc>
|
2023-01-12 11:52:56
| 1
| 1,168
|
Leo
|
75,095,842
| 16,844,801
|
How to scrape multiple pages for the same item using scrapy
|
<p>I want to scrape multiple pages for the same item. But every time I yield, it returns an increment of the list of items instead of all the sub-items in the same item's list.</p>
<pre><code>class GdSpider(scrapy.Spider):
name = 'pcs'
start_urls = [...]
def parse(self, response):
PC= dict()
PC['Name'] = response.css('h2::text').get()
components_urls = response.css('a::attr(href)').get()
components = []
for url in components_urls:
req = yield scrapy.Request(response.urljoin(url), self.parse_component)
components.append(parse_component(req))
PC['components'] = components
yield PC
def parse_component(self, response):
component_name = response.css('h1::text')
component_tag = response.css('div[class="tag"]::text').get()
yield {"component_name": component_name, "component_tag": component_tag}
</code></pre>
<p>My out should look like:</p>
<pre><code>{"Name": "HP 15", "components": [.....]}
</code></pre>
<p>But it scrapes everything independently:</p>
<pre><code>{"Name": "HP 15", "components": [<generator object GdSpider.parse_part_component at 0x000001B8A7405230>]
{component1}
{component2}
</code></pre>
<p>How can I return one item with all the components inside it using @inline-requests decorator for example?</p>
|
<python><scrapy>
|
2023-01-12 11:44:35
| 1
| 434
|
Baraa Zaid
|
75,095,711
| 15,255,487
|
Error code 304 in flask python with GET method
|
<p>I'm new to python and I faced an error which I totally don't understand why occures.
In the Insomnia client REST API I'm creating item with POST method, and it works well, below it the code</p>
<pre><code>@app.post('/item')
def create_item():
item_data = request.get_json()
if (
"price" not in item_data
or "store_id" not in item_data
or "name" not in item_data
):
abort(
400,
message="Bad request"
)
for item in items.values():
if (
item_data["name"] == item["name"]
and item_data["store_id"] == item["store_id"]
):
abort(400, message="Item already exist")
if item_data["store_id"] not in stores:
abort(404, message="Store not found")
if item_data["store_id"] not in stores:
abort(404, message="Store not found")
item_id = uuid.uuid4().hex
item = {**item_data, "id": item_id}
items["item_id"] = item
return item, 201
</code></pre>
<p>and here is the outcome of post method, created item with "id"
{
"id": "1c0deba2c86542e3bde3bcdb5da8adf8",
"name": "chair",
"price": 17,
"store_id": "e0de0e2641d0479c9801a32444861e06"
}</p>
<p>when I run GET method using "id" from above item putting it to the link I get error code 304</p>
<pre><code>@app.get("/item/<string:item_id>")
def get_item(item_id):
try:
return items[item_id]
except KeyError:
abort(404, message="Item not found")
</code></pre>
<p><a href="https://i.sstatic.net/eBy0x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eBy0x.png" alt="enter image description here" /></a></p>
<p>Can You please suggest what is wrong here ?</p>
<p>thanks</p>
|
<python><flask>
|
2023-01-12 11:32:59
| 1
| 912
|
marcinb1986
|
75,095,700
| 9,778,828
|
Keras doesn't find GPU, how to change that?
|
<p>I am trying to build a simple NN with keras. I have 2 GPUs on my machine:</p>
<p><a href="https://i.sstatic.net/ISGkS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ISGkS.png" alt="enter image description here" /></a></p>
<p>But Keras doesn't seem to recognize any of them:</p>
<pre><code>print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
Num GPUs Available: 0
</code></pre>
<p>Any ideas?</p>
<p>I am using python 3.9, tensorflow 2.11, keras 2.11.</p>
|
<python><tensorflow><machine-learning><keras><gpu>
|
2023-01-12 11:32:03
| 0
| 505
|
AlonBA
|
75,095,651
| 10,811,647
|
How to unpack values from a list stored inside another list
|
<p>I have a list containing some elements, a lit and some other elements like so <code>[a, b, [c, d, e], f, g]</code> and I would like to get <code>[a, b, c, d, e, f, g]</code>. I tried using itertools which I'm not familiar with, but I was unsucessfull:</p>
<pre><code>from itertools import chain
a = 1
b = 2
c = [3, 4, 5]
d = 6
e = 7
list(chain(a, b, c, d, e))
</code></pre>
<p>It throws a TypeError</p>
<p>Any help is appreciated!</p>
|
<python><list><unpack>
|
2023-01-12 11:28:30
| 2
| 397
|
The Governor
|
75,095,632
| 7,720,738
|
Is the __eq__ method defined as part of a Protocol?
|
<p>Consider the following ADT:</p>
<pre><code>T = TypeVar("T")
@dataclass(frozen=True)
class Branch(Generic[T]):
value: T
left: Tree[T]
right: Tree[T]
Tree = Branch[T] | None
</code></pre>
<p>Note the requirement for <code>from __future__ import annotations</code> since <code>Tree</code> is used before it is defined.</p>
<p>Suppose we want to implement a function to check whether a tree contains a given element. A possible implementation is the following:</p>
<pre><code>def contains(t: Tree[T], x: T) -> bool:
match t:
case None:
return False
case Branch(value, left, right):
return x == value or contains(left, x) or contains(right, x)
</code></pre>
<p>This all type checks nicely, except for the fact that it's wrong. We have no guarantee that type <code>T</code> supports equality checking.</p>
<p>I'm new to Python but I understand that <code>Protocol</code>s are the way of defining shared functionality. Is <code>__eq__</code> part of a <code>Protocol</code> that would allow us to put a bound on <code>T</code> in this case, something equivalent to Haskell's <code>Eq</code> type class?</p>
<p>If not then are there any other potential ways to get better type safety here?</p>
|
<python><protocols>
|
2023-01-12 11:27:08
| 1
| 804
|
James Burton
|
75,095,594
| 2,927,489
|
Modeling a One to One Relation with Optional Tables in SQLAlchemy
|
<p><strong>Context:</strong>
I have a table with a lot of columns. I would like to break this data up because it will be used by people from a frontend like Metabase. The data is for people who will be doing simple queries that the frontend guides them through, so it won't be queried with raw SQL and I'm not worried about performance or complexity of multiple join statements etc. Additionally, I don't want to go through hundreds of columns to create "models" for them on the front end. I have very little database design experience myself and am having some trouble grasping how the following example would be implemented in SQLAlchemy.</p>
<p>A lot of examples are just one <code>parent</code> and one <code>child</code> and the concept is a bit nebulous in my mind because the docs SQLAlchemy makes it seem like a <code>one-to-one</code> relationship is one parent and one child, but my understanding is that its one record in a <code>parent</code> table to one record in any <code>child</code> table, though I could be very wrong in my understanding.</p>
<p><strong>Example:</strong>
There are 11 tables and each record in a table only corresponds to a single record in any other table, but some tables depending on a particular column are optional. I want to ensure that when I write and delete records the records on any optional table are removed as well. I'd also like to be able to kind of mix and match any table with any other table by the <code>table1.id</code>, where able.</p>
<pre><code>Table 1: Table 2 (for type a): Table 3 (for type b):
index index index
id table_1.id table_1.id
name some_info some_info_1
type other_info other_info_1
</code></pre>
<p>Attempt:</p>
<pre class="lang-py prettyprint-override"><code># ...
class Contract(Base):
__tablename__: "contract"
index = Column(Integer, primary_key=True, autoincrement="auto")
id = Column(String)
issue_date = Column(Date)
rate_type = Column(String)
# relationship("Rate_type_a", cascade="all, delete", passive_deletes=True)
# relationship("Rate_type_b", cascade="all, delete", passive_deletes=True)
# Not sure how to setup relationship() and or backrefs
#...
class Rate_type_a(Base):
# Optional based on contract.type
__tablename__: "rate_type_a"
index = Column(Integer, primary_key=True, autoincrement="auto")
id = Column(String, ForeignKey("contract.id"))
issue_date = Column(Date)
rate = Column(Integer)
#...
class Rate_type_b(Base):
# Optional based on contract.type
__tablename__: "rate_type_b"
index = Column(Integer, primary_key=True, autoincrement="auto")
id = Column(String, ForeignKey("contract.id"))
issue_date = Column(Date)
rate = Column(Integer)
weight = Column(Integer)
# ...
class Other_info(Base):
# Other Columns that are part of every contract for example
# ...
</code></pre>
<p><strong>Edit:</strong>
It seems I might be able to simplify, this by just putting all rate data in a single table, getting creative with column names, though this is still not ideal as it would have quite a few columns still.</p>
|
<python><sqlalchemy>
|
2023-01-12 11:24:16
| 1
| 324
|
TYPKRFT
|
75,095,439
| 11,743,318
|
Why doesn't figwidth parameter work with pyplot.subplots() call?
|
<p>I'm trying to create a subplot figure in matplotlib, specifying only the width (using <code>figwidth(float)</code> rather than <code>figsize(float, float)</code>), assuming that the height will have some default value. When calling <code>pyplot.subplots()</code>, the <code>**fig_kw</code> parameter is <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots.html#matplotlib.pyplot.subplots" rel="nofollow noreferrer">passed to the <code>pyplot.figure()</code> call</a>:</p>
<blockquote>
<p>All additional keyword arguments are passed to the pyplot.figure
call.</p>
</blockquote>
<p>which in turn <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.figure.html#matplotlib.pyplot.figure" rel="nofollow noreferrer">passes any additional keyword arguments to the <code>Figure</code> constructor</a>. One of the valid Figure constructor keyword arguments is <a href="https://matplotlib.org/stable/api/figure_api.html#module-matplotlib.figure" rel="nofollow noreferrer"><code>figwidth=float</code></a>. However, when this is done via the passing mentioned above (subplots(figwidth) <strong>--></strong> figure(figwidth) <strong>--></strong> Figure(figwidth)) an AttributeError is raised:</p>
<pre><code>AttributeError: 'Figure' object has no attribute 'bbox_inches'
</code></pre>
<p>The MWE to recreate this is:</p>
<pre><code>import matplotlib.pyplot as plt
f, ax = plt.subplots(figwidth=4)
</code></pre>
<p>I thought perhaps this was due to needing to set the height and width together so I tried calling <code>pyplot.figure()</code> directly with parameters <code>figheight=3</code> and <code>figwidth=4</code>, but got the same AttirbuteError.</p>
<p>What am I missing here?</p>
|
<python><matplotlib><figure>
|
2023-01-12 11:11:41
| 2
| 1,377
|
compuphys
|
75,095,429
| 3,414,559
|
Selenium: Python Webdriver to get elements from popup iframe does not find the elements
|
<p>In the case of the URL</p>
<p><a href="https://console.us-ashburn-1.oraclecloud.com/" rel="nofollow noreferrer">https://console.us-ashburn-1.oraclecloud.com/</a></p>
<p>where it pops up to accept cookies, I try to click the button element to allow/deny cookies</p>
<pre><code>//div[@class="pdynamicbutton"]//a[@class="call"]
</code></pre>
<p>but it is not seen.</p>
<p>When I use</p>
<pre><code>switch_to.frame('trustarcNoticeFrame')
</code></pre>
<p>It still does not find it.</p>
<p>Not sure how i can get at these buttons for login.</p>
|
<python><selenium><selenium-chromedriver><webdriver>
|
2023-01-12 11:11:00
| 2
| 847
|
Ray
|
75,095,401
| 7,720,535
|
Vectorizing the aggregation operation on different columns of a Pandas dataframe
|
<p>I have a Pandas dataframe, mostly containing boolean columns. A small example is:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3, 1, 2, 3],
"B": ['a', 'b', 'c', 'a', 'b', 'c'],
"f1": [True, True, True, True, True, False],
"f2": [True, True, True, True, False, True],
"f3": [True, True, True, False, True, True],
"f4": [True, True, False, True, True, True],
"f5": [True, False, True, True, True, True],
"target1": [True, False, True, True, False, True],
"target2": [False, True, True, False, True, False]})
df
</code></pre>
<p>Outout:</p>
<pre><code> A B f1 f2 f3 f4 f5 target1 target2
0 1 a True True True True True True False
1 2 b True True True True False False True
2 3 c True True True False True True True
3 1 a True True False True True True False
4 2 b True False True True True False True
5 3 c False True True True True True False
</code></pre>
<p>for each True and False class of each <code> f</code> columns and for all groups in <code>("A", "B")</code> columns, I want to do a sum over <code>target1</code> and <code>target2</code> columns. Using a loop over <code>f</code> columns, we have:</p>
<pre><code>for col in ["f1", "f2", "f3", "f4", "f5"]:
print(col, "\n",
df[df[col]].groupby(["A", "B"]).agg({"target1": "sum", "target2": "sum"}), "\n",
df[~df[col]].groupby(["A", "B"]).agg({"target1": "sum", "target2": "sum"}))
</code></pre>
<p>Now, I need to do it without the <code>for</code> loop; I mean a vecotization over <code>f</code> columns to reduce the computation time (computation time should be almost equal to time needed for doing it for one <code>f</code> column).</p>
|
<python><pandas><group-by>
|
2023-01-12 11:08:28
| 1
| 485
|
Esi
|
75,095,378
| 3,021,252
|
Adding a market to a line chart Plotly python
|
<p>I'm trying to add a point to the last observation on a time series chart with plotly. It is not very different from the example here <a href="https://stackoverflow.com/a/72539011/3021252">https://stackoverflow.com/a/72539011/3021252</a> for instance. Except it is the last observation. Unfortunately following such pattern modifies the axis range.</p>
<p>Here is an example of an original chart</p>
<pre><code>import plotly.express as px
df = px.data.gapminder().query("country=='Canada'")
fig = px.line(df, x="year", y="lifeExp", title='Life expectancy in Canada')
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/a5zV9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a5zV9.png" alt="enter image description here" /></a></p>
<p>But after adding a marker</p>
<pre><code>import plotly.graph_objects as go
fig.add_trace(
go.Scatter(
x=[df["year"].values[-1]],
y=[df["lifeExp"].values[-1]],
mode='markers'
)
)
</code></pre>
<p>It looks like that
<a href="https://i.sstatic.net/pBNhj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBNhj.png" alt="enter image description here" /></a></p>
<p>Has anyone have an idea how not to introduce this gap on the right?</p>
|
<python><plotly>
|
2023-01-12 11:07:17
| 0
| 1,099
|
kismsu
|
75,095,331
| 4,432,671
|
NumPy: grid to full coordinate set
|
<p>How do I go from a shape (either <code>np.mgrid</code> or <code>np.indices</code>) to actual coordinate vectors if I don't know the number of axes up front? If you know APL, this is its <a href="https://help.dyalog.com/latest/index.htm#Language/Primitive%20Functions/Index%20Generator.htm" rel="nofollow noreferrer">index generator</a> (<code>β³</code>) primitive.</p>
<p>Example: in the 2d case, I want to go from</p>
<pre><code>>>> np.indices((2, 2))
array([[[0, 0],
[1, 1]],
[[0, 1],
[0, 1]]])
</code></pre>
<p>to</p>
<pre><code>[[0, 0], [0, 1],
[1, 0], [1, 1]]
</code></pre>
<p>So it's a concatenate reduction over axis 0. In APL, I can express that just so:</p>
<pre><code>yx
ββββββ
ββ0 0β
ββ1 1β
ββ β
ββ0 1β
ββ0 1β
ββ~βββ
,βΏyx βΒ Catenate reduce-first-axis
βββββββββββββββ
β βββββ βββββ β
β β0 0β β0 1β β
β β~βββ β~βββ β
β βββββ βββββ β
β β1 0β β1 1β β
β β~βββ β~βββ β
βββββββββββββββ
</code></pre>
<p>(or of course the built-in <code>β³</code> directly)</p>
<pre><code>β³2 2
βββββββββββββββ
β βββββ βββββ β
β β0 0β β0 1β β
β β~βββ β~βββ β
β βββββ βββββ β
β β1 0β β1 1β β
β β~βββ β~βββ β
βββββββββββββββ
</code></pre>
|
<python><numpy>
|
2023-01-12 11:03:52
| 1
| 3,737
|
xpqz
|
75,095,250
| 10,924,836
|
Map with sub regions in Python (Choropleth)
|
<p>I am trying to plot a Choropleth map with subregions in Python for Armenia. Below you can see my data.</p>
<pre><code>import numpy as np
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
data_arm = {
'subregion': ['Aragatsotn','Ararat','Armavir','Gegharkunik','Kotayk','Lori','Shirak','Syunik','Tavush','Vayots Dzor','Yerevan'],
'Value':[0.2560,0.083,0.0120,0.9560,0.423,0.420,0.2560,0.043,0.0820,0.4560,0.019]
}
df = pd.DataFrame(data_arm, columns = ['subregion',
'Value'
])
df
</code></pre>
<p>Namely, I want to plot a map for Armenia with all subregions. Unfortunately, I don't know how to implement it in this environment. Below you will see the code where I tried to do something</p>
<pre><code>world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
asia = world.query('continent == "Asia"')
EAC = ["ARM"]
asia["EAC"] = np.where(np.isin(asia["iso_a3"], EAC), 1, 0)
asia.plot(column="iso_a3")
plt.show()
</code></pre>
<p>So can anybody help me how to solve this problem and make Choropleth map similar as Choropleth map shown below:
<a href="https://i.sstatic.net/KFhb7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KFhb7.png" alt="enter image description here" /></a></p>
|
<python><geopandas><choropleth>
|
2023-01-12 10:57:48
| 1
| 2,538
|
silent_hunter
|
75,095,249
| 12,083,557
|
How do I correctly code an abstract class and its subclasses to have getter/setter behavior?
|
<p>In Python (3.11) I have this abstract class:</p>
<pre><code>from abc import ABCMeta, abstractmethod
from copy import deepcopy
class BookPrototype(metaclass=ABCMeta):
@property
def title(self):
pass
@title.setter
@abstractmethod
def title(self, val):
pass
@abstractmethod
def clone(self):
pass
</code></pre>
<p>I create this subclass:</p>
<pre><code>class ScienceFiction(BookPrototype):
def title(self, val):
print("this never gets called without decorator")
def __init__(self):
pass
def clone(self):
return deepcopy(self)
</code></pre>
<p>And use it this way:</p>
<pre><code>science1 = ScienceFiction()
science1.title = "From out of nowhere"
science2 = science1.clone()
print(science2.title)
science2.title = "New title"
print(science2.title)
print(science1.title)
</code></pre>
<p>This code does exactly what I want, that is it creates an instance of <code>ScienceFiction</code> class, it clones it, it prints the title of the cloned object and again the title of the first one. So, my prints here are "From out of nowhere", "New Title", "From out of nowhere".</p>
<p>Problem is when, following <a href="https://docs.python.org/3/library/abc.html#abc.abstractproperty" rel="nofollow noreferrer">the docs</a>, I add the <code>@BookPrototype.title.setter</code> decorator to the title setter, this way:</p>
<pre><code>@BookPrototype.title.setter
def title(self, val):
print("this gets called now")
</code></pre>
<p>In this case the print inside the title method works, BUT I can't assign any value, so that the code prints three None.</p>
<p>What am doing wrong? How do I correctly code an abstract class and its subclasses to have getter/setter behavior?</p>
|
<python><abstract-class>
|
2023-01-12 10:57:48
| 1
| 337
|
Life after Guest
|
75,095,240
| 6,409,572
|
Why does my pip access two site-packages?
|
<p>I have a bit of confusion with <code>pip</code> and multiple <code>python</code> installations.
When running <code>python -m pip install pb_tool</code> I get console output:</p>
<pre><code>Requirement already satisfied: pb_tool in c:\osgeo4w\apps\python39\lib\site-packages (3.1.0)
Requirement already satisfied: colorama in c:\users\hbh1\appdata\roaming\python\python39\site-packages (from pb_tool) (0.4.6)
Requirement already satisfied: Sphinx in c:\users\hbh1\appdata\roaming\python\python39\site-packages (from pb_tool) (6.1.1)
Requirement already satisfied: Click in c:\osgeo4w\apps\python39\lib\site-packages (from pb_tool) (7.1.2)
...
</code></pre>
<p><strong>I am wondering, why there are mixed site-packages paths, some in <code>c:\osgeo4w\apps\</code> and some in <code>c:\users\hbh1\appdata\...</code>?</strong></p>
<p>I installed pb_tool with the OSGeo4W python, I'd expect to find it and its requirements found/installed in <code>c:\osgeo4w\...</code>, not (even partially?!) in the <code>c:\users\hbh1\appdata\...</code>, especially when running <code>pip</code> with <code>python -m</code>.</p>
<p>To elaborate: This is not necessarily a problem, but I'd like to understand, why and also if/how I can circumvent this behavior. It caused me some confusion as to which python installation has which modules installed, and I'd like to keep things separate and an overview over where I installed what.</p>
<p>Some time back I ran <code>pip install pb_tool</code> in my dev shell and couldn't run <code>pb_tool</code> afterwards, despite the successful install. I assume, the problem is, that I didn't have <code>c:\users\hbh1\appdata\roaming\python\python39\site-packages</code> on the PATH in that current environment. But somehow <code>pip</code> knew it, installed <code>pb_tool</code> there and <code>python</code> didn't know about it (I didn't add it, since I'd like a "clean and separated" dev enviroment with it's own python packages)...
I carefully checked PATH, my python/pip versions and which is which (cleaning PATH, using <code>where pip</code>/<code>where python</code> and <code>py -0b</code> to check the windows python launcher as well). My setup basically is:</p>
<pre><code> # add to PATH depending on the version I use
C:\Apps\Python39\
C:\OSGeo4W\apps\Python39 # respectively C:\OSGeo4W\bin
# and their corresponding script dirs
C:\Apps\Python39\Scripts
C:\Users\hbh1\AppData\Roaming\Python\Python39\Scripts
C:\OSGeo4W\apps\Python39\Scripts
# and if relevant: Windows Python Launcher listing these (py -0p), where I only use the first (the second one is not on PATH):
-3.9-64 C:\Apps\Python39\python.exe *
-2.7-32 C:\Apps\ArcGIS\Python27\ArcGIS10.8\python.exe
</code></pre>
<p>C:\OSGeo4W\ is a dev environment for me and I use a "clean" shell for command line tools I use with it (meaning that I don't use the system PATH, but start with a .bat in which I clean the PATH and only add what I specifically need, plus a few general sytem paths).</p>
<pre><code>@echo off
SET OSGEO4W_ROOT=C:\OSGeo4W
set path=%OSGEO4W_ROOT%\bin;%WINDIR%\system32;%WINDIR%;%WINDIR%\system32\WBem
path %PATH%;%OSGEO4W_ROOT%\apps\Python39\Scripts
path %PATH%;%OSGEO4W_ROOT%\apps\qgis-ltr\bin
path %PATH%;%OSGEO4W_ROOT%\apps\grass\grass78\lib
path %PATH%;%OSGEO4W_ROOT%\apps\Qt5\bin
set PYTHONPATH=%PYTHONPATH%;%OSGEO4W_ROOT%\apps\qgis-ltr\python
set PYTHONHOME=%OSGEO4W_ROOT%\apps\Python39
set PATH=C:\Program Files\Git\bin;%PATH%
cmd.exe
</code></pre>
<p>I am still puzzled, why in this environment, <code>pip install</code> would put anything in the <code>c:\users\hbh1\appdata\roaming\python\python39\site-packages</code> which is "normally" used by my <code>C:\Apps\Python39\</code> installation.</p>
|
<python><windows><pip><path><pythonpath>
|
2023-01-12 10:57:25
| 1
| 3,178
|
Honeybear
|
75,095,222
| 5,057,022
|
Stfrtime with 'D' inside String
|
<p>How can you deal with a random string inside a datetime string when parsing using pandas?</p>
<p>I have some timestamps of the form</p>
<p><a href="https://i.sstatic.net/ItPBT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ItPBT.png" alt="enter image description here" /></a></p>
<p>Which I try to match with this <code>'%Y-%m-%d %H:%M:%S:%f'</code></p>
<p><em>(Why they have a 'D' instead of a 'T' is uncertain - they're not durations!)</em></p>
<p>When I try to parse them using Pandas, I get this error</p>
<pre><code>TypeError: Unrecognized value type: <class 'str'>
</code></pre>
<p>I am confident the dataset is consistent in form.</p>
<p>Is there a correct way to do this?</p>
<p><em>I realise I can replace the 'D' with 'T' but keeping the original form of the data is crucial for this piece of work.</em></p>
|
<python><pandas><string>
|
2023-01-12 10:56:29
| 2
| 383
|
jolene
|
75,095,132
| 20,589,275
|
How to make the list into table Django?
|
<p>I have got an a list:</p>
<pre><code>[{'name': 'WEB-Π Π°Π·ΡΠ°Π±ΠΎΡΡΠΈΠΊ/ΠΏΡΠΎΠ³ΡΠ°ΠΌΠΌΠΈΡΡ (SEO-ΠΏΡΠ°Π²ΠΊΠΈ)', 'description': 'Hastra Agency ΠΈΡΠ΅Ρ ΡΠ°Π·ΡΠ°Π±ΠΎΡΡΠΈΠΊΠ° Ρ ΠΎΠΏΡΡΠΎΠΌ ΡΠ°Π±ΠΎΡΡ Ρ ΡΠ²ΡΠ·ΠΈ Ρ ΡΠΎΡΡΠΎΠΌ ΠΎΡΠ΄Π΅Π»Π° SEO-ΠΏΡΠΎΠ΄Π²ΠΈΠΆΠ΅Π½ΠΈΡ. Π’ΡΠ΅Π±ΡΠ΅ΠΌΡΠΉ ΠΎ...', 'key_skills': ['HTML', 'CSS', 'MySQL', 'PHP', 'SEO'], 'employer': 'Hastra Agency', 'salary': 'None - 60000 RUR', 'area': 'ΠΠΎΡΠΊΠ²Π°', 'published_at': '2022-12-12', 'alternate_url': 'https://hh.ru/vacancy/73732873'}, {'name': 'ΠΠ΅Π±-ΡΠ°Π·ΡΠ°Π±ΠΎΡΡΠΈΠΊ golang, react, vue, nuxt, angular, websockets', 'description': 'ΠΠ±ΡΠ·Π°Π½Π½ΠΎΡΡΠΈ: Π‘ΡΠ΅ΠΊ: react, vue, nuxt, angular, websockets, golang ΠΠΎΠ΄Π΄Π΅ΡΠΆΠΊΠ° ΡΠ΅ΠΊΡΡΠ΅Π³ΠΎ ΠΏΡΠΎΠ΅ΠΊΡΠ° Ρ ΡΡΡΠ΅ΡΡ...', 'key_skills': [], 'employer': 'ΠΠ΅Π³Π°Π‘Π΅ΠΎ', 'salary': '150000 - None RUR', 'area': 'ΠΠΎΡΠΊΠ²Π°', 'published_at': '2022-12-12', 'alternate_url': 'https://hh.ru/vacancy/73705217'}, ....., ......
</code></pre>
<p>Array is with 10 lists.</p>
<p>How can I make it into table? <em>CSV</em> file?<br>
Columns are <em><strong>name</strong>, <strong>description</strong>, <strong>key_skills ...</strong></em><br>
Rows are <em><strong>1, 2, 3, 4, 5 ...</strong></em></p>
<p>I need to make this table in Django, how can I do it?</p>
|
<python><django><pandas>
|
2023-01-12 10:49:07
| 0
| 650
|
Proger228
|
75,095,037
| 1,128,648
|
How to select some key values from a dictionary and assign to another dictionary in python
|
<p>I have a variable which stores below dictionary</p>
<pre><code>initial_ltp =
{'s': 'ok',
'd': [{'n': 'MCX:CRUDEOIL23JANFUT',
's': 'ok',
'v': {'ch': 47.0,
'chp': 0.74,
'lp': 6377.0,
'spread': 2.0,
'ask': 6379.0,
'bid': 6377.0,
'open_price': 6330.0,
'high_price': 6393.0,
'low_price': 6305.0,
'prev_close_price': 6330.0,
'volume': 8410,
'short_name': 'CRUDEOIL23JANFUT',
'exchange': 'MCX',
'description': 'MCX:CRUDEOIL23JANFUT',
'original_name': 'MCX:CRUDEOIL23JANFUT',
'symbol': 'MCX:CRUDEOIL23JANFUT',
'fyToken': '1120230119244999',
'tt': 1673481600,
'cmd': {'t': 1673518200,
'o': 6376.0,
'h': 6378.0,
'l': 6375.0,
'c': 6377.0,
'v': 19,
'tf': '15:40'}}},
{'n': 'MCX:SILVERMIC23FEBFUT',
's': 'ok',
'v': {'ch': 485.0,
'chp': 0.71,
'lp': 68543.0,
'spread': 5.0,
'ask': 68545.0,
'bid': 68540.0,
'open_price': 68200.0,
'high_price': 68689.0,
'low_price': 68200.0,
'prev_close_price': 68058.0,
'volume': 49595,
'short_name': 'SILVERMIC23FEBFUT',
'exchange': 'MCX',
'description': 'MCX:SILVERMIC23FEBFUT',
'original_name': 'MCX:SILVERMIC23FEBFUT',
'symbol': 'MCX:SILVERMIC23FEBFUT',
'fyToken': '1120230228242738',
'tt': 1673481600,
'cmd': {'t': 1673518200,
'o': 68525.0,
'h': 68543.0,
'l': 68524.0,
'c': 68543.0,
'v': 140,
'tf': '15:40'}}}]}
</code></pre>
<p>I am trying to collect <code>('n')</code> and <code>('lp')</code> and save it in a different dictionary using code:</p>
<pre><code>if 'd' in initial_ltp.keys():
ltp[initial_ltp['d'][0]['n']] = initial_ltp['d'][0]['v']['lp']
</code></pre>
<p>But it is only taking the first <code>n</code> and <code>lp</code></p>
<pre><code>ltp
{'MCX:CRUDEOIL23JANFUT': 6377.0}
</code></pre>
<p><strong>My expected output:</strong></p>
<pre><code>ltp
{'MCX:CRUDEOIL23JANFUT': 6377.0, 'MCX:SILVERMIC23FEBFUT': 68543.0}
</code></pre>
<p>How can I get both the values</p>
|
<python><python-3.x>
|
2023-01-12 10:41:39
| 4
| 1,746
|
acr
|
75,094,994
| 11,274,362
|
Put Header, Footer and page number for pdf with pdfpages matplotlib
|
<p>I created <code>PDF</code> file of figures with <code>pdfpages</code> in <code>matplotlib</code>.</p>
<p>Now I want to add headers (contains text and line), footer (contains text and line) and page number. <strong>How can I do that?</strong></p>
|
<python><matplotlib><pdfpages>
|
2023-01-12 10:38:00
| 1
| 977
|
rahnama7m
|
75,094,971
| 5,224,236
|
Docker: cannot import name 'BlobServiceClient' from 'azure.storage.blob
|
<p>I have this code running fine on my personal computer</p>
<pre><code>from azure.storage.blob import BlobServiceClient
blob_client = BlobClient.from_blob_url(file_sas)
</code></pre>
<p>This is my local envir:</p>
<pre><code>python --version
Python 3.10.4
$ pip show azure.storage.blob
Name: azure-storage-blob
Version: 12.14.1
</code></pre>
<p>I have a docker image where I force the same version of <code>azure.storage.blob: Version: 12.14.1</code>. However, my python version is different.</p>
<pre><code># python3 --version
Python 3.8.10
</code></pre>
<p>And in docker I have the following error:</p>
<pre><code>>>> from azure.storage.blob import BlobServiceClient
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'BlobServiceClient' from 'azure.storage.blob' (/usr/lib/python3/dist-packages/azure/storage/blob/__init__.py)
</code></pre>
<p>Any help welcome</p>
|
<python><azure><docker>
|
2023-01-12 10:36:30
| 1
| 6,028
|
gaut
|
75,094,831
| 14,269,252
|
Merge of two data frames with some similar features in python
|
<p>I am new to Python.
I want to merge two DataFrames, I used merge and it get too large with duplicates. I created a sample code for what I did. is there any better performance? I really appreciate your idea.</p>
<pre><code>import pandas as pd
data = [['t', 10, 5], ['n', 15, 5], ['j', 14, 66],['t', 10, 8], ['n', 15, 55]]
df1 = pd.DataFrame(data, columns=['Name', 'Age', "HH"])
data = [['t', 10, 100], ['n', 15, 101], ['j', 14, 102],['t', 10, 81], ['n', 15, 81]]
df2 = pd.DataFrame(data, columns=['Name', 'Age', "year"])
res= pd.merge(df2, df, on=['Name',"Age"], how = "inner")
result :
Name Age year HH
0 t 10 100 5
1 t 10 100 8
2 t 10 81 5
3 t 10 81 8
4 n 15 101 5
5 n 15 101 55
6 n 15 81 5
7 n 15 81 55
8 j 14 102 66
</code></pre>
<p>I also used JOIN, but it didn't help and provided the same result.</p>
<pre><code>
df1.set_index(['Name','Age'],inplace=True)
df2.set_index(['Name','Age'],inplace=True)
df2.join(df1)
</code></pre>
|
<python><pandas><dataframe>
|
2023-01-12 10:27:03
| 1
| 450
|
user14269252
|
75,094,780
| 17,220,672
|
mypy does not infer generic type T with Optional[Any]
|
<p>I have a problem with mypy and generic T parameter. I am using sqlalchemy sessions <strong>get()</strong> method to retrieve something from the database (the code works). I am injecting (self.model=model) one of my SqlalchemyORM models that I have defined somewhere else.</p>
<p>Mypy is giving me this message:</p>
<pre><code>Variable "my_project.repositories.sql_repo.Foo.Optional" is not valid as a type [valid-type]mypy
See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliasesmypy
</code></pre>
<pre><code>T = TypeVar("T")
class Foo(Generic[T]):
def __init__(self, model: Type[T], db: db_session) -> None:
super().__init__()
self.model = model
self.db = db
def get_by_id(self, item_id: int) -> Optional[T]:
return self.session().get(self.model,item_id)
</code></pre>
|
<python><sqlalchemy><mypy>
|
2023-01-12 10:23:02
| 1
| 419
|
mehekek
|
75,094,645
| 2,743,307
|
pythonic way to fill an array with geometric cumulative function
|
<p>I need to fill an array with the position of the boundary of <code>n_cells</code> (starting from 0 up to <code>n_cells+1</code>) with the size of the cell in geometric progression (with parameter <code>c</code>) and the total size will be thick (thus <code>x[0]=0</code> and <code>x[n_cells]=thick</code>)</p>
<p>I have this which works:</p>
<pre><code>n_cells=10
c=0.9
thick=2
x = np.zeros(n_cells + 1)
x_0=thick * (c - 1) / (c**n_cells - 1)
for i in range(n_cells):
x[i+1] = x[i] + x_0 * c ** i
</code></pre>
<p>Since I'm learning, is there a simpler pythonic comprehension way to do the same?</p>
|
<python><numpy><numpy-ndarray>
|
2023-01-12 10:10:52
| 2
| 3,783
|
bibi
|
75,094,568
| 1,436,800
|
How to make Nested Models in django
|
<p>I am new to django.
My task is to make a feature on shared documents in backend. Documents can have folders, like google docs.We will have list of documents within list of folders.</p>
<p>I created following model classes:</p>
<pre><code>class Folder(models.Model):
name = models.CharField(max_length=128, unique=True)
def __str__(self) -> str:
return self.name
class File(models.Model):
folder_name = models.ForeignKey(Folder, on_delete=models.CASCADE)
docfile = models.FileField(upload_to='documents/%Y/%m/%d')
def __str__(self) -> str:
return self.name
</code></pre>
<p>So first, a folder will be created. Then a file will be uploaded in that folder.
My Questions are:</p>
<ul>
<li>In google docs, we can have folders inside folders. How can I update my model if I want to add this feature of adding folder inside folder and then storing file on it.</li>
<li>What does FileField attribute actually do? I want to store the data in postgres database, not in my local storage. How to deal with that?</li>
<li>What additional features should I add in my model for this purpose?</li>
</ul>
|
<python><django><django-models><django-rest-framework><django-serializer>
|
2023-01-12 10:05:37
| 1
| 315
|
Waleed Farrukh
|
75,094,229
| 14,649,447
|
Is there a way to rotate a dash-leaflet dl.Rectangle() through a callback?
|
<p>I'm building an python dash-leaflet app where the user can click on a map to place a rectangle through a callback, and I would like to add a second callback (bond to a slider) to rotate it around its center according to North (eg. rotate it by 20Β° to the East).</p>
<p>To place the rectangle in a callback I use:</p>
<pre class="lang-py prettyprint-override"><code>mapOne.append(dl.Rectangle(fillOpacity=0.5, weight=0.7, color='#888',
bounds=[[click_lat_lng[0]+xbox1, click_lat_lng[1]+ybox1],
[click_lat_lng[0]+xbox2, click_lat_lng[1]+ybox2]]
))
</code></pre>
<p>I thought about a sort of inverse Haversine function to get the new "rotated" <code>xbox1</code> and <code>xbox2</code>, but the <code>dl.Rectangle()</code> seem to be always aligned with axes.</p>
<p>Any thought on the best way to achieve this?</p>
|
<python><rotation><plotly-dash><dash-leaflet>
|
2023-01-12 09:38:05
| 1
| 376
|
Mat.B
|
75,093,820
| 20,589,275
|
How to parse data into .csv?
|
<p>I have got an a list:</p>
<pre><code>[{'name': 'WEB-Π Π°Π·ΡΠ°Π±ΠΎΡΡΠΈΠΊ/ΠΏΡΠΎΠ³ΡΠ°ΠΌΠΌΠΈΡΡ (SEO-ΠΏΡΠ°Π²ΠΊΠΈ)', 'description': 'Hastra Agency ΠΈΡΠ΅Ρ ΡΠ°Π·ΡΠ°Π±ΠΎΡΡΠΈΠΊΠ° Ρ ΠΎΠΏΡΡΠΎΠΌ ΡΠ°Π±ΠΎΡΡ Ρ ΡΠ²ΡΠ·ΠΈ Ρ ΡΠΎΡΡΠΎΠΌ ΠΎΡΠ΄Π΅Π»Π° SEO-ΠΏΡΠΎΠ΄Π²ΠΈΠΆΠ΅Π½ΠΈΡ. Π’ΡΠ΅Π±ΡΠ΅ΠΌΡΠΉ ΠΎ...', 'key_skills': ['HTML', 'CSS', 'MySQL', 'PHP', 'SEO'], 'employer': 'Hastra Agency', 'salary': 'None - 60000 RUR', 'area': 'ΠΠΎΡΠΊΠ²Π°', 'published_at': '2022-12-12', 'alternate_url': 'https://hh.ru/vacancy/73732873'}, {'name': 'ΠΠ΅Π±-ΡΠ°Π·ΡΠ°Π±ΠΎΡΡΠΈΠΊ golang, react, vue, nuxt, angular, websockets', 'description': 'ΠΠ±ΡΠ·Π°Π½Π½ΠΎΡΡΠΈ: Π‘ΡΠ΅ΠΊ: react, vue, nuxt, angular, websockets, golang ΠΠΎΠ΄Π΄Π΅ΡΠΆΠΊΠ° ΡΠ΅ΠΊΡΡΠ΅Π³ΠΎ ΠΏΡΠΎΠ΅ΠΊΡΠ° Ρ ΡΡΡΠ΅ΡΡ...', 'key_skills': [], 'employer': 'ΠΠ΅Π³Π°Π‘Π΅ΠΎ', 'salary': '150000 - None RUR', 'area': 'ΠΠΎΡΠΊΠ²Π°', 'published_at': '2022-12-12', 'alternate_url': 'https://hh.ru/vacancy/73705217'}, ....., ......
</code></pre>
<p>array is with 10 list's
How can i make it into table? csv file?
columns - name, description, key_skills, ....
rows - 1, 2, 3, 4, 5 ...</p>
|
<python><pandas>
|
2023-01-12 09:05:24
| 1
| 650
|
Proger228
|
75,093,819
| 10,319,707
|
Common Lisp equivalent of Python's itertools.starmap?
|
<p>Python's Itertools has what is called <code>starmap</code>. Given a collection of collections and a function, it applies the function to each collection strictly inside the collection, using the elements of said internal collection as arguments to the function. For example,</p>
<pre><code>from itertools import starmap
Β
NestedList = [(1, 2), (3, 4), (5, 6), (0, 0), (1, 1), (2, 2)]
list(starmap(lambda x, y:x + y, NestedList))
</code></pre>
<p>returns the list containing 3, 7, 11, 0, 2, and 4.</p>
<p>I refuse to believe that Python was the first to come up with this concept, but I'm drawing a blank when I try to think of what it was called in older languages. Does any analogous functionality exist in common lisp? I feel certain that it does, but I cannot name it.</p>
|
<python><common-lisp><python-itertools><starmap>
|
2023-01-12 09:05:22
| 3
| 1,746
|
J. Mini
|
75,093,724
| 6,409,572
|
How can I find out the path of the Python that the py Launcher uses?
|
<p>I have multiple Python installations and when working on different projects, these can cause some confusion. I just recently found out, that under Windows there also is a py Launcher installed and updated alongside each installation of Python.
That is useful tool, but I think this py launcher is messing up some things for me.</p>
<p>I'd like to find out, which python installation it defaults to. How can I do this on cmd?</p>
<p>Usually, to avoid using the wrong installation, I check in the command line (cmd) with the following commands:</p>
<pre><code>where python
where python3 # when specifying
where pip # when installing packages
</code></pre>
<p>But for the py Launcher I can't use this:</p>
<pre><code>where py # gives: C:\Windows\py.exe
py --version # gives Python 3.9.5, but not the path of the installation (i have multiple python 3.9.5)
</code></pre>
<p>Is there something like <code>py --path</code>?</p>
|
<python><windows><cmd>
|
2023-01-12 08:57:15
| 1
| 3,178
|
Honeybear
|
75,093,715
| 455,998
|
How to alter the response output of a list based on a query parameter in FastAPI?
|
<p>I'm trying to alter the content of a list view on FastAPI, depending on a query parameter. As the format is defined by a pydantic model, how can I customize it (or use an alternative model from within the view)?</p>
<p>Here's my view:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi_pagination import Page, Params, paginate
from pydantic import BaseModel
from sqlalchemy.orm import Session
class EventSerializer(BaseModel):
id: str
# ...
class EventAttendeeSerializer(BaseModel):
id: str
event: str # contains the event UUID
# ...
class Config:
orm_mode = True
@api.get("/", response_model=Page[EventAttendeeSerializer])
async def get_list(db: Session, pagination: Params = Depends(), extend: str = None):
objects = db.query(myDbModel).all()
if "event" in extend.split(","):
# return EventSerializer for each object instead of id
return paginate(objects, pagination)
</code></pre>
<p>At runtime, it would work like this:</p>
<pre><code>GET /v1/event-attendees/
{
"items": [
{
"id": <event_attendee_id>,
"event": <event_id>,
}
],
"total": 1,
"page": 1,
"size": 50,
}
</code></pre>
<pre><code>GET /v1/event-attendees/?extend=event
{
"items": [
{
"id": <event_attendee_id>,
"event": {
"id": <event_id>,
# ...
}
}
],
"total": 1,
"page": 1,
"size": 50,
}
</code></pre>
<p>I searched for some kind of hooks in the pydantic and FastAPI docs and source code, but did not find anything relevant.
Anyone can help please?</p>
|
<python><fastapi><crud><pydantic>
|
2023-01-12 08:56:13
| 1
| 347
|
koleror
|
75,093,702
| 17,839,669
|
How to apply annotation to each item of QuerySet with Django ORM
|
<p>There are a lot questions similar to this one but none of them worked for me.</p>
<p>Let's assume that I have the following models:</p>
<pre><code>class Cafe(models.Model):
name = models.CharField(max_length=150)
def __str__(self):
return self.name
class Food(models.Model):
class SoldStatus(models.TextChoices):
SOLD_OUT = "SoldOut", "Sold out"
NOT_SOLD_OUT = "NotSoldOut", "Not sold out"
name = models.CharField(max_length=50)
cafe = models.ForeignKey(Cafe, related_name="foods", on_delete=models.CASCADE)
status = models.CharField(choices=SoldStatus.choices)
def __str__(self):
return self.name
</code></pre>
<p>In my QuerySet, I want to retrieve all cafes with the following fields in each: 'cafe name', 'total number of foods', 'total number of not sold foods', and 'percentage of not sold foods'</p>
<p>Is there any way to achieve the above result with Django ORM?</p>
|
<python><django><django-models><django-orm><django-postgresql>
|
2023-01-12 08:55:30
| 2
| 359
|
Jasur
|
75,093,685
| 19,336,534
|
Connect to IRC server with python
|
<p>i am trying to connect to anonops irc server and subsequently #anonops channel using python. What i have done so far :</p>
<pre><code>import sys
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
HOST = 'irc.anonops.com' #irc server
PORT = 6697 #port
NICK = 'testingbot'
print('soc created |', s)
remote_ip = socket.gethostbyname(HOST)
print('ip of irc server is:', remote_ip)
s.connect((HOST, PORT))
print('connected to: ', HOST, PORT)
nick_cr = ('NICK ' + NICK + '\r\n').encode()
s.send(nick_cr)
s.send('JOIN #anonops \r\n'.encode()) #chanel
#
s.send(bytes("PRIVMSG " + '#anonops' + 'hi'+ "\n", "UTF-8"))
</code></pre>
<p>I think this connect succesfuly to the irc server but i can't seem to connect to the channel. I have an open irc client (Hexchat) in my pc and i don't see the message:<br />
testingbot has joined nor do i see the hi message.<br />
Any idea about what am i doing wrong?</p>
|
<python><python-3.x><sockets><irc>
|
2023-01-12 08:54:18
| 2
| 551
|
Los
|
75,093,675
| 219,976
|
Django Rest Framework adding import in settings.py causes 403 error
|
<p>I have a Django Rest Framework application with custom render and parse classes, set in settings.py:</p>
<pre><code>REST_FRAMEWORK = {
"DEFAULT_RENDERER_CLASSES": (
"djangorestframework_camel_case.render.CamelCaseJSONRenderer",
"djangorestframework_camel_case.render.CamelCaseBrowsableAPIRenderer",
),
"DEFAULT_PARSER_CLASSES": (
"djangorestframework_camel_case.parser.CamelCaseFormParser",
"djangorestframework_camel_case.parser.CamelCaseMultiPartParser",
"djangorestframework_camel_case.parser.CamelCaseJSONParser",
),
}
</code></pre>
<p>And at this point everything works fine.<br />
But if I add import statement in settings.py like this:</p>
<pre><code>from djangorestframework_camel_case.render import CamelCaseJSONRenderer
REST_FRAMEWORK = {
"DEFAULT_RENDERER_CLASSES": (
"djangorestframework_camel_case.render.CamelCaseJSONRenderer",
"djangorestframework_camel_case.render.CamelCaseBrowsableAPIRenderer",
),
"DEFAULT_PARSER_CLASSES": (
"djangorestframework_camel_case.parser.CamelCaseFormParser",
"djangorestframework_camel_case.parser.CamelCaseMultiPartParser",
"djangorestframework_camel_case.parser.CamelCaseJSONParser",
),
}
</code></pre>
<p>I start to get 403 error.
Why is that? How may I fix it?</p>
<p>Just in case it somehow matters my installed apps:</p>
<pre><code>INSTALLED_APPS = [
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.postgres",
"django.contrib.sessions",
"django.contrib.staticfiles",
"corsheaders",
"django_filters",
"rest_framework",
"my_porject.my_app",
]
</code></pre>
|
<python><django><django-rest-framework>
|
2023-01-12 08:53:26
| 0
| 6,657
|
StuffHappens
|
75,093,503
| 12,575,557
|
Are Python 3.11 objects as light as slots?
|
<p>After <a href="https://github.com/python/cpython/issues/89503" rel="nofollow noreferrer">Mark Shannon's optimisation of Python objects</a>, is a plain object different from an object with slots?
I understand that after this optimisation in a normal use case, objects <a href="https://github.com/faster-cpython/ideas/issues/72" rel="nofollow noreferrer">have no dictionary</a>.
Have the new Python objects made the use of slots totally unnecessary?</p>
|
<python><python-internals><slots><python-3.11>
|
2023-01-12 08:38:30
| 1
| 950
|
Jorge Luis
|
75,093,455
| 859,227
|
Reset index of grouped data frames
|
<p>I would like to reset the index of grouped data frames and based on the examples, I have written:</p>
<pre><code>for name, df_group in df.groupby('BL', as_index=False):
print(df_group)
</code></pre>
<p>But the output shows that index has not been reset.</p>
<pre><code> Num BL Home Requester
4 16986 140080043863936 5 5
5 16987 140080043863936 0 5
10 16986 140080043863936 7 5
</code></pre>
<p>How can I fix that?</p>
|
<python><pandas>
|
2023-01-12 08:33:48
| 3
| 25,175
|
mahmood
|
75,093,113
| 9,861,647
|
Pandas replace in Data frame values which are contains in specific range
|
<p>I have this Pandas Data Frame</p>
<pre><code>Months 2022-10 2022-11 2022-12 2023-01 β¦
2023-01 10 N/A 12 13 β¦
2022-12 2 14 14 N/A β¦
2022-11 N/A 11 N/A N/A β¦
2022-10 12 N/A N/A N/A β¦
β¦ β¦ β¦ β¦ β¦
</code></pre>
<p>I would like to replace the values inside the "date range" by 0 and outside the "date range" with blanks like this in this example:</p>
<pre><code>Months 2022-10 2022-11 2022-12 2023-01 β¦
2023-01 10 0 12 13 β¦
2022-12 2 14 14 β¦
2022-11 0 11 β¦
2022-10 12 β¦
β¦ β¦ β¦ β¦ β¦
</code></pre>
<p>How can I do this with Python Pandas?</p>
|
<python><pandas>
|
2023-01-12 07:56:47
| 2
| 1,065
|
Simon GIS
|
75,093,076
| 10,062,025
|
Why requests is getting same data and returning errors from different urls in python?
|
<p>I have a list of data to scrape here <a href="https://docs.google.com/spreadsheets/d/1KMYsjN2ggklFMQ5HPKMbdkVqc04MdjhYGLvku1Js0cc/edit?usp=sharing" rel="nofollow noreferrer">https://docs.google.com/spreadsheets/d/1KMYsjN2ggklFMQ5HPKMbdkVqc04MdjhYGLvku1Js0cc/edit?usp=sharing</a>
So the scraper runs but there are two problems here.</p>
<ol>
<li>The scraper returns the same data for different url, even though when I checked the urls reponse through web is both different and both are returning 200.
The dictionary ID is a distinct id mapped out to the urls.</li>
</ol>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>scraped_product_name</th>
<th>url</th>
<th>dictionary_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hansaplast aqua protect 6s</td>
<td>https://www.blibli.com/p/hansaplast-aqua-protect-6s/ps--RAM-70107-07546?ds=RAM-70107-07546-00001&source=MERCHANT_PAGE&sid=7cb21b003f4516bb&cnc=false&pickupPointCode=PP-3239816&pid=RAM-70107-07546</td>
<td>418</td>
</tr>
<tr>
<td>Hansaplast aqua protect 6s</td>
<td>https://www.blibli.com/p/hansaplast-aqua-protect-6s/ps--RAM-70107-07546?ds=RAM-70107-07546-00001&source=MERCHANT_PAGE&sid=7cb21b003f4516bb&cnc=false&pickupPointCode=PP-3239816&pid=RAM-70107-07546</td>
<td>181</td>
</tr>
</tbody>
</table>
</div>
<ol start="2">
<li>After running a few urls, it returns an error Access Denied where there is no data. I was wondering if this affects the data being returned?However when I tried rerunning it again in another iteration it returns a 200.</li>
</ol>
<p>Here's my code</p>
<pre><code>from random import randint
import requests
rancherrors=pd.DataFrame()
ranchdf=pd.DataFrame()
for id, input,url in zip(ranch['product_id'],ranch['urls'],ranch['urls2']):
headers={
"user-agent" : f"{UserAgent().random}",
'referer':'https://www.blibli.com/'
}
response =requests.get(url,headers=headers,verify=True)
#catches error
####################
if response.status_code != 200:
datum={
'id':id,
'url':url,
'date_key':today
}
rancherrors=rancherrors.append(pd.DataFrame([datum]))
print(f'{url} error')
sleep(randint(5,15))
else:
#runs scraper
################################
try:
price=str(response.json()['data']['price']['listed']).replace(".0","")
discount=str(response.json()['data']['price']['totalDiscount'])
except:
price="0"
discount="0"
try:
unit=str(response.json()['data']['uniqueSellingPoint']).replace("β’ ","")
except:
unit=""
datranch={
'product_name':response.json()['data']['name'],
'normal_price':price,
'discount':discount,
'competitor_id':response.json()['data']['ean'],
'url':input,
'unit':unit,
'product_id':id,
'date_key':today,
'web':'ranch market'
}
ranchdf=ranchdf.append(pd.DataFrame([datranch]))
</code></pre>
<p>I use the rancherrors to catch errors and run in loop until no errors appears.
Any help would be greatly appreciated.</p>
|
<python><python-requests>
|
2023-01-12 07:52:48
| 1
| 333
|
Hal
|
75,092,778
| 13,359,498
|
How to solve `NameError: name 'compression' is not defined`?
|
<p>I am trying to implement DenseNet model and I am using image dataset with 4 classes.</p>
<p>Code snippets:</p>
<p>For building model:</p>
<pre><code>def denseblock(input, num_filter = 12, dropout_rate = 0.2):
global compression
temp = input
for _ in range(l):
BatchNorm = BatchNormalization()(temp)
relu = Activation('relu')(BatchNorm)
Conv2D_3_3 =Conv2D(int(num_filter*compression), (3,3), use_bias=False ,padding='same')(relu)
if dropout_rate>0:
Conv2D_3_3 = Dropout(dropout_rate)(Conv2D_3_3)
concat = Concatenate(axis=-1)([temp,Conv2D_3_3])
temp = concat
return temp
## transition Block
def transition(input, num_filter = 12, dropout_rate = 0.2):
global compression
BatchNorm = BatchNormalization()(input)
relu = Activation('relu')(BatchNorm)
Conv2D_BottleNeck = Conv2D(int(num_filter*compression), (1,1), use_bias=False ,padding='same')(relu)
if dropout_rate>0:
Conv2D_BottleNeck = Dropout(dropout_rate)(Conv2D_BottleNeck)
avg = AveragePooling2D(pool_size=(2,2))(Conv2D_BottleNeck)
return avg
#output layer
def output_layer(input):
global compression
BatchNorm = BatchNormalization()(input)
relu = Activation('relu')(BatchNorm)
AvgPooling = AveragePooling2D(pool_size=(2,2))(relu)
flat = Flatten()(AvgPooling)
output = Dense(categories, activation='softmax')(flat)
return output
</code></pre>
<p>creating a model with the two DenseNet blocks:</p>
<pre><code>l = 7
input = Input(shape=(height, width, 3))
First_Conv2D = Conv2D(30, (3,3), use_bias=False ,padding='same')(input)
First_Block = denseblock(First_Conv2D, 30, 0.5)
First_Transition = transition(First_Block, 30, 0.5)
Last_Block = denseblock(First_Transition, 30, 0.5)
output = output_layer(Last_Block)
model = Model(inputs=[input], outputs=[output])
</code></pre>
<p>The error is: <code>NameError: name 'compression' is not defined</code></p>
|
<python><tensorflow><keras><conv-neural-network><densenet>
|
2023-01-12 07:19:52
| 1
| 578
|
Rezuana Haque
|
75,092,525
| 19,950,360
|
how to create table number columns use python to bigquery load_table_from_dataframe
|
<p>i want to bigquery table from python.bigquery
dataframe have number columns like '333115'
when i user load_table_from_dataframe(df, table_path)</p>
<p>error occur</p>
<pre><code>400 POST https://bigquery.googleapis.com/upload/bigquery/v2/projects/paprika-cada/jobs?uploadType=multipart: Invalid field name "`3117539507`". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 300 characters long.
</code></pre>
<p>if mysql create number columns use ``
but in python dataframe how to do that?</p>
|
<python><dataframe><google-cloud-platform><google-bigquery>
|
2023-01-12 06:47:35
| 1
| 315
|
lima
|
75,092,339
| 11,634,498
|
Generating ImportError: `load_model` requires h5py in jupyter notebook
|
<p>I am running the following github code for Age and Gender Detection on jupyter notebook (anaconda navigator)
<a href="https://github.com/kaushikjadhav01/Deep-Surveillance-Monitor-Facial-Emotion-Age-Gender-Recognition-System" rel="nofollow noreferrer">https://github.com/kaushikjadhav01/Deep-Surveillance-Monitor-Facial-Emotion-Age-Gender-Recognition-System</a></p>
<p>I am using latest version of Anaconda Navigator and created a new environment with python=3.7.
Installed the required libraries and packages, but when loading the model it gives error.
<a href="https://i.sstatic.net/v8VIy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8VIy.png" alt="enter image description here" /></a></p>
|
<python><jupyter-notebook><tf.keras>
|
2023-01-12 06:24:26
| 1
| 644
|
Krupali Mistry
|
75,092,144
| 1,323,992
|
How to print SQLAlchemy update query?
|
<p>According to <a href="https://docs.sqlalchemy.org/en/14/faq/sqlexpressions.html#how-do-i-render-sql-expressions-as-strings-possibly-with-bound-parameters-inlined" rel="nofollow noreferrer">docs</a> printing queries is as simple as print(query).</p>
<p>But according to <code>update</code> function description, it returns an integer</p>
<p><em>:return: the count of rows matched as returned by the database's
"row count" feature.</em></p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code> state = 'router_in_use'
q = self.db_session.query(Router).filter(
Router.id == self.router_id,
).update(
{Router.state: state}, synchronize_session=False
)
#print(q) outputs just 1
self.db_session.commit()
</code></pre>
<p>Is there a way to print <code>q</code> query in SQL language?</p>
<p>Query itself works fine.</p>
<p>python 3.8</p>
|
<python><debugging><sqlalchemy>
|
2023-01-12 05:56:45
| 0
| 846
|
yevt
|
75,092,119
| 7,679,045
|
Can't capture using set-literal, set(), or empty dict-literal (PEP-634, PEP-636)
|
<p>I work at a Python shop, and I've got the green-light to start using 3.10 (we're so cutting-edge). SMP was a little disappointing at first, as I was anticipating Rust-style matching, but it's still pretty dang awesome.</p>
<p>...It has some quirks... I can match empty lists as expected, but I can't match empty dict literals, and I can't match set literals at al -- I can match dicts and sets by asserting the type of a capture, but that's kind of defeating the point of SMP.</p>
<p>Examples below, but I can't find existing SO questions, I can't find Python issue reports, and I can't find an explanation in the body of PEP-634 or PEP-636. A pointer towards an example of any of those 3 options probably counts as an "answered question", but I'm asking because I'm hoping there are answers.</p>
<p>Examples:</p>
<pre><code># predictable and sane
match [1,2,3]:
case [1,2,3]: print("lists are no substitute for arrays, but they do generally behave")
case _: print("nice, predictable, and Never :)")
# a bit weird, but makes sense given the grammar
match (1,2,3):
case tuple([1,2,3]): print("'e's a sinner, right, but 'e's friendly enough")
case list((1,2,3)): print("This bit's fine. It sure is Python, but smells pleasantly of rust")
# this is officially annoying, but still workable. I'd rather have SMP like this than not.
match {"one": 1, "two": 2}:
case dict(one=1, two=2): print("like, I get this not working...")
case dict((("one", 1), ("two", 2))): print("...and I can't imagine this ever making sense.")
case {"one": 1, "two": 2}: print("but you knew all along this was gonna work, so it's OK...")
# Definitely. Getting. Weirder:
match {1,2,3}:
case set((1,2,3)): print("This should work, as far as what PEP-634 describes")
case set(capture) if capture == {1,2,3}: print("There's this, sure, but...!?")
case {1, 2, 3}: print("Syntax error. If anyone asks. '{' and '}' are for mapped captures.")
case _: print("is there, for real, no way to capture a set-literal?")
</code></pre>
|
<python><pattern-matching><python-3.10><structural-pattern-matching>
|
2023-01-12 05:52:59
| 0
| 795
|
Sam Hughes
|
75,092,052
| 11,264,031
|
enable_auto_commit=False, but still offsets are committed automatically
|
<p>I'm using <code>kafka-python==2.0.2</code>, and have disabled <code>auto_commit</code> but still if I don't commit through code, offsets are automatically getting committed</p>
<p>In the below code even if I comment out <code>self.consumer.commit_async(callback= ....</code>, offsets are still getting committed</p>
<pre class="lang-py prettyprint-override"><code>class KafkaMessageConsumer:
def __init__(self, bootstrap_servers: str, topic: str, group_id: str, offset_reset_strategy: str):
self.bootstrap_servers: str = bootstrap_servers
self.topic: str = topic
self.group_id: str = group_id
self.consumer: KafkaConsumer = KafkaConsumer(topic, bootstrap_servers=bootstrap_servers, group_id=group_id,
enable_auto_commit=False, auto_offset_reset=offset_reset_strategy)
def consume_messages(self, consumer_poll_timeout: int, max_poll_records: int,
message_handler: MessageHandlerImpl = MessageHandlerImpl()):
try:
while True:
try:
msg_pack = self.consumer.poll(timeout_ms=consumer_poll_timeout, max_records=max_poll_records)
if bool(msg_pack):
for topic_partition, messages in msg_pack.items():
message_handler.process_messages(messages)
self.consumer.commit_async(callback=(lambda offsets, response: log.error(
f"Error while committing offset in async due to: {response}", exc_info=True) if isinstance(
response, Exception) else log.debug(f"Successfully committed offsets: {offsets}")))
except Exception as e:
log.error(f"Error while consuming/processing message due to: {e}", exc_info=True)
finally:
log.error("Something went wrong, closing consumer...........")
self.consumer.close()
</code></pre>
<p>Is this a proper way to disable auto commit and commit manually?</p>
|
<python><apache-kafka><kafka-consumer-api><kafka-python>
|
2023-01-12 05:41:13
| 1
| 426
|
Swastik
|
75,091,827
| 10,062,025
|
Scrape website using httpx and requests returns a timeout
|
<p>I am trying to scrape this website
<a href="https://www.blibli.com/p/facial-tissue-tisu-wajah-250-s-paseo/is--LO1-70001-00049-00003?seller_id=LO1-70001&sku_id=LO1-70001-00049-00001&sclid=7zuGEaS4hh5SowAA6tnfd5i2wKjR6e3p&sid=c5746ccfbb298d3b&pid=LO1-70001-00049-00001&pickupPointCode=PP-3227395" rel="nofollow noreferrer">https://www.blibli.com/p/facial-tissue-tisu-wajah-250-s-paseo/is--LO1-70001-00049-00003?seller_id=LO1-70001&sku_id=LO1-70001-00049-00001&sclid=7zuGEaS4hh5SowAA6tnfd5i2wKjR6e3p&sid=c5746ccfbb298d3b&pid=LO1-70001-00049-00001&pickupPointCode=PP-3227395</a></p>
<p>where I located that the url above uses this api <a href="https://www.blibli.com/backend/product-detail/products/is--LO1-70001-00049-00003/_summary?pickupPointCode=PP-3227395" rel="nofollow noreferrer">https://www.blibli.com/backend/product-detail/products/is--LO1-70001-00049-00003/_summary?pickupPointCode=PP-3227395</a>
to return the product details.</p>
<p>The response headers returns this</p>
<pre><code>akamai-grn: 0.06d62c17.1673499257.a978ef0
content-encoding: gzip
content-length: 1838
content-security-policy: frame-ancestors 'self' https://ext.blibli.com/ https://mcdomo.id/
content-type: application/json
date: Thu, 12 Jan 2023 04:54:17 GMT
link: <https://www.blibli.com/xm1-6K/C5vh/MHG4/Ij6z/3uMhJV/7iSaJfNzYY/XHgJa1FGaAI/VW8QI1/hIGwU>; rel=preload; as=script
set-cookie: bm_sv=22A8F907C6313015C5C3083E1983894A~YAAQBtYsF2Cgj6GFAQAA2ClUpBIai8qi26yqmrx2D8ZyG4Bo/8dZETALmKhcPL3NIXVn4Ev4KGzGiabEq1nOQn9LMDu8wj7qcbuc2aCvCvWdeo+zdQsat+vNhpsYvp3bn28pS9zdU9SMQmwMlwj14P7xCnQke8+FD0XS92OT87sybuT63iEfivGZyo7PfRmRgfLqcSNa9sNbiGUi3N8aPBa863LkCxJ0pcZqF0n22Gv4phcuQIwxhgZdm6QJj2mjiQ==~1; Domain=.blibli.com; Path=/; Expires=Thu, 12 Jan 2023 06:53:34 GMT; Max-Age=7157; Secure
strict-transport-security: max-age=15724800; includeSubDomains
vary: Accept-Encoding
x-blibli-canary-mode: 0
</code></pre>
<p>and the payload is pickupPointCode=PP-3227395</p>
<p>I can't make sense what needs to be passed in the headers from the above headers reponse.So I tried just passing the user agent.</p>
<p>I tried using httpx, initially I tried requests it doesn't work too.</p>
<p>My code is as follows</p>
<pre><code>from fake_useragent import UserAgent
import httpx
ua = UserAgent()
USER_AGENT = ua.random
headers={
'user-agent': USER_AGENT}
url=" https://www.blibli.com/backend/product-detail/products/is--LO1-70001-00049-00003/_summary?pickupPointCode=PP-3227395 "
response=httpx(url,headers=headers)
print(response.json())
</code></pre>
<p>This returns a timeout. Using requests just keeps running.
Please do help to scrape the web.</p>
|
<python><python-requests-html><httpx>
|
2023-01-12 05:05:16
| 1
| 333
|
Hal
|
75,091,649
| 13,497,264
|
Can't install python extension on visual studio: XHR failed,XHR failed,
|
<p>this is the error code where the plugin can't be installed</p>
<pre><code>2023-01-12 11:26:54.575 [error] XHR failed,XHR failed,XHR failed: Error: XHR failed,XHR failed,XHR failed
at vscode-file://vscode-app/c:/Users/names/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/code/electron-browser/sharedProcess/sharedProcessMain.js:87:88554
at Array.reduce (<anonymous>)
at u (vscode-file://vscode-app/c:/Users/names/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/code/electron-browser/sharedProcess/sharedProcessMain.js:87:88540)
at Z.D (vscode-file://vscode-app/c:/Users/names/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/code/electron-browser/sharedProcess/sharedProcessMain.js:87:79894)
at async Z.z (vscode-file://vscode-app/c:/Users/names/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/code/electron-browser/sharedProcess/sharedProcessMain.js:87:77428)
at async Z.installFromGallery (vscode-file://vscode-app/c:/Users/names/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/code/electron-browser/sharedProcess/sharedProcessMain.js:87:73871)
</code></pre>
<p>I have clear the DNS cache and this is my DNS setting:</p>
<pre><code> DHCP Server . . . . . . . . . . . : 192.168.1.1
DHCPv6 IAID . . . . . . . . . . . : 229927932
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-2A-7B-D7-CA-B4-6B-FC-45-73-47
DNS Servers . . . . . . . . . . . : 8.8.8.8
8.8.4.4
</code></pre>
|
<python><visual-studio><proxy><dns>
|
2023-01-12 04:33:27
| 1
| 357
|
1988
|
75,091,633
| 15,974,438
|
Get resource icon from exe generated from pyinstaller
|
<p>I'm generating an .exe using pyinstaller, but I'm not able to include an file without resources.</p>
<p>command line:</p>
<p><code>pyinstaller --onefile --icon=./resources/image/icon.ico --add-data="./resources/data.txt;data" ./src/MainGui.py</code></p>
<p>When I try to open the file <code>./data/data.txt</code> or <code>data/data.txt</code> or <code>data.txt</code> , it says that the file was not found. The icon was changed sucessfully.</p>
<p>A part of the pyinstaller output:</p>
<pre><code>447 INFO: Writing RT_GROUP_ICON 0 resource with 104 bytes
448 INFO: Writing RT_ICON 1 resource with 3752 bytes
448 INFO: Writing RT_ICON 2 resource with 2216 bytes
448 INFO: Writing RT_ICON 3 resource with 1384 bytes
449 INFO: Writing RT_ICON 4 resource with 37019 bytes
449 INFO: Writing RT_ICON 5 resource with 9640 bytes
449 INFO: Writing RT_ICON 6 resource with 4264 bytes
450 INFO: Writing RT_ICON 7 resource with 1128 bytes
454 INFO: Copying 0 resources to EXE
454 INFO: Embedding manifest in EXE
</code></pre>
<p>In line 454 says that 0 resources are copying.</p>
<p>Can I print inside the exe file to know the structure?</p>
|
<python><resources><pyinstaller>
|
2023-01-12 04:30:34
| 0
| 420
|
Arcaniaco
|
75,091,547
| 4,152,567
|
Keras model training on GPU stops/hangs at first epoch
|
<p>In the <a href="https://github.com/matterport/Mask_RCNN" rel="nofollow noreferrer">Mask RCNN</a> model I replaced the Lambda layer below by a custom layer. While the model compiles it does not train on GPU. It seems to stop at Epoch 1 right before allocating Workers. I am not certain what I am doing wrong. The lambda layer will not allow me to save the model so I have to use a different approach but it is not working. Any help is appreciated.</p>
<pre><code>gt_boxes = KL.Lambda(lambda x: norm_boxes_graph(x, K.shape(input_image)[1:3]))(input_gt_boxes)
</code></pre>
<p>by a custom layer</p>
<pre><code>gt_boxes = GtBoxesLayer(name='lambda_get_norm_boxes')([input_image, input_gt_boxes])
</code></pre>
<p>The layer code is:</p>
<pre><code>class GtBoxesLayer(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(GtBoxesLayer, self).__init__(**kwargs)
def call(self, input):
return norm_boxes_graph(input[1], get_shape_image(input[0]))
def get_config(self):
config = super(GtBoxesLayer, self).get_config()
return config
@classmethod
def from_config(cls, config):
return cls(**config)
def norm_boxes_graph(self, boxes, shape):
"""Converts boxes from pixel coordinates to normalized coordinates.
boxes: [..., (y1, x1, y2, x2)] in pixel coordinates
shape: [..., (height, width)] in pixels
Note: In pixel coordinates (y2, x2) is outside the box. But in normalized
coordinates it's inside the box.
Returns:
[..., (y1, x1, y2, x2)] in normalized coordinates
"""
h, w = tf.split(tf.cast(shape, tf.float32), 2)
scale = tf.concat([h, w, h, w], axis=-1) - tf.constant(1.0)
shift = tf.constant([0., 0., 1., 1.])
fin = tf.divide(boxes - shift, scale)
return fin
def get_shape_image_(input_image_):
shape_= tf.shape(input_image_)
return shape_[1:3]
</code></pre>
|
<python><tensorflow><keras>
|
2023-01-12 04:19:47
| 1
| 512
|
Mihai.Mehe
|
75,091,443
| 7,077,532
|
Python: How to Slice & Index Chunks of a List into a New List
|
<p>Let's say I have the following list:</p>
<pre><code>nums = [10,2,3,4, 8, 3]
</code></pre>
<p>I want to slice and index nums so that I create a new list from only some elements of the original list. For example #1:</p>
<pre><code>nums_1 = [10, 3, 4, 8, 3]
</code></pre>
<p>Or Example 2:</p>
<pre><code>nums_2 = [10, 3, 4, 3]
</code></pre>
<p>Is there an efficient way to do this? I tried nums.pop() but I don't want to change the original list.</p>
|
<python><arrays><list><indexing><slice>
|
2023-01-12 04:01:01
| 0
| 5,244
|
PineNuts0
|
75,091,418
| 9,983,652
|
Filter a dataframe with datetime index using another dataframe with date index
|
<p>I have two dataframes, one dataframe has index with date and time, the other dataframe's index only has date. Now I'd like to filter rows of first dataframe if its date is within 2nd dataframe's index.</p>
<p>I can do it but very complicated to use two for loop to check each item. Is there a simple way?</p>
<p>Here is what I did:</p>
<p>First dataframe</p>
<pre><code>import pandas as pd
df=pd.DataFrame({'value':[1,2,3,4,5,6]})
df.index=['2022-01-01 10:00','2022-01-02 13:00','2022-01-07 10:00','2022-01-08 10:00','2022-01-11 10:00','2022-01-12 10:00']
df.index=pd.to_datetime(df.index)
df
value
2022-01-01 10:00:00 1
2022-01-02 13:00:00 2
2022-01-07 10:00:00 3
2022-01-08 10:00:00 4
2022-01-11 10:00:00 5
2022-01-12 10:00:00 6
</code></pre>
<p>2nd dataframe</p>
<pre><code>f_1=pd.DataFrame({'value':[1,2,3]})
df_1.index=['2022-01-02','2022-01-05','2022-01-07']
df_1.index=pd.to_datetime(df_1.index)
df_1
value
2022-01-02 1
2022-01-05 2
2022-01-07 3
</code></pre>
<p>Now I am checking each element of first dataframe's index is equal to any element of 2nd dataframe's index. If so, then save value of True</p>
<pre><code>date_same=[False]*len(df.index)
for ix,date1 in enumerate(df.index.date):
for date2 in df_1.index:
if date1==date2:
date_same[ix]=True
break
date_same
[False, True, True, False, False, False]
</code></pre>
<p>Now using saved list to filter the first dataframe</p>
<pre><code>df_filter=df.loc[date_same]
df_filter
value
2022-01-02 13:00:00 2
2022-01-07 10:00:00 3
</code></pre>
|
<python><pandas><dataframe><indexing><datetimeindex>
|
2023-01-12 03:57:38
| 1
| 4,338
|
roudan
|
75,091,364
| 1,128,648
|
Google spreadsheet api batchupdate using python
|
<p>I am trying to update multiple cell values using batchupdate. But giving me below error</p>
<p><strong>My code:</strong></p>
<pre><code>import gspread
gc = gspread.service_account(filename='gdrive_cred.json')
sh = gc.open('SmartStraddle').sheet1
stock_name = "NIFTY50"
stock_price = 15000
batch_update_values = {
'value_input_option': 'USER_ENTERED',
'data': [
{
'range': "A1",
'values': [stock_name]
},
{
'range': "B2",
'values': [stock_price]
}
],
}
sh.batch_update(batch_update_values)
</code></pre>
<p><strong>Error Message:</strong></p>
<pre><code>Traceback (most recent call last):
File "*\interactiveshell.py", line 3433, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-028b1369fa03>", line 23, in <module>
sh.batch_update(batch_update_values)
File "*\utils.py", line 702, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "*\worksheet.py", line 1152, in batch_update
data = [
^
File "*\worksheet.py", line 1153, in <listcomp>
dict(vr, range=absolute_range_name(self.title, vr["range"])) for vr in data
~~^^^^^^^^^
TypeError: string indices must be integers, not 'str'
</code></pre>
<p>I am following <a href="https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/batchUpdate" rel="nofollow noreferrer">this</a> google documentation, but not sure how to construct <code>data': []</code> field correctly.</p>
|
<python><python-3.x><google-sheets><google-sheets-api><gspread>
|
2023-01-12 03:47:25
| 1
| 1,746
|
acr
|
75,091,265
| 344,669
|
Python setuptools_scm get version from git tags
|
<p>I am using project.toml file to package my module, I want to extract the version from git tag using <code>setuptools_scm</code> module.</p>
<p>When I run <code>python setup.p y --version</code> command it gives this output <code>0.0.1.post1.dev0</code>. How will I get only <code>0.0.1</code> value and omit the <code>.post.dev0</code> value?</p>
<p>Here is project.toml file settings:</p>
<pre><code>[build-system]
requires = ["setuptools>=46.1.0", "setuptools_scm[toml]>=5"]
build-backend = "setuptools.build_meta"
[tool.setuptools_scm]
version_scheme = "no-guess-dev"
local_scheme="no-local-version"
write_to = "src/showme/version.py"
git_describe_command = "git describe --dirty --tags --long --match v* --first-parent"
[tool.setuptools.dynamic]
version = {attr = "showme.__version__"}
</code></pre>
<p>output:</p>
<pre><code> python setup.py --version
setuptools/config/pyprojecttoml.py:108: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*.
warnings.warn(msg, _BetaConfiguration)
0.0.1.post1.dev0
</code></pre>
<p>Thanks</p>
|
<python><setuptools><setup.py><python-packaging><setuptools-scm>
|
2023-01-12 03:28:44
| 1
| 19,251
|
sfgroups
|
75,091,163
| 14,673,832
|
How to solve lambda() takes 1 positional argument but 2 were given
|
<p>I have the following code snippet but it is giving me the type error above. What I basically wanted to do is add two to every element and reduct it to a single sum, but it didnt work. How can we achieve this??</p>
<pre><code>import functools
list1 = [1,2,3,4]
result = functools.reduce(lambda x:x+2, list1)
print(result)
</code></pre>
<p><strong>Update:</strong></p>
<p>It works when I do the following though:</p>
<pre><code>list1 = [1,2,3,4]
result = functools.reduce(lambda x,y:x+y, list1)
print(result)
</code></pre>
|
<python><lambda><typeerror>
|
2023-01-12 03:06:14
| 2
| 1,074
|
Reactoo
|
75,091,078
| 3,495,236
|
how to write a python worker that stays in memory processing jobs until the master thread kills it
|
<p>I have a worker node that reads data off of a queue to process images. The job loads from a redis queue and then a new thread is spun up to process the job. The jobs must process sequentially, I can't use parallization. I need to use threads because for some reason memory is not fully released with the GPU, so this helps ensure memory the memory is released between threads. To load all the data to process the job is very expensive. I want to make processing faster and I can do that if the job parameters are similar.
The problem with this is it is slow to do it this way. To load a the data into memory takes about 15 seconds. So that means every thread is loading the data, processing, then killing the thread and repeating. If the main job queue looks like this: [1 1 1 1 2 2 2 2 2 1 1 2 2 2 2 ]
I could save time by continuing to reuse the older thread before killing it because the main data for the thread is the same for all 1's, its only when I go from 1 to 2 that I really need to kill the thread and reload.</p>
<p>This is my currently working, but slow code:</p>
<pre><code>def process_job(job):
pass
message = r.brpop(list_name)
j = json.loads(message[1])
thread = threading.Thread(target=process_job, args=(j,))
thread.start()
thread.join()
</code></pre>
<p>I tried to rewrite it like this, but it doesn't work:</p>
<pre><code>while True:
# Read from the redis queue
message = r.blpop(list_name)
job = json.loads(message[1])
# Parse the JSON string and get the 'name' field
model_name = job['model_id']
# Check if we already have a thread for this name
if model_name in threads:
# Update the target function of the existing thread
thread = threads[model_name]
thread.target = process_job
# Start the thread with the new arguments
thread.start(job)
else:
# Create a new thread and start it
for name, thread in threads.items():
thread.join()
# del threads[name]
thread = threading.Thread(target=process_job, args=(job,))
thread.start()
threads[model_name] = thread
</code></pre>
<p>How can I rewrite this so I don't kill the thread if the model_id is the same between job requests?</p>
|
<python><python-3.x><multithreading>
|
2023-01-12 02:50:11
| 1
| 590
|
jas
|
75,091,028
| 134,044
|
How and why to supply scalar scope to FastAPI oAuth2 scopes dict?
|
<p>I am using FastAPI with Python 3.9. I haven't been able to get the available oAuth2 dependencies to work with our particular Azure token authentication, and my initial attempt at using <code>fastapi-azure-auth</code> didn't seem to match either.</p>
<p>I am therefore currently sub-classing <code>fastapi.security.base.SecurityBase</code> to try to create my own authentication dependency. I am using as a guide the approach in <code>fastapi.security.oauth2.OAuth2</code> and <code>fastapi.security.oauth2.OAuth2PasswordBearer</code>.</p>
<ul>
<li><a href="https://github.com/tiangolo/fastapi/blob/master/fastapi/security/oauth2.py" rel="nofollow noreferrer">https://github.com/tiangolo/fastapi/blob/master/fastapi/security/oauth2.py</a></li>
</ul>
<p>These models rely on <code>fastapi.openapi.models.OAuth2</code> and <code>fastapi.openapi.models.OAuthFlow</code> leading back to Pydantic's <code>BaseModel</code> where presumably nothing much happens except initialising the fields that are provided.</p>
<ul>
<li><a href="https://github.com/tiangolo/fastapi/blob/master/fastapi/openapi/models.py" rel="nofollow noreferrer">https://github.com/tiangolo/fastapi/blob/master/fastapi/openapi/models.py</a></li>
</ul>
<p>The only information I can seem to find on using OAuth2 with FastAPI seem to be repetitious cut and pastes of the great little FastAPI security tutorial which only provides guidance for a simplistic dummy example.</p>
<ul>
<li><a href="https://fastapi.tiangolo.com/tutorial/security/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/security/</a></li>
</ul>
<p>At this stage I would just really like an answer to one question which is a puzzle for me: how are we supposed to supply <em><strong>scalar</strong></em> scopes in a <em><strong>dict</strong></em>?</p>
<ol>
<li>I have a "scope" that I believe is probably essential to be provided for the security scheme to work.</li>
<li>The <code>fastapi.security.oauth2.OAuth2</code> model needs to provide a <code>fastapi.openapi.models.OAuth2</code> model for its <code>model</code> attribute.</li>
<li>The <code>fastapi.openapi.models.OAuth2</code> model needs to provide a <code>fastapi.openapi.models.OAuthFlows</code> model for its <code>flows</code> attribute.</li>
<li>The <code>OAuthFlows</code> model contains one of the <code>OAuthFlow<Type></code> models which sub-class <code>fastapi.openapi.models.OAuthFlow</code>.</li>
<li>The <code>OAuthFlow</code> base class is where the "scopes" are stored: <code>scopes: Dict[str, str] = {}</code></li>
</ol>
<p>I can't seem to find even one sentence on the behaviour and usage for <code>OAuth2PasswordBearer</code> right the way back to <code>OAuthFlow</code>, and even the code is completely bare of any in-line documentation for any of these classes.</p>
<p><em>But what seems to be clear from the FastAPI tutorial and OpenAPI documentation is that a "scope" is a string; and multiple scopes may sometimes be represented as a single string using space as a separator. I can't avoid the conclusion (and the data I have available to supply as a scope confirms), that "scopes" are scalars: a single value.</em></p>
<p><a href="https://fastapi.tiangolo.com/advanced/security/oauth2-scopes/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/advanced/security/oauth2-scopes/</a> says:</p>
<ul>
<li>The OAuth2 specification defines "scopes" as a list of strings separated by spaces.</li>
<li>The content of each of these strings can have any format, but should not contain spaces.</li>
<li>Each "scope" is just a string (without spaces).</li>
</ul>
<p>So my question is: <strong>how are we supposed to supply <em>scalar</em> values to the <code>OAuthFlow.scopes</code> <em>dict</em>?</strong></p>
<p>My scope (scalar) looks like this kind of thing:</p>
<pre><code>api://a12b34cd-5e67-89f0-a12b-c3de456f78ab/.default
</code></pre>
<p>Should this be supplied as the key, or the value, or both, and otherwise can the other key/value be left blank (<code>""</code>), <code>None</code>, or what should go in there (and why?)?</p>
<p>Also, since there is the <code>fastapi.security.oauth2.SecurityScopes</code> class that <em>does</em> store <em>scalar</em> scopes as space-separated strings, why are there two ways to store scopes and how do they interact (if at all)?</p>
|
<python><oauth-2.0><fastapi><openapi>
|
2023-01-12 02:40:01
| 1
| 4,109
|
NeilG
|
75,090,901
| 4,996,021
|
Find value of column based on another column condition (max) in polars for many columns
|
<p>If I have this dataframe:</p>
<pre><code>pl.DataFrame(dict(x=[0, 1, 2, 3], y=[5, 2, 3, 3],z=[4,7,8,2]))
shape: (4, 3)
βββββββ¬ββββββ¬ββββββ
β x β y β z β
β --- β --- β --- β
β i64 β i64 β i64 β
βββββββͺββββββͺββββββ‘
β 0 β 5 β 4 β
β 1 β 2 β 7 β
β 2 β 3 β 8 β
β 3 β 3 β 2 β
βββββββ΄ββββββ΄ββββββ
</code></pre>
<p>and I want to find the value in x where y is max, then again find the value in x where z is max, and repeat for hundreds more columns so that I end up with something like:</p>
<pre><code>shape: (2, 2)
ββββββββββ¬ββββββββββ
β column β x_value β
β --- β --- β
β str β i64 β
ββββββββββͺββββββββββ‘
β y β 0 β
β z β 2 β
ββββββββββ΄ββββββββββ
</code></pre>
<p>or</p>
<pre><code>shape: (1, 2)
βββββββ¬ββββββ
β y β z β
β --- β --- β
β i64 β i64 β
βββββββͺββββββ‘
β 0 β 2 β
βββββββ΄ββββββ
</code></pre>
<p>What is the best polars way to do that?</p>
|
<python><python-polars>
|
2023-01-12 02:16:42
| 1
| 610
|
pwb2103
|
75,090,896
| 5,901,318
|
django changeform_view extra_context
|
<p>I'm trying to learn on model admin template customization.</p>
<p>I need that custom template can read some data stored/passed in 'extra_context'</p>
<p>admin.py</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib import admin
from .models import MailTemplate
# Register your models here.
class MailTemplateAdmin(admin.ModelAdmin):
change_form_template = 'change_form_htmx.html'
def changeform_view(self,request, object_id=None, form_url="", extra_context=None):
extra_context = extra_context or {}
extra_context['myvar']='this is myvar'
return super(MailTemplateAdmin, self).changeform_view(request, object_id=object_id, form_url=form_url, extra_context=extra_context)
admin.site.register(MailTemplate,MailTemplateAdmin)
</code></pre>
<p>template 'change_form_htmx.html'</p>
<pre><code>{% extends "admin/base_site.html" %}
{% load i18n admin_urls static admin_modify %}
{% block extrahead %}{{ block.super }}
<script src="{% url 'admin:jsi18n' %}"></script>
{{ media }}
{% endblock %}
{% block extrastyle %}{{ block.super }}<link rel="stylesheet" href="{% static "admin/css/forms.css" %}">{% endblock %}
{% block coltype %}colM{% endblock %}
{% block bodyclass %}{{ block.super }} app-{{ opts.app_label }} model-{{ opts.model_name }} change-form{% endblock %}
{% if not is_popup %}
{% block breadcrumbs %}
<div class="breadcrumbs">
<a href="{% url 'admin:index' %}">{% translate 'Home' %}</a>
&rsaquo; <a href="{% url 'admin:app_list' app_label=opts.app_label %}">{{ opts.app_config.verbose_name }}</a>
&rsaquo; {% if has_view_permission %}<a href="{% url opts|admin_urlname:'changelist' %}">{{ opts.verbose_name_plural|capfirst }}</a>{% else %}{{ opts.verbose_name_plural|capfirst }}{% endif %}
&rsaquo; {% if add %}{% blocktranslate with name=opts.verbose_name %}Add {{ name }}{% endblocktranslate %}{% else %}{{ original|truncatewords:"18" }}{% endif %}
</div>
{% endblock %}
{% endif %}
{% block content %}<div id="content-main">
<!--- add htmx -->
<script src="https://unpkg.com/htmx.org@1.6.0"></script>
{% block object-tools %}
{% if change and not is_popup %}
<ul class="object-tools">
{% block object-tools-items %}
{% change_form_object_tools %}
{% endblock %}
</ul>
{% endif %}
{% endblock %}
<form {% if has_file_field %}enctype="multipart/form-data" {% endif %}{% if form_url %}action="{{ form_url }}" {% endif %}method="post" id="{{ opts.model_name }}_form" novalidate>{% csrf_token %}{% block form_top %}{% endblock %}
<div>
{% if is_popup %}<input type="hidden" name="{{ is_popup_var }}" value="1">{% endif %}
{% if to_field %}<input type="hidden" name="{{ to_field_var }}" value="{{ to_field }}">{% endif %}
{% if save_on_top %}{% block submit_buttons_top %}{% submit_row %}{% endblock %}{% endif %}
{% if errors %}
<p class="errornote">
{% blocktranslate count counter=errors|length %}Please correct the error below.{% plural %}Please correct the errors below.{% endblocktranslate %}
</p>
{{ adminform.form.non_field_errors }}
{% endif %}
{% block field_sets %}
{% for fieldset in adminform %}
{% include "admin/includes/fieldset.html" %}
{% endfor %}
{% endblock %}
{% block after_field_sets %}{% endblock %}
{% block inline_field_sets %}
{% for inline_admin_formset in inline_admin_formsets %}
{% include inline_admin_formset.opts.template %}
{% endfor %}
{% endblock %}
<div id="some_buttons">
<!-- here we have button for add and delete row-->
from extra_context = {{ extra_context.myvar }}
</div>
{% block after_related_objects %}{% endblock %}
{% block submit_buttons_bottom %}{% submit_row %}{% endblock %}
{% block admin_change_form_document_ready %}
<script id="django-admin-form-add-constants"
src="{% static 'admin/js/change_form.js' %}"
{% if adminform and add %}
data-model-name="{{ opts.model_name }}"
{% endif %}
async>
</script>
{% endblock %}
{# JavaScript for prepopulated fields #}
{% prepopulated_fields_js %}
</div>
</form></div>
{% endblock %}
</code></pre>
<p>There is no error occured, but the extra_content['myval'] is not showed.
<a href="https://i.sstatic.net/X8fvi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X8fvi.png" alt="enter image description here" /></a></p>
<p>Kindly please tell how is the proper way to send extra_context from ModelAdmin and read it in template.</p>
<p>Sincerely<br />
-bino-</p>
|
<python><django>
|
2023-01-12 02:15:28
| 1
| 615
|
Bino Oetomo
|
75,090,788
| 1,336,134
|
Last 4 digits are getting converted to 0 while writing to excel using Panda and ExcelWriter
|
<p>I am using xlsxwriter with Panda to write an excel. Doing so 19-character long value is getting changed from <strong>9223781998151429879</strong> to <strong>9223781998151420000</strong>. Excel <a href="https://learn.microsoft.com/en-us/office/troubleshoot/excel/long-numbers-incorrectly-in-excel" rel="nofollow noreferrer">handling</a> of long numbers might be the reason.</p>
<p>I tried to remove the formatting using the below code. I tried various combinations of formats. But nothing worked.</p>
<pre><code>writer = pd.ExcelWriter("pandas_column_formats.xlsx", engine='xlsxwriter')
df.to_excel(writer, sheet_name='Result 1')
workbook = writer.book
worksheet = writer.sheets['Result 1']
format1 = workbook.add_format({'num_format': '#,##0.00'})
worksheet.set_column('M:M', 20, format1)
writer.close()
</code></pre>
|
<python><excel><pandas><dataframe><xlsxwriter>
|
2023-01-12 01:53:49
| 1
| 1,165
|
sandy
|
75,090,782
| 2,463,570
|
Subprocess Popen to call a tasks file python which cannot able to load models in Django
|
<p>I have Django app ..we have several tasks file(stored into tasks folder).
And we want to call those tasks file from views.py</p>
<p>Now when we call</p>
<pre><code>p = Popen("python","./tasks/task1.py", "jobid", stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True)
output_filename, err = p.communicate(b"input data that is passed to subprocess' stdin")
</code></pre>
<p>now in task1.py</p>
<pre><code>from app.models import Job #its throwing error
</code></pre>
<p>It cannot import Models into task1.py where I have sent job id</p>
<pre><code>from app.models import Job\r\nModuleNotFoundError: No module named \'app\'\r\n
</code></pre>
<p>Folder structure</p>
<pre><code>project
|-app
|-models.py
|-views.py
|-tasks
|- task1.py
</code></pre>
|
<python><django>
|
2023-01-12 01:51:17
| 0
| 12,390
|
Rajarshi Das
|
75,090,778
| 1,653,413
|
Making a logging.Handler with async emit
|
<p>I have a Python log handler that writes using asyncio (it's too much work to write to this particular service any other way). I also want to be able to log messages from background threads, since a few bits of code do that. So my code looks basically like this (minimal version):</p>
<pre><code>class AsyncEmitLogHandler(logging.Handler):
def __init__(self):
self.loop = asyncio.get_running_loop()
super().__init__()
def emit(self, record):
self.format(record)
asyncio.run_coroutine_threadsafe(
coro=self._async_emit(record.message),
loop=self.loop,
)
async def _async_emit(message):
await my_async_write_function(message)
</code></pre>
<p>Mostly it works fine but when processes exit I get a lot some warnings like this: "coroutine 'AsyncEmitLogHandler._async_emit' was never awaited"</p>
<p>Any suggestions on a cleaner way to do this? Or some way to catch shutdown and kill pending writes? Or just suppress the warnings?</p>
<p>Note: the full code is [here][1]
[1]: https://github.com/lsst-ts/ts_salobj/blob/c0c6473f7ff7c71bd3c84e8e95b4ad7c28e67721/python/lsst/ts/salobj/sal_log_handler.py</p>
|
<python><logging><python-asyncio>
|
2023-01-12 01:50:42
| 2
| 449
|
Russell Owen
|
75,090,691
| 9,989,761
|
How to suppress PyTorch Lightning Logging Output in DARTs
|
<p>I am using the python <a href="https://unit8co.github.io/darts/" rel="nofollow noreferrer">DARTs</a> package, and would like to run the prediction method without generating output logs. I appear unable to do so; all of the suggestions I've seen do not work, even when I attempt to apply them to DARTs source code.</p>
<p>Here is a reproducible example, which generates the output logs:</p>
<pre><code>import darts
import datetime
import pandas as pd
import numpy as np
#reprex
yseries = np.random.rand(100)
xseries = np.random.rand(100)
zseries = np.random.rand(100)
d = datetime.datetime.now()
tseries = [d + datetime.timedelta(days=i) for i in range(100)]
df = pd.DataFrame({"y":yseries,"x":xseries,"z":zseries,"t":tseries})
yseries = TimeSeries.from_dataframe(df, "t", "y").astype(np.float32)
xseries = TimeSeries.from_dataframe(df, "t", ["x","z"]).astype(np.float32)
from darts.models import NBEATSModel, TCNModel
model3 = NBEATSModel(input_chunk_length=20, # init
output_chunk_length=1,n_epochs=50,
torch_metrics=torch_metrics)
yseries_train, yseries_val = yseries.split_before(0.5)
xseries_train, xseries_val = xseries.split_before(0.5)
model3.fit(series=yseries_train,past_covariates=xseries_train,max_samples_per_ts=50)
for t in range(0,100):
forecast = model3.predict(n=1,series=yseries_train,past_covariates=xseries_train)
</code></pre>
<p>Which gives the following (100 times):</p>
<pre><code>GPU available: True (mps), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
</code></pre>
<p>Here are the leads I've attempted to follow, to no avail: <a href="https://github.com/Lightning-AI/lightning/issues/2757" rel="nofollow noreferrer">https://github.com/Lightning-AI/lightning/issues/2757</a>, <a href="https://stackoverflow.com/questions/68807896/how-to-disable-logging-from-pytorch-lightning-logger">How to disable logging from PyTorch-Lightning logger?</a></p>
|
<python><pytorch-lightning><u8darts><pytorch-forecasting>
|
2023-01-12 01:32:45
| 1
| 364
|
Josh Purtell
|
75,090,690
| 11,182,916
|
Best way to find a point near all four points known coordinates
|
<p>I have the coordinates of four points. Can anyone help me find the coordinates of one point that satisfies the condition: the distances from the finding point to four input points are in the range of 1.9 and 2.5?</p>
<pre><code>import numpy as np
dist_min = 1.9
dist_max = 2.5
# this show no points satisfied
input_points1 = [[ 7.57447956, 6.67658376, 10.79921475],
[ 8.98026868, 7.69010703, 12.89377068],
[ 6.22242062, 7.73362942, 12.87947421],
[ 10.0000000, 9.00000000, 8.500000000]]
#this has
input_points2 = [[ 7.57447956, 6.67658376, 10.79921475],
[ 8.98026868, 7.69010703, 12.89377068],
[ 6.22242062, 7.73362942, 12.87947421],
[ 6.22473072, 4.74175054, 12.96455411]]
def Distance(point1, point2):
return np.linalg.norm(point1 - point2)
</code></pre>
|
<python>
|
2023-01-12 01:32:36
| 2
| 405
|
Binh Thien
|
75,090,629
| 12,980,888
|
How pandas process boolean statements inside a pandas frame object
|
<p>Wondering how Pandas process the following statement internally please.</p>
<pre><code>import pandas as pd
test = pd.DataFrame({'classNumber': [2, 4], 'center x': [2, 0], 'center y': [4, 4]})
</code></pre>
<p>The output of <code>test</code>:</p>
<p><a href="https://i.sstatic.net/SbTMB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SbTMB.png" alt="enter image description here" /></a></p>
<pre><code>test.loc[test['classNumber'] == 2, 'center y'] = '5'
</code></pre>
<p>The output of <code>test</code> after operation:</p>
<p><a href="https://i.sstatic.net/zoRAc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zoRAc.png" alt="enter image description here" /></a></p>
<p><strong>Question</strong>: so is this how pandas work here: once it sees that <code>test.loc</code> has a boolean statement, it will pass it to a function that will return all rows whose <code>test['classNumber'] == 2</code> then it will assign those rows' column <code>test['center y']</code> with the new value 5?</p>
|
<python><pandas>
|
2023-01-12 01:20:35
| 1
| 519
|
Avv
|
75,090,614
| 1,870,832
|
Python dataprep package seems to break pip-compile due to conflicting dependencies
|
<p>I used the dataprep package in a jupyter notebook, installing via <code>!pip install dataprep</code> recently and it installed smoothly.</p>
<p>Now I'm tidying up some of that work and am using a venv, but <code>pip-compile</code> keeps crashing and I seem to have isolated dataprep as the cause. A minimal reproducible example below:</p>
<p>Given a <code>test_requirements.in</code> file which contains only a single line: <code>dataprep</code></p>
<p>...Running <code>pip-compile test_requirements.in --verbose --output-file test_requirements.txt</code> yields the following error:</p>
<pre><code>Could not find a version that matches executing<0.9.0,>=0.8.3,>=1.2.0 (from varname==0.8.3->dataprep==0.4.5->-r test_requirements.in (line 1))
Tried: 0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.2.0, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.5.0, 0.5.2, 0.5.3, 0.5.3, 0.5.4, 0.5.4, 0.6.0, 0.6.0, 0.7.0, 0.7.0, 0.8.0, 0.8.0, 0.8.1, 0.8.1, 0.8.2, 0.8.2, 0.8.3, 0.8.3, 0.9.0, 0.9.0, 0.9.1, 0.9.1, 0.10.0, 0.10.0, 1.0.0, 1.0.0, 1.1.0, 1.1.0, 1.1.1, 1.1.1, 1.2.0, 1.2.0
There are incompatible versions in the resolved dependencies:
executing<0.9.0,>=0.8.3 (from varname==0.8.3->dataprep==0.4.5->-r test_requirements.in (line 1))
executing>=1.2.0 (from stack-data==0.6.2->ipython==8.8.0->ipywidgets==7.7.2->dataprep==0.4.5->-r test_requirements.in (line 1))
</code></pre>
<p>I might be reading this wrong but it seems to be saying that one of <code>dataprep</code>'s dependencies requires a version of the <code>executing</code> package between 0.8.3 and 0.9, but another of <code>dataprep</code>'s dependencies requires a version of the <code>executing</code> package >=1.2. Is there any way to resolve this apparent contradiction with pip-compile?</p>
|
<python><pip><dataprep>
|
2023-01-12 01:17:21
| 0
| 9,136
|
Max Power
|
75,090,518
| 1,330,974
|
Mocking os.path.exists and os.makedirs returning AssertionError
|
<p>I have a function like below.</p>
<pre><code># in retrieve_data.py
import os
def create_output_csv_file_path_and_name(output_folder='outputs') -> str:
"""
Creates an output folder in the project root if it doesn't already exist.
Then returns the path and name of the output CSV file, which will be used
to write the data.
"""
if not os.path.exists(output_folder):
os.makedirs(output_folder)
logging.info(f"New folder created for output file: " f"{output_folder}")
return os.path.join(output_folder, 'results.csv')
</code></pre>
<p>I also created a unit test file like below.</p>
<pre><code># in test_retrieve_data.py
class OutputCSVFilePathAndNameCreationTest(unittest.TestCase):
@patch('path.to.retrieve_data.os.path.exists')
@patch('path.to.retrieve_data.os.makedirs')
def test_create_output_csv_file_path_and_name_calls_exists_and_makedirs_once_when_output_folder_is_not_created_yet(
self,
os_path_exists_mock,
os_makedirs_mock
):
os_path_exists_mock.return_value = False
retrieve_cradle_profile_details.create_output_csv_file_path_and_name()
os_path_exists_mock.assert_called_once()
os_makedirs_mock.assert_called_once()
</code></pre>
<p>But when I run the above unit test, I get the following error.</p>
<pre><code>def assert_called_once(self):
"""assert that the mock was called only once.
"""
if not self.call_count == 1:
msg = ("Expected '%s' to have been called once. Called %s times.%s"
% (self._mock_name or 'mock',
self.call_count,
self._calls_repr()))
raise AssertionError(msg)
AssertionError: Expected 'makedirs' to have been called once. Called 0 times.
</code></pre>
<p>I tried poking around with <code>pdb.set_trace()</code> in <code>create_output_csv_file_path_and_name</code> method and I'm sure it is receiving a mocked object for <code>os.path.exists()</code>, but the code never go pasts that <code>os.path.exists(output_folder)</code> check (<code>output_folder</code> was already created in the program folder but I do not use it for unit testing purpose and want to keep it alone). What could I possibly be doing wrong here to mock <code>os.path.exists()</code> and <code>os.makedirs()</code>? Thank you in advance for your answers!</p>
|
<python><python-3.x><python-unittest><python-mock>
|
2023-01-12 00:58:37
| 1
| 2,626
|
user1330974
|
75,090,510
| 6,810,602
|
Understand shap values for binary classification
|
<p>I have trained my imbalanced dataset (binary classification) using CatboostClassifer. Now, I am trying to interpret the model using the SHAP library. Below is the code to fit the model and calculate shap values:</p>
<pre><code>weights = y.value_counts()[0] / y.value_counts()[1]
catboost_clf = CatBoostClassifier(loss_function='Logloss', iterations=100, verbose=True, \
l2_leaf_reg=6, scale_pos_weight=weights,eval_metric="MCC")
catboost_clf.fit(X, y)
trainx_preds = catboost_clf.predict(X_test)
explainer = shap.TreeExplainer(catboost_clf)
shap_values = explainer.shap_values(Pool(X,y))
#Class 0 samples 1625125
#Class 1 samples 122235
</code></pre>
<p>The size of shap values is (1747360, 13) i.e. (number of instances, number of features). I was expecting the shap values to be a 3d array i.e. (number of classes,number of instances, number of features). Shap values for each of the positive and negative class. How do I achieve that? How do I extract class wise shapley values to better understanding of the model.</p>
<p>Also, explainer.expected_value shows one base value instead of two.</p>
<p>Is there anything missing or incorrect in the code?</p>
<p>Thanks in advance!</p>
|
<python><binary><shap><catboost><imbalanced-data>
|
2023-01-12 00:55:58
| 1
| 371
|
Dhvani Shah
|
75,090,483
| 19,123,103
|
Randomly select argmax of non-unique maximum
|
<p>Given a 2D numpy array, I want to construct an array out of the column indices of the maximum value of each row. So far, <code>arr.argmax(1)</code> works well. However, for my specific case, for some rows, 2 or more columns may contain the maximum value. In that case, I want to select a column index randomly (not the first index as it is the case with <code>.argmax(1)</code>).</p>
<p>For example, for the following <code>arr</code>:</p>
<pre class="lang-py prettyprint-override"><code>arr = np.array([
[0, 1, 0],
[1, 1, 0],
[2, 1, 3],
[3, 2, 2]
])
</code></pre>
<p>there can be two possible outcomes: <code>array([1, 0, 2, 0])</code> and <code>array([1, 1, 2, 0])</code> each chosen with 1/2 probability.</p>
<p>I have code that returns the expected output using a list comprehension:</p>
<pre class="lang-py prettyprint-override"><code>idx = np.arange(arr.shape[1])
ans = [np.random.choice(idx[ix]) for ix in arr == arr.max(1, keepdims=True)]
</code></pre>
<p>but I'm looking for an optimized numpy solution. In other words, how do I replace the list comprehension with numpy methods to make the code feasible for bigger arrays?</p>
|
<python><arrays><numpy><random>
|
2023-01-12 00:49:54
| 2
| 25,331
|
cottontail
|
75,090,480
| 875,295
|
Potential bug in heapq.heappush?
|
<p>I'm trying to understand if the following snippet doesn't exhibit a bug in heapq.heappush :</p>
<pre><code>import heapq
x = []
heapq.heappush(x, 1)
print(x)
try:
heapq.heappush(x, "a")
except:
pass
print(x) # [1, 'a']
</code></pre>
<p>Here, I'm trying to build a heap with non-comparable items. As expected, the second call to heappush is throwing an exception. However, I notice that the second item was inserted in the array anyway. This array was supposed to always contain a heap, and now it isn't a heap anymore.</p>
<p><strong>Shouldn't heappush method undo the insert of "a" in case it can't finish the heapify process on insertion ?</strong></p>
<p>Consider the case where you have a multi-threaded system relying on this. Assume I have some workers that are dequeueing from the heap, and some worker processes that are inserting in the heap. This means that if a worker is incorrectly inserting an invalid type, the worker will raise an excpetion as expected, but the underlying data-structure is still corrupted</p>
|
<python>
|
2023-01-12 00:49:28
| 1
| 8,114
|
lezebulon
|
75,090,432
| 5,212,614
|
Can we rotate the x-axis and y-axis in a Seaborn Pairplot?
|
<p>I'm trying to rotate the x-axis and y-axis in a Seaborn Pairplot. SO far all I can rotate is the tick-params. I really want to rotate the x-axis and y-axis 45 degrees. Can we do that???</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
penguins = sns.load_dataset("penguins")
g=sns.pairplot(penguins)
for ax in g.axes.flatten():
ax.tick_params(rotation = 45)
</code></pre>
<p><a href="https://i.sstatic.net/xLl7r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xLl7r.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><seaborn>
|
2023-01-12 00:39:29
| 1
| 20,492
|
ASH
|
75,090,429
| 12,224,591
|
Best Way to Fill 3D Scatter Points? (MatPlotLib, Py 3.10)
|
<p>I have the following <code>Python 3.10</code> script to generate a simple 3D Scatter Plot with <code>MatPlotLib</code>, according to the <a href="https://matplotlib.org/stable/gallery/mplot3d/scatter3d.html" rel="nofollow noreferrer">MatPlotLib 3D Scatter tutorial</a>:</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
X = [1, 3, 2]
Y = [1, 1, 2]
Z = [2, 2, 2]
ax.scatter(X, Y, Z)
ax.set_xlim(0, 4)
ax.set_ylim(0, 3)
ax.set_zlim(1.9, 2.1)
ax.set_xlabel('X Axis')
ax.set_ylabel('Y Axis')
ax.set_zlabel('Z Axis')
plt.show()
</code></pre>
<p>The script above works as intended, and I get the correct output:</p>
<p><a href="https://i.sstatic.net/EfzWG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EfzWG.png" alt="enter image description here" /></a></p>
<p>However, I was wondering whether it would be possible to connect & fill the scatter points, to create a "face" of sorts, and to provide a color for it.
In this case, it would be as such:</p>
<p><a href="https://i.sstatic.net/kDj58.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kDj58.png" alt="enter image description here" /></a></p>
<p>I looked around the other <code>MatPlotLib</code> tutorials, notably the ones for the <a href="https://matplotlib.org/stable/gallery/mplot3d/surface3d.html" rel="nofollow noreferrer">3D Surface Colormap</a> plot and the <a href="https://matplotlib.org/stable/gallery/mplot3d/surface3d_2.html" rel="nofollow noreferrer">3D Surface Solid Color</a> plot, however it's confusing to me how they are implemented.</p>
<p>From what I could gather by attempting to run the examples, the Z-axis data set is a multi-dimensional array (the interperter threw up an error if I simply supplied a singular list or np array to it). I don't really understand why the Z-axis data set needs to be a multi-dimensional array, as I would imagine that the actual data colors are supplied via the <code>color</code> argument to the <code>scatter</code> function call.</p>
<p>Is what I'm trying to do even possible with <code>MatPlotLib</code>? Is there a better way to approach this?</p>
<p>Thanks for reading my post, any guidance is appreciated!</p>
|
<python><matplotlib>
|
2023-01-12 00:39:07
| 1
| 705
|
Runsva
|
75,090,410
| 5,041,471
|
Defining an aggregation function with groupby in pandas
|
<p>I would like to collapse my dataset using <code>groupby</code> and <code>agg</code>, however after collapsing, I want the new column to show a string value only for the grouped rows.
For example, the initial data is:</p>
<pre><code>df = pd.DataFrame([["a",1],["a",2],["b",2]], columns=['category','value'])
category value
0 a 1
1 a 3
2 b 2
</code></pre>
<p>Desired output:</p>
<pre><code> category value
0 a grouped
1 b 2
</code></pre>
<p>How should I modify my code (to show "grouped" instead of 3):</p>
<pre><code>df=df.groupby(['category'], as_index=False).agg({'value':'max'})
</code></pre>
|
<python><pandas><dataframe><group-by><aggregation>
|
2023-01-12 00:35:55
| 2
| 471
|
Hossein
|
75,090,351
| 7,991,581
|
Python requests lib mix up headers in multi threaded context
|
<p>I'm managing several user accounts on a website with an API and I'm regularly retrieving some information for every user.</p>
<p>To regularly get those information I'm using a python script which loads user data from a database and then uses the API connector to make the request.</p>
<p>The endpoints I'm using to do this are private endpoints and, to authenticate, I need to make a request on a specific endpoint with user's api_key and api_secret as parameters, the response contains an access_token which is then used to authenticate the user on private endpoints.</p>
<p>This token is given using request's headers and it must be refreshed regularly.</p>
<p>The connector is working well, however I recently tried to use this in a multi-threaded context. So instead of looping users, I'm launching a thread for every user and I join them after.</p>
<p>In a multi-threaded context the connector also works, but on some rare occasions I realized that user data were mixed up.</p>
<p>I went further into debugging and I realized that in those cases, the issue was that the connector was using the access_token of another user.</p>
<hr />
<p>I reproduced this issue with a simple example to expose the logic of the script.</p>
<pre><code>#!/usr/bin/python3
from utils.database import Database
from urllib.parse import urlencode
import threading
import requests
import time
class User():
def __init__(self, user_id, database):
self.user_id = user_id
self.database = database
self.connector = None
def get_connector(self:
if not self.connector:
self.__create_connector()
return self.connector
def __create_connector(self):
__user_api_df = self.database.get_table("users_apis", where=f"WHERE user_id = {self.user_id}")
api_key = __user_api_df["api_key"].values[0]
api_secret = __user_api_df["api_secret"].values[0]
self.connector = ApiConnector(api_key, api_secret, self)
def __str__(self):
return f"User-{self.user_id}"
class ApiConnector():
def __init__(self, api_key, api_secret, user=None):
self.base_url = "https://www.api.website.com"
self.api_key = api_key
self.api_secret = api_secret
self.user = user
self.session = requests.Session()
self.__auth_token = None
self.__auth_timeout = None
def api_call_1(self):
return self.__request("GET", "endpoint_path_1", auth=True)
def api_call_2(self):
return self.__request("GET", "endpoint_path_2", auth=True)
def api_call_3(self):
return self.__request("GET", "endpoint_path_3", auth=True)
def __request(self, method, path, payload={}, auth=False, headers={}):
url = f"{self.base_url}{path}"
headers["Accept"] = "application/json"
if auth:
if not self.__is_authenticated():
self.__authenticate()
headers["Authorization"] = "Bearer " + self.__auth_token
print(f"[{self.user}] IN => {path} - {self.__auth_token}")
if method == "GET":
payload_str = f"?{urlencode(payload)}" if payload else ""
response = self.session.request(method, f"{url}{payload_str}", headers=headers)
else:
response = self.session.request(method, url, params=payload, headers=headers)
if auth:
print(f"[{self.user}] OUT => {path} - {response.request.headers['Authorization']}")
return response.json()
def __authenticate(self):
response = self.__request("GET", "authentication_endpoint", payload={
"api_key": self.api_key,
"api_secret": self.api_secret
})
self.__auth_token = response["result"]["access_token"]
self.__auth_timeout = time.time() + response["result"]["expires_in"]
def __is_authenticated(self):
if not self.__auth_timeout:
return False
if self.__auth_timeout < time.time():
return False
return True
class RequestsTester:
def __init__(self):
self.database = Database("host",
"user",
"password",
"database")
self.user_ids = [1, 2, 3]
self.threads = {}
def run(self):
for user_id in self.user_ids:
user = User(user_id, self.database)
thread_name = f"Thread-{user_id}"
self.threads[thread_name] = threading.Thread(target=self.get_data, args=[user])
self.threads[thread_name].start()
for thread_name in self.threads.keys():
self.threads[thread_name].join()
def get_data(self, user):
user.get_connector().api_call_1()
user.get_connector().api_call_2()
user.get_connector().api_call_3()
if __name__ == "__main__":
RequestsTester().run()
</code></pre>
<p>Note 1 : I didn't include the <code>Database</code> class since it's not relevant for the context but every class method is mutex protected to avoid concurrent access.</p>
<p>Note 2 : I'm using python 3.9.2 and request 2.25.1</p>
<hr />
<p>Before making the call I print the access_token and after the call I print the access_token from the response's request headers</p>
<p>The output generally looks like this:</p>
<pre><code>[User-1] IN => /private/endpoint_path_1 - 1673482029231.1EPZ7Ya-
[User-3] IN => /private/endpoint_path_1 - 1673482029265.1Cdx06z2
[User-2] IN => /private/endpoint_path_1 - 1673482029284.1JrX_wyQ
[User-3] OUT => /private/endpoint_path_1 - Bearer 1673482029265.1Cdx06z2
[User-1] OUT => /private/endpoint_path_1 - Bearer 1673482029231.1EPZ7Ya-
[User-2] OUT => /private/endpoint_path_1 - Bearer 1673482029284.1JrX_wyQ
</code></pre>
<p>But on some rare occasion it looks like this</p>
<pre><code>[User-1] IN => /private/endpoint_path_1 - 1673482029231.1EPZ7Ya-
[User-3] IN => /private/endpoint_path_1 - 1673482029265.1Cdx06z2
[User-2] IN => /private/endpoint_path_1 - 1673482029284.1JrX_wyQ
[User-3] OUT => /private/endpoint_path_1 - Bearer 1673482029231.1EPZ7Ya-
[User-1] OUT => /private/endpoint_path_1 - Bearer 1673482029231.1EPZ7Ya-
[User-2] OUT => /private/endpoint_path_1 - Bearer 1673482029284.1JrX_wyQ
</code></pre>
<p>The output access token is not the same than the input one and it's the token of another user that is used.</p>
<hr />
<p>This minimal example is just to understand how the script works but in real condition I have way more than 3 users and I'm not just making API calls but also processing data and storing some things into database from <code>get_data</code> function.</p>
<p>Every time this error case happens, the input token is always the good one but the output token is always a token from another user, so the issue seems to come from <code>requests</code> lib.</p>
<p>If I use a loop instead of launching threads, the error never occurs, so it seems to come from the multi-threading context.</p>
<p>From what I saw <code>requests</code> lib and <code>Session</code> class are supposed to be thread-safe so I don't understand where this error can come from.</p>
<p>I'm not experimented with python multi-threading so I may be doing something wrong but I can't find what.</p>
<p>Does anybody already had such an issue with <code>requests</code> lib miwing headers in a multi-threaded context ?</p>
|
<python><multithreading><python-requests>
|
2023-01-12 00:22:36
| 1
| 924
|
Arkaik
|
75,090,112
| 8,569,490
|
Unable to create a conda environment based on yaml file due dependencies issue (protobuf issue)
|
<p>hope you're doing well.
Wanted to know if you could help me out with the following issue.</p>
<p>I am trying to recreate a conda environment (old) in my local macbook M2 with some issues.</p>
<p>Here is the <code>environment.yaml</code> for the mac.</p>
<pre><code>name: airflow36
channels:
- anaconda
- conda-forge
- defaults
dependencies:
- blas=1.0=mkl
- intel-openmp=2019.1=144
- libgfortran=3.0.1=h93005f0_2
- mkl=2019.1=144
- numpy=1.15.4=py36hacdab7b_0
- numpy-base=1.15.4=py36h6575580_0
- python-daemon=2.1.2=py36_0
- airflow=1.10.1=py36_0
- alembic=0.8.10=py36_1
- appnope=0.1.0=py36_1000
- apscheduler=3.4.0=py36_0
- asn1crypto=0.24.0=py36_1003
- attrs=18.2.0=py_0
- awscli=1.16.98=py36_0
- babel=2.6.0=py_1
- backcall=0.1.0=py_0
- bleach=2.1.2=py_0
- blinker=1.4=py_1
- boto=2.49.0=py_0
- boto3=1.9.88=py_0
- botocore=1.12.88=py_0
- ca-certificates=2020.6.24=0
- certifi=2020.6.20=py36_0
- cffi=1.11.5=py36h342bebf_1001
- chardet=3.0.4=py36_1003
- click=6.7=py_1
- colorama=0.3.9=py_1
- configparser=3.5.0=py36_1001
- cookies=2.2.1=py_0
- coverage=4.4.2=py36_0
- croniter=0.3.27=py_0
- cryptography=2.3.1=py36hdbc3d79_1000
- decorator=4.3.2=py_0
- defusedxml=0.5.0=py_1
- dicttoxml=1.7.4=py36_0
- dill=0.2.9=py36_0
- docutils=0.14=py36_1001
- entrypoints=0.3=py36_1000
- flake8=3.5.0=py36_0
- flask=0.12.4=py_0
- flask-admin=1.4.1=py36_0
- flask-appbuilder=1.11.1=py_1
- flask-babel=0.11.1=py_0
- flask-cache=0.13.1=py_1000
- flask-caching=1.3.3=py_0
- flask-login=0.2.11=py36_0
- flask-openid=1.2.5=py36_1003
- flask-sqlalchemy=2.1=py36_1
- flask-swagger=0.2.13=py36_0
- flask-wtf=0.14.2=py_1
- future=0.16.0=py36_1002
- gitdb2=2.0.5=py_0
- gitpython=2.1.11=py_0
- gunicorn=19.9.0=py36_1000
- html5lib=1.0.1=py_0
- icu=58.2=h0a44026_1000
- idna=2.8=py36_1000
- ipykernel=5.1.0=py36h24bf2e0_1002
- ipython=7.2.0=py36h24bf2e0_1000
- ipython_genutils=0.2.0=py_1
- iso8601=0.1.12=py_1
- itsdangerous=1.1.0=py_0
- jedi=0.13.2=py36_1000
- jinja2=2.8=py36_1
- jira=2.0.0=py_0
- jmespath=0.9.3=py_1
- jsonschema=3.0.0a3=py36_1000
- jupyter_client=5.2.4=py36_0
- jupyter_core=4.4.0=py_0
- libcxx=4.0.1=hcfea43d_1
- libcxxabi=4.0.1=hcfea43d_1
- libffi=3.2.1=h0a44026_1005
- libiconv=1.15=h1de35cc_1004
- libpq=9.6.3=0
- libsodium=1.0.16=h1de35cc_1001
- libxml2=2.9.8=hf14e9c8_1005
- libxslt=1.1.32=h33a18ac_1002
- llvm-meta=7.0.0=0
- lockfile=0.12.2=py_1
- lxml=3.8.0=py36_0
- mako=1.0.7=py_1
- markdown=2.6.11=py_0
- markupsafe=1.0=py36_0
- mccabe=0.6.1=py_1
- mistune=0.8.4=py36h1de35cc_1000
- mkl_fft=1.0.10=py36h470a237_1
- mkl_random=1.0.2=py36h1702cab_2
- mock=2.0.0=py36_0
- monotonic=1.5=py_0
- moto=1.1.1=py36_0
- mysql-connector-c=6.1.11=he2f1009_1
- mysqlclient=1.3.13=py36_1000
- nbconvert=5.4.1=py_2
- nbformat=4.4.0=py_1
- ncurses=5.9=10
- notebook=5.5.0=py36_0
- numexpr=2.6.4=py36_0
- oauthlib=2.1.0=py_0
- openssl=1.0.2u=h1de35cc_0
- pandas=0.24.0=py36h0a44026_0
- pandoc=2.6=0
- pandocfilters=1.4.2=py_1
- parso=0.3.3=py_0
- pbr=5.1.2=py_0
- pendulum=1.4.4=py36_0
- pep8=1.7.0=py36_0
- pexpect=4.6.0=py36_1000
- pickleshare=0.7.5=py36_1000
- pip=9.0.1=py36_1
- pluggy=0.6.0=py_0
- prompt_toolkit=2.0.8=py_0
- psutil=4.4.2=py36_0
- psycopg2=2.7.5=py36hdffb7b8_2
- ptyprocess=0.6.0=py36_1000
- py=1.7.0=py_0
- pyaml=18.11.0=py_0
- pyasn1=0.4.4=py_1
- pycodestyle=2.3.1=py_1
- pycparser=2.19=py_0
- pyflakes=1.6.0=py36_0
- pygments=2.3.1=py_0
- pyjwt=1.7.1=py_0
- pyopenssl=19.0.0=py36_0
- pyrsistent=0.14.9=py36h1de35cc_1000
- pysocks=1.6.8=py36_1002
- pytest=3.4.0=py36_0
- pytest-cov=2.5.1=py36_0
- pytest-mock=1.6.3=py36_1
- python=3.6.2=0
- python-dateutil=2.8.0=py_0
- python-editor=1.0.4=py_0
- python-nvd3=0.15.0=py_1
- python-slugify=1.2.6=py_0
- python3-openid=3.1.0=py36_1001
- pytz=2018.9=py_0
- pytzdata=2018.9=py_0
- pyyaml=3.12=py36_1
- pyzmq=17.1.2=py36h111632d_1001
- readline=6.2=0
- requests=2.21.0=py36_1000
- requests-oauthlib=1.0.0=py_1
- requests-toolbelt=0.9.1=py_0
- rsa=3.4.2=py_1
- s3transfer=0.1.13=py36_1001
- send2trash=1.5.0=py_0
- setproctitle=1.1.10=py36h1de35cc_1001
- setuptools=38.4.0=py36_0
- simple-salesforce=0.74.2=py36_0
- six=1.12.0=py36_1000
- smmap2=2.0.5=py_0
- sqlalchemy=1.1.18=py36h470a237_0
- sqlite=3.13.0=1
- tabulate=0.7.7=py36_0
- tenacity=4.8.0=py36_0
- terminado=0.8.1=py36_1001
- testpath=0.4.2=py36_1000
- thrift=0.11.0=py36h0a44026_1001
- tk=8.5.19=2
- tornado=5.1.1=py36h1de35cc_1000
- traitlets=4.3.2=py36_1000
- tzlocal=1.5.1=py_0
- unicodecsv=0.14.1=py_1
- unidecode=1.0.23=py_0
- urllib3=1.24.1=py36_1000
- wcwidth=0.1.7=py_1
- webencodings=0.5.1=py_1
- werkzeug=0.14.1=py_0
- wheel=0.30.0=py36_2
- wtforms=2.1=py36_0
- xmltodict=0.11.0=py_1
- xz=5.2.4=h1de35cc_1001
- yaml=0.1.7=h1de35cc_1001
- zeromq=4.2.5=h0a44026_1006
- zlib=1.2.11=0
- zope.deprecation=4.4.0=py_0
- pip:
- apache-airflow==1.10.1
- datadiff==1.1.6
- funcsigs==1.0.0
- ordereddict==1.1
- pytest-pythonpath==0.7
- python-logstash==0.4.5
- google-api-core==1.14.2
- google-api-python-client==1.7.11
- oauth2client==4.1.2
- cached-property==1.5.1
</code></pre>
<p>It seems it is causing a conflict when installing some dependencies because I got the following output when running <code>conda env create --force -f "$ENVIRONMENT_FILE"</code></p>
<pre><code>Pip subprocess error:
protobuf requires Python '>=3.7' but the running Python is 3.6.2
You are using pip version 9.0.1, however version 22.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
failed
CondaEnvException: Pip failed
</code></pre>
<p>It is weird because I'm not installing the <code>protobuf</code> library but it looks it is used in one of the libraries I am using.</p>
<p>Here is the conda info:</p>
<pre><code> active environment : airflow36
active env location : /Users/username/opt/anaconda3/envs/airflow36
shell level : 1
user config file : /Users/username/.condarc
populated config files : /Users/username/.condarc
conda version : 22.11.1
conda-build version : 3.22.0
python version : 3.9.13.final.0
virtual packages : __archspec=1=x86_64
__osx=10.16=0
__unix=0=0
base environment : /Users/username/opt/anaconda3 (writable)
conda av data dir : /Users/username/opt/anaconda3/etc/conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/osx-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/osx-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/osx-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /Users/username/opt/anaconda3/pkgs
/Users/username/.conda/pkgs
envs directories : /Users/username/opt/anaconda3/envs
/Users/username/.conda/envs
platform : osx-64
user-agent : conda/22.11.1 requests/2.28.1 CPython/3.9.13 Darwin/21.6.0 OSX/10.16
UID:GID : 501:20
netrc file : None
offline mode : False
# conda environments:
#
base /Users/username/opt/anaconda3
airflow36 * /Users/username/opt/anaconda3/envs/airflow36
/opt/homebrew/Caskroom/miniconda/base
</code></pre>
<p>Would someone faced the same issue?</p>
<p>Thanks</p>
|
<python><anaconda><conda><python-3.6><miniconda>
|
2023-01-11 23:35:28
| 0
| 395
|
nariver1
|
75,090,088
| 12,011,020
|
Kedro (Python) DeprecationWarning: `np.bool8`
|
<p>When I try to create a new kedro project or run an existing one, I get the following deprecation warning (see also screenshot below). As far as I understand the warning is neglebile, however, as I am trying to setup a clean project, I would like to resolve this warning.
From the warning I get that it stems from the ploltly package which apparently uses the old <code>np.bool8</code> over the new <code>np.bool_</code></p>
<pre><code>WARNING D:\Code\Python\kedro-tutorial\.venv\lib\site-packages\plotly\express\imshow_utils.py:24: warnings.py:109 DeprecationWarning: `np.bool8` is a deprecated alias for `np.bool_`. (Deprecated NumPy 1.24)
np.bool8: (False, True),
</code></pre>
<p>Thus I tried to upgrade plotly, but it seems like it is already on the newest version</p>
<pre><code>pip install --upgrade plotly
Requirement already satisfied: plotly in d:\code\python\kedro-tutorial\.venv\lib\site-packages (5.11.0)
Requirement already satisfied: tenacity>=6.2.0 in d:\code\python\kedro-tutorial\.venv\lib\site-packages (from plotly) (8.1.0)
</code></pre>
<p><a href="https://i.sstatic.net/4ewZ8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4ewZ8.png" alt="enter image description here" /></a></p>
<p>Is there anyway to resolve this warning, despite not using the plotly package at all?</p>
|
<python><plotly><dependencies><kedro>
|
2023-01-11 23:31:21
| 1
| 491
|
SysRIP
|
75,090,084
| 11,462,274
|
Using find_all in BeautifulSoup when the filter is based on two distinct elements
|
<p>Currently I do it this way to pass only when there is a <code>tf-match-analyst-verdict</code> element inside the <code>div</code> which in turn should contain a <code>class</code> called <code>match-header</code>:</p>
<pre class="lang-python prettyprint-override"><code>matches = soup.find_all('div', attrs={"class": "match-header"})
for match in matches:
if (match.find('tf-match-analyst-verdict')):
</code></pre>
<p>which method is correct to pass this need in the creation of the <code>matches</code> object to remove the need to use <code>if</code>?</p>
|
<python><beautifulsoup>
|
2023-01-11 23:30:47
| 1
| 2,222
|
Digital Farmer
|
75,090,010
| 19,009,577
|
Is there a way for pandas rolling.apply to enter a dataframe into the function
|
<p>I have code as below:</p>
<pre><code>def fn(x):
...
df.rolling(length).apply(lambda x: fn(x))
</code></pre>
<p>I want the function to take a subset of the dataframe as input. Is this possible?</p>
|
<python><pandas><rolling-computation>
|
2023-01-11 23:17:41
| 2
| 397
|
TheRavenSpectre
|
75,089,903
| 3,713,236
|
`bootstrap_distribution` attribute in scipy.stats.bootstrap() gone?
|
<p>Here is the documentation of <code>scipy.stats.bootstrap()</code>, showing that <code>bootstrap_distribution</code> is an attribute that should be included in the return object:</p>
<p><a href="https://i.sstatic.net/XBv85.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XBv85.png" alt="enter image description here" /></a>
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bootstrap.html" rel="nofollow noreferrer">Link to docs here</a></p>
<p>Here is my code that I'm running with, <code>portfolio['X']</code> is just a simple column of floats in a pandas dataframe called <code>portfolio</code>.</p>
<pre><code>from scipy.stats import bootstrap
data = portfolio['X']
data = (data,)
res = bootstrap(data, np.std, n_resamples = 100, confidence_level = 0.95)
print(res.__dir__())
print( res.standard_error)
print( res.confidence_interval)
fig, ax = plt.subplots()
ax.hist(res.bootstrap_distribution, bins = 25)
ax.set_title( 'Bootstrap Distribution')
ax.set_xlabel( 'statistic value')
ax.set_ylabel( 'frequency')
plt.show()
</code></pre>
<p>Here is my output:</p>
<pre><code>['confidence_interval', 'standard_error', '__annotations__', '__module__', '__dict__', '__weakref__', '__doc__', '__dataclass_params__', '__dataclass_fields__', '__init__', '__repr__', '__eq__', '__hash__', '__str__', '__getattribute__', '__setattr__', '__delattr__', '__lt__', '__le__', '__ne__', '__gt__', '__ge__', '__new__', '__reduce_ex__', '__reduce__', '__subclasshook__', '__init_subclass__', '__format__', '__sizeof__', '__dir__', '__class__']
0.06917310850614278
ConfidenceInterval(low=0.9366397203338929, high=1.2104462911847762)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-53-728a3ec52bd0> in <module>
9
10 fig, ax = plt.subplots()
---> 11 ax.hist(res.bootstrap_distribution, bins = 25)
12 ax.set_title( 'Bootstrap Distribution')
13 ax.set_xlabel( 'statistic value')
AttributeError: 'BootstrapResult' object has no attribute 'bootstrap_distribution'
</code></pre>
|
<python><scipy><bootstrapping>
|
2023-01-11 23:01:01
| 1
| 9,075
|
Katsu
|
75,089,656
| 2,862,387
|
Get Request to Azure DevOps with Personal Access Token (PAT) using Python
|
<p>I'm trying to make a get request to Azure DevOps.</p>
<p>I have the URL and the Personal_Access_Token. The URL was created following these intructions <a href="https://learn.microsoft.com/en-us/rest/api/azure/devops/git/items/get?view=azure-devops-rest-6.1&tabs=HTTP#definitions" rel="noreferrer">https://learn.microsoft.com/en-us/rest/api/azure/devops/git/items/get?view=azure-devops-rest-6.1&tabs=HTTP#definitions</a> , and it is working fine in the browser. It is possible to see the information of the file that I'm targeting.</p>
<p>However, when I execute the request in python:</p>
<pre><code>import requests
headers = {
'Authorization': 'Bearer myPAT',
}
response = requests.get('exampleurl.com/content', headers=headers)
</code></pre>
<p>I'm getting the 203 response...</p>
<p>I have also try other options following this link <a href="https://stackoverflow.com/questions/19069701/python-requests-library-how-to-pass-authorization-header-with-single-token">Python requests library how to pass Authorization header with single token</a> without success. Including these headers:</p>
<pre><code>personal_access_token_encoded = base64.b64encode(personal_access_token.encode('utf-8')).decode('utf-8')
headers={'Authorization': 'Basic '+personal_access_token_encoded}
headers={'Authorization': 'Basic '+personal_access_token}
</code></pre>
<p>But in both cases still having the same response.</p>
<p>For sure I'm not considering something. What could be missing?</p>
|
<python><azure><azure-devops><python-requests><http-headers>
|
2023-01-11 22:26:15
| 3
| 936
|
d2907
|
75,089,614
| 1,974,918
|
Applying methods conditionally across columns in a pivot table
|
<p>Really enjoying chaining in polars. Almost feels like dplyr in R. Question: Is there a way to apply <code>fill_null</code> conditionally across columns in the table (i.e., replace with 0 for numerics and "0%" for str in the example below)? Like the old <code>mutate_if</code> in R-dplyr.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
scores = pl.DataFrame(
{
"zone": ["North", "North", "North", "South", "South", "East", "East", "East", "East"],
"funding": ["yes", "yes", "no", "no", "no", "no", "no", "yes", "yes"],
"score": [78, 39, 76, 56, 67, 89, 100, 55, 80],
}
)
</code></pre>
<pre class="lang-py prettyprint-override"><code>(
scores.group_by("zone", "funding").len()
.with_columns(
pl.col("len").over("funding"),
pl.format(
"{}%",
(pl.col("len") * 100 / pl.sum("len"))
.over("funding")
.round(2),
).alias("perc"),
)
.pivot("funding", index="zone")
.with_columns(
pl.col("len_yes").fill_null(0),
pl.col("perc_yes").fill_null("0%")
)
)
</code></pre>
<pre><code>shape: (3, 5)
βββββββββ¬βββββββββ¬ββββββββββ¬ββββββββββ¬βββββββββββ
β zone β len_no β len_yes β perc_no β perc_yes β
β --- β --- β --- β --- β --- β
β str β u32 β u32 β str β str β
βββββββββͺβββββββββͺββββββββββͺββββββββββͺβββββββββββ‘
β South β 2 β 0 β 40.0% β 0% β
β North β 1 β 2 β 20.0% β 50.0% β
β East β 2 β 2 β 40.0% β 50.0% β
βββββββββ΄βββββββββ΄ββββββββββ΄ββββββββββ΄βββββββββββ
</code></pre>
<p>Can I perform the <code>fill_null()</code> by dtype without having to name each column individually?</p>
<p><strong>EDIT:</strong> Based on suggestions the below works if you want to explicitly column types to apply the <code>is_null</code> method to columns of different types</p>
<pre class="lang-py prettyprint-override"><code>(
scores.group_by("zone", "funding").len()
.with_columns(
pl.col("len").over("funding"),
pl.format(
"{}%",
(pl.col("len") * 100 / pl.sum("len"))
.over("funding")
.round(2),
).alias("perc"),
)
.pivot("funding", index="zone")
.with_columns(
pl.exclude(pl.String).fill_null(0),
pl.col(pl.String).fill_null("0%")
)
)
</code></pre>
|
<python><python-polars>
|
2023-01-11 22:19:59
| 0
| 5,289
|
Vincent
|
75,089,569
| 4,281,353
|
pandas read_json dtype=pd.CategoricalDtype does not work but dtype='category' does
|
<p>Is this a known issue that specifying <a href="https://pandas.pydata.org/docs/reference/api/pandas.CategoricalDtype.html" rel="nofollow noreferrer">CategoricalDtype</a> dtype at read_json does not convert the column dtype, or is there a mistake in the code?</p>
<pre><code>import pandas as pd
df = pd.read_json(
"./data/data.json",
dtype={
#"facility": pd.CategoricalDtype, # does not work
"facility": 'category', # does work
"supplier": pd.CategoricalDtype, # does not work
}
)
df.info()
-----
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 facility 232 non-null category
3 supplier 111 non-null object
</code></pre>
<h1>Environment</h1>
<pre><code>MacOS 13.0.1 (22A400)
$ python --version
Python 3.9.13
$ pip list | grep pandas
pandas 1.5.2
</code></pre>
|
<python><pandas><dataframe><dtype>
|
2023-01-11 22:14:13
| 1
| 22,964
|
mon
|
75,089,555
| 7,720,535
|
Faster way to apply lambda function to Pandas groupby
|
<p>I have Pandas dataframe which is a (large number of times) repetition of a smaller dataframe, but only one column is not repeated. I want to apply a function that works on this non-repeated column and one of the repeating columns. But the whole procedure is slow and I need an alternative way that works faster. Here is a minimal example:</p>
<pre><code>import pandas as pd
import numpy as np
import random
repeating_times = 4
df = pd.DataFrame({"col1": [1, 2, 3, 4, 5]*repeating_times,
"col2": ['a', 'b', 'c', 'd', 'e']*repeating_times,
"true": ['P', 'P', 'N', 'P', 'N']*repeating_times,
"pred": random.choices(["P", "N"], k=5*repeating_times)})
grps = df.groupby(by=["col1", "col2"])
true_pos = grps.apply(lambda gr: np.sum(gr[gr['pred'] == 'P']["true"] == 'P'))
true_pos
</code></pre>
<p><code>true_pos</code> measures the true positive samples (where prediction and true values are of positive class) for all groups of (col1, col2).</p>
<p><strong>Update:</strong>
A better way of doing that which makes it much faster is to use <code>agg</code> instead of apply the function.</p>
<pre><code>repeating_times = 4
df = pd.DataFrame({"col1": [1, 2, 3, 4, 5]*repeating_times,
"col2": ['a', 'b', 'c', 'd', 'e']*repeating_times,
"true": ['P', 'P', 'N', 'P', 'N']*repeating_times,
"pred": random.choices(["P", "N"], k=5*repeating_times)})
df["true_pos"] = (df["true"]=="P") & (df["pred"]=="P")
true_pos = df.groupby(["col1", "col2"]).agg({"true_pos": "sum"})
</code></pre>
|
<python><pandas><group-by>
|
2023-01-11 22:12:33
| 1
| 485
|
Esi
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.