| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
74,974,103
| 1,788,656
|
Reading netcdf time with unit of years
|
<p>All,
I am trying to read the time coordinate from Berkeley Earth in the following temperature file. The time spans from 1850 to 2022, and the time unit is the fractional year A.D. (1850.041667, 1850.125, 1850.208333, ..., 2022.708333, 2022.791667, 2022.875).</p>
<p><code>pandas.to_datetime</code> cannot correctly interpret the time array, because I think I need to state the origin of the time coordinate and the unit. I tried
to use <code>pd.to_datetime(dti,unit='D',origin='julian')</code>, but it did not work (out of bounds). Also, I think I have to use a unit of years instead of days.</p>
<p>The file is located here <a href="http://berkeleyearth.lbl.gov/auto/Global/Gridded/Land_and_Ocean_LatLong1.nc" rel="nofollow noreferrer">http://berkeleyearth.lbl.gov/auto/Global/Gridded/Land_and_Ocean_LatLong1.nc</a></p>
<pre><code>import xarray as xr
import numpy as np
import pandas as pd
# read data into memory
flname="Land_and_Ocean_LatLong1.nc"
ds = xr.open_dataset("./"+flname)
dti = ds['time']
pd.to_datetime(dti,unit='D',origin='julian')
np.diff(dti)
</code></pre>
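<p>For reference, a minimal sketch of one way to decode fractional calendar years by hand (assuming the values really are years A.D., as listed above):</p>
<pre><code>import numpy as np
import pandas as pd

def fractional_year_to_datetime(frac_years):
    # split 1850.125 into the year 1850 and the fraction 0.125
    frac_years = np.asarray(frac_years, dtype=float)
    years = np.floor(frac_years).astype(int)
    remainder = frac_years - years
    # scale the fraction by the actual length of each year (handles leap years)
    start = pd.to_datetime(years.astype(str), format="%Y")
    year_length = pd.to_datetime((years + 1).astype(str), format="%Y") - start
    return start + remainder * year_length
</code></pre>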
|
<python><python-3.x><pandas><python-2.7><datetime>
|
2023-01-01 10:24:55
| 1
| 725
|
Kernel
|
74,974,056
| 15,342,452
|
Is there a faster way to compress files/make an archive in Python?
|
<p>I'm making an archive of a folder that has around ~1 GB of files in it. It takes 1 or 2 minutes, but I want it to be faster.</p>
<p>I am making a UI app in Python that allows you to ZIP files (it's a project, I know stuff like 7Zip exists lol), and I am using a folder with ~1 GB of files in it, like I said. The program won't accept files over 5 GB because of the speed.</p>
<p>Here is how I compress/archive the folder:
<code>shutil.make_archive('files_archive', 'zip', 'folder_to_archive')</code></p>
<p>I know I can change the format but I'm not sure which one is fast and can compress well.</p>
<p>So, my end goal is to compress files fast, whilst also reducing size.</p>
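<p>As one possible direction, a minimal sketch: <code>zipfile</code> exposes the speed/size trade-off directly through the compression method and <code>compresslevel</code> (Python 3.7+), which <code>shutil.make_archive</code> does not let you tune:</p>
<pre><code>import os
import zipfile

def make_zip_fast(src_dir, out_path):
    # compresslevel=1 is the fastest deflate setting, 9 the smallest output;
    # ZIP_STORED skips compression entirely for maximum speed
    with zipfile.ZipFile(out_path, "w",
                         compression=zipfile.ZIP_DEFLATED,
                         compresslevel=1) as zf:
        for root, _, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, src_dir))
</code></pre>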
|
<python><python-3.x><shutil>
|
2023-01-01 10:13:01
| 1
| 496
|
WhatTheClown
|
74,973,982
| 7,849,631
|
Selenium - How to find an element by part of the placeholder value?
|
<p>Is there any way to find an element by a part of the placeholder value? And ideally, case insensitive.</p>
<p><code><input id="id-9" placeholder="some TEXT"></code></p>
<p>Searching with the following call doesn't work:</p>
<p><code>browser.find_element(by=By.XPATH, value="//input[@placeholder='some te']")</code></p>
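<p>A sketch of a workaround: XPath 1.0 has no case-insensitive matching, but <code>contains()</code> combined with <code>translate()</code> emulates it:</p>
<pre><code>from selenium.webdriver.common.by import By

# translate() lower-cases the placeholder before the substring check
xpath = ("//input[contains(translate(@placeholder, "
         "'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), "
         "'some te')]")
element = browser.find_element(by=By.XPATH, value=xpath)
</code></pre>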
|
<python><selenium><selenium-webdriver><xpath><placeholder>
|
2023-01-01 09:54:10
| 2
| 487
|
Mike
|
74,973,975
| 6,273,451
|
Python: calling a private method within static method
|
<p>What is the correct way and best practice to call a private method from within a static method in <strong><code>python 3.x</code></strong>? See example below:</p>
<pre><code>class Class_A:
    def __init__(self, some_attribute):
        self.some_attr = some_attribute

    def __private_method(arg_a):
        print(arg_a)

    @staticmethod
    def some_static_method(arg_a):
        __private_method(arg_a)  # "__private_method" is not defined
</code></pre>
<p>Now, if I instantiate a <code>Class_A</code> object and call the static method:</p>
<pre><code>my_instance = Class_A("John Doe")
my_instance.some_static_method("Not his real name")
</code></pre>
<p>I get a NameError: <code>NameError: name '__private_method' is not defined</code></p>
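<p>For context, a minimal sketch of the usual fix (name mangling means the method must be reached through the class, and marking it <code>@staticmethod</code> keeps the call styles consistent):</p>
<pre><code>class Class_A:
    def __init__(self, some_attribute):
        self.some_attr = some_attribute

    @staticmethod
    def __private_method(arg_a):
        print(arg_a)

    @staticmethod
    def some_static_method(arg_a):
        # inside the class body this resolves to _Class_A__private_method
        Class_A.__private_method(arg_a)
</code></pre>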
|
<python><python-3.x><oop><static-methods><private-methods>
|
2023-01-01 09:51:08
| 0
| 334
|
Mehdi Rezzag Hebla
|
74,973,663
| 6,320,774
|
Combine multiple dicts with different keys into a dataframe
|
<p>I am trying to create a dataframe from a dict with different key names.</p>
<p>Here is a MWE:</p>
<pre><code># loads price data
from yahoofinancials import YahooFinancials
yahoo_tickers = ['SMT.L', 'MSFT', 'NIO']
yahoo_financials = YahooFinancials(yahoo_tickers)
data = yahoo_financials.get_historical_price_data(start_date='1970-01-30',
end_date='2022-12-30',
time_interval='daily')
</code></pre>
<p>This returns a dictionary whose three keys are the Yahoo tickers, with information about each stock stored as another dictionary within the dictionary.</p>
<p>I tried to clean up this <code>dict</code> via loops, but it ends up being quite a slow process, especially with many more stocks.</p>
<p>Can someone suggest a way to quickly convert the <code>data</code> <code>dict</code> into a <code>dataframe</code> that looks like this:</p>
<p>My expected result:</p>
<pre><code>Out[11]:
high low open ... adjclose yahoo_ticker instrumentType
Date ...
1968-12-31 4.852 4.852 4.852 ... 1.762543 SMT.L EQUITY
1969-01-01 4.852 4.852 4.852 ... 1.762543 SMT.L EQUITY
1969-01-02 4.852 4.852 4.852 ... 1.762543 SMT.L EQUITY
1969-01-03 4.852 4.852 4.852 ... 1.762543 SMT.L EQUITY
1969-01-06 4.852 4.852 4.852 ... 1.762543 SMT.L EQUITY
... ... ... ... ... ... ...
2022-12-23 11.220 10.690 11.220 ... 10.970000 NIO EQUITY
2022-12-27 10.610 9.970 10.530 ... 10.060000 NIO EQUITY
2022-12-28 10.250 9.610 10.010 ... 9.800000 NIO EQUITY
2022-12-29 10.270 9.770 9.920 ... 9.990000 NIO EQUITY
2022-12-30 9.980 9.520 9.830 ... 9.750000 NIO EQUITY
</code></pre>
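<p>One possible sketch, assuming each <code>data[ticker]</code> entry carries a <code>'prices'</code> list of per-day records (as yahoofinancials returns) plus an <code>'instrumentType'</code> field:</p>
<pre><code>import pandas as pd

frames = []
for ticker, info in data.items():
    df_t = pd.DataFrame(info['prices'])          # one row per trading day
    df_t['yahoo_ticker'] = ticker
    df_t['instrumentType'] = info.get('instrumentType')
    frames.append(df_t)

result = pd.concat(frames).set_index('formatted_date')
</code></pre>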
|
<python><pandas><dictionary>
|
2023-01-01 08:15:15
| 2
| 1,581
|
msh855
|
74,973,455
| 13,000,695
|
AWS CloudWatch Synthetics - how to install external dependencies for running canaries
|
<p>I have a Python Selenium script that needs a third-party library (the 'redis' package, for instance).
I would like to run this script as an AWS CloudWatch Synthetics canary, but it fails because it does not have this library installed.</p>
<p>So far I have not found in the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html" rel="nofollow noreferrer">documentation</a>, nor in the AWS console, how to <code>pip install</code> dependencies before executing the canary. Is it even possible?</p>
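<p>For what it's worth, a hedged sketch of the packaging route: Synthetics Python canaries are uploaded as a zip whose <code>python/</code> folder holds the handler, and libraries placed in that same folder appear to be importable, so dependencies can be pip-installed into it before zipping (folder names here are illustrative):</p>
<pre><code># run beforehand: pip install redis -t canary_pkg/python
import shutil

# canary_pkg/python/ contains the canary script plus the installed packages
shutil.make_archive("canary_bundle", "zip", "canary_pkg")
</code></pre>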
|
<python><amazon-web-services><selenium><amazon-cloudwatch>
|
2023-01-01 07:04:19
| 1
| 4,032
|
jossefaz
|
74,973,437
| 4,215,840
|
Database: more efficient way of storing the reference to the 1st table in every row of the 2nd table
|
<p>I have 2 tables on my DB</p>
<p><a href="https://i.sstatic.net/d14G3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d14G3.png" alt="enter image description here" /></a></p>
<p>I want to avoid storing the <strong>"account ref"</strong> for every line in the dates table. Is there a way to keep fetching fast and reduce memory?</p>
<p>I use PostgreSQL as the DB, with Django and Python.</p>
|
<python><database><postgresql>
|
2023-01-01 06:55:59
| 0
| 451
|
Sion C
|
74,973,427
| 7,347,925
|
How to concatenate csv files and continue the last row value?
|
<p>I have multiple csv files which have <code>case</code> column which begins from 0.</p>
<p>I want to concatenate them by setting the last <code>case</code> value +1 as the beginning value of the next one.</p>
<p>I know I can create a for loop to read each csv file and add the last value to the <code>case</code> column in each loop.</p>
<pre><code>import pandas as pd

# List of file names
file_list = ['file1.csv', 'file2.csv', 'file3.csv']

# Read the first file and store it in a DataFrame
df = pd.read_csv(file_list[0])
# Get the last value of the column that you want to continue
last_value = df.iloc[-1]['column_name']

# Loop through the remaining files
for file in file_list[1:]:
    # Read the file into a DataFrame
    df_temp = pd.read_csv(file)
    # Continue the last value from the previous file in the current file
    df_temp['column_name'] += last_value + 1
    last_value = df_temp.iloc[-1]['column_name']
    # Concatenate the current file with the main DataFrame
    df = pd.concat([df, df_temp])
</code></pre>
<p>Is it possible to directly use something like <code>pd.concat(map(pd.read_csv, file_list))</code>?</p>
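<p>A short sketch of the loop idea condensed, assuming every file's case column starts at 0 (the column name below is a placeholder, as in the snippet above):</p>
<pre><code>import pandas as pd

dfs = [pd.read_csv(f) for f in file_list]
offset = 0
for d in dfs:
    d['column_name'] += offset                 # shift by the running total
    offset = d['column_name'].iloc[-1] + 1     # next file starts one past the last value
combined = pd.concat(dfs, ignore_index=True)
</code></pre>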
|
<python><pandas>
|
2023-01-01 06:52:03
| 2
| 1,039
|
zxdawn
|
74,973,347
| 13,097,857
|
I can't run a function in a loop a certain number of times
|
<p>OK, so I have this function for a game that always returns a random value.</p>
<p>I want to run this function 10,000 times to receive 10,000 different values. It works when I run it individually, but when I run it in a loop, it shows me this error:</p>
<p>Loop:</p>
<pre><code>premio_acumulado = []
for i in range(10000):
    premio_acumulado.append(juego()['premio'])
</code></pre>
<p>And here's the error it presents:</p>
<pre><code>KeyError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_22100/4261353443.py in <module>
3 for i in range(10000):
4
----> 5 premio_acumulado.append(juego()['premio'])
6
7
~\AppData\Local\Temp/ipykernel_22100/2828093291.py in juego()
39
40
---> 41 premio = premios[movimientos[-1]]
42
43 movimiento_premio = {'premio': premio, 'secuencia':secuencia, 'movimientos':movimientos }
KeyError: 12
</code></pre>
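<p>As a hedged aside, the traceback points at the lookup inside <code>juego()</code> rather than the loop itself; a sketch of a defensive version of that line (assuming <code>premios</code> is a dict of expected positions):</p>
<pre><code># dict.get avoids the KeyError when movimientos[-1] lands outside premios
premio = premios.get(movimientos[-1], 0)
</code></pre>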
|
<python><python-3.x><function><loops><error-handling>
|
2023-01-01 06:19:41
| 2
| 302
|
Sebastian Nin
|
74,973,249
| 3,667,693
|
Pandas merge not working as expected in streamlit
|
<h3>Summary</h3>
<p>When using the pandas <em>merge</em> function within a callback function, the dataframe is not updated correctly. However, the pandas <em>drop</em> function works as expected.</p>
<p>Note that although I have turned on st.cache, the same behavior is seen when removing the cache function as well.</p>
<h3>Steps to reproduce</h3>
<p><strong>Code snippet:</strong></p>
<pre><code>import streamlit as st
import pandas as pd

@st.cache(allow_output_mutation=True)
def read_df():
    df = pd.DataFrame({
        'col1': [1, 2],
        'col2': ['A', 'B']
    })
    return df

df = read_df()

def do_something():
    global df
    df_new = pd.DataFrame({
        'col1': [1, 2],
        'col3': ["X", "Y"]
    })
    df.drop(['col2'], axis=1, inplace=True)
    df = df.merge(df_new, on="col1")

st.button("Do Something", on_click=do_something, args=())

download_csv = df.to_csv().encode('utf-8')
st.download_button('Download', data=download_csv, file_name='download_csv.csv', mime='text/csv')
</code></pre>
<p>Steps to reproduce behavior</p>
<ul>
<li>click on "Do Something" button</li>
<li>click on "Download" button</li>
</ul>
<p><strong>Expected behavior:</strong></p>
<p>I would expect the downloaded csv to contain</p>
<pre><code> col1 col3
0 1 X
1 2 Y
</code></pre>
<p><strong>Actual behavior:</strong></p>
<p>However, I get the following output instead</p>
<pre><code> col1
0 1
1 2
</code></pre>
<h3>Debug info</h3>
<ul>
<li>Streamlit version: 1.16.0</li>
<li>Python version: 3.8.15</li>
<li>Using Conda: Yes</li>
<li>OS version: Windows 11</li>
<li>Browser version: Edge v108.0.1462.54</li>
</ul>
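<p>For illustration, a sketch of one workaround, assuming the cached object must be mutated in place for the change to survive a rerun (rebinding the global <code>df</code> leaves the cached frame untouched):</p>
<pre><code>def do_something():
    global df
    df_new = pd.DataFrame({'col1': [1, 2], 'col3': ["X", "Y"]})
    merged = df.merge(df_new, on="col1")
    df.drop(columns=['col2'], inplace=True)
    df['col3'] = merged['col3'].values   # write into the cached frame itself
</code></pre>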
|
<python><pandas><streamlit>
|
2023-01-01 05:34:47
| 1
| 405
|
John Jam
|
74,973,096
| 5,029,589
|
Insert JSON Array into Elasticsearch Using Python Bulk API
|
<p>I have a JSON array of 100 records that I want to insert into Elasticsearch. I tried it using the below code, but it gives me a JSON error exception, and I am not sure where I am going wrong. The code I am using to insert the records is:</p>
<pre><code>
from esService.esClient import ESCient
from elasticsearch.helpers import bulk

# data is the json array object which has around 100 records in it.
# I want to insert it into elasticsearch in such a way
# that each record is an entry into ES (I can remove the ID if required, I don't need the ID)
def insert_bulk_record(self, index, data, id):
    docs = []
    doc = {
        "_index": index,
        "_id": id,
        "_source": data
    }
    docs.append(doc)
    bulk(self.esconnect, docs)
</code></pre>
<p>When I do bulk insert using the above code I get the below exception ,</p>
<pre><code>elasticsearch.helpers.errors.BulkIndexError: ('1 document(s) failed to index.', [{'index': {'_index': 'data_record', '_type': '_doc', '_id': '7908745568_0.csv', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': 'failed to parse', 'caused_by': {'type': 'not_x_content_exception', 'reason': 'Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes'}}
</code></pre>
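<p>A minimal sketch of the per-record shape the helper expects (one action per document, so the array is expanded instead of being indexed as a single <code>_source</code>):</p>
<pre><code>from elasticsearch.helpers import bulk

def insert_bulk_records(es, index, data):
    # each element of the JSON array becomes its own document
    actions = ({"_index": index, "_source": record} for record in data)
    bulk(es, actions)
</code></pre>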
|
<python><json><elasticsearch>
|
2023-01-01 04:25:12
| 1
| 2,174
|
arpit joshi
|
74,973,032
| 3,099,733
|
Can I use a typing.Callable type to annotate a function automatically? How?
|
<p>Suppose we define a type for a callback like so:</p>
<pre class="lang-py prettyprint-override"><code>CallbackFn = Callable[[str, str, int, int], None]
</code></pre>
<p>When I want to implement a callback function, and have external tools type-check it against <code>CallbackFn</code>, I have to annotate the types explicitly:</p>
<pre class="lang-py prettyprint-override"><code>def my_callback(s1: str, s2: str, i1: int, i2: int) -> None:
    ...
</code></pre>
<p>Is there any way that I can use the type hint defined in <code>CallbackFn</code> to annotate a function - something like <code>def my_callback(s1,s2,i1,i2) as CallbackFn:</code> - to avoid the explicit annotation work?</p>
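<p>For comparison, a sketch of the closest working pattern: a <code>def</code> cannot be annotated with the alias directly, but assigning the function to a variable typed with it makes checkers verify the signature:</p>
<pre><code>from typing import Callable

CallbackFn = Callable[[str, str, int, int], None]

def _impl(s1: str, s2: str, i1: int, i2: int) -> None:
    ...

my_callback: CallbackFn = _impl   # type-checked against CallbackFn
</code></pre>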
|
<python><python-typing>
|
2023-01-01 03:49:24
| 2
| 1,959
|
link89
|
74,972,995
| 10,597,313
|
OpenCV - AWS Lambda - /lib64/libz.so.1: version `ZLIB_1.2.9' not found
|
<p>In Python, I am trying to run the opencv package in an AWS Lambda layer. I am using opencv-python-headless but keep getting this error.</p>
<pre><code>Response
{
"errorMessage": "Unable to import module 'lambda_function': /lib64/libz.so.1: version `ZLIB_1.2.9' not found (required by /opt/python/lib/python3.8/site-packages/cv2/../opencv_python_headless.libs/libpng16-186fce2e.so.16.37.0)",
"errorType": "Runtime.ImportModuleError",
"stackTrace": []
}
</code></pre>
<p>Have tried different versions of opencv to no avail. And different versions of python.</p>
|
<python><amazon-web-services><lambda><aws-lambda>
|
2023-01-01 03:31:14
| 4
| 389
|
John Welsh
|
74,972,850
| 11,070,463
|
jax.lax.select vs jax.numpy.where
|
<p>Was taking a look at the <a href="https://flax.readthedocs.io/en/latest/_modules/flax/linen/stochastic.html#Dropout" rel="nofollow noreferrer">dropout implementation in flax</a>:</p>
<pre class="lang-py prettyprint-override"><code>def __call__(self, inputs, deterministic: Optional[bool] = None):
  """Applies a random dropout mask to the input.

  Args:
    inputs: the inputs that should be randomly masked.
    deterministic: if false the inputs are scaled by `1 / (1 - rate)` and
      masked, whereas if true, no mask is applied and the inputs are returned
      as is.

  Returns:
    The masked inputs reweighted to preserve mean.
  """
  deterministic = merge_param(
      'deterministic', self.deterministic, deterministic)
  if (self.rate == 0.) or deterministic:
    return inputs
  # Prevent gradient NaNs in 1.0 edge-case.
  if self.rate == 1.0:
    return jnp.zeros_like(inputs)
  keep_prob = 1. - self.rate
  rng = self.make_rng(self.rng_collection)
  broadcast_shape = list(inputs.shape)
  for dim in self.broadcast_dims:
    broadcast_shape[dim] = 1
  mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape)
  mask = jnp.broadcast_to(mask, inputs.shape)
  return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))
</code></pre>
<p>Particularly, I'm interested in the last line, <code>lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))</code>. I'm wondering why <code>lax.select</code> is used here instead of:</p>
<pre><code>return jnp.where(mask, inputs / keep_prob, 0)
</code></pre>
<p>or even more simply:</p>
<pre><code>return mask * inputs / keep_prob
</code></pre>
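<p>A small sketch contrasting the two, for reference: <code>lax.select</code> is strict (the predicate and both branches must share a shape), while <code>jnp.where</code> broadcasts:</p>
<pre><code>import jax.numpy as jnp
from jax import lax

mask = jnp.array([True, False, True])
x = jnp.array([1.0, 2.0, 3.0])

a = lax.select(mask, x, jnp.zeros_like(x))   # shapes must match exactly
b = jnp.where(mask, x, 0.0)                  # the scalar 0.0 is broadcast
assert bool((a == b).all())
</code></pre>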
|
<python><numpy><machine-learning><deep-learning><jax>
|
2023-01-01 02:11:45
| 1
| 4,113
|
Jay Mody
|
74,972,685
| 17,148,496
|
Feature importance using gridsearchcv for logistic regression
|
<p>I've trained a logistic regression model like this:</p>
<pre><code>reg = LogisticRegression(random_state = 40)
cvreg = GridSearchCV(reg, param_grid={'C':[0.05,0.1,0.5],
'penalty':['none','l1','l2'],
'solver':['saga']},
cv = 5)
cvreg.fit(X_train, y_train)
</code></pre>
<p>Now, to show the features' importance I've tried this code, but I don't get the names of the coefficients in the plot:</p>
<pre><code>from matplotlib import pyplot
importance = cvreg.best_estimator_.coef_[0]
pyplot.bar([x for x in range(len(importance))], importance)
pyplot.show()
</code></pre>
<p><a href="https://i.sstatic.net/1BCWb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1BCWb.png" alt="enter image description here" /></a></p>
<p>Obviously, the plot isn't very informative. How do I add the names of the coefficients to the x-axis?</p>
<p>The importance of the coefficients is:</p>
<pre><code>cvreg.best_estimator_.coef_
array([[1.10303023e+00, 7.48816905e-01, 4.27705027e-04, 6.01404570e-01]])
</code></pre>
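<p>A minimal sketch, assuming <code>X_train</code> is a DataFrame whose columns carry the feature names:</p>
<pre><code>from matplotlib import pyplot

importance = cvreg.best_estimator_.coef_[0]
pyplot.bar(X_train.columns, importance)   # feature names on the x-axis
pyplot.xticks(rotation=45)
pyplot.show()
</code></pre>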
|
<python><matplotlib><scikit-learn><logistic-regression>
|
2023-01-01 00:48:19
| 1
| 375
|
Kev
|
74,972,602
| 6,635,590
|
SQLite select subquery return multiple rows as a list in column
|
<p>So I've redesigned my database after realizing it could be improved by splitting columns that were previously "lists" (a string of words separated by spaces) into different tables instead.</p>
<p>My issue is that I need to get the file names from one table whose <code>uid</code> references a <code>uid</code> in another table. I have mostly done this, except that I need to be able to get multiple rows from the original table that reference the second table.</p>
<p>If this didn't make sense, here is my code and the relevant structure:</p>
<pre><code>SELECT * FROM (
SELECT
a.uid, a.reward_title, a.reward_content,
a.reward_date as date_time, a.reward_pinned as pinned, (
SELECT c.file_name FROM files c WHERE c.reference_uid = a.uid AND c.reference_type = ?
) as images
FROM rewards a
INNER JOIN subscriptions b ON b.creator_uid = a.creator_uid AND b.tier_hierarchy = a.tier_hierarchy
WHERE b.user_uid = ?
)
</code></pre>
<pre><code>CREATE TABLE rewards (
uid TEXT UNIQUE PRIMARY KEY NOT NULL,
creator_uid TEXT NOT NULL,
tier_hierarchy INTEGER,
reward_title TEXT NOT NULL,
reward_content TEXT NOT NULL,
reward_date TEXT NOT NULL,
reward_tags TEXT,
reward_pinned INTEGER NOT NULL DEFAULT 0,
FOREIGN KEY (creator_uid) REFERENCES users (uid)
)
</code></pre>
<pre><code>CREATE TABLE files (
uid TEXT UNIQUE PRIMARY KEY NOT NULL,
reference_uid TEXT NOT NULL,
reference_type TEXT NOT NULL,
file_name TEXT NOT NULL,
file_type INTEGER NOT NULL DEFAULT 1,
thumbnail BOOLEAN DEFAULT NULL
)
</code></pre>
<p><code>files</code> table</p>
<p><a href="https://i.sstatic.net/rKOzv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rKOzv.png" alt="enter image description here" /></a></p>
<p><code>rewards</code> table</p>
<p><a href="https://i.sstatic.net/mzm06.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mzm06.png" alt="enter image description here" /></a></p>
<p>Currently this returns this</p>
<p><a href="https://i.sstatic.net/eG6q3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eG6q3.png" alt="enter image description here" /></a></p>
<p>As you can see in the second image, there are two rows that satisfy the conditions. I want to know how I can get both of those rows' <code>file_name</code> column into a list as the 6th column of the original <code>select</code>.</p>
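<p>One hedged sketch of the usual SQLite answer, using <code>GROUP_CONCAT</code> to collapse the matching <code>file_name</code> rows into one comma-separated column (the <code>cursor</code> and bound parameters are illustrative):</p>
<pre><code>query = """
SELECT a.uid, a.reward_title, a.reward_content,
       a.reward_date AS date_time, a.reward_pinned AS pinned,
       (SELECT GROUP_CONCAT(c.file_name)
          FROM files c
         WHERE c.reference_uid = a.uid AND c.reference_type = ?) AS images
  FROM rewards a
  JOIN subscriptions b
    ON b.creator_uid = a.creator_uid AND b.tier_hierarchy = a.tier_hierarchy
 WHERE b.user_uid = ?
"""
rows = cursor.execute(query, (reference_type, user_uid)).fetchall()
</code></pre>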
|
<python><sqlite>
|
2023-01-01 00:07:46
| 1
| 734
|
tygzy
|
74,972,493
| 4,931,135
|
Are "map" and "filter" subclasses of the class "iterator"?
|
<p>I read in the Python documentation that <code>map</code> and <code>filter</code> return an "iterator" object,
but when I check the type of the returned object, I find that it's of type <code><class 'filter'></code> or <code><class 'map'></code>:</p>
<pre><code>x = [1,2,3,4]
print(type(filter(lambda i: i>2, x)))
<class 'filter'>
print(iter(x))
<class 'list_iterator'>
</code></pre>
<p>What is the relation of the class <code>map</code> to the class <code>iterator</code>? Do the <code>map</code>, <code>filter</code> or <code>zip</code> classes inherit from the <code>iterator</code> class?<br />
I also saw that there are multiple types of iterators depending on the source variable,
for example:</p>
<pre><code>x = (1,2,3,4)
print(type(iter(x)))
<class 'tuple_iterator'>
x = [1,2,3,4]
print(type(iter(x)))
<class 'list_iterator'>
x = "1234"
print(type(iter(x)))
<class 'str_iterator'>
</code></pre>
<p>Are <code>tuple_iterator</code>, <code>list_iterator</code> and <code>str_iterator</code> classes that inherit from an abstract class <code>iterator</code> that we don't know about?<br />
Finally, where can I see the actual implementation of the class <code>map</code> and <code>list_iterator</code>?</p>
|
<python><python-3.x>
|
2022-12-31 23:22:08
| 1
| 1,763
|
Bassel
|
74,972,442
| 159,072
|
How can I send a request to flask socket.io with the click of a button?
|
<p>The following source code sends a request to the server upon page load, and when the task is done, the page is updated automatically.</p>
<p><strong>app.py</strong></p>
<pre><code>from flask import Flask, render_template
from flask_socketio import SocketIO, emit, disconnect
from time import sleep

async_mode = None
app = Flask(__name__)
socket_ = SocketIO(app, async_mode=async_mode)

@app.route('/')
def index():
    return render_template('index.html',
                           sync_mode=socket_.async_mode)

@socket_.on('do_task', namespace='/test')
def run_lengthy_task(data):
    try:
        duration = int(data['duration'])
        sleep(duration)
        emit('task_done', {'data': 'long task of {} seconds complete'.format(duration)})
        disconnect()
    except Exception as ex:
        print(ex)

if __name__ == '__main__':
    socket_.run(app, host='0.0.0.0', port=80, debug=True)
</code></pre>
<p><strong>index.html</strong></p>
<pre><code><!DOCTYPE HTML>
<html>
<head>
<title>Long task</title>
<script src="https://code.jquery.com/jquery-3.6.3.js"></script>
<script src="https://cdn.socket.io/4.5.4/socket.io.min.js"></script>
<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
namespace = '/test';
var socket = io(namespace);
socket.on('connect', function() {
$('#messages').append('<br/>' + $('<div/>').text('Requesting task to run').html());
socket.emit('do_task', {duration: '60'});
});
socket.on('task_done', function(msg, cb) {
$('#messages').append('<br/>' + $('<div/>').text(msg.data).html());
if (cb)
cb();
});
});
</script>
</head>
<body>
<h3>Messages</h3>
<div id="messages" ></div>
</body>
</html>
</code></pre>
<p>How can this program be modified, so that the request is sent only upon the click of a button?</p>
|
<python><flask><flask-socketio>
|
2022-12-31 23:02:55
| 1
| 17,446
|
user366312
|
74,972,396
| 3,313,563
|
Plot `semilogx` graph for Euclidean distances matrix
|
<p>I am struggling to make a <code>semilogx</code> plot for the Euclidean distances matrix. I want a single line to show the differences, but the Euclidean distances matrix lists different items.</p>
<p>When making the plot using <code>distance = np.arange(0.05, 15, 0.1) ** 2</code>, the line appears as expected.</p>
<p>However, when I use the following code block, which generates the Euclidean distances matrix, the plot no longer appears as a single line.</p>
<pre class="lang-py prettyprint-override"><code>m = np.random.random((8, 2))
distance_matrix = cdist(m, m)
</code></pre>
<p>What's the best way to plot the Euclidean distances matrix based on the following code I have done so far?</p>
<pre class="lang-py prettyprint-override"><code>from scipy.spatial.distance import cdist
import matplotlib.pyplot as plt
import numpy as np
# % Log-distance or Log-normal shadowing path loss model
# % Inputs: fc : Carrier frequency[Hz]
# % d : Distance between base station and mobile station[m]
# % d0 : Reference distance[m]
# % n : Path loss exponent
# % sigma : Variance[dB]
def logdist_or_norm(fc, d, d0, n, sigma):
    lamda = 3e8 / fc
    PL = -20 * np.log10(lamda / (4 * np.pi * d0)) + 10 * n * np.log10(d / d0)
    if sigma:
        # PL = PL + sigma * np.random.randn(d.shape)
        PL = PL + sigma * np.random.randn(len(d))
    return PL
# % Channel Parameters
fc = 2.4e9 #% operation in 2.4 GHz
d0 = 0.25 #% good choice for inddor distances (microcells)
sigma = 3 #% keep the book suggestion
Gt = 1 #% No transmitter antenna gain as provided by nordic datasheet
Gr = 1 #% No receiver antenna gain as provided by nordic datasheet
Exp = 4; #% Mid value in the obstructed in building range (Table 1.1)
# Exp = 1 #% Mid value in the obstructed in building range (Table 1.1)
# % Distance vector also for plot
# distance = np.arange(0.05, 15, 0.1) ** 2
m = np.random.random((8, 2))
distance = cdist(m, m)
# np.random.randn(len(distance))
# % Log-normal shadowing model
y_lognorm = logdist_or_norm(fc, distance.flatten(), d0, Exp, sigma)
# % Plot Path loss versus distance
plt.semilogx(distance.flatten(), y_lognorm, 'k-o')
plt.grid(True), plt.axis([0.05, 20, 0, 110]), plt.legend('Log-normal shadowing model')
plt.title(['Log-normal Path-loss Model, f_c = ', str(fc/1e6),'MHz,', '\sigma = ', str(sigma), 'dB, n = 2'])
plt.xlabel('Distance[m]'), plt.ylabel('Path loss[dB]')
</code></pre>
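<p>For reference, a sketch of one plotting fix, assuming the jumble comes from the matrix being unsorted and from its zero diagonal (which a log axis cannot show): mask the zeros and sort the flattened distances before plotting.</p>
<pre><code>d = distance.flatten()
d = np.sort(d[d > 0])    # drop the zero diagonal, sort ascending
plt.semilogx(d, logdist_or_norm(fc, d, d0, Exp, sigma), 'k-o')
</code></pre>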
<p><strong>Expectation</strong></p>
<p><a href="https://i.sstatic.net/X6O1I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X6O1I.png" alt="enter image description here" /></a></p>
|
<python><numpy><matplotlib><matrix>
|
2022-12-31 22:47:48
| 1
| 6,121
|
mmik
|
74,972,126
| 159,072
|
Unable to insert data into SQLite table
|
<p>The following source code is giving me an error:</p>
<pre><code>405 Method not allowed
</code></pre>
<p>when I am pressing [Submit] button.</p>
<ol>
<li>Why is it happening?</li>
<li>How can I fix it?</li>
</ol>
<p><strong>app.py</strong></p>
<pre><code>from flask import Flask, render_template, request, redirect
from flask_sqlalchemy import SQLAlchemy
from datetime import datetime
import secrets
import string
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///filename.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)
def get_random_string():
    return ''.join(secrets.choice(string.ascii_uppercase + string.ascii_lowercase) for i in range(7))

class job_queue(db.Model):
    job_id = db.Column(db.Integer, primary_key=True)
    unique_job_key = db.Column(db.String(64), index=True)
    user_name = db.Column(db.Integer, index=True)
    input_string = db.Column(db.String(256))
    is_done = db.Column(db.Boolean)
    created_at = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)

db.create_all()

@app.route('/')
def index():
    if request.method == 'POST':
        unique_job_key_str = get_random_string()
        user_name1 = request.form['user_name1']
        text1 = request.form['text1']
        is_done_bool = 0
        created_at_time = datetime.utcnow
        new_job = job_queue(unique_job_key=unique_job_key_str,
                            user_name=user_name1,
                            input_string=text1,
                            is_done=is_done_bool,
                            created_at=created_at_time)
        try:
            db.session.add(new_job)
            db.session.commit()
            return redirect('/')
        except:
            return "There was a problem adding new job!"
        # end try
    else:
        users = job_queue.query.order_by(job_queue.created_at).all()
        return render_template('bootstrap_table.html', title='Jobs', users=users)
    # end if

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
<p><strong>bootstrap_table.html</strong></p>
<pre><code>{% extends "base.html" %}
{% block content %}
<div>
<form method="POST">
<table id="input_panel1" class="table table-striped">
<tr>
<td>Username:</td>
<td><input name="user_name1"></td>
</tr>
<tr>
<td>Input:</td><td> <textarea name="text1" cols="40" rows="5" ></textarea></td>
</tr>
<tr>
<td>Submit: </td><td><input type="submit"></td>
</tr>
</table>
</form>
</div>
<hr>
<table id="data" class="table table-striped">
<thead>
<tr>
<th>Job ID</th>
<th>Job Key</th>
<th>User</th>
<th>Input String</th>
<th>Is Done?</th>
</tr>
</thead>
<tbody>
{% for job_queue in users %}
<tr>
<td>{{ job_queue.job_id }}</td>
<td>{{ job_queue.unique_job_key }}</td>
<td>{{ job_queue.user_name }}</td>
<td>{{ job_queue.input_string }}</td>
<td>{{ job_queue.is_done }}</td>
</tr>
{% endfor %}
</tbody>
</table>
{% endblock %}
</code></pre>
<p><strong>EDIT:</strong> Stack Trace:</p>
<pre><code>C:\ProgramData\Miniconda3\python.exe C:/git/funkclusterfrontend/bootstrap_table.py
* Serving Flask app 'bootstrap_table' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Restarting with stat
* Debugger is active!
* Debugger PIN: 100-569-534
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [31/Dec/2022 22:37:19] "GET / HTTP/1.1" 200 -
ERROR:root:SQLite insertion error!
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1719, in _execute_context
context = constructor(
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\engine\default.py", line 1072, in _init_compiled
param = [
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\engine\default.py", line 1073, in <listcomp>
processors[key](compiled_params[key])
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\dialects\sqlite\base.py", line 1003, in process
raise TypeError(
TypeError: SQLite DateTime type only accepts Python datetime and date objects as input.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\git\funkclusterfrontend\bootstrap_table.py", line 46, in index
db.session.commit()
File "<string>", line 2, in commit
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\orm\session.py", line 1451, in commit
self._transaction.commit(_to_root=self.future)
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\orm\session.py", line 829, in commit
self._prepare_impl()
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\orm\session.py", line 808, in _prepare_impl
self.session.flush()
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\orm\session.py", line 3383, in flush
self._flush(objects)
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\orm\session.py", line 3523, in _flush
transaction.rollback(_capture_exception=True)
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\util\langhelpers.py", line 70, in __exit__
compat.raise_(
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\util\compat.py", line 208, in raise_
raise exception
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\orm\session.py", line 3483, in _flush
flush_context.execute()
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\orm\unitofwork.py", line 456, in execute
rec.execute(self)
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\orm\unitofwork.py", line 630, in execute
util.preloaded.orm_persistence.save_obj(
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\orm\persistence.py", line 245, in save_obj
_emit_insert_statements(
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\orm\persistence.py", line 1238, in _emit_insert_statements
result = connection._execute_20(
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1631, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\sql\elements.py", line 332, in _execute_on_connection
return connection._execute_clauseelement(
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1725, in _execute_context
self._handle_dbapi_exception(
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\engine\base.py", line 2043, in _handle_dbapi_exception
util.raise_(
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\util\compat.py", line 208, in raise_
raise exception
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1719, in _execute_context
context = constructor(
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\engine\default.py", line 1072, in _init_compiled
param = [
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\engine\default.py", line 1073, in <listcomp>
processors[key](compiled_params[key])
File "C:\ProgramData\Miniconda3\lib\site-packages\sqlalchemy\dialects\sqlite\base.py", line 1003, in process
raise TypeError(
sqlalchemy.exc.StatementError: (builtins.TypeError) SQLite DateTime type only accepts Python datetime and date objects as input.
[SQL: INSERT INTO job_queue (unique_job_key, user_name, input_string, is_done, created_at) VALUES (?, ?, ?, ?, ?)]
[parameters: [{'is_done': 0, 'created_at': <built-in method utcnow of type object at 0x00007FFCB1E11650>, 'user_name': 'user_name1', 'input_string': 'SasASas', 'unique_job_key': 'cwyRgim'}]]
127.0.0.1 - - [31/Dec/2022 22:37:25] "POST / HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [31/Dec/2022 22:37:25] "POST / HTTP/1.1" 200 -
</code></pre>
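<p>For orientation, a hedged sketch of the two changes the output points at: the 405 suggests the route does not accept POST, and the trace shows the <code>utcnow</code> function object itself (not a datetime) being bound to <code>created_at</code>:</p>
<pre><code>@app.route('/', methods=['GET', 'POST'])   # allow the form's POST
def index():
    ...
    created_at_time = datetime.utcnow()    # call it; don't pass the function
</code></pre>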
|
<python><flask><sqlalchemy>
|
2022-12-31 21:21:19
| 1
| 17,446
|
user366312
|
74,972,054
| 20,433,449
|
I'm trying to make a list into a string in Python
|
<p>I changed the list into a string, but it is not printing on the same line.</p>
<pre><code>spam = ['apples', 'bananas', 'tofu', 'cats']
for i in spam:
    print(str(i))
</code></pre>
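<p>A minimal sketch of two ways to keep the output on one line:</p>
<pre><code>spam = ['apples', 'bananas', 'tofu', 'cats']

print(' '.join(spam))        # apples bananas tofu cats
for item in spam:
    print(item, end=' ')     # same output, printed item by item
</code></pre>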
|
<python><string><list>
|
2022-12-31 20:58:32
| 1
| 619
|
DarkPhinx
|
74,971,956
| 7,973,902
|
FuelSDK: use get() to pull salesforce items into a dataframe
|
<p>I'm attempting to use Salesforce FuelSDK to pull audit items (i.e. click events, unsub events, bounce events, etc.) from our client's Marketing Cloud. I'm sure this is down to my inexperience with APIs, but although I'm able to get a success code from "get()", I'm not sure how to pull the actual content. The end goal is to pull each event type into its own dataframe. Here's what I have so far:</p>
<pre><code>import FuelSDK
import ET_Client
import pandas as pd
import requests
# In[2]:
#define local variables
clientid= 'client_id_code_here'
clientsecret='client_secret_code_here'
subdomain = 'mcx1k2thcht5qzdsm6962p8ln2c8'
auth_base_url = f'https://{subdomain}.auth.marketingcloudapis.com/'
rest_url = f'https://{subdomain}.rest.marketingcloudapis.com/'
soap_url=f'https://{subdomain}.soap.marketingcloudapis.com/'
# In[3]:
#Passing config as a parameter to ET_Client constructor:
myClient = FuelSDK.ET_Client(True, False,
{'clientid': clientid,
'clientsecret': clientsecret,
'useOAuth2Authentication': 'True',
'authenticationurl': auth_base_url,
'applicationType': 'server',
'baseapiurl': rest_url,
'soapendpoint': soap_url})
# In[4]:
# Create an instance of the object type we want to work with:
list = FuelSDK.ET_List()
# In[5]:
# Associate the ET_Client- object using the auth_stub property:
list.auth_stub = myClient
# In[6]:
# Utilize one of the ET_List methods:
response = list.get()
# In[7]:
# Print out the results for viewing
print('Post Status: ' + str(response.status))
print('Code: ' + str(response.code))
print('Message: ' + str(response.message))
print('Result Count: ' + str(len(response.results)))
# print('Results: ' + str(response.results))
# In[12]:
debug = False
# stubObj = ET_Client(False, False, params={'clientid': clientid,
# 'clientsecret': clientsecret})
getBounceEvent = ET_Client.ET_BounceEvent()
getBounceEvent.auth_stub = myClient
getBounceEvent.props = ["SendID","SubscriberKey","EventDate","Client.ID","EventType","BatchID","TriggeredSendDefinitionObjectID","PartnerKey"]
# getBounceEvent.search_filter = {'Property' : 'EventDate', 'SimpleOperator' : 'greaterThan', 'DateValue' : retrieveDate}
getResponse = getBounceEvent.get()
# In[13]:
print(getResponse)
# In[13]:
#pull data into dataframe
df=pd.DataFrame(getResponse)
# In[52]:
df.head()
# In[53]:
#load data into csv data file
df.to_csv('salesforce_bounce_events.csv', index=False)
</code></pre>
<p>Obviously, "getResponse" is not going to go into a dataframe, since it's just a response code. I know from using other ETL programs that the data is in "Bounce Event" in a tabular format, I just need to know how to pull it from the API. Any assistance would be appreciated.</p>
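<p>As a hedged sketch, assuming the retrieved rows live on <code>.results</code> (as with the <code>ET_List</code> call above) and each row is a SOAP object exposing the requested props as attributes:</p>
<pre><code>props = ["SendID", "SubscriberKey", "EventDate", "EventType", "BatchID"]
rows = [{p: getattr(r, p, None) for p in props} for r in getResponse.results]
df = pd.DataFrame(rows)
</code></pre>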
|
<python><pandas><soap><get><salesforce>
|
2022-12-31 20:30:32
| 1
| 434
|
lengthy_preamble
|
74,971,910
| 5,053,347
|
How to remove the confidence interval in pairplot?
|
<p>Could you please help me remove the confidence interval in the seaborn pairplot?</p>
<pre><code>import seaborn as sns
penguins = sns.load_dataset("penguins")
sns.pairplot(penguins,kind="reg")
</code></pre>
<p>I know it is possible in regplot or lmplot using ci=None, but I would like the same functionality in pairplot.</p>
<p>Thanks.</p>
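<p>A minimal sketch, assuming <code>pairplot</code> forwards <code>plot_kws</code> to the underlying <code>regplot</code>:</p>
<pre><code>import seaborn as sns

penguins = sns.load_dataset("penguins")
sns.pairplot(penguins, kind="reg", plot_kws={"ci": None})
</code></pre>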
|
<python><seaborn><pairplot>
|
2022-12-31 20:16:21
| 1
| 497
|
Seyed Omid Nabavi
|
74,971,907
| 15,520,615
|
PySpark / Python Slicing and Indexing Issue
|
<p>Can someone let me know how to pull out certain values from a Python output?</p>
<p>I would like to retrieve the value 'ocweeklyreports' from the following output using either indexing or slicing:</p>
<pre><code>'config': '{"hiveView":"ocweeklycur.ocweeklyreports"}
</code></pre>
<p>This should be relatively easy; however, I'm having problems defining the slicing / indexing configuration.</p>
<p>The following will successfully give me 'ocweeklyreports'</p>
<pre><code>myslice = config['hiveView'][12:30]
</code></pre>
<p>However, I need the indexing or slicing modified so that I will get any value after 'ocweeklycur'.</p>
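<p>A sketch of a slice-free alternative: parse the JSON and split on the dot, so the result does not depend on hard-coded offsets (the <code>config</code> string below mirrors the output above):</p>
<pre><code>import json

config = '{"hiveView":"ocweeklycur.ocweeklyreports"}'
view = json.loads(config)["hiveView"]
name = view.split(".", 1)[1]    # 'ocweeklyreports'
</code></pre>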
|
<python><apache-spark><pyspark><azure-databricks>
|
2022-12-31 20:15:34
| 3
| 3,011
|
Patterson
|
74,971,896
| 4,796,942
|
clear validated cells using a batch update using pygsheets
|
<p>I have a pygsheets work sheet with cells that have been filled in and would like to clear the cells after pulling the data into python.</p>
<p>My issue is that I have validated the cells with a particular data format, so when I try to batch update the sheet the cells either do not go empty (if I use <code>None</code> for the batch update) or an error occurs when I use <code>""</code>.</p>
<p>Please can you help me update the cells to be empty. I have read the <a href="https://readthedocs.org/projects/pygsheets/downloads/pdf/stable/" rel="nofollow noreferrer">documentation</a> and haven't come across how this is done. I have seen a <code>values_batch_clear</code> method but it seems to be on the spreadsheet.</p>
<pre><code>import pygsheets
spread_sheet_id = "...insert...spreadsheet...id"
spreadsheet_name = "...spreadsheet_name..."
wks_name_or_pos = "...worksheet_name..."
spreadsheet = pygsheets.Spreadsheet(client=service,id=spread_sheet_id)
wksheet = spreadsheet.worksheet('title',wks_name_or_pos)
# trying to batch update cells (that have a data validation format) to make them empty again
wksheet.update_values_batch('C2:F6',
[[None, None, None, ""],
[None, None, None, ""],
[None, None, None, ""],
[None, None, None, ""],
[None, None, None, ""]])
</code></pre>
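<p>One hedged sketch, assuming only the values need blanking while the validation rules stay: <code>Worksheet.clear</code> takes a range plus a fields mask, and restricting the mask to <code>userEnteredValue</code> should leave validation untouched:</p>
<pre><code>wksheet.clear(start='C2', end='F6', fields='userEnteredValue')
</code></pre>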
|
<python><google-sheets><formatting><gspread><pygsheets>
|
2022-12-31 20:13:17
| 1
| 1,587
|
user4933
|
74,971,599
| 7,778,016
|
Generating a maze in python and saving it into an image file results in a black image
|
<p>I split the code into two sections for easier viewing, as it's a little long. I have the following code that generates a maze in python and prints it to the terminal. This part of the code works as expected:</p>
<pre><code>import random
from PIL import Image
# The width and height of the maze
width = 30
height = 30
# The maze is represented as a 2D list of cells, where each cell is a dictionary with the following keys:
# "wall" - a set of directions ("N", "S", "E", "W") indicating which walls exist for the cell
# "visited" - a boolean indicating whether the cell has been visited during the maze generation process
maze = [[{"wall": {"N", "S", "E", "W"}, "visited": False} for j in range(width)] for i in range(height)]
# The start and end positions of the maze are always fixed
start = (0, 0)
end = (height-1, width-1)
# Set the start and end cells to be unvisited so that they are not included in the maze generation process
maze[start[0]][start[1]]["visited"] = True
maze[end[0]][end[1]]["visited"] = True
# Initialize a list of walls with all the walls in the maze
walls = []
for i in range(height):
    for j in range(width):
        if "N" in maze[i][j]["wall"]:
            walls.append((i, j, "N"))
        if "S" in maze[i][j]["wall"]:
            walls.append((i, j, "S"))
        if "E" in maze[i][j]["wall"]:
            walls.append((i, j, "E"))
        if "W" in maze[i][j]["wall"]:
            walls.append((i, j, "W"))

# Shuffle the list of walls
random.shuffle(walls)

# Initialize a disjoint-set data structure to keep track of connected cells
parent = {}
rank = {}

def make_set(cell):
    parent[cell] = cell
    rank[cell] = 0

def find(cell):
    if parent[cell] != cell:
        parent[cell] = find(parent[cell])
    return parent[cell]

def union(cell1, cell2):
    root1 = find(cell1)
    root2 = find(cell2)
    if root1 != root2:
        if rank[root1] > rank[root2]:
            parent[root2] = root1
        else:
            parent[root1] = root2
            if rank[root1] == rank[root2]:
                rank[root2] += 1

# Initialize the disjoint-set data structure with one set for each cell
for i in range(height):
    for j in range(width):
        make_set((i, j))

# Iterate through the list of walls and remove walls that would create a cycle
for wall in walls:
    i, j, direction = wall
    if direction == "N":
        ni, nj = i-1, j
    elif direction == "S":
        ni, nj = i+1, j
    elif direction == "E":
        ni, nj = i, j+1
    elif direction == "W":
        ni, nj = i, j-1
    if ni >= 0 and ni < height and nj >= 0 and nj < width and find((i, j)) != find((ni, nj)):
        maze[i][j]["wall"].discard(direction)
        maze[ni][nj]["wall"].discard("S" if direction == "N" else "N" if direction == "S" else "W" if direction == "E" else "E")
        union((i, j), (ni, nj))

# Print the maze
for i in range(height):
    for j in range(width):
        if (i, j) == start:
            print("S", end="")
        elif (i, j) == end:
            print("E", end="")
        elif "S" in maze[i][j]["wall"]:
            print("|", end="")
        else:
            print(" ", end="")
        if "E" in maze[i][j]["wall"]:
            print("_", end="")
        else:
            print(" ", end="")
    print()
</code></pre>
<p>Now when I try to generate an image using PIL, it just outputs a black image without the maze grid on a white background.</p>
<pre><code># Create an image with a white background
img = Image.new("RGB", (width*2+1, height*2+1), (255, 255, 255))
pixels = img.load()
# Color the cells and walls of the maze
for i in range(height):
    for j in range(width):
        if "S" in maze[i][j]:
            color = (0, 255, 0)
        elif "E" in maze[i][j]:
            color = (255, 0, 0)
        else:
            color = (0, 0, 0)
        for di in range(2):
            for dj in range(2):
                pixels[2*j+dj, 2*i+di] = color
        if "S" in maze[i][j]["wall"]:
            pixels[2*j+1, 2*i] = (0, 0, 0)
        if "E" in maze[i][j]["wall"]:
            pixels[2*j, 2*i+1] = (0, 0, 0)
# Save the image to a file
img.save("maze.png")
</code></pre>
<p>Any suggestions on how this could be fixed / improved?</p>
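<p>As one pointer, a hedged sketch: <code>"S" in maze[i][j]</code> tests the dict's keys (<code>"wall"</code>, <code>"visited"</code>), so every cell falls through to black; comparing positions, as the terminal printer does, restores the coloring:</p>
<pre><code>if (i, j) == start:
    color = (0, 255, 0)
elif (i, j) == end:
    color = (255, 0, 0)
else:
    color = (255, 255, 255)   # leave open cells white; walls are drawn after
</code></pre>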
|
<python><python-imaging-library><maze>
|
2022-12-31 19:05:40
| 1
| 1,012
|
P_n
|
74,971,536
| 6,663,588
|
Reorder expression in descending negative powers of a variable using Sage or Sympy
|
<p>I have an expression</p>
<pre class="lang-py prettyprint-override"><code>1/24*(8*(l + 1)*l + 5*(2*E*(l + 1)*l + 3)/E - 6)/E
</code></pre>
<p>I want to reorder it to the form</p>
<pre class="lang-py prettyprint-override"><code>a * E**1 + b * E**0 + c * E**(-1) + d * E**(-2) + ...
</code></pre>
<p><code>sympy.simplify()</code> gives me a close result.</p>
<pre class="lang-py prettyprint-override"><code>from sympy import simplify
simplify(1/24*(8*(l + 1)*l + 5*(2*E*(l + 1)*l + 3)/E - 6)/E)
</code></pre>
<p>output</p>
<pre class="lang-py prettyprint-override"><code>(6*E*l**2 + 6*E*l - 2*E + 5)/(8*E**2)
</code></pre>
<p>It didn't distribute the <code>8*E**2</code> denominator across the terms.</p>
<p>What I expect is</p>
<pre class="lang-py prettyprint-override"><code>3/4*l**2*E**(-1) + 3/4*l*E**(-1) - 1/4*E**(-1) + 5/8*E**(-2)
</code></pre>
<p>Is this possible?</p>
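<p>A minimal sketch of one route: building the expression with exact rationals and calling <code>expand</code> distributes the denominator into separate powers of E:</p>
<pre><code>from sympy import symbols, expand, Rational

E, l = symbols('E l')
expr = Rational(1, 24) * (8*(l + 1)*l + 5*(2*E*(l + 1)*l + 3)/E - 6) / E
print(expand(expr))   # 3*l**2/(4*E) + 3*l/(4*E) - 1/(4*E) + 5/(8*E**2)
</code></pre>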
|
<python><sympy><sage>
|
2022-12-31 18:54:47
| 1
| 652
|
IvanaGyro
|
74,971,337
| 3,361,975
|
Python numpy dataframe conditional operation (e.g. sum) across two dataframes
|
<p>I'm trying to calculate a conditional sum that involves a lookup in another dataframe.</p>
<pre><code>import pandas as pd
first = pd.DataFrame([{"a": "aaa", "b": 2, "c": "bla", "d": 1}, {"a": "bbb", "b": 3, "c": "bla", "d": 1}, {"a": "aaa", "b": 4, "c": "bla", "d": 1}, {"a": "ccc", "b": 11, "c": "bla", "d": 1}, {"a": "bbb", "b": 23, "c": "bla", "d": 1}])
second = pd.DataFrame([{"a": "aaa", "val": 111}, {"a": "bbb", "val": 222}, {"a": "ccc", "val": 333}, {"a": "ddd", "val": 444}])
print(first)
print(second)
</code></pre>
<p>The two DataFrames are</p>
<pre><code> a b c d
0 aaa 2 bla 1
1 bbb 3 bla 1
2 aaa 4 bla 1
3 ccc 11 bla 1
4 bbb 23 bla 1
</code></pre>
<p>and</p>
<pre><code> a val
0 aaa 111
1 bbb 222
2 ccc 333
3 ddd 444
</code></pre>
<p>I want to append a column in <code>second</code> that has the sum of column <code>b</code> in <code>first</code> in which <code>first.a</code> matches the corresponding <code>second.a</code>. The expected result is:</p>
<pre><code> a val result
0 aaa 111 6
1 bbb 222 26
2 ccc 333 11
3 ddd 444 0
</code></pre>
<p>Note that this is a minimal example and I'd ideally see a generalizable solution that uses lambda or other functions and not a specific hack that works with this specific example.</p>
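<p>A minimal sketch of the usual groupby-and-map pattern, which generalizes to any aggregation:</p>
<pre><code>sums = first.groupby('a')['b'].sum()                 # per-key aggregate
second['result'] = (second['a'].map(sums)            # align onto second.a
                    .fillna(0).astype(int))          # 0 for unmatched keys
</code></pre>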
|
<python><pandas><dataframe>
|
2022-12-31 18:11:55
| 2
| 1,653
|
SCBuergel
|
74,971,083
| 14,391,210
|
Pandas losing inheritance when subsetting dataframe
|
<p>I made my own pandas class to add some custom methods, here's the code</p>
<pre><code>import pandas as pd

class Mypandas(pd.DataFrame):
    def foo(self):
        return self
</code></pre>
<p>When I run the following, I get an AttributeError saying 'DataFrame' object has no attribute 'foo':</p>
<pre><code>df = Mypandas(df)
df = df.query('count > 0')
df.foo()
</code></pre>
<p>I guess it's because I'm subsetting my df, but since I instantiated Mypandas, shouldn't I still have access to df.foo()? Why is that? How can I counter this so I can still subset my df?</p>
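<p>For reference, a sketch of the documented pandas subclassing hook: operations like <code>query</code> build their result through the <code>_constructor</code> property, which defaults to <code>DataFrame</code>, so overriding it preserves the subclass through subsetting:</p>
<pre><code>import pandas as pd

class Mypandas(pd.DataFrame):
    @property
    def _constructor(self):
        return Mypandas

    def foo(self):
        return self
</code></pre>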
|
<python><pandas><class>
|
2022-12-31 17:18:13
| 0
| 621
|
Marc
|
74,971,030
| 8,602,367
|
How to convert encoded unicode string to native string in Python
|
<p>How do I convert a string that looks like the following:</p>
<pre><code>s1 = u'MDcxNTFjZWU5MzQ2MTRjZmZiOWIyNTBhYjJlZDhkODY0OTEyYmE2Yjp7ImFjdHVhbF9jcmVhdGVkX3RpbWVzdGFtcCI6ICIxNjcyNDg5NjAxLjMyOTg5MyIsICJhY3R1YWxfaWQiOiAiYWhGa1pYWi1ibWxuYUhSc2IyOXdMVzVsZDNJakN4SWJibWxuYUhSc2IyOXdYMUpsYzJWeWRtRjBhVzl1UVdOMGRXRnNHTUc3QVF3In0%3D'
</code></pre>
<p>to a string that looks like this</p>
<pre><code>s2 = u'MDcxNTFjZWU5MzQ2MTRjZmZiOWIyNTBhYjJlZDhkODY0OTEyYmE2Yjp7ImFjdHVhbF9jcmVhdGVkX3RpbWVzdGFtcCI6ICIxNjcyNDg5NjAxLjMyOTg5MyIsICJhY3R1YWxfaWQiOiAiYWhGa1pYWi1ibWxuYUhSc2IyOXdMVzVsZDNJakN4SWJibWxuYUhSc2IyOXdYMUpsYzJWeWRtRjBhVzl1UVdOMGRXRnNHTUc3QVF3In0='
</code></pre>
<p>Notice that <code>s1</code> ends with <code>%3D</code>, which is the percent-encoded (URL-encoded) form of <code>=</code>.</p>
<p>I have tried using the <code>.decode</code> function but that doesn't seem to work.</p>
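<p>A minimal sketch: since <code>%3D</code> is percent-encoding, <code>urllib.parse.unquote</code> reverses it:</p>
<pre><code>from urllib.parse import unquote

s2 = unquote(s1)   # '...In0%3D' becomes '...In0='
</code></pre>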
|
<python><python-3.x><python-2.7>
|
2022-12-31 17:09:39
| 1
| 1,605
|
Dave Kalu
|
74,970,947
| 5,651,481
|
How to identify inputs for a diffusion model?
|
<p>I'm calling a Stable Diffusion in-painting model with the following code, however I know there are more parameters available in the model pipeline. How do I identify all the available parameters in <a href="https://huggingface.co/runwayml/stable-diffusion-inpainting" rel="nofollow noreferrer">this stable diffusion in-painting model?</a></p>
<pre><code>import torch
from diffusers import StableDiffusionInpaintPipeline
pipe = StableDiffusionInpaintPipeline.from_pretrained(
"runwayml/stable-diffusion-inpainting",
revision="fp16",
torch_dtype=torch.float16,
)
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
#image and mask_image should be PIL images.
#The mask structure is white for inpainting and black for keeping as is
image = pipe(prompt=prompt, image=image, mask_image=mask_image).images[0]
image.save("./yellow_cat_on_park_bench.png")
</code></pre>
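<p>One generic sketch for discovering the accepted parameters at runtime (the pipeline's <code>__call__</code> carries the signature):</p>
<pre><code>import inspect

print(inspect.signature(pipe.__call__))   # lists every keyword the pipeline accepts
help(pipe.__call__)                       # full docstring with parameter descriptions
</code></pre>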
|
<python><huggingface-tokenizers><huggingface><stable-diffusion>
|
2022-12-31 16:53:14
| 1
| 1,769
|
cormacncheese
|
74,970,914
| 7,654,773
|
How to create table from function and add to docx document
|
<p>I have working code that builds a document with several tables using a for-loop.
To keep the code clean I would like to break out the table creation into its own function but cannot see how to do this from the API doc.</p>
<p>Essentially, I want to call a function to create & return a Table() object and then add it to the document.</p>
<p>Is this do-able?</p>
<pre><code># this works fine
from docx import Document, document, table
document = Document()
table1 = document.add_table(rows=1, cols=4)
# more table1 building code here
table2 = document.add_table(rows=1, cols=4)
# more table2 building code here
document.save('foo.docx')
</code></pre>
<p>but refactoring like below will not build - I get</p>
<pre><code>TypeError: Table.__init__() got an unexpected keyword argument 'rows'
</code></pre>
<pre><code>from docx import Document, document, table
document = Document()
document.add_table(build_mytable(somedata))
document.add_table(build_mytable(someotherdata))
def build_mytable(mydata):
    table = docx.table.Table(rows=1, cols=4)
    # more table building code here
    return table
</code></pre>
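<p>For reference, a sketch of the pattern python-docx supports: tables are created through a parent object such as a <code>Document</code>, so the builder can take the document and return the table it added (<code>somedata</code> stands in for your own data):</p>
<pre><code>from docx import Document

def build_mytable(document, mydata):
    table = document.add_table(rows=1, cols=4)
    # populate table cells from mydata here
    return table

document = Document()
table1 = build_mytable(document, somedata)
table2 = build_mytable(document, someotherdata)
document.save('foo.docx')
</code></pre>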
|
<python><python-docx>
|
2022-12-31 16:46:11
| 1
| 696
|
Bill
|
74,970,894
| 6,663,588
|
Solve recurrence with symbols using SymPy: I got None
|
<p>I tried to solve a recurrence relation.</p>
<p><a href="https://i.sstatic.net/XgcoN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XgcoN.png" alt="enter image description here" /></a></p>
<pre class="lang-tex prettyprint-override"><code>$$
\def\r #1{\langle r^{#1} \rangle}
0 = 8 n E \r{n - 1}
+ (n - 1)[n(n - 2) - 4l(l + 1)] \r{n - 3}
+ 4(2n - 1) \r{n - 2}
$$
</code></pre>
<p>This equation is from quantum mechanics. It is derived from the Hamiltonian with a Coulomb potential.</p>
<p><a href="https://i.sstatic.net/HavJd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HavJd.png" alt="enter image description here" /></a></p>
<pre class="lang-tex prettyprint-override"><code>$$
H = \frac{1}{2} p^2_r
+ \frac{l(l + 1)}{2 r^2}
- \frac{1}{r}
$$
</code></pre>
<p>E and l are symbols; <r^n> is the expected value of r^n with respect to the energy eigenstate.</p>
<p>I have a Sage script using SymPy to solve the recurrence relation. My goal is to express <r^n> as a function of E and l. Here is my script.</p>
<pre class="lang-py prettyprint-override"><code>import sympy
from sympy import Function, rsolve
from sympy.abc import n
l = sympy.symbols('l',integer=True)
energy = sympy.symbols('E')
r = Function('r')
f = 8 * n * energy * r(n - 1) \
+ 4 * (2 * n - 1) * r(n - 2) \
+ (n - 1) * (n * (n - 2) - 4 * l * (l + 1)) * r(n - 3)
print(rsolve(f, r(n), {r(0): 1}))
</code></pre>
<p>I don't know why the output result is <code>None</code>. I have tried setting <code>l</code> and <code>energy</code> to an explicit integer, for example 1, but it didn't help.</p>
<h2>Expected result</h2>
<p>I am sorry. I don't know. The recurrence relation is too hard for my brain. I am not good at math.</p>
<h2>Output from print</h2>
<p>There is no error, and below is the output.</p>
<pre><code>None
</code></pre>
<h2>Extra question</h2>
<p>If my recurrence relation doesn't have a general solution, is it possible to get the results of specific <code>r(n)</code>, for example <code>r(10)</code>?</p>
<h2>Reply for my extra question</h2>
<p>I figured out the recursive method to generate the result of specific <code>r(n)</code>.</p>
<pre class="lang-py prettyprint-override"><code>from functools import cache
from sympy import simplify
# `energy` is the symbol defined in the script above

@cache
def expected_distance(n, l):
    if n <= -2:
        return 0
    if n == -1:
        return -2 * energy
    if n == 0:
        return 1
    return simplify((
        - 4 * (2 * (n + 1) - 1) * expected_distance(n - 1, l)
        - n * ((n + 1) * (n - 1) - 4 * l * (l + 1)) * expected_distance(n - 2, l)
    ) / (8 * (n + 1) * energy))
</code></pre>
|
<python><sympy><sage>
|
2022-12-31 16:42:42
| 0
| 652
|
IvanaGyro
|
74,970,892
| 2,411,604
|
Singleton and inheritance overriding
|
<p>I'm stumbling on singletons and inheritance overriding.</p>
<p>In the following code, if I create <strong>slow_grizzly</strong> first, I get the expected results.</p>
<p>But if I create <strong>alaska_bear</strong> first, then <strong>slow_grizzly</strong> is completely overridden by <strong>alaska_bear</strong>.</p>
<p>I am not sure why this is happening, or if I'm missing something.</p>
<pre><code>class Singleton():
    __instance = None

    def __new__(cls, *args, **kwargs):
        if cls.__instance is None:
            cls.__instance = super().__new__(cls)
        return cls.__instance

class Animal(Singleton):
    def __init__(self, *args, is_white=False, **kwargs):
        self.is_white = is_white

class Bear(Animal):
    def __init__(self, region=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.region = region

class Grizzly(Bear):
    def __init__(self, max_speed, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.max_speed = max_speed

alaska_bear = Bear("Alaska")
slow_grizzly = Grizzly(44, region="Colorado", is_white=False)
print(alaska_bear.__dict__)   # {'is_white': False, 'region': 'Alaska'}
print(slow_grizzly.__dict__)  # expected {'is_white': False, 'region': 'Colorado', 'max_speed': 44}
                              # but got {'is_white': False, 'region': 'Alaska'}
</code></pre>
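<p>A hedged sketch of what seems to be going on, plus a per-class variant: the mangled <code>_Singleton__instance</code> set when <code>Bear</code> is created is found through the MRO when <code>Grizzly.__new__</code> runs, so the existing <code>Bear</code> instance is returned, and since it is not a <code>Grizzly</code>, <code>__init__</code> is never re-run. Keying instances per class gives each subclass its own singleton:</p>
<pre><code>class Singleton:
    _instances = {}

    def __new__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__new__(cls)
        return cls._instances[cls]
</code></pre>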
|
<python><inheritance><singleton>
|
2022-12-31 16:42:26
| 0
| 518
|
Alexis
|
74,970,712
| 9,827,719
|
Python JIRA Rest API get cases from board ORDER BY last updated
|
<p>I have successfully managed to get the Jira REST API working with Python code. It lists cases; however, it lists the last 50 cases ordered by created date. I want to list the 50 cases ordered by updated date.</p>
<p>This is my Python code:</p>
<pre><code>from jira import JIRA

jiraOptions = {'server': "https://xxx.atlassian.net"}
jira = JIRA(options=jiraOptions, basic_auth=(jira_workspace_email, jira_api_token))

for singleIssue in jira.search_issues(jql_str=f"project = GEN"):
    key = singleIssue.key
    raw_fields_json = singleIssue.raw['fields']
    created = raw_fields_json['created']
    updated = raw_fields_json['updated']
</code></pre>
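<p>A minimal sketch: the ordering belongs in the JQL string itself:</p>
<pre><code>issues = jira.search_issues(jql_str="project = GEN ORDER BY updated DESC",
                            maxResults=50)
</code></pre>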
|
<python><jira>
|
2022-12-31 16:12:03
| 1
| 1,400
|
Europa
|
74,970,507
| 8,384,089
|
How do you clip a circle (or any non-Rect) from an image in pygame?
|
<p>I am using Pygame and have an image. I can clip a rectangle from it:</p>
<pre class="lang-py prettyprint-override"><code>image = pygame.transform.scale(pygame.image.load('example.png'), (32, 32))
handle_surface = image.copy()
handle_surface.set_clip(pygame.Rect(0, 0, 32, 16))
clipped_image = surface.subsurface(handle_surface.get_clip())
</code></pre>
<p>I have tried to use <code>subsurface</code> by passing a <code>Surface</code>:</p>
<pre class="lang-py prettyprint-override"><code>handle_surface = image.copy()
hole = pygame.Surface((32, 32))
pygame.draw.circle(hole, (255, 255, 255), (0, 0), 32)
handle_surface.set_clip(hole)
image = surface.subsurface(handle_surface.get_clip())
surf = image.copy()
</code></pre>
<p>But I get the error:</p>
<pre><code>ValueError: invalid rectstyle object
</code></pre>
<p>This error is because despite its name, <code>subsurface</code> expects a <code>Rect</code>, not a <code>Surface</code>. Is there a way to clip another shape from this image and have <code>collidepoint</code> work correctly?</p>
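<p>A hedged sketch of the common workaround: blit the image through a circular alpha mask onto a transparent surface, then use <code>pygame.mask</code> for the hit test:</p>
<pre><code>import pygame

clipped = pygame.Surface((32, 32), pygame.SRCALPHA)
pygame.draw.circle(clipped, (255, 255, 255, 255), (16, 16), 16)
clipped.blit(image, (0, 0), special_flags=pygame.BLEND_RGBA_MIN)  # keep pixels inside the circle
mask = pygame.mask.from_surface(clipped)   # mask.get_at((x, y)) for point collision
</code></pre>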
|
<python><python-3.x><pygame><pygame-surface>
|
2022-12-31 15:26:58
| 1
| 762
|
Anonymous
|
74,970,481
| 7,892,936
|
Python: How to match words whether split or unsplit?
|
<p>I have a Dataframe as below, and I wish to detect the repeated words whether they appear split or unsplit:</p>
<p>Table A:</p>
<pre><code>Cat Comments
Stat A power down due to electric shock
Stat A powerdown because short circuit
Stat A top 10 on re work
Stat A top10 on rework
</code></pre>
<p>I wish to get the output as below:</p>
<pre><code>Repeated words= ['Powerdown', 'top10','on','rework']
</code></pre>
<p>Anyone have ideas?</p>
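<p>One hedged sketch of the idea: give each comment its word set plus every adjacent pair joined together, so "power down" also yields "powerdown"; tokens seen in more than one comment are then the repeats (<code>df</code> stands for Table A):</p>
<pre><code>from collections import Counter

def tokens(text):
    words = text.lower().split()
    joined = {a + b for a, b in zip(words, words[1:])}   # adjacent pairs fused
    return set(words) | joined

counts = Counter(t for c in df['Comments'] for t in tokens(c))
repeated = [t for t, n in counts.items() if n > 1]
# ['powerdown', 'top10', 'on', 'rework'] (order may vary)
</code></pre>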
|
<python><python-3.x><pandas><dataframe><arraylist>
|
2022-12-31 15:22:23
| 2
| 2,521
|
Shi Jie Tio
|
74,970,255
| 3,010,334
|
python -We ResourceWarning not erroring
|
<p>If I take a simple test script like this, which results in ResourceWarning:</p>
<pre><code>import asyncio
import aiohttp
async def main():
    client = aiohttp.ClientSession()
    await client.get("https://google.com")

asyncio.run(main())
</code></pre>
<p>Run it with <code>python -bb -We -X dev test.py</code>.</p>
<p>I'm expecting the <code>ResourceWarning</code> to error and result in a non-0 exit code.</p>
<p>Actual result:</p>
<pre><code>Exception ignored in: <function ClientSession.__del__ at 0x7f397e04beb0>
Traceback (most recent call last):
File "/home/s/.local/lib/python3.8/site-packages/aiohttp/client.py", line 342, in __del__
_warnings.warn(
ResourceWarning: Unclosed client session <aiohttp.client.ClientSession object at 0x7f397dfce4b0>
Exception ignored in: <function BaseConnector.__del__ at 0x7f397e0b6eb0>
Traceback (most recent call last):
File "/home/s/.local/lib/python3.8/site-packages/aiohttp/connector.py", line 281, in __del__
_warnings.warn(f"Unclosed connector {self!r}", ResourceWarning, **kwargs)
ResourceWarning: Unclosed connector <aiohttp.connector.TCPConnector object at 0x7f397dfce690>
Exception ignored in: <socket.socket fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.1.18', 45750), raddr=('172.217.169.4', 443)>
ResourceWarning: unclosed <socket.socket fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.1.18', 45750), raddr=('172.217.169.4', 443)>
Exception ignored in: <function _SelectorTransport.__del__ at 0x7f397e8264b0>
Traceback (most recent call last):
File "/usr/lib/python3.8/asyncio/selector_events.py", line 696, in __del__
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
ResourceWarning: unclosed transport <_SelectorSocketTransport fd=7>
Exception ignored in: <function _SelectorTransport.__del__ at 0x7f397e8264b0>
Traceback (most recent call last):
File "/usr/lib/python3.8/asyncio/selector_events.py", line 696, in __del__
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
ResourceWarning: unclosed transport <_SelectorSocketTransport fd=6>
Exception ignored in: <socket.socket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.1.18', 42986), raddr=('142.250.187.206', 443)>
ResourceWarning: unclosed <socket.socket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.1.18', 42986), raddr=('142.250.187.206', 443)>
</code></pre>
<p>0 exit code indicating success.</p>
<p>How do I ensure the program actually fails on a warning?</p>
|
<python><python-asyncio><warnings>
|
2022-12-31 14:44:37
| 1
| 3,364
|
Sam Bull
|
74,970,235
| 19,238,204
|
How to Add another subplot to show Solid of Revolution toward x-axis?
|
<p>I have this code modified from the topic here:</p>
<p><a href="https://stackoverflow.com/questions/59402531/how-to-produce-a-revolution-of-a-2d-plot-with-matplotlib-in-python">How to produce a revolution of a 2D plot with matplotlib in Python</a></p>
<p>The plot contains a subplot in the XY plane and another subplot of the solid of revolution toward the y-axis.</p>
<p>I want to add another subplot showing the <strong>solid of revolution toward the x-axis</strong>, and also add a legend above each subplot, so there will be 3 subplots in total.</p>
<p>This is my MWE:</p>
<pre><code># Compare the plot at xy axis with the solid of revolution
# For function x=(y-2)^(1/3)
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
n = 100
fig = plt.figure(figsize=(12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122,projection='3d')
y = np.linspace(np.pi/8, np.pi*40/5, n)
x = (y-2)**(1/3) # x = np.sin(y)
t = np.linspace(0, np.pi*2, n)
xn = np.outer(x, np.cos(t))
yn = np.outer(x, np.sin(t))
zn = np.zeros_like(xn)
for i in range(len(x)):
zn[i:i+1,:] = np.full_like(zn[0,:], y[i])
ax1.plot(x, y)
ax2.plot_surface(xn, yn, zn)
plt.show()
</code></pre>
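<p>My current (untested) idea for the third subplot is to revolve the same curve about the x-axis by rotating the y-values around that axis, after changing the existing subplots to <code>131</code>/<code>132</code>; titles could serve as per-subplot legends:</p>
<pre class="lang-py prettyprint-override"><code># sketch: revolution about the x-axis; radius at position x is y
ax3 = fig.add_subplot(133, projection='3d')
yn2 = np.outer(y, np.cos(t))
zn2 = np.outer(y, np.sin(t))
xn2 = np.zeros_like(yn2)
for i in range(len(x)):
    xn2[i:i+1, :] = np.full_like(xn2[0, :], x[i])
ax3.plot_surface(xn2, yn2, zn2)
ax3.set_title('about the x-axis')  # title used as the subplot "legend"
</code></pre>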
|
<python><numpy><matplotlib><plot>
|
2022-12-31 14:41:54
| 1
| 435
|
Freya the Goddess
|
74,970,230
| 5,617,608
|
How can I hide my source code in HuggingFace spaces?
|
<p>I've been trying to figure out how to hide my source code in public HuggingFace spaces, but I wasn't able to find a solution for this. Here is what I've read and tried:</p>
<ol>
<li><p>Using GitHub Actions to rely on tokens, but the code is of course synced to the HuggingFace Space, so it is visible.</p>
</li>
<li><p>In the documentation it is said that Git LFS can be used to make the models hidden, but this cannot be applied to source code.</p>
</li>
<li><p>I've also read about using secrets, but this applies to API keys, not app functions.</p>
</li>
</ol>
<p>Note: I've searched for a more relevant community to post this question and already asked it in the HuggingFace community, but no help yet.</p>
|
<python><huggingface>
|
2022-12-31 14:41:11
| 0
| 1,759
|
Esraa Abdelmaksoud
|
74,970,058
| 6,060,982
|
How to type hint results of specific function calls with generic return signature
|
<p>I believe it is easier to ask this question using a concrete example:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
_, ax = plt.subplots()
# Pyright: Cannot access member "plot" for type "ndarray[Any, dtype[Any]]"
# Member "plot" is unknown
ax.plot([1, 2, 3], [3, 4, 1])
plt.show()
</code></pre>
<p>The code above runs as expected, but the static type checker (I am using Pyright) has trouble figuring out the type of <code>ax</code>. The problem is that the return type of <code>plt.subplots</code> depends on its arguments (and possibly on the context it is being used), so its return signature is rather general, i.e.
<code>tuple[FigureBase | Unknown, Any | ndarray[Any, dtype[Any]] | Unknown]</code>.</p>
<p>However, in this particular case I know that the result is expected to be of type <code>tuple[plt.Figure, plt.Axes]</code>. So I guess the question is what is the way to declare this in the code, so that the static type checker can reason about <code>ax.plot</code>?</p>
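<p>The only workaround I have found so far is an explicit <code>cast</code> (a sketch; as far as I can tell, <code>Axes</code> is re-exported by pyplot):</p>
<pre class="lang-py prettyprint-override"><code>from typing import cast

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax = cast(plt.Axes, ax)  # no runtime effect; only narrows the type for the checker
ax.plot([1, 2, 3], [3, 4, 1])
plt.show()
</code></pre>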
|
<python><type-hinting><pyright>
|
2022-12-31 14:09:28
| 0
| 700
|
zap
|
74,970,023
| 7,347,925
|
Select rows by column value and include previous row by another column value
|
<p>Here's an example of DataFrame:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame([
[0, "file_0", 5],
[0, "file_1", 0],
[1, "file_2", 0],
[1, "file_3", 8],
[2, "file_4", 0],
[2, "file_5", 5],
[2, "file_6", 100],
[2, "file_7", 0],
[2, "file_8", 50]
], columns=["case", "filename", "num"])
</code></pre>
<p>I wanna select <code>num==0</code> rows and their previous rows with the same <code>case</code> value, no matter the <code>num</code> value of the previous row.</p>
<p>Finally, we should get</p>
<pre><code>case filename num
0 file_0 5
0 file_1 0
1 file_2 0
2 file_4 0
2 file_6 100
2 file_7 0
</code></pre>
<p>I have got that I can select the previous row by</p>
<pre><code>df[(df['num']==0).shift(-1).fillna(False)]
</code></pre>
<p>However, this doesn't consider the <code>case</code> value. One solution that came to my mind is to group by <code>case</code> first and then filter the data. I have no idea how to code it ...</p>
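<p>My rough idea (untested) is that a group-wise <code>shift</code> would keep the lookback from crossing <code>case</code> boundaries, something like:</p>
<pre class="lang-py prettyprint-override"><code>mask = df['num'].eq(0)
# next row's num within the same case; never leaks across cases
prev_of_zero = df.groupby('case')['num'].shift(-1).eq(0)
result = df[mask | prev_of_zero]
</code></pre>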
|
<python><pandas>
|
2022-12-31 14:02:16
| 3
| 1,039
|
zxdawn
|
74,969,971
| 18,081,244
|
FileNotFoundError: Could not find module 'E:\...\exp_simplifier\ExpSim.dll' (or one of its dependencies)
|
<p>Hope you're having a great day!</p>
<p>I have created an <code>ExpSim.dll</code> file that I compiled from Visual Studio 2022 in a DLL project. Here is the main code:</p>
<pre class="lang-cpp prettyprint-override"><code>#include "ExpressionSimplifier.h"
#include <Windows.h>
#include <Python.h>
exs::ExprSimplifier simplifier = exs::ExprSimplifier();
extern "C" __declspec(dllexport)
PyObject* _cdecl getSteps()
{
// some code
}
extern "C" __declspec(dllexport)
void _cdecl getSimplifiedExpressionSize(const char* expression)
{
// some code
}
extern "C" __declspec(dllexport)
const char* _cdecl getSimplifiedExpression(const char* expression)
{
// some code
}
BOOL APIENTRY DllMain( HMODULE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved
)
{
switch (ul_reason_for_call)
{
case DLL_PROCESS_ATTACH:
case DLL_THREAD_ATTACH:
case DLL_THREAD_DETACH:
case DLL_PROCESS_DETACH:
break;
}
return TRUE;
}
</code></pre>
<p>Now in python, when I run the following code:</p>
<pre class="lang-py prettyprint-override"><code>from ctypes import *
exp_sim = cdll.LoadLibrary("python/maths/exp_simplifier/ExpSim.dll")
exp_sim.getSteps.restype = c_char_p
exp_sim.simplifyExpression.argtypes = [c_char_p]
exp_sim.simplifyExpression.restype = py_object
result = exp_sim.simplifyExpression(b"x+y+x")
print(result)
</code></pre>
<p>It shows the following error at line 3:</p>
<pre><code>Traceback (most recent call last):
File "e:\Python Stuff\Projects\Minisoft\python\maths\exp_simplifier\exp_simplifier.py", line 3, in <module>
exp_sim = cdll.LoadLibrary("python/maths/exp_simplifier/ExpSim.dll")
File "C:\Users\Saket\AppData\Local\Programs\Python\Python310\lib\ctypes\__init__.py", line 452, in LoadLibrary
return self._dlltype(name)
File "C:\Users\Saket\AppData\Local\Programs\Python\Python310\lib\ctypes\__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'E:\Python Stuff\Projects\Minisoft\python\maths\exp_simplifier\ExpSim.dll' (or one of its dependencies). Try using the full path with constructor syntax.
</code></pre>
<p>Before I started using the Python API in my C++ code, the DLL seemed to work fine, but not after that. Firstly, it refused to compile my code due to it not being able to find <code>python311_d.lib</code>, and after fixing that, python refuses to read my DLL.</p>
<p>I believe it's due to ExpSim.dll still having a dependency on <code>python311_d.lib</code>, but I have no idea how to fix that in VS Code.</p>
<p>Any help would be appreciated. Have a good day!</p>
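<p>One thing I plan to try (a sketch; the directory below is hypothetical) is Python 3.8+'s explicit DLL search path, in case the loader simply cannot locate <code>python311.dll</code>/<code>python311_d.dll</code> next to my DLL:</p>
<pre class="lang-py prettyprint-override"><code>import os
from ctypes import CDLL

# Hypothetical: the folder that actually contains python311(_d).dll
os.add_dll_directory(r"C:\Users\Saket\AppData\Local\Programs\Python\Python311")
exp_sim = CDLL(os.path.abspath("python/maths/exp_simplifier/ExpSim.dll"))
</code></pre>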
|
<python><c++>
|
2022-12-31 13:51:07
| 1
| 1,610
|
The Coding Fox
|
74,969,967
| 6,628,863
|
Create all permutations of size N of the digits 0-9 - as optimized as possible using numpy\scipy
|
<p>I need to create an array of all the permutations of the digits 0-9 of size N (input, 1 <= N <= 10).</p>
<p>I've tried this:</p>
<pre><code>np.array(list(itertools.permutations(range(10), n)))
</code></pre>
<p>for n=6:</p>
<pre><code>timeit np.array(list(itertools.permutations(range(10), 6)))
</code></pre>
<p>on my machine gives:</p>
<pre><code>68.5 ms ± 881 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>But it is simply not fast enough.
I need it to be below 40 ms.</p>
<p>Note:
I cannot change the machine, and the numpy version is fixed at 1.22.3.</p>
|
<python><numpy><scipy>
|
2022-12-31 13:50:41
| 1
| 490
|
Guy Barash
|
74,969,840
| 13,183,164
|
Django annotate field value from external dictionary
|
<p>Let's say I have the following dict:</p>
<pre><code>schools_dict = {
'1': {'points': 10},
'2': {'points': 14},
'3': {'points': 5},
}
</code></pre>
<p>How can I put these values into my queryset using annotate?
I would like to do something like this, but it's not working:</p>
<pre><code>schools = SchoolsExam.objects.all()
queryset = schools.annotate(
total_point = schools_dict[F('school__school_id')]['points']
)
</code></pre>
<p>Models:</p>
<pre><code>class SchoolsExam(Model):
school = ForeignKey('School', on_delete=models.CASCADE),
class School(Model):
school_id = CharField(),
</code></pre>
<p>This code gives me an error <code>KeyError: F(school__school_id)</code></p>
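<p>The closest thing I can think of (untested sketch) is translating the dict into conditional expressions instead of indexing it with <code>F</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.db.models import Case, IntegerField, Value, When

# one When per dict entry; the database picks the matching branch per row
whens = [
    When(school__school_id=k, then=Value(v['points']))
    for k, v in schools_dict.items()
]
queryset = SchoolsExam.objects.annotate(
    total_point=Case(*whens, default=Value(0), output_field=IntegerField())
)
</code></pre>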
|
<python><django><django-orm><keyerror><django-annotate>
|
2022-12-31 13:30:45
| 1
| 465
|
mecdeality
|
74,969,645
| 9,078,185
|
Kivy widget position not stable as new widgets added
|
<p>I have a very simple Kivy project where I want to ask the user to input a number and then have the app display some calculations. I start out with the following, which puts the widgets at the top of the screen just as I expect:</p>
<pre><code>class MainApp(App):
def build(self):
self.main = BoxLayout(orientation='vertical')
self.my_number = '100000'
inp_box = BoxLayout(pos_hint={'x': 0, 'top': 1},
size_hint=(0.4, 0.2))
inp_box.add_widget(Label(text='Enter something',
pos_hint={'x': 0, 'top': 1},
size_hint=(None, None),
size=(300,50),
font_size=25))
self.get_inp = TextInput(text=self.my_number,
multiline=False,
pos_hint={'x': 0, 'top': 1},
size_hint=(None, None),
size=(125,50),
font_size=25)
go = Button(text='Button',
pos_hint={'x': 0, 'top': 1},
size_hint=(None, None),
size=(100,50),
font_size=25)
go.bind(on_release=self.more_widgets)
inp_box.add_widget(self.get_inp)
inp_box.add_widget(go)
self.main.add_widget(inp_box)
</code></pre>
<p>All good so far. All widgets appear at the top of the screen as expected.</p>
<p>However if I add another <code>BoxLayout</code> under this one, it appears toward the middle of the screen vertically. Based on my understanding of <code>pos_hint</code> it should be right under the <code>inp_box</code> widget and sized to 20% of the vertical space on screen.</p>
<pre><code>lbl_box = BoxLayout(pos_hint={'x': 0, 'top': 1},
size_hint=(0.4, 0.2))
for i in ['One', 'Two', 'Three']:
lbl_box.add_widget(Label(text=i,
size_hint=(None, None),
size=(100,50),
pos_hint={'x': 0.2, 'top': 1},
font_size=20))
self.main.add_widget(lbl_box)
</code></pre>
<p>Lastly, I want the app to do some calculations and display the results. When this displays, it pushes the <code>lbl_box</code> widget up toward the <code>inp_box</code> widget. I'm not sure why this is happening.</p>
<pre><code> def more_widgets(self, instance):
self.my_number = self.get_inp.text
number_disp = GridLayout(cols=4,
pos_hint={'x': 0.1, 'top': 0.5},
size_hint=(0.8, 0.6))
for t in range(0, 20):
n = int(self.my_number) + t
number_disp.add_widget(
Label(text=f'{t}: {n:,.0f}'))
self.main.add_widget(number_disp)
</code></pre>
<p>Full MRE below:</p>
<pre><code>import kivy
from kivy.app import App
from kivy.uix.label import Label
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.gridlayout import GridLayout
class MainApp(App):
def build(self):
self.main = BoxLayout(orientation='vertical')
self.my_number = '100000'
inp_box = BoxLayout(pos_hint={'x': 0, 'top': 1},
size_hint=(0.4, 0.2))
inp_box.add_widget(Label(text='Enter something',
pos_hint={'x': 0, 'top': 1},
size_hint=(None, None),
size=(300,50),
font_size=25))
self.get_inp = TextInput(text=self.my_number,
multiline=False,
pos_hint={'x': 0, 'top': 1},
size_hint=(None, None),
size=(125,50),
font_size=25)
go = Button(text='Button',
pos_hint={'x': 0, 'top': 1},
size_hint=(None, None),
size=(100,50),
font_size=25)
go.bind(on_release=self.more_widgets)
inp_box.add_widget(self.get_inp)
inp_box.add_widget(go)
self.main.add_widget(inp_box)
lbl_box = BoxLayout(pos_hint={'x': 0, 'top': 1},
size_hint=(0.4, 0.2))
for i in ['One', 'Two', 'Three']:
lbl_box.add_widget(Label(text=i,
size_hint=(None, None),
size=(100,50),
pos_hint={'x': 0.2, 'top': 1},
font_size=20))
self.main.add_widget(lbl_box)
return self.main
def more_widgets(self, instance):
self.my_number = self.get_inp.text
number_disp = GridLayout(cols=4,
pos_hint={'x': 0.1, 'top': 0.5},
size_hint=(0.8, 0.6))
for t in range(0, 20):
n = int(self.my_number) + t
number_disp.add_widget(
Label(text=f'{t}: {n:,.0f}'))
self.main.add_widget(number_disp)
if __name__ == '__main__':
app = MainApp()
app.run()
</code></pre>
|
<python><kivy>
|
2022-12-31 12:49:03
| 1
| 1,063
|
Tom
|
74,969,579
| 9,640,238
|
Format dates on an entire column with ExcelWriter and Openpyxl
|
<p>I'm trying to write a pandas DataFrame to Excel, with dates formatted as "YYYY-MM-DD", omitting the time. Since I need to write multiple sheets, and I want to use some advanced formatting options (namely setting the column width), I'm using an <code>ExcelWriter</code> object and <code>openpyxl</code> as engine.</p>
<p>Now, I just can't seem to figure out how to format my date column.</p>
<p>Starting with</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'string_col': ['abc', 'def', 'ghi']})
df['date_col'] = pd.date_range(start='2020-01-01', periods=3)
with pd.ExcelWriter('test.xlsx', engine='openpyxl') as writer:
df.to_excel(writer, 'test', index=False)
</code></pre>
<p>This will write the dates as <code>2020-01-01 00:00:00</code>. For some reason I can't understand, adding <code>datetime_format='YYYY-MM-DD'</code> has no effect <em>if openpyxl is the selected engine</em> (works just fine if <code>engine</code> is left unspecified).</p>
<p>So I'm trying to work around this:</p>
<pre class="lang-py prettyprint-override"><code>with pd.ExcelWriter('test.xlsx', engine='openpyxl') as writer:
df.to_excel(writer, 'test', index=False)
writer.sheets['test'].column_dimensions['B'].width = 50
writer.sheets['test'].column_dimensions['B'].number_format = 'YYYY-MM-DD'
</code></pre>
<p>The column width is properly applied, but not the number formatting. On the other hand, it does work applying the style to an individual cell: <code>writer.sheets['test']['B2'].number_format = 'YYYY-MM-DD'</code>.</p>
<p>But how can I apply the formatting to the entire column (I have tens of thousands of cells to format)? I couldn't find anything in the openpyxl documentation on how to address an entire column...</p>
<blockquote>
<p>Note: I could do:</p>
<pre class="lang-py prettyprint-override"><code>for cell in writer.sheets['test']['B']:
cell.number_format = 'YYYY-MM-DD'
</code></pre>
<p>but my point is precisely to avoid iterating over each individual cell.</p>
</blockquote>
|
<python><pandas><date><openpyxl>
|
2022-12-31 12:37:46
| 2
| 2,690
|
mrgou
|
74,969,550
| 1,745,291
|
How to display app logs in nginx-unit docker?
|
<p>I am using the official NGINX Unit docker image, and I would like to run a hello-world WSGI application while being able to print to the docker-compose output.</p>
<p>How can I do that?</p>
<p>Here is the hello world:</p>
<pre><code>print('DURING INITIAL SETUP')
def application(environ, start_response):
print('DURING REQUEST')
start_response('200 OK', [('Content-Type', 'text/plain')])
yield b'Hello, World!\n'
</code></pre>
<p>But nothing is ever written to the output... (I also tried writing directly to /proc/1/fd/1, but I lack the permissions.)</p>
|
<python><docker><logging><wsgi><nginx-unit>
|
2022-12-31 12:33:08
| 0
| 3,937
|
hl037_
|
74,969,462
| 11,985,743
|
Catch a Python exception that a module already caught
|
<p>I am using the <code>requests</code> Python module to connect to a website through a SOCKS4 proxy. While trying to connect to the website, the program fails to even connect to the SOCKS4 proxy. Therefore, the <a href="https://pypi.org/project/PySocks/" rel="nofollow noreferrer">PySocks</a> module throws a <code>TimeoutError</code> exception, which gets caught and rethrown as a <code>ProxyConnectionError</code> exception.</p>
<p>If this was the end of the story, I could have just caught the <code>ProxyConnectionError</code> directly. However, the underlying <code>urllib3</code> module catches the exception and re-raises a <code>NewConnectionError</code> instead. You can see this in the <a href="https://github.com/urllib3/urllib3/blob/main/src/urllib3/contrib/socks.py#L133" rel="nofollow noreferrer">official source code</a>.</p>
<p>Here is the final traceback that I get from my program (cut many lines for brevity):</p>
<pre><code>Traceback (most recent call last):
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
socks.ProxyConnectionError: Error connecting to SOCKS4 proxy ...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
urllib3.exceptions.NewConnectionError: <urllib3.contrib.socks.SOCKSHTTPSConnection object at 0x0000025DA70CCDC0>: Failed to establish a new connection: ...
During handling of the above exception, another exception occurred:
... (eventually raises requests.exceptions.ConnectionError, then terminates the program)
</code></pre>
<p>My target is to catch all the PySocks errors (such as the <code>ProxyConnectionError</code> that was raised in this example), which can be done by catching the base exception class <code>socks.ProxyError</code>.</p>
<p>As the <code>requests</code> library is a downloaded module, I don't have the liberty of <a href="https://stackoverflow.com/a/64206788/11985743">editing the underlying code</a> (if I edit the source code directly, then these changes won't be updated if someone else downloads my code and installs the requests library from PyPI).</p>
<p>Is there any way to catch an error that was already caught inside another module?</p>
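<p>One approach I am experimenting with (a sketch) is walking the exception chain via <code>__cause__</code>/<code>__context__</code>, since re-raising preserves the original exceptions:</p>
<pre class="lang-py prettyprint-override"><code>import requests
import socks


def find_cause(exc, exc_type):
    """Walk the exception chain looking for an instance of exc_type."""
    while exc is not None:
        if isinstance(exc, exc_type):
            return exc
        exc = exc.__cause__ or exc.__context__
    return None


try:
    requests.get("https://example.com",
                 proxies={"https": "socks4://127.0.0.1:1080"}, timeout=5)
except requests.exceptions.ConnectionError as e:
    proxy_err = find_cause(e, socks.ProxyError)
    if proxy_err is not None:
        print("underlying PySocks error:", proxy_err)
</code></pre>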
|
<python><exception>
|
2022-12-31 12:19:40
| 1
| 3,666
|
Xiddoc
|
74,969,359
| 2,313,176
|
Using local project installs (also called editable installs) with root user
|
<p>Using git submodules I put the library <code>my-pkg</code> into a subfolder of my scripts. In <code>my-pkg</code> there is a valid package structure for a package <code>my_pkg</code>.</p>
<pre><code>my-pkg/
├─ setup.cfg
├─ setup.py
├─ my_pkg
| ├─ __init__.py
...
</code></pre>
<p>It is possible to <code>pip install -e my-pkg</code> when not using the root user (in a testing environment where it is possible to use other users). How can I debug or solve this for the root user in the target environment?</p>
<p>There is no error message, the only feedback is:</p>
<pre><code>Installing collected packages: my-pkg
Running setup.py develop for my-pkg
Successfully installed my-pkg
</code></pre>
<p>The statement has no effect when using the root user. When using the statement with other users, <code>my-pkg</code> becomes importable.</p>
<p>It is possible there are other differences between the environment where the local project install works and where it doesn't. But the user thing is the most obvious. Everything else runs with all versions and files fixed using Docker.</p>
<p>Some additional information:</p>
<ul>
<li>version of setuptools (I tried several other as well): 65.6.3</li>
<li>It does not make a difference if <code>python -m pip</code> or absolute paths are used for pip or python3</li>
</ul>
|
<python><pip><pypi>
|
2022-12-31 11:59:01
| 0
| 369
|
Telcrome
|
74,969,352
| 757,714
|
Altair: change point symbol
|
<p>I use the code below to create this faceted chart:</p>
<p><a href="https://vega.github.io/editor/#/gist/a9f238f389418c106b7aacaa10561281/spec.json" rel="nofollow noreferrer">https://vega.github.io/editor/#/gist/a9f238f389418c106b7aacaa10561281/spec.json</a></p>
<p>I would like to use another symbol (a dash, for example) instead of the gray points.</p>
<p>I found a lot of examples using a field to render a point by value, but I haven't found how to apply a specific symbol to all points of a layer.</p>
<p>Thank you</p>
<pre class="lang-py prettyprint-override"><code>df=pd.read_csv("tmp_reshape.csv",keep_default_na=False)
mean=alt.Chart(df).mark_line(color="#0000ff",strokeWidth=1).encode(
alt.X('period:O'),
alt.Y('mean:Q',title=None, scale=alt.Scale(zero=False))
)
median=alt.Chart(df).mark_line(color="#ffa500",strokeWidth=1,strokeDash=[5, 5]).encode(
alt.X('period:O'),
alt.Y('median:Q',title=None, scale=alt.Scale(zero=False))
)
minmax=alt.Chart(df).mark_rule(color="grey",strokeWidth=0.5).encode(
alt.X('period:O'),
alt.Y('minimum:Q',title=None),
alt.Y2('maximum:Q',title=None)
)
min=alt.Chart(df).mark_point(filled=True,color="grey",size=15).encode(
alt.X('period:O'),
alt.Y('minimum:Q',title=None)
)
max=alt.Chart(df).mark_point(filled=True,color="grey",size=15).encode(
alt.X('period:O'),
alt.Y('maximum:Q',title=None)
)
alt.layer(minmax,min,max,median,mean).properties(width=470,height=100).facet(row='param:N').resolve_scale(y='independent')
</code></pre>
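<p>What I am hoping for is something like a fixed <code>shape</code> mark property (a sketch; I believe Vega-Lite's point mark supports <code>shape="stroke"</code> for a dash-like symbol):</p>
<pre class="lang-py prettyprint-override"><code># sketch: the same min layer, with a fixed dash-like shape for every point
min=alt.Chart(df).mark_point(shape="stroke", color="grey", size=15).encode(
    alt.X('period:O'),
    alt.Y('minimum:Q', title=None)
)
</code></pre>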
|
<python><charts><symbols><point><altair>
|
2022-12-31 11:58:14
| 1
| 5,837
|
aborruso
|
74,969,204
| 7,500,268
|
write a train routine in class with pytorch
|
<p>I want to write a train function inside a class for training a model. The following code reports an error; can anyone give me a hint for solving this issue?</p>
<pre><code>import numpy as np
import os
import sys
sys.executable
sys.version
##define a neuralnet class
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
os.chdir("/Users/zhangzhongheng/Downloads/")
os.getcwd()
</code></pre>
<p>#Download training data from open datasets.</p>
<pre><code>training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor(),
)
</code></pre>
<h1>Download test data from open datasets.</h1>
<pre><code>test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor(),
)
batch_size = 64
</code></pre>
<h1>Create data loaders.</h1>
<pre><code>train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)
for X, y in test_dataloader:
print(f"Shape of X [N, C, H, W]: {X.shape}")
print(f"Shape of y: {y.shape} {y.dtype}")
break
</code></pre>
<h1>Get cpu or gpu device for training.</h1>
<pre><code>device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")
</code></pre>
<h1>Define model</h1>
<pre><code>class NeuralNetwork(nn.Module):
def __init__(self):
super().__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10)
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
def model_train(self,dataloader):
self.train()
size = len(dataloader.dataset)
optimizer = torch.optim.SGD(self.parameters(), lr=1e-3)
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
# Compute prediction error
pred = self.forward(X)
loss = nn.CrossEntropyLoss(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
Model = NeuralNetwork()
epochs = 5
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
Model.model_train(train_dataloader)
print("Done!")
</code></pre>
<p>The above code reported the following error:</p>
<pre><code>Epoch 1
-------------------------------
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
Done!
</code></pre>
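<p>While debugging, I noticed that <code>nn.CrossEntropyLoss</code> is a module that has to be instantiated before being called; could my error simply be this (untested)?</p>
<pre class="lang-py prettyprint-override"><code># sketch: instantiate the loss module first, then call it on the tensors
loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(pred, y)
</code></pre>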
|
<python><python-3.x><pytorch>
|
2022-12-31 11:28:16
| 1
| 797
|
Z. Zhang
|
74,969,067
| 5,678,057
|
How to query dates in pandas using pd.DataFrame.query()
|
<p>When querying rows in a dataframe based on a date column's value, it works when comparing just the year, but not for a full date.</p>
<pre><code>fil1.query('Date>2019')
</code></pre>
<p>This works fine. However, when giving a complete date, it fails.</p>
<pre><code>fil1.query('Date>01-01-2019')
#fil1.query('Date>1-1-2019') # fails as well
TypeError: Invalid comparison between dtype=datetime64[ns] and int
</code></pre>
<p>What is the correct way to use dates with the <code>query</code> function? The <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html" rel="nofollow noreferrer">docs</a> don't seem to help.</p>
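<p>For context, here are two variants I would expect to work (untested): quoting the date so it is parsed as a date string rather than integer arithmetic, or referencing a Python variable with <code>@</code>:</p>
<pre class="lang-py prettyprint-override"><code>fil1.query('Date > "2019-01-01"')

cutoff = pd.Timestamp('2019-01-01')
fil1.query('Date > @cutoff')
</code></pre>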
|
<python><pandas>
|
2022-12-31 10:58:27
| 3
| 389
|
Salih
|
74,969,012
| 18,313,588
|
replace column values with values in another list when values appear in a specific list
|
<p>Hi I have a list and a pandas dataframe</p>
<pre><code>[ apple, tomato, mango ]
</code></pre>
<pre><code>values
tomato
pineapple
apple
banana
mango
</code></pre>
<p>I would like to replace the column values that appear in the list with the values in the new list as follow</p>
<pre><code>[ fruit1, fruit2, fruit3 ]
</code></pre>
<p>My expected output is a pandas dataframe that looks like this</p>
<pre><code>values
fruit2
pineapple
fruit1
banana
fruit3
</code></pre>
<p>How can I accomplish this efficiently?</p>
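<p>My current (untested) idea is to zip the two lists into a mapping and use <code>replace</code>:</p>
<pre class="lang-py prettyprint-override"><code>old = ['apple', 'tomato', 'mango']
new = ['fruit1', 'fruit2', 'fruit3']
# values absent from the mapping (pineapple, banana) are left untouched
df['values'] = df['values'].replace(dict(zip(old, new)))
</code></pre>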
|
<python><python-3.x><pandas><dataframe>
|
2022-12-31 10:44:37
| 2
| 493
|
nerd
|
74,968,971
| 4,418,481
|
Plot each Dask partition seperatly using python
|
<p>I'm using <code>Dask</code> to read 500 parquet files and it does it much faster than other methods that I have tested.</p>
<p>Each parquet file contains a time column and many other variable columns.</p>
<p>My goal is to create a single plot that will have 500 lines of variable vs time.</p>
<p>When I use the following code, it works very fast compared to all other methods that I have tested but it gives me a single "line" on the plot:</p>
<pre><code>import dask.dataframe as dd
import matplotlib.pyplot as plt
import time
start = time.time()
ddf = dd.read_parquet("results_parq/*.parquet")
plt.plot(ddf['t'].compute(),ddf['reg'].compute())
plt.show()
end = time.time()
print(end-start)
</code></pre>
<p><a href="https://i.sstatic.net/A4iRh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A4iRh.png" alt="enter image description here" /></a></p>
<p>From my understanding, it happens because Dask just plots the following:</p>
<pre><code>t
0
0.01
.
.
100
0
0.01
.
.
100
0
</code></pre>
<p>What I mean is that it plots one huge concatenated column instead of 500 separate lines.</p>
<p>One possible solution that I tried to do is to plot it in a for loop over the partitions:</p>
<pre><code>import dask.dataframe as dd
import matplotlib.pyplot as plt
import time
start = time.time()
ddf = dd.read_parquet("results_parq/*.parquet")
for p in ddf.partitions:
plt.plot(p['t'].compute(),p['reg'].compute())
plt.show()
end = time.time()
print(end-start)
</code></pre>
<p>It does the job and the resulting plot looks like I want:</p>
<p><a href="https://i.sstatic.net/riImB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/riImB.png" alt="enter image description here" /></a></p>
<p>However, it results in much longer times.</p>
<p>Is there a way to do something like this while still getting Dask's multicore benefits? Could <code>map_partitions</code> somehow be used for it?</p>
<p>Thank you</p>
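<p>One idea I have not benchmarked yet: compute all partitions in parallel first, then do the (cheap) plotting serially:</p>
<pre class="lang-py prettyprint-override"><code>import dask

# one (t, reg) tuple per partition, all computed in a single parallel pass
parts = dask.compute(*[(p['t'], p['reg']) for p in ddf.partitions])
for t, reg in parts:
    plt.plot(t, reg)
plt.show()
</code></pre>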
|
<python><dask>
|
2022-12-31 10:36:02
| 1
| 1,859
|
Ben
|
74,968,936
| 17,148,496
|
How to make a barplot for the target variable with each predictive variable?
|
<p>I have a pandas df, that looks something like this (after scaling):</p>
<pre><code> Age blood_Press golucse Cholesterol
0 1.953859 -1.444088 -1.086684 -1.981315
1 0.357992 -0.123270 -0.585981 0.934929
2 0.997219 0.998712 2.005212 0.019169
3 2.589318 -0.528543 -1.123484 -1.299904
4 2.088141 0.792976 0.021526 -0.777959
</code></pre>
<p>and a binary target feature:</p>
<pre><code> y
0 1.0
1 1.0
2 1.0
3 0.0
4 1.0
</code></pre>
<p>I want to make a bar chart for each predictive feature with the <code>y</code> target. So the <code>y</code> values would be on the x-axis, which is just <code>1</code> or <code>0</code>, and on the y-axis would be the values for the predictive feature. For example, something that looks like this (ignore the features used here, just an example of what I need). So here instead of <code>male</code> and <code>female</code> I'd have <code>1</code> and <code>0</code>...</p>
<p><a href="https://i.sstatic.net/mpjbX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mpjbX.png" alt="enter image description here" /></a></p>
<p>the code for this plot is something like this:</p>
<pre><code>myPlot = sns.catplot(data = data, x = 'the y feature' , y = 'the x feature', kind = 'bar')
myPlot.fig.suptitle('title title', size=15, y=1.);
myPlot.set_ylabels('Y label whatever', fontsize=15, x=1.02)
myPlot.fig.set_size_inches(9,8);
</code></pre>
<p>But I don't want to repeat this for every feature; I'm sure it's much simpler than that. But how?</p>
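<p>What I am imagining (a sketch; <code>df_all</code> is a hypothetical frame that already contains the <code>y</code> column next to the features) is melting to long form so one faceted <code>catplot</code> covers every feature:</p>
<pre class="lang-py prettyprint-override"><code># one subplot per feature, 1/0 on the x-axis, feature values on the y-axis
long_df = df_all.melt(id_vars='y', var_name='feature', value_name='value')
g = sns.catplot(data=long_df, x='y', y='value', col='feature',
                kind='bar', col_wrap=2, sharey=False)
</code></pre>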
|
<python><pandas><matplotlib><seaborn>
|
2022-12-31 10:28:47
| 1
| 375
|
Kev
|
74,968,869
| 8,118,024
|
How to call method in correct asyncio context from a different thread in Python (Textual framework)
|
<p>I have an old python multithreaded console app using urwid as its visualization library.
I'd like to switch to <a href="https://textual.textualize.io/" rel="nofollow noreferrer">Textual</a> which looks amazing.</p>
<p>I've never managed a mix of multithreaded and asyncio code, and I'd like to avoid rewriting the old app to migrate all the thread-based code to async.</p>
<p>I'm not able to figure out how to call Textual widgets methods that update the ui from external threads.</p>
<p>For example, the following test code that simulates a separate thread trying to add lines to a TextLog widget raises a <code>NoActiveAppError</code> exception.</p>
<pre class="lang-py prettyprint-override"><code>
import time
import threading
from textual.app import App, ComposeResult
from textual.widgets import TextLog
from textual.pilot import Pilot
# simple textual app with a single textlog widget
class MyApp(App):
def compose(self) -> ComposeResult:
self.logbox = TextLog(id="log", wrap=True, max_lines=1024)
yield self.logbox
# init textlog content when ready
async def on_ready(pilot: Pilot):
pilot.app.logbox.write("Welcome to this and that!")
# separate thread
def test_thread():
i = 0
while True:
# do stuff
time.sleep(3)
i += 1
# this throws a NoActiveApp exception
app.logbox.write(f"Iteration #{i}")
app = MyApp()
threading.Thread(target=test_thread).start()
app.run(auto_pilot=on_ready)
</code></pre>
<p>It looks like Textual is making use of contextvars, thus storing in the main async loop thread context a copy of the app instance.</p>
<p>For instance I'd like to call the TextLog widget <code>write</code> method from the separate thread, but I cannot figure out how.</p>
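<p>The closest lead I have found (untested; assuming a Textual release that exposes <code>App.call_from_thread</code>) would be marshalling the call back onto the app's own event loop:</p>
<pre class="lang-py prettyprint-override"><code>def test_thread():
    i = 0
    while True:
        time.sleep(3)
        i += 1
        # schedule the write on the app's thread/loop instead of calling directly
        app.call_from_thread(app.logbox.write, f"Iteration #{i}")
</code></pre>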
|
<python><multithreading><python-asyncio><textual-framework>
|
2022-12-31 10:14:33
| 0
| 305
|
Alberto Pastore
|
74,968,820
| 1,032,099
|
Passing a variable to include in extends in Django templates
|
<p>I have the following structure of templates:</p>
<p><code>main.html</code></p>
<pre><code><html>
<body>
<p>
This works: {% block title %}{% endblock %}
</p>
{% include 'heading.html' with title=title %} {# but this does not work since it is not a variable #}
</body>
</html>
</code></pre>
<p><code>heading.html</code></p>
<pre><code><p>
{{ title }}
</p>
</code></pre>
<p><code>page.html</code></p>
<pre><code>{% extends 'main.html' %}
{% block title %}test title{% endblock %}
</code></pre>
<p>How can I pass the title from <code>page.html</code> to <code>heading.html</code>? Ideally, it should be defined as a block like now, but alternatives are also welcome. I'd like to contain the solution within the templates if possible.</p>
<p>Clarification:</p>
<p>This is a minimal example, but I have multiple pages like <code>main.html</code> that share a larger header that has title and some other variables that I'd like defined in the child template (not necessarily as variable as long as the text is passed).</p>
<p>I could put the title into the view code, but this solution would just decrease the separation of displayed data from the logic.</p>
|
<python><django><django-templates>
|
2022-12-31 10:06:15
| 6
| 1,238
|
Xilexio
|
74,968,761
| 12,200,808
|
pip3 can't download the latest tflite-runtime
|
<p>The current version of <code>tflite-runtime</code> is <code>2.11.0</code>:</p>
<p><a href="https://pypi.org/project/tflite-runtime/" rel="noreferrer">https://pypi.org/project/tflite-runtime/</a></p>
<p>Here is a test that downloads <code>tflite-runtime</code> to the <code>/tmp</code> folder:</p>
<pre><code>mkdir -p /tmp/test
cd /tmp/test
echo "tflite-runtime == 2.11.0" > ./test.txt
pip3 download -r ./test.txt
</code></pre>
<p><strong>Here is the error:</strong></p>
<pre><code>ERROR: Could not find a version that satisfies the requirement tflite-runtime==2.11.0 (from versions: none)
ERROR: No matching distribution found for tflite-runtime==2.11.0
</code></pre>
<p><strong>Here is the pip3 version:</strong></p>
<pre><code># pip3 --version
pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)
</code></pre>
<p>What's wrong in the above <code>pip3 download</code>? Why can't it find the latest version? And how to fix?</p>
|
<python><ubuntu><pip><tensorflow-lite>
|
2022-12-31 09:51:59
| 3
| 1,900
|
stackbiz
|
74,968,585
| 1,139,541
|
Using environment variables in `pyproject.toml` for versioning
|
<p>I am trying to migrate my package from <code>setup.py</code> to <code>pyproject.toml</code> and I am not sure how to do the dynamic versioning in the same way as before. Currently I can pass the development version using environment variables when the build is for development.</p>
<p>The <code>setup.py</code> file looks similar to this:</p>
<pre><code>import os
from setuptools import setup
import my_package
if __name__ == "__main__":
dev_version = os.environ.get("DEV_VERSION")
version = dev_version if dev_version else f"{my_package.__version__}"
setup(
name="my_package",
version=version,
...
)
</code></pre>
<p>Is there a way to do similar thing when using <code>pyproject.toml</code> file?</p>
|
<python><version><setuptools><pyproject.toml>
|
2022-12-31 09:16:16
| 4
| 852
|
Ilya
|
74,968,428
| 8,207,701
|
Pytorch - Object of type 'method' has no len()
|
<p>So I'm trying to train a time-series model using the PyTorch Lightning library.</p>
<p>But after running the following code,</p>
<pre><code>trainer = pl.Trainer(
max_epochs = N_EPOCHS,
)
trainer.fit(model,data_module)
</code></pre>
<p>I'm getting this error,</p>
<pre><code>/usr/local/lib/python3.8/dist-packages/torch/utils/data/sampler.py in __iter__(self)
64
65 def __iter__(self) -> Iterator[int]:
---> 66 return iter(range(len(self.data_source)))
67
68 def __len__(self) -> int:
TypeError: object of type 'method' has no len()
</code></pre>
<p><a href="https://i.sstatic.net/tMmE9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tMmE9.png" alt="enter image description here" /></a></p>
<p>Also here's my data module initialization,</p>
<pre><code>data_module = StockPriceDataModule(train_sequences, test_sequences, batch_size=BATCH_SIZE)
data_module.setup()
</code></pre>
<p>DataModule class</p>
<pre><code>class StockPriceDataModule(pl.LightningDataModule):
def __init__(self, train_sequences, test_sequences, batch_size = 8):
super().__init__()
self.train_sequences = train_sequences
self.test_sequences = test_sequences
self.batch_size = batch_size
def setup(self, stage=None):
self.train_dataset = StockDataset(self.train_sequences)
self.test_dataset = StockDataset(self.test_sequences)
def train_dataloader(self):
return DataLoader(
self.train_dataset,
batch_size = self.batch_size,
shuffle = False,
num_workers = 2
)
def val_dataloader(self):
return DataLoader(
self.test_dataloader,
batch_size=1,
shuffle = False,
num_workers=1,
)
def test_dataloader(self):
return DataLoader(
self.test_dataloader,
batch_size=1,
shuffle = False,
num_workers=1,
)
</code></pre>
<p>I'm kind of a beginner, so I was just following this video <a href="https://www.youtube.com/watch?v=ODEGJ_kh2aA" rel="nofollow noreferrer">https://www.youtube.com/watch?v=ODEGJ_kh2aA</a> to learn about time-series forecasting with multiple features using LSTM. My code is almost the same...</p>
<p>So what am I doing wrong?</p>
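<p>One thing I am unsure about (could this be it?): in <code>val_dataloader</code> and <code>test_dataloader</code> I pass <code>self.test_dataloader</code>, which is a method, instead of <code>self.test_dataset</code>; that would explain <code>len()</code> failing on a method. A corrected sketch:</p>
<pre class="lang-py prettyprint-override"><code>def val_dataloader(self):
    return DataLoader(
        self.test_dataset,  # was: self.test_dataloader (a method, so len() fails)
        batch_size=1,
        shuffle=False,
        num_workers=1,
    )
</code></pre>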
|
<python><machine-learning><pytorch><lstm><pytorch-lightning>
|
2022-12-31 08:36:24
| 1
| 1,216
|
Bucky
|
74,968,179
| 18,044
|
Session state is reset in Streamlit multipage app
|
<p>I'm building a Streamlit multipage application and am having trouble keeping session state when switching between pages. My main page is called mainpage.py and has something like the following:</p>
<pre><code>import streamlit as st
if "multi_select" not in st.session_state:
st.session_state["multi_select"] = ["abc", "xyz"]
if "select_slider" not in st.session_state:
st.session_state["select_slider"] = ("1", "10")
if "text_inp" not in st.session_state:
st.session_state["text_inp"] = ""
st.sidebar.multiselect(
"multiselect",
["abc", "xyz"],
key="multi_select",
default=st.session_state["multi_select"],
)
st.sidebar.select_slider(
"number range",
options=[str(n) for n in range(1, 11)],
key="select_slider",
value=st.session_state["select_slider"],
)
st.sidebar.text_input("Text:", key="text_inp")
for v in st.session_state:
st.write(v, st.session_state[v])
</code></pre>
<p>Next, I have another page called 'anotherpage.py' in a subdirectory called 'pages' with this content:</p>
<pre><code>import streamlit as st
for v in st.session_state:
st.write(v, st.session_state[v])
</code></pre>
<p>If I run this app, change the values of the controls and switch to the other page, I see the values for the control being retained and printed. However, if I switch back to the main page, everything gets reset to the original values. For some reason <code>st.session_state</code> is cleared.</p>
<p>Anyone have any idea how to keep the values in the session state? I'm using Python <code>3.11.1</code> and Streamlit <code>1.16.0</code></p>
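<p>For what it's worth, the only workaround I have seen suggested (untested) is re-assigning every key to itself at the top of each page, so widget state is not garbage-collected on the page switch:</p>
<pre class="lang-py prettyprint-override"><code># run at the top of every page before any widgets are created
for k, v in st.session_state.items():
    st.session_state[k] = v
</code></pre>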
|
<python><session-state><streamlit>
|
2022-12-31 07:37:13
| 3
| 23,709
|
MvdD
|
74,968,078
| 18,059,131
|
Does the Gmail API skip verification code emails? Python Gmail API
|
<p>I'm trying to use the Gmail API to obtain verification codes for 2FA. However, the following code seems to literally skip my verification-code emails:</p>
<pre><code> def getEmails(self):
SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']
creds = None
if os.path.exists(self.CREDENTIALS_PATH+'\\token.pickle'):
with open(self.CREDENTIALS_PATH+'\\token.pickle', 'rb') as token:
creds = pickle.load(token)
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(self.CREDENTIALS_PATH+'\\credentials.json', SCOPES)
creds = flow.run_local_server(port=0)
with open(self.CREDENTIALS_PATH+'\\token.pickle', 'wb') as token:
pickle.dump(creds, token)
service = build('gmail', 'v1', credentials=creds)
result = service.users().messages().list(userId='me').execute()
result = service.users().messages().list(maxResults=5, userId='me').execute()
messages = result.get('messages')
for msg in messages:
txt = service.users().messages().get(userId='me', id=msg['id']).execute()
try:
payload = txt['payload']
headers = payload['headers']
for d in headers:
if d['name'] == 'Subject':
subject = d['value']
if d['name'] == 'From':
sender = d['value']
parts = payload.get('parts')[0]
data = parts['body']['data']
data = data.replace("-","+").replace("_","/")
decoded_data = base64.b64decode(data)
soup = BeautifulSoup(decoded_data , "lxml")
body = soup.body()
print("Subject: ", subject)
print("From: ", sender)
print("Message: ", body)
print('\n')
except:
pass
</code></pre>
<p>Is this actually the case? Does the Gmail API skip verification-code emails? If not, what could I be doing wrong?</p>
|
<python><gmail-api>
|
2022-12-31 07:13:13
| 1
| 318
|
prodohsamuel
|
74,968,012
| 755,229
|
Why do os.environ.keys() and os.environ.items() return semantically the same data?
|
<p>running Ipython3 using Python3.10 on Ubuntu 22.10</p>
<pre><code>import os

a = os.environ.keys()
b = os.environ.items()
</code></pre>
<p>I expect <strong>a</strong> to be a <strong>list</strong> of environment variable keys/names,
such as:</p>
<pre><code>['SHELL','SESSION_MANAGER',......]
</code></pre>
<p>but instead I got:</p>
<pre><code>KeysView(environ({'SHELL': '/bin/bash', 'SESSION_MANAGER': 'local....}))
</code></pre>
<p>and <strong>b</strong> which I expected to return to me tuples of key value pair I got this:</p>
<pre><code>ItemsView(environ({'SHELL': '/bin/bash', 'SESSION_MANAGER': 'local
</code></pre>
<p>This seems to me like the same data wrapped in something else. Technically there is nothing wrong with these two, but it seems to defeat the purpose: as if you gave someone a $10 bill to get a loaf of bread and they just wrapped the note in an envelope marked <strong>loaf of bread</strong>.</p>
<p>What is it that I am ignorant of here?</p>
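<p>For completeness, wrapping the views does give the plain structures I expected:</p>
<pre class="lang-py prettyprint-override"><code>import os

names = list(os.environ.keys())    # ['SHELL', 'SESSION_MANAGER', ...]
pairs = list(os.environ.items())   # [('SHELL', '/bin/bash'), ...]
</code></pre>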
|
<python><ubuntu><environment-variables>
|
2022-12-31 06:56:23
| 1
| 4,424
|
Max
|
74,967,983
| 7,477,154
|
Remove specific string and all number and blank string from array in python
|
<p>I want to remove some specific words, all strings that contain numbers, and all blank strings from this array:</p>
<pre><code>reviews = ['', 'alternative', ' ', 'transcript', ' ', 'alive I', 'm Alive I', 'm Alive I', 'm Alive', ' ', 'confidence', ' 0', '987629', ' ', 'transcript', ' ', 'alive I', 'm Alive I', 'm Alive I', 'm Alive yeah', ' ', 'transcript', ' ', 'alive I', 'm Alive I', 'm Alive I', 'm Alive life', ' ', 'final', ' True', '']
for review in reviews:
if review == '' or review == ' ' or review == 'alternative' or review == 'transcript' or review.replace(' ', '').isdigit() or review.isdigit():
reviews.remove(review)
print(reviews)
</code></pre>
<p>but I am not getting the expected output; the result still shows the numbers and those words:
['alternative', 'transcript', 'alive I', 'm Alive I', 'm Alive I', 'm Alive', 'confidence', '987629', 'transcript', 'alive I', 'm Alive I', 'm Alive I', 'm Alive yeah', 'transcript', 'alive I', 'm Alive I', 'm Alive I', 'm Alive life', 'final', ' True']</p>
<p>I cannot understand what I am doing wrong. Please help.</p>
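<p>For reference, a filtered rebuild of what I am trying to express (untested sketch, avoiding removal while iterating):</p>
<pre class="lang-py prettyprint-override"><code>unwanted = {'', ' ', 'alternative', 'transcript'}
cleaned = [r for r in reviews
           if r not in unwanted and not r.replace(' ', '').isdigit()]
print(cleaned)
</code></pre>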
|
<python><arrays><arraylist>
|
2022-12-31 06:50:15
| 1
| 339
|
Ferdousi
|
74,967,916
| 3,713,236
|
How to create Predicted vs. Actual plot using abline_plot and statsmodels
|
<p>I am trying to recreate <a href="https://medium.com/analytics-vidhya/eda-and-multiple-linear-regression-on-boston-housing-in-r-270f858dc7b" rel="nofollow noreferrer">this plot from this website</a> in Python instead of R:
<a href="https://i.sstatic.net/h1cuZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h1cuZ.png" alt="enter image description here" /></a></p>
<p><strong>Background</strong></p>
<p>I have a dataframe called <code>boston</code> (the popular educational boston housing dataset).</p>
<p>I created a <strong>multiple</strong> linear regression model with some variables using the statsmodels API below. Everything works.</p>
<pre><code>import statsmodels.formula.api as smf
results = smf.ols('medv ~ col1 + col2 + ...', data=boston).fit()
</code></pre>
<p>I create a dataframe of actual values from the boston dataset and predicted values from above linear regression model.</p>
<pre><code>new_df = pd.concat([boston['medv'], results.fittedvalues], axis=1, keys=['actual', 'predicted'])
</code></pre>
<p>This is where I get stuck. When I try to plot the regression line on top of the scatterplot, I get this error below.</p>
<pre><code>from statsmodels.graphics.regressionplots import abline_plot
# scatter-plot data
ax = new_df.plot(x='actual', y='predicted', kind='scatter')
# plot regression line
abline_plot(model_results=results, ax=ax)
ValueError Traceback (most recent call last)
<ipython-input-156-ebb218ba87be> in <module>
5
6 # plot regression line
----> 7 abline_plot(model_results=results, ax=ax)
/usr/local/lib/python3.8/dist-packages/statsmodels/graphics/regressionplots.py in abline_plot(intercept, slope, horiz, vert, model_results, ax, **kwargs)
797
798 if model_results:
--> 799 intercept, slope = model_results.params
800 if x is None:
801 x = [model_results.model.exog[:, 1].min(),
ValueError: too many values to unpack (expected 2)
</code></pre>
<p>Here are the independent variables I used in the linear regression if that helps:</p>
<pre><code>{'crim': {0: 0.00632, 1: 0.02731, 2: 0.02729, 3: 0.03237, 4: 0.06905},
'chas': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
'nox': {0: 0.538, 1: 0.469, 2: 0.469, 3: 0.458, 4: 0.458},
'rm': {0: 6.575, 1: 6.421, 2: 7.185, 3: 6.998, 4: 7.147},
'dis': {0: 4.09, 1: 4.9671, 2: 4.9671, 3: 6.0622, 4: 6.0622},
'tax': {0: 296, 1: 242, 2: 242, 3: 222, 4: 222},
'ptratio': {0: 15.3, 1: 17.8, 2: 17.8, 3: 18.7, 4: 18.7},
'lstat': {0: 4.98, 1: 9.14, 2: 4.03, 3: 2.94, 4: 5.33},
'rad3': {0: 0, 1: 0, 2: 0, 3: 1, 4: 1},
'rad4': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
'rad5': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
'rad6': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
'rad7': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
'rad8': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
'rad24': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
'dis_sq': {0: 16.728099999999998,
1: 24.67208241,
2: 24.67208241,
3: 36.75026884,
4: 36.75026884},
'lstat_sq': {0: 24.800400000000003,
1: 83.53960000000001,
2: 16.240900000000003,
3: 8.6436,
4: 28.4089},
'nox_sq': {0: 0.28944400000000003,
1: 0.21996099999999996,
2: 0.21996099999999996,
3: 0.209764,
4: 0.209764},
'rad24_lstat': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0},
'rm_lstat': {0: 32.743500000000004,
1: 58.687940000000005,
2: 28.95555,
3: 20.57412,
4: 38.09351},
'rm_rad24': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}}
</code></pre>
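<p>For context, my fallback plan (untested) is to skip <code>abline_plot</code> entirely, since a predicted-vs-actual reference line is just <code>y = x</code>:</p>
<pre class="lang-py prettyprint-override"><code>ax = new_df.plot(x='actual', y='predicted', kind='scatter')
lims = [new_df.min().min(), new_df.max().max()]
ax.plot(lims, lims, color='red')  # the y = x reference line
</code></pre>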
|
<python><matplotlib><linear-regression><statsmodels>
|
2022-12-31 06:33:34
| 1
| 9,075
|
Katsu
|
74,967,665
| 1,610,428
|
Displaying a Matplotlib plot in Kivy without using the kivy-garden tools
|
<p>I am just learning to use Kivy and want to embed some plots from Matplotlib and potentially OpenGL graphics in an app. I was looking at this particular <a href="https://kivycoder.com/add-matplotlib-graph-to-kivy-python-kivy-gui-tutorial-59/" rel="nofollow noreferrer">tutorial</a> on how to use kivy-garden to display a Matplotlib plot</p>
<p>However, I was hoping someone might be able to point me to an example of importing a plot into Kivy without using the matplotlib kivy-garden widgets. I want to be a little plotting-backend agnostic, and hence I wanted to learn to import plots directly into the Kivy widgets. Matplotlib can export an image (e.g. via <code>plt.savefig()</code>), so I imagine the corresponding Kivy widget needs to have a property that can receive images? Plotly exports something different, so I was hoping to understand how to directly import these different plots into Kivy.</p>
<p>If anyone knows some good examples of directly plotting into Kivy, it would be appreciated.</p>
|
<python><matplotlib><user-interface><kivy>
|
2022-12-31 05:19:04
| 1
| 10,190
|
krishnab
|
74,967,657
| 3,728,901
|
UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed
|
<pre class="lang-py prettyprint-override"><code>import torch
from torch.autograd import Variable
x = Variable(torch.FloatTensor([11.2]), requires_grad=True)
y = 2 * x
print(x)
print(y)
print(x.data)
print(y.data)
print(x.grad_fn)
print(y.grad_fn)
y.backward() # Calculates the gradients
print(x.grad)
print(y.grad)
</code></pre>
<p><a href="https://i.sstatic.net/Cmv4m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cmv4m.png" alt="enter image description here" /></a></p>
<p>Error:</p>
<pre><code>C:\Users\donhu\AppData\Local\Temp\ipykernel_9572\106071707.py:2: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten\src\ATen/core/TensorBody.h:485.)
print(y.grad)
</code></pre>
<p>Source code <a href="https://github.com/donhuvy/Deep-learning-with-PyTorch-video/blob/master/1.5.variables.ipynb" rel="nofollow noreferrer">https://github.com/donhuvy/Deep-learning-with-PyTorch-video/blob/master/1.5.variables.ipynb</a></p>
<p>How to fix?</p>
|
<python><pytorch><gradient><tensor><user-warning>
|
2022-12-31 05:16:59
| 1
| 53,313
|
Vy Do
|
74,967,508
| 5,509,839
|
Django: mysterious slow request before timing middleware kicks in
|
<p>I'm using Django with Sentry to monitor errors and performance. There's one specific endpoint (this doesn't happen with other endpoints) that seems to stall for a few seconds before actually processing the request.</p>
<ul>
<li>It's a very simple endpoint, but it does get called a lot.</li>
<li>It seems to do nothing for a few seconds and is waiting on the time before Django middleware is even called.</li>
<li>The actual logic that goes into rendering only takes a few hundred milliseconds: something is happening before the Django middleware that causes this to only happen for this view.</li>
</ul>
<p>What could this mean? I don't often see the <code>django.db.close_old_connections</code> call, and I'm hoping someone has a clue about what might cause the stall to appear right after it, with another stall in the same place towards the end, right before <code>close_old_connections</code> is called again.</p>
<p><a href="https://i.sstatic.net/27u4V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/27u4V.png" alt="enter image description here" /></a></p>
|
<python><django>
|
2022-12-31 04:24:04
| 0
| 5,126
|
aroooo
|
74,967,506
| 9,616,557
|
try/except vs validate the data while inserting to avoid "Violation of UNIQUE KEY constraint"?
|
<p>Which is the better option for performance?</p>
<p>Python try/except?</p>
<pre><code>try:
cursor.execute('INSERT INTO date (iyear, imonth, iday) VALUES (?,?,?)',year, month, day)
except Exception:
pass
</code></pre>
<p>or validate the data in database?</p>
<pre><code>cursor.execute('IF NOT EXISTS (SELECT TOP 1 * FROM date where iyear = ? and imonth = ? and iday = ?) INSERT INTO date (iyear, imonth, iday) VALUES (?,?,?)',year, month, day,year, month, day)
</code></pre>
<p>Actually I am not sure if using <code>if not exists</code> is the better option to validate the data. Is there any better way to do it?</p>
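<p>For reference, a narrower variant of the first option I am considering (a sketch; it assumes the driver is pyodbc, given the <code>?</code> placeholders): catching only the duplicate-key error instead of every exception:</p>
<pre class="lang-py prettyprint-override"><code>import pyodbc

try:
    cursor.execute('INSERT INTO date (iyear, imonth, iday) VALUES (?,?,?)',
                   year, month, day)
except pyodbc.IntegrityError:
    pass  # row already exists; ignore only this specific failure
</code></pre>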
|
<python><sql><sql-server>
|
2022-12-31 04:23:47
| 0
| 767
|
Astora
|
74,967,490
| 7,743,427
|
How to find the outer keys in a nested dictionary
|
<p>I have a large nested JSON in Python, which I've converted to a dictionary. In it, there's a key called 'achievements' in some sub-dictionary (I don't know how deeply it's nested). This sub-dictionary is itself either an element of a list, or the value of a key in a larger dictionary.</p>
<p>Assuming all I know is that this key is called 'achievements', how can I get its value? It's fine to assume that the key name 'achievements' is unique in the entire dictionary at all levels.</p>
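<p>Here is the depth-first search I have sketched so far (untested), in case that is the idiomatic way:</p>
<pre class="lang-py prettyprint-override"><code>def find_key(obj, key):
    """Depth-first search for `key` in nested dicts/lists; returns its value."""
    if isinstance(obj, dict):
        if key in obj:
            return obj[key]
        children = obj.values()
    elif isinstance(obj, list):
        children = obj
    else:
        return None
    for child in children:
        found = find_key(child, key)
        if found is not None:
            return found
    return None

achievements = find_key(data, 'achievements')  # data is the converted JSON dict
</code></pre>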
|
<python><json><dictionary>
|
2022-12-31 04:18:06
| 1
| 561
|
Inertial Ignorance
|
74,967,458
| 11,925,464
|
Pandas: conditional, groupby, cumsum
|
<p>I have a DataFrame where I'm trying to create a new column whose value comes from a conditional groupby.</p>
<p>conditions:</p>
<ol>
<li>tag == 1, profit - cost</li>
<li>tag > 1, -(cost)</li>
<li>net is summed after every iteration</li>
</ol>
<p>original df:</p>
<pre>
╔══════╦═════╦══════╦════════╗
║ rep ║ tag ║ cost ║ profit ║
╠══════╬═════╬══════╬════════╣
║ john ║ 1 ║ 1 ║ 5 ║
║ pete ║ 2 ║ 1 ║ 3 ║
║ pete ║ 3 ║ 1 ║ 4 ║
║ pete ║ 4 ║ 1 ║ 5 ║
║ john ║ 5 ║ 1 ║ 7 ║
║ john ║ 1 ║ 1 ║ 9 ║
║ pete ║ 1 ║ 1 ║ 3 ║
║ john ║ 3 ║ 1 ║ 5 ║
╚══════╩═════╩══════╩════════╝
</pre>
<p>output hope to get:</p>
<pre>
╔══════╦═════╦══════╦════════╦═════╗
║ rep ║ tag ║ cost ║ profit ║ net ║
╠══════╬═════╬══════╬════════╬═════╣
║ john ║ 1 ║ 1 ║ 5 ║ 4 ║
║ pete ║ 2 ║ 1 ║ 3 ║ -1 ║
║ pete ║ 3 ║ 1 ║ 4 ║ -2 ║
║ pete ║ 4 ║ 1 ║ 5 ║ -3 ║
║ john ║ 5 ║ 1 ║ 7 ║ 3 ║
║ john ║ 1 ║ 1 ║ 9 ║ 11 ║
║ pete ║ 1 ║ 1 ║ 3 ║ -1 ║
║ john ║ 3 ║ 1 ║ 5 ║ 15 ║
╚══════╩═════╩══════╩════════╩═════╝
</pre>
<p>I'm able to use loc to derive 'if' conditions; however, I'm not able to figure out or find examples combining groupby/if/cumsum for this.</p>
<p>sample df code:</p>
<pre><code>data = {
"rep": ['john','pete','pete','pete','john','john','pete','john'],
"tag": [1,2,3,4,5,1,1,3],
"cost": [1,1,1,1,1,1,1,1],
"profit": [5,3,4,5,7,9,3,5]}
df = pd.DataFrame(data)
</code></pre>
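<p>My best (untested) guess so far is a vectorized per-row net followed by a group-wise cumsum; note that under conditions 1-3 as stated, the last john row evaluates to 10 rather than the 15 shown above, so the expected output may assume something extra:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# condition 1 and 2: per-row net before accumulation
df['net'] = np.where(df['tag'].eq(1), df['profit'] - df['cost'], -df['cost'])
# condition 3: running total within each rep
df['net'] = df.groupby('rep')['net'].cumsum()
</code></pre>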
<p>kindly advise</p>
|
<python><pandas><group-by><cumsum>
|
2022-12-31 04:09:29
| 1
| 597
|
ManOnTheMoon
|
74,967,443
| 3,728,901
|
ERROR: No matching distribution found for torch
|
<p>Python 3.11.1, with the stable version of PyTorch:</p>
<pre><code>pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
</code></pre>
<p><a href="https://i.sstatic.net/5ORj6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5ORj6.png" alt="enter image description here" /></a></p>
<pre><code>Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com, https://download.pytorch.org/whl/cu117
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
</code></pre>
<p><a href="https://i.sstatic.net/455iz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/455iz.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/MpDHW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MpDHW.png" alt="enter image description here" /></a></p>
<p>Python 3.11.1, even with the nightly version of PyTorch:</p>
<pre><code>!pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cu117
</code></pre>
<p><a href="https://i.sstatic.net/aGdRN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aGdRN.png" alt="enter image description here" /></a></p>
<p>How to fix?</p>
|
<python><pytorch>
|
2022-12-31 04:05:00
| 3
| 53,313
|
Vy Do
|
74,967,413
| 2,009,441
|
How do I use the Elasticsearch Query DSL to query an API using Python?
|
<p>I'm trying to query the public <a href="http://api.artic.edu/docs/#introduction" rel="nofollow noreferrer">Art Institute of Chicago API</a> to only show me results that match certain criteria. For example:</p>
<ul>
<li><code>classification_title</code> = "painting"</li>
<li><code>colorfulness</code> <= 13</li>
<li><code>material_titles</code> includes "paper (fiber product)"</li>
</ul>
<p>The API documentation states:</p>
<blockquote>
<p>Behind the scenes, our search is powered by Elasticsearch. You can use its Query DSL to interact with our API.</p>
</blockquote>
<p>I can't figure out how to take an Elasticsearch DSL JSON-like object and pass it into the API URL, beyond a single criterion.</p>
<p>Here are some working single-criteria examples specific to this API:</p>
<pre class="lang-py prettyprint-override"><code>requests.get("https://api.artic.edu/api/v1/artworks/search?q=woodblock[classification_title]").json()
requests.get("https://api.artic.edu/api/v1/artworks/search?q=monet[artist_title]").json()
</code></pre>
<p>And here are some of my failed attempts at returning only items that pass two or more criteria:</p>
<pre class="lang-py prettyprint-override"><code>requests.get("https://api.artic.edu/api/v1/artworks/search?q=woodblock[classification_title]monet[artist_title]")
requests.get("https://api.artic.edu/api/v1/artworks/search?q=woodblock[classification_title],monet[artist_title]")
requests.get("https://api.artic.edu/api/v1/artworks/search?q=%2Bclassification_title%3A(woodblock)+%2Bartist_title%3A(monet)")
</code></pre>
<p>And lastly some of my failed attempts to return more complex criteria, like range:</p>
<pre class="lang-py prettyprint-override"><code>requests.get("https://api.artic.edu/api/v1/artworks/search?q={range:lte:10}&query[colorfulness]").json()
requests.get("https://api.artic.edu/api/v1/artworks/search?q=<10&query[colorfulness]").json()
requests.get("https://api.artic.edu/api/v1/artworks/search?q=%2Bdate_display%3A%3C1900").json()
</code></pre>
<p>All of these failed attempts return data but not within my passed criteria. For example, <code>woodblock[classification_title]monet[artist_title]</code> should return no results.</p>
<p>How could I query all of these criteria, only returning results (if any) that match all these conditions? The JSON-like <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html" rel="nofollow noreferrer">Query DSL</a> does not seem compatible with a <code>requests.get</code>.</p>
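<p>For reference, a sketch of what I imagine the request should look like, if the endpoint accepts a POSTed Query DSL body (whether it does, and the exact field names, are assumptions on my part):</p>
<pre class="lang-py prettyprint-override"><code>import requests

# hypothetical Query DSL payload; "match"/"range" clause choices are guesses
payload = {
    "query": {
        "bool": {
            "must": [
                {"match": {"classification_title": "painting"}},
                {"range": {"colorfulness": {"lte": 13}}},
                {"match": {"material_titles": "paper (fiber product)"}},
            ]
        }
    }
}
resp = requests.post("https://api.artic.edu/api/v1/artworks/search", json=payload)
print(resp.json())
</code></pre>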
|
<python><elasticsearch><elasticsearch-dsl>
|
2022-12-31 03:55:48
| 1
| 515
|
Danny
|
74,967,353
| 322,513
|
Python package version not found, but it's clearly there
|
<p>I am trying to create a specific Python environment inside Docker for reproducible builds, but the package <code>python-opencv</code>, which was previously installed manually, now refuses to install, with this error:</p>
<blockquote>
<p>ERROR: Could not find a version that satisfies the requirement opencv_python==4.7.0 (from versions: 3.4.0.14, 3.4.10.37, 3.4.11.39, 3.4.11.41, 3.4.11.43, 3.4.11.45, 3.4.13.47, 3.4.15.55, 3.4.16.57, 3.4.16.59, 3.4.17.61, 3.4.17.63, 3.4.18.65, 4.3.0.38, 4.4.0.40, 4.4.0.42, 4.4.0.44, 4.4.0.46, 4.5.1.48, 4.5.3.56, 4.5.4.58, 4.5.4.60, 4.5.5.62, 4.5.5.64, 4.6.0.66, 4.7.0.68)</p>
<p>ERROR: No matching distribution found for opencv_python==4.7.0</p>
</blockquote>
<p>Command was:</p>
<blockquote>
<p>pip3 install face_recognition==1.3.0 opencv_python==4.7.0</p>
</blockquote>
<p>Inside docker: <code>ubuntu 22.04; Python 3.10.6; pip 22.0.2</code></p>
<p>Why can't pip3 find <code>opencv_python</code> version <code>4.7.0</code> when it's clearly in the list of available packages? What's the best way to create a reproducible Python environment when building a Docker image?</p>
|
<python><pip>
|
2022-12-31 03:35:37
| 1
| 2,230
|
Slaus
|
74,967,339
| 1,934,510
|
How to specify Youtube Channel Id to Upload Video from the same user?
|
<p>I'm using this code to upload to my YouTube channel, and it is working fine. Now I have created another channel under my account, and I want to upload a video to this new channel. How do I specify the channel ID in the code?</p>
<pre><code>import os
import random
import sys
import time

import httplib2
from apiclient.discovery import build
from apiclient.errors import HttpError
from apiclient.http import MediaFileUpload
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import argparser, run_flow

# Explicitly tell the underlying HTTP transport library not to retry, since
# we are handling retry logic ourselves.
httplib2.RETRIES = 1

# Maximum number of times to retry before giving up.
MAX_RETRIES = 10

# Always retry when these exceptions are raised.
RETRIABLE_EXCEPTIONS = (httplib2.HttpLib2Error, IOError)

# Always retry when an apiclient.errors.HttpError with one of these status
# codes is raised.
RETRIABLE_STATUS_CODES = [500, 502, 503, 504]

CLIENT_SECRETS_FILE = "youtube_client_secret.json"

# This OAuth 2.0 access scope allows an application to upload files to the
# authenticated user's YouTube channel, but doesn't allow other types of access.
YOUTUBE_UPLOAD_SCOPE = "https://www.googleapis.com/auth/youtube.upload"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"

# This variable defines a message to display if the CLIENT_SECRETS_FILE is
# missing.
MISSING_CLIENT_SECRETS_MESSAGE = """
WARNING: Please configure OAuth 2.0
To make this sample run you will need to populate the client_secrets.json file
found at:
%s
with information from the API Console
https://console.developers.google.com/
For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
""" % os.path.abspath(os.path.join(os.path.dirname(__file__),
                                   CLIENT_SECRETS_FILE))

VALID_PRIVACY_STATUSES = ("public", "private", "unlisted")


def get_authenticated_service(args):
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE,
                                   scope=YOUTUBE_UPLOAD_SCOPE,
                                   message=MISSING_CLIENT_SECRETS_MESSAGE)
    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, args)
    return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                 http=credentials.authorize(httplib2.Http()))


def initialize_upload(youtube, options):
    tags = None
    # if options.keywords:
    #     tags = options.keywords.split(",")
    body = dict(
        snippet=dict(
            title=options['title'],
            description=options['description'],
            tags=tags,
            # categoryId=options['category']
        ),
        status=dict(
            privacyStatus=options['privacyStatus']
        )
    )
    # Call the API's videos.insert method to create and upload the video.
    insert_request = youtube.videos().insert(
        part=",".join(body.keys()),
        body=body,
        media_body=MediaFileUpload(
            options['file'], chunksize=-1, resumable=True)
    )
    resumable_upload(insert_request)


# This method implements an exponential backoff strategy to resume a
# failed upload.
def resumable_upload(insert_request):
    response = None
    error = None
    retry = 0
    while response is None:
        try:
            print("Uploading file...")
            status, response = insert_request.next_chunk()
            if response is not None:
                if 'id' in response:
                    print("Video id '%s' was successfully uploaded." %
                          response['id'])
                else:
                    exit("The upload failed with an unexpected response: %s" % response)
        except HttpError as e:
            if e.resp.status in RETRIABLE_STATUS_CODES:
                error = "A retriable HTTP error %d occurred:\n%s" % (e.resp.status,
                                                                     e.content)
            else:
                raise
        except RETRIABLE_EXCEPTIONS as e:
            error = "A retriable error occurred: %s" % e
        if error is not None:
            print(error)
            retry += 1
            if retry > MAX_RETRIES:
                exit("No longer attempting to retry.")
            max_sleep = 2 ** retry
            sleep_seconds = random.random() * max_sleep
            print("Sleeping %f seconds and then retrying..." % sleep_seconds)
            time.sleep(sleep_seconds)


def upload_video(video_data):
    args = argparser.parse_args()
    if not os.path.exists(video_data['file']):
        exit("Please specify a valid file using the --file= parameter.")
    youtube = get_authenticated_service(args)
    try:
        initialize_upload(youtube, video_data)
    except HttpError as e:
        print("An HTTP error %d occurred:\n%s" % (e.resp.status, e.content))
</code></pre>
<p>Thanks!</p>
|
<python><youtube><youtube-api>
|
2022-12-31 03:29:01
| 1
| 8,851
|
Filipe Ferminiano
|
74,967,311
| 13,494,917
|
How to specify a ProgrammingError exception
|
<p>Sometimes I get this error when executing a <code>to_sql()</code> statement:</p>
<blockquote>
<p>Exception: ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near ','. (102) (SQLExecDirectW)")</p>
</blockquote>
<p>I know exactly what I'd like to do with this error so I want to catch it, but I can't seem to do so. If I do a catch all, it works, but catching this error specifically hasn't seemed to work. Basically, here's what I've got below.</p>
<pre><code>from pyodbc import ProgrammingError

try:
    #code
except ProgrammingError:
    #other code
</code></pre>
<p>At first, when I tried specifying the error, it said ProgrammingError wasn't defined, so I thought I'd give the import a shot, but no luck. I'd rather not use a catch-all; how can I specify this exception?</p>
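<p>One sketch of what might be going on, assuming <code>to_sql()</code> runs through SQLAlchemy: the pyodbc error is wrapped, so the exception that actually propagates is SQLAlchemy's (the <code>df</code>/<code>engine</code> names here are illustrative):</p>
<pre><code>from sqlalchemy.exc import ProgrammingError

try:
    df.to_sql("some_table", engine)  # hypothetical call
except ProgrammingError as exc:
    print(exc.orig)  # the underlying pyodbc.ProgrammingError
</code></pre>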
|
<python><pyodbc>
|
2022-12-31 03:17:04
| 0
| 687
|
BlakeB9
|
74,967,083
| 3,591,044
|
Combining rows in a DataFrame
|
<p>I have a Pandas DataFrame shown below consisting of three columns.</p>
<pre><code>import pandas as pd
data = [[1, "User1", "Hello."], [1, "User1", "How are you?"], [1, "User2", "I'm fine."], [2, "User1", "Nice to meet you."], [2, "User2", "Hello."], [2, "User2", "I'm happy."], [2, "User2", "Goodbye."], [3, "User2", "Hello."]]
df = pd.DataFrame(data, columns=['Conversation', 'User', 'Text'])
</code></pre>
<pre class="lang-none prettyprint-override"><code> Conversation User Text
0 1 User1 Hello.
1 1 User1 How are you?
2 1 User2 I'm fine.
3 2 User1 Nice to meet you.
4 2 User2 Hello.
5 2 User2 I'm happy.
6 2 User2 Goodbye.
7 3 User2 Hello.
</code></pre>
<p>I would like to merge the Text of groups of consecutive Users, but not over conversation boundaries. If in a Conversation a User has several consecutive rows, I would like to merge these rows into one row by combining the Text with whitespace. When a new Conversation starts, it should not be combined. For the example, the result should look as follows:</p>
<pre class="lang-none prettyprint-override"><code> Conversation User Text
0 1 User1 Hello. How are you?
2 1 User2 I'm fine.
3 2 User1 Nice to meet you.
4 2 User2 Hello. I'm happy. Goodbye.
7 3 User2 Hello.
</code></pre>
<p>How can this be achieved in an efficient way (I have a big DataFrame)?</p>
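<p>A sketch of the run-labelling approach I have in mind (purely illustrative): label each run of identical consecutive (Conversation, User) pairs, then aggregate per run.</p>
<pre><code># new run whenever Conversation or User changes from the previous row
key = (df[["Conversation", "User"]] != df[["Conversation", "User"]].shift()).any(axis=1).cumsum()
out = (df.groupby(key)
         .agg({"Conversation": "first", "User": "first", "Text": " ".join})
         .reset_index(drop=True))
</code></pre>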
|
<python><pandas><dataframe><group-by>
|
2022-12-31 01:48:47
| 1
| 891
|
BlackHawk
|
74,967,048
| 817,659
|
Numba Vectorize FakeOverload
|
<p>This program gives an error when it gets to the "@vectorize(['float32(float32)'], target='cuda')" function with the error:</p>
<pre><code>File "/home/idf/anaconda3/envs/gpu/lib/python3.9/site-packages/numba/cuda/vectorizers.py", line 206, in _compile_core
return cudevfn, cudevfn.overloads[sig.args].signature.return_type
AttributeError: 'FakeOverload' object has no attribute 'signature'
</code></pre>
<p>Notice that I am not even calling the function. The error seems to happen while the decorator is evaluated, i.e. at module load time:</p>
<pre><code>import math
import numpy as np
from numba import vectorize, guvectorize, cuda
import timeit


def numpy_sqrt_x(x):
    return np.sqrt(x)


def math_sqrt_x(x):
    return [math.sqrt(xx) for xx in x]


@vectorize
def cpu_sqrt(x):
    return math.sqrt(x)


@vectorize(['float32(float32)'], target='cuda')
def gpu_sqrt(x):
    return math.sqrt(x)


if __name__ == '__main__':
    x = np.arange(10)
    print(x**2)
    #print(np.exp(x))
    #x = np.arange(int(1e2))
    #print(timeit.timeit("numpy_sqrt_x(x)", globals=globals()))
    #print(timeit.timeit("math_sqrt_x(x)", globals=globals()))
    #print(gpu_sqrt(x))
</code></pre>
<p>Not sure what I am doing wrong?</p>
|
<python><ubuntu><numba>
|
2022-12-31 01:37:27
| 0
| 7,836
|
Ivan
|
74,966,897
| 13,142,245
|
PyStan: TypeError: cannot convert the series to <class ‘int’>
|
<p>I'm using PyStan in a Jupyter notebook. This is possible with the <code>nest_asyncio.apply()</code> trick. Stan seems to have some issue receiving a numpy array of integer variables.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import stan
import nest_asyncio

nest_asyncio.apply()


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


treatment_slope = 0.2
test_slope = 0.15
intercept = 0.7


def link(X):
    theta = sigmoid(X[0] * treatment_slope + X[1] * test_slope + intercept)
    return theta


ctrl = np.random.binomial(n=1, p=link([0,0]), size=500)
test_no_exp = np.random.binomial(n=1, p=link([0,1]), size=250)
test_w_exp = np.random.binomial(n=1, p=link([0,1]), size=250)

import pandas as pd
data = {'test':[], 'exposure':[], 'outcome':[]}
for c in ctrl:
    data['test'].append(0)
    data['exposure'].append(0)
    data['outcome'].append(c)
for t in test_no_exp:
    data['test'].append(1)
    data['exposure'].append(0)
    data['outcome'].append(t)
for e in test_w_exp:
    data['test'].append(1)
    data['exposure'].append(1)
    data['outcome'].append(e)

df = pd.DataFrame(data)
X = df[['test','exposure']].to_numpy()
y = df['outcome']
payload = {'X':X, 'y':y, 'K':1000, 'N':2}

logistic_regression_model = """
// logistic regression code
data {
    int<lower=0> N;             // number of observations
    int<lower=0> K;             // number of predictor variables
    matrix[N, K] X;             // predictor matrix
    int<lower=0,upper=1> y[N];  // outcome vector
}
parameters {
    real alpha;                 // intercept
    vector[K] theta;            // coefficients on Q_ast
    real<lower=0> sigma;        // error scale
}
model {
    // priors
    for (d in 1:D) {
        theta[d] ~ normal(0, 1);
    }
    alpha ~ normal(0, 10);
    // likelihood
    y ~ bernoulli_logit(X * theta + alpha);
}
"""

posterior = stan.build(logistic_regression_model, data=payload)
fit = posterior.sample(num_chains=4, num_samples=1000)
</code></pre>
<p>The error returned is: <code>TypeError: cannot convert the series to <class 'int'></code>. But my inputs, as you can reproduce from the code block above, are already integer types. What's the issue here?</p>
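<p>My own guesses at a fix, for reference (assumptions: PyStan wants plain numpy arrays rather than a pandas Series, the <code>N</code>/<code>K</code> values were meant to be swapped, and <code>D</code> in the Stan model is undefined and should presumably be <code>K</code>):</p>
<pre class="lang-py prettyprint-override"><code>payload = {
    "X": X,
    "y": df["outcome"].to_numpy().astype(int),  # plain int array, not a Series
    "N": len(df),      # number of observations
    "K": X.shape[1],   # number of predictors
}
</code></pre>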
|
<python><numpy><jupyter-notebook><bayesian><stan>
|
2022-12-31 00:52:42
| 0
| 1,238
|
jbuddy_13
|
74,966,767
| 1,192,885
|
How to extract specific part of html using Beautifulsoup?
|
<p>I am trying to extract what's inside the 'title' attribute from the following HTML, but so far I haven't managed to.</p>
<pre><code><div class="pull_right date details" title="22.12.2022 01:49:03 UTC-03:00">
</code></pre>
<p>This is my code:</p>
<pre><code>from bs4 import BeautifulSoup

with open("messages.html") as fp:
    soup = BeautifulSoup(fp, 'html.parser')

results = soup.find_all('div', attrs={'class':'pull_right date details'})
print(results)
</code></pre>
<p>And the output is a list with all matching <code>&lt;div&gt;</code> elements from the HTML file.</p>
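<p>A minimal sketch of what I expect should work from here (each result is a Tag, so attributes read like dict entries):</p>
<pre><code>for div in results:
    print(div["title"])  # e.g. 22.12.2022 01:49:03 UTC-03:00
</code></pre>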
|
<python><beautifulsoup><html-parsing>
|
2022-12-31 00:19:42
| 1
| 1,303
|
Marco Almeida
|
74,966,649
| 5,509,839
|
Does chaining prefetch_related fetch all sub-queries?
|
<p>Take this for example:</p>
<p><code>user = User.objects.prefetch_related("foo__bar__baz").get(id=id)</code></p>
<p>In this scenario, do I still need to do:</p>
<p><code>user = User.objects.prefetch_related("foo__bar").get(id=id)</code> if I want to access values on <code>bar</code>? And what if the lookups are mixed between <code>select_related</code> and <code>prefetch_related</code>, because some relations are OneToOne and others are FKs, like this below?</p>
<p><code>user = User.objects.prefetch_related("foo__bar__baz").select_related("foo_bar_baz_qux").get(id=id)</code></p>
<p>Or would skipping the prefetch and only using select be sufficient?</p>
<p><code>user = User.objects.select_related("foo_bar_baz_qux").get(id=id)</code></p>
|
<python><django>
|
2022-12-30 23:46:03
| 0
| 5,126
|
aroooo
|
74,966,521
| 19,130,803
|
Celery: How to abort running task
|
<p><strong>calculator.py</strong></p>
<pre><code>import time

def add(n):
    r = n + n
    time.sleep(15)  # long running task
    return r
</code></pre>
<p><strong>tasks.py</strong></p>
<pre><code>from celery import Celery
from celery.contrib.abortable import AbortableTask

from calculator import add

REDIS_URL = "redis://redis:6379/0"
celery_app = Celery(__name__, broker=REDIS_URL, backend=REDIS_URL)


@celery_app.task(bind=True, name="background_task_job", base=AbortableTask)
def background_task_job(self, n):
    abt = "Aborted"
    val = 0
    while not self.is_aborted():
        val = add(n)  # add(), which calls sleep_one()
        return val
    return abt
</code></pre>
<p><strong>app.py</strong></p>
<pre><code>from celery.contrib.abortable import AbortableAsyncResult

from tasks import background_task_job

# On Start button
result = background_task_job.apply_async(args=(2,))

# On Cancel button
abortable_task = AbortableAsyncResult(result.id)
abortable_task.abort()
</code></pre>
<p><strong>Observed:</strong>
I clicked the start button and the background task started running. Say after 8 seconds (for example) I clicked the cancel button: I saw no "Aborted" message, and <code>add()</code> still seemed to be alive/running. After 7 more seconds, which makes 15 seconds total [8 + 7], Celery returned the output 4 [celery| success 15.0sec 4] (for example).</p>
<ol>
<li>Why is <strong>add()</strong>, which is called inside <strong>background_task_job()</strong>, not getting aborted? What am I missing?</li>
<li>If the output is returned as 4, why does Celery report [celery| success <strong>15.0sec</strong> 4] and not <strong>8sec</strong>, given that I clicked the cancel button at 8 seconds? Is this behavior correct?</li>
</ol>
<p>Please suggest.</p>
<p><strong>Updated code</strong></p>
<p>After replacing <code>time.sleep()</code> with <code>sleep_one()</code>, I am able to see the state and status of the task as <strong>ABORTED</strong>, but the function still runs to completion and does not get aborted.</p>
<pre><code>def sleep_one(n):
    start = time.time()
    finish = start + 4
    while time.time() < finish:
        pass
    return n
</code></pre>
<pre><code>def add(n):
    # time.sleep(15)
    n = 0
    # simulating a long-running process: fifteen 4-second busy waits
    for _ in range(15):
        n += sleep_one(1)
    return n
</code></pre>
<p>What am I missing?</p>
<p>While searching, I only came across small code samples in which the <code>for loop</code> sits directly inside <code>background_task_job()</code>.
Since I am calling <code>add()</code> from inside <code>background_task_job()</code>, does that affect or prevent the task from getting aborted?</p>
|
<python><redis><celery>
|
2022-12-30 23:19:50
| 0
| 962
|
winter
|
74,966,344
| 6,054,404
|
Python numpy sampling a 2D array for n rows
|
<p>I have a numpy array as follows, and I want to take a random sample of <code>n</code> rows.</p>
<pre><code> ([[996.924, 265.879, 191.655],
[996.924, 265.874, 191.655],
[996.925, 265.884, 191.655],
[997.294, 265.621, 192.224],
[997.294, 265.643, 192.225],
[997.304, 265.652, 192.223]], dtype=float32)
</code></pre>
<p>I've tried:</p>
<pre><code>rows_id = random.sample(range(0,arr.shape[1]-1), 1)
row = arr[rows_id, :]
</code></pre>
<p>But this index mask only returns a single row; I want to return <code>n</code> rows as a numpy array (without duplication).</p>
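<p>A minimal sketch of what I have in mind (the sample size <code>n</code> is arbitrary here):</p>
<pre><code>import numpy as np

rng = np.random.default_rng()
n = 3  # hypothetical sample size
idx = rng.choice(arr.shape[0], size=n, replace=False)  # distinct row indices
rows = arr[idx]
</code></pre>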
|
<python><numpy><sample>
|
2022-12-30 22:40:55
| 2
| 1,993
|
Spatial Digger
|
74,966,335
| 726,730
|
pywinauto print_control_identifiers for pyqt5 application
|
<p>Using Python 3.10, I am trying to use pywinauto and print_control_identifiers for a PyQt5 app (for example Qt Designer).</p>
<pre><code>from pywinauto.application import Application
import os
app = Application().start("C:/python/Lib/site-packages/QtDesigner/designer.exe")
main_dlg = app.QtDesigner
main_dlg.wait('visible')
main_dlg.print_control_identifiers()
</code></pre>
<p>output:</p>
<pre><code>Control Identifiers:
Qt5QWindowIcon - 'Qt Designer' (L632, T250, R1928, B1008)
['Qt Designer', 'Qt DesignerQt5QWindowIcon', 'Qt5QWindowIcon']
child_window(title="Qt Designer", class_name="Qt5QWindowIcon")
</code></pre>
<p>It's strange; maybe the control identifiers are inside <code>child_window(title="Qt Designer", class_name="Qt5QWindowIcon")</code>, but how can I access them?</p>
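<p>One sketch I am considering, on the assumption that Qt widgets are mostly invisible to the default "win32" backend but are exposed through UI Automation:</p>
<pre><code>from pywinauto.application import Application

app = Application(backend="uia").start(
    "C:/python/Lib/site-packages/QtDesigner/designer.exe")
app.window(title_re=".*Qt Designer.*").print_control_identifiers()
</code></pre>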
|
<python><pyqt5><controls><pywinauto>
|
2022-12-30 22:38:52
| 1
| 2,427
|
Chris P
|
74,966,241
| 11,692,124
|
Python renew a function's definition of imported module and use renewed one in even functions of imported module
|
<p>I have a <code>file1.py</code>:</p>
<pre><code>def funcCommon1():
    pass

def file1Func1():
    funcCommon1()
    ....#other lines of code
</code></pre>
<p>and I have a <code>file2.py</code> which I have imported funcs of <code>file1.py</code></p>
<pre><code>from file1 import *

def file2FuncToReplacefuncCommon1():
    ....#some lines of code

def funcCommon1():  # renewed, or let's just say replaced, `funcCommon1` definition
    file2FuncToReplacefuncCommon1()
</code></pre>
<p>But in <code>file1Func1</code>, <code>funcCommon1()</code> still refers to the <code>pass</code> definition, while I want it to refer to <code>file2FuncToReplacefuncCommon1</code>.
I know that in <code>file1.py</code> I can't do <code>from file2 import *</code> because it would create a circular import. I also don't want to pass <code>funcCommon1</code> as an argument to functions like <code>file1Func1</code>, something like:</p>
<pre><code>def file1Func1(funcCommon1):
    funcCommon1()
    ....#other lines of code
</code></pre>
<p>So is there any other way to replace the definition of <code>funcCommon1</code> as seen by <code>file1Func1</code>?</p>
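<p>A sketch of one possibility: rebind the name on the module object itself, so that lookups made inside file1's own functions resolve to the new definition at call time (the <code>from file1 import *</code> copies are unaffected, but file1's globals are what <code>file1Func1</code> actually reads):</p>
<pre><code># file2.py
import file1

def file2FuncToReplacefuncCommon1():
    ...

file1.funcCommon1 = file2FuncToReplacefuncCommon1
file1.file1Func1()  # now calls file2FuncToReplacefuncCommon1 internally
</code></pre>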
|
<python>
|
2022-12-30 22:24:50
| 1
| 1,011
|
Farhang Amaji
|
74,966,221
| 7,267,480
|
Regression task for neural network with Keras and Tensorflow, shapes and formats of input data
|
<p>I am studying Keras to build a neural network for regression purposes.</p>
<p>I have obtained a dataset to train a model - each row represents a case with inputs in separate columns and output.</p>
<p>Here is the example dataset:</p>
<pre><code> x1 x2 x3 x4 x5 y
0 0.00 0.00 0.00 0.00 1.00 76.800
1 0.00 0.00 0.00 0.05 0.95 77.815
2 0.00 0.00 0.00 0.10 0.90 78.830
3 0.00 0.00 0.00 0.15 0.85 79.845
4 0.00 0.00 0.00 0.20 0.80 80.860
... ... ... ... ... ... ...
9108 0.95 0.00 0.00 0.00 0.05 94.945
9109 0.95 0.00 0.00 0.05 0.00 95.960
9110 0.95 0.00 0.05 0.00 0.00 95.550
9111 0.95 0.05 0.00 0.00 0.00 95.250
9112 1.00 0.00 0.00 0.00 0.00 95.900
</code></pre>
<p>the NN I am trying to build must use x1..x5 as inputs and calculate y - output - continuous variable.</p>
<p>As I understand I need to prepare training, validation and cross-validation samples using initial dataset I have.</p>
<p>Here are my newbie-questions about data preparation for learning process and data-saving for the use with Keras.</p>
<ol>
<li>How should I prepare the data if I have a simple dataframe (e.g. the dataset I have now)? What is the recommended way?</li>
</ol>
<p>What is the most common way and format to prepare data for learning? Is it possible to use a dataframe, or should it be converted to specifically shaped numpy arrays or some other format (e.g. with each row of the array representing a case)?</p>
<p>Do I really have to convert my dataframe to specifically shaped numpy arrays?</p>
<pre><code>subset_learn_np = subset_learn_df.to_numpy()
</code></pre>
<p>E.g. in this case I got a numpy array with shape: (9113, 5)
9113 cases - 5 inputs each.</p>
<p>At the same time I have created a numpy array with the results for each case and inputs from the previous array - splitted the initial dataframe and converted it to separate numpy array:</p>
<pre><code>subset_answers_df = df[['y']]
subset_answers_np = subset_answers_df.to_numpy()
</code></pre>
<ol start="2">
<li><p>Do I need to normalize this data before the learning process?
How to create a regression model based on the data I have?</p>
</li>
<li><p>In the next stage, I will probably use a dataset with millions of rows and up to several hundreds of inputs and multiple outputs.</p>
</li>
</ol>
<p>As this data is now stored in separate files, I need to merge it somehow. What do you suggest for this? Are there any special tools to prepare such dataframes and work with such data?</p>
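<p>For what it's worth, a minimal sketch of the kind of setup I have in mind (the layer sizes, epochs, and validation split are arbitrary choices, not recommendations):</p>
<pre><code>from tensorflow import keras

# features and target straight from the dataframe
X = df[["x1", "x2", "x3", "x4", "x5"]].to_numpy(dtype="float32")
y = df["y"].to_numpy(dtype="float32")

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(5,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),  # linear output for a continuous target
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, validation_split=0.2, epochs=50, batch_size=32)
</code></pre>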
|
<python><tensorflow><keras>
|
2022-12-30 22:21:12
| 1
| 496
|
twistfire
|
74,966,220
| 14,403,635
|
Python Selenium: How to select the drop down menu with highlighted feature
|
<p>I am trying to select the drop-down menu. Selenium can control Chrome to click the drop-down list, and the list shows the options. However, it fails to select and click the options. Is it because the drop-down list highlights the text when the cursor points to it, which changes the class name?</p>
<p>Here is the code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(service=webdriver_service, options=options)
url = 'https://system.arthuronline.co.uk/genieliew1/dashboards/index'
driver.get(url)
wait = WebDriverWait(driver, 20)
wait.until(EC.element_to_be_clickable((By.ID, 'username'))).send_keys("")
wait.until(EC.element_to_be_clickable((By.ID, 'password'))).send_keys("")
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[type='submit']"))).click()
url = 'https://system.arthuronline.co.uk/genieliew1/units/view/377225/filter:Transaction-_statement:property-owner;Transaction-_payeeEntityId:149280;Transaction-unit_id:377225;Transaction-_order:default;/page:1#tab-transactions-7322b9'
driver.get(url)
#Click the drop down menu
WebDriverWait(driver,20).until(EC.element_to_be_clickable((By.XPATH,"//div[@id='s2id__statement']"))).click()
WebDriverWait(driver,20).until(EC.element_to_be_clickable((By.XPATH,"//div[contains(.,'Property Owner Statement')]"))).click()
</code></pre>
<p>This is the error:</p>
<pre><code>---------------------------------------------------------------------------
ElementClickInterceptedException Traceback (most recent call last)
<ipython-input-26-3afc38e2a733> in <module>
25 #Click the drop down menu
26 WebDriverWait(driver,20).until(EC.element_to_be_clickable((By.XPATH,"//div[@id='s2id__statement']"))).click()
---> 27 WebDriverWait(driver,20).until(EC.element_to_be_clickable((By.XPATH,"//div[contains(.,'Property Owner Statement')]"))).click()
/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webelement.py in click(self)
91 def click(self) -> None:
92 """Clicks the element."""
---> 93 self._execute(Command.CLICK_ELEMENT)
94
95 def submit(self):
/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webelement.py in _execute(self, command, params)
408 params = {}
409 params["id"] = self._id
--> 410 return self._parent.execute(command, params)
411
412 def find_element(self, by=By.ID, value=None) -> WebElement:
/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py in execute(self, driver_command, params)
442 response = self.command_executor.execute(driver_command, params)
443 if response:
--> 444 self.error_handler.check_response(response)
445 response["value"] = self._unwrap_value(response.get("value", None))
446 return response
/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
247 alert_text = value["alert"].get("text")
248 raise exception_class(message, screen, stacktrace, alert_text) # type: ignore[call-arg] # mypy is not smart enough here
--> 249 raise exception_class(message, screen, stacktrace)
ElementClickInterceptedException: Message: element click intercepted: Element <div id="outer">...</div> is not clickable at point (1280, 646). Other element would receive the click: <div id="select2-drop-mask" class="select2-drop-mask" style="width: 2560px; height: 1293px;"></div>
(Session info: chrome=108.0.5359.124)
Stacktrace:
0 chromedriver 0x00000001055fff38 chromedriver + 4910904
1 chromedriver 0x000000010557fa03 chromedriver + 4385283
2 chromedriver 0x00000001051c4747 chromedriver + 472903
3 chromedriver 0x0000000105212588 chromedriver + 791944
4 chromedriver 0x000000010520f9ec chromedriver + 780780
5 chromedriver 0x000000010520c671 chromedriver + 767601
6 chromedriver 0x000000010520b18b chromedriver + 762251
7 chromedriver 0x00000001051fbac3 chromedriver + 699075
8 chromedriver 0x000000010522f112 chromedriver + 909586
9 chromedriver 0x00000001051fb0ed chromedriver + 696557
10 chromedriver 0x000000010522f2ce chromedriver + 910030
11 chromedriver 0x000000010524a28e chromedriver + 1020558
12 chromedriver 0x000000010522eee3 chromedriver + 909027
13 chromedriver 0x00000001051f930c chromedriver + 688908
14 chromedriver 0x00000001051fa88e chromedriver + 694414
15 chromedriver 0x00000001055cd1de chromedriver + 4702686
16 chromedriver 0x00000001055d1b19 chromedriver + 4721433
17 chromedriver 0x00000001055d928e chromedriver + 4752014
18 chromedriver 0x00000001055d291a chromedriver + 4725018
19 chromedriver 0x00000001055a6b02 chromedriver + 4545282
20 chromedriver 0x00000001055f1888 chromedriver + 4851848
21 chromedriver 0x00000001055f1a05 chromedriver + 4852229
22 chromedriver 0x0000000105607e5f chromedriver + 4943455
23 libsystem_pthread.dylib 0x00007fff6bdeb109 _pthread_start + 148
24 libsystem_pthread.dylib 0x00007fff6bde6b8b thread_start + 15
</code></pre>
<p><a href="https://i.sstatic.net/lTFKk.jpg" rel="nofollow noreferrer">enter image description here</a></p>
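<p>A sketch of what I am considering trying next, on the assumption that this is a select2 widget whose open list is rendered in a separate container (the exact class name of the results list is a guess):</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

option = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((
    By.XPATH,
    "//ul[contains(@class,'select2-results')]//li[contains(., 'Property Owner Statement')]",
)))
option.click()
</code></pre>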
|
<python><selenium><selenium-webdriver>
|
2022-12-30 22:21:01
| 0
| 333
|
janicewww
|
74,966,144
| 5,865,393
|
Flask-WTF StopValidation after default validator fails
|
<p>I am adding a field and validating it against <code>IPAddress</code> WTForms' validator:</p>
<pre class="lang-py prettyprint-override"><code>from ipaddress import ip_network
from flask_wtf import FlaskForm
from wtforms import StringField
from wtforms.validators import DataRequired, IPAddress, ValidationError, StopValidation


class CreateForm(FlaskForm):
    ip = StringField(
        label="IP Address",
        validators=[DataRequired(), IPAddress(ipv4=True)],
        render_kw={"autocomplete": False, "placeholder": "IP Address"},
    )
    ...
</code></pre>
<p>However, some extra custom validations are required and are defined as:</p>
<pre class="lang-py prettyprint-override"><code>...
def validate_ip(self, ip):
    if not ip_network(ip.data).overlaps(ip_network('192.168.1.1/24')):
        raise ValidationError(f"{ip.data} is out of bound for 192.168.1.1!")
...
</code></pre>
<p>When I add an IP address, for example <code>192.168.1.256</code>, which is obviously invalid, the validation is not stopped because <code>StopValidation</code> is not raised, and the exception shows up in the browser rather than under the field I want.</p>
<p>A workaround is to recreate the built-in check as a custom validator that runs before any other validation and raises <code>StopValidation</code>:</p>
<pre class="lang-py prettyprint-override"><code>...
def validate_ip(self, ip):
    try:
        ip_address(ip.data)
    except Exception as e:
        raise StopValidation(e) from e
    if not ip_network(ip.data).overlaps(ip_network('192.168.1.1/24')):
        raise ValidationError(f"{ip.data} is out of bound for 192.168.1.1!")
...
</code></pre>
<p>The question is: how can I stop validation at the first failure, which here comes from the default built-in validator? If the built-in validation fails, the chain should stop immediately; otherwise, the remaining rules should run.</p>
<blockquote>
<p>I feel that recreating the built-in validator as a custom validator violates DRY.</p>
</blockquote>
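<p>One idea I am weighing, on the assumption that WTForms appends each failure to <code>field.errors</code> as the chain runs, so the inline validator can simply bail out when the field already has errors:</p>
<pre class="lang-py prettyprint-override"><code>...
def validate_ip(self, ip):
    if ip.errors:  # an earlier validator (e.g. IPAddress) already failed
        raise StopValidation()
    if not ip_network(ip.data).overlaps(ip_network('192.168.1.1/24')):
        raise ValidationError(f"{ip.data} is out of bound for 192.168.1.1!")
...
</code></pre>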
|
<python><validation><flask><flask-wtforms>
|
2022-12-30 22:07:19
| 0
| 2,284
|
Tes3awy
|
74,966,141
| 9,002,634
|
Most performant way to search if row exists in sqlalchemy?
|
<p>Let's say I have a:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import BigInteger, Column
from sqlalchemy.orm import declarative_base

base = declarative_base()


class Users(base):
    __tablename__ = "users"

    user_id = Column(BigInteger, primary_key=True, autoincrement=True)
</code></pre>
<p>And suppose I have a list of user_ids.</p>
<p>What is the most performant way to check if the user_id exists in my table?</p>
<p>So far I have:</p>
<pre class="lang-py prettyprint-override"><code>user_ids_list = [1, 2, 3, 999]
validated_user_ids = [id if Merchant.get(left_id) for id in user_ids_list]
</code></pre>
<p>Bonus question:</p>
<p>What about just one user_id? What would be the most performant way?</p>
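<p>For reference, a sketch of what I suspect is the direction to take (assuming SQLAlchemy 1.4+ and an open <code>session</code>):</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import select

# one round-trip for the whole list: fetch only the ids that exist
existing_ids = set(
    session.scalars(select(Users.user_id).where(Users.user_id.in_(user_ids_list)))
)

# for a single id, an EXISTS avoids pulling the row back at all
one_exists = session.query(
    session.query(Users).filter(Users.user_id == 1).exists()
).scalar()
</code></pre>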
|
<python><sqlalchemy>
|
2022-12-30 22:06:36
| 1
| 318
|
Viet Than
|
74,966,045
| 898,042
|
how to extract from sqs json msg nested values in python?
|
<p>my aws chain: lambda->sns->sqs->python script on ec2</p>
<p>The Python script gives me an error because it is unable to extract the values I need; I think the structure is wrong, but I can't see the bug.</p>
<p>I can't get the values from the "vote" field in the SQS message. How can I do that?</p>
<p>The test event structure (raw msg delivery enabled):</p>
<pre><code>{
"body": {
"MessageAttributes": {
"vote": {
"Type": "Number",
"Value": "90"
},
"voter": {
"Type": "String",
"Value": "default_voter"
}
}
}
}
</code></pre>
<p>the python script that processes sqs msg:</p>
<pre><code>#!/usr/bin/env python3
import boto3
import json
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)

queue = boto3.resource('sqs', region_name='us-east-1').get_queue_by_name(QueueName="erjan")
table = boto3.resource('dynamodb', region_name='us-east-1').Table('Votes')


def process_message(message):
    try:
        payload = json.loads(message.body)  # unable to parse sqs json msg here
        #payload = message.body
        #payload = message['body']
        voter = payload['MessageAttributes']['voter']['Value']  # here the exception is raised!
        vote = payload['MessageAttributes']['vote']['Value']
        logging.info("Voter: %s, Vote: %s", voter, vote)
        store_vote(voter, vote)
        update_count(vote)
        message.delete()
    except Exception as e:
        print('x = msg.body')
        x = (message.body)
        print(x)
        print('-------')
        print('message.body')
        print(message.body)
        try:
            vote = x['MessageAttributes']['vote']['Value']
            logging.error("Failed to process message")
            logging.error('------- here: ' + str(e))
            logging.error('vote %d' % vote)
        except TypeError:
            logging.error("error catched")


def store_vote(voter, vote):
    try:
        logging.info('table put item.......')
        print('table put item......')
        response = table.put_item(
            Item={'voter': voter, 'vote': vote}
        )
    except:
        logging.error("Failed to store message")
        raise


def update_count(vote):
    logging.info('update count....')
    print('update count....')
    table.update_item(
        Key={'voter': 'count'},
        UpdateExpression="set #vote = #vote + :incr",
        ExpressionAttributeNames={'#vote': vote},
        ExpressionAttributeValues={':incr': 1}
    )


if __name__ == "__main__":
    while True:
        try:
            messages = queue.receive_messages()
        except KeyboardInterrupt:
            logging.info("Stopping...")
            break
        except:
            logging.error(sys.exc_info()[0])
            logging.info('here error - we continue')
            continue
        for message in messages:
            process_message(message)
</code></pre>
<p>the error msg i get:</p>
<pre><code> payload = json.loads(message)
File "/usr/lib64/python3.7/json/__init__.py", line 341, in loads
raise TypeError(f'the JSON object must be str, bytes or bytearray, '
TypeError: the JSON object must be str, bytes or bytearray, not sqs.Message
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./processor.py", line 90, in <module>
process_message(message)
File "./processor.py", line 40, in process_message
vote = x['MessageAttributes']['vote']['Value']
TypeError: string indices must be integers
</code></pre>
<p>I understand it's an SQS message with a different structure than plain JSON. I tried</p>
<pre><code>json.loads(message.body)
</code></pre>
<p>but the SQS message is generated with an empty message body; even though the test event above has a "body" field, it's not the same. When I just debug and print(message),</p>
<p>it gives this output:</p>
<pre><code>sqs.Message(queue_url='https://sqs.us-east-1.amazonaws.com/025416187662/erjan', receipt_handle='AQEBAz3JiGRwss1ROOr2R8GkpBWwr7tJ1tUDUa7JurX7H6SxoF6gyj7YOxoLuU1/KcHpBIowon12Vle97mJ/cZFUIjzJon78zIIcVSVLrZbKPBABztUeE/Db0ALCMncVXpHWXk76hZVLCC+LHMsi8E5TveZ7+DbTdyDX
U6djTI1VcKpUjEoKLV9seN6JIEZV35r3fgbipHsX897IqTVvjhb0YADt6vTxYQTM1kVMEPBo5oNdTWqn6PfmoYJfZbT1GHMqphTluEwVuqBzux2kPSMtluFk3yk4XXwPJS304URJ7srMksUdoVTemA56OsksVZzXT4AcS8sm8Y3SO2PLLjZSV+7Vdc6JZlX7gslvVSADBlXw5BJCP/Rb9mA2xI9FOyW4')
</code></pre>
<p>I think the auto-generated SQS message has it hidden somewhere.</p>
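<p>A sketch of where I suspect the data lives: with raw message delivery, the SNS attributes should arrive as SQS <em>message attributes</em>, not in the body, and they have to be requested explicitly (this is my reading of boto3's SQS resource API):</p>
<pre><code>messages = queue.receive_messages(MessageAttributeNames=["All"])
for message in messages:
    attrs = message.message_attributes or {}
    voter = attrs["voter"]["StringValue"]
    vote = attrs["vote"]["StringValue"]  # Number attributes also arrive as strings
</code></pre>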
|
<python><json><amazon-web-services><amazon-sqs>
|
2022-12-30 21:47:10
| 1
| 24,573
|
ERJAN
|
74,965,989
| 13,860,217
|
Python list forward fill elements according to thresholds
|
<p>I have a list</p>
<pre><code>a = ["Today, 30 Dec",
"01:10",
"02:30",
"Tomorrow, 31 Dec",
"00:00",
"04:30",
"05:30",
"01 Jan 2023",
"01:00",
"10:00"]
</code></pre>
<p>and would like to kind of forward fill this list so that the result looks like this</p>
<pre><code>b = ["Today, 30 Dec 01:10",
"Today, 30 Dec 02:30",
"Tomorrow, 31 Dec 00:00",
"Tomorrow, 31 Dec 04:30",
"Tomorrow, 31 Dec 05:30",
"01 Jan 2023 01:00",
"01 Jan 2023 10:00"]
</code></pre>
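<p>A minimal sketch of the forward fill I have in mind (it leans on the assumption that time entries contain a colon while date headers never do):</p>
<pre><code>b = []
current_date = None
for item in a:
    if ":" in item:              # times contain a colon, date headers don't
        b.append(f"{current_date} {item}")
    else:
        current_date = item      # remember the latest date header
</code></pre>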
|
<python><list><fill><forward>
|
2022-12-30 21:38:24
| 3
| 377
|
Michael
|
74,965,968
| 9,537,439
|
How can I modify the grid parameters from an trained ML model?
|
<p>I have trained an <code>xgboost.XGBClassifier</code> model with <code>GridSearchCV</code>, when calling <code>grid_search_xgb.best_estimator_.get_params()</code> to obtain the best parameters of that model I get this:</p>
<pre><code>{'objective': 'binary:logistic',
...
'missing': nan,
'monotone_constraints': '()',
'n_estimators': 1000,
...
}
</code></pre>
<p>From a plot I did, I know that this model is overfitted. However, if <code>n_estimators = 123</code>, then the training and test evaluation metric are very similar (minimum overfitting). Hence, I will train the model again <strong>only replacing the <code>n_estimators</code></strong> with 123 instead of 1000, with this piece of code:</p>
<pre><code>optimal_params_grid = grid_search_xgb.best_estimator_.get_params()
optimal_params_grid['n_estimators'] = 123
</code></pre>
<p>Which works perfectly! However, when I train that model again:</p>
<pre><code>model_xgb = XGBClassifier()
grid_search_xgb = GridSearchCV(model_xgb, optimal_params_grid, cv=5, verbose=1, n_jobs=-1)
grid_search_xgb.fit(X_train, y_train, eval_set = [(X_train,y_train),(X_test,y_test)])
</code></pre>
<p>It raises this error:</p>
<pre><code>TypeError: Parameter grid for parameter 'objective' needs to be a list or a numpy array, but got 'binary:logistic' (of type str) instead. Single values need to be wrapped in a list with one element.
</code></pre>
<p>This is because the dictionary is not in the right format. Each value should be wrapped in a list, like:</p>
<pre><code>{'objective': ['binary:logistic'],
...
}
</code></pre>
<p>However, I can't find a way to add brackets to every value, and at the same time be 100% sure that it was done correctly. I read somewhere that when I call a dictionary (or something in a dictionary), the order is not always the same. Hence, I'm afraid of replacing the wrong value in the wrong key.</p>
<p><strong>Problems/Questions</strong></p>
<ol>
<li>Is there any way this can be done 100% correctly?</li>
<li>As a second question, I'm wondering if there is no a more straightforward alternative to pick the latest model and modify it's number of estimators. For example, it would be nice if I could just pick the model when the estimator number 123 was trained. Is that possible? Or the only alternative is to train it again with <code>n_estimators=123</code>?</li>
</ol>
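<p>For reference, two sketches of what I am considering (both are my own guesses, not verified answers): wrapping the values via a dict comprehension, which cannot mix up keys since each value stays attached to its own key; or skipping the second grid search entirely and refitting with the parameter set directly:</p>
<pre><code># 1) wrap every value in a one-element list, key by key
param_grid = {k: [v] for k, v in optimal_params_grid.items()}

# 2) or set the parameter directly and refit, without GridSearchCV
model_xgb = XGBClassifier(**optimal_params_grid)
model_xgb.set_params(n_estimators=123)
model_xgb.fit(X_train, y_train, eval_set=[(X_train, y_train), (X_test, y_test)])
</code></pre>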
|
<python><machine-learning><scikit-learn><parameters><grid-search>
|
2022-12-30 21:33:58
| 1
| 2,081
|
Chris
|
74,965,894
| 3,247,006
|
By default, is transaction used in Django Admin Actions?
|
<p>I know that by default, a transaction is used in <strong>Django Admin</strong> when adding, changing and deleting data, according to my tests.</p>
<p>But, I selected <strong>Delete selected persons</strong> and clicked on <strong>Go</strong> in <strong>Django Admin Actions</strong>. *I use PostgreSQL:</p>
<p><a href="https://i.sstatic.net/RgIy9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RgIy9.png" alt="enter image description here" /></a></p>
<p>Then, clicked on <strong>Yes, I'm sure</strong> to delete data:</p>
<p><a href="https://i.sstatic.net/Tp4zN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tp4zN.png" alt="enter image description here" /></a></p>
<p>Now, <strong>only one <code>DELETE</code> query</strong> is run, without one or more other queries between <code>BEGIN</code> and <code>COMMIT</code>, as shown below, so I doubt that by default a transaction is used in <strong>Django Admin Actions</strong>. *These below are <strong>the PostgreSQL query logs</strong> and you can check <a href="https://stackoverflow.com/questions/54780698/postgresql-database-log-transaction/73432601#73432601"><strong>how to log PostgreSQL queries</strong></a> :</p>
<p><a href="https://i.sstatic.net/hbnU3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hbnU3.png" alt="enter image description here" /></a></p>
<p>So, by default, is a transaction used in <strong>Django Admin Actions</strong>?</p>
|
<python><django><transactions><django-admin><django-admin-actions>
|
2022-12-30 21:23:17
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
74,965,764
| 11,316,253
|
How can I properly hash dictionaries with a common set of keys, for deduplication purposes?
|
<p>I have some log data like:</p>
<pre><code>logs = [
{'id': '1234', 'error': None, 'fruit': 'orange'},
{'id': '12345', 'error': None, 'fruit': 'apple'}
]
</code></pre>
<p>Each dict has the same keys: <code>'id'</code>, <code>'error'</code> and <code>'fruit'</code> (in this example).</p>
<p>I want to <a href="https://stackoverflow.com/questions/7961363">remove duplicates</a> from this list, but straightforward <code>dict</code> and <code>set</code> based approaches do not work because my elements are themselves <code>dict</code>s, which are <a href="https://stackoverflow.com/questions/1151658">not hashable</a>:</p>
<pre><code>>>> set(logs)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
</code></pre>
<p>Another approach is to <a href="https://stackoverflow.com/questions/2213923">sort and use itertools.groupby</a> - but dicts are also not comparable, so this also does not work:</p>
<pre><code>>>> from itertools import groupby
>>> [k for k, _ in groupby(sorted(logs))]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: '<' not supported between instances of 'dict' and 'dict'
</code></pre>
<p>I had the idea to calculate a hash value for each log entry, and store it in a <code>set</code> for comparison, like so:</p>
<pre><code>def compute_hash(log_dict: dict):
    return hash(log_dict.values())


def deduplicate(logs):
    already_seen = set()
    for log in logs:
        log_hash = compute_hash(log)
        if log_hash in already_seen:
            continue
        already_seen.add(log_hash)
        yield log
</code></pre>
<p>However, I found that <code>compute_hash</code> would give the same hash for different dictionaries, even ones with completely bogus contents:</p>
<pre><code>>>> logs = [{'id': '123', 'error': None, 'fruit': 'orange'}, {}]
>>> # The empty dict will be removed; every dict seems to get the same hash.
>>> list(deduplicate(logs))
[{'id': '123', 'error': None, 'fruit': 'orange'}]
</code></pre>
<p>After some experimentation, I was seemingly able to fix the problem by modifying <code>compute_hash</code> like so:</p>
<pre><code>def compute_hash(log_dict: dict):
    return hash(frozenset(log_dict.values()))
</code></pre>
<p>However, I cannot understand why this makes a difference. <strong>Why</strong> did the original version seem to give the same hash for every input dict? Why does converting the <code>.values</code> result to a <code>frozenset</code> first fix the problem?
Aside from that: <strong>is this algorithm correct</strong>? Or is there some counterexample where the wrong values will be removed?</p>
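<p>For comparison, a sketch of a variant I am considering that also mixes in the keys; my worry is that the <code>frozenset</code>-of-values version would collide for dicts that differ only in their keys, or that repeat a value (this assumes all keys are mutually comparable, which holds for my string keys):</p>
<pre><code>def compute_hash(log_dict: dict) -> int:
    # hash keys *and* values together; sorting by key gives a stable order
    return hash(tuple(sorted(log_dict.items())))
</code></pre>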
<hr />
<p><sub>This question discusses how hashing works in Python, in depth, as well as considering other data structures that might be more appropriate than dictionaries for the list elements. See <a href="https://stackoverflow.com/questions/11092511">List of unique dictionaries</a> instead if you simply want to remove duplicates from a list of dictionaries.</sub></p>
|
<python><python-3.x>
|
2022-12-30 21:00:14
| 3
| 549
|
dCoder
|
74,965,747
| 667,570
|
Python AudioPlayer on Chromebook fails to play file
|
<p>I have this python program:</p>
<pre><code>from gtts import gTTS
from audioplayer import AudioPlayer
speech = gTTS('Hello, world!')
speech.save('test.mp3')
AudioPlayer('test.mp3').play(block=True)
</code></pre>
<p>This runs fine on my Macbook. When I try to run it on my chromebook I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/me/test.py", line 6, in <module>
AudioPlayer('test.mp3').play(block=True)
File "/home/me/.local/lib/python3.9/site-packages/audioplayer/abstractaudioplayer.py", line 114, in play
self._doplay(loop, block)
File "/home/me/.local/lib/python3.9/site-packages/audioplayer/audioplayer_linux.py", line 45, in _doplay
raise AudioPlayerError(
audioplayer.abstractaudioplayer.AudioPlayerError: Failed to play "/home/me/test.mp3"
</code></pre>
<p>The AudioPlayer pypi page (<a href="https://pypi.org/project/audioplayer/" rel="nofollow noreferrer">https://pypi.org/project/audioplayer/</a>) lists packages to install. The very first one I try, python-gst-1.0 isn't found.</p>
<pre><code>sudo apt-get install python-gst-1.0
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package python-gst-1.0
E: Couldn't find any package by glob 'python-gst-1.0'
E: Couldn't find any package by regex 'python-gst-1.0'
</code></pre>
<p>Not sure what to do next.</p>
|
<python><chromebook>
|
2022-12-30 20:57:07
| 0
| 1,005
|
Sol
|
74,965,734
| 3,614,036
|
Building opencv-python from source with videoio off results in file not found error
|
<p>The error:</p>
<pre><code> /opencv-python/opencv/modules/gapi/include/opencv2/gapi/streaming/cap.hpp:26:10: fatal error: opencv2/videoio.hpp: No such file or directory
#include <opencv2/videoio.hpp>
</code></pre>
<p>The docker image command that fails:</p>
<pre><code>RUN pip wheel . --verbose
</code></pre>
<p>Here are my cmake args:</p>
<pre><code>ENV CMAKE_ARGS="\
-D BUILD_JAVA=OFF \
-D BUILD_PERF_TESTS=ON \
-D BUILD_TESTS=ON \
-D BUILD_opencv_apps=OFF \
-D BUILD_opencv_freetype=OFF \
-D BUILD_opencv_calib3d=OFF \
-D BUILD_opencv_videoio=OFF \
-D BUILD_opencv_python2=OFF \
-D BUILD_opencv_python3=ON \
-D WITH_GSTREAMER=OFF \
-D VIDEOIO_ENABLE_PLUGINS=OFF \
-D ENABLE_FAST_MATH=1 \
-D ENABLE_PRECOMPILED_HEADERS=OFF \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D INSTALL_TESTS=OFF"
</code></pre>
<p>I realize that the file is not found because I have videoio off, but it should not be looking for the file in the first place. Any advice?</p>
<p>I've tried</p>
<pre><code>-D WITH_GSTREAMER=OFF
</code></pre>
<p>but no success.</p>
|
<python><docker><opencv>
|
2022-12-30 20:54:45
| 1
| 415
|
Joey Grisafe
|
74,965,731
| 17,301,834
|
Default arguments in __init__ of class inheriting from int
|
<p>I feel like I'm losing my mind. I was working on a simple project involving a subclass of <code>int</code>, which may not be the best idea, but I need to inherit many of <code>int</code>'s magic methods for integer operations.</p>
<p>I added a default argument <code>wraps=True</code> to the class' <code>__init__</code> method but started getting an unexpected <code>TypeError</code>, where my default argument was being assumed as a keyword argument for <code>int</code>.</p>
<p>The section of the code could be simplified to this:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass(int):
    def __init__(self, a, *args, b=False, **kwargs):
        super().__init__(*args, **kwargs)
        self.a = a
        self.b = b
</code></pre>
<p>and for some reason <code>b</code> (or in my case, <code>wraps</code>) was being passed as a keyword argument to <code>int</code>, which of course, didn't work.</p>
<p>After some testing, I found that the same problem occurred with <code>str</code> but not with most other classes, which means (I'm guessing) that the problem arises from inheriting from an immutable class. Am I correct, and is there a way around this problem?</p>
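<p>A sketch of the workaround I am weighing, on the assumption that immutable built-ins consume their constructor arguments in <code>__new__</code> rather than <code>__init__</code>, so the extra keyword has to be intercepted there:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass(int):
    def __new__(cls, a, *args, b=False, **kwargs):
        # int consumes its value in __new__, so strip the extra keyword here
        return super().__new__(cls, a, *args, **kwargs)

    def __init__(self, a, *args, b=False, **kwargs):
        self.a = a
        self.b = b

x = MyClass(5, b=True)
print(x + 1, x.a, x.b)  # 6 5 True
</code></pre>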
<p>Thanks in advance.</p>
|
<python><python-3.x><inheritance><immutability>
|
2022-12-30 20:54:18
| 0
| 459
|
user17301834
|
74,965,720
| 866,082
|
How to make my protobuf objects pickle-able?
|
<p>I have this protobuf file called <code>HistogramBins.proto</code> like this:</p>
<pre class="lang-protobuf prettyprint-override"><code>syntax = "proto3";
message HistogramBins {
string field_name = 1;
repeated float bins = 2;
}
</code></pre>
<p>which I compile it this way:</p>
<pre><code>protoc --python_out=. ./HistogramBins.proto
</code></pre>
<p>And this will generate a file called <code>HistogramBins_pb2.py</code> for me. Then I'll try to serialize an Apache Beam stream using this class:</p>
<pre class="lang-py prettyprint-override"><code>import apache_beam as beam
from components.HistogramBins_pb2 import HistogramBins
from apache_beam.options.pipeline_options import PipelineOptions


class ToProtoFn(beam.DoFn):
    def process(self, t):
        hBin = HistogramBins()
        hBin.field_name = t[0]
        hBin.bins.extend(t[1])
        print(hBin)
        yield hBin


with beam.Pipeline(options=PipelineOptions()) as p:
    input_collection = (
        p
        | 'Read input data' >> beam.Create([("f1", [1.0, 2.0]), ("f2", [3.0, 4.0])])
        | 'record to HistogramBins' >> beam.ParDo(ToProtoFn())
        | beam.io.WriteToText('data/test2.pbtxt',
                              coder=beam.coders.ProtoCoder(HistogramBins().__class__))
    )
</code></pre>
<p>But it fails with the following error message (this is the final line):</p>
<pre><code>_pickle.PicklingError: Can't pickle <class 'HistogramBins_pb2.HistogramBins'>: it's not found as HistogramBins_pb2.HistogramBins
</code></pre>
<p>Meanwhile, the following example works just fine. This time, I'm using a protobuf from the Google package:</p>
<pre class="lang-py prettyprint-override"><code>import apache_beam as beam
from google.protobuf.timestamp_pb2 import Timestamp
from apache_beam.options.pipeline_options import PipelineOptions


class ToProtoFn(beam.DoFn):
    def process(self, element):
        timestamp = Timestamp()
        timestamp.seconds, timestamp.nanos = [int(x) for x in element.strip().split(',')]
        print(timestamp)
        yield timestamp


with beam.Pipeline(options=PipelineOptions()) as p:
    lines = (p
             | beam.Create(["1586753000,222333000", "1586754000,222333000"])
             | beam.ParDo(ToProtoFn())
             | beam.io.WriteToText('time-pb',
                                   coder=beam.coders.ProtoCoder(Timestamp().__class__)))
</code></pre>
<p>What is the difference between my protobuf and theirs?</p>
<p>This is Google's protobuf file:</p>
<pre class="lang-protobuf prettyprint-override"><code>syntax = "proto3";
package google.protobuf;
option csharp_namespace = "Google.Protobuf.WellKnownTypes";
option cc_enable_arenas = true;
option go_package = "github.com/golang/protobuf/ptypes/timestamp";
option java_package = "com.google.protobuf";
option java_outer_classname = "TimestampProto";
option java_multiple_files = true;
option objc_class_prefix = "GPB";
message Timestamp {
int64 seconds = 1;
int32 nanos = 2;
}
</code></pre>
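<p>A guess at a workaround, assuming the cause is a module-name mismatch: the class was imported as <code>components.HistogramBins_pb2.HistogramBins</code>, but the pickling error suggests it was registered under the plain name <code>HistogramBins_pb2</code>. Making that name importable might help (the path handling below is illustrative):</p>
<pre class="lang-py prettyprint-override"><code>import sys

# make the module importable under the exact name pickle looks for
sys.path.insert(0, "components")
from HistogramBins_pb2 import HistogramBins  # module name now matches
</code></pre>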
|
<python><protocol-buffers><apache-beam>
|
2022-12-30 20:52:47
| 0
| 17,161
|
Mehran
|
74,965,715
| 11,923,747
|
Split dataframe according to common consecutive sequences
|
<p>Let's consider this DataFrame :</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"type" : ["dog", "cat", "whale", "cat", "cat", "lion", "dog"],
"status" : [False, True, True, False, False, True, True],
"age" : [4, 6, 7, 7, 1, 7, 5]})
</code></pre>
<p>It looks like that :</p>
<pre><code> type status age
0 dog False 4
1 cat True 6
2 whale True 7
3 cat False 7
4 cat False 1
5 lion True 7
6 dog True 5
</code></pre>
<p>I want to split this dataframe according to <strong>consecutive identical values</strong> in the column status.
The result is stored in a list.</p>
<p>Here I write the expected result manually:</p>
<pre><code>result = [df.loc[[0],:], df.loc[1:2,:], df.loc[3:4,:], df.loc[5:6,:]]
</code></pre>
<p>So result[0] is this dataframe:</p>
<pre><code> type status age
0 dog False 4
</code></pre>
<p>result[1] is this dataframe:</p>
<pre><code> type status age
1 cat True 6
2 whale True 7
</code></pre>
<p>result[2] is this dataframe:</p>
<pre><code> type status age
3 cat False 7
4 cat False 1
</code></pre>
<p>result[3] is this dataframe:</p>
<pre><code> type status age
5 lion True 7
6 dog True 5
</code></pre>
<p>What is the most efficient way to do that?</p>
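<p>A minimal sketch of the approach I have in mind (labelling each run of equal consecutive status values, then splitting on the labels):</p>
<pre><code>runs = df["status"].ne(df["status"].shift()).cumsum()
result = [group for _, group in df.groupby(runs)]
</code></pre>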
|
<python><pandas><dataframe><split><sequence>
|
2022-12-30 20:52:16
| 1
| 321
|
floupinette
|