QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (string date, 2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable)
|---|---|---|---|---|---|---|---|---|
75,133,547
| 3,337,597
|
Using decorators causes the 'parameter unfilled' warning not to be shown for function calls (Python)
|
<p>I have a function named 'sample_func'. As you can see when I call the function without passing the required parameter, there is a red warning.</p>
<p><a href="https://i.sstatic.net/H3vFJ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3vFJ.jpg" alt="enter image description here" /></a></p>
<p>I have a decorator called 'decorators' in a separate file:</p>
<pre><code>def decorator1(func):
def wrapper(arg1):
print('decorator 1')
return func(arg1)
return wrapper
</code></pre>
<p>Now if I decorate the 'sample_func' with the 'decorator1' and call the function without passing the parameter, there is no red warning anymore.</p>
<p><a href="https://i.sstatic.net/G0bGC.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G0bGC.jpg" alt="enter image description here" /></a></p>
<p>My question is: how can I get this warning back? I've seen this: <a href="https://stackoverflow.com/a/29569355/3337597">https://stackoverflow.com/a/29569355/3337597</a> but I'm a little confused. If I specify the acceptable type for the argument and pass another type, I'll get an 'unexpected type' warning.</p>
<p><a href="https://i.sstatic.net/Wb42K.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wb42K.jpg" alt="enter image description here" /></a></p>
<p>I wonder why I'll get the 'unexpected type' warning but won't get the 'parameter unfilled' warning.</p>
<p>I got more confused when I tried this:</p>
<p><a href="https://i.sstatic.net/y18tm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y18tm.jpg" alt="enter image description here" /></a></p>
<p>When I define 'decorator1' in the same file as 'sample_func', I get the 'parameter unfilled' warning, but I don't when the decorator is defined in another file.</p>
<p>I appreciate any explanation of why this happens.</p>
<p>Also, I appreciate any solution that restores the 'parameter unfilled' warning, even if it uses another pattern. I'm developing a package in which most of the functions are decorated to check whether the user is logged in (or other conditions), and it would be a bit annoying to use if there is no such warning and you only realize at runtime that you missed a parameter or passed an unexpected one.</p>
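<p>For reference, a minimal sketch of what I understand the linked answer to suggest (assuming <code>functools.wraps</code> is the mechanism it refers to), so the wrapper advertises the original function's signature metadata:</p>
<pre><code>import functools

def decorator1(func):
    @functools.wraps(func)  # copies name, docstring and signature metadata onto wrapper
    def wrapper(arg1):
        print('decorator 1')
        return func(arg1)
    return wrapper
</code></pre>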
|
<python><python-3.x><decorator><python-decorators>
|
2023-01-16 11:25:00
| 0
| 805
|
Reyhaneh Sharifzadeh
|
75,133,515
| 4,555,441
|
Get list of column names with cells satisfying a condition and store in a separate column for each row in df
|
<p>I have a df with a structure similar to the data below, but with more columns and more rows. How do I get the expected result? My code gives an error - I interpret it as meaning a list cannot be saved in a df cell? How can I avoid loops if possible?</p>
<pre><code>data = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 0, 1], [0, 0, 0]]
df = pd.DataFrame(data, columns=["choice_a", "choice_b", "choice_c"])
Expected result
choice_a choice_b choice_c choices
index
0 0 0 1 ['c']
1 0 1 0 ['b']
2 1 0 0 ['a']
3 1. 0. 1 ['a','b']
4 0. 0. 0 NA
</code></pre>
<p>My code</p>
<pre><code>df['choices']=0
for i in np.arange(df.shape[0]):
choice_list = []
for j in np.arange(len(df.columns)):
if df.iloc[i,j]==1:
choice_list.append(df.columns[j].split('_')[1])
df.iloc[i,4]=choice_list
</code></pre>
<p>Error I am getting</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/var/folders/65/3mqr9fpn37jf2xt2pxbcgp_w0000gn/T/ipykernel_1513/2279334138.py in <module>
5 if main_dataset.iloc[i,j]==1:
6 choice_list.append(main_dataset.columns[j].split('_')[1])
----> 7 main_dataset.iloc[i,5]=choice_list
~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/indexing.py in __setitem__(self, key, value)
714
715 iloc = self if self.name == "iloc" else self.obj.iloc
--> 716 iloc._setitem_with_indexer(indexer, value, self.name)
717
718 def _validate_key(self, key, axis: int):
~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/indexing.py in _setitem_with_indexer(self, indexer, value, name)
1689 if take_split_path:
1690 # We have to operate column-wise
-> 1691 self._setitem_with_indexer_split_path(indexer, value, name)
1692 else:
1693 self._setitem_single_block(indexer, value, name)
~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/indexing.py in _setitem_with_indexer_split_path(self, indexer, value, name)
1744 return self._setitem_with_indexer((pi, info_axis[0]), value[0])
1745
-> 1746 raise ValueError(
1747 "Must have equal len keys and value "
1748 "when setting with an iterable"
ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
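<p>To make the expected structure concrete, here is a small loop-free sketch of the result I am after (using <code>df.apply</code>; this is only an illustration on my part, not necessarily the fastest approach):</p>
<pre><code>import numpy as np
import pandas as pd

data = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 0, 1], [0, 0, 0]]
df = pd.DataFrame(data, columns=["choice_a", "choice_b", "choice_c"])

# keep the suffix of every column whose cell equals 1; NaN when no column matches
suffixes = [c.split('_')[1] for c in df.columns]
df['choices'] = df.apply(
    lambda row: [s for c, s in zip(df.columns, suffixes) if row[c] == 1] or np.nan,
    axis=1,
)
</code></pre>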
|
<python><pandas>
|
2023-01-16 11:22:16
| 2
| 648
|
pranav nerurkar
|
75,133,458
| 1,354,400
|
Polars str.starts_with() with values from another column
|
<p>I have a polars DataFrame for example:</p>
<pre class="lang-py prettyprint-override"><code>>>> df = pl.DataFrame({'A': ['a', 'b', 'c', 'd'], 'B': ['app', 'nop', 'cap', 'tab']})
>>> df
shape: (4, 2)
┌─────┬─────┐
│ A   │ B   │
│ --- │ --- │
│ str │ str │
╞═════╪═════╡
│ a   │ app │
│ b   │ nop │
│ c   │ cap │
│ d   │ tab │
└─────┴─────┘
</code></pre>
<p>I'm trying to get a third column <code>C</code> which is <code>True</code> if strings in column <code>B</code> starts with the strings in column <code>A</code> of the same row, otherwise, <code>False</code>. So in the case above, I'd expect:</p>
<pre class="lang-py prettyprint-override"><code>┌─────┬─────┬───────┐
│ A   │ B   │ C     │
│ --- │ --- │ ---   │
│ str │ str │ bool  │
╞═════╪═════╪═══════╡
│ a   │ app │ true  │
│ b   │ nop │ false │
│ c   │ cap │ true  │
│ d   │ tab │ false │
└─────┴─────┴───────┘
</code></pre>
<p>I'm aware of the <code>df['B'].str.starts_with()</code> function but passing in a column yielded:</p>
<pre class="lang-py prettyprint-override"><code>>>> df['B'].str.starts_with(pl.col('A'))
... # Some stuff here.
TypeError: argument 'sub': 'Expr' object cannot be converted to 'PyString'
</code></pre>
<p>What's the way to do this? In pandas, you would do:</p>
<pre class="lang-py prettyprint-override"><code>df.apply(lambda d: d['B'].startswith(d['A']), axis=1)
</code></pre>
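<p>For what it's worth, this is the expression-based sketch I was hoping would work (assuming a recent polars version where <code>str.starts_with</code> accepts an expression rather than only a literal string):</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

# add column C: does B start with the value of A in the same row?
df = df.with_columns(
    pl.col('B').str.starts_with(pl.col('A')).alias('C')
)
print(df)
</code></pre>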
|
<python><python-polars>
|
2023-01-16 11:16:11
| 3
| 902
|
Syafiq Kamarul Azman
|
75,133,401
| 15,215,859
|
In Airflow version 2, create a DAG to archive a MySQL table into BigQuery
|
<p>I basically want to fetch the data older than 2 weeks from a MySQL table called "testing_monitor_archive" and put it into a BigQuery table "monitoring_table". I am currently using Airflow version 2.1.4. Below is the DAG I have created; it fails in "GCSCreateBucketOperator" saying the arguments {provide_context, python_callable, object} are invalid, which I agree with, since these were parameters of the "GoogleCloudStorageBucketOperator", which is deprecated in version 2. Since "GCSCreateBucketOperator" has no such upload parameters, how can I upload the results of the MySQL task "fetch_data" as a CSV into my Cloud Storage bucket and then load it into BigQuery? Please help.</p>
<p>My code -</p>
<pre><code>from datetime import timedelta, datetime
from airflow import DAG
from airflow.operators.mysql_operator import MySqlOperator
from airflow.providers.google.cloud.operators.gcs import GCSCreateBucketOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.utils.dates import days_ago
# Default arguments for the DAG
default_args = {
'owner': 'airflow',
'start_date': days_ago(2),
'depends_on_past': False,
'retries': 1,
'retry_delay': timedelta(minutes=5),
}
# Create a new DAG
dag = DAG(
'my_data_pipeline',
default_args=default_args,
schedule_interval='0 0 * * *', # run daily at midnight
)
# SQL query to fetch data older than 2 weeks
query = """
SELECT * FROM testing_monitor_archive
WHERE CREATE_TS < DATE_SUB(NOW(), INTERVAL 2 WEEK)
"""
# Create a task to execute the SQL query
fetch_data = MySqlOperator(
task_id='fetch_data',
mysql_conn_id='my_mysql_connection', # connection details for the MySQL database
sql=query,
dag=dag,
)
# GCS bucket and object name where the data will be stored
bucket = 'mysql-archive-gcs-bucket'
object_name = 'data/{{ execution_date }}.csv'
def retrieve_data(**kwargs):
ti= kwargs['ti']
data=ti.xcom_pull(task_ids="fetch_data")
return data
# Create a task to upload the data to GCS
upload_to_gcs = GCSCreateBucketOperator(
task_id='upload_to_gcs',
# src='{{ task_instance.xcom_pull(task_ids="fetch_data") }}', # source is the output of the fetch_data task
bucket_name=bucket,
provide_context=True,
python_callable=retrieve_data,
object=object_name,
gcp_conn_id='gcp_conn_id', # connection details for GCS
dag=dag,
)
# BigQuery dataset and table name where the data will be loaded
dataset_id = 'archive_dataset'
table_id = 'monitoring_table'
# Create a task to load the data from GCS to BigQuery
load_to_bigquery = GCSToBigQueryOperator(
task_id='load_to_bigquery',
bucket=bucket,
source_objects=[object_name],
destination_project_dataset_table=f"{dataset_id}.{table_id}",
google_cloud_storage_conn_id='gcp_conn_id', # connection details for GCS
bigquery_conn_id='gcp_conn_id', # connection details for BigQuery
dag=dag,
)
# Set the dependencies between the tasks
fetch_data >> upload_to_gcs >> load_to_bigquery
</code></pre>
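<p>For clarity, here is a minimal sketch of the direction I am considering (an assumption on my side, not working code): replace <code>GCSCreateBucketOperator</code> with a <code>PythonOperator</code> that pulls the query result from XCom and uploads it with <code>GCSHook</code>, assuming the <code>fetch_data</code> task actually pushes its rows to XCom and reusing the <code>bucket</code>, <code>object_name</code> and <code>dag</code> variables defined above:</p>
<pre><code>import csv
import io

from airflow.operators.python import PythonOperator
from airflow.providers.google.cloud.hooks.gcs import GCSHook

def upload_rows_to_gcs(**context):
    # assumption: the fetch_data task pushed its result rows to XCom
    rows = context['ti'].xcom_pull(task_ids='fetch_data')
    buffer = io.StringIO()
    csv.writer(buffer).writerows(rows or [])
    GCSHook(gcp_conn_id='gcp_conn_id').upload(
        bucket_name=bucket,
        object_name=object_name,
        data=buffer.getvalue(),
    )

upload_to_gcs = PythonOperator(
    task_id='upload_to_gcs',
    python_callable=upload_rows_to_gcs,
    dag=dag,
)
</code></pre>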
|
<python><airflow><airflow-2.x>
|
2023-01-16 11:10:35
| 1
| 317
|
Tushaar
|
75,133,201
| 7,089,239
|
Incorrect format of labels in custom dataset for multi-label classification
|
<p>I'm trying to implement a custom <code>Dataset</code> for multi-label classification. That is, one element may have multiple classes simultaneously. I tried returning the one-hot encoded representation or the class indices from the dataset directly, but neither of them works.</p>
<ul>
<li>One-hot encoded produces <code>RuntimeError: expected scalar type Long but found Float</code> when calculating the loss.</li>
<li>Returning labels produces <code>IndexError: Target 1 is out of bounds.</code> when calculating the loss.</li>
</ul>
<p>Here's a dummy implementation:</p>
<pre><code>import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset
n_classes = 3
class Data(Dataset):
def __getitem__(self, index):
return torch.tensor([[0, 0, 0, 0.0]]), torch.tensor([0.0] * n_classes)
# return torch.tensor([[0, 0, 0, 0.0]]), torch.tensor(range(n_classes))
def __len__(self):
return 10
data_loader = DataLoader(Data())
model = nn.Sequential(nn.Linear(4, n_classes), nn.ReLU())
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
model.train()
for epoch in range(3):
print('EPOCH {}:'.format(epoch))
for inputs, labels in data_loader:
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
</code></pre>
<p>I couldn't really find any documentation or tutorials for implementing this sort of dataset, and the base <code>Dataset</code> <a href="https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset" rel="nofollow noreferrer">docs</a> are quite terse as well. Am I missing something, or doing it all wrong?</p>
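<p>For context, here is the small change I have been experimenting with (an assumption on my part, based on the idea that multi-label targets are usually float multi-hot vectors paired with <code>BCEWithLogitsLoss</code> instead of <code>CrossEntropyLoss</code>):</p>
<pre><code># sketch: multi-hot float target, e.g. classes 0 and 2 active out of n_classes = 3
target = torch.tensor([1.0, 0.0, 1.0])

model = nn.Linear(4, n_classes)      # raw logits, no ReLU on the output
loss_fn = nn.BCEWithLogitsLoss()     # accepts float targets shaped (batch, n_classes)
</code></pre>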
|
<python><pytorch>
|
2023-01-16 10:53:20
| 1
| 2,688
|
Felix
|
75,133,181
| 4,835,496
|
Find point where touching geometries touch each other
|
<p>I have a GeoDataFrame with LineStrings and want to get all coordinates where the geometries touch each other. I can find the ones that touch each other so far, but how do I get the coordinates of the touching point? It should be as fast as possible because I have a lot of data.</p>
<pre><code>data = gpd.read_file("data.geojson")
heads, tails = data.sindex.query_bulk(data.geometry, predicate="touches")
touching_dataframe = data.iloc[heads]
</code></pre>
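<p>In case it helps to see what I mean by "coordinates of the touching point", this is the sort of pairwise intersection I have in mind (a sketch; the Python loop is probably slow, which is exactly what I would like to avoid):</p>
<pre><code># the intersection of two touching LineStrings is the shared point (or points)
touch_points = [
    data.geometry.iloc[h].intersection(data.geometry.iloc[t])
    for h, t in zip(heads, tails)
]
</code></pre>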
|
<python><geopandas>
|
2023-01-16 10:51:11
| 1
| 1,681
|
Kewitschka
|
75,133,059
| 5,437,090
|
selenium.common.exceptions.StaleElementReferenceException | python
|
<p>I am doing web scraping using <code>selenium</code> in python with the following code:</p>
<pre><code>from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common import exceptions
def get_all_search_details(URL):
SEARCH_RESULTS = {}
options = Options()
options.headless = True
options.add_argument("--remote-debugging-port=9222") #
options.add_argument("--no-sandbox")
options.add_argument("--disable-gpu")
options.add_argument("--disable-dev-shm-usage")
options.add_argument("--disable-extensions")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)
driver.get(URL)
print(f"Scraping {driver.current_url}")
try:
medias = WebDriverWait(driver,timeout=5,).until(EC.presence_of_all_elements_located((By.CLASS_NAME, 'result-row')))
except exceptions.StaleElementReferenceException as e:
print(f">> {type(e).__name__}: {e.args}")
return
except exceptions.NoSuchElementException as e:
print(f">> {type(e).__name__}: {e.args}")
return
except exceptions.TimeoutException as e:
print(f">> {type(e).__name__}: {e.args}")
return
except exceptions.WebDriverException as e:
print(f">> {type(e).__name__}: {e.args}")
return
except exceptions.SessionNotCreatedException as e:
print(f">> {type(e).__name__}: {e.args}")
return
except Exception as e:
print(f">> {type(e).__name__} line {e.__traceback__.tb_lineno} of {__file__}: {e.args}")
return
except:
print(f">> General Exception: {URL}")
return
for media_idx, media_elem in enumerate(medias):
outer_html = media_elem.get_attribute('outerHTML')
result = scrap_newspaper(outer_html) # some external functions
SEARCH_RESULTS[f"result_{media_idx}"] = result
return SEARCH_RESULTS
if __name__ == '__main__':
in_url = "https://digi.kansalliskirjasto.fi/clippings?query=isokyr%C3%B6&categoryId=12&orderBy=RELEVANCE&page=3&resultMode=THUMB"
my_res = get_all_search_details(in_url)
</code></pre>
<p>I applied several <code>try except</code> blocks mentioned in the <a href="https://www.selenium.dev/selenium/docs/api/py/common/selenium.common.exceptions.html#selenium.common.exceptions.StaleElementReferenceException" rel="nofollow noreferrer">documentation</a> to ensure I would not get trapped in Selenium exceptions; however, here is the error I obtained:</p>
<pre><code>Traceback (most recent call last):
File "nationalbiblioteket_logs.py", line 277, in <module>
run()
File "nationalbiblioteket_logs.py", line 264, in run
all_queries(file_=get_query_log(QUERY=args.query),
File "nationalbiblioteket_logs.py", line 219, in all_queries
df = pd.DataFrame( df.apply( check_urls, axis=1, ) )
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/pandas/core/frame.py", line 8740, in apply
return op.apply()
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/pandas/core/apply.py", line 688, in apply
return self.apply_standard()
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/pandas/core/apply.py", line 812, in apply_standard
results, res_index = self.apply_series_generator()
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/pandas/core/apply.py", line 828, in apply_series_generator
results[i] = self.f(v)
File "nationalbiblioteket_logs.py", line 218, in <lambda>
check_urls = lambda INPUT_DF: analyze_(INPUT_DF)
File "nationalbiblioteket_logs.py", line 201, in analyze_
df["search_results"] = get_all_search_details(in_url)
File "/home/xenial/WS_Farid/DARIAH-FI/url_scraping.py", line 68, in get_all_search_details
outer_html = media_elem.get_attribute('outerHTML')
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/selenium/webdriver/remote/webelement.py", line 174, in get_attribute
self, name)
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 494, in execute_script
'args': converted_args})['value']
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 429, in execute
self.error_handler.check_response(response)
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 243, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
(Session info: headless chrome=110.0.5481.30)
</code></pre>
<p>What am I doing wrong in my Python script that causes this exception? I want to return <code>None</code> and exit the function when such an exception occurs.</p>
<p>Here are some more details regarding the libraries I use:</p>
<pre><code>>>> selenium.__version__
'4.5.0'
>>> webdriver_manager.__version__
'3.8.4'
</code></pre>
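<p>For completeness, this is the kind of guard I am considering adding around the loop that calls <code>get_attribute</code> (just a sketch; the stale reference is raised there, outside all of the <code>try/except</code> blocks above):</p>
<pre><code>for media_idx, media_elem in enumerate(medias):
    try:
        outer_html = media_elem.get_attribute('outerHTML')
    except exceptions.StaleElementReferenceException as e:
        # element detached from the DOM between the wait and this read
        print(f">> {type(e).__name__}: {e.args}")
        return None
    result = scrap_newspaper(outer_html)
    SEARCH_RESULTS[f"result_{media_idx}"] = result
</code></pre>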
|
<python><selenium><web-scraping><exception><staleelementreferenceexception>
|
2023-01-16 10:40:07
| 1
| 1,621
|
farid
|
75,133,025
| 10,710,625
|
using testing.assert_series_equal when series are not in the same order
|
<p>I have two series that are equal but in different order.</p>
<pre><code>data1 = np.array(['1','2','3','4','5','6'])
data2=np.array(['6','2','4','3','1','5'])
sr1 = pd.Series(data1)
sr2=pd.Series(data2)
</code></pre>
<p>the two series are outputs of different functions and I'm testing if they are equal:</p>
<pre><code>pd.testing.assert_series_equal(sr1,sr2,check_names=False)
</code></pre>
<p>This fails, of course, because the two series are not in the same order.
I checked the online documentation; it mentions <code>check_like</code>, but it does not work for me (I guess because I don't have the same version of pandas).
Is there a quick way to test, in a unit test, whether these two series are equal even if they are not in the same order, without updating any packages?</p>
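<p>The workaround I am currently considering is simply sorting both series and resetting the index before comparing (a sketch, in case it clarifies what "equal regardless of order" means here):</p>
<pre><code>pd.testing.assert_series_equal(
    sr1.sort_values().reset_index(drop=True),
    sr2.sort_values().reset_index(drop=True),
    check_names=False,
)
</code></pre>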
|
<python><pandas><dataframe><series><python-unittest>
|
2023-01-16 10:36:32
| 2
| 739
|
the phoenix
|
75,133,008
| 2,735,636
|
Why does Pymodbus change the baudrate when running on Linux without a baudrate change being requested?
|
<p>I have a Python 3 program that acts as a Modbus master. I start a ModbusSerialClient and then proceed to read registers from the slave. This works fine on Windows. The issue is that on Ubuntu the ModbusSerialClient keeps changing the baudrate, which makes the communication inconsistent.</p>
<p>I start the communication with:</p>
<pre><code>from pymodbus.client.sync import ModbusSerialClient as ModbusClient
...
try:
self.client = ModbusClient(
method = 'rtu'
,port= self.port
,baudrate=int(115200)
,parity = 'N'
,stopbits=1
,bytesize=8
,timeout=3
,RetryOnEmpty = True
,RetryOnInvalid = True
)
self.connection = self.client.connect()
# Some delay may be necessary between connect and first transmission
time.sleep(2)
</code></pre>
<p>Where <code>self.port = "COM_X"</code> in Windows and <code>self.port = "/dev/ttyS1"</code> in Linux</p>
<p>And then I read the registers using:</p>
<pre><code>rr = self.client.read_holding_registers(register_addr,register_block,unit=MODBUS_CONFIG_ID)
if(rr.isError()):
logger.debug(rr)
else:
# Proceed with the processing
</code></pre>
<p>The error I log on some occasions is:</p>
<pre><code>Modbus Error: [Input/Output] Modbus Error: [Invalid Message] No response received, expected at least 2 bytes (0 received)
</code></pre>
<p>I have verified the baudrate change by physically measuring the signals.
I have verified that with command line tools like <code>cu</code> the baudrate remains consistent.</p>
<p>The versions I am using are:</p>
<ul>
<li>pymodbus <code>3.1.0</code> (error also present with <code>2.5.3</code>)</li>
<li>pyserial <code>3.5</code></li>
<li>python <code>3.8.10</code></li>
<li>kubuntu <code>22.04</code> (same behaviour with ubuntu)</li>
</ul>
|
<python><python-3.x><pymodbus>
|
2023-01-16 10:34:22
| 1
| 460
|
Ricard Molins
|
75,132,832
| 1,877,010
|
Create Python alternating range programmatically
|
<p>I am looking for a convenient way to create an ordered, alternating (negative, positive while decrementing and incrementing by one from previous pair) list of integers in the form of</p>
<pre><code>[-1, 1, -2, 2, -3, 3, -4, 4, -5, 5, -6, 6, -7, 7, -8, 8, -9, 9, -10, 10]
</code></pre>
<p>with Python with variable length/end integer, like:
<code>alternating_range(2) = [-1, 1, -2, 2]</code></p>
<p>I can easily do this with a "for loop", but I am looking for a more "Pythonic" way.</p>
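<p>For reference, a one-line sketch of the kind of thing I would consider "Pythonic" (just my own attempt; there may be nicer ways):</p>
<pre><code>def alternating_range(n):
    # interleave -i and i for i in 1..n
    return [sign * i for i in range(1, n + 1) for sign in (-1, 1)]

print(alternating_range(2))   # [-1, 1, -2, 2]
</code></pre>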
|
<python><python-3.x>
|
2023-01-16 10:17:31
| 5
| 377
|
DaHoC
|
75,132,827
| 8,318,946
|
docker-compose not working on Azure App Service
|
<p>I have pushed 4 images to Azure Container Registry: django-app, celery-worker, celery-beats and nginx. I have created a docker-compose file that I used to create the images, and I want to upload it to Azure App Service, but I am getting this error:</p>
<pre><code>Exception in multi-container config parsing: YamlException: (Line: 12, Col: 9, Idx: 256) - (Line: 12, Col: 35, Idx: 282): Bind mount must start with ${WEBAPP_STORAGE_HOME}.
</code></pre>
<p><strong>I changed WEBSITES_ENABLE_APP_SERVICE_STORAGE to true</strong>.</p>
<p>Below you can find docker-compose file I am uploading to Azure:</p>
<pre><code>version: '3'
services:
app:
container_name: app.azurecr.io/app-app
build:
context: ./backend
dockerfile: Dockerfile
restart: always
command: python manage.py runserver 0.0.0.0:8000
volumes:
- backend/:/usr/src/backend/
ports:
- 8000:8000
env_file:
- .env
celery_worker:
container_name: app.azurecr.io/app-celery_worker
restart: always
build:
context: ./backend
command: celery -A app_settings worker --loglevel=info --logfile=logs/celery.log
volumes:
- backend:/usr/src/backend
env_file:
- .env
depends_on:
- app
celery-beat:
container_name: app.azurecr.io/app-celery-beat
build: ./backend
command: celery -A app_settings beat -l info
volumes:
- backend:/usr/src/backend
env_file:
- .env
depends_on:
- app
nginx:
container_name: app_nginx
restart: always
build: ./azure/nginx
ports:
- "8080:8080"
volumes:
- static:/home/app/web/static
- media:/home/app/web/media
depends_on:
- app
volumes:
backend:
static:
media:
</code></pre>
<p>Below you can see the mounted storages I created in <strong>Path mappings</strong>:
<a href="https://i.sstatic.net/RmGDK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RmGDK.png" alt="enter image description here" /></a></p>
<p>I don't want to get rid of the volumes because I want to store and persist files on Azure Storage. I'd like to know how to fix this error and learn what I am doing wrong.</p>
|
<python><django><azure><docker><celery>
|
2023-01-16 10:16:49
| 1
| 917
|
Adrian
|
75,132,758
| 8,282,898
|
How to set the charset encoding of a pytest mock response?
|
<p>I learned about the encoding of Python strings.</p>
<p>In Python, strings compare as equal even if they were encoded and decoded with different encodings.</p>
<pre class="lang-py prettyprint-override"><code>error_msg = 'text'
response_text = error_msg.encode().decode('ISO-8859-1')
response_target1 = error_msg.encode().decode('cp949')
response_target2 = error_msg.encode().decode('utf-8')
response_target3 = error_msg.encode().decode('euc-kr')
print(response_text == response_target1) # True
print(response_text == response_target2) # True
print(response_text == response_target3) # True
</code></pre>
<p>But when I get text from a response, Python gives <code>False</code> if the encoding differs between the response and my string variable, even though they have the same value. I fixed my code to compare the strings after decoding them to the same encoding.</p>
<p>So I want to make pytest function with mocking, to test my decoding function.</p>
<pre class="lang-py prettyprint-override"><code>mock_request.return_value = mock.Mock(
status_code='200',
text='text',
headers={'content-type': 'text/html; charset=iso-8859-1'}
)
assert mock_request.return_value.text != 'text' # expected True but gives False
</code></pre>
<p>I expected <code>mock_request.return_value.text != 'text'</code> to give me <code>True</code> because the encoding types differ from each other, but it gives me <code>False</code>.</p>
<p>I don't know how to set the charset when using a pytest mock instance.</p>
<p>Is there any way to set the charset encoding when using a pytest mock? Please help me.</p>
<hr />
<pre class="lang-py prettyprint-override"><code>import requests
from unittest import mock
def test():
response = mock.Mock()
response.headers = {'Content-Type': 'text/html; charset=iso-8859-1'}
response.text = 'text'
requests.post.return_value = response
# the rest of your test code here
assert requests.post.return_value.text in ['text']
</code></pre>
<p>And this code also gives me <code>True</code>, even though the charsets differ from each other.</p>
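<p>One direction I am considering (an assumption on my part) is to build a real <code>requests.Response</code> instead of a bare <code>Mock</code>, since its <code>text</code> property decodes <code>_content</code> with whatever <code>encoding</code> is set; roughly:</p>
<pre class="lang-py prettyprint-override"><code>import requests

def make_response(body: bytes, charset: str) -> requests.Response:
    # hypothetical helper: a real Response whose .text honours the charset
    resp = requests.Response()
    resp.status_code = 200
    resp._content = body
    resp.encoding = charset
    resp.headers['Content-Type'] = f'text/html; charset={charset}'
    return resp

mock_request.return_value = make_response('text'.encode('ISO-8859-1'), 'ISO-8859-1')
</code></pre>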
|
<python><mocking><pytest>
|
2023-01-16 10:11:53
| 0
| 985
|
Lazyer
|
75,132,544
| 11,852,936
|
Change boolean value to True for the duplicate with the older date in pandas
|
<p>Given the dataframe below, I want to set the <code>isActive</code> column value to <code>True</code> only for the duplicated value and add '_duplicate' to its <code>Name</code> column.</p>
<pre><code>df =
Name isActive LoginDate
John False 2021
John False 2022
Fred False 2020
</code></pre>
<p>Desired output is:</p>
<p>df =</p>
<pre><code>Name isActive LoginDate
John_duplicate True 2021
John False 2022
Fred False 2020
</code></pre>
<p>For now I was able to add numbers to each duplicate, but I want to skip the row with the most recent login date, add the text only to the oldest one, and change its boolean value:</p>
<pre><code>df.LoginDate = ad.groupby('LoginDate').LoginDate.apply(lambda n: n + (np.arange(len(n))+1).astype(str))
</code></pre>
<p>Any suggestion?</p>
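<p>To illustrate the intent, this is the kind of approach I have been sketching (I am not sure it is the idiomatic way): treat every duplicated <code>Name</code> except its most recent <code>LoginDate</code> as "old", then flip its flag and rename it.</p>
<pre><code># mark every duplicated Name except its most recent LoginDate
older = df.sort_values('LoginDate').duplicated(subset='Name', keep='last').sort_index()
df.loc[older, 'isActive'] = True
df.loc[older, 'Name'] = df.loc[older, 'Name'] + '_duplicate'
</code></pre>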
|
<python><pandas>
|
2023-01-16 09:55:46
| 1
| 772
|
Mamed
|
75,132,451
| 14,004,563
|
Custom Euler method implementation always tends to infinity
|
<p>I have a system of differential equations:</p>
<p><a href="https://i.sstatic.net/zeaK7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zeaK7.png" alt="enter image description here" /></a></p>
<p>I'm using the following code to run Euler's method:</p>
<pre><code>import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import pandas as pd
def f(s,t):
a = 0.63
b = 0.02
g = 0.61
d = 0.015
h = s[1]
l = s[2]
# print(s)
# prey
dhdt = a * h - b * h * l
# predator
dldt = d * h * l - g * l
return [dhdt, dldt]
# s0=[24.2, 16.16]
# h = 1
# t = np.arange(0, 1 + h, h) # Numerical grid
# Explicit Euler Method
# s = [[1890, s0[0], s0[1]]]
# s[0] = s0
result = [
[1890, 24.2, 16.16]
]
for i in range(40):
# s[i + 1] = s[i] + h*f(t[i], s[i])
[dhdt, dldt] = f(result[i], i)
# print(f(result[i], i))
# print([dhdt, dldt])
latest = [1890+1+i, result[i][1] + dhdt, result[i][2] + dldt]
result.append(latest)
print(result[-1])
</code></pre>
<p>However, this always tends to infinity (printing result[-1] every iteration):</p>
<p><a href="https://i.sstatic.net/K1yFl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K1yFl.png" alt="enter image description here" /></a></p>
<p>However, when I use scipy's odeint, it works perfectly fine:</p>
<pre><code>import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import pandas as pd
def f(s,t):
a = 0.63
b = 0.02
g = 0.61
d = 0.015
h = s[0]
l = s[1]
# prey
dhdt = a * h - b * h * l
# predator
dldt = d * h * l - g * l
return [dhdt, dldt]
t = np.linspace(0,45)
s0=[24.2, 16.16]
s = odeint(f,s0,t)
s = np.insert(s, 0, 0, axis=1)
for i in range(len(s)):
# p: np.ndarray = s[i]
# print(s[i])
# pass
s[i][0] = i+1890
# print(s[i])
# s[i].insert(0, i+1890)
# print(s)
prediction = pd.DataFrame(data=s, columns=['Year', 'Hare\'', 'Lynx\''])
prediction.set_index('Year', inplace=True)
url = 'http://people.whitman.edu/~hundledr/courses/M250F03/LynxHare.txt'
real = pd.read_csv(url, delim_whitespace=True, header=None, index_col=0)
real.index.name = 'Year'
real.columns = ['Hare', 'Lynx']
# Get the years from 1890 and onwards
real = real.iloc[45:]
print(prediction)
</code></pre>
<p>So what is it that odeint does differently? Why is mine tending to infinity?</p>
<p>I think there is something wrong with my mathematical formula (the Euler part in the for loop), but I can't seem to find the issue.</p>
<p>EDIT: Fixed. I reduced the step size and it worked!</p>
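<p>For anyone landing here, this is roughly the kind of change the edit above refers to (a sketch; the exact step size <code>h = 0.01</code> is an arbitrary choice of mine, and my original loop effectively used <code>h = 1</code>, which is far too coarse for this system):</p>
<pre><code>h = 0.01                                   # much smaller step size
steps = int(40 / h)
result = [[1890.0, 24.2, 16.16]]
for i in range(steps):
    dhdt, dldt = f(result[i], i)
    result.append([result[i][0] + h,
                   result[i][1] + h * dhdt,
                   result[i][2] + h * dldt])
print(result[-1])
</code></pre>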
|
<python><math>
|
2023-01-16 09:45:21
| 0
| 467
|
Berlm
|
75,132,358
| 5,573,294
|
In pandas, filter for duplicate values appearing in 1 of 2 different columns, for list of certain values only
|
<pre><code>zed = pd.DataFrame(data = { 'date': ['2022-03-01', '2022-03-02', '2022-03-03', '2022-03-04', '2022-03-05'], 'a': [1, 5, 7, 3, 4], 'b': [3, 4, 9, 12, 5] })
</code></pre>
<p>How can the following dataframe be filtered to keep the earliest row (earliest == lowest date) for each of the 3 values <code>1, 5, 4</code> appearing in either column <code>a</code> or column <code>b</code>? In this example, the rows with dates <code>'2022-03-01'</code> and <code>'2022-03-02'</code> would be kept, as they are the lowest dates where each of the 3 values appears.</p>
<p>We have tried <code>zed[zed.isin({'a': [1, 5, 4], 'b': [1, 5, 4]}).any(1)].sort_values(by=['date'])</code> but this returns the incorrect result as it returns 3 rows.</p>
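<p>For context, a sketch of one reshaping-based attempt we are considering (melt both columns, keep the wanted values, then take the first date per value); we are not sure this is the right approach:</p>
<pre><code># reshape so each (date, value) pair is one row, then keep the earliest date per wanted value
long = zed.melt(id_vars='date', value_vars=['a', 'b'], value_name='val')
wanted = long[long['val'].isin([1, 5, 4])]
first_dates = wanted.sort_values('date').drop_duplicates('val')['date']
result = zed[zed['date'].isin(first_dates)]
</code></pre>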
|
<python><pandas>
|
2023-01-16 09:34:39
| 2
| 10,679
|
Canovice
|
75,132,235
| 898,042
|
Userdata partially working but not installing boto3 on EC2 launch, have to explicitly install it
|
<p>The userdata script for my EC2:</p>
<pre><code>#!/bin/bash
curl https://raw.githubusercontent.com/erjan/MyVoteAWS/main/vote-processor/processor.py > /home/ec2-user/processor.py
cd /home/ec2-user/
sudo chmod +x processor.py
sudo yum -y install python3-pip
sudo yum -y install python3 python3-setuptools
pip3 install boto3 --user #this is not executed
./processor.py
</code></pre>
<p>The file processor.py is pulled from my GitHub repo and I do see it, but it's not launched because it needs boto3 and gives the error:</p>
<pre><code>"Import error: no boto3 module found"
</code></pre>
<p>I have to wait till it shows '2/2 checks passed' in the AWS GUI, then connect, then explicitly type <code>pip3 install boto3 --user</code>; then I see the progress bar downloading boto3, and then my script processor.py works.</p>
<p>But it does not work out of the box from the userdata. What is the reason?</p>
|
<python><amazon-web-services><shell><boto3><ec2-userdata>
|
2023-01-16 09:22:11
| 1
| 24,573
|
ERJAN
|
75,132,232
| 2,755,116
|
Why does \n not expand when read from a file, but does in in-code string literals?
|
<ul>
<li><p>If you run</p>
<pre><code>tmpl = "This is the first line\n And this is the second line"
print(tmpl)
</code></pre>
<p>you get</p>
<pre><code> This is the first line
And this is the second line
</code></pre>
<p>So you get a new line <em>expanded</em>.</p>
</li>
<li><p>But if you put it in a file, you will not get that:</p>
<p>Put in <code>test.tmpl</code>:</p>
<pre><code>This is the first line\n And this is the second line
</code></pre>
<p>and run</p>
<pre><code>with open("test.tmpl") as f:
contents = f.read()
print(contents)
</code></pre>
<p>you get</p>
<pre><code>This is the first line\n And this is the second line
</code></pre>
</li>
</ul>
<p>Why this behaviour? How can you get <code>contents</code> to display the same as <code>tmpl</code>?</p>
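<p>The closest I have come so far is re-interpreting the escape sequences after reading (a sketch, assuming the file content is plain ASCII so <code>unicode_escape</code> is safe to use):</p>
<pre><code>with open("test.tmpl") as f:
    contents = f.read()

# turn the literal backslash-n characters from the file into real newlines
expanded = contents.encode().decode("unicode_escape")
print(expanded)
</code></pre>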
|
<python><string><carriage-return>
|
2023-01-16 09:21:46
| 1
| 1,607
|
somenxavier
|
75,132,191
| 10,844,607
|
How to Grab All Globals as a Reference in Python?
|
<p>I'm working on a project that's written in Go and uses <a href="https://github.com/go-python/gpython/" rel="nofollow noreferrer">go-python</a> to run Python code inside. The way it works is you write <code>code.py</code> snippets, then the Go code injects global variables that expose the internal API.</p>
<p>i.e. <code>code.py</code> might look like:</p>
<pre class="lang-py prettyprint-override"><code>some_api_call() # Linters think some_api_call is not defined, but go-python put it in the global scope
</code></pre>
<p>This works great, but for modularity I want my <code>code.py</code> to import <code>code_2.py</code>. Unfortunately, if I do this, <code>code_2.py</code> doesn't have access to the API that the Go code manually added to <code>code.py</code>'s global scope.</p>
<p>i.e.</p>
<p><code>code.py</code>:</p>
<pre><code>import code2
code2.run_func()
</code></pre>
<p><code>code2.py</code>:</p>
<pre><code>def run_func():
some_api_call() # This fails since code2.py doesn't have the API injected
</code></pre>
<p>I can pass in individual API functions to <code>code_2.py</code> manually, but there are a lot of them. It would be easier if I could just pass everything in <code>code.py</code>'s global scope, like <code>code_2.init(Globals)</code>. Is there an easy way to do this? I've tried circular importing, but <code>go-python</code> doesn't let me import the originally called script.</p>
<p>Thanks!</p>
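<p>The best workaround I have come up with so far is an explicit <code>init()</code> hook (a sketch; the helper names are my own invention):</p>
<pre><code># code2.py
_api = {}

def init(api_globals):
    # copy the injected API callables from the caller's global scope
    _api.update(api_globals)

def run_func():
    _api['some_api_call']()

# code.py would then do:
#   import code2
#   code2.init(globals())   # hand over everything go-python injected
#   code2.run_func()
</code></pre>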
|
<python><python-3.x><go><global-variables><go-python>
|
2023-01-16 09:17:13
| 3
| 5,694
|
Dr-Bracket
|
75,132,137
| 5,802,479
|
Web scraping LinkedIn job posts using Selenium gives repeated or empty results
|
<p>I am trying to get the job post data from LinkedIn using Selenium for a practice project.</p>
<p>I am getting the list of job card elements and the job IDs and clicking on each of them to load the job post, and then obtaining the job details.</p>
<pre><code>import time
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
login_page_link = 'https://www.linkedin.com/login'
search_page_link = 'https://www.linkedin.com/jobs/search/?geoId=101452733&keywords=data%20analyst&location=Australia&refresh=true'
job_list_item_class = 'jobs-search-results__list-item'
job_title_class = 'jobs-unified-top-card__job-title'
company_name_class = 'jobs-unified-top-card__company-name'
def get_browser_driver():
browser = webdriver.Chrome()
# maximise browser window
browser.maximize_window()
return browser
def login_to_linkedin(browser):
browser.get(login_page_link)
# enter login credentials
browser.find_element(by=By.ID, value=username_id).send_keys("username@mail.com")
browser.find_element(by=By.ID, value=password_id).send_keys("pwd")
login_btn = browser.find_element(by=By.XPATH, value=login_btn_xpath)
# attempt login
login_btn.click()
# wait till new page is loaded
time.sleep(2)
def get_job_post_data(browser):
# list to store job posts
job_posts = []
# get the search results list
job_cards = browser.find_elements(by=By.CLASS_NAME, value=job_list_item_class)
for job_card in job_cards:
job_id = job_card.get_attribute('data-occludable-job-id')
# dict to store each job post
job_dict = {}
# scroll job post into view
browser.execute_script("arguments[0].scrollIntoView();", job_card)
# click to load each job post
job_card.click()
time.sleep(5)
# get elements from job post by css selector
job_dict['Job ID'] = job_id
job_dict['Job title'] = get_element_text_by_classname(browser, job_title_class)
job_dict['Company name'] = get_element_text_by_classname(browser, company_name_class)
job_posts.append(job_dict)
return job_posts
def get_element_text_by_classname(browser, class_name):
return browser.find_element(by=By.CLASS_NAME, value=class_name).text
browser = get_browser_driver()
login_to_linkedin(browser)
load_search_results_page(browser)
jobs_list = get_job_post_data(browser)
jobs_df = pd.DataFrame(jobs_list)
</code></pre>
<p>When I try to scrape all job posts on the page, I get repeated (duplicated) and empty results, as shown in the images below. The job ID keeps updating and changing, but the job details get randomly duplicated.</p>
<p><a href="https://i.sstatic.net/6O3Iv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6O3Iv.png" alt="Results-1" /></a>
<a href="https://i.sstatic.net/jJPFV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jJPFV.png" alt="Results-2" /></a>
<a href="https://i.sstatic.net/BJXa5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BJXa5.png" alt="Results-3" /></a></p>
<p>I would be very thankful if you could suggest any ideas as to why this is happening and how to fix this error.</p>
|
<python><selenium><web-scraping><linkedin-api>
|
2023-01-16 09:10:10
| 2
| 686
|
Fleur
|
75,132,080
| 15,175,627
|
importing modules best practice
|
<p>I wrote a package with several modules</p>
<pre><code>pkg/
├── __init__.py
├── mod.py
├── mod_calc.py
└── mod_plotting.py
</code></pre>
<p><code>mod.py</code> uses functions from <code>mod_calc.py</code> and <code>mod_plotting.py</code>, so in <code>mod.py</code> I use:</p>
<pre class="lang-py prettyprint-override"><code># mod.py:
import pkg.mod_calc as MC
import pkg.mod_plotting as MP
MC.calculate_this()
MP.plot_this()
</code></pre>
<p>I reckon this is ok.</p>
<p>There are also scripts (Jupyter notebooks) with the suggested workflow, designed for users with very little Python knowledge, and I'd like them to use the functions as <code>calculate_this()</code> (defined in <code>mod_calc.py</code>) etc. (as opposed to <code>mod_calc.calculate_this()</code> etc.)</p>
<p>So here is what I'm currently doing:</p>
<pre class="lang-py prettyprint-override"><code># __init__.py:
from pkg.mod import *
from pkg.mod_calc import *
from pkg.mod_plotting import *
</code></pre>
<pre class="lang-py prettyprint-override"><code># script:
from pkg import *
do_this() # from mod.py
calculate_this() # from mod_calc.py
plot_this() # from mod_plotting.py
</code></pre>
<p>This way the user doesn't need to worry about which function is defined in which module. It works fine, but I understand that <code>from ... import *</code> is not best practice. So what is the pythonic way to do this?</p>
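<p>The alternative I keep reading about is spelling out the re-exports explicitly in <code>__init__.py</code> (a sketch; I am unsure whether this is what counts as best practice):</p>
<pre class="lang-py prettyprint-override"><code># __init__.py: explicit re-exports instead of star imports
from pkg.mod import do_this
from pkg.mod_calc import calculate_this
from pkg.mod_plotting import plot_this

__all__ = ['do_this', 'calculate_this', 'plot_this']
</code></pre>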
|
<python><import><module>
|
2023-01-16 09:04:08
| 0
| 511
|
konstanze
|
75,131,507
| 8,282,898
|
Comparing response.text and a string fails in Python
|
<p>I have a task to filter responses, checking whether each response is a normal one or not.</p>
<p>I should log the response if response.text is not <code>'<Br>No match<br>OK!!'</code>.</p>
<pre class="lang-py prettyprint-override"><code>if not response.text == '<Br>No match<br>OK!!':
logger.info('ERROR!!')
</code></pre>
<p>But I still see the error message <code>'<Br>No match<br>OK!!'</code> in the log file.</p>
<p>I changed my code as below, but it doesn't work.</p>
<pre><code>if not str(response.text) == '<Br>No match<br>OK!!':
logger.info('ERROR!!')
</code></pre>
<p>There was another message in response.text that was encoded with <strong>ISO-8859-1</strong>. Certain text in the log was broken, so I could only get the right text with something like <code>normalize('NFC', msg).encode('ISO-8859-1').decode('cp949')</code>.</p>
<pre><code>u'hello' == 'hello'.encode('ISO-8859-1').decode('cp949') # True
</code></pre>
<p>Is there any problem with my code? Or what else should I check? Please help me.</p>
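<p>For completeness, the comparison I ended up experimenting with looks roughly like this (a sketch that mirrors the decode chain mentioned above, assuming <code>response</code> and <code>logger</code> from the snippets earlier):</p>
<pre class="lang-py prettyprint-override"><code>from unicodedata import normalize

expected = '<Br>No match<br>OK!!'
# re-decode the response text before comparing, as described above
actual = normalize('NFC', response.text).encode('ISO-8859-1').decode('cp949')
if actual != expected:
    logger.info('ERROR!!')
</code></pre>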
|
<python><python-requests>
|
2023-01-16 07:59:41
| 0
| 985
|
Lazyer
|
75,131,492
| 13,638,219
|
Altair: custom format axis title in a Repeat
|
<p>Is there a way to access the label/title of a datum when using it in a Repeat?<br />
I would like to change the Y-axis title by applying a custom function to the default label (simple example below).<br />
I know I can change the columns in the dataframe or do a manual loop to get the result, but I wonder if there is a direct way using repeat.</p>
<pre><code>import altair as alt
from vega_datasets import data
def replace(label):
label = str(label)
return label.replace('_', ' ')
source = data.cars()
alt.Chart(source).mark_circle(size=60).encode(
x='Horsepower',
y=alt.Y(alt.repeat('repeat'), type='quantitative', title=replace(alt.datum.label)),
color='Origin',
).repeat(
repeat=['Miles_per_Gallon', 'Weight_in_lbs'],
)
</code></pre>
<p><a href="https://i.sstatic.net/uMSBT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uMSBT.png" alt="original" /></a>
becomes:
<a href="https://i.sstatic.net/sqQR2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sqQR2.png" alt="using a function" /></a></p>
|
<python><altair>
|
2023-01-16 07:57:50
| 0
| 982
|
debbes
|
75,131,490
| 1,668,622
|
Is there a way to wrap pygame.midi.Input.read() in an asynchronous task without polling or an extra thread?
|
<p>I have basically the following code and want to embed it in an <code>async</code> coroutine:</p>
<pre><code>def read_midi():
midi_in = pygame.midi.Input(0)
while True:
if midi_in.poll():
midi_data = midi_in.read(1)[0][0]
# do something with midi_data, e.g. putting it in a Queue..
</code></pre>
<p>From my understanding since <code>pygame</code> is not asynchronous I have two options here: put the whole function in an extra thread or turn it into an <code>async</code> coroutine like this:</p>
<pre><code>async def read_midi():
midi_in = pygame.midi.Input(1)
while True:
if not midi_in.poll():
await asyncio.sleep(0.1) # very bad!
continue
midi_data = midi_in.read(1)[0][0]
# do something with midi_data, e.g. putting it in a Queue..
</code></pre>
<p>So it looks like I have to either keep the busy loop, put it in a thread, and waste lots of CPU time, or put it into the (fake) coroutine above and introduce a tradeoff between time lag and wasted CPU time.</p>
<p>Am I wrong?</p>
<p>Is there a way to read MIDI without a busy loop?</p>
<p>Or even a way to <code>await</code> <code>midi.Input.read</code>?</p>
|
<python><pygame><python-asyncio><midi>
|
2023-01-16 07:57:34
| 1
| 9,958
|
frans
|
75,131,418
| 13,708,824
|
Pulp - Constraints
|
<p>I'm essentially trying to emulate an Excel Solver problem in PuLP. So far I think the problem is framed correctly, with the exception of the constraints; in essence, the constraints compare the demand for each flight with the estimated demand, which depends on one of the solver variables.</p>
<p><a href="https://i.sstatic.net/iM3qP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iM3qP.png" alt="Solver in Exce;" /></a></p>
<p>My Python Code is below:</p>
<pre><code>
import pulp
import pandas as pd
import numpy as np
data_flight = pd.read_csv('data_flight.csv')
resource = pd.read_csv('Resource_Requirement.csv')
seat_capacity = pd.read_csv('seat_capacity.csv')
products = list(data_flight['Product j'])
flight_capacity = list(seat_capacity["Type"])
dict_price = dict(zip(products,data_flight['Price']))
dict_demand = dict(zip(products,data_flight['Exp. Demand']))
dict_y = dict(zip(products,list(np.arange(0,14))))
resource_req = {}
for i in flight_capacity:
resource_req[i] = resource[i]
phasing = pulp.LpProblem("Maximise",pulp.LpMaximize)
phasing += pulp.lpSum(Selection[idx]*data_flight.loc[idx, 'Price'] for idx in data_flight.index)
from pulp import lpDot
constraint_seats = []
for i in flight_capacity:
constraint_seats.append(lpDot(resource_req[i], Selection))
for i, j in zip(seat_capacity['Seat Capacity'], constraint_seats):
phasing += j <= i
for i,j in zip(data_flight['Exp. Demand'],Selection):
phasing += j <= i
phasing.solve()
</code></pre>
<p>After calling solve I'm getting an error, so PuLP is not really solving anything, and I think the problem is that my constraints are set up incorrectly.</p>
<p>Traceback</p>
<pre><code>Traceback (most recent call last)
Cell In[30], line 1
----> 1 phasing.solve()
File ~/.python/current/lib/python3.10/site-packages/pulp/pulp.py:1913, in LpProblem.solve(self, solver, **kwargs)
1911 # time it
1912 self.startClock()
-> 1913 status = solver.actualSolve(self, **kwargs)
1914 self.stopClock()
1915 self.restoreObjective(wasNone, dummyVar)
File ~/.python/current/lib/python3.10/site-packages/pulp/apis/coin_api.py:137, in COIN_CMD.actualSolve(self, lp, **kwargs)
135 def actualSolve(self, lp, **kwargs):
136 """Solve a well formulated lp problem"""
--> 137 return self.solve_CBC(lp, **kwargs)
File ~/.python/current/lib/python3.10/site-packages/pulp/apis/coin_api.py:206, in COIN_CMD.solve_CBC(self, lp, use_mps)
204 if pipe:
205 pipe.close()
--> 206 raise PulpSolverError(
207 "Pulp: Error while trying to execute, use msg=True for more details"
208 + self.path
209 )
210 if pipe:
211 pipe.close()
PulpSolverError: Pulp: Error while trying to execute, use msg=True for more details/home/codespace/.python/current/lib/python3.10/site-packages/pulp/solverdir/cbc/linux/64/cbc
</code></pre>
|
<python><pulp>
|
2023-01-16 07:47:52
| 0
| 399
|
Francisco Colina
|
75,131,158
| 1,436,800
|
It is impossible to add a non-nullable field 'name' to table_name without specifying a default
|
<p>I have added the following field to my already existing model:</p>
<pre><code>name = models.CharField(max_length=128, unique=True)
</code></pre>
<p>But it is giving the following prompt when applying migrations:</p>
<pre><code> It is impossible to add a non-nullable field 'name' to table_name without specifying a default. This is because the database needs something to populate existing rows.
Please select a fix:
1) Provide a one-off default now (will be set on all existing rows with a null value for this column)
2) Quit and manually define a default value in models.py.
</code></pre>
<p>I cannot set its attributes to blank=True, null=True as this field is required. I cannot set a default value as the field has to be unique. I also deleted all the previous data from that table.
If I try to set a default value at the command prompt, it says please select a valid option.
How do I fix this?</p>
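<p>One idea I am weighing (an assumption on my part, not something confirmed for my exact case) is to give the field a callable default that generates a unique placeholder value for the existing rows, for example:</p>
<pre><code>import uuid

def generate_unique_name():
    # hypothetical helper: unique placeholder for pre-existing rows
    return str(uuid.uuid4())

name = models.CharField(max_length=128, unique=True, default=generate_unique_name)
</code></pre>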
|
<python><django><django-models><django-rest-framework><django-views>
|
2023-01-16 07:14:13
| 3
| 315
|
Waleed Farrukh
|
75,131,112
| 16,295,405
|
How to install python3.10 virtual environment when python3.10-venv has no installation candidate?
|
<p>I had just upgraded to Ubuntu 22.04.1 LTS, which comes preinstalled with python3.10. I tried creating a virtual environment but it was unsuccessful. Trying to install the virtual env package gets an error <code>E: Package 'python3-venv' has no installation candidate</code></p>
<pre><code>python3 -m venv newpy310
The virtual environment was not created successfully because ensurepip is not
available. On Debian/Ubuntu systems, you need to install the python3-venv
package using the following command.
apt install python3.10-venv
You may need to use sudo with that command. After installing the python3-venv
package, recreate your virtual environment.
Failing command: ['/home/user/Desktop/pyenvs/newpy310/bin/python3', '-Im', 'ensurepip', '--upgrade', '--default-pip']
</code></pre>
<p>Following which I used <code>sudo apt install python3.10-venv</code> and was returned with:</p>
<pre><code>sudo apt install python3.10-venv
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package python3.10-venv is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'python3.10-venv' has no installation candidate
</code></pre>
<p>Something similar is encountered if I use <code>sudo apt install python3.10-virtualenv</code>:</p>
<pre><code>sudo apt-get install python3.10-virtualenv
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package python3.10-virtualenv
E: Couldn't find any package by glob 'python3.10-virtualenv'
E: Couldn't find any package by regex 'python3.10-virtualenv'
</code></pre>
<p>My sudo apt-get update also looks suspicious, but I am not entirely sure if it is the culprit:</p>
<pre><code>sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://security.ubuntu.com/ubuntu jammy-security InRelease
Hit:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:4 http://archive.ubuntu.com/ubuntu focal-security InRelease
Hit:5 https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu jammy InRelease
Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:7 http://archive.ubuntu.com/ubuntu jammy InRelease
Hit:8 http://archive.ubuntu.com/ubuntu jammy-updates InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
8 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Packages (main/binary-i386/Packages) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Packages (main/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Translations (main/i18n/Translation-en_US) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Translations (main/i18n/Translation-en) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Translations (main/i18n/Translation-en_SG) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target DEP-11 (main/dep11/Components-amd64.yml) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target DEP-11 (main/dep11/Components-all.yml) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target DEP-11-icons-small (main/dep11/icons-48x48.tar) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target DEP-11-icons (main/dep11/icons-64x64.tar) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target DEP-11-icons-hidpi (main/dep11/icons-64x64@2.tar) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target CNF (main/cnf/Commands-amd64) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target CNF (main/cnf/Commands-all) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Packages (main/binary-i386/Packages) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Packages (main/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Translations (main/i18n/Translation-en_US) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Translations (main/i18n/Translation-en) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target Translations (main/i18n/Translation-en_SG) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target DEP-11 (main/dep11/Components-amd64.yml) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target DEP-11 (main/dep11/Components-all.yml) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target DEP-11-icons-small (main/dep11/icons-48x48.tar) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target DEP-11-icons (main/dep11/icons-64x64.tar) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target DEP-11-icons-hidpi (main/dep11/icons-64x64@2.tar) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target CNF (main/cnf/Commands-amd64) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
W: Target CNF (main/cnf/Commands-all) is configured multiple times in /etc/apt/sources.list:13 and /etc/apt/sources.list:14
</code></pre>
<p>I have also already added the deadsnakes PPA repos. I noticed some other questions were for Python 3.6 or 3.8, whose methods had worked in the past when I was using Python 3.6 and 3.8 respectively. However, the methods described there do not work for my current setup with Ubuntu 22.04 and Python 3.10.</p>
<ol>
<li><a href="https://stackoverflow.com/questions/62773433/python3-8-venv-not-working-with-python3-8-m-venv-env">python3.8-venv not working with python3.8 -m venv env</a></li>
<li><a href="https://stackoverflow.com/questions/68211364/python3-8-venv-is-no-longer-working-after-pop-os-upgraded-to-21-04">python3.8-venv is no longer working after Pop OS upgraded to 21.04</a></li>
<li><a href="https://stackoverflow.com/questions/74368514/trouble-installing-python3-6-virtual-environment-on-ubuntu-22-04">Trouble Installing Python3.6 Virtual Environment on Ubuntu 22.04</a></li>
</ol>
<p>These are the other links that I have consulted but did not work for me. I have also tried reinstalling python3.10 itself.</p>
<ol>
<li><a href="https://stackoverflow.com/questions/69830431/how-to-use-python3-10-on-ubuntu">How to use Python3.10 on Ubuntu?</a></li>
<li><a href="https://stackoverflow.com/questions/74368514/trouble-installing-python3-6-virtual-environment-on-ubuntu-22-04">Trouble Installing Python3.6 Virtual Environment on Ubuntu 22.04</a></li>
<li><a href="https://stackoverflow.com/questions/39539110/pyvenv-not-working-because-ensurepip-is-not-available">pyvenv not working because ensurepip is not available</a></li>
<li><a href="https://stackoverflow.com/questions/39539110/pyvenv-not-working-because-ensurepip-is-not-available?noredirect=1&lq=1">pyvenv not working because ensurepip is not available</a></li>
<li><a href="https://askubuntu.com/questions/879437/ensurepip-is-disabled-in-debian-ubuntu-for-the-system-python">https://askubuntu.com/questions/879437/ensurepip-is-disabled-in-debian-ubuntu-for-the-system-python</a></li>
<li><a href="https://stackoverflow.com/questions/71818928/python3-10-source-venv-has-changed">Python3.10 source venv has changed</a></li>
</ol>
<p>Q: How to install python3.10 virtual environment when python3.10-venv has no installation candidate?</p>
|
<python><python-3.x><ubuntu><virtualenv>
|
2023-01-16 07:07:38
| 3
| 412
|
fatbringer
|
75,131,103
| 4,733,871
|
For loop does not update Python pandas DataFrame when importing data using SQLAlchemy
|
<p>I have a for loop that should update a pandas data frame from a postgres table that is updated by another thread every 5 seconds.</p>
<p>If I run the code without the for loop, I get what I want, which is just the latest update time. However, if I run the code with the for loop, the results do not update and remain stuck on the first result.</p>
<p>Why is this happening and how can I fix the problem?</p>
<pre><code>metadata = MetaData(bind=None)
table = Table(
'datastore',
metadata,
autoload=True,
autoload_with=engine
)
stmt = select([
table.columns.date,
table.columns.open,
table.columns.high,
table.columns.low,
table.columns.close
]).where(and_(table.columns.date_ == datetime.today().strftime('%Y-%m-%d') and table.columns.close != 0))
#]).where(and_(table.columns.date_ == '2023-01-12' and table.columns.close != 0))
connection = engine.connect()
for x in range(1000000):
data_from_db = pd.DataFrame(connection.execute(stmt).fetchall())
data_from_db = data_from_db[data_from_db['close'] != 0]
print(data_from_db.date.iloc[-1])
time.sleep(5)
</code></pre>
<p>I'm also trying the psycopg2 library and the problem is still there:</p>
<pre><code>for x in range(1000000):
conn = psycopg2.connect(
host='localhost',
database='ABC',
user='postgres',
password='*******')
cur = conn.cursor()
cur.execute("select max(date) from public.datastore")
y = cur.fetchall()
print(y)
time.sleep(5)
</code></pre>
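<p>One variant I plan to try next (just a sketch, based on my guess that the long-lived connection is pinning an old snapshot of the data) is opening a fresh connection inside the loop:</p>
<pre><code>for x in range(1000000):
    # open and close a new connection on every iteration
    with engine.connect() as connection:
        data_from_db = pd.DataFrame(connection.execute(stmt).fetchall())
    data_from_db = data_from_db[data_from_db['close'] != 0]
    print(data_from_db.date.iloc[-1])
    time.sleep(5)
</code></pre>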
|
<python><dataframe><for-loop><sqlalchemy>
|
2023-01-16 07:06:25
| 2
| 1,258
|
Dario Federici
|
75,130,984
| 10,829,044
|
pandas dynamic wide to long based on time
|
<p>I have a pandas dataframe that contains the data given below:</p>
<pre><code> ID Q1_rev Q1_transcnt Q2_rev Q2_transcnt Q3_rev Q3_transcnt Q4_rev Q4_transcnt
1 100 2 200 4 300 6 400 8
2 101 3 201 5 301 7 401 9
</code></pre>
<p>dataframe looks like below</p>
<p><a href="https://i.sstatic.net/LYpEc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LYpEc.png" alt="enter image description here" /></a></p>
<p>I would like to do the below</p>
<p>a) For each ID, create 3 rows (from 8 input columns data)</p>
<p>b) Each row should contain the two columns data</p>
<p>c) subsequent rows should shift the columns by 1 (one quarter data).</p>
<p>To understand better, I expect my output to be like as below</p>
<p><a href="https://i.sstatic.net/g8M3I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g8M3I.png" alt="enter image description here" /></a></p>
<p>I tried the below based on the SO post here but was unable to get the expected output</p>
<pre><code>s = 3
n = 2
cols = ['1st_rev','1st_transcnt','2nd_rev','2nd_transcnt']
output = pd.concat((df.iloc[:,0+i*s:6+i*s].set_axis(cols, axis=1) for i in range(int((df.shape[1]-(s*n))/n))), ignore_index=True, axis=0).set_index(np.tile(df.index,2))
</code></pre>
<p>Can you help me with this? The problem is that in my real data <code>n=2</code> will not always be the case; it could be 4 or 5 as well. Meaning, instead of <code>'1st_rev','1st_transcnt','2nd_rev','2nd_transcnt'</code>, I may have the below. You can see there are 4 pairs of columns.</p>
<pre><code>'1st_rev','1st_transcnt','2nd_rev','2nd_transcnt','3rd_rev','3rd_transcnt','4th_rev','4th_transcnt'
</code></pre>
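<p>For reference, this is the kind of approach I have been sketching (untested, and the output column names are just placeholders): build one frame per window of quarters and concatenate them.</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'ID': [1, 2],
    'Q1_rev': [100, 101], 'Q1_transcnt': [2, 3],
    'Q2_rev': [200, 201], 'Q2_transcnt': [4, 5],
    'Q3_rev': [300, 301], 'Q3_transcnt': [6, 7],
    'Q4_rev': [400, 401], 'Q4_transcnt': [8, 9],
})

n_pairs = 2        # quarters kept per output row (could be 4 or 5 in my real data)
per_quarter = 2    # columns per quarter: rev + transcnt
value_cols = [c for c in df.columns if c != 'ID']
n_quarters = len(value_cols) // per_quarter

ordinals = ['1st', '2nd', '3rd', '4th', '5th']
out_cols = [f'{ordinals[i]}_{m}' for i in range(n_pairs) for m in ('rev', 'transcnt')]

frames = []
for start in range(n_quarters - n_pairs + 1):     # one window of quarters per output row
    cols = value_cols[start * per_quarter:(start + n_pairs) * per_quarter]
    chunk = df[['ID'] + cols].copy()
    chunk.columns = ['ID'] + out_cols
    frames.append(chunk)

output = pd.concat(frames).sort_values('ID', kind='stable').reset_index(drop=True)
print(output)
</code></pre>
<p>but I am not sure this is the cleanest way to generalize it.</p>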
|
<python><pandas><dataframe><group-by><transformation>
|
2023-01-16 06:48:14
| 2
| 7,793
|
The Great
|
75,130,761
| 2,998,077
|
To find commonality/characteristics of in a group of lists
|
<p>I have a group of lists of equal length (strings as elements), and I want to find out their commonality/characteristics.</p>
<p>Let's call them 'good' lists - I want to find out what makes a 'good' list.</p>
<p>One idea is to output all 3-element combinations from each list, then count the occurrences of each combination to rank them.</p>
<p>For example, 'D' and 'N' and 'T' appeared 4 times. One may conclude that, when 'D' and 'N' and 'T' appear in a list, it is a 'good' list.</p>
<p>The same method can be applied to 4-element and 5-element combinations (when the lists are very long).</p>
<p>What would be a better solution?</p>
<pre><code>import itertools
from itertools import combinations
from collections import Counter
s = [
['O', 'V', 'R', 'M', 'Y'],
['I', 'Q', 'L', 'J', 'A'],
['M', 'I', 'Q', 'N', 'G'],
['Y', 'M', 'R', 'Q', 'Z'],
['D', 'X', 'C', 'Q', 'N'],
['B', 'O', 'Q', 'E', 'V'],
['V', 'M', 'J', 'G', 'R'],
['M', 'T', 'L', 'I', 'Z'],
['Y', 'H', 'A', 'V', 'L'],
['O', 'T', 'D', 'N', 'E'],
['D', 'N', 'T', 'I', 'G'],
['T', 'Q', 'H', 'I', 'P'],
['F', 'T', 'D', 'W', 'N'],
['F', 'Z', 'H', 'E', 'X'],
['E', 'Z', 'R', 'K', 'J'],
['P', 'C', 'U', 'D', 'F'],
['N', 'I', 'Y', 'U', 'E'],
['T', 'N', 'D', 'L', 'V'],
['D', 'Z', 'I', 'P', 'X'],
['H', 'L', 'C', 'P', 'Y']]
summary = []
for each in s:
all_combinations = [comb for comb in combinations(each, 3)] # unique combinations only
for a in all_combinations:
summary.append('-'.join(sorted(a)))
print (Counter(summary))
</code></pre>
<p>Output:</p>
<pre><code>Counter({'D-N-T': 4, 'M-R-V': 2, 'M-R-Y': 2.....})
</code></pre>
|
<python><list>
|
2023-01-16 06:15:56
| 1
| 9,496
|
Mark K
|
75,130,738
| 19,238,204
|
Check Why Sympy in Python so Slow to Calculate Symbolic Integration for Sphere Surface Area
|
<p>Is SymPy the only reliable package in Python for symbolic integration? I tried SymPy from Julia, and there it computes faster. Please check whether this code is efficient or whether something is wrong with it. Thanks all.</p>
<p>It is just meant to prove that the surface area of a sphere is exactly 4 pi times r squared.</p>
<pre><code>
import sympy as sy
x = sy.Symbol("x")
r = sy.Symbol("r")
def f(x):
return sy.sqrt(r**2 - x**2)
def fd(x):
return sy.simplify(sy.diff(f(x), x))
def f2(x):
return sy.sqrt((1 + (fd(x)**2)))
def vx(x):
return 2*sy.pi*(f(x)*sy.sqrt(1 + (fd(x) ** 2)))
vxi = sy.Integral(vx(x), (x, -r, r))
vxf = vxi.simplify().doit()
</code></pre>
<p>It has been an hour since I ran <code>vxi.simplify().doit()</code> and it still has not finished</p>
<p><a href="https://i.sstatic.net/9Z71o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Z71o.png" alt="1" /></a></p>
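<p>One thing I am experimenting with (not sure if it is the right fix) is giving the symbols assumptions so the integrand can simplify before integrating, roughly:</p>
<pre><code>import sympy as sy

x = sy.Symbol("x", real=True)
r = sy.Symbol("r", positive=True)

f = sy.sqrt(r**2 - x**2)
fd = sy.diff(f, x)
integrand = sy.simplify(2*sy.pi*f*sy.sqrt(1 + fd**2))  # should reduce to 2*pi*r
area = sy.integrate(integrand, (x, -r, r))              # expected: 4*pi*r**2
print(area)
</code></pre>
<p>but I would like to understand why the original version is so slow.</p>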
|
<python><sympy>
|
2023-01-16 06:12:46
| 1
| 435
|
Freya the Goddess
|
75,130,595
| 8,481,155
|
Cloud Function to retrieve attributes from PubSub - Python
|
<p>I created a GCP Cloud Function which is triggered by a PubSub topic.</p>
<pre><code>import base64
def hello_pubsub(event, context):
pubsub_message = base64.b64decode(event['data']).decode('utf-8')
print(pubsub_message)
</code></pre>
<p>I publish messages using the below command which triggers the Cloud Functions.</p>
<pre><code>gcloud pubsub topics publish test-topic --message="test" \
--attribute="origin=gcloud-sample,username=gcp"
</code></pre>
<p>Using this I can access only the "message" part of the topic. How can I access the "attribute" values from the PubSub message? I want to fetch the origin and username from the topic.</p>
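<p>My guess (untested) is that the attributes are exposed on the event dict itself, something like:</p>
<pre><code>import base64

def hello_pubsub(event, context):
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    attributes = event.get('attributes') or {}   # assuming the attributes live here
    origin = attributes.get('origin')
    username = attributes.get('username')
    print(pubsub_message, origin, username)
</code></pre>
<p>but I would like confirmation that this is the right place to look.</p>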
|
<python><google-cloud-platform><google-cloud-functions><google-cloud-pubsub>
|
2023-01-16 05:44:26
| 1
| 701
|
Ashok KS
|
75,130,303
| 1,306,186
|
Python - Detect type of private key (rsa/dsa/ecc)
|
<p>I have many private keys in a format looking like this:</p>
<pre><code>-----BEGIN ENCRYPTED PRIVATE KEY-----
...
-----END ENCRYPTED PRIVATE KEY-----
</code></pre>
<p>I would like to remove the passphrase from these keys using the python <code>PyCryptodome</code> lib. I do know the passphrase.<br />
However, I do not know the key type of each key (They are a mix of RSA/DSA/ECC keys).</p>
<p>Is there any way to detect the key type, so that I can use the correct procedure to import/export the key?<br />
(e.g. <code>Crypto.PublicKey.RSA.import_key</code>
, <code>Crypto.PublicKey.ECC.import_key</code>)</p>
<p>Currently, I'm doing this, which is rather ugly:</p>
<pre><code>try:
rsa_key_decrypted = RSA.import_key(key, key_passphrase)
decrypted_key = rsa_key_decrypted.export_key(pkcs=8).decode('ascii')
except:
try:
dsa_key_decrypted = DSA.import_key(key, key_passphrase)
decrypted_key = dsa_key_decrypted.export_key().decode('ascii')
except:
ecc_key_decrypted = ECC.import_key(key, key_passphrase)
decrypted_key = ecc_key_decrypted.export_key(format='PEM')
</code></pre>
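<p>The best alternative I have come up with so far is just a loop over the three importers, which still feels like guessing (and I am not sure the exception handling is right):</p>
<pre><code>from Crypto.PublicKey import RSA, DSA, ECC

def remove_passphrase(key, key_passphrase):
    for import_key, export_kwargs in (
        (RSA.import_key, {'pkcs': 8}),
        (DSA.import_key, {}),
        (ECC.import_key, {'format': 'PEM'}),
    ):
        try:
            decrypted = import_key(key, key_passphrase)
        except (ValueError, IndexError, TypeError):
            continue   # wrong key type, try the next importer
        exported = decrypted.export_key(**export_kwargs)
        return exported.decode('ascii') if isinstance(exported, bytes) else exported
    raise ValueError('Unsupported key type or wrong passphrase')
</code></pre>
<p>Is there a way to detect the key type up front instead of trying each importer?</p>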
|
<python><pycryptodome>
|
2023-01-16 04:41:19
| 0
| 8,462
|
Zulakis
|
75,130,239
| 20,508,530
|
How to keep deleted rows from table in variable in Django
|
<p>I want to delete all the rows in a table, but before that I want to store the old data and do some operations involving both the old and the new data.</p>
<p>This is what I have tried.</p>
<pre><code>old_data = Rc.object.all()
Rc.object.all().delete()
# Now I fetch some api and apply saving operation on table
</code></pre>
<p>I have noticed that the data in <code>old_data</code> is updated to the new data.
To test this I have done something like this</p>
<pre><code>for rows in old_data:
    check = Rc.object.filter(Q(id=rows.id)&Q(modified_date__gt=rows.modified_date)).exists()
print(check)
</code></pre>
<p>I found that <code>check</code> is always false (rows with that id exist in the table).
But when I print each row in <code>old_data</code> just after <code>old_data = Rc.object.all()</code>, <code>old_data</code> remains the same (<code>check</code> becomes true) and is not updated to the new data.</p>
<p>Is there something I am missing? How can I store all rows in the variable and then delete the rows? Please help.</p>
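<p>One idea I have not verified: since querysets are lazy, maybe I need to force evaluation before deleting, e.g. (assuming the default <code>objects</code> manager on the model):</p>
<pre><code>old_data = list(Rc.objects.all())   # evaluate the queryset now, before the delete
Rc.objects.all().delete()
# old_data should now keep the pre-delete rows as plain model instances
</code></pre>
<p>Is that the right explanation for what I am seeing?</p>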
|
<python><django>
|
2023-01-16 04:26:22
| 1
| 325
|
Anonymous
|
75,130,124
| 2,998,077
|
Search keys and read from multiple dictionaries
|
<p>Several dictionaries containing different keys and values.</p>
<p>By looping a list of keys, I want to return their values in the dictionaries (when the key is available), and sum them.</p>
<p>The problem is, some keys are not available in all the dictionaries.</p>
<p>I had to come up with different if statements, which looks clumsy. Especially when more dictionaries come in, it becomes very difficult.</p>
<p>What is a smarter way to do this?</p>
<pre><code>dict_1 = {"Mike": 1, "Lilly": 2, "David": 3}
dict_2 = {"Mike": 4, "Peter": 5, "Kate": 6}
dict_3 = {"Mike": 7, "Lilly": 8, "Jack": 9}
for each in ["Mike", "Lilly", "David"]:
if each in list(dict_1.keys()) and each in list(dict_2.keys()) and each in list(dict_3.keys()):
print (each, dict_1[each] * 1 + dict_2[each] * 2 + dict_3[each] * 3)
if each in list(dict_1.keys()) and each in list(dict_2.keys()):
print (each, dict_1[each] * 1 + dict_2[each] * 2)
if each in list(dict_2.keys()) and each in list(dict_3.keys()):
print (dict_2[each] * 2 + dict_3[each] * 3)
if each in list(dict_1.keys()) and each in list(dict_3.keys()):
print (each, dict_1[each] * 1 + dict_3[each] * 3)
</code></pre>
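<p>The closest I have gotten to something generic is collecting the dictionaries with their weights and summing only the ones that contain the key, roughly (untested):</p>
<pre><code># dict_1, dict_2, dict_3 as defined above
weighted_dicts = [(dict_1, 1), (dict_2, 2), (dict_3, 3)]

for name in ["Mike", "Lilly", "David"]:
    found = [d[name] * w for d, w in weighted_dicts if name in d]
    if found:
        print(name, sum(found))
</code></pre>
<p>but I am not sure this is the idiomatic way.</p>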
|
<python><loops><dictionary><if-statement>
|
2023-01-16 03:57:25
| 4
| 9,496
|
Mark K
|
75,130,063
| 16,971,617
|
Filter numpy array base 1-0 mask
|
<p>Say I have a numpy ndarray of shape (172, 40, 20) and a 1-0 mask of shape (172, 40). I would like to do something similar to bitwise_and: keep the values where the mask value is 1 and set the values to 0 where the mask value is 0.</p>
<p>Is there a way I can do this without looping?
For example:</p>
<pre class="lang-py prettyprint-override"><code>a3D = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(a3D.shape)
mask = np.array([[1, 0], [0, 1]])
print(mask.shape)
# (2, 2, 2)
# (2, 2)
# my func
result = bitwise_and(a3D, mask)
# result = np.array([[[1, 2], [0, 0]], [[0, 0], [7, 8]]])
</code></pre>
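<p>I suspect broadcasting might already do this, something like the following (untested):</p>
<pre><code># a3D and mask as defined above, numpy imported as np
result = a3D * mask[:, :, np.newaxis]            # broadcast the 2-D mask over the last axis
# or, keeping the original values only where the mask is 1:
result = np.where(mask[:, :, np.newaxis] == 1, a3D, 0)
</code></pre>
<p>Would one of these also be the recommended approach for the (172, 40, 20) case?</p>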
|
<python><numpy>
|
2023-01-16 03:42:46
| 2
| 539
|
user16971617
|
75,129,995
| 10,062,025
|
How to scrape a website based on response headers using requests?
|
<p>I am trying to scrape <a href="https://www.foodhall.co.id/grand-indonesia/catalog" rel="nofollow noreferrer">https://www.foodhall.co.id/grand-indonesia/catalog</a> .
I found the api <a href="https://api.foodhall.co.id/v1/catalog/productbycategoryv2" rel="nofollow noreferrer">https://api.foodhall.co.id/v1/catalog/productbycategoryv2</a> for the url above where the products are loaded from. I checked the response headers via inspect element and the returned response headers is as so:</p>
<pre><code>HTTP/1.1 200 OK
Date: Mon, 16 Jan 2023 03:07:59 GMT
Server: Apache/2.4.41 (Ubuntu)
Set-Cookie: advanced-api=cigighcd1tcmdoj0eic643mogl; path=/; HttpOnly
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Access-Control-Allow-Origin: *
Content-Length: 2685
Keep-Alive: timeout=5, max=94
Connection: Keep-Alive
Content-Type: application/json; charset=UTF-8
</code></pre>
<p>What do I have to look for in the response headers so that I don't get an invalid authorization error?</p>
<p>Currently my code is as follows</p>
<pre><code>import requests
payload={
'store':'49',
'category_id':'',
'search':'',
'filter':"",
'tag':"",
'lang':'ID',
'page':'0'
}
headers={
'Authorization': 'Bearer 17485f41ae19fbba0f4edf3241c9f033bb1af4e1c843789acfc9cf5136d443ea1673838475',
'Connection': 'keep-alive',
'Content-Length': '57',
'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
'Host': 'api.foodhall.co.id',
'Origin': 'https://www.foodhall.co.id',
'Referer': 'https://www.foodhall.co.id/',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36'
}
reponse=requests.post('https://api.foodhall.co.id/v1/catalog/productbycategoryv2',json=payload,headers=headers)
</code></pre>
<p>The response json is returning {'success': 0, 'message': 'invalid Authorization'}. I thought that the set-cookie response needs an authorization, so my next step is to figure out how to get the authorization code, I guess.</p>
<p>Can someone help me?</p>
|
<python><python-requests>
|
2023-01-16 03:26:49
| 0
| 333
|
Hal
|
75,129,827
| 20,536,016
|
Failed to run /bin/build: for Python, an entrypoint must be manually set, either with "GOOGLE_ENTRYPOINT" env var or by creating a "Procfile" file"
|
<p>I'm trying to automate a CloudDeploy pipeline by following the instructions here:</p>
<p><a href="https://davelms.medium.com/automate-gke-deployments-using-cloud-build-and-cloud-deploy-2c15909ddf22" rel="nofollow noreferrer">https://davelms.medium.com/automate-gke-deployments-using-cloud-build-and-cloud-deploy-2c15909ddf22</a></p>
<p>Things seemed to be working until I got to the last steps of adding in the commit hashes and Cloud Deploy trigger. I'm getting the following error when the trigger runs:</p>
<pre><code>Failed to run /bin/build: for Python, an entrypoint must be manually set, either with "GOOGLE_ENTRYPOINT" env var or by creating a "Procfile" file"
</code></pre>
<p>However, I definitely have a Procfile in the base directory where my clouddeploy.yaml and cloudbuild.yaml files are stored with the following content:</p>
<pre><code>web: gunicorn --bind :8080 main:app
</code></pre>
<p>Is there something else I need to do to get it to reference that?</p>
<p>For reference, here is my cloudbuild.yaml file:</p>
<pre><code>steps:
- name: 'gcr.io/k8s-skaffold/pack'
entrypoint: 'pack'
args: ['build', '--builder=gcr.io/buildpacks/builder', '--publish', 'us-east1-docker.pkg.dev/fakeproject/my-docker-repo/app3:$SHORT_SHA']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
entrypoint: 'bash'
args:
- '-c'
- >
gcloud deploy releases create release-$BUILD_ID
--delivery-pipeline=app3
--region=us-east1
--source=./
--images=app3=us-east1-docker.pkg.dev/fakeproject/my-docker-repo/app3:$SHORT_SHA
options:
logging: CLOUD_LOGGING_ONLY
</code></pre>
<p>Here is my clouddeploy.yaml:</p>
<pre><code>---
apiVersion: deploy.cloud.google.com/v1beta1
kind: DeliveryPipeline
metadata:
name: app3
description: App3 Deployment Pipeline
serialPipeline:
stages:
- targetId: demo-gke
---
apiVersion: deploy.cloud.google.com/v1beta1
kind: Target
metadata:
name: demo-gke
description: Demo Environment
gke:
cluster: projects/fakeproject/locations/us-east1/clusters/demo-gke
executionConfigs:
- usages:
- DEPLOY
workerPool: "projects/fakeproject/locations/us-east1/workerPools/cloud-build-pool"
- usages:
- RENDER
- VERIFY
</code></pre>
<p>Here is my main.py:</p>
<pre><code>from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello from app3"
# run the app.
if __name__ == "__main__":
app.run()
</code></pre>
<p>requirements.txt and skaffold.yaml are identical to the article.</p>
<p>Thanks!</p>
|
<python><google-cloud-platform><cicd><google-cloud-build><google-cloud-deploy>
|
2023-01-16 02:44:55
| 1
| 389
|
Gary Turner
|
75,129,826
| 17,696,880
|
Add letters with diacritics, and u with umlauts, to a pattern that captures strings tolerant of the rest of the letters
|
<pre class="lang-py prettyprint-override"><code>import re
input_text = "HabΓ­a... ; MartΓ­n ZΓ‘zza no se trata de un nombre" #example 1
input_text = "asasjhsah; Carolina MarΓ­a Sol no se tratarΓ­a de un nombre" #example 2
input_text = "IsaΓ­as no se tratarΓ­a de un nombre" #example 3
word = ""
name_capture_pattern_01 = r"([A-Z][a-z]+(?:\s*[A-Z][a-z]+)*)"
regex_pattern_01 = r"(?:^|[.;,]\s*)" + name_capture_pattern_01 + r"\s*(?i:no)\s*(?i:se\s*tratar[Γ­i]a\s*de\s*un\s*nombre|se\s*trata\s*de\s*un\s*nombre|(?:ser[Γ­i]a|es)\s*un\s*nombre)"
n1 = re.search(regex_pattern_01, input_text)
if n1 and word == "":
word, = n1.groups()
word = word.strip()
print(repr(word)) #print the captured substring
</code></pre>
<p>How can I add these symbols (the accented vowels and the letter u with diaeresis), <code>[ÑéíóúüΓ±]</code>, to the search pattern defined by <code>[A-Z][a-z]+</code>?</p>
<p>In this way, the search pattern will be able to capture strings that start with a capital letter, and have spaces in between, but that can include those additional symbols. In other words, the objective is to add those symbols without modifying the behavior of the capture group already defined with this regex.</p>
<p>This is the part of the capture pattern that I need to expand, <code>name_capture_pattern_01 = r"([A-Z][a-z]+(?:\s*[A-Z][a-z]+)*)"</code>, so that it can accept substrings that include these symbols <code>[ÑéíóúüΓ±]</code>. The idea is, if possible, to add that implementation in that part of the regex without modifying the rest of the regex.</p>
<p>And the outputs should be the substrings (names) obtained by the expanded capture group:</p>
<pre><code>MartΓ­n ZΓ‘zza
Carolina MarΓ­a Sol
IsaΓ­as
</code></pre>
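<p>The only change I have thought of so far (untested) is widening each lowercase character class while keeping everything else identical:</p>
<pre><code>name_capture_pattern_01 = r"([A-Z][a-zÑéíóúüΓ±]+(?:\s*[A-Z][a-zÑéíóúüΓ±]+)*)"
</code></pre>
<p>Is this enough, or do the uppercase classes also need the accented capitals?</p>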
|
<python><python-3.x><regex><string><regex-group>
|
2023-01-16 02:44:46
| 0
| 875
|
Matt095
|
75,129,712
| 5,924,264
|
Interpolating column of first dataframe based on second dataframe
|
<p>Suppose that I have 2 dataframes <code>df1</code>:</p>
<pre><code>col1
0
10
20
30
</code></pre>
<p>and <code>df2</code>:</p>
<pre><code>col1 val
0 0
5 2
15 4
25 5
33 8
</code></pre>
<p>I want to compute <code>val</code> column for <code>df1</code> that is linearly interpolated from <code>val</code> in <code>df2</code> based on <code>col1</code>. Also note that <code>col1</code> in both dfs are sorted in strictly increasing order.</p>
<p>So <code>df1.val</code> should look as follows:</p>
<pre><code>0 (no interpolation needed since there's an exact match on col1)
3 (interpolate between df2.iloc[1] and df2.iloc[2])
4.5 (interpolate between df2.iloc[2] and df2.iloc[3])
5 + 5 / 8 * 3 = 6.875 (interpolate between df2.iloc[3] and df2.iloc[4])
</code></pre>
<p>One idea I have is to use <code>merge_asof</code> twice and then interpolate.</p>
<pre><code>df1["val_left"] = pd.merge_asof(df1, df2[['col1', 'val']],
on='col1',
direction='backward')["val"].values
df1["val_right"] = pd.merge_asof(df1, df2[['col1', 'val']],
on='col1',
direction='forward')["val"].values
# then do the interpolation in df1, but I don't have the df2.col1 info (is there an easy way to get that) to do the interpolation?
</code></pre>
<p>Are there (better) alternative approaches besides the above?</p>
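<p>I am also wondering whether <code>numpy.interp</code> could do this in one call, something like the following (untested), given that both <code>col1</code> columns are strictly increasing:</p>
<pre><code>import numpy as np

# df1 and df2 as defined above
df1["val"] = np.interp(df1["col1"], df2["col1"], df2["val"])
</code></pre>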
|
<python><dataframe><interpolation>
|
2023-01-16 02:17:21
| 1
| 2,502
|
roulette01
|
75,129,600
| 11,919,198
|
Python Weird Scope
|
<p>Given the following:</p>
<pre><code>a = 0
def func():
a += 2
print(a)
func()
print(a)
</code></pre>
<p>I get an <code>UnboundLocalError: local variable 'a' referenced before assignment</code> error. Fair enough, since <code>a</code> is local to <code>func</code>.</p>
<p>However, when I remove the increment statement and simply print <code>a</code> within <code>func</code>, it works.</p>
<pre><code>a = 0
def func():
print(a)
func()
print(a)
</code></pre>
<p>Why does it work then? Why does the increment operator not see <code>a</code> while the <code>print</code> statement does?</p>
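<p>I know I can make the first version run by declaring the name global, as in the sketch below, but that does not explain the difference in behaviour:</p>
<pre><code>a = 0

def func():
    global a   # tell Python that `a` refers to the module-level name
    a += 2
    print(a)

func()    # 2
print(a)  # 2
</code></pre>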
|
<python><scope>
|
2023-01-16 01:43:59
| 2
| 334
|
doofus
|
75,129,097
| 7,575,552
|
IndexError: too many indices for array: array is 2-dimensional, but 4 were indexed
|
<p>I am training a multilabel VGG-16 based classification model. There are 25 labels for this task. I am trying to replicate this code at <a href="https://towardsdatascience.com/multi-label-classification-and-class-activation-map-on-fashion-mnist-1454f09f5925" rel="nofollow noreferrer">https://towardsdatascience.com/multi-label-classification-and-class-activation-map-on-fashion-mnist-1454f09f5925</a> to generate the class activation map using the trained model.</p>
<pre><code>model = load_model('weights/vgg16_multilabel.09-0.3833.h5')
model.summary()
sgd = SGD(learning_rate=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='binary_crossentropy',
metrics=['accuracy'])
#labels
columns=['Action', 'Adventure', 'Animation', 'Biography', 'Comedy',
'Crime', 'Documentary', 'Drama', 'Family', 'Fantasy',
'History', 'Horror', 'Music', 'Musical', 'Mystery',
'N/A', 'News', 'Reality-TV', 'Romance', 'Sci-Fi', 'Short',
'Sport', 'Thriller', 'War', 'Western']
gap_weights = model.layers[-1].get_weights()[0] #final dense layer
print(" >>> size(gap_weights) = ", gap_weights.size)
#extract from the deepest convolutional layer
cam_model = Model(inputs=model.input,
outputs=(model.layers[-3].output,
model.layers[-1].output))
print(" >>> K.int_shape(model.layers[-3].output) = ", K.int_shape(model.layers[-3].output))
print(" >>> K.int_shape(model.layers[-1].output) = ", K.int_shape(model.layers[-1].output))
#--- make the prediction
features, results = cam_model.predict(X_test)
# check the CAM activations for 10 test images
for idx in range(10):
# get the feature map of the test image
features_for_one_img = features[idx, :, :, :]
# map the feature map to the original size
height_roomout = train_img_size_h / features_for_one_img.shape[0]
width_roomout = train_img_size_w / features_for_one_img.shape[1]
cam_features = sp.ndimage.zoom(features_for_one_img, (height_roomout, width_roomout, 1), order=2)
# get the predicted label with the maximum probability
pred = np.argmax(results[idx])
# prepare the final display
plt.figure(facecolor='white')
# get the weights of class activation map
cam_weights = gap_weights[:, pred]
# create the class activation map
cam_output = np.dot(cam_features, cam_weights)
# draw the class activation map
ax.set_xticklabels([])
ax.set_yticklabels([])
buf = 'Predicted Class = ' + columns[pred] + ', Probability = ' + str(results[idx][pred])
plt.xlabel(buf)
plt.imshow(t_pic[idx], alpha=0.5)
plt.imshow(cam_output, cmap='jet', alpha=0.5)
plt.show()
</code></pre>
<p>This is the output</p>
<pre><code>size(gap_weights) = 12800
K.int_shape(model.layers[-4].output) = (None, 512)
K.int_shape(model.layers[-1].output) = (None, 25)
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/project/1/complete_code.py", line 1295, in <module>
features_for_one_img = features[idx, :, :, :]
IndexError: too many indices for array: array is 2-dimensional, but 4 were indexed
</code></pre>
<p>I am getting this error in Tensorflow 2.X but I had no problems in Tensorflow 1.X.</p>
|
<python><tensorflow><keras><indexing>
|
2023-01-15 23:25:49
| 1
| 1,189
|
shiva
|
75,129,055
| 6,676,101
|
How can we convert an arbitrary iterable of iterables into a nested list of lists of strings?
|
<p>Suppose <code>obj</code> has an <code>__iter__()</code> method.</p>
<p>We want to write a function which accepts <code>obj</code> as input and outputs a nested list of lists of lists .... of lists of strings.</p>
<p>If an object is iterable, and is not a string, we recur.</p>
<p>If an object is a string, we return the string.</p>
<p>If an object is non-iterable, we call the string-class constructor on that object, and return the resulting string.</p>
<p>Unfortunately, there may be self-loops. That is, an object's <code>__iter__</code> method might return an object we have seen previously.</p>
<pre class="lang-python prettyprint-override"><code>root.__iter__().__next__().__next__().__iter__().__next__().__next__().__iter__().__next__() == root
</code></pre>
<p>Maybe we should use the <code>id()</code> function to check if the object has been seen previously?</p>
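<p>This is the rough shape of what I have in mind (untested), tracking visited objects by <code>id()</code>; the cycle handling in particular is a guess:</p>
<pre><code>def to_nested_lists(obj, seen=None):
    seen = set() if seen is None else seen
    if isinstance(obj, str):
        return obj
    try:
        items = iter(obj)
    except TypeError:
        return str(obj)          # non-iterable: fall back to the string constructor
    if id(obj) in seen:
        return []                # already visited: break the self-loop somehow?
    seen.add(id(obj))
    return [to_nested_lists(item, seen) for item in items]
</code></pre>
<p>Is that a reasonable way to guard against the self-loops?</p>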
|
<python><python-3.x><recursion><tree>
|
2023-01-15 23:15:26
| 2
| 4,700
|
Toothpick Anemone
|
75,129,034
| 9,400,421
|
Is there a library in python to accomplish cross platform OS binary installation? Cross platform dependency installation via Python
|
<p>I often write Jupyter notebooks (Python), and since you can execute them on Windows or Linux (and the many flavors thereof), when the notebook has a dependency (say it is meant to demo a proof of concept for a Node.js module CVE), I can run a command to install that node module cross-platform, since Node.js / the node package manager works the same on various platforms. But it would be more beginner-friendly and convenient if I could also install Node.js for them, with Python.</p>
<p>I could write a bunch of logic to detect the platform and so on, but that would be a lot of work each time. Before I write some re-usable solution myself, I'm just wondering if it's already been done. Or even if there's a good best-practice / trick in python to make it easier.</p>
<p>For example with this code (executed in an ipynb file):</p>
<pre><code># Prereqs: docker-compose, node.js, node package manager
import requests
import os
path = "./open_vas"
exists = os.path.exists(path)
if not exists:
os.makedirs(path)
r = requests.get('https://greenbone.github.io/docs/latest/_static/docker-compose-22.4.yml')
with open("./open_vas/docker-compose.yml",'wb') as f:
f.write(r.content)
!docker-compose -f ./open_vas/docker-compose.yml -p greenbone-community-edition pull
!docker-compose -f ./open_vas/docker-compose.yml -p greenbone-community-edition up -d
!npm i express@4.10.4
</code></pre>
<p>I'd like a cross platform solution to install those pre-reqs and save the user the time / effort.</p>
|
<python>
|
2023-01-15 23:11:01
| 0
| 847
|
J.Todd
|
75,128,970
| 6,676,101
|
How do I merge two objects into one?
|
<p>How can we dynamically (procedurally, at run time) create one new object from two old objects so that operations performed on the new object are performed on both of the two old objects?</p>
<p>As just one example, we might have two streams:</p>
<blockquote>
<ol>
<li>a string stream ... <code>str_strm = io.StringIO()</code></li>
<li>a file stream ... <code>fl_strm = open("test_file.txt", "w")</code></li>
</ol>
</blockquote>
<p>In the example, we might want it to be that anytime we try to write a string to the new stream, copies of that string are written to the two older streams.</p>
<pre class="lang-python prettyprint-override"><code>class MergedStream:
    def __init__(self, strm1, strm2):
self._strm1 = strm1
self._strm2 = strm2
def write(self, msg:str):
        self._strm1.write(msg)
        self._strm2.write(msg)
return None
str_strm = io.StringIO()
fl_strm = open("test_file.txt", "w")
ms = MergedStream(str_strm, fl_strm)
</code></pre>
<p>I can <em><strong>NOT</strong></em> guarantee that the two old objects both have a method named <code>write</code>.</p>
<p>Our solution should be general and dynamically generate the new merged object no matter what the two old objects are.</p>
<p>How might we create a new object in such a way that anytime we try to act upon the new object, we perform that same action upon the two older objects?</p>
<p>Something like the following is a step in the right direction, but not a complete solution:</p>
<pre class="lang-python prettyprint-override"><code>class MergedObject:
@classmethod
def merge(cls, left, right):
if left == right:
return left
else:
return cls(left, right)
def __call__(self, *args, **kwargs):
try:
r1 = self._lefty(*args, **kwargs)
r2 = self._righty(*args, **kwargs)
return type(self).merge(r1, r2)
except AttributeError:
raise AttributeError()
    def __init__(self, lefty, righty):
self._lefty = lefty
self._righty = righty
reserved = dir(self) # reserved attribute names
attributes = dict()
        attribute_names = (set(dir(lefty)) | set(dir(righty))).difference(reserved)
for attribute_name in attribute_names:
            latter = getattr(lefty, attribute_name)   # `latter` is the `left-attribute`
            ratter = getattr(righty, attribute_name)  # `ratter` is the `right-attribute`
            setattr(self, attribute_name, type(self).merge(latter, ratter))
</code></pre>
<h2>SOME WRINKLES TO IRON OUT</h2>
<ul>
<li>I need to be able to merge two objects which have no overloaded <code>==</code> operator. That is, even if there is no code inside of the class definition which says <code>def __eq__():</code>, I need to test if two things are equal. Suppose I merge two objects which both have a method named <code>insert()</code> which returns <code>-1</code>. I do not want to carry around duplicate values. I do not want a merged object of two copies of the number <code>10</code>, where <code>x + 5</code> computes <code>10 + 5</code> twice.</li>
<li>There are some issues with overriding Python's "<em>magic</em>" methods (dunder methods) such as <code>__len__</code> or <code>__mul__</code>. If the two old objects both have a <code>__len__</code> method, the new merged object might fail to have a method named <code>__len__</code>.</li>
</ul>
|
<python><python-3.x><oop><wrapper>
|
2023-01-15 22:58:11
| 0
| 4,700
|
Toothpick Anemone
|
75,128,941
| 4,070,660
|
Gunicorn with django giving 500 with no extra information
|
<p>I am trying to run Django 3.2.16 with gunicorn, and I get this output in the console:</p>
<pre><code>[2023-01-15 23:45:39 +0100] [210935] [INFO] Starting gunicorn 20.1.0
[2023-01-15 23:45:39 +0100] [210935] [DEBUG] Arbiter booted
[2023-01-15 23:45:39 +0100] [210935] [INFO] Listening at: http://0.0.0.0:8000 (210935)
[2023-01-15 23:45:39 +0100] [210935] [INFO] Using worker: sync
[2023-01-15 23:45:39 +0100] [210936] [INFO] Booting worker with pid: 210936
[2023-01-15 23:45:39 +0100] [210935] [DEBUG] 1 workers
</code></pre>
<p>Everything looks like it is working, but when I go to localhost, I get <code>Internal Server Error</code>.</p>
<p>It kind of behaves as if I had <code>DEBUG = False</code>, but I have <code>DEBUG = True</code>, and there is also nothing in the console. The Django setup finishes and I also verify that settings.DEBUG is indeed true:</p>
<p>My wsgi.py file:</p>
<pre><code>application = get_wsgi_application()
print(settings.DEBUG)
</code></pre>
<p>And of course <code>runserver</code> works fine.</p>
<p>What else could that be? How can I get some kind of error output? I tried <code>capture-out</code> and all the log files and levels that gunicorn provides but got nothing useful from the console.</p>
|
<python><django><gunicorn><wsgi>
|
2023-01-15 22:50:36
| 1
| 1,512
|
K.H.
|
75,128,758
| 9,137,211
|
Django Path.home() giving servers directory instead of users home directory
|
<p>I am on a Django project, using version 4 and Python 3.10.
I want to download a file and my code is:</p>
<pre><code>downloads_path = str(Path.home() / "Downloads").replace('\\','/')
if not os.path.isdir(f"{downloads_path}/{str(row['CUSTOMER NAME'])}"):
os.makedirs(f"{downloads_path}/{str(row['CUSTOMER NAME'])}")
pdf.output(f"{downloads_path}/{str(row['CUSTOMER NAME'])}/document.pdf")
</code></pre>
<p><code>downloads_path = str(Path.home() / "Downloads").replace('\\','/')</code>
Giving me <strong>PermissionError : [Errno 13] Permission denied: '/var/www/Downloads'</strong></p>
<p>It's working on my local server but not on my live server. It is looking for my server's home directory instead of my computer's home directory.
Permission:
<a href="https://i.sstatic.net/vAslm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vAslm.png" alt="enter image description here" /></a></p>
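<p>I am starting to think the PDF has to be sent back in the HTTP response rather than written to a local Downloads folder, something like the sketch below (untested, and it assumes fpdf2, where <code>pdf.output()</code> with no path returns the document bytes):</p>
<pre><code>from django.http import HttpResponse

def download_document(request):
    # ... build `pdf` as before ...
    pdf_bytes = bytes(pdf.output())   # assumption: fpdf2 returns a bytearray when no filename is given
    response = HttpResponse(pdf_bytes, content_type="application/pdf")
    response["Content-Disposition"] = 'attachment; filename="document.pdf"'
    return response
</code></pre>
<p>Is that the right direction, or is there a way to write into the user's Downloads folder from the server?</p>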
|
<python><django>
|
2023-01-15 22:13:50
| 0
| 335
|
Mayanktaker
|
75,128,656
| 3,611,472
|
Why do I get RecursionError in this implementation of sorted dictionary as Binary Search Tree?
|
<p>I am trying to implement a kind of sorted dictionary as a Binary Search Tree. The idea is that no matter what operation I do on this new dictionary-like object, the elements of the dictionary are always sorted with respect to the keys. My implementation is not complete - there are some issues of performance that I would like to fix before completing the code.</p>
<p>The idea is to create two classes, <code>Node</code>and <code>SortedDict()</code>.</p>
<p>The class <code>Node</code> has the <code>__getitem__</code> and <code>__setitem__</code> methods to insert and get elements in the tree (for the moment I am not implementing a delete operation).</p>
<p>The class <code>SortedDict()</code> takes a sorted dictionary (for the moment I am not implementing cases where the argument is not a sorted dictionary), and it starts inserting the elements of the dictionary starting from the leaves of the tree. First, it processes the left child until it reaches the median of the keys of the dictionary. Then, it processes the right child, and then it glues the two children trees to the root. The root is the median of the sorted dictionary' keys.</p>
<p>If the dictionary is very large, I get a <code>RecursionError: maximum recursion depth exceeded in comparison</code> error if I try to access an element that is very far from the root. Why do I get this error? How can I avoid it?</p>
<p>The traceback is:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "main.py", line 96, in <module>
print(dic_obj[5])
File "main.py", line 91, in __getitem__
return self.root.__getitem__(item)
File "main.py", line 26, in __getitem__
return self.left.__getitem__(item)
File "main.py", line 26, in __getitem__
return self.left.__getitem__(item)
File "main.py", line 26, in __getitem__
return self.left.__getitem__(item)
[Previous line repeated 994 more times]
File "main.py", line 18, in __getitem__
if item > self.root:
RecursionError: maximum recursion depth exceeded in comparison
</code></pre>
<p>To reproduce this error you can use the following code:</p>
<pre><code>dic = {k:k for k in range(1,10000)}
dic_obj = SortedDict(dic)
print(dic_obj[5])
</code></pre>
<p>where the definition of <code>SortedDict</code> is given as follow:</p>
<pre><code>class Node():
def __init__(self, root=None, value=None, left=None, right=None, parent=None):
self.parent = parent
self.root, self.value = root, value
self.left = left
self.right = right
def __str__(self):
return f'<Node Object - Root: {self.root}, Value: {self.value}, Parent: {self.parent}>'
def __getitem__(self, item):
if self.root is None:
raise KeyError(item)
if item > self.root:
if self.right is None:
raise KeyError(item)
return self.right.__getitem__(item)
if item < self.root:
if self.left is None:
raise KeyError(item)
return self.left.__getitem__(item)
if item == self.root:
return self.value
def __setitem__(self, key, value):
if self.root is None:
self.root, self.value = key, value
else:
if key > self.root:
if self.right is not None:
self.right.__setitem__(key,value)
else:
self.right = Node(root=key, value=value)
self.right.parent = self
elif key < self.root:
if self.left is not None:
self.left.__setitem__(key,value)
else:
self.left = Node(root=key, value=value)
self.left.parent = self
elif key == self.root:
self.root = value
class SortedDict():
def __init__(self, array: dict):
self.root = Node()
if array:
keys = list(array.keys())
for key in range(len(keys)//2):
self.__setitem__(keys[key],array[keys[key]])
root = Node(root=keys[key+1],value=array[keys[key+1]])
self.root.parent = root
root.left = self.root
self.root = Node()
for key in range(len(keys)//2+1,len(keys)):
self.__setitem__(keys[key],array[keys[key]])
self.root.parent = root
root.right = self.root
self.root = root
def __setitem__(self, key, value):
try:
if key > self.root.root:
if self.root.right is None and self.root.left is None:
node = Node(root=key, value=value)
self.root.parent = node
node.left = self.root
self.root = node
else:
if self.root.right is None:
self.root.right = Node(root=key, value=value, parent=self.root)
else:
node = Node(root=key, value=value)
self.root.parent = node
node.left = self.root
self.root = node
except:
self.root.root = key
self.root.value = value
def __getitem__(self, item):
return self.root.__getitem__(item)
</code></pre>
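<p>One workaround I am considering (untested) is replacing the recursive lookup in <code>Node</code> with an iterative loop, which should avoid the recursion limit even when the tree degenerates into a long chain:</p>
<pre><code>    def __getitem__(self, item):           # iterative version of Node.__getitem__
        node = self
        while node is not None and node.root is not None:
            if item == node.root:
                return node.value
            node = node.right if item > node.root else node.left
        raise KeyError(item)
</code></pre>
<p>but I would still like to understand why the tree ends up so deep in the first place.</p>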
|
<python><dictionary><recursion><binary-search-tree>
|
2023-01-15 21:50:46
| 0
| 443
|
apt45
|
75,128,633
| 4,709,889
|
Calculate percent of values based on column in dataframe
|
<p>I am trying to get the percent of values in a column based on a list of unique values in another column. My dataframe has the following structure:</p>
<pre><code>property_state_code | converted
--------------------------------
NY converted
TX converted
TX Not Converted
CA Not Converted
MO converted
</code></pre>
<p>The results I want would be something like:</p>
<pre><code>states | conversion_pct
-----------------------
NY 1
TX .5
CA 0
MO 1
</code></pre>
<p>My code thus far is below. The error I am getting is</p>
<blockquote>
<p>"The truth value of a Series is ambiguous. Use a.empty, a.bool(),
a.item(), a.any() or a.all()."</p>
</blockquote>
<p>In the line where I am trying to do the pct calculation (pct = ...). I am not sure where in that line this error is occurring, so any insight or help would be appreciated!</p>
<pre><code>states = []
for val in results.property_state_code:
if val not in states:
states.append(val)
print(states)
conversion_pct = []
for state in states:
if results['property_state_code'] == state:
pct = (results['converted'].value_counts()['converted']) / ((results['converted'].value_counts()['converted']) + (results['converted'].value_counts()['Not Converted']))
conversion_pct.append(pct)
</code></pre>
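<p>I also tried to think of a groupby-based version, something like the following (untested):</p>
<pre><code># results is the dataframe shown above
conversion_pct = (
    (results['converted'] == 'converted')
    .groupby(results['property_state_code'])
    .mean()
    .rename('conversion_pct')
    .reset_index()
)
</code></pre>
<p>but I am mainly trying to understand the error in my loop above.</p>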
|
<python><pandas>
|
2023-01-15 21:45:41
| 1
| 391
|
FrenchConnections
|
75,128,584
| 596,057
|
Compute shader won't write to texture
|
<p>I'm porting an application from Windows to Ubuntu 20.04 and none of my compute shaders will write to a texture. I've reproduced the problem in the following program. I initialize the texture to zeros and after the shader runs the texture is still filled with zeros.</p>
<pre><code>import numpy
import moderngl
import OpenGL.GL
compute_shader_source = '''
#version 450 core
#extension GL_ARB_uniform_buffer_object : enable
layout(local_size_x=1, local_size_y=1, local_size_z=1) in;
layout(r8ui) uniform uimage3D to_image;
void main() {
imageStore(to_image, ivec3(gl_GlobalInvocationID.x, gl_GlobalInvocationID.y, gl_GlobalInvocationID.z), uvec4(255, 255, 255, 255));
return;
}
'''.strip()
context = moderngl.create_context(require=450, standalone=True)
compute_program = OpenGL.GL.glCreateProgram()
shader = OpenGL.GL.glCreateShader(OpenGL.GL.GL_COMPUTE_SHADER)
OpenGL.GL.glShaderSource(shader, [compute_shader_source])
OpenGL.GL.glCompileShader(shader)
compiled = OpenGL.GL.glGetShaderiv(shader, OpenGL.GL.GL_COMPILE_STATUS, None)
if not compiled:
log = OpenGL.GL.glGetShaderInfoLog(shader).decode()
raise RuntimeError("Couldn't compile shader:\n" + log)
OpenGL.GL.glAttachShader(compute_program, shader)
OpenGL.GL.glLinkProgram(compute_program)
linked = OpenGL.GL.glGetProgramiv(compute_program, OpenGL.GL.GL_LINK_STATUS, None)
if not linked:
log = OpenGL.GL.glGetProgramInfoLog(compute_program).decode()
raise RuntimeError("Couldn't link shader:\n" + log)
OpenGL.GL.glUseProgram(compute_program)
OpenGL.GL.glActiveTexture(OpenGL.GL.GL_TEXTURE0)
to_image_location = OpenGL.GL.glGetUniformLocation(compute_program, 'to_image')
OpenGL.GL.glUniform1i(to_image_location, 0)
textures = OpenGL.GL.glGenTextures(1)
zeros = numpy.zeros((32, 32, 32), dtype=numpy.uint8)
OpenGL.GL.glBindTexture(OpenGL.GL.GL_TEXTURE_3D, textures)
OpenGL.GL.glTexImage3D(OpenGL.GL.GL_TEXTURE_3D, 0, OpenGL.GL.GL_RED,
32, 32, 32, 0,
OpenGL.GL.GL_RED, OpenGL.GL.GL_UNSIGNED_BYTE, zeros)
OpenGL.GL.glTexParameteri(OpenGL.GL.GL_TEXTURE_3D, OpenGL.GL.GL_TEXTURE_MIN_FILTER, OpenGL.GL.GL_NEAREST)
OpenGL.GL.glTexParameteri(OpenGL.GL.GL_TEXTURE_3D, OpenGL.GL.GL_TEXTURE_MAG_FILTER, OpenGL.GL.GL_NEAREST)
OpenGL.GL.glBindTexture(OpenGL.GL.GL_TEXTURE_3D, 0)
OpenGL.GL.glBindImageTexture(0, textures, 0, True, 0, OpenGL.GL.GL_READ_WRITE, OpenGL.GL.GL_R8UI)
OpenGL.GL.glDispatchCompute(32, 32, 32)
OpenGL.GL.glFinish()
OpenGL.GL.glBindTexture(OpenGL.GL.GL_TEXTURE_3D, textures)
data = OpenGL.GL.glGetTexImage(OpenGL.GL.GL_TEXTURE_3D, 0, OpenGL.GL.GL_RED, OpenGL.GL.GL_UNSIGNED_BYTE)
print('data', sum(data))
</code></pre>
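<p>One thing I am now suspecting (untested) is a mismatch between the texture's internal format (<code>GL_RED</code>, a normalized format) and the <code>r8ui</code> / <code>GL_R8UI</code> integer format used by the shader and <code>glBindImageTexture</code>. The change I plan to try is allocating and reading back the texture as an integer format:</p>
<pre><code># allocate the texture with an unsigned-integer internal format
OpenGL.GL.glTexImage3D(OpenGL.GL.GL_TEXTURE_3D, 0, OpenGL.GL.GL_R8UI,
                       32, 32, 32, 0,
                       OpenGL.GL.GL_RED_INTEGER, OpenGL.GL.GL_UNSIGNED_BYTE, zeros)
# ... rest of the setup unchanged ...
data = OpenGL.GL.glGetTexImage(OpenGL.GL.GL_TEXTURE_3D, 0,
                               OpenGL.GL.GL_RED_INTEGER, OpenGL.GL.GL_UNSIGNED_BYTE)
</code></pre>
<p>Could that mismatch explain why it happened to work on Windows but not here?</p>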
|
<python><glsl><compute-shader>
|
2023-01-15 21:37:32
| 1
| 1,638
|
HahaHortness
|
75,128,500
| 4,700,367
|
Relative path ImportError when trying to import a shared module from a subdirectory in a script
|
<p>I am trying to import a util package one directory up from where my code is, but I get an ImportError which I don't understand.</p>
<p>I have tried a number of different variations of the import syntax in Python, none of which are working.</p>
<p>There are a number of similar questions on Stack Overflow, but none have helped me understand or fix this issue.</p>
<p>Off the top of my head, I have tried the following variations:</p>
<pre class="lang-py prettyprint-override"><code>import util
import ..util
from .. import util
from ..util import parser
from AdventOfCode2022 import util
from ..AdventOfCode2022 import util
from ...AdventOfCode2022 import util
</code></pre>
<p>Most of these I guessed wouldn't work, but I tried them anyway to be sure.</p>
<p>Error message:</p>
<blockquote>
<p>ImportError: attempted relative import with no known parent package</p>
</blockquote>
<p>Directory structure:</p>
<pre class="lang-none prettyprint-override"><code>.
βββ day03
β βββ input.txt
β βββ part1.py
β βββ part2.py
β βββ test_input.txt
βββ util
βββ __init__.py
βββ parser.py
</code></pre>
<p>I just want to import my util package from any "day0*/" directory - not sure why Python makes it so hard!</p>
|
<python><python-3.x><python-import>
|
2023-01-15 21:24:02
| 1
| 438
|
Sam Wood
|
75,128,494
| 14,167,846
|
Subset a list in python on pre-defined string
|
<p>I have some extremely large lists of character strings I need to parse. I need to break them into smaller lists based on a pre-defined character string, and I figured out a way to do it, but I worry that this will not be performant on my real data. Is there a better way to do this?</p>
<p>My goal is to turn this list:</p>
<pre><code>['a', 'b', 'string_to_split_on', 'c', 'd', 'e', 'f', 'g', 'string_to_split_on', 'h', 'i', 'j', 'k', 'string_to_split_on']
</code></pre>
<p>Into this list:</p>
<pre><code>[['a', 'b'], ['c', 'd', 'e', 'f', 'g'], ['h', 'i', 'j', 'k']]
</code></pre>
<p>What I tried:</p>
<pre><code># List that replicates my data. `string_to_split_on` is a fixed character string I want to break my list up on
my_list = ['a', 'b', 'string_to_split_on', 'c', 'd', 'e', 'f', 'g', 'string_to_split_on', 'h', 'i', 'j', 'k', 'string_to_split_on']
# Inspect List
print(my_list)
# Create empty lists to store data in
new_list = []
good_letters = []
# Iterate over each string in the list
for i in my_list:
    # If the string is the separator, append data to new_list, reset `good_letters` and move to the next string
if i == 'string_to_split_on':
new_list.append(good_letters)
good_letters = []
continue
# Append letter to the list of good letters
else:
good_letters.append(i)
# I just like printing things that way because it's easy to read
for item in new_list:
print(item)
print('-'*100)
### Output
['a', 'b', 'string_to_split_on', 'c', 'd', 'e', 'f', 'g', 'string_to_split_on', 'h', 'i', 'j', 'k', 'string_to_split_on']
['a', 'b']
----------------------------------------------------------------------------------------------------
['c', 'd', 'e', 'f', 'g']
----------------------------------------------------------------------------------------------------
['h', 'i', 'j', 'k']
----------------------------------------------------------------------------------------------------
</code></pre>
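<p>I also came across <code>itertools.groupby</code> and wonder whether something like this (untested) would be the more idiomatic or faster way:</p>
<pre><code>from itertools import groupby

# my_list as defined above
new_list = [
    list(group)
    for is_sep, group in groupby(my_list, key=lambda x: x == 'string_to_split_on')
    if not is_sep
]
</code></pre>
<p>One difference I can see is that consecutive separators would not produce empty sublists, unlike my loop.</p>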
|
<python><string><list><split>
|
2023-01-15 21:23:11
| 1
| 545
|
pkpto39
|
75,128,468
| 558,619
|
Conditionally including multiple keyword arguments in a function call
|
<p>What is the 'pythonic' way to implement a waterfall if-statement situation like this, when it applies to <code>kwargs</code>? I'm trying to avoid a situation where <code>c=None</code> is added to <code>kwargs</code> (because having <code>c</code> in the <code>kwargs</code> keys causes all sorts of problems downstream).</p>
<pre><code>def run_something(**kwargs):
print(kwargs)
def func(a = None, b=None, c=None):
if a and b and c:
run_something(a=a,b=b,c=c)
elif a and b:
run_something(a=a,b=b)
elif a:
run_something(a=a)
else:
run_something()
</code></pre>
<p>I know the easy answer is that I could just do:</p>
<pre><code>def func(**kwargs):
run_something(**kwargs)
</code></pre>
<p>However, my particular use case doesn't make this easy.</p>
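<p>The best I have come up with so far is building the kwargs dict and dropping the <code>None</code> values, roughly:</p>
<pre><code>def func(a=None, b=None, c=None):
    kwargs = {k: v for k, v in {'a': a, 'b': b, 'c': c}.items() if v is not None}
    run_something(**kwargs)
</code></pre>
<p>but I am not sure whether dropping on <code>None</code> (rather than truthiness, as in my original waterfall) is the right call.</p>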
|
<python>
|
2023-01-15 21:18:17
| 3
| 3,541
|
keynesiancross
|
75,128,463
| 104,910
|
macOS trusts Python whl files, but not tar files
|
<p>I provide my Python C extension both as a tar file and as a whl file.</p>
<p>On macOS, I am surprised that there are no problems installing the wheel for my extension. When setting PYTHONPATH to my python extension in the tarball, I am immediately met with the dialog about unverified developers.</p>
<p>Could someone please explain why macOS distrusts my unsigned Python extension from the tarball, but has no problem with it when I package it in a wheel file?</p>
<p>Update: This is the error I see after I close the dialog box:</p>
<pre><code>>>> import devsim
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jsanchez/Downloads/devsim_macos_v2.3.7rc1/lib/devsim/__init__.py", line 8, in <module>
from .devsim_py3 import *
ImportError: dlopen(/Users/jsanchez/Downloads/devsim_macos_v2.3.7rc1/lib/devsim/devsim_py3.so, 0x0002): tried: '/Users/jsanchez/Downloads/devsim_macos_v2.3.7rc1/lib/devsim/devsim_py3.so' (code signature in <ED44BB00-36CF-39A8-8999-8309CBA016E4> '/Users/jsanchez/Downloads/devsim_macos_v2.3.7rc1/lib/devsim/devsim_py3.so' not valid for use in process: library load disallowed by system policy)
</code></pre>
<p>Update 2:</p>
<p>From what I can see, if I download the tgz to my Downloads folder using Chrome, I get the error. If I download the tgz using curl, it works. I am thinking that Chrome is adding this bad attribute.</p>
<p><a href="https://i.sstatic.net/zz7l0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zz7l0.png" alt="enter image description here" /></a></p>
|
<python><macos><security><pip><shared-libraries>
|
2023-01-15 21:17:30
| 1
| 3,813
|
Juan
|
75,128,358
| 3,714,982
|
Pandas: Convert multiple columns in DataFrame from object to float
|
<p>To clean up data I want to convert several columns that should be numeric to numeric. Assignment of multiple columns has no effect.</p>
<p>Minimal (non-)working example.
The column 'S' is a string, 'D' and 'L' should be converted to numeric. The <code>dtypes</code> are <code>object</code> - as expected</p>
<p>1)</p>
<pre><code>x=pd.DataFrame([ ['S',1,5],
['M',1.4,'10'],
['L','2.3',14.5]],
columns=['S', 'D', 'L'])
x.dtypes
S object
D object
L object
dtype: object
</code></pre>
<ol start="2">
<li></li>
</ol>
<p>I tried the conversion using <code>pd.to_numeric</code> and <code>astype(float)</code> on the slice. <strong>It does not work</strong></p>
<pre><code>x.loc[:, ['D','L'] ] = x.loc[:, ['D', 'L']].apply(pd.to_numeric)
x.dtypes
S object
D object
L object
dtype: object
</code></pre>
<p>3 Check: creating a new dataframe with the righthand side does work and gives the correct type (float64)</p>
<pre><code>x.loc[:, ['D', 'L']].apply(pd.to_numeric).dtypes
D float64
L float64
dtype: object
</code></pre>
<p>4 What <strong>does work</strong> is assigning a <em>single</em> column.</p>
<pre><code>x.loc[:, 'D' ] = x.loc[:, 'D'].apply(pd.to_numeric)
</code></pre>
<p>yields the <strong>correct</strong> types for column 'D'.</p>
<p>The weird thing is that after assigning a single column with a corrected type, the assignment of multiple columns (as in 2) works?!</p>
<p>5</p>
<pre><code>x.loc[:, ['D','L'] ] = x.loc[:, ['D', 'L']].apply(pd.to_numeric)
x.dtypes
S object
D float64
L float64
dtype: object
</code></pre>
<p>I am not sure if it has to do with views vs. copies but even when creating a deep-copy (<code>copy(deep=True)</code>) the assignment of a slice with multiple columns has no effect.</p>
<p><strong>Why</strong> does this not work? And <strong>how</strong> does pandas expect me to work with multiple columns?</p>
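<p>For completeness, two variants I am considering instead of the <code>.loc</code> assignment (untested against my real data):</p>
<pre><code>x[['D', 'L']] = x[['D', 'L']].apply(pd.to_numeric)   # plain [] replaces the columns outright
# or
x = x.astype({'D': 'float64', 'L': 'float64'})
</code></pre>
<p>but I would still like to understand why the <code>.loc</code> version behaves differently.</p>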
|
<python><pandas>
|
2023-01-15 20:56:59
| 2
| 343
|
truschival
|
75,128,330
| 1,523,238
|
How do I read the local storage of my Safari Web Extension using Selenium in Python?
|
<p>With the Firefox WebDriver I can read the local storage of my extension like so:</p>
<pre class="lang-py prettyprint-override"><code>extension_path = "/path/to/my/extension"
info = {
"extension_id": f"foobar",
"uuid": uuid.uuid4(),
}
base_url = f"moz-extension://{info['uuid']}/"
opts = FirefoxOptions()
opts.set_preference('extensions.webextensions.uuids', '{"%s": "%s"}' % (
info["extension_id"], info["uuid"]))
driver = webdriver.Firefox(options=opts)
driver.install_addon(extension_path, temporary=True)
driver.get(f"{base_url}_generated_background_page.html")
results = self.driver.execute_async_script((
"let done = arguments[arguments.length - 1],"
" store_name = arguments[0];"
"browser.storage.local.get([store_name], function (res) {"
" done(res[store_name]);"
"});"
), "foo")
</code></pre>
<p>How can I do the same with the Safari WebDriver on macOS? I've ported the extension using <code>xcrun safari-web-extension-converter /path/to/my/extension</code> and built and manually tested that it works in Safari. In Safari I can go to <code>Develop -> Web Extension Background Pages -> <my web extension></code> to find the id of the extension and see that a generated background page is located at <code>safari-web-extension://<id>/_generated_background_page.html</code></p>
<p>But running the following results in Selenium freezing at <code>driver.get(f"{base_url}_generated_background_page.html")</code></p>
<pre class="lang-py prettyprint-override"><code>base_url = f"safari-web-extension://<id>/"
driver = webdriver.Safari()
driver.get(f"{base_url}_generated_background_page.html")
results = self.driver.execute_async_script((
"let done = arguments[arguments.length - 1],"
" store_name = arguments[0];"
"browser.storage.local.get([store_name], function (res) {"
" done(res[store_name]);"
"});"
), "foo")
</code></pre>
<p>What can I do?</p>
<h2>Update Feb 8th 2023</h2>
<p>I have also tried an approach using <code>browser.runtime.sendMessage</code> where in Python Selenium I do this:</p>
<pre><code>results = self.driver.execute_async_script((
"let done = arguments[arguments.length - 1],"
" store_name = arguments[0];"
" browser.runtime.sendMessage('com.oskar.foo.Extension (Apple Team ID)', {}, function (res) {"
" done(res[store_name]);"
" });"
), "foo")
</code></pre>
<p>and add the following to background.js in the extension:</p>
<pre><code>browser.runtime.onMessageExternal.addListener(function (
request,
sender,
sendResponse
) {
browser.storage.local.get("foo").then((j) => {
sendResponse(j);
});
return true;
});
</code></pre>
<p>and this to the manifest.json</p>
<pre><code>"externally_connectable": {
"ids": ["*"],
"matches": ["https://example.org/*"]
}
</code></pre>
<p>This way I actually get a value from the extension when running the test. But instead of reading the storage of the extension from the Safari instance started by Selenium, it reads the storage of the extension from the "real" safari instance.</p>
|
<javascript><python><selenium><safari><safari-web-extension>
|
2023-01-15 20:54:08
| 1
| 6,845
|
Oskar Persson
|
75,128,123
| 11,092,636
|
Is there a difference between torch.IntTensor and torch.Tensor
|
<p>When using PyTorch tensors, is there a point in initializing my data like so:</p>
<pre><code>X_tensor: torch.IntTensor = torch.IntTensor(X)
Y_tensor: torch.IntTensor = torch.IntTensor(Y)
</code></pre>
<p>Or should I just do the 'standard':</p>
<pre><code>X_tensor: torch.Tensor = torch.Tensor(X)
Y_tensor: torch.Tensor = torch.Tensor(Y)
</code></pre>
<p>even though I know <code>X: list[list[int]]</code> and <code>Y: list[list[int]]</code></p>
|
<python><types><tensor><torch>
|
2023-01-15 20:16:29
| 2
| 720
|
FluidMechanics Potential Flows
|
75,128,113
| 10,817,571
|
Django GPU model deployed in Nivdia/Cuda container consumes all GPU memory
|
<p>I'm using an Nvidia/Cuda container to host my Django website. It's a new deployment of an old website that used to utilize CPU scored models. The rationale for using the Nivida/Cuda docker is to accelerate scoring speed when requesting an analysis through the Django interface.</p>
<p>The difficulty I am running into is that my docker-compose build is generating GPU memory errors. I hadn't anticipated that Celery / Django would load the models directly into the GPU in advance of an actual scoring call, and that this process would consume so much space. Accordingly, the GPU memory is quickly consumed and my website is not launching appropriately.</p>
<p>My question is whether there are ways that I could manage the GPU memory more effectively. Presently, I am loading the Tensorflow models in my Django settings.py invocation. Since I am using Celery, it is effectively doubling the GPU memory demand. At runtime, most models (but not all) are using Celery-based scoring mechanisms.</p>
<p>Some options I am considering:</p>
<ol>
<li>Ways to eliminate non-scoring components of the Tensorflow model that take up unnecessary space;</li>
<li>Passing an environment variable to identify Celery vs Django to conditionally load models;</li>
<li>Limiting my Tensorflow model complexity to reduce size;</li>
<li>Reducing Celery concurrency (was 10; now set to 1) to limit duplication in GPU memory.</li>
</ol>
<p>Any other possibilities that others have used?</p>
|
<python><django><tensorflow><cuda><celery>
|
2023-01-15 20:14:53
| 1
| 591
|
C. Cooney
|
75,128,068
| 396,014
|
Can't get correct input for DBSCAN clustersing
|
<p>I have a node2vec embedding stored as a .csv file, values are a square symmetric matrix. I have two versions of this, one with node names in the first column and another with node names in the first row. I would like to cluster this data with DBSCAN, but I can't seem to figure out how to get the input right. I tried this:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn import metrics
input_file = "node2vec-labels-on-columns.emb"
# for tab delimited use:
df = pd.read_csv(input_file, header = 0, delimiter = "\t")
# put the original column names in a python list
original_headers = list(df.columns.values)
emb = df.as_matrix()
db = DBSCAN(eps=0.3, min_samples=10).fit(emb)
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
n_noise_ = list(labels).count(-1)
print("Estimated number of clusters: %d" % n_clusters_)
print("Estimated number of noise points: %d" % n_noise_)
</code></pre>
<p>This leads to an error:</p>
<pre><code>dbscan.py:14: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
emb = df.as_matrix()
Traceback (most recent call last):
File "dbscan.py", line 15, in <module>
db = DBSCAN(eps=0.3, min_samples=10).fit(emb)
File "C:\Python36\lib\site-packages\sklearn\cluster\_dbscan.py", line 312, in fit
X = self._validate_data(X, accept_sparse='csr')
File "C:\Python36\lib\site-packages\sklearn\base.py", line 420, in _validate_data
X = check_array(X, **check_params)
File "C:\Python36\lib\site-packages\sklearn\utils\validation.py", line 73, in inner_f
return f(**kwargs)
File "C:\Python36\lib\site-packages\sklearn\utils\validation.py", line 646, in check_array
allow_nan=force_all_finite == 'allow-nan')
File "C:\Python36\lib\site-packages\sklearn\utils\validation.py", line 100, in _assert_all_finite
msg_dtype if msg_dtype is not None else X.dtype)
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
</code></pre>
<p>I've tried other input methods that lead to the same error. All the tutorials I can find use datasets imported from sklearn, so those are of no help in figuring out how to read from a file. Can anyone point me in the right direction?</p>
|
<python><scikit-learn><dbscan>
|
2023-01-15 20:08:38
| 1
| 1,001
|
Steve
|
75,127,956
| 14,391,210
|
ModuleNotFoundError and circular import
|
<p>I can't manage to import some functions in the files. Here is the structure</p>
<pre><code>.
βββ main.py
βββ src
β βββ mypandas.py
β βββ labelling.py
</code></pre>
<p>Labelling uses a class from mypandas, and mypandas uses functions from labelling.</p>
<p>Labelling imports mypandas as</p>
<pre><code>import mypandas as myp
</code></pre>
<p>mypandas imports labelling as</p>
<pre><code>import labelling as l
</code></pre>
<p>In main, I import both as</p>
<pre><code>import src.labelling as sl
import src.mypandas as sp
</code></pre>
<p>Which gives me an error: module not found</p>
<pre><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[1], line 7
5 import src.webscraping as sw
6 import src.viz as sv
----> 7 import src.mypandas as sp
9 import sqlite3
10 import pandas as pd
File ~/Desktop/code/Instagram_bot_classification/src/mypandas.py:5
1 def foo ():
2 print('oui')
----> 5 import labelling
6 import pandas as pd
7 import plotly.express as px
ModuleNotFoundError: No module named 'labelling'
</code></pre>
<p>How can I change the folder structure or code so that I don't get an error but can still do what I want?</p>
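<p>One layout I am considering (untested) is turning <code>src</code> into a proper package with an <code>__init__.py</code> and using relative imports inside it, then always running from the project root:</p>
<pre><code># src/__init__.py   (empty file, marks src as a package)

# src/labelling.py
from . import mypandas as myp

# src/mypandas.py
from . import labelling as l

# main.py (run from the project root)
import src.labelling as sl
import src.mypandas as sp
</code></pre>
<p>although I am not sure whether the circular import between the two modules would still be a problem.</p>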
|
<python><import><modulenotfounderror>
|
2023-01-15 19:47:56
| 0
| 621
|
Marc
|
75,127,817
| 50,959
|
Using or reusing Wagtail StreamField blocks as Fields in other models
|
<p>I have a Wagtail project with two different <code>Page</code> models that use identical gallery blocks, but in different ways. In one model, I want to use the gallery in a list of other freeform blocks; but in the other model there is only one gallery appearing in a specific place on the page. The galleries are pretty complex, with lots of options like <code>autoadvance</code> that I don't want to repeat between models, and which aren't easy to move into a mixin. I can accomplish what I want by adding an extra <code>gallery</code> StreamField to the strict model, like this: <em>(<code>content_panel</code> etc. omitted)</em></p>
<pre class="lang-py prettyprint-override"><code># models.py
from wagtail.models import Page
from wagtail.core.fields import RichTextField, StreamField
from wagtail.images.blocks import ImageChooserBlock
from wagtail.core.blocks import (
RichTextBlock,
StructBlock,
StreamBlock,
)
class ImageWithCaptionBlock(StructBlock):
image = ImageChooserBlock()
caption = RichTextBlock()
class GalleryBlock(StructBlock):
# lots of gallery options like 'autoadvance'
items = StreamBlock([
('image', ImageWithCaptionBlock()),
('video', EmbedBlock()),
])
class FreeFormContentPage(Page):
body = StreamField([
('richtext', RichTextBlock()),
('gallery', GalleryBlock()),
], use_json_field=True)
class StrictContentPage(Page):
body = RichTextField()
gallery = StreamField([
('gallery', GalleryBlock()),
], use_json_field=True, max_num=1)
</code></pre>
<p>This works but it looks a little weird when editing a <code>StrictContentPage</code>!</p>
<p><a href="https://i.sstatic.net/8fSSq.png" rel="noreferrer"><img src="https://i.sstatic.net/8fSSq.png" alt="screenshot of wagtail admin UI showing nested βGalleryβ elements" /></a></p>
<p>What Iβm looking to do is something like this:</p>
<pre class="lang-py prettyprint-override"><code>
class StrictContentPage(Page):
body = RichTextField()
gallery = GalleryBlock()
</code></pre>
<p>(or the equivalent via tweaking the admin)</p>
<p>… so that the admin experience looks like:</p>
<p><a href="https://i.sstatic.net/mngBQ.png" rel="noreferrer"><img src="https://i.sstatic.net/mngBQ.png" alt="screenshot of Wagtail admin UI showing one 'Gallery' element with an imageblock" /></a></p>
<p>Thanks in advance for any pointers!</p>
|
<python><wagtail><wagtail-streamfield>
|
2023-01-15 19:26:22
| 1
| 514
|
axoplasm
|
75,127,731
| 3,770,251
|
Given specific pixel coordinates, get the color at that point in Python
|
<p>Let's say I have an 800x600 image:
<a href="https://i.sstatic.net/4FmR0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4FmR0.jpg" alt="enter image description here" /></a></p>
<p>and I have an HTML map element where it defines multiple poly areas(in this case only one is defined which is Tenessee):</p>
<pre><code><img src="usa-colored-regions-map.jpg" usemap="#image-map">
<map name="image-map">
<area target="" alt="" title="" href="" coords="492,379,521,376,562,377,568,363,583,355,589,347,595,340,570,343,535,348,515,349,507,355,494,355,491,370" shape="poly">
</map>
</code></pre>
<p>I want to use Python to detect the color of each polygon. I want the dominant color only, since each state has white text that might skew the results.</p>
<p>How can I achieve this?</p>
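<p>For illustration, one possible sketch with OpenCV/NumPy (assuming the image file above and the coordinates from the <code>area</code> tag; white label pixels are filtered out before taking the median):</p>
<pre><code>import cv2
import numpy as np

img = cv2.imread("usa-colored-regions-map.jpg")

# coords copied from the area tag, as flat x,y pairs
coords = [492,379,521,376,562,377,568,363,583,355,589,347,595,340,
          570,343,535,348,515,349,507,355,494,355,491,370]
poly = np.array(coords, dtype=np.int32).reshape(-1, 2)

# rasterise the polygon into a mask and collect the pixels inside it
mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [poly], 255)
pixels = img[mask == 255]

# drop near-white pixels (the state labels), then take the median as the dominant colour
pixels = pixels[~np.all(pixels > 230, axis=1)]
print(np.median(pixels, axis=0))  # dominant colour in BGR order
</code></pre>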
|
<python><numpy><opencv><coordinates><object-detection>
|
2023-01-15 19:12:24
| 2
| 405
|
MOHAMMAD RASIM
|
75,127,383
| 1,449,982
|
How to remotely debug Python with IntelliJ?
|
<p>Is there a way for remotely debugging Python3 with IntelliJ? I couldn't find any options for it. With VS Code, it's as simple as having this file:</p>
<h4>launch.json</h4>
<pre><code>
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Remote Attach",
"type": "python",
"request": "attach",
"connect": {
"host": "172.18.0.5",
"port": 5678
},
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "."
}
],
"justMyCode": true
}
]
}
</code></pre>
<p>and everything works like a (py)charm!</p>
<p>How can I do it with IntelliJ?</p>
<p>I examined all of JetBrains' tutorials, but none seemed to fit what I was searching for.</p>
|
<python><python-3.x><debugging><intellij-idea>
|
2023-01-15 18:20:12
| 1
| 8,652
|
Dima
|
75,127,370
| 12,248,328
|
How to get list of parent methods in VSCode?
|
<p>In PyCharm I can hit Ctrl/Cmd + O and it'll give me a modal with all methods I can override. What's the equivalent in VSCode?</p>
|
<python><visual-studio-code><vscode-python>
|
2023-01-15 18:18:29
| 0
| 423
|
Epic Programmer
|
75,127,347
| 15,537,675
|
Group pandas DataFrame on column and sum it while retaining the number of summed observations
|
<p>I have a pandas Dataframe that looks like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'id':[1, 1, 2, 2], 'comp': [-0.10,0.20,-0.10, 0.4], 'word': ['boy','girl','man', 'woman']})
</code></pre>
<p>I would like to group the dataframe on <code>id</code>, and calculate the sum of corresponding <code>comp</code> as well as get a new column called <code>n_obs</code> that tracks how many rows(ids) were summed up.</p>
<p>I tried using <code>df.groupby('id').sum()</code> but this is not quite producing the results that I want.</p>
<p>I'd like an output on the below form:</p>
<pre><code>id comp n_obs
1 0.1 2
2 0.3 2
</code></pre>
<p>Any suggestions on how I can do this?</p>
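<p>For reference, a sketch of one way this could be expressed with named aggregation:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'id': [1, 1, 2, 2],
                   'comp': [-0.10, 0.20, -0.10, 0.4],
                   'word': ['boy', 'girl', 'man', 'woman']})

# sum 'comp' per id and count how many rows contributed to each sum
out = (
    df.groupby('id', as_index=False)
      .agg(comp=('comp', 'sum'), n_obs=('comp', 'size'))
)
print(out)
#    id  comp  n_obs
# 0   1   0.1      2
# 1   2   0.3      2
</code></pre>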
|
<python><pandas><dataframe><group-by>
|
2023-01-15 18:16:03
| 1
| 472
|
OLGJ
|
75,127,295
| 12,851,199
|
How to solve a problem while trying install a pipenv with python 3.11?
|
<p>Preconditions:</p>
<ul>
<li>python 3.11</li>
<li>pipenv 2022.12.19</li>
</ul>
<p>Problem:
I want to create a new pipenv virtual environment for my project with command:</p>
<pre><code>pipenv --python 3.11
</code></pre>
<p>or</p>
<pre><code>pipenv --python /usr/bin/python3.11
</code></pre>
<p>I have the next error:</p>
<pre><code>Loading .env environment variables...
Creating a virtualenv for this project...
Pipfile: /home/dev/geotek-dev/Pipfile
Using /usr/bin/python3.11 (3.11.1) to create virtualenv...
Creating virtual environment...created virtual environment CPython3.11.1.final.0-64 in 227ms
creator CPython3Posix(dest=/root/.local/share/virtualenvs/geotek-dev-Rxuh4rdd, clear=False, global=False)
seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, pkg_resources=latest, via=copy, app_data_dir=/root/.local/share/virtualenv/seed-app-data/v1.0.1.debian.1)
activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
Successfully created virtual environment!
Traceback (most recent call last):
File "/usr/local/bin/pipenv", line 8, in <module>
sys.exit(cli())
^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/vendor/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/cli/options.py", line 57, in main
return super().main(*args, **kwargs, windows_expand_args=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/vendor/click/core.py", line 1053, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/vendor/click/core.py", line 1637, in invoke
super().invoke(ctx)
File "/usr/local/lib/python3.11/dist-packages/pipenv/vendor/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/vendor/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/vendor/click/decorators.py", line 84, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/vendor/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/vendor/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/cli/command.py", line 210, in cli
ensure_project(
File "/usr/local/lib/python3.11/dist-packages/pipenv/core.py", line 541, in ensure_project
ensure_virtualenv(
File "/usr/local/lib/python3.11/dist-packages/pipenv/core.py", line 474, in ensure_virtualenv
do_create_virtualenv(
File "/usr/local/lib/python3.11/dist-packages/pipenv/core.py", line 1060, in do_create_virtualenv
project._environment = Environment(
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/environment.py", line 79, in __init__
self._base_paths = self.get_paths()
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/environment.py", line 383, in get_paths
c = subprocess_run(command)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/pipenv/utils/processes.py", line 75, in subprocess_run
return subprocess.run(
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 1024, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.11/subprocess.py", line 1901, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/root/.local/share/virtualenvs/geotek-dev-Rxuh4rdd/bin/python'
</code></pre>
<p>I have noticed that the folder structure is different than in previous versions of Python.</p>
<p>In Python 3.9 I have the folders bin, lib and pyvenv.cfg.
In 3.11 I have lib, local and pyvenv.cfg.</p>
<p>bin/python is inside the local folder, not at the root level.
What should I configure so that pipenv works normally?</p>
|
<python><pipenv><python-3.11>
|
2023-01-15 18:09:37
| 0
| 438
|
Vadim Beglov
|
75,126,908
| 9,494,140
|
How to change shape color using `simplekml` in Python
|
<p>I'm using the <code>simplekml</code> library to generate a KML file with data sent from the <code>drawing manager</code> on Google Maps using JavaScript. Everything goes fine except the color of the shape. The color is sent in this format: <code>#ec0909</code>, and here is how I am trying to add it to the shape:</p>
<pre class="lang-py prettyprint-override"><code>ls = kml.newlinestring(name="Polyline")
ls.coords = [(latlng['lng'], latlng['lat']) for latlng in shape['coordinates']]
ls.description = shape['info_box']
print(f"THE COLOR IS {color_code}") #returns #ec0909
ls.style.polystyle.color = color_code
</code></pre>
<p>As you can see, the color should be red, but it is shown as a blue/red shaded color.</p>
<p><a href="https://i.sstatic.net/UnqBY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UnqBY.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Yj8Mo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yj8Mo.png" alt="enter image description here" /></a></p>
|
<python><kml><simplekml>
|
2023-01-15 17:14:29
| 2
| 4,483
|
Ahmed Wagdi
|
75,126,772
| 1,003,190
|
yolo v8: does segment contain point?
|
<p>I'm using yolo v8 to detect subjects in pictures. It's working well, and can create quite precise masks over subjects.</p>
<pre class="lang-py prettyprint-override"><code>from ultralytics import YOLO
model = YOLO('yolov8x-seg.pt')
for output in model('image.jpg', return_outputs=True):
for segment in output['segment']:
print(segment)
</code></pre>
<p>The code above works, and generates a series of "segments", which are a list of points that define the shape of subjects on my image. That shape is not convex (for example horses).</p>
<p>I need to figure out if a random coordinate on the image falls within these segments, and I'm not sure how to do it.</p>
<p>My first approach was to build an image mask using PIL. That roughly worked, but it doesn't always work, depending on the shape of the segments. I also thought about using <code>shapely</code>, but it has restrictions on the Polygon classes, which I think will be a problem in some cases.</p>
<p>In any case, this really feels like a problem that could easily be solved with the tools I'm already using (yolo, pytorch, numpy...), but to be honest I'm too new to all this to figure out how to do it properly.</p>
<p>Any suggestion is appreciated :)</p>
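<p>For illustration, a plain point-in-polygon test (a sketch, assuming each segment can be turned into an (N, 2) array of x/y vertices) would handle non-convex shapes too:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from matplotlib.path import Path

# made-up segment: an (N, 2) array of polygon vertices in pixel coordinates
segment = np.array([[10, 10], [200, 15], [180, 120], [30, 140]], dtype=float)

point = (100, 60)
print(Path(segment).contains_point(point))  # True/False
</code></pre>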
|
<python><pytorch><computer-vision><geometry><yolo>
|
2023-01-15 16:53:36
| 1
| 3,925
|
aspyct
|
75,126,685
| 9,715,163
|
Why doesn't numpy allow broadcasting when a dimension of the first array is a multiple of the corresponding dimension of the second array?
|
<p>Two dimensions are compatible when</p>
<ul>
<li>they are equal, or</li>
<li>one of them is 1</li>
</ul>
<p>Can't they also be made compatible if one of them is a multiple of another?</p>
<p>Consider the following example when shape of array A is (2,3) and shape of array B is (4,3):</p>
<p><code>A = [[1,2,3], [4,5,6]] B = [[11,12,13],[14,15,16],[17,18,19],[20,21,22]]</code></p>
<p>For achieving <code>A + B</code>, we broadcast A to be of shape (4,3) by repeating the rows 2 (i.e. 4/2) times.</p>
<p>So <code>A = [[1,2,3],[4,5,6],[1,2,3],[4,5,6]] </code> and <code>A + B = [[12,14,16],[18,20,22],[18,20,22],[24,26,28]]</code></p>
<p>Is the problem with this approach that it is not useful when dealing with real data ?</p>
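<p>(For reference, the repetition described above can of course be done explicitly, e.g. with <code>np.tile</code>; the question is only about why broadcasting itself does not do it:)</p>
<pre><code>import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[11, 12, 13], [14, 15, 16], [17, 18, 19], [20, 21, 22]])

# repeat A's rows 4/2 = 2 times, then add
A_rep = np.tile(A, (B.shape[0] // A.shape[0], 1))
print(A_rep + B)
# [[12 14 16]
#  [18 20 22]
#  [18 20 22]
#  [24 26 28]]
</code></pre>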
|
<python><numpy>
|
2023-01-15 16:43:19
| 0
| 521
|
mangesh
|
75,126,565
| 11,170,350
|
Save ImageGrid without plotting it in a Jupyter notebook
|
<p>I am using the following function to save an ImageGrid, but I don't want to show it in the Jupyter notebook. How can I do that?</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
def save_img(hists,path):
fig = plt.figure(figsize=(8, 8))
grid = ImageGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(16, 4), # creates a 16x4 grid of axes
axes_pad=0.1, # pad between axes in inch.
)
for ax, im in zip(grid, hists):
# Iterating over the grid returns the Axes.
ax.set_xticks([]),ax.set_yticks([])
ax.margins(x=0,y=0)
ax.imshow(im)
plt.savefig(f'{path}.jpg',bbox_inches='tight')
</code></pre>
|
<python><matplotlib><jupyter-notebook>
|
2023-01-15 16:24:26
| 1
| 2,979
|
Talha Anwar
|
75,126,554
| 4,589,608
|
Use lark to analyze reST markup language like sections
|
<p>I would like to define a basic grammar so as to start working with <code>lark</code>. Here is my M(not)WE.</p>
<pre class="lang-py prettyprint-override"><code>from lark import Lark
GRAMMAR = r"""
?start: _NL* (day_heading)*
day_heading : "==" _NL day_nb _NL "==" _NL+ (paragraph _NL)*
day_nb : /\d{2}/
paragraph : /[^\n={2}]+/ (_NL+ paragraph)*
_NL : /(\r?\n[\t ]*)+/
"""
parser = Lark(GRAMMAR)
tree = parser.parse("""
==
12
==
Bla, bla
Bli, Bli
Blu, Blu
==
10
==
Blo, blo
""")
print(tree.pretty())
</code></pre>
<p>This prints :</p>
<pre><code>start
day_heading
day_nb 12
paragraph
Bla, bla
paragraph
Bli, Bli
paragraph Blu, Blu
day_heading
day_nb 10
paragraph Blo, blo
</code></pre>
<p>The tree I want is the following one.</p>
<pre><code>start
day_heading
day_nb 12
paragraph
line Bla, bla
line Bli, Bli
line Blu, Blu
day_heading
day_nb 10
paragraph
line Blo, blo
</code></pre>
<p>How can I modify my EBNF?</p>
|
<python><restructuredtext><lark>
|
2023-01-15 16:23:01
| 1
| 1,504
|
projetmbc
|
75,126,461
| 465,159
|
Dockerfile and multiple images (for github codespace)
|
<p>I am trying to set up a <code>.devcontainer.json</code> file to use in Github Codespace. What I want is to have a container which has the basic python image, plus the bazel image so that I can use bazel without having to install it any time I create a new workspace.</p>
<p>How can I achieve this?</p>
<p><strong>My confused understanding of the situation</strong></p>
<p>From what I understand, github codespace will look up into the <code>.devcontainer.json</code>, follow the instructions to build a container, and this container will be used for the virtual machine which is created for a new workspace.</p>
<p>Question 1: Already here I am confused, because the default python template only specifies <code>"image": "mcr.microsoft.com/devcontainers/python:0-3.11"</code> - but of course my VM is running a full OS, not just python. Does it mean that by default it downloads e.g. ubuntu and <em>then</em> adds the python image to the container?</p>
<p>Anyway, I need to add bazel to this. IIUC, the best way would be to use <a href="https://containers.dev/implementors/features/" rel="nofollow noreferrer">features</a>, which as far as I understand are additional images to be added to the main image.
However, the bazel feature appears to be deprecated and not available at the moment.</p>
<p>So I probably need to <a href="https://containers.dev/guide/dockerfile" rel="nofollow noreferrer">use a Dockerfile</a> to set up my devcontainer. I presume this time I should start from the ubuntu base image, not from the python3.11 image.</p>
<p>Regardless, how do I install bazel (and buildifier) in a Dockerfile then? I could in theory follow the bazel installation instructions (which at the moment involve downloading and running the bazel-6.0.0-installer-linux-x86_64.sh script, setting up env vars for bazel and buildifier, etc.).</p>
<p>This sounds like a pain. On the other hand, there is an official bazel image available at gcr.io/bazel-public/bazel, so ideally I would just use that one. Is there a way of simply adding that docker image to my container? I have found suggestions to use docker compose, but frankly at this point I don't know what's going on.</p>
<p>Can someone recommend the easiest way to install bazel / buildifier / fix system paths from a basic ubuntu image to use as a starting point for github codespace development?</p>
<p>Thank you!</p>
|
<python><docker><bazel><codespaces><github-codespaces>
|
2023-01-15 16:08:17
| 1
| 5,474
|
Ant
|
75,126,395
| 17,227,709
|
django Count aggregate is not working as I intended
|
<p>It's a code that reproduces the problem I experienced.</p>
<p><strong>models.py</strong></p>
<pre><code>from django.db import models
class User(models.Model):
pass
class Group(models.Model):
users = models.ManyToManyField(
User,
related_name='groups',
)
class Notice(models.Model):
groups = models.ManyToManyField(
Group,
related_name='notices',
)
</code></pre>
<p><strong>tests.py</strong></p>
<pre><code>from django.test import TestCase
from tests.models import User, Group, Notice
from django.db.models.aggregates import Count
def print_user_count(group):
count = group.users.count()
annotate_count = (
Group.objects
.annotate(
user_count=Count('users'),
notice_count=Count('notices')
)
.get(id=group.id)
.user_count
)
print('count: %d, annotate_count: %d' % (count, annotate_count))
class ModelTestCase(TestCase):
def test_(self):
user1 = User.objects.create()
group1 = Group.objects.create()
group1.users.set([user1])
print_user_count(group1) # count: 1, annotate_count: 1
for _ in range(5):
notice = Notice.objects.create()
notice.groups.set([group1])
print_user_count(group1) # count: 1, annotate_count: 5
</code></pre>
<p>I didn't add users to the group.
But the value obtained using annotate has increased from 1 to 5.
<br>
Is this a bug? Or did I do something wrong?</p>
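<p>For comparison, here is a variant of the annotation using <code>distinct=True</code> (a sketch; if the inflation comes from the two many-to-many joins multiplying rows, the two variants should behave differently):</p>
<pre><code>from django.db.models import Count

annotate_count = (
    Group.objects
    .annotate(
        user_count=Count('users', distinct=True),
        notice_count=Count('notices', distinct=True),
    )
    .get(id=group1.id)
    .user_count
)
</code></pre>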
|
<python><django><orm><aggregate><annotate>
|
2023-01-15 15:58:21
| 1
| 343
|
gypark
|
75,125,728
| 10,844,937
|
Cannot get all the rows but can get a single row by using iloc?
|
<p>Today I have faced an issue while using <code>pandas</code>. The problem is very simple: <code>df.iloc[:][0]</code> gives me the following error.</p>
<pre><code> File "C:\workspaces\venv\lib\site-packages\pandas\core\frame.py", line 3805, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\workspaces\venv\lib\site-packages\pandas\core\indexes\base.py", line 3805, in get_loc
raise KeyError(key) from err
KeyError: 0
</code></pre>
<p>To check which rows cannot be accessed using <code>iloc</code>, I tried the following:</p>
<pre><code>for i in range(content_df.shape[0]):
try:
df.iloc[i][0]
except:
print(i)
</code></pre>
<p>Here nothing prints at all!</p>
<p>One thing also surprises me a lot. I would like to use <code>df.iloc[:][2:]</code> to remove the first two columns. However, it removes the first <strong>two rows</strong> instead.</p>
<p>Anyone knows why? Thanks in advance.</p>
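<p>For reference, a small sketch of the purely positional forms (if the problem is that <code>df.iloc[:][0]</code> first returns the whole frame and then does a <em>label</em> lookup for a column named <code>0</code>, these would be the alternatives):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})

print(df.iloc[:, 0])   # first column by position
print(df.iloc[0, :])   # first row by position
print(df.iloc[:, 2:])  # drop the first two columns by position
</code></pre>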
|
<python><pandas><dataframe>
|
2023-01-15 14:17:20
| 2
| 783
|
haojie
|
75,125,715
| 15,724,084
|
scrapy python cannot parse with condition
|
<p>With some assistance I have written a script in Scrapy which should first read an Excel file, take a plate number, and put it into the <code>plate_num_xlsx</code> variable. For now, so that it is not empty, I have assigned a value to it. The logic is to take the values from column A of the Excel file one by one with pandas, then invoke the parse() function, and inside parse():</p>
<pre><code> plate = row.css('a::text').get()
     price = row.css('p::text').get()
     if plate_num_xlsx==plate.replace(" ",""):
</code></pre>
<p>If the value taken from the file equals the value parsed from the website, it should print it.</p>
<pre><code>import scrapy
from scrapy.crawler import CrawlerProcess
import pandas as pd
plate_num_xlsx = 'LA55ERR'
class plateScraper(scrapy.Spider):
name = 'scrapePlate'
allowed_domains = ['dvlaregistrations.direct.gov.uk']
start_urls = [f"https://dvlaregistrations.dvla.gov.uk/search/results.html?search={plate_num_xlsx}&action=index&pricefrom=0&priceto=&prefixmatches=&currentmatches=&limitprefix=&limitcurrent=&limitauction=&searched=true&openoption=&language=en&prefix2=Search&super=&super_pricefrom=&super_priceto="]
def read_xlsx(self):
df=pd.read_excel('data.xlsx')
columnA_values=df['PLATES']
for row in columnA_values:
plate_num_xlsx=row
yield scrapy.Request(start_urls.format(row))
def parse(self, response):
for row in response.css('div.resultsstrip'):
plate = row.css('a::text').get()
price = row.css('p::text').get()
if plate_num_xlsx==plate.replace(" ",""):
print(plate.replace(" ", ""))
yield {"plate": plate.strip(), "price": price.strip()}
process = CrawlerProcess()
process.crawl(plateScraper)
process.start()
</code></pre>
<p>The problem with Scrapy is that it does not give you a traceback error. In Selenium or other scripts, if the debug output was in red, I could easily understand what the issue is. Here it only writes the output, all in red, and I cannot tell on which line the problem is.</p>
<pre><code>C:\Users\Admin\AppData\Local\Programs\Python\Python310\python.exe C:/pythonPro/w_crawl/SimonDarak/scrpy_00.py
2023-01-15 18:07:45 [scrapy.utils.log] INFO: Scrapy 2.7.1 started (bot: scrapybot)
2023-01-15 18:07:45 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.9.12, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.1, Twisted 22.10.0, Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)], pyOpenSSL 23.0.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 39.0.0, Platform Windows-10-10.0.19044-SP0
2023-01-15 18:07:45 [scrapy.crawler] INFO: Overridden settings:
{}
2023-01-15 18:07:45 [py.warnings] WARNING: C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\utils\request.py:231: ScrapyDeprecationWarning: '2.6' is a deprecated value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting.
It is also the default value. In other words, it is normal to get this warning if you have not defined a value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting. This is so for backward compatibility reasons, but it will change in a future version of Scrapy.
See the documentation of the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting for information on how to handle this deprecation.
return cls(crawler)
2023-01-15 18:07:45 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2023-01-15 18:07:45 [scrapy.extensions.telnet] INFO: Telnet Password: adce4fc71429f9ef
2023-01-15 18:07:45 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2023-01-15 18:07:46 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2023-01-15 18:07:46 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2023-01-15 18:07:46 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2023-01-15 18:07:46 [scrapy.core.engine] INFO: Spider opened
2023-01-15 18:07:46 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2023-01-15 18:07:46 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2023-01-15 18:07:47 [filelock] DEBUG: Attempting to acquire lock 2802896714000 on C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2023-01-15 18:07:47 [filelock] DEBUG: Lock 2802896714000 acquired on C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2023-01-15 18:07:47 [filelock] DEBUG: Attempting to release lock 2802896714000 on C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2023-01-15 18:07:47 [filelock] DEBUG: Lock 2802896714000 released on C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2023-01-15 18:07:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://dvlaregistrations.dvla.gov.uk/search/results.html?search=LA55ERR&action=index&pricefrom=0&priceto=&prefixmatches=&currentmatches=&limitprefix=&limitcurrent=&limitauction=&searched=true&openoption=&language=en&prefix2=Search&super=&super_pricefrom=&super_priceto=> (referer: None)
2023-01-15 18:07:47 [scrapy.core.engine] INFO: Closing spider (finished)
2023-01-15 18:07:47 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 461,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 11458,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'elapsed_time_seconds': 0.709073,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2023, 1, 15, 14, 7, 47, 140805),
'httpcompression/response_bytes': 75657,
'httpcompression/response_count': 1,
'log_count/DEBUG': 6,
'log_count/INFO': 10,
'log_count/WARNING': 1,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2023, 1, 15, 14, 7, 46, 431732)}
2023-01-15 18:07:47 [scrapy.core.engine] INFO: Spider closed (finished)
Process finished with exit code 0
</code></pre>
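<p>For reference, a rough sketch of how the per-row flow might be wired up with <code>start_requests</code> and <code>cb_kwargs</code> (assuming <code>data.xlsx</code> with a <code>PLATES</code> column, and a shortened query string):</p>
<pre><code>import scrapy
import pandas as pd
from scrapy.crawler import CrawlerProcess


class PlateSpider(scrapy.Spider):
    name = "scrapePlate"
    allowed_domains = ["dvlaregistrations.dvla.gov.uk"]

    def start_requests(self):
        df = pd.read_excel("data.xlsx")
        for plate_num in df["PLATES"]:
            url = ("https://dvlaregistrations.dvla.gov.uk/search/results.html"
                   f"?search={plate_num}&action=index")
            # pass the current plate into the callback alongside the response
            yield scrapy.Request(url, cb_kwargs={"plate_num": plate_num})

    def parse(self, response, plate_num):
        for row in response.css("div.resultsstrip"):
            plate = row.css("a::text").get()
            price = row.css("p::text").get()
            if plate and plate.replace(" ", "") == plate_num:
                yield {"plate": plate.strip(), "price": price.strip()}


process = CrawlerProcess()
process.crawl(PlateSpider)
process.start()
</code></pre>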
|
<python><scrapy>
|
2023-01-15 14:15:54
| 1
| 741
|
xlmaster
|
75,125,519
| 12,684,429
|
Monthly averages to daily granularity
|
<p>I have a dataframe of monthly averages which looks like the following:</p>
<pre><code> A B C D E
1 3 21 3 22 3
2 4 32 3 24 0
3 5 1 12 3 12
.
.
11 5 4 9 85 85 3
12 43 4 48 84 4
</code></pre>
<p>I'm looking to convert this data to a daily timeframe so that the dataframe would be a ten-year timeseries and each value would correspond to its monthly value. For example:</p>
<pre><code> A B C D E
01/01/2010 3 21 3 22 3
02/01/2010 3 21 3 22 3
.
.
31/01/2010 3 21 3 22 3
.
.
.
30/12/2020 43 4 48 84 4
31/12/2020 43 4 48 84 4
</code></pre>
<p>Any help much appreciated!</p>
<p>Thanks</p>
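<p>For reference, a sketch of one possible approach (made-up values and only two columns), selecting each day's month row over the ten-year range:</p>
<pre><code>import numpy as np
import pandas as pd

# 12 monthly averages, indexed 1..12 (made-up values, fewer columns)
monthly = pd.DataFrame({"A": np.arange(1, 13), "B": np.arange(21, 33)},
                       index=range(1, 13))

# daily index over the ten-year span, then pick each day's month row
days = pd.date_range("2010-01-01", "2020-12-31", freq="D")
daily = monthly.loc[days.month].set_index(days)
print(daily.head())
print(daily.tail())
</code></pre>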
|
<python><pandas><indexing><time-series><resample>
|
2023-01-15 13:47:02
| 2
| 443
|
spcol
|
75,125,468
| 6,054,404
|
Pandas concat() in Python 3.9 in a def returning the error: No objects to concatenate, while inline scripting there is no error
|
<p>Using Python 3.9 I have several pandas dataframes, each stored within the <code>self.pointcloud</code> dict.
This looks something like the following, named after their date of capture:</p>
<pre><code>self.pointcloud['20180712']['data']
self.pointcloud['20180713']['data']
self.pointcloud['20180714']['data']
</code></pre>
<p>Each <code>['data']</code> dict is a pandas dataframe all containing the same columns.</p>
<p>I'm running into an issue when I try to concat them into a single dataframe.</p>
<p>I can easily get all the dataframes in a list:</p>
<pre><code>def get_data(self, tag):
return [self.pointcloud[pc]['data'] for pc in self.pointcloud if self.pointcloud[pc]['data']['tag'].unique() == [tag]]
</code></pre>
<p>This uses list comprehension, it filters the data based on the <code>tag</code> value relative to the <code>tag</code> column, but that's not an issue.</p>
<p>So now I have all the data I want to merge them into one single dataframe, if I run my code in debug mode the following gives me no issue and runs as expected:</p>
<pre><code>tag = 'a value'
df_list = self.get_data(tag)
new_df = pd.concat(df_list)
</code></pre>
<p>However, when I add it to a def I get the error: <code>ValueError: No objects to concatenate</code></p>
<pre><code>def merge_pointclouds(self, tag):
df_list = self.get_data(tag)
return pd.concat(df_list)
self.merge_pointclouds('a value')
</code></pre>
<p>As I stated, in debug mode it works, even within the <code>merge_pointclouds</code> def. Is there anything obvious that I'm missing?</p>
<p>The exact error from Pycharm:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2020.2.2\plugins\python-ce\helpers\pydev\pydevd.py", line 1448, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm Community Edition 2020.2.2\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/.../PycharmProjects/PCPro2023_0_3_2py39-64/pc.py", line 927, in <module>
debug(debugging=debugging)
File "C:/.../PycharmProjects/PCPro2023_0_3_2py39-64/pc.py", line 68, in debug
pc.set_distance_between()
File "C:/.../PycharmProjects/PCPro2023_0_3_2py39-64/pc.py", line 388, in set_distance_between
data_top = self.merge_pointclouds(tag='Top')
File "C:/.../PycharmProjects/PCPro2023_0_3_2py39-64/pc.py", line 221, in merge_pointclouds
res = pd.concat(df_list)
File "C:\...\PycharmProjects\PCPro2023_0_3_py39-64\venv\lib\site-packages\pandas\util\_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
File "C:\...\PycharmProjects\PCPro2023_0_3_py39-64\venv\lib\site-packages\pandas\core\reshape\concat.py", line 368, in concat
op = _Concatenator(
File "C:\...\PycharmProjects\PCPro2023_0_3_py39-64\venv\lib\site-packages\pandas\core\reshape\concat.py", line 425, in __init__
raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate
</code></pre>
|
<python><pandas><concatenation>
|
2023-01-15 13:38:41
| 0
| 1,993
|
Spatial Digger
|
75,125,364
| 7,347,925
|
How to use respective colorbar when using facet_col with plotly.express?
|
<p>I have a data which has three variables whose magnitudes are different.</p>
<p>I'm trying to apply <code>animation_frame</code> and <code>facet_col</code> to make them animate at the same time.</p>
<p>Here's the code:</p>
<pre><code>import plotly.express as px
import xarray as xr
# Load xarray from dataset included in the xarray tutorial
ds = xr.tutorial.open_dataset('eraint_uvz').sel(level=500)
# convert Dataset to DataArray for animation
data = ds.to_array().transpose('month', ...)
# fix the bug
# TypeError: %d format: a real number is required, not str
plot_data = data.assign_coords({'variable': range(len(data['variable']))})
fig = px.imshow(plot_data, animation_frame='month', facet_col='variable', color_continuous_scale='viridis')
# set variable back to string
# https://community.plotly.com/t/cant-set-strings-as-facet-col-in-px-imshow/60904
for k in range(len(data['variable'])):
fig.layout.annotations[k].update(text = data['variable'].values[k])
fig.show()
</code></pre>
<p>By default, they share the same colorbar like below.
Is it possible to make three colorbars (even different cmaps) with manual zmin/zmax values?</p>
<p><a href="https://i.sstatic.net/eSA9s.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eSA9s.png" alt="img" /></a></p>
|
<python><plotly><imshow>
|
2023-01-15 13:22:16
| 1
| 1,039
|
zxdawn
|
75,125,300
| 3,672,716
|
SQLAlchemy connect through specific network interface
|
<p>I have a client computer (now Windows 10, soon to be Ubuntu Server) connected through a LAN Ethernet interface to a remote DB server using SQLAlchemy. The same client holds a Wifi network interface connected to Internet in order to provide external access to the client. Within the client there is a Python script that runs reading the remote DB Server and updating a local DB with records of interest.</p>
<p>Everything works fine while the Wifi interface is disconnected, and SQLAlchemy's <code>engine.connect()</code> throws a connection error when the Wifi becomes active.</p>
<p>The question is how to force the connection to be done through the Ethernet interface for the following commands:</p>
<pre><code>engine = create_engine(url)
engine.connect()
</code></pre>
<p>I am expecting some sort of default routing configuration for the SQLAlchemy engine, or a workaround that does not involve SQLAlchemy.</p>
|
<python><linux><windows><network-programming><sqlalchemy>
|
2023-01-15 13:11:48
| 0
| 394
|
Alejandro QA
|
75,125,292
| 9,879,869
|
Plot existing covariance dataframe
|
<p>I have computed a covariance of 26 inputs from another software. I have an existing table of the results. See image below:
<a href="https://i.sstatic.net/ZSAYI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZSAYI.png" alt="enter image description here" /></a></p>
<p>What I want to do is enter the table as a pandas dataframe and plot the matrix. I have seen the thread here: <a href="https://stackoverflow.com/questions/29432629/plot-correlation-matrix-using-pandas">Plot correlation matrix using pandas</a>. However, the aforementioned example, computed the covariance first and plotted the 'covariance' object. In my case, I want to plot the dataframe object to look like the covariance matrix in the example.</p>
<p>Link to data: <a href="https://docs.google.com/spreadsheets/d/10RqmlH7T73kl_tgFyQJ0crhDwuazsQHyQEC5tbFPak8/edit?usp=sharing" rel="nofollow noreferrer">HERE</a>.</p>
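<p>For illustration, a sketch that plots an already-computed matrix directly (assuming the table is loaded with the first column as the index; the file name is a placeholder):</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

cov_df = pd.read_excel("covariance.xlsx", index_col=0)

fig, ax = plt.subplots(figsize=(8, 8))
im = ax.matshow(cov_df.values, cmap="coolwarm")
fig.colorbar(im)
ax.set_xticks(range(len(cov_df.columns)))
ax.set_xticklabels(cov_df.columns, rotation=90)
ax.set_yticks(range(len(cov_df.index)))
ax.set_yticklabels(cov_df.index)
plt.show()
</code></pre>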
|
<python><pandas><plot><covariance-matrix>
|
2023-01-15 13:11:03
| 2
| 1,572
|
Nikko
|
75,125,236
| 18,937,024
|
Sqlalchemy does not return an model object with `session.scalars` on mapped objects
|
<pre class="lang-py prettyprint-override"><code>from attrs import define
from sqlalchemy.orm import (
registry,
)
from sqlalchemy.sql import (
schema,
sqltypes,
)
@define(slots=False)
class Cat():
id: int
name: str
mapper_registry = registry()
cat_table = schema.Table(
"cat",
mapper_registry.metadata,
schema.Column("id", sqltypes.Integer, primary_key=True),
schema.Column("name", sqltypes.String, nullable=False),
)
mapper_registry.map_imperatively(Cat, cat_table)
async def main() -> None:
...
model = Cat(1, "Meow")
async with session_maker() as session:
session.add(model) # Ok
await session.commit()
result = await session.scalar(cat_table.select())
print(result) # 1
print(type(result)) # int
</code></pre>
<p><code>session.scalar</code> returns <code>int</code>, not <code>Cat</code></p>
<p>It looks like it worked with query:
<a href="https://github.com/cosmicpython/code/blob/master/src/allocation/adapters/repository.py#L48" rel="nofollow noreferrer">https://github.com/cosmicpython/code/blob/master/src/allocation/adapters/repository.py#L48</a></p>
<p>I tried to get the object through <code>scalars</code>, but I need to construct it myself</p>
<p>Am I doing something wrong or is it supposed to be like this?</p>
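<p>For comparison, selecting against the mapped class rather than the raw table is the variant I would expect to hand back entities (a sketch, continuing the setup above):</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import select

# selecting the mapped class (not the Table) should make the ORM
# return Cat instances rather than raw column values
result = await session.scalar(select(Cat).where(Cat.id == 1))
print(type(result))
</code></pre>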
|
<python><asynchronous><sqlalchemy><python-asyncio>
|
2023-01-15 13:00:10
| 1
| 341
|
LEv145
|
75,125,197
| 9,182,743
|
Remove/handle the experimental pd.NA type in pandas?
|
<p>I am unsure how to remove/handle the experimental pd.NA data type; it is causing me problems. For example, in the code snippet below, it functions differently with pd.NA:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
print (pd.__version__)
print ("with pd.nan --> working")
df = pd.DataFrame({"a": [1,2,3,4,np.nan], 'b': [2,3,4,5,np.nan], 'c': [10, 1,2,3,np.nan]})
Conditions_group = [
(df.values == 1) | (df.values == 2),
(df.values == 3) | (df.values == 4)
]
choices = ["1&2", "3&4"]
print(Conditions_group)
new_df = np.select(Conditions_group, choices, default=np.nan)
print(new_df)
print ("with pd.NA -> not workign ")
df.replace(np.nan, pd.NA, inplace=True)
Conditions_group = [
(df.values == 1) | (df.values == 2),
(df.values == 3) | (df.values == 4)
]
print(Conditions_group)
new_df = np.select(Conditions_group, choices, default=np.nan)
print(new_df)
----- OUT ----
1.4.4
with pd.nan --> working
[array([[ True, True, False],
[ True, False, True],
[False, False, True],
[False, False, False],
[False, False, False]]), array([[False, False, False],
[False, True, False],
[ True, True, False],
[ True, False, True],
[False, False, False]])]
[['1&2' '1&2' 'nan']
['1&2' '3&4' '1&2']
['3&4' '3&4' '1&2']
['3&4' 'nan' '3&4']
['nan' 'nan' 'nan']]
with pd.NA -> not workign
[False, False]
nan
</code></pre>
<p>I am unable to find where, but it seems that when exporting/importing/doing something to the data, the pd.NA values reappear in it. Since it is experimental, I think it is best to always remove pd.NA.</p>
<ul>
<li>Is there a way to stop pandas from ever using pd.NA?</li>
<li>Should I write a function that checks for pd.NA and removes them every time I re-import the data?</li>
</ul>
<p>It also seems to fail with simple operations like replace, for example when the type of column a is category:</p>
<pre class="lang-py prettyprint-override"><code>
df['a'] = df['a'].astype("category")
print (df.replace(pd.NA, np.nan))
--OUT--
TypeError: boolean value of NA is ambiguous
</code></pre>
|
<python><pandas>
|
2023-01-15 12:52:23
| 0
| 1,168
|
Leo
|
75,125,180
| 13,345,744
|
How to manipulate a Pandas Series without changing the given original?
|
<p><strong>Context</strong></p>
<p>I have a method that takes a Pandas Series of categorical data and returns it as an indexed version. However, I think my implementation is also modifying the given Series, not just returning a modified new Series. I also get the following errors:</p>
<blockquote>
<p>A value is trying to be set on a copy of a slice from a DataFrame.
See the caveats in the documentation: <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy</a>
series[series == value] = index</p>
</blockquote>
<blockquote>
<p>SettingWithCopyWarning: modifications to a property of a datetimelike object are not supported and are discarded. Change values on the original.
cacher_needs_updating = self._check_is_chained_assignment_possible()</p>
</blockquote>
<hr />
<p><strong>Code</strong></p>
<pre class="lang-py prettyprint-override"><code>def categorials(series: pandas.Series) -> pandas.Series:
unique = series.unique()
for index, value in enumerate(unique):
series[series == value] = index
return series.astype(pandas.Int64Dtype())
</code></pre>
<hr />
<p><strong>Question</strong></p>
<ul>
<li>How can I achieve my goal: This method should return the modified series without manipulating the original given series?</li>
</ul>
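<p>For illustration, a sketch of the same method working on a copy (so the caller's Series should stay untouched):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

def categorials(series: pd.Series) -> pd.Series:
    out = series.copy()  # work on a copy, not the caller's object
    for index, value in enumerate(out.unique()):
        out[out == value] = index
    return out.astype(pd.Int64Dtype())
</code></pre>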
|
<python><pandas><numpy><data-science><series>
|
2023-01-15 12:49:26
| 1
| 1,721
|
christophriepe
|
75,125,121
| 13,550,050
|
Python initialize class with list/mutable attribute
|
<p>If I want to initialise a class which has a list attribute, then I can directly assign a list to the attribute in the class <code>__init__</code>.</p>
<p>Doing this, however, would just assign a reference to the list, and my class attribute would change if I update the original list.</p>
<p>What is the best way to initialise a Python class with a list attribute? Should I use a list copy/deepcopy instead, or is it OK to ignore the issue above and just rely on people discarding the list that they use for initialisation?</p>
<pre><code>class A:
def __init__(self, arr):
self.arr = arr
arr = [1,2,3]
a = A(arr)
print(a.arr)
arr[0] = 100
print(a.arr)
</code></pre>
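<p>For reference, a sketch of the defensive-copy variant (shallow copy here; <code>copy.deepcopy</code> if the elements themselves are mutable):</p>
<pre><code>import copy

class A:
    def __init__(self, arr):
        self.arr = list(arr)            # shallow copy of the caller's list
        # self.arr = copy.deepcopy(arr) # if nested/mutable elements matter

arr = [1, 2, 3]
a = A(arr)
arr[0] = 100
print(a.arr)  # [1, 2, 3] -- unaffected by the change above
</code></pre>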
|
<python><class><initialization>
|
2023-01-15 12:39:53
| 2
| 369
|
crixus
|
75,125,081
| 8,995,741
|
Matplotlib: Rotating labels on lower half of pie chart and repositioning text labels
|
<p>I am using the following code to create the attached visualization:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.textpath import TextPath
from matplotlib.patches import PathPatch
fig = plt.figure(figsize=(6, 6))
ax = fig.add_axes([0, 0, 1, 1], aspect=1)
size = 0.1
params = [
"Parameter 1", "Parameter 2","Parameter 3","Parameter 4","Parameter 5","Parameter 6",
"Parameter 7","Parameter 8","Parameter 9","Parameter 10","Parameter 11","Parameter 12"
]
# Simple pie
ax.pie(np.ones(12), radius=1, colors=["#F5F5F5"] * len(params), wedgeprops=dict(width=size, edgecolor="w"))
# Rotated and transformed label
def plot_curved_text(text, angle, radius=1, scale=0.005):
"credits: Nicolas P. Rougier"
path = TextPath((0, 0), text, size=10)
path.vertices.flags.writeable = True
V = path.vertices
xmin, xmax = V[:, 0].min(), V[:, 0].max()
ymin, ymax = V[:, 1].min(), V[:, 1].max()
V -= (xmin + xmax) / 2, (ymin + ymax) / 2
V *= scale
for i in range(len(V)):
a = angle - V[i, 0]
V[i, 0] = (radius + V[i, 1]) * np.cos(a)
V[i, 1] = (radius + V[i, 1]) * np.sin(a)
patch = PathPatch(path, facecolor="k", linewidth=0)
ax.add_artist(patch)
for val, label in zip(
np.linspace(0.5, 11.5, 12),
params
):
plot_curved_text(label, val * 2 * np.pi / 12, 1 - 0.5 * size)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/hFZr0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hFZr0.jpg" alt="enter image description here" /></a></p>
<p>I am currently facing challenges in resolving the following issues:</p>
<ol>
<li>I am seeking a solution to rotate the labels applied on the lower half of a circle/pie (parameters 7 to 12) in the opposite direction, in order to achieve a similar appearance to that of the labels parameters 5 to 9, as depicted in the attached visual below:</li>
</ol>
<p><a href="https://i.sstatic.net/rd66H.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rd66H.jpg" alt="enter image description here" /></a></p>
<ol start="2">
<li>Additionally, using the current code, the parameter 1 is added to the right-hand side, however, my desired outcome is for it to be located at the top, as illustrated in point 1 visual</li>
</ol>
<p>If anyone could provide assistance in addressing these two concerns, it would be greatly appreciated.</p>
|
<python><matplotlib>
|
2023-01-15 12:32:44
| 1
| 1,439
|
slothfulwave612
|
75,125,071
| 16,169,533
|
Extend list with another list at a specific index?
|
<p>In python we can add lists to each other with the extend() method but it adds the second list at the end of the first list.</p>
<pre><code>lst1 = [1, 4, 5]
lst2 = [2, 3]
lst1.extend(lst2)
Output:
[1, 4, 5, 2, 3]
</code></pre>
<p>How would I add the second list so that it is inserted right after the 1st element, such that the result is this:</p>
<pre><code>[1, 2, 3, 4, 5 ]
</code></pre>
<p>I've tried using <code>lst1.insert(1, *lst2)</code> and got an error;</p>
<pre><code>TypeError: insert expected 2 arguments, got 3
</code></pre>
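<p>For reference, a sketch of how this can be done with slice assignment:</p>
<pre><code>lst1 = [1, 4, 5]
lst2 = [2, 3]

# slice assignment splices the whole of lst2 in at index 1
lst1[1:1] = lst2
print(lst1)  # [1, 2, 3, 4, 5]
</code></pre>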
|
<python><arrays><python-3.x><list>
|
2023-01-15 12:30:43
| 3
| 424
|
Yussef Raouf Abdelmisih
|
75,125,034
| 2,302,244
|
Getting the query string out of an rdflib prepared query
|
<p>How do I recover the original string from an <code>rdflib</code> preparedQuery.</p>
<p>In other words,</p>
<pre class="lang-py prettyprint-override"><code>query_string = "select ?s {?s ?p ?o.}"
q = prepareQuery(query_string)
</code></pre>
<p>I would like to run some function against <code>q</code> to recover the <code>query_string</code>.</p>
|
<python><rdflib>
|
2023-01-15 12:22:53
| 1
| 935
|
user2302244
|
75,124,894
| 361,455
|
Linking errors when using setuptools
|
<p>I am extending a C++ library with functionality that requires gRPC. The gRPC dependencies are added through vcpkg (example from CMakeLists.txt):</p>
<pre><code>find_package(gRPC CONFIG REQUIRED)
target_link_libraries(
mylib PRIVATE
gRPC::grpc++)
</code></pre>
<p>Now, that same library has python bindings (where I enter into, for me, an uncharted territory).
The library is built through <code>setuptools</code>. The setup itself initially went ok but when I try to load the library I get:</p>
<pre><code>β― python3
Python 3.8.10 (default, Nov 14 2022, 12:59:47)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import mylib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /usr/local/lib/python3.8/dist-packages/mylib-0.0.0-py3.8-linux-x86_64.egg/mylib.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZTIN6google8protobuf2io20ZeroCopyOutputStreamE
</code></pre>
<p>Missing _ZTIN6google8protobuf2io20ZeroCopyOutputStreamE clearly comes from GRPC dependency. I tried playing with <code>setup.py</code> and including grpc lib folder:</p>
<pre><code>toolchain_args += ['-I/home/atomic/vcpkg/installed/x64-linux/lib']
</code></pre>
<p>I also tried extending required libraries list:</p>
<pre><code>libraries += ['libgrpc++', 'libprotobuf']
</code></pre>
<p>gRPC is installed under /usr/local/lib/ and also as a vcpkg package.</p>
<p>But without any luck. Including the libs failed with the following error:</p>
<pre><code>/usr/bin/ld: cannot find -llibgrpc++
/usr/bin/ld: cannot find -llibprotobuf
</code></pre>
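<p>For reference, a sketch of how the extension might be declared with <code>library_dirs</code>/<code>libraries</code> (note: <code>-l</code> names are given without the <code>lib</code> prefix, and the library path goes to the linker rather than to <code>-I</code>; the source file name is a placeholder):</p>
<pre><code>from setuptools import setup, Extension

ext = Extension(
    "mylib",
    sources=["bindings.cpp"],
    # linker search path (-L), as opposed to an include path (-I)
    library_dirs=["/home/atomic/vcpkg/installed/x64-linux/lib"],
    # -l names without the "lib" prefix
    libraries=["grpc++", "protobuf"],
)

setup(name="mylib", ext_modules=[ext])
</code></pre>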
|
<python><c++><grpc><setuptools><vcpkg>
|
2023-01-15 11:58:09
| 0
| 8,318
|
Klark
|
75,124,867
| 12,057,138
|
How can I use either command-line arguments or constructor values in a Python class
|
<p>I have a Python class and a main function next to it so I can execute it from the command line. My <code>__init__</code> function deals with the provided arguments:</p>
<pre><code>import argparse
class Tester:
def __init__(self):
self.args_parser = argparse.ArgumentParser(description='Test')
self.args = self.__parse_parameters()
...
if __name__ == "__main__":
tester = Tester()
</code></pre>
<p>This way, when I execute the above file from command line, for example:</p>
<pre><code>#python tester.py --test eating --lowcarb
</code></pre>
<p>I can provide the parameters and they will eventually get passed to the __parse_parameters function. All good.</p>
<p>My question is, how can I pass these parameters to the class if I decide to use this class from Python code?</p>
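<p>One pattern that might work here (a sketch; the specific arguments are just examples) is letting <code>__init__</code> accept an optional argv list, which <code>argparse</code> falls back to <code>sys.argv</code> for when it is <code>None</code>:</p>
<pre><code>import argparse

class Tester:
    def __init__(self, argv=None):
        self.args_parser = argparse.ArgumentParser(description='Test')
        self.args_parser.add_argument('--test')
        self.args_parser.add_argument('--lowcarb', action='store_true')
        # argv=None -> parse_args() reads sys.argv (command line);
        # a list of strings can be passed when using the class from code
        self.args = self.args_parser.parse_args(argv)

if __name__ == "__main__":
    tester = Tester()                                     # from the command line
    # tester = Tester(['--test', 'eating', '--lowcarb'])  # from Python code
</code></pre>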
|
<python>
|
2023-01-15 11:53:10
| 2
| 688
|
PloniStacker
|
75,124,795
| 2,105,338
|
Looping over and mapping slices of a DataArray in xarray
|
<p>Say I have a multidimensional DataArray and I would like to loop over slices according to some dimension and change them. For example, the first dimension is time and I would like for each time to receive a DataArray that represents a slice of that time and map it to a different DataArray of the same size.</p>
<p>I can use <code>apply_ufunc</code> but then I lose the ability to use labelled dimensions within the function that operates on them. I thought maybe I could use <code>map_blocks</code> but I couldn't understand how to specify a dimension to loop over.</p>
<p>Edit:</p>
<p>I've implemented what I wanted to do like this:</p>
<pre><code>def xarray_map_over_dimension(
data_array: xr.DataArray, func: Callable, dim: str, *args, **kwargs
) -> None:
"""
For an n-dimensional DataArray this function will map n-1 dimensional slices by iterating over a given dimension.
This is similar to xarray's apply_ufunc except that it calls :func with DataArray objects rather than ndarrays
WARNING: this function will modify data_array inplace
"""
for i, data_slice in data_array.groupby(dim):
data_array[{dim: i}] = func(data_slice, *args, **kwargs)
</code></pre>
<p>Basically I wanted something like apply_ufunc that would give me data_array slices instead of ndarray slices. I thought this was quite a common use case so I figured there would be a standard way to do this.</p>
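<p>For comparison, a sketch using <code>groupby(dim).map</code>, which hands the function labelled (n-1)-dimensional slices and stitches the results into a new DataArray instead of mutating in place:</p>
<pre><code>import numpy as np
import xarray as xr

da = xr.DataArray(
    np.random.rand(3, 4, 5),
    dims=("time", "y", "x"),
    coords={"time": [0, 1, 2]},
)

# the lambda receives a labelled (y, x) DataArray for each time slice
result = da.groupby("time").map(lambda s: s - s.mean())
print(result.dims, result.shape)
</code></pre>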
|
<python><numpy><data-science><python-xarray>
|
2023-01-15 11:43:46
| 1
| 2,286
|
davegri
|
75,124,624
| 5,353,753
|
Comparing two dataframes if one contains the other
|
<p>I have two dataframes <code>A</code> and <code>B</code>. For example:</p>
<p>A:</p>
<pre><code>Key | Col1 | Col2
A aa bb
B bb bc
C cc bd
</code></pre>
<p>B:</p>
<pre><code>Key | Col1 | Col2
A a b
B ab c
C cc b
</code></pre>
<p>Both have one unique row per key (which is the key of the df), and can hold different values for the other columns.</p>
<p>I'm trying to analyze the difference between the two dataframes.</p>
<p>I want 3 comparisons(possibly 4).</p>
<ol>
<li>number of <code>notna</code> cells in <code>A</code> per column</li>
<li>number of records that are equal, divided by the number of cells with a <code>notna</code> value in <code>A</code> per column</li>
<li>number of records in <code>B</code> whose value is contained in the corresponding <code>A</code> value string-wise (for instance 'aaa' is in 'baaab'), per column</li>
<li>some other custom comparison. I.E. Some similarity measure.</li>
</ol>
<p>I've managed to achieve <code>1</code> and <code>2</code>, using this:</p>
<pre><code>counts = A.notna().sum()
scores = B.eq(A).sum().div(counts)
</code></pre>
<p>So for <code>3</code>, I want to get:</p>
<pre><code>Col1 2
Col2 3
</code></pre>
<p>My problem is that I don't know how to achieve a custom comparison like <code>in</code> for my third and fourth options. If both were in the same df, then I could probably do something like this</p>
<pre><code>df[col] = df.apply(lambda x: x[col + '_1'] in x[col + '_2'], axis=1)
</code></pre>
<p>for each column, and sum the result, but that seems complicated and messy. Any suggestions for a cleaner solution that keeps the two dataframes separate?</p>
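<p>For comparison number 3, a sketch of the kind of per-column counting described above (reproducing the expected output):</p>
<pre><code>import pandas as pd

A = pd.DataFrame({"Col1": ["aa", "bb", "cc"], "Col2": ["bb", "bc", "bd"]},
                 index=["A", "B", "C"])
B = pd.DataFrame({"Col1": ["a", "ab", "cc"], "Col2": ["b", "c", "b"]},
                 index=["A", "B", "C"])

# per column: count rows where B's value is a substring of A's value
contained = pd.Series({
    col: sum(b in a for a, b in zip(A[col], B[col])
             if pd.notna(a) and pd.notna(b))
    for col in A.columns
})
print(contained)
# Col1    2
# Col2    3
</code></pre>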
|
<python><pandas><dataframe>
|
2023-01-15 11:13:45
| 2
| 40,569
|
sagi
|
75,124,580
| 9,218,680
|
Creating a new df joining two data frames (iterating each row) with multiple conditions
|
<p>I have a couple of decent sized dataframes that look like:</p>
<p><strong>df_B</strong></p>
<pre><code>id start_time end_time side cost
1234 2021-01-01 16:00:00.100000 2021-01-01 16:02:00.100000 BUY 100
1564 2021-01-01 16:05:00.100000 2021-01-01 16:10:00.100000 BUY 111
7535 2021-01-01 16:40:00.100000 2021-01-01 16:55:00.100000 BUY 124
9999 2021-01-01 16:44:00.100000 2021-01-01 16:45:00.100000 BUY 128
</code></pre>
<p><strong>df_S</strong></p>
<pre><code>id start_time end_time side cost
5366 2021-01-01 16:00:00.100000 2021-01-01 16:02:00.100000 SELL 100
4533 2021-01-01 16:05:00.100000 2021-01-01 16:08:00.100000 SELL 105
4532 2021-01-01 16:20:00.100000 2021-01-01 16:50:00.100000 SELL 122
5827 2021-01-01 16:30:00.100000 2021-01-01 16:35:00.100000 SELL 123
</code></pre>
<p>I would like to create a new dataframe such that, for each id in df_B, I attach the df_S row where df_S.cost <= df_B.cost and df_S.start_time <= df_B.end_time.</p>
<p>Eg:
Desired output:</p>
<pre><code>id start_time end_time side cost id_S start_time_S end_time_S side_S cost_S
1234 2021-01-01 16:00:00.100000 2021-01-01 16:02:00.100000 BUY 100 5366 2021-01-01 16:00:00.100000 2021-01-01 16:02:00.100000 SELL 100
1564 2021-01-01 16:05:00.100000 2021-01-01 16:10:00.100000 BUY 111 4533 2021-01-01 16:05:00.100000 2021-01-01 16:08:00.100000 SELL 105
7535 2021-01-01 16:40:00.100000 2021-01-01 16:55:00.100000 BUY 124
9999 2021-01-01 16:44:00.100000 2021-01-01 16:45:00.100000 BUY 128
</code></pre>
<p>Could you please advise how I can efficiently write this for a large dataframe?</p>
|
<python><pandas>
|
2023-01-15 11:06:03
| 2
| 2,510
|
asimo
|
75,124,417
| 4,371,803
|
Python Selenium: send keys to tag 'input type="hidden" '
|
<p>I try to log in to this web page with my credentials.</p>
<p><a href="https://www.oddsportal.com/login" rel="nofollow noreferrer">https://www.oddsportal.com/login</a></p>
<p>I am able to get the "username" and "password" input boxes but I am not able to send the data.</p>
<p>Selenium locates the elements (via "id" or otherwise), but gives problems when trying to send values or pressing enter:
"AttributeError: 'NoneType' object has no attribute 'click'"
"AttributeError: 'NoneType' object has no attribute 'send_keys'"</p>
<p>I've tried find, xpath, actionchain and execute_script, too.</p>
<p>Any clue how to click and send keys?</p>
<p>TVM.</p>
<p><strong>Some code I tried:</strong></p>
<pre><code>us = driver.find_element_by_xpath('//input[@type="hidden"]')
print(us)
</code></pre>
<p>output:</p>
<pre><code><selenium.webdriver.remote.webelement.WebElement (session="84a2b7c7a5e9eda5e562d7a8f17ab749", element="1201c113-012c-4918-84cf-8011b696a5ff")>
</code></pre>
<p><strong>I tried too:</strong></p>
<pre><code>us = driver.find_element(By.ID, "login-username-sign")
print(us)
</code></pre>
<p>output:</p>
<pre><code><selenium.webdriver.remote.webelement.WebElement (session="b8e6d51511c6cad8d24cf7946c46865f", element="004a920b-752d-4280-8dd1-5c8f330efa74")>
</code></pre>
<p><strong>I tried:</strong></p>
<pre><code>us = driver.find_element(By.ID, "login-username-sign")
us.send_keys("1234")
</code></pre>
<p>output:</p>
<pre><code>ElementNotInteractableException: Message: element not interactable
(Session info: chrome=109.0.5414.74)
</code></pre>
<p>Etc.</p>
<p><strong>"Info about human login":</strong> To log in manually you have to put the cursor over the text box, click and then type.
This actions fail (click and type) under Selenium (I dont get it clicks but I think Selinium does get the object).</p>
<p><strong>HTML:</strong></p>
<pre><code><form action="https://www.oddsportal.com/userLogin" method="post" class="flex flex-col flex-1 mt-[10px]">
<input type="hidden" name="_token" value="PhbLpoaq9vnNILGD4H6YYO7kFAoq2CUnT606hOHO">
<input type="hidden" name="referer">
<div class="item">
<div class="title">
<label for="login-username-sign" class="mb-2 text-xs text-black-main">Username</label>
<span class="required" title="required item">*</span>
</div>
<div>
<input class="int-text w-[300px] min-sm:w-[340px] pl-4 h-10 mb-[14px] border border-solid border-box border-black-main border-opacity-20 hover:input-hover" type="text" id="login-username-sign" name="login-username" size="25" required="">
</div>
</div>
<div class="item">
<div class="title">
<label for="login-password-sign" class="mb-2 text-xs text-black-main">Password</label>
<span class="required" title="required item">*</span>
</div>
<div class="flex h-10 bg-white-main">
<input class="int-text w-[300px] min-sm:w-[340px] pl-4 h-10 mb-[14px] border-solid border border-black-main border-opacity-20 hover:input-hover" type="password" id="login-password-sign" name="login-password" autocomplete="on" size="25" required="">
<div class="grid absolute left-[90%] items-center justify-center h-10 w-8"><div onclick="switchVisibility(this, 'login-password-sign')" class="w-6 h-5 bg-center bg-no-repeat bg-cover cursor-pointer passlog bg-eye_icon !bg-off_eye_icon">
</div>
</div>
</div>
</code></pre>
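<p>For reference, a hedged sketch targeting the visible box with an explicit wait (assuming no cookie banner or overlay is blocking it, and the username is a placeholder):</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.oddsportal.com/login")

wait = WebDriverWait(driver, 10)
# wait for the *visible* username box (not the hidden _token/referer inputs)
us = wait.until(EC.element_to_be_clickable((By.ID, "login-username-sign")))
driver.execute_script("arguments[0].scrollIntoView(true);", us)
us.click()
us.send_keys("my_username")
</code></pre>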
|
<python><selenium><hidden><sendkeys><clickable>
|
2023-01-15 10:37:41
| 4
| 689
|
daniel2014
|
75,124,341
| 3,716,664
|
PySide6 - high datarate from DLL to QTableWidget
|
<p>I have a Python (3.11) project with PySide6. The application connects to a DLL for CAN communication with other nodes. As part of the configuration, approx. 50-70 CAN messages are sent in a short timeframe (I even added delays of up to 50 ms per message) and I still see data loss.</p>
<p>A DLL callback function properly receives all packets from the low-level interface driver (done in C), and if I try to call UI elements directly, I get access denied issues (which is expected, as the UI lives in another thread).</p>
<p>Therefore I have a signal that is emitted on every DLL message and transfers the data to the UI.
If, in the signal handler on the UI side, I only print the same packet with the <code>print()</code> function, the data is nicely displayed in the correct order.</p>
<pre class="lang-py prettyprint-override"><code> # Signal in a class
signal_dll_packet_received = Signal(object)
# Main function called from DLL to process various events
def dll_event_function(self, inpacket):
packet = inpacket.contents
print('DLL PACKET:', packet.data.can_msg)
self.signal_dll_packet_received.emit(packet)
def signal_dll_packet_received_emitted(self, packet):
self.new_ui_window.process_packet(packet)
</code></pre>
<p>But since I have a <code>QTableWidget</code>, I have to search the table for the matching record and update the value of a specific cell. When doing so, I observe:</p>
<ul>
<li>Some packets are lost in the UI - it looks like the UI is too slow and cannot process the DLL messages fast enough.</li>
<li>Some packets seem to be "overwritten", meaning that even if 2 different packets arrive through the DLL, the UI sees both as the same (it looks like the same pointer/memory).</li>
</ul>
<p>What is Qt's correct way to transfer "high throughput" data from the DLL to the UI without data loss?</p>
<p>I have tried things like:</p>
<ul>
<li>Make a deepcopy of the packet and continue with the copy (see the sketch after this list)</li>
<li>Temporarily remove the <code>tablemodel.beginResetModel()</code> and <code>endResetModel()</code> calls, which may have an impact on performance.</li>
</ul>
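<p>To make the copy idea concrete, this is the direction I am considering (a rough sketch with made-up field names, under the assumption that the DLL reuses the same buffer for every callback, which would explain the "overwrite" effect): snapshot the fields into a plain Python object inside the callback so the emitted signal never holds a reference into DLL memory.</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass

@dataclass
class PacketSnapshot:
    # plain-Python copies of the fields the UI needs (names are made up here)
    sec: int
    used: int
    payload: bytes

# in the existing class, replacing the current emit:
def dll_event_function(self, inpacket):
    packet = inpacket.contents
    # take an owned copy of the ctypes data right away, so the emitted object
    # never points into DLL-owned memory that may be reused for the next packet
    snap = PacketSnapshot(
        sec=int(packet.sec),          # assumed field names
        used=int(packet.used),
        payload=bytes(packet.data),   # bytes() copies a ctypes array/struct
    )
    self.signal_dll_packet_received.emit(snap)
</code></pre>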
<p>Should I implement some kind of ring buffer, or is there a recommended Qt way to do this?</p>
<p>New UI window class method</p>
<pre class="lang-py prettyprint-override"><code> # method in window class for table manipulation
def process_packet(self, packet):
print('UI PACKET:', packet.data.can_msg)
# DO WORK with data in QTableWidget
# If this is ignored, then it works properly
</code></pre>
<p>Here you can see that the DLL packet log printed several messages with different data,
but the UI packets (which went through emit + table modification) all show the content of the last DLL packet for a long stretch -> it feels like a memory overwrite.</p>
<p>At the beginning the sequence is correct -> DLL first, UI follows, DLL again, UI follows. Later it becomes several DLL packets first, then several UI packets, but with the wrong data.</p>
<pre><code>DLL PACKET: sec=36913; used=996589; len = 8; data = 01 00 52 41 54 43 4B 4C
UI PACKET: sec=36913; used=996589; len = 8; data = 01 00 52 41 54 43 4B 4C
DLL PACKET: sec=36914; used=027252; len = 8; data = 01 00 45 30 30 31 00 00
UI PACKET: sec=36914; used=027252; len = 8; data = 01 00 45 30 30 31 00 00
DLL PACKET: sec=36914; used=057970; len = 4; data = 02 00 07 00
DLL PACKET: sec=36914; used=087962; len = 4; data = 02 00 02 00
DLL PACKET: sec=36914; used=118081; len = 6; data = 02 00 02 00 03 00
DLL PACKET: sec=36914; used=148968; len = 4; data = 02 00 00 00
DLL PACKET: sec=36914; used=179087; len = 6; data = 02 00 3C 00 14 00
DLL PACKET: sec=36914; used=210279; len = 8; data = 02 00 00 00 00 00 00 00
DLL PACKET: sec=36914; used=240934; len = 4; data = 02 00 00 00
DLL PACKET: sec=36914; used=271181; len = 8; data = 02 00 52 41 54 43 4B 4C
DLL PACKET: sec=36914; used=302253; len = 8; data = 02 00 45 30 30 32 00 00
DLL PACKET: sec=36914; used=332939; len = 4; data = 03 00 0B 00
DLL PACKET: sec=36914; used=362938; len = 4; data = 03 00 03 00
DLL PACKET: sec=36914; used=393082; len = 6; data = 03 00 02 00 03 00
DLL PACKET: sec=36914; used=423929; len = 4; data = 03 00 00 00
DLL PACKET: sec=36914; used=454072; len = 6; data = 03 00 3C 00 14 00
DLL PACKET: sec=36914; used=485224; len = 8; data = 03 00 00 00 00 00 00 00
DLL PACKET: sec=36914; used=516342; len = 4; data = 03 00 00 00
DLL PACKET: sec=36914; used=547190; len = 8; data = 03 00 52 41 54 43 4B 4C
DLL PACKET: sec=36914; used=578189; len = 8; data = 03 00 45 30 30 33 00 00
DLL PACKET: sec=36914; used=608964; len = 4; data = 04 00 17 00
DLL PACKET: sec=36914; used=638963; len = 4; data = 04 00 04 00
DLL PACKET: sec=36914; used=669074; len = 6; data = 04 00 02 00 03 00
DLL PACKET: sec=36914; used=699954; len = 4; data = 04 00 00 00
DLL PACKET: sec=36914; used=730065; len = 6; data = 04 00 14 00 00 00
DLL PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
!!! <- see here -> all UI packets have content from last DLL packet
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
UI PACKET: sec=36914; used=761640; len = 8; data = 04 00 00 00 00 00 00 00
DLL PACKET: sec=36914; used=791959; len = 4; data = 04 00 00 00
UI PACKET: sec=36914; used=791959; len = 4; data = 04 00 00 00
DLL PACKET: sec=36914; used=822167; len = 8; data = 04 00 52 41 54 43 4B 4C
UI PACKET: sec=36914; used=822167; len = 8; data = 04 00 52 41 54 43 4B 4C
DLL PACKET: sec=36914; used=853238; len = 8; data = 04 00 45 30 30 34 00 00
UI PACKET: sec=36914; used=853238; len = 8; data = 04 00 45 30 30 34 00 00
</code></pre>
<p>How can I solve this?</p>
|
<python><qt><pyqt><pyside6>
|
2023-01-15 10:21:49
| 1
| 7,472
|
unalignedmemoryaccess
|
75,124,041
| 764,195
|
asyncio or threads for long running background tasks
|
<p>I'm a relative Python noob, so apologies if this is a stupid question, but what is the best approach for handling long-running background jobs? The basic premise of what I'm trying to achieve:</p>
<ul>
<li>Web app running FastAPI handles all user "interaction"</li>
<li>Background task responsible for streaming data from a third party API</li>
<li>Background task responsible for other activities</li>
</ul>
<p>The web app side of things is fine, but where I'm running into no end of confusion is the background jobs. Logically, in my head, the background workers are essentially isolated jobs, so they run in their own threads and do whatever they need to do. Any communication would happen through queues, but really this would just be to send simple instructions (like "start streaming X data" or "stop streaming Y data").</p>
<p>Obviously, whilst developing this I came across asyncio, which seems to be touted as having advantages over threads in a number of ways, but I can't get my head around how it would work in this situation. I think I've got it running as a background job (using <code>create_task</code> and then waiting for the tasks with <code>asyncio.wait([t1, t2...])</code>), which seems to work, but then I run into the next issue - the library I'm using for streaming internally uses asyncio. When I try to start the streaming, I get an error about not being able to start an event loop inside an already-running asyncio loop, which makes sense, but then I have no idea how to make this work. I've seen references to using libraries like the nested asyncio lib, but this all feels like hacky workarounds just to make something work that would've been much simpler in a thread.</p>
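<p>For reference, this is roughly what I have at the moment (a simplified sketch; the worker, queue and route names are placeholders I made up for this question):</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from fastapi import FastAPI

app = FastAPI()

async def stream_worker(commands: asyncio.Queue):
    # long-running background job: waits for simple instructions on the queue
    while True:
        cmd = await commands.get()
        print(f"received instruction: {cmd}")
        # ... start / stop streaming from the third-party API here ...

@app.on_event("startup")
async def start_background_tasks():
    # create the queue and the worker on the same event loop as the web app
    app.state.commands = asyncio.Queue()
    app.state.worker = asyncio.create_task(stream_worker(app.state.commands))

@app.post("/stream/{name}")
async def start_stream(name: str):
    await app.state.commands.put(f"start streaming {name} data")
    return {"status": "queued"}
</code></pre>
<p>This works up to the point where the streaming library tries to start its own event loop, which is where I get the error described above.</p>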
<p>How would you tackle this? Just use threads, or am I overlooking something with asyncio and doing something stupid?</p>
|
<python><multithreading><python-asyncio>
|
2023-01-15 09:17:27
| 0
| 4,448
|
PaReeOhNos
|
75,123,988
| 7,800,760
|
SPARQLWrapper query error with an f-string : QueryBadFormed
|
<p>I am trying to look up the names of Italian Senators via a public SPARQL endpoint.</p>
<p>Starting sample data is the following set:</p>
<pre class="lang-py prettyprint-override"><code>longnames = ['Lars Danielsson', 'Giorgia Meloni', 'Ursula von der Leyen', 'Filippo Mannino', 'Matteo Piantedosi', 'Lamberto Giannini']
</code></pre>
<p>and here the relevant part of my code:</p>
<pre class="lang-py prettyprint-override"><code>from SPARQLWrapper import SPARQLWrapper, JSON
endpoint = "http://dati.senato.it/sparql"
sparql = SPARQLWrapper(endpoint)
sparql.setReturnFormat(JSON)
for label in longnames:
sparqlquery = f"""
PREFIX osr:<http://dati.senato.it/osr/>
SELECT * WHERE {{
?uri a osr:Senatore.
?uri rdfs:label '{label}'^^xsd:string.
}}
"""
print(f"{endpoint} ---> {sparqlquery}")
sparql.setQuery(sparqlquery)
try:
ret = sparql.queryAndConvert()
for r in ret["results"]["bindings"]:
print(r)
except Exception as e:
print(e)
</code></pre>
<p>Please note that I am building the SPARQL query with an f-string. Sadly this code gives an error I don't understand, even though the line printing the endpoint and the query looks perfectly all right. Here's the error:</p>
<blockquote>
<p>QueryBadFormed: A bad request has been sent to the endpoint: probably the SPARQL query is badly formed.</p>
<p>Response:<br />
b"Virtuoso 37000 Error SP030: SPARQL compiler, line 0: Invalid character in SPARQL expression at '%'\n\nSPARQL query:\ndefine sql:big-data-const 0 \n#output-format:application/sparql-results+json\n%0A PREFIX osr%3A%3Chttp%3A//dati.senato.it/osr/%3E%0A SELECT %2A WHERE %7B%0A %3Furi a osr%3ASenatore. %0A %3Furi rdfs%3Alabel %27Ursula von der Leyen%27%5E%5Exsd%3Astring.%0A %7D%0A "</p>
</blockquote>
<p>What am I doing wrong?</p>
|
<python><sparql><f-string><sparqlwrapper>
|
2023-01-15 09:07:48
| 0
| 1,231
|
Robert Alexander
|
75,123,754
| 7,865,368
|
How to filter child element by putting condition on another child element in XML
|
<p>In the XML below, I need to extract the <code>BinaryImage</code> if the <code>ImageType</code> is <code>fullImage</code>.</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header />
<soapenv:Body>
<Instation xmlns="http://ffsf.us.com/schema_1-2" SchemaVersion="1.2">
<ImageArray>
<Image>
<InstanceID>5216</InstanceID>
<TimeStamp>2022-12-01T10:34:24.499Z</TimeStamp>
<LaneID>0</LaneID>
<ImageType>fullImage</ImageType>
<ImageFormat>jpeg</ImageFormat>
<BinaryImage>abcd</BinaryImage>
</Image>
<Image>
<InstanceID>5216</InstanceID>
<TimeStamp>2022-12-01T10:34:24.499Z</TimeStamp>
<LaneID>0</LaneID>
<ImageType>Patch</ImageType>
<ImageFormat>jpeg</ImageFormat>
<BinaryImage>abcd</BinaryImage>
</Image>
</ImageArray>
</Instation>
</soapenv:Body>
</soapenv:Envelope>
</code></pre>
<p>I tried <code>findall</code> and <code>xpath</code>, but they gave the following errors:</p>
<pre class="lang-py prettyprint-override"><code>root.findall(".//{http://ffsf.us.com/schema_1-2}Image[contains(@ImageType,'fullImage')]")
root.xpath(".//{http://ffsf.us.com/schema_1-2}Image[contains(@ImageType,'fullImage')]")
root.xpath(".//{http://ffsf.us.com/schema_1-2}BinaryImage[contains(@ImageType,'fullImage')]")
root.xpath(".//{http://ffsf.us.com/schema_1-2}BinaryImage[@ImageType='fullImage']")
root.xpath(".//{http://ffsf.us.com/schema_1-2}Image[@ImageType='fullImage']")
</code></pre>
<blockquote>
<p>lxml.etree.XPathEvalError: Invalid expression</p>
</blockquote>
<blockquote>
<p>SyntaxError: invalid predicate</p>
</blockquote>
<p>The <a href="https://lxml.de/xpathxslt.html#xpath" rel="nofollow noreferrer">documentation</a> does not seem very helpful; what am I doing wrong?</p>
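<p>I suspect the <code>{namespace}</code> Clark notation is only understood by <code>findall</code> and not by <code>xpath()</code>, so I also started experimenting with a prefix mapping like the sketch below, but I am not sure whether this is the right way to put the condition on the <code>ImageType</code> child element (the file name here is hypothetical):</p>
<pre class="lang-py prettyprint-override"><code>from lxml import etree

root = etree.parse("instation.xml")  # hypothetical file name
ns = {"d": "http://ffsf.us.com/schema_1-2"}

# select BinaryImage text for Image elements whose ImageType child is 'fullImage'
values = root.xpath(
    ".//d:Image[d:ImageType='fullImage']/d:BinaryImage/text()",
    namespaces=ns,
)
print(values)
</code></pre>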
|
<python><xml><xpath><lxml>
|
2023-01-15 08:14:13
| 2
| 6,252
|
Vishal Singh
|
75,123,646
| 13,087,576
|
Understanding fancy einsum equation
|
<p>I was reading about attention and came across this equation:</p>
<pre class="lang-py prettyprint-override"><code>import einops
from fancy_einsum import einsum
import torch
x = torch.rand((200, 10, 768))
y = torch.rand((20, 768, 64))
res = einsum("batch query_pos d_model, n_heads d_model d_head -> batch query_pos n_heads d_head", x, y)
</code></pre>
<p>I am not able to understand the underlying operations that produce the result <code>res</code>.</p>
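<p>My current reading of the equation (which may be exactly where I'm going wrong) is that it amounts to the nested loops below:</p>
<pre class="lang-py prettyprint-override"><code>import torch

# my interpretation: res[b, q, h, d] = sum over the d_model axis m of x[b, q, m] * y[h, m, d]
res_loops = torch.zeros(200, 10, 20, 64)
for b in range(200):
    for q in range(10):
        for h in range(20):
            # (768,) @ (768, 64) -> (64,)
            res_loops[b, q, h] = x[b, q] @ y[h]

print(torch.allclose(res, res_loops))
</code></pre>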
<p>I thought it might be matmul and tried this:</p>
<pre class="lang-py prettyprint-override"><code>import torch
x_ = x.unsqueeze(dim = 2).unsqueeze(dim = 2)
y_ = torch.broadcast_to(y, (1, 1, 20, 768, 64))
res2 = x_ @ y_
res2 = res2.squeeze(dim = -2)
(res == res2).all() # Prints False
</code></pre>
<p>But that does not seem to be right.</p>
<p>Any help regarding this is greatly appreciated.</p>
|
<python><torch><array-broadcasting><einops><einsum>
|
2023-01-15 07:46:24
| 1
| 307
|
Sai Prashanth
|
75,123,599
| 1,521,585
|
problem with adding javascript to django view form update
|
<p>I have a Django view for adding an invoice to my application; it includes some JavaScript and it works fine.</p>
<pre><code><p>
<label for="id_total_without_vat">Price</label>
<input type="number" name="total_without_vat" step="any" required="" id="id_total_without_vat" oninput="calculateTotalWithVat()">
<label for="id_currency_name">Currency</label>
<select name="currency_name" id="id_currency_name" onchange="update_currency_rate()">
<option value="NIS">NIS</option>
</select>
<label for="units">Units</label>
<span id="units"></span>
<label for="id_currency_rate">Currency Rate</label>
<input type="number" name="currency_rate" step="any" id="id_currency_rate" value=1.00 oninput="calculateTotalWithVat()">
<label for="nis_total">Total Price</label>
<span id="nis_total"></span>
</p>
<p>
<label for="id_total_vat_included">Total VAT Included</label>
<input type="number" name="total_vat_included" step="any" required="" id="id_total_vat_included">
<label for="id_vat_percentage">VAT Perentage</label>
<input type="number" name="vat_percentage" step="any" value="17.0" id="id_vat_percentage" oninput="updateVat(this.value)">
<label for="nis_total_with_tax">NIS Total With Tax</label>
<span id="nis_total_with_tax"></span>
</p>
</code></pre>
<p>The problem is that when I try to do something similar in the update view, the <code>oninput</code> part of the attribute shows up in the browser as plain text. This is the update view code:</p>
<pre><code><p>
<label for="id_total_without_vat">Total without VAT</label>
{{ form.total_without_vat }}
<label for="id_currency_name">Currency</label>
<select name="currency_name" id="id_currency_name" onchange="update_currency_rate()">
<option value=" {{ form.currency_name }} "></option>
</select>
<label for="units">Units</label>
<span id="units"></span>
<label for="id_currency_rate">Currency Rate</label>
<input type="number" name="currency_rate" step="any" id="id_currency_rate" value= "{{ form.currency_rate }}" oninput="calculateTotalWithVat()">
<label for="nis_total">Price</label>
<span id="nis_total"></span>
</p>
<p>
<label for="id_total_vat_included">Price Including VAT</label>
{{ form.total_vat_included }}
<label for="id_vat_percentage">VAT Percentage</label>
<input type="number" name="vat_percentage" step="any" value=" {{ form.vat_percentage }} " id="id_vat_percentage" oninput="updateVat(this.value)">
<label for="nis_total_with_tax">Price Including Taxes</label>
<span id="nis_total_with_tax"></span>
</p>
</code></pre>
<p>Can someone tell me why the add view works, but the update doesn't?</p>
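<p>For completeness, I also wondered whether I should be attaching the handlers in the form class instead of in the template, something along these lines (just a sketch; the form and model names are hypothetical, the field names are taken from the template above):</p>
<pre><code>from django import forms

class InvoiceForm(forms.ModelForm):  # hypothetical form name
    class Meta:
        model = Invoice  # hypothetical model
        fields = ["total_without_vat", "currency_rate", "total_vat_included", "vat_percentage"]
        widgets = {
            # render the oninput handler as a real attribute instead of template text
            "total_without_vat": forms.NumberInput(attrs={"step": "any", "oninput": "calculateTotalWithVat()"}),
            "currency_rate": forms.NumberInput(attrs={"step": "any", "oninput": "calculateTotalWithVat()"}),
            "vat_percentage": forms.NumberInput(attrs={"step": "any", "oninput": "updateVat(this.value)"}),
        }
</code></pre>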
|
<javascript><python><django>
|
2023-01-15 07:37:25
| 1
| 350
|
Daniel Ben-Shabtay
|
75,123,539
| 17,347,824
|
Nested SQL Queries using multiple tables and pymssql in Python
|
<p>I have 2 tables with information about contestant races that I need to organize into one clean output statement (not a tuple or list) that only gives each unique contestant along with their region, average race time, and a count of how many races they competed in.</p>
<p>Using <code>"SELECT TABLE_NAME, COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS;"</code> returns the following tables I have to work with:
Tables, Column_Names</p>
<pre><code>
('Outcome', 'ID')
('Outcome', 'RaceID')
('Outcome', 'RaceTime')
('Outcome', 'ResultID')
('Contestant', 'Age')
('Contestant', 'Region')
('Contestant', 'ID')
('Contestant', 'Name')
('Contestant', 'Gender')
</code></pre>
<p>Here is what I have so far. I can get the first part to join the tables and return the individual columns that I need, but they aren't in the right format, and it includes each race instead of an aggregate per contestant.</p>
<pre><code>#This block returns all the columns needed from all tables with INNER JOIN
cursor.execute("SELECT Contestant.Name, Contestant.Region, Outcome.RaceTime, Outcome.RaceID FROM Contestant INNER JOIN Outcome ON Contestant.ID=Outcome.ID;")
for row in cursor:
print(row[0]," ", row[1]," ", row[2]," ", row[3])
</code></pre>
<p>The first couple of lines this query returns are shown below, but they don't have the full region name, the race time isn't averaged, and they show the RaceID instead of a count of races.</p>
<pre><code>Smith PNW 64.59 1.0
Kohl SW 50.3 1.0
</code></pre>
<p>If this were working the way I need, it would return results like the following, with 3 spaces between values, the full region name, the average of each contestant's race times, and a count of how many races they were in. It would also need to be ordered by average time from highest to lowest.</p>
<pre><code>Smith Pacific Northwest 59.454 7
Kohl Southwest 54.203 4
</code></pre>
<p>Here are other lines I have that will return separately everything else that I need to put into the clean table:</p>
<pre><code>#The following each does a part of what needs to happen with the columns data ultimately
cursor.execute("SELECT Region, CASE WHEN Region = 'PNW' THEN 'Pacific Northwest' ELSE 'Southwest' END AS Region_Long FROM Contestant;") #Updates abbreviated region names to full names
for row in cursor:
print(row[1]) #prints out just the full region names
cursor.execute("SELECT ID, AVG(RaceTime) FROM Outcome GROUP BY ID ORDER BY AVG(RaceTime) DESC;") #Calculates each Contestants's avg race time
for row in cursor:
print(row[1])
cursor.execute("SELECT COUNT(DISTINCT RaceID) FROM Outcome GROUP BY ID;") #Counts how many races each Contestant had
for row in cursor:
print(row[0])
</code></pre>
<p>I just can't seem to figure out how to put all of this together so it outputs the way I need, instead of outputting each part separately.</p>
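<p>This is the kind of combined query I have been sketching, but I am not confident the grouping and the CASE expression belong together like this:</p>
<pre><code>cursor.execute("""
    SELECT
        Contestant.Name,
        CASE WHEN Contestant.Region = 'PNW' THEN 'Pacific Northwest' ELSE 'Southwest' END AS Region_Long,
        AVG(Outcome.RaceTime) AS AvgTime,
        COUNT(DISTINCT Outcome.RaceID) AS Races
    FROM Contestant
    INNER JOIN Outcome ON Contestant.ID = Outcome.ID
    GROUP BY Contestant.Name, Contestant.Region
    ORDER BY AvgTime DESC;
""")
for row in cursor:
    print(row[0], " ", row[1], " ", row[2], " ", row[3])
</code></pre>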
|
<python><sql><pymssql>
|
2023-01-15 07:21:30
| 1
| 409
|
data_life
|
75,123,366
| 3,135,025
|
Pandas concat with outer join - Reindexing only valid with uniquely valued Index objects
|
<p>I have 6 dataframes that contain information about unique customers, e.g. one df for emails, a second one for first names, and so on.</p>
<p>I am doing a concat with an outer join to get one df with all customers and all information columns.</p>
<p>This is what I did so far:</p>
<pre><code>info_dfs = (df1,df2,df3,df4,df5,df6)
</code></pre>
<p>Now the concatenation:</p>
<pre><code>all_merged = pd.concat(
objs=(dfs.set_index('ID') for dfs in info_dfs),
axis=1,
join='outer').reset_index().fillna(0)
</code></pre>
<p>But I get this error:</p>
<pre><code>InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>I checked whether the individual dfs have duplicates, but none of them do.
Then I checked <code>df.index.is_unique</code> for each of them and they all return <code>True</code>.
I also tried <code>ignore_index=True</code>, but I get the same error again.
I'm not sure what the problem is here.</p>
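<p>For reference, this is roughly how I checked for duplicate IDs and index uniqueness (a sketch of the checks described above; every dataframe passes):</p>
<pre><code>for i, dfs in enumerate(info_dfs):
    print(i, dfs['ID'].duplicated().any())         # all print False
    print(i, dfs.set_index('ID').index.is_unique)  # all print True
</code></pre>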
|
<python><python-3.x><pandas><numpy><concatenation>
|
2023-01-15 06:37:02
| 1
| 1,842
|
MTALY
|