| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
74,916,782
| 3,624,549
|
Python argparse - make arguments required or optional based on another argument
|
<p>How can a program accept/validate a set of parameters, depending on a previous parameter/option?</p>
<p>e.g.:</p>
<pre><code>params:
<action1> -p <path> -name <name> -t <type>
<action2> -v <value> -name <name>
<action3> -p <path> -t <type>
<action4> -m <mode1 | mode2>
--verbose
--test
--..
</code></pre>
<p>So if one of the <code>actionX</code> parameters is used (only one can be used), additional parameters might be required.
For instance, for <code>action2</code>, <code>-v</code> and <code>-name</code> are required.</p>
<p><strong>valid input:</strong></p>
<pre><code>python myparser.py action2 -v 11 -name something --test --verbose
python myparser.py action4 -m mode1
python myparser.py --test
</code></pre>
<p><strong>invalid input:</strong></p>
<pre><code>python myparser.py action2 -v 11
python myparser.py action4 -n name1
</code></pre>
<p>Can <code>argparse</code> validate this, or is it better to add all of them as <em>optional</em> and validate them later on?</p>
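One way to express this kind of dependency with the standard library is `argparse` sub-commands; a minimal sketch (the action names and options mirror the example above; a parent parser carries the shared flags, which should be passed after the action word, since a sub-parser's defaults would otherwise reset them):

```python
import argparse

def build_parser():
    # Shared flags, reused by the top level and by every action via parents=
    common = argparse.ArgumentParser(add_help=False)
    common.add_argument("--verbose", action="store_true")
    common.add_argument("--test", action="store_true")

    parser = argparse.ArgumentParser(parents=[common])
    # The action word picks a sub-parser; since subparsers are not required
    # by default, a bare "--test" with no action is still accepted.
    sub = parser.add_subparsers(dest="action")

    a2 = sub.add_parser("action2", parents=[common])
    a2.add_argument("-v", required=True)
    a2.add_argument("-name", required=True)

    a4 = sub.add_parser("action4", parents=[common])
    a4.add_argument("-m", choices=["mode1", "mode2"], required=True)
    return parser

parser = build_parser()
print(parser.parse_args(["action2", "-v", "11", "-name", "something", "--test"]))
print(parser.parse_args(["--test"]))
# "action2 -v 11" (missing -name) makes parse_args exit with an error
```

Missing required sub-options (like `-name` for `action2`) are then rejected by `argparse` itself rather than by hand-written validation.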
|
<python><argparse>
|
2022-12-26 01:44:36
| 1
| 2,420
|
Alg_D
|
74,916,729
| 4,159,193
|
Install matplotlib for Python with MSYS MinGW 64
|
<p>I want to install matplotlib for Python using MSYS MinGW x64.
The command</p>
<pre><code>$ pacman -S mingw-w64-x86_64-python-matplotlib
</code></pre>
<p>had failed previously. Then I made some changes, and now I want to try the above command again, but I only get these error messages:</p>
<pre><code>$ pacman -S mingw-w64-x86_64-python-matplotlib
resolving dependencies...
looking for conflicting packages...
Packages (12) mingw-w64-x86_64-libimagequant-4.0.4-2 mingw-w64-x86_64-libraqm-0.9.0-1
mingw-w64-x86_64-python-cycler-0.11.0-2 mingw-w64-x86_64-python-dateutil-2.8.2-3
mingw-w64-x86_64-python-fonttools-4.38.0-1 mingw-w64-x86_64-python-packaging-22.0-1
mingw-w64-x86_64-python-pillow-9.3.0-2 mingw-w64-x86_64-python-pyparsing-3.0.9-3
mingw-w64-x86_64-python-pytz-2022.7-1 mingw-w64-x86_64-python-six-1.16.0-3
mingw-w64-x86_64-qhull-2020.2-2 mingw-w64-x86_64-python-matplotlib-3.6.2-1
Total Installed Size: 60.64 MiB
:: Proceed with installation? [Y/n] Y
imelf@DESKTOP-CFHKUQA MINGW64 ~
$ pacman -S mingw-w64-x86_64-python-matplotlib
resolving dependencies...
looking for conflicting packages...
Packages (12) mingw-w64-x86_64-libimagequant-4.0.4-2 mingw-w64-x86_64-libraqm-0.9.0-1
mingw-w64-x86_64-python-cycler-0.11.0-2 mingw-w64-x86_64-python-dateutil-2.8.2-3
mingw-w64-x86_64-python-fonttools-4.38.0-1 mingw-w64-x86_64-python-packaging-22.0-1
mingw-w64-x86_64-python-pillow-9.3.0-2 mingw-w64-x86_64-python-pyparsing-3.0.9-3
mingw-w64-x86_64-python-pytz-2022.7-1 mingw-w64-x86_64-python-six-1.16.0-3
mingw-w64-x86_64-qhull-2020.2-2 mingw-w64-x86_64-python-matplotlib-3.6.2-1
Total Installed Size: 60.64 MiB
:: Proceed with installation? [Y/n] Y
(12/12) checking keys in keyring [###############################] 100%
(12/12) checking package integrity [###############################] 100%
(12/12) loading package files [###############################] 100%
(12/12) checking for file conflicts [###############################] 100%
error: failed to commit transaction (conflicting files)
mingw-w64-x86_64-python-six: /mingw64/lib/python3.10/site-packages/__pycache__/six.cpython-310.pyc exists in filesystem
mingw-w64-x86_64-python-six: /mingw64/lib/python3.10/site-packages/six.py exists in filesystem
mingw-w64-x86_64-python-cycler: /mingw64/lib/python3.10/site-packages/__pycache__/cycler.cpython-310.pyc exists in filesystem
mingw-w64-x86_64-python-cycler: /mingw64/lib/python3.10/site-packages/cycler.py exists in filesystem
mingw-w64-x86_64-python-dateutil: /mingw64/lib/python3.10/site-packages/dateutil/__init__.py exists in filesystem
mingw-w64-x86_64-python-dateutil: /mingw64/lib/python3.10/site-packages/dateutil/__pycache__/__init__.cpython-310.pyc exists in filesystem
mingw-w64-x86_64-python-dateutil: /mingw64/lib/python3.10/site-packages/dateutil/__pycache__/_common.cpython-310.pyc exists in filesystem
mingw-w64-x86_64-python-dateutil: /mingw64/lib/python3.10/site-packages/dateutil/__pycache__/_version.cpython-310.pyc exists in filesystem
mingw-w64-x86_64-python-dateutil: /mingw64/lib/python3.10/site-packages/dateutil/__pycache__/easter.cpython-310.pyc exists in filesystem
</code></pre>
<p>My question: how can I delete the cached files, so that I get meaningful error messages again? Or is there another way to install matplotlib with MSYS MinGW x64?</p>
|
<python><matplotlib><msys2><pacman-package-manager>
|
2022-12-26 01:26:15
| 1
| 546
|
flori10
|
74,916,557
| 10,626,286
|
Error decode/deserialize Avro with Python from Kafka
|
<p>Hello all, I have Debezium listening for changes on Postgres and putting events on a Kafka topic. Everything works great, except that I have issues decoding the payloads; I have tried both methods below, but no luck.</p>
<p><a href="https://i.sstatic.net/FIj5O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FIj5O.png" alt="enter image description here" /></a></p>
<h1>SQL Insert Statement</h1>
<pre><code>INSERT INTO public.student
(id, name)
VALUES (45,'soumil 2')
</code></pre>
<h1>Docker Compose files</h1>
<pre><code>version: "3.7"
services:
  postgres:
    image: debezium/postgres:13
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=docker
      - POSTGRES_DB=exampledb

  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.3
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-enterprise-kafka:5.5.3
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9991
    ports:
      - "9092:9092"
      - "29092:29092"

  debezium:
    image: debezium/connect:1.4
    environment:
      BOOTSTRAP_SERVERS: kafka:29092
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: connect_configs
      OFFSET_STORAGE_TOPIC: connect_offsets
      KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
    depends_on: [kafka]
    ports:
      - 8083:8083

  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.3
    environment:
      - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2181
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_LISTENERS=http://schema-registry:8081,http://localhost:8081
    ports:
      - 8081:8081
    depends_on: [zookeeper, kafka]
</code></pre>
<h2>EXEC commands</h2>
<pre><code>docker run --tty --network debezium_default confluentinc/cp-kafkacat kafkacat -b kafka:29092 -C -s key=s -s value=avro -r http://schema-registry:8081 -t postgres.public.student
</code></pre>
<h1>Works fine when I exec into the container</h1>
<p><a href="https://i.sstatic.net/yNwkv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yNwkv.png" alt="enter image description here" /></a></p>
<h1>Kafka Python code</h1>
<pre><code>try:
    import kafka
    import json
    import requests
    import os
    import sys
    from json import dumps
    from kafka import KafkaProducer
    from kafka import KafkaConsumer
    from confluent_kafka.schema_registry import SchemaRegistryClient
    import io
    from confluent_kafka import Consumer, KafkaError
    from avro.io import DatumReader, BinaryDecoder
    import avro.schema
    from confluent_kafka.avro.serializer.message_serializer import MessageSerializer
    from confluent_kafka.avro.cached_schema_registry_client import CachedSchemaRegistryClient
    from confluent_kafka.avro.serializer import (SerializerError,  # noqa
                                                 KeySerializerError,
                                                 ValueSerializerError)
    print("ALL ok")
except Exception as e:
    print("Error : {} ".format(e))

SCHEME_REGISTERY = "http://schema-registry:8081"
TOPIC = "postgres.public.student"
BROKER = "localhost:9092"

schema = """
{
    "type": "record",
    "name": "Key",
    "namespace": "postgres.public.student",
    "fields": [
        {
            "name": "id",
            "type": "int"
        },
        {
            "name": "name",
            "type": "string"
        }
    ],
    "connect.name": "postgres.public.student.Key"
}
"""
schema = avro.schema.Parse(schema)
reader = DatumReader(schema)


def decode_method_1(msg_value):
    message_bytes = io.BytesIO(msg_value)
    decoder = BinaryDecoder(message_bytes)
    event_dict = reader.read(decoder)
    return event_dict


def decode_method_2(msg_value):
    message_bytes = io.BytesIO(msg_value)
    message_bytes.seek(5)
    decoder = BinaryDecoder(message_bytes)
    event_dict = reader.read(decoder)
    return event_dict


def fetch_schema():
    from confluent_kafka.schema_registry import SchemaRegistryClient
    sr = SchemaRegistryClient({"url": 'http://localhost:8081'})
    subjects = sr.get_subjects()
    for subject in subjects:
        schema = sr.get_latest_version(subject)
        print(schema.version)
        print(schema.schema_id)
        print(schema.schema.schema_str)


def main():
    print("Listening *****************")
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=[BROKER],
        auto_offset_reset='latest',
        enable_auto_commit=False,
        group_id="some group"
    )
    for msg in consumer:
        msg_value = msg.value
        print("\n")
        print("msg_value", msg_value)
        print("decode_method_1", decode_method_1(msg_value))
        print("decode_method_2", decode_method_2(msg_value))
        print("\n")


main()
</code></pre>
<h1>Outputs</h1>
<pre><code>msg_value b'\x00\x00\x00\x00\x02\x00\x02Z\x02\x10soumil 2\x161.4.2.Final\x14postgresql\x10postgres\xac\xa6\xe4\xbc\xa9a\x00\nfalse\x12exampledb\x0cpublic\x0estudent\x02\xf4\x07\x02\xa0\x9f\xe5\x16\x00\x02c\x02\xd4\xa8\xe4\xbc\xa9a\x00'
decode_method_1 {'id': 0, 'name': ''}
decode_method_2 {'id': 0, 'name': 'Z'}
</code></pre>
<p>Your help would be great, as I am not able to resolve the issue. Here are some references.</p>
<h1>References</h1>
<ul>
<li><p><a href="https://stackoverflow.com/questions/44407780/how-to-decode-deserialize-avro-with-python-from-kafka">How to decode/deserialize Avro with Python from Kafka</a></p>
</li>
<li><p><a href="https://groups.google.com/g/confluent-platform/c/A7B6uSnJa5k" rel="nofollow noreferrer">https://groups.google.com/g/confluent-platform/c/A7B6uSnJa5k</a></p>
</li>
<li><p><a href="https://medium.com/swlh/how-to-deserialize-avro-messages-in-python-faust-400118843447" rel="nofollow noreferrer">https://medium.com/swlh/how-to-deserialize-avro-messages-in-python-faust-400118843447</a></p>
</li>
<li><p><a href="https://github.com/confluentinc/confluent-kafka-python/blob/master/examples/avro_consumer.py" rel="nofollow noreferrer">https://github.com/confluentinc/confluent-kafka-python/blob/master/examples/avro_consumer.py</a></p>
</li>
<li><p><a href="https://pypi.org/project/confluent-kafka/0.9.4/" rel="nofollow noreferrer">https://pypi.org/project/confluent-kafka/0.9.4/</a></p>
</li>
</ul>
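For context on the byte string shown in the output above: a Confluent-serialized message is not a bare Avro datum. It starts with a magic byte <code>0x00</code> and a 4-byte big-endian schema-registry ID, and only then the Avro body; decoding also has to use the schema of that part of the message (the <code>Key</code> schema above cannot decode the value, which Debezium wraps in an <code>Envelope</code> record). A stdlib-only sketch of this framing for the <code>Key</code> record (the schema ID 2 is made up; real code would normally delegate this to confluent-kafka's Avro deserializer):

```python
import io
import struct

def zigzag_encode(n):
    # Avro ints/longs are zig-zag encoded varints
    z = (n << 1) ^ (n >> 63)
    out = bytearray()
    while True:
        b = z & 0x7F
        z >>= 7
        if z:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def zigzag_decode(buf):
    acc, shift = 0, 0
    while True:
        b = buf.read(1)[0]
        acc |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):
            break
    return (acc >> 1) ^ -(acc & 1)

def encode_key(record, schema_id):
    # Confluent wire format: 0x00 magic byte, 4-byte big-endian schema id, Avro body
    name = record["name"].encode()
    body = zigzag_encode(record["id"]) + zigzag_encode(len(name)) + name
    return b"\x00" + struct.pack(">I", schema_id) + body

def decode_key(payload):
    magic, schema_id = struct.unpack(">bI", payload[:5])
    assert magic == 0, "not Confluent wire format"
    buf = io.BytesIO(payload[5:])
    rec_id = zigzag_decode(buf)
    length = zigzag_decode(buf)
    return {"id": rec_id, "name": buf.read(length).decode()}

msg = encode_key({"id": 45, "name": "soumil 2"}, 2)
print(msg)              # starts with b'\x00\x00\x00\x00\x02', then b'Z\x10soumil 2'
print(decode_key(msg))  # {'id': 45, 'name': 'soumil 2'}
```

Note that the bytes after the header match the start of the captured <code>msg_value</code> above (<code>\x02Z\x02\x10soumil 2</code> is the same id/name pair inside the larger Envelope record), which is why <code>decode_method_1</code>, with no header skip and the wrong schema, returns garbage.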
|
<python><apache-kafka><kafka-consumer-api>
|
2022-12-26 00:24:42
| 2
| 750
|
Soumil Nitin Shah
|
74,916,479
| 14,293,274
|
replicate numpy style array initialisation in c++
|
<p>I'm trying to replicate basic NumPy functionalities in C++ (for practice).</p>
<p>I'm stuck replicating the following initialization; here's how it looks in Python/NumPy:</p>
<pre><code>x = np.array([[1], [2]])
</code></pre>
<p>which evaluates to:</p>
<pre><code>array([[1],
[2]])
</code></pre>
<p>In my C++ code, I decided to use <code>std::vector</code>. I currently have two functions to initialize arrays (I want to use overloading so that I can always use the same function name when initializing an array).</p>
<p>function1 declaration for 1D arrays:</p>
<pre><code>std::vector<std::vector<int>> init(std::vector<int> vec);
</code></pre>
<p>I can use this like so:</p>
<pre><code>std::vector<std::vector<int>> vec1d = init({0, 1, 2});
</code></pre>
<p>function2 declaration for 2D arrays:</p>
<pre><code>std::vector<std::vector<int>> init(std::vector<std::vector<int>> vec);
</code></pre>
<p>I can use this like so:</p>
<pre><code>std::vector<std::vector<int>> vec2d = init({{0, 1, 2}, {4, 5, 6}});
</code></pre>
<p>However, I'm unable to implement the NumPy-like initialization shown at the top of my question.
If I try to run the following line:</p>
<pre><code>std::vector<std::vector<int>> vec2d = init({{0}, {1}});
</code></pre>
<p>I get the following error:</p>
<pre><code>error: call to 'init' is ambiguous
</code></pre>
<p>I thought that the 2D version of the function would be called.</p>
<p>What do I have to change (or what other overloaded functions do I have to add) in order for this type of initialization to work?</p>
|
<python><c++><arrays><numpy><initialization>
|
2022-12-25 23:54:58
| 0
| 594
|
koegl
|
74,916,461
| 16,319,191
|
How to drop rows (or subset other rows) based on values in lists in pandas? Create mutually exclusive subsets of dfs
|
<p>How can I drop rows that have at least one element from both lists? I'm looking for something iterative over more than 100 columns.
A minimal example with 3 columns is:</p>
<pre><code>list1 = ["abc1", "def"]
list2 = ["ghi", "ghj"]
df = pd.DataFrame({"index": [0,1,2,3,4,5,6,7,8],
"col1": ["abc1", "ghj", "ghi", "abc1", "","def","ghj","abc1","abc1"],
"col2": ["abc1", "abc1", "dfg", "dfg", "ghi","dfg","","ghj","abc1"],
"col3": ["abc1", "qrst", "dfg", "dfg", "dfg","dfg","abc1","ghi","abc1"]})
</code></pre>
<pre><code>   index  col1  col2  col3
0      0  abc1  abc1  abc1
1      1   ghj  abc1  qrst
2      2   ghi   dfg   dfg
3      3  abc1   dfg   dfg
4      4         ghi   dfg
5      5   def   dfg   dfg
6      6   ghj         abc1
7      7  abc1   ghj   ghi
8      8  abc1  abc1  abc1
</code></pre>
<p>Row numbers 1, 6 and 7 must be dropped because they have elements from both lists. The final df should be:</p>
<pre><code>   index  col1  col2  col3
0      0  abc1  abc1  abc1
1      2   ghi   dfg   dfg
2      3  abc1   dfg   dfg
3      4         ghi   dfg
4      5   def   dfg   dfg
5      8  abc1  abc1  abc1
</code></pre>
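One vectorised way to get there, sketched with the example data above (`isin` plus `any(axis=1)` scales to any number of columns, so no per-column loop is needed):

```python
import pandas as pd

list1 = ["abc1", "def"]
list2 = ["ghi", "ghj"]
df = pd.DataFrame({"index": [0, 1, 2, 3, 4, 5, 6, 7, 8],
                   "col1": ["abc1", "ghj", "ghi", "abc1", "", "def", "ghj", "abc1", "abc1"],
                   "col2": ["abc1", "abc1", "dfg", "dfg", "ghi", "dfg", "", "ghj", "abc1"],
                   "col3": ["abc1", "qrst", "dfg", "dfg", "dfg", "dfg", "abc1", "ghi", "abc1"]})

cols = df.columns.drop("index")          # works the same for 100+ columns
has1 = df[cols].isin(list1).any(axis=1)  # row contains something from list1
has2 = df[cols].isin(list2).any(axis=1)  # row contains something from list2

# keep only rows that do NOT touch both lists
out = df[~(has1 & has2)].reset_index(drop=True)
print(out)  # rows with original index 1, 6, 7 are gone
```

Mutually exclusive subsets fall out of the same masks, e.g. `df[has1 & ~has2]` and `df[~has1 & has2]`.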
|
<python><pandas><subset><drop>
|
2022-12-25 23:49:26
| 1
| 392
|
AAA
|
74,916,343
| 3,520,363
|
python script removes file extension
|
<p>I have some video files</p>
<pre><code>A Single Man (2009).avi
Accident (2009).mkv
Adventureland (2009).mp4
</code></pre>
<p>I use a python script that rename in this way</p>
<pre><code>A Single Man (2009) [Drama]
Accident (2009) [Thriller]
Adventureland (2009) [Comedy]
</code></pre>
<p>What is the problem? The script removes the file extension, and I don't understand why.</p>
<p><strong>Part 1</strong> : the function makes an API request to TMDb to search for a movie with the specified name and year, using the supplied API key. If the request was successful, it extracts the movie data from the response.</p>
<pre><code>import os
import re
import requests

# Replace "YOUR_API_KEY" with your TMDb API key
api_key = 'My Api KEY'

# Let's create a function to get the movie ID from TMDb
def get_movie_id(name, year):
    # We encode the name of the movie to insert it in the URL
    name_encoded = requests.utils.quote(name)
    # We build the URL of the API request
    url = f'https://api.themoviedb.org/3/search/movie?api_key={api_key}&query={name_encoded}&year={year}'
    # We send the API request
    response = requests.get(url)
    # We verify that the request was successful
    if response.status_code == 200:
        # We extract the movie data from the response
        data = response.json()
</code></pre>
<p><strong>Part 2</strong>: after getting the search results from the API request response, the code checks that there are search results. If there are, it takes the first movie of the search results and extracts the movie ID from it. Then, it returns the movie ID. If there are no search results or if the API request was unsuccessful, the function returns None.</p>
<pre><code>        # We verify that there are search results
        if data['total_results'] > 0:
            # Let's take the first movie of the search results
            movie = data['results'][0]
            # We extract the movie ID from the data
            movie_id = movie['id']
            return movie_id
        else:
            return None
    else:
        return None
</code></pre>
<p><strong>Part 3</strong>: this code defines a function called get_movie_genre that takes one argument movie_id, which represents the ID of a movie. The function then makes an API request to TMDb to get the movie data with the specified ID, using the supplied API key. If the request is successful, it extracts the film genre from the obtained data and returns it. If the request was unsuccessful, the function returns None.</p>
<pre><code># Let's create a function to get the movie genre from TMDb
def get_movie_genre(movie_id):
    # We build the URL of the API request
    url = f'https://api.themoviedb.org/3/movie/{movie_id}?api_key={api_key}'
    # We send the API request
    response = requests.get(url)
    # We verify that the request was successful
    if response.status_code == 200:
        # We extract the movie data from the response
        data = response.json()
        # We extract the film genre from the data
        genre = data['genres'][0]['name']
        return genre
    else:
        return None
</code></pre>
<p><strong>Part 4</strong>: this code is a loop that iterates through all the files in the current directory. For each file, the code checks that it is a video file (ending in .avi or .mp4). If the file is a video, the code extracts the movie name and year from the filename using a regular expression. Then, the code looks up the movie on TMDb using the get_movie_id function we saw earlier.</p>
<p>If the movie is found, the code gets the genre of the movie using the get_movie_genre function we saw earlier. If the genre is found, the code constructs a new filename using the movie name, year, and genre. If the new name doesn't already exist, the code prints the old file name and the new computed name. Finally, the code renames the file to the new name.</p>
<pre><code># We loop over the files in the directory
for file in os.listdir():
    # We verify that the file is a video file
    if file.endswith('.avi') or file.endswith('.mp4'):
        # We extract the movie name and year from the file name
        match = re.search(r'(.*) \(((?:19|20)\d\d)\)', file)
        name = match.group(1)
        year = match.group(2)
        # We search for the movie on TMDb
        movie_id = get_movie_id(name, year)
        # We verify that the movie has been found
        if movie_id:
            # We get the genre of the film
            genre = get_movie_genre(movie_id)
            # Let's verify that the genre has been found
            if genre:
                # Let's construct the new file name
                new_name = f'{name} ({year}) [{genre}]'
                # We check if a file with the new name already exists
                if not os.path.exists(new_name):
                    # Print the file name and the new computed name
                    print(f'{file} -> {new_name}')
                    # Let's rename the file
                    os.rename(file, new_name)
</code></pre>
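The likely cause: the regex captures the movie name without its extension, and <code>new_name</code> never adds the extension back, so <code>os.rename</code> produces an extension-less file. Keeping the suffix from <code>os.path.splitext</code> fixes it; a small sketch (<code>build_new_name</code> is a hypothetical helper, not part of the script above):

```python
import os
import re

def build_new_name(file, genre):
    # Split off the extension first, then rebuild the name with it kept
    base, ext = os.path.splitext(file)  # e.g. ('Accident (2009)', '.mkv')
    match = re.search(r'(.*) \(((?:19|20)\d\d)\)', base)
    name, year = match.group(1), match.group(2)
    return f'{name} ({year}) [{genre}]{ext}'

print(build_new_name('Accident (2009).mkv', 'Thriller'))
# Accident (2009) [Thriller].mkv
```

In the loop itself, the equivalent one-line change would be `new_name = f'{name} ({year}) [{genre}]{os.path.splitext(file)[1]}'`.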
|
<python>
|
2022-12-25 23:12:46
| 0
| 380
|
user3520363
|
74,916,307
| 9,135,359
|
How to cleanup ‘orphaned’ text in poorly written html using Python?
|
<p>Let me explain: I scraped HTML off a poorly written website and wish to clean up the code by encapsulating each line within a <code><div></code> tag, keeping the existing <code>bold</code>, <code>italic</code> and other formatting information, and keeping the <code>images</code> and <code>links</code>. I will then format everything and prettify it once cleaned.</p>
<p>Below are 3 sample lines from the website:</p>
<pre><code>line1 = '1. O: upper border of 1st rib &amp; cartilage.<div>2. I: inferior surface of middle third of the clavicle.&nbsp;</div><div>3. NS: nerve to subclavius.&nbsp;</div><div>4. A: anchors &amp; depresses clavicle.&nbsp;</div><div><br></div><div><div><img src="paste-3461743641109.jpg"></div><div><span style="font-weight: bolder">Image:&nbsp;</span>Gray, Henry.&nbsp;<i>Anatomy of the Human Body.</i>&nbsp;Philadelphia: Lea &amp; Febiger, 1918; Bartleby.com, 2000.&nbsp;<a href="">www.bartleby.com/107/</a>&nbsp;[Accessed 15 Nov. 2018].&nbsp;</div></div>'
line2 = '''<div><i>CVS</i></div><div>1. Cardiovascular conditioning &amp; improves postural hypotension<br></div><div><b><span style="font-weight: 400;">2. Improves ventilation</span></b><br></div><div><b><span style="font-weight: 400;"><br></span></b></div><div><b><span style="font-weight: 400;"><i>BONES</i></span></b></div><div>3. Promote &amp; maintain bone density, prevent osteoporosis<b><span style="font-weight: 400;"><br></span></b></div><div><br></div><div><i>MUSCLES &amp; JOINTS</i></div>4. Safe reintroduction of the patient to vertical position<div><div>5. Facilitate early weight bearing</div><div><b><span style="font-weight: 400;">6. Prevent contractures</span></b><br></div><div><b><span style="font-weight: 400;"><br></span></b></div><div><b><span style="font-weight: 400;"><i>SKIN</i></span></b></div><div>7. Decreases prolonged bed rest &amp; its complications</div></div><div><br></div><div><i>PSYCHOLOGY</i></div><div>8. Improves psychological outlook &amp; motivation</div>'''
line3 = '''ORIGIN<div>1. Branch of the posterior cord of the brachial plexus -&nbsp;C5, C6.</div><div><br></div><div>COURSE</div>2. Passes out of the axilla, through the quadrangular space with posterior circumflex humeral vessels, to the upper arm where it's in contact with surgical neck of the humerus.&nbsp;</div><div><br></div><div>BRANCHES</div><div><i><font color="#ff086c">3. Sensory supply to small 'regimental patch' over shoulder.</font></i></div><div><i><font color="#ff086c">4. Anterior - supplies the deltoid.&nbsp;</font></i></div><div><i><font color="#ff086c">5. Posterior - supplies teres minor, becomes upper lateral cutaneous nerve of the arm.&nbsp;</font></i></div><i><font color="#ff086c"><img src="paste-6103148528016.jpg"></font></i></div><div><div><b style="font-weight: bold; ">Image:&nbsp;</b>Gray, Henry.&nbsp;<i>Anatomy of the Human Body.</i>&nbsp;Philadelphia: Lea &amp; Febiger, 1918; Bartleby.com, 2000.&nbsp;<a href="https://www.bartleby.com/107/">www.bartleby.com/107/</a>&nbsp;[Accessed 16 Nov. 2018].</div></div>'''
</code></pre>
<p>You will notice that in <code>line1</code> there is no <code><div></code> tag at all at the beginning, whereas <code>line2</code> starts with a tag but point <code>4</code> within is not enclosed in such a tag. <code>line3</code> has multiple strings not enclosed in <code><div></code> tags.</p>
<p>I wrote the following to correct the first line (<code>line1</code>):</p>
<pre><code># 1. First, find all lines enclosed in <div> tags
temp_soup = BeautifulSoup(html.unescape(line), "html.parser")
soup = BeautifulSoup("", "html.parser")
for tag in temp_soup.find_all('div'):
    tag.extract()
    soup.append(tag)

# 2. Then, ensure that the first line starts with the <div> tag, else isolate
# the first sentence and enclose it between <div> tags
new_div = soup.new_tag("div")
new_div.string = str(temp_soup)
soup.insert(0, new_div)
print(soup)
</code></pre>
<p>However, the above code does not fix the second line. Moreover, it cannot correct lines with multiple strings not enclosed in <code><div></code> tags.</p>
<p>Could someone suggest an algorithm to clean up all 3 lines? I've tried <code>BeautifulSoup.prettify()</code> and <code>lxml clean_html()</code> to no avail.</p>
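One possible algorithm, sketched under the assumption that every top-level node that is not already a <code><div></code> should get its own wrapper (consecutive orphan strings and inline tags each get a separate <code><div></code> here; merging such runs into one <code><div></code> would need a little more bookkeeping):

```python
import html
from bs4 import BeautifulSoup, NavigableString

def wrap_orphans(line):
    soup = BeautifulSoup(html.unescape(line), "html.parser")
    # Snapshot the top-level nodes, then wrap every non-<div> in a fresh <div>
    for node in list(soup.contents):
        if getattr(node, "name", None) == "div":
            continue  # already wrapped
        if isinstance(node, NavigableString) and not node.strip():
            continue  # skip pure-whitespace text nodes
        node.wrap(soup.new_tag("div"))
    return str(soup)

print(wrap_orphans('ORIGIN<div>1. Branch of the posterior cord.</div>trailing'))
# <div>ORIGIN</div><div>1. Branch of the posterior cord.</div><div>trailing</div>
```

The parser also quietly drops the stray unmatched <code></div></code> closers seen in <code>line3</code>, so the output is at least well-formed; orphans nested deeper than the top level would need the same pass applied recursively.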
|
<python><html><beautifulsoup>
|
2022-12-25 23:01:58
| 1
| 844
|
Code Monkey
|
74,916,234
| 4,828,720
|
Using Python Social Auth with Flask-Login
|
<p><a href="https://python-social-auth.readthedocs.io/en/latest/configuration/flask.html" rel="nofollow noreferrer">https://python-social-auth.readthedocs.io/en/latest/configuration/flask.html</a> says "The application works quite well with Flask-Login" however it does not specify what exactly is needed to be implemented and setup.</p>
<p>I used the code snippets from <a href="https://github.com/maxcountryman/flask-login" rel="nofollow noreferrer">https://github.com/maxcountryman/flask-login</a> to create a tiny PoC of Flask-Login. The user database is a simple dictionary here. It works fine.</p>
<pre class="lang-python prettyprint-override"><code># https://github.com/maxcountryman/flask-login
import flask
import flask_login

app = flask.Flask(__name__)
app.secret_key = 'throwaway97b1d9abffce3c5dbf1d3d79079166eede16ca098550'

login_manager = flask_login.LoginManager()
login_manager.init_app(app)

users = {'foo@bar.tld': {'password': 'secret'}}


class User(flask_login.UserMixin):
    pass


@login_manager.user_loader
def user_loader(email):
    if email not in users:
        return
    user = User()
    user.id = email
    return user


@login_manager.request_loader
def request_loader(request):
    email = request.form.get('email')
    if email not in users:
        return
    user = User()
    user.id = email
    return user


@app.route('/login', methods=['GET', 'POST'])
def login():
    if flask.request.method == 'GET':
        return '''
               <form action='login' method='POST'>
                <input type='text' name='email' id='email' placeholder='email'/>
                <input type='password' name='password' id='password' placeholder='password'/>
                <input type='submit' name='submit'/>
               </form>
               '''
    email = flask.request.form['email']
    if email in users and flask.request.form['password'] == users[email]['password']:
        user = User()
        user.id = email
        flask_login.login_user(user)
        return flask.redirect(flask.url_for('protected'))
    return 'Bad login'


@app.route('/protected')
@flask_login.login_required
def protected():
    return 'Logged in as: ' + flask_login.current_user.id


@app.route('/logout')
def logout():
    flask_login.logout_user()
    return 'Logged out'


@login_manager.unauthorized_handler
def unauthorized_handler():
    return 'Unauthorized', 401
</code></pre>
<p>What do I need to change and add to be able to login to that Flask-Login using a authentication method from Python Social Auth, e. g. a <a href="https://python-social-auth.readthedocs.io/en/latest/backends/discourse.html" rel="nofollow noreferrer">properly set up Discourse forum</a>? The goal would be accessing <code>/protected</code> and seeing information that was received from the serviced used via PSA to login.</p>
|
<python><flask><python-social-auth>
|
2022-12-25 22:40:48
| 0
| 1,190
|
bugmenot123
|
74,916,211
| 14,427,714
|
How to iterate divs in selenium python
|
<p>I want to iterate over this HTML div:</p>
<pre><code><div class="ag-center-cols-container" ref="eCenterContainer" role="rowgroup" unselectable="on" style="width: 1550px; height: 118.4px;"><div role="row" row-index="0" aria-rowindex="4" row-id="RGB4075DC397C648815FEFFFF0629B28AF02C6D6A80V3DA701" comp-id="4579" class="ag-row ag-row-focus ag-row-even ag-row-level-0 ag-row-group ag-row-group-contracted ag-row-position-absolute ag-row-first" aria-selected="false" style="height: 29.6px; transform: translateY(0px); " aria-label="Press SPACE to select this row."><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="122" comp-id="4580" col-id="recordingUrl" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height text-center ag-cell-value" style="width: 90px; left: 0px; "><div><a href="javascript:;" class="recording-cell__btn"><i class="fa fa-play"></i></a> <a href="javascript:;" class="recording-cell__btn" style="display: none;"><i class="red fa fa-stop"></i></a></div></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="123" comp-id="4581" col-id="campaignName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 90px; "><span><span title="Click to Filter" class="filterable-cell contrast">Window Installation Inbound</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="124" comp-id="4582" col-id="publisherName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 299px; "><span><span title="Click to Filter" class="filterable-cell contrast">Aef media group</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="125" comp-id="4583" col-id="inboundPhoneNumber" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-focus ag-cell-value" style="width: 139px; left: 508px; "><span><span title="" class="filterable-cell ">+19104458082</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" 
aria-colindex="126" comp-id="4584" col-id="number" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 139px; left: 647px; "><span><span title="Click to Filter" class="filterable-cell ">+18333361696</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="127" comp-id="4585" col-id="timeToCallInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 786px; "><span><span title="Click to Filter" class="filterable-cell ">00:00:00</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="128" comp-id="4586" col-id="isDuplicate" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 70px; left: 856px; "><span><span title="Click to Filter" class="filterable-cell ">Yes</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="129" comp-id="4587" col-id="endCallSource" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height text-center ag-cell-value" style="width: 56px; left: 926px; "><span><span title="Caller" class="filterable-cell fa fa-mobile f-s-16"></span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="130" comp-id="4588" col-id="timeToConnectInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 982px; "><span>00:00:05</span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="131" comp-id="4589" col-id="targetName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 1052px; "><span><span title="Click to Filter" class="filterable-cell contrast">Window Ever</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="132" comp-id="4590" col-id="conversionAmount" class="ag-cell ag-cell-not-inline-editing 
ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 69px; left: 1261px; "><span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="133" comp-id="4591" col-id="payoutAmount" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 1330px; "><span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="134" comp-id="4592" col-id="callLengthInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 1400px; "><span><span title="Click to Filter" class="filterable-cell ">00:01:04</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="135" comp-id="4593" col-id="action" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 80px; left: 1470px; "><span><div>
<a title="Block Number" class="btn btn-function block-number-btn">
<i class="fa fa-ban "></i>
</a>
<a title="Add Tag Annotation" class="btn btn-function m-l-5 annotate-call-btn">
<i class="fa fa-pencil"></i>
</a>
<a title="Adjust Call Payments" class="btn btn-function m-l-5 adjust-call-payment-btn ">
<i class="fa fa-usd"></i>
</a>
</div></span></div></div><div role="row" row-index="1" aria-rowindex="5" row-id="RGBDA27E840095A2D3399CFA05C93C0A16964830F8FV3EHS01" comp-id="4595" class="ag-row ag-row-no-focus ag-row-odd ag-row-level-0 ag-row-group ag-row-group-contracted ag-row-position-absolute" aria-selected="false" style="height: 29.6px; transform: translateY(29.6px); " aria-label="Press SPACE to select this row."><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="122" comp-id="4596" col-id="recordingUrl" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height text-center ag-cell-value" style="width: 90px; left: 0px; "><div><a href="javascript:;" class="recording-cell__btn"><i class="fa fa-play"></i></a> <a href="javascript:;" class="recording-cell__btn" style="display: none;"><i class="red fa fa-stop"></i></a></div></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="123" comp-id="4597" col-id="campaignName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 90px; "><span><span title="Click to Filter" class="filterable-cell contrast">Window Installation Inbound</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="124" comp-id="4598" col-id="publisherName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 299px; "><span><span title="Click to Filter" class="filterable-cell contrast">Aef media group</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="125" comp-id="4599" col-id="inboundPhoneNumber" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 139px; left: 508px; "><span><span title="" class="filterable-cell ">+17137732947</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="126" comp-id="4600" col-id="number" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 
139px; left: 647px; "><span><span title="Click to Filter" class="filterable-cell ">+18333361696</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="127" comp-id="4601" col-id="timeToCallInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 786px; "><span><span title="Click to Filter" class="filterable-cell ">00:00:00</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="128" comp-id="4602" col-id="isDuplicate" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 70px; left: 856px; "><span><span title="Click to Filter" class="filterable-cell ">Yes</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="129" comp-id="4603" col-id="endCallSource" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height text-center ag-cell-value" style="width: 56px; left: 926px; "><span><span title="Caller" class="filterable-cell fa fa-mobile f-s-16"></span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="130" comp-id="4604" col-id="timeToConnectInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 982px; "><span>00:00:05</span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="131" comp-id="4605" col-id="targetName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 1052px; "><span><span title="Click to Filter" class="filterable-cell contrast">Window Ever</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="132" comp-id="4606" col-id="conversionAmount" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 69px; left: 1261px; "><span></span></div><div tabindex="-1" unselectable="on" 
role="gridcell" aria-colindex="133" comp-id="4607" col-id="payoutAmount" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 1330px; "><span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="134" comp-id="4608" col-id="callLengthInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 1400px; "><span><span title="Click to Filter" class="filterable-cell ">00:00:11</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="135" comp-id="4609" col-id="action" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 80px; left: 1470px; "><span><div>
<a title="Block Number" class="btn btn-function block-number-btn">
<i class="fa fa-ban "></i>
</a>
<a title="Add Tag Annotation" class="btn btn-function m-l-5 annotate-call-btn">
<i class="fa fa-pencil"></i>
</a>
<a title="Adjust Call Payments" class="btn btn-function m-l-5 adjust-call-payment-btn ">
<i class="fa fa-usd"></i>
</a>
</div></span></div></div><div role="row" row-index="2" aria-rowindex="6" row-id="RGB11E56B7E626FA1C7A4507B513239AFBE495206CEV3PCS01" comp-id="4611" class="ag-row ag-row-no-focus ag-row-even ag-row-level-0 ag-row-group ag-row-group-contracted ag-row-position-absolute" aria-selected="false" style="height: 29.6px; transform: translateY(59.2px); " aria-label="Press SPACE to select this row."><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="122" comp-id="4612" col-id="recordingUrl" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height text-center ag-cell-value" style="width: 90px; left: 0px; "><div><a href="javascript:;" class="recording-cell__btn"><i class="fa fa-play"></i></a> <a href="javascript:;" class="recording-cell__btn" style="display: none;"><i class="red fa fa-stop"></i></a></div></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="123" comp-id="4613" col-id="campaignName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 90px; "><span><span title="Click to Filter" class="filterable-cell contrast">Window Installation Inbound</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="124" comp-id="4614" col-id="publisherName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 299px; "><span><span title="Click to Filter" class="filterable-cell contrast">Aef media group</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="125" comp-id="4615" col-id="inboundPhoneNumber" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 139px; left: 508px; "><span><span title="" class="filterable-cell ">+12818548738</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="126" comp-id="4616" col-id="number" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 
139px; left: 647px; "><span><span title="Click to Filter" class="filterable-cell ">+18333361696</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="127" comp-id="4617" col-id="timeToCallInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 786px; "><span><span title="Click to Filter" class="filterable-cell ">00:00:00</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="128" comp-id="4618" col-id="isDuplicate" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 70px; left: 856px; "><span><span title="Click to Filter" class="filterable-cell ">Yes</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="129" comp-id="4619" col-id="endCallSource" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height text-center ag-cell-value" style="width: 56px; left: 926px; "><span><span title="Caller" class="filterable-cell fa fa-mobile f-s-16"></span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="130" comp-id="4620" col-id="timeToConnectInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 982px; "><span>00:00:03</span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="131" comp-id="4621" col-id="targetName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 1052px; "><span><span title="Click to Filter" class="filterable-cell contrast">Window-2</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="132" comp-id="4622" col-id="conversionAmount" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 69px; left: 1261px; "><span></span></div><div tabindex="-1" unselectable="on" 
role="gridcell" aria-colindex="133" comp-id="4623" col-id="payoutAmount" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 1330px; "><span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="134" comp-id="4624" col-id="callLengthInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 1400px; "><span><span title="Click to Filter" class="filterable-cell ">00:00:37</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="135" comp-id="4625" col-id="action" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 80px; left: 1470px; "><span><div>
<a title="Block Number" class="btn btn-function block-number-btn">
<i class="fa fa-ban "></i>
</a>
<a title="Add Tag Annotation" class="btn btn-function m-l-5 annotate-call-btn">
<i class="fa fa-pencil"></i>
</a>
<a title="Adjust Call Payments" class="btn btn-function m-l-5 adjust-call-payment-btn ">
<i class="fa fa-usd"></i>
</a>
</div></span></div></div><div role="row" row-index="3" aria-rowindex="7" row-id="RGB98CC6997C0F7F7C1456B8E03D534E6434C4328A9V3ILT01" comp-id="4627" class="ag-row ag-row-no-focus ag-row-odd ag-row-level-0 ag-row-group ag-row-group-contracted ag-row-position-absolute ag-row-last" aria-selected="false" style="height: 29.6px; transform: translateY(88.80000000000001px); " aria-label="Press SPACE to select this row."><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="122" comp-id="4628" col-id="recordingUrl" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height text-center ag-cell-value" style="width: 90px; left: 0px; "><div><a href="javascript:;" class="recording-cell__btn"><i class="fa fa-play"></i></a> <a href="javascript:;" class="recording-cell__btn" style="display: none;"><i class="red fa fa-stop"></i></a></div></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="123" comp-id="4629" col-id="campaignName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 90px; "><span><span title="Click to Filter" class="filterable-cell contrast">Window Installation Inbound</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="124" comp-id="4630" col-id="publisherName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 299px; "><span><span title="Click to Filter" class="filterable-cell contrast">Aef media group</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="125" comp-id="4631" col-id="inboundPhoneNumber" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 139px; left: 508px; "><span><span title="" class="filterable-cell ">+19549456507</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="126" comp-id="4632" col-id="number" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height 
ag-cell-value" style="width: 139px; left: 647px; "><span><span title="Click to Filter" class="filterable-cell ">+18333361696</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="127" comp-id="4633" col-id="timeToCallInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 786px; "><span><span title="Click to Filter" class="filterable-cell ">00:00:00</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="128" comp-id="4634" col-id="isDuplicate" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 70px; left: 856px; "><span><span title="Click to Filter" class="filterable-cell ">No</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="129" comp-id="4635" col-id="endCallSource" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height text-center ag-cell-value" style="width: 56px; left: 926px; "><span><span title="Caller" class="filterable-cell fa fa-mobile f-s-16"></span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="130" comp-id="4636" col-id="timeToConnectInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 982px; "><span>00:00:03</span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="131" comp-id="4637" col-id="targetName" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 209px; left: 1052px; "><span><span title="Click to Filter" class="filterable-cell contrast">Window-2</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="132" comp-id="4638" col-id="conversionAmount" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 69px; left: 1261px; "><span></span></div><div 
tabindex="-1" unselectable="on" role="gridcell" aria-colindex="133" comp-id="4639" col-id="payoutAmount" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 1330px; "><span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="134" comp-id="4640" col-id="callLengthInSeconds" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-right-aligned-cell ag-cell-value" style="width: 70px; left: 1400px; "><span><span title="Click to Filter" class="filterable-cell ">00:01:18</span></span></div><div tabindex="-1" unselectable="on" role="gridcell" aria-colindex="135" comp-id="4641" col-id="action" class="ag-cell ag-cell-not-inline-editing ag-cell-auto-height ag-cell-value" style="width: 80px; left: 1470px; "><span><div>
<a title="Block Number" class="btn btn-function block-number-btn">
<i class="fa fa-ban "></i>
</a>
<a title="Add Tag Annotation" class="btn btn-function m-l-5 annotate-call-btn">
<i class="fa fa-pencil"></i>
</a>
<a title="Adjust Call Payments" class="btn btn-function m-l-5 adjust-call-payment-btn ">
<i class="fa fa-usd"></i>
</a>
</div></span></div></div></div>
</code></pre>
<p>I just want to iterate the divs and get the data out of it. I tried this code:</p>
<pre><code>try:
    number = drive.find_element(By.CSS_SELECTOR, "div[col-id='inboundPhoneNumber']")
    print(number.text)
except:
    print(" NOTHING FOUND ")
</code></pre>
<p>But I failed to achieve my goal: I was trying to get all the numbers from the divs. Is there an easier solution for me? I am very new to Selenium.</p>
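A sketch of one approach (hedged, based only on the grid HTML shown above): in Selenium the key change is `find_elements` (plural), which returns every matching cell rather than just the first. The same column extraction can be exercised without a browser using the standard library's `html.parser`; the class below is illustrative, not part of Selenium.

```python
from html.parser import HTMLParser

# In Selenium the fix would look like (sketch):
#     cells = driver.find_elements(By.CSS_SELECTOR, "div[col-id='inboundPhoneNumber']")
#     numbers = [c.text for c in cells]
# Browser-free sketch of the same extraction:

class ColumnTextExtractor(HTMLParser):
    """Collects the text content of every div whose col-id matches."""

    def __init__(self, col_id):
        super().__init__()
        self.col_id = col_id
        self.depth = 0      # nesting depth while inside a matching div
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1
        elif tag == "div" and dict(attrs).get("col-id") == self.col_id:
            self.depth = 1
            self.texts.append("")

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.texts[-1] += data.strip()

def extract_column(html, col_id):
    parser = ColumnTextExtractor(col_id)
    parser.feed(html)
    return parser.texts

sample = ('<div col-id="inboundPhoneNumber"><span>+17137732947</span></div>'
          '<div col-id="inboundPhoneNumber"><span>+12818548738</span></div>')
assert extract_column(sample, "inboundPhoneNumber") == ["+17137732947", "+12818548738"]
```

The same per-column loop applies to `campaignName`, `targetName`, and the other `col-id` values in the grid.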
|
<python><selenium><web-scraping><automation>
|
2022-12-25 22:33:54
| 1
| 549
|
Sakib ovi
|
74,916,074
| 152,113
|
How to inject arguments from a custom decorator to a command in discord.py?
|
<p>I'm working on a bot which keeps track of various text-based games in different channels. Commands used outside the channel in which the relevant game is running should do nothing of course, and neither should they activate if the game is not running (for example, when a new game is starting soon). Therefore, almost all my commands start with the same few lines of code</p>
<pre><code>@commands.command()
async def example_command(self, ctx):
    game = self.game_manager.get_game(ctx.channel.id)
    if not game or game.state == GameState.FINISHED:
        return
</code></pre>
<p>I'd prefer to just decorate all these methods instead. Discord.py handily provides a system of "check" decorators to automate these kinds of checks, but this does not allow me to pass on the <code>game</code> object to the command. As every command needs a reference to this object, I'd have to retrieve it every time again anyway, and ideally I'd like to just pass it along to the command.</p>
<p>My naive attempt at a decorator looks as follows</p>
<pre><code>def is_game_running(func):
    async def wrapper(self, ctx):
        # Retrieve `game` object here and do some checks
        game = ...
        return await func(self, ctx, game)
    wrapper.__name__ = func.__name__
    return wrapper

# Somewhere in the class
@commands.command()
@is_game_running
async def example_command(self, ctx, game):
    pass
</code></pre>
<p>However this gives me the quite cryptic error "discord.ext.commands.errors.MissingRequiredArgument: ctx is a required argument that is missing."</p>
<p>I've tried a few variants of this, using <code>*args</code> etc... but nothing seems to work.</p>
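One hedged reading of the error: discord.py builds a command's arguments from the callback's reported signature, and the plain wrapper reports `(self, ctx)` while standing in for the command, which confuses that parsing. A commonly suggested workaround is a `commands.check` whose predicate stashes the looked-up object on the context (e.g. `ctx.game = game`) so the command body reads it from there. The bare decorator mechanics of injecting the extra argument, stripped of discord.py, look like this (all names are stand-ins):

```python
import asyncio
import functools

class Ctx:
    """Stand-in for discord.py's Context."""
    def __init__(self, channel_id):
        self.channel_id = channel_id

class GameManager:
    def __init__(self, games):
        self._games = games

    def get_game(self, channel_id):
        return self._games.get(channel_id)

def is_game_running(func):
    # functools.wraps copies __name__/__doc__ and sets __wrapped__,
    # which is what inspect.signature (and frameworks using it) follow.
    @functools.wraps(func)
    async def wrapper(self, ctx, *args, **kwargs):
        game = self.game_manager.get_game(ctx.channel_id)
        if game is None:  # stand-in for the FINISHED checks too
            return None
        return await func(self, ctx, game, *args, **kwargs)
    return wrapper

class Cog:
    def __init__(self, game_manager):
        self.game_manager = game_manager

    @is_game_running
    async def example_command(self, ctx, game):
        return game

cog = Cog(GameManager({1: "chess"}))
assert asyncio.run(cog.example_command(Ctx(1))) == "chess"
```

With real discord.py the check-plus-`ctx.game` route avoids fighting the signature inspection entirely; the sketch above only shows why the naive wrapper changes what the framework sees.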
|
<python><discord.py><python-decorators>
|
2022-12-25 21:55:20
| 2
| 2,511
|
Kasper
|
74,915,831
| 1,970,440
|
VsCode notebook can't see pandas module
|
<p>In VsCode I have activated .venv environment in which I can see pandas module confirmed with <code>pip show pandas</code> command and I still see error: <code>ModuleNotFoundError: No module named 'pandas'</code></p>
<pre><code>(.venv) C:\PythonWs\testVsCodeNotebook&gt;python --version
Python 3.10.8
</code></pre>
<p>How can I resolve this?</p>
<p><a href="https://i.sstatic.net/Hmtv0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hmtv0.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/ILlbD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ILlbD.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/dGdIj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dGdIj.png" alt="enter image description here" /></a></p>
<p>John Gordon points out that VsCode uses the wrong path to the python interpreter (the .venv environment was created from the VsCode palette and activated from the command line). How can I fix it in VsCode?</p>
<p>PS. I put <code>import pandas as pd;</code> in a standalone python file and it works without any problem.
<a href="https://i.sstatic.net/R8kZr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R8kZr.png" alt="enter image description here" /></a></p>
|
<python><pandas><visual-studio-code><pip>
|
2022-12-25 20:53:05
| 1
| 663
|
AlexeiP
|
74,915,752
| 18,360,265
|
How to resolve TypeError: 'Request' object is not callable Error in Flask?
|
<p>I am trying to learn Flask, and I am facing an issue.</p>
<p>I have created a route (<code>/register</code>) in my Flask app, and I am trying to trigger it using Postman.</p>
<p>But I am getting this Error:</p>
<pre><code>TypeError: 'ImmutableMultiDict' object is not callable
</code></pre>
<p>Here is my code for <code>/register</code> route.</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/register', methods=['POST'])
def register():
    if request.method == 'POST':
        fname = request.form(["fname"])
        mname = request.form(["mname"])
        lname = request.form(["lname"])
        gender = request.form(["gender"])
        age = request.form(["age"])
        email = request.form(["email"])
        password = request.form(["password"])
        new_member = User(fname, mname, lname, gender, age, email, password)
        try:
            db.session.add(new_member)
            db.session.commit()
            return redirect('/')
        except:
            return 'Error: Error found'
</code></pre>
<p>Here is request body that I am sending from Postman.</p>
<pre class="lang-json prettyprint-override"><code>{
"fname":"ashutosh",
"mname":"kumar",
"lname":"yadav",
"gender":"m",
"age":25,
"email":"test@gmail.com",
"password": "PassWord@123"
}
</code></pre>
<p>But I am getting this Error: <code>TypeError: 'ImmutableMultiDict' object is not callable</code></p>
<p>In case if it is needed, here is curl request.</p>
<pre><code>curl --location --request POST 'http://127.0.0.1:5000/register' \
--header 'Content-Type: application/json' \
--data-raw '{
"fname":"ashutosh",
"mname":"kumar",
"lname":"yadav",
"gender":"m",
"age":25,
"email":"test@gmail.com",
"password": "PassWord@123"
}'
</code></pre>
<p><a href="https://stackoverflow.com/questions/40901522/flask-typeerror-immutablemultidict-object-is-not-callable">A similar problem I found on SOF</a></p>
<p>But I am not able to figure out what issue is there in my code.</p>
<p>Kindly help me guys.</p>
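Two things stand out (a sketch of the diagnosis, not verified against this exact app): `request.form` is an `ImmutableMultiDict`, a dict-like mapping, so it must be indexed with square brackets rather than called like a function; and since the curl request sends JSON (`Content-Type: application/json`), `request.form` would be empty anyway and `request.get_json()` is the accessor to use. The indexing-versus-calling distinction, shown with a plain dict:

```python
# In the Flask handler the fixes would be (sketch):
#     data = request.get_json()      # body is JSON, not form-encoded
#     fname = data["fname"]          # index the mapping; don't call it
form = {"fname": "ashutosh", "age": 25}  # stand-in for the MultiDict

assert form["fname"] == "ashutosh"       # indexing: fine

raised = False
try:
    form(["fname"])                      # calling: the reported TypeError
except TypeError as e:
    raised = "not callable" in str(e)
assert raised
```

The same shape of bug explains both error texts in the question: whichever mapping object is called, Python reports that the object "is not callable".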
|
<python><flask>
|
2022-12-25 20:31:28
| 1
| 409
|
Ashutosh Yadav
|
74,915,654
| 2,474,581
|
Tkinter Treeview shows only first 3 columns
|
<p>On startup of the program, the Tkinter Treeview shows only the first 3 of 5 columns. When you alter the width of a random column in the heading by a small amount (a few pixels) with the mouse pointer, all columns come into sight after releasing the mouse button.</p>
<p>update: option <code>displaycolumns="#all"</code> gives the same result.</p>
<pre><code>hcolumns = ('Hoofdstuk', 'Naam', 'Datum', 'Grootte', 'tafel')
tv = ttk.Treeview(mainframe, columns=hcolumns, show='headings', height=5)
for col in hcolumns:
    tv.heading(col, text=col, command=lambda _col=col: treeview_sort_column(tv, _col, False))
tv.grid(column=0, row=0, sticky=(N, W, E, S))
</code></pre>
<p>Before altering width with mousepointer
<a href="https://i.sstatic.net/RnGTN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RnGTN.png" alt="enter image description here" /></a></p>
<p>After altering width with mousepointer:
<a href="https://i.sstatic.net/pk1n4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pk1n4.png" alt="enter image description here" /></a></p>
|
<python><tkinter>
|
2022-12-25 20:10:48
| 1
| 1,346
|
Kustekjé Meklootn
|
74,915,592
| 11,280,068
|
Browser console shows CORS error regardless of the type of error that happens on the server side of a fastapi app
|
<p>I have a FastAPI app that is running on the same server as a React app.</p>
<p>I make a simple async request to the route, but regardless of the type of error that happens on the server (SyntaxError, ValueError, etc.), the browser console shows a CORS error.</p>
<p>I am 100% positive it is not a CORS error.</p>
<p>How can I make it display more accurate errors on the frontend?</p>
|
<javascript><python><python-3.x><rest><fastapi>
|
2022-12-25 19:56:44
| 0
| 1,194
|
NFeruch - FreePalestine
|
74,915,576
| 18,059,131
|
How to get the peak of an audio file in python?
|
<p>How could I get the dB value of the peak of a wav file (for example, the peak of some wav file could be -6db, aka its loudest point is -6db) using python?</p>
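With pydub (one of the tags), `AudioSegment.from_file(path).max_dBFS` should give the peak in dBFS directly (hedged; check the version in use). A standard-library sketch of the same computation for 16-bit PCM WAV files:

```python
import io
import math
import struct
import wave

def peak_dbfs(wav_bytes):
    """Peak of a 16-bit PCM WAV in dB relative to full scale (sketch)."""
    with wave.open(io.BytesIO(wav_bytes)) as w:
        assert w.getsampwidth() == 2, "sketch handles 16-bit samples only"
        frames = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    peak = max(abs(s) for s in samples)
    # 0 dBFS is full scale (32768 for signed 16-bit); silence -> -inf
    return float("-inf") if peak == 0 else 20 * math.log10(peak / 32768.0)

# Demo: a 3-sample file whose loudest sample is half of full scale -> ~-6 dB
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(struct.pack("<3h", 1000, -16384, 200))
assert round(peak_dbfs(buf.getvalue()), 2) == -6.02
```

For compressed formats (mp3, etc.) pydub with ffmpeg is the easier route; this sketch only covers plain WAV.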
|
<python><audio><pydub>
|
2022-12-25 19:52:04
| 1
| 318
|
prodohsamuel
|
74,915,347
| 1,840,471
|
pip3 using a different Python version than pyenv has activated
|
<p>I've installed pyenv to activate Python 3.9, and that appears to have worked:</p>
<pre class="lang-bash prettyprint-override"><code>maxghenis@MacBook-Air-3 policyengine-canada % pyenv versions
system
* 3.9.16 (set by /Users/maxghenis/.pyenv/version)
maxghenis@MacBook-Air-3 policyengine-canada % python -V
Python 3.9.6
</code></pre>
<p>But when I run <code>pip3 install -e .</code> in <a href="https://github.com/PolicyEngine/policyengine-canada/blob/master/setup.py" rel="nofollow noreferrer">this project</a>, it fails with an error message that suggests it's using Python 3.10:</p>
<pre class="lang-bash prettyprint-override"><code>× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [17 lines of output]
Error in sitecustomize; set PYTHONVERBOSE for traceback:
AssertionError:
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
</code></pre>
<p>How can I ensure that <code>pip3</code> uses 3.9?</p>
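A hedged guess consistent with the traceback path (`/opt/homebrew/lib/python3.10/...`): the `pip3` first on PATH belongs to Homebrew's Python 3.10, not to the pyenv shim. Running pip *through* the interpreter removes the ambiguity entirely:

```shell
# Which interpreter is actually active?
python3 -c 'import sys; print(sys.executable)'
# Run pip through that same interpreter so the two can never disagree:
#     python -m pip install -e .
# If pyenv's shims are stale after installing a version:
#     pyenv rehash
```

`python -m pip ...` is the generally recommended invocation whenever multiple Pythons coexist, since it binds pip to a specific interpreter by construction.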
|
<python><pip><virtual-environment>
|
2022-12-25 18:59:09
| 0
| 15,993
|
Max Ghenis
|
74,915,335
| 15,359,178
|
Append Data in HDF5 file
|
<p>I want to append new data to my already created HDF5 file, but I don't know how to append more data to it; I don't know the actual syntax for appending.</p>
<p>I have created an HDF5 file to save my data in HDF format as</p>
<pre class="lang-py prettyprint-override"><code>with h5py.File(save_path + 'PIC200829_256x256x256x3_fast_sj1.hdf5', 'w') as db:
    db.create_dataset('Predframes', data=trainX)
    db.create_dataset('GdSignal', data=trainY)
    # this creates an hdf5 file with the given name
    # and saves the data in the given format
</code></pre>
<p>What I want is to append more data (of the same shape) to it in the next iteration, instead of overwriting and creating a new HDF5 file. One thing I know is that I will change "w" to "a", but I don't know what to write to append instead of create.</p>
<p><code>db.append('Predframes', data=trainX)</code> is not the right syntax, so what should I write in place of <code>db.create_dataset('Predframes', data=trainX)</code> to append?</p>
<p>The shape of the trainX is (2500, 100, 100, 40) so when the next trainX with same shape (2500, 100, 100, 40) is appended with the first one, its size should be (5000, 100, 100, 40) while the size of trainY is (2500,80). After appending it should be (5000, 80)</p>
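h5py datasets can only grow if they were created resizable, so there is no `db.append(...)` method; the usual pattern (a sketch, assuming h5py and numpy are available, with the `trainY` shape from the question) is `maxshape=(None, ...)` at creation, then `resize` plus a slice assignment on later iterations:

```python
import os
import tempfile

import h5py
import numpy as np

def append_rows(db, name, arr):
    """Create the dataset resizable on first call; grow and fill afterwards."""
    if name not in db:
        # None on axis 0 means "unlimited growth along that axis"
        db.create_dataset(name, data=arr,
                          maxshape=(None,) + arr.shape[1:], chunks=True)
    else:
        ds = db[name]
        n = ds.shape[0]
        ds.resize(n + arr.shape[0], axis=0)
        ds[n:] = arr

path = os.path.join(tempfile.mkdtemp(), "demo.hdf5")
batch = np.zeros((2500, 80))              # stand-in for trainY
with h5py.File(path, "a") as db:          # first iteration
    append_rows(db, "GdSignal", batch)
with h5py.File(path, "a") as db:          # next iteration: "w" became "a"
    append_rows(db, "GdSignal", batch)
with h5py.File(path, "r") as db:
    assert db["GdSignal"].shape == (5000, 80)
```

The same helper covers `Predframes` with `maxshape=(None, 100, 100, 40)`, growing (2500, 100, 100, 40) to (5000, 100, 100, 40).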
|
<python><pandas><hdf5><hdf>
|
2022-12-25 18:57:09
| 1
| 451
|
Saran Zeb
|
74,915,324
| 8,262,214
|
How do I split a string containing backslashes on a backslash substring using Python's split()?
|
<p>I have a string which looks similar to <code>123456 \\RE1NUM=987</code> and I have been trying to split it <code>\\RE1NUM=</code>.</p>
<p>I have tried <code>.split("\\RE1NUM=")</code> and it gives <code>['123456 \\', '987']</code>. I believe the backslashes are being interpreted as escape characters.
The final list I need is <code>['123456 ', '987']</code>.</p>
<p>The "string" is actually a line I am reading from a file object. It does work when isolated and tested on a string, but fails when used on the file's line. (I'll try to recreate this problem in a test file and paste the contents here.)</p>
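The likely explanation (hedged, since the file contents aren't shown): in Python source, the literal `"123456 \\RE1NUM=987"` contains only *one* backslash per `\\` once escapes are processed, while a line read from a file contains the two backslashes literally. The separator must therefore also spell two real backslashes, via a raw string or doubled escapes:

```python
# A line as read from the file: two literal backslashes before RE1NUM
line = r"123456 \\RE1NUM=987"

# "\\RE1NUM=" is ONE backslash after escape processing -> the wrong split
assert line.split("\\RE1NUM=") == ["123456 \\", "987"]

# r"\\RE1NUM=" keeps both backslashes -> the intended split
assert line.split(r"\\RE1NUM=") == ["123456 ", "987"]
```

This is why the isolated string test "worked": a pasted literal with `\\` holds one backslash, matching the one-backslash separator, whereas the file's line holds two.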
|
<python><python-3.x>
|
2022-12-25 18:54:18
| 3
| 512
|
Zircoz
|
74,915,281
| 5,407,797
|
Type error when accessing Django request.POST
|
<p>I'm attempting to build an XML document out of <code>request.POST</code> data in a Django app:</p>
<pre><code>ElementTree.Element("occasion", text=request.POST["occasion"])
</code></pre>
<p>PyCharm is giving me an error on the <code>text</code> parameter saying <code>Expected type 'str', got 'Type[QueryDict]' instead</code>. I only bring up PyCharm because I know its type checker can be overzealous sometimes. However, I haven't been able to find anything about this issue specifically.</p>
<p>Am I doing something wrong? Or should I try to silence this error?</p>
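The type warning itself is likely a PyCharm stub issue: indexing `request.POST` returns a `str` at runtime. Separately, and perhaps more important, keyword arguments to `ElementTree.Element` become XML *attributes*, so `text=...` would produce `<occasion text="..."/>` rather than element text; the `.text` attribute is set afterwards. A quick standard-library check:

```python
import xml.etree.ElementTree as ET

# Keyword args become XML attributes:
attr_elem = ET.Element("occasion", text="birthday")
assert ET.tostring(attr_elem) == b'<occasion text="birthday" />'

# Element text is set on the Python object after creation:
text_elem = ET.Element("occasion")
text_elem.text = "birthday"
assert ET.tostring(text_elem) == b"<occasion>birthday</occasion>"
```

So in the Django view, `elem = ET.Element("occasion"); elem.text = request.POST["occasion"]` is probably what was intended, and it also sidesteps the spurious warning.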
|
<python><django><pycharm><typeerror>
|
2022-12-25 18:47:21
| 1
| 1,239
|
AAM111
|
74,915,260
| 694,360
|
Pillow conversion from RGB to indexed P mode
|
<p>I have an <code>RGB</code> image with only 25 colors,</p>
<p><a href="https://i.sstatic.net/ws8lc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ws8lc.png" alt="enter image description here" /></a></p>
<p>and I want to convert it to <code>P</code> mode to make it indexed (saving space, as far as I understand), but the resulting image, though in <code>P</code> mode and indexed, has got 70 colors instead of 25:</p>
<pre><code>>>> from PIL import Image
>>> img = Image.open(r"test_rgb2p.png")
>>> img.mode
'RGB'
>>> len(img.getcolors())
25
>>> img = img.convert(mode='P')
>>> img.mode
'P'
>>> len(img.getcolors())
70
</code></pre>
<p>I guess that the <code>convert</code> command performs some kind of unnecessary resampling, because I see no reason why the number of colors should increment. In fact with GIMP I'm able to convert this same image to indexed colors and the number of colors does not change.</p>
<p>Is there a way to do this <code>RGB</code> to <code>P</code> mode conversion without changes in colors?</p>
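A hedged guess at the cause: a plain `convert('P')` maps to Pillow's default web-safe palette (with dithering), which can blend in new colours. `quantize()`, or `convert('P', palette=Image.ADAPTIVE, colors=...)`, builds the palette from the image's own colours instead. A small reproducible check (the two-colour image stands in for the 25-colour original):

```python
from PIL import Image

# A two-colour RGB test image
img = Image.new("RGB", (10, 10), (255, 0, 0))
img.paste((0, 255, 0), (0, 0, 5, 10))
assert len(img.getcolors()) == 2

# Build the palette from the image itself; an equivalent spelling is:
#     img.convert("P", palette=Image.ADAPTIVE, colors=25)
p = img.quantize(colors=2)
assert p.mode == "P"
assert len(p.getcolors()) == 2   # colour count preserved
```

For the real image, `img.quantize(colors=25)` should match GIMP's indexed conversion, since both derive the palette adaptively rather than dithering toward a fixed one.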
|
<python><python-imaging-library>
|
2022-12-25 18:44:20
| 2
| 5,750
|
mmj
|
74,915,246
| 525,913
|
Create new conda environment with latest python AND all the packages that I've added in an existing environment
|
<p>I want to create new conda environment with latest python (want 3.10 or later) AND the appropriate versions of all the packages (like matplotlib and pandas) that I've added in an existing environment (it's NOT the base environment). I don't recall what all of these packages are. Is there a way to do this without breaking things?</p>
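One approach (a sketch; the environment name and packages below are illustrative): `conda env export -n oldenv --from-history > environment.yml` writes only the packages that were installed explicitly, without pinning their dependency versions, so the solver is free to pick builds compatible with a newer Python. Edit the `python` line, then run `conda env create -n newenv -f environment.yml`. The edited file might look like:

```yaml
# environment.yml from `conda env export -n oldenv --from-history`,
# with the python line edited before:
#     conda env create -n newenv -f environment.yml
name: newenv
channels:
  - defaults
dependencies:
  - python=3.10   # edited to the Python you want
  - pandas        # illustrative: whatever --from-history actually listed
  - matplotlib
```

A full `conda env export` (without `--from-history`) pins every transitive dependency to the old Python's builds and will usually fail to solve against 3.10, which is why the history-based export is the safer starting point.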
|
<python><anaconda><conda>
|
2022-12-25 18:41:07
| 1
| 2,347
|
ViennaMike
|
74,915,118
| 4,531,757
|
How do I find the drug_code sequence for each patient in their treatment?
|
<p>Even though every patient comes to the hospital for treatment of the same disease, each patient's sequence of treatment differs slightly based on their current condition. For example, a few patients may need more pre-care/pre-medication than other patients. The objective here is to collect all sequences of treatments and quantify those patterns. Please help if you can; my pandas knowledge is not enough to solve this problem. :-(</p>
<p>Current Dataset:</p>
<pre><code>df2 = pd.DataFrame({'patient': ['one', 'one', 'one', 'two', 'two', 'two', 'three', 'three', 'three'],
                    'drug_code': ['011', '012', '013', '012', '013', '011', '011', '012', '013'],
                    'date': ['11/20/2022', '11/22/2022', '11/23/2022', '11/8/2022', '11/9/2022', '11/14/2022', '11/8/2022', '11/9/2022', '11/14/2022']})
df2['date'] = pd.to_datetime(df2['date'])
</code></pre>
<p>Desired result dataset, with the pattern sequences and their frequencies:</p>
<pre><code>code_patterns = pd.DataFrame({'pattern': ['011-012-013', '012-013-011'],
                              'frequency': [2, 1]})
</code></pre>
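One way to get there (a sketch, assuming pandas and the frame from the question): sort by date, join each patient's codes in chronological order into one string, then count identical strings.

```python
import pandas as pd

df2 = pd.DataFrame({
    "patient": ["one", "one", "one", "two", "two", "two",
                "three", "three", "three"],
    "drug_code": ["011", "012", "013", "012", "013", "011",
                  "011", "012", "013"],
    "date": ["11/20/2022", "11/22/2022", "11/23/2022", "11/8/2022",
             "11/9/2022", "11/14/2022", "11/8/2022", "11/9/2022",
             "11/14/2022"],
})
df2["date"] = pd.to_datetime(df2["date"])

code_patterns = (
    df2.sort_values("date")            # chronological order
       .groupby("patient")["drug_code"]
       .agg("-".join)                  # one "011-012-013" string per patient
       .value_counts()                 # identical sequences counted
       .rename_axis("pattern")
       .reset_index(name="frequency")
)
assert dict(zip(code_patterns["pattern"], code_patterns["frequency"])) == {
    "011-012-013": 2, "012-013-011": 1}
```

`groupby` preserves the row order established by `sort_values`, which is what makes the joined string follow each patient's treatment timeline.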
|
<python><pandas><numpy>
|
2022-12-25 18:16:28
| 1
| 601
|
Murali
|
74,915,029
| 2,878,298
|
Iterate through each column and find the max length
|
<p>I want to get the maximum length from each column from a pyspark dataframe.</p>
<p>Following is the sample dataframe:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql.types import StructType,StructField, StringType, IntegerType
data2 = [("James","","Smith","36636","M",3000),
("Michael","Rose","","40288","M",4000),
("Robert","","Williams","42114","M",4000),
("Maria","Anne","Jones","39192","F",4000),
("Jen","Mary","Brown","","F",-1)
]
schema = StructType([ \
StructField("firstname",StringType(),True), \
StructField("middlename",StringType(),True), \
StructField("lastname",StringType(),True), \
StructField("id", StringType(), True), \
StructField("gender", StringType(), True), \
StructField("salary", IntegerType(), True) \
])
df = spark.createDataFrame(data=data2,schema=schema)
</code></pre>
<p>I tried to implement the <a href="https://stackoverflow.com/questions/54263293/in-spark-iterate-through-each-column-and-find-the-max-length">solution provided in Scala</a> but could not convert it.</p>
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2022-12-25 18:01:33
| 1
| 1,268
|
venus
|
74,914,830
| 1,873,689
|
Error working with COM object from Python throw clr reference - class not registred
|
<p>I am trying to make an app for SolidWorks.
First I tried to use win32com or comtypes to get access to the COM object of SolidWorks.
I made some progress, but couldn't get it working well.
Then I found a way to work with clr and interop DLLs.
More progress, but now I am stuck on this error:</p>
<pre><code>System.Runtime.InteropServices.COMException: (0x80040154): Retrieving the COM class factory for component with CLSID {27526253-6119-4B38-A1F9-2DC877E72334} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).
at System.Runtime.Remoting.RemotingServices.AllocateUninitializedObject(RuntimeType objectType)
at System.Runtime.Remoting.Activation.ActivationServices.CreateInstance(RuntimeType serverType)
at System.Runtime.Remoting.Activation.ActivationServices.IsCurrentContextOK(RuntimeType serverType, Object[] props, Boolean bNewObj)
at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at Python.Runtime.ConstructorBinder.InvokeRaw(BorrowedReference inst, BorrowedReference args, BorrowedReference kw, MethodBase info)
</code></pre>
<p>my code is:</p>
<pre><code>>>> import clr
>>> clr.AddReference("C:\Program Files\SOLIDWORKS 2019\SOLIDWORKS\SolidWorks.Interop.sldworks.dll")
<System.Reflection.RuntimeAssembly object at 0x000001D80F877040>
>>> clr.AddReference("C:\Program Files\SOLIDWORKS 2019\SOLIDWORKS\SolidWorks.Interop.swconst.dll")
<System.Reflection.RuntimeAssembly object at 0x000001D80F877080>
>>> from SolidWorks.Interop.sldworks import *
>>> from SolidWorks.Interop.swconst import *
>>> swApp = ISldWorks(SldWorksClass())
>>> swApp.Visible=True
>>> modl = IModelDoc2(ModelDoc2Class())
</code></pre>
<p>SolidWorks loads well and I can work with it, but I get no access to the ModelDoc2 object.
All of these classes are in the registry (SldWorksClass, ISldModelDoc2, ModelDoc2Class, IModelDoc2);
their interfaces are in the registry, and the TypeLib is registered too.</p>
<p>from registry CLSID:</p>
<pre><code>assembly: SolidWorks.Interop.sldworks, Version=27.5.0.72, Culture=neutral, PublicKeyToken=7c4797c3e4eeac0
class: SolidWorks.Interop.sldworks.ModelDoc2Class
</code></pre>
<p>I can see ModelDoc2 object methods: <code>dir(ModelDoc2)</code></p>
<p>As far as I can tell the class is registered (I cannot test it otherwise; any suggestions?).
The COMView tool shows me the TypeLib and the CLSID. Tested on two SW versions: 2019 and 2020.</p>
<p>I am stuck and cannot continue.
Please advise.
Thanks</p>
|
<python><com><registry><win32com><solidworks>
|
2022-12-25 17:25:10
| 0
| 1,301
|
aleXela
|
74,914,765
| 12,242,085
|
How to correctly change the format of dates where day and month swap positions in a DataFrame in Python Pandas?
|
<p>I have DataFrame in Pandas like below:</p>
<p>data type of COL1 is "object"</p>
<pre><code>COL1
------
1-05-2019
22-04-2019
5-06-2019
</code></pre>
<p>And I need to have this column as data type "object" and in format dd-mm-yyyy for example 01-05-2019.</p>
<p>When I use code like follow: <code>df["COL2"] = df["COL1"].astype("datetime64").dt.strftime('%d-%m-%Y')</code></p>
<p>I have result like below:</p>
<pre><code>COL1 | COL2
-----------|------
1-05-2019 | 05-01-2019
22-04-2019 | 22-04-2019
5-06-2019 | 06-05-2019
</code></pre>
<p>As you can see, for dates from COL1 like: <code>1-05-2019</code> and <code>5-06-2019</code> my code change position of day and month but for dates like <code>22-04-2019</code> works correctly.</p>
<p>I need to have an output like below in "object" data type:</p>
<pre><code>COL1 | COL2
-----------|------
1-05-2019 | 01-05-2019
22-04-2019 | 22-04-2019
5-06-2019 | 05-06-2019
</code></pre>
<p>How can I do that in Python Pandas?</p>
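<p>For reference, this is the kind of behaviour I am after (a sketch; I assume <code>dayfirst=True</code>, or an explicit <code>format</code>, is the right direction, and that the result can then be formatted back to a string):</p>

```python
import pandas as pd

df = pd.DataFrame({"COL1": ["1-05-2019", "22-04-2019", "5-06-2019"]})

# dayfirst=True tells pandas that the ambiguous "1-05-2019" means day 1,
# month 5; without it, pandas guesses month-first wherever it can
df["COL2"] = pd.to_datetime(df["COL1"], dayfirst=True).dt.strftime("%d-%m-%Y")
print(df["COL2"].tolist())  # ['01-05-2019', '22-04-2019', '05-06-2019']
```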
|
<python><pandas><dataframe><date><strftime>
|
2022-12-25 17:13:59
| 1
| 2,350
|
dingaro
|
74,914,731
| 913,098
|
Python boto3, list contents of specific dir in bucket, limit depth
|
<p>This is the same as <a href="https://stackoverflow.com/questions/27292145/python-boto-list-contents-of-specific-dir-in-bucket">this question</a>, but I also want to limit the depth returned.</p>
<p>Currently, all answers return all the objects after the specified prefix. I want to see just what's in the current hierarchy level.</p>
<p>Current code that returns everything:</p>
<pre><code>self._session = boto3.Session(
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
)
self._session.resource("s3")
bucket = self._s3.Bucket(bucket_name)
detections_contents = bucket.objects.filter(Prefix=prefix)
for object_summary in detections_contents:
print(object_summary.key)
</code></pre>
<p>How to see only the files and folders directly under <code>prefix</code>? How to go <code>n</code> levels deep?</p>
<p>I can parse everything locally, and this is clearly not what I am looking for here.</p>
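<p>For what it's worth, my understanding (treat it as a sketch) is that S3 itself can list one level at a time via <code>Delimiter="/"</code>: keys below the current level are collapsed into <code>CommonPrefixes</code>, and recursing into those prefixes <code>n</code> times gives <code>n</code> levels:</p>

```python
def list_one_level(client, bucket, prefix):
    """Return (files, subdirs) directly under `prefix`, without recursing."""
    files, dirs = [], []
    paginator = client.get_paginator("list_objects_v2")
    # Delimiter="/" makes S3 group anything deeper than one level
    # into CommonPrefixes instead of listing every object
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter="/"):
        files += [obj["Key"] for obj in page.get("Contents", [])]
        dirs += [cp["Prefix"] for cp in page.get("CommonPrefixes", [])]
    return files, dirs
```

<p>(here <code>client</code> would be something like <code>self._session.client("s3")</code>; calling <code>list_one_level</code> again on each returned prefix goes one level deeper)</p>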
|
<python><amazon-s3><boto3>
|
2022-12-25 17:08:16
| 3
| 28,697
|
Gulzar
|
74,914,720
| 5,050,577
|
Python versions are not changing despite activating virtual environment in WSL2
|
<p><strong>Background:</strong></p>
<p>In WSL2 (Ubuntu 20.04) I created a Python virtual environment inside a directory using the command <code>python3 -m venv venv</code>. My system's Python version had been set to Python 3.11 (after downloading it) via <code>sudo update-alternatives --config python3</code> and then choosing the version. I noticed I was getting errors about missing modules when I started WSL2 (this happened after a computer restart). I read this was because I was using a different Python version than the one Ubuntu 20.04 came with, so I switched back to 3.8 via the config menu as before. I am also using VS Code connected to my WSL2.</p>
<p>These are some of the contents of my venv directory: <code>venv/bin/python</code> <code>venv/bin/python3</code> <code>venv/bin/python3.11</code> <code>venv/bin/pip</code> <code>venv/bin/pip3</code></p>
<p><strong>Question:</strong></p>
<p>After activating my virtual env via <code>source venv/bin/activate</code>, when I run <code>python3 --version</code> I still get version 3.8.10, despite creating the virtual environment with 3.11. I was able to set the interpreter to 3.11 in VS Code. I know I was in the virtual environment, since my command prompt had (venv) in front. I went into the Python console while in the virtual env and ran <code>import sys</code> and <code>sys.path</code>; this was my output: <code>['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload']</code>. <strong>Why isn't the Python version changing? Am I misunderstanding something, or did I not do something correctly?</strong> It seems pip isn't working either, but it works when I switch my system Python to 3.11 (I tried installing it on 3.8 but it said it was already installed).</p>
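<p>For what it's worth, this is the quick check I used inside the interpreter to see whether it really is the venv one (my understanding is that in an activated venv, <code>sys.prefix</code> should differ from <code>sys.base_prefix</code>):</p>

```python
import sys

# in an activated virtual environment sys.prefix points at the venv
# directory, while sys.base_prefix points at the interpreter the venv
# was created from; outside any venv the two are equal
in_venv = sys.prefix != sys.base_prefix
print(sys.prefix, sys.base_prefix, in_venv)
```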
<p><strong>Solved:</strong></p>
<p>Answered below, just re-created the virtual env while making sure my system python version was 3.11 (may have been some mixup earlier).</p>
|
<python><python-3.x><virtualenv><windows-subsystem-for-linux>
|
2022-12-25 17:05:45
| 2
| 3,229
|
DanT29
|
74,914,696
| 7,773,898
|
issue while writing into postgresql database in glue job
|
<p>Hey, I am trying to write data into a PostgreSQL DB from a Glue job, but I am getting the error below:
<code>IllegalArgumentException: Option 'dbtable' can not be empty.</code></p>
<p>Below is my code, which seems correct to me.</p>
<p>How I read from s3</p>
<pre><code>glueContext.create_dynamic_frame.from_options(
format_options={"multiline": False},
connection_type="s3",
format="json",
connection_options={
"paths": [s3path],
"recurse": True,
},
transformation_ctx=S3_TO_CUSTOM_TRANSFORM,
)
</code></pre>
<p>And then, after some transformations, I try to write using the code below:</p>
<pre><code>
glueContext.write_dynamic_frame.from_options(
frame=df,
connection_type="postgresql",
connection_options={
"url": "jdbc:postgresql://<host>:port/db",
"user": "username",
"password": "password",
"dbtable": "table_name"
}
)
</code></pre>
|
<python><amazon-web-services><apache-spark><aws-glue>
|
2022-12-25 17:01:11
| 0
| 383
|
ALTAF HUSSAIN
|
74,914,655
| 10,563,068
|
How to Re-initialise the dataframe to integer and decimal from float for multiple columns in pandas?
|
<p>I am trying to convert the dataframe columns to integer or decimal, depending on the column. However, I want to do it without specifying the column names, as there are many columns. For now I have converted float to integer, but when I use this code, it also converts the columns with decimals into integers. Below is my code:</p>
<pre><code>df = pd.read_csv (filename , low_memory=False)
df = pd.DataFrame(df)
print(df.head(165))
df = pd.to_numeric(df, errors='coerce').fillna(0).astype('int')
print(df.head(165))
</code></pre>
<p><a href="https://i.sstatic.net/iFFu0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iFFu0.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/EBnHk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EBnHk.png" alt="enter image description here" /></a></p>
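<p>What I am hoping for is something like <code>convert_dtypes()</code>, which (if I understand it right) decides per column: whole-number float columns become integers, while columns with real decimals stay float:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, np.nan],   # whole numbers only
                   "b": [1.5, 2.25, 3.0]})    # real decimals
# convert_dtypes() infers the best nullable dtype for each column:
# "a" becomes Int64 (NaN turns into <NA>), "b" stays floating
out = df.convert_dtypes()
print(out.dtypes.astype(str).tolist())
```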
|
<python><pandas><dataframe>
|
2022-12-25 16:54:29
| 1
| 453
|
Sriram
|
74,914,592
| 5,704,664
|
Merge data with timestamp without overriding old data
|
<p>I have data <code>A</code> that looks like this:</p>
<pre><code>timestamp,some_value
389434893,abc
348973493,dac
128197291,fgd
</code></pre>
<p>I have other data <code>B</code> that is the newer version of <code>A</code> (with more data):</p>
<pre><code>timestamp,some_value
389434893,wwwwwwe # timestamp DID NOT CHANGE
348973493,wwwwags # timestamp DID NOT CHANGE
128197291,wwaswww # timestamp DID NOT CHANGE
982379283,ggg
</code></pre>
<p>This data exists in the form of <code>pandas.DataFrame</code>.</p>
<p>I want to merge <code>A</code> with <code>B</code> without affecting old rows from <code>A</code>, even if <code>some_value</code> has been changed. Result <code>R</code> should look like this:</p>
<pre><code>timestamp,some_value
389434893,abc # copied from A
348973493,dac # copied from A
128197291,fgd # copied from A
982379283,ggg # new row from B
</code></pre>
<p>Order is guaranteed.</p>
<p>What pandas methods should I use to achieve this?</p>
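<p>To spell out the behaviour I am after (keep all of <code>A</code> untouched, append only the rows of <code>B</code> whose timestamp is new), a sketch of what I expect the result to look like:</p>

```python
import pandas as pd

A = pd.DataFrame({"timestamp": [389434893, 348973493, 128197291],
                  "some_value": ["abc", "dac", "fgd"]})
B = pd.DataFrame({"timestamp": [389434893, 348973493, 128197291, 982379283],
                  "some_value": ["wwwwwwe", "wwwwags", "wwaswww", "ggg"]})

# rows of B whose timestamp does not already exist in A
new_rows = B[~B["timestamp"].isin(A["timestamp"])]
R = pd.concat([A, new_rows], ignore_index=True)
print(R["some_value"].tolist())  # ['abc', 'dac', 'fgd', 'ggg']
```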
|
<python><pandas>
|
2022-12-25 16:44:32
| 1
| 2,018
|
comonadd
|
74,914,576
| 558,639
|
seeking yet another numpy stacking function
|
<p>Just this:</p>
<pre><code>>>> a1
array([[0],
[1],
[2],
[3],
[4]])
>>> b2
array([[100, 101],
[102, 103],
[104, 105],
[106, 107],
[108, 109]])
</code></pre>
<p>I want to stack them side by side in a way that results in:</p>
<pre><code>array([[[0], [100, 101]],
[[1], [102, 103]],
[[2], [104, 105]],
[[3], [106, 107]],
[[4], [108, 109]]])
</code></pre>
<p>I already figured out that <code>hstack</code> flattens the individual elements <code>[0, 100, 101]</code>, and <code>dstack</code> requires the arrays to have the same shape.</p>
<p>But "there's always a way in numpy", I just haven't found it.</p>
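<p>To be explicit about the target: since the inner rows have different lengths (<code>[0]</code> vs <code>[100, 101]</code>), I assume the result has to be an object array, something like:</p>

```python
import numpy as np

a1 = np.arange(5).reshape(5, 1)
b2 = np.arange(100, 110).reshape(5, 2)

# ragged rows cannot live in a regular ndarray, so use dtype=object:
# each cell of the (5, 2) result holds one small array
out = np.empty((5, 2), dtype=object)
out[:, 0] = list(a1)  # [0], [1], ...
out[:, 1] = list(b2)  # [100, 101], [102, 103], ...
print(out[0, 0], out[0, 1])  # [0] [100 101]
```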
|
<python><numpy><numpy-ndarray>
|
2022-12-25 16:41:45
| 2
| 35,607
|
fearless_fool
|
74,914,461
| 475,982
|
Spacy add_alias, TypeError
|
<p><strong>MWE</strong></p>
<pre><code>from spacy.kb import KnowledgeBase
import spacy
#kb.add_entity already called.
nlp = spacy.blank("en")
kb = KnowledgeBase(vocab=nlp.vocab, entity_vector_length=96)
name = "test"
qid = 1 # type(qid) => int
kb.add_alias(alias=name.lower(), entities=[qid], probabilities=[1])
</code></pre>
<p>produces the error at the last line: <code>TypeError: an integer is required</code></p>
<p>A <a href="https://stackoverflow.com/questions/58577977/upon-importing-spacy-i-found-typeerror">previous SO post</a> suggested that the same error arose in another context (importing SpaCy) because the version of <code>srsly</code> was greater than 2. Using their solution of downgrading to <code>v1.0.1</code> of <code>srsly</code> merely switched the error to module <code>srsly</code> has no attribute <code>read_yaml</code>.</p>
<p>I am using <code>spacy 3.4.4</code> and <code>srsly 2.4.5</code>.</p>
<p><strong>Update</strong>
A fuller stack trace points to line 228 in <code>spacy/kb.pyx</code>:</p>
<pre><code> for entity, prob in zip(entities, probabilities):
entity_hash = self.vocab.strings[entity] #this gives the error
if not entity_hash in self._entry_index:
raise ValueError(Errors.E134.format(entity=entity))
entry_index = <int64_t>self._entry_index.get(entity_hash)
entry_indices.push_back(int(entry_index))
probs.push_back(float(prob))
</code></pre>
|
<python><cython><spacy><spacy-3>
|
2022-12-25 16:14:45
| 1
| 3,163
|
mac389
|
74,914,333
| 10,563,068
|
Re-initialise the dataframe did not work in pandas
|
<p>I am new to pandas and I am trying to re-initialise the dataframe using the pd.DataFrame constructor with dtype=object. This is because the column from Excel is read as float instead of integer. When I try to change it to object, the data stays the same: it is still shown as float. The datatype has changed to object, but the data did not change back to the original numbers. Below are the code and a screenshot of the output:</p>
<pre><code>df = pd.read_csv (filename , low_memory=False)
df = pd.DataFrame(df)
print(df['C9_O1'].head(165))
df = pd.DataFrame(df, dtype=object)
print(df['C9_O1'].head(165))
</code></pre>
<p><a href="https://i.sstatic.net/UyBMG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UyBMG.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe>
|
2022-12-25 15:51:27
| 1
| 453
|
Sriram
|
74,914,319
| 5,133,524
|
How to type only the first positional parameter of a Protocol method and let the others be untyped?
|
<h1>Problem</h1>
<p>How to only type the first positional parameter of a Protocol method and let the others be untyped?</p>
<p>For example, take a protocol named <code>MyProtocol</code> that has a method named <code>my_method</code> which requires only the first positional parameter to be an int, while letting the rest be untyped.
The following class would implement it correctly without error:</p>
<pre class="lang-py prettyprint-override"><code>class Imp1(MyProtocol):
def my_method(self, first_param: int, x: float, y: float) -> int:
return int(first_param - x + y)
</code></pre>
<p>However the following implementation wouldn't implement it correctly, since the first parameter is a float:</p>
<pre class="lang-py prettyprint-override"><code>class Imp2(MyProtocol):
def my_method(self, x: float, y: float) -> int: # Error, method must have a int parameter as a first argument after self
return int(x+y)
</code></pre>
<p>I thought I would be able to do that with <code>*args</code>, and <code>**kwargs</code> combined with <code>Protocol</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Protocol, Any
class MyProtocol(Protocol):
def my_method(self, first_param: int, /, *args: Any, **kwargs: Any) -> int:
...
</code></pre>
<p>But (in mypy) this makes both Imp1 and Imp2 fail, because it forces the method contract to really have a <code>*args</code>, <code>**kwargs</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>class Imp3(MyProtocol):
def my_method(self, first_param: int, /, *args: Any, **kwargs: Any) -> int:
return first_param
</code></pre>
<p>But this does not solve what I am trying to achieve, which is to let the implementing class have any typed/untyped parameters except for the first one.</p>
<h1>Workaround</h1>
<p>I managed to circumvent the issue by using an abstract class with a setter <code>set_first_param</code>, like so:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
from typing import Any
class MyAbstractClass(ABC):
_first_param: int
def set_first_param(self, first_param: int):
self._first_param = first_param
@abstractmethod
def my_method(self, *args: Any, **kwargs: Any) -> int:
...
class AbcImp1(MyAbstractClass):
def my_method(self, x: float, y: float) -> int:
return int(self._first_param + x - y) # now i can access the first_parameter with self._first_param
</code></pre>
<p>But this totally changes the initial API that I am trying to achieve, and in my opinion it makes it less clear to the implementing class that this parameter will be set before <code>my_method</code> is called.</p>
<h1>Note</h1>
<p>This example was tested using python version <code>3.9.13</code> and mypy version <code>0.991</code>.</p>
|
<python><type-hinting><mypy><python-3.9><duck-typing>
|
2022-12-25 15:48:46
| 3
| 523
|
giuliano-macedo
|
74,914,230
| 17,889,840
|
How to import Transformers with Tensorflow
|
<p>After installing Transformers using</p>
<pre><code>pip install Transformers
</code></pre>
<p>I get version 4.25.1, but when I try to import <code>Transformer</code> with</p>
<pre><code>from tensorflow.keras.layers import Transformer
# or
from tensorflow.keras.layers.experimental import Transformer
</code></pre>
<p>I get this error:</p>
<pre><code>ImportError: cannot import name 'Transformer' from 'tensorflow.keras.layers'
</code></pre>
<p>I am using <code>Tensorflow 2.10</code> and <code>python 3.7</code>.</p>
|
<python><tensorflow><keras><transformer-model>
|
2022-12-25 15:30:53
| 1
| 472
|
A_B_Y
|
74,914,150
| 2,602,550
|
Why objects in flask-sqlalchemy still show up as valid?
|
<p>After setting up two tables with the following code</p>
<pre class="lang-py prettyprint-override"><code>class User(db.Model):
id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
email = db.Column(db.String(100), unique=True)
password = db.Column(Password)
name = db.Column(db.String(1000))
studies = db.relationship('Study', backref='owner', lazy=True, cascade='save-update, merge, delete')
class Study(db.Model):
id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
owner_id = db.Column(UUID(as_uuid=True), db.ForeignKey('user.id'), nullable=False)
</code></pre>
<p>I am testing that the removal of studies happens when the user gets removed. So far this seems to be the case, but I need to go to the database to discover that:</p>
<pre class="lang-py prettyprint-override"><code>def test_studies_removed_with_user(app):
user = User.query.filter_by(email=app.config['TESTUSER_EMAIL']).first()
study = Study(owner=user)
db.session.add(study)
db.session.commit()
assert study
db.session.delete(user)
db.session.commit()
assert User.query.filter_by(email=app.config['TESTUSER_EMAIL']).first() is None
assert not Study.query.filter_by(owner_id=user.id).all()
</code></pre>
<p>If I break with the debugger just before the last assert, the user variable is still populated, and so is study. I'd have expected these variables to be invalid by now (the asserts succeed, even the second one, which uses the id from <code>user</code>, so the entries really are removed from the database).</p>
<p>Is this expected behavior, or am I doing something wrong?</p>
|
<python><sqlalchemy><flask-sqlalchemy>
|
2022-12-25 15:16:32
| 0
| 355
|
chronos
|
74,913,940
| 12,242,085
|
How to convert specific format of date to useful and readable format of date in Python Pandas?
|
<p>I have DataFrame in Pandas like below:</p>
<p>DATA TYPES:</p>
<ul>
<li><p>ID - numeric</p>
</li>
<li><p>HOLIDAY - object</p>
</li>
<li><p>YEAR - object</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>HOLIDAY</th>
<th>YEAR</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>1 sty</td>
<td>2022</td>
</tr>
<tr>
<td>222</td>
<td>20 kwi</td>
<td>2022</td>
</tr>
<tr>
<td>333</td>
<td>8 mar</td>
<td>2022</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div></li>
<li><p>sty - January</p>
</li>
<li><p>kwi - April</p>
</li>
<li><p>mar - March</p>
</li>
</ul>
<p>And I need to convert above table so as to have full and useful date (as string format).</p>
<p>So, I need to have something like below:</p>
<pre><code>ID | HOLIDAY | YEAR
----|-------------|-------
111 | 01-01-2022 | 2022
222 | 20-04-2022 | 2022
333 | 08-03-2022 | 2022
... | ... | ...
</code></pre>
<p>How can I do that in Python Pandas?</p>
<p>I used somethink like that:</p>
<pre><code>df['HOLIDAY'] = pd.to_datetime(df['HOLIDAY'] +" "+ df['YEAR'] , format='%d %b %Y')
df['HOLIDAY'] = df['HOLIDAY'].dt.strftime('%d-%m-%Y')
</code></pre>
<p>but it generates an error like the following: <code>ValueError: time data '1 sty 2022' does not match format '%d %b %Y' (match)</code></p>
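<p>To show the direction I have tried: since <code>%b</code> only understands English (or locale-dependent) month names, I assume the Polish abbreviations have to be mapped by hand, something like this sketch (the mapping itself is my assumption):</p>

```python
import pandas as pd

# assumed mapping of Polish month abbreviations to month numbers
months = {"sty": "01", "lut": "02", "mar": "03", "kwi": "04",
          "maj": "05", "cze": "06", "lip": "07", "sie": "08",
          "wrz": "09", "paz": "10", "lis": "11", "gru": "12"}

df = pd.DataFrame({"HOLIDAY": ["1 sty", "20 kwi", "8 mar"],
                   "YEAR": ["2022", "2022", "2022"]})
day = df["HOLIDAY"].str.extract(r"^(\d+)")[0].str.zfill(2)   # zero-pad the day
mon = df["HOLIDAY"].str.extract(r"(\D+)$")[0].str.strip().map(months)
df["HOLIDAY"] = day + "-" + mon + "-" + df["YEAR"]
print(df["HOLIDAY"].tolist())  # ['01-01-2022', '20-04-2022', '08-03-2022']
```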
|
<python><pandas><string><date>
|
2022-12-25 14:38:45
| 2
| 2,350
|
dingaro
|
74,913,886
| 11,981,718
|
How to change the plot x axis in time series in graph objects plotly
|
<p>I want the x axis to go from 0 to 24 h, so that I can export the images and create a GIF.
<a href="https://i.sstatic.net/qALO4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qALO4.png" alt="enter image description here" /></a></p>
|
<python><pandas><plotly>
|
2022-12-25 14:30:11
| 0
| 412
|
tincan
|
74,913,853
| 12,226,377
|
Removing different string patterns from Pandas column
|
<p>I have the following column which consists of email subject headers:</p>
<pre><code>Subject
EXT || Transport enquiry
EXT || RE: EXTERNAL: RE: 0001 || Copy of enquiry
EXT || FW: Model - Jan
SV: [EXTERNAL] Calculations
</code></pre>
<p>What I want to achieve is:</p>
<pre><code>Subject
Transport enquiry
0001 || Copy of enquiry
Model - Jan
Calculations
</code></pre>
<p>and for this I am using the code below, which only applies the first regular expression I pass and ignores the rest:</p>
<pre><code>def clean_subject_prelim(text):
text = re.sub(r'^EXT \|\| $' , '' , text)
text = re.sub(r'EXT \|\| RE: EXTERNAL: RE:', '' , text)
text = re.sub(r'EXT \|\| FW:', '' , text)
text = re.sub(r'^SV: \[EXTERNAL]$' , '' , text)
return text
df['subject_clean'] = df['Subject'].apply(lambda x: clean_subject_prelim(x))
</code></pre>
<p>Why is this not working? What am I missing here?</p>
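<p>For context, the closest I got was collapsing everything into one pattern. The trailing <code>$</code> anchors in my first and last rules look suspicious to me, since they force the prefix to be the whole string; without them, something like this behaves the way I want:</p>

```python
import pandas as pd

# one anchored alternation; no "$" anchors, so the prefix can be
# followed by the rest of the subject line
pattern = r"^(?:EXT \|\| (?:RE: EXTERNAL: RE: |FW: )?|SV: \[EXTERNAL\] )"

subjects = pd.Series([
    "EXT || Transport enquiry",
    "EXT || RE: EXTERNAL: RE: 0001 || Copy of enquiry",
    "EXT || FW: Model - Jan",
    "SV: [EXTERNAL] Calculations",
])
cleaned = subjects.str.replace(pattern, "", regex=True).str.strip()
print(cleaned.tolist())
# ['Transport enquiry', '0001 || Copy of enquiry', 'Model - Jan', 'Calculations']
```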
|
<python><pandas><regex>
|
2022-12-25 14:24:17
| 2
| 807
|
Django0602
|
74,913,841
| 4,215,840
|
Python Object inheritance
|
<pre class="lang-py prettyprint-override"><code>class Phone:
    def install(self, app_name):
...
class InstagramApp(Phone):
...
def install_app(phone: "Phone", app_name):
phone.install(app_name)
app = InstagramApp()
install_app(app, 'instagram') # <--- is that OK ?
</code></pre>
<p><code>install_app</code> gets a <code>Phone</code> object.
Will it work with an <code>InstagramApp</code> object?</p>
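<p>Here is a runnable version of what I mean (I gave <code>install</code> a body and a return value purely for the sake of the example):</p>

```python
class Phone:
    def install(self, app_name):
        return f"installing {app_name}"

class InstagramApp(Phone):
    pass

def install_app(phone: "Phone", app_name):
    # the annotation says Phone, but any subclass instance is also a Phone
    return phone.install(app_name)

app = InstagramApp()
print(install_app(app, "instagram"))  # installing instagram
```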
|
<python><object><inheritance>
|
2022-12-25 14:22:11
| 4
| 451
|
Sion C
|
74,913,448
| 12,242,085
|
How to read table from url as DataFrame and modify format of data in one column in Python Pandas?
|
<p>I have a link to the website with table like the follow: <a href="https://www.timeanddate.com/holidays/kenya/2022" rel="nofollow noreferrer">https://www.timeanddate.com/holidays/kenya/2022</a></p>
<p>How can I:</p>
<ol>
<li>read this table as DataFrame in Jupyter Notebook in Python ?</li>
<li>Convert the column "Date" so as to have a date format like "01.01.2022", not "1 sty" as it appears on the website</li>
<li>How to create a column "Day" with values like sobota, niedziela and so on, which currently appear between the "Date" and "Name" columns?</li>
</ol>
<p>So, as a result I need something like below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Day</th>
<th>Name</th>
<th>Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>01.01.2022</td>
<td>sobota</td>
<td>New Year's Day</td>
<td>Public holiday</td>
</tr>
<tr>
<td>20.03.2022</td>
<td>niedziela</td>
<td>March Equinox</td>
<td>Season</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>How can I do that in Python Pandas?</p>
|
<python><pandas><date><web-scraping><beautifulsoup>
|
2022-12-25 13:03:48
| 2
| 2,350
|
dingaro
|
74,913,369
| 2,153,383
|
how can I get the access token needed for ms graph
|
<p>When registering an app in Azure to access mail using MS Graph, the callback URL is optional. When I try the code below, it doesn't seem to get the correct token, because I get a 400 error when using the token to access Office 365. But if I copy the token from the MS Graph developer page and use that token, it works.</p>
<pre><code>tenant_id = "xxxxx"
TOKEN_ENDPOINT = 'https://login.microsoftonline.com/' + tenant_id + '/oauth2/v2.0/token'
# Makes a POST request to the token endpoint to get an access token
def get_access_token():
payload = {
'client_id': CLIENT_ID,
'client_secret': CLIENT_SECRET,
'grant_type': 'client_credentials',
'scope': 'https://graph.microsoft.com/.default'
}
response = requests.post(TOKEN_ENDPOINT, data=payload)
if response.status_code == 200:
return response.json()['access_token']
else:
print('Failed to get access token:')
print(response.status_code)
print(response.text)
return None
</code></pre>
<p>Can someone show me the correct way to get the token?</p>
|
<python><microsoft-graph-api>
|
2022-12-25 12:46:36
| 2
| 2,471
|
MTplus
|
74,913,169
| 5,783,753
|
How can I make pdf2image work with PDFs that have paths containing Chinese characters?
|
<p>Following <a href="https://stackoverflow.com/questions/46184239/extract-a-page-from-a-pdf-as-a-jpeg">this question</a>, I tried to run the following code to convert PDF with a path that contains Chinese characters to images:</p>
<pre><code>from pdf2image import convert_from_path
images = convert_from_path('path with Chinese character in it/some Chinese character.pdf', 500)
# save images
</code></pre>
<p>I got this error message:</p>
<pre><code>PDFPageCountError: Unable to get page count.
I/O Error: Couldn't open file 'path with Chinese character in it/??????.pdf': No such file or directory.
</code></pre>
<p>in which all Chinese characters are replaced with "?".</p>
<p>The issue is caused solely by the Chinese characters in the path, since the program worked as intended after I made sure the path contained no Chinese characters.</p>
<p>In <code>pdf2image.py</code> I tried to alter the function <code>pdfinfo_from_path</code> by changing <code>out.decode("utf8", "ignore")</code> to e.g. <code>out.decode("utf32", "ignore")</code>, which also did not work.</p>
<p>Not sure whether it is relevant: according to the aforementioned answer I also need to install poppler, but my code already worked when the path contained no Chinese characters. In addition, running <code>conda install -c conda-forge poppler</code> (from the answer above) never finishes, even after several hours of waiting.</p>
|
<python><image><pdf>
|
2022-12-25 12:00:52
| 1
| 848
|
Aqqqq
|
74,913,090
| 3,482,266
|
How does one use memit magic?
|
<p>I'm trying to run the following code in a jupyter notebook:</p>
<pre><code>%%memit
list1 = [i*i for i in range(100_000)]
lista2 = list1
</code></pre>
<p>However, I get the following error message:</p>
<pre><code>UsageError: Cell magic `%%memit` not found.
</code></pre>
<p>If instead of <code>memit</code>, I use <code>timeit</code>, everything works.</p>
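<p>If I understand correctly, <code>%%memit</code> is not built into IPython the way <code>%%timeit</code> is; it comes from the <code>memory_profiler</code> package, whose extension has to be installed and loaded first (this is my assumption about why the magic is "not found"):</p>
<pre><code>pip install memory-profiler
</code></pre>
<p>and then, in a notebook cell before using the magic:</p>
<pre><code>%load_ext memory_profiler
</code></pre>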
|
<python><memory><jupyter-notebook>
|
2022-12-25 11:45:43
| 1
| 1,608
|
An old man in the sea.
|
74,913,020
| 12,858,691
|
Pandas get postion of last value based on condition for each column (efficiently)
|
<p>I want to find, for each column of my dataframe, the row in which the value <code>1</code> occurs last. Given this last row index, I want to calculate the "recency" of the occurrence, like so:</p>
<pre><code>>> df = pandas.DataFrame({"a":[0,0,1,0,0]," b":[1,1,1,1,1],"c":[1,0,0,0,1],"d":[0,0,0,0,0]})
>> df
a b c d
0 0 1 1 0
1 0 1 0 0
2 1 1 0 0
3 0 1 0 0
4 0 1 1 0
</code></pre>
<p>Desired result:</p>
<pre><code>>> calculate_recency_vector(df)
[3,1,1,None]
</code></pre>
<p>The desired result shows for each column "how many rows ago" the value <code>1</code> appeared for the last time. Eg for the column <code>a</code> the value <code>1</code> appears last in the 3rd-last row, hence the recency of <code>3</code> in the result vector. Any ideas how to implement this?</p>
<p>Edit: to avoid confusion, I changed the desired output for the last column from <code>0</code> to <code>None</code>. This column has no recency because the value <code>1</code> does not occur at all.</p>
<p>Edit II: Thanks for the great answers! I have to calculate this recency vector approx. 150k times on dataframes shaped (42,250). A more efficient solution would be much appreciated.</p>
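<p>To make the definition precise, here is a vectorised sketch of what I mean by "recency" (position of the last <code>1</code>, counted from the end, with <code>None</code> for all-zero columns):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [0, 0, 1, 0, 0], "b": [1, 1, 1, 1, 1],
                   "c": [1, 0, 0, 0, 1], "d": [0, 0, 0, 0, 0]})
arr = df.to_numpy()
n = len(df)

has_one = (arr == 1).any(axis=0)
# argmax on the row-reversed mask finds the first 1 from the bottom,
# i.e. the last occurrence in the original row order
last_idx = n - 1 - (arr == 1)[::-1].argmax(axis=0)
recency = [int(n - i) if ok else None for i, ok in zip(last_idx, has_one)]
print(recency)  # [3, 1, 1, None]
```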
|
<python><pandas>
|
2022-12-25 11:28:40
| 3
| 611
|
Viktor
|
74,912,686
| 2,391,859
|
Question about using multiprocessing and slaves in Python. Getting error: <class 'TypeError'> 'bytes' object is not callable
|
<p>I just started with trying to use multiprocessing in Python to offload some tasks. This is the basic code here, but I am using it as part of a 'Python Plug-in' that is part of Orthanc, as referenced here: <a href="https://book.orthanc-server.com/plugins/python.html?highlight=python#slave-processes-and-the-orthanc-module" rel="nofollow noreferrer">Orthanc Multiprocessing</a></p>
<p>It is a little complicated, but the issue I am having seems fairly simple:</p>
<p><strong>"Slave Process"</strong></p>
<pre><code>def DelegateStudyArchive(uri):
new_zip = BytesIO()
logging.info("In the Slave Handler")
r = requests.get('http://localhost:8042'+uri, headers = { 'Authorization' : TOKEN })
logging.info(r.ok)
logging.info(r.headers)
archive = r.text # vs. text vs. content
with ZipFile('/python/radiant_cd.zip', 'r') as radiant_zip:
with ZipFile(new_zip, 'w') as new_archive:
for item in radiant_zip.filelist:
# To get rid of '__MACOSX' files skip them here
if '__MACOSX' not in item.filename:
# logging.info("Adding " +item.filename+ " to archive")
new_archive.writestr(item, radiant_zip.read(item.filename))
else:
logging.info("Skipping " +item.filename+ ", it is a Mac OS file remnant.")
new_archive.writestr('dcmdata.zip', archive)
# Important to read as binary, otherwise the codec fails.
f = open("/python/ReadMe.pdf", "rb")
new_archive.writestr('ReadMe.pdf', f.read())
f.close()
value = new_zip.getvalue()
return value
</code></pre>
<p><strong>Main script</strong></p>
<pre><code>def OnDownloadStudyArchive(output, uri, **request):
# Offload the call to "SlowComputation" onto one slave process.
# The GIL is unlocked until the slave sends its answer back.
host = "Not Defined"
userprofilejwt = "Not Defined"
if "headers" in request and "host" in request['headers']:
host = request['headers']['host']
if "headers" in request and "userprofilejwt" in request['headers']:
userprofilejwt = request['headers']['userprofilejwt']
logging.info("STUDY|DOWNLOAD_ARCHIVE|ID=" + request['groups'][0] + " HOST=" + host + " PROFILE= " + userprofilejwt)
uri = uri.replace("_slave", '')
answer = POOL.apply(DelegateStudyArchive(uri), args=(uri), kwds = {})
pool.close()
output.AnswerBuffer(answer, 'application/zip')
orthanc.RegisterRestCallback('/studies/(.*)/archive_slave', OnDownloadStudyArchive)
</code></pre>
<p>I got far enough to get the Main script to call DelegateStudyArchive(uri) because the logger is showing:</p>
<pre><code>2022-12-25 04:55:24,504 | root | INFO | In the Slave Handler
2022-12-25 04:55:24,525 | urllib3.connectionpool | DEBUG | Starting new HTTP connection (1): localhost:8042
2022-12-25 04:55:24,686 | urllib3.connectionpool | DEBUG | http://localhost:8042 "GET /studies/0cc9fb82-726d3dfc-e6f2b353-e96558d7-986cbb2c/archive HTTP/1.1" 200 None
2022-12-25 04:55:25,610 | root | INFO | JOB|JOB_SUCCESS|{"CompletionTime": "20221225T095525.609389", "Content": {"ArchiveSize": "7520381", "ArchiveSizeMB": 7, "Description": "REST API", "InstancesCount": 51, "UncompressedSize": "17817326", "UncompressedSizeMB": 16}, "CreationTime": "20221225T095524.546173", "EffectiveRuntime": 0.923, "ErrorCode": 0, "ErrorDescription": "Success", "ErrorDetails": "", "ID": "8b619458-5b82-441d-9505-94e68d90398e", "Priority": 0, "Progress": 100, "State": "Success", "Timestamp": "20221225T095525.609624", "Type": "Archive"}
2022-12-25 04:55:25,612 | root | INFO | JOB|MEDIA|ArchiveorDCMCreatedviaJOB
2022-12-25 04:55:25,622 | root | INFO | True
2022-12-25 04:55:25,623 | root | INFO | {'Connection': 'close', 'Content-Disposition': 'filename="0cc9fb82-726d3dfc-e6f2b353-e96558d7-986cbb2c.zip"', 'Content-Type': 'application/zip'}
2022-12-25 04:55:26,468 | charset_normalizer | DEBUG | Encoding detection: Unable to determine any suitable charset.
</code></pre>
<p>But then I get an error in the main script that says:</p>
<pre><code>E1225 04:55:27.163292 PluginsManager.cpp:153] Error in the REST callback, traceback:
<class 'TypeError'>
'bytes' object is not callable
File "/python/combined.py", line 2147, in OnDownloadStudyArchive
answer = POOL.apply(DelegateStudyArchive(uri), args=(uri), kwds = {})
File "/usr/lib/python3.9/multiprocessing/pool.py", line 357, in apply
return self.apply_async(func, args, kwds).get()
File "/usr/lib/python3.9/multiprocessing/pool.py", line 771, in get
raise self._value
</code></pre>
<p>So I think <code>answer</code> is null, or the call just throws an exception and the zip file is not returned. I presume (and hope) there is an easy fix, since the rest seems to be working; if so, I have several other places where I would like to do something similar.</p>
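<p>My understanding of where it goes wrong, reduced to a tiny sketch (I used a <code>ThreadPool</code> here only so the snippet runs anywhere): <code>apply</code> wants the callable itself plus an args <em>tuple</em>, while my code passes the already-called result, and <code>(uri)</code> is just a parenthesised string, not a tuple:</p>

```python
from multiprocessing.pool import ThreadPool

def delegate(uri):
    # stand-in for DelegateStudyArchive: returns bytes, like my real handler
    return b"zip-bytes-for-" + uri.encode()

pool = ThreadPool(1)
# pass the function object, not delegate(uri); note the trailing comma in args
answer = pool.apply(delegate, args=("/studies/123/archive",))
pool.close()
pool.join()
print(answer)  # b'zip-bytes-for-/studies/123/archive'
```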
|
<python><python-3.x><multiprocessing>
|
2022-12-25 10:08:52
| 2
| 2,338
|
SScotti
|
74,912,653
| 10,181,236
|
How can I efficiently plot a distance matrix using seaborn?
|
<p>So I have a dataset of more or less 11,000 records with 4 features, all of them discrete or continuous. I perform clustering using K-means, then I add a "cluster" column to the dataframe using <code>kmeans.labels_</code>. Now I want to plot the distance matrix, so I used <code>pdist</code> from <code>scipy</code>, but the matrix is not plotted.</p>
<p>Here is my code.</p>
<pre><code>from scipy.spatial.distance import pdist
from scipy.spatial.distance import squareform
import gc
# distance matrix
def distance_matrix(df_labeled, metric="euclidean"):
df_labeled.sort_values(by=['cluster'], inplace=True)
dist = pdist(df_labeled, metric)
dist = squareform(dist)
sns.heatmap(dist, cmap="mako")
print(dist)
del dist
gc.collect()
distance_matrix(finalDf)
</code></pre>
<p>Output:</p>
<pre><code>[[ 0. 2.71373462 3.84599479 ... 7.59910903 8.10265588
8.27195104]
[ 2.71373462 0. 2.94410672 ... 7.90444283 8.28225031
8.48094661]
[ 3.84599479 2.94410672 0. ... 9.78706347 10.42014451
10.61261498]
...
[ 7.59910903 7.90444283 9.78706347 ... 0. 1.27795469
1.44711258]
[ 8.10265588 8.28225031 10.42014451 ... 1.27795469 0.
0.52333107]
[ 8.27195104 8.48094661 10.61261498 ... 1.44711258 0.52333107
0. ]]
</code></pre>
<p>I get the following graph:<br />
<a href="https://i.sstatic.net/iNyad.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iNyad.png" alt="enter image description here" /></a></p>
<p>As you can see, the plot is empty.
Also, I have to free up some RAM, because Google Colab crashes.</p>
<p>How can I solve the problem?</p>
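<p>One likely cause, as a hedged sketch: <code>sns.heatmap</code> draws onto the current figure, but nothing forces a render inside the function (in a script you need <code>plt.show()</code>; in Colab the call should be followed by <code>plt.show()</code> or be the last expression of a cell), and an 11,000 × 11,000 float64 matrix is roughly 1 GB, which alone can crash Colab. Subsampling first keeps both problems manageable; the pure-Python distance computation below merely stands in for <code>pdist</code>/<code>squareform</code>:</p>

```python
import math
import random

random.seed(0)
points = [[random.random() for _ in range(4)] for _ in range(100)]

# Subsample before plotting: an 11k x 11k float64 matrix is ~1 GB.
sample = random.sample(points, 20)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Square, symmetric distance matrix with a zero diagonal.
dist = [[euclidean(p, q) for q in sample] for p in sample]

# In the real code:
#   sns.heatmap(dist, cmap="mako")
#   plt.show()   # forces the render when heatmap is called inside a function
```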
|
<python><seaborn><cluster-analysis><distance-matrix>
|
2022-12-25 10:01:16
| 2
| 512
|
JayJona
|
74,912,638
| 7,702,011
|
How to use current chrome with selenium with python
|
<p>I want to use my local Chrome profile with Selenium, but every time I just get a fresh Chrome without any of my cookies.</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
option = webdriver.ChromeOptions()
option.add_argument(r'--user-data-dir=/Users/mac/Library/Application Support/Google/Chrome/Default')
driver = webdriver.Chrome(chrome_options=option, executable_path="/usr/local/bin/chromedriver")
driver.get("https://twitter.com/")
</code></pre>
<ul>
<li>my chrome user data dir</li>
</ul>
<p><a href="https://i.sstatic.net/C7EAL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C7EAL.png" alt="enter image description here" /></a></p>
<p><strong>I want to use default chrome with selenium, then how to setup chrome options?</strong></p>
<p>Please help, thanks!</p>
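<p>A hedged guess at the cause: <code>--user-data-dir</code> must point at the parent <em>Chrome</em> folder, while the profile inside it (<em>Default</em>) is selected with a separate flag, and Chrome must be fully closed first or the locked profile is silently replaced by a temporary one. A sketch of the flags (the path is taken from the question and is hypothetical for any other machine):</p>

```python
# Parent folder, NOT ".../Chrome/Default"
profile_root = r"/Users/mac/Library/Application Support/Google/Chrome"

chrome_flags = [
    f"--user-data-dir={profile_root}",
    "--profile-directory=Default",   # picks the profile inside the root
]

# With selenium (not executed here):
#   options = webdriver.ChromeOptions()
#   for flag in chrome_flags:
#       options.add_argument(flag)
#   driver = webdriver.Chrome(options=options)
```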
|
<python><selenium><selenium-webdriver>
|
2022-12-25 09:57:59
| 2
| 323
|
SylorHuang
|
74,912,330
| 20,612,566
|
How to add a Django custom signal to worked out request
|
<p>I have a Django project with some third-party APIs inside it. I have a script that changes product stocks in my store via the marketplace's API. And in my views.py I have a CreateAPIView class that calls the marketplace's API method for getting product stocks and writes them to a MySQL DB. Now I have to add a signal that starts the CreateAPIView class (to get and store the changed stocks data) immediately after the marketplace's change-stocks method has finished. I know how to add a Django signal with pre_save and post_save, but I don't know how to add a signal on a request.</p>
<p>I found something like this:</p>
<pre><code>from django.core.signals import request_finished
from django.dispatch import receiver

@receiver(request_finished)
def my_callback(sender, **kwargs):
    print("Request finished!")
</code></pre>
<p>But it is not what I'm looking for. I need a signal that starts a CreateAPIView class after another API class has finished its request. I will be very thankful for any advice on how to solve this problem.</p>
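<p>One possible pattern, sketched with a minimal stand-in so it runs without Django: define a custom signal, have the stock-changing code <code>send</code> it once the marketplace call returns, and connect a receiver that triggers the same logic the <code>CreateAPIView</code> uses. In a real project the <code>Signal</code> class would come from <code>django.dispatch</code>, and the names below are hypothetical:</p>

```python
# Minimal stand-in for django.dispatch.Signal, just to show the wiring.
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        return [receiver(sender, **kwargs) for receiver in self._receivers]

stocks_updated = Signal()          # django.dispatch.Signal() in real code
refreshed = []

def refresh_stocks(sender, **kwargs):
    # Here you would run the CreateAPIView logic (e.g. call its
    # serializer directly) instead of appending to a list.
    refreshed.append(kwargs["marketplace"])

stocks_updated.connect(refresh_stocks)

# In the script that changes stocks, fire the signal after the
# marketplace API call has completed:
stocks_updated.send(sender=None, marketplace="marketplace-x")
```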
|
<python><django><django-rest-framework><django-signals>
|
2022-12-25 08:49:16
| 1
| 391
|
Iren E
|
74,912,100
| 4,429,265
|
Set dynamic values from a dictionary for select values inside a Django template using Javascript or other method
|
<p>I have three consecutive select options that their values change according to the previous select. The purpose is to categorize products using these select options. First option is to either categorize products with their <code>usage</code> or <code>model</code> value. If <code>usage</code> is selected as the first <code>select</code> option, then the second <code>select</code> that is populated with <code>usage</code>s list which is all objects of the <code>Usage</code> model, is shown, and if <code>model</code> is selected, then the select populated with all objects of <code>MainModel</code> model is shown and the other <code>select</code> tag gets hidden with <code>visually-hidden</code> class.</p>
<p>To this point, my codes are as below:</p>
<p>views.py:</p>
<pre><code>def get_common_queryset():
    usage_queryset = Usage.objects.all()
    main_model_queryset = MainModel.objects.all()
    sub_usage_queryset = SubUsage.objects.all()
    pump_type_queryset = PumpType.objects.all()
    queryset_dictionary = {
        "usage_queryset": usage_queryset,
        "main_model_queryset": main_model_queryset,
        "sub_usage_queryset": sub_usage_queryset,
        "pump_type_queryset": pump_type_queryset,
    }
    return queryset_dictionary

def solution_main(request):
    context = get_common_queryset()
    return render(request, "solutions/solution_main.html", context)
</code></pre>
<p>my template:</p>
<pre><code><div class="col-md-3 mx-md-5">
<h2 class="h5 nm-text-color fw-bold mb-4">Choose usage or model:</h2>
<select required aria-label="Select usage or model"
id="usage_model_select" class="form-select" onchange="set_usage_or_model_dic()">
<option>----</option>
<option value="usage">Usage</option>
<option value="model">Model</option>
</select>
</div>
{# usage or model select div #}
<div class="col-md-3 mx-md-5">
{# usage select div #}
<div class="usage visually-hidden" id="usage_div">
<h2 class="h5 nm-text-color fw-bold mb-4">Select usage:</h2>
<select required aria-label="Select usage" class="form-select"
name="usage_select" onchange="set_sub_usage_list()" id="usage_select_id">
<option selected>----</option>
{% for usage in usage_queryset %}
<option value="{{ usage.id }}">{{ usage.usage_name_fa }}</option>
{% endfor %}
</select>
</div>
{# model select div #}
<div class="model visually-hidden" id="model_div">
<h2 class="h5 nm-text-color fw-bold mb-4">Select model:</h2>
<select required aria-label="Select model" class="form-select"
name="model_select" onchange="set_pump_type_list()" id="model_select_id">
<option selected>----</option>
{% for model in main_model_queryset %}
<option value="{{ model.id }}">{{ model.model_name_fa }}</option>
{% endfor %}
</select>
</div>
</div>
{# select sub_usage or pump_type div #}
<div class="col-md-3 mx-md-5">
{# sub_usage select div #}
<div class="sub_usage visually-hidden" id="sub_usage_div">
<h2 class="h5 nm-text-color fw-bold mb-4">Select sub-usage:</h2>
<select required aria-label="Select sub_usage" class="form-select" name="sub_usage_select">
<option selected>All sub-usages from this usage</option>
{% for sub_usage in sub_usage_queryset %}
<option value="{{ sub_usage.id }}">{{ sub_usage.sub_usage_name_fa }}</option>
{% endfor %}
</select>
</div>
{# model select div #}
<div class="pump-type visually-hidden" id="pump_type_div">
<h2 class="h5 nm-text-color fw-bold mb-4">Select pump type:</h2>
<select aria-label="Select pump_type" class="form-select" name="pump_type_select">
<option selected>All pump types of this model</option>
{% for pump_type in pump_type_queryset %}
<option value="{{ pump_type.id }}">{{ pump_type.type_name }}</option>
{% endfor %}
</select>
</div>
</div>
<div>
<input type="submit" value="Next" id="submit" class="btn btn-primary">
</div>
</code></pre>
<p>(I'm using JS to show/hide specific divs with removing/adding <code>visually-hidden</code> class)</p>
<p>But I don't want to use <code>sub_usage_queryset = SubUsage.objects.all()</code>; I want to see which <code>usage</code> (or <code>model</code>) is selected in the previous stage and populate the next <code>select</code> according to this choice.</p>
<p>The solution that I have in mind is that since there are few <code>usage</code>s and <code>main_model</code>s, I can create a dictionary for each that contains different <code>usage</code>s or <code>main_model</code>s as keys, and their <code>sub_usage</code>s or <code>pump_type</code>s as a list for their value. As an example for <code>usage</code>s:</p>
<pre><code>sub_usage_list = {}
for usage in Usage.objects.all():
    usage_sub_usage_list = SubUsage.objects.filter(usage=usage)
    sub_usage_list[usage] = usage_sub_usage_list
</code></pre>
<p>So <code>sub_usage_list</code> will contain each <code>usage</code> as a key, and that <code>usage</code>'s <code>sub_usage</code>s as a list for that key's value.</p>
<p>I don't know if this approach is correct, and even if it is, I don't know how to use a specific sub_usage list from this dictionary according to the selected value in the relating <code>select</code> option. I'm not very familiar with JS, so I'd very much appreciate your help and tips.</p>
<p>NOTE: As you can see, I'm using server side rendering, so I don't know how to do this without refreshing the page.</p>
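<p>One common approach that keeps server-side rendering, sketched here with hypothetical data: build a dictionary keyed by <code>usage</code> id in the view, serialize it to JSON (for example with the <code>json_script</code> template tag), and let a few lines of JS repopulate the second <code>select</code> from <code>map[selectedUsageId]</code>, which avoids a page refresh:</p>

```python
import json

# Hypothetical rows standing in for
# SubUsage.objects.values_list("usage_id", "id", "sub_usage_name_fa")
sub_usage_rows = [
    (1, 10, "irrigation"),
    (1, 11, "drainage"),
    (2, 12, "booster"),
]

sub_usage_map = {}
for usage_id, sub_id, name in sub_usage_rows:
    # Keys are strings so they match the <option value="..."> read in JS.
    sub_usage_map.setdefault(str(usage_id), []).append({"id": sub_id, "name": name})

payload = json.dumps(sub_usage_map)   # embed in the template for the JS side
```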
|
<javascript><python><django><django-views><django-templates>
|
2022-12-25 07:46:05
| 1
| 417
|
Vahid
|
74,911,971
| 7,339,624
|
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'
|
<p>I'm exploring Ubuntu, and for a small project I'm writing a <code>.sh</code> (bash) file that will activate a conda environment and run a Python file. Here is my <code>.sh</code> file:</p>
<pre><code>#!/bin/sh
conda activate simple_python3.10
python3 code.py
</code></pre>
<p>When I run the bash file, conda gives me this warning:</p>
<pre><code>CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
</code></pre>
<p>At this point I don't know what exactly I should do. I ran <code>conda init bash</code>, but it didn't work. I added <code>conda init bash</code> to my file, but it didn't work either; the warning showed up once again. I also rebooted my PC, but that didn't help. Could you please tell me what exactly I should do?</p>
<p>P.S. Just in case, this is what I get after running <code>conda init bash</code>:</p>
<pre><code>no change /home/p2mohsen/anaconda3/condabin/conda
no change /home/p2mohsen/anaconda3/bin/conda
no change /home/p2mohsen/anaconda3/bin/conda-env
no change /home/p2mohsen/anaconda3/bin/activate
no change /home/p2mohsen/anaconda3/bin/deactivate
no change /home/p2mohsen/anaconda3/etc/profile.d/conda.sh
no change /home/p2mohsen/anaconda3/etc/fish/conf.d/conda.fish
no change /home/p2mohsen/anaconda3/shell/condabin/Conda.psm1
no change /home/p2mohsen/anaconda3/shell/condabin/conda-hook.ps1
no change /home/p2mohsen/anaconda3/lib/python3.9/site-packages/xontrib/conda.xsh
no change /home/p2mohsen/anaconda3/etc/profile.d/conda.csh
no change /home/p2mohsen/.bashrc
No action taken.
</code></pre>
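<p>A hedged sketch of the usual fix, assuming the Anaconda path shown in the output above: a non-interactive script does not read <code>.bashrc</code> (which is where <code>conda init</code> puts its hook), so the script has to load conda's shell functions itself before <code>conda activate</code> can work. Note the shebang is <code>bash</code>, not <code>sh</code>:</p>

```shell
#!/bin/bash
# Load conda's shell functions first; the path matches the
# /home/p2mohsen/anaconda3 install shown in the `conda init` output.
source "$HOME/anaconda3/etc/profile.d/conda.sh"

conda activate simple_python3.10
python3 code.py
```

<p>Alternatively, <code>conda run -n simple_python3.10 python3 code.py</code> runs the file in the environment without activating it at all.</p>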
|
<python><bash><ubuntu><anaconda><conda>
|
2022-12-25 07:09:04
| 1
| 4,337
|
Peyman
|
74,911,643
| 15,982,771
|
Can I use sync_to_async for any function in Python?
|
<p>Background:
I'm working on a Discord bot that uses requests. The requests are asynchronous, so I'm using the library <em>asgiref.sync</em>.</p>
<p>(I know I obviously can't use this function for asynchronous functions.)</p>
<p>I implemented <em>sync_to_async</em> into all the requests and things that may take long to process. The function doesn't produce any error. However, I'm not sure if this actually does anything.</p>
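<p><code>sync_to_async</code> is meant for wrapping <em>blocking</em> (synchronous) functions so an async event loop can await them in a worker thread without freezing; it does nothing useful around code that is already async. A self-contained sketch of the same idea using only the standard library (<code>asyncio.to_thread</code>, available since Python 3.9, behaves much like <code>sync_to_async(..., thread_sensitive=False)</code>):</p>

```python
import asyncio
import time

def blocking_fetch(url):
    # Stands in for a blocking call such as requests.get(url).
    time.sleep(0.01)
    return f"body of {url}"

async def main():
    # The event loop stays responsive while the blocking call
    # runs in a worker thread.
    return await asyncio.to_thread(blocking_fetch, "https://example.com")

result = asyncio.run(main())
```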
|
<python><asynchronous>
|
2022-12-25 05:26:22
| 4
| 1,128
|
Blue Robin
|
74,911,624
| 2,037,787
|
ctypes python create string structure
|
<p>I have a struct like the one below in D (dlang):</p>
<pre><code>struct gph
{
string x;
string y;
}
</code></pre>
<p>and a function as follows:</p>
<pre><code>pragma(mangle, "print_gph")
void print_gph(gph g)
{
stderr.writeln(g.x);
}
</code></pre>
<p>I have created a .so file and am trying to access it from Python. I have created a ctypes <code>Structure</code> for it in Python:</p>
<pre><code>from ctypes import (Structure, c_char_p, c_size_t, c_void_p,
                    cast, create_string_buffer, sizeof)

class gph(Structure):
    _fields_ = [
        ('x_p', c_char_p),
        ('x_len', c_size_t),
        ('y_p', c_char_p),
        ('y_len', c_size_t),
    ]

_input = create_string_buffer(b"input 1")
sample_struct = gph()
sample_struct.x_p = cast(_input, c_char_p)
sample_struct.x_len = c_size_t(sizeof(_input))

lib.print_gph(c_void_p(),
              sample_struct)  # calling the D function
</code></pre>
<p>However, I am getting an error like the one below:</p>
<pre><code>src/rt/dwarfeh.d:330: uncaught exception reached top of stack
This might happen if you're missing a top level catch in your fiber or signal handler
std.exception.ErrnoException@/usr/include/dmd/phobos/std/stdio.d(3170): Enforcement failed (Bad address)
Aborted
</code></pre>
<p>I suspect that the <code>c_char_p</code> assignment or the structure creation is not correct. I have to pass <code>c_void_p()</code> because my <code>gph</code> function is within a <code>struct</code> (meaning it points to <code>this</code> or <code>self</code>).</p>
<p>Can you please help me?</p>
|
<python><ctypes><d>
|
2022-12-25 05:17:44
| 1
| 8,154
|
backtrack
|
74,911,605
| 2,359,865
|
python lambda functions behavior
|
<p>I am having a hard time understanding the behavior of these lambda functions in python 3.10.0</p>
<p>I am trying to define the <code>NOT</code> logical operator from lambda calculus (see, e.g., the definition on <a href="https://en.wikipedia.org/wiki/Lambda_calculus#Logic_and_predicates" rel="nofollow noreferrer">Wikipedia</a>).
The following implementation is correct:</p>
<pre><code>In [1]: TRUE = lambda a: lambda b: a
...: FALSE = lambda a: lambda b: b
...: NOT = lambda a: a(FALSE)(TRUE)
...: assert NOT(FALSE) == TRUE
</code></pre>
<p>However, when I try and do a literal substitution either for <code>FALSE</code> or <code>TRUE</code>, the code fails</p>
<pre><code>In [2]: NOT1 = lambda a: a(FALSE)(lambda a: lambda b: a)
...: assert NOT1(FALSE) == TRUE
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[2], line 2
1 NOT1 = lambda a: a(FALSE)(lambda a: lambda b: a)
----> 2 assert NOT1(FALSE) == TRUE
AssertionError:
</code></pre>
<p>Can anybody explain to me why this happens?</p>
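<p>A short demonstration of the cause: <code>==</code> on Python functions falls back to identity (<code>is</code>), so two textually identical lambdas are never equal. <code>NOT(FALSE)</code> happens to return the very object bound to <code>TRUE</code>, while <code>NOT1(FALSE)</code> returns a fresh lambda that merely <em>behaves</em> like <code>TRUE</code>; Church booleans should therefore be compared by applying them:</p>

```python
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
NOT = lambda a: a(FALSE)(TRUE)
NOT1 = lambda a: a(FALSE)(lambda a: lambda b: a)

# Functions compare by identity, not by behaviour:
assert (lambda a: lambda b: a) != (lambda a: lambda b: a)

# NOT(FALSE) evaluates FALSE(FALSE)(TRUE), i.e. returns TRUE itself ...
assert NOT(FALSE) is TRUE
# ... but NOT1(FALSE) returns the literal lambda, a distinct object:
assert NOT1(FALSE) is not TRUE
# Behavioural test instead: apply the boolean to two markers.
assert NOT1(FALSE)("then")("else") == "then"
```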
|
<python><lambda>
|
2022-12-25 05:11:11
| 3
| 462
|
acortis
|
74,911,598
| 9,707,286
|
python compare strings return difference
|
<p>Consider this sample data:</p>
<pre><code>str_lst = ['abcdefg','abcdefghi']
</code></pre>
<p>I am trying to write a function that will compare the two strings in this list and return the difference, in this case <code>'hi'</code>.</p>
<p>This attempt failed and simply returned both strings.</p>
<pre><code>def difference(string1, string2):
    # Split both strings into list items
    string1 = string1.split()
    string2 = string2.split()

    A = set(string1)  # Store all string1 list items in set A
    B = set(string2)  # Store all string2 list items in set B

    str_diff = A.symmetric_difference(B)
    # isEmpty = (len(str_diff) == 0)
    return str_diff
</code></pre>
<p>There are several SO questions claiming to seek this, but they simply return a list of the letters that differ between two strings, whereas in my case the strings have many identical characters at the start and I only want the characters near the end that differ between the two.</p>
<p>Ideas of how to reliably accomplish this? My exact situation would be a list of very similar strings, let's say 10 of them, in which I want to use the first item in the list and compare it against all the others one after the other, placing those differences (i.e. small substrings) into a list for collection.</p>
<p>I appreciate you taking the time to check out my question.</p>
<p>Some hypos:</p>
<p>The strings in my dataset would all have initial characters identical, think, directory paths:</p>
<pre><code>sample_lst = ['c:/universal/bin/library/file_choice1.zip',
'c:/universal/bin/library/file_zebra1.doc',
'c:/universal/bin/library/file_alpha1.xlsx']
</code></pre>
<p>Running the ideal function on this list would yield a list with the following strings:</p>
<pre><code>result = ['choice1.zip', 'zebra1.doc', 'alpha1.xlsx']
</code></pre>
<p>Thus, these are the strings that remain when you remove the duplicate characters at the start of all three list items in <code>sample_lst</code>.</p>
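<p>Since every string shares the same leading characters, the character-wise common prefix can simply be stripped; <code>os.path.commonprefix</code>, despite its name, works character by character on arbitrary strings. A sketch:</p>

```python
import os.path

sample_lst = ['c:/universal/bin/library/file_choice1.zip',
              'c:/universal/bin/library/file_zebra1.doc',
              'c:/universal/bin/library/file_alpha1.xlsx']

# Character-wise (not path-component-wise) longest common prefix.
prefix = os.path.commonprefix(sample_lst)
result = [s[len(prefix):] for s in sample_lst]
```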
|
<python><string><difference>
|
2022-12-25 05:08:05
| 2
| 747
|
John Taylor
|
74,911,597
| 9,050,514
|
Garbage collection is not happening properly when using boto3
|
<p>I'm just running code that listens for SQS messages in a loop n times. After each iteration I call <code>gc.collect()</code>, but it returns some unreachable objects, and I'm also checking <code>gc.garbage</code> for the objects not collected by <code>gc</code>; this list also keeps increasing with each iteration.</p>
<p>Sample Code:</p>
<pre><code>import os
import gc

import boto3
import psutil

gc.set_debug(gc.DEBUG_SAVEALL | gc.DEBUG_UNCOLLECTABLE)

def get_memory_usage():
    return psutil.Process(os.getpid()).memory_info().rss // 1024 ** 2

def test():
    queue_url = 'https://sqs.us-east-2.amazonaws.com/123/test.fifo'
    sqs = boto3.client('sqs')
    for i in range(250):
        message = sqs.receive_message(QueueUrl=queue_url)
        if message.get('Messages'):
            recept_handle = message['Messages'][0]['ReceiptHandle']
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=recept_handle)
        print(f'Iteration - {i + 1} Unreachable Objects: {gc.collect()} and length: {len(gc.garbage)}')

print(f'Memory usage Before: {get_memory_usage()}mb')
test()
print(f'==================Unreachable Objects: {gc.collect()}==================')
print(len(gc.garbage))
print(f'Memory usage After: {get_memory_usage()}mb')
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Memory usage Before: 27mb
Iteration - 1 Unreachable Objects: 449 and length: 554
Iteration - 2 Unreachable Objects: 3 and length: 557
Iteration - 3 Unreachable Objects: 3 and length: 560
Iteration - 4 Unreachable Objects: 3 and length: 563
Iteration - 5 Unreachable Objects: 3 and length: 566
Iteration - 6 Unreachable Objects: 3 and length: 569
...
(iterations 7 through 248 omitted: each reports "Unreachable Objects: 3" and the length grows by 3 per iteration)
Iteration - 249 Unreachable Objects: 3 and length: 1298
Iteration - 250 Unreachable Objects: 3 and length: 1301
==================Unreachable Objects: 233==================
1534
Memory usage After: 37mb
</code></pre>
<p>This is just sample code; in my actual use case I'm running the code in an infinite loop in a separate thread, so memory keeps accumulating.</p>
<p>Please help me understand the issue here: is it normal, or is something wrong with my code?</p>
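<p>The growth appears to be caused by the debug flags rather than by boto3: <code>gc.DEBUG_SAVEALL</code> explicitly tells the collector to <em>keep</em> every unreachable object in <code>gc.garbage</code> instead of freeing it, so that list grows forever by design, and a handful of new cyclic objects per API call is normal. A minimal demonstration without boto3:</p>

```python
import gc

gc.set_debug(gc.DEBUG_SAVEALL)

# Create one unreachable reference cycle, as library calls routinely do.
cycle = []
cycle.append(cycle)
del cycle

gc.collect()
n_saved = len(gc.garbage)      # the cycle was *saved*, not freed

gc.set_debug(0)                # turn the debug behaviour off ...
gc.garbage.clear()             # ... and release the saved objects
gc.collect()
```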
|
<python><boto3><amazon-sqs>
|
2022-12-25 05:07:50
| 1
| 9,077
|
deadshot
|
74,911,596
| 2,009,441
|
How do I fill in missing numbers in a predictable stepped list?
|
<p>I have a list of the 3–5 high and low tide events throughout a 24 hour period, placed at the appropriate index based on their timestamp:</p>
<pre class="lang-py prettyprint-override"><code>tides = [None, None, None, (0.07, 'low'), None, None, None, None, None, None, (2.14, 'high'), None, None, None, None, None, (0.32, 'low'), None, None, None, None, None, (1.34, 'high'), None]
</code></pre>
<p><em>For instance: that <code>(0.07, 'low')</code> value is at index <code>3</code> because it occurs at around 3am.</em></p>
<p><strong>I'd like to replace the <code>None</code> values with stepped values between known values.</strong></p>
<p>I know how to do this manually:</p>
<pre class="lang-py prettyprint-override"><code>difference = tides[10][0] - tides[3][0] # 2.07
steps = 10 - 3 # 7
increment = difference / steps # 0.2957142857
# Add to each list item
tides[4] = (tides[3][0] + (increment * 1), '')
tides[5] = (tides[3][0] + (increment * 2), '')
tides[6] = (tides[3][0] + (increment * 3), '')
# etc...
</code></pre>
<p><strong>...but how would I approach this programmatically?</strong></p>
<p>This approach should:</p>
<ol>
<li>Take into account the 'direction' (up or down) of the wave and make a negative or positive step value accordingly</li>
<li>Handle a changing number of and time of each tide event each day (therefore varying amounts and indexes of known list items)</li>
<li>Fill in the unknown first and last values by 'reflecting' the adjacent known stepped value (to complete the curve)</li>
</ol>
<h2>Failed attempts</h2>
<p>My attempts revolve around looping through the above list and trying to define the steps:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(24):
if tides[i] != None:
# Known value
else:
# Missing value
# Get 'out' and calculate steps from known neighbours
</code></pre>
<p>I get stuck at that last part: understanding how to be 'within' an index in a loop whilst also finding its nearest known neighbours and calculating steps.</p>
<p>I have also tried making a separate 'known' list (which includes the indexes) to compare against, that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>tidesKnown = [(3, 0.07, 'low'), (10, 2.14, 'high'), (16, 0.32, 'low'), (22, 1.34, 'high')]
</code></pre>
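For illustration (my own sketch, not part of the original question), one way to approach this programmatically is to collect the indexes of the known events, linearly interpolate between each adjacent pair, and then reflect the curve around the first and last known events to fill the ends:

```python
tides = [None, None, None, (0.07, 'low'), None, None, None, None, None, None,
         (2.14, 'high'), None, None, None, None, None, (0.32, 'low'), None,
         None, None, None, None, (1.34, 'high'), None]

# Indexes of the known tide events (handles any count/position per day)
known = [i for i, t in enumerate(tides) if t is not None]

# Linearly interpolate between each pair of known events
for a, b in zip(known, known[1:]):
    step = (tides[b][0] - tides[a][0]) / (b - a)  # negative on a falling tide
    for i in range(a + 1, b):
        tides[i] = (round(tides[a][0] + step * (i - a), 2), '')

# Reflect around the first/last known events to complete the curve; this
# assumes the gap at each end is no wider than the filled stretch beside it
first, last = known[0], known[-1]
for i in range(first - 1, -1, -1):
    tides[i] = tides[2 * first - i]
for i in range(last + 1, len(tides)):
    tides[i] = tides[2 * last - i]
```

The direction of the wave falls out of the signed `step`, so no explicit up/down handling is needed.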
|
<python>
|
2022-12-25 05:07:37
| 1
| 515
|
Danny
|
74,911,498
| 7,778,016
|
Unable to get image src attribute in python using selenium
|
<p>When I try to get the <code>src</code> it returns <code>None</code>. The code should locate the image by its class name and get the attribute.</p>
<pre><code>import selenium.webdriver as webdriver
import time
from selenium.webdriver.common.by import By
driver = webdriver.Firefox()
url = ('https://www.instagram.com/cats')
driver.get(url)
time.sleep(3)
imgs = driver.find_element(By.CLASS_NAME, "_aagv").get_attribute("src")
print(imgs)
driver.quit()
</code></pre>
<p><a href="https://i.sstatic.net/BycIJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BycIJ.png" alt="enter image description here" /></a></p>
<p>I tried to use it with a for loop as well, but the results were the same: <code>None</code>. Any suggestions how to get it working?</p>
|
<python><selenium>
|
2022-12-25 04:23:51
| 2
| 1,012
|
P_n
|
74,911,490
| 13,489,398
|
Sharing Schema across different services
|
<p>I have an ETL pipeline that extracts data from several sources and stores it within a database table (which has a fixed schema).</p>
<p>I also have a separate FASTAPI service that allows me to query the database through a REST endpoint, which is called to display data on the frontend (React TS).</p>
<p>The issue now is that my ETL Pipeline, FASTAPI service, and frontend all have a separate version of the schema, and in the case where the data schema needs to be changed, this change has to be done to the schema specifications on all 3 services.</p>
<p>I have thought about creating a python package that contains this schema, but this can only be shared between the services that use Python, and my frontend still has to keep its own version of the schema.</p>
<p>Is there some sort of "schema service" that I should have? What can I do to reduce this coupling?</p>
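One hedged option (my own sketch, with hypothetical field names): keep a single language-neutral JSON Schema document as the source of truth. The ETL pipeline and FastAPI service load it directly, and the TypeScript frontend can generate its types from the same file with a codegen tool, so a schema change is made in exactly one place:

```python
import json

# A single shared schema file (hypothetical fields) that every service reads;
# in practice this would live in its own repo/package and be loaded from disk
SCHEMA = {
    "title": "Record",
    "type": "object",
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
    },
    "required": ["id", "name"],
}

def required_fields(schema: dict) -> list:
    """Return the required field names declared by the shared schema."""
    return schema.get("required", [])

serialized = json.dumps(SCHEMA)   # what would be written to the shared file
loaded = json.loads(serialized)   # what each consuming service would load
```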
|
<python><database><architecture>
|
2022-12-25 04:20:11
| 2
| 341
|
ZooPanda
|
74,911,467
| 4,002,633
|
Customising Categorical axis labels in Bokeh 3 (showing every nth one), or otherwise labeling duration bins sensibly
|
<p>I have Bokeh histogram plots showing the number of events in duration bins. Basically we're measuring visit durations to a site and they are typically a few minutes, so we've divided the data into 30 second bins labelled like "0:00-0:30", "0:30-1:00", "1:00-1:30" ... etc.</p>
<p>Unfortunately all the labels overlap. I've found rumoured tips on how to plot only some of the labels (these are categories, but progressive ones, so it's OK to skip some, like every other one for example).</p>
<p>Now I have found this asked before, numerous times, but the answers all apply to Bokeh versions pre 1.0 and link to pages that are dated and offer solutions that don't work in Bokeh 3. For example:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/49172201/how-to-show-only-evey-nth-categorical-tickers-in-bokeh">How to show only evey nth categorical tickers in Bokeh</a></li>
<li><a href="https://stackoverflow.com/questions/49663952/how-to-specify-n-th-ticker-for-bokeh-plot-from-python-side-where-n-is-the-numbe">How to specify n-th ticker for bokeh plot from python side, where n is the number of tickers</a></li>
<li><a href="https://stackoverflow.com/questions/34949298/python-bokeh-show-only-every-second-categorical-ticker">Python bokeh show only every second categorical ticker</a></li>
</ul>
<p>They tend to link to: <a href="http://bokeh.pydata.org/en/latest/docs/user_guide/extensions.html" rel="nofollow noreferrer">http://bokeh.pydata.org/en/latest/docs/user_guide/extensions.html</a> which is not relevant any more.</p>
<p>So I wonder what the Bokeh 3 solution is? Or perhaps there is a non-categorical preference for a histogram binned by durations? Or time of day (I have those too), or month of year, I have those too ... all plotting visit counts in different type of bins. All currently using categorical x values.</p>
<p>As is the wont of our times I tried very hard to solve this with ChatGPT, to no avail, so I asked it write us a poem:</p>
<pre><code>Oh Bokeh, why must thou be so tough?
With categorical data, we've had enough.
We long to see just every second tick,
But alas, it seems the answer's too quick.
We've tried and tried, but all in vain,
To show just some ticks and hide the rest again.
We've searched high and low, but to no avail,
It seems that this feature is simply unavailable.
We've used CustomJS, tried lists and such,
But still the solution is just out of touch.
It's like we're stuck in a never-ending fight,
With Bokeh, the plotting library of might.
But despite our frustration, we must say,
Bokeh's beauty and power cannot be swayed.
So we'll keep trying, keep pushing on,
Until we find a way to make it dawn.
</code></pre>
|
<python><histogram><categories><bokeh>
|
2022-12-25 04:12:15
| 0
| 2,192
|
Bernd Wechner
|
74,911,401
| 343,215
|
Calculate time difference as decimal in a difference matrix
|
<p>I'm analyzing timecard data and comparing employee's clockin/out times to each other. I'm exploring the data using <a href="https://stackoverflow.com/a/46266707/343215">a difference matrix in a DataFrame</a>. How do I convert the day, hour timedelta to decimal, or even just a sensible +/- without the <code>-1 days +23:40:00</code> weirdness?</p>
<pre><code>employees = [('GILL', datetime(2022,12,1,6,40,0), datetime(2022,12,1,14,30,0)),
('BOB', datetime(2022,12,1,6,0,0), datetime(2022,12,1,14,10,0)),
('TOBY', datetime(2022,12,1,14,0,0), datetime(2022,12,1,22,30,0))]
labels = ['name', 'clockin', 'clockout']
df = pd.DataFrame.from_records(employees, columns=labels)
</code></pre>
<p>and my difference matrix is constructed with these two lines:</p>
<pre><code>arr = (df['clockin'].values - df['clockin'].values[:, None])
pd.concat((df['name'], pd.DataFrame(arr, columns=df['name'])), axis=1)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>name</th>
<th>GILL</th>
<th>BOB</th>
<th>TOBY</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>GILL</td>
<td>0 days 00:00:00</td>
<td>-1 days +23:20:00</td>
<td>0 days 07:20:00</td>
</tr>
<tr>
<td>1</td>
<td>BOB</td>
<td>0 days 00:40:00</td>
<td>0 days 00:00:00</td>
<td>0 days 08:00:00</td>
</tr>
<tr>
<td>2</td>
<td>TOBY</td>
<td>-1 days +16:40:00</td>
<td>-1 days +16:00:00</td>
<td>0 days 00:00:00</td>
</tr>
</tbody>
</table>
</div>
<p>The trick to get a decimal difference is to use Pandas Datetime assessor's <code>total_seconds()</code> <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.total_seconds.html" rel="nofollow noreferrer">function</a>. But, this has no place in the <code>arr</code> array expression.</p>
<hr />
<p>Here is <code>total_seconds()</code> doing its magic:</p>
<pre><code>df['workhours'] = round((df['clockout'] - df['clockin']).dt.total_seconds() / 60.0 / 60.0, 2)
</code></pre>
<p>I tried an apply on the time columns, but I can't get it to work. This might be the easy answer.</p>
<pre><code>df_in.apply(lambda x: (x.total_seconds() / 60.0 / 60.0), columns=['BOB', 'GILL', 'TOBY'])
</code></pre>
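For what it's worth, a sketch (my own, not from the post) that converts the whole difference matrix to decimal hours in one step: dividing a `timedelta64` array by a one-hour `timedelta64` unit yields plain floats, so `total_seconds()` isn't needed at all:

```python
import numpy as np
import pandas as pd
from datetime import datetime

employees = [('GILL', datetime(2022, 12, 1, 6, 40), datetime(2022, 12, 1, 14, 30)),
             ('BOB', datetime(2022, 12, 1, 6, 0), datetime(2022, 12, 1, 14, 10)),
             ('TOBY', datetime(2022, 12, 1, 14, 0), datetime(2022, 12, 1, 22, 30))]
df = pd.DataFrame.from_records(employees, columns=['name', 'clockin', 'clockout'])

arr = df['clockin'].values - df['clockin'].values[:, None]
# timedelta64 / timedelta64 -> float; one hour as the unit gives decimal hours
hours = arr / np.timedelta64(1, 'h')
matrix = pd.concat((df['name'], pd.DataFrame(hours.round(2), columns=df['name'])),
                   axis=1)
print(matrix)
```

Negative entries come out as plain negative floats (e.g. -0.67), with no `-1 days +23:40:00` weirdness.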
|
<python><pandas>
|
2022-12-25 03:43:04
| 2
| 2,967
|
xtian
|
74,911,393
| 726,802
|
Read page source of window opened in new tab
|
<p><strong>Code for trying to read elements of a window opened in a new tab. I always see the message "not entered".</strong></p>
<pre><code>webd = webdriver.Chrome(service=s, options=options)
url = "some url"
webd.execute_script("window.open('" + url + "','_blank');")
if len(webd.find_elements(By.TAG_NAME, "pre")) > 0:
print("entered")
else:
        print("not entered")
</code></pre>
<p><strong>This part works perfectly fine if opened in the same tab, like below</strong></p>
<pre><code>webd = webdriver.Chrome(service=s, options=options)
url = "some url"
webd.get(url)
if len(webd.find_elements(By.TAG_NAME, "pre")) > 0:
print("entered")
else:
        print("not entered")
</code></pre>
<p>Am I missing something in the former part?</p>
|
<python><selenium><selenium-webdriver><selenium-chromedriver>
|
2022-12-25 03:39:29
| 3
| 10,163
|
Pankaj
|
74,911,283
| 17,464,023
|
Click on SVG image selenium python
|
<p>I am trying to use selenium for clicking on a svg element in order to close a pop up:</p>
<pre><code><div class=" light-ui-pop-up--header">
<svg name="ClosePopUp" class="pop-up--close-btn" width="14" height="14" viewBox="0 0 14 14" xmlns="http://www.w3.org/2000/svg"><path d="M14 1.41L12.59 0L7 5.59L1.41 0L0 1.41L5.59 7L0 12.59L1.41 14L7 8.41L12.59 14L14 12.59L8.41 7L14 1.41Z" fill="#ffffff"></path></svg>>
</code></pre>
<p>Following previous answers on stack overflow, I have tried:</p>
<pre><code>wd.find_element_by_xpath('//div[@class="light-ui-pop-up--header"]/*[name()="svg"][@id="Root"]').click()
</code></pre>
<p>and</p>
<pre><code>wd.find_element_by_xpath('//div[@class="light-ui-pop-up--header"]/*[name()="svg"][@name="ClosePopUp"]').click()
</code></pre>
<p>But in both cases, I have got the same error:</p>
<pre><code>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//div[@class="light-ui-pop-up--header"]/*[name()="svg"][@id="Root"]"}
</code></pre>
<p>How could I close the emergent pop-up with selenium python on this page:
<a href="https://data.anbima.com.br/debentures/JTEE11/caracteristicas" rel="nofollow noreferrer">https://data.anbima.com.br/debentures/JTEE11/caracteristicas</a></p>
|
<python><html><selenium><svg>
|
2022-12-25 02:34:53
| 1
| 309
|
Per Mertesacker
|
74,911,243
| 3,508,956
|
How to do broadcasting addition in Keras Functional API?
|
<p>I have the following layer defined in Keras:</p>
<pre class="lang-py prettyprint-override"><code>class TextDec(tf.keras.Model):
def __init__(
self,
n_vocab: int,
n_ctxs: int,
n_states: int,
n_heads: int,
n_layers: int,
):
super(TextDec, self).__init__(name="dec")
self.token_emb = tf.keras.layers.Embedding(
n_vocab, n_states, name="dec-token-emb"
)
self.position_emb = tf.Variable(
np.zeros((n_ctxs, n_states)),
name="dec-position-emb",
dtype=tf.float32,
)
self.add = tf.keras.layers.Add(name="dec-add")
# ...
def call(self, inputs: List[tf.Tensor]):
tokens, audio_features = inputs
offset = 0
x = self.add(
[
self.token_emb(tokens),
tf.slice(self.position_emb, [offset, 0], [tokens.shape[-1], -1]),
]
)
# ...
return x
</code></pre>
<p>When I call this with <code>n_ctxs = 448, n_states = 512</code>, it does not work. I get this:</p>
<blockquote>
<p>ValueError: Cannot merge tensors with different batch sizes. Got tensors with shapes [(5, 1, 512), (1, 512)]</p>
</blockquote>
<p>I also tried using <code>tf.expand_dims</code>, but that led to a different error:</p>
<pre><code>x = self.add([ self.token_emb(tokens), tf.expand_dims(tf.slice(self.position_emb, [offset, 0], [tokens.shape[-1], -1]), 0) ])
</code></pre>
<blockquote>
<p>ValueError: Cannot merge tensors with different batch sizes. Got tensors with shapes [(5, 1, 512), (1, 1, 512)]</p>
</blockquote>
<p>I also tried <code>tf.compat.v1.placeholder_with_default</code> as suggested in <a href="https://stackoverflow.com/questions/55375665/how-to-reintroduce-none-batch-dimension-to-tensor-in-keras-tensorflow?rq=1">this answer</a>, even though it is deprecated:</p>
<pre><code>z = tf.slice(self.position_emb, [offset, 0], [tokens.shape[-1], -1])
z = tf.compat.v1.placeholder_with_default(z, [None, 1, 512])
x = self.add([ self.token_emb(tokens), z ])
</code></pre>
<p>That didn't work either.</p>
<blockquote>
<p>ValueError: Shapes must be equal rank, but are 2 and 3 for '{{node PlaceholderWithDefault}} = PlaceholderWithDefaultdtype=DT_FLOAT, shape=[?,1,512]' with input shapes: [1,512].</p>
</blockquote>
<p>And <code>tf.broadcast_to</code> does not work if the shape contains <code>None</code>.</p>
<p>However, if I change the offending line to this:</p>
<pre><code>x = self.token_emb(tokens) + tf.slice(self.position_emb, [offset, 0], [tokens.shape[-1], -1])
</code></pre>
<p>Then it works. <strong>Why doesn't it work when I use <code>Add</code>, and how can I get around this?</strong></p>
<hr />
<p>The reason I want to use <code>Add</code> is because when I summarize my model, I can see the addition show up as a line like this:</p>
<pre><code>Model: "dec"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
tokens (InputLayer) [(None, 1)] 0 []
dec-token-emb (Embedding) (None, 1, 512) 26554368 ['tokens[0][0]']
tf.__operators__.add (TFOpLamb (None, 1, 512) 0 ['dec-token-emb[0][0]']
da)
</code></pre>
<p>Seeing <code>TFOpLambda</code> makes me concerned that this will prevent me from serializing my model.</p>
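For context (my own illustration, not from the post): the plain `+` succeeds because it follows NumPy-style broadcasting, which aligns trailing dimensions, whereas Keras's `Add` merge layer refuses mismatched batch dimensions. A NumPy sketch of the shape arithmetic that `+` performs:

```python
import numpy as np

tok = np.zeros((5, 1, 512))  # token embeddings: (batch, seq, states)
pos = np.zeros((1, 512))     # sliced positional embeddings: (seq, states)

# Broadcasting aligns trailing dims: (5, 1, 512) + (1, 512) -> (5, 1, 512)
out = tok + pos
print(out.shape)
```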
|
<python><tensorflow><keras>
|
2022-12-25 02:15:43
| 1
| 7,107
|
laptou
|
74,911,033
| 14,287,127
|
How to query in SQLAlchemy[Asyncio] (Sanic)
|
<p>I am new to Sanic and SQLAlchemy, and I am implementing a C2C e-commerce site. I have a many-to-many relationship between orders and products that is joined by an order_products table, with quantity as an extra field on the join table (i.e. order_products).</p>
<pre><code>
class Order(Base):
__tablename__ = "orders"
uuid = Column(String(), primary_key=True)
userId = Column(String(), nullable=False)
status = Column(String(), nullable=False, default="draft")
products = relationship(
"Product", secondary="order_products", back_populates='orders')
order_products = relationship(
"OrderProduct", back_populates="order")
def to_dict(self):
return {"uuid": self.uuid, "userId": self.userId, "status": self.status, "products": [{
"title": product.title,
"price": product.price,
"description": product.description,
"uuid": product.uuid,
"userId": product.userId,
"quantity": product.order_products.quantity
} for product in self.products]}
---
class Product(Base):
__tablename__ = "products"
uuid = Column(String(), primary_key=True)
title = Column(String(), nullable=False)
price = Column(FLOAT(), nullable=False)
description = Column(String(), nullable=False)
userId = Column(String(), nullable=False)
orders = relationship(
"Order", secondary="order_products", back_populates='products', uselist=False)
order_products = relationship(
"OrderProduct", back_populates="product")
---
class OrderProduct(Base):
__tablename__ = "order_products"
id = Column(Integer, primary_key=True)
orderId = Column(String, ForeignKey("orders.uuid"))
productId = Column(String, ForeignKey("products.uuid"))
quantity = Column(Integer)
# Define the relationships to the Order and Product tables
order = relationship("Order", back_populates="order_products")
product = relationship("Product", back_populates="order_products")
</code></pre>
<p>A user here is a buyer as well as seller.</p>
<p>I need to extract all the orders for a seller's product that is being paid for (i.e Order.status == 'complete')and populate the products and quantity of those products.</p>
<pre><code>[
{
"uuid": "order_Id_abc",
"userId": "user_Id_def", ## USER who paid for the order
"status": "complete",
"products": [
{
"title": "One Piece t-shirt",
"price": 900.0,
"description": "Luffy at laugh tail island",
"uuid": "product_Id",
"userId": "current_user_Id", ## Sellers Id
"quantity": 2
},
## End up getting products of other Sellers, which was the part of this order.
]
}
]
</code></pre>
<p>To achieve this I used following query</p>
<pre><code>from sqlalchemy.orm import selectinload
from sqlalchemy.sql.expression import and_
from sqlalchemy.future import select
q = (select(Order)
.join(OrderProduct, Order.uuid == OrderProduct.orderId)
.join(Product, OrderProduct.productId == Product.uuid)
.filter(and_((Order.status == 'complete'), (Product.userId == "current_user_id")))
.options(selectinload(Order.products).selectinload(Product.order_products))
.group_by(Order.uuid)
)
</code></pre>
<p>I get my desired result, but I end up getting other users' products too.</p>
<p>I can't figure out what is wrong here.</p>
<hr />
<p>Thank you.</p>
|
<python><postgresql><sqlalchemy><sanic>
|
2022-12-25 00:38:52
| 1
| 329
|
Shreyas Chorge
|
74,911,025
| 12,214,867
|
How do I put a pair of parentheses on each matched line with Python regex?
|
<p>I'm trying to convert the following code</p>
<pre><code>print "*** Dictionaries"
dictionaries = json.loads(api.getDictionaries())
print dictionaries
...
print(bestMatch)
...
</code></pre>
<p>to</p>
<pre><code>print("*** Dictionaries")
dictionaries = json.loads(api.getDictionaries())
print(dictionaries)
...
print(bestMatch)
...
</code></pre>
<p>that is, to put a pair of parentheses on each <code>print</code> line.</p>
<p>Here is my code</p>
<pre><code>import re
with open('p2code.txt') as f:
lines = [line.rstrip() for line in f]
cmplr = re.compile(r'(?<=print).*')
p3code = []
for line in lines:
p3line = line
m = cmplr.search(line)
if m:
p3line = 'print(' + m.group(0) + ')'
p3code.append(p3line)
with open('p3code.txt', 'w') as f:
for line in p3code:
f.write(f"{line}\n")
</code></pre>
<p>There are two questions related to the code above.</p>
<p>Is there a more elegant way to do the replacement, e.g. <code>cmplr.sub</code>? If yes, how do I do that?</p>
<p>One of the <code>print</code> lines already has the parentheses:</p>
<pre><code>print(bestMatch)
</code></pre>
<p>How do I make my code skip that line, avoiding something like</p>
<pre><code>print((bestMatch))
</code></pre>
<p>The idea/need comes from Cambridge's API doc
<a href="https://dictionary-api.cambridge.org/api/resources#python" rel="nofollow noreferrer">https://dictionary-api.cambridge.org/api/resources#python</a></p>
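A sketch of one possible way (my own, hedged) to do both at once: `re.sub` with a backreference handles the replacement, and a negative lookahead after the whitespace skips lines that already use the parenthesised form:

```python
import re

# "print" + whitespace + anything that does not start with "(",
# so already-converted lines like print(bestMatch) are left untouched
pattern = re.compile(r'^(\s*)print\s+(?!\()(.+)$')

lines = ['print "*** Dictionaries"',
         'dictionaries = json.loads(api.getDictionaries())',
         'print dictionaries',
         'print(bestMatch)']

converted = [pattern.sub(r'\1print(\2)', line) for line in lines]
```

The `(\s*)` group preserves indentation, which matters for nested `print` statements.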
|
<python><regex>
|
2022-12-25 00:35:47
| 1
| 1,097
|
JJJohn
|
74,910,864
| 1,019,250
|
Why is the [:] included in `.removeprefix()` definition?
|
<p>In <a href="https://peps.python.org/pep-0616/" rel="nofollow noreferrer">PEP-616</a>, the specification for <code>removeprefix()</code> includes this code block:</p>
<pre><code>def removeprefix(self: str, prefix: str, /) -> str:
if self.startswith(prefix):
return self[len(prefix):]
else:
return self[:]
</code></pre>
<p>Why does the last line say <code>return self[:]</code>, instead of just <code>return self</code>?</p>
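One commonly cited reason (hedged — the PEP's specification block doesn't spell it out): slicing a `str` subclass returns a plain base `str`, so `self[:]` normalizes the return type, whereas `return self` would leak the subclass. A quick check:

```python
class MyStr(str):
    """A str subclass, used to show what full slicing returns."""

s = MyStr("hello")
copy = s[:]
print(type(copy).__name__)  # slicing a str subclass yields a base str
```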
|
<python><string>
|
2022-12-24 23:39:19
| 1
| 1,429
|
Philip Massey
|
74,910,845
| 3,696,118
|
pyqt QListModelView IconMode highlight to be constant size and add one more line to text?
|
<p>I posted this question below and was told of a possible alternative, <code>QListModelView.IconMode</code>, which I had never heard of before:
<a href="https://stackoverflow.com/questions/74909513/issue-with-filtering-flowlayout-items-in-pyqt5">Issue with filtering FlowLayout items in PyQt5</a></p>
<p>I have found this question below to start picking at this Qt feature:
<a href="https://stackoverflow.com/questions/69474089/how-to-get-item-text-to-wrap-using-a-qlistview-set-to-iconmode-and-model-set-to/">How to get item text to wrap using a QListView set to IconMode and model set to QFileSystemModel</a></p>
<p>Something I realized, though, is that if the label is very short (like two characters), then the selection highlight also gets shrunk to fit that size.</p>
<p><a href="https://i.sstatic.net/TQ1ZP.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TQ1ZP.gif" alt="enter image description here" /></a></p>
<p>I am wondering if there is a way to</p>
<ol>
<li>get the selection highlight to always be the same size for every item</li>
<li>perhaps not like the above question, but just get the text label to display at least one more line?</li>
</ol>
<p>Here is the current code I have for reference (it's basically the same as the code in the above linked question):</p>
<pre><code>import sys
from PyQt5 import QtCore
from PyQt5 import QtGui
from PyQt5 import QtWidgets
class TreeViewDialog(QtWidgets.QDialog):
def __init__(self, parent=None):
super(TreeViewDialog, self).__init__(parent)
self.setMinimumSize(500, 400)
self.create_widgets()
self.create_layout()
def create_widgets(self):
root_path = r"C:\Users\PCUser\Documents\pythonFiles"
self.model = QtWidgets.QFileSystemModel()
self.model.setRootPath(root_path)
self.list_view = QtWidgets.QListView()
self.list_view.setViewMode(QtWidgets.QListView.IconMode)
self.list_view.setResizeMode(QtWidgets.QListView.Adjust)
self.list_view.setFlow(QtWidgets.QListView.LeftToRight)
self.list_view.setMovement(QtWidgets.QListView.Snap)
self.list_view.setModel(self.model)
self.list_view.setRootIndex(self.model.index(root_path))
self.list_view.setGridSize(QtCore.QSize(80, 100))
self.list_view.setUniformItemSizes(True)
self.list_view.setWordWrap(True)
self.list_view.setTextElideMode(QtCore.Qt.ElideRight)
def create_layout(self):
main_layout = QtWidgets.QHBoxLayout(self)
main_layout.setContentsMargins(2, 2, 2, 2)
main_layout.addWidget(self.list_view)
if __name__ == "__main__":
app = QtWidgets.QApplication.instance()
if not app:
app = QtWidgets.QApplication(sys.argv)
tree_view_dialog = TreeViewDialog()
tree_view_dialog.show()
sys.exit(app.exec_())
</code></pre>
|
<python><qt><pyqt><qt5>
|
2022-12-24 23:32:59
| 1
| 353
|
user3696118
|
74,910,809
| 20,102,061
|
Error: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(None, None, None, 224, 224, 3)
|
<p>I am trying to create a CNN model that can detect balloons. When I try to <code>fit()</code> the model, I get this error:
<code>ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(None, None, None, 224, 224, 3)</code></p>
<p>I have tried making some (primarily small) changes, but nothing really worked. I don't know what could cause this, so anything will be helpful.</p>
<h2>My code:</h2>
<pre><code>import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=['accuracy'])
annotations_file = '/content/balloon-data.csv'
annotations = pd.read_csv(annotations_file)
image_paths = []
labels = []
for i, row in annotations.iterrows():
if row['num_balloons'] > 0:
image_path = 'path/to/image/{}'.format(row['fname'])
image_paths.append(image_path)
labels.append(1)
else:
image_path = 'path/to/image/{}'.format(row['fname'])
image_paths.append(image_path)
labels.append(0)
dataset = tf.data.Dataset.from_tensor_slices((image_paths, labels))
def preprocess_image(image_path, label):
image = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, (224, 224))
image = image / 255.0
return image, label
dataset = dataset.map(lambda x, y: preprocess_image(x, y)).batch(64)
train_size = int(0.8 * len(image_paths))
test_size = len(image_paths) - train_size
train_dataset = dataset.take(train_size)
test_dataset = dataset.skip(train_size)
train_dataset = train_dataset.shuffle(1024).batch(32)
test_dataset = test_dataset.batch(32)
train_dataset = train_dataset.shuffle(1024).batch(64)
test_dataset = test_dataset.batch(64)
model.fit(train_dataset, epochs=10, validation_data=test_dataset)
</code></pre>
<h2><strong>Edit</strong></h2>
<p>Changes:</p>
<pre><code>dataset = dataset.map(lambda x, y: preprocess_image(x, y)).batch(64)
#changed to
dataset = dataset.map(preprocess_image)
train_dataset = train_dataset.shuffle(1024).batch(64)
test_dataset = test_dataset.batch(64)
#Were deleted
</code></pre>
|
<python><python-3.x><tensorflow><keras><tf.keras>
|
2022-12-24 23:18:55
| 0
| 402
|
David
|
74,910,798
| 16,319,191
|
Subset rows based on multiple iterative columns in Pandas
|
<p>I need to subset rows of df based on several columns (c1 through c100) which contain strings: subset the rows where any of the 100 columns equals a particular value.
A minimal example with 3 columns:</p>
<pre><code>df = pd.DataFrame({"": [0,1,2,3,4,5,6,7,8],
"c1": ["abc1", "", "dfg", "abc1", "","dfg","ghj","abc1","abc1"],
"c2": ["abc1", "abc1", "dfg", "dfg", "","dfg","","ghj","abc1"],
"c3": ["abc1", "", "dfg", "dfg", "dfg","dfg","abc1","ghj","abc1"]})
</code></pre>
<pre><code> c1 c2 c3
0 0 abc1 abc1 abc1
1 1 abc1
2 2 dfg dfg dfg
3 3 abc1 dfg dfg
4 4 dfg
5 5 dfg dfg dfg
6 6 ghj abc1
7 7 abc1 ghj ghj
8 8 abc1 abc1 abc1
</code></pre>
<p>The <code>loc</code> command gives us the answer, but there are too many 'or' conditions to write. I am looking for something iterative over 100 columns. Also, what if I want to check for a list of strings instead of just one string? For example <code>['abc1', 'ghj']</code> instead of just <code>'abc1'</code>.</p>
<pre><code>df.loc[(df['c1'] == "abc1") | (df['c2'] == "abc1") | (df['c3'] == "abc1")]
</code></pre>
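One possible approach (my own illustration, not from the post): build the column list programmatically and combine `isin(...)` with `any(axis=1)`, which replaces the chained 'or' conditions and handles a list of target strings for free:

```python
import pandas as pd

df = pd.DataFrame({"": [0, 1, 2, 3, 4, 5, 6, 7, 8],
                   "c1": ["abc1", "", "dfg", "abc1", "", "dfg", "ghj", "abc1", "abc1"],
                   "c2": ["abc1", "abc1", "dfg", "dfg", "", "dfg", "", "ghj", "abc1"],
                   "c3": ["abc1", "", "dfg", "dfg", "dfg", "dfg", "abc1", "ghj", "abc1"]})

cols = [c for c in df.columns if c.startswith("c")]  # generalises to c1..c100
targets = ["abc1", "ghj"]  # a list of values works the same way as a single one
# Row-wise OR across all selected columns in one vectorised step
subset = df.loc[df[cols].isin(targets).any(axis=1)]
```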
|
<python><pandas><subset>
|
2022-12-24 23:15:23
| 1
| 392
|
AAA
|
74,910,632
| 3,826,115
|
Read compressed GRIB file from URL into xarray dataframe entirely in memory in Python
|
<p>I am trying to read the gzipped grib2 files at this URL: <a href="https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/" rel="nofollow noreferrer">https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/</a></p>
<p>I want to read the grib file into an xarray DataFrame. I know I could write a script to download the file to disk, decompress it, and read it in, but ideally I want to be able to do this entirely in-memory.</p>
<p>I feel like I should be able to do this with some combination of the urllib and gzip packages, but I can't quite figure it out.</p>
<p>I have the following code so far:</p>
<pre><code>import urllib
import io
import gzip
URL = 'https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/SeamlessHSR_00.00_20221224-000000.grib2.gz'
response = urllib.request.urlopen(URL)
compressed_file = io.BytesIO(response.read())
decompressed_file = gzip.GzipFile(fileobj=compressed_file)
</code></pre>
<p>But I can't figure out how to read <code>decompressed_file</code> into xarray.</p>
<p>Bonus points if you can figure out how to <code>open_mfdataset</code> on all of the URLs there at once.</p>
|
<python><gzip><urllib><python-xarray><grib>
|
2022-12-24 22:24:43
| 1
| 1,533
|
hm8
|
74,910,446
| 3,247,006
|
Is there "non-lazy mode" or "strict mode" for "querysets" in Django?
|
<p>When I use <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#select-for-update" rel="nofollow noreferrer"><strong>select_for_update()</strong></a> and <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet.update" rel="nofollow noreferrer"><strong>update()</strong></a> of <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet" rel="nofollow noreferrer"><strong>a queryset</strong></a> together as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from django.db import transaction
from .models import Person
from django.http import HttpResponse
@transaction.atomic
def test(request):
# Here # Here
print(Person.objects.select_for_update().filter(id=1).update(name="Tom"))
return HttpResponse("Test")
</code></pre>
<p>Only <strong><code>UPDATE</code> query</strong> is run without <strong><code>SELECT FOR UPDATE</code> query</strong> as shown below. *I use <strong>PostgreSQL</strong> and these logs below are <strong>the queries of PostgreSQL</strong> and you can check <a href="https://stackoverflow.com/questions/54780698/postgresql-database-log-transaction/73432601#73432601"><strong>on PostgreSQL, how to log queries with transaction queries such as "BEGIN" and "COMMIT"</strong></a>:</p>
<p><a href="https://i.sstatic.net/YYhjc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YYhjc.png" alt="enter image description here" /></a></p>
<p>But, when I use <code>select_for_update()</code> and <code>update()</code> of <strong>a queryset</strong> separately then put <code>print(qs)</code> between them as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from django.db import transaction
from .models import Person
from django.http import HttpResponse
@transaction.atomic
def test(request):
qs = Person.objects.select_for_update().filter(id=1)
print(qs) # Here
qs.update(name="Tom")
return HttpResponse("Test")
</code></pre>
<p><strong><code>SELECT FOR UPDATE</code> and <code>UPDATE</code> queries</strong> are run as shown below:</p>
<p><a href="https://i.sstatic.net/KUzMR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KUzMR.png" alt="enter image description here" /></a></p>
<p>Actually, this example above occurs because <strong>QuerySets are lazy</strong> according to <a href="https://docs.djangoproject.com/en/4.1/topics/db/queries/#querysets-are-lazy" rel="nofollow noreferrer"><strong>the Django documentation</strong></a> below:</p>
<blockquote>
<p>QuerySets are lazy – the act of creating a QuerySet doesn’t involve
any database activity. You can stack filters together all day long,
and Django won’t actually run the query until the QuerySet is
evaluated.</p>
</blockquote>
<p>But, this is not simple for me. I just want <strong>normal database behavior</strong>.</p>
<p>Now, is there <strong>non-lazy mode</strong> or <strong>strict mode</strong> for <strong>querysets</strong> in Django?</p>
|
<python><python-3.x><django><django-queryset><lazy-evaluation>
|
2022-12-24 21:37:06
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
74,910,408
| 1,609,514
|
Sympy is not doing the substitution of a symbol with a value when it has a specified assumption
|
<p>I'm using the subs method to replace certain parameters in an expression with values prior to solving the equation.</p>
<p>The following simple example works fine:</p>
<pre><code>from sympy import Symbol
Q = Symbol("Q")
exp1 = Q + 1
print(exp1.subs({'Q': 1})) # prints 2
</code></pre>
<p>However, if the symbol has an assumption such as <code>real</code> or <code>positive</code> specified this does not work:</p>
<pre><code>Q = Symbol("Q", positive=True)
exp1 = Q + 1
print(exp1.subs({'Q': 1})) # prints Q + 1
</code></pre>
<p>Why is this and what am I doing wrong?</p>
|
<python><sympy><substitution><symbolic-math>
|
2022-12-24 21:26:07
| 1
| 11,755
|
Bill
|
74,910,342
| 4,133,384
|
aws python cdk: how to solve constructs version conflict (trying to create httpapi)
|
<p>Goal (quite simple, or at least I thought so):</p>
<ul>
<li>Returning HTML-Content from a lambda for a web browser.</li>
<li>Defining it with python aws cdk</li>
</ul>
<p>As far as I understand it, I've got to put my lambda into an <code>HttpLambdaIntegration</code> which I can use in a route added to my <code>HttpApi</code></p>
<p>Here is the example:</p>
<pre><code>from aws_cdk.aws_apigatewayv2_integrations import HttpLambdaIntegration
# books_default_fn: lambda.Function
books_integration = HttpLambdaIntegration("BooksIntegration", books_default_fn)
http_api = apigwv2.HttpApi(self, "HttpApi")
http_api.add_routes(
path="/books",
methods=[apigwv2.HttpMethod.GET],
integration=books_integration
)
</code></pre>
<p><a href="https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_apigatewayv2_integrations/README.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_apigatewayv2_integrations/README.html</a></p>
<p>So far I found:</p>
<p><code>HttpLambdaIntegration</code> here: <a href="https://pypi.org/project/aws-cdk.aws-apigatewayv2-integrations/" rel="nofollow noreferrer">https://pypi.org/project/aws-cdk.aws-apigatewayv2-integrations/</a></p>
<p><code>HttpApi</code> here: <a href="https://pypi.org/project/aws-cdk.aws-apigatewayv2-alpha/#defining-http-apis" rel="nofollow noreferrer">https://pypi.org/project/aws-cdk.aws-apigatewayv2-alpha/#defining-http-apis</a></p>
<p>But I can't install both because of a version clash:</p>
<blockquote>
<p>The conflict is caused by:</p>
<p>aws-cdk-lib 2.56.1 depends on constructs<11.0.0 and >=10.0.0</p>
<p>aws-cdk-aws-apigatewayv2-alpha 2.56.1a0 depends on constructs<11.0.0
and >=10.0.0</p>
<p>aws-cdk-aws-apigatewayv2-integrations 1.184.1 depends on
constructs<4.0.0 and >=3.3.69</p>
</blockquote>
<p>Am I using the wrong packages? Or am I on the wrong track anyway? The version mash-up is quite confusing on the whole...</p>
<p><strong>UPDATE (A few days later)</strong></p>
<p>Ok, good news: I got it running ... but I switched to Typescript, following this example: <a href="https://bobbyhadz.com/blog/aws-cdk-http-api-apigateway-v2-example" rel="nofollow noreferrer">https://bobbyhadz.com/blog/aws-cdk-http-api-apigateway-v2-example</a></p>
<p>And I still had to install using npm with <code>--force</code>, which always makes me more or less unhappy, but at least I could and it works.</p>
<p>I've now come to the conclusion that I can't really recommend the Python AWS CDK. This is at least the third time I've tried it, and at least the third time I've run into trouble ... There are just a lot more TypeScript examples!</p>
|
<python><amazon-web-services>
|
2022-12-24 21:10:19
| 0
| 996
|
evilive
|
74,910,304
| 6,271,800
|
How to add GCP bucket to the Firestore with Python SDK?
|
<p>I am trying to upload a file to a custom Google Cloud Storage bucket with a Flutter web app.</p>
<pre><code>final _storage = FirebaseStorage.instanceFor(bucket: bucketName);
Reference documentRef = _storage.ref().child(filename);
await documentRef.putData(await data);
</code></pre>
<p>The code works fine for a default bucket but fails with a new custom GCP bucket.</p>
<pre><code>Error: FirebaseError: Firebase Storage: An unknown error occurred, please check the error payload for server response. (storage/unknown)
</code></pre>
<p>The HTTP POST response causing this error says:</p>
<pre><code>{
"error": {
"code": 400,
"message": "Your bucket has not been set up properly for Firebase Storage. Please visit 'https://console.firebase.google.com/project/{my_project_name}/storage/rules' to set up security rules."
}
}
</code></pre>
<p>So apparently, I need to add a new bucket to Firestore and set up access rules before I can upload the file there.</p>
<p><a href="https://i.sstatic.net/Z1Ydl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z1Ydl.png" alt="enter image description here" /></a></p>
<p>Since these buckets are created automatically by my backend microservice, is there a way to add them to Firestore and set up the rules with Python SDK? Alternatively, is there any other way to upload data to GCP buckets with Flutter besides Firebase Storage?</p>
<p>Thank you.</p>
|
<python><flutter><firebase><google-cloud-platform>
|
2022-12-24 21:01:23
| 1
| 611
|
Sergii
|
74,910,225
| 13,828,837
|
Implementation of an algorithm for simultaneous diagonalization
|
<p>I am trying to write an implementation of an algorithm for the simultaneous diagonalization of two matrices (which are assumed to be simultaneously diagonalizable). However, the algorithm does not seem to converge. The algorithm is described in SIAM J. Matrix Anal. Appl. <strong>14</strong>, 927 (1993).</p>
<p>Here is the first part of my code to set up a test case:</p>
<pre><code>import numpy as np
import numpy.linalg as lin
from scipy.optimize import minimize
N = 3
# Unitary example matrix
X = np.array([
    [-0.54717736-0.43779416j, 0.26046313+0.11082439j, 0.56151027-0.33692186j],
    [-0.33452046-0.37890784j, -0.40907097-0.70730291j, -0.15344477+0.23100467j],
    [-0.31253864-0.39468687j, 0.05342909+0.49940543j, -0.70062586+0.05835082j]
])
# Generate eigenvalues
LA = np.diag(np.arange(0, N))
LB = np.diag(np.arange(N, 2*N))
# Generate simultaneously diagonalizable matrices
A = X @ LA @ np.conj(X).T
B = X @ LB @ np.conj(X).T
</code></pre>
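<p>A quick way to confirm that this kind of construction is sound — with <code>X</code> unitary and real spectra, <code>A</code> and <code>B</code> should come out Hermitian and commuting — is a self-contained check (using a random unitary from a QR factorization rather than the hard-coded matrix above):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# A random complex unitary via QR factorization (assumption: any unitary works here)
X, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
A = X @ np.diag(np.arange(0, 3)).astype(complex) @ X.conj().T
B = X @ np.diag(np.arange(3, 6)).astype(complex) @ X.conj().T

print(np.allclose(A @ B, B @ A))   # the matrices commute
print(np.allclose(A, A.conj().T))  # and are Hermitian
```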
<p>This should generate two 3x3 matrices which are simultaneously diagonalizable, since they are constructed this way via <code>X</code>. The following code block then defines a few helper functions:</p>
<pre><code>def off2(A, B):
"""Defines the distance from the matrices from
their diagonal form.
"""
C = np.abs(A) ** 2 + np.abs(B) ** 2
diag_idx = np.diag_indices(N)
C[diag_idx] = 0
return np.sum(C)
def Rijcs(i, j, c, s):
"""Function R(i, j, c, s) from the paper, see
Eq. (1) therein. Used for plane rotations in
the plane ij.
"""
res = np.eye(N, dtype=complex)
res[i, i] = c
res[i, j] = -np.conj(s)
res[j, i] = s
res[j, j] = np.conj(c)
return res
def cs(theta, phi):
"""Parametrization for c and s."""
c = np.cos(theta)
s = np.exp(1j * phi) * np.sin(theta)
return c, s
</code></pre>
<p>With these definitions, the algorithm can be implemented:</p>
<pre><code>tol = 1e-10
Q = np.eye(N, dtype=complex)
while True:
off = off2(A, B)
# Print statement for debugging purposes
print(off)
# Terminate if the result is converged
if off <= tol * (lin.norm(A, "fro") + lin.norm(B, "fro")):
break
for i in range(N):
for j in range(i + 1, N):
def fij(c, s):
aij = A[i, j]
aji = A[j, i]
aii = A[i, i]
ajj = A[j, j]
bij = B[i, j]
bji = B[j, i]
bii = B[i, i]
bjj = B[j, j]
x = np.array(
[
[np.conj(aij), np.conj(aii - ajj), -np.conj(aji)],
[aji, (aii - ajj), -aij ],
[np.conj(bij), np.conj(bii - bjj), -np.conj(bji)],
[bji, (bii - bjj), -bij ]
]
)
y = np.array(
[
[c ** 2],
[c * s],
[s ** 2]
]
)
return lin.norm(x @ y, 2)
# 5
result = minimize(
lambda x: fij(*cs(x[0], x[1])),
x0=(0, 0),
bounds=(
(-0.25 * np.pi, 0.25 * np.pi),
(-np.pi, np.pi)
),
)
theta, phi = result['x']
c, s = cs(theta, phi)
# 6
R = Rijcs(i, j, c, s)
# 7
Q = Q @ R
A = np.conj(R).T @ A @ R
B = np.conj(R).T @ B @ R
</code></pre>
<p>As you can observe from the print statement, the "distance" of <code>A</code> and <code>B</code> from diagonal form does not really converge. Instead, the values printed range from 0.5 up to 3 and oscillate up and down. Is there a bug in this code and if so, where exactly is it?</p>
|
<python><matrix><linear-algebra><numeric><diagonal>
|
2022-12-24 20:43:22
| 0
| 344
|
schade96
|
74,910,223
| 2,584,772
|
Trying to connect from a docker container to docker host running Django in development mode gets connection refused
|
<p>I'm running a django development server in my Debian GNU/Linux host machine with <code>python manage.py runserver</code>.</p>
<p>After running a Debian docker container with</p>
<p><code>$ docker run -it --add-host=host.docker.internal:host-gateway debian bash</code></p>
<p>I expected I could reach the server listening on http://localhost:8000 on the host with</p>
<p><code>curl http://host.docker.internal:8000</code></p>
<p>but I get</p>
<p><code>Failed to connect to host.docker.internal port 8000: Connection refused</code></p>
<p>Is it something related to running Django with <code>python manage.py runserver</code>?</p>
|
<python><django><docker>
|
2022-12-24 20:42:23
| 0
| 1,684
|
Caco
|
74,910,208
| 2,049,312
|
How do I patch a python @classmethod to call my side_effect method?
|
<p>The following code shows the problem.</p>
<p>I can successfully patch object instance and static methods of this <code>SomeClass</code></p>
<p>However, I can't seem to be able to patch classmethods.</p>
<p>Help much appreciated!</p>
<pre><code>from contextlib import ExitStack
from unittest.mock import patch
class SomeClass:
    def instance_method(self):
        print("instance_method")

    @staticmethod
    def static_method():
        print("static_method")

    @classmethod
    def class_method(cls):
        print("class_method")

# --- desired patch side effect methods ----
def instance_method(self):
    print("mocked instance_method")

def static_method():
    print("mocked static_method")

def class_method(cls):
    print("mocked class_method")

# --- Test ---
obj = SomeClass()

with ExitStack() as stack:
    stack.enter_context(
        patch.object(
            SomeClass,
            "instance_method",
            side_effect=instance_method,
            autospec=True
        )
    )
    stack.enter_context(
        patch.object(
            SomeClass,
            "static_method",
            side_effect=static_method,
            # autospec=True,
        )
    )
    stack.enter_context(
        patch.object(
            SomeClass,
            "class_method",
            side_effect=class_method,
            # autospec=True
        )
    )

    # These work
    obj.instance_method()
    obj.static_method()

    # This fails with TypeError: class_method() missing 1 required positional argument: 'cls'
    obj.class_method()
</code></pre>
|
<python><mocking><patch>
|
2022-12-24 20:40:41
| 1
| 457
|
OldSchool
|
74,909,716
| 4,115,031
|
How can I get pdb's debugger working while running uvicorn using uvicorn.run()?
|
<p>I'm trying to debug a FastAPI application and came across <a href="https://jaketrent.com/post/debug-fastapi-pdb/" rel="nofollow noreferrer">this blog post</a> describing how to run uvicorn from the command line with a <code>--debug</code> flag to get <code>import pdb; pdb.set_trace()</code> to work.</p>
<p>But I'm running my FastAPI app with <code>uvicorn.run()</code> from within a <code>main.py</code> file, and it seems <code>uvicorn.run()</code> doesn't have a <code>debug</code> keyword argument.</p>
<p>So how can I get <code>pdb.set_trace()</code> to work?</p>
|
<python><fastapi>
|
2022-12-24 18:53:08
| 0
| 12,570
|
Nathan Wailes
|
74,909,513
| 3,696,118
|
Issue with filtering FlowLayout items in PyQt5
|
<p>I am trying to get a filter feature working for this browser-type interface I wrote, but I am facing an issue where, when I filter, the icon layout looks all wrong.</p>
<p>Here is what it looks like normally:
<a href="https://i.sstatic.net/lZusd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lZusd.png" alt="looks normal enough" /></a></p>
<p>and here is what it looks like with a filter applied:
<a href="https://i.sstatic.net/yLwgf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yLwgf.png" alt="as you can see there are weird spacings in between the items now" /></a></p>
<p>I am sure this has something to do with me showing/hiding these widgets, but I don't really know how else to do this without destroying the C++ object each time.</p>
<p>Here is the base code I am working with:</p>
<pre><code>import sys, re, os
from PyQt5 import QtCore, QtGui, QtWidgets
PATTERN = re.compile(r'(\d+)(\.)([0-9a-zA-Z]+$)')
HTML_STRING = '<p align=\"center\">{body}</p>'
class FlowLayout(QtWidgets.QLayout):
    heightChanged = QtCore.pyqtSignal(int)

    def __init__(self, parent=None):
        super(FlowLayout, self).__init__(parent)
        if parent is not None:
            self.setContentsMargins(QtCore.QMargins(0, 0, 0, 0))
        self._item_list = []

    def __del__(self):
        item = self.takeAt(0)
        while item:
            item = self.takeAt(0)

    def addItem(self, item):
        self._item_list.append(item)

    def count(self):
        return len(self._item_list)

    def itemAt(self, index):
        if 0 <= index < len(self._item_list):
            return self._item_list[index]
        return None

    def takeAt(self, index):
        if 0 <= index < len(self._item_list):
            return self._item_list.pop(index)
        return None

    def expandingDirections(self):
        return QtCore.Qt.Orientation(0)

    def hasHeightForWidth(self):
        return True

    def heightForWidth(self, width):
        height = self._do_layout(QtCore.QRect(0, 0, width, 0), True)
        return height

    def setGeometry(self, rect):
        super(FlowLayout, self).setGeometry(rect)
        self._do_layout(rect, False)

    def sizeHint(self):
        return self.minimumSize()

    def minimumSize(self):
        size = QtCore.QSize()
        for item in self._item_list:
            size = size.expandedTo(item.minimumSize())
        size += QtCore.QSize(2 * self.contentsMargins().top(), 2 * self.contentsMargins().top())
        return size

    def _do_layout(self, rect, test_only):
        x = rect.x()
        y = rect.y()
        line_height = 0
        spacing = self.spacing()
        for item in self._item_list:
            style = item.widget().style()
            layout_spacing_x = style.layoutSpacing(
                QtWidgets.QSizePolicy.PushButton, QtWidgets.QSizePolicy.PushButton, QtCore.Qt.Horizontal
            )
            layout_spacing_y = style.layoutSpacing(
                QtWidgets.QSizePolicy.PushButton, QtWidgets.QSizePolicy.PushButton, QtCore.Qt.Vertical
            )
            space_x = spacing + layout_spacing_x
            space_y = spacing + layout_spacing_y
            next_x = x + item.sizeHint().width() + space_x
            if next_x - space_x > rect.right() and line_height > 0:
                x = rect.x()
                y = y + line_height + space_y
                next_x = x + item.sizeHint().width() + space_x
                line_height = 0
            if not test_only:
                item.setGeometry(QtCore.QRect(QtCore.QPoint(x, y), item.sizeHint()))
            x = next_x
            line_height = max(line_height, item.sizeHint().height())
        new_height = y + line_height - rect.y()
        self.heightChanged.emit(new_height)
        return new_height

class WordwrapLabel(QtWidgets.QTextEdit):
    size_change_signal = QtCore.pyqtSignal()

    def __init__(self, parent=None):
        # Call the parent constructor
        super(WordwrapLabel, self).__init__(parent)
        self.viewport().setCursor(QtCore.Qt.ArrowCursor)
        self.setStyleSheet('background:transparent;')
        self.selected = False
        self.default_style_sheet = self.styleSheet()
        self.setFrameShape(QtWidgets.QFrame.NoFrame)
        self.setReadOnly(True)
        self.setTextInteractionFlags(QtCore.Qt.NoTextInteraction)
        self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
        self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)

    def wheelEvent(self, event):
        event.ignore()
        super(WordwrapLabel, self).wheelEvent(event)

    def resizeEvent(self, event):
        super(WordwrapLabel, self).resizeEvent(event)
        font = self.document().defaultFont()
        font_metrics = QtGui.QFontMetrics(font)
        text_size = font_metrics.size(0, self.toPlainText())
        if text_size.width() >= self.size().width():
            line_factor = int(text_size.width() / self.size().width()) + 1
            new_height = (text_size.height() * line_factor) + 15
            self.setMinimumHeight(new_height)
            self.setMaximumHeight(new_height)
        else:
            height = text_size.height() + 15
            self.setMinimumHeight(height)
            self.setMaximumHeight(height)

class Thumbnail(QtWidgets.QWidget):
    def __init__(self, parent=None, label=None):
        # Call the parent constructor
        super(Thumbnail, self).__init__(parent)
        self.image_label_width = 0
        self.image_label_height = 0
        self.name_label_min_width = 0
        self.name_label_max_width = 0
        self.name_label_min_height = 18
        self.max_width = 0
        self.max_height = 0
        self.icon_sizes = ['small', 'medium', 'large']
        self.icon_size_index = 1
        self.current_icon_size = 'medium'
        self.widget = QtWidgets.QWidget()
        master_lyt = QtWidgets.QVBoxLayout()
        sub_lyt = QtWidgets.QVBoxLayout()
        self.widget.setLayout(sub_lyt)
        self.image_label = QtWidgets.QLabel()
        image_path = 'D:/PC7/Unzipped/Explosion00-sequence-tga/jpg/explosion00-frame030.jpg'
        self.pic = QtGui.QPixmap()
        self.pic.load(image_path)
        self.image_label.setPixmap(self.pic)
        self.image_label.setScaledContents(True)
        sub_lyt.addWidget(self.image_label, alignment=QtCore.Qt.AlignCenter)
        name_label = WordwrapLabel()
        if label:
            self.label_text = label
        else:
            self.label_text = 'D:/PC7/Unzipped/Explosion00-sequence-tga/jpg/explosion00-frame030.jpgTestingetlskjfi9023ijd90o8ajsl'
            # label_text = 'explosion00-frame030.jpgl'
        name_label.setHtml(HTML_STRING.format(body=self.label_text))
        sub_lyt.addWidget(name_label, alignment=QtCore.Qt.AlignCenter)
        master_lyt.addWidget(self.widget)
        self.setLayout(master_lyt)
        self.setIconSize(self.icon_sizes[self.icon_size_index])

    def increaseIcon(self):
        self.icon_size_index += 1
        if self.icon_size_index == 3:
            self.icon_size_index = 2
        self.setIconSize(self.icon_sizes[self.icon_size_index])

    def decreaseIcon(self):
        self.icon_size_index -= 1
        if self.icon_size_index == -1:
            self.icon_size_index = 0
        self.setIconSize(self.icon_sizes[self.icon_size_index])

    def setIconSize(self, size):
        if size == 'small':
            self.image_label_width = 25
            self.image_label_height = 25
            self.name_label_min_width = 50
            self.name_label_max_width = 95
            self.max_width = 100
            self.max_height = 100
        elif size == 'medium':
            self.image_label_width = 50
            self.image_label_height = 50
            self.name_label_min_width = 90
            self.name_label_max_width = 145
            self.max_width = 150
            self.max_height = 150
        elif size == 'large':
            self.image_label_width = 80
            self.image_label_height = 80
            self.name_label_min_width = 120
            self.name_label_max_width = 170
            self.max_width = 175
            self.max_height = 175
        self.image_label.setMinimumHeight(self.image_label_width)
        self.image_label.setMinimumWidth(self.image_label_width)
        self.image_label.setMaximumHeight(self.image_label_width)
        self.image_label.setMaximumWidth(self.image_label_width)
        self.widget.setMinimumWidth(self.max_width)
        self.widget.setMaximumWidth(self.max_width)
        self.widget.setMinimumHeight(self.max_height)
        self.widget.setMaximumHeight(self.max_height)

class DialogTest(QtWidgets.QScrollArea):
    def __init__(self, parent=None):
        # Call the parent constructor
        super(DialogTest, self).__init__(parent)
        # Set the title of the window
        self.setWindowTitle("Icons Test")
        self.selected_icon = None
        # Set the geometry for the window
        self.resize(400, 300)
        self.container = QtWidgets.QWidget()
        self.container_lyt = QtWidgets.QVBoxLayout()
        filter_horizontal_lyt = QtWidgets.QHBoxLayout()
        self.filter_line_edit = QtWidgets.QLineEdit()
        filter_horizontal_lyt.addWidget(self.filter_line_edit)
        self.filter_button = QtWidgets.QPushButton('Filter')
        self.filter_button.clicked.connect(self.updateFlowLayout)
        filter_horizontal_lyt.addWidget(self.filter_button)
        self.container_lyt.addLayout(filter_horizontal_lyt)
        horizontal_lyt = QtWidgets.QHBoxLayout()
        self.plus_button = QtWidgets.QPushButton()
        self.plus_button.setText('Plus')
        self.plus_button.clicked.connect(self.increaseIcon)
        horizontal_lyt.addWidget(self.plus_button)
        self.minus_button = QtWidgets.QPushButton()
        self.minus_button.clicked.connect(self.decreaseIcon)
        self.minus_button.setText('Minus')
        horizontal_lyt.addWidget(self.minus_button)
        self.container_lyt.addLayout(horizontal_lyt)
        self.widget = QtWidgets.QWidget()
        self.widget_flow_lyt = FlowLayout()
        self.widget_flow_lyt.heightChanged.connect(self.widget.setMinimumHeight)
        self.widget.setLayout(self.widget_flow_lyt)
        self.button_list = ['passages', 'random', 'years', 'discovered', 'through', 'looks', 'using', 'still', 'roots', 'looking', 'layout', 'generator', 'sentence', 'editors', 'including', 'passage', 'consectetur', 'source', 'going', 'generators', 'suffered', 'remaining', 'specimen', 'reader', 'virginia', 'words', 'during', 'exact', 'ipsum', 'necessary', 'always', 'cicero', 'alteration', 'hampden-sydney', 'popular', 'accompanied', 'obscure', 'aldus', "isn't", 'opposed', 'classical', 'infancy', 'chunk', 'repeat', 'college', 'theory', 'predefined', 'established', 'richard', 'non-characteristic', 'section', 'since', 'sites', 'content', 'written', 'various', 'internet', 'contrary', 'available', 'latin', 'recently', 'reproduced', 'dictionary', 'handful', 'standard', 'release', 'uncover', 'packages', 'generate', 'search', 'accident', 'anything', 'industry', 'reasonable', 'extremes', 'slightly', 'comes', 'first', 'distracted', 'letters', 'humour', 'point', 'galley', 'bonorum', 'simply', 'chunks', 'desktop', 'electronic', 'publishing', 'unknown', 'there', 'readable', 'their', 'combined', 'typesetting', 'sections', 'printer', 'therefore', 'injected', 'essentially', 'rackham', 'survived', 'injected', 'translation', 'repetition', 'industrys', 'hidden', 'those', 'treatise', 'versions', 'default', 'structures', 'variations', 'below', 'printing', 'maloru', 'evolved', 'popularised', 'making', 'distribution', 'piece', 'undoubtable', 'embarrassing', 'unchanged', 'middle', 'believable', 'cites', 'containing', 'more-or-less', 'majority', 'belief', 'interested', 'which', 'literature', 'centuries', 'normal', 'dolor', 'renaissance', 'generated', 'purpose', 'sheets', 'ethics', 'lorem', 'dummy', 'pagemaker', 'randomised', 'professor', 'sometimes', 'letraset', 'looked', 'scrambled', 'english', 'finibus', 'model', 'software', 'original', 'mcclintock']
        self.button_dict = {}
        for btn in self.button_list:
            thumbnail = Thumbnail(self, label=btn)
            self.widget_flow_lyt.addWidget(thumbnail)
            self.button_dict[btn] = thumbnail
        self.container_lyt.addWidget(self.widget)
        self.container_lyt.addStretch()
        self.container.setLayout(self.container_lyt)
        self.setWidgetResizable(True)
        self.setWidget(self.container)

    def resizeEvent(self, event):
        super(DialogTest, self).resizeEvent(event)
        self.container.setMaximumHeight(self.widget.minimumHeight())

    def mousePressEvent(self, event):
        cursor = event.globalPos()
        if event.button() == QtCore.Qt.LeftButton:
            self.highlightIcon(event, cursor)

    def mouseDoubleClickEvent(self, event):
        pos = self.geometry()
        cursor = event.globalPos()
        if event.button() == QtCore.Qt.LeftButton:
            self.highlightIcon(event, cursor)

    def increaseIcon(self):
        for button_name, button_obj in self.button_dict.items():
            button_obj.increaseIcon()

    def decreaseIcon(self):
        for button_name, button_obj in self.button_dict.items():
            button_obj.decreaseIcon()

    def highlightIcon(self, event, cursor):
        clicked_widget = QtWidgets.QApplication.widgetAt(cursor)
        if clicked_widget == self:
            return
        thumbnail_widget = self.getThumbnailWidget(clicked_widget)
        if not thumbnail_widget:
            return
        if thumbnail_widget == self.selected_icon and event.modifiers() & QtCore.Qt.ControlModifier:
            self.selected_icon.widget.setStyleSheet('')
            self.selected_icon = None
            return
        if self.selected_icon:
            self.selected_icon.widget.setStyleSheet('')
        thumbnail_widget.widget.setStyleSheet('background-color: cyan;')
        self.selected_icon = thumbnail_widget

    def updateFlowLayout(self):
        for btn_label in self.button_dict:
            filter_key = str(self.filter_line_edit.text())
            if filter_key:
                if filter_key not in btn_label:
                    self.button_dict[btn_label].hide()
                else:
                    self.button_dict[btn_label].show()

    def getThumbnailWidget(self, inputWidget):
        if type(inputWidget) == Thumbnail:
            return inputWidget
        parent_widget = inputWidget.parentWidget()
        thumbnail_found = False
        while parent_widget:
            if type(parent_widget) == Thumbnail:
                thumbnail_found = True
                break
            parent_widget = parent_widget.parentWidget()
        if not thumbnail_found:
            parent_widget = None
        return parent_widget

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    w = DialogTest()
    w.show()
    sys.exit(app.exec_())
</code></pre>
<p>Can anyone give any advice on how to format this better? Ideally I want this to look something like the "Content Browser" from Autodesk Maya 2020:
<a href="https://i.sstatic.net/SJysE.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SJysE.gif" alt="enter image description here" /></a></p>
<p>Also, a separate question: is there a way to store Qt objects for later use without having them wiped from memory? I have been trying to load maybe 1000 items at once and then set them aside in a dictionary while I reset the layout and replace them with something else, but every time I do that it seems to destroy the underlying C++ object, even though the dictionary is still present...</p>
<p>Edit 1: There was a suggestion in the comments to try using <code>QListView.IconMode</code>, which looks great, but after experimenting I have found this approach becomes too slow once we start assigning custom thumbnail images and the list grows beyond 50 items.</p>
|
<python><qt><pyqt><pyqt5><maya>
|
2022-12-24 18:15:10
| 1
| 353
|
user3696118
|
74,909,205
| 19,625,920
|
How can I use Python's exec() and eval() to execute arbitrary code AND return a value afterwards?
|
<p>I'm looking for a function that will let me execute code passed as a string, but also return a value upon completion. I have found Python's <code>exec</code> and <code>eval</code>, each of which manages to do a part of what I want:</p>
<ul>
<li><code>exec</code> lets me execute some lines of code which I pass as a string: print to the console; set or change variables; write to files etc.</li>
<li><code>eval</code> lets me evaluate a single expression and returns the value of this expression.</li>
</ul>
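<p>The difference can be seen in a minimal sketch — <code>exec</code> runs statements and returns <code>None</code>, while <code>eval</code> returns the value of a single expression (sharing a namespace dict lets them cooperate):</p>

```python
ns = {}
print(exec("x = 2 + 3", ns))  # None: exec runs statements, returns nothing
print(eval("x * 10", ns))     # 50: eval returns the expression's value
```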
<p>However, the functionality I want is to combine these: with a single function call, I want to execute some arbitrary code, and then return a value, which might be dependent on the code executed.</p>
<p>To contextualise, I want to modify the in-built Pickle <code>__reduce__</code> method so that I can execute some code in the background while the object un-pickles. However, at the end of that code execution, I still want to return the original object that was pickled.</p>
<p><a href="https://docs.python.org/3/library/pickle.html#object.__reduce__" rel="nofollow noreferrer">Pickle's <code>__reduce__</code> has to return a function which is used to reassemble the object on un-pickling</a>, so I want a use of <code>eval</code> and <code>exec</code> that lets me combine their usage into a single function call.</p>
<p>As an example, my code might look something like this:</p>
<pre><code>def __reduce__(self):
code = """with open("flag.txt", "w") as f:\n\tf.write("A flag I have left!")\ndict()"""
return exec, (code, ), None, None, iter(self.items())
</code></pre>
<p>The odd return formatting is a quirk of Pickle. The oddly formatted code string should do this:</p>
<pre><code>with open("flag.txt", "w") as f:
f.write("A flag I have left")
dict() # I'm trying to get the intepreter to 'evaluate' this final line
</code></pre>
<p>However, this doesn't work, as <code>exec</code> just does nothing with this final line, and returns <code>None</code>. If I swap, and use <code>eval</code> instead, then I get an error too, as <code>eval</code> can't do anything with the lines above.</p>
<p>I have tried using the built-in <code>compile</code> method, but this doesn't actually seem to help, because <code>eval</code> still won't evaluate compiled execution code.</p>
<p>I also see that this problem has popped up elsewhere on SO (<a href="https://stackoverflow.com/questions/12698028/why-is-pythons-eval-rejecting-this-multiline-string-and-how-can-i-fix-it">here</a> and <a href="https://stackoverflow.com/questions/74218830/how-can-i-handle-eval-exec-multiline-as-first-in-first-out?noredirect=1&lq=1">here</a>) but I'm unsatisfied with the answers provided, because they involve defining new functions, which are then useless in the context of getting Pickle to execute them on un-pickling, where the interpreter is unaware of their definition.</p>
<p>Is there any way to neatly combine these expressions to achieve arbitrary execution as well as returning a value?</p>
|
<python>
|
2022-12-24 17:16:42
| 1
| 518
|
whatf0xx
|
74,909,164
| 310,370
|
Is that possible to reset / empty / free up Google Colab notebook GPU VRAM? Without restarting the session
|
<p>Currently I am trying to run Whisper on my Google Colab.</p>
<p>It throws a GPU out-of-memory error; however, after the error is thrown, GPU RAM usage is still at its maximum.</p>
<p>There are maybe 100 questions about this already, but none of the given answers work for me.</p>
<p>I need a way to free up GPU memory without restarting the session, to avoid having all the downloaded data erased.</p>
<p>Here, take a look at the current status of the notebook: you can see the error was thrown, but GPU RAM is still at maximum.</p>
<p><a href="https://i.sstatic.net/Or2GO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Or2GO.png" alt="enter image description here" /></a></p>
<p>And here is the entire code of the Google Colab notebook:</p>
<pre><code>!pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
!pip install git+https://github.com/openai/whisper.git
import os
# Add folders
checkContentFolder = os.path.exists("content")
checkDownLoadFolder = os.path.exists("download")
if not checkContentFolder:
os.mkdir("content")
if not checkDownLoadFolder:
os.mkdir("download")
import whisper
from pathlib import Path
from whisper.utils import write_srt
import pandas as pd
def main():
# transcribe the audio
#model = whisper.load_model("large")
model = whisper.load_model("large-v1")
#model = whisper.load_model("../input/whisper2/large-v1.pt")
transcribe_name_begin="oop";
sub_folder_name="/download/oop/"
import os
if not os.path.isdir(sub_folder_name):
os.makedirs(sub_folder_name)
_compression_ratio_threshold = 2.4
for lectureId in range(142, 143):
transcribePath=f"../content/"+transcribe_name_begin+str(lectureId)+".mp3";
result = model.transcribe(transcribePath,
language="en",
beam_size=9,
initial_prompt="Welcome to the Software Engineering Courses channel.",
best_of=9,verbose=True,temperature=0.0,compression_ratio_threshold=_compression_ratio_threshold)
#result = model.transcribe("../input/whisper2/lecture_"+str(lectureId)+".mp3",language="en",beam_size=5,initial_prompt="Welcome to the Software Engineering Courses channel.",best_of=5,verbose=True,temperature=0.0)
# save SRT
language = result["language"]
sub_name = sub_folder_name+transcribe_name_begin+str(lectureId)+".srt"
with open(sub_name, "w", encoding="utf-8") as srt:
write_srt(result["segments"], file=srt)
# Save output
writing_lut = {
'.txt': whisper.utils.write_txt,
'.vtt': whisper.utils.write_vtt,
'.srt': whisper.utils.write_txt,
}
output_type="All"
if output_type == "All":
for suffix, write_suffix in writing_lut.items():
transcript_local_path =sub_folder_name+transcribe_name_begin+str(lectureId) +suffix
with open(transcript_local_path, "w", encoding="utf-8") as f:
write_suffix(result["segments"], file=f)
try:
transcript_drive_path =file_name
except:
print(f"**Transcript file created: {transcript_local_path}**")
else:
transcript_local_path =sub_folder_name+transcribe_name_begin+str(lectureId) +output_type
with open(transcript_local_path, "w", encoding="utf-8") as f:
writing_lut[output_type](result["segments"], file=f)
if __name__ == "__main__":
main()
</code></pre>
|
<python><tensorflow><pytorch><google-colaboratory>
|
2022-12-24 17:08:59
| 1
| 23,982
|
Furkan Gözükara
|
74,909,112
| 12,242,085
|
How to convert object column with date, time and NaN to datetime64 column in DataFrame in Python Pandas?
|
<p>I have a DataFrame in Python Pandas like below:</p>
<p>Column dtypes:</p>
<ul>
<li><p>COL1 - object</p>
</li>
<li><p>COL2 - object</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>COL1</th>
<th>COL2</th>
</tr>
</thead>
<tbody>
<tr>
<td>abc</td>
<td>2019-11-12T20:15:08+030</td>
</tr>
<tr>
<td>ddd</td>
<td>2019-12-01T22:14:11+030</td>
</tr>
<tr>
<td>bbb</td>
<td>NaN</td>
</tr>
<tr>
<td>...</td>
<td>....</td>
</tr>
</tbody>
</table>
</div></li>
</ul>
<p>How can I convert COL2 so that it holds only the date (dtype: datetime64)? Be aware that I have many more columns in my original DF, and COL2 can contain NaN, so as a result I need to have something like below:</p>
<pre><code>COL1 | COL2
-----|-------------------------
abc | 2019-11-12
ddd | 2019-12-01
bbb | NaN
... | ....
</code></pre>
<p>How can I do that in Python Pandas?</p>
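<p>For reference, a minimal reproducible frame matching the table above (assumption: the trailing <code>+030</code> offsets stand for <code>+03:00</code>):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "COL1": ["abc", "ddd", "bbb"],
    "COL2": ["2019-11-12T20:15:08+03:00", "2019-12-01T22:14:11+03:00", np.nan],
})
print(df.dtypes)  # both columns come out as object, as described
```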
|
<python><pandas><datetime><object><nan>
|
2022-12-24 17:02:23
| 1
| 2,350
|
dingaro
|
74,908,952
| 8,372,455
|
Pandas calculating time deltas from index
|
<p>I have a month of time-series data in which I am trying to calculate the total hours, minutes and seconds in the dataset, as well as for a unique Boolean column when that column is True (1). For some reason the total time calculations don't appear correct. The code below (which runs) calculates the time delta between each index timestamp:</p>
<pre><code>import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/bbartling/Data/master/hvac_random_fake_data/testdf2_fc5.csv',
index_col='Date',
parse_dates=True)
print(df)
df["timedelta_alldata"] = df.index.to_series().diff()
seconds_alldata = df.timedelta_alldata.sum().seconds
print('SECONDS ALL DATA: ',seconds_alldata)
days_alldata = df.timedelta_alldata.sum().days
print('DAYS ALL DATA: ',days_alldata)
hours_alldata = round(seconds_alldata/3600, 2)
print('HOURS ALL DATA: ',hours_alldata)
minutes_alldata = round((seconds_alldata/60) % 60, 2)
total_hours_calc = days_alldata * 24.0 + hours_alldata
print('TOTAL HOURS CALC: ',total_hours_calc)
# fault flag 5 true time delta calc
df["timedelta_fddflag_fc5"] = df.index.to_series(
).diff().where(df["fc5_flag"] == 1)
seconds_fc5_mode = df.timedelta_fddflag_fc5.sum().seconds
print('FALT FLAG TRUE TOTAL SECONDS: ',seconds_fc5_mode)
hours_fc5_mode = round(seconds_fc5_mode/3600, 2)
print('FALT FLAG TRUE TOTAL HOURS: ',hours_fc5_mode)
percent_true_fc5 = round(df.fc5_flag.mean() * 100, 2)
print('PERCENT TIME WHEN FLAG 5 TRUE: ',percent_true_fc5,'%')
percent_false_fc5 = round((100 - percent_true_fc5), 2)
print('PERCENT TIME WHEN FLAG 5 FALSE: ',percent_false_fc5,'%')
</code></pre>
<p>returns:</p>
<pre><code>SECONDS ALL DATA: 85500 <--- I think NOT correct
DAYS ALL DATA: 30
HOURS ALL DATA: 23.75 <--- I think NOT correct
TOTAL HOURS CALC: 743.75
FALT FLAG TRUE TOTAL SECONDS: 1800 <--- I think NOT correct
FALT FLAG TRUE TOTAL HOURS: 0.5 <--- I think NOT correct
PERCENT TIME WHEN FLAG 5 TRUE: 74.29 %
PERCENT TIME WHEN FLAG 5 FALSE: 25.71 %
</code></pre>
<p>30 days is correct (<code>DAYS ALL DATA: 30</code>), as is the percentage of time when the Boolean column (<code>fc5_flag</code>) is True or False, but the total seconds and hours seem way off. Would anyone have any tips on writing this better?</p>
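<p>The likely culprit is <code>Timedelta.seconds</code>: it returns only the seconds component <em>beyond whole days</em>, not the full duration, so combining it with <code>days</code> by hand is error-prone. <code>total_seconds()</code> returns the whole duration. A sketch with synthetic timestamps (the real CSV is not reproduced here):</p>

```python
import pandas as pd

idx = pd.date_range("2022-01-01", periods=5, freq="12h")
deltas = idx.to_series().diff()

total = deltas.sum()                 # Timedelta('2 days 00:00:00')
print(total.seconds)                 # 0 -> only the part past whole days
print(total.total_seconds())         # 172800.0 -> the full duration
print(total.total_seconds() / 3600)  # 48.0 hours
```

<p>The same applies to the flagged subset: use <code>df.timedelta_fddflag_fc5.sum().total_seconds()</code> rather than <code>.seconds</code>.</p>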
|
<python><pandas><data-science><data-wrangling><data-scrubbing>
|
2022-12-24 16:31:32
| 1
| 3,564
|
bbartling
|
74,908,823
| 9,808,098
|
How to convert a string into Hex in TypeScript and back to String in Python
|
<p>I have a use-case where a TypeScript application should print a string in hex and a Python application should read this hex and convert it back to a string.</p>
<p>The TypeScript code I have:</p>
<pre class="lang-js prettyprint-override"><code>function stringToHex(str: string): string {
return str.split('').map(char => char.charCodeAt(0).toString(16)).join('');
}
</code></pre>
<p>The Python code I have:</p>
<pre class="lang-py prettyprint-override"><code>def hexToString(hex_str: str) -> str:
return bytes.fromhex(hex_str).decode('utf-8')
</code></pre>
<p>I'm trying to run this:</p>
<pre class="lang-js prettyprint-override"><code>stringToHex("def main():\n print(\"Hello World\")\n\nif __name__ == '__main__':\n main()")
</code></pre>
<p>But I'm getting the following error:</p>
<pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa6 in position 0: invalid start byte
</code></pre>
<p>I have control over both sides. Where am I going wrong?</p>
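<p>The root cause is most likely that <code>charCodeAt(0).toString(16)</code> does not zero-pad, so any code point below 0x10 (such as <code>\n</code>, which becomes just <code>"a"</code>) shifts every following nibble, and <code>bytes.fromhex</code> then fails or decodes garbage. On the TypeScript side, padding with <code>.padStart(2, '0')</code> (and encoding via <code>TextEncoder</code> if non-ASCII input is possible) should fix it. A Python sketch of the effect:</p>

```python
# '\n' is 0x0a; without padding it hex-encodes to a single 'a'.
unpadded = "".join(format(ord(c), "x") for c in "A\nB")    # '41a42'
padded = "".join(format(ord(c), "02x") for c in "A\nB")    # '410a42'

# The padded form round-trips; the unpadded one has odd length and fails.
print(bytes.fromhex(padded).decode("utf-8"))               # 'A\nB'
```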
|
<python><typescript><hex>
|
2022-12-24 16:09:15
| 1
| 3,186
|
vesii
|
74,908,633
| 13,132,728
|
How to optimize turning a group of wide pandas columns into two long pandas columns
|
<p>I have a process that takes a dataframe and turns a set of wide pandas columns into two long pandas columns, like so:</p>
<p>original wide:</p>
<pre><code>wide = pd.DataFrame(
{
'id':['foo'],
'a':[1],
'b':[2],
'c':[3],
'x':[4],
'y':[5],
'z':[6]
}
)
wide
id a b c x y z
0 foo 1 2 3 4 5 6
</code></pre>
<p>desired long:</p>
<pre><code>lon = pd.DataFrame(
{
'id':['foo','foo','foo','foo','foo','foo'],
'type':['a','b','c','x','y','z'],
'val':[1,2,3,4,5,6]
}
)
lon
id type val
0 foo a 1
1 foo b 2
2 foo c 3
3 foo x 4
4 foo y 5
5 foo z 6
</code></pre>
<p>I found out a way to do this by chaining the following pandas assignments</p>
<pre><code>(wide
.set_index('id')
.T
.unstack()
.reset_index()
.rename(columns={'level_1':'type',0:'val'})
)
id type val
0 foo a 1
1 foo b 2
2 foo c 3
3 foo x 4
4 foo y 5
5 foo z 6
</code></pre>
<p>But when I scale my data this seems to pose problems. I am looking for an alternative to what I have already done that is faster or more computationally efficient.</p>
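<p><code>pandas.melt</code> is built for exactly this wide-to-long reshape and avoids the transpose/unstack round-trip, which should also scale better:</p>

```python
import pandas as pd

wide = pd.DataFrame({
    "id": ["foo"],
    "a": [1], "b": [2], "c": [3], "x": [4], "y": [5], "z": [6],
})

# id_vars stay as identifier columns; all remaining columns are unpivoted.
lon = wide.melt(id_vars="id", var_name="type", value_name="val")
print(lon)
```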
|
<python><pandas><dataframe>
|
2022-12-24 15:35:53
| 1
| 1,645
|
bismo
|
74,908,506
| 13,507,819
|
Ensemble LDA: How to find more than 1 stable topic?
|
<p>I used the LDA (Latent Dirichlet Allocation) algorithm to analyse a corpus from the <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwi10uCnvJL8AhV-RmwGHec6DmQQFnoECBEQAQ&url=https%3A%2F%2Fconsole.cloud.google.com%2Fmarketplace%2Fproduct%2Fstack-exchange%2Fstack-overflow&usg=AOvVaw3b3AF70UCH8KOYJAH6adZ9" rel="nofollow noreferrer">StackExchange database</a>. LDA is not ideal because it has a reproducibility problem: it gives different topic suggestions on every execution, which makes it unreliable from my point of view. According to the <a href="https://radimrehurek.com/gensim/models/ensemblelda.html" rel="nofollow noreferrer">introduction for Ensemble LDA (eLDA) from Gensim</a>, eLDA addresses this problem by executing LDA models multiple times and only outputting stable topics, i.e. topics that occur across multiple LDA runs (to explain it in a very simplified way).</p>
<p>Here is how I executed eLDA:</p>
<ol start="0">
<li><p>Filter the data from StackExchange (so that only questions with the topic "security" are left) and preprocess it. I use the last 10 years of Q&A data from StackOverflow.</p>
</li>
<li><p>Do trial and error with normal LDA to get ideal hyperparameters (passes and iterations) so that most if not all documents have converged. For this step I followed <a href="https://radimrehurek.com/gensim/auto_examples/tutorials/run_lda.html" rel="nofollow noreferrer">this documentation</a>.</p>
</li>
<li><p>With the optimal LDA parameters, I ran eLDA:</p>
<pre><code> corpus = self.doc_matrix
dictionary = vectorizer.id2word.id2token
topic_model_class = LdaModel
ensemble_workers = 4
num_models = 12
distance_workers = 4
num_topics = 50
passes=20
iterations=5000
epsilon = 1
eval_every=10
self.estimator = EnsembleLda(
corpus=corpus,
id2word=dictionary,
num_topics=num_topics,
passes=passes,
iterations=iterations,
num_models=num_models,
topic_model_class=topic_model_class,
ensemble_workers=ensemble_workers,
distance_workers=distance_workers,
epsilon=epsilon,
eval_every=eval_every
)
</code></pre>
</li>
<li><p>The result is always only one stable topic. This is too few stable topics for my purpose. After a couple of days, I still haven't managed to find more stable topics or otherwise improve my eLDA.
<a href="https://i.sstatic.net/rpvqb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rpvqb.png" alt="enter image description here" /></a></p>
</li>
</ol>
<p>My goal: find trend or topics in StackOverflow questions that are related to topic "security".</p>
<p>My opinion: the result should be more than just one stable topic, because on StackOverflow there are tags available for every question. A tag is itself a topic. Therefore it should find more than one stable topic.</p>
<p>Questions:</p>
<p>a. Can somebody point me in a direction for optimising my eLDA model so that it can find more than one stable topic?</p>
<p>b. Is my logic wrong in assuming that there should be more than one stable topic? I could accept it if the data were very diverse. But since every StackOverflow question has tags, I assume the result should have more than one stable topic.</p>
<p>Thank you for any advice.</p>
<p>#################################################################</p>
<p>Why did I choose epsilon=1?
I execute the following:</p>
<pre><code>def optimize_ensembleLda(self):
print('*** Optimize ensemble LDA model.')
import numpy as np
shape = self.estimator.asymmetric_distance_matrix.shape
without_diagonal = self.estimator.asymmetric_distance_matrix[~np.eye(shape[0], dtype=bool)].reshape(shape[0], -1)
print("Min, mean & max value of asymetric distance matrix:")
print(without_diagonal.min(), without_diagonal.mean(), without_diagonal.max())
new_epsilon = without_diagonal.max()
self.estimator.recluster(eps=new_epsilon, min_samples=2, min_cores=2)
return
</code></pre>
<p>I chose to use the maximum distance, because I am trying to get more stable topics.</p>
<p>#################################################################</p>
<p>After executing Erwan's suggestion from commentary (increasing num_models to e.g. 40), I got the following error:</p>
<pre><code>RecursionError: maximum recursion depth exceeded in comparison
</code></pre>
<p>The error points to the eLDA model in step 2 above.
I doubled <code>ensemble_workers</code> but the problem is not fixed. I will keep increasing it and give an update here.</p>
<p>#################################################################</p>
<p>I solved the <code>RecursionError</code> by increasing the Python recursion limit with:</p>
<pre><code>sys.setrecursionlimit(int(len(self.ttda)*1.2))
</code></pre>
<p>inside my Gensim eLDA code...
The reason for this problem is my large corpus size and hence the large topic numbers.
Still my original problem still exists, after increasing <code>num_models</code> to 40, I still get one stable topic.</p>
<p><em>I would appreciate very much, if someone can help me finding more than one stable topic with Ensemble LDA from Gensim.</em></p>
|
<python><nlp><gensim><lda><ensemble-learning>
|
2022-12-24 15:15:33
| 0
| 369
|
gunardilin
|
74,908,080
| 12,862,712
|
Why is fractional part not discarded in python in some cases
|
<p>In Python,<br />
<code>3//2</code> evaluates to <code>1</code>; in this case the fractional part is discarded,<br />
while <code>1.2//1</code> evaluates to <code>1.0</code>.</p>
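<p>In both cases <code>//</code> floors the result and discards the fractional part; only the return type differs. Floor division follows the usual numeric coercion rule: if either operand is a <code>float</code>, the result is a <code>float</code> (still floored), otherwise it is an <code>int</code>. A quick sketch:</p>

```python
print(3 // 2)    # 1    -> int // int gives an int
print(1.2 // 1)  # 1.0  -> a float operand gives a float, still floored
print(-3 // 2)   # -2   -> floors toward negative infinity, not zero
print(type(3 // 2).__name__, type(1.2 // 1).__name__)  # int float
```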
|
<python><floor-division>
|
2022-12-24 13:50:48
| 1
| 1,238
|
rohitt
|
74,908,005
| 389,717
|
Error while loading a remote file via the _load_file( ) method - Thonny IDE Plugin
|
<p>I am trying to load a remote <strong>.py</strong> file in Thonny editor but I get the following error:</p>
<pre><code>File "/Applications/Thonny.app/Contents/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/thonny/editors.py", line 193, in _load_local_file
with open(filename, "rb") as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'https://raw.githubusercontent.com/thonny/thonny/3eaa0319a9722bcdac12b4401b0a69b47d2e9f00/thonny/first_run.py'
</code></pre>
<p>The file I am using for testing purposes is:
<a href="https://github.com/thonny/thonny/blob/3eaa0319a9722bcdac12b4401b0a69b47d2e9f00/thonny/first_run.py" rel="nofollow noreferrer">https://github.com/thonny/thonny/blob/3eaa0319a9722bcdac12b4401b0a69b47d2e9f00/thonny/first_run.py</a></p>
<p>Any ideas why <code>self.editor._load_file( ... )</code> does not understand that this is a remote file?</p>
<p>Here is my <code>__init__.py</code> of my plugin:</p>
<pre><code>class Exercise:
"""
Fetch exercise to the current loaded file.
Description
"""
def __init__(self) -> None:
"""Get the workbench to fetch and copy the fetched content to editor, later."""
self.workbench = get_workbench()
def load_exercise(self) -> None:
"""Handle the plugin execution."""
self.editor = self.workbench.get_editor_notebook().get_current_editor()
self.filename = self.editor.get_filename()
if self.filename is not None and self.filename[-3:] == ".py":
self.editor.save_file()
# self.filename = "/Users/limitcracker/Documents/Projects/ThonnyPlugin/test.py"
self.filename = "https://raw.githubusercontent.com/thonny/thonny/3eaa0319a9722bcdac12b4401b0a69b47d2e9f00/thonny/first_run.py"
self.editor._load_file(self.filename, keep_undo=True)
showinfo(title="Exercise Loader", message="OK")
def load_plugin(self) -> None:
"""
Load the plugin on runtime.
Using self.workbench.add_command(), the plugin is registered in Thonny
with all the given arguments.
"""
self.workbench.add_command(
command_id="load_exercise",
menu_name="tools",
command_label="Load Exercise",
handler=self.load_exercise,
default_sequence="<Control-Alt-c>",
extra_sequences=["<<CtrlAltCInText>>"],
)
if get_workbench() is not None:
run = Exercise().load_plugin()
</code></pre>
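<p>One possible workaround, sketched below: since <code>_load_file</code> ends up calling <code>open()</code>, which only handles local paths, download the remote file first and hand the editor a local path. The helper name is mine, and whether Thonny exposes a nicer hook for this is an assumption I have not verified:</p>

```python
import tempfile
import urllib.parse
import urllib.request

def fetch_if_remote(filename: str) -> str:
    """Download an http(s) URL to a temp file and return its local path;
    plain local paths are returned unchanged."""
    if urllib.parse.urlparse(filename).scheme in ("http", "https"):
        local = tempfile.NamedTemporaryFile(suffix=".py", delete=False)
        local.close()
        urllib.request.urlretrieve(filename, local.name)  # network fetch
        return local.name
    return filename

# In load_exercise, instead of passing the URL straight through:
# self.editor._load_file(fetch_if_remote(self.filename), keep_undo=True)
```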
|
<python><tkinter><thonny>
|
2022-12-24 13:35:01
| 0
| 2,386
|
limitcracker
|
74,907,879
| 1,720,743
|
plotly subpot legend item spacing `legend_tracegroupgap`, how to define as percentage of figure height?
|
<p>I am trying to make a function using plotly 5.9.0 that will reproduce a specific type of plot. I am having trouble aligning legend entries with their subplots, especially when the figure is resizable.</p>
<p>This is what i currently have:</p>
<pre><code>import pandas as pd
import numpy as np
import plotly.graph_objects as go
import plotly.subplots as sp
from plotly.offline import plot
def get_df(len_df):
x = np.linspace(-1, 1, len_df)
# Create a dictionary with the functions to use for each column
funcs = {
"column1": np.sin,
"column2": np.cos,
"column3": np.tan,
"column4": np.arcsin,
"column5": np.arccos,
"column6": np.arctan
}
# Create an empty dataframe with the same index as x
df = pd.DataFrame(index=pd.date_range('2022-01-01', periods=len(x), freq='H'))
# Populate the dataframe with the functions
for column, func in funcs.items():
df[column] = func(x)
return df
def plot_subplots(df, column_groups, fig_height=1000):
# Create a figure with a grid of subplots
fig = sp.make_subplots(rows=len(column_groups), shared_xaxes=True, shared_yaxes=True, vertical_spacing=.1)
# Iterate over the list of column groups
for i, group in enumerate(column_groups):
# Iterate over the columns in the current group
for column in group:
# Add a scatter plot for the current column to the figure, specifying the row number
fig.add_trace(go.Scatter(x=df.index, y=df[column], mode="lines", name=column, legendgroup=str(i)), row=i + 1, col=1)
fig.update_layout(legend_tracegroupgap=fig_height/len(column_groups), height=fig_height)
return fig
df = get_df(1000)
column_groups = [
['column1', 'column3'],
['column2', 'column4'],
['column5', 'column6']
]
fig = plot_subplots(df, column_groups)
plot(fig)
</code></pre>
<p>This produces a plot that looks like this:</p>
<p><a href="https://i.sstatic.net/kjCvt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kjCvt.png" alt="enter image description here" /></a></p>
<p>How do I align my legend subgroups with the top of each corresponding plotly subplot?</p>
<p>If we can somehow relate the <code>legend_tracegroupgap</code> to the <code>height</code> of the figure that would be a great first step. This feels like such a logical thing to want that I feel like I'm missing something.</p>
<p><strong>In reply to r-beginners:</strong></p>
<p>I tried this:</p>
<pre><code>tracegroupgap=(fig.layout.yaxis.domain[1] - fig.layout.yaxis.domain[0])*fig_height
</code></pre>
<p>This works perfectly for a figure height of 1000, but not for 500 pixels. My guess is that I still have to subtract some value related to the vertical spacing.</p>
|
<python><plotly>
|
2022-12-24 13:15:38
| 1
| 770
|
XiB
|
74,907,839
| 6,718,626
|
Pydantic returns 'field required (type=value_error.missing)' on an Optional field with a custom model
|
<p>I'm trying to build a custom field in Fastapi-users pydantic schema as follows:</p>
<pre class="lang-py prettyprint-override"><code>class UserRead(schemas.BaseUser[uuid.UUID]):
twitter_account: Optional['TwitterAccount']
</code></pre>
<p>On UserRead validation Pydantic returns</p>
<pre class="lang-none prettyprint-override"><code>ValidationError: 1 validation error for UserRead
twitter_account
Field required [type=missing, input_value={}, input_type=dict]
</code></pre>
<p>on every field in <code>TwitterAccount</code>. <code>schema.update_forward_refs()</code> is called at the end.</p>
<p><code>TwitterAccount</code> itself has required fields, and making them optional isn't an acceptable workaround. I noticed I could use <code>Optional[List['TwitterAccount']]</code> and it works, but that's a bit silly.</p>
|
<python><fastapi><python-typing><pydantic><fastapiusers>
|
2022-12-24 13:07:27
| 2
| 581
|
Jakub Królikowski
|
74,907,735
| 8,848,630
|
Difference between '?' and 'help()'
|
<p>What is the difference between <code>?</code> and <code>help()</code> in Python Jupyter Notebooks? E.g.:</p>
<pre><code>import scipy
help(scipy)
?scipy
</code></pre>
|
<python>
|
2022-12-24 12:49:11
| 2
| 335
|
shenflow
|
74,907,336
| 1,167,835
|
Validation errors on output model when input model is populated by alias
|
<p>Given an input model with a field <code>alias</code>, and an output model without, shouldn't FastAPI still be able to implicitly <a href="https://fastapi.tiangolo.com/tutorial/response-model" rel="nofollow noreferrer">convert the output data</a> according to <code>response_model</code>? In the example below, a validation error is raised. What am I doing wrong?</p>
<pre class="lang-py prettyprint-override"><code>1 validation error for OutModel
response -> foo
field required (type=value_error.missing)
</code></pre>
<pre class="lang-py prettyprint-override"><code>import fastapi
import starlette.testclient
from pydantic import BaseModel, Field
app = fastapi.FastAPI()
class DbModel(BaseModel):
foo: str = Field(alias='bar')
class OutModel(BaseModel):
foo: str
@app.get('/test', response_model=OutModel)
def test():
foo = DbModel(bar="foo")
print(foo.dict()) # {'foo': 'foo'}
return foo
with starlette.testclient.TestClient(app) as test_client:
try:
print(test_client.get('/test').json())
except Exception as exc:
print(exc)
</code></pre>
|
<python><fastapi><pydantic>
|
2022-12-24 11:29:20
| 2
| 769
|
Kjell-Bear
|
74,907,244
| 13,745,926
|
How can I use batch embeddings using OpenAI's API?
|
<p>I am using the OpenAI API to get embeddings for a bunch of sentences. And by a bunch of sentences, I mean a bunch of sentences, like thousands. Is there a way to make it faster or make it do the embeddings concurrently or something?</p>
<p>I tried looping through and sending a request for each sentence, but that was super slow, and so is sending a list of the sentences. For both cases, I used this code:</p>
<pre><code>response = requests.post(
"https://api.openai.com/v1/embeddings",
json={
"model": "text-embedding-ada-002",
"input": ["text:This is a test", "text:This is another test", "text:This is a third test", "text:This is a fourth test", "text:This is a fifth test", "text:This is a sixth test", "text:This is a seventh test", "text:This is a eighth test", "text:This is a ninth test", "text:This is a tenth test", "text:This is a eleventh test", "text:This is a twelfth test", "text:This is a thirteenth test", "text:This is a fourteenth test", "text:This is a fifteenth test", "text:This is a sixteenth test", "text:This is a seventeenth test", "text:This is a eighteenth test", "text:This is a nineteenth test", "text:This is a twentieth test", "text:This is a twenty first test", "text:This is a twenty second test", "text:This is a twenty third test", "text:This is a twenty fourth test", "text:This is a twenty fifth test", "text:This is a twenty sixth test", "text:This is a twenty seventh test", "text:This is a twenty eighth test", "text:This is a twenty ninth test", "text:This is a thirtieth test", "text:This is a thirty first test", "text:This is a thirty second test", "text:This is a thirty third test", "text:This is a thirty fourth test", "text:This is a thirty fifth test", "text:This is a thirty sixth test", "text:This is a thirty seventh test", "text:This is a thirty eighth test", "text:This is a thirty ninth test", "text:This is a fourtieth test", "text:This is a forty first test", "text:This is a forty second test", "text:This is a forty third test", "text:This is a forty fourth test", "text:This is a forty fifth test", "text:This is a forty sixth test", "text:This is a forty seventh test", "text:This is a forty eighth test", "text:This is a forty ninth test", "text:This is a fiftieth test", "text:This is a fifty first test", "text:This is a fifty second test", "text:This is a fifty third test"],
},
headers={
"Authorization": f"Bearer {key}"
}
)
</code></pre>
<p>For the first test, I did a bunch of those requests one by one, and for the second one I sent a list. Should I send individual requests in parallel? Would that help? Thanks!</p>
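<p>A hedged sketch of the usual pattern: chunk the sentences and send each chunk as one batched request, with a thread pool overlapping the network waits. The chunk size and worker count below are illustrative, not OpenAI recommendations, and <code>embed_chunk</code> is a placeholder for the <code>requests.post</code> call from the question:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(items, size):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed_chunk(chunk):
    # Placeholder: send requests.post(..., json={"model": ...,
    # "input": chunk}) and return the parsed embeddings.
    ...

sentences = [f"text:sentence {i}" for i in range(5000)]
chunks = list(chunked(sentences, 500))   # 10 requests instead of 5000

# with ThreadPoolExecutor(max_workers=8) as pool:
#     results = list(pool.map(embed_chunk, chunks))
```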
|
<python><embedding><openai-api>
|
2022-12-24 11:12:42
| 2
| 417
|
Constantly Groovin'
|
74,907,221
| 5,429,320
|
JavaScript fetch to Django view: json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
|
<p>I have a JavaScript fetch to call a URL to pass data into my Django view to update a value for the user.</p>
<p><strong>Error in views.py:</strong></p>
<pre><code>Traceback (most recent call last):
File "C:\Python310\lib\site-packages\django\core\handlers\exception.py", line 47, in inner
response = get_response(request)
File "C:\Python310\lib\site-packages\django\core\handlers\base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Python310\lib\site-packages\django\contrib\auth\decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "C:\Users\rossw\Documents\Projects\Scry\apps\administration\views.py", line 47, in administration_users_page
switchData = json.load(request)['switch']
File "C:\Python310\lib\json\__init__.py", line 293, in load
return loads(fp.read(),
File "C:\Python310\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Python310\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Python310\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p><strong>Error in browser:</strong></p>
<pre><code>0
PUT http://127.0.0.1:8000/administration/users/JM/ 500 (Internal Server Error)
SyntaxError: Unexpected token '<', "<!DOCTYPE "... is not valid JSON
</code></pre>
<p><strong>JavaScript:</strong></p>
<pre><code>const changeAdminUserAction = (id,data) => {
console.log(data)
fetch(`/administration/users/${id}/`,{
method: 'PUT',
body: JSON.stringify({type: "adminUser", switch: data}),
headers: {'X-CSRFToken' : csrfToken, 'Content-Type': 'application/json'},
})
.then((response) => response.json())
.then((result) => {location.reload()})
.catch((err) => {console.log(err)})
}
</code></pre>
<p><strong>views.py:</strong></p>
<pre><code> if request.method == 'PUT':
user = CustomUser.objects.get(username=kwargs.get('username'))
switchType = json.load(request)['type']
switchData = json.load(request)['switch']
print(switchType, switchData)
if switchType == 'adminUser':
user.admin_user = switchData
elif switchType == 'admindata':
user.admin_data = switchData
elif switchType == 'Status':
if user.status == COMMON.USER_ACTIVE:
user.status = COMMON.USER_SUSPENDED
else:
user.status = COMMON.USER_ACTIVE
user.failed_logins = 0
user.save()
return JsonResponse({})
</code></pre>
<p>This is throwing an error at <code>switchData = json.load(request)['switch']</code>. I don't know why it is throwing the error there as the previous line <code>switchType = json.load(request)['type']</code> works as expected and returns the value that was passed to it.</p>
<p><code>data</code> is an integer, either 1 or 0. I have tried making this value a string, but it throws the same error.</p>
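<p>The likely cause: <code>json.load(request)</code> reads and consumes the request body stream, so the second <code>json.load(request)</code> sees an empty stream and raises exactly "Expecting value: line 1 column 1 (char 0)". Parse once and reuse the dict. A minimal sketch with a file-like object standing in for the request:</p>

```python
import io
import json

body = io.BytesIO(b'{"type": "adminUser", "switch": 1}')

data = json.load(body)        # the first read consumes the stream
switch_type = data["type"]    # reuse the parsed dict...
switch_data = data["switch"]  # ...instead of calling json.load again
print(switch_type, switch_data)
```

<p>In the view that means <code>data = json.load(request)</code> once, then <code>data['type']</code> and <code>data['switch']</code>.</p>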
|
<javascript><python><django><fetch>
|
2022-12-24 11:08:31
| 2
| 2,467
|
Ross
|
74,907,157
| 19,826,650
|
How to change color figure dots of matplotlib cursor? in python
|
<p>I have CSV data of longitude, latitude and labels:</p>
<pre><code> Longitude Latitude
0 106.895231 -6.302275
1 106.900976 -6.285152
2 106.873755 -6.237447
3 106.894059 -6.238875
4 106.820816 -6.311941
.. ... ...
225 106.938847 -6.131683
226 106.937381 -6.109117
227 106.932118 -6.147447
228 106.958474 -6.155166
229 106.862266 -6.129799
</code></pre>
<p>and labels</p>
<pre><code>0 TMII
1 Monumen Pancasila Sakti
2 Taman Simanjuntak
3 Mall Cipinang Indah
4 Kebun Binatang Ragunan
...
225 Not Categorized
226 Not Categorized
227 Not Categorized
228 Not Categorized
229 Not Categorized
Name: Wisata, Length: 230, dtype: object
</code></pre>
<p>Then I have a matplotlib figure with a hover cursor, created with the code below:</p>
<pre><code>X, Y, labels = df['Latitude'], df['Longitude'], df['Wisata']
Total = df['Wisata'].sum()
fig, ax = plt.subplots()
line, = ax.plot(X, Y, 'ro')
# for color in ['tab:red','tab:green','tab:blue','tab:purple','tab:forestgreen',
# 'tab:maroon','tab:sienna','tab:steelblue','tab:hotpink','tab:darkorchid',
# 'tab:navy','tab:orange','tab:lime','tab:black','tab:turquoise',
# 'tab:salmon','tab:magenta','tab:gold','tab:brown','tab:grey']:
# n = Total
# x, y = np.random.rand(2, n)
# scale = 200.0 * np.random.rand(n)
# ax.scatter(x, y, c=color, s=scale, label=Total,
# alpha=0.3, edgecolors='none')
# ax.legend()
# ax.grid(True)
#plt.scatter(X, y, c=labels, cmap=plt.colors.ListedColormap(mcolors))
mpl.cursor(ax).connect(
"add", lambda sel: sel.annotation.set_text(labels[sel.index]))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/jG8fj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jG8fj.png" alt="output for now" /></a>
I want to give every dots different color based on different labels that i have (currently there is 20 different labels).</p>
<p>Any suggestions on the correct way to do that?</p>
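<p>One way, sketched with a small stand-in DataFrame: <code>pandas.factorize</code> maps each distinct label to an integer code, which <code>scatter</code> can color through a qualitative colormap such as <code>tab20</code> (which happens to have 20 colors, matching the 20 labels):</p>

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen for this sketch
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "Latitude":  [-6.30, -6.28, -6.23, -6.13],
    "Longitude": [106.89, 106.90, 106.87, 106.93],
    "Wisata":    ["TMII", "Monumen Pancasila Sakti", "TMII", "Not Categorized"],
})

codes, uniques = pd.factorize(df["Wisata"])  # one integer per distinct label

fig, ax = plt.subplots()
scatter = ax.scatter(df["Latitude"], df["Longitude"], c=codes, cmap="tab20")
ax.legend(scatter.legend_elements()[0], uniques, title="Wisata")
```

<p>The <code>mpl.cursor(...)</code> annotation hookup from the question should keep working on the axes unchanged.</p>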
|
<python><matplotlib>
|
2022-12-24 10:55:49
| 1
| 377
|
Jessen Jie
|
74,907,060
| 17,148,496
|
Taking two samples from the data but with different observations
|
<p>My data is made of about 9000 observations and 20 features (Edit - Pandas dataframe). I've taken a sample of 200 observations like this and conducted some analysis on it:</p>
<pre><code>sample_data = data.sample(n = 200)
</code></pre>
<p>Now I want to randomly take a sample of 1000 observations from the original data, with none of the observations that showed up in the previous n = 200 sample. How do I do that?</p>
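<p>One way, assuming the index is unique: drop the rows of the first sample by index, then sample the remainder:</p>

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({"x": np.arange(9000)})

sample_data = data.sample(n=200)
rest = data.drop(sample_data.index)   # the 8800 rows not yet drawn
second_sample = rest.sample(n=1000)

# No row index appears in both samples.
print(sample_data.index.intersection(second_sample.index).empty)
```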
|
<python><pandas><dataframe><sampling>
|
2022-12-24 10:39:19
| 1
| 375
|
Kev
|
74,906,942
| 12,108,866
|
Parent class instance variable getting overridden when executing from the parent class
|
<pre class="lang-py prettyprint-override"><code>class Foo:
def __init__(self):
self.model = "A"
def create(self):
        print(f"Model in class Foo {self.model}")
class Bar(Foo):
def __init__(self):
super().__init__()
self.model = "B"
def save(self):
super().create()
print(f"Model in class Bar {self.model}")
bar = Bar()
bar.save()
</code></pre>
<p>Output:</p>
<blockquote>
<p>Model in class Foo B</p>
<p>Model in class Bar B</p>
</blockquote>
<p>Why is the variable <code>model</code> in class <code>Foo</code> getting overridden when it is set in the parent class? Is this really what should happen? I want it to print <code>A</code> when executing from the parent class. What's going wrong in the code?</p>
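<p>There is only one <code>model</code> attribute per instance: <code>Bar.__init__</code> first runs <code>Foo.__init__</code> (setting it to <code>"A"</code>) and then immediately overwrites it with <code>"B"</code>, so every later read, including from <code>Foo.create</code>, sees <code>"B"</code>. If the parent should report its own value, give it a value the child does not overwrite; one possible restructuring:</p>

```python
class Foo:
    model = "A"  # Foo's own (class-level) value

    def create(self):
        # Read Foo's value explicitly instead of the instance attribute.
        print(f"Model in class Foo {Foo.model}")

class Bar(Foo):
    def __init__(self):
        self.model = "B"  # instance attribute; shadows Foo.model

    def save(self):
        super().create()
        print(f"Model in class Bar {self.model}")

Bar().save()
# Model in class Foo A
# Model in class Bar B
```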
|
<python><class><inheritance><attributes>
|
2022-12-24 10:17:51
| 2
| 343
|
ABHIJITH EA
|
74,906,750
| 6,599,648
|
Python Flask WTForm Email validation does not allow a dash
|
<p>I'm looking to validate an email input field for a user in Flask. I'm currently using WTForms Email validation, but this does not allow the user to input a dash as part of their email address. My code is below:</p>
<pre class="lang-py prettyprint-override"><code>from flask_wtf import FlaskForm
from wtforms import StringField
from wtforms.validators import DataRequired, Email
class RegistrationForm(FlaskForm):
email = StringField('Email', validators=[DataRequired(), Email()])
</code></pre>
<p>How can I allow a dash in the email address but still protect against malicious inputs?</p>
|
<python><flask><flask-wtforms><wtforms>
|
2022-12-24 09:36:36
| 1
| 613
|
Muriel
|
74,906,457
| 7,462,275
|
Problem with apply(int) to convert string to int in pandas
|
<p>This question follows the question: <a href="https://stackoverflow.com/q/74903173/7462275">Problem in Pandas : impossible to do sum of int with arbitrary precision</a> and I used the accepted answer from there: <code>df["my_int"].apply(int).sum()</code></p>
<p>But it does not work in all cases.</p>
<p>For example, with this file</p>
<pre><code>my_int
9220426963983292163
5657924282683240
</code></pre>
<p>The output is <code>-9220659185443576213</code>.</p>
<p>After looking at the <code>apply(int)</code> output, I understand the problem. In this case, <code>apply(int)</code> returns <code>dtype:int64</code>.</p>
<pre><code>0 9220426963983292163
1 5657924282683240
Name: my_int, dtype: int64
</code></pre>
<p>But with large numbers, it returns <code>dtype:object</code>:</p>
<pre><code>0 1111111111111111111111111111111111111111111111...
1 2222222222222222222222222222222222222222222222...
Name: my_int, dtype: object
</code></pre>
<p>Is it possible to solve it with pandas?
Or should I follow <a href="https://stackoverflow.com/a/74903381/7462275">Tim Robert's answer</a> from the previous question?</p>
<h1>Edit 1:</h1>
<p>An awful solution: a line with a large integer is added to the end of the file:</p>
<pre><code>my_int
9220426963983292163
5657924282683240
11111111111111111111111111111111111111111111111111111111111111111111111111
</code></pre>
<p>Then the sum is done over all lines except the last one:</p>
<pre><code>data['my_int'].apply(int).iloc[:-1].sum()
</code></pre>
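<p>An alternative workaround without the sentinel row: the wraparound happens because <code>apply(int)</code> yields an <code>int64</code> Series whenever every value fits in 64 bits, and <code>int64</code> overflows on <code>sum()</code> (which is exactly where <code>-9220659185443576213</code> comes from). Casting to <code>object</code> first keeps Python's arbitrary-precision ints:</p>

```python
import pandas as pd

df = pd.DataFrame({"my_int": ["9220426963983292163", "5657924282683240"]})

# astype(object) boxes the values as Python ints, so sum() never wraps.
total = df["my_int"].apply(int).astype(object).sum()
print(total)  # 9226084888265975403

# Equivalent without pandas dtypes in the way:
total_py = sum(int(x) for x in df["my_int"])
```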
|
<python><pandas>
|
2022-12-24 08:29:19
| 2
| 2,515
|
Stef1611
|