| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
74,956,129
| 20,176,161
|
Computing % return using groupby returns NaN
|
<p>I have a dataframe that looks like this</p>
<pre><code> city Year Asset_Type TotalTransac NbBiens psqm_city
0 Agadir 2010.0 Appart 3225 3156 6276.923077
1 Agadir 2010.0 Maison_Dar 37 37 8571.428571
2 Agadir 2010.0 Villa 107 103 6279.469027
3 Agadir 2011.0 Appart 2931 2816 6516.853933
4 Agadir 2011.0 Maison_Dar 33 32 9000.000000
.. ... ... ... ... ... ...
171 Tanger 2019.0 Maison_Dar 268 263 12158.385093
172 Tanger 2020.0 Appart 5223 5180 5760.869565
173 Tanger 2020.0 Maison_Dar 193 192 13500.000000
</code></pre>
<p>I am trying to compute the % return by <code>Asset_Type</code> using the following function:</p>
<pre><code>Index_byCity["Returns"] = Index_byCity.groupby(['city','Year','Asset_Type'])['psqm_city'].apply(lambda x: (x - x.shift(1))-1 )
</code></pre>
<p>But I get an unexpected output:</p>
<pre><code>0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
..
171 NaN
172 NaN
173 NaN
174 NaN
175 NaN
Name: Returns, Length: 176, dtype: float64
</code></pre>
<p>I don't understand why I am getting this output, since dtype shows <code>psqm_city</code> as a float.</p>
<pre><code>city object
Year float64
Asset_Type object
TotalTransac int64
NbBiens int64
psqm_city float64
dtype: object
</code></pre>
<p>Can someone help, please?</p>
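<p>A minimal sketch of one direction that is often suggested (an assumption about the intent: the period-over-period return of <code>psqm_city</code> within each city and asset type). Grouping by <code>Year</code> as well leaves a single row per group, so <code>shift(1)</code> has nothing to look back to and every result is NaN; grouping only by <code>city</code> and <code>Asset_Type</code> and using <code>pct_change</code> avoids both that and the formula issue:</p>
<pre><code># sketch: sort so consecutive years are adjacent within each group
Index_byCity = Index_byCity.sort_values(['city', 'Asset_Type', 'Year'])
Index_byCity['Returns'] = (
    Index_byCity.groupby(['city', 'Asset_Type'])['psqm_city'].pct_change()
)
</code></pre>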
|
<python><dataframe><lambda><group-by>
|
2022-12-29 21:17:05
| 0
| 419
|
bravopapa
|
74,955,937
| 14,462,728
|
Issue installing Scrapy using Python 3.11 on Windows 11
|
<p>I'm using Windows 11, Python 3.10.1. I created a virtual environment using <code>venv</code>, and installed scrapy, and all requirements. <strong>Everything worked perfectly!</strong> Then I installed Python 3.11.1, created a virtual environment using <code>venv</code>, installed scrapy, and I received an error:</p>
<pre class="lang-none prettyprint-override"><code> Building wheel for twisted-iocpsupport (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for twisted-iocpsupport (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
running bdist_wheel
running build
running build_ext
building 'twisted_iocpsupport.iocpsupport' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for twisted-iocpsupport
Failed to build twisted-iocpsupport
ERROR: Could not build wheels for twisted-iocpsupport, which is required to install pyproject.toml-based projects
</code></pre>
<h2>Issue of concern</h2>
<p>I already had Microsoft Visual C++ 14.0 installed on my device, and I was able to run scrapy 2.7.1 successfully using Python 3.10. I only experienced this issue with Python 3.11.</p>
<h2>Mitigation attempts</h2>
<ol>
<li>Installed a fresh copy of Microsoft Visual C++ 14.0 from the website ~ <strong>No resolve.</strong> ❌</li>
<li>Uninstalled, reinstalled Python 3.11 ensuring <code>PATH</code> was installed ~ <strong>No resolve.</strong> ❌</li>
<li>Created a virtual environment using <code>pipenv</code>, and Python 3.11. Same error as using <code>venv</code> ~ <strong>no resolve.</strong> ❌</li>
<li>Created a virtual environment using <code>venv</code>, and Python 3.10 ~ <strong>Scrapy works!</strong> ✅</li>
<li>Created a virtual environment using <code>pipenv</code>, and Python 3.10 ~ <strong>Scrapy works!</strong> ✅</li>
</ol>
<p>So it seems like this is specific to Python 3.11. Right now I am back to using Python 3.10 for Scrapy projects.</p>
<h2>Question</h2>
<p>How can I resolve this issue, and use Python 3.11 for Scrapy projects?</p>
|
<python><visual-c++><pip><scrapy><pipenv>
|
2022-12-29 20:50:06
| 1
| 454
|
Seraph
|
74,955,744
| 10,118,393
|
Confluent kafka create_topics is failing to create kafka topics from pycharm but works fine from python terminal
|
<p>I am just playing around with the confluent-kafka Python APIs. I am running a 3-worker Kafka cluster locally (<em><strong>localhost:9902,localhost:9903,localhost:9904</strong></em>) and trying to create a topic via the confluent-kafka Python API using the function below.</p>
<pre><code>from confluent_kafka import admin
from typing import Union
consumer_config = {
'bootstrap.servers': 'localhost:9092'
}
# utils function for creating new kafka topics
def create_new_topics(names: Union[str, list], partitions: int = 1, replication: int = 1):
if isinstance(names, str):
names = names.split(',')
new_topics = [admin.NewTopic(name, partitions, replication) for name in names]
fs = admin.AdminClient(consumer_config).create_topics(new_topics, validate_only=False)
for topic, f in fs.items():
try:
f.result() # The result itself is None
print("Topic {} created".format(topic))
except Exception as e:
print("Failed to create topic {}: {}".format(topic, e))
return admin.AdminClient(consumer_config).list_topics().topics
</code></pre>
<p>But calling this function from PyCharm results in the Kafka exception below and the topics are not created. If I run it in a Python terminal, it works!</p>
<pre><code>#Calling the above utils function for topic creation
topics=['test1','test2']
create_new_topics(topics)
#Exception received in Pycharm
Failed to create topic test1: KafkaError{code=_DESTROY,val=-197,str="Handle is terminating: Success"}
Failed to create topic test2: KafkaError{code=_DESTROY,val=-197,str="Handle is terminating: Success"}
</code></pre>
<p>I don't understand what the issue is with PyCharm alone. Is there any listener configuration I need to add for PyCharm?</p>
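<p>One possible cause (a sketch, not verified against this exact setup): the <code>AdminClient</code> created inline has no reference held to it, so it can be garbage-collected and its handle destroyed before the topic-creation futures complete, which matches the <code>_DESTROY</code> error. Keeping a single client alive for the whole call avoids that:</p>
<pre><code>def create_new_topics(names: Union[str, list], partitions: int = 1, replication: int = 1):
    if isinstance(names, str):
        names = names.split(',')
    client = admin.AdminClient(consumer_config)  # keep one reference alive
    new_topics = [admin.NewTopic(name, partitions, replication) for name in names]
    fs = client.create_topics(new_topics, validate_only=False)
    for topic, f in fs.items():
        try:
            f.result()  # blocks until the broker confirms creation
            print("Topic {} created".format(topic))
        except Exception as e:
            print("Failed to create topic {}: {}".format(topic, e))
    return client.list_topics().topics
</code></pre>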
|
<python><apache-kafka><pycharm>
|
2022-12-29 20:22:42
| 1
| 1,102
|
akhil pathirippilly
|
74,955,725
| 3,322,222
|
Getting the generic arguments of a subclass
|
<p>I have a generic base class and I want to be able to inspect the provided type for it. My approach was using <a href="https://docs.python.org/3/library/typing.html#typing.get_args" rel="nofollow noreferrer"><code>typing.get_args</code></a> which works like so:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, Tuple, TypeVarTuple, get_args
T = TypeVarTuple("T")
class Base(Generic[*T]):
values: Tuple[*T]
Example = Base[int, str]
print(get_args(Example)) # (<class 'int'>, <class 'str'>)
</code></pre>
<p>But when I'm inheriting the class, I'm getting an empty list of parameters like so:</p>
<pre class="lang-py prettyprint-override"><code>class Example2(Base[int, str]):
pass
print(get_args(Example2)) # ()
</code></pre>
<p>What I actually need is to know what types are expected for the <code>values</code> property. I might have the wrong approach but I've also tried to use <a href="https://docs.python.org/3/library/typing.html#typing.get_type_hints" rel="nofollow noreferrer"><code>typing.get_type_hints</code></a> which seems to just return <code>Tuple[*T]</code> as the type.</p>
<p>So how can I get the typed parameters?</p>
<p>Edit: I need to know the types of the <em>class</em>, not the object.</p>
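<p>A minimal sketch of one way to read the arguments off a subclass (assuming they are fixed at class-definition time): the parameterized base is preserved on <code>__orig_bases__</code>, and <code>get_args</code> can be applied to that:</p>
<pre><code>from typing import get_args

# Example2.__orig_bases__ == (Base[int, str],)
print(get_args(Example2.__orig_bases__[0]))  # (<class 'int'>, <class 'str'>)
</code></pre>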
|
<python><generics><python-typing>
|
2022-12-29 20:20:56
| 2
| 781
|
yotamN
|
74,955,701
| 10,146,441
|
RabbitMQ headers binding is not working as expected, Header exchange is routing message to all the bound queues
|
<p>I want to route RabbitMQ messages based on headers, and I have created the appropriate infrastructure including a headers exchange, queues, bindings etc.
Below is the complete code for the <code>consumer.py</code></p>
<pre><code>import pika
# define variables
url = "amqp://rabbitmq-host/"
exchange = 'headers-exchange'
s_queue = 'StudentQueue'
t_queue = 'TeacherQueue'
# create connection
connection_parameters = pika.URLParameters(url)
connection = pika.BlockingConnection(connection_parameters)
channel = connection.channel()
# declare exchange
channel.exchange_declare(
exchange=exchange,
exchange_type='headers',
durable=True
)
# declare student queue
channel.queue_declare(
queue=s_queue,
durable=True,
)
# bind student queue
channel.queue_bind(
exchange=exchange,
queue=s_queue,
# bind arguments:
# match all the given headers
# match x-queue should be equal to student
arguments={
"x-match": "all",
"x-queue": "student"
},
)
# declare teacher queue
channel.queue_declare(
queue=t_queue,
durable=True,
)
# bind teacher queue
channel.queue_bind(
exchange=exchange,
queue=t_queue,
# bind arguments:
# match all the given headers
# match x-queue should be equal to teacher
arguments={
"x-match": "all",
"x-queue": "teacher"
},
)
</code></pre>
<p>and publish module(<code>publish.py</code>) looks like below:</p>
<pre><code>import datetime
import time
import uuid
import pika
# define variables
url = "amqp://rabbitmq-host/"
exchange = 'headers-exchange'
# create connection
connection_parameters = pika.URLParameters(url)
connection = pika.BlockingConnection(connection_parameters)
channel = connection.channel()
# declare exchange
channel.exchange_declare(
exchange=exchange,
exchange_type='headers',
durable=True
)
# define message id
id_ = uuid.uuid4()
message_id = id_.hex
timestamp = time.mktime(datetime.datetime.now().timetuple())
# define message property esp. headers
properties = pika.BasicProperties(
content_type="application/json",
# match x-queue = student header
# to the StudentQueue
    headers={"x-queue": "student"},
message_id=message_id,
timestamp=timestamp,
delivery_mode=2,
)
# publish the message for student queue
channel.basic_publish(
exchange=exchange,
routing_key="",
body="for student queue only",
properties=properties,
)
</code></pre>
<p>The published message should only be delivered to <code>StudentQueue</code> because we have <code>headers={"x-queue": "student"}</code>, but it is getting delivered to <code>TeacherQueue</code> as well, which is incorrect.</p>
<p>The application versions are:</p>
<pre><code>RabbitMQ: 3.6.16
Erlang: 20.3.4
Pika: 1.2.1
</code></pre>
<p>Could someone point out the obvious thing I have missed? Could it be related to mismatched versions? Any help would be really appreciated.</p>
<p>Cheers,
DD.</p>
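<p>One detail worth checking (a sketch of a possible fix, assuming the bindings are otherwise correct): a headers exchange does not use headers whose names begin with <code>x-</code> when evaluating matches, so a binding whose only argument besides <code>x-match</code> starts with <code>x-</code> ends up matching every message and the exchange fans out to all bound queues. Renaming the header avoids the reserved prefix:</p>
<pre><code># binding side: use a header name that does not start with "x-"
channel.queue_bind(
    exchange=exchange,
    queue=s_queue,
    arguments={"x-match": "all", "queue": "student"},
)

# publisher side: use the same header name
properties = pika.BasicProperties(
    content_type="application/json",
    headers={"queue": "student"},
    message_id=message_id,
    timestamp=timestamp,
    delivery_mode=2,
)
</code></pre>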
|
<python><rabbitmq><pika><python-pika>
|
2022-12-29 20:17:34
| 1
| 684
|
DDStackoverflow
|
74,955,684
| 15,171,387
|
How to perform filter map reduce equivalent in Python?
|
<p>Let's assume there are two lists like:</p>
<pre><code>list1 = ["num", "categ"]
all_names = ["col_num1", "col_num2", "col_num3", "col_categ1", "col_categ2", "col_bol1", "col_bol2", "num_extra_1", "num_extra_2", "categ_extra_1", "categ_extra_2"]
</code></pre>
<p>I am trying to create a new list by filtering for the elements that 1) do <strong>not</strong> contain "extra" and 2) contain one of the elements of <code>list1</code>.</p>
<p>For example, I expect to get something like this:</p>
<pre><code>l=["col_num1", "col_num2", "col_num3", "col_categ1", "col_categ2"]
</code></pre>
<p>In PySpark this can be done using filter, map and reduce, but I am not sure what the equivalent is in Python. For now, I am doing this in two steps as below, but I think there might be a more straightforward way of doing this.</p>
<pre><code>temp_list = [a for a in all_names if "extra" not in a]
print(temp_list)
['col_num1', 'col_num2', 'col_num3', 'col_categ1', 'col_categ2', 'col_bol1', 'col_bol2']
l = [b for b in temp_list for c in list1 if c in b]
print(l)
['col_num1', 'col_num2', 'col_num3', 'col_categ1', 'col_categ2']
</code></pre>
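<p>A minimal single-comprehension sketch, using <code>any</code> for the second condition:</p>
<pre><code>l = [a for a in all_names
     if "extra" not in a and any(c in a for c in list1)]
print(l)
# ['col_num1', 'col_num2', 'col_num3', 'col_categ1', 'col_categ2']
</code></pre>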
|
<python><python-3.x><list><filter>
|
2022-12-29 20:15:01
| 2
| 651
|
armin
|
74,955,525
| 3,311,276
|
Understand setting up a database using SQLAlchemy and avoid the error AttributeError: type object 'User' has no attribute 'query' in a Flask app
|
<p>I am a beginner writing a todo app in Flask and am new to writing database applications; I have never set up a database for any production app myself.</p>
<p>I have written this code (<strong>models.py</strong>) that uses the SQLAlchemy ORM to define the table models.</p>
<pre><code>from sqlalchemy import create_engine, Column, Integer, String, Boolean
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from datetime import datetime
from .extensions import db
from .config import app
# define base for declarative ORM
Base = declarative_base()
class User(Base):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
username = Column(String, unique=True)
password = Column(String)
def __repr__(self):
return '<User {}>'.format(self.username)
class Todo(Base):
__tablename__ = 'todos'
id = Column(db.Integer, primary_key=True)
title = Column(db.String(255))
description = Column(db.Text)
created_at = Column(db.DateTime, default=datetime.utcnow)
completed = Column(Boolean)
user_id = Column(db.Integer, db.ForeignKey('user.id'))
user = relationship('User', back_populates='todos')
def __repr__(self):
return '<Todo {}>'.format(self.id)
User.todos = relationship('Todo', order_by=Todo.id, back_populates='user')
# create database engine
engine = create_engine(app.config['SQLALCHEMY_DATABASE_URI'])
# create all tables
Base.metadata.create_all(engine)
</code></pre>
<p><strong>extensions.py</strong></p>
<pre><code>from .config import app
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy(app)
</code></pre>
<p><strong>config.py</strong></p>
<pre><code>from flask import Flask
from datetime import timedelta
app = Flask(__name__)
</code></pre>
<p>finally the <strong>start.py</strong></p>
<pre><code>from todo_app.extensions import db
from flask import Flask
from todo_app.endpoint import todo_bp, auth_bp
def create_app(config_file='settings.cfg'):
app = Flask(__name__, instance_relative_config=True)
app.config.from_pyfile(config_file)
db.init_app(app)
app.register_blueprint(auth_bp)
app.register_blueprint(todo_bp)
return app
if __name__ == '__main__':
app = create_app()
app.run()
</code></pre>
<p>but when I run the command <strong>python start.py</strong></p>
<p>I get the following error:</p>
<pre><code>$ python start.py
* Running on http://127.0.0.1:5000 (Press CTRL+C to quit)
[2022-12-29 20:01:57,004] ERROR in app: Exception on / [GET]
Traceback (most recent call last):
File "/home/ciasto/Dev/crud_app/crud_app_env/lib/python3.10/site-packages/flask/app.py", line 2077, in wsgi_app
response = self.full_dispatch_request()
File "/home/ciasto/Dev/crud_app/crud_app_env/lib/python3.10/site-packages/flask/app.py", line 1525, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/ciasto/Dev/crud_app/crud_app_env/lib/python3.10/site-packages/flask_restplus/api.py", line 584, in error_router
return original_handler(e)
File "/home/ciasto/Dev/crud_app/crud_app_env/lib/python3.10/site-packages/flask/app.py", line 1523, in full_dispatch_request
rv = self.dispatch_request()
File "/home/ciasto/Dev/crud_app/crud_app_env/lib/python3.10/site-packages/flask/app.py", line 1509, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/home/ciasto/Dev/crud_app/src/todo_app/endpoint.py", line 105, in index
user = User.query.filter_by(id=user_id).first()
AttributeError: type object 'User' has no attribute 'query'
</code></pre>
<p>This is the <strong>endpoint.py</strong></p>
<pre><code># todo index endpoint
@todo_bp.route('/')
def index():
# get the logged in user
user_id = session.get('user_id')
user = User.query.filter_by(id=user_id).first()
# get the todo items for the user
todos = user.todos
return render_template('index.html', todos=todos)
</code></pre>
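<p>A sketch of one possible direction (assuming Flask-SQLAlchemy is the intended ORM layer): the <code>.query</code> attribute only exists on models that inherit from Flask-SQLAlchemy's <code>db.Model</code>; classes built on a plain <code>declarative_base()</code> <code>Base</code> do not get it. Either define the models on <code>db.Model</code>, or keep the plain Base and query through a session:</p>
<pre><code># Option 1: define the model on Flask-SQLAlchemy's base so User.query exists
class User(db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String, unique=True)
    password = db.Column(db.String)

# Option 2: keep the declarative Base and query via the session instead
user = db.session.query(User).filter_by(id=user_id).first()
</code></pre>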
|
<python><flask><sqlalchemy><flask-sqlalchemy>
|
2022-12-29 19:57:51
| 1
| 8,357
|
Ciasto piekarz
|
74,955,487
| 4,391,249
|
How would I implement my own container with type hinting?
|
<p>Say I want to make a container class <code>MyContainer</code> and I want to enable its use in type hints like <code>def func(container: MyContainer[SomeType])</code>, in a similar way to how I would be able to do <code>def func(ls: list[SomeType])</code>. How would I do that?</p>
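<p>A minimal sketch (assuming the container holds a single element type): inherit from <code>typing.Generic</code> parameterized with a <code>TypeVar</code>, which makes the class subscriptable in annotations:</p>
<pre><code>from typing import Generic, TypeVar

T = TypeVar("T")

class MyContainer(Generic[T]):
    def __init__(self) -> None:
        self._items: list[T] = []

    def add(self, item: T) -> None:
        self._items.append(item)

def func(container: MyContainer[int]) -> None:
    container.add(1)
</code></pre>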
|
<python><types>
|
2022-12-29 19:53:11
| 1
| 3,347
|
Alexander Soare
|
74,955,415
| 8,681,229
|
decorator argument: unable to access instance attribute
|
<h2>Context</h2>
<p>class <code>MyMockClient</code> below has two methods:</p>
<ul>
<li><code>push_log</code>: pushes one <code>log</code> from list <code>self.logs</code> and pops it</li>
<li><code>push_event</code>: pushes one <code>event</code> from list <code>self.events</code> and pops it</li>
</ul>
<p>Both methods are decorated using <code>__iterate_pushes</code>, which essentially
iterates push_event and push_log until their respective lists are entirely popped
AND, in the full program, applies some extra logic (e.g. tracking how many objects were pushed).</p>
<h2>Problem</h2>
<p>The decorator <code>__iterate_pushes</code> needs to access <code>self.logs</code> and <code>self.events</code>.
However, it does not seem to work. In the example below, you can see that the program fails because self.logs is not available when the function is defined.</p>
<p>Any idea how to solve this?</p>
<h2>Example</h2>
<pre><code>class MyMockClient:
def __init__(self, logs, events):
self.logs = logs
self.events = events
def __iterate_pushes(self, obj):
"""
Decorate function to iterate the 'pushes' until
the lists are entirely popped.
"""
def outer_wrapper(self, decorated_func):
def inner_wrapper(self):
responses = []
print(f"len: {len(obj)}")
while len(obj) > 0: # iterate until list is entirely popped
response = decorated_func() # when executed, it pops the first item
responses.append(response)
print(f"len: {len(obj)}")
return responses
return inner_wrapper
return outer_wrapper
@__iterate_pushes(obj = self.logs)
def push_log(self):
# Send first log and pops it if successful
log = self.logs[0]
print(log, "some extra logics for logs")
response = "OK" # let's assume API call is OK
if response == "OK":
self.logs.pop(0)
return response
@__iterate_pushes(obj = self.events)
def push_event(self):
# Send first event and pops it if successful
event = self.events[0]
print(event, "some extra logics for events")
response = "OK" # let's assume API call is OK
if response == "OK":
self.events.pop(0)
return response
my_mock_client = MyMockClient(
logs = [1, 2, 3],
events = [4, 5, 6]
)
my_mock_client.push_log()
</code></pre>
<h2>Error</h2>
<pre><code>---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[2], line 1
----> 1 class MyMockClient:
2 def __init__(self, logs, events):
3 self.logs = logs
Cell In[2], line 25, in MyMockClient()
21 return inner_wrapper
23 return outer_wrapper
---> 25 @__iterate_pushes(obj = self.logs)
26 def push_log(self):
27 # Send first log and pops it if successful
28 log = self.logs[0]
29 print(log, "some extra logics for logs")
NameError: name 'self' is not defined
</code></pre>
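<p>One common workaround (a sketch, not the only possible design): decorator arguments are evaluated while the class body is being executed, when no instance exists, so <code>self</code> cannot appear there. Passing the <em>attribute name</em> instead and resolving it on <code>self</code> inside the wrapper sidesteps the problem:</p>
<pre><code>import functools

def iterate_pushes(attr_name):
    """Decorate a push method; the list is looked up on self at call time."""
    def outer_wrapper(decorated_func):
        @functools.wraps(decorated_func)
        def inner_wrapper(self):
            obj = getattr(self, attr_name)   # e.g. self.logs or self.events
            responses = []
            while len(obj) > 0:              # iterate until the list is popped empty
                responses.append(decorated_func(self))
            return responses
        return inner_wrapper
    return outer_wrapper

class MyMockClient:
    def __init__(self, logs, events):
        self.logs = logs
        self.events = events

    @iterate_pushes("logs")
    def push_log(self):
        log = self.logs[0]
        response = "OK"   # assume the API call succeeded
        if response == "OK":
            self.logs.pop(0)
        return response

MyMockClient(logs=[1, 2, 3], events=[4, 5, 6]).push_log()  # ['OK', 'OK', 'OK']
</code></pre>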
|
<python><class><methods><decorator>
|
2022-12-29 19:43:32
| 1
| 3,294
|
Seymour
|
74,955,370
| 6,483,314
|
How to sample based on long-tail distribution from a pandas dataframe?
|
<p>I have a pandas dataframe of 1000 elements, with value counts shown below. I would like to sample from this dataset in a way that the value counts follow a long-tailed distribution. For example, to maintain the long-tailed distribution, <code>sample4</code> may only end up with a value count of 400.</p>
<pre><code> a
sample1 750
sample2 746
sample3 699
sample4 652
sample5 622
...
sample996 4
sample997 3
sample998 2
sample999 2
sample1000 1
</code></pre>
<p>I tried using this code:</p>
<pre><code>import numpy as np
# Calculate the frequency of each element in column 'area'
freq = df['a'].value_counts()
# Calculate the probability of selecting each element based on its frequency
prob = freq / freq.sum()
# Sample from the df_wos dataframe without replacement
df_sampled = df.sample(n=len(df), replace=False, weights=prob.tolist())
</code></pre>
<p>However, I end up with the error <code>ValueError: Weights and axis to be sampled must be of same length</code>.</p>
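<p>A sketch of one way to avoid the length error (assuming the intent is to weight each row by its own value in <code>a</code>): <code>weights</code> must have one entry per row of the frame being sampled, so the per-row column itself can serve as the weights instead of the shorter <code>value_counts()</code> result:</p>
<pre><code># weight each row by its own value so large rows are drawn more often
n = 400  # hypothetical sample size
df_sampled = df.sample(n=n, replace=False, weights=df['a'])
</code></pre>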
|
<python><pandas>
|
2022-12-29 19:38:21
| 1
| 369
|
HumanTorch
|
74,955,363
| 5,713,713
|
Use Python version v0 task fails in azure pipeline
|
<pre><code>- task: UsePythonVersion@0
inputs:
versionSpec: '3.x'
addToPath: true
architecture: 'x64'
</code></pre>
<p><strong>I get the following error on Ubuntu 18.04:</strong></p>
<pre><code>##[error]Version spec 3.x for architecture x64 did not match any version in Agent.ToolsDirectory.
</code></pre>
|
<python><azure-devops><azure-pipelines><pipeline><azure-pipelines-build-task>
|
2022-12-29 19:37:45
| 1
| 443
|
Mukesh Bharsakle
|
74,955,285
| 1,362,485
|
Dask rolling function fails with message to repartition dataframe
|
<p>I'm getting this error when I run a dask rolling function to calculate a moving average:</p>
<pre><code>df['some_value'].rolling(10).mean()
</code></pre>
<p>Error:</p>
<blockquote>
<p>Partition size is less than overlapping window size. Try using
"df.repartition" to increase the partition size.</p>
</blockquote>
<p>What does this message mean? Why is it asking to repartition the dataframe?</p>
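<p>A minimal sketch of the usual fix (assuming the partitions are simply smaller than the rolling window): Dask's rolling operations only share a small overlap between neighbouring partitions, so each partition must contain at least as many rows as the window. Repartitioning into fewer, larger partitions before rolling addresses the message:</p>
<pre><code># make each partition large enough to cover the 10-row window
df = df.repartition(npartitions=max(1, df.npartitions // 2))
# or, by target size: df = df.repartition(partition_size="100MB")

result = df['some_value'].rolling(10).mean()
</code></pre>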
|
<python><pandas><dask><dask-distributed><dask-dataframe>
|
2022-12-29 19:25:24
| 1
| 1,207
|
ps0604
|
74,955,261
| 1,145,760
|
Split a large object into constant size list entries via list comprehension
|
<p>How to split a <code>bytes</code> object into a list/tuple of constant size objects? Ignore padding. Something like</p>
<pre><code>max_size = 42
def foo(b: bytes):
return [b[i:j] for (i, j) in range(max_size)]
foo(b'a' * 100000)
</code></pre>
<p>but working.</p>
<p>The 'list comprehension' part is only because it's concise, readable and efficient. IMHO a <code>for()</code> loop has been ugly for a decade or two.</p>
<p>Desired output: list[bytes objects of specific length]</p>
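<p>A minimal sketch, stepping the start index by <code>max_size</code> and letting slicing handle the shorter tail chunk:</p>
<pre><code>max_size = 42

def foo(b: bytes) -> list:
    # one chunk per step; the final slice may be shorter than max_size
    return [b[i:i + max_size] for i in range(0, len(b), max_size)]

chunks = foo(b'a' * 100000)
print(len(chunks), len(chunks[0]), len(chunks[-1]))
</code></pre>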
|
<python><list-comprehension>
|
2022-12-29 19:22:04
| 1
| 9,246
|
Vorac
|
74,955,223
| 569,976
|
AWS kills my opencv script, resource limits exceeded?
|
<p>I have the following Python / OpenCV code which is supposed to take a filled out document (new.png), line it up with the reference document (ref.png) and put the result in output.png. Unfortunately, sometimes when I run it I get a "Killed" message in bash. I guess <a href="https://stackoverflow.com/q/19189522/569976">Python does this when it runs out of some resource</a> so maybe my code is just written inefficiently and could be improved upon?</p>
<p>Here's my code:</p>
<pre><code>import sys
import cv2
import numpy as np
if len(sys.argv) != 4:
print('USAGE')
print(' python3 diff.py ref.png new.png output.png')
sys.exit()
GOOD_MATCH_PERCENT = 0.15
def alignImages(im1, im2):
# Convert images to grayscale
#im1Gray = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
#im2Gray = cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY)
# Detect ORB features and compute descriptors.
orb = cv2.AKAZE_create()
#orb = cv2.ORB_create(500)
keypoints1, descriptors1 = orb.detectAndCompute(im1, None)
keypoints2, descriptors2 = orb.detectAndCompute(im2, None)
# Match features.
matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
matches = matcher.match(descriptors1, descriptors2, None)
# Sort matches by score
matches.sort(key=lambda x: x.distance, reverse=False)
# Remove not so good matches
numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
print(matches[numGoodMatches].distance)
matches = matches[:numGoodMatches]
# Draw top matches
imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
cv2.imwrite("matches.jpg", imMatches)
# Extract location of good matches
points1 = np.zeros((len(matches), 2), dtype=np.float32)
points2 = np.zeros((len(matches), 2), dtype=np.float32)
for i, match in enumerate(matches):
points1[i, :] = keypoints1[match.queryIdx].pt
points2[i, :] = keypoints2[match.trainIdx].pt
# Find homography
h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
# Use homography
height, width = im2.shape
im1Reg = cv2.warpPerspective(im1, h, (width, height))
return im1Reg, h
def removeOverlap(refBW, newBW):
# invert each
refBW = 255 - refBW
newBW = 255 - newBW
# get absdiff
xor = cv2.absdiff(refBW, newBW)
result = cv2.bitwise_and(xor, newBW)
# invert
result = 255 - result
return result
def offset(img, xOffset, yOffset):
# The number of pixels
num_rows, num_cols = img.shape[:2]
# Creating a translation matrix
translation_matrix = np.float32([ [1,0,xOffset], [0,1,yOffset] ])
# Image translation
img_translation = cv2.warpAffine(img, translation_matrix, (num_cols,num_rows), borderValue = (255,255,255))
return img_translation
# the ink will often bleed out on printouts ever so slightly
# to eliminate that we'll apply a "jitter" of sorts
refFilename = sys.argv[1]
imFilename = sys.argv[2]
outFilename = sys.argv[3]
imRef = cv2.imread(refFilename, cv2.IMREAD_GRAYSCALE)
im = cv2.imread(imFilename, cv2.IMREAD_GRAYSCALE)
imNew, h = alignImages(im, imRef)
refBW = cv2.threshold(imRef, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
newBW = cv2.threshold(imNew, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
for x in range (-2, 2):
for y in range (-2, 2):
newBW = removeOverlap(offset(refBW, x, y), newBW)
cv2.imwrite(outFilename, newBW)
</code></pre>
<p>Is there anything obvious that I could be doing to improve the speed of my program?</p>
<p>I'm running Python 3.5.3 and OpenCV 4.4.0. <code>uname -r</code> returns 4.14.301-224.520.amzn2.x86_64 (Linux).</p>
<p>Any ideas?</p>
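<p>One low-risk mitigation to try (a sketch, assuming the "Killed" message is the Linux OOM killer reacting to memory use during feature detection and matching): downscale both images before alignment, since AKAZE keypoints, brute-force matching and <code>drawMatches</code> all grow with image size:</p>
<pre><code>def downscale(img, max_dim=2000):
    """Shrink an image so its longest side is at most max_dim pixels."""
    h, w = img.shape[:2]
    scale = max_dim / max(h, w)
    if scale >= 1.0:
        return img
    return cv2.resize(img, (int(w * scale), int(h * scale)), interpolation=cv2.INTER_AREA)

imRef = downscale(cv2.imread(refFilename, cv2.IMREAD_GRAYSCALE))
im = downscale(cv2.imread(imFilename, cv2.IMREAD_GRAYSCALE))
</code></pre>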
|
<python><linux><amazon-web-services><opencv>
|
2022-12-29 19:16:20
| 1
| 16,931
|
neubert
|
74,955,215
| 13,860,217
|
Scrapy loop over spider
|
<p>I'd like to loop over my <code>scrapy.Spider</code> somehow like this</p>
<pre><code>for i in range(0, 10):
class MySpider(scrapy.Spider, ABC):
start_urls = ["example.com"]
def start_requests(self):
for url in self.urls:
if dec == i:
yield SplashRequest(url=url, callback=self.parse_data, args={"wait": 1.5})
def parse_data(self, response):
data= response.css("td.right.data").extract()
items["Data"] = data
yield items
settings = get_project_settings()
settings["FEED_URI"] = f"/../Data/data_{i}.json"
if __name__ == "__main__":
process = CrawlerProcess(settings)
process.crawl(MySpider)
process.start()
</code></pre>
<p>However, this yields</p>
<pre><code>twisted.internet.error.ReactorNotRestartable
</code></pre>
<p>Using</p>
<pre><code>process.start(stop_after_crawl=False)
</code></pre>
<p>executes the script for <code>i=0</code> but than hangs at <code>i=1</code></p>
|
<python><loops><scrapy><web-crawler>
|
2022-12-29 19:15:29
| 1
| 377
|
Michael
|
74,955,052
| 12,361,700
|
Create multiprocessing pool outside main to avoid recursion
|
<p>I have a py file with functions that require multiprocessing, so I do something like this:</p>
<pre><code>pool = Pool()
def function_():
pool.map(...)
</code></pre>
<p>Then, I'll import this file into the main one, but when I run <code>function_</code> I get:</p>
<blockquote>
<p>daemonic processes are not allowed to have children</p>
</blockquote>
<p>and this is usually due to the fact that multiprocessing will re-run the file where it's called (thus usually the pool has to be inserted in <code>if __name__ == "__main__"</code>, see here <a href="https://stackoverflow.com/questions/61290945/python-multiprocessing-gets-stuck">Python multiprocessing gets stuck</a>)... is there a way to avoid this?</p>
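<p>A minimal sketch of one way around this: create the pool lazily inside the function instead of at import time, so merely importing the module never spawns workers (the script that ultimately calls it still needs the usual <code>if __name__ == "__main__":</code> guard):</p>
<pre><code>from multiprocessing import Pool

def function_(data):
    # the pool is created only when the function is called,
    # not when the module is imported by a child process
    with Pool() as pool:
        return pool.map(some_work, data)   # some_work: hypothetical worker function
</code></pre>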
|
<python><multithreading><multiprocessing>
|
2022-12-29 18:55:47
| 2
| 13,109
|
Alberto
|
74,954,959
| 5,212,614
|
Can't upgrade geopandas
|
<p>I looked at the documentation for geopandas.</p>
<p><a href="https://geopandas.org/en/v0.4.0/install.html" rel="nofollow noreferrer">https://geopandas.org/en/v0.4.0/install.html</a></p>
<p>Apparently, this is how you install geopandas: <code>conda install -c conda-forge geopandas</code></p>
<p>I tried that and I'm getting version 0.9, but the newest version seems to be 0.12, and I can't seem to upgrade my version of geopandas. Also, when I try to run this: <code>geopandas.sjoin_nearest</code></p>
<p>I'm getting this error: <code>AttributeError: module 'geopandas' has no attribute 'sjoin_nearest'</code></p>
<p>Has anyone experienced this? Is there some kind of work-around?</p>
|
<python><python-3.x><geopandas>
|
2022-12-29 18:45:13
| 2
| 20,492
|
ASH
|
74,954,928
| 6,067,528
|
How to disable import logging from Pyinstaller executable
|
<p>How can I disable all the import logging that is fired off when running the executable compiled by Pyinstaller?</p>
<p>This is my set up:</p>
<pre><code>a = Analysis(["model_trainer.py"],
pathex=[],
binaries=BINARIES,
datas=DATA_FILES,
hiddenimports=HIDDEN_IMPORTS,
cipher=BLOCK_CIPHER,
noarchive=True)
pyz = PYZ(a.pure, a.zipped_data, cipher=BLOCK_CIPHER)
exe = EXE(pyz,
a.scripts,
[("v", None, "OPTION")],
name=name,
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=False)
coll = COLLECT(exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name=name)
</code></pre>
<p>These are the logs when I run my executable with <code>/project/model_trainer/</code> from terminal... how can I disable these? I was under the impression that <code>debug=False</code> and <code>console=False</code> would disable these.</p>
<pre><code>root@desktop:/# /project/model_trainer
import _frozen_importlib # frozen
import _imp # builtin
import sys # builtin
import '_warnings' # <class '_frozen_importlib.BuiltinImporter'>
import '_thread' # <class '_frozen_importlib.BuiltinImporter'>
import '_weakref' # <class '_frozen_importlib.BuiltinImporter'>
import '_frozen_importlib_external' # <class '_frozen_importlib.FrozenImporter'>
import '_io' # <class '_frozen_importlib.BuiltinImporter'>
import 'marshal' # <class '_frozen_importlib.BuiltinImporter'>
</code></pre>
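<p>A likely culprit (worth confirming against the PyInstaller docs for your version): the <code>[("v", None, "OPTION")]</code> entry passed to <code>EXE</code> is the spec-file way of running the embedded interpreter with <code>-v</code>, which prints exactly this kind of import trace. A minimal sketch of the change is simply dropping it:</p>
<pre><code>exe = EXE(pyz,
          a.scripts,
          # [("v", None, "OPTION")],   # removed: this enabled Python's -v import tracing
          name=name,
          debug=False,
          bootloader_ignore_signals=False,
          strip=False,
          upx=True,
          console=False)
</code></pre>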
|
<python><pyinstaller>
|
2022-12-29 18:42:22
| 1
| 1,313
|
Sam Comber
|
74,954,747
| 15,171,387
|
Compare a string that was read from a config ini file in Python?
|
<p>I have a config.ini file like this:</p>
<pre><code>[LABEL]
NAME = "eventName"
</code></pre>
<p>And I am reading it into my Python code as shown below. However, when I compare it with exactly the same string, the result is <code>False</code>. I wonder if I should use a different way to read this, or if I need to do something else.</p>
<pre><code>import configparser
config = configparser.ConfigParser()
config.read('config.ini')
my_label_name = config['LABEL']['name']
print("My label is: ", my_label_name, " and has type of: ", type(my_label_name))
My label is: "eventName" and has type of: <class 'str'>
print(my_label_name == "eventName")
False
</code></pre>
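<p>A minimal sketch of what is likely happening: <code>configparser</code> keeps the quotes as part of the value, so the stored string is <code>'"eventName"'</code>, not <code>'eventName'</code>. Either drop the quotes in the ini file or strip them when reading:</p>
<pre><code># option 1: in config.ini, write the value without quotes
# NAME = eventName

# option 2: strip the quotes after reading
my_label_name = config['LABEL']['name'].strip('"')
print(my_label_name == "eventName")  # True
</code></pre>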
|
<python><python-3.x><config><ini>
|
2022-12-29 18:22:49
| 2
| 651
|
armin
|
74,954,704
| 159,508
|
Confused Pip install with Virtual Environments
|
<p>Probably a rookie question: I created a project folder, cd'ed to it, created a virtual environment via</p>
<pre class="lang-none prettyprint-override"><code>python -m venv .venv
</code></pre>
<p>and it made the VE in the right place with what looks like the right stuff. But pip-installing modules <em>sometimes</em> (??) fails, with the module going into the global site-packages. This is a vanilla Python 3.9 install on Linux (Raspberry Pi) and being used within Visual Studio Code. To get it right I need to use the "Pip Manager" extension. Why?? To illustrate my confusion, I understand that Python (and Pip?) uses <strong>sys.path</strong> to resolve packages (see <a href="https://xkcd.com/1987/" rel="nofollow noreferrer">XKCD 1987</a>). So after activating the VE, I looked and got this. I expected to see the <strong>.venv</strong> site-packages in that path <em>first</em>, right? Well here is what I get:
<a href="https://i.sstatic.net/8kp1A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8kp1A.png" alt="terminal snapshot" /></a>
Where am I misinformed or otherwise confused?</p>
<p><strong>Info added in response to questions below</strong></p>
<p>This system never had Python2 on it. Within the activated .venv there are links:</p>
<pre class="lang-none prettyprint-override"><code>pi@rpi1:~/Documents/alpyca-device*.venv/bin $ ls -l pyt*
lrwxrwxrwx 1 pi pi 15 Dec 14 14:41 python -> /usr/bin/python
lrwxrwxrwx 1 pi pi 6 Dec 14 14:41 python3 -> python
lrwxrwxrwx 1 pi pi 6 Dec 14 14:41 python3.9 -> python
</code></pre>
<p>and this is the usual way it is put into the Python3 virtuals by the venv module, which is recommended now over virtualenv and other schemes.</p>
<pre class="lang-none prettyprint-override"><code>(.venv) pi@rpi1:~/Documents/alpyca $ which python
/home/pi/Documents/alpyca/.venv/bin/python
</code></pre>
<p>and other requests for info:</p>
<pre class="lang-none prettyprint-override"><code>(.venv) pi@rpi1:~/Documents/alpyca $ grep VIRTUAL_ENV= .venv/bin/activate
VIRTUAL_ENV="/home/pi/Documents/alpyca/.venv"
(.venv) pi@rpi1:~/Documents/alpyca $ .venv/bin/python
Python 3.9.2 (default, Feb 28 2021, 17:03:44)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.path)
['', '/usr/lib/python39.zip', '/usr/lib/python3.9', '/usr/lib/python3.9/lib-dynload', '/home/pi/Documents/alpyca/.venv/lib/python3.9/site-packages']
>>>
</code></pre>
<p>I don't know about Python 2, I have never used it.</p>
|
<python><pip><python-venv>
|
2022-12-29 18:17:38
| 0
| 1,303
|
Bob Denny
|
74,954,628
| 848,277
|
Getting the current git hash of a library from an Airflow Worker
|
<p>I have a library <code>mylib</code> that I want to get the current git hash for logging purposes when I run an Airflow Worker via the <code>PythonOperator</code>, <a href="https://stackoverflow.com/questions/14989858/get-the-current-git-hash-in-a-python-script">I know several methods to get the latest git hash</a>, <a href="https://stackoverflow.com/questions/69730660/compute-git-hash-of-file-or-directory-outside-of-git-repository">the main issue is I don't know where the directory will be and I'll likely be running it out of directory</a>.
The workers themselves can all vary (docker, ec2, gke, kubenetes) but the source python library <code>mylib</code> would always be installed via a <code>pip install git+https://${GITHUB_TOKEN}@github.com/user/mylib.git@{version}</code> command. Is there a generic way I can get the git hash for <code>mylib</code> on any Airflow worker since the directory where <code>mylib</code> is installed will change across my Airflow Workers?</p>
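<p>A sketch of one possibility (assuming the package really was installed by pip from a git URL): pip records the resolved VCS information in the distribution's <code>direct_url.json</code> (PEP 610), which can be read through <code>importlib.metadata</code> without knowing where the package landed on disk:</p>
<pre><code>import json
from importlib import metadata

def installed_git_commit(dist_name):
    """Return the commit id pip recorded for a VCS install, or None."""
    raw = metadata.distribution(dist_name).read_text("direct_url.json")
    if raw is None:
        return None
    return json.loads(raw).get("vcs_info", {}).get("commit_id")

print(installed_git_commit("mylib"))
</code></pre>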
|
<python><git><airflow>
|
2022-12-29 18:09:20
| 1
| 12,450
|
pyCthon
|
74,954,469
| 7,624,196
|
mypy: how to ignore specified files (not talking about how to ignore errors)
|
<p>I want to let mypy ignore a specified file. I'm <strong>not talking about just ignoring errors</strong> (which is already answered in many places); rather, I want mypy to completely ignore specified files.</p>
<p>When applying mypy to the jax library, mypy somehow hangs. For example, let's say there is a file named <code>main.py</code> with only a single line:</p>
<pre class="lang-py prettyprint-override"><code>import jax
</code></pre>
<p>Mypy somehow takes a really long time (over 5 minutes) when run on <code>main.py</code>, with the following log (abbreviated). Thus I want to let mypy ignore a specific file.</p>
<pre><code>h-ishida@stone-jsk:/tmp/test$ mypy -v main.py
LOG: Could not load plugins snapshot: @plugins_snapshot.json
LOG: Mypy Version: 0.991
LOG: Config File: Default
LOG: Configured Executable: /usr/bin/python3
LOG: Current Executable: /usr/bin/python3
LOG: Cache Dir: .mypy_cache
LOG: Compiled: True
LOG: Exclude: []
LOG: Found source: BuildSource(path='main.py', module='main', has_text=False, base_dir='/tmp/test', followed=False)
LOG: Could not load cache for main: main.meta.json
LOG: Metadata not found for main
LOG: Parsing main.py (main)
LOG: Could not load cache for jax: jax/__init__.meta.json
LOG: Metadata not found for jax
LOG: Parsing /home/h-ishida/.local/lib/python3.8/site-packages/jax/__init__.py (jax)
LOG: Metadata fresh for builtins: file /home/h-ishida/.local/lib/python3.8/site-packages/mypy/typeshed/stdlib/builtins.pyi
LOG: Metadata fresh for jax._src.cloud_tpu_init: file /home/h-ishida/.local/lib/python3.8/site-packages/jax/_src/cloud_tpu_init.py
LOG: Could not load cache for jax._src.basearray: jax/_src/basearray.meta.json
LOG: Metadata not found for jax._src.basearray
...
LOG: Found 163 SCCs; largest has 196 nodes
LOG: Processing 161 queued fresh SCCs
LOG: Processing SCC of size 196 (jax.scipy jax._src.lib jax._src.config jax.experimental.x64_context jax._src.iree jax.config jax._src.pretty_printer jax._src.util jax.util jax._src.lax.stack jax._src.state.types jax._src.lax.svd jax._src.nn.initializers jax._src.lax.qdwh jax._src.debugger.web_debugger jax._src.debugger.colab_debugger jax._src.debugger.cli_debugger jax._src.debugger.core jax._src.clusters.cloud_tpu_cluster jax._src.clusters.slurm_cluster jax._src.clusters jax._src.lax.utils jax._src.profiler jax._src.ops.scatter jax._src.numpy.vectorize jax._src.lax.ann jax._src.lax.other jax._src.errors jax._src.abstract_arrays jax._src.traceback_util jax._src.source_info_util jax._src.callback jax._src.lib.xla_bridge jax._src.sharding jax.lib jax._src.tree_util jax._src.environment_info jax._src.lax.eigh jax._src.lib.mlir.dialects jax.lib.xla_bridge jax._src.stages jax.nn.initializers jax._src.debugger jax._src.distributed jax._src.api_util jax.tree_util jax.profiler jax.ops jax.errors jax._src.typing jax._src.array jax._src.basearray jax._src.state.primitives jax._src.numpy.ndarray jax._src.custom_transpose jax._src.ad_checkpoint jax._src.lax.convolution jax._src.lax.slicing jax._src.debugging jax.linear_util jax.interpreters.mlir jax._src.dtypes jax._src.dispatch jax._src.device_array jax.stages jax.distributed jax.api_util jax.core jax._src.state.discharge jax._src.lax.control_flow.common jax._src.numpy.util jax._src.ad_util jax._src.lax.parallel jax.custom_transpose jax.ad_checkpoint jax.interpreters.pxla jax.interpreters.xla jax.interpreters.partial_eval jax.dtypes jax.debug jax.abstract_arrays jax._src.state jax._src.third_party.numpy.linalg jax._src.numpy.fft jax._src.lax.control_flow.solves jax._src.lax.control_flow.conditionals jax._src.numpy.reductions jax._src.numpy.index_tricks jax.interpreters.batching jax.interpreters.ad jax.sharding jax._src.custom_derivatives jax._src.custom_batching jax.numpy.fft jax._src.lax.lax jax._src.lax.linalg jax.custom_derivatives jax.custom_batching jax.lax.linalg jax._src.api jax.experimental.global_device_array jax._src.prng jax._src.numpy.ufuncs jax._src.lax.fft jax jax._src.numpy.linalg jax._src.lax.control_flow.loops jax.experimental.maps jax._src.lax.windowed_reductions jax._src.image.scale jax._src.numpy.lax_numpy jax.experimental.pjit jax._src.ops.special jax.numpy.linalg jax._src.numpy.setops jax._src.numpy.polynomial jax._src.lax.control_flow jax.image jax._src.random jax._src.nn.functions jax.numpy jax.lax jax.random jax.nn jax._src.dlpack jax.experimental.compilation_cache.compilation_cache jax.experimental jax.interpreters jax._src.lax jax.dlpack jax._src.scipy.cluster.vq jax._src.scipy.stats.gennorm jax._src.scipy.stats.chi2 jax._src.scipy.stats.uniform jax._src.scipy.stats.t jax._src.scipy.stats.pareto jax._src.scipy.stats.multivariate_normal jax._src.scipy.stats.laplace jax._src.scipy.stats.expon jax._src.scipy.stats.cauchy jax._src.third_party.scipy.betaln jax._src.scipy.sparse.linalg jax._src.third_party.scipy.signal_helper jax._src.scipy.fft jax._src.scipy.stats._core jax._src.scipy.ndimage jax._src.scipy.linalg jax._src.third_party.scipy.interpolate jax.scipy.cluster.vq jax.scipy.stats.gennorm jax.scipy.stats.chi2 jax.scipy.stats.uniform jax.scipy.stats.t jax.scipy.stats.pareto jax.scipy.stats.multivariate_normal jax.scipy.stats.laplace jax.scipy.stats.expon jax.scipy.stats.cauchy jax._src.scipy.special jax.scipy.sparse.linalg jax._src.scipy.signal jax._src.third_party.scipy.linalg jax.scipy.fft jax.scipy.ndimage 
jax.scipy.interpolate jax._src.scipy.stats.betabinom jax._src.scipy.stats.nbinom jax._src.scipy.stats.multinomial jax.scipy.cluster jax.scipy.special jax.scipy.sparse jax.scipy.signal jax.scipy.linalg jax._src.scipy.stats.poisson jax._src.scipy.stats.norm jax._src.scipy.stats.logistic jax._src.scipy.stats.geom jax._src.scipy.stats.gamma jax._src.scipy.stats.dirichlet jax._src.scipy.stats.beta jax._src.scipy.stats.bernoulli jax.scipy.stats.betabinom jax.scipy.stats.nbinom jax.scipy.stats.multinomial jax._src.scipy.stats.kde jax._src.scipy.stats.truncnorm jax.scipy.stats.poisson jax.scipy.stats.norm jax.scipy.stats.logistic jax.scipy.stats.geom jax.scipy.stats.gamma jax.scipy.stats.dirichlet jax.scipy.stats.beta jax.scipy.stats.bernoulli jax.scipy.stats.truncnorm jax.scipy.stats) as inherently stale
</code></pre>
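<p>A sketch of one configuration that is commonly suggested for this situation (treating <code>jax</code> as a package mypy should not analyse at all): <code>exclude</code> only filters files passed on the command line, so for an installed third-party package the per-module <code>follow_imports</code> setting is the relevant knob, e.g. in <code>mypy.ini</code>:</p>
<pre><code>[mypy]
# project-wide settings here

[mypy-jax.*]
follow_imports = skip
ignore_missing_imports = True
</code></pre>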
|
<python><mypy>
|
2022-12-29 17:51:32
| 0
| 1,623
|
HiroIshida
|
74,954,463
| 11,013,499
|
How do I show decimal points in the print function in Python?
|
<p>I am trying to find the roots of an equation and I need to print the roots with 10 decimal places. I used the following code but it produced an error. How can I tell Python to print 10 decimal places?</p>
<pre><code>def find_root(a1,b1,c1,d1,e1,f1,a2,b2,c2,d2,e2,f2):
coeff=[a1-a2,b1-b2,c1-c2,d1-d2,e1-e2,f1-f2]
print('%.10f' %np.roots(coeff))
Traceback (most recent call last):
File "C:\Users\Maedeh\AppData\Local\Temp\ipykernel_14412\3234902339.py", line 1, in <cell line: 1>
find_root(0.0024373075,-0.0587498671,0.4937598857,-1.7415327555,3.6624839316,20.8771496554,0.0021396943,-0.0504345999,0.4152634229,-1.4715228738,3.3049020821,20.8406692002)
File "C:\Users\Maedeh\AppData\Local\Temp\ipykernel_14412\3260022606.py", line 3, in find_root
print('%.10f' %np.roots(coeff))
TypeError: only size-1 arrays can be converted to Python scalars
</code></pre>
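<p>A minimal sketch of the likely fix: <code>np.roots</code> returns an array of all the roots, and <code>'%.10f'</code> can only format a single scalar, so format each root separately (splitting real and imaginary parts in case some roots are complex):</p>
<pre><code>import numpy as np

def find_root(a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2):
    coeff = [a1 - a2, b1 - b2, c1 - c2, d1 - d2, e1 - e2, f1 - f2]
    for root in np.roots(coeff):
        if np.iscomplex(root):
            print('%.10f + %.10fj' % (root.real, root.imag))
        else:
            print('%.10f' % root.real)
</code></pre>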
|
<python><numpy>
|
2022-12-29 17:51:02
| 2
| 1,295
|
david
|
74,954,281
| 12,084,907
|
How to capture the Name of a Table in a Database when running a stored procedure on multiple tables
|
<p>I am currently making a Python program that will run a stored procedure which has multiple select statements for multiple different tables. I am storing the results in a dataframe and then displaying it with a treeview. I want to make it so that the user can see what table each row of the results came from. I was wondering if there is any way that I could modify my stored procedure so that each row that is returned also includes the table name.</p>
<p>Here is the stored procedure as of right now</p>
<pre><code>procedure [dbo].[getUser](@username nvarchar(225))
AS
Select username,user_id,user_type
from db1.dublin_table
where username like '%' + TRIM(@username) + '%'
ORDER BY username
Select username,user_id,user_type
from db2.nyc_table
where username like '%' + TRIM(@username) + '%'
ORDER BY username
Select username,user_id,user_type
from db3.tokyo_table
where username like '%' + TRIM(@username) + '%'
ORDER BY username
</code></pre>
<p>I am somewhat new to SQL stored procedures so any help is appreciated.</p>
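<p>One straightforward option (a sketch; the column name and literals are only illustrative): add a constant column to each <code>SELECT</code> so every returned row carries the name of its source table:</p>
<pre><code>Select 'dublin_table' AS SourceTable, username, user_id, user_type
from db1.dublin_table
where username like '%' + TRIM(@username) + '%'
ORDER BY username

Select 'nyc_table' AS SourceTable, username, user_id, user_type
from db2.nyc_table
where username like '%' + TRIM(@username) + '%'
ORDER BY username

-- and likewise 'tokyo_table' for db3.tokyo_table
</code></pre>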
|
<python><sql-server><dataframe><stored-procedures>
|
2022-12-29 17:31:27
| 0
| 379
|
Buzzkillionair
|
74,954,233
| 4,478,466
|
How to get static files to work when testing the Flask REST API with flask_testing framework
|
<p>I'm making a Flask REST API with the Flask-RESTX library. One part of the API is calls that generate PDF documents. To generate documents I'm using some font files and images from the <code>static/</code> directory. Everything works when I run the API, but when I make the same calls from my tests I get an error that files used in PDF generation can't be found.</p>
<pre><code>OSError: Cannot open resource "static/logo.png"
</code></pre>
<p>I'm guessing the reason for this is that the path to the <code>static</code> folder is different when running tests. Is there a nice way to use the same path to static files in tests, or does there need to be some custom path-switching logic depending on the run type (development, production, testing)?</p>
<p>Folder structure:</p>
<pre><code>/src
/application
/main
app.py
/test
test.py
/static
logo.png
</code></pre>
<p>I access the assets in my application as such:</p>
<pre><code>static/logo.png
</code></pre>
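<p>A minimal sketch of one way to make the lookup independent of the working directory (resolving the file relative to the Flask application's root rather than via a bare relative path; the exact join depends on where <code>static/</code> sits relative to the app package):</p>
<pre><code>import os
from flask import current_app

def asset_path(filename):
    # current_app.root_path points at the application package,
    # so the result is the same no matter where the tests run from
    return os.path.join(current_app.root_path, "static", filename)

logo = asset_path("logo.png")
</code></pre>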
|
<python><flask>
|
2022-12-29 17:26:59
| 1
| 658
|
Denis Vitez
|
74,954,027
| 12,596,824
|
Dataframe with Path column - creating new columns
|
<p>I have the following dataframe. I want to create new columns based on the FilePath column.</p>
<pre><code>FilePath
S:\\colab\a.csv
S:\\colab\b.csv
S:\\colab\c.csv
S:\\colab\apple\dog.txt
S:\\colab\apple\cat.pdf
</code></pre>
<p>Below is the expected output. I want to get the hierarchy of the files in a string, convert "\" to " > ", and remove "S:\\" from the file path. I also want to get counts of the files and directories based on the file path. For example, the first instance has a FileCnt of 3 because there are 3 files in the directory colab (a.csv, b.csv, c.csv) and one directory (apple).</p>
<p>How can I do this in python?</p>
<p>Expected Output:</p>
<pre><code>FilePath Hierarchy FileCnt DirCnt
S:\\colab\a.csv colab 3 1
S:\\colab\b.csv colab 3 1
S:\\colab\c.csv colab 3 1
S:\\colab\apple\dog.txt colab > apple 2 0
S:\\colab\apple\cat.pdf colab > apple 2 0
</code></pre>
<p>So far I have</p>
<pre><code>df['Hierarchy'] = df['FilePath'].str[4:].str.replace('\', ' > ')
</code></pre>
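<p>A sketch of one way to build all three columns (assumptions: the drive prefix is always <code>S:\\</code>, and the counts only cover files and directories that actually appear in <code>FilePath</code>):</p>
<pre><code># parent directory of each file, with the drive prefix stripped
parents = (df['FilePath']
           .str.replace(r'^S:\\+', '', regex=True)
           .str.rsplit('\\', n=1).str[0])

df['Hierarchy'] = parents.str.replace('\\', ' > ', regex=False)

# number of files sharing the same parent directory
df['FileCnt'] = parents.map(parents.value_counts())

# number of distinct immediate sub-directories under each parent
unique_dirs = parents[parents.str.contains('\\', regex=False)].drop_duplicates()
dir_parents = unique_dirs.str.rsplit('\\', n=1).str[0]
df['DirCnt'] = parents.map(dir_parents.value_counts()).fillna(0).astype(int)
</code></pre>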
|
<python><pandas><path>
|
2022-12-29 17:04:38
| 1
| 1,937
|
Eisen
|
74,953,960
| 14,141,126
|
Send email with plain text and html attachment
|
<p>I'm trying to send an email that contains both plain text in the body and an HTML attachment. I've succeeded in adding the attachment but can't figure out how to add the plain text to the body. Any help is appreciated.</p>
<p>Here is the code:</p>
<pre><code>from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email.mime.text import MIMEText
import smtplib, ssl
from email import encoders
def sendmail():
subject = 'Subject here'
body = "Shard Count: 248" #Need to add this to body in plain text
senders_email = 'title@domain.com'
receiver_email = 'security@domain.com'
#Create a multipart message and set headers
message = MIMEMultipart('alternative')
message['From'] = senders_email
message['To'] = receiver_email
message['Subject'] = subject
message.attach(MIMEText(html_file, 'html'))
filename = 'logs.html'
# Open file in binary mode
with open(filename, 'rb') as attachment:
# Add file as application/octet-stream
part = MIMEBase('application','octet-stream')
part.set_payload(attachment.read())
# Encodes file in ASCII characters to send via email
encoders.encode_base64(part)
# Add header as key/value pair to attachment part
part.add_header(
'Content-Disposition',
f"attachment; filename= {filename}",
)
# Add attachment to message and convert message to string
message.attach(part)
text = message.as_string()
# Log into server using secure connection
context = ssl.create_default_context()
with smtplib.SMTP("smtp.domain.com", 25) as server:
# server.starttls(context=context)
# server.login(senders_email, password)
server.sendmail(senders_email,receiver_email,text)
print("Email sent!")
sendmail()
</code></pre>
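<p>A minimal sketch of the missing step (attaching the plain-text body as its own <code>MIMEText</code> part; a <code>'mixed'</code> container is also the more conventional choice when combining a body with an attachment):</p>
<pre><code>message = MIMEMultipart('mixed')
message['From'] = senders_email
message['To'] = receiver_email
message['Subject'] = subject

# plain-text body
message.attach(MIMEText(body, 'plain'))

# then build and attach the HTML file exactly as in the existing code
message.attach(part)
</code></pre>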
|
<python>
|
2022-12-29 16:57:20
| 1
| 959
|
Robin Sage
|
74,953,923
| 5,398,127
|
Prophet CMDStanpy error - Procedure Entry Point Not Located
|
<p>I am trying to use prophet library.</p>
<p>The 'cmdstanpy' and 'prophet' packages are successfully installed. But I am getting this error while running my model - "The procedure entry point _ZNt3bb19task_scheduler_init10initilaizeEiy could not be located in the dynamic link library D:/ProgramData/ Anaconda3/Lib/site-packages/prophet/stan_model/prophet_model.bin"</p>
<p>I tried the steps mentioned here <a href="https://github.com/facebook/prophet/issues/2255" rel="nofollow noreferrer">https://github.com/facebook/prophet/issues/2255</a></p>
<p>like creating a new virtualenv, installing prophet in that env and running it, and also installing cmdstanpy, but I am still facing the issue.</p>
<p>Yesterday I was able to resolve the same issue in the base environment with the following steps:</p>
<pre><code>pip install cmdstanpy
Then in the libs import section, I've added
import cmdstanpy
cmdstanpy.install_cmdstan(compiler=True)
</code></pre>
<p>But today I am again getting the same issue in the base and any other virtual environment.</p>
<p>Please help</p>
<p><a href="https://i.sstatic.net/5ztyP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5ztyP.png" alt="enter image description here" /></a></p>
|
<python><model-fitting><prophet>
|
2022-12-29 16:53:20
| 1
| 3,480
|
Stupid_Intern
|
74,953,856
| 7,148,573
|
Flask traceback only shows external libraries
|
<p>I'm running a Flask app and get the following error message:</p>
<pre><code> * Debugger is active!
* Debugger PIN: ******
127.0.0.1 - - [29/Dec/2022 12:21:53] "GET /authorization/****/authorize HTTP/1.1" 302 -
127.0.0.1 - - [29/Dec/2022 12:21:53] "GET /authorization/****/test HTTP/1.1" 500 -
Traceback (most recent call last):
File "******/lib/python3.10/site-packages/flask/app.py", line 2548, in __call__
return self.wsgi_app(environ, start_response)
File "******/lib/python3.10/site-packages/flask/app.py", line 2528, in wsgi_app
response = self.handle_exception(e)
File "******/lib/python3.10/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "******/lib/python3.10/site-packages/flask/app.py", line 1823, in full_dispatch_request
return self.finalize_request(rv)
File "******/lib/python3.10/site-packages/flask/app.py", line 1844, in finalize_request
response = self.process_response(response)
File "******/lib/python3.10/site-packages/flask/app.py", line 2340, in process_response
self.session_interface.save_session(self, ctx.session, response)
File "******/lib/python3.10/site-packages/flask/sessions.py", line 409, in save_session
val = self.get_signing_serializer(app).dumps(dict(session)) # type: ignore
File "******/lib/python3.10/site-packages/itsdangerous/serializer.py", line 207, in dumps
payload = want_bytes(self.dump_payload(obj))
File "******/lib/python3.10/site-packages/itsdangerous/url_safe.py", line 53, in dump_payload
json = super().dump_payload(obj)
File "******/lib/python3.10/site-packages/itsdangerous/serializer.py", line 169, in dump_payload
return want_bytes(self.serializer.dumps(obj, **self.serializer_kwargs))
File "******/lib/python3.10/site-packages/flask/json/tag.py", line 308, in dumps
return dumps(self.tag(value), separators=(",", ":"))
File "******/lib/python3.10/site-packages/flask/json/__init__.py", line 124, in dumps
return app.json.dumps(obj, **kwargs)
File "******/lib/python3.10/site-packages/flask/json/provider.py", line 230, in dumps
return json.dumps(obj, **kwargs)
File "/usr/lib/python3.10/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "******/lib/python3.10/site-packages/flask/json/provider.py", line 122, in _default
raise TypeError(f"Object of type {type(o).__name__} is not JSON serializable")
</code></pre>
<p>I don't understand why it shows only lines from external libraries. What could be the reason it doesn't show which line in my code caused the program to stop?</p>
<p>When I try to replicate the error in a small app (code below), it does show the line number.</p>
<pre><code>import flask
import json
app = flask.Flask(__name__)
app.secret_key = "####"
@app.route("/")
def index():
json.dumps(json) # Line that causes error
if __name__ == "__main__":
app.run("localhost", 8080, debug=True)
</code></pre>
|
<python><flask>
|
2022-12-29 16:48:07
| 1
| 1,740
|
Raphael
|
74,953,822
| 9,182,743
|
Separate script's functions into modules, callable by 2 separate mains
|
<p>I have a single script that:</p>
<ol start="0">
<li>imports 2 sets of data: df_height['user', 'height'], df_age['user', 'age']</li>
<li>clean the data</li>
<li>analyse the data: i) sum(height), ii) mean(age), iii) sum(height) * mean(age)</li>
<li>display the data.</li>
</ol>
<p>I want to:</p>
<ul>
<li>Separate the functions out into modules</li>
<li>divide the different analysis into their own 'main'</li>
<li>For each analysis, divide into i) import and clean, ii) process iii) display</li>
</ul>
<hr />
<p><strong>Here is the complete script</strong> (in the comments with #-> I indicate in what folder the function will be moved to):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
#1. functions for import data #-> These functions into src/import_data/import_data.py
def get_data_age():
df = pd.DataFrame({
"user_id": ['1', '2', '3', '4', '5'],
"age": [10, 20, 30, "55", 50],
})
return df
def get_data_height():
df = pd.DataFrame({
"user_id": ['5', '7', '12', '5'],
"height": [160, 170, 180, 'replace_this_with_190']
})
return df
#2. functions for cleaning data #-> These functions into src/clean_data/clean_data.py
def clean_age (df):
df['age'] = pd.to_numeric(df['age'])
return df
def clean_height (df):
df['height'] = df['height'].replace("replace_this_with_190", 200)
return df
#3. functions for processing data #-> These functions into src/alghorithms/calculations.py
def alghorithm_age (df):
return df['age'].mean()
def alghorithm_height (df):
return df['height'].sum()
#4. functions in common (display data) #-> This functions into src/display_data/display_data.py
def common_function_display_data (data):
print (data)
#5. function that combines data from alghorithm_height and alghorithm_age #-> This functions into src/alghorithms/calculations.py
def product_age_mean_and_height_sum(mean_age, sum_height):
return mean_age * sum_height
#main 1 (age)
df_age = get_data_age() # -> this step into file main_age/00_import_and_clean_age.py
df_age_clean = clean_age(df_age) # -> this step into file main_age/00_import_and_clean_age.py
age_mean = alghorithm_age(df_age_clean) # -> this step into main_age/file 01_process_age.py
common_function_display_data(age_mean)# -> this step into main_age/file 02_display_age.py
#main 2 (height)
df_height = get_data_height()# -> this step into file main_height/00_import_and_clean_height.py
df_height_clean = clean_height(df_height)# -> this step into file main_height/00_import_and_clean_height.py
height_sum = alghorithm_height(df_height_clean)# -> this step into main_height/file 01_process_height.py
common_function_display_data(height_sum)# -> this step into file main_height/02_display_height.py
#main 3 (combined)
age_mean_height_sum_product = product_age_mean_and_height_sum(age_mean, height_sum) # -> this step into file main_display_combined/display_combined.py
common_function_display_data(age_mean_height_sum_product)# -> this step into file main_height/02_display_height.py
</code></pre>
<p><strong>Here is the final project structure I had in mind</strong>.</p>
<p><a href="https://github.com/leodtprojectsd/stack_two_mains" rel="nofollow noreferrer">github repo with example</a></p>
<p><a href="https://i.sstatic.net/ujb32.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ujb32.png" alt="project structure" /></a></p>
<h2><a href="https://i.sstatic.net/0v5t8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0v5t8.png" alt="Data flow" /></a></h2>
<p><strong>Problem</strong>
However, when I structure the project as above, I am unable to import modules into the main scripts. I believe this is because they are on parallel levels.
I get the following error:</p>
<pre class="lang-py prettyprint-override"><code># EXAMPLE for file main_one_age/00_import_and_clean_age.py
---
from ..import_data.import_data import get_data_age
from ..clean_data.clean_data import clean_age
df_age = get_data_age() # -> this step into file main_age/00_import_and_clean_age.py
df_age_clean = clean_age(df_age) # -> this step into file main_age/00_import_and_clean_age.py
---
OUT:
from ..import_data.import_data import get_data_age
ImportError: attempted relative import beyond top-level package
PS C:\Users\leodt\LH_REPOS\src\src>
</code></pre>
<hr />
<p><strong>QUESTIONS</strong></p>
<p><strong>Q:</strong> How can I separate the script into modules and mains within a common structure?</p>
<p>The current solution doesn't allow me to:</p>
<ul>
<li>place a main within a subfolder, e.g. main_one_age/main_here.py.
With this structure the code won't work</li>
<li>run files like <code>import_and_clean_age_00.py</code> as main; if I do this I get the error:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>def display_data_main_one(age_mean):
return display_data.common_function_display_data(age_mean)
if __name__ == '__main__':
display_data("path to age mean")
OUT:
ModuleNotFoundError: No module named 'display_data'
</code></pre>
<p>Q: Can you provide a solution that rewrites "path_for_data_etc.py" into a standard form, and also adds the setup.py / pyproject.toml etc. that is needed for this to be considered a "completed" project?</p>
<p>Basically looking for a standard solution that I can then use as a template for my real projects.</p>
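<p>For reference, this is roughly the kind of entry point I imagine after restructuring: absolute imports and running the script as a module from the folder that contains all the packages (a sketch only; the module and function names are placeholders taken from the example above, not working code from my repo):</p>
<pre class="lang-py prettyprint-override"><code># sketch of main_age/import_and_clean_age_00.py -- assumes it is run from the
# folder that contains import_data/, clean_data/ and main_age/, e.g.:
#   python -m main_age.import_and_clean_age_00
# so that the sibling packages are importable with absolute imports
from import_data.import_data import get_data_age
from clean_data.clean_data import clean_age

if __name__ == '__main__':
    df_age = get_data_age()
    df_age_clean = clean_age(df_age)
    print(df_age_clean.head())
</code></pre>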
|
<python><import><module><project>
|
2022-12-29 16:45:15
| 1
| 1,168
|
Leo
|
74,953,809
| 18,110,596
|
Select the rows with top values until the sum value reaches 30% of the total value in Python Pandas
|
<p>I'm using Python pandas and have a data frame that is pulled from my CSV file:</p>
<pre><code>ID Value
123 10
432 14
213 12
'''
214 2
999 43
</code></pre>
<p>I was advised that the following code can randomly select some rows such that the sum of the selected values is 30% of the total value ("close to 30%" works):</p>
<pre><code>out = df.sample(frac=1).loc[lambda d: d['Value'].cumsum().le(d['Value'].sum()*0.3)]
</code></pre>
<p>Now I want to sort the rows based on the value and select the top rows until they add up to 30% of the total value.</p>
<p>Please advise.</p>
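<p>Based on the random version above, I imagine the sorted variant would look something like this sketch (not tested against my real data):</p>
<pre class="lang-py prettyprint-override"><code># sort descending, then keep rows while the running total stays within 30% of the overall sum
out = (
    df.sort_values('Value', ascending=False)
      .loc[lambda d: d['Value'].cumsum().le(d['Value'].sum() * 0.3)]
)
</code></pre>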
|
<python><pandas>
|
2022-12-29 16:44:02
| 2
| 359
|
Mary
|
74,953,747
| 192,204
|
Why is the spaCy Scorer returning None for the entity scores but the model is extracting entities?
|
<p>I am really confused why Scorer.score is returning ents_p, ents_r, and ents_f as None for the below example. I am seeing something very similar with my own custom model and want to understand why it is returning None.</p>
<p><strong>Example Scorer Code - Returning None for ents_p, ents_r, ents_f</strong></p>
<pre><code>import spacy
from spacy.scorer import Scorer
from spacy.tokens import Doc
from spacy.training.example import Example
examples = [
('Who is Talha Tayyab?',
{(7, 19, 'PERSON')}),
('I like London and Berlin.',
{(7, 13, 'LOC'), (18, 24, 'LOC')}),
('Agra is famous for Tajmahal, The CEO of Facebook will visit India shortly to meet Murari Mahaseth and to visit Tajmahal.',
{(0, 4, 'LOC'), (40, 48, 'ORG'), (60, 65, 'GPE'), (82, 97, 'PERSON'), (111, 119, 'GPE')})
]
def my_evaluate(ner_model, examples):
scorer = Scorer()
example = []
for input_, annotations in examples:
pred = ner_model(input_)
print(pred,annotations)
temp = Example.from_dict(pred, dict.fromkeys(annotations))
example.append(temp)
scores = scorer.score(example)
return scores
ner_model = spacy.load('en_core_web_sm') # for spaCy's pretrained use 'en_core_web_sm'
results = my_evaluate(ner_model, examples)
print(results)
</code></pre>
<p><strong>Scorer Results</strong></p>
<pre><code>{'token_acc': 1.0, 'token_p': 1.0, 'token_r': 1.0, 'token_f': 1.0, 'sents_p': None, 'sents_r': None, 'sents_f': None, 'tag_acc': None, 'pos_acc': None, 'morph_acc': None, 'morph_micro_p': None, 'morph_micro_r': None, 'morph_micro_f': None, 'morph_per_feat': None, 'dep_uas': None, 'dep_las': None, 'dep_las_per_type': None, 'ents_p': None, 'ents_r': None, 'ents_f': None, 'ents_per_type': None, 'cats_score': 0.0, 'cats_score_desc': 'macro F', 'cats_micro_p': 0.0, 'cats_micro_r': 0.0, 'cats_micro_f': 0.0, 'cats_macro_p': 0.0, 'cats_macro_r': 0.0, 'cats_macro_f': 0.0, 'cats_macro_auc': 0.0, 'cats_f_per_type': {}, 'cats_auc_per_type': {}}
</code></pre>
<p>It is clearly picking out entities from the text</p>
<pre><code>doc = ner_model('Agra is famous for Tajmahal, The CEO of Facebook will visit India shortly to meet Murari Mahaseth and to visit Tajmahal.')
for ent in doc.ents:
print(ent.text, ent.label_)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>Agra PERSON
Tajmahal ORG
Facebook ORG
India GPE
Murari Mahaseth PERSON
Tajmahal ORG
</code></pre>
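<p>For reference, this is the shape of reference annotations I believe <code>Example.from_dict</code> expects, based on my reading of the docs (a sketch; if this is wrong it may well be related to my problem):</p>
<pre class="lang-py prettyprint-override"><code># sketch: reference annotations passed as {"entities": [(start, end, label), ...]}
pred = ner_model('Who is Talha Tayyab?')
example = Example.from_dict(pred, {"entities": [(7, 19, 'PERSON')]})
</code></pre>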
|
<python><spacy><named-entity-recognition><precision-recall>
|
2022-12-29 16:37:40
| 2
| 9,234
|
scarpacci
|
74,953,725
| 12,065,403
|
Efficient file format to store nested objects
|
<p>I am working on a python project where I have nested objects. I know the structure of each entity. I am looking for a way to save it.</p>
<p><strong>Here is a very simple example:</strong></p>
<p>A User always has:</p>
<ul>
<li>an id (int)</li>
<li>a name: (string)</li>
<li>a list of items (0, 1 or many)</li>
</ul>
<p>An item always has:</p>
<ul>
<li>a name (str)</li>
<li>a price (int)</li>
</ul>
<pre><code>to_save_data = [
{ 'id': 8, 'name': 'Alice', 'items': [ { 'name': 'Sword', 'price': 10 } ] },
{ 'id': 23, 'name': 'Bob', 'items': [ { 'name': 'Axe', 'price': 4 }, { 'name': 'Shield', 'price': 33 } ] },
{ 'id': 57, 'name': 'Charlie', 'items': [] },
]
</code></pre>
<p>I need to store the data with the following constraints:</p>
<ul>
<li>All in one file, one line should correspond to one user</li>
<li>I can read / write line by line. I can add a line at the end of file (without reading all and rewriting all)</li>
<li>values are not encoded (like a csv file for instance but not like a pickle file)</li>
<li>efficient in terms of space.</li>
</ul>
<p>I started with JSON but it is not efficient in terms of space because the name of each field is repeated and written for each line.
In this example, the words "id", "name" and "items" are written 3 times (if I had 1B users, they would be written 1B times; same problem for items).</p>
<p><strong>I am looking for something that would look like this for instance:</strong></p>
<p>A header on the first line that describes the data-structure. then:</p>
<pre><code>8, 'Alice', [ ('Sword', 10) ]
23, 'Bob', [ ('Axe', 4), ('Shield', 33) ]
57, 'Charlie', []
</code></pre>
<p>I could implement it but it would take time and I feel I would reinvent the wheel.</p>
<p><strong>Is there an existing file format and Python module to handle that?</strong></p>
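<p>To make the target more concrete, this is the kind of writer/reader I would otherwise implement myself: one JSON array per line so the field names are not repeated (a rough sketch, not a finished design):</p>
<pre class="lang-py prettyprint-override"><code>import json

def append_users(path, users):
    # one JSON array per line: [id, name, [[item_name, item_price], ...]]
    with open(path, 'a', encoding='utf-8') as f:
        for u in users:
            row = [u['id'], u['name'], [[i['name'], i['price']] for i in u['items']]]
            f.write(json.dumps(row) + '\n')

def read_users(path):
    with open(path, encoding='utf-8') as f:
        for line in f:
            uid, name, items = json.loads(line)
            yield {'id': uid, 'name': name,
                   'items': [{'name': n, 'price': p} for n, p in items]}
</code></pre>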
|
<python>
|
2022-12-29 16:34:54
| 0
| 1,288
|
Vince M
|
74,953,674
| 15,725,039
|
Need to output a function That takes in strings and ints in python
|
<p>The function in question is:</p>
<pre><code>switch({source}, condition1, Condition Nth, value1, value Nth)
</code></pre>
<p>The Way I need to output it involves using 0 and 1s (True/False) and will look like this:</p>
<pre><code>"switch([.]ALARM,1,0,On,Off, Unknown)"
</code></pre>
<p>Where the 1 and 0 need to be ints instead of strings.</p>
<p>The program we have to use at work does not allow me to use .format, and I was curious whether there is a way to make this happen.</p>
<p>Example of what I have tried:</p>
<h4>First test (index 0 is the 0 and/or 1s in question for both variables):</h4>
<pre><code>formula = "switch({source}, \"" + int(conditionOne[0]) + "\""+ "," + "\"" + int(conditionTwo[0]) + "\""+ "," + "\"" + conditionOne[1] + "\"" + "," + "\"" + conditionTwo[1] + "\"" + ", \"Unknown\")"
</code></pre>
<h4>Second test (the program reads {} as sets instead of formatting):</h4>
<pre><code> test = "switch({source}, \"" + {0} + "\""+ "," + "\"" + {1} + "\""+ "," + "\"" + conditionOne[1] + "\"" + "," + "\"" + conditionTwo[1] + "\"" + ", \"Unknown\")".format(conditionOne[0],conditionTwo[0])
</code></pre>
<h1>Edit and Answer:</h1>
<p>I reached out to Ignition personnel and found out they use %d, which I have not used since the updates of Python 3. I am guessing that program uses an old form of Python?</p>
<p>Here is the code that gets it to work, if anyone else out there uses the ignition program:</p>
<pre><code>test3 = "switch({source}, %d, %d," % (int(conditionOne[0]),int(conditionTwo[0])) + "\"" + conditionOne[1] + "\"" + "," + "\"" + conditionTwo[1] + "\"" + ", \"Unknown\")"
</code></pre>
|
<python><string><formatting><format><concatenation>
|
2022-12-29 16:30:11
| 1
| 330
|
JQTs
|
74,953,624
| 2,998,077
|
Python to bin calculated results from a function
|
<p>A function is defined to perform some calculation (in this example, a sum). The calculated result is to be put into bins (with intervals of 10, such as <10, 10-19, 20-29, etc.).</p>
<p>What is the smart way to do so?</p>
<p>I have a dumb way, which creates a dictionary listing all the possible calculated results and their bins:</p>
<pre><code>def total(iterable):
dict = {36: '30 - 40' , 6 : '< 10'}
total = dict[sum(iterable)]
return total
candidates = [[11,12,13],[1,2,3]]
for iterable in candidates:
output = str(total(iterable))
print (output)
</code></pre>
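<p>Ideally I would compute the bin label from the sum instead of enumerating every possible result, something like this sketch (which I suspect is the direction to go, but maybe there is a more standard tool for it):</p>
<pre class="lang-py prettyprint-override"><code>def total(iterable):
    s = sum(iterable)
    if s < 10:
        return '< 10'
    low = (s // 10) * 10          # lower bound of the 10-wide bin
    return f'{low} - {low + 9}'

for iterable in [[11, 12, 13], [1, 2, 3]]:
    print(total(iterable))        # -> 30 - 39, < 10
</code></pre>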
|
<python><function>
|
2022-12-29 16:25:17
| 1
| 9,496
|
Mark K
|
74,953,594
| 14,558,228
|
how to create cascade window in python3 win32gui
|
<p>I want to cascade windows using the pywin32 module.
I use:
<code>win32gui.CascadeWindow()</code>
but it raised an error:</p>
<blockquote>
<p>AttributeError: module 'win32gui' has no attribute 'CascadeWindow'. Did you mean: 'CreateWindow'?</p>
</blockquote>
<p>How can I fix this?</p>
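<p>If win32gui really has no wrapper, would calling the raw WinAPI function through ctypes be acceptable? Something like this sketch (CascadeWindows is the user32 function I think I need; I have not verified the arguments):</p>
<pre class="lang-py prettyprint-override"><code>import ctypes

# WORD CascadeWindows(HWND hwndParent, UINT wHow, const RECT *lpRect,
#                     UINT cKids, const HWND *lpKids)
ctypes.windll.user32.CascadeWindows(None, 0, None, 0, None)
</code></pre>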
|
<python><windows><winapi><pywin32><windows-api-code-pack>
|
2022-12-29 16:22:13
| 1
| 453
|
xander karimi
|
74,953,540
| 4,391,249
|
What is `np.ndarray[Any, np.dtype[np.float64]]` and why does `np.typing.NDArray[np.float64]` alias it?
|
<p>The <a href="https://numpy.org/devdocs/reference/typing.html" rel="nofollow noreferrer">documentation</a> for <code>np.typing.NDArray</code> says that it is "a generic version of <code>np.ndarray[Any, np.dtype[+ScalarType]]</code>". Where is the generalization in "generic" happening?</p>
<p>And in the <a href="https://numpy.org/devdocs/reference/generated/numpy.ndarray.__class_getitem__.html" rel="nofollow noreferrer">documentation</a> for <code>numpy.ndarray.__class_getitem__</code> we have this example <code>np.ndarray[Any, np.dtype[Any]]</code> with no explanation as to what the two arguments are.</p>
<p>And why can I do <code>np.ndarray[float]</code>, ie just use one argument? What does that mean?</p>
|
<python><numpy><python-typing>
|
2022-12-29 16:17:22
| 1
| 3,347
|
Alexander Soare
|
74,953,510
| 3,860,847
|
How to pass unencoded URL in FastAPI/Swagger UI via GET method?
|
<p>I would like to write a FastAPI endpoint, with a Swagger page (or something similar) that will accept a non-encoded URL as input. It should preferably use <code>GET</code>, <strong>not</strong> <code>POST</code> method.</p>
<p>Here's an example of a <code>GET</code> endpoint that <strong>does</strong> require double-URL encoding.</p>
<pre><code>@app.get("/get_from_dub_encoded/{double_encoded_url}")
async def get_from_dub_encoded(double_encoded_url: str):
"""
Try https%253A%252F%252Fworld.openfoodfacts.org%252Fapi%252Fv0%252Fproduct%252F7622300489434.json
"""
original_url = urllib.parse.unquote(urllib.parse.unquote(double_encoded_url))
response = requests.get(original_url)
return response.json()
</code></pre>
<p>Which generates a Swagger interface as below.</p>
<p>The following <code>POST</code> request does solve my problem, but the simplicity of a <code>GET</code> request with a <code>form</code> is better for my co-workers.</p>
<pre class="lang-py prettyprint-override"><code>class InputModel(BaseModel):
unencoded_url: AnyUrl = Field(description="An unencoded URL for an external resource", format="url")
@app.post("/unencoded-url")
def unencoded_url(inputs: InputModel):
response = requests.get(inputs.unencoded_url)
return response.json()
</code></pre>
<p>How can I deploy a convenient interface like that without requiring users to write the payload for a <code>POST</code> request or to perform double URL encoding?</p>
<p><a href="https://i.sstatic.net/vXJzp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vXJzp.png" alt="enter image description here" /></a></p>
<p>This post has some helpful related discussion, but doesn't explicitly address the FORM solution: <a href="https://stackoverflow.com/questions/72801333/how-to-pass-url-as-a-path-parameter-to-a-fastapi-route">How to pass URL as a path parameter to a FastAPI route?</a></p>
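<p>One direction I am considering is taking the URL as a query parameter instead of a path parameter, roughly as in this sketch (the route name is a placeholder; I am not sure whether Swagger UI then gives my co-workers the simple form they need):</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import Query

@app.get("/get_from_url")
async def get_from_url(url: str = Query(..., description="An unencoded URL for an external resource")):
    # query parameters are URL-encoded by the client/Swagger UI, so the user types the raw URL
    response = requests.get(url)
    return response.json()
</code></pre>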
|
<python><swagger><fastapi><swagger-ui><urlencode>
|
2022-12-29 16:14:32
| 1
| 3,136
|
Mark Miller
|
74,953,359
| 3,433,489
|
matplotlib pcolor gives blank plot when data is a single column
|
<p>When I use pcolor involving meshes and data consisting of a single column, the resulting plot is just white everywhere. What am I doing wrong?</p>
<pre><code>import matplotlib.pyplot as plt
from numpy import array
U = array([[0],[1]])
V = array([[0],[0]])
data = array([[0.4],[0.5]])
plt.pcolor(U,V,data)
plt.show()
</code></pre>
<p>This is part of a more general program where the arrays can have different sizes. Ideally, the solution should also generalize to arrays that also consist of multiple rows and columns.</p>
|
<python><matplotlib>
|
2022-12-29 16:00:35
| 1
| 1,009
|
user3433489
|
74,953,271
| 15,263,719
|
Is it safe to throw an exception in Python ThreadPoolExecutor map?
|
<p>I have created this example code to try to explain what my question is about:</p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ThreadPoolExecutor
def my_method(x):
if x == 2:
print("error")
raise Exception("Exception raised")
else:
print("ok")
with ThreadPoolExecutor() as executor:
executor.map(my_method, [1, 2, 3])
</code></pre>
<p>I have similar code in my app, but instead of printing a message it just calls an AWS Lambda function. The thing is, some lines before calling the lambda inside the <code>my_method</code> function can throw an exception.
If I don't need to read the result of the <code>executor.map</code> method, the code does not throw an exception and it does finish all the jobs that it can.
In this case the output is:</p>
<pre><code>> ok
> error
> ok
</code></pre>
<p>I know that if i do something like this:</p>
<pre><code>with ThreadPoolExecutor() as executor:
for r in executor.map(my_method, [1, 2, 3]):
print(r)
</code></pre>
<p>these lines will throw an error, and not all tasks will be executed. In my case, not all AWS Lambda functions will be triggered. But if I use <code>executor.map</code> like in the first example, my code will trigger as many lambdas as possible. Is it safe to do that?</p>
<p>I'm asking this because it could be some memory leak, or something that I may not know about.</p>
<p>What are the best practices in these cases?</p>
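<p>For comparison, the only alternative I know of is submitting the tasks individually and inspecting each future, so one failure does not hide the others (a sketch):</p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ThreadPoolExecutor, as_completed

with ThreadPoolExecutor() as executor:
    futures = [executor.submit(my_method, x) for x in [1, 2, 3]]
    for future in as_completed(futures):
        try:
            future.result()          # re-raises any exception from that task
        except Exception as exc:
            print(f"task failed: {exc}")
</code></pre>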
|
<python><python-3.x><concurrency><threadpool><concurrent.futures>
|
2022-12-29 15:51:59
| 1
| 360
|
Danilo Bassi
|
74,953,267
| 18,110,596
|
Randomly select 30% of the sum value in Python Pandas
|
<p>I'm using Python pandas and have a data frame that is pulled from my CSV file:</p>
<pre><code>ID Value
123 10
432 14
213 12
'''
214 2
999 43
</code></pre>
<p>I want to randomly select some rows with the condition that the sum of the selected values = 30% of the total value.</p>
<p>Please advise how should I write this condition.</p>
|
<python><pandas>
|
2022-12-29 15:51:48
| 1
| 359
|
Mary
|
74,953,042
| 505,188
|
Python Scrape Weather Site Returns Nothing
|
<p>I'm trying to scrape a forecast table from wunderground. When inspecting the website, I see this:
<a href="https://i.sstatic.net/KqJF2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KqJF2.png" alt="enter image description here" /></a></p>
<p>Here is the code...When I run this, I get no results.</p>
<pre><code> import requests
from bs4 import BeautifulSoup
URL = "https://www.wunderground.com/calendar/us/ca/santa-barbara/KSBA"
page = requests.get(URL)
soup = BeautifulSoup(page.content, "html.parser")
for ultag in soup.find_all('ul', {'class': 'calendar-days'}):
for litag in ultag.find_all('li'):
print (litag.text)
</code></pre>
<p>any help is appreciated.</p>
|
<python><web-scraping><beautifulsoup>
|
2022-12-29 15:32:18
| 1
| 712
|
Allen
|
74,952,981
| 19,003,861
|
Django_filters - ModelChoiceFilter - How to filter based on the page_ID the user is visiting
|
<p>I would like the user to be able to filter based on categories. I am using <code>django_filters</code>.</p>
<p>Two important things:</p>
<ol>
<li>These categories are not hard coded in the model. They are instead provided by <code>Venue</code>. <code>Venue</code> is the user the categories are displayed to.</li>
<li>I only want to show the categories relevant to the Venue ID page.</li>
</ol>
<p>I am probably lacking a bit of "vocabulary" to express this in a way Django understands.</p>
<p>It's a recurring problem I have, and I somehow always manage to work around it. I think this time I definitely need to do something about it. It would be useful to find out!</p>
<p><strong>model</strong></p>
<pre><code>from django.contrib.auth.models import User
class Venue(models.Model, HitCountMixin):
id = models.AutoField(primary_key=True)
    name = models.CharField(verbose_name="Name", max_length=100)
class Catalogue(models.Model):
product = models.ManyToManyField('Product', blank=True)
product_title = models.TextField('Product Title', blank=True)
venue = models.ForeignKey(Venue, null=True, blank=True, on_delete=models.CASCADE)
</code></pre>
<p><strong>url</strong></p>
<pre><code>path('show_venue/<venue_id>', views.show_venue,name="show-venue"),
</code></pre>
<p><strong>views</strong></p>
<pre><code>def show_venue(request, venue_id):
if request.method == "POST" and 'btnreview_form' in request.POST:
form = CatalogueReviewForm(request.POST)
if form.is_valid():
data = form.save(commit=False)
data.product_id = venue_id # links to product ID in Catalogue Model and return product name
data.venue_id = request.POST.get('venue_id') # prints venue ID in form
data.user_id = request.user.id
data.save()
ven_id = request.POST.get('venue_id')
print(data)
form.save()
return HttpResponseRedirect(ven_id)
else:
venue = Venue.objects.get(pk=venue_id)
menu = Catalogue.objects.filter(venue=venue_id)
categories = Catalogue.objects.filter(venue=venue_id).order_by('category_order')
myFilter = CatalogueFilter(request.GET,queryset=categories)
categories = myFilter.qs
</code></pre>
<p><strong>filter.py</strong> This is where the problem is.</p>
<pre><code>def venue_id(request):
venue = Venue.objects.get(pk=venue_id) #<-- This is obvioulsy not right, but this is how far I got.
queryset = Catalogue.objects.filter(venue=venue)
return queryset
class CatalogueFilter(django_filters.FilterSet):
product_title = django_filters.CharFilter(label='',lookup_expr='icontains',widget=forms.widgets.TextInput(attrs={'placeholder':'Long list! Type the name of your product here.'}))
type = django_filters.ModelChoiceFilter(queryset = venue_id)
class Meta:
model = Catalogue
fields = ['product_title','type']
</code></pre>
|
<python><django><django-views><django-filter>
|
2022-12-29 15:26:01
| 1
| 415
|
PhilM
|
74,952,776
| 3,302,016
|
Merge 2 Dataframes using start_date & end_date
|
<p>I have 2 dataframes which look something like this.</p>
<p>Train Departure Details (<code>data_df</code>)</p>
<pre><code>pd.DataFrame(columns=['Train', 'Origin', 'Dest', 'j_type', 'bucket', 'dep_date'],
data = [['AB001', 'NZM', 'JBP', 'OP', 'S1', '2022-12-27'],
['AB001', 'NZM', 'JBP', 'SP', 'S1', '2023-01-02'],
['AB001', 'NZM', 'JBP', 'OP', 'S1', '2023-01-05'],
['AB002', 'NZM', 'JBP', 'SP', 'S1', '2022-12-21'],
['AB002', 'NZM', 'JBP', 'SP', 'S1', '2023-05-21'],
['AB003', 'NZM', 'RKP', 'OP', 'S2', '2023-01-07'],
['AB012', 'NZM', 'JBP', 'OP', 'S2', '2023-02-07'],
]
)
</code></pre>
<p><a href="https://i.sstatic.net/IEldu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IEldu.png" alt="Train Departure Details" /></a></p>
<p>Fares Dataframe (<code>fares_df</code>)</p>
<pre><code>pd.DataFrame(columns=['Origin', 'Dest', 'j_type', 'bucket', 'start_date', 'end_date', 'fare'],
data = [['NZM', 'JBP', 'OP', 'S1', '2022-01-01', '2022-12-31', 200],
['NZM', 'JBP', 'SP', 'S1', '2023-01-01', '2023-12-31', 400],
['NZM', 'JBP', 'OP', 'S1', '2023-01-01', '2022-01-31', 205],
['NZM', 'JBP', 'OP', 'S1', '2023-01-31', '2023-12-31', 210],
['NZM', 'JBP', 'OP', 'S2', '2023-01-31', '2023-12-31', 215]]
)
</code></pre>
<p><a href="https://i.sstatic.net/a6MyJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a6MyJ.png" alt="Fare Details" /></a></p>
<p>I need to merge these 2 DataFrames such that fares are applied based on the correct <code>dep_date</code> that falls between <code>start_date</code> & <code>end_date</code>, and also on the correct <code>origin</code>, <code>destination</code>, <code>j_type</code> & <code>bucket</code> columns.</p>
<p>Expected Result</p>
<p><a href="https://i.sstatic.net/N0Ggg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N0Ggg.png" alt="enter image description here" /></a></p>
<hr />
<p>What I have tried:</p>
<pre><code>
ranges = fares[['start_date', 'end_date']].drop_duplicates().reset_index(drop=True)
# Get array of dep_date, start_dates, and end_dates
dep_dates = data_df['dep_date'].values
low_date = ranges['start_date'].values
high_date = ranges['end_date'].values
# Generate arrays for which set of date ranges the dep_date falls between
i, j = np.where((dep_dates[:, None] >= low_date) & (dep_dates[:, None] <= high_date))
# Add date range columns to data_df:
data_df.reset_index(inplace=True)
data_df.loc[i, 'fare_start_date'] = np.column_stack([ranges['start_date'].values[j]])
data_df.loc[i, 'fare_end_date'] = np.column_stack([ranges['end_date'].values[j]])
data_df.merge(
fares_df,
left_on=['Origin', 'Dest',
'bucket', 'j_type',
'fare_start_date', 'fare_end_date'],
right_on=['Origin', 'Dest',
'bucket', 'j_type',
'start_date', 'end_date'],
how='left')
</code></pre>
<p>However this does not give me the correct result after merge.</p>
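<p>A simpler approach I am also considering is merging on the exact-match keys first and then keeping only the rows where <code>dep_date</code> falls inside the fare window (a sketch; I have not validated it against the expected result above):</p>
<pre class="lang-py prettyprint-override"><code># make sure the date columns are real datetimes before comparing
data_df['dep_date'] = pd.to_datetime(data_df['dep_date'])
fares_df[['start_date', 'end_date']] = fares_df[['start_date', 'end_date']].apply(pd.to_datetime)

merged = data_df.merge(fares_df, on=['Origin', 'Dest', 'j_type', 'bucket'], how='left')
in_window = merged['dep_date'].between(merged['start_date'], merged['end_date'])
result = merged[in_window | merged['start_date'].isna()]   # keep rows with no fare match at all
</code></pre>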
|
<python><pandas>
|
2022-12-29 15:06:39
| 1
| 4,859
|
Mohan
|
74,952,571
| 2,696,230
|
pandas.Series.str.replace returns nan
|
<p>I have some text stored in a pandas.Series, for example:</p>
<pre><code>df.loc[496]
'therapist and friend died in ~2006 Parental/Caregiver obligations:\n'
</code></pre>
<p>I need to replace the number in the text with a full date, so I wrote:</p>
<pre><code>df.str.replace(
pat=r'(?:[^/])(\d{4}\b)',
repl= lambda m: ''.join('Jan/1/', m.groups()[0]),
regex=True
)
</code></pre>
<p>but the output is nan; though I tried to test the regular expression using findall, and there is no issue:</p>
<pre><code>df.str.findall(r'(?:[^/])(\d{4}\b)')
496 [2006]
</code></pre>
<p>I don't understand what the issue is. Most of the problems raised are about cases where the Series type is numeric and not str, but my case is different: the data type obviously is str. Nonetheless, I tried <code>.astype(str)</code> and still get the same result, nan.</p>
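<p>One thing I suspect is the repl lambda itself: <code>str.join</code> takes a single iterable, so <code>''.join('Jan/1/', m.groups()[0])</code> raises a TypeError. A corrected sketch of what I intend is below, though I am not sure whether that alone explains the nan output:</p>
<pre class="lang-py prettyprint-override"><code>df.str.replace(
    pat=r'([^/])(\d{4}\b)',
    repl=lambda m: m.group(1) + 'Jan/1/' + m.group(2),   # keep the character the pattern consumed
    regex=True
)
</code></pre>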
|
<python><pandas><regex><python-re>
|
2022-12-29 14:48:31
| 1
| 1,763
|
Abdulkarim Kanaan
|
74,952,552
| 10,434,565
|
Python and Plotly: Two sliders that controls an intersection of the data to plot
|
<p>I'm trying to make a plot where I have two sliders that filter on a single plot. The data is a video sequence that is generated from a ML model and is a 4-d tensor, where the dimensions are Epoch x Frame x Height x Width. Ideally, I want to have one slider over Epoch to see how each frame evolves as the ML model evolves, and another to see the animation play out.</p>
<p>I'm stuck with this so far:</p>
<pre class="lang-py prettyprint-override"><code># Epoch x Simulation Frame x Channel x Height x Width
# The first epoch is the sanity check, so we remove it.
render_comparison_per_epoch_np = np.stack(
self.render_comparison_per_epoch[1:])
# Epoch x Simulation Frame x Height x Width
render_comparison_per_epoch_np_one_channel = np.mean(
render_comparison_per_epoch_np, axis=2)
dimensions = render_comparison_per_epoch_np_one_channel.shape
dimensions_epoch = dimensions[0]
dimensions_frame = dimensions[1]
fig = go.Figure()
for epoch_index in range(dimensions_epoch):
for frame_index in range(dimensions_frame):
fig.add_trace(
go.Heatmap(
z=render_comparison_per_epoch_np_one_channel[
# Heatmap is flipped on the y-axis, so we flip it back.
epoch_index, frame_index, ::-1, :],
colorscale="Viridis",
showscale=False,
name=f"Epoch {epoch_index}"))
fig.data[0].visible = True # type: ignore
steps_epoch = []
steps_frame = []
for index_epoch in range(dimensions_epoch): # type: ignore
for index_frame in range(dimensions_frame): # type: ignore
step = dict(
method="update",
args=[
{
"visible": [False] * len(fig.data) # type: ignore
},
{
"title": f"Epoch {index_epoch}, Frame {index_frame}"
}
], # layout attribute
)
step["args"][0]["visible"][index_epoch * dimensions_epoch +
index_frame] = True
steps_epoch.append(step)
sliders = [
dict(
active=10,
currentvalue={"prefix": "Epoch: "},
pad={"t": 50},
steps=steps_epoch,
),
dict(
active=10,
currentvalue={"prefix": "Frame: "},
pad={"t": 150},
steps=steps_frame,
)
]
fig.update_layout(sliders=sliders,)
# Convert to html
fig.write_html("render_comparison.html")
wandb.log(
{"simulation_render_per_epoch": wandb.Html("render_comparison.html")})
</code></pre>
<p>I am stuck because I do not see a way to do the following:</p>
<ul>
<li>the first slider specifies which epoch to display, so ideally it creates a boolean array where each index that corresponds to the epoch is set to True. Practically this means a contiguous region of the "visible" array is set to True</li>
<li>the second slider specifies which frame to display, so a boolean array where the specified frame of each contiguous region is set to True</li>
</ul>
<p>For example, if #epoch = 2 and #frame = 4, I'd have an array of 8 boolean values.</p>
<p>If I want to see the first epoch, the boolean array that the epoch slider generates would be: [True, True, True, True, False, False, False, False]</p>
<p>If I want to see the second frame, I'd have this array: [False, True, False, False, False, True, False, False]</p>
<p>Then to see the image that I want, I'd get the intersection: [False, True, False, False, False, False, False, False]</p>
<p>And use that as the value to the "visible" key.</p>
<p>Though that would require the slider to have some intermediate logic before displaying, which I do not know how to achieve.</p>
<p>Any ideas?</p>
<p>EDIT: I have been able to produce the behaviour I seek with Matplotlib, but I am unable to save the plot to disk.</p>
<p>Here is the code</p>
<pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots(1, 1)
# Display the first epoch.
l = plt.imshow(render_comparison_per_epoch_np_one_channel[0][0])
axepoch = plt.axes([0.25, 0.1, 0.65, 0.03])
axframe = plt.axes([0.25, 0.15, 0.65, 0.03])
slider_epoch = Slider(
ax=axepoch,
label="Epoch",
valmin=0,
valmax=dimensions_epoch - 1,
valinit=0,
valstep=1,
)
slider_frame = Slider(
ax=axframe,
label="Frame",
valmin=0,
valmax=dimensions_frame - 1,
valinit=0,
valstep=1,
)
def update(val):
epoch = int(slider_epoch.val)
frame = int(slider_frame.val)
l.set_data(render_comparison_per_epoch_np_one_channel[epoch][frame])
fig.canvas.draw_idle()
slider_epoch.on_changed(update)
slider_frame.on_changed(update)
plt.show()
</code></pre>
|
<python><plotly>
|
2022-12-29 14:46:12
| 0
| 331
|
truvaking
|
74,952,545
| 10,966,677
|
openpyxl delete rows from formatted table / error referencing table
|
<p>I am trying to delete rows from a formatted table in Excel using the <code>delete_rows()</code> method. However, this does not delete the rows but only the content of the cells.</p>
<p>As an info, you can format a range as table using openpyxl as described in the documentation: <a href="https://openpyxl.readthedocs.io/en/latest/worksheet_tables.html" rel="nofollow noreferrer">https://openpyxl.readthedocs.io/en/latest/worksheet_tables.html</a></p>
<p>I have a formatted table called <em>Table5</em> in the worksheet <em>Sheet1</em>:</p>
<p><a href="https://i.sstatic.net/BSgFb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BSgFb.png" alt="enter image description here" /></a></p>
<p>To delete 2 rows from row 5, I use the following commands:</p>
<pre><code>import openpyxl as xl
wb = xl.load_workbook('data.xlsx')
ws = wb['Sheet1']
ws.delete_rows(5, 2) # this is the delete command
wb.save('data.xlsx')
</code></pre>
<p>With the command <code>delete_rows()</code>, the range of the formatted table remains till row 6, whereas it shrinks when I delete the rows directly in Excel.</p>
<p><a href="https://i.sstatic.net/NM1p0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NM1p0.png" alt="enter image description here" /></a></p>
<p>The question: How do I delete properly both the data and the formatted range?</p>
<p>Corollary note:</p>
<p>Also when I insert data, the table range does not expand. For example:</p>
<pre><code>for i in range(4, 9):
ws.cell(row=i, column=1).value = i - 1
ws.cell(row=i, column=2).value = (i - 1) * 100
</code></pre>
<p>The range stays the same i.e. till row 6, whereas the range expands automatically by inserting data into Excel manually.</p>
<p><a href="https://i.sstatic.net/TAlB5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TAlB5.png" alt="enter image description here" /></a></p>
|
<python><excel><openpyxl>
|
2022-12-29 14:45:50
| 2
| 459
|
Domenico Spidy Tamburro
|
74,952,504
| 6,446,053
|
Elegant way of multiplying different constant values with different columns in Pandas
|
<p>The objective is to multiply each column in Pandas by a constant value. Each column has its own constant value.</p>
<p>For example, the columns <code>'a_b_c','dd_ee','ff_ff','abc','devb'</code> are multiplied by the constants 15, 20, 15, 15, 20, respectively.</p>
<p>The constants values and its associated column is store in a dict <code>const_val</code></p>
<pre><code>const_val=dict(a_b_c=15,
dd_ee=20,
ff_ff=15,
abc=15,
devb=20,)
</code></pre>
<p>Currently, I am using a <code>for-loop</code> to multiply each column by its associated constant value, as shown in the code below:</p>
<pre><code>for dpair in const_val:
df[('per_a',dpair)]=df[dpair]*const_val[dpair]/reval
</code></pre>
<p>However, I wonder whether there is a more elegant way of doing this.</p>
<p>The full code is provided below</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(0)
const_val=dict(a_b_c=15,
dd_ee=20,
ff_ff=15,
abc=15,
devb=20,)
df = pd.DataFrame(data=np.random.randint(5, size=(3, 6)),
columns=['id','a_b_c','dd_ee','ff_ff','abc','devb'])
reval=6
for dpair in const_val:
df[('per_a',dpair)]=df[dpair]*const_val[dpair]/reval
</code></pre>
<p>The expected output is as below</p>
<pre><code> id a_b_c dd_ee ... (per_a, ff_ff) (per_a, abc) (per_a, devb)
0 4 0 3 ... 7.5 7.5 3.333333
1 3 2 4 ... 0.0 0.0 13.333333
2 2 1 0 ... 2.5 2.5 0.000000
</code></pre>
<p>Please note that the</p>
<pre><code>(per_a, ff_ff) (per_a, abc) (per_a, devb)
</code></pre>
<p>are MultiIndex columns. The representation might look different in your environment.</p>
<p>p.s., I am using IntelliJ IDEA</p>
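<p>The vectorised version I had in mind is roughly this sketch; what I cannot see is how to get the same <code>('per_a', name)</code> column labels cleanly:</p>
<pre class="lang-py prettyprint-override"><code>factors = pd.Series(const_val)
per_a = df[factors.index].mul(factors, axis=1) / reval   # column-wise multiply, aligned on column names
per_a.columns = [('per_a', c) for c in per_a.columns]
df = pd.concat([df, per_a], axis=1)
</code></pre>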
|
<python><pandas><performance>
|
2022-12-29 14:41:27
| 3
| 3,297
|
rpb
|
74,952,320
| 950,275
|
Django and pylint: inconsistent module reference?
|
<p>Possibly because of my noobness, I can't get pylint and django management commands to agree about how to import files in my project.</p>
<h2>Setup</h2>
<pre><code># venv
cd $(mktemp -d)
virtualenv venv
venv/bin/pip install django pylint pylint-django
# django
venv/bin/django-admin startproject foo
touch foo/__init__.py
touch foo/foo/models.py
# management command
mkdir -p foo/foo/management/commands
touch foo/foo/management/__init__.py
touch foo/foo/management/commands/__init__.py
echo -e "import foo.models\nclass Command:\n def run_from_argv(self, options):\n pass" > foo/foo/management/commands/pa.py
# install
perl -pe 's/(INSTALLED_APPS = \[)$/\1 "foo",\n/' -i foo/foo/settings.py
# testing
venv/bin/python foo/manage.py pa
venv/bin/pylint --load-plugins pylint_django --django-settings-module=foo.foo.settings --errors-only foo
</code></pre>
<h2>Result</h2>
<p>You'll note that <code>manage.py</code> is happy with <code>import foo.models</code>, but <code>pylint</code> isn't:</p>
<pre><code>************* Module foo.foo.management.commands.pa
foo/foo/management/commands/pa.py:1:0: E0401: Unable to import 'foo.models' (import-error)
foo/foo/management/commands/pa.py:1:0: E0611: No name 'models' in module 'foo' (no-name-in-module)
</code></pre>
<p>If I change it to <code>import foo.foo.models</code>, <code>pylint</code> passes but <code>manage.py</code> breaks:</p>
<pre><code>...
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/tmp/tmp.YnQCNTrkbX/foo/foo/management/commands/pa.py", line 1, in <module>
import foo.foo.models
ModuleNotFoundError: No module named 'foo.foo'
</code></pre>
<p>What am I missing?</p>
|
<python><django><pylint>
|
2022-12-29 14:25:42
| 2
| 380
|
Nitz
|
74,952,037
| 14,190,526
|
Python a handy way to mock/patch only "top-level" / first call to object
|
<p>Firstly the code:</p>
<pre><code>class Foo:
...
def apply_* ...
...
def apply_all(self, ) -> None:
self.apply_goal_filter()
self.apply_gender_filter()
self.apply_age_filter()
... # Many apply_*
</code></pre>
<p>To test this function I should mock every <code>apply_*</code> method in the class and call <code>assert_called</code> on it.<br />
Can I do this automatically and wrap my class somehow to mock everything except top-level calls?<br />
I.e.:</p>
<pre><code>foo = Foo()
foo.apply_all() # The real call to method
# Inside "foo.apply_all"
self.apply_goal_filter() # Mock call
self.apply_gender_filter() # Mock call
# Another example:
Foo = Mock(...)
Foo.top_level_call # bound method ...
Foo.top_level_call.second_level_call # Mock object ...
</code></pre>
<p>Most likely I should patch <code>self/cls</code> somehow.<br />
The workaround is to call the class method rather than the instance method and pass <code>self</code> manually, but that will not help for classmethods.<br />
<code>Mock(wraps=foo)</code> will not help because it wraps the object and executes all the calls.</p>
|
<python>
|
2022-12-29 13:58:00
| 1
| 1,100
|
salius
|
74,951,826
| 7,422,352
|
Correct number of workers for Gunicorn
|
<p>According to the Gunicorn's <a href="https://docs.gunicorn.org/en/latest/design.html#how-many-workers" rel="nofollow noreferrer">documentation</a>, the number of workers recommended are <code>(2 * #cores) + 1</code>.</p>
<p>They have provided the following explanation:</p>
<blockquote>
<p>Generally we recommend (2 x $num_cores) + 1 as the number of workers to start off with. While not overly scientific, the formula is based on the assumption that for a given core, one worker will be reading or writing from the socket while the other worker is processing a request.</p>
</blockquote>
<p>According to the above logic, 2 workers/processes (as a worker is a process) share the same core. Then where did the additional <code>1</code> come from?</p>
<p>Secondly, 2 processes are sharing the same core. Isn't this a bad architecture?</p>
|
<python><gunicorn>
|
2022-12-29 13:39:10
| 1
| 5,381
|
Deepak Tatyaji Ahire
|
74,951,747
| 2,039,339
|
Overfitted model performs poorly on training data
|
<p>What can be the cause of accuracy being >90% while the model predicts one class in 100% of cases in a multiclass classification problem? I would expect that an overfitted model with high accuracy on the training data would predict well on the training data.</p>
<p>Model:</p>
<pre><code>model = tf.keras.Sequential([
tf.keras.layers.Rescaling(scale=1./255., offset=0.0),
mobilenet_custom,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(3, activation='softmax')
])
</code></pre>
<p>Where <code>mobilenet_custom</code> is a <code>tf.keras.applications.MobileNetV2</code> model with random weights and input is modified to have 7 channels.</p>
<p>I am trying to classify frames of a clip into three classes. The training data set is kind of balanced:</p>
<pre><code>Total count: 15849
Labels[ A ]: 3906
Labels[ B ]: 5955
Labels[ O ]: 5988
Batch Shape X: (32, 224, 224, 7)
Batch Shape y: (32, 3)
</code></pre>
<p>The accuracy is >90% after 2 epochs. (the val_accuracy is around 35%)</p>
<p>However I also record confusion matrices at each epoch end using the <code>Callback</code> class using the following function to collect data for the confusion matrix:</p>
<pre><code>def _collect_validation_data(self, arg_datagen):
predicts = []
truths = []
print("\nCollecting data for epoch evaluation...")
batch_num_per_epoch = arg_datagen.__len__()
for batch_i in range(batch_num_per_epoch):
X_batch, y_batch = arg_datagen[batch_i]
y_pred = self.model.predict(X_batch, verbose=0)
y_true = y_batch
predicts += [item for item in np.argmax(y_pred, axis=1)]
truths += [item for item in np.argmax(y_true, axis=1)]
print("Batch: {}/{}".format(batch_i+1, batch_num_per_epoch), end='\r')
print("\nDone!\n")
return truths, predicts
</code></pre>
<p>Every time the confusion matrices look like following:</p>
<p><img src="https://i.sstatic.net/J3QC4.png" alt="Confusion matrix for Training data" /></p>
<p>When I saved the values of <code>y_pred</code> and <code>y_true</code> that are passed to compute the accuracy it confirms that the accuracy is calculated correctly.</p>
<p><img src="https://i.sstatic.net/xGoIW.png" alt="Data from which the metrics are computed - last batch of 2nd epoch" /></p>
|
<python><tensorflow><keras>
|
2022-12-29 13:31:02
| 1
| 406
|
Khamyl
|
74,951,508
| 6,067,528
|
Why is warning log not being passed through my custom formatter?
|
<p>I'm trying to have every log message my Python application produces formatted with my custom formatter. The issue is that my application is not logging everything with my custom formatter, e.g. this RuntimeWarning:</p>
<p><code>__main__:1: RuntimeWarning: divide by zero encountered in true_divide</code></p>
<p>whereas I am expecting something like - i.e. formatted with jsonlogger:</p>
<p><code>{"name": "root", "funcName": "<module>", "level": null, "lineno": 1, "threadName": "MainThread", "message": "RuntimeWarning: divide by zero encountered in true_divide"}</code></p>
<p>Does anyone have any advice as to how to fix? I assumed assigning root logger to use <code>json_handler</code> would be enough. To reproduce, here is the code!</p>
<pre><code>
import logging
import os
import sys
import numpy as np
import pandas as pd
from pythonjsonlogger import jsonlogger
from logging import getLogger, config
log_level = os.environ.get("LOG_LEVEL", "INFO")
log_format = "%(name) %(funcName) %(level) %(lineno) %(threadName) %(message)"
config.dictConfig({
'version': 1,
'formatters': {'json': {
'()': jsonlogger.JsonFormatter,
'format': log_format,
}},
'handlers': {
'json_handler': {
'class': 'logging.StreamHandler',
'formatter': 'json',
'level': log_level,
'stream': sys.stdout
}
},
'loggers': {
'': {
'handler': ['json_handler'],
'level': log_level,
'propagate': False
},
},
'root': {
'level': log_level,
'handlers': ['json_handler'],
},
'disable_existing_loggers': True
})
logging.info("test") # this formats
series_raw = pd.Series([0, float('nan'), 1,4,6,-1], [0,1,2,3,4,5])
x = series_raw.values
values = np.where(np.logical_or(x == 0, np.isnan(x)), np.nan, 10 /x) # warning from here does not
</code></pre>
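<p>The only lead I have found so far is <code>logging.captureWarnings</code>, which should route <code>warnings.warn</code> output (and numpy's RuntimeWarnings are issued through the warnings module) to the <code>'py.warnings'</code> logger, which then propagates to the root handler; a sketch of what I mean is below, though I am not sure it is the right fix:</p>
<pre class="lang-py prettyprint-override"><code>logging.captureWarnings(True)   # warnings are then emitted via the 'py.warnings' logger,
                                # which propagates to the root logger's json handler

values = np.where(np.logical_or(x == 0, np.isnan(x)), np.nan, 10 / x)
</code></pre>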
|
<python><logging><python-logging>
|
2022-12-29 13:09:17
| 1
| 1,313
|
Sam Comber
|
74,951,506
| 12,097,553
|
django query: groupby and keep rows that are the most recent
|
<p>I am struggling on a query that is supposed to group by a model attribute and return for each unique value of that model attribute, the row that has the most recent date.</p>
<p>I can't manage to get the output that I am looking for in a way that will be digestible for my template.</p>
<p>here is the model I am trying to query:</p>
<pre><code>class Building(models.Model):
building_id = models.AutoField(primary_key=True)
created_at = models.DateTimeField(auto_now_add=True, blank=True)
project = models.CharField(max_length=200,blank=True, null=True)
</code></pre>
<p>Here is my attempt #1: get the unique projects and use a for loop:</p>
<pre><code>projects = list(Building.objects.all().values_list('project', flat=True).distinct())
context = {}
for project in projects:
proj = Building.objects.filter(project=project).latest("created_at")
context[project] = proj
</code></pre>
<p>this is ok but it does not allow me to access the queryset as <code>{% for i in projects %} </code>in template => causing pk problems for links</p>
<p>my other attempt tries to get rows, to keep everything in a queryset:</p>
<pre><code>projects = Building.objects.all().values_list('project', flat=True).distinct()
proj = Building.objects.filter(project__in=list(projects)).latest("created_at")
</code></pre>
<p>that only returns the latest entered row regardless of the project name. This is close to what I am looking for, but not quite since I try to output the latest entered row for each unique project value.</p>
<p>What is the proper way to do that?</p>
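<p>The direction I was trying to go is a correlated Subquery that picks the latest building_id per project, so everything stays a single queryset (a sketch; I do not know if this is the idiomatic way):</p>
<pre class="lang-py prettyprint-override"><code>from django.db.models import OuterRef, Subquery

latest_per_project = (
    Building.objects
    .filter(project=OuterRef('project'))
    .order_by('-created_at')
    .values('building_id')[:1]
)
# one row per unique project: the most recently created Building
buildings = Building.objects.filter(building_id=Subquery(latest_per_project))
</code></pre>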
|
<python><django>
|
2022-12-29 13:09:09
| 2
| 1,005
|
Murcielago
|
74,951,416
| 16,332,690
|
Adding element of a range of values to every N rows in a pandas DataFrame
|
<p>I have the following dataframe that is ordered and consecutive:</p>
<pre class="lang-py prettyprint-override"><code> Hour value
0 1 41
1 2 5
2 3 7
3 4 107
4 5 56
5 6 64
6 7 46
7 8 50
8 9 95
9 10 81
10 11 8
11 12 94
</code></pre>
<p>I want to add a range of values to each N rows (4 in this case), e.g.:</p>
<pre class="lang-py prettyprint-override"><code> Hour value val
0 1 41 1
1 2 5 1
2 3 7 1
3 4 107 1
4 5 56 2
5 6 64 2
6 7 46 2
7 8 50 2
8 9 95 3
9 10 81 3
10 11 8 3
11 12 94 3
</code></pre>
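<p>I suspect the answer is something simple based on the positional index, e.g. this sketch:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

df['val'] = np.arange(len(df)) // 4 + 1   # 1 for the first 4 rows, 2 for the next 4, ...
# or, when the index is already a clean 0..n-1 range: df['val'] = df.index // 4 + 1
</code></pre>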
|
<python><pandas>
|
2022-12-29 13:00:38
| 2
| 308
|
brokkoo
|
74,951,314
| 5,398,127
|
Unable to install fastquant in new virtualenv conda
|
<p>In the base environment I have already installed the fastquant package,</p>
<p>but after creating and activating a new conda environment I am not able to install the fastquant package.</p>
<p>I already have pandas 1.5.2,</p>
<p>but when I install fastquant it tries to install pandas 1.1.5, as that is a dependency:</p>
<pre><code>Collecting fastquant
Using cached fastquant-0.1.8.0-py3-none-any.whl (5.9 MB)
Collecting oauthlib>=3.1.0
Using cached oauthlib-3.2.2-py3-none-any.whl (151 kB)
Collecting chardet>=3.0.4
Using cached chardet-5.1.0-py3-none-any.whl (199 kB)
Requirement already satisfied: tqdm>=4.28.1 in c:\users\11832\anaconda3\envs\fbprophetenv\lib\site-packages (from fastquant) (4.64.1)
Requirement already satisfied: six>=1.13.0 in c:\users\11832\anaconda3\envs\fbprophetenv\lib\site-packages (from fastquant) (1.16.0)
Collecting tweepy>=3.8.0
Using cached tweepy-4.12.1-py3-none-any.whl (101 kB)
Requirement already satisfied: soupsieve>=1.9.5 in c:\users\11832\anaconda3\envs\fbprophetenv\lib\site-packages (from fastquant) (2.3.2.post1)
Collecting black>=19.10b0
Using cached black-23.1a1-cp310-cp310-win_amd64.whl (1.2 MB)
Collecting nltk>=3.5
Using cached nltk-3.8-py3-none-any.whl (1.5 MB)
Requirement already satisfied: urllib3>=1.25.7 in c:\users\11832\anaconda3\envs\fbprophetenv\lib\site-packages (from fastquant) (1.26.13)
Requirement already satisfied: certifi>=2019.11.28 in c:\users\11832\anaconda3\envs\fbprophetenv\lib\site-packages (from fastquant) (2022.12.7)
Collecting pandas==1.1.5
Using cached pandas-1.1.5.tar.gz (5.2 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... \
</code></pre>
<p>but is it failing after this with following error:</p>
<pre><code>creating build\lib.win-amd64-cpython-310\pandas\_libs\tslibs
copying pandas\_libs\tslibs\__init__.py -> build\lib.win-amd64-cpython-310\pandas\_libs\tslibs
creating build\lib.win-amd64-cpython-310\pandas\_libs\window
copying pandas\_libs\window\__init__.py -> build\lib.win-amd64-cpython-310\pandas\_libs\window
creating build\lib.win-amd64-cpython-310\pandas\io\formats\templates
copying pandas\io\formats\templates\html.tpl -> build\lib.win-amd64-cpython-310\pandas\io\formats\templates
UPDATING build\lib.win-amd64-cpython-310\pandas/_version.py
set build\lib.win-amd64-cpython-310\pandas/_version.py to '1.1.5'
running build_ext
building 'pandas._libs.algos' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pandas
Failed to build pandas
ERROR: Could not build wheels for pandas, which is required to install pyproject.toml-based projects
</code></pre>
|
<python><pandas><package><anaconda><conda>
|
2022-12-29 12:48:00
| 1
| 3,480
|
Stupid_Intern
|
74,951,298
| 11,885,361
|
TypeError: slice indices must be integers or None or have an __index__ method error in scraping a help page
|
<p>I created a Python script to scrape the <strong>Facebook</strong> <em>help page</em>. I want to scrape <code>cms_object_id</code>, <code>cmsID</code> and <code>name</code>. These values are in a script tag, so I first tried to find all <code><script></code> tags, then iterated over them; inside the tags there is <code>__bbox</code>, which contains the values I want to scrape.</p>
<p>so this is my script:</p>
<pre><code>import json
import requests
import bs4
from Essentials import Static
class CmsIDs:
def GetIDs():
# cont = requests.get(""https://www.facebook.com:443/help"", headers=Static.headers) # syntax error
cont = requests.get("https://www.facebook.com:443/help", headers=Static.headers)
soup = bs4.BeautifulSoup(cont.content, "html5lib")
text = soup.find_all("script")
start = ""
txtstr = ""
for i in range(len(text)):
mystr = text[i]
# mystr = text[i]
print("this is: ", mystr.find('__bbox":'))
if text[i].get_text().find('__bbox":') != -1:
# print(i, text[i].get_text())
txtstr += text[i].get_text()
start = text[i].get_text().find('__bbox":') + len('__bbox":')
print('start:', start)
count = 0
for end, char in enumerate(txtstr[start:], start):
if char == '{':
count += 1
if char == '}':
count -= 1
if count == 0:
break
print('end:', end)
# --- convert JSON string to Python structure (dict/list) ---
data = json.loads(txtstr[start:end+1])
# pp.pprint(data)
print('--- search ---')
CmsIDs.search(data)
# --- use recursion to find all 'cms_object_id', 'cmsID', 'name' ---
def search(data):
if isinstance(data, dict):
found = False
if 'cms_object_id' in data:
print('cms_object_id', data['cms_object_id'])
found = True
if 'cmsID' in data:
print('cmsID', data['cmsID'])
found = True
if 'name' in data:
print('name', data['name'])
found = True
if found:
print('---')
for val in data.values():
CmsIDs.search(val)
if isinstance(data, list):
for val in data:
CmsIDs.search(val)
if __name__ == '__main__':
CmsIDs.GetIDs()
</code></pre>
<p>The page contains <code>cms_object_id</code>, <code>cmsID</code> and <code>name</code>, so I wanted to scrape all these 3 values, but I am getting an error:</p>
<pre><code>for end, char in enumerate(txtstr[start:], start):
TypeError: slice indices must be integers or None or have an __index__ method
</code></pre>
<p>So how can I solve this error and reach the ultimate goal?</p>
|
<python><python-3.x><web-scraping><beautifulsoup><python-requests>
|
2022-12-29 12:46:01
| 1
| 630
|
hanan
|
74,951,293
| 8,618,380
|
Big Query: Create table with time partitioning and clustering fields using Python
|
<p>I can successfully create a Big Query table in Python as:</p>
<pre><code>from google.cloud import bigquery
bq_client = bigquery.Client()
table_name = "my_test_table"
dataset = bq_client.dataset("MY_TEST_DATASET")
table_ref = dataset.table(table_name)
table = bigquery.Table(table_ref)
table = bq_client.create_table(table)
</code></pre>
<p>And later I upload a local Pandas DataFrame as:</p>
<pre><code># --- Define BQ options ---
job_config = bigquery.LoadJobConfig()
job_config.write_disposition = "WRITE_APPEND"
job_config.source_format = bigquery.SourceFormat.CSV
# --- Load data ---
job = bq_client.load_table_from_dataframe(
df, f"MY_TEST_DATASET.{table_name}", job_config=job_config
)
</code></pre>
<p>How can I specify, while creating the table and using Python:</p>
<ol>
<li>Partition by daily ingestion time</li>
<li>As clustering fields ["business_id", "software_house", "product_id"]</li>
</ol>
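<p>What I have pieced together from the client library docs so far is setting these on the Table object before create_table, roughly as in this sketch (I am unsure whether it is complete or correct):</p>
<pre class="lang-py prettyprint-override"><code>table = bigquery.Table(table_ref)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY   # daily ingestion-time partitioning
)
table.clustering_fields = ["business_id", "software_house", "product_id"]
table = bq_client.create_table(table)
</code></pre>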
|
<python><google-bigquery>
|
2022-12-29 12:44:57
| 1
| 1,975
|
Alessandro Ceccarelli
|
74,951,025
| 11,693,768
|
Convert a column of integers and integer + strings into all integers using multiplication based on what is in the string
|
<p>How do I convert this column of values, mostly integers and some strings, to all integers?</p>
<p>The column looks like this,</p>
<pre><code>x1
___
128455551
92571902
123125
985166
np.NaN
2241
1.50000MMM
2.5255MMM
1.2255MMMM
np.NaN
...
</code></pre>
<p>And I want it to look like this: for the rows with MMM, the characters are dropped and the number is multiplied by a billion (10**9) and converted to an integer.</p>
<p>For the rows with MMMM, the characters are dropped and the number is multiplied by a trillion (10**12) and converted to an integer.</p>
<p>Basically each M means 1,000. There are other columns, so I cannot drop the <code>np.NaN</code> rows.</p>
<pre><code>x1
___
128455551
92571902
123125
985166
np.NaN
2241
1500000000
2525500000
1225500000000
np.NaN
...
</code></pre>
<p>I tried this,</p>
<pre><code>df['x1'] =np.where(df.x1.astype(str).str.contains('MMM'), (df.x1.str.replace('MMM', '').astype(float) * 10**9).astype(int), df.x1)
</code></pre>
<p>When I do it with just the 2 rows it works fine, but when I do it with the whole dataframe I get this error, <code>IntCastingNaNError: Cannot convert non-finite values (NA or inf) to integer</code>.</p>
<p>How do I fix it?</p>
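<p>For reference, another direction I sketched (not fully tested) only writes converted values back where the pattern matched, leaving the NaN rows untouched:</p>
<pre class="lang-py prettyprint-override"><code>s = df['x1'].astype(str)
mask = s.str.contains('MMM', na=False)
multiplier = 1000 ** s[mask].str.count('M')               # each M stands for a factor of 1,000
converted = (s[mask].str.replace('M', '', regex=False).astype(float) * multiplier).astype('int64')
df.loc[mask, 'x1'] = converted
</code></pre>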
|
<python><pandas><dataframe><numpy>
|
2022-12-29 12:18:58
| 3
| 5,234
|
anarchy
|
74,951,017
| 9,142,914
|
Tensorflow 1.15, Keras 2.2.5, on_batch_end or on_train_batch_end not triggering
|
<p>This is my custom callblack:</p>
<pre><code>class CustomCallback(keras.callbacks.Callback):
def on_train_begin(self, logs=None):
print("on_train_begin")
def on_train_batch_end(self, batch, logs=None):
print("on_train_batch_end")
def on_batch_end(self, batch, logs=None):
print("on_batch_end")
</code></pre>
<p>Problem: during training I only see "on_train_begin" appearing in the output.</p>
<p>My Tensorflow version: 1.15
My Keras version: 2.2.5</p>
<p>Any solution ?</p>
<p>Thank you</p>
<p>p.s: I don't want to change my Keras/TF versions</p>
|
<python><tensorflow><keras>
|
2022-12-29 12:18:35
| 1
| 688
|
ailauli69
|
74,950,971
| 10,037,034
|
How to solve great expectations "MetricResolutionError: Cannot compile Column object until its 'name' is assigned." Error?
|
<p>I am trying to use great expectations.<br />
The function I want to use is <code>expect_compound_columns_to_be_unique</code>.
This is the code (main code - template):</p>
<pre><code>import datetime
import pandas as pd
import great_expectations as ge
import great_expectations.jupyter_ux
from great_expectations.core.batch import BatchRequest
from great_expectations.checkpoint import SimpleCheckpoint
from great_expectations.exceptions import DataContextError
context = ge.data_context.DataContext()
# Note that if you modify this batch request, you may save the new version as a .json file
# to pass in later via the --batch-request option
batch_request = {'datasource_name': 'impala_okh', 'data_connector_name': 'default_inferred_data_connector_name', 'data_asset_name': 'okh.okh_forecast_prod', 'limit': 1000}
# Feel free to change the name of your suite here. Renaming this will not remove the other one.
expectation_suite_name = "okh_forecast_prod"
try:
suite = context.get_expectation_suite(expectation_suite_name=expectation_suite_name)
print(f'Loaded ExpectationSuite "{suite.expectation_suite_name}" containing {len(suite.expectations)} expectations.')
except DataContextError:
suite = context.create_expectation_suite(expectation_suite_name=expectation_suite_name)
print(f'Created ExpectationSuite "{suite.expectation_suite_name}".')
validator = context.get_validator(
batch_request=BatchRequest(**batch_request),
expectation_suite_name=expectation_suite_name
)
column_names = [f'"{column_name}"' for column_name in validator.columns()]
print(f"Columns: {', '.join(column_names)}.")
validator.head(n_rows=5, fetch_all=False)
</code></pre>
<p>Using this code calling</p>
<pre><code>validator.expect_compound_columns_to_be_unique(['column1', 'column2'])
</code></pre>
<p>produces the following error:</p>
<blockquote>
<p>MetricResolutionError: Cannot compile Column object until its 'name' is assigned.</p>
</blockquote>
<p>How can I solve this problem?</p>
|
<python><great-expectations>
|
2022-12-29 12:14:03
| 1
| 1,311
|
Sevval Kahraman
|
74,950,933
| 12,858,691
|
Pandas automatically infer best dtype: str to int not working
|
<p>On a dataframe with > 100 columns I want pandas (v1.4.2) to <strong>automatically</strong> convert all columns to the "best" dtype. According to the docs <a href="https://pandas.pyda" rel="nofollow noreferrer">df.convert_dtypes()</a> or <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.infer_objects.html#pandas.DataFrame.infer_objects" rel="nofollow noreferrer">df.infer_objects()</a> should do the trick. Consider the following example:</p>
<pre><code>>>df = pd.DataFrame({"A":["1","2"], "C":["abc","bcd"]})
>>df
A C
0 1 abc
1 2 bcd
>>df.dtypes
A object
C object
dtype: object
>>df.convert_dtypes().dtypes
A string
C string
dtype: object
>>df.infer_objects().dtypes
A object
C object
dtype: object
</code></pre>
<p>Why is column <code>A</code> not converted into <code>int</code>? What would be an alternative if I am trying the wrong pandas methods?</p>
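<p>The only workaround I have found so far goes column by column through <code>pd.to_numeric</code>, which feels like it defeats the purpose of an automatic method (a sketch):</p>
<pre class="lang-py prettyprint-override"><code>df = df.apply(lambda col: pd.to_numeric(col, errors='ignore'))   # parse numeric-looking strings
df = df.convert_dtypes()                                         # then pick the best nullable dtypes
</code></pre>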
|
<python><pandas><dataframe><dtype>
|
2022-12-29 12:08:56
| 1
| 611
|
Viktor
|
74,950,904
| 11,665,178
|
Unable to import module 'lambda_function': No module named 'pymongo'
|
<p>I have followed this <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-package.html#python-package-create-package-with-dependency" rel="nofollow noreferrer">guide</a> last year to build my AWS python archive and it was working.</p>
<p>Today I have automated my code deployment and it was not working (I am using <code> shutil.make_archive("package", 'zip', root_dir=path, base_dir="package/", verbose=True)</code> instead of the <code>zip -r package.zip package/</code> command now).</p>
<p>But when I start doing all these steps manually again and deploy (without the automated code), it's not working anymore.</p>
<p>Here is what my folders look like :</p>
<p><a href="https://i.sstatic.net/lw3Jg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lw3Jg.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/vHcAP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vHcAP.png" alt="enter image description here" /></a></p>
<p>The only thing that has changed since the last time it was working is the pymongo version which was 3.12 instead of 4.3.3.</p>
<p>My handler is :</p>
<pre><code>def lambda_handler(event, context):
print(event)
</code></pre>
<p>The error i am getting from CloudWatch :</p>
<pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'pymongo'
Traceback (most recent call last):
</code></pre>
<p>What am I doing wrong?</p>
<p>EDIT:</p>
<p>I have found an old dependency package that is working, but it uses more than just <code>pymongo[srv]</code>, so I will check what may be different; the error comes from the dependencies.</p>
|
<python><python-3.x><amazon-web-services><aws-lambda><pymongo>
|
2022-12-29 12:06:17
| 2
| 2,975
|
Tom3652
|
74,950,802
| 10,517,777
|
Export features to excel after fit-transform of the TFIDFVectorizer
|
<p>Python Version: 3.7</p>
<p>Hi everyone:</p>
<p>I am using the tfidfVectorizer from the library scikit-learn as follow:</p>
<pre><code>vec_body = TfidfVectorizer(**vectorizer_parameters)
X_train_features = vec_body.fit_transform(X_train)
</code></pre>
<p><code>X_train</code> contains the email body. If I understood correctly, X_train_features is a sparse matrix. My objective is to create an excel report to validate which features or words per mail were identified after transformation with the following table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>email_body</th>
<th>email features</th>
</tr>
</thead>
<tbody>
<tr>
<td>this is an example</td>
<td>example</td>
</tr>
<tr>
<td>this is another example</td>
<td>another example</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>The column "email_body" should have the email body for each mail I have in the X_train. The column "email_features" should have a string with all the features or words after the transformation (fit-transform) for each particular mail. In the vectorizer, I deleted all stop words and used lemmatization too. That is why I want to export the result to excel to validate which words were used in the transformation. I do not know how to achieve that when my result is a sparse matrix.</p>
<p>Please forgive me if I explained something incorrectly, but I am very new to this library. Thanks so much in advance for any advice or solution.</p>
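<p>To make the target concrete, I imagine something along the lines of this sketch, based on my understanding of <code>inverse_transform</code> (I am not sure it is the intended use, and the output file name is just a placeholder):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

# terms with a non-zero tf-idf weight for each email (order within a row is not preserved)
features_per_mail = vec_body.inverse_transform(X_train_features)

report = pd.DataFrame({
    'email_body': list(X_train),
    'email_features': [' '.join(terms) for terms in features_per_mail],
})
report.to_excel('tfidf_report.xlsx', index=False)
</code></pre>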
|
<python><scikit-learn><tfidfvectorizer>
|
2022-12-29 11:54:33
| 1
| 364
|
sergioMoreno
|
74,950,760
| 10,639,382
|
Pandas Grouping Weekly Data
|
<p>I want to group data based on each week and found the solution in the following post,
<a href="https://www.statology.org/pandas-group-by-week/" rel="nofollow noreferrer">https://www.statology.org/pandas-group-by-week/</a></p>
<p>According to the post, following the two operations below allows you to group the data per week.</p>
<pre><code># Create a dataframe with datetime values
df = pd.DataFrame({'date': pd.date_range(start='1/5/2022', freq='D', periods=15),
'sales': [6, 8, 9, 5, 4, 8, 8, 3, 5, 9, 8, 3, 4, 7, 7]})
#convert date column to datetime and subtract one week
df['date'] = pd.to_datetime(df['date']) - pd.to_timedelta(7, unit='d')
#calculate sum of values, grouped by week
df.groupby([pd.Grouper(key='date', freq='W')])['sales'].sum()
</code></pre>
<p>The question I have is: what is the point of the first operation, where you subtract 7 days from the original dates? How does this help with the grouping, given that all the dates are simply translated 7 days back? Couldn't you just use the second operation on its own to group the dates?</p>
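<p>To make the comparison concrete, this is roughly the check I had in mind: the same toy frame grouped once with and once without the one-week shift (just for illustration).</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'date': pd.date_range(start='1/5/2022', freq='D', periods=15),
                   'sales': [6, 8, 9, 5, 4, 8, 8, 3, 5, 9, 8, 3, 4, 7, 7]})

# grouping directly: weekly bins are labelled based on the original dates
direct = df.groupby(pd.Grouper(key='date', freq='W'))['sales'].sum()

# grouping after shifting every date back by 7 days, as in the linked post
shifted = (df.assign(date=df['date'] - pd.to_timedelta(7, unit='d'))
             .groupby(pd.Grouper(key='date', freq='W'))['sales'].sum())

print(direct)
print(shifted)
</code></pre>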
<p>Thanks</p>
|
<python><pandas>
|
2022-12-29 11:50:11
| 1
| 3,878
|
imantha
|
74,950,696
| 10,428,677
|
Add year values to all existing rows in dataframe
|
<p>I have a dataframe that looks like this:</p>
<pre><code>df_dict = {'country': ['Japan','Japan','Japan','Japan','Japan','Japan','Japan', 'Greece','Greece','Greece','Greece','Greece','Greece','Greece'],
'product': ["A", "B", "C", "D", "E", "F", "G", "A", "B", "C", "D", "E", "F", "G"],
'region': ["Asia","Asia","Asia","Asia","Asia","Asia","Asia","Europe","Europe","Europe","Europe","Europe","Europe","Europe"]}
df = pd.DataFrame(df_dict)
</code></pre>
<p>Is there a way to add another column called <code>year</code> with values from 2005 until 2022 for each of these rows? For example, it should look like this:</p>
<pre><code> country product region year
Japan A Asia 2005
Japan A Asia 2006
Japan A Asia 2007
...
Japan A Asia 2022
Japan B Asia 2005
Japan B Asia 2006
...
</code></pre>
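<p>For scale, this is the brute-force version I could write (repeating the frame once per year and tagging it), though I suspect pandas has something cleaner, such as a cross join:</p>
<pre><code>import pandas as pd

years = range(2005, 2023)
out = (pd.concat([df.assign(year=y) for y in years], ignore_index=True)
         .sort_values(['country', 'product', 'year'])
         .reset_index(drop=True))
</code></pre>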
|
<python><pandas>
|
2022-12-29 11:44:48
| 1
| 590
|
A.N.
|
74,950,685
| 10,303,685
|
How to monitor internet connectivity using infinite while loop Python?
|
<p>I am using an infinite while loop to perform a specific function if the internet is available, and to print a statement if it is not. Following is the code.</p>
<pre><code>import requests, time

while True:
    print("HI")
    try:
        requests.get('https://www.google.com/').status_code
        print('online')
    except:
        print('offline')
</code></pre>
<p>The code works on the first run whether the internet is up or not. But once the internet gets disconnected, it gets stuck and nothing prints. Any idea why?</p>
<p>And is using an infinite while loop a good solution for this task?</p>
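<p>One variation I was considering is adding a timeout and a short pause between checks, roughly like this, but I am not sure whether that addresses the hang:</p>
<pre><code>import requests, time

while True:
    try:
        requests.get('https://www.google.com/', timeout=5)
        print('online')
    except requests.exceptions.RequestException:
        print('offline')
    time.sleep(1)
</code></pre>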
|
<python><python-3.x><python-requests><except>
|
2022-12-29 11:42:58
| 1
| 388
|
imtiaz ul Hassan
|
74,950,461
| 10,437,110
|
How to create a column to store trailing high value in Pandas DataFrame?
|
<p>Consider a DataFrame with only one column named values.</p>
<pre><code>import pandas as pd

data_dict = {'values': [5, 4, 3, 8, 6, 1, 2, 9, 2, 10]}
df = pd.DataFrame(data_dict)
display(df)
</code></pre>
<p>The output will look something like:</p>
<pre><code> values
0 5
1 4
2 3
3 8
4 6
5 1
6 2
7 9
8 2
9 10
</code></pre>
<p>I want to generate a new column that will have the trailing high value of the previous column.</p>
<p>Expected Output:</p>
<pre><code> values trailing_high
0 5 5
1 4 5
2 3 5
3 8 8
4 6 8
5 1 8
6 2 8
7 9 9
8 2 9
9 10 10
</code></pre>
<p>Right now I am using a for loop to iterate over <code>df.iterrows()</code> and calculating the value at each row. Because of this, the code is very slow.</p>
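<p>Simplified, the loop currently looks roughly like this:</p>
<pre><code>trailing = []
high = float('-inf')
for _, row in df.iterrows():
    high = max(high, row['values'])
    trailing.append(high)
df['trailing_high'] = trailing
</code></pre>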
<p>Can anyone share the vectorization approach to increase the speed?</p>
|
<python><pandas><dataframe>
|
2022-12-29 11:20:41
| 1
| 397
|
Ash
|
74,950,386
| 1,016,004
|
Static type hint for decorator that adds attribute to a class
|
<p>Is it possible to create a type hint for a class decorator that adds an attribute to the given class? Since PEP612 it's possible to use <code>ParamSpec</code> and <code>Concatenate</code> for decorators that modify signatures, but I haven't found any equivalents for decorators that modify classes.</p>
<pre class="lang-py prettyprint-override"><code>def add_x(cls_):
    cls_.x: int = 5
    return cls_

@add_x
class Example: pass
</code></pre>
<p>How would I have to type <code>add_x</code> to have <code>x: int</code> hinted on my <code>Example</code> class?</p>
|
<python><python-3.x><python-decorators><python-typing>
|
2022-12-29 11:12:47
| 0
| 6,505
|
Robin De Schepper
|
74,950,359
| 2,245,136
|
Check if a function has been called from a loop
|
<p>Is it possible to check dynamically whether a function has been called from within a loop?</p>
<p>Example:</p>
<pre><code>def some_function():
    if called_from_loop:
        print("Hey, I'm called from a loop!")

for i in range(0, 1):
    some_function()

some_function()
</code></pre>
<p>Expected output:</p>
<pre><code>> Hey, I'm called from a loop!
</code></pre>
|
<python><loops>
|
2022-12-29 11:10:45
| 1
| 372
|
VIPPER
|
74,950,298
| 4,853,434
|
Python Kafka Consumer in docker container localhost
|
<p>I try to consume some messages created with a spring-boot kafka producer on localhost.</p>
<p>For the consumer I have the following python code (consumer.py):</p>
<pre><code>from kafka import KafkaConsumer

# To consume latest messages and auto-commit offsets
print("consume...")
consumer = KafkaConsumer('upload_media',
                         group_id='groupId',
                         bootstrap_servers=['host.docker.internal:9092'])
for message in consumer:
    # message value and key are raw bytes -- decode if necessary!
    # e.g., for unicode: `message.value.decode('utf-8')`
    print("message")
    print("%s:%d:%d: key=%s value=%s" % (message.topic, message.partition,
                                         message.offset, message.key,
                                         message.value))
</code></pre>
<p>And I have the dockerfile:</p>
<pre><code># Set python version
FROM python:3.6-stretch
# Make a directory for app
WORKDIR /consumer
# Install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# RUN pip install --no-cache-dir --user -r /req.txt
# Copy source code
COPY ./app/consumer.py .
# Run the application
#CMD ["python", "-m", "app"]
ENTRYPOINT [ "python3", "-u", "consumer.py" ]
CMD [ "" ]
</code></pre>
<p>When I build and run it, i get:</p>
<pre><code>$ docker run --expose=9092 consumer
consume...
</code></pre>
<p>But it never receives any messages.
When I start the script directly with <code>python consumer.py</code>, it works fine.
So something seems to be wrong with the Docker image?</p>
<p>Edit:
I started the ZooKeeper and a broker locally as it is described here:
<a href="https://kafka.apache.org/quickstart" rel="nofollow noreferrer">https://kafka.apache.org/quickstart</a></p>
<p>Edit 2:
<strong>I don't know why the question is a duplicate. I don't use ksqlDB, don't use docker compose. I am not on a VM and I do not get any error.</strong></p>
|
<python><docker><apache-kafka>
|
2022-12-29 11:04:37
| 1
| 859
|
simplesystems
|
74,950,152
| 6,400,443
|
Fastest way to calculate cosine similartity between two 2D arrays
|
<p>I have one array A containing 64000 embeddings and an other array B containing 12000 embeddings (each of the embedding is 1024 floats long).</p>
<p>Now I want to calculate the cosine similarity for all the pairs between array A and array B (cartesian product).</p>
<p>To perform that (using pandas), I merge array A with array B using <code>.merge(how="cross")</code>.
It gives me 768 000 000 pairs.</p>
<p>Now I am looking for the fastest way of calculating the cosine sim. for now I used something like this using Numpy:</p>
<pre><code>def compute_cosine_sim(a, b):
    return dot(a, b) / (norm(a) * norm(b))

np.vectorize(compute_cosine_sim)(df.embedding_A.to_numpy(), df.embedding_B.to_numpy())
</code></pre>
<p>To keep the RAM at a reasonable level, I use pandas DataFrame chunking.</p>
<p>The problem is that my method is not fast enough, and I was wondering if there is something to change here, especially regarding the effectiveness of the NumPy function I use.</p>
<p>To give some details, I reach about 130,000 iterations/sec with this function; is that normal?
Also, could this kind of operation be run on a GPU easily?</p>
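<p>One alternative I have been considering is to skip the cross merge entirely and compute the whole similarity matrix in one shot from the two original arrays, roughly like this (assuming A is shaped (64000, 1024) and B is (12000, 1024); the full result is about 768M floats, so it would probably still need chunking):</p>
<pre><code>import numpy as np

def cosine_matrix(A, B):
    # normalise each embedding to unit length, then one matrix product
    # yields every pairwise cosine similarity (shape: len(A) x len(B))
    A_norm = A / np.linalg.norm(A, axis=1, keepdims=True)
    B_norm = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A_norm @ B_norm.T
</code></pre>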
<p>Thanks for the help</p>
|
<python><numpy><numba>
|
2022-12-29 10:49:38
| 1
| 737
|
FairPluto
|
74,949,892
| 19,106,705
|
Implementing a conv2d backward in pytorch
|
<p>I want to implement backward function of conv2d.</p>
<p>Here is an <a href="https://pytorch.org/docs/stable/notes/extending.html" rel="nofollow noreferrer">example of a linear function</a>:</p>
<pre class="lang-py prettyprint-override"><code># Inherit from Function
class LinearFunction(Function):

    @staticmethod
    # bias is an optional argument
    def forward(ctx, input, weight, bias=None):
        ctx.save_for_backward(input, weight, bias)
        output = input.mm(weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        grad_input = grad_weight = grad_bias = None
        if ctx.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)
        if ctx.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)
        if bias is not None and ctx.needs_input_grad[2]:
            grad_bias = grad_output.sum(0)
        return grad_input, grad_weight, grad_bias
</code></pre>
<pre class="lang-py prettyprint-override"><code>class Linear(nn.Module):
    def __init__(self, input_features, output_features, bias=True):
        super(Linear, self).__init__()
        self.input_features = input_features
        self.output_features = output_features

        self.weight = nn.Parameter(torch.empty(output_features, input_features))
        if bias:
            self.bias = nn.Parameter(torch.empty(output_features))
        else:
            self.register_parameter('bias', None)

        # Not a very smart way to initialize weights
        nn.init.uniform_(self.weight, -0.1, 0.1)
        if self.bias is not None:
            nn.init.uniform_(self.bias, -0.1, 0.1)

    def forward(self, input):
        # See the autograd section for explanation of what happens here.
        return LinearFunction.apply(input, self.weight, self.bias)
</code></pre>
<p>I don't think I have a clear understanding of this function yet.</p>
<p>How can I implement conv2d backward function?</p>
|
<python><pytorch><backpropagation>
|
2022-12-29 10:23:40
| 1
| 870
|
core_not_dumped
|
74,949,838
| 20,574,508
|
How does FastAPI manage WTF Forms?
|
<p>I am migrating from Flask to FastAPI and it is not clear to me how FastAPI manages WTF Forms.</p>
<p>I would like to use forms in Classes. However, I don't know if there is a correct way to do it in FastAPI, and if not what is the recommended solution to manage forms easily.</p>
<p>Here is a code example:</p>
<pre><code>from fastapi import Form
from wtforms import RadioField, SubmitField, SelectField, StringField, PasswordField, BooleanField
from wtforms.validators import Length, Email, InputRequired, EqualTo, DataRequired

class SignUpForm(Form):
    email = StringField('Email', validators=[DataRequired(), Length(1, 100), Email()])
    password = ...
    confirm_password = ...
</code></pre>
<p>Is it possible to handle Forms in this way in FastAPI?</p>
<p>FastAPI has already <a href="https://fastapi.tiangolo.com/tutorial/request-forms/" rel="nofollow noreferrer">a page explaining Forms in general</a>, but I didn't find any source explaining how to use it in a Class.</p>
<p>Any source to understand how FastAPI manages forms exactly is welcome.</p>
|
<python><fastapi><wtforms>
|
2022-12-29 10:18:33
| 1
| 351
|
Nicolas-Fractal
|
74,949,666
| 6,322,082
|
Data cleaning in python/pyspark: Conditional (for/if) exits prematurely before all the code is applied to the data
|
<p>I have a problem that looks similar to this: I have three groups of salary classes, High/Medium/Low. On each group I need to perform some operations (adding columns, cleaning, ...). However, some groups (in my code example High and Low) share several identical cleaning operations. In order to avoid code duplication, I wrote a for loop where I want to apply some cleaning only on "High", some only on "Low", and then some code on both (the line marked "PROBLEM"). The category "Medium" does not share any of that cleaning, so it is handled separately.
Each of the results needs to be written to a separate df for post-processing.</p>
<p>I realized after half a day of trial and error that the code in the for loop exits after the if block is over, without applying the shared code to the df. Is there a way to fix this? If not, how would this problem best be solved?</p>
<pre><code>import pyspark
from pyspark.sql import functions as F  # imported as F so that F.lit below resolves
from pyspark.sql.functions import *
from pyspark.sql import SparkSession
import os
import sys

os.environ['PYSPARK_PYTHON'] = sys.executable
os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable
spark = SparkSession.builder.appName('SparkByExamples.com').getOrCreate()
data = [
("1", "High", 10000),
("2", "High", 10000),
("3", "Medium", 1000),
("4", "Medium", 1000),
("5", "Low", 100),
("6", "Low", 100)]
columns = ["ID","Salary_Level", "Salary"]
my_data = spark.createDataFrame(data = data, schema = columns)
</code></pre>
<pre><code>def cleaning_function(df, name):
high_and_low = [name]
for name in high_and_low:
# This one only for high
if name == "High":
df = df.filter(df["Salary_Level"] == "High")
df = df.withColumn("Bonus", F.lit("Denied"))
# This one only for Low
elif name == "Low":
df = df.filter(df["Salary_Level"] == "Low")
df = df.withColumn("Bonus", F.lit("Approved"))
# PROBLEM: This one I want to apply for both High and Low after they were preprocessed above, but its not working
df = df.withColumn("Extra_Column", F.lit("Processed"))
# And this one is a separate
if name == "Medium":
df = df.withColumn("Extra_Column", F.lit("Processed"))
df = df.withColumn("Bonus", F.lit("Special Treatment"))
return df
</code></pre>
<pre><code>result_h = cleaning_function(my_data, "High")
result_m = cleaning_function(my_data, "Medium")
result_l = cleaning_function(my_data, "Low")
</code></pre>
|
<python><apache-spark><pyspark>
|
2022-12-29 10:00:24
| 1
| 371
|
Largo Terranova
|
74,949,599
| 6,446,053
|
How to efficiently collapse recurrent columns into rows in Pandas
|
<p>The objective is to collapse a <code>df</code> with the following columns</p>
<pre><code>['ID', 'St ti', 'Comp time', 'Email', 'Name', 'Gr Name\n',
'As Na (P1)\n', 'Fr ID (P1)\n', ' Role & royce ', 'Cradle insigt', 'Co-exist network', 'Ample Tree (P1)\n',
'As Na (P2)\n', 'Fr ID (P2)\n', ' Role & royce 2', 'Cradle insigt2', 'Co-exist network2', 'Ample Tree (P2)\n',
'As Na (P3)\n', 'Fr ID (P3)\n', ' Role & royce 3', 'Cradle insigt3', 'Co-exist network3', 'Ample Tree (P3)\n']
</code></pre>
<p>into</p>
<pre><code>['id','st_ti','comp_ti','email','name','gr_na','as_na',
'fr_id','role_royce','cradle_insight',
'coexist_net','ample_tree']
</code></pre>
<p>The original <code>df</code> columns contain common naming but are differentiated by unique numbering.</p>
<p>For example, the common name is as below</p>
<pre><code>'As Na', 'Fr ID', ' Role & royce ', 'Cradle insigt', 'Co-exist network', 'Ample Tree'
</code></pre>
<p>whereas the unique numbering is as below</p>
<pre><code>'As Na (P1)\n','As Na (P2)\n','As Na (P3)\n'
</code></pre>
<p>Therefore, for a <code>df</code> as below,</p>
<pre><code> ID St ti Comp time ... Cradle insigt3 Co-exist network3 Ample Tree (P3)\n
0 4 0 3 ... 0 3 0
1 2 3 0 ... 4 0 4
2 1 4 1 ... 4 4 4
[3 rows x 24 columns]
</code></pre>
<p>The collapsing operation should yield the following output</p>
<pre><code> id st_ti comp_ti ... cradle_insight coexist_net ample_tree
0 4 0 3 ... 0 0 4
1 2 3 0 ... 1 1 0
2 1 4 1 ... 1 3 3
3 4 0 3 ... 1 1 0
4 2 3 0 ... 3 2 4
5 1 4 1 ... 3 4 1
6 4 0 3 ... 0 3 0
7 2 3 0 ... 4 0 4
8 1 4 1 ... 4 4 4
[9 rows x 12 columns]
</code></pre>
<p>The above objective can be completed using the following code.</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(0)
cosl_ori=['ID', 'St ti', 'Comp time', 'Email', 'Name', 'Gr Name\n',
'As Na (P1)\n', 'Fr ID (P1)\n', ' Role & royce ', 'Cradle insigt', 'Co-exist network', 'Ample Tree (P1)\n',
'As Na (P2)\n', 'Fr ID (P2)\n', ' Role & royce 2', 'Cradle insigt2', 'Co-exist network2', 'Ample Tree (P2)\n',
'As Na (P3)\n', 'Fr ID (P3)\n', ' Role & royce 3', 'Cradle insigt3', 'Co-exist network3', 'Ample Tree (P3)\n']
df = pd.DataFrame(data=np.random.randint(5, size=(3, 24)),
columns=cosl_ori)
cols=df.columns.tolist()
indices_s = [i for i, s in enumerate(cols) if 'As Na' in s]
indices_e = [i for i, s in enumerate(cols) if 'Ample Tree' in s]
indices_e=[x.__add__(1) for x in indices_e]
col_int=[range(x,y) for x,y in zip(indices_s,indices_e)]
std_col=range(0,6)
col_name=['id','st_ti','comp_ti','email','name','gr_na','as_na',
'fr_id','role_royce','cradle_insight',
'coexist_net','ample_tree']
all_df=[]
for dcols in col_int:
    expanded_col=list(std_col)+list(dcols)
    dd=df.iloc[:,expanded_col]
    dd.columns=col_name
    all_df.append(dd)
df_expected_output = pd.concat(all_df).reset_index(drop=True)
</code></pre>
<p>However, I wonder whether there are more elegant ways of addressing the aforementioned objective. Specifically, the code block</p>
<pre><code>all_df=[]
for dcols in col_int:
    expanded_col=list(std_col)+list(dcols)
    dd=df.iloc[:,expanded_col]
    dd.columns=col_name
    all_df.append(dd)
df_expected_output = pd.concat(all_df).reset_index(drop=True)
</code></pre>
<p>Apart from that (OPTIONAL), the above solution strictly requires the last SIX columns to be in this specific order:</p>
<pre><code>['As Na (P1)\n', 'Fr ID (P1)\n', ' Role & royce ', 'Cradle insigt', 'Co-exist network', 'Ample Tree (P1)\n',
'As Na (P2)\n', 'Fr ID (P2)\n', ' Role & royce 2', 'Cradle insigt2', 'Co-exist network2', 'Ample Tree (P2)\n',
'As Na (P3)\n', 'Fr ID (P3)\n', ' Role & royce 3', 'Cradle insigt3', 'Co-exist network3', 'Ample Tree (P3)\n']
</code></pre>
<p>and will fail if some of the columns are shuffled.</p>
|
<python><pandas><performance>
|
2022-12-29 09:53:20
| 1
| 3,297
|
rpb
|
74,949,556
| 3,375,378
|
Poetry fails to install tensorflow
|
<p>I've got a poetry project. My environment is Conda 22.9.0 on a windows machine with poetry version 1.2.2:</p>
<p>This is my pyproject.toml file:</p>
<pre><code>[tool.poetry]
name = "myproject"
version = "0.1.0"
description = ""
[tool.poetry.dependencies]
# REVIEW DEPENDENCIES
python = ">=3.7,<3.11"
numpy = "*"
tensorflow = "^2.8"
[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"
[tool.poetry.scripts]
start = "myproject.main:start"
</code></pre>
<p>The myproject\main.py module contains:</p>
<pre><code>import tensorflow as tf

def start():
    if tf.test.is_gpu_available():
        print("TensorFlow is using a GPU.")
    else:
        print("TensorFlow is NOT using a GPU.")
</code></pre>
<p>If I do <code>poetry install</code>, it seems to work fine:</p>
<pre><code>Creating virtualenv myproject in D:\Projects\myproject\dev\myproject-series-forecast\.venv
Updating dependencies
Resolving dependencies...
Writing lock file
Package operations: 41 installs, 0 updates, 0 removals
• Installing certifi (2022.12.7)
• Installing charset-normalizer (2.1.1)
• Installing idna (3.4)
• Installing pyasn1 (0.4.8)
• Installing urllib3 (1.26.13)
• Installing cachetools (5.2.0)
• Installing oauthlib (3.2.2)
• Installing rsa (4.9)
• Installing six (1.16.0)
• Installing zipp (3.11.0)
• Installing requests (2.28.1)
• Installing pyasn1-modules (0.2.8)
• Installing google-auth (2.15.0)
• Installing importlib-metadata (5.2.0)
• Installing requests-oauthlib (1.3.1)
• Installing markupsafe (2.1.1)
• Installing absl-py (1.3.0)
• Installing grpcio (1.51.1)
• Installing numpy (1.21.6)
• Installing tensorboard-data-server (0.6.1)
• Installing markdown (3.4.1)
• Installing tensorboard-plugin-wit (1.8.1)
• Installing protobuf (3.19.6)
• Installing werkzeug (2.2.2)
• Installing google-auth-oauthlib (0.4.6)
• Installing astunparse (1.6.3)
• Installing flatbuffers (22.12.6)
• Installing gast (0.4.0)
• Installing google-pasta (0.2.0)
• Installing h5py (3.7.0)
• Installing keras (2.11.0)
• Installing tensorflow-estimator (2.11.0)
• Installing packaging (22.0)
• Installing opt-einsum (3.3.0)
• Installing libclang (14.0.6)
• Installing tensorboard (2.11.0)
• Installing tensorflow-io-gcs-filesystem (0.29.0)
• Installing termcolor (2.1.1)
• Installing typing-extensions (4.4.0)
• Installing wrapt (1.14.1)
• Installing tensorflow (2.11.0)
Installing the current project: myproject (0.1.0)
</code></pre>
<p>But when executing <code>poetry run start</code> I get an import error:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "D:\Python\Anaconda3\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "D:\Projects\myproject\dev\myproject-series-forecast\myproject\main.py", line 3, in <module>
import tensorflow as tf
ModuleNotFoundError: No module named 'tensorflow'
</code></pre>
|
<python><tensorflow><python-poetry>
|
2022-12-29 09:49:08
| 8
| 2,598
|
chrx
|
74,949,518
| 5,581,893
|
Python: await the generator end
|
<p>Current versions of Python (Dec 2022) still allow using the @coroutine decorator, and a generator-based coroutine can be written as:</p>
<pre><code>import asyncio

asyncify = asyncio.coroutine

data_ready = False  # Status of a pipe, just to test

def gen():
    global data_ready
    while not data_ready:
        print("not ready")
        data_ready = True  # Just to test
        yield
    return "done"

async def main():
    result = await asyncify(gen)()
    print(result)

loop = asyncio.new_event_loop()
loop.create_task(main())
loop.run_forever()
</code></pre>
<p>However, Python 3.8+ deprecates the @coroutine decorator (the <code>asyncify</code> function alias above), so <strong>how can I wait for (await) the generator to end as above</strong>?</p>
<p>I tried to use <code>async def</code> as suggested by the warning, but it is not working:</p>
<pre><code>import asyncio

asyncify = asyncio.coroutine

data_ready = False  # Just to test

async def gen():
    global data_ready
    while not data_ready:
        print("not ready")
        data_ready = True  # Just to test
        yield
    yield "done"
    return

async def main():
    # this has error: TypeError: object async_generator can't be used in 'await' expression
    result = await gen()
    print(result)

loop = asyncio.new_event_loop()
loop.create_task(main())
loop.run_forever()
</code></pre>
|
<python><async-await><python-asyncio><generator><coroutine>
|
2022-12-29 09:45:15
| 1
| 8,759
|
Dan D
|
74,949,432
| 18,756,733
|
If list contains an item, then list equals to item
|
<p>I don't know how to formulate the question correctly:
I have a list of lists, which contains multiple items.</p>
<pre><code>mylist=[['a','b','c','₾'],['x','t','f','₾'],['a','d'],['r','y'],['c','₾'],['a','b','c','i'],['h','j','l','₾']]
</code></pre>
<p>If any of the lists contains symbol '₾', I want to append the symbol to another list, else append None.</p>
<pre><code>result=[]
for row in mylist:
    for i in row:
        if i=='₾':
            result.append(i)
        else:
            result.append(None)
result
</code></pre>
<p>But I want to get this result instead:</p>
<pre><code>result=['₾','₾',None,None,'₾',None,'₾']
</code></pre>
|
<python><list>
|
2022-12-29 09:35:09
| 4
| 426
|
beridzeg45
|
74,949,362
| 20,646,427
|
How to show history of orders for user in Django
|
<p>I will attach some screenshots of my template and admin panel.
I can see the history of orders in the admin panel, but when I try to show the title and image of the ordered product in the user profile template, it does not work and I just get a queryset.
I am sorry for the Russian words on my site; I can re-take the screenshots if you need that.</p>
<p>models.py</p>
<pre><code>class Order(models.Model):
    user = models.ForeignKey(User, on_delete=models.PROTECT, related_name='orders', verbose_name='Заказы',
                             default=1)
    username = models.CharField(max_length=50, verbose_name='Имя пользователя')
    email = models.EmailField()
    vk_or_telegram = models.CharField(max_length=255, verbose_name='Ссылка для связи', default='vk.com')
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)
    paid = models.BooleanField(default=False, verbose_name='Оплачено')

    class Meta:
        ordering = ['-created',]
        verbose_name = 'Заказ'
        verbose_name_plural = 'Заказы'

    def __str__(self):
        return 'Заказ {}'.format(self.id)

    def get_cost(self):
        return sum(item.get_cost() for item in self.items.all())


class OrderItem(models.Model):
    order = models.ForeignKey(Order, related_name='order', on_delete=models.CASCADE)
    product = models.ForeignKey(Posts, related_name='order_items', on_delete=models.CASCADE)
    price = models.DecimalField(max_digits=10, decimal_places=2)

    def __str__(self):
        return '{}'.format(self.id)

    def get_cost(self):
        return self.price
</code></pre>
<p>views.py</p>
<pre><code>@login_required
def profile(request):
    user_orders = Order.objects.filter(user=request.user)
    data = {
        'user_orders': user_orders,
    }
    return render(request, 'store/main_pages/profile.html', data)
</code></pre>
<p>order history template:</p>
<pre><code>{% for item in user_orders %}
{{ item }}
{{ item.order.all }}
{% endfor %}
</code></pre>
<p><a href="https://i.sstatic.net/9O4zr.png" rel="nofollow noreferrer">Profile template</a>
<a href="https://i.sstatic.net/Ok3tp.png" rel="nofollow noreferrer">admin order panel</a></p>
|
<python><django><django-templates>
|
2022-12-29 09:27:47
| 3
| 524
|
Zesshi
|
74,949,189
| 9,640,238
|
Strip tags and keep content with Beautifulsoup
|
<p>I thought this question would have been answered 1000 times, but apparently not (or I'm not looking right!). I want to clean up some overloaded HTML content with BeautifulSoup and remove unwanted tags. In some cases (e.g. <code><span></code> or <code><div></code>), I want to preserve the content of the tag instead of destroying it entirely with <code>decompose</code>.</p>
<p>With LXML, this can be achieved with <code>strip_tags</code>. How do I do that with BS4?</p>
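<p>For reference, this is roughly what I do today with lxml (a made-up snippet): the <code><span></code> disappears but its text stays in place, and that is the behaviour I want to reproduce with BeautifulSoup.</p>
<pre><code>from lxml import etree, html

tree = html.fromstring("<div>Hello <span>beautiful</span> world</div>")
etree.strip_tags(tree, "span")
print(html.tostring(tree))  # b'<div>Hello beautiful world</div>'
</code></pre>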
|
<python><html><beautifulsoup>
|
2022-12-29 09:10:08
| 1
| 2,690
|
mrgou
|
74,949,121
| 4,451,521
|
Two lists in python. Check when the elements are of the same sign
|
<p>I have two lists in python</p>
<pre><code>list1=[1,3,5,-3,-3]
list2=[2,-3,3,-3,5]
</code></pre>
<p>I want to produce a list that is True where the corresponding elements have the same sign:</p>
<pre><code>result=[True,False,True,True,False]
</code></pre>
<p>Which is the fastest and most pythonic way to do this?</p>
<p>EDIT:</p>
<p>Solution</p>
<p>I solved it by transforming both lists to NumPy arrays:</p>
<pre><code>np.array(list1) * np.array(list2) > 0
</code></pre>
|
<python>
|
2022-12-29 09:02:31
| 1
| 10,576
|
KansaiRobot
|
74,949,106
| 360,274
|
Consistently coloring tracks by elevation across multiple maps with gpxplotter
|
<p>I use <a href="https://gpxplotter.readthedocs.io/en/latest/" rel="nofollow noreferrer">gpxplotter</a> and <a href="https://python-visualization.github.io/folium/" rel="nofollow noreferrer">folium</a> to generate maps from GPX tracks. I use the following <a href="https://gpxplotter.readthedocs.io/en/latest/source/gpxplotter.folium_map.html#gpxplotter.folium_map.add_segment_to_map" rel="nofollow noreferrer">method</a> to color them based on elevation data:</p>
<pre><code>add_segment_to_map(the_map, segment, color_by='elevation', cmap='YlOrRd_09')
</code></pre>
<p>Now, I would like to keep the colors consistent across multiple maps.</p>
<p>For example, if I have a track that runs from 0 to 1500 meters, 0 will be on one end of the spectrum (light yellow in my case) and 1500 on the other (red). Another track might go from 1000 to 2000 meters. The coloring for 0 meters of elevation in the first track is then the same as for 1000 in the second (and equally for 1500 and 2000).</p>
<p>Rather than using the local elevation data of each map, is there a way to specify a global range, e.g. from 0 to 3000?</p>
|
<python><maps><folium><gpx>
|
2022-12-29 09:01:12
| 1
| 2,194
|
martin
|
74,948,945
| 4,451,521
|
How can I use fill_between if the points I have are arrays with a single value?
|
<p>I have a matplotlib script.</p>
<p>In it in the end I have</p>
<pre><code>x_val=[x[0] for x in lista]
y_val=[x[1] for x in lista]
z_val=[x[2] for x in lista]
ax.plot(x_val,y_val,'.-')
ax.plot(x_val,z_val,'.-')
</code></pre>
<p>This script plots well <em>even though</em> the values in <code>y_val</code> and <code>z_val</code> are not strictly numbers.</p>
<p>Debugging I have</p>
<pre><code>(Pdb) x_val
[69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153]
(Pdb) y_val
[array(1.74204588), array(1.74162786), array(1.74060187), array(1.73956786), array(-1.89492498), array(-1.89225716), array(-1.89842406), array(-1.89143466), array(-1.89171231), array(-1.88730752), array(-1.89144205), nan, array(1.71829279), array(-1.88108125), array(-1.87515878), array(-1.87912412), array(-1.87015615), array(-1.87152107), array(-1.86639765), array(-1.87383146), array(-1.86896753), array(-1.87339903), array(-1.8748417), array(-1.88515482), array(-1.88263666), array(-1.88571425), nan, nan, array(1.72480822), array(1.73666841), array(-1.88835078), array(-1.88489648), array(-1.89135095), array(-1.88647712), array(-1.88697799), array(-1.88330942), array(-1.88929744), array(-1.88320532), array(-1.88466698), array(-1.87994435), array(-1.88546968), array(-1.88014776), array(-1.87803843), array(-1.87505217), array(-1.8797416), array(-1.87223076), array(-1.87333355), array(-1.86838693), array(-1.87577428), array(-1.86875561), array(-1.86872998), array(-1.86385078), array(-1.87095955), array(-1.86509266), array(-1.86601095), array(-1.86223456), array(-1.87151403), array(-1.86695325), array(-1.86540432), array(-1.86244142), array(-1.87018407), array(-1.86767604), array(-1.8699986), array(-1.87008087), array(-1.88049869), array(1.70057683), array(1.74942263), array(-1.86556665), array(-1.88470081), array(-1.90776552), array(-1.9103818), array(-1.91022515), array(-1.89490587), array(-1.89507617), array(-1.8875979), array(-1.89318633), array(-1.8942595), array(-1.902641), array(-1.89313615), array(-1.87870174), array(-1.86319541), array(-1.85999368), array(-1.85943922), array(-1.88398592), array(1.73030903)]
</code></pre>
<p>z_val similarly</p>
<p>This does not represent a problem.
However, I now want to do</p>
<pre><code>ax.fill_between(x_val,0,1,where=(y_val*z_val) >0,
color='green',alpha=0.5 )
</code></pre>
<p>It is a first attempt that I will probably modify (in <a href="https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_demo.html#selectively-marking-horizontal-regions-across-the-whole-axes" rel="nofollow noreferrer">this example</a> for instance I don't understand yet what <code>transform=ax.get_xaxis_transform()</code> does) but the problem is that now I got an error</p>
<pre><code> File "plotgt_vs_time.py", line 160, in plot
ax.fill_between(x_val,0,1,where=(y_val*z_val) >0,
TypeError: can't multiply sequence by non-int of type 'list'
</code></pre>
<p>I suppose it is because it is an array. How can I modify my code so as to be able to use <code>fill_between</code>?</p>
<p>I tried modifying it to</p>
<pre><code>x_val=[x[0] for x in lista]
y_val=[x[1][0] for x in lista]
z_val=[x[2][0] for x in lista]
</code></pre>
<p>but this throws an error</p>
<pre><code>IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed
</code></pre>
<p>Then I modified it to</p>
<pre><code>x_val=[x[0] for x in lista]
y_val=[float(x[1]) for x in lista]
z_val=[float(x[2]) for x in lista]
</code></pre>
<p>And now I only get floats, so I eliminated the 0-D arrays
but still got the error</p>
<pre><code>TypeError: can't multiply sequence by non-int of type 'list'
</code></pre>
<p>How can I use <code>fill_between</code>?</p>
|
<python><matplotlib>
|
2022-12-29 08:44:38
| 1
| 10,576
|
KansaiRobot
|
74,948,609
| 14,457,833
|
The PyQt5 programme automatically stops working after some time when using setTextCursor()
|
<p>I have a <strong>PyQt5 GUI</strong> that is in charge of taking voice input from the user and converting it to text. </p>
<p>Everything was fine until I was told to add a new feature where the user can edit text while speaking. The cursor should not move to the start or end of a paragraph; it should stay where it is, and the text should come as it did before, so the user can edit text while speaking.</p>
<p>When I add the cursor-positioning code, my program runs for a few minutes and then breaks or the window closes, without any errors in the console (terminal). When I remove the cursor code, it works.</p>
<p>My <code>VoiceWorker()</code> class inherited from <code>QThread()</code>:</p>
<pre><code>class VoiceWorker(QtCore.QThread):
    textChanged = QtCore.pyqtSignal(str)
    sig_started = QtCore.pyqtSignal()

    def run(self) -> None:
        self.sig_started.emit()
        mic = PyAudio()
        stream = mic.open(format=FORMAT, channels=CHANNELS,
                          rate=RATE, input=True, frames_per_buffer=CHUNK)
        while not self.stopped:
            try:
                data = stream.read(4096, exception_on_overflow=False)
                if recognizer.AcceptWaveform(data):
                    text = recognizer.Result()
                    cursor = txtbox.textCursor()
                    pos = cursor.position()
                    print(f"\n Cursor position : {pos} \n")
                    speech = text[14:-3] + " "
                    print(f"\n {speech=} \n")
                    self.textChanged.emit(speech)
                    cursor.movePosition(cursor.End)
                    txtbox.setTextCursor(cursor)
            except Exception as e:
                print(f"\n {e} \n")
                break
</code></pre>
<p>and I've connected <strong>VoiceWorker</strong> thread to my <code>QTextEdit()</code></p>
<pre><code>voice_thread = VoiceWorker()
voice_thread.textChanged.connect(txtbox.insertPlainText)
</code></pre>
<p>When I run this programme on a Windows system, it closes without any errors, but on a Linux system, it closes with this error: "<code>segmentation fault (core dumped)</code>". This problem occurs only when I add cursor code.</p>
<p>I don't know why this is happening; please help me to solve this issue.</p>
<p>I tried to search on Google but had no luck, because I didn't understand why it is happening.</p>
|
<python><linux><qt><pyqt5><pyaudio>
|
2022-12-29 08:03:22
| 1
| 4,765
|
Ankit Tiwari
|
74,948,525
| 20,051,041
|
FutureWarning: save is not part of the public API in Python
|
<p>I am using Python to convert a Pandas df to .xlsx (in a Plotly Dash app). Everything works well so far, but with this warning:</p>
<p><strong>"FutureWarning:
save is not part of the public API, usage can give unexpected results and will be removed in a future version"</strong></p>
<p>How should I modify the code below in order to keep its functionality and stability in future? Thanks!</p>
<pre><code> writer = pd.ExcelWriter("File.xlsx", engine = "xlsxwriter")
workbook = writer.book
df.to_excel(writer, sheet_name = 'Sheet', index = False)
writer.save()
</code></pre>
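<p>A variant I could switch to is the context-manager form, which as far as I understand closes and saves the file without an explicit <code>save()</code> call, but I am not sure it is the intended replacement:</p>
<pre><code>with pd.ExcelWriter("File.xlsx", engine="xlsxwriter") as writer:
    df.to_excel(writer, sheet_name='Sheet', index=False)
    workbook = writer.book  # still accessible inside the block if needed
</code></pre>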
|
<python><excel><warnings>
|
2022-12-29 07:51:00
| 2
| 580
|
Mr.Slow
|
74,948,340
| 5,336,651
|
How to generate complex Hypothesis data frames with internal row and column dependencies?
|
<p>Is there an elegant way of using <code>hypothesis</code> to directly generate complex <code>pandas</code> data frames with internal row and column dependencies? Let's say I want columns such as:</p>
<pre><code>[longitude][latitude][some-text-meta][some-numeric-meta][numeric-data][some-junk][numeric-data][…
</code></pre>
<p>Geographic coordinates can be individually picked at random, but sets must usually come from a general area (e.g. standard reprojections don't work if you have two points on opposite sides of the globe). It's easy to handle that by choosing an area with one strategy and columns of coordinates from inside that area with another. All good so far…</p>
<pre><code>@st.composite
def plaus_spamspam_arrs(
    draw,
    st_lonlat=plaus_lonlat_arr,
    st_values=plaus_val_arr,
    st_areas=plaus_area_arr,
    st_meta=plaus_meta_arr,
    bounds=ARR_LEN,
):
    """Returns plausible spamspamspam arrays"""
    size = draw(st.integers(*bounds))
    coords = draw(st_lonlat(size=size))
    values = draw(st_values(size=size))
    areas = draw(st_areas(size=size))
    meta = draw(st_meta(size=size))
    return PlausibleData(coords, values, areas, meta)
</code></pre>
<p>The snippet above makes clean <code>numpy</code> arrays of coordinated single-value data. But the numeric data in the columns example (n-columns interspersed with junk) can also have row-wise dependencies such as needing to be normalised to some factor involving a row-wise sum and/or something else chosen dynamically at runtime.</p>
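<p>To make the row-wise dependency concrete, this is the kind of constraint I mean (plain pandas, not a strategy yet): a block of numeric columns that has to be normalised by its own row sums, so the values within one row cannot be drawn independently.</p>
<pre><code>import numpy as np
import pandas as pd

raw = pd.DataFrame(np.random.rand(5, 3), columns=["v1", "v2", "v3"])
normalised = raw.div(raw.sum(axis=1), axis=0)  # each row now sums to 1
</code></pre>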
<p>I can generate all these bits separately, but I can't see how to stitch them into a single data frame without using a clumsy concat-based technique that, I presume, would disrupt <code>draw</code>-based shrinking. Moreover, I need a solution that adapts beyond what's above, so a hack likely won't get me very far…</p>
<p>Maybe there's something with <code>builds</code>? I just can't quite see out how to do it. Thanks for sharing if you know! A short example as inspiration would likely be enough.</p>
<h5>Update</h5>
<p>I can generate columns roughly as follows:</p>
<pre><code>@st.composite
def plaus_df_inputs(
    draw, *, nrows=None, ncols=None, nrow_bounds=ARR_LEN, ncol_bounds=COL_LEN
):
    """Returns …"""
    box_lon, box_lat = draw(plaus_box_geo())
    ncols_jnk = draw(st.integers(*ncol_bounds)) if ncols is None else ncols
    ncols_val = draw(st.integers(*ncol_bounds)) if ncols is None else ncols
    keys_val = draw(plaus_smp_key_elm(size=ncols_val))
    nrows = draw(st.integers(*nrow_bounds)) if nrows is None else nrows
    cols = (
        plaus_df_cols_lonlat(lons=plaus_lon(box_lon), lats=plaus_lat(box_lat))
        + plaus_df_cols_meta()
        + plaus_df_cols_value(keys=keys_val)
        + draw(plaus_df_cols_junk(size=ncols_jnk))
    )
    random.shuffle(cols)
    return draw(st_pd.data_frames(cols, index=plaus_df_idx(size=nrows)))
</code></pre>
<p>where the sub-stats are things like</p>
<pre><code>@st.composite
def plaus_df_cols_junk(
    draw, *, size=1, names=plaus_meta(), dtypes=plaus_dtype(), unique=False
):
    """Returns strategy for list of columns of plausible junk data."""
    result = set()
    for _ in range(size):
        result.add(draw(names.filter(lambda name: name not in result)))
    return [
        st_pd.column(name=result.pop(), dtype=draw(dtypes), unique=unique)
        for _ in range(size)
    ]
</code></pre>
<p>What I need is something more elegant that incorporates the row-based dependencies.</p>
|
<python><pytest><python-hypothesis>
|
2022-12-29 07:28:56
| 1
| 401
|
curlew77
|
74,948,277
| 17,696,880
|
How to operate with dates that don't have a 4-digit year? Is it possible to extract the year number and adapt it so that datetime can operate with it?
|
<p>I get a <code>ValueError</code> every time I try to pass a date whose year is not 4 digits long to functions of the <code>datetime</code> module; in this case the operation to be performed is adding or subtracting days.</p>
<pre class="lang-py prettyprint-override"><code>import datetime

def add_or_subtract_days(datestr, days, operation):
    if operation == "add": input_text = (datetime.datetime.strptime(datestr, "%Y-%m-%d") + datetime.timedelta(days=int(days))).strftime('%Y-%m-%d')
    elif operation == "subtract": input_text = (datetime.datetime.strptime(datestr, "%Y-%m-%d") - datetime.timedelta(days=int(days))).strftime('%Y-%m-%d')
    return input_text

input_text = add_or_subtract_days("2023-01-20", "5", "add")
print(repr(input_text)) # ---> '2023-01-25'

input_text = add_or_subtract_days("999-12-27", "5", "add")
print(repr(input_text)) # ---> ValueError: time data '999-12-27' does not match format '%Y-%m-%d'

input_text = add_or_subtract_days("12023-01-20", "5", "add")
print(repr(input_text)) # ---> ValueError: time data '12023-01-20' does not match format '%Y-%m-%d'
</code></pre>
<p>Something that occurred to me is to catch these problem cases with an exception handler, and I have managed to extract the year number there; perhaps that will let me operate on it in some way so that it does not cause problems when adding or subtracting days with datetime. You also have to take into account that adding or subtracting days can change the month number and the year number as well, which is a very important factor and is giving me a lot of trouble, since I have little idea how to solve it.</p>
<pre class="lang-py prettyprint-override"><code>import datetime, re

def add_or_subtract_days(datestr, days, operation):
    try:
        if operation == "add": input_text = (datetime.datetime.strptime(datestr, "%Y-%m-%d") + datetime.timedelta(days=int(days))).strftime('%Y-%m-%d')
        elif operation == "subtract": input_text = (datetime.datetime.strptime(datestr, "%Y-%m-%d") - datetime.timedelta(days=int(days))).strftime('%Y-%m-%d')
    except ValueError:
        m1 = re.search(r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})", datestr, re.IGNORECASE)
        if m1:
            ref_year = str(m1.groups()[0])
            print(ref_year)
    return input_text
</code></pre>
<p>I've already managed to extract the year number; however, I can't think of what kind of algorithm I should use to truncate the year to a length that <code>datetime</code> supports, do the arithmetic, and then join the pieces back together (keeping the logic consistent) to get the correct date even though it does not have a 4-digit year.</p>
<p>for example...</p>
<p><code>"212023-12-30"</code> + <code>2 days</code> --> <code>21</code> <code>2023-12-30</code> + <code>2 days</code> --> 21 <code>2024-01-01</code> --> <code>212024-01-01</code></p>
<p><code>"12023-12-30"</code> + <code>2 days</code> --> <code>1</code> <code>2023-12-30</code> + <code>2 days</code> --> 1 <code>2024-01-01</code> --> <code>12024-01-01</code></p>
<p><code>"999-12-30"</code> + <code>2 days</code> --> <code>0</code> <code>999-12-30</code> + <code>2 days</code> --> 0 <code>1000-01-01</code> --> <code>1000-01-01</code></p>
<p><code>"999-11-30"</code> + <code>5 days</code> --> <code>0</code> <code>999-11-30</code> + <code>5 days</code> --> 0 <code>999-11-05</code> --> <code>999-11-05</code></p>
<p><code>"99-12-31"</code> + <code>5 days</code> --> <code>00</code> <code>99-12-31</code> + <code>5 days</code> --> 00 <code>100-01-05</code> --> <code>100-01-05</code></p>
|
<python><python-3.x><regex><datetime><python-datetime>
|
2022-12-29 07:21:35
| 1
| 875
|
Matt095
|
74,948,261
| 16,186,109
|
Is there any way to connect my google calendar to google colab?
|
<p>I created a project using the Google Cloud Console and created an OAuth screen. But when I used Python in Colab to access the calendar, I got an error message saying I don't have a proper redirect URI. Is there any way to bypass that flow and access my Google Calendar with my login credentials rather than with an access token?</p>
|
<python><google-colaboratory><google-calendar-api>
|
2022-12-29 07:19:31
| 0
| 325
|
Vinasirajan Vilakshan
|
74,948,111
| 7,959,614
|
Get shape parameters of scipy.stats.beta from shape of PDF
|
<p>I have the following script from the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.beta.html" rel="nofollow noreferrer">docs</a></p>
<pre><code>import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1)
a = 1.25
b = 1.5
x = np.linspace(ss.beta.ppf(0.00, a, b),
ss.beta.ppf(1, a, b), 100)
ax.plot(x, ss.beta.pdf(x, a, b),
'r-', lw=5, alpha=0.6, label='beta pdf')
plt.show()
</code></pre>
<p>I would like to reverse the process and get <code>a</code> and <code>b</code> from the shape of the pdf. I will only have the output of</p>
<pre><code>ss.beta.pdf(x, a, b)
</code></pre>
<p>where only <code>x</code> is known (specified earlier). So how do I get <code>a</code> and <code>b</code> if I only have the following data:</p>
<pre><code>[0. 0.63154416 0.74719517 0.82263393 0.87936158 0.92490497
0.96287514 0.9953117 1.02349065 1.04826858 1.07025112 1.08988379
1.10750482 1.12337758 1.13771153 1.15067624 1.16241105 1.17303196
1.18263662 1.19130806 1.19911747 1.20612637 1.21238825 1.21794995
1.22285265 1.22713276 1.23082258 1.23395089 1.23654342 1.23862319
1.24021091 1.24132519 1.2419828 1.2421989 1.24198713 1.24135982
1.24032811 1.23890203 1.23709059 1.23490192 1.23234325 1.22942106
1.22614106 1.2225083 1.21852712 1.21420131 1.209534 1.20452779
1.19918472 1.19350628 1.18749347 1.18114672 1.17446601 1.16745076
1.16009989 1.1524118 1.14438438 1.13601495 1.12730028 1.11823656
1.10881937 1.09904366 1.0889037 1.07839304 1.06750447 1.05622997
1.0445606 1.03248649 1.01999669 1.00707911 0.99372038 0.97990571
0.96561876 0.95084139 0.93555349 0.91973269 0.90335407 0.88638972
0.86880835 0.85057469 0.83164884 0.81198535 0.79153224 0.77022959
0.74800782 0.7247854 0.70046587 0.67493372 0.64804878 0.61963821
0.58948474 0.55730896 0.52274112 0.48527403 0.44417862 0.39833786
0.34587496 0.2831383 0.20072303 0. ]
</code></pre>
<p>After finding this <a href="https://stackoverflow.com/a/73872039/7959614">answer</a> and this <a href="https://www.codeproject.com/articles/56371/finding-probability-distribution-parameters-from-p" rel="nofollow noreferrer">blogpost</a> I have an idea of what needs to be done:</p>
<pre><code>def find_parameters(x1, p1, x2, p2):
    """Find parameters for a beta random variable X
    so that P(X > x1) = p1 and P(X > x2) = p2.
    """
    def objective(v):
        (a, b) = v
        temp = (ss.beta.cdf(x1, a, b) - p1) ** 2.0
        temp += (ss.beta.cdf(x2, a, b) - p2) ** 2.0
        return temp

    # arbitrary initial guess of (1, 1) for parameters
    xopt = ss.optimize.fmin(objective, (1, 1))
    return [xopt[0], xopt[1]]
</code></pre>
<p>However, this function only uses two points. I modified the solution as follows:</p>
<pre><code>def find_parameters(x, p):
    def objective(v):
        (a, b) = v
        return np.sum((ss.beta.cdf(x, a, b) - p) ** 2.0)

    # arbitrary initial guess of (1, 1) for parameters
    xopt = optimize.fmin(objective, (1, 1))
    return [xopt[0], xopt[1]]

fitted_a, fitted_b = find_parameters(x=x, p=ss.beta.pdf(x, a, b))
</code></pre>
<p>This results in the following incorrect values:</p>
<blockquote>
<p>a: 4.2467303147231366e-10</p>
</blockquote>
<p>and</p>
<blockquote>
<p>b: 1.9183210434237443</p>
</blockquote>
<p>Please advise.</p>
|
<python><scipy>
|
2022-12-29 06:55:20
| 1
| 406
|
HJA24
|
74,947,992
| 17,374,216
|
How to remove the error "SystemError: initialization of _internal failed without raising an exception"
|
<p>I am trying to import the Top2Vec package for NLP topic modelling, but even after upgrading pip and numpy this error keeps coming up.</p>
<p>I tried</p>
<pre><code>pip install --upgrade pip
</code></pre>
<pre><code>pip install --upgrade numpy
</code></pre>
<p>I was expecting to run</p>
<pre><code>from top2vec import Top2Vec
model = Top2Vec(FAQs, speed='learn', workers=8)
</code></pre>
<p>but it gives the error mentioned in the title.</p>
|
<python><import><nlp><google-colaboratory>
|
2022-12-29 06:37:49
| 5
| 571
|
Sayonita Ghosh Roy
|
74,947,797
| 17,103,465
|
Fetching the column values from another table to create a new column in the main table : Pandas Merging with square brackets
|
<p>As you can see below, I have two tables: a main table and a reference table. In the main table I have a column 'Subject' which contains tr_id values inside '[]', separated by ','. I need to match them against my reference table using 'tr_id' in order to fetch the 'test_no' values into a 'Linked_Test_No' column in my main table.</p>
<p>main table :</p>
<pre><code>my_id Name Subject
12 Ash The test [101 , 105]
15 Brock The testing of the subject [101,102]
16 Misty Subject Test [102,106]
18 Tracy Subject Testing [101]
10 Oak Test
19 Paul Testing []
21 Gary Testing : [107]
44 Selena Subject : [104]
</code></pre>
<p>reference table :</p>
<pre><code>tr_id latest_em test_no
101 pichu@icloud.com; paul@gmail.com 120
102 ash@yahoo.com 130
103 squirtle@gmail.com 160
104 charmander@gmail.com 180
105 ash@yahoo.com;misty@yahoo.com 100
</code></pre>
<p>Currently, I am using <code>str.extract()</code> to fetch the tr_id, then <code>pd.merge()</code> to join the two tables, and then collating test_no into a single column "Linked_Test_No", which takes a lot of steps. Can we achieve this with a few lines of code? My coding skills are pretty basic.</p>
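<p>Simplified, the current approach looks roughly like this (calling the two frames <code>main</code> and <code>ref</code>; the real code has more intermediate steps):</p>
<pre><code>import pandas as pd

ids = main["Subject"].str.findall(r"\d+")          # pull the tr_ids out of the brackets
tmp = main.assign(tr_id=ids).explode("tr_id")      # one row per tr_id
tmp["tr_id"] = pd.to_numeric(tmp["tr_id"])
tmp = tmp.merge(ref[["tr_id", "test_no"]], on="tr_id", how="left")
linked = tmp.groupby("my_id")["test_no"].agg(lambda s: list(s.dropna().astype(int)))
main["Linked_Test_No"] = main["my_id"].map(linked)
</code></pre>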
<p>Expected Output :</p>
<pre><code>
my_id Name Subject Linked_Test_No
12 Ash The test [101 , 105] [120,100]
15 Brock The testing of the subject [101,102] [120,130]
16 Misty Subject Test [102,106] [130]
18 Tracy Subject Testing [101] [120]
10 Oak Test
19 Paul Testing []
21 Gary Testing : [107]
44 Selena Subject : [104] [180]
</code></pre>
|
<python><pandas>
|
2022-12-29 06:04:54
| 1
| 349
|
Ash
|
74,947,815
| 889,213
|
numpy.vectorize: "ValueError: setting an array element with a sequence"
|
<p>I'm having trouble when using np.vectorize in my code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn.neighbors import KNeighborsClassifier
length = 1000
evens = np.array([np.arange(0, length*2, 2)])
odds = np.array([np.arange(1, length*2, 2)])
zeroes = np.array([np.resize([[0]], length)])
ones = np.array([np.resize([[1]], length)])
evens_data = np.concatenate((evens.T, zeroes.T), axis=1)
odds_data = np.concatenate((odds.T, ones.T), axis=1)
data = np.array(np.append(evens_data, odds_data)).reshape(length*2, 2)
np.random.shuffle(data)
X = np.array(data[:,0], dtype=np.uint8).reshape(length*2,1)
y = data[:,1]
###
### IF I COMMENT FROM HERE
###
print(f'1 X.dtype: {X.dtype} X[0].dtype: {X[0].dtype} and X.shape {X.shape} and X[0].shape {X[0].shape}')
X = tobinstr(X)
print(f'2 X.dtype: {X.dtype} X[0].dtype: {X[0].dtype} and X.shape {X.shape} and X[0].shape {X[0].shape}')
X = tobits(X)
print(f'3 X.dtype: {X.dtype} X[0].dtype: {X[0].dtype} and X.shape {X.shape} and X[0].shape {X[0].shape} and X[0][0].shape {X[0][0].shape}')
X = X[:,0]
print(f'4 X.dtype: {X.dtype} X[0].dtype: {X[0].dtype} and X.shape {X.shape} and X[0].shape {X[0].shape}')
###
### TO HERE, the code works fine
###
KNeighborsClassifier(3).fit(X, y)
# The line above fails with error "TypeError: only size-1 arrays can be converted to Python scalars"
# Caused by: ValueError: setting an array element with a sequence.
</code></pre>
<p>And here's the output:</p>
<pre><code>1 X.dtype: uint8 X[0].dtype: uint8 and X.shape (2000, 1) and X[0].shape (1,)
2 X.dtype: <U16 X[0].dtype: <U16 and X.shape (2000, 1) and X[0].shape (1,)
3 X.dtype: object X[0].dtype: object and X.shape (2000, 1) and X[0].shape (1,) and X[0][0].shape (16,)
4 X.dtype: object X[0].dtype: uint8 and X.shape (2000,) and X[0].shape (16,)
</code></pre>
<p>How do I turn X into a shape of (2000,16)?</p>
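<p>One thing I was about to try is stacking the per-row bit arrays into a single 2-D array right before the fit, roughly like this, though I am not sure it addresses the underlying dtype issue:</p>
<pre class="lang-py prettyprint-override"><code>X = np.stack(X)   # object array of (16,) arrays -> shape (2000, 16)
print(X.shape, X.dtype)
KNeighborsClassifier(3).fit(X, y)
</code></pre>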
|
<python><numpy>
|
2022-12-29 06:01:14
| 0
| 7,458
|
thiagoh
|
74,947,583
| 2,878,290
|
PyArrow Issues in Dataframe
|
<p>I am using the Prophet library to build analytics on our data, but when we use a different source for the data, we encounter the error below.</p>
<blockquote>
<p>PythonException: An exception was thrown from a UDF:
'pyarrow.lib.ArrowTypeError: Expected a string or bytes dtype, got
float64</p>
</blockquote>
<p>Code we are using:</p>
<pre><code>def run_row_outlier_check(df: DataFrame, min_date, start_date, groupby_cols, job_id) -> DataFrame:
    pd_schema = StructType([
        StructField(groupby_col, StringType(), True),
        StructField("ds", DateType(), True),
        StructField("y", IntegerType(), True),
        StructField("yhat", FloatType(), True),
        StructField("yhat_lower", FloatType(), True),
        StructField("yhat_upper", FloatType(), True),
        StructField("trend", FloatType(), True),
        StructField("trend_lower", FloatType(), True),
        StructField("trend_upper", FloatType(), True),
        StructField("additive_terms", FloatType(), True),
        StructField("additive_terms_lower", FloatType(), True),
        StructField("additive_terms_upper", FloatType(), True),
        StructField("weekly", FloatType(), True),
        StructField("weekly_lower", FloatType(), True),
        StructField("weekly_upper", FloatType(), True),
        StructField("yearly", FloatType(), True),
        StructField("yearly_lower", FloatType(), True),
        StructField("yearly_upper", FloatType(), True),
        StructField("multiplicative_terms", FloatType(), True),
        StructField("multiplicative_terms_lower", FloatType(), True),
        StructField("multiplicative_terms_upper", FloatType(), True)
    ])

    # dataframe of consecutive dates
    df_rundates = (ps.DataFrame({'date': pd.date_range(start=min_date, end=(date.today() - timedelta(days=1)))})).to_spark()

    # combine + explode to create row for each date and grouped col (e.g. business segment)
    df_bizlist = (
        df.filter(f"{date_col} >= coalesce(date_sub(date 'today', {num_days_check}), '{start_date}')")
        .groupBy(groupby_col)
        .count()
        .orderBy(col("count").desc())
    )
    df_rundates_bus = (
        df_rundates
        .join(df_bizlist, how='full')
        .select(df_bizlist[groupby_col], df_rundates["date"].alias("ds"))
    )

    # create input dataframe for prophet forecast
    df_grouped_cnt = df.groupBy(groupby_cols).count()
    df_input = (
        df_rundates_bus.selectExpr(f"{groupby_col}", "to_date(ds) as ds")
        .join(df_grouped_cnt.selectExpr(f"{groupby_col}", f"{date_col} as ds", "count as y"), on=['ds', f'{groupby_col}'], how='left')
        .withColumn("y", coalesce("y", lit(0)))
        .repartition(sc.defaultParallelism, "ds")
    )

    # cache dataframe to improve performance
    # df_input.cache().repartition(sc.defaultParallelism, "ds")

    # forecast
    df_forecast = (
        df_input
        .groupBy(groupby_col)
        .applyInPandas(pd_apply_forecast, schema=pd_schema)
    )

    # filter forecast with outlier scores
    df_rowoutliers = (
        df_forecast
        .filter("y > 0 AND (y > yhat_upper OR y < array_max(array(yhat_lower,0)))")
        .withColumn("check_type", lit("row_count"))
        .withColumn("deduct_score", expr("round(sqrt(pow(y-yhat, 2) / pow(yhat_lower - yhat_upper,2)))").cast('int'))
        .select(
            col("check_type"),
            col("ds").alias("ref_date"),
            col(groupby_col).alias("ref_dimension"),
            col("y").cast('int').alias("actual"),
            col("deduct_score"),
            col("yhat").alias("forecast"),
            col("yhat_lower").alias("forecast_lower"),
            col("yhat_upper").alias("forecast_upper")
        )
    )
    return add_metadata_columns(df_forecast, job_id), add_metadata_columns(df_rowoutliers, job_id)


def add_metadata_columns(df: DataFrame, job_id) -> DataFrame:
    """
    | Add job metadata to dataframe
    """
    df = df.select(
        lit(f"{job_id}").cast("int").alias("job_id"),
        lit(f"{source_type}").alias("source_type"),
        lit(f"{server}").alias("server_name"),
        lit(f"{database}").alias("database_name"),
        lit(f"{table}").alias("table_name"),
        lit(f"{groupby_col}").alias("ref_dim_column"),
        lit(f"{date_col}").alias("ref_date_column"),
        df["*"],
        expr("current_timestamp()").alias("_job_timestamp"),
        expr("current_user()").alias("_job_user")
    ).withColumnRenamed(groupby_col, "ref_dimension")
    return df
</code></pre>
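<p>One thing I plan to try, since the schema declares the group-by column as a string while the new source apparently delivers it as float64, is to cast that column explicitly before <code>applyInPandas</code>, roughly:</p>
<pre><code>from pyspark.sql.functions import col

df_input = df_input.withColumn(groupby_col, col(groupby_col).cast("string"))
</code></pre>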
|
<python><dataframe><pyspark><pyarrow>
|
2022-12-29 05:27:50
| 0
| 382
|
Developer Rajinikanth
|
74,947,546
| 7,959,614
|
Selenium driver get_log() stops suddenly
|
<p>I have a script that creates multiple <code>selenium.webdriver</code> instances, executes a JS script and reads their resulting logs. Most of the time the script runs without problems, but in a few cases the logs suddenly stop <em>after running for a while</em>.
I am not sure how to reproduce the error.</p>
<p>My headless webdriver is based on this <a href="https://stackoverflow.com/a/53518159/7959614">answer</a> and defined as follows:</p>
<pre><code>import threading

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

threadLocal = threading.local()
DRIVER_PATH = '/chromedriver'

def create_driver_headless() -> webdriver.Chrome:
    driver = getattr(threadLocal, 'driver', None)
    if driver is None:
        dc = DesiredCapabilities.CHROME
        dc['goog:loggingPrefs'] = {'browser': 'ALL'}
        chrome_options = Options()
        chrome_options.add_argument('--disable-infobars')
        chrome_options.add_argument('--disable-gpu')
        chrome_options.add_argument('--disable-logging')
        chrome_options.add_argument('--disable-extensions')
        chrome_options.add_argument('--disable-notifications')
        chrome_options.add_argument('--disable-default-apps')
        chrome_options.add_argument('--window-size=1920,1080')
        chrome_options.add_argument('--headless')
        chrome_options.add_argument('--no-sandbox')
        chrome_options.add_argument('--disable-dev-shm-usage')
        chrome_options.add_experimental_option('excludeSwitches', ['enable-automation'])
        chrome_options.add_experimental_option('useAutomationExtension', False)
        driver = webdriver.Chrome(executable_path=CHROME_DRIVER,
                                  desired_capabilities=dc,
                                  chrome_options=chrome_options)
        setattr(threadLocal, 'driver', driver)
    return driver
</code></pre>
<p>I read the logs using</p>
<pre><code>while True:
    logs = driver.get_log('browser')
    for log in logs:
        # do something
</code></pre>
<p>My initial guess was that the headless driver crashed when this occurred. However, later in my script I have the following code, which is satisfied (it returned <code>None</code>):</p>
<pre><code>if len(logs) == 0:
try:
        if 'END' in driver.find_element(By.XPATH, ".//*[@class='sr-match-set-sports__status-str srm-is-uppercase']").text:
return None
except NoSuchElementException:
continue
</code></pre>
<p>I assume that if the driver crashed this line should return a <code>NoSuchElementException</code>, so I can conclude that it did not crash?</p>
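<p>To make the crash theory testable, a minimal liveness probe I could run whenever the logs stay empty looks like this (just a sketch; <code>driver_is_alive</code> is a helper name I made up, and any trivial command such as reading <code>driver.title</code> forces a round trip to chromedriver):</p>
<pre><code>from selenium.common.exceptions import WebDriverException

def driver_is_alive(driver) -> bool:
    try:
        _ = driver.title  # any trivial command round-trips to chromedriver
        return True
    except WebDriverException:
        return False
</code></pre>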
<p>I am also certain that additional logs should have been produced, because I was watching the same URL in a regular Google Chrome window at the same time.</p>
<p>Any idea what's causing this behaviour?</p>
|
<python><multithreading><selenium>
|
2022-12-29 05:21:08
| 2
| 406
|
HJA24
|
74,947,485
| 3,899,975
|
Horizontal barplot with offset in seaborn
|
<p>My dataset looks like this; each cell holds its data points as a single comma-separated string (pandas <code>object</code> dtype).
<a href="https://i.sstatic.net/QmDGR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QmDGR.png" alt="enter image description here" /></a></p>
<p>Here is the dataset:
<a href="https://github.com/aebk2015/multipleboxplot.git" rel="nofollow noreferrer">https://github.com/aebk2015/multipleboxplot.git</a></p>
<p>I want to have bar plots for each of the "Location" entries (P1–P14) and each category column (92A11, 92B11, 82B11); something like this:
<a href="https://i.sstatic.net/O8iH8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O8iH8.png" alt="enter image description here" /></a></p>
<p>I have tried something like this, and I can get a plot for each individual Pi (i = 1...14), but not only is it laborious, it also does not look like what I want:</p>
<pre><code>fig, ax = plt.subplots(2, 3, figsize=(8,2))
sns.stripplot(data=df.loc[7]['92A11'].split(','), dodge=True, linewidth=1, ax=ax[0,0], color='black', jitter=False, orient='h')
sns.violinplot(data=df.loc[7]['92A11'].split(','), ax=ax[0,0], color='orange', orient='h')
sns.stripplot(data=df.loc[7]['92B11'].split(','), dodge=True, linewidth=1, ax=ax[0,1], color='black', jitter=False, orient='h')
sns.violinplot(data=df.loc[7]['92B11'].split(','), ax=ax[0,1], color='orange', orient='h')
sns.stripplot(data=df.loc[7]['82B11'].split(','), dodge=True, linewidth=1, ax=ax[0,2], color='black', jitter=False, orient='h')
sns.violinplot(data=df.loc[7]['82B11'].split(','), ax=ax[0,2], color='orange', orient='h')
sns.stripplot(data=df.loc[6]['92A11'].split(','), dodge=True, linewidth=1, ax=ax[1,0], color='black', jitter=False, orient='h')
sns.violinplot(data=df.loc[6]['92A11'].split(','), ax=ax[1,0], color='orange', orient='h')
sns.stripplot(data=df.loc[6]['92B11'].split(','), dodge=True, linewidth=1, ax=ax[1,1], color='black', jitter=False, orient='h')
sns.violinplot(data=df.loc[6]['92B11'].split(','), ax=ax[1,1], color='orange', orient='h')
sns.stripplot(data=df.loc[6]['82B11'].split(','), dodge=True, linewidth=1, ax=ax[1,2], color='black', jitter=False, orient='h')
sns.violinplot(data=df.loc[6]['82B11'].split(','), ax=ax[1,2], color='orange', orient='h')
ax[0,0].set_xlim(0,200)
ax[0,1].set_xlim(0,200)
ax[0,2].set_xlim(0,200)
ax[1,0].set_xlim(0,200)
ax[1,1].set_xlim(0,200)
ax[1,2].set_xlim(0,200)
ax[1,0].set_xlabel('92A11')
ax[1,1].set_xlabel('92A11')
ax[1,2].set_xlabel('92A11')
ax[0,0].set_ylabel('P8')
ax[1,0].set_ylabel('P7')
fig.tight_layout()
</code></pre>
<p><a href="https://i.sstatic.net/Bpycq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bpycq.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><seaborn>
|
2022-12-29 05:12:39
| 1
| 1,021
|
A.E
|
74,947,453
| 16,869,946
|
Returning most recent row with certain values in Pandas
|
<p>I have a Pandas dataframe, sorted by <code>ID</code> and in descending order of Date, that looks like</p>
<pre><code>ID Date A Salary
1 2022-12-01 2 100
1 2022-11-11 3 200
1 2022-10-25 1 150
1 2022-05-17 4 160
2 2022-12-01 2 170
2 2022-11-19 1 220
2 2022-10-10 1 160
3 2022-11-11 3 350
3 2022-09-11 1 200
3 2022-08-19 3 160
3 2022-07-20 3 190
3 2022-05-11 3 200
</code></pre>
<p>I would like to add a new column <code>Salary_argmin_recent_A</code> that, for each row, holds the Salary of the most recent earlier row with A=1 within the same ID, so the desired output looks like</p>
<pre><code>ID Date A Salary Salary_argmin_recent_A
1 2022-12-01 2 100 150 (most recent salary such that A=1 is 2022-10-25)
1 2022-11-11 3 200 150 (most recent salary such that A=1 is 2022-10-25)
1 2022-10-25 1 150 NaN (no rows before with A=1 for ID 1)
1 2022-05-17 4 160 NaN (no rows before with A=1 for ID 1)
2 2022-12-01 2 170 220
2 2022-11-19 1 220 160
2 2022-10-10 1 160 NaN
3 2022-11-11 3 350 200
3 2022-09-11 1 200 NaN
3 2022-08-19 3 160 NaN
3 2022-07-20 3 190 NaN
3 2022-05-11 3 200 NaN
</code></pre>
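<p>For convenience, the example above can be reproduced with something like this (a sketch; the column dtypes are my assumption):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "ID":     [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3],
    "Date":   ["2022-12-01", "2022-11-11", "2022-10-25", "2022-05-17",
               "2022-12-01", "2022-11-19", "2022-10-10",
               "2022-11-11", "2022-09-11", "2022-08-19", "2022-07-20", "2022-05-11"],
    "A":      [2, 3, 1, 4, 2, 1, 1, 3, 1, 3, 3, 3],
    "Salary": [100, 200, 150, 160, 170, 220, 160, 350, 200, 160, 190, 200],
})
df["Date"] = pd.to_datetime(df["Date"])  # already sorted by ID and descending Date
</code></pre>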
<p>Thanks in advance.</p>
|
<python><python-3.x><pandas><dataframe><datetime>
|
2022-12-29 05:05:36
| 2
| 592
|
Ishigami
|
74,947,421
| 2,956,053
|
Do pyperclip3 or pasteboard support paste of PNG on macOS
|
<p>I am trying to figure out how to paste a PNG graphic from the clipboard into a Python script on macOS. I have looked into using the pyperclip, pyperclip3, and pasteboard modules. I cannot get any of these to work. I am able to paste text from the clipboard, but not PNG.</p>
<p>The project description for <a href="https://pypi.org/project/pyperclip/" rel="nofollow noreferrer">pyperclip</a> explicitly states that it is for copy/paste of text data.</p>
<p>The overview for <a href="https://pypi.org/project/pyperclip3/" rel="nofollow noreferrer">pyperclip3</a> states that it has cross-platform support for both text and binary data. It also states that it is using pasteboard as the backend for MacOS.</p>
<p>There is a caveat in the description of the <code>get_contents</code> function in <a href="https://pypi.org/project/pasteboard/" rel="nofollow noreferrer">pasteboard</a> that states:</p>
<blockquote>
<p>type - The format to get. Defaults to pasteboard.String, which corresponds to NSPasteboardTypeString. See the pasteboard module members for other options such as HTML fragment, RTF, PDF, PNG, and TIFF. Not all formats of NSPasteboardType are implemented.</p>
</blockquote>
<p>It does not, however, give any indication of which formats may or may not actually be implemented.</p>
<p>Here is the complete script from my attempts to make pasteboard work:</p>
<pre><code>import pasteboard
pb = pasteboard.Pasteboard()
x = pb.get_contents(type=pasteboard.PNG)
print(type(x))
print(x)
</code></pre>
<p>If I drop the type argument from get_contents and there is text in the clipboard, this works as expected. If I keep the <code>type=pasteboard.PNG</code> argument and there is PNG data in the clipboard, the output shows that the type of x is <code>NoneType</code>.</p>
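<p>Since the pasteboard documentation quoted above also lists TIFF among the supported types, one variant I could try is falling back through the binary image types (a sketch; the idea that image data may land on the pasteboard as TIFF rather than PNG is only my assumption):</p>
<pre><code>import pasteboard

pb = pasteboard.Pasteboard()
data = None
# try the binary image types the docs mention, in order
for fmt in (pasteboard.PNG, pasteboard.TIFF):
    data = pb.get_contents(type=fmt)
    if data is not None:
        print(f"got {len(data)} bytes for {fmt}")
        break
if data is None:
    print("no PNG or TIFF data found on the pasteboard")
</code></pre>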
<p>And this is the complete script from my attempts to make pyperclip3 work:</p>
<pre><code>import pyperclip3
x = pyperclip3.paste()
print(type(x))
print(x)
</code></pre>
<p>In this case it doesn't matter if I have text or PNG data in the clipboard, the type of x is <code>bytes</code>. For text, the content of x is as expected. For PNG data, the content of x is the empty byte string.</p>
<p>Is what I'm trying to do even possible? Or am I wasting my time trying to figure out how to make pasting of PNG data into a python script work?</p>
|
<python><macos><pyperclip><pasteboard>
|
2022-12-29 05:00:05
| 2
| 586
|
MikeMayer67
|
74,947,287
| 6,291,574
|
How to merge the next nearest data points using a common datetime value for a group?
|
<p>I have two data frames like the samples given below:</p>
<pre><code>df1 = pd.DataFrame({"items":["i1", "i1", "i1", "i2","i2", "i2"], "dates":["09-Nov-2022", "10-Aug-2022", "27-May-2022", "20-Oct-2022", "01-Nov-2022","27-Jul-2022"]})
df2 = pd.DataFrame({"items": ["i1", "i1", "i1", "i1", "i2", "i2"], "prod_mmmyyyy": ["Sep 2022", "Jun 2022", "Mar 2022", "Dec 2021", "Sep 2022", "Jun 2022"]})
</code></pre>
<p>Here I want to map df1 onto df2: for each item, every df2.prod_mmmyyyy should be matched with the next closest date from the df1.dates column (the earliest df1 date that falls after that month).
The expected result would look like this:</p>
<pre><code>result = pd.DataFrame({"items":["i1", "i1", "i1", "i1", "i2", "i2"],
"prod_mmmyyyy": ["Sep 2022", "Jun 2022", "Mar 2022", "Dec 2021", "Sep 2022", "Jun 2022"],
"mapped_date": ["09-Nov-2022", "10-Aug-2022", "27-May-2022", "27-May-2022", "20-Oct-2022", "27-Jul-2022"]})
</code></pre>
<p>I have tried converting prod_mmmyyyy to a month-end date and then grouping by items, but I am stuck on what to do after that.</p>
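<p>The conversion step I have so far looks roughly like this (a sketch; column names are as in the frames above, and the <code>prod_month_end</code> column name is just my choice):</p>
<pre><code>import pandas as pd

df1["dates"] = pd.to_datetime(df1["dates"], format="%d-%b-%Y")
df2["prod_month_end"] = (
    pd.to_datetime(df2["prod_mmmyyyy"], format="%b %Y") + pd.offsets.MonthEnd(0)
)
</code></pre>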
<p>Thanks in advance for any help provided.</p>
|
<python><pandas><dataframe><group-by><data-science>
|
2022-12-29 04:36:20
| 1
| 1,741
|
durjoy
|
74,947,247
| 6,210,219
|
Update series values with a difference of 1
|
<p>I have the following series in a dataframe.</p>
<pre><code>df=pd.DataFrame()
df['yMax'] = [127, 300, 300, 322, 322, 322, 322, 344, 344, 344, 366, 366, 367, 367, 367, 388, 388, 388, 388, 389, 389, 402, 403, 403, 403]
</code></pre>
<p>For adjacent values that are very close to one another, say with a difference of 1, I would like to eliminate that difference so they end up as the same number, by adjusting one of them up or down by 1.</p>
<p>So, for example, the resultant list would become:</p>
<pre><code>df['yMax'] = [127, 300, 300, 322, 322, 322, 322, 344, 344, 344, 367, 367, 367, 367, 367, 389, 389, 389, 389, 389, 389, 403, 403, 403, 403]
</code></pre>
<p>I know we can easily find the difference between adjacent values with <code>df.diff()</code>.</p>
<pre><code>0 NaN
1 173.0
2 0.0
3 22.0
4 0.0
5 0.0
8 0.0
6 22.0
7 0.0
9 0.0
10 22.0
11 0.0
12 1.0
13 0.0
14 0.0
15 21.0
16 0.0
17 0.0
20 0.0
18 1.0
19 0.0
21 13.0
22 1.0
23 0.0
24 0.0
Name: yMax, dtype: float64
</code></pre>
<p>But how should I perform the transformation?</p>
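<p>One direction I was considering, building on <code>diff()</code>, is to start a new group whenever the jump between adjacent values is greater than 1 and then replace every value with its group maximum (a sketch using the <code>df</code> defined above; I have not verified it is the idiomatic way):</p>
<pre><code># rows whose difference to the previous value exceeds 1 start a new group
groups = df['yMax'].diff().abs().gt(1).cumsum()
df['yMax'] = df.groupby(groups)['yMax'].transform('max')
</code></pre>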
|
<python><arrays><pandas><dataframe><series>
|
2022-12-29 04:27:08
| 1
| 728
|
Sati
|