| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,013,095
| 929,732
|
convert_tz is taking on GMT at the end of the date although I'm converting to East Coast Time
|
<p>So here is the time in the database...</p>
<pre><code>2023-01-02 21:00:00
</code></pre>
<p>Here is the select that I use to get the information out of the database...</p>
<pre><code>SELECT job_id,job_desc,
CONVERT_TZ(job_date, 'GMT', 'US/Eastern') "job_date",
cust_auto_id,
cost,
paid,
mileage,
(
SELECT
CONCAT(LastName, ',', FirstName)
from
persons
where
jobs.person_id = persons.person_id) "Name"
from jobs where job_id = %s'''
cursorone.execute(sql,(1,))
</code></pre>
<p>Here is what is returned in the JSON block</p>
<pre><code>{
"Name": "Lterkdkflskd,Robert",
"cost": "0.00",
"cust_auto_id": 2,
"job_date": "Mon, 02 Jan 2023 16:00:00 GMT",
"job_desc": "Front Tires",
"job_id": 1,
"mileage": 123432,
"paid": ""
}
</code></pre>
|
<python><mariadb><convert-tz>
|
2023-01-05 01:48:03
| 0
| 1,489
|
BostonAreaHuman
|
75,013,084
| 3,137,388
|
gzip python module is giving empty bytes
|
<p>I copied one .gz file to a remote machine using fabric. When I tried to read the same remote file, empty bytes are getting displayed.</p>
<p>Below is the python code I used to read the zip file from remote machine.</p>
<pre><code>try:
fabric_connection = Connection(host = host_name, connect_kwargs = {'key_filename' : tmp_id, 'timeout' : 10})
fd = io.BytesIO()
remote_file = '/home/test/' + 'test.txt.gz'
fabric_connection.get(remote_file, fd)
with gzip.GzipFile(mode = 'rb', fileobj = fd) as fin:
content = fin.read()
print(content)
decoded_content = content.decode()
print(decoded_content)
except BaseException as err:
assert not err
finally:
fabric_connection.close()
</code></pre>
<p>It gives the output below:</p>
<pre><code>b''
</code></pre>
<p>I verified on the remote machine and the content is present in the file.</p>
<p>Can someone please let me know how to fix this issue?</p>
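<p>A plausible explanation, added here as a hedged note: after <code>fabric_connection.get()</code> writes into the <code>BytesIO</code> buffer, the stream position sits at the end, so gzip reads nothing. A minimal sketch of that idea, reusing the names from the question:</p>
<pre class="lang-py prettyprint-override"><code>fabric_connection.get(remote_file, fd)
fd.seek(0)  # rewind the in-memory buffer before handing it to gzip
with gzip.GzipFile(mode='rb', fileobj=fd) as fin:
    content = fin.read()
</code></pre>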
|
<python><gzip><fabric>
|
2023-01-05 01:45:39
| 1
| 5,396
|
kadina
|
75,012,933
| 8,616,751
|
Confusion matrix plot one decimal but exact zeros
|
<p>I'm trying to plot a confusion matrix that has one decimal for all values, but if a value is exactly zero, I would like to keep it as an exact zero rather than 0.0. How can I achieve this? In this minimal example, I would like to keep <code>4.0</code>, but instead of <code>0.0</code> I would like a <code>0</code>.</p>
<pre><code>import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 0, 1, 1, 1, 1]
disp = ConfusionMatrixDisplay(confusion_matrix=confusion_matrix(y_true, y_pred),
display_labels=[0, 1])
disp.plot(include_values=True, cmap='Blues', xticks_rotation='vertical',
values_format='.1f', ax=None, colorbar=True)
plt.show()
</code></pre>
<p>I tried to pass a custom function</p>
<pre><code>def custom_formatter(value):
if value == 0:
return '{:.0f}'.format(value)
else:
return '{:.1f}'.format(value)
</code></pre>
<p>to</p>
<pre><code>disp.plot(include_values=True, cmap='Blues', xticks_rotation='vertical',
          values_format=custom_formatter, ax=None, colorbar=True)
</code></pre>
<p>but this gives the error</p>
<pre><code>TypeError: format() argument 2 must be str, not function
</code></pre>
<p>Thank you so much!</p>
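<p>One possible direction, sketched here as an assumption rather than a confirmed recipe: <code>values_format</code> only accepts a format string, but the rendered matplotlib text objects are exposed on the display as <code>text_</code> after plotting, so the labels can be rewritten afterwards:</p>
<pre class="lang-py prettyprint-override"><code>cm = confusion_matrix(y_true, y_pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=[0, 1])
disp.plot(include_values=True, cmap='Blues', xticks_rotation='vertical',
          values_format='.1f', ax=None, colorbar=True)
# Rewrite each cell label: exact zeros as "0", everything else with one decimal
for text, value in zip(disp.text_.ravel(), cm.ravel()):
    text.set_text('0' if value == 0 else '{:.1f}'.format(value))
plt.show()
</code></pre>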
|
<python><matplotlib><scikit-learn><confusion-matrix>
|
2023-01-05 01:11:20
| 1
| 303
|
scriptgirl_3000
|
75,012,853
| 875,295
|
How do Python pipes still work through spawned processes?
|
<p>I'm trying to understand why the following code <strong>works</strong>:</p>
<pre><code>import multiprocessing
def send_message(conn):
# Send a message through the pipe
conn.send("Hello, world!")
if __name__ == '__main__':
multiprocessing.set_start_method('spawn')
# Create a pipe
parent_conn, child_conn = multiprocessing.Pipe()
# Create a child process
p = multiprocessing.Process(target=send_message, args=(child_conn,))
p.start()
# Wait for the child process to finish
p.join()
# Read the message from the pipe
message = parent_conn.recv()
print(message)
</code></pre>
<p>As I understand, python pipes are just regular OS pipes, which are file descriptors.
When a new process is created via <strong>spawn</strong>, we should lose all the file descriptors (contrary to a regular fork).</p>
<p>In that case, how is it possible that the python pipe is still "connected" to its parent process?</p>
|
<python><multiprocessing>
|
2023-01-05 00:56:51
| 1
| 8,114
|
lezebulon
|
75,012,806
| 7,766,024
|
Python logger is not creating log file despite a FileHandler being specified
|
<p>I've been using a logger to log messages to a file and also output them to the console. I don't believe I did anything wrong, but the log file hasn't been created for a few of the runs that I did.</p>
<p>The code I have is:</p>
<pre><code>log_msg_format = '[%(asctime)s - %(levelname)s - %(filename)s: %(lineno)d] %(message)s'
handlers = [logging.FileHandler(filename=args.log_filename), logging.StreamHandler()]
logging.basicConfig(format=log_msg_format,
level=logging.INFO,
handlers=handlers)
</code></pre>
<p>When I check the logging level for each of the handlers, they are both <code>NOTSET</code>. I thought that this <em>might</em> be the problem, but the script is still outputting log messages to the console which means that this shouldn't be a problem.</p>
<p>What might be going wrong?</p>
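<p>A hedged guess worth checking: <code>logging.basicConfig()</code> silently does nothing if the root logger already has handlers (for example from an earlier import that logged something). A minimal sketch using the <code>force</code> flag (available since Python 3.8) to rule that out; <code>args.log_filename</code> is taken from the question:</p>
<pre class="lang-py prettyprint-override"><code>import logging

log_msg_format = '[%(asctime)s - %(levelname)s - %(filename)s: %(lineno)d] %(message)s'
logging.basicConfig(format=log_msg_format,
                    level=logging.INFO,
                    handlers=[logging.FileHandler(filename=args.log_filename),
                              logging.StreamHandler()],
                    force=True)  # replace any handlers configured earlier in the run
</code></pre>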
|
<python><logging>
|
2023-01-05 00:47:08
| 0
| 3,460
|
Sean
|
75,012,613
| 1,278,365
|
Python: how to programmatically print the first two lines the REPL prints every time it is started
|
<pre class="lang-none prettyprint-override"><code>$ python
Python 3.11.1 (main, Jan 4 2023, 23:41:08) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
</code></pre>
<p>Is there a programmatic way of printing those two lines that get printed when python is started?</p>
<p>Note: Somehow related to <a href="https://stackoverflow.com/questions/1252163/printing-python-version-in-output">Printing Python version in output</a> but not exactly the same question.</p>
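<p>A minimal sketch of one approximation: the first line can be rebuilt from <code>sys.version</code> and <code>sys.platform</code>, while the second line is reproduced here as a hard-coded string (an assumption, not an official API):</p>
<pre class="lang-py prettyprint-override"><code>import sys

print(f"Python {sys.version} on {sys.platform}")
print('Type "help", "copyright", "credits" or "license" for more information.')
</code></pre>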
|
<python>
|
2023-01-05 00:08:23
| 4
| 2,058
|
gmagno
|
75,012,582
| 236,594
|
Generating an interval set in hypothesis
|
<p>I have some code that works with intervals, which are really just python dicts with the following structure:</p>
<pre class="lang-py prettyprint-override"><code>{
"name": "some utf8 string",
"start": 0.0, # 0.0 <= start < 1.0
"end": 1.0, # start < end <= 1.0
"size": 1.0, # size == end - start
}
</code></pre>
<p>Writing a strategy for a single interval is relatively straightforward. I'd like to write a strategy to generate <strong>interval sets</strong>. An interval set is a list of intervals, such that:</p>
<ul>
<li>The list contains an arbitrary number of intervals.</li>
<li>The interval names are unique.</li>
<li>The intervals do not overlap.</li>
<li>All intervals are contained within the range <code>(0.0, 1.0)</code>.</li>
<li>Each interval's <code>size</code> is correct.</li>
<li>The intervals do not have to be contiguous and the entire range doesn't need to be covered.</li>
</ul>
<p>How would you write this strategy?</p>
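<p>A sketch of one possible composite strategy, under the assumption that synthetic names like <code>interval_0</code>, <code>interval_1</code>, ... are acceptable for uniqueness; it draws sorted distinct boundary points and pairs them up, so the resulting intervals cannot overlap:</p>
<pre class="lang-py prettyprint-override"><code>from hypothesis import strategies as st

@st.composite
def interval_sets(draw, max_intervals=5):
    n = draw(st.integers(min_value=0, max_value=max_intervals))
    # 2*n distinct points in [0, 1]; sorted and paired, they give disjoint intervals
    points = sorted(draw(st.lists(
        st.floats(min_value=0.0, max_value=1.0, allow_nan=False),
        min_size=2 * n, max_size=2 * n, unique=True)))
    return [{"name": f"interval_{i}",
             "start": points[2 * i],
             "end": points[2 * i + 1],
             "size": points[2 * i + 1] - points[2 * i]}
            for i in range(n)]
</code></pre>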
|
<python><python-hypothesis>
|
2023-01-05 00:03:21
| 1
| 2,098
|
breadjesus
|
75,012,514
| 12,242,085
|
How to make dummy coding (pd.get_dummies()) only for categories whose share in nominal variables is at least 40% in Python Pandas?
|
<p>I have DataFrame like below:</p>
<pre><code>COL1 | COL2 | COL3 | ... | COLn
-----|------|------|------|----
111 | A | Y | ... | ...
222 | A | Y | ... | ...
333 | B | Z | ... | ...
444 | C | Z | ... | ...
555 | D | P | ... | ...
</code></pre>
<p>And I need to do dummy coding (<code>pandas.get_dummies()</code>) only for categories whose share of the variable is at least 40%.</p>
<p>So, in COL2 only category "A" has a share of at least 40% of the variable; in COL3, categories "Y" and "Z" each have a share of at least 40%.</p>
<p>So, as a result I need output like below:</p>
<pre><code>COL1 | COL2_A | COL3_Y | COL3_Z | ... | COLn
-------|---------|---------|-------|------|-------
111 | 1 | 1 | 0 | ... | ...
222 | 1 | 1 | 0 | ... | ...
333 | 0 | 0 | 1 | ... | ...
444 | 0 | 0 | 1 | ... | ...
555 | 0 | 0 | 0 | ... | ...
</code></pre>
<p>reproducible example of source df below:</p>
<pre><code>df = pd.DataFrame()
df["COL1"] = [111,222,333,444,555]
df["COL2"] = ["A", "A", "B", "C", "D"]
df["COL3"] = ["Y", "Y", "Z", "Z", "P"]
df
</code></pre>
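<p>A sketch of one way to do this, assuming the 40% share is computed per column with <code>value_counts(normalize=True)</code>; the <code>threshold</code> name is illustrative:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

threshold = 0.4
out = df[["COL1"]].copy()
for col in ["COL2", "COL3"]:
    shares = df[col].value_counts(normalize=True)
    keep = shares[shares >= threshold].index          # categories with >= 40% share
    dummies = pd.get_dummies(df[col])[keep]
    dummies.columns = [f"{col}_{c}" for c in keep]
    out = pd.concat([out, dummies], axis=1)
print(out)
</code></pre>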
|
<python><pandas><categories><one-hot-encoding><dummy-variable>
|
2023-01-04 23:46:13
| 1
| 2,350
|
dingaro
|
75,012,497
| 14,729,820
|
How to split a big compressed text file into small text files
|
<p>I want to convert this corpus <a href="https://data.statmt.org/cc-100/hu.txt.xz" rel="nofollow noreferrer">hu.txt.xz</a> (<strong>15GB</strong>, which becomes around <strong>60GB</strong> after unpacking) into small text files, each with less than <strong>1GB</strong> or <strong>100000</strong> lines.</p>
<pre><code>The expected output:
| siplit_1.txt
| siplit_2.txt
| siplit_3.txt
.....
| siplit_n.txt
</code></pre>
<p>I have this script on a local machine, but it doesn't work; it just loads without processing, I think because of the size of the data:</p>
<pre><code>import sys
import os
import shutil
# //-----------------------
# Retrieve and return output file max lines from input
def how_many_lines_per_file():
try:
return int(input("Max lines per output file: "))
except ValueError:
print("Error: Please use a valid number.")
sys.exit(1)
# //-----------------------
# Retrieve input filename and return file pointer
def file_dir():
try:
filename = input("Input filename: ")
return open(filename, 'r')
except FileNotFoundError:
print("Error: File not found.")
sys.exit(1)
# //-----------------------
# Create output file
def create_output_file_dir(num, filename):
return open(f"./data/output_{filename}/split_{num}.txt", "a")
# //-----------------------
# Create output directory
def create_output_directory(filename):
output_path = f"./data/output_{filename}"
try:
if os.path.exists(output_path): # Remove directory if exists
shutil.rmtree(output_path)
os.mkdir(output_path)
except OSError:
print("Error: Failed to create output directory.")
sys.exit(1)
def ch_dir():
# Print the current working directory
print("Current working directory: {0}".format(os.getcwd()))
# Change the current working directory
os.chdir('./data')
# Print the current working directory
print("Current working directory: {0}".format(os.getcwd()))
# //-----------------------
def split_file():
try:
line_count = 0
split_count = 1
max_lines = how_many_lines_per_file()
# ch_dir()
        input_file = file_dir()  # defined above in this script
input_lines = input_file.readlines()
create_output_directory(input_file.name)
output_file = create_output_file_dir(split_count, input_file.name)
for line in input_lines:
output_file.write(line)
line_count += 1
# Create new output file if current output file's line count is greater than max line count
if line_count > max_lines:
split_count += 1
line_count = 0
output_file.close()
# Prevent creation of an empty file after splitting is finished
if not len(input_lines) == max_lines:
output_file = create_output_file_dir(split_count, input_file.name)
# Handle errors
except Exception as e:
print(f"An unknown error occurred: {e}")
# Success message
else:
print(f"Successfully split {input_file.name} into {split_count} output files!")
# //-----------------------
if __name__ == "__main__":
split_file()
</code></pre>
<p>Is there any Python script or deep learning tool to split them for use in the next task?</p>
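<p>A sketch of a streaming approach, assuming the bottleneck is <code>readlines()</code> loading the whole unpacked corpus into memory; <code>lzma.open</code> can read the .xz file line by line without unpacking it first (file names are illustrative):</p>
<pre class="lang-py prettyprint-override"><code>import lzma

max_lines = 100_000
split_count, line_count = 1, 0
out = open(f"split_{split_count}.txt", "w", encoding="utf-8")
with lzma.open("hu.txt.xz", "rt", encoding="utf-8") as fin:
    for line in fin:                      # streams one line at a time
        out.write(line)
        line_count += 1
        if line_count >= max_lines:
            out.close()
            split_count += 1
            line_count = 0
            out = open(f"split_{split_count}.txt", "w", encoding="utf-8")
out.close()
</code></pre>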
|
<python><file><deep-learning><pytorch><nlp>
|
2023-01-04 23:41:17
| 2
| 366
|
Mohammed
|
75,012,234
| 968,273
|
how to send message to azure bot in python via direct line service
|
<p>We recently deployed a Bot to Azure with type User assigned managed identity, so I only have an app id, but not an app password. I am trying to send messages via the Direct Line service from a python app. Following is my code:</p>
<pre><code>from botframework.connector.auth import MicrosoftAppCredentials
from botframework.connector.client import ConnectorClient
credentials = MicrosoftAppCredentials("YOUR_CLIENT_ID", "")
base_url="https://directline.botframework.com/"
client = ConnectorClient(credentials, base_url=base_url)
connectorClient = ConnectorClient(credentials, base_url=base_url)
client.conversations.send_to_conversation(conversation_id, message, service_url=base_url)
</code></pre>
<p>The python package I installed is botframework-connector==4.14.0</p>
<p>I got an error about access_token. Can anyone tell me what I am missing?</p>
<p>Thanks</p>
|
<python><botframework>
|
2023-01-04 22:56:55
| 1
| 3,703
|
GLP
|
75,012,197
| 9,537,439
|
Why is eval_set not used in cross_validate with XGBClassifier?
|
<p>I'm trying to plot a graph where I can see the evolution in learning of the model as the number of estimators increases. I can do this with <code>eval_set</code> in <code>xgboost.XGBClassifier</code>, giving this plot:</p>
<p><a href="https://i.sstatic.net/iqHMm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iqHMm.png" alt="enter image description here" /></a></p>
<p>But when I use <code>cross_validate()</code>, it ignores the <code>eval_set</code> argument. Why? The code:</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier
from sklearn.model_selection import cross_validate
scores = cross_validate(
estimator=XGBClassifier(n_estimators=30,
max_depth=3,
min_child_weight =4,
random_state=42,
eval_set=[(X_train,y_train),(X_test,y_test)]),
X=data.drop('Y', axis=1),
y=data['Y'],
#fit_params=params,
cv=5,
error_score='raise',
return_train_score=True,
return_estimator=True
)
</code></pre>
<p>The warning is: <code>Parameters: { "eval_set" } are not used.</code></p>
<p>How can I make <code>cross_validate</code> take the <code>eval_set</code> argument?</p>
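<p>One hedged possibility: <code>eval_set</code> is a fit-time argument rather than a constructor argument, so it could be routed through <code>cross_validate</code>'s <code>fit_params</code> instead (as the commented-out line in the question hints). Note that every fold would then be evaluated against the same fixed <code>eval_set</code>, which may or may not be what is wanted; <code>X_train</code>/<code>y_train</code>/<code>X_test</code>/<code>y_test</code> are taken from the question:</p>
<pre class="lang-py prettyprint-override"><code>scores = cross_validate(
    estimator=XGBClassifier(n_estimators=30, max_depth=3,
                            min_child_weight=4, random_state=42),
    X=data.drop('Y', axis=1),
    y=data['Y'],
    fit_params={"eval_set": [(X_train, y_train), (X_test, y_test)]},
    cv=5,
    error_score='raise',
    return_train_score=True,
    return_estimator=True,
)
</code></pre>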
|
<python><xgboost><cross-validation>
|
2023-01-04 22:52:37
| 1
| 2,081
|
Chris
|
75,012,181
| 17,047,177
|
Impossible to retrieve data from pyattck module
|
<p>I am using the <code>pyattck</code> module to retrieve information from mitre att&ck.</p>
<p>Versions:</p>
<pre class="lang-bash prettyprint-override"><code> - pyattck==7.0.0
- pyattck-data==2.5.2
</code></pre>
<p>Then, I just created a simple <code>main.py</code> file to test the module.</p>
<pre class="lang-py prettyprint-override"><code>from pyattck import Attck
def main():
attck = Attck()
for technique in attck.enterprise.techniques:
print(technique.name)
if __name__ == '__main__':
main()
</code></pre>
<p>When running the <code>main.py</code> script I get the following exception:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "/<path>/main.py", line 15, in <module>
main()
File "/<path>/main.py", line 8, in main
for technique in attck.enterprise.techniques:
File "/<path_venv>/lib/python3.10/site-packages/pyattck/attck.py", line 253, in enterprise
from .enterprise import EnterpriseAttck
File "/<path_venv>/lib/python3.10/site-packages/pyattck/enterprise.py", line 7, in <module>
class EnterpriseAttck(Base):
File "/<path_venv>/lib/python3.10/site-packages/pyattck/enterprise.py", line 42, in EnterpriseAttck
__attck = MitreAttck(**Base.config.get_data("enterprise_attck_json"))
File "/<path_venv>/lib/python3.10/site-packages/pyattck_data/attack.py", line 55, in __init__
raise te
File "/<path_venv>/lib/python3.10/site-packages/pyattck_data/attack.py", line 53, in __init__
self.__attrs_init__(**kwargs)
File "<attrs generated init pyattck_data.attack.MitreAttck>", line 14, in __attrs_init__
File "/<path_venv>/lib/python3.10/site-packages/pyattck_data/attack.py", line 66, in __attrs_post_init__
raise te
File "/<path_venv>/lib/python3.10/site-packages/pyattck_data/attack.py", line 62, in __attrs_post_init__
data = TYPE_MAP.get(item['type'])(**item)
TypeError: 'NoneType' object is not callable
</code></pre>
<p>Does anyone know where the issue is? Maybe I have forgotten to import something? It would be helpful to know if this module actually works in another version. This one is the latest stable one ATTOW.</p>
<p><strong>UPDATE</strong>
There is an issue with this project. Mitre added some new features that are not supported by the module and that make it unusable.</p>
<p>There is an <a href="https://github.com/swimlane/pyattck/issues/125" rel="nofollow noreferrer">issue on github</a> related to this.</p>
|
<python><python-3.10><mitre-attck>
|
2023-01-04 22:51:36
| 1
| 1,069
|
A.Casanova
|
75,012,176
| 20,851,944
|
Call of class in python with attributes
|
<p>I made a typo in the <code>__init__()</code> on line 23, but my program only runs with this mistake in place and shows the right result. Could some experienced OOP expert please help me?</p>
<p>If I correct this triple underscore <code>___init__()</code> to the correct <code>__init__(file_path)</code>, I get this ERROR for line 53:</p>
<pre><code>con = Contact('dummy.xml')
TypeError: Contact.__init__() takes 1 positional argument but 2 were given
</code></pre>
<p>Here is a <code>dummy.xml</code> for testing:</p>
<pre><code><?xml version="1.0" encoding = "utf-8"?>
<phonebooks>
<phonebook name="Telefonbuch">
<contact>
<category>0</category>
<person>
<realName>Dummy, Name, Street</realName>
</person>
<telephony nid="1">
<number type="work" prio="1" id="0">012345678</number>
</telephony>
<services />
<setup />
<features doorphone="0" />
<!-- <mod_time>1587477163</mod_time> -->
<uniqueid>358</uniqueid>
</contact>
<contact>
<category>0</category>
<person>
<realName>Foto Name</realName>
</person>
<telephony nid="1">
<number type="home" prio="1" id="0">067856743</number>
</telephony>
<services />
<setup />
<features doorphone="0" />
<mod_time>1547749691</mod_time>
<uniqueid>68</uniqueid>
</contact>
</phonebook>
</phonebooks>
</code></pre>
<p>And here is my program, which works with the typo:</p>
<pre><code>import psutil
import timeit
import xml.etree.ElementTree as ET
class Phonebook:
def __init__(self, file_path):
"""Split tree in contact branches """
self.file_path = file_path
def contacts_list(self, file_path):
contacts = []
events =('start','end','start-ns','end-ns')
for event, elem in ET.iterparse(self.file_path, events=events):
if event == 'end' and elem.tag == 'contact':
contact = elem
contacts.append(contact)
elem.clear()
return contacts
#print("Superclass:",contacts)
class Contact(Phonebook):
def ___init__(file_path): # Here is a Failure !!!
super().__init__(Contact)
def search_node(self, contact, searched_tag):
contact_template =['category','person', 'telephony', 'services', 'setup', 'features', 'mod_time', 'uniqueid' ]
node_tag_list = []
list_difference = []
search_list = []
for node in contact:
if node.tag not in node_tag_list:
node_tag_list.append(node.tag)
for element in contact_template:
if element not in node_tag_list:
list_difference.append(element)
for node in contact:
if node.tag == searched_tag and node.tag not in list_difference:
search_list.append(node.text)
#print(node.text)
else:
if len(list_difference) != 0 and searched_tag in list_difference:
message = self.missed_tag(list_difference)
#print(message)
if message not in search_list:
search_list.append(message)
return search_list
def missed_tag(self, list_difference):
for m in list_difference:
message = f'{m} - not assigned'
return message
def main():
con = Contact('dummy.xml')
contacts = con.contacts_list('dummy.xml')
mod_time_list =[]
for contact in contacts:
mod_time = con.search_node(contact, 'mod_time')
mod_time_list.append(mod_time)
print(len(mod_time_list))
print(mod_time_list)
if __name__ == '__main__':
""" Input XML file definition """
starttime=timeit.default_timer()
main()
print('Finished')
# Getting % usage of virtual_memory ( 3rd field)
print('RAM memory % used:', psutil.virtual_memory()[2])
# Getting usage of virtual_memory in GB ( 4th field)
print('RAM Used (GB):', psutil.virtual_memory()[3]/1000000000)
print("Runtime:", timeit.default_timer()-starttime)
</code></pre>
<p>Could someone please explain to me what's going wrong?</p>
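<p>For reference, a minimal sketch of what the corrected constructor would presumably look like; the key points are the double underscores, the <code>self</code> parameter, and passing the path (not the class) to the parent:</p>
<pre class="lang-py prettyprint-override"><code>class Contact(Phonebook):
    def __init__(self, file_path):
        super().__init__(file_path)   # forward the path to Phonebook.__init__
</code></pre>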
|
<python><init><class-variables>
|
2023-01-04 22:50:29
| 1
| 316
|
Paul-ET
|
75,011,719
| 3,133,027
|
Connect to Oracle on an encrypted port using sqlalchemy
|
<p>We have been using SQLAlchemy successfully to connect to Oracle. Our organization is now moving to an encrypted Oracle database and we have been asked to switch to it.</p>
<p>The sample code given to me by the database engineering team, which uses cx_Oracle directly, is:</p>
<pre><code>import cx_Oracle
dsn = """(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcps)(HOST=test_host)(PORT=1531)))
(CONNECT_DATA=(SERVICE_NAME=test_service)))"""
connection = cx_Oracle.connect(user="test", password="test", dsn=dsn, encoding="UTF-8")
</code></pre>
<p>However, when I try to connect to the database using SQLAlchemy with:
<code>oracle+cx_oracle://test:test@test_host:1531/test_service</code></p>
<p>I get an error : <code>(cx_Oracle.DatabaseError) ORA-12547: TNS:lost contact\n(Background on this error at: http://sqlalche.me/e/13/4xp6)</code></p>
<p>I suspect that it is the protocol tcps that needs to be set.</p>
<p>I tried the following connection string :
<code>protocol_url = 'oracle+cx_oracle://test:test@test_host:1531?service_name=test_service&protocol=tcps'</code></p>
<p>I get the following error :
<code>ValueError: invalid literal for int() with base 10: '1531?service_name=test_service&protocol=tcps'</code></p>
<p>Is there a way to use sqlalchemy to connect to Oracle on an encrypted port?</p>
<p>EDIT : I went through the steps listed in <a href="https://stackoverflow.com/questions/66094119/python-connect-to-oracle-database-with-tcps">Python connect to Oracle database with TCPS</a>
I still get the error : sqlalchemy.exc.DatabaseError: (cx_Oracle.DatabaseError) ORA-12547: TNS:lost contact</p>
<p>NOTE: I know that I can successfully connect to Oracle with encryption using a direct cx_Oracle connection.</p>
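<p>A hedged sketch of one workaround: keep the full TCPS descriptor as the DSN and let SQLAlchemy build connections through a <code>creator</code> callable, so the known-good cx_Oracle connection is reused as-is:</p>
<pre class="lang-py prettyprint-override"><code>import cx_Oracle
from sqlalchemy import create_engine

dsn = """(DESCRIPTION=
    (ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcps)(HOST=test_host)(PORT=1531)))
    (CONNECT_DATA=(SERVICE_NAME=test_service)))"""

engine = create_engine(
    "oracle+cx_oracle://",
    creator=lambda: cx_Oracle.connect(user="test", password="test",
                                      dsn=dsn, encoding="UTF-8"),
)
</code></pre>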
|
<python><oracle-database><encryption><sqlalchemy>
|
2023-01-04 21:49:11
| 2
| 578
|
Krish Srinivasan
|
75,011,620
| 8,942,319
|
Adding USER to Dockerfile results in /bin/sh: 1: poetry: not found
|
<p>Dockerfile:</p>
<pre><code>FROM python:3.10
WORKDIR /home/app/my_script
COPY pyproject.toml /home/app/my_script
COPY poetry.lock /home/app/my_script
RUN groupadd -g 9999 app; \
useradd -ms /bin/bash -u 9999 -g app app; \
mkdir /home/app/.ssh; \
ssh-keyscan github.com >> /home/app/.ssh/known_hosts; \
chown -R app:app /home/app
USER app # THIS LINE HERE
RUN pip install poetry==1.3.1
RUN poetry config virtualenvs.create false \
&& poetry install --no-interaction --no-ansi --no-root
COPY src/. /home/app/my_script/
CMD poetry run python script.py
</code></pre>
<p>The <code>USER</code> line changes things in a way I don't understand. We need to do it for security reasons so as to not run as root. Without it everything works perfectly (locally). With it, the result of <code>docker build...</code> is</p>
<pre><code>[7/8] RUN poetry config virtualenvs.create false && poetry install --no-interaction --no-ansi --no-root: #11 0.432 /bin/sh: 1: poetry: not found
</code></pre>
<p>Basically, I don't understand how USER changes what packages are accessible in following RUN commands.</p>
<p>Thanks for any tips.</p>
|
<python><docker><python-poetry>
|
2023-01-04 21:39:22
| 1
| 913
|
sam
|
75,011,487
| 10,792,871
|
Use ast.literal_eval on all columns of a Pandas Dataframe
|
<p>I have a data frame that looks like the following:</p>
<pre><code>Category Class
==========================
['org1', 'org2'] A
['org2', 'org3'] B
org1 C
['org3', 'org4'] A
org2 A
['org2', 'org4'] B
...
</code></pre>
<p>When I read in this data using Pandas, the lists are read in as strings (e.g., <code>dat['Category'][0][0]</code> returns <code>[</code> rather than returning <code>org1</code>). I have several columns like this. I want every categorical column that already contains at least one list to have all records be a list. For example, the above data frame should look like the following:</p>
<pre><code>Category Class
==========================
['org1', 'org2'] A
['org2', 'org3'] B
['org1'] C
['org3', 'org4'] A
['org2'] A
['org2', 'org4'] B
...
</code></pre>
<p>Notice how the singular values in the <code>Category</code> column are now contained in lists. When I reference <code>dat['Category'][0][0]</code>, I'd like <code>org1</code> to be returned.</p>
<p>What is the best way to accomplish this? I was thinking of using <code>ast.literal_eval</code> with an apply and lambda function, but I'd like to try and use best-practices if possible. Thanks in advance!</p>
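<p>A sketch of the apply-based idea mentioned above, with bare values wrapped in a list so every row ends up list-valued (<code>to_list</code> is an illustrative helper name):</p>
<pre class="lang-py prettyprint-override"><code>import ast

def to_list(value):
    # Parse strings that look like lists; wrap everything else in a one-element list
    if isinstance(value, str) and value.startswith('['):
        return ast.literal_eval(value)
    return [value]

dat['Category'] = dat['Category'].apply(to_list)
</code></pre>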
|
<python><python-3.x><pandas><lambda><eval>
|
2023-01-04 21:22:52
| 2
| 724
|
324
|
75,011,459
| 2,011,041
|
How to create a set (as a dict value) or add an element in case the set exists?
|
<p>I'm iterating over a list of objects that have three int attributes: <code>start</code>, <code>end</code>, <code>data</code>. And I'm trying to add all numbers from <code>start</code> up until <code>end</code> as keys to a dictionary, with the value being a set containing <code>data</code>. So, for example, if objects are:</p>
<pre><code>import types
objects_list = [types.SimpleNamespace( start=5, end=8, data=2 ),
types.SimpleNamespace( start=6, end=12, data=6 ),
types.SimpleNamespace( start=10, end=11, data=5 ),
types.SimpleNamespace( start=20, end=22, data=4 )]
</code></pre>
<p>for each object I want to add all numbers in range(start, end+1) as dictionary keys, which in case of the first object would be numbers from 5 to 8, each one with a set containing number 2. For the second object I'll add keys 9 to 12 with a set containing number 6, however I'll need to update keys 6, 7 and 8 to add number 6 to the set (which already contained 2). For the third object I'll need to update keys 10 and 11 to add element 5. And for the last object I'll just add keys 20 to 22 with a set containing number 4.</p>
<p>So the dictionary I want to get would have int:set pairs as follows:</p>
<pre><code>5: {2}
6: {2,6}
7: {2,6}
8: {2,6}
9: {6}
10: {6,5}
11: {6,5}
12: {6}
20: {4}
21: {4}
22: {4}
</code></pre>
<p>I've been trying to avoid a nested loop by iterating over the list of objects and updating the dict, like this:</p>
<pre><code>my_dict={}
for o in objects_list:
my_dict.update(
dict.fromkeys(range(o.start, o.end+1), set([o.data]))
)
</code></pre>
<p>but of course this creates a new set every time set() is called, overwriting old values.</p>
<p>So I'm not sure if there's a way to create the set in case it doesn't exist, or else add the element to it. Because what I'm getting right now is the following dictionary:
<code>{5: {2}, 6: {6}, 7: {6}, 8: {6}, 9: {6}, 10: {5}, 11: {5}, 12: {6}, 20: {4}, 21: {4}, 22: {4}}</code></p>
<p>Or maybe I'm just missing the whole point and should be doing this in a different way?</p>
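<p>A sketch of one common pattern: a <code>defaultdict(set)</code> (or <code>dict.setdefault</code>) creates the set on first access, so each key's set is shared across objects instead of being overwritten; the inner loop over the range is hard to avoid entirely:</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict

my_dict = defaultdict(set)
for o in objects_list:
    for key in range(o.start, o.end + 1):
        my_dict[key].add(o.data)   # creates the set on first use, then just adds
</code></pre>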
|
<python><dictionary>
|
2023-01-04 21:20:30
| 1
| 1,397
|
Floella
|
75,011,337
| 7,287,543
|
In-line substring exchange in a Python string
|
<p>I have a string that says <code>"Peter has a boat, but Steve has a car." </code>. I need to change that to <code>"Steve has a boat, but Peter has a car." </code> The only way I can think of doing this is an ugly three-step <code>replace()</code> with a placeholder:</p>
<pre><code>"Peter has a boat, but Steve has a car.".replace("Peter","placeholder").replace("Steve","Peter").replace("placeholder","Steve")
</code></pre>
<p>Is there a more concise and proper - and less awful and ugly - way to do this?</p>
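<p>A sketch of one single-pass alternative using a regular expression with a replacement function, so neither name is touched twice:</p>
<pre class="lang-py prettyprint-override"><code>import re

s = "Peter has a boat, but Steve has a car."
swap = {"Peter": "Steve", "Steve": "Peter"}
result = re.sub(r"Peter|Steve", lambda m: swap[m.group(0)], s)
# "Steve has a boat, but Peter has a car."
</code></pre>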
|
<python><string><replace>
|
2023-01-04 21:07:32
| 1
| 1,893
|
Yehuda
|
75,011,288
| 6,612,915
|
Python list calculation - this is not about slicing
|
<pre><code>list=[3, 1, -1]
list [-1]=list [-2]
print(list)
</code></pre>
<p>OK, my last post was closed, even though this is not about slicing. There are no semicolons. The question is an exam question from the Python Institute course. I do not understand why the result of this calculation is [3, 1, 1].</p>
<p>Can someone explain to me how this works?</p>
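<p>For reference, a small annotated sketch of what the assignment does (using a different variable name, since <code>list</code> shadows the built-in):</p>
<pre class="lang-py prettyprint-override"><code>lst = [3, 1, -1]
lst[-1] = lst[-2]   # lst[-2] is 1, so the last element (-1) is replaced by 1
print(lst)          # [3, 1, 1]
</code></pre>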
|
<python><list>
|
2023-01-04 21:03:06
| 2
| 464
|
Anna
|
75,011,155
| 740,553
|
How to get the time in ms between two calls in Python
|
<p>How does one properly get the number of millis between two successive calls? There's a bunch of different ways to get or profile time, and all of them seem to have problems:</p>
<ul>
<li><code>time.clock()</code> is deprecated</li>
<li><code>time.time()</code> is based on the system's clock to determine seconds since epoch and has <a href="https://docs.python.org/3/library/time.html#time.time" rel="nofollow noreferrer">unreliable leap second handling</a></li>
<li><code>time.process_time()</code> and <code>time.thread_time()</code> can't be used because they ignore time spent sleeping</li>
<li><code>time.monotonic()</code> only ticks 64 times per second</li>
<li><code>time.perf_counter()</code> has lossy nano precision encoded as float</li>
<li><code>time.perf_counter_ns()</code> has precise nano resolution but encoded as int, so can't track more than 4 seconds</li>
</ul>
<p>Of these, <code>time.perf_counter()</code> is the only option for millisecond precision (i.e. most use cases) but is there something better? (e.g. some kind of <code>perf_counter_ms</code> that yields an <code>int</code> but with a reliability interval of four million seconds instead of 4 seconds?)</p>
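<p>A minimal sketch of the <code>perf_counter</code>-based measurement described above, converting to milliseconds at the end:</p>
<pre class="lang-py prettyprint-override"><code>import time

t0 = time.perf_counter()
time.sleep(0.123)                        # stand-in for the work being timed
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"{elapsed_ms:.3f} ms")
</code></pre>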
|
<python><precision><timing>
|
2023-01-04 20:52:15
| 1
| 54,287
|
Mike 'Pomax' Kamermans
|
75,011,084
| 18,205,996
|
Generating new HTML rows for each Document from firestore in python Django
|
<p>I have a collection of documents in a Firestore database. I want to create a web form to display all the documents and their fields. I started by streaming all the documents using:</p>
<pre><code> docs = db.collection(u'users').stream()
</code></pre>
<p>and then appending all the docs IDs to a list and pass it to the HTML file as follows</p>
<pre><code>
def index(request):
myIDs =[]
db = firestore.Client()
docs = db.collection(u'users').stream()
for doc in docs:
myIDs.append(f'{doc.id}')
return render(request, 'index.html', {
"weight":"abc",
"firstRow":myIDs[0],
"secondRow":myIDs[1],
"thirdRow":myIDs[2],
"fourthRow":myIDs[3],
"fifthRow":myIDs[4],
"sixthRow":myIDs[5],
"seventhRow":myIDs[6],
})
</code></pre>
<p>After that, I created a very basic HTML code just to display the document IDs as follows:</p>
<pre><code><html>
<body>
<table border="1" cellpadding = "5" cellspacing="5">
<tr>
<td>{{ firstRow }}</td>
</tr>
<tr>
<td>{{ secondRow }}</td>
</tr>
<tr>
<td>{{ thirdRow }}</td>
</tr>
<tr>
<td>{{ fourthRow }}</td>
</tr>
<tr>
<td>{{ fifthRow }}</td>
</tr>
<tr>
<td>{{ sixthRow }}</td>
</tr>
<tr>
<td>{{ seventhRow }}</td>
</tr>
</table>
</body>
</html>
</code></pre>
<p>Up to this point I am just doing very basic stuff. But I am new to Django and Python for the web, so my question is: how can I make the HTML rows adapt to the number of documents?
For example, if I have 20 documents, it should generate 20 rows and pass each document ID to one row, and so on.</p>
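<p>A sketch of the usual pattern, stated here as an assumption about the intended design: pass the whole list to the template and loop over it there with <code>{% for doc_id in doc_ids %} ... {% endfor %}</code>, so the number of rows follows the number of documents:</p>
<pre class="lang-py prettyprint-override"><code>def index(request):
    db = firestore.Client()
    doc_ids = [doc.id for doc in db.collection(u'users').stream()]
    return render(request, 'index.html', {"doc_ids": doc_ids})
</code></pre>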
|
<python><html><django><google-cloud-firestore>
|
2023-01-04 20:44:48
| 1
| 597
|
taha khamis
|
75,011,003
| 12,350,966
|
How to get value_counts normalize=True with pandas agg?
|
<p>This gives me counts of the <code>my_other_var</code> values that are contained in 'my_var':</p>
<p><code>df.groupby('my_var').agg({'my_other_var' : 'value_counts'})</code></p>
<p>What I want is the percentage of 'my_var' that is comprised of 'my_other_var'</p>
<p>I tried this, but it does not seem to give me what I expect:</p>
<p><code>df.groupby('my_var').agg({'my_other_var' : lambda x: pd.value_counts(x, normalize=True)})</code></p>
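<p>A hedged sketch of one simpler form that avoids <code>agg</code> entirely; whether it matches the exact percentages wanted depends on the intended denominator:</p>
<pre class="lang-py prettyprint-override"><code># fraction of each my_var group made up by each my_other_var value
shares = df.groupby('my_var')['my_other_var'].value_counts(normalize=True)
</code></pre>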
|
<python><pandas>
|
2023-01-04 20:35:16
| 1
| 740
|
curious
|
75,010,992
| 833,208
|
Why is poetry not updating git+ssh dependency from private repo versioned using git tag?
|
<p>I have two python projects, lib and app, managed through poetry. Lib is on github in a private repo and the version in its pyproject.toml is 0.2.0. This is tagged in github with git tag v0.2.0.</p>
<p>Access to the gh private repo is enabled by adding my ssh public key to my gh account <a href="https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account" rel="nofollow noreferrer">using these instructions</a>. Lib is then made a dependency of app using</p>
<pre><code>poetry add git+ssh://git@github.com:org/lib.git#v0.2.0
</code></pre>
<p>in the app folder and this creates the dependency in pyproject.toml of app with the line</p>
<pre><code>lib = {git = "git@github.com:org/lib.git", rev = "v0.2.0"}
</code></pre>
<p>So far, so good.</p>
<p>Now I make a change to lib and the version increases to 0.2.1 in pyproject.toml. The code is pushed to gh and tagged with git tag v0.2.1. I try to update the dependecy in app using</p>
<pre><code>poetry update lib
</code></pre>
<p>in the app folder but it doesn't work. Neither does <code>poetry lock</code>.</p>
<p>As a workaround, if I issue the command</p>
<pre><code>poetry add git+ssh://git@github.com:org/lib.git#v0.2.1
</code></pre>
<p>then it updates without issue, however I would like poetry to check for updates with just</p>
<pre><code>poetry update
</code></pre>
<p>or</p>
<pre><code>poetry update lib
</code></pre>
<p>I have seen that this is possible for public repos (using https) and also (I think, but could be mistaken) where the git+ssh url is pinned to a branch, say <code>#latest</code>. However I cannot get it to work with a tagged version.</p>
<p>How to do this?</p>
|
<python><git><ssh><python-poetry>
|
2023-01-04 20:33:56
| 1
| 5,039
|
Robino
|
75,010,986
| 18,758,062
|
Avoid distorting squares/circles in Matplotlib figure with fixed figsize
|
<p>In a Matplotlib figure with aspect ratio <code>12:5</code>, a square is drawn on <code>ax</code> and 4 circles on <code>ax2</code>. The 4 circles are then redrawn with different colors.</p>
<p>However, the square looks like a rectangle, and the circles look like ellipses.</p>
<p><a href="https://i.sstatic.net/QMEZ7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QMEZ7.png" alt="enter image description here" /></a>
Is it possible to maintain the figure aspect ratio without distorting the square and circles?</p>
<pre><code>import matplotlib.pyplot as plt
def draw():
ax.set_xlim(-1, 7)
ax.set_ylim(-1, 7)
ax2.set_xlim(-1, 7)
ax2.set_ylim(-1, 7)
ax2.set_xticks([])
ax2.set_yticks([])
p = plt.Rectangle((0,0), 5, 5)
ax.add_patch(p)
ax2.add_patch(plt.Circle((0,0), 0.1, color="r"))
ax2.add_patch(plt.Circle((0,5), 0.1, color="r"))
ax2.add_patch(plt.Circle((5,0), 0.1, color="r"))
ax2.add_patch(plt.Circle((5,5), 0.1, color="r"))
def redraw():
ax2.clear()
ax2.set_xticks([])
ax2.set_yticks([])
ax2.set_xlim(-1, 7)
ax2.set_ylim(-1, 7)
ax2.add_patch(plt.Circle((0,0), 0.1, color="g"))
ax2.add_patch(plt.Circle((0,5), 0.1, color="r"))
ax2.add_patch(plt.Circle((5,0), 0.1, color="g"))
ax2.add_patch(plt.Circle((5,5), 0.1, color="r"))
fig, ax = plt.subplots(figsize=(12, 5)) # needs to be non-square
ax2 = ax.twinx().twiny()
draw()
redraw()
</code></pre>
|
<python><matplotlib>
|
2023-01-04 20:33:23
| 1
| 1,623
|
gameveloster
|
75,010,789
| 803,801
|
How to typecheck a property with a class decorator
|
<p>I'm struggling to get this code to properly type check with <code>mypy</code>:</p>
<pre><code>from typing import Any
class Decorator:
def __init__(self, func) -> None:
self.func = func
def __call__(self, *args: Any, **kwargs: Any) -> Any:
return self.func(*args, **kwargs)
class Foo:
@property
@Decorator
def foo(self) -> int:
return 42
f = Foo()
# reveal_type(f.foo)
a: int = f.foo
print(a)
</code></pre>
<p>This gives me this error, even though running it correctly prints <code>42</code> (note I've uncommented the <code>reveal_type</code> when running <code>mypy</code>):</p>
<pre><code>$ mypy test.py
test.py:17: note: Revealed type is "mypy_test.test.Decorator"
test.py:18: error: Incompatible types in assignment (expression has type "Decorator", variable has type "int") [assignment]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>This code properly type checks with <code>mypy</code>:</p>
<pre><code>from typing import Any
class Decorator:
def __init__(self, func) -> None:
self.func = func
def __call__(self, *args: Any, **kwargs: Any) -> Any:
return self.func(*args, **kwargs)
class Foo:
@property
@Decorator
def foo(self) -> int:
return 42
f = Foo()
# reveal_type(f.foo)
a: int = f.foo()
print(a)
</code></pre>
<p>But this obviously won't run correctly:</p>
<pre><code>$ python3 test.py
Traceback (most recent call last):
File "/home/gsgx/code/tmp/mypy_test/test.py", line 18, in <module>
a: int = f.foo()
TypeError: 'int' object is not callable
</code></pre>
<p>Note that if instead of using a class decorator I use a function decorator, everything works fine (runs correctly and no typecheck errors, <code>reveal_type</code> now shows <code>f.foo</code> is <code>Any</code>):</p>
<pre><code>def Decorator(f):
return f
</code></pre>
<p>How can I get this to work using a class decorator? I'm using Python 3.10.6 and mypy 0.991.</p>
|
<python><python-decorators><mypy><python-typing>
|
2023-01-04 20:12:50
| 2
| 12,299
|
gsgx
|
75,010,637
| 3,525,780
|
Replace and += are abysmally slow
|
<p>I've made the following code, which deciphers some byte arrays into "readable" text for a translation project.</p>
<pre><code>with open(Path(cur_file), mode="rb") as file:
contents = file.read()
file.close()
text = ""
for i in range(0, len(contents), 2): # Since it's encoded in UTF16 or similar, there should always be pairs of 2 bytes
byte = contents[i]
byte_2 = contents[i+1]
if byte == 0x00 and byte_2 == 0x00:
text+="[0x00 0x00]"
elif byte != 0x00 and byte_2 == 0x00:
#print("Normal byte")
if chr(byte) in printable:
text+=chr(byte)
elif byte == 0x00:
pass
else:
text+="[" + "0x{:02x}".format(byte) + "]"
else:
#print("Special byte")
text+="[" + "0x{:02x}".format(byte) + " " + "0x{:02x}".format(byte_2) + "]"
# Some dirty replaces - Probably slow but what do I know - It works
text = text.replace("[0x0e]n[0x01]","[USERNAME_1]") # Your name
text = text.replace("[0x0e]n[0x03]","[USERNAME_3]") # Your name
text = text.replace("[0x0e]n[0x08]","[TOWNNAME_8]") # Town name
text = text.replace("[0x0e]n[0x09]","[TOWNNAME_9]") # Town name
text = text.replace("[0x0e]n[0x0a]","[CHARNAME_A]") # Character name
text = text.replace("[0x0a]","[ENTER]") # Generic enter
lang_dict[emsbt_key_name] = text
</code></pre>
<p>While this code does work and produce output like:</p>
<pre><code>Cancel[0x00 0x00]
</code></pre>
<p>And more complex ones, I've stumbled upon a performance problem when I loop it through 60000 files.</p>
<p>I've read a couple of questions regarding += with large strings and people say that <em>join</em> is preferred with large strings. <em>However</em>, even with strings of just under 1000 characters, a single file takes about 5 seconds to store, which is a lot.</p>
<p>I almost feel like it starts fast and gets progressively slower.</p>
<p>What would be a way to optimize this code? I feel it's also abysmal.</p>
<p>Any help or clue is greatly appreciated.</p>
<p>EDIT: Added cProfile output:</p>
<pre><code> 261207623 function calls (261180607 primitive calls) in 95.364 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
284/1 0.002 0.000 95.365 95.365 {built-in method builtins.exec}
1 0.000 0.000 95.365 95.365 start.py:1(<module>)
1 0.610 0.610 94.917 94.917 emsbt_to_json.py:21(to_json)
11179 11.807 0.001 85.829 0.008 {method 'index' of 'list' objects}
62501129 49.127 0.000 74.146 0.000 pathlib.py:578(__eq__)
125048857 18.401 0.000 18.863 0.000 pathlib.py:569(_cparts)
63734640 6.822 0.000 6.828 0.000 {built-in method builtins.isinstance}
160958 0.183 0.000 4.170 0.000 pathlib.py:504(_from_parts)
160958 0.713 0.000 3.942 0.000 pathlib.py:484(_parse_args)
68959 0.110 0.000 3.769 0.000 pathlib.py:971(absolute)
160959 1.600 0.000 2.924 0.000 pathlib.py:56(parse_parts)
91999 0.081 0.000 1.624 0.000 pathlib.py:868(__new__)
68960 0.028 0.000 1.547 0.000 pathlib.py:956(rglob)
68960 0.090 0.000 1.518 0.000 pathlib.py:402(_select_from)
68959 0.067 0.000 1.015 0.000 pathlib.py:902(cwd)
37 0.001 0.000 0.831 0.022 __init__.py:1(<module>)
937462 0.766 0.000 0.798 0.000 pathlib.py:147(splitroot)
11810 0.745 0.000 0.745 0.000 {method '__exit__' of '_io._IOBase' objects}
137918 0.143 0.000 0.658 0.000 pathlib.py:583(__hash__)
</code></pre>
<p>EDIT: Upon further inspection with line_profiler, it turns out that the culprit isn't even in the above code. It's well outside that code, where I search over the indexes to see if there is a +1 file (looking ahead of the index). This apparently consumes a whole lot of CPU time.</p>
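<p>Even if the profiled culprit turned out to be elsewhere, the decoding loop itself can be rewritten around a list plus a single <code>join</code>, which avoids repeated string reallocation; a sketch mirroring the original logic (<code>printable</code> is assumed to be defined as in the original script):</p>
<pre class="lang-py prettyprint-override"><code>parts = []
for i in range(0, len(contents), 2):
    byte, byte_2 = contents[i], contents[i + 1]
    if byte == 0x00 and byte_2 == 0x00:
        parts.append("[0x00 0x00]")
    elif byte_2 == 0x00:
        parts.append(chr(byte) if chr(byte) in printable
                     else "[0x{:02x}]".format(byte))
    else:
        parts.append("[0x{:02x} 0x{:02x}]".format(byte, byte_2))
text = "".join(parts)
</code></pre>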
|
<python><string><replace>
|
2023-01-04 19:55:46
| 3
| 1,382
|
Fusseldieb
|
75,010,474
| 11,370,582
|
KMeans Clustering - Getting "malloc" error
|
<p>I'm trying to run kmeans clustering on a dataset with ~120,000 datapoints but I am getting a 'malloc' error:</p>
<pre><code>python3(2830,0x70000d545000) malloc: *** error for object 0xc0: pointer being freed was not allocated
python3(2830,0x70000e54b000) malloc: *** error for object 0xc0: pointer being freed was not allocated
python3(2830,0x70000dd48000) malloc: *** error for object 0xc0: pointer being freed was not allocated
python3(2830,0x70000ed4e000) malloc: *** error for object 0xc0: pointer being freed was not allocated
python3(2830,0x70000d545000) malloc: *** set a breakpoint in malloc_error_break to debug
python3(2830,0x70000dd48000) malloc: *** set a breakpoint in malloc_error_break to debug
python3(2830,0x70000e54b000) malloc: *** set a breakpoint in malloc_error_break to debug
python3(2830,0x70000ed4e000) malloc: *** set a breakpoint in malloc_error_break to debug
python3(2830,0x70000f551000) malloc: *** error for object 0xc0: pointer being freed was not allocated
python3(2830,0x70000f551000) malloc: *** set a breakpoint in malloc_error_break to debug
[1] 2830 abort python3 kmeans.py
</code></pre>
<p>The data is in an ndarray formatters as such:</p>
<pre><code>[[2.06e+01 1.03e+00]
[2.06e+01 1.70e-01]
[2.06e+01 1.03e+00]
...
[2.42e+01 2.30e-01]
[2.36e+01 1.00e-02]
[2.41e+01 1.40e-01]]
</code></pre>
<p>I've ran various supervised models on the same dataset without any issue. My code here:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
df = pd.read_csv('data/final_dataset.csv')
df = df[df['Episode_Count'].notna()]
#create X coords
X = df[['BMI','Episode_Count']].values
plt.scatter(X[:,0],X[:,1])
plt.show()
#train model
k = 5
kmeans = KMeans(n_clusters=k, random_state=42)
y_pred = kmeans.fit_predict(X)
print(y_pred)
</code></pre>
<p>I know this is not reproducible but that is difficult with such a large dataset. I'm happy to find a way to share training data if that would help.</p>
<p>Has anybody run into this issue before?</p>
|
<python><machine-learning><scikit-learn><malloc><k-means>
|
2023-01-04 19:37:45
| 0
| 904
|
John Conor
|
75,010,391
| 880,783
|
Why do imports behind `if False` generate local variables?
|
<p>The following code works as expected:</p>
<pre class="lang-py prettyprint-override"><code>from os import name
def function():
# if False:
# from os import name
print(name)
function()
</code></pre>
<p>However, when uncommenting the <code>if False</code> block, the code stops working:</p>
<blockquote>
<p>UnboundLocalError: cannot access local variable <code>name</code> where it is not associated with a value</p>
</blockquote>
<p>I wonder why that is. I have since found <a href="https://github.com/python/cpython/issues/79250#issuecomment-1093803068" rel="nofollow noreferrer">https://github.com/python/cpython/issues/79250#issuecomment-1093803068</a> which says that this is "documented behavior" - I wonder <em>where</em> this is documented, though.</p>
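<p>For illustration, a hedged analogous example: the compiler decides which names are local to a function at compile time, so any binding statement in the body (an assignment or an import), even inside an <code>if False:</code> block that never runs, makes the name local to the function:</p>
<pre class="lang-py prettyprint-override"><code>x = 1

def f():
    print(x)      # UnboundLocalError: the assignment below makes x local to f
    if False:
        x = 2     # never executed, but still a binding seen by the compiler

f()
</code></pre>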
|
<python><import><scope>
|
2023-01-04 19:31:43
| 1
| 6,279
|
bers
|
75,010,371
| 20,266,647
|
An error occurred while calling o91.parquet. : java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext
|
<p>I used this simple python code with focus on spark.sql:</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('jist-test').getOrCreate()
myDF = spark.read.parquet("v3io://projects/fs-demo-example/FeatureStore/vct_all_other_basic")
myDF.show(3)
</code></pre>
<p>and I got this exception:</p>
<pre><code>Py4JJavaError Traceback (most recent call last)
<ipython-input-28-fd92fbf397a3> in <module>
1 spark = SparkSession.builder.appName('jist-test').getOrCreate()
----> 2 myDF = spark.read.parquet("v3io://projects/fs-demo-example/FeatureStore/vct_all_other_basic")
/spark/python/pyspark/sql/readwriter.py in parquet(self, *paths, **options)
299 int96RebaseMode=int96RebaseMode)
300
--> 301 return self._df(self._jreader.parquet(_to_seq(self._spark._sc, paths)))
302
303 def text(self, paths, wholetext=False, lineSep=None, pathGlobFilter=None,
/spark/python/lib/py4j-0.10.9.3-src.zip/py4j/java_gateway.py in __call__(self, *args)
1320 answer = self.gateway_client.send_command(command)
1321 return_value = get_return_value(
-> 1322 answer, self.gateway_client, self.target_id, self.name)
1323
1324 for temp_arg in temp_args:
/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
109 def deco(*a, **kw):
110 try:
--> 111 return f(*a, **kw)
112 except py4j.protocol.Py4JJavaError as e:
113 converted = convert_exception(e.java_exception)
/spark/python/lib/py4j-0.10.9.3-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o91.parquet.
: java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java
...
</code></pre>
<p>It happens from time to time in pyspark version 3.2.1. Do you know how to avoid this error?</p>
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2023-01-04 19:29:34
| 1
| 1,390
|
JIST
|
75,010,343
| 12,242,085
|
How to make dummy columns only for variables with an appropriate number of categories and a sufficient category share in the column?
|
<p>I have DataFrame in Python Pandas like below (both types of columns: numeric and object):</p>
<p>data types:</p>
<ul>
<li>COL1 - numeric</li>
<li>COL2 - object</li>
<li>COL3 - object</li>
</ul>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>COL1</th>
<th>COL2</th>
<th>COL3</th>
<th>...</th>
<th>COLn</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>A</td>
<td>Y</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>222</td>
<td>A</td>
<td>Y</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>333</td>
<td>B</td>
<td>Z</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>444</td>
<td>C</td>
<td>Z</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>555</td>
<td>D</td>
<td>P</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>And I need to do dummy coding (<code>pandas.get_dummies()</code>) only on categorical variables which have:</p>
<ol>
<li>max 3 categories in variable</li>
<li>The minimum percentage of the category's share of the variable is 0.4</li>
</ol>
<p>So, for example:</p>
<ol>
<li>COL2 does not meet requirement no. 1 (it has 4 different categories: A, B, C, D), so remove it</li>
<li>In COL3, category "P" does not meet requirement no. 2 (its share is 1/5 = 0.2), so use only categories "Y" and "Z" for dummy coding</li>
</ol>
<p>So, as a result I need something like below:</p>
<pre><code>COL1 | COL3_Y | COL3_Z | ... | COLn
-----|--------|--------|------|------
111 | 1 | 0 | ... | ...
222 | 1 | 0 | ... | ...
333 | 0 | 1 | ... | ...
444 | 0 | 1 | ... | ...
555 | 0 | 0 | ... | ...
</code></pre>
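<p>A sketch of one way to apply both rules, under the assumption that every object-typed column should be checked; <code>nunique()</code> handles requirement 1 and <code>value_counts(normalize=True)</code> handles requirement 2:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

max_categories, min_share = 3, 0.4
object_cols = df.select_dtypes(include="object").columns
out = df.drop(columns=object_cols).copy()
for col in object_cols:
    if df[col].nunique() > max_categories:
        continue                                   # requirement 1: drop the column entirely
    shares = df[col].value_counts(normalize=True)
    keep = shares[shares >= min_share].index       # requirement 2: keep frequent categories
    dummies = pd.get_dummies(df[col])[keep]
    dummies.columns = [f"{col}_{c}" for c in keep]
    out = pd.concat([out, dummies], axis=1)
</code></pre>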
|
<python><pandas><categories><one-hot-encoding><dummy-variable>
|
2023-01-04 19:27:06
| 1
| 2,350
|
dingaro
|
75,010,319
| 11,644,523
|
Python extract values in JSON array to form a new key-value pair
|
<p>Given an array of JSON objects as:</p>
<pre><code>arr=[{"id": "abc", "value": "123"}, {"id": "xyz", "value": "456"}]
</code></pre>
<p>I would like to output a single JSON object like:</p>
<pre><code>new_arr={"abc":123,"xyz":456}
</code></pre>
<p>Currently I can extract the elements like <code>arr[0]['id']</code> but I am wondering what's the one-liner or better way to form the output.</p>
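<p>A sketch of one such one-liner, assuming the values should become integers as in the expected output:</p>
<pre class="lang-py prettyprint-override"><code>new_arr = {d["id"]: int(d["value"]) for d in arr}
# {'abc': 123, 'xyz': 456}
</code></pre>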
|
<python><json>
|
2023-01-04 19:24:34
| 4
| 735
|
Dametime
|
75,010,297
| 1,410,829
|
Accessing txt file while using module function
|
<p>I'm attempting to access a .txt file from a module function I made. The problem is I don't think the function can access the text file. The file has data in it, and the code goes through the motions of trying to populate the output. Do I need an <code>__init__.py</code>? I'm getting no output in the data.csv file that I'm attempting to write, but my data.txt file has data in it, so I'm not sure what I'm doing wrong. When I run the file on its own, not as a module, it runs perfectly.</p>
<p>The call:</p>
<pre><code>import csv_generator
#choose to print out in csv or json
def printData():
answer = input('Please choose 1 for csv or 2 for jsson output file: ')
if(answer == '1'):
csv_generator.csv_write()
elif(answer == '2'):
json_writer.json_write()
else:
printData()
printData()
</code></pre>
<p>The definition:</p>
<pre><code>import csv
def csv_write():
#files to read from
data_read = open('data.txt', 'r')
#process data and write to csv file
with open('data.csv', 'w', newline='') as f:
thewriter = csv.writer(f)
thewriter.writerow(['Last', 'First', 'Email', 'Address', 'Phone'])
for x in data_read:
y = x.split(',')
thewriter.writerow([y[0].strip(), y[1].strip(), y[2].strip(), y[3].strip(), y[4].strip()])
</code></pre>
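<p>A hedged guess: when <code>csv_generator</code> is imported from elsewhere, the relative paths 'data.txt' and 'data.csv' resolve against the caller's working directory, not against the module. A sketch that anchors both paths to the module's own folder (the assumption being that the data files live next to <code>csv_generator.py</code>):</p>
<pre class="lang-py prettyprint-override"><code>import csv
from pathlib import Path

DATA_DIR = Path(__file__).resolve().parent   # folder containing csv_generator.py

def csv_write():
    with open(DATA_DIR / 'data.txt', 'r') as data_read, \
         open(DATA_DIR / 'data.csv', 'w', newline='') as f:
        thewriter = csv.writer(f)
        thewriter.writerow(['Last', 'First', 'Email', 'Address', 'Phone'])
        for x in data_read:
            thewriter.writerow([part.strip() for part in x.split(',')])
</code></pre>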
|
<python><text><module>
|
2023-01-04 19:22:06
| 2
| 745
|
John Verber
|
75,010,219
| 1,601,580
|
how do I pip install something in editable mode using a requirements.txt file?
|
<p>I tried:</p>
<pre><code>pip install -er meta-dataset/requirements.txt
</code></pre>
<p>doesn't work:</p>
<pre><code>(meta_learning) brandomiranda~/diversity-for-predictive-success-of-meta-learning ❯ pip install -er meta-dataset/requirements.txt
ERROR: Invalid requirement: 'meta-dataset/requirements.txt'
Hint: It looks like a path. The path does exist. The argument you provided (meta-dataset/requirements.txt) appears to be a requirements file. If that is the case, use the '-r' flag to install the packages specified within it.
</code></pre>
<p>how to fix it?</p>
<p>related: <a href="https://stackoverflow.com/questions/51010251/what-does-e-in-requirements-txt-do">What does `-e .` in requirements.txt do?</a></p>
|
<python><pip>
|
2023-01-04 19:14:54
| 1
| 6,126
|
Charlie Parker
|
75,009,890
| 11,956,484
|
Round each column in df1 based on corresponding column in df2
|
<p>df1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Li</th>
<th>Be</th>
<th>Sc</th>
<th>V</th>
<th>Cr</th>
<th>Mn</th>
</tr>
</thead>
<tbody>
<tr>
<td>20.1564</td>
<td>-0.0011</td>
<td>-0.1921</td>
<td>0.0343</td>
<td>0.5729</td>
<td>0.1121</td>
</tr>
<tr>
<td>19.2871</td>
<td>-0.0027</td>
<td>0.0076</td>
<td>0.066</td>
<td>0.5196</td>
<td>0.0981</td>
</tr>
<tr>
<td>0.8693</td>
<td>0.0016</td>
<td>0.1997</td>
<td>0.0317</td>
<td>0.0533</td>
<td>0.014</td>
</tr>
</tbody>
</table>
</div>
<p>df2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Li</th>
<th>Be</th>
<th>Sc</th>
<th>V</th>
<th>Cr</th>
<th>Mn</th>
</tr>
</thead>
<tbody>
<tr>
<td>2.0</td>
<td>0.050</td>
<td>0.3</td>
<td>0.111</td>
<td>0.50</td>
<td>0.40</td>
</tr>
</tbody>
</table>
</div>
<p>I need to round the columns in df1 to the same number of decimals places as the corresponding columns in df2. The issue is that each df contains 40+ columns all needing to be rounded to a specific number of decimal places.</p>
<p>I can do this column by column like</p>
<pre><code>df1["Li"]=df1["Li"].round(1)
df1["Be"]=df1["Be"].round(3)
etc
</code></pre>
<p>Is there an easier way to round all the columns in df1 based on the number of decimals in df2?</p>
<p>desired output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Li</th>
<th>Be</th>
<th>Sc</th>
<th>V</th>
<th>Cr</th>
<th>Mn</th>
</tr>
</thead>
<tbody>
<tr>
<td>20.2</td>
<td>-0.001</td>
<td>-0.2</td>
<td>0.034</td>
<td>0.57</td>
<td>0.11</td>
</tr>
<tr>
<td>19.3</td>
<td>-0.003</td>
<td>0</td>
<td>0.066</td>
<td>0.52</td>
<td>0.1</td>
</tr>
<tr>
<td>0.9</td>
<td>0.002</td>
<td>0.2</td>
<td>0.032</td>
<td>0.05</td>
<td>0.01</td>
</tr>
</tbody>
</table>
</div>
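<p>A sketch of one approach: derive the number of decimals per column from df2 and pass the resulting mapping to <code>DataFrame.round</code>, which accepts a dict. One caveat: counting digits from a parsed float loses trailing zeros (0.050 prints as 0.05), so this assumes df2 is available as strings, e.g. read with <code>pd.read_csv(..., dtype=str)</code>:</p>
<pre class="lang-py prettyprint-override"><code>def decimals(text):
    # number of digits after the decimal point in the reference value
    return len(text.split(".")[1]) if "." in text else 0

rounding = {col: decimals(df2[col].iloc[0]) for col in df2.columns}
df1_rounded = df1.round(rounding)
</code></pre>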
|
<python><pandas>
|
2023-01-04 18:45:18
| 2
| 716
|
Gingerhaze
|
75,009,815
| 4,418,481
|
Displaying Dash plot without using Web
|
<p>I have a large dataset of around 500 parquet files with about 42 million samples.</p>
<p>In order to read those files I'm using <code>Dask</code> which does a great job.</p>
<p>In order to display them, I downsampled the Dask DataFrame in the most basic way (something like dd[::200]) and plotted it using <code>Plotly</code>.</p>
<p>So far everything works great.</p>
<p>Now, I'd like to have an interactive figure on one side, but I don't want it to open a web tab/to use jupyter/anything of this kind. I just want it to create a figure as <code>matplotlib</code> does.</p>
<p>In order to do so, I found a great solution that uses <code>QWebEngineView</code>:</p>
<p><a href="https://stackoverflow.com/questions/53570384/plotly-how-to-make-a-standalone-plot-in-a-window">plotly: how to make a standalone plot in a window?</a></p>
<p>My simplified code looks something like this:</p>
<pre><code>import dask.dataframe as dd
import time
import plotly.graph_objects as go
def show_in_window(fig):
import sys, os
import plotly.offline
from PyQt5.QtCore import QUrl
from PyQt5.QtWebEngineWidgets import QWebEngineView
from PyQt5.QtWidgets import QApplication
plotly.offline.plot(fig, filename='temp.html', auto_open=False)
app = QApplication(sys.argv)
web = QWebEngineView()
file_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "temp.html"))
web.load(QUrl.fromLocalFile(file_path))
web.show()
sys.exit(app.exec_())
def create_plot(df,x_param,y_param):
fig.add_trace(go.Scattergl(x = df[x_param] , y = df[y_param], mode ='markers'))
fig = go.Figure()
ddf = dd.read_parquet("results_parq/*.parquet")
create_plot(ddf, 't', 'reg')
fig.update_layout(showlegend=False)
show_in_window(fig)
</code></pre>
<p><strong>QUESTION</strong>:</p>
<p>Since the dataset is large and I want to use a smarter downsample method, I would like to use a library called <code>plotly-resampler</code> (<a href="https://predict-idlab.github.io/plotly-resampler/getting_started.html#how-to-use" rel="nofollow noreferrer">https://predict-idlab.github.io/plotly-resampler/getting_started.html#how-to-use</a>) which dynamically changes the amount of samples based on the zooming level. However, it uses Dash.</p>
<p>I thought to do something like:</p>
<pre><code>fig = FigureResampler(go.Figure())
ddf = dd.read_parquet("results_parq/*.parquet")
create_plot(ddf, 't', 'reg')
show_in_window(fig)
</code></pre>
<p>This creates a plot with the smart resample but it does not change its resampling when the zoom changes (it basically stuck on its initial sampling).</p>
<p>Is there any solution that might give me a Dash figure in a separate window instead of a tab and yet to have the functionalities of Dash?</p>
<p>Thank you</p>
|
<python><plotly-dash><plotly>
|
2023-01-04 18:38:30
| 1
| 1,859
|
Ben
|
75,009,761
| 10,480,181
|
Do we need to run load_dotenv() in every module?
|
<p>I have a <code>.env</code> defined with the following content:</p>
<pre><code>env=loc
</code></pre>
<p>I have three python module that make use of this variable.</p>
<pre><code>├── __init__.py
├── cli.py
|── settings.py
├── commands
│ ├── __init__.py
│ └── output.py
</code></pre>
<p><code>settings.py:</code></p>
<pre><code>from dotenv import load_dotenv
load_dotenv()
if not os.getenv("env"):
raise TypeError("'env' variable not found in .env file")
</code></pre>
<p><code>output.py:</code></p>
<pre><code>import os
def output():
    return os.getenv("env")
</code></pre>
<p><code>cli.py</code>:</p>
<pre><code>import settings
from commands.output import output
import os
CURR_ENV = getenv("env")
print(CURR_ENV)
print(output())
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>loc
None
</code></pre>
<p>Why is the output from <code>output.py</code> not <em><strong>loc</strong></em>? The environment variables were loaded when load_dotenv() was run for the first time.</p>
<p>Do I have to run load_dotenv() every time I need to access the environment variables?</p>
|
<python><python-3.x><dotenv>
|
2023-01-04 18:32:20
| 1
| 883
|
Vandit Goel
|
75,009,474
| 19,633,374
|
How to write if condition in when condition - PySpark
|
<p>I am trying to add an if condition inside <code>F.when</code> for a PySpark column. My code:</p>
<pre><code>df = df.withColumn("column_fruits",F.when(F.col('column_fruits') == "Berries"
if("fruit_color")== "red":
return "cherries"
elif("fruit_color") == "pink":
return "strawberries"
else:
return "balackberries").otherwise("column_fruits")
</code></pre>
<p>I want to first pick out the "Berries" rows and change the fruit name according to its color, while all the remaining fruits stay the same.
Can anyone tell me if this is a valid way of writing <code>withColumn</code> code?</p>
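<p>For reference, this is the kind of nested <code>when</code>/<code>otherwise</code> chaining I think I am trying to express (a rough, untested sketch on my side; I am not sure it is valid, hence the question):</p>
<pre><code>from pyspark.sql import functions as F

df = df.withColumn(
    "column_fruits",
    F.when(
        F.col("column_fruits") == "Berries",
        F.when(F.col("fruit_color") == "red", "cherries")
         .when(F.col("fruit_color") == "pink", "strawberries")
         .otherwise("blackberries"),
    ).otherwise(F.col("column_fruits")),
)
</code></pre>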
|
<python><dataframe><apache-spark><pyspark>
|
2023-01-04 18:01:27
| 1
| 642
|
Bella_18
|
75,009,431
| 11,462,274
|
Maintain a default of True/False for each date
|
<p>During the day, new investment possibilities are registered, but the results (<code>lay</code> column) are only registered at midnight each day.</p>
<p>So let's assume this <code>CSV</code>:</p>
<pre class="lang-none prettyprint-override"><code>clock_now,competition,market_name,lay
2022/12/30,A,B,-1
2022/12/31,A,B,1.28
2023/01/01,A,B,-1
2023/01/02,A,B,1
2023/01/03,A,B,1
2023/01/04,A,B,
2023/01/04,A,B,
2023/01/04,A,B,
</code></pre>
<p>Until yesterday, <code>2023/01/03</code>, the sum of the lines that have the value <code>A</code> in <code>competition</code> and <code>B</code> in <code>market_name</code>, was <code>+1.28</code></p>
<p>I only invest if it is above <code>0</code>, so during today, every time this combination of values comes, the answer will be <code>True</code> to invest.</p>
<p>At the end of the day, when the lay values are registered, I look at the total result:</p>
<pre class="lang-none prettyprint-override"><code>clock_now,competition,market_name,lay
2022/12/30,A,B,-1
2022/12/31,A,B,1.28
2023/01/01,A,B,-1
2023/01/02,A,B,1
2023/01/03,A,B,1
2023/01/04,A,B,-1
2023/01/04,A,B,-1
2023/01/04,A,B,-1
</code></pre>
<p>End of the day: <code>-1,72</code></p>
<p>This means that tomorrow, whenever that same combination of values appears in the columns, I will not invest at all, because the cumulative total, which only counts the values up to the previous day, is now negative.</p>
<p>I'm trying to create a column to show where it was True and where it was False:</p>
<pre class="lang-python prettyprint-override"><code>df = pd.read_csv('example.csv')
combinations = [['market_name', 'competition']]
for cbnt in combinations:
df['invest'] = (df.groupby(cbnt)['lay']
.apply(lambda s: s.cumsum().shift())
.gt(df['lay'])
)
df['cumulative'] = (df.groupby(cbnt)['lay']
.apply(lambda s: s.cumsum().shift())
)
print(df[['clock_now','invest','cumulative']])
</code></pre>
<p>But the result is this:</p>
<pre class="lang-none prettyprint-override"><code> clock_now invest cumulative
0 2022/12/30 False NaN
1 2022/12/31 False -1.00
2 2023/01/01 True 0.28
3 2023/01/02 False -0.72
4 2023/01/03 False 0.28
5 2023/01/04 True 1.28
6 2023/01/04 True 0.28
7 2023/01/04 True -0.72
</code></pre>
<p>The expected result would be this:</p>
<pre class="lang-none prettyprint-override"><code> clock_now invest cumulative
0 2022/12/30 False NaN
1 2022/12/31 False -1.00
2 2023/01/01 True 0.28
3 2023/01/02 False -0.72
4 2023/01/03 True 0.28
5 2023/01/04 True 1.28
6 2023/01/04 True 0.28
7 2023/01/04 True -1.72
</code></pre>
<p>How should I proceed so that <code>cumsum</code> can understand that attention must be paid to maintaining a daily pattern according to the results of previous days?</p>
<p>Example Two:</p>
<pre class="lang-none prettyprint-override"><code>clock_now,competition,market_name,lay
2022/08/09,A,B,-1.0
2022/08/12,A,B,1.28
2022/09/07,A,B,-1.0
2022/10/15,A,B,1.0
2022/10/15,A,B,-1.0
2022/11/20,A,B,1.0
</code></pre>
<p>Note that on <code>2022/10/15</code> it delivers one <code>False</code> and one <code>True</code>, so it is not actually tracking by date, which is how I want it to work:</p>
<pre class="lang-none prettyprint-override"><code> clock_now invest cumulative
0 2022/08/09 False NaN
1 2022/08/12 False -1.00
2 2022/09/07 True 0.28
3 2022/10/15 False -0.72
4 2022/10/15 True 0.28
5 2022/11/20 False -0.72
</code></pre>
<p>The correct behaviour would be either all <code>False</code> or all <code>True</code> for rows with equal dates, like this:</p>
<pre class="lang-none prettyprint-override"><code> clock_now invest cumulative
0 2022/08/09 False NaN
1 2022/08/12 False -1.00
2 2022/09/07 True 0.28
3 2022/10/15 False -0.72
4 2022/10/15 False 0.28
5 2022/11/20 False -0.72
</code></pre>
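<p>For completeness, this is the direction I have been experimenting with on my side (an untested sketch: compute the total per day first, take the running total up to the previous day, and map it back onto the rows), in case it clarifies the behaviour I am after:</p>
<pre><code># total 'lay' per combination and per day
daily = df.groupby(['competition', 'market_name', 'clock_now'], as_index=False)['lay'].sum()

# running total up to (and excluding) the current day
grp = daily.groupby(['competition', 'market_name'])['lay']
daily['cumulative'] = grp.cumsum() - daily['lay']

# map the per-day value back to every row of that day
df = df.merge(daily[['competition', 'market_name', 'clock_now', 'cumulative']],
              on=['competition', 'market_name', 'clock_now'], how='left')
df['invest'] = df['cumulative'] > 0
</code></pre>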
|
<python><pandas><dataframe><group-by><cumsum>
|
2023-01-04 17:57:23
| 1
| 2,222
|
Digital Farmer
|
75,009,352
| 9,783,831
|
Build python package using swig and setuptools
|
<p>I have a dll file that I am wrapping using swig.
Then I want to create a package to distribute the python extension.</p>
<p>The folder structure looks like:</p>
<pre><code>- myPackage
- __init__.py
- my_interface.i
- my_header.hpp
- my_dll.dll
- my_lib.lib
- setup.py
- pyproject.toml
- MANIFEST.in
</code></pre>
<p>My <code>setup.py</code> file looks like:</p>
<pre><code>from setuptools import setup, Extension, find_packages
my_module = Extension('_MyModule',
sources=['myPackage/my_interface_wrap.cpp'],
library_dirs=['myPackage'],
libraries= ["my_lib"],
include_dirs=['myPackage']
)
# my_module = Extension('_MyModule',
# sources=['myPackage/my_interface.i'],
# swig_opts=["-c++", "-ImyPackage"],
# library_dirs=['myPackage'],
# libraries= ["my_lib"],
# include_dirs=['myPackage']
# )
setup (name = 'MyPackage',
version = '1.0',
author = "",
description = """SWIG wrapper""",
ext_modules = [my_module],
py_modules = ["myModule.myModule"],
package=["myPackage"],
package_data={"myPackage": ["my_dll.dll"]},
)
</code></pre>
<p>I would like <code>setuptools</code> to run swig, create the Python module and build the extension (create the <code>.pyd</code> file). Then I would like to have everything in an archive so I can publish the package.</p>
<p>To build the package I use: <code>python -m build</code></p>
<p><strong>The problem I have is that with the non commented extension definition, the extension is not packaged in the archive.</strong> I have no error message from the build.</p>
<p>In order for the build to work, I need to first run swig and then build the package.
In that case I use the commented extension definition.</p>
<pre><code> swig -c++ -python myPackage/my_interface.i
python -m build
</code></pre>
<p>Something must be wrong in the <code>setup.py</code> file.</p>
<p>There is no problem in the <code>.i</code> file, since the wrapper works when I perform all the steps manually.</p>
|
<python><setuptools><swig>
|
2023-01-04 17:49:23
| 0
| 407
|
Thombou
|
75,009,287
| 2,410,605
|
Selenium Python Am I selecting a drop down item the best way?
|
<p>I'm working on my first-ever Selenium Python project and am baby stepping through it. I have an inconsistent issue and want to see if there's a better way to code it.</p>
<p>I need to select "Active" from a drop-down menu. I'm selecting the drop down input element and clicking on it. This will open a child element of the input with the list items in it. Here's where the problem lies...sometimes the drop down element stays open and I can select "Active" and everything works great. Sometimes, however, the drop down element opens and closes immediately before I can pick "Active". I'm currently trying to sleep for 3 seconds after clicking on the input element, hoping that will solve the issue. It seemed to for a little while, but now then it will revert back to closing immediately.</p>
<p>So is there a better way to open this drop down box and select active?</p>
<pre><code>#select the status drop down box and click to open it, wait 3 seconds to make sure it loads
#and select the 'A'ctive list item and click to select it
WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "input-w_112"))).click()
time.sleep(3)
WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "w_113"))).find_element(By.XPATH, '//*[@combovalue="A"]').click()
</code></pre>
<p><a href="https://i.sstatic.net/hv5u6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hv5u6.png" alt="enter image description here" /></a></p>
<p>Sometimes the list items box will stay open, sometimes it closes immediately.</p>
<p><a href="https://i.sstatic.net/Xl1ID.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xl1ID.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2023-01-04 17:44:42
| 1
| 657
|
JimmyG
|
75,008,898
| 5,088,513
|
Count python dataclass instances
|
<p>What is the best way to implement this kind of instance counter with a dataclass?</p>
<pre><code>class geeks:
# this is used to print the number
# of instances of a class
counter = 0
# constructor of geeks class
def __init__(self,name):
self.name = name
# increment
geeks.counter += 1
</code></pre>
<p>with dataclass:</p>
<pre><code>@dataclass
class geeks:
name : str
</code></pre>
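<p>The closest I have come up with is the sketch below, using <code>ClassVar</code> and <code>__post_init__</code>, but I am not sure it is the idiomatic way, hence the question:</p>
<pre><code>from dataclasses import dataclass
from typing import ClassVar

@dataclass
class geeks:
    # counts how many instances have been created; not an instance field
    counter: ClassVar[int] = 0
    name: str

    def __post_init__(self):
        geeks.counter += 1
</code></pre>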
|
<python><python-dataclasses>
|
2023-01-04 17:10:18
| 0
| 633
|
Sylvain Page
|
75,008,868
| 8,864,226
|
Python: Improve this by making the algorithm less complex
|
<p>I have a working block of code, but it seems there should be a more efficient algorithm, presumably with fewer loops or using a library/module.</p>
<p>This example version of the code takes a list of strings, reverse sort by <code>len()</code>, then build a new list:</p>
<pre class="lang-py prettyprint-override"><code>gs = ["catan", "ticket to ride", "azul"]
mg = {}
for i in range(len(gs)):
mg[i] = len(gs[i])
popularity = {k: v for k, v in sorted(mg.items(), key=lambda v: v[1], reverse=True)}
tg = []
for game in popularity.keys():
tg.append(gs[game])
</code></pre>
<p>The production code does not set <code>mg[i]</code> to <code>len()</code> and the elements in the list are not necessarily strings, but the algorithm works otherwise.</p>
<p>For this, the same output is:</p>
<pre><code>['ticket to ride', 'catan', 'azul']
</code></pre>
<p><strong>How can I improve the efficiency of this algorithm?</strong></p>
|
<python><algorithm><sorting>
|
2023-01-04 17:07:28
| 1
| 6,097
|
James Risner
|
75,008,833
| 893,866
|
How to transfer TRC20 token using tronpy?
|
<p>I want to send USDT TRC20 tokens using <a href="https://tronpy.readthedocs.io/" rel="nofollow noreferrer">tronpy</a>, while i succeded to transfer TRX, same approach failed for the TRC20 tokens, here's my code:</p>
<pre><code>import codecs
from tronpy.keys import PrivateKey
from hexbytes import HexBytes
def transfer(self, private_key: str, to_address: str, amount: int, contract_address: str, abi: str = None) -> HexBytes:
pk = PrivateKey(bytes.fromhex(private_key))
# Prepare contract
contract = self._tron.get_contract(contract_address)
contract.abi = abi
# Base tx
tx = (
contract.functions.transfer(
to_address,
amount)
.with_owner(pk.public_key.to_base58check_address())
#.fee_limit(5_000_000)
.build()
.sign(pk)
)
broadcasted_tx = tx.broadcast().wait()
return HexBytes(codecs.decode(broadcasted_tx['id'], 'hex_codec'))
</code></pre>
<p>Where:</p>
<pre><code>abi = [{
"outputs":[
{
"type":"bool"
}
],
"inputs":[
{
"name":"_to",
"type":"address"
},
{
"name":"_value",
"type":"uint256"
}
],
"name":"transfer",
"stateMutability":"Nonpayable",
"type":"Function"
}]
</code></pre>
<p>and:</p>
<pre><code>contract_address = 'TXYZopYRdj2D9XRtbG411XZZ3kM5VkAeBf' # USDT token on Nile testnet
</code></pre>
<p>And the transaction is broadcasted then get failed: <a href="https://nile.tronscan.org/#/transaction/70ac4ff25674d94dd7860e815560fbe425bfd275cf1afaa11d4897efa83d706a" rel="nofollow noreferrer">https://nile.tronscan.org/#/transaction/70ac4ff25674d94dd7860e815560fbe425bfd275cf1afaa11d4897efa83d706a</a></p>
<p>What's wrong with my transaction building? Is there any way to get it done using <strong>tronpy</strong> and not <em>tronapi</em>?</p>
|
<python><smartcontracts><tron><tronpy>
|
2023-01-04 17:04:15
| 2
| 921
|
zfou
|
75,008,676
| 5,041,045
|
why does python allows me to access elements not specified in __all__ within a module?
|
<p>I have the following directory structure:</p>
<pre><code>.
├── main.py
└── myproj
├── __init__.py
└── mymodule.py
</code></pre>
<p>This is <strong>mymodule.py</strong>:</p>
<pre><code>hello = "hello"
byebye = "byebye"
class sayHello():
def __init__(self):
print("Hello World!")
class sayBye():
def __init__(self):
print("ByeBye World!")
</code></pre>
<p>I want only to expose the <code>sayHello</code> class when someone imports like: <code>from myproj import *</code>. My understanding is that this can be achieved by adding an <code>__all__</code> in the <code>__init__.py</code> file.</p>
<p><strong>__init__.py</strong></p>
<pre><code>__all__ = ["sayHello"]
</code></pre>
<p>but I can still access everything from the <strong>mymodule.py</strong> module<br />
<strong>main.py</strong></p>
<pre><code>from myproj.mymodule import *
sayBye()
print(locals())
</code></pre>
<p>Output:</p>
<pre><code>$ python main.py
ByeBye World!
{'__name__': '__main__', '__doc__': None, '__package__': None, '__loader__': <_frozen_importlib_external.SourceFileLoader object at 0x10daf5210>, '__spec__': None, '__annotations__': {}, '__builtins__': <module 'builtins' (built-in)>, '__file__': '/Users/simon/junk/main.py', '__cached__': None, 'hello': 'hello', 'byebye': 'byebye', 'sayHello': <class 'myproj.mymodule.sayHello'>, 'sayBye': <class 'myproj.mymodule.sayBye'>}
</code></pre>
<p>I can also access variables like <code>hello</code> and <code>byebye</code>, which I was not expecting to. I was expecting some sort of error, since everything in the docs seems to imply that only the symbols specified in <code>__all__</code> will be exported.</p>
|
<python><import><module>
|
2023-01-04 16:51:33
| 0
| 3,022
|
Simon Ernesto Cardenas Zarate
|
75,008,445
| 4,364,157
|
Why CLOB slower than VARCHAR2 in Oracle?
|
<p>Currently, we have a table containing a varchar2(4000) column. However, this became a limitation, as the 'text' being inserted can grow beyond 4000 characters, so we decided to use CLOB as the data type for this specific column. What happens now is that both inserts and selects are much slower than with the previous varchar2(4000) data type.
We are using Python combined with SQLAlchemy to do both the insertions and the retrieval of the data. In simple words, the implementation itself did not change at all, only the column data type in the database.</p>
<p>Does anyone have any idea on how to tweak the performance?</p>
|
<python><oracle-database>
|
2023-01-04 16:31:17
| 3
| 1,026
|
Bruno Assis
|
75,008,347
| 11,370,582
|
Installed 2 Versions of Anaconda - libraries don't work
|
<p>I have mistakenly installed 2 versions of Anaconda on my 2019 Mac Pro and now I cannot access python libraries.</p>
<p>In an attempt to update anaconda I installed another version using homebrew instead of the GUI installer and it seems to have changed the path to my libraries. Now when I try to run a python script in the terminal I get a module not found error.</p>
<p>Even with the most basic libraries such as numpy and pandas:</p>
<pre><code> import numpy as np
ModuleNotFoundError: No module named 'numpy'
</code></pre>
<p>Though they still show up when I run a <code>pip list</code>...</p>
<p>Can anybody recommend how to fix this? I set up this machine a long time ago and would still consider myself a low-intermediate user. I use zsh in the terminal and originally set the path to anaconda in .zshrc as below.</p>
<pre><code># >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/Users/canderson/opt/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/Users/canderson/opt/anaconda3/etc/profile.d/conda.sh" ]; then
. "/Users/canderson/opt/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/Users/canderson/opt/anaconda3/bin:$PATH"
fi
fi
</code></pre>
<p>As a test, when I try and install numpy with pip it returns requirement already satisfied. Even though It's returning module not found.</p>
<pre><code>pip3 install numpy
Requirement already satisfied: numpy in /usr/local/lib/python3.9/site-packages (1.20.0)
</code></pre>
<p>** EDIT **</p>
<p>I've noticed this is only an issue with VS Code. When I run the same code in a standalone terminal it works fine.</p>
|
<python><macos><visual-studio-code><path><anaconda>
|
2023-01-04 16:23:49
| 1
| 904
|
John Conor
|
75,008,222
| 10,886,283
|
Assign specific colors to array columns without explicit iteration when plotting in matplotlib
|
<p>It is helpful sometimes to do <code>plt.plot(x, y)</code> when <code>y</code> is a 2D array due to every column of <code>y</code> will be plotted against <code>x</code> automatically in the same subplot. In such a case, line colors are set by default. But is it possible to customize colors with something similar to <code>plt.plot(x, y, color=colors)</code> where now <code>colors</code> is an iterable?</p>
<p>For example, let's say I have three datasets that scatter around straight lines and want to plotted with fitting curves in such a way that each dataset and its fit share the same color.</p>
<pre><code>np.random.seed(0)
# fake dataset
slope = [1, 2, 3]
X = np.arange(10)
Y = slope * X[:,None] + np.random.randn(10,3)
# fitting lines
params = np.polyfit(X, Y, deg=1)
x = np.linspace(0, 10, 50)
y = np.polyval(params, x[:,None])
</code></pre>
<p>I would like to get the output of the following code without having to iterate manually.</p>
<pre><code>colors = ['b', 'r', 'g']
for i in range(3):
plt.plot(X, Y[:,i], '.', color=colors[i])
plt.plot(x, y[:,i], color=colors[i])
</code></pre>
<p><a href="https://i.sstatic.net/qPzWl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qPzWl.png" alt="enter image description here" /></a></p>
|
<python><numpy><matplotlib>
|
2023-01-04 16:14:12
| 1
| 509
|
alpelito7
|
75,007,950
| 4,051,219
|
Pandas Group by column and concatenate rows to form a string separated by spaces
|
<p>I have a dataset that looks like:</p>
<pre class="lang-py prettyprint-override"><code>my_df = pd.DataFrame({"id": [1, 1, 1], "word": ["hello", "my", "friend"]})
</code></pre>
<p>And I would like to group by id and concatenate the word (without changing the order), the result dataset should look like:</p>
<p><a href="https://i.sstatic.net/Dsn21.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dsn21.png" alt="Expected result" /></a></p>
<p>Given:</p>
<ul>
<li>Dataframe is already sorted by id, words in the desired order</li>
</ul>
<p>Constraints:</p>
<ul>
<li>The word order should be maintained</li>
</ul>
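<p>In case the image does not load: the expected output has one row per <code>id</code> with the words joined by spaces (<code>"hello my friend"</code>). The kind of one-liner I assume exists would look something like this (a guess on my side):</p>
<pre><code>result = my_df.groupby('id', sort=False)['word'].agg(' '.join).reset_index()
</code></pre>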
|
<python><pandas><dataframe>
|
2023-01-04 15:50:04
| 2
| 510
|
dpalma
|
75,007,763
| 10,891,675
|
Why do unit tests fail whereas the program runs?
|
<p>I'm asked to develop unit tests for a program which is so badly developed that the tests don't run... but the program does. Thus, I need to explain the reason why, and I actually don't know!</p>
<p>Here is a piece of code that intends to represent the code I need to test:</p>
<pre><code>from services import myModule1
from services.spec1 import importedFunc
from services.spec2 import getTool
from services.spec3 import getDict
class myClass(object):
def __init__(self, param1, param2):
self.param1 = param1
self.param2 = param2
self.param3 = 0
self.param4 = 0
def myMethod(self):
try:
myVar1 = globalDict['key1']
myVar2 = globalDict['key2']
newVar = importedFunc(par1=myVar1, par2=myVar2, par3=extVar3)
calcParam = myModule1.methodMod1(self.param1)
self.param3 = calcParam["keyParam3"]
self.param4 = newVar.meth1(self.param2)
globTools.send_message(self.param3, self.param4)
except:
globTools.error_message(self.param3, self.param4)
return
class myClass2(object):
def __init__(self, *myclass2_params):
# some piece of code to intialize dedicated attributes
self.add_objects()
def add_objects(self):
# Some piece of code
my_class = myClass(**necessary_params)
# Some piece of code
return
if __name__ == '__main__':
globTools = getTool("my_program")
globalDict = getDict(some_params)
# Some piece of code
my_class2 = myClass2(**any_params)
# Some piece of code
</code></pre>
<p>As you can see, the problem is that the class and its methods use global variables defined in the main scope. This is just a quick summary, because it's actually a bit more complicated, but I hope it's enough to give you an overview of the context and help me understand why the unit tests fail.</p>
<p>I tried to mock the imported modules, but I did not manage to get a successful result, so I first tried to keep it simple and just initialize all the parameters.
I ended up with this test file:</p>
<pre><code>import unittest
from my_module import myClass
from services import myModule1
from services.spec1 import importedFunc
from services.spec2 import getTool
from services.spec3 import getDict
class TestMyClass(unittest.TestCase):
def setUp(self):
globTools = getTool("my_program")
globalDict = getDict(some_params)
def test_myMethod(self):
test_class = myClass(*necessary_parameters)
test_res = test_class.myMethod()
self.assertIsNotNone(test_res)
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>But the test fails, telling me 'globTools is not defined' when trying to instantiate myClass.
I also tried to initialize the variables directly in the test method, but the result is the same.</p>
<p>And to be complete about the technical environment: I cannot run Python programs directly and need to launch a Docker environment via a Jenkins pipeline. I'm not very familiar with this, but I imagine it should not have an impact on the result.</p>
<p>I guess the problem comes from variable scopes, but I'm not able to explain it in this case: why does the test fail whereas the method itself works (yes, it actually works, or at least the program runs without errors)?</p>
|
<python><unit-testing>
|
2023-01-04 15:33:22
| 1
| 696
|
Christophe
|
75,007,630
| 12,084,907
|
UnicodeEncodeError: 'charmap' codec can't encode characters in position 202-203: character maps to <undefined>
|
<p>I am trying to write a data stored in a dataframe to a csv and am running into the following error:</p>
<pre><code>UnicodeEncodeError: 'charmap' codec can't encode characters in position 202-203: character maps to <undefined>
</code></pre>
<p>I looked at <a href="https://stackoverflow.com/questions/27092833/unicodeencodeerror-charmap-codec-cant-encode-characters">this thread</a> and <a href="https://stackoverflow.com/questions/16923281/writing-a-pandas-dataframe-to-csv-file">this one</a> and tried their solutions of adding <code>encoding='utf-8' </code> however this did not seem to help. I was wondering if there is another encoding recommendation/ alternative solution to this problem. The code for my function that writes to the csv is as follows:</p>
<pre><code>def data_to_csv(data, data2):
encode = 'utf-8'
with open('data.csv', 'w') as data_csv:
data.to_csv(path_or_buf=data_csv, encoding=encode)
with open('data2_data.csv', 'w') as data2_csv:
data2.to_csv(path_or_buf=data2_csv, encoding=encode)
</code></pre>
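<p>One variant I have not tried yet is passing the encoding to <code>open()</code> itself rather than to <code>to_csv</code> (since I am handing <code>to_csv</code> an already-open file object, I suspect the <code>encoding</code> argument may be ignored); something like:</p>
<pre><code>with open('data.csv', 'w', encoding='utf-8', newline='') as data_csv:
    data.to_csv(path_or_buf=data_csv)
</code></pre>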
<p>Any help would be greatly appreciated!</p>
|
<python><python-3.x><pandas><csv>
|
2023-01-04 15:23:18
| 1
| 379
|
Buzzkillionair
|
75,007,603
| 14,882,883
|
Combining dictionaries with a particular key: Python
|
<p>I have the following dictionaries generated using for loop and I would to combine and dump them into one <code>YAML</code> file.</p>
<p>dict1</p>
<pre><code>{'name': 'test',
'version': '0011 * *? *',
'datasets': [{
'dataset': 'dataset_a',
'match_on': ['code'],
'columns': {'include': ['aa',
'bb',
'cc']}}]
}
</code></pre>
<p>dict2</p>
<pre><code>{'name': 'test',
'version': '0011 * *? *',
'datasets': [{
'dataset': 'dataset_b',
'match_on': ['code'],
'columns': {'include': ['aaa',
'bbb',
'ccc']}}]
}
</code></pre>
<p>The dictionaries share the same values for the <code>name</code> and <code>version</code> keys, so I would like to keep those only once, but they differ in the <code>datasets</code> key. The desired output looks like the following</p>
<p>combined_dict</p>
<pre><code>{'name': 'test',
'version': '0011 * *? *',
'datasets': [{
'dataset': 'dataset_a',
'match_on': ['code'],
'columns': {'include': ['aa',
'bb',
'cc']}}]
'datasets': [{
'dataset': 'dataset_b',
'match_on': ['code'],
'columns': {'include': ['aaa',
'bbb',
'ccc']}}]
}
</code></pre>
<p>To achieve this, I have tried the following.</p>
<pre><code>combined_dict= {**dict1,**dict2}
with open('combined_dict.yml', 'w') as outfile:
    yaml.dump(combined_dict, outfile, sort_keys=False)
</code></pre>
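<p>One thing I noticed is that a dict cannot literally contain the <code>datasets</code> key twice, so I assume what I actually need is to concatenate the two <code>datasets</code> lists under a single key; something like this sketch (untested on my side):</p>
<pre><code>combined_dict = {**dict1, 'datasets': dict1['datasets'] + dict2['datasets']}

with open('combined_dict.yml', 'w') as outfile:
    yaml.dump(combined_dict, outfile, sort_keys=False)
</code></pre>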
|
<python><dictionary><yaml><pyyaml>
|
2023-01-04 15:20:45
| 1
| 608
|
Hiwot
|
75,007,568
| 329,829
|
How to specify the parameter for FeatureUnion to let it pass to underlying transformer
|
<p>In my code, I am trying to access the <code>sample_weight</code> of the <code>StandardScaler</code>. However, this <code>StandardScaler</code> is within a <code>Pipeline</code> which again is within a <code>FeatureUnion</code>. I can't seem to get this parameter name correct: <code>scaler_pipeline__scaler__sample_weight</code> which should be specified in the <code>fit</code> method of the preprocessor object.</p>
<p>I get the following error: <code>KeyError: 'scaler_pipeline</code></p>
<p>What should this parameter name be? Alternatively, if there is a generally better way to do this, feel free to propose it.</p>
<p>The code below is a standalone example.</p>
<pre><code>from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler
import pandas as pd
class ColumnSelector(BaseEstimator, TransformerMixin):
"""Select only specified columns."""
def __init__(self, columns):
self.columns = columns
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.columns]
def set_output(self, *, transform=None):
return self
df = pd.DataFrame({'ds':[1,2,3,4],'y':[1,2,3,4],'a':[1,2,3,4],'b':[1,2,3,4],'c':[1,2,3,4]})
sample_weight=[0,1,1,1]
scaler_pipeline = Pipeline(
[
(
"selector",
ColumnSelector(['a','b']),
),
("scaler", StandardScaler()),
]
)
remaining_pipeline = Pipeline([("selector", ColumnSelector(["ds","y"]))])
# Featureunion fitting training data
preprocessor = FeatureUnion(
transformer_list=[
("scaler_pipeline", scaler_pipeline),
("remaining_pipeline", remaining_pipeline),
]
).set_output(transform="pandas")
df_training_transformed = preprocessor.fit_transform(
df, scaler_pipeline__scaler__sample_weight=sample_weight
)
</code></pre>
|
<python><pandas><scikit-learn><pipeline>
|
2023-01-04 15:18:26
| 1
| 5,232
|
Olivier_s_j
|
75,007,444
| 10,687,615
|
Extract Date/time from string
|
<p>I have a dataframe that looks like this:</p>
<pre><code>ID RESULT
1 Pivot (Triage) Form Entered On: 12/30/2022 23:20 EST Performed On: 12/30/2022 23:16 EST
</code></pre>
<p>I would like to extract both datetime variables so the new dataframe looks like this:</p>
<pre><code>ID END_TIME START_TIME
1 12/30/2022 23:20 12/30/2022 23:16
</code></pre>
<p>I'm trying multiple methods, but the <code>'END_TIME'</code> and <code>'START_TIME'</code> columns keep coming out as '<code>NA</code>'.</p>
<pre><code>TEST['END_TIME']=TEST['RESULT'].str.extract("Entered On: (\d+) EST")
TEST['START_TIME']=TEST['RESULT'].str.extract("Performed On: (\d+) EST")
</code></pre>
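<p>For reference, this is the pattern variant I was planning to try next, assuming the timestamp always has the <code>MM/DD/YYYY HH:MM</code> form (so far just a guess on my side):</p>
<pre><code>TEST['END_TIME'] = TEST['RESULT'].str.extract(r"Entered On: (\d{2}/\d{2}/\d{4} \d{2}:\d{2}) EST")
TEST['START_TIME'] = TEST['RESULT'].str.extract(r"Performed On: (\d{2}/\d{2}/\d{4} \d{2}:\d{2}) EST")
</code></pre>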
|
<python><pandas>
|
2023-01-04 15:10:09
| 2
| 859
|
Raven
|
75,007,435
| 8,864,226
|
Replace None in list with average of last 3 non-None entries when the average is above the same entry in another list
|
<p>I have two lists:</p>
<pre class="lang-py prettyprint-override"><code>dataa = [11, 18, 84, 51, 82, 1, 19, 45, 83, 22]
datab = [1, None, 40, 45, None, None, 23, 24, None, None]
</code></pre>
<p>I need to replace each None in datab whenever the average of the prior 3 non-None entries is greater than the corresponding dataa entry (see the walk-through example below).
Entries that do not have 3 prior non-None entries to average are left as they are.</p>
<p>My first attempt was this:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(len(dataa)):
if (datab[i] == None):
a = (datab[i-3]+datab[i-2]+datab[i-1])/3
if ((datab[i-3]+datab[i-2]+datab[i-1])/3 > dataa[i]):
datab[i] = dataa[i]
</code></pre>
<p>It errors out while computing the average of the prior three when one of the prior 3 is None. I tried to keep a running total, but this fails for some of them.</p>
<pre class="lang-py prettyprint-override"><code>c = 0;
a = 0;
for i in range(len(dataa)):
c = c + 1
if (datab[i] == None):
if (a > dataa[i]):
datab[i] = a
else:
if (c > 2):
a = (a * 3 + datab[i])/3
</code></pre>
<p>This also did not work as expected.</p>
<p>From this sample data, I expected:</p>
<ul>
<li>Entry 1, 2, and 3 have no average, so leave as is.</li>
<li>Entry 5 is None in datab and 82 in dataa. Since <code>(1+40+45)/3 = 28.66</code> we also leave as is.</li>
<li>Entry 6 is None in datab and 1 in dataa. The 3 prior non-None average are greater (<code>28.66 > 1</code>), so set to the 28.66 average.</li>
<li>Entry 9 is None, but <code>(28.66+23+24)/3 = 25.22</code> is not greater than 83, so leave as is.</li>
<li>Entry 10 is None and the 3 prior non-None average are greater (<code>25.22>22</code>), so set it to the 25.22 average.</li>
</ul>
<p>The correct expected output:</p>
<pre><code>[1, None, 40, 45, None, 28.66, 23, 24, None, 25.22]
</code></pre>
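<p>For what it's worth, this is the rough direction I think the logic needs to take; a sketch I have not fully verified (note that a replaced value counts as a non-None entry for later averages, as in the walk-through above):</p>
<pre><code>vals = []  # running list of non-None entries, including replacements
for i, b in enumerate(datab):
    if b is None:
        if len(vals) >= 3:
            avg = sum(vals[-3:]) / 3
            if avg > dataa[i]:
                datab[i] = round(avg, 2)
                vals.append(avg)
    else:
        vals.append(b)
</code></pre>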
|
<python><list><loops><average>
|
2023-01-04 15:09:35
| 1
| 6,097
|
James Risner
|
75,007,399
| 13,543,225
|
Multiple pip locations and versions, and pip not updating - Ubuntu on WSL
|
<h1>Why are there so many versions and locations of pip?</h1>
<p>When I run <code>pip install -U pip</code> it says it's updating to a newer version, but when I then run <code>pip --version</code> it still shows the old version. I am on WSL.</p>
<p><a href="https://i.sstatic.net/z47qa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z47qa.png" alt="so many pips..." /></a></p>
|
<python><pip>
|
2023-01-04 15:07:08
| 2
| 650
|
j7skov
|
75,006,928
| 1,128,632
|
Sagemaker training job fails ""FileNotFoundError: [Errno 2] No such file or directory: '/opt/ml/input/data/training/annotations.json'"
|
<p>When trying to use a Quick Start model in AWS Sagemaker, specifically for Object Detection, all fine tune models fail to train.</p>
<p>I'm attempting to fine tune a <code>SSD Mobilenet V1 FPN 640x640 COCO '17</code> model.</p>
<p>The annotations and images are accepted, but after initializing the training session, the Training Job is unable to find a specific file: <code>FileNotFoundError: [Errno 2] No such file or directory: '/opt/ml/input/data/training/annotations.json</code>.</p>
<p>The S3 directory given follows the template required, using a 1 image example for simplicity:</p>
<pre><code>images/
abc.png
annotations/
abc.json
</code></pre>
<p>The following stack trace is returned:</p>
<pre><code>We encountered an error while training the model on your data. AlgorithmError: ExecuteUserScriptError:
ExitCode 1
ErrorMessage "FileNotFoundError: [Errno 2] No such file or directory: '/opt/ml/input/data/training/annotations.json'
"
Command "/usr/local/bin/python3.9 transfer_learning.py --batch_size 3 --beta_1 0.9 --beta_2 0.999 --early_stopping False --early_stopping_min_delta 0 --early_stopping_patience 5 --epochs 5 --epsilon 1e-7 --initial_accumulator_value 0.1 --learning_rate 0.001 --model-artifact-bucket jumpstart-cache-prod-us-east-1 --model-artifact-key tensorflow-training/train-tensorflow-od1-ssd-mobilenet-v1-fpn-640x640-coco17-tpu-8.tar.gz --momentum 0.9 --optimizer adam --reinitialize_top_layer Auto --rho 0.95 --train_only_top_layer False", exit code: 1
</code></pre>
<p>There might be an internal bug where the mapping of input annotations isn't transformed and placed into this directory in the Training Job container?</p>
|
<python><tensorflow><machine-learning><amazon-sagemaker><amazon-sagemaker-studio>
|
2023-01-04 14:30:46
| 1
| 4,678
|
Dylan Pierce
|
75,006,911
| 1,200,914
|
How can I dynamically observe/change limits of a Autoscale group?
|
<p>I want to modify the minimum/maximum/desired number of instances of an Auto Scaling group, and check whether any instance from this group is currently running, all dynamically using the AWS SDK for Python. How can I do it?</p>
<p>I'm unable to find this in the documentation.</p>
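<p>For context, this is roughly the shape of what I am looking for with boto3 (a sketch with placeholder values; I am not sure these are the right calls, hence the question):</p>
<pre><code>import boto3

client = boto3.client('autoscaling')

# change the size limits of the group
client.update_auto_scaling_group(
    AutoScalingGroupName='my-asg',   # placeholder name
    MinSize=1,
    MaxSize=5,
    DesiredCapacity=2,
)

# check whether the group currently has any instances
resp = client.describe_auto_scaling_groups(AutoScalingGroupNames=['my-asg'])
instances = resp['AutoScalingGroups'][0]['Instances']
print(len(instances) > 0)
</code></pre>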
|
<python><amazon-web-services><aws-auto-scaling>
|
2023-01-04 14:29:53
| 2
| 3,052
|
Learning from masters
|
75,006,868
| 12,886,858
|
Adding multiple foreign keys from same model (FastAPI and SqlAlechemy)
|
<p>I am trying to have two foreign keys from the User table inside the Ban table. Here is how I did it:</p>
<pre><code>class Ban(Base):
__tablename__ = "ban"
ban_id = Column(Integer, primary_key=True, index=True)
poll_owner_id = Column(Integer)
banned_by = Column(String , ForeignKey('user.username', ondelete='CASCADE', ), unique=True)
user_id = Column(Integer, ForeignKey('user.user_id', ondelete='CASCADE', ))
updated_at = Column(DateTime)
create_at = Column(DateTime)
ban_to_user = relationship("User", back_populates='user_to_ban', cascade='all, delete')
</code></pre>
<p>and User table:</p>
<pre><code>class User(Base):
__tablename__ = "user"
user_id = Column(Integer, primary_key=True, index=True)
username = Column(String, unique=True)
email = Column(String)
create_at = Column(DateTime)
updated_at = Column(DateTime)
user_to_ban = relationship("Ban", back_populates='ban_to_user', cascade='all, delete')
</code></pre>
<p>When I try to run a query to fetch all users like this:</p>
<pre><code>@router.get('/all')
async def get_all_users(db:Session = Depends(get_db)):
return db.query(models.User).all()
</code></pre>
<p>I get this error:</p>
<pre><code>sqlalchemy.exc.InvalidRequestError: One or more mappers failed to initialize - can't proceed with initialization of other mappers. Triggering mapper: 'mapped class User->user'. Origina
l exception was: Could not determine join condition between parent/child tables on relationship User.user_to_ban - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.
</code></pre>
<p>I defined the relationship between them as you can see, but it states that there is a problem between them. If needed I can show how I did the migration for my db using alembic, in case that is a possible cause, or is there a cleaner and better way to do this?</p>
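<p>From the error message I suspect I need the <code>foreign_keys</code> argument on both relationships, along these lines (a sketch I have not confirmed, which is partly why I am asking):</p>
<pre><code>class Ban(Base):
    # ... columns as above ...
    ban_to_user = relationship(
        "User", foreign_keys=[user_id],
        back_populates="user_to_ban", cascade="all, delete",
    )

class User(Base):
    # ... columns as above ...
    user_to_ban = relationship(
        "Ban", foreign_keys="[Ban.user_id]",
        back_populates="ban_to_user", cascade="all, delete",
    )
</code></pre>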
|
<python><sqlalchemy>
|
2023-01-04 14:26:30
| 1
| 633
|
Vedo
|
75,006,842
| 7,848,740
|
Reduce dimension of Docker Python Image
|
<p>I'm building a Python image in Docker and I need it to be as small as possible.</p>
<p>I'm using venv on my PC and the app is basically a single .py file of 3.2 kB. I'm using Python 3.10 and the following libraries</p>
<pre><code>import feedparser
import sched
import time
import dbm
import paho.mqtt.client as mqtt
import json
import logging
</code></pre>
<p>My current Dockerfile looks like</p>
<pre><code>FROM python:3.10.9-alpine3.17
WORKDIR /app
COPY *.py ./
COPY *.txt ./
RUN pip install -r requirements.txt
ENTRYPOINT ["python3"]
CMD ["./main.py"]
</code></pre>
<p>And the final image is about 60Mb. Now, I'm seeing that Python Alpine image is about 20/30Mb and the folder containing my project is about 22Mb.</p>
<p>Is there a way to shrink the Docker image I'm creating even further, maybe by clearing the cache after the build or installing only the necessary packages?</p>
|
<python><python-3.x><docker><dockerfile><alpine-linux>
|
2023-01-04 14:23:57
| 2
| 1,679
|
NicoCaldo
|
75,006,821
| 4,380,853
|
What's the best way to create flink table for time series data
|
<p>I have aws kinesis data format as follows:</p>
<pre><code>[
{
'customer_id': UUID,
'sensor_id': UUID,
'timestamps': [ ... ],
'values': [ ...]
},
...
]
</code></pre>
<p>I later want to apply a sliding window on the data based on event time (which is included in the timestamps).</p>
<p>What's the best way to model the Flink table given that the data scheme contains arrays?</p>
|
<python><apache-flink><pyflink>
|
2023-01-04 14:22:48
| 1
| 2,827
|
Amir Afianian
|
75,006,769
| 1,128,648
|
NaN values when new column is added to pandas DataFrame based on an existing column data
|
<p>I am trying to create a new column in a pandas DataFrame based on another existing column. I am extracting characters 10:19 from the column <code>Name</code> and adding them as a new column <code>expiry</code>. But most of the data in <code>expiry</code> shows up as <code>nan</code>. I am new to Python and pandas. How can I solve this?</p>
<pre><code>allowedSegment = [14]
index_symbol = "BANKNIFTY"
fno_url = 'http://public.fyers.in/sym_details/NSE_FO.csv'
fno_symbolList = pd.read_csv(fno_url, header=None)
fno_symbolList.columns = ['FyersToken', 'Name', 'Instrument', 'lot', 'tick', 'ISIN', 'TradingSession', 'Lastupdatedate',
'Expirydate', 'Symbol', 'Exchange', 'Segment', 'ScripCode', 'ScripName', 'Ignore_1',
'StrikePrice', 'CE_PE', 'Ignore_2']
fno_symbolList = fno_symbolList[fno_symbolList['Instrument'].isin(allowedSegment) & (fno_symbolList['ScripName'] == index_symbol)]
fno_symbolList['expiry'] = fno_symbolList['Name'][10:19]
</code></pre>
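<p>For reference, one thing I have been meaning to try is the <code>.str</code> accessor, so that the slice applies to each string rather than selecting rows of the Series (just a guess on my side):</p>
<pre><code>fno_symbolList['expiry'] = fno_symbolList['Name'].str[10:19]
</code></pre>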
|
<python><pandas>
|
2023-01-04 14:18:19
| 1
| 1,746
|
acr
|
75,006,733
| 3,657,967
|
Python Random Module sample method with setstate notworking
|
<p>I am wondering if I am doing something wrong when setting the random seed and state. The numbers generated by <code>random.sample</code> do not seem predictable. Does anyone know why? Thanks.</p>
<pre><code>>>> state = random.getstate()
>>> random.seed(7)
>>> x = list(range(10))
>>> random.sample(x, 5)
[5, 2, 6, 9, 0]
>>> random.sample(x, 5)
[1, 8, 9, 2, 4]
>>> random.sample(x, 5)
[0, 8, 3, 9, 6]
>>> random.setstate(state)
>>> random.sample(x, 5)
[3, 1, 9, 7, 8]
>>> random.sample(x, 5)
[4, 2, 7, 5, 0]
>>> random.sample(x, 5)
[9, 6, 7, 8, 0]
</code></pre>
|
<python><random>
|
2023-01-04 14:15:34
| 2
| 479
|
acai
|
75,006,683
| 12,760,550
|
Create column that identify first date of another column pandas dataframe
|
<p>Imagine I have a dataframe with employee IDs, their Hire Dates, the contract type (can be Employee or Contractor) and the company where they were hired. Each employee may have any number of rows, for the same or different companies and with the same or different contract types.</p>
<pre><code>ID Hire Date Contract Type Company
10000 2000.01.01 Employee Abc
10000 2001.01.01 Contractor Zxc
10000 2000.01.01 Employee Abc
10000 2000.01.01 Contractor Abc
10000 2002.01.01 Employee Cde
10000 2002.01.01 Employee Abc
10001 1999.03.11 Employee Zxc
10002 1989.01.01 Employee Abc
10002 1989.01.01 Contractor Cde
10002 1988.01.01 Contractor Zxc
10002 1999.01.01 Employee Abc
</code></pre>
<p>For each ID and each contract type they have, I need to identify the earliest hire date within each unique company they were hired at and mark that row as their Primary Assignment (if the ID has 2 rows with the same contract type and the same hire date in the same company, just take the first one that appears and set it to "Yes"), resulting in this dataframe:</p>
<pre><code>ID Hire Date Contract Type Company Primary Assignment
10000 2000.01.01 Employee Abc Yes
10000 2001.01.01 Contractor Zxc Yes
10000 2000.01.01 Employee Abc No
10000 2000.01.01 Contractor Abc Yes
10000 2002.01.01 Employee Cde Yes
10000 2002.01.01 Employee Abc No
10001 1999.03.11 Employee Zxc Yes
10002 1989.01.01 Employee Abc Yes
10002 1989.01.01 Contractor Cde Yes
10002 1988.01.01 Contractor Zxc Yes
10002 1999.01.01 Employee Abc No
</code></pre>
<p>What would be the best way to achieve it?</p>
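<p>The rough approach I have sketched so far is to sort by hire date (keeping the original order for ties) and then keep the first row per ID / contract type / company, but I am not sure it is the cleanest way:</p>
<pre><code>import numpy as np

ordered = df.sort_values('Hire Date', kind='stable')
primary_idx = ordered.drop_duplicates(subset=['ID', 'Contract Type', 'Company']).index
df['Primary Assignment'] = np.where(df.index.isin(primary_idx), 'Yes', 'No')
</code></pre>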
|
<python><pandas><lambda><merge>
|
2023-01-04 14:11:22
| 1
| 619
|
Paulo Cortez
|
75,006,666
| 6,187,009
|
In python, how to transfer file in bytes with GRPC in a right way?
|
<p>For some reason, I want to transfer a file (not as a stream) with gRPC in Python. The protobuf and Python code are as follows.</p>
<ul>
<li>protobuf</li>
</ul>
<pre><code>syntax = "proto3";
import "google/protobuf/empty.proto";
package xxxx;
service gRPCComServeFunc {
rpc sendToClient(FileRequest) returns (google.protobuf.Empty) {}
}
message FileRequest {
FileInfo info = 1;
bytes chunk = 2;
}
message FileInfo {
int32 sender = 1;
int32 state = 2;
string timestamp = 3;
}
</code></pre>
<ul>
<li>python</li>
</ul>
<pre><code>def get_file_content(path_file):
with open(path_file, 'rb') as f:
content = f.read()
return content
def pack_file_request(state, chunk):
request = gRPC_comm_manager_pb2.FileRequest(
info=gRPC_comm_manager_pb2.FileInfo(
sender=0,
state=state,
timestamp=""),
chunk=chunk
)
return request
...
chunk = get_file_content(path_file)
request = pack_file_request(state, chunk)
stub.sendToClient(request)
</code></pre>
<ul>
<li>First, the server reports error as follows</li>
</ul>
<pre><code>...
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.CANCELLED
details = "Received RST_STREAM with error code 8"
debug_error_string = "UNKNOWN:Error received from peer 0.0.0.0:50078 {created_time:"2023-01-04T22:02:11.242643217+08:00", grpc_status:1, grpc_message:"Received RST_STREAM with error code 8"}"
</code></pre>
<ul>
<li>It looks like the message is sent successfully, but refused by the remote client at 0.0.0.0:50078.</li>
<li>So I guess it's caused by the file chunk, and this time I send the message <strong>without chunk</strong> as follows</li>
</ul>
<pre><code>def pack_file_request(state, chunk):
request = gRPC_comm_manager_pb2.FileRequest(
info=gRPC_comm_manager_pb2.FileInfo(
sender=0,
state=state,
timestamp=""),
)
return request
</code></pre>
<p>And it works. Therefore I guess I cannot directly read the file and pass it to grpc services in python.</p>
<p>So how should I read the file and pass it to the parameter <code>chunk</code> in grpc service <code>sendToClient</code>?</p>
|
<python><grpc><grpc-python>
|
2023-01-04 14:10:06
| 0
| 974
|
david
|
75,006,356
| 7,583,084
|
How to find the whole word with re.findall?
|
<p>this is my code:</p>
<pre><code>str_list = 'Hallo Welt Halloer'
conversations_counter = len(re.findall("Hallo", str_list))
print(conversations_counter)
</code></pre>
<p>The result is 2!
But I just want to have a match for the whole word 'Hallo'. The word 'Hallo' in the word 'Halloer' should not be counted.</p>
<p>Hallo = Hallo -> 1
Halloer <> Hallo -> 0</p>
<p>How to achieve this?</p>
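<p>I suspect it involves word boundaries, something like the pattern below, but I am not sure this is the right way (hence the question):</p>
<pre><code>conversations_counter = len(re.findall(r"\bHallo\b", str_list))  # matches only the whole word
</code></pre>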
<p>Thanks</p>
|
<python><findall>
|
2023-01-04 13:42:18
| 1
| 301
|
HansMuff
|
75,006,287
| 11,462,274
|
How to make a cumulative sum in blocks with results according to until the day before instead of each line?
|
<p>Example of my CSV:</p>
<pre class="lang-none prettyprint-override"><code>clock_now,competition,market_name,back,lay
2022/08/09,South African Premier Division,Over/Under 0.5 Goals,0.28985,-1.0
2022/08/12,South African Premier Division,Over/Under 0.5 Goals,-1.0,1.28
2022/09/07,South African Premier Division,Over/Under 0.5 Goals,0.37,-1.0
2022/09/07,South African Premier Division,Over/Under 0.5 Goals,0.20,-1.0
2022/10/15,South African Premier Division,Over/Under 0.5 Goals,0.20,1.0
2022/10/15,South African Premier Division,Over/Under 0.5 Goals,0.20,1.0
2022/10/15,South African Premier Division,Over/Under 0.5 Goals,0.20,1.0
2022/11/20,South African Premier Division,Over/Under 0.5 Goals,0.20,1.0
</code></pre>
<p>The results that are recorded in the <code>back</code> column and the <code>lay</code> column are updated every midnight (<code>00:00</code>).</p>
<p>So when I try to analyze the cumulative sum to know which rows are above zero throughout the day (the <code>combinations</code> list of lists is used in a loop because there are several column combinations that I analyze; I just summarized it to simplify the example in the question):</p>
<pre class="lang-python prettyprint-override"><code>combinations = [['market_name', 'competition']]
for cbnt in combinations:
df['invest'] = df.groupby(cbnt)['lay'].cumsum().gt(df['lay'])
</code></pre>
<p>The current result is this:</p>
<pre class="lang-none prettyprint-override"><code> clock_now cumulativesum invest
0 2022/08/09 -1.00 False
1 2022/08/12 0.28 False
2 2022/09/07 -0.72 True
3 2022/09/07 -1.72 False
4 2022/10/15 -0.72 False
5 2022/10/15 0.28 False
6 2022/10/15 1.28 True
7 2022/11/20 2.28 True
</code></pre>
<p>But the expected result is this:</p>
<p>Until <code>2022/08/09</code> the sum was <code>0</code> then <code>False</code><br />
Until <code>2022/08/12</code> the sum was <code>-1</code> then <code>False</code><br />
Until <code>2022/09/07</code> the sum was <code>+0.28</code> then <code>True</code><br />
Until <code>2022/09/07</code> the sum was <code>+0.28</code> then <code>True</code><br />
Until <code>2022/10/15</code> the sum was <code>-1.72</code> then <code>False</code><br />
Until <code>2022/10/15</code> the sum was <code>-1.72</code> then <code>False</code><br />
Until <code>2022/10/15</code> the sum was <code>-1.72</code> then <code>False</code><br />
Until <code>2022/11/20</code> the sum was <code>+1.28</code> so <code>True</code></p>
<p>How should I proceed to be able to add this cumulative sum according to the dates?</p>
|
<python><pandas><group-by><cumsum>
|
2023-01-04 13:36:30
| 1
| 2,222
|
Digital Farmer
|
75,006,181
| 10,963,018
|
How can I tell if wx.media.MediaCtrl was included in my installation of wxPython?
|
<p><strong>What's the problem:</strong> I am trying to get wxPython, specifically wx.media.MediaCtrl, working on my Raspberry Pi running Raspbian bullseye. I'm trying to play an mp4 video when a button is pushed and I keep getting errors like this:</p>
<pre><code>Traceback (most recent call last):
File "/home/[User Name]/Downloads/testapp/testapp.py", line 152, in lesson_one_frame
frame = videoFrame(title=title)
File "/home/[User Name]/Downloads/testapp/testapp.py", line 87, in __init__
self.testMedia = wx.media.MediaCtrl(self, style=wx.SIMPLE_BORDER, szBackend=wx.media.MEDIABACKEND_GSTREAMER)
NotImplementedError
</code></pre>
<p>Here is some more information on the versions of things I am running:</p>
<ul>
<li>wxPython: 4.2.0</li>
<li>GStreamer Core Library: 1.18.4</li>
<li>Python: 3.9.2</li>
<li>pip: 22.3.1</li>
</ul>
<p><strong>What I've Tried:</strong> After googling the initial issue I found this StackOverFlow issue <a href="https://stackoverflow.com/q/14302005/10963018">Getting a "NotImplementedError" in wxPython</a> where the top answer says this:</p>
<blockquote>
<p>The wxMediaCtrl is an optional part of the build, and it will automatically be excluded if wxWidgets' configure script was not able to finde the right dependent libraries or their -devel packages are not installed. When wxPython is built using a wxWigets without wxMediaCtrl then it creates a stub class that simply raises NotImplementedError if you try to use it.</p>
</blockquote>
<p>I was unsure if my issue was a problem with wx.media.MediaCtrl not being part of the build or GStreamer not working, so I tested GStreamer. I typed <code>gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink</code> into a terminal and it worked, showing the test video.</p>
<p>After testing GStreamer I started looking for ways to tell if wxMediaCtrl was part of the build and I found this StackOverFlow issue <a href="https://stackoverflow.com/q/57886608/10963018">wxMediaCtrl excluded during the installation</a> which the answer suggested some library dependency might be missing and to try the steps listed here <a href="https://wxpython.org/blog/2017-08-17-builds-for-linux-with-pip/" rel="nofollow noreferrer">https://wxpython.org/blog/2017-08-17-builds-for-linux-with-pip/</a>. I have followed all of these steps and it still gives me the <code>NotImplementedError</code>.</p>
<p><strong>Code Sample:</strong></p>
<p>Here is my code sample for using wxMediaCtrl</p>
<pre class="lang-py prettyprint-override"><code>
import wx, wx.media
import wx.html
class videoFrame(wx.Frame):
def __init__(self, title, parent=None):
wx.Frame.__init__(self, parent=parent, title=title)
# This line is where the problem is.
self.testMedia = wx.media.MediaCtrl(self, style=wx.SIMPLE_BORDER, szBackend=wx.media.MEDIABACKEND_GSTREAMER)
self.media = 'testVideo.mp4'
self.testMedia.Bind(wx.media.EVT_MEDIA_LOADED, self.play)
self.testMedia.Bind(wx.media.EVT_MEDIA_FINISHED, self.quit)
if self.testMedia.Load(self.media):
pass
else:
print("Media not found")
self.quit(None)
def play(self, event):
self.testMedia.Play()
def quit(self, event):
self.Destroy()
class mainAppFrame(wx.Frame):
def __init__(self, title, parent=None):
wx.Frame.__init__(self, parent=parent, title=title)
panel = wx.Panel(self)
lessonOneBtn = wx.Button(panel, label="Lesson 1")
self.Bind(wx.EVT_BUTTON, self.OnLessonOnePress, lessonOneBtn)
def OnLessonOnePress(self, event):
title = 'Lesson One'
frame = videoFrame(title=title)
</code></pre>
<p><strong>EDIT</strong>**: After posting I checked my build.log file to see if I missed anything and I found this:</p>
<pre><code>checking for GST... configure: GStreamer 1.7.2+ not available. Not using GstPlayer and falling back to 1.0
checking for GST... configure: WARNING: GStreamer 1.0 not available, falling back to 0.10
checking for GST... configure: WARNING: GStreamer 0.10 not available
configure: WARNING: GStreamer not available... disabling wxMediaCtrl
</code></pre>
<p>So it seems that when I was installing wxPython, for some reason it could not find GStreamer. I believe GStreamer is installed correctly because the command <code>gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink</code> worked.</p>
<p>Here are the packages I have downloaded related to GStreamer:</p>
<ul>
<li>libgstreamer1.0-dev</li>
<li>libgstreamer-plugins-base1.0-dev</li>
<li>libgstreamer-plugins-bad1.0-dev</li>
<li>libgstreamer1.0-0</li>
<li>gstreamer1.0-plugins-base</li>
<li>gstreamer1.0-plugins-good</li>
<li>gstreamer1.0-plugins-bad</li>
<li>gstreamer1.0-plugins-ugly</li>
<li>gstreamer1.0-libav</li>
<li>gstreamer1.0-doc</li>
<li>gstreamer1.0-tools</li>
<li>gstreamer1.0-x</li>
<li>gstreamer1.0-alsa</li>
<li>gstreamer1.0-gl</li>
<li>gstreamer1.0-gtk3</li>
</ul>
<p>All of these seem to be located in /usr/lib/arm-linux-gnueabihf/. I have also tried to add them to my pkg-config by using <code>pkg-config --cflags --libs gstreamer-1.0</code> like it says to do here <a href="https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c" rel="nofollow noreferrer">https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c</a> but I get the error:</p>
<pre><code>Package gstreamer-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing 'gstreamer-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gstreamer-1.0' found
</code></pre>
<p>I then tried to use <code>locate gstreamer | grep pc</code> and there are no .pc files for GStreamer even though it is installed. I'll update if I can figure out any more information.</p>
|
<python><linux><raspberry-pi><wxpython><wxwidgets>
|
2023-01-04 13:28:35
| 1
| 422
|
Jkkarr
|
75,006,095
| 8,510,149
|
Subclass EarlyStopper in scikit-optimize
|
<p>I can't figure out how to subclass EarlyStopper so I can use it as a callback in scikit-optimize (<code>gp_minimize</code>), based on the documentation. How should I think about the subclassing?</p>
<p>Documentation: <a href="https://scikit-optimize.github.io/stable/modules/generated/skopt.callbacks.EarlyStopper.html#skopt.callbacks.EarlyStopper" rel="nofollow noreferrer">https://scikit-optimize.github.io/stable/modules/generated/skopt.callbacks.EarlyStopper.html#skopt.callbacks.EarlyStopper</a></p>
<p>This article provides an example:
<a href="https://medium.com/sitechassethealthcenter/gaussian-process-to-optimize-hyperparameters-of-an-algorithm-5b4810277527" rel="nofollow noreferrer">https://medium.com/sitechassethealthcenter/gaussian-process-to-optimize-hyperparameters-of-an-algorithm-5b4810277527</a></p>
<pre><code>from skopt.callbacks import EarlyStopper
class StoppingCriterion(EarlyStopper):
def __init__(self, delta=0.05, n_best=10):
super(EarlyStopper, self).__init__()
self.delta = delta
self.n_best = n_best
def _criterion(self, result):
if len(result.func_vals) >= self.n_best:
func_vals = np.sort(result.func_vals)
worst = func_vals[self.n_best - 1]
best = func_vals[0]
return abs((best - worst)/worst) < self.delta
else:
return None
</code></pre>
<p>However, the above throws back an error.</p>
<p>I've also tried this:</p>
<pre><code>class MyEarlyStopper(EarlyStopper):
def __init__(self, patience=15, relative_improvement=0.01):
super().__init__(patience=patience, relative_improvement=relative_improvement)
self.best_loss = float("inf")
def __call__(self, result):
current_loss = result.fun # current loss value
if current_loss < self.best_loss * (1 - self.relative_improvement):
# update the best loss and reset the patience counter
self.best_loss = current_loss
self.counter = 0
else:
# increment the patience counter
self.counter += 1
return self.counter >= self.patience
</code></pre>
<p>This returns:</p>
<pre><code>TypeError: object.__init__() takes exactly one argument (the instance to initialize)
</code></pre>
|
<python><callback><subclassing><python-class><scikit-optimize>
|
2023-01-04 13:21:56
| 0
| 1,255
|
Henri
|
75,005,916
| 5,114,495
|
How to run only a part of the DAG in Airflow?
|
<p>Is it possible to run <strong>only a part of the DAG</strong> in Airflow?</p>
<p>I know one can run a single task. But is it possible to run a set of linked tasks within the DAG?</p>
|
<python><python-3.x><airflow><airflow-2.x>
|
2023-01-04 13:07:42
| 1
| 1,394
|
cavalcantelucas
|
75,005,699
| 12,760,550
|
Create "Yes" column according to another column value pandas dataframe
|
<p>Imagine I have a dataframe with employee IDs, their Contract Number, and the Company they work for. Each employee can have as many contracts as they want for the same company or even for different companies:</p>
<pre><code>ID Contract Number Company
10000 1 Abc
10000 2 Zxc
10000 3 Abc
10001 1 Zxc
10002 2 Abc
10002 1 Cde
10002 3 Zxc
</code></pre>
<p>I need to find a way to identify the company of contract number "1" for each ID and then create a column "Primary Contract" that is set to "Yes" if the contract is with the same company as contract number 1, resulting in this dataframe:</p>
<pre><code>ID Contract Number Company Primary Compay
10000 1 Abc Yes
10000 2 Zxc No
10000 3 Abc Yes
10001 1 Zxc Yes
10002 2 Abc No
10002 1 Cde Yes
10002 3 Zxc No
</code></pre>
<p>What would be the best way to achieve it?</p>
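<p>The direction I have been thinking about is to build a mapping from ID to the company of contract number 1 and compare against it, roughly like this (untested sketch on my side):</p>
<pre><code>import numpy as np

primary_company = df.loc[df['Contract Number'] == 1].set_index('ID')['Company']
df['Primary Contract'] = np.where(
    df['Company'] == df['ID'].map(primary_company), 'Yes', 'No'
)
</code></pre>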
|
<python><pandas><dataframe><multiple-columns>
|
2023-01-04 12:51:00
| 2
| 619
|
Paulo Cortez
|
75,005,594
| 10,658,339
|
merge pandas dataframe from two columns
|
<p>I have two dataframes with a composite primary key, that is, two columns identify each element, and I would like to merge these dataframes into one. How can I do that? My example is:</p>
<pre><code>import random
import pandas as pd
import numpy as np
A = ['DF-PI-05', 'DF-PI-09', 'DF-PI-10', 'DF-PI-15', 'DF-PI-16',
'DF-PI-19']
Sig = [100,200,400]*6
B = np.repeat(A,3)
C = pd.DataFrame(list(zip(B,Sig)),columns=['Name','Sig'])
D = pd.DataFrame(list(zip(B,Sig)),columns=['Name','Sig'])
C['param_1'] = np.linspace(0,20,18)
D['param_2'] = np.linspace(20,100,18)
</code></pre>
<p>And I want to merge C and D on the columns 'Name' and 'Sig'. In reality, the column 'Name' is not ordered, so I cannot simply concatenate or append C to D; I need to match on the two columns together: 'Name' and 'Sig'.</p>
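<p>I assume the answer involves passing both columns to <code>on=</code>, something like the line below, but I am not sure this is the recommended way (hence the question):</p>
<pre><code>E = C.merge(D, on=['Name', 'Sig'])
</code></pre>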
|
<python><dataframe><join><merge><concatenation>
|
2023-01-04 12:41:45
| 1
| 527
|
JCV
|
75,005,560
| 8,757,033
|
How to clear a terminal in pyscript (py-terminal)
|
<p>The idea is to clear the output of the python REPL provided by pyscript, embedded in a website.</p>
<p>I have tried the <a href="https://stackoverflow.com/questions/517970/how-to-clear-the-interpreter-console">regular ways</a> used in the OS console (<code>os.system("clear")</code>, <code>print("\033c")</code> and similar), but they don't work.</p>
<p>I have not found anything in the documentation of the <a href="https://docs.pyscript.net/latest/reference/elements/py-repl.html" rel="nofollow noreferrer">py-repl</a> or <a href="https://docs.pyscript.net/latest/reference/plugins/py-terminal.html" rel="nofollow noreferrer">py-terminal</a> elements.</p>
|
<python><web-frontend><pyscript>
|
2023-01-04 12:38:26
| 1
| 2,229
|
Miguel
|
75,005,513
| 6,400,443
|
Does Pandas "cross" merge keep order of both left and right?
|
<p>I would like to know if the merge operation using <code>how="cross"</code> will keep my row order on the left and right side. To be clearer, I expect something like this:</p>
<pre><code>df1 = pd.DataFrame(["a", "b", "c"])
df2 = pd.DataFrame(["1", "2", "3"])
df1.merge(df2, how="cross")
# I expect the result to be ALWAYS like this (with 1, 2, 3 repeating):
0 a 1
1 a 2
2 a 3
3 b 1
4 b 2
5 b 3
6 c 1
7 c 2
8 c 3
</code></pre>
<p>I tested with a small amount of data, but I will have to use billions of rows, so it's hard to check whether the order stays the same.</p>
<p>In <a href="https://pandas.pydata.org/docs/reference/api/pandas.merge.html" rel="nofollow noreferrer">pandas doc</a>, they say :</p>
<blockquote>
<p>cross: creates the cartesian product from both frames, preserves the order of the left keys.</p>
</blockquote>
<p>The left keys' order is preserved, so should I assume the right keys' order is not?</p>
<p>Thanks for your help</p>
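<p>In case the right-hand guarantee turns out not to hold, a fallback I am considering is to carry explicit order columns and sort afterwards, which makes the order deterministic regardless (a sketch; the helper column names are made up):</p>
<pre><code>out = (
    df1.assign(_left_order=range(len(df1)))
       .merge(df2.assign(_right_order=range(len(df2))), how="cross")
       .sort_values(["_left_order", "_right_order"], kind="stable")
       .drop(columns=["_left_order", "_right_order"])
)
</code></pre>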
|
<python><pandas>
|
2023-01-04 12:33:58
| 1
| 737
|
FairPluto
|
75,005,451
| 925,179
|
flask --help displays Error: Could not import 'server'
|
<p>I cloned the project <a href="https://github.com/sayler8182/MockServer" rel="nofollow noreferrer">https://github.com/sayler8182/MockServer</a>, went to the project directory, and ran ./scripts/init.sh</p>
<p>Init.sh code:</p>
<pre class="lang-py prettyprint-override"><code>pip3 install virtualenv
python3 -m venv venv
. venv/bin/activate
pip3 install -r requirements.txt
flask db init
flask db migrate
flask db upgrade
</code></pre>
<p>The output from this script:</p>
<pre><code>Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in ./venv/lib/python3.9/site-packages (from jsonschema>=3.0.1->flasgger->-r requirements.txt (line 2)) (0.19.3)
Requirement already satisfied: greenlet!=0.4.17 in ./venv/lib/python3.9/site-packages (from SQLAlchemy>=1.4.18->Flask-SQLAlchemy->-r requirements.txt (line 7)) (2.0.1)
Error: Could not import 'server'.
Usage: flask [OPTIONS] COMMAND [ARGS]...
Try 'flask --help' for help.
Error: No such command 'db'.
Error: Could not import 'server'.
Usage: flask [OPTIONS] COMMAND [ARGS]...
Try 'flask --help' for help.
Error: No such command 'db'.
Error: Could not import 'server'.
Usage: flask [OPTIONS] COMMAND [ARGS]...
Try 'flask --help' for help.
</code></pre>
<p>When I print env variables:</p>
<pre><code>FLASK_APP=server.py
VIRTUAL_ENV=/Users/user1/Project/MockServer/venv
PS1=(venv) %{%f%b%k%}$(build_prompt)
VIRTUAL_ENV_PROMPT=(venv)
rvm_hook=
_=/usr/bin/env
</code></pre>
<p>I can see FLASK_APP and VIRTUAL_ENV.
When I run:</p>
<pre><code>flask --help
</code></pre>
<p>I see in the output:</p>
<pre><code>Error: Could not import 'server'.
Usage: flask [OPTIONS] COMMAND [ARGS]...
</code></pre>
<p>I tried reinstalling Flask in the project folder a few times:</p>
<pre><code>pip3 uninstall flask
pip3 install Flask
</code></pre>
<p>with success:</p>
<pre><code>Collecting Flask
Using cached Flask-2.2.2-py3-none-any.whl (101 kB)
Requirement already satisfied: Werkzeug>=2.2.2 in /Users/user1/Project/MockServer/venv/lib/python3.9/site-packages (from Flask) (2.2.2)
Requirement already satisfied: itsdangerous>=2.0 in /Users/user1/Project/MockServer/venv/lib/python3.9/site-packages (from Flask) (2.1.2)
Requirement already satisfied: click>=8.0 in /Users/user1/Project/MockServer/venv/lib/python3.9/site-packages (from Flask) (8.1.3)
Requirement already satisfied: importlib-metadata>=3.6.0 in /Users/user1/Project/MockServer/venv/lib/python3.9/site-packages (from Flask) (5.0.0)
Requirement already satisfied: Jinja2>=3.0 in /Users/user1/Project/MockServer/venv/lib/python3.9/site-packages (from Flask) (3.1.2)
Requirement already satisfied: zipp>=0.5 in /Users/user1/Project/MockServer/venv/lib/python3.9/site-packages (from importlib-metadata>=3.6.0->Flask) (3.9.0)
Requirement already satisfied: MarkupSafe>=2.0 in /Users/user1/Project/MockServer/venv/lib/python3.9/site-packages (from Jinja2>=3.0->Flask) (2.1.1)
Installing collected packages: Flask
Successfully installed Flask-2.2.2
</code></pre>
<p>The installation succeeds, but the error from <code>flask --help</code> still exists.</p>
<p>Pip, Python details:</p>
<pre><code>pip3 -V
pip 22.3.1 from /usr/local/lib/python3.10/site-packages/pip (python 3.10)
python3 -V
Python 3.10.7
</code></pre>
<p>I will be very grateful for any advice on how to fix this error</p>
<pre><code>Error: Could not import 'server'.
</code></pre>
<p>Update:</p>
<ul>
<li>I can run this project on Apple Terminal, but not in iTerm2 :)</li>
</ul>
|
<python><macos><flask><iterm2>
|
2023-01-04 12:28:33
| 1
| 341
|
granan
|
75,005,416
| 5,114,495
|
Multiple inheritance using `BaseBranchOperator` in Airflow
|
<p>Can one use multiple inheritance using <code>BaseBranchOperator</code> in Airflow?</p>
<p>I want to define an operator like:</p>
<pre><code>from airflow.models import BaseOperator
from airflow.operators.branch import BaseBranchOperator
class MyOperator(BaseOperator, BaseBranchOperator):
def execute(self, context):
print('hi')
def choose_branch(self, context):
if True:
return 'task_A'
else:
return 'task_B'
</code></pre>
<p>In that case, is it accurate to think that the <code>execute</code> method will run before the <code>choose_branch</code> method?</p>
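<p>To make the resolution order visible I am inspecting the MRO; as far as I understand, <code>BaseBranchOperator</code> performs the branching inside its own <code>execute</code>, so overriding <code>execute</code> here would mean <code>choose_branch</code> is never called. A sketch of both the check and the variant I am considering (not verified against the Airflow source):</p>
<pre><code>print([cls.__name__ for cls in MyOperator.__mro__])

class MyBranchOperator(BaseBranchOperator):
    # hypothetical variant: only override choose_branch and let the parent's
    # execute drive the branching and the downstream skipping
    def choose_branch(self, context):
        return 'task_A' if True else 'task_B'
</code></pre>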
|
<python><python-3.x><airflow><airflow-2.x>
|
2023-01-04 12:24:57
| 2
| 1,394
|
cavalcantelucas
|
75,005,351
| 7,462,275
|
Problem to solve sympy symbolic equation with tanh
|
<p>I wrote the following equation, <code>my_Eq</code>, where <code>a</code> and <code>b</code> are real:</p>
<pre><code>import sympy as sp

a, b = sp.symbols('a,b', real=True)
my_Eq=sp.Eq(sp.tanh(a),b)
</code></pre>
<p>When I try to solve it with <code>sp.solve(my_Eq, a)</code>, two solutions are found: <code>[log(-sqrt(-(b + 1)/(b - 1))), log(sqrt(-(b + 1)/(b - 1)))]</code></p>
<p>In the first solution, <code>a</code> is the log of something negative. Why did I obtain this solution, given that <code>a</code> and <code>b</code> are declared <code>real</code>?</p>
<p>Also, the results are expressed with <code>log</code> and <code>sqrt</code>. How can I get something as simple as <code>a = atanh(b)</code>?</p>
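<p>For what it's worth, the second (positive) solution does agree numerically with <code>atanh</code> for -1 &lt; b &lt; 1, which I spot-checked like this (a quick sketch):</p>
<pre><code>import sympy as sp

a, b = sp.symbols('a b', real=True)
sol_pos = sp.log(sp.sqrt(-(b + 1)/(b - 1)))  # the second solution returned by solve

# for -1 < b < 1 the sqrt argument is positive, so this branch is real;
# compare it against atanh at b = 3/10
val = sp.Rational(3, 10)
print(sol_pos.subs(b, val).evalf(), sp.atanh(val).evalf())
</code></pre>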
|
<python><sympy><hyperbolic-function>
|
2023-01-04 12:19:36
| 1
| 2,515
|
Stef1611
|
75,005,300
| 801,618
|
How can I fix my DSL grammar to parse a problem statement?
|
<p>I've been tasked with creating a grammar for a legacy DSL that's been in use for over 20 years. The original parser was written using a mess of regular expressions, so I've been told.</p>
<p>The syntax is generally of the "if this variable is n then set that variable to m" style.</p>
<p>My grammar works for almost all cases, but there are a few places where it baulks because of a (mis)use of the <code>&&</code> (logical and) operator.</p>
<p>My Lark grammar (which is LALR(1)) is:</p>
<pre><code>?start: statement*
?statement: expression ";"
?expression : assignment_expression
?assignment_expression : conditional_expression
| primary_expression assignment_op assignment_expression
?conditional_expression : logical_or_expression
| logical_or_expression "?" expression (":" expression)?
?logical_or_expression : logical_and_expression
| logical_or_expression "||" logical_and_expression
?logical_and_expression : equality_expression
| logical_and_expression "&&" equality_expression
?equality_expression : relational_expression
| equality_expression equals_op relational_expression
| equality_expression not_equals_op relational_expression
?relational_expression : additive_expression
| relational_expression less_than_op additive_expression
| relational_expression greater_than_op additive_expression
| relational_expression less_than_eq_op additive_expression
| relational_expression greater_than_eq_op additive_expression
?additive_expression : multiplicative_expression
| additive_expression add_op multiplicative_expression
| additive_expression sub_op multiplicative_expression
?multiplicative_expression : primary_expression
| multiplicative_expression mul_op primary_expression
| multiplicative_expression div_op primary_expression
| multiplicative_expression mod_op primary_expression
?primary_expression : variable
| variable "[" INT "]" -> array_accessor
| ESCAPED_STRING
| NUMBER
| unary_op expression
| invoke_expression
| "(" expression ")"
invoke_expression : ID ("." ID)* "(" argument_list? ")"
argument_list : expression ("," expression)*
unary_op : "-" -> negate_op
| "!" -> invert_op
assignment_op : "="
add_op : "+"
sub_op : "-"
mul_op : "*"
div_op : "/"
mod_op : "%"
equals_op : "=="
not_equals_op : "!="
greater_than_op : ">"
greater_than_eq_op : ">="
less_than_op : "<"
less_than_eq_op : "<="
ID : CNAME | CNAME "%%" CNAME
?variable : ID
| ID "@" ID -> namelist_id
| ID "@" ID "@" ID -> exptype_id
| "$" ID -> environment_id
%import common.WS
%import common.ESCAPED_STRING
%import common.CNAME
%import common.INT
%import common.NUMBER
%import common.CPP_COMMENT
%ignore WS
%ignore CPP_COMMENT
</code></pre>
<p>And some working examples are:</p>
<pre><code>(a == 2) ? (c = 12);
(a == 2 && b == 3) ? (c = 12);
(a == 2 && b == 3) ? (c = 12) : d = 13;
(a == 2 && b == 3) ? ((c = 12) && (d = 13));
</code></pre>
<p>But there are a few places where I see this construct:</p>
<pre><code>(a == 2 && b == 3) ? (c = 12 && d = 13);
</code></pre>
<p>That is, the two assignments are joined by <code>&&</code> but aren't in parentheses and it doesn't like the second assignment operator. I assume this is because it's trying to parse it as <code>(c = (12 && d) = 13)</code></p>
<p>I've tried changing the order of the rules (this is my first non-toy DSL, so there's been a lot of trial and error), but I either get similar errors or the precedence is wrong. And the Earley algorithm doesn't fix it.</p>
|
<python><parsing><dsl><lark-parser>
|
2023-01-04 12:15:42
| 1
| 436
|
MerseyViking
|
75,005,072
| 2,601,357
|
Flask Route: pattern matching to exclude certain routes
|
<p>I have the following routes already set up:</p>
<pre><code>
@app.route("/")
@app.route("/index")
def index():
...
@app.route("/projects/<projname>")
def projects(projname):
...
@app.route("/extras/")
def extras():
...
@app.route("/extras/<topic>/")
def extras_topics(topic):
...
</code></pre>
<p>To this, I would like to add a view into a route that matches any pattern excluding the existing routes, something like:</p>
<pre><code>
@app.route("/<pagename>") # this excludes /index, /projects, and /extras
def page_view(pagename):
...
</code></pre>
<p>So if one wanted to get to <code>/research</code>, it should trigger <code>page_view()</code>, but for <code>/</code> or <code>/index</code> I'd like to trigger <code>index()</code></p>
<p>What would be ways to go about this?</p>
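<p>One approach I am considering relies on Werkzeug matching static rules such as <code>/index</code> before variable rules, so a catch-all should not shadow the routes above; the explicit guard is only a precaution. A sketch based on that assumption:</p>
<pre><code>from flask import abort

@app.route("/<pagename>")
def page_view(pagename):
    # belt-and-braces: refuse names that belong to the explicit routes above
    if pagename in ("index", "projects", "extras"):
        abort(404)
    return f"page: {pagename}"
</code></pre>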
|
<python><flask>
|
2023-01-04 11:54:26
| 1
| 564
|
asuprem
|
75,005,044
| 1,114,453
|
Add version to pipeline
|
<p>Is there any standard or recommended way to add a version number to a pipeline (written in snakemake in my case)?</p>
<p>For example, I have this <a href="https://github.com/glaParaBio/genomeAnnotationPipeline" rel="nofollow noreferrer">pipeline</a> and just now I added a <code>CHANGELOG.md</code> file with the current version on top. Are there better ways to identify the version a user is deploying?</p>
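<p>One lightweight option I am considering is keeping a single version string at the top of the Snakefile (which is plain Python) and printing it together with the git revision at startup. A rough sketch, assuming git is available where the pipeline is deployed; the version string itself is hypothetical:</p>
<pre><code>from subprocess import run

PIPELINE_VERSION = "0.1.0"  # hypothetical; bumped together with CHANGELOG.md

# also record the exact commit the user is running, if repo metadata is present
git_rev = run(["git", "rev-parse", "--short", "HEAD"],
              capture_output=True, text=True).stdout.strip() or "unknown"
print(f"pipeline version {PIPELINE_VERSION} ({git_rev})")
</code></pre>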
|
<python><python-3.x><version><pipeline><snakemake>
|
2023-01-04 11:52:26
| 1
| 9,102
|
dariober
|
75,004,957
| 15,406,157
|
Python virtual environment does not recognize its own site packages
|
<p>I created a virtual environment <code>myenv1</code> on machine #1.</p>
<p>I installed some packages in this virtual environment.</p>
<p><code>pip3 freeze</code> shows me this list of packages:</p>
<pre class="lang-none prettyprint-override"><code>cffi==1.15.1
chardet==5.0.0
click==8.0.4
cryptography==39.0.0
dataclasses==0.8
distlib==0.3.6
dnspython==2.2.1
filelock==3.4.1
Flask==2.0.3
future==0.18.2
impacket==0.10.0
importlib-metadata==4.8.3
importlib-resources==5.4.0
itsdangerous==2.0.1
Jinja2==3.0.3
ldap3==2.9.1
ldapdomaindump==0.9.4
MarkupSafe==2.0.1
natsort==8.2.0
platformdirs==2.4.0
pyasn1==0.4.8
pycparser==2.21
pycryptodomex==3.16.0
pyOpenSSL==23.0.0
six==1.16.0
typing_extensions==4.1.1
virtualenv==20.17.1
Werkzeug==2.0.3
zipp==3.6.0
</code></pre>
<p>I archived the folder <code>myenv1</code> and copied it to machine #2 (which is identical to machine #1).</p>
<p>On machine #2 I activated <code>myenv1</code>. <code>pip3 freeze</code> shows the same list of packages, so it looks like the transfer completed successfully.</p>
<p>But when I run some code that requires these packages, I see <code>ModuleNotFoundError: No module named 'impacket'</code>, so Python does not see the installed packages.</p>
<p>When I check <code>sys.path</code> I see the folder of my virtual environment <code>myenv1</code>:</p>
<pre><code>python3 -c "import sys; print('\n'.join(sys.path))"
/usr/lib64/python36.zip
/usr/lib64/python3.6
/usr/lib64/python3.6/lib-dynload
/opt/allot/igor_test/myenv1/lib64/python3.6/site-packages
/opt/allot/igor_test/myenv1/lib/python3.6/site-packages
</code></pre>
<p>Why doesn't the virtual environment see its own packages? How can I make these packages visible?</p>
<p><strong>UPD</strong></p>
<p>I found the issue.</p>
<p>This is the script that I run:</p>
<pre><code>#!/usr/bin/python3
import argparse
import re
import sys
import configparser
import io
from impacket.dcerpc.v5.rpcrt import RPC_C_AUTHN_LEVEL_PKT_INTEGRITY, RPC_C_AUTHN_GSS_NEGOTIATE
from natsort import natsorted, ns
from impacket.dcerpc.v5.dtypes import NULL
from impacket.dcerpc.v5.dcom import wmi
from impacket.dcerpc.v5.dcomrt import DCOMConnection
... //and all the code further
</code></pre>
<p>This script points to <code>#!/usr/bin/python3</code>, which is the global Python, and the global Python does not see packages from the virtual environment.</p>
<p>I replaced this line with python from my virtual environment:</p>
<p><code>#!/path/to/my/virt_env/myenv1/bin/python3</code></p>
<p>and everything started working.</p>
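<p>A more portable variant of the same fix, assuming the script is always launched with the virtual environment activated, is to point the shebang at whatever <code>python3</code> comes first on the PATH:</p>
<pre><code>#!/usr/bin/env python3
# with "env python3" the script uses the interpreter found first on PATH, so it
# picks up the venv's python as long as the venv is activated before running it
</code></pre>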
|
<python><python-3.x><virtualenv>
|
2023-01-04 11:44:42
| 1
| 338
|
Igor_M
|
75,004,683
| 14,720,380
|
How do I override the precedence of a header file when using pybind11 and cmake?
|
<p>I have a header file called <code>node.h</code> within my C++ project, where I am now writing python bindings. Within python, there is also a header file called <a href="https://github.com/python/typed_ast/blob/master/ast27/Include/node.h" rel="nofollow noreferrer"><code>node.h</code></a>. I am creating my Python bindings with PyBind11 and CMake similar to:</p>
<pre><code>cmake_minimum_required(VERSION 3.10)
project(WeatherRouting)
message("Configuring WeatherRouting (python package)")
set(PYBIND11_CPP_STANDARD -std=c++17)
find_package(Python COMPONENTS Interpreter Development)
find_package(pybind11 CONFIG)
include_directories(${ROUTINGLIB_INCLUDE_DIRECTORIES})
file(GLOB_RECURSE SRC_FILES CONFIGURE_DEPENDS LIST_DIRECTORIES false ${CMAKE_CURRENT_SOURCE_DIR}/src *.cpp *.cc)
pybind11_add_module(_WeatherRouting SHARED ${SRC_FILES})
target_link_libraries(_WeatherRouting PRIVATE RoutingLib)
set_target_properties(_WeatherRouting PROPERTIES POSITION_INDEPENDENT_CODE TRUE)
</code></pre>
<p>This causes Python's <code>node.h</code> to be used instead of my own header file when I include it like:</p>
<pre><code>#include "node.h"
</code></pre>
<p>How do I fix this?</p>
|
<python><c++><header-files><pybind11>
|
2023-01-04 11:21:51
| 0
| 6,623
|
Tom McLean
|
75,004,607
| 12,040,751
|
Type annotation for non iterables
|
<p>I want to write an annotation for variables that are not iterables.</p>
<pre><code>from collections.abc import Iterable

def f(x: NonIterable):
if not isinstance(x, Iterable):
pass
else:
raise TypeError
</code></pre>
<p>As you can see in the function the definition of a <code>NonIterable</code> is clear, but I am struggling to turn that into a type annotation.</p>
|
<python><python-typing>
|
2023-01-04 11:14:54
| 0
| 1,569
|
edd313
|
75,004,599
| 1,315,472
|
Monitor the progress of pyspark's `applyInPandas()`
|
<p>In Databricks, we use the Python command</p>
<pre><code>spark_df.groupBy("variable1").applyInPandas(python_function, schema=schema)
</code></pre>
<p>to run the <code>python_function</code> on subsets of the <code>spark_df</code>. The command works fine and the computation also scales to 100+ CPUs. However, it takes a couple of hours to finish, and it would be great to monitor the progress of the computation.</p>
<p>Is there a way to monitor the progress of the computation?</p>
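<p>One idea I am looking at is polling Spark's status tracker (the same data the Spark UI shows) from a separate thread or notebook cell while the job runs. A rough sketch, assuming the usual <code>spark</code> session object is available as it is on Databricks:</p>
<pre><code>import time

tracker = spark.sparkContext.statusTracker()
for _ in range(10):
    for stage_id in tracker.getActiveStageIds():
        info = tracker.getStageInfo(stage_id)
        if info is not None:
            print(f"stage {stage_id}: {info.numCompletedTasks}/{info.numTasks} tasks done")
    time.sleep(30)
</code></pre>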
|
<python><pyspark><parallel-processing><databricks><hpc>
|
2023-01-04 11:14:18
| 1
| 2,566
|
Nairolf
|
75,004,590
| 8,610,286
|
How to put data in a PyPi python package and retrieve via pip?
|
<p>I am trying to package some data alongside the scripts within a package of mine: <a href="https://pypi.org/project/taxon2wikipedia/0.0.4/" rel="nofollow noreferrer">https://pypi.org/project/taxon2wikipedia/0.0.4/</a></p>
<p>The source distribution seems to contain the files that I need, but when trying to use the package I get the error message:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '~/my_venv/lib/python3.8/site-packages/taxon2wikipedia/dicts/phrase_start_dict.json'
</code></pre>
<p>The "data" and "dicts" folders are not there in "site-packages/taxon2wikipedia", but are present in the package when I manually download it? Any suggestions on why it might be happening?</p>
<p>Thanks!</p>
<h2>Edit: extra information</h2>
<p>I already have a MANIFEST.in file with <code>recursive-include src *.rq *.py *.jinja *json</code>, and even after changing it to other similar options, it did not work.
I am using a pyproject.toml configuration and a mock setup.py to run <code>setuptools.setup()</code>.
I run <code>python3 -m build</code> to build the package. Maybe the problem lies there?</p>
<p>This is the source repository by the way: <a href="https://github.com/lubianat/taxon2wikipedia" rel="nofollow noreferrer">https://github.com/lubianat/taxon2wikipedia</a></p>
<p>The files in PyPI are all correct, but don't seem to be downloaded when I pip install the package.</p>
<h2>Edit 2 - Related question</h2>
<p><a href="https://stackoverflow.com/questions/42791179/why-does-pip-install-not-include-my-package-data-files">Why does "pip install" not include my package_data files?</a></p>
<p>It seems to be some problem with how pip installs the package. The solution is different as in my case the files seem to be at the correct directory.</p>
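<p>My current understanding is that MANIFEST.in only affects the sdist; for the files to land in site-packages they also need to be declared as package data so the built wheel contains them. For loading, I am considering <code>importlib.resources</code> so the lookup does not depend on the install path. A sketch, assuming the json files are shipped inside the package (requires Python 3.9+ or the importlib_resources backport on 3.8):</p>
<pre><code>from importlib.resources import files
import json

text = files("taxon2wikipedia").joinpath("dicts/phrase_start_dict.json").read_text()
data = json.loads(text)
</code></pre>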
|
<python><pypi>
|
2023-01-04 11:13:19
| 2
| 349
|
Tiago Lubiana
|
75,004,494
| 159,072
|
TypeError: Object of type Response is not JSON serializable (2)
|
<p>I have been trying to serialize an SQLAlchemy model so that I can pass a set of records from the Flask server to an HTML Socket.IO client.</p>
<p>I am trying the following approach, <a href="https://medium.com/@mhd0416/flask-sqlalchemy-object-to-json-84c515d3c11c" rel="nofollow noreferrer">where an object is first converted into a dictionary and then jsonified.</a> However, this is not working. I am getting the following error message at the designated line:</p>
<blockquote>
<pre><code>TypeError: Object of type Response is not JSON serializable
</code></pre>
</blockquote>
<p>How can I achieve what I need?</p>
<p><strong>app.py</strong></p>
<pre><code>from flask import Flask, render_template, jsonify
from flask_marshmallow import Marshmallow
from flask_sqlalchemy import SQLAlchemy
from datetime import datetime
import secrets
import string
import logging
from flask_socketio import SocketIO, emit
import os
import subprocess
async_mode = None
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///filename.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)
socketio_obj = SocketIO(app)
class JobQueue(db.Model):
__tablename__ = 'job_queue'
job_id = db.Column(db.Integer, primary_key=True)
unique_job_key = db.Column(db.String(64), index=True)
user_name = db.Column(db.String, index=True)
input_string = db.Column(db.String(256))
is_done = db.Column(db.Boolean)
created_at = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
def to_dictionary(self):
return {
"job_id": self.job_id,
"unique_job_key": self.unique_job_key,
"user_name": self.user_name,
"input_string": self.input_string,
"is_done": self.is_done,
"created_at": self.created_at
}
# end function
# end class
def get_data():
users = JobQueue.query.order_by(JobQueue.created_at).all()
return users
def get_serialized_data():
all_users = get_data()
returns = [item.to_dictionary() for item in all_users]
return jsonify(all_users = returns)
... ... ... ... ... ... ... ...
@socketio_obj.on('first_connect_event', namespace='/test')
def handle_first_connect_event(arg1):
try:
msg = arg1['first_message']
print(msg)
users = get_serialized_data()
emit('get_data', {'users': users}) #<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
except Exception as ex:
logging.exception("")
print(ex)
</code></pre>
<p>The full error trace:</p>
<pre><code>ERROR:root:
Traceback (most recent call last):
File "C:\git\funkclusterfrontend\app.py", line 106, in handle_first_connect_event
emit('get_data', {'users': users})
File "C:\ProgramData\Miniconda3\lib\site-packages\flask_socketio\__init__.py", line 899, in emit
return socketio.emit(event, *args, namespace=namespace, to=to,
File "C:\ProgramData\Miniconda3\lib\site-packages\flask_socketio\__init__.py", line 460, in emit
self.server.emit(event, *args, namespace=namespace, to=to,
File "C:\ProgramData\Miniconda3\lib\site-packages\socketio\server.py", line 294, in emit
self.manager.emit(event, data, namespace, room=room,
File "C:\ProgramData\Miniconda3\lib\site-packages\socketio\base_manager.py", line 167, in emit
self.server._emit_internal(eio_sid, event, data, namespace, id)
File "C:\ProgramData\Miniconda3\lib\site-packages\socketio\server.py", line 612, in _emit_internal
self._send_packet(eio_sid, packet.Packet(
File "C:\ProgramData\Miniconda3\lib\site-packages\socketio\server.py", line 617, in _send_packet
encoded_packet = pkt.encode()
File "C:\ProgramData\Miniconda3\lib\site-packages\socketio\packet.py", line 61, in encode
encoded_packet += self.json.dumps(data, separators=(',', ':'))
File "C:\ProgramData\Miniconda3\lib\json\__init__.py", line 234, in dumps
return cls(
File "C:\ProgramData\Miniconda3\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\ProgramData\Miniconda3\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "C:\ProgramData\Miniconda3\lib\json\encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Response is not JSON serializable
Object of type Response is not JSON serializable
</code></pre>
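<p>A minimal sketch of the fix I am considering: <code>jsonify()</code> builds a Flask <code>Response</code> object, which is what the traceback says cannot be serialized, so the handler should emit plain lists and dicts instead (with datetimes converted to strings):</p>
<pre><code>def get_serialized_data():
    all_users = get_data()
    # plain python structures only; isoformat() keeps created_at JSON-friendly
    return [
        {**item.to_dictionary(), "created_at": item.created_at.isoformat()}
        for item in all_users
    ]

@socketio_obj.on('first_connect_event', namespace='/test')
def handle_first_connect_event(arg1):
    users = get_serialized_data()
    emit('get_data', {'users': users})
</code></pre>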
|
<python><flask><flask-socketio>
|
2023-01-04 11:05:25
| 2
| 17,446
|
user366312
|
75,004,379
| 5,058,026
|
Python assign tuple to set() without unpacking
|
<p>How can I assign a <code>tuple</code> to a <code>set</code> without the members being unpacked and added separately?</p>
<p>For example (python 3.9.11):</p>
<pre><code>from collections import namedtuple
Point = namedtuple('Point', 'x y')
p = Point(5, 5)
set(p)
</code></pre>
<p>produces <code>{5}</code>, whereas I would like <code>{Point(5, 5)}</code></p>
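<p>For reference, wrapping the tuple in another container keeps it as a single element (a quick sketch):</p>
<pre><code>s = {p}           # set literal with one element
s2 = set([p])     # or build the set from a one-element list
print(s)          # {Point(x=5, y=5)}
</code></pre>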
|
<python><set><iterable-unpacking>
|
2023-01-04 10:55:35
| 3
| 1,439
|
Martin CR
|
75,004,270
| 10,737,778
|
Is it possible to show __init_subclass__ docstring on the subclass?
|
<p>Let's say I have a class that implements <code>__init_subclass__</code> with two parameters:</p>
<pre><code>class Foo:
    def __init_subclass__(cls, name: str, surname: str):
"""
name: The name of the user
surname: The surname of the user
"""
...
</code></pre>
<p>When I subclass from Foo, I would like the docstring of the subclass to show the description of the two parameters (name, surname). Is there any way to achieve this?
That way, when a user subclasses from Foo, they know what to put in the class parameters:</p>
<pre><code>class Bar(Foo, name = 'bar', surname = 'xxx'):
...
</code></pre>
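<p>One idea I am experimenting with is having <code>__init_subclass__</code> copy its own parameter description onto subclasses that do not define a docstring; a sketch, not necessarily the idiomatic way:</p>
<pre><code>class Foo:
    def __init_subclass__(cls, name: str, surname: str, **kwargs):
        """
        name: The name of the user
        surname: The surname of the user
        """
        super().__init_subclass__(**kwargs)
        # propagate the parameter description to subclasses without a docstring
        if cls.__doc__ is None:
            cls.__doc__ = Foo.__init_subclass__.__doc__

class Bar(Foo, name='bar', surname='xxx'):
    pass

print(Bar.__doc__)  # shows the name/surname description
</code></pre>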
|
<python><python-3.x><docstring>
|
2023-01-04 10:45:11
| 1
| 441
|
CyDevos
|
75,004,129
| 3,381,215
|
Snakemake/miniforge: Nothing provides libgcc-ng >=12 needed by freebayes-1.3.6-hbfe0e7f_2
|
<p>I am running a Snakemake NGS data analysis pipeline; one rule uses FreeBayes, and its env looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>name: freebayes
channels:
- bioconda
dependencies:
- freebayes=1.3.6
</code></pre>
<p>While creating the env this error occurs:</p>
<pre class="lang-bash prettyprint-override"><code>Building DAG of jobs...
Your conda installation is not configured to use strict channel priorities. This is however crucial for having robust and correct environments (for details, see https://conda-forge.org/docs/user/tipsandtricks.html). Please consider to configure strict priorities by executing 'conda config --set channel_priority strict'.
Creating conda environment envs/freebayes.yaml...
Downloading and installing remote packages.
CreateCondaEnvironmentException:
Could not create conda environment from /home/nlv24077/temp/test_legacy_pipeline/rules/../envs/freebayes.yaml:
Command:
mamba env create --quiet --file "/home/nlv24077/mpegg/snaqs_required_files/snakemake_envs/08937c429b94df5250c66c66154d19b9.yaml" --prefix "/home/nlv24077/mpegg/snaqs_required_files/snakemake_envs/08937c429b94df5250c66c66154d19b9"
Output:
Encountered problems while solving:
- nothing provides libgcc-ng >=12 needed by freebayes-1.3.6-hbfe0e7f_2
</code></pre>
<p>If I set the channel to conda-forge the error changes to:</p>
<pre class="lang-bash prettyprint-override"><code>- nothing provides requested freebayes 1.3.6**
</code></pre>
<p>How could I solve this?</p>
|
<python><conda><bioinformatics><snakemake><mini-forge>
|
2023-01-04 10:32:33
| 2
| 1,199
|
Freek
|
75,004,108
| 1,029,902
|
Constructing GET url to retrieve JSON from JS button click function
|
<p>I am trying to scrape this page using requests and BeautifulSoup in Python but the page is in Javascript so I am including both tags for the question. The page is</p>
<p><a href="https://untappd.com/v/southern-cross-kitchen/469603" rel="nofollow noreferrer">https://untappd.com/v/southern-cross-kitchen/469603</a></p>
<p>and others like it, but it has a 'Show More' button. I want to avoid using a headless browser so I went snooping around for the JavaScript behind this to see if I can find a url, get or post request.</p>
<p>After some inspection, this is the button's code:</p>
<pre><code><a class="yellow button more show-more-section track-click" data-href=":moremenu" data-menu-id="78074484" data-section-id="286735920" data-track="venue" data-venue-id="469603" href="javascript:void(0);">
</code></pre>
<p>and it is controlled and redirected by this function:</p>
<pre><code>$(document).on("click", ".show-more-menu-section", (function() {
var e = $(this);
$(e).hide();
var t = $(this).attr("data-venue-id"),
a = $(this).attr("data-menu-id"),
n = $(".section-area .menu-section").length;
return $(".section-loading").addClass("active"), $.ajax({
url: "/venue/more_menu_section/" + t + "/" + n,
type: "GET",
data: "menu_id=" + a,
dataType: "json",
error: function(t) {
$(".section-loading").removeClass("active"), $(e).show(), $.notifyBar({
html: "Hmm. Something went wrong. Please try again!",
delay: 2e3,
animationSpeed: "normal"
})
},
success: function(t) {
$(".section-loading").removeClass("active"), "" == t.view ? $(e).hide() : (trackMenuView("viewMoreMenuSection"), t.count >= 15 ? ($(e).show(), $(".section-area").append(t.view)) : $(".section-area").append(t.view), handleNew())
}
})
</code></pre>
<p>which is contained in <a href="https://assets.untappd.com/assets/v3/js/venue/venue.menu.min.js?v=2.7.10" rel="nofollow noreferrer">https://assets.untappd.com/assets/v3/js/venue/venue.menu.min.js?v=2.7.10</a></p>
<p>So for the required values in the function, <code>t</code>, <code>a</code> and <code>n</code> are:</p>
<pre><code>t = 469603
n = 78074484
a = 1
</code></pre>
<p>I am now trying to construct the url using this <code>url</code> part of the function which is:</p>
<pre><code>url: "/venue/more_menu_section/" + t + "/" + n
</code></pre>
<p>Using <code>https://www.untappd.com</code> as my base url, I have tried the following urls with no luck:</p>
<p><code>/venue/more_menu_section/469603/1?data=%7B%22menu_id%22%3A%2278074484%22%7D</code></p>
<p><code>/venue/more_menu_section/469603/1?data%3D%7B%22menu_id%22%3A78074484%7D</code></p>
<p><code>/venue/more_menu_section/469603/1?data=%7B%22menu_id%22%3A78074484%7D</code></p>
<p><code>/venue/more_menu_section/469603/1?data={"menu_id":78074484}</code></p>
<p>As a result, I have not been able to programmatically retrieve the data. I really would want to avoid using webdrivers and headless browsers to simulate clicks so I am guessing this should be possible with a GET request. Creating that url is proving a challenge.</p>
<p>How can I construct the right url to fetch?</p>
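<p>Based on the jQuery call above, where <code>type: "GET"</code> means the <code>data</code> string is appended as the query string, my current attempt maps the parameters like this. A sketch only, and the site may still require the usual cookies and headers:</p>
<pre><code>import requests

venue_id = 469603        # data-venue-id  (t in the JS)
menu_id = 78074484       # data-menu-id   (a in the JS)
section_count = 1        # sections already rendered on the page (n in the JS)

url = f"https://untappd.com/venue/more_menu_section/{venue_id}/{section_count}"
resp = requests.get(url, params={"menu_id": menu_id},
                    headers={"X-Requested-With": "XMLHttpRequest"})
print(resp.status_code, resp.headers.get("content-type"))
</code></pre>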
|
<javascript><python><web-scraping><get>
|
2023-01-04 10:30:29
| 1
| 557
|
Tendekai Muchenje
|
75,004,065
| 12,934,163
|
Receiving TypeError, when PandasCursor is imported from pyathena.pandas.cursor
|
<p>I want to read an excel file into <code>pandas</code> from an AWS S3 bucket. Everything worked fine, but when I import <code>PandasCursor</code>, which I need for another part of the code, I receive the following error message:</p>
<pre><code>import pandas as pd
import s3fs
from pyathena import connect
from pyathena.pandas.cursor import PandasCursor
path = "s3://some/path/to/file.xlsx"
df = pd.read_excel(path)
>>>TypeError: S3FileSystem.__init__() missing 1 required positional argument: 'connection'
</code></pre>
<p>Can anyone explain what is happening and how I can fix it?
From the <a href="https://pypi.org/project/pyathena/#pandascursor" rel="nofollow noreferrer">pyathena docs</a> I don't understand how <code>PandasCursor</code> is influencing <code>pd.read_excel()</code></p>
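<p>As a workaround I am considering opening the file with <code>s3fs</code> explicitly and handing pandas a file object, which sidesteps whatever filesystem implementation got registered for the <code>s3</code> protocol. A sketch, assuming credentials are available in the environment:</p>
<pre><code>import pandas as pd
import s3fs

fs = s3fs.S3FileSystem()
with fs.open("s3://some/path/to/file.xlsx", "rb") as f:
    df = pd.read_excel(f)
</code></pre>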
|
<python><pandas><amazon-web-services><amazon-s3><pyathena>
|
2023-01-04 10:26:03
| 0
| 885
|
TiTo
|
75,003,903
| 9,262,339
|
Failed to connect to postgre database inside docker container
|
<p><strong>docker-compose.yaml</strong></p>
<pre><code>version: '3.9'
services:
web:
env_file: .env
build: .
command: sh -c "alembic upgrade head && uvicorn main:app --host 0.0.0.0 --port 8000"
volumes:
- .:/app
ports:
- 8000:8000
depends_on:
- db
- redis
db:
image: postgres:11
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASS}
- POSTGRES_DB=${DB_NAME}
redis:
image: redis:6-alpine
volumes:
postgres_data:
</code></pre>
<p><strong>.env</strong></p>
<pre><code>DB_USER='wplay'
DB_PASS='wplay'
DB_HOST=db
DB_NAME='wplay'
DB_PORT=5432
</code></pre>
<p>When I run the docker container, I get:</p>
<pre><code>web_1 | could not connect to server: Cannot assign requested address
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 5432?
</code></pre>
<p>I tried changing <code>DB_HOST='localhost'</code> in .env and adding</p>
<pre><code>ports:
- '5432:5432'
</code></pre>
<p>to the <code>db</code> service in the yaml configuration, but nothing changed.</p>
<p>Update:</p>
<p><a href="https://i.sstatic.net/ZuQEO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZuQEO.png" alt="enter image description here" /></a></p>
<p><strong>logs</strong></p>
<pre><code>db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2023-01-04 12:44:55.386 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2023-01-04 12:44:55.386 UTC [1] LOG: listening on IPv6 address "::", port 5432
</code></pre>
<p>Connection to DB</p>
<p><strong>db.py</strong></p>
<pre><code>import os
from decouple import config
import databases
import sqlalchemy
DEFAULT_DATABASE_URL = f"postgresql://{config('DB_USER')}:{config('DB_PASS')}" \
f"@{config('DB_HOST')}:5432/{config('DB_NAME')}"
DATABASE_URL = (os.getenv('DATABASE_URL', DEFAULT_DATABASE_URL))
database = databases.Database(DATABASE_URL)
metadata = sqlalchemy.MetaData()
engine = sqlalchemy.create_engine(DATABASE_URL)
metadata.create_all(engine)
</code></pre>
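<p>As a sanity check I am adding a line at the bottom of <code>db.py</code> that logs the resolved URL with the password masked, to see which host the container actually tries to reach (a small sketch):</p>
<pre><code>import re

# mask the password, then print the URL the app will really use
print("DATABASE_URL ->", re.sub(r"://([^:]+):[^@]+@", r"://\1:***@", DATABASE_URL))
</code></pre>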
|
<python><docker><docker-compose>
|
2023-01-04 10:11:03
| 1
| 3,322
|
Jekson
|
75,003,869
| 970,872
|
how to handle timestamps from summer and winter when converting strings in polars
|
<p>I'm trying to convert the string timestamps my camera puts in its RAW file metadata to polars datetimes, but polars throws this error when I have timestamps from both summer time and winter time.</p>
<pre><code>ComputeError: Different timezones found during 'strptime' operation.
</code></pre>
<p>How do I persuade it to convert these successfully?
(ideally handling different timezones as well as the change from summer to winter time)</p>
<p>And then how do I convert these timestamps back to the proper local clocktime for display?</p>
<p>Note that while the timestamp strings just show the offset, there is an exif field "Time Zone City" in the metadata as well as fields with just the local (naive) timestamp</p>
<pre><code>import polars as plr
testdata=[
{'name': 'BST 11:06', 'ts': '2022:06:27 11:06:12.16+01:00'},
{'name': 'GMT 7:06', 'ts': '2022:12:27 12:06:12.16+00:00'},
]
pdf = plr.DataFrame(testdata)
pdfts = pdf.with_column(plr.col('ts').str.strptime(plr.Datetime, fmt = "%Y:%m:%d %H:%M:%S.%f%z"))
print(pdf)
print(pdfts)
</code></pre>
<p>It looks like I need to use tz_convert, but I cannot see how to add it to the conversion expression and what looks like the relevant docpage just 404's
<a href="https://pola-rs.github.io/polars/py-polars/html/reference/series.html#timeseries" rel="noreferrer">broken link to dt_namespace</a></p>
|
<python><datetime><timestamp><strftime><python-polars>
|
2023-01-04 10:07:32
| 2
| 557
|
pootle
|
75,003,854
| 19,580,067
|
Split the single alphanumeric string column to two columns as numbers and alphabets
|
<p>I have a single alphanumeric string column that has to be split into two different columns: numbers and text.
Only the leading numeric part should be split off into the first column; the remaining alphanumeric text should stay unchanged in the second column.</p>
<p>For example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Col A</th>
</tr>
</thead>
<tbody>
<tr>
<td>2 Nutsx20mm</td>
</tr>
<tr>
<td>2 200 jibs50</td>
</tr>
<tr>
<td>3 200</td>
</tr>
<tr>
<td>5</td>
</tr>
<tr>
<td>8 Certs 20</td>
</tr>
</tbody>
</table>
</div>
<p>Expected:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A1</th>
<th>A2</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>Nutsx20mm</td>
</tr>
<tr>
<td>2 200</td>
<td>jibs50</td>
</tr>
<tr>
<td>3 200</td>
<td>null</td>
</tr>
<tr>
<td>5</td>
<td>null</td>
</tr>
<tr>
<td>8</td>
<td>Certs 20</td>
</tr>
</tbody>
</table>
</div>
<p>I have tried the following. It works correctly in most cases, but fails when the string is just a 4-digit number followed by a space.</p>
<pre><code>code:
df_tab["Col A"].str.extract(r'(\d+)(?: (\S.+))?$')
</code></pre>
<p>The output I get is below table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A1</th>
<th>A2</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>Nutsx20mm</td>
</tr>
<tr>
<td>2</td>
<td>200 jibs50</td>
</tr>
<tr>
<td>3</td>
<td>200</td>
</tr>
<tr>
<td>5</td>
<td></td>
</tr>
<tr>
<td>8</td>
<td>Certs 20</td>
</tr>
</tbody>
</table>
</div>
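<p>A pattern I am testing keeps every leading space-separated number group in the first column and puts whatever follows, if anything, in the second (a sketch):</p>
<pre><code>pattern = r'^(\d+(?: \d+)*)(?:\s+(\S.*?))?\s*$'
df_tab[['A1', 'A2']] = df_tab['Col A'].str.extract(pattern)
# '2 200 jibs50' -> ('2 200', 'jibs50'); '3 200' -> ('3 200', NaN); '1234 ' -> ('1234', NaN)
</code></pre>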
|
<python><dataframe><python-re>
|
2023-01-04 10:06:15
| 2
| 359
|
Pravin
|
75,003,817
| 20,051,041
|
How to remove unexpected hover value in Plotly Express bar plot, Python
|
<p>I am using plotly express to create a horizontal bar plot. After adding a <code>hovertemplate</code>, an ugly zero appeared in the hover box (always a zero for each element). What is it and how can I get rid of it?
Thanks.</p>
<pre><code>fig = px.bar(df, orientation = 'h')
fig.update_traces(hovertemplate = "%{label}, in %{value} drugs")
</code></pre>
<p><a href="https://i.sstatic.net/wMvAN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wMvAN.png" alt="enter image description here" /></a></p>
|
<python><plotly><hover>
|
2023-01-04 10:03:05
| 1
| 580
|
Mr.Slow
|
75,003,385
| 8,788,960
|
What is equivalent Protocol of Python Callable?
|
<p>I always thought that Callable is equivalent to having the dunder <code>__call__</code>, but apparently there is also <code>__name__</code>, because the following code passes <code>mypy --strict</code>:</p>
<pre class="lang-py prettyprint-override"><code>def print_name(f: Callable[..., Any]) -> None:
print(f.__name__)
def foo() -> None:
pass
print_name(foo)
print_name(lambda x: x)
</code></pre>
<p>What is the actual interface of Python's Callable?</p>
<p>I dug into what <code>functools.wraps</code> does. AFAIU it sets <code>('__module__', '__name__', '__qualname__', '__doc__', '__annotations__')</code> - is that the same set of attributes a <code>Callable</code> is expected to have?</p>
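<p>For comparison, the closest Protocol I can write that captures only the call shape is below; note it deliberately does not declare <code>__name__</code>, which is exactly why I am unsure what the full interface is (a sketch):</p>
<pre><code>from typing import Any, Protocol

class CallableLike(Protocol):
    def __call__(self, *args: Any, **kwargs: Any) -> Any: ...

def print_type(f: CallableLike) -> None:
    print(type(f))

print_type(lambda x: x)
</code></pre>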
|
<python><mypy><python-typing>
|
2023-01-04 09:25:04
| 1
| 1,171
|
hans
|
75,003,376
| 1,773,702
|
How to capture the error logs from Airflow and send it in mail?
|
<p>We are trying to report the failures that occur during Airflow job execution, capture the exceptions from the logs, and send them in an email.</p>
<p>Currently we display the following things in the failure email, written in Python.</p>
<pre><code>failure_msg = """
:red_circle: Task Failed.
*Dag*: {dag}
*Task*: {task}
*Execution Time*: {exec_date}
*Log Url*: {log_url}
""".format(
dag=context.get('task_instance').dag_id,
task=context.get('task_instance').task_id,
ti=context.get('task_instance'),
exec_date=context.get('execution_date'),
log_url=context.get('task_instance').log_url
</code></pre>
<p>I was looking to capture the exception message from Airflow. The above message displays high-level info like the dag id, task id, log url, etc.</p>
<p>I referred to the Airflow documentation below, but so far I have not found a way to capture the exact exception message.</p>
<p>Currently I am manually throwing an error in one of the DAGs as</p>
<pre><code>def scan_table():
try:
raise ValueError('File not parsed completely/correctly')
logging.info(message)
except Exception as error:
raise ValueError('File not parsed completely/correctly inside exception block')
print("Error while fetching data from backend", error)
</code></pre>
<p>I tried using <code>exception=context.get('task_instance').log.exception</code>,
but it showed up as</p>
<pre><code><bound method Logger.exception of <Logger airflow.task (INFO)>>
</code></pre>
<p>In the DAG log output in Airflow UI, the exception is thrown as:</p>
<pre><code>[2023-01-04, 09:05:07 UTC] {taskinstance.py:1909} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/bitnami/airflow/dags/git_airflow-dags/scan_table.py", line 37, in scan_table
raise ValueError('File not parsed completely/correctly')
ValueError: File not parsed completely/correctly
</code></pre>
<p>I want to capture this part of the log and print it in the <code>failure_msg</code> in the Python snippet above.</p>
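<p>The approach I am testing is to grab the exception object from the failure-callback context, where, as far as I understand, Airflow stores it under the "exception" key. A sketch, assuming this runs in an <code>on_failure_callback</code>:</p>
<pre><code>def task_failure_alert(context):
    exc = context.get('exception')  # the exception raised by the task, if any
    return (
        ":red_circle: Task Failed.\n"
        f"*Dag*: {context['task_instance'].dag_id}\n"
        f"*Task*: {context['task_instance'].task_id}\n"
        f"*Error*: {exc!r}\n"
        f"*Log Url*: {context['task_instance'].log_url}\n"
    )
</code></pre>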
|
<python><airflow>
|
2023-01-04 09:24:42
| 1
| 949
|
azaveri7
|
75,003,271
| 10,950,656
|
Iterate over rows while targeting columns two at a time
|
<p><code>cols_df</code> represents the chunk of the DataFrame I want to work on. In each round I target the first column plus a pair of the other columns, with the selected columns sorted from small to large (e.g. columns '0', '2', '3' in the first round and '0', '4', '5' in the second round). In a new column, I mark a row with an X if it lacks a numerical value in one of the two targeted columns. I apply this for each pair of <code>cols_df</code>'s columns, so I end up with a DataFrame containing the newly marked column along with all the other columns.</p>
<p>Input:</p>
<pre><code>import pandas as pd
cols_dict = {'matr': {0: '18I1', 1: '03I2', 2: '03I3', 3: '18I4', 4: '03I5', 5: '03I6', 6: '03I7', 7: '03I8', 8: '18I9', 9: '18I0'}, 'cat': {0: '3', 1: '3', 2: '3', 3: '3', 4: '3', 5: '18', 6: '3', 7: '3', 8: '3', 9: '3'}, 'Unnamed: 5': {0: 81, 1: 81, 2: 81, 3: 77, 4: None, 5: None, 6: 83, 7: 81, 8: 79, 9: 81}, 'Unnamed: 6': {0: 91, 1: 97, 2: 97, 3: 91, 4: None, 5: 93, 6: 89, 7: 83, 8: 81, 9: 99}, 'Unnamed: 7': {0: 117.0, 1: 115.0, 2: 115.0, 3: 115.0, 4: 115.0, 5: None, 6: 115.0, 7: 115.0, 8: 115.0, 9: 115.0}, 'Unnamed: 8': {0: 123.0, 1: 115.0, 2: 115.0, 3: 115.0, 4: 123.0, 5: 123.0, 6: 125.0, 7: 123.0, 8: 117.0, 9: None}}
cols_df = pd.DataFrame.from_dict(cols_dict)
</code></pre>
<p>The desired outout:</p>
<pre><code>cols_dict_out = {'matr': {0: '18I1', 1: '03I2', 2: '03I3', 3: '18I4', 4: '03I5', 5: '03I6', 6: '03I7', 7: '03I8', 8: '18I9', 9: '18I0'}, 'xs': {0: None, 1: None, 2: None, 3: None, 4: None, 5: 'X', 6: None, 7: None, 8: None, 9: 'X'}, 'cat': {0: '3', 1: '3', 2: '3', 3: '3', 4: '3', 5: '18', 6: '3', 7: '3', 8: '3', 9: '3'}, 'Unnamed: 5': {0: 81, 1: 81, 2: 81, 3: 77, 4: None, 5: None, 6: 83, 7: 81, 8: 79, 9: 81}, 'Unnamed: 6': {0: 91, 1: 97, 2: 97, 3: 91, 4: None, 5: 93, 6: 89, 7: 83, 8: 81, 9: 99}, 'Unnamed: 7': {0: 117.0, 1: 115.0, 2: 115.0, 3: 115.0, 4: 115.0, 5: None, 6: 115.0, 7: 115.0, 8: 115.0, 9: 115.0}, 'Unnamed: 8': {0: 123.0, 1: 115.0, 2: 115.0, 3: 115.0, 4: 123.0, 5: 123.0, 6: 125.0, 7: 123.0, 8: 117.0, 9: None}}
cols_out_df = pd.DataFrame.from_dict(cols_dict_out)
</code></pre>
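<p>A sketch that reproduces the expected output, assuming the rule is "mark X when exactly one column of a pair is missing" (rows where both or neither columns of a pair are missing stay blank):</p>
<pre><code>import numpy as np

pairs = [('Unnamed: 5', 'Unnamed: 6'), ('Unnamed: 7', 'Unnamed: 8')]
mask = np.zeros(len(cols_df), dtype=bool)
for left, right in pairs:
    # xor: exactly one of the two targeted columns is missing
    mask |= cols_df[left].isna().to_numpy() ^ cols_df[right].isna().to_numpy()

out_df = cols_df.copy()
out_df.insert(1, 'xs', np.where(mask, 'X', None))
</code></pre>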
|
<python><pandas>
|
2023-01-04 09:16:05
| 1
| 481
|
Maya_Cent
|
75,003,123
| 5,138,888
|
How to make a figure of a keyboard where each key has its own color
|
<p>I would like to prepare images of keyboards where each key can be assigned a different color. I have seen the Python <a href="https://pypi.org/project/keyboardlayout/" rel="nofollow noreferrer">keyboardlayout</a> package but it is not clear how to control the color of the keys. I would prefer to use existing Python packages.</p>
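<p>In case no existing package supports this directly, a hand-rolled fallback with matplotlib is what I have sketched so far; the single-row layout and colors below are made up for illustration, not a real keyboard layout:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

row = ["Q", "W", "E", "R", "T"]
colors = {"Q": "tomato", "W": "gold", "E": "skyblue", "R": "lightgreen", "T": "plum"}

fig, ax = plt.subplots(figsize=(5, 1.5))
for i, key in enumerate(row):
    # each key is just a coloured rectangle with its label centred on top
    ax.add_patch(Rectangle((i, 0), 0.9, 0.9, facecolor=colors[key], edgecolor="black"))
    ax.text(i + 0.45, 0.45, key, ha="center", va="center")
ax.set_xlim(0, len(row))
ax.set_ylim(0, 1)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
</code></pre>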
|
<python><matplotlib><keyboard-layout><color-mapping>
|
2023-01-04 09:03:34
| 1
| 656
|
ranlot
|
75,002,560
| 5,805,893
|
Install Local Python Package with pip
|
<p>I'm building a python package to use for 'global' functions (i.e. stuff that I will use in multiple other projects). I have built the package using <code>py -m build</code> and it then puts the <code>MyPackage-0.1.0.tar.gz</code> into the <code>dist</code> directory in my folder.</p>
<p>My goal is to be able to run <code>pip install MyPackage</code> from within any other projects, and it will install the latest build of my package. In other words, I do not want to use something like <code>--find-links</code>. This way, I could also include the package in a <code>requirements.txt</code> file.</p>
<p>I have tried putting the tarball in a directory which is on my system's <code>PATH</code>, and into a subfolder within there (e.g. <code>PathDir/MyPackage/MyPackage-0.1.0.tar.gz</code>), but I keep getting the same 'No matching distribution found' error.</p>
<p>The documentation for <code>pip install</code> says:</p>
<blockquote>
<p>pip looks for packages in a number of places: on PyPI (if not disabled via --no-index), in the local filesystem, and in any additional repositories specified via --find-links or --index-url.</p>
</blockquote>
<p>When it says 'in the local filesystem', where does it begin its search? Is there a way to change this (e.g. by setting some environment variable)?</p>
|
<python><pip>
|
2023-01-04 08:10:52
| 4
| 1,261
|
Michael Barrowman
|
75,002,460
| 6,213,883
|
How to pip install from a private artifactory?
|
<p>A private artifactory is available. We pushed a Python lib into it using <code>Twine</code>:</p>
<pre><code>/python-XYZ/mylib/version/mylib-version.tar.gz
</code></pre>
<p>However, I do not understand how we can get this library using <code>pip install</code>.</p>
<p>I have tried the following (based on <a href="https://www.zepl.com/use-your-private-python-libraries-from-artifactory-in-zepl/" rel="nofollow noreferrer">this</a>):</p>
<pre><code>pip install mylib==version --index 'https://myuser:mypassword@XYZ-DOMAIN.fr:443/artifactory/api/pypi/python-XYZ/simple'
</code></pre>
<p>Which gives this:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement mylib==version (from versions: none)
ERROR: No matching distribution found for mylib==version
</code></pre>
<p>The result is the same with the following endpoints:</p>
<pre><code>/artifactory/api/pypi/python-XYZ/mylib/simple
/artifactory/api/pypi/mylib/simple
/artifactory/api/pypi/python-XYZ/simple
</code></pre>
<p>Given the location of my library, how can I <code>pip install</code> it?</p>
|
<python><pip><artifactory>
|
2023-01-04 08:00:30
| 1
| 3,040
|
Itération 122442
|