QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
74,834,337
| 14,823,310
|
Why as type('category') is not saving memory in my data frame?
|
<p>I have a data frame with a column with strings that I want to optimize using 'category'. I am obvisouly doing something wrong as I thought the memory usage is far less with category rather than string.</p>
<pre><code>In [28]: df1.memory_usage()
Out[28]:
Index 15218784
DATE_CALCUL 15218784
ABN_CONTRAT 15218784
MONTANT_HT 15218784
dtype: int64
In [29]: df1['ABN_CONTRAT'].astype('category').memory_usage()
Out[29]: 28190544
</code></pre>
<p>Do you know why ?</p>
|
<python><pandas><dataframe><memory>
|
2022-12-17 13:09:18
| 1
| 591
|
pacdev
|
74,834,293
| 3,728,901
|
Windows 11 uninstall Python 3.9, install 3.11.1: Fatal error in launcher: Unable to create process using .. The system cannot find the file specified
|
<p>Windows 11 uninstall Python 3.9, then install Python 3.11.1: Fatal error in launcher: Unable to create process using</p>
<pre class="lang-py prettyprint-override"><code>D:\temp20221103>jupyter lab
Fatal error in launcher: Unable to create process using '"C:\Users\donhu\AppData\Local\Programs\Python\Python39\python.exe" "C:\Users\donhu\AppData\Local\Programs\Python\Python39\Scripts\jupyter.exe" lab': The system cannot find the file specified.
D:\temp20221103>
</code></pre>
<p><a href="https://i.sstatic.net/QEsGi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QEsGi.png" alt="enter image description here" /></a></p>
<p>Full log <a href="https://gist.github.com/donhuvy/98ca21f22a5abc7b7922143054eed7e2#file-python_console-log-L92" rel="nofollow noreferrer">https://gist.github.com/donhuvy/98ca21f22a5abc7b7922143054eed7e2#file-python_console-log-L92</a> . How to fix?</p>
<p>When I looking for file <code>jupyter-lab.exe</code> inside Windows Explorer, run it, it works. But this way is inconvenience.</p>
<p><a href="https://i.sstatic.net/dabDw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dabDw.png" alt="enter image description here" /></a></p>
<p>In Windows Explorer, I still seen Python 3.9</p>
<p><a href="https://i.sstatic.net/QaoYi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QaoYi.png" alt="enter image description here" /></a></p>
<p>The problem is when call command <code>jupyter lab</code>, it looks for inside wrong folder. How to fix it?</p>
|
<python><jupyter-lab>
|
2022-12-17 13:03:24
| 1
| 53,313
|
Vy Do
|
74,834,145
| 1,485,853
|
let sublimetext run last used build command
|
<p>As an example, there are two files, <code>main.py</code> and <code>module.py</code>, I could only run the project when <code>main.py</code> is in the current active tab window by <code>Ctrl + B</code>. However, a big amount of my code work is in <code>module.py</code>, after some modification of <code>module.py</code>, I have to switch to <code>main.py</code> to run and test my code, frequently switching to <code>main.py</code> is not convenient, I wonder if there is a way to run <code>module.py</code> when the current file in the active tab window is <code>module.py</code> rather than <code>main.py</code>.</p>
<p>Briefly, I have <code>main.py</code> and <code>module.py</code> opened in Sublime, after running <code>main.py</code> I switch to <code>module.py</code>, how to run <code>main.py</code> when the current file in the active tab window is <code>module.py</code> rather than <code>main.py</code>?</p>
<p>I found a solution in <a href="https://irwinkwan.com/2013/08/22/sublime-text-always-run-a-single-file-in-a-project-no-matter-what-is-focused/" rel="nofollow noreferrer">this post</a> , but that requires to configure manually, I expect there is a feature to remember last build command so that I can repeat last command by some shortcut. It would be welcome if someone can guide me to write some plugin to implement the feature.</p>
|
<python><sublimetext3><sublimetext><sublime-text-plugin>
|
2022-12-17 12:41:21
| 1
| 2,478
|
iMath
|
74,833,958
| 324,315
|
How to convert None values in Json to null using Pyspark?
|
<p>Currently I am parsing my Json feeds with:</p>
<pre><code>rdd = self.spark.sparkContext.parallelize([(json_feed)])
df = self.spark.read.json(rdd)
</code></pre>
<p>That works fine as long as values are all there, but if I have a Json (as Python dict) like:</p>
<pre><code>json_feed = { 'name': 'John', 'surname': 'Smith', 'age': None }
</code></pre>
<p>I would like to get the generated DataFrame with a value <code>null</code> on the <code>age</code> column, but what I am getting at the moment is <code>_corrupt_record</code>. Is there a way to parse <code>None</code> values to <code>null</code> with Pyspark?</p>
|
<python><pyspark>
|
2022-12-17 12:10:26
| 2
| 9,153
|
Randomize
|
74,833,942
| 1,290,055
|
Efficient way to get the minimal tuple from a set of columns in polars
|
<p>What is the most efficient way in polars to do this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import numpy as np
rng = np.random.default_rng()
df = pl.DataFrame([
pl.Series('a', rng.normal(size=10_000_000)),
pl.Series('b', rng.normal(size=10_000_000)),
])
df.sort('a', 'b').head(1)
</code></pre>
<p>I.e. I want to find the smallest tuple of numbers (a, b) based on lexicographical ordering. Finding the minimum does not require a full sort, so the above code is very inefficient. I tried</p>
<pre class="lang-py prettyprint-override"><code>df.lazy().sort('a', 'b').head(1).collect()
</code></pre>
<p>but this does not lead to a significant speedup. Creating a single column of type <code>pl.Struct</code> does not help either since polars does not seem to define a notion of ordering on structs.</p>
<h3>Update</h3>
<p>Thanks to the answer from ΩΠΟΚΕΚΡΥΜΜΕΝΟΣ I have now settled for the following solution:</p>
<pre class="lang-py prettyprint-override"><code>def lexicographic_min(df):
columns = list(df.columns)
for col in columns:
if df.select(pl.col(col).is_null().any()).row(0)[0]:
df = df.filter(pl.col(col).is_null())
else:
df = df.filter(pl.col(col) == pl.col(col).min())
return df.row(0)
def lexicographic_max(df):
columns = list(df.columns)
for col in columns:
df = df.filter(pl.col(col) == pl.col(col).max())
return df.row(0)
</code></pre>
<p>This version handles null values by considering them 'smaller' than any non-null values and does not require any invocations of <code>.sort()</code>.</p>
|
<python><sorting><python-polars>
|
2022-12-17 12:08:30
| 1
| 1,823
|
Martin Wiebusch
|
74,833,794
| 5,362,515
|
Python threading code giving different thread counts in Notebook and in shell
|
<p>I just started learning Python <code>asyncio</code> and ran a script containing following basic code in Jupyter Notebook -</p>
<pre><code>import threading
def hello_from_thread():
print(f'Hello from thread {threading.current_thread()}!')
hello_thread = threading.Thread(target=hello_from_thread)
hello_thread.start()
total_threads = threading.active_count()
thread_name = threading.current_thread().name
print(f'Python is currently running {total_threads} thread(s)')
print(f'The current thread is {thread_name}')
hello_thread.join()
</code></pre>
<p>In notebook, this gives following (seemingly wrong) output -</p>
<pre><code>Hello from thread <Thread(Thread-1, started 6492)>!
Python is currently running 1 thread(s)
The current thread is MainThread
</code></pre>
<p>But when I run the same script through command prompt, I get something like following -</p>
<pre><code>Hello from thread <Thread(Thread-1, started 5486)>!
Python is currently running 2 thread(s)
The current thread is MainThread
</code></pre>
<p>The shell output is in line with the tutorial I am following. I don't get why this is happening. What is going on?</p>
|
<python><python-multithreading>
|
2022-12-17 11:43:33
| 1
| 327
|
mayankkaizen
|
74,833,769
| 3,515,313
|
Scapy proxy HTTP packet
|
<p>I want to send an HTTP packet to port 31112, but I want to change the IP identification header to 0xabcd.</p>
<p>What I am doing is using iptables for, whatever packet with destination port 31112, redirect it to a queue:</p>
<pre><code>iptables -A OUTPUT -p tcp --dport 31112-j NFQUEUE --queue-num 1
</code></pre>
<p>I have also enabled forwarding:</p>
<pre><code>sysctl -w net.ipv4.ip_forward=1
</code></pre>
<p>My program is this one:</p>
<pre><code>#!/usr/bin/env python
from netfilterqueue import NetfilterQueue
from scapy.all import *
def print_and_accept(pkt):
data = pkt.get_payload()
ip_h = IP(data)
print ('source: ' + ip_h[IP].src)
print ('destination: ' + ip_h[IP].dst)
print ('IP TTL: ' + str(ip_h[IP].ttl))
print (str (ip_h[TCP].payload))
ip_h[IP].ttl = 40
ip_h[IP].id = 0xabcd
#print (ip_h[IP].id)
del ip_h[IP].chksum
send(ip_h,verbose=0)
nfqueue = NetfilterQueue()
nfqueue.bind(1, print_and_accept)
try:
nfqueue.run()
except KeyboardInterrupt:
print ('\nProgram Ended')
</code></pre>
<p>And, when I send a curl to my destination:</p>
<pre><code>curl http://serverexample.com:31112/
</code></pre>
<p>I get this in my program's output:</p>
<pre><code>source: 192.168.206.128
destination: 35.182.181.240
IP TTL: 64
</code></pre>
<p>It is weird that I don't capture this:</p>
<p>print (str (ip_h[TCP].payload))</p>
<p>which I think it must be something like "GET / HTTP/1.1" and whatever headers might follow.</p>
<p>I want to know if someone can spot the issue.</p>
<p>Regards</p>
|
<python><network-programming><tcp><scapy><netfilter>
|
2022-12-17 11:40:23
| 1
| 1,949
|
aDoN
|
74,833,553
| 11,291,663
|
How to plot my own logistic regression decision boundaries and SKlearn's ones on the same figure
|
<p>I have an assignment in which I need to compare my own multi-class logistic regression and the built-in SKlearn one.</p>
<p>As part of it, I need to plot the decision boundaries of each, on the same figure (for 2,3, and 4 classes separately).</p>
<p>This is my model's decision boundaries for 3 classes:</p>
<p><a href="https://i.sstatic.net/tBfZ9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tBfZ9.png" alt="My model - 3 classes" /></a></p>
<p>Made with this code:</p>
<pre><code>x1_min, x1_max = X[:,0].min()-.5, X[:,0].max()+.5
x2_min, x2_max = X[:,1].min()-.5, X[:,1].max()+.5
xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))
grid = np.c_[xx1.ravel(), xx2.ravel()]
for i in range(len(ws)):
probs = ol.predict_prob(grid, ws[i]).reshape(xx1.shape)
plt.contour(xx1, xx2, probs, [0.5], linewidths=1, colors='green')
</code></pre>
<p>where</p>
<ul>
<li><code>ol</code> - is my Own Linear regression</li>
<li><code>ws</code> - the current weights</li>
</ul>
<p>That's how I tried to plot the Sklearn boundaries:</p>
<pre><code>for i in range(len(clf.coef_)):
w = clf.coef_[i]
a = -w[0] / w[1]
xx = np.linspace(x1_min, x1_max)
yy = a * xx - (clf.intercept_[0]) / w[1]
plt.plot(xx, yy, 'k-')
</code></pre>
<p>Resulting</p>
<p><a href="https://i.sstatic.net/ZV23K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZV23K.png" alt="SKlearn - 3 classes" /></a></p>
<p>I understand that it's due to the 1dim vs 2dim grids, but I can't understand how to solve it.</p>
<p>I also tried to use the built-in <code>DecisionBoundaryDisplay</code> but I couldn't figure out how to plot it with my boundaries + it doesn't plot only the lines but also the whole background is painted in the corresponding color.</p>
|
<python><plot><scikit-learn><logistic-regression>
|
2022-12-17 10:59:21
| 1
| 313
|
RedYoel
|
74,833,499
| 13,230,147
|
Type hints for Python 3 dictionary containing required and un-determined optional keys
|
<p>If I have a dictionary which contains required and but also arbitrary optional keys, how should I type this dictionary?</p>
<p>Example:</p>
<pre><code>Dictionary family:
father: str,
mother: str,
# optional key start
son1: str,
daughter1: str,
son2: str,
daughter2: str,
# arbitrary many more sons, daughters, grandparents, ...
</code></pre>
<p>The <code>family</code> can contain any one of them.</p>
<pre><code>{father: "Bob", mother: "Alice"}
</code></pre>
<p>or</p>
<pre><code>{father: "Bob", mother: "Alice", son1: "Peter"}
</code></pre>
<p>or</p>
<pre><code>{father: "Bob", mother: "Alice", daughter1: "Jane"}
</code></pre>
<p>If I just use <code>Dict[str, str]</code>, users cannot know <code>father</code> and <code>mother</code> are required. If I use a <code>TypedDict</code>, no additional keys are allowed; <code>total=False</code> and <code>NotRequired[str]</code> still require to know all keys up-front. How to type this?</p>
<hr />
<p>In TypeScript on would use</p>
<pre><code>type family = {
father: string,
mother: string,
[key: string]: string
}
</code></pre>
<p>to allow <em>arbitrary</em> additional keys of type <code>string</code>.</p>
<p>Does Python have this function?</p>
|
<python><python-typing><typeddict>
|
2022-12-17 10:49:33
| 1
| 343
|
obelisk0114
|
74,833,164
| 7,451,580
|
using exec how to save variable equal to a string
|
<p>I am using exec to save a variable equal to a string. I am getting a SyntaxError. I'm assuming exec is getting confused with the value as string. Is this assumption accurate? Would appreciate the learnings! If I changed each question to an str(int), the code will work. Any help is much appreciated.</p>
<pre><code>json_template = {
"Introduction" : {
"What is your Department?" : "",
"What is your Name?" : "",
"What is your Email?" : ""
},
"Context" : {
"What is the necessary action or change?": "",
"What is the urgency?" : "",
"What lead to this change?" : "",
"What is the opportunity or pain point this action solves?" : ""
}
}
for each_category in json_template:
for index, each_question in enumerate(json_template[each_category]):
left_side = each_category + str(index)
right_side = each_question
bigstring = '='.join([left_side, right_side])
exec(bigstring)
print(bigstring)
</code></pre>
<p>Error below:</p>
<blockquote>
<pre><code>exec(bigstring)
File "<string>", line 1
Introduction0=What is your Department?
^^^^^^^^^^
</code></pre>
</blockquote>
|
<python><python-exec>
|
2022-12-17 09:54:34
| 1
| 441
|
BrianBeing
|
74,833,139
| 12,000,021
|
Outlier detection in time-series
|
<p>I have a dataset in the following form:</p>
<pre><code> timestamp consumption
2017-01-01 00:00:00 14.3
2017-01-01 01:00:00 29.1
2017-01-01 02:00:00 28.7
2017-01-01 03:00:00 21.3
2017-01-01 04:00:00 18.4
... ... ...
2017-12-31 19:00:00 53.2
2017-12-31 20:00:00 43.5
2017-12-31 21:00:00 37.1
2017-12-31 22:00:00 35.8
2017-12-31 23:00:00 33.8
</code></pre>
<p>And I want to perform anomaly detection in the sense that it predicts abnormal high or low values.</p>
<p>I am performing <code>isolation forest</code>:</p>
<pre><code>IF = IsolationForest(random_state=0, contamination=0.005, n_estimators=200, max_samples=0.7)
IF.fit(model_data)
# New Outliers Column
data['Outliers'] = pd.Series(IF.predict(model_data)).apply(lambda x: 1 if x == -1 else 0)
# Get Anomaly Score
score = IF.decision_function(model_data)
# New Anomaly Score column
data['Score'] = score
data.head()
</code></pre>
<p>The result that I am getting as outliers is the following:</p>
<p><a href="https://i.sstatic.net/GyBGw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GyBGw.png" alt="enter image description here" /></a></p>
<p>It seems that identifies the peaks, but it misses some low values that are apparently outliers and I have highlighted them in the plot.</p>
<p>Any idea of what is causing this error?</p>
|
<python><machine-learning><outliers><anomaly-detection><isolation-forest>
|
2022-12-17 09:49:58
| 1
| 428
|
Kosmylo
|
74,832,746
| 7,168,244
|
Include both % and N as bar labels
|
<p>I have created a bar plot with percentages. However, since there's possibility of attrition I would like to include N, the number of observations or sample size (in brackets) as part of the bar labels. In other words, N should be the count of baseline and endline values.</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter
import seaborn as sns
import pandas as pd
import numpy as np
data = {
'id': [1, 1, 2, 3, 3, 4, 4, 5, 6, 6, 7, 7, 8, 8, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15],
'survey': ['baseline', 'endline', 'baseline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', ],
'growth': [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0]
}
df = pd.DataFrame(data)
sns.set_style('white')
ax = sns.barplot(data = df,
x = 'survey', y = 'growth',
estimator = lambda x: np.sum(x) / np.size(x) * 100, ci = None,
color = 'cornflowerblue')
ax.bar_label(ax.containers[0], fmt = '%.1f %%', fontsize = 20)
sns.despine(ax = ax, left = True)
ax.grid(True, axis = 'y')
ax.yaxis.set_major_formatter(PercentFormatter(100))
ax.set_xlabel('')
ax.set_ylabel('')
plt.tight_layout()
plt.show()
</code></pre>
<p>I will appreciate guidance on how to achieve this
Thanks in advance!</p>
|
<python><pandas><matplotlib><seaborn>
|
2022-12-17 08:29:47
| 1
| 481
|
Stephen Okiya
|
74,832,586
| 688,208
|
Python: muliple projects in the same package
|
<p>I want to have multiple Python projects in the same package. For example: <code>mycompany.parser</code>, <code>mycompany.database</code>. These projects have to be able to be installed separately. So a user can have only <code>mycompany.parser</code> or only <code>mycompany.database</code>, or both.</p>
<p>Is it possible to achieve?</p>
|
<python><python-packaging>
|
2022-12-17 07:55:42
| 1
| 493
|
Number47
|
74,832,296
| 17,582,019
|
"TypeError: string indices must be integers" when getting data of a stock from Yahoo Finance using Pandas Datareader
|
<pre><code>import pandas_datareader
end = "2022-12-15"
start = "2022-12-15"
stock_list = ["TATAELXSI.NS"]
data = pandas_datareader.get_data_yahoo(symbols=stock_list, start=start, end=end)
print(data)
</code></pre>
<p>When I run this code, I get error <code>"TypeError: string indices must be integers"</code>.</p>
<p>Edit : I have updated the code and passed list as symbol parameter but it still shows the same error</p>
<p>Error :</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Deepak Shetter\PycharmProjects\100DAYSOFPYTHON\mp3downloader.py", line 7, in <module>
data = pandas_datareader.get_data_yahoo(symbols=[TATAELXSI], start=start, end=end)
File "C:\Users\Deepak Shetter\PycharmProjects\100DAYSOFPYTHON\venv\lib\site-packages\pandas_datareader\data.py", line 80, in get_data_yahoo
return YahooDailyReader(*args, **kwargs).read()
File "C:\Users\Deepak Shetter\PycharmProjects\100DAYSOFPYTHON\venv\lib\site-packages\pandas_datareader\base.py", line 258, in read
df = self._dl_mult_symbols(self.symbols)
File "C:\Users\Deepak Shetter\PycharmProjects\100DAYSOFPYTHON\venv\lib\site-packages\pandas_datareader\base.py", line 268, in _dl_mult_symbols
stocks[sym] = self._read_one_data(self.url, self._get_params(sym))
File "C:\Users\Deepak Shetter\PycharmProjects\100DAYSOFPYTHON\venv\lib\site-packages\pandas_datareader\yahoo\daily.py", line 153, in _read_one_data
data = j["context"]["dispatcher"]["stores"]["HistoricalPriceStore"]
TypeError: string indices must be integers
</code></pre>
|
<python><yahoo-finance><pandas-datareader>
|
2022-12-17 06:53:00
| 5
| 790
|
Deepak
|
74,831,956
| 13,079,519
|
How to extract value from specific P-tag with BeautifulSoup?
|
<p>Is there a way to only extract the value for Acid(<code>5.9 g/L</code>) and Alcohol(<code>14.5%</code>)?</p>
<p>I thought of using <code>find_all('p')</code>, but it is giving me all the p tag while I only need two of them.</p>
<p><a href="https://i.sstatic.net/vpQ3P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vpQ3P.png" alt="enter image description here" /></a></p>
|
<python><html><web-scraping><beautifulsoup>
|
2022-12-17 05:17:30
| 1
| 323
|
DJ-coding
|
74,831,714
| 874,380
|
Why does spaCy run slower in a difference conda environment?
|
<p>I used JupyterLab to preprocess a larger set of text documents with spaCy. While there's overall no problem, I've noticed that there's a huge speed difference when I use different conda kernels / virtual environments. The difference is about 10x.</p>
<p>Both environments have the same version of spaCy and NumPy installed; also both using the same Python version (3.9.15).</p>
<pre><code>numpy 1.23.4 py39h14f4228_0
spacy 3.3.1 py39h79cecc1_0
</code></pre>
<p>so I cannot tell where the speed difference might come from. Maybe it's from another package that spaCy requires?</p>
<p>I also converted the notebooks into .py scripts and running from the console, but the same results: In one virtual environment it runs about 10x slower.</p>
|
<python><anaconda><conda><spacy>
|
2022-12-17 04:03:37
| 1
| 3,423
|
Christian
|
74,831,706
| 364,197
|
zlib error code -3 while using zlib to decompress PDF Flatedecode stream
|
<p>I am trying to extract some information from a PDF file. There is a 12 character stream that is compressed with Flatedecode that I've been unable to decompress although other streams in the document are readily decompressed with the same python 3.9 program.</p>
<p>This is extracted from a US Government - FAA Instrument procedures plate) PDF document that opens without an issue in Adobe acrobat.</p>
<p>The Excellent RUPS program for investigating PDFs which is written by the author of iText also appears to have difficulty decoding this stream as it shows only a single character from the 12-byte stream.</p>
<pre><code>import zlib
hexDigits = "78 9c e3 2a e4 e5 02 20 01 a3 20 93"
stripWhitespace = hexDigits.replace(" ", "")
myByteArray = bytearray.fromhex(stripWhitespace)
data = zlib.decompress(myByteArray) # Here I get Error -3 while decompressing data: incorrect data check
print(data)
</code></pre>
|
<python><pdf><zlib><itext7>
|
2022-12-17 04:01:43
| 1
| 1,133
|
DarwinIcesurfer
|
74,831,663
| 1,497,199
|
How to create unit tests for a FastAPI endpoint that makes request to another endpoint?
|
<p>I have a FastAPI app that makes requests to other endpoint within a function that handles a particular request.</p>
<p>How can I build unit tests for this endpoint using <code>fastapi.testclient.TestClient</code>?</p>
<pre><code>import fastapi
import requests
import os
app = fastapi.FastAPI()
# in production this will be a k8s service to allow for
# automatic scaling and load balancing.
# The default value works when the app is run using uvicorn
service_host = os.environ.get('SERVICE_HOST', 'localhost:8000')
@app.post('/spawner')
def spawn():
# an endpoint that makes multiple requests to set of worker endpoints
for i in range(4):
url = 'http://'+service_host+f'/create/{i}'
requests.post(url)
return {'msg': 'spawned the creation/update of 4 resources'}
@app.post('/create/{index}')
def worker(index: int):
# an endpoint that does the actual work of creating/updating the resource
return {'msg': f'Created/updated resource {index}'}
</code></pre>
|
<python><unit-testing><fastapi>
|
2022-12-17 03:47:43
| 1
| 8,229
|
Dave
|
74,831,603
| 9,983,652
|
yahoo_fin get_balance_sheet() doesn't return any data
|
<p>I am using yahoo_fin to get some fundamental data and it is working fine for PE, PS ratio etc. but when using get_balance_sheet(), it return no data. Any suggestion? Thanks</p>
<pre><code>import yahoo_fin.stock_info as si
sheet = si.get_balance_sheet("AAPL")
</code></pre>
<p>Get PE ratio is fine</p>
<pre><code>quote = si.get_quote_table("aapl")
quote["PE Ratio (TTM)"]
22.01
</code></pre>
|
<python><yfinance>
|
2022-12-17 03:30:11
| 0
| 4,338
|
roudan
|
74,831,583
| 640,558
|
for python string how can I print each line as a new line?
|
<p>I have a weird problem. I call a API and get this result:</p>
<pre><code>1. Increase the use of alternative fuels: The aviation industry can reduce its carbon emissions by increasing the use of alternative fuels such as biofuels, hydrogen, and synthetic fuels.
2. Improve aircraft efficiency: The aviation industry can reduce its carbon emissions by improving the efficiency of aircraft through the use of advanced materials, aerodynamic designs, and lighter engines.
3. Utilize air traffic management systems: The aviation industry can reduce its carbon emissions by utilizing air traffic management systems that optimize flight paths and reduce fuel consumption.
4. Invest in research and development: The aviation industry can reduce its carbon emissions by investing in research and development of new technologies that can reduce emissions.
5. Increase the use of renewable energy: The aviation industry can reduce its carbon emissions by increasing the use of renewable energy sources such as solar, wind, and geothermal.
6. Implement carbon offset programs: The aviation industry can reduce its carbon emissions by implementing carbon offset programs that allow airlines to purchase carbon credits to offset their emissions.
</code></pre>
<p>I am trying to print each line as it's own line(I want to save this to variable later) but it's not working. When I try:</p>
<pre><code>for item in reponse:
print("*")
print(item)
</code></pre>
<p>it just prints 1 character at a time. What can I do to save each line at a time? I was trying to see the raw string data but I'm not sure how or why it's doing a new line.</p>
<p>What can I do?</p>
|
<python>
|
2022-12-17 03:24:45
| 3
| 26,167
|
Lostsoul
|
74,831,471
| 4,451,521
|
Filtering dataframes based on one column with a different type of other column
|
<p>I have the following problem</p>
<pre><code>import pandas as pd
data = {
"ID": [420, 380, 390, 540, 520, 50, 22],
"duration": [50, 40, 45,33,19,1,3],
"next":["390;50","880;222" ,"520;50" ,"380;111" ,"810;111" ,"22;888" ,"11" ]
}
#load data into a DataFrame object:
df = pd.DataFrame(data)
print(df)
</code></pre>
<p>As you can see I have</p>
<pre><code> ID duration next
0 420 50 390;50
1 380 40 880;222
2 390 45 520;50
3 540 33 380;111
4 520 19 810;111
5 50 1 22;888
6 22 3 11
</code></pre>
<p>Things to notice:</p>
<ul>
<li>ID type is int</li>
<li>next type is a string with numbers separated by ; if more than two numbers</li>
</ul>
<p>I would like to filter the rows with no next in the ID</p>
<p>For example in this case</p>
<ul>
<li><p>420 has a follow up in both 390 and 50</p>
</li>
<li><p>380 has as next 880 and 222 both of which are not in ID so this one</p>
</li>
<li><p>540 has as next 380 and 111 and while 111 is not in ID, 380 is so not this one</p>
</li>
<li><p>same with 50</p>
</li>
</ul>
<p>In the end I want to get</p>
<pre><code>1 380 40 880;222
4 520 19 810;111
6 22 3 11
</code></pre>
<p>With only one value I used <code>print(df[~df.next.astype(int).isin(df.ID)])</code> but in this case <code>isin</code> can not be simply applied.</p>
<p>How can I do this?</p>
|
<python><pandas><dataframe>
|
2022-12-17 02:48:09
| 2
| 10,576
|
KansaiRobot
|
74,831,439
| 2,416,097
|
Subprocess.call appears to modify (rsync) command inputs causing failure
|
<p>I am attempting to write a python script that calls rsync to synchronize code between my vm and my local machine. I had done this successfully using subprocess.call to write from my local machine to my vm, but when I tried to write a command that would sync from my vm to my local machine, subprocess.call seems to erroneously duplicate text or modify the command in a way that doesn't make sense to me.</p>
<p>I've seen a number of posts about subprocess relating to whether a shell is expected / being used, but I tried running the command with both <code>shell=True</code> and shell=False` and saw the same weird command modification.</p>
<p>Here is what I am running:</p>
<pre class="lang-py prettyprint-override"><code>command = ['rsync', '-avP', f'{user}@{host}:/home/{user}/Workspace/{remote_dir}', os.getcwd()]) # attempting to sync from a dir on my remote machine to my current directory on my local machine
print('command: ' + ' '.join(command))
subprocess.call(command)
</code></pre>
<p>Here is the output (censored):</p>
<pre><code>command: rsync -avP benjieg@[hostname]:/home/benjieg/Workspace/maige[dirname] /Users/benjieg/Workspace
building file list ...
rsync: link_stat "/Users/benjieg/Workspace/benjieg@[hostname]:/home/benjieg/Workspace/[dirname]" failed: No such file or directory (2)
8383 files to consider
sent 181429 bytes received 20 bytes 362898.00 bytes/sec
total size is 217164673 speedup is 1196.84
rsync error: some files could not be transferred (code 23) at /AppleInternal/Library/BuildRoots/810eba08-405a-11ed-86e9-6af958a02716/Library/Caches/com.apple.xbs/Sources/rsync/rsync/main.c(996) [sender=2.6.9]
</code></pre>
<p>If I run the command directly in my terminal, it works just fine. The mysterious thing that i'm seeing is that subprocess appears to append some extra path before my user name in the command. You can see this in the 'link stat' line in the output. I'm guessing this is why it's unable to find the directory. But I really can't figure out why it would do that.</p>
<p>Further, I tried moving out of the 'Workspace' directory on my local machine to home, and even more weird behavior ensued. It seemed to begin looking into all of the files on my local machine, even prompting dropbox to ask me if iterm2 had permission to access it. Why would this happen? Why would whatever subprocess is doing cause a search of my whole local file system? I'm really mystified by this seemingly random behavior.</p>
<p>I'm on MacOS 13.0.1 (22A400) using python 3.10.4</p>
|
<python><subprocess><rsync>
|
2022-12-17 02:36:49
| 0
| 4,079
|
bgenchel
|
74,831,339
| 5,508,736
|
flask-HTTPTokenAuth Fails to Verify Token
|
<p>I'm working my way through Miguel Grinberg's book Flask Web Development, and I've run into a snag in Chapter 14 (Application Programming Interfaces) with the authentication routine. I'm attempting to update the code to use the current version of flask-HTTPAuth according to the example code in the github repo. I can authenticate to HTTPBasicAuth with email/password, but when I try to pass a token I still get a password prompt.</p>
<p>Here is my app/api/authentication.py file:</p>
<pre class="lang-py prettyprint-override"><code>from flask import g, jsonify
from flask_httpauth import HTTPBasicAuth, HTTPTokenAuth, MultiAuth
from ..models import User
from . import api
from .errors import forbidden, unauthorized
basic_auth = HTTPBasicAuth()
token_auth = HTTPTokenAuth(scheme='Bearer')
multi_auth = MultiAuth(basic_auth, token_auth)
@basic_auth.verify_password
def verify_password(email, password):
if email == '':
return False
user = User.query.filter_by(email=email).first()
if not user:
return False
g.current_user = user
g.token_used = False
if user.verify_password(password):
return user
else:
return False
@token_auth.verify_token
def verify_token(token):
user = User.verify_auth_token(token)
if user:
g.current_user = user
g.token_used = True
return user
else:
return False
@basic_auth.error_handler
def auth_error():
return unauthorized('Invalid credentials')
@token_auth.error_handler
def auth_error():
return unauthorized('Invalid credentials')
@api.before_request
@multi_auth.login_required
def before_request():
if not g.current_user.is_anonymous and not g.current_user.confirmed:
return forbidden('Unconfirmed account')
@api.route('/tokens/', methods=['POST'])
@multi_auth.login_required
def get_token():
if g.current_user.is_anonymous or g.token_used:
return unauthorized('Invalid credentials')
return jsonify({'token': g.current_user.generate_auth_token(), 'expiration': 3600})
</code></pre>
<p>I'm using Python 3.10.6, Flask 2.2.2 and HTTPie 3.2.1. What am I doing wrong?</p>
|
<python><flask><httpie><flask-httpauth>
|
2022-12-17 02:07:41
| 1
| 5,266
|
Darwin von Corax
|
74,831,338
| 1,698,736
|
How do you import a nested package via string?
|
<p>I'm trying to dynamically import a package which is a subdirectory of another package.</p>
<p>Though there is no <a href="https://docs.python.org/3/library/importlib.html" rel="nofollow noreferrer">documentation</a> about it, this doesn't seem to be possible with <code>importlib</code>.</p>
<p>Example:
Start with</p>
<pre><code>pip install pytest
</code></pre>
<p>then,</p>
<pre><code>import importlib
mod = importlib.import_module("pytester", package="_pytest")
</code></pre>
<p>fails with <code>ModuleNotFoundError: No module named 'pytester'</code>, while</p>
<pre><code>from _pytest import pytester
</code></pre>
<p>succeeds.</p>
<pre><code>import importlib
mod = importlib.__import__("_pytest.pytester")
</code></pre>
<p>also succeed, but returns the <code>_pytest</code> module, not the <code>pytester</code> module, and I don't seem to have a way of accessing <code>mod.pytester</code> by string.</p>
<p>Am I doing something wrong? Is there another way to achieve this?</p>
|
<python><import><python-module><python-importlib>
|
2022-12-17 02:07:17
| 1
| 9,202
|
cowlinator
|
74,831,253
| 1,893,234
|
Rethinking Python for loop to pySpark to create dataframes
|
<p>I have a list of accounts which I iterate in a loop calling the details from an API function <code>get_accounts</code>. The JSON response for each call includes details for one account with one or more contacts which I add to dataframes <code>df_accounts</code> and <code>df_contacts</code> respectively.</p>
<p>The API loop portion works though it takes a long time. The volume of data is under 5K accounts and 10K contacts. The final goal is to save two delta tables one for all accounts and one contacts. I know there <strong>must be a better approach</strong> utilizing pySpark architecture.</p>
<pre><code>account_details = []
provider_details = []
for account_id in account_header['providerAccounts']:
account_detail = get_accounts(account_id['id'])
account_details.append(spark.createDataFrame(data = [account_detail['providerAccount']], schema=account_schema))
for provider in account_detail['providerAccount']['providers']:
provider_details.append(spark.createDataFrame(data = [provider], schema=provider_schema))
df_account = reduce(DataFrame.unionAll, account_details)
df_provider = reduce(DataFrame.unionAll, provider_details)
</code></pre>
<p>However an error occurs when I try to save the dataframes as a delta tables.</p>
<pre><code>df_accounts.write.format("delta").mode('overwrite').save(acct_fulldeltapath)
df_provider.write.format("delta").mode('overwrite').save(px_fulldeltapath)
</code></pre>
<p>Error code below:</p>
<pre><code>Py4JJavaError Traceback (most recent call last)
<ipython-input-19-7b487f51> in <module>
4 px_fulldeltapath = pathFormatted + px_pathSparkSubFolderFull
5
----> 6 df_contacts.write.format("delta").mode('overwrite').save(px_fulldeltapath)
/opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py in save(self, path, format, mode, partitionBy, **options)
738 self._jwrite.save()
739 else:
--> 740 self._jwrite.save(path)
741
742 @since(1.4)
~/cluster-env/env/lib/python3.8/site-packages/py4j/java_gateway.py in __call__(self, *args)
1319
1320 answer = self.gateway_client.send_command(command)
-> 1321 return_value = get_return_value(
1322 answer, self.gateway_client, self.target_id, self.name)
1323
/opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py in deco(*a, **kw)
109 def deco(*a, **kw):
110 try:
--> 111 return f(*a, **kw)
112 except py4j.protocol.Py4JJavaError as e:
113 converted = convert_exception(e.java_exception)
~/cluster-env/env/lib/python3.8/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o63219.save.
...
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 14638 tasks (4.0 GiB) is bigger than spark.driver.maxResultSize (4.0 GiB)
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2464)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2413)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2412)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
</code></pre>
|
<python><pyspark><apache-spark-sql><azure-synapse>
|
2022-12-17 01:47:54
| 0
| 2,154
|
Alen Giliana
|
74,831,244
| 3,247,006
|
How to run "SELECT FOR UPDATE" instead of "SELECT" when adding data in Django Admin?
|
<p>In <code>PersonAdmin():</code> below, I overrode <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.response_add" rel="nofollow noreferrer"><strong>response_add()</strong></a> with <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#select-for-update" rel="nofollow noreferrer"><strong>select_for_update()</strong></a> so that <a href="https://stackoverflow.com/a/73847878/8172439"><strong>write skew</strong></a> doesn't occur then only 2 persons can be added on <strong>Add person</strong> and overrode <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.save_model" rel="nofollow noreferrer"><strong>save_model()</strong></a> so that <code>obj.save()</code> works only when changing a person on <strong>Change person</strong>:</p>
<pre class="lang-py prettyprint-override"><code># "store/admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
def response_add(self, request, obj, post_url_continue=None):
# Here
obj_count = super().get_queryset(request).select_for_update().all().count()
if obj_count < 2:
obj.save()
return super().response_add(request, obj, post_url_continue)
def save_model(self, request, obj, form, change):
last_part_of_path = request.path.split('/')[-2]
if last_part_of_path == "change":
obj.save() # Here
</code></pre>
<p>But, when adding a person on <strong>Add person</strong> as shown below:</p>
<p><a href="https://i.sstatic.net/ydyHS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ydyHS.png" alt="enter image description here" /></a></p>
<p><code>SELECT</code> is run instead of <code>SELECT FOR UPDATE</code> as shown below. *I use <strong>PostgreSQL</strong> and these logs below are <strong>the queries of PostgreSQL</strong> and you can check <a href="https://stackoverflow.com/questions/54780698/postgresql-database-log-transaction/73432601#73432601"><strong>On PostgreSQL, how to log queries with transaction queries such as "BEGIN" and "COMMIT"</strong></a>:</p>
<p><a href="https://i.sstatic.net/bbJMs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bbJMs.png" alt="enter image description here" /></a></p>
<p>So, how can I run <code>SELECT FOR UPDATE</code> instead of <code>SELECT</code> when adding a person on <strong>Add person</strong>?</p>
|
<python><python-3.x><django><django-admin><select-for-update>
|
2022-12-17 01:45:44
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
74,831,243
| 14,159,985
|
Getting empty dataframe after foreachPartition execution in Pyspark
|
<p>I'm kinda new in PySpark and I'm trying to perform a foreachPartition function in my dataframe and then I want to perform another function with the same dataframe.
The problem is that after using the foreachPartition function, my dataframe gets empty, so I cannot do anything else with it. My code looks like the following:</p>
<pre><code>def my_random_function(partition, parameters):
#performs something with the dataframe
#does not return anything
my_py_spark_dataframe.foreachPartition(
lambda partition: my_random_function(partition, parameters))
</code></pre>
<p>Could someone tell me how can I perform this foreachPartition and also use the same dataframe to perform other functions?</p>
<p>I saw some users talking about copying the dataframe using df.toPandas().copy() but in my case this causes some perform issues, so I would like to use the same dataframe instead of creating a new one.</p>
<p>Thank you in advance!</p>
|
<python><apache-spark><pyspark>
|
2022-12-17 01:45:38
| 1
| 338
|
fernando fincatti
|
74,831,234
| 15,171,387
|
Get a list from numpy ndarray in Python?
|
<p>I have a numpy.ndarray here which I am trying to convert it to a list.</p>
<pre><code>>>> a=np.array([[[0.7]], [[0.3]], [[0.5]]])
</code></pre>
<p>I am using hstack for it. However, I am getting a list of a list. How can I get a list instead? I am expecting to get <code>[0.7, 0.3, 0.5]</code>.</p>
<pre><code>>>> b = np.hstack(a)
>>> b
array([[0.7, 0.3, 0.5]])
</code></pre>
|
<python><numpy>
|
2022-12-17 01:43:30
| 2
| 651
|
armin
|
74,831,044
| 497,132
|
How to get ImageField url when using QuerySet.values?
|
<pre><code>qs = self.items.values(
...,
product_preview_image=F('product_option_value__product__preview_image'),
).annotate(
item_count=Count('product_option_value'),
total_amount=Sum('amount'),
)
</code></pre>
<p><code>product_option_value__product__preview_image</code> is an ImageField, and in the resulting QuerySet it looks like <code>product_preview_images/photo_2022-12-10_21.38.55.jpeg</code>.
The problem is that <code>url</code> is a property, and I can't use <code>media</code> settings as images are stored in an S3-compatible storage.</p>
<p>Is it possible to somehow encrich the resulting QuerySet with images URLs with a single additional query, not iterating over each one?</p>
|
<python><django><django-queryset>
|
2022-12-17 00:43:12
| 1
| 16,835
|
artem
|
74,831,038
| 15,542,245
|
File Opening - Script still finds file in cwd even when absolute path specified
|
<p>I have looked at <a href="https://stackoverflow.com/questions/22282760/filenotfounderror-errno-2-no-such-file-or-directory">How to open a list of files in Python</a> This problem similar but not covered.</p>
<pre><code>path = "C:\\test\\test5\\"
files = os.listdir(path)
fileNames = []
for f in files:
fileNames.append(f)
for fileName in fileNames:
pathFileName = path + fileName
print(f"This is the path: {pathFileName}")
fin = open(pathFileName, 'rt')
texts = []
with open(fileName) as file_in:
# read file text lines into an array
for text in file_in:
texts.append(text)
for text in texts:
print(text)
</code></pre>
<p>The file <code>aaaatest.txt</code> is in <code>C:\test\test5</code> The output is:</p>
<pre><code>This is the path: C:\test\test5\aaaatest.txt
Traceback (most recent call last):
File "c:\Users\david\source\repos\python-street-spell\diffLibFieldFix.py", line 30, in <module>
with open(fileName) as file_in:
FileNotFoundError: [Errno 2] No such file or directory: 'aaaatest.txt'
</code></pre>
<p>So here's the point. If I take a copy of <code>aaaatest.txt</code> (leaving original where it is) and put it in the current working directory. Running the script again I get:</p>
<pre><code>This is the path: C:\test\test5\aaaatest.txt
A triple AAA test
This is the path: C:\test\test5\AALTONEN-ALLAN_PENCARROW_PAGE_1.txt
Traceback (most recent call last):
File "c:\Users\david\source\repos\python-street-spell\diffLibFieldFix.py", line 30, in <module>
with open(fileName) as file_in:
FileNotFoundError: [Errno 2] No such file or directory: 'AALTONEN-ALLAN_PENCARROW_PAGE_1.txt'
</code></pre>
<p>The file <code>aaaatest.txt</code> is opened and the single line of text, contained in it, is outputted. Following this an attempt is made to open the next file of <code>C:\test\test5</code> where the same error occurs again.
Seems to me that while the path is saying <code>C:\test\test5</code> the file is only being read from the cwd?</p>
|
<python><file>
|
2022-12-17 00:40:12
| 0
| 903
|
Dave
|
74,830,869
| 14,729,041
|
Delete only first occurrence of element in list Recursively
|
<p>I am trying to remove only the first occurrence of an element from a file-like structure. I have the following code:</p>
<pre><code>d2 = ("home",
[("Documents",
[("FP",
["lists.txt", "recursion.pdf", "functions.ipynb", "lists.txt"]
)]
),
"tmp.txt",
"page.html"]
)
def print_tree(dirtree, file_to_find):
if type(dirtree) == str:
if file_to_find == dirtree:
return []
else:
return [dirtree]
name, subdir = dirtree
a = []
for subtree in subdir:
t = print_tree(subtree, file_to_find)
a += t
return (name, a)
</code></pre>
<p>This returns for <code>print_tree(d2, "lists.txt")</code>:</p>
<blockquote>
<p>('home', ['Documents', ['FP', ['recursion.pdf',
'functions.ipynb']], 'tmp.txt', 'page.html'])</p>
</blockquote>
<p>But I wanted it to return:</p>
<blockquote>
<p>('home', ['Documents', ['FP', ['recursion.pdf', 'functions.ipynb',
'lists.txt']], 'tmp.txt', 'page.html'])</p>
</blockquote>
<p>Do you have any suggestions how can I achieve this behaviour? Thanks in advance for any help you can provide.</p>
|
<python><recursion>
|
2022-12-16 23:59:27
| 1
| 443
|
AfonsoSalgadoSousa
|
74,830,790
| 13,079,519
|
Why is the html content I got from inspector different from what I got from Request?
|
<p>Here is the site I am trying to scrap data from:
<a href="https://i.sstatic.net/QQjEh.png" rel="nofollow noreferrer">https://www.onestopwineshop.com/collection/type/red-wines</a></p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = "https://www.onestopwineshop.com/collection/type/red-wines"
response = requests.get(url)
#print(response.text)
soup = BeautifulSoup(response.content,'lxml')
</code></pre>
<p>The code I have above.</p>
<p>It seems like the HTML content I got from the inspector is different from what I got from BeautifulSoup.
My guess is that they are preventing me from getting their data as they detected I am not accessing the site with a browser. If so, is there any way to bypass that?</p>
<hr />
<p>(Update) Attempt with selenium:</p>
<pre><code>from selenium import webdriver
import time
path = "C:\Program Files (x86)\chromedriver.exe"
# start web browser
browser=webdriver.Chrome(path)
#navigate to the page
url = "https://www.onestopwineshop.com/collection/type/red-wines"
browser.get(url)
# sleep the required amount to let the page load
time.sleep(3)
# get source code
html = browser.page_source
# close web browser
browser.close()
</code></pre>
<hr />
<p>Update 2:(loaded with devtool)
<a href="https://i.sstatic.net/QQjEh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QQjEh.png" alt="enter image description here" /></a></p>
|
<python><web-scraping><beautifulsoup>
|
2022-12-16 23:41:43
| 2
| 323
|
DJ-coding
|
74,830,591
| 18,248,287
|
Inverse of double colon indexing
|
<p>I'm in want of figuring out the inverse of double indexing. For example, If I index my data <code>[::3]</code> at every three steps, then I want the negation of all those that were selected, so all those not in three steps. How can I achieve this?</p>
<p>For example:</p>
<pre><code>df = {
'OR': [0, 1, 0, 0, 1, 0, 0],
'name': ['EPL Prekickoff 15m Next Arsenal Match Timer Over', 'EPL Prekickoff 15m Next Arsenal Match Timer Over', 'EPL Prekickoff 15m Next Chelsea Match Timer Over', 'Set Time Targeting 15m Next Event 1 Timer Over', 'Set Time Targeting 15m Next Event 1 Timer Over', 'Sportsbook Convert View to Bet Timer Over', 'Timer is up - send back to main profile'],
'id': ['_unibet-delay_120', '_unibet-delay_120', '_unibet-delay_121', '_unibet-delay_123', '_unibet-delay_123', '_unibet-delay_122', 'unibet-delay_102'],
'baseOperator': [None, None, None, None, None, None, None],
'operator': ['equals_ignore_case', 'equals_ignore_case', 'equals_ignore_case', 'equals', 'greater_than', 'equals', 'exists'],
'operand1': ['metrics.5130', 'metrics.29', 'metrics.29', 'metrics.5140', 'metrics.29', 'metrics.5136', 'properties.5012'],
'operand2': [1, 2, 2, '1', '1', 1, 0]
}
</code></pre>
<p>Then create a few repeats:</p>
<pre><code>DF = df.apply(lambda x: pd.Series.repeat(x, 3))
</code></pre>
<p>But how to get the indexing right?</p>
<pre><code>DF[::3]
> OR name id baseOperator operator operand1 operand2
1 0 EPL Prekickoff 15m Next Arsenal Match Timer Over _unibet-delay_120 None equals_ignore_case metrics.5130 1
2 1 EPL Prekickoff 15m Next Arsenal Match Timer Over _unibet-delay_120 None equals_ignore_case metrics.29 2
3 0 EPL Prekickoff 15m Next Chelsea Match Timer Over _unibet-delay_121 None equals_ignore_case metrics.29 2
20 0 Set Time Targeting 15m Next Event 1 Timer Over _unibet-delay_123 None equals metrics.5140 1
21 1 Set Time Targeting 15m Next Event 1 Timer Over _unibet-delay_123 None greater_than metrics.29 1
22 0 Sportsbook Convert View to Bet Timer Over _unibet-delay_122 None equals metrics.5136 1
23 0 Timer is up - send back to main profile _unibet-delay_102 None exists properties.5012 0
</code></pre>
<p>Prints out the original dataset, I have tried using <code>-</code> signs but this reorders the data.</p>
|
<python><pandas>
|
2022-12-16 23:04:58
| 0
| 350
|
tesla john
|
74,830,550
| 17,639,970
|
Why having this error just for test_data?
|
<p>I'm trying to build a simple dtaset to work with, however, pytorch gives an error that I can't understand. Why?</p>
<pre><code>import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from sklearn.model_selection import train_test_split
data = np.array([[1,1]])
label = np.array([2])
for i in range(-5000,5001,1):
data = np.append(data, [[i,i]], axis=0)
label = np.append(label, [i+i])
# conver to tensor
T_data = torch.tensor(data).float()
T_label = torch.tensor(label).long()
# split data
train_data, test_data, train_label, tets_label = train_test_split(T_data, T_label, test_size= .2)
# convert into Pytorch dataset
train_data = TensorDataset(train_data, train_label)
test_data = TensorDataset(test_data, test_label)
</code></pre>
<p>for <code>test_data</code> it shows <code>'int' object is not callable</code>, what is the problem?</p>
|
<python><pytorch><dataloader>
|
2022-12-16 22:58:25
| 1
| 301
|
Rainbow
|
74,830,375
| 10,713,420
|
How do I create new columns based on the first occurrence in the previous group?
|
<p>I have a dataframe that looks like below</p>
<pre><code>id reg version
1 54 1
2 54 1
3 54 1
4 54 2
5 54 3
6 54 3
7 55 1
</code></pre>
<p>The goal is to assign two new columns previous_version and next_version that takes the values from id's and populate in previous version and the next version. In the above example df, for id = 1, since the version = 1 the next version starts from the id = 4, I populated next_version value as 4 and previous_version as null since there isn't any.</p>
<p>if there is no previous or next version, it should populate with null.</p>
<p>What I was able to achieve is to</p>
<ul>
<li>get the unique versions in the df.</li>
<li>the first id of each version.</li>
<li>Dictionary of the above 2.</li>
</ul>
<p>I am struggling to come with a logic to apply that dictionary to the dataframe so I can populate previous_version and next_version columns.</p>
<pre><code>versions = df['version'].unique().tolist()
version_ids = df.groupby(['reg', 'version'])['id'].first().tolist()
</code></pre>
<p>Here is how the data frame should look like</p>
<pre><code>id reg version previous_version next_version
1 54 1 NULL 4
2 54 1 NUll 4
3 54 1 NULL 4
4 54 2 1 5
5 54 3 4 NULL
6 54 3 4 NULL
7 55 1 NULL NULL
</code></pre>
<p>What would be the best way to achieve the result if n versions are there?</p>
|
<python><pandas><dataframe>
|
2022-12-16 22:31:29
| 2
| 471
|
NAB0815
|
74,830,329
| 10,853,071
|
Pandas Crosstab dos not support Float (with capital F) number formats
|
<p>I am working on a sample data transaction dataframe. Such base contains cliente ID, transaction gross value (GMV) and revenue. Take this example as DF :</p>
<pre><code>num_variables = 100
rng = np.random.default_rng()
df = pd.DataFrame({
'id' : np.random.randint(1,999999999,num_variables),
'date' : [np.random.choice(pd.date_range(datetime(2022,6,1),datetime(2022,12,31))) for i in range(num_variables)],
'gmv' : rng.random(num_variables) * 100,
'revenue' : rng.random(num_variables) * 100})
</code></pre>
<p>I am grouping such data by client ID, crossing with transaction month and exhibiting revenue values.</p>
<pre><code>clients = df[['id', 'date','revenue']].groupby(['id', df.date.dt.to_period("M")], dropna=False).aggregate({'revenue': 'sum'})
clients.reset_index(inplace=True)
</code></pre>
<p>Now I create a crosstab</p>
<pre><code>CrossTab = pd.crosstab(clients['id'], clients['date'], values=clients['revenue'], rownames=None, colnames=None, aggfunc='sum', margins=True, margins_name='All', dropna=False, normalize=False)
</code></pre>
<p>The code above works normally as my sample dataframe revenue is a "float64" dtype.
But it a change the dtype to Float64, it does not work anymore.</p>
<pre><code>num_variables = 100
rng = np.random.default_rng()
df = pd.DataFrame({
'id' : np.random.randint(1,999999999,num_variables),
'date' : [np.random.choice(pd.date_range(datetime(2022,6,1),datetime(2022,12,31))) for i in range(num_variables)],
'gmv' : rng.random(num_variables) * 100,
'revenue' : rng.random(num_variables) * 100})
df = df.astype({'revenue':'Float64'})
clients = df[['id', 'date','revenue']].groupby(['id', df.date.dt.to_period("M")], dropna=False).aggregate({'revenue': 'sum'})
clients.reset_index(inplace=True)
CrossTab = pd.crosstab(clients['id'], clients['date'], values=clients['revenue'], rownames=None, colnames=None, aggfunc='sum', margins=True, margins_name='All', dropna=False, normalize=False)
</code></pre>
<p>The output</p>
<pre><code>Output exceeds the size limit. Open the full output data in a text editor
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[31], line 1
----> 1 CrossTab = pd.crosstab(clients['id'], clients['date'], values=clients['revenue'], rownames=None, colnames=None, aggfunc='sum', margins=True, margins_name='All', dropna=False, normalize=False)
File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\reshape\pivot.py:691, in crosstab(index, columns, values, rownames, colnames, aggfunc, margins, margins_name, dropna, normalize)
688 df["__dummy__"] = values
689 kwargs = {"aggfunc": aggfunc}
--> 691 table = df.pivot_table(
692 "__dummy__",
693 index=unique_rownames,
694 columns=unique_colnames,
695 margins=margins,
696 margins_name=margins_name,
697 dropna=dropna,
698 **kwargs,
699 )
701 # Post-process
702 if normalize is not False:
File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\frame.py:8728, in DataFrame.pivot_table(self, values, index, columns, aggfunc, fill_value, margins, dropna, margins_name, observed, sort)
8711 @Substitution("")
8712 @Appender(_shared_docs["pivot_table"])
8713 def pivot_table(
(...)
...
--> 292 raise TypeError(dtype) # pragma: no cover
294 converted = maybe_downcast_numeric(result, dtype, do_round)
295 if converted is not result:
TypeError: Float64
</code></pre>
|
<python><pandas>
|
2022-12-16 22:24:20
| 1
| 457
|
FábioRB
|
74,830,287
| 6,346,514
|
Pandas, making a dataframe based on the length of another dataframe
|
<p>I am trying to convert <code>df</code> to just get the length of it in a new dataframe.
Which is what I do, but then this dataframe does not have a header.
How do I add a header to this length?</p>
<pre><code> df = df.append(df_temp, ignore_index=True, sort=True)
df = len(df)
</code></pre>
<p>when I print <code>df</code> i get the # of records but no header. How can I add a header to this?</p>
|
<python><pandas>
|
2022-12-16 22:19:05
| 1
| 577
|
Jonnyboi
|
74,830,278
| 1,562,772
|
python Environment Variables not changed for object of imported module
|
<p>I am puzzled with this behavior where environment variable is not updated correctly for class of a module I import on second or later calls.</p>
<p>I have a module which have some class I initialize to get a function (and totally works fine). To control and <strong>Test</strong> some behaviors on an automated manner <strong>locally</strong> (which in prod used by Docker environment variables) have some environment variables which can be set or unset.</p>
<p>To test some behaviors I am trying to make a script that tests some permutations of these environment variables importing the relevant module and initializing an object, reloading the module each call.</p>
<p>Each object initialization shows environment variable value and also if it affected the call as in below code:</p>
<pre><code>#inside MyModule.SubModule
SOME_ENV_VAR = os.getenv("SOME_ENV_VAR", "0")
# convert environment variable to integer
try:
SOME_ENV_VAR = int(SOME_ENV_VAR)
except ValueError:
# if non integer given set to 0
SOME_ENV_VAR = 0
print("SOME_ENV_VAR", SOME_ENV_VAR)
class MyClass:
def __init__(self):
# Logic to set Cuda device depending on Environmental variable
print("Value of SOME_ENV_VAR: ", SOME_ENV_VAR)
import sys, importlib
def get_object(env_set=True):
os.environ["SOME_ENV_VAR"] = "1" if env_set else "0"
importlib.reload(sys.modules['MyModule'])
from MyModule.SubModule import MyClass
p = MyClass()
return p
</code></pre>
<p>On first call it is fine and environment variable is set correctly as in logs:</p>
<pre><code>get_object(env_set=False)
#log output
SOME_ENV_VAR 0
Value of SOME_ENV_VAR: 0
</code></pre>
<p>But on second (and later calls) when I try to change environment variable, log output show environment variable is set correctly for module but on object gets the previously set environment variable.</p>
<pre><code>get_object(env_set=True)
#log output
SOME_ENV_VAR 1
SOME_ENV_VAR: 0
</code></pre>
<p>I tried using multiprocessing too (to create another process), which had same result.
Why is this behaviour happening, and how can I fix it some simple logic?</p>
|
<python><python-3.x><object><environment-variables><python-3.8>
|
2022-12-16 22:18:18
| 1
| 991
|
Gorkem
|
74,830,150
| 8,816,642
|
Aggregate data with given bins by dates in Python
|
<p>I have two dataframes, one is the score with a given date,</p>
<pre><code>date score
2022-12-01 0.28
2022-12-01 0.12
2022-12-01 0.36
2022-12-01 0.42
2022-12-01 0.33
2022-12-02 0.15
2022-12-03 0.23
2022-12-03 0.25
</code></pre>
<p>Another dateframe is score bins,</p>
<pre><code>breakpoints
0.1
0.2
0.3
0.4
0.5
</code></pre>
<p>The breakpoint <code>0.1</code> means any values less or equals to 0.1.
How do I create a dataframe that group the data with this known bins by date? I tried to use <code>numpy.histogram</code> which the aggregate function works good but doesn't know how to group it by date.
My expected output will be like,</p>
<pre><code>breakpoints 2022-12-01 2022-12-02 2022-12-03 ...
0.1 0 0 0
0.2 1 1 0
0.3 1 0 2
0.4 2 0 0
... ... ... ...
</code></pre>
|
<python><pandas><dataframe><numpy><group-by>
|
2022-12-16 21:59:21
| 1
| 719
|
Jiayu Zhang
|
74,829,976
| 5,942,100
|
Separate values from date datatype using Pandas
|
<p>I wish to separate values from date datatype</p>
<h2>Data</h2>
<pre><code>time ID
2021-04-16T00:00:00.000-0800 AA
2021-04-23T00:00:00.000-0800 AA
2021-04-30T00:00:00.000-0800 BB
</code></pre>
<h2>Desired</h2>
<pre><code>time ID
2021-04-16 AA
2021-04-23 AA
2021-04-30 BB
</code></pre>
<h2>Doing</h2>
<pre><code>df["time"] = df["time"].str.extract(r'(\w+)')
</code></pre>
<p>Any suggestion is appreciated</p>
|
<python><pandas><numpy>
|
2022-12-16 21:35:14
| 2
| 4,428
|
Lynn
|
74,829,912
| 5,356,096
|
Splitting a list of strings into list of tuples rapidly
|
<p>I'm trying to figure out how to squeeze as much performance out of my code as possible, and I am facing with the issue of losing a lot of performance on tuple conversion.</p>
<pre class="lang-py prettyprint-override"><code>with open("input.txt", 'r') as f:
lines = f.readlines()
lines = [tuple(line.strip().split()) for line in lines]
</code></pre>
<p>Unfortunately a certain component of my code requires the list to contain tuples instead of lists after splitting, but converting the lists to tuples this way is very slow. Is there any way to force <code>.split()</code> to return a tuple or to perform this conversion much faster, or would I have to change the rest of the code to avoid <code>.split()</code>?</p>
|
<python><python-3.x><list><performance><tuples>
|
2022-12-16 21:25:53
| 2
| 1,665
|
Jack Avante
|
74,829,721
| 7,211,014
|
Parse xml for text of every specific tag not working
|
<p>I am trying to gather every element <code><sequence-number></code> text into a list. Here is my code</p>
<pre><code>#!/usr/bin/env python
from lxml import etree
response = '''
<rpc-reply xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:0d07cdf5-c8e5-45d9-89d1-92467ffd7fe4">
<data>
<ipv4-acl-and-prefix-list xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ipv4-acl-cfg">
<accesses>
<access>
<access-list-name>TESTTEST</access-list-name>
<access-list-entries>
<access-list-entry>
<sequence-number>1</sequence-number>
<remark>TEST</remark>
<sequence-str>1</sequence-str>
</access-list-entry>
<access-list-entry>
<sequence-number>10</sequence-number>
<grant>permit</grant>
<source-network>
<source-address>10.10.5.0</source-address>
<source-wild-card-bits>0.0.0.255</source-wild-card-bits>
</source-network>
<next-hop>
<next-hop-type>regular-next-hop</next-hop-type>
<next-hop-1>
<next-hop>10.10.5.2</next-hop>
<vrf-name>SANE</vrf-name>
</next-hop-1>
</next-hop>
<sequence-str>10</sequence-str>
</access-list-entry>
<access-list-entry>
<sequence-number>20</sequence-number>
<grant>permit</grant>
<source-network>
<source-address>10.10.6.0</source-address>
<source-wild-card-bits>0.0.0.255</source-wild-card-bits>
</source-network>
<next-hop>
<next-hop-type>regular-next-hop</next-hop-type>
<next-hop-1>
<next-hop>10.10.6.2</next-hop>
<vrf-name>VRFNAME</vrf-name>
</next-hop-1>
</next-hop>
<sequence-str>20</sequence-str>
</access-list-entry>
</access-list-entries>
</access>
</accesses>
</ipv4-acl-and-prefix-list>
</data>
</rpc-reply>
'''
q = etree.fromstring(response)
print(q.findall('.//sequence-number'))
</code></pre>
<p>But get nothing for output. I have tried the following statements too with no luck:</p>
<pre><code>print(q.findall('./sequence-number/'))
print(q.findall('sequence-number/'))
print(q.findall('.//sequence-number/'))
print(q.findall('sequence-number'))
</code></pre>
<p>How can I gather this data?</p>
|
<python><xml><parsing><element>
|
2022-12-16 21:01:25
| 1
| 1,338
|
Dave
|
74,829,713
| 4,343,563
|
Try-except with if conditions?
|
<p>I have created if-else statements within my functions to check if certain conditions are met. However, I need to convert it try-except statement since the app I am working on is set up so that when a condition is met, it creates log info statements and when it is not met it creates log error statements. Currently my code looks like:</p>
<pre><code>if (conditon1 == 1):
print('Everything is good')
else:
orig_diff = list(set(sheetnames).difference(file.sheet_names))
new_diff = list(set(file.sheet_names).difference(sheetnames))
if len(orig_diff) != 0:
print("{} cond1 not met".format(','.join(orig_diff)))
if len(new_diff) != 0:
print("{} cond2 not met".format(','.join(new_diff)))
</code></pre>
<p>I want it to be so that if the if-condition is met then it will print the info statement. But if it is not met, then the else statements will be executed. Something like:</p>
<pre><code>try:
return bool(condition1 == 1)
print('Everything is good')
except:
orig_diff = list(set(sheetnames).difference(file.sheet_names))
new_diff = list(set(file.sheet_names).difference(sheetnames))
if len(orig_diff) != 0:
print("{} cond1 not met".format(','.join(orig_diff)))
if len(new_diff) != 0:
print("{} cond2 not met".format(','.join(new_diff)))
</code></pre>
<p>But, I can't seem to get this to work and even when the condition is not met it returns 'Everything is good'.</p>
|
<python><if-statement><try-except>
|
2022-12-16 20:59:47
| 1
| 700
|
mjoy
|
74,829,709
| 3,727,975
|
Receiving Error: 'apxs' command appears not to be installed
|
<p>This is the error I am receiving:
<code>RuntimeError: The 'apxs' command appears not to be installed or is not executable. Please check the list of prerequisites in the documentation for this package and install any missing Apache httpd server packages.</code></p>
<p>How can I get around this? I have received this while trying to install different packages. I am working on a Django project. I have already made sure the <code>apache2-dev</code> package is installed. I am developing on linux.</p>
|
<python><django>
|
2022-12-16 20:59:27
| 1
| 720
|
qbush
|
74,829,659
| 5,881,882
|
Can't fix torch autograd runtime error: UNet inplace operation
|
<p>I can't fix the runtime error "one of the variables needed for gradient computation has been modified by an inplace operation.</p>
<p>I know, that if I comment out <code>loss.backward()</code> the code will run, but I don't get in which order should I call the functions to avoid this error</p>
<p>When I call it my wrapper with Resnet50 I don't experience any problems, but with Unet the RuntimeError occurs</p>
<pre><code> for i, (x, y) in batch_iter:
with torch.autograd.set_detect_anomaly(True):
input, target = x.to(self.device), y.to(self.device)
self.optimizer.zero_grad()
if self.box_training:
out = self.model(input)
else:
out = self.model(input).clamp(0,1)
loss = self.criterion(out, target)
loss_value = loss.item()
train_losses.append(loss_value)
loss.backward()
self.optimizer.step()
batch_iter.set_description(f'Training: (loss {loss_value:.4f})')
self.training_loss.append(np.mean(train_losses))
self.learning_rate.append(self.optimizer.param_groups[0]['lr'])
</code></pre>
<p>As the comments pointed out, I should provide a model</p>
<p>And by looking at it, I actually found what was the problem:</p>
<pre><code>model = UNet(in_channels=1,
num_encoding_blocks = 6,
out_classes = 1,
padding=1,
dimensions = 2,
out_channels_first_layer = 32,
normalization = None,
pooling_type = 'max',
upsampling_type = 'conv',
preactivation = False,
#residual = True,
padding_mode = 'zeros',
activation = 'ReLU',
initial_dilation = None,
dropout = 0,
monte_carlo_dropout = 0
)
</code></pre>
<p>It is <code>residual = True</code> which I has commented out. I will look into the docs, what is going on. Maybe if you have an idea, you can enlighten me</p>
|
<python><pytorch><autograd><unet-neural-network>
|
2022-12-16 20:53:32
| 1
| 388
|
Alex
|
74,829,618
| 4,361,020
|
Split string into segments according to the alphabet
|
<p>I want to split the given string into alphabet segments that the string contains. So for example, if the following string is given:</p>
<pre><code>Los eventos automovilísticos comenzaron poco después de la construcción exitosa de los primeros automóviles a gasolina. El veloz zorro marrón saltó sobre el perezoso perro.
Motoring events began soon after the construction of the first successful gasoline-fueled automobiles. The quick brown fox jumped over the lazy dog.
Мотори су почели убрзо након изградње првих успешних аутомобила на бензин.Брза смеђа лисица је прескочила лењог пса.
Автомобилните събития започнаха скоро след конструирането на първите успешни автомобили с бензиново гориво. Бързата кафява лисица прескочи мързеливото куче.
自動車イベントは、最初の成功したガソリン燃料自動車の製造直後に始まりました。 素早い茶色のキツネは怠け者の犬を飛び越えました。
بدأت أحداث السيارات بعد وقت قصير من بناء أول سيارة ناجحة تعمل بالبنزين. قفز الثعلب البني السريع فوق الكلب الكسول.
</code></pre>
<p>The above text contains spanish, english, serbian, bulgarian, japanese, arabic paragraphs (the order of the languages follows the paragraphs order).</p>
<p>Then, after applying some magic function, I would like to get the following output:</p>
<pre><code>{
"langs": [
{
"alphabet": "latin",
"text": "Los eventos automovilísticos comenzaron poco después de la construcción exitosa de los primeros automóviles a gasolina. El veloz zorro marrón saltó sobre el perezoso perro. Motoring events began soon after the construction of the first successful gasoline-fueled automobiles. The quick brown fox jumped over the lazy dog."
},
{
"alphabet": "cyrillic",
"text": "Мотори су почели убрзо након изградње првих успешних аутомобила на бензин.Брза смеђа лисица је прескочила лењог пса. Автомобилните събития започнаха скоро след конструирането на първите успешни автомобили с бензиново гориво. Бързата кафява лисица прескочи мързеливото куче."
},
{
"alphabet": "japanese",
"text": "自動車イベントは、最初の成功したガソリン燃料自動車の製造直後に始まりました。 素早い茶色のキツネは怠け者の犬を飛び越えました。"
},
{
"alphabet": "arabic",
"text": "بدأت أحداث السيارات بعد وقت قصير من بناء أول سيارة ناجحة تعمل بالبنزين. قفز الثعلب البني السريع فوق الكلب الكسول."
}
]
}
</code></pre>
<p>As you see, some of the languages are grouped by their family alphabets. For example, spanish and english paragraphs were grouped as latin, or serbian and bulgarian paragraphs were grouped as cyrillic. This is because it is hard to find a specific language (since most of the letters are shared between languages).</p>
<p>Ideally, my final output should be like this:</p>
<pre><code>{
"langs": [
{
"lang": "spanish",
"text": "Los eventos automovilísticos comenzaron poco después de la construcción exitosa de los primeros automóviles a gasolina. El veloz zorro marrón saltó sobre el perezoso perro."
},
{
"lang": "english",
"text": "Motoring events began soon after the construction of the first successful gasoline-fueled automobiles. The quick brown fox jumped over the lazy dog."
},
{
"lang": "serbian",
"text": "Мотори су почели убрзо након изградње првих успешних аутомобила на бензин.Брза смеђа лисица је прескочила лењог пса."
},
{
"lang": "bulgarian",
"text":"Автомобилните събития започнаха скоро след конструирането на първите успешни автомобили с бензиново гориво. Бързата кафява лисица прескочи мързеливото куче."
},
{
"lang": "japanese",
"text": "自動車イベントは、最初の成功したガソリン燃料自動車の製造直後に始まりました。 素早い茶色のキツネは怠け者の犬を飛び越えました。"
},
{
"lang": "arabic",
"text": "بدأت أحداث السيارات بعد وقت قصير من بناء أول سيارة ناجحة تعمل بالبنزين. قفز الثعلب البني السريع فوق الكلب الكسول."
}
]
}
</code></pre>
<p>I need to split the text into sub-strings according to the language. For that I am planning to use <a href="https://github.com/aboSamoor/pycld2" rel="nofollow noreferrer"><code>cld2</code></a> which can split text into sentences, but according to my experiments, it does not do well when the string contains text with mixed alphabets (i.e. cyrillic + japanese etc.). However, <code>cld2</code> does well on the text with mixed languages that share the family of alphabets (i.e. french + english etc.).</p>
<p>That's why, I am planning to split the text into sub-strings by the family of alphabets, then for each of the family, I will aplly <code>cld2</code> to predict the specific language.</p>
<p>Another important requirements:</p>
<ul>
<li>the mixed languages might not be separated clearly by lines like above example (I did that for the sake of simplicity and to make the problem clear)</li>
<li>I need to be able to do this 'offline' without connecting to 3rd party servers like google etc. (since there will be tons of data that need to be handled)</li>
</ul>
<p>I would appreciate any ideas that you might have on the above problems. Thanks in advance.</p>
|
<python><string><text><nlp><cld2>
|
2022-12-16 20:48:30
| 1
| 791
|
Sirojiddin Komolov
|
74,829,544
| 2,803,488
|
How to use record separator as delimiter in Pandas
|
<p>I am trying to use the record separator (<code>0x1E</code>) as the separator in the Pandas read_table() function, but it is instead it seems to be splitting on <code>\n</code> (<code>0x0A</code>).</p>
<p>This is my code:</p>
<pre><code>df = pandas.read_table( "separator.log", sep = "[\x1E]", engine = 'python' )
print( df )
</code></pre>
<p>This is my input file (separator.log):</p>
<pre><code>{
"a": 1
}{
"b": 2
}{
"c": 3
}
</code></pre>
<p>The record separator is after each closing brace, but may not show up in your browser.</p>
<p>The output looks like this:</p>
<pre><code> {
"a": 1
} {
"b": 2 None
} {
"c": 3 None
} None
</code></pre>
<p>When I try</p>
<pre><code>df = pandas.read_table( "separator.log", sep = chr(0x1E), engine = 'python' )
</code></pre>
<p>the error <code>'' expected after '"'</code> is given. Inside the first <code>''</code> is the record separator character, but it does not show up in the S/O editor.</p>
<p>Is there a way to force <code>read_table</code> to use 0x1E for the delimiter?</p>
|
<python><pandas>
|
2022-12-16 20:37:52
| 1
| 455
|
Adam Howell
|
74,829,476
| 3,521,180
|
why the return statement inside main function is terminating unexpectedly?
|
<p>I have a requirement wherein I have to perform multiple pyspark transformations and write it to a parquet file. But below are the conditions:</p>
<ul>
<li><p>there are in total 2 files, but at a time either one of them or both could be supplied by the user.</p>
</li>
<li><p>when x_path file is given then <code>if i == x_path and x_path != "null"</code> block should be executed and written to a destination for x_path only.</p>
</li>
<li><p>when its l_path, then <code>elif i == l_path and l_path != "null"</code> block should be executed and written to a destination for l_path only.</p>
</li>
<li><p>when both the files are passed then <code>if x_path != "null" and l_path != "null"</code> block should be executed and written to a destination for both files combined.</p>
</li>
<li><p>Since I am using pytest for integration testing, writing in the destination shouldn't happen. But the final DF (append_l_x_df) should be returned.</p>
</li>
</ul>
<p>Below is the code snippet. I have tried to make it as close to my actual requirement as possible.</p>
<pre><code>def main(x_path, l_path, y_path, test=False):
for i in [x_path, l_path]:
if i == x_path and x_path != "null":
if not test:
p_df = func.read_parquet_file(x_path)
elif test:
p_df = func.create_spark_dataframe_with_schema(test_p.test_data, test_p.x_schema)
rename_cols_df = p_df.withColumnRenamed(existingName, newNam)
return rename_prem_cols_df
if x_path != "null" and l_path == "null":
if not test:
c_time = func.get_current_timestamp_for_curate_container()
c_path = <destination_path>
func.write_df_as_parquet_file(rename_cols_df, c_path, logger)
elif i == l_path and l_path != "null":
if not test:
l_df = func.read_parquet_file(l_path)
elif test:
l_df = func.create_spark_dataframe_with_schema(test_p.l_test_data, test_p.l_schema)
split_yr_mnth_qtr_df = func.split_yr_mnth_qtr(rename_ldr_cols_df, "Date", "Year", "Month", "Quarter"
return split_yr_mnth_qtr_df
if x_path == "null" and l_path != "null":
if not test:
c_time = func.get_current_timestamp_for_curate_container()
c_path = <destination_path>
func.write_df_as_parquet_file(split_yr_mnth_qtr_df, c_path, logger)
if test:
if x_path != "null" and l_path != "null":
append_l_x_df = func.union_df([rename_cols_df, split_yr_mnth_qtr_df])
return append_l_x_df
elif not test:
if x_path != "null" and l_path != "null":
append_l_x_df = func.union_df([rename_cols_df, split_yr_mnth_qtr_df])
c_time = func.get_current_timestamp_for_curate_container()
c_path = <destination_path>
# Write final data to Curate zone
func.write_df_as_parquet_file(append_l_x_df, c_path, logger)
if __name__ == '__main__':
main(x_path, l_path, y_path, test=False)
</code></pre>
<p>My problem is when I am executing my test file through pytest, the main python file with above code snippet is executed, but getting terminated at the very first return statement <code>return rename_prem_cols_df</code>. Hence my test case(s) are failing. Below are the reason.</p>
<ul>
<li>For my test scenario, I have created set of expected schema, which has 26 columns. Those 26 columns are present in the final combined DF, which is <code>append_l_x_df</code>.</li>
</ul>
<p>Since the program is getting terminated at <code>return rename_prem_cols_df</code>, so my test is failing, because this DF has different sets of column count.</p>
<p>Note:</p>
<ul>
<li><p>At the time of testing, I am passing 2 files. so, ideally the program should come till the end where we have <code>append_l_x_df</code> DF.</p>
</li>
<li><p>I think that I have not used the return statement at the right places. But not sure if that could be the issue.</p>
</li>
<li><p>the sample pyspark transformation which I have mentioned in the code snippet is supposed to execute whether or not <code>test=True</code>, ie, the transformation should execute even if test mode is False. This part is taken care already as seen in the above snippet.</p>
</li>
</ul>
|
<python><python-3.x><pyspark><pytest>
|
2022-12-16 20:30:07
| 0
| 1,150
|
user3521180
|
74,829,469
| 18,392,410
|
polars native way to convert unix timestamp to date
|
<p>I'm working with some data frames that contain Unix epochs in ms, and <strong>would like to display the entire timestamp series as a date.</strong> Unfortunately, the docs did not help me find a polars native way to do this, and I'm reaching out here. <strong>Solutions on how to do this in Python and also in Rust</strong> would brighten my mind and day.</p>
<p>With pandas, for example, such things were possible:</p>
<pre><code>pd.to_datetime(pd_df.timestamp, unit="ms")
# or to convert the whole col
pd_df.timestamp = pd.to_datetime(pd_df.timestamp, unit="ms")
</code></pre>
<p>I could loop over the whole thing and do something like I'm doing here for a single entry in each row.</p>
<pre><code>datetime.utcfromtimestamp(pl_df["timestamp"][0] / 1000).strftime("%Y-%m-%d")
</code></pre>
<p>If I were to do this in Rust, I would then use something like chrono to convert the ts to a date. But I don't think looping over each row is a good solution.</p>
<p>For now, as the best way I have found to help me is to convert <code>pd_df = pl_df.to_pandas()</code> and do it in pandas.</p>
|
<python><pandas><datetime><python-polars><rust-polars>
|
2022-12-16 20:29:14
| 1
| 563
|
tenxsoydev
|
74,829,420
| 18,086,775
|
How to drop rows based on multiple condition?
|
<p>I have the following <strong>datframe setup</strong>:</p>
<pre><code>dic = {'customer_id': [102, 102, 105, 105, 110, 110, 111],
'product':['skateboard', 'skateboard', 'skateboard', 'skateboard', 'shoes', 'skateboard', 'skateboard'],
'brand': ['Vans', 'Converse', 'Vans', 'Converse', 'Converse','Converse', 'Vans'],
'membership': ['member', 'not-member', 'not-member', 'not-member', 'member','not-member', 'not-member']}
df = pd.DataFrame(dic)
</code></pre>
<p><strong>Requirement:</strong> I need to drop rows where membership 'not-member' at customer_id and product granularity if the customer is a 'member' of any brand.</p>
<p>For example, in the above dataframe, we drop customer '102' with product 'skateboard' where membership is 'not-member' because they are already a member of a brand (Vans). We do not drop 105 because they are not a member of any brand. We do not drop 110 because the products are different.</p>
<p>So, the output should look like the following:
<a href="https://i.sstatic.net/WTj2T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WTj2T.png" alt="enter image description here" /></a></p>
<p><strong>My approach:</strong> First to make a list of unique customer_id + product (ex: 102_skateboard). Then loop over the list, then filter out the dataframe on unique customer-product pairs, then check if the dataframe contains membership, if true then drop not-member rows. This gives me the expected output but I was wondering if there was a better way to do it.</p>
<pre><code>df['customer_product'] = df['customer_id'].astype(str) + '_' + df['product']
unique_customer_product = df['customer_product'].unique()
for pair in unique_customer_product:
filtered_df = df[df['customer_product'] == pair]
if 'member' in filtered_df['membership'].values:
df = df.drop(df[(df.customer_product == pair) & (df.membership == 'not-member')].index)
</code></pre>
|
<python><pandas><dataframe><data-wrangling>
|
2022-12-16 20:23:14
| 3
| 379
|
M J
|
74,829,335
| 10,037,461
|
Convert PySpark DataFrame column with list in StringType to ArrayType
|
<p>So I got an input pysaprk dataframe that looks like the following:</p>
<pre><code>df = spark.createDataFrame(
[("1111", "[clark, john, silvie]"),
("2222", "[bob, charles, seth]"),
("3333", "[jane, luke, adam]"),
],
["column1", "column2"]
)
</code></pre>
<pre><code>| column1 | column2 |
| ------- | ------- |
| 1111 | [clark kent, john, silvie] |
| 2222 | [bob, charles, seth rog] |
| 3333 | [jane, luke max, adam] |
</code></pre>
<p>And my goal is to convert the column and values from the <code>column2</code> which is in StringType() to an ArrayType() of StringType().</p>
<p>But I have managed to only partially get the result converting it to ArrayType, but those values from the string list with more than one word are split separately, like the follow:</p>
<pre><code>from pyspark.sql.functions import expr
df_out = df.withColumn('column2', expr(r"regexp_extract_all(column2, '(\\w+)', 1)"))
</code></pre>
<p>Which gets me something like (my regex skills aren't that good):</p>
<pre><code>| column1 | column2 |
| ------- | ------- |
| 1111 | ["clark", "kent", "john", "silvie"] |
| 2222 | ["bob", "charles", "seth", "rog"] |
| 3333 | ["jane", "luke", "max", "adam"] |
</code></pre>
<p>But I'm actually looking to get something like:</p>
<pre><code>| column1 | column2 |
| ------- | ------- |
| 1111 | ["clark kent", "john", "silvie"] |
| 2222 | ["bob", "charles", "seth rog"] |
| 3333 | ["jane", "luke max", "adam"] |
</code></pre>
|
<python><python-3.x><pyspark>
|
2022-12-16 20:12:13
| 1
| 415
|
Lucas Mengual
|
74,829,297
| 5,106,834
|
Can a QAbstractItemModel trigger a layout change whenever underyling data is changed?
|
<p>The following is a slightly modified version of the <a href="https://www.pythonguis.com/tutorials/modelview-architecture/" rel="nofollow noreferrer">Model/View To-Do List tutorial</a>.</p>
<p>I have a class <code>Heard</code> that is composed of a list of <code>Animal</code>. The <code>Heard</code> serves as the underlying data for a <code>HeardModel</code> which is displayed in a <code>ListView</code> in my interface.</p>
<p>In my <code>MainWindow</code>, I've created a function called <code>add_animal_to_heard</code> which:</p>
<ol>
<li>Creates a new <code>Animal</code> using the user input</li>
<li>Uses the <code>Heard</code> class's <code>add_animal</code> method to add the new <code>Animal</code> to the <code>Heard</code></li>
<li>Tells the <code>HeardModel</code> to update the view using <code>layoutChanged.emit()</code></li>
</ol>
<p>It's this last point that concerns me. To manage increasing complexity in the app, shouldn't the <code>HeardModel</code> know to trigger a layout change <strong>whenever the underlying <code>Heard</code> data is changed?</strong> Is this possible, and if so, is there any reason that wouldn't be desireable?</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets, uic
from PyQt5.QtCore import Qt
from typing import Dict, List
qt_creator_file = "animals.ui"
Ui_MainWindow, QtBaseClass = uic.loadUiType(qt_creator_file)
class Animal:
def __init__(self, genus: str, species: str):
self.genus = genus
self.species = species
def name(self):
return f"{self.genus} {self.species}"
class Heard:
animals: List[Animal]
def __init__(self, animals: List[Animal]):
self.animals = animals
def add_animal(self, animal: Animal):
self.animals.append(animal)
def remove_animal(self, animal: Animal):
self.animals.remove(animal)
class HeardModel(QtCore.QAbstractListModel):
heard: Heard
def __init__(self, *args, heard: Heard, **kwargs):
super(HeardModel, self).__init__(*args, **kwargs)
self.heard = heard
def data(self, index, role):
if role == Qt.DisplayRole:
animal = self.heard.animals[index.row()]
return animal.name()
def rowCount(self, index):
return len(self.heard.animals)
class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self):
QtWidgets.QMainWindow.__init__(self)
Ui_MainWindow.__init__(self)
self.setupUi(self)
self.model = HeardModel(heard=Heard([Animal('Canis', 'Familiaris'), Animal('Ursus', 'Horribilis')]))
self.heardView.setModel(self.model)
self.addButton.pressed.connect(self.add_animal_to_heard)
def add_animal_to_heard(self):
genus = self.genusEdit.text()
species = self.speciesEdit.text()
if genus and species: # Don't add empty strings.
# Create new animal
new_animal = Animal(genus, species)
# Add animal to heard
self.model.heard.add_animal(new_animal)
# Trigger refresh.
self.model.layoutChanged.emit()
# Empty the input
self.genusEdit.setText("")
self.speciesEdit.setText("")
app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()
</code></pre>
|
<python><architecture><pyqt><pyqt5>
|
2022-12-16 20:06:39
| 1
| 607
|
Andrew Plowright
|
74,829,288
| 17,277,677
|
predict_proba on pyspark testing dataframe
|
<p>I am very new to pyspark and need to perform prediction. I've already done everything but in python, because the data I have to apply the logic is huge - I need to transform everything to pyspark.</p>
<p>The problem is: I have 2 dataframes, first dataframe is for training purposes with Y column and the second one is for testing (from my s3 bucket).</p>
<ol>
<li><p>I already cleaned and prepared training dataframe in python and converted it to pyspark df:</p>
<p>from pyspark.sql import SparkSession</p>
<p>import pandas as pd</p>
<p>Create a SparkSession:</p>
<p>spark = SparkSession.builder.appName('myapp').getOrCreate()</p>
<p>Load the Pandas DataFrame:</p>
<p>pdf = dataframe #dataframe is the name of my pandas df</p>
<p>Convert the Pandas DataFrame to a PySpark DataFrame:</p>
<p>df_spark = spark.createDataFrame(pdf)</p>
</li>
<li><p>I selected training columns - for training purposes:</p>
<p>y = df_spark.select("Y")</p>
<p>X = df_spark.select([c for c in dataframe.columns if c != "Y"])</p>
<p>training_columns = X.columns</p>
</li>
<li><p>I followed the tutorial where I used VectorAssembler:</p>
<p>from pyspark.ml.feature import VectorAssembler</p>
<p>df_spark.columns</p>
<p>assembler = VectorAssembler(inputCols=training_columns, outputCol='features')</p>
<p>output = assembler.transform(df_spark)</p>
<p>final_data = output.select('features', 'Y')</p>
</li>
<li><p>I trained the model - Random Forest</p>
<p>from pyspark.ml.classification import RandomForestClassifier</p>
<p>from pyspark.ml import Pipeline</p>
<p>Set the hyperparameters:</p>
<p>n_estimators = 100</p>
<p>max_depth = 5</p>
<p>Create the model:</p>
<p>rfc = RandomForestClassifier(
featuresCol='features',
labelCol='Y',
numTrees=n_estimators,
maxDepth=max_depth
)</p>
<p>Fit the model on the training data:</p>
<p>model = rfc.fit(final_data)</p>
</li>
<li><p>I checked model evaluation on training data:</p>
<p>predictions = model.transform(final_data)</p>
<p>from pyspark.ml.evaluation import BinaryClassificationEvaluator</p>
<p>my_binary_eval = BinaryClassificationEvaluator(labelCol='Y')</p>
<p>print(my_binary_eval.evaluate(predictions))</p>
</li>
<li><p>Now is the moment where I want to apply the model on a different pyspark dataframe - test data, with a lot of records = df_to_predict</p>
</li>
</ol>
<p>df_to_predict might have slightly different set of columns comparing to training data - as I was dropping columns with no variance. Hence first I need to apply the columns from training set:</p>
<pre><code>df_to_predict = df_to_predict.select(training_columns)
</code></pre>
<p>next I would like to apply the model and do predict_proba <- this is in sklearn package - I do not know how to apply/ convert this code to pyspark:</p>
<pre><code>df_to_predict["Predicted_Y_Probability"] = model.predict_proba(df_to_predict)[::, 1]
</code></pre>
|
<python><pyspark><classification>
|
2022-12-16 20:05:53
| 1
| 313
|
Kas
|
74,829,264
| 6,535,324
|
PyCharm, open new window for each "View as DataFrame"
|
<p>In Debug mode, I would like PyCharm to open a new window (<em>not</em> a new tab!) for every "View as DataFrame" click I do.</p>
<p><a href="https://i.sstatic.net/kSNdA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kSNdA.png" alt="enter image description here" /></a></p>
<p>Right now, they are opened as tabs, which makes it very difficult to compare.</p>
<p><a href="https://i.sstatic.net/hkRZB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hkRZB.png" alt="enter image description here" /></a></p>
<p><a href="https://stackoverflow.com/questions/68254196/pycharm-dataframe-or-array-in-new-window">Other posts</a> suggest to klick the "Open in editor" button to then move it. This works but it is annoyingly cumbersome. And I remember in the past multiple windows opened (maybe in the community edition?)</p>
|
<python><pycharm>
|
2022-12-16 20:02:49
| 0
| 2,544
|
safex
|
74,829,237
| 392,923
|
populating elements of dict using pandas.read_pickle() results in killed python process
|
<p>On an Ubuntu 18.04.5 image running on AWS, I've noticed that attempting to populate a dict with multiple (7, in my case) dataframes loaded via pandas.read_pickle(), e.g., using something like</p>
<pre><code>import pathlib
import pandas as pd
df_dict = {}
base_dir = pathlib.Path('some_path')
for i, f in base_dir.glob('*.pkl'):
print(f)
df = pd.read_pickle(f)
</code></pre>
<p>results in the python process being killed before all of the dataframes are loaded. What's odd is that if I read the files but don't assign them to elements of the dict, the loop completes successfully. Also, if I try loading the exact same dataframes stored in some other file format such as Feather (using pandas.read_feather()), the loop completes successfully. Any thoughts as to what could be going on? I'm using Python 3.8.8 and Pandas 1.2.4 installed via conda; I've also tried Python 3.8.15 with Pandas 1.5.2 and the same pickle files, but the result is the same. When I try to replicate the problem using the same Python and Pandas versions on MacOS 12.6.2 (also installed via conda), the problem does not occur.</p>
|
<python><pandas><dataframe><pickle><feather>
|
2022-12-16 20:00:17
| 0
| 1,391
|
lebedov
|
74,829,095
| 7,802,183
|
Which Seasonal Adjustment Program should I use with Statsmodels X-13-ARIMA
|
<p>I have downloaded Win X-13 <a href="https://www.census.gov/data/software/x13as.Win_X-13.html#list-tab-635278563" rel="nofollow noreferrer">from Census</a>, and unpacked it on my drive.</p>
<p>My code looks like this:</p>
<pre><code>import pandas as pd
from pandas import Timestamp
import os
import statsmodels.api as sm
s = pd.Series(
{Timestamp('2013-03-01 00:00:00'): 838.2,
Timestamp('2013-04-01 00:00:00'): 865.17,
Timestamp('2013-05-01 00:00:00'): 763.0,
Timestamp('2013-06-01 00:00:00'): 802.99,
Timestamp('2013-07-01 00:00:00'): 875.56,
Timestamp('2013-08-01 00:00:00'): 754.4,
Timestamp('2013-09-01 00:00:00'): 617.48,
Timestamp('2013-10-01 00:00:00'): 994.75,
Timestamp('2013-11-01 00:00:00'): 860.86,
Timestamp('2013-12-01 00:00:00'): 786.66},
name='Cost')
PATH = os.chdir(r'C:\WinX13\x13as')
result = sm.tsa.x13_arima_analysis(s, x12path=PATH)
</code></pre>
<p>This return error:</p>
<pre><code>---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Input In [10], in <cell line: 3>()
1 PATH = os.chdir(r'C:\Users\usr\WinX13\x13as')
----> 3 result = sm.tsa.x13_arima_analysis(s, x12path=PATH)
File ~\AppData\Roaming\Python\Python39\site-packages\pandas\util\_decorators.py:207, in deprecate_kwarg.<locals>._deprecate_kwarg.<locals>.wrapper(*args, **kwargs)
205 else:
206 kwargs[new_arg_name] = new_arg_value
--> 207 return func(*args, **kwargs)
File C:\ProgramData\Anaconda3\lib\site-packages\statsmodels\tsa\x13.py:457, in x13_arima_analysis(endog, maxorder, maxdiff, diff, exog, log, outlier, trading, forecast_periods, retspec, speconly, start, freq, print_stdout, x12path, prefer_x13, tempdir)
455 print(p.stdout.read())
456 # check for errors
--> 457 errors = _open_and_read(ftempout.name + '.err')
458 _check_errors(errors)
460 # read in results
File C:\ProgramData\Anaconda3\lib\site-packages\statsmodels\tsa\x13.py:206, in _open_and_read(fname)
204 def _open_and_read(fname):
205 # opens a file, reads it, and make sure it's closed
--> 206 with open(fname, 'r', encoding="utf-8") as fin:
207 fout = fin.read()
208 return fout
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\usr\\AppData\\Local\\Temp\\tmp0d23o_q8.err'
</code></pre>
<p>What am I doing wrong here? Am I donwloading the wrong Seasonal Adjustment Program? Or using it wrong. I found little help online. <a href="https://stackoverflow.com/questions/36945324/python-statsmodels-x13-arima-analysis-attributeerror-dict-object-has-no-att">This</a> solution from SO did not helt me.</p>
|
<python><statsmodels><arima>
|
2022-12-16 19:43:37
| 1
| 507
|
NRVA
|
74,829,045
| 5,319,229
|
subsetting by two conditions (True & False) evaluating to (True)
|
<pre><code>import pandas as pd
d = {'col1':[1, 2, 3, 4, 5], 'col2':[5, 4, 3, 2, 1]}
df = pd.DataFrame(data=d)
df[(df['col1'] == 1) | (df['col1'] == df['col1'].max()) & (df['col1'] > 2)]
</code></pre>
<p>Why doesn't this filter out the first row? Where col1 is less than 2?</p>
<p>I'm getting this:</p>
<pre><code> col1 col2
0 1 5
4 5 1
</code></pre>
<p>Expecting this:</p>
<pre><code> col1 col2
4 5 1
</code></pre>
|
<python><pandas>
|
2022-12-16 19:39:07
| 1
| 3,226
|
Rafael
|
74,829,028
| 5,213,521
|
detect socket timeout when using "with" instead of "try"
|
<p>Using python 3.6.8, I have the following code</p>
<pre><code>with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.settimeout(20)
s.connect(host, port)
</code></pre>
<p>As I understand it, using <code>with</code> as shown instead of a <code>try</code> <code>except</code> block better handles the error condition that occurs if opening the socket (<code>s.connect()</code>) fails. How so? In the code as shown, if <code>s.connect()</code> times out, will it print a message? My main question is, can I detect that it timed out, e.g. to run some code if a timeout occurs? I want to be able to do that from within the <code>with</code> block without using <code>try</code>. I read this example</p>
<pre><code>with opened_w_error("/etc/passwd", "a") as (f, err):
if err:
print "IOError:", err
else:
f.write("guido::0:0::/:/bin/sh\n")
</code></pre>
<p>But that doesn't detect the type of error. I read this as well</p>
<pre><code>try:
with ContextManager():
BLOCK
except InitError: # raised from __init__
...
except AcquireResourceError: # raised from __enter__
...
except ValueError: # raised from BLOCK
...
except ReleaseResourceError: # raised from __exit__
</code></pre>
<p>But I'm hoping there is a way to detect if it's a timeout error from within the <code>with</code> block.</p>
|
<python>
|
2022-12-16 19:37:13
| 0
| 382
|
Consumer of Cat Content
|
74,829,002
| 327,038
|
Is there a way to use pytest.raises in pytest-bdd "When" steps?
|
<p>I would like to define a scenario as follows:</p>
<pre class="lang-gherkin prettyprint-override"><code>Scenario: An erroneous operation
Given some data
And some more data
When I perform an operation
Then an exception is raised
</code></pre>
<p>Is there a good way to do this so that the <code>when</code> step isn't actually called until a <code>pytest.raises</code> context manager can be constructed for it? I could use a callable fixture, but that's going to pollute all other uses of the <code>"I perform an operation"</code> step.</p>
|
<python><pytest><pytest-bdd>
|
2022-12-16 19:34:16
| 1
| 9,487
|
asthasr
|
74,828,993
| 15,781,591
|
how to build multiple dropdown prompt function using IPyWidget?
|
<p>I am trying to build a simple tool in python here that simply asks the user to select a fruit type and then a color type from two drop down options and then, based on the user input, print a string that reads: "I would love to try a <em>color</em> <em>fruit</em>!". I am using IPyWidget for the dropdown functionality. I am trying to accomplish this in a Jupyter Notebook.</p>
<p>Here is what I have so far:</p>
<pre><code>import ipywidgets as widgets
country_list=['Banana', 'Apple','Lemon','Orange']
color_list=['Blue', 'Red', 'Yellow']
def update_dropdown(change):
print(change.new)
drop1 = widgets.Dropdown(options=country_list, value='Banana', description='Fruit:')
drop1.observe(update_dropdown,names='value')
display(drop1)
drop2 = widgets.Dropdown(options=color_list, value='Blue', description='Color:')
drop2.observe(update_dropdown,names='value')
display(drop2)
</code></pre>
<p>This directly produces these two dropdown, which is what I want to see:
<a href="https://i.sstatic.net/m7UhF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m7UhF.png" alt="enter image description here" /></a></p>
<p>However, I am a bit unsure about the functionality.
My dropdown options list the first two potential choices, and so below these dropdowns, I want to see the string "I would love to try a blue banana!" printed. And so I want each dropdown option to be stored in a variable, fruit and color, so that there is a dynamic/changing print function that responds directly to the dropdown options chosen. And so, once the user for example selects "Apple", the print statement would change to "I would love to try a blue apple!", and once the user then chooses "Yellow", the print statement would change to "I would love to try a yellow apple!", and so forth, only showing a single print statement at a time.</p>
<p>My confusion with accomplishing this, is I am not sure how to get those fruit and color variables set up, since they both are stored in 'change.new' in my code, within each dropdown function. When I try to run 'print(change.new)' after I just get an error reading: 'NameError: name 'change' is not defined', which tells me that outside of the dropdown code, "change" is not recognized.</p>
<p>How can I use my dropdown functionality to store both dropdown options as unique variables that can be used to to print unique strings in a subsequent print statement?</p>
|
<python><jupyter-notebook><ipywidgets>
|
2022-12-16 19:32:11
| 1
| 641
|
LostinSpatialAnalysis
|
74,828,731
| 1,411,376
|
When kuberentes terminates a pod, it kills any ongoing pyodbc calls, even when the parent python process is handling SIGTERM etc
|
<p>I have a python docker container running in kuberetes. The code's general workflow is that it receives messages and then kicks off a series of long-running SQL Server statements via pyodbc.</p>
<p>My goal is to increase the kubernetes timeout and intercept the shutdown signal so that we can finish our SQL statements before shutting down the pod.
First, I set <code>terminationGracePeriodSeconds: 1800</code> in my k8s.yaml file to make sure the pod has time to finish running the queries.</p>
<p>I've set up the code in python:</p>
<pre class="lang-py prettyprint-override"><code> def graceful_shutdown(_signo, _stack_frame):
_logger.info(
f"Received {_signo}, attempting to let the queries finish gracefully before shutting down."
)
# shutdown_event is a threading.Event object
shutdown_event.set()
</code></pre>
<p>And then in the main execution loop:</p>
<pre class="lang-py prettyprint-override"><code> signal.signal(signal.SIGTERM, graceful_shutdown)
signal.signal(signal.SIGINT, graceful_shutdown)
signal.signal(signal.SIGHUP, graceful_shutdown)
</code></pre>
<p>(Tangentially, a good discussion as to why I get SIGHUP and not SIGTERM here: <a href="https://stackoverflow.com/a/71281122/1411376">1</a>)</p>
<p>This works as expected. I get the <code>SIGHUP</code> signal, the <code>Threading.Event</code> <code>shutdown_event</code> object tells my main execution loop to terminate, everything works...</p>
<p>...Except when one of those SQL queries is actually running. In that case, my <code>graceful_shutdown()</code> method doesn't get called. What happens is that the pyodbc connection throws the following error:</p>
<blockquote>
<p>pyodbc.OperationalError: ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x2714 (10004) (SQLExecDirectW)')</p>
</blockquote>
<p>This error is a communication link failure caused by Kubernetes telling the pod to terminate.</p>
<p>My question is, can I prevent this error from being raised so that pyodbc can finish running the query?</p>
|
<python><sql-server><docker><kubernetes><pyodbc>
|
2022-12-16 19:00:50
| 0
| 795
|
Max
|
74,828,698
| 143,684
|
Python type checking: cannot assign to a dict
|
<p>I get the following error message in my Python code when assigning something to a dict:</p>
<pre><code>Argument of type "dict[Unknown, Unknown]" cannot be assigned to parameter "__value" of type "str" in function "__setitem__"
"dict[Unknown, Unknown]" is incompatible with "str"
</code></pre>
<p>The code:</p>
<pre class="lang-py prettyprint-override"><code>async def notify(self, method: str, params: Optional[dict] = None):
reqDict = {"jsonrpc": "2.0", "method": method}
if params is not None:
reqDict["params"] = params
# ^^^^^^^^^^^^^^^ error
</code></pre>
<p>I don't understand why that is. The following code works fine:</p>
<pre class="lang-py prettyprint-override"><code>async def notify(self, method: str, params: Optional[dict] = None):
id = 1
reqDict = {"jsonrpc": "2.0", "id": id, "method": method}
if params is not None:
reqDict["params"] = params
</code></pre>
<p>Not much difference, just the extra "id" key in the dict.</p>
<p>What is wrong here? My code or the type checker? How can I fix this?</p>
<p>This appears in VSCode 1.74.1 on Windows 10 with the Python extension and Python 3.9.6.</p>
|
<python><python-typing><pyright>
|
2022-12-16 18:56:36
| 0
| 20,704
|
ygoe
|
74,828,666
| 5,237,560
|
Why doesn't spawned process start
|
<p>I'm having issues using Python's multiprocessing module. Here is a (Very) simplified version of the issue i'm having trying to use a multiprocessing queue to communicate between the spawned process and the main process:</p>
<pre><code>from time import sleep
import multiprocessing as mp
# add number to queue Q every 2 second, then None
def func(Q):
for i in range(1,11):
Q.put(i)
sleep(2)
Q.put(None)
# start the process to add numbers
Q = mp.Queue()
P = mp.Process(target=func, args=(Q,))
P.start()
# place some initial values in the queue
for n in range(10,15):
sleep(0.5)
Q.put(n)
# wait a bit to let process start
print("waiting...")
sleep(2)
# get everything from the queue up to None
while True:
n = Q.get()
if n is None: break
print(n)
</code></pre>
<p>The process (func) should be adding items to the queue but it is either not started or is getting stuck somehow.</p>
<p>The output i'm getting is:</p>
<pre><code>waiting...
10
11
12
13
14
</code></pre>
<p>I would expect 1 through 10 to come out somewhere in there and the loop to eventually end (but it doesn't)</p>
<p>What am I missing ?</p>
<p>By the way, I'm doing this on MacOS Monterey (12.6) with Python 3.7. on a Macbook Pro (2.9 GHz 6-Core Intel Core i9)</p>
<p>I also tried a bunch of simpler mutiprocessing examples from the net and none of them seem to work either (but most of them try to print from the child process which is not an issue I want/need to deal with).</p>
<p><strong>SOLVED</strong></p>
<p>I finally figured it out. MacOS only has "spawn" as the default start method from Python 3.8 (so my assumptions were wrong on that). Also I should have used the <code>if __name__ == '__main__':</code> condition in my oversimplified example because spawn actually starts a new interpreter and was executing the process creation code recursively. Introducing a delay in the feeding of the 11..14 numbers in the queue made it clearer that this was sufficient to make it work (I initially thought P.join() was the culprit).</p>
<p>Here is the code that works with my simplified example:</p>
<pre><code>from time import sleep
import multiprocessing as mp
def func(Q):
for i in range(1,11):
Q.put(i)
sleep(1)
Q.put(None)
if __name__ == "__main__":
mp.set_start_method("spawn")
# start the process to add numbers
Q = mp.Queue()
P = mp.Process(target=func, name="child", args=(Q,),daemon=True)
P.start()
# place some initial values in the queue
for n in range(10,15):
sleep(0.5)
Q.put(n)
# wait a bit to let process start
print("waiting...")
sleep(2)
# get everything from the queue up to None
while True:
n = Q.get()
if n is None: break
print(n)
</code></pre>
|
<python><multiprocessing>
|
2022-12-16 18:52:16
| 2
| 42,197
|
Alain T.
|
74,828,640
| 4,863,700
|
Stable Diffusion issue on intel mac: connecting the weights/model and connecting to the model.ckpt file
|
<p>I'm trying to get a command line version of Stable Diffusion up and running on Mac Intel from the following repo: <a href="https://github.com/cruller0704/stable-diffusion-intel-mac" rel="nofollow noreferrer">https://github.com/cruller0704/stable-diffusion-intel-mac</a></p>
<p>I'm getting the error:
<code>Too many levels of symbolic links: 'models/ldm/stable-diffusion-v1/model.ckpt'</code> I don't 100% know what it means (I mean I kind of do but not really), or how to fix it.</p>
<p>I created and activated a conda environment with:</p>
<pre><code>conda env create -f environment.yaml
conda activate ldm
</code></pre>
<p>I then did this:</p>
<pre><code>mkdir -p models/ldm/stable-diffusion-v1/
ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt
</code></pre>
<p>I didn't do anything with the following and I guess this is the problem as the above maybe needs to link to this, but renamed? <a href="https://github.com/cruller0704/stable-diffusion-intel-mac#weights" rel="nofollow noreferrer">https://github.com/cruller0704/stable-diffusion-intel-mac#weights</a> I don't know what to do with this, what do I do? It talks about these being weights, but are these the model or just the weights? and... what do I do with them? I clicked through some links there and couldn't figure out how to download anything. Any suggestions what to do next? Thank you!</p>
<p>**edit: ok that link says: <code>We currently provide the following checkpoints:</code> So, it's not just the weights it's the model in it's current state as well I guess. But how do I get them? They're not in the downloaded zip.</p>
|
<python><artificial-intelligence><stable-diffusion>
|
2022-12-16 18:49:03
| 1
| 4,550
|
Agent Zebra
|
74,828,633
| 10,819,464
|
Selenium Python: Hidden Input Field Not Interactable
|
<p>I'm working on selenium python (modifying zap auth repo) trying to pass login page that has hidden field for the password. So the login flow would be, <strong>Insert Email</strong> > <strong>Click Button "Continue"</strong> > <strong>(password field comes up)</strong> <strong>Insert Password</strong> > <strong>Click Button "Login"</strong>. Each button has different button id and this flow <strong>working on the same page</strong>.</p>
<p>I've tried using actionchain, find_element and execute_script to fill the field as well, but seems not found any solutions.</p>
<p>here's my part of code</p>
<pre><code>if self.config.auth_password:
try:
# ActionChains(self.driver).click("Continue").perform()
self.fill_password()
except Exception:
logging.warning(
'Did not find the password field - clicking Next button and trying again')
button = self.driver.find_element_by_id("Continue")
button.click()
# if the password field was not found, we probably need to submit to go to the password page
# login flow: username -> next -> password -> submit
self.fill_password()
</code></pre>
<p>This code always returning <code>selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable</code>, seems like ignoring the button click before.</p>
<p>This code returning</p>
<pre><code>2022-12-16 18:42:21,099 Found element password by name
Traceback (most recent call last):
File "/zap/zap_auth.py", line 203, in login
self.fill_password()
File "/zap/zap_auth.py", line 262, in fill_password
return self.find_and_fill_element(self.config.auth_password,
File "/zap/zap_auth.py", line 280, in find_and_fill_element
element.clear()
File "/usr/local/lib/python3.9/dist-packages/selenium/webdriver/remote/webelement.py", line 95, in clear
self._execute(Command.CLEAR_ELEMENT)
File "/usr/local/lib/python3.9/dist-packages/selenium/webdriver/remote/webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "/usr/local/lib/python3.9/dist-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.9/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
(Session info: headless chrome=108.0.5359.98)
</code></pre>
<p>Any suggestions about this case? once again my login page flow, <strong>Insert Email</strong> > <strong>Click Button "Continue"</strong> > <strong>(password field comes up)</strong> <strong>Insert Password</strong> > <strong>Click Button "Login"</strong></p>
<p><strong>UPDATE</strong></p>
<p>Sadly I can't afford the url due to private working website. But I hope this inspect element on this page would help. This is before continue button clicked...</p>
<p><a href="https://i.sstatic.net/rmVQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rmVQE.png" alt="image1" /></a></p>
<p>After I clicked the Continue button, the field will be unhide and button id change to login. In this link <a href="https://github.com/ICTU/zap2docker-auth-weekly/blob/master/zap_auth.py" rel="nofollow noreferrer">zap</a> is the source code and handler the login flow.</p>
|
<python><selenium><selenium-webdriver>
|
2022-12-16 18:47:43
| 0
| 467
|
Dhody Rahmad Hidayat
|
74,828,478
| 10,681,595
|
Convert Row values of a column into multiple columns by value count with Dask Dataframe
|
<p>Using the pandas library, this operation is very quick to be performed.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import dask.dataframe as dd
df = pd.DataFrame(columns=['name','contry','pet'],
data=[['paul', 'eua', 'cat'],
['pedro', 'brazil', 'dog'],
['paul', 'england', 'cat'],
['paul', 'england', 'cat'],
['paul', 'england', 'dog']])
def pre_transform(data):
return (data
.groupby(['name', 'country'])['pet']
.value_counts()
.unstack()
.reset_index()
.fillna(0)
.rename_axis([None], axis=1)
)
pre_transform(df_exp)
</code></pre>
<p>output:</p>
<pre><code>| | name | country | cat | dog |
|---|-------|---------|-----|-----|
| 0 | paul | england | 2.0 | 1.0 |
| 1 | paul | eua | 1.0 | 0.0 |
| 2 | pedro | brazil | 0.0 | 1.0 |
</code></pre>
<p>But to apply this operation in a dataset with hundreds of gbs, there is no RAM to do this operation with Pandas.</p>
<p>A palliative alternative would be to use pandas through iterations with the chunksize parameter while reading the data.</p>
<pre class="lang-py prettyprint-override"><code>concat_df = pd.DataFrame()
for chunk in pd.read_csv(path_big_file, chunksize=1_000_000):
concat_df = pd.concat([concat_df, pre_transform(chunk)])
merged_df = concat_df.reset_index(drop=True).groupby(['name', 'country']).sum().reset_index()
display(merged_df)
</code></pre>
<p>But pursuing more efficiency, I tried to replicate the same operation with the Dask lib.</p>
<p>My efforts led me to produce the function below, which despite reaching the same result, is VERY inefficient in processing time.</p>
<p>Bad Dask approach:</p>
<pre class="lang-py prettyprint-override"><code>
def pivot_multi_index(ddf, index_columns, pivot_column):
def get_serie_multi_index(data):
return data.apply(lambda x:"_".join(x[index_columns].astype(str)), axis=1,meta=("str")).astype('category').cat.as_known()
return (dd
.merge(
(ddf[index_columns]
.assign(FK=(lambda x:get_serie_multi_index(x)))
.drop_duplicates()),
(ddf
.assign(FK=(lambda x:get_serie_multi_index(x)))
.assign(**{pivot_column:lambda x: x[pivot_column].astype('category').cat.as_known(),
f'{pivot_column}2':lambda x:x[pivot_column]})
.pivot_table(index='FK', columns=pivot_column, values=f'{pivot_column}2', aggfunc='count')
.reset_index()),
on='FK', how='left')
.drop(['FK'], axis=1)
)
ddf = dd.from_pandas(df_exp, npartitions=3)
index_columns = ['name','country']
pivot_column = 'pet'
merged = pivot_multi_index(ddf, index_columns, pivot_column)
merged.compute()
</code></pre>
<p>output</p>
<pre><code>| | name | country | cat | dog |
|---|-------|---------|-----|-----|
| 0 | paul | eua | 1.0 | 0.0 |
| 1 | pedro | brazil | 0.0 | 1.0 |
| 2 | paul | england | 2.0 | 1.0 |
</code></pre>
<p>But by applying the above function to a large dataset, it was much slower to run than using pandas by iteration via chunk size.</p>
<p>The question remains:</p>
<p>Given the operation of convert Row values of a column into multiple columns by value count,
what would be the most efficient way to achive this goal using the Dask library?</p>
|
<python><pandas><dataframe><group-by><dask>
|
2022-12-16 18:30:32
| 1
| 442
|
the_RR
|
74,827,982
| 1,394,697
|
Using a buffer to write a psycopg3 copy result through pandas
|
<p>Using <code>psycopg2</code>, I could write large results as CSV using <code>copy_expert</code> and a <code>BytesIO</code> buffer like this with <code>pandas</code>:</p>
<pre class="lang-py prettyprint-override"><code>copy_sql = "COPY (SELECT * FROM big_table) TO STDOUT CSV"
buffer = BytesIO()
cursor.copy_expert(copy_sql, buffer, size=8192)
buffer.seek(0)
pd.read_csv(buffer, engine="c").to_excel(self.output_file)
</code></pre>
<p>However, I can't figure out how to replace the <code>buffer</code> in <code>copy_expert</code> with <code>psycopg3</code>'s new copy command. Has anyone figured out a way to do this?</p>
|
<python><pandas><postgresql><psycopg2><psycopg3>
|
2022-12-16 17:39:55
| 2
| 14,401
|
FlipperPA
|
74,827,763
| 14,594,208
|
How to create all combinations of one column's items and choose one of the other columns each time?
|
<p>For instance, let's consider the following DataFrame:</p>
<pre><code> id metric_a metric_b
0 a 1 2
1 b 10 20
2 c 30 40
</code></pre>
<p>The resulting dataframe would consist of all the combinations of <code>id</code>, that is n<sup>2</sup> rows (square matrix).</p>
<p>In our case, since we have 3 unique ids, we would get 9 rows in total.</p>
<p>Now, given that each row is actually a pair <code>x-y</code> of ids, I'd like to have <code>metric_a</code> for <code>x</code> and <code>metric_b</code> for <code>y</code>, where <code>x</code> and <code>y</code> are simply the two ids of the given row.</p>
<p>To illustrate this:</p>
<pre><code> x y metric_a metric_b
0 a a 1 2
1 a b 1 20
2 a c 1 40
3 b a 10 2
4 b b 10 20
5 b c 10 40
6 c a 30 2
7 c b 30 20
8 c c 30 40
</code></pre>
<p>One way to achieve this is by first creating all possible combinations with <code>itertools.product</code> and then merging the initial dataframe two times.
First time, to bring the metric for <code>x</code> and the second time to bring the metric for <code>y</code>.</p>
<p>Another way that came to my mind is:</p>
<pre class="lang-py prettyprint-override"><code># creating all the combinations of ids
pd.DataFrame(list(itertools.product(df['id'], df['id'])))
# creating all the combinations of metrics
pd.DataFrame(list(itertools.product(df['metric_a'], df['metric_b'])))
# some more code to concat those two horizontally..
</code></pre>
<p>However, I think that there should be a more elegant solution that I can't think of at the moment.</p>
<p>Also, could ideas around using <code>MultiIndex.from_product</code> and then reindexing work?</p>
<p>Any help is more than welcome!</p>
|
<python><pandas>
|
2022-12-16 17:20:15
| 1
| 1,066
|
theodosis
|
74,827,534
| 7,134,235
|
How can I create a new dataframe out of a json column in a pyspark dataframe?
|
<p>I have a pyspark dataframe where some of the columns are nested json objects, because I created it from a jsonl file. The schema looks like this:</p>
<pre><code>root
|-- _corrupt_record: string (nullable = true)
|-- meeting: struct (nullable = true)
| |-- meeting_id: string (nullable = true)
| |-- meeting_name: string (nullable = true)
| |-- meeting_url: string (nullable = true)
| |-- time: long (nullable = true)
|-- category: struct (nullable = true)
| |-- city: string (nullable = true)
| |-- country: string (nullable = true)
| |-- category_id: long (nullable = true)
| |-- category_lat: double (nullable = true)
| |-- category_lon: double (nullable = true)
| |-- category_name: string (nullable = true)
| |-- category_state: string (nullable = true)
| |-- category_topics: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- topic_name: string (nullable = true)
| | | |-- urlkey: string (nullable = true)
| |-- category_urlname: string (nullable = true)
</code></pre>
<p>What I need to do is filter on some of the root level columns, and then create a new dataframe from the category_topics struct (this is a json object column). After this I imagine I would have to do a pivot and count in order to see how many times a topic appears. I have not found any examples of how to do this directly with dataframe operations, and I don't want to extract the column as a list and then create a new data frame out of it (unless that's the only possible way)</p>
<p>Any help would be much appreciated.</p>
|
<python><json><pyspark>
|
2022-12-16 16:56:23
| 0
| 906
|
Boris
|
74,827,473
| 9,212,995
|
What are other options available to generate DOIs without using extensions in CKAN?
|
<p>Well, I would like to know if there is any other options that I can use to generate <strong>DOI</strong>s for all or newly created datasets in <strong>CKAN</strong> if I am not using <strong>ckanext-doi</strong> extension. Can someone try to explain how this is possible.</p>
<p>As far as I know, <strong>DataCite</strong> or <strong>TIB</strong> provides this, with <strong>ckanext-doi</strong> extension you can use it to assign <code>digitial object identifiers</code> (<strong>DOI</strong>s) to the datasets, using the DataCite DOI service they offer. When a new dataset is created it is assigned a new DOI, and this is in the format of <a href="https://doi.org/%5Bprefix%5D/%5B8" rel="nofollow noreferrer">https://doi.org/[prefix]/[8</a> random alphanumeric characters].</p>
<p>If the new dataset is active and public, then, the DOI and metadata will be registered with <strong>DatCite</strong> or <strong>TIB</strong> but if the dataset is a draft or private, the DOI is not registered until it's made active and public, hence the DOI will be submitted. Doing this allows the datasets to be embargoed, but still provides a DOI to be referenced in publications.I can either auto-generate them using <strong>python</strong> and <strong>Datacite API</strong> or manually generate them from <strong>Fabrica</strong>.</p>
<p>But, the bad news is that for me to be able to test or have access to DOI service or use with this extension, I need to have an active account with them and having an account means to be a registered member with dataCite/TIB, until this is done, minting active DOIs is not be possible and I am not or my institution is a member of DataCite/TIB yet.</p>
<p>So, I'm inquiring if there is any other options available with ckan other than using <strong>DataCite</strong> or <strong>TIB</strong> to generate <strong>DOI</strong>s,</p>
<p>Thanks.</p>
|
<python><metadata><ckan><metadata-extractor><doi>
|
2022-12-16 16:50:39
| 1
| 372
|
Namwanza Ronald
|
74,827,466
| 7,800,760
|
Stanford Stanza NLP to networkx: superimpose NER entities onto graph of words
|
<p>Here is a sample program which will take a text (example is in italian but Stanza supports many languages) and builds and displays a graph of the words (only certain Parts of Speech) and their syntactic relationships:</p>
<pre><code>"""
Sample program to analyze a phrase with Stanfords STANZA and
build a networkx graph selecting certain token types
"""
import stanza
import networkx as nx
import matplotlib.pyplot as plt
IN_TXT = "All'ippica disse nel 1999 Ignazio Larussa. Poi a Taranto svenne!"
LANG = "it"
ALLOWED_POS = ["NOUN", "PROPN", "VERB"]
NXDOPTS = {
"node_color": "orange",
"edge_color": "powderblue",
"node_size": 400,
"width": 2,
}
# download appropriate language model
stanza.download(LANG, verbose=False)
nlp = stanza.Pipeline(
LANG,
processors="tokenize, pos, lemma, mwt, ner, depparse",
verbose=False,
use_gpu=False,
)
# generate a STANZA document from the given text via the pipeline
doc = nlp(IN_TXT)
for sentence in doc.sentences:
# initialize G as MultiDiGraph for every sentence
G = nx.DiGraph()
# fill the graph with the relevant data
G.add_node(0, id=0, text="ROOT", lemma="ROOT", upos="ROOT") # fictitious ROOT node
for word in [w for w in sentence.words if w.pos in ALLOWED_POS]:
wordict = word.to_dict()
G.add_node(wordict["id"], **wordict)
G.add_edge(wordict["id"], wordict["head"], label=wordict["deprel"])
# compute graph drawing parameters
pos = nx.spring_layout(G)
nodelabels = nx.get_node_attributes(G, "text")
edgelabels = nx.get_edge_attributes(G, "label")
# and now display the graphs
nx.draw(G, pos, with_labels=True, labels=nodelabels, **NXDOPTS)
nx.draw_networkx_edge_labels(G, pos, edge_labels=edgelabels)
plt.margins(0.2)
plt.suptitle(sentence.text)
plt.show()
# clear the graph to repeat for another sentence
G.clear()
</code></pre>
<p>and here is the resulting graph:
<a href="https://i.sstatic.net/cpLSj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cpLSj.png" alt="enter image description here" /></a>
The NLP pipeline also performs a NER on the same text and here are the two recognized name entities (a person and a place:</p>
<pre><code>[{
"text": "Ignazio Larussa",
"type": "PER",
"start_char": 26,
"end_char": 41
}, {
"text": "Taranto",
"type": "LOC",
"start_char": 49,
"end_char": 56
}]
</code></pre>
<p>I would like to be able to map these two on the graph above.</p>
<p>In Taranto's case where the NER span equals the word span, I guess I just need to map the NER span onto the right word start and end chars.</p>
<p>In "Ignazio Larussa"'s case things are more tricky, since the NER span encompasses TWO words. So in this case I want to show only ONE node with the NER span "Ignazio Larussa" connected via the nsubj edge to the "disse" verb.</p>
<p>How would you handle networkx G graph to achieve this? Thanks a lot</p>
|
<python><nlp><networkx><stanford-nlp>
|
2022-12-16 16:49:39
| 0
| 1,231
|
Robert Alexander
|
74,827,380
| 3,450,163
|
Convert 1-D array to upper triangular square matrix (anti-diagonal) in numpy
|
<p>I have an array as below:</p>
<pre><code>arr = np.numpy([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21])
</code></pre>
<p>What I would like to do is to convert this array to an upper triangular square matrix anti-diagonally. The expected output is like below:</p>
<pre><code>output = [
[1, 2, 3, 4, 5, 6],
[7, 8, 9, 10, 11, 0],
[12, 13, 14, 15, 0, 0],
[16, 17, 18, 0, 0, 0],
[19, 20, 0, 0, 0, 0],
[21, 0, 0, 0, 0, 0]
]
</code></pre>
<p>the approach that I followed is to create a square matrix that has all zeros and update the indexes using <code>triu_indices</code>.</p>
<pre><code>tri = np.zeros((6, 6))
tri[np.triu_indices(6, 1)] = arr
</code></pre>
<p>However, this gives me the error:
<code>ValueError: shape mismatch: value array of shape (21,) could not be broadcast to indexing result of shape (15,)</code></p>
<p>Please note that the sizes of the 1-d and matrix will be always 21 and 6x6 respectively.</p>
<p>Not sure where I make the mistake. Is it right way to go? Or is there a better approach instead of this? I would appreciate for any help.</p>
|
<python><arrays><numpy>
|
2022-12-16 16:42:29
| 2
| 3,097
|
GoGo
|
74,827,320
| 4,171,008
|
AtributeError: can't set attribute for python list property
|
<p>I'm working with the <code>python-docx</code> library from a forked <a href="https://pypi.org/project/bayoo-docx/" rel="nofollow noreferrer">version</a>, and I'm having an issue with editing the elements list as it is defined as a property.</p>
<pre class="lang-py prettyprint-override"><code># docx.document.Document
@property
def elements(self):
return self._body.elements
</code></pre>
<p>I tried to go with the solution mentioned <a href="https://stackoverflow.com/a/34179815/4171008">here</a> but the error <em><code>AtributeError: can't set attribute</code></em> still popping out.
<br>Next thing I tried is adding the <code>setter</code> to the attribute derived from <code>self._body</code> and editing the code:</p>
<pre class="lang-py prettyprint-override"><code># docx.blkcntnr.BlockItemContainer
@property
def elements(self):
"""
A list containing the elements in this container (paragraph and tables), in document order.
"""
return [element(item,self.part) for item in self._element.getchildren()]
</code></pre>
<p>I've tried to add the <code>setter</code> in both levels but ended up again with the error <em><code>AtributeError: can't set attribute</code></em></p>
<p><strong>The <code>setter</code> I wrote:</strong></p>
<pre class="lang-py prettyprint-override"><code>@elements.setter
def elements(self, value):
return value
</code></pre>
<p>The implementation I tired:</p>
<pre class="lang-py prettyprint-override"><code>elements_list = docx__document.elements
elem_list = []
docx__document.elements = elements_list = elem_list
</code></pre>
<p>The main problem with that code is <code>docx__document.elements</code> still contains all the elements that are supposed to have been deleted!</p>
<p>Editing the library was like this:</p>
<pre class="lang-py prettyprint-override"><code># Inside docx.document.Document
@property
def elements(self):
return self._body.elements
@elements.setter
def elements(self, value=None):
self._body.elements = value
gc.collect()
return value
</code></pre>
<p>The other part:</p>
<pre class="lang-py prettyprint-override"><code># Inside docx.blkcntnr.BlockItemContainer
@property
def elements(self):
"""
A list containing the elements in this container (paragraph and tables), in document order.
"""
return [element(item,self.part) for item in self._element.getchildren()]
@elements.setter
def elements(self, value):
"""
A list containing the elements in this container (paragraph and tables), in document order.
"""
return value
</code></pre>
<h2>Related question [Update]</h2>
<hr />
<p>If I did add a <code>setter</code> for this <code>property</code> :</p>
<pre><code># docx.document.Document
@property
def elements(self):
return self._body.elements
</code></pre>
<p>Should I add also a <code>setter</code> for the <code>property</code>:</p>
<pre class="lang-py prettyprint-override"><code># docx.blkcntnr.BlockItemContainer
@property
def elements(self):
"""
A list containing the elements in this container (paragraph and tables), in document order.
"""
return [element(item,self.part) for item in self._element.getchildren()]
</code></pre>
<p>Because the value of <code>document.elemetns</code> is actually the value from <code>document._body.elements</code>, am I right?</p>
<p>Any help would appreciate it!</p>
|
<python><python-3.x><list><python-decorators><python-docx>
|
2022-12-16 16:36:41
| 2
| 1,884
|
Ahmad
|
74,827,286
| 10,190,191
|
How to read bytes type from bigquery in Java?
|
<p>We have a legacy dataflow job in Scala which basically reads from Bigquery and then dumps it into Postgres. <br>
In Scala we read from bigquery, map it onto a case class and then dump it into Postgres, and it works perfectly for bigquery's <code>Bytes</code> type as well.<br>
The Schema we read from BQ into has an <code>Array[Byte]</code> field in Scala, we use <code>.setBytes</code> function to dump it into postgres table in the relevant <code>Bytea</code> column.<br></p>
<p>Now we are migrating that job to Java, we are not using type case classes this time and the read from bigquery returns as <code>com.google.api.services.bigquery.model.TableRow</code> object, for all the other field types it works as expected but I am having issues with the <code>Bytes</code> type. <br></p>
<p>When I do</p>
<pre><code>insetQuery.setBytes(3, row.get('bytes_type_column'))
</code></pre>
<p>it says that <code>setBytes</code> column expects bytes, while <code>row.get('bytes_type_column')</code> is an object. Now, if I do <code>row.get('bytes_type_column').toString().getBytes()</code>, it works fine but it seems like the content of the original bytes columns is changed and I can not use it after reading from Postgres.<br>
It seems to me that <code>.toString()</code> messes up the bytes and changes into some Java string converting which to bytes messes up the original form.<br></p>
<p>The other approach I tried was</p>
<pre><code>insetQuery.setBytes(3, (byte[])row.get('bytes_type_column'))
</code></pre>
<p>which also seems to have changed the content of the column.<br>
Had the same issue when I tried this <a href="https://stackoverflow.com/a/2836659/10190191">answer</a>.<br></p>
<p>I have almost no experience with Java, can someone guide me here on how can I dump the BQ's byte column value I read as it is into Postgres without changing anything in it?
Thanks.</p>
<p>If it's helpful for anyone, the BQ's byte column is actually a <a href="https://docs.python.org/3/library/pickle.html" rel="nofollow noreferrer">pickled</a> python object, which I want to dump in Postgres, and then unpickle after reading in a Python application, if it's not being unpickled it means it wasn't dumped as it is.</p>
|
<python><java><google-bigquery><pickle>
|
2022-12-16 16:33:27
| 1
| 714
|
saadi
|
74,827,243
| 1,862,861
|
Cython numpy array view off by one when wraparound is False
|
<p>I have some Cython code where I fill in the last value in each row of a memory view of a NumPy array with a number. If I compile the code with <code>wraparound = False</code>, the last value in the final row of the array does not get filled in. However, if I set <code>wraparound = True</code> it does get filled in as expected. As far as I can tell, in my code having wraparound as either True or False should make no difference, but it obviously does. Does anyone know why?</p>
<p>A simplified version of the code that demonstrates this and can be run in a Jupyter notebook is below:</p>
<pre class="lang-py prettyprint-override"><code>%load_ext cython
</code></pre>
<p>Setting <code>wraparound = False</code></p>
<pre class="lang-py prettyprint-override"><code>%%cython
# cython: boundscheck = False
# cython: wraparound = False
import numpy as np
cimport numpy as np
def set_array(np.float32_t[:, :] buffer):
cdef Py_ssize_t i = 0
cdef Py_ssize_t n = buffer.shape[0]
for i in range(n):
# fill final value in row i with a 1
buffer[i, -1] = 1.0
print(np.asarray(buffer))
</code></pre>
<p>This gives:</p>
<pre><code>import numpy as np
x = np.zeros((3, 4), dtype=np.float32)
set_array(x)
</code></pre>
<pre><code>[[0. 0. 0. 1.]
[0. 0. 0. 1.]
[0. 0. 0. 0.]]
</code></pre>
<p>where the final row does not have a 1 at the end.</p>
<p>If I instead use:</p>
<pre><code># cython: wraparound = True
</code></pre>
<p>the final output is:</p>
<pre><code>[[0. 0. 0. 1.]
[0. 0. 0. 1.]
[0. 0. 0. 1.]]
</code></pre>
<p>which is what is expected.</p>
|
<python><numpy><cython>
|
2022-12-16 16:29:06
| 1
| 7,300
|
Matt Pitkin
|
74,827,216
| 10,934,417
|
ThreadPoolExecutor or multi-processing for Pandas DataFrame
|
<p>Asuume there will be a mega size dataframe, which has > <strong>1M</strong> columns. Based on the following toy dataframe.</p>
<pre><code>import pandas as pd
import numpy as np
from concurrent.futures import *
import multiprocessing
num_processes = multiprocessing.cpu_count()
print(f'num_precesses: {num_processes}')
def ReplaceItem(old_item, input_list, new_item = 'ham'):
output = []
for item in input_list:
if item == old_item:
output.append(new_item)
else:
output.append(item)
return output
food = pd.DataFrame({'customer':range(1,4),
'original':[['egg', 'berry', 'pork', 'tea'],
['chicken', 'beef', 'water', 'soda'],
['fish', 'chicken', 'pork', 'coffee']]})
replace_items = {'list1':'egg', 'list2':'chicken', 'list3':'fish'}
food
num_precesses: 10
customer original
0 1 [egg, berry, pork, tea]
1 2 [chicken, beef, water, soda]
2 3 [fish, chicken, pork, coffee]
</code></pre>
<p>The goal is to create a new dataframe which have update columns that replace items on the replace_items list with a NEW item 'ham'. The final table should be like below</p>
<pre><code>change_items, item_dict = [], dict()
for i in range(len(replace_items.values())):
change_items.append((list(replace_items.keys())[i],
food['original'].apply(lambda x: ReplaceItem(list(replace_items.values())[i], x))))
item_dict.update(change_items)
new_df = pd.DataFrame(item_dict)
update_df = pd.concat([food, new_df], axis = 1)
update_df
customer original list1 list2 list3
0 1 [egg, berry, pork, tea] [ham, berry, pork, tea] [egg, berry, pork, tea] [egg, berry, pork, tea]
1 2 [chicken, beef, water, soda] [chicken, beef, water, soda] [ham, beef, water, soda] [chicken, beef, water, soda]
2 3 [fish, chicken, pork, coffee] [fish, chicken, pork, coffee] [fish, ham, pork, coffee] [ham, chicken, pork, coffee]
</code></pre>
<p>I try to use multi-threading or multi-processing to speed up the whole process but don't know how to implement it correctly. Here, each new column indicates any of rows that have been replaced by a new item. Any suggestions? many thanks in advance</p>
|
<python><pandas>
|
2022-12-16 16:26:51
| 0
| 641
|
DaCard
|
74,827,212
| 13,839,945
|
Python add path of data directory
|
<p>I want to add a path to my data directory in python, so that I can read/write files from that directory without including the path to it all the time.</p>
<p>For example I have my working directory at <code>/user/working</code> where I am currently working in the file <code>/user/working/foo.py</code>. I also have all of my data in the directory <code>/user/data</code> where I want to excess the file <code>/user/data/important_data.csv</code>.</p>
<p>In <code>foo.py</code>, I could now just read the csv with pandas using</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_csv('../data/important_data.csv')
</code></pre>
<p>which totally works. I just want to know if there is a way to include <code>/user/data</code> as a main path for the file so I can just read the file with</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_csv('important_data.csv')
</code></pre>
<p>The only idea I had was adding the path via <code>sys.path.append('/user/data')</code>, which didnt work (I guess it only works for importing modules).</p>
<p>Is anyone able to provide any ideas if this is possible?</p>
<p>PS: My real problem is of course more complex, but this minimal example should be enough to handle my problem.</p>
|
<python><file><path>
|
2022-12-16 16:26:36
| 2
| 341
|
JD.
|
74,827,200
| 11,197,796
|
Groupby and assign operation result to each group
|
<pre><code>df = pd.DataFrame({'ID': ['A','A','A','A','A'],
'target': ['B','B','B','B','C'],
'length':[208,315,1987,3775,200],
'start':[139403,140668,141726,143705,108],
'end':[139609,140982,143711,147467,208]})
ID target length start end
0 A B 208 139403 139609
1 A B 315 140668 140982
2 A B 1987 141726 143711
3 A B 3775 143705 147467
4 A C 200 108 208
</code></pre>
<p>If I perform the operation:</p>
<pre><code>(df.assign(length=
df['start'].lt(df['end'].shift())
.mul(df['start']-df['end'].shift(fill_value=0))
.add(df['length'])))
</code></pre>
<p>I get the correct result but how do I apply this logic to every group in a groupby?</p>
<pre><code>for (a, b) in df.groupby(['start','end']):
(df.assign(length=
df['sstart'].lt(df['send'].shift())
.mul(df['sstart']-df['send'].shift(fill_value=0))
.add(df['length'])))
</code></pre>
<p>Leaves the dataframe unchanged?</p>
|
<python><pandas><dataframe>
|
2022-12-16 16:25:26
| 1
| 440
|
skiventist
|
74,827,163
| 9,394,364
|
Parsing a log file and ignoring text between two targets
|
<p>This question is a follow-up to my previous question here: <a href="https://stackoverflow.com/q/74818311/9394364">Parsing text and JSON from a log file and keeping them together</a></p>
<p>I have a log file, <code>your_file.txt</code> with the following structure and I would like to extract the timestamp, run, user, and json:</p>
<pre><code>A whole bunch of irrelevant text
2022-12-15 12:45:06 garbage, run: 1, user: james json:
[{"value": 30, "error": 8}]
</code></pre>
<p>Another stack user was helpful enough to provide this abridged code to extract the relevant pieces:</p>
<pre><code>import re
pat = re.compile(
r'(?ms)^([^,\n]+),\s*run:\s*(\S+),\s*user:\s*(.*?)\s*json:\n(.*?)$'
)
with open('your_file.txt', 'r') as f_in:
print(pat.findall(f_in.read()))
</code></pre>
<p>Which returns this value which is then processed further:</p>
<pre><code>[('2022-12-15 12:45:06 garbage', '1', 'james', '[{"value": 30, "error": 8}]')]
</code></pre>
<p>How can I amend the regex expression used to ignore the word "garbage" after the timestamp so that word is not included in the output of <code>pat.findall</code>?</p>
|
<python><regex>
|
2022-12-16 16:22:30
| 1
| 1,651
|
DJC
|
74,827,127
| 8,901,144
|
Pyspark Rolling Sum based on ID, timestamp and condition
|
<p>I have the following pyspark dataframe</p>
<pre><code>id timestamp col1
1 2022-01-01 0
1 2022-01-02 1
1 2022-01-03 1
1 2022-01-04 0
2 2022-01-01 1
2 2022-01-02 0
2 2022-01-03 1
</code></pre>
<p>I would like to get the cumulative sum of col1 for each ID and based on timestamp as an additional column and obtain something like this:</p>
<pre><code>id timestamp col1 cum_sum
1 2022-01-01 0 0
1 2022-01-02 1 1
1 2022-01-03 1 2
1 2022-01-04 0 2
2 2022-01-01 1 1
2 2022-01-02 0 1
2 2022-01-03 1 2
</code></pre>
<p>Probably a Window Function can work here but I am not sure how to count only when col1 is equal to 1.</p>
|
<python><pyspark><group-by><window-functions>
|
2022-12-16 16:19:01
| 1
| 1,255
|
Marco
|
74,827,051
| 16,988,223
|
How I can get the value of this json key called 'sentence'
|
<p>I want to extract the values of the key called 'sentence' of this json:</p>
<pre><code>{"title": "llamar | Definici\u00f3n | Diccionario de la lengua espa\u00f1ola | RAE - ASALE", "articles": [{"id": "NTReP1j", "lema": {"lema": "llamar", "index": 0, "female_suffix": ""}, "supplementary_info": [{"text": "Del lat. (lat\u00edn) clam\u0101re."}], "is": {"verb": true}, "definitions": [{"index": 1, "category": {"abbr": "tr.", "text": "verbo transitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [], "sentence": {"text": "Intentar captar la atenci\u00f3n de alguien mediante voces, ruidos o gestos."}, "examples": []}, {"index": 2, "category": {"abbr": "tr.", "text": "verbo transitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [{"abbr": "U. t. c. intr.", "text": "Usado tambi\u00e9n como intransitivo"}], "sentence": {"text": "Realizar las operaciones necesarias para establecer comunicaci\u00f3n telef\u00f3nica con alguien."}, "examples": [{"text": "La llam\u00e9, pero no estaba en casa."}, {"text": "Llama a su oficina."}]}, {"index": 3, "category": {"abbr": "tr.", "text": "verbo transitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [], "sentence": {"text": "Invocar, pedir auxilio a alguien."}, "examples": []}, {"index": 4, "category": {"abbr": "tr.", "text": "verbo transitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [{"abbr": "U. t. c. intr.", "text": "Usado tambi\u00e9n como intransitivo"}], "sentence": {"text": "Pedir a alguien que vaya a un lugar."}, "examples": [{"text": "Llamar al m\u00e9dico, a los refuerzos."}, {"text": "Llamar a reuni\u00f3n."}]}, {"index": 5, "category": {"abbr": "tr.", "text": "verbo transitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [], "sentence": {"text": "Despertar a alguien."}, "examples": []}, {"index": 6, "category": {"abbr": "tr.", "text": "verbo transitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [{"abbr": "U. t. c. intr.", "text": "Usado tambi\u00e9n como intransitivo"}], "sentence": {"text": "Incitar a alguien a que se comporte de una determinada manera."}, "examples": [{"text": "Llamar a la desobediencia civil."}]}, {"index": 7, "category": {"abbr": "tr.", "text": "verbo transitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [], "sentence": {"text": "Dar a alguien o algo como denominaci\u00f3n o calificativo la palabra o enunciado que se expresa."}, "examples": [{"text": "Ac\u00e1 llamamos celular a lo que all\u00e1 llaman m\u00f3vil."}, {"text": "Ahora llaman do\u00f1a Ana a Anita."}, {"text": "Lo llaman orgulloso."}]}, {"index": 8, "category": {"abbr": "tr.", "text": "verbo transitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [], "sentence": {"text": "Dar a alguien el tratamiento que se expresa."}, "examples": [{"text": "Ll\u00e1mame de t\u00fa."}]}, {"index": 9, "category": {"abbr": "tr.", "text": "verbo transitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [], "sentence": {"text": "Designar a alguien para ocupar un puesto, desempe\u00f1ar un cargo o ejercer un derecho."}, "examples": [{"text": "Fue llamada a suceder a su hermano."}]}, {"index": 10, "category": {"abbr": "tr.", "text": "verbo transitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [], "sentence": {"text": "Atraer a alguien o algo."}, "examples": [{"text": "El chocolate no me llama en absoluto."}]}, {"index": 11, "category": {"abbr": "intr.", "text": "verbo intransitivo"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [], "sentence": {"text": "Hacer una se\u00f1al sonora en una puerta, golpe\u00e1ndola o accionando un instrumento sonoro, para que alguien la abra."}, "examples": []}, {"index": 12, "category": {"abbr": "prnl.", "text": "verbo pronominal"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [], "sentence": {"text": "Tener el nombre o la denominaci\u00f3n que se expresa."}, "examples": []}, {"index": 13, "category": {"abbr": "prnl.", "text": "verbo pronominal"}, "is": {"adjective": false, "adverb": false, "interjection": false, "noun": false, "pronoun": false, "verb": true}, "abbreviations": [{"abbr": "Mar.", "text": "Marina"}, {"abbr": "desus.", "text": "desusado"}], "sentence": {"text": "Dicho del viento: Cambiar de direcci\u00f3n hacia la parte que se expresa."}, "examples": []}], "complex_forms": [], "other_entries": [{"text": "treta del llamar", "link": "https://dle.rae.es/?id=abLU9KP#40fzk3z"}], "conjugations": {"verb": "llamar", "conjugations": {"Formas no personales": {"Infinitivo": "", "Gerundio": "", "Participio": "", "": "llamado"}, "Indicativo": {"Presente": {"yo": "llamo", "t\u00fa / vos": ["llamas", "llam\u00e1s"], "usted": "llama", "\u00e9l, ella": "llama", "nosotros, nosotras": "llamamos", "vosotros, vosotras": "llam\u00e1is", "ustedes": "llaman", "ellos, ellas": "llaman"}, "Copret\u00e9rito": {"yo": "llamaba", "t\u00fa / vos": "llamabas", "usted": "llamaba", "\u00e9l, ella": "llamaba", "nosotros, nosotras": "llam\u00e1bamos", "vosotros, vosotras": "llamabais", "ustedes": "llamaban", "ellos, ellas": "llamaban"}, "Pret\u00e9rito": {"yo": "llam\u00e9", "t\u00fa / vos": "llamaste", "usted": "llam\u00f3", "\u00e9l, ella": "llam\u00f3", "nosotros, nosotras": "llamamos", "vosotros, vosotras": "llamasteis", "ustedes": "llamaron", "ellos, ellas": "llamaron"}, "Futuro": {"yo": "llamar\u00e9", "t\u00fa / vos": "llamar\u00e1s", "usted": "llamar\u00e1", "\u00e9l, ella": "llamar\u00e1", "nosotros, nosotras": "llamaremos", "vosotros, vosotras": "llamar\u00e9is", "ustedes": "llamar\u00e1n", "ellos, ellas": "llamar\u00e1n"}, "Pospret\u00e9rito": {"yo": "llamar\u00eda", "t\u00fa / vos": "llamar\u00edas", "usted": "llamar\u00eda", "\u00e9l, ella": "llamar\u00eda", "nosotros, nosotras": "llamar\u00edamos", "vosotros, vosotras": "llamar\u00edais", "ustedes": "llamar\u00edan", "ellos, ellas": "llamar\u00edan"}}, "Subjuntivo": {"Presente": {"yo": "llame", "t\u00fa / vos": "llames", "usted": "llame", "\u00e9l, ella": "llame", "nosotros, nosotras": "llamemos", "vosotros, vosotras": "llam\u00e9is", "ustedes": "llamen", "ellos, ellas": "llamen"}, "Futuro": {"yo": "llamare", "t\u00fa / vos": "llamares", "usted": "llamare", "\u00e9l, ella": "llamare", "nosotros, nosotras": "llam\u00e1remos", "vosotros, vosotras": "llamareis", "ustedes": "llamaren", "ellos, ellas": "llamaren"}, "Copret\u00e9rito": {}, "": {"yo": ["llamara", "llamase"], "t\u00fa / vos": ["llamaras", "llamases"], "usted": ["llamara", "llamase"], "\u00e9l, ella": ["llamara", "llamase"], "nosotros, nosotras": ["llam\u00e1ramos", "llam\u00e1semos"], "vosotros, vosotras": ["llamarais", "llamaseis"], "ustedes": ["llamaran", "llamasen"], "ellos, ellas": ["llamaran", "llamasen"]}}, "Imperativo": {"": {"t\u00fa / vos": ["llama", "llam\u00e1"], "usted": "llame", "vosotros, vosotras": "llamad", "ustedes": "llamen"}}}}}]}
</code></pre>
<p><a href="https://i.sstatic.net/SzV7J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SzV7J.png" alt="enter image description here" /></a></p>
<p>By the moment I only was able to convert the dic to json:</p>
<pre><code>from pyrae import dle
import json
res = dle.search_by_word(word='llamar')
res = res.to_dict()
json_string = json.dumps(res)
data = json.loads(json_string)
print(data['sentence']) # throws error
</code></pre>
<p>This library pyrae allows me to get meaning of a spanish word.</p>
<p>I will appreciate any idea guys to solve it,
thanks so much.</p>
|
<python><json>
|
2022-12-16 16:12:10
| 4
| 429
|
FreddicMatters
|
74,827,011
| 930,122
|
Python: reconstruct image from difference
|
<p>Using the following code I calculated the difference matrix between two images:</p>
<pre><code>import time
import cv2
from imutils.video import VideoStream
from skimage.metrics import structural_similarity
from skimage.color import rgb2gray
cap = VideoStream(src=0, framerate=30).start()
cap.stream.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.stream.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
time.sleep(2)
image1 = cap.read()
time.sleep(0.1)
image2 = cap.read()
image1_gray = rgb2gray(image1)
image2_gray = rgb2gray(image2)
score, difference = structural_similarity(image1_gray, image2_gray, full=True)
</code></pre>
<p>Now I need to apply this difference to the second image in order to extract a (480,640,3) RGB image, where the only colored pixels are the different ones extracted, and the others are "empty" (black or transparent maybe?).</p>
<p>Is there any way to do this since the difference calculated is a 1 channel image (480,640)?
I should also replace those pixels to the first image, any ideas?</p>
<p><strong>EDIT:</strong></p>
<p>I also tryed this approach:</p>
<pre><code>encode_params = [int(cv2.IMWRITE_JPEG_QUALITY), 80,
int(cv2.IMWRITE_JPEG_PROGRESSIVE), 1]
# to calculate diff:
diff = cv2.absdiff(frame1, frame2)
_, buffer = cv2.imencode('.jpg', diff, encode_params)
# to recontruct the image
diff = cv2.imdecode(buffer, cv2.IMREAD_COLOR)
prev_frame[diff != 0] = diff[diff != 0]
</code></pre>
<p>but the result is not as expected</p>
|
<python><opencv><difference><scikit-image>
|
2022-12-16 16:09:00
| 0
| 1,695
|
Lorenzo Sciuto
|
74,826,979
| 8,117,999
|
How to tag previous months with sequence
|
<p>Given a datframe:</p>
<pre><code>df = pd.DataFrame({'c':[0,1,1,2,2,2],'date':pd.to_datetime(['2016-01-01','2016-02-01','2016-03-01','2016-04-01','2016-05-01','2016-06-05'])})
</code></pre>
<p>How can I tag the latest month as M1, 2nd latest as M2 and then so on.</p>
<p>so for and example out looks like this:</p>
<pre><code>df = pd.DataFrame({'c':[0,1,1,2,2,2],'date':pd.to_datetime(['2016-01-01','2016-02-01','2016-03-01','2016-04-01','2016-05-01','2016-06-05']),
'tag':['M6', 'M5', 'M4', 'M3', 'M2', 'M1']})
</code></pre>
<pre><code>
+----+-------+-------------+----+
| | c | date |tag
+----+-------+-------------+----+
| 0 | 0 | 2016-01-01 | M6 |
| 1 | 1 | 2016-02-01 | M5 |
| 2 | 1 | 2016-03-01 | M4 |
| 3 | 2 | 2016-04-01 | M3 |
| 4 | 2 | 2016-05-01 | M2 |
| 5 | 2 | 2016-06-05 | M1 |
+----+-------+-------------+----+
</code></pre>
|
<python><pandas><dataframe>
|
2022-12-16 16:06:27
| 1
| 2,806
|
M_S_N
|
74,826,828
| 7,168,098
|
Python Pandas: using slice to build a multiindex slicing in pandas
|
<p>I have a double Multiindex dataframe as follows. I slice the rows with idx = pd.IndexSlice but I don't know how to do the same with the columns so provided this data:</p>
<pre><code>df = pd.DataFrame(data=pd.DataFrame(data=np.random.randint(0, 10, size=(9, 5))))
# rows
list1 = ['2021-01-01','2022-02-01','2022-03-01']
list2 = ['PHOTO', 'QUE','TXR']
combinations = [(x, y) for x in list1 for y in list2]
df.index = pd.MultiIndex.from_tuples(combinations, names = ["DATE","DB"])
df.index.set_names(["DATE","DB"], inplace=True)
#columns
list1c = [('AB30','ACTIVE','A2'),('CD55','ACTIVE','A1'),('ZT52','UNACTIVE','A2'),('MIKE','PENSIONER','A2'),('ZZ00001','ACTIVE','A1')]
df.columns = pd.MultiIndex.from_tuples(list1c, names = ["UserID","KIND","DEPARTMENT"])
</code></pre>
<p>I don't understand why the following does not work:</p>
<pre><code>idx_cols = (slice(None, None, None), slice(None, ['ACTIVE', 'UNACTIVE'], None), slice(None, ['A1'], None))
df.loc[:, idx_cols]
</code></pre>
<p>gives the error:</p>
<pre><code>UnsortedIndexError: 'MultiIndex slicing requires the index to be lexsorted: slicing on levels [1, 2], lexsort depth 0'
</code></pre>
<p>If I try:</p>
<pre><code>df.columns.levels
</code></pre>
<p>I get:</p>
<pre><code>FrozenList([['AB30', 'CD55', 'MIKE', 'ZT52', 'ZZ00001'], ['ACTIVE', 'PENSIONER', 'UNACTIVE'], ['A1', 'A2']])
</code></pre>
<p>so level 0 are the names, level 1 ['ACTIVE', 'PENSIONER', 'UNACTIVE'] and level 2 ['A1', 'A2']</p>
<p>How can I solve this problem?</p>
|
<python><pandas><slice><multi-index>
|
2022-12-16 15:54:02
| 1
| 3,553
|
JFerro
|
74,826,411
| 5,224,236
|
Listing blobs inside container in Azure
|
<p>I am able to download a single file from azure blob storage using python:</p>
<pre><code>from azure.storage.blob import BlobClient, ContainerClient
import pandas as pd
from io import StringIO
sas_url = 'https://tenant_datalake.blob.core.windows.net/filename.xml?sp=racwdymeop&st=2022-12-16T14:24:34Z&se=2022-12-16T22:24:34Z&sv=2021-06-08&sr=b&sig=xxxxx'
blob_client = BlobClient.from_blob_url(sas_url)
blob_data = blob_client.download_blob()
df = pd.read_csv(StringIO(blob_data.content_as_text())) # works fine
</code></pre>
<p>However, when I try to list blobs inside a virtual folder after defining an SAS url at virtual folder level, I get</p>
<pre><code># container level
container_sas_url = 'https://tenantdatalake.blob.core.windows.net/folder1/folder2?sp=racwdlmeop&st=2022-12-16T09:55:06Z&se=2022-12-16T17:55:06Z&sv=2021-06-08&sr=d&sig=xxxx&sdd=1'
container_client = ContainerClient.from_container_url(container_sas_url)
[*container_client.list_blobs('New')]
HttpResponseError: The requested URI does not represent any resource on the server.
RequestId:3e296e90-101e-00a3-4b5e-1105f9000000
Time:2022-12-16T14:57:54.6144977Z
ErrorCode:InvalidUri
</code></pre>
<p>Any help most welcome</p>
|
<python><azure><azure-blob-storage>
|
2022-12-16 15:19:50
| 1
| 6,028
|
gaut
|
74,826,333
| 2,707,864
|
Install Quantum Development Kit (QDK) in Ubuntu, without conda
|
<p>I mean to use QDK in Ubuntu 22.04LTS.
I have a virtualenv, but no conda.
I installed qsharp in my venv with</p>
<pre><code>$ pip3 install qsharp
</code></pre>
<p>but then</p>
<pre><code>$ python -c "import qsharp"
IQ# is not installed.
Please follow the instructions at https://aka.ms/qdk-install/python.
Traceback (most recent call last):
</code></pre>
<p>The link provided takes me to <a href="https://learn.microsoft.com/en-us/azure/quantum/install-overview-qdk?tabs=tabid-vscode%2Ctabid-conda" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/quantum/install-overview-qdk?tabs=tabid-vscode%2Ctabid-conda</a>.
The proper section is <a href="https://learn.microsoft.com/en-us/azure/quantum/install-overview-qdk?tabs=tabid-vscode%2Ctabid-conda#use-q-and-python-with-jupyter-notebooks" rel="nofollow noreferrer">this</a>, where it is stated that I could use conda to install the necessary components for a Juptyer Notebooks environment.</p>
<p>I understand this could be the <em>easiest</em> way to do it.
But can I install a QDK precompiled package without conda?</p>
<p><strong>Related</strong>:</p>
<ol>
<li><a href="https://github.com/microsoft/qsharp-runtime" rel="nofollow noreferrer">https://github.com/microsoft/qsharp-runtime</a></li>
<li><a href="https://github.com/microsoft/qsharp-compiler" rel="nofollow noreferrer">https://github.com/microsoft/qsharp-compiler</a></li>
<li><a href="https://github.com/microsoft/iqsharp/issues/102" rel="nofollow noreferrer">https://github.com/microsoft/iqsharp/issues/102</a></li>
<li><a href="https://stackoverflow.com/questions/62226431/jupyter-notebook-not-finding-iqsharp">Jupyter Notebook not finding IQSharp</a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/quantum/overview-what-is-qsharp-and-qdk" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/quantum/overview-what-is-qsharp-and-qdk</a></li>
</ol>
|
<python><q#><qdk>
|
2022-12-16 15:13:20
| 1
| 15,820
|
sancho.s ReinstateMonicaCellio
|
74,826,238
| 8,761,554
|
Assigning functional relu to a variable while inplace parameter is True
|
<p>If I want to do ReLU operation after my convolution on x, and in my code I do:</p>
<pre><code>x = F.leaky_relu(x, negative_slope=0.2, inplace=True)
</code></pre>
<p>Is this code wrong since I assign the relu to x variable while <code>inplace</code> is <code>True</code>? Ie. does it mean the ReLU function ran twice and in order to work correctly I should set the <code>inplace</code> to <code>False</code> or not assign to x?
Thank you</p>
|
<python><machine-learning><pytorch><in-place><relu>
|
2022-12-16 15:06:12
| 1
| 341
|
Sam333
|
74,826,213
| 17,654,424
|
Inserting items in a list with python while looping
|
<p>I'am trying to change the following code to get the following return :</p>
<p>"1 2 3 ... 31 32 33 34 35 36 37 ... 63 64 65"</p>
<pre><code>def createFooter2(current_page, total_pages, boundaries, around) -> str:
footer = []
page = 1
#Append lower boundaries
while page <= boundaries:
footer.append(page)
page += 1
#Append current page and arround
page = current_page - around
while page <= current_page + around:
footer.append(page)
page += 1
#Append upper boundaries
page = total_pages - boundaries + 1
while page <= total_pages:
footer.append(page)
page += 1
#Add Ellipsis if necessary
for i in range(len(footer)):
if i > 0 and footer[i] - footer[i - 1] > 1:
footer.insert(i, "...")
result = ' '.join(str(page) for page in result)
print(result)
return result
createFooter2(34, 65, 3, 3)
</code></pre>
<p>I want to insert an "..." between the pages if the next page is not directly next to it. However Iam having trouble inserting into the list.</p>
<p>How should I change the code to make it work ?</p>
|
<python><list><insert>
|
2022-12-16 15:04:22
| 2
| 651
|
Simao
|
74,825,983
| 1,169,091
|
It looks like a List but I can't index into it: ValueError: Length of values (2) does not match length of index (279999)
|
<p>I am importing the CSV file from here: <a href="https://raw.githubusercontent.com/kwartler/Harvard_DataMining_Business_Student/master/BookDataSets/LaptopSales.csv" rel="nofollow noreferrer">https://raw.githubusercontent.com/kwartler/Harvard_DataMining_Business_Student/master/BookDataSets/LaptopSales.csv</a></p>
<p>This code works:</p>
<pre><code>from dfply import *
import pandas as pd
df = pd.read_csv("LaptopSales.csv")
(df >> select(X["Date"]) >> mutate(AdjDate = (X.Date.str.split(" "))) >> head(3))
</code></pre>
<p>and produces this result:</p>
<pre><code> Date AdjDate
0 01-01-2008 00:01 [01-01-2008, 00:01]
1 01-01-2008 00:02 [01-01-2008, 00:02]
2 01-01-2008 00:04 [01-01-2008, 00:04]
</code></pre>
<p>But when I try to extract the first element in the list:</p>
<pre><code>from dfply import *
import pandas as pd
df = pd.read_csv("LaptopSales.csv")
(df >> select(X["Date"]) >> mutate(AdjDate = (X.Date.str.split(" ")[0])) >> head(3))
</code></pre>
<p>I get a wall of error culminating in:</p>
<pre><code>ValueError: Length of values (2) does not match length of index (279999)
</code></pre>
|
<python><pandas><dataframe><dfply>
|
2022-12-16 14:44:15
| 2
| 4,741
|
nicomp
|
74,825,855
| 12,752,172
|
How to format list data and write to csv file in selenium python?
|
<p>I'm getting data from a website and storing them inside a list of variables. Now I need to send these data to a CSV file.
The website data is printed and shown below.</p>
<p><strong>The data getting from the Website</strong></p>
<pre><code>['Company Name: PATRY PLC', 'Contact Name: Jony Deff', 'Company ID: 234567', 'CS ID: 236789', 'MI/MC:', 'Road Code:']
['Mailing Address:', 'Street: 19700 I-45 Spring, TX 77373', 'City: SPRING', 'State: TX', 'Postal Code: 77388', 'Country: US']
['Physical Address:', 'Street: 1500-1798 Runyan Ave Houston, TX 77039, USA', 'City: HOUSTON', 'State: TX', 'Postal Code: 77039', 'Country: US']
['Registration Period', 'Registration Date/Time', 'Registration ID', 'Status']
['2020-2025', 'MAY-10-2020 15:54:12', '26787856889l', 'Active']
</code></pre>
<p>I'm using for loop to get these data using the below code:</p>
<pre><code>listdata6 = []
for c6 in cells6:
listdata6.append(c6.text)
</code></pre>
<p>Now I have all data inside the 5 list variables. How can I write these data into CSV file like the below format?
<a href="https://i.sstatic.net/2GgI0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2GgI0.png" alt="enter image description here" /></a></p>
|
<python><selenium><csv>
|
2022-12-16 14:32:58
| 1
| 469
|
Sidath
|
74,825,685
| 7,168,098
|
python pandas: using pd.IndexSlice for both rows and columns in a double multiindex dataframe
|
<p>I have a double Multiindex dataframe as follows. I slice the rows with idx = pd.IndexSlice but I dont know how to do the same with the columns
so provided this data:</p>
<pre><code>df = pd.DataFrame(data=pd.DataFrame(data=np.random.randint(0, 10, size=(9, 5))))
# rows
list1 = ['2021-01-01','2022-02-01','2022-03-01']
list2 = ['PHOTO', 'QUE','TXR']
combinations = [(x, y) for x in list1 for y in list2]
df.index = pd.MultiIndex.from_tuples(combinations, names = ["DATE","DB"])
df.index.set_names(["DATE","DB"], inplace=True)
#columns
list1c = [('AB30','ACTIVE','A2'),('CD55','ACTIVE','A1'),('ZT52','UNACTIVE','A2'),('MIKE','PENSIONER','A2'),('ZZ00001','ACTIVE','A1')]
df.columns = pd.MultiIndex.from_tuples(list1c, names = ["UserID","KIND","DEPARTMENT"])
</code></pre>
<p>I slice the rows as follows:</p>
<pre><code># filtering in rows
idx = pd.IndexSlice
###### ROWS #######
# slicing dates
date_start = '2021-01-01'
date_end = '2021-02-01'
# slicing databases
databases = ['PHOTO','QUE']
# creating the index sclice for rows
i_s = idx[date_start:date_end, databases]
###### COLUMNS ######
# ??? here mask for the columns i_c = ???
df.loc[i_s, ]
</code></pre>
<p>My goal is to use the same method to slice the columns
So how I generate the IndexSlice for columns that give me for example:</p>
<p>pseudocode:
KIND= ACTIVE
DEPARTMENT = A2</p>
<p>I would like to use the same approach, for each multilevel defining a mask</p>
|
<python><pandas><slice><multi-index>
|
2022-12-16 14:17:02
| 1
| 3,553
|
JFerro
|
74,825,616
| 4,557,493
|
Multiple Django Projects using IIS but Getting Blank Page on Second Site
|
<p>I'm running two django projects in IIS with wfastcgi enabled. The first django project is running without an issue but the second project displays a blank page (code 200) returned.</p>
<p>Second Project Info:</p>
<p>A virtual folder, within it's own application pool in IIS, is created to host the second project. The second project was created in an python environment folder. The second project runs from django using python manage.py runserver 0.0.0.0:8080, but displays a blank page when browsing to the virtual folder page.</p>
<p>The root folder was granted with "everyone" full control access to all sub-folders
for the second project, wfastcgi application and handler is pointing to the virtual environment python and wfastcgi file correctly.</p>
<p>Can you have two wfastcgi applications and handlers "Second Python FastCGI"
C:\project_folder\project_env\Scripts\python.exe|C:\project_folder\project_env\lib\site-packages\wfastcgi.py"</p>
<p>I wanted them separate so that two developers don't interfere with each others work, but they need to run within the same server and port.</p>
|
<python><django><iis><wfastcgi>
|
2022-12-16 14:11:07
| 1
| 723
|
Shoother
|
74,825,613
| 453,767
|
Element is found but not clickable
|
<p>I'm trying to find an element by it's id, click on it and download a file.</p>
<pre><code>driver.get(url);
driver.implicitly_wait(60);
time.sleep(3)
element = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, "ContentPlaceHolder1_a1")))
href = element.get_attribute('href')
value = href.split('/')[-1]
print(value);
element.click(); # Error
</code></pre>
<p>Error
<code>element click intercepted: Element is not clickable at point (110, 1003)</code></p>
<p>I've tried Xpath, and CSS path too. All give the same error. If I check for the visibility then it times out. But I can manually see that the element is visible</p>
<pre><code>element = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, "//a[contains(text(), 'text of the link')]")))
</code></pre>
<p>At last, I tried this code.</p>
<pre><code>element = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, "ContentPlaceHolder1_a1")))
ActionChains(driver).move_to_element(element).click().perform()
</code></pre>
<p>But it gives error</p>
<pre><code>selenium.common.exceptions.MoveTargetOutOfBoundsException: Message: move target out of bounds
</code></pre>
|
<python><selenium><webdriverwait><web-scripting>
|
2022-12-16 14:10:56
| 1
| 7,409
|
Amit Kumar Gupta
|
74,825,507
| 14,030,805
|
why unpickling functions is not possible using pickle?
|
<p>when i use pickle package to serialize a dict object for example, i can load it in another context without problem. but it's not the case for functions unless i import it again as explained <a href="https://stackoverflow.com/questions/27732354/unable-to-load-files-using-pickle-and-multiple-modules#:%7E:text=111,class_def.py%20module%3A">here</a> (i don't know if it's feasible in case of functions defined inside notebooks). But why this behaviour?, how function object is different from dict object?</p>
<p>example:</p>
<p>open notebook <strong>A</strong>, run the following:</p>
<pre><code>def func(x):
return x**2
with open("myfile", "wb") as f:
pickle.dump(func,f)
</code></pre>
<p>open notebook <strong>B</strong>, run the following:</p>
<pre><code>with open("myfile", "rb") as f:
func = pickle.load(f)
</code></pre>
<p>this would return</p>
<pre><code>AttributeError: Can't get attribute 'func' on <module '__main__'>
</code></pre>
|
<python><function><pickle>
|
2022-12-16 14:02:56
| 0
| 365
|
Kaoutar
|
74,824,944
| 3,474,956
|
How to optimize the zoom parameter in zoomed_inset_axes?
|
<p>I am creating plots that include zoom inserts. The data is diverse it is impossoble for me to know what the data will be like before the program starts. I want to make the zoom insert zoom in as much as possible, without overlapping with any other element of my plot. Here is an example, where I use a zoom of 2. Ideally, I would like to automatically determine what this number should be:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
fig, ax = plt.subplots()
xin = np.linspace(0, np.random.uniform(.5, 4), 1000)
x_samples = np.random.uniform(0.9, 1.1, (1, 1000)) * np.sqrt(xin[:, np.newaxis])
ax.fill_between(xin, x_samples.min(1), x_samples.max(1))
axins = zoomed_inset_axes(ax, zoom=2, loc='upper left')
axins.fill_between(xin, x_samples.min(1), x_samples.max(1))
axins.set_xlim(.05, .1)
idx = np.logical_and(xin > 0.05, xin < 0.1)
axins.set_ylim(x_samples.min(1)[idx].min(), x_samples.max(1)[idx].max())
axins.set_xticks([])
axins.set_yticks([])
mark_inset(ax, axins, loc1=4, loc2=3, fc="none", ec="0.5")
plt.savefig('hey')
plt.clf()
</code></pre>
<p><a href="https://i.sstatic.net/GRk30.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GRk30.png" alt="enter image description here" /></a></p>
<p>As you can see, <code>zoom=2</code> was too low of a value. I can manually set the zoom parameter to a correct value. This is a tedious process. Is there a way to automatically find the zoom parameter that will maximize the insert size while avoiding overlaps with other parts of the plot?</p>
<p><a href="https://i.sstatic.net/Y9YVU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y9YVU.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2022-12-16 13:13:54
| 1
| 10,243
|
kilojoules
|
74,824,935
| 5,945,369
|
Transform a 3D numpy array to 1D based on column value
|
<p>Maybe this is a very simple task, but I have a numpy.ndarray with shape (1988,3).</p>
<pre class="lang-python prettyprint-override"><code>preds = [[1 0 0]
[0 1 0]
[0 0 0]
...
[0 1 0]
[1 0 0]
[0 0 1]]
</code></pre>
<p>I want to create a 1D array with shape=(1988,) that will have values corresponding to the column of my 3D array that has a value of 1.</p>
<p>For example,</p>
<pre class="lang-python prettyprint-override"><code> new_preds = [0 1 NaN ... 1 0 2]
</code></pre>
<p>How can I do this?</p>
|
<python><arrays><numpy>
|
2022-12-16 13:13:09
| 1
| 976
|
joasa
|
74,824,911
| 2,095,521
|
Python split string retaining the bracket
|
<p>I would like to split the string and eliminate the whitespaces such as</p>
<pre><code>double a[3] = {0.0, 0.0, 0.0};
</code></pre>
<p>The expected output is</p>
<pre><code>['double', 'a', '[', '3', ']', '=', '{', '0.0', ',', '0.0', ',', '0.0', '}', ';']
</code></pre>
<p>How could I do that with re module in Python?</p>
|
<python><string><split>
|
2022-12-16 13:10:11
| 3
| 570
|
kstn
|
74,824,819
| 13,285,583
|
selenium-standalone-chrome throws error when driver is trying to connect
|
<p>My goal is to run Selenium in a docker. The problem is that it refused to connect although selenium-standalone-chrome works fine and I can hit <a href="http://127.0.0.1:4444" rel="nofollow noreferrer">http://127.0.0.1:4444</a> in my own browser.</p>
<p>The culrpit:</p>
<pre><code>chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--disable-gpu")
chrome_options.add_argument("--disable-extensions")
chrome_options.add_argument("--disable-infobars")
chrome_options.add_argument("--start-maximized")
chrome_options.add_argument("--disable-notifications")
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Remote(
command_executor='http://127.0.0.1:4444/wd/hub',
options=chrome_options
)
</code></pre>
<p>The error</p>
<pre><code>Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/bin/flask", line 8, in <module>
sys.exit(main())
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/flask/cli.py", line 1047, in main
cli.main()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/click/decorators.py", line 84, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/flask/cli.py", line 911, in run_command
raise e from None
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/flask/cli.py", line 897, in run_command
app = info.load_app()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/flask/cli.py", line 308, in load_app
app = locate_app(import_name, name)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/flask/cli.py", line 218, in locate_app
__import__(module_name)
File "/Volumes/T7Touch/Learn/p2---final-project-ftds-016-rmt-group-001/scraping-2/deployment/app.py", line 2, in <module>
from util.scrapeutil import get_db
File "/Volumes/T7Touch/Learn/p2---final-project-ftds-016-rmt-group-001/scraping-2/deployment/util/scrapeutil.py", line 22, in <module>
driver = webdriver.Remote(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 272, in __init__
self.start_session(capabilities, browser_profile)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 364, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 429, in execute
self.error_handler.check_response(response)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 243, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.SessionNotCreatedException: Message: Could not start a new session. Error while creating session with the driver service. Stopping driver service: Could not start a new session. Response code 500. Message: unknown error: Chrome failed to start: crashed.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Host info: host: '07a1292de76f', ip: '172.17.0.2'
Build info: version: '4.7.1', revision: 'c6795baf1a3'
System info: os.name: 'Linux', os.arch: 'amd64', os.version: '5.10.124-linuxkit', java.version: '11.0.17'
Driver info: driver.version: unknown
Build info: version: '4.7.1', revision: 'c6795baf1a3'
System info: os.name: 'Linux', os.arch: 'amd64', os.version: '5.10.124-linuxkit', java.version: '11.0.17'
Driver info: driver.version: unknown
Stacktrace:
at org.openqa.selenium.grid.node.config.DriverServiceSessionFactory.apply (DriverServiceSessionFactory.java:202)
at org.openqa.selenium.grid.node.config.DriverServiceSessionFactory.apply (DriverServiceSessionFactory.java:69)
at org.openqa.selenium.grid.node.local.SessionSlot.apply (SessionSlot.java:147)
at org.openqa.selenium.grid.node.local.LocalNode.newSession (LocalNode.java:379)
at org.openqa.selenium.grid.distributor.local.LocalDistributor.startSession (LocalDistributor.java:645)
at org.openqa.selenium.grid.distributor.local.LocalDistributor.newSession (LocalDistributor.java:564)
at org.openqa.selenium.grid.distributor.local.LocalDistributor$NewSessionRunnable.handleNewSessionRequest (LocalDistributor.java:818)
at org.openqa.selenium.grid.distributor.local.LocalDistributor$NewSessionRunnable.lambda$run$1 (LocalDistributor.java:779)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:628)
at java.lang.Thread.run (Thread.java:829)
</code></pre>
<p>Edit: I use Mac M1. So I should use <code>docker run -d -p 4444:4444 --shm-size="2g" seleniarm/standalone-chromium</code> instead of <code>docker run -d -p 4444:4444 --shm-size="2g" seleniarm/standalone-chromium</code></p>
|
<python><selenium><selenium-webdriver>
|
2022-12-16 13:01:29
| 0
| 2,173
|
Jason Rich Darmawan
|
74,824,477
| 20,793,070
|
Multi-columns filter VAEX dataframe, apply expression and save result
|
<p>I want to use VAEX for lazy work wih my dataframe. After quick start with export big csv and some simple filters and extract() I have initial df for my work with 3 main columns: cid1, cid2, cval1. Each combitations of cid1 and cid2 is a workset with some rows where is cval1 is different. My df contents only valid cid1 and cid2. I want to save in df only rows with minimun cval1 and drop other. cval1 is float, cid1 and cid2 is int.</p>
<p>I try one filter:</p>
<pre><code>df = df.filter(df.cid1 == 36 & df.cid2 == 182 & df.cval1 == df.min(df.cval1))
</code></pre>
<p>I must to receive in result df with only one row.
But it not work properly, it's result:
<a href="https://i.sstatic.net/qGxOp.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>It's a first problem. But next I must to find minimum cval1 for each valid combination of cid1 and cid2.</p>
<p>I have list of tuples with each values cid1 and cid2:</p>
<pre><code>cart_prod=[(2, 5), (3, 9), ...]
</code></pre>
<p>I think I try:</p>
<pre><code>df_finally = vaex.DataFrame()
for x in cart_prod:
df2 = df.filter(df.cid1 == x[0] & df.cid2 == x[1] & df.cval1 == df.min(df.cval1))
df_finally = vaex.concat([df_finally, df2])
</code></pre>
<p>But the filter not valid, and VAEX can not concat with error that DataFrame have not attribute concat.. But I try really vaex.concat(list_of_dataframes).</p>
<p>I think may be I can use:</p>
<pre><code>df.select(df.cid1 == x[0] & df.cid2 == x[1] & df.cval1 == df.min(df.cval1), name = "selection")
</code></pre>
<p>But I can't to make that df this selection take and use..</p>
<pre><code>df = df.filter((df.cid1, df.cid2) in cart_prod)
</code></pre>
<p>This code have not result too..</p>
<p>Hmmm.. Help me please!</p>
<p>How to choose minimum df.cval1 for each combinations of df.cid1 and df.cid2 and save result to dataframe in VAEX?</p>
<p>Maybe goupby? But I don't understand how it works..</p>
|
<python><dataframe><vaex>
|
2022-12-16 12:29:38
| 1
| 433
|
Jahspear
|
74,824,353
| 12,231,454
|
Patch Django EmailMultiAlternatives send() in a Celery Task so that an exception is raised
|
<p>I want to test a Celery Task by raising an SMTPException when sending an email.</p>
<p>With the following code, located in:</p>
<p><strong>my_app.mailer.tasks</strong></p>
<pre><code>from django.core.mail import EmailMultiAlternatives
@app.task(bind=True )
def send_mail(self):
subject, from_email, to = 'hello', 'from@example.com', 'to@example.com'
text_content = 'This is an important message.'
html_content = '<p>This is an <strong>important</strong> message.</p>'
msg = EmailMultiAlternatives(subject, text_content, from_email, [to])
msg.attach_alternative(html_content, "text/html")
try:
msg.send(fail_silently=False)
except SMTPException as exc:
print('Exception ', exc)
</code></pre>
<p>and then running the following test against it:</p>
<pre><code>class SendMailTest(TestCase):
@patch('my_app.mailer.tasks.EmailMultiAlternatives.send')
def test_task_state(self, mock_send):
mock_send.side_effect = SMTPException()
task = send_mail.delay()
results = task.get()
self.assertEqual(task.state, 'SUCCESS')
</code></pre>
<p>The email is sent without error.</p>
<p>However, if I turn the task into a standard function (<strong>my_app.mailer.views</strong>) and then run the following test against it:</p>
<pre><code>class SendMailTest(TestCase):
@patch('myapp.mailer.views.EmailMultiAlternatives.send')
def test_task_state(self, mock_send):
mock_send.side_effect = SMTPException()
send_mail(fail_silently=False)
</code></pre>
<p>The string 'Exception' is displayed, but there is no exc information as to what caused the exception.</p>
<p>Please, what am I doing wrong?</p>
<p>!!UPDATE!!</p>
<p>I now understand why no exc info was printed for the function version of the code. This can be achieved by changing;</p>
<pre><code>mock_send.side_effect = SMTPException()
to;
mock_send.side_effect = Exception(SMTPException)
resulting in;
Exception <class 'smtplib.SMTPException'>
</code></pre>
<p>However, the issue of how to raise the same exception in the Celery Task in the first part of this post remains.</p>
|
<python><django><mocking><celery><django-celery>
|
2022-12-16 12:17:49
| 2
| 383
|
Radial
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.