QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string (lengths 15 to 150)
QuestionBody: string (lengths 40 to 40.3k)
Tags: string (lengths 8 to 101)
CreationDate: date string (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string (lengths 3 to 30)
75,041,623
7,505,256
How to check that a string is a string literal for mypy?
<p>With this code</p> <pre class="lang-py prettyprint-override"><code>import os from typing import Literal, get_args Markets = Literal[ &quot;BE&quot;, &quot;DE&quot;, &quot;DK&quot;, &quot;EE&quot;, &quot;ES&quot;, &quot;FI&quot;, &quot;FR&quot;, &quot;GB&quot;, &quot;IT&quot;, &quot;LT&quot;, &quot;LV&quot;, &quot;NL&quot;, &quot;NO&quot;, &quot;PL&quot;, &quot;PT&quot;, &quot;SE&quot; ] MARKETS: list[Markets] = list(get_args(Markets)) def foo(x: Markets) -&gt; None: print(x) market = os.environ.get(&quot;market&quot;) if market not in MARKETS: raise ValueError foo(market) </code></pre> <p>I get the following error.</p> <pre><code>Argument 1 to &quot;foo&quot; has incompatible type &quot;str&quot;; expected &quot;Literal['BE', 'DE', 'DK', 'EE', 'ES', 'FI', 'FR', 'GB', 'IT', 'LT', 'LV', 'NL', 'NO', 'PL', 'PT', 'SE']&quot; [arg-type]mypy(error) </code></pre> <p>How do I need to check the <code>market</code> variable so that <code>mypy</code> knows that is of correct type?</p>
<python><mypy><python-typing>
2023-01-07 15:44:59
2
316
Prokie
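For the mypy question above, a minimal sketch of one way to get the narrowing: the plain membership check does not narrow str to the Literal type, but a user-defined type guard does (typing.TypeGuard needs Python 3.10+, or typing_extensions on older versions); the market list is shortened here for illustration.

```python
import os
from typing import Literal, TypeGuard, get_args

Markets = Literal["BE", "DE", "DK"]  # shortened list for illustration
MARKETS: tuple[Markets, ...] = get_args(Markets)

def is_market(value: object) -> TypeGuard[Markets]:
    # runtime membership test that mypy also uses to narrow the type
    return value in MARKETS

def foo(x: Markets) -> None:
    print(x)

market = os.environ.get("market")
if is_market(market):
    foo(market)  # mypy now treats market as Markets
else:
    raise ValueError(market)
```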
75,041,608
18,806,499
How to configure the DB in a Flask app dynamically?
<p>I'm working on app, where user have to enter database connection parametrs by hands</p> <p>So I've made an api here it is:</p> <pre><code>from flask_mysqldb import MySQL from flask import Flask, request import MySQLdb.cursors app = Flask(__name__) # /hello @app.route(&quot;/hello&quot;, methods=['GET']) def hello(): return &quot;Hello word&quot; # /get_tables @app.route(f&quot;/get_tables&quot;, methods=['GET']) def get_tables(): host = request.args.get('host', type=str) user = request.args.get('user', type=str) password = request.args.get('password', type=str) database = request.args.get('database', type=str) app.config['MYSQL_HOST'] = &quot;localhost&quot; app.config['MYSQL_USER'] = &quot;root&quot; app.config['MYSQL_PASSWORD'] = &quot;password&quot; app.config['MYSQL_DB'] = &quot;restaurant&quot; mysql = MySQL(app) cursor = mysql.connection.cursor(MySQLdb.cursors.DictCursor) query = &quot;SHOW TABLES&quot; cursor.execute(query) records = cursor.fetchone() print(records) return records </code></pre> <p>&quot;hello&quot; endpoint is working fine, but when run &quot;get_tables&quot; I recieve an error:</p> <p><code>AssertionError: The setup method 'teardown_appcontext' can no longer be called on the application. It has already handled its first request, any changes will not be applied consistently. Make sure all imports, decorators, functions, etc. needed to set up the application are done before running it.</code></p> <p>I also was trying to debug my api and found out that errorr ocures on a <code>mysql = MySQL(app)</code> line</p> <p>What can I do to solve this problem? or maybe it's better not to use api at all and just connect to DB from ui?</p>
<python><mysql><flask>
2023-01-07 15:42:48
0
305
Diana
75,041,592
936,652
Oracle AdvancedQueuing: failing to broadcast (ORA-24033: no recipients for message, Python client)
<p>I am referring to the following project on GitHub: <a href="https://github.com/chris-ch/poc-oracle-aq/tree/v0.1" rel="nofollow noreferrer">poc-oracle-aq</a></p> <p>The code responsible for publishing messages is located in <em>scripts/publisher.py</em>:</p> <pre><code>def loop(connection, queue_high, queue_low, name): recipients = ['Subscriber01', 'Subscriber02', 'Subscriber03'] while True: pause = random.randint(10, 30) sleep(float(pause) / 10.) value = random.randint(1, 100) message = {'timestamp': datetime.now(), 'source': name, 'value': value} if value &lt; 10: logging.info('sending {}'.format(message)) queue_low.enqone(connection.msgproperties(payload=json.dumps(message, cls=DateTimeEncoder), recipients=recipients)) elif value &gt; 90: logging.info('sending {}'.format(message)) queue_high.enqone(connection.msgproperties(payload=json.dumps(message, cls=DateTimeEncoder), recipients=recipients)) connection.commit() </code></pre> <p>This is working well for multicast, where subscribers are listed as recipients and provided as a message parameter when calling the <em>enqone()</em> method.</p> <p>However according to the documentation I should be able to perform the same call without specifying the recipients in order to broadcast the message to all subscribers.</p> <p>When setting the recipients to None in the first line of the <em>loop()</em> function above:</p> <pre><code>recipients = None </code></pre> <p>I get the following error when I try:</p> <pre><code> File &quot;src/oracledb/impl/thick/queue.pyx&quot;, line 117, in oracledb.thick_impl.ThickQueueImpl.enq_one File &quot;src/oracledb/impl/thick/utils.pyx&quot;, line 410, in oracledb.thick_impl._raise_from_odpi File &quot;src/oracledb/impl/thick/utils.pyx&quot;, line 400, in oracledb.thick_impl._raise_from_info oracledb.exceptions.DatabaseError: ORA-24033: no recipients for message </code></pre> <p>According to the <a href="https://www.oraexcel.com/database-oracle-11gR1-ORA-24033" rel="nofollow noreferrer">documentation</a> that means:</p> <blockquote> <p>Error code: ORA-24033 Description: no recipients for message Cause: An enqueue was performed on a queue that has been set up for multiple dequeuers but there were neither explicit recipients specified in the call nor were any queue subscribers determined to be recipients for this message. Action: Either pass a list of recipients in the enqueue call or add subscribers to the queue for receiving this message.</p> </blockquote> <p>However I do have subscribers running and listening when the publisher is sending the message. Is there an undocumented parameter when calling <em>enqone()</em>?</p> <p>I am using Oracle Thick Client 19.8 and Oracle Enterprise 19.3.</p>
<python><oracle19c><advanced-queuing>
2023-01-07 15:40:08
0
2,012
Christophe
75,041,364
13,994,829
Python multiprocessing in a closure function
<p>The <strong>case1</strong> in <code>test.py</code> (as following code):</p> <p>I can get a expected result for variable &quot;<code>res</code>&quot;.</p> <p>(<code>res[-1]</code> is correct that i want)</p> <p><strong>test.py:</strong></p> <pre><code>from multiprocessing import Pool as ProcessPool skus = [i for i in range(4)] def call_func(i): skus[i] = i * 10 return skus def process(): with ProcessPool() as pool: res = pool.map(call_func, skus, chunksize=2) print(f&quot;process result={res}&quot;) if __name__ == &quot;__main__&quot;: [func() for func in (process, )] </code></pre> <p><strong>Output:</strong></p> <pre><code>process result=[[0, 10, 2, 3], [0, 10, 2, 3], [0, 10, 20, 30], [0, 10, 20, 30]] </code></pre> <p>However, I try to use <strong>two file(.py)</strong> in <strong>case2</strong></p> <ol> <li><code>main.py</code></li> <li><code>process.py</code></li> </ol> <p>I have run <code>main.py</code>, and it call function from <code>process.py</code>.</p> <p>In the <code>process.py</code>, I use <strong>MultiProcess pool</strong> to execute. (use the &quot;<strong>pathos.multiprocessing</strong>&quot; in python, which can avoid &quot;<strong>cannot pickle problem</strong>&quot;)</p> <p>But, when the variable &quot;<code>sku_res</code>&quot; in the <code>outer()</code> function, the variable cannot share like <strong>case1</strong></p> <p>I expected my process result like <strong>case1</strong></p> <p>where is the problem in the <strong>case2</strong> ?</p> <p>How can I modify ?</p> <p><strong>main.py:</strong></p> <pre><code>type here </code></pre> <pre><code>from process import outer def main(): outer() if __name__ == &quot;__main__&quot;: main() </code></pre> <p><strong>process.py:</strong></p> <pre><code>from pathos.multiprocessing import Pool as ProcessPool from decorate import timeit @timeit def outer(): skus = [i for i in range(4)] def call_func(i): sku[i] = i * 10 return skus @timeit def process(): with ProcessPool() as pool: res = pool.map(call_func, skus, chunksize=2) print(&quot;process result=&quot;, res) return process() </code></pre> <p><strong>Output:</strong></p> <pre><code>process result=[[0, 10, 2, 3], [0, 10, 2, 3], [0, 1, 20, 30], [0, 1, 20, 30]] </code></pre>
<python><multiprocessing><closures><python-multiprocessing>
2023-01-07 15:06:09
1
545
Xiang
75,041,291
13,491,606
Seaborn manually set interval of x axis
<p>My data:</p> <pre class="lang-py prettyprint-override"><code>data = pd.DataFrame({'y': {0: 0.8285, 1: 0.869, 2: 0.8781, 3: 0.8806, 4: 0.8825, 5: 0.8831},'x': {0: 5764, 1: 22021, 2: 56906, 3: 114157, 4: 289474, 5: 4755584}}) </code></pre> <p>My current plot:</p> <pre class="lang-py prettyprint-override"><code>sns.set(rc={'figure.figsize':(6,3.5)}, style=&quot;white&quot;) ax = sns.lineplot(data=data, x=&quot;x&quot;, y=&quot;y&quot;, dashes=False, markersize=8, marker=&quot;o&quot;) ax.grid(axis='x', linestyle='dotted') ax.set_xlabel(&quot;X&quot;, fontsize=16) ax.set_ylabel(&quot;Y&quot;, fontsize=16) </code></pre> <p><a href="https://i.sstatic.net/zPLTr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zPLTr.png" alt="enter image description here" /></a></p> <p>As you can see, the curve is too sharp. Since there are 5 points from <code>x=0</code> to <code>x=1</code>, I want to <strong>expand</strong> the interval from <code>x=0</code> to <code>x=1</code> and <strong>squeeze</strong> the interval from <code>x=1</code> to <code>x=4</code>. A prefered plot looks like this:</p> <p><a href="https://i.sstatic.net/646Gr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/646Gr.png" alt="enter image description here" /></a></p> <p>where the x axis is uneven and I can manually set its interval.</p> <p>How to do this? Thanks in advance.</p>
<python><matplotlib><seaborn>
2023-01-07 14:56:44
1
1,964
namespace-Pt
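For the axis-interval question above, a minimal sketch of one common workaround: plot against evenly spaced positions and relabel the ticks with the real x values, so every point gets the same horizontal spacing; the column names follow the snippet above.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

data = pd.DataFrame({'y': [0.8285, 0.869, 0.8781, 0.8806, 0.8825, 0.8831],
                     'x': [5764, 22021, 56906, 114157, 289474, 4755584]})

positions = list(range(len(data)))                  # evenly spaced stand-in for x
ax = sns.lineplot(x=positions, y=data["y"], marker="o", markersize=8)
ax.set_xticks(positions)
ax.set_xticklabels(data["x"])                       # show the original values as labels
ax.grid(axis="x", linestyle="dotted")
plt.show()
```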
75,041,261
8,401,374
driver.switch_to.alert.text throws exception: no such alert, but there is an alert
<p>I'm trying to open a <a href="https://drive.inditex.com/drfrcomr/login" rel="nofollow noreferrer">https://drive.inditex.com/drfrcomr/login</a> with Selenium.</p> <p>It opens a prompt with username and password fields, &amp; submit and cancel buttons. I am trying to close the prompt. It can be closed by clicking the cancel button or clicking the escape key.</p> <p>Note: ignore irrelevant imports please</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.alert import Alert from selenium.webdriver import Chrome from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.action_chains import ActionChains options = webdriver.ChromeOptions() options.headless = False options.page_load_strategy = 'none' chrome_path = ChromeDriverManager().install() chrome_service = Service(chrome_path) driver = Chrome(options=options, service=chrome_service) driver.implicitly_wait(5) url = &quot;https://drive.inditex.com/drfrcomr/sales/home?brand=2&quot; driver.get(url) print(&quot;printing alert text:&quot;, driver.switch_to.alert.text) ActionChains(driver).send_keys(Keys.ESCAPE).perform() </code></pre> <p>The WebDriver is not picking the prompt box.</p>
<python><selenium><selenium-webdriver><web-scraping><selenium-chromedriver>
2023-01-07 14:51:54
0
1,710
Shaida Muhammad
75,041,235
17,779,615
How to read specific n_rows and n_columns from a parquet file?
<p>How can I read the first row and specific columns 1, 3 and 5 from a parquet file? Currently I use <code>pd.read_parquet(filename,columns=['first_col','third_col','fifth_col'])</code> to read only the columns that I want, but I don't know how to read only the first row while reading those specific columns from the parquet file.</p>
<python><parquet>
2023-01-07 14:48:38
1
539
Susan
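For the question above, a minimal sketch of one approach using pyarrow directly (assuming pyarrow is installed; file and column names are placeholders): ask for only the wanted columns and only the first record batch.

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("filename.parquet")
# read only the three columns and stop after the first row
batch = next(pf.iter_batches(batch_size=1,
                             columns=["first_col", "third_col", "fifth_col"]))
first_row = batch.to_pandas()   # DataFrame with a single row and the three columns
print(first_row)
```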
75,041,218
5,370,631
Recognize date format and convert it
<p>Is there a way to guess the date format of a string and convert other dates to the same format as the given string (e.g. <code>YYYYMMDD</code>)?</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code># Recognize the format as YYYYMMDD date1 = '20221111' # Recognize the format as YYYY-MM-DD and convert to YYYYMMDD date2 = '2022-11-12' </code></pre>
<python>
2023-01-07 14:46:28
3
1,572
Shibu
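For the question above, a minimal sketch of one approach, assuming dateutil is available: let dateutil detect the incoming format, then re-emit every date in the target layout.

```python
from dateutil import parser

def to_yyyymmdd(date_string: str) -> str:
    # dateutil guesses the incoming format; strftime fixes the output format
    return parser.parse(date_string).strftime("%Y%m%d")

print(to_yyyymmdd("20221111"))    # 20221111
print(to_yyyymmdd("2022-11-12"))  # 20221112
```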
75,041,188
4,040,643
Exception in soundfile.py: io.UnsupportedOperation: seek
<p>I am currently going through this <a href="https://pytorch.org/tutorials/beginner/audio_preprocessing_tutorial.html" rel="nofollow noreferrer">jupiter notebook</a></p> <p>However when running this code block, I get the error below. Is there someone familiar with this problem and has an idea how to fix/resolve this? I would appreciate any form and advice or help. Thank you in advance.</p> <pre><code>print(&quot;Source:&quot;, SAMPLE_WAV_URL) with requests.get(SAMPLE_WAV_URL, stream=True) as response: metadata = torchaudio.info(response.raw) print(metadata) Source: https://pytorch-tutorial-assets.s3.amazonaws.com/steam-train-whistle-daniel_simon.wav Exception ignored from cffi callback &lt;function SoundFile._init_virtual_io.&lt;locals&gt;.vio_get_filelen at 0x000002487DE62C10&gt;: Traceback (most recent call last): File &quot;C:\Users\me\anaconda3\envs\myenv\lib\site-packages\soundfile.py&quot;, line 1228, in vio_get_filelen file.seek(0, SEEK_END) io.UnsupportedOperation: seek --------------------------------------------------------------------------- LibsndfileError Traceback (most recent call last) Input In [5], in &lt;cell line: 2&gt;() 1 print(&quot;Source:&quot;, SAMPLE_WAV_URL) 2 with requests.get(SAMPLE_WAV_URL, stream=True) as response: ----&gt; 3 metadata = torchaudio.info(response.raw) 4 print(metadata) File ~\anaconda3\envs\myenv\lib\site-packages\torchaudio\backend\soundfile_backend.py:103, in info(filepath, format) 84 @_mod_utils.requires_soundfile() 85 def info(filepath: str, format: Optional[str] = None) -&gt; AudioMetaData: 86 &quot;&quot;&quot;Get signal information of an audio file. 87 88 Note: (...) 101 102 &quot;&quot;&quot; --&gt; 103 sinfo = soundfile.info(filepath) 104 return AudioMetaData( 105 sinfo.samplerate, 106 sinfo.frames, (...) 109 encoding=_get_encoding(sinfo.format, sinfo.subtype), 110 ) File ~\anaconda3\envs\myenv\lib\site-packages\soundfile.py:464, in info(file, verbose) 456 def info(file, verbose=False): 457 &quot;&quot;&quot;Returns an object with information about a `SoundFile`. 458 459 Parameters (...) 462 Whether to print additional information. 463 &quot;&quot;&quot; --&gt; 464 return _SoundFileInfo(file, verbose) File ~\anaconda3\envs\myenv\lib\site-packages\soundfile.py:409, in _SoundFileInfo.__init__(self, file, verbose) 407 def __init__(self, file, verbose): 408 self.verbose = verbose --&gt; 409 with SoundFile(file) as f: 410 self.name = f.name 411 self.samplerate = f.samplerate File ~\anaconda3\envs\myenv\lib\site-packages\soundfile.py:655, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 652 self._mode = mode 653 self._info = _create_info_struct(file, mode, samplerate, channels, 654 format, subtype, endian) --&gt; 655 self._file = self._open(file, mode_int, closefd) 656 if set(mode).issuperset('r+') and self.seekable(): 657 # Move write position to 0 (like in Python file objects) 658 self.seek(0) File ~\anaconda3\envs\myenv\lib\site-packages\soundfile.py:1213, in SoundFile._open(self, file, mode_int, closefd) 1210 if file_ptr == _ffi.NULL: 1211 # get the actual error code 1212 err = _snd.sf_error(file_ptr) -&gt; 1213 raise LibsndfileError(err, prefix=&quot;Error opening {0!r}: &quot;.format(self.name)) 1214 if mode_int == _snd.SFM_WRITE: 1215 # Due to a bug in libsndfile version &lt;= 1.0.25, frames != 0 1216 # when opening a named pipe in SFM_WRITE mode. 1217 # See http://github.com/erikd/libsndfile/issues/77. 
1218 self._info.frames = 0 </code></pre> <p>LibsndfileError: Error opening &lt;urllib3.response.HTTPResponse object at 0x000002487DDD02B0&gt;: Error in WAV file. No 'data' chunk marker.</p>
<python><exception><audio>
2023-01-07 14:42:50
1
489
Imago
75,041,095
7,578,494
How to apply a custom function to xarray.DataArray.coarsen.reduce()?
<p>I have a (2x2) NumPy array:</p> <pre class="lang-py prettyprint-override"><code>ar = np.array([[2, 0],[3, 0]]) </code></pre> <p>and the same one in the form of xarray.DataArray:</p> <pre class="lang-py prettyprint-override"><code>da = xr.DataArray(ar, dims=['x', 'y'], coords=[[0, 1], [0, 1]]) </code></pre> <p>I am trying to downsample the 2d array spatially using a custom function to find the mode (i.e., the most frequently occurring value):</p> <pre class="lang-py prettyprint-override"><code>def find_mode(window): # find the mode over all axes uniq = np.unique(window, return_counts=True) return uniq[0][np.argmax(uniq[1])] </code></pre> <p>The <code>find_mode()</code> works well for <code>ar</code> as <code>find_mode(ar)</code> gives <code>0</code>.</p> <p>However, it doesn't work for <code>da</code> (i.e., <code>da.coarsen(x=2, y=2).reduce(find_mode)</code>), with an error:</p> <pre class="lang-py prettyprint-override"><code>TypeError: find_mode() got an unexpected keyword argument 'axis' </code></pre> <p>Thank you so much for your attention and participation.</p>
<python><numpy><python-xarray>
2023-01-07 14:29:59
2
343
hlee
75,040,882
6,392,779
Split string with multiple possible delimiters to get substring
<p>I am trying to make a simple Discord bot to respond to some user input and having difficulty trying to parse the response for the info I need. I am trying to get their &quot;gamertag&quot;/username but the format is a little different sometimes.</p> <p>So, my idea was to make a list of delimiter words I am looking for (different versions of the word gamertag such as Gamertag:, Gamertag -, username, etc.)</p> <p>Then, look line by line for one that contains any of those delimiters.</p> <p>Split the string on first matching delim, strip non alphanumeric characters</p> <p>I had it kinda working for a single line, then realized some people don't put it on the first line so added line by line check and messed it up (on line 19 I just realized).. Also thought there must be a better way than this? please advise, some kinda working code at this link and copied below:</p> <pre><code>testString = &quot;&quot;&quot;Application Gamertag : testGamertag Discord - testDiscord Age - 25&quot;&quot;&quot; applicationString = testString gamertagSplitList = [ &quot;gamertag&quot;, &quot;Gamertag&quot;,&quot;Gamertag:&quot;, &quot;gamertag:&quot;] #splWord = 'Gamertag' lineNum = 0 for line in applicationString.partition('\n'): print(line) if line in gamertagSplitList: applicationString = line break #get first line #applicationString = applicationString.partition('\n')[0] res = &quot;&quot; #split on word, want to split on first occurrence of list of words for splitWord in gamertagSplitList: if splitWord in applicationString: res = applicationString.split(splitWord) break splitString = res[1] #res = test_string.split(spl_word, 1) #splitString = res[1] #get rid of non alphaNum characters finalString = &quot;&quot; #define string for ouput for character in splitString: if(character.isalnum()): # if character is alphanumeric concat to finalString finalString = finalString + character print(finalString) </code></pre>
<python>
2023-01-07 13:59:31
2
901
nick
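For the parsing question above, a sketch of one simpler route using a single case-insensitive regular expression instead of manual splitting; the pattern and variable names are illustrative, not the only way to write it.

```python
import re

application = """Application
Gamertag : testGamertag
Discord - testDiscord
Age - 25"""

# look for a line starting with gamertag/username, optionally followed by : or -,
# and capture the next run of word characters
match = re.search(r"(?im)^\s*(?:gamertag|username)\s*[:\-]?\s*(\w+)", application)
gamertag = match.group(1) if match else None
print(gamertag)  # testGamertag
```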
75,040,793
17,779,615
How to release memory while using watchdog in Python
<p>I used watchdog to monitor file creating in a folder. Whenever a parquet file is created, watchdog will call python script to read the parquet file.</p> <p>When the watchdog program is first run, the memory usage is 90KB. After finished reading first parquet file, the memory usage is 100KB. I delete all the dataframe when the python script finished executing.</p> <p>When 2nd parquet file is created, watchdog run the python script by staring memory usage of 100KB and after finished reading 2nd parquet file, the memory usage is 101KB.</p> <p>Why the watchdog does not release memory when the first file finished executing?</p> <p><strong>How to write the script if I want the watchdog script to reset the memory after finished executing every file?</strong> i.e., When 1st parquet file is created, watchdog call python script by starting memory 90KB and ending with 100KB. When 2nd parquet file is created, watchdog call python script by starting memory 90KB and ending with 100KB instead of starting memory 100KB and ending with 101KB.</p> <p>The following is the watchdog script and reading parquet file python script:</p> <p><em>Reading parquet file python script</em></p> <pre><code>import os import pandas as pd import gc, psutil def usage(): process = psutil.Process(os.getpid()) return process.memory_info()[0]/float(2**20) def readfile(filename): gc.enable() gc.get_threshold() gc.get_count() print(&quot;Before reading parquet: &quot;+str(usage())) df = pd.read_parquet(filename) print(&quot;After reading parquet: &quot;+str(usage())) del df gc.collect() print(&quot;After deleting df: &quot;+str(usage())) </code></pre> <p><em>Watchdog script</em></p> <pre><code>import os, sys, time from watchdog.observers import Observer from watchdog.observers.polling import PollingObserver from watchdog.events import FileSystemEventHandler from utility.python_script import readfile as rf class MonitorFolder(FileSystemEventHandler): def on_created(self,event): filename = event.src_path rf(filename) if __name__==&quot;__main__&quot;: path_ = &quot;C:\\folder1\\folder2&quot; event_handler_=MonitorFolder() observer = PollingObserver(timeout=0.1) observer.schedule(event_handler,path=path_, recursive = True) observer.start() try: while (True): time.sleep(0.1) except KeyboardInterrupt: observer.stop() observer.join() </code></pre>
<python><pandas><out-of-memory><watchdog>
2023-01-07 13:44:20
0
539
Susan
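For the memory question above, a hedged sketch of one common workaround: run the parquet-reading function in a short-lived child process from the watchdog handler, so all of its memory is returned to the OS when that process exits; the import mirrors the scripts above.

```python
from multiprocessing import Process

from watchdog.events import FileSystemEventHandler
from utility.python_script import readfile as rf  # same import as in the watchdog script

class MonitorFolder(FileSystemEventHandler):
    def on_created(self, event):
        # each file is handled in its own process; its memory is released when
        # the process terminates, so the long-running observer stays small
        worker = Process(target=rf, args=(event.src_path,))
        worker.start()
        worker.join()
```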
75,040,733
10,958,326
Is there a way to use StrEnum in earlier python versions?
<p>The enum package in python 3.11 has the StrEnum class. I consider it very convenient but cannot use it in python 3.10. What would be the easiest method to use this class anyway?</p>
<python><enums>
2023-01-07 13:35:37
3
390
algebruh
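For the question above, a minimal sketch of the usual pre-3.11 equivalent: mix str into Enum yourself (the third-party strenum package offers a ready-made version). Note that 3.11's StrEnum also makes auto() produce the lower-cased member name, which this sketch does not replicate.

```python
from enum import Enum

class StrEnum(str, Enum):
    """Rough stand-in for enum.StrEnum on Python < 3.11."""
    def __str__(self) -> str:
        return str(self.value)

class Color(StrEnum):
    RED = "red"
    BLUE = "blue"

print(Color.RED == "red")   # True, members compare equal to plain strings
print(f"{Color.BLUE}")      # blue
```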
75,040,633
19,838,445
Why does the function descriptor create a new bound method each time?
<p>Could you explain why new bound method is created each time when trying to access same method of the same class instance?</p> <pre class="lang-py prettyprint-override"><code>class MyClass: def my_method(self): print(f&quot;Called bounded to {self}&quot;) m_unbound = MyClass.my_method print(f&quot;{type(m_unbound)} {hash(m_unbound)}&quot;) # &lt;class 'function'&gt; 8783579798336 m_unbound(None) mc = MyClass() m1 = mc.my_method print(f&quot;{type(m1)} {hash(m1)}&quot;) # &lt;class 'method'&gt; 122173 m1() m2 = mc.my_method # THOUGHT IT WOULD BE THE SAME OBJECT AS m1 print(f&quot;{type(m2)} {hash(m2)}&quot;) # &lt;class 'method'&gt; 122173 m2() print(m1 == m2) # True print(m1 is m2) # False, why is so? print(id(m1) == id(m2)) # False, equivalent of the above </code></pre> <p>I do not understand why new bound method object is created each time if underlying instance still stays the same (as well as target function)</p> <pre class="lang-py prettyprint-override"><code>print(m1.__self__ is m2.__self__) # True, bound instance is the same print(m1.__func__ is m2.__func__) # True, target function is the same print(m2.__func__ is m_unbound) # True </code></pre>
<python><methods><class-method><python-descriptors><unbound>
2023-01-07 13:16:34
0
720
GopherM
75,040,613
13,910,839
Incorrect diagnostics in Nvim LSP (Python)
<p>I have installed lsp server for python and have def like this one:</p> <pre><code>def get_info_about_file(db: Session, name_of_file: str) -&gt; schema.File: return db.query(models.File).filter(models.File.name == name_of_file).first() </code></pre> <p>After that I got error:</p> <pre><code>Diagnostics: 1. Expression of type &quot;Any | None&quot; cannot be assigned to return type &quot;File&quot;   Type &quot;Any | None&quot; cannot be assigned to type &quot;File&quot;     Type &quot;None&quot; cannot be assigned to type &quot;File&quot; </code></pre> <p>But, in vscode I don't have error like this, everything fine. To be more precise it isn't looks like error, but I can start and it is working, but this really annoying.And additional info about my NullLsInfo:</p> <pre><code> Logging * current level: warn * path: /home/user/.cache/nvim/null-ls.log Active source(s) * name: black * filetypes: python * methods: formatting * name: autoflake * filetypes: python * methods: formatting </code></pre> <p>And LspInfo:</p> <pre><code> Language client log: /home/user/.local/state/nvim/lsp.log Detected filetype: python 2 client(s) attached to this buffer: Client: null-ls (id: 2, bufnr: [1, 16]) filetypes: scss, vue, javascriptreact, javascript, jsonc, yaml, less, typescript, graphql, typescriptreact, json, markdown.mdx, markdown, handlebars, css, html, rust, lua, luau, python autostart: false root directory: /home/user/dir cmd: &lt;function&gt; Client: pyright (id: 3, bufnr: [1, 16]) filetypes: python autostart: true root directory: Running in single file mode. cmd: pyright-langserver --stdio Configured servers list: rust_analyzer, clangd, tsserver, pyright </code></pre>
<python><neovim><liskov-substitution-principle>
2023-01-07 13:13:02
1
692
0xActor
75,040,459
1,362,055
Type conversion during function call in python
<p>In the example below I'm passing a float value to a function accepting an int argument (using type hints). It looks like the value read into the function arg is a float nonetheless (I was expecting int(11.2) * 10 = 110 instead of 112).</p> <p>Why is this the case?</p> <pre><code>def f(n:int): return n*10 l = [11.2, 2.2, 3.3, 4.4] mfunc = map(f,l) print(list(mfunc)) </code></pre> <blockquote> <p>Result: [112.0, 22.0, 33.0, 44.0]</p> <p>** Process exited - Return Code: 0 ** Press Enter to exit terminal</p> </blockquote>
<python><function><parameter-passing>
2023-01-07 12:47:50
1
817
Nikhil
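For the question above: Python itself neither enforces nor converts annotations at runtime, so n stays a float; a minimal sketch making the conversion explicit.

```python
def f(n: int) -> int:
    # the annotation documents intent only; convert explicitly if floats may arrive
    return int(n) * 10

print(list(map(f, [11.2, 2.2, 3.3, 4.4])))  # [110, 20, 30, 40]
```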
75,040,437
1,792,344
More Iterations Of Gauss-Seidel With Scipy Infinite Norm
<p>I have a python code to solve linear systems with Gauss-Seidel Method, using Numpy and Scipy. I'm implementing the code and an example from the book: <strong>'Numerical Analysis: Burden and Faires'</strong>. <em><strong>The problem is I obtain the exact solution but with more iterations: 10 iterations with 0.0000001 tolerance but the book obtains the solution with only 6 iterations and 0.001 tolerance.</strong></em> I think the problem is because of infinity norm using scipy to calculate the error. When I don't use the error in the code (only iterations) I obtain the same result as the book. Here's my python code:</p> <pre><code>import numpy as np import scipy as scp def gauss_seidel(A, b, x_0, max_iterations=15, tolerance=0.0000001): L = -np.tril(A, -1) U = -np.triu(A, 1) v = np.diagonal(A) D = np.diag(v) DL = D - L Hg = np.linalg.inv(DL) Tg = Hg @ U Cg = Hg @ b n = A.shape[0] x = np.zeros(n) diff = np.zeros(n) error = 0.0 k = 1 while k &lt;= max_iterations: x = Tg @ x_0 + Cg diff = x - x_0 error = scp.linalg.norm(diff, ord=np.inf, axis=None) / \ scp.linalg.norm(x, ord=np.inf) x_0 = x k += 1 if(error &lt; tolerance): break return x, k A = np.matrix([ [10, -1, 2, 0], [-1, 11, -1, 3], [2, -1, 10, -1], [0, 3, -1, 8] ]) b = np.array([6, 25, -11, 15]) x_0 = np.array([0, 0, 0, 0]) solution = gauss_seidel(A, b, x_0, tolerance=0.001) print('WITH TOLERANCE = 0.001') print( f'Solution = {solution[0]} with {solution[1]} iterations') solution = gauss_seidel(A, b, x_0) print('WITH TOLERANCE = 0.0000001') print( f'Solution = {solution[0]} with {solution[1]} iterations') </code></pre> <p>And this is my terminal output:</p> <blockquote> <p>WITH TOLERANCE = 0.001 Solution = [ 1.00009128 2.00002134 -1.00003115 0.9999881 ] with 6 iterations WITH TOLERANCE = 0.0000001 Solution = [ 1. 2. -1. 1.] with 10 iterations</p> </blockquote> <p>Thanks</p>
<python><numpy><scipy>
2023-01-07 12:44:10
1
759
Tobal
75,040,352
16,142,496
Using the transform function, how can I assign index values as a new column in this DataFrame?
<p><a href="https://i.sstatic.net/fdIX6.png" rel="nofollow noreferrer">This is the dataset; I want to add a column named Index which starts numbering from 1 for every person</a></p> <p>The code below provides the count, but I want Index values such as 1, 2, 3 for one person, and when a new person starts, again 1, 2, 3, 4 etc.</p> <pre><code>new_df['Index']=new_df.groupby('Name Of Reporting Person')['Name Of Reporting Person'].transform('count') </code></pre>
<python><pandas><dataframe><transform><analytics>
2023-01-07 12:33:00
2
351
Manas Jadhav
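For the question above, a minimal sketch: groupby().cumcount() numbers rows within each group, so adding 1 gives a per-person index that restarts for every new person; the column name is taken from the snippet above and the sample data is made up.

```python
import pandas as pd

new_df = pd.DataFrame({"Name Of Reporting Person": ["A", "A", "A", "B", "B"]})
# cumcount() restarts at 0 for each group; +1 makes it 1-based
new_df["Index"] = new_df.groupby("Name Of Reporting Person").cumcount() + 1
print(new_df["Index"].tolist())  # [1, 2, 3, 1, 2]
```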
75,040,330
383,793
How can I define a TypeAlias for a nested Generic in Python?
<p>I currently have this code</p> <pre><code>T = TypeVar(&quot;T&quot;) Grid = Sequence[Sequence[T]] def columns(grid: Grid) -&gt; Iterable[list[T]]: return ([row[i] for row in grid] for i in range(len(grid[0]))) </code></pre> <p>But I think the <code>T</code> in the alias <code>Grid</code> is bound to a different <code>T</code> in the return type of the function.</p> <p>How do I define <code>Grid</code> such that I can write</p> <pre><code>def columns(grid: Grid[T]) -&gt; Iterable[list[T]]: ... </code></pre> <p>I've looked at <code>typing.GenericAlias</code>, but can't see how it helps me.</p> <p>(I'm aware that Sequence[Sequence[T]] has no guarantee that the grid is actually rectangular, but that's not the problem I want to focus on here.)</p>
<python><generics><python-typing><nested-generics>
2023-01-07 12:30:03
1
6,396
Chris Wesseling
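For the question above, a sketch of how this can work: a type alias that contains a TypeVar is itself generic, so it can be parameterised at use sites, which ties the alias's T to the function's T.

```python
from typing import Iterable, Sequence, TypeVar

T = TypeVar("T")
Grid = Sequence[Sequence[T]]  # generic alias; Grid[int] means Sequence[Sequence[int]]

def columns(grid: Grid[T]) -> Iterable[list[T]]:
    return ([row[i] for row in grid] for i in range(len(grid[0])))

ints: Grid[int] = [[1, 2], [3, 4]]
print(list(columns(ints)))  # [[1, 3], [2, 4]]
```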
75,040,164
18,948,596
Check if numpy's array_like is an empty array
<p>Suppose <code>a</code> is an <code>array_like</code> and we want to check if it is empty. Two possible ways to accomplish this are:</p> <pre class="lang-py prettyprint-override"><code>if not a: pass if numpy.array(a).size == 0: pass </code></pre> <p>The first solution would also evaluate to <code>True</code> if <code>a=None</code>. However I would like to only check for an empty <code>array_like</code>.</p> <p>The second solution seems good enough for that. I was just wondering if there is a numpy built-in function for that or a better solution then to check for the size?</p>
<python><arrays><numpy>
2023-01-07 12:02:51
1
413
Racid
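For the question above, a small sketch: numpy.size() accepts any array_like directly, so the emptiness test can be written without building the array by hand, and it does not treat None as empty.

```python
import numpy as np

def is_empty(a) -> bool:
    # np.size works on any array_like (lists, tuples, arrays, scalars)
    return np.size(a) == 0

print(is_empty([]))        # True
print(is_empty([[], []]))  # True  (shape (2, 0) has size 0)
print(is_empty([0]))       # False
print(is_empty(None))      # False (None becomes a 0-d object scalar of size 1)
```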
75,040,072
1,636,973
linux: how to wait for a process and file at the same time?
<p>I am trying to build an event loop in python, mostly for educational purposes. I am not interested in solutions involving asyncio or similar because my goal is really to learn about the low-level linux APIs.</p> <p>For file-like objects (pipes, sockets, …) there are the select/poll/epoll system calls. In python, these are wrapped in the <a href="https://docs.python.org/3/library/selectors.html" rel="nofollow noreferrer">selectors module</a>. This is a simple loop that can wait for several files at the same time:</p> <pre class="lang-py prettyprint-override"><code>file1 = open('/var/run/foo.sock') file2 = open('/var/run/bar.sock') with selectors.DefaultSelector() as selector: selector.register(file1, selectors.EVENT_READ) selector.register(file2, selectors.EVENT_READ) while True: for key, mask in selector.select(): if key.fileobj == file1: handle_data(file1) else: handle_data(file2) </code></pre> <p>For proceeses there is the wait system call. In python this is wrapped in the <a href="https://docs.python.org/3/library/subprocess.html" rel="nofollow noreferrer">subprocess module</a>. This is the code to wait for a single process to terminate:</p> <pre class="lang-py prettyprint-override"><code>proc = subprocess.Popen(['/usr/bin/foo']) proc.wait() handle_data(proc) </code></pre> <p>How can I mix those two, or even wait for more than one process at a time.</p> <p>Both <code>selector.select()</code> and <code>proc.wait()</code> will take a timeout, so we can turn this into a semi-busy loop:</p> <pre class="lang-py prettyprint-override"><code>file1 = open('/var/run/foo.sock') file2 = open('/var/run/bar.sock') proc = subprocess.Popen(['/usr/bin/foo']) timeout = 1 with selectors.DefaultSelector() as selector: selector.register(file1, selectors.EVENT_READ) selector.register(file2, selectors.EVENT_READ) while True: for key, mask in selector.select(timeout): if key.fileobj == file1: handle_data(file1) else: handle_data(file2) if proc: try: proc.wait(timeout) handle_data(proc) proc = None except TimeoutExpired: pass </code></pre> <p>I guess there must be a better way. How is this usually done?</p>
<python><linux><asynchronous><select>
2023-01-07 11:47:49
1
2,494
tobib
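For the question above, a hedged sketch of one Linux-specific option (Linux 5.3+, Python 3.9+): os.pidfd_open() returns a file descriptor that becomes readable when the child exits, so the process can be registered in the same selector as the sockets; the command and the data tags are placeholders.

```python
import os
import selectors
import subprocess

selector = selectors.DefaultSelector()

proc = subprocess.Popen(["/bin/sleep", "2"])
pidfd = os.pidfd_open(proc.pid)          # fd becomes readable once the process exits
selector.register(pidfd, selectors.EVENT_READ, data="proc")
# sockets/pipes would be registered alongside, e.g.:
# selector.register(sock, selectors.EVENT_READ, data="sock")

while selector.get_map():
    for key, mask in selector.select():
        if key.data == "proc":
            print("child exited with", proc.wait())
            selector.unregister(pidfd)
            os.close(pidfd)
        # elif key.data == "sock": handle_data(...)
```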
75,040,059
8,849,755
Python print single byte as char
<p>I have a long array of bytes and I need to carefully inspect the values at each position. So I want to print it in two columns with byte number and byte value. How can this be done?</p> <p>Example:</p> <pre><code>bytes = b'hola\x00chau' print(bytes) for i,byte in enumerate(bytes): print(i,byte) </code></pre> <p>Desired output:</p> <pre><code>b'hola\x00chau' 0 h 1 o 2 l 3 a 4 \x00 5 c 6 h 7 a 8 u </code></pre> <p>The code actually prints the bytes as integers.</p>
<python><arrays><byte>
2023-01-07 11:45:56
2
3,245
user171780
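For the question above, a minimal sketch: iterating a bytes object yields integers, so format each value either as a printable character or as a \xNN escape.

```python
data = b'hola\x00chau'
print(data)
for i, value in enumerate(data):           # iterating bytes yields ints
    char = chr(value) if 32 <= value < 127 else f"\\x{value:02x}"
    print(i, char)
```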
75,040,048
859,141
Django. Access list index in child for loop
<p>How can I change the accessed index of an inner for loop list based on a counter from the outer loop? In normal python I would do something like</p> <pre><code>parent_list = ['One','Two','Three'] child_list = ['A','B','C'] for idx, item in enumerate(parent_list): for child_item in child_list[idx]: print(str(idx), item, child_item) </code></pre> <p>I've been looking at using with and forloop.counters but I either run into the index not being accessed or index not changing. This is what I currently have.</p> <pre><code> {% for item in payout_items %} {% with forloop.counter0 as outer_counter %} &lt;h2&gt;{{ item.market_place }} on {{ item.entry_date }}: {% for item in royalty_items.outer_counter %} &lt;tr&gt; &lt;th scope=&quot;row&quot;&gt;{{ item.id }}&lt;/th&gt; &lt;td&gt;{{ item.entry_date }}&lt;/td&gt; &lt;td&gt;{{ item.market_place }}&lt;/td&gt; &lt;/tr&gt; {% endfor %} {% endwith %} {% endfor %} </code></pre> <p>If I change</p> <pre><code>{% for item in royalty_items.outer_counter %} </code></pre> <p>to</p> <pre><code>{% for item in royalty_items.0 %} </code></pre> <p>I get the first index repeated many times. I can see that outer_counter is incrementing from the output but just need royalty_items to increment the accessed index as well.</p> <p>As requested the view is</p> <pre><code> detail_object = AccountingPeriod.objects.get(pk=detail_id) payout_items = Payout.objects.filter(sales_period_start__range=[detail_object.period_start, detail_object.period_end]) royalty_items = [] for payout in payout_items: temp = Royalty.objects.filter(entry_date__range=[payout.sales_period_start, payout.sales_period_end]).filter(market_place=payout.market_place) print(str(payout.sales_period_start) + &quot; - &quot; + str(payout.sales_period_end) + &quot; - &quot; + payout.market_place + &quot; = &quot; + str(len(temp))) royalty_items.append(temp) </code></pre> <p>And the following render call is passed.</p> <pre><code>render(request, 'royalties/accounting_period/summary.html', {'detail_object': detail_object, 'payout_items': payout_items, 'royalty_items': royalty_items}) </code></pre> <p>Solution: I created a template filter but feel there should be a more elegant answer.</p> <pre><code>@register.filter() def getRoyaltySet(royalty_items, outer_counter): return royalty_items[outer_counter] </code></pre>
<python><django><django-views><django-templates>
2023-01-07 11:44:40
2
1,184
Byte Insight
75,039,860
13,950,870
How to concat column Y to column X and replicate column Z values in a pandas DataFrame?
<p>I have a pandas DataFrame with three columns:</p> <pre><code> X Y Z 0 1 4 True 1 2 5 True 2 3 6 False </code></pre> <p>How do I make it so that I have two columns <code>X</code> and <code>Z</code> with values:</p> <pre><code> X Z 0 1 True 1 2 True 2 3 False 3 4 True 4 5 True 5 6 False </code></pre>
<python><pandas>
2023-01-07 11:12:45
4
672
RogerKint
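For the reshaping question above, a minimal sketch: stack the X part and the Y part (renamed to X) on top of each other, keeping Z both times.

```python
import pandas as pd

df = pd.DataFrame({"X": [1, 2, 3], "Y": [4, 5, 6], "Z": [True, True, False]})

out = pd.concat(
    [df[["X", "Z"]], df[["Y", "Z"]].rename(columns={"Y": "X"})],
    ignore_index=True,
)
print(out)
#    X      Z
# 0  1   True
# 1  2   True
# 2  3  False
# 3  4   True
# 4  5   True
# 5  6  False
```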
75,039,721
3,287,355
Matplotlib add labels to individual stacks in a stacked chart for imbalanced dataset
<p>I am trying to plot a graph based on a huge imbalanced dataset using matplotlib and it looks something like</p> <p><a href="https://i.sstatic.net/9Gcad.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Gcad.png" alt="enter image description here" /></a></p> <p>I want to represent % values on each bar, but they all seem cluttered/messy when the stacks get really small. The dataset is dynamic, and I cannot modify it.</p> <p>I tried playing around with <a href="https://matplotlib.org/stable/api/text_api.html" rel="nofollow noreferrer">matplotlib.text</a>, but couldn't figure out a way to get past this.</p> <pre><code>def plot_stack_bar(plot_df, xlabel, ylabel, title): plot_df_percent=pd.DataFrame() cols = list(plot_df) plot_df_percent[cols] = plot_df[cols].div( plot_df[cols].sum(axis=1), axis=0 ) plot_df.plot(kind='bar', stacked=True, figsize=(20, 17)) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.title(title) for n, x in enumerate([*plot_df.index.values]): i=0 for (proportion, y_loc) in zip(plot_df_percent.loc[x], plot_df_percent.loc[x].cumsum()): y_loc = plot_df.iloc[n,I] / 2 + plot_df.iloc[n,:i].sum() plt.text(x=n-0.17, y=y_loc, s=f'{np.round(proportion * 100, 1)}%', color=&quot;black&quot;, fontsize=12, fontweight=&quot;bold&quot;) i=i+1 plt.savefig(f'draft/{title}.png') plt.show() </code></pre> <p>Appreciate any help.</p>
<python><matplotlib><plot><stacked-chart>
2023-01-07 10:51:15
0
1,346
moonwalker7
75,039,674
1,439,912
Python type hinting for generic container constructor
<p>What is the correct typing to use for the below marked in ???, where we cast a generic iterable data container type to an iterable container of different type?</p> <pre><code>def foo(itr:Iterable, cast_type:???) -&gt; ???: (For Py 3) # type: (Iterable[Any], ???) -&gt; ??? (For Py 2.7) return cast_type(itr) foo([1,2], cast_type=set) # Example 1 foo(set([1,2]), cast_type=list) # Example 2 ... </code></pre>
<python><python-typing>
2023-01-07 10:42:32
2
480
Pontus Hultkrantz
75,039,612
7,437,143
Typing: NameError: name 'function' is not defined
<p>While trying to specify the return type of a function that returns a list of functions, I am experiencing some difficulties. The typechecker retrurns:</p> <pre><code>Traceback (most recent call last): File &quot;/home/name/anaconda/envs/snncompare/lib/python3.10/runpy.py&quot;, line 196, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/home/name/anaconda/envs/snncompare/lib/python3.10/runpy.py&quot;, line 86, in _run_code exec(code, run_globals) File &quot;/home/name/git/Hiveminds/Apk-controller/src/apkcontroller/__main__.py&quot;, line 5, in &lt;module&gt; from src.apkcontroller.scripts.org_torproject_android import Apk_script File &quot;/home/name/git/Hiveminds/Apk-controller/src/apkcontroller/scripts/org_torproject_android.py&quot;, line 8, in &lt;module&gt; from src.apkcontroller.Screen import Screen File &quot;/home/name/git/Hiveminds/Apk-controller/src/apkcontroller/Screen.py&quot;, line 10, in &lt;module&gt; class Screen: File &quot;/home/name/git/Hiveminds/Apk-controller/src/apkcontroller/Screen.py&quot;, line 105, in Screen ) -&gt; List[function]: NameError: name 'function' is not defined </code></pre> <p>For code:</p> <pre class="lang-py prettyprint-override"><code> @typechecked def get_next_actions( self, required_objects: Dict[str, str], optional_objects: Dict[str, str], ) -&gt; List[function]: &quot;&quot;&quot;Looks at the required objects and optinoal objects and determines which actions to take next. An example of the next actions could be the following List: 0. Select a textbox. 1. Send some data to a textbox. 2. Click on the/a &quot;Next&quot; button. Then the app goes to the next screen and waits a pre-determined amount, and optionally retries a pre-determined amount of attempts. TODO: determine how to put this unique function on the right node. &quot;&quot;&quot; print( &quot;TODO: determine how to specify how to compute the next action&quot; + f&quot; for this screen. {required_objects},{optional_objects}&quot; ) return [actions_1, actions_2] @typechecked def actions_1(d: AutomatorDevice) -&gt; None: &quot;&quot;&quot;Performs the actions in option 1 in this screen.&quot;&quot;&quot; print(f&quot;TODO: perform actions 1.{d}&quot;) @typechecked def actions_2(d: AutomatorDevice) -&gt; None: &quot;&quot;&quot;Performs the actions in option 2 in this screen.&quot;&quot;&quot; print(f&quot;TODO: perform actions 2.{d}&quot;) </code></pre> <p>Which is called with:</p> <pre class="lang-py prettyprint-override"><code>screen.get_next_actions(screen.required_objects,screen.optional_objects) </code></pre> <h2>Question</h2> <p>How can I specify the return type of a function, as being a list of functions?</p> <h2>Approach I</h2> <p>I tried importing:</p> <pre class="lang-py prettyprint-override"><code>from typing import Function </code></pre> <p>However, that object does not exist in the <code>typing</code> library.</p>
<python><function><typing>
2023-01-07 10:31:37
1
2,887
a.t.
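For the question above, a sketch of the usual spelling: functions are typed with typing.Callable (or collections.abc.Callable), parameterised with the argument and return types, rather than with a name like function. AutomatorDevice is the project's own type, so a string forward reference keeps the sketch self-contained.

```python
from typing import Callable, List

def actions_1(d: "AutomatorDevice") -> None:
    print(f"TODO: perform actions 1.{d}")

def actions_2(d: "AutomatorDevice") -> None:
    print(f"TODO: perform actions 2.{d}")

def get_next_actions() -> List[Callable[["AutomatorDevice"], None]]:
    # each element is a function taking an AutomatorDevice and returning None
    return [actions_1, actions_2]
```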
75,039,533
19,003,861
Cannot render model fields in a for loop with annotate() and values() (Django)
<p>I am using <code>.values()</code> and <code>.annotate()</code>to sum up 2 models fields based on matching criteria.</p> <p>I wrapped this in a <code>forloop</code> in my template to iterate.</p> <p>Problem: I cannot call the model fields anymore. The forloop returns the venue_id instead of the name and the usual approach to call the logo does not work anymore.</p> <p>(these were rendering fine before I used <code>.values()</code> and <code>.annotate()</code>. Makes me think I am missing something in the logic here. Any ideas?</p> <p><strong>Models</strong></p> <pre><code>class Venue(models.Model, HitCountMixin): id = models.AutoField(primary_key=True) name = models.CharField(verbose_name=&quot;Name&quot;,max_length=100, blank=True) logo = models.URLField('Logo', null=True, blank=True) class Itemised_Loyatly_Card(models.Model): user = models.ForeignKey(UserProfile, blank=True, null=True, on_delete=models.CASCADE) venue = models.ForeignKey(Venue, blank=True, null=True, on_delete=models.CASCADE) add_points = models.IntegerField(name = 'add_points', null = True, blank=True, default=0) use_points = models.IntegerField(name= 'use_points', null = True, blank=True, default=0) </code></pre> <p><strong>Views</strong></p> <pre><code>from django.db.models import Sum, F def user_loyalty_card(request): itemised_loyalty_cards = Itemised_Loyatly_Card.objects.filter(user=request.user.id).values('venue').annotate(add_points=Sum('add_points')).annotate(use_points=Sum('use_points')).annotate(total=F('add_points')-F('use_points')) return render(request,&quot;main/account/user_loyalty_card.html&quot;, {'itemised_loyalty_cards':itemised_loyalty_cards}) </code></pre> <p><strong>Templates</strong></p> <pre><code>{%for itemised_loyatly_card in itemised_loyalty_cards %} &lt;img&quot;src=&quot;{{itemised_loyatly_card.venue.logo}}&quot;&gt; {{itemised_loyatly_card.venue}} {{itemised_loyatly_card.total}} {%endfor%} </code></pre> <p><strong>Renders</strong></p> <p><a href="https://i.sstatic.net/0izvt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0izvt.png" alt="enter image description here" /></a></p>
<python><django><django-views><django-templates>
2023-01-07 10:17:06
1
415
PhilM
75,039,459
15,352,143
For categorical class RuntimeError: 0D or 1D target tensor expected, multi-target not supported
<p>I have 28 features and target variable is categorical (0-8) i.e. 9 target variable .</p> <p>Data sample:</p> <pre><code>X_train.shape,y_train.shape output --((640, 28), (640, 1)) X_train[0] output --array([0.4546875 , 0.63958333, 0.46875 , 0.62916667, 0.4859375 , 0.62916667, 0.5015625 , 0.64166667, 0.4859375 , 0.65 , 0.4671875 , 0.65 , 0.478125 , 0.6375 , 0.5625 , 0.64166667, 0.5765625 , 0.62708333, 0.5921875 , 0.62708333, 0.60625 , 0.63541667, 0.59375 , 0.64583333, 0.5765625 , 0.64791667, 0.58125 , 0.63541667]) y_train[0] output --array([1]) </code></pre> <p>defined data generator and model like below</p> <pre><code>class ClassifierDataset(Dataset): def __init__(self, X_data, y_data): self.X_data = X_data self.y_data = y_data def __getitem__(self, index): return self.X_data[index], self.y_data[index] def __len__ (self): return len(self.X_data) train_dataset = ClassifierDataset(torch.from_numpy(X_train).float(), torch.from_numpy(y_train).long()) val_dataset = ClassifierDataset(torch.from_numpy(X_val).float(), torch.from_numpy(y_val).long()) test_dataset = ClassifierDataset(torch.from_numpy(X_test).float(), torch.from_numpy(y_test).long()) EPOCHS = 150 BATCH_SIZE = 32 LEARNING_RATE = 0.0007 NUM_FEATURES = len(X[0]) NUM_CLASSES = 9 train_loader = DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle = True ) val_loader = DataLoader(dataset=val_dataset, batch_size=1) test_loader = DataLoader(dataset=test_dataset, batch_size=1) class MulticlassClassification(nn.Module): def __init__(self, num_feature, num_class): super(MulticlassClassification, self).__init__() self.layer_1 = nn.Linear(num_feature, 512) self.layer_2 = nn.Linear(512, 128) self.layer_3 = nn.Linear(128, 64) self.layer_out = nn.Linear(64, num_class) self.relu = nn.ReLU() self.dropout = nn.Dropout(p=0.2) self.batchnorm1 = nn.BatchNorm1d(512) self.batchnorm2 = nn.BatchNorm1d(128) self.batchnorm3 = nn.BatchNorm1d(64) def forward(self, x): x = self.layer_1(x) x = self.batchnorm1(x) x = self.relu(x) x = self.layer_2(x) x = self.batchnorm2(x) x = self.relu(x) x = self.dropout(x) x = self.layer_3(x) x = self.batchnorm3(x) x = self.relu(x) x = self.dropout(x) x = self.layer_out(x) return x </code></pre> <p>Defined loss and batch size as:</p> <pre><code>model = MulticlassClassification(num_feature = NUM_FEATURES, num_class=NUM_CLASSES) model.to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE) print(model) </code></pre> <p>Defined function for multi accuracy class</p> <pre><code>def multi_acc(y_pred, y_test): y_pred_softmax = torch.log_softmax(y_pred, dim = 1) _, y_pred_tags = torch.max(y_pred_softmax, dim = 1) correct_pred = (y_pred_tags == y_test).float() acc = correct_pred.sum() / len(correct_pred) acc = torch.round(acc * 100) return acc </code></pre> <p>Started training like this</p> <pre><code>accuracy_stats = { 'train': [], &quot;val&quot;: [] } loss_stats = { 'train': [], &quot;val&quot;: [] } print(&quot;Begin training.&quot;) for e in tqdm(range(1, EPOCHS+1)): # TRAINING train_epoch_loss = 0 train_epoch_acc = 0 model.train() for X_train_batch, y_train_batch in train_loader: print(X_train_batch.shape, y_train_batch.shape) X_train_batch, y_train_batch = X_train_batch.to(device), y_train_batch.to(device) optimizer.zero_grad() y_train_pred = model(X_train_batch) # y_train_pred = y_train_pred.unsqueeze(1) print(y_train_pred.shape,y_train_batch.shape) print(y_train_batch) print(y_train_pred) # train_loss = criterion(y_train_pred, torch.max(y_train_batch,1)[1]) train_loss 
= criterion(y_train_pred, y_train_batch) train_acc = multi_acc(y_train_pred, y_train_batch) train_loss.backward() optimizer.step() train_epoch_loss += train_loss.item() train_epoch_acc += train_acc.item() # VALIDATION with torch.no_grad(): val_epoch_loss = 0 val_epoch_acc = 0 model.eval() for X_val_batch, y_val_batch in val_loader: X_val_batch, y_val_batch = X_val_batch.to(device), y_val_batch.to(device) y_val_pred = model(X_val_batch) # val_loss = criterion(y_val_pred, torch.max(y_val_batch,1)[1]) val_loss = criterion(y_val_pred, y_val_batch) val_acc = multi_acc(y_val_pred, y_val_batch) val_epoch_loss += val_loss.item() val_epoch_acc += val_acc.item() loss_stats['train'].append(train_epoch_loss/len(train_loader)) loss_stats['val'].append(val_epoch_loss/len(val_loader)) accuracy_stats['train'].append(train_epoch_acc/len(train_loader)) accuracy_stats['val'].append(val_epoch_acc/len(val_loader)) print(f'Epoch {e+0:03}: | Train Loss: {train_epoch_loss/len(train_loader):.5f} | Val Loss: {val_epoch_loss/len(val_loader):.5f} | Train Acc: {train_epoch_acc/len(train_loader):.3f}| Val Acc: {val_epoch_acc/len(val_loader):.3f}') </code></pre> <p>This is the error I am getting:</p> <pre><code>RuntimeError Traceback (most recent call last) &lt;ipython-input-529-1d57dbd350e4&gt; in &lt;module&gt; 17 print(y_train_pred) 18 # train_loss = criterion(y_train_pred, torch.max(y_train_batch,1)[1]) ---&gt; 19 train_loss = criterion(y_train_pred, y_train_batch) 20 train_acc = multi_acc(y_train_pred, y_train_batch) 21 2 frames /usr/local/lib/python3.8/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 2699 if size_average is not None or reduce is not None: 2700 reduction = _Reduction.legacy_get_string(size_average, reduce) -&gt; 2701 return torch._C._nn.nll_loss_nd(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 2702 2703 RuntimeError: 0D or 1D target tensor expected, multi-target not supported </code></pre> <p>Any idea how to correct this? I've been stuck for a long time</p>
<python><deep-learning><pytorch><neural-network>
2023-01-07 10:03:18
1
313
k_p
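For the error above, a sketch of the usual fix: nn.CrossEntropyLoss expects class-index targets of shape (batch,), while the labels here have shape (batch, 1), so squeezing the extra dimension before the loss (and the accuracy helper) resolves the multi-target error; the shapes below mirror the setup above with random data.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

logits = torch.randn(32, 9)              # (batch, num_classes) model output
targets = torch.randint(0, 9, (32, 1))   # (batch, 1), as y_train_batch is shaped above

loss = criterion(logits, targets.squeeze(1))  # targets must be 1D class indices
print(loss.item())
```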
75,039,222
11,082,866
Dynamic annotation in Django queryset
<p>I have a Django queryset being created for a graph like the following:</p> <pre><code>obj = Allotment.objects .all() .values('dispatch_date') .annotate(transaction_no=Count('transaction_no')) .values('dispatch_date', 'transaction_no') </code></pre> <p>Now I am trying to pass all these values dynamically so I tried this :</p> <pre><code> data = request.data.copy() x_model = data['x_model'] y_model = data['y_model'] obj = Allotment.objects .all() .values('{}'.format(x_model)) .annotate(transaction_no=Count('{}'.format(y_model))) .values('{}'.format(x_model), '{}'.format(y_model)) </code></pre> <p>where I just replaced the string values and it gives me the following error:</p> <blockquote> <p>The annotation 'transaction_no' conflicts with a field on the model.</p> </blockquote> <p>How do I dynamically change the field in annotate function? and apply aggregators such as <code>Sum</code>, <code>Count</code>, etc given that the aggregators are passed through the API as well</p>
<python><django><django-rest-framework>
2023-01-07 09:16:34
1
2,506
Rahul Sharma
75,039,156
3,088,891
How can I rotate axis labels in a faceted seaborn.objects plot?
<p>I am working with the excellent <code>seaborn.objects</code> module in the most recent version of <strong>seaborn</strong>.</p> <p>I would like to produce a plot:</p> <ul> <li>With rotated x-axis labels</li> <li>With facets</li> </ul> <p>Rotating x-axis labels is not directly supported within <code>seaborn.objects</code> (and standard stuff like <code>plt.xticks()</code> after making the graph doesn't work), but the documentation suggests doing it using the <code>.on()</code> method. <code>.on()</code> takes a <strong>matplotlib</strong> figure/subfigure or axis object and builds on top of it. As I pointed out in my answer to <a href="https://stackoverflow.com/questions/74715767/how-to-rotate-the-xticks-in-a-seaborn-objects-v0-12x-plot/75039104#75039104">this question</a>, the following code works to rotate axis labels:</p> <pre class="lang-py prettyprint-override"><code>import seaborn.objects as so import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({'a':[1,2,3,4], 'b':[5,6,7,8], 'c':['a','a','b','b']}) fig, ax = plt.subplots() ax.xaxis.set_tick_params(rotation=90) (so.Plot(df, x = 'a', y = 'b') .add(so.Line()) .on(ax)) </code></pre> <p>However, if I add facets by changing the graph code to</p> <pre class="lang-py prettyprint-override"><code>(so.Plot(df, x = 'a', y = 'b') .add(so.Line()) .on(ax) .facet('c')) </code></pre> <p>I get the error <code>Cannot create multiple subplots after calling Plot.on with a &lt;class 'matplotlib.axes._axes.Axes'&gt; object. You may want to use a &lt;class 'matplotlib.figure.SubFigure'&gt; instead.</code></p> <p>However, if I follow the instruction, and instead use the <code>fig</code> object, rotating the axes in that, I get a strange dual x-axis, one with rotated labels unrelated to the data, and the actual graph's labels, unrotated:</p> <pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots() plt.xticks(rotation = 90) (so.Plot(df, x = 'a', y = 'b') .add(so.Line()) .on(fig) .facet('c')) </code></pre> <p><a href="https://i.sstatic.net/PG62p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PG62p.png" alt="Faceted graph with strange dual labeling" /></a></p> <p>How can I incorporate rotated axis labels with facets?</p>
<python><matplotlib><seaborn><seaborn-objects>
2023-01-07 09:04:32
1
1,253
NickCHK
75,039,071
7,959,614
Turn table-element into Pandas DataFrame
<p>I would like to turn a table into a <code>pandas.DataFrame</code>.</p> <pre><code>URL = 'https://ladieseuropeantour.com/reports-page/?tourn=1202&amp;tclass=rnk&amp;report=tmscores~season=2015~params=P*4ESC04~#/profile' </code></pre> <p>The element in question is</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By driver.get(URL) ranking = driver.find_element(By.XPATH, &quot;.//*[@id='maintablelive']&quot;) </code></pre> <p>I tried the following:</p> <pre><code>import pandas as pd pd.read_html(ranking.get_attribute('outerHTML'))[0] </code></pre> <p>I am also using the dropdown-menu to select multiple rounds. When a different round is selected, <code>driver.current_url</code> doesn't change so I think it's not possible to load these new tables with <code>requests</code> or anything.</p> <p>Please advice!</p>
<python><pandas><selenium>
2023-01-07 08:44:55
1
406
HJA24
75,038,677
8,049,220
Plotly Scatter3D plot with consistent gradient
<p>I'm trying to visualize temperature curves from different source in a 3D scatter plot. I have set <code>colorscale='Turbo'</code> so that hotter region will be read and colder region will be blue. But I cannot plot multiple scatter line with consistent color gradient. I understand the problem is occurring because I'm drawing multiple line separately. But I couldn't find a way to do it at a time. I can do it for <code>line</code> plots. But <code>line</code> cannot be drawn with gradient.</p> <p>Here is the visualization. <a href="https://i.sstatic.net/Fqfvc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fqfvc.png" alt="enter image description here" /></a></p> <p>For a curve negative temperature is shown as red.</p> <p>My code:</p> <pre class="lang-py prettyprint-override"><code>fig = go.Figure(go.Scatter3d(x=ch6_data['time'], y=ch6_data['channel'], z=ch6_data['Ch6'], mode='markers', marker=dict(color=ch6_data['Ch6'], colorscale='Turbo', size=1, showscale=True))) fig.add_trace( go.Scatter3d(x=ch1_data['time'], y=ch1_data['channel'], z=ch1_data['Ch1'], mode='markers', marker=dict(color=ch1_data['Ch1'], colorscale='Turbo', size=1, showscale=True)) ) </code></pre> <p>Is there any way to plot separate temperature curves with consistent gradient?</p>
<python><plot><plotly><scatter-plot>
2023-01-07 07:12:10
0
643
carl
75,038,622
2,773,461
Understanding OpenTelemetry integration with Google Cloud Pub/Sub in Python
<p>I am curious about how distributed tracing can be provided to a message from a publisher and how this is received in the subscriber part just to get the possibility to get track about what could be happening when things go wrong in the point the message is sent (publisher) and the message is received (subscriber). This under python gcp pubsub client perspective.</p> <p>I see this <a href="https://github.com/googleapis/python-pubsub/pull/149" rel="nofollow noreferrer">PR</a> and it seems to pursue that, as it is also kind of explained <a href="https://medium.com/google-cloud/integrating-opentelemetry-into-cloud-pub-sub-19aacd83692a" rel="nofollow noreferrer">in this article</a> that the author of the PR owns. But it seems Open telemetry support for keeping traces of the flow of pub-sub messages is not still in place for <a href="https://github.com/googleapis/python-pubsub" rel="nofollow noreferrer">gcp python client pub-sub</a></p> <p>I wanted to mention this foreword just to ask the following question:</p> <p>On the other hand, I see in the OTEL collector project the <a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/googlecloudpubsubexporter" rel="nofollow noreferrer">Google Cloud Pub Sub exporter</a> and the <a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/googlecloudpubsubreceiver" rel="nofollow noreferrer">Google Cloud Pub Sub Receiver</a> modules, how is this different from the purpose of the above here mentioned PR?</p> <p>I guess under the collector perspective those modules are for sending traces (already in OTEL collector) from an application perspective to a pub sub-topic (exporter) and for getting OTEL messages from a subscription (receiver), but not to trace the messages that a publisher send and a subscriber receive?</p> <p>I would like to get a better comprehension about sending traces to a pub sub-topic or receiving OTEL messages from a subscription and the idea of generating tracing from a publisher to see the behavior of those messages until they reach the subscriber(s)</p>
<python><publish-subscribe><google-cloud-pubsub><open-telemetry>
2023-01-07 06:58:50
1
3,263
bgarcial
75,038,573
2,147,347
Modifying Pytorch Pretrained Model's Parameter in `forward()` Makes Training Slower
<p>I have two parameters, A and B, that I need to put to replace all the weight of the pre-trained model. So I want to utilize the forward calculation of the pre-trained model but not the weight.</p> <p>I want to modify the weight of the model W = A + B, where A is a fixed tensor (not trainable), but B is a trainable parameter. So, in the end, my aim is to train B in the structure of the pre-trained model.</p> <p>This is my attempt:</p> <pre class="lang-py prettyprint-override"><code>class Net(nn.Module): def __init__(self, pre_model, B): super(Net, self).__init__() self.B = B self.pre_model = copy.deepcopy(pre_model) for params in self.pre_model.parameters(): params.requires_grad = False def forward(self, x, A): for i, params in enumerate(self.pre_model.parameters()): params.copy_(A[i].detach().clone()) # detached because I don't need to train A params.add_(self.B[i]) # I need to train B so no detach params.retain_grad() x = self.pre_model(x) return x </code></pre> <p>And this is how I calculate A and B:</p> <pre class="lang-py prettyprint-override"><code>b = [] A = [] for params in list(pre_model.parameters()): A.append(torch.rand_like(params)) b_temp = nn.Parameter(torch.rand_like(params)) b.append(b_temp.detach().clone()) B = nn.ParameterList(b) </code></pre> <p>I checked in the process, and B was already trained. But the problem is in every iteration, the training process keeps getting slower:</p> <pre><code>Epoch 1: 24%|██▍ | 47/196 [00:05&lt;00:23, 6.44it/s] 57%|█████▋ | 111/196 [00:18&lt;00:19, 4.28it/s] 96%|█████████▋| 189/196 [00:41&lt;00:02, 2.90it/s] Epoch 2: 6%|▌ | 11/196 [00:04&lt;01:14, 2.50it/s] </code></pre> <p>I think I have detached all the parameters correctly, but I am not sure why it happened.</p> <p>UPDATED:</p> <p>credits to <a href="https://discuss.pytorch.org/t/modifying-weight-of-pretrained-model-in-forward-makes-training-slowing/169602/5?u=malioboro" rel="nofollow noreferrer">ptrblck from PyTorch Forum</a>, you can run the minimal example code in <a href="https://colab.research.google.com/drive/1L16Rs3rlNrtY-vJ0q4uz-40a-tJCggur#scrollTo=cBp6bBJvsCOx" rel="nofollow noreferrer">Google Colab here</a>. Or use the code below for the main iteration. You will see the training iteration keeps getting slower and slower.</p> <pre><code>from torch.cuda import synchronize device = 'cuda' pre_model = models.resnet18().to(device) b = [] A = [] for params in list(pre_model.parameters()): A.append(torch.rand_like(params)) b_temp = nn.Parameter(torch.rand_like(params)) b.append(b_temp.detach().clone()) B = nn.ParameterList(b) modelwithAB = Net(pre_model, B) optimizer = torch.optim.Adam(modelwithAB.parameters(), lr=1e-3) image = torch.randn(2, 3, 224, 224).to(device) print(torch.cuda.memory_allocated()/1024**2) for i in tqdm(range(300)): optimizer.zero_grad() out = modelwithAB(image, A) start = time.time() out.mean().backward() torch.cuda.synchronize() optimizer.step() if i%40==0: print(&quot;-&quot;, torch.cuda.memory_allocated()/1024**2, &quot;-&quot;, time.time()-start) </code></pre>
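<p>A plausible explanation is that the in-place <code>copy_</code>/<code>add_</code> calls keep extending the autograd graph attached to the persistent parameter tensors, so each step backs through an ever longer history. Below is a hedged alternative sketch that leaves the pretrained module's own weights untouched and builds <code>W = A + B</code> functionally on every call; it assumes a recent PyTorch (<code>torch.nn.utils.stateless.functional_call</code>, 1.12+, exposed as <code>torch.func.functional_call</code> in 2.x) and reuses the question's <code>pre_model</code>, <code>A</code> and <code>B</code>.</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch.nn.utils.stateless import functional_call  # PyTorch &gt;= 1.12

class Net(torch.nn.Module):
    # Sketch: compute W = A + B per forward pass without mutating pre_model's parameters
    def __init__(self, pre_model, B):
        super().__init__()
        self.pre_model = pre_model
        for p in self.pre_model.parameters():
            p.requires_grad_(False)
        self.B = B  # nn.ParameterList, trainable

    def forward(self, x, A):
        names = [n for n, _ in self.pre_model.named_parameters()]
        # Fresh tensors every call: A stays constant, gradients flow only into B
        weights = {n: a.detach() + b for n, a, b in zip(names, A, self.B)}
        return functional_call(self.pre_model, weights, (x,))
</code></pre>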
<python><pytorch>
2023-01-07 06:44:09
2
3,321
malioboro
75,038,239
15,893,581
can't install scikit-learn in python 3.10 for Windows 10?
<p>It's all about <strong>meson-python</strong>:</p> <blockquote> <p>Run-time dependency openblas found: NO</p> </blockquote> <blockquote> <p>....\scipy\meson.build:134:0: <strong>ERROR</strong>: Dependency &quot;OpenBLAS&quot; not found</p> <p>AttributeError: module 'mesonpy' has no attribute 'prepare_metadata_for_build_wheel'</p> </blockquote> <p><a href="https://pypi.org/project/meson-python/" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>Platform Support - Windows :hammer: Does not support linking against libraries from the Meson project</p> </blockquote> <p>How can I solve this problem on Windows 10 with Python 3.10, in order to install scikit-learn?</p> <p>P.S. Though the developers say <a href="https://scikit-learn.org/stable/developers/advanced_installation.html#building-on-windows" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>Since version 0.21, scikit-learn automatically detects and uses the linear algebra library used by SciPy at runtime. Scikit-learn has therefore no build dependency on BLAS/LAPACK implementations such as OpenBlas, Atlas, Blis or MKL</p> </blockquote>
<python>
2023-01-07 05:08:53
2
645
JeeyCi
75,037,969
11,575,257
Vectorize Creating logical replacement along color channel
<p>I have an image with a subject and a mostly white background. I am trying to create an alpha mask around the subject. Due to image compression artifacts, the white background is not entirely made up of (255,255,255) rgb values. I am trying to convert values such as (252,253,252) to (255,255,255).</p> <p>My logic is as follows:</p> <ul> <li>If the rgb values are within 2 of each other</li> <li>If the minimum rgb value is greater than 244</li> <li>Then set those rgb values to (255,255,255)</li> </ul> <p>Here is my inefficient code:</p> <pre><code>image = cv2.imread('image.png') #400,400,3 in shape for c_idx, column in enumerate(image): for r_idx, row in enumerate(column): if min(row) &gt; 244 and max(row)-min(row)&lt;=2: image[c_idx, r_idx] = [255,255,255] </code></pre> <p>I've attempted to make it more efficient by vectorizing with np.where. I've gotten it to check for the second condition but not the first condition.</p> <pre><code>image = cv2.imread('image.png') #400,400,3 in shape image2 = np.where(image&gt;=244, 255, image) </code></pre> <p>I have used <a href="https://stackoverflow.com/questions/63001988/how-to-remove-background-of-images-in-python/63003020">this</a> algorithm, but the blending doesn't bring values down all of the way and I end up with integer overflows on the edges of the subject.</p>
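<p>Both conditions can be evaluated for every pixel at once by reducing along the channel axis; a sketch, assuming a standard 3-channel BGR image as in the question:</p>
<pre><code>import cv2
import numpy as np

image = cv2.imread('image.png')                  # shape (400, 400, 3)

# min over channels &gt; 244  AND  (max - min) over channels &lt;= 2
near_white = (image.min(axis=2) &gt; 244) &amp; (np.ptp(image, axis=2) &lt;= 2)

image[near_white] = [255, 255, 255]              # boolean-mask assignment, no Python loops
</code></pre>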
<python><numpy><alpha-transparency>
2023-01-07 03:41:23
2
1,055
theastronomist
75,037,933
103,252
Python/SqlAlchemy joining table to itself not generating expected query
<p>What I want to do seems like it should be straightforward. I want to join a table representing data collection stations to itself, in order to track previous iterations of stations deployed in the same location.</p> <p>In the code below, I have two classes: StationTable and StationTypeTable. The StationTable has two FK relationships -- one to a station type, and another back to the station table.</p> <p>At the end is the generated SQL that shows the correct join to the StationType table, but no trace whatsoever of the link created by the previous_station column.</p> <p>What am I doing wrong? Note that this will eventually be used with FastApi and the Async Postgres driver, which may or may not be of interest. Also, I don't need to modify the related tables via the relationship; I only need to read some attributes.</p> <p>Using SQLAlchemy 1.4, latest version.</p> <pre><code>from typing import Any import sqlalchemy as sa from sqlalchemy import select from sqlalchemy.orm import registry, RelationshipProperty from sqlalchemy.schema import ForeignKey from sqlalchemy.dialects.postgresql import UUID from sqlalchemy.orm.decl_api import DeclarativeMeta mapper_registry = registry() class BaseTable(metaclass=DeclarativeMeta): __abstract__ = True registry = mapper_registry metadata = mapper_registry.metadata __init__ = mapper_registry.constructor id = sa.Column(UUID(as_uuid=True), primary_key=True, server_default=sa.text(&quot;gen_random_uuid()&quot;)) class Relationship(RelationshipProperty): # type: ignore &quot;&quot;&quot; Using this class hides some of the static typing messiness in SQLAlchemy. &quot;&quot;&quot; inherit_cache = True # If this works, should improve performance def __init__(self, *args: Any, **kwargs: Any): if &quot;lazy&quot; not in kwargs: # 'joined' means items should be loaded &quot;eagerly&quot; in the same query as that of the parent, using a JOIN or LEFT # OUTER JOIN. Whether the join is &quot;outer&quot; or not is determined by the relationship.innerjoin parameter. # We need this to keep loading in async mode from blowing up. 
# https://docs.sqlalchemy.org/en/14/orm/relationship_api.html kwargs[&quot;lazy&quot;] = &quot;joined&quot; super().__init__(*args, **kwargs) class StationTypeTable(BaseTable): __tablename__ = &quot;station_type&quot; name = sa.Column(sa.String(255), unique=True, nullable=False) description = sa.Column(sa.UnicodeText) class StationTable(BaseTable): __tablename__ = &quot;station&quot; name = sa.Column(sa.String(255), unique=True, nullable=False) installation_date = sa.Column(sa.BigInteger, nullable=False) station_type_id = sa.Column(UUID(as_uuid=True), ForeignKey(StationTypeTable.id), nullable=False) previous_station = sa.Column(UUID(as_uuid=True), ForeignKey(&quot;station.id&quot;), nullable=True) station_type_table = Relationship(StationTypeTable, uselist=False) previous_station_table = Relationship(&quot;StationTable&quot;, uselist=False) # self join, uselist=False ==&gt; one-to-one query = select(StationTable) print(query) # SELECT station.id, station.name, station.installation_date, station.station_type_id, station.previous_station, # station_type_1.id AS id_1, station_type_1.name AS name_1, station_type_1.description # FROM station # LEFT OUTER JOIN station_type AS station_type_1 ON station_type_1.id = station.station_type_id </code></pre> <p>EDIT:</p> <p>Based on Ian Wilson's reply below, I added the parameter <code>join_depth=1</code> to the previous_station_table relationship, which did indeed generate the SQL for the relationship, but it is, oddly, &quot;the wrong way around&quot; compared to the station_type_table relationship. Here is the SQL generated with that param:</p> <pre><code>SELECT station.id, station.name, station.installation_date, station.station_type_id, station.previous_station, station_type_1.id AS id_1, station_type_1.name AS name_1, station_type_1.description, station_type_2.id AS id_2, station_type_2.name AS name_2, station_type_2.description AS description_1, station_1.id AS id_3, station_1.name AS name_3, station_1.installation_date AS installation_date_1, station_1.station_type_id AS station_type_id_1, station_1.previous_station AS previous_station_1 FROM station LEFT OUTER JOIN station_type AS station_type_1 ON station_type_1.id = station.station_type_id -- looks right LEFT OUTER JOIN station AS station_1 ON station.id = station_1.previous_station -- looks backward, see below LEFT OUTER JOIN station_type AS station_type_2 ON station_type_2.id = station_1.station_type_id </code></pre> <p>I think that the marked line should be:</p> <p><code>LEFT OUTER JOIN station AS station_1 ON station.previous_station = station_1.id</code></p>
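<p>One knob that may explain the reversed ON clause: for a self-referential many-to-one, SQLAlchemy expects <code>remote_side</code> to say which column belongs to the referenced ("parent") row. A hedged sketch against the classes above, assuming the intent is that <code>previous_station_table</code> should load the row whose <code>id</code> equals this row's <code>previous_station</code>:</p>
<pre><code>class StationTable(BaseTable):
    __tablename__ = 'station'

    # ... columns exactly as above ...

    previous_station_table = Relationship(
        'StationTable',
        uselist=False,
        remote_side='StationTable.id',  # the referenced station's id is the remote side
        join_depth=1,                   # keep the eager self-join to a single level
    )
</code></pre>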
<python><sqlalchemy>
2023-01-07 03:28:35
1
1,932
Watusimoto
75,037,894
151,954
In python, how do you search for a tag value pair in a JSON file
<p>I'm dealing with a large json file that has many boolean values, so I can't just search on the value alone. How can I extract the entire user information if one value is true?</p> <p>For example the json file has many lines that could read something like this:</p> <pre><code>[ 12, { &quot;name&quot;: &quot;Big Bird&quot;, &quot;muting&quot;: false, &quot;following&quot;: true, &quot;blocked_by&quot;: false, &quot;followers_count&quot;: 42 } ] </code></pre> <p>How can I iterate through the file to find all users whose muting value is set to true?</p> <p>The code I have is:</p> <pre><code>import json with open(&quot;tempList&quot;) as f: data = json.load(f) for key in data: if key['muting'] == &quot;false&quot;: print(&quot;YES&quot;) else: print(&quot;$*%#!@&quot;) </code></pre>
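<p>After <code>json.load</code>, the <code>true</code>/<code>false</code> literals become Python booleans rather than the strings <code>&quot;true&quot;</code>/<code>&quot;false&quot;</code>, and the sample mixes plain numbers with user dictionaries, so a hedged sketch of the filter (assuming the file is one JSON array shaped like the snippet) could be:</p>
<pre><code>import json

with open('tempList') as f:
    data = json.load(f)

# Keep only dict entries whose 'muting' flag is the boolean True
muted_users = [item for item in data
               if isinstance(item, dict) and item.get('muting') is True]

for user in muted_users:
    print(user['name'], user)
</code></pre>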
<python><json>
2023-01-07 03:19:03
2
2,780
Leroy Jenkins
75,037,853
12,060,596
Does Tweepy offer proxy support for Twitter API v2?
<p>I've looked through the <a href="https://docs.tweepy.org/en/stable/index.html" rel="nofollow noreferrer">documentation</a> but only found documentation indicating that Tweepy supports proxied requests through the v1 API. I have an essential developer account so I am limited to using the v2 API. As a workaround, I wrote an API call in Python using the requests library as it does give me the ability to specify a proxy URL but in retrospect, I am now wondering if I could've done this with Tweepy after all. I'd be surprised if proxy support was removed from Tweepy for the v2 API.</p>
<python><twitter><python-requests><tweepy><twitter-api-v2>
2023-01-07 03:07:44
2
318
OTM
75,037,777
13,136,438
Google colab blocking requests.get, causing 403 error
<p>I'm trying to scrape web reviews from the appstore through rss and when I run my code on a local environment, it runs just fine and it gets all the requests 200 without issue. but if i run my code in google colab, it eventually fails after a while and google seems to block it, giving a 403 error.</p> <p>i've tried adding headers, putting a time sleep, and adding a proxy to the request but nothing seems to work.</p> <p>does anyone know how to fix this? thank you very much.</p> <p>Here is a link to an example of my code: <a href="https://colab.research.google.com/drive/1gVCpIA3t0h05lPo670hp5i9jZxXRnsN9?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1gVCpIA3t0h05lPo670hp5i9jZxXRnsN9?usp=sharing</a></p> <p>The following below is a simplified version of my code</p> <pre><code>import re import glob import requests import time countries = [&quot;us&quot;, &quot;dz&quot;, &quot;ao&quot;, &quot;ai&quot;, &quot;ag&quot;] # proxy. get one from https://spys.one/en/ of type HTTP (not HTTPS) proxy = { # &quot;https&quot;: 'http://95.154.76.20:3128 ', &quot;http&quot;: 'http://66.11.117.253:9999' } failed_try = 0 def requesturl(url): global failed_try global proxy headers = { 'User-Agent': ('Mozilla/6.0 (Windows NT 10.0; Win64; x64; rv:61.0) ' 'Gecko/20100101 Firefox/61.0'), 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', &quot;Content-Type&quot;: &quot;application/json&quot; } while True: response = requests.get(url, headers=headers, proxies=proxy) if response.status_code == 403: print(response, &quot; Retrying request...&quot;) failed_try += 1 if failed_try &gt;= 5: time.sleep(2) continue failed_try = 0 return response # downloads reviews by county def download(country): global total_review_count # list of target genres genres = { &quot;action&quot;: &quot;7001&quot;, &quot;strategy&quot;: &quot;7017&quot;, &quot;sports&quot;: &quot;7016&quot;, } # shows current country progress country_pos = f&quot;{countries.index(country)+1}/{len(countries)}&quot; print(f&quot;\n\n\n================Scraping Country {country_pos}================&quot;) # iterate through mode of payment for payment in [&quot;topfreeapplications&quot;, &quot;toppaidapplications&quot;]: print(f&quot;\n\n================Scraping {payment} [{country_pos}]================&quot;) # loop through every genre for genre in genres: print(f&quot;\n Getting {genre} list...&quot;) # get list of games per genre and type of payment genre_link = f&quot;https://itunes.apple.com/{country}/rss/{payment}/limit=200/genre={genres[genre]}/json&quot; dict_genre_resp = requesturl(genre_link) print(&quot;this&gt;&quot;, dict_genre_resp, genre_link) dict_genre_response = dict_genre_resp.json() # number o review_count = 0 # iterates through every game in a genre for game in dict_genre_response[&quot;feed&quot;][&quot;entry&quot;]: # get relevant data from response game_id = game[&quot;id&quot;][&quot;attributes&quot;][&quot;im:id&quot;] game_name = game[&quot;title&quot;]['label'] # Loop through the game's reviews per pages for n in range(1,10+1): # get review reviews_response = requesturl(f&quot;https://itunes.apple.com/us/rss/customerreviews/page={n}/id={game_id}/sortBy=mostRecent/json&quot;) print(game_name, reviews_response) print() def main(): for country in countries: download(country) if __name__ == &quot;__main__&quot;: main()``` </code></pre>
<python><python-requests><google-colaboratory>
2023-01-07 02:46:51
1
378
Aeiddius
75,037,616
115,102
Convert ctypes object to numpy array
<p>I’m trying to write a ctypes structure so that it can be easily converted into a numpy array and used for assignments. Here is a simple example that shows the issue:</p> <pre><code>from ctypes import Structure, c_double import numpy as np class Vec3d: x = 1 y = 2 z = 3 def __array__(self, dtype): return np.array([self.x, self.y, self.z]) class Vec3dS(Structure): _fields_ = [(&quot;x&quot;, c_double), (&quot;y&quot;, c_double), (&quot;z&quot;, c_double)] def __array__(self, dtype): return np.array([self.x, self.y, self.z]) v = Vec3d() vs = Vec3dS(1,2,3) n = np.zeros((2,3)) n[0] = v n[1] = vs </code></pre> <p>The first assignment <code>n[0]=v</code> works but not the second one <code>n[1]=vs</code>. Numpy seems to be able to convert <code>v</code> to an numpy array but the assignment ultimately fails because the array has the wrong dtype:</p> <pre><code>TypeError: Cannot cast array data from dtype([('x', '&lt;f8'), ('y', '&lt;f8'), ('z', '&lt;f8')]) to dtype('float64'). </code></pre> <p>It’s the same dtype as if I were using</p> <pre><code>np.array(vs) </code></pre> <p>Why does implementing <code>__array__</code> (I also tried <code>__array_interface__</code>) not work when using a <code>ctypes.Structure</code>? How do I have to modify the <code>Vec3dS</code> class to give numpy a hint on how to convert it to a numpy array with the right dtype and values?</p> <p><strong>Edit</strong>: I suspect <code>ctypes.Structure</code> implements PEP 3118 which takes precedence over <code>__array__</code>. Is it possible to overwrite this from the python side?</p>
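<p>While the <code>__array__</code>/buffer-protocol question stands, one hedged workaround is to reinterpret the structure's raw bytes as three <code>float64</code> values, which sidesteps the record dtype that the buffer protocol exposes:</p>
<pre><code>import numpy as np
from ctypes import Structure, c_double

class Vec3dS(Structure):
    _fields_ = [('x', c_double), ('y', c_double), ('z', c_double)]

vs = Vec3dS(1, 2, 3)

# View the struct's memory as plain float64 instead of a structured dtype
vec = np.frombuffer(bytes(vs), dtype=np.float64)   # array([1., 2., 3.])

n = np.zeros((2, 3))
n[1] = vec                                         # assignment now succeeds
</code></pre>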
<python><numpy>
2023-01-07 01:53:17
2
6,523
hanno
75,037,544
12,350,966
*** pyarrow.lib.ArrowInvalid: CSV parse error: Empty CSV file or block: cannot infer number of columns
<p>Trying the new pyarrow, this gives error:</p> <pre><code>data= pd.read_csv(path,sep='\t',engine=&quot;pyarrow&quot;) *** pyarrow.lib.ArrowInvalid: CSV parse error: Empty CSV file or block: cannot infer number of columns </code></pre> <p>But this works:</p> <pre><code>data= pd.read_csv(path,sep='\t') </code></pre>
<python><pandas><pyarrow>
2023-01-07 01:37:43
0
740
curious
75,037,364
954,835
Unable to access notion comments via API/python
<p>I'm trying to read the comments on a database entry in notion but I can't figure out how I need to make the request.</p> <pre><code>import requests _url = 'https://api.notion.com/v1/comments' _headers = {&quot;Authorization&quot;: _auth, &quot;Notion-Version&quot;: &quot;2021-08-16&quot;} _data = {'block_id': _page_id} _results = requests.post(_url, headers=_headers, data=json.dumps(_data)) _data = _results.json() </code></pre> <p>These results I get back are something like this:</p> <pre><code>{u'code': u'validation_error', u'message': u'body failed validation. Fix one:\nbody.parent should be defined, instead was `undefined`.\nbody.discussion_id should be defined, instead was `undefined`.', u'object': u'error', u'status': 400} </code></pre> <p>I'm trying to follow the docs here <a href="https://developers.notion.com/reference/retrieve-a-comment" rel="nofollow noreferrer">https://developers.notion.com/reference/retrieve-a-comment</a> but I'm not sure how it translates into python.</p> <p>Has anyone managed to do this?</p>
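<p>For what it is worth as a sketch: listing comments in the Notion API is a GET with <code>block_id</code> as a query parameter, while a POST body with <code>parent</code>/<code>discussion_id</code> is for creating comments, which matches the validation error above. The newer <code>Notion-Version</code> value below is an assumption, since the comments endpoints postdate 2021-08-16.</p>
<pre><code>import requests

_url = 'https://api.notion.com/v1/comments'
_headers = {'Authorization': _auth, 'Notion-Version': '2022-06-28'}

# Retrieval: block_id goes in the query string, not in a JSON body
_results = requests.get(_url, headers=_headers, params={'block_id': _page_id})
_data = _results.json()
</code></pre>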
<python><notion-api><notion>
2023-01-07 00:47:27
1
712
ninhenzo64
75,037,336
7,826,852
Changing value of a value in a dictionary within a list within a dictionary
<p>I have a json like:</p> <pre><code>pd = { &quot;RP&quot;: [ { &quot;Name&quot;: &quot;PD&quot;, &quot;Value&quot;: &quot;qwe&quot; }, { &quot;Name&quot;: &quot;qwe&quot;, &quot;Value&quot;: &quot;change&quot; } ], &quot;RFN&quot;: [ &quot;All&quot; ], &quot;RIT&quot;: [ { &quot;ID&quot;: &quot;All&quot;, &quot;IDT&quot;: &quot;All&quot; } ] } </code></pre> <p>I am trying to change the value <code>change</code> to <code>changed</code>. This is a dictionary within a list which is within another dictionary. Is there a better/ more efficient/pythonic way to do this than what I did below:</p> <pre><code>for key, value in pd.items(): ls = pd[key] for d in ls: if type(d) == dict: for k,v in d.items(): if v == 'change': pd[key][ls.index(d)][k] = &quot;changed&quot; </code></pre> <p>This seems pretty inefficient due to the amount of times I am parsing through the data.</p>
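<p>A slightly flatter sketch with the same behaviour: iterate over the values directly and mutate each inner dictionary in place, which avoids the <code>list.index()</code> lookup and the repeated re-indexing (note that naming the outer dict <code>pd</code> also shadows the usual pandas alias):</p>
<pre><code>for value in pd.values():
    for entry in value:
        if isinstance(entry, dict):
            for k, v in entry.items():
                if v == 'change':
                    entry[k] = 'changed'   # mutates the same dict object held inside pd
</code></pre>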
<python>
2023-01-07 00:41:32
1
927
qwerty
75,037,330
4,348,534
requests.get() not completing with Tiktok user profile
<p>So, basically, it seems that <code>requests.get(url)</code> can't complete with TikTok user profile URLs:</p> <pre><code>import requests url = &quot;http://tiktok.com/@malopedia&quot; rep = requests.get(url) #&lt;= will never complete </code></pre> <p>As I don't get any error message, I have no idea what's going on. Why is it not completing? How do I get it to complete?</p>
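<p>As a hedged first diagnostic step, a timeout plus a browser-like <code>User-Agent</code> makes the request either finish or fail loudly instead of hanging (the header value is only illustrative):</p>
<pre><code>import requests

url = 'http://tiktok.com/@malopedia'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

# timeout makes requests raise an exception instead of waiting forever
rep = requests.get(url, headers=headers, timeout=10)
print(rep.status_code)
</code></pre>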
<python><python-requests>
2023-01-07 00:40:13
1
4,297
François M.
75,037,252
11,614,319
socket.gaierror: [Errno 11001] getaddrinfo failed when sending gmail email via Python
<p>I'm on Windows and using Python 3.9.10 and I have trouble sending an email via google smtp.</p> <p>My code is :</p> <pre class="lang-py prettyprint-override"><code>import smtplib from email.mime.text import MIMEText def send_email(host, port, subject, msg, sender, recipients, password): msg = MIMEText(msg) msg['Subject'] = subject msg['From'] = sender msg['To'] = ', '.join(recipients) smtp_server = smtplib.SMTP_SSL(host, port) smtp_server.login(sender, password) smtp_server.sendmail(sender, recipients, msg.as_string()) smtp_server.quit() def main(): host = &quot;smtp.gmail.com&quot; port = 465 user = &lt;my email&gt; pwd = &lt;google app password&gt; subject = &quot;Test email&quot; msg = &quot;Hello world&quot; sender = user recipients = [user] send_email(host, port, subject, msg, sender, recipients, pwd) if __name__ == '__main__': main() </code></pre> <p>What I don't understand is I keep getting the error <code>socket.gaierror: [Errno 11001] getaddrinfo failed</code>:</p> <pre><code>Traceback (most recent call last): File &quot;%project_location%\send_email_google.py&quot;, line 37, in &lt;module&gt; main() File &quot;%project_location%\send_email_google.py&quot;, line 33, in main send_email(host, port, subject, msg, sender, recipients, pwd) File &quot;%project_location%\send_email_google.py&quot;, line 13, in send_email smtp_server = smtplib.SMTP_SSL(host, port) File &quot;%Python%\lib\smtplib.py&quot;, line 1050, in __init__ SMTP.__init__(self, host, port, local_hostname, timeout, File &quot;%Python%\lib\smtplib.py&quot;, line 255, in __init__ (code, msg) = self.connect(host, port) File &quot;%Python%\lib\smtplib.py&quot;, line 341, in connect self.sock = self._get_socket(host, port, self.timeout) File &quot;%Python%\lib\smtplib.py&quot;, line 1056, in _get_socket new_socket = super()._get_socket(host, port, timeout) File &quot;%Python%\lib\smtplib.py&quot;, line 312, in _get_socket return socket.create_connection((host, port), timeout, File &quot;%Python%\lib\socket.py&quot;, line 823, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM): File &quot;%Python%\lib\socket.py&quot;, line 954, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): socket.gaierror: [Errno 11001] getaddrinfo failed Process finished with exit code 1 </code></pre> <p>I'm not using a VPN, proxies or anything.</p> <p>I have checked, I have the correct host and the correct port for Gmail. I'm using the google app password thing. I just don't get why it won't work.</p> <p>Thanks!</p>
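<p>Since <code>getaddrinfo</code> is the DNS-resolution step that runs before any SMTP traffic, a small check outside smtplib can confirm whether the problem is name resolution rather than Gmail or the code:</p>
<pre><code>import socket

# If this also raises socket.gaierror, the issue is DNS/network, not smtplib
print(socket.getaddrinfo('smtp.gmail.com', 465))
</code></pre>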
<python><smtp><gmail>
2023-01-07 00:20:53
1
362
gee3107
75,037,137
7,714,681
Is it possible to set generic legend next to four subplots matplotlib?
<p>I have the following code:</p> <pre><code>x1 = np.linspace(0, 5, 10) y1 = x1 + np.random.randn(10) y2 = x1 + np.random.randn(10) x2 = np.linspace(0, 5, 10) y3 = x2 + np.random.randn(10) y4 = x2 + np.random.randn(10) x3 = np.linspace(0, 5, 10) y5 = x3 + np.random.randn(10) y6 = x3 + np.random.randn(10) x4 = np.linspace(0, 5, 10) y7 = x4 + np.random.randn(10) y8 = x4 + np.random.randn(10) # Set up a figure with 2 rows and 2 columns of subplots fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2) x = [i for i in range(10,101,10)] # Plot multiple line charts on each subplot l1, = ax1.plot(x1, y1, 'o-') l2, = ax1.plot(x1, y2, '^-') ax1.legend((l1, l2), ('Line 1', 'Line 2')) l3, = ax2.plot(x2, y3, 's-') l4, = ax2.plot(x2, y4, 'd-') ax2.legend((l3, l4), ('Line 3', 'Line 4')) l5, = ax3.plot(x3, y5, 'x-') l6, = ax3.plot(x3, y6, '+-') ax3.legend((l5, l6), ('Line 5', 'Line 6')) l7, = ax4.plot(x4, y7, '*-') l8, = ax4.plot(x4, y8, 'p-') ax4.legend((l7, l8), ('Line 7', 'Line 8')) # Set the x- and y-limits and labels for each subplot ax1.set_xlim([0, 5]) ax1.set_ylim([0, 10]) ax1.set_xlabel('X1') ax1.set_ylabel('Y1') ax1.set_title('Subplot 1') ax2.set_xlim([0, 5]) ax2.set_ylim([0, 10]) ax2.set_xlabel('X2') ax2.set_ylabel('Y2') ax2.set_title('Subplot 2') ax3.set_xlim([0, 5]) ax3.set_ylim([0, 10]) ax3.set_xlabel('X3') ax3.set_ylabel('Y3') </code></pre> <p>Which yields:</p> <p><a href="https://i.sstatic.net/Wz2v5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wz2v5.png" alt="enter image description here" /></a></p> <p>With my actual datasets, the lines with the same color are the same. Hence, I only need one legend on the side of the four subplots. How can I do this?</p>
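<p>One possible sketch, reusing the arrays from the question: skip the per-axes <code>ax.legend()</code> calls and register a single figure-level legend from one pair of handles, since the same-styled lines represent the same series in every subplot.</p>
<pre><code>fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2)

l1, = ax1.plot(x1, y1, 'o-')
l2, = ax1.plot(x1, y2, '^-')
# ... plot the remaining subplots the same way, without calling ax.legend() ...

# One legend for the whole figure, placed to the right of the subplots
fig.legend((l1, l2), ('Series A', 'Series B'),
           loc='center left', bbox_to_anchor=(1.0, 0.5))
fig.tight_layout()
</code></pre>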
<python><matplotlib><legend><subplot>
2023-01-06 23:54:09
0
1,752
Emil
75,037,007
14,593,213
"ImportError: DLL load failed while importing cv2" but "Requirement already satisfied"
<p>I'm getting this error while trying to import the cv2 module in an Anaconda virtual environment:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;C:\anaconda3\envs\venv-1\lib\site-packages\cv2\__init__.py&quot;, line 181, in &lt;module&gt; bootstrap() File &quot;C:\anaconda3\envs\venv-1\lib\site-packages\cv2\__init__.py&quot;, line 153, in bootstrap native_module = importlib.import_module(&quot;cv2&quot;) File &quot;C:\anaconda3\envs\venv-1\lib\importlib\__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) ImportError: DLL load failed while importing cv2: Não foi possível encontrar o módulo especificado. </code></pre> <p>(The localized last line translates to: &quot;The specified module could not be found.&quot;)</p> <p>But opencv-python is on the package list when I run <code>pip list</code>, and when I run <code>pip install opencv-python</code> I get this message:</p> <pre><code>Requirement already satisfied: opencv-python in c:\anaconda3\envs\venv-1\lib\site-packages (4.7.0.68) Requirement already satisfied: numpy&gt;=1.17.0 in c:\anaconda3\envs\venv-1\lib\site-packages (from opencv-python) (1.23.5) </code></pre> <p>When I try to import it in the base environment, it works fine.</p>
<python><opencv><anaconda>
2023-01-06 23:26:09
1
355
Davi A. Sampaio
75,036,877
8,971,218
Assign a value to a Pandas DataFrame column, based on multiple conditions, fastest ways?
<p>I am aware there are some other similar questions but I haven't found a <code>timeit</code> test with a fairly large dataset. I have a ~1.6M rows DF , and I want to use the fastest way to assign a column ['stag'] a 0-1 value depending on the index weekday and hour.</p> <p>Is there a faster approach I am missing ? suggestions ?</p> <pre><code>import timeit import statistics ... ... df_backtest['weekday'] = df_backtest.index.weekday def approach_1(): df_backtest[&quot;stag&quot;]=1 df_backtest.loc[(df_backtest.index.hour&gt;=2) &amp; (df_backtest.index.hour&lt;=10),&quot;stag&quot; ]=0 df_backtest.loc[df_backtest.index.strftime('%w')==6,&quot;stag&quot; ]=0 df_backtest.loc[df_backtest.index.strftime('%w')==0,&quot;stag&quot; ]=0 def approach_2(): df_backtest['stag'] = 1 df_backtest.loc[df_backtest.index.hour.isin(range(2, 11)), 'stag'] = 0 df_backtest.loc[df_backtest.index.strftime('%w').isin(['6', '0']), 'stag'] = 0 def approach_3(): df_backtest['stag'] = 1 df_backtest.loc[df_backtest.index.hour.isin(range(2, 11)), 'stag'] = 0 df_backtest.loc[df_backtest['weekday'].isin(['6', '0']), 'stag'] = 0 def approach_4(): df_backtest['stag'] = 1 df_backtest['stag'] = df_backtest['stag'].where((df_backtest.index.hour &lt; 2) | (df_backtest.index.hour &gt; 10)) df_backtest['stag'] = df_backtest['stag'].where(~df_backtest['weekday'].isin(['6', '0'])) num_repeats = 10 num_loops = 5 print(f'Approach 1: {statistics.mean(timeit.repeat(approach_1, number=num_loops, repeat=num_repeats)) / num_loops} seconds per loop') print(f'Approach 2: {statistics.mean(timeit.repeat(approach_2, number=num_loops, repeat=num_repeats)) / num_loops} seconds per loop') print(f'Approach 3: {statistics.mean(timeit.repeat(approach_3, number=num_loops, repeat=num_repeats)) / num_loops} seconds per loop') print(f'Approach 4: {statistics.mean(timeit.repeat(approach_4, number=num_loops, repeat=num_repeats)) / num_loops} seconds per loop') print('Shape of DF:',df_backtest.shape) </code></pre> <p>output:</p> <pre><code>Approach 1: 4.617 seconds per loop Approach 2: 2.256 seconds per loop Approach 3: 0.087 seconds per loop Approach 4: 0.106 seconds per loop Shape of DF: (1605144, 7) </code></pre> <p>Hence, it seems <code>approach_3</code> is the fastest, I assume because it does not use the overhead of 'where' nor managing strings. Any other approach is welcome. Thanks</p>
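<p>One more candidate that may be worth adding to the same timing harness: building the whole column in one vectorized expression instead of assigning 1 and then overwriting slices. Note, as an observation rather than a certainty, that <code>DatetimeIndex.weekday</code> yields integers (Saturday=5, Sunday=6), so comparing the <code>weekday</code> column against the strings <code>'6'</code>/<code>'0'</code> used with <code>strftime('%w')</code> may never match; the sketch below assumes the integer form.</p>
<pre><code>import numpy as np

def approach_5():
    hours = df_backtest.index.hour
    night = (hours &gt;= 2) &amp; (hours &lt;= 10)
    weekend = df_backtest['weekday'].isin([5, 6])   # Sat/Sun when weekday holds integers
    df_backtest['stag'] = np.where(night | weekend, 0, 1)
</code></pre>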
<python><pandas><timeit>
2023-01-06 23:00:56
0
955
Lorenzo Bassetti
75,036,861
8,795,358
How can I extract value of variable from script element in Scrapy
<p>I need to extract some data from a website. I found that everything I need exists in a <code>&lt;script&gt;</code> element, so I extracted it with this command:</p> <pre><code>script = response.css('[id=&quot;server-side-container&quot;] script::text').get() </code></pre> <p>And this is the value of <code>script</code>:</p> <pre><code> window.bottomBlockAreaHtml = ''; ... window.searchQuery = ''; window.searchResult = { &quot;stats&quot;: {...}, &quot;products&quot;: {...}, ... }; window.routedFilter = ''; ... window.searchContent = ''; </code></pre> <p>What is the best way to get the value of <code>&quot;products&quot;</code> in my python code?</p>
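<p>Since the assignment in the sample looks JSON-shaped, one hedged route is to cut the <code>window.searchResult = ... ;</code> assignment out with a regular expression and hand it to <code>json.loads</code>; this only holds if the embedded object really is valid JSON with no JavaScript-only syntax:</p>
<pre><code>import json
import re

match = re.search(r'window\.searchResult\s*=\s*(\{.*?\})\s*;', script, re.DOTALL)
if match:
    search_result = json.loads(match.group(1))
    products = search_result['products']
</code></pre>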
<python><regex><web-scraping><xpath><scrapy>
2023-01-06 22:59:08
1
359
Tanhaeirad
75,036,841
4,437,631
unittest directory structure - cannot import src code
<p>I have the following folder structure in my project:</p> <pre><code>my-project src/ __init__.py script.py test/ __init__.py test_script.py </code></pre> <p>Ideally I want to have a separate folder where all the unit tests go. My <code>test_script.py</code> looks something like this:</p> <pre><code>from src.script import my_object class TestClass(unittest.TestCase): def test_script_object(self): # unit test here pass if __name__ == 'main': unittest.main() </code></pre> <p>When I try to run the script (using <code>python test_script.py</code>) I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;test_script.py&quot;, line 4, in &lt;module&gt; from src.script import my_object ModuleNotFoundError: No module named 'src' </code></pre> <p>I was following the instructions <a href="https://stackoverflow.com/questions/1896918/running-unittest-with-typical-test-directory-structure">from this other thread</a>, and I even tried appending <code>src</code> to the sys path (which forces me to change how I do imports in the rest of the project). When I'm not trying to append to the sys path, both of my <code>__init__.py</code> files are empty.</p> <p>I am using python 3.8.</p> <p>Does anyone have any suggestions? I'm new to unit testing in python, so maybe there is a better structure or other conventions I'm not aware of. Thanks in advance for your help!</p>
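<p>A hedged workaround for running the file directly: put the project root (the folder containing <code>src/</code>) on <code>sys.path</code> before the import. Alternatively, running <code>python -m unittest discover</code> from the <code>my-project</code> directory usually makes the <code>src</code> package importable without any path tweaking.</p>
<pre><code># test/test_script.py
import os
import sys
import unittest

# Make the project root importable when this file is executed directly
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

from src.script import my_object
</code></pre>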
<python><python-3.x><python-unittest>
2023-01-06 22:55:58
1
714
ellen
75,036,613
9,352,077
Automatically use subclass type in method signature
<p>I have a parent class with many subclasses in Python 3. Currently, the hierarchy looks something like this:</p> <pre class="lang-py prettyprint-override"><code>class Parent: @classmethod def check(cls, obj: &quot;Parent&quot;): pass class Child1(Parent): def __init__(self, x): self.x = x @classmethod def check(cls, obj: Parent) -&gt; &quot;Child1&quot;: if cls == obj.__class__: return obj else: raise TypeError(&quot;Bad type received.&quot;) class Child2(Parent): def __init__(self, y): self.y = y @classmethod def check(cls, obj: Parent) -&gt; &quot;Child2&quot;: if cls == obj.__class__: return obj else: raise TypeError(&quot;Bad type received.&quot;) ... many more children here ... </code></pre> <p>And then there is another hierarchy that uses these classes:</p> <pre class="lang-py prettyprint-override"><code>from abc import abstractmethod, ABC class Runnable(ABC): @abstractmethod def run(self) -&gt; Parent: pass class Thing1(Runnable): def run(self) -&gt; Parent: ... do a thing that produces a Child1 ... class Thing2(Runnable): def run(self) -&gt; Parent: ... do a thing that produces a Child2 ... </code></pre> <p>There are places where I call <code>Thing1.run()</code> and need to access its field <code>x</code>. Python allows this, but it is not type-safe. The purpose of <code>check()</code> is to be a kind of assert and cast, so that using <code>Child1.check(Thing.run()).x</code> is type-safe but can raise an error.</p> <p>But as you can see, <code>Child1.check()</code> and <code>Child2.check()</code> have an identical implementation; the only thing that changes is their return type. I have many such child classes, so I have repeated implementations for no good reason. <strong>Is there a way to write the following in actual Python, so that duplicating the implementations is no longer needed?</strong></p> <pre class="lang-py prettyprint-override"><code>class Parent: @classmethod def check(cls, obj: Parent) -&gt; cls: # &lt;--- This return type is not allowed in real Python if cls == obj.__class__: return obj else: raise TypeError(&quot;Bad type received.&quot;) </code></pre>
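<p>A sketch of the usual typing-module answer: bind a <code>TypeVar</code> to <code>Parent</code> so the classmethod's return type follows whichever subclass it is called on, letting every child inherit the single implementation (on Python 3.11+, <code>typing.Self</code> offers the same effect with less machinery):</p>
<pre><code>from typing import Type, TypeVar, cast

T = TypeVar('T', bound='Parent')

class Parent:
    @classmethod
    def check(cls: Type[T], obj: 'Parent') -&gt; T:
        if cls is type(obj):
            # the exact-type check happens at runtime; cast() just informs the type checker
            return cast(T, obj)
        raise TypeError('Bad type received.')
</code></pre>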
<python><oop><casting><signature><type-safety>
2023-01-06 22:23:51
1
415
Mew
75,036,423
14,141,126
DataFrame returns Value Error after adding auto index
<p>This script needs to query the DC server for events. Since this is done live, each time the server is queried, it returns query results of varying lengths. The log file is long and messy, as most logs are. I need to filter only the event names and their codes and then create a DataFrame. Additionally, I need to add a third column that counts the number of times each event took place. I've done most of it but can't figure out how to fix the error I'm getting.</p> <p>After doing all the filtering from Elasticsearch, I get two lists - action and code - which I have emulated here.</p> <pre><code>action_list = ['logged-out', 'logged-out', 'logged-out', 'Directory Service Access', 'Directory Service Access', 'Directory Service Access', 'logged-out', 'logged-out', 'Directory Service Access', 'created-process', 'created-process'] code_list = ['4634', '4634', '4634', '4662', '4662', '4662', '4634', '4634', '4662','4688'] </code></pre> <p>I then created a list that contains only the codes that need to be filtered out.</p> <pre><code>event_code_list = ['4662', '4688'] </code></pre> <p>My script is as follows:</p> <pre><code>import pandas as pd from collections import Counter #Create a dict that combines action and code lists2dict = {} lists2dict = dict(zip(action_list,code_list)) # print(lists2dict) #Filter only wanted eventss filtered_events = {k: v for k, v in lists2dict.items() if v in event_code_list} # print(filtered_events) index = 1 * pd.RangeIndex(start=1, stop=2) #add automatic index to DataFrame df = pd.DataFrame(filtered_events,index=index)#Create DataFrame from filtered events #Create Auto Index count = Counter(df) action_count = dict(Counter(count)) action_count_values = action_count.values() # print(action_count_values) #Convert Columns to Rows and Add Index new_df = df.melt(var_name=&quot;Event&quot;,value_name=&quot;Code&quot;) new_df['Count'] = action_count_values print(new_df) </code></pre> <p>Up until this point, everything works as it should. The problem is what comes next. If there are no events, the script outputs an empty DataFrame. This works fine. However, if there are events, then we should see the events, the codes, and the number of times each event occurred. The problem is that it always outputs 1. How can I fix this? I'm sure it's something ridiculous that I'm missing.</p> <pre><code>#If no alerts, create empty DataFrame if new_df.empty: empty_df = pd.DataFrame(columns=['Event','Code','Count']) empty_df['Event'] = ['-'] empty_df['Code'] = ['-'] empty_df['Count'] = ['-'] empty_df.to_html() html = empty_df.to_html() with open('alerts.html', 'w') as f: f.write(html) else: #else, output alerts + codes + count new_df.to_html() html = new_df.to_html() with open('alerts.html', 'w') as f: f.write(html) </code></pre> <p>Any help is appreciated.</p>
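<p>Two observations, offered tentatively: <code>dict(zip(action_list, code_list))</code> keeps only one entry per action name, so the per-event counts are already lost before the DataFrame stage, and <code>Counter(df)</code> iterates over column labels, which is why every count comes out as 1. A sketch that counts the (event, code) pairs directly from the source lists:</p>
<pre><code>import pandas as pd

# Pair each action with its code, keep only the wanted codes, then count identical pairs
pairs = [(a, c) for a, c in zip(action_list, code_list) if c in event_code_list]

new_df = (pd.DataFrame(pairs, columns=['Event', 'Code'])
            .value_counts()
            .reset_index(name='Count'))
</code></pre>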
<python><pandas>
2023-01-06 21:55:15
1
959
Robin Sage
75,036,412
4,714,742
No version is set for command pg_config
<p>I am attempting to install psycopg2 for use within a project. I am also using asdf in order to manage my python versions. I have tried doing this inside of a venv but I get the same error so to keep things simple let's just say I want to install it outside of a venv.</p> <pre><code>❯ cat .tool-versions nodejs 15.9.0 python 3.10.8 postgres 11.8 ❯ which pip ~/.asdf/shims/pip ❯ which pg_config ~/.asdf/shims/pg_config ❯ pg_config --version PostgreSQL 11.8 ❯ pip install psycopg2 Collecting psycopg2 Using cached psycopg2-2.9.5.tar.gz (384 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─&gt; [9 lines of output] ~/.asdf/installs/python/3.10.8/lib/python3.10/site-packages/setuptools/config/setupcfg.py:463: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. warnings.warn(msg, warning_class) running egg_info creating &lt;redacted_temp_folder&gt;/T/pip-pip-egg-info-3tgyoyym/psycopg2.egg-info writing &lt;redacted_temp_folder&gt;/T/pip-pip-egg-info-3tgyoyym/psycopg2.egg-info/PKG-INFO writing dependency_links to &lt;redacted_temp_folder&gt;/T/pip-pip-egg-info-3tgyoyym/psycopg2.egg-info/dependency_links.txt writing top-level names to &lt;redacted_temp_folder&gt;/T/pip-pip-egg-info-3tgyoyym/psycopg2.egg-info/top_level.txt writing manifest file '&lt;redacted_temp_folder&gt;/T/pip-pip-egg-info-3tgyoyym/psycopg2.egg-info/SOURCES.txt' Error: b'No version is set for command pg_config\n' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>Everything I can find seems to revolve around installing pg_config or postgresql or some other dependency but no one else seems to have encountered this exact error. The best I can figure is that somehow the process <code>python setup.py egg_info</code> is not using the same <code>$PATH</code> as the root command, maybe because it's working directory is outside of the scope of my .tool-versions? Any help is appreciated.</p>
<python><postgresql><homebrew><psycopg2><asdf>
2023-01-06 21:53:37
1
1,704
Jon McClung
75,036,296
3,654,588
Poetry/Meson: How to use meson as a build backend with Cython files to install a Python/Cython package
<p>I have a Python package that contains Cython code, similar to scikit-learn. I am trying to use poetry to do package management, but meson to handle the build process for Cython and possible c++ code. Before, when I was using the deprecated setuptools and distutils, I was able to do the following in scikit-learn:</p> <pre><code>pip install -e . </code></pre> <p>which built the Cython files and allowed them to be importable by sklearn's Python files. For example <code>from _criterion import Criterion</code> works, where <code>_criterion.pxd/pyx</code> is a Cython file that contains some variable named <code>Criterion</code>.</p> <p>Now I am using poetry and meson and things do not work.</p> <h2>My Goals to Use Poetry and Meson</h2> <p>I would ideally like to be able to just do <code>poetry install</code> (or <code>poetry build</code> and <code>poetry install</code>), which should build and install things by using Meson to build local Cython/C++ files and then use poetry to handle other pip installable Python packages.</p> <h2>My Question</h2> <p>How do I correctly use poetry with meson to build my Cython files and install them along with other relevant Python dependencies to install my package?</p> <h2>My Attempt using just Meson incorrectly Cython files to the wrong environment</h2> <p>I instantiated my meson.build files such that running <code>meson build</code> in my repo's root directory works:</p> <pre><code>The Meson build system Version: 0.63.2 Source dir: /Users/adam2392/Documents/scikit-tree Build dir: /Users/adam2392/Documents/scikit-tree/build Build type: native build Project name: scikit-tree Project version: 0.0.0.dev0 C compiler for the host machine: arm64-apple-darwin20.0.0-clang (clang 14.0.6 &quot;clang version 14.0.6&quot;) C linker for the host machine: arm64-apple-darwin20.0.0-clang ld64 609 C++ compiler for the host machine: arm64-apple-darwin20.0.0-clang++ (clang 14.0.6 &quot;clang version 14.0.6&quot;) C++ linker for the host machine: arm64-apple-darwin20.0.0-clang++ ld64 609 Host machine cpu family: aarch64 Host machine cpu: arm64 Library m found: YES Program cython found: YES (/Users/adam2392/miniforge3/envs/sktree/bin/cython) Program python3 found: YES (/Users/adam2392/miniforge3/envs/sktree/bin/python3) Message: /Users/adam2392/miniforge3/envs/sktree/bin/python3 Message: /opt/homebrew/lib/python3.9/site-packages/ Found pkg-config: /opt/homebrew/bin/pkg-config (0.29.2) Program _build_utils/cythoner.py found: YES (python /Users/adam2392/Documents/scikit-tree/sktree/_build_utils/cythoner.py) Build targets in project: 1 scikit-tree 0.0.0.dev0 User defined options backend: ninja Found ninja-1.11.1 at /opt/homebrew/bin/ninja </code></pre> <p>When I then run <code>meson install</code> within the <code>build/</code> directory:</p> <pre><code>(sktree) adam2392@Adams-MacBook-Pro build % meson install ninja: Entering directory `/Users/adam2392/Documents/scikit-tree/build' [3/3] Linking target sktree/tree/_unsup_criterion.cpython-39-darwin.so ld: warning: -pie being ignored. 
It is only used when linking a main executable Installing sktree/tree/_unsup_criterion.cpython-39-darwin.so to /opt/homebrew/lib/python3.9/site-packages/sktree/tree Installing /Users/adam2392/Documents/scikit-tree/sktree/__init__.py to /opt/homebrew/lib/python3.9/site-packages/sktree Installing /Users/adam2392/Documents/scikit-tree/sktree/_forest.py to /opt/homebrew/lib/python3.9/site-packages/sktree Installing /Users/adam2392/Documents/scikit-tree/sktree/tree/__init__.py to /opt/homebrew/lib/python3.9/site-packages/sktree/tree Installing /Users/adam2392/Documents/scikit-tree/sktree/tree/_classes.py to /opt/homebrew/lib/python3.9/site-packages/sktree/tree </code></pre> <p>which installs for example the Cython files into the incorrect Python environment (i.e. home-brew). I need it to be installed in my virtual environment.</p> <h2>My Attempt using Poetry doesn't work either</h2> <p>My <code>pyproject.toml</code> file has the following lines:</p> <pre><code>[build-system] build-backend = &quot;mesonpy&quot; requires = [ &quot;meson-python&gt;=0.11.0&quot;, # `wheel` is needed for non-isolated builds, given that `meson-python` # doesn't list it as a runtime requirement (at least in 0.10.0) # See https://github.com/FFY00/meson-python/blob/main/pyproject.toml#L4 &quot;wheel&quot;, &quot;setuptools&lt;=65.5&quot;, &quot;packaging&quot;, &quot;Cython&gt;=0.29.24&quot;, &quot;scikit-learn&quot;, ] </code></pre> <p>However, when I run <code>poetry install</code>, there are no errors, but:</p> <pre><code>python -c &quot;from sktree import tree&quot; Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Users/adam2392/Documents/scikit-tree/sktree/__init__.py&quot;, line 37, in &lt;module&gt; from . import tree File &quot;/Users/adam2392/Documents/scikit-tree/sktree/tree/__init__.py&quot;, line 1, in &lt;module&gt; from ._classes import UnsupervisedDecisionTree File &quot;/Users/adam2392/Documents/scikit-tree/sktree/tree/_classes.py&quot;, line 3, in &lt;module&gt; from ._unsup_criterion import UnsupervisedCriterion ModuleNotFoundError: No module named 'sktree.tree._unsup_criterion' </code></pre>
<python><cython><python-poetry><meson-build>
2023-01-06 21:37:55
0
1,302
ajl123
75,036,272
8,749,168
Why are different objects created when using globals in a file vs importing them?
<p>Below is a simple code example that may help to explain my question.</p> <p><strong>file_1.py</strong></p> <pre><code>from functools import lru_cache from file_2 import add_stuff, add_stats @lru_cache() def add(x, y): return x + y if __name__ == &quot;__main__&quot;: add(1, 2) add(1, 2) add(3, 4) print(add.cache_info) print(add.cache_info()) add_stuff(1, 2) add_stuff(3, 4) add_stats() </code></pre> <p><strong>file_2.py</strong></p> <pre><code>def add_stuff(x, y): from file_1 import add add(x, y) def add_stats(): from file_1 import add print(add.cache_info) print(add.cache_info()) </code></pre> <p>And the output looks like this:</p> <pre><code>&lt;built-in method cache_info of functools._lru_cache_wrapper object at 0x017E9E48&gt; CacheInfo(hits=1, misses=2, maxsize=128, currsize=2) &lt;built-in method cache_info of functools._lru_cache_wrapper object at 0x017E9D40&gt; CacheInfo(hits=0, misses=2, maxsize=128, currsize=2) </code></pre> <p>When I use the function inside of the file it was defined in, the function object is different from when another file imports it. Which means that for things like lru_cache, if you didn't realize this, you could be populating two caches inside of your process/threads if you don't keep the cached functions inside of a different file from where they are used.</p> <p>My question is, is this a python gotcha to look out for? Or is there documentation somewhere that I just never read that explains this more in depth? I looked at the lru_cache documentation, and this was not called out there as anything to be aware of.</p>
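<p>A small diagnostic sketch that makes the two objects visible: the script executed as <code>__main__</code> and the module imported under the name <code>file_1</code> are two separate module objects, each carrying its own <code>lru_cache</code>-wrapped <code>add</code>. The usual way around it is to keep the cached function in a module that is only ever imported (never run as the entry script), or to import it the same way everywhere.</p>
<pre><code># diagnostic sketch: append to file_1.py's __main__ block, after add_stuff() has run
import sys

print(add.__module__)                 # '__main__' -- the wrapper created by running the script
imported = sys.modules['file_1']      # the second module object created by the import in file_2
print(imported.add is add)            # False: two wrappers, hence two separate caches
</code></pre>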
<python>
2023-01-06 21:34:51
1
1,088
pythonweb
75,036,229
4,586,180
does pydev support python 3.5 Type Hints?
<p>I use eclipse/pydev.</p> <p>I recently learned about python 3.5 support for type hints <a href="https://medium.com/techtofreedom/8-levels-of-using-type-hints-in-python-a6717e28f8fd" rel="nofollow noreferrer">https://medium.com/techtofreedom/8-levels-of-using-type-hints-in-python-a6717e28f8fd</a></p> <p>I found <a href="https://www.pydev.org/manual_adv_type_hints.html" rel="nofollow noreferrer">https://www.pydev.org/manual_adv_type_hints.html</a>. It looks like type support is done using sphinx and epydoc.</p> <p>Does anyone know of plans to support the python native type hint mechanism?</p> <p>Kind regards</p> <p>Andy</p>
<python><eclipse><pydev>
2023-01-06 21:28:52
1
968
AEDWIP
75,036,210
7,194,464
How to list all parquet files in s3 bucket folder via OPENROWSET (SQL Server)?
<p>I have a bucket (AWS) in a folder with 3 PARQUET files that are the same and have different names:</p> <p><img src="https://i.sstatic.net/8MFw4.png" alt="enter image description here" /></p> <p>I'm trying to create an EXTERNAL TABLE with the code below:</p> <pre><code>CREATE EXTERNAL TABLE tb_Test ( coluna_1 INT, coluna_2 VARCHAR(100) ) WITH ( LOCATION = '/testeParquet/**', DATA_SOURCE = s3_ds, FILE_FORMAT = ParquetFileFormat ); GO </code></pre> <p>when I try to read the External table I get this error message:</p> <blockquote> <p>Msg 16561, Level 16, State 1, Line 109<br /> External table 'db_S3.dbo.tb_Test' is not accessible because content of directory cannot be listed.</p> </blockquote> <p>If I inform the name of the file it creates and reads correctly.</p> <p>But I would like to create with all the files in the folder without having to inform file by file.</p>
<python><sql-server><amazon-web-services><openrowset>
2023-01-06 21:26:33
2
405
Clayton A. Santos
75,036,001
2,876,994
Error: Got unexpected extra arguments when using Click library
<p>I'm trying to use the Click library in Python to create a command line interface, but I keep getting the following error when I try to run my script:</p> <pre><code>Error: Got unexpected extra arguments (hello hello1) </code></pre> <p>Here is my code:</p> <pre><code>import click @click.group(name='script') def cli(): pass @cli.command() def hello1(): click.echo('Hello, World!') @cli.command() def hello2(): click.echo('Hola, Mundo!') @cli.command() @click.argument('function', type=click.Choice(['hello1', 'hello2'])) def hello(function): if function == 'hello1': hello1() elif function == 'hello2': hello2() if __name__ == '__main__': cli() </code></pre> <p>I'm trying to call the &quot;hello&quot; function with the argument &quot;hello1&quot; or &quot;hello2&quot;, but it's not working. Can anyone help me figure out what's going wrong?</p> <pre><code>python script.py hello hello1 </code></pre>
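<p>One detail that lines up with the reported message: calling <code>hello1()</code> inside another command invokes it as a standalone command-line program, so it re-parses <code>sys.argv</code> (which still contains <code>hello hello1</code>) and complains about the extra arguments. Click's supported way to call one command from another is <code>Context.invoke</code>; a sketch:</p>
<pre><code>import click

@click.group(name='script')
def cli():
    pass

@cli.command()
def hello1():
    click.echo('Hello, World!')

@cli.command()
def hello2():
    click.echo('Hola, Mundo!')

@cli.command()
@click.argument('function', type=click.Choice(['hello1', 'hello2']))
@click.pass_context
def hello(ctx, function):
    # ctx.invoke calls the other command's callback without re-parsing sys.argv
    ctx.invoke(hello1 if function == 'hello1' else hello2)

if __name__ == '__main__':
    cli()
</code></pre>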
<python><command-line-interface><python-click>
2023-01-06 20:54:49
1
1,552
Shinomoto Asakura
75,035,999
2,186,567
Scrapy to parse multiple URLs into a single json
<p>Currently using scrapy with multiple URLs on the start_urls parameter which I load like this:</p> <pre><code> with open('urllist.txt') as f: start_urls = f.read().splitlines() </code></pre> <p>Then, I parse it the page with this code:</p> <pre><code> def parse(self, response): yield { 'date': datetime.now().strftime(&quot;%d/%m/%Y %H:%M&quot;), 'item': response.xpath(&quot;//font[contains(@id, 'item')]/a/text()&quot;).get(), 'price': response.xpath(&quot;//font[contains(@id, 'price')]/a/text()&quot;).get(), } </code></pre> <p>It works fine, but the resulting json file looks like this (notice two lines per execution):</p> <pre><code>{&quot;date&quot;: &quot;05/01/2023 02:39:49&quot;, &quot;item&quot;: &quot;Item 1&quot;, &quot;price&quot;: &quot;6.80&quot;} {&quot;date&quot;: &quot;05/01/2023 02:39:49&quot;, &quot;item&quot;: &quot;Item 2&quot;, &quot;price&quot;: &quot;8.50&quot;} {&quot;date&quot;: &quot;06/01/2023 02:39:49&quot;, &quot;item&quot;: &quot;Item 1&quot;, &quot;price&quot;: &quot;6.70&quot;} {&quot;date&quot;: &quot;06/01/2023 02:39:49&quot;, &quot;item&quot;: &quot;Item 2&quot;, &quot;price&quot;: &quot;8.50&quot;} </code></pre> <p>My expectation was something like:</p> <pre><code>{&quot;scans&quot;: [ {&quot;date&quot;: &quot;05/01/2023 02:39:49&quot;, &quot;item&quot;: &quot;Item 1&quot;, &quot;price&quot;: &quot;6.80&quot;}, {&quot;date&quot;: &quot;05/01/2023 02:39:49&quot;, &quot;item&quot;: &quot;Item 2&quot;, &quot;price&quot;: &quot;8.50&quot;}, {&quot;date&quot;: &quot;06/01/2023 02:39:49&quot;, &quot;item&quot;: &quot;Item 1&quot;, &quot;price&quot;: &quot;6.70&quot;}, {&quot;date&quot;: &quot;06/01/2023 02:39:49&quot;, &quot;item&quot;: &quot;Item 2&quot;, &quot;price&quot;: &quot;8.50&quot;}]} </code></pre> <p>Should I post-process the results to add the &quot;scans&quot; level? I will later import this on a page and ajax doesn't like broken JSON files.</p>
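<p>If post-processing is acceptable, a hedged sketch that wraps line-delimited output into the desired <code>scans</code> shape after the crawl (assuming the spider writes JSON Lines to a file such as <code>items.jl</code>; depending on the feed settings, exporting with <code>-O results.json</code> already yields one valid JSON array, just without the wrapper key):</p>
<pre><code>import json

# Read the spider's JSON Lines output and wrap it under a single 'scans' key
with open('items.jl') as f:
    scans = [json.loads(line) for line in f if line.strip()]

with open('scans.json', 'w') as f:
    json.dump({'scans': scans}, f, indent=2)
</code></pre>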
<python><json><scrapy>
2023-01-06 20:54:32
0
8,178
douglaslps
75,035,806
8,260,088
re.findall() function python
<p>Can you please help me to understand the following line of the code:</p> <pre><code>import re a= re.findall('[А-Яа-я-\s]+', string) </code></pre> <p>I am a bit confused with the pattern that has to be found in the string. Particularly, a string should start with <code>A</code> and end with any string in-between <code>A</code> and <code>я</code>, should be separated by <code>-</code> and space, but what does the second term <code>Яа</code> stand for?</p>
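<p>For reference, the class <code>[А-Яа-я-\s]</code> lists four alternatives for a single character: the uppercase Cyrillic range <code>А-Я</code>, the lowercase range <code>а-я</code> (so the "second term" is just the lowercase letters), a literal hyphen, and whitespace; the trailing <code>+</code> then matches maximal runs of such characters (note that <code>Ё</code>/<code>ё</code> fall outside these ranges). A small demo, assuming this matches the intent of the original pattern:</p>
<pre><code>import re

text = 'Москва, Санкт-Петербург! hello'
print(re.findall(r'[А-Яа-я-\s]+', text))
# ['Москва', ' Санкт-Петербург', ' ']  -- runs of Cyrillic letters, hyphens and whitespace
</code></pre>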
<python><python-re>
2023-01-06 20:30:13
1
875
Alberto Alvarez
75,035,589
2,026,010
Panda rename rows after grouping by columns
<p>I've recently started to play around with Pandas in order to manipulate some data and I am now trying to anonymize a few columns after a <code>groupBy</code> to find unique occurrences for persons.</p> <p>For example, suppose the following DF:</p> <pre><code> First Name Last Name DOB 0 Bob One 28/05/1973 1 Bob One 28/05/1973 2 Ana Two 28/07/1991 3 Ana Two 28/07/1991 4 Ana Two 28/07/1991 5 Jim Three 07/01/1994 </code></pre> <p>I can easily find unique person by First Name, Last Name and DOB by using <code>df.groupby(['First Name', 'Last Name', 'DOB'])</code>.</p> <p>However, I'd like to apply a function to every unique combination that would transform those names to a known anonymized (incremental) version.</p> <pre><code> First Name Last Name DOB 0 F1 L1 28/05/1973 1 F1 L1 28/05/1973 2 F2 L2 28/07/1991 3 F2 L2 28/07/1991 4 F2 L2 28/07/1991 5 F3 L3 07/01/1994 </code></pre> <p>I've tried a few things with <code>transform</code> and <code>apply</code> functions of DF groupBy but with no lucky so far. How could I achieve this?</p>
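<p>One possible sketch: <code>groupby(...).ngroup()</code> assigns each unique (first name, last name, DOB) combination an incremental integer, which can then be formatted into the anonymized labels; <code>sort=False</code> keeps the numbering in order of first appearance, matching the expected output.</p>
<pre><code>group_id = df.groupby(['First Name', 'Last Name', 'DOB'], sort=False).ngroup() + 1

df['First Name'] = 'F' + group_id.astype(str)
df['Last Name'] = 'L' + group_id.astype(str)
</code></pre>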
<python><pandas><group-by>
2023-01-06 20:02:09
2
3,947
Felipe Mosso
75,035,567
12,126,272
How can I read and filter many parquet files with different column names without spending many hours
<p>I have a lot of parquet files, and I just need to take 3 columns from these files. Sometimes one of these columns can have a different name. I have this code, but it takes more than 3 hours to run, which is not good. I'm using PySpark.</p> <pre><code>df_list = [] # I iterate all paths from a df which contains all file paths that i need for index, row in df.iterrows(): path = adl_gen2_full_url(DATALAKE,FILESYSTEM,'/APPLICATION/'+row['Ingested_Path']) try: spark_df = spark.read.parquet(path) # Here i select just the columns that i need, one of this columns have different name spark_df = spark_df.select(row['Data_referencia'] \ ,'data_upload' \ ,'data_processamento' \ ) spark_df = spark_df.withColumn(&quot;nome_arquivo&quot;,F.lit(row['Nome_Arquivo'])) spark_df = spark_df.distinct() # Each file that i read i append on a list df_list.append(spark_df) print(&quot;\n&quot;) print(&quot;Sucesso &quot;, row['Nome_Arquivo']) print(&quot;\n&quot;) except requests.exceptions.RequestException as e: print(&quot;Connection refused&quot;) print(path) pass except Exception as e: print(&quot;Internal error&quot;, e) pass # In the end, i reduce that list in a unique dataframe dfs = reduce(DataFrame.unionAll, df_list) dfs = dfs.filter(F.col('data_referência') != 'NaT') </code></pre>
<python><for-loop><pyspark><parquet>
2023-01-06 19:59:11
1
467
Gizelly
75,035,337
7,209,497
How to ensure that .nonzero() returns one element tensor?
<p>[Edited to include the original source code]</p> <p>I try to run the code that I found here:<a href="https://colab.research.google.com/drive/1roZqqhsdpCXZr8kgV_Bx_ABVBPgea3lX?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1roZqqhsdpCXZr8kgV_Bx_ABVBPgea3lX?usp=sharing</a> (linked from: <a href="https://www.youtube.com/watch?v=-lz30by8-sU" rel="nofollow noreferrer">https://www.youtube.com/watch?v=-lz30by8-sU</a>)</p> <pre><code>!pip install transformers diffusers lpips accelerate from huggingface_hub import notebook_login notebook_login() import torch from transformers import CLIPTextModel, CLIPTokenizer from diffusers import AutoencoderKL, UNet2DConditionModel, LMSDiscreteScheduler from tqdm.auto import tqdm from torch import autocast from PIL import Image from matplotlib import pyplot as plt import numpy from torchvision import transforms as tfms # For video display: from IPython.display import HTML from base64 import b64encode # Set device torch_device = &quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot; # Load the autoencoder model which will be used to decode the latents into image space. vae = AutoencoderKL.from_pretrained(&quot;CompVis/stable-diffusion-v1-4&quot;, subfolder=&quot;vae&quot;, use_auth_token=True) # Load the tokenizer and text encoder to tokenize and encode the text. tokenizer = CLIPTokenizer.from_pretrained(&quot;openai/clip-vit-large-patch14&quot;) text_encoder = CLIPTextModel.from_pretrained(&quot;openai/clip-vit-large-patch14&quot;) # The UNet model for generating the latents. unet = UNet2DConditionModel.from_pretrained(&quot;CompVis/stable-diffusion-v1-4&quot;, subfolder=&quot;unet&quot;, use_auth_token=True) # The noise scheduler scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule=&quot;scaled_linear&quot;, num_train_timesteps=1000) # To the GPU we go! vae = vae.to(torch_device) text_encoder = text_encoder.to(torch_device) unet = unet.to(torch_device) from google.colab import drive drive.mount('/content/drive') prompt = [&quot;A digital illustration of a steampunk computer laboratory with clockwork machines, 4k, detailed, trending in artstation, fantasy vivid colors&quot;] height = 512 width = 768 num_inference_steps = 50 guidance_scale = 7.5 generator = torch.manual_seed(4) batch_size = 1 # Prep text text_input = tokenizer(prompt, padding=&quot;max_length&quot;, max_length=tokenizer.model_max_length, truncation=True, return_tensors=&quot;pt&quot;) with torch.no_grad(): text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] max_length = text_input.input_ids.shape[-1] uncond_input = tokenizer( [&quot;&quot;] * batch_size, padding=&quot;max_length&quot;, max_length=max_length, return_tensors=&quot;pt&quot; ) with torch.no_grad(): uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) # Prep Scheduler scheduler.set_timesteps(num_inference_steps) # Prep latents latents = torch.randn( (batch_size, unet.in_channels, height // 8, width // 8), generator=generator, ) latents = latents.to(torch_device) latents = latents * scheduler.sigmas[0] # Need to scale to match k # Loop with autocast(&quot;cuda&quot;): for i, t in tqdm(enumerate(scheduler.timesteps)): # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. 
latent_model_input = torch.cat([latents] * 2) sigma = scheduler.sigmas[i] latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5) # predict the noise residual with torch.no_grad(): noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings)[&quot;sample&quot;] # perform guidance noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -&gt; x_t-1 latents = scheduler.step(noise_pred, i, latents)[&quot;prev_sample&quot;] # scale and decode the image latents with vae latents = 1 / 0.18215 * latents with torch.no_grad(): image = vae.decode(latents) # Display image = (image / 2 + 0.5).clamp(0, 1) image = image.detach().cpu().permute(0, 2, 3, 1).numpy() images = (image * 255).round().astype(&quot;uint8&quot;) pil_images = [Image.fromarray(image) for image in images] pil_images[0] </code></pre> <p>However, I run into the following error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-36-0fa46b18e9c1&gt; in &lt;module&gt; 48 49 # compute the previous noisy sample x_t -&gt; x_t-1 ---&gt; 50 latents = scheduler.step(noise_pred, i, latents)[&quot;prev_sample&quot;] 51 52 # scale and decode the image latents with vae /usr/local/lib/python3.8/dist-packages/diffusers/schedulers/scheduling_lms_discrete.py in step(self, model_output, timestep, sample, order, return_dict) 216 timestep = timestep.to(self.timesteps.device) 217 --&gt; 218 step_index = (self.timesteps == timestep).nonzero().item() 219 sigma = self.sigmas[step_index] 220 ValueError: only one element tensors can be converted to Python scalars </code></pre> <p>This error occurs only during the 2nd iteration of the loop. The first iteration runs through smoothly.</p> <p>I printed the involved variables (noise_pred, i, latents) and their respective dimensions. They have the same dimensions during the first and second iteration.</p> <p>Since I run it on Colab, I don't have direct access to the underlying code in scheduling_lms_discrete.py</p> <p>What can I do to avoid this error? Has it something to do with the versioning of python or torch? (current version: python==3.8.16. torch==1.13.0+cu116)</p> <p>Thanks!</p>
<python><torch><stable-diffusion>
2023-01-06 19:28:44
1
314
Jan
75,035,171
5,495,385
Vectorizing a function for finding local minima and maxima in a 2D array with strict comparison
<p>I'm trying to improve the performance of a function that returns the local minima and maxima of an input 2D NumPy array. The function works as expected, but it is too slow for my use case. I'm wondering if it's possible to create a vectorized version of this function to improve its performance.</p> <p>Here is the formal definition for defining whether an element is a local minima (maxima): <a href="https://i.sstatic.net/vSSRF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vSSRF.png" alt="Equation 1" /></a></p> <p>where</p> <p><a href="https://i.sstatic.net/WRtLB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WRtLB.png" alt="Equation 2" /></a></p> <p><code>A=[a_m,n]</code> is the 2D matrix, <code>m</code> and <code>n</code> are the row and column respectively, <code>w_h</code> and <code>w_w</code> are the height and width of the sliding windows, respectively.</p> <p>I have tried using <code>skimage.morphology.local_minimum</code> and <code>skimage.morphology.local_maxima</code>, but they consider an element a minimum (maximum) if its value is less than or equal to (greater than or equal to) all of its neighbors.<br /> In my case, I need the function to consider an element a minimum (maximum) if it is strictly less than (greater than) all of its neighbors.</p> <p>The current implementation uses a sliding window approach with <code>numpy.lib.stride_tricks.sliding_window_view</code>, but the function does not necessarily have to use this approach.</p> <p>Here is my current implementation:</p> <pre><code>import numpy as np def get_local_extrema(array, window_size=(3, 3)): # Check if the window size is valid if not all(size % 2 == 1 and size &gt;= 3 for size in window_size): raise ValueError(&quot;Window size must be odd and &gt;= 3 in both dimensions.&quot;) # Create a map to store the local minima and maxima minima_map = np.zeros_like(array) maxima_map = np.zeros_like(array) # Save the shape and dtype of the original array for later original_size = array.shape original_dtype = array.dtype # Get the halved window size half_window_size = tuple(size // 2 for size in window_size) # Pad the array with NaN values to handle the edge cases padded_array = np.pad(array.astype(float), tuple((size, size) for size in half_window_size), mode='constant', constant_values=np.nan) # Generate all the sliding windows windows = np.lib.stride_tricks.sliding_window_view(padded_array, window_size).reshape( original_size[0] * original_size[1], *window_size) # Create a mask to ignore the central element of the window mask = np.ones(window_size, dtype=bool) mask[half_window_size] = False # Iterate through all the windows for i in range(windows.shape[0]): window = windows[i] # Get the value of the central element center_val = window[half_window_size] # Apply the mask to ignore the central element masked_window = window[mask] # Get the row and column indices of the central element row = i // original_size[1] col = i % original_size[1] # Check if the central element is a local minimum or maximum if center_val &gt; np.nanmax(masked_window): maxima_map[row, col] = center_val elif center_val &lt; np.nanmin(masked_window): minima_map[row, col] = center_val return minima_map.astype(original_dtype), maxima_map.astype(original_dtype) a = np.array([[8, 8, 4, 1, 5, 2, 6, 3], [6, 3, 2, 3, 7, 3, 9, 3], [7, 8, 3, 2, 1, 4, 3, 7], [4, 1, 2, 4, 3, 5, 7, 8], [6, 4, 2, 1, 2, 5, 3, 4], [1, 3, 7, 9, 9, 8, 7, 8], [9, 2, 6, 7, 6, 8, 7, 7], [8, 2, 1, 9, 7, 9, 1, 1]]) (minima, maxima) = get_local_extrema(a) 
print(minima) # [[0 0 0 1 0 2 0 0] # [0 0 0 0 0 0 0 0] # [0 0 0 0 1 0 0 0] # [0 1 0 0 0 0 0 0] # [0 0 0 1 0 0 3 0] # [1 0 0 0 0 0 0 0] # [0 0 0 0 6 0 0 0] # [0 0 1 0 0 0 0 0]] print(maxima) # [[0 0 0 0 0 0 0 0] # [0 0 0 0 7 0 9 0] # [0 8 0 0 0 0 0 0] # [0 0 0 4 0 0 0 8] # [6 0 0 0 0 0 0 0] # [0 0 0 0 0 0 0 8] # [9 0 0 0 0 0 0 0] # [0 0 0 9 0 9 0 0]] expected_minima = np.array([[0, 0, 0, 1, 0, 2, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 3, 0], [1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 6, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0]]) expected_maxima = np.array([[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 7, 0, 9, 0], [0, 8, 0, 0, 0, 0, 0, 0], [0, 0, 0, 4, 0, 0, 0, 8], [6, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 8], [9, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 9, 0, 9, 0, 0]]) np.testing.assert_array_equal(minima, expected_minima) np.testing.assert_array_equal(maxima, expected_maxima) print('All tests passed') </code></pre> <p>Any suggestions or ideas on how to vectorize this function would be greatly appreciated.</p> <p>Thanks in advance!</p> <p><strong>EDIT #1</strong><br /> After playing a bit with NumPy, I managed to get the following code to almost work, in a completely vectorized way if I understood correctly:</p> <pre><code>def get_local_extrema_2(img): minima_map = np.zeros_like(img) maxima_map = np.zeros_like(img) minima_map[1:-1, 1:-1] = np.where( (a[1:-1, 1:-1] &lt; a[:-2, 1:-1]) &amp; (a[1:-1, 1:-1] &lt; a[2:, 1:-1]) &amp; (a[1:-1, 1:-1] &lt; a[1:-1, :-2]) &amp; (a[1:-1, 1:-1] &lt; a[1:-1, 2:]) &amp; (a[1:-1, 1:-1] &lt; a[2:, 2:]) &amp; (a[1:-1, 1:-1] &lt; a[:-2, :-2]) &amp; (a[1:-1, 1:-1] &lt; a[2:, :-2]) &amp; (a[1:-1, 1:-1] &lt; a[:-2, 2:]), a[1:-1, 1:-1], 0) maxima_map[1:-1, 1:-1] = np.where( (a[1:-1, 1:-1] &gt; a[:-2, 1:-1]) &amp; (a[1:-1, 1:-1] &gt; a[2:, 1:-1]) &amp; (a[1:-1, 1:-1] &gt; a[1:-1, :-2]) &amp; (a[1:-1, 1:-1] &gt; a[1:-1, 2:]) &amp; (a[1:-1, 1:-1] &gt; a[2:, 2:]) &amp; (a[1:-1, 1:-1] &gt; a[:-2, :-2]) &amp; (a[1:-1, 1:-1] &gt; a[2:, :-2]) &amp; (a[1:-1, 1:-1] &gt; a[:-2, 2:]), a[1:-1, 1:-1], 0) return minima_map, maxima_map </code></pre> <p>Output of get_local_extrema_2 is:<br /> Minima map:</p> <pre><code>[[0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0] [0 0 0 0 1 0 0 0] [0 1 0 0 0 0 0 0] [0 0 0 1 0 0 3 0] [0 0 0 0 0 0 0 0] [0 0 0 0 6 0 0 0] [0 0 0 0 0 0 0 0]] </code></pre> <p>Maxima map:</p> <pre><code>[[0 0 0 0 0 0 0 0] [0 0 0 0 7 0 9 0] [0 8 0 0 0 0 0 0] [0 0 0 4 0 0 0 0] [0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0]] </code></pre> <p>The problem with the above is that pixels on the border that are minima or maxima are not detected.</p> <p><strong>EDIT #2</strong><br /> It would be fine even if in the output arrays there is 1 instead of the value of the local minima (maxima) i.e. a 2d array of 0 and 1 (or False and True).</p> <p><strong>EDIT #3</strong><br /> Here is a version of the function based on <a href="https://stackoverflow.com/users/7328782/cris-luengo">Cris Luengo</a>'s <a href="https://stackoverflow.com/a/75049306/5495385">answer</a>. Notice the use of the mode 'mirror' (equivalent to NumPy 'reflect') so that if a minima or maxima is on the edge, it will not be copied outside the border and it will stand out. This way, there is no need to pad the image with the minimum or maximum element of the matrix. 
I think this is the most performant way to accomplish this task:</p> <pre><code>import numpy as np import scipy def get_local_extrema_v3(image): footprint = np.ones((3, 3), dtype=bool) footprint[1, 1] = False minima = image * (scipy.ndimage.grey_erosion(image, footprint=footprint, mode='mirror') &gt; image) maxima = image * (scipy.ndimage.grey_dilation(image, footprint=footprint, mode='mirror') &lt; image) return minima, maxima </code></pre>
<python><numpy><image-processing><vectorization><mathematical-morphology>
2023-01-06 19:07:06
1
997
Oliver
75,035,056
1,700,890
Microseconds do not work in Python logger format
<p>For some reason my Python logger does not recognize the microseconds format.</p> <pre><code>import logging, io stream = io.StringIO() logger = logging.getLogger(&quot;TestLogger&quot;) logger.setLevel(logging.INFO) logger.propagate = False log_handler = logging.StreamHandler(stream) log_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s',&quot;%Y-%m-%d %H:%M:%S.%f %Z&quot;) log_handler.setFormatter(log_format) logger.addHandler(log_handler) logger.info(&quot;This is test info log&quot;) print(stream.getvalue()) </code></pre> <p>It returns:</p> <pre><code>2023-01-06 18:52:34.%f UTC - TestLogger - INFO - This is test info log </code></pre> <p>Why are the microseconds missing?</p> <p><strong>Update</strong></p> <p>I am running Python 3.10.4 on Debian GNU/Linux 11 (bullseye).</p>
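For context, `%f` is a `datetime.strftime` directive; the stock `logging.Formatter` renders `asctime` with `time.strftime`, which has no sub-second field, so the literal `%f` survives. A minimal sketch of one workaround, assuming the logger setup from the question, is to override `formatTime` so the timestamp is produced by `datetime` instead (class name here is illustrative):

```python
import io
import logging
from datetime import datetime, timezone

class MicrosecondFormatter(logging.Formatter):
    """Render asctime with datetime.strftime so %f (microseconds) is honoured."""
    def formatTime(self, record, datefmt=None):
        dt = datetime.fromtimestamp(record.created, tz=timezone.utc)
        if datefmt:
            return dt.strftime(datefmt)
        return dt.isoformat(sep=" ", timespec="microseconds")

stream = io.StringIO()
logger = logging.getLogger("TestLogger")
logger.setLevel(logging.INFO)
logger.propagate = False

handler = logging.StreamHandler(stream)
handler.setFormatter(MicrosecondFormatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    "%Y-%m-%d %H:%M:%S.%f %Z"))
logger.addHandler(handler)

logger.info("This is test info log")
print(stream.getvalue())  # e.g. 2023-01-06 18:52:34.123456 UTC - TestLogger - INFO - ...
```

If millisecond precision is enough, the built-in `%(msecs)03d` field in the message format is a simpler route.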
<python><logging><time><format><python-logging>
2023-01-06 18:53:59
2
7,802
user1700890
75,035,018
1,818,713
Proper syntax for a pydeck PathLayer from a shapely geometry
<p>I see <a href="https://deckgl.readthedocs.io/en/latest/gallery/path_layer.html?highlight=pathlayer" rel="nofollow noreferrer">the example</a> of a PathLayer and it shows the input data is a list of lists within a regular pandas df, as opposed to a geopandas df.</p> <p>Let's say I'm coming from a GeoPandas df (let's say called <code>consline</code>) that has a shapely MULTILINESTRING geometry column. What's the proper/best syntax to use that? I tried different combinations between calling it a PathLayer and a GeoJsonLayer with using the <code>get_path</code> parameter and nothing seemed to work except for doing <code>json.loads(consline.geometry.to_json())['features'][0]['geometry']['coordinates']</code></p> <p>The <code>consline</code> can be loaded as</p> <pre><code>consline=gpd.read_file(&quot;https://github.com/deanm0000/SOexamples/raw/main/consline.gpkg&quot;) </code></pre> <p>For instance, this doesn't work:</p> <pre><code>mylayers=[ pdk.Layer( 'PathLayer', data=consline, get_path=&quot;geometry&quot;, pickable=True, get_fill_color=[0,255,0], width_scale=20, width_min_pixels=2, ) ] view_state = pdk.ViewState( longitude=-98.99, latitude=31.79, zoom=5, min_zoom=1, max_zoom=15) r = pdk.Deck(layers=mylayers, initial_view_state=view_state, map_style='light') r.to_html(&quot;example.html&quot;) </code></pre> <p>but this does:</p> <pre><code>mylayers=[ pdk.Layer( 'PathLayer', data=json.loads(consline.geometry.to_json())['features'][0]['geometry']['coordinates'], get_path=&quot;-&quot;, pickable=True, get_fill_color=[0,255,0], width_scale=20, width_min_pixels=2, ) ] view_state = pdk.ViewState( longitude=-98.99, latitude=31.79, zoom=5, min_zoom=1, max_zoom=15) r = pdk.Deck(layers=mylayers, initial_view_state=view_state, map_style='light') r.to_html(&quot;example.html&quot;) </code></pre> <p>I can't imagine having shapely convert it to json and then python to convert it to a dict and then have pydeck convert it back to json for deck.gl is the best way to do this but I can't get it to work any other way.</p>
<python><deck.gl><pydeck>
2023-01-06 18:49:50
1
19,938
Dean MacGregor
75,034,877
2,425,753
Download AWS CloudWatch logs for a period
<p>I want to download all CloudWatch logs from AWS for:</p> <ul> <li>a spectific log group</li> <li>a specific time range</li> </ul> <p>My plan is fairly simple:</p> <ol> <li>Iterate over all logstreams for log group.</li> <li>For each log stream iterate over events and build a list of all log events.</li> </ol> <pre class="lang-py prettyprint-override"><code>import boto3 def overlaps(start1, end1, start2, end2): return max(start1, start2) &lt; min(end1, end2) def load_logs(region, group, start=0, end=2672995600000): client = boto3.client('logs', region_name=region) paginator = client.get_paginator('describe_log_streams') response_iterator = paginator.paginate(logGroupName=group) events = [] for page in response_iterator: for log_stream in page[&quot;logStreams&quot;]: print(f&quot;Stream: {log_stream['logStreamName']}, start: {log_stream['firstEventTimestamp']} end: {log_stream['lastEventTimestamp']}&quot;) if overlaps(log_stream[&quot;firstEventTimestamp&quot;], log_stream[&quot;lastEventTimestamp&quot;], start, end): print(&quot;processing&quot;) token = None while True: event_args = { &quot;logGroupName&quot;: group, &quot;logStreamName&quot;: log_stream['logStreamName'], &quot;startTime&quot;: start, &quot;endTime&quot;: end } if token is not None: event_args[&quot;nextToken&quot;] = token response = client.get_log_events(**event_args) for event in response[&quot;events&quot;]: if start &lt; event[&quot;timestamp&quot;] &lt; end: events.append(event) if response[&quot;nextBackwardToken&quot;] == token: break else: token = response[&quot;nextBackwardToken&quot;] print(events) </code></pre> <p>I'm passing <code>0</code> as a <code>start</code> and a far future <code>2672995600000</code> as <code>end</code> and some events are downloaded, however <code>events</code> list does not contain all logevents. Is there some iteration I'm missing? I'm especially concerned with <code>get_log_events</code> <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/logs.html#CloudWatchLogs.Client.get_log_events" rel="nofollow noreferrer">iteration</a></p>
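As a sketch of an alternative that sidesteps per-stream `nextBackwardToken` handling entirely, CloudWatch Logs also exposes `filter_log_events`, which walks every stream of a group for a time range and is paginatable; parameter names below mirror the question and the time bounds are epoch milliseconds:

```python
import boto3

def load_logs_filtered(region, group, start_ms, end_ms):
    """Collect all events of a log group between start_ms and end_ms (epoch millis)."""
    client = boto3.client("logs", region_name=region)
    paginator = client.get_paginator("filter_log_events")
    events = []
    for page in paginator.paginate(logGroupName=group, startTime=start_ms, endTime=end_ms):
        # each page carries a batch of events already filtered to the time window
        events.extend(page["events"])
    return events
```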
<python><amazon-web-services><loops><amazon-cloudwatch><amazon-cloudwatchlogs>
2023-01-06 18:34:38
3
1,636
rfg
75,034,770
4,343,563
Create a column of unique randomly generated letters in a pandas dataframe?
<p>I have a dataframe where there is one column of random letters and numbers, then a column of how many random letters/numbers need to be added to random string in the first column. Like so but my dataframe is 3+ million rows:</p> <pre><code> id missing XK39J 4 NI94N 4 9IN3 5 MN83D 4 IUN2 5 </code></pre> <p>I am using the following code to generate the new random sequence:</p> <pre><code>def id_generator(size, chars=string.ascii_uppercase + string.digits): return ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(size)) data['new_id'] = data['missing'].apply(lambda x: id_generator(size = x)) data['final_id'] = data['id'] + data['new_id'] </code></pre> <p>However, when I use this I end up getting a couple values in the 'final_id' column that are duplicates. But, I need each value in the 'final_id' column to be unique. Like:</p> <pre><code>id missing new_id final_id XK39J 4 NJI4 XK39JNJI4 NI94N 4 BNER NI94NBNER 9IN3 5 ER41J 9IN3ER41J MN83D 4 9D4S MN83D9D4S IUN2 5 MNST3 IUN2MNST3 </code></pre> <p>My idea was to store all the ids in a list and then get a new randomly generated sequence if it matches but given that there will be 3million+ ids it won't work since iterating through a row of 3m will take too long. Like:</p> <pre><code>def id_generator(size, chars=string.ascii_uppercase + string.digits): val_ls = [] val = ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(size)) while val in val_ls: val = ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(size)) else: val_ls.append(val) return val </code></pre> <p>How can I ensure that there are no repeats?</p>
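One hedged sketch of how the uniqueness check can stay fast: keep the already-used final ids in a `set`, whose average membership test is O(1), instead of scanning a list for every row (`data`, `id` and `missing` are assumed to be the frame and columns from the question):

```python
import random
import string

ALPHABET = string.ascii_uppercase + string.digits

def build_unique_final_ids(bases, sizes):
    """Return one unique base+suffix string per row; `seen` makes collision checks cheap."""
    seen = set()
    finals = []
    for base, size in zip(bases, sizes):
        while True:
            candidate = base + "".join(random.choices(ALPHABET, k=size))
            if candidate not in seen:       # O(1) on average, even with millions of ids
                seen.add(candidate)
                finals.append(candidate)
                break
    return finals

# usage sketch against the question's frame:
# data["final_id"] = build_unique_final_ids(data["id"], data["missing"])
```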
<python><pandas><list><sequence>
2023-01-06 18:23:12
1
700
mjoy
75,034,758
5,205,393
Delete all keys in Python dictionary greater than x in constant time
<p>If you have a dictionary:</p> <pre><code>d = {1: True, 2: False, 3: False, 4: True, 5: False, 6: True, 7: False, 8: False} </code></pre> <p>and you want all keys greater than 3 to be deleted so the dictionary becomes:</p> <pre><code>{1: True, 2: False, 3: False} </code></pre> <p>can you do this in constant time if the keys are sorted?</p>
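For reference, a sketch of the straightforward linear-time version; whether anything faster is possible depends on swapping the plain dict for an order-aware structure, since a dict offers no slicing by key:

```python
d = {1: True, 2: False, 3: False, 4: True, 5: False, 6: True, 7: False, 8: False}

# O(n) rebuild keeping only the keys <= 3
trimmed = {k: v for k, v in d.items() if k <= 3}
print(trimmed)  # {1: True, 2: False, 3: False}
```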
<python><algorithm><dictionary>
2023-01-06 18:22:08
2
301
Andrew
75,034,746
5,270,376
PIL image save causes FFMPEG to fail
<p>I have been attempting to convert some videos using FFMPEG with image2pipe using PIL. I have found that when the frame is particularly simple (such as all one colour), it causes FFMPEG to fail with the following message:</p> <pre><code>[image2pipe @ 000001785b599bc0] Could not find codec parameters for stream 0 (Video: none, none): unknown codec Consider increasing the value for the 'analyzeduration' and 'probesize' options Input #0, image2pipe, from 'pipe:': Duration: N/A, bitrate: N/A Stream #0:0: Video: none, none, 24 tbr, 24 tbn, 24 tbc Output #0, mp4, to '&lt;YOUR FILEPATH HERE&gt;/test.mp4': Output file #0 does not contain any stream </code></pre> <p>The minimum code I have found to reproduce this is as follows:</p> <pre><code>import numpy as np from subprocess import Popen, PIPE from PIL import Image output_file = &quot;&lt;YOUR FILEPATH HERE&gt;/test.mp4&quot; p = Popen(['ffmpeg', '-y', # Overwrite files '-f', 'image2pipe', # Input format '-r', '24', # Framerate '-i', '-', # stdin '-c:v', 'libx264', # Codec '-preset', 'slow', '-crf', f'18', # H264 Constant Rate Factor (quality, lower is better) output_file], stdin=PIPE) # This one works # vid = np.random.randint(0, 255, (10, 64, 64)) # Create a 64x64 'video' with 10 frames of random noise # This one does not vid = np.full((10, 64, 64), 129) # Create a 64x64 'video' with 10 frames of pure grey for frame in vid: im = Image.fromarray(np.uint8(frame)) im.save(p.stdin, 'JPEG') p.stdin.close() p.wait() </code></pre> <p>Notably, if I do the same thing with a randomly generated series of frames (commented as &quot;This one works&quot; in the script above), it will output fine.</p> <p>One workaround I have found so far is to replace 'JPEG' with 'PNG' in the <code>im.save(...)</code> call. However, I would be interested in understanding what causes it to fail with JPEG.</p>
<python><ffmpeg><python-imaging-library>
2023-01-06 18:20:45
0
520
Xorgon
75,034,658
9,742,558
Installing Python 3.8 on an Ubuntu 22 server gives an error
<p>I have a Ubuntu 22.04 server and I need to install Python3.8 on it.</p> <p>Here is what I did:</p> <pre><code>sudo apt update sudo apt upgrade sudo apt install python3.8 -y </code></pre> <p>The last command gave me this error:</p> <pre><code>Reading package lists... Done Building dependency tree... Done Reading state information... Done Package python3.8 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'python3.8' has no installation candidate </code></pre> <p>The server I am working on is a GCP instance, and it has Python3.10 installed before:</p> <pre><code>root@ansible-3:/usr/bin# ll python* lrwxrwxrwx 1 root root 7 Jan 5 23:39 python -&gt; python3* lrwxrwxrwx 1 root root 10 Aug 18 10:39 python3 -&gt; python3.10* lrwxrwxrwx 1 root root 17 Aug 18 10:39 python3-config -&gt; python3.10-config* -rwxr-xr-x 1 root root 5921160 Nov 14 16:10 python3.10* lrwxrwxrwx 1 root root 34 Nov 14 16:10 python3.10-config -&gt; x86_64-linux-gnu-python3.10-config* </code></pre> <p>Any idea what is going on?</p> <p>Thanks!</p>
<python><linux><ubuntu>
2023-01-06 18:12:48
0
557
Philip Shangguan
75,034,644
5,437,090
if-else statement for input arguments of a function in Python
<p>I run my bash script <code>my_file.sh</code> in a python file as follows:</p> <pre><code>import subprocess def rest_api(): params = { 'query': 'indepedence day', 'formats': '[&quot;NEWSPAPER&quot;]', } subprocess.call(['bash', 'my_file.sh', f'QUERY={params.get(&quot;query&quot;)}', f'DOC_TYPE={params.get(&quot;formats&quot;)}', f'LANGUAGE={params.get(&quot;lang&quot;)}', # returns None! ]) if __name__ == '__main__': rest_api() </code></pre> <p>Several of my input arguments in <code>subprocess.call</code> do not normally exist in a dictionary <code>params={}</code> (here I provided <code>f'LANGUAGE={params.get(&quot;lang&quot;)}'</code> as one example). I handle such unavailability in <code>my_file.sh</code> to initialize with something, for instance:</p> <pre><code>if [ -z &quot;$LANGUAGE&quot; ]; then LANGUAGE=&quot;${LANGUAGE:-[]}&quot;; fi </code></pre> <p>What I want is to apply some sort of <code>if else</code> statement in <code>subprocess.call</code> function with this logic:</p> <p>if <code>params.get(&quot;lang&quot;)</code> is <code>None</code>, do not even send it as an input to bash file, e.g., treat it as I never provided such input for <code>my_file.sh</code>.</p> <p>Therefore, I tried to rewrote my code like this:</p> <pre><code>subprocess.call(['bash', 'my_file.sh', f'QUERY={params.get(&quot;query&quot;)}', f'DOC_TYPE={params.get(&quot;formats&quot;)}', if params.get(&quot;lang&quot;): f'LANGUAGE={params.get(&quot;lang&quot;)}', # syntax Error ]) </code></pre> <p>which is wrong I get the following <code>invalid syntax error</code>:</p> <pre><code>Traceback (most recent call last): File &quot;nationalbiblioteket_logs.py&quot;, line 13, in &lt;module&gt; from url_scraping import * File &quot;/home/xenial/WS_Farid/DARIAH-FI/url_scraping.py&quot;, line 17, in &lt;module&gt; from utils import * File &quot;/home/xenial/WS_Farid/DARIAH-FI/utils.py&quot;, line 53 if params.get(&quot;lang&quot;): f'LANGUAGE={params.get(&quot;lang&quot;)}', ^ SyntaxError: invalid syntax </code></pre> <p>Do I have a wrong understanding of applying <code>if else</code> statement for the input arguments of a python function or is there an easier or cleaner way doing it?</p> <p>Cheers,</p>
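A minimal sketch of one way to express that intent: build the argument list first and append the optional entry only when the value is actually present (function, script and parameter names follow the question; the sample values are made up):

```python
import subprocess

def rest_api():
    params = {"query": "independence day", "formats": '["NEWSPAPER"]'}

    cmd = [
        "bash", "my_file.sh",
        f'QUERY={params.get("query")}',
        f'DOC_TYPE={params.get("formats")}',
    ]
    # Only pass LANGUAGE to the script at all when the key is set
    if params.get("lang") is not None:
        cmd.append(f'LANGUAGE={params.get("lang")}')

    subprocess.call(cmd)
```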
<python><function><arguments><parameter-passing>
2023-01-06 18:11:30
2
1,621
farid
75,034,628
19,321,677
How to correctly replace values per user based on a reference value?
<p>I have a dataframe where for each user I have always 2 pairs by goods_type and business KPI for EACH USER such as:</p> <pre><code>user | amount | goods_type | business 1 | $10 | used | cost 1 | $30 | used | revenue 1 | $15 | new | cost 1 | $50 | new | revenue </code></pre> <p>Now, I want to add a segment column using np.where in pandas but my logic applies only to rows where business = revenue. As far as the cost rows go, the segment value should be the same as the revenue for the SAME USER and SAME GOODS_TYPE pair, not based on the np.where clause. The segment value for all &quot;revenue&quot; rows is determined like this:</p> <pre><code>np.where( (df.amount &lt; 15) &amp; (df.goods_type =='used') , 'low', 'high' ) </code></pre> <p>How would I do this so it looks like this:</p> <pre><code>user | amount | goods_type | business | segment 1 | $10 | used | cost | low (this should be based on the segment value of the associated revenue row below!) 1 | $30 | used | revenue. | low (this comes from the logic) 1 | $15 | new | cost | high (this should be based on the segment value of the associated revenue row below!) 1 | $50 | new | revenue | high (this comes from the logic) </code></pre>
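One possible sketch, assuming `amount` is numeric (no "$" strings): compute the segment on the revenue rows only, then broadcast it to the matching cost row of the same user/goods_type pair with a groupby transform:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "user": [1, 1, 1, 1],
    "amount": [10, 30, 15, 50],                 # numeric amounts assumed
    "goods_type": ["used", "used", "new", "new"],
    "business": ["cost", "revenue", "cost", "revenue"],
})

is_revenue = df["business"].eq("revenue")

# segment is defined on the revenue rows only
df.loc[is_revenue, "segment"] = np.where(
    (df.loc[is_revenue, "amount"] < 15) & (df.loc[is_revenue, "goods_type"] == "used"),
    "low", "high")

# every row of a (user, goods_type) pair takes the revenue row's segment;
# 'first' skips the NaN that the cost rows still carry
df["segment"] = df.groupby(["user", "goods_type"])["segment"].transform("first")
print(df)
```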
<python><pandas>
2023-01-06 18:09:10
1
365
titutubs
75,034,593
1,700,890
Cannot log to stream in Python
<p>I want to log to an <code>io.StringIO</code> stream, but I end up with an empty stream. Here is my code:</p> <pre><code>import logging, io log_handler = logging.StreamHandler(stream) log_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') log_handler.setFormatter(log_format) log_handler.setLevel(logging.INFO) logger.addHandler(log_handler) logger.info(&quot;This is test info log&quot;) print(stream.getvalue()) </code></pre> <p>What am I doing wrong?</p> <p><strong>UPDATE</strong> It seems to work if I replace the level with WARNING, like this:</p> <pre><code>log_handler.setLevel(logging.WARNING) logger.warning(&quot;This is test info log&quot;) </code></pre> <p>It also prints to the console for some reason.</p>
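A short working sketch for comparison: the logger object filters records before any handler sees them, and an unconfigured logger falls back to the root's WARNING level, so INFO records never reach the StringIO handler unless the logger level is lowered as well (the logger name here is illustrative):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("StreamLogger")
logger.setLevel(logging.INFO)      # without this, the logger drops INFO before the handler runs
logger.propagate = False           # keep the record away from the root/last-resort console output

handler = logging.StreamHandler(stream)
handler.setLevel(logging.INFO)
handler.setFormatter(logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
logger.addHandler(handler)

logger.info("This is test info log")
print(stream.getvalue())
```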
<python><string><logging>
2023-01-06 18:06:48
1
7,802
user1700890
75,034,589
2,442,879
Variable in dictionary
<p>I want to replace <code>CHANGE</code> with the variable <code>zone_list</code>.</p> <pre class="lang-py prettyprint-override"><code>output_zones = {'CHANGE' : {}} </code></pre> <p>I would like to get:</p> <pre class="lang-py prettyprint-override"><code>{'zone_name': {... a set of dictionaries...}} </code></pre> <p>What is the correct syntax? This code is wrong:</p> <pre class="lang-py prettyprint-override"><code>zone_list = zone_name output_zones = {f&quot;{zone_list}:&quot;, {}} output_zones[zone_list].update(zone_info) </code></pre>
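A tiny sketch of the syntax in question, with hypothetical values: the variable itself is used as the key, with no f-string or trailing colon needed:

```python
zone_name = "zone_a"                     # hypothetical zone name
zone_info = {"subnet": "10.0.0.0/24"}    # hypothetical zone details

zone_list = zone_name
output_zones = {zone_list: {}}           # -> {'zone_a': {}}
output_zones[zone_list].update(zone_info)
print(output_zones)                      # {'zone_a': {'subnet': '10.0.0.0/24'}}
```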
<python>
2023-01-06 18:06:27
1
615
Rostyslav Malenko
75,034,523
1,473,517
How to average across two dataframes
<p>I have two dataframes:</p> <pre><code>{'id': {4: 1548638, 6: 1953603, 7: 1956216, 8: 1962245, 9: 1981386, 10: 1981773, 11: 2004787, 13: 2017418, 14: 2020989, 15: 2045043}, 'total': {4: 17, 6: 38, 7: 59, 8: 40, 9: 40, 10: 40, 11: 80, 13: 44, 14: 51, 15: 46}} {'id': {4: 1548638, 6: 1953603, 7: 1956216, 8: 1962245, 9: 1981386, 10: 1981773, 11: 2004787, 13: 2017418, 14: 2020989, 15: 2045043}, 'total': {4: 17, 6: 38, 7: 59, 8: 40, 9: 40, 10: 40, 11: 80, 13: 44, 14: 51, 15: 46}} </code></pre> <p>For every 'id' that exists in both dataframes I would like to compute the average of their values in 'total' and have that in a new dataframe.</p> <p>I tried:</p> <pre><code> pd.merge(df1, df2, on=&quot;id&quot;) </code></pre> <p>with the hope that I could then do:</p> <pre><code>merged_df[['total']].mean(axis=1) </code></pre> <p>but it doesn't work at all.</p> <p>How can you do this?</p>
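A sketch of one common way to do this, with small made-up frames: merging on id produces suffixed total columns, and the row-wise mean is taken over those two columns rather than over a single 'total' name that no longer exists after the merge:

```python
import pandas as pd

df1 = pd.DataFrame({"id": [1, 2, 3], "total": [10, 40, 70]})
df2 = pd.DataFrame({"id": [2, 3, 4], "total": [60, 30, 90]})

merged = df1.merge(df2, on="id", suffixes=("_1", "_2"))   # inner join: only ids present in both
merged["total"] = merged[["total_1", "total_2"]].mean(axis=1)

result = merged[["id", "total"]]
print(result)   # ids 2 and 3, each with mean 50.0
```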
<python><pandas>
2023-01-06 18:00:42
2
21,513
Simd
75,034,449
1,877,527
Aggregating multiple columns into one in Pandas quickly
<p>I have a DataFrame that is, ultimately, object IDs attached to individual X and Y coordinates, something like</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>X</th> <th>Y</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>1</td> <td>3</td> </tr> <tr> <td>1</td> <td>2</td> <td>5</td> </tr> <tr> <td>2</td> <td>7</td> <td>1</td> </tr> <tr> <td>2</td> <td>8</td> <td>5</td> </tr> <tr> <td>2</td> <td>9</td> <td>7</td> </tr> </tbody> </table> </div> <p>I ultimately have no guarantee about order of IDs or X/Y, and can't make these connected upstream.</p> <p>With the ultimate goal of getting a convex hull of the points involved, I'm currently grouping the X/Ys into a list, then zipping them, then changing that list-of-tuples to a Shapely <code>MultiPoint</code> before finding the convex hull.</p> <pre><code>import shapely.geometry as shGeom sf = df.groupby(&quot;ID&quot;).agg({&quot;X&quot;: list, &quot;Y&quot;: list}) # I want to keep this coordinate set for later, though as the MultiPoint would be fine. # In tests, storing the MultiPoint as an intermediate is slower due to memory pressure # rather than the list-of-tuples sf[&quot;coordinates&quot;] = shapeFrame[[&quot;Y&quot;, &quot;X&quot;]].apply(lambda x: [(a,b) for a, b in zip(x[0], x[1])], axis= 1) # This next &quot;hull&quot; column is the target sf[&quot;hull&quot;] = sf[&quot;coordinates&quot;].apply(lambda x: shGeom.MultiPoint(x).convex_hull) </code></pre> <p>This approach though requires several data passes over a 1M+ row frame, and in particular the zipping pass is slow.</p> <p>Is there a way to do this with fewer data passes? It feels like there should be. (At the end of the day this code <em>works</em> but this is a very slow step in it)</p> <p>I do use GeoPandas later, but there's no geometry column to operate upon until the X and Y entries are turned into <code>Point</code> s or a <code>MultiPolygon</code>, which doesn't get around the slow step.</p>
<python><pandas><shapely>
2023-01-06 17:53:35
1
732
Philip Kahn
75,034,252
8,849,071
How to handle inheritance in mypy?
<p>I'm trying to add mypy to my Python project but I have found a roadblock. Let's say I have the following inheritance:</p> <pre class="lang-py prettyprint-override"><code>class BaseClass: base_attribute: str class A(BaseClass): attribute_for_class_A: str class B(BaseClass): attribute_for_class_B: str </code></pre> <p>Now let's create some code that handle both instances of these classes, but without really knowing it:</p> <pre class="lang-py prettyprint-override"><code>@dataclass class ClassUsingTheOthers: fields: Dict[str, BaseClass] def get_field(self, field_name: str) -&gt; BaseClass: field = self.fields.get(field_name) if not field: raise ValueError('Not found') return field </code></pre> <p>The important bit here is the <code>get_field</code> method. Now let's create a function to use the <code>get_field</code> method, but that function will require to use a particlar subclass of <code>BaseClass</code>, <code>B</code>, for instance:</p> <pre class="lang-py prettyprint-override"><code>def function_that_needs_an_instance_of_b(instance: B): print(instance.attribute_for_class_B) </code></pre> <p>Now if we use all the code together, we can get the following:</p> <pre class="lang-py prettyprint-override"><code>if __name__ == &quot;__main__&quot;: class_using_the_others = ClassUsingTheOthers( fields={ 'name_1': A(), 'name_2': B() } ) function_that_needs_an_instance_of_b(class_using_the_others.get_field('name_2')) </code></pre> <p>Obviously, when I run <code>mypy</code> to this <a href="https://gist.github.com/antoniogamizbadger/bbc48f7145469a026c1a4fdb7ad1bd86" rel="nofollow noreferrer">file</a> (in this gist you find all the code), I get the following error, as expected:</p> <pre><code> error: Argument 1 to &quot;function_that_needs_an_instance_of_b&quot; has incompatible type &quot;BaseClass&quot;; expected &quot;B&quot; [arg-type] </code></pre> <p>So my question is, how do I fix my code to make this error go away? I cannot change the type hint of the <code>fields</code> attribute because I really need to set it that way. Any ideas? Am I missing something? Should I check the type of the field returned?</p>
<python><inheritance><mypy>
2023-01-06 17:33:39
1
2,163
Antonio Gamiz Delgado
75,034,239
9,820,773
Python boto3 how to parse role_arn from AWS_CONFIG_FILE?
<p>I have an AWS config file that my boto3 session has access to, via the AWS_CONFIG_FILE environment variable. The config file looks like this: (multi-account environment)</p> <pre><code>[profile profile1] credential_source Environment region=us-east-whatever role_arn=arn:aws:iam:&lt;ACCOUNT NUMBER 1&gt;:role/all-profiles-same-role-name [profile profile2] credential_source Environment region=us-east-whatever role_arn=arn:aws:iam:&lt;ACCOUNT NUMBER 2&gt;:role/all-profiles-same-role-name [profile profileN] credential_source Environment region=us-east-whatever role_arn=arn:aws:iam:&lt;ACCOUNT NUMBER N&gt;:role/all-profiles-same-role-name </code></pre> <p>In my Python code, I am trying to setup RefreshableCredentails (boto3 method) using somethign like this: (excluding full code because I think the issue is mainly about parsing the aws_config_file):</p> <pre><code>def __get_session_credentials(self): # hardcode one role_arn for now but need to variablize session_ttl=3000 aws_role_arn=&quot;arn:aws:iam::&lt;ACCOUNT NUM&gt;:role/all-profiles-same-role-name ... </code></pre> <p>Can I somehow parse the &quot;role_arn&quot; from the config file on a per profile basis to make that function more extensible? How would I do that?</p>
<python><amazon-web-services><boto3>
2023-01-06 17:32:07
2
323
Robert Campbell
75,033,658
11,925,464
count unique combination in one column using Groupby
<p>i have a dataframe which i'm trying to create new columns showing the occurrence of different combinations within different groups. Solutions I've found are all combinations of values across 2 or more columns instead of one. Therefore, is hoping somebody can help.</p> <p>sample df:</p> <pre> ╔════╦═════╗ ║ id ║ tag ║ ╠════╬═════╣ ║ a ║ 1 ║ ║ a ║ 1 ║ ║ a ║ 2 ║ ║ a ║ 2 ║ ║ a ║ 3 ║ ║ a ║ 3 ║ ║ b ║ 2 ║ ║ b ║ 2 ║ ║ b ║ 2 ║ ║ b ║ 3 ║ ║ b ║ 3 ║ ║ b ║ 3 ║ ╚════╩═════╝ </pre> <p>output hope to get:</p> <pre> ╔════╦═════╦═════╦═════╦═════╦═════╦═════╦═════╗ ║ id ║ tag ║ 1,1 ║ 1,2 ║ 1,3 ║ 2,2 ║ 2,3 ║ 3,3 ║ ╠════╬═════╬═════╬═════╬═════╬═════╬═════╬═════╣ ║ a ║ 1 ║ 1 ║ 4 ║ 4 ║ 1 ║ 4 ║ 1 ║ ║ a ║ 1 ║ 1 ║ 4 ║ 4 ║ 1 ║ 4 ║ 1 ║ ║ a ║ 2 ║ 1 ║ 4 ║ 4 ║ 1 ║ 4 ║ 1 ║ ║ a ║ 2 ║ 1 ║ 4 ║ 4 ║ 1 ║ 4 ║ 1 ║ ║ a ║ 3 ║ 1 ║ 4 ║ 4 ║ 1 ║ 4 ║ 1 ║ ║ a ║ 3 ║ 1 ║ 4 ║ 4 ║ 1 ║ 4 ║ 1 ║ ║ b ║ 2 ║ 0 ║ 0 ║ 0 ║ 3 ║ 9 ║ 3 ║ ║ b ║ 2 ║ 0 ║ 0 ║ 0 ║ 3 ║ 9 ║ 3 ║ ║ b ║ 2 ║ 0 ║ 0 ║ 0 ║ 3 ║ 9 ║ 3 ║ ║ b ║ 3 ║ 0 ║ 0 ║ 0 ║ 3 ║ 9 ║ 3 ║ ║ b ║ 3 ║ 0 ║ 0 ║ 0 ║ 3 ║ 9 ║ 3 ║ ║ b ║ 3 ║ 0 ║ 0 ║ 0 ║ 3 ║ 9 ║ 3 ║ ╚════╩═════╩═════╩═════╩═════╩═════╩═════╩═════╝ </pre> <p>sample df code:</p> <pre><code>data = { &quot;id&quot;: ['a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b'], &quot;tag&quot;: [1, 1, 2, 2, 3, 3, 2, 2, 2, 3, 3, 3]} df = pd.DataFrame(data) </code></pre> <p>for clarification: &quot;col &quot;x,y&quot; is the combinations of the tag values grouped by the id&quot; as mentioned by @Chrysophylaxs (thanks).</p> <p>kindly advise</p>
<python><pandas><group-by>
2023-01-06 16:37:06
2
597
ManOnTheMoon
75,033,602
1,189,620
QSqlDatabase insert array of float using QSqlTableModel
<p>I am trying to insert an array into sql3 and retrieve it by first converting them into a BLOB, but its not inserting anything into the DB, code Below of a widget with QTableView using a pyqt model pattern</p> <pre><code>from PyQt6.QtWidgets import QWidget, QVBoxLayout, QHBoxLayout, QLineEdit, QTableView, QMessageBox, QAbstractItemView, QHeaderView from PyQt6.QtSql import QSqlTableModel, QSqlDatabase, QSqlError, QSqlQuery from PyQt6.QtCore import Qt, QSortFilterProxyModel from numpy import array, frombuffer from app.tools.cube_config import Config class CubeDataTable(QWidget): def __init__(self): super(CubeDataTable, self).__init__() self.lay = QVBoxLayout(self) self.connect() self.__model = QSqlTableModel() self.__view = QTableView() self.__proxy = QSortFilterProxyModel() self.rows = 0 self._set_main_layout() self.insert_data() def _set_main_layout(self) -&gt; None: self._set_data_model() self._set_table_view() self.lay.addWidget(self.__view) def _set_data_model(self) -&gt; None: self.__model.setTable('audio') self.__model.setEditStrategy(QSqlTableModel.EditStrategy.OnManualSubmit) self.__model.setHeaderData(0, Qt.Orientation.Horizontal, &quot;ID&quot;) self.__model.setHeaderData(1, Qt.Orientation.Horizontal, &quot;Name&quot;) self.__model.setHeaderData(2, Qt.Orientation.Horizontal, &quot;Sample Rate&quot;) self.__model.setHeaderData(3, Qt.Orientation.Horizontal, &quot;Raw Frames&quot;) self.__model.sort(1, Qt.SortOrder.DescendingOrder) self.__model.select() self.rows = self.__model.rowCount() self.__proxy.setSourceModel(self.__model) self.__proxy.setFilterCaseSensitivity(Qt.CaseSensitivity.CaseInsensitive) self.__proxy.setFilterKeyColumn(1) def _set_table_view(self) -&gt; None: self.__view.setModel(self.__proxy) self.__view.setSortingEnabled(True) self.__view.setAlternatingRowColors(True) self.__view.setColumnHidden(0, True) self.__view.verticalHeader().setSectionResizeMode(QHeaderView.ResizeMode.Stretch) self.__view.horizontalHeader().setSectionResizeMode(QHeaderView.ResizeMode.Stretch) self.__view.setEditTriggers(QAbstractItemView.EditTrigger.NoEditTriggers) self.__view.setSelectionBehavior(QAbstractItemView.SelectionBehavior.SelectRows) self.__view.setSelectionMode(QAbstractItemView.SelectionMode.SingleSelection) self.__view.resizeColumnsToContents() self.__view.selectionModel().selectionChanged.connect(self._get_selected_row) def _get_selected_row(self, selected, deselected): index_entity = self.__view.selectionModel().selectedIndexes() temp_entity = self.__view.selectionModel().model() for index in sorted(index_entity): d = temp_entity.data(index) print(str(temp_entity.data(index))) def insert_data(self): try: a = array([0.1, 0.2, 0.3]) self.__model.insertRows(self.rows, 1) self.__model.setData(self.__model.index(self.rows, 1), 'foo') self.__model.setData(self.__model.index(self.rows, 3), a.tobytes()) self.__model.submitAll() self.rows += 1 except (Exception, QSqlError) as e: self.__model.revertAll() QSqlDatabase.database().rollback() print(&quot;Error: &quot; + str(e)) @staticmethod def connect() -&gt; bool: config = Config.get('data') conn = QSqlDatabase.addDatabase('QSQLITE') conn.setHostName(config['host']) conn.setPort(config['port']) conn.setPassword(config['password']) conn.setUserName(config['user']) conn.setDatabaseName(config['database']) if not conn.open(): QMessageBox.critical(QMessageBox(), &quot;Error&quot;, &quot;Error: %s&quot; % conn.lastError().text()) return False cursor = QSqlQuery() cursor.exec( &quot;&quot;&quot; CREATE TABLE audio ( id INTEGER PRIMARY KEY 
AUTOINCREMENT UNIQUE NOT NULL, name VARCHAR(40) NOT NULL, sampleRate INTEGER, rawFrames BLOB ) &quot;&quot;&quot; ) return True </code></pre>
<python><python-3.x><pyqt6>
2023-01-06 16:32:57
1
3,636
ramon22
75,033,545
11,980,301
How to open a json.gz.part file using Python?
<p>I have lots of json.gz files in a directory and some them are json.gz.part. Supposedly, when saving them, some of the files were too large and they were splitted.</p> <p>I tried to open them as normally using:</p> <pre><code>with gzip.open(file, 'r') as fin: json_bytes = fin.read() json_str = json_bytes.decode('utf-8') # 2. string (i.e. JSON) bb = json.loads(json_str) </code></pre> <p>But when it comes to the <code>.gz.part</code> files I get an error:</p> <pre><code>uncompress = self._decompressor.decompress(buf, size) error: Error -3 while decompressing data: invalid code lengths set </code></pre> <p>I've tried the <a href="https://stackoverflow.com/questions/1732709/unzipping-part-of-a-gz-file-using-python">jiffyclub's</a> solution, but I get the following error:</p> <pre><code> _read_eof = gzip.GzipFile._read_eof AttributeError: type object 'GzipFile' has no attribute '_read_eof' </code></pre> <p>EDIT:</p> <p>If I read line by line I'm able to read most of the content file, until I get an error:</p> <pre><code>with gzip.open(file2,'r') as fin: for line in fin: print(line.decode('utf-8')) </code></pre> <p>After printing most of the content I get:</p> <pre><code>error: Error -3 while decompressing data: invalid code lengths set </code></pre> <p>But using this last method I cannot convert its content to a json file.</p>
<python><json><gzip>
2023-01-06 16:26:52
1
403
Marlon Teixeira
75,033,467
2,977,164
Solutions to 'make_dirs' problem in labelbox_json2yolo.py code
<p>I'd like to convert my label file in *json to YOLO * txt with to class ('bsb','wsb') using the <code>labelbox_json2yolo.py</code> (<a href="https://github.com/ultralytics/JSON2YOLO/blob/master/labelbox_json2yolo.py" rel="nofollow noreferrer">https://github.com/ultralytics/JSON2YOLO/blob/master/labelbox_json2yolo.py</a>) in Jupyter Notebook.</p> <p>In my data (image and label file) I have:</p> <pre><code>file = &quot;https://raw.githubusercontent.com/Leprechault/trash/main/YT_EMBRAPA_002.zip&quot; # Patch to zip file in GitHub import json import os from pathlib import Path import requests import yaml from PIL import Image from tqdm import tqdm from utils import make_dirs def convert(file, zip=True): # Convert Labelbox JSON labels to YOLO labels names = ['bsb','wsb'] # class names file = Path(file) save_dir = make_dirs(file.stem) with open(file) as f: data = json.load(f) # load JSON for img in tqdm(data, desc=f'Converting {file}'): im_path = img['Labeled Data'] im = Image.open(requests.get(im_path, stream=True).raw if im_path.startswith('http') else im_path) # open width, height = im.size # image size label_path = save_dir / 'labels' / Path(img['External ID']).with_suffix('.txt').name image_path = save_dir / 'images' / img['External ID'] im.save(image_path, quality=95, subsampling=0) for label in img['Label']['objects']: # box top, left, h, w = label['bbox'].values() # top, left, height, width xywh = [(left + w / 2) / width, (top + h / 2) / height, w / width, h / height] # xywh normalized # class cls = label['value'] # class name if cls not in names: names.append(cls) line = names.index(cls), *xywh # YOLO format (class_index, xywh) with open(label_path, 'a') as f: f.write(('%g ' * len(line)).rstrip() % line + '\n') # Save dataset.yaml d = {'path': f&quot;../datasets/{file.stem} # dataset root dir&quot;, 'train': &quot;images/train # train images (relative to path) 128 images&quot;, 'val': &quot;images/val # val images (relative to path) 128 images&quot;, 'test': &quot; # test images (optional)&quot;, 'nc': len(names), 'names': names} # dictionary with open(save_dir / file.with_suffix('.yaml').name, 'w') as f: yaml.dump(d, f, sort_keys=False) # Zip if zip: print(f'Zipping as {save_dir}.zip...') os.system(f'zip -qr {save_dir}.zip {save_dir}') print('Conversion completed successfully!') if __name__ == '__main__': convert('export-2021-06-29T15_25_41.934Z.json') </code></pre> <p>Despite the corrected install of <code>utils</code>, when I have tried to run the script but I get the following error:</p> <pre><code>--------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[6], line 10 7 from PIL import Image 8 from tqdm import tqdm ---&gt; 10 from utils import make_dirs ImportError: cannot import name 'make_dirs' from 'utils' (C:\Users\fores\anaconda3\lib\site-packages\utils\__init__.py) </code></pre> <p>Is possible to give a alternative way to local directory in my machine and not call <code>make_dirs(file.stem)</code>? Please, any help with it?</p>
<python><json><yolo><yolov5><yolov4>
2023-01-06 16:20:59
2
1,883
Leprechault
75,033,404
12,519,771
Timeout on task cancellation
<p>Consider this code:</p> <pre class="lang-py prettyprint-override"><code>import asyncio from async_timeout import timeout async def sleep_coroutine(sleep=10): while True: print(&quot;Coroutine running&quot;) try: await asyncio.sleep(10) except Exception: print(&quot;Ignoring exception&quot;) async def main(): task = asyncio.get_event_loop().create_task(sleep_coroutine()) await asyncio.sleep(2) print(&quot;Cancelling&quot;) task.cancel() with timeout(3) as cancel_timeout: await task if cancel_timeout.expired: print(&quot;Timed out !&quot;) print(&quot;Finally done !&quot;) if __name__ == &quot;__main__&quot;: asyncio.get_event_loop().run_until_complete(main()) </code></pre> <p>There is clearly an issue in this code, making task cancellation impossible. When <code>task.cancel()</code> is called, <code>sleep_coroutine</code> catches the <code>CancelledError</code> as any other <code>Exception</code>. So it ignores it and <code>await task</code> becomes infinite.<br /> The broad except should be removed, or at least <code>asyncio.CancelledError</code> should be caught and re-raised before. But let's say I don't have hands on this code.</p> <p>Is there a way to put a timeout on the task cancellation ? Something like &quot;Wait for cancellation for 3 seconds with a timeout. And if timed out, 'force cancel' it&quot;.<br /> Using <code>async_timeout</code> here doesn't help since it uses the same task cancellation internally.</p>
<python><python-asyncio>
2023-01-06 16:15:27
0
957
RobBlanchard
75,033,370
15,445,589
Github Workflow / Action commit to repository returning 403
<p>I have a Github Workflow file where I bump the version of the python package (setup.py) and afterwards I want to push the changes to the repository the workflow runs in. But I get 403 no access granted back</p> <pre class="lang-yaml prettyprint-override"><code> build-package: permissions: contents: read id-token: write pull-requests: write issues: write repository-projects: write deployments: write packages: write runs-on: ubuntu-latest needs: test steps: - uses: actions/checkout@v3 &quot;&quot;&quot; STEPS BETWEEN&quot;&quot;&quot;&quot; - name: Set up Python 3.10 uses: actions/setup-python@v1 with: python-version: &quot;3.10&quot; - name: Install dependencies run: | python -m pip install --upgrade pip python -m pip install setuptools python -m pip install wheel python -m pip install bump - name: Bump version run: | bump --patch # add step that commits the setup.py and skips the ci/cd - name: Commit version run: | git config --global user.email &quot;github-actions[bot]@users.noreply.github.com&quot; git config --global user.name &quot;bot&quot; git commit -m &quot;Bump version&quot; setup.py git push - name: Build package run: | python setup.py sdist bdist_wheel </code></pre> <p>It returns</p> <pre class="lang-yaml prettyprint-override"><code>fatal: unable to access 'https://github.com/repository/': The requested URL returned error: 403 </code></pre>
<python><git><github><github-actions>
2023-01-06 16:11:45
1
641
Kevin Rump
75,033,296
1,422,058
Consistent way of getting labels from plot, bar and other drawings with matplotlib
<p>With line plots, I can get all labels like this and build a legend:</p> <pre><code>p1 = ax1.plot(x, 'P1', data=df) p2 = ax1.plot(x, 'P2', data=df) p3 = ax1.plot(x, 'P3', data=df) p4 = ax1.plot(x, 'P4', data=df) p = p1+p2+p3+p4 labs = [l.get_label() for l in p] ax1.legend(p, labs, loc=0, frameon=False) </code></pre> <p>When I have bar plots, this does not work anymore. E.g.:</p> <pre><code>b1 = ax1.bar(x-2*w, 'B1', data=df, width=w, label=&quot;TP&quot;) b2 = ax1.bar(x-w, 'B2', data=df, width=w, label=&quot;FN&quot;) b3 = ax1.bar(x, 'B3', data=df, width=w, label=&quot;FP&quot;) b4 = ax2.bar(x+w, 'B4', data=df, width=w, label=&quot;AP&quot;) b5 = ax2.bar(x+2*w, 'B5', data=df, width=w, label=&quot;AR&quot;) </code></pre> <p><code>b1.get_label()</code> returns a string similar to a <code>__str__</code> method:</p> <pre><code>'0 87 Name: TP, dtype: object' </code></pre> <p>Why does <code>.get_label()</code> not behave identically?</p>
<python><matplotlib>
2023-01-06 16:04:10
1
1,029
Joysn
75,033,149
4,796,942
Using python to connect or write a google apps script to a google sheet
<p>Is it true that it is not possible to directly connect a Google Apps Script (which uses JavaScript) to a Google Sheets spreadsheet using Python?</p> <p>I am asking this more as a design question: would it not be possible to keep a Google Apps Script in a file and simply use Python to connect it to a gsheets spreadsheet using the spreadsheet id? I have not found a way to do this, but it would be interesting to hear if anyone has found a way to do it and if so how.</p>
<javascript><python><google-apps-script><design-patterns><pygsheets>
2023-01-06 15:51:05
2
1,587
user4933
75,033,112
19,003,861
Calculate sum of columns matching 2 criteria within a for loop
<p>Looking to calculate the sum of models objects macthing 2 requirements.</p> <p>As an example this is what the table would look like</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">User</th> <th style="text-align: center;">Venue</th> <th style="text-align: right;">Points</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">User_1</td> <td style="text-align: center;">Venue_A</td> <td style="text-align: right;">50</td> </tr> <tr> <td style="text-align: left;">User_1</td> <td style="text-align: center;">Venue_B</td> <td style="text-align: right;">25</td> </tr> <tr> <td style="text-align: left;">User_2</td> <td style="text-align: center;">Venue_A</td> <td style="text-align: right;">1</td> </tr> <tr> <td style="text-align: left;">User_1</td> <td style="text-align: center;">Venue_A</td> <td style="text-align: right;">50</td> </tr> <tr> <td style="text-align: left;">User_2</td> <td style="text-align: center;">Venue_A</td> <td style="text-align: right;">1</td> </tr> <tr> <td style="text-align: left;">User_1</td> <td style="text-align: center;">Venue_C</td> <td style="text-align: right;">25</td> </tr> </tbody> </table> </div> <p>In this example, I would like to use a <code>for loop</code> to sum the points allocated against for each venue against User_1.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">User</th> <th style="text-align: center;">Venue</th> <th style="text-align: right;">Total Points</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">User_1</td> <td style="text-align: center;">Venue_A</td> <td style="text-align: right;">100</td> </tr> <tr> <td style="text-align: left;">User_1</td> <td style="text-align: center;">Venue_B</td> <td style="text-align: right;">25</td> </tr> <tr> <td style="text-align: left;">User_1</td> <td style="text-align: center;">Venue_C</td> <td style="text-align: right;">25</td> </tr> </tbody> </table> </div> <p>I am struggling to understand how I can express the need for a double check within the loop</p> <ol> <li>Step 1: check 1 - look up for <code>user.request</code></li> <li>Step 2:check 2 - look up for venue_x</li> <li>Step 3: add points;</li> <li>Step 4: ...restart process for next venue</li> </ol> <p>There are 2 problems with my current function:</p> <ol> <li>It calculates the sum of all points independently of the venue</li> <li>It's outside of the queryset used in the <code>for loop</code> and therefore cannot return the number sum specific to the <code>venue</code> (<em>assuming I could get the first point working</em>)</li> </ol> <p><strong>models.py</strong></p> <pre><code>class Itemised_Loyatly_Card(models.Model): user = models.ForeignKey(UserProfile, blank=True, null=True, on_delete=models.CASCADE) venue = models.ForeignKey(Venue, blank=True, null=True, on_delete=models.CASCADE) add_points = models.IntegerField(name = 'add_points', null = True, blank=True, default=0) </code></pre> <p><strong>views</strong></p> <pre><code>from django.db.models import Sum def user_loyalty_card(request): itemised_loyalty_cards = Itemised_Loyatly_Card.objects.filter(user=request.user.id) sum_objects = Itemised_Loyatly_Card.objects.filter(user=request.user.id).aggregate(Sum(&quot;add_points&quot;)) # considering the loop will be with itemised_loyalty_cards , should the sum be in the same queryset? 
return render(request,&quot;main/account/user_loyalty_card.html&quot;, {'itemised_loyalty_cards':itemised_loyalty_cards,'sum_objects':sum_objects}) </code></pre> <p><strong>template</strong></p> <pre><code>{%for itemised_loyatly_card in itemised_loyalty_cards %} {{itemised_loyatly_card.venue}} {{itemised_loyatly_card.sum_objects}}&lt;/td&gt; #**&lt;-- this is obviously not working** {%endfor%} </code></pre>
<python><django-models><django-views><django-templates>
2023-01-06 15:47:50
1
415
PhilM
75,033,069
1,667,018
Type hint for a dict that maps tuples containing classes to the corresponding instances
<p>I'm making a semi-singleton class <code>Foo</code> that can have (also semi-singleton) subclasses. The constructor takes one argument, let's call it a <code>slug</code>, and each (sub)class is supposed to have at most one instance for each value of <code>slug</code>.</p> <p>Let's say I have a subclass of <code>Foo</code> called <code>Bar</code>. Here is an example of calls:</p> <ol> <li><code>Foo(&quot;a slug&quot;)</code> -&gt; returns a new instance of <code>Foo</code>, saved with key <code>(Foo, &quot;a slug&quot;)</code>.</li> <li><code>Foo(&quot;some new slug&quot;)</code> -&gt; returns a new instance <code>Foo</code>, saved with key <code>(Foo, &quot;some new slug&quot;)</code>.</li> <li><code>Foo(&quot;a slug&quot;)</code> -&gt; we have the same class and <code>slug</code> from step 1, so this returns the same instance that was returned in step 1.</li> <li><code>Bar(&quot;a slug&quot;)</code> -&gt; we have the same slug as before, but a different class, so this returns a new instance of <code>Bar</code>, saved with key <code>(Bar, &quot;a slug&quot;)</code>.</li> <li><code>Bar(&quot;a slug&quot;)</code> -&gt; this returns the same instance of <code>Bar</code> that we got in step 4.</li> </ol> <p>I know how to implement this: class dictionary associating a tuple of <code>type</code> and <code>str</code> to instance, override <code>__new__</code>, etc. Simple stuff.</p> <p>My question is how to type annotate this dictionary?</p> <p>What I tried to do was something like this:</p> <pre class="lang-py prettyprint-override"><code>FooSubtype = TypeVar(&quot;FooSubtype&quot;, bound=&quot;Foo&quot;) class Foo: _instances: Final[dict[tuple[Type[FooSubtype], str], FooSubtype]] = dict() </code></pre> <p>So, the idea is &quot;whatever type is in the first element of the key (&quot;assigning&quot; it to <code>FooSubtype</code> type variable), the value needs to be an instance of that same type&quot;.</p> <p>This fails with <code>Type variable &quot;FooSubtype&quot; is unbound</code>, and I kinda see why.</p> <p>I get the same error if I split it like this:</p> <pre class="lang-py prettyprint-override"><code>FooSubtype = TypeVar(&quot;FooSubtype&quot;, bound=&quot;Foo&quot;) InstancesKeyType: TypeAlias = tuple[Type[FooSubtype], str] class Foo: _instances: Final[dict[InstancesKeyType, FooSubtype]] = dict() </code></pre> <p>The error points to the last line in this example, meaning it's the value type, not the key one, that is the problem.</p> <p><code>mypy</code> also suggests using <code>Generic</code>, but I don't see how to do it in this particular example, because the value's type should somehow relate to the key's type, not be a separate generic type.</p> <p>This works:</p> <pre class="lang-py prettyprint-override"><code>class Foo: _instances: Final[dict[tuple[Type[&quot;Foo&quot;], str], &quot;Foo&quot;]] = dict() </code></pre> <p>but it allows <code>_instance[(Bar1, &quot;x&quot;)]</code> to be of type <code>Bar2</code> (<code>Bar1</code> and <code>Bar2</code> here being different subclasses of <code>Foo</code>). It's not a big problem and I'm ok with leaving it like this, but I'm wondering if there is a better (stricter) approach.</p>
<python><mypy><python-typing>
2023-01-06 15:44:35
1
3,815
Vedran Šego
75,032,996
511,302
python subprocess behaviour changed between ubuntu 20 and 22?
<p>I upgraded my ubuntu installation to 22.04 from 18.04 yesterday. Now I notice that python virtual environment is no longer working as expect.</p> <p>I use python to run a lot of tools, and hence am highly dependent on subprocess library.</p> <p>However to my &quot;horror&quot; I notice that it changed quite a lot, even when I keep using python 3.8. Mostly I notice I can no longer interact and the output is no longer piped to the shell that executes the python script.</p> <pre><code>import subprocess def main(): proc = subprocess.run([&quot;echo&quot;, &quot;test&quot;], check=True, stdout=subprocess.PIPE) print('-------') print(proc.stdout) print(&quot;FINISH&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>If I call this with <code>python3.8 test.py</code> I notice that the output isn't displayed in the shell. However it is displayed when the prints happen.</p> <p>What changed and how do I fix this? So output of the subprocess is piped to the output and can be seen?</p> <p>Especially since a lot of tools are just running dockers (which in turn run git/javascript programs) and having output while the process is busy is kind of useful.</p>
<python><ubuntu-22.04>
2023-01-06 15:37:59
0
9,627
paul23
75,032,936
662,285
Python Get complete string before last slash
<p>I want to get the complete string path before the last occurrence of a slash (/).</p>
<pre><code>String : /d/d1/Projects/Alpha/tests

Output : /d/d1/Projects/Alpha
</code></pre>
<p>I am able to get the last part of the string after the last slash by doing</p>
<pre><code>String.split('/')[-1]
</code></pre>
<p>But I want to get <code>&quot;/d/d1/Projects/Alpha&quot;</code>.</p>
<p>Thanks.</p>
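<p>For reference, a small sketch of what the standard library offers for the &quot;everything before the last slash&quot; direction, using the example path above:</p>
<pre><code>path = '/d/d1/Projects/Alpha/tests'

print(path.rsplit('/', 1)[0])   # /d/d1/Projects/Alpha
print(path.rpartition('/')[0])  # /d/d1/Projects/Alpha
</code></pre>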
<python>
2023-01-06 15:33:04
3
4,564
Bokambo
75,032,742
4,822,772
SQL in python to include a where clause
<p>Here is the SQL code as a string in Python:</p>
<pre><code>sql_code=&quot;&quot;&quot;
SELECT VAR VAR2
FROM TABLE
WHERE VAR in ('A','B')
&quot;&quot;&quot;
</code></pre>
<p>I would like to create a variable for the list of values in the WHERE clause; this is what we can do:</p>
<pre><code>sql_code_arg1=&quot;&quot;&quot;
SELECT VAR VAR2
FROM TABLE
WHERE VAR in {}
&quot;&quot;&quot;
</code></pre>
<p>Then</p>
<pre><code>lst=[&quot;A&quot;,&quot;B&quot;]
print(sql_code_arg1.format(tuple(lst)))
</code></pre>
<p>Now, I would like to parameterize the entire condition in the WHERE clause:</p>
<pre><code>sql_code_arg2=&quot;&quot;&quot;
SELECT VAR VAR2
FROM TABLE
WHERE {}
&quot;&quot;&quot;
</code></pre>
<p>I tried something like this:</p>
<pre><code>print(sql_code_arg2.format(&quot;VAR in &quot;+tuple(lst)))
</code></pre>
<p>But it doesn't work.</p>
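<p>A minimal sketch of the idea, building the whole condition as a string first and then formatting it into <code>sql_code_arg2</code> (bind parameters are usually preferred over string formatting for anything user-supplied):</p>
<pre><code>lst = ['A', 'B']

# build the condition first, then drop it into the template
condition = 'VAR in {}'.format(tuple(lst))
print(sql_code_arg2.format(condition))

# note: a one-element list renders as ('A',), which most SQL dialects reject
</code></pre>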
<python><sql><python-3.x><prepared-statement>
2023-01-06 15:15:01
2
1,718
John Smith
75,032,730
11,092,636
sys.path.append which main.py will be imported
<p>Suppose I have a project with two files, <code>main.py</code> and <code>main2.py</code>. In <code>main2.py</code> I do <code>import sys; sys.path.append(path)</code> and then <code>import main</code>. Which <code>main.py</code> will be imported? The one in my project, or the one in <code>path</code> (the question only arises if there are two files named <code>main.py</code>, obviously)?</p>
<p>Can I force Python to import a specific one?</p>
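<p>A small sketch of how this can be checked and steered (here <code>path</code> stands for the directory being appended):</p>
<pre><code>import sys

sys.path.append(path)      # appended entries are searched after the existing ones
import main
print(main.__file__)       # shows which main.py was actually loaded

# if 'main' is already in sys.modules, the cached module is returned regardless of sys.path
# to prefer the other directory, put it first instead:
# sys.path.insert(0, path)
</code></pre>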
<python><sys>
2023-01-06 15:14:08
2
720
FluidMechanics Potential Flows
75,032,546
3,050,230
Github Actions not accessing download from Newspaper3k
<p>I've been trying to use GitHub Actions to run a Python script. Everything seems to run fine, except for a specific function that uses the Newspaper3k package. The article appears to download fine (<code>article.html</code> looks OK), but <code>article.parse()</code> does not work. This works fine on my local server, but not on GitHub. Is this related to accessing file locations that are different on GitHub? It's a private repository, in case that makes a difference.</p>
<p>My YAML workflow is as follows:</p>
<pre><code>build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout repo content
        uses: actions/checkout@v3 # checkout the repository content to github runner.
      - name: setup python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10' #install the python needed
          cache: 'pip'
      - name: install python packages
        run: |
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: execute py script # run file
        env:
          WORDPRESS_USER: ${{ secrets.WORDPRESS_USER }}
          WORDPRESS_PASSWORD: ${{ secrets.WORDPRESS_PASSWORD }}
        run: |
          python main.py
</code></pre>
<p>The function in question is provided below:</p>
<pre><code>def generate_article_summary(supplied_links):
    summary_list = &quot;&quot;
    for news_article in supplied_links[:5]:
        try:
            url = news_article
            article = Article(url, config=config)
            article.download()
            article.parse()
            article.nlp()
        except:
            summary_list = summary_list + &quot;\n&quot;
            pass
        summary_list = summary_list + &quot;\n&quot; + article.summary
    return summary_list
</code></pre>
<p>Any help would be much appreciated.</p>
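<p>One thing that may help narrow this down is surfacing the real exception in the Actions log instead of swallowing it with the bare <code>except</code>; here is a sketch of the same loop (reusing <code>Article</code>, <code>config</code> and the variables from the function above):</p>
<pre><code>import logging

for news_article in supplied_links[:5]:
    try:
        article = Article(news_article, config=config)
        article.download()
        article.parse()
        article.nlp()
    except Exception:
        # logs the traceback (blocked request, missing NLTK data, etc.) to the workflow output
        logging.exception('failed to process %s', news_article)
        summary_list = summary_list + '\n'
        continue
    summary_list = summary_list + '\n' + article.summary
</code></pre>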
<python><python-3.x><github-actions><newspaper3k>
2023-01-06 14:56:35
0
387
Dave C
75,032,272
418,875
Debugging a C++ extension for Python from a Jupyter Notebook in Visual Studio Code
<p>I have a C++ extension for Python (produced using <code>pybind11</code>). Debugging this C++ extension from a Python script in Visual Studio Code can be achieved by adding the following configuration in the <code>launch.json</code> file:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Debug Python/C++ Script&quot;, &quot;type&quot;: &quot;cppdbg&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${env:CONDA_PREFIX}/bin/python&quot;, &quot;args&quot;: [&quot;${workspaceFolder}/path/to/script.py&quot;], &quot;stopAtEntry&quot;: false, &quot;cwd&quot;: &quot;${workspaceFolder}&quot;, &quot;environment&quot;: [ { &quot;name&quot;: &quot;PYTHONPATH&quot;, &quot;value&quot;: &quot;${workspaceFolder}/path/to/cpp/extension/for/python:${env:PYTHONPATH}&quot; } ], &quot;MIMode&quot;: &quot;gdb&quot;, &quot;setupCommands&quot;: [ { &quot;description&quot;: &quot;Enable pretty-printing for gdb&quot;, &quot;text&quot;: &quot;-enable-pretty-printing&quot;, &quot;ignoreFailures&quot;: true } ] } ] } </code></pre> <p>where <code>script.py</code> is the Python script that makes use of the C++ extension.</p> <p>Is there a similar way to achieve this when the C++ extension is used from a Jupyter Notebook file instead? My approach above is also quite old. Wondering here if there is a newer and more convenient alternative nowadays (just in case).</p>
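<p>One approach that may carry over to notebooks is attaching the C++ debugger to the already-running Jupyter kernel (a <code>cppdbg</code> configuration using <code>&quot;request&quot;: &quot;attach&quot;</code> with a <code>processId</code>) rather than launching Python. The kernel's PID can be read from inside the notebook, for example:</p>
<pre class="lang-py prettyprint-override"><code># run in a notebook cell before exercising the C++ extension
import os
print(os.getpid())  # PID of the kernel process to attach gdb / cppdbg to
</code></pre>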
<python><visual-studio-code><jupyter-notebook><vscode-debugger><pybind11>
2023-01-06 14:32:40
1
1,200
A.L.
75,032,247
8,200,410
Add nan buffer to xarray dataset
<p>I have an xarray Dataset that will act as a mask for a different dataset. I'd like to create a buffer (of a configurable distance) around any nan values in the mask. I haven't seen anything that grows a buffer inside the existing grid, as opposed to expanding the array size with padded values. Below is some reproducible code to show what I mean (the datasets I'm using have 10,000s of x/y coordinates):</p>
<pre><code>import numpy as np
import xarray as xr

data = [[ 0., 1., 2., 3., np.nan],
        [ 0., 6., 4., np.nan, np.nan],
        [ 4., 3., 6., 4., np.nan],
        [ 1., 0., 3., 4., np.nan]]

y = [0, 1, 2, 3]
x = [0, 1, 2, 3, 4]

test = xr.Dataset({'band': xr.DataArray(data, coords=[y, x], dims=['y', 'x'])})
</code></pre>
<p>I'd like to create a dataset where, if I supplied a distance of 1, the above would look like this:</p>
<pre><code>[[ 0., 1., 2., nan, nan],
 [ 0., 6., nan, nan, nan],
 [ 4., 3., 6., nan, nan],
 [ 1., 0., 3., nan, nan]]
</code></pre>
<p>Ideally the buffer distance would be configurable. I've tried to do this via downsampling the image and then upsampling the downsampled image, but it was very slow and a struggle to get working properly, so I thought I'd see if I'm missing a better option.</p>
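<p>For what it's worth, one alternative to the resampling route is to treat the nan cells as a boolean mask and grow it with morphological dilation; a sketch assuming <code>scipy</code> is available (the number of iterations plays the role of the buffer distance, and with the default structuring element it reproduces the expected output above):</p>
<pre><code>import numpy as np
import xarray as xr
from scipy.ndimage import binary_dilation

distance = 1  # buffer size in cells

mask = np.isnan(test['band'].values)                # True where the original is nan
grown = binary_dilation(mask, iterations=distance)  # expand the mask by `distance` cells
grown = xr.DataArray(grown, coords=test['band'].coords, dims=test['band'].dims)

buffered = test['band'].where(~grown)               # nan wherever the grown mask is True
</code></pre>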
<python><python-xarray><resampling>
2023-01-06 14:30:59
1
441
JackLidge
75,032,181
1,562,489
Deprecating and Sunsetting endpoints in Django Rest Framework
<p>It's very hard to find documentation on deprecation decorators in Django REST Framework. The first few pages on Google list Django's changelog and the various functionality it is deprecating, rather than how to deprecate endpoints in your own app.</p>
<p>I have an API that I'm trying to develop in line with API evolution principles. As such, one of my endpoints needs to be marked as sunset/end-of-life/deprecated.</p>
<p>Can I do this with Django decorators? The idea would be to list a date when this endpoint will no longer be available. I do not have API version numbers (API evolution), so there is no API version to mark.</p>
<pre><code># Here's my old action that doesn't follow REST standards which I'd like to deprecate
@action(url_path=&quot;filter&quot;, methods=[&quot;post&quot;], detail=False)
def get_filtered(self, request, *args, **kwargs):
    .....
    return filtered_queryset
</code></pre>
<p>The closest solution I've found is below, but I think it's not the &quot;Django REST way&quot;:</p>
<pre><code>@action(url_path=&quot;filter&quot;, methods=[&quot;post&quot;], detail=False)
def get_filtered(self, request, *args, **kwargs):
    response = filtered_queryset
    response['Warning'] = 'This endpoint is deprecated and will be removed on April 1st 2023'
    return response
</code></pre>
<p>Can anyone offer better suggestions to achieve this?</p>
<p>Thanks in advance.</p>
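<p>A hedged sketch of the decorator idea, adding the RFC 8594 <code>Sunset</code> header and the (still draft) <code>Deprecation</code> header to whatever the action returns (the decorator name and the date are placeholders, and the wrapped action is assumed to return a DRF <code>Response</code> so that header assignment works):</p>
<pre><code>import functools

def sunset(http_date):
    def decorator(view_method):
        @functools.wraps(view_method)
        def wrapper(self, request, *args, **kwargs):
            response = view_method(self, request, *args, **kwargs)
            # assumes the action returns a rest_framework Response / HttpResponse
            response['Sunset'] = http_date        # e.g. 'Sat, 01 Apr 2023 00:00:00 GMT'
            response['Deprecation'] = http_date   # the Deprecation header is an IETF draft; adjust to the form you target
            return response
        return wrapper
    return decorator

@action(url_path='filter', methods=['post'], detail=False)
@sunset('Sat, 01 Apr 2023 00:00:00 GMT')
def get_filtered(self, request, *args, **kwargs):
    ...  # unchanged body from the action above
</code></pre>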
<python><django-rest-framework>
2023-01-06 14:25:34
1
1,198
David Sigley
75,032,145
2,876,994
How to pass an argument calling a function using ArgParse?
<p>I'm trying to call <code>function_3</code> with an argument, but I'm receiving an &quot;unrecognized arguments&quot; error. I'm calling it this way: <code>python script.py --pass test</code></p>
<pre><code>import argparse
import sys

def function_1():
    print('Function 1')

def function_2():
    print('Function 2')

def function_3(arg):
    print(f'Function 3 {arg}')

if __name__ == &quot;__main__&quot;:
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers()

    parser_f1 = subparsers.add_parser('fc1', help='Function 1')
    parser_f1.set_defaults(func=function_1)

    parser_f2 = subparsers.add_parser('fc2', help='Function 2')
    parser_f2.set_defaults(func=function_2)

    parser_f3 = subparsers.add_parser('fc3', help='Function 3')
    parser_f3.add_argument(&quot;pass&quot;, help=&quot;function 3 argument&quot;)
    parser_f3.set_defaults(func=function_3)

    if len(sys.argv) &lt;= 1:
        sys.argv.append('--help')

    options = parser.parse_args()
    options.func()
</code></pre>
<h2>Error</h2>
<pre><code>usage: argtest.py [-h] {fc1,fc2,fc3} ...
argtest.py: error: unrecognized arguments: --arg
</code></pre>
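<p>For reference, a sketch of the usual subparser dispatch pattern, where the handler receives the parsed namespace and the subcommand name comes first on the command line (the positional is renamed here because <code>pass</code> is a Python keyword and can't be read back with attribute access):</p>
<pre><code>def function_3(args):
    print(f'Function 3 {args.value}')

parser_f3 = subparsers.add_parser('fc3', help='Function 3')
parser_f3.add_argument('value', help='function 3 argument')
parser_f3.set_defaults(func=function_3)

options = parser.parse_args()
options.func(options)   # function_1/function_2 would need to accept (and can ignore) the namespace too

# invoked as:
#   python script.py fc3 test
</code></pre>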
<python><command-line-interface><argparse>
2023-01-06 14:22:15
2
1,552
Shinomoto Asakura