Column              Type          Min                   Max
QuestionId          int64         74.8M                 79.8M
UserId              int64         56                    29.4M
QuestionTitle       string        15 chars              150 chars
QuestionBody        string        40 chars              40.3k chars
Tags                string        8 chars               101 chars
CreationDate        string date   2022-12-10 09:42:47   2025-11-01 19:08:18
AnswerCount         int64         0                     44
UserExpertiseLevel  int64         301                   888k
UserDisplayName     string        3 chars               30 chars (βŒ€ = nullable)
75,080,626
14,104,321
Numpy: improve arrays operations
<p>As an example, I have the 2 following 1d-arrays:</p> <pre><code>import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([5, 6])
</code></pre> <p>Now, I need to multiply <code>a</code> by each element of <code>b</code> in order to obtain a 2d-array:</p> <pre><code>[[5 10 15 20],
 [6 12 18 24]]
</code></pre> <p>For now, I solved the problem by creating 2 new 2d-arrays by repeating either rows or columns and then performing the multiplication, i.e.:</p> <pre><code>a_2d = np.repeat(a[np.newaxis, :], b.size, axis=0)
b_2d = np.tile(b[:, np.newaxis], a.size)

print(a_2d*b_2d)
#[[ 5 10 15 20]
# [ 6 12 18 24]]
</code></pre> <p>Is there a way to make this operation more efficient?</p> <p>This would not be limited to multiplications only but, ideally, applicable to all 4 operations. Thanks a lot!</p>
<python><arrays><numpy>
2023-01-11 09:12:32
4
582
mauro
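For the question above, NumPy broadcasting removes the need for `repeat`/`tile` entirely; a minimal sketch:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([5, 6])

# b[:, np.newaxis] has shape (2, 1); broadcasting it against a (shape (4,))
# yields a (2, 4) result without materialising any repeated copies.
result = b[:, np.newaxis] * a
print(result)
# [[ 5 10 15 20]
#  [ 6 12 18 24]]
```

The same pattern covers the other three operations (`+`, `-`, `/`) by swapping the operator.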
75,080,477
8,033,003
ImportError when running watson machine learning locally
<p>I am trying to communicate with Watson Machine Learning from my local Windows machine, using VS Code to run a Jupyter notebook in a virtual environment, but I cannot get it to work.</p> <p>I installed</p> <pre><code>!pip install tensorflow
!pip install ibm_watson_machine_learning
</code></pre> <p>I created and trained a Keras model in the same notebook --&gt; tensorflow definitely is installed and functioning. But when I run:</p> <pre><code>from ibm_watson_machine_learning import APIClient

LOCATION = 'https://us-south.ml.cloud.ibm.com'
API_KEY = 'xxx-this-is-my-api-key-xxx'

wml_credentials = {
    &quot;apikey&quot;: API_KEY,
    &quot;url&quot;: LOCATION
}

wml_client = APIClient(wml_credentials)
</code></pre> <p>I get an error:</p> <pre><code>ImportError: The system lacks installations of pyspark, scikit-learn, pandas, xgboost, mlpipelinepy, ibmsparkpipeline and tensorflow. At least one of the libraries is required for the repository-client to be used
</code></pre> <p>Does anyone know what to do about that?</p>
<python><tensorflow><ibm-cloud><ibm-watson><watson-ml>
2023-01-11 08:58:08
2
693
Maximilian Jesch
75,080,439
10,202,292
Disallow passing f-strings as argument
<p>I am reporting data from some tests, and each test can have a 'summary' and a 'details' section. The 'summary' field for any particular test should be static, with any additional dynamic information going into the details field, as follows:</p> <pre><code>run_test()
if condition:
    report_test_data(summary=&quot;condition met&quot;,
                     details=f&quot;{component} ended with {state}&quot;)
else:
    report_test_data(summary=&quot;condition not met&quot;,
                     details=f&quot;{component} ended with {state}&quot;)
</code></pre> <p>However, this only applies to these calls to <code>report_test_data</code>, and there is nothing to stop another test from swapping these around, or putting all the data into the 'summary' field:</p> <pre><code>run_test()
if condition:
    report_test_data(summary=f&quot;{component} ended with {state} - condition met&quot;,
                     details=&quot;&quot;)
else:
    report_test_data(summary=f&quot;{component} ended with {state} - condition not met&quot;,
                     details=&quot;&quot;)
</code></pre> <p>I am analyzing the test data based on the summary, so any particular ending state (e.g. <code>condition = True</code>) should have a static return string. I thought about making a class that manually defines every possible 'summary' string, but that quickly becomes untenable with more tests when a given test can have tens of possible ending states. The best option I can think of is if I could force the value passed into 'summary' to be a normal string. <strong>Is there any way to disallow passing f-strings into a particular function?</strong></p> <p>Note: I use pylint, so if there's an easy way to make it call these out, that would work as well.</p>
<python><pylint><f-string>
2023-01-11 08:55:21
1
403
tigerninjaman
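One practical answer to the question above: an f-string is compiled to an ordinary `str` before the call, so the callee cannot distinguish it at runtime. A sketch (the `Summary` enum and its members are hypothetical) that sidesteps this by forcing callers to pick from a fixed set instead of passing arbitrary strings:

```python
from enum import Enum

class Summary(Enum):
    CONDITION_MET = "condition met"
    CONDITION_NOT_MET = "condition not met"

def report_test_data(summary: Summary, details: str = "") -> str:
    # Reject anything that is not a Summary member, including f-strings,
    # which arrive here as plain str objects.
    if not isinstance(summary, Summary):
        raise TypeError("summary must be a Summary member, not an arbitrary string")
    return f"{summary.value}: {details}"

print(report_test_data(Summary.CONDITION_MET, "component ended with state"))
# condition met: component ended with state
```

A type checker (mypy, or pylint's typing checks with annotations) will then also flag any call site that passes a bare string.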
75,080,380
15,307,844
Why/when is numpy.equal() 10x faster when working with x.T.copy() instead of x.T?
<p><strong>Context:</strong> I made the following observations when trying to optimize my code for speed. It involves the use of large square matrices with up to 3.6e9 elements (assuming I can speed it up sufficiently ...).</p> <p><strong>Observations:</strong> Consider a square numpy array <code>x</code>. Doing <code>x == x.T</code> takes significantly longer than <code>x == x</code>, <code>x == x.T.copy()</code>, or even <code>x.T == x.T</code>. If <code>x</code> has 1e8 elements, <code>x == x.T</code> is 10x times slower than all other variants (on my system), including doing <code>x.T.copy()</code>.</p> <p><strong>Question:</strong> I suppose that the speed loss is related to the fact that <code>x.T</code> <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.transpose.html#numpy.ndarray.transpose" rel="nofollow noreferrer">returns a view of x</a> ? I am however surprised at the amplitude of the loss. I was even more surprised to realize that working with two transposed arrays, or with copies of the transposed array appears to have very little overhead.</p> <p>I am now left wondering if <strong>I should start systematically copying my transposed numpy arrays</strong> (i.e. use <code>x.T.copy()</code> instead of <code>x.T</code>) to gain significant speed in my code ?</p> <p>Is this expected behavior from numpy, or could there be something off with my setup (incl. 
the underlying BLAS/LAPACK) ?</p> <p><strong>MWE:</strong></p> <pre><code>from datetime import datetime import numpy as np def time_loop(a, b, niter=10): time = 0 for _ in range(niter): start = datetime.now() _ = (a == b) time += (datetime.now()-start).total_seconds() return time/niter very_start = datetime.now() for size in [100, 1000, 10000, 15000, 20000]: print(f'\nsize: {size}') # Create the array once to not influence the timings x = np.random.rand(size, size) # Create the transpose too already x_t = x.T x_t_c = x.T.copy() # Case 1: straight vs straight print(f' x == x: {time_loop(x, x):.6f}s') # Case 2: straight vs transposed print(f' x == x.T: {time_loop(x, x_t):.6f}s') # Case 3: straight vs transposed.copy() print(f' x == x.T.copy(): {time_loop(x, x_t_c):.6f}s') # Case 4: transposed vs transposed print(f' x.T == x.T: {time_loop(x_t, x_t):.6f}s') print(f'\nTotal runtime: {(datetime.now()-very_start).total_seconds():.1f}s') </code></pre> <p>returns</p> <pre><code>size: 100 x == x: 0.000005s x == x.T: 0.000010s x == x.T.copy(): 0.000004s x.T == x.T: 0.000004s size: 1000 x == x: 0.000412s x == x.T: 0.001698s x == x.T.copy(): 0.000887s x.T == x.T: 0.000396s size: 10000 x == x: 0.070157s x == x.T: 1.024655s x == x.T.copy(): 0.099336s x.T == x.T: 0.067587s size: 15000 x == x: 0.151993s x == x.T: 2.561973s x == x.T.copy(): 0.223449s x.T == x.T: 0.152339s size: 20000 x == x: 0.280847s x == x.T: 6.938656s x == x.T.copy(): 0.396516s x.T == x.T: 0.275735s Total runtime: 141.5s </code></pre> <p>on my system: OS X, Python 3.10.6 via miniconda/conda-forge, numpy 1.23.4</p> <pre><code> In [111]: np.show_config() blas_info: libraries = ['cblas', 'blas', 'cblas', 'blas'] library_dirs = ['/Some/secret/path'] include_dirs = ['/Some/secret/path'] language = c define_macros = [('HAVE_CBLAS', None)] blas_opt_info: define_macros = [('NO_ATLAS_INFO', 1), ('HAVE_CBLAS', None)] libraries = ['cblas', 'blas', 'cblas', 'blas'] library_dirs = ['/Some/secret/path'] include_dirs = 
['/Some/secret/path'] language = c lapack_info: libraries = ['lapack', 'blas', 'lapack', 'blas'] library_dirs = ['/Some/secret/path'] language = f77 lapack_opt_info: libraries = ['lapack', 'blas', 'lapack', 'blas', 'cblas', 'blas', 'cblas', 'blas'] library_dirs = ['/Some/secret/path'] language = c define_macros = [('NO_ATLAS_INFO', 1), ('HAVE_CBLAS', None)] include_dirs = ['/Some/secret/path'] Supported SIMD extensions in this NumPy install: baseline = SSE,SSE2,SSE3 found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2 not found = AVX512F,AVX512CD,AVX512_KNL,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL </code></pre>
<python><numpy><performance>
2023-01-11 08:50:44
0
441
fpavogt
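The symptom in the question above is consistent with memory layout rather than BLAS: `x.T` is a strided view, so `x == x.T` walks one operand column-wise and loses cache locality, while a copy restores contiguity and linear access. A small sketch of the layout facts involved:

```python
import numpy as np

x = np.random.rand(1000, 1000)

# x.T is a view: same buffer, swapped strides, not C-contiguous.
assert x.T.base is x
assert not x.T.flags['C_CONTIGUOUS']

# A copy is contiguous again, so x == x_tc streams through memory linearly,
# which is why it benchmarks close to x == x in the question's MWE.
x_tc = np.ascontiguousarray(x.T)   # equivalent to x.T.copy() here
assert x_tc.flags['C_CONTIGUOUS']

# All variants are elementwise-equivalent; only the access pattern differs.
assert np.array_equal(x == x.T, x == x_tc)
```

Whether systematically copying pays off depends on how many operations reuse the copy: the copy itself costs one full pass over the array, so it amortises only when the transposed operand is compared (or otherwise traversed) more than once.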
75,080,340
10,413,550
Pytest not able to run test where script A importing another script B in the same folder level as A and giving me ModuleNotFoundError
<p>I am trying to run the unit test using pytest in this project, here main_0.py is importing s3 file.</p> <p>I am getting <code>ModuleNotFoundError: no module named 's3'</code></p> <p>Project Folder Structure</p> <pre><code> some_project └───src β”œβ”€β”€β”€main β”‚ └───lambda_function β”‚ └───some β”‚ main_0.py β”‚ s3.py β”‚ └───test └───unittest └───lambda_function └───some test_main_0.py test_s3.py </code></pre> <p>main_0.py</p> <pre><code>from s3 import PrintS3 def lambda_handler(): obj = PrintS3() res = obj.print_txt() return res </code></pre> <p>s3.py</p> <pre><code>class PrintS3: def __init__(self) -&gt; None: self.txt = &quot;Hello&quot; def print_txt(self): print(self.txt) return self.txt </code></pre> <p>test_main_0.py</p> <pre><code>import unittest class TestSomeMain(unittest.TestCase): def test_main_0(self): from src.main.lambda_function.some.main_0 import lambda_handler res = lambda_handler() assert res == &quot;Hello&quot; </code></pre> <p>test_s3.py is empty.</p> <p>I also tried adding an empty <code>__init__.py</code> file in both the dir but still the same error Project Folder Structure after adding <code>__init__.py</code> file</p> <pre><code>some_project └───src β”œβ”€β”€β”€main β”‚ └───lambda_function β”‚ └───some β”‚ main_0.py β”‚ s3.py β”‚ __init__.py β”‚ └───test └───unittest └───lambda_function └───some test_main_0.py test_s3.py __init__.py </code></pre> <p>the command I am using to run pytest:</p> <pre><code>python -m pytest ./src/test </code></pre> <p>and I am inside <code>some_project</code> folder and also using <code>main_0.py</code> instead of <code>main.py</code> because to not get confused with main folder</p> <p><strong>Edit 2</strong>:</p> <p>I am to run the test case successfully by adding <code>sys.path</code> in the <code>test_main_0.py</code> file <s>but it is breaking linting and hinting in the code editor (vscode)</s> it didn't broke the linting and hinting, both import statement works but is there any better way.</p> <p>new 
test_main_0.py:</p> <pre><code>import unittest import os import sys sys.path.append(os.path.abspath(&quot;./src/main/lambda_function/some/&quot;)) class TestSomeMain(unittest.TestCase): def test_main_0(self): from src.main.lambda_function.some.main_0 import lambda_handler # this works from main_0 import lambda_handler # this also works but break linting and hinting in the code editor res = lambda_handler() assert res == &quot;Hello&quot; </code></pre>
<python><python-3.x><unit-testing><pytest><python-unittest>
2023-01-11 08:46:43
2
393
SKJ
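The usual pytest fix for the layout in the question above is a root-level `conftest.py` (or, with pytest ≥ 7, a `pythonpath` line in `pytest.ini`) rather than `sys.path` hacks inside test files; a sketch whose paths mirror the question's tree:

```python
# conftest.py, placed at the some_project root so pytest loads it before
# collecting tests (a sketch; the path below mirrors the question's layout).
import os
import sys

# Make `from s3 import PrintS3` inside main_0.py resolvable when the test
# imports src.main.lambda_function.some.main_0.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                "src", "main", "lambda_function", "some"))
```

With pytest 7+, the same effect comes from configuration instead of code: `pythonpath = src/main/lambda_function/some` under `[pytest]` in `pytest.ini`, which also keeps editors' linting happy because no test file mutates `sys.path`.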
75,080,119
6,054,404
Freezing code and including .libs folders
<p>I have a successful build of a python 3.9 program I'm working on. However, the following folders need to be manually copied into the lib folder:</p> <ul> <li>pyproj.libs</li> <li>scipy.libs</li> <li>Shapely.libs</li> </ul> <p>I have the setup.py; how do I set it up so that these folders are copied to the correct location?</p> <p>Here's my <code>options</code> definition:</p> <pre><code>options = {
    'build_exe': {
        'includes': [],
        'packages': [
            'PyQt5', 'scipy', 'os', 'sys', 'datetime', 'pandas',
            'scipy.spatial.distance', 'open3d', 'geopandas', 'laspy',
            'numpy', 'alphashape', 'shapely.speedups'
        ],
        'excludes': ['tkinter', 'matplotlib', 'sqlite3']
    }
}
</code></pre>
<python><cx-freeze>
2023-01-11 08:25:54
0
1,993
Spatial Digger
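cx_Freeze's `build_exe` options include `include_files`, a list of `(source, destination)` pairs copied into the build directory, which covers the `.libs` folders above. A sketch (the source location is an assumption about where pip placed the folders on this machine):

```python
import os
import sysconfig

# Where pip installs packages (and their bundled .libs folders) in this
# environment; an assumption that may need adjusting for conda or venvs.
site_packages = sysconfig.get_paths()["purelib"]

libs = ["pyproj.libs", "scipy.libs", "Shapely.libs"]
include_files = [
    (os.path.join(site_packages, name), os.path.join("lib", name))
    for name in libs
]

options = {
    "build_exe": {
        "include_files": include_files,
        # ... the existing 'includes' / 'packages' / 'excludes' entries ...
    }
}
```

Each pair copies the whole folder to `lib/<name>` inside the frozen build, which matches the manual copying described in the question.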
75,080,042
7,074,969
ModuleNotFoundError: No module named 'Cython' even though Cython is installed in Azure Devops
<p>I'm setting up some CI/CD for a Python app (Azure Function) using Azure Devops and I've stumbled upon an issue with one of the requirements. The error I receive is <strong>ModuleNotFoundError: No module named 'Cython'</strong> while trying to install <code>pymssql-2.1.4</code>. Before this requirement, I have set <code>Cython==0.29.21</code> to be installed, and it is installed, as can be seen throughout the logs:</p> <pre><code>Collecting Cython==0.29.21
  Downloading Cython-0.29.21-cp38-cp38-manylinux1_x86_64.whl (1.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.9/1.9 MB 73.5 MB/s eta 0:00:00
</code></pre> <p>I'm not sure why this error happens. I did some research and saw that it can be a problem with the <code>pip</code> version, so I update <code>pip</code> before doing the installation, but the error keeps persisting. This is my yaml code for the part where I install the requirements:</p> <pre><code>python3 -m venv worker_venv
source worker_venv/bin/activate
python3 -m pip install pip --upgrade
pip3 install setuptools
pip3 install -r requirements.txt
</code></pre> <p>I use Python 3.8 and Ubuntu 20.04. Does somebody know how I can fix this?</p>
<python><azure-devops><azure-functions><requirements.txt>
2023-01-11 08:17:25
1
1,013
anthino12
75,079,985
5,342,009
How to change subscription from PlanA to PlanB using Stripe with Python/Django
<p>I can create a Stripe subscription for a customer using the following code:</p> <pre><code>subscription = stripe.Subscription.create(
    customer=stripe_customer_id,
    items=[
        {&quot;plan&quot;: stripe_plan_A},
    ]
)
</code></pre> <p>Here is what I see on the Stripe Dashboard.</p> <p><a href="https://i.sstatic.net/tPa7a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tPa7a.png" alt="enter image description here" /></a></p> <p>I want to upgrade the subscription to stripe_plan_B while keeping the rest of the configuration the same.</p> <p>I have experimented with the following code to change the plan from stripe_plan_A to stripe_plan_B; however, this command results in having two subscriptions at the same time, rather than a single subscription.</p> <pre><code>stripe.Subscription.modify(
    subscription.id,
    items=[
        {&quot;plan&quot;: selected_membership.stripe_plan_B},
    ]
)
</code></pre> <p>As you can see below, we can have two subscriptions at the same time for the same user. (Stripe docs say you can have n number of subscriptions.)</p> <p><a href="https://i.sstatic.net/gCvFj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gCvFj.png" alt="enter image description here" /></a></p> <p>Is there any suggestion for this, so I can fluently change between the plans?</p>
<python><django><stripe-payments>
2023-01-11 08:11:56
2
1,312
london_utku
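Per Stripe's subscription-update docs, swapping a plan in place requires passing the id of the *existing* subscription item; a bare `{"plan": ...}` entry adds a new item instead of replacing the old one. A sketch of the call parameters (variable names mirror the question; not verified against a live account):

```python
def build_plan_swap_params(subscription, new_plan_id):
    # The subscription's current item id tells Stripe which item to update
    # rather than which item to add.
    current_item_id = subscription["items"]["data"][0]["id"]
    return {
        "items": [{"id": current_item_id, "plan": new_plan_id}],
    }

# Usage with the question's objects:
# stripe.Subscription.modify(subscription.id,
#                            **build_plan_swap_params(subscription, stripe_plan_B))
```

Proration behaviour on the swap can then be tuned with the `proration_behavior` parameter of `Subscription.modify`.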
75,079,735
5,641,924
Logs 100K request per second in real-time with python logging package
<p>I have to get the data from one of my localhost ports and saved it to a log file. To do this, I use the <code>logging</code> package. Here is my code:</p> <ol> <li><p>In the first step I designed my custom <code>TimedRotatingFileHandler</code>:</p> <pre><code>from logging.handlers import BaseRotatingHandler, TimedRotatingFileHandler from itertools import islice import logging import time import os _intervals = { &quot;S&quot;: 1, &quot;M&quot;: 60, &quot;H&quot;: 60 * 60, &quot;D&quot;: 60 * 60 * 24, &quot;MIDNIGHT&quot;: 60 * 60 * 24, &quot;W&quot;: 60 * 60 * 24 * 7} class CustomTimedRotatingFileHandler(BaseRotatingHandler): &quot;&quot;&quot;File handler that uses the current time in the log filename. The time is quantisized to a configured interval. See TimedRotatingFileHandler for the meaning of the when, interval, utc and atTime arguments. If backupCount is non-zero, then older filenames that match the base filename are deleted to only leave the backupCount most recent copies, whenever opening a new log file with a different name. 
&quot;&quot;&quot; def __init__(self, filenamePattern, when=&quot;h&quot;, interval=1, backupCount=0, encoding=None, delay=False, utc=False, atTime=None): self.when = when.upper() self.backupCount = backupCount self.utc = utc self.atTime = atTime try: key = &quot;W&quot; if self.when.startswith(&quot;W&quot;) else self.when self.interval = _intervals[key] except KeyError: raise ValueError( f&quot;Invalid rollover interval specified: {self.when}&quot; ) from None if self.when.startswith(&quot;W&quot;): if len(self.when) != 2: raise ValueError( &quot;You must specify a day for weekly rollover from 0 to 6 &quot; f&quot;(0 is Monday): {self.when}&quot; ) if not &quot;0&quot; &lt;= self.when[1] &lt;= &quot;6&quot;: raise ValueError( f&quot;Invalid day specified for weekly rollover: {self.when}&quot; ) self.dayOfWeek = int(self.when[1]) self.interval = self.interval * interval self.pattern = os.path.abspath(os.fspath(filenamePattern)) # determine best time to base our rollover times on # prefer the creation time of the most recently created log file. t = now = time.time() entry = next(self._matching_files(), None) if entry is not None: t = entry.stat().st_ctime while t + self.interval &lt; now: t += self.interval self.rolloverAt = self.computeRollover(t) # delete older files on startup and not delaying if not delay and backupCount &gt; 0: keep = backupCount if os.path.exists(self.baseFilename): keep += 1 delete = islice(self._matching_files(), keep, None) for entry in delete: os.remove(entry.path) # Will set self.baseFilename indirectly, and then may use # self.baseFilename to open. So by this point self.rolloverAt and # self.interval must be known. 
super().__init__(filenamePattern, &quot;a&quot;, encoding, delay) @property def baseFilename(self): &quot;&quot;&quot;Generate the 'current' filename to open&quot;&quot;&quot; # use the start of *this* interval, not the next t = self.rolloverAt - self.interval if self.utc: time_tuple = time.gmtime(t) else: time_tuple = time.localtime(t) dst = time.localtime(self.rolloverAt)[-1] if dst != time_tuple[-1] and self.interval &gt; 3600: # DST switches between t and self.rolloverAt, adjust addend = 3600 if dst else -3600 time_tuple = time.localtime(t + added) return time.strftime(self.pattern, time_tuple) @baseFilename.setter def baseFilename(self, _): # assigned to by FileHandler, just ignore this as we use self.pattern # instead pass def _matching_files(self): &quot;&quot;&quot;Generate DirEntry entries that match the filename pattern. The files are ordered by their last modification time, most recent files first. &quot;&quot;&quot; matches = [] pattern = self.pattern for entry in os.scandir(os.path.dirname(pattern)): if not entry.is_file(): continue try: time.strptime(entry.path, pattern) matches.append(entry) except ValueError: continue matches.sort(key=lambda e: e.stat().st_mtime, reverse=True) return iter(matches) def doRollover(self): &quot;&quot;&quot;Do a roll-over. This basically needs to open a new generated filename. &quot;&quot;&quot; if self.stream: self.stream.close() self.stream = None if self.backupCount &gt; 0: delete = islice(self._matching_files(), self.backupCount, None) for entry in delete: os.remove(entry.path) now = int(time.time()) rollover = self.computeRollover(now) while rollover &lt;= now: rollover += self.interval if not self.utc: # If DST changes and midnight or weekly rollover, adjust for this. 
if self.when == &quot;MIDNIGHT&quot; or self.when.startswith(&quot;W&quot;): dst = time.localtime(now)[-1] if dst != time.localtime(rollover)[-1]: rollover += 3600 if dst else -3600 self.rolloverAt = rollover if not self.delay: self.stream = self._open() # borrow *some* TimedRotatingFileHandler methods computeRollover = TimedRotatingFileHandler.computeRollover shouldRollover = TimedRotatingFileHandler.shouldRollover def setup_handlers(): formatter = logging.Formatter('%(message)s') loggers = {'log_t': logging.getLogger('t'), 'log_b': logging.getLogger('b'), 'log': logging.getLogger('log')} for logger in loggers: loggers[logger].setLevel(logging.INFO) if logger == 'log': when = 'M' interval = 5 backup_count = 24 filename_pattern = 'logs/%Y_%m_%d_%H_%M.txt' else: when = 'H' interval = 1 backup_count = 2 filename_pattern = f'logs/%Y_%m_%d_%H_{logger}.txt' handler = CustomTimedRotatingFileHandler(filenamePattern=filename_pattern, when=when, interval=interval, backupCount=backup_count) handler.setFormatter(formatter) loggers[logger].addHandler(handler) return loggers </code></pre> </li> <li><p>In the second step, I created a <code>Syslog</code> class to get the data from the port:</p> <pre><code>from handler import setup_handlers import socketserver import os import re HOST, PORT = &quot;0.0.0.0&quot;, 8080 PATTERNS = [r'pattern1', r'pattern2', r'pattern3', r'pattern4', r'pattern5', r'pattern6', r'pattern7', r'pattern8'] class SysLogUDPHandler(socketserver.BaseRequestHandler): @staticmethod def __check_log_type(text): if 'LOG_T' in text: log_type = 'teardown' elif 'LOG_B' in text: log_type = 'built' else: log_type = None return log_type @staticmethod def __get_pattern(pattern, text): result = re.split(pattern, text) if len(result) &gt; 1: result = result[1].split(',')[0] else: result = ' ' return result def __get_data(self, data) -&gt; str: logs = '' for pattern in PATTERNS: log = self.__get_pattern(pattern=pattern, text=data) if log: logs += log logs += ',' return logs 
def handle(self) -&gt; None: request = self.request[0].decode() log_type = self.__check_log_type(request) if log_type in ['log_t', 'log_b']: logs = self.__get_data(data=request) loggers[log_type].info(logs) else: print('Invalid log type!') data = bytes.decode(self.request[0]) loggers['log'].info(data) if __name__ == &quot;__main__&quot;: try: os.makedirs('logs', exist_ok=True) loggers = setup_handlers() server = socketserver.UDPServer((HOST, PORT), SysLogUDPHandler) server.serve_forever(poll_interval=0.5) except (IOError, SystemExit): raise except KeyboardInterrupt: print(&quot;Ctrl+C Pressed. Shutting down.&quot;) </code></pre> </li> </ol> <p>The challenge is the rate of requests per second is around 100,000 which is really high. Therefore, I miss a huge part of the requests. I guess it is around 80%! What is the best practice to log this amount of data?</p>
<python><logging><syslog>
2023-01-11 07:46:28
0
642
Mohammadreza Riahi
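For the throughput problem above, the standard-library answer is to decouple the UDP handler from disk I/O with `logging.handlers.QueueHandler`/`QueueListener`: `handle()` then only enqueues a record, and a background thread performs the file writes. A minimal sketch (the file handler stands in for the question's `CustomTimedRotatingFileHandler`):

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_queue = queue.Queue(-1)  # unbounded; a bounded queue would shed load instead

# Stand-in for CustomTimedRotatingFileHandler: any logging.Handler works. It
# runs on the listener's background thread, so the hot path in handle()
# reduces to a single queue.put().
file_handler = logging.FileHandler("app.log", delay=True)

listener = QueueListener(log_queue, file_handler, respect_handler_level=True)
listener.start()

logger = logging.getLogger("log")
logger.setLevel(logging.INFO)
logger.addHandler(QueueHandler(log_queue))

logger.info("sample syslog line")   # returns immediately; disk I/O is deferred
listener.stop()                     # drains the queue and flushes the handler
```

Beyond that, the per-packet regex work in `__get_data` is the other hot spot; precompiling the patterns with `re.compile` and moving parsing onto the listener side (logging the raw datagram) keeps the UDP socket from backing up.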
75,079,637
216,229
type annotation for return type of method where that depends on an attribute of the class
<p>Say I have:</p> <pre class="lang-py prettyprint-override"><code>from typing import Type

class A: pass

class B: pass

class Foo:
    factory: Type = A

    def make(self) -&gt; ?:
        return self.factory()

class Bar(Foo):
    factory: Type = B
</code></pre> <p>What type annotation do I use on <code>make</code> to indicate that the type returned is that of the <code>factory</code> attribute?</p>
<python>
2023-01-11 07:35:48
1
11,138
Chris Withers
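One way to express "`make()` returns whatever `factory` constructs" is to make the base class generic in the factory's type; a sketch (class names reuse the question's):

```python
from typing import Generic, Type, TypeVar

T = TypeVar("T")

class Foo(Generic[T]):
    factory: Type[T]

    def make(self) -> T:
        # The checker infers T from the Foo[...] parameterisation below.
        return self.factory()

class A: pass
class B: pass

class FooA(Foo[A]):
    factory = A

class BarB(Foo[B]):
    factory = B

assert isinstance(FooA().make(), A)
assert isinstance(BarB().make(), B)
```

With this, mypy sees `FooA().make()` as `A` and `BarB().make()` as `B` without any casts at the call sites.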
75,079,329
1,397,922
How to properly open and encode CSV file in Python to be processed in Odoo framework
<p>I tried to import a CSV file in an Odoo custom module, but my logic stops at the point where I decode the file object. Below is my code:</p> <pre class="lang-py prettyprint-override"><code>def import_csv(self, csv_file):
    reader = csv.reader(csv_file)
    next(reader)
    for row in reader:
        record = {
            'name': row[0],
            'component_name': row[1],
            'percentage': row[2],
            'processing_start_date': row[3],
            'finished_real_date': row[4],
        }
        self.env['item.master'].create(record)

def action_import_csv(self):
    outfile = open('test.csv', 'r')
    data_record = outfile.read()
    ir_values = {
        'name': 'test.csv',
        'datas': data_record,
    }
    data_id = self.env['ir.attachment'].sudo().create(ir_values)
    self.import_csv(data_id)
</code></pre> <p>It raises an error:</p> <blockquote> <p>binascii.Error: Invalid base64-encoded string: number of data characters (141) cannot be 1 more than a multiple of 4</p> </blockquote> <p>What is actually wrong in my code?</p> <p>I've also tried putting this line:</p> <pre class="lang-py prettyprint-override"><code>data_record = base64.b64encode(outfile.read())
</code></pre> <p>right after the file is opened, but a different error is raised:</p> <blockquote> <p>TypeError: a bytes-like object is required, not 'str'</p> </blockquote>
<python><csv><odoo><decode><encode>
2023-01-11 06:59:42
1
550
Andromeda
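Both errors above point the same way: `ir.attachment`'s `datas` field expects a base64 payload, and `b64encode` expects bytes, not `str`. A sketch of the round trip (the sample CSV text is illustrative, not from the source):

```python
import base64

text = "name,component\nwidget,axle\n"

# Fix for the TypeError: encode the str to bytes first ...
encoded = base64.b64encode(text.encode("utf-8"))
# ... or read the file in binary mode instead: open('test.csv', 'rb').read()

# Fix for the binascii.Error: the stored value must be decoded back from
# base64 before being handed to csv.reader.
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == text
```

In the question's `import_csv`, that means passing `decoded.splitlines()` (or an `io.StringIO`) to `csv.reader` after base64-decoding the attachment's `datas`.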
75,079,183
19,834,019
Bulk insert on multi-column unique constraint Django
<p>Suppose we have a model</p> <pre><code>from django.db import models class Concept(models.Model): a = models.CharField(max_length=255) b = models.CharField(max_length=255) c = models.CharField(max_length=255) d = models.CharField(max_length=255) class Meta: constraints = [ models.UniqueConstraint( fields=('a', 'b'), name='first_two_constraint'), ] </code></pre> <p>I want to execute <code>bulk_create</code> on this model such that, on a unique constraint violation of 'first_two_constraint', an update would be performed.</p> <p>For sqlite3, the features <a href="https://github.com/django/django/blob/main/django/db/backends/sqlite3/features.py#L44" rel="nofollow noreferrer">https://github.com/django/django/blob/main/django/db/backends/sqlite3/features.py#L44</a> forces that <code>unique_fields</code> be passed to the <code>bulk_create</code> function. However, it's non-obvious to me what that should be. <a href="https://github.com/django/django/blob/829f4d1448f7b40238b47592fc17061bf77b0f23/django/db/models/query.py#L701" rel="nofollow noreferrer">https://github.com/django/django/blob/829f4d1448f7b40238b47592fc17061bf77b0f23/django/db/models/query.py#L701</a></p> <p>I tried the constraint's name, however that failed. Tracing, that occurs since this list of <code>unique_fields</code> is specifically the field names, and there wouldn't be a field name for a constraint . 
<a href="https://github.com/django/django/blob/829f4d1448f7b40238b47592fc17061bf77b0f23/django/db/models/query.py#L768" rel="nofollow noreferrer">https://github.com/django/django/blob/829f4d1448f7b40238b47592fc17061bf77b0f23/django/db/models/query.py#L768</a></p> <p>As a result, I'm at a loss of how to approach this issue.</p> <p>Based off of the sqlite3 documentation, <a href="https://www.sqlite.org/lang_conflict.html" rel="nofollow noreferrer">https://www.sqlite.org/lang_conflict.html</a> sub-heading 'REPLACE', the functionality should be possible as, even if it's multiple columns, the violation would still be a unique constraint violation &quot;When a UNIQUE or PRIMARY KEY constraint violation occurs...&quot;</p> <p>Does anyone have any insight as to how to deal with multiple column constraints with the <code>bulk_create</code> function or confirmation that the only approach to this is with raw SQL?</p> <p>I don't believe it's to have <code>unique_fields=('a', 'b')</code> as that would be representative of two separate column constraints, correct?</p>
<python><sql><django><sqlite>
2023-01-11 06:42:14
1
303
Ambiguous Illumination
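On the final doubt in the question above: `unique_fields` takes the model *field names* that make up the constraint, and listing both columns denotes the single composite constraint, not two separate single-column ones. A sketch of the call (not run against a live database here; field names come from the question's model):

```python
# Hypothetical upsert arguments for Concept.objects.bulk_create: on a
# violation of the (a, b) composite unique constraint, overwrite c and d.
upsert_kwargs = dict(
    update_conflicts=True,
    unique_fields=["a", "b"],   # the columns of 'first_two_constraint'
    update_fields=["c", "d"],   # columns to overwrite on conflict
)
# Concept.objects.bulk_create(objs, **upsert_kwargs)
```

This maps onto `INSERT ... ON CONFLICT (a, b) DO UPDATE` on backends that support it, which is why the constraint's *name* is never what the API wants.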
75,079,148
425,964
Python logging stdout and stderr based on level
<p>Using Python 3 logging, how can I specify that <code>logging.debug()</code> and <code>logging.info()</code> should go to <code>stdout</code> and <code>logging.warning()</code> and <code>logging.error()</code> go to <code>stderr</code>?</p>
<python><python-3.x><logging>
2023-01-11 06:36:37
2
45,778
Justin
75,079,092
1,236,858
Poetry: Disabling SSL certification check
<p>I'm trying to disable the SSL certificate check for a self-signed certificate using the example mentioned in <a href="https://python-poetry.org/docs/repositories/#certificates" rel="nofollow noreferrer">https://python-poetry.org/docs/repositories/#certificates</a>:</p> <pre><code>poetry config certificates.foo.cert false
</code></pre> <p>My repository is &quot;private-pypi&quot;, so I am using <code>poetry config certificates.private-pypi.cert false</code></p> <p>But it seems that the value is getting assigned as the string &quot;false&quot; instead of a boolean.</p> <p><a href="https://i.sstatic.net/k6Cm3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k6Cm3.png" alt="enter image description here" /></a></p> <p>See the difference between the first and the second config, where the second config does not have <code>&quot;</code> in the value.</p> <p>How do I set it so that it is assigned as a boolean instead?</p>
<python><python-poetry>
2023-01-11 06:30:40
0
7,307
rcs
75,078,992
8,874,837
Get key hierarchy from a nested dict of other lists/dicts in Python
<p>I have an input dict like so:</p> <pre><code>input = {'boo': 'its', 'soo': 'your', 'roo': 'choice', 'qoo': 'this',
         'fizz': 'is', 'buzz': 'very', 'yoyo': 'rambling', 'wazzw': 'lorem',
         'bnn': 'ipsum',
         'cc': [{'boo': 'fill', 'soo': 'ing', 'roo': 'in', 'qoo': 'the',
                 'fizz': 'words', 'buzz': 'here', 'yoyo': 'we', 'wazzw': 'go',
                 'nummm': 2, 'bsdfff': 3, 'hgdjgkk': 4, 'opu': 1, 'mnb': True},
                {'boo': 'again', 'soo': 'loop', 'roo': 'de', 'qoo': 'loop',
                 'fizz': 'wowzers', 'buzz': 'try', 'yoyo': 'again', 'wazzw': 'how',
                 'nummm': 1, 'bsdfff': 7, 'hgdjgkk': 0, 'opu': 1, 'mnb': True}],
         'soos': ['ya'], 'tyu': 'doin', 'dddd3': 'today'}
</code></pre> <p>Using Python builtin libraries, how do I get the hierarchy (dot separated) of each key, i.e.:</p> <pre><code>expected_output = ['boo', 'soo', 'roo', 'qoo', 'fizz', 'buzz', 'yoyo', 'wazzw',
                   'bnn', 'cc', 'cc.boo', 'cc.soo', 'cc.roo', 'cc.qoo', 'cc.fizz',
                   'cc.buzz', 'cc.yoyo', 'cc.wazzw', 'cc.nummm', 'cc.bsdfff',
                   'cc.hgdjgkk', 'cc.opu', 'cc.mnb', 'soos', 'tyu', 'dddd3']
</code></pre> <p>My first attempt does not handle lists:</p> <pre><code>def getKeys(object, prev_key=None, keys=[]):
    if type(object) != type({}):
        keys.append(prev_key)
        return keys
    new_keys = []
    for k, v in object.items():
        if prev_key != None:
            new_key = &quot;{}.{}&quot;.format(prev_key, k)
        else:
            new_key = k
        new_keys.extend(getKeys(v, new_key, []))
    return new_keys
</code></pre>
<python><list><dictionary>
2023-01-11 06:16:58
3
350
tooptoop4
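A sketch extending the question's approach to lists: recurse into dicts; for lists, recurse into each element under the same parent key; then de-duplicate while preserving order (the sample `data` below is a trimmed version of the question's input):

```python
def get_key_paths(obj, prefix=""):
    paths = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            dotted = f"{prefix}.{key}" if prefix else key
            paths.append(dotted)
            paths.extend(get_key_paths(value, dotted))
    elif isinstance(obj, list):
        # Lists contribute no path segment of their own; their elements
        # inherit the parent key's prefix.
        for item in obj:
            paths.extend(get_key_paths(item, prefix))
    return paths

data = {"bnn": "ipsum",
        "cc": [{"boo": "fill", "nummm": 2},
               {"boo": "again", "opu": 1}],
        "soos": ["ya"]}

# dict.fromkeys de-duplicates paths repeated across list elements,
# preserving first-seen order.
result = list(dict.fromkeys(get_key_paths(data)))
print(result)
# ['bnn', 'cc', 'cc.boo', 'cc.nummm', 'cc.opu', 'soos']
```

Only builtins are used, matching the question's constraint.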
75,078,868
9,376,425
use Scalene for FastAPI profiling
<p>I'm currently trying to profile my FastAPI endpoints using Scalene. I have a profiling middleware like this:</p> <pre class="lang-py prettyprint-override"><code>class ScaleneProfilerMiddleware(BaseHTTPMiddleware):
    def __init__(self, app):
        super().__init__(app)

    async def dispatch(self, request: Request, call_next):
        with _profiler() as profiler:
            response = await call_next(request)
        return _handle_response(profiler)
</code></pre> <p>and the <strong>_profiler()</strong> function should be something like this:</p> <pre class="lang-py prettyprint-override"><code>@contextmanager
def _profiler():
    profiler = Scalene()
    try:
        profiler.start()
        yield profiler
    finally:
        profiler.stop()
</code></pre> <p>and I would like to generate an HTML response right away using <strong>_handle_response(profiler)</strong>, which is something like this:</p> <pre class="lang-py prettyprint-override"><code>def _handle_response(
    profiler: Scalene
) -&gt; Response:
    return HTMLResponse(profiler.generate_html())
</code></pre> <p>I couldn't find any example like this, so I'm wondering if this is possible at the moment.</p>
<python><profiling><profiler><scalene>
2023-01-11 06:00:35
0
424
Mahdi Sorkhmiri
75,078,513
14,551,577
How to download sub pages linked by javascript function in selenium python
<p>I want to download all pages linked by JavaScript functions.<br /> The HTML structure is as follows.</p> <p><code>index.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
  &lt;script type=&quot;text/javascript&quot;&gt;
    function a(){ document.location = 'a.html' }
    function b(){ document.location = 'b.html' }
    function c(){ document.location = 'c.html' }
  &lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;javascript:a()&quot;&gt;Item a&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;javascript:b()&quot;&gt;Item b&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;javascript:c()&quot;&gt;Item c&lt;/a&gt;&lt;/li&gt;
  &lt;/ul&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p><code>a.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
  &lt;h1&gt;Item A&lt;/h1&gt;
  &lt;script type=&quot;text/javascript&quot;&gt;
    function d(){ document.location = 'd.html' }
    function e(){ document.location = 'e.html' }
  &lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;javascript:d()&quot;&gt;Item d&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;javascript:e()&quot;&gt;Item e&lt;/a&gt;&lt;/li&gt;
  &lt;/ul&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p><code>b.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;&lt;/head&gt;
&lt;body&gt;
  &lt;h1&gt;Item B&lt;/h1&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p><code>c.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;&lt;/head&gt;
&lt;body&gt;
  &lt;h1&gt;Item C&lt;/h1&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p><code>d.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;&lt;/head&gt;
&lt;body&gt;
  &lt;h1&gt;Item D&lt;/h1&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p><code>e.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
  &lt;script type=&quot;text/javascript&quot;&gt;
    function d(){ document.location = 'd.html' }
    function b(){ document.location = 'b.html' }
  &lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;h1&gt;Item E&lt;/h1&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;javascript:d()&quot;&gt;Item d&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;javascript:b()&quot;&gt;Item b&lt;/a&gt;&lt;/li&gt;
  &lt;/ul&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p>What is the best way to download all pages?</p>
<python><selenium><selenium-webdriver><web-scraping>
2023-01-11 05:00:47
0
644
bcExpt1123
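Since every `javascript:` link in these pages ultimately assigns a static `.html` file to `document.location`, one workable approach is to pull the targets out of the page source with a regular expression and crawl them breadth-first with a visited set (note that `e.html` links back to `b.html` and `d.html`, so deduplication matters). A minimal sketch of the extraction step; the crawl loop around it is assumed to fetch each page with `urllib.request` or Selenium's `driver.page_source`:

```python
import re

# Sample page source: each javascript: link boils down to
# `document.location = '<target>.html'` inside a <script> block.
html = """
<script type="text/javascript">
function a(){ document.location = 'a.html' }
function b(){ document.location = 'b.html' }
function c(){ document.location = 'c.html' }
</script>
"""

def extract_targets(page_source):
    # Collect every page assigned to document.location in the source.
    return re.findall(r"document\.location\s*=\s*'([^']+)'", page_source)

print(extract_targets(html))  # ['a.html', 'b.html', 'c.html']
```

Seeding a queue with `index.html`, downloading each popped page, and enqueueing any extracted target not yet in a visited set reaches all six pages; a real browser (Selenium) is only needed if the link targets are computed at runtime rather than written literally in the script.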
75,078,486
14,627,589
Unexpected end of template. Jinja was looking for the following tags: 'endblock'. The innermost block that needs to be closed is 'block'
<p>I have visited these answers too <a href="https://stackoverflow.com/questions/49822676/running-flask-environment-using-htmlreceiving-error-message-of-expected-else-st">Running Flask environment using HTML:receiving error message of expected else statement</a> and <a href="https://stackoverflow.com/questions/55933020/how-to-fix-jinja2-exceptions-template-syntaxerror">How to fix jinja2 exceptions Template SyntaxError:</a> but could not solve the problem.</p> <p>I am a beginner at Flask and tried using jinja2 template inheritance. Here are my files.</p> <p><strong>index.html</strong></p> <pre><code>{% extends &quot;layout.html&quot; %} {% block body %} &lt;!-- Page Header--&gt; &lt;header class=&quot;masthead&quot; style=&quot;background-image: url('{{ url_for('static',filename='assets/img/home-bg.jpg') }}')&quot;&gt; &lt;div class=&quot;container position-relative px-4 px-lg-5&quot;&gt; &lt;div class=&quot;row gx-4 gx-lg-5 justify-content-center&quot;&gt; &lt;div class=&quot;col-md-10 col-lg-8 col-xl-7&quot;&gt; &lt;div class=&quot;site-heading&quot;&gt; &lt;h1&gt;Clean Blog&lt;/h1&gt; &lt;span class=&quot;subheading&quot;&gt;A Blog Theme by Start Bootstrap&lt;/span&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/header&gt; &lt;!-- Main Content--&gt; &lt;div class=&quot;container px-4 px-lg-5&quot;&gt; &lt;div class=&quot;row gx-4 gx-lg-5 justify-content-center&quot;&gt; &lt;div class=&quot;col-md-10 col-lg-8 col-xl-7&quot;&gt; &lt;!-- Divider--&gt; &lt;hr class=&quot;my-4&quot; /&gt; &lt;!-- Pager--&gt; &lt;div class=&quot;d-flex justify-content-end mb-4&quot;&gt;&lt;a class=&quot;btn btn-primary text-uppercase&quot; href=&quot;#!&quot;&gt;Older Posts β†’&lt;/a&gt;&lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; {% endblock body %} </code></pre> <p><strong>layout.html</strong></p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;utf-8&quot; /&gt; &lt;meta name=&quot;viewport&quot; 
content=&quot;width=device-width, initial-scale=1, shrink-to-fit=no&quot; /&gt; &lt;meta name=&quot;description&quot; content=&quot;&quot; /&gt; &lt;link href=&quot;{{ url_for('static',filename='css/styles.css') }}&quot; rel=&quot;stylesheet&quot; /&gt; &lt;/head&gt; &lt;body&gt; &lt;!-- Navigation--&gt; &lt;nav class=&quot;navbar navbar-expand-lg navbar-light&quot; id=&quot;mainNav&quot;&gt; &lt;div class=&quot;container px-4 px-lg-5&quot;&gt; &lt;a class=&quot;navbar-brand&quot; href=&quot;index.html&quot;&gt;Start Bootstrap&lt;/a&gt; &lt;button class=&quot;navbar-toggler&quot; type=&quot;button&quot; data-bs-toggle=&quot;collapse&quot; data-bs-target=&quot;#navbarResponsive&quot; aria-controls=&quot;navbarResponsive&quot; aria-expanded=&quot;false&quot; aria-label=&quot;Toggle navigation&quot;&gt; Menu &lt;i class=&quot;fas fa-bars&quot;&gt;&lt;/i&gt; &lt;/button&gt; &lt;div class=&quot;collapse navbar-collapse&quot; id=&quot;navbarResponsive&quot;&gt; &lt;ul class=&quot;navbar-nav ms-auto py-4 py-lg-0&quot;&gt; &lt;li class=&quot;nav-item&quot;&gt;&lt;a class=&quot;nav-link px-lg-3 py-3 py-lg-4&quot; href=&quot;/&quot;&gt;Home&lt;/a&gt;&lt;/li&gt; &lt;li class=&quot;nav-item&quot;&gt;&lt;a class=&quot;nav-link px-lg-3 py-3 py-lg-4&quot; href=&quot;/about&quot;&gt;About&lt;/a&gt;&lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; &lt;/div&gt; &lt;/nav&gt; {% block body %} { % endblock % } &lt;!-- Footer--&gt; &lt;footer class=&quot;border-top&quot;&gt; &lt;div class=&quot;container px-4 px-lg-5&quot;&gt; &lt;div class=&quot;row gx-4 gx-lg-5 justify-content-center&quot;&gt; &lt;div class=&quot;col-md-10 col-lg-8 col-xl-7&quot;&gt; &lt;ul class=&quot;list-inline text-center&quot;&gt; &lt;li class=&quot;list-inline-item&quot;&gt; &lt;a href=&quot;#!&quot;&gt; &lt;span class=&quot;fa-stack fa-lg&quot;&gt; &lt;i class=&quot;fas fa-circle fa-stack-2x&quot;&gt;&lt;/i&gt; &lt;i class=&quot;fab fa-twitter fa-stack-1x fa-inverse&quot;&gt;&lt;/i&gt; &lt;/span&gt; &lt;/a&gt; &lt;/li&gt; 
&lt;/ul&gt; &lt;/div&gt; &lt;/div&gt; &lt;/footer&gt; &lt;!-- Bootstrap core JS--&gt; &lt;script src=&quot;https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/js/bootstrap.bundle.min.js&quot;&gt;&lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I want to inherit the template from <strong>layout.html</strong> in <strong>index.html</strong>. But I am getting this error:</p> <pre><code>jinja2.exceptions.TemplateSyntaxError: Unexpected end of template. Jinja was looking for the following tags: 'endblock'. The innermost block that needs to be closed is 'block'. </code></pre> <p>I have searched a lot of websites for a solution but could not find one. Any help would be highly appreciated.</p>
<python><jinja2>
2023-01-11 04:53:35
1
551
ARHAM RUMI
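The error traces to layout.html: the closing tag there is written `{ % endblock % }`, with spaces between the braces and the percent signs, so Jinja never recognizes it as a tag and the `{% block body %}` it opened is left unclosed, which is exactly what the "looking for 'endblock'" message says. A corrected fragment (a sketch of just the affected lines; the rest of layout.html is unchanged):

```jinja
{% block body %}
{% endblock %}
```

Jinja tag delimiters must be exactly `{%` and `%}`, with no space inside them.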
75,078,418
1,157,639
Passing **kwargs arguments to parent classes in multiple inheritance loses kwargs content
<p>In a multiple inheritance scenario, how does Python3 pass the arguments to all the parent classes? Consider the toy program below:</p> <pre><code>class A(object): def __init__(self, a='x', **kwargs): print('A', kwargs) super().__init__() class B(object): def __init__(self, **kwargs): print('B', kwargs) super().__init__() class C(A,B): def __init__(self, **kwargs): print('C', kwargs) super().__init__(**kwargs) c = C(a='A', b='B', c='C') </code></pre> <p>The output is:</p> <pre><code>C {'a': 'A', 'b': 'B', 'c': 'C'} A {'b': 'B', 'c': 'C'} B {} </code></pre> <p>What I expect to do is to pass the same kwargs to all the parent classes and let them use the values as they want. But, what I am seeing is that once I'm down the first parent class <code>A</code>, the kwargs is consumed and nothing is passed to <code>B</code>.</p> <p>Please help!</p> <hr /> <p><em>Update</em> If the order of inheritance was deterministic and I knew the exhaustive list of kwargs that can be passed down the inheritance chain, we could solve this problem as</p> <pre><code>class A(object): def __init__(self, a='x', **kwargs): print('A', kwargs) super().__init__(**kwargs) class B(object): def __init__(self, **kwargs): print('B', kwargs) super().__init__() class C(A,B): def __init__(self, **kwargs): print('C', kwargs) super().__init__(**kwargs) c = C(a='A', b='B', c='C') </code></pre> <p>Here, since <code>A</code> passes down kwargs and we know that <code>B</code> would be called after it, we are safe, and by the time <code>object.__init__()</code> is invoked, the kwargs would be empty.</p> <p>However, this may not always be the case.</p> <p>Consider this variation:</p> <pre><code>class C(B,A): def __init__(self, **kwargs): print('C', kwargs) super().__init__(**kwargs) </code></pre> <p>Here, <code>object.__init__()</code> invoked from <code>class A</code> would raise an exception since there are still kwargs left to consume.</p> <p>So, is there a general design guideline that I should be 
following?</p>
<python><multiple-inheritance>
2023-01-11 04:42:08
1
776
Anshul
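A common guideline for this situation is fully cooperative inheritance: every class accepts `**kwargs`, consumes only the keyword arguments it declares by name, and forwards the rest with `super().__init__(**kwargs)`. Because each class strips its own parameters before delegating, the kwargs that finally reach `object.__init__()` are empty regardless of the MRO order, so both `C(A, B)` and `C(B, A)` work. A sketch:

```python
class A:
    def __init__(self, a='x', **kwargs):
        self.a = a
        super().__init__(**kwargs)  # forward whatever A did not consume

class B:
    def __init__(self, b='y', **kwargs):
        self.b = b
        super().__init__(**kwargs)  # forward whatever B did not consume

class C(A, B):
    def __init__(self, c='z', **kwargs):
        self.c = c
        super().__init__(**kwargs)

class D(B, A):  # reversed base order works just as well
    pass

c = C(a='A', b='B', c='C')
d = D(a='A', b='B')
print(c.a, c.b, c.c)  # A B C
print(d.a, d.b)       # A B
```

The rule of thumb: named parameters act as filters that pull each class's own arguments out of the stream, and `**kwargs` carries everything else onward, so no class needs to know who comes next in the MRO.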
75,078,361
15,542,245
Skewing text - How to take advantage of existing edges
<p>I have the following JPG image. If I want to find the edges where the white page meets the black background. So I can rotate the contents a few degrees clockwise. My aim is to straighten the text for using with Tesseract OCR conversion. I don't see the need to rotate the text blocks as I have seen in similar examples.</p> <p>In the docs <a href="https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html" rel="nofollow noreferrer">Canny Edge Detection</a> the third arg 200 eg <code>edges = cv.Canny(img,100,200)</code> is maxVal and said to be 'sure to be edges'. Is there anyway to determine these (max/min) values ahead of any trial &amp; error approach?</p> <p>I have used code examples which utilize the Python cv2 module. But the edge detection is set up for simpler applications.</p> <p>Is there any approach I can use to take the text out of the equation. For example: only detecting edge lines greater than a specified length?</p> <p>Any suggestions would be appreciated.</p> <p><a href="https://i.sstatic.net/6TJ3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6TJ3L.png" alt="Historical Electoral Roll" /></a></p> <p>Below is an example of edge detection (above image same min/max values) The outer edge of the page is clearly defined. The image is high contrast b/w. It has even lighting. I can't see a need for the use of an adaptive threshold. Simple global is working. Its just at what ratio to use it.</p> <p>I don't have the answer to this yet. But to add. I now have the contours of the above doc.</p> <p><a href="https://i.sstatic.net/7ZcWL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7ZcWL.png" alt="edges" /></a></p> <p><a href="https://i.sstatic.net/6KqZZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6KqZZ.png" alt="contours" /></a></p> <p>I used <a href="https://docs.opencv.org/4.x/df/d0d/tutorial_find_contours.html" rel="nofollow noreferrer">find contours tutorial</a> with some customization of the file loading. 
Note: removing words gives a thinner/cleaner outline.</p>
<python><ocr><canny-operator><image-thresholding>
2023-01-11 04:31:19
1
903
Dave
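On picking Canny thresholds, there is no closed-form answer, but a common heuristic is to derive them from the image median (for example lower = 0.66 x median, upper = 1.33 x median) rather than pure trial and error. For taking the text out of the equation, the usual trick is to fit the page outline itself: collect coordinates along the long straight border and estimate its angle, then rotate by that amount. The sketch below is deliberately library-agnostic; it assumes you already have (x, y) pixel samples of one page edge (for example, the first white pixel per row) and fits a line to recover the skew angle. With OpenCV the same idea is `cv2.minAreaRect` on the page contour, or `cv2.HoughLinesP` with a large `minLineLength` so short text edges are ignored.

```python
import math

# Hypothetical edge samples: first-white-pixel x for each row y of a page
# border tilted by ~2 degrees (synthetic data for illustration only).
true_angle_deg = 2.0
slope = math.tan(math.radians(true_angle_deg))
edge_points = [(100 + slope * y, y) for y in range(0, 1000, 10)]

def estimate_skew_deg(points):
    # Least-squares fit of x = a*y + b, then convert the slope to degrees.
    n = len(points)
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    num = sum((y - mean_y) * (x - mean_x) for x, y in points)
    den = sum((y - mean_y) ** 2 for _, y in points)
    return math.degrees(math.atan(num / den))

print(round(estimate_skew_deg(edge_points), 3))  # 2.0
```

Rotating the whole image by the negative of that angle straightens the text in one step, which is usually all Tesseract needs.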
75,078,287
10,035,190
How to make a multi-select dropdown field in Flask?
<p>I am trying to make a multi-select dropdown field with a clear button in Flask, like this:</p> <p><a href="https://i.sstatic.net/SeBBv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SeBBv.png" alt="enter image description here" /></a></p> <p>I have tried this, but it is not working the way I want:</p> <pre><code>type = SelectMultipleField(&quot;Type&quot;, choices=[(&quot;None&quot;, &quot;None&quot;), (&quot;one&quot;, &quot;one&quot;), (&quot;two&quot;, &quot;two&quot;), (&quot;three&quot;, &quot;three&quot;)], default=[('None', 'None')]) </code></pre> <p>Instead, I am getting this: all choices are rendered at once, without a dropdown:</p> <p><a href="https://i.sstatic.net/zNkkL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zNkkL.png" alt="enter image description here" /></a></p>
<python><flask><wtforms>
2023-01-11 04:18:23
2
930
zircon
75,078,242
13,659,567
How to generate a PNG image in PIL and display it in Jinja2 template using FastAPI?
<p>I have a FastAPI endpoint that is generating PIL images. I want to then send the resulting image as a stream to a Jinja2 <code>TemplateResponse</code>. This is a simplified version of what I am doing:</p> <pre class="lang-py prettyprint-override"><code>import io from PIL import Image @api.get(&quot;/test_image&quot;, status_code=status.HTTP_200_OK) def test_image(request: Request): '''test displaying an image from a stream. ''' test_img = Image.new('RGBA', (300,300), (0, 255, 0, 0)) # I've tried with and without this: test_img = test_img.convert(&quot;RGB&quot;) test_img = test_img.tobytes() base64_encoded_image = base64.b64encode(test_img).decode(&quot;utf-8&quot;) return templates.TemplateResponse(&quot;display_image.html&quot;, {&quot;request&quot;: request, &quot;myImage&quot;: base64_encoded_image}) </code></pre> <p>With this simple html:</p> <pre class="lang-html prettyprint-override"><code>&lt;html&gt; &lt;head&gt; &lt;title&gt;Display Uploaded Image&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;My Image&lt;h1&gt; &lt;img src=&quot;data:image/jpeg;base64,{{ myImage | safe }}&quot;&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I've been working from these answers and have tried multiple permutations of these:</p> <p><a href="https://stackoverflow.com/questions/73263202/how-to-display-uploaded-image-in-html-page-using-fastapi-jinja2">How to display uploaded image in HTML page using FastAPI &amp; Jinja2?</a></p> <p><a href="https://stackoverflow.com/questions/31826335/how-to-convert-pil-image-image-object-to-base64-string">How to convert PIL Image.image object to base64 string?</a></p> <p><a href="https://stackoverflow.com/questions/70848602/how-can-i-display-pil-image-to-html-with-render-template-flask">How can I display PIL image to html with render_template flask?</a></p> <p>This seems like it ought to be very simple but all I get is the html icon for an image that didn't render.</p> <p>What am I doing wrong? 
Thank you.</p> <p>I used Mark Setchell's answer, which clearly shows what I was doing wrong, but still am not getting an image in html. My FastAPI is:</p> <pre class="lang-py prettyprint-override"><code>@api.get(&quot;/test_image&quot;, status_code=status.HTTP_200_OK) def test_image(request: Request): # Create image im = Image.new('RGB',(1000,1000),'red') im.save('red.png') print(im.tobytes()) # Create buffer buffer = io.BytesIO() # Tell PIL to save as PNG into buffer im.save(buffer, 'PNG') # get the PNG-encoded image from buffer PNG = buffer.getvalue() print() print(PNG) base64_encoded_image = base64.b64encode(PNG) return templates.TemplateResponse(&quot;display_image.html&quot;, {&quot;request&quot;: request, &quot;myImage&quot;: base64_encoded_image}) </code></pre> <p>and my html:</p> <pre class="lang-html prettyprint-override"><code>&lt;html&gt; &lt;head&gt; &lt;title&gt;Display Uploaded Image&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;My Image 3&lt;h1&gt; &lt;img src=&quot;data:image/png;base64,{{ myImage | safe }}&quot;&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>When I run, if I generate a 1x1 image I get the exact printouts in Mark's answer. If I run this version, with 1000x1000 image, it saves a red.png that I can open and see. But in the end, the html page has the heading and the icon for no image rendered. I'm clearly doing something wrong now in how I send to html.</p>
<python><jinja2><python-imaging-library><fastapi>
2023-01-11 04:07:32
2
365
Brad Allen
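A likely culprit in the updated code is that `base64.b64encode` returns bytes, not str. When Jinja renders a bytes object it produces the literal `b'...'` representation, which corrupts the data URI and leaves the browser showing the broken-image icon. Decoding to a string before passing it to the template avoids this. A stdlib-only sketch (the `buffer.getvalue()` PNG bytes from the question would take the place of the placeholder below):

```python
import base64

# Placeholder for the PNG bytes you get from buffer.getvalue()
png_bytes = b"\x89PNG\r\n\x1a\n...fake image data..."

raw = base64.b64encode(png_bytes)                   # bytes: renders as b'...' in Jinja
good = base64.b64encode(png_bytes).decode("utf-8")  # str: safe for the template

print(type(raw).__name__, type(good).__name__)  # bytes str
print(str(raw)[:2])  # b' -- this prefix is what would end up inside the src attribute
```

So in the endpoint, pass `base64.b64encode(PNG).decode("utf-8")` into the template context, as the first version of the code already did.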
75,078,174
4,935,567
Instagram follower and following count mismatch
<p>When scraping the Instagram profile page of an account, it shows 3 numbers: posts, followers, and following. There is a request to Instagram's APIs that corresponds to these numbers.</p> <p>If you look at the other request which loads the account's page itself (a document-type request), there's a meta tag that looks like this:</p> <pre class="lang-html prettyprint-override"><code>&lt;meta property=&quot;og:description&quot; content=&quot;X Followers, Y Following, Z Posts - See Instagram photos and videos from account_name (username)&quot; /&gt; </code></pre> <p>But these three numbers differ from (and, I dare say, are always greater than) the ones returned from the API. Which set of numbers is correct, and why does this mismatch occur? Does it take some time for the API to process interactions and posts, or something like that?</p>
<python><web-scraping><instagram>
2023-01-11 03:50:00
1
2,618
Masked Man
75,078,125
2,463,570
Pandas: merge all columns starting with the prefix "result"
<p>I have a dynamic dataframe generation which create column or columns like below x,y,z, can be anything</p> <p>df</p> <pre><code> result result_x result_y result_z result_..... 0 1 1 0 0 1 0 1 </code></pre> <p>some times it will generate</p> <pre><code> result result_x 0 1 0 1 </code></pre> <p>Some time it will generate</p> <pre><code> result result_x result_y 0 1 1 0 1 2 </code></pre> <p>some time it can generate</p> <pre><code> result result_x result_y result_z result_a result_b......any number of result 0 1 1 0 1 2 </code></pre> <p>I want a simple dataframe which will merge all columns start prefix with <code>result</code></p> <p>it would be output of first df:</p> <pre><code>result 0110.. #merge all columns value 0101... </code></pre> <p><strong>Note:</strong></p> <p>I have other columns as well in df</p> <p>like</p> <pre><code>username,address,xapi_data,...,result,result_x,result_y,... </code></pre>
<python><pandas>
2023-01-11 03:37:23
1
12,390
Rajarshi Das
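One way to do this, assuming the result columns always hold 0/1 integers, is to select them with `DataFrame.filter(regex=...)` so the arbitrary suffixes don't matter, cast to string, and concatenate row-wise with `agg(''.join, axis=1)`; other columns such as `username` are untouched. A sketch:

```python
import pandas as pd

df = pd.DataFrame({
    'username': ['u1', 'u2'],
    'result':   [0, 0],
    'result_x': [1, 1],
    'result_y': [1, 0],
    'result_z': [0, 1],
})

# Every column whose name starts with "result", whatever the suffix
result_cols = df.filter(regex=r'^result').columns

# Concatenate the 0/1 values of those columns into one string per row
df['result'] = df[result_cols].astype(int).astype(str).agg(''.join, axis=1)
df = df.drop(columns=[c for c in result_cols if c != 'result'])

print(df['result'].tolist())  # ['0110', '0101']
```

Because the selection is by regex, the same two lines work whether the frame has only `result` and `result_x` or a dozen suffixed columns.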
75,078,108
5,212,614
Struggling with geopy install. ModuleNotFoundError: No module named 'geopy.geocoders'
<p>I ran <code>pip install geopy</code> and it seemed to install ok, but I couldn't run the following script.</p> <pre><code>from geopy.geocoders import Nominatim geolocator = Nominatim(user_agent=&quot;ryan_data&quot;) location = geolocator.geocode(&quot;175 5th Avenue NYC&quot;) print(location.address) </code></pre> <p>That script gives me this.</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\ryans\AppData\Local\Temp\ipykernel_19160\137417210.py&quot;, line 1, in &lt;module&gt; from geopy.geocoders import Nominatim File &quot;C:\Users\ryans\geopy.py&quot;, line 3, in &lt;module&gt; from geopy.geocoders import Nominatim ModuleNotFoundError: No module named 'geopy.geocoders'; 'geopy' is not a package </code></pre> <p>I also tried the following three lines to get geopy installed; none worked.</p> <pre><code>conda install -c conda-forge geopy conda install -c conda-forge geopy=2.3.0 conda install -c conda-forge geopandas=0.10 </code></pre> <p>The geopy package works fine on another computer that I use, but it doesn't seem to install on the laptop that I am using now. Has anyone encountered this issue before? Any idea how I can get this package installed?</p>
<python><python-3.x><geopy>
2023-01-11 03:33:46
1
20,492
ASH
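The traceback actually reveals the cause: the import resolves to `C:\Users\ryans\geopy.py`, a local script named geopy.py, rather than the installed package, and a file in the script's directory always shadows an installed package of the same name (hence "'geopy' is not a package"). Renaming that file (and deleting any stale `geopy.pyc` or `__pycache__` entry) should fix it, regardless of how geopy was installed. The sketch below reproduces the shadowing mechanism with a stdlib module so it can be seen in isolation:

```python
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
# Create a local file named like the stdlib 'statistics' module
with open(os.path.join(tmp, 'statistics.py'), 'w') as f:
    f.write('value = "local file, not the real module"\n')

sys.path.insert(0, tmp)              # mimics the script's own directory coming first
sys.modules.pop('statistics', None)  # forget any previously imported copy
shadow = importlib.import_module('statistics')

print(shadow.__file__.startswith(tmp))  # True: the local file won
print(hasattr(shadow, 'mean'))          # False: the real module is hidden

# Clean up so the real module is importable again
sys.path.remove(tmp)
sys.modules.pop('statistics', None)
```

Checking `some_module.__file__` right after the import is a quick way to confirm whether the package you think you imported is the one Python actually found.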
75,077,884
2,377,957
Implement pkg_resources.resource_filename in setuptools
<p>I am working to incorporate the symspellpy package for spell checking and correcting large amounts of data. However, the package suggests using pkg_resources.resource_filename, which is no longer supported. Can you please provide guidance on how to access the necessary resources using the currently preferred method?</p> <pre class="lang-py prettyprint-override"><code>dictionary_path = pkg_resources.resource_filename(&quot;symspellpy&quot;, &quot;frequency_dictionary_en_82_765.txt&quot;)
bigram_path = pkg_resources.resource_filename(&quot;symspellpy&quot;, &quot;frequency_bigramdictionary_en_243_342.txt&quot;)
</code></pre>
<python><pkg-resources><symspellpy>
2023-01-11 02:50:44
1
4,105
Francis Smart
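The currently preferred replacement is `importlib.resources.files()` from the standard library (Python 3.9+), which works for any installed package. A sketch, with the symspellpy usage shown as the assumed target and a stdlib package used for the runnable demonstration:

```python
from importlib import resources

def resource_path(package, filename):
    """Modern stand-in for pkg_resources.resource_filename (Python 3.9+)."""
    return resources.files(package).joinpath(filename)

# Intended usage (assumes symspellpy is installed):
# dictionary_path = resource_path("symspellpy", "frequency_dictionary_en_82_765.txt")
# bigram_path = resource_path("symspellpy", "frequency_bigramdictionary_en_243_342.txt")

# Runnable demonstration against a stdlib package:
p = resource_path("email", "__init__.py")
print(p.is_file())  # True
```

If a real filesystem path is strictly required (for example, the package might be installed zipped), wrap the result in `resources.as_file(...)`, which yields a concrete path for the duration of a `with` block.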
75,077,578
4,045,121
Lazy file reader that yields one line at a time
<p>I want to build a class that can be used like this:</p> <pre><code>d = Data(file_name) line = next(d) # or for line in d: print(line) </code></pre> <p>I have this class:</p> <pre><code>@dataclass class Data: file_name: str data: IO = field(default=None) def __post_init__(self): self._data = open(self.file_name) def __enter__(self): return self def read_lines(self): for line in self._data: yield line def __next__(self): for line in self._data: yield line def __exit__(self, exc_type, exc_val, exc_tb): if self.data: self.data.close() </code></pre> <p>If I use this code then it works fine:</p> <pre><code>with self._data as data: for line in data.read_lines(): print(line) </code></pre> <p>but when trying to use <code>next</code> then I print generator objects instead of the file lines.</p> <pre><code>with self._data as data: while line := next(data): print(line) </code></pre> <p>How can I successfully use the <code>next</code> method?</p>
<python>
2023-01-11 01:50:45
1
3,452
dearn44
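The reason generator objects are printed is that putting `yield` anywhere inside `__next__` turns `__next__` itself into a generator function, so each `next(data)` call builds and returns a brand-new generator instead of a line. `__next__` should return one line directly and raise `StopIteration` at end of file; adding `__iter__` then makes the for-loop work too. Note also that `while line := next(data)` will end with a `StopIteration` traceback rather than exiting cleanly, because `next()` raises instead of returning a falsy value. A sketch:

```python
import os
import tempfile
from dataclasses import dataclass, field
from typing import IO, Optional

@dataclass
class Data:
    file_name: str
    _data: Optional[IO] = field(default=None, repr=False)

    def __post_init__(self):
        self._data = open(self.file_name)

    def __iter__(self):
        return self  # lets `for line in d` work

    def __next__(self):
        # Return the next line directly; a `yield` here would turn
        # __next__ into a generator function (the original bug).
        line = self._data.readline()
        if not line:
            raise StopIteration
        return line

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self._data:
            self._data.close()

# Demo on a throwaway file
path = os.path.join(tempfile.mkdtemp(), 'demo.txt')
with open(path, 'w') as f:
    f.write('one\ntwo\nthree\n')

with Data(path) as d:
    first = next(d)
    rest = list(d)
print(repr(first), rest)  # 'one\n' ['two\n', 'three\n']
```

If a generator is genuinely wanted, the simpler route is to keep only `read_lines()` and iterate that; mixing the generator style with the iterator protocol is what produced the confusing behaviour.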
75,077,537
1,887,277
"Vectorized" Matrix-Vector multiplication in numpy
<p>I have an $I$-indexed array $V = (V_i)_{i \in I}$ of (column) vectors $V_i$, which I want to multiply pointwise (along $i \in I$) by a matrix $M$. So I'm looking for a &quot;vectorized&quot; operation, wherein the individual operation is a multiplication of a matrix with a vector; that is</p> <p>$W = (M V_i)_{i \in I}$</p> <p>Is there a numpy way to do this?</p> <p><code>numpy.dot</code> unfortunately assumes that $V$ is a matrix, instead of an $I$-indexed family of vectors, which obviously fails.</p> <hr /> <p>So basically I want to &quot;vectorize&quot; the operation</p> <pre><code>W = [np.dot(M, V[i]) for i in range(N)] </code></pre> <p>Considering the 2D array V as a list (first index) of column vectors (second index).</p> <p>If</p> <pre><code>shape(M) == (2, 2) shape(V) == (N, 2) </code></pre> <p>Then</p> <pre><code>shape(W) == (N, 2) </code></pre>
<python><numpy><vectorization><linear-algebra>
2023-01-11 01:43:16
2
722
Mark
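Since the rows of V are the vectors V_i, the whole family of products (M V_i)_i is exactly `V @ M.T`: row i of the result is M applied to row i of V. The einsum spelling makes the index bookkeeping explicit and generalizes to other axis layouts. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2))
V = rng.standard_normal((6, 2))   # N=6 vectors, one per row

# Loop version from the question
W_loop = np.array([M @ V[i] for i in range(V.shape[0])])

# Vectorized: (M @ v) for each row v of V is just V @ M.T
W_vec = V @ M.T

# Same thing with explicit indices: W[i, j] = sum_k M[j, k] * V[i, k]
W_ein = np.einsum('jk,ik->ij', M, V)

print(np.allclose(W_loop, W_vec) and np.allclose(W_loop, W_ein))  # True
```

The same pattern covers the other elementwise operations: once both operands are shaped so that broadcasting lines the axes up (here via the transpose), numpy applies the operation across the whole family without a Python loop.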
75,077,412
6,283,073
How to generate embeddings of object images detected with YOLOv5?
<p>I want to run real-time object detection using YOLOv5 on a camera and then generate vector embeddings for cropped images of the detected objects.</p> <p>I currently generate image embeddings for locally saved images using this function:</p> <pre class="lang-py prettyprint-override"><code>def generate_img_embedding(img_file_path):
    images = [Image.open(img_file_path)]
    # Encoding a single image takes ~20 ms
    embeddings = embedding_model.encode(images)
    return embeddings
</code></pre> <p>I start the YOLOv5 object detection with image cropping as follows:</p> <pre class="lang-py prettyprint-override"><code>def start_camera(productid):
    print(&quot;Attempting to start camera&quot;)
    # productid = &quot;11011&quot;
    try:
        command = &quot;python ./yolov5/detect.py --source 0 --save-crop --name &quot; + productid + &quot; --project ./cropped_images&quot;
        os.system(command)
        print(&quot;Camera running&quot;)
    except Exception as e:
        print(&quot;error starting camera!&quot;, e)
</code></pre> <p>How can I modify the YOLOv5 model to pass the cropped images into my embedding function in real time?</p>
<python><yolo><embedding>
2023-01-11 01:12:11
1
1,679
e.iluf
75,077,369
14,575,973
How to import module using path related to working directory in a python project that managed by poetry?
<p>I'm using poetry to manage my python project, here's the project:</p> <pre><code>my_project/ β”œβ”€β”€ pyproject.toml β”œβ”€β”€ module.py └── scripts/ └── main.py </code></pre> <p>And I want to know how to import function from <code>module.py</code> into <code>my_scripts/main.py</code> correctly.</p> <p>My pyproject.toml:</p> <pre class="lang-ini prettyprint-override"><code>[tool.poetry] name = &quot;my_project&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [] [tool.poetry.dependencies] python = &quot;^3.11&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>I have tried this:</p> <pre class="lang-py prettyprint-override"><code># In my_scripts/main.py from module import my_function </code></pre> <p>And run these commands:</p> <pre class="lang-bash prettyprint-override"><code>poetry install poetry shell python my_scripts/main.py </code></pre> <p>then got this error:</p> <pre><code>ModuleNotFoundError: No module named 'module' </code></pre> <p>I also have put a <code>__init__.py</code> under <code>my_project/</code> but didn't work out.</p>
<python><python-import><python-module><python-packaging><python-poetry>
2023-01-11 01:01:51
3
1,020
Matt Peng
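A common fix is to restructure the project so the importable code lives in a package directory named after the project. This layout is an assumption, since Poetry supports several:

```text
my_project/
β”œβ”€β”€ pyproject.toml
└── my_project/
    β”œβ”€β”€ __init__.py
    β”œβ”€β”€ module.py
    └── scripts/
        β”œβ”€β”€ __init__.py
        └── main.py
```

and to tell Poetry to include it in pyproject.toml:

```toml
[tool.poetry]
name = "my_project"
version = "0.1.0"
description = ""
authors = []
packages = [{ include = "my_project" }]
```

Then `poetry install` installs the package into the virtualenv, `main.py` can use an absolute import such as `from my_project.module import my_function`, and the script runs from anywhere with `poetry run python -m my_project.scripts.main`. Running the file by its path (`python scripts/main.py`) is what broke the import, because only the script's own directory ends up on `sys.path`.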
75,077,337
6,611,672
Use Django ORM in locust
<p>I'm using locust to load test my Django application. I'm currently hardcoding certain values in my <code>locustfile.py</code>:</p> <pre><code>GROUP_UUID = &quot;6790f1e6-64f9-4707-aa82-d4edd64c9cc7&quot; @task def get_single_group(self): self.client.get(f&quot;/api/groups/{GROUP_UUID}/&quot;) </code></pre> <p>This obviously fails if <code>GROUP_UUID</code> is not in the database, which occurs whenever I reseed my database so I'm constantly having to change these hardcoded values.</p> <p>It would be much better if I could dynamically fetch a group at runtime:</p> <pre><code>from myapp.models import Group @task def get_single_group(self): group = Group.objects.first() self.client.get(f&quot;/api/groups/{group.id}/&quot;) </code></pre> <p>However, I'm getting the following error, which makes sense because locust doesn't know anything about Django:</p> <pre><code>ModuleNotFoundError: No module named 'myapp' </code></pre> <p>Is there anyway to use the Django ORM in <code>locustfile.py</code> or do I need to continue hardcoding data all over the place?</p> <p>Related:</p> <ul> <li><a href="https://stackoverflow.com/questions/56929080/dynamic-get-parameter-in-locust">Dynamic GET parameter in locust</a></li> </ul>
<python><django><locust>
2023-01-11 00:52:39
1
5,847
Johnny Metz
75,077,283
872,009
Append element to column with numpy.ndarray datatype in Pandas
<p>I want to append a string to a column whose values are lists (numpy.ndarray objects). The following code is not working:</p> <pre><code>def filter_by_player(df, players, team):
    filtered_df = df[df['player'].isin(players)]
    filtered_df['league'] = filtered_df['league'].apply(lambda x: x + [team])
    return filtered_df
</code></pre> <p>The league column looks like this: ['barca','real','sevilla']. I want to add to it, but the code above is not working.</p> <p>players = ['messi', 'benzema', 'busquets']</p> <pre><code>league_df
player      | team
messi       | ['barca']
lewandowski | ['dortmund', 'bayern', 'barca']
</code></pre> <p>When I call the function filter_by_player(league_df, players, 'psg'),</p> <p>the new dataframe should become this, as messi is in the list of players:</p> <pre><code>player | team
messi  | ['barca', 'psg']
</code></pre>
<python><pandas><list><dataframe>
2023-01-11 00:38:57
1
438
user872009
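Two things are worth checking here: the function is defined with three parameters but called with two, and assigning into a filtered slice of a DataFrame triggers pandas' SettingWithCopyWarning and can silently fail to stick; taking an explicit `.copy()` of the filtered frame avoids that. A sketch using the column names from the sample data (`team`, not `league`):

```python
import pandas as pd

league_df = pd.DataFrame({
    'player': ['messi', 'lewandowski'],
    'team': [['barca'], ['dortmund', 'bayern', 'barca']],
})

def filter_by_player(df, players, team):
    out = df[df['player'].isin(players)].copy()  # copy, not a view of df
    # lst + [team] builds a new list, leaving the original rows untouched
    out['team'] = out['team'].apply(lambda lst: lst + [team])
    return out

result = filter_by_player(league_df, ['messi', 'benzema', 'busquets'], 'psg')
print(result['team'].tolist())  # [['barca', 'psg']]
```

Because the lambda returns a new list instead of mutating in place, the original `league_df` keeps its old values, which is usually the safer behaviour for a filter function.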
75,077,247
4,930,299
Dynamically renaming attributes of a python class?
<p>I am working with a beta version of a python library that has renamed a few of the class functions, for instance from &quot;isConnected&quot; to &quot;is_connected&quot;. I have different servers with different versions of this library on them and it tends to get annoying to deal with. So far I have till now been dealing with it like:</p> <pre><code>try: from eth_utils.conversions import toHex as to_hex except ImportError: from eth_utils.conversions import to_hex </code></pre> <p>But this obviously isn't ideal. What I would like to be able to do is something like this:</p> <pre><code>is_beta = hasattr(eth_utils.conversions, 'to_hex') if not is_beta: setattr(eth_utils.conversions, 'toHex', 'to_hex') </code></pre> <p>How can this be done? I am guessing that it involves using some <a href="https://stackoverflow.com/questions/100003/what-are-metaclasses-in-python">metaclass</a> magic?</p> <p>Edit: The reason why I am looking for a different method here is because I need to initialize web3 like this:</p> <pre><code>w3 = web3.Web3(web3.HTTPProvider(os.environ.get('ethereum_http_endpoint'))) </code></pre> <p>Then I need to call w3.is_connected:</p> <pre><code>if w3.is_connected: print('Web3 is connected ... now I can do stuff') else: print('Not connected, something wrong, do something else') </code></pre> <p>So I can't just call is_connected because I specifically need to call the attribute of the object which is created.</p>
<python><attributes>
2023-01-11 00:32:06
0
343
Chev_603
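No metaclass is needed: methods are ordinary class attributes, so the old and new names can simply be aliased to the same function object. Note that the `setattr` in the question assigns the string `'to_hex'` rather than the function; the third argument must be the attribute's value, fetched with `getattr`. A self-contained sketch with a stand-in class (the same pattern applies to `web3.Web3` or to a module object like `eth_utils.conversions`):

```python
class LegacyClient:            # stand-in for the old library version
    def isConnected(self):
        return True

# Give the old class the new spelling, pointing at the same function object
if not hasattr(LegacyClient, 'is_connected'):
    LegacyClient.is_connected = LegacyClient.isConnected
    # equivalently:
    # setattr(LegacyClient, 'is_connected', getattr(LegacyClient, 'isConnected'))

w3 = LegacyClient()
print(w3.is_connected(), w3.isConnected())  # True True
```

Because the alias lives on the class, every instance created afterwards (such as the object returned by `web3.Web3(...)`) sees both names, so the rest of the code can use `is_connected` everywhere regardless of which library version is installed.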
75,077,179
1,029,902
div not showing up in html from url using requests library and bs4
<p>I have a simple script where I want to scrape a menu from a url:</p> <p><a href="https://untappd.com/v/glory-days-grill-of-ellicott-city/3329822" rel="nofollow noreferrer">https://untappd.com/v/glory-days-grill-of-ellicott-city/3329822</a></p> <p>When I inspect the page using dev tools, I identify that the menu contained in the menu section <code>&lt;div class=&quot;menu-area&quot; id=&quot;section_1026228&quot;&gt;</code></p> <p>So my script is fairly simple as follows:</p> <pre><code>import requests from bs4 import BeautifulSoup venue_url = 'https://untappd.com/v/glory-days-grill-of-ellicott-city/3329822' response = requests.get(venue_url, headers = {'User-agent': 'Mozilla/5.0'}) soup = BeautifulSoup(response.text, 'html.parser') menu = soup.find('div', {'class': 'menu-area'}) print(menu.text) </code></pre> <p>I have tried this on a locally saved page of the url and it works. But when I do it to the full url using the requests library, it does not work. It cannot find the div. It throws this error:</p> <pre><code>print(menu.text) AttributeError: 'NoneType' object has no attribute 'text' </code></pre> <p>which basically means it cannot find the div. Does anyone know why this is happening and how to fix it?</p>
<python><web-scraping><beautifulsoup><python-requests>
2023-01-11 00:17:31
1
557
Tendekai Muchenje
75,077,028
5,463,912
How to create summary statistics for an entire SQLite database?
<p>Consider some SQLite database.db, with a large number of tables and columns.</p> <p>Panda's <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.describe.html" rel="nofollow noreferrer">.describe()</a> produces the summary statistics that I want (see below). However, it requires reading each table in full - a problem for large data bases. Is there an (SQL or Python) alternative that is less memory hungry? Specifiying column names manually is not feasible here.</p> <pre><code>import pandas as pd import sqlite3 con = sqlite3.connect(&quot;file:database.db&quot;, uri=True) tables = pd.read_sql(&quot;SELECT name FROM sqlite_master WHERE type='table'&quot;, con) columns = [] for _, row in tables.iterrows(): col = pd.read_sql(f&quot;PRAGMA table_info({row['name']})&quot;, con) col['table'] = row['name'] stats = pd.read_sql(f&quot;&quot;&quot;SELECT * FROM {row['name']}&quot;&quot;&quot;, con) stats = stats.describe(include='all') stats = stats.transpose() col = col.merge(stats, left_on='name', right_index=True) columns.append(col) columns = pd.concat(columns) </code></pre>
<python><sqlite>
2023-01-10 23:53:25
1
314
CFW
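The memory-hungry step is the `SELECT *` into pandas; the aggregates can instead be computed inside SQLite, which scans each table without materializing it in Python. COUNT/AVG/MIN/MAX are built in, and a standard deviation can be derived as sqrt(avg(xΒ²) βˆ’ avg(x)Β²) if needed. A sketch of the per-column query (column names would still come from `PRAGMA table_info` as in the question):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (x REAL)')
con.executemany('INSERT INTO t VALUES (?)', [(1,), (2,), (3,), (4,)])

def describe_column(con, table, column):
    # All aggregation happens inside SQLite; the table is never read in full.
    row = con.execute(
        f'SELECT COUNT({column}), AVG({column}), MIN({column}), MAX({column}) '
        f'FROM {table}'
    ).fetchone()
    return dict(zip(['count', 'mean', 'min', 'max'], row))

print(describe_column(con, 't', 'x'))
# {'count': 4, 'mean': 2.5, 'min': 1.0, 'max': 4.0}
```

Looping this over the tables and columns discovered via `sqlite_master` and `PRAGMA table_info` reproduces most of what `.describe()` reports, at a memory cost that is independent of table size. One caveat: table and column names are interpolated into the SQL here, which is fine for names read from the schema itself but should not be done with untrusted input.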
75,077,008
7,204,831
pytest using global package despite using virtual env
<p>Situation: on the linux PC, the global package version installed: x.y.z In the project directory, requirements.txt specifies a.b.c version for package. a.b.c &gt; x.y.z there is a bash script in the directory that sets up a virtual environment, installs the packages from requirements.txt in that virtual environment, and then runs pytest in the virtual environment.</p> <p>the virtual environment is set up like so in the bash script:</p> <pre><code>#!/usr/bin/env bash set -x python3 -m pip install --user virtualenv python3 -m virtualenv .env source .env/bin/activate </code></pre> <p>After this, pytest is run in the script which runs a bunch of test scripts. In one of these test scripts, a python script is called like so:</p> <pre><code>command=[&quot;/usr/bin/python&quot;, &quot;/path/to/script/script.py&quot;, ...(bunch of args)] process = subprocess.Popen(command) </code></pre> <p>When I run the bash script, I get an output that specifies that the requirement for package==a.b.c is satisfied in the virtual environment:</p> <pre><code>Requirement already satisfied: package==a.b.c in ./.env/lib/python3.8/site-packages (from -r requirements.txt (line 42)) (a.b.c) </code></pre> <p>However, when I get to the point in the test script that calls the above python script.py, I get an error related to the global package version x.y.z unable to find a hardware device. 
This error is specific to version x.y.z and is fixed by using an updated version a.b.c as specified in requirements.txt and is what I thought we were using in the virtual environment.</p> <p>The error references the global package as well:</p> <pre><code> File &quot;/path/to/script/script.py&quot;, line 116, in &lt;module&gt; run() File &quot;/path/to/script/script.py&quot;, line 82, in run device = scan_require_one(config='auto') File &quot;**/home/jenkins/.local/lib/python3.8/site-packages/package/driver.py**&quot;, line 1353, in scan_require_one raise RuntimeError(&quot;no devices found&quot;) RuntimeError: no devices found System information </code></pre> <p>whereas it should use the driver.py that's in .env (or so I thought). How should I get the test script to use the package from the virtual environment?</p>
<python><package><pytest><virtualenv>
2023-01-10 23:49:28
2
383
Samyukta Ramnath
75,076,887
13,119,030
Display output of numpy array on full length of screen
<p>I want to display the output of a numpy array in PyCharm such that the output is printed on a single line. So I don't want the output characters to be split up into two lines as in the example below. I want all the characters of the numpy array on the same line. Not like below:</p> <pre><code>[ 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. -1. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. -1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 1. 0. 0. -1. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 0.]
[-1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0. 1. 0. 0. 0. 0. 0.]
</code></pre> <p>How can I do this? What command, which setting?</p> <p><a href="https://i.sstatic.net/6wcCf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6wcCf.png" alt="enter image description here" /></a></p>
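One way to get this (independent of PyCharm) is numpy's own print settings: the default line width is 75 characters, and raising it stops the wrapping. A short sketch:

```python
import numpy as np

# Widen numpy's print line width (default is 75 characters) so each
# row of the array is printed on a single line instead of wrapping.
np.set_printoptions(linewidth=1000)  # any value longer than your widest row

a = np.zeros(30)
a[[5, 10, 16, 23]] = 1.0
a[27] = -1.0
s = np.array2string(a)
print(s)
```

This affects all subsequent printing in the session; the sample values above are illustrative, not from the screenshot.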
<python><arrays><numpy><pycharm><display>
2023-01-10 23:26:42
1
540
Pieter-Jan
75,076,748
14,509,604
Can't query Twitter API v2 with elevated acces project/app
<p>I'm trying to query Twitter API v2 with elevated research access via tweepy, but it still gives me a <code>403 Forbidden</code>. <a href="https://i.sstatic.net/ZvIhv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZvIhv.png" alt="elevated access proof of my app" /></a></p> <pre class="lang-py prettyprint-override"><code>client = tweepy.Client(
    bearer_token=BEARER_TOKEN,
    # consumer_key=CONSUMER_KEY,
    # consumer_secret=CONSUMER_SECRET,
    # access_token=OAUTH_ACCESS_TOKEN,
    # access_token_secret=OAUTH_ACCESS_TOKEN_SECRET,
)

test = client.search_all_tweets(query=&quot;#something&quot;, start_time = &quot;2023-01-01&quot;)
print(test)
</code></pre> <p>Response:</p> <blockquote> <p>Forbidden: 403 Forbidden When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal.</p> </blockquote> <p>I've tried all the combinations of the commented lines when creating the client; nothing works.</p> <p>What am I doing wrong?</p> <p>There is no problem with normal access endpoints and the <code>bearer_token</code> param. However, I can't do <strong>anything</strong> when using consumer_key/secret and access_token(_secret) instead. Maybe this is the real issue?</p> <p>Thank you in advance.</p>
<python><twitter><tweepy>
2023-01-10 23:05:34
1
329
juanmac
75,076,741
2,679,223
Numpy indexing oddity: How to subselect from multidimensional array and keep all axes
<p>I have a multi-dimensional array and two lists of integers, L_i and L_j, corresponding to the elements of axis-i and axis-j I want to keep. I also want to satisfy the following:</p> <ol> <li>Keep the original dimensionality of the array, even if L_i or L_j consists of just 1 element (in other words, I don't want a singleton axis to be collapsed)</li> <li>Preserve the order of the axes</li> </ol> <p>What is the cleanest way to do this?</p> <p>Here is a reproducible example that shows some of the unexpected behavior I've been getting:</p> <pre><code>import numpy as np

aa = np.arange(120).reshape(5,4,3,2)
aa.shape
### (5,4,3,2) as expected

aa[:,:,:,[0,1]].shape
### (5, 4, 3, 2) as expected

aa[:,:,:,[0]].shape
### (5,4,3,1) as desired. Notice that even though the [0] is one element,
### that last axis is preserved, which is what I want

aa[:,[1,3],:,[0]].shape
### (2, 5, 3) NOT WHAT I EXPECTED!!
### I was expecting (5, 2, 3, 1)
</code></pre> <p>Curious as to why numpy is collapsing and reordering axes, and also the best way to do my subsetting correctly.</p>
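Mixing several list indices triggers numpy's broadcast-based advanced indexing, which merges them into a single result axis. One way around this is `np.ix_`, which builds an open mesh so each list selects its axis independently. A sketch reproducing the example above:

```python
import numpy as np

aa = np.arange(120).reshape(5, 4, 3, 2)

# np.ix_ turns the per-axis index lists into broadcastable arrays of shapes
# (5,1,1,1), (1,2,1,1), (1,1,3,1), (1,1,1,1), so every axis is subselected
# independently and no singleton axis is collapsed or moved.
sub = aa[np.ix_(np.arange(5), [1, 3], np.arange(3), [0])]
print(sub.shape)  # (5, 2, 3, 1)
```

Full slices have to be spelled out as `np.arange(n)` because `np.ix_` only accepts index sequences, not `:`.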
<python><arrays><numpy>
2023-01-10 23:05:00
2
1,294
bigO6377
75,076,678
1,839,555
Overriding a method when creating a new object
<p>I'm using the <a href="https://pypi.org/project/watchdog/" rel="nofollow noreferrer">Watchdog library</a> to monitor different folders. There are two folders with two different behaviors:</p> <ol> <li>In folder <code>alpha</code>, when a new file is created, move it to <code>destination_alpha</code>.</li> <li>In folder <code>beta</code>, when a new file is created, pass it to a method.</li> </ol> <p>Here's the code snippet for the first behavior:</p> <pre><code>import shutil
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class FolderWatcher(FileSystemEventHandler):
    '''Overrides the on_created method to take action when a file is created.'''

    def on_created(self, event):
        shutil.move(event.src_path, '/destination_alpha')

event_handler = FolderWatcher()
folder_alpha_observer = Observer()
folder_alpha_observer.schedule(event_handler, '/folder_alpha')
folder_alpha_observer.start()

try:
    while True:
        time.sleep(1)
finally:
    folder_alpha_observer.stop()
    folder_alpha_observer.join()
</code></pre> <p>Can I reuse the same class for another <code>FolderWatcher</code> object with different behavior in the <code>on_created</code> method? Or do I need to create a new <code>FolderWatcher</code>-ish class with a different <code>on_created</code> method?</p> <pre><code>class SecondFolderWatcher(FileSystemEventHandler):
    '''Overrides the on_created method to take action when a file is created.'''

    def on_created(self, event):
        imported_method(event.src_path)

second_folder_watcher = SecondFolderWatcher()
folder_beta_observer = Observer()
folder_beta_observer.schedule(second_folder_watcher, '/folder_beta')
folder_beta_observer.start()

try:
    while True:
        time.sleep(1)
finally:
    folder_beta_observer.stop()
    folder_beta_observer.join()
</code></pre> <p>This doesn't seem very elegant, creating a whole new class for every <code>on_created</code> action I want to take. But I don't see a better way to do it. Your thoughts?</p>
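One alternative to a class per behavior is a single handler class that receives its `on_created` behavior as a callable. The sketch below is plain Python, not the watchdog API itself; in real use `ConfigurableHandler` would subclass `watchdog.events.FileSystemEventHandler`, and the folder path and `FakeEvent` are stand-ins for illustration:

```python
# One reusable handler whose per-folder behavior is injected as a callable,
# instead of writing a new subclass for every on_created action.
class ConfigurableHandler:
    def __init__(self, on_created_action):
        self._action = on_created_action

    def on_created(self, event):
        # delegate to the injected behavior
        self._action(event.src_path)

moved = []
alpha_handler = ConfigurableHandler(moved.append)  # e.g. a shutil.move wrapper in real code
beta_handler = ConfigurableHandler(print)          # e.g. imported_method in real code

class FakeEvent:
    src_path = "/folder_alpha/file.txt"

alpha_handler.on_created(FakeEvent())
print(moved)
```

Each observer is then scheduled with a differently configured instance of the same class.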
<python><inheritance><overriding>
2023-01-10 22:54:56
1
1,508
Bagheera
75,076,523
3,462,509
Python DFS (CS 188 Berkeley Pacman)
<p>I am not a Berkeley student, I'm just taking this course for fun (so you aren't helping me cheat). I've implemented their <a href="https://inst.eecs.berkeley.edu/%7Ecs188/fa18/project1.html" rel="nofollow noreferrer">project 1</a>, but I am failing the autograder for Question 1 (DFS) and only question 1. I'm always skeptical of the &quot;it's the grader that's wrong!&quot; statements, but in this case... I think there is a mistake in the autograder :)</p> <p>The following is my implementation, which fails the autograder:</p> <pre class="lang-py prettyprint-override"><code>def depthFirstSearch(problem):
    &quot;&quot;&quot;
    Search the deepest nodes in the search tree first.

    Your search algorithm needs to return a list of actions that reaches the
    goal. Make sure to implement a graph search algorithm.

    To get started, you might want to try some of these simple commands to
    understand the search problem that is being passed in:
    &quot;&quot;&quot;
    start = problem.getStartState()
    frontier = util.Stack()
    frontier.push([(start, None)])
    visited = {start}

    while frontier:
        path = frontier.pop()  # path = [(pt1,dir1),(pt2,dir2)...]
        pt0, dir0 = state = path[-1]
        if problem.isGoalState(pt0):
            p = [p[1] for p in path if p[1]]  # return dirs only, not points; first dir is &quot;None&quot; so filter that out
            return p
        for pt1, dir1, cost1 in problem.getSuccessors(pt0):
            if pt1 not in visited:
                visited.add(pt1)
                frontier.push(path + [(pt1, dir1)])
</code></pre> <p>And this code passes (identical except for the two lines marked added/removed):</p> <pre class="lang-py prettyprint-override"><code>def depthFirstSearch(problem):
    &quot;&quot;&quot;
    Search the deepest nodes in the search tree first.

    Your search algorithm needs to return a list of actions that reaches the
    goal. Make sure to implement a graph search algorithm.

    To get started, you might want to try some of these simple commands to
    understand the search problem that is being passed in:
    &quot;&quot;&quot;
    start = problem.getStartState()
    frontier = util.Stack()
    frontier.push([(start, None)])
    visited = {start}

    while frontier:
        path = frontier.pop()  # path = [(pt1,dir1),(pt2,dir2)...]
        pt0, dir0 = state = path[-1]
        if problem.isGoalState(pt0):
            p = [p[1] for p in path if p[1]]  # return dirs only, not points; first dir is &quot;None&quot; so filter that out
            return p
        visited.add(pt0)  # ADDED
        for pt1, dir1, cost1 in problem.getSuccessors(pt0):
            if pt1 not in visited:
                # visited.add(pt1)  # REMOVED
                frontier.push(path + [(pt1, dir1)])
</code></pre> <p>The only difference between the two is when you mark a node as visited. In the latter version, you mark a node upon popping it from the stack, then you generate its neighbors, adding them to the frontier if you haven't already visited them. In the former, you generate neighbors <em>first</em>, and then for <em>each</em> neighbor you check if it has been visited, adding it to the visited set and frontier if not.</p> <p>Both versions will find the same path, and both versions are correct as I understand it. If anything, the first version is actually a better implementation because it avoids re-queuing already visited states, as outlined in <a href="https://stackoverflow.com/questions/45623722/marking-node-as-visited-on-bfs-when-dequeuing">this answer</a> to a similar question. You could avoid the re-queuing issue inherent in the second implementation by also checking that a state is not on the frontier before enqueuing, but that's a lot of wasted work to scan the frontier for every neighbor.</p> <p>Is there some conceptual misunderstanding I've made here, or is my skepticism warranted? I've tried to dissect the autograder test cases, but there is a lot of magic going on and it's tough to follow the <code>evaluate</code> function in <code>autograder.py</code>. From what I <em>can</em> tell though, this is the test that fails:</p> <pre><code># Graph where BFS finds the optimal solution but DFS does not
class: &quot;GraphSearchTest&quot;
algorithm: &quot;depthFirstSearch&quot;

diagram: &quot;&quot;&quot;
/-- B
|   ^
|   |
|  *A --&gt;[G]
|   |     ^
|   V     |
\--&gt;D ----/

A is the start state, G is the goal.  Arrows mark possible transitions
&quot;&quot;&quot;
# The following section specifies the search problem and the solution.
# The graph is specified by first the set of start states, followed by
# the set of goal states, and lastly by the state transitions which are
# of the form:
#      &lt;start state&gt; &lt;actions&gt; &lt;end state&gt; &lt;cost&gt;
graph: &quot;&quot;&quot;
start_state: A
goal_states: G
A 0:A-&gt;B B 1.0
A 1:A-&gt;G G 2.0
A 2:A-&gt;D D 4.0
B 0:B-&gt;D D 8.0
D 0:D-&gt;G G 16.0
&quot;&quot;&quot;
</code></pre> <p>and the reason it fails is that the path I find just goes straight from <code>A-&gt;G</code>, whereas the grader wants to see <code>A-&gt;D-&gt;G</code> (in both cases <code>{A,D}</code> is the visited set). To me this would be entirely dependent on the order the neighbors are added to the fringe. If <code>G</code> is the final neighbor of <code>A</code> added to the fringe (a stack here), my solution makes sense. If not, it doesn't. This is where I got stuck: I couldn't tell from the test cases what order neighbors were being added in, due to the metaprogramming magic.</p> <h3> EDIT </h3> Shortly after posting this I found a file that allowed me to inspect the failing test case, after some modification. For future learners, it's in `graphProblem.py`. After loading up the graph mentioned above using this file, and adding some print statements to my DFS, here are the results: <p>My version:</p> <pre><code>Step 1: Frontier = [A], Visited = {A}
Step 2: Frontier = [A-&gt;B,A-&gt;G,A-&gt;D], Visited = {A,G,D,B}
Step 3: Frontier = [A-&gt;B,A-&gt;G], Visited = {A,G,D,B}
Step 4: Goal
</code></pre> <p>Between steps 2 and 3 is the critical difference. Ending step 2 we have A-&gt;D at the top of the stack, so starting step 3 we are in state D. However, since we've recorded all of the nodes as visited as soon as we generated them as successors (in step 2), we do not add any additional paths to the frontier, and then on step 4 we pop A-&gt;G and arrive at the goal.</p> <p>Contrast this with the &quot;correct&quot; sequence of actions the autograder requires:</p> <pre><code>Step 1: Frontier = [A], Visited = {A}
Step 2: Frontier = [A-&gt;B,A-&gt;G,A-&gt;D], Visited = {A}
Step 3: Frontier = [A-&gt;B,A-&gt;G,A-&gt;D-&gt;G], Visited = {A,D}
Step 4: Goal
</code></pre> <p>Here, since we do not add <code>G</code> to the visited set upon generating successors in step 2 (only when dequeuing do we add items to the visited set), we are able to extend path <code>A-&gt;D</code> to add <code>G</code>, despite the fact that <code>G</code> is already on the frontier.</p> <p>So, <strong>the autograder is wrong</strong>.</p>
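The two traces can be reproduced outside the framework. The sketch below is a simplified stand-in for the Berkeley code (plain lists instead of `util.Stack`, no costs), running DFS on the quoted test graph with both placements of the visited-marking:

```python
# Graph from the autograder test case: A->B, A->G, A->D, B->D, D->G,
# with successors generated in the listed order.
graph = {"A": ["B", "G", "D"], "B": ["D"], "D": ["G"], "G": []}

def dfs(start, goal, mark_on_generate):
    """Graph-search DFS; mark_on_generate toggles where nodes join `visited`."""
    frontier = [[start]]
    visited = {start}
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if node == goal:
            return path
        if not mark_on_generate:
            visited.add(node)  # mark when popping (the "passing" version)
        for succ in graph[node]:
            if succ not in visited:
                if mark_on_generate:
                    visited.add(succ)  # mark when generating (the "failing" version)
                frontier.append(path + [succ])
    return None

print(dfs("A", "G", mark_on_generate=True))   # ['A', 'G']
print(dfs("A", "G", mark_on_generate=False))  # ['A', 'D', 'G']
```

Both runs visit the same states; only the returned path differs, which is exactly the discrepancy the autograder flags.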
<python><artificial-intelligence><depth-first-search>
2023-01-10 22:29:17
1
2,792
Solaxun
75,076,441
8,065,797
retrieving xml element value by searching the element by substring in its name
<p>I need to retrieve an XML element's value by searching for the element by a substring of its name, e.g. I need to get the value of every element in an XML file whose name contains <code>client</code>.</p> <p>I found a way to find an element with XPath by an attribute, but I haven't found a way to do it by element name.</p>
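With lxml this can presumably be done in XPath itself via `local-name()`, e.g. `tree.xpath("//*[contains(local-name(), 'client')]")`. With only the standard library, whose ElementTree XPath subset lacks `local-name()`, iterating and matching on the tag works. A sketch with made-up sample XML:

```python
import xml.etree.ElementTree as ET

xml = "<root><clientName>Acme</clientName><clientId>7</clientId><other>x</other></root>"
root = ET.fromstring(xml)

# el.tag is the element name (namespace-free here), so a plain substring
# test selects every element whose name contains "client".
values = [el.text for el in root.iter() if "client" in el.tag]
print(values)  # ['Acme', '7']
```

For namespaced documents the tag would carry a `{uri}` prefix, so the match may need to strip that first.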
<python><xpath><lxml><elementtree>
2023-01-10 22:17:53
1
529
JanFi86
75,075,770
5,024,631
Python while loop continues beyond the while condition
<p>My while loop doesn't stop when it's supposed to. Obviously there's something fundamental I'm missing here.</p> <p>Here is my code:</p> <pre><code>import time
import datetime
import pandas as pd

period = 5

start = pd.to_datetime('2022-01-01')
end_final = pd.to_datetime('2022-01-31')

sd = start

while start &lt; end_final:
    ed = sd + datetime.timedelta(period)
    print('This is the start of a chunk')
    print(sd)
    print(ed)
    print('This is the end of a chunk')
    print('+*************************')
    sd = ed + datetime.timedelta(2)
</code></pre> <p>which prints dates until the 10th of April 2262 and then gives me the error:</p> <pre><code>OverflowError: Python int too large to convert to C long
</code></pre> <p>But the while loop should stop at the end of January 2022. Any ideas?</p>
<python><date><while-loop>
2023-01-10 20:58:32
2
2,783
pd441
75,075,754
12,226,377
Using pandas to identify which are the 5 point and 10 point scale survey questions in a dataframe
<p>I have a unique situation where my dataset contains multiple survey responses that were asked on two different scales, primarily a 5-point scale and a 10-point scale, and I have consolidated all of these responses in one dataframe. Now I would like to create a new column in my dataframe that, by looking at the question text, identifies whether it's a 5-point-scale or a 10-point-scale question. For a question where no scale numbers are mentioned (neither 1-5 nor 1-10), the output should be blank. My dataframe looks like:</p> <pre><code>Question_Text
on a scale of 1 – 10 how well would you rate the following statements.
on a scale of 1 to 10 how well would you rate the following statements.
on a scale of 1-10 how well would you rate the following statements.
on a scale of 1 10 how well would you rate the following statements.
on a scale of 1 – 5 how well would you rate the following statements.
on a scale of 1 to 5 how well would you rate the following statements.
on a scale of 1-5 how well would you rate the following statements.
on a scale of 1 5 how well would you rate the following statements.
please tell us how ready you feel for this (0 - 6 not ready, 6-8 somewhat ready, and 9-10 ready)
how useful did you find today’s webinar?
</code></pre> <p>and what I would like to achieve looks like:</p> <pre><code>Question_Text                                                     Type_of_Question
on a scale of 1 – 10 how well would you rate the following        10 point scale
on a scale of 1 to 10 how well would you rate the following       10 point scale
on a scale of 1 to 5 how well would you rate the following        5 point scale
please tell us how ready you feel for this (0 - 6 not ready)...   10 point scale
how useful did you find today’s webinar?                          ...
</code></pre> <p>Is there any possible way to achieve this? Can a pattern be identified using regex that can take care of the different sorts of inputs I have shown above?</p>
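A simple heuristic covers all the sample rows above: any question mentioning a standalone "10" is a 10-point item, otherwise a standalone "5" marks a 5-point item, otherwise blank. This is a sketch tuned only to these examples (it deliberately lets the bare "10" in "9-10 ready" classify the readiness question), so real data may need tighter patterns:

```python
import re

def scale_type(question):
    # \b10\b matches "1 to 10", "1-10", "1 10", and "9-10" alike.
    if re.search(r"\b10\b", question):
        return "10 point scale"
    if re.search(r"\b5\b", question):
        return "5 point scale"
    return ""

print(scale_type("on a scale of 1 to 10 how well would you rate the following statements."))
```

Applied to the dataframe, the new column would be something like `df["Type_of_Question"] = df["Question_Text"].apply(scale_type)`.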
<python><pandas><survey>
2023-01-10 20:56:18
2
807
Django0602
75,075,713
3,738,936
Cannot Insert Pandas dataframe in to PGsql with Python
<p>I am trying to use a pandas dataframe to insert data into SQL. I am using pandas because there are some columns that I need to drop before I insert the data into the SQL table. The database is in the cloud, but that isn't the issue. I've been able to create static strings, insert them into the database, and it works fine.</p> <p>The database is a Postgres DB, using the pg8000 driver.</p> <p>In this example, I am pulling out one column and one value and trying to insert it into the database.</p> <pre><code>connection = db_connection.connect()

for i, rowx in data.iterrows():
    with connection as db_conn:
        name_column = ['name']
        name_value = [data.iloc[0][&quot;name&quot;]]

        cols = &quot;`,`&quot;.join([str(i) for i in name_column])
        sql = &quot;INSERT INTO person ('&quot; + cols + &quot;') VALUES ( &quot; + &quot; %s,&quot; * (len(name_value) - 1) + &quot;%s&quot; + &quot; )&quot;
        db_conn.execute(sql, tuple(name_value))
</code></pre> <p>The error I get is usually something related to the formatting of <code>cols</code>:</p> <pre><code>Error: 'syntax error at or near &quot;\'name\'&quot;
</code></pre> <p>variable cols:</p> <pre><code>(Pdb) cols
'name'
</code></pre> <p>I guess it's upset that 'name' is a string, but that seems odd.</p> <p>variable sql:</p> <pre><code>&quot;INSERT INTO persons ('name') VALUES ( %s )&quot;
</code></pre> <p>Not a fan of the string encapsulation; I got this from a guide: <a href="https://www.dataquest.io/blog/sql-insert-tutorial/" rel="nofollow noreferrer">https://www.dataquest.io/blog/sql-insert-tutorial/</a></p> <p>Just looking for a reliable way to script this insert from pandas to pg.</p>
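One likely source of the syntax error: Postgres quotes identifiers with double quotes (or not at all), never with single quotes or MySQL-style backticks, so `'name'` in the column list is parsed as a string literal. A sketch of building the statement with proper identifier quoting (no database needed to see the resulting string):

```python
# Postgres identifier quoting uses double quotes; single quotes would make
# 'name' a string literal and cause a syntax error in the column list.
columns = ["name"]
col_list = ", ".join(f'"{c}"' for c in columns)
placeholders = ", ".join(["%s"] * len(columns))
sql = f"INSERT INTO person ({col_list}) VALUES ({placeholders})"
print(sql)  # INSERT INTO person ("name") VALUES (%s)
```

The values themselves still go through the driver's `%s` parameters rather than string formatting; only the column names are interpolated here, which assumes they come from a trusted list, not user input.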
<python><pandas><postgresql><sqlalchemy>
2023-01-10 20:52:59
1
986
Don M
75,075,546
12,983,543
"celery": executable file not found in $PATH
<p>I am trying to run my Django redis celery project with <code>docker-compose</code>, but there is no way it will start. Here is the docker-compose file:</p> <pre><code>version: &quot;3.9&quot;
services:
  db:
    container_name: my_table_postgres
    image: postgres
    ports:
      - 5432/tcp
    volumes:
      - my_table_postgres_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=my_table_postgres
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=Ieyh5&amp;RIR48!&amp;8fc

  redis:
    container_name: redis
    image: redis

  my_table:
    container_name: my_table
    build: .
    command: python manage.py runserver 0.0.0.0:5000
    volumes:
      - .:/api
    ports:
      - &quot;5000:5000&quot;
    depends_on:
      - db
      - redis

  celery:
    restart: always
    build:
      context: .
    command: ['celery', '-A', 'mytable', '-l', 'INFO']
    volumes:
      - .:/api
    container_name: celery
    depends_on:
      - db
      - redis
      - my_table

volumes:
  my_table_postgres_db:
  redis_data:
</code></pre> <p>And this is the Dockerfile:</p> <pre><code>FROM python:3.11

# Managing user
RUN mkdir /mytable
RUN useradd -ms /bin/bash devuser
USER devuser

COPY . /mytable
WORKDIR /mytable

# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1

# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1

COPY --chown=devuser:devuser requirements.txt requirements.txt
RUN pip install --user --default-timeout=1000 --upgrade pip
RUN pip install --user --default-timeout=1000 -r requirements.txt

EXPOSE 5000

#ENV PATH=&quot;/home/devuser/.local/bin&quot;

COPY --chown=devuser:devuser . .

# Run manage.py runserver when the container launches
CMD [&quot;python3&quot;, &quot;manage.py&quot;, &quot;runserver&quot;]
</code></pre> <p>And here is the full error that I get when I try to run <code>docker-compose up --build</code>:</p> <pre><code>Recreating eda7aad3f246_celery ... error

ERROR: for eda7aad3f246_celery  Cannot start service celery: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: &quot;celery&quot;: executable file not found in $PATH: unknown

ERROR: for celery  Cannot start service celery: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: &quot;celery&quot;: executable file not found in $PATH: unknown
ERROR: Encountered errors while bringing up the project.
</code></pre> <p>Does someone know how to fix it? I read several similar questions on this forum, but the only thing that I changed was going from the one-line command to the bracketed command in docker-compose, and that did not work.</p> <p>Any suggestion? Thank you</p>
<python><docker><celery>
2023-01-10 20:35:41
2
614
Matteo Possamai
75,075,284
2,817,520
What are the caveats of namespace packages
<p>Here in <a href="https://packaging.python.org/en/latest/guides/packaging-namespace-packages/" rel="nofollow noreferrer">Packaging namespace packages</a>, it is mentioned that</p> <blockquote> <p>namespace packages can be useful for a large collection of loosely-related packages (such as a large corpus of client libraries for multiple products from a single company). <strong>However, namespace packages come with several caveats and are not appropriate in all cases.</strong> A simple alternative is to use a prefix on all of your distributions such as import <code>mynamespace_subpackage_a</code> (you could even use import <code>mynamespace_subpackage_a</code> as <code>subpackage_a</code> to keep the import object short).</p> </blockquote> <p>But there are no examples. Is it better to use the alternative?</p>
<python><packaging>
2023-01-10 20:06:48
1
860
Dante
75,075,246
1,418,326
Pickle Load Custom Object Parameters Misaligned
<p>Car.py:</p> <pre><code>class Car(object):
    def __init__(self, year=2023, speed=50):
        self.year = year
        self.speed = speed
        self.word_index = {}
</code></pre> <p>Util.py:</p> <pre><code>from custom.Car import Car

c1 = Car(2020, 40)

picklefile = open('car.pkl', 'wb')
pickle.dump(c1, picklefile)

with open('car.pkl', 'rb') as f:
    c2 = Car(pickle.load(f))
</code></pre> <p>After loading the file, the entire Car object is assigned to self.year, so I end up with: c2.year: the serialized Car object; c2.speed: the default speed of 50 instead of 40. What am I missing?</p>
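`pickle.load` already returns a fully reconstructed `Car`, so wrapping it in `Car(...)` passes that object in as the `year` argument. A minimal sketch of the round trip (using in-memory `dumps`/`loads` in place of the file, and a trimmed-down `Car`):

```python
import pickle

class Car:
    def __init__(self, year=2023, speed=50):
        self.year = year
        self.speed = speed

c1 = Car(2020, 40)
payload = pickle.dumps(c1)   # equivalent to pickle.dump(c1, file)

c2 = pickle.loads(payload)   # already a Car; no Car(...) wrapper needed
print(c2.year, c2.speed)     # 2020 40
```

With files, the same applies: `c2 = pickle.load(f)` on its own restores the object with all its attributes.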
<python><pickle>
2023-01-10 20:02:11
1
1,707
topcan5
75,075,234
6,392,779
Regex to match first occurrence of non alpha-numeric characters
<p>I am parsing some user input to make a basic Discord bot for assigning roles and such. I am trying to generalize some code to reuse for different similar tasks (doing similar things in different categories/channels).</p> <p>Generally, I am looking for a substring (the category), then taking the string after it as that category's value. I am looking line by line for my category, replacing the &quot;category&quot; substring and returning a stripped version. However, what I have now also replaces any space in the &quot;value&quot; string.</p> <p>Originally the string looks like this:</p> <pre><code>Gamertag : 00test gamertag
</code></pre> <p>What I want to do is preserve the spaces in the value. The regex I am trying to write should match all non-alphanumeric characters up to the first letter.</p> <p>My return already matches non-alphas, but I can't figure out how to get just the first group; it looks like it should simply be a matter of adding a ? to make the operator lazy, but I'm not sure. Example code and strings are below (the regex I want to replace is the final print string).</p> <p>String I am working with:</p> <pre><code>- 00test Gamertag #(or any non-alpha delimiter)
</code></pre> <p>Desired result (by matching and stripping the extra characters):</p> <pre><code>00test Gamertag #(remove leading space and any non-alpha characters before the first words)
</code></pre> <p>The regex I am trying should match all non-alphanumeric characters until the first letter. It should be something like the following, which is close to what I use to strip non-alphas now, but it matches all the groups, not just the first; I want to match only the first group of non-alphas in the string and strip that part using re.sub:</p> <pre><code>\W+?
</code></pre> <p><a href="https://www.online-python.com/gDVhZrnmlq" rel="nofollow noreferrer">https://www.online-python.com/gDVhZrnmlq</a></p> <p>Thank you!</p>
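Anchoring the pattern at the start of the string makes `re.sub` strip only the leading run of non-alphanumerics and leave interior spaces alone. A sketch using the sample strings from the question:

```python
import re

# ^[\W_]+ matches non-alphanumeric characters (including underscore) only
# at the start of the string, so interior spaces in the value survive.
cleaned = re.sub(r"^[\W_]+", "", "- 00test Gamertag")
print(cleaned)  # 00test Gamertag

# Same idea after splitting a "Category : value" line on the delimiter:
line = "Gamertag : 00test gamertag"
category, _, value = line.partition(":")
value = re.sub(r"^[\W_]+", "", value)
print(value)  # 00test gamertag
```

Without the `^` anchor, `re.sub` replaces every run of non-alphanumerics, which is what was eating the spaces inside the value.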
<python><regex>
2023-01-10 20:00:55
2
901
nick
75,075,099
12,014,637
TypeError: Failed to convert elements of (None, -1, 3, 1) to Tensor. Consider casting elements to a supported type
<p>I have a custom tensorflow layer which works fine by generating an output, but it throws an error when used with the Keras functional API. Here is the code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input

# --------- Custom Layer -------
def scaled_dot_product_attention(query, key, value, mask=None):
    key_dim = tf.cast(tf.shape(key)[-1], tf.float32)
    scaled_scores = tf.matmul(query, key, transpose_b=True) / np.sqrt(key_dim)

    if mask is not None:
        scaled_scores = tf.where(mask==0, -np.inf, scaled_scores)

    softmax = tf.keras.layers.Softmax()
    weights = softmax(scaled_scores)
    return tf.matmul(weights, value), weights

class MultiHeadSelfAttention(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads):
        super(MultiHeadSelfAttention, self).__init__()
        self.d_model = d_model
        self.num_heads = num_heads
        self.d_head = self.d_model // self.num_heads

        self.wq = tf.keras.layers.Dense(self.d_model)
        self.wk = tf.keras.layers.Dense(self.d_model)
        self.wv = tf.keras.layers.Dense(self.d_model)

        # Linear layer to generate the final output.
        self.dense = tf.keras.layers.Dense(self.d_model)

    def split_heads(self, x):
        batch_size = x.shape[0]
        split_inputs = tf.reshape(x, (batch_size, -1, self.num_heads, self.d_head))
        return tf.transpose(split_inputs, perm=[0, 2, 1, 3])

    def merge_heads(self, x):
        batch_size = x.shape[0]
        merged_inputs = tf.transpose(x, perm=[0, 2, 1, 3])
        return tf.reshape(merged_inputs, (batch_size, -1, self.d_model))

    def call(self, q, k, v, mask):
        qs = self.wq(q)
        ks = self.wk(k)
        vs = self.wv(v)

        qs = self.split_heads(qs)
        ks = self.split_heads(ks)
        vs = self.split_heads(vs)

        output, attn_weights = scaled_dot_product_attention(qs, ks, vs, mask)
        output = self.merge_heads(output)

        return self.dense(output)

# ----- Testing with simulated data -------
x = np.random.rand(1,2,3)
values_emb = MultiHeadSelfAttention(3, 3)(x,x,x, mask = None)
print(values_emb)
</code></pre> <p>This generates the following output:</p> <pre><code>tf.Tensor(
[[[ 0.50706375 -0.3537539  -0.23286441]
  [ 0.5081617  -0.3548487  -0.23382033]]], shape=(1, 2, 3), dtype=float32)
</code></pre> <p>But when I use it in the Keras functional API it doesn't work. Here is the code:</p> <pre class="lang-py prettyprint-override"><code>x = Input(shape=(2,3))
values_emb = MultiHeadSelfAttention(3, 3)(x,x,x, mask = None)
model = Model(x, values_emb)
model.summary()
</code></pre> <p>This is the error:</p> <pre><code>TypeError: Failed to convert elements of (None, -1, 3, 1) to Tensor. Consider casting elements to a supported type.
</code></pre> <p>Does anyone know why this happens and how I can fix it?</p>
<python><tensorflow><keras><deep-learning>
2023-01-10 19:46:39
0
618
Amin Shn
75,075,084
13,488,334
PyTest - Specify cleanup tests in conftest.py
<p>I am testing a service that requires starting and shutting down a gRPC server via a client's request. In my set of integration tests, I need to specify a set of pre-test and post-test actions that should happen before any given test is run within the set. Ideally, I would like to keep these pre/post-test methods in conftest.py or organize them into their own class within a separate module.</p> <p>I can specify the first test that should run (the test that starts the server) by doing the following within conftest.py:</p> <pre class="lang-py prettyprint-override"><code>@pytest.fixture(scope=&quot;session&quot;, autouse=True)
def test_start_server():
    # code to start server
</code></pre> <p>The problem is that when I execute another test module, only the <code>test_start_server</code> function is executed and not the subsequent <code>test_shutdown_request</code> function further down in the file:</p> <pre class="lang-py prettyprint-override"><code>def test_shutdown_request():
    # code to shutdown server
</code></pre> <p>Is there any way to specify the last test (post-test action) to be run?<br /> If possible, I don't want to include any 3rd-party dependencies or plugins, as my project already has enough.</p>
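pytest's yield fixtures handle this without extra plugins: everything before the `yield` runs as setup, everything after runs as teardown once the last test in scope finishes, following the same shape as a context manager. A stand-in sketch of that ordering (no pytest required to run it):

```python
import contextlib

# Mirrors a session-scoped pytest fixture: code before yield is setup,
# code after yield is teardown, guaranteed to run after all "tests".
@contextlib.contextmanager
def server_lifecycle(events):
    events.append("start server")  # before the first test in scope
    yield
    events.append("stop server")   # after the last test in scope

events = []
with server_lifecycle(events):
    events.append("test 1")
    events.append("test 2")
print(events)
```

In conftest.py the equivalent would be the existing `@pytest.fixture(scope="session", autouse=True)` with the server start before a `yield` and the shutdown code after it, removing the need for a `test_shutdown_request` test function entirely.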
<python><pytest><integration-testing>
2023-01-10 19:45:25
1
394
wisenickel
75,075,061
2,817,520
Python namespace-subpackage naming convention
<p>According to <a href="https://peps.python.org/pep-0008/#package-and-module-names" rel="nofollow noreferrer">PEP 8 – Style Guide for Python Code</a>, Python packages should also have short, all-lowercase names, <strong>although the use of underscores is discouraged</strong>. But <a href="https://packaging.python.org/en/latest/guides/packaging-namespace-packages/" rel="nofollow noreferrer">Packaging namespace packages</a> suggests <code>mynamespace_subpackage_a</code> as an alternative to namespace packages. Why do they not follow their own conventions? Is it OK to use underscores in this situation? Are there any examples of packages that use underscores in their names?</p>
<python>
2023-01-10 19:42:49
0
860
Dante
75,074,877
3,040,845
Get 'Can't get attribute" error while loading my pickel file
<p>I am trying to use pickle to save and load my ML models, but I get an error. Here is a simplified version of my code to save the model:</p> <pre><code>import pickle

def test(x,y):
    return x+y

filename = 'test.pkl'
pickle.dump(test, open(filename, 'wb'))
</code></pre> <p>I can load the pickle file from the same notebook that I am creating it in, but if I close the notebook and try to load the pickle in a new one with the code below:</p> <pre><code>import pickle

filename = 'test.pkl'
loaded_model = pickle.load(open(filename, 'rb'))
</code></pre> <p>It gets me this error:</p> <pre><code>---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[2], line 2
      1 filename = 'test.pkl'
----&gt; 2 loaded_model = pickle.load(open(filename, 'rb'))

AttributeError: Can't get attribute 'test' on &lt;module '__main__'&gt;
</code></pre>
<python><pickle>
2023-01-10 19:23:19
1
1,097
Amir
75,074,870
7,158,458
Using multi threading to simulate 6 users logging in using Selenium
<p>I am trying to create a function using selenium to log in to a page in a different tab for each credential in a dictionary of usernames and passwords. I have 6 users, so I will have 6 tabs open. I want to keep track of each tab I have open so I can navigate back to a specific tab.</p> <p>This is what I have tried:</p> <pre><code>users = {'userA': 'pass',
         'userB': 'pass2',
         'userC': 'pass3',
         'userD': 'pass4',
         'userE': 'pass5',
         'userF': 'pass6',
         }

tab_names = []
count = 0

for key in users:
    try:
        element = WebDriverWait(browser, 100).until(
            EC.presence_of_element_located((By.ID, &quot;name-field&quot;))
            # name_input = browser.find_element(By.ID, 'name-field')
        )
        element.send_keys(key)
        browser.find_element(By.ID, 'password-field').send_keys(users[key])
        second_element = WebDriverWait(browser, 100).until(
            EC.presence_of_element_located((By.ID, 'login-button'))
        )
        second_element.click()
        sleep(3)
        browser.execute_script(&quot;window.open('https://somepage/login')&quot;)
        count += 1
        tab_names.append(str(count))
        # sleep(10)
        browser.find_element(By.ID, 'login-button').click()
    finally:
        browser.quit()
</code></pre> <p>My code is not doing what I intend: this function opens a tab, logs in with specific credentials, and closes the window without opening a new tab to log in with the next credential.</p>
<python><selenium><selenium-webdriver>
2023-01-10 19:22:39
0
2,515
Emm
75,074,766
1,974,918
polars groupby on categorical produces nulls
<p><strong>Update:</strong> This issue is no longer present. The query produces the expected values.</p> <hr /> <p>I ran into an issue with polars where after a groupby on a categorical variable the labels are null</p> <pre class="lang-py prettyprint-override"><code>import polars as pl scores = pl.DataFrame({ 'zone': ['North', 'North', 'South', 'South'], 'score': [78, 39, 76, 56] }).with_columns(pl.col(&quot;zone&quot;).cast(pl.Categorical)) scores.group_by(&quot;zone&quot;).len() </code></pre> <p>This produces the output below with null values for the levels. Is this expected? If so, is there a fix? Note that this doesn't happen when <code>zone</code> is set to a string</p> <pre><code>shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ zone ┆ len β”‚ β”‚ --- ┆ --- β”‚ β”‚ cat ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•ͺ═════║ β”‚ null ┆ 2 β”‚ β”‚ null ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ </code></pre>
<python><python-polars>
2023-01-10 19:11:57
0
5,289
Vincent
75,074,596
7,437,221
PuLP linear programming problem is infeasible
<p>I am writing a Linear Programming problem to help generate daily fantasy lineups.</p> <p>The goal is:</p> <ul> <li>Maximize Fpts projection</li> </ul> <p>The constraints are:</p> <ul> <li>Salary (Maximum 50,000)</li> <li>Positionality (Have to have 1 PG, 1 SG, 1 SF, 1 PF, 1 C, 1 G, 1 F and 1 UTIL player) where &quot;G&quot; players are either PG or SG, &quot;F&quot; are either SF or PF players, and &quot;UTIL&quot; are any position players</li> </ul> <p>I have my problem set up like this:</p> <pre class="lang-py prettyprint-override"><code># Create problem prob = plp.LpProblem(&quot;NBA_DK&quot;, plp.LpMaximize) # Objective function prob += plp.lpSum([fpts[i] * player_vars[i] for i in players]), 'Objective - Maximize Fpts' # Salary constraint prob += plp.lpSum([salaries[i] * player_vars[i] for i in players]) &lt;= 50000, 'Salary Constraints' # Position constraints for curr_pos in ['PG', 'SG', 'SF', 'PF', 'C', 'G', 'F', 'UTIL']: prob += plp.lpSum([player_vars[player] for player in players if curr_pos in positions[player]]) == 1, f'{curr_pos} Positional Constraint' </code></pre> <p>Here are some of the example variables being referenced above</p> <p><strong>player_vars</strong></p> <p><code>player_vars = plp.LpVariable.dicts(&quot;Players&quot;, players, lowBound=0, cat='Binary')</code></p> <p><strong>projections</strong></p> <pre><code>projections = pd.read_csv('../dk_data/projections.csv', thousands=',') ... 
Name Fpts Position Team Opponent Minutes Salary Pts/$ Value 0 Nikola Jokic 61.52 C/UTIL DEN LAL 33.9 11700 5.26 3.02 1 Giannis Antetokounmpo 58.09 PF/C/F/UTIL MIL NYK 36.1 12000 4.84 -1.91 2 LeBron James 53.05 SF/PF/F/UTIL LAL DEN 35.4 10900 4.87 -1.45 3 Jayson Tatum 52.94 SF/PF/F/UTIL BOS CHI 37.1 10700 4.95 -0.56 4 Domantas Sabonis 52.62 C/UTIL SAC ORL 36.7 10500 5.01 0.12 5 Ja Morant 51.01 PG/G/UTIL MEM SAS 32.6 10300 4.95 -0.49 6 Julius Randle 49.74 PF/F/UTIL NYK MIL 38.0 10400 4.78 -2.26 7 Kristaps Porzingis 44.76 PF/C/F/UTIL WAS NOP 35.0 9400 4.76 -2.24 8 De'Aaron Fox 43.40 PG/G/UTIL SAC ORL 35.3 8700 4.99 -0.10 9 DeMar DeRozan 43.21 SF/PF/F/UTIL CHI BOS 36.8 9000 4.80 -1.79 10 Jaylen Brown 42.45 SG/SF/G/F/UTIL BOS CHI 33.7 9500 4.47 -5.05 11 CJ McCollum 42.05 PG/G/UTIL NOP WAS 35.0 8500 4.95 -0.45 12 Jalen Brunson 41.20 PG/G/UTIL NYK MIL 36.2 8100 5.09 0.70 13 Paolo Banchero 40.34 SF/PF/F/UTIL ORL SAC 34.3 8200 4.92 -0.66 14 Kyle Kuzma 39.88 SF/PF/F/UTIL WAS NOP 34.8 8400 4.75 -2.12 15 Nikola Vucevic 37.51 C/UTIL CHI BOS 32.0 8000 4.69 -2.49 16 Zach LaVine 37.45 SG/SF/G/F/UTIL CHI BOS 34.9 7500 4.99 -0.05 17 Russell Westbrook 37.11 PG/G/UTIL LAL DEN 28.9 7700 4.82 -1.39 18 Jaren Jackson Jr. 36.08 PF/C/F/UTIL MEM SAS 28.3 6800 5.31 2.08 19 Keldon Johnson 35.44 SF/F/UTIL SAS MEM 31.7 7000 5.06 0.44 20 Jrue Holiday 35.32 PG/G/UTIL MIL NYK 32.4 7400 4.77 -1.68 21 Desmond Bane 35.18 SG/G/UTIL MEM SAS 29.2 6600 5.33 2.18 22 Jonas Valanciunas 34.71 C/UTIL NOP WAS 26.4 6600 5.26 1.71 23 Jamal Murray 34.59 PG/G/UTIL DEN LAL 31.9 6800 5.09 0.59 24 Tre Jones 32.40 PG/G/UTIL SAS MEM 30.2 6000 5.40 2.40 25 Aaron Gordon 32.37 PF/F/UTIL DEN LAL 30.1 6600 4.90 -0.63 26 Franz Wagner 32.13 SG/SF/G/F/UTIL ORL SAC 35.1 7100 4.53 -3.37 27 Jakob Poeltl 31.88 C/UTIL SAS MEM 25.8 5900 5.40 2.38 28 Thomas Bryant 31.78 C/UTIL LAL DEN 29.9 6300 5.05 0.28 29 Immanuel Quickley 31.78 PG/SG/G/UTIL NYK MIL 35.9 6200 5.13 0.78 30 Wendell Carter Jr. 
31.64 C/UTIL ORL SAC 29.8 6400 4.94 -0.36 31 Steven Adams 29.50 C/UTIL MEM SAS 27.6 6000 4.92 -0.50 32 Naji Marshall 29.14 SF/PF/F/UTIL NOP WAS 34.1 5500 5.30 1.64 33 Michael Porter Jr. 29.10 SF/F/UTIL DEN LAL 29.7 5900 4.93 -0.40 34 Markelle Fultz 29.02 PG/G/UTIL ORL SAC 28.8 6100 4.76 -1.48 35 Bobby Portis 28.49 PF/C/F/UTIL MIL NYK 27.1 6700 4.25 -5.01 36 Kevin Huerter 28.25 SG/SF/G/F/UTIL SAC ORL 32.9 5700 4.96 -0.25 37 Mitchell Robinson 27.48 C/UTIL NYK MIL 29.9 5200 5.28 1.48 38 Malcolm Brogdon 27.16 PG/SG/G/UTIL BOS CHI 24.5 5300 5.12 0.66 39 Dennis Schroder 27.07 PG/G/UTIL LAL DEN 34.6 5700 4.75 -1.43 40 Harrison Barnes 26.88 SF/PF/F/UTIL SAC ORL 32.9 5600 4.80 -1.12 41 Dillon Brooks 26.32 SG/SF/G/F/UTIL MEM SAS 29.1 5800 4.54 -2.68 42 Daniel Gafford 26.22 C/UTIL WAS NOP 24.7 4700 5.58 2.72 43 Brook Lopez 25.35 C/UTIL MIL NYK 28.6 5700 4.45 -3.15 44 Monte Morris 25.13 PG/G/UTIL WAS NOP 27.9 4800 5.23 1.13 45 Jeremy Sochan 25.10 PF/F/UTIL SAS MEM 28.6 5000 5.02 0.10 46 Cole Anthony 25.07 PG/SG/G/UTIL ORL SAC 26.7 5200 4.82 -0.93 47 Derrick White 24.74 PG/SG/G/UTIL BOS CHI 28.7 5100 4.85 -0.76 48 Bones Hyland 24.70 PG/SG/G/UTIL DEN LAL 21.8 4900 5.04 0.20 49 Malik Monk 23.98 SG/G/UTIL SAC ORL 22.1 4500 5.33 1.48 50 Robert Williams 23.88 C/UTIL BOS CHI 21.8 4400 5.43 1.88 51 Al Horford 23.51 PF/C/F/UTIL BOS CHI 29.0 4800 4.90 -0.49 52 Quentin Grimes 23.48 SG/SF/G/F/UTIL NYK MIL 34.4 5400 4.35 -3.52 53 Rui Hachimura 23.25 PF/F/UTIL WAS NOP 24.5 4700 4.95 -0.25 54 Herbert Jones 23.16 SF/F/UTIL NOP WAS 27.5 4900 4.73 -1.34 55 Deni Avdija 22.61 SF/F/UTIL WAS NOP 25.7 4200 5.38 1.61 56 Trey Murphy III 22.56 SG/SF/G/F/UTIL NOP WAS 28.2 5100 4.42 -2.94 57 Patrick Williams 21.92 SF/PF/F/UTIL CHI BOS 30.6 4400 4.98 -0.08 58 Kentavious Caldwell-Pope 21.74 SG/G/UTIL DEN LAL 31.3 4400 4.94 -0.26 59 Josh Richardson 21.62 PG/SG/G/UTIL SAS MEM 21.7 4000 5.40 1.62 60 Keegan Murray 21.61 PF/F/UTIL SAC ORL 27.8 4300 5.03 0.11 61 Bruce Brown 21.01 SG/SF/G/F/UTIL DEN LAL 24.2 5000 
4.20 -3.99 62 Moritz Wagner 20.30 PF/C/F/UTIL ORL SAC 19.5 5300 3.83 -6.20 63 Zach Collins 20.08 C/UTIL SAS MEM 17.9 4700 4.27 -3.42 </code></pre> <p><strong>players</strong></p> <p><code>players = list(projections['Name'])</code></p> <p><strong>fpts, salaries, positions</strong></p> <pre><code>salaries = dict(zip(players, projections['Salary'])) fpts = dict(zip(players, projections['Fpts'])) positions = dict(zip(players, projections['Position'])) </code></pre> <p>Here is my <code>prob.writeLP()</code> output:</p> <pre><code>\* NBA_DK *\ Maximize OBJ: 32.37 Players_Aaron_Gordon + 23.51 Players_Al_Horford + 28.49 Players_Bobby_Portis + 24.7 Players_Bones_Hyland + 25.35 Players_Brook_Lopez + 21.01 Players_Bruce_Brown + 42.05 Players_CJ_McCollum + 25.07 Players_Cole_Anthony + 26.22 Players_Daniel_Gafford + 43.4 Players_De'Aaron_Fox + 43.21 Players_DeMar_DeRozan + 22.61 Players_Deni_Avdija + 27.07 Players_Dennis_Schroder + 24.74 Players_Derrick_White + 35.18 Players_Desmond_Bane + 26.32 Players_Dillon_Brooks + 52.62 Players_Domantas_Sabonis + 32.13 Players_Franz_Wagner + 58.09 Players_Giannis_Antetokounmpo + 26.88 Players_Harrison_Barnes + 23.16 Players_Herbert_Jones + 31.78 Players_Immanuel_Quickley + 51.01 Players_Ja_Morant + 31.88 Players_Jakob_Poeltl + 41.2 Players_Jalen_Brunson + 34.59 Players_Jamal_Murray + 36.08 Players_Jaren_Jackson_Jr. + 42.45 Players_Jaylen_Brown + 52.94 Players_Jayson_Tatum + 25.1 Players_Jeremy_Sochan + 34.71 Players_Jonas_Valanciunas + 21.62 Players_Josh_Richardson + 35.32 Players_Jrue_Holiday + 49.74 Players_Julius_Randle + 21.61 Players_Keegan_Murray + 35.44 Players_Keldon_Johnson + 21.74 Players_Kentavious_Caldwell_Pope + 28.25 Players_Kevin_Huerter + 44.76 Players_Kristaps_Porzingis + 39.88 Players_Kyle_Kuzma + 53.05 Players_LeBron_James + 27.16 Players_Malcolm_Brogdon + 23.98 Players_Malik_Monk + 29.02 Players_Markelle_Fultz + 29.1 Players_Michael_Porter_Jr. 
+ 27.48 Players_Mitchell_Robinson + 25.13 Players_Monte_Morris + 20.3 Players_Moritz_Wagner + 29.14 Players_Naji_Marshall + 61.52 Players_Nikola_Jokic + 37.51 Players_Nikola_Vucevic + 40.34 Players_Paolo_Banchero + 21.92 Players_Patrick_Williams + 23.48 Players_Quentin_Grimes + 23.88 Players_Robert_Williams + 23.25 Players_Rui_Hachimura + 37.11 Players_Russell_Westbrook + 29.5 Players_Steven_Adams + 31.78 Players_Thomas_Bryant + 32.4 Players_Tre_Jones + 22.56 Players_Trey_Murphy_III + 31.64 Players_Wendell_Carter_Jr. + 20.08 Players_Zach_Collins + 37.45 Players_Zach_LaVine Subject To C_Positional_Constraint: Players_Al_Horford + Players_Bobby_Portis + Players_Brook_Lopez + Players_Daniel_Gafford + Players_Domantas_Sabonis + Players_Giannis_Antetokounmpo + Players_Jakob_Poeltl + Players_Jaren_Jackson_Jr. + Players_Jonas_Valanciunas + Players_Kristaps_Porzingis + Players_Mitchell_Robinson + Players_Moritz_Wagner + Players_Nikola_Jokic + Players_Nikola_Vucevic + Players_Robert_Williams + Players_Steven_Adams + Players_Thomas_Bryant + Players_Wendell_Carter_Jr. + Players_Zach_Collins = 1 F_Positional_Constraint: Players_Aaron_Gordon + Players_Al_Horford + Players_Bobby_Portis + Players_Bruce_Brown + Players_DeMar_DeRozan + Players_Deni_Avdija + Players_Dillon_Brooks + Players_Franz_Wagner + Players_Giannis_Antetokounmpo + Players_Harrison_Barnes + Players_Herbert_Jones + Players_Jaren_Jackson_Jr. + Players_Jaylen_Brown + Players_Jayson_Tatum + Players_Jeremy_Sochan + Players_Julius_Randle + Players_Keegan_Murray + Players_Keldon_Johnson + Players_Kevin_Huerter + Players_Kristaps_Porzingis + Players_Kyle_Kuzma + Players_LeBron_James + Players_Michael_Porter_Jr. 
+ Players_Moritz_Wagner + Players_Naji_Marshall + Players_Paolo_Banchero + Players_Patrick_Williams + Players_Quentin_Grimes + Players_Rui_Hachimura + Players_Trey_Murphy_III + Players_Zach_LaVine = 1 G_Positional_Constraint: Players_Bones_Hyland + Players_Bruce_Brown + Players_CJ_McCollum + Players_Cole_Anthony + Players_De'Aaron_Fox + Players_Dennis_Schroder + Players_Derrick_White + Players_Desmond_Bane + Players_Dillon_Brooks + Players_Franz_Wagner + Players_Immanuel_Quickley + Players_Ja_Morant + Players_Jalen_Brunson + Players_Jamal_Murray + Players_Jaylen_Brown + Players_Josh_Richardson + Players_Jrue_Holiday + Players_Kentavious_Caldwell_Pope + Players_Kevin_Huerter + Players_Malcolm_Brogdon + Players_Malik_Monk + Players_Markelle_Fultz + Players_Monte_Morris + Players_Quentin_Grimes + Players_Russell_Westbrook + Players_Tre_Jones + Players_Trey_Murphy_III + Players_Zach_LaVine = 1 PF_Positional_Constraint: Players_Aaron_Gordon + Players_Al_Horford + Players_Bobby_Portis + Players_DeMar_DeRozan + Players_Giannis_Antetokounmpo + Players_Harrison_Barnes + Players_Jaren_Jackson_Jr. 
+ Players_Jayson_Tatum + Players_Jeremy_Sochan + Players_Julius_Randle + Players_Keegan_Murray + Players_Kristaps_Porzingis + Players_Kyle_Kuzma + Players_LeBron_James + Players_Moritz_Wagner + Players_Naji_Marshall + Players_Paolo_Banchero + Players_Patrick_Williams + Players_Rui_Hachimura = 1 PG_Positional_Constraint: Players_Bones_Hyland + Players_CJ_McCollum + Players_Cole_Anthony + Players_De'Aaron_Fox + Players_Dennis_Schroder + Players_Derrick_White + Players_Immanuel_Quickley + Players_Ja_Morant + Players_Jalen_Brunson + Players_Jamal_Murray + Players_Josh_Richardson + Players_Jrue_Holiday + Players_Malcolm_Brogdon + Players_Markelle_Fultz + Players_Monte_Morris + Players_Russell_Westbrook + Players_Tre_Jones = 1 SF_Positional_Constraint: Players_Bruce_Brown + Players_DeMar_DeRozan + Players_Deni_Avdija + Players_Dillon_Brooks + Players_Franz_Wagner + Players_Harrison_Barnes + Players_Herbert_Jones + Players_Jaylen_Brown + Players_Jayson_Tatum + Players_Keldon_Johnson + Players_Kevin_Huerter + Players_Kyle_Kuzma + Players_LeBron_James + Players_Michael_Porter_Jr. 
+ Players_Naji_Marshall + Players_Paolo_Banchero + Players_Patrick_Williams + Players_Quentin_Grimes + Players_Trey_Murphy_III + Players_Zach_LaVine = 1 SG_Positional_Constraint: Players_Bones_Hyland + Players_Bruce_Brown + Players_Cole_Anthony + Players_Derrick_White + Players_Desmond_Bane + Players_Dillon_Brooks + Players_Franz_Wagner + Players_Immanuel_Quickley + Players_Jaylen_Brown + Players_Josh_Richardson + Players_Kentavious_Caldwell_Pope + Players_Kevin_Huerter + Players_Malcolm_Brogdon + Players_Malik_Monk + Players_Quentin_Grimes + Players_Trey_Murphy_III + Players_Zach_LaVine = 1 Salary_Constraints: 6600 Players_Aaron_Gordon + 4800 Players_Al_Horford + 6700 Players_Bobby_Portis + 4900 Players_Bones_Hyland + 5700 Players_Brook_Lopez + 5000 Players_Bruce_Brown + 8500 Players_CJ_McCollum + 5200 Players_Cole_Anthony + 4700 Players_Daniel_Gafford + 8700 Players_De'Aaron_Fox + 9000 Players_DeMar_DeRozan + 4200 Players_Deni_Avdija + 5700 Players_Dennis_Schroder + 5100 Players_Derrick_White + 6600 Players_Desmond_Bane + 5800 Players_Dillon_Brooks + 10500 Players_Domantas_Sabonis + 7100 Players_Franz_Wagner + 12000 Players_Giannis_Antetokounmpo + 5600 Players_Harrison_Barnes + 4900 Players_Herbert_Jones + 6200 Players_Immanuel_Quickley + 10300 Players_Ja_Morant + 5900 Players_Jakob_Poeltl + 8100 Players_Jalen_Brunson + 6800 Players_Jamal_Murray + 6800 Players_Jaren_Jackson_Jr. + 9500 Players_Jaylen_Brown + 10700 Players_Jayson_Tatum + 5000 Players_Jeremy_Sochan + 6600 Players_Jonas_Valanciunas + 4000 Players_Josh_Richardson + 7400 Players_Jrue_Holiday + 10400 Players_Julius_Randle + 4300 Players_Keegan_Murray + 7000 Players_Keldon_Johnson + 4400 Players_Kentavious_Caldwell_Pope + 5700 Players_Kevin_Huerter + 9400 Players_Kristaps_Porzingis + 8400 Players_Kyle_Kuzma + 10900 Players_LeBron_James + 5300 Players_Malcolm_Brogdon + 4500 Players_Malik_Monk + 6100 Players_Markelle_Fultz + 5900 Players_Michael_Porter_Jr. 
+ 5200 Players_Mitchell_Robinson + 4800 Players_Monte_Morris + 5300 Players_Moritz_Wagner + 5500 Players_Naji_Marshall + 11700 Players_Nikola_Jokic + 8000 Players_Nikola_Vucevic + 8200 Players_Paolo_Banchero + 4400 Players_Patrick_Williams + 5400 Players_Quentin_Grimes + 4400 Players_Robert_Williams + 4700 Players_Rui_Hachimura + 7700 Players_Russell_Westbrook + 6000 Players_Steven_Adams + 6300 Players_Thomas_Bryant + 6000 Players_Tre_Jones + 5100 Players_Trey_Murphy_III + 6400 Players_Wendell_Carter_Jr. + 4700 Players_Zach_Collins + 7500 Players_Zach_LaVine &lt;= 50000 UTIL_Positional_Constraint: Players_Aaron_Gordon + Players_Al_Horford + Players_Bobby_Portis + Players_Bones_Hyland + Players_Brook_Lopez + Players_Bruce_Brown + Players_CJ_McCollum + Players_Cole_Anthony + Players_Daniel_Gafford + Players_De'Aaron_Fox + Players_DeMar_DeRozan + Players_Deni_Avdija + Players_Dennis_Schroder + Players_Derrick_White + Players_Desmond_Bane + Players_Dillon_Brooks + Players_Domantas_Sabonis + Players_Franz_Wagner + Players_Giannis_Antetokounmpo + Players_Harrison_Barnes + Players_Herbert_Jones + Players_Immanuel_Quickley + Players_Ja_Morant + Players_Jakob_Poeltl + Players_Jalen_Brunson + Players_Jamal_Murray + Players_Jaren_Jackson_Jr. + Players_Jaylen_Brown + Players_Jayson_Tatum + Players_Jeremy_Sochan + Players_Jonas_Valanciunas + Players_Josh_Richardson + Players_Jrue_Holiday + Players_Julius_Randle + Players_Keegan_Murray + Players_Keldon_Johnson + Players_Kentavious_Caldwell_Pope + Players_Kevin_Huerter + Players_Kristaps_Porzingis + Players_Kyle_Kuzma + Players_LeBron_James + Players_Malcolm_Brogdon + Players_Malik_Monk + Players_Markelle_Fultz + Players_Michael_Porter_Jr. 
+ Players_Mitchell_Robinson + Players_Monte_Morris + Players_Moritz_Wagner + Players_Naji_Marshall + Players_Nikola_Jokic + Players_Nikola_Vucevic + Players_Paolo_Banchero + Players_Patrick_Williams + Players_Quentin_Grimes + Players_Robert_Williams + Players_Rui_Hachimura + Players_Russell_Westbrook + Players_Steven_Adams + Players_Thomas_Bryant + Players_Tre_Jones + Players_Trey_Murphy_III + Players_Wendell_Carter_Jr. + Players_Zach_Collins + Players_Zach_LaVine = 1 Binaries Players_Aaron_Gordon Players_Al_Horford Players_Bobby_Portis Players_Bones_Hyland Players_Brook_Lopez Players_Bruce_Brown Players_CJ_McCollum Players_Cole_Anthony Players_Daniel_Gafford Players_De'Aaron_Fox Players_DeMar_DeRozan Players_Deni_Avdija Players_Dennis_Schroder Players_Derrick_White Players_Desmond_Bane Players_Dillon_Brooks Players_Domantas_Sabonis Players_Franz_Wagner Players_Giannis_Antetokounmpo Players_Harrison_Barnes Players_Herbert_Jones Players_Immanuel_Quickley Players_Ja_Morant Players_Jakob_Poeltl Players_Jalen_Brunson Players_Jamal_Murray Players_Jaren_Jackson_Jr. Players_Jaylen_Brown Players_Jayson_Tatum Players_Jeremy_Sochan Players_Jonas_Valanciunas Players_Josh_Richardson Players_Jrue_Holiday Players_Julius_Randle Players_Keegan_Murray Players_Keldon_Johnson Players_Kentavious_Caldwell_Pope Players_Kevin_Huerter Players_Kristaps_Porzingis Players_Kyle_Kuzma Players_LeBron_James Players_Malcolm_Brogdon Players_Malik_Monk Players_Markelle_Fultz Players_Michael_Porter_Jr. Players_Mitchell_Robinson Players_Monte_Morris Players_Moritz_Wagner Players_Naji_Marshall Players_Nikola_Jokic Players_Nikola_Vucevic Players_Paolo_Banchero Players_Patrick_Williams Players_Quentin_Grimes Players_Robert_Williams Players_Rui_Hachimura Players_Russell_Westbrook Players_Steven_Adams Players_Thomas_Bryant Players_Tre_Jones Players_Trey_Murphy_III Players_Wendell_Carter_Jr. 
Players_Zach_Collins Players_Zach_LaVine End </code></pre> <p>When trying to <code>.solve()</code> this problem:</p> <pre><code>prob.solve(plp.PULP_CBC_CMD(msg=0)) print(f'Status: {plp.LpStatus[prob.status]}') score = str(prob.objective) for v in prob.variables(): score = score.replace(v.name, str(v.varValue)) score = eval(score) lineup = [v.name for v in prob.variables() if v.varValue != 0] print(score, lineup) </code></pre> <p>it outputs:</p> <pre><code>Status: Infeasible 23.570000000000004 ['Players_Giannis_Antetokounmpo', 'Players_Immanuel_Quickley', 'Players_Ja_Morant', 'Players_Jaylen_Brown', 'Players_Malcolm_Brogdon'] </code></pre> <p>I know the issue lies within the positionality constraints, because commenting them out or altering them produces an optimal result.</p>
<python><data-science><linear-programming><pulp>
2023-01-10 18:55:38
1
353
Sean Sailer
75,074,524
4,939,983
Why does python list resize overallocation differ from other languages?
<p>I've been reading a bit of python source code and found a <a href="https://hg.python.org/releasing/3.4.2/file/10298f4f42dc/Objects/listobject.c#l25" rel="nofollow noreferrer">peculiar comment about the resize overallocation strategy</a> -</p> <pre><code>/* This over-allocates proportional to the list size, making room * for additional growth. The over-allocation is mild, but is * enough to give linear-time amortized behavior over a long * sequence of appends() in the presence of a poorly-performing * system realloc(). * The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ... */ new_allocated = (newsize &gt;&gt; 3) + (newsize &lt; 9 ? 3 : 6); </code></pre> <p>I remembered the usual strategy is to double the capacity when the limit is reached.</p> <p>I checked a few other languages' source code and found that indeed it is as I thought, in <a href="https://github.com/nodejs/node/blob/49342fe6f2ca6cedd5219d835a0a810e6f03cdd7/deps/v8/src/objects/js-objects.h#L542" rel="nofollow noreferrer">go</a>, <a href="https://github.com/rust-lang/rust/blob/0ca7f74dbd23a3e8ec491cd3438f490a3ac22741/src/liballoc/raw_vec.rs#L397" rel="nofollow noreferrer">rust</a> and <a href="https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.ensurecapacity?view=net-7.0" rel="nofollow noreferrer">c#</a>.</p> <p>Also I found out they <a href="https://bugs.python.org/issue38373" rel="nofollow noreferrer">changed it a bit</a> in the next python prerelease version, but for unrelated reasons, keeping the same overall strategy.</p> <p>So why does python have such a unique method of resizing, and what are the implications on performance?</p>
<python><arrays><memory-management><cpython>
2023-01-10 18:47:49
0
334
Shu ba
75,074,499
4,157,666
Sagemaker: read-only file system: /opt/ml/models/../config.json when invoking endpoint
<p>Trying to create a Multi Model with SageMaker. Doing the following:</p> <pre><code>boto_session = boto3.session.Session(region_name='us-east-1') sess = sagemaker.Session(boto_session=boto_session) iam = boto3.client('iam') role = iam.get_role(RoleName='sagemaker-role')['Role']['Arn'] huggingface_model = HuggingFaceModel(model_data='s3://bucket/path/model.tar.gz', transformers_version=&quot;4.12.3&quot;, pytorch_version=&quot;1.9.1&quot;, py_version='py38', role=role, sagemaker_session=sess) mme = MultiDataModel(name='model-name', model_data_prefix='s3://bucket/path/', model=huggingface_model, sagemaker_session=sess) predictor = mme.deploy(initial_instance_count=1, instance_type=&quot;ml.t2.medium&quot;) </code></pre> <p>If I try to predict:</p> <pre><code>predictor.predict({&quot;inputs&quot;: &quot;test&quot;}, target_model=&quot;model.tar.gz&quot;) </code></pre> <p>I get the following error:</p> <pre><code>{ModelError}An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message &quot;{ &quot;code&quot;: 400, &quot;type&quot;: &quot;InternalServerException&quot;, &quot;message&quot;: &quot;[Errno 30] Read-only file system: \u0027/opt/ml/models/d8379026esds430426d32321a85878f6b/model/config.json\u0027&quot; } </code></pre> <p>If I deploy a single model through the <code>HuggingFaceModel</code>:</p> <pre><code>huggingface_model = HuggingFaceModel(model_data='s3://bucket/path/model.tar.gz', transformers_version=&quot;4.12.3&quot;, pytorch_version=&quot;1.9.1&quot;, py_version='py38', role=role, sagemaker_session=sess) predictor = huggingface_model.deploy(initial_instance_count=1, instance_type=&quot;ml.t2.medium&quot;) </code></pre> <p>Then <code>predict</code> works normally with no error.</p> <p>So I was wondering what could be the reason that I get 'read-only' on <code>MultiDataModel</code> deploy?</p> <p>Thanks in advance.</p>
<python><amazon-web-services><boto3><amazon-sagemaker>
2023-01-10 18:45:09
1
5,031
Mpizos Dimitris
75,074,493
10,481,744
Find all the possible combinations of the numbers in the column of pandas dataframe that sum to 0
<p>I have a dataframe with 10000 rows. I want to find all the combinations of rows where the values in a particular column (Amount) sum to 0.</p> <p>df=</p> <pre><code>ID_Key Amount 10 12.4 12 -26.6 13 14.2 14 15 17 4.5 18 -9 19 94 20 -6 </code></pre> <p>Resultant dataframe is</p> <pre><code>Combinations Sum (10,12,13) 0 (14,18,20) 0 </code></pre> <p>Below is the code for combinations of 3 numbers that sum to 0. I also have to handle combinations of 4 numbers and 5 numbers that sum to 0, but even for 3 numbers, when the dataframe size grows beyond 30, it becomes very slow. How can I reduce the time complexity of the below algorithm?</p> <pre><code>from itertools import combinations lst = [] t_counter=0 #all combinations ID_key consisting of length 3 for tuple_nums in set(combinations(df['ID_Key'], 3)): if df.shape[0]&gt;2: t_counter=t_counter+1 if df.loc[df['ID_Key'].isin(tuple_nums)].empty==False: if df.loc[df['ID_Key'].isin(tuple_nums), 'Amount'].sum()==0: lst.append([tuple_nums,df.loc[df['ID_Key'].isin(tuple_nums), 'Amount'].sum()]) df=df.loc[~df['ID_Key'].isin(tuple_nums)] else: break df_final=pd.DataFrame(lst, columns=['Combinations', 'Sum']) </code></pre> <p>I think one of the issues is in the below code: the iterator tuple_nums traverses through all the possible combinations <code>for tuple_nums in set(combinations(df['ID_Key'], 3))</code>. I am reducing the dataframe size every time I get a combination that sums to 0 in this line <code>df=df.loc[~df['ID_Key'].isin(tuple_nums)]</code>, but still all the possible combinations will be traversed. How can I reduce the time complexity and make the algorithm faster to process 10000 rows?</p>
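For the 3-number case, the classic hash-based 3-sum idea finds each zero-sum triple in O(n^2) instead of scanning all C(n, 3) combinations, and it can mirror the question's greedy removal of used rows. A sketch in plain Python over the id/amount pairs (the function name and the rounding precision for float sums are my own choices, not from the question):

```python
def find_zero_triples(ids, amounts, ndigits=6):
    """Greedily find disjoint triples of ids whose amounts sum to 0.

    For each anchor i, scan j > i and look up the amount that would
    complete the triple in a dict of amounts seen between i and j.
    Rounding guards against float representation noise in the sums.
    """
    items = list(zip(ids, amounts))
    used = set()       # ids already consumed by an earlier triple
    triples = []
    for i, (id_i, a_i) in enumerate(items):
        if id_i in used:
            continue
        seen = {}      # rounded amount -> id, for ids between i and j
        for id_j, a_j in items[i + 1:]:
            if id_j in used:
                continue
            need = round(-(a_i + a_j), ndigits)  # amount completing the triple
            if need in seen:
                triples.append((id_i, seen[need], id_j))
                used.update(triples[-1])
                break
            seen[round(a_j, ndigits)] = id_j
    return triples

ids = [10, 12, 13, 14, 17, 18, 19, 20]
amounts = [12.4, -26.6, 13.2 + 1.0, 15, 4.5, -9, 94, -6]
print(find_zero_triples(ids, amounts))  # [(10, 12, 13), (14, 18, 20)]
```

The same seen-dict trick extends the 4- and 5-number cases by one order each (O(n^3) and O(n^4)), which is still far cheaper than enumerating all combinations.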
<python><pandas>
2023-01-10 18:44:00
1
340
TLanni
75,074,474
4,960,470
RPi.GPIO: defined output pin not found in /sys/class/gpio
<p>I have a python3 script which uses RPi.GPIO and defines 2 input pins and 1 output pin as shown below</p> <pre><code>GPIO.setmode( GPIO.BCM ) # init GPIOs GPIO.setup( 2, GPIO.IN, pull_up_down=GPIO.PUD_UP ) GPIO.setup( 3, GPIO.IN, pull_up_down=GPIO.PUD_UP ) GPIO.setup( 4, GPIO.OUT ) </code></pre> <p>I want to set the status of the output also from outside of my python script with simple shell commands or manually, but I can't see a GPIO entry for it at /sys/class/gpio/gpio4.<br /> The strange thing is that RPi.GPIO creates the exports in /sys/class/gpio for the input pins as expected and I can check the status of these pins from a shell without problems.</p> <pre><code># ls -1 /sys/class/gpio export gpio2 gpio3 gpiochip0 unexport # cat /sys/class/gpio/gpio2/value 1 </code></pre> <p>Where does RPi.GPIO define the output pin and how can I access it from outside of python?</p>
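For context: RPi.GPIO drives pins by memory-mapping the GPIO registers rather than through sysfs, so an output pin need not appear under /sys/class/gpio at all. One workaround, assuming the legacy sysfs GPIO interface is enabled on the Pi (an untested, hardware-dependent sketch, typically requiring root or membership in the gpio group), is to export the pin from the shell yourself:

```shell
# Export BCM pin 4 through the legacy sysfs interface, then drive it.
echo 4 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio4/direction
echo 1 > /sys/class/gpio/gpio4/value
```

Be aware that sysfs and RPi.GPIO are separate mechanisms touching the same hardware, so driving one pin from both can produce conflicting states.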
<python><gpio>
2023-01-10 18:41:05
1
587
user333869
75,074,458
1,215,291
Change directory in python - os.chdir('/tmp') vs os.system("cd " + backup_location)
<p>I'm experimenting with using Python for backups, because my Bash script became too big and complicated. I'm totally new to Python, and I'm not a fan of it, but it seems like Python is the perfect tool for such complicated scripts.</p> <p>I have found something to start with on Github:</p> <p><a href="https://github.com/Tutorialwork/Linux-Backup-Script/blob/master/backup.py" rel="nofollow noreferrer">https://github.com/Tutorialwork/Linux-Backup-Script/blob/master/backup.py</a></p> <p>In the script above there is a line like this:</p> <pre><code>os.system(&quot;cd &quot; + config.backup_location + &quot; &amp;&amp; rm mysqlbackup-&quot; + date + &quot;.sql&quot;) </code></pre> <p>My question is:</p> <p>Is there any practical difference between calling filesystem manipulation commands through <code>os.system(&quot;cd somedir&quot;)</code> and functions like <code>os.chdir(&quot;somedir&quot;)</code>?</p> <p>I'm using Python 3.9 on Debian 11. It would be good if my script could be portable between Linux distros. Windows compatibility is not required.</p>
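The difference is easy to demonstrate: `os.system` runs its command in a child shell, so a `cd` there vanishes when the child exits and cannot affect the Python process, while `os.chdir` changes the process's own working directory. A quick sketch:

```python
import os
import tempfile

start = os.getcwd()
target = tempfile.mkdtemp()

# os.system() runs the command in a *child* shell; the `cd` happens
# there and is lost when the child exits, so the parent Python
# process's working directory is untouched.
os.system("cd " + target)
print(os.getcwd() == start)  # True

# os.chdir() changes the current process's own working directory.
os.chdir(target)
print(os.path.realpath(os.getcwd()) == os.path.realpath(target))  # True

os.chdir(start)  # restore
```

This is why `os.system("cd dir && rm file")` works at all: the `cd` and the `rm` run inside the same child shell. For the backup script, combining `os.chdir` (or better, absolute paths) with `os.remove` stays inside Python and avoids shell-quoting pitfalls.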
<python><filesystems><python-3.9>
2023-01-10 18:39:40
1
14,043
Kamil
75,074,445
4,913,660
Pandas unexpectedly casts type in heterogeneous dataframe
<p>If I define a dataframe like this</p> <pre><code>import numpy as np import pandas as pd str1= [&quot;woz&quot;, &quot;baz&quot;, &quot;fop&quot;, &quot;jok&quot;] arr=np.array([2,3,4]) data = {&quot;Goz&quot;:str1, &quot;Jok&quot;: np.hstack((np.array([5,&quot;fava&quot;]), np.array ([1,2]) ) ) } df = pd.DataFrame(data) Goz Jok 0 woz 5 1 baz fava 2 fop 1 3 jok 2 </code></pre> <p>I am puzzled by the fact that those nice numbers in the second column, coming from a numpy array, are cast to string</p> <pre><code>type(df.loc[3][&quot;Jok&quot;]) out:: str </code></pre> <p>Further, if I try to manually cast back to float64 for a subset of the dataframe, say</p> <pre><code>df.iloc[2:,1].astype(&quot;float64&quot;, copy=False) </code></pre> <p>the type of the column does not change. Besides, I understand the <code>copy=False</code> option comes with extreme danger.</p> <p>If on the other hand I define the dataframe using this</p> <pre><code> data = {&quot;Goz&quot;:str1, &quot;Jok&quot;: np.array ([1,2,3,4]) } </code></pre> <p>hence having the whole column contain one numeric dtype uniformly, all seems to work. Can somebody please clarify what is going on here? How to handle type-heterogeneous columns? Thanks a lot</p>
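The cast happens inside NumPy, before pandas is involved: `np.hstack` must produce one homogeneous ndarray, so mixing an int with a string upcasts every element to a fixed-width string dtype. A small sketch of this, plus one way to keep per-element types (passing a plain Python list yields an object-dtype column):

```python
import numpy as np
import pandas as pd

str1 = ["woz", "baz", "fop", "jok"]

# np.hstack builds ONE homogeneous array; mixing 5 and "fava" upcasts
# everything to a fixed-width unicode string dtype, so pandas receives
# strings -- the numbers were gone before the DataFrame existed.
mixed = np.hstack((np.array([5, "fava"]), np.array([1, 2])))
print(mixed.dtype.kind)  # 'U' (unicode string)

# A plain Python list instead gives an object-dtype column in which
# every element keeps its own type.
df = pd.DataFrame({"Goz": str1, "Jok": [5, "fava", 1, 2]})
print(type(df.loc[3, "Jok"]))  # <class 'int'>
```

As for the failed cast: `astype` returns a new Series rather than modifying the frame, so the result must be assigned back (e.g. `df.iloc[2:, 1] = df.iloc[2:, 1].astype("float64")`) to take effect.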
<python><pandas><numpy>
2023-01-10 18:38:22
0
414
user37292
75,074,394
10,043,234
Django password reset PasswordResetConfirmView
<p>View:</p> <pre><code>def f_forget_password(request): ctx = dict() if request.method == &quot;POST&quot;: password_reset_form = PasswordResetForm(request.POST) if password_reset_form.is_valid(): to_email = password_reset_form.cleaned_data['email'] to_email=to_email.lower() associated_users = User.objects.filter(Q(email=to_email)) if associated_users.exists(): if len(associated_users)!=1: return HttpResponse('Email Is Invalid.') current_site = get_current_site(request) for user in associated_users: mail_subject = &quot;Password Reset Requested&quot; email_template_name = &quot;password/password_reset_email.txt&quot; uid=urlsafe_base64_encode(force_bytes(user.pk)) c = { &quot;email&quot;:user.email, 'domain':current_site.domain, 'site_name': 'mysit', &quot;uid&quot;: uid, &quot;user&quot;: user, 'token': account_activation_token.make_token(user), } try: message = loader.render_to_string(email_template_name, c) email = EmailMessage( mail_subject, message, to=[to_email] ) email.send() except Exception as e: return HttpResponse('Invalid header found.&lt;br/&gt;') return redirect (&quot;/password_reset/done/&quot;) else: return HttpResponse('Email not found.') template = loader.get_template('password/forget_password.html') response = HttpResponse(template.render(ctx, request),content_type='text/html') return response </code></pre> <p>url:</p> <pre><code>path('password_reset/done/', auth_views.PasswordResetDoneView.as_view(template_name='password/password_reset_done.html'), name='password_reset_done'), path('reset/&lt;uidb64&gt;/&lt;token&gt;', auth_views.PasswordResetConfirmView.as_view(), name='password_reset_confirm'), path('reset/done/', auth_views.PasswordResetCompleteView.as_view(template_name='password/password_reset_complete.html'), name='password_reset_complete'), </code></pre> <p>token.py:</p> <pre><code>class TokenGenerator(PasswordResetTokenGenerator): def _make_hash_value(self, user, timestamp): return ( six.text_type(user.pk) + six.text_type(timestamp) + 
six.text_type(user.is_active) ) ) account_activation_token = TokenGenerator() </code></pre> <p>The <code>PasswordResetConfirmView</code> does not work and my password does not change. When I click the password reset link I get this error:</p> <blockquote> <p>Password reset unsuccessful</p> <p>The password reset link was invalid, possibly because it has already been used. Please request a new password reset.</p> </blockquote>
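One thing worth checking, as a hedged guess: the email link is built with the custom `account_activation_token`, but `PasswordResetConfirmView` validates tokens with Django's `default_token_generator` unless told otherwise, so every link would be rejected as "already used". A sketch of pointing the view at the same generator (the import path is hypothetical and this fragment is untested):

```python
# urls.py -- make the confirm view validate with the SAME generator
# that produced the token embedded in the email link.
from .tokens import account_activation_token  # hypothetical import path

path('reset/<uidb64>/<token>',
     auth_views.PasswordResetConfirmView.as_view(
         token_generator=account_activation_token,
     ),
     name='password_reset_confirm'),
```

The simpler alternative is to build the email token with Django's `default_token_generator` in the first place, so the stock view works unchanged.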
<python><django><django-views><passwords><reset-password>
2023-01-10 18:33:55
0
311
mohamadreza ch
75,074,339
7,050,517
VSCode Python Remote Debugging Suddenly Not Working
<p>Starting a couple days ago my normal process for debugging python code via pytest has just stopped working.</p> <p>My previous process was as follows:</p> <ol> <li>Insert the debugpy macro either above the pytest test definition or at the top level of the code being debugged.</li> </ol> <pre><code>import debugpy debugpy.listen(5678) debugpy.wait_for_client() </code></pre> <ol start="2"> <li><p>Insert the <code>debugpy.breakpoint()</code> call above the line to stop at <strong>OR</strong> use the red gutter button to mark a breakpoint (less reliable).</p> </li> <li><p>Hit the F5 key when pytest started and the <code>collecting...</code> message appeared in the terminal.</p> </li> </ol> <p>This process worked for the entire two months since I converted from PyCharm. As of a couple days ago, the new behavior is as follows:</p> <ul> <li>If I use the <code>debugpy.breakpoint()</code> call to stop the run, the code will stop however in a seemingly random place. Before I reinstalled nearly everything (VSCode, created a new linux VM), it would stop at a random line inside a file from the pytest library. After I reinstalled everything, it now stops a random line inside our test database teardown code.</li> <li>If I use the red gutter button to mark a breakpoint, the debugger will not stop at all.</li> </ul> <p>Things I've already tried:</p> <ol> <li>Clean reinstall of VSCode.</li> <li>Destroy and recreate vagrant VM.</li> <li>Add <code>&quot;env&quot;: {&quot;PYTEST_ADDOPTS&quot;: &quot;--no-cov&quot;}</code> to my <code>launch.json</code>. 
(The linter warns that this property is not allowed here and the option is not added at runtime).</li> <li>Downgraded the Python extension.</li> <li>Toggled the <code>justMyCode</code> launch option.</li> <li>Tried multiple git branches.</li> <li>Tried multiple modules in our monorepo.</li> </ol> <p>Again, I'd like to emphasize that this did work previously and that I made no configuration changes to cause this.</p> <p>More info:</p> <pre><code>Version: 1.74.2 (Universal) Commit: e8a3071ea4344d9d48ef8a4df2c097372b0c5161 Date: 2022-12-20T10:26:09.430Z (3 wks ago) Electron: 19.1.8 Chromium: 102.0.5005.167 Node.js: 16.14.2 V8: 10.2.154.15-electron.0 OS: Darwin x64 21.6.0 Sandboxed: No </code></pre> <p>launch.json</p> <pre><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python: Remote Attach&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;attach&quot;, &quot;connect&quot;: { &quot;host&quot;: &quot;localhost&quot;, &quot;port&quot;: 5678 }, &quot;pathMappings&quot;: [ { &quot;localRoot&quot;: &quot;/mnt/the_repo_folder&quot;, &quot;remoteRoot&quot;: &quot;/mnt/the_repo_folder&quot; } ], &quot;justMyCode&quot;: true } ] } </code></pre>
<python><visual-studio-code><vagrant><vscode-debugger>
2023-01-10 18:28:36
3
510
smallpants
75,074,306
9,661,008
How to remove repeated sentences from a string
<p>I have an issue that I do not know how to tackle.</p> <p>For example: I have a string returned from a function that contains multiple sentences separated by commas, some of which are repeated:</p> <p>Like:</p> <p><code>&quot;lorem ipsum dolor, lorem ipsum dolor, lorem ipsum dolor&quot;</code></p> <p>I need to remove the sentences that are repeated, but without checking word-by-word; rather sentence by sentence, split by &quot;,&quot;, since other sentences may contain repeated words that should not be removed.</p> <p><strong>Input example:</strong></p> <p><code>&quot;lorem ipsum dolor, lorem ipsum dolor, lorem mark dol&quot;</code></p> <p><strong>Output desired:</strong></p> <p><code>&quot;lorem ipsum dolor, lorem mark dol&quot;</code></p>
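One way to sketch this: split on the separator, deduplicate with `dict.fromkeys` (which keeps the first occurrence and preserves order), and rejoin. Whole sentences are compared, so sentences that merely share words survive:

```python
def dedupe_sentences(text, sep=", "):
    """Remove duplicate sentences from a comma-delimited string,
    keeping the first occurrence and the original order."""
    parts = [p.strip() for p in text.split(",")]
    # dict.fromkeys drops duplicates while preserving insertion order
    return sep.join(dict.fromkeys(parts))

print(dedupe_sentences("lorem ipsum dolor, lorem ipsum dolor, lorem mark dol"))
# -> lorem ipsum dolor, lorem mark dol
```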
<python>
2023-01-10 18:25:28
3
1,874
Elias Prado
75,074,045
5,805,389
Typehinting async function and passing to asyncio.create_task
<p>In my research, I see the general consensus for the correct way to typehint an async function is <code>Callable[..., Awaitable[Any]]</code>.</p> <p>In Pycharm, I try this and have this issue when passing to <code>asyncio.create_task</code></p> <pre class="lang-py prettyprint-override"><code>import asyncio from typing import Callable, Awaitable, Any def fff(ccc: Callable[..., Awaitable[Any]]): return asyncio.create_task(ccc()) </code></pre> <p><a href="https://i.sstatic.net/mI7Z3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mI7Z3.png" alt="enter image description here" /></a></p> <p>Is this an issue with Pycharm, or should I be typehinting my async functions another way?</p>
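PyCharm is complaining because `asyncio.create_task` is annotated to take a coroutine, and `Awaitable` is a wider type than that. Hinting the parameter as returning `typing.Coroutine` satisfies the checker while still describing an async function; a minimal sketch:

```python
import asyncio
from typing import Any, Callable, Coroutine

def fff(ccc: Callable[..., Coroutine[Any, Any, Any]]):
    # create_task wants a coroutine; Coroutine is the precise hint here
    return asyncio.create_task(ccc())

async def work() -> int:
    return 42

async def main() -> int:
    return await fff(work)

print(asyncio.run(main()))  # -> 42
```

`Callable[..., Awaitable[Any]]` remains the right hint when the callable may also return futures or other awaitables that you plan to `await` directly rather than hand to `create_task`.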
<python><pycharm><type-hinting>
2023-01-10 17:59:49
1
805
Shuri2060
75,074,036
14,551,577
How to download pdf from html embed tag without src rendered by javascript in python selenium
<p>I am going to download pdf from html page.</p> <p>The page structure is following.</p> <p><code>index.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;/head&gt; &lt;body&gt; &lt;iframe src=&quot;a.html&quot;&gt;&lt;/iframe&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p><code>a.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;script type='text/javascript'&gt;this.document.location = &quot;pubs/document-238727.PDF&quot;;&lt;/script&gt; &lt;/head&gt; &lt;/html&gt; </code></pre> <p>inspect view of <code>index.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;/head&gt; &lt;body&gt; &lt;iframe src=&quot;a.html&quot;&gt; #document &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;embed name=&quot;289740873&quot; style=&quot;position:absolute; left: 0; top: 0;&quot; width=&quot;100%&quot; height=&quot;100%&quot; src=&quot;about:blank&quot; type=&quot;application/pdf&quot; internalid=&quot;182D67B1A56A055A2F879042870BB4E0&quot;&gt; &lt;/head&gt; &lt;/html&gt; &lt;/iframe&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I am going to download this pdf file of <code>embed</code> tag.<br /> I have tried in several ways but didn't yet.</p>
<python><selenium><pdf><web-scraping><download>
2023-01-10 17:59:08
0
644
bcExpt1123
75,074,008
16,978,074
OSError: [Errno 28] No space left on device when I run a python program
<p>Hello all when I run a python program that makes network requests and downloads a lot of data, I get this error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Python311\Lib\site-packages\pandas\io\formats\csvs.py&quot;, line 261, in save self._save() File &quot;C:\Python311\Lib\site-packages\pandas\io\formats\csvs.py&quot;, line 266, in _save self._save_body() File &quot;C:\Python311\Lib\site-packages\pandas\io\formats\csvs.py&quot;, line 304, in _save_body self._save_chunk(start_i, end_i) File &quot;C:\Python311\Lib\site-packages\pandas\io\formats\csvs.py&quot;, line 315, in _save_chunk libwriters.write_csv_rows( File &quot;pandas\_libs\writers.pyx&quot;, line 72, in pandas._libs.writers.write_csv_rows OSError: [Errno 28] No space left on device During handling of the above exception, another exception occurred: OSError: [Errno 28] No space left on device During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\elena\Desktop\actors.py&quot;, line 57, in &lt;module&gt; get_popular_movies() File &quot;C:\Users\elena\Desktop\actors.py&quot;, line 49, in get_popular_movies csv_edges.to_csv('edges.csv',index=False,header=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Python311\Lib\site-packages\pandas\util\_decorators.py&quot;, line 211, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Python311\Lib\site-packages\pandas\core\generic.py&quot;, line 3720, in to_csv return DataFrameRenderer(formatter).to_csv( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Python311\Lib\site-packages\pandas\util\_decorators.py&quot;, line 211, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Python311\Lib\site-packages\pandas\io\formats\format.py&quot;, line 1189, in to_csv csv_formatter.save() File &quot;C:\Python311\Lib\site-packages\pandas\io\formats\csvs.py&quot;, line 241, in save with get_handle( File 
&quot;C:\Python311\Lib\site-packages\pandas\io\common.py&quot;, line 133, in __exit__ self.close() File &quot;C:\Python311\Lib\site-packages\pandas\io\common.py&quot;, line 125, in close handle.close() OSError: [Errno 28] No space left on device </code></pre> <p>From what I understand, I don't have enough space on my computer. How can I fix this? I use Windows, not Linux.</p>
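Errno 28 is the operating system reporting that the drive the file is being written to (here the drive holding `edges.csv`) is full, on Windows just as on Linux. You can check programmatically with `shutil.disk_usage` before writing large output:

```python
import shutil

def free_gb(path="."):
    """Free space on the drive containing *path*, in gigabytes."""
    # disk_usage returns a named tuple: total, used, free (all in bytes)
    return shutil.disk_usage(path).free / (1024 ** 3)

print(f"Free space on current drive: {free_gb():.1f} GB")
```

The actual fixes are freeing space (Disk Cleanup, emptying the Recycle Bin, deleting old downloads) or pointing `to_csv` at a drive with room; the check above just lets the script fail fast with a clear message.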
<python><oserror>
2023-01-10 17:56:12
0
337
Elly
75,073,999
18,403,743
Using Aspose.Slides For Python With MacOS
<p>I'm utilising aspose.slides for Python successfully on Windows, recently they've added MacOS support. When I try to run ANY aspose.slides script on MacOS I get an error that it can't find libpython dylib in /usr/lib or /usr/local/lib</p> <p>Simple example</p> <pre><code>import aspose.slides as slides # Instantiate a Presentation object that represents a presentation file with slides.Presentation() as presentation: slide = presentation.slides[0] slide.shapes.add_auto_shape(slides.ShapeType.LINE, 50, 150, 300, 0) presentation.save(&quot;NewPresentation_out.pptx&quot;, slides.export.SaveFormat.PPTX) </code></pre> <p>I've added a symbolic link from /usr/lib or /usr/local/lib directory to the libpython file but now I get the error β€œzsh segmentation fault”</p> <p>I've googled and tried so many things but always the same result. Can anyone shed any light on where I maybe going wrong please?</p>
<python><aspose><aspose-slides>
2023-01-10 17:55:05
0
372
Luke Bowes
75,073,830
10,530,575
Python - run script at every specific MINUTE time of the day
<p>I need to run a Python script at a specific second of EVERY minute. I CANNOT use time.sleep to wait 60 seconds, because my script takes 8-10 seconds to run (60 + 8 = 68 or 60 + 10 = 70, so the schedule would drift). I need to run the script at the specific times below throughout the day.</p> <pre><code>2023-01-09 01:01:05 2023-01-09 01:02:05 2023-01-09 01:03:05 2023-01-09 01:04:05 2023-01-09 01:05:05 </code></pre> <p>I have searched a lot and can't find a good answer. Worst case, should I create a dataframe of times to drive the script?</p> <p>Besides, in Windows Task Scheduler I don't see an option to set a 1-minute recurrence.</p> <p>Thanks</p>
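One scheduling pattern that does not drift is to compute the next `:05` boundary from the wall clock and sleep exactly until it; the 8-10 s of work is absorbed because each iteration re-syncs to the clock rather than sleeping a fixed 60 s. A sketch (the job function is a placeholder):

```python
import time
from datetime import datetime, timedelta

def next_run(now, second=5):
    """Return the next wall-clock time whose seconds field equals *second*."""
    candidate = now.replace(second=second, microsecond=0)
    if candidate <= now:
        candidate += timedelta(minutes=1)
    return candidate

def run_forever(job, second=5):
    while True:
        target = next_run(datetime.now(), second)
        # Sleep until the boundary; the job's 8-10 s runtime is re-absorbed
        time.sleep(max(0.0, (target - datetime.now()).total_seconds()))
        job()

# run_forever(my_job)  # hypothetical job function
```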
<python><python-3.x><dataframe><datetime>
2023-01-10 17:38:29
1
631
PyBoss
75,073,788
12,506,486
Pyspark - withColumn + when with variable give "Method or([class java.lang.Boolean]) does not exist"
<p>I need to add a column to data frame based on the one of the other columns AND a variable value (represented here as <code>otherThing</code>), see below:</p> <pre class="lang-py prettyprint-override"><code>otherThing = &quot;test&quot; dataDF = spark.createDataFrame([(66, &quot;a&quot;, &quot;4&quot;), (67, &quot;a&quot;, &quot;0&quot;), (70, &quot;b&quot;, &quot;4&quot;), (71, &quot;d&quot;, &quot;4&quot;)], (&quot;id&quot;, &quot;code&quot;, &quot;amt&quot;)) #this works fine dataDF.withColumn(&quot;new_column&quot;, when((dataDF[&quot;id&quot;] &lt;= 70), &quot;A&quot;).otherwise(&quot;B&quot;)).display() #this gives me error dataDF.withColumn(&quot;new_column&quot;, when((dataDF[&quot;id&quot;] &lt;= 70) | (otherThing == &quot;&quot;), &quot;A&quot;).otherwise(&quot;B&quot;)).display() </code></pre> <p>This returns the following error: Method or([class java.lang.Boolean]) does not exist In the example <code>otherThing</code> is constant, but in real scenario it can have different values</p>
<python><dataframe><pyspark>
2023-01-10 17:34:15
1
1,734
Stachu
75,073,786
16,484,106
How do I write to a Python subprocess?
<p>I'm trying to write a Python script that starts a subprocess to run an Azure CLI command once the file is executed.</p> <p>When I run locally, I run:</p> <pre><code>az pipelines create --name pipeline-from-cli --repository https://github.com/&lt;org&gt;/&lt;project&gt; --yml-path &lt;path to pipeline&gt;.yaml --folder-path _poc-area </code></pre> <p>I get prompted for an input which looks like:</p> <pre><code>Which service connection do you want to use to communicate with GitHub? [1] Create new GitHub service connection [2] &lt;my connection name&gt; [3] &lt;org name&gt; Please enter a choice [Default choice(1)]: </code></pre> <p>I can type in 2 and press enter then my pipeline is successfully created in Azure DevOps. I would like to run this command being dynamically entered when prompted.</p> <p>So far I have tried:</p> <pre><code>import subprocess cmd = 'az pipelines create --name pipeline-from-cli --repository https://github.com/&lt;org&gt;/&lt;project&gt; --yml-path &lt;path to pipeline&gt;.yaml --folder-path _poc-area cmd = cmd.split() subprocess.run(cmd, shell=True) </code></pre> <p>This will run in the exact same way as when I try to run it locally.</p> <p>Try to follow answers from <a href="https://stackoverflow.com/questions/8475290/how-do-i-write-to-a-python-subprocess-stdin">here</a> I have also tried:</p> <pre><code>p = subprocess.run(cmd, input=&quot;1&quot;, capture_output=True, text=True, shell=True) print(p) </code></pre> <p>Which gives me an error saying <code>raise NoTTYException(error_msg)\nknack.prompting.NoTTYException</code>.</p> <p>Is there a way where I can execute this Python script, and it will run the Azure CLI command then enter 2 when prompted without any manually intervention?</p>
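One thing to note: `NoTTYException` is raised by the Azure CLI itself when it wants to prompt but stdin is not a terminal, so the most robust fix is to avoid the prompt entirely (check `az pipelines create --help` for a flag such as `--service-connection` to supply the choice non-interactively; the flag name is an assumption here, not confirmed from the question). That said, the general pattern of feeding a prompt answer to a child process looks like this, sketched with a stand-in child script rather than `az`:

```python
import subprocess
import sys

# Stand-in for the az prompt: a child process that reads one line from stdin
# (hypothetical child; substitute your real command list)
child = [sys.executable, "-c",
         "choice = input('Please enter a choice: '); print('you chose', choice)"]

result = subprocess.run(
    child,                 # pass a list and leave shell=False
    input="2\n",           # this text is written to the child's stdin
    capture_output=True,
    text=True,
)
print(result.stdout)
```

Also note that combining a list of arguments with `shell=True` does not do what you expect on POSIX; pass the list without `shell=True`. If `az` still raises `NoTTYException`, it is refusing to prompt without a terminal at all, and the answer must be supplied as a command-line argument instead of via stdin.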
<python><python-3.x><azure-devops><subprocess><azure-cli>
2023-01-10 17:34:10
2
384
agw2021
75,073,719
62,206
Python object returning different property values between threads
<h2>Problem</h2> <p>I'm experiencing unexpected behavior when trying to update an objects properties across threads. During application start, I spawn a thread to do some I/O that I don't want to block the main thread starting up the rest of the application. I then want to communicate back to the main thread that we have finished loading, by setting a <code>ready</code> property to <code>True</code> on an object.</p> <p>What happens instead is that the read property from the main thread is never updated and the application never fully initializes. In debugging, I see that both threads see the same object reference using <code>hex(id(object))</code>, but they both see different property values.</p> <p>To ensure this is not caused by a race condition, I setup <code>@property</code> methods (getter and setter) to monitor all interactions on the <code>ready</code> property and output the context:</p> <pre><code>Thread: 274910491840 - obj id 0x4002203d60 ready property set to True Thread: 274910491840 - obj id 0x4002203d60 ready property read as True Thread: 275066861312 - obj id 0x4002203d60 ready property read as False Thread: 274910491840 - obj id 0x4002203d60 ready property read as True Thread: 274910491840 - obj id 0x4002203d60 ready property read as True Thread: 275066861312 - obj id 0x4002203d60 ready property read as False ... etc </code></pre> <p>As we see here, the same object id reads as different values persistently across threads. I have not experienced this in other languages and think it might be a Python threading idiosyncrasy I'm not aware of?</p> <h2>Context</h2> <p>The main application is initialized as an asyncio loop, with the secondary thread spun up before initialization. 
I subclass an open source package called <code>kserve</code>, with the main thread startup logic looking like this: <a href="https://github.com/kserve/kserve/blob/master/python/kserve/kserve/model_server.py#L257-L280" rel="nofollow noreferrer">https://github.com/kserve/kserve/blob/master/python/kserve/kserve/model_server.py#L257-L280</a>.</p> <p>My secondary thread code is very simple and just looks like:</p> <pre class="lang-py prettyprint-override"><code> def loader(): # load things... obj.ready = True load_thread = Thread(target=loader) load_thread.start() </code></pre> <p>How is it possible that given object <code>obj</code> with the same reference id, one of its properties can return different values across threads?</p> <p>Thanks for any help you can provide!</p>
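It is hard to say from the excerpt why the two threads disagree; one common cause worth ruling out is that the "main thread" actually lives in a different *process* (e.g. under a forking server), where each process gets its own copy of the object even though `id()` values can coincide. Whatever the cause, the idiomatic way to signal readiness across threads is `threading.Event` rather than a plain attribute; a minimal sketch:

```python
import threading
import time

ready = threading.Event()

def loader():
    time.sleep(0.1)  # stand-in for the real I/O work
    ready.set()      # signal the waiting thread

t = threading.Thread(target=loader)
t.start()

# Main thread: block until the loader signals, with an optional timeout
if ready.wait(timeout=5):
    print("loader finished")
t.join()
```

`Event.wait` also avoids busy-polling the flag, and its memory semantics are well defined across threads.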
<python><multithreading><concurrency>
2023-01-10 17:28:26
1
1,199
Zack
75,073,594
13,054,038
Access python global variable inside airflow task group
<p>I have set the variable <code>cluster_key</code> in a python function which is called via a python operator.</p> <pre><code>cluster_key = &quot;&quot; new_key =&quot;&quot; def dynamic_list(e_run_id): global cluster_key cluster_key = 'key_cluster_'+e_run_id+'' global new_key new_key = 'new_key_'+e_run_id+'' . . . . . Variable.set(cluster_key, keys) Variable.set(new_key, new_values) </code></pre> <p>When I try to access it inside a task group, it is not able to fetch the variable:</p> <pre><code>with DAG( dag_id=JOB_NAME, default_args=default_args, start_date=yesterday, )as main_dag: groups = [] with TaskGroup(group_id='Dynamic_dataproc_Processing') as dataprocs_jobs: start = BashOperator(task_id=&quot;start&quot;, bash_command=&quot;sleep 1m&quot;) sub_groups = [] with TaskGroup('dataproc_create_cluster', prefix_group_id=False) as dataproc_create_clusters: for i in list(Variable.get(cluster_key)): dynmaic_create_cluster = DataprocCreateClusterOperator( task_id=&quot;create_cluster_{0}&quot;.format(str(i)), project_id='{0}'.format(project_id1), cluster_config=CLUSTER_GENERATOR_CONFIG, region='{0}'.format(REGION), cluster_name=&quot;dataproc-clustersrc-{0}-src-{1}&quot;.format(SRC_GROUP, (i)), sla=timedelta(minutes=5) ) </code></pre> <p>Error: <code>KeyError: 'Variable does not exist'</code></p> <p>If I mention <code>global cluster_key</code> in the scope of TaskGroup, then it gives the below error:</p> <p><code>SyntaxError: name 'cluster_key' is assigned to before global declaration</code></p>
<python><python-3.x><airflow>
2023-01-10 17:16:35
1
313
djgcp
75,073,590
3,896,008
How to remove boundaries in matplotlib rectangles?
<p>The following snippet draws non-overlapping rectangles and works as expected:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111, aspect=1) for i in range(5, 30, 5): for j in range(5, 30, 5): rect = matplotlib.patches.Rectangle((i, j), 5, 5, color='blue') ax.add_patch(rect) plt.xlim([0, 45]) plt.ylim([0, 45]) plt.show() </code></pre> <p><a href="https://i.sstatic.net/0vTaVm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0vTaVm.png" alt="enter image description here" /></a></p> <p>But when I wish to make the rectangles transparent, they tend to have a border around them, as shown below:</p> <pre class="lang-py prettyprint-override"><code> rect = matplotlib.patches.Rectangle((i, j), 5, 5, color='blue', alpha=0.2) </code></pre> <p><a href="https://i.sstatic.net/wB9nRm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wB9nRm.png" alt="enter image description here" /></a></p> <p>Initially, I thought maybe that was due to some overlaps. So, I reduced the size of the rectangles, but I still got it. For example, if I use</p> <pre class="lang-py prettyprint-override"><code> rect = matplotlib.patches.Rectangle((i, j), 4.5, 4.5, color='blue', alpha=0.2) </code></pre> <p>I get the following:</p> <p><a href="https://i.sstatic.net/n3QqBm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n3QqBm.png" alt="enter image description here" /></a></p> <p>A zoomed-in version of the above image is:</p> <p><a href="https://i.sstatic.net/R8Gbgm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R8Gbgm.png" alt="enter image description here" /></a></p> <p>As you can see, I still get those boundaries with lower transparency (than alpha=0.2). How can I get rid of those boundaries?</p>
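The faint border is the rectangle's own edge stroke: `color='blue'` sets both face and edge, and with transparency the edge (drawn on top of the face) reads as a darker outline. Drawing with `facecolor` only and suppressing the edge removes it. A sketch of the original snippet with those changes (the Agg backend line is only so this also runs headless; drop it in an interactive session):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; remove for interactive use
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

fig = plt.figure()
ax = fig.add_subplot(111, aspect=1)
for i in range(5, 30, 5):
    for j in range(5, 30, 5):
        rect = Rectangle((i, j), 5, 5,
                         facecolor='blue', alpha=0.2,
                         linewidth=0, edgecolor='none')  # no border stroke
        ax.add_patch(rect)
plt.xlim([0, 45])
plt.ylim([0, 45])
fig.canvas.draw()  # render; use plt.show() interactively
```

Either `linewidth=0` or `edgecolor='none'` alone is usually enough; setting both is belt-and-braces.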
<python><matplotlib>
2023-01-10 17:16:16
1
1,347
lifezbeautiful
75,073,571
468,455
How to add a Google formula containing commas and quotes to a CSV file?
<p>I'm trying to output a CSV file from Python and make one of the entries a Google sheet formula:</p> <p>This is what the formula var would look like:</p> <pre><code> strLink = &quot;https://xxxxxxx.xxxxxx.com/Interact/Pages/Content/Document.aspx?id=&quot; + strId + &quot;&amp;SearchId=0&amp;utm_source=interact&amp;utm_medium=general_search&amp;utm_term=*&quot; strLinkCellFormula = &quot;=HYPERLINK(\&quot;&quot; + strLink + &quot;\&quot;, \&quot;&quot; + strTitle + &quot;\&quot;)&quot; </code></pre> <p>and then for each row of the CSV I have this:</p> <pre><code> strCSV = strCSV + strId + &quot;, &quot; + &quot;\&quot;&quot; + strTitle + &quot;\&quot;, &quot; + strAuthor + &quot;, &quot; + strDate + &quot;, &quot; + strStatus + &quot;, &quot; + &quot;\&quot;&quot; + strSection + &quot;\&quot;, \&quot;&quot; + strLinkCellFormula +&quot;\&quot;\n&quot; </code></pre> <p>Which doesn't quite work, the hyperlink formula for Google sheets is like so:</p> <pre><code>=HYPERLINK(url, title) </code></pre> <p>and I can't seem to get that comma escaped. So in my Sheet I am getting an additional column with the title in it and obviously the formula does not work.</p>
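Rather than escaping by hand while concatenating `strCSV`, you can let Python's `csv` module quote fields for you: any field containing commas or quotes is wrapped in quotes with internal quotes doubled, which Sheets parses back into a single formula cell. A sketch with hypothetical values standing in for `strId`, `strTitle`, and the link:

```python
import csv
import io

str_id = "12345"                       # hypothetical values for the sketch
str_title = 'My "quoted" title'
str_link = ("https://example.invalid/Interact/Pages/Content/Document.aspx"
            f"?id={str_id}&SearchId=0")
formula = f'=HYPERLINK("{str_link}", "{str_title}")'

buf = io.StringIO()
writer = csv.writer(buf)               # default QUOTE_MINIMAL quotes as needed
writer.writerow([str_id, str_title, formula])
print(buf.getvalue())
```

In the real script you would pass an open file instead of the `StringIO` buffer; the point is that the whole `=HYPERLINK(...)` expression, commas and all, travels as one field.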
<python><csv><google-sheets><formula>
2023-01-10 17:14:27
2
6,396
PruitIgoe
75,073,480
3,624,171
Pandas ValueError: Columns must be same length as key PyCharm
<p>I want to use a dictionary to add a column to a pandas DataFrame. I use apply lambda with a function to a row. I get 'ValueError: Columns must be same length as key'. I should be able to add a new column, but to simplify I included the column to change in the df.</p> <p>I don't see what I'm doing wrong.</p> <pre><code>import pandas as pd court_dict = dict(zip(['INC:INC08 Pensions', 'TX:TX01 Federal Tax', 'HO:HO08 Rent'], [8, 8, 0])) bank_info = { 'Category':['INC:INC08 Pensions', 'TX:TX01 Federal Tax', 'HO:HO08 Rent'], 'Amount':[1250.23, 300.0, 1000], 'Paragraph': ['', '', '', ] } bank2 = pd.DataFrame(bank_info) def get_column_names(row: pd.core.series.Series, position: int) -&gt; str: category = row['Category'] result = court_dict.get(category, 'd') print(category, result) return result if __name__==&quot;__main__&quot;: bank2[['Paragraph']] = bank2.apply(lambda row:get_column_names(row, 0), axis=1) print(bank2) </code></pre> <p>Here's the output:</p> <pre><code>C:\Users\Steve\anaconda3\envs\AccountingPersonal\python.exe C:\Users\Steve\PycharmProjects\AccountingPersonal\src\get_simple.py INC:INC08 Pensions 8 TX:TX01 Federal Tax 8 HO:HO08 Rent 0 Traceback (most recent call last): File &quot;C:\Users\Steve\PycharmProjects\AccountingPersonal\src\get_simple.py&quot;, line 20, in &lt;module&gt; bank2[['Paragraph']] = bank2.apply(lambda row:get_column_names(row, 0), axis=1) File &quot;C:\Users\Steve\anaconda3\envs\AccountingPersonal\lib\site-packages\pandas\core\frame.py&quot;, line 3643, in __setitem__ self._setitem_array(key, value) File &quot;C:\Users\Steve\anaconda3\envs\AccountingPersonal\lib\site-packages\pandas\core\frame.py&quot;, line 3702, in _setitem_array self._iset_not_inplace(key, value) File &quot;C:\Users\Steve\anaconda3\envs\AccountingPersonal\lib\site-packages\pandas\core\frame.py&quot;, line 3721, in _iset_not_inplace raise ValueError(&quot;Columns must be same length as key&quot;) ValueError: Columns must be same length as key Process finished 
with exit code 1 </code></pre>
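The error comes from the left side of the assignment: `bank2[['Paragraph']]` (double brackets) asks pandas to set a *list* of columns, while `apply` returns a single Series, so the lengths don't match. Single brackets fix it, and for a plain dictionary lookup `Series.map` is simpler than a row-wise `apply`; a sketch with the same data:

```python
import pandas as pd

court_dict = {"INC:INC08 Pensions": 8, "TX:TX01 Federal Tax": 8, "HO:HO08 Rent": 0}
bank2 = pd.DataFrame({
    "Category": ["INC:INC08 Pensions", "TX:TX01 Federal Tax", "HO:HO08 Rent"],
    "Amount": [1250.23, 300.0, 1000],
})

# Single brackets: assign one Series to one column; .get's default 'd'
# is reproduced with a lambda
bank2["Paragraph"] = bank2["Category"].map(lambda c: court_dict.get(c, "d"))
print(bank2)
```

`bank2["Paragraph"] = bank2.apply(...)` with the original function would also work; only the double brackets were at fault.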
<python><python-3.x><pandas><dataframe>
2023-01-10 17:07:19
3
432
Steve Maguire
75,073,470
10,710,625
sum rows from two different data frames based on the value of columns
<p>I have two data frames</p> <p><code>df1</code></p> <pre><code> ID Year Primary_Location Secondary_Location Sales 0 11 2023 NewYork Chicago 100 1 11 2023 Lyon Chicago,Paris 200 2 11 2023 Berlin Paris 300 3 12 2022 Newyork Chicago 150 4 12 2022 Lyon Chicago,Paris 250 5 12 2022 Berlin Paris 400 </code></pre> <p><code>df2</code></p> <pre><code> ID Year Primary_Location Sales 0 11 2023 Chicago 150 1 11 2023 Paris 200 2 12 2022 Chicago 300 3 12 2022 Paris 350 </code></pre> <p>I would like for each group having the same <code>ID</code> &amp; <code>Year</code>: to add the column <code>Sales</code> from <code>df2</code> to <code>Sales</code> in <code>df1</code> where <code>Primary_Location</code> in <code>df2</code> appear (contained) in <code>Secondary_Location</code> in <code>df1</code>.</p> <p>For example: For <code>ID=11</code> &amp; <code>Year=2023</code>, <code>Sales</code> for <code>Lyon</code> would be added to <code>Sales</code> for <code>Chicago</code> &amp; <code>Sales</code> for <code>Paris</code> of <code>df_2</code>.</p> <p>New <code>Sales</code> of <code>Paris</code> for that row would be 200+150+200=550.</p> <p>The expected output would be :</p> <pre><code>df_primary_output ID Year Primary_Location Secondary_Location Sales 0 11 2023 NewYork Chicago 250 1 11 2023 Lyon Chicago,Paris 550 2 11 2023 Berlin Paris 500 3 12 2022 Newyork Chicago 400 4 12 2022 Lyon Chicago,Paris 900 5 12 2022 Berlin Paris 750 </code></pre> <p><strong>Here are the dataframes to start with :</strong></p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df1 = pd.DataFrame({'ID': [11, 11, 11, 12, 12, 12], 'Year': [2023, 2023, 2023, 2022, 2022, 2022], 'Primary_Location': ['NewYork', 'Lyon', 'Berlin', 'Newyork', 'Lyon', 'Berlin'], 'Secondary_Location': ['Chicago', 'Chicago,Paris', 'Paris', 'Chicago', 'Chicago,Paris', 'Paris'], 'Sales': [100, 200, 300, 150, 250, 400] }) df2 = pd.DataFrame({'ID': [11, 11, 12, 12], 'Year': [2023, 2023, 2022, 2022], 'Primary_Location': 
['Chicago', 'Paris', 'Chicago', 'Paris'], 'Sales': [150, 200, 300, 350] }) </code></pre> <p><em><strong>EDIT: pandas.errors.InvalidIndexError: Reindexing only valid with uniquely valued Index objects</strong></em></p> <p>Would be great if the solution could work for these inputs as well:</p> <p><code>df1</code></p> <pre><code> Day ID Year Primary_Location Secondary_Location Sales 0 1 11 2023 NewYork Chicago 100 1 1 11 2023 Berlin Chicago 300 2 1 11 2022 Newyork Chicago 150 3 1 11 2022 Berlin Chicago 400 </code></pre> <p><code>df2</code></p> <pre><code> Day ID Year Primary_Location Sales 0 1 11 2023 Chicago 150 1 1 11 2022 Chicago 300 </code></pre> <p>The expected output would be :</p> <pre><code>df_primary_output Day ID Year Primary_Location Secondary_Location Sales 0 1 11 2023 NewYork Chicago 250 1 1 11 2023 Berlin Chicago 450 2 1 11 2022 Newyork Chicago 450 3 1 11 2022 Berlin Chicago 700 </code></pre>
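One way to sketch this in pandas: explode `Secondary_Location` into one row per city while remembering the originating row index, left-merge `df2` on `(ID, Year, city)`, and sum the matches back onto each original row. (Note: applying the stated rule to the fourth row gives 150 + 300 = 450, not the 400 shown in the expected table.)

```python
import pandas as pd

df1 = pd.DataFrame({'ID': [11, 11, 11, 12, 12, 12],
                    'Year': [2023, 2023, 2023, 2022, 2022, 2022],
                    'Primary_Location': ['NewYork', 'Lyon', 'Berlin', 'Newyork', 'Lyon', 'Berlin'],
                    'Secondary_Location': ['Chicago', 'Chicago,Paris', 'Paris', 'Chicago', 'Chicago,Paris', 'Paris'],
                    'Sales': [100, 200, 300, 150, 250, 400]})
df2 = pd.DataFrame({'ID': [11, 11, 12, 12],
                    'Year': [2023, 2023, 2022, 2022],
                    'Primary_Location': ['Chicago', 'Paris', 'Chicago', 'Paris'],
                    'Sales': [150, 200, 300, 350]})

# One row per secondary city, keeping df1's row index in an 'index' column
exploded = (df1[['ID', 'Year', 'Secondary_Location']]
            .assign(city=lambda d: d['Secondary_Location'].str.split(','))
            .explode('city')
            .reset_index())
exploded['city'] = exploded['city'].str.strip()

# Look up each (ID, Year, city) in df2
looked_up = exploded.merge(
    df2.rename(columns={'Primary_Location': 'city', 'Sales': 'extra'}),
    on=['ID', 'Year', 'city'], how='left')

# Sum the matched sales back onto the originating df1 row
extra = looked_up.groupby('index')['extra'].sum()
result = df1.copy()
result['Sales'] = result['Sales'] + extra.reindex(result.index).fillna(0).astype(int)
print(result)
```

Because the group-back key is the original positional index (carried as a column), this also handles the edited inputs with repeated `(ID, Year)` groups without the duplicate-index `InvalidIndexError`.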
<python><pandas><dataframe>
2023-01-10 17:06:49
2
739
the phoenix
75,073,409
25,282
Python: Download Google Doc as Odt
<p>I want to download a Google doc as an <code>.odt</code>-file. There's a <a href="https://stackoverflow.com/q/38511444/25282">previous question</a> on how to download normal files from Google Drive.</p> <p>I want to download the Doc with Python, because I then want to continue to work with it within my slave.</p>
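For a Doc your credentials can read, the Drive API's `files().export` with mimeType `application/vnd.oasis.opendocument.text` is the supported route; for a publicly shared Doc, the plain export URL is enough. A sketch that only builds the URL (the document ID is hypothetical, and the `requests` call is left commented because it needs network access and a shared document):

```python
# Hypothetical document ID; the real one comes from the doc's URL
DOC_ID = "1AbCdEfG"

def odt_export_url(doc_id: str) -> str:
    """Build the Google Docs export URL for OpenDocument Text format."""
    return f"https://docs.google.com/document/d/{doc_id}/export?format=odt"

url = odt_export_url(DOC_ID)
print(url)

# For a publicly shared doc, a plain GET is enough (requests assumed installed):
# import requests
# with open("doc.odt", "wb") as f:
#     f.write(requests.get(url).content)
```

For private documents, authenticate with the Drive API client and call `files().export_media(fileId=..., mimeType="application/vnd.oasis.opendocument.text")` instead of the URL above.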
<python><google-docs><google-docs-api>
2023-01-10 17:01:16
1
26,469
Christian
75,073,367
1,245,420
How to call SCons global function from a helper-script called by a SConscript?
<p>I have some non-trivial logic necessary to compute the paths to certain source &amp; header file directories and since it applies to multiple SConscripts I put it in a separate .py file, imported from each SConscript that needs it. Now I need to be able to call a SCons global function (in this case, the <code>Glob</code> function) from within the helper-script. Previously that function was callable directly from within the SConscripts, but now that I need to call it from the separate helper-script I can't figure out how to call it. I suppose this is probably trivial but I'm really not very familiar with Python.</p> <p><strong>UPDATE:</strong><br> It seems the ability to call <code>Glob()</code> as a &quot;global function&quot; has something to do with some trickery that SCons is playing. Here's an excerpt from the main entry file (SConscript.py, which is not the same as the SConscripts that we pepper all over our source tree):</p> <pre><code>_DefaultEnvironmentProxy = None def get_DefaultEnvironmentProxy(): global _DefaultEnvironmentProxy if not _DefaultEnvironmentProxy: default_env = SCons.Defaults.DefaultEnvironment() _DefaultEnvironmentProxy = SCons.Environment.NoSubstitutionProxy(default_env) return _DefaultEnvironmentProxy class DefaultEnvironmentCall(object): &quot;&quot;&quot;A class that implements &quot;global function&quot; calls of Environment methods by fetching the specified method from the DefaultEnvironment's class. 
Note that this uses an intermediate proxy class instead of calling the DefaultEnvironment method directly so that the proxy can override the subst() method and thereby prevent expansion of construction variables (since from the user's point of view this was called as a global function, with no associated construction environment).&quot;&quot;&quot; def __init__(self, method_name, subst=0): self.method_name = method_name if subst: self.factory = SCons.Defaults.DefaultEnvironment else: self.factory = get_DefaultEnvironmentProxy def __call__(self, *args, **kw): env = self.factory() method = getattr(env, self.method_name) return method(*args, **kw) def BuildDefaultGlobals(): &quot;&quot;&quot; Create a dictionary containing all the default globals for SConstruct and SConscript files. &quot;&quot;&quot; global GlobalDict if GlobalDict is None: GlobalDict = {} import SCons.Script # &lt;-------This is referring to a directory with a file named __init__.py, which I've learned is something special in Python d = SCons.Script.__dict__ def not_a_module(m, d=d, mtype=type(SCons.Script)): return not isinstance(d[m], mtype) for m in filter(not_a_module, dir(SCons.Script)): GlobalDict[m] = d[m] return GlobalDict.copy() </code></pre> <p>So I thought perhaps I need to <code>import SCons.Script</code> in my helper-script:</p> <pre><code>import SCons.Script </code></pre> <p>Unfortunately I still get this:</p> <pre><code>NameError: name 'Glob' is not defined: File &quot;D:\Git\......\SConstruct&quot;, line 412: SConscript(theSconscript, duplicate=0) File &quot;c:\python37\lib\site-packages\scons\SCons\Script\SConscript.py&quot;, line 671: return method(*args, **kw) File &quot;c:\python37\lib\site-packages\scons\SCons\Script\SConscript.py&quot;, line 608: return _SConscript(self.fs, *files, **subst_kw) . . . File &quot;D:\Git\........\SConscript&quot;, line 433: . . . 
File &quot;D:\Git\............\PathSelector.py&quot;, line 78: src_files.extend(Glob((os.path.join(src_path_a, '*.c')), strings=1)) </code></pre>
<python><scons>
2023-01-10 16:56:14
1
7,893
phonetagger
75,073,085
19,826,650
Passing array object from PHP to Python
<p>This is my code so far</p> <pre><code> $dataraw = $_SESSION['image']; $datagambar = json_encode($dataraw); echo '&lt;pre&gt;'; print_r($dataraw); echo '&lt;/pre&gt;'; print($escaped_json); $type1 = gettype($dataraw); print($type1); $type2 = gettype($datagambar); print($type2); </code></pre> <p>This is $dataraw output, the type is array</p> <pre><code>Array ( [0] =&gt; Array ( [FileName] =&gt; 20221227_202035.jpg [Model] =&gt; SM-A528B [Longitude] =&gt; 106.904251 [Latitude] =&gt; -6.167665 ) [1] =&gt; Array ( [FileName] =&gt; 20221227_202157.jpg [Model] =&gt; SM-A528B [Longitude] =&gt; 106.9042428 [Latitude] =&gt; -6.1676580997222 ) ) </code></pre> <p>This is $datagambar output, the type is string</p> <pre><code>[{&quot;FileName&quot;:&quot;20221227_202035.jpg&quot;,&quot;Model&quot;:&quot;SM-A528B&quot;,&quot;Longitude&quot;:106.904251,&quot;Latitude&quot;:-6.167665},{&quot;FileName&quot;:&quot;20221227_202157.jpg&quot;,&quot;Model&quot;:&quot;SM-A528B&quot;,&quot;Longitude&quot;:106.9042428,&quot;Latitude&quot;:-6.167658099722223}] </code></pre> <p>Pass to python</p> <pre><code>echo shell_exec(&quot;D:\Anaconda\python.exe D:/xampp/htdocs/Klasifikasi_KNN/admin/test.py $datagambar&quot;); </code></pre> <p>This is my python test.py</p> <pre><code>import sys, json import os import pymysql import pandas as pd import numpy as np import matplotlib.pyplot as plt import mplcursors as mpl from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import accuracy_score,hamming_loss,classification_report json_list = [] escaped_json1 = sys.argv[1] # this is working but its only a string of array json # print(escaped_json1) # this is working but its only a string of array json json_list.append(json.loads(escaped_json1)) parsed_data = json.loads(escaped_json1) print(json_list) print(parsed_data) </code></pre> <p>When i do print(escaped_json1) it display a string of array json from php($datagambar).</p> 
<p>python output:</p> <pre><code>Hello world has been called [{&quot;FileName&quot;:&quot;20221227_202035.jpg&quot;,&quot;Model&quot;:&quot;SM-A528B&quot;,&quot;Longitude&quot;:106.904251,&quot;Latitude&quot;:-6.167665},{&quot;FileName&quot;:&quot;20221227_202157.jpg&quot;,&quot;Model&quot;:&quot;SM-A528B&quot;,&quot;Longitude&quot;:106.9042428,&quot;Latitude&quot;:-6.167658099722223}] </code></pre> <p>I use Apache as my server with phpMyAdmin and Anaconda.</p> <p>I tried using print(type(escaped_json1)) but it doesn't display the type.</p> <p>json.loads didn't seem to change the string into a Python list.</p> <p>How do I load it so the string becomes a Python list that I can iterate over and convert to a DataFrame?</p>
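On the Python side, `json.loads` already returns a list of dicts that pandas accepts directly; the sketch below uses the JSON as a literal for clarity. The likely real culprit is on the PHP side: without `escapeshellarg($datagambar)` around the argument, the shell strips the double quotes before Python ever sees them, so `sys.argv[1]` is no longer valid JSON.

```python
import json
import pandas as pd

# What PHP should deliver as sys.argv[1] (shown as a literal for the sketch)
escaped_json1 = (
    '[{"FileName":"20221227_202035.jpg","Model":"SM-A528B",'
    '"Longitude":106.904251,"Latitude":-6.167665},'
    '{"FileName":"20221227_202157.jpg","Model":"SM-A528B",'
    '"Longitude":106.9042428,"Latitude":-6.167658099722223}]'
)

parsed = json.loads(escaped_json1)   # -> list of dicts
print(type(parsed))                  # <class 'list'>
df = pd.DataFrame(parsed)            # one row per dict, one column per key
print(df)
```

In the real script, replace the literal with `sys.argv[1]`, and on the PHP side call `shell_exec("... test.py " . escapeshellarg($datagambar))` so the JSON arrives intact as a single argument.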
<python><php><json>
2023-01-10 16:34:05
2
377
Jessen Jie
75,072,979
4,736,140
Python Export table from postgres and import to another postgres using
<p>I have 2 postgres databases with same schema but in 2 different schemas. I am writing a python script with a goal to export data partially from one of the tables and import the result to the same table but in a different database (like <code>select from A where f=123</code>). The schema is large (it has many columns of different types, some are allowed to be null, some aren't. There are date types and string fields which can contain sentences, pseudo-queries and file names) and there can be thousands of rows in a table.</p> <p>I took approach of exporting the data from table to a csv file, then importing the data from a csv file to a second database table.</p> <p>I use <code>psycopg2</code> lib for working with Postgres in Python along with a <code>csv</code> lib to read and write csv files.</p> <p>I implemented the first version. The problem was that: Some columns in a row are empty, when I read the table data in python the empty fields have <code>None</code> value when the field is allowed to be <code>null</code> and where the field is not allowed to be <code>null</code> the value is <code>&quot;&quot;</code> empty string and when exported to csv all the values which are <code>None</code> and <code>&quot;&quot;</code> are inserted as empty strings in a csv file. As an example the row would look like this <code>1234,,,,,1,,</code>. And when I try to import the file to a postgres table all empty values in a csv are converted to <code>null</code> and are tried to insert this way, but it failed because fields which can't be <code>null</code> don't accept this value. Below you can see my code and after that code I pasted the improvement i did to avoid this problem.</p> <pre><code>import psycopg2 import csv def export_table(filename, tablename): conn = psycopg2.connect(....) 
cur = conn.cursor() cur.execute(f'SELECT * FROM {tablename} where f=123') rows = cur.fetchall() with open(filename, 'w', newline='') as csvfile: writer = csv.writer(csvfile) for row in rows: writer.writerow(row) cur.close() conn.close() def import_table(filename, tablename): conn = psycopg2.connect(..second db data) cur = conn.cursor() with open(filename, 'r') as csvfile: cur.copy_expert( f&quot;COPY {tablename} FROM STDIN WITH (FORMAT CSV)&quot;, csvfile ) conn.commit() cur.close() conn.close() </code></pre> <p>I tried to add <code>csv.QUOTE_MINIMAL</code>, <code>csv.QUOTE_NONNUMERIC</code> - they did not help me.</p> <p>Because I was not able to import the data with this code, I tried one more thing.</p> <p>I added a manual function for quoting:</p> <pre><code>def quote_field(field): if isinstance(field, str): if field == '': return '&quot;&quot;' elif any(c in field for c in (',', '&quot;', '\n')): return '&quot;' + field.replace('&quot;', '&quot;&quot;') + '&quot;' return field </code></pre> <p>And updated the export part this way:</p> <pre><code>with open(filename, 'w', newline='') as csvfile: writer = csv.writer(csvfile, quoting=csv.QUOTE_NONE, quotechar='', escapechar='\\') for row in rows: writer.writerow([quote_field(field) for field in row]) </code></pre> <p>I tried running the code; it writes empty-string values to the csv as <code>&quot;&quot;</code> and <code>None</code> values are placed in the csv as just empty fields. So a row in the csv would look like this: <code>1234,,,&quot;&quot;,&quot;&quot;,,,,,&quot;&quot;,,,,,</code> and in some cases this worked: the data was imported correctly. But sometimes, for some reason, the csv that is generated is not imported at all or only partially. To check it I tried to use DataGrip to import data from a csv file manually; for some data it was also importing it just partially (like 20 rows out of 1000) and for some data it was not importing at all. 
I checked the CSVs for validity; they were valid. I think there's a bug in the import part but I don't know where it is or why it behaves this way. I need help with this.</p>
<python><database><postgresql>
2023-01-10 16:25:52
1
2,277
komron
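For the question above, the likely culprit is the NULL-vs-empty-string convention of CSV COPY: Postgres' <code>COPY ... WITH (FORMAT CSV)</code> treats an *unquoted* empty field as NULL and a *quoted* `""` as an empty string, while `csv.writer` emits both `None` and `''` as an unquoted empty field. A database-free sketch of a row formatter that preserves the distinction (the function name is made up for illustration):

```python
def format_row(row):
    """Format one row for COPY ... WITH (FORMAT CSV):
    None becomes an unquoted empty field (NULL); everything else is quoted."""
    out = []
    for field in row:
        if field is None:
            out.append('')  # unquoted empty field -> NULL on import
        else:
            s = str(field)
            # always quote; CSV quoting does not affect type casting,
            # it only marks the field as "not NULL"
            out.append('"' + s.replace('"', '""') + '"')
    return ','.join(out)

print(format_row([1234, None, '', 'a,b']))   # "1234",,"","a,b"
```

An alternative that avoids round-tripping through Python entirely is a server-side export via `cur.copy_expert("COPY (SELECT * FROM tbl WHERE f=123) TO STDOUT WITH (FORMAT CSV, FORCE_QUOTE *)", f)`, which quotes every non-NULL field for the same reason.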
75,072,886
9,343,043
Create all possible combinations of entries from multiple lists
<p>I have different scalar metrics that I need to take inventory of in a large repository of data. The format of one entry for a metric would look something like this:</p> <pre><code>{a}_{b}_{c}_{d} # Where ${a} = subjid, ex. &quot;1000061&quot; # Where ${b} = &quot;ROIstats&quot; # Where ${c} = statistic, [mean, median, sigma] # Where ${d} = modality, [AX, FA, RAD, TR] # EXAMPLE DOWN BELOW: # {a}_{b}_{c}_{d} = 1000061_ROIstats_mean_FA </code></pre> <p>There should be 24 combos total (as seen below): <a href="https://i.sstatic.net/IB8rm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IB8rm.png" alt="All possible data points for one subj_ID" /></a></p> <p>I am attempting to use <code>itertools.product</code> after concatenating <code>{a}</code> and <code>{b}</code>, since there is only one possible entry for both (one unique subject ID, which I am getting from a list <code>found_subjs</code>, and <code>&quot;ROIstats&quot;</code> respectively). Is there a way I can combine <code>{a}, {b}, {c}, and {d}</code> together to capture all the points for each <code>{a}_{b}</code>?</p> <p>Here is my attempt. I don't know if I need to change the formatting or put the combinations in a separate list:</p> <pre><code>cs = [&quot;mean&quot;, &quot;median&quot;, &quot;sigma&quot;] ds = [&quot;AX&quot;, &quot;FA&quot;, &quot;fwAX&quot;, &quot;fwFA&quot;, &quot;fwRAD&quot;, &quot;fwVF&quot;, &quot;RAD&quot;, &quot;TR&quot;] for s in found_subjs: os.chdir(&quot;{final_destination}\\{s}&quot;.format(final_destination=final_destination, s=s)) for c,d in product(cs, ds): print(s+&quot;_&quot;+&quot;ROIstats&quot;+&quot;_&quot;+c+&quot;_&quot;+d) Traceback (most recent call last): File &quot;&lt;ipython-input-34-7fa394581c24&gt;&quot;, line 4, in &lt;module&gt; print(s+&quot;_&quot;+&quot;ROIstats&quot;+&quot;_&quot;+c+&quot;_&quot;+d) TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('&lt;U21') dtype('&lt;U21') dtype('&lt;U21') </code></pre>
<python><string><list><combinations><python-itertools>
2023-01-10 16:19:02
0
871
florence-y
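The `TypeError` in the question above suggests `found_subjs` holds numpy string scalars, for which `+` dispatches to the `add` ufunc. Formatting through an f-string (which goes through `str()`) sidesteps that entirely. A sketch with a hypothetical subject list standing in for the real one:

```python
from itertools import product

cs = ["mean", "median", "sigma"]
ds = ["AX", "FA", "fwAX", "fwFA", "fwRAD", "fwVF", "RAD", "TR"]
found_subjs = ["1000061"]  # hypothetical; in the real code this comes from disk

# f-strings coerce each piece via str(), so numpy string types are fine here
names = [f"{s}_ROIstats_{c}_{d}"
         for s in found_subjs
         for c, d in product(cs, ds)]

print(len(names))   # 3 * 8 = 24 combos per subject
print(names[0])     # 1000061_ROIstats_mean_AX
```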
75,072,848
4,817,370
Python : how can I know which one of my dependencies is attempting to install a specific package?
<p>I have a project with a requirements.txt file that is quite big (67 dependencies). One of the dependencies attempts to use the installed psycopg2 package and fails. I would prefer to install the psycopg2-binary package.</p> <p>How can I find which one of the dependencies requires psycopg2?</p> <p>I have attempted to use <code>pip list</code> and <code>pip show psycopg2</code>, but this only works if your package was installed correctly, which is not my case.</p> <p>I have also attempted to use the <code>pydeps</code> tool, as in <code>pydeps requirements.txt</code>, but this gives me an assertion error which I have yet to figure out.</p> <p>How can I know in advance which package requires the psycopg2 package, directly or indirectly (it can be through a dependency of one of my dependencies)?</p>
<python><python-3.x><pip>
2023-01-10 16:16:27
2
2,559
Matthieu Raynaud de Fitte
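For an environment where the packages are already installed, the declared requirements can be queried from package metadata. A sketch (the function name is made up; note this only sees installed distributions, not an unresolved requirements.txt):

```python
import re
from importlib.metadata import distributions

def who_requires(package_name):
    """Return installed distributions whose declared requirements mention package_name."""
    # match the requirement name exactly (so "psycopg2" won't match "psycopg2-binary")
    needle = re.compile(rf"^{re.escape(package_name)}(?![\w.-])", re.IGNORECASE)
    dependents = set()
    for dist in distributions():
        for req in dist.requires or []:
            # req is a requirement string like "psycopg2>=2.8; extra == 'pg'"
            if needle.match(req):
                dependents.add(dist.metadata["Name"])
                break
    return sorted(dependents)

print(who_requires("psycopg2"))
```

For the pre-installation case, `pipdeptree --reverse --packages psycopg2` reports similar information as a tree once things are installed; inspecting each candidate wheel's `METADATA` after `pip download --no-deps <pkg>` is one way to check requirements before resolution.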
75,072,821
11,781,149
Comparing the value of a column with the previous value of a new column using Apply in Python (Pandas)
<p>I have a dataframe with these values in column A:</p> <pre><code>df = pd.DataFrame(A,columns =['A']) A 0 0 1 5 2 1 3 7 4 0 5 2 6 1 7 3 8 0 </code></pre> <p>I need to create a new column (called B) and populate it using the following conditions:</p> <p>Condition 1: If the value of A is equal to 0, then the value of B must be 0.</p> <p>Condition 2: If the value of A is not 0, then I compare its value to the previous value of B. If A is higher than the previous value of B then I take A, otherwise I take B. The result should be this:</p> <pre><code> A B 0 0 0 1 5 5 2 1 5 3 7 7 4 0 0 5 2 2 6 1 2 7 3 3 8 0 0 </code></pre> <p>The dataset is huge and using loops would be too slow. I need to solve this without using loops or the pandas β€œLoc” function. Could anyone help me solve this using the Apply function? I have tried different things without success.</p> <p>Thanks a lot.</p>
<python><pandas><dataframe><apply>
2023-01-10 16:14:02
4
523
Martingale
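For the question above, `apply` cannot see the value of B it is still building; but the rule is equivalent to a running maximum that resets at every zero, which vectorizes cleanly: each 0 in A starts a new segment, and B is the cumulative max within the segment. A sketch with the sample data:

```python
import pandas as pd

df = pd.DataFrame({"A": [0, 5, 1, 7, 0, 2, 1, 3, 0]})

# every zero starts a new segment; B is the running max inside each segment
segments = df["A"].eq(0).cumsum()
df["B"] = df.groupby(segments)["A"].cummax()

print(df["B"].tolist())   # [0, 5, 5, 7, 0, 2, 2, 3, 0]
```

Both `cumsum` and the grouped `cummax` are vectorized, so this scales to large frames without any Python-level loop or row-wise `apply`.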
75,072,815
10,873,155
Python can't locate module files from other directory
<p>I have a project structured as follows:</p> <pre><code>project\ __init__.py gui\ __init__.py gui_file_one.py gui_file_two.py logic\ __init__.py logic_file_one.py logic_file_two.py </code></pre> <p>Let's say that in <code>gui_file_one.py</code> I need to import class A that is located in <code>logic_file_one.py</code>. When I try to do <code>from logic.logic_file_one import A</code> I get a <code>ModuleNotFoundError: No module named 'logic'</code>. How do I overcome this? How should this class be imported?</p>
<python>
2023-01-10 16:13:42
0
326
Jakub Sapko
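The `ModuleNotFoundError` above usually means the *project root* (the directory containing `gui/` and `logic/`) is not on `sys.path`, which happens when running `python gui/gui_file_one.py`: Python then puts `gui/` itself on the path, not its parent. A self-contained sketch that rebuilds the layout in a temporary directory (paths are illustrative) and shows the import resolving once the root is on the path:

```python
import os
import sys
import tempfile

# recreate the layout from the question in a temporary directory
root = tempfile.mkdtemp()
for pkg in ("gui", "logic"):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()
with open(os.path.join(root, "logic", "logic_file_one.py"), "w") as fh:
    fh.write("class A:\n    pass\n")

# the fix: the project root must be importable
sys.path.insert(0, root)
from logic.logic_file_one import A  # now resolves

print(A.__name__)
```

In practice the cleaner equivalents are running from the project root with `python -m gui.gui_file_one` (the current directory then lands on `sys.path`) or installing the project in editable mode (`pip install -e .`) so no path manipulation is needed.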
75,072,671
15,399,131
Invalid python interpreter when using virtualenv --system-site-packages in VSCode
<p>Ideally I would like to use <a href="https://stackoverflow.com/questions/71484282/loading-an-lmod-module-using-vscode-with-remote-ssh">Loading an Lmod module using VSCode with Remote-SSH</a>; however, the solution of using that directly does not seem to work. On the other hand, virtualenvs ought to be <a href="https://code.visualstudio.com/docs/python/environments" rel="nofollow noreferrer">supported by the Python VSCode extension</a>.</p> <p>Thus I figured the following steps should work instead.</p> <ol> <li>Set up the virtual environment in the remote environment</li> </ol> <pre class="lang-bash prettyprint-override"><code>$ module load Python $ virtualenv --system-site-packages my_python </code></pre> <ol start="2"> <li>Connect via the &quot;Remote - SSH&quot; plugin</li> <li>In the terminal, source your virtual environment from 1 and get the path to the python binary symlink:</li> </ol> <pre><code>$ source my_python/bin/activate $ which python /path/to/my_python/bin/python </code></pre> <ol start="4"> <li>Copy the path to the python binary and paste it into the path to the Python interpreter that VSCode asks for.</li> </ol> <p>And they do, but not always. The first time I tried this it worked, and now every time I try this for the same host it works directly. However, whenever I change to another host with the same parallel file system and redo step 4, it doesn't and instead says</p> <blockquote> <p>Invalid python interpreter selected</p> </blockquote> <p>This is the same error message as when you try to directly point to the python that is loaded with Lmod (and built with EasyBuild, if that makes a difference).</p> <p>Now I'm stuck because the error message is not very helpful; I haven't found a stacktrace or any log files that include this error. 
If I could find that or the code that handles the logic for what is a valid interpreter, that would be really helpful.</p> <p>In summary:</p> <ul> <li><code>virtualenv --system-site-packages</code> symlinked Python doesn't work reliably as interpreter when system site Python is not a valid Python interpreter.</li> <li>How to get this to work reliably?</li> <li>Alternatively, what is the logic that determines a valid interpreter?</li> </ul>
<python><virtualenv><vscode-remote>
2023-01-10 16:03:47
0
642
VRehnberg
75,072,650
12,258,312
Finding mentioned weekdays from text
<p>Let's say we have a text of</p> <blockquote> <p>Even though we celebrate Good Friday and Easter Sunday, there is no mention of days such as &quot;Sunday&quot; or &quot;Wednesday&quot;.</p> </blockquote> <p>Notice the following weekdays are mentioned, <code>Friday, Sunday, Sunday, Wednesday</code></p> <p>We need output some variable <code>weekdays</code> that, in this case, would have:</p> <pre><code>weekdays: [3,5,7] </code></pre> <p>Assuming,</p> <ul> <li>We count from 1</li> <li>We don't care about repeated entries</li> <li>1 is Monday</li> <li>7 is Sunday</li> </ul> <p>What would be the most Pythonic way to approach such information?</p>
<python>
2023-01-10 16:01:52
2
1,204
Tim Abdiukov
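One compact approach to the question above: take the English weekday names from the `calendar` module (already ordered Monday-first) and map each regex hit to its 1-based index. Sketch:

```python
import calendar
import re

text = ('Even though we celebrate Good Friday and Easter Sunday, there is no '
        'mention of days such as "Sunday" or "Wednesday".')

names = list(calendar.day_name)               # Monday .. Sunday
pattern = r"\b(" + "|".join(names) + r")\b"
# a set drops repeats; sorting gives the ascending day numbers
weekdays = sorted({names.index(m) + 1 for m in re.findall(pattern, text)})

print(weekdays)   # [3, 5, 7]
```

For case-insensitive matching, pass `flags=re.IGNORECASE` to `re.findall` and look up `m.capitalize()` instead of `m`. Note `calendar.day_name` is locale-aware; under the default locale it yields the English names.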
75,072,627
3,986,055
Python: create a file on S3
<p>I have a function below for generating the rows of a huge text file.</p> <pre class="lang-py prettyprint-override"><code>def generate_content(n): for _ in range(n): yield 'xxx' </code></pre> <p>Instead of saving the file to disk, then uploading it to S3, is there any way to save the data directly to S3?</p> <p>One thing to mention is the data could be so huge that I don't have enough disk space or memory to hold it.</p>
<python><amazon-web-services><file><amazon-s3><upload>
2023-01-10 15:59:58
1
1,484
Ken Zhang
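`boto3`'s `upload_fileobj` performs a streaming multipart upload from any readable file-like object, so the generator only needs a small adapter; nothing has to be held on disk or fully in memory. A sketch (bucket and key names are made up, and the actual S3 call is left commented since it needs credentials):

```python
import io

class GeneratorStream(io.RawIOBase):
    """Expose a generator of byte chunks as a readable file-like object."""
    def __init__(self, gen):
        self._gen = gen
        self._buf = b""

    def readable(self):
        return True

    def readinto(self, b):
        # refill the internal buffer until we can satisfy the request or hit EOF
        while len(self._buf) < len(b):
            chunk = next(self._gen, None)
            if chunk is None:
                break
            self._buf += chunk
        n = min(len(b), len(self._buf))
        b[:n] = self._buf[:n]
        self._buf = self._buf[n:]
        return n  # returning 0 signals EOF

def generate_content(n):
    for _ in range(n):
        yield b"xxx\n"

stream = io.BufferedReader(GeneratorStream(generate_content(3)))
print(stream.read())   # b'xxx\nxxx\nxxx\n'

# untested sketch of the actual upload:
# import boto3
# boto3.client("s3").upload_fileobj(
#     io.BufferedReader(GeneratorStream(generate_content(10**9))),
#     "my-bucket", "huge.txt")
```

The `smart_open` library wraps the same idea behind an ordinary `open()`-style API, which may be simpler if a third-party dependency is acceptable.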
75,072,597
3,715,862
"Required Content-Range response header is missing or malformed" while using download_blob in azure blob storage using cdn hostname
<p>I have a storage account in azure with a connection string in the format :</p> <pre><code>connection_string = 'DefaultEndpointsProtocol=https;AccountName=&lt;storage_account_name&gt;;AccountKey=&lt;redacted_account_key&gt;;EndpointSuffix=azureedge.net' </code></pre> <p>I am trying to download a blob from a container using the cdn hostname <code>https://&lt;redacted_hostname&gt;.azureedge.net</code> instead of the origin hostname namely <code>https://&lt;redacted_hostname_2&gt;.blob.core.windows.net</code></p> <p>I am trying to download and store a blob present in the following way :</p> <pre><code>from azure.storage.blob import BlobServiceClient, generate_container_sas , ContainerSasPermissions from urllib.parse import urlparse from azure.storage.blob import BlobClient # get container details blob_service_client = BlobServiceClient.from_connection_string(connection_string) container_client = blob_service_client.get_container_client(&quot;container_name&quot;) # get permission perm = ContainerSasPermissions(read=True,list=True) # set expiry from datetime import datetime, timedelta expiry=datetime.utcnow() + timedelta(hours=1) # generate sas token sas_token = generate_container_sas( container_client.account_name, container_client.container_name, account_key=container_client.credential.account_key, permission = perm, expiry=datetime.utcnow() + timedelta(hours=1) ) sas_url = f&quot;https://&lt;redacted_hostname&gt;.azureedge.net/container_name?{sas_token}&quot; container_with_blob = &quot;container_name/file.wav&quot; sas_url_parts = urlparse(sas_url) account_endpoint = sas_url_parts.scheme + '://' + sas_url_parts.netloc sas_token = sas_url_parts.query blob_sas_url = account_endpoint + '/' + container_with_blob + '?' 
+ sas_token; blob_client = BlobClient.from_blob_url(blob_sas_url); with open(&quot;download_file.wav&quot;, &quot;wb&quot;) as current_blob: stream = blob_client.download_blob() current_blob.write(stream.readall()) </code></pre> <p>However, this fails with the following error:</p> <pre><code>raise ValueError(&quot;Required Content-Range response header is missing or malformed.&quot;) ValueError: Required Content-Range response header is missing or malformed </code></pre> <p>However, the same snippet works with the <code>.blob.core.windows.net</code> hostname.</p> <p><b>Attempts to solve the issue</b></p> <ol> <li><p>Changed <code>EndpointSuffix=core.windows.net</code> to <code>EndpointSuffix=azureedge.net</code> in <code>connection_string</code>.</p> </li> <li><p>Got the <code>blob_properties</code> from <code>blob_client</code> and sent them to the <code>download_blob</code> API as shown below:</p> </li> </ol> <pre><code>... blob_properties = blob_client.get_blob_properties() ... stream = blob_client.download_blob(0, blob_properties.size) </code></pre> <p>This throws the same error if I am using the CDN hostname but works fine using the origin.</p> <ol start="3"> <li><p>Tried using <code>BlobEndpoint=azureedge.net</code> instead of <code>EndpointSuffix</code>.</p> </li> <li><p>Tried to <code>set_http_headers</code> on <code>blob_client</code> (<a href="https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blobclient?view=azure-python" rel="nofollow noreferrer">doc</a>), but it doesn't seem to have a <code>content_range</code> property.</p> </li> </ol> <hr /> <p>However, when I directly use the <code>blob_sas_url</code>, i.e. <code>https://&lt;cdn_hostname&gt;/container_name/file.wav?se=&lt;sas_token&gt;</code>, I am able to download the file in my browser.</p> <hr /> <p>Additional point: I have also configured the caching rules to <code>cache all unique url</code>.</p>
<python><azure><azure-blob-storage><azure-cdn>
2023-01-10 15:57:43
1
600
Echo
75,072,498
10,647,708
Removing all text within double quotes
<p>I am working on preprocessing some text in Python and would like to get rid of all text that appears in double quotes within the text. I am unsure how to do that and would appreciate your help with it. A minimally reproducible example is below for your reference. Thank you in advance.</p> <pre><code>x='The frog said &quot;All this needs to get removed&quot; something' </code></pre> <p>So, pretty much what I want to get is <code>'The frog said something'</code> by removing the text in the double quotes from <code>x</code> above, and I am not sure how to do that. Thanks once again.</p>
<python><python-3.x>
2023-01-10 15:50:53
2
329
Dave
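A regex handles the question above in one substitution: match a double-quoted span plus the space before it, so no double space is left behind. Sketch:

```python
import re

x = 'The frog said "All this needs to get removed" something'

# \s*"[^"]*"  ->  optional leading whitespace, then one quoted span
cleaned = re.sub(r'\s*"[^"]*"', '', x)
print(cleaned)   # The frog said something
```

`[^"]*` keeps separate quoted spans from merging into one match, and an unmatched lone quote is simply left untouched.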
75,072,472
6,694,814
Python folium fetching csv data to the jinja macro template
<p>I have the jinja macro template provided to my code, which executes the Leaflet circle creation.</p> <p>I would like to include the .csv data in this template when possible</p> <pre><code>df = pd.read_csv(&quot;survey.csv&quot;) class Circle(folium.ClickForMarker): _template = Template(u&quot;&quot;&quot; {% macro script(this, kwargs) %} var circle_job = L.circle(); function newMarker(e){ circle_job.setLatLng(e.latlng).addTo({{this._parent.get_name()}}); circle_job.setRadius({{rad}}); circle_job.getPopup({{role}}) if {{role}} = &quot;Contractor&quot; { color=&quot;red&quot; }else{ color=&quot;black&quot; } circle_job.bringToFront() parent.document.getElementById(&quot;latitude&quot;).value = lat; parent.document.getElementById(&quot;longitude&quot;).value =lng; }; {{this._parent.get_name()}}.on('click', newMarker); {% endmacro %} &quot;&quot;&quot;) # noqa def __init__(self, popup=None): super(Circle, self).__init__(popup) self._name = 'Circle' job_range = Circle() for i,row in df.iterrows(): lat =df.at[i, 'lat'] lng = df.at[i, 'lng'] sp = df.at[i, 'sp'] phone = df.at[i, 'phone'] role = df.at[i, 'role'] rad = int(df.at[i, 'radius']) </code></pre> <p>is it possible something like this?</p> <p>A similar approach was here:</p> <p><a href="https://stackoverflow.com/questions/68614721/how-add-circle-markers-to-a-rendered-folium-map-embedded-in-a-qwebengineview">How add circle markers to a rendered folium map embedded in a QWebEngineView?</a></p> <p><strong>UPDATE I:</strong></p> <p>I tried recently something like this:</p> <pre><code>class Circle(MacroElement): _template = Template(u&quot;&quot;&quot; {% macro script(this, kwargs) %} var {{this.get_name()}} = L.circle(); function newCircle(e){ {{this.get_name()}}.setLatLng(e.latlng) .addTo({{this._parent.get_name()}}); {{this.get_name()}}.setRadius({{rad}}); {{this.get_name()}}.setStyle({ color: 'black', fillcolor: 'black' }); }; {{this._parent.get_name()}}.on('click', newCircle); {% endmacro %} &quot;&quot;&quot;) # 
noqa def __init__(self, popup=None ): super(Circle, self).__init__() self._name = 'Circle' for i,row in df.iterrows(): lat =df.at[i, 'lat'] lng = df.at[i, 'lng'] sp = df.at[i, 'sp'] phone = df.at[i, 'phone'] role = df.at[i, 'role'] rad = int(df.at[i, 'radius']) popup = '&lt;b&gt;Phone: &lt;/b&gt;' + str(df.at[i,'phone']) work_range = os.path.join('survey_range.geojson') job_range = Circle() </code></pre> <p>Now I lost some features, while the JS console says nothing. Is it possible to fetch data from <code>df.iterrows</code> directly into the MacroElement?</p> <p><strong>UPDATE II</strong></p> <p>I tried to fiddle with the <code>__init__</code> section and now my code looks as follows:</p> <p>class Circle(MacroElement):</p> <pre><code> def __init__(self, popup=None, draggable=False, edit_options=None, radius=rad #lat, #lng ): super(Circle, self).__init__() self._name = 'Circle', self.radius = radius, self._template = Template(u&quot;&quot;&quot; {% macro script(this, kwargs) %} var circle_job = L.circle(); function newCircle(e){ circle_job.setLatLng(e.latlng).addTo({{this._parent.get_name()}}); circle_job.setRadius(50000); circle_job.setStyle({ color: 'black', fillcolor: 'black' }); }; {{this._parent.get_name()}}.on('click', newCircle); {% endmacro %} &quot;&quot;&quot;) # noqa for i,row in df.iterrows(): lat =df.at[i, 'lat'] lng = df.at[i, 'lng'] sp = df.at[i, 'sp'] phone = df.at[i, 'phone'] role = df.at[i, 'role'] rad = int(df.at[i, 'radius']) </code></pre> <p>and the error thrown is: <strong>The rad is not defined</strong></p> <p>Is there any way of including the stuff from inside of <code>df.iterrows()</code> in the <code>jinja2</code> template via defining <code>def __init__</code>?</p>
<javascript><python><leaflet><folium>
2023-01-10 15:49:05
0
1,556
Geographos
75,072,374
10,647,708
Removing these trailing characters from text in Python
<p>I am working with text in Python and trying to remove trailing characters. I am aware of the <code>rstrip</code> function, but it unfortunately does not get rid of the trailing characters due (I think) to the nature of the trailing characters. Below is a minimally reproducible example I would appreciate your help with.</p> <pre><code>x=&quot;test string\\r\\n\\&quot; x.rstrip() </code></pre> <p>What I need to get as a result is &quot;test string&quot;, but I am getting <code>test string\\r\\n\\</code>; in other words, nothing gets removed.</p> <p>Please advise. Thank you in advance.</p>
<python><python-3.x>
2023-01-10 15:42:24
1
329
Dave
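In the question above, `rstrip()` with no argument removes only whitespace; the trailing characters here are the literal characters `\`, `r` and `n`, which must be passed explicitly. Sketch:

```python
import re

x = "test string\\r\\n\\"   # the actual characters are: test string\r\n\

# rstrip takes a *set* of characters to strip from the end
print(x.rstrip("\\rn"))      # test string

# caveat: because it is a set, rstrip would also eat a genuine trailing
# 'r' or 'n' in the data; anchoring the exact sequences is safer
cleaned = re.sub(r"(?:\\r|\\n|\\)+$", "", x)
print(cleaned)               # test string
```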
75,072,265
16,511,234
Time series resample seems to result in wrong data
<p>I have data at a 30-minute interval. When I resample it to 1 hour, the values seem too low.</p> <p>Original data:</p> <pre><code>2022-12-31 22:00:00+01:00;7.500000 2022-12-31 22:30:00+01:00;8.200000 2022-12-31 23:00:00+01:00;10.800000 2022-12-31 23:30:00+01:00;9.500000 2023-01-01 00:00:00+01:00;12.300000 2023-01-01 00:30:00+01:00;168.399994 2023-01-01 01:00:00+01:00;157.399994 2023-01-01 01:30:00+01:00;73.199997 2023-01-01 02:00:00+01:00;59.700001 2023-01-01 02:30:00+01:00;74.000000 </code></pre> <p>After <code>df = df.resample('h', label='right').mean()</code> I get:</p> <pre><code>2022-12-31 23:00:00+01:00;7.850000 2023-01-01 00:00:00+01:00;10.150000 2023-01-01 01:00:00+01:00;90.349997 2023-01-01 02:00:00+01:00;115.299995 2023-01-01 03:00:00+01:00;66.850000 </code></pre> <p>Should the value for 01:00:00 not be <code>162.89</code>?</p>
<python><pandas><date><datetime>
2023-01-10 15:33:42
1
351
Gobrel
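The value in the question above is not wrong, just differently binned: by default `resample` uses left-closed hourly bins, so with `label='right'` the row labelled 01:00 averages the 00:00 and 00:30 samples. To get the expected ~162.9 (the 00:30 and 01:00 samples), close the bins on the right as well. A simplified sketch (timezone dropped for brevity):

```python
import io
import pandas as pd

csv_data = """ts;val
2023-01-01 00:00:00;12.3
2023-01-01 00:30:00;168.4
2023-01-01 01:00:00;157.4
2023-01-01 01:30:00;73.2
"""
df = pd.read_csv(io.StringIO(csv_data), sep=";", index_col=0, parse_dates=True)

# default closed='left': the label 01:00 covers 00:00 <= t < 01:00
left = df.resample("h", label="right").mean()
# closed='right': the label 01:00 covers 00:00 < t <= 01:00
right = df.resample("h", closed="right", label="right").mean()

print(left.loc["2023-01-01 01:00:00", "val"])    # 90.35  (12.3 and 168.4)
print(right.loc["2023-01-01 01:00:00", "val"])   # 162.9  (168.4 and 157.4)
```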
75,072,264
4,542,117
How to ignore specific numbers in a numpy moving average?
<p>Let's say I have a simple numpy array:</p> <pre><code>a = np.array([1,2,3,4,5,6,7]) </code></pre> <p>I can calculate the moving average of a window with size 3 simply like this:</p> <pre><code>np.convolve(a,np.ones(3),'valid') / 3 </code></pre> <p>which would yield</p> <pre><code>array([2., 3., 4., 5., 6.]) </code></pre> <p>Now, I would like to take a moving average but exclude the number '2' whenever it appears. In other words, for the first 3 numbers, originally, it would be (1 + 2 + 3) / 3 = 2. Now, I would like to do (1 + 3) / 2 = 2. How can I specify a user-defined number to ignore and calculate the running mean without including this user-defined number? I would like to keep this to some sort of numpy function without bringing in pandas.</p>
<python><numpy>
2023-01-10 15:33:36
1
374
Miss_Orchid
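The standard masked-mean trick fits the question above: zero out the excluded value, convolve to get per-window sums, convolve the mask to get per-window counts, then divide. Sketch:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7])
exclude = 2
window = 3

mask = (a != exclude).astype(float)
sums = np.convolve(a * mask, np.ones(window), "valid")   # window sums, excluded -> 0
counts = np.convolve(mask, np.ones(window), "valid")     # surviving values per window

# a window consisting only of excluded values has counts == 0;
# np.divide with `where=` leaves those entries as nan instead of erroring
result = np.divide(sums, counts, out=np.full_like(sums, np.nan), where=counts > 0)

print(result)   # [2.  3.5 4.  5.  6. ]
```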
75,072,229
245,549
How to use hash function in Python3 to transform an arbitrary string into a fixed-length sequence of alphanumeric symbols?
<p>I have a large number of different sentences written in different languages (French, Ukrainian, English and so on). For each sentence I want to generate an audio file with the given sentence being pronounced by a text-to-speech program. Now I need to decide how to name those audio files (one file for each sentence). I thought that it would be elegant if I could infer the file name from the sentence. In other words, if I see the sentence, I should be able to compute (infer / derive) the name of the audio file in which this sentence is spoken.</p> <p>I thought that I could use a hash function for that. I would apply a hash function to the string representing the sentence and, as a result, I would get a string (hash) that I can use as the name of the file.</p> <p>Why not use the sentence itself as the name? Because sentences can be large and I do not want to have very large file names. Moreover, I do not want to have spaces and other punctuation symbols (as well as strange alphabet symbols) in the names of the files. Finally, I expect that the hash will always have the same length, which looks nice.</p> <p>Here is my question: How can I transform an arbitrary unicode string into a sequence of alphanumeric symbols being a hash of the input string in Python3?</p> <p>I also wonder if there is a danger of getting the same hash for different sentences.</p> <p><strong>ADDED:</strong></p> <p>I have just realized that by applying the <code>hash</code> function to the same string I can get different results in different sessions. This is, obviously, something that I would like to avoid.</p>
<python><hash>
2023-01-10 15:31:00
1
132,218
Roman
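`hashlib` answers both parts of the question above: unlike the built-in `hash()` (which is salted per interpreter session via `PYTHONHASHSEED`, hence the differing results), a cryptographic digest is deterministic across runs, fixed-length, and its hex form is purely alphanumeric. Sketch (the function name and extension are illustrative):

```python
import hashlib

def audio_name(sentence, ext=".wav"):
    """Deterministic, fixed-length, filesystem-safe name for a unicode sentence."""
    digest = hashlib.sha256(sentence.encode("utf-8")).hexdigest()  # 64 hex chars
    return digest + ext

name = audio_name("Il Γ©tait une fois")
print(name)
print(name == audio_name("Il Γ©tait une fois"))   # True: stable across sessions
```

On collisions: for SHA-256 the chance is negligible at any realistic corpus size; even truncating to the first 16 hex characters keeps the birthday-bound collision odds tiny for millions of sentences.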
75,072,207
357,313
Why does read_csv give me a timezone warning?
<p>I try reading a CSV file using pandas and get a warning I do not understand:</p> <pre><code>Lib\site-packages\dateutil\parser\_parser.py:1207: UnknownTimezoneWarning: tzname B identified but not understood. Pass `tzinfos` argument in order to correctly return a timezone-aware datetime. In a future version, this will raise an exception. warnings.warn(&quot;tzname {tzname} identified but not understood. &quot; </code></pre> <p>I do nothing special, just <code>pd.read_csv</code> with <code>parse_dates=True</code>. I see no <code>B</code> that looks like a timezone anywhere in my data. What does the warning mean?</p> <p>A minimal reproducible example is the following:</p> <pre><code>import io import pandas as pd pd.read_csv(io.StringIO('x\n1A2B'), index_col=0, parse_dates=True) </code></pre> <p>Why does pandas think <code>1A2B</code> is a datetime?!</p> <p>To solve this, I tried adding <code>dtype={'x': str}</code> to force the column into a string. But I keep getting the warning regardless...</p>
<python><pandas><datetime><timezone><python-dateutil>
2023-01-10 15:29:06
1
8,135
Michel de Ruiter
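The warning in the question above comes from the parsing *attempt*, not the result: with `parse_dates=True` pandas hands the index column to `dateutil`, which reads `1A2B` as a date-like token with unknown timezone name `B` before falling back to the raw string. `dtype` does not apply to the index parse, but turning index parsing off does. Sketch:

```python
import io
import pandas as pd

# parse_dates=False keeps the index as plain strings, so dateutil never runs
df = pd.read_csv(io.StringIO("x\n1A2B"), index_col=0, parse_dates=False)
print(df.index[0])   # 1A2B
```

If other columns genuinely contain dates, list them explicitly (`parse_dates=["colname"]`) or convert after loading with `pd.to_datetime(..., errors="coerce")`.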
75,072,007
1,382,437
Python/Jupyter: scroll scatter-plot with many data points horizontally
<p>I want to visualize time-series-like data with several measurements over time.</p> <p>There are a lot of such measurements in a dataset, on the order of tens to hundreds of thousands.</p> <p>In order to view these in a notebook or HTML page, I would like some efficient method to show a subrange of the whole time range with just a few hundred to a thousand data points, and have controls to scroll left/right, i.e. forward/backward in time through the data.</p> <p>I have tried doing this with Plotly and a range slider, but unfortunately this does not scale to a lot of data at all. Apparently, this approach creates all the graph data in the output javascript, which slows down everything and at some point makes the browser hang or crash.</p> <p>What I would need is an approach that actually only renders the data in the subrange and interacts with the python code via the scrolling widgets to update the view.</p> <p>Ideally, this would work with Plotly as I am using it for all other visualizations, but any other efficient solution would also be welcome.</p>
<python><python-3.x><plotly><scatter-plot><line-plot>
2023-01-10 15:13:45
1
2,137
jpp1
75,071,918
10,687,615
Count number of "date hour minutes" between two datetime points
<p>I have code that will give me the cumulative number of patients in a location by hour between two date/time points. However, I want to tweak this code to show the data by minutes.</p> <pre><code>Datatable: ID ARRIVAL_DATE_TIME DISPOSITION_DATE 1 2021-11-07 08:35:00 2021-11-07 17:58:00 2 2021-11-07 13:16:00 2021-11-08 02:52:00 3 2021-11-07 15:12:00 2021-11-07 21:08:00 Desired output: ID DATE_HOUR_MIN_IN_ED 1 2021-11-07 08:35:00 1 2021-11-07 08:36:00 1 2021-11-07 08:37:00 ..... 1 2021-11-07 17:58:00 ... 2 2021-11-07 13:16:00 2 2021-11-07 13:17:00 2 2021-11-07 13:18:00 </code></pre> <p>I suspect I need to change what I have FREQ equal to but am not sure what to put.</p> <p>Code:</p> <pre><code> TEST['Date']=[pd.date_range(a,b , freq='H') for a , b in zip(TEST.ARRIVAL_DATE_TIME,TEST.DISPOSITION_DATE)] s=TEST[['Date','ID']].explode('Date').reset_index(drop=True) </code></pre> <p><a href="https://stackoverflow.com/questions/69929123/create-date-hour-variable-for-each-hour-between-two-datetime-variables">Create date/hour variable for each hour between two datetime variables</a></p>
<python><pandas>
2023-01-10 15:05:40
1
859
Raven
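For the question above, only the frequency string needs to change: `pd.date_range` accepts a minute alias, so the list comprehension becomes `pd.date_range(a, b, freq='min')` and the explode logic stays the same. Sketch:

```python
import pandas as pd

start = pd.Timestamp("2021-11-07 08:35:00")
end = pd.Timestamp("2021-11-07 08:38:00")

per_minute = pd.date_range(start, end, freq="min")  # "T" is the older alias
print(len(per_minute))   # 4 timestamps: 08:35, 08:36, 08:37, 08:38
```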
75,071,688
17,696,880
Set if conditional inside a lambda function depending on whether a value captured using regex is None or ""
<pre class="lang-py prettyprint-override"><code>import re input_text = 'desde el 2022_-_12_-_10 corrimos juntas hasta el 11Β° nivel de aquella montaΓ±a hasta el 2022_-_12_-_13' #example 1 #input_text = 'desde el 2022_-_11_-_10 18:30 pm hasta el 2022_-_12_-_01 21:00 hs' #example 2 #text in the middle associated with the date range... some_text = r&quot;(?:(?!\.\s*)[^;])*&quot; #but cannot contain &quot;;&quot;, &quot;.\s*&quot; identificate_hours = r&quot;(?:a\s*las|a\s*la|)\s*(?:\(|)\s*(\d{1,2}):(\d{1,2})\s*(?:(am)|(pm)|)\s*(?:\)|)&quot; #does not accept the 'am' or 'pm' being omitted date_format = r&quot;(?:\(|)\s*(\d*)_-_(\d{2})_-_(\d{2})\s*(?:\)|)&quot; some_text_limiters = [r&quot;,\s*hasta&quot;, r&quot;hasta&quot;, r&quot;al&quot;, r&quot;a &quot;] for some_text_limiter in some_text_limiters: identification_re_0 = r&quot;(?:(?&lt;=\s)|^)(?:desde\s*el|desde|del|de\s*el|de\s*la|de |)\s*(?:dΓ­a|dia|fecha|)\s*(?:del|de\s*el|de |)\s*&quot; + date_format + r&quot;\s*(?:&quot; + identificate_hours + r&quot;|)\s*(?:\)|)\s*(&quot; + some_text + r&quot;)\s*&quot; + some_text_limiter + r&quot;\s*(?:el|la|)\s*(?:fecha|d[Γ­i]a|)\s*(?:del|de\s*el|de|)\s*&quot; + date_format + r&quot;\s*(?:&quot; + identificate_hours + r&quot;|)\s*(?:\)|)&quot; input_text = re.sub(identification_re_0, lambda m: if(r&quot;{m[1]}&quot; == None or r&quot;{m[1]}&quot; == &quot; &quot; or r&quot;{m[1]}&quot; == &quot;&quot;) : (f&quot;({m[1]}_-_{m[2]}_-_({m[3]}({m[4] or '00'}:{m[5] or '00'} {m[6] or m[7] or 'am'})_--_{m[9]}_-_{m[10]}_-_({m[11]}({m[12] or '00'}:{m[13] or '00'} {m[14] or m[15] or 'am'})))&quot;).replace(&quot; )&quot;, &quot;)&quot;).replace(&quot;( &quot;, &quot;(&quot;) else : (f&quot;({m[1]}_-_{m[2]}_-_({m[3]}({m[4] or '00'}:{m[5] or '00'} {m[6] or m[7] or 'am'})_--_{m[9]}_-_{m[10]}_-_({m[11]}({m[12] or '00'}:{m[13] or '00'} {m[14] or m[15] or 'am'})))({m[8]})&quot;).replace(&quot; )&quot;, &quot;)&quot;).replace(&quot;( &quot;, &quot;(&quot;), input_text, re.IGNORECASE) 
print(repr(input_text)) </code></pre> <p>I get a <code>SyntaxError: invalid syntax</code> with this lambda <code>lambda m: if(r&quot;{m[8]}&quot; == None or r&quot;{m[8]}&quot; == &quot; &quot; or r&quot;{m[8]}&quot; == &quot;&quot;) : (f&quot;({m[1]}_-_{m[2]}_-_({m[3]}({m[4] or '00'}:{m[5] or '00'} {m[6] or m[7] or 'am'})_--_{m[9]}_-_{m[10]}_-_({m[11]}({m[12] or '00'}:{m[13] or '00'} {m[14] or m[15] or 'am'})))&quot;).replace(&quot; )&quot;, &quot;)&quot;).replace(&quot;( &quot;, &quot;(&quot;) else : (f&quot;({m[1]}_-_{m[2]}_-_({m[3]}({m[4] or '00'}:{m[5] or '00'} {m[6] or m[7] or 'am'})_--_{m[9]}_-_{m[10]}_-_({m[11]}({m[12] or '00'}:{m[13] or '00'} {m[14] or m[15] or 'am'})))({m[8]})&quot;).replace(&quot; )&quot;, &quot;)&quot;).replace(&quot;( &quot;, &quot;(&quot;)</code></p> <p>How should I evaluate the conditions inside the lambda function housed inside the parameter of the <code>re.sub()</code> function?</p> <p><code>lambda m: if(r&quot;{m[8]}&quot; == None or r&quot;{m[8]}&quot; == &quot; &quot; or r&quot;{m[8]}&quot; == &quot;&quot;) : else: </code></p> <p>All the evaluation of the conditional depends on <code>&quot;{m[8]}&quot;</code>, and the outputs should be like the following:</p> <pre><code>#for example 1, where {m[8]} is not None '(2022_-_12_-_(10(00:00 am)_--_2022_-_12_-_(13(00:00 am)))(corrimos juntas hasta el 11Β° nivel de aquella montaΓ±a)' #for example 2, where {m[8]} is None, and remove the last () '(2022_-_11_-_(10(18:30 pm)_--_2022_-_12_-_(01(21:00 am)))()hs' #wrong output '(2022_-_11_-_(10(18:30 pm)_--_2022_-_12_-_(01(21:00 am)))hs' #correct output </code></pre> <hr /> <p>Edit: question updated with the error:</p> <pre class="lang-py prettyprint-override"><code>def sub_rule(m): res_true = f&quot;({m[1]}_-_{m[2]}_-_({m[3]}({m[4] or '00'}:{m[5] or '00'} {m[6] or m[7] or 'am'})_--_{m[9]}_-_{m[10]}_-_({m[11]}({m[12] or '00'}:{m[13] or '00'} {m[14] or m[15] or 'am'})))&quot; # ternary expression is general, not limited to lambdas return ( 
res_true.replace(&quot; )&quot;, &quot;)&quot;).replace(&quot;( &quot;, &quot;(&quot;) if (r&quot;{m[8]}&quot; == None or r&quot;{m[8]}&quot; == &quot; &quot; or r&quot;{m[8]}&quot; == &quot;&quot;) else (res_true + f&quot;({m[8]})&quot;).replace(&quot; )&quot;, &quot;)&quot;).replace(&quot;( &quot;, &quot;(&quot;) ) some_text_limiters = [r&quot;,\s*hasta&quot;, r&quot;hasta&quot;, r&quot;al&quot;, r&quot;a &quot;] for some_text_limiter in some_text_limiters: identification_re_0 = r&quot;(?:(?&lt;=\s)|^)(?:desde\s*el|desde|del|de\s*el|de\s*la|de |)\s*(?:dΓ­a|dia|fecha|)\s*(?:del|de\s*el|de |)\s*&quot; + date_format + r&quot;\s*(?:&quot; + identificate_hours + r&quot;|)\s*(?:\)|)\s*(&quot; + some_text + r&quot;)\s*&quot; + some_text_limiter + r&quot;\s*(?:el|la|)\s*(?:fecha|d[Γ­i]a|)\s*(?:del|de\s*el|de|)\s*&quot; + date_format + r&quot;\s*(?:&quot; + identificate_hours + r&quot;|)\s*(?:\)|)&quot; input_text = re.sub(identification_re_0, sub_rule, input_text, re.IGNORECASE) </code></pre> <p>And the wrong output:</p> <pre><code>'(2022_-_12_-_(10(00:00 am)_--_2022_-_12_-_(13(00:00 am)))()' </code></pre> <p>And the correct output in example 2:</p> <pre><code>'(2022_-_12_-_(10(00:00 am)_--_2022_-_12_-_(13(00:00 am)))' </code></pre> <hr /> <p>EDIT 2: I have managed to perform the conditional, although not within the same lambda:</p> <pre><code>def remove_or_not_parentheses_from_middle_text(m): print(repr(m[8])) if ( str(m[8]) == None or str(m[8]) == &quot; &quot; or str(m[8]) == &quot;&quot;): res_true = (f&quot;({m[1]}_-_{m[2]}_-_({m[3]}({m[4] or '00'}:{m[5] or '00'} {m[6] or m[7] or 'am'})_--_{m[9]}_-_{m[10]}_-_({m[11]}({m[12] or '00'}:{m[13] or '00'} {m[14] or m[15] or 'am'})))&quot;).replace(&quot; )&quot;, &quot;)&quot;).replace(&quot;( &quot;, &quot;(&quot;) else: res_true = (f&quot;({m[1]}_-_{m[2]}_-_({m[3]}({m[4] or '00'}:{m[5] or '00'} {m[6] or m[7] or 'am'})_--_{m[9]}_-_{m[10]}_-_({m[11]}({m[12] or '00'}:{m[13] or '00'} {m[14] or m[15] or 
'am'})))({m[8]})&quot;).replace(&quot; )&quot;, &quot;)&quot;).replace(&quot;( &quot;, &quot;(&quot;) return res_true </code></pre>
<python><python-3.x><regex><lambda><regex-group>
2023-01-10 14:49:03
1
875
Matt095
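For the question above: `if`/`else` *statements* cannot appear in a lambda, but the conditional *expression* `A if cond else B` can; and the test must use the captured group itself (`m[8]`, which is `None` when the group did not participate in the match), not the literal string `r"{m[8]}"`, which is never `None`. A reduced sketch with a deliberately simplified pattern (not the original regex):

```python
import re

text = "desde el 2022_-_11_-_10 hasta el 2022_-_12_-_01 hs"

pattern = r"(\d{4})_-_(\d{2})_-_(\d{2})( hasta)?"
result = re.sub(
    pattern,
    # conditional expression inside the lambda, testing the group itself
    lambda m: (f"({m[1]}-{m[2]}-{m[3]})" if m[4] is None
               else f"({m[1]}-{m[2]}-{m[3]}) hasta"),
    text,
)
print(result)   # desde el (2022-11-10) hasta el (2022-12-01) hs
```

Unrelated but worth fixing in the code above: `re.sub(pat, repl, s, re.IGNORECASE)` passes the flag as the positional `count` argument (silently limiting the number of substitutions); it should be `flags=re.IGNORECASE`.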