Column              Type           Min                  Max
QuestionId          int64          74.8M                79.8M
UserId              int64          56                   29.4M
QuestionTitle       string         length 15            length 150
QuestionBody        string         length 40            length 40.3k
Tags                string         length 8             length 101
CreationDate        string (date)  2022-12-10 09:42:47  2025-11-01 19:08:18
AnswerCount         int64          0                    44
UserExpertiseLevel  int64          301                  888k
UserDisplayName     string         length 3             length 30 (nullable)
74,824,288
9,372,996
Another Python not working in cron question
<p>Directory: <code>path/to/test</code></p> <p><code>tree</code>:</p> <pre><code>. β”œβ”€β”€ Pipfile β”œβ”€β”€ test.py β”œβ”€β”€ test.sh └── test.txt </code></pre> <p><code>test.py</code>:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 import datetime with open(&quot;/path/to/test/test.txt&quot;, &quot;a&quot;) as f: f.write(f&quot;{datetime.datetime.now()}\n&quot;) </code></pre> <p><code>test.sh</code>:</p> <pre><code>#!/bin/zsh pipenv run python /Users/ayubkeshtmand/Downloads/test/test.py </code></pre> <p><code>crontab -l</code>:</p> <pre class="lang-bash prettyprint-override"><code>SHELL=/bin/zsh PATH=/Library/Frameworks/Python.framework/Versions/3.10/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/bin/python3:/Library/Frameworks/Python.framework/Versions/3.10/bin:/opt/homebrew/bin:/opt/homebrew/sbin * * * * * env &gt; /tmp/env.output # works * * * * * touch /path/to/anothertest.txt # works * * * * * python /path/to/test/test.py # doesn't work * * * * * python3 /path/to/test/test.py # doesn't work * * * * * pipenv run python /path/to/test/test.py # doesn't work * * * * * zsh /path/to/test/test.sh # doesn't work * * * * * /Library/Frameworks/Python.framework/Versions/3.10/bin/python3 /path/to/test/test.py # doesn't work * * * * * bash -l -c &quot;python /path/to/test.py&quot; # doesn't work * * * * * bash -l -c &quot;pipenv run python /path/to/test.py&quot; # doesn't work </code></pre> <p><code>/tmp/env.output</code>:</p> <pre><code>SHELL=/bin/zsh PATH=/Library/Frameworks/Python.framework/Versions/3.10/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/bin/python3:/Library/Frameworks/Python.framework/Versions/3.10/bin:/opt/homebrew/bin:/opt/homebrew/sbin LOGNAME=myusername USER=myusername HOME=/path/to SHLVL=0 PWD=/path/to OLDPWD=/path/to _=/usr/bin/env </code></pre> <p><code>env</code> in regular terminal:</p> <pre><code>USER=myusername MallocNanoZone=0 __CFBundleIdentifier=com.microsoft.VSCode COMMAND_MODE=unix2003 LOGNAME=myusername PATH=/Library/Frameworks/Python.framework/Versions/3.10/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/bin/python3:/Library/Frameworks/Python.framework/Versions/3.10/bin:/opt/homebrew/bin:/opt/homebrew/sbin SSH_AUTH_SOCK=/private/tmp/com.apple.launchd.RyAiUT08FA/Listeners SHELL=/bin/zsh HOME=/path/to __CF_USER_TEXT_ENCODING=0x1F6:0:2 TMPDIR=/var/folders/64/wd951s6x3pz3ytpq1wjz5n180000gp/T/ XPC_SERVICE_NAME=0 XPC_FLAGS=0x0 ORIGINAL_XDG_CURRENT_DESKTOP=undefined SHLVL=1 PWD=/path/to OLDPWD=/path/to HOMEBREW_PREFIX=/opt/homebrew HOMEBREW_CELLAR=/opt/homebrew/Cellar HOMEBREW_REPOSITORY=/opt/homebrew MANPATH=/opt/homebrew/share/man:/usr/share/man:/usr/local/share/man:/opt/homebrew/share/man: INFOPATH=/opt/homebrew/share/info:/opt/homebrew/share/info: GITHUB_TOKEN=fwlkehjgowih4t089240btonglkwenkjbalfihoi283 TERM_PROGRAM=vscode TERM_PROGRAM_VERSION=1.74.1 LANG=en_GB.UTF-8 COLORTERM=truecolor GIT_ASKPASS=/Applications/Visual Studio Code.app/Contents/Resources/app/extensions/git/dist/askpass.sh VSCODE_GIT_ASKPASS_NODE=/Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper VSCODE_GIT_ASKPASS_EXTRA_ARGS=--ms-enable-electron-run-as-node VSCODE_GIT_ASKPASS_MAIN=/Applications/Visual Studio Code.app/Contents/Resources/app/extensions/git/dist/askpass-main.js VSCODE_GIT_IPC_HANDLE=/var/folders/64/gowiehg92834hbgobwkgjegp/T/vscode-git-45fnwiuheg284.sock VSCODE_INJECTION=1 
ZDOTDIR=/path/to USER_ZDOTDIR=/path/to TERM=xterm-256color PS1=%n@%m %1~ %# _=/usr/bin/env </code></pre> <p>No logs in <code>/var/log</code> and no <code>mail</code> output. I've searched everywhere online and nothing works!</p>
<python><linux><shell><cron><zsh>
2022-12-16 12:11:18
0
741
AK91
74,824,268
514,149
How to log from multiple processes with different context (spawn/fork)?
<p>I have a Python application that spawns multiple daemon processes. I need to create these processes <a href="https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods" rel="nofollow noreferrer">via the <code>spawn</code> start method</a>. Since I want to log into a single file, I follow the <a href="https://docs.python.org/3/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes" rel="nofollow noreferrer">official docs on multiprocessing logging</a>. Thus I have created a <code>multiprocessing.Queue</code> and a <code>logging.handlers.QueueHandler</code>, as described in the tutorial.</p> <p>Now the problem: I use the <code>spawn</code> start method (under Linux), while the default start method (context) under Linux is <code>fork</code>, and it seems this logging queue does not work correctly in my case. When I log from the spawned process, nothing ever shows up in my log. However, to check my code I tried to log into the same queue from the main process, and then those log messages show up correctly.</p> <p>So in other words: using the queue from the main process works, but from a spawned process the same logging queue does not seem to work any more. Note that I tried both the <code>Queue</code> class directly from <code>multiprocessing</code> and the one from <code>multiprocessing.get_context(&quot;spawn&quot;)</code>. Fun fact: when I directly <code>put()</code> something into the queue from the process, it shows up in the logs. But when I call <code>logger.error()</code>, nothing happens.</p> <p>Any ideas?</p>
<python><logging><multiprocessing><queue><python-logging>
2022-12-16 12:09:17
1
10,479
Matthias
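A minimal sketch of the cookbook pattern adapted to an explicit <code>spawn</code> context for the question above: with <code>spawn</code>, nothing configured at the parent's module scope is inherited, so the child must attach its own <code>QueueHandler</code> inside the worker function while the parent runs a <code>QueueListener</code>. Names here are illustrative, not the asker's actual code.
<pre class="lang-py prettyprint-override"><code>import logging
import logging.handlers
import multiprocessing as mp

def worker(queue):
    # With spawn, logging must be configured inside the child process.
    root = logging.getLogger()
    root.addHandler(logging.handlers.QueueHandler(queue))
    root.setLevel(logging.DEBUG)
    root.error('hello from the spawned process')

if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    queue = ctx.Queue()
    # The listener pulls records off the queue and writes them out.
    listener = logging.handlers.QueueListener(queue, logging.StreamHandler())
    listener.start()
    p = ctx.Process(target=worker, args=(queue,))
    p.start()
    p.join()
    listener.stop()
</code></pre>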
74,824,239
9,597,871
How to initialize dataclass field with a copy of a list?
<p>I'm parsing text files which have the following line structure:</p> <blockquote> <p>name_id, width, [others]</p> </blockquote> <p>After parsing an input, I'm feeding each parsed line into the <code>dataclass</code>:</p> <pre class="lang-py prettyprint-override"><code>@dataclass(frozen=True, eq=True) class Node(): name: str rate: int other: list[str] = field(compare = False) </code></pre> <p>In the middle of the code, I'm removing values from <code>other</code>, which results in removing values from the original parse output. I don't want that. How can I make the <code>dataclass</code> force a copy of the list?</p> <p>In a regular class, as I understand it, the solution would be:</p> <pre class="lang-py prettyprint-override"><code>self.other = other.copy() </code></pre> <p>But doing so means rewriting the whole <code>__init__</code> method, which defeats the purpose of a <code>dataclass</code>.</p> <p>So how do I initialize a <code>dataclass</code> field with a copy of a list?</p> <p>I've tried doing this:</p> <pre class="lang-py prettyprint-override"><code>def __post_init__(self): setattr(self, 'other', self.other.copy()) </code></pre> <p>This yields an error about not being able to assign an attribute of a frozen instance.</p>
<python><python-dataclasses>
2022-12-16 12:06:12
1
348
Kezif Garbian
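A minimal sketch of one common workaround for the question above: a frozen dataclass blocks normal assignment even in <code>__post_init__</code>, but <code>object.__setattr__</code> still works there (assuming copying the incoming list is the desired behaviour).
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, field

@dataclass(frozen=True, eq=True)
class Node:
    name: str
    rate: int
    other: list[str] = field(compare=False)

    def __post_init__(self):
        # Bypass the frozen-instance check to store a copy of the list.
        object.__setattr__(self, 'other', list(self.other))
</code></pre>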
74,824,179
1,134,241
How to incorporate a piece-wise release of a contaminant and solve the diffusion equation over space and time
<p>I have a contamination source that releases for 10 minutes at the centre of a room and then turns off. The 1D diffusion equation with decay over time and space seems reasonable:</p> <p>$dC/dt = D * d^2C/dx^2 - k * C$</p> <p>It's easy to solve if the initial and boundary condition <code>C(50)=1</code> (and looks like the figure below), but how can I get it to have a piece-wise boundary condition?</p> <pre><code>import numpy as np def point_source_pde(C, D, k, dx, dt): &quot;&quot;&quot;Solves the PDE for a continuous point source under diffusion and decay. Parameters: C (ndarray): Concentration at each location x at time t. D (float): Diffusion coefficient. k (float): Decay rate. dx (float): Spatial discretization step. dt (float): Time discretization step. Returns: ndarray: Concentration at each location x at time t + dt. &quot;&quot;&quot; # Number of grid points N = C.shape[0] # Initialize array for updated concentration values C_new = np.zeros(N) # Iterate over all grid points for i in range(N): # Implement finite difference formula for diffusion C_diffusion = D * (C[(i+1)%N] - 2*C[i] + C[(i-1)%N]) / dx**2 # Implement finite difference formula for decay C_decay = -k * C[i] # Update concentration at each grid point C_new[i] = C[i] + dt * (C_diffusion + C_decay) return C_new # Set diffusion coefficient and decay rate D = 0.1 k = 0.01 # Set spatial and temporal discretization steps dx = 0.1 dt = 0.001 # Set initial concentration C = np.zeros(100) C[50] = 1 # Solve PDE for 1000 time steps for t in range(1000): C = point_source_pde(C, D, k, dx, dt) #Plot concentration profile import matplotlib.pyplot as plt plt.plot(C) plt.xlabel('Distance [x]') plt.ylabel('C') plt.show() </code></pre> <p><a href="https://i.sstatic.net/5eszj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5eszj.png" alt="enter image description here" /></a></p> <h1>Attempt 1:</h1> <pre><code> def point_source_pde(C, D, k, dx, dt, t, t_start, t_end): &quot;&quot;&quot;Solves the PDE for a continuous point source under diffusion and decay, with a continuous release of contaminant from t_start to t_end. Parameters: C (ndarray): Concentration at each location x at time t. D (float): Diffusion coefficient. k (float): Decay rate. dx (float): Spatial discretization step. dt (float): Time discretization step. t (int): Current time step. t_start (int): Start time of contaminant release. t_end (int): End time of contaminant release. Returns: ndarray: Concentration at each location x at time t + dt. &quot;&quot;&quot; # Number of grid points N = C.shape[0] # Initialize array for updated concentration values C_new = np.zeros(N) # Iterate over all grid points for i in range(N): # Implement finite difference formula for diffusion C_diffusion = D * (C[(i+1)%N] - 2*C[i] + C[(i-1)%N]) / dx**2 # Implement finite difference formula for decay C_decay = -k * C[i] # Implement continuous release of contaminant at center if t_start &lt;= t &lt; t_end: C_release = 1 else: C_release = 0 # Update concentration at each grid point C_new[i] = C[i] + dt * (C_diffusion + C_decay + C_release) return C_new # Set spatial and temporal discretization steps dx = 0.01 dt = 0.001 t_start=5 t_end=15 # Set initial concentration C = np.zeros(100) # Solve PDE for release period for t in range(t_start, t_end): C = point_source_pde(C, D, k, dx, dt, t, t_start, t_end) # Solve PDE for 1000 time steps after the release period for t in range(t_end, t_end+1000): C = point_source_pde(C, D, k, dx, dt, t, t_start, t_end) </code></pre>
<python>
2022-12-16 12:00:06
0
2,263
HCAI
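A minimal sketch of one way to get the piece-wise release asked about above: instead of baking the source into the initial condition, pass a per-cell source array into each step and switch the centre cell on only while the release is active. This reuses the question's periodic finite-difference update (<code>np.roll</code> is equivalent to the <code>(i+1)%N</code> / <code>(i-1)%N</code> indexing); the values are illustrative.
<pre class="lang-py prettyprint-override"><code>import numpy as np

def step(C, D, k, dx, dt, source):
    # Periodic Laplacian, same boundary treatment as the modular indexing above.
    lap = (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
    return C + dt * (D * lap - k * C + source)

D, k, dx, dt = 0.1, 0.01, 0.1, 0.001
t_on, t_off = 0, 100          # release active for the first 100 steps

C = np.zeros(100)
source = np.zeros(100)
for t in range(1000):
    # Piece-wise source: only the centre cell emits, and only during [t_on, t_off).
    source[50] = 1.0 if t_on &lt;= t &lt; t_off else 0.0
    C = step(C, D, k, dx, dt, source)
</code></pre>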
74,824,071
2,054,399
Parallelize and separate computation from storage in python
<p>I am trying to parallelize the computation and storage of intermediate results. The task can be described as follows:</p> <p>Given a large set of tasks to compute, take a chunk of tasks and parallelize some kind of computation across the available CPU/GPU. The output is relatively large, so that it doesn't fit in memory over all chunks. So once one chunk's computation is done, write the collected results from the processes to a single result file. The real storage mechanism is a bit more complicated and cannot be easily moved to the individual jobs. I really need to collect them and collectively store them.</p> <p>The storage part takes quite some time and I don't know how to decouple the two things. Ideally, the workflow would be: compute -&gt; collect -&gt; store, and while storing, already start computing the next chunk, then compute/store, and so on.</p> <p>Here is some dummy code that only features the parallel computation but not the computation / storing separation. What is the framework concept I need to implement to make this nicer / faster?</p> <pre><code>import numpy as np from multiprocessing import Pool import time def crunch(n): print(f&quot;crunch dummy things for input: {n}&quot;) results = np.random.random(100) time.sleep(np.random.randint(0, 3)) return results def store(results_npz, index): print(f&quot;storing iteration {index}&quot;) np.savetxt(f'test_{str(index).zfill(2)}.out', results_npz) # all tasks all_tasks = list(range(10)) # iterate over tasks in chunks for i in range(5): print(f&quot;start iteration {i}&quot;) input_chunk = [all_tasks.pop(0), all_tasks.pop(0)] with Pool(2) as mp: results = mp.map(crunch, input_chunk) print(&quot;storing results ...&quot;) # ideally, this should start and then the result computation can start again results_all = np.vstack(results) store(results_all, i) </code></pre> <p>Edit: Important information! There can only be one process running that stores the results.</p>
<python><parallel-processing><storage><python-multiprocessing>
2022-12-16 11:51:13
1
1,166
Kam Sen
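A minimal sketch of one way to overlap the two phases in the question above while keeping a single storing worker: compute chunks in the process pool and hand each finished chunk to a one-thread executor, waiting for the previous write before submitting the next. Since storage is presumably I/O-bound, a single background thread is enough and guarantees only one store runs at a time. <code>crunch</code> and <code>store</code> are the functions from the dummy code.
<pre class="lang-py prettyprint-override"><code>import numpy as np
from multiprocessing import Pool
from concurrent.futures import ThreadPoolExecutor

all_tasks = list(range(10))

with Pool(2) as pool, ThreadPoolExecutor(max_workers=1) as writer:
    pending = None
    for i in range(5):
        chunk = [all_tasks.pop(0), all_tasks.pop(0)]
        results = pool.map(crunch, chunk)      # compute in parallel
        if pending is not None:
            pending.result()                   # ensure the previous store finished
        # store in the background while the next chunk is being computed
        pending = writer.submit(store, np.vstack(results), i)
    if pending is not None:
        pending.result()
</code></pre>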
74,823,988
13,285,583
How to run selenium in Docker with Flask?
<p>My goal is to run Selenium with Flask. The problem is that it throws an error.</p> <pre><code>[2022-12-16 11:30:05 +0000] [1] [INFO] Starting gunicorn 20.1.0 [2022-12-16 11:30:05 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2022-12-16 11:30:05 +0000] [1] [INFO] Using worker: sync [2022-12-16 11:30:05 +0000] [6] [INFO] Booting worker with pid: 6 [WDM] - Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6.96M/6.96M [00:00&lt;00:00, 12.6MB/s] [2022-12-16 11:30:07 +0000] [6] [ERROR] Exception in worker process Traceback (most recent call last): File &quot;/usr/local/lib/python3.10/site-packages/gunicorn/arbiter.py&quot;, line 589, in spawn_worker worker.init_process() File &quot;/usr/local/lib/python3.10/site-packages/gunicorn/workers/base.py&quot;, line 134, in init_process self.load_wsgi() File &quot;/usr/local/lib/python3.10/site-packages/gunicorn/workers/base.py&quot;, line 146, in load_wsgi self.wsgi = self.app.wsgi() File &quot;/usr/local/lib/python3.10/site-packages/gunicorn/app/base.py&quot;, line 67, in wsgi self.callable = self.load() File &quot;/usr/local/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py&quot;, line 58, in load return self.load_wsgiapp() File &quot;/usr/local/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py&quot;, line 48, in load_wsgiapp return util.import_app(self.app_uri) File &quot;/usr/local/lib/python3.10/site-packages/gunicorn/util.py&quot;, line 359, in import_app mod = importlib.import_module(module) File &quot;/usr/local/lib/python3.10/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1006, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 688, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 883, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;/app.py&quot;, line 2, in &lt;module&gt; from scrapeutil import get_db File &quot;/scrapeutil.py&quot;, line 14, in &lt;module&gt; driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=chrome_options) File &quot;/usr/local/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py&quot;, line 69, in __init__ super().__init__(DesiredCapabilities.CHROME['browserName'], &quot;goog&quot;, File &quot;/usr/local/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py&quot;, line 89, in __init__ self.service.start() File &quot;/usr/local/lib/python3.10/site-packages/selenium/webdriver/common/service.py&quot;, line 98, in start self.assert_process_still_running() File &quot;/usr/local/lib/python3.10/site-packages/selenium/webdriver/common/service.py&quot;, line 110, in assert_process_still_running raise WebDriverException( selenium.common.exceptions.WebDriverException: Message: Service /root/.wdm/drivers/chromedriver/linux64/108.0.5359/chromedriver unexpectedly exited. Status code was: 255 </code></pre> <p>What I've done:</p> <ol> <li>Cross-check whether gunicorn is the culprit. I made a simple Dockerfile with only gunicorn and a Flask endpoint that returns &quot;Hello World&quot;. It works as expected. 
Therefore, I assume that webdriver is the culprit.</li> </ol> <p>The code that triggers the error:</p> <pre><code>chrome_options = webdriver.ChromeOptions() chrome_options.add_argument(&quot;--headless&quot;) driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=chrome_options) </code></pre> <p>The Dockerfile:</p> <pre><code># Do not use Alpine Linux if you use pandas # FROM python:3.10-alpine3.16 # bullseye https://wiki.debian.org/DebianReleases FROM python:3.10-bullseye COPY requirements.txt requirements.txt RUN pip install -r requirements.txt COPY . / # {module_import}:{app_variable} # equivalent to 'from app import app' CMD [&quot;gunicorn&quot;, &quot;-b&quot;, &quot;:8080&quot;, &quot;app:app&quot;] </code></pre> <p>The requirements.txt:</p> <pre><code>flask==2.2.* pandas==1.5.* selenium==4.5.* webdriver-manager==3.8.* gunicorn==20.1.* </code></pre> <p>This code works on my Mac mini M1, but when I run the Docker image, it throws the error.</p>
<python><django><docker><selenium><selenium-webdriver>
2022-12-16 11:42:55
0
2,173
Jason Rich Darmawan
74,823,970
9,308,542
Selenium - How to send file path to dynamically generated input element (not present in dom)
<p>Here is an example.</p> <p>HTML:</p> <pre><code>&lt;button id=&quot;visible-btn&quot;&gt;visible button&lt;/button&gt; &lt;p&gt;selected file is: &lt;span id=&quot;selected-file&quot;&gt;&lt;/span&gt;&lt;/p&gt; </code></pre> <p>JavaScript (usually hidden deep inside):</p> <pre><code>document.getElementById('visible-btn').addEventListener('click', function(e){ const ele = document.createElement('input'); ele.type = 'file'; ele.accept = '*'; ele.onchange = function (e){ document.getElementById('selected-file').innerText = e.path[0].files[0].name; } ele.click(); }) </code></pre> <p>Since the input element is not present in the DOM, I cannot use the Python code below to send the file path:</p> <pre><code>file_path = '/path/to/file' driver.find_element(By.XPATH, '//input').send_keys(file_path) </code></pre> <p>Any idea how to solve this? Much appreciated.</p> <hr /> <p><strong>edit 1</strong></p> <p>The issue is from Facebook Creator Studio [https://business.facebook.com/creatorstudio/published?content_table=POSTED_POSTS&amp;post_type=FB_SHORTS]</p> <p>The &quot;Create new&quot; button opens the file select dialog.</p> <p><a href="https://i.sstatic.net/yHzQf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yHzQf.png" alt="enter image description here" /></a></p>
<javascript><python><html><reactjs><selenium>
2022-12-16 11:41:08
1
335
Gayan Jeewantha
74,823,966
5,510,540
Two colour heat map in python
<p>I have the following data:</p> <pre><code>my_array = array([[0, 0, 1, 0, 0], [0, 1, 1, 1, 0], [0, 0, 0, 1, 1], [0, 0, 1, 1, 1], [0, 1, 1, 0, 0], [1, 1, 1, 1, 0], [0, 1, 1, 1, 1], [0, 0, 0, 0, 1], [0, 1, 0, 1, 0]]) </code></pre> <p>and</p> <pre><code>df.values = array([246360, 76663, 29045, 11712, 5526, 3930, 3754, 1677, 1328]) </code></pre> <p>I am producing a heat-map as such:</p> <pre><code>import seaborn as sns import matplotlib.pyplot as plt cmap = sns.cm.rocket_r ax = sns.heatmap(my_array, xticklabels=[&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;E&quot;], yticklabels=df.values, cmap = cmap) ax.set(xlabel='Test Type', ylabel='Number', title='patterns of missingness') fig=plt.figure(figsize=(40,30), dpi= 20, facecolor='w', edgecolor='k') fig </code></pre> <p>and I get the following: <a href="https://i.sstatic.net/g0wQq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g0wQq.png" alt="enter image description here" /></a></p> <p>My question is, how do I get rid of the continuous color scale and select only two different colors: white for 0 and green for 1?</p>
<python><heatmap>
2022-12-16 11:40:41
1
1,642
Economist_Ayahuasca
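A minimal sketch for the question above using a two-entry <code>ListedColormap</code>, so 0 maps to white and 1 to green (<code>my_array</code> and <code>df</code> are the objects from the question):
<pre class="lang-py prettyprint-override"><code>import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Two discrete colours: index 0 is white, index 1 is green.
ax = sns.heatmap(my_array, cmap=ListedColormap(['white', 'green']),
                 xticklabels=['A', 'B', 'C', 'D', 'E'], yticklabels=df.values,
                 linewidths=0.5, linecolor='grey', cbar=False)
ax.set(xlabel='Test Type', ylabel='Number', title='patterns of missingness')
plt.show()
</code></pre>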
74,823,788
11,274,362
How to change position of endpoints in drf-yasg Django
<p>I'm trying to customize <code>drf</code> api documentation with <code>drf-yasg</code>. <strong>I want to change order of display endpoints.</strong> For example to change:</p> <pre><code>GET /endpoint/number1/ GET /endpoint/number2/ </code></pre> <p>to</p> <pre><code>GET /endpoint/number2/ GET /endpoint/number1/ </code></pre> <p>in swagger document. How can I do that?</p>
<python><django><django-rest-framework><swagger><drf-yasg>
2022-12-16 11:25:21
1
977
rahnama7m
74,823,783
6,372,859
Building Pipelines
<p>I've recently been trying to set up a Pipeline to produce a Machine Learning model. I have built my own data preprocessing classes and a new class with an optimized sklearn algorithm: Regresor_Model; however when I declare the pipeline steps, for example:</p> <pre><code>from source.preprocessing_functions import Change_Data_Type, Years_Passed, Duplicated_Data from source.preprocessing_functions import One_Hot_Encoding_Train, Standard_Scaling_Train, Reduce_Memory_Usage from source.machine_learning_toolbox import Regresor_Model from sklearn.pipeline import Pipeline # Loading the data # ================ data = lp.load_data(config.DATA,config.ID_VAR) X, y = data.drop(config.TARGET,axis=1), data[config.TARGET] X = X[config.PREDICTORS] # Train-Test Split # ================ X_train, X_tests, y_train, y_tests = train_test_split(X, y, test_size=0.3, random_state=config.SEED) # Defining the Pipeline steps # =========================== steps = [('to_float', Change_Data_Type('Kms_Driven','Float')), ('years_passed', Years_Passed('Year',config.YEAR)), ('duplicates', Duplicated_Data()), ('one_hot_train', One_Hot_Encoding_Train(config.CATEGORICAL,drop_first=False)), ('scale_train', Standard_Scaling_Train(config.NUMERICAL)), ('reduce_memory', Reduce_Memory_Usage()), ('model', Regresor_Model(config.BOUNDS))] # Producing the pipeline # ====================== pipeline = Pipeline(steps) pipeline.fit(X_train, y_train) </code></pre> <p>and start running the script, I get an error message that it cannot find the module <code>sklearn.preprocessing_functions</code></p> <p><code>preprocessing_functions</code> and <code>machine_learning_toolbox</code> are two scripts where I have stored the preprocessing classes and the optimized machine learning algorithm. In the literature, I have seen they use sklearn.Pipeline with pure sklearn libraries such as <code>estimators = [('reduce_dim', PCA()), ('clf', SVC())]</code></p> <p>Is there a workaround to create a pipeline using our own preprocessing tools and thus build the pipeline using sklearn?</p>
<python><machine-learning><scikit-learn><scikit-learn-pipeline>
2022-12-16 11:25:10
0
583
Ernesto Lopez Fune
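For the question above: custom steps are fully supported by sklearn's <code>Pipeline</code>, as long as each class implements <code>fit</code>/<code>transform</code> and the module defining it stays importable under the same name wherever the pipeline is used. A minimal sketch of such a transformer, with a hypothetical <code>ChangeDataType</code> standing in for the asker's classes:
<pre class="lang-py prettyprint-override"><code>from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline

class ChangeDataType(BaseEstimator, TransformerMixin):
    '''Casts one column to a given dtype; illustrative stand-in.'''
    def __init__(self, column, dtype):
        self.column = column
        self.dtype = dtype

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        X = X.copy()
        X[self.column] = X[self.column].astype(self.dtype)
        return X

# pipeline = Pipeline([('to_float', ChangeDataType('Kms_Driven', 'float')), ...])
</code></pre>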
74,823,596
298,847
Type a function that takes a tuple and returns a tuple of the same length with each element optional?
<p>I want to write a function that takes a tuple and returns a tuple of the same size but with each element wrapped in optional. Pseudo code:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar T = TypeVar(&quot;T&quot;, bound=tuple[dict[str, str], ...]) def f(tup: T) -&gt; Map[Optional, T]: # Dummy implementation return [None if ... else el for el in tup] </code></pre> <p>Here <code>Map</code> is a made-up, type-level function that wraps each returned element's type in <code>Optional</code>.</p> <p>Concretely, if the input type was e.g. <code>tuple[dict[str, str], dict[str, str]]</code> I want the return type to be <code>tuple[Optional[dict[str, str]], Optional[dict[str, str]]]</code>.</p>
<python><python-typing>
2022-12-16 11:08:20
2
9,059
tibbe
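For the question above: current Python typing has no general type-level map, so the return type cannot be derived from <code>T</code> for arbitrary tuples; a common workaround is a set of <code>@overload</code>s covering the arities actually used. A minimal sketch (the None-ing condition is a placeholder):
<pre class="lang-py prettyprint-override"><code>from typing import Optional, TypeVar, overload

T1 = TypeVar('T1')
T2 = TypeVar('T2')

@overload
def f(tup: tuple[T1]) -&gt; tuple[Optional[T1]]: ...
@overload
def f(tup: tuple[T1, T2]) -&gt; tuple[Optional[T1], Optional[T2]]: ...

def f(tup):
    # Placeholder predicate: replace with the real None-ing condition.
    return tuple(el if el else None for el in tup)
</code></pre>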
74,823,525
5,618,856
Match case against data type makes "remaining patterns unreachable"
<p>Why is</p> <pre><code>match type('a'): case list: print('a list') case str: print('a string') </code></pre> <p>wrong, giving the error &quot;SyntaxError: name capture 'list' makes remaining patterns unreachable&quot;,</p> <p>while <code>type('a')==str</code> is <code>True</code>?</p>
<python><python-3.x>
2022-12-16 11:02:03
0
603
Fred
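For the question above: in a <code>match</code> statement a bare name like <code>list</code> or <code>str</code> is a capture pattern (it binds the subject to that name and always matches), not a type check, which is why the first case swallows everything. Class patterns use call syntax, and then you match the value itself rather than its type. A minimal sketch:
<pre class="lang-py prettyprint-override"><code>match 'a':
    case list():
        print('a list')
    case str():
        print('a string')   # prints: a string
</code></pre>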
74,823,224
12,858,691
Pandas scale column(s) in groupby
<p>I want to scale the column &quot;amount&quot; between <code>[0,1]</code> grouped by the two key columns. Consider the following example:</p> <pre><code> key1 key2 amount 0 a 1 10 1 a 1 20 2 a 1 30 3 a 2 10 4 a 2 40 5 a 2 100 6 b 1 30 7 b 1 150 8 b 1 150 9 b 2 0 10 b 2 100 11 b 2 1000 </code></pre> <p>should turn into</p> <pre><code> key1 key2 amount amount_scaled 0 a 1 10 0 1 a 1 20 0.5 2 a 1 30 1 3 a 2 10 0 4 a 2 40 0.25 5 a 2 100 1 6 b 1 30 0 7 b 1 150 1 8 b 1 150 1 9 b 2 0 0 10 b 2 100 0.1 11 b 2 1000 1 </code></pre> <p>I tried</p> <pre><code>from sklearn.preprocessing import MinMaxScaler df = pd.DataFrame({&quot;key1&quot;:['a','a','a','a','a','a','b','b','b','b','b','b'],&quot;key2&quot;:[1,1,1,2,2,2,1,1,1,2,2,2],&quot;amount&quot;:[10,20,30,10,40,100,30,150,150,0,100,1000]}) df.groupby(['key1','key2'])['amount'].apply(lambda x: MinMaxScaler().fit_transform(x)) </code></pre> <p>without success. Any suggestions?</p>
<python><pandas><scikit-learn>
2022-12-16 10:36:12
1
611
Viktor
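For the question above: <code>MinMaxScaler</code> expects a 2-D array, which is why applying it to each 1-D group fails. A minimal sketch that skips sklearn entirely and scales each group with <code>transform</code> (<code>df</code> is the frame from the question):
<pre class="lang-py prettyprint-override"><code># Min-max scale 'amount' within each (key1, key2) group.
df['amount_scaled'] = (df.groupby(['key1', 'key2'])['amount']
                         .transform(lambda x: (x - x.min()) / (x.max() - x.min())))
</code></pre>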
74,823,091
10,574,250
python script read in from subscript not finding file in directory even though the os.listdir sees it and subscript works fine on its own
<p>I am importing a Python subscript into a Python script. The subscript uses an Excel file in its working directory. The structure looks like this</p> <pre><code>main_folder -&gt; main.py subfolder -&gt; data.xlsx submain.py </code></pre> <p>My main script calls the subscript as such:</p> <pre><code>from subfolder.submain import df </code></pre> <p>My <code>submain.py</code> script looks as such:</p> <pre><code>df = pd.read_excel('data.xlsx') </code></pre> <p>However on my <code>main.py</code> script I get the following error:</p> <pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'data.xlsx' </code></pre> <p>This is strange because not only does <code>submain.py</code> run fine by itself, a quick <code>os.listdir()</code> shows the file to be there:</p> <pre><code>for f in os.listdir('subfolder'): print(f) data.xlsx submain.py </code></pre> <p>Does anyone understand this behaviour? Many thanks</p>
<python><pandas><operating-system>
2022-12-16 10:23:23
1
1,555
geds133
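For the question above: relative paths are resolved against the current working directory of the process running <code>main.py</code>, not against the importing module's folder. A minimal sketch that anchors the path to <code>submain.py</code>'s own location:
<pre class="lang-py prettyprint-override"><code># submain.py
from pathlib import Path
import pandas as pd

HERE = Path(__file__).resolve().parent      # .../main_folder/subfolder
df = pd.read_excel(HERE / 'data.xlsx')      # works regardless of the cwd
</code></pre>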
74,823,031
7,334,203
Merge and manipulate xslt file using python lxml
<p>I'm a newbie in Python and I have a difficult task to cope with. Suppose we have two xslt files, the first one is like this:</p> <pre><code>&lt;xsl:stylesheet version=&quot;1.0&quot;&gt; &lt;xsl:function name=&quot;grp:MapToCD538A_var107&quot;&gt; &lt;xsl:param name=&quot;var106_cur&quot; as=&quot;node()&quot;/&gt; &lt;/xsl:function&gt; &lt;xsl:template match=&quot;/&quot;&gt; &lt;CD123&gt; &lt;xsl:attribute name=&quot;xsi:schemaLocation&quot; namespace=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;/&gt; &lt;xsl:for-each select=&quot;(./ns0:CD538C)[fn:not(fn:exists(*:ExportOperation[fn:namespace-uri() eq '']/*:requestRejectionReasonCode[fn:namespace-uri() eq '']))]&quot;&gt; &lt;SynIde xmlns=&quot;&quot;&gt;UN1OC&lt;/SynIde&gt; &lt;SynVer xmlns=&quot;&quot;&gt; &lt;xsl:sequence select=&quot;xs:string(xs:integer('3'))&quot;/&gt; &lt;/SynVer&gt; &lt;/xsl:for-each&gt; &lt;/CD123&gt; &lt;/xsl:template&gt; &lt;/xsl:stylesheet&gt; </code></pre> <p>and the second one is like this:</p> <pre><code>&lt;xsl:stylesheet version=&quot;1.0&quot;&gt; &lt;xsl:output method=&quot;xml&quot; encoding=&quot;UTF-8&quot; byte-order-mark=&quot;no&quot; indent=&quot;yes&quot;/&gt; &lt;xsl:template match=&quot;/&quot;&gt; &lt;CD96A&gt; &lt;xsl:attribute name=&quot;xsi:schemaLocation&quot; namespace=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;/&gt; &lt;xsl:for-each select=&quot;(./ns0:CD538C)[fn:exists(*:ExportOperation[fn:namespace-uri() eq '']/*:requestRejectionReasonCode[fn:namespace-uri() eq ''])]&quot;&gt; &lt;SynIdeMES1 xmlns=&quot;&quot;&gt;UNOC&lt;/SynIdeMES1&gt; &lt;SynVerNumMES2 xmlns=&quot;&quot;&gt; &lt;xsl:sequence select=&quot;xs:string(xs:integer('3'))&quot;/&gt; &lt;/SynVerNumMES2&gt; &lt;/xsl:for-each&gt; &lt;/CD96A&gt; &lt;/xsl:template&gt; &lt;/xsl:stylesheet&gt; </code></pre> <p>Now comes the tricky part: the merge process. 
I want to somehow merge these two files into one with the following output:</p> <pre><code>&lt;xsl:stylesheet version=&quot;1.0&quot;&gt; &lt;xsl:function name=&quot;grp:MapToCD538A_var107&quot;&gt; &lt;xsl:param name=&quot;var106_cur&quot; as=&quot;node()&quot;/&gt; &lt;/xsl:function&gt; &lt;xsl:template match=&quot;/&quot;&gt; &lt;xsl:for-each select=&quot;(./ns0:CD538C)[fn:not(fn:exists(*:ExportOperation[fn:namespace-uri() eq '']/*:requestRejectionReasonCode[fn:namespace-uri() eq '']))]&quot;&gt; &lt;CD123&gt; &lt;xsl:attribute name=&quot;xsi:schemaLocation&quot; namespace=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;/&gt; &lt;SynIde xmlns=&quot;&quot;&gt;UN1OC&lt;/SynIde&gt; &lt;SynVer xmlns=&quot;&quot;&gt; &lt;xsl:sequence select=&quot;xs:string(xs:integer('3'))&quot;/&gt; &lt;/SynVer&gt; &lt;/CD123&gt; &lt;/xsl:for-each&gt; &lt;xsl:for-each select=&quot;(./ns0:CD538C)[fn:exists(*:ExportOperation[fn:namespace-uri() eq '']/*:requestRejectionReasonCode[fn:namespace-uri() eq ''])]&quot;&gt; &lt;CD96A&gt; &lt;xsl:attribute name=&quot;xsi:schemaLocation&quot; namespace=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;/&gt; &lt;SynIdeMES1 xmlns=&quot;&quot;&gt;UNOC&lt;/SynIdeMES1&gt; &lt;SynVerNumMES2 xmlns=&quot;&quot;&gt; &lt;xsl:sequence select=&quot;xs:string(xs:integer('3'))&quot;/&gt; &lt;/SynVerNumMES2&gt; &lt;/CD96A&gt; &lt;/xsl:for-each&gt; &lt;/xsl:template&gt; &lt;/xsl:stylesheet&gt; </code></pre> <p>As you can see, there is one <code>&lt;xsl:template match=&quot;/&quot;&gt;</code>; after it comes the first for-each, with its node and content nested inside, and after the first for-each comes the second for-each of the second message, which contains its own node and content.</p> <p>I have tried using the lxml library since it's recommended for XML manipulation</p> <pre><code># Parse the first XSLT file xslt_doc_1 = etree.parse(&quot;first file.xslt&quot;) # Find the root element of the first XSLT file root_1 = xslt_doc_1.getroot() # Parse the second XSLT file xslt_doc_2 = etree.parse(&quot;second file.xslt&quot;) # Find the root element of the second XSLT file root_2 = xslt_doc_2.getroot() # Add the root element of the second XSLT file as a child of the root element of the first XSLT file root_1.extend(root_2) # Write the merged XSLT file to a new file with open(&quot;merged_xslt_file.xslt&quot;, &quot;w&quot;) as f: f.write(etree.tostring(xslt_doc_1, pretty_print=True).decode()) </code></pre> <p>and tried to manipulate the output file but with no success. Do you know how to achieve the desired output?</p>
<python><xml><xslt><lxml>
2022-12-16 10:17:34
2
7,486
RamAlx
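A hedged lxml sketch of the merge step for the question above, assuming the real files declare the XSL namespace (the snippets as shown omit <code>xmlns:xsl</code>, which lxml would reject). It appends the second file's template children into the first file's template; wrapping each CD element inside its for-each would be a further rearrangement along the same lines.
<pre class="lang-py prettyprint-override"><code>from lxml import etree

NS = {'xsl': 'http://www.w3.org/1999/XSL/Transform'}

doc1 = etree.parse('first_file.xslt')
doc2 = etree.parse('second_file.xslt')

tmpl1 = doc1.getroot().find('xsl:template', namespaces=NS)
tmpl2 = doc2.getroot().find('xsl:template', namespaces=NS)

# Move every child of the second template into the first one.
for child in list(tmpl2):
    tmpl1.append(child)

doc1.write('merged_xslt_file.xslt', pretty_print=True, encoding='UTF-8')
</code></pre>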
74,822,950
10,097,229
Save graph to blob in Azure Function
<p>I have Python code that plots a graph at the end. The issue is that when I call this code from an Azure Function, it is not able to plot the graph. So instead I tried saving the graph to Azure Blob Storage. But there's no option to upload the graph to blob storage without saving it locally from VS Code, and Azure Functions doesn't allow saving the graph locally.</p> <p>Is there any way I can upload the graph to Azure Blob Storage?</p>
<python><azure><azure-functions>
2022-12-16 10:09:07
1
1,137
PeakyBlinder
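A minimal sketch for the question above, assuming matplotlib: render the figure into an in-memory buffer and upload that, so nothing is written to disk. The container and blob names are hypothetical, and the connection string must be supplied.
<pre class="lang-py prettyprint-override"><code>import io
import matplotlib
matplotlib.use('Agg')           # headless backend, no display needed
import matplotlib.pyplot as plt
from azure.storage.blob import BlobClient

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 5, 6])

buf = io.BytesIO()
fig.savefig(buf, format='png')  # render straight into memory
buf.seek(0)

# Hypothetical container/blob names; supply your own connection string.
blob = BlobClient.from_connection_string(
    conn_str='...',
    container_name='plots',
    blob_name='graph.png',
)
blob.upload_blob(buf, overwrite=True)
</code></pre>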
74,822,700
6,053,274
How to optimize the performance of a `pandas` `DataFrame` using `numexpr` and `Dask`?
<p>I'm working with a large <code>pandas</code> <code>DataFrame</code> and I'm trying to optimize its performance using the <code>numexpr</code> and <code>Dask</code> libraries. I've tried using the <code>numexpr.evaluate()</code> function to perform element-wise operations on the DataFrame, but it's still taking a long time to run.</p> <p>Here's an example of the code I'm using:</p> <pre><code>import numexpr import pandas as pd df = pd.read_csv('large_data.csv') # Perform element-wise operation using numexpr df['new_column'] = numexpr.evaluate('df.column1 + df.column2') </code></pre> <p>Is there a way to further optimize the performance of this code using <code>Dask</code> or some other method?</p>
<python><pandas><dask-dataframe><numexpr>
2022-12-16 09:46:44
0
495
Gil
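For what it's worth on the question above: numexpr cannot see pandas attribute access like <code>df.column1</code> inside the expression string; the usual routes are <code>DataFrame.eval</code> (which uses the numexpr engine when it is installed) or a Dask dataframe for partitioned, parallel work. A minimal sketch of both:
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import dask.dataframe as dd

# pandas + numexpr: eval parses column names directly
df = pd.read_csv('large_data.csv')
df['new_column'] = df.eval('column1 + column2')

# Dask: partitioned and lazy; computed in parallel on demand
ddf = dd.read_csv('large_data.csv')
ddf['new_column'] = ddf['column1'] + ddf['column2']
result = ddf.compute()   # materialise when needed
</code></pre>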
74,822,689
80,353
How do I enforce a project to always wrap certain tags with {% raw %} and {% endraw %}?
<p>I am creating template starters that will work with cookiecutter, a Python library.</p> <p>So inside some of the subfolders of a project, for files of a particular type, usually <code>.html</code>, I need any tags that look like this</p> <p><code>{% if blah blah %}</code></p> <p>to be wrapped like this</p> <p><code>{% raw %}{% if blah blah %}{% endraw %}</code></p> <p>The exact tags are uncertain.</p> <p>They may be</p> <pre><code>{% load blah %} </code></pre> <p>or</p> <pre><code>{% include blah %} </code></pre> <p>or even an image tag</p> <pre><code>&lt;img class=&quot;mx-auto h-12 w-auto&quot; src=&quot;{% raw %}{% static 'assets/v-sq.svg' %}{% endraw %}&quot; &gt; </code></pre> <p>I am unsure how to enforce or autoformat this.</p> <p>Can you advise?</p> <h2>Context about escaping special characters in Cookiecutter</h2> <p><a href="https://cookiecutter.readthedocs.io/en/1.7.2/troubleshooting.html#i-m-having-trouble-generating-jinja-templates-from-jinja-templates" rel="nofollow noreferrer">https://cookiecutter.readthedocs.io/en/1.7.2/troubleshooting.html#i-m-having-trouble-generating-jinja-templates-from-jinja-templates</a></p>
<python><templating><cookiecutter>
2022-12-16 09:45:18
1
10,902
Kim Stacks
74,822,568
80,353
How to programmatically create an SVG with different colors in Python/Django?
<p>In Telegram, when you have yet to upload your picture, they will programmatically generate a logo based on your initials, like the following:</p> <p><a href="https://i.sstatic.net/2jSIM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2jSIM.png" alt="enter image description here" /></a></p> <p>I want something similar but as SVG and ICO for a Django web app.</p> <p>I only need 26 letters from A to Z. And I need to be able to change the fill color based on <a href="https://tailwindcss.com/docs/customizing-colors" rel="nofollow noreferrer">tailwindcss colors</a></p> <p>The letters themselves would be white or black.</p> <p>Is there a programmatic way to generate these SVG and ICO files to be used in a Django web app? I could not find any library that does this.</p> <p>Basically, I only need it as a square.</p>
<python><django><svg><tailwind-css>
2022-12-16 09:33:18
0
10,902
Kim Stacks
74,822,543
7,339,624
Colab: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory warn(f"Failed to load image Python extension: {e}")
<p>I'm trying to use the Python package <code>aitextgen</code> in Google Colab so I can fine-tune GPT.</p> <p>First, when I installed the latest version of this package, I had this error when importing it:</p> <pre><code>Unable to import name '_TPU_AVAILABLE' from 'pytorch_lightning.utilities' </code></pre> <p>With the help of the solutions given in <a href="https://stackoverflow.com/questions/74319873/unable-to-import-name-tpu-available-from-pytorch-lightning-utilities">this question</a> I could get past this error by downgrading my packages like this:</p> <pre><code>!pip3 install -q aitextgen==0.5.2 !pip3 install -q torchtext==0.10.0 !pip3 install -q torchmetrics==0.6.0 !pip3 install -q pytorch-lightning==1.4.0rc0 </code></pre> <p>But now I'm facing this error when importing the <code>aitextgen</code> package, and Colab crashes!</p> <pre><code>/usr/local/lib/python3.8/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory warn(f&quot;Failed to load image Python extension: {e}&quot;) </code></pre> <p>Keep in mind that the error happens on importing the package; there is no bug in my code. To be clear, I get this error when I just import <code>aitextgen</code> like this:</p> <pre><code>import aitextgen </code></pre> <p>How can I deal with this error?</p>
<python><import><google-colaboratory><huggingface-transformers><gpt-2>
2022-12-16 09:31:33
1
4,337
Peyman
74,822,504
8,462,250
Is there a way to do parallelization in an Airflow task?
<p>My problem: I am trying to retrieve data from Google Ads. Sometimes the API call hangs indefinitely.</p> <p>My idea: Create a separate process that monitors if the process doing the API call has returned data or has reached the timeout limit.</p> <p>What I have tried:</p> <ul> <li>I looked into doing Threading. This doesn't work because I cannot terminate a thread like I can a process. I have read that I can use flags to make it stop itself but that won't work because the API call hangs and no further Python code is executed.</li> <li>I looked into multiprocessing. This works fine in isolation but you cannot spawn a new process from an Airflow task.</li> <li>I looked into using signals, as mentioned <a href="https://stackoverflow.com/questions/25027122/break-the-function-after-certain-time">here</a>. This also works well in isolation but the alarm signal is sent to the main process which is Airflow and not the process doing my task and my error catching only happens in my own task.</li> <li>I tried looking into how I can launch a task in a new Python interpreter instance but couldn't find how. There is a way to make all tasks start with a new Python interpreter instance apparently, by using the <a href="https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#execute-tasks-new-python-interpreter" rel="nofollow noreferrer">execute_tasks_new_python_interpreter</a> setting in the airflow.cfg file but when I switch that to true my task gets stuck at scheduling and never proceeds to start running.</li> <li>I tried using the PythonVirtualenvOperator operator but after installing virtualenv by the method stated in the <a href="https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html#pythonvirtualenvoperator" rel="nofollow noreferrer">documentation</a>, Airflow says it can't find the package. But it's there because if I activate the virtual environment of Airflow itself, I can import it.</li> </ul> <p>Can anyone help me?</p>
<python><airflow>
2022-12-16 09:27:06
1
452
CristianCapsuna
74,822,499
10,357,604
Import Error when importing requests in Python 3
<p>I'm trying to run a python program where I run</p> <pre><code>from pycocotools.coco import COCO import requests </code></pre> <p>I use Python 3.11.1. It throws ImportError</p> <pre><code>C:\path&gt;python utils.py --img_out obj/ --label_out obj/ Traceback (most recent call last): File &quot;C:\path\utils.py&quot;, line 6, in &lt;module&gt; import requests File &quot;C:\Users\windows\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\__init__.py&quot;, line 43, in &lt;module&gt; import urllib3 File &quot;C:\Users\windows\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\__init__.py&quot;, line 8, in &lt;module&gt; from .connectionpool import ( File &quot;C:\Users\windows\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py&quot;, line 29, in &lt;module&gt; from .connection import ( File &quot;C:\Users\windows\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py&quot;, line 39, in &lt;module&gt; from .util.ssl_ import ( File &quot;C:\Users\windows\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\__init__.py&quot;, line 3, in &lt;module&gt; from .connection import is_connection_dropped File &quot;C:\Users\windows\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\connection.py&quot;, line 3, in &lt;module&gt; from .wait import wait_for_read File &quot;C:\Users\windows\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\wait.py&quot;, line 1, in &lt;module&gt; from .selectors import ( File &quot;C:\Users\windows\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\selectors.py&quot;, line 14, in &lt;module&gt; from collections import namedtuple, Mapping ImportError: cannot import name 'Mapping' from 'collections' (C:\Users\windows\AppData\Local\Programs\Python\Python311\Lib\collections\__init__.py) </code></pre> <p>I installed the requests module via pip</p>
<python><python-3.x><python-requests><importerror><urllib3>
2022-12-16 09:26:54
1
1,355
thestruggleisreal
74,822,475
4,537,160
Iterate through nested dict, check bool values to get indexes of array
<p>I have a nested dict with boolean values, like:</p> <pre><code>assignments_dict = {&quot;first&quot;: {'0': True, '1': True}, &quot;second&quot;: {'0': True, '1': False}, } </code></pre> <p>and an array, with a number of elements equal to the number of True values in the assignments_dict:</p> <pre><code>results_array = [10, 11, 12] </code></pre> <p>and, finally, a dict for results structured this way:</p> <pre><code>results_dict = {&quot;first&quot;: {'0': {'output': None}, '1': {'output': None}}, &quot;second&quot;: {'0': {'output': None}, '1': {'output': None}}, } </code></pre> <p>I need to go through the fields in assignments_dict, check if they are True, and if they are, take the next element of results_array and substitute it into the corresponding field in results_dict. So, my final output should be:</p> <pre><code>results_dict = {'first': {'0': {'output': 10}, '1': {'output': 11}}, 'second': {'0': {'output': 12}, '1': {'output': None}}} </code></pre> <p>I did it in a very simple way:</p> <pre><code># counter used to track position in results_array counter = 0 for outer_key in assignments_dict: for inner_key in assignments_dict[outer_key]: # check if every field in assignments_dict is True/false if assignments_dict[outer_key][inner_key]: results_dict[outer_key][inner_key][&quot;output&quot;] = results_array[counter] # move on to next element in results_array counter += 1 </code></pre> <p>but I was wondering if there's a more pythonic way to solve this.</p>
<python><dictionary>
2022-12-16 09:24:39
2
1,630
Carlo
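A slightly more Pythonic sketch of the same loop from the question above: an iterator over <code>results_array</code> replaces the manual counter, and <code>items()</code> avoids the double indexing.
<pre class="lang-py prettyprint-override"><code>values = iter(results_array)
for outer_key, inner in assignments_dict.items():
    for inner_key, assigned in inner.items():
        if assigned:
            # next() advances only when a True slot consumes a value
            results_dict[outer_key][inner_key]['output'] = next(values)
</code></pre>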
74,822,402
6,119,375
NonConcreteBooleanIndexError fix with jax.numpy.where() in Python
<p>I am running a simple <a href="https://github.com/google/lightweight_mmm/blob/main/examples/simple_end_to_end_demo.ipynb" rel="nofollow noreferrer">demo example</a> of the lightweightMMM. I have used the lambda function for scaling, as follows:</p> <pre><code>media_data_train_a = media_data[:split_point, :] lambda_operation = lambda x: jnp.mean(x[x &gt; 0]) media_scaler = preprocessing.CustomScaler(divide_operation=lambda_operation) media_data_train = np.array(media_scaler.fit_transform(media_data_train_a)) </code></pre> <p>I have tried the same code with the example data, and there it works just fine. However, when I tried it on my own dataset, I get the following error when executing the last line:</p> <p><code>NonConcreteBooleanIndexError: Array boolean indices must be concrete; got ShapedArray(bool[41]) See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.NonConcreteBooleanIndexError</code></p> <p>Any ideas what could be the reason my data returns this error?</p> <p>Both datasets are numpy arrays. I also read the suggested <a href="https://jax.readthedocs.io/en/latest/errors.html#jax.errors.NonConcreteBooleanIndexError" rel="nofollow noreferrer">JAX doc</a> and believe the error is due to using a non-static array; a common solution to this problem is using the three-argument version of <code>jax.numpy.where()</code>.</p> <p>How can I implement the logic of the lambda function in terms of the JIT-compatible three-argument version of <code>jax.numpy.where()</code>?</p>
<python><numpy><where-clause><jax>
2022-12-16 09:19:07
1
1,890
Nneka
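For the question above: under <code>jit</code>, boolean-mask indexing like <code>x[x &gt; 0]</code> produces an array whose shape depends on the data, which JAX forbids; the standard fix is to keep the shape static and mask with the three-argument <code>where</code>. A minimal sketch of the mean-of-positives logic in that style:
<pre class="lang-py prettyprint-override"><code>import jax.numpy as jnp

def mean_of_positive(x):
    mask = x &gt; 0
    total = jnp.sum(jnp.where(mask, x, 0.0))   # zeros instead of dropped entries
    count = jnp.sum(mask)                      # how many entries were positive
    return total / count

# divide_operation=mean_of_positive  instead of the boolean-indexing lambda
</code></pre>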
74,822,219
9,510,800
Cross join within a dataframe given a time range pandas
<p>I have a dataframe as below:</p> <pre><code>ID GROUP DATE_MIN DATE_MAX 1 L1 02/12/2022 6:30AM 02/12/2022 6:35AM 2 L1 02/12/2022 6:33AM 02/12/2022 6:40AM 3 L1 02/12/2022 6:37AM 02/12/2022 6:40AM 4 L2 02/12/2022 7:30AM 02/12/2022 7:35AM 5 L2 02/12/2022 7:33AM 02/12/2022 7:35AM 6 L2 02/12/2022 7:34AM 02/12/2022 7:38AM </code></pre> <p>I want to count, for each row, how many rows in the same group (GROUP column) have overlapping time ranges (DATE_MIN, DATE_MAX), including the row itself. The expected output is:</p> <pre><code>ID GROUP DATE_MIN DATE_MAX NumberOfRows 1 L1 02/12/2022 6:30AM 02/12/2022 6:35AM 2 &lt;&lt;because of ID 1 and 2&gt;&gt; 2 L1 02/12/2022 6:33AM 02/12/2022 6:40AM 3 &lt;&lt;because of ID 1, 2 and 3&gt;&gt; 3 L1 02/12/2022 6:37AM 02/12/2022 6:40AM 2 &lt;&lt;because of ID 3 and 2&gt;&gt; 4 L2 02/12/2022 7:30AM 02/12/2022 7:35AM 1 &lt;&lt; because of 4 only&gt;&gt; 5 L2 02/12/2022 7:36AM 02/12/2022 7:40AM 2 &lt;&lt;because of 5 and 6&gt;&gt; 6 L2 02/12/2022 7:37AM 02/12/2022 7:40AM 2 &lt;&lt;because of 5 and 6&gt;&gt; </code></pre>
<python><pandas><numpy>
2022-12-16 09:01:40
2
874
python_interest
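A minimal sketch for the question above using a self-merge on GROUP and the standard interval-overlap test (each range starts before the other ends), counting matches per ID; assumes the date columns are parsed to datetimes first.
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df['DATE_MIN'] = pd.to_datetime(df['DATE_MIN'])
df['DATE_MAX'] = pd.to_datetime(df['DATE_MAX'])

# Cross join within each GROUP
m = df.merge(df, on='GROUP', suffixes=('', '_other'))

# Two ranges overlap iff each one starts before the other ends
overlap = (m['DATE_MIN_other'] &lt;= m['DATE_MAX']) &amp; (m['DATE_MAX_other'] &gt;= m['DATE_MIN'])

df['NumberOfRows'] = df['ID'].map(m[overlap].groupby('ID').size())
</code></pre>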
74,822,216
16,981,638
remove outliers from a dataframe by while looping
<p>I have a dataframe and need to remove the outliers from it, then recheck the new dataframe again; if I find more outliers, remove them too, and so on... The problem is that I can only remove the first outlier: for the later outliers, I can't combine them with the previous outliers in one dataframe so that they are all removed from the original data. I think there is something incorrect in my code and its logic.</p> <p><strong>original DataFrame:-</strong></p> <pre><code>df = pd.DataFrame({'Date': ['01-01-2022','02-01-2022','03-01-2022','04-01-2022','05-01-2022','06-01-2022','07-01-2022','08-01-2022','09-01-2022','10-01-2022','11-01-2022','12-01-2022','13-01-2022','14-01-2022','15-01-2022','16-01-2022','17-01-2022','18-01-2022','19-01-2022','20-01-2022'], 'Value': [100,50,60,3,85,15,250,97,150,25,49,64,88,35,154,73,67,48,52,90]}) </code></pre> <p><a href="https://i.sstatic.net/fcL0u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fcL0u.png" alt="enter image description here" /></a></p> <p><strong>The expected data to see:-</strong></p> <p>for the first looping we should exclude row number 6 (value: 250)</p> <p><a href="https://i.sstatic.net/tgyEx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tgyEx.png" alt="enter image description here" /></a></p> <p>for the second looping we should exclude row number 7 (value: 150) and row 13 (value: 154)</p> <p><a href="https://i.sstatic.net/Hfh9L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hfh9L.png" alt="enter image description here" /></a></p> <p><strong>The final data to see:-</strong></p> <p><a href="https://i.sstatic.net/sre2O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sre2O.png" alt="enter image description here" /></a></p> <p><strong>The code that i've used:-</strong></p> <pre><code>outlier = [] df2 = [] while True: Q1 = df['Value'].quantile(0.25) Q3 = df['Value'].quantile(0.75) IQR = Q3 - Q1 Lower_Limit = Q1 - 1.5 * IQR Upper_Limit = Q3 + 1.5 * IQR outliers = df['Value'][((df['Value'] &lt; Lower_Limit)|(df['Value'] &gt; Upper_Limit))] outlier.append(outliers) df = df['Value'][~((df['Value'] &lt; Lower_Limit)|(df['Value'] &gt; Upper_Limit))] df2.append(df) Q1 = pd.DataFrame(df2).quantile(0.25) Q3 = pd.DataFrame(df2).quantile(0.75) IQR = Q3 - Q1 Lower_Limit = Q1 - 1.5 * IQR Upper_Limit = Q3 + 1.5 * IQR outliers = pd.DataFrame(df2)[((pd.DataFrame(df2) &lt; Lower_Limit) | (pd.DataFrame(df2) &gt; Upper_Limit))].dropna() outlier.append(outliers) df2 = pd.DataFrame(df2)[~((pd.DataFrame(df2) &lt; Lower_Limit) |(pd.DataFrame(df2) &gt; Upper_Limit))] df2.append(df2) break </code></pre> <p><strong>The final output:-</strong></p> <pre><code>print('df before Removing Outliers: ' + str(len(df))) print('df After Removing Outliers: ' + str(len(df2))) print('Number of outliers: ' + str(len(outlier))) print('Max outlier value: '+ str(outlier.max())) print('Min outlier value: '+ str(outlier.min())) df before Removing Outliers: 20 df After Removing Outliers: 17 Number of outliers: 3 Max outlier value: 250 Min outlier value: 150 </code></pre>
<python><python-3.x><pandas><pyspark>
2022-12-16 09:01:13
1
303
Mahmoud Badr
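A minimal sketch of the iterative version asked for above: recompute the IQR fences on the surviving rows each pass and stop once a pass removes nothing. On the question's data this removes 250 on the first pass, then 150 and 154 on the second, leaving 17 rows.
<pre class="lang-py prettyprint-override"><code>import pandas as pd

outliers = []
while True:
    q1, q3 = df['Value'].quantile([0.25, 0.75])
    iqr = q3 - q1
    keep = df['Value'].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
    if keep.all():
        break                          # no outliers left on this pass
    outliers.append(df.loc[~keep, 'Value'])
    df = df[keep]

outliers = pd.concat(outliers)         # all removed values, in removal order
</code></pre>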
74,822,137
6,296,447
Unable to install ddtrace using poetry on M1 Max
<p>I have a new M1 Max machine (Monterey) and when I run poetry install I get the error <code>Installing ddtrace (1.4.5): Failed</code>.</p> <p>There's a long stack trace you can find <a href="https://gist.github.com/NRKirby/7ae65d4d92ce828455d03726a7c03367" rel="nofollow noreferrer">here</a></p> <p>the first error appears to be</p> <p><code>ddtrace/internal/pack_template.h:561:5: error: implicit declaration of function '_PyFloat_Pack8' is invalid in C99 [-Werror,-Wimplicit-function-declaration] _PyFloat_Pack8(d, &amp;buf[1], 0);</code></p> <p>I am unsure how to debug this issue - can anyone help?</p>
<python><datadog><python-poetry>
2022-12-16 08:53:59
0
2,537
Vinyl Warmth
74,822,126
4,091,365
Why does importing numpy print 2313 to the screen?
<p>I'm seeing this really strange behavior where my script outputs the number 2313 when I import numpy. It annoys me, but I don't know why it happens and what I can do about it. I'm using python 3.11.0 and numpy version 1.23.4.</p> <p>When my script is empty and I run it, nothing happens. However, when I write:</p> <p><code>import numpy as np</code></p> <p>with the rest of my script still completely empty, I get the output:</p> <p><a href="https://i.sstatic.net/btFg3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/btFg3.png" alt="output to screen" /></a></p> <p>Does anyone have a clue?</p>
<python><numpy>
2022-12-16 08:52:05
1
413
SPK.z
74,822,097
11,070,463
Purpose of stop gradient in `jax.nn.softmax`?
<p><code>jax.nn.softmax</code> is defined as:</p> <pre class="lang-py prettyprint-override"><code>def softmax(x: Array, axis: Optional[Union[int, Tuple[int, ...]]] = -1, where: Optional[Array] = None, initial: Optional[Array] = None) -&gt; Array: x_max = jnp.max(x, axis, where=where, initial=initial, keepdims=True) unnormalized = jnp.exp(x - lax.stop_gradient(x_max)) return unnormalized / jnp.sum(unnormalized, axis, where=where, keepdims=True) </code></pre> <p>I'm particularly interested in the <code>lax.stop_gradient(x_max)</code> part. I would love an explanation for why it's needed. From a practical standpoint, it seems that <code>stop_gradient</code> doesn't change the gradient calculation:</p> <pre class="lang-py prettyprint-override"><code>import jax import jax.numpy as jnp def softmax_unstable(x): return jnp.exp(x) / jnp.sum(jnp.exp(x)) def softmax_stable(x): x = x - jnp.max(x) return jnp.exp(x) / jnp.sum(jnp.exp(x)) def softmax_stop_gradient(x): x = x - jax.lax.stop_gradient(jnp.max(x)) return jnp.exp(x) / jnp.sum(jnp.exp(x)) # example input x = jax.random.normal(jax.random.PRNGKey(123), (100,)) # make sure all forward passes are equal a = softmax_unstable(x) b = softmax_stable(x) c = softmax_stop_gradient(x) d = jax.nn.softmax(x) assert jnp.allclose(a, b) and jnp.allclose(b, c) and jnp.allclose(c, d) # make sure all gradient calculations are the same a = jax.grad(lambda x: -jnp.log(softmax_unstable(x))[2])(x) b = jax.grad(lambda x: -jnp.log(softmax_stable(x))[2])(x) c = jax.grad(lambda x: -jnp.log(softmax_stop_gradient(x))[2])(x) d = jax.grad(lambda x: -jnp.log(jax.nn.softmax(x))[2])(x) assert jnp.allclose(a, b) and jnp.allclose(b, c) and jnp.allclose(c, d) # make sure all gradient calculations are the same, this time we use softmax functions twice a = jax.grad(lambda x: -jnp.log(softmax_unstable(softmax_unstable(x)))[2])(x) b = jax.grad(lambda x: -jnp.log(softmax_stable(softmax_stable(x)))[2])(x) c = jax.grad(lambda x: -jnp.log(softmax_stop_gradient(softmax_stop_gradient(x)))[2])(x) d = jax.grad(lambda x: -jnp.log(jax.nn.softmax(jax.nn.softmax(x)))[2])(x) assert jnp.allclose(a, b) and jnp.allclose(b, c) and jnp.allclose(c, d) </code></pre> <p>^ all implementations are equal, even the one where we apply the <code>x - x_max</code> trick but WITHOUT <code>stop_gradient</code>.</p>
<python><machine-learning><deep-learning><autograd><jax>
2022-12-16 08:49:48
2
4,113
Jay Mody
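For what it's worth on the question above, the observed agreement is expected: softmax is shift-invariant, so the subtracted maximum never changes the output value and therefore contributes exactly zero gradient whether or not it is differentiated through. The <code>stop_gradient</code> presumably just prunes that zero-contribution branch from the backward graph (the shift itself exists only for numerical stability):
<pre><code>softmax(x - c)_i = exp(x_i - c) / sum_j exp(x_j - c)
                 = exp(x_i) / sum_j exp(x_j)        # the exp(-c) factors cancel
                 = softmax(x)_i                      # for any constant c
</code></pre>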
74,822,071
9,729,724
queue.close() "raises" BrokenPipeError
<p>Using Python 3.7 on Unix, the following code:</p> <pre class="lang-py prettyprint-override"><code>import multiprocessing queue = multiprocessing.Queue() queue.put(1) # time.sleep(0.01) Adding this prevents the error queue.close() </code></pre> <p>&quot;raises&quot; (in the background) a BrokenPipeError:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/lib/python3.7/multiprocessing/queues.py&quot;, line 242, in _feed send_bytes(obj) File &quot;/usr/lib/python3.7/multiprocessing/connection.py&quot;, line 200, in send_bytes self._send_bytes(m[offset:offset + size]) File &quot;/usr/lib/python3.7/multiprocessing/connection.py&quot;, line 404, in _send_bytes self._send(header + buf) File &quot;/usr/lib/python3.7/multiprocessing/connection.py&quot;, line 368, in _send n = write(self._handle, buf) BrokenPipeError: [Errno 32] Broken pipe </code></pre> <p>To me this clearly looks like a bug, since <code>queue.close()</code> together with <code>queue.join_thread()</code> exist precisely to avoid this kind of bug.<br /> Am I missing something?</p> <p>Notice that the BrokenPipeError is only raised in the background thread which is internally used by Python to feed the queue, so from the main process's point of view no error is raised and the &quot;only&quot; consequence is spurious tracebacks being printed.</p> <p>(Also related to <a href="https://stackoverflow.com/questions/51680479/multiprocessing-queue-fails-intermittently-bug-in-python">multiprocessing.Queue fails intermittently. Bug in Python?</a>)</p>
<python><multiprocessing><pipe>
2022-12-16 08:47:47
1
303
agemO
74,822,063
1,056,563
How to include instances of a given class in its constructor in python?
<p>How do we include instances of a given class in its constructor?</p> <pre><code>class Node: # Results in &quot;Unresolved reference 'Node'&quot; def __init__(self, val: int, left: Node, right: Node): self.val = val self.left = left self.right = right </code></pre> <p>Note: I have seen several questions <em>similar</em> to this but they are not asking exactly the same thing and/or are garbled.</p> <p><strong>Update</strong> Not surprisingly, there does exist an answer somewhere on SOF to the question. But given how I posed the question, it's basically impossible to find that Q&amp;A from the <em>This question already has answers here</em> list. The answer given in a comment below is</p> <pre><code>from __future__ import annotations </code></pre> <p>as the first line in the script/module.</p>
<python>
2022-12-16 08:46:59
0
63,891
WestCoastProjects
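A minimal sketch of the fix mentioned in the update above, plus the alternative of quoting the name (both defer evaluation of the annotations so the class can refer to itself):
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations   # must come before any other imports

class Node:
    def __init__(self, val: int, left: Node, right: Node):
        self.val = val
        self.left = left
        self.right = right

# Alternative without the import: string ('forward reference') annotations
#     def __init__(self, val: int, left: 'Node', right: 'Node'): ...
</code></pre>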
74,821,927
7,717,176
Splitting list of strings in a column of vaex dataframe
<p>There is a vaex dataframe with a column such as:</p> <pre><code>df['col'] ['aa', ' NO'] ['aa', ' NO'] ['aa', ' NO'] ['aa', ' NO'] ['aa', ' NO'] </code></pre> <p>I want to convert this one column to two columns as follows:</p> <pre><code>df['col1', 'col2'] ['aa'], [' NO'] ['aa'], [' NO'] ['aa'], [' NO'] ['aa'], [' NO'] ['aa'], [' NO'] </code></pre> <p>Is there any way to do that in Vaex?</p>
<python><pandas><vaex>
2022-12-16 08:35:32
1
391
HMadadi
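A hedged sketch for the question above using vaex's <code>apply</code> with explicit <code>arguments</code>, which I believe is the supported way to run a Python function element-wise over an expression; whether plain indexing on the expression also works for list columns may depend on the vaex version.
<pre class="lang-py prettyprint-override"><code>df['col1'] = df.apply(lambda parts: parts[0], arguments=[df['col']])
df['col2'] = df.apply(lambda parts: parts[1], arguments=[df['col']])
</code></pre>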
74,821,825
4,451,521
How to set a specific column of the last row of a dataframe
<p>I have a dataframe where the last row has a Nan</p> <p>something like</p> <pre><code> A,B,C 94,35.8621497534,139.8811398413,23.075931722607212 95,35.8621915301,139.8811617064,23.013675634522304 96,35.8622333249,139.8811835151,22.93392191824463 97,35.8622751476,139.881205254,22.98818503390619 98,35.8623169559,139.8812270428,23.12554949571689 99,35.8623587254,139.8812489567,23.269025725355807 100,35.8624004417,139.8812709946,23.080210277956407 101,35.8624422121,139.881292861,Nan </code></pre> <p>I am trying to change that Nan to the previous row value (23.080210277956407) but I cannot do it I get an error</p> <pre><code>SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead </code></pre> <p>I have tried several methods like</p> <pre><code>rowi=df.shape[0] df.iloc[row-1]['C']=df.tail(2)['C'].iloc[0] </code></pre> <p>but this does not modify anything</p> <p>How can I set the last value of column 'C' to be the same as the previous value?</p>
<python><pandas>
2022-12-16 08:23:13
0
10,576
KansaiRobot
74,821,758
9,205,369
Python coverage used in Django takes too long to execute even though it is run with --source flag option
<p>I am using the coverage package in combination with the Django testing framework, and sometimes want to test only one app/directory/package, stated in the coverage <code>--source</code> option.</p> <pre><code>coverage run --source='custom_auth' manage.py test custom_auth.tests.TestAuth.test_authentication --keepdb
</code></pre> <p>Is this command the correct way to run only one test? I am also using the <code>--keepdb</code> flag to avoid recreating the database each time. The test itself executes in 0.147s, but something happens behind the scenes before the test, and it takes about 3-5 minutes to start executing it.</p>
<python><django><coverage.py><django-tests>
2022-12-16 08:14:04
1
679
Matija Lukic
74,821,540
10,829,044
Concat excel cols and combine rows into one using python pandas
<p>I have data in an Excel sheet like below:</p> <pre><code>Name   Segment     revenue  Product_id  Status  Order_count  days_ago
Dummy  High value  1000     P_ABC       Yes     2            30 days ago
Dummy  High value  1000     P_CDE       No      1            20 days ago
Dummy  High value  1000     P_EFG       Yes     3            10 days ago
Tammy  Low value   50       P_ABC       No      0            100 days ago
Tammy  Low_value   50       P_DCF       Yes     1            10 days ago
</code></pre> <p>I would like to do the below steps in order:</p> <p>a) Concatenate the columns <code>Product_id, Status, Order_count</code> into one column, with a <code>-</code> symbol between the values.</p> <p>b) Group the data based on <code>Name, Segment and revenue</code>.</p> <p>c) Combine multiple rows for the same group into one row (in Excel).</p> <p>I tried something like below:</p> <pre><code>df['concat_value'] = df['Product_id'] + &quot; - &quot; + df['Status'] + &quot; - &quot; + df['Order_count']
df_group = df.groupby(['Name','Segment','revenue'])
df_nonrepeats = df[df_group['concat_value'].transform('count') == 1]
df_repeats = df[df_group['concat_value'].transform('count') &gt; 1]
</code></pre> <p>But I am not able to get the expected output shown below in the Excel sheet. Can you help me with how I can get the below output in an Excel sheet?</p> <p><a href="https://i.sstatic.net/E1PcJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E1PcJ.png" alt="enter image description here" /></a></p>
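<p>A sketch under the assumption that only the concatenated column is wanted per group, and that <code>Order_count</code> is numeric (so it needs <code>astype(str)</code> before string concatenation); the output file name is a placeholder:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

# Order_count is numeric, so cast it before string concatenation
df['concat_value'] = (df['Product_id'] + ' - ' + df['Status']
                      + ' - ' + df['Order_count'].astype(str))

# one row per (Name, Segment, revenue); rows in a group are stacked with
# newlines so they land in a single Excel cell (wrap text may still be needed)
out = (df.groupby(['Name', 'Segment', 'revenue'], as_index=False, sort=False)
         ['concat_value'].agg('\n'.join))
out.to_excel('output.xlsx', index=False)  # 'output.xlsx' is a placeholder name
</code></pre>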
<python><pandas>
2022-12-16 07:51:00
1
7,793
The Great
74,821,480
13,588,525
Tensorflow eye function
<p>I am trying to use the <code>tf.linalg.eye</code> function in TensorFlow to create an identity matrix, but I am having trouble understanding how to use the <code>num_rows</code> and <code>num_columns</code> arguments. Can someone explain how these arguments determine the size of the identity matrix, and provide an example of how to use <code>tf.linalg.eye</code> to create a 3x3 identity matrix?</p>
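<p>For the concrete case: <code>num_rows</code> sets the row count and <code>num_columns</code> defaults to <code>num_rows</code>, which gives a square identity. A short example:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf

eye3 = tf.linalg.eye(num_rows=3)                 # 3x3 identity matrix
rect = tf.linalg.eye(num_rows=2, num_columns=4)  # 2x4, ones on the main diagonal
print(eye3)
</code></pre>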
<python><tensorflow2.0>
2022-12-16 07:43:22
0
1,736
Kaddu Livingstone
74,821,261
19,303,365
fetching the required info using regex within string
<p>I have the text below, which was extracted using <code>pdfminer</code>.</p> <p><em><strong>Output from pdfminer</strong></em>:</p> <blockquote> <p><strong>Work Experience</strong> -Job Role/Establishment -Sales Assistant @ DFSDuration -June 2021 - PresentCurrently at DFS I work as a sales assistant, My role entails me helping customers withproduct enquiries and assisting customers needs,whilst also dealing with difficult/frustrated customers. At DFS a lot can go wrong so it’s essential to be able to deal withmany different types of objections and show understanding towards the customer at alltimes for their needs to be met. Within the role I am expected to achieve sales targetswhich I currently have no problems reaching.Job Role/Establishment -Plasterer @ MB Plasterer’sDuration -September 2016 - PresentWhilst working as a plasterer I have been able to develop my practical trading skills.<strong>Areas Of Expertise</strong> -●Customer Interaction●Customer Service●Resilience●Rapport Building●Trader●Warehouse WorkPersonal Skills -●Friendly●Confident●Articulate●Self Motivated●Punctual</p> </blockquote> <p><em><strong>Expected Output:</strong></em></p> <blockquote> <p>Work Experience -Job Role/Establishment -Sales Assistant @ DFSDuration -June 2021 - PresentCurrently at DFS I work as a sales assistant, My role entails me helping customers withproduct enquiries and assisting customers needs,whilst also dealing with difficult/frustrated customers. At DFS a lot can go wrong so it’s essential to be able to deal withmany different types of objections and show understanding towards the customer at alltimes for their needs to be met. Within the role I am expected to achieve sales targetswhich I currently have no problems reaching.Job Role/Establishment -Plasterer @ MB Plasterer’sDuration -September 2016 - PresentWhilst working as a plasterer I have been able to develop my practical trading skills.</p> </blockquote> <p>In some texts, Work Experience is indicated with other terms, such as EXPERIENCE, Job Experience and so on.</p> <p>I am looking to write a generic regex to fetch the text between &quot;Work Experience&quot; and &quot;Areas Of Expertise&quot;.</p> <p>The pattern I tried is below:</p> <pre><code>pattern = r'^(?:EXPERIENCE|Employment experience|Work Experience|Work Experience|WORK EXPERIENCE|Previous Employment|Work Experience -|Job experience|)\s*(\S.*?)\n(?:Skills|EDUCATION|Education|SKILLS|Areas Of Expertise)'
matches = re.findall(pattern, text, re.M | re.S)
print(matches)
</code></pre> <p>But I am getting the output <code>[]</code>.</p> <p>What am I missing? How can this be approached?</p>
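<p>One hedged reworking: in the extracted text there is no newline before the next heading (everything runs together), so the <code>\n</code> in the pattern can never match; also, the trailing <code>|</code> before the closing parenthesis creates an empty branch that matches at every position. Anchoring on the heading names with a lookahead avoids both; the heading list here is illustrative, not exhaustive:</p> <pre class="lang-py prettyprint-override"><code>import re

pattern = (r'(?:Work\s+Experience|EXPERIENCE|Job\s+Experience|'
           r'Employment\s+experience|Previous\s+Employment)\s*-?\s*'
           r'(.*?)'
           r'(?=Areas\s+Of\s+Expertise|Skills|SKILLS|Education|EDUCATION)')

m = re.search(pattern, text, re.S)  # text is the pdfminer output from above
if m:
    print(m.group(1).strip())
</code></pre>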
<python><regex>
2022-12-16 07:17:14
2
365
Roshankumar
74,821,227
11,930,479
Transformation function to be applied on every row in data frame
<p>I load a JSON file into a dataframe and want to apply one function to every single row. The function should use the existing column <code>item_name</code>, do some logical checks, and add a new column to the dataframe. But this doesn't actually work and throws an error.</p> <pre><code>def load_data(data_file):
    with open(data_file, 'r', encoding='utf8') as d:
        data = json.load(d)
    df = pd.DataFrame.from_dict(data)
    df = df.apply(name_list, axis=1)

def name_list(df):
    for column in df[['item_name']]:
        names = []
        # processing ...
        names.append(value)
    df['names_list'] = names
    return df
</code></pre> <p>Inside the second function, <code>name_list</code>, when I try to print <code>df</code> I get an error. Hence I would like to know how to pass the entire dataframe as the argument.</p>
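<p>For what it's worth, with <code>axis=1</code> the function receives one <em>row</em> (a <code>Series</code>), not the whole dataframe, so looping over <code>df[['item_name']]</code> inside it fails. A minimal sketch of the row-wise shape, with the processing left as in the question:</p> <pre class="lang-py prettyprint-override"><code>def name_list(row):
    names = []
    # ... processing based on row['item_name'] ...
    row['names_list'] = names
    return row

df = df.apply(name_list, axis=1)  # each call gets one row as a Series
</code></pre>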
<python><json><python-3.x><pandas>
2022-12-16 07:14:27
0
1,006
Lilly
74,820,793
14,617,547
Adding a global variable in Colab causes an indentation error
<p>I'm trying to add a global variable to a quicksort algorithm in Colab. It works fine without the global variable <code>r</code>, but when I introduce it, I receive a seemingly nonsensical indentation error even though the indentation looks correct. Please help.</p> <pre><code># Quicksort Sort
r = 0

def partition(array, low, high):
    global r
    pivot = array[high]
    i = low - 1
    for j in range(low, high):
        if array[j] &lt;= pivot:
            i = i + 1
            (array[i], array[j]) = (array[j], array[i])
            r = r + 1
    (array[i + 1], array[high]) = (array[high], array[i + 1])
    r = r + 1
    return i + 1

def quickSort(array, low, high):
    if low &lt; high:
        pi = partition(array, low, high)
        quickSort(array, low, pi - 1)
        quickSort(array, pi + 1, high)

data = [7,1,3,2,4,5,6]
print(&quot;Unsorted Array&quot;)
print(data)

size = len(data)
quickSort(data, 0, size - 1)

print('Sorted Array in Ascending Order:')
print(data, r)
</code></pre> <p>The error:</p> <pre><code>File &quot;&lt;tokenize&gt;&quot;, line 11
    r=r+1
    ^
IndentationError: unindent does not match any outer indentation level
</code></pre>
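<p>Since the traceback comes from the tokenizer, one common cause is a stray tab character pasted into the cell among space-indented lines. A quick, hedged way to check, assuming the cell's source is in a string <code>src</code>:</p> <pre class="lang-py prettyprint-override"><code>for i, line in enumerate(src.splitlines(), 1):
    if '\t' in line:
        print('tab on line', i, repr(line))  # re-indent these with spaces
</code></pre>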
<python><global-variables><google-colaboratory>
2022-12-16 06:12:33
1
410
Beryl Amend
74,820,711
1,895,649
Getting "PerformanceWarning: DataFrame is highly fragmented" when adding new columns to dataframe
<p>I have a pivot table which contains n number of columns with a format of YYYY-WW, since not all year-week combinations are present, I need to calculate the price difference and percentage of a weeks price vs the previous week.</p> <p>I got it figured out, and it's working, but I get the following error:</p> <pre class="lang-py prettyprint-override"><code>/var/folders/b6/jndhzshn3hlbwyrdsjzj2znw0000gn/T/ipykernel_28918/446450422.py:38: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()` df_pivot[f&quot;% {year} [{week:02d}-{prev_week:02d}]&quot;] = (df_pivot[current_week_colname] / df_pivot[prev_week_colname]) - 1 /var/folders/b6/jndhzshn3hlbwyrdsjzj2znw0000gn/T/ipykernel_28918/446450422.py:37: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()` df_pivot[f&quot;$ {year} [{week:02d}-{prev_week:02d}]&quot;] = df_pivot[current_week_colname] - df_pivot[prev_week_colname] /var/folders/b6/jndhzshn3hlbwyrdsjzj2znw0000gn/T/ipykernel_28918/446450422.py:38: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()` df_pivot[f&quot;% {year} [{week:02d}-{prev_week:02d}]&quot;] = (df_pivot[current_week_colname] / df_pivot[prev_week_colname]) - 1 </code></pre> <p>This is the code I'm running:</p> <pre class="lang-py prettyprint-override"><code>... years = [&quot;2020&quot;, &quot;2021&quot;, &quot;2022&quot;] df_pivot_colnames = tuple(df_pivot.columns) ... for year in years: for week in range(2, 53): prev_week = week - 1 current_week_colname = f&quot;{year}-{week:02d}&quot; prev_week_colname = f&quot;{year}-{prev_week:02d}&quot; new_week_colname = f&quot;{year} [{week:02d}-{prev_week:02d}]&quot; if ( current_week_colname in df_pivot_colnames and prev_week_colname in df_pivot_colnames ): df_pivot[f&quot;$ {new_week_colname}&quot;] = ( df_pivot[current_week_colname] - df_pivot[prev_week_colname] ) df_pivot[f&quot;% {new_week_colname}&quot;] = ( df_pivot[current_week_colname] / df_pivot[prev_week_colname] ) - 1 df_pivot.to_csv(source_csv_path + &quot;output_&quot; + csv) </code></pre> <p>I understand the message, but I am not sure how I could incorporate <code>concat</code>, having in mind that the column name will change depending on the <code>df</code> loaded.</p>
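<p>A sketch of the pattern the warning suggests: collect every new column in a dict first, then attach them all with a single <code>pd.concat</code>. The dynamic names are unaffected because they are just dict keys; <code>years</code>, <code>df_pivot</code> and <code>df_pivot_colnames</code> are reused from the code above:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

new_cols = {}  # collect every new column here first
for year in years:
    for week in range(2, 53):
        prev_week = week - 1
        current_week_colname = f'{year}-{week:02d}'
        prev_week_colname = f'{year}-{prev_week:02d}'
        new_week_colname = f'{year} [{week:02d}-{prev_week:02d}]'
        if (current_week_colname in df_pivot_colnames
                and prev_week_colname in df_pivot_colnames):
            new_cols[f'$ {new_week_colname}'] = (df_pivot[current_week_colname]
                                                 - df_pivot[prev_week_colname])
            new_cols[f'% {new_week_colname}'] = (df_pivot[current_week_colname]
                                                 / df_pivot[prev_week_colname]) - 1

# one concat instead of many frame.insert calls
df_pivot = pd.concat([df_pivot, pd.DataFrame(new_cols, index=df_pivot.index)], axis=1)
</code></pre>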
<python><pandas><dataframe>
2022-12-16 06:01:07
1
615
joseagaleanoc
74,820,698
11,332,693
Python Groupby on String values
<p>Below is the input data:</p> <pre><code>A   B   C   D
R1  J1  D1  S1,S2
R1  J1  D1  S3,S4,S5
R1  J1  D2  S5,S6,S2
R1  J1  D2  S7,S8
P1  J2  E1  T1,T2
P1  J2  E1  T3,T4,T5
P1  J2  E2  T5,T6,T2
P1  J2  E3  T7,T8,T5
</code></pre> <p>Expected output, with no repeated values in columns C and D:</p> <pre><code>A   B   C         D
R1  J1  D1,D2     S1,S2,S3,S4,S5,S6,S7,S8
P1  J2  E1,E2,E3  T1,T2,T3,T4,T5,T6,T7,T8
</code></pre> <p>The script I have tried, which does not work:</p> <pre><code>df.groupby(['A','B'])[['C','D']].agg([','.join]).reset_index()
</code></pre>
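<p>A sketch that splits each cell on commas, de-duplicates while preserving order, and re-joins, applied to both <code>C</code> and <code>D</code>:</p> <pre class="lang-py prettyprint-override"><code>def uniq_join(series):
    # split, drop duplicates while preserving order, re-join
    seen = dict.fromkeys(part for cell in series for part in cell.split(','))
    return ','.join(seen)

out = (df.groupby(['A', 'B'], as_index=False, sort=False)
         .agg({'C': uniq_join, 'D': uniq_join}))
</code></pre>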
<python><pandas><string><group-by>
2022-12-16 05:57:00
2
417
AB14
74,820,634
11,502,399
findHomography producing inaccurate transformation matrix
<p>I have the coordinates of the four corners of a table in my image frame as well as the actual dimensions of that table. What I want to do is find the transformation matrix from the table in the frame (first image) to the actual table (second image). The goal is to be able to map points in the raw frame (location of a ball when it bounces) to where it bounced in the second image rectangle.</p> <p>I have tried using openCV's <code>findHomography</code> however I am getting inaccurate results. I am trying to find the matrix <code>T = A -&gt; B</code> where:</p> <p><strong>A</strong>: the coordinates of table corners in the raw image (see first image):</p> <pre><code>[[512.10633894 269.22351997] # Bottom corner [325.78198672 236.36953072] # Left Corner [536.67952727 199.18259532] # Top Corner [715.21023044 214.80199122]] # Right Corner </code></pre> <p><strong>B</strong>: Actual coordinates to map to (see second image):</p> <pre><code>[[152.5 0. ] [ 0. 0. ] [ 0. 274. ] [152.5 274. ]] </code></pre> <p><strong>T</strong>: Transformation matrix</p> <pre><code>[[-5.96154850e-01 -3.38096031e+00 9.92931129e+02] [-5.59829402e-01 3.17495425e+00 -5.68494643e+02] [-2.76038296e-04 -8.61104226e-03 1.00000000e+00]] </code></pre> <p>Using <code>findHomography</code> gives me the transformation matrix <strong>T</strong>, but when I input the original coordinates <strong>A</strong> I expect to get <strong>B</strong> but I end up with <strong>B'</strong>:</p> <pre><code>[[[-221 0 -1]] [[ 0 0 -1]] [[ 0 -235 0]] [[-159 -285 -1]]] </code></pre> <p>This is of the correct magnitude so it seems like it's doing something right but is nowhere near the accuracy I want, and for some reason all the values are negative which I don't understand why. Why is <strong>B'</strong> not equal to <strong>B</strong>?</p> <p>Here is the relevant code:</p> <pre><code>T, status = cv.findHomography(img_corners, TABLE_COORDS, cv.RANSAC, 5.) T_img = cv.transform(img_corners.reshape((-1, 1, 2)), T) </code></pre> <p>I have also tried changing the 5.0 parameter to something lower but without any luck...</p> <p><a href="https://i.sstatic.net/zbGfl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zbGfl.jpg" alt="Table In frame" /></a></p> <p><a href="https://i.sstatic.net/OWQmr.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OWQmr.jpg" alt="Actual dimensions of table" /></a></p>
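<p>A likely culprit, hedged: <code>cv.transform</code> applies the 3x3 matrix without the projective divide (which is also why each output point in <strong>B'</strong> has three components), whereas <code>cv.perspectiveTransform</code> performs the divide by the third coordinate. A minimal sketch:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import cv2 as cv

pts = np.asarray(img_corners, dtype=np.float32).reshape(-1, 1, 2)
projected = cv.perspectiveTransform(pts, T)  # divides by the third coordinate
print(projected.reshape(-1, 2))              # should land close to B
</code></pre>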
<python><opencv><matrix><homography>
2022-12-16 05:43:57
1
1,511
Ken
74,820,559
275,002
Python: No of rows are always 9 and does not return affected rows count after UPDATE query
<p>This is not something complicated, but I am not sure why it is not working.</p> <pre><code>import mysql.connector

def get_connection(host, user, password, db_name):
    connection = None
    try:
        connection = mysql.connector.connect(
            host=host,
            user=user,
            use_unicode=True,
            password=password,
            database=db_name
        )
        connection.set_charset_collation('utf8')
        print('Connected')
    except Exception as ex:
        print(str(ex))
    finally:
        return connection

with connection.cursor() as cursor:
    sql = 'UPDATE {} set underlying_price=9'.format(table_name)
    cursor.execute(sql)
    connection.commit()
    print('No of Rows Updated ...', cursor.rowcount)
</code></pre> <p><code>cursor.rowcount</code> always returns 0, no matter what. The same query shows the correct count in TablePlus.</p> <p>The MySQL API provides <a href="https://dev.mysql.com/doc/connector-python/en/connector-python-api-cext-affected-rows.html" rel="nofollow noreferrer">this</a> method, but I do not know how to call it, as calling it on the <code>connection</code> variable gives an error.</p>
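<p>One assumption worth testing, sketched below: MySQL reports the number of <em>changed</em> rows by default, so an <code>UPDATE</code> that writes a value the rows already contain reports 0. The <code>FOUND_ROWS</code> client flag makes it report <em>matched</em> rows instead; the connection parameters are reused from <code>get_connection</code> above:</p> <pre class="lang-py prettyprint-override"><code>import mysql.connector
from mysql.connector.constants import ClientFlag

connection = mysql.connector.connect(
    host=host, user=user, password=password, database=db_name,
    client_flags=[ClientFlag.FOUND_ROWS],  # count matched rows, not changed rows
)
</code></pre>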
<python><mysql><mysql-connector>
2022-12-16 05:30:29
1
15,089
Volatil3
74,820,512
10,192,022
paramiko module is not catching the output
<p>I'm trying to get Kafka lag totals over SSH using the paramiko module, summing the lag column with awk, but the output is always zero when run through paramiko, while the same command run manually gives the correct result.</p> <pre><code># check list of consumers
stdin, stdout, stderr = ssh.exec_command(&quot;/home/prduser/kafka_2.12-2.1.0/bin/kafka-consumer-groups.sh --bootstrap-server localhost:19092,localhost:29092,localhost:39092 --list&quot;)
list_consumers = stdout.readlines()
print(list_consumers)

# check lag for each consumer
for consumer in list_consumers:
    print(consumer)
    # | awk '{sum += $5} END {print sum}'
    stdin, stdout, stderr = ssh.exec_command(&quot;/home/prduser/kafka_2.12-2.1.0/bin/kafka-consumer-groups.sh --bootstrap-server localhost:19092,localhost:29092,localhost:39092 --describe --group {{0}} | awk '{{sum+=$5}} END {{print sum}}' &quot;.format(consumer))
    consumer_lag_out = stdout.readlines()
    print(consumer_lag_out)
</code></pre> <p>Sample output: <a href="https://i.sstatic.net/OCpah.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OCpah.png" alt="enter image description here" /></a></p> <p>Manual shell command: <a href="https://i.sstatic.net/ubKY6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ubKY6.png" alt="enter image description here" /></a></p>
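<p>A detail worth noting, sketched below: doubled braces are emitted literally by <code>str.format</code>, so <code>--group {{0}}</code> sends the literal text <code>{0}</code> to the remote shell instead of the consumer name; also <code>readlines()</code> leaves a trailing newline on each name:</p> <pre class="lang-py prettyprint-override"><code>group = consumer.strip()  # readlines() keeps the trailing '\n'
cmd = ('/home/prduser/kafka_2.12-2.1.0/bin/kafka-consumer-groups.sh '
       '--bootstrap-server localhost:19092,localhost:29092,localhost:39092 '
       '--describe --group {} '
       # the awk braces stay doubled so format() emits them literally
       "| awk '{{sum+=$5}} END {{print sum}}'").format(group)
stdin, stdout, stderr = ssh.exec_command(cmd)
print(stdout.readlines())
</code></pre>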
<python><python-3.x>
2022-12-16 05:23:19
1
381
Ashok Developer
74,820,202
2,525,940
Need PyQt QGroupBox to resize to match contained QTreeWidget
<p>I have a QTreeWidget inside a QGroupBox. If the branches on the tree expand or collapse the QGroupBox should resize rather than show scrollbars. The QGroupBox is in a window with no layout manager as in the full application the user has the ability to drag and resize the GroupBox around the window.</p> <p>The code below almost does this. I have subclassed QTreeWidget and set its size hint to follow that of the viewport (QAbstractScrollClass) it contains. The viewport sizehint does respond to the changes in the tree branch expansion unlike the tree sizehint. I've then subclassed QGroupBox to adjust its size to the sizehint in its init method.</p> <p>This part all works. When the gui first comes up the box matches the size of the expanded branches of the tree. Changing the expanded state in code results in the correctly sized box.</p> <p><a href="https://i.sstatic.net/TYssg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TYssg.png" alt="enter image description here" /></a></p> <p>I then connected the TreeWidget's signals for itemExpanded and itemCollapsed to a function that calls box.adjustSize(). This bit doesn't work. The sizehint for the box stays stubbornly at the size first set when the box was first shown regardless of the user toggling the branches.</p> <p>I've looked at size policies etc, and have written a nasty hacks that will work in some situations, but I'd like to figure out how to do this properly.</p> <p>In the real app the adjustSize will be done I expect with signals but I've simplified here.</p> <pre class="lang-py prettyprint-override"><code>import sys from PyQt5.QtWidgets import ( QApplication, QWidget, QGroupBox, QVBoxLayout, QTreeWidget, QTreeWidgetItem, ) from PyQt5.QtCore import QSize class TreeWidgetSize(QTreeWidget): def __init__(self, parent=None): super().__init__(parent=parent) def sizeHint(self): w = QTreeWidget.sizeHint(self).width() h = self.viewportSizeHint().height() new_size = QSize(w, h + 10) print(f&quot;in tree size hint {new_size}&quot;) return new_size class GroupBoxSize(QGroupBox): def __init__(self, title): super().__init__(title) print(f&quot;box init {self.sizeHint()}&quot;) self.adjustSize() def test(item): print(f&quot;test sizehint {box.sizeHint()}&quot;) print(f&quot;test viewport size hint {tw.viewportSizeHint()}&quot;) box.adjustSize() app = QApplication(sys.argv) win = QWidget() win.setGeometry(100, 100, 400, 250) win.setWindowTitle(&quot;No Layout Manager&quot;) box = GroupBoxSize(win) box.setTitle(&quot;fixed box&quot;) box.move(10, 10) layout = QVBoxLayout() box.setLayout(layout) l1 = QTreeWidgetItem([&quot;String A&quot;]) l2 = QTreeWidgetItem([&quot;String B&quot;]) for i in range(3): l1_child = QTreeWidgetItem([&quot;Child A&quot; + str(i)]) l1.addChild(l1_child) for j in range(2): l2_child = QTreeWidgetItem([&quot;Child B&quot; + str(j)]) l2.addChild(l2_child) tw = TreeWidgetSize() tw.setColumnCount(1) tw.setHeaderLabels([&quot;Column 1&quot;]) tw.addTopLevelItem(l1) tw.addTopLevelItem(l2) l1.setExpanded(False) layout.addWidget(tw) tw.itemExpanded.connect(test) tw.itemCollapsed.connect(test) win.show() sys.exit(app.exec_()) </code></pre>
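<p>A hedged guess to try inside <code>test()</code>: Qt caches size hints, so invalidating the tree's geometry and the box's layout before <code>adjustSize()</code> may force a fresh hint to be computed. This is a sketch, not a verified fix:</p> <pre class="lang-py prettyprint-override"><code>def test(item):
    tw.updateGeometry()        # tell Qt the tree's size hint has changed
    box.layout().invalidate()  # drop the layout's cached size information
    box.adjustSize()
</code></pre>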
<python><pyqt5>
2022-12-16 04:29:50
1
499
elfnor
74,820,040
11,229,812
How to show webcam within tkinter canvas?
<p>I have Tkinter GUI with a canvas that currently shows the image by default and there is a switch that will print Cam On and Cam off when switched back and forth. What I'm trying to do is to get my webcam to stream instead of that image when I flit the switch on.</p> <p>Here is the code I have for my GUI:</p> <pre><code>import tkinter import tkinter.messagebox import customtkinter from PIL import Image,ImageTk # Setting up theme of GUI customtkinter.set_appearance_mode(&quot;Dark&quot;) # Modes: &quot;System&quot; (standard), &quot;Dark&quot;, &quot;Light&quot; customtkinter.set_default_color_theme(&quot;blue&quot;) # Themes: &quot;blue&quot; (standard), &quot;green&quot;, &quot;dark-blue&quot; class App(customtkinter.CTk): def __init__(self): super().__init__() # configure window self.is_on = True self.image = ImageTk.PhotoImage(Image.open(&quot;../data/Mars.PNG&quot;)) self.title(&quot;Cool Blue&quot;) self.geometry(f&quot;{1200}x{635}&quot;) # configure grid layout (4x4) self.grid_columnconfigure(1, weight=1) self.grid_columnconfigure((2, 3), weight=0) self.grid_rowconfigure((0, 1, 2, 3), weight=0) ################################################################### # Create Sidebar for LED, LIghts and Camera controls ################################################################### self.lights_control = customtkinter.CTkFrame(self) self.lights_control.grid(row=3, column=0, rowspan = 1, padx=(5, 5), pady=(10, 10), sticky=&quot;nsew&quot;) self.lights_control.grid_rowconfigure(1, weight=1) # Camera self.camera_switch = customtkinter.CTkSwitch(master=self.lights_control, text=&quot;Camera&quot;, command=self.camera_switch) self.camera_switch.grid(row=2, column=1, pady=10, padx=20, ) ################################################################### # Create canvas for RPCam live stream ################################################################### self.picam_frame = customtkinter.CTkFrame(self) self.picam_frame.grid(row=0, column=1, rowspan=4, padx=(5, 5), pady=(10, 10), sticky=&quot;nsew&quot;) self.picam_frame.grid_rowconfigure(4, weight=1) self.picam_canvas = customtkinter.CTkCanvas(self.picam_frame, width=1730, height=944, background=&quot;gray&quot;) self.picam_canvas.create_image(0, 0, image=self.image, anchor=&quot;nw&quot;) self.picam_canvas.pack() ######################################################################### # Camera Switch ######################################################################### def camera_switch(self, event=None): if self.is_on: print(&quot;Cam on&quot;) self.is_on = False else: print(&quot;Cam off&quot;) self.is_on = True if __name__ == &quot;__main__&quot;: app = App() app.mainloop() </code></pre> <p>Now I did some research and found this code below that will do the trick in its own canvas but I am have not had luck to use the logic from the code below in my own code above.</p> <pre><code># Import required Libraries from tkinter import * from PIL import Image, ImageTk import cv2 # Create an instance of TKinter Window or frame win = Tk() # Set the size of the window win.geometry(&quot;700x350&quot;) # Create a Label to capture the Video frames label =Label(win) label.grid(row=0, column=0) cap= cv2.VideoCapture(0) # Define function to show frame def show_frames(): # Get the latest frame and convert into Image cv2image= cv2.cvtColor(cap.read()[1],cv2.COLOR_BGR2RGB) img = Image.fromarray(cv2image) # Convert image to PhotoImage imgtk = ImageTk.PhotoImage(image = img) label.imgtk = imgtk label.configure(image=imgtk) # Repeat after an interval 
to capture continiously label.after(20, show_frames) show_frames() win.mainloop() </code></pre> <p>So what I really need is help with using the logic from the code below and incorporating it into my own code above. Can you please help me?</p> <p>Here is my latest attempt to get this solved. So I was able to merge these two code based on the logic I provided above. And when I start the GUI and switch the camera button on, I can see that camera is starting since the LED light on the camera turns on and I get a windows notification that my program is starting the camera. however, it is not showing on the canvas yet. So I think I am close but I am not sure what I'm missing.</p> <pre><code>import tkinter import tkinter.messagebox import customtkinter from PIL import Image,ImageTk import cv2 # Setting up theme of GUI customtkinter.set_appearance_mode(&quot;Dark&quot;) # Modes: &quot;System&quot; (standard), &quot;Dark&quot;, &quot;Light&quot; customtkinter.set_default_color_theme(&quot;blue&quot;) # Themes: &quot;blue&quot; (standard), &quot;green&quot;, &quot;dark-blue&quot; class App(customtkinter.CTk): def __init__(self): super().__init__() # configure window self.is_on = True self.image = ImageTk.PhotoImage(Image.open(&quot;../data/Mars.PNG&quot;)) self.capture = cv2.VideoCapture(0) self.title(&quot;Cool Blue&quot;) self.geometry(f&quot;{1200}x{635}&quot;) # configure grid layout (4x4) self.grid_columnconfigure(1, weight=1) self.grid_columnconfigure((2, 3), weight=0) self.grid_rowconfigure((0, 1, 2, 3), weight=0) ################################################################### # Create Sidebar for LED, LIghts and Camera controls ################################################################### self.lights_control = customtkinter.CTkFrame(self) self.lights_control.grid(row=3, column=0, rowspan = 1, padx=(5, 5), pady=(10, 10), sticky=&quot;nsew&quot;) self.lights_control.grid_rowconfigure(1, weight=1) # Camera self.camera_switch = customtkinter.CTkSwitch(master=self.lights_control, text=&quot;Camera&quot;, command=self.camera_switch) self.camera_switch.grid(row=2, column=1, pady=10, padx=20, ) ################################################################### # Create canvas for RPCam live stream ################################################################### self.picam_frame = customtkinter.CTkFrame(self) self.picam_frame.grid(row=0, column=1, rowspan=4, padx=(5, 5), pady=(10, 10), sticky=&quot;nsew&quot;) self.picam_frame.grid_rowconfigure(4, weight=1) # self.picam_canvas = customtkinter.CTkCanvas(self.picam_frame, width=1730, height=944, background=&quot;gray&quot;) # self.picam_canvas.create_image(0, 0, image=self.image, anchor=&quot;nw&quot;) self.picam_canvas = tkinter.Canvas(self.picam_frame, width=1730, height=944) #self.picam_canvas.pack ######################################################################### # Camera Switch ######################################################################### def camera_switch(self, event=None): if self.is_on: self.update_frames() print(&quot;Cam on&quot;) self.is_on = False else: #self.close_camera() self.image print(&quot;Cam off&quot;) self.is_on = True def update_frames(self): # Get the current frame from the webcam _, frame = self.capture.read() # Convert the frame to a PhotoImage object frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) frame = Image.fromarray(frame) frame = ImageTk.PhotoImage(frame) # Update the canvas with the new frame self.picam_canvas.create_image(0, 0, image=frame, anchor=&quot;nw&quot;) self.picam_canvas.image 
= frame # Schedule the next update self.after(1000, self.update_frames) def close_camera(self): self.capture.release() if __name__ == &quot;__main__&quot;: app = App() app.mainloop() </code></pre> <p>Again, if anyone has any insight I would really appreciate it.</p>
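<p>One observation on the latest attempt, hedged: the canvas is created but <code>pack()</code> is commented out, so it is never placed by a geometry manager and nothing drawn on it can appear. A minimal sketch of the two lines to check:</p> <pre class="lang-py prettyprint-override"><code>self.picam_canvas = tkinter.Canvas(self.picam_frame, width=1730, height=944)
self.picam_canvas.pack()  # without this (or grid/place) the canvas is never shown
</code></pre>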
<python><python-3.x><tkinter>
2022-12-16 04:00:38
1
767
Slavisha84
74,819,657
504,717
Upgrade python3.8 to 3.10 in Ubuntu Docker image
<p>I am using the Playwright base image:</p> <pre><code>FROM mcr.microsoft.com/playwright
</code></pre> <p>Unfortunately, this comes with Python 3.8. I could instead use a Python 3.10 image and install Playwright on it, but that came with other complexities, so I chose to upgrade Python on the Playwright image to 3.10.</p> <p>So far, my Dockerfile looks like this:</p> <pre><code>FROM mcr.microsoft.com/playwright

RUN apt install -y software-properties-common &amp;&amp; add-apt-repository -y ppa:deadsnakes/ppa &amp;&amp; apt update &amp;&amp; apt install -y python3.10
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.8 2
</code></pre> <p>This works fine, but the problem is: how can I make python3.10 the default version after setting up the alternatives?</p> <p>Thanks</p>
<python><docker><python-3.8><ubuntu-20.04><python-3.10>
2022-12-16 02:43:19
3
8,834
Em Ae
74,819,610
943,222
linear programming: find max number of shares of stock to purchase given a budget
<p>The problem I am working on is this: say I have 10k, and I want to buy 3 stocks: AMZN, CBOE, CDW. My goal is to calculate the max number of shares I can purchase while staying within 10k:</p> <p><code>X*amzn_price+Y*cboe_price+Z*cdw_price &lt;= 10000</code></p> <p>I wrote the code below based on this <a href="https://realpython.com/linear-programming-python/#example-2" rel="nofollow noreferrer">example</a>:</p> <pre><code>from scipy.optimize import linprog
import yfinance as yf

def linearProblem(dict):
    obj = list(dict.values())
    lhs_ineq = [obj]
    rhs_ineq = [10000]
    opt = linprog(c=obj, A_ub=lhs_ineq, b_ub=rhs_ineq, method=&quot;revised simplex&quot;)

# Press the green button in the gutter to run the script.
def getPrice():
    symbol_list = ['AMZN', 'CBOE', 'CDW']
    symbol_list_dict = {}
    for symbol in symbol_list:
        ticker = yf.Ticker(symbol).info
        market_price = ticker['regularMarketPrice']
        symbol_list_dict[symbol] = market_price
    return symbol_list_dict

if __name__ == '__main__':
    symbol_price_dict = getPrice()
    linearProblem(symbol_price_dict)
</code></pre> <p>But the result is confusing, with a <code>fun</code> of 0; I don't think I am setting up my equation right:</p> <pre><code>{'AMZN': 88.45, 'CBOE': 124.14, 'CDW': 184.2}
[88.45, 124.14, 184.2]
3
1
     con: array([], dtype=float64)
     fun: 0.0
 message: 'Optimization terminated successfully.'
     nit: 0
   slack: array([10000.])
  status: 0
 success: True
       x: array([0., 0., 0.])
</code></pre>
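<p>A sketch of the intended model, assuming SciPy &gt;= 1.9 for the <code>integrality</code> option: <code>linprog</code> minimizes, so maximizing the share count means negating the objective, and the prices belong in the constraint row rather than in the objective (which is why minimizing the prices directly returns 0 shares):</p> <pre class="lang-py prettyprint-override"><code>from scipy.optimize import linprog

prices = [88.45, 124.14, 184.20]
res = linprog(c=[-1, -1, -1],          # maximize X+Y+Z == minimize -(X+Y+Z)
              A_ub=[prices], b_ub=[10000],
              integrality=[1, 1, 1],   # whole shares only (SciPy &gt;= 1.9)
              method='highs')
print(res.x, -res.fun)
</code></pre>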
<python><linear-programming>
2022-12-16 02:33:07
1
816
D.Zou
74,819,469
1,608,765
Sunpy old version? module 'sunpy.net.vso.attrs' has no attribute 'Time'
<p>I'm trying to run code to download SDO data with the following script, but I get an AttributeError. I am not sure why, and I can't find this error discussed online.</p> <p><code>AttributeError: module 'sunpy.net.vso.attrs' has no attribute 'Time'</code></p> <p>Code:</p> <pre><code>from sunpy.net import vso

client = vso.VSOClient()
mydir = '.'

qr = client.search(vso.attrs.Time('2018/06/12 09:34:28', '2018/06/12 09:35:28'),
                   vso.attrs.Instrument('hmi'))
print(qr)

resp = input('Press [y] to download and [n] to exit')
if resp == 'y':
    print('Downloading ...')
    res = client.fetch(qr, path=mydir+'/{file}').wait()
else:
    print('END')
    pass
print('END')
</code></pre> <p>Edit: I found the source code, and I think it should still parse time: <a href="https://github.com/sunpy/sunpy/blob/main/sunpy/net/vso/attrs.py" rel="nofollow noreferrer">https://github.com/sunpy/sunpy/blob/main/sunpy/net/vso/attrs.py</a></p>
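<p>In recent sunpy the query attributes live in <code>sunpy.net.attrs</code> and the recommended client is <code>Fido</code>; a sketch of the same query in that style:</p> <pre class="lang-py prettyprint-override"><code>from sunpy.net import Fido, attrs as a

result = Fido.search(a.Time('2018/06/12 09:34:28', '2018/06/12 09:35:28'),
                     a.Instrument('hmi'))
print(result)
files = Fido.fetch(result, path='./{file}')
</code></pre>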
<python>
2022-12-16 02:01:35
1
2,723
Coolcrab
74,819,062
652,779
Databricks: run python coverage inside databricks jobs
<p>I am using Azure Databricks, running a python script instead of using Notebooks.</p> <p>Given the way Databricks is implemented, it's hard to test the code in my local machine, so I wanted to create another job which could run coverage for it.</p> <p>The <a href="https://coverage.readthedocs.io/en/6.5.0/" rel="nofollow noreferrer">coverage package</a> says you should just run <code>coverage my_script.py arg1 arg2</code> but the Databricks Job creation dialog does not allow my to run shell commands. Only python scripts (or wheels, or other &quot;shelled&quot; methods) are available.</p> <p>Most of the recommendations about running coverage say the solution is to use the CLI, but when it's not available, which is the best way to go?</p>
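<p>One hedged option when the CLI is unavailable: coverage also exposes a Python API, so a small wrapper script can be submitted as the job instead. Here <code>run_entry_point()</code> and <code>'my_package'</code> are placeholders for however the real script is invoked and what should be measured:</p> <pre class="lang-py prettyprint-override"><code>import coverage

cov = coverage.Coverage(source=['my_package'])  # 'my_package' is a placeholder
cov.start()
run_entry_point()   # hypothetical: call/import the code under measurement here
cov.stop()
cov.save()
cov.report()
</code></pre>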
<python><azure-databricks><environment><coverage.py>
2022-12-16 00:39:14
1
1,251
Lomefin
74,819,034
11,897,477
Why is IndexError thrown when I try to index a numpy ndarray with another array?
<p>I have a 5 dimensional ndarray called <code>self.q_table</code>. I've got a regular array, the length of which is 4. When I try to find out the maximum value in that row like this...</p> <pre><code>max = np.max(self.q_table[regular_array]) </code></pre> <p>...I get an IndexError, even though the elements of <code>regular_array</code> are smaller than the dimensions of <code>q_table</code>.</p> <p>I tried to alter the dimensions of both arrays, but it didn't get better.</p> <p>Edit:</p> <p>The error is <code>IndexError: index 11 is out of bounds for axis 0 with size 10</code></p> <p>Numpy is imported as np, and the last line in this sample throws the error.</p> <pre><code>class AgentBase: def __init__(self, observation_space): OBSERVATION_SPACE_RESOLUTION = [10, 15, 15, 15] self.q_table = np.zeros([*OBSERVATION_SPACE_RESOLUTION, 4]) max_val = np.max(self.q_table[self.quantize_state(observation_space, [-150, 100, 3, 3])]) print(max_val) @staticmethod def quantize_state(observation_space, state): state_quantized = np.zeros(len(state)) lin_spaces = [] for i in range(len(observation_space)): lin_spaces.append(np.linspace(observation_space[i][0], observation_space[i][1], OBSERVATION_SPACE_RESOLUTION[i] - 1, dtype=int)) for i in range(len(lin_spaces)): state_quantized[i] = np.digitize(state[i], lin_spaces[i]) return state_quantized.astype(int) </code></pre> <p><code>self.observation_space</code> is a parameter with the following values: <a href="https://i.sstatic.net/WNzhP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WNzhP.png" alt="enter image description here" /></a></p>
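<p>A sketch of the likely mechanism: indexing with a <em>list</em> is fancy indexing, so every element of the regular array is treated as an index along axis 0 (size 10, hence 11 being out of bounds); a <em>tuple</em> selects one index per axis instead:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

q_table = np.zeros((10, 15, 15, 15, 4))
idx = [3, 11, 5, 2]

# q_table[idx] would take rows 3, 11, 5, 2 along axis 0 -&gt; IndexError for 11
cell = q_table[tuple(idx)]   # one index per axis, leaves shape (4,)
print(np.max(cell))          # max over the remaining last axis
</code></pre>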
<python><arrays><numpy><multidimensional-array><numpy-ndarray>
2022-12-16 00:33:00
1
562
OlivΓ©r Raisz
74,819,005
3,831,854
Using pytest for Hardware in Loop Testing
<p>I would like to use pytest for hardware-in-the-loop testing: hardware functionality will be asserted with Python and pytest.</p> <p>My problem is with the structural implementation of the tests: I would like every test to be runnable separately (so they can be tested individually), but also usable in a pipeline manner. Example:</p> <pre><code>class TestCamera:
    def test_camera_start():
        pass
    def test_photo_captured():
        pass
</code></pre> <p>and</p> <pre><code>class TestAudio:
    # And so on
</code></pre> <p>Everything works fine. But now imagine I want to implement a system test with a runtime. I want to reuse the old test functions that I have written, in a pipeline manner. For example:</p> <pre><code># system_test.py
while phone_is_booted:
    user_input = get_user_input_over_simulated_screen()
    # Now pass the user input to a list of input testers that all run sequentially and async in tasks or threads.
</code></pre> <p>Are there any frameworks for something like this?</p> <p>Thanks.</p>
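<p>One hedged structural sketch: keep each assertion in a plain helper function so that both the standalone pytest tests and a pipeline runner can call it; <code>camera</code> here is a hypothetical fixture/object:</p> <pre class="lang-py prettyprint-override"><code>def check_photo_captured(camera):
    assert camera.capture() is not None  # plain helper, usable anywhere

class TestCamera:
    def test_photo_captured(self, camera):   # camera is a hypothetical fixture
        check_photo_captured(camera)

def run_pipeline(camera):
    # pipeline-style reuse of the same check, outside pytest
    check_photo_captured(camera)
</code></pre>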
<python><testing><automated-tests><pytest><python-asyncio>
2022-12-16 00:26:28
0
1,058
AKJ
74,818,933
10,720,618
PowerShell setting environment variable to subexpression discards newline
<p>I have a python script:</p> <pre class="lang-py prettyprint-override"><code># temp.py print(&quot;foo\nbar&quot;) </code></pre> <p>I can run this in powershell:</p> <pre><code>&gt; python .\temp.py foo bar </code></pre> <p>I want to set the environment variable <code>FOO</code> to the output of this python script. However, doing so changes the newline to a space:</p> <pre><code>&gt; $env:FOO=$(python .\temp.py) &gt; $env:FOO foo bar </code></pre> <h2>What I've Tried</h2> <p>Verify that environment variables can contain newlines:</p> <pre><code>&gt; echo &quot;foo`nbar&quot; foo bar &gt; $env:FOO2=$(echo &quot;foo`nbar&quot;) &gt; $env:FOO2 foo bar </code></pre> <p>... so it seems like the issue has something to do with execution of a python script?</p> <p>Verify that the subexpression operator <code>$()</code> is not modifying the python script output:</p> <pre><code>&gt; $(python .\temp.py) foo bar </code></pre> <p><code>echo</code>ing the output of the python script seems to exhibit the same behavior:</p> <pre><code>&gt; echo &quot;$(python .\temp.py)&quot; foo bar </code></pre> <p>... but not if I exclude the quotes:</p> <pre><code>&gt; echo $(python .\temp.py) foo bar </code></pre> <h2>Workaround</h2> <p>Base64 encoding the string bypasses bypasses this issue. However, I still wonder what could be causing this?</p> <p>My powershell version is 5.1.22621.963</p>
<python><windows><powershell><terminal>
2022-12-16 00:12:42
1
1,263
kym
74,818,864
856,804
Does mypy require __init__ to have -> None annotation
<p>Seeing kind of contradictory results:</p> <pre class="lang-py prettyprint-override"><code>class A: def __init__(self, a: int): pass </code></pre> <p>The snippet above passes a <code>mypy</code> test, but the one below doesn't.</p> <pre class="lang-py prettyprint-override"><code>class A: def __init__(self): pass </code></pre> <p>Any idea why?</p>
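<p>For reference, mypy by default only type-checks functions that carry at least one annotation; with <code>a: int</code> the first <code>__init__</code> counts as typed, while the bare one is only flagged under options like <code>--disallow-untyped-defs</code>/<code>--strict</code>. The usual fix, as a one-line sketch:</p> <pre class="lang-py prettyprint-override"><code>class A:
    def __init__(self) -&gt; None:  # the return annotation makes the function typed
        pass
</code></pre>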
<python><mypy><python-typing>
2022-12-16 00:01:29
1
9,110
zyxue
74,818,863
4,062,526
Deeper Explanation to `Cannot initialize multiple indices of a constraint with a single expression` in Python's Pyomo
<p>Running into a <code>Cannot initialize multiple indices of a constraint with a single expression</code> error when trying to create the below constraint in Pyomo, I have read other tangential answers but unsure of how to translate the learnings here. I believe the issue is that some of the variables are indexed by two sets rather than just one. However, I am summing over those indexes in Pyomo and therefore thought that my constraint could be indexed by a singular variable.</p> <pre class="lang-py prettyprint-override"><code>def armington_composite_demand_constraint(self, k): return self.MODEL.armington_composite_demand[k] == ( sum( self.MODEL.armington_composite_intermediate_demand[(k, a)] for a in self.__activities() ) + sum( self.MODEL.armington_composite_household_demand[(k, h)] for h in self.__household_types() ) + sum( self.MODEL.armington_composite_government_demand[(k, g)] for g in self.__government_types() ) + sum( self.MODEL.armington_composite_investment_demand[(k, i)] for i in self.__investment_types() ) ) </code></pre> <p>Trying to use this function to create constraint:</p> <pre class="lang-py prettyprint-override"><code># armington_composite_price_constraint setattr( self.MODEL, &quot;armington_composite_price_constraint&quot;, po.Constraint( self.__commodities(), rule=self.armington_composite_price_constraint ), ) </code></pre> <p>Unsure of how to fix this, I'm not sure if I am understanding the error correctly. Could someone please provide an explanation of this error and why Pyomo might struggle within summing over another index in the constraint function?</p> <p>My guess is to maybe split out the sum functions... Would appreciate some help</p>
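<p>For comparison, a self-contained sketch of the shape Pyomo expects, where the rule is called once per index and must return a distinct expression each time; this error typically appears when Pyomo ends up with a single expression for an indexed constraint instead of a per-index rule:</p> <pre class="lang-py prettyprint-override"><code>import pyomo.environ as po

model = po.ConcreteModel()
model.K = po.Set(initialize=['k1', 'k2'])
model.x = po.Var(model.K)

def demand_rule(m, k):          # called once for every k in model.K
    return m.x[k] &gt;= 0

model.demand = po.Constraint(model.K, rule=demand_rule)
</code></pre>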
<python><optimization><pyomo>
2022-12-16 00:00:41
2
363
Adam
74,818,858
7,317,408
Python: how to randomise drop_duplicates using datetime?
<p>Excuse my lack of understanding - I am very new to Python programming.</p> <p>Imagine I have the following code:</p> <pre><code>df_filtered.drop_duplicates(subset=['date'], keep='first', inplace=True) </code></pre> <p>How can I randomise the dropping of the duplicates, instead of choosing always the first? Something like:</p> <pre><code>df_filtered.drop_duplicates(subset=['date'], keep='random', inplace=True) </code></pre>
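<p>A short sketch: shuffling the frame first makes &quot;keep the first&quot; equivalent to keeping a random row per date:</p> <pre><code>df_filtered = (df_filtered.sample(frac=1)           # shuffle the rows
                          .drop_duplicates(subset=['date']))
</code></pre>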
<python><pandas><numpy>
2022-12-15 23:59:49
2
3,436
a7dc
74,818,834
7,084,115
How to install azure-cli on python docker image
<p>I am trying to install <code>azure-cli</code> in a <code>python</code> docker container but I get the following error:</p> <blockquote> <p>[5/5] RUN pip3 install azure-cli:<br /> #9 1.754 Collecting azure-cli<br /> #9 1.956 Downloading azure_cli-2.43.0-py3-none-any.whl (4.3 MB)<br /> #9 7.885 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.3/4.3 MB 730.7 kB/s eta 0:00:00<br /> #9 8.070 Collecting six&gt;=1.10.0<br /> #9 8.190 Downloading six-1.16.0-py2.py3-none-any.whl (11 kB) #9 8.227 Collecting azure-loganalytics~=0.1.0 . . . #9 43.08 ERROR: Exception: . . . #9 43.08 raise IncompleteRead(self._fp_bytes_read, self.length_remaining) #9 43.08 File &quot;/usr/local/lib/python3.7/contextlib.py&quot;, line 130, in <strong>exit</strong> #9 43.08 self.gen.throw(type, value, traceback) #9 43.08 File &quot;/usr/local/lib/python3.7/site-packages/pip/_vendor/urllib3/response.py&quot;, line 449, in _error_catcher #9 43.08 raise SSLError(e) #9 43.08 pip._vendor.urllib3.exceptions.SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2570) #9 43.28 WARNING: You are using pip version 22.0.4; however, version 22.3.1 is available. #9 43.28 You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.</p> </blockquote> <hr /> <p>executor failed running [/bin/sh -c pip3 install azure-cli]: exit code: 2</p> <p>And here's my <code>Dockerfile</code></p> <pre><code>FROM python:3.7-alpine RUN apk add --update git bash curl unzip make coreutils openssh shadow ARG TERRAFORM_VERSION=&quot;1.3.6&quot; ARG svgUserId= ENV AZURE_DEFAULT_REGION=germanywestcentral RUN if ! id $svgUserId; then \ adduser svg -D &amp;&amp;\ usermod -u ${svgUserId} svg; \ fi RUN curl https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip &gt; terraform_${TERRAFORM_VERSION}_linux_amd64.zip &amp;&amp; \ unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /bin &amp;&amp; \ rm -f terraform_${TERRAFORM_VERSION}_linux_amd64.zip RUN pip3 install azure-cli ENTRYPOINT [] </code></pre> <p>Can someone help me find the issue?</p>
<python><azure><docker><azure-cli>
2022-12-15 23:56:00
2
4,101
Jananath Banuka
74,818,831
10,470,463
pdfplumber extract table data works when the table has borders, doesn't work when the table has no borders
<p>Using reportlab I made two one-page PDFs, each with one table. The data in the table is this:</p> <pre><code>data1 = [['00', '', '02', '', '04'],
         ['', '11', '', '13', ''],
         ['20', '', '22', '23', '24'],
         ['30', '31', '32', '', '34']]
</code></pre> <p>The point is to get the rows, including the empty cells. If the table has borders, no problem. But if the table has no borders, I don't get any results for the table from the code below. Any ideas why?</p> <p>Like I said, the PDFs are identical except that pdf1 does not have borders and pdf2 has borders.</p> <pre><code>with pdfplumber.open(path2pdf + savename1) as pdf1:
    # Get the first page of the object
    page = pdf1.pages[0]
    # Get the text data of the page
    text = page.extract_text()
    # Get all the tabular data of this page
    tables = page.extract_tables()
    # Traversing table
    for t_index in range(len(tables)):
        table = tables[t_index]
        # Traversing each row of data
        for data in table:
            print(data)
</code></pre> <p>Change pdf1 to pdf2 and I get the required result.</p> <p>EDIT: I tried the following, but get an error; I'm not sure how it should be formatted:</p> <pre><code>&gt;&gt;&gt; pdf_table = page.extract_tables(vertical_strategy='text', horizontal_strategy='text')
Traceback (most recent call last):
  File &quot;/usr/lib/python3.8/idlelib/run.py&quot;, line 559, in runcode
    exec(code, self.locals)
  File &quot;&lt;pyshell#70&gt;&quot;, line 1, in &lt;module&gt;
TypeError: extract_tables() got an unexpected keyword argument 'vertical_strategy'
</code></pre>
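<p>On the EDIT: <code>extract_tables</code> takes the strategies inside a <code>table_settings</code> dict rather than as keyword arguments; a sketch:</p> <pre class="lang-py prettyprint-override"><code>tables = page.extract_tables(table_settings={
    'vertical_strategy': 'text',    # infer column edges from the words
    'horizontal_strategy': 'text',  # infer row edges from the words
})
</code></pre>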
<python><pdfplumber>
2022-12-15 23:55:02
1
511
Pedroski
74,818,677
8,047,904
Problem installing pyproject.toml dependencies with pip
<p>I have an old project created with poetry. The <strong>pyproject.toml</strong> create by poetry is the following:</p> <pre><code>[tool.poetry] name = &quot;Dota2Learning&quot; version = &quot;0.3.0&quot; description = &quot;Statistics and Machine Learning for your Dota2 Games.&quot; license = &quot;MIT&quot; readme = &quot;README.md&quot; homepage = &quot;Coming soon...&quot; repository = &quot;https://github.com/drigols/dota2learning/&quot; documentation = &quot;Coming soon...&quot; include = [&quot;CHANGELOG.md&quot;] authors = [ &quot;drigols &lt;drigols.creative@gmail.com&gt;&quot;, ] maintainers = [ &quot;drigols &lt;drigols.creative@gmail.com&gt;&quot;, ] keywords = [ &quot;dota2&quot;, &quot;statistics&quot;, &quot;machine Learning&quot;, &quot;deep learning&quot;, ] [tool.poetry.scripts] dota2learning = &quot;dota2learning.cli.main:app&quot; [tool.poetry.dependencies] python = &quot;^3.10&quot; requests = &quot;^2.27.1&quot; typer = {extras = [&quot;all&quot;], version = &quot;^0.4.1&quot;} install = &quot;^1.3.5&quot; SQLAlchemy = &quot;^1.4.39&quot; PyMySQL = &quot;^1.0.2&quot; cryptography = &quot;^37.0.4&quot; pydantic = &quot;^1.9.1&quot; rich = &quot;^12.5.1&quot; fastapi = &quot;^0.79.0&quot; uvicorn = &quot;^0.18.2&quot; [tool.poetry.dev-dependencies] black = {extras = [&quot;jupyter&quot;], version = &quot;^22.3.0&quot;} pre-commit = &quot;^2.19.0&quot; flake8 = &quot;^4.0.1&quot; reorder-python-imports = &quot;^3.1.0&quot; pyupgrade = &quot;^2.34.0&quot; coverage = &quot;^6.4.1&quot; [tool.black] line-length = 79 include = '\.pyi?$' # All Python files exclude = ''' /( \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist )/ ''' [build-system] requires = [&quot;poetry-core&gt;=1.0.0&quot;] build-backend = &quot;poetry.core.masonry.api&quot; [tool.poetry.urls] &quot;Bug Tracker&quot; = &quot;https://github.com/drigols/dota2learning/issues&quot; </code></pre> <p>If a run <strong>&quot;pip install .&quot;</strong> it's worked for me, however, now I want to follow the new approach without poetry to manage the project and dependencies. Then I created a new pyproject.toml (manually):</p> <pre><code>[project] name = &quot;Dota2Learning&quot; version = &quot;2.0.0&quot; description = &quot;Statistics and Machine Learning for your Dota2 Games.&quot; license = &quot;MIT&quot; readme = &quot;README.md&quot; homepage = &quot;&quot; requires-python = &quot;&gt;=3.10&quot; repository = &quot;https://github.com/drigols/dota2learning/&quot; documentation = &quot;&quot; include = [&quot;CHANGELOG.md&quot;] authors = [ &quot;drigols &lt;drigols.creative@gmail.com&gt;&quot;, ] maintainers = [ &quot;drigols &lt;drigols.creative@gmail.com&gt;&quot;, ] keywords = [ &quot;dota2&quot;, &quot;statistics&quot;, &quot;machine Learning&quot;, &quot;deep learning&quot;, ] dependencies = [ &quot;requests&gt;=2.27.1&quot;, &quot;typer&gt;=0.4.1&quot;, &quot;SQLAlchemy&gt;=1.4.39&quot;, &quot;PyMySQL&gt;=1.0.2&quot;, &quot;cryptography&gt;=37.0.4&quot;, &quot;pydantic&gt;=1.9.1&quot;, &quot;rich&gt;=12.5.1&quot;, &quot;fastapi&gt;=0.79.0&quot;, &quot;uvicorn&gt;=0.18.2&quot;, ] [project.optional-dependencies] # Dev dependencies. dev = [ &quot;black&gt;=22.3.0&quot;, &quot;pre-commit&gt;=2.19.0&quot;, &quot;flake8&gt;=4.0.1&quot;, &quot;reorder-python-imports&gt;=3.1.0&quot;, &quot;pyupgrade&gt;=2.34.0&quot;, ] # Testing dependencies. test = [ &quot;coverage&gt;=6.4.1&quot;, ] # Docs dependencies. 
doc = [] [project.scripts] dota2learning = &quot;dota2learning.cli.main:app&quot; [tool.black] line-length = 79 include = '\.pyi?$' # All Python files exclude = ''' /( \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist )/ ''' </code></pre> <p>The problem now is that the pip command <strong>&quot;pip install .&quot;</strong> don't work:</p> <pre><code> Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; [89 lines of output] configuration error: `project.license` must be valid exactly by one definition (2 matches found): - keys: 'file': {type: string} required: ['file'] - keys: 'text': {type: string} required: ['text'] DESCRIPTION: `Project license &lt;https://www.python.org/dev/peps/pep-0621/#license&gt;`_. GIVEN VALUE: &quot;MIT&quot; OFFENDING RULE: 'oneOf' DEFINITION: { &quot;oneOf&quot;: [ { &quot;properties&quot;: { &quot;file&quot;: { &quot;type&quot;: &quot;string&quot;, &quot;$$description&quot;: [ &quot;Relative path to the file (UTF-8) which contains the license for the&quot;, &quot;project.&quot; ] } }, &quot;required&quot;: [ &quot;file&quot; ] }, { &quot;properties&quot;: { &quot;text&quot;: { &quot;type&quot;: &quot;string&quot;, &quot;$$description&quot;: [ &quot;The license of the project whose meaning is that of the&quot;, &quot;`License field from the core metadata&quot;, &quot;&lt;https://packaging.python.org/specifications/core-metadata/#license&gt;`_.&quot; ] } }, &quot;required&quot;: [ &quot;text&quot; ] } ] } Traceback (most recent call last): File &quot;/home/drigols/Workspace/dota2learning/environment/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py&quot;, line 351, in &lt;module&gt; main() File &quot;/home/drigols/Workspace/dota2learning/environment/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py&quot;, line 333, in main json_out['return_val'] = hook(**hook_input['kwargs']) File &quot;/home/drigols/Workspace/dota2learning/environment/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) File &quot;/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 338, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) File &quot;/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 320, in _get_build_requires self.run_setup() File &quot;/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 484, in run_setup super(_BuildMetaLegacyBackend, File &quot;/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 335, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/__init__.py&quot;, line 87, in setup return distutils.core.setup(**attrs) File &quot;/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/_distutils/core.py&quot;, line 159, in setup dist.parse_config_files() File &quot;/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/dist.py&quot;, line 867, in parse_config_files pyprojecttoml.apply_configuration(self, filename, 
ignore_option_errors) File &quot;/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/config/pyprojecttoml.py&quot;, line 62, in apply_configuration config = read_configuration(filepath, True, ignore_option_errors, dist) File &quot;/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/config/pyprojecttoml.py&quot;, line 126, in read_configuration validate(subset, filepath) File &quot;/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/config/pyprojecttoml.py&quot;, line 51, in validate raise ValueError(f&quot;{error}\n{summary}&quot;) from None ValueError: invalid pyproject.toml config: `project.license`. configuration error: `project.license` must be valid exactly by one definition (2 matches found): - keys: 'file': {type: string} required: ['file'] - keys: 'text': {type: string} required: ['text'] [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>It's like pip doesn't know how to install the dependencies from pyproject.toml unlike poetry's approach and I don't understand why.</p> <p><strong>THE PROBLEM WAS SOLVED!</strong> A edited the pyproject.toml to:</p> <pre><code>[build-system] # Minimum requirements for the build system to execute. requires = [&quot;setuptools&quot;, &quot;wheel&quot;] # PEP 508 specifications. build-backend = &quot;setuptools.build_meta&quot; # Ignore flat-layout. [tool.setuptools] py-modules = [] [project] name = &quot;Dota2Learning&quot; version = &quot;2.0.0&quot; description = &quot;Statistics and Machine Learning for your Dota2 Games.&quot; license = {file = &quot;LICENSE.md&quot;} readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.10.0&quot; authors = [ { name = &quot;Rodrigo Leite&quot;, email = &quot;drigols.creative@gmail.com&quot; }, ] maintainers = [ { name = &quot;Rodrigo Leite&quot;, email = &quot;drigols.creative@gmail.com&quot; }, ] keywords = [ &quot;dota2&quot;, &quot;statistics&quot;, &quot;machine Learning&quot;, &quot;deep learning&quot;, ] dependencies = [ &quot;requests&gt;=2.27.1&quot;, &quot;typer&gt;=0.4.1&quot;, &quot;SQLAlchemy&gt;=1.4.39&quot;, &quot;PyMySQL&gt;=1.0.2&quot;, &quot;cryptography&gt;=37.0.4&quot;, &quot;pydantic&gt;=1.9.1&quot;, &quot;rich&gt;=12.5.1&quot;, &quot;fastapi&gt;=0.79.0&quot;, &quot;uvicorn&gt;=0.18.2&quot;, ] [project.optional-dependencies] # Dev dependencies. dev = [ &quot;black&gt;=22.3.0&quot;, &quot;pre-commit&gt;=2.19.0&quot;, &quot;flake8&gt;=4.0.1&quot;, &quot;reorder-python-imports&gt;=3.1.0&quot;, &quot;pyupgrade&gt;=2.34.0&quot;, ] # Test dependencies. test = [ &quot;coverage&gt;=6.4.1&quot;, ] # Doc dependencies. doc = [] [project.scripts] # dota2learning = &quot;dota2learning.cli.main:app&quot; [tool.black] line-length = 79 include = '\.pyi?$' # All Python files exclude = ''' /( \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist )/ ''' </code></pre>
<python><pip><python-packaging><python-poetry><pyproject.toml>
2022-12-15 23:29:24
1
331
drigols
74,818,647
3,431,407
Python Google Drive API not creating csv with list of all files within sub-folders
<p>I am trying to get a list of files with names, dates, etc into a <code>csv</code> file from files on my Google Drive folder which has around 15k sub-folders (one level below) within the main folder that have about 1-10 files each that amount to around 65k files in total.</p> <p>I used the following code in Python to create my <code>csv</code> file which generates the information for all the sub-folders but only about 18k of the individual files within those sub-folders (the most recently uploaded files in those sub-folders).</p> <p>I am not quite sure why my code is not able to get the list for all the files in those sub-folders. Is there a limit I am hitting that stops me from getting the information for all the files?</p> <p>Note: The folder I am storing the files is a shared folder but I don't think that should be affecting anything.</p> <pre><code>import httplib2 import os from apiclient import discovery from oauth2client import client from oauth2client import tools from oauth2client.file import Storage try: import argparse flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args() except ImportError: flags = None import pandas as pd # If modifying these scopes, delete your previously saved credentials # at ~/.credentials/drive-python-quickstart.json SCOPES = 'https://www.googleapis.com/auth/drive.metadata.readonly' CLIENT_SECRET_FILE = 'credentials.json' APPLICATION_NAME = 'Drive API Python Quickstart' folder_id = '########' # Set to id of the parent folder you want to list (should be the content folder) folder_list = [] all_folders = [] file_list = [] def get_credentials(): &quot;&quot;&quot;Gets valid user credentials from storage. If nothing has been stored, or if the stored credentials are invalid, the OAuth2 flow is completed to obtain the new credentials. Returns: Credentials, the obtained credential. 
&quot;&quot;&quot; home_dir = os.path.expanduser('~') credential_dir = os.path.join(home_dir, '.credentials') if not os.path.exists(credential_dir): os.makedirs(credential_dir) credential_path = os.path.join(credential_dir,'drive-python-quickstart.json') store = Storage(credential_path) credentials = store.get() if not credentials or credentials.invalid: flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES) flow.user_agent = APPLICATION_NAME if flags: credentials = tools.run_flow(flow, store, flags) else: # Needed only for compatibility with Python 2.6 credentials = tools.run(flow, store) print('Storing credentials to ' + credential_path) return credentials def get_root_folder(): # Gets folder list from original root folder credentials = get_credentials() http = credentials.authorize(httplib2.Http()) service = discovery.build('drive', 'v3', http=http) results = service.files().list(q=&quot;mimeType = 'application/vnd.google-apps.folder' and '&quot;+folder_id+&quot;' in parents&quot;, pageSize=1000, fields=&quot;nextPageToken, files(id, mimeType)&quot;, supportsAllDrives=True, includeItemsFromAllDrives=True).execute() folders = results.get('files', []) if not folders: print('No folders found.') else: for folder in folders: id = folder.get('id') folder_list.append(id) def get_all_folders(folder_list): # Creates list of all sub folder under root, keeps going until no folders underneath for folder in folder_list: additional_folders = [] credentials = get_credentials() http = credentials.authorize(httplib2.Http()) service = discovery.build('drive', 'v3', http=http) results = service.files().list( q=&quot;mimeType = 'application/vnd.google-apps.folder' and '&quot; +folder+ &quot;' in parents&quot;, pageSize=1000, fields=&quot;nextPageToken, files(id, mimeType)&quot;, supportsAllDrives=True, includeItemsFromAllDrives=True).execute() items = results.get('files', []) for item in items: id = item.get('id') additional_folders.append(id) if not additional_folders: pass else: all_folders.extend(additional_folders) folder_list = additional_folders get_all_folders(folder_list) def merge(): # Merges sub folder list with full list global full_list full_list = all_folders + folder_list full_list.append(folder_id) def get_file_list(): # Runs over each folder generating file list, for files over 1000 uses nextpagetoken to run additional requests, picks up metadata included in the request for folder in full_list: credentials = get_credentials() http = credentials.authorize(httplib2.Http()) service = discovery.build('drive', 'v3', http=http) page_token = None while True: results = service.files().list( q=&quot;'&quot; + folder + &quot;' in parents&quot;, pageSize=1000, fields=&quot;nextPageToken, files(name, md5Checksum, mimeType, size, createdTime, modifiedTime, id, parents, trashed)&quot;, pageToken=page_token, supportsAllDrives=True, includeItemsFromAllDrives=True).execute() items = results.get('files', []) for item in items: name = item['name'] checksum = item.get('md5Checksum') size = item.get('size', '-') id = item.get('id') mimeType = item.get('mimeType', '-') createdTime = item.get('createdTime', 'No date') modifiedTime = item.get('modifiedTime', 'No date') parents = item.get('parents') trashed = item.get('trashed') file_list.append([name, checksum, mimeType, size, createdTime, modifiedTime, id, parents, trashed]) page_token = results.get('nextPageToken', None) if page_token is None: break files = pd.DataFrame(file_list,columns=['file_name','checksum_md5','mimeType','size', 'date_created', 
'date_last_modified','google_id', 'google_parent_id', 'trashed']) files.drop(files[files['trashed'] == True].index, inplace=True) #removes files which have True listed in trashed, these are files which had been moved to the recycle bin foldernumbers = files['mimeType'].str.contains('application/vnd.google-apps.folder').sum() filenumbers = (~files['mimeType'].str.contains('application/vnd.google-apps.folder')).sum() print('Number of folders is: ', foldernumbers) print('Number of files is: ', filenumbers) files.to_csv('D:/GoogleAPIMetadata.csv', index=False) if __name__ == '__main__': print('Collecting folder id list') get_root_folder() get_all_folders(folder_list) merge() print('Generating file metadata list') get_file_list() </code></pre>
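<p>One detail worth checking: <code>get_root_folder()</code> and <code>get_all_folders()</code> never follow <code>nextPageToken</code>, so any folder whose listing exceeds one page (1000 entries) is silently truncated, which would drop whole sub-folders and all of their files. A minimal sketch of a paginated folder listing, reusing the same <code>service</code> object as above:</p> <pre><code>def list_subfolders(service, parent_id):
    # Keep requesting pages until nextPageToken is exhausted, so parents
    # with more than 1000 children are fully listed.
    folder_ids = []
    page_token = None
    while True:
        results = service.files().list(
            q=&quot;mimeType = 'application/vnd.google-apps.folder' and '&quot; + parent_id + &quot;' in parents&quot;,
            pageSize=1000,
            fields=&quot;nextPageToken, files(id)&quot;,
            pageToken=page_token,
            supportsAllDrives=True,
            includeItemsFromAllDrives=True).execute()
        folder_ids.extend(f.get('id') for f in results.get('files', []))
        page_token = results.get('nextPageToken')
        if page_token is None:
            break
    return folder_ids
</code></pre>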
<python><google-drive-api>
2022-12-15 23:24:47
0
661
Funkeh-Monkeh
74,818,563
3,507,584
Django and pgAdmin not aligned
<p>I was trying to change the order of the table columns in a PostgreSQL table. As I am just starting out, it seemed easier to run <code>manage.py flush</code>, create a new DB with a new name, and apply the migrations.</p> <p>I can see in pgAdmin that the new DB received all the Django model migrations except my app's model/table. I deleted everything in the migrations folder (except <code>0001_initial.py</code>) and still, when I run <code>python manage.py makemigrations</code>, I get:</p> <pre><code>No changes detected </code></pre> <p>But the model class table is not in the DB. When I try to migrate the app model it says:</p> <pre><code>Operations to perform: Apply all migrations: admin, auth, contenttypes, sessions Running migrations: No migrations to apply. </code></pre> <p>Is there any way to delete all the tables, run <code>makemigrations</code> and <code>migrate</code>, and have Postgres get all the tables? Any idea why PostgreSQL/pgAdmin cannot get my Django app model/table?</p>
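<p>A minimal reset sequence that often resolves this (a sketch; <code>myapp</code> is a hypothetical app name). Note that <code>migrate</code> only touches apps listed under &quot;Apply all migrations:&quot;, so if the app is missing there, check that it is in <code>INSTALLED_APPS</code> and that <code>myapp/migrations/__init__.py</code> exists:</p> <pre><code># hypothetical app name &quot;myapp&quot;
python manage.py migrate myapp zero       # unapply while the migration files still exist
# delete everything in myapp/migrations/ except __init__.py
python manage.py makemigrations myapp     # naming the app explicitly forces detection
python manage.py migrate
</code></pre>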
<python><django><postgresql><django-migrations>
2022-12-15 23:12:28
2
3,689
User981636
74,818,410
7,914,054
How to inverse transform a 3rd order difference?
<p>I'm trying to create a function that will do the inverse difference for 3rd order difference of a forecasted result. Currently, I have a function that will provide the inverse transform of the 2nd difference. Is there a way for me to edit this function to do the 3rd order difference?</p> <pre><code>def invert_transformation(df_train, df_forecast, second_diff=False): &quot;&quot;&quot;Revert back the differencing to get the forecast to original scale.&quot;&quot;&quot; df_fc = df_forecast.copy() columns = df_train.columns for col in columns: # Roll back 2nd Diff if second_diff: df_fc[str(col)+'_1d'] = (df_train[col].iloc[-1]-df_train[col].iloc[-2]) + df_fc[str(col)+'_2d'].cumsum() # Roll back 1st Diff df_fc[str(col)+'_forecast'] = df_train[col].iloc[-1] + df_fc[str(col)+'_1d'].cumsum() return df_fc </code></pre>
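<p>A sketch of one way to extend the same pattern by one more level (assumption: the forecast frame holds the third differences in columns named <code>&lt;col&gt;_3d</code>, by analogy with <code>_2d</code>):</p> <pre><code>def invert_transformation(df_train, df_forecast, second_diff=False, third_diff=False):
    &quot;&quot;&quot;Revert back the differencing to get the forecast to original scale.&quot;&quot;&quot;
    df_fc = df_forecast.copy()
    for col in df_train.columns:
        if third_diff:
            # Roll back 3rd Diff: seed with the last 2nd difference of the training data
            last_2d = ((df_train[col].iloc[-1] - df_train[col].iloc[-2])
                       - (df_train[col].iloc[-2] - df_train[col].iloc[-3]))
            df_fc[str(col)+'_2d'] = last_2d + df_fc[str(col)+'_3d'].cumsum()
            second_diff = True
        if second_diff:
            # Roll back 2nd Diff
            df_fc[str(col)+'_1d'] = (df_train[col].iloc[-1]-df_train[col].iloc[-2]) + df_fc[str(col)+'_2d'].cumsum()
        # Roll back 1st Diff
        df_fc[str(col)+'_forecast'] = df_train[col].iloc[-1] + df_fc[str(col)+'_1d'].cumsum()
    return df_fc
</code></pre>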
<python><time-series>
2022-12-15 22:50:00
0
789
QMan5
74,818,335
3,723,031
Unstack dataframe column
<p>I have this dataframe:</p> <pre><code>df = pd.DataFrame() df[&quot;A&quot;] = [2,2,4,4,4,8,9] df[&quot;B&quot;] = [2,2,4,4,4,7,9] df[&quot;C&quot;] = list(&quot;axcdxef&quot;) print(df.to_string(index=False)) A B C 2 2 a 2 2 x 4 4 c 4 4 d 4 4 x 8 7 e 9 9 f </code></pre> <p>I want to convert to this, unstacking column C for rows where columns A and B are duplicates:</p> <pre><code>df2 = pd.DataFrame() df2[&quot;A&quot;] = [2,4,8,9] df2[&quot;B&quot;] = [2,4,7,9] df2[&quot;C&quot;] = [&quot;a,x&quot;, &quot;c,d,x&quot;, &quot;e&quot;, &quot;f&quot;] print() print(df2.to_string(index=False)) A B C 2 2 a,x 4 4 c,d,x 8 7 e 9 9 f </code></pre> <p>I've looked at pivot() and unstack(), but haven't found the right recipe yet. Any ideas?</p>
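<p>One possible recipe skips <code>pivot()</code>/<code>unstack()</code> entirely and joins the C values per (A, B) pair with <code>groupby</code>:</p> <pre><code>df2 = (df.groupby([&quot;A&quot;, &quot;B&quot;], sort=False)[&quot;C&quot;]
         .agg(&quot;,&quot;.join)
         .reset_index())
# rows: (2, 2, &quot;a,x&quot;), (4, 4, &quot;c,d,x&quot;), (8, 7, &quot;e&quot;), (9, 9, &quot;f&quot;)
</code></pre>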
<python><pandas>
2022-12-15 22:40:45
1
1,322
Steve
74,818,327
5,599,108
Match files starting with a number and underscore using glob
<p>Initially I tried</p> <pre><code>folder_path.glob('[0-9]_*.json') </code></pre> <p>where <code>folder_path</code> is a pathlib.Path object. But it only works for files that start with a single digit.</p> <p>After failing to find a proper match pattern, I used an additional condition to verify that what precedes the underscore is a numeric string:</p> <pre><code>[ file_path for file_path in folder_path.glob('*_*.json') if file_path.name.split('_')[0].isnumeric() ] </code></pre> <p>But that seems like a workaround that only applies to this case. Is there a better way to match numbers of any length using glob?</p>
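<p>glob has no &quot;one or more&quot; quantifier, so this is usually the point where a regular expression takes over. A sketch, with <code>folder_path</code> as above:</p> <pre><code>import re

pattern = re.compile(r'^\d+_.*\.json$')   # one or more digits, underscore, .json suffix
matches = [p for p in folder_path.iterdir() if pattern.match(p.name)]
</code></pre>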
<python><pathlib>
2022-12-15 22:39:23
2
415
Cristian
74,818,311
9,394,364
Parsing text and JSON from a log file and keeping them together
<p>I have a .log file containing both text strings and json. For example:</p> <pre><code>A whole bunch of irrelevant text 2022-12-15 12:45:06, run: 1, user: james json: [{&quot;value&quot;: &quot;30&quot;, &quot;error&quot;: &quot;8&quot;}] 2022-12-15 12:47:36, run: 2, user: kelly json: [{&quot;value&quot;: &quot;15&quot;, &quot;error&quot;: &quot;3&quot;}] More irrelevant text </code></pre> <p>My goal is to extract the json but keep it paired with the text that comes before it so that the two can be tied together. The keyword that indicates the start of a new section is <code>run</code>. However, as shown in the example below, I need to extract the timestamp from the same line where <code>run</code> appears. The character that indicates the end of a section is <code>]</code>.</p> <p>I want to parse this text into a pandas dataframe like the following:</p> <pre><code>timestamp run user value error 2022-12-15 12:45:06 1 james 30 8 2022-12-15 12:47:36 2 kelly 15 3 </code></pre>
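<p>A sketch of one regex-based approach (assumptions: every section is exactly a header line followed by a <code>json:</code> line, as in the sample, and the log file name is hypothetical):</p> <pre><code>import json
import re

import pandas as pd

log_text = open('app.log').read()   # hypothetical file name

pattern = re.compile(
    r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}), run: (\d+), user: (\w+)\s*\njson: (\[.*?\])')

rows = []
for ts, run, user, payload in pattern.findall(log_text):
    record = json.loads(payload)[0]   # the sample holds one object per list
    rows.append({'timestamp': ts, 'run': int(run), 'user': user, **record})

df = pd.DataFrame(rows)   # columns: timestamp, run, user, value, error
</code></pre>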
<python><json>
2022-12-15 22:36:58
3
1,651
DJC
74,818,221
8,537,770
API gateway LambdaRestApi custom domain gets ECONNREFUSED Error upon requests
<p>I've looked at some other posts, but <a href="https://stackoverflow.com/questions/48306053/api-gateway-regional-custom-domain-is-not-working">this one</a> is the closest question I could find that might be close to what I'm experiencing. I'm just not that clear on it from what was stated in the answer.</p> <p>I'm creating a LambdaRestApi through API Gateway and attempting to use a Route 53 hosted zone as the domain of my endpoint. I've created all of this using aws-cdk, which seems to work except when I am creating the alias record to connect to the custom domain of my API.</p> <p>My aws-cdk code is shown below.</p> <pre><code>from aws_cdk import ( Stack, aws_apigateway as apigateway, aws_lambda as lambda_, aws_iam as iam, aws_ecr as ecr, aws_route53 as route53, aws_route53_targets as targets, aws_certificatemanager as cman ) from constructs import Construct class ApiStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -&gt; None: super().__init__(scope, construct_id, **kwargs) repo_ = ecr.Repository.from_repository_arn(self, 'repo', repository_arn=&lt;image_arn&gt;) tag_ = &lt;image_tag&gt; backend = lambda_.DockerImageFunction(self, 'myLambda', code=lambda_.DockerImageCode.from_ecr(repository=repo_, tag=tag_), architecture=lambda_.Architecture.X86_64 ) certificate = cman.Certificate.from_certificate_arn(self, 'cert', &lt;cert ARN&gt; ) api = apigateway.LambdaRestApi(self, &quot;myAPI&quot;, handler=backend, proxy=False, endpoint_configuration=apigateway.EndpointConfiguration( types=[apigateway.EndpointType.REGIONAL]), domain_name=apigateway.DomainNameOptions( domain_name=&lt;custom-domain-name&gt;, certificate=certificate ) ) hosted_zone = route53.HostedZone.from_lookup(self, 'myHostedZone', domain_name=&lt;hosted-zone-domain-name&gt;) route53.ARecord(self, 'Arecord', zone=hosted_zone, target=route53.RecordTarget.from_alias(targets.ApiGateway(api)), record_name=&lt;domain-name&gt; ) </code></pre> <p>When I hit the invoke URL, everything works fine. However, when I try to hit the custom domain linked to my API or the Route 53 alias, I get <code>Error: connect ECONNREFUSED</code></p> <p>From the post I shared above, I think it might have something to do with HTTP vs HTTPS requests, but I don't feel like I know enough to explore that thoroughly.</p> <p>Any ideas why I cannot hit my custom domain?</p>
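<p>One thing that may be worth trying (an assumption, not a confirmed fix): point the alias at the API's custom domain rather than at the REST API itself, using <code>targets.ApiGatewayDomain</code> and the <code>DomainName</code> construct created by <code>DomainNameOptions</code> above:</p> <pre><code>route53.ARecord(self, 'Arecord',
    zone=hosted_zone,
    target=route53.RecordTarget.from_alias(
        targets.ApiGatewayDomain(api.domain_name)),   # alias the custom domain mapping
    record_name=&lt;domain-name&gt;
)
</code></pre>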
<python><certificate><aws-cdk><amazon-route53><api-gateway>
2022-12-15 22:24:23
1
663
A Simple Programmer
74,818,180
1,569,221
Module not found when trying to run `black` in a Jupyter notebook
<p>I've created a Jupyter notebook inside of a virtual environment.</p> <p>These packages are installed:</p> <pre><code>black==22.12.0 jupyter~= 1.0.0 jupyter-black==0.3.3 jupyter-contrib-nbextensions~=0.5.1 </code></pre> <p>In my notebook, there's a button with a little gavel to run <code>black</code> code formatting. However, when I click this, I get the error message:</p> <pre><code>localhost:8888 says [jupyter-black] Error:ModuleNotFoundError No module named 'black' </code></pre> <p>I've made sure I'm using the kernel created inside this virtual environment.</p> <p>Reading through the docs has not pointed out any obvious errors I'm making here. Any suggestions on getting <code>black</code> to run appropriately?</p>
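<p>A quick sanity check, run from a notebook cell, confirms whether the kernel really is the virtual environment's interpreter and can see <code>black</code>:</p> <pre><code>import sys
print(sys.executable)    # should point inside the virtual environment

import black             # raises ModuleNotFoundError if this kernel can't see it
print(black.__version__)
</code></pre>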
<python><jupyter-notebook><code-formatting><black-code-formatter>
2022-12-15 22:19:14
1
2,393
canary_in_the_data_mine
74,818,054
5,750,741
ModuleNotFoundError: No module named 'google'
<p>I am new to PyCharm (coming from the RStudio world). I am just trying to set up a PyCharm project. My first line of code imports the <code>google</code> library (later I intend to write code for pulling data from BigQuery).</p> <p>But I am getting an error saying <code>ModuleNotFoundError: No module named 'google'</code> in PyCharm. I tried the suggested solutions for a very <a href="https://stackoverflow.com/questions/57212573/modulenotfounderror-no-module-named-requests-in-pycharm">similar stackoverflow question</a>. I also tried invalidating caches and restarting via File &gt; Invalidate Caches / Restart.</p> <p>I can see that <code>google</code> is installed in the Python interpreter. I am not able to figure out what the issue is. To me it looks like it is related to the way the environment is set up in PyCharm.</p> <p>Edit: I checked the Project interpreter and the Run Configuration interpreter. Both match and I still get the same thing.</p> <p><a href="https://i.sstatic.net/Dgamw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dgamw.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/dlv2r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dlv2r.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/c5FGD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c5FGD.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/JGtaK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JGtaK.png" alt="enter image description here" /></a></p>
<python><pycharm><ide>
2022-12-15 22:01:49
1
1,459
Piyush
74,818,047
5,212,614
Is there a best practice for dropping dupes from a dataframe with many, many columns?
<p>I am merging two dataframes with quite a few columns and the IDs are repeating in one dataframe, so I'm getting a lot of dupes in the merged dataframe. I'm wondering if there is a best way to figure out which columns are actual dupes and leverage that logic to drop dupes.</p> <p>I'm doing this...</p> <pre><code>df_final = df.drop_duplicates(subset=['field1', 'field2','field3', 'field4', 'field5', 'field6'], keep='first') </code></pre> <p>There are many more fields. If I add too few, I don't get enough records dropped and if I add too many, or the wrong ones, I get too many records dropped. How can I figure out what's a true dupe and not a dupe and drop only dupes? Finally, the first dataframe has around 31k records and the second has around 9k records. I would expect the df_final to have around 31k records.</p>
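<p>A sketch of one way to discover which columns actually vary within a duplicated ID before choosing the <code>subset</code> (the column name <code>id</code> is hypothetical):</p> <pre><code>dupes = df[df.duplicated(subset='id', keep=False)]
varying = [c for c in dupes.columns
           if c != 'id' and dupes.groupby('id')[c].nunique(dropna=False).gt(1).any()]
print(varying)   # columns that differ within the same id -- likely merge artifacts

# a true dupe is then a row identical on everything except those columns
df_final = df.drop_duplicates(subset=[c for c in df.columns if c not in varying])
</code></pre>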
<python><python-3.x><pandas>
2022-12-15 22:00:32
0
20,492
ASH
74,817,976
1,594,557
Alternative for scipy.stats.norm.ppf?
<p>Does anyone know of an alternative Python implementation for scipy.stats.norm.ppf()? I am building an EXE and I would like to avoid adding scipy for this one function.</p> <p>There is a great alternative for scipy.stats.norm.pdf() here: <a href="https://stackoverflow.com/questions/8669235/alternative-for-scipy-stats-norm-pdf">Alternative for scipy.stats.norm.pdf?</a></p>
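<p>Since Python 3.8 the standard library has a drop-in replacement, <code>statistics.NormalDist</code>:</p> <pre><code>from statistics import NormalDist

print(NormalDist().inv_cdf(0.975))             # 1.9599639845400545, like norm.ppf(0.975)
print(NormalDist(mu=5, sigma=2).inv_cdf(0.5))  # 5.0 -- loc/scale are built in
</code></pre>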
<python><scipy>
2022-12-15 21:51:47
1
1,464
leenremm
74,817,975
7,267,480
How to save a list in a dataframe cell?
<p>I need to store a list in a dataframe cell.</p> <p>How can I save the list into one column, and how can it be read and parsed back from that cell?</p> <p>I have tried this code to generate a temporary dataframe inside a loop:</p> <pre><code>newlist = [1267, 1296, 1311, 1320, 1413, 1450] temp = pd.DataFrame( { 'window_index_num': i, 'window_identifiers': newlist, ### here is the problem! 'left_pulse_num':w_left_pulse_num, 'right_pulse_num': w_right_pulse_num, 'idx_of_left_pulse': idx_of_left_pulse, 'idx_of_right_pulse': idx_of_right_pulse, 'left_pulse_pos_in_E': left_pulse_pos_in_E, 'right_pulse_pos_in_E':right_pulse_pos_in_E, 'idx_window_left_border': idx_window_left_border, 'idx_window_right_border': idx_window_right_border, 'left_win_pos_in_E': left_win_pos_in_E, 'right_win_pos_in_E': right_win_pos_in_E, 'window_width': points_num_in_current_window, 'window_width_in_E': window_width_in_E, 'sum_pulses_duration_in_E': sum_pulses_duration_in_E, 'sum_pulse_sq': sum_pulse_sq, 'pulse_to_window_rate': pulse_to_window_rate, 'max_height_in_window': max_height_in_window, 'min_height_in_window': window_min_val }, index=[i] ) </code></pre> <p>When I run it, I get the error <code>ValueError: Length of values (6) does not match length of index (1)</code>.</p> <p>I have also tried this approach, but it didn't help:</p> <pre><code>... 'min_height_in_window': window_min_val }, index=[i] ) temp['window_identifiers']=temp['window_identifiers'].astype('object') temp['window_identifiers']=all_ids_in_window windows_df = pd.concat([windows_df, temp]) </code></pre>
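<p>The error happens because pandas tries to broadcast the six list elements against a one-row index. A sketch of one fix: wrap the list in an outer list, so the whole list becomes a single cell value:</p> <pre><code>import pandas as pd

newlist = [1267, 1296, 1311, 1320, 1413, 1450]
i = 0   # hypothetical loop index

temp = pd.DataFrame({'window_index_num': i}, index=[i])
temp['window_identifiers'] = [newlist]           # one-element outer list -&gt; one cell
print(temp.loc[i, 'window_identifiers'])         # [1267, 1296, 1311, 1320, 1413, 1450]
print(len(temp.loc[i, 'window_identifiers']))    # read back and parsed like any list
</code></pre>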
<python><list><dataframe>
2022-12-15 21:51:45
2
496
twistfire
74,817,922
1,551,027
Is there a way to create a new branch on a Google Cloud Source Repositories repo using the Python SDK?
<p>I am using Google Cloud Source Repositories to store code for my CI/CD pipeline. What I'm building has two repos: <code>core</code> and <code>clients</code>. The <code>core</code> code will be built and deployed to monitor changes to a cloud storage bucket. When it detects a new customer config in the bucket, it will copy the <code>clients</code> code into a new branch of the <code>clients</code> repo named after the customer. The idea is to enable later potential tailoring for a given customer beyond the standard <code>clients</code> codebase.</p> <p>The solution I've been considering is to have the <code>core</code> deploy programmatically create the branches in the the <code>clients</code> repo, but have come up empty handed in my research for how to do that in Google Cloud.</p> <p>The only documentation that is close to what I want to do is <a href="https://cloud.google.com/source-repositories/docs/apis" rel="nofollow noreferrer">here</a>.</p>
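<p>As far as I can tell, the Cloud Source Repositories REST API only covers repo-level operations, not branches, but every CSR repo speaks standard git. A sketch using plain git from Python (assumptions: the <code>gcloud</code> credential helper is configured on the machine, and the repo URL is hypothetical):</p> <pre><code>import subprocess

repo_url = 'https://source.developers.google.com/p/my-project/r/clients'  # hypothetical

subprocess.run(['git', 'clone', repo_url, 'clients'], check=True)
subprocess.run(['git', '-C', 'clients', 'checkout', '-b', 'customer-x'], check=True)
subprocess.run(['git', '-C', 'clients', 'push', 'origin', 'customer-x'], check=True)
</code></pre>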
<python><google-cloud-platform>
2022-12-15 21:46:18
1
3,373
Dshiz
74,817,641
8,305,252
OpenAI Whisper Cannot Import Numpy
<p>I am trying to run the OpenAI Whisper model but running into the following error when trying to run my script:</p> <blockquote> <p>ValueError: Unable to compare versions for numpy&gt;=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.</p> </blockquote> <p>I have, as the error suggests, tried reinstalling Numpy but that didn't solve anything. When I run the command 'pip show numpy' I get:</p> <pre><code>Name: numpy Version: 1.23.5 Summary: NumPy is the fundamental package for array computing with Python. Home-page: https://www.numpy.org Author: Travis E. Oliphant et al. Author-email: License: BSD Location: /opt/homebrew/lib/python3.10/site-packages Requires: Required-by: contourpy, matplotlib, pandas, pythran, scipy, transformers, whisper </code></pre> <p>So not only do I have Numpy version 1.23.5 (&gt;1.17), it also lists whisper as dependent on the package.</p> <p>My machine is a Macbook Air M1 running OS Ventura 13.0.1.</p> <p>I have looked through the OpenAI github for similar issues but I can't seem to find anything of the sort. I also tried importing the package manually with this following:</p> <pre><code>import sys sys.path.append('/opt/homebrew/lib/python3.10/site-packages/numpy') </code></pre> <p>But this didn't work either. Please let me know if you have any insight as to why this may be happening.</p>
<python><numpy><openai-whisper>
2022-12-15 21:14:18
2
686
Digglit
74,817,441
3,247,006
How to remove a useless "UPDATE" query when overriding "response_change()" in Django Admin?
<p>In <code>FruitAdmin():</code>, I overrode <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.response_change" rel="nofollow noreferrer"><strong>response_change()</strong></a> with the code to capitalize the name which a user inputs on <strong>Change fruit</strong> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;store/admin.py&quot; from django.contrib import admin from .models import Fruit @admin.register(Fruit) class FruitAdmin(admin.ModelAdmin): def response_change(self, request, obj): # Here obj.name = obj.name.upper() obj.save() return super().response_change(request, obj) </code></pre> <p>Then, I input <code>orange</code> to <strong>Name:</strong> on <strong>Change fruit</strong> as shown below:</p> <p><a href="https://i.sstatic.net/Yd03d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yd03d.png" alt="enter image description here" /></a></p> <p>Then, the name was successfully changed from <code>APPLE</code> to <code>ORANGE</code> capitalized as shown below:</p> <p><a href="https://i.sstatic.net/kaUB3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kaUB3.png" alt="enter image description here" /></a></p> <p>But according to PostgreSQL logs, there is <strong>a useless <code>UPDATE</code> query</strong> as shown below. *I use <strong>PostgreSQL</strong> and you can check <a href="https://stackoverflow.com/questions/54780698/postgresql-database-log-transaction/73432601#73432601"><strong>On PostgreSQL, how to log queries with transaction queries such as &quot;BEGIN&quot; and &quot;COMMIT&quot;</strong></a>:</p> <p><a href="https://i.sstatic.net/tPvcH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tPvcH.png" alt="enter image description here" /></a></p> <p>So, how can I remove <strong>the useless <code>UPDATE</code> query</strong> as shown above?</p>
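<p>A sketch that avoids the second <code>UPDATE</code>: mutate the object in <code>save_model()</code>, which runs before the admin performs its single save, instead of saving again in <code>response_change()</code>:</p> <pre class="lang-py prettyprint-override"><code># &quot;store/admin.py&quot;
from django.contrib import admin
from .models import Fruit

@admin.register(Fruit)
class FruitAdmin(admin.ModelAdmin):
    def save_model(self, request, obj, form, change):
        obj.name = obj.name.upper()                     # change before the one save
        super().save_model(request, obj, form, change)  # exactly one UPDATE
</code></pre>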
<python><python-3.x><django><sql-update><django-admin>
2022-12-15 20:53:19
1
42,516
Super Kai - Kazuya Ito
74,817,250
1,014,747
How can I dynamically refer to a child class from the super class in Python?
<p>Consider this situation:</p> <pre class="lang-py prettyprint-override"><code>class Foo(ABC): @abstractmethod def some_method(self): return class Bar(Foo): def some_method(self, param): # do stuff return class Baz(Foo): def some_method(self, param): # do stuff differently return def do_something_with_obj(some_obj: Foo): some_param = 'stuff' some_obj.some_method(some_param) def main(cond): if cond: obj = Bar() else: obj = Baz() do_something_with_obj(obj) </code></pre> <p>I get an <code>Expected 0 positional arguments</code> error when I try to call <code>some_method()</code> under the <code>do_something_with_obj()</code> method. Of course, this is because I'm essentially calling the abstract method. My question is, how can I dynamically refer to the child class method since I have to choose the right child class based on some condition beforehand?</p>
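<p>The warning comes from the abstract signature rather than the dispatch: <code>some_obj</code> is annotated as <code>Foo</code>, and <code>Foo.some_method(self)</code> declares no parameter. Declaring the parameter on the base class lets the dynamic dispatch type-check cleanly (a sketch):</p> <pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod

class Foo(ABC):
    @abstractmethod
    def some_method(self, param):   # signature now matches the subclasses
        ...
</code></pre>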
<python><inheritance><abstract-class><pylance>
2022-12-15 20:33:35
0
603
b0neval
74,817,203
7,158,458
Case insensitive filtering multiple columns in pandas using .loc
<p>I would like to search for values ignoring differences in case. So, for example, if I type in 'fred' I would still be able to filter for all values containing Fred, even if the F is capitalized.</p> <p>This is what I currently have:</p> <pre><code>def find(**kwargs): result = data.loc[data.rename(columns={&quot;FirstName&quot;: &quot;first&quot;, &quot;LastName&quot;: &quot;last&quot;, &quot;City&quot;: &quot;city&quot;, })[list(kwargs.keys())] .eq(list(kwargs.values())).all(axis=1)] return result </code></pre> <p>However, I realized that I cannot use .lower() at any point to forcibly lower-case both the strings I am passing in and the values I am filtering for.</p> <p>Here is a sample of my data:</p> <pre><code>FirstName LastName City Fred Bob Austin Billy Bob NYC </code></pre> <p>When I run my function I expect this:</p> <pre><code>find('fred') Output: Fred Bob Austin </code></pre>
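<p>A sketch of one way to lower-case both sides while keeping the kwargs-driven design (<code>str.contains</code> with <code>case=False</code> also covers the &quot;values containing Fred&quot; requirement):</p> <pre><code>import pandas as pd

def find(**kwargs):
    renamed = data.rename(columns={'FirstName': 'first', 'LastName': 'last', 'City': 'city'})
    mask = pd.Series(True, index=data.index)
    for col, val in kwargs.items():
        mask &amp;= renamed[col].str.contains(str(val), case=False, na=False)
    return data.loc[mask]

find(first='fred')   # matches the Fred Bob Austin row
</code></pre>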
<python><pandas>
2022-12-15 20:28:33
3
2,515
Emm
74,817,183
10,924,836
Implement specific function in Python
<p>I am trying to implement my own function. Below you can see my code and data:</p> <pre><code>import pandas as pd import numpy as np data = {'type_sale':[100,0,24,0,0,20,0,0,0,0], 'salary':[0,0,24,80,20,20,60,20,20,20], } df1 = pd.DataFrame(data, columns = ['type_sale', 'salary',]) def cal_fun(type_sale,salary): if type_sale &gt; 1: new_sale = 0 elif (type_sale==0) and (salary &gt;1): new_sale = (np.random.choice, 10, p=[0.5,0.05]))/2 ###&lt;-This line here return (new_sale) df1['new_sale']=cal_fun(type_sale,salary) </code></pre> <p>So with this function, I want to randomly select 50 percent of the rows (with np.random) in the salary column. These randomly selected rows also need to have a zero in the <code>type_sale</code> column, and after that, I want to divide their values by 2.</p> <p>I tried with the above function, but I am not sure I implemented it properly. Can anybody help me solve this problem?</p> <p>In the end, I expect to have the table shown below.</p> <p>Please keep any suggestions in the format of the function above.</p> <p><a href="https://i.sstatic.net/iGmwY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iGmwY.png" alt="enter image description here" /></a></p>
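<p>A sketch of one vectorized way to do it (assumptions: &quot;50 percent&quot; means half of the rows where <code>type_sale == 0</code> and <code>salary &gt; 1</code>, unselected rows keep their original salary, and the seed is only for reproducibility):</p> <pre><code>import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

eligible = df1.index[(df1['type_sale'] == 0) &amp; (df1['salary'] &gt; 1)]
chosen = rng.choice(eligible, size=len(eligible) // 2, replace=False)

df1['new_sale'] = df1['salary']                             # default: unchanged
df1.loc[chosen, 'new_sale'] = df1.loc[chosen, 'salary'] / 2
</code></pre>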
<python><pandas>
2022-12-15 20:25:58
2
2,538
silent_hunter
74,817,155
163,567
Why is the argument getting passed in to this python method treated this way?
<p>version: Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]</p> <p>Setup:</p> <pre><code>from dataclasses import dataclass class _Mixin: def vars_to_dct(self, results=[]): dct = self.to_dict() for k in dct.keys(): results.append(k) return results @dataclass class ClassA(_Mixin): a: str = &quot;&quot; b: str = &quot;&quot; @dataclass class ClassB(_Mixin): c: str = &quot;&quot; d: str = &quot;&quot; e: str = &quot;&quot; </code></pre> <p>Instantiate ClassA and method by itself. This is fine:</p> <pre><code>a = ClassA() assert [&quot;a&quot;, &quot;b&quot;] == a.vars_to_dct() </code></pre> <p>Instantiate ClassB and method by itself. This is fine:</p> <pre><code>b = ClassB() assert [&quot;c&quot;, &quot;d&quot;, &quot;e&quot;] == b.vars_to_dct() </code></pre> <p>Instantiate ClassA and ClassB together. This is not fine:</p> <p><code>AssertionError</code></p> <pre><code>a = ClassA() assert [&quot;a&quot;, &quot;b&quot;] == a.vars_to_dct() b = ClassB() assert [&quot;c&quot;, &quot;d&quot;, &quot;e&quot;] == b.vars_to_dct() </code></pre> <p>What actually happens:</p> <pre><code>b = ClassB() assert [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;, &quot;e&quot;] == b.vars_to_dct() </code></pre> <p>Explicitly redeclaring the argument also works fine:</p> <pre><code>b = ClassB() assert [&quot;c&quot;, &quot;d&quot;, &quot;e&quot;] == b.vars_to_dct(results=[]) </code></pre> <p>Note: I have tried to apply the answers here to this behaviour:</p> <ul> <li><a href="https://stackoverflow.com/questions/59488394/python-why-are-the-object-properties-leaking-into-one-another">python why are the object properties leaking into one another</a></li> <li><a href="https://stackoverflow.com/questions/1680528/how-to-avoid-having-class-data-shared-among-instances">How to avoid having class data shared among instances?</a></li> </ul> <p>But the difference and my question is: I am <strong>explicitly declaring the argument</strong> to the method.</p> <p>Why is the argument <code>results=[]</code> on the method not recognised when it's run again?</p> <pre><code> def vars_to_dct(self, results=[]): </code></pre> <p>Can someone confirm for me that it really is official Python behaviour to have variables declared in methods to be &quot;static members&quot;/&quot;class variables&quot;?</p>
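<p>Yes, this is official behaviour: a default value is evaluated once, when the <code>def</code> statement runs, and stored on the function object, so the same list is shared by every call (and every instance, since the method lives on the class). The canonical fix is the <code>None</code> sentinel (a sketch):</p> <pre><code>class _Mixin:
    def vars_to_dct(self, results=None):
        if results is None:
            results = []              # a fresh list on every call
        for k in self.to_dict().keys():
            results.append(k)
        return results
</code></pre>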
<python>
2022-12-15 20:23:33
0
4,347
Williams
74,817,147
17,277,677
read pyspark json files and concat
<p>I have 999 gz files located in an s3 bucket. I wanted to read them all and convert the pyspark dataframe into a pandas dataframe, but that was impossible because the files are so large. I am trying to take a different approach - reading each single .gz file, THEN converting it to a pandas df, reducing the number of columns, and then concatenating everything into one big pandas df.</p> <pre><code>spark_df = spark.read.json(f&quot;s3a://my_bucket/part-00000.gz&quot;) </code></pre> <p>part-00000.gz is zipped json; 00000 is the first one and 00999 is the last one to read. Could you please help me unpack them all and later concatenate the pandas dfs?</p> <p>Logic:</p> <ol> <li><p>Read all json files:</p> <p>spark_df = spark.read.json(f&quot;s3a://my_bucket/part-00{}.gz&quot;)</p> </li> <li><p>convert to pandas</p> <p>pandas_df = spark_df.toPandas()</p> </li> <li><p>reduce columns (only a few needed columns)</p> <p>pandas_df = pandas_df[[&quot;col1&quot;,&quot;col2&quot;,&quot;col3&quot;]]</p> </li> <li><p>merge all the 999 pandas dfs into one: full_df = pd.concat(for loop, to go through all the pandas dataframes)</p> </li> </ol> <p>This is the logic in my head, but I am having difficulties coding it.</p> <p>EDIT: I started writing the code but it does not show me pandas_df:</p> <pre><code>for i in range(10,11): df_to_predict = spark.read.json(f&quot;s3a://my_bucket/company_v20_dl/part-000{i}.gz&quot;) df_to_predict = df_to_predict.select('id','summary', 'website') df_to_predict = df_to_predict.withColumn('text', lower(col('summary'))) df_to_predict = df_to_predict.select('id','text', 'website') df_to_predict = df_to_predict.withColumn(&quot;text_length&quot;, length(&quot;text&quot;)) df_to_predict.show() pandas_df = df_to_predict.toPandas() pandas_df.head() </code></pre> <p>Also, I've noticed this solution will be faulty for part00001 / part00100 etc. &lt;- range does not &quot;fill up&quot; with zeros.</p>
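<p>Two sketches. Zero-padding the loop index fixes the file names, and pruning columns before <code>toPandas()</code> keeps each conversion small; with a wildcard the loop may not be needed at all (paths as above):</p> <pre><code># zero-padded loop over the 999 parts (assuming numbering starts at part-00000)
for i in range(999):
    path = f's3a://my_bucket/company_v20_dl/part-{i:05d}.gz'

# or read every part in one pass, prune columns, then convert once
df = (spark.read.json('s3a://my_bucket/company_v20_dl/part-*.gz')
           .select('id', 'summary', 'website'))
pandas_df = df.toPandas()
</code></pre>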
<python><pandas><dataframe><pyspark>
2022-12-15 20:22:42
2
313
Kas
74,817,120
2,119,878
Convert object to int (Pandas Dataframe)
<p>I have seen this question asked several times, but I haven't found an answer that solves my problem yet. I have a CSV file that reads in the values as objects and I haven't found a way to convert a column to an int (the column may contain None values). This is what I have tried:</p> <pre><code>df = pd.read_csv(&quot;workouts.csv&quot;, converters={'Length (minutes)': lambda x: pd.to_numeric( x, errors='ignore')} </code></pre> <pre><code>df['Length (minutes)'] = df['Length (minutes)'].astype('int', errors='ignore') </code></pre> <pre><code>df['Length (minutes)'] = df['Length (minutes)'].astype('str').astype('int', errors='ignore') </code></pre> <p>After all of the above, I get:</p> <pre><code>&gt;&gt;&gt; df.dtypes Workout Timestamp object Live/On-Demand object Instructor Name object Length (minutes) object </code></pre> <p>What is the trick to convert this to an int?</p>
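<p>A sketch using the nullable integer dtype, which is the usual trick when a column may contain <code>None</code>/<code>NaN</code> (a plain <code>int</code> column cannot hold missing values, which is why the <code>errors='ignore'</code> conversions silently leave it as object):</p> <pre><code>df['Length (minutes)'] = (pd.to_numeric(df['Length (minutes)'], errors='coerce')
                            .astype('Int64'))   # capital-I nullable integer
print(df.dtypes)
</code></pre>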
<python><pandas><dataframe>
2022-12-15 20:19:52
1
591
cicit
74,816,826
7,158,458
Conditionally filter on multiple columns or else return entire dataframe
<p>I have a csv with a few people. I would like to build a function that will either filter based on all parameters given or return the entire dataframe as it is if no arguments are passed.</p> <p>So given this as the csv:</p> <pre><code>FirstName LastName City Matt Fred Austin Jim Jack NYC Larry Bob Houston Matt Spencer NYC </code></pre> <p>If I were to call my function <code>find</code>, here is what I would expect to see depending on what I pass as arguments:</p> <pre><code>find(first=&quot;Matt&quot;, last=&quot;Fred&quot;) Output: Matt Fred Austin find() Output: Full Dataframe find(last=&quot;Spencer&quot;) Output: Matt Spencer NYC find(city=&quot;NYC&quot;) Output: All people living in NYC in dataframe </code></pre> <p>This is what I have tried:</p> <pre><code>def find(first=None, last=None, city=None): file= pd.read_csv(list) searched = file.loc[(file[&quot;FirstName&quot;] == first) &amp; (file[&quot;LastName&quot;] == last) &amp; (file[&quot;City&quot;] == city)] return searched </code></pre> <p>This returns blank if I just pass in a first name and nothing else.</p>
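<p>A sketch that builds the mask only from the arguments actually supplied, so calling <code>find()</code> with nothing returns the whole dataframe (<code>people.csv</code> is a hypothetical file name):</p> <pre><code>import pandas as pd

def find(first=None, last=None, city=None):
    file = pd.read_csv('people.csv')   # hypothetical
    criteria = {'FirstName': first, 'LastName': last, 'City': city}
    mask = pd.Series(True, index=file.index)
    for col, val in criteria.items():
        if val is not None:            # skip filters that weren't passed
            mask &amp;= file[col] == val
    return file.loc[mask]
</code></pre>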
<python><pandas>
2022-12-15 19:50:23
2
2,515
Emm
74,816,808
18,948,596
Row-based or column-based convention for storing data in multi-arrays?
<p>Suppose we have <code>m</code> vectors in <code>R^n</code> (or <code>R^(n,1)</code>) with <code>m &gt;&gt; n</code>.</p> <p>We want to transform these by a linear transformation (applying a Matrix <code>A^(k, n)</code>) and by a translation given by a vector <code>b</code> in <code>R^n</code>. This is obviously faster with a 2D NumPy array then with a list of vectors.</p> <p>When designing an API, would it be preferable to have a Matrix <code>V</code> of shape <code>(n, m)</code> or <code>(m, n)</code>?</p> <p>The former, <code>V_1.shape = (n, m)</code>, is what we are used to from mathematics. It is easy to apply <code>A</code>, simply write <code>A @ V_1</code>. Translating <code>V_1</code> by <code>b</code> is not that easy though, we would need to reshape vector b in this case.</p> <p>If <code>V_2.shape = (m, n)</code> then translating <code>V_2</code> by <code>b</code> is easy (thanks to NumPy broadcasting), but applying <code>A</code> to <code>V_2</code> couldn't be done by calculating <code>A @ V_2</code> anymore. Instead we would need to use something like <code>V_2 @ A.T</code>.</p> <p>It appears (from limited research) that SciPy, Scikit usually prefer the latter convention (NumPy itself doesn't seem too opinionated on this problem). Why is that?</p> <p>Note my question centers around which convention to use in terms of the API and not in regard to the <a href="https://stackoverflow.com/questions/74808537/why-does-numpy-use-row-based-data-as-opposed-to-column-based-data?noredirect=1#comment132025662_74808537">data layout in memory</a>.</p>
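<p>A small demonstration of the trade-off (n features, m samples, and a hypothetical k-row transform):</p> <pre><code>import numpy as np

n, m, k = 3, 5, 2
A = np.ones((k, n))
b = np.arange(n)

V1 = np.ones((n, m))        # (n, m): the maths convention
out1 = A @ V1               # transform is natural
shift1 = V1 + b[:, None]    # translation needs an explicit reshape

V2 = np.ones((m, n))        # (m, n): one sample per row
out2 = V2 @ A.T             # transform goes through a transpose
shift2 = V2 + b             # translation broadcasts for free
</code></pre>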
<python><numpy><multidimensional-array>
2022-12-15 19:47:53
0
413
Racid
74,816,667
9,274,940
pythonic way to drop rows where length of list in column is x
<p>I would like to drop the rows where a certain column has a list of length X. What is the most pythonic or efficient way, instead of looping?</p> <p>Code example:</p> <pre><code>import pandas as pd data = {'column_1': ['1', '2', '3'] , 'column_2': [['A','B'], ['A','B','C'], ['A']], &quot;column_3&quot;: ['a', 'b', 'c']} df = pd.DataFrame.from_dict(data) </code></pre> <p>Drop rows where the length of the list is 3. In this case, the second row should be deleted since the length of its list is 3.</p>
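<p>A sketch without an explicit loop: <code>Series.str.len()</code> also works on lists held in an object column, so it can drive a boolean mask directly:</p> <pre><code>df = df[df['column_2'].str.len() != 3]   # keeps rows whose list length is not 3
</code></pre>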
<python><pandas>
2022-12-15 19:33:57
2
551
Tonino Fernandez
74,816,653
1,424,739
How Explicit Wait is implemented under selenium?
<p><a href="https://www.testim.io/blog/how-to-wait-for-a-page-to-load-in-selenium/" rel="nofollow noreferrer">https://www.testim.io/blog/how-to-wait-for-a-page-to-load-in-selenium/</a></p> <p>I want to understand how &quot;Explicit Wait&quot; is implemented under selenium. Can you show some example python code to demonstrate how selenium's &quot;Explicit Wait&quot; is implemented, without using selenium?</p> <p>Is the logic just: wait for some time, then test whether an element is available; if not, wait some more and check again, ..., until the element is available?</p>
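<p>Essentially yes. A sketch of the same poll-until-timeout loop with no selenium involved (selenium's <code>WebDriverWait</code> defaults to polling every 0.5 s and ignoring a configurable set of exceptions between attempts):</p> <pre><code>import time

def wait_until(condition, timeout=10, poll_frequency=0.5, ignored=(Exception,)):
    &quot;&quot;&quot;Call condition() repeatedly until it returns a truthy value or time runs out.&quot;&quot;&quot;
    deadline = time.monotonic() + timeout
    while True:
        try:
            value = condition()
            if value:
                return value
        except ignored:
            pass                        # not ready yet; try again
        if time.monotonic() &gt; deadline:
            raise TimeoutError(f'condition not met within {timeout}s')
        time.sleep(poll_frequency)
</code></pre>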
<python><selenium><selenium-webdriver>
2022-12-15 19:32:47
1
14,083
user1424739
74,816,570
5,446,972
Are class variables accessed via self used as loop variables good practice/pythonic?
<p>Is it appropriate to use a class variable accessed with <code>self</code> as a loop variable (aka <a href="https://en.wikipedia.org/wiki/For_loop" rel="nofollow noreferrer">loop counter</a>, <a href="https://docs.python.org/3/reference/compound_stmts.html#the-for-statement" rel="nofollow noreferrer">target_list</a>, or <a href="https://docs.python.org/3/library/stdtypes.html#iterator.__next__" rel="nofollow noreferrer">iterated item</a>)? I have not found a standard which describes this situation. The following python code is valid and produces the expected result (&quot;1, 2, 3&quot;) but it feels wrong somehow:</p> <pre class="lang-py prettyprint-override"><code>class myClass(): foo = None def myfunc(self): for self.foo in [1, 2, 3]: print(self.foo) return myClass().myfunc() </code></pre> <p>I have not found a definitive answer that suggests using <code>self.foo</code> would be considered good or bad programming.</p> <p>More specifically:</p> <ul> <li>Does the MWE pass muster for standards?</li> <li>Is the MWE behaving like I claim or is something else happening here?</li> </ul>
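<p>It behaves as claimed, with one side effect worth seeing: the loop target is an ordinary assignment, so after the loop the last value persists as an <em>instance</em> attribute, shadowing the class attribute:</p> <pre class="lang-py prettyprint-override"><code>c = myClass()
c.myfunc()           # prints 1, 2, 3
print(c.foo)         # 3    -- left over from the loop, on the instance
print(myClass.foo)   # None -- the class attribute is untouched
</code></pre>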
<python><loops><self><python-class>
2022-12-15 19:25:01
0
490
WesH
74,816,532
3,107,858
Recursively copy a child directory to the parent in Google Cloud Storage
<p>I need to recursively move the contents of a sub-folder to a parent folder in google cloud storage. This code works for moving a single file from sub-folder to the parent.</p> <pre><code>client = storage.Client() bucket = client.get_bucket(BUCKET_NAME) source_path = Path(parent_dir, sub_folder, filename).as_posix() source_blob = bucket.blob(source_path) dest_path = Path(parent_dir, filename).as_posix() bucket.copy_blob(source_blob, bucket, dest_path) </code></pre> <p>but I don't know how to properly format the command because if my dest_path is &quot;parent_dir&quot;, I get the following error:</p> <pre><code>google.api_core.exceptions.NotFound: 404 POST https://storage.googleapis.com/storage/v1/b/bucket/o/parent_dir%2Fsubfolder/copyTo/b/geo-storage/o/parent_dir?prettyPrint=false: No such object: geo-storage/parent_dir/subfolder </code></pre> <p>Note: This works for recursive copy with gsutils but I would prefer to use the blob object:</p> <pre><code>os.system(f&quot;gsutil cp -r gs://bucket/parent_dir/subfolder/* gs://bucket/parent_dir&quot;) </code></pre>
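<p>A sketch that mirrors what the <code>gsutil</code> command does: list every blob under the sub-folder prefix and copy each one with the prefix rewritten (GCS has no real folders, only object-name prefixes, so &quot;recursive&quot; just means iterating the listing):</p> <pre><code>from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket(BUCKET_NAME)

prefix = f'{parent_dir}/{sub_folder}/'
for blob in client.list_blobs(bucket, prefix=prefix):
    dest_name = f'{parent_dir}/{blob.name[len(prefix):]}'   # strip the sub-folder segment
    bucket.copy_blob(blob, bucket, dest_name)
</code></pre>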
<python><gsutil><google-cloud-storage>
2022-12-15 19:20:58
1
3,230
DanGoodrick
74,816,407
1,045,755
Plotly graph object laggy when plotting many additional add_trace
<p>I am trying to create a map where I need to draw a line between several nodes/points. There are approximately 2000 node pairs that need a line drawn between them.</p> <p>I have a data frame containing the longitude/latitude coords for each node pair (i.e. <code>col1</code> and <code>col2</code>), and then I plot it by the following:</p> <pre><code>fig = go.Figure(go.Scattergeo()) for idx, row in df.iterrows(): fig.add_trace( go.Scattergeo( mode=&quot;markers+lines&quot;, lat=[row[&quot;col1&quot;][0], row[&quot;col2&quot;][0]], lon=[row[&quot;col1&quot;][1], row[&quot;col2&quot;][1]], marker={&quot;size&quot;: 10}, ) ) fig.show() </code></pre> <p>So I just run through the data frame and plot each node pair. However, my issue is that if I plot beyond 400-500 pairs, the resulting plot is very slow to render, and the zoom/drag effects are also not very good.</p> <p>I am not sure how to optimize this. I'm guessing the issue is that I create that many <code>add_trace</code> objects, but I can't seem to figure out how else to draw a line between pairs only. If I just give all latitude and longitude coords to the <code>lat</code> and <code>lon</code> args, then I will plot all points and draw a line between everything - which is not intended.</p> <p>So yeah, any ideas?</p>
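<p>A sketch of the standard fix: draw all pairs in a single trace and insert <code>None</code> between pairs, which breaks the line without starting a new trace:</p> <pre><code>lats, lons = [], []
for _, row in df.iterrows():
    lats += [row['col1'][0], row['col2'][0], None]   # None ends the line segment
    lons += [row['col1'][1], row['col2'][1], None]

fig = go.Figure(go.Scattergeo(
    mode='markers+lines', lat=lats, lon=lons, marker={'size': 10}))
fig.show()
</code></pre>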
<python><plotly>
2022-12-15 19:07:47
1
2,615
Denver Dang
74,816,379
2,893,712
xlsxwriter Conditional Format based on Formula not working
<p>I want to format an entire row if the value in column A starts with specific words.</p> <p>This code checks if cell A# equals 'Totals' and formats the entire row:</p> <pre><code>testformat = wb.add_format({'bg_color': '#D9D9D9', 'font_color': 'red'}) worksheet.conditional_format(0,0,10, 10, {&quot;type&quot;: &quot;formula&quot;,&quot;criteria&quot;: '=INDIRECT(&quot;A&quot;&amp;ROW())=&quot;Totals&quot;',&quot;format&quot;: testformat}) </code></pre> <p>However, here is my modified code to check if the cell value equals 'Totals' or starts with 'A1', 'A2', or 'A3', which produces an error when I try to open the file:</p> <pre><code>worksheet.conditional_format(0,0,10, 10, {&quot;type&quot;: &quot;formula&quot;,&quot;criteria&quot;: '=OR(COUNTIF(INDIRECT(&quot;A&quot;&amp;ROW()),{&quot;Totals&quot;,&quot;A1*&quot;,&quot;A2*&quot;,&quot;A3*&quot;}))',&quot;format&quot;: testformat}) </code></pre> <p>I tested the formulas in Excel and they work fine (returning <code>TRUE</code>), so why is there a problem with the second formula?</p>
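<p>Excel does not accept array constants (<code>{...}</code>) in conditional-formatting formulas, even though the same formula evaluates fine in a worksheet cell, which would explain why the file needs repair. A sketch that expands the array into an explicit <code>OR</code>:</p> <pre><code>criteria = ('=OR(INDIRECT(&quot;A&quot;&amp;ROW())=&quot;Totals&quot;,'
            'LEFT(INDIRECT(&quot;A&quot;&amp;ROW()),2)=&quot;A1&quot;,'
            'LEFT(INDIRECT(&quot;A&quot;&amp;ROW()),2)=&quot;A2&quot;,'
            'LEFT(INDIRECT(&quot;A&quot;&amp;ROW()),2)=&quot;A3&quot;)')
worksheet.conditional_format(0, 0, 10, 10,
    {'type': 'formula', 'criteria': criteria, 'format': testformat})
</code></pre>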
<python><conditional-formatting><xlsxwriter>
2022-12-15 19:03:58
1
8,806
Bijan
74,816,261
11,885,361
How can I recreate Ruby's sort_by {rand} in python?
<p>I have a line of Ruby code that sorts an array into random order:</p> <p><code>someArray.sort_by {rand}</code></p> <p>How can I write the equivalent in Python?</p>
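<p>The direct translation uses the <code>random</code> module; <code>random.shuffle</code> is the idiomatic choice, and a <code>key</code> function mirrors the Ruby one-liner:</p> <pre><code>import random

some_array = [1, 2, 3, 4, 5]

random.shuffle(some_array)                                      # in place
shuffled = random.sample(some_array, len(some_array))           # new list
ruby_like = sorted(some_array, key=lambda _: random.random())   # closest to sort_by {rand}
</code></pre>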
<python>
2022-12-15 18:53:55
2
630
hanan
74,816,248
10,791,217
Regex Replace Not Replacing String on Dataframe
<p>I have the following line of code to update the contents of certain cells:</p> <pre><code>cmb['Skill'] = cmb['Skill'].replace(({'Application Programming Interface (API)': 'APIs'}), regex=True) </code></pre> <p>I have no idea why this won't work.... I've tried the following as well with no luck:</p> <pre><code>cmb['Skill'] = cmb['Skill'].astype(str) #adding this beforehand cmb = cmb.replace(({'Application Programming Interface (API)': 'APIs'}), regex=True) cmb['Skill'] = cmb['Skill'].replace(({'Application Programming Interface (API)': 'APIs'}), inplace=True) cmb['Skill'] = cmb['Skill'].str.replace(({'Application Programming Interface (API)': 'APIs'}), regex=True) </code></pre> <p>Any help is appreciated.</p>
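<p>The parentheses in <code>(API)</code> are regex metacharacters, so with <code>regex=True</code> the pattern matches &quot;Application Programming Interface API&quot; (with <code>API</code> as a capture group) rather than the literal text. Two sketches:</p> <pre><code># plain substring replacement, no regex involved
cmb['Skill'] = cmb['Skill'].str.replace(
    'Application Programming Interface (API)', 'APIs', regex=False)

# or escape the metacharacters if regex matching is required
import re
cmb['Skill'] = cmb['Skill'].replace(
    {re.escape('Application Programming Interface (API)'): 'APIs'}, regex=True)
</code></pre>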
<python><python-3.x><pandas>
2022-12-15 18:52:32
1
720
RCarmody
74,816,181
11,850,322
Panel Data - dealing with missing year when creating lead and lag variables
<p>I work with panel data. Typically my panel data is not balanced, i.e., there are some missing years. The general look of panel data is as follows:</p> <pre><code>df = pd.DataFrame({'name': ['a']*4+['b']*3+['c']*4, 'year':[2001,2002,2004,2005]+[2000,2002,2003]+[2001,2002,2003,2005], 'val1':[1,2,3,4,5,6,7,8,9,10,11], 'val2':[2,5,7,11,13,17,19,23,29,31,37]}) name year val1 val2 0 a 2001 1 2 1 a 2002 2 5 2 a 2004 3 7 3 a 2005 4 11 4 b 2000 5 13 5 b 2002 6 17 6 b 2003 7 19 7 c 2001 8 23 8 c 2002 9 29 9 c 2003 10 31 10 c 2005 11 37 </code></pre> <p>Now I want to create <code>lead</code> and <code>lag</code> variables within each <code>name</code> group. Using:</p> <pre><code>df['val1_lag'] = df.groupby('name')['val1'].shift(1) df['val1_lead'] = df.groupby('name')['val1'].shift(-1) </code></pre> <p>This simply shifts up/down one row, which is not what I want. I want to shift relative to <code>year</code>. My expected output:</p> <pre><code> name year val1 val2 val1_lag val1_lead 0 a 2001 1 2 NaN 2.0 1 a 2002 2 5 1.0 NaN 2 a 2004 3 7 NaN 4.0 3 a 2005 4 11 3.0 NaN 4 b 2000 5 13 NaN NaN 5 b 2002 6 17 NaN 7.0 6 b 2003 7 19 6.0 NaN 7 c 2001 8 23 NaN 9.0 8 c 2002 9 29 8.0 10.0 9 c 2003 10 31 9.0 NaN 10 c 2005 11 37 NaN NaN </code></pre> <p>My current workaround is to fill in the missing years by:</p> <pre><code>df.set_index(['name', 'year'], inplace=True) mux = pd.MultiIndex.from_product([df.index.levels[0], df.index.levels[1]], names=['name', 'year']) df = df.reindex(mux).reset_index() </code></pre> <p>Then using a normal <code>shift</code>. However, my data size is quite large, and this approach often triples the data size, which is not very efficient.</p> <p>I am looking for a better approach for this scenario.</p>
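<p>A sketch of a merge-based alternative that never materializes the full (name Γ— year) grid: shift the <code>year</code> key instead of the rows:</p> <pre><code>lag = (df.assign(year=df['year'] + 1)            # value at t becomes the lag at t+1
         .rename(columns={'val1': 'val1_lag'})[['name', 'year', 'val1_lag']])
lead = (df.assign(year=df['year'] - 1)           # value at t becomes the lead at t-1
          .rename(columns={'val1': 'val1_lead'})[['name', 'year', 'val1_lead']])

out = (df.merge(lag, on=['name', 'year'], how='left')
         .merge(lead, on=['name', 'year'], how='left'))
</code></pre>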
<python><python-3.x><pandas><dataframe><group-by>
2022-12-15 18:46:35
2
1,093
PTQuoc
74,815,957
8,829,403
Get list of files in directory return encode error for persian files name
<p>When I execute a simple line of code to get a list of files on one drive, I run into an encoding error. I have files with Persian names, and listing those file names raises the error:</p> <pre><code>import os print(os.listdir('D:')) Traceback (most recent call last): File &quot;....\cp1252.py&quot;, line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode characters in position 74-76: character maps to &lt;undefined&gt; </code></pre>
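<p>The listing itself succeeds; it is <code>print()</code> that fails while encoding the names to the console's legacy cp1252 code page. One sketch of a fix (Python 3.7+):</p> <pre><code>import os
import sys

sys.stdout.reconfigure(encoding='utf-8')   # stop encoding console output as cp1252
print(os.listdir('D:'))
</code></pre>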
<python><codec><listdir><cp1252><charmap>
2022-12-15 18:23:48
3
373
Spring98
74,815,871
5,431,283
Tkinter - How do I populate a combo box with a list?
<p>I'm actually using Custom Tkinter but I think it should be the same.</p> <p>I would like to populate a ComboBox after clicking on a button. My button calls a function, which returns a list. I would like to populate the Combobox with each item in that list but I can't seem to get the hang of it.</p> <p>Here is a snippet of the app, you can actually copy paste it and it will run:</p> <pre class="lang-py prettyprint-override"><code>import boto3 import customtkinter ses = boto3.client('ses') class App(customtkinter.CTk): def __init__(self): super().__init__() # configure window self.title(&quot;App&quot;) self.geometry(f&quot;{1200}x{700}&quot;) self.load_template_button = customtkinter.CTkButton(master=self, text=&quot;Load Template&quot;, command=self.get_templates) self.load_template_button.grid(row=3, column=0, padx=5, pady=5) self.templates_list_cb = customtkinter.CTkComboBox(master=self) self.templates_list_cb.grid(row=4, column=0, padx=5, pady=5) def get_templates(self): templates_list = [] response = ses.list_templates() for template in response['TemplatesMetadata']: templates_list.append(template['Name']) print(templates_list) self.templates_list_cb['values'] = templates_list return templates_list if __name__ == &quot;__main__&quot;: app = App() app.mainloop() </code></pre> <p>As I understand it: My button <code>load_template_button</code> executes: <code>command=self.get_templates</code>, which inside of it sets <code>templates_list_cb['values']</code> to the list object which is <code>templates_list</code>.</p> <p>If I print <code>templates_list</code> I get: <code>['Template1', 'Template2']</code>.</p> <p>My issue is that when I click on the button, nothing changes inside of the ComboBox.</p>
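<p>CustomTkinter widgets are not subscriptable the way classic Tk widgets are; their options are changed through <code>configure()</code>. A sketch of the callback (the <code>set()</code> call only makes the first entry visible):</p> <pre class="lang-py prettyprint-override"><code>    def get_templates(self):
        templates_list = [t['Name'] for t in ses.list_templates()['TemplatesMetadata']]
        self.templates_list_cb.configure(values=templates_list)  # not cb['values'] = ...
        if templates_list:
            self.templates_list_cb.set(templates_list[0])
        return templates_list
</code></pre>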
<python><tkinter>
2022-12-15 18:15:19
1
673
Daniel
74,815,812
6,524,326
Incorrect results of modulo operation between large numbers in R
<p>To solve a puzzle from <a href="https://www.hackerrank.com/challenges/summing-the-n-series/problem?isFullScreen=true" rel="nofollow noreferrer">hackerrank</a>, I'm trying to apply modulo operations between large numbers in R (v4.2.2). However, I get incorrect results when at least one of the operands is very large. For example, <code>52504222585724001 %% 10</code> yields <code>0</code> in R. which is incorrect. However, when I try <code>52504222585724001 % 10</code> in python (v3.9.12) I get the correct result <code>1</code>. So I decided to test some other numbers. I downloaded a set of test cases for which my code was failing and I did <code>n*n mod (10^9 + 7)</code> for each <code>n</code> value.</p> <h2>R code:</h2> <pre><code>summingSeries &lt;- function(n) { return(n^2 %% (10^9 + 7)) } n &lt;- c(229137999, 344936985, 681519110, 494844394, 767088309, 307062702, 306074554, 555026606, 4762607, 231677104) expected &lt;- c( 218194447, 788019571, 43914042, 559130432, 685508198, 299528290, 950527499, 211497519, 425277675, 142106856 ) result &lt;- rep(0L, length(n)) start &lt;- Sys.time() for (i in 1:length(n)){ result[i] &lt;- summingSeries(n[i]) } print(Sys.time() - start) df &lt;- data.frame(expected, result, diff = abs(expected - result)) print(df) </code></pre> <p>I am pasting below the results and the absolute differences with the expected values</p> <pre><code>expected result diff ------------------------- 218194447 218194446 1 788019571 788019570 1 43914042 43914070 28 559130432 559130428 4 685508198 685508205 7 299528290 299528286 4 950527499 950527495 4 211497519 211497515 4 425277675 425277675 0 142106856 142106856 0 </code></pre> <h2>Python3 code:</h2> <pre><code>import numpy as np def summingSeries(n): return(n ** 2 % (10 ** 9 + 7)) n = [229137999, 344936985, 681519110, 494844394, 767088309, 307062702, 306074554, 555026606, 4762607, 231677104] expected = [218194447, 788019571, 43914042, 559130432, 685508198, 299528290, 950527499, 211497519, 425277675, 142106856] result = [0] * len(n) for i in range(0, len(n)): result[i] = summingSeries(n[i]) print(np.array(result) - np.array(expected)) </code></pre> <p>I get the correct results using the above python code. Can someone kindly explain why there are inconsistencies and why R is yielding the wrong results?</p>
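<p>The inconsistency is floating point, not modulo: R's numeric type is a 64-bit double with a 53-bit significand, so integers beyond 2^53 (and certainly n^2 here, around 10^17) silently lose their low digits before <code>%%</code> ever runs, while Python's <code>int</code> is arbitrary precision. A small demonstration:</p> <pre><code>n = 52504222585724001
print(n &gt; 2**53)       # True: beyond the exact-integer range of a double
print(float(n) % 10)   # 0.0 -- the double rounds to ...724000, matching R
print(n % 10)          # 1   -- exact arbitrary-precision arithmetic
</code></pre>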
<python><r><python-3.x>
2022-12-15 18:08:32
1
828
Wasim Aftab
74,815,728
6,346,514
Pandas, read Length only of all pickle files in directory
<p>I read a bunch of pickle files with the code below. I want to loop through each of them and identify the length of each file, i.e. how many records it contains.</p> <p>Two issues:</p> <ol> <li>Concat will combine all my dfs into one, which takes a long time. Is there a way to just read the length?</li> <li>If concat is the way to go, how can I get the length of each file if they all go into one dataframe? I guess the problem here is to identify where each file stops and starts. I could add a column to identify each filename and count on that, I suspect.</li> </ol> <p>What I've tried:</p> <pre><code>import pandas as pd import glob, os files = glob.glob('O:\Stack\Over\Flow\*.pkl') df = pd.concat([pd.read_pickle(fp, compression='xz').assign(New=os.path.basename(fp)) for fp in files]) </code></pre> <p>Any help would be appreciated.</p>
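<p>Pickle has no length header, so each file still has to be loaded, but skipping the <code>concat</code> avoids building one giant frame. A sketch:</p> <pre><code>lengths = {os.path.basename(fp): len(pd.read_pickle(fp, compression='xz'))
           for fp in files}
for name, n in lengths.items():
    print(name, n)
</code></pre>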
<python><pandas>
2022-12-15 17:59:50
2
577
Jonnyboi
74,815,705
10,544,599
How to decode text with special symbols using base64 in python3?
<p>I am trying to decode a list of strings using the base64 module. I'm able to decode some of them, but the ones that include special symbols fail.</p> <pre><code>import base64 # List of string which we are trying to decode encoded_text_list = ['MTA0MDI0','MTA0MDYw','MTA0MDgz','MTA0MzI%3D'] # Iterating and decoding string using base64 for k in encoded_text_list: print(k, base64.b64decode(k).decode()) </code></pre> <p>Output:</p> <pre><code>MTA0MDI0 104024 MTA0MDYw 104060 MTA0MDgz 104083 --------------------------------------------------------------------------- Error Traceback (most recent call last) &lt;ipython-input-60-d1ba00f4e54a&gt; in &lt;module&gt; 2 for k in member_url_list: 3 print(k) ----&gt; 4 print(base64.b64decode(k).decode()) 5 # break /usr/lib/python3.6/base64.py in b64decode(s, altchars, validate) 85 if validate and not re.match(b'^[A-Za-z0-9+/]*={0,2}$', s): 86 raise binascii.Error('Non-base64 digit found') ---&gt; 87 return binascii.a2b_base64(s) 88 89 Error: Incorrect padding </code></pre> <p>The script works well until it reaches the string <em><strong>'MTA0MzI%3D'</strong></em>, where it gives the above error.</p> <p>As the list above comes from URLs, I also tried the parse method of urllib:</p> <pre><code>from urllib.parse import unquote b64_string = 'MTA0MzI%3D' b64_string = unquote(b64_string) # 'MTA0MzI=' b64_string += &quot;=&quot; * ((4 - len(b64_string) % 4) % 4) print(base64.b64decode(b64_string).decode()) </code></pre> <p>Output:</p> <pre><code>10432 </code></pre> <p><strong>Expected Output:</strong></p> <pre><code>104327 </code></pre> <p>Now the output may seem correct, but it isn't: it converts the input from <strong>'MTA0MzI%3D'</strong> to <strong>'MTA0MzI='</strong>, and so the output becomes <strong>'10432'</strong> instead of <strong>'104327'</strong>. The thing is, the string with the symbol decodes perfectly on this <a href="https://www.base64decode.org/" rel="nofollow noreferrer">base64</a> site.</p> <p>I have tried different versions of Python, i.e. Python 2, 3.6, 3.8, etc. I have also tried the codecs module &amp; explored some base64 functions, but with no success. Can someone please help me get this working or suggest any other way to get it done?</p>
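<p>If the expected answer really is <code>104327</code>, then <code>%3D</code> is not URL-encoded padding here; the web decoder appears to simply drop the stray <code>%</code> and decode the remaining characters. A sketch mimicking that permissive behavior (an assumption about how that site works, not documented behavior):</p> <pre><code>import base64
import re

def lenient_b64decode(s):
    s = re.sub(r'[^A-Za-z0-9+/]', '', s)   # drop anything outside the base64 alphabet
    s = s[: len(s) // 4 * 4]               # truncate to whole 4-character groups
    return base64.b64decode(s)

for k in ['MTA0MDI0', 'MTA0MDYw', 'MTA0MDgz', 'MTA0MzI%3D']:
    print(k, lenient_b64decode(k).decode())   # the last one prints 104327
</code></pre>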
<python><python-3.x><encryption><encoding><base64>
2022-12-15 17:57:31
3
379
David
74,815,632
4,321,462
Do not convert tabs to spaces when printing in CPython console
<h4>Problem</h4> <p>The statement</p> <pre class="lang-py prettyprint-override"><code>print(&quot;.\t.&quot;) </code></pre> <p>returns the string <code>. .</code> in the CPython console (the tab is replaced by 7 spaces). How can I preserve the tab character?</p> <p>Copying all output from the console to notepad++ shows the following output (whitespaces made visible): <a href="https://i.sstatic.net/kti8H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kti8H.png" alt="notepad++ including white spaces" /></a></p> <p>The problem may be related to the underlying console software rather than python (see &quot;My research&quot; below)</p> <h4>Setup</h4> <p>I am using Anaconda 3.9 on Windows 10 Enterprise.</p> <p>The problem occurs when</p> <ul> <li>I run Python from the command prompt cmd</li> <li>I run Python from the powershell</li> <li>I run IPython from the powershell</li> <li>I run a Python script from VSCode (I guess it is using the Powershell)</li> <li>I run a Python 2 interpreter not managed by Anaconda</li> </ul> <p>(The list above contains only the tested cases. I have not found an example where it works.)</p> <h4>Use case</h4> <p>I want to print a tab-separated list that can be copied and pasted to tables in other software (such as Excel) that distributes tab-separated lists to individual cells.</p> <h4>My research</h4> <p>The problem seems to go a little deeper than Python. Potentially, the default consoles on Windows (cmd and Powershell) cannot display tab characters. Some threads suggest this:</p> <ul> <li><a href="https://superuser.com/questions/240435/how-do-i-echo-a-tab-char-on-a-command-prompt">How do I echo a TAB char on a command prompt</a></li> <li><a href="https://stackoverflow.com/questions/70144322/copied-and-pasted-tab-characters-not-recognized-by-powershell">Copied-and-pasted tab characters not recognized by Powershell</a></li> </ul> <p>In the provided answers, tabs only work converted to spaces. I wonder if the problem could be circumvented by using a different console / UI.</p> <p>I have also found several similar questions related to specific IDEs or consoles (see e.g. <a href="https://stackoverflow.com/questions/37980943/how-to-prevent-tab-characters-from-being-converted-to-spaces-in-console-output-w">here</a> or <a href="https://github.com/spyder-ide/spyder/issues/10890" rel="nofollow noreferrer">here</a>). However, the issue remained either unresolved or the solutions applied only to the specific software.</p>
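<p>For the stated use case there is a workaround that bypasses the console entirely: write the tab-separated text straight to the clipboard. A sketch with pandas (assumes a clipboard backend is available, which it is on stock Windows):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
df.to_clipboard(sep='\t', index=False)   # paste into Excel: the tabs arrive intact
</code></pre>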
<python><powershell><printing><tabs><console>
2022-12-15 17:49:58
1
2,740
Samufi
74,815,510
5,120,817
Calling OneHotEncoding results into Java Heap Space Error
<p>Running the <code>XGBoost</code> package in <code>H2O</code> throws a Java heap space error, but when the memory is cleared manually it works fine.</p> <p>I often use <code>del df</code>, <code>del something</code>, and <code>import gc; gc.collect()</code> in order to clear memory. Any ideas are appreciated.</p> <pre><code>import h2o from h2o.tree import H2OTree from h2o.estimators import H2OIsolationForestEstimator, H2OXGBoostEstimator encoding = &quot;one_hot_explicit&quot; baseModel = H2OXGBoostEstimator(model_id = modelId, ntrees = 100, max_depth = 3,seed = 0xDECAF, sample_rate = 1, categorical_encoding = encoding, keep_cross_validation_predictions=True, nfolds = 10 ) ## TRAIN DATA baseModel.train(x = predictor_columns, y = &quot;label&quot;, training_frame = train.rbind(valid)) </code></pre> <p>Error Trace:</p> <pre><code>Traceback (most recent call last): File &quot;/docs/code/000_pyGraph/dec_rf_gb_xgb.py&quot;, line 151, in &lt;module&gt; decxgb.xgb_cvs(df=df, year=year, model_path=model_path, File &quot;/docs/code/000_pyGraph/dec_xgb.py&quot;, line 90, in xgb_cvs baseModel.train(x = predictor_columns, y = &quot;label&quot;, training_frame = train.rbind(valid)) File &quot;/home/miniconda3/envs/tf-gpu-mem-day/lib/python3.10/site-packages/h2o/estimators/estimator_base.py&quot;, line 123, in train self._train(parms, verbose=verbose) File &quot;/home/miniconda3/envs/tf-gpu-mem-day/lib/python3.10/site-packages/h2o/estimators/estimator_base.py&quot;, line 215, in _train job.poll(poll_updates=self._print_model_scoring_history if verbose else None) File &quot;/home/miniconda3/envs/tf-gpu-mem-day/lib/python3.10/site-packages/h2o/job.py&quot;, line 90, in poll raise EnvironmentError(&quot;Job with key {} failed with an exception: {}\nstacktrace: &quot; OSError: Job with key $03017f00000132d4ffffffff$_8508e7043b6647f7868aa83a3f6842d4 failed with an exception: DistributedException from /127.0.0.1:54321: 'Java heap space', caused by java.lang.OutOfMemoryError: Java heap space stacktrace: DistributedException from /127.0.0.1:54321: 'Java heap space', caused by java.lang.OutOfMemoryError: Java heap space at water.MRTask.getResult(MRTask.java:660) at water.MRTask.getResult(MRTask.java:670) at water.MRTask.doAll(MRTask.java:530) at water.MRTask.doAll(MRTask.java:412) at water.MRTask.doAll(MRTask.java:397) at water.fvec.Vec.doCopy(Vec.java:514) at water.fvec.Vec.makeCopy(Vec.java:500) at water.fvec.Vec.makeCopy(Vec.java:493) at water.fvec.Vec.makeCopy(Vec.java:487) at water.util.FrameUtils$CategoricalOneHotEncoder$CategoricalOneHotEncoderDriver.compute2(FrameUtils.java:768) at water.H2O$H2OCountedCompleter.compute(H2O.java:1677) at jsr166y.CountedCompleter.exec(CountedCompleter.java:468) at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263) at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:976) at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479) at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104) Caused by: java.lang.OutOfMemoryError: Java heap space </code></pre>
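<p>One knob worth checking before anything else (an assumption, since the cluster start-up isn't shown): the heap is fixed when the H2O JVM starts, and the stack trace points at <code>CategoricalOneHotEncoder</code>, which can multiply the column count dramatically, so the cluster may simply need more memory than the default:</p> <pre><code>import h2o

# size the JVM heap up front; Python-side del/gc.collect() cannot grow it
h2o.init(max_mem_size='16G')
</code></pre>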
<python><java><xgboost><h2o>
2022-12-15 17:38:05
1
975
lpt
74,815,359
2,011,779
use a variable multiple times in a sql statement with pymysql
<p>[edit: removed the where statement]</p> <p>I'm trying to update a table when duplicate keys are passed in. I have a list like</p> <pre><code>examplelist = [[key1, value1], [key2, value2], [key3, value3]] </code></pre> <p>and a table with, for example, <code>key1</code> and <code>key3</code> already in it with different values, so we want to update those rows and insert the key/value pair key2, value2. The table was created with:</p> <pre><code>cursor.execute('CREATE TABLE exampletable (keycolumn FLOAT, valuecolumn FLOAT)') cursor.execute('CREATE INDEX keycolumn_index ON exampletable (keycolumn DESC)') </code></pre> <p>I was trying to use</p> <pre><code>sql = &quot;INSERT INTO exampletable (keycolumn, valuecolumn) VALUES (%s, %s)&quot;\ &quot; ON DUPLICATE KEY UPDATE valuecolumn = VALUES(valuecolumn)&quot; cursor.executemany(sql, examplelist) </code></pre> <p>but I get a bunch of rows with the same keycolumn value.</p> <p>I can just change the input to a list of 3-element lists, but this seems really sloppy; I imagine there's a more elegant way.</p>
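<p>One thing to verify: <code>ON DUPLICATE KEY UPDATE</code> only fires on a <code>PRIMARY KEY</code> or <code>UNIQUE</code> constraint, and the table above only has a plain (non-unique) index, so every insert lands as a new row. A sketch of the schema change:</p> <pre><code>cursor.execute('CREATE TABLE exampletable (keycolumn FLOAT PRIMARY KEY, valuecolumn FLOAT)')
# or keep the table and add: CREATE UNIQUE INDEX keycolumn_index ON exampletable (keycolumn)

sql = (&quot;INSERT INTO exampletable (keycolumn, valuecolumn) VALUES (%s, %s)&quot;
       &quot; ON DUPLICATE KEY UPDATE valuecolumn = VALUES(valuecolumn)&quot;)
cursor.executemany(sql, examplelist)
</code></pre>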
<python><mysql><sql><pymysql>
2022-12-15 17:24:45
1
2,384
hedgedandlevered
74,815,356
1,595,350
What is the correct way of creating a list of 2-pairs and search for on element of it?
<p>Let's assume that we have a list of elements which looks like this:</p> <pre><code>identifier, url cars, /cars/1234 motorcycles, /motorcycles/jd723 yachts, /yachts/324lkaj trucks, /trucks/djfhe </code></pre> <p>The idea behind this is that I have a static list of items which will hardly ever change. I could put them into JSON or even a database, but that would 'cost' time, in my opinion.</p> <p>Therefore I thought about keeping this array/list statically in Python.</p> <p>What I need is a way to search for <code>yachts</code> and get back the second item of the pair. What I have found so far on the net is always based on a two- or multi-dimensional array, picking the n-th item out of it.</p> <p>How would you do this in Python?</p>
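<p>A dict is the natural fit here: static, declared directly in code, and O(1) to search by identifier (a sketch):</p> <pre><code>URLS = {
    'cars': '/cars/1234',
    'motorcycles': '/motorcycles/jd723',
    'yachts': '/yachts/324lkaj',
    'trucks': '/trucks/djfhe',
}

print(URLS['yachts'])       # /yachts/324lkaj
print(URLS.get('planes'))   # None instead of a KeyError for unknown keys
</code></pre>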
<python>
2022-12-15 17:24:24
1
4,326
STORM