Dataset columns (type, observed range):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 characters
QuestionBody: string, 40 to 40.3k characters
Tags: string, 8 to 101 characters
CreationDate: string (datetime), 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 characters (may be null)
74,896,895
15,768,044
Using an existing Enum in new Alembic migration in python
<p>I've created Enum for one model in first migration:</p> <pre><code>def upgrade() -&gt; None: … gender_enum = postgresql.ENUM('MALE', 'FEMALE', 'OTHER', name='genderenum', create_type=False) gender_enum.create(op.get_bind(), checkfirst=True) op.add_column('user', sa.Column('gender', gender_enum, nullable=True)) … def downgrade() -&gt; None: … gender_enum = sa.Enum(name='genderenum') gender_enum.drop(op.get_bind(), checkfirst=True) … </code></pre> <p>How can I use this Enum with other model in new migration? I don't need to create new one. I've tried to create new one with Β«checkfirstΒ», but it just hasn't created.</p> <p>And one more question: I used <code>create_type=False</code>, but type was created in db. Is it ok, what is this arg for?</p>
<python><database><postgresql><alembic>
2022-12-23 06:48:10
1
521
aryadovoy
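For the Alembic question above, a minimal sketch of one way to reuse an already-created PostgreSQL enum in a later migration: build the `postgresql.ENUM` with `create_type=False` and reference it by name, without calling `.create()` again. The `profile` table name here is hypothetical.

```python
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql


def upgrade() -> None:
    # Reference the type the earlier migration created; create_type=False
    # tells SQLAlchemy not to emit CREATE TYPE when the column is added.
    gender_enum = postgresql.ENUM(
        'MALE', 'FEMALE', 'OTHER', name='genderenum', create_type=False
    )
    # 'profile' is a hypothetical second table that reuses the same type.
    op.add_column('profile', sa.Column('gender', gender_enum, nullable=True))


def downgrade() -> None:
    # Only drop the column; the type itself is owned by the first migration.
    op.drop_column('profile', 'gender')
```

On the second question: `create_type=False` only suppresses the implicit CREATE TYPE that SQLAlchemy would otherwise emit when the column is created; the type in the first migration was still created because of the explicit `gender_enum.create(..., checkfirst=True)` call.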
74,896,614
14,624,039
debug with terminal commands in vscode
<p>For some reason, I want to debug my python file in vscode via the command line.</p> <p>I followed this guide. <a href="https://code.visualstudio.com/docs/python/debugging" rel="nofollow noreferrer">https://code.visualstudio.com/docs/python/debugging</a></p> <p>The guide says that I can use this command</p> <pre><code>python -m debugpy --listen 5678 ./myscript.py </code></pre> <p>And then use the following configuration to attach from the VS Code Python extension.</p> <pre><code>{ &quot;name&quot;: &quot;Python: Attach&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;attach&quot;, &quot;connect&quot;: { &quot;host&quot;: &quot;localhost&quot;, &quot;port&quot;: 5678 } } </code></pre> <p>But I'm totally in the dark about how can I use this lauch.json configuration to attach the debugger.</p>
<python><vscode-debugger>
2022-12-23 06:04:23
1
432
Arist12
74,896,585
273,506
pandas replace values of a list column
<p>I have a dataframe like this</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">ID</th> <th style="text-align: left;">Feeback</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">T223</td> <td style="text-align: left;">[Good, Bad, Bad]</td> </tr> <tr> <td style="text-align: left;">T334</td> <td style="text-align: left;">[Average,Good,Good]</td> </tr> </tbody> </table> </div> <pre><code>feedback_dict = {'Good':1, 'Average':2, 'Bad':3} </code></pre> <p>using this dictionary I have to replace Feedback column</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">ID</th> <th style="text-align: left;">Feeback</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">T223</td> <td style="text-align: left;">[1, 3, 3]</td> </tr> <tr> <td style="text-align: left;">T334</td> <td style="text-align: left;">[2,1,1]</td> </tr> </tbody> </table> </div> <p>I tried two way, but none worked, any help will be appreciated.</p> <pre><code>method1: df = df.assign(Feedback=[feedback_dict.get(i,i) for i in list(df['Feedback'])]) method2: df['Feedback'] = df['Feedback'].apply(lambda x : [feedback_dict.get(i,i) for i in list(x)]) </code></pre>
<python><pandas><dataframe>
2022-12-23 05:58:11
4
1,069
Arjun
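For the pandas question above, a minimal sketch assuming the column really holds Python lists; the question's second method is essentially this, so one possible cause of failure is the column name (the sample table spells it "Feeback" while the code uses "Feedback").

```python
import pandas as pd

feedback_dict = {'Good': 1, 'Average': 2, 'Bad': 3}

df = pd.DataFrame({
    'ID': ['T223', 'T334'],
    'Feedback': [['Good', 'Bad', 'Bad'], ['Average', 'Good', 'Good']],
})

# Map every element of every list through the dictionary,
# leaving unknown values unchanged.
df['Feedback'] = df['Feedback'].apply(
    lambda lst: [feedback_dict.get(item, item) for item in lst]
)
print(df)
#      ID   Feedback
# 0  T223  [1, 3, 3]
# 1  T334  [2, 1, 1]
```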
74,896,349
1,230,724
How to share zero copy dataframes between processes with PyArrow
<p>I'm trying to work out how to share data between processes with PyArrow (to hopefully at some stage share pandas DataFrames). I am at a rather experimental (read: Newbie) stage and am trying to figure out how to use PyArrow. I'm a bit stuck and need help.</p> <p>Going through the documentation, I found an <a href="https://arrow.apache.org/docs/python/memory.html#pyarrow-buffer" rel="nofollow noreferrer">example to create a buffer</a></p> <pre class="lang-py prettyprint-override"><code>import time import pyarrow as pa data = b'abcdefghijklmnopqrstuvwxyz' buf = pa.py_buffer(data) print(buf) # &lt;pyarrow.Buffer address=0x7fa5be7d5850 size=26 is_cpu=True is_mutable=False&gt; while True: time.sleep(1) </code></pre> <p>While this process was running, I used the address and size, that the script printed, to (try to) access the buffer in another script:</p> <pre><code>import pyarrow as pa buf = pa.foreign_buffer(0x7fa5be7d5850, size=26) print(buf.to_pybytes()) </code></pre> <p>and received... a segmentation fault - most likely because the script is trying to access memory from another process, which I may require different handling. Is this not possible with PyArrow or is the just the way I am trying to do this? Do I need other libraries? I'd like to avoid serialisation (or writing to disk in general), if possible, but this may or may not be possible. Any pointers are appreciated.</p>
<python><pandas><pyarrow>
2022-12-23 05:15:40
2
8,252
orange
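For the PyArrow question above, a raw pointer from one process is not valid in another process's address space, which is why `pa.foreign_buffer` on a printed address segfaults. A minimal sketch of one way to share the bytes without copying the payload, using the standard library's named shared memory and wrapping it in an Arrow buffer on the consumer side; the block name "arrow_demo" is arbitrary.

```python
# producer.py (sketch)
from multiprocessing import shared_memory

data = b'abcdefghijklmnopqrstuvwxyz'
shm = shared_memory.SharedMemory(name='arrow_demo', create=True, size=len(data))
shm.buf[:len(data)] = data          # write once into the shared block
print('wrote', len(data), 'bytes to', shm.name)
# keep this process alive while consumers read;
# call shm.close() and shm.unlink() when everyone is done
```

```python
# consumer.py (sketch)
from multiprocessing import shared_memory

import pyarrow as pa

shm = shared_memory.SharedMemory(name='arrow_demo')
buf = pa.py_buffer(shm.buf)         # zero-copy Arrow view over shared memory
print(buf.to_pybytes()[:26])        # copies only for printing
shm.close()
```

Exactly one process should call `unlink()` once all readers have closed the block.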
74,896,299
4,414,359
Find if a string contains substring with a number, some distance from another substring
<p>I have a string like</p> <pre><code>s = 'ahlkhsa-anchorawO&amp;B6o77ooiynlhwlkfnbla,)9;}[test:&quot;&quot;ajanchorkh,2Yio98vacpoi [p{7}]test:&quot;2&quot;*iuyanchorap07A(p)a test:&quot;&quot;*^g8p9JiguanchorLI;ntest:&quot;9.3&quot;' </code></pre> <p>i'd like to see if there exists a substring <code>test:&quot;some_number_here&quot;</code> that's at most 5 characters following <code>anchor</code></p> <p>So in this case there is, <code>test:&quot;9.3&quot;</code> is following substring <code>anchor</code> after 5 other character.</p> <p>There is also <code>test:&quot;2&quot;</code> but it's too far away from the <code>anchor</code> before it, so I don't want to count it.</p> <p>(Bonus points if i could pull out the number inside <code>test:&quot;&quot;</code>)</p>
<python><regex><substring>
2022-12-23 05:06:09
1
1,727
Raksha
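For the regex question above, a minimal sketch: allow at most five arbitrary characters between `anchor` and the `test:"..."` token, and capture the number inside the quotes.

```python
import re

s = ('ahlkhsa-anchorawO&B6o77ooiynlhwlkfnbla,)9;}[test:""ajanchorkh,2Yio98vacpoi '
     '[p{7}]test:"2"*iuyanchorap07A(p)a test:""*^g8p9JiguanchorLI;ntest:"9.3"')

# literal "anchor", then 0 to 5 of any character, then test:"<number>"
pattern = re.compile(r'anchor.{0,5}test:"([\d.]+)"')
print(pattern.findall(s))   # ['9.3']
```

The `test:"2"` occurrence is skipped because its nearest preceding `anchor` is more than five characters away, and the empty `test:""` tokens are skipped because the capture group requires at least one digit or dot.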
74,896,124
7,190,950
Python Selenium webdriver.Chrome() sometimes returns "Timeout error from renderer: 600.00" exception
<p>I'm using Selenium to run tests using Chrome Webdriver. Most of the time script works fine, but sometimes it returns a Timeout error when initializing webdriver.Chrome() constructor.</p> <p>Below is the error message I'm receiving.</p> <pre><code> Message: session not created source-app-1 | from timeout: Timed out receiving message from renderer: 600.000 source-app-1 | (Session info: headless chrome=102.0.5005.182) </code></pre> <p>I see this failure at a rate of about 1 in 10 or sometimes about 1 in 20.</p> <p>Below is the code when it sometimes hangs and returns the Timeout exception.</p> <pre><code>from selenium import webdriver self.driver = webdriver.Chrome(executable_path=driver_executable_path, chrome_options=driver_options) </code></pre> <p><strong>ENVIRONMENT</strong></p> <p>The script runs inside the Docker container and below is my dockerfile.</p> <pre><code>FROM nginxinc/nginx-unprivileged:1-alpine ENV PATH=&quot;/scripts:${PATH}&quot; USER root # Install python/pip ENV PYTHONUNBUFFERED=1 RUN apk add --update --no-cache python3=3.9.15-r0 py3-setuptools --repository=https://dl-cdn.alpinelinux.org/alpine/v3.15/main/ RUN ln -sf python3 /usr/bin/python RUN python3 -m ensurepip RUN pip3 install --upgrade --no-cache pip COPY ./requirements.txt /requirements.txt RUN apk add --update --no-cache gcc libffi-dev libc-dev linux-headers openssl-dev cargo rust python3-dev libxml2-dev libxslt-dev jpeg-dev zlib-dev chromium chromium-chromedriver --repository=https://dl-cdn.alpinelinux.org/alpine/v3.15/main/ RUN pip3 install -r /requirements.txt RUN mkdir -p /vol/static RUN chmod 755 /vol/static RUN mkdir /app COPY . /app WORKDIR /app RUN adduser -D user RUN chown -R user:user /vol RUN chmod -R 755 /vol/web CMD [&quot;entrypoint.sh&quot;] </code></pre> <p><strong>Docker Compose</strong></p> <pre><code>version: '3.7' services: app: build: context: . command: sh -c &quot;python start.py&quot; deploy: resources: limits: cpus: &quot;2&quot; memory: 2048M reservations: cpus: &quot;0.25&quot; memory: 128M volumes: - .:/app environment: ... networks: default: external: name: my-network </code></pre>
<python><selenium><selenium-chromedriver>
2022-12-23 04:31:06
0
421
Arman Avetisyan
74,896,106
9,257,578
How to remove elements from json file using python?
<p>I have a json file which looks like this</p> <pre><code>{ &quot;name1&quot;: { &quot;name&quot;: &quot;xyz&quot;, &quot;roll&quot;: &quot;123&quot;, &quot;id&quot;: &quot;A1&quot; }, &quot;name2&quot;:{ &quot;name&quot;: &quot;abc&quot;, &quot;roll&quot;: &quot;456&quot;, &quot;id&quot;: &quot;A2&quot; }, &quot;name3&quot;:{ &quot;name&quot;: &quot;def&quot;, &quot;roll&quot;: &quot;789&quot;, &quot;id&quot;: &quot;B1&quot; } } </code></pre> <p>I want to remove <code>name1</code>, <code>name2</code>, <code>name3</code> and should be looking like this</p> <pre><code>{ { &quot;name&quot;: &quot;xyz&quot;, &quot;roll&quot;: &quot;123&quot;, &quot;id&quot;: &quot;A1&quot; }, { &quot;name&quot;: &quot;abc&quot;, &quot;roll&quot;: &quot;456&quot;, &quot;id&quot;: &quot;A2&quot; }, { &quot;name&quot;: &quot;def&quot;, &quot;roll&quot;: &quot;789&quot;, &quot;id&quot;: &quot;B1&quot; } } </code></pre> <p>Is there any way to do this in python? If this problem is asked earlier then me please refer to one cause I can't find it. Help will be thankful.</p>
<python><json>
2022-12-23 04:26:08
2
533
Neetesshhr
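For the JSON question above, the expected output as written is not valid JSON (an object cannot contain anonymous objects), so the usual approach is to keep only the inner objects as a list. A minimal sketch, assuming the data lives in a hypothetical data.json:

```python
import json

with open('data.json', 'r', encoding='utf-8') as fh:
    data = json.load(fh)

# Drop the outer keys (name1, name2, ...) and keep only the inner objects.
records = list(data.values())

with open('data_flat.json', 'w', encoding='utf-8') as fh:
    json.dump(records, fh, indent=4)

print(records[0])   # {'name': 'xyz', 'roll': '123', 'id': 'A1'}
```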
74,895,923
10,461,632
Create a treemap showing directory structure with plotly graph object
<p>I want to create a treemap that shows the folders in a given directory, including all subfolders and files using <a href="https://plotly.com/python-api-reference/generated/plotly.graph_objects.Treemap.html" rel="nofollow noreferrer">plotly.graph_objects.Treemap</a>. I understand simple examples like this <a href="https://stackoverflow.com/questions/64393535/python-plotly-treemap-ids-format-and-how-to-display-multiple-duplicated-labels-i">one</a> and this <a href="https://plotly.com/python/treemaps/#controlling-text-fontsize-with-uniformtext" rel="nofollow noreferrer">one</a>.</p> <p><strong>Problem:</strong> I can't figure out how to generate the <code>ids</code> column to make my figure render properly. I'm going to have duplicate <code>labels</code>, so I need to use <code>ids</code>. Right now, the figure renders blank.</p> <p><strong>Code:</strong></p> <p>Here's some code to generate a sample directory structure to help you help me:</p> <pre><code>import os folder = 'Documents' for i in range(10): for j in range(100): path = os.path.join(folder, f'folder_{i}', f'sub-folder-{j}') if not os.path.isdir(path): os.makedirs(path) for k in range(20): with open(os.path.join(path, f'file_{k + 1}.txt'), 'w') as file_out: file_out.write(f'Hello from file {k + 1}!\n') </code></pre> <p>Here's the code to calculate the files sizes and create the treemap:</p> <pre><code>import os from pathlib import Path import pandas as pd import plotly.graph_objects as go directory = '[input your directory here]/Documents' def calculate_size(folder): result = [] for root, dirs, files in os.walk(folder): relpath = Path(root).relative_to(Path(folder).parent) # Calculate directory size dir_size = sum(os.path.getsize(os.path.join(root, name)) for name in files) result.append({ 'parents': str(relpath), 'labels': str(Path(root).name), 'size': dir_size, 'ids': str(relpath), }) # Calculate individual file size for f in files: fp = os.path.join(root, f) relpath_fp = Path(fp).relative_to(Path(folder).parent) result.append({ 'parents': str(relpath_fp), 'labels': str(Path(fp).name), 'size': os.path.getsize(fp), 'ids': str(relpath_fp), }) return result result = calculate_size(directory) df = pd.DataFrame(result) # Set root df.loc[df.index == 0, 'parents'] = &quot;&quot; labels = df['labels'].tolist() parents = df['parents'].tolist() ids = df['ids'].tolist() values = df['size'].tolist() fig = go.Figure(go.Treemap( labels = labels, parents = parents, ids = ids, values = values, # maxdepth=3 )) fig.update_traces(root_color=&quot;lightgrey&quot;) fig.update_layout(margin = dict(t=50, l=25, r=25, b=25)) fig.show() </code></pre>
<python><plotly>
2022-12-23 03:47:42
2
788
Simon1
74,895,783
4,435,175
Bruteforce 7z archive with a password list
<p>I have a 7z file and a password list.</p> <p>I want to bruteforce that file with that list.</p> <p>I use Windows 10, Python 3.11.1 and 7z 22.1.0.0</p> <p>My code:</p> <pre><code>path_7z: str = &quot;C:/Program Files/7-Zip/7z.exe&quot; with open(&quot;passwords.txt&quot;, &quot;r&quot;, encoding=&quot;UTF-8&quot;) as file: for pw in file: pw = pw.rstrip(&quot;/n&quot;) #call 7z with parameter {filepath} and {pw}, unpack archive if {pw} is correct, otherwise continue with the next {pw} </code></pre> <p>How can I do the part in the last line?</p>
<python><7zip>
2022-12-23 03:14:43
0
2,980
Vega
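For the 7-Zip question above, a minimal sketch of the missing last line using subprocess: 7z's "t" (test) command exits with code 0 when the supplied password is correct, so the loop can test each candidate and extract with "x" once one succeeds. The archive name is hypothetical, and the newline is stripped with "\n" (the question's `rstrip("/n")` strips the wrong characters).

```python
import subprocess

path_7z: str = "C:/Program Files/7-Zip/7z.exe"
archive = "secret.7z"   # hypothetical archive path

with open("passwords.txt", "r", encoding="UTF-8") as file:
    for pw in file:
        pw = pw.rstrip("\n")
        # Test the archive with this password; output is discarded.
        result = subprocess.run(
            [path_7z, "t", archive, f"-p{pw}", "-y"],
            capture_output=True,
        )
        if result.returncode == 0:
            print(f"Password found: {pw}")
            # Extract using the correct password.
            subprocess.run([path_7z, "x", archive, f"-p{pw}", "-y"],
                           capture_output=True)
            break
```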
74,895,771
11,922,765
Python Dataframe Find the length of maximum element in a dataframe
<p>I am trying to find the length (number of digits) in the maximum value of a data frame. Why? Then I know how much y-axis ticks will extend when plotting this data, and accordingly, I can adjust the plot's left border.</p> <p>My code:</p> <pre><code>df = datetime A B C 2022-08-23 06:12:00 1.91 98.35 1.88 2022-08-23 06:13:00 1.92 92.04 1.77 2022-08-23 06:14:00 132.14 81.64 1.75 # maximum length element max_len = df.round(2).dropna().astype('str').applymap(lambda x:len(x)).max().max() print(max_len) 6 df.plot(figsize=(5,3), use_index=True, colormap=cm.get_cmap('Set1'), alpha=0.5) # plot border for saving left_border = (max_len/100)+0.05 plt.subplots_adjust(left=left_border, right=0.90, top=0.95, bottom=0.25) plt.savefig(save_dir+plot_df.index[i]+'.jpg',dpi=500) plt.show() </code></pre> <p>is there a better way to find the maximum length of the element?</p>
<python><pandas><dataframe><matplotlib>
2022-12-23 03:12:08
1
4,702
Mainland
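For the question above about the widest y-tick label, a minimal sketch that avoids converting the whole frame to strings: take the overall maximum, format it the way the axis will (two decimals), and measure that one string. If negative values can occur, the minimum may produce a wider label and should be checked too.

```python
import pandas as pd

df = pd.DataFrame({
    'A': [1.91, 1.92, 132.14],
    'B': [98.35, 92.04, 81.64],
    'C': [1.88, 1.77, 1.75],
})

# Length of the largest value once rounded to two decimals: '132.14' -> 6
max_len = len(f"{df.max().max():.2f}")
print(max_len)   # 6
```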
74,895,761
19,826,650
Showing multilabel in figures python knn
<p>i have this kind of csv data <a href="https://i.sstatic.net/CoCvp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CoCvp.png" alt="example data" /></a> i want to show them with matplotlib like this figure <a href="https://i.sstatic.net/1I0wj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1I0wj.png" alt="example figure" /></a> with latitude and longitude as x and y.</p> <p>this is my code</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn import datasets from sklearn.model_selection import train_test_split from matplotlib.colors import ListedColormap cmap = ListedColormap(['#FF0000','#00FF00','#0000FF']) df = pd.DataFrame(data) X = df.drop(['Wisata','Link Gambar','Nama','Region'],axis = 1) #this left with only latitude and #longitude y = df['Wisata'] #this is the label X_train,X_test,y_train,y_test = train_test_split(X,y,test_size = 0.2,random_state=1) </code></pre> <p>i've tried this but it doesn't work</p> <pre><code>plt.figure() plt.scatter(X[:,0],X[:,1],c=y,cmap=cmap,edgecolors='k',s=20) plt.show() </code></pre> <p>it gives this error</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) D:\Anaconda\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance) 3628 try: -&gt; 3629 return self._engine.get_loc(casted_key) 3630 except KeyError as err: D:\Anaconda\lib\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc() D:\Anaconda\lib\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc() TypeError: '(slice(None, None, None), 0)' is an invalid key During handling of the above exception, another exception occurred: InvalidIndexError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_12876\2344342603.py in &lt;module&gt; 1 plt.figure() ----&gt; 2 plt.scatter(X[:,0],X[:,1],c=y,cmap=cmap,edgecolors='k',s=20) 3 plt.show() D:\Anaconda\lib\site-packages\pandas\core\frame.py in __getitem__(self, key) 3503 if self.columns.nlevels &gt; 1: 3504 return self._getitem_multilevel(key) -&gt; 3505 indexer = self.columns.get_loc(key) 3506 if is_integer(indexer): 3507 indexer = [indexer] D:\Anaconda\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance) 3634 # InvalidIndexError. Otherwise we fall through and re-raise 3635 # the TypeError. -&gt; 3636 self._check_indexing_error(key) 3637 raise 3638 D:\Anaconda\lib\site-packages\pandas\core\indexes\base.py in _check_indexing_error(self, key) 5649 # if key is not a scalar, directly raise an error (the code below 5650 # would convert to numpy arrays and raise later any way) - GH29926 -&gt; 5651 raise InvalidIndexError(key) 5652 5653 @cache_readonly InvalidIndexError: (slice(None, None, None), 0) &lt;Figure size 640x480 with 0 Axes&gt; </code></pre> <p>im still new to python and can't read the error and fix it because the error is so long.. any suggestions how to fix it?</p>
<python><matplotlib><machine-learning>
2022-12-23 03:10:44
1
377
Jessen Jie
74,895,640
6,077,239
How to do regression (simple linear for example) in polars select or groupby context?
<p>I am using polars in place of pandas. I am quite amazed by the speed and lazy computation/evaluation. Right now, there are a lot of methods on lazy dataframe, but they can only drive me so far.</p> <p>So, I am wondering what is the best way to use polars in combination with other tools to achieve more complicated operations, such as regression/model fitting.</p> <p>To be more specific, I will give an example involving linear regression.</p> <p>Assume I have a polars dataframe with columns day, y, x1 and x2, and I want to generate a series, which is the residual of regressing y on x1 and x2 group by day. I have included the code example as follows and how it can be solved using pandas and statsmodels. How can I get the same result with the most efficiency using idiomatic polars?</p> <pre><code>import pandas as pd import statsmodels.api as sm def regress_resid(df, yvar, xvars): result = sm.OLS(df[yvar], sm.add_constant(df[xvars])).fit() return result.resid df = pd.DataFrame( { &quot;day&quot;: [1, 1, 1, 1, 1, 2, 2, 2, 2, 2], &quot;y&quot;: [1, 6, 3, 2, 8, 4, 5, 2, 7, 3], &quot;x1&quot;: [1, 8, 2, 3, 5, 2, 1, 2, 7, 3], &quot;x2&quot;: [8, 5, 3, 6, 3, 7, 3, 2, 9, 1], } ) df.groupby(&quot;day&quot;).apply(regress_resid, &quot;y&quot;, [&quot;x1&quot;, &quot;x2&quot;]) # day # 1 0 0.772431 # 1 -0.689233 # 2 -1.167210 # 3 -0.827896 # 4 1.911909 # 2 5 -0.851691 # 6 1.719451 # 7 -1.167727 # 8 0.354871 # 9 -0.054905 </code></pre> <p>Thanks for your help.</p>
<python><python-polars>
2022-12-23 02:39:33
3
1,153
lebesgue
74,895,526
12,506,984
ValueError: The truth value of an array with more than one element is ambiguous Use a.any() or a.all() in masked extracting code
<p>I am trying to use a mask to mark the elements in the vector nodes that are in the matrix e. For that I am using a mask for the vector nodes, and trying to set to false the corresponding position, so that at the end I can extract all elements in nodes that are left with a TRUE in the corresponding mask array</p> <pre><code>def SortedInteriorNodes(p,e,t): # e is a matrix with two rows, p a matrix with 2 rows, t a matrix with 3 rows nn=p.shape[1] # number of nodes nt=t.shape[1] # number of triangles ne=e.shape[1] # number of edges = number of boundary nodes nodes = np.arange(start=0, stop=nn) # 0 1 2 3 ....np-1 mask = np.full((1, nn), True) for col in range(ne): for row in range(2): k = e[row, col] if (mask[k] == True): mask[e[row,col]]= False # I end up with a vector of boolean where the true indicate the position of Interior Nodes InteriorNodes = nodes[mask] return InteriorNodes </code></pre> <p>I get this error &quot;ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()&quot;in the line &quot;if (mask[k] == True):&quot;</p> <p>I ran the debugger and saw that mask is an array of booleans so mask[k] is a single value not an array so I can't make sense of the error, why would I have to use any or all if I am working with one element at a time not with an array? Any help will be appreciated</p>
<python><arrays>
2022-12-23 02:11:35
1
333
some_math_guy
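For the NumPy question above, `np.full((1, nn), True)` builds a 2-D array with one row, so `mask[k]` is an entire row rather than a single boolean, and comparing a whole row with `== True` is what raises the "truth value of an array" error. A minimal sketch of the fix, with a small hypothetical edge matrix `e`:

```python
import numpy as np

nn = 6
e = np.array([[0, 2],
              [4, 2]])                # hypothetical 2 x ne boundary-edge matrix

mask = np.full(nn, True)              # 1-D, so mask[k] is a single bool
for col in range(e.shape[1]):
    for row in range(2):
        k = e[row, col]
        if mask[k]:
            mask[k] = False           # mark boundary nodes as not interior

nodes = np.arange(nn)
interior_nodes = nodes[mask]
print(interior_nodes)                 # [1 3 5]
```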
74,895,516
4,377,095
Add a new column for each item in a list inside a cell in a Pandas DataFrame
<p>Assuming I have this table</p> <pre><code>import pandas as pd df_test = pd.DataFrame() df_test = df_test.assign(A=[[1,2,3], [1,2], [], [1,2,3,4]]) df_test +────────────+ | list | +────────────+ | [1,2,3] | | [1,2] | | [] | | [1,2,3,4] | +────────────+ </code></pre> <p>What I want to do is to add a column for every item in a list inside a row.</p> <p>The output I would like to have looks like this</p> <pre><code>+────────────+───+───+───+───+ | list | | | | | +────────────+───+───+───+───+ | [1,2,3] | 1 | 2 | 3 | | | [1,2] | 1 | 2 | | | | [] | | | | | | [1,2,3,4] | 1 | 2 | 3 | 4 | +────────────+───+───+───+───+ </code></pre>
<python><python-3.x><pandas>
2022-12-23 02:09:39
3
537
Led
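For the pandas question above, a minimal sketch: expand the list column into its own DataFrame (shorter lists are padded with NaN) and concatenate it alongside the original.

```python
import pandas as pd

df_test = pd.DataFrame({'A': [[1, 2, 3], [1, 2], [], [1, 2, 3, 4]]})

# One new column per list position; missing positions become NaN.
expanded = pd.DataFrame(df_test['A'].tolist(), index=df_test.index)
out = pd.concat([df_test, expanded], axis=1)
print(out)
#               A    0    1    2    3
# 0     [1, 2, 3]  1.0  2.0  3.0  NaN
# 1        [1, 2]  1.0  2.0  NaN  NaN
# 2            []  NaN  NaN  NaN  NaN
# 3  [1, 2, 3, 4]  1.0  2.0  3.0  4.0
```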
74,895,441
9,206,962
Patching Function That Gets Imported In a Separate File
<pre><code># a.py from b import my_func def do_something(): my_func() # b.py from c import my_other_func # patch this while testing a.do_something() def my_func(): my_other_func() </code></pre> <p>I have a unit test for <code>do_something</code> and I need to mock out something that gets imported in another file where a function used in <code>do_something</code> is defined. I've tried patching it a number of ways, such as: <code>mock.patch(&quot;a.b.my_other_func&quot;)</code>, <code>mock.patch(&quot;b.my_other_func&quot;)</code>, but it hasn't gotten the naming correct yet. How would you mock <code>my_other_func</code> in a test again <code>do_something</code> in this case? I could just patch <code>b.my_func</code> directly, but I was hoping to be able to use it as is but just mock the underlying db call (which is happening in the <code>my_other_func</code> method).</p>
<python><pytest><patch><pytest-mock>
2022-12-23 01:50:51
2
417
michjo
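For the patching question above, the usual rule is to patch the name in the namespace where it is looked up: because b.py does `from c import my_other_func`, the call inside `my_func` resolves through module b, so the patch target is "b.my_other_func" (or "mypkg.b.my_other_func" if the module lives inside a package). A minimal pytest-style sketch, assuming the modules are importable as a and b:

```python
from unittest import mock

import a


def test_do_something_mocks_db_call():
    # Patch the copy of the name bound in module b, which my_func uses at call time.
    with mock.patch("b.my_other_func") as fake_other:
        a.do_something()
    fake_other.assert_called_once()
```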
74,895,435
1,229,736
Generator with For In only returning first character instead of the whole string
<p>I want this generator to return these characters in a loop: &quot;&gt;&lt;&gt;&lt;&gt;&lt;&quot; instead it's only returning the first character &quot;&gt;&gt;&gt;&gt;&gt;&gt;&quot;</p> <p>if I replace yield with print it works correctly, why is it only returning the first character?</p> <pre><code>def gn(): while True: for char in &quot;&gt;&lt;&quot;: yield char print([next(gn()) for _ in range(10)]) </code></pre> <p>I've tracked it to a problem in the For In part of the loop, but I still don't see what I'm doing wrong</p> <pre><code>def gn2(): for char in [&quot;&gt;&quot;,&quot;&lt;&quot;]: yield char print([next(gn2()) for _ in range(10)]) </code></pre>
<python><loops><generator>
2022-12-23 01:49:45
1
829
Chris Rudd
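For the generator question above, each call to `gn()` builds a brand-new generator, so `next(gn())` always restarts at the first ">". A minimal sketch: create the generator once and keep pulling from it, or use itertools.cycle for the same effect.

```python
from itertools import cycle, islice


def gn():
    while True:
        for char in "><":
            yield char


g = gn()                                   # one generator, advanced repeatedly
print([next(g) for _ in range(10)])        # ['>', '<', '>', '<', '>', '<', '>', '<', '>', '<']

print(list(islice(cycle("><"), 10)))       # same result without a custom generator
```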
74,895,414
9,297,601
How can I generate instances of a class with inter-dependent attributes?
<p>Assume this simple case: (reposted and lightly edited from <a href="https://groups.google.com/g/hypothesis-users/c/munEwoJDUCY/m/7BWnti9OAwAJ" rel="nofollow noreferrer">the Hypothesis mailing list</a>)</p> <pre class="lang-py prettyprint-override"><code>@dataclass class A: names: list[str] ages: dict[str, float] </code></pre> <p>How could I write a strategy which generates objects of type A but with the restriction that the keys in the ages dict are taken from the list names (which are, in turn, randomly generated)?</p>
<python><python-hypothesis>
2022-12-23 01:43:50
1
3,132
Zac Hatfield-Dodds
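For the Hypothesis question above, one common pattern is a composite strategy: draw the names first, then draw the ages dict with keys sampled from those names. A minimal sketch, with field types matching the dataclass in the question:

```python
from dataclasses import dataclass

from hypothesis import given, strategies as st


@dataclass
class A:
    names: list[str]
    ages: dict[str, float]


@st.composite
def a_instances(draw):
    # Draw the names first, then build ages whose keys come from that list.
    names = draw(st.lists(st.text(min_size=1), min_size=1, unique=True))
    ages = draw(st.dictionaries(st.sampled_from(names), st.floats(allow_nan=False)))
    return A(names=names, ages=ages)


@given(a_instances())
def test_ages_keys_come_from_names(a):
    assert set(a.ages) <= set(a.names)
```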
74,895,292
3,321,579
Why do breakpoints in my function with a yield do not break?
<p>I am writing Python in VSCode. I have a file with two functions, and a second file that calls those function. When I put breakpoints in one of the functions, the breakpoint does not hit.</p> <p>Here is test2.py that has the two functions...</p> <pre><code>def func1(): for i in range(10): yield i def func2(): for i in range(10): print(i) </code></pre> <p>Here is test1.py for which the debugger is being launched from...</p> <pre><code>import test2 test2.func1() test2.func2() </code></pre> <p>When I put a breakpoint on the for loop in both func1 and func2, then run the debugger from test1.py, the breakpoint for func1 is never hit, but the breakpoint in func2 is hit.</p> <p>Why is this?</p>
<python>
2022-12-23 01:13:01
2
1,947
Scorb
74,895,071
8,705,841
Python find broken symlinks caused by Git
<p>I'm currently working on a script that is supposed to go through a cloned Git repo and remove any existing symlinks (broken or not) before recreating them for the user to ease with project initialization.</p> <p>The solved problems:</p> <ul> <li>Finding symbolic links (broken or unbroken) in Python is <a href="https://stackoverflow.com/questions/12890762/how-to-check-if-file-is-not-symlink-in-python">very</a> <a href="https://stackoverflow.com/questions/33232417/how-to-get-symlink-target-in-python">well</a> <a href="https://gist.github.com/seanh/229454" rel="nofollow noreferrer">documented</a>.</li> <li>GitHub/GitLab breaking symlinks upon downloading/cloning/updating repositories is also <a href="https://stackoverflow.com/questions/5917249/git-symbolic-links-in-windows/42137273#42137273">very</a> <a href="https://stackoverflow.com/questions/47259316/symlinks-not-recognized-when-pulled-from-git">well</a> <a href="https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-config.html" rel="nofollow noreferrer">documented</a> as is how to fix this problem. Tl;dr: Git will download symlinks within the repo as plain text files (with no extension) containing only the symlink path if certain <code>config</code> flags are not set properly.</li> </ul> <h3>The unsolved problem:</h3> <p>My problem is that developers may download this repo without realizing the issues with Git, and end up with the symbolic links &quot;checked out as small plain files that contain the link text&quot; which is completely undetectable (as far as I can tell) when parsing the cloned files/directories (via existing base libraries). Running <code>os.stat()</code> on one of these broken symlink files returns information as though it were a normal text file:</p> <pre class="lang-python prettyprint-override"><code>os.stat_result(st_mode=33206, st_ino=14073748835637675, st_dev=2149440935, st_nlink=1, st_uid=0, st_gid=0, st_size=42, st_atime=1671662717, st_mtime=1671662699, st_ctime=1671662699) </code></pre> <p>The <code>st_mode</code> information only indicates that it is a normal text file- <code>100666</code> (the first 3 digits are the file type and the last 3 are the UNIX-style permissions). It should show up as <code>120000</code>.</p> <p>The <code>os.path.islink()</code> function only ever returns <code>False</code>.</p> <p>THE CONFUSING PART is that when I run <code>git ls-files -s ./microservices/service/symlink_file</code> it gives <code>1200000</code> as the mode bits which, according to <a href="https://git-scm.com/docs/git-ls-files#Documentation/git-ls-files.txt--s" rel="nofollow noreferrer">the documentation</a>, indicates that this file is a symlink. However I cannot figure out how to see this information from within Python.</p> <p>I've tried a bunch of things to try and find and delete existing symlinks. Here's the base method that just finds symlink directories and then deletes them:</p> <pre class="lang-python prettyprint-override"><code>def clearsymlinks(cwd: str = &quot;&quot;): &quot;&quot;&quot; Crawls starting from root directory and deletes all symlinks within the directory tree. 
&quot;&quot;&quot; if not cwd: cwd = os.getcwd() print(f&quot;Clearing symlinks - starting from {cwd}&quot;) # Create a queue cwd_dirs: list[str] = [cwd] while len(cwd_dirs) &gt; 0: processing_dir: str = cwd_dirs.pop() # print(f&quot;Processing {processing_dir}&quot;) # Only when debugging - else it's too much output spam for child_dir in os.listdir(processing_dir): child_dir_path = os.path.join(processing_dir, child_dir) # check if current item is a directory if not os.path.isdir(child_dir_path): if os.path.islink(child_dir_path): print(f&quot;-- Found symbolic link file {child_dir_path} - removing.\n&quot;) os.remove(child_dir_path) # skip the dir checking continue # Check if the child dir is a symlink if os.path.islink(child_dir_path): print(f&quot;-- Found symlink directory {child_dir_path} - removing.&quot;) os.remove(child_dir_path) else: # Add the child dir to the queue cwd_dirs.append(child_dir_path) </code></pre> <p>After deleting symlinks I run <code>os.symlink(symlink_src, symlink_dst)</code> and generally run into the following error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;C:\Users\me\my_repo\remakesymlinks.py&quot;, line 123, in main os.symlink(symlink_src, symlink_dst) FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\me\\my_repo\\SharedPy' -&gt; 'C:\\Users\\me\\my_repo\\microservices\\service\\symlink_file' </code></pre> <p>A workaround to specifically this error (inside the create symlink method) is:</p> <pre class="lang-python prettyprint-override"><code>try: os.symlink(symlink_src, symlink_dst) except FileExistsError: os.remove(symlink_dst) os.symlink(symlink_src, symlink_dst) </code></pre> <p>But this is not ideal because it doesn't prevent a huge list of defunct/broken symlinks from piling up in the directory. I <em>should</em> be able to find <strong>any</strong> symlinks (working, broken, non-existent, etc.) and then delete them.</p> <p>I have a list of the symlinks that should be created by my script, but extracting the list of targets from this list is also a workaround that also causes a 'symlink-leak'. 
Below is how I'm currently finding the broken symlink purely for testing purposes.</p> <pre class="lang-python prettyprint-override"><code>if not os.path.isdir(child_dir_path): if os.path.basename(child_dir_path) in [s.symlink_install_to for s in dirs_to_process]: print(f&quot;-- Found symlink file {child_dir_path} - removing.&quot;) os.remove(child_dir_path) # skip the dir checking continue </code></pre> <p>A rudimentary solution where I filter for only 'text/plain' files with exactly 1 line (since checking anything else is pointless) and trying to determine whether that single line is just a file path (this seems excessive though):</p> <pre class="lang-python prettyprint-override"><code># Check if Git downloaded the symlink as a plain text file (undetectable broken symlink) if not os.path.isdir(child_dir_path): try: if magic.Magic(mime = True, uncompress = True).from_file(child_dir_path) == 'text/plain': with open(child_dir_path, 'r') as file: for i, line in enumerate(file): if i &gt;= 1: raise StopIteration else: # this function is directly copied from https://stackoverflow.com/a/34102855/8705841 if is_path_exists_or_creatable(line): print(f&quot;-- Found broken Git link file '{child_dir_path}' - removing.&quot;) print(f&quot;\tContents: \&quot;{line}\&quot;&quot;) # os.remove(child_dir_path) raise StopIteration except (StopIteration, UnicodeDecodeError, magic.MagicException): file.close() continue </code></pre> <p>Clearly this solution would require a lot of refactoring (9 indents is pretty ridiculous) if it's the only viable option. Problem with this solution (currently) is that it also tries to delete any single-line files with a string that does not break pathname requirements- i.e. <code>import _virtualenv</code>, <code>some random test string</code>, <code>project-name</code>. Filtering out those files with spaces, or files without slashes, could potentially work but this still feels like chasing edge cases instead of solving the actual problem.</p> <p>I could <em>potentially</em> rewrite this script in Bash wherein I could, in addition to <a href="https://askubuntu.com/questions/1422411/remove-all-symlinks-in-a-folder">existing symlink search/destroy code</a>, parse the results from <code>git ls-files -s ...</code> and then delete any files with the <code>120000</code> tag. But is this method feasible and/or reliable? There should be a way to do this from within Python since Bash isn't going to run on every OS.</p> <hr /> <p>Note: file names have been redacted/changed after copy-paste for privacy, they shouldn't matter anyways since they are generated dynamically by the path searching functions</p>
<python><windows><git><filesystems><symlink>
2022-12-23 00:25:41
0
467
elkshadow5
74,895,049
17,365,694
TypeError: PaystackPayment() missing 1 positional argument: "request"
<p>I'm getting the error</p> <pre><code>PaystackPayment() missing 1 required positional argument: 'request' </code></pre> <p>at line 246 of my <code>core.views</code> of my Ecommerce project when trying to access the function PaystackPayment(). I don't know what is wrong.</p> <p>views:</p> <pre><code>def PaystackPayment(request): try: order = Order.objects.get(user=request.user, ordered=False) secret_key = settings.PAYSTACK_SECRET_KEY ref = create_ref_code() amount = int(order.get_total() * 100) client_credentials = request.user.email paystack_call = TransactionResource( secret_key, ref) response = paystack_call.initialize( amount, client_credentials ) authorization_url = response['data']['authorization_url'] # create payment payment = PaystackPayment() payment.user = request.user payment.ref = ref payment.email = client_credentials payment.amount = int(order.get_total()) payment.save() # assign the payment to the order order.ordered = True order.email = client_credentials order.ref_code = ref order.paystack_payment = payment.amount order.save() # send confirmation order email to the user subject = f'New Order Made by {request.user.username} today totaled {int(order.get_total())} ' message = 'Hello there ' + \ ', Thanks for your order. Your order has been received and it is being processed as we shall get back to you shortly. Thank you once again!' email_from = settings.EMAIL_HOST_USER recipient_list = [request.user.email, ] send_mail(subject, message, email_from, recipient_list) messages.success(request, 'Your order was successful') return redirect(authorization_url) except ObjectDoesNotExist: messages.info( request, &quot;&quot;&quot; Their was an error when you where possibly entering the checkout or payment fields. You were not charged, try again! &quot;&quot;&quot;) return redirect('core:checkout') </code></pre> <p>urls:</p> <pre><code>from django.urls import path from .views import (PurchaseSearch, PaystackPayment, Checkout) app_name = 'core' urlpatterns = [ path('checkout/', Checkout.as_view(), name='checkout'), path('make-payment/', PaystackPayment, name='payment') </code></pre> <p>model:</p> <pre><code>class PaystackPayment(models.Model): user = models.ForeignKey(User, on_delete=models.SET_NULL, blank=True, null=True) amount = models.PositiveIntegerField() time_stamp = models.DateTimeField(auto_now_add=True) ref = models.CharField(max_length=20) email = models.EmailField() payment_verified = models.BooleanField(default=False) def __str__(self) -&gt; str: return f'PaystackPayment: {self.amount}' </code></pre>
<python><django><django-models><django-views><django-templates>
2022-12-23 00:21:10
1
474
Blaisemart
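For the Django question above, one likely source of the error is a name clash: the view function and the model are both called PaystackPayment, so `payment = PaystackPayment()` inside the view calls the view itself (which requires a request argument) instead of instantiating the model. A sketch of only the changed lines, importing the model under an alias; the surrounding checkout logic is as in the question.

```python
# views.py (fragment): rename the import so model and view no longer clash
from .models import PaystackPayment as PaystackPaymentModel


def PaystackPayment(request):
    # ... checkout logic from the question ...
    payment = PaystackPaymentModel()          # model instance, not a recursive view call
    payment.user = request.user
    payment.ref = create_ref_code()           # helper from the question
    payment.email = request.user.email
    payment.amount = int(order.get_total())   # 'order' as in the question
    payment.save()
    # ... rest of the view unchanged ...
```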
74,895,015
13,236,293
Sort a nested list inside and outside in Python
<p>I am wondering how to sort a nested list in ascending order from both inside the nested list and outside the nested list. Any list comprehension way I can do this easily?</p> <pre><code>abc = [[6, 2], [7, 1]] </code></pre> <p>Current working but no good:</p> <pre><code>list(map(sorted, abc)) [[2, 6], [1, 7]] </code></pre> <p>Expected output</p> <pre><code>[[1, 7], [2, 6]] </code></pre>
<python>
2022-12-23 00:14:40
1
1,664
codedancer
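For the sorting question above, a minimal sketch: sort each inner list, then sort the outer list (lists compare element-wise, so [1, 7] sorts before [2, 6]).

```python
abc = [[6, 2], [7, 1]]

result = sorted(sorted(inner) for inner in abc)
print(result)   # [[1, 7], [2, 6]]
```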
74,894,984
10,308,255
How to flatten a column containing a list of dates as strings into just date?
<p>I have a dataframe containing a column that has a list of dates stored as strings:</p> <pre><code># sample dataframe data = [[1, [&quot;2019-08-02 08:30:56&quot;]], [2, [&quot;2020-08-02 08:30:56&quot;]]] df = pd.DataFrame(data, columns=[&quot;items&quot;, &quot;dates&quot;]) df[&quot;dates&quot;] = df[&quot;dates&quot;].astype(str) df </code></pre> <pre><code> items dates 0 1 ['2019-08-02 08:30:56'] 1 2 ['2020-08-02 08:30:56'] </code></pre> <p>I would like to do several things:</p> <ul> <li>Convert from a list to a string.</li> <li>Convert the string to a date.</li> <li>Eliminate the time stamp.</li> </ul> <p>So that the final dataset would look like this:</p> <pre><code> items dates 0 1 2019-08-02 1 2 2020-08-02 </code></pre> <p>I am able to remove the list brackets by doing this:</p> <pre><code>df[&quot;dates_2&quot;] = df[&quot;dates&quot;].apply(lambda x: x[1:-1]) </code></pre> <p>But I am wondering if there is a better way to do all of these things in one step?</p>
<python><pandas><list><dataframe><datetime>
2022-12-23 00:04:59
3
781
user
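For the dates question above, a minimal sketch assuming the column really holds one-element lists (the sample's `.astype(str)` step, which turns them into plain strings, is skipped here): take the first element, parse it, and keep only the date part, all in one step.

```python
import pandas as pd

data = [[1, ["2019-08-02 08:30:56"]], [2, ["2020-08-02 08:30:56"]]]
df = pd.DataFrame(data, columns=["items", "dates"])

# first list element -> datetime -> date (drops the timestamp)
df["dates"] = pd.to_datetime(df["dates"].str[0]).dt.date
print(df)
#    items       dates
# 0      1  2019-08-02
# 1      2  2020-08-02
```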
74,894,927
3,321,579
Why does VSCode Python code completion not work for my firestore client variable?
<p>I have the following python snippet...</p> <pre><code>import firebase_admin from firebase_admin import credentials from firebase_admin import firestore cred = credentials.Certificate(&quot;./cert.json&quot;) firebase_admin.initialize_app(cred) client = firestore.client() client. </code></pre> <p>When I type the dot on the last line, no code completion options come up. On the 'firestore' variable, there are code completion options that come up.</p> <p>I have the Python and Pylance extensions installed in VSCode. Is this a configuration issue, or is there something about what the client() method is returning, that it is not possible to infer its type?</p>
<python><visual-studio-code><pylance>
2022-12-22 23:54:22
1
1,947
Scorb
74,894,926
9,331,134
mlflow: INVALID_PARAMETER_VALUE Parameters can not be modified after a value is set
<p>I get the following exception raised, help me please</p> <pre><code>mlflow.exceptions.RestException: INVALID_PARAMETER_VALUE: Response: { 'Error': { 'Code': 'ValidationError', 'Severity': None, 'Message': 'Parameters can not be modified after a value is set.', 'MessageFormat': None, 'MessageParameters': None, 'ReferenceCode': None, 'DetailsUri': None, 'Target': None, 'Details': [], 'InnerError': None, 'DebugInfo': None, 'AdditionalInfo': None }, 'Correlation': { 'operation': '13b09a7297c0e920a41eeaa728c4cb4e', 'request': 'ae8a75a01a168e65' }, 'Environment': 'westeurope', 'Location': 'westeurope', 'Time': '2022-12-22T23:37:07.4626959+00:00', 'ComponentName': 'mlflow', 'error_code': 'INVALID_PARAMETER_VALUE' } </code></pre> <p>my code pretty much looks like this</p> <pre><code>mlflow.start_run() mlflow.log_params({'key1':1, 'key2':2}) mlflow.log_param('key3', 3) //Exception raised </code></pre> <p>PS: the mlflow version i am using is 2.1.0</p>
<python><mlflow><mlops>
2022-12-22 23:54:20
0
1,082
Kaushik J
74,894,806
12,198,552
Unable to import a python package in Pyodide
<p>I am trying to import a custom package in <code>Pyodide</code> but it is not working.</p> <p>The directory structure is</p> <pre><code>index.html package/ __init__.py </code></pre> <p>And my index.html looks like this</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;script src=&quot;https://cdn.jsdelivr.net/pyodide/v0.19.1/full/pyodide.js&quot;&gt;&lt;/script&gt; &lt;/head&gt; &lt;body&gt; Pyodide test page &lt;br /&gt; Open your browser console to see Pyodide output &lt;script type=&quot;text/javascript&quot;&gt; async function main() { let pyodide = await loadPyodide({ indexURL: &quot;https://cdn.jsdelivr.net/pyodide/v0.19.1/full/&quot;, }); let mypkg = pyodide.pyimport(&quot;package&quot;); mypkg.main(); } main(); &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Does anyone know why this might be happening?</p> <p>I am getting the error</p> <p><code>ModuleNotFoundError: No module named 'package'</code></p>
<python><webassembly><pyodide>
2022-12-22 23:29:46
2
646
Sanskar Jethi
74,894,738
6,742,108
Get request returns unfriendly response Python
<p>I am trying to perform a get request on <a href="https://www.tcgplayer.com/" rel="nofollow noreferrer">TCG Player</a> via <code>Requests</code> on <code>Python</code>. I checked the sites <code>robots.txt</code> which specifies:</p> <pre><code>User-agent: * Crawl-Delay: 10 Allow: / Sitemap: https://www.tcgplayer.com/sitemap/index.xml </code></pre> <p>This is my first time seeing a <code>robots.txt</code> file.</p> <p>My code is as follows:</p> <pre><code>import requests url = &quot;http://www.tcgplayer.com&quot; r = requests.get(url) print(r.text) </code></pre> <p>I cannot include <code>r.text</code> in my post because the character limit would be exceeded.</p> <p>I would have expected to be recieve the HTML content of the webpage, but I got an 'unfriendly' response instead. What is the meaning of the text above? Is there a way to get the HTML so I can scrape the site?</p> <p>By 'unfriendly' I mean:</p> <blockquote> <p>The HTML that is returned does not match the HTML that is produced by typing the URL into my web browser.</p> </blockquote>
<python><python-requests>
2022-12-22 23:13:53
1
516
karafar
74,894,719
9,591,001
How can I find images inside a specific class?
<p>I have been trying to use <code>Selenium</code> to do a webscraping.</p> <p>I need to download two lists of information: name and picture.</p> <p>I used this code to download the list of names:</p> <pre class="lang-py prettyprint-override"><code>### click on right div browser.find_element(&quot;xpath&quot;,&quot;/html/body/div[1]/div/div/div[3]/div/div[2]/div[2]/div/div&quot;).click() ### colect names nomes_ls_contatos = browser.find_elements(By.CLASS_NAME, &quot;zoWT4&quot;) len(nomes_ls_contatos) # 21 nomes_ls_contatos[1].text </code></pre> <p>To download the pictures, I tried:</p> <pre class="lang-py prettyprint-override"><code>### collect pictures fotos_ls_contatos = browser.find_elements(By.TAG_NAME, &quot;img&quot;) len(fotos_ls_contatos) # 24 print(fotos_ls_contatos[1].get_attribute(&quot;src&quot;)) </code></pre> <p>But in this way, it's downloading more three pictures which I don't need.</p> <p>I found out the right pictures is inside a class <code>_3GlyB</code>. And tried:</p> <pre class="lang-py prettyprint-override"><code>fotos_ls_contatos = browser.find_elements(By.CLASS_NAME, &quot;_3GlyB&quot;) len(fotos_ls_contatos) # 20 fotos_ls_contatos[1].text </code></pre> <p>But it returns an empty string.</p> <p>So, how can I can I do it?</p> <p>Something like <code>browser.find_elements(By.CLASS_NAME, &quot;_3GlyB&quot;).find_elements(By.TAG_NAME, &quot;img&quot;)</code></p> <p>PS: With R, this worked:</p> <pre class="lang-r prettyprint-override"><code> # collect names nome_remetente &lt;- lista_mensagens[[1]] |&gt; rvest::read_html() |&gt; rvest::html_elements(&quot;._3vPI2&quot;) |&gt; rvest::html_elements(&quot;.zoWT4&quot;) |&gt; rvest::html_element(&quot;span&quot;) |&gt; rvest::html_text() |&gt; as.data.frame() |&gt; dplyr::rename(nome = 1) |&gt; dplyr::filter(!is.na(nome)) # collect pictures imagem_remetente &lt;- lista_mensagens[[1]] |&gt; rvest::read_html() |&gt; rvest::html_elements(&quot;._1Oe6M&quot;) |&gt; rvest::html_elements(&quot;._2EU3r&quot;) |&gt; rvest::html_elements(&quot;._3GlyB&quot;) |&gt; rvest::html_elements(&quot;img&quot;) |&gt; rvest::html_attr(&quot;src&quot;) |&gt; as.data.frame() |&gt; dplyr::rename(imagem = 1) </code></pre>
<python><selenium>
2022-12-22 23:10:12
1
556
RxT
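For the Selenium question above, a minimal sketch: a CSS descendant selector matches only the `img` tags inside elements of class `_3GlyB`, and the image URL is read from the `src` attribute (image elements have no `.text`). The `browser` object is the already-created webdriver from the question.

```python
from selenium.webdriver.common.by import By

fotos_ls_contatos = browser.find_elements(By.CSS_SELECTOR, "._3GlyB img")
srcs = [img.get_attribute("src") for img in fotos_ls_contatos]
print(len(srcs), srcs[:3])
```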
74,894,371
13,454,049
Pylint redundant comparison for NaN
<p>Why does Pylint think this is a redundant comparison? Is this not the fastest way to check for NaN?</p> <blockquote> <p>Refactor: R0124<br /> Redundant comparison - value_1 != value_1<br /> Redundant comparison - value_2 != value_2</p> </blockquote> <p>How else am I supposed to check if two values are equal including when they're nan?</p> <pre class="lang-py prettyprint-override"><code>NaN = float(&quot;NaN&quot;) def compare(value_1, value_2): match_nan = value_1 != value_1 print(match_nan and value_2 != value_2 or value_1 == value_2) compare(1, 1) compare(1, NaN) compare(NaN, 1) compare(NaN, NaN) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>True False False True </code></pre> <p>Now, sure math.is_nan is a better solution if you're working with custom classes:</p> <pre class="lang-py prettyprint-override"><code>from math import isnan class NotEQ: def __eq__(self, other): return False not_eq = NotEQ() print(not_eq != not_eq) print(isnan(not_eq)) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>True ... TypeError: must be real number, not NotEQ </code></pre> <p>I'm writing a JSON patcher, and I don't think the normal behaviour is very useful when you want to be able to remove them from lists, or raise an error is two values aren't equal (but allowing NaN and NaN)</p>
<python><pylint>
2022-12-22 22:14:40
2
1,205
Nice Zombies
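For the Pylint question above, a minimal sketch of an equality check that treats NaN == NaN as true without the `x != x` idiom that triggers R0124; `math.isnan` is only called when both values are floats, so objects like the NotEQ example fall back to ordinary `==`.

```python
from math import isnan

NaN = float("nan")


def eq_with_nan(a, b):
    """Equality that also matches two NaN values."""
    if isinstance(a, float) and isinstance(b, float) and isnan(a) and isnan(b):
        return True
    return a == b


print(eq_with_nan(1, 1))        # True
print(eq_with_nan(1, NaN))      # False
print(eq_with_nan(NaN, 1))      # False
print(eq_with_nan(NaN, NaN))    # True
```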
74,894,318
12,349,734
How can I match a pattern, and then everything upto that pattern again? So, match all the words and acronyms in my below example
<h4>Context</h4> <p>I have the following paragraph:</p> <pre class="lang-py prettyprint-override"><code>text = &quot;&quot;&quot; Χ‘Χ‘Χ™Χ”Χ›Χ &quot;Χ‘ - Χ‘Χ‘Χ™Χͺ Χ”Χ›Χ Χ‘Χͺ Χ“Χ•&quot;Χ— - Χ“Χ™ΧŸ Χ•Χ—Χ©Χ‘Χ•ΧŸ Χ”Χͺ&quot;Χ“ - Χ”ΧͺΧ™Χ§Χ•Χ Χ™ דיקנא Χ‘Χ’Χ•&quot;Χ¨ - Χ‘Χ’Χ©ΧžΧ™Χ•Χͺ Χ•Χ¨Χ•Χ—Χ Χ™Χ•Χͺ Χ”&quot;א - Χ”' ΧΧœΧ•Χ§Χ™Χ›Χ Χ”ΧͺΧžΧ™' - Χ”ΧͺΧžΧ™Χ”Χ” Χ‘Χ”Χ &quot;ל - Χ‘Χ”Χ Χ–Χ›Χ¨ ΧœΧ’Χ™Χœ Χ”&quot;א - Χ”' ΧΧœΧ§Χ™Χš ואח&quot;Χ› - ואחר Χ›Χš Χ‘Χ”Χ©Χ™Χ΄Χͺ - בהשם Χ™ΧͺΧ‘Χ¨Χš Χ”&quot;Χ” - Χ”Χ¨Χ™ הוא / הוא Χ”Χ“Χ™ΧŸ ואΧͺ&quot;Χ” - ואיגוד ΧͺΧœΧžΧ™Χ“Χ™ &quot;&quot;&quot; </code></pre> <p>this paragraph is combined with Hebrew words and their acronyms.</p> <p>A word contains quotation marks (<code>&quot;</code>).</p> <p>So for example, some words would be:</p> <pre><code>[ 'Χ‘Χ‘Χ™Χ”Χ›Χ &quot;Χ‘', 'Χ“Χ•&quot;Χ—', 'Χ”Χͺ&quot;Χ“' ] </code></pre> <p>Now, I'm able to match all the words with this regex:</p> <pre><code>(\b[\u05D0-\u05EA]*\&quot;\b[\u05D0-\u05EA]*\b) </code></pre> <p><a href="https://i.sstatic.net/pE4W2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pE4W2.png" alt="enter image description here" /></a></p> <h4>Question</h4> <p>But how can I also match all the corresponding acronyms as a separate group? (the acronyms are what's not matched, so not the green in the picture).</p> <p>Example acronyms are:</p> <pre class="lang-py prettyprint-override"><code>[ 'Χ‘Χ‘Χ™Χͺ Χ”Χ›Χ Χ‘Χͺ', 'Χ“Χ™ΧŸ Χ•Χ—Χ©Χ‘Χ•ΧŸ', 'Χ”ΧͺΧ™Χ§Χ•Χ Χ™ דיקנא' ] </code></pre> <h4>Expected output</h4> <p>The expected output should be a dictionary with the Words as <code>keys</code> and the Acronyms as <code>values</code>:</p> <pre class="lang-py prettyprint-override"><code>{ 'Χ‘Χ‘Χ™Χ”Χ›Χ Χ‘': 'Χ‘Χ‘Χ™Χͺ Χ”Χ›Χ Χ‘Χͺ', 'Χ“Χ•&quot;Χ—': 'Χ“Χ™ΧŸ Χ•Χ—Χ©Χ‘Χ•ΧŸ', 'Χ”Χͺ&quot;Χ“': 'Χ”ΧͺΧ™Χ§Χ•Χ Χ™ דיקנא' } </code></pre> <h4>My attempt</h4> <p>What I tried was to match all the words (as above picture):</p> <pre><code>(\b[\u05D0-\u05EA]*\&quot;\b[\u05D0-\u05EA]*\b) </code></pre> <p>and then match everything until the pattern appears again with <code>.*\1</code>, so the entire regex would be:</p> <pre><code>(\b[\u05D0-\u05EA]*\&quot;\b[\u05D0-\u05EA]*\b).*\1 </code></pre> <p>But as you can see, that doesn't work:</p> <p><a href="https://i.sstatic.net/AdFtS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AdFtS.png" alt="enter image description here" /></a></p> <ul> <li>How can I match the words and acronyms to compose a dictionary with the words/acronyms?</li> </ul> <h4>Note</h4> <p>When you print the output, it might be printed in Left-to-right order. But it should really be from Right to left. So if you want to print from right to left, see this answer:</p> <p><a href="https://stackoverflow.com/a/57937042/12349734">right-to-left languages in Python</a></p>
<python><regex><match><python-re>
2022-12-22 22:08:24
3
20,556
MendelG
74,894,286
7,267,480
tensorflow installation/usage problem under wsl - can't see GPU and libraries
<p>Tried to install Tensorflow under WSL20 (Ubuntu) using pip.</p> <p>installed automatically using pip install tensorflow, without any error messages.</p> <p>When I am trying to run simple script like this:</p> <pre><code>import tensorflow as tf import os os.environ['TF_CPP_MIN_LOG_LEVEL']='2' print(tf.reduce_sum(tf.random.normal([1000, 1000]))) print(tf.config.list_physical_devices('GPU')) </code></pre> <p>I got a large warning message:</p> <pre><code>2022-12-22 22:50:25.297540: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-12-22 22:50:25.409607: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. 2022-12-22 22:50:25.412608: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2022-12-22 22:50:25.412658: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 2022-12-22 22:50:25.854668: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory 2022-12-22 22:50:25.854941: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory 2022-12-22 22:50:25.854964: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. 2022-12-22 22:50:26.679236: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node Your kernel may have been built without NUMA support. 
2022-12-22 22:50:26.679366: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2022-12-22 22:50:26.679425: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublas.so.11'; dlerror: libcublas.so.11: cannot open shared object file: No such file or directory 2022-12-22 22:50:26.679460: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublasLt.so.11'; dlerror: libcublasLt.so.11: cannot open shared object file: No such file or directory 2022-12-22 22:50:26.679496: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcufft.so.10'; dlerror: libcufft.so.10: cannot open shared object file: No such file or directory 2022-12-22 22:50:26.679536: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcurand.so.10'; dlerror: libcurand.so.10: cannot open shared object file: No such file or directory 2022-12-22 22:50:26.679571: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory 2022-12-22 22:50:26.679609: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory 2022-12-22 22:50:26.679633: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory 2022-12-22 22:50:26.679640: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1934] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... 2022-12-22 22:50:26.680041: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. tf.Tensor(395.61298, shape=(), dtype=float32) [] </code></pre> <p>So I can see that tf can't use some libraries and can't see GPU I have.</p> <p>When I run nvidia-smi to check if I have a driver for GPU it shows:</p> <p>nvidia-smi Thu Dec 22 22:35:48 2022</p> <pre><code>+-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.60.02 Driver Version: 526.98 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... 
On | 00000000:01:00.0 Off | N/A | | N/A 50C P8 3W / N/A | 0MiB / 4096MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 29 G /Xwayland N/A | +-----------------------------------------------------------------------------+ </code></pre> <p>What should I do to enable GPU for Tensorflow and fix the situation with warnings?</p>
<python><linux><tensorflow>
2022-12-22 22:03:46
2
496
twistfire
74,894,119
5,112,032
Is it possible to pass an xcom into a python variable?
<p>I am simply trying to get the <code>return</code> value from an xcom into a python variable, something like this:</p> <pre><code>datasets = S3ListPrefixesOperator( task_id='list_s3_prefixes', bucket = get_bucket(), prefix = 'my_key/', delimiter = '/' ) bucket_keys = task_instance.xcom_pull(task_ids='list_s3_prefixes') </code></pre> <p>what id like to do then is to loop through each bucket key and create a <code>S3Copy</code> based on each bucket.</p> <p>How do i get this accomplished?</p>
<python><airflow>
2022-12-22 21:41:19
0
2,522
eljusticiero67
74,894,030
19,797,660
Numpy How to make a moving(growing) sum of table contents without a for loop?
<p>Due to the fact english is not my first language it is very hard to me to explain simply the problem I am trying to solve in the topic, and thus I am sorry.</p> <p>So instead of trying to explain with bare words I am going to give an example.</p> <p>Let's say we have an array that is instantiated like this:</p> <pre><code>weight = np.arange(1, (n + 1)).astype('float64') </code></pre> <p>So the array looks like this:</p> <pre><code>[ 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.] </code></pre> <p>Now what I want to do is to have an array of moving sums(let's call it <code>norm</code>), summarizing the array <code>norm</code> and operations would look like this:</p> <pre><code>index, norm(new array), weight, operation 0 1 1 0+1 = 1 1 3 2 0+1+2 = 3 2 6 3 0+1+2+3 = 6 3 10 4 0+1+2+3+4 = 10 . . . . . . . . . . . . 9 55 10 0+1+2+3+...+10 = 55 </code></pre> <p>I hope it is understandable. How do I achieve this result without looping through the weight array?</p>
<python><numpy>
2022-12-22 21:29:43
1
329
Jakub Szurlej
74,893,967
7,267,480
ANN for object identification: how to prepare input data for learning (signals with varying length) and what structure to use?
<p>I am trying to artificial neural network as an instrument to identify some specific &quot;peaks&quot; in noised signals and get their coordinates. I want to use keras for that purpose.</p> <p>But I don't get how to prepare data for learning, can you suggest some example similar to my task or provide a link for the material?</p> <p>Currently I have signals that are presented as csv files y_i(x_i) where i stand for the number of point (I am reading each file as a dataframe and converting it to numpy arrays for processing).</p> <p>For each signal I have a file in csv format where I have x | y values in a row. Let's say it's called <code>signal1.csv</code></p> <p>I have &gt;500k of signals.</p> <p>An example of a signal is presented on the next figure:</p> <p><a href="https://i.sstatic.net/H0XPM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H0XPM.png" alt="Signal form form to consider" /></a></p> <p>The green lines visualize the actual peak positions (I have separate files for each signal which stand for actual peak positions for each corresponding signal - it's manually evaluated by experts). Let's say <code>signal1_solution.csv</code> - each peak has it's own set of parameters, and one of them is peak position.</p> <p>I want to learn ANN to find peaks in raw signals.</p> <p><strong>Supposed approach.</strong> The signal will be given to the input of a ANN (two arrays, x &amp; y - with similar number of points inside each, I can also provide additional parameters also for ANN: like average distance between peaks or some known params for that group of signals but I don't think it's necessary from the start).</p> <p>Is it relatively large number of inputs? (20000 floats) Maybe it's too much for that purpose and what can be done in this case?</p> <p>After processing the signal by ANN it must return an array of calculated positions of peaks - and here I don't know the number of outputs, e.g. one signal will have 10 peaks - so the ANN must return 10 floats, and the other will have 110 peaks - so the ANN must return 110 floats. I can probably predict the maximum number of peaks in output just by calculating statistically (e.g. in the current data available for learning it is 170) and it really depends on the range of x of a signal - the wider the range - the larger number of peaks.</p> <p>Each Signal consists of varying number of points, some can have 100 or 1000 points, 45% has more than 6000 points - up to 100000 points, the grid is nonuniform, so x[i]-x[i-1]&lt;&gt;x[i+1]-x[i].</p> <p>At the same time each signal has varying number of peaks one signal has 50 peaks, another only 10 peaks - it depends on a distance between separate peaks expressed in x-coordinates.</p> <p>There are complexed set of rules that are used by experts to estimate positions of peaks. One of them is the use of known statistical distributions of peaks parameters on the observable interval of X (e.g. distance between neighboring peaks). I suppose this information can be extracted during the learning process.</p> <p>Maybe you can suggest how to prepare the data for learning in this case or label existing?</p> <p>I had an idea just to label point by point (for each point of a signal I will mark if it belongs to the peak or not). 
But it's a problem because of the grid (the solution contain coordinate of a peak between some points of a signal and it can be on a relatively large distance to the nearest point in the signal).</p> <p><strong>UPDATE 1:</strong> As I said - all signals are stored as separate csv-s in folder.</p> <p>An example of of one signal is shown in the first figure, here is the corresponding dataframe:</p> <pre><code>one_example_signal = pd.read_csv(f, index_col=0) E exp_trans exp_trans_unc marked_points index 4883 20.005036 0.710289 0.057905 0 4882 20.012073 0.780100 0.060669 0 4881 20.019113 0.808801 0.063798 0 4880 20.026157 0.676023 0.053775 0 4879 20.033205 1.006920 0.072483 0 </code></pre> <p>The informative columns are: <code>E</code> - it's actually x-coordinate on the plot, <code>exp_trans</code> - it's an y coordinate on the plot and the <code>marked_points</code> - it's a predefined 'point class'. I need to explain the point class column for better understanding. Full csv can be found here: <a href="https://drive.google.com/file/d/11sHeDj6DiKWgQ4cj56Ts4OPChPdAfYA-/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/11sHeDj6DiKWgQ4cj56Ts4OPChPdAfYA-/view?usp=sharing</a></p> <p><a href="https://i.sstatic.net/PJy1O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PJy1O.png" alt="Visualized example of one signal on the input of ANN" /></a></p> <p><a href="https://i.sstatic.net/M5U8I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M5U8I.png" alt="Region zoomed for better explanation of data specifics" /></a></p> <p>I have built a simple labeler that assigns a class to each point of the signal - e.g. points of clearly defined peaks have class 1, points nearest to the real peak left have class 2, points nearest from the right side has class 3.. Points that don't belong to the peak has class 0. They all are marked on the figure with various colors. Vertical lines help to understand where the real peaks are.</p> <p>For the signal on the fig. statistics is as follows (we have 4884 pairs of points for that specific signal).</p> <pre><code>0 4779 2 38 3 37 1 30 </code></pre> <p>So 4779 points are labeled as &quot;not peaks&quot;, 38 points are labeled as &quot;left boundaries&quot;, &quot;right boundaries&quot; - 37, and the closest to the real peaks - 30 points labeled (it's done using predefined model).</p> <p>The idea was to feed ANN with the signal array: E - 4884 values corresponding to the E-coordinate and 4884 values of exp_trans (9768 points total). The ANN outputs 4884 values that classifies each point of the input signal - it's a peak or not. If that makes sense :)</p>
<python><keras><neural-network>
2022-12-22 21:21:01
0
496
twistfire
74,893,922
1,391,441
Sublime Text 4: folding Python functions with a line break
<p>I'm running Sublime Text Build 4143. Given a function with parameters that spill over the 80 character limit like so:</p> <pre><code>def func(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6): &quot;&quot;&quot; &quot;&quot;&quot; print(&quot;hello world&quot;) return </code></pre> <p>ST will show a <code>PEP8 E501: line too long</code> warning and highlight the line (which is fine):</p> <p><a href="https://i.sstatic.net/AktLP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AktLP.png" alt="enter image description here" /></a></p> <p>But I can fold this function appropriately:</p> <p><a href="https://i.sstatic.net/3SLnv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3SLnv.png" alt="enter image description here" /></a></p> <p>If I modify it to avoid the PEP8 warning:</p> <p><a href="https://i.sstatic.net/PoDou.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PoDou.png" alt="enter image description here" /></a></p> <p>I can no longer fold it:</p> <p><a href="https://i.sstatic.net/tSdDU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tSdDU.png" alt="enter image description here" /></a></p> <p>This changed in the last update I think, because I used to be able to fold these functions without issues. How can I get around ths?</p>
<python><sublimetext4>
2022-12-22 21:15:10
2
42,941
Gabriel
74,893,906
6,057,371
pandas nested dict to flat dataframe
<p>I have a dict:</p> <pre><code>d = {12 : [8,2,3,4], 15 : [6,1,9,2]} </code></pre> <p>I want to transform it to a flat dataframe:</p> <pre><code>df = val 0 1 2 3 12 8 2 3 4 15 6 1 9 2 </code></pre> <p>How can I do this?</p>
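<p>A minimal sketch (assuming the dict keys should become the index, as in the desired output): <code>DataFrame.from_dict</code> with <code>orient='index'</code> does this directly.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

d = {12: [8, 2, 3, 4], 15: [6, 1, 9, 2]}
df = pd.DataFrame.from_dict(d, orient='index')
print(df)
#     0  1  2  3
# 12  8  2  3  4
# 15  6  1  9  2
</code></pre>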
<python><pandas><dataframe><data-science>
2022-12-22 21:13:06
0
2,050
Cranjis
74,893,893
1,011,926
Colima, PyCharm, and AWS SAM - cannot find pydevd.py
<p>Running a Mac M1 trying to start AWS SAM inside a Docker container through PyCharm fails with</p> <pre class="lang-bash prettyprint-override"><code>START RequestId: 35db17a5-d684-4ffa-b3ad-d50e253561b7 Version: $LATEST /var/lang/bin/python3.9: can't open file '/tmp/lambci_debug_files/pydevd.py': [Errno 2] No such file or directory 22 Dec 2022 21:06:55,372 [ERROR] (rapid) Init failed error=Runtime exited with error: exit status 2 InvokeID= /var/lang/bin/python3.9: can't open file '/tmp/lambci_debug_files/pydevd.py': [Errno 2] No such file or directory END RequestId: 45c424e4-0012-40a3-a049-928c3c838518 REPORT RequestId: 45c424e4-0012-40a3-a049-928c3c838518 Init Duration: 2.56 ms Duration: 358.87 ms Billed Duration: 359 ms Memory Size: 128 MB Max Memory Used: 128 MB </code></pre> <p>I have colima running succesfully and I breifly see the container start up there so PyCharm is doing the right thing, but how do I solve the error its complaning about?</p> <p>I can only assume its a permission issue but my understanding with Colima is by default its mounting my local directory as read/write</p> <p>The output of <code>colima status</code></p> <pre class="lang-bash prettyprint-override"><code>INFO[0000] colima is running using QEMU INFO[0000] arch: x86_64 INFO[0000] runtime: docker INFO[0000] mountType: 9p INFO[0000] socket: unix:///Users/chritu07/.colima/default/docker.sock </code></pre> <p>I've tried a different number of solutions that I've seen around, the most common one is to do</p> <pre class="lang-bash prettyprint-override"><code>colima start -c 4 -m 12 -a x86_64 --mount-type 9p --mount /Applications:w </code></pre> <p>Passing the mount endpoint to the <code>/Applications</code> directory which mimics the Docker Shared Files settings in the desktop client. This doesn't seem to have any effect though.</p>
<python><docker><pycharm><colima>
2022-12-22 21:11:39
2
3,677
Chris
74,893,820
3,423,825
Django's DRF has_object_permission method not called with get_object
<p>I'm scratching my head trying to understand why the <code>has_object_permission</code> below has no effect, because the documentation says that this method should be executed with <code>get_object</code>. What could be the reason?</p> <pre><code>@permission_classes([HasViewObjectPermission]) class IndividualDetailsView(RetrieveAPIView): serializer_class = IndividualSerializer lookup_url_kwarg = &quot;pk&quot; def get_object(self): pk = self.kwargs.get(self.lookup_url_kwarg) return Individual.objects.get(pk=pk) class HasViewObjectPermission(permissions.BasePermission): def has_object_permission(self, request, view, obj): return False </code></pre>
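<p>A likely explanation, with a minimal sketch: <code>has_object_permission</code> is only called when the view invokes <code>check_object_permissions</code>. The stock <code>get_object()</code> of the generic views does that for you, but a custom override has to call it explicitly.</p> <pre class="lang-py prettyprint-override"><code>def get_object(self):
    pk = self.kwargs.get(self.lookup_url_kwarg)
    obj = Individual.objects.get(pk=pk)
    # this call is what triggers has_object_permission on each permission class
    self.check_object_permissions(self.request, obj)
    return obj
</code></pre>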
<python><django><django-rest-framework>
2022-12-22 21:04:30
1
1,948
Florent
74,893,742
10,357,604
How to solve AttributeError: module 'numpy' has no attribute 'bool'?
<p>I'm using a conda environment with Python version 3.9.7, pip 22.3.1, numpy 1.24.0, gluoncv 0.10.5.post0, mxnet 1.7.0.post2</p> <p><code>from gluoncv import data, utils</code> gives the error:</p> <pre><code>C:\Users\std\anaconda3\envs\myenv\lib\site-packages\mxnet\numpy\utils.py:37: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar. (This may have returned Python scalars in past versions bool = onp.bool --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[1], line 3 1 #import cv2 2 #import os ----&gt; 3 from gluoncv import data, utils #does not work File ~\anaconda3\envs\myenv\lib\site-packages\gluoncv\__init__.py:16 14 _found_mxnet = _found_pytorch = False 15 try: ---&gt; 16 _require_mxnet_version('1.4.0', '2.0.0') 17 from . import data 18 from . import model_zoo File ~\anaconda3\envs\myenv\lib\site-packages\gluoncv\check.py:6, in _require_mxnet_version(mx_version, max_mx_version) 4 def _require_mxnet_version(mx_version, max_mx_version='2.0.0'): 5 try: ----&gt; 6 import mxnet as mx 7 from distutils.version import LooseVersion 8 if LooseVersion(mx.__version__) &lt; LooseVersion(mx_version) or \ 9 LooseVersion(mx.__version__) &gt;= LooseVersion(max_mx_version): File ~\anaconda3\envs\myenv\lib\site-packages\mxnet\__init__.py:33 30 # version info 31 __version__ = base.__version__ ---&gt; 33 from . import contrib 34 from . import ndarray 35 from . import ndarray as nd File ~\anaconda3\envs\myenv\lib\site-packages\mxnet\contrib\__init__.py:30 27 from . import autograd 28 from . import tensorboard ---&gt; 30 from . import text 31 from . import onnx 32 from . import io File ~\anaconda3\envs\myenv\lib\site-packages\mxnet\contrib\text\__init__.py:23 21 from . import utils 22 from . import vocab ---&gt; 23 from . import embedding File ~\anaconda3\envs\myenv\lib\site-packages\mxnet\contrib\text\embedding.py:36 34 from ... import base 35 from ...util import is_np_array ---&gt; 36 from ... import numpy as _mx_np 37 from ... import numpy_extension as _mx_npx 40 def register(embedding_cls): File ~\anaconda3\envs\myenv\lib\site-packages\mxnet\numpy\__init__.py:23 21 from . import random 22 from . import linalg ---&gt; 23 from .multiarray import * # pylint: disable=wildcard-import 24 from . import _op 25 from . import _register File ~\anaconda3\envs\myenv\lib\site-packages\mxnet\numpy\multiarray.py:47 45 from ..ndarray.numpy import _internal as _npi 46 from ..ndarray.ndarray import _storage_type, from_numpy ---&gt; 47 from .utils import _get_np_op 48 from .fallback import * # pylint: disable=wildcard-import,unused-wildcard-import 49 from . import fallback File ~\anaconda3\envs\myenv\lib\site-packages\mxnet\numpy\utils.py:37 35 int64 = onp.int64 36 bool_ = onp.bool_ ---&gt; 37 bool = onp.bool 39 pi = onp.pi 40 inf = onp.inf File ~\anaconda3\envs\myenv\lib\site-packages\numpy\__init__.py:284, in __getattr__(attr) 281 from .testing import Tester 282 return Tester --&gt; 284 raise AttributeError(&quot;module {!r} has no attribute &quot; 285 &quot;{!r}&quot;.format(__name__, attr)) AttributeError: module 'numpy' has no attribute 'bool' </code></pre>
<python><python-3.x><numpy><attributeerror>
2022-12-22 20:53:45
8
1,355
thestruggleisreal
74,893,662
8,973,609
Transpose pandas DF based on value data type
<p>I have pandas <code>DataFrame</code> A. I am struggling to transform this into my desired format, see <code>DataFrame</code> B. I tried <code>pivot</code> or <code>melt</code> but I am not sure how I could make it conditional (<code>string</code> values to <code>FIELD_STR_VALUE</code>, <code>numeric</code> values to <code>FIELD_NUM_VALUE</code>). I was hoping you could point me in the right direction.</p> <p>A: Input DataFrame</p> <pre><code>|FIELD_A |FIELD_B |FIELD_C |FIELD_D | |--------|--------|--------|--------| |123123 |8 |a |23423 | |123124 |7 |c |6464 | |123144 |99 |x |234 | </code></pre> <p>B: Desired output DataFrame</p> <pre><code>|ID |FIELD_A |FIELD_NAME |FIELD_STR_VALUE |FIELD_NUM_VALUE | |---|--------|-----------|----------------|----------------| |1 |123123 |B | |8 | |2 |123123 |C |a | | |3 |123123 |D | |23423 | |4 |123124 |B | |7 | |5 |123124 |C |c | | |6 |123124 |D | |6464 | |7 |123144 |B | |99 | |8 |123144 |C |x | | |9 |123144 |D | |234 | </code></pre>
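<p>A minimal sketch of one way to do it: melt first, then split the single value column into the string/numeric columns with <code>pd.to_numeric(..., errors='coerce')</code>.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

long_df = df.melt(id_vars='FIELD_A', var_name='FIELD_NAME', value_name='value')
long_df['FIELD_NAME'] = long_df['FIELD_NAME'].str.replace('FIELD_', '', regex=False)

nums = pd.to_numeric(long_df['value'], errors='coerce')        # strings become NaN
long_df['FIELD_STR_VALUE'] = long_df['value'].where(nums.isna())
long_df['FIELD_NUM_VALUE'] = nums

out = (long_df.drop(columns='value')
              .sort_values(['FIELD_A', 'FIELD_NAME'])
              .reset_index(drop=True))
out.index = out.index + 1   # 1-based ID as in the desired output
</code></pre>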
<python><pandas><dataframe>
2022-12-22 20:42:51
3
507
konichiwa
74,893,648
9,973,902
How to add a timestamp and error messages to Python rotating log?
<p>I created a rotating log for a Python project but can't seem to add timestamps for the debug log messages or any error messages:</p> <pre><code>logger = logging.getLogger(&quot;Rotating Log&quot;) logger.setLevel(logging.DEBUG) path=&quot;Logs\Process.log&quot; handler = RotatingFileHandler(path, maxBytes=2000000,backupCount=5) logger.addHandler(handler) </code></pre> <p>This is how I am adding messages to the log. Only the text I put in quotes appears in the log, but I'd like a timestamp on these messages, and if the code errors out I would want that message logged as well:</p> <pre><code>logger.debug(&quot;Debug log message&quot;) </code></pre>
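<p>A minimal sketch of the usual fix: attach a <code>Formatter</code> whose format string includes <code>%(asctime)s</code>, and use <code>logger.exception</code> (or <code>logger.error(..., exc_info=True)</code>) so tracebacks end up in the file too.</p> <pre class="lang-py prettyprint-override"><code>import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger('Rotating Log')
logger.setLevel(logging.DEBUG)

handler = RotatingFileHandler('Logs/Process.log', maxBytes=2000000, backupCount=5)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(name)s: %(message)s'))
logger.addHandler(handler)

logger.debug('Debug log message')              # now prefixed with a timestamp

try:
    1 / 0
except ZeroDivisionError:
    logger.exception('Something went wrong')   # message plus full traceback
</code></pre>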
<python><logging>
2022-12-22 20:40:49
1
341
Riley Spotton
74,893,568
14,397,434
Display all prime numbers within a range
<pre><code># Write a program to display all prime numbers within a range start = 25 end = 50 for num in range(start, end + 1): if num &gt; 1: # all prime #s are greater than 1 for i in range(2,num): if (num % i) == 0: break else: print(num) </code></pre> <p>Why is it that the <code>else</code> is written directly under the second <code>for</code>?</p> <p>Is it that it's not necessarily under the <code>for</code> but instead, <em>outside</em> of the for-loop? Therefore, running <em>every</em> item in the range and checking the ones that are <em>not</em> prime in the for-loop as well as the ones that <em>are</em> during the <code>else</code>?</p> <p>If that's the case, then why is it that there is a break? Doesn't the break immediately stop the entire process and keep the loop from ever reaching the <code>else</code> statement? Or does it only stop the <em>current</em> for-loop, allowing the <code>else</code> statement to run?</p> <p>I suppose I just need help understanding what's going on.</p>
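<p>A small self-contained demo of the behaviour in question: the <code>else</code> belongs to the <code>for</code>, it runs only when that loop finishes without hitting <code>break</code>, and <code>break</code> only exits the inner loop (the outer loop keeps going).</p> <pre class="lang-py prettyprint-override"><code>for i in range(2, 9):
    if 9 % i == 0:
        print(9, 'is divisible by', i)        # break skips the else below
        break
else:
    print(9, 'has no divisor in the range')   # runs only if no break happened
</code></pre>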
<python><for-loop><if-statement><range>
2022-12-22 20:29:47
1
407
Antonio
74,893,381
8,584,739
error "unmatched group" when using re.sub in Python 2.7
<p>I have a list of strings. Each element represents a field as key value separated by space:</p> <pre><code>listA = [ 'abcd1-2 4d4e', 'xyz0-1 551', 'foo 3ea', 'bar1 2bd', 'mc-mqisd0-2 77a' ] </code></pre> <h3>Behavior</h3> <p>I need to return a <code>dict</code> out of this list with expanding the keys like <code>'xyz0-1'</code> by the range denoted by 0-1 into multiple keys like <code>abcd1</code> and <code>abcd2</code> with the same value like <code>4d4e</code>.</p> <p>It should run as part of an Ansible plugin, where Python 2.7 is used.</p> <h3>Expected</h3> <p>The end result would look like the dict below:</p> <pre><code>{ abcd1: 4d4e, abcd2: 4d4e, xyz0: 551, xyz1: 551, foo: 3ea, bar1: 2bd, mc-mqisd0: 77a, mc-mqisd1: 77a, mc-mqisd2: 77a, } </code></pre> <h3>Code</h3> <p>I have created below function. It is working with Python 3.</p> <pre class="lang-py prettyprint-override"><code> def listFln(listA): import re fL = [] for i in listA: aL = i.split()[0] bL = i.split()[1] comp = re.sub('^(.+?)(\d+-\d+)?$',r'\1',aL) cmpCountR = re.sub('^(.+?)(\d+-\d+)?$',r'\2',aL) if cmpCountR.strip(): nStart = int(cmpCountR.split('-')[0]) nEnd = int(cmpCountR.split('-')[1]) for j in range(nStart,nEnd+1): fL.append(comp + str(j) + ' ' + bL) else: fL.append(i) return(dict([k.split() for k in fL])) </code></pre> <h3>Error</h3> <p>In lower python versions like Python 2.7. this code throws an &quot;unmatched group&quot; error:</p> <pre><code> cmpCountR = re.sub('^(.+?)(\d+-\d+)?$',r'\2',aL) File &quot;/usr/lib64/python2.7/re.py&quot;, line 151, in sub return _compile(pattern, flags).sub(repl, string, count) File &quot;/usr/lib64/python2.7/re.py&quot;, line 275, in filter return sre_parse.expand_template(template, match) File &quot;/usr/lib64/python2.7/sre_parse.py&quot;, line 800, in expand_template raise error, &quot;unmatched group&quot; </code></pre> <p>Anything wrong with the regex here?</p>
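<p>A minimal sketch of a workaround: in Python 2.7, <code>re.sub</code> raises &quot;unmatched group&quot; when the replacement references an optional group that did not take part in the match (Python 3.5+ substitutes an empty string instead). Matching once and reading the groups avoids the problem entirely.</p> <pre class="lang-py prettyprint-override"><code>m = re.match(r'^(.+?)(\d+-\d+)?$', aL)
comp = m.group(1)
cmpCountR = m.group(2) or ''   # group 2 is None when the range part is absent
</code></pre>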
<python><regex><python-2.7>
2022-12-22 20:08:07
3
1,228
Vijesh
74,893,354
14,401,160
Is literal ellipsis really valid as ParamSpec last argument?
<p>Quote from <a href="https://docs.python.org/3/library/typing.html#typing.Concatenate" rel="nofollow noreferrer">Python docs for <code>Concatenate</code></a>:</p> <blockquote> <p>The last parameter to Concatenate must be a ParamSpec or ellipsis (...).</p> </blockquote> <p>I know what <code>ParamSpec</code> is, but the ellipsis here drives me mad. It is <a href="https://mypy-play.net/?mypy=latest&amp;python=3.10&amp;gist=65a3d076ae29e2cbe24a500165d76126" rel="nofollow noreferrer">not accepted</a> by <code>mypy</code>:</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable, ParamSpec, Concatenate, TypeVar, Generic _P = ParamSpec('_P') _T = TypeVar('_T') class Test(Generic[_P, _T]): fn: Callable[Concatenate[_P, ...], _T] </code></pre> <pre class="lang-none prettyprint-override"><code>E: Unexpected &quot;...&quot; [misc] E: The last parameter to Concatenate needs to be a ParamSpec [valid-type] </code></pre> <p>and is not explained anywhere in docs. <a href="https://peps.python.org/pep-0612/#the-components-of-a-paramspec" rel="nofollow noreferrer">PEP612</a> doesn't mention it. Is it just a mistake, appeared as a result of mixing <code>Callable</code> and <code>Concatenate</code> together?</p> <p><a href="https://github.com/python/cpython/issues/88954" rel="nofollow noreferrer">This issue</a> is somewhat related and shows syntax with ellipsis literal in <code>Concatenate</code>:</p> <blockquote> <p>The specification should be extended to allow either <code>Concatenate[int, str, ...]</code>, or <code>[int, str, ...]</code>, or some other syntax.</p> </blockquote> <p>But this clearly targets &quot;future syntax&quot;.</p> <p>Note: I'm aware of meaning of ellipsis as <code>Callable</code> argument, this question is specifically about <code>Concatenate</code>.</p>
<python><python-typing><type-variables>
2022-12-22 20:04:24
1
8,871
STerliakov
74,893,346
17,696,880
Set search pattern by setting a constraint on how a substring should not start and another on how a substring should not end
<pre class="lang-py prettyprint-override"><code>import re, datetime input_text = &quot;Por las maΓ±anas de verano voy a la playa, y en la manana del 22-12-22 16:22 pm o quizas maΓ±ana en la maΓ±ana hay que estar alli y no 2022-12-22 a la manana&quot; today = datetime.date.today() tomorrow = str(today + datetime.timedelta(days = 1)) input_text = re.sub(r&quot;\b(?:las|la)\b[\s|]*(?:maΓ±ana|manana)\bs\s*\b&quot;, tomorrow, input_text) print(repr(input_text)) # --&gt; output </code></pre> <p>Why does the restriction that I place fail?</p> <p>The objective is that there cannot be any of these options <code>(?:las|la)</code> , the objective is that there cannot be any of these options in front of the pattern <code>(?:maΓ±ana|manana)</code> , and that there cannot be behind it either a letter <code>'s'</code> followed by one or more spaces <code>s\s*</code></p> <p>This is the <strong>correct output</strong> that you should get after making the replacements in the cases where it is appropriate</p> <pre><code>&quot;Por las maΓ±anas de verano voy a la playa, y en la manana del 22-12-22 16:22 pm o quizas 22-12-23 en la maΓ±ana hay que estar alli y no 2022-12-22 a la manana&quot; </code></pre>
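<p>A possible sketch (assuming words are separated by single spaces): express &quot;not preceded by la/las&quot; with negative lookbehinds and &quot;not followed by s&quot; with a word boundary plus a negative lookahead, instead of matching the unwanted words themselves.</p> <pre class="lang-py prettyprint-override"><code>pattern = r'(?&lt;!\blas )(?&lt;!\bla )\b(?:maΓ±ana|manana)\b(?!s)'
input_text = re.sub(pattern, tomorrow, input_text)
</code></pre> <p>With the sample sentence this replaces only the standalone &quot;maΓ±ana&quot; after &quot;quizas&quot;, which matches the expected output shown above.</p>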
<python><python-3.x><regex><replace><regex-group>
2022-12-22 20:03:21
1
875
Matt095
74,893,304
14,945,939
How do I speed up my GitHub PyTest executions?
<p>I have a repository of several hundred tests that have been fast enough up to now, but as we continue to grow the codebase I worry that it will get so slow that my team will be caught up in waiting for CI runs to complete.</p> <p>What can I do to speed this up and make my tests faster both in the short run and in the long run?</p> <p>I need to consider:</p> <ol> <li>Scalability</li> <li>Cost</li> <li>Rollout</li> </ol>
<python><performance><continuous-integration><pytest><github-actions>
2022-12-22 19:58:45
1
1,817
vanhooser
74,893,278
13,618,407
Tensorflow error Invalid argument Compilation failure: Detected unsupported operations when trying to compile graph cluster_tpu_function
<p>I am receiving an error when I train my Deep Learning network using TPU</p> <p>The error is related to the below part of my code but I don't know how to solve it.</p> <pre><code>preprocessing.RandomCrop(28, 28) </code></pre> <p><strong>NB: the code is running fine if I use no accelerator or GPU but not on TPU</strong></p> <p>I am using tensorflow 2.6.4 and kaggle TPU V3-8 for the training</p> <p>You can find the notebook link that reproduce the error <a href="https://www.kaggle.com/code/sticlaboratory/test-tf-distribute" rel="nofollow noreferrer"> here</a> on kaggle</p> <p>Here is the code that reproduce the error:</p> <pre class="lang-py prettyprint-override"><code>import math, re, os import tensorflow as tf print(&quot;Tensorflow version &quot; + tf.__version__) # from tensorflow.keras.utils import img_to_array, load_img from tensorflow.keras.preprocessing.image import img_to_array, load_img from tensorflow.image import extract_patches from tensorflow.keras.layers import Input, Dense, Embedding, LayerNormalization, Conv2D, concatenate, LeakyReLU, GlobalMaxPooling2D from tensorflow.keras.layers.experimental import preprocessing#, RandomFlip, RandomZoom, RandomRotation, Rescaling from tensorflow.keras.models import Model,Sequential#, Input import numpy as np AUTO = tf.data.experimental.AUTOTUNE try: # detect TPUs tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() # TPU detection strategy = tf.distribute.TPUStrategy(tpu) except ValueError: # detect GPUs strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines print(&quot;Number of accelerators: &quot;, strategy.num_replicas_in_sync) image_shape =(224, 224,3) #128, 128(286, 286) (224, 224,3) # batch_size = 5 batch_size = 32 * strategy.num_replicas_in_sync save_dir = &quot;./&quot; path = &quot;../input/urban-and-rural-photos/train/&quot; N_sample = 32 # number of samples per batch trainAug = Sequential([ # preprocessing.Rescaling(scale=1.0 / 255), preprocessing.RandomFlip(&quot;horizontal_and_vertical&quot;), preprocessing.RandomZoom( height_factor=(-0.05, -0.15), width_factor=(-0.05, -0.15)), preprocessing.RandomRotation(0.3) ]) def load_images(path, size=image_shape): data_list = list() # enumerate filenames in directory, assume all are images for filename in os.listdir(path): # load and resize the image pixels = load_img(path + filename, target_size=size) # convert to numpy array pixels = img_to_array(pixels) # store data_list.append(pixels) datas = np.asarray(data_list) datas = datas.astype('float32') / 255.0 dataset = tf.data.Dataset.from_tensor_slices(datas) # dataset = dataset.batch(128) # dataset = dataset.shuffle(buffer_size=1024) return dataset with strategy.scope(): # load dataset (low light image) dataL_all = (load_images(path + '/rural/') # .shuffle(batch_size * 100) .batch(batch_size) .cache() .map(lambda x: (trainAug(x)), num_parallel_calls=tf.data.AUTOTUNE) .prefetch(tf.data.AUTOTUNE) ) # load dataset (normal light image) dataH_all =( load_images(path + '/urban/') # .shuffle(batch_size * 100) .batch(batch_size) .cache() .map(lambda x: (trainAug(x)), num_parallel_calls=tf.data.AUTOTUNE) .prefetch(tf.data.AUTOTUNE) ) dataL_allD = strategy.experimental_distribute_dataset(dataL_all) dataH_allD = strategy.experimental_distribute_dataset(dataH_all) with strategy.scope(): discriminator_local = Sequential( [ Input(shape=image_shape), preprocessing.RandomCrop(28, 28), Conv2D(8, (3, 3), strides=(2, 2), padding=&quot;same&quot;), LeakyReLU(alpha=0.2), Conv2D(16, (3, 3), strides=(2, 2), 
padding=&quot;same&quot;), LeakyReLU(alpha=0.2), Conv2D(32, (3, 3), strides=(2, 2), padding=&quot;same&quot;), LeakyReLU(alpha=0.2), Conv2D(64, (3, 3), strides=(2, 2), padding=&quot;same&quot;), LeakyReLU(alpha=0.2), Conv2D(128, (3, 3), strides=(2, 2), padding=&quot;same&quot;), LeakyReLU(alpha=0.2), Conv2D(256, (3, 3), strides=(2, 2), padding=&quot;same&quot;), LeakyReLU(alpha=0.2), GlobalMaxPooling2D(), Dense(1), ], name=&quot;local_discriminator&quot;, ) discriminator_local.summary() @tf.function def train_steps(dataset): return discriminator_local(dataset[0]) EPOCHS = 2 STEPS = 3 for epoch in range(EPOCHS): total_loss = 0.0 num_batches = 0 for step, datasets in enumerate(zip(dataL_allD, dataH_allD)): loss_per_replica= strategy.run(train_steps, args=(datasets,)) #train_step(datasets) total_loss += strategy.reduce(&quot;SUM&quot;, loss_per_replica,axis=0) num_batches += 1 average_train_loss = total_loss / num_batches print(f'Epoch :{epoch} loss : {average_train_loss}') </code></pre> <p>And here is the stack trace of the error:</p> <pre><code>2022-12-25 12:21:26.814769: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:76] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: 9 root error(s) found. (0) Invalid argument: {{function_node __inference_tpu_function_6371}} Compilation failure: Detected unsupported operations when trying to compile graph cluster_tpu_function_6096887307567101759[] on XLA_TPU_JIT: StatelessRandomUniformInt (No registered 'StatelessRandomUniformInt' OpKernel for XLA_TPU_JIT devices compatible with node {{node local_discriminator/random_crop/stateless_random_uniform}} (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT32, Tseed=DT_INT64, dtype=DT_INT32, _device=&quot;/device:TPU_REPLICATED_CORE&quot;){{node local_discriminator/random_crop/stateless_random_uniform}}One approach is to outside compile the unsupported ops to run on CPUs by enabling soft placement `tf.config.set_soft_device_placement(True)`. This has a potential performance penalty. TPU compilation failed [[tpu_compile_succeeded_assert/_9679846132704141549/_5]] [[cluster_tpu_function/control_after/_1/_279]] (1) Invalid argument: {{function_node __inference_tpu_function_6371}} Compilation failure: Detected unsupported operations when trying to compile graph cluster_tpu_function_6096887307567101759[] on XLA_TPU_JIT: StatelessRandomUniformInt (No registered 'StatelessRandomUniformInt' OpKernel for XLA_TPU_JIT devices compatible with node {{node local_discriminator/random_crop/stateless_random_uniform}} (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT32, Tseed=DT_INT64, dtype=DT_INT32, _device=&quot;/device:TPU_REPLICATED_CORE&quot;){{node local_discriminator/random_crop/stateless_random_uniform}}One approach is to outside compile the unsupported ops to run on CPUs by enabling soft placement `tf.config.set_soft_device_placement(True)`. This has a potential performance penalty. 
TPU compilation failed [[tpu_compile_succeeded_assert/_9679846132704141549/_5]] [[tpu_compile_succeeded_assert/_9679846132704141549/_5/_153]] (2) Invalid argument: {{function_node __inference_tpu_function_6371}} Compilation failure: Detected unsupported operations when trying to compile graph cluster_tpu_function_6096887307567101759[] on XLA_TPU_JIT: StatelessRandomUniformInt (No registered 'StatelessRandomUniformInt' OpKernel for XLA_TPU_JIT devices compatible with node {{node local_discriminator/random_crop/stateless_random_uniform}} (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT32, Tseed=DT_INT64, dtype=DT_INT32, _device=&quot;/device:TPU_REPLICATED_CORE&quot;){{node local_discriminator/random_crop/stateless_random_uniform}}One approach is to outside compile the unsupported ops to run on CPUs by enabling soft placement `tf.config.set_soft_device_placement(True)`. This has a potential performance penalty. TPU compilation failed [[tpu_compile_succeeded_assert/_9679846132704141549/_5]] [[tpu_compile_succeeded_assert/_9679846132704141549/_5/_167]] (3) Invalid argument: {{function_node __inference_tpu_function_6371}} Compilation failure: Detected unsupported operations when trying to compile graph cluster_tpu_function_6096887307567101759[] on XL ... [truncated] --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) /tmp/ipykernel_20/3910360392.py in &lt;module&gt; 6 for step, datasets in enumerate(zip(dataL_allD, dataH_allD)): 7 loss_per_replica= strategy.run(train_steps, args=(datasets,)) #train_step(datasets) ----&gt; 8 total_loss += strategy.reduce(&quot;SUM&quot;, loss_per_replica,axis=0) 9 num_batches += 1 10 average_train_loss = total_loss / num_batches /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py in reduce(self, reduce_op, value, axis) 1386 1387 self._reduce_sum_fns[axis] = def_function.function(reduce_sum_fn) -&gt; 1388 value = self._reduce_sum_fns[axis](value) 1389 else: 1390 value = self.run(reduce_sum, args=(value,)) /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 826 tracing_count = self.experimental_get_tracing_count() 827 with trace.Trace(self._name) as tm: --&gt; 828 result = self._call(*args, **kwds) 829 compiler = &quot;xla&quot; if self._experimental_compile else &quot;nonXla&quot; 830 new_tracing_count = self.experimental_get_tracing_count() /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 860 # In this case we have not created variables on the first call. So we can 861 # run the first trace but we should fail if variables are created. 
--&gt; 862 results = self._stateful_fn(*args, **kwds) 863 if self._created_variables: 864 raise ValueError(&quot;Creating variables on a non-first call to a function&quot; /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs) 2939 with self._lock: 2940 (graph_function, -&gt; 2941 filtered_flat_args) = self._maybe_define_function(args, kwargs) 2942 return graph_function._call_flat( 2943 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 3312 if self.input_signature is None or args is not None or kwargs is not None: 3313 args, kwargs, flat_args, filtered_flat_args = \ -&gt; 3314 self._function_spec.canonicalize_function_inputs(*args, **kwargs) 3315 else: 3316 flat_args, filtered_flat_args = [None], [] /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in canonicalize_function_inputs(self, *args, **kwargs) 2695 2696 if self._input_signature is None: -&gt; 2697 inputs, flat_inputs, filtered_flat_inputs = _convert_numpy_inputs(inputs) 2698 kwargs, flat_kwargs, filtered_flat_kwargs = _convert_numpy_inputs(kwargs) 2699 return (inputs, kwargs, flat_inputs + flat_kwargs, /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _convert_numpy_inputs(inputs) 2735 # are eventually passed to ConcreteFunction()._call_flat, which requires 2736 # expanded composites. -&gt; 2737 flat_inputs = nest.flatten(inputs, expand_composites=True) 2738 2739 # Check for NumPy arrays in arguments and convert them to Tensors. /opt/conda/lib/python3.7/site-packages/tensorflow/python/util/nest.py in flatten(structure, expand_composites) 339 return [None] 340 expand_composites = bool(expand_composites) --&gt; 341 return _pywrap_utils.Flatten(structure, expand_composites) 342 343 /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/values.py in _type_spec(self) 381 return copy.deepcopy(self._type_spec_override) 382 return PerReplicaSpec( --&gt; 383 *(type_spec.type_spec_from_value(v) for v in self._values)) 384 385 @property /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/values.py in &lt;genexpr&gt;(.0) 381 return copy.deepcopy(self._type_spec_override) 382 return PerReplicaSpec( --&gt; 383 *(type_spec.type_spec_from_value(v) for v in self._values)) 384 385 @property /opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/type_spec.py in type_spec_from_value(value) 537 is not supported. 538 &quot;&quot;&quot; --&gt; 539 spec = _type_spec_from_value(value) 540 if spec is not None: 541 return spec /opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/type_spec.py in _type_spec_from_value(value) 559 if isinstance(value, ops.Tensor): 560 # Note: we do not include Tensor names when constructing TypeSpecs. --&gt; 561 return tensor_spec.TensorSpec(value.shape, value.dtype) 562 563 if isinstance(value, composite_tensor.CompositeTensor): /opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in shape(self) 1173 # `_tensor_shape` is declared and defined in the definition of 1174 # `EagerTensor`, in C. -&gt; 1175 self._tensor_shape = tensor_shape.TensorShape(self._shape_tuple()) 1176 except core._NotOkStatusException as e: 1177 six.raise_from(core._status_to_exception(e.code, e.message), None) InvalidArgumentError: 9 root error(s) found. 
(0) Invalid argument: {{function_node __inference_tpu_function_6371}} Compilation failure: Detected unsupported operations when trying to compile graph cluster_tpu_function_6096887307567101759[] on XLA_TPU_JIT: StatelessRandomUniformInt (No registered 'StatelessRandomUniformInt' OpKernel for XLA_TPU_JIT devices compatible with node {{node local_discriminator/random_crop/stateless_random_uniform}} (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT32, Tseed=DT_INT64, dtype=DT_INT32, _device=&quot;/device:TPU_REPLICATED_CORE&quot;){{node local_discriminator/random_crop/stateless_random_uniform}}One approach is to outside compile the unsupported ops to run on CPUs by enabling soft placement `tf.config.set_soft_device_placement(True)`. This has a potential performance penalty. TPU compilation failed [[tpu_compile_succeeded_assert/_9679846132704141549/_5]] [[cluster_tpu_function/control_after/_1/_279]] (1) Invalid argument: {{function_node __inference_tpu_function_6371}} Compilation failure: Detected unsupported operations when trying to compile graph cluster_tpu_function_6096887307567101759[] on XLA_TPU_JIT: StatelessRandomUniformInt (No registered 'StatelessRandomUniformInt' OpKernel for XLA_TPU_JIT devices compatible with node {{node local_discriminator/random_crop/stateless_random_uniform}} (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT32, Tseed=DT_INT64, dtype=DT_INT32, _device=&quot;/device:TPU_REPLICATED_CORE&quot;){{node local_discriminator/random_crop/stateless_random_uniform}}One approach is to outside compile the unsupported ops to run on CPUs by enabling soft placement `tf.config.set_soft_device_placement(True)`. This has a potential performance penalty. TPU compilation failed [[tpu_compile_succeeded_assert/_9679846132704141549/_5]] [[tpu_compile_succeeded_assert/_9679846132704141549/_5/_153]] (2) Invalid argument: {{function_node __inference_tpu_function_6371}} Compilation failure: Detected unsupported operations when trying to compile graph cluster_tpu_function_6096887307567101759[] on XLA_TPU_JIT: StatelessRandomUniformInt (No registered 'StatelessRandomUniformInt' OpKernel for XLA_TPU_JIT devices compatible with node {{node local_discriminator/random_crop/stateless_random_uniform}} (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT32, Tseed=DT_INT64, dtype=DT_INT32, _device=&quot;/device:TPU_REPLICATED_CORE&quot;){{node local_discriminator/random_crop/stateless_random_uniform}}One approach is to outside compile the unsupported ops to run on CPUs by enabling soft placement `tf.config.set_soft_device_placement(True)`. This has a potential performance penalty. TPU compilation failed [[tpu_compile_succeeded_assert/_9679846132704141549/_5]] [[tpu_compile_succeeded_assert/_9679846132704141549/_5/_167]] (3) Invalid argument: {{function_node __inference_tpu_function_6371}} Compilation failure: Detected unsupported operations when trying to compile graph cluster_tpu_function_6096887307567101759[] on XL ... [truncated] </code></pre> <p>Please, I need your help to advance on my project</p>
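<p>A minimal sketch of the workaround the error message itself suggests: enable soft device placement right after creating the TPU strategy so the unsupported <code>StatelessRandomUniformInt</code> op behind <code>RandomCrop</code> can fall back to the CPU (alternatively, the random crop could be moved into the <code>tf.data</code> pipeline instead of the model).</p> <pre class="lang-py prettyprint-override"><code>tf.config.set_soft_device_placement(True)   # let ops without a TPU kernel run on CPU
</code></pre>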
<python><tensorflow><kaggle><tpu>
2022-12-22 19:54:59
0
561
stic-lab
74,893,239
8,260,088
Keep the date and not the time in to_datetime pandas (while importing data from csv)
<p>Can you please help me with the following issue? When I import a csv file I have a dataframe something like this:</p> <pre><code>df = pd.DataFrame(['29/12/17', '30/12/17', '31/12/17', '01/01/18', '02/01/18'], columns=['Date']) </code></pre> <p>What I want is to convert the <code>Date</code> column of df into a datetime object. So I use the code below:</p> <pre><code> df['date_f'] = pd.to_datetime(df['Date']) </code></pre> <p>What I get is something like this:</p> <pre><code>df1 = pd.DataFrame({'Date': ['29/12/17', '30/12/17', '31/12/17', '01/01/18', '02/01/18'], 'date_f':['2017-12-29T00:00:00.000Z', '2017-12-30T00:00:00.000Z', '2017-12-31T00:00:00.000Z', '2018-01-01T00:00:00.000Z', '2018-02-01T00:00:00.000Z']}) </code></pre> <p>The question is, why am I getting date_f in the following format ('2017-12-29T00:00:00.000Z') and not just ('2017-12-29') and how can I get the latter format ('2017-12-29')?</p> <p>P.S. If you use the code above, it will show date_f in the format that I need. However, if the data is imported from csv, it produces the date_f format as specified above.</p>
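<p>A minimal sketch of one way to handle this: parse with an explicit day-first format (otherwise '02/01/18' is read month-first, which is why it came back as 2018-02-01), then keep only the date part.</p> <pre class="lang-py prettyprint-override"><code>parsed = pd.to_datetime(df['Date'], format='%d/%m/%y')

df['date_f'] = parsed.dt.date                    # plain datetime.date objects
df['date_str'] = parsed.dt.strftime('%Y-%m-%d')  # or a display string like '2017-12-29'
</code></pre>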
<python><pandas>
2022-12-22 19:50:49
1
875
Alberto Alvarez
74,893,227
7,717,176
How to merge two pandas dataframes based on a dictionary?
<p>There are two pandas dataframes as follows:</p> <pre><code>df1: col11 col22 col33 abc 25 36 bcd 55 96 cdf 15 19 abc 74 26 </code></pre> <p>and</p> <pre><code>df2: col01 col02 col03 name1 x 346 name2 g 926 name3 t 179 name1 k 286 </code></pre> <p>I want to merge <code>df1</code> and <code>df2</code> based on a dictionary whose keys are <code>col11</code> values and whose values are <code>col01</code> values, as follows:</p> <pre><code>mydict = {'abc': 'name1', 'bcd': 'name2', 'cdf': 'name3'} </code></pre> <p>My expected output is:</p> <pre><code>df1: col11 col22 col33 col01 col02 col03 abc 25 36 name1 x 346 bcd 55 96 name2 g 926 cdf 15 19 name3 t 179 abc 74 26 name1 x 346 </code></pre> <p>How can I merge these two dataframes?</p>
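<p>A minimal sketch of one way to do it: map the dictionary onto <code>col11</code> to create the join key, then merge (dropping the duplicate <code>name1</code> row of <code>df2</code>, since the expected output keeps only its first occurrence).</p> <pre class="lang-py prettyprint-override"><code>mapped = df1.assign(col01=df1['col11'].map(mydict))
result = mapped.merge(df2.drop_duplicates('col01'), on='col01', how='left')
</code></pre>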
<python><pandas><dataframe><dictionary><merge>
2022-12-22 19:49:31
1
391
HMadadi
74,893,199
5,652,080
Apply str.replace() to string but error says it's a float
<p>I'm using Pandas to strip a '$' out of some strings in a column.</p> <pre><code>df['Amount'].dtypes --&gt; dtype('O') </code></pre> <p>So, I figure I can do string modification now:</p> <pre><code>df['Amount'].apply(lambda x: x.replace('$', '')) </code></pre> <p>I get the error: <code>'float' object has no attribute 'replace'</code>.</p> <p>Why would the dtype be &quot;object&quot; (string) but the value be a &quot;float&quot; when I'm operating on it?</p>
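<p>A minimal sketch of a workaround: an object column can still hold real floats (for example NaN or already-numeric rows), and <code>apply</code> passes those to the lambda unchanged. Casting to string first, or using the vectorised <code>.str</code> accessor, avoids the error.</p> <pre class="lang-py prettyprint-override"><code>df['Amount'] = (
    df['Amount']
    .astype(str)                            # floats/NaN become strings too
    .str.replace('$', '', regex=False)
    .astype(float)
)
</code></pre>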
<python><pandas>
2022-12-22 19:45:47
1
690
Sam Dillard
74,893,175
4,821,779
Trying to concat a list of 12 AnnData objects but am getting duplicates
<p>I'm prepping our experimental data for trajectory analysis by partially following this guide <a href="https://smorabit.github.io/tutorials/8_velocyto/" rel="nofollow noreferrer">here</a> and I have a list of 12 AnnData objects that I read in as loom files. 6 of them come from one sequencing run whereas the other 6 come from another. I followed the aforementioned link's recommendation to generated spliced/unspliced count matrices using velocyto, which is how I got the loom files.</p> <p>Anyway, that's all background information. I'm trying to merge all of these AnnData objects into one.</p> <pre><code>&gt;&gt;&gt; loom_data [AnnData object with n_obs Γ— n_vars = 5000 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 5773 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 6807 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 5613 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 6052 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 3500 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 10510 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 9356 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 3246 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 1132 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 13595 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced', AnnData object with n_obs Γ— n_vars = 9541 Γ— 36601 obs: 'comp.ident' var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced'] </code></pre> <p>I'm having trouble understanding how to do this. All of the barcodes, which are <code>loom_data[i].obs.index</code>, are unique and contain a suffix for the sample they correspond to. Ultimately, I want to bring in these layers into another AnnData object using scVelo.</p> <p>The issue is calling <code>sc.concat</code>. It's the genes that have overlap; none of the barcodes are supposed to match across the 12 list elements. 
So I want to select the <code>vars</code> axis, which I think is <code>1</code>, and I want the union of all of the elements in the other axis:</p> <pre><code>test = sc.concat(loom_data, axis = 1, join = 'outer') </code></pre> <p>But when I call the line above, I get a concatenation of all of the genes with the names made unique, even though I want to consolidate their counts:</p> <pre><code>&gt;&gt;&gt; test AnnData object with n_obs Γ— n_vars = 80125 Γ— 439212 var: 'Accession', 'Chromosome', 'End', 'Start', 'Strand' layers: 'ambiguous', 'matrix', 'spliced', 'unspliced' </code></pre> <p>I want the genes that aren't present in all samples to just have 0 counts. How would this be possible?</p>
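<p>A possible sketch, based on <code>anndata.concat</code>'s documented arguments: concatenating along the default observation axis with an outer join takes the union of genes across samples, and <code>fill_value=0</code> gives zero counts where a gene is missing from a sample, instead of uniquifying gene names.</p> <pre class="lang-py prettyprint-override"><code>test = sc.concat(loom_data, axis=0, join='outer', fill_value=0)
</code></pre>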
<python><scanpy>
2022-12-22 19:42:52
1
1,108
CelineDion
74,893,089
14,397,434
Printing -1 to -10 in python exercise
<pre><code># the answer key for num in range(-10, 0, 1): print(num) </code></pre> <p>I wanted to see if I could find a way to do it without using range():</p> <pre><code>i = -1 while abs(i) &lt;= 10: print(i) </code></pre> <p>I'm new to python.</p> <p>Thank you.</p>
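<p>For reference, a fixed version of the <code>while</code> attempt (the original never changes <code>i</code>, so <code>abs(i)</code> stays 1 and the loop runs forever):</p> <pre class="lang-py prettyprint-override"><code>i = -1
while abs(i) &lt;= 10:
    print(i)
    i -= 1   # step toward -10 so the condition can eventually fail
</code></pre>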
<python><for-loop><while-loop><range><sequence>
2022-12-22 19:31:48
1
407
Antonio
74,892,847
11,226,214
Fields from Django Abstract model are returning None
<p>I have an abstract django model and a basic model. I am expecting any instance of the basic model to have a value for the field created_at or updated_at. However, as of now, all my instances have None for both these fields. What am I doing wrong?</p> <pre><code>from django.db import models class Trackable(models.Model): created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) class Meta: abstract = True class Quotation(Trackable): reference = models.CharField(null=True, max_length=80) quotation = Quotation(id=1) print(quotation.created_at) &gt;&gt;&gt; None </code></pre> <p><strong>Edit:</strong> It works with below code (but that does not explain why above code does not work):</p> <pre><code>from django.db import models from django.utils import timezone class AutoDateTimeField(models.DateTimeField): def pre_save(self, model_instance, add): return timezone.now() class Trackable(models.Model): created_at = models.DateField(default=timezone.now) updated_at = AutoDateTimeField(default=timezone.now) class Meta: abstract = True class Quotation(Trackable): reference = models.CharField(null=True, max_length=80) </code></pre>
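<p>A likely explanation with a minimal sketch: <code>Quotation(id=1)</code> only builds an unsaved in-memory instance, and <code>auto_now_add</code>/<code>auto_now</code> are filled in when the row is actually written, so the fields stay <code>None</code> until the object is saved or loaded from the database.</p> <pre class="lang-py prettyprint-override"><code>quotation = Quotation(reference='Q-1')
quotation.save()                          # created_at/updated_at are set here
print(quotation.created_at)

existing = Quotation.objects.get(id=1)    # or read back a row that already exists
print(existing.created_at)
</code></pre>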
<python><django><django-models>
2022-12-22 19:01:31
1
1,797
nolwww
74,892,827
12,242,085
How to convert JSON column in DataFrame to "normal" columns in DataFrame in Python Pandas?
<p>I have DataFrame like below:</p> <pre><code>ABT = pd.read_excel(&quot;ABT.xlsx&quot;) </code></pre> <p>DATA TYPES:</p> <ul> <li><p>COL1 - float</p> </li> <li><p>COL2 - int</p> </li> <li><p>COL3 - object</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>COL1</th> <th>COL2</th> <th>COL3</th> <th>COL4</th> </tr> </thead> <tbody> <tr> <td>1.2</td> <td>5</td> <td>{&quot;X&quot;:&quot;cc&quot;, &quot;y&quot;:12}</td> <td>{&quot;A&quot;:{1,2}, &quot;B&quot;:{3,3}&quot;}</td> </tr> <tr> <td>0.0</td> <td>2</td> <td>{&quot;X&quot;:&quot;dd&quot;, &quot;y&quot;:13}</td> <td>{&quot;A&quot;:{0,1}, &quot;B&quot;:{2,2}&quot;}</td> </tr> <tr> <td>2.22</td> <td>0</td> <td>{&quot;X&quot;:&quot;ee&quot;, &quot;y&quot;:45}</td> <td>{&quot;A&quot;:{5,5}, &quot;B&quot;:{1,1}&quot;}</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table> </div></li> </ul> <p>And I need to have something like below:</p> <pre><code>| COL1 | COL2| COL3 | X | y | A | B |------|-----|--------------------|-----|-----|---------- | 1.2 | 5 | {&quot;X&quot;:&quot;cc&quot;, &quot;y&quot;:12} | cc | 12 | | 0.0 | 2 | {&quot;X&quot;:&quot;dd&quot;, &quot;y&quot;:13} | dd | 13 | | 2.22 | 0 | {&quot;X&quot;:&quot;ee&quot;, &quot;y&quot;:45} | ee | 45 | | ... | ... | ... | ... | ... | </code></pre> <p>I tried to use code like below, but it does not work: <code>pd.json_normalize(ABT)</code> because of error:<code> AttributeError: 'str' object has no attribute 'values'</code></p> <p>I also tried this one: <code>pd.io.json.json_normalize(ABT.COL3[0]) </code>but I have error:<code> AttributeError: 'str' object has no attribute 'values'</code></p> <p>How can I do that in Python Pandas ? I have a problem to image how should look output for values in COL4 ?</p> <p>In my real DF: When I use <code>ABT.head().to_dict('list')</code> I have liek below:</p> <pre><code>{'COL1': [0.0], 'COL2': [2], 'COL3': [2162561990], 'COL4': [1500.0], 'COL5': [750.0], 'COL6': ['{&quot;paAccounts&quot;: {&quot;mySector&quot;: 4, &quot;otherSectors&quot;: 10}}'], 'COL7': ['{&quot;grade&quot;: &quot;CC&quot;}']} </code></pre>
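<p>A minimal sketch for a column of JSON strings (assuming the values are valid JSON, like the real <code>COL6</code> above): parse each string with <code>json.loads</code>, normalize, and join the result back. Nested objects come out as dotted columns, e.g. <code>paAccounts.mySector</code> and <code>paAccounts.otherSectors</code>, which is one reasonable shape for the <code>COL4</code>-style values.</p> <pre class="lang-py prettyprint-override"><code>import json

expanded = pd.json_normalize(ABT['COL3'].apply(json.loads))
expanded.index = ABT.index                 # keep row alignment
ABT = ABT.drop(columns='COL3').join(expanded)
</code></pre>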
<python><json><pandas><dataframe>
2022-12-22 18:59:15
0
2,350
dingaro
74,892,805
14,909,857
I'm unable to connect with DNS seed list format to Atlas but works with old string format
<p>I recently moved to a new town and I have a different ISP. In my old house I used to connect to Atlas with</p> <pre><code>mongodb+srv://cluster0.wiiy6cz.mongodb.net/myFirstDatabase&quot; --username &lt;username&gt; </code></pre> <p>and now it's not working. I have my new public IP whitelisted on Atlas and the error message is <strong>pymongo.errors.ConfigurationError: The resolution lifetime expired after 21.123 seconds: Server 127.0.0.53 UDP port 53 answered The DNS operation timed out.</strong></p> <p>However, a connection can be established with the old format string</p> <pre><code>mongodb://ac-tm89nv9-shard-00-00.wiiy6cz.mongodb.net:27017,ac-tm89nv9-shard-00-01.wiiy6cz.mongodb.net:27017,ac-tm89nv9-shard-00-02.wiiy6cz.mongodb.net:27017/myFirstDatabase?replicaSet=atlas-mqfpjp-shard-0&quot; --ssl --authenticationDatabase admin --username &lt;username&gt; --password &lt;password&gt; </code></pre> <p>The only thing different now is my Internet provider, so I think it has to be something to do with the DNS configuration.</p> <p>I have already installed pymongo[srv], and I checked the router's DNS configuration, which is DNS1: 45.71.46.194 DNS2: 8.8.8.8 DNS3: 0.0.0.0</p> <p>So I'm curious about what the problem is here; I have already found a way around it, but I really wish to know what is happening.</p>
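<p>One way to test whether this is purely a DNS issue (a diagnostic sketch, not a permanent fix): the <code>+srv</code> form needs SRV/TXT lookups, and the error points at the local stub resolver (127.0.0.53). Pointing dnspython, which PyMongo uses for SRV resolution, at a public resolver before creating the client should make the seed-list string work if the ISP/router DNS is the culprit.</p> <pre class="lang-py prettyprint-override"><code>import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ['8.8.8.8', '1.1.1.1']
dns.resolver.default_resolver = resolver      # used for the SRV/TXT lookups

from pymongo import MongoClient
client = MongoClient('mongodb+srv://cluster0.wiiy6cz.mongodb.net/myFirstDatabase')
</code></pre>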
<python><mongodb><pymongo><mongodb-atlas>
2022-12-22 18:57:52
0
336
eljapi
74,892,481
1,175,266
How to authenticate a GitHub Actions workflow as a GitHub App so it can trigger other workflows?
<p>By default (when using the default <code>secrets.GITHUB_TOKEN</code>) GitHub Actions <a href="https://docs.github.com/en/actions/using-workflows/triggering-a-workflow#triggering-a-workflow-from-a-workflow" rel="noreferrer">workflows can't trigger other workflows</a>. So for example if a workflow sends a pull request to a repo that has a CI workflow that normally runs the tests on pull requests, the CI workflow won't run for a pull request that was sent by another workflow.</p> <p>There are probably lots of other GitHub API actions that a workflow authenticating with the default <code>secrets.GITHUB_TOKEN</code> can't take either.</p> <p>How can I authenticate my workflow runs as a GitHub App, so that they can trigger other workfows and take any other actions that I grant the GitHub App permissions for?</p> <h2>Why not just use a personal access token?</h2> <p>The GitHub docs linked above recommend authenticating workflows using a <a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token" rel="noreferrer">personal access token</a> (PAT) to allow them to trigger other workflows, but PATs have some downsides:</p> <ul> <li>You probably don't want your workflow to authenticate as any human user's account because any pull requests, issues, etc created by the workflow will appear to have been created by that human rather than appearing to be automated. The PAT would also become a very sensitive secret because it would grant access to all repos that the human user's account has access to.</li> <li>You could create a <a href="https://docs.github.com/en/developers/overview/managing-deploy-keys#machine-users" rel="noreferrer">machine user</a> account to own the PAT. But if you grant the machine user access to all repos in your organization then the PAT again becomes a very sensitive secret. You can add the machine user as a collaborator on only the individual repos that you need, but this is inconvenient because you'll always need to add the user to each new repo that you want it to have access to.</li> <li>Classic PATs have only broad-grained permissions. The recently-introduced <a href="https://github.blog/2022-10-18-introducing-fine-grained-personal-access-tokens-for-github/" rel="noreferrer">fine-grained PATs</a> <a href="https://github.com/cli/cli/issues/6680" rel="noreferrer">don't work with GitHub CLI</a> (which is the easiest way to send PRs, open issues, etc from workflows) and there's no ETA for when support will be added.</li> </ul> <p>GitHub Apps offer the best balance of convenience and security for authenticating workflows: apps can have fine-grained permissions and they can be installed only in individual repos or in all of a user or organization's repos (including automatically installing the app in new repos when they're created). Apps also get a nice page where you can type in some docs (<a href="https://github.com/apps/workflow-authentication-demo-app" rel="noreferrer">example</a>), the app's avatar and username on PRs, issues, etc link to this page. 
Apps are also clearly labelled as &quot;bot&quot; on any PRs, issues, etc that they create.</p> <p>This <a href="https://github.com/peter-evans/create-pull-request/blob/main/docs/concepts-guidelines.md#triggering-further-workflow-runs" rel="noreferrer">third-party documentation</a> is a good summary of the different ways of authenticating workflows and their pros and cons.</p> <h2>I don't want to use a third-party GitHub Action</h2> <p>There are guides out there on the internet that will tell you how to authenticate a workflow as an app but they all tell you to use third-party actions (from the <a href="https://github.com/marketplace?type=actions" rel="noreferrer">marketplace</a>) to do the necessary token exchange with the GitHub API. I don't want to do this because it requires sending my app's private key to a third-party action. I'd rather write (or copy-paste) my own code to do the token exchange.</p>
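<p>For what it's worth, the token exchange itself is small enough to inline in a workflow step with a short Python sketch (the secret/variable names below are illustrative): sign a short-lived JWT with the app's private key using PyJWT, then call the installation access-token endpoint.</p> <pre class="lang-py prettyprint-override"><code>import os
import time

import jwt        # PyJWT
import requests

app_id = os.environ['APP_ID']
private_key = os.environ['APP_PRIVATE_KEY']
installation_id = os.environ['APP_INSTALLATION_ID']

now = int(time.time())
app_jwt = jwt.encode(
    {'iat': now - 60, 'exp': now + 9 * 60, 'iss': app_id},
    private_key,
    algorithm='RS256',
)

resp = requests.post(
    f'https://api.github.com/app/installations/{installation_id}/access_tokens',
    headers={
        'Authorization': f'Bearer {app_jwt}',
        'Accept': 'application/vnd.github+json',
    },
)
resp.raise_for_status()
token = resp.json()['token']   # short-lived installation token for gh/git/API calls
print(token)
</code></pre>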
<python><github><github-actions><github-api><github-cli>
2022-12-22 18:20:47
2
15,138
Sean Hammond
74,892,362
45,207
How do I fix this Pyspark which modifies a complex struct inside an array
<h3>Problem Setup</h3> <p>Given a data frame where each row is a struct with an <code>answers</code> field that holds an array of answer structs, each with multiple fields, the following code is supposed to process each answer in the array, by examining its <code>render</code> field and applying some process to it. (Note this code is for an AWS Glue notebook running Glue 3.0, but aside from the spark context creation it should work on any PySpark &gt;= 3.1):</p> <pre><code>%glue_version 3.0 import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext from awsglue.context import GlueContext from awsglue.job import Job import pyspark.sql.functions as F import pyspark.sql.types as T sc = SparkContext.getOrCreate() glueContext = GlueContext(sc) spark = glueContext.spark_session job = Job(glueContext) test_schema = T.StructType([ T.StructField('question_id', T.StringType(), True), T.StructField('subject', T.StringType(), True), T.StructField('answers', T.ArrayType( T.StructType([ T.StructField('render', T.StringType(), True), T.StructField('encoding', T.StringType(), True), T.StructField('misc_info', T.StructType([ T.StructField('test', T.StringType(), True) ]), True) ]), True), True) ]) json_df = spark.createDataFrame(data=[ [1, &quot;maths&quot;, [(&quot;[tex]a1[/tex]&quot;, &quot;text&quot;, (&quot;x&quot;,)),(&quot;a2&quot;, &quot;text&quot;, (&quot;y&quot;,))]], [2, &quot;bio&quot;, [(&quot;b1&quot;, &quot;text&quot;, (&quot;z&quot;,)),(&quot;&lt;p&gt;b2&lt;/p&gt;&quot;, &quot;text&quot;, (&quot;q&quot;,))]], [3, &quot;physics&quot;, None] ], schema=test_schema) json_df.show(truncate=False) json_df.printSchema() </code></pre> <p>resulting in:</p> <pre><code>+-----------+-------+---------------------------------------------+ |question_id|subject|answers | +-----------+-------+---------------------------------------------+ |1 |maths |[{[tex]a1[/tex], text, {x}}, {a2, text, {y}}]| |2 |bio |[{b1, text, {z}}, {&lt;p&gt;b2&lt;/p&gt;, text, {q}}] | |3 |physics|null | +-----------+-------+---------------------------------------------+ root |-- question_id: string (nullable = true) |-- subject: string (nullable = true) |-- answers: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- render: string (nullable = true) | | |-- encoding: string (nullable = true) | | |-- misc_info: struct (nullable = true) | | | |-- test: string (nullable = true) </code></pre> <p>Then taking the text processing method:</p> <pre><code>import re @F.udf(returnType=T.StringType()) def determine_encoding_udf(render): if render: if &quot;[tex]&quot; in render: return &quot;tex&quot; match = re.search(r&quot;&lt;[^&gt;]*&gt;&quot;, render) if match: return &quot;html&quot; else: return &quot;null render&quot; return &quot;text&quot; </code></pre> <p>Apply the transformation:</p> <pre><code>def normalize_answer(answer): return answer.withField( &quot;processed_render_input&quot;, answer.getField(&quot;render&quot;) ).withField( &quot;encoding&quot;, determine_encoding_udf(answer.getField(&quot;render&quot;)) ) json_mod_df = json_df.withColumn( &quot;answers&quot;, F.transform(&quot;answers&quot;, normalize_answer) ) json_mod_df.show(truncate=False) </code></pre> <p>Resulting in:</p> <pre><code>+-----------+-------+------------------------------------------------------------------------------+ |question_id|subject|answers | +-----------+-------+------------------------------------------------------------------------------+ |1 |maths 
|[{[tex]a1[/tex], null render, {x}, [tex]a1[/tex]}, {a2, null render, {y}, a2}]| |2 |bio |[{b1, null render, {z}, b1}, {&lt;p&gt;b2&lt;/p&gt;, null render, {q}, &lt;p&gt;b2&lt;/p&gt;}] | |3 |physics|null | +-----------+-------+------------------------------------------------------------------------------+ </code></pre> <p><code>process_text</code> is complex so this entire process can't be expressed in a lambda where the transform is defined.</p> <h3>The Problem</h3> <p>The problem is that when I run this on a much larger answers data set, <code>processed_text_input</code> is completely different text to the <code>encoding</code> field, i.e. what's in <code>encoding</code> seems to come from processing a completely different answer struct, from a different array from some other item in the input data frame. Sometimes it's also missing all together. In this toy example the text just seems to be missing, resulting in &quot;null render&quot; appearing in the encoding column for all examples. I tried adding 100 rows but in all cases the text was missing. I'm not sure why in my full version of this code with ~50k rows I get text from random rows. At any rate no text as shown in this toy problem isn't what I want either. The value in <code>processed_text_input</code> is the correct value from the corresponding answers text field.</p> <p>Is this a bug or should I be structuring this expression differently? I know that I could explode the answers array and process it more traditionally, but I'm trying to use the more recent <code>transform</code> and <code>withField</code> functions.</p> <h3>Makeshift Solution</h3> <p>This isn't an answer per se, but it gets a solution so I'm posting it for anyone having the same problem. For a post to be awarded as the solution it would need to show how to achieve the result I want using transformations, this uses the explode function but it covers a non-trivial data structure. I've added this solution as an answer below, for those who like me skip straight to the solutions.</p>
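<p>For reference, a minimal (untested) sketch of the explode-based fallback mentioned above, using <code>posexplode</code> so the original order can be restored. It ignores the null-answers row, and the intermediate column names are just illustrative:</p>
<pre><code>exploded = json_df.select(
    'question_id', 'subject', F.posexplode('answers').alias('pos', 'answer')
).withColumn(
    'answer',
    F.col('answer').withField('encoding', determine_encoding_udf(F.col('answer.render')))
)

rebuilt = exploded.groupBy('question_id', 'subject').agg(
    F.array_sort(F.collect_list(F.struct('pos', 'answer'))).alias('tmp')
).withColumn('answers', F.col('tmp.answer')).drop('tmp')

rebuilt.show(truncate=False)
</code></pre>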
<python><apache-spark><pyspark><aws-glue>
2022-12-22 18:10:14
2
8,570
LaserJesus
74,892,355
12,084,907
tkinter: How to have one labelframe under another expand as the page grows without leaving space between the two frames, using grid()
<p>I apologize in advance where I am new to tkinter. I am making a search tool to find a moniker(username) and am currently working on the front end. While working on this I have started using sticky to help the different parts stretch as the user adjusts the page size. My current problem is that as I stretch it downwards/south, a bunch of space is created between the first top and the bottom LabelFrame. What i want is for the bottom label frame to stretch out as the page is stretched by the user and not have it create that unwanted space.</p> <p>I have used sticky nsew for the second frame so that it grows with the page without pulling away from the top labelframe. I would have the top label frame sticky to south but that would make the label frame grow as the page is stretched downward. Below is my code.</p> <pre><code> def query_window(conn): window.destroy() q_window = Tk() q_window.title('Moniker Lookup') q_window.columnconfigure(0, weight=1) q_window.rowconfigure(0, weight=1) path = os.path.realpath( os.path.join(os.getcwd(), os.path.dirname(__file__))) icon = PhotoImage(file = path + &quot;..\\assets\main_icon.png&quot;) q_window.iconphoto(False, icon) #q_window.geometry(&quot;700x700&quot;) #Create a search frame for the moniker search to sit in search_frame = tk.LabelFrame(q_window, text='Moniker Search') search_frame.rowconfigure(0, weight=1) search_frame.columnconfigure(0, weight=1) search_frame.grid(row = 0, column=0, pady=GLOBAL_PAD_Y, padx=GLOBAL_PAD_X, sticky=&quot;nw&quot;) #Create a results frame for the treeview widget to sit in. output_frame = tk.LabelFrame(q_window, text='Results') output_frame.rowconfigure(0, weight=1) output_frame.columnconfigure(0, weight=1) output_frame.grid(row = 1, column=0, pady=GLOBAL_PAD_Y, padx=GLOBAL_PAD_X, sticky=&quot;nsew&quot;) #Create area for user to provide Moniker moniker_label = tk.Label(search_frame, text=&quot;Moniker:&quot;) moniker_label.grid(row=0, column = 2, pady = GLOBAL_PAD_Y, padx = GLOBAL_PAD_X) moniker_field = tk.Entry(search_frame) moniker_field.grid(row=0, column = 3, pady=GLOBAL_PAD_Y, padx=GLOBAL_PAD_X, columnspan=2) #Create a run button run_button = tk.Button(search_frame, text=&quot;Run&quot;, command=lambda: moniker_query(moniker_field, conn, q_window), width=10) run_button.grid(row=0, column = 5, pady = GLOBAL_PAD_Y, padx = GLOBAL_PAD_X) #Create a treeview to display output tv1 = ttk.Treeview(output_frame) tv1.columnconfigure(0, weight=1) tv1.rowconfigure(0, weight=1) #Create the tree scrolls tree_scroll_y = tk.Scrollbar(output_frame, orient = &quot;vertical&quot;, command=tv1.yview) tree_scroll_x = tk.Scrollbar(output_frame, orient=&quot;horizontal&quot;, command=tv1.xview) tv1.configure(xscrollcommand=tree_scroll_x.set, yscrollcommand=tree_scroll_y.set) tv1.grid(row=0, column=0, pady = GLOBAL_PAD_Y, padx = GLOBAL_PAD_X, sticky=&quot;nsew&quot;) tree_scroll_x.grid(row=1, column=0, sticky=tk.E+tk.W) tree_scroll_x.grid_rowconfigure(0, weight=1) tree_scroll_y.grid(row=0, column=1, sticky=tk.N+tk.S) tree_scroll_x.grid_columnconfigure(0, weight=1) </code></pre> <p>Additionally, here is what my default page looks like on open and then when I have stretched the page.</p> <p><a href="https://i.sstatic.net/Kv2Vx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kv2Vx.png" alt="Default, on start." /></a></p> <p><a href="https://i.sstatic.net/IltUI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IltUI.png" alt="When Stretched" /></a></p>
<python><tkinter>
2022-12-22 18:09:48
1
379
Buzzkillionair
74,892,266
10,976,654
Nested structured array to pandas dataframe with new column names
<p>How can I convert/explode a nested numpy structured array into a pandas dataframe, while keeping the headers from the nested arrays?</p> <p>Using Python 3.8.3, numpy 1.18.5, pandas 1.3.4.</p> <p><strong>Example structured array</strong>: I am given a nested numpy structured array that looks like this, and I am just rebuilding it here for an <a href="https://stackoverflow.com/help/minimal-reproducible-example">MRE</a>.</p> <pre><code>import numpy as np import numpy.lib.recfunctions as rfn arr1 = np.array([4, 5, 4, 5]) arr2 = np.array([0, 0, -1, -1]) arr3 = np.array([0.51, 0.89, 0.59, 0.94]) arr4 = np.array( [[0.52, 0.80, 0.62, 1.1], [0.41, 0.71, 0.46, 0.77], [0.68, 1.12, 0.78, 1.19]] ).T arr5 = np.repeat(np.array([0.6, 0.2, 0.2]), 4).reshape(3, 4).T arrs = (arr1, arr2, arr3, arr4, arr5) dtypes = [ (&quot;state&quot;, &quot;f8&quot;), (&quot;variability&quot;, &quot;f8&quot;), (&quot;target&quot;, &quot;f8&quot;), (&quot;measured&quot;, [(&quot;mean&quot;, &quot;f8&quot;), (&quot;low&quot;, &quot;f8&quot;), (&quot;hi&quot;, &quot;f8&quot;)]), (&quot;var&quot;, [(&quot;mid&quot;, &quot;f8&quot;), (&quot;low&quot;, &quot;f8&quot;), (&quot;hi&quot;, &quot;f8&quot;)]), ] example = np.column_stack(arrs) example = rfn.unstructured_to_structured(example, dtype=np.dtype(dtypes)) </code></pre> <p><strong>Inspect example array</strong></p> <pre><code>print(example) print(example.dtype.names) </code></pre> <pre><code>[(4., 0., 0.51, (0.52, 0.41, 0.68), (0.6, 0.2, 0.2)) (5., 0., 0.89, (0.8 , 0.71, 1.12), (0.6, 0.2, 0.2)) (4., -1., 0.59, (0.62, 0.46, 0.78), (0.6, 0.2, 0.2)) (5., -1., 0.94, (1.1 , 0.77, 1.19), (0.6, 0.2, 0.2))] ('state', 'variability', 'target', 'measured', 'var') </code></pre> <pre><code>print(example[&quot;measured&quot;].dtype.names) </code></pre> <p><code>('mean', 'low', 'hi')</code></p> <pre><code>print(example[&quot;var&quot;].dtype.names) </code></pre> <p><code>('mid', 'low', 'hi')</code></p> <p><strong>Desired pandas dataframe</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>state</th> <th>variability</th> <th>target</th> <th>measured_mean</th> <th>measured_low</th> <th>measured_hi</th> <th>var_mid</th> <th>var_low</th> <th>var_hi</th> </tr> </thead> <tbody> <tr> <td>4</td> <td>0</td> <td>0.51</td> <td>0.52</td> <td>0.41</td> <td>0.68</td> <td>0.6</td> <td>0.2</td> <td>0.2</td> </tr> <tr> <td>5</td> <td>0</td> <td>0.89</td> <td>0.8</td> <td>0.71</td> <td>1.12</td> <td>0.6</td> <td>0.2</td> <td>0.2</td> </tr> <tr> <td>4</td> <td>-1</td> <td>0.59</td> <td>0.62</td> <td>0.46</td> <td>0.78</td> <td>0.6</td> <td>0.2</td> <td>0.2</td> </tr> <tr> <td>5</td> <td>-1</td> <td>0.94</td> <td>1.1</td> <td>0.77</td> <td>1.19</td> <td>0.6</td> <td>0.2</td> <td>0.2</td> </tr> </tbody> </table> </div> <p><strong>Attempts</strong></p> <pre><code>test = pd.DataFrame(example) print(test) </code></pre> <pre><code> state variability target measured var 0 4.0 0.0 0.51 (0.52, 0.41, 0.68) (0.6, 0.2, 0.2) 1 5.0 0.0 0.89 (0.8, 0.71, 1.12) (0.6, 0.2, 0.2) 2 4.0 -1.0 0.59 (0.62, 0.46, 0.78) (0.6, 0.2, 0.2) 3 5.0 -1.0 0.94 (1.1, 0.77, 1.19) (0.6, 0.2, 0.2) </code></pre> <p><strong>How to I unpack the measured and var columns to get/concatenate the column names, as shown above, based on the rec array?</strong></p>
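<p>For reference, this is a sketch of the manual flattening I am hoping to avoid writing by hand; it assumes the example array built above and that pandas is imported as <code>pd</code>:</p>
<pre><code>cols = {}
for name in example.dtype.names:
    field = example[name]
    if field.dtype.names:  # nested struct: one column per sub-field
        for sub in field.dtype.names:
            cols[f'{name}_{sub}'] = field[sub]
    else:
        cols[name] = field

flat = pd.DataFrame(cols)
print(flat)
</code></pre>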
<python><pandas><multidimensional-array><structured-array>
2022-12-22 18:01:15
2
3,476
a11
74,892,218
1,833,539
Run alembic operations manually by code, using an sqlalchemy engine
<p>I invested some time in understanding Alembic and how I could run migrations manually without any setup. For several reasons we need to be able to execute operations manually, without an init step, ini files or an execution context.</p>
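<p>For context, a minimal sketch of the shape I have in mind, using Alembic's ad-hoc operations API on a plain SQLAlchemy engine. The connection string, table and column names are placeholders:</p>
<pre><code>import sqlalchemy as sa
from alembic.migration import MigrationContext
from alembic.operations import Operations

engine = sa.create_engine('postgresql://user:password@localhost/mydb')  # placeholder URL

with engine.begin() as conn:
    ctx = MigrationContext.configure(conn)
    op = Operations(ctx)
    # any op.* call can go here, e.g. adding a column to an existing table
    op.add_column('my_table', sa.Column('my_column', sa.String(50)))
</code></pre>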
<python><sql><python-3.x><sqlalchemy><alembic>
2022-12-22 17:56:38
1
2,616
Denny Weinberg
74,892,198
20,585,541
Function in a class doesn't require self parameter
<p>Trying to make a Python module which provides functions for user input. My code looks like this:</p> <pre class="lang-py prettyprint-override"><code>class PyInput: def inputString(prompt = &quot;Enter a string: &gt;&gt;&gt; &quot;): userInput = input(prompt) return userInput </code></pre> <p>I'm receiving the following error when I run the function:</p> <blockquote> <p>Instance methods should take a &quot;self&quot; parameter</p> </blockquote> <p>I have tried to add a self parameter to the function, but this requires the user to make an object of the class first, which isn't what I want. Is there a way to define a function within a class that so it doesn't take a <code>self</code> parameter?</p> <p>I'm running Python 3.11 on a <a href="https://en.wikipedia.org/wiki/Windows_11" rel="nofollow noreferrer">Windows 11</a> 64-bit laptop.</p>
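<p>For context, the behaviour I am after is roughly this sketch using <code>@staticmethod</code>, if that is the intended tool for it:</p>
<pre><code>class PyInput:
    @staticmethod
    def inputString(prompt='Enter a string: &gt;&gt;&gt; '):
        return input(prompt)

value = PyInput.inputString()  # callable without creating an instance
</code></pre>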
<python><python-3.x><function><python-typing><self>
2022-12-22 17:55:18
1
555
TechyBenji
74,892,023
4,321,525
How to do columnwise operations with Numpy structured arrays?
<p>This shows the problem nicely:</p> <pre><code>import numpy as np a_type = np.dtype([(&quot;x&quot;, int), (&quot;y&quot;, float)]) a_list = [] for i in range(0, 8, 2): entry = np.zeros((1,), dtype=a_type) entry[&quot;x&quot;][0] = i entry[&quot;y&quot;][0] = i + 1.0 a_list.append(entry) a_array = np.array(a_list, dtype=a_type) a_array_flat = a_array.reshape(-1) print(a_array_flat[&quot;x&quot;]) print(np.sum(a_array_flat[&quot;x&quot;])) </code></pre> <p>and this produces the trackback and output:</p> <pre><code>[0 2 4 6] Traceback (most recent call last): File &quot;/home/andreas/src/masiri/booking_algorythm/demo_structured_aarray_flatten.py&quot;, line 14, in &lt;module&gt; print(np.sum(a_array_flat[&quot;x&quot;])) File &quot;&lt;__array_function__ internals&gt;&quot;, line 180, in sum File &quot;/home/andreas/src/masiri/venv/lib/python3.10/site-packages/numpy/core/fromnumeric.py&quot;, line 2298, in sum return _wrapreduction(a, np.add, 'sum', axis, dtype, out, keepdims=keepdims, File &quot;/home/andreas/src/masiri/venv/lib/python3.10/site-packages/numpy/core/fromnumeric.py&quot;, line 86, in _wrapreduction return ufunc.reduce(obj, axis, dtype, out, **passkwargs) numpy.core._exceptions._UFuncNoLoopError: ufunc 'add' did not contain a loop with signature matching types (dtype({'names': ['x'], 'formats': ['&lt;i8'], 'offsets': [0], 'itemsize': 16}), dtype({'names': ['x'], 'formats': ['&lt;i8'], 'offsets': [0], 'itemsize': 16})) -&gt; None </code></pre> <p>I chose this data structure because I must do many column-wise operations fast and have more esoteric types like <code>timedelta64</code> and <code>datetime64</code>, too. I am sure basic Numpy operations work, and I overlook something obvious. Please help me.</p>
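<p>For what it's worth, building the flat array with <code>np.concatenate</code> instead of wrapping the list in <code>np.array(..., dtype=a_type)</code> is the kind of construction I would expect to sum cleanly. A sketch with the same toy data:</p>
<pre><code>a_array = np.concatenate(a_list)   # 1-D structured array, shape (4,)
x = a_array['x']                   # plain integer view of the column
print(x)
print(np.sum(x))
</code></pre>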
<python><numpy><structured-array>
2022-12-22 17:37:03
1
405
Andreas Schuldei
74,892,000
1,542,011
CPLEX 22.1 on MacBook with M1 architecture
<p>I do not succeed to install CPLEX (version 22.1.1) on a MacBook with M1 chip (macOS Ventura - 13.1).</p> <p>The installer keeps installing the files for the wrong architecture, i.e., x86_64 instead of arm64.</p> <pre><code>/Applications/CPLEX_Studio2211/cplex/bin/x86-64_osx </code></pre> <p>When I try to use the Python API, I get an error containing the following message:</p> <pre><code>(mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')) </code></pre> <p>Using the C++ API, a similar error occurs:</p> <pre><code>building for macOS-arm64 but attempting to link with file built for macOS-x86_64 </code></pre> <p><code>uname -m</code> in the used terminal yields <code>arm64</code></p> <p>The installer is a Java-Application though. So I created a Java- Programm to see what architecture Java returns, and</p> <pre><code>System.out.println(System.getProperty(&quot;os.arch&quot;)); </code></pre> <p>returns <code>x86_64</code>.</p> <p>So my guess is that this is the underlying issue.</p> <p>Edit: I removed all java installations - just to make sure the installer can't use any existing installation, but the installer installs its own JRE anyways. I executed the installer again, and the same issue occurs. What's strange is that I can actually solve a model in OPL without problems.</p> <p>Seems like IBM has added support for the new architecture, but not tested it properly.</p>
<python><macos><cplex>
2022-12-22 17:34:48
1
1,490
Christian
74,891,956
649,920
Throwing away from dataframe samples that are too close in time
<p>I have an original pandas dataframe with datetime column <code>ts</code>, ordered by this column. I need to build a new dataframe as follows:</p> <ol> <li>its first row is the first row of the original dataframe</li> <li>I only add the row from the original dataframe if it at least 5 seconds away from the last row of the new dataframe</li> </ol> <p>There's an obvious way to do this via a <code>for</code> loop, but I have a lot of data and I would prefer an pandas-specific way.</p> <p>As an example, let's say I have</p> <pre><code>df = pd.DataFrame({'ts': [ '2022-01-01 00:00:00', '2022-01-01 00:00:03', '2022-01-01 00:00:04', '2022-01-01 00:00:06', '2022-01-01 00:00:08', '2022-01-01 00:00:10', '2022-01-01 00:00:12', '2022-01-01 00:00:15', '2022-01-01 00:00:17', '2022-01-01 00:00:20' ]}) </code></pre> <p>and I expect to get</p> <pre><code>df = pd.DataFrame({'ts': [ '2022-01-01 00:00:00', '2022-01-01 00:00:06', '2022-01-01 00:00:12', '2022-01-01 00:00:17' ]}) </code></pre> <p>FWIW, one ai chat bot is not able to solve that, and even provided some answers claiming to get the final result, while actually they give an empty dataframe as an output.</p> <p><strong>Edit:</strong> the answer should also work if the times are provided with a millisecond granularity.</p>
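<p>To make the rule concrete, this is the plain-loop version I am trying to replace (a sketch; on the example above it should keep the 00:00, 00:06, 00:12 and 00:17 rows):</p>
<pre><code>ts = pd.to_datetime(df['ts'])
keep, last = [], None
for i, t in enumerate(ts):
    if last is None or (t - last) &gt;= pd.Timedelta(seconds=5):
        keep.append(i)
        last = t
result = df.iloc[keep].reset_index(drop=True)
</code></pre>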
<python><pandas><datetime>
2022-12-22 17:30:49
1
357
SBF
74,891,571
7,794,924
Python ImportError of "sounddevice" on M1 Mac when running script in PyCharm (incompatible architecture)
<p>I have an M1 Mac. My program was running fine in PyCharm when using the Intel-based dmg. PyCharm kept notifying me to upgrade to the version optimized for Apple Silicon. PyCharm opened noticeably smoother. But trying to run my script now gives me an ImportError for &quot;sounddevice&quot; library. I tried to pip uninstall/reinstall, but made no difference. How can I fix this?</p> <pre><code>Traceback (most recent call last): File &quot;/Users/anonymous/PycharmProjects/ChineseTranscriber/main.py&quot;, line 2, in &lt;module&gt; import sounddevice File &quot;/Users/anonymous/PycharmProjects/ChineseTranscriber/venv/lib/python3.9/site-packages/sounddevice.py&quot;, line 58, in &lt;module&gt; from _sounddevice import ffi as _ffi File &quot;/Users/anonymous/PycharmProjects/ChineseTranscriber/venv/lib/python3.9/site-packages/_sounddevice.py&quot;, line 2, in &lt;module&gt; import _cffi_backend ImportError: dlopen(/Users/anonymous/PycharmProjects/ChineseTranscriber/venv/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 0x0002): tried: '/Users/anonymous/PycharmProjects/ChineseTranscriber/venv/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/anonymous/PycharmProjects/ChineseTranscriber/venv/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so' (no such file), '/Users/anonymous/PycharmProjects/ChineseTranscriber/venv/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')) Process finished with exit code 1 </code></pre>
<python><pycharm><apple-m1><python-sounddevice>
2022-12-22 16:53:51
1
812
nhershy
74,891,544
4,150,078
How to select number of k in cross validation for fbprophet?
<p>I have data in pandas DataFrame and have been trying to train the model fbprophet. During the <code>cross validation</code> stage I am unable to select number of <code>k</code> like you can in sklearn. Meaning k=5, k=50 folds...etc. Is there an option here or way to pass that? I couldn't find this information in the fbprophet documentation. It seems like it is selecting number of fold in the image below to 71? I am not sure how it is selecting that. How can I specify this?</p> <p><a href="https://i.sstatic.net/ImD79.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ImD79.png" alt="enter image description here" /></a> m = Prophet() m.fit(df)</p> <pre><code># Model Evaluation - RMSE and MAE df_cv = cross_validation(m, initial='200 days', period='180 days', horizon = '30 days') pm = performance_metrics(df_cv) </code></pre>
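<p>The closest thing I can see to pinning the fold count myself is passing explicit <code>cutoffs</code> to <code>cross_validation</code>. A sketch with placeholder dates, which I would still like confirmation on:</p>
<pre><code>import pandas as pd

# five cutoffs ~ five folds; the dates here are placeholders
cutoffs = list(pd.date_range(start='2021-06-01', end='2022-06-01', periods=5))
df_cv = cross_validation(m, cutoffs=cutoffs, horizon='30 days')
pm = performance_metrics(df_cv)
</code></pre>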
<python><python-3.x><facebook-prophet>
2022-12-22 16:51:51
1
2,158
sharp
74,891,522
1,028,270
Most reliable way to get the repo name?
<p>I'm not seeing anything in the docs for how to just get the repo name: <a href="https://www.pygit2.org/repository.html" rel="nofollow noreferrer">https://www.pygit2.org/repository.html</a></p> <p>Is reading the origin remote and parsing it out the only way pygit supports this or is there a more reliable or specific property for this?</p> <pre><code>myrepo = Repository(&quot;.&quot;) for remote in myrepo.remotes: if remote.name == &quot;origin&quot;: print(remote.url) # now parse git@bitbucket.org:mybb-workspace/myrepo.git ?? </code></pre>
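<p>In case URL parsing really is the only route, this is the sort of defensive helper I would rather not have to maintain (a sketch; it handles both https and scp-style remotes):</p>
<pre><code>def repo_name_from_url(url):
    tail = url.rstrip('/').rsplit('/', 1)[-1]   # 'myrepo.git'
    tail = tail.rsplit(':', 1)[-1]              # also covers git@host:repo.git with no slash
    return tail[:-4] if tail.endswith('.git') else tail

print(repo_name_from_url(remote.url))
</code></pre>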
<python><pygit2>
2022-12-22 16:50:14
0
32,280
red888
74,891,497
12,164,800
Is it possible to monkeypatch all objects of a class regardless of initialisation method?
<p>I'm testing some code that manipulates data using pandas, and I want to avoid writing out the data during tests.</p> <p>Let's say my code in a file name module.py is this:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import dask.dataframe as dd def do_stuff() -&gt; None: df = pd.DataFrame({'a': [1, 2, 3], 'b': [1, 2, 3]}) another_df = df.pivot_table(values='a', index='b') yet_another_df = another_df.groupby('b').sum() another_df.to_csv('data.csv') yet_another_df.to_csv('more_data.csv') </code></pre> <p>I want to intercept all these individual &quot;to_csv&quot; methods, so that tests ran don't write out data.</p> <p>My first thought was to try something like this with pytest:</p> <pre class="lang-py prettyprint-override"><code>import module class NonWritingDataFrame(pd.DataFrame): def to_csv(self, *args, **kwargs): pass def test_do_stuff_returns_nothing(monkeypatch): monkeypatch.settattr(module, 'pd.DataFrame', NonWritingDataFrame) actual = module.do_stuff() assert actual is None </code></pre> <p>But sadly this doesn't work - it might for the first &quot;df&quot; variable (I'm not actually sure if it does) but the another_df and yet_another_df are returned by other pandas methods, and not from the &quot;module&quot; module, so are normal pandas DataFrames and not my special NonWritingDataFrame object.</p> <p>My question is, is there a neat way to replace all pandas DataFrame &quot;to_csv&quot; calls, regardless of the method used to define the DataFrame?</p>
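<p>For completeness, one pattern I am weighing up is patching the method on the class itself rather than swapping the class. A sketch, assuming pandas is importable as <code>pd</code> in the test module:</p>
<pre><code>import pandas as pd
import module

def test_do_stuff_returns_nothing(monkeypatch):
    # every DataFrame looks the method up on the class, however the instance was created
    monkeypatch.setattr(pd.DataFrame, 'to_csv', lambda self, *args, **kwargs: None)
    assert module.do_stuff() is None
</code></pre>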
<python><pandas><mocking><pytest><monkeypatching>
2022-12-22 16:47:37
1
457
houseofleft
74,891,473
20,793,070
How can I round values in Polars?
<p>I have some calculated float columns. I want to display the values of one column rounded, but <code>round(pl.col(&quot;value&quot;), 2)</code> is not working properly in Polars. How can I do this?</p>
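<p>For reference, a minimal sketch of the expression-level rounding I have in mind (recent Polars; the data is made up):</p>
<pre><code>import polars as pl

df = pl.DataFrame({'value': [1.2345, 2.3456, 3.4567]})
out = df.with_columns(pl.col('value').round(2))
print(out)
</code></pre>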
<python><dataframe><python-polars>
2022-12-22 16:45:41
2
433
Jahspear
74,891,472
248,123
Fastavro fails to parse Avro schema with enum
<p>I have the following code block:</p> <pre><code>import fastavro schema = { &quot;name&quot;: &quot;event&quot;, &quot;type&quot;: &quot;record&quot;, &quot;fields&quot;: [{&quot;name&quot;: &quot;event_type&quot;, &quot;type&quot;: &quot;enum&quot;, &quot;symbols&quot;: [&quot;START&quot;, &quot;STOP&quot;]}], } checker = fastavro.parse_schema(schema=schema) </code></pre> <p>Upon running it, it generates the following error:</p> <pre><code>Traceback (most recent call last): File &quot;.\test_schema.py&quot;, line 7, in &lt;module&gt; checker = fastavro.parse_schema(schema=schema) File &quot;fastavro\_schema.pyx&quot;, line 121, in fastavro._schema.parse_schema File &quot;fastavro\_schema.pyx&quot;, line 322, in fastavro._schema._parse_schema File &quot;fastavro\_schema.pyx&quot;, line 381, in fastavro._schema.parse_field File &quot;fastavro\_schema.pyx&quot;, line 191, in fastavro._schema._parse_schema fastavro._schema_common.UnknownType: enum </code></pre> <p>I'm running fastavro 1.7.0, on Windows 10, and python 3.8.8. I have had similar problems on Linux (RHEL 8) using python 3.10.</p> <p>Based on the <a href="https://avro.apache.org/docs/1.9.1/spec.html#Enums" rel="nofollow noreferrer">Avro documentation for enumerated types</a>, the schema seems correct, but fastavro fails to recognize the <code>enum</code> type. Am I doing something wrong here, or does fastavro 1.7.0 not support Avro 1.9 enumerations?</p>
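<p>For comparison, here is a sketch of the same schema with the enum written out as its own named schema object, which is how I read the spec's treatment of named complex types. The enum name is a placeholder:</p>
<pre><code>schema = {
    'name': 'event',
    'type': 'record',
    'fields': [
        {
            'name': 'event_type',
            'type': {'type': 'enum', 'name': 'EventType', 'symbols': ['START', 'STOP']},
        }
    ],
}
checker = fastavro.parse_schema(schema=schema)
</code></pre>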
<python><python-3.x><enums><avro><fastavro>
2022-12-22 16:45:39
1
17,567
andand
74,891,398
885,014
How to share a library among several Dockerfiles and repos?
<p>I am working on a project that's evolved from one Dockerfile supporting several apps to one Dockerfile per app.</p> <p>This generally works better than having them all together in one, but I'd like to share one Python library file among the apps without duplicating it.</p> <p>I don't see a good way to do this, at least with the structure as currently set up: all apps have individual Bitbucket repos.</p> <p>I don't think it's worth it to change the repo structure just for this, but is there some easier way I'm missing?</p>
<python><docker>
2022-12-22 16:37:59
1
651
Mark McWiggins
74,891,196
6,687,699
Azure Database for PostgreSQL flexible server Slow with Django
<p>Am using Django to connect to <code>Azure Database for PostgreSQL flexible server</code> but it's very slow, below are the specs of the server :</p> <p><strong>Compute + storage</strong></p> <pre><code>Pricing tier Memory Optimized Compute size Standard_E4s_v3 (4 vCores, 32 GiB memory, 6400 max iops) Storage 32 GiB </code></pre> <p><strong>High Availability :</strong></p> <pre><code>High availability Enabled High availability mode ZoneRedundant Availability zone 2 Standby availability zone 1 </code></pre> <p><a href="https://i.sstatic.net/DnV2y.png" rel="nofollow noreferrer">Specs </a></p> <p><img src="https://i.sstatic.net/DnV2y.png" alt="specs" /></p> <p>As you can see above, the specs are high, but the performance isn't better, in postman when I was hitting to get data it took <code>34.5 seconds</code>, this is way too much to wait.</p> <p>In my Django code, I tried my best to optimize the queries, and on Heroku, it was super fast, however here on Azure it's extremely slow, what could be done to improve the speed of the server?</p> <p>For more information on Django this is the <code>view.py</code> of the endpoint:</p> <pre><code>@method_decorator(cache_page(60 * 60 * 4), name='get') @method_decorator(vary_on_cookie, name='get') class PostList(generics.ListCreateAPIView): &quot;&quot;&quot;Blog post lists&quot;&quot;&quot; queryset = Post.objects.filter(status=APPROVED).select_related( &quot;owner&quot;, &quot;grade_level&quot;, &quot;feedback&quot;).prefetch_related( &quot;bookmarks&quot;, &quot;likes&quot;, &quot;comments&quot;, &quot;tags&quot;, &quot;tags__following&quot;).prefetch_related(&quot;address_views&quot;) serializer_class = serializers.PostSerializer authentication_classes = (JWTAuthentication,) permission_classes = (PostsProtectOrReadOnly, IsMentorOnly) def filter_queryset(self, queryset): ordering = self.request.GET.get(&quot;order_by&quot;, None) author = self.request.GET.get(&quot;author&quot;, None) search = self.request.GET.get(&quot;search&quot;, None) tag = self.request.GET.get(&quot;tag&quot;, None) # filter queryset with filter_backends πŸ–Ÿ queryset = super().filter_queryset(queryset) if ordering == 'blog_views': queryset = queryset.annotate( address_views_count=Count('address_views')).order_by( '-address_views_count') if author: queryset = queryset.filter(owner__email=author).select_related( &quot;owner&quot;, &quot;grade_level&quot;, &quot;feedback&quot;).prefetch_related( &quot;bookmarks&quot;, &quot;likes&quot;, &quot;comments&quot;, &quot;tags&quot;, &quot;tags__following&quot;).prefetch_related(&quot;address_views&quot;) if tag: queryset = queryset.filter( tags__name__icontains=tag).select_related( &quot;owner&quot;, &quot;grade_level&quot;, &quot;feedback&quot;).prefetch_related( &quot;bookmarks&quot;, &quot;likes&quot;, &quot;comments&quot;, &quot;tags&quot;, &quot;tags__following&quot;).prefetch_related(&quot;address_views&quot;) if search: queryset = queryset.annotate( rank=SearchRank(SearchVector('title', 'body', 'description'), SearchQuery(search))).filter( rank__gte=SEARCH_VALUE).order_by('-rank') return queryset </code></pre> <p>Then is my <code>serializer.py</code> :</p> <pre><code>class PostSerializer(SoftDeletionSerializer): &quot;&quot;&quot;Post Serializer&quot;&quot;&quot; owner = UserProfile(read_only=True) tags = TagSerializer(many=True) comments = CommentSerializer(many=True, read_only=True) slug = serializers.SlugField(read_only=True) grade_level = GradeClassSerializer(many=False) class Meta: model = Post fields = ('uuid', 'id', 
'title', 'body', 'owner', 'slug', 'grade_level', 'comments', 'tags', 'description', 'image', 'created', 'modified', 'blog_views', 'blog_likes', 'blog_bookmarks', 'status', 'read_time', 'is_featured',) readonly = ('id', 'status',) + SoftDeletionSerializer.Meta.fields def to_representation(self, instance): &quot;&quot;&quot;Overwrite to serialize extra fields in the response.&quot;&quot;&quot; representation = super(PostSerializer, self).to_representation(instance) representation['has_bookmarked'] = self.has_bookmarked(instance) representation['has_liked'] = self.has_liked(instance) return representation def has_liked(self, instance): return self.context['request'].user in list(instance.likes.all()) def has_bookmarked(self, instance): return self.context['request'].user in list(instance.bookmarks.all()) </code></pre> <p>In my <code>settings.py</code>, I don't have unused middleware or apps, I cleaned it up, and I think the issue is on the database, not the code.</p> <p>What could improve the speed of <strong>Azure Database for PostgreSQL flexible server</strong>?</p>
<python><django><postgresql><azure>
2022-12-22 16:19:04
1
4,030
Lutaaya Huzaifah Idris
74,891,119
3,834,415
How do I output only the dataframe columns to csv?
<p>Use case: golfing on the CLI in a utility function that I can't afford to make complicated.</p> <p>I need to peek at only the column names only of a large file in binary format, and not the column names plus, say, the first data row.</p> <p>In my current implementation, I have to write the burdensome command to peek at the first row of large files:</p> <pre><code>my-tool peek -n 1 huge-file.parquet | head -n 1 | tr ',' '\n' | less </code></pre> <p>What I would like is to:</p> <pre><code>my-tool peek --cols huge-file.parquet | tr ',' '\n' | less </code></pre> <p>or</p> <pre><code>my-tool peek --cols -d '\n' huge-file.parquet | less </code></pre> <p>Without getting complicated in python. I currently use the following mechanism to generate the csv:</p> <pre><code>out = StringIO() df.to_csv(out) print(out.getvalue()) </code></pre> <hr /> <p>Is there a <code>DataFrame</code>-ish way to output just the columns to <code>out</code> via <code>to_csv(...)</code> or similarly simple technique?</p>
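<p>On the Python side, the minimal thing I am picturing is something like this sketch: either skipping <code>to_csv</code> entirely, or writing a zero-row frame so only the header survives:</p>
<pre><code># columns only, one per line, no data rows
print('\n'.join(map(str, df.columns)))

# or, staying with to_csv: a zero-row frame keeps just the header
df.head(0).to_csv(out)
</code></pre>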
<python><pandas><dataframe>
2022-12-22 16:11:08
1
31,690
Chris
74,890,785
1,694,518
How to create a histogram with different space between the bars in matplotlib
<p>I want to create a histogram with four pairs of bars, like the following. How do I do it?<a href="https://i.sstatic.net/MqD67.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MqD67.png" alt="enter image description here" /></a></p>
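<p>To make the layout concrete, this is roughly what I mean; the heights and spacing below are made up:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

heights = [3, 5, 2, 4, 6, 1, 4, 3]        # placeholder values, four pairs
x = np.array([0, 1, 3, 4, 6, 7, 9, 10])   # one unit of empty space between pairs
plt.bar(x, heights, width=1.0, edgecolor='black')
plt.show()
</code></pre>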
<python><matplotlib>
2022-12-22 15:41:07
1
1,155
f. c.
74,890,783
1,531,280
Is Django db_index helpful in text search queries?
<p>I've the following model in a Django project:</p> <pre><code>from django.db import models class Book(models.Model): title = models.CharField(verbose_name='Title', max_length=255, db_index=True) authors = models.CharField(verbose_name='Authors', max_length=255, db_index=True) date = models.DateField(verbose_name='Date', db_index=True) </code></pre> <p>In the views I need to do a full text search query like the following:</p> <pre><code>books = Book.objects.filter(Q(title__icontains=query) | Q(authors__icontains=query)) </code></pre> <p>Is the <code>db_index=True</code> actually helping the performances of my query or not?</p>
<python><django><django-models><django-queryset><database-indexes>
2022-12-22 15:40:32
1
2,712
Dos
74,890,722
9,003,672
How can I see the list of all lines from where my method or class is called in Jupyter Notebook?
<p>How can I jump to the lines from which a definition or class is called?</p> <p>I want something like this <a href="https://i.sstatic.net/tzzxg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tzzxg.jpg" alt="enter image description here" /></a></p> <p>I find it hard to believe this does not exist. I have gone through <a href="https://github.com/krassowski/jupyterlab-go-to-definition" rel="nofollow noreferrer">https://github.com/krassowski/jupyterlab-go-to-definition</a> but I want something similar to this in reverse.</p> <p>Any help is appreciated.</p>
<python><jupyter-notebook><jupyter><jupyter-lab><jupyterhub>
2022-12-22 15:36:05
2
501
Binit Amin
74,890,684
2,077,648
How to generate a report in python containing the difference of last and first row of multiple csv files?
<p>I have multiple csv files in the below format:</p> <p><strong>CSV File 1:</strong></p> <p><a href="https://i.sstatic.net/x0UqI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x0UqI.png" alt="enter image description here" /></a></p> <p><strong>CSV File 2:</strong></p> <p><a href="https://i.sstatic.net/DUUEq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DUUEq.png" alt="enter image description here" /></a></p> <p>The report needs to be generated containing the difference of last and first row of each csv file as below:</p> <p><a href="https://i.sstatic.net/kPM7I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kPM7I.png" alt="enter image description here" /></a></p> <p>The below code calculates the difference between the last and first row. How do we write the results into a separate file in the report format specified above?</p> <pre class="lang-py prettyprint-override"><code>def csv_subtract(): # this is the extension you want to detect extension = '.csv' for root, dirs_list, files_list in os.walk(csv_file_path): for f in files_list: if os.path.splitext(f)[-1] == extension: file_name_path = os.path.join(root, f) df = pd.read_csv(file_name_path) diff_row = (df.iloc[len(df.index) - 1] - df.iloc[0]).to_frame() </code></pre>
<python><pandas><dataframe><numpy><glob>
2022-12-22 15:33:21
1
967
user2077648
74,890,547
12,883,297
Increase the number of rows till we reach some condition in pandas
<p>I have a dataframe</p> <pre><code>df = pd.DataFrame([[&quot;X&quot;,&quot;day_2&quot;],[&quot;Y&quot;,&quot;day_4&quot;],[&quot;Z&quot;,&quot;day_3&quot;]],columns=[&quot;id&quot;,&quot;day&quot;]) </code></pre> <pre><code>id day X day_2 Y day_4 Z day_3 </code></pre> <p>I want to increase the rows for each id till I reach day_5 starting from the next day mentioned in the day column. For example for X id day_2 is there get 3 rows starting from day_3 to day_5, for Y id get only 1 row day_5 for Z get 2 rows day_4 and day_5 as day_3 is there in day column.</p> <p><strong>Expected Output:</strong></p> <pre><code>df = pd.DataFrame([[&quot;X&quot;,&quot;day_3&quot;],[&quot;X&quot;,&quot;day_4&quot;],[&quot;X&quot;,&quot;day_5&quot;],[&quot;Y&quot;,&quot;day_5&quot;],[&quot;Z&quot;,&quot;day_4&quot;],[&quot;Z&quot;,&quot;day_5&quot;]],columns=[&quot;id&quot;,&quot;day&quot;]) </code></pre> <pre><code>id day X day_3 X day_4 X day_5 Y day_5 Z day_4 Z day_5 </code></pre> <p>How to do it?</p>
<python><python-3.x><pandas><dataframe>
2022-12-22 15:21:12
4
611
Chethan
74,890,514
4,706,745
Django file upload is not uploading the file?
<p>I have created a file upload something like this:</p> <p><em>DataUpload</em> is the view that handles template rendering and <em>handle_uploaded_file</em> is the function that reads the file.</p> <p><strong>View.py</strong></p> <pre><code>def DataUpload(request): if request.method == 'POST': form = UploadFileForm(request.POST, request.FILES) if form.is_valid(): print(request.FILES['file']) handle_uploaded_file(request.FILES['file']) return HttpResponseRedirect('/success/url/') else: form = UploadFileForm() return render(request, 'DataBase/upload.html', {'form': form}) def handle_uploaded_file(f): with open(os.getcwd()+f, 'wb+') as destination: for chunk in f.chunks(): destination.write(chunk) </code></pre> <p><strong>url</strong></p> <pre><code>url(r'DataUpload', views.DataUpload, name='DataUpload'), </code></pre> <p><strong>forms.py</strong></p> <pre><code>from django import forms class Rand_Frag_From(forms.Form): Seq = forms.CharField(widget=forms.TextInput(attrs={'class':'form-control','placeholder':'Tropomyosin beta chain '}),required=False) Acc = forms.CharField(widget=forms.TextInput(attrs={'class':'form-control','placeholder':'P02671 '}), required=True) class UploadFileForm(forms.Form): file = forms.FileField() </code></pre> <p><strong>template</strong></p> <pre><code>{% load static %} &lt;link rel=&quot;stylesheet&quot; href=&quot;{% static 'css/table/custom.css' %}&quot;&gt; &lt;div class=&quot;main&quot;&gt; &lt;div class=&quot;site-content&quot;&gt; &lt;div class=&quot;mdl-grid site-max-width&quot;&gt; &lt;div class=&quot;mdl-cell mdl-cell--12-col mdl-card mdl-shadow--4dp page-content&quot;&gt; &lt;div class=&quot;mdl-grid&quot;&gt; &lt;form action=&quot;{%url 'DataUpload' %}&quot; method=&quot;POST&quot; class=&quot;form-contact&quot;&gt; {%csrf_token%} &lt;/div&gt; &lt;div class=&quot;mdl-textfield mdl-js-textfield mdl-textfield--floating-label&quot;&gt; {{form.file}} &lt;/div&gt; &lt;button class=&quot;mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect mdl-button--accent&quot; type=&quot;submit&quot;&gt; Submit &lt;/button&gt; &lt;/form&gt; &lt;/div&gt;&lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p><strong>console log after submitting the form</strong></p> <pre><code>Quit the server with CONTROL-C. [22/Dec/2022 15:14:19] &quot;POST /DataUpload HTTP/1.1&quot; 200 1317 </code></pre> <p>I'm not getting any error anywhere, but it's not uploading the file.</p>
<python><django><django-views><django-forms><django-templates>
2022-12-22 15:18:33
1
4,217
jax
74,890,270
3,521,180
Why is the pyspark function changing the data type of columns in pyspark?
<p>I am in a tricky situation. I have two columns with <code>string type</code> and it is supposed to be <code>dateType</code> in the final path. So, to achieve that, I am passing it as a python dictionary</p> <pre><code>d = {&quot;col1&quot;: DateType(), &quot;col2&quot;: DateType()} </code></pre> <p>The above is passed to a data frame where I am supposed to do this transformation.</p> <pre><code>df = func.cast_attribute(df2, p_property.d) </code></pre> <p>The col1, and col2 are in the <code>&quot;yyyy-MM-dd&quot;</code> format. But as per the business requirement, I need the <code>&quot;dd-mm-yyyy&quot;</code> format. To do that I was suggested to use pyspark in-built function <code>date_format()</code></p> <pre><code>col_df = df.withColumn(&quot;col1&quot;, date_format(&quot;col1&quot;, format=&quot;dd-MM-yyyy&quot;)) </code></pre> <p>But the problem is that the above transformation to convert <code>yyyy-MM-dd</code> to <code>dd-MM-yyyy</code> is converting the data type of the two cols back to string</p> <p>Note func.cast_attribute()</p> <pre><code>def cast_attribute(df, mapper={}): &quot;&quot;&quot; Cast columns of the dataframe \n Usage: cast_attribute(your_df, {'column_name', StringType()}) Args: df (DataFrame):DataFrame to convert some of the attributes. mapper (dict, optional): Dictionary with old column datatype to new datatype. Returns: DataFrame: Returns the dataframe with converted columns &quot;&quot;&quot; for column, dtype in mapper.items(): df = df.withColumn(column, col(column).cast(dtype)) return df </code></pre> <p>Please suggest, how to keep the date type in the final data frame..</p>
<python><python-3.x><pyspark>
2022-12-22 14:58:21
1
1,150
user3521180
74,890,214
6,702,598
Type hint callback function with optional parameters (aka Callable with optional parameter)
<p>How do I specify the type of a callback function which takes an optional parameter?</p> <p>For example, I have the following function:</p> <pre><code>def getParam(param: str = 'default'): return param </code></pre> <p>I'd like to pass it on as a callback function and use it both as <code>Callable[[str], str]</code> <em>and</em> <code>Callable[[], str]</code>:</p> <pre><code>def call(callback: Callable[[&lt;???&gt;], str]): return callback() + callback('a') </code></pre> <p>What do I write for <code>&lt;???&gt;</code>? I've tried Python's <code>Optional</code> but that does not work, as it still enforces a parameter (even if with value <code>None</code>) to be present.</p> <hr /> <p>For reference, I'm basically searching for the equivalent of the following TypeScript:</p> <pre class="lang-js prettyprint-override"><code>function call(callback: (a?: string) =&gt; string) { return callback() + callback('a'); } </code></pre>
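<p>The shape I keep circling back to is a callback <code>Protocol</code> with a defaulted parameter (a sketch). I would like to know whether this, or something simpler, is the intended way:</p>
<pre><code>from typing import Protocol

class StrCallback(Protocol):
    def __call__(self, param: str = ...) -&gt; str: ...

def call(callback: StrCallback) -&gt; str:
    return callback() + callback('a')
</code></pre>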
<python><python-3.x>
2022-12-22 14:53:12
1
3,673
DarkTrick
74,890,197
2,278,742
networkx: Assigning node colors based on weight when using bbox
<p>I want to draw a graph with <code>networx</code> and render the nodes as boxes wide enough to contain each node's name, colored according to its weight.</p> <p>If I use the default shape, they are colored correctly, but when I use the bbox parameter facecolor, they all have the same color.</p> <p><a href="https://i.sstatic.net/fHD4U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fHD4U.png" alt="enter image description here" /></a></p> <pre><code>from pprint import pprint import matplotlib.pyplot as plt import networkx as nx from networkx.drawing.nx_pydot import graphviz_layout G = nx.Graph() G.add_node((1, 4), weight=10/255) G.add_node((7, 2), weight=130/255) G.add_node((9, 7), weight=250/255) node_labels = dict() node_colors = [] for node, node_data in G.nodes.items(): node_labels[node] = node node_colors.append(node_data['weight']) pprint(node_labels) pprint(node_colors) fig, axs = plt.subplots(ncols=2) plt.margins(0.0) pos = graphviz_layout(G, prog='dot') nx.draw(G, pos=pos, with_labels=True, labels=node_labels, node_color=node_colors, cmap=plt.cm.gray, ax=axs[0]) nx.draw(G, pos=pos, with_labels=True, labels=node_labels, node_color=node_colors, cmap=plt.cm.gray, ax=axs[1], bbox=dict(facecolor=node_colors, edgecolor='black', boxstyle='round,pad=0.2')) fig.tight_layout() plt.show() </code></pre>
<python><matplotlib><graph><networkx>
2022-12-22 14:51:46
1
493
804b18f832fb419fb142
74,890,115
12,725,674
Keep part of text between strings using regex
<p>Using the following code I have extracted a transcripted Interview from a pdf:</p> <pre><code>from pdfminer.high_level import extract_text text = extract_text(f&quot;C:/Users/User/Desktop/Interview1.pdf&quot;) </code></pre> <p>The general structure of a transcript looks like this (\n are included on purpose as these might be one reason why my code doesn't work):</p> <blockquote> <p>CHRIS BACON, CEO COMMERCIAL METALS COMPANY: bla bla bla...</p> <p>BARBARA SMITH, SVP AND CFO, COMMERCIAL METALS COMPANY: I agree with Chris Bacon, bla bla bla bla....</p> <p>\nJOSEPH ALVARADO: Bla bla bla bla....</p> </blockquote> <p><strong>GOAL</strong> As you can see there are multiple people being interviewed and I am only interested in the transcripted text of a specific person whereas the start of a person's text is always indicated with the person's name (uppercase) sometimes followed by their position in the company and always ends with a colon &quot;:&quot; before the start of the text.</p> <p>I tried the following code to extract the text of Barbara Smith which unfortunately doesn't return anything:</p> <pre><code>Smith_text = re.findall(r'SMITH(.*):(.*):', text) </code></pre> <p>I am not very familiar with regex and would appreciate if someone could help me out here.</p>
<python><regex>
2022-12-22 14:46:51
1
367
xxgaryxx
74,889,884
4,391,249
How to ensure expressions that evaluate to floats give the expected integer value with int()
<p>In this question's most general form, I want to know how I can guarantee that <code>int(x * y)</code> (with <code>x</code> and <code>y</code> both being <code>float</code>s gives me the arithmetically &quot;correct&quot; answer when I know the result to be a round number. For example: 1.5 * 2.0 = 3, or 16.0 / 2.0 = 8. I worry this could be a problem because <code>int</code> can round down if there is some floating point error. For example: <code>int(16.0 - 5 * sys.float_info.epsilon)</code> gives <code>15</code>.</p> <p>And specializing the question a bit, I could also ask about division between two <code>int</code>s where I know the result is a round number. For example 16 / 2 = 8. If this specialization changes the answer to the more general question, I'd like to know how.</p> <p>By the way, I know that I could do <code>int(round(x * y)</code>. I'm just wondering if there's a more direct built-in, or if there's some proven guarantee that means I don't have to worry about this in the first place.</p>
<python><floating-point><integer>
2022-12-22 14:26:20
1
3,347
Alexander Soare
74,889,866
4,162,689
How can I scrape data from nested tables in Python?
<p>I am trying to scrape <a href="http://www.radiofeeds.co.uk/mp3.asp" rel="nofollow noreferrer">radiofeeds</a> so that I can compile the following list of all UK radio stations.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Station Name</th> <th>Geographical Location</th> <th>Stream URLs</th> </tr> </thead> <tbody> <tr> <td>Athavan Radio</td> <td>Greater London</td> <td>{ 64: &quot;http://38.96.148.140:6150/listen.pls?sid=1&quot;,<br>128 : &quot;http://www.radiofeeds.net/playlists/athavan.m3u&quot;,<br />320 : &quot;http://www.radiofeeds.net/playlists/zeno.pls?station=e4k5enu6g18uv&quot;}</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table> </div> <p>The stream URLs are provided in a nested table, which clearly causes difficulties when searching for new rows based on the next <code>tr</code> tag, as the next new row might be in a child table.</p> <p>Thus far, I have tried the following in BeautifulSoup.</p> <pre><code>from bs4 import BeautifulSoup from requests import get url = &quot;http://www.radiofeeds.co.uk/mp3.asp&quot; page = get(url=url).text lead = &quot;Listen live online to&quot; foot = &quot;Have &lt;b&gt;YOUR&lt;/b&gt; internet&quot; start = page.find(lead) stop = page.find(foot) soup = BeautifulSoup(page[start:stop], &quot;html.parser&quot;) data = [] table = soup.find(&quot;table&quot;) rows = table.find_all(&quot;tr&quot;) for row in rows: station_name = row.find(&quot;a&quot;).text print(f&quot;STATION_NAME: {station_name}&quot;) cols = len(row.find_all(&quot;td&quot;)) print(f&quot;COL_COUNT: {cols}&quot;) print(&quot;=====&quot;) </code></pre> <p>The output of this for the first three stations is,</p> <pre><code>STATION_NAME: 121 Radio COL_COUNT: 9 ===== STATION_NAME: COL_COUNT: 2 ===== STATION_NAME: 10-fi Radio COL_COUNT: 9 ===== STATION_NAME: COL_COUNT: 2 ===== STATION_NAME: 45 Radio COL_COUNT: 9 ===== STATION_NAME: COL_COUNT: 2 ===== </code></pre> <p>Each loop is clearly jumping between the parent and child table, as shown by the varying <code>COL_COUNT</code>. How do I search the current row <em>including</em> its child tables?</p> <p>I am happy to use a different library if BeautifulSoup is not the best one for this use.</p>
<python><html><web-scraping><beautifulsoup>
2022-12-22 14:24:49
1
972
James Geddes
74,889,837
11,401,702
Do I get any important benefits of using Gunicorn to run a Flask server on AWS Fargate?
<p>I'm currently looking at a Flask server run with Gunicorn in a Docker container on EC2. I would like to move the server to Fargate.</p> <p>Coming from NodeJS, my understanding of Gunicorn is it's like PM2; since Python is single threaded, Gunicorn increases or decrease the Python processes to handle the load. (plus some other benefit)</p> <p>Is this useful on a Fargate task?</p> <p>Aren't load-balanced Fargate instances sufficient for providing all the benefits Gunicorn provides?</p>
<python><flask><gunicorn><aws-fargate>
2022-12-22 14:22:17
1
428
Ismael
74,889,613
2,333,021
Creating a hierarchical tree dict
<p>I have been working with D3 and ObservableHQ to create a Tree Plot of some hierarchical data (SARS-CoV-2 Variant lineages). I'm looking at ways to expand what I can do with the dataset and so I am moving it to Python to experiment.</p> <p>One thing I did with D3 was the use d3.stratify on my tsv dataset. I am trying to recreate that data structure using python dict</p> <pre><code> class Lineage: tree = dict() def __init__(self, designation_date: str, pango: str, partial: str, unaliased: str): self.designation_date = designation_date self.pango = pango self.partial = partial self.unaliased = unaliased self.parent = Lineage.parent(partial) @classmethod def parent(cls, lineage: str) -&gt; Union[str, None]: if &quot;.&quot; in lineage: return &quot;.&quot;.join(lineage.split('.')[:-1]) return None </code></pre> <p>Here is the tree specific code from the <code>Lineage</code> class above</p> <pre><code> @classmethod def add_to_tree(cls, lineage: Lineage): if lineage.parent is None: # No children to add cls.tree.update( { 'id': lineage.partial, 'data': lineage, 'parent': None, } ) elif 'children' in cls.tree: count = len(cls.tree['children'].items()) cls.tree['children'][count] = { 'id': lineage.partial, 'data': lineage, 'parent': lineage.parent, } else: cls.tree['children'] = { 0: { 'id': lineage.partial, 'data': lineage, 'parent': lineage.parent, } } </code></pre> <p>In the following dict object, you will see what the code produces. And below that is an image of the structure of the dataset.</p> <p>As you can see, child index 1 should actually be a child of lineage at index 0. And Children indexes 2 and 5 should actually be children of lineage at index 1, and so on.</p> <p><strong>The problem is that everything is a child a the root. There is no further hierarchy.</strong> What are some ways to mitigate this issue?</p> <pre><code>{ &quot;id&quot;: &quot;BA&quot;, &quot;data&quot;: { &quot;designation_date&quot;: &quot;2021-11-24&quot;, &quot;pango&quot;: &quot;B.1.1.529&quot;, &quot;partial&quot;: &quot;BA&quot;, &quot;unaliased&quot;: &quot;B.1.1.529&quot;, &quot;parent&quot;: null }, &quot;parent&quot;: null, &quot;children&quot;: { &quot;0&quot;: { &quot;id&quot;: &quot;BA.1&quot;, &quot;data&quot;: { &quot;designation_date&quot;: &quot;2021-12-07&quot;, &quot;pango&quot;: &quot;BA.1&quot;, &quot;partial&quot;: &quot;BA.1&quot;, &quot;unaliased&quot;: &quot;B.1.1.529.1&quot;, &quot;parent&quot;: &quot;BA&quot; }, &quot;parent&quot;: &quot;BA&quot; }, &quot;1&quot;: { &quot;id&quot;: &quot;BA.1.1&quot;, &quot;data&quot;: { &quot;designation_date&quot;: &quot;2022-01-07&quot;, &quot;pango&quot;: &quot;BA.1.1&quot;, &quot;partial&quot;: &quot;BA.1.1&quot;, &quot;unaliased&quot;: &quot;B.1.1.529.1.1&quot;, &quot;parent&quot;: &quot;BA.1&quot; }, &quot;parent&quot;: &quot;BA.1&quot; }, &quot;2&quot;: { &quot;id&quot;: &quot;BA.1.1.1&quot;, &quot;data&quot;: { &quot;designation_date&quot;: &quot;2022-02-23&quot;, &quot;pango&quot;: &quot;BA.1.1.1&quot;, &quot;partial&quot;: &quot;BA.1.1.1&quot;, &quot;unaliased&quot;: &quot;B.1.1.529.1.1.1&quot;, &quot;parent&quot;: &quot;BA.1.1&quot; }, &quot;parent&quot;: &quot;BA.1.1&quot; }, &quot;3&quot;: { &quot;id&quot;: &quot;BA.1.1.1.1&quot;, &quot;data&quot;: { &quot;designation_date&quot;: &quot;2022-02-23&quot;, &quot;pango&quot;: &quot;BC.1&quot;, &quot;partial&quot;: &quot;BA.1.1.1.1&quot;, &quot;unaliased&quot;: &quot;B.1.1.529.1.1.1.1&quot;, &quot;parent&quot;: &quot;BA.1.1.1&quot; }, &quot;parent&quot;: &quot;BA.1.1.1&quot; }, &quot;4&quot;: { 
&quot;id&quot;: &quot;BA.1.1.1.2&quot;, &quot;data&quot;: { &quot;designation_date&quot;: &quot;2022-06-13&quot;, &quot;pango&quot;: &quot;BC.2&quot;, &quot;partial&quot;: &quot;BA.1.1.1.2&quot;, &quot;unaliased&quot;: &quot;B.1.1.529.1.1.1.2&quot;, &quot;parent&quot;: &quot;BA.1.1.1&quot; }, &quot;parent&quot;: &quot;BA.1.1.1&quot; }, &quot;5&quot;: { &quot;id&quot;: &quot;BA.1.1.2&quot;, &quot;data&quot;: { &quot;designation_date&quot;: &quot;2022-02-23&quot;, &quot;pango&quot;: &quot;BA.1.1.2&quot;, &quot;partial&quot;: &quot;BA.1.1.2&quot;, &quot;unaliased&quot;: &quot;B.1.1.529.1.1.2&quot;, &quot;parent&quot;: &quot;BA.1.1&quot; }, &quot;parent&quot;: &quot;BA.1.1&quot; }, </code></pre> <p>Again, this is how the data SHOULD be if it was in the correct heirachy. However, it isn't <a href="https://i.sstatic.net/LnV84.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LnV84.png" alt="enter image description here" /></a></p> <h1>Data</h1> <p>I discard the Github issues for now</p> <pre><code>&quot;PANGO Lineage&quot; &quot;Partial-aliased PANGO Lineage&quot; &quot;Unaliased PANGO Lineage&quot; &quot;Designation Date&quot; &quot;GitHub Issue&quot; B.1.1.529 BA B.1.1.529 2021-11-24 #343 BA.1 BA.1 B.1.1.529.1 2021-12-07 #361 BA.1.1 BA.1.1 B.1.1.529.1.1 2022-01-07 #360 BA.1.1.1 BA.1.1.1 B.1.1.529.1.1.1 2022-02-23 BC.1 BA.1.1.1.1 B.1.1.529.1.1.1.1 2022-02-23 BC.2 BA.1.1.1.2 B.1.1.529.1.1.1.2 2022-06-13 #570 BA.1.1.2 BA.1.1.2 B.1.1.529.1.1.2 2022-02-23 </code></pre>
<python><python-3.x><d3.js><tree><hierarchy>
2022-12-22 14:04:17
1
4,887
Christopher Rucinski
74,889,555
11,252,809
Query an instrumentedattribute by its length in SQLAlchemy
<p>The way the database is setup is like this. I'm using SQLModel, but underneath it's SQLAlchemy.</p> <pre><code>class Paper(SQLModel, table=True): id: Optional[int] = Field(default=None, primary_key=True) similar: List[&quot;PaperSimilarLink&quot;] = Relationship( link_model=PaperSimilarLink, back_populates=&quot;similar&quot;, sa_relationship_kwargs=dict( primaryjoin=&quot;Paper.id==PaperSimilarLink.paper_id&quot;, secondaryjoin=&quot;Paper.id==PaperSimilarLink.similar_id&quot;, ), ) class PaperSimilarLink(SQLModel, table=True): paper_id: int = Field(default=None, foreign_key=&quot;paper.id&quot;, primary_key=True) similar_id: int = Field(default=None, foreign_key=&quot;paper.id&quot;, primary_key=True) paper: &quot;Paper&quot; = Relationship( back_populates=&quot;similar&quot;, sa_relationship_kwargs=dict( foreign_keys=&quot;[PaperSimilarLink.paper_id]&quot;, ), ) similar: &quot;Paper&quot; = Relationship( back_populates=&quot;similar&quot;, sa_relationship_kwargs=dict( foreign_keys=&quot;[PaperSimilarLink.similar_id]&quot;, ), ) similarity: Optional[float] </code></pre> <p>The query I am looking for is to look for all the <code>Paper</code> objects that have less than x number of PaperSimilarLinks located at Paper.similar. I've tried this:</p> <p><code>papers = session.exec(select(Paper).where(Paper.similar.count() &lt;= 11)).all()</code></p> <p>It errors me out saying <code>AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with Paper.similar has an attribute 'count'</code></p> <p>I figured this was a very simple query, but it's illuding me! How would I do this?</p>
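<p>The fallback I am considering is dropping down to an explicit join/group-by instead of going through the relationship. An untested sketch:</p>
<pre><code>from sqlalchemy import func
from sqlmodel import select

stmt = (
    select(Paper)
    .outerjoin(PaperSimilarLink, PaperSimilarLink.paper_id == Paper.id)
    .group_by(Paper.id)
    .having(func.count(PaperSimilarLink.similar_id) &lt;= 11)
)
papers = session.exec(stmt).all()
</code></pre>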
<python><sql><sqlalchemy><sqlmodel>
2022-12-22 13:59:06
0
565
phil0s0pher
74,889,515
6,245,473
Write to CSV the last non-null row from pandas dataframe?
<p>This writes all records, including null PbRatios. I would like to write the last non-null record only. When I add <code>df[df.asOfDate == df.asOfDate.max()].to_csv</code>, it gets the last record, which is always null.</p> <pre><code>import pandas as pd from yahooquery import Ticker symbols = ['AAPL','GOOG','MSFT','NVDA'] header = [&quot;asOfDate&quot;,&quot;PbRatio&quot;] for tick in symbols: faang = Ticker(tick) faang.valuation_measures df = faang.valuation_measures try: for column_name in header : if column_name not in df.columns: df.loc[:,column_name ] = None df.to_csv('output.csv', mode='a', index=True, header=False, columns=header) except AttributeError: continue </code></pre> <p>Current output:</p> <p><a href="https://i.sstatic.net/LXy09.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LXy09.png" alt="enter image description here" /></a></p> <p>Desired output:</p> <p><a href="https://i.sstatic.net/xgaak.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xgaak.png" alt="enter image description here" /></a></p>
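<p>The closest I can picture is filtering the nulls out before taking the max date. A sketch that would sit inside the existing try block:</p>
<pre><code>valid = df[df['PbRatio'].notna()]
if not valid.empty:
    last_row = valid[valid.asOfDate == valid.asOfDate.max()]
    last_row.to_csv('output.csv', mode='a', index=True, header=False, columns=header)
</code></pre>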
<python><pandas><dataframe><csv><null>
2022-12-22 13:56:26
2
311
HTMLHelpMe
74,889,490
4,576,519
A faster Hessian vector product in PyTorch
<p>I need to take a Hessian vector product of a loss w.r.t. model parameters a large number of times. It seems that there is no efficient way to do this and a for loop is always required, resulting in a large number of independent <code>autograd.grad</code> calls. My current implementation is given below, it is representative of my use case. Do note in the real case the collection of vectors <code>v</code> are not all known beforehand.</p> <pre class="lang-py prettyprint-override"><code>import torch import time # Create model model = torch.nn.Sequential(torch.nn.Linear(1, 500), torch.nn.Tanh(), torch.nn.Linear(500, 1)) num_param = sum(p.numel() for p in model.parameters()) # Evaluate some loss on a random dataset x = torch.rand((10000,1)) y = torch.rand((10000,1)) y_hat = model(x) loss = ((y_hat - y)**2).mean() # Calculate Jacobian of loss w.r.t. model parameters J = torch.autograd.grad(loss, list(model.parameters()), create_graph=True) J = torch.cat([e.flatten() for e in J]) # flatten # Calculate Hessian vector product start_time = time.time() for i in range(10): v = torch.rand(num_param) HVP = torch.autograd.grad(J, list(model.parameters()), v, retain_graph=True) print('Time per HVP: ', (time.time() - start_time)/10) </code></pre> <p>Which takes around 0.05 s per Hessian vector product on my machine. <strong>Is there a way to speed this up?</strong> Especially considering that the Hessian itself does not change in between calls.</p>
<python><performance><pytorch><autograd><hessian>
2022-12-22 13:54:02
0
6,829
Thomas Wagenaar
74,889,280
20,174,226
Combine and fill a Pandas Dataframe with the single row of another
<p>If I have two dataframes:</p> <h3>df1:</h3> <pre><code>df1 = pd.DataFrame({'A':[10,20,15,30,45], 'B':[17,33,23,10,12]}) A B 0 10 17 1 20 33 2 15 23 3 30 10 4 45 12 </code></pre> <br> <br> <h3>df2:</h3> <pre><code>df2 = pd.DataFrame({'C':['cat'], 'D':['dog'], 'E':['emu'], 'F':['frog'], 'G':['goat'], 'H':['horse'], 'I':['iguana']}) C D E F G H I 0 cat dog emu frog goat horse iguana </code></pre> <br> <br> <p>How do I combine the two dataframes and fill <code>df1</code> whereby each row is a replicate of <code>df2</code> ?</p> <p>Here is what I have so far. The code works as intended, but if I were to have hundreds of columns, then I would anticipate there would be a much easier way than my current method:</p> <h3>Current Code:</h3> <pre><code>df1 = df1.assign(C = lambda x: df2.C[0], D = lambda x: df2.D[0], E = lambda x: df2.E[0], F = lambda x: df2.F[0], G = lambda x: df2.G[0], H = lambda x: df2.H[0], I = lambda x: df2.I[0]) </code></pre> <h3>Expected output:</h3> <pre><code> A B C D E F G H I 0 10 17 cat dog emu frog goat horse iguana 1 20 33 cat dog emu frog goat horse iguana 2 15 23 cat dog emu frog goat horse iguana 3 30 10 cat dog emu frog goat horse iguana 4 45 12 cat dog emu frog goat horse iguana </code></pre>
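<p>To be explicit about what &quot;easier&quot; could look like, something along these lines is what I am hoping exists (untested sketches):</p>
<pre><code># pandas &gt;= 1.2: pair every row of df1 with the single row of df2
out = df1.merge(df2, how='cross')

# or broadcast the single row as scalars
out = df1.assign(**df2.iloc[0])
</code></pre>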
<python><pandas><dataframe><assign>
2022-12-22 13:34:36
2
4,125
ScottC
74,889,152
19,003,861
Django - How to use delete() in ManyToMany relationships to only delete a single relationship
<p>I have a model <code>Voucher</code> that can be allocated to several <code>users</code>.</p>
<p>I used an <code>M2M</code> relationship for it.</p>
<p>In the template, I want the possibility to delete the voucher allocated to the logged-in user, and for that user only (not all relationships).</p>
<p>The problem I have is that the current code deletes the entire <code>Voucher</code> for all users, instead of only for the single user requesting &quot;delete&quot;.</p>
<p>The alternative would obviously be to simply base the <code>Voucher</code> model on a ForeignKey, but something tells me I can probably do it with an M2M in the views.</p>
<p>Is there a way to restrict my delete function to the current user? In the example below, I tried to filter based on <code>request.user</code>, which is not working. Looking at the data inside the model, user IDs are listed. Is that not what <code>request.user</code> does?</p>
<p><strong>models</strong></p>
<pre><code>class Voucher(models.Model):
    user = models.ManyToManyField(User, blank=True)
</code></pre>
<p><strong>views</strong></p>
<pre><code>def delete_voucher(request, voucher_id):
    voucher = Voucher.objects.filter(pk=voucher_id).filter(user=request.user)
    voucher.delete()
    return redirect('account')
</code></pre>
<p><strong>template</strong></p>
<pre><code>&lt;a class=&quot;button3 btn-block mybtn tx-tfm&quot; href=&quot;{% url 'delete-voucher' voucher.id %}&quot;&gt;Delete&lt;/a&gt;
</code></pre>
<p><strong>url</strong></p>
<pre><code>path('delete_voucher/&lt;voucher_id&gt;', views.delete_voucher, name='delete-voucher'),
</code></pre>
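<p>A sketch of one way to detach only the logged-in user instead of deleting the whole object, assuming the goal is to keep the <code>Voucher</code> for the other users (<code>get_object_or_404</code> is used here only for safety and is not required):</p>
<pre><code>from django.shortcuts import get_object_or_404, redirect

def delete_voucher(request, voucher_id):
    # only fetch the voucher if it is actually allocated to the current user
    voucher = get_object_or_404(Voucher, pk=voucher_id, user=request.user)
    # remove just the M2M link to this user; the Voucher row itself survives
    voucher.user.remove(request.user)
    return redirect('account')
</code></pre>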
<python><django><django-models><django-views>
2022-12-22 13:23:40
1
415
PhilM
74,889,074
17,148,496
How to calculate SSR in python
<p>I've trained a linear regression model:</p>
<pre><code>reg = linear_model.LinearRegression()
reg.fit(X, Y)
</code></pre>
<p>And I want to calculate the three sums of squares: SST, SSR and SSRES (a.k.a. SSE).</p>
<p>I've calculated the SSE like this (X is 4 predictive features):</p>
<pre><code>y_pred = reg.predict(X)
df = pd.DataFrame({'Actual': Y, 'Predicted':y_pred})
print('SS_Res : '+ str(np.sum(np.square(df['Actual']-df['Predicted']))))
</code></pre>
<p>How do I calculate the SSR (sum of squares regression)? It's supposed to be the sum of the squared differences between the values predicted by the model and the mean of the dependent variable, but how do I express this in code?</p>
<p>The SST would be easy as soon as I get the SSR, because SST = SSR + SSE.</p>
<h4>EDIT</h4>
<p>Something like this maybe? I'd be glad if anyone could confirm that I'm right:</p>
<pre><code>y_pred = reg.predict(X)
y_mean = np.mean(Y)
df = pd.DataFrame({'Actual': Y, 'Predicted':y_pred, 'mean':y_mean})
print('SS_Res : '+ str(np.sum(np.square(df['Actual']-df['Predicted']))))
print('SS_R : '+ str(np.sum(np.square(df['Predicted']-df['mean']))))
</code></pre>
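<p>For reference, a minimal sketch that computes all three sums of squares directly with NumPy. It assumes <code>Y</code> and <code>y_pred</code> are 1-D arrays of the same length, and note that the identity SST = SSR + SSE only holds exactly for a linear model fitted with an intercept:</p>
<pre><code>import numpy as np

y_pred = reg.predict(X)
y_mean = np.mean(Y)

ss_res = np.sum((Y - y_pred) ** 2)       # SSE / SS_res (residual)
ss_reg = np.sum((y_pred - y_mean) ** 2)  # SSR (regression / explained)
ss_tot = np.sum((Y - y_mean) ** 2)       # SST (total)

print(ss_res, ss_reg, ss_tot, ss_res + ss_reg)  # the last two should match
</code></pre>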
<python><machine-learning><regression><linear-regression>
2022-12-22 13:16:45
0
375
Kev
74,889,073
12,895,094
Why is a numpy array (skimage object) inside a list not updated when iterated over in a for loop?
<p>I'm trying to do some operations on skimage objects. They are fundamentally numpy arrays, but I ran into this weird behavior: if I put them in a list and iterate through the list, the operation isn't reflected on the array/image.</p>
<p>Here is an example. I'm trying to convert the images to float type using the img_as_float() function:</p>
<pre><code>import skimage as sk
from skimage.util import img_as_ubyte, img_as_float
import matplotlib.pyplot as plt
import numpy as np

a = np.random.randn(100,100,3).astype(np.uint8)
b = np.random.randn(100,100,3).astype(np.uint8)

a = img_as_float(a)

l = [a , b]

for ar in l:
    ar = img_as_float(ar)
    print(ar.dtype)

for ar in l:
    print(ar.dtype)
</code></pre>
<p>I get</p>
<pre><code>float64
float64
float64
uint8
</code></pre>
<p>The function itself works, but inside the for loop it doesn't, or at least it didn't update the object b as I wanted.</p>
<p>To be clear: eventually, I want a loop/function to change a and b.</p>
<p>Any clues and suggestions?</p>
<p>Thanks</p>
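<p>A sketch of the usual fix: <code>ar = img_as_float(ar)</code> only rebinds the loop variable to a new array, it does not touch the list, so the list has to be updated explicitly (variable names follow the snippet above):</p>
<pre><code># rebuild the list with converted copies
l = [img_as_float(ar) for ar in l]

# or, if the list must be updated in place, assign back by index
for i, ar in enumerate(l):
    l[i] = img_as_float(ar)
</code></pre>
<p>Note that either way the original <code>a</code> and <code>b</code> variables still point at the old arrays: <code>img_as_float</code> returns a new array rather than modifying its input, so the converted images have to be taken from the list (or reassigned to <code>a</code> and <code>b</code>) afterwards.</p>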
<python><numpy><scikit-image>
2022-12-22 13:16:43
1
372
fenixnano
74,889,001
774,575
Format string to alternate between fixed and scientific form, in order to show n digits at most
<p>I'm having difficulties mastering the <code>g</code> <a href="https://docs.python.org/2/library/stdtypes.html#string-formatting-operations" rel="nofollow noreferrer">format for floats</a>. I'm trying to find a format which shows at most 2 digits after the decimal point. This requires switching from <code>f</code> to <code>e</code> according to the magnitude.</p>
<p>My constraints are:</p>
<ul>
<li>No change to configuration files</li>
<li>No non-significant zeros</li>
<li>No extra spaces (I've used width in the example below, but this is just for the question).</li>
<li>If possible, replace the default <code>1.99e00</code> by <code>1.99</code>.</li>
</ul>
<p>There is <code>g</code> intended for that, but I can't find the correct combination of width and precision. I'm not skilled enough to know if there is a solution.</p>
<hr />
<p>The first line shows the numbers to be printed, the second line is what I'm trying to get, and the last is my best attempt:</p>
<pre><code>  0.01234560   0.12345600   1.23456000  12.34560000 123.45600000 1234.56000000
    1.23e-02     1.23e-01         1.23        12.35     1.23e+02      1.23e+03
       0.012         0.12          1.2           12      1.2e+02       1.2e+03
</code></pre>
<p>The code:</p>
<pre><code>numbers = [1.23456 * 10**k for k in range(-2,4)]
print(*[f'{x:12.8f}' for x in numbers], sep=' ')

# Desired formatting
for x in numbers:
    if x &lt; 1 or x &gt; 99:
        print(f'{x:12.2e}', end=' ')
    else:
        print(f'{x:12.2f}', end=' ')
print()

# Best approximation
for x in numbers:
    print(f'{x:12.2g}', end=' ')
</code></pre>
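<p>A minimal helper sketch that reproduces the desired line by switching between fixed and scientific notation based on magnitude. The thresholds 1 and 100 are taken from the example output and are an assumption, not a property of the <code>g</code> format:</p>
<pre><code>def fmt2(x, lo=1.0, hi=100.0):
    # fixed notation with 2 decimals inside [lo, hi), scientific otherwise
    return f'{x:.2e}' if (abs(x) &lt; lo or abs(x) &gt;= hi) else f'{x:.2f}'

numbers = [1.23456 * 10**k for k in range(-2, 4)]
print(' '.join(fmt2(x) for x in numbers))
# 1.23e-02 1.23e-01 1.23 12.35 1.23e+02 1.23e+03
</code></pre>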
<python><formatting>
2022-12-22 13:11:08
2
7,768
mins
74,888,929
12,760,550
Assign row as primary depending on the combination of columns pandas dataframe
<p>I have the following dataframe:</p>
<pre><code>ID      Contract Type   Company
10000   Employee        Fake
10000   Contingent      Fake
10000   Employee        Fake
10001   Non-Worker      Fake5
10002   Employee        Fake4
10002   Employee        Fake4
10002   Employee        Fake4
10003   Contingent      Fake3
10003   Employee        Fake3
10003   Employee        Fake4
10003   Employee        Fake4
</code></pre>
<p>I need to create a column named &quot;Primary Contract&quot; that is set to &quot;Yes&quot; for the first occurrence of each combination of the ID, Contract Type and Company columns, and to &quot;No&quot; for all the others. As you can see, we may have duplicates in these columns.</p>
<p>The expected result would be this:</p>
<pre><code>ID      Contract Type   Company   Primary Contract
10000   Employee        Fake      Yes
10000   Contingent      Fake      Yes
10000   Employee        Fake      No
10001   Non-Worker      Fake5     Yes
10002   Employee        Fake4     Yes
10002   Employee        Fake4     No
10002   Employee        Fake4     No
10003   Contingent      Fake3     Yes
10003   Employee        Fake3     Yes
10003   Employee        Fake4     Yes
10003   Employee        Fake4     No
</code></pre>
<p>What is the best way to achieve it?</p>
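<p>A sketch of one way to do this with <code>duplicated</code>, which marks every repeat of an ID / Contract Type / Company combination after its first occurrence (column names as in the sample):</p>
<pre><code>import numpy as np

cols = ['ID', 'Contract Type', 'Company']
df['Primary Contract'] = np.where(df.duplicated(subset=cols), 'No', 'Yes')
</code></pre>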
<python><pandas><dataframe><filter>
2022-12-22 13:05:08
1
619
Paulo Cortez
74,888,836
19,079,397
How to read R dataframe as a Pandas dataframe?
<p>I am executing R code in Python using the <code>rpy2</code> library, and I was able to obtain the data frame I wanted. The data frame has a geometry column. Now I am trying to read the R data frame as a pandas data frame using <code>ro.conversion.rpy2py</code>, but it gives an empty data frame, as shown below.</p>
<p>How do I convert an R data frame with a geometry column into a pandas / GeoPandas data frame?</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>from rpy2 import robjects
import rpy2.robjects as ro

robjects.r('''
    library(osrm)
    iso &lt;- osrmIsochrone(loc = c(12.0878, 55.6419), breaks = seq(from = 0, to = 60, length.out = 2),
                         res = 50)
    print(iso)''')
</code></pre>
<p>Output:</p>
<pre><code>Simple feature collection with 1 feature and 3 fields
Geometry type: MULTIPOLYGON
Dimension:     XY
Bounding box:  xmin: 11.10805 ymin: 55.02879 xmax: 13.14378 ymax: 56.08983
Geodetic CRS:  WGS 84
  id isomin isomax                       geometry
1  1      0     60 MULTIPOLYGON (((12.54979 56...
</code></pre>
<p>Converting the R data frame to a Python data frame:</p>
<pre class="lang-py prettyprint-override"><code>r_mat = robjects.globalenv['iso']
pd_dt = ro.conversion.rpy2py(r_mat)
pd_dt
</code></pre>
<p>Output:</p>
<pre><code>R/rpy2 DataFrame (1 x 4)
id  isomin  isomax  geometry
...     ...     ...      ...
</code></pre>
<p><a href="https://i.sstatic.net/OnL4f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OnL4f.png" alt="enter image description here" /></a></p>
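<p>A sketch of one possible route, assuming the R <code>sf</code> package and the Python <code>geopandas</code> package are available: serialise the geometry to WKT on the R side, convert the plain columns with <code>pandas2ri</code>, then rebuild the geometry in Python. The column names (<code>geom_wkt</code>, <code>iso_df</code>) are illustrative:</p>
<pre><code>import geopandas as gpd
from rpy2 import robjects
from rpy2.robjects import pandas2ri
from rpy2.robjects.conversion import localconverter

robjects.r('''
    library(sf)
    iso$geom_wkt &lt;- st_as_text(st_geometry(iso))   # geometry as WKT text
    iso_df &lt;- st_drop_geometry(iso)                # plain data.frame part
''')

with localconverter(robjects.default_converter + pandas2ri.converter):
    pd_dt = robjects.conversion.rpy2py(robjects.globalenv['iso_df'])

gdf = gpd.GeoDataFrame(
    pd_dt,
    geometry=gpd.GeoSeries.from_wkt(pd_dt['geom_wkt']),
    crs='EPSG:4326',   # matches the WGS 84 CRS reported by sf above
)
</code></pre>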
<python><r><pandas><geometry><rpy2>
2022-12-22 12:55:34
0
615
data en
74,888,671
11,162,983
How to convert trained models (ResNet) into inference models in PyTorch?
<p>I trained the model and then saved it as shown in the figure (right side), and I would like to convert it as shown in the figure (left side).</p>
<p>I found a conversion <a href="https://github.com/thohemp/6DRepNet/blob/master/sixdrepnet/convert.py" rel="nofollow noreferrer">script</a> for <a href="https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py" rel="nofollow noreferrer">RepVGG</a> written in Python.</p>
<p>I would like to know how I can do the same conversion for a ResNet model that I trained with this <a href="https://github.com/natanielruiz/deep-head-pose/blob/master/code/train_hopenet.py" rel="nofollow noreferrer">code</a>.</p>
<p><a href="https://i.sstatic.net/IrrYA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IrrYA.png" alt="enter image description here" /></a></p>
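<p>As a very rough sketch of the usual PyTorch export-for-inference workflow (the checkpoint path, architecture and head size below are placeholders — the hopenet model in the linked code is a custom ResNet-based network, not a stock torchvision ResNet):</p>
<pre><code>import torch
import torchvision

# rebuild the same architecture that was trained (placeholder: a stock ResNet-50)
model = torchvision.models.resnet50(num_classes=66)   # 66 is a placeholder head size
model.load_state_dict(torch.load('trained_model.pkl', map_location='cpu'))
model.eval()                                           # switch to inference mode

# optionally freeze the graph for deployment
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save('resnet_inference.pt')
</code></pre>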
<python><pytorch><resnet>
2022-12-22 12:42:08
0
987
Redhwan
74,888,577
13,235,421
Import dynamically classes in Pyinstaller
<p>I have a structure like this:</p>
<pre><code>src
β”‚   main.py
β”‚
└───modules
β”‚   __init__.py
β”‚   module1.py
β”‚   module2.py
β”‚   module3.py
</code></pre>
<p>I want to get all the classes in the <em>modules</em> folder dynamically. Therefore, I edited the <code>__init__.py</code> file like this:</p>
<pre><code>import inspect
import pkgutil

__all__ = []

print(__path__)
for loader, name, is_pkg in pkgutil.walk_packages(__path__):
    module = loader.find_module(name).load_module(name)

    for name, value in inspect.getmembers(module):
        if name.startswith('__'):
            continue

        globals()[name] = value
        __all__.append(name)
</code></pre>
<p>Is there a solution to dynamically include all Python classes from the <em>modules</em> directory in the <code>__init__.py</code> file when running the executable?</p>
<p>I add all my classes from the <em>modules</em> folder to <code>globals()</code> to use them in <em>main.py</em> -&gt; it works pretty well when run directly with Python.</p>
<p>Now, if I execute the script built with PyInstaller, it does not work because <code>__path__</code> is <em>bundled_app/dist/main/modules</em> and the parent folder does not contain the <em>modules</em> directory.</p>
<p>I solved this issue by adding the <em>modules</em> folder as <em>data_files</em> in the <strong>spec file</strong>:</p>
<pre><code>data_files = [
    ('src/modules', 'modules'),
]
</code></pre>
<p>But this solution forces me to ship all the Python source files used in <em>modules</em>.</p>
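<p>One possible alternative, sketched under the assumption that the submodules can be imported by name: tell PyInstaller about them with a hook so they are bundled as importable modules rather than as data files. <code>collect_submodules</code> is an existing PyInstaller helper; the hook file name and location are conventions:</p>
<pre><code># hook-modules.py  (e.g. passed to PyInstaller via --additional-hooks-dir)
from PyInstaller.utils.hooks import collect_submodules

hiddenimports = collect_submodules('modules')
</code></pre>
<p>Inside the frozen app, depending on the PyInstaller version, <code>pkgutil.walk_packages</code> may not enumerate modules bundled in the archive, so importing them by name with <code>importlib.import_module('modules.module1')</code> (from a list generated at build time) is a common fallback.</p>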
<python><pyinstaller>
2022-12-22 12:31:49
1
377
tCot
74,888,515
9,668,218
How to map a column in PySpark DataFrame and avoid getting Null values?
<p>I have a PySpark DataFrame and I want to map the values of a column.</p>
<p>Sample dataset:</p>
<pre><code>data = [(1, 'N'), \
    (2, 'N'), \
    (3, 'C'), \
    (4, 'S'), \
    (5, 'North'), \
    (6, 'Central'), \
    (7, 'Central'), \
    (8, 'South')
]

columns = [&quot;ID&quot;, &quot;City&quot;]
df = spark.createDataFrame(data = data, schema = columns)
</code></pre>
<p><a href="https://i.sstatic.net/FWL4Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FWL4Y.png" alt="enter image description here" /></a></p>
<p>The mapping dictionary is:</p>
<pre><code>{'N': 'North', 'C': 'Central', 'S': 'South'}
</code></pre>
<p>And I use the following code:</p>
<pre><code>from pyspark.sql import functions as F
from itertools import chain

mapping_dict = {'N': 'North', 'C': 'Central', 'S': 'South'}
mapping_expr = F.create_map([F.lit(x) for x in chain(*mapping_dict.items())])

df_new = df.withColumn('City_New', mapping_expr[df['City']])
</code></pre>
<p>And the results are:</p>
<p><a href="https://i.sstatic.net/186Ql.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/186Ql.png" alt="enter image description here" /></a></p>
<p>As you can see, I get null values for the rows whose values are not included in the mapping dictionary. To solve this, I can define the mapping dictionary as:</p>
<pre><code>{'N': 'North', 'C': 'Central', 'S': 'South', \
 'North': 'North', 'Central': 'Central', 'South': 'South'}
</code></pre>
<p>However, if there are many unique values in the dataset, it is hard to define such a mapping dictionary.</p>
<p>Is there a better way to do this?</p>
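<p>A sketch of one way to keep the original value whenever the key is missing from the map, using <code>coalesce</code> so unmapped cities fall back to the <code>City</code> column itself:</p>
<pre><code>from pyspark.sql import functions as F

df_new = df.withColumn(
    'City_New',
    F.coalesce(mapping_expr[df['City']], df['City']),
)
</code></pre>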
<python><dataframe><pyspark><null><mapping>
2022-12-22 12:26:35
1
1,033
Mohammad
74,888,483
309,812
Efficient and performant way to calculate the percentage-change columns
<p>I am using <code>yfinance</code> via pandas datareader to download multi-year data for multiple symbols, and am trying to calculate 'MTDChg', 'YTDChg' and so on. I figure this is one of the slowest parts of the runtime.</p>
<p>Here is the snippet of the code, where I have reservations about how I pick the end of the previous period vis-&agrave;-vis the availability of that date in the index itself. It is a DataFrame with multiple columns.</p>
<p>I am curious and trying to figure out if there is a better way around this. Using <code>asfreq</code> looks appealing, but I am afraid that I will not be able to use the actual reference for starting or ending periods, for which the data may or may not exist in the index. I am thinking of using an <code>applymap</code>, but I am not really sure how to go about writing a functional version of this, or whether that would be better in terms of performance.</p>
<p>Any ideas or suggestions on how to go about this?</p>
<pre><code>import yfinance as yf
import datetime as dt
from pandas_datareader import data as pdr

yf.pdr_override()
y_symbols = ['GOOG', 'MSFT', 'TSLA']
price_feed = pdr.get_data_yahoo(y_symbols, start = dt.datetime(2020,1,1), end = dt.datetime(2022,12,1), interval = &quot;1d&quot;)

# pivot_calculations is defined elsewhere (not shown); it has ('Close', symbol) columns like price_feed
for dt in price_feed.index:
    dt_str = dt.strftime(&quot;%Y-%m-%d&quot;)
    current_month_str = f&quot;{dt.year}-{dt.month}&quot;
    previous_month_str = f&quot;{dt.year}-{dt.month - 1}&quot;
    current_year_str = f&quot;{dt.year}&quot;
    previous_year_str = f&quot;{dt.year - 1}&quot;
    if previous_month_str in price_feed.index:
        previous_month_last_day = price_feed.loc[previous_month_str].index[-1].strftime(&quot;%Y-%m-%d&quot;)
    else:
        previous_month_last_day = price_feed.loc[current_month_str].index[0].strftime(&quot;%Y-%m-%d&quot;)
    if previous_year_str in price_feed.index:
        previous_year_last_day = price_feed.loc[previous_year_str].index[-1].strftime(&quot;%Y-%m-%d&quot;)
    else:
        previous_year_last_day = price_feed.loc[current_year_str].index[0].strftime(&quot;%Y-%m-%d&quot;)
    if dt.month == 1 or dt.month == 2 or dt.month == 3:
        previous_qtr_str = f&quot;{dt.year - 1}-12&quot;
        current_qtr_str = f&quot;{dt.year}-01&quot;
    elif dt.month == 4 or dt.month == 5 or dt.month == 6:
        previous_qtr_str = f&quot;{dt.year}-03&quot;
        current_qtr_str = f&quot;{dt.year}-04&quot;
    elif dt.month == 7 or dt.month == 8 or dt.month == 9:
        previous_qtr_str = f&quot;{dt.year}-06&quot;
        current_qtr_str = f&quot;{dt.year}-07&quot;
    else:
        previous_qtr_str = f&quot;{dt.year}-09&quot;
        current_qtr_str = f&quot;{dt.year}-10&quot;
    if previous_qtr_str in price_feed.index:
        #print(&quot;Previous quarter string is present in price feed for &quot;, dt_str)
        previous_qtr_last_day = price_feed.loc[previous_qtr_str].index[-1].strftime(&quot;%Y-%m-%d&quot;)
        #print(&quot;Last quarter last day is&quot;, previous_qtr_last_day)
    elif current_qtr_str in price_feed.index:
        previous_qtr_last_day = price_feed.loc[current_qtr_str].index[0].strftime(&quot;%Y-%m-%d&quot;)
        #print(&quot;Previous quarter is not present in price feed&quot;)
        #print(&quot;Last quarter last day is&quot;, previous_qtr_last_day)
    else:
        previous_qtr_last_day = price_feed.loc[current_month_str].index[0].strftime(&quot;%Y-%m-%d&quot;)
        #print(&quot;Previous quarter string is NOT present in price feed&quot;)
        #print(&quot;Last quarter last day is&quot;, previous_qtr_last_day)
    #print(dt.day, current_month_str, previous_month_last_day)
    for symbol in y_symbols:
        #print(symbol, dt.day, previous_month_last_day, &quot;&lt;---&gt;&quot;, pivot_calculations.loc[dt, ('Close', symbol)], pivot_calculations.loc[previous_month_last_day, ('Close', symbol)])
        # calculate the mtd performance values
        mtd_perf = (pivot_calculations.loc[dt, ('Close', symbol)] - pivot_calculations.loc[previous_month_last_day, ('Close', symbol)]) / pivot_calculations.loc[previous_month_last_day, ('Close', symbol)] * 100
        pivot_calculations.loc[dt_str, ('MTDChg', symbol)] = round(mtd_perf, 2)
        # calculate the qtd performance values
        qtd_perf = (pivot_calculations.loc[dt, ('Close', symbol)] - pivot_calculations.loc[previous_qtr_last_day, ('Close', symbol)]) / pivot_calculations.loc[previous_qtr_last_day, ('Close', symbol)] * 100
        pivot_calculations.loc[dt_str, ('QTDChg', symbol)] = round(qtd_perf, 2)
        # calculate the ytd performance values
        ytd_perf = (pivot_calculations.loc[dt, ('Close', symbol)] - pivot_calculations.loc[previous_year_last_day, ('Close', symbol)]) / pivot_calculations.loc[previous_year_last_day, ('Close', symbol)] * 100
        pivot_calculations.loc[dt_str, ('YTDChg', symbol)] = round(ytd_perf, 2)
</code></pre>
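<p>A sketch of a vectorised alternative, assuming the closes live in a DataFrame with a DatetimeIndex like <code>price_feed</code>: group the closes by calendar period, take each period's last close, shift it by one period and broadcast it back to the daily rows. Note this simplifies the edge handling in the loop above &mdash; the very first month/quarter/year comes out as NaN instead of falling back to the first available day:</p>
<pre><code>import pandas as pd

def pct_change_vs_prev_period(close, freq):
    # % change of each day's close vs. the last close of the previous period ('M', 'Q' or 'Y')
    day_periods = close.index.to_period(freq)
    period_last = close.groupby(day_periods).last()   # last close inside each period
    prev_last = period_last.shift(1)                   # previous period's last close
    prev_aligned = prev_last.reindex(day_periods)      # broadcast back to the daily rows
    prev_aligned.index = close.index
    return ((close - prev_aligned) / prev_aligned * 100).round(2)

close = price_feed['Close']                            # columns are the symbols
mtd_chg = pct_change_vs_prev_period(close, 'M')
qtd_chg = pct_change_vs_prev_period(close, 'Q')
ytd_chg = pct_change_vs_prev_period(close, 'Y')
</code></pre>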
<python><pandas><dataframe><multi-index>
2022-12-22 12:22:18
1
736
Nikhil Mulley