Dataset schema (column: type, observed range):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30
79,413,507
1,230,724
Access `self` in classmethod when instance (self) calls classmethod
<p>Is it possible to access the object instance in a Python @classmethod annotated function when the call occurs via the instance itself rather than the class?</p> <pre><code>class Foo(object): def __init__(self, text): self.text = text @classmethod def bar(cls): return None print(Foo.bar()) foo = Foo('foo') # If possible, I'd like to return &quot;foo&quot; print(foo.bar()) </code></pre> <p>I'm working around limitations of a library which calls a class method via the instance (rather than the class) and was hoping I could somehow work around the fact that I can only access the class in <code>bar(cls)</code> (I'd like to access the instance so I can reach <code>.text</code>). I don't think that's possible, but I thought I'd ask (I couldn't unearth anything on the internet either for such a niche request).</p>
<python><inheritance><class-method>
2025-02-05 02:25:59
1
8,252
orange
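For the classmethod question above: `classmethod.__get__` always binds the class and discards the instance, so `cls` inside `bar` can never see `foo`. One commonly used pattern, sketched below under the assumption that the decorator on `bar` may be replaced (this is not from the original post), is a small descriptor that binds the instance when the call goes through an instance and the class otherwise.

```python
class class_or_instance_method:
    """Descriptor that binds the instance if present, otherwise the class."""

    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        first = obj if obj is not None else objtype

        def bound(*args, **kwargs):
            return self.func(first, *args, **kwargs)

        return bound


class Foo:
    def __init__(self, text):
        self.text = text

    @class_or_instance_method
    def bar(cls_or_self):
        # Called on an instance: cls_or_self is that instance, so .text is reachable.
        return getattr(cls_or_self, "text", None)


print(Foo.bar())          # None (called on the class)
print(Foo("foo").bar())   # foo  (called on the instance)
```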
79,413,347
16,706,763
Migrate from LLMChain to pipe operator
<p>I was following an <a href="https://www.analyticsvidhya.com/blog/2023/10/a-comprehensive-guide-to-using-chains-in-langchain/" rel="nofollow noreferrer">old tutorial</a> about chaining in Langchain. With it, I was writing some demo chains of my own, such as:</p> <pre class="lang-py prettyprint-override"><code>import json from langchain_core.prompts import ChatPromptTemplate from langchain.schema.runnable import RunnableMap from langchain.schema.output_parser import StrOutputParser from langchain_openai import ChatOpenAI from langchain.chains import SequentialChain from langchain.chains.llm import LLMChain api_key=&quot;sk-YOUR_OPENAI_API_KEY&quot; llm = ChatOpenAI( model=&quot;gpt-4o&quot;, temperature=0, max_tokens=None, timeout=None, max_retries=2, seed=42, api_key=api_key) output_parser = StrOutputParser() prompt_candidates = ChatPromptTemplate.from_template( &quot;&quot;&quot;A trivia game has asked to which contry does the town of '{town}' belongs to, and the options are: {country_options} Only return the correct option chosen based on your knowledge, nothing more&quot;&quot;&quot; ) prompt_finalists = ChatPromptTemplate.from_template( &quot;&quot;&quot;Your task is to build OUTPUTWORD, follow these instructions: 1. Get CAPITAL CITY: It is the capital city of {country} 2. Get INITIAL LETTER: It is the initial letter of the CAPITAL CITY 3. Get OUTPUTWORD: Make a word starting with INITIAL LETTER and related with {subject} Return the result in a JSON object with key `output` and OUTPUTWORD its correspondent output&quot;&quot;&quot; ) # -------------------- CURRENT FUNCTIONAL SOLUTION -------------------- # Chains definition candidates_chain = LLMChain(llm=llm, prompt=prompt_candidates, output_key=&quot;country&quot;) finalists_chain = LLMChain( llm=llm.bind(response_format={&quot;type&quot;: &quot;json_object&quot;}), prompt=prompt_finalists, output_key=&quot;finalists&quot; ) # Chaining final_chain = SequentialChain( chains=[candidates_chain, finalists_chain], input_variables=[&quot;town&quot;, &quot;country_options&quot;, &quot;subject&quot;], output_variables=[&quot;finalists&quot;], verbose=False ) result=final_chain.invoke( { &quot;town&quot;: &quot;Puembo&quot;, &quot;country_options&quot;: [&quot;Ukraine&quot;, &quot;Ecuador&quot;, &quot;Uzbekistan&quot;], &quot;subject&quot;: &quot;Biology&quot; } )[&quot;finalists&quot;] print(result) </code></pre> <p>However, I got the following warning:</p> <pre class="lang-bash prettyprint-override"><code>C:\Users\david\Desktop\dummy\test.py:44: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 1.0. Use :meth:`~RunnableSequence, e.g., `prompt | llm`` instead. 
candidates_chain = LLMChain(llm=llm, prompt=prompt_candidates, output_key=&quot;country&quot;) </code></pre> <p>Indeed, I was reading the docs, which <a href="https://python.langchain.com/docs/how_to/sequence/#the-pipe-operator-" rel="nofollow noreferrer">ask you to use the pipe &quot;|&quot; operator</a>; however the examples provided there are very simple, and usually involve a prompt and a llm, which are very straightforward (and are even provided in the same warning message); however I could not figure out how to adapt the pipe operator in my own chain.</p> <p>I was thinking of something like:</p> <pre class="lang-py prettyprint-override"><code>from langchain_core.output_parsers import StrOutputParser chain_a = prompt_candidates | llm | StrOutputParser() chain_b = prompt_finalists | llm | StrOutputParser() composed_chain = chain_a | chain_b output_chain=composed_chain.invoke( { &quot;career&quot;: &quot;Artificial Intelligence&quot;, &quot;research_list&quot;: &quot;\n&quot;.join(research_col) } ) </code></pre> <p>But this gets me:</p> <pre class="lang-bash prettyprint-override"><code>TypeError: Expected mapping type as input to ChatPromptTemplate. Received &lt;class 'str'&gt;. </code></pre> <p>I have tried several stuff, but nothing functional. What am I doing wrong?</p>
<python><langchain>
2025-02-05 00:13:25
1
879
David Espinosa
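For the LangChain question above: the pipe composition fails because `chain_a` hands a plain string to `prompt_finalists`, which expects a mapping with `country` and `subject` keys. Below is a sketch of the usual LCEL wiring (not from the original post, untested here): a dict of runnables is coerced into a `RunnableParallel`, so the first chain fills in `country` while `subject` is forwarded from the original input. The prompt and `llm` names are the ones defined in the question.

```python
from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser

# chain_a produces the country name as a plain string.
chain_a = prompt_candidates | llm | StrOutputParser()

# Build the mapping that prompt_finalists expects ({country}, {subject}):
# the dict is coerced into a RunnableParallel, so chain_a runs on the original
# input while "subject" is simply forwarded from it.
composed_chain = (
    {
        "country": chain_a,
        "subject": itemgetter("subject"),
    }
    | prompt_finalists
    | llm.bind(response_format={"type": "json_object"})
    | StrOutputParser()
)

result = composed_chain.invoke(
    {
        "town": "Puembo",
        "country_options": ["Ukraine", "Ecuador", "Uzbekistan"],
        "subject": "Biology",
    }
)
print(result)
```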
79,413,345
6,630,397
Jupyter interactive window in VSCode attached to dev container is printing a wrong path for __file__
<p>I have an official Python docker container (<code>python:3.11-slim-bookworm</code>) running with a vanilla bind mount on my host, defined as follow in a Compose file:</p> <pre><code>services: app: ... volumes: - ./src:/app </code></pre> <p>I'm using &quot;Dev Containers: Attach to Running Container...&quot; from my 1st VSCode windows opened at the root of my project to actually &quot;log into&quot; the running container of my project (which has been started with <code>docker compose up -d</code>) in order to benefit from the correct version of Python that I need.</p> <p>I also have both the Python and Jupyter extension installed inside the container through the 2nd VScode window that has been opened by the &quot;Dev Containers: Attach to Running Container...&quot;</p> <p>Now, always in this 2nd VSCode window, from the Python interactive tab, I can see this strange behavior:</p> <pre class="lang-py prettyprint-override"><code>import os # This first print is fine and reflecting the actual path inside the container: print(&quot;Current working directory in Jupyter:&quot;, os.getcwd()) # Current working directory in Jupyter: /app # This second print is getting me crazy because there is no such intermediary src/ folder # inside the container: print(&quot;My script folder path in Jupyter:&quot;, os.path.abspath(__file__)) # My script folder path in Jupyter: /app/src/mymodule/my_script.py </code></pre> <p>I also don't understand why <code>__file__</code> is focused on my_script.py here, because I didn't run any cell from this file yet. Anyway...</p> <p>My folder structure from inside the container is as follow:</p> <pre class="lang-bash prettyprint-override"><code>user@app-container:/app$ tree -L 2 . ├── __init__.py ├── mymodule │ ├── __init__.py │ └── my_script.py └── app.py </code></pre> <p>Can anybody explain why is VSCode seeing some <code>/app/src/</code> folder inside the container?<br /> The only place in my whole code base where the word &quot;src&quot; is, is in the <code>volume</code> section of my Compose file as described above and it's all about a path on my localhost, not inside the container.<br /> Also, from the bash terminal inside VSCode 2nd window, there is no such <code>/app/src/</code> folder, and if I try to <code>os.chdir('/app/src/')</code> in the interactive window, it obviously fails on a <code>FileNotFoundError: [Errno 2] No such file or directory: '/app/src/'</code></p> <h4>Versioning</h4> <p>Python kernel in the container: 3.11.11<br /> Docker: 27.5.1<br /> VSCode: 1.96.4 with:</p> <ul> <li>Jupyter extension: 2024.11.0</li> <li>Python extension: 2024.22.2</li> <li>Dev Containers: 0.394.0</li> </ul>
<python><visual-studio-code><docker-compose><dockerfile><jupyter>
2025-02-05 00:11:06
0
8,371
swiss_knight
79,413,251
23,192,403
PyTorch's seq2seq tutorial decoder
<p>I am learning through PyTorch's seq2seq tutorial: <a href="https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html" rel="nofollow noreferrer">https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html</a></p> <p>I have a question about the decoder</p> <pre><code>class DecoderRNN(nn.Module): def __init__(self, hidden_size, output_size): super(DecoderRNN, self).__init__() self.embedding = nn.Embedding(output_size, hidden_size) self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True) self.out = nn.Linear(hidden_size, output_size) def forward(self, encoder_outputs, encoder_hidden, target_tensor=None): batch_size = encoder_outputs.size(0) decoder_input = torch.empty(batch_size, 1, dtype=torch.long, device=device).fill_(SOS_token) decoder_hidden = encoder_hidden decoder_outputs = [] for i in range(MAX_LENGTH): decoder_output, decoder_hidden = self.forward_step(decoder_input, decoder_hidden) decoder_outputs.append(decoder_output) if target_tensor is not None: # Teacher forcing: Feed the target as the next input decoder_input = target_tensor[:, i].unsqueeze(1) # Teacher forcing else: # Without teacher forcing: use its own predictions as the next input _, topi = decoder_output.topk(1) decoder_input = topi.squeeze(-1).detach() # detach from history as input decoder_outputs = torch.cat(decoder_outputs, dim=1) decoder_outputs = F.log_softmax(decoder_outputs, dim=-1) return decoder_outputs, decoder_hidden, None # We return `None` for consistency in the training loop </code></pre> <p>Why is it that <code>if target_tensor is not None</code>:</p> <pre><code>decoder_input = target_tensor[:, i].unsqueeze(1) </code></pre> <p>but <code>if target_tensor is None</code>:</p> <pre><code>_, topi = decoder_output.topk(1) decoder_input = topi.squeeze(-1).detach() </code></pre> <p>specifically, isn't the shape of decoder_input different in both cases?</p> <p>I feel like in the first case, the shape of decoder_input is a 2D tensor but 1D in the second case.</p>
<python><machine-learning><pytorch><recurrent-neural-network><seq2seq>
2025-02-04 23:01:44
1
313
RoomTemperature
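For the seq2seq decoder question above, the shapes can be checked in isolation (this check is not part of the tutorial): the per-step output of GRU plus Linear is `(batch, 1, vocab)`, `topk(1)` returns indices of shape `(batch, 1, 1)`, and `squeeze(-1)` drops only the last axis, so both branches produce a 2-D `(batch, 1)` tensor.

```python
import torch

batch_size, vocab_size, max_len = 4, 10, 7

# One decoding step's output, as produced by GRU + Linear: (batch, 1, vocab)
decoder_output = torch.randn(batch_size, 1, vocab_size)

# Branch without teacher forcing
_, topi = decoder_output.topk(1)            # shape (4, 1, 1)
decoder_input_pred = topi.squeeze(-1).detach()
print(decoder_input_pred.shape)             # torch.Size([4, 1])

# Branch with teacher forcing
target_tensor = torch.randint(0, vocab_size, (batch_size, max_len))
decoder_input_tf = target_tensor[:, 0].unsqueeze(1)
print(decoder_input_tf.shape)               # torch.Size([4, 1])
```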
79,413,114
1,721,431
Why do I have this compile error installing numpy in a venv on Cygwin?
<p>I am running python 3.9 in cygwin on Windows 10. The python is cygwin's python. I tried &quot;pip install numpy&quot; from a newly created venv. However it was not able to find a compatible wheel and instead started to build numpy. Unfortunately, it died compiling several .c files on account of sys/select.h missing. FWIW, chatgpt thinks this is because the build system (Ninja?) thinks that its still trying to build for cygwin instead of natively (even though I am using ming32-gcc). Below is a sample of one of the errors</p> <p>...</p> <pre><code> [194/535] Generating numpy/_core/_multiarray_tests.cpython-39-x86_64-cygwin.dll.p/_multiarray_tests.c [195/535] Compiling C object numpy/_core/libnpymath.a.p/meson-generated_ieee754.c.o FAILED: numpy/_core/libnpymath.a.p/meson-generated_ieee754.c.o /usr/bin/x86_64-w64-mingw32-gcc -Inumpy/_core/libnpymath.a.p -Inumpy/_core -I../numpy/_core -Inumpy/_core/include -I../numpy/_core/include -I../numpy/_core/src/npymath -I../numpy/_core/src/common -I/usr/include/python3.9 -I/tmp/pip-install-2nk4smdu/numpy_d0c3008da8224ebc9f1bede0e4cba273/.mesonpy-y0j8jkqq/meson_cpu -fdiagnostics-color=always -DNDEBUG -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -std=c11 -O3 -fno-strict-aliasing -msse -msse2 -msse3 -DNPY_HAVE_SSE2 -DNPY_HAVE_SSE -DNPY_HAVE_SSE3 -MD -MQ numpy/_core/libnpymath.a.p/meson-generated_ieee754.c.o -MF numpy/_core/libnpymath.a.p/meson-generated_ieee754.c.o.d -o numpy/_core/libnpymath.a.p/meson-generated_ieee754.c.o -c numpy/_core/libnpymath.a.p/ieee754.c In file included from /usr/include/python3.9/Python.h:50, from ../numpy/_core/src/npymath/npy_math_common.h:4, from ../numpy/_core/src/npymath/ieee754.c.src:7: /usr/include/python3.9/pyport.h:230:10: fatal error: sys/select.h: No such file or directory 230 | #include &lt;sys/select.h&gt; | ^~~~~~~~~~~~~~ compilation terminated. </code></pre> <p>Any ideas what the problem is?</p> <p>What's bizarre is that at the top level (outside the venv) I have numpy already installed, although I cannot recall how it got there - cygwin setup shows that I didn't install it from cygwin, and a recent attempt to upgrade numpy lead to the same error as above.</p> <p><strong>UPDATE</strong>: I appreciate the answers below. The problem is that cygwin pkg manager shows that I already have python39-numpy installed (I am not sure what python3-numpy is for, I didn't install it). I am using python 3.9. I ran into this problem because i was trying to install gymnasium. Gymnasium is not available in Cygwin so I have to use pip to install it. There's a wheel for it, but again instead of installing the wheel, pip in my venv started trying to build the package and in the process started trying to build numpy even though it's already installed!). There are other python packages such as torch and hyperopt that I am going to need too and they are not in cygwin either.</p> <p>Anyway,after 15 years of usage, I have sadly given up on cygwin and moved over to WSL, which works surprisingly smoothly, including easy access to my windows file system (about the only flaky aspect is the internet access but that could be more of a problem caused by Symantec Endpoint's firewall)</p>
<python><numpy><cygwin>
2025-02-04 21:46:54
3
976
Motorhead
79,412,896
3,143,269
Using a pointer to fill a ctypes structure
<p>I need to create a ctypes struct using a pointer. Example code:</p> <pre><code>class MessageDecoder: def __init__(self, raw_data : bytes): self.raw_data = raw_data self.writable_raw_data = None self.datagram = None self.tag = None def decode(self): c_array = cast(self.raw_data, POINTER(c_ubyte * len(self.raw_data)))[0] data_pointer = pointer(c_array) self.datagram = Datagram.from_buffer(data_pointer) </code></pre> <p>I get the c_array and I can get the pointer. However, trying to create a Datagram from the pointer fails, while creating one from c_array works, e.g. Datagram.from_buffer(c_array). Datagram looks like:</p> <pre><code>class Datagram(BigEndianStructure): _pack_ = 1 _fields_ = [ (&quot;MagicNumber&quot;, c_uint), (&quot;VersionMinor&quot;, c_uint16), (&quot;VersionMajor&quot;, c_uint16), (&quot;SequenceNumber&quot;, c_uint16), (&quot;SeqNumberHigh&quot;, c_uint16), (&quot;DataSize&quot;, c_uint32), (&quot;Attribute&quot;, GenericAttribute) ] </code></pre> <p>Why does creating it from the pointer fail?</p>
<python><ctypes>
2025-02-04 20:02:54
1
1,124
AeroClassics
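For the ctypes question above, a likely explanation (stated as an assumption, not from the original post): `from_buffer()` uses the buffer of the object it is handed, and a ctypes pointer object's own buffer holds just the pointer value (8 bytes on a 64-bit build), not the memory it points to, so the structure is built from the wrong, too-small buffer. The sketch below sticks to documented ctypes calls and drops the `GenericAttribute` field so it runs on its own.

```python
from ctypes import (BigEndianStructure, POINTER, addressof, cast,
                    c_ubyte, c_uint, c_uint16, c_uint32, sizeof)


class Datagram(BigEndianStructure):
    _pack_ = 1
    _fields_ = [
        ("MagicNumber", c_uint),
        ("VersionMinor", c_uint16),
        ("VersionMajor", c_uint16),
        ("SequenceNumber", c_uint16),
        ("SeqNumberHigh", c_uint16),
        ("DataSize", c_uint32),
        # ("Attribute", GenericAttribute),  # omitted so the sketch is self-contained
    ]


raw_data = bytes(range(sizeof(Datagram)))

# Option 1: copy out of the (read-only) bytes object.
dg_copy = Datagram.from_buffer_copy(raw_data)

# Option 2: build a view at the address of the casted array, not at a pointer object.
# Note: dg_view shares memory owned by raw_data, so keep raw_data alive.
c_array = cast(raw_data, POINTER(c_ubyte * len(raw_data)))[0]
dg_view = Datagram.from_address(addressof(c_array))

print(hex(dg_copy.MagicNumber), dg_copy.DataSize)
print(hex(dg_view.MagicNumber), dg_view.DataSize)
```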
79,412,706
8,285,811
What's going on with the chaining in Python's string membership tests?
<p>I just realized I had a typo in my membership test and was worried this bug had been causing issues for a while. However, the code had behaved just as expected. Example:</p> <pre><code>&quot;test&quot; in &quot;testing&quot; in &quot;testing&quot; in &quot;testing&quot; </code></pre> <p><strong>This left me wondering how this membership expression works and why it's allowed.</strong></p> <p>I tried applying some order of operations logic to it with parentheses but that just breaks the expression. And the <a href="https://docs.python.org/3.8/reference/expressions.html#membership-test-details" rel="nofollow noreferrer">docs</a> don't mention anything about chaining. Is there a practical use case for this I am just not aware of?</p>
<python><string><expression><membership>
2025-02-04 18:34:28
2
6,922
Akaisteph7
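For the membership-test question above: `in` is a comparison operator, and Python chains comparisons, so `a in b in c` evaluates as `(a in b) and (b in c)` with each operand evaluated once. A minimal check (not from the original post):

```python
expr = "test" in "testing" in "testing" in "testing"
expanded = ("test" in "testing") and ("testing" in "testing") and ("testing" in "testing")
print(expr, expanded)   # True True

# Chaining short-circuits like `and`: a failing link stops the evaluation.
print("test" in "nope" in "testing")   # False, because ("test" in "nope") is already False
```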
79,412,652
3,125,823
Import Error: module does not define a "CustomJWTAuthentication" attribute/class
<p>I'm building a REST Auth API with Django/DRF.</p> <p>All of a sudden when I start working today, I'm getting this error message in my cli:</p> <pre><code>ImportError: Could not import 'users.authentication.CustomJWTAuthentication' for API setting 'DEFAULT_AUTHENTICATION_CLASSES'. ImportError: Module &quot;users.authentication&quot; does not define a &quot;CustomJWTAuthentication&quot; attribute/class. </code></pre> <p>This is my <strong>REST_FRAMEWORK config in settings.py</strong></p> <pre><code>REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': [ 'users.authentication.CustomJWTAuthentication', ], 'DEFAULT_PERMISSION_CLASSES': [ 'rest_framework.permissions.IsAuthenticated', ], ... } </code></pre> <p>This is my <strong>/users/authentication.py, which has the CustomJWTAuthentication class:</strong></p> <pre><code>from django.conf import settings from rest_framework_simplejwt.authentication import JWTAuthentication class CustomJWTAuthentication(JWTAuthentication): def authenticate(self, request): try: header = self.get_header(request) if header is None: raw_token = request.COOKIES.get(settings.AUTH_COOKIE) else: raw_token = self.get_raw_token(header) if raw_token is None: return None validated_token = self.get_validated_token(raw_token) return self.get_user(validated_token), validated_token except: return None </code></pre> <p>I'm running Python v3.12, Django v4.2, DRF v3.14 and DRF SimpleJWT v5.4 on Ubuntu 24 in a venv.</p> <p>I have no idea why this is happening all of a sudden?</p>
<python><django><django-rest-framework>
2025-02-04 18:12:55
0
1,958
user3125823
79,412,615
11,402,025
Understanding and Fixing the regex?
<p>I have a regex on my input parameter:</p> <pre><code>r&quot;^(ABC-\d{2,9})|(ABz?-\d{3})$&quot; </code></pre> <p>Ideally it should not allow parameters with <code>++</code> or <code>--</code> at the end, but it does. Why is the regex not working in this case but works in all other scenarios?</p> <pre><code>ABC-12 is a valid. ABC-123456789 is a valid. AB-123 is a valid. ABz-123 is a valid. </code></pre>
<python><regex>
2025-02-04 18:01:41
2
1,712
Tanu
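For the regex question above: alternation has lower precedence than the anchors, so the pattern reads as `^(ABC-\d{2,9})` or `(ABz?-\d{3})$`, and the first branch never requires the end of the string, which is why trailing `++` or `--` slips through after `ABC-...`. Below is a sketch of the presumably intended grouping, with both anchors applied to both branches (the intent is an assumption):

```python
import re

current = re.compile(r"^(ABC-\d{2,9})|(ABz?-\d{3})$")
grouped = re.compile(r"^(ABC-\d{2,9}|ABz?-\d{3})$")   # anchors now cover both branches

tests = ["ABC-12", "ABC-123456789", "AB-123", "ABz-123", "ABC-12++", "ABC-123--"]
for s in tests:
    print(f"{s:15} current: {bool(current.match(s))}   grouped: {bool(grouped.match(s))}")
```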
79,412,592
7,531,433
Inheriting from SQLModel with table=True raises Value error if parent has a non-trivial field type
<p>On sqlmodel 0.0.22, the following code will crash with <code>ValueError: &lt;class 'list'&gt; has no matching SQLAlchemy type</code></p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import ARRAY from sqlmodel import Field, SQLModel, String class Foo(SQLModel, table=True): id: str = Field(primary_key=True) bar: list[str] = Field(sa_type=ARRAY(String)) class Bar(Foo): pass </code></pre> <p>Removing <code>table=True</code> gets rid of the error.</p> <p>Does anyone know why and is there a workaround?</p>
<python><sqlmodel>
2025-02-04 17:53:39
0
709
tierriminator
79,412,501
13,971,251
Gmail Oauth2 - restrict the scope to only emails from a certain domain
<p>I have a Django site that uses Google Oauth2 to allow users to grant access to read and reply to their emails.</p> <pre class="lang-py prettyprint-override"><code>GOOGLE_OAUTH2_CREDENTIALS = { 'client_id': '********************', 'client_secret': '*******', 'scope': [ 'https://www.googleapis.com/auth/gmail.readonly', 'https://www.googleapis.com/auth/gmail.send' ], 'redirect_uri': 'https://www.********.com/*****/', } </code></pre> <p>However, for privacy and security purposes I want to set restrict the <code>scope</code> to only being able to read and reply to emails <em><strong>from a specific domain</strong></em>.</p> <p>Is it possible to modify the <code>scope</code> to only allow the permissions within for emails to/from a certain domain?</p>
<python><django><oauth-2.0><google-oauth><django-allauth>
2025-02-04 17:15:27
1
1,181
Kovy Jacob
79,412,451
268,581
plotly x-axis label is offset by one month
<h1>Code</h1> <pre class="lang-py prettyprint-override"><code>import pandas as pd import streamlit as st import plotly.express import plotly data = { '2024-01-31' : 1044, '2024-02-29' : 2310, '2024-03-31' : 518, '2024-04-30' : -1959, '2024-05-31' : 0, '2024-06-30' : -1010, '2024-07-31' : 1500, '2024-08-31' : -15459, '2024-09-30' : -14153, '2024-10-31' : -12604, '2024-11-30' : -5918, '2024-12-31' : -3897 } df = pd.DataFrame(data.items(), columns=['date', 'value']) fig = plotly.express.bar(data_frame=df, x='date', y='value') st.plotly_chart(fig) </code></pre> <h1>Issue</h1> <p>In this screenshot, the mouse is hovering over the rightmost bar.</p> <p>The hover label says <code>Dec 31, 2024</code>, which is expected.</p> <p>Note that the x-axis label says <code>Jan 2025</code>.</p> <p><a href="https://i.sstatic.net/YXvWs4x7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YXvWs4x7.png" alt="enter image description here" /></a></p> <h1>Workaround</h1> <p>Here's one approach to a workaround:</p> <pre><code>df['date'] = pd.to_datetime(df['date']).dt.to_period('M').dt.to_timestamp() </code></pre> <p>Code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import streamlit as st import plotly.express import plotly data = { '2024-01-31' : 1044, '2024-02-29' : 2310, '2024-03-31' : 518, '2024-04-30' : -1959, '2024-05-31' : 0, '2024-06-30' : -1010, '2024-07-31' : 1500, '2024-08-31' : -15459, '2024-09-30' : -14153, '2024-10-31' : -12604, '2024-11-30' : -5918, '2024-12-31' : -3897 } df = pd.DataFrame(data.items(), columns=['date', 'value']) df['date'] = pd.to_datetime(df['date']).dt.to_period('M').dt.to_timestamp() fig = plotly.express.bar(data_frame=df, x='date', y='value') st.plotly_chart(fig) </code></pre> <p>Now the x-axis labels match the data bar months:</p> <p><a href="https://i.sstatic.net/F3Bcw5Vo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F3Bcw5Vo.png" alt="enter image description here" /></a></p> <h1>Notes</h1> <p>The workaround here basically changes each date value from being the month-end to the first of the month.</p> <p>So, instead of this</p> <pre><code>&gt;&gt;&gt; df date value 0 2024-01-31 1044 1 2024-02-29 2310 2 2024-03-31 518 3 2024-04-30 -1959 4 2024-05-31 0 5 2024-06-30 -1010 6 2024-07-31 1500 7 2024-08-31 -15459 8 2024-09-30 -14153 9 2024-10-31 -12604 10 2024-11-30 -5918 11 2024-12-31 -3897 </code></pre> <p>we have this:</p> <pre><code>&gt;&gt;&gt; df date value 0 2024-01-01 1044 1 2024-02-01 2310 2 2024-03-01 518 3 2024-04-01 -1959 4 2024-05-01 0 5 2024-06-01 -1010 6 2024-07-01 1500 7 2024-08-01 -15459 8 2024-09-01 -14153 9 2024-10-01 -12604 10 2024-11-01 -5918 11 2024-12-01 -3897 </code></pre> <p>The change is made with this line:</p> <pre><code>df['date'] = pd.to_datetime(df['date']).dt.to_period('M').dt.to_timestamp() </code></pre> <h1>Question</h1> <p>Is this the recommended approach to resolving this issue? Or is there a more idiomatic method using plotly?</p>
<python><pandas><plotly><streamlit>
2025-02-04 16:54:16
2
9,709
dharmatech
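For the plotly question above, one further option, not from the original post and assuming a categorical x-axis is acceptable: keep the dates as strings and force the axis type to `category`, so each bar is labelled with its own date and no month-end tick rounding happens.

```python
import pandas as pd
import plotly.express

data = {'2024-01-31': 1044, '2024-02-29': 2310, '2024-03-31': 518,
        '2024-04-30': -1959, '2024-05-31': 0, '2024-06-30': -1010,
        '2024-07-31': 1500, '2024-08-31': -15459, '2024-09-30': -14153,
        '2024-10-31': -12604, '2024-11-30': -5918, '2024-12-31': -3897}

df = pd.DataFrame(data.items(), columns=['date', 'value'])

# Leave the dates as plain strings and force a categorical axis:
# one tick per bar, labelled with the exact date.
fig = plotly.express.bar(data_frame=df, x='date', y='value')
fig.update_xaxes(type='category')
fig.show()
```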
79,412,371
1,473,517
How to do FFT-based convolution with long doubles/float128s quickly and accurately
<p>On my linux system I have:</p> <pre><code>np.finfo(np.float128) info(resolution=1e-18, min=-1.189731495357231765e+4932, max=1.189731495357231765e+4932, dtype=float128) </code></pre> <p>So that is an 80-bit long double. I want to perform a convolution between two reasonably long arrays of <code>np.float128</code>s. <code>scipy.signal.convolve</code> works with <code>method='direct'</code> but gives the wrong answer for <code>method='fft'</code>. Here is a toy example:</p> <pre><code>a = np.array(['1.e+401', '1.e+000', '1.e+401', '1.e+000'], dtype=np.float128) convolve(a, a, mode='full', method='direct') array([1.e+802, 2.e+401, 2.e+802, 4.e+401, 1.e+802, 2.e+401, 1.e+000], dtype=float128) # correct convolve(a, a, mode='full', method='fft') array([1.e+802, 0.e+000, 2.e+802, 0.e+000, 1.e+802, 0.e+000, 0.e+000], dtype=float128) # wrong </code></pre> <p>I tried implementing the convolution from scratch using <code>pyfftw</code> but it still gave the wrong answer.</p> <p>My ultimate goal is to be able to do a convolution similar to the following quickly and accurately using FFTs:</p> <pre><code>a = np.array([1e401, 1e000, 1e401, 1e000] * 10000, dtype=np.float128) convolve(a, a) </code></pre> <p>How can this be done?</p>
<python><numpy><scipy><fft>
2025-02-04 16:22:09
1
21,513
Simd
79,412,357
8,445,557
python snowflake.connector & rsa private_key_file issue
<p>I meet an issue when trying to use <strong>snowflake.connector</strong> with an RSA <strong>pkcs8 key</strong> with <strong>passphrase</strong>.</p> <p>When I try this code, with this kind of RSA Key: <code>openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8</code></p> <pre class="lang-py prettyprint-override"><code>import snowflake.connector as sc private_key_file = 'config/rsa_key.p8' private_key_file_pwd = input(f&quot;Give to me the passphrase of the key {private_key_file}: &quot;) conn_params = { 'account': 'SFK_ORG-SRF_ISTANCE', 'user': 'TEST_RSA', 'private_key_file': private_key_file, 'private_key_file_pwd': private_key_file_pwd, 'warehouse': 'MY_WH', 'database': 'MY_DB', 'schema': 'MY_SCHEMA', 'role': 'USER_ROLE' } conn = sc.connect(**conn_params) </code></pre> <p>I get this error:</p> <p><code>TypeError: Password was given but private key is not encrypted.</code></p> <p>Why?</p>
<python><snowflake-cloud-data-platform><rsa>
2025-02-04 16:16:52
1
361
Stefano G.
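For the Snowflake question above, a diagnostic sketch (not from the original post) using the `cryptography` package, which the connector itself depends on: the connector's error text matches what `cryptography` raises when a password is supplied for a key that is not actually encrypted, so it is worth checking the file directly. An encrypted PKCS#8 key starts with `-----BEGIN ENCRYPTED PRIVATE KEY-----` rather than `-----BEGIN PRIVATE KEY-----`.

```python
from cryptography.hazmat.primitives import serialization

path = "config/rsa_key.p8"   # same file handed to snowflake.connector

with open(path, "rb") as f:
    pem = f.read()

try:
    serialization.load_pem_private_key(pem, password=None)
    print("Key loaded WITHOUT a password: the file is not encrypted, so either "
          "drop private_key_file_pwd or re-export the key with a passphrase.")
except TypeError:
    print("Key refused to load without a password: it really is encrypted.")
```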
79,412,334
11,940,250
Bevel in blender with bmesh after extrusion
<p>I've been trying to bevel with bmesh in Blender for a few hours. With bpy.ops this works without any problems but that's exactly what I want to avoid. Switching to edit mode makes the code very slow if you iterate through 100 objects or more.</p> <p>All my attempts so far have resulted in incorrect bevel results. I just can't manage to select the outer edges after extrusion with bmesh in order to then bevel them.</p> <pre><code>if extrude: bpy.context.view_layer.objects.active = obj obj.select_set( True ) for other_obj in bpy.context.view_layer.objects: if other_obj != obj: other_obj.select_set( False ) if bpy.context.object and bpy.context.object.type == 'MESH': obj = bpy.context.object mesh = obj.data bm = bmesh.new() bm.from_mesh( mesh ) bmesh.ops.remove_doubles( bm, verts=bm.verts, dist = merge_threshold ) vertices = [ v.co for v in obj.data.vertices if v.select ] center = sum( vertices, mathutils.Vector( ( 0.0, 0.0, 0.0 ) ) ) / len( vertices ) direction = mathutils.Vector( ( 0, 0, center.z ) ) - center direction.normalize() distance = brickheight geom_extrude = bmesh.ops.extrude_face_region(bm, geom = bm.faces) bmesh.ops.translate( bm, vec = direction * distance, verts=[ v for v in geom_extrude[&quot;geom&quot;] if isinstance( v, bmesh.types.BMVert ) ] ) #----------------------------------------- #Does anyone know what is best suited to beveling the outer edges with bmesh? #----------------------------------------- bm.to_mesh( mesh ) bm.free() #bpy.ops.object.mode_set( mode='EDIT' ) #bpy.ops.mesh.edges_select_sharp( sharpness=0.523599 ) #bpy.ops.mesh.bevel( offset=bevel_amount, segments=bevel_segments, profile = 0.5, clamp_overlap = True ) #bpy.ops.object.mode_set( mode='OBJECT' ) obj.select_set( False ) </code></pre>
<python><blender>
2025-02-04 16:09:48
0
419
Spiri
79,412,324
6,101,024
How to split a dataset into train, validation and test based on the value of another column
<p>Given a dataset of the form:</p> <pre><code> date user f1 f2 rank rank_group counts 0 09/09/2021 USER100 59.0 3599.9 1 1.0 3 1 10/09/2021 USER100 75.29 80790.0 2 1.0 3 2 11/09/2021 USER100 75.29 80790.0 3 1.0 3 1 10/09/2021 USER100 75.29 80790.0 2 2.0 3 2 11/09/2021 USER100 75.29 80790.0 3 2.0 3 3 12/09/2021 USER100 75.29 80790.0 4 2.0 3 2 11/09/2021 USER100 75.29 80790.0 3 3.0 3 3 12/09/2021 USER100 75.29 80790.0 4 3.0 3 4 13/09/2021 USER100 75.29 80790.0 5 3.0 3 3 12/09/2021 USER100 75.29 80790.0 4 4.0 3 4 13/09/2021 USER100 75.29 80790.0 5 4.0 3 5 14/09/2021 USER100 75.29 80790.0 6 4.0 3 4 13/09/2021 USER100 75.29 80790.0 5 5.0 3 5 14/09/2021 USER100 75.29 80790.0 6 5.0 3 6 15/09/2021 USER100 71.24 28809.9 7 5.0 3 5 14/09/2021 USER100 75.29 80790.0 6 6.0 3 6 15/09/2021 USER100 71.24 28809.9 7 6.0 3 7 16/09/2021 USER100 71.31 79209.9 8 6.0 3 6 15/09/2021 USER100 71.24 28809.9 7 7.0 3 7 16/09/2021 USER100 71.31 79209.9 8 7.0 3 8 17/09/2021 USER100 70.43 82809.9 9 7.0 3 7 16/09/2021 USER100 71.31 79209.9 8 8.0 3 8 17/09/2021 USER100 70.43 82809.9 9 8.0 3 9 18/09/2021 USER100 68.65 82809.9 10 8.0 3 </code></pre> <p>Given that rank_group indicates that the dataset has got 8 groups. I would like to split into a three dataset (train, validation and test) with the rate of 70%, 20%, 10% respectively. In this case, I would expect that train_set contains all the rows in corresponding rank_group=1.0,2.0,3.0,4.0,5.0. the validation_set contains all the rows in corresponding to the rank_group=6.0,7.0 and test_set contains all the rows in corresponding to rank_group=8.0.</p> <p>Approach I: using split from numpy</p> <ul> <li><code>train, validation, test = np.split(user_dataset, [int(.7*len(user_dataset)), int(.2*len(user_dataset)), int(.1*len(user_dataset))])</code></li> </ul> <p>Approach II: using ad-hoc split</p> <pre><code> `max_rank_group = user_dataset[rank_group].max() train_number = round(max_rank_group * train_rate) validation_number = round((max_rank_group-train_number) * validation_rate) test_number = round((max_rank_group-validation_number) * test_rate) print('train_number ', train_number) print('validation_number ', validation_number) print('test_number ', test_number) print(' ') train_number_frac = train_number % 1 validation_number_frac = validation_number % 1 test_number_frac = train_number % 1 current_train_rank_list = [] if train_number_frac &gt;= 0.5: current_train_rank_list = range(1, train_number+1) else: current_train_rank_list = range(1, train_number) current_validation_rank_list = [] if validation_number_frac &gt;= 0.5 and (train_number+validation_number+2) &lt; max_rank_group: current_validation_rank_list = range(train_number, train_number+validation_number+2) else: current_validation_rank_list = range(train_number, train_number+validation_number+1) current_test_rank_list = [] if test_number_frac &gt;= 0.5 and (train_number+validation_number+test_number+2)&lt;max_rank_group: current_test_rank_list = range(train_number+validation_number, train_number+validation_number+test_number+2) else: current_test_rank_list = range(train_number+validation_number, train_number+validation_number+test_number+1) #current_validation_rank_list = range(train_number, train_number+validation_number) #current_test_rank_list = range(train_number+validation_number, train_number+validation_number+test_number) print('current_train_rank_list ', current_train_rank_list) print('current_validation_rank_list ', current_validation_rank_list) print('current_test_rank_list ', current_test_rank_list) print(' 
')` </code></pre> <p>Please, any help?</p>
<python><dataframe><numpy><split>
2025-02-04 16:06:49
2
697
Carlo Allocca
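For the splitting question above, a more direct sketch (not from the original post): order the unique `rank_group` values, cut that list at the cumulative 70% and 90% marks, and select rows by group membership. With 8 groups this reproduces the expected assignment of groups 1 to 5 for training, 6 and 7 for validation, and 8 for test. `user_dataset` is the DataFrame from the question.

```python
import numpy as np
import pandas as pd


def split_by_group(df, group_col="rank_group",
                   train_rate=0.7, validation_rate=0.2):
    """Assign whole groups to train/validation/test, so no group is split across sets."""
    groups = np.sort(df[group_col].unique())
    n = len(groups)

    # Cut points on the ordered list of groups (cumulative fractions).
    train_end = int(n * train_rate)                      # 8 groups -> 5
    val_end = int(n * (train_rate + validation_rate))    # 8 groups -> 7

    train_groups = groups[:train_end]        # 1.0 .. 5.0
    val_groups = groups[train_end:val_end]   # 6.0, 7.0
    test_groups = groups[val_end:]           # 8.0

    return (df[df[group_col].isin(train_groups)],
            df[df[group_col].isin(val_groups)],
            df[df[group_col].isin(test_groups)])


# train, validation, test = split_by_group(user_dataset)
```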
79,412,281
9,182,743
Import Fitz with AWS Lambda : cannot import name '_extra' from partially initialized module 'pymupdf'
<p>I want to import fitz on AWS lambda</p> <p><a href="https://docs.aws.amazon.com/lambda/latest/dg/python-package.html" rel="nofollow noreferrer">Following these AWS instructions</a>, I have:</p> <ul> <li>pip installed PymuPdf <code>pip install --target ./package pymupdf</code></li> <li>added lambda_function.py to package folder</li> <li>Zipped the content of package: PyMuPdf pacakges &amp; the lambda_function.py</li> <li>Uploaded my_deployment_package.zip</li> <li>Tested the lambda</li> </ul> <p>(note: To verify process up until here was correct, I also pip installed <code>PyPDF2</code> to import this in the lambda)</p> <p>This is the .ZIP folder content:</p> <pre><code>my_deployment_package.zip/ | ├── bin/ #fitz pip install | ├── fitz/ #from fitz pip install | ├── pymupdf/ #from fitz pip install | ├── pymupdf-1.25.2.dist-info/ #from fitz pip install | ├── PyPDF2/ #from PyPDF2 pip install | ├── pypdf2-3.0.1.dist-info/ #from PyPDF2 pip install | └── lambda_function.py #my lambda function </code></pre> <p>After upload I tested the following lambda:</p> <pre class="lang-py prettyprint-override"><code>from PyPDF2 import PdfReader # importing this to verify .zip--&gt; upload method correct import fitz #target package def lambda_handler(event, context): # Print the event (input to the Lambda function) print(&quot;Received event:&quot;, event) # Return a simple response return { 'statusCode': 200, 'body': 'Hello from Lambda!' } </code></pre> <p>I get the following error:</p> <pre class="lang-py prettyprint-override"><code>{ &quot;errorMessage&quot;: &quot;Unable to import module 'lambda_function': cannot import name '_extra' from partially initialized module 'pymupdf' (most likely due to a circular import) (/var/task/pymupdf/__init__.py)&quot;, &quot;errorType&quot;: &quot;Runtime.ImportModuleError&quot;, &quot;requestId&quot;: &quot;&quot;, &quot;stackTrace&quot;: [] } </code></pre>
<python><aws-lambda><import><pymupdf>
2025-02-04 15:56:32
0
1,168
Leo
79,412,165
13,097,194
Trying to understand differences in weighted logistic regression outputs between Statsmodels and R's survey & srvyr packages
<p>I have <a href="https://raw.githubusercontent.com/ifstudies/carsurveydata/refs/heads/main/car_survey.csv" rel="nofollow noreferrer">a fictional weighted survey dataset</a> that contains information about respondents' car colors and their response to the question &quot;I enjoy driving fast.&quot; I would like to perform a regression to see whether the likelihood of <em>slightly</em> agreeing with this question varies based on whether or not the respondent drives a black car. (This isn't a serious analysis; I'm just introducing it for the purpose of comparing weighted regression outputs in R and Python.)</p> <p>In order to answer this question, I first ran a weighted logistic regression using R's <code>survey</code> and <code>srvyr</code> packages. This regression provided a test statistic of -1.18 for the black car color coefficient and a p value of 0.238. However, when I ran a weighted logistic regression within statsmodels, I received a test statistic of -1.35 and a p value of 0.177 for this coefficient. I would like to understand why these test statistics are different, and whether I'm making any mistakes within my setup for either test that could explain this discrepancy.</p> <p>It's also worth noting that, when I removed the weight component from each test, my test statistics and P values were almost identical. Therefore, it seems that these two implementations are treating my survey weights differently.</p> <p>Here is my code: (Note: I am running my R tests within rpy2 so that I can execute them in the same notebook as my Python code. You should be able to reproduce this output on your end within a Jupyter Notebook as long as you have rpy2 set up on your end.)</p> <pre><code>import pandas as pd import statsmodels.formula.api as smf import statsmodels.api as sm %load_ext rpy2.ipython %R library(dplyr) %R library(srvyr) %R library(survey) %R library(broom) import pandas as pd df_car_survey = pd.read_csv( 'https://raw.githubusercontent.com/ifstudies/\ carsurveydata/refs/heads/main/car_survey.csv') # Adding dummy columns for independent and dependent variables: for column in ['Car_Color', 'Enjoy_Driving_Fast']: df_car_survey = pd.concat([df_car_survey, pd.get_dummies( df_car_survey[column], dtype = 'int', prefix = column)], axis = 1) df_car_survey.columns = [column.replace(' ', '_') for column in df_car_survey.columns] # Loading DataFrame into R and creating a survey design object: # See https://tidy-survey-r.github.io/tidy-survey-book/c10-sample-designs-replicate-weights.html # for more details. 
# This book was also inval %Rpush df_car_survey %R df_sdo &lt;- df_car_survey %&gt;% as_survey_design(\ weights = 'Weight') print(&quot;Survey design object:&quot;) %R print(df_sdo) # Logistic regression in R: # (This code was based on that found in # https://tidy-survey-r.github.io/tidy-survey-book/c07-modeling.html ) %R logit_result &lt;- svyglm(\ Enjoy_Driving_Fast_Slightly_Agree ~ Car_Color_Black, \ design = df_sdo, na.action = na.omit,\ family = quasibinomial()) print(&quot;\n\n Logistic regression results from survey package within R:&quot;) %R print(tidy(logit_result)) # Logistic regression within Python: # (Based on StupidWolf's response at # https://stackoverflow.com/a/62798889/13097194 ) glm = smf.glm(&quot;Enjoy_Driving_Fast_Slightly_Agree ~ Car_Color_Black&quot;, data = df_car_survey, family = sm.families.Binomial(), freq_weights = df_car_survey['Weight']) model_result = glm.fit() print(&quot;\n\nStatsmodels logistic regression results:&quot;) print(model_result.summary()) </code></pre> <p>Output:</p> <pre><code>Survey design object: Independent Sampling design (with replacement) Called via srvyr Sampling variables: - ids: `1` - weights: Weight Data variables: - Car_Color (chr), Weight (dbl), Enjoy_Driving_Fast (chr), Count (int), Response_Sort_Map (int), Car_Color_Black (int), Car_Color_Red (int), Car_Color_White (int), Enjoy_Driving_Fast_Agree (int), Enjoy_Driving_Fast_Disagree (int), Enjoy_Driving_Fast_Slightly_Agree (int), Enjoy_Driving_Fast_Slightly_Disagree (int), Enjoy_Driving_Fast_Strongly_Agree (int), Enjoy_Driving_Fast_Strongly_Disagree (int) Logistic regression results from survey package within R: # A tibble: 2 × 5 term estimate std.error statistic p.value &lt;chr&gt; &lt;dbl&gt; &lt;dbl&gt; &lt;dbl&gt; &lt;dbl&gt; 1 (Intercept) -2.08 0.145 -14.3 8.85e-43 2 Car_Color_Black -0.293 0.248 -1.18 2.38e- 1 Statsmodels logistic regression results: Generalized Linear Model Regression Results ============================================================================================= Dep. Variable: Enjoy_Driving_Fast_Slightly_Agree No. Observations: 1059 Model: GLM Df Residuals: 1057 Model Family: Binomial Df Model: 1 Link Function: Logit Scale: 1.0000 Method: IRLS Log-Likelihood: -345.24 Date: Tue, 04 Feb 2025 Deviance: 690.48 Time: 10:07:53 Pearson chi2: 1.06e+03 No. Iterations: 5 Pseudo R-squ. (CS): 0.001763 Covariance Type: nonrobust =================================================================================== coef std err z P&gt;|z| [0.025 0.975] ----------------------------------------------------------------------------------- Intercept -2.0834 0.125 -16.693 0.000 -2.328 -1.839 Car_Color_Black -0.2931 0.217 -1.350 0.177 -0.719 0.133 =================================================================================== </code></pre>
<python><r><logistic-regression><statsmodels><survey>
2025-02-04 15:11:52
1
974
KBurchfiel
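For the weighted-regression question above, a plausible explanation offered as an assumption rather than a diagnosis: `svyglm` reports design-based (sandwich) standard errors for probability weights, while `freq_weights` in statsmodels treats the weights as frequency counts and reports model-based standard errors, so the coefficients agree but the standard errors and test statistics do not. Requesting a robust covariance brings statsmodels closer in spirit, though it need not match `svyglm` exactly; a sketch reusing the objects defined in the question:

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

glm = smf.glm(
    "Enjoy_Driving_Fast_Slightly_Agree ~ Car_Color_Black",
    data=df_car_survey,
    family=sm.families.Binomial(),
    freq_weights=df_car_survey["Weight"],
)

# HC1 requests a heteroskedasticity-robust (sandwich) covariance estimate,
# closer in spirit to the design-based errors that svyglm reports.
robust_result = glm.fit(cov_type="HC1")
print(robust_result.summary())
```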
79,412,108
5,676,198
How to specify conda env in Python Debugger in VScode
<h3>Problem</h3> <p>When I want to debug a Python file and hit the button <code>Python Debugger: Debug Python File</code> or <code>Debugin with JSON,</code> it selects the default conda environment.</p> <h3>Workaround</h3> <p>To fix that, I manually open the terminal (created by the debugger), <code>conda activate conda_name</code>, and then ask to debug again, and it works. However, it is tedious.</p> <h3>Desired</h3> <p>So, I tried to automate the conda environment activation with <code>launch.json</code>, and <code>tasks.json</code> through <code>&quot;preLaunchTask&quot;</code>. but it does not select the conda env specified. My attempt:</p> <p><code>launch.json</code></p> <pre class="lang-json prettyprint-override"><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python Debugger: Current File&quot;, &quot;type&quot;: &quot;debugpy&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${file}&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, // internalConsole -&gt; to prevent from printing &quot;cwd&quot;: &quot;${fileDirname}&quot;, // &quot;${workspaceFolder}/src&quot;, &quot;${workspaceFolder}&quot;, &quot;purpose&quot;: [ &quot;debug-in-terminal&quot; ], &quot;justMyCode&quot;: false, &quot;subProcess&quot;: true, &quot;logToFile&quot;: true, &quot;pythonArgs&quot;: [&quot;-Xfrozen_modules=off&quot;], &quot;env&quot;: { &quot;CONDA_DEFAULT_ENV&quot;: &quot;/mnt/data/mochinski/.conda/envs/sklearn&quot;, }, &quot;python&quot;: &quot;/mnt/data/mochinski/.conda/envs/sklearn/bin/python&quot;, &quot;preLaunchTask&quot;: &quot;activate-conda-env&quot;, } ] } </code></pre> <p><code>tasks.json</code>:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;version&quot;: &quot;2.0.0&quot;, &quot;tasks&quot;: [ { &quot;label&quot;: &quot;activate-conda-env&quot;, &quot;type&quot;: &quot;shell&quot;, &quot;command&quot;: &quot;conda init &amp;&amp; conda activate /mnt/data/mochinski/.conda/envs/sklearn&quot;, &quot;problemMatcher&quot;: [] } ] } </code></pre> <p>Thanks!!</p>
<python><visual-studio-code><anaconda><conda><vscode-debugger>
2025-02-04 14:54:29
1
1,061
Guilherme Parreira
79,412,014
1,866,038
Grouping function parameters with sphinx
<p>I want to group the parameters of a function so that, instead of</p> <pre><code>myfunc(from, until, col_color, col_taste) Parameters: · from First date of the period to be selected. · until Last date of the period to be selected. · col_color Whether to include the color column in the output. · col_taste. Whether to include the taste column in the output. </code></pre> <p>I want to have</p> <pre><code>myfunc(from, until, col_color, col_taste) Selection parameters: · from First date of the period to be selected. · until Last date of the period to be selected. Content parameters: · col_color Whether to include the color column in the output. · col_taste. Whether to include the taste column in the output. </code></pre> <p>Is there a means to do this?</p> <p>I tried to write the doc-string for the function as follows:</p> <pre><code>def myfunc(from, until, col_color, col_taste) &quot;&quot;&quot; Gets data from the desires period and returns them as a dataframe. Selection parameters: :param from: First date of the period to be selected. :param until: Last date of the period to be selected. Content parameters: :param col_color: Whether to include the color column in the output. :param col_taste: Whether to include the taste column in the output. </code></pre> <p>But then, he list of <em>all</em> parameters are printed twice, one below the &quot;Selection parameters&quot; text and another one after the &quot;Content parameters&quot; text.</p>
<python><python-sphinx><documentation-generation>
2025-02-04 14:18:32
1
517
Antonio Serrano
79,411,840
2,989,642
Python pip - expressing trusted host in command line call
<p>I'm attempting to fetch a dependency and install in user scope, during script execution:</p> <pre><code># PYPI is a global settings dict try: resp = subprocess.check_call( [ sys.executable, &quot;-m&quot;, &quot;pip&quot;, &quot;install&quot;, &quot;--user&quot;, &quot;--trusted-host&quot;, PYPI[&quot;trusted_host&quot;], &quot;-i&quot;, PYPI[&quot;pypi_url&quot;], package_name ], text=True ) </code></pre> <p>However, I receive:</p> <p><code>no such option: --trusted-host</code></p> <p>I am writing around Esri's ArcGIS python API, and fetching the optional package from a custom pypi endpoint. I'd like to avoid bothering end users with setting up pip.conf/pip.ini files, or create venvs or anything like that - so the question is:</p> <p>How do I make a pip call to a custom host and force trust in one command line expression?</p>
<python><pip><command-line-interface>
2025-02-04 13:19:45
0
549
auslander
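For the pip question above, one workaround that avoids option parsing entirely (not from the original post): pip also reads its long options from environment variables of the form `PIP_<OPTION_NAME>`, so the trusted host and index URL can be passed through the subprocess environment instead of the argument list, with no pip.conf needed. `PYPI` and `package_name` are the same names used in the question.

```python
import os
import subprocess
import sys

# PYPI and package_name come from the surrounding script, as in the question.
env = os.environ.copy()
env["PIP_TRUSTED_HOST"] = PYPI["trusted_host"]
env["PIP_INDEX_URL"] = PYPI["pypi_url"]

subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "--user", package_name],
    text=True,
    env=env,
)
```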
79,411,501
5,337,505
How to pass Input to a tensorflow backbone model without getting AttributeError: 'tuple' object has no attribute 'as_list'
<p>I am trying to use ResNet3D from <code>tensorflow-models</code> library but I am getting this weird error when trying to run the block</p> <pre><code>!pip install tf-models-official==2.17.0 </code></pre> <p>Tensorflow version is <code>2.18</code> on the Kaggle notebook.</p> <p>After installing <code>tf-models-official</code></p> <pre><code>from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau from tensorflow.keras.models import Model from tensorflow.keras.layers import Dense, GlobalAveragePooling3D, Input from tensorflow.keras.optimizers import AdamW import tensorflow_models as tfm def create_model(): base_model = tfm.vision.backbones.ResNet3D(model_id = 50, temporal_strides= [3,3,3,3], temporal_kernel_sizes = [(5,5,5),(5,5,5,5),(5,5,5,5,5,5),(5,5,5)], input_specs=tf.keras.layers.InputSpec(shape=(None, None, IMG_SIZE, IMG_SIZE, 3)) ) # Unfreeze the base model layers base_model.trainable = True # Create the model inputs = Input(shape=[None, None, IMG_SIZE, IMG_SIZE, 3]) x = base_model(inputs) # B,1,7,7,2048 x = GlobalAveragePooling3D(data_format=&quot;channels_last&quot;, keepdims=False)(x) x = Dense(1024, activation='relu')(x) x = tf.keras.layers.Dropout(0.3)(x) # Add dropout to prevent overfitting outputs = Dense(NUM_CLASSES, activation='softmax')(x) model = Model(inputs, outputs) # Compile the model with class weights optimizer = AdamW(learning_rate=1e-4, weight_decay=1e-5) model.compile( optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy', tf.keras.metrics.AUC()] ) return model # Create and display model model = create_model() model.summary() </code></pre> <p>When I run this, I get the error below:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-56-363271b4dda8&gt; in &lt;cell line: 39&gt;() 37 38 # Create and display model ---&gt; 39 model = create_model() 40 model.summary() &lt;ipython-input-56-363271b4dda8&gt; in create_model() 18 # Create the model 19 inputs = Input(shape=(None, None, IMG_SIZE, IMG_SIZE, 3)) ---&gt; 20 x = base_model(inputs) # B,1,7,7,2048 /usr/local/lib/python3.10/dist-packages/tf_keras/src/engine/training.py in __call__(self, *args, **kwargs) 586 layout_map_lib._map_subclass_model_variable(self, self._layout_map) 587 --&gt; 588 return super().__call__(*args, **kwargs) /usr/local/lib/python3.10/dist-packages/tf_keras/src/engine/base_layer.py in __call__(self, *args, **kwargs) 1101 training=training_mode, 1102 ): -&gt; 1103 input_spec.assert_input_compatibility( 1104 self.input_spec, inputs, self.name 1105 ) /usr/local/lib/python3.10/dist-packages/tf_keras/src/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name) 300 &quot;incompatible with the layer: &quot; 301 f&quot;expected shape={spec.shape}, &quot; --&gt; 302 f&quot;found shape={display_shape(x.shape)}&quot; 303 ) 304 /usr/local/lib/python3.10/dist-packages/tf_keras/src/engine/input_spec.py in display_shape(shape) 305 306 def display_shape(shape): --&gt; 307 return str(tuple(shape.as_list())) 308 309 AttributeError: 'tuple' object has no attribute 'as_list' </code></pre> <p>I have tried passing the input to the <code>shape</code> argument as a list, but still getting the same error.</p> <p>The error is occurring with this</p> <pre><code>!pip install tf-models-official==2.17.0 import tensorflow as tf inputs = tf.keras.Input(shape=[None, None, IMG_SIZE, IMG_SIZE, 3]) print(inputs.shape.as_list()) </code></pre> <p>Error:</p> 
<pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-39-6e88680ff7df&gt; in &lt;cell line: 2&gt;() 1 inputs = tf.keras.Input(shape=[None, None, IMG_SIZE, IMG_SIZE, 3]) ----&gt; 2 print(inputs.shape.as_list()) AttributeError: 'tuple' object has no attribute 'as_list' </code></pre>
<python><tensorflow><machine-learning><deep-learning><tf.keras>
2025-02-04 11:22:41
1
1,215
Siladittya
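For the TensorFlow question above, a plausible reading stated as an assumption: `tf-models-official` 2.17 is built against `tf_keras` (Keras 2), while `tf.keras` under TF 2.18 resolves to Keras 3, whose symbolic tensors expose a plain tuple shape without `.as_list()`, which is exactly the failing call in the traceback. One commonly suggested mitigation, assuming the `tf_keras` package is installed (it is pulled in by `tf-models-official`), is to switch TensorFlow to the legacy Keras before anything imports it:

```python
import os

# Must be set before tensorflow is imported anywhere in the process.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf  # noqa: E402  (imported after the env var on purpose)

IMG_SIZE = 224  # hypothetical value; the notebook defines its own

inputs = tf.keras.Input(shape=[None, None, IMG_SIZE, IMG_SIZE, 3])
print(inputs.shape)            # with legacy Keras this is a TensorShape,
print(inputs.shape.as_list())  # so .as_list() is available again
```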
79,411,389
815,455
Snakemake - files as input/output with timestamp
<p>I try to setup a snakefile (snakemake 7.19.1). The final output should contain a timestamp. This is a minimal example:</p> <pre class="lang-py prettyprint-override"><code>#!/bin/python print('import packages') from datetime import datetime import time now = datetime.now().strftime(&quot;%Y-%m-%d_%H-%M-%S&quot;) tarfile = now + '_stardist.tar' print(tarfile) rule all: input: {tarfile} rule main_rule: input: output: {tarfile} run: shell('touch ' + output[0]) </code></pre> <p>If I run that snakefile with <code>snakemake -s snakefile --cores 1</code> I get the following output resulting in an error message:</p> <pre><code>import packages 2025-02-04_11-33-38_stardist.tar Building DAG of jobs... Using shell: /usr/bin/bash Provided cores: 1 (use --cores to define parallelism) Rules claiming more threads will be scaled down. Job stats: job count min threads max threads --------- ------- ------------- ------------- all 1 1 1 main_rule 1 1 1 total 2 1 1 Select jobs to execute... [Tue Feb 4 11:33:38 2025] rule main_rule: output: 2025-02-04_11-33-38_stardist.tar jobid: 1 reason: Missing output files: 2025-02-04_11-33-38_stardist.tar resources: tmpdir=/tmp import packages 2025-02-04_11-33-39_stardist.tar Building DAG of jobs... Using shell: /usr/bin/bash Provided cores: 1 (use --cores to define parallelism) Rules claiming more threads will be scaled down. Select jobs to execute... Waiting at most 5 seconds for missing files. MissingOutputException in rule main_rule in file /lustre/projects/xxx/test_timestamp_minimal/snakefile, line 14: Job 1 completed successfully, but some output files are missing. Missing files after 5 seconds. This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait: 2025-02-04_11-33-38_stardist.tar Shutting down, this might take some time. Exiting because a job execution failed. Look above for error message Complete log: .snakemake/log/2025-02-04T113338.450326.snakemake.log </code></pre> <p>As it is now the name of <code>tarfile</code> is created twice and the file which is finally created does not have the same timestamp like the file which is expected as output. Can anybody please help?</p> <p><strong>Update</strong></p> <p>I created a fresh conda environment with python 3.12 and snakemake 8.27.1. I tested my code on a local Windows10 machine (miniforge) and on CentOS7 (the head node of an HPC environment). While there is no issue on my local machine, I get the same error on the HPC system.</p> <p>This is the output on the local machine (I do not put again the output of the HPC-node as it's comparable to what I've already reported):</p> <pre><code>import packages 2025-02-05_13-13-27_stardist.tar Assuming unrestricted shared filesystem usage. host: screening-pc-4 Building DAG of jobs... Provided cores: 1 (use --cores to define parallelism) Rules claiming more threads will be scaled down. Job stats: job count --------- ------- all 1 main_rule 1 total 2 Select jobs to execute... Execute 1 jobs... [Wed Feb 5 13:13:27 2025] localrule main_rule: output: 2025-02-05_13-13-27_stardist.tar jobid: 1 reason: Missing output files: 2025-02-05_13-13-27_stardist.tar resources: tmpdir=C:\Users\xxx\AppData\Local\Temp [Wed Feb 5 13:13:27 2025] Finished job 1. 1 of 2 steps (50%) done Select jobs to execute... Execute 1 jobs... 
[Wed Feb 5 13:13:27 2025] localrule all: input: 2025-02-05_13-13-27_stardist.tar jobid: 0 reason: Input files updated by another job: 2025-02-05_13-13-27_stardist.tar resources: tmpdir=C:\Users\xxx\AppData\Local\Temp [Wed Feb 5 13:13:27 2025] Finished job 0. 2 of 2 steps (100%) done Complete log: .snakemake\log\2025-02-05T131327.214796.snakemake.log </code></pre> <p>What I do not understand: In the case of the Win10 environment (where it's working): Snakemake reports the following lines just once:</p> <pre><code>import packages 2025-02-04_14-14-23_stardist.tar </code></pre> <p>while when it goes wrong, the snakemake file seems to be run twice (once for each job)? So the first execution gets a different filename due to the execution time compared to the second job.</p>
<python><snakemake>
2025-02-04 10:46:06
3
1,196
Antje Janosch
79,411,167
11,283,324
How to use the apply function to return a list to a new column in Pandas
<p>I have a Pandas dataframe:</p> <pre><code>import pandas as pd import numpy as np np.random.seed(150) df = pd.DataFrame(np.random.randint(0, 10, size=(10, 2)), columns=['A', 'B']) </code></pre> <p>I want to add a new column &quot;C&quot; whose values ​​are the combined-list of every three rows in column &quot;B&quot;. So I use the following method to achieve my needs, but this method is slow when the data is large.</p> <pre><code>&gt;&gt;&gt; df['C'] = [df['B'].iloc[i-2:i+1].tolist() if i &gt;= 2 else None for i in range(len(df))] &gt;&gt;&gt; df A B C 0 4 9 None 1 0 2 None 2 4 5 [9, 2, 5] 3 7 9 [2, 5, 9] 4 8 3 [5, 9, 3] 5 8 1 [9, 3, 1] 6 1 4 [3, 1, 4] 7 4 1 [1, 4, 1] 8 1 9 [4, 1, 9] 9 3 7 [1, 9, 7] </code></pre> <p>When I try to use the df.apply function, I get an error message:</p> <pre><code>df['C'] = df['B'].rolling(window=3).apply(lambda x: list(x), raw=False) TypeError: must be real number, not list </code></pre> <p>I remember that Pandas <code>apply</code> doesn't seem to return a list, so how do I do this? I searched the forum, but couldn't find a suitable topic about apply and return.</p>
<python><pandas><sliding-window>
2025-02-04 09:28:01
4
351
Sun Jar
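For the rolling-window question above: `rolling().apply()` must return a scalar per window, which is why returning a list raises the TypeError. A vectorised sketch (not from the original post) that builds the same column without a Python-level loop, using NumPy's `sliding_window_view` (NumPy 1.20 or newer):

```python
import numpy as np
import pandas as pd
from numpy.lib.stride_tricks import sliding_window_view

np.random.seed(150)
df = pd.DataFrame(np.random.randint(0, 10, size=(10, 2)), columns=['A', 'B'])

window = 3
windows = sliding_window_view(df['B'].to_numpy(), window)   # shape (len(df) - 2, 3)

# The first window-1 rows have no complete window, mirroring the original None rows.
df['C'] = [None] * (window - 1) + windows.tolist()
print(df)
```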
79,410,861
368,907
Locust Not Running
<p>I had coded some simple locust load testing code in python. Previously, it running without any errors but after some time i run the code, the http request is not get executed.</p> <pre><code>class WebsiteUser(HttpUser): wait_time = between(1, 5) #HTTPConnection.debuglevel = 1 #logging.basicConfig() #logging.getLogger().setLevel(logging.DEBUG) #requests_log = logging.getLogger(&quot;requests.packages.urllib3&quot;) #requests_log.setLevel(logging.DEBUG) #requests_log.propagate = True # All instances of this class will be limited to 10 concurrent connections at most. #pool_manager = PoolManager(maxsize=900000, block=True) #################################################################################################################### @events.test_start.add_listener def on_test_start(environment, **kwargs): if not isinstance(environment.runner, MasterRunner): print(&quot;Beginning test setup&quot;) else: print(&quot;Started test from Master node&quot;) @events.test_stop.add_listener def on_test_stop(environment, **kwargs): print(environment.runner.stats.total.fail_ratio) # Send test report def on_start(self): response = self.client.post( &quot;https://stg-accounts.mediaprimaplus.com.my/authorize?client_id=632354224783531&amp;redirect_uri=https%3A%2F%2Fstaging-ticket.revid.my%2Fcallback&amp;response_type=code&amp;state=%2F&quot;, {&quot;username&quot;: &quot;&quot;, &quot;password&quot;: &quot;&quot;}) print(&quot;Response status code:&quot;, response.status_code) print(&quot;Response text:&quot;, response.text) print(self.client.cookies) @task def homepage(self): self.client.get(&quot;/&quot;) @task def getEvent(self): self.client.get(&quot;event/tik-tok-award-2024-realisasikan-impianmu&quot;) @task def checkout(self): self.client.get(&quot;event/checkout&quot;) @task def checkAccount(self): self.client.get(&quot;account?tab=account&quot;) @task def orderHistory(self): self.client.get(&quot;account?tab=order&quot;) </code></pre> <p>I running on Macbook. There is no error in the Web UI dashboard, it just doesn't get executed the USERS.</p> <p>Is it relate to firewall? Please help. 
Thanks in advance.</p> <p>I get this in the Web UI log.</p> <blockquote> <p>[2025-02-04 15:28:02,602] RMG-NICHOLAS/CRITICAL/locust.runners: Unhandled exception in greenlet: &lt;Greenlet at 0x1039ea2a0: &gt; Traceback (most recent call last): File &quot;src/gevent/greenlet.py&quot;, line 908, in gevent._gevent_cgreenlet.Greenlet.run File &quot;/Users/nicholas/PycharmProjects/RevIDETicketing/.venv/lib/python3.12/site-packages/locust/runners.py&quot;, line 555, in lambda: self._start(user_count, spawn_rate, wait=wait, user_classes=user_classes) ^^^^^^^^^^^^^^^^^ File &quot;/Users/nicholas/PycharmProjects/RevIDETicketing/.venv/lib/python3.12/site-packages/locust/runners.py&quot;, line 526, in _start self.spawn_users(user_classes_spawn_count, wait) ^^^^^^^^^^^^^^^^^ File &quot;/Users/nicholas/PycharmProjects/RevIDETicketing/.venv/lib/python3.12/site-packages/locust/runners.py&quot;, line 240, in spawn_users new_users += spawn(user_class, spawn_count) ^^^^^^^^^^^^^^^^^ File &quot;/Users/nicholas/PycharmProjects/RevIDETicketing/.venv/lib/python3.12/site-packages/locust/runners.py&quot;, line 226, in spawn new_user = self.user_classes_by_nameuser_class ^^^^^^^^^^^^^^^^^ File &quot;/Users/nicholas/PycharmProjects/RevIDETicketing/.venv/lib/python3.12/site-packages/locust/user/users.py&quot;, line 261, in <strong>init</strong> raise LocustError( ^^^^^^^^^^^^^^^^^ locust.exception.LocustError: You must specify the base host. Either in the host attribute in the User class, or on the command line using the --host option.</p> </blockquote>
<python><locust>
2025-02-04 07:22:51
1
2,802
Ivan
79,410,811
14,749,391
How to implement QueueHandler and QueueListener inside flask factory
<p>I have a flask application delivered by gunicorn that spawns multiple threads and processes from itself during the request. The problem is that when using the standard app.logger, some of the children get deadlocked because of the logging module not able to release the lock. This leads to these processes staying in memory indefinitely and becomes issue after time passes.</p> <p>stack from py-spy</p> <pre><code>Thread 371832 (idle): &quot;MainThread&quot; flush (logging/__init__.py:1009) emit (logging/__init__.py:1029) emit (logging/__init__.py:1127) handle (logging/__init__.py:894) callHandlers (logging/__init__.py:1586) handle (logging/__init__.py:1524) _log (logging/__init__.py:1514) info (logging/__init__.py:1378) </code></pre> <p>As reference I used these awesome answers and questions</p> <p><a href="https://stackoverflow.com/questions/641420/how-should-i-log-while-using-multiprocessing-in-python">How should I log while using multiprocessing in Python?</a></p> <p><a href="https://stackoverflow.com/questions/47968861/does-python-logging-support-multiprocessing">Does python logging support multiprocessing?</a></p> <p>I have this as factory</p> <pre><code>import logging from logging import FileHandler, Formatter from logging.handlers import QueueHandler, QueueListener from multiprocessing import Queue from flask import Flask log_queue = Queue() def create_app(app_name=&quot;name&quot;, **kwargs): app = Flask(app_name) # Create a process-safe logging queue listener = setup_logging(app) listener.start() return app def setup_logging(app: Flask): logger = app.logger logger.handlers = [] logger.setLevel(logging.INFO) queue_handler = QueueHandler(log_queue) info_handler = FileHandler(&quot;info.log&quot;) info_handler.setFormatter(Formatter(&quot;%(asctime)s %(levelname)s %(name)s %(threadName)s : %(message)s&quot;)) crit_handler = FileHandler(&quot;critical.log&quot;) crit_handler.setLevel(logging.CRITICAL) crit_handler.setFormatter(Formatter(&quot;CRIT\t%(message)s&quot;)) logger.addHandler(queue_handler) listener = QueueListener( log_queue, info_handler, crit_handler, respect_handler_level=True ) return listener </code></pre> <p>The issue I am facing is this - each time I send a HUP to the master process to update my code and some env files I get this error</p> <pre><code>[2025-02-03 20:48:54 +0200] [51367] [INFO] Hang up: Master [2025-02-03 20:48:54 +0200] [52751] [INFO] Booting worker with pid: 52751 [2025-02-03 20:48:54 +0200] [52676] [INFO] Worker exiting (pid: 52676) [2025-02-03 20:48:54 +0200] [52673] [INFO] Worker exiting (pid: 52673) [2025-02-03 20:48:54 +0200] [52756] [INFO] Booting worker with pid: 52756 Exception in thread Thread-1: Traceback (most recent call last): File &quot;.pyenv/versions/3.7.5/lib/python3.7/threading.py&quot;, line 926, in _bootstrap_inner self.run() File &quot;.pyenv/versions/3.7.5/lib/python3.7/threading.py&quot;, line 870, in run self._target(*self._args, **self._kwargs) File &quot;.pyenv/versions/3.7.5/lib/python3.7/logging/handlers.py&quot;, line 1478, in _monitor record = self.dequeue(True) File &quot;.pyenv/versions/3.7.5/lib/python3.7/logging/handlers.py&quot;, line 1427, in dequeue return self.queue.get(block) File &quot;.pyenv/versions/3.7.5/lib/python3.7/multiprocessing/queues.py&quot;, line 94, in get res = self._recv_bytes() File &quot;.pyenv/versions/3.7.5/lib/python3.7/multiprocessing/connection.py&quot;, line 216, in recv_bytes buf = self._recv_bytes(maxlength) File 
&quot;.pyenv/versions/3.7.5/lib/python3.7/multiprocessing/connection.py&quot;, line 407, in _recv_bytes buf = self._recv(4) File &quot;.pyenv/versions/3.7.5/lib/python3.7/multiprocessing/connection.py&quot;, line 383, in _recv raise EOFError EOFError </code></pre> <p>I really am trying to understand what is happening here. My guess is that the queue is not empty at the time of worker respawn and it gets killed. How should I solve this?</p>
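<p>Not a definitive fix, but one restructuring worth trying as a sketch: create the queue inside the factory (so each gunicorn worker owns its own queue after the fork) and stop the listener on interpreter exit, so its monitor thread is not left blocked on <code>queue.get()</code> while the worker tears the queue down during a HUP/reload. The handler layout mirrors the factory above:</p> <pre class="lang-py prettyprint-override"><code>import atexit
import logging
from logging import FileHandler, Formatter
from logging.handlers import QueueHandler, QueueListener
from multiprocessing import Queue

from flask import Flask


def create_app(app_name='name', **kwargs):
    app = Flask(app_name)

    log_queue = Queue()  # created inside the worker process, not at import time

    app.logger.handlers = []
    app.logger.setLevel(logging.INFO)
    app.logger.addHandler(QueueHandler(log_queue))

    info_handler = FileHandler('info.log')
    info_handler.setFormatter(
        Formatter('%(asctime)s %(levelname)s %(name)s %(threadName)s : %(message)s')
    )

    listener = QueueListener(log_queue, info_handler, respect_handler_level=True)
    listener.start()
    atexit.register(listener.stop)  # drain and join the monitor thread before exit

    return app
</code></pre>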
<python><python-3.x><flask><python-multiprocessing><flask-restful>
2025-02-04 07:02:00
0
772
Tony
79,410,679
6,711,954
Get changed columns and values from output of a do_orm_execute event
<p>I am trying to log the columns/values changed during an insert, update, delete statement using Session events.</p> <p>I am able to print the statement with their params, however I need the column/value pair (ex. {&quot;dept&quot;: &quot;HR&quot;} or {&quot;dept&quot;: {&quot;old&quot;: &quot;Finance&quot;, &quot;new&quot;: &quot;HR&quot;}).</p> <p>Tried get_history() and inspect() but maybe I am doing it wrong. If possible, please point me to the right documentation/article as well so I can understand more about this. This is the code I have so far:</p> <pre><code>from sqlalchemy.orm import DeclarativeBase from sqlalchemy import Column, Integer, String from sqlalchemy.orm import Session from sqlalchemy import create_engine from sqlalchemy import event, inspect from sqlalchemy.orm.attributes import get_history from random import randint class Base(DeclarativeBase): pass class Employee(Base): __tablename__ = 'employee' id = Column(Integer, primary_key=True) name = Column(String, nullable=False) dept = Column(String, nullable=False) class DBAPI: def __init__(self): self.db_path = 'test.db' self.engine = create_engine( f&quot;sqlite:///{self.db_path}&quot;, echo=False, echo_pool=False ) Base.metadata.create_all(self.engine) #event.listen(Employee, &quot;after_insert&quot;, self.test) #event.listens_for(Employee, &quot;after_update&quot;, self.test_1) #event.listen(Employee, &quot;after_update&quot;, self.test_1) def add_record(self, name): with Session(self.engine) as session: session.add( Employee( name=name, steps=&quot;Finance&quot; ) ) session.commit() def update_record(self, dept): # ORMExecuteState def testing(state): # print the statement with values st = state.statement.compile( compile_kwargs={&quot;literal_binds&quot;: True} ) print(st) # print(state.is_update) # print(state.all_mappers[0].column_attrs) # mapper = state.all_mappers[0] # print([a.key for a in mapper.column_attrs]) # colattrs = [a for a in mapper.column_attrs] # print([a for a in dir(colattrs[1]) if not a.startswith('_')]) # print(colattrs[1].active_history) #print(state) #print(state.statement) #print(state.parameters) #print(state.user_defined_options) ##print(state.statement.column_descriptions) #print(state.all_mappers) #print(state.execution_options) ##print(state.load_options) #print(state.parameters) #print(state.statement.compile().params) #st = state.statement.compile( # compile_kwargs={&quot;literal_binds&quot;: True} #) #print(st) #print(type(state.statement)) #print(state.bind_arguments) #mapper = state.bind_arguments.get(&quot;mapper&quot;) #print(type(mapper)) #print(type(mapper.attrs)) #for s in mapper.attrs: # print(s) #print([ # attr for attr in dir(state.statement) # if not attr.startswith('_') #]) #print(state.statement.corresponding_column()) #for col in state.statement.exported_columns: # print(col) #insp = inspect(state.statement) #print(type(insp)) #for attr in insp.attrs: # print(attr) #print(insp) #for attr in insp.attrs: # print(attr) # hist = get_history(attr, &quot;steps&quot;) #print(state.statement.entity_description) #params = state.statement.compile().params #statement = state.statement #for param, val in params.items(): # statement = statement.replace( # param, f&quot;{ with Session(self.engine) as session: event.listen(session, &quot;do_orm_execute&quot;, testing) #event.listen(session, &quot;after_commit&quot;, testing1) session.query( Employee ).filter( Employee.name == &quot;Bob&quot; ).update( { Employee.dept: dept } ) session.commit() dbapi = DBAPI() 
dbapi.update_record(&quot;HR&quot;) </code></pre>
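<p>For comparison, a minimal sketch of the instance-level route with <code>inspect()</code> histories in a <code>before_flush</code> listener (caveat: this only sees ORM-loaded objects that are flushed, so a bulk <code>Query.update()</code> like the one above bypasses it entirely; the imports follow the code above):</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import event, inspect
from sqlalchemy.orm import Session


@event.listens_for(Session, 'before_flush')  # class-level: applies to every Session
def log_changes(session, flush_context, instances):
    for obj in session.dirty:
        state = inspect(obj)
        for attr in state.attrs:
            hist = attr.history
            if hist.has_changes():
                old = hist.deleted[0] if hist.deleted else None
                new = hist.added[0] if hist.added else None
                print({attr.key: {'old': old, 'new': new}})
</code></pre>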
<python><events><sqlalchemy><orm>
2025-02-04 05:49:13
0
828
Shine J
79,410,495
13,825,658
How to parse list of strings to pydantic settings?
<p>pydantic-settings can parse list of ints, but not list of strings</p> <p>This works:</p> <pre class="lang-py prettyprint-override"><code>class Settings(BaseSettings): my_list_of_ints: list[int] os.environ[&quot;my_list_of_ints&quot;] = &quot;[1, 2, 3]&quot; print(Settings().model_dump()) # {'my_list_of_ints': [1, 2, 3]} </code></pre> <p>But not this one</p> <pre class="lang-py prettyprint-override"><code>class Settings(BaseSettings): my_list_of_strings: list[str] os.environ[&quot;my_list_of_strings&quot;] = &quot;['a', 'b', 'c']&quot; print(Settings().model_dump()) # pydantic_settings.sources.SettingsError: error parsing value for field &quot;my_list_of_strings&quot; from source &quot;EnvSettingsSource&quot; </code></pre> <p>Why is this the case? And is there a way to make this work without writing my own custom parser?</p>
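<p>For what it's worth, a small sketch of the likely culprit: pydantic-settings decodes complex field types from the environment as JSON, and JSON strings need double quotes, so <code>&quot;[1, 2, 3]&quot;</code> happened to be valid JSON while the single-quoted string list was not:</p> <pre class="lang-py prettyprint-override"><code>import os

from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    my_list_of_strings: list[str]


# Double quotes inside the value make it valid JSON.
os.environ['my_list_of_strings'] = '[&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]'
print(Settings().model_dump())  # {'my_list_of_strings': ['a', 'b', 'c']}
</code></pre>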
<python><environment-variables><pydantic><pydantic-settings>
2025-02-04 03:09:32
0
1,368
Leonardus Chen
79,410,316
7,254,247
Transparent filesystem in pyfuse with encryption and compression
<p>I'm trying to make a simple transparent filesystem in fuse. I'm using <a href="https://www.stavros.io/posts/python-fuse-filesystem/" rel="nofollow noreferrer" title="example pass through filesystem in python">this guide</a> as a basis. This code works perfectly. I'm trying to modify it so that it compresses and then encrypts as well.</p> <p>So far I have:</p> <pre><code>from Crypto.Cipher import AES from Crypto.Util.Padding import pad, unpad AES_KEY = b&quot;thisisaverysecretkey1234&quot; AES_IV = b&quot;thisisaverysecre&quot; def encrypt_data(self, data): cipher = AES.new(AES_KEY, AES.MODE_CBC, AES_IV) return cipher.encrypt(pad(data, AES.block_size)) def decrypt_data(self, data): cipher = AES.new(AES_KEY, AES.MODE_CBC, AES_IV) return unpad(cipher.decrypt(data), AES.block_size) </code></pre> <p>And I modified the <code>read</code> and <code>write</code> methods like so:</p> <pre><code> def read(self, path, size, offset, fh): full_path = self._get_file_path(path) os.lseek(fh, offset, os.SEEK_SET) encrypted_data = os.read(fh, size) # Decrypt the data before returning it return decrypt_data(encrypted_data) def write(self, path, buf, offset, fh): full_path = self._get_file_path(path) os.lseek(fh, offset, os.SEEK_SET) # Encrypt data before writing it encrypted_data = self._encrypt_data(buf) return os.write(encrypted_data) </code></pre> <p>However, I keep getting the error:</p> <pre><code>fuse: wrote too many bytes </code></pre> <p>I can't seem to write encrypted files properly (I haven't even compressed/decompressed before applying encryption).</p> <p>This is just a side project to test out my own compression/encryption scheme, and I'm using aes and zlib as placeholders.</p>
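<p>One detail worth isolating as a sketch: &quot;fuse: wrote too many bytes&quot; appears to come from <code>write</code> reporting more bytes than the plaintext it was handed, which is exactly what returning the padded ciphertext length does. The snippet below (with <code>encrypt_data</code> reduced to a plain helper, and offsets deliberately ignored, i.e. whole-file writes only) keeps that contract; it is a simplification, not a complete solution for random-access reads and writes:</p> <pre class="lang-py prettyprint-override"><code>import os

from Crypto.Cipher import AES
from Crypto.Util.Padding import pad

AES_KEY = b'thisisaverysecretkey1234'
AES_IV = b'thisisaverysecre'


def encrypt_data(data):
    cipher = AES.new(AES_KEY, AES.MODE_CBC, AES_IV)
    return cipher.encrypt(pad(data, AES.block_size))


def write(self, path, buf, offset, fh):
    encrypted = encrypt_data(buf)
    os.lseek(fh, 0, os.SEEK_SET)  # simplistic: rewrite from the start
    os.write(fh, encrypted)
    return len(buf)  # FUSE expects the count of plaintext bytes consumed
</code></pre>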
<python><encryption><compression><fuse>
2025-02-04 00:17:39
1
772
lee
79,410,268
1,273,751
After powershell update, conda elicits error as if command were empty
<p>After my system automatically updated Powershell to 7.5.0, my conda has not been working anymore.</p> <p>It doesn't correctly initialize when the terminal is started, nor when I run <code>conda init powershell</code>.</p> <p>I already checked the path: conda root folder, and Script and condabin folders are there.</p> <p>The exact error is illustrated below</p> <p><a href="https://i.sstatic.net/7obDUE4e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7obDUE4e.png" alt="printscreen of the error" /></a></p> <p>Can I fix this without needing to reinstall Conda? How? Thank you.</p>
<python><windows><powershell><conda>
2025-02-03 23:38:55
0
2,645
Homero Esmeraldo
79,410,096
9,334,609
D-ID API: pending_url returned instead of video URL after successful POST and GET
<p>I'm encountering a <code>pending_url</code> issue when using the D-ID API. I'm making a POST request to create a talk, which returns a 201 status code and a talk ID.<br /> Immediately after, I'm making a GET request to retrieve the talk details using this ID. However, the GET request response includes a <code>pending_url</code> instead of the expected video URL.</p> <p>Here's the relevant code snippet (Flask):</p> <pre class="lang-py prettyprint-override"><code>@app.route('/video') def video(): # retrieve the value of the environment variable named DID_API_KEY from .env did_api_key = os.getenv('DID_API_KEY') if len(did_api_key) != None: print(&quot;The function load_dotenv() has loaded the DID_API_KEY correctly.&quot;) print(&quot;The value of the did_api_key is: &quot;, did_api_key) else: print(&quot;Problem loading the DID_API_KEY. Please review .env file.&quot;) return &quot;Problem loading the DID_API_KEY. Please review .env file.&quot; # retrieve the value of the environment variable named BEARER_TOKEN from .env bearer_token = os.getenv('BEARER_TOKEN') if len(bearer_token) != None: print(&quot;The function load_dotenv() has loaded the BEARER_TOKEN correctly.&quot;) print(&quot;The value of the bearer_key is: &quot;, bearer_token) else: print(&quot;Problem loading the BEARER_TOKEN. Please review .env file.&quot;) return &quot;Problem loading the BEARER_TOKEN. Please review .env file.&quot; # Create a talk POST # DID_API_KEY obtained. Use it to generate the AI Talking Avatar. import requests url = &quot;https://api.d-id.com/talks&quot; source_url = &quot;https://i.imghippo.com/files/wHD7943BS.jpg&quot; input_text = &quot;Making videos is easy with D-ID&quot; payload = { &quot;source_url&quot;: source_url, &quot;script&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;subtitles&quot;: &quot;false&quot;, &quot;provider&quot;: { &quot;type&quot;: &quot;microsoft&quot;, &quot;voice_id&quot;: &quot;Sara&quot; }, &quot;input&quot;: input_text }, &quot;config&quot;: { &quot;fluent&quot;: &quot;false&quot;, &quot;pad_audio&quot;: &quot;0.0&quot; } } headers = { &quot;accept&quot;: &quot;application/json&quot;, &quot;content-type&quot;: &quot;application/json&quot;, &quot;authorization&quot;: &quot;Bearer {0}&quot;.format(bearer_token) } response = requests.post(url, json=payload, headers=headers) if response.status_code == 402: # not enough d-id api credits # response.text = {&quot;kind&quot;:&quot;InsufficientCreditsError&quot;,&quot;description&quot;:&quot;not enough credits&quot;} 402 return '&gt;&gt;&gt; ' + response.text + ' ' + str(response.status_code) if response.status_code == 201: # got a 201 after a POST request, it's a positive sign. # It confirms that the server has successfully created the resource. 
print(response.text) print(response.status_code) &quot;&quot;&quot; { &quot;id&quot;:&quot;tlk_pCB0A1dMhZi0JfA0S8NE5&quot;, &quot;created_at&quot;:&quot;2025-02-03T20:02:39.302Z&quot;, &quot;created_by&quot;:&quot;google-oauth2|103344389677485792824&quot;, &quot;status&quot;:&quot;created&quot;, &quot;object&quot;:&quot;talk&quot; } response.status_code equals 201 &quot;&quot;&quot; # get a specific talk GET print(&quot;get a specific talk GET&quot;) # get the id of the video id_video = response.json()[&quot;id&quot;] headers = { &quot;accept&quot;: &quot;application/json&quot;, &quot;authorization&quot;: &quot;Bearer {0}&quot;.format(bearer_token) } # set url url = &quot;https://api.d-id.com/talks/{0}&quot;.format(id_video) response = requests.get(url, headers=headers) print(response.text) print(response.status_code) return response.text </code></pre> <p>The POST request is successful (201), and the response contains the talk ID. However, the subsequent GET request returns a response with a <code>pending_url</code> and the <code>status</code> is <code>started</code>, indicating that the video processing is not yet complete. The expected behavior is for the GET request to eventually return a proper video URL when the processing is finished.</p> <p>Here's an example of the GET response I'm receiving:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;user&quot;: { &quot;features&quot;: [&quot;stitch&quot;, &quot;clips:write&quot;, &quot;scene&quot;, &quot;premium-plus:skip-speaker-validation&quot;, null], &quot;stripe_plan_group&quot;: &quot;deid-trial&quot;, &quot;authorizer&quot;: &quot;bearer&quot;, &quot;org_id&quot;: null, &quot;owner_id&quot;: &quot;auth0|67a12d64c60b14766a228fda&quot;, &quot;domain&quot;: &quot;https://studio.d-id.com&quot;, &quot;id&quot;: &quot;auth0|67a12d64c60b14766a228fda&quot;, &quot;plan&quot;: &quot;deid-trial&quot;, &quot;email&quot;: &quot;newsapidid@gmail.com&quot; }, &quot;script&quot;: { &quot;length&quot;: 31, &quot;subtitles&quot;: false, &quot;type&quot;: &quot;text&quot;, &quot;provider&quot;: { &quot;type&quot;: &quot;microsoft&quot;, &quot;voice_id&quot;: &quot;Sara&quot; } }, &quot;audio_url&quot;: &quot;https://d-id-talks-prod.s3.us-west-2.amazonaws.com/auth0%7C67a12d64c60b14766a228fda/tlk_dhNTB5cDt-aBHUGoqN0jf/microsoft.wav?AWSAccessKeyId=AKIA5CUMPJBIK65W6FGA&amp;Expires=1738704589&amp;Signature=%2F3lSL1GS%2FkgK%2FH1nlLAkVYSG3rw%3D&quot;, &quot;created_at&quot;: &quot;2025-02-03T21:29:49.187Z&quot;, &quot;config&quot;: { &quot;stitch&quot;: false, &quot;pad_audio&quot;: 0, &quot;align_driver&quot;: true, &quot;sharpen&quot;: true, &quot;reduce_noise&quot;: false, &quot;auto_match&quot;: true, &quot;normalization_factor&quot;: 1, &quot;show_watermark&quot;: true, &quot;motion_factor&quot;: 1, &quot;result_format&quot;: &quot;.mp4&quot;, &quot;fluent&quot;: false, &quot;align_expand_factor&quot;: 0.3 }, &quot;source_url&quot;: &quot;https://d-id-talks-prod.s3.us-west-2.amazonaws.com/auth0%7C67a12d64c60b14766a228fda/tlk_dhNTB5cDt-aBHUGoqN0jf/source/wHD7943BS.jpg?AWSAccessKeyId=AKIA5CUMPJBIK65W6FGA&amp;Expires=1738704589&amp;Signature=xY6c9d57Xy1gOnUhAJWK7zUtWXY%3D&quot;, &quot;created_by&quot;: &quot;auth0|67a12d64c60b14766a228fda&quot;, &quot;status&quot;: &quot;started&quot;, &quot;driver_url&quot;: &quot;bank://natural/&quot;, &quot;modified_at&quot;: &quot;2025-02-03T21:29:49.476Z&quot;, &quot;user_id&quot;: &quot;auth0|67a12d64c60b14766a228fda&quot;, &quot;subtitles&quot;: false, &quot;id&quot;: 
&quot;tlk_dhNTB5cDt-aBHUGoqN0jf&quot;, &quot;duration&quot;: 2.4625, &quot;started_at&quot;: &quot;2025-02-03T21:29:49.239&quot;, &quot;pending_url&quot;: &quot;s3://d-id-talks-prod/auth0|67a12d64c60b14766a228fda/tlk_dhNTB5cDt-aBHUGoqN0jf/1738618189187.mp4&quot; } </code></pre> <p>I've checked my <code>BEARER_TOKEN</code> and <code>DID_API_KEY</code>, and they are correct. How can I correctly retrieve the final video URL after the processing is complete? Is there a polling mechanism or a webhook I should be using? What is the recommended way to handle asynchronous video processing with the D-ID API?</p>
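<p>A hedged sketch of the polling pattern, under the assumption (taken from the D-ID docs) that the talk's <code>status</code> eventually becomes <code>done</code> and the response then carries a <code>result_url</code>; until then only <code>pending_url</code> shows up:</p> <pre class="lang-py prettyprint-override"><code>import time

import requests


def wait_for_talk(talk_id, bearer_token, timeout_s=120, poll_every_s=3):
    url = 'https://api.d-id.com/talks/{0}'.format(talk_id)
    headers = {
        'accept': 'application/json',
        'authorization': 'Bearer {0}'.format(bearer_token),
    }
    deadline = time.time() + timeout_s
    while time.time() &lt; deadline:
        data = requests.get(url, headers=headers).json()
        if data.get('status') == 'done':
            return data.get('result_url')  # final downloadable video URL
        if data.get('status') == 'error':
            raise RuntimeError(data)
        time.sleep(poll_every_s)  # still 'created'/'started': keep polling
    raise TimeoutError('talk {0} not ready in time'.format(talk_id))
</code></pre>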
<python><flask>
2025-02-03 21:48:59
1
461
Ramiro
79,409,587
405,017
Performance impact of inheriting from many classes
<p>I am investigating the performance impact of a very broad inheritance setup.</p> <ol> <li>Start with 260 distinct attribute names, from <code>a0</code> through <code>z9</code>.</li> <li>Create 260 classes with 1 uniquely-named attribute each. Create one class that inherits from those 260 classes.</li> <li>Create 130 classes with 2 uniquely-named attributes each. Create one class that inherits from those 130 classes.</li> <li>Repeat for 52 classes with 5 attributes each, 26 classes with 10 attributes each, and 1 class with all 260 attributes.</li> <li>Create one instance of each of the five classes, and then measure the time to read (and add together) all 260 attributes on each.</li> </ol> <p>Average performance from 2.5M reads, interleaving in different orders.</p> <pre class="lang-none prettyprint-override"><code> From260: 2.48 From130: 1.55 From52: 1.22 From26: 1.15 AllInOne: 1.00 </code></pre> <p>These values <em>sort of</em> fit on a linear regression...but they don't. And these relationships hold true across many different runs, of various sizes and test orders.</p> <p><a href="https://i.sstatic.net/HlYcETUO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HlYcETUO.png" alt="linear relationship" /></a></p> <p>The values come closer to fitting a second-degree polynomial, or exponential fit...but again, the data does not fit so cleanly as to be obvious.</p> <p><a href="https://i.sstatic.net/TppDtNdJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TppDtNdJ.png" alt="enter image description here" /></a></p> <p>As I massively increase the number of subclasses, will the performance falloff be linear, or non-linear?</p> <hr /> <p>Here's some updated code that samples many different subclass combinations up to 2310:</p> <pre class="lang-py prettyprint-override"><code>from time import time TOTAL_ATTRS = 2310 # Create attribute names &quot;a0000&quot; through &quot;a2309&quot; attr_names = [f&quot;a{i:04d}&quot; for i in range(TOTAL_ATTRS)] # Map each attribute to a default value (1..2310) all_defaults = {name: i + 1 for i, name in enumerate(attr_names)} # The provided factors of 2310 factors = [1, 2, 3, 5, 6, 7, 10, 11, 14, 15, 21, 22, 30, 33, 35, 42, 55, 66, 70, 77, 105, 110, 154, 165, 210, 231, 330, 385, 462, 770, 1155, 2310] # Build a dictionary mapping each factor to a composite class. # For factor f, create f subclasses each with (2310 // f) attributes, # then create a composite class inheriting from all f subclasses. 
composite_classes = {} for f in factors: group_size = TOTAL_ATTRS // f subclasses = [] for i in range(f): group = attr_names[i * group_size:(i + 1) * group_size] group_defaults = {name: all_defaults[name] for name in group} subclass = type(f&quot;Sub_{f}_{i}&quot;, (object,), group_defaults) subclasses.append(subclass) composite_classes[f] = type(f&quot;From_{f}&quot;, tuple(subclasses), {}) iterations = range(0, 1_000) for n, c in composite_classes.items(): i = c() t = time() for _ in iterations: for a in attr_names: getattr(i, a) print(f&quot;Inheriting from {n} subclasses: {time()-t:.3f}s&quot;) </code></pre> <p>and the results, which seem far more linear than polynomial, but which have odd &quot;ledges&quot; in them:</p> <p><a href="https://i.sstatic.net/Jpaaolf2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jpaaolf2.png" alt="data points with linear fitting" /></a></p> <p><a href="https://i.sstatic.net/CbaP0cVr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbaP0cVr.png" alt="data points with quadratic fitting" /></a></p>
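<p>As a small exploratory probe (not part of the benchmark itself): every attribute here lives on a class, so a lookup that misses CPython's per-type attribute cache has to scan along the composite class's MRO, and that chain grows with the number of subclasses. Printing its length per variant makes the relationship visible:</p> <pre class="lang-py prettyprint-override"><code>for f, cls in composite_classes.items():
    # MRO = the composite class + its f subclasses + object
    print(f, 'subclasses, MRO length', len(cls.__mro__))
</code></pre>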
<python><performance><python-internals>
2025-02-03 17:36:44
2
304,256
Phrogz
79,409,480
28,063,240
Expand a QuerySet with all related objects
<pre class="lang-py prettyprint-override"><code>class Hobby(models.Model): name = models.TextField() class Person(models.Model): name = models.TextField() created_at = models.DateTimeField(auto_now_add=True) hobbies = models.ManyToManyField(Hobby, related_name='persons') class TShirt(models.Model): name = models.TextField() person = models.ForeignKey( Person, related_name='tshirts', on_delete=models.CASCADE, ) class Shirt(models.Model): name = models.TextField() person = models.ForeignKey( Person, related_name='shirts', on_delete=models.CASCADE, ) class Shoes(models.Model): name = models.TextField() person = models.ForeignKey( Person, related_name='shoes', on_delete=models.CASCADE, ) </code></pre> <p>Given a queryset of <code>Person</code>, e.g.</p> <pre><code>Person.objects.order_by('-created_at')[:4] </code></pre> <p>How can I make a queryset which also includes all the objects related to the <code>Person</code> objects in that queryset?</p> <p>The input QuerySet only has <code>Person</code> objects, but the output one should have <code>Hobby</code>, <code>Shoes</code>, <code>TShirt</code>, <code>Shirt</code> objects (if there are shirts/tshirts/shoes that reference any of the people in the original queryset).</p> <p>I've only been able to think of solutions that rely on knowing what the related objects are, e.g. <code>TShirt.objects.filter(person__in=person_queryset)</code>, but I would like a solution that will work for all models that reference <code>Person</code> without me having to one-by-one code each query for each referencing model.</p>
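<p>A sketch of one way to discover the referencing models generically through the model <code>_meta</code> API (this collects reverse foreign keys such as <code>tshirts</code>/<code>shirts</code>/<code>shoes</code>; a forward many-to-many like <code>hobbies</code> would need a similar extra branch):</p> <pre class="lang-py prettyprint-override"><code>def reverse_related_querysets(person_qs):
    '''Map each model with a FK to Person onto a queryset of its rows that
    point at the people in person_qs.'''
    result = {}
    for field in Person._meta.get_fields():
        if field.is_relation and field.auto_created and not field.concrete:
            related_model = field.related_model  # e.g. TShirt
            fk_name = field.field.name           # e.g. 'person'
            result[related_model] = related_model.objects.filter(
                **{f'{fk_name}__in': person_qs}
            )
    return result


querysets = reverse_related_querysets(Person.objects.order_by('-created_at')[:4])
</code></pre>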
<python><django>
2025-02-03 16:55:59
2
404
Nils
79,409,259
6,662,425
How does Hydra `_partial_` interact with seeding
<p>In the configuration management library <a href="https://hydra.cc/" rel="nofollow noreferrer">Hydra</a>, it is possible to only partially instantiate classes defined in configuration using the <a href="https://hydra.cc/docs/1.1/advanced/instantiate_objects/overview/#partial-instantiation-for-hydra-version--112" rel="nofollow noreferrer"><code>_partial_</code> keyword</a>. The library explains that this results in a <a href="https://docs.python.org/3/library/functools.html#functools.partial" rel="nofollow noreferrer"><code>functools.partial</code></a>. I wonder how this interacts with seeding. E.g. with</p> <ul> <li><a href="https://pytorch.org/docs/stable/notes/randomness.html" rel="nofollow noreferrer">pytorch <code>torch.manual_seed()</code></a></li> <li><a href="https://pytorch-lightning.readthedocs.io/en/1.7.7/api/pytorch_lightning.utilities.seed.html#pytorch_lightning.utilities.seed.seed_everything" rel="nofollow noreferrer">lightnings <code>seed_everything</code></a></li> <li>etc.</li> </ul> <p>My reasoning is, that if I use the <code>_partial_</code> keyword while specifying <em>all</em> parameters for <code>__init__</code>, then I would essentially obtain a factory which could be called after specifying the seed to do multiple runs. But this assumes that <code>_partial_</code> does not bake the seed in already. To my understanding that should not be the case. Is that correct?</p>
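<p>A small sanity check, independent of Hydra itself (it only exercises <code>functools.partial</code>, which is what <code>_partial_</code> hands back): the partial object just stores the callable and its arguments, so any randomness is drawn when the factory is called, i.e. after whatever seed is set at that point.</p> <pre class="lang-py prettyprint-override"><code>import functools
import random


def make_weights(n):
    # Stand-in for a model constructor that draws random initial values.
    return [random.random() for _ in range(n)]


factory = functools.partial(make_weights, n=3)  # nothing random happens here

random.seed(123)
run_a = factory()
random.seed(123)
run_b = factory()
assert run_a == run_b  # identical: the seed set before the call is what matters
</code></pre>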
<python><machine-learning><pytorch><pytorch-lightning><fb-hydra>
2025-02-03 15:28:56
3
1,373
Felix Benning
79,409,091
10,595,871
Problem when using azure cognitive services (.dll file not found)
<p>I have an app made with flask that is working fine on my machine. I'm trying to move it to a server in order to make in accessible to some people. I've installed all the packages and check that everything has the same version as on my local machine.</p> <p>Everything seems fine but when I try to run the app, it throws this error:</p> <pre><code>C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work warn(&quot;Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work&quot;, RuntimeWarning) Traceback (most recent call last): File &quot;&lt;frozen runpy&gt;&quot;, line 198, in _run_module_as_main File &quot;&lt;frozen runpy&gt;&quot;, line 88, in _run_code File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\flask\__main__.py&quot;, line 3, in &lt;module&gt; main() ~~~~^^ File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\flask\cli.py&quot;, line 1129, in main cli.main() ~~~~~~~~^^ File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\click\core.py&quot;, line 1082, in main rv = self.invoke(ctx) File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\click\core.py&quot;, line 1697, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\click\core.py&quot;, line 1443, in invoke return ctx.invoke(self.callback, **ctx.params) ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\click\core.py&quot;, line 788, in invoke return __callback(*args, **kwargs) File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\click\decorators.py&quot;, line 92, in new_func return ctx.invoke(f, obj, *args, **kwargs) ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\click\core.py&quot;, line 788, in invoke return __callback(*args, **kwargs) File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\flask\cli.py&quot;, line 977, in run_command raise e from None File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\flask\cli.py&quot;, line 961, in run_command app: WSGIApplication = info.load_app() # pyright: ignore ~~~~~~~~~~~~~^^ File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\flask\cli.py&quot;, line 349, in load_app app = locate_app(import_name, name) File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\flask\cli.py&quot;, line 245, in locate_app __import__(module_name) ~~~~~~~~~~^^^^^^^^^^^^^ File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\src\app.py&quot;, line 12, in &lt;module&gt; import job_manager File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\src\job_manager.py&quot;, line 5, in &lt;module&gt; import manager File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\src\manager.py&quot;, line 6, in &lt;module&gt; from audio.transcriber import recognize_from_file File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\src\audio\transcriber.py&quot;, line 4, 
in &lt;module&gt; import azure.cognitiveservices.speech as speechsdk File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\azure\cognitiveservices\speech\__init__.py&quot;, line 8, in &lt;module&gt; from .speech import * File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\azure\cognitiveservices\speech\speech.py&quot;, line 13, in &lt;module&gt; from .interop import ( _CallbackContext, _Handle, LogLevel, _c_str, _call_bool_fn, _call_hr_fn, _sdk_lib, _spx_handle, max_uint32, _call_string_function_and_free, _trace_message, _unpack_context) File &quot;C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\azure\cognitiveservices\speech\interop.py&quot;, line 20, in &lt;module&gt; _sdk_lib = load_library.LoadLibrary(lib_path) File &quot;C:\Users\Sparapan.F\AppData\Local\Programs\Python\Python313\Lib\ctypes\__init__.py&quot;, line 471, in LoadLibrary return self._dlltype(name) ~~~~~~~~~~~~~^^^^^^ File &quot;C:\Users\Sparapan.F\AppData\Local\Programs\Python\Python313\Lib\ctypes\__init__.py&quot;, line 390, in __init__ self._handle = _dlopen(self._name, mode) ~~~~~~~^^^^^^^^^^^^^^^^^^ FileNotFoundError: Could not find module 'C:\Users\Sparapan.F\Desktop\Scripts\Trascrittore\Tutto\trasc_env\Lib\site-packages\azure\cognitiveservices\speech\Microsoft.CognitiveServices.Speech.core.dll' (or one of its dependencies). Try using the full path with constructor syntax. </code></pre> <p>The missing file is actually present in the specified folder so I don't know what to do. Already tried:</p> <ul> <li>uninstall and re-install cognitive services</li> <li>check python and system versions (both 64 bit)</li> <li>restart the system</li> </ul>
<python><flask><azure-cognitive-services>
2025-02-03 14:19:54
0
691
Federicofkt
79,408,681
11,598,948
Perform a rolling operation on indices without using `with_row_index()`?
<p>I have a DataFrame like this:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({&quot;x&quot;: [1.2, 1.3, 3.4, 3.5]}) df # shape: (3, 1) # ┌─────┐ # │ a │ # │ --- │ # │ f64 │ # ╞═════╡ # │ 1.2 │ # │ 1.3 │ # │ 3.4 │ # │ 3.5 │ # └─────┘ </code></pre> <p>I would like to make a rolling aggregation using <code>.rolling()</code> so that each row uses a window [-2:1]:</p> <pre><code>shape: (4, 2) ┌─────┬───────────────────┐ │ x ┆ y │ │ --- ┆ --- │ │ f64 ┆ list[f64] │ ╞═════╪═══════════════════╡ │ 1.2 ┆ [1.2, 1.3] │ │ 1.3 ┆ [1.2, 1.3, 3.4] │ │ 3.4 ┆ [1.2, 1.3, … 3.5] │ │ 3.5 ┆ [1.3, 3.4, 3.5] │ └─────┴───────────────────┘ </code></pre> <p>So far, I managed to do this with the following code:</p> <pre class="lang-py prettyprint-override"><code>df.with_row_index(&quot;index&quot;).with_columns( y = pl.col(&quot;x&quot;).rolling(index_column = &quot;index&quot;, period = &quot;4i&quot;, offset = &quot;-3i&quot;) ).drop(&quot;index&quot;) </code></pre> <p>However this requires manually creating a column <code>index</code> and then removing it after the operation. Is there a way to achieve the same result in a single <code>with_columns()</code> call?</p>
<python><dataframe><python-polars><rolling-computation>
2025-02-03 11:37:42
2
8,865
bretauv
79,408,524
2,883,209
Writing back to a panda groupby group
<p>Good morning all</p> <p>I am trying to process a lot of data, and I need to group data, look at the group, then set a value based on the other entries in the group, but I want to set the value in a column in the full dataset. What I can't figure out is how I can use the group to write back to the main dataframe.</p> <p>So as an example, I created this data frame</p> <pre><code>import pandas as pd data = [{ &quot;class&quot;: &quot;cat&quot;, &quot;name&quot;: &quot;Fluffy&quot;, &quot;age&quot;: 3, &quot;child&quot;: &quot;Whiskers&quot;, &quot;parents_in_group&quot;: &quot;&quot; }, { &quot;class&quot;: &quot;dog&quot;, &quot;name&quot;: &quot;Spot&quot;, &quot;age&quot;: 5 }, { &quot;class&quot;: &quot;cat&quot;, &quot;name&quot;: &quot;Whiskers&quot;, &quot;age&quot;: 7 }, { &quot;class&quot;: &quot;dog&quot;, &quot;name&quot;: &quot;Rover&quot;, &quot;age&quot;: 2, &quot;child&quot;: &quot;Spot&quot; }] df = pd.DataFrame(data) df </code></pre> <p>So as an example, lets say that I want to set the parrents_in_group to a list of all the parrents in the group, easy to do</p> <pre><code>for name, group in group_by_class: mask = group[&quot;child&quot;].notna() print(&quot;This is the parrent in group&quot;) print(group[mask]) parent_name = group[mask][&quot;name&quot;].values[0] print(f&quot;This is the parent name: {parent_name}&quot;) group[&quot;parents_in_group&quot;] = parent_name print(&quot;And now we have the name set in group&quot;) print(group) </code></pre> <p>That updates the group, but not the actual data frame. So how would I go about writing this information back to the main data frame</p> <p><strong>Using the name and search</strong></p> <p>This works, but seems a bit untidy</p> <pre><code>for name, group in group_by_class: mask = group[&quot;child&quot;].notna() parent_name = group[mask][&quot;name&quot;].values[0] df.loc[df['class'] == name, 'parents_in_group'] = parent_name df </code></pre> <p><strong>Using group</strong></p> <p>How would I go about using group to set the values, rather than searching for the name that the group was created by. Or are there better ways to going about it.</p> <p>The real challenge I'm having is that I need to get the group, find some specific values in the group, then set some fields based on the data found.</p> <p>Any help of course welcome.</p>
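<p>A sketch of the index-based write-back hinted at in the question: each group keeps the original row labels, so they can be handed straight to <code>df.loc</code> instead of re-filtering by the group name (column names follow the example above):</p> <pre class="lang-py prettyprint-override"><code>for name, group in df.groupby('class'):
    mask = group['child'].notna()
    if mask.any():
        parent_name = group.loc[mask, 'name'].iloc[0]
        # group.index still refers to rows of df, so write straight back:
        df.loc[group.index, 'parents_in_group'] = parent_name
</code></pre>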
<python><pandas>
2025-02-03 10:40:38
2
1,244
vrghost
79,408,042
11,405,787
CadQuery Stack Nagivation: how to unparent "XY" plane from `transformed`-made workplane
<p>Beginner at CadQuery,</p> <p>Trying to do basic modeling.</p> <p>I want to drill multiple holes from different direction in an object,</p> <p>so I:</p> <ul> <li>make the object, and then</li> <li>use <code>.faces(&quot;XY&quot;)</code> to get the top plane and then</li> <li>use <code>.transforms(...)</code> to rotate the plane to a specific direction then</li> <li>use <code>.hole(...)</code> to drill holes.</li> </ul> <p>however, workspace got from <code>faces()</code> seem to be parented to the previous one, and make the rotation aggregated.</p> <p>I have very few understandings of the cadquery stack in general, so I try:</p> <ul> <li><code>.first()</code> before or after the drilling holes</li> <li><code>.last()</code> before or after the drilling holes</li> <li><code>.end()</code> before or after the drilling holes</li> <li>unrotate at each iteration</li> </ul> <p>So, how can I achieve desired task, and what is the best practice(s) for that?</p> <pre class="lang-py prettyprint-override"><code>import cadquery as cq thickness = 2 rad = 5 leng = 10 # Making the Soild result = ( cq .Workplane(&quot;XY&quot;) .tag(&quot;top&quot;) .sphere(rad + thickness) .faces(&quot;XY&quot;) .split(keepTop=True) .faces(&quot;&lt;Z&quot;) .workplane() .circle(rad + thickness) .extrude(leng) .faces(&quot;&lt;Z&quot;) .shell(thickness) ) # Drilling Holes for i in range(3): result = ( result # .first() # .last() .faces(&quot;XY&quot;) .transformed(rotate=(0,30,0)) .transformed(rotate=(0,0,i*120)) .hole(1) # .transformed(rotate=(0,0,-i*120)) # .transformed(rotate=(0,-30,0)) ) show_object(result) </code></pre> <h2>Expected Result</h2> <p><a href="https://i.sstatic.net/4hdN3IgL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4hdN3IgL.png" alt="expected result" /></a></p>
<python><cad><openscad><cadquery>
2025-02-03 07:02:43
0
1,211
啊鹿Dizzyi
79,407,989
2,537,745
Why am I getting a ‘required broadcastable shapes’ error after removing max_length in my TensorFlow seq2seq with Attention?
<p>I’m working through an example of an encoder–decoder seq2seq model in TensorFlow with Bahdanau-style Attention. I followed a tutorial/book example that uses a <code>TextVectorization</code> layer with <code>output_sequence_length=max_length</code>. When I keep <code>max_length</code>, everything works fine. However, if I remove it because I want to handle variable-length input sequences, the model throws a <code>INVALID_ARGUMENT: required broadcastable shapes</code> error during training.</p> <p>This makes me wonder why my char-level RNN model didn’t require a fixed <code>max_length</code>, but my attention-based seq2seq suddenly does. If I look at the error stack trace, it points to a shape mismatch in <code>sparse_categorical_crossentropy/weighted_loss/Mul</code>.</p> <pre><code>import tensorflow as tf # Sample data sentences_en = [&quot;I love dogs&quot;, &quot;You love cats&quot;, &quot;We like soccer&quot;] sentences_es = [&quot;me gustan los perros&quot;, &quot;te gustan los gatos&quot;, &quot;nos gusta el futbol&quot;] # TextVectorization without max_length vocab_size = 1000 text_vec_layer_en = tf.keras.layers.TextVectorization( max_tokens=vocab_size # NOTE: Removed output_sequence_length to allow variable-length ) text_vec_layer_es = tf.keras.layers.TextVectorization( max_tokens=vocab_size # NOTE: Also removed output_sequence_length here ) text_vec_layer_en.adapt(sentences_en) text_vec_layer_es.adapt([&quot;startofseq &quot; + s + &quot; endofseq&quot; for s in sentences_es]) # Model Inputs encoder_inputs = tf.keras.layers.Input(shape=(), dtype=tf.string) decoder_inputs = tf.keras.layers.Input(shape=(), dtype=tf.string) encoder_input_ids = text_vec_layer_en(encoder_inputs) decoder_input_ids = text_vec_layer_es(decoder_inputs) embed_size = 128 embedding_en = tf.keras.layers.Embedding(vocab_size, embed_size, mask_zero=True) embedding_es = tf.keras.layers.Embedding(vocab_size, embed_size, mask_zero=True) encoder_embeds = embedding_en(encoder_input_ids) decoder_embeds = embedding_es(decoder_input_ids) # Bidirectional Encoder encoder = tf.keras.layers.Bidirectional( tf.keras.layers.LSTM(256, return_sequences=True, return_state=True) ) encoder_outputs, forward_h, forward_c, backward_h, backward_c = encoder(encoder_embeds) encoder_state_h = tf.concat([forward_h, backward_h], axis=-1) encoder_state_c = tf.concat([forward_c, backward_c], axis=-1) encoder_state = [encoder_state_h, encoder_state_c] # Decoder decoder = tf.keras.layers.LSTM(512, return_sequences=True) decoder_outputs = decoder(decoder_embeds, initial_state=encoder_state) # Attention attention_layer = tf.keras.layers.Attention() attention_outputs = attention_layer([decoder_outputs, encoder_outputs]) # Final Dense Output output_layer = tf.keras.layers.Dense(vocab_size, activation='softmax') y_proba = output_layer(attention_outputs) model = tf.keras.Model(inputs=[encoder_inputs, decoder_inputs], outputs=[y_proba]) model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam', metrics=['accuracy']) # Attempted training # (Simplifying example: ignoring actual train split, just trying to run a dummy training step) x_en = tf.constant(sentences_en) x_es = tf.constant([&quot;startofseq &quot; + s + &quot; endofseq&quot; for s in sentences_es]) y_dummy = text_vec_layer_es(x_es) # shape mismatch expected if variable model.fit([x_en, x_es], y_dummy, epochs=1) </code></pre> <p>Error:</p> <pre><code>INVALID_ARGUMENT: required broadcastable shapes [[node gradient_tape/sparse_categorical_crossentropy/weighted_loss/Mul]] ... 
</code></pre> <p>Here's the seq2seq char RNN model which doesn't require a max_length in text vectorization layer.</p> <pre><code>model = tf.keras.Sequential([ tf.keras.layers.Embedding(input_dim=n_tokens, output_dim=16), tf.keras.layers.GRU(128, return_sequences=True), tf.keras.layers.Dense(n_tokens, activation=&quot;softmax&quot;) ]) </code></pre> <p>Why do char-level RNNs or simple seq2seq sometimes work without specifying max_length, but this attention-based model does not?</p> <p>Is this primarily because attention needs to be calculated over all input and output steps so <code>max_length</code> is important ahead of time?</p>
<python><tensorflow>
2025-02-03 06:30:34
0
1,794
abkds
79,407,952
11,283,324
Find the index of the current df value in another series and add to a column
<p>I have a dataframe and a series, as follows:</p> <pre><code>import pandas as pd from itertools import permutations df = pd.DataFrame({'a': [['a', 'b', 'c'], ['a', 'c', 'b'], ['c', 'a', 'b']]}) prob = list(permutations(['a', 'b', 'c'])) prob = [list(ele) for ele in prob] ps = pd.Series(prob) &gt;&gt;&gt; df a 0 [a, b, c] 1 [a, c, b] 2 [c, a, b] &gt;&gt;&gt; ps 0 [a, b, c] 1 [a, c, b] 2 [b, a, c] 3 [b, c, a] 4 [c, a, b] 5 [c, b, a] dtype: object </code></pre> <p>My question is: how do I add a column 'idx' to df that contains the index of each value of column 'a' within the series 'ps'? The desired result is:</p> <pre><code>a idx [a,b,c] 0 [a,c,b] 1 [c,a,b] 4 </code></pre> <p>ChatGPT gave me an answer, but it works very slowly when my real data is huge.</p> <pre><code>df['idx'] = df['a'].apply(lambda x: ps[ps.apply(lambda y: y == x)].index[0]) </code></pre> <p>Is there a more efficient way?</p>
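<p>A sketch of a dictionary-based lookup that avoids the nested <code>apply</code> (lists are unhashable, so both sides are converted to tuples first); it only makes one pass over <code>ps</code>:</p> <pre class="lang-py prettyprint-override"><code># Build the index once: tuple of the permutation -&gt; its position in ps
lookup = {tuple(v): i for i, v in ps.items()}

df['idx'] = df['a'].map(lambda x: lookup[tuple(x)])
</code></pre>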
<python><pandas>
2025-02-03 06:04:36
1
351
Sun Jar
79,407,620
470,801
What is the correct way to insert into the DuckDB JSON type via the Python API?
<p>The <a href="https://duckdb.org/docs/api/python/overview.html" rel="nofollow noreferrer">DuckDB Python API docs</a> have examples of how to read from a JSON file, but not doesn't explicitly state how to use the <a href="https://duckdb.org/docs/data/json/overview.html" rel="nofollow noreferrer">JSON type</a>.</p> <p>For example, given a schema:</p> <pre class="lang-sql prettyprint-override"><code>CREATE TABLE test (obj JSON) </code></pre> <p>Given an arbitrary python object (e.g. nested <code>dict</code> or <code>list</code>) called <code>obj</code>, which of the following is recommended:</p> <ol> <li><code>conn.execute(&quot;INSERT INTO test VALUES (?)&quot;, (obj,))</code></li> <li><code>conn.execute(&quot;INSERT INTO test VALUES (?)&quot;, (json.dumps(obj),))</code></li> </ol> <p>Both <em>appear</em> to work, none seem prescribed or proscribed by the docs, so it might be tempting use the first approach -- it's more direct, potentially more efficient.</p> <p>However, it seems that the 2nd approach is more reliable, because the first approach seems to be converting to <a href="https://duckdb.org/docs/sql/data_types/struct.html" rel="nofollow noreferrer">STRUCTS</a> behind the scenes.</p> <p>This can be seen with this test:</p> <pre class="lang-py prettyprint-override"><code>@pytest.mark.parametrize(&quot;obj&quot;, [ {&quot;a&quot;: 1, &quot;b&quot;: 2}, [{&quot;a&quot;: 1}, {&quot;a&quot;: 2}], [ { &quot;a&quot;: 1, }, { &quot;b&quot;: 2, }, ], {}, ]) @pytest.mark.parametrize(&quot;pre_serialize&quot;, [True, False]) def test_duckdb(obj, pre_serialize): import duckdb conn = duckdb.connect(&quot;:memory:&quot;) conn.execute(&quot;CREATE TABLE test (obj JSON)&quot;) if pre_serialize: obj_to_insert = json.dumps(obj) else: obj_to_insert = obj conn.execute(&quot;INSERT INTO test VALUES (?)&quot;, (obj_to_insert,)) result = conn.execute(&quot;SELECT * FROM test&quot;).fetchone() retrieved_obj = json.loads(result[0]) assert retrieved_obj == obj </code></pre> <p>The test fails with <code>(False-obj2)</code>, with the following error:</p> <pre><code>&gt; conn.execute(&quot;INSERT INTO test VALUES (?)&quot;, (obj_to_insert,)) E duckdb.duckdb.TypeMismatchException: Mismatch Type Error: Type STRUCT(b INTEGER) does not match with STRUCT(a INTEGER). Cannot cast STRUCTs - element &quot;b&quot; in source struct was not found in target struct </code></pre> <p>Note that <code>obj1</code> (the 2nd example) passes. The difference between <code>obj1</code> and <code>obj2</code> is that the keys in the 2nd array element differ from the first. Judging by the error message, duckdb appears to be converting each element of the list to a struct, inducing a schema after the first element, and <a href="https://duckdb.org/docs/sql/data_types/struct.html" rel="nofollow noreferrer">structs have fixed schemas</a>.</p> <p>Is this expected behavior, or a bug? Presumably the safest way to proceed is to pre-serialize, but is this guaranteed in the future?</p> <ul> <li>duckdb python module version: <code>1.1.3</code></li> <li>python3.11</li> </ul>
<python><json><duckdb>
2025-02-03 01:14:01
0
822
Chris Mungall
79,407,537
4,330,537
undetected_chromedriver and cloudflare issues if there are any undetected_chromedriver commands after the cloudflare challenge
<p>Hi, I am trying to use <code>import undetected_chromedriver as uc</code>.</p> <p>This works, clicking a button:</p> <pre><code>driver.execute_script(&quot;arguments[0].focus();&quot;, username_field) WebDriverWait(driver, 100).until(EC.element_to_be_clickable((&quot;id&quot;,&quot;ctl00_lower_panel_btnContinue&quot;))).click() </code></pre> <p>This does not:</p> <pre><code>driver.execute_script(&quot;arguments[0].focus();&quot;, username_field) WebDriverWait(driver, 100).until(EC.element_to_be_clickable((&quot;id&quot;,&quot;ctl00_lower_panel_btnContinue&quot;))).click() WebDriverWait(driver, 100).until(EC.element_to_be_clickable((&quot;id&quot;,&quot;ctl00_lower_panel_btnContinue&quot;))).click() ## here is the cloudflare WebDriverWait(driver, 100).until(EC.element_to_be_clickable((&quot;id&quot;, &quot;ctl00_lower_panel_btnContinue&quot;))).click() element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((&quot;xpath&quot;, '//button[text()=&quot;Yes&quot; and @class=&quot;ui-button ui-corner-all ui-widget&quot;]'))) element1 = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((&quot;xpath&quot;, '//button[text()=&quot;Yes&quot; and @class=&quot;ui-button ui-corner-all ui-widget&quot;]'))) yes_driver3 = driver.find_element(&quot;xpath&quot;, '//button[text()=&quot;Yes&quot; and @class=&quot;ui-button ui-corner-all ui-widget&quot;]') yes_driver3.click() </code></pre> <p>Is there some process to run undetected_chromedriver commands after encountering the Cloudflare challenge?</p>
<python><selenium-webdriver><undetected-chromedriver>
2025-02-02 23:33:46
0
835
RobM
79,407,429
2,525,593
Run .py file that is in a subdirectory
<p>I can run a file from the root of my project, but not from a subdirectory. How do I fix this?</p> <p>I have a directory tree that looks like this:</p> <pre><code>$ tree . ├── kalman │ ├── find_observibility.py │ ├── __init__.py │ ├── kalman.py │ └── model.py </code></pre> <p><code>find_observibility.py</code> imports from <code>kalman</code>.</p> <p>If I copy <code>find_observibility.py</code> into the root of my project, it runs just fine. However, when I try to run it from its subdirectory, I get an error:</p> <pre><code>$ ./kalman/find_observibility.py Traceback (most recent call last): File &quot;/home/tim/Documents/software/cl_kalman/./kalman/find_observibility.py&quot;, line 3, in &lt;module&gt; from kalman import outputJacobian, stateJacobian, stateTransition File &quot;/home/tim/Documents/software/cl_kalman/kalman/kalman.py&quot;, line 6, in &lt;module&gt; from kalman.model import modelOutput, stateTransition ModuleNotFoundError: No module named 'kalman.model'; 'kalman' is not a package </code></pre> <p>Here's <code>find_observability.py</code>:</p> <pre><code>#! /usr/bin/env python3 from kalman import outputJacobian, stateJacobian, stateTransition from make_flight import Rising, Rounds if '__main__' == __name__: print('oh my') </code></pre> <p>Here's <code>kalman/__init__.py</code>:</p> <pre><code>from .model import initialState, modelOutput, stateTransition from .kalman import outputJacobian, stateJacobian </code></pre> <p>How to resolve this?</p>
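<p>A sketch of the usual workaround: run the file as a module from the project root, so that the current directory (which contains the <code>kalman</code> package and, presumably, <code>make_flight.py</code>) is what ends up on <code>sys.path</code>, rather than the <code>kalman/</code> subdirectory itself:</p> <pre><code>$ cd /home/tim/Documents/software/cl_kalman
$ python3 -m kalman.find_observibility
</code></pre>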
<python>
2025-02-02 22:00:55
2
536
Tim Wescott
79,407,324
14,416,045
PIL vs OpenCV Affine Transform: why does the image turn upside down?
<p>I'm trying to get rid of OpenCV in my image pipeline. I'm replacing it with PIL. I understand that the affine transformation in OpenCV is source -&gt; destination, but the parameter for PIL is destination -&gt; source. The transformation matrix used in OpenCV can be inverted to be used in PIL.</p> <p>The problem I have is there's one image that I can successfully affine transform in OpenCV, but not PIL. The result I get from PIL is upside down and translated. I've tried inverting the OpenCV transformation matrix and applying it to the PIL &quot;.transform()&quot; method, but I get the same results. What am I doing wrong here?</p> <pre class="lang-py prettyprint-override"><code>... warped = cv2.warpAffine(image, M, (width, height)) ... a_f = M[0, 0] b_f = M[0, 1] c_f = M[0, 2] d_f = M[1, 0] e_f = M[1, 1] f_f = M[1, 2] a_i, b_i, c_i, d_i, e_i, f_i = invert_affine(a_f, b_f, c_f, d_f, e_f, f_f) warped = image.transform( ... (a_i, b_i, c_i, d_i, e_i, f_i), ... ) </code></pre> <p>I was asked for a reproducible example. I've put the image on a CDN. I have a script on <a href="https://github.com/tnorlund/Portfolio/blob/main/dev.py" rel="nofollow noreferrer">GitHub</a>. <a href="https://i.sstatic.net/bZGSmn6U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZGSmn6U.png" alt="Bounding Box" /></a> <a href="https://i.sstatic.net/UNU8NBED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UNU8NBED.png" alt="Bad Transform From PIL Affine Transform" /></a></p>
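<p>A hedged sketch of the inversion step, letting OpenCV invert the matrix instead of a hand-rolled <code>invert_affine</code> (a sign or ordering slip in a manual inverse is a common source of exactly this kind of flipped-and-translated output). <code>M</code>, <code>width</code>, <code>height</code> and <code>pil_image</code> are assumed to be the ones from the pipeline above, and the enum spellings are for Pillow 9.1+ (older versions use <code>Image.AFFINE</code> / <code>Image.BILINEAR</code>):</p> <pre class="lang-py prettyprint-override"><code>import cv2
from PIL import Image

# PIL wants the destination -&gt; source mapping as (a, b, c, d, e, f), row-major.
M_inv = cv2.invertAffineTransform(M)
coeffs = M_inv.flatten().tolist()

warped_pil = pil_image.transform(
    (width, height),
    Image.Transform.AFFINE,
    coeffs,
    resample=Image.Resampling.BILINEAR,
)
</code></pre>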
<python><opencv><python-imaging-library>
2025-02-02 20:47:36
1
468
Tyler Norlund
79,407,317
7,295,599
How to create possible sets of n numbers from m-sized prime number list?
<p>Input: a list of <code>m</code> prime numbers (with possible repetition), and integers <code>n</code> and <code>t</code>.</p> <p>Output: all sets of <code>n</code> numbers, where each set is formed by partitioning the input into <code>n</code> parts, and taking the product of the primes in each part. We reject any set containing a number greater than <code>t</code> or any duplicate numbers.<br> <em>Thanks, @Dave, for the now hopefully clear formulation!</em></p> <hr /> <p>I have a list of <code>m</code> elements (prime numbers) and out of these by using all of them, I want to create all possible sets of <code>n</code> numbers which fulfill certain conditions:</p> <ul> <li>the maximum value of each set should be below a certain threshold value <code>t</code></li> <li>the values in each set should be unique</li> </ul> <p>So, my thoughts:</p> <ol> <li>split the <code>m</code>-sized list into all possible <code>n</code> subsets</li> <li>calculate the product of each subset</li> <li>exclude duplicate sets, sets with duplicates and sets with maximum above the threshold</li> </ol> <p>Searching StackOverflow I found this: <a href="https://stackoverflow.com/q/66119264/7295599">How to get all possible combinations of dividing a list of length n into m sublists</a></p> <p>Taking the <code>partCombo</code> from <a href="https://stackoverflow.com/a/66120649">here</a> and add some more lines, I ended up with this:</p> <p><strong>Script:</strong></p> <pre><code>### get set of numbers from prime number list from itertools import combinations def partCombo(L,N=4): # https://stackoverflow.com/a/66120649 if N==1: yield [L]; return for size in range(1,len(L)-N+2): for combo in combinations(range(len(L)),size): # index combinations part = list(L[i] for i in combo) # first part remaining = list(L) for i in reversed(combo): del remaining[i] # unused items yield from ([part]+rest for rest in partCombo(remaining,N-1)) def lst_product(lst): p = 1 for i in range(len(lst)): p *= lst[i] return p a = [2,2,2,2,3,3,5] n = 3 # number of subsets t = 50 # threshold for max. d = sorted([u for u in set(tuple(v) for v in [sorted(list(w)) for w in set(tuple(z) for z in [[lst_product(y) for y in x] for x in partCombo(a, N=n) ])]) if max(u)&lt;t]) print(len(d)) print(d) ### end of script </code></pre> <p><strong>Result:</strong> (for <code>a = [2,2,2,2,3,3,5]</code>, i.e. <code>m=7</code>; <code>n=3</code>, <code>t=50</code>)</p> <pre><code>26 [(2, 8, 45), (2, 9, 40), (2, 10, 36), (2, 12, 30), (2, 15, 24), (2, 18, 20), (3, 5, 48), (3, 6, 40), (3, 8, 30), (3, 10, 24), (3, 12, 20), (3, 15, 16), (4, 4, 45), (4, 5, 36), (4, 6, 30), (4, 9, 20), (4, 10, 18), (4, 12, 15), (5, 6, 24), (5, 8, 18), (5, 9, 16), (5, 12, 12), (6, 6, 20), (6, 8, 15), (6, 10, 12), (8, 9, 10)] </code></pre> <p>This is fast and I need to also exclude the sets having duplicate numbers, e.g. <code>(4,4,45), (5,12,12), (6,6,20)</code></p> <p><strong>Here comes the problem:</strong></p> <p>If I have larger lists, e.g. <code>a = [2,2,2,2,2,2,2,3,5,5,7,13,19]</code>, i.e. <code>m=len(a)=13</code> and splitting into more subsets <code>n=6</code>, the combinations can quickly get into the millions/billions, but there are millions of duplicates which I would like to exclude right from the beginning. Also the threshold <code>t</code> will exclude a lot of combinations. And maybe just a few hundreds sets will remain.</p> <p>If I start the script with the above values for (<code>a, m=13, n=6</code>), the script will take forever... 
Is there maybe a smarter way to reduce/exclude a lot of duplicate combinations and achieve this in a much faster way or some smart Python feature which I am not aware of?</p> <p><strong>Clarification:</strong></p> <p>Thanks for your comments, apparently, I couldn't make my problem clear enough. Let me try to clarify. For example, given the prime number set <code>a = [2,2,2,2,3,3,5]</code> <code>(m=7)</code>, I want to make all possible sets of <code>n=3</code> numbers out of it which satisfy some conditions. So, for example, 5 out of many (I guess 1806) possibilities to split <code>a</code> into 3 parts and the corresponding products of each subset:</p> <ul> <li><p><code>[2],[2],[2,2,3,3,5]</code> --&gt; <code>(2,2,180)</code>: excl. because <code>max=180 &gt; t=50</code> and duplicate of 2</p> </li> <li><p><code>[2],[2,2,2],[3,3,5]</code> --&gt; <code>(2,8,45)</code>: included</p> </li> <li><p><code>[3],[2,2],[2,2,3,5]</code> --&gt; <code>(3,4,60)</code>: excluded because <code>max=60 &gt;t=50</code></p> </li> <li><p><code>[2,2],[2,2],[3,3,5]</code> --&gt; <code>(4,4,45)</code>: excluded because duplicates of <code>4</code></p> </li> <li><p><code>[2,3],[2,3],[2,2,5]</code> --&gt; <code>(6,6,20)</code>: excluded because duplicates of <code>6</code></p> </li> </ul>
<python><algorithm><combinations><primes><python-itertools>
2025-02-02 20:46:02
1
27,030
theozh
79,407,242
274,460
How do I make FastAPI URLs include the proxied URL?
<p>I have a FastAPI application that is behind a NextJS reverse proxy. I'm using NextJS rewrites, which sets the <code>x-forwarded-for</code> header to the externally-visible hostname and port. The rewrite looks like this:</p> <pre><code>rewrites: async () =&gt; [ { source: &quot;/api/:slug*&quot;, destination: &quot;http://backend:8000/:slug*&quot; } ] </code></pre> <p>The whole lot is running in a docker stack (which is where hostnames like <code>backend</code> come from) and I end up with headers like this:</p> <pre><code>host: backend:8000 x-forwarded-host: localhost:3000 </code></pre> <p>I'm then emailing a link from the FastAPI application. I construct the URL like this:</p> <pre><code>@app.get(&quot;/verify&quot;) async def verify(token: str): ... @app.post(&quot;/signup&quot;) async def signup(body: SignupRequest, request: Request) -&gt; str: user = add_user(body.username, body.email) token = user.get_signup_token() url = request.url_for(&quot;verify&quot;).include_query_params(token=token) email_verification(body.email, url) return &quot;&quot; </code></pre> <p>I've set up FastAPI with <code>root_path=&quot;/api&quot;</code> so that the path is rewritten correctly.</p> <p>The resulting url is <code>http://backend:8000/api/verify</code>. I want it to be <code>http://localhost:3000/api/verify</code> (ie to have the actual hosted URL rather than the intra-docker-stack URL).</p> <p>I've tried adding a middleware like this:</p> <pre><code>@app.middleware(&quot;http&quot;) async def rewrite_host_header(request: Request, call_next): if &quot;x-forwarded-host&quot; in request.headers: request.headers[&quot;host&quot;] = request.headers[&quot;x-forwarded-host&quot;] return await call_next(request) </code></pre> <p>but this doesn't seem to make a difference. I've also tried adding <code>request.url = request.url.replace(host=request.headers[&quot;x-forwarded-for&quot;])</code> but this also doesn't change the output of <code>request.url_for(...)</code>.</p> <p>How am I meant to configure this so that URLs are emitted with the right scheme, hostname and port?</p> <p>Edit to add: I've tried also setting <code>X-Forwarded-Proto</code>, <code>X-Forwarded-Port</code>, <code>X-Forwarded-Prefix</code> and <code>X-Forwarded-For</code>, making requests directly to FastAPI using <code>curl</code> and so cutting out the NextJS step. None of it makes any difference to the URLs that <code>url_for()</code> emits.</p>
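<p>For what it's worth, a sketch of a raw ASGI middleware that swaps the <code>host</code> header in the ASGI scope for <code>x-forwarded-host</code>. The <code>@app.middleware(&quot;http&quot;)</code> attempt above cannot work because <code>request.headers</code> is derived from the scope and is not writable; this sketch assumes Starlette builds <code>url_for()</code> URLs from the scope's host header, and it leaves <code>X-Forwarded-Proto</code> handling to something like uvicorn's <code>--proxy-headers</code>:</p> <pre class="lang-py prettyprint-override"><code>class ForwardedHostMiddleware:
    '''Replace the Host header with X-Forwarded-Host so url_for() builds external URLs.'''

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope['type'] == 'http':
            headers = dict(scope['headers'])
            forwarded_host = headers.get(b'x-forwarded-host')
            if forwarded_host:
                scope['headers'] = [
                    (name, value) for name, value in scope['headers'] if name != b'host'
                ] + [(b'host', forwarded_host)]
        await self.app(scope, receive, send)


app.add_middleware(ForwardedHostMiddleware)
</code></pre>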
<python><next.js><fastapi><reverse-proxy><fastapi-middleware>
2025-02-02 19:54:58
1
8,161
Tom
79,407,071
16,891,669
Understanding descriptor protocol for 'wrapper-descriptor' itself
<p>I was trying to explore how would the descriptor protocol work if I were to access the object of the 'wrapper-descriptor' class itself.</p> <p>So, I explored the c code in <a href="https://github.com/python/cpython/blob/3.13/Objects/typeobject.c" rel="nofollow noreferrer">typeobject.c</a> and found two <code>__get__</code> functions - <code>slot_tp_descr_get</code> and <code>wrap_descr_get</code>. They are related by the following command -</p> <pre class="lang-c prettyprint-override"><code>TPSLOT(__get__, tp_descr_get, slot_tp_descr_get, wrap_descr_get, &quot;__get__($self, instance, owner=None, /)\n--\n\nReturn an attribute of instance, which is of type owner.&quot;) </code></pre> <p>The <code>wrap_descr_get</code> is a wrapper function that wraps the <code>slot_tp_descr_get</code>.</p> <p>Going through the code of <code>slot_tp_descr_get</code> I found it calls the <code>__get__</code> of the <code>self</code> argument. So, will it run infinitely if an object of its own type is passed? There is one return statement in the function preluded by an if statement but I don't think it will be run. Because, the 'if clause' checks if the class of the <code>self</code> implements a <code>__get__</code> or not, since we pass an object of this class, it will find the <code>__get__</code>.</p> <hr /> <p>Related question</p> <p>If I call the <code>__get__</code> of other classes like the function class as written below.</p> <pre class="lang-py prettyprint-override"><code>def fun(): pass type(fun).__get__ or fun.__get__ </code></pre> <p>Is the underlying <code>__get__</code> of function class directly called by <code>wrap_descr_get</code> or as per the descriptor protocol which will look for the class of <code>__get__</code> (wrapper-descriptor here) and call the class's <code>__get__</code> i.e. <code>slot_tp_descr_get</code>?</p>
<python><cpython>
2025-02-02 18:12:40
1
597
Dhruv
79,406,955
281,965
Generate a token using Python requests vs mc binary?
<p>I am using Python MinIO SDK, but there are admin operations that it cannot do, and then we use the <code>mc</code> binary instead. But the binary has all these files on disk that we can't have in our ENV so we need to kick it out.</p> <p>I'm trying to simulate what the binary does, and while reviewing the <code>--debug</code> option I noticed that it access the following:</p> <pre><code>&lt;DEBUG&gt; PUT /minio/admin/v3/add-service-account HTTP/1.1 Host: myminio.server.com:9000 User-Agent: MinIO (linux; amd64) madmin-go/3.0.70 mc/RELEASE.2025-01-17T23-25-50Z Content-Length: 254 Accept-Encoding: zstd,gzip Authorization: AWS4-HMAC-SHA256 Credential=myuser/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED** X-Amz-Content-Sha256: 2...... X-Amz-Date: 2... </code></pre> <p>(edited based on comment) Then, I tried taking the parameters to some <code>requests.put</code> example, but I'm unable to make it work</p> <pre class="lang-py prettyprint-override"><code>import requests from requests_aws4auth import AWS4Auth auth = AWS4Auth(ak, sk, '', 's3') headers = { &quot;MINIO_ACCESS_KEY&quot;: &quot;a...&quot;, &quot;MINIO_SECRET_KEY&quot;: &quot;m...&quot;, &quot;MINIO_PATH&quot;: &quot;auto&quot;, &quot;MINIO_API&quot;: &quot;s3v4&quot; } res = requests.get(&quot;https://myminio.server.com:9000/minio/admin/v3/list-access-keys-bulk?all=true&amp;listType=all&quot;, auth=auth, verify=False) </code></pre> <p>The headers look great, yet i'm unable to match the signature</p> <pre class="lang-py prettyprint-override"><code>{ &quot;Code&quot;: &quot;SignatureDoesNotMatch&quot;, &quot;Message&quot;: &quot;The request signature we calculated does not match the signature you provided. Check your key and signing method.&quot;, &quot;Resource&quot;: &quot;/minio/admin/v3/list-access-keys-bulk&quot;, &quot;RequestId&quot;: &quot;1...&quot;, &quot;HostId&quot;: &quot;d...&quot; } # request headers looks the same as the `mc` binary 'x-amz-date': '20250203T094648Z', 'x-amz-content-sha256': 'e3b0...', 'Authorization': 'AWS4-HMAC-SHA256 Credential=myuser/20250203//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**' </code></pre> <p>Error I get makes me believe it's not that simple to implement. Anyone knows how to make it work?</p>
<python><python-3.x><minio>
2025-02-02 16:50:17
1
8,181
Ricky Levi
79,406,917
10,737,396
How can I accurately count tokens for Llama3/DeepSeek r1 prompts when Groq API reports “Request too large”?
<p>I'm integrating the Groq API in my Flask application to classify social media posts using a model based on DeepSeek r1 (e.g., <code>deepseek-r1-distill-llama-70b</code>). I build a prompt by combining multiple texts and send it to the API. However, I keep receiving an error like this:</p> <pre><code>Request too large: The prompt exceeds the model's token limit. Please reduce your message size and try again. Error code: 413 - {'error': {'message': 'Request too large for model `deepseek-r1-distill-llama-70b` in organization `org_01jbv28e7qfp4rx8d51ybw9ypr` service tier on tokens per minute (TPM): Limit 6000, Requested 9262, please reduce your message size and try again. ...', 'type': 'tokens', 'code': 'rate_limit_exceeded'}} </code></pre> <p>To handle this, I tried splitting my prompt into chunks so that each request stays below the limit. For token counting, I initially used a simple whitespace split—which clearly underestimates the number of tokens. Then, I switched to using Hugging Face’s deepseek-ai/DeepSeek-R1-Distill-Llama-70B and meta-llama/Meta-Llama-3-8B AutoTokenizer:</p> <pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer # Load GPT-2 tokenizer as a fallback for token counting tokenizer_for_count = AutoTokenizer.from_pretrained(&quot;deepseek-ai/DeepSeek-R1-Distill-Llama-70B&quot;) def count_tokens(text): return len(tokenizer_for_count.encode(text)) # Example function to build prompt chunks (simplified) def build_prompt_for_chunk(chunk, offset): text_descriptions = [f'{{&quot;id&quot;: {offset + i}, &quot;text&quot;: &quot;{text}&quot;}}' for i, text in enumerate(chunk)] prompt = f&quot;Posts: [{', '.join(text_descriptions)}]&quot; return prompt # Building the full prompt full_prompt = build_prompt_for_chunk(texts, 0) token_count = count_tokens(full_prompt) print(f&quot;Total tokens: {token_count}&quot;) </code></pre> <p>When I build my prompt,the deepseek-ai/DeepSeek-R1-Distill-Llama-70B and meta-llama/Meta-Llama-3-8B tokenizers gave me same token count of 7209 tokens. Then the GPT‑2 tokenizer gives me a token count of around 21,204 tokens, and I also did split() where I got 3059 tokens. However, the Groq API error indicates that my prompt is requesting around 9,262 tokens. This discrepancy makes me think that the GPT‑2 tokenizer isn’t an accurate proxy for my deployed model (which is based on Llama3/DeepSeek r1). What should I do to get closer to the accurate count?</p> <pre><code>def count_tokens(text): # Encode the text first. tokens = tokenizer_for_count.encode(text) # Decode tokens with cleaning disabled. decoded = tokenizer_for_count.decode(tokens, clean_up_tokenization_spaces=False) # Re-encode the decoded text. corrected_tokens = tokenizer_for_count.encode(decoded) return len(corrected_tokens) </code></pre> <p>Also tried to implement a “round-trip” approach to better match the original behavior. However, after applying this method, the total token count comes out to 7210 tokens—still far from the 9262 tokens reported by the Groq API.</p>
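<p>Setting aside the exact count, the gap between 7209 and 9262 plausibly comes from what the server counts (the full chat request, including the chat template, system text and any reserved completion tokens) versus the raw prompt string counted client-side; that part is an assumption. A greedy chunking sketch that reuses the <code>count_tokens</code> and <code>build_prompt_for_chunk</code> helpers from the question, with a safety margin below the 6000 TPM limit (the budget numbers are assumptions):</p> <pre class="lang-py prettyprint-override"><code>def split_texts_into_chunks(texts, max_tokens=4500, overhead_per_item=20):
    # Greedy packing: keep adding texts to the current chunk until the
    # estimated token count (via count_tokens) would exceed the budget.
    chunks, current, current_tokens = [], [], 0
    for text in texts:
        cost = count_tokens(text) + overhead_per_item
        if current and current_tokens + cost &gt; max_tokens:
            chunks.append(current)
            current, current_tokens = [], 0
        current.append(text)
        current_tokens += cost
    if current:
        chunks.append(current)
    return chunks

# Each chunk then becomes its own request:
# offset = 0
# for chunk in split_texts_into_chunks(texts):
#     prompt = build_prompt_for_chunk(chunk, offset)
#     offset += len(chunk)
</code></pre>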
<python><tokenize><groq><llama3><deepseek>
2025-02-02 16:19:08
0
784
Towsif Ahamed Labib
79,406,912
8,621,823
How is the thread to be executed selected by the OS?
<p>Why this question is different from <a href="https://stackoverflow.com/questions/73336295/python-lock-always-re-acquired-by-the-same-thread">Python Lock always re-acquired by the same thread</a>.</p> <p>It may look similar because both questions use 2 threads. That question specifically requires each thread to execute once before handing over, creating an alternating foo,bar,foo,bar print pattern. It is looking for specific implementations to achieve that goal.</p> <p>This question is more open. It is not looking for a specific pattern, but an explanation of what I see, and direction on what variables matter (to tweak) to study this phenomena.</p> <pre><code>import threading import time lock = threading.Lock() def high_priority(): while True: with lock: print(&quot;High priority thread acquired lock&quot;) time.sleep(0.2) def low_priority(): while True: with lock: print(&quot;Low priority thread acquired lock&quot;) # May never happen! time.sleep(0.1) t1 = threading.Thread(target=high_priority, daemon=True) t2 = threading.Thread(target=low_priority, daemon=True) t1.start() t2.start() time.sleep(1) </code></pre> <p>I ran the code above 23 times before seeing low_priority get the lock.</p> <p>My aim is to understand what determines</p> <ul> <li>when within the 1 second sleep of main script low_priority thread gets the lock</li> <li>how often low_priority thread gets the lock</li> </ul> <p>However the experiments seem to require a long time so i'm looking for some theory first.</p> <p>Last 3 runs' output out of 23:</p> <pre><code>➜ deadlock python deadlock_priority.py High priority thread acquired lock High priority thread acquired lock High priority thread acquired lock High priority thread acquired lock High priority thread acquired lock ➜ deadlock python deadlock_priority.py High priority thread acquired lock High priority thread acquired lock High priority thread acquired lock High priority thread acquired lock High priority thread acquired lock ➜ deadlock python deadlock_priority.py High priority thread acquired lock High priority thread acquired lock High priority thread acquired lock Low priority thread acquired lock Low priority thread acquired lock Low priority thread acquired lock Low priority thread acquired lock </code></pre> <p>Is priority determined by order of start method, or by length of sleep in the threads?</p> <pre><code>t1.start() t2.start() </code></pre> <p>I tried to sleep less on low_priority in hopes of giving it higher chance to take over, is this wishful thinking? I suspect <code>time.sleep</code> settings inside the lock context manager in both threads have no impact.</p> <p>Is there any way to make this experiment more reproducible? (meaning in a certain number of runs, i can see low_priority get the lock.</p> <p>What other parameters are interesting to tweak or add to the code to understand this or related issues better?</p> <p>I just wrote the code to study thread competition with no use case in mind, but intuition tells me there is some use. 
Can anyone explain if such patterns appear in meaningful code?</p> <p>I decided to randomly add sleeps outside the lock context manager and somehow low_priority gets the lock a lot more often?</p> <pre><code>def high_priority(): while True: time.sleep(0.0001) with lock: print(&quot;High priority thread acquired lock&quot;) time.sleep(0.2) def low_priority(): while True: time.sleep(0.0001) with lock: print(&quot;Low priority thread acquired lock&quot;) time.sleep(0.1) </code></pre> <p>Output of 1st 3 runs:</p> <pre><code>➜ deadlock python deadlock_sleep.py High priority thread acquired lock Low priority thread acquired lock High priority thread acquired lock Low priority thread acquired lock High priority thread acquired lock Low priority thread acquired lock High priority thread acquired lock ➜ deadlock python deadlock_sleep.py High priority thread acquired lock Low priority thread acquired lock High priority thread acquired lock Low priority thread acquired lock High priority thread acquired lock Low priority thread acquired lock High priority thread acquired lock ➜ deadlock python deadlock_sleep.py High priority thread acquired lock Low priority thread acquired lock High priority thread acquired lock Low priority thread acquired lock High priority thread acquired lock Low priority thread acquired lock High priority thread acquired lock </code></pre>
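<p>For the reproducibility part, a small sketch (plain standard library, nothing from the question changed) that counts acquisitions over a fixed window instead of reading prints, so the effect of the sleeps can be measured rather than eyeballed. The general pattern you observed is common: a thread that releases a lock and immediately tries to re-acquire it is already running and usually wins the race again, while any sleep outside the lock gives the waiting thread a chance to be scheduled first.</p> <pre class="lang-py prettyprint-override"><code>import threading
import time
from collections import Counter

lock = threading.Lock()
counts = Counter()
stop = threading.Event()

def worker(name, hold, gap):
    while not stop.is_set():
        with lock:
            counts[name] += 1
            time.sleep(hold)   # time spent holding the lock
        time.sleep(gap)        # time spent NOT holding the lock

t1 = threading.Thread(target=worker, args=('high', 0.2, 0.0), daemon=True)
t2 = threading.Thread(target=worker, args=('low', 0.1, 0.0), daemon=True)
t1.start()
t2.start()
time.sleep(5)
stop.set()
print(counts)   # vary the gap arguments and compare the two counts
</code></pre>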
<python><multithreading>
2025-02-02 16:13:16
0
517
Han Qi
79,406,608
8,849,755
Pandas datetime index empty dataframe
<p>I have the following code:</p> <pre class="lang-py prettyprint-override"><code>data = pandas.read_csv('data.csv') data['when'] = pandas.to_datetime(data['when']) data.set_index('when', inplace=True) print(data) print(data.index.dtype) </code></pre> <p>which prints:</p> <pre><code> price when 2025-01-04 98259.4300 2025-01-03 98126.6400 2025-01-02 96949.1800 2025-01-01 94610.1400 2024-12-31 93647.0100 ... ... 2010-07-21 0.0792 2010-07-20 0.0747 2010-07-19 0.0808 2010-07-18 0.0858 2010-07-17 0.0500 [5286 rows x 1 columns] datetime64[ns] </code></pre> <p>Then, I am trying to select a range like this:</p> <pre class="lang-py prettyprint-override"><code>start_date = datetime(year=2010,month=1,day=1) end_date = datetime(year=2025,month=1,day=1) print(data.loc[start_date:end_date]) print(data.loc[start_date:]) print(data.loc[:end_date]) </code></pre> <p>and this prints</p> <pre><code>Empty DataFrame Columns: [price] Index: [] Empty DataFrame Columns: [price] Index: [] price when 2025-01-04 98259.43 2025-01-03 98126.64 2025-01-02 96949.18 2025-01-01 94610.14 </code></pre> <p>Why?</p> <p>I am using pandas 2.2.3.</p>
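<p>A likely explanation, worth checking against the data: the index here is monotonically <em>decreasing</em> (2025 down to 2010), and label slices with <code>.loc</code> follow the index order, so an ascending <code>start_date:end_date</code> slice is empty. A minimal sketch of the two usual fixes:</p> <pre class="lang-py prettyprint-override"><code># Option 1: sort the index ascending, then slice as usual
data = data.sort_index()
print(data.loc[start_date:end_date])

# Option 2: keep the descending index and slice from the later date to the earlier one
print(data.loc[end_date:start_date])
</code></pre>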
<python><pandas><datetime>
2025-02-02 12:49:12
3
3,245
user171780
79,406,511
3,333,449
Calculate cumulative sum of time series X for time points in series Y
<p>Imagine transactions, identified by <em>amount</em>, arriving throughout the day. You want to calculate the running total of <em>amount</em> at given points in time (9 am, 10 am, etc.).</p> <p>With pandas, I would use <code>apply</code> to perform such an operation. With Polars, I tried using <code>map_elements</code>. I have also considered <code>group_by_dynamic</code> but I am not sure it gives me control of the time grid's start / end / increment.</p> <p>Is there a better way?</p> <pre><code>import polars as pl import datetime df = pl.DataFrame({ &quot;time&quot;: [ datetime.datetime(2025, 2, 2, 11, 1), datetime.datetime(2025, 2, 2, 11, 2), datetime.datetime(2025, 2, 2, 11, 3) ], &quot;amount&quot;: [5.0, -1, 10] }) dg = pl.DataFrame( pl.datetime_range( datetime.datetime(2025, 2, 2, 11, 0), datetime.datetime(2025, 2, 2, 11, 5), &quot;1m&quot;, eager = True ), schema=[&quot;time&quot;] ) def _cumsum(dt): return df.filter(pl.col(&quot;time&quot;) &lt;= dt).select(pl.col(&quot;amount&quot;)).sum().item() dg.with_columns( cum_amount=pl.col(&quot;time&quot;).map_elements(_cumsum, return_dtype= pl.Float64) ) </code></pre>
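<p>One expression-based alternative, sketched under the assumption of a reasonably recent Polars (with <code>cum_sum</code> and <code>join_asof</code>): compute the running total once on <code>df</code> and attach it to the time grid with an as-of join, filling grid points before the first transaction with zero. Names follow the question.</p> <pre class="lang-py prettyprint-override"><code>running = df.sort('time').with_columns(cum_amount=pl.col('amount').cum_sum())

result = (
    dg.sort('time')
      .join_asof(running.select('time', 'cum_amount'), on='time', strategy='backward')
      .with_columns(pl.col('cum_amount').fill_null(0.0))
)
print(result)
</code></pre>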
<python><dataframe><time-series><python-polars>
2025-02-02 11:36:46
1
535
Dimitri Shvorob
79,406,498
2,869,544
TypeError raised when importing fbprophet
<p>Condition: any python command will have the same error. For example if run this simple command <code>python -V</code> , it will generate the same error.</p> <p>I am using:</p> <ul> <li>conda/miniconda3 version : 24.5.0</li> <li>spyder-kernels version : 3.0.2</li> <li>python version : 3.12.8</li> </ul> <p>My spyder-env path: <code>D:\Spyder6.0.3\envs\my-env-test</code></p> <p>I have set the PYTHONPATH to:</p> <p><a href="https://i.sstatic.net/zUoofO5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zUoofO5n.png" alt="enter image description here" /></a></p> <p>and python interpreter to: <a href="https://i.sstatic.net/7yr4QleK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7yr4QleK.png" alt="enter image description here" /></a></p> <p>My python scripts:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt from fbprophet import Prophet df = pd.read_csv('data/dau.csv', parse_dates=['Date'], index_col='Date') df_prophet = df.reset_index().rename(columns={'Date': 'ds', 'New Users': 'y'}) model = Prophet() model.fit(df_prophet) future = model.make_future_dataframe(periods=10) forecast = model.predict(future) model.plot(forecast) plt.show() </code></pre> <p>Console output:</p> <pre><code>%runfile D:/_SpyderAnalysis/New_User.py --wdir Unexpected exception formatting exception. Falling back to standard exception Traceback (most recent call last): File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\spyder_kernels\customize\code_runner.py&quot;, line 506, in _exec_code exec_encapsulate_locals( File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\spyder_kernels\customize\utils.py&quot;, line 209, in exec_encapsulate_locals exec_fun(compile(code_ast, filename, &quot;exec&quot;), globals, None) File &quot;d:\_spyderanalysis\new_user.py&quot;, line 3, in &lt;module&gt; from fbprophet import Prophet File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\fbprophet\__init__.py&quot;, line 8, in &lt;module&gt; from fbprophet.forecaster import Prophet File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\fbprophet\forecaster.py&quot;, line 17, in &lt;module&gt; from fbprophet.make_holidays import get_holiday_names, make_holidays_df File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\fbprophet\make_holidays.py&quot;, line 14, in &lt;module&gt; import fbprophet.hdays as hdays_part2 File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\fbprophet\hdays.py&quot;, line 17, in &lt;module&gt; from holidays import WEEKEND, HolidayBase, Turkey File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\holidays\__init__.py&quot;, line 22, in &lt;module&gt; from holidays.holiday_base import * File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\holidays\holiday_base.py&quot;, line 45, in &lt;module&gt; CategoryArg = Union[str, Iterable[str]] TypeError: 'ABCMeta' object is not subscriptable During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\IPython\core\interactiveshell.py&quot;, line 2105, in showtraceback stb = self.InteractiveTB.structured_traceback( File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\IPython\core\ultratb.py&quot;, line 1396, in structured_traceback return FormattedTB.structured_traceback( File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\IPython\core\ultratb.py&quot;, line 1287, in structured_traceback return VerboseTB.structured_traceback( File 
&quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\IPython\core\ultratb.py&quot;, line 1140, in structured_traceback formatted_exception = self.format_exception_as_a_whole(etype, evalue, etb, number_of_lines_of_context, File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\IPython\core\ultratb.py&quot;, line 1030, in format_exception_as_a_whole self.get_records(etb, number_of_lines_of_context, tb_offset) if etb else [] File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\IPython\core\ultratb.py&quot;, line 1081, in get_records style = get_style_by_name(&quot;default&quot;) File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\spyder_kernels\utils\style.py&quot;, line 134, in create_style_class class StyleClass(Style): File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\spyder_kernels\utils\style.py&quot;, line 136, in StyleClass styles = create_pygments_dict(color_scheme_dict) File &quot;D:\Spyder6.0.3\envs\my-env-test\lib\site-packages\spyder_kernels\utils\style.py&quot;, line 46, in create_pygments_dict fon_c, fon_fw, fon_fs = color_scheme[&quot;normal&quot;] TypeError: string indices must be integers </code></pre> <p>And because any python command/script I run will generates the same error. I don't think the error are coming from my python script.</p> <p>Can anyone help me?</p> <p>Note: I am still learning how to use spyder and this is my first setup on Spyder.</p> <p>fbprophet version 0.7.1(it's the latest one available) <a href="https://i.sstatic.net/AJFrPkc8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJFrPkc8.png" alt="enter image description here" /></a></p> <p>python version from conda cmd: <a href="https://i.sstatic.net/A2lExZr8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2lExZr8.png" alt="enter image description here" /></a></p>
<python><spyder><miniconda>
2025-02-02 11:28:21
1
1,004
noobsee
79,406,461
3,439,054
Navigate Firefox with Selenium
<p>I want to open Firefox with a particular session, so that my login data is saved.</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver profile_path = &quot;/home/XXXX/.mozilla/firefox/XXXX.selenium1&quot; # particular profile - I hid some part of the path for privacy reasons options = webdriver.FirefoxOptions() options.add_argument(f&quot;--profile={profile_path}&quot;) driver = webdriver.Firefox(options=options) driver.get(url) </code></pre> <p>However, the <code>get()</code> call seems not to work. Firefox opens, but I need to navigate to the page manually.</p> <p>Instead, if I do not use a profile, i.e.,</p> <pre><code>from selenium import webdriver driver = webdriver.Firefox() driver.get(url) </code></pre> <p>Firefox opens the right page... but of course, I am not logged in.</p> <p>I would like to get the first script to work.</p>
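<p>Two variants that are commonly suggested for Selenium 4, as sketches rather than verified fixes for this exact setup: pass <code>-profile</code> and the path as two separate arguments, or hand Selenium a <code>FirefoxProfile</code> object (which copies the profile to a temporary location). The <code>url</code> value is a placeholder.</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver

profile_path = '/home/XXXX/.mozilla/firefox/XXXX.selenium1'
url = 'https://example.com'   # placeholder

# Variant 1: '-profile' and the path as two separate arguments
options = webdriver.FirefoxOptions()
options.add_argument('-profile')
options.add_argument(profile_path)
driver = webdriver.Firefox(options=options)
driver.get(url)

# Variant 2: let Selenium copy and manage the profile
# options = webdriver.FirefoxOptions()
# options.profile = webdriver.FirefoxProfile(profile_path)
# driver = webdriver.Firefox(options=options)
</code></pre>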
<python><selenium-webdriver><firefox>
2025-02-02 11:02:15
1
324
Sam
79,406,441
906,387
Iterable[str] vs [str]
<p>I'm trying to understand the difference between:</p> <pre><code>from typing import Iterable def func(self, stuff: Iterable[str]) -&gt; str: </code></pre> <p>and:</p> <pre><code>def func(self, stuff: [str]) -&gt; str: </code></pre> <p>Are both statements valid? Do they give the same information to Python (3.12+) interpreter?</p> <p>This <a href="https://stackoverflow.com/questions/32557920/what-are-type-hints-in-python-3-5">question</a> is related but doesn't answer my specific problem about the <em>Iterable</em> keyword.</p>
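<p>For context, and easy to verify in a REPL: <code>[str]</code> is just an ordinary list containing the <code>str</code> class, not a type expression, so the interpreter accepts it (annotations are not validated at runtime) while type checkers such as mypy typically reject or ignore it; the accepted spellings are <code>Iterable[str]</code>, <code>Sequence[str]</code> or <code>list[str]</code>. A small illustration:</p> <pre class="lang-py prettyprint-override"><code>from typing import Iterable

def takes_any_iterable(stuff: Iterable[str]) -&gt; str:
    # accepts lists, tuples, sets, generators, ... of strings
    return ', '.join(stuff)

def takes_a_list(stuff: list[str]) -&gt; str:
    # narrower contract: the caller is expected to pass an actual list
    return ', '.join(stuff)

print(takes_any_iterable(('a', 'b')))
print(takes_a_list(['a', 'b']))
print(type([str]))   # &lt;class 'list'&gt; -- just a one-element list object
</code></pre>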
<python><python-typing>
2025-02-02 10:49:54
1
1,379
suizokukan
79,406,394
347,484
How to save checkpoint in tensorflow format in ver 2.18?
<p>There used to be a &quot;tensorflow&quot; (SavedModel) format for saving a checkpoint. I have checked, and it still appears everywhere in the official docs and samples: <a href="https://www.tensorflow.org/tutorials/keras/save_and_load#savedmodel_format" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/save_and_load#savedmodel_format</a></p> <p>However, in the TF version I am running (2.18), if I write <code>model.save('something')</code> I get this:</p> <pre><code>Invalid filepath extension for saving. Please add either a `.keras` extension for the native Keras format (recommended) or a `.h5` extension. </code></pre> <p>How do I save in &quot;tensorflow&quot; format in the latest versions?</p>
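<p>A sketch of the options that exist after the Keras 3 switch bundled with recent TensorFlow releases (hedged, since the exact surface depends on the installed Keras version): the SavedModel directory format moved from <code>model.save()</code> to a separate export call.</p> <pre class="lang-py prettyprint-override"><code># Option 1: Keras 3 export API, writes a SavedModel directory for serving
model.export('something_savedmodel')

# Option 2: go through TensorFlow directly
import tensorflow as tf
tf.saved_model.save(model, 'something_savedmodel')

# Plain weight checkpoints still work as well (note the required suffix):
model.save_weights('something.weights.h5')
</code></pre>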
<python><keras><tensorflow2.0>
2025-02-02 10:18:44
1
10,560
Boppity Bop
79,406,346
1,951,507
PyCharm type hinting for generic type concludes property instead of property's return type
<p>I have a problem with type hint recognition in PyCharm. The following simplified example is constructed to show my issue:</p> <pre class="lang-py prettyprint-override"><code>class A: @property def a(self) -&gt; A: a_ = ... return a_ def f[T: A](b: T): c = b.a # `b.a` recognized as type `property` instead of `A` </code></pre> <p>The variable <code>c</code> is not recognized as having type <code>A</code> (which is returned by the property). Instead, it is recognized as having type <code>property</code>.</p> <p>Any suggestion as to whether I am doing something wrong, or whether I need a workaround, possibly because of a known(?) PyCharm limitation in PEP 695 handling? I use Python 3.12.</p> <p>Note this is not exactly the same case as with TypeVar. If I use the TypeVar construction, then PyCharm infers <code>Any</code> for <code>a.b</code> and thus stops complaining about the type being a property instead of what it really &quot;returns&quot;.</p>
<python><pycharm><python-typing><pep-695>
2025-02-02 09:41:16
0
1,052
pfp.meijers
79,406,202
95,265
how to access copied content in python UIAutomator2
<p>I am trying to automate a flow in <code>UIAutomator</code>, and part of it is pressing the copy button in an app. After pressing this copy button, I need to access the copied content in code via Python <code>UIAutomator</code>. How do I do so?</p> <pre class="lang-py prettyprint-override"><code>COPY_KEY = { CLASS_NAME: &quot;android.widget.Button&quot;, DESCRIPTION: &quot;Copy key&quot; } self.DEVICE.get_element(self.COPY_KEY).click() clean_seed = 'copied code' </code></pre>
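<p>A sketch under an explicit assumption: recent versions of the <code>uiautomator2</code> Python package expose a <code>clipboard</code> property on the device object (and reading it can still fail on Android 10+ when the automator app is not in the foreground). If that property is not available in your version, an ADB keyboard IME or <code>adb shell</code> clipboard tricks are the usual fallback. The selector below mirrors <code>COPY_KEY</code> from the question; the connection call is an assumption about how <code>self.DEVICE</code> wraps the device.</p> <pre class="lang-py prettyprint-override"><code>import uiautomator2 as u2

d = u2.connect()   # assumed to be the same device self.DEVICE drives
d(className='android.widget.Button', description='Copy key').click()

clean_seed = d.clipboard   # assumption: clipboard getter exists in the installed version
print(clean_seed)
</code></pre>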
<python><python-3.x><ui-automation><uiautomator2>
2025-02-02 07:24:02
1
14,438
aherlambang
79,406,027
28,063,240
Save all objects in QuerySet and related objects to a fixture
<p>I've written a function to save a QuerySet to a fixture JSON file:</p> <pre class="lang-py prettyprint-override"><code>def save_as_fixture(query_set: QuerySet, fixture_name: str, app_label: str='mainapp'): app_config = apps.get_app_config(app_label) fixture_dir = os.path.join(app_config.path, &quot;fixtures&quot;) os.makedirs(fixture_dir, exist_ok=True) fixture_path = os.path.join(fixture_dir, fixture_name) data = serializers.serialize(&quot;json&quot;, query_set, indent=2, use_natural_foreign_keys=True, use_natural_primary_keys=True) with open(fixture_path, 'w') as file: file.write(data) </code></pre> <p>But it doesn't save related objects.</p> <p>I would like it to also save the objects that reference one of the objects of the QuerySet via a <code>ForeignKey</code>, <code>OneToOneField</code>, etc. How can I do that?</p>
<python><django>
2025-02-02 03:29:31
0
404
Nils
79,406,005
9,135,359
How to pass `self` into RunnableLambda?
<p>I am using LangChain and have a class with many methods. I intend to use parallel chains to process lots of data.</p> <p>Here is one of my steps, which happens to use other methods in the class in which it is contained:</p> <pre><code>def management_plan(self, input) -&gt; str: # uses management_plan_model print(&quot;PROCESSING write_notes coroutine: starting management_plan()&quot;) for notes_template_item in self.notes_template_item_list: if notes_template_item in self.similar_prompt_items['MANAGEMENT PLAN']: prompt = self.personality[notes_template_item].replace(&quot;{input_text}&quot;, input.messages[0].content) context = self.modify_context(self.personality['CONTEXT'], RAG_data=self.rag_bot.output+self.rag_bot.issues_list+self.rag_bot.differentials) prompt_template = ChatPromptTemplate.from_messages([(&quot;system&quot;,&quot;{context}&quot;), (&quot;human&quot;,&quot;{prompt}&quot;)]) return prompt_template.format_prompt(context=context, prompt=prompt) </code></pre> <p>This is one of my branches:</p> <pre><code>demographics_review_chain = (RunnableLambda(lambda x: demographics_review(self, input=x)) | self.general_model | StrOutputParser()) </code></pre> <p>This is a part of the chain:</p> <pre><code>chain = ( input_template | RunnableParallel(branches={&quot;demographics_review&quot;:demographics_review_chain, &quot;management_plan&quot;:management_plan_chain... | RunnableLambda(lambda x: combine_all(self, x[&quot;branches&quot;][&quot;demographics_review&quot;], x[&quot;branches&quot;][&quot;management_plan&quot;]... ) </code></pre> <p>But this does not work. For one thing, even the model (<code>self.general_model</code>) cannot be reached, since it belongs to the class. So that branch provides no output.</p> <p>How do I pass self into the <code>RunnableLambda</code> to use methods and variables that are given in the class?</p>
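<p>A sketch of the usual way around this: because a bound method already carries its instance, passing <code>self.method</code> (or a <code>functools.partial</code> over it) into <code>RunnableLambda</code> from inside one of the class's own methods keeps access to <code>self.general_model</code>, <code>self.personality</code> and the rest; there is no need to pass <code>self</code> through the chain. This fragment is meant to live inside a method of the class, so <code>self</code> is in scope.</p> <pre class="lang-py prettyprint-override"><code>from langchain_core.runnables import RunnableLambda
from langchain_core.output_parsers import StrOutputParser

# Inside a method of the class:
demographics_review_chain = (
    RunnableLambda(self.demographics_review)      # bound method, no lambda needed
    | self.general_model
    | StrOutputParser()
)
management_plan_chain = (
    RunnableLambda(self.management_plan)
    | self.general_model
    | StrOutputParser()
)
</code></pre>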
<python><langchain>
2025-02-02 02:58:00
1
844
Code Monkey
79,405,989
12,609,881
All possible decision paths / outcomes given multiple choices at each decision
<p>Given I am iterating through multiple lists of elements that are zipped together, I need to create all possible outcomes given I can only choose an element from a single list at each iteration.</p> <p>Example input 1:</p> <pre><code>a = [3,19,13] b = [20,18,7] </code></pre> <p>Example output 1:</p> <pre><code>[[3, 19, 13], [3, 19, 7], [3, 18, 13], [3, 18, 7], [20, 19, 13], [20, 19, 7], [20, 18, 13], [20, 18, 7]] </code></pre> <p>Example input 2:</p> <pre><code>a = ['A','B'] b = ['C','D'] </code></pre> <p>Example output 2:</p> <pre><code>[['A', 'B'], ['A', 'D'], ['C', 'B'], ['C', 'D']] </code></pre> <p>I'm struggling to find the right mathematical terminology to define exactly what I am asking for. I do not think permutations, combinations, or cartesian product are correct or precise enough.</p>
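<p>For what it is worth, this looks like a plain Cartesian product over the per-position choice pairs, which <code>itertools.product</code> covers directly; a small check against the two examples from the question:</p> <pre class="lang-py prettyprint-override"><code>from itertools import product

def all_choice_paths(a, b):
    # zip(a, b) gives the options available at each position;
    # product(*...) picks one option per position in every possible way.
    return [list(path) for path in product(*zip(a, b))]

print(all_choice_paths([3, 19, 13], [20, 18, 7]))
# [[3, 19, 13], [3, 19, 7], [3, 18, 13], [3, 18, 7],
#  [20, 19, 13], [20, 19, 7], [20, 18, 13], [20, 18, 7]]

print(all_choice_paths(['A', 'B'], ['C', 'D']))
# [['A', 'B'], ['A', 'D'], ['C', 'B'], ['C', 'D']]
</code></pre>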
<python><algorithm><math>
2025-02-02 02:38:24
1
911
Matthew Thomas
79,405,874
5,093,220
Android Chrome doesn't forget user even after Flask logout_user when using remember cookie
<p>I can't get Chrome on Android to let me logout a user.</p> <p>I'm using <code>Flask</code>, with <code>flask_login</code>'s user sessions. I use <code>login_user(remember=True)</code> in order to remember the logins even if the browser is closed and opened again afterwards. Then I use <code>logout_user</code> for logging out.</p> <p>I've tried each and every option I've come across. Among them:</p> <pre><code># session cookie configuration app.config[&quot;REMEMBER_COOKIE_DOMAIN&quot;] = &quot;mysite.com&quot; # also .mysite.com app.config[&quot;REMEMBER_COOKIE_SECURE&quot;] = True app.config[&quot;SESSION_COOKIE_SECURE&quot;] = True app.config[&quot;REMEMBER_COOKIE_SAMESITE&quot;] = &quot;Lax&quot; # also &quot;None&quot; app.config[&quot;REMEMBER_COOKIE_PATH&quot;] = &quot;/&quot; # which is the default but anyways app.config[&quot;SESSION_COOKIE_PATH&quot;] = &quot;/&quot; # trying to clear sessions on logout session.clear() session[&quot;_remember&quot;] = &quot;clear&quot; logout_user() # ... trying to manage cookies manually by making a request (deleting nor expiring) response.set_cookie(&quot;remember_token&quot;, &quot;&quot;, expires=datetime.utcnow() - timedelta(days=1), domain=&quot;.mysite.com&quot;) # URL taken straight from the browser inspector response.delete_cookie(&quot;remember_token&quot;, domain=&quot;mysite.com&quot;) # ... </code></pre> <p>At the begining, Android's Chrome wouldn't even remember the user, because it wouldn't accept the cookie. Then I got chrome to remember the cookie (not even sure how, at this point, but maybe it was those <code>COOKIE_SECURE</code> things).</p> <p>Currently, every variant that I tried logs the user out (and it stays logged out even on page refresh). But then, if I quit the app and open it again (either by sliding it or even force quitting it from android's configuration), it first loads the page as if the user wasn't logged in, AND THEN if I reload the page, the user magically pops back into existence as if it had logged in again.</p> <p>Checking the cookies on the inspector, the cookies always seem deleted when I log the user out (they stop appearing on the cookie list), but then when I quit and restart the app, they respawn again.</p> <p>No need to say, browsers on my laptop make it work perfectly fine with almost any of the variants I've tried.</p> <p>I'm on the verge of becoming a crazy man. Does anyone know what the hell is going on and how to keep users logged out after they've logged out? Thanks a lot!</p>
<python><flask><flask-login><remember-me><google-chrome-android>
2025-02-01 23:57:21
0
631
Rusca8
79,405,832
4,098,506
mariadb.OperationalError: Access denied for user... but credentials are correct
<p>I use this simple connection script:</p> <pre><code>import configparser import mariadb config = configparser.ConfigParser() config.read('dbconfig.ini') config_default = config['DEFAULT'] print(f' mariadb -h {config_default[&quot;server&quot;]} -u {config_default[&quot;username&quot;]} -p{config_default[&quot;password&quot;]} -P {config_default[&quot;port&quot;]} {config_default[&quot;database&quot;]}') conn = mariadb.connect( user=config_default['username'], password=config_default['password'], host=config_default['server'], port=int(config_default['port']), database=config_default['database'] ) </code></pre> <p>When I run this, I get:</p> <pre><code>mariadb.OperationalError: Access denied for user 'myuser'@'myhost.local' (using password: YES) </code></pre> <p>But when I just run the command printed by the print statement, it connects just fine. So I guess the credentials and everything else are correct. For testing, I also chose a password without any special characters.</p> <p>My setup:</p> <ul> <li>Database: 11.4.4-MariaDB-log</li> <li>Python: Python 3.13.1</li> <li>Connector: mariadb==1.1.11</li> <li>Database client: Ver 15.1 Distrib 10.11.6-MariaDB</li> </ul>
<python><mariadb>
2025-02-01 23:15:07
1
662
Mr. Clear
79,405,712
6,824,949
How to have a (FastAPI) GKE deployment handle multiple requests?
<p>I have a FastAPI deployment in GKE that has an end-point <code>/execute</code> that reads and parses a file, something like below:</p> <pre><code>from fastapi import FastAPI app = FastAPI() @app.post(&quot;/execute&quot;) def execute( filepath: str ): res = 0 with open(filepath, &quot;r&quot;) as fo: for line in fo.readlines(): if re.search(&quot;Hello&quot;, line): res += 1 return {&quot;message&quot;: f&quot;Number of Hello lines = {res}.&quot;} </code></pre> <p>The GKE deployment has 10 pods with a load balancer and service exposing the deployment.</p> <p>Now, I would like to send 100 different file paths to this deployment. In my mind, I have the following options, and related questions:</p> <ol> <li>Send all 100 requests at the same time and not wait for a response, either using threading, <code>asyncio</code> and <code>aiohttp</code>, or something hacky like this:</li> </ol> <pre><code>for filepath in filepaths: try: requests.post(&quot;http://127.0.0.1:8000/execute?filepath=filepath&quot;,timeout=0.0000000001) except requests.exceptions.ReadTimeout: pass </code></pre> <p>Ref: <a href="https://stackoverflow.com/a/45601591/6824949">https://stackoverflow.com/a/45601591/6824949</a></p> <p>In this case, what does the GKE load balancer do when it receives a 100 requests - does it deliver 10 requests to each pod at the same time (in which case I would need to make sure a pod has enough resources to handle all the incoming requests at the same time), <strong>OR</strong> does it have a queuing system delivering a request to a pod only when it is available?</p> <ol start="2"> <li>Send 10 requests at a time, so that no pod is working on more than 1 request at any given time. That way, I can have predictable resource usage in a pod and not crash it. But how do I accomplish this in Python? And do I need to change anything in my FastAPI application or GKE deployment configuration?</li> </ol> <p>Any help would be greatly appreciated!</p>
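<p>On the first bullet, the usual behaviour (hedged, since it depends on the Service/Ingress in front): Kubernetes Services spread connections roughly evenly and do not queue per pod, so without client-side limiting a pod can receive several requests at once. For option 2, a bounded-concurrency client sketch with <code>asyncio</code> and <code>aiohttp</code>; the URL and the limit of 10 are assumptions mirroring the question.</p> <pre class="lang-py prettyprint-override"><code>import asyncio
import aiohttp

async def run_all(filepaths, base_url='http://127.0.0.1:8000/execute', limit=10):
    semaphore = asyncio.Semaphore(limit)   # at most `limit` requests in flight

    async def one(session, filepath):
        async with semaphore:
            async with session.post(base_url, params={'filepath': filepath}) as resp:
                return await resp.json()

    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(one(session, fp) for fp in filepaths))

# results = asyncio.run(run_all(filepaths))
</code></pre>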
<python><kubernetes><fastapi><google-kubernetes-engine>
2025-02-01 21:30:38
0
348
aaron02
79,405,672
4,710,409
'IterQueue' object has no attribute 'not_full'
<p>I have a class called &quot;IterQueue&quot;, which is an iterable queue:</p> <p><strong>IterQueue.py</strong></p> <pre><code>from multiprocessing import Process, Queue, Pool import queue class IterQueue(queue.Queue): def __init__(self): self.current = 0 self.end = 10000 def __iter__(self): self.current = 0 self.end = 10000 while True: yield self.get() def __next__(self): if self.current &gt;= self.end: raise StopIteration current = self.current self.current += 1 return current </code></pre> <p>I put items into it from a different process:</p> <p><strong>modulexxx.py</strong></p> <pre><code>... self.q1= IterQueue() def function(self): x = 1 while True: x = x + 1 self.q1.put(x) </code></pre> <p>Everything works fine, but Python gives me an error:</p> <pre><code>'IterQueue' object has no attribute 'not_full </code></pre> <p>I searched for this attribute so I could implement it in my custom queue, but found nothing.</p> <p>What's the solution to this?</p>
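<p>The missing attribute is consistent with <code>queue.Queue.__init__</code> never running: <code>not_full</code>, <code>not_empty</code> and the internal deque are all created there, and the subclass overrides <code>__init__</code> without calling it. A minimal sketch of the fix, keeping the iterator behaviour from the question:</p> <pre class="lang-py prettyprint-override"><code>import queue

class IterQueue(queue.Queue):
    def __init__(self):
        super().__init__()   # creates not_full, not_empty, the internal deque, etc.
        self.current = 0
        self.end = 10000

    def __iter__(self):
        while True:
            yield self.get()
</code></pre> <p>Note that for passing items between processes a <code>multiprocessing.Queue</code> (already imported in the question) is normally needed rather than <code>queue.Queue</code>, which only works between threads; that part is outside this sketch.</p>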
<python><multiprocessing><queue>
2025-02-01 20:55:16
1
575
Mohammed Baashar
79,405,614
2,221,360
Slicing netCDF4 dataset based on specific time interval using xarray
<p>I have a netCDF4 dataset for the following datatime which is stored in <code>_date_times</code> variable:-</p> <pre><code>&lt;xarray.DataArray 'Time' (Time: 21)&gt; Size: 168B array(['2025-01-30T00:00:00.000000000', '2025-01-30T06:00:00.000000000', '2025-01-30T12:00:00.000000000', '2025-01-30T18:00:00.000000000', '2025-01-31T00:00:00.000000000', '2025-01-31T06:00:00.000000000', '2025-01-31T12:00:00.000000000', '2025-01-31T18:00:00.000000000', '2025-02-01T00:00:00.000000000', '2025-02-01T06:00:00.000000000', '2025-02-01T12:00:00.000000000', '2025-02-01T18:00:00.000000000', '2025-02-02T00:00:00.000000000', '2025-02-02T06:00:00.000000000', '2025-02-02T12:00:00.000000000', '2025-02-02T18:00:00.000000000', '2025-02-03T00:00:00.000000000', '2025-02-03T06:00:00.000000000', '2025-02-03T12:00:00.000000000', '2025-02-03T18:00:00.000000000', '2025-02-04T00:00:00.000000000'], dtype='datetime64[ns]') </code></pre> <p>The above data is of six hour interval. However, I need to convert the dataset to twelve hourly dataset. The filtered dataset should look like this:-</p> <pre><code>&lt;xarray.DataArray 'Time' (Time: 21)&gt; Size: 168B array(['2025-01-30T00:00:00.000000000', '2025-01-30T12:00:00.000000000', '2025-01-31T00:00:00.000000000', '2025-01-31T12:00:00.000000000', '2025-02-01T00:00:00.000000000', '2025-02-01T12:00:00.000000000', '2025-02-02T00:00:00.000000000', '2025-02-02T12:00:00.000000000', '2025-02-03T00:00:00.000000000', '2025-02-03T12:00:00.000000000', '2025-02-04T00:00:00.000000000'], dtype='datetime64[ns]') </code></pre> <p>What I tried was:-</p> <pre><code>xr_ds.sel(Time=slice(_date_times[0], _date_times[-1]), freq='12 h') </code></pre> <p>Off course, it won't work as there is no option to specify <code>freq</code>.</p> <p>How do I slice dataset containing only on specific time interval?</p>
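<p>Two short possibilities, sketched with the names from the question: step through every second timestamp (valid because the data is strictly 6-hourly), or select on the hour component.</p> <pre class="lang-py prettyprint-override"><code># Every second timestamp along Time
twelve_hourly = xr_ds.isel(Time=slice(None, None, 2))

# Or, independent of spacing: keep only the 00 and 12 UTC steps
# (if your xarray version does not accept a boolean mask in sel,
#  use .where(..., drop=True) instead)
twelve_hourly = xr_ds.sel(Time=xr_ds.Time.dt.hour.isin([0, 12]))
</code></pre>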
<python><python-xarray><netcdf4>
2025-02-01 20:03:10
1
3,910
sundar_ima
79,405,222
9,049,108
pwntools [Errno 24] Too many open files [-] Starting local process
<p>I'm having an issue with some code I'm writing. I'm getting this pwntools error about too many files being open. My code looks like this:</p> <pre><code> for a in range(0,2**3360): try: with open(&quot;output.txt&quot;, &quot;a&quot;) as f: p =process(os.getcwd()+ &quot;/flag&quot;,stdout=f) f.write(f'L:{a}\na') sleep(0.05) p.wait_for_close() while(p.poll()!=0): continue f.close() </code></pre> <p>Unfortunately, this code causes a &quot;too many open files&quot; error. How can I make sure the IO files and such are closed so it's safe to continue starting processes? The sleep, the p.wait_for_close(), and the p.poll() in the while loop were attempts at this.</p>
<python><file><process><pipe><pwntools>
2025-02-01 15:23:45
0
576
Michael Hearn
79,405,200
1,386,750
Using named columns and relative row numbers with Pandas 3
<p>I switched from NumPy arrays to Pandas DataFrames (dfs) many years ago because the latter has column names, which</p> <ol> <li>makes programming easier;</li> <li>is robust in order changes when reading data from a <code>.json</code> or <code>.csv</code> file.</li> </ol> <p>From time to time, I need the last row (<code>[-1]</code>) of some column <code>col</code> of some <code>df1</code>, and combine it with the last row of the same column <code>col</code> of another <code>df2</code>. I know the <em>name</em> of the column, not their position/order (I could know, but it might change, and I want to have a code that is robust against changers in the order of columns).</p> <p>So what I have been doing for years in a number of Python scripts is something that looks like</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd # In reality, these are read from json files - the order # of the columns may change, their names may not: df1 = pd.DataFrame(np.random.random((2,3)), columns=['col2','col3','col1']) df2 = pd.DataFrame(np.random.random((4,3)), columns=['col1','col3','col2']) df1.col2.iloc[-1] = df2.col2.iloc[-1] </code></pre> <p>but since some time my mailbox gets flooded with cron jobs going wrong, telling me that</p> <blockquote> <p>You are setting values through chained assignment. Currently this works in certain cases, but when using Copy-on-Write (which will become the default behaviour in pandas 3.0) this will never work to update the original DataFrame or Series, because the intermediate object on which we are setting values will behave as a copy. A typical example is when you are setting values in a column of a DataFrame, like:</p> <pre><code>df[&quot;col&quot;][row_indexer] = value </code></pre> <p>Use <code>df.loc[row_indexer, &quot;col&quot;] = values</code> instead, to perform the assignment in a single step and ensure this keeps updating the original <code>df</code>.</p> <p>See the caveats in the documentation: <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy</a></p> <pre><code>df1.col2.iloc[-1] = df2.col2.iloc[-1] </code></pre> </blockquote> <p>Of course, this error message is incorrect, and replacing the last line in my example with either of</p> <pre class="lang-py prettyprint-override"><code>df1.loc[-1, 'col2'] = df2.loc[-1, 'col2'] # KeyError: -1 df1.iloc[-1, 'col2'] = df2.iloc[-1, 'col2'] # ValueError (can't handle 'col2') </code></pre> <p>does not work either, since <code>.iloc[]</code> cannot handle column names and <code>.loc[]</code> cannot handle relative numbers.</p> <p>How can I handle the last (or any other relative number) row and a column with given name of a Pandas DataFrame?</p>
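<p>A label-and-position mix that avoids the chained-assignment warning while staying robust to column order (assuming index labels are unique, since <code>.loc</code> with a duplicated label would touch several rows):</p> <pre class="lang-py prettyprint-override"><code># Resolve the relative row through the index, keep the column by name
df1.loc[df1.index[-1], 'col2'] = df2.loc[df2.index[-1], 'col2']

# Or stay fully positional on rows and resolve the column position explicitly
df1.iloc[-1, df1.columns.get_loc('col2')] = df2.iloc[-1, df2.columns.get_loc('col2')]
</code></pre>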
<python><pandas><dataframe>
2025-02-01 15:06:33
1
468
AstroFloyd
79,404,917
1,892,584
Raise a window using tkinter on kde?
<p>With tkinter on Python you can use the following <a href="https://stackoverflow.com/questions/1892339/how-to-make-a-tkinter-window-jump-to-the-front">trick to raise a window</a></p> <pre><code>root.lift() root.attributes('-topmost',True) root.after_idle(root.attributes,'-topmost',False) </code></pre> <p>However, on kde plasma it appears not to work. Instead when you set <code>'-topmost'</code> to false the window is lowered again.</p> <p>Here is some code that demonstrates the behaviour.</p> <pre><code>import tkinter as tk window = tk.Tk() ok = tk.Button(window, takefocus=tk.YES, text=&quot;raise after delay&quot;) ok.pack() state = {&quot;mouse_on_button&quot;: True} def handle_enter(_): state[&quot;mouse_on_button&quot;] = True def handle_leave(_): state[&quot;mouse_on_button&quot;] = False def handle_click(_): if state[&quot;mouse_on_button&quot;]: run() ok.bind(&quot;&lt;Enter&gt;&quot;, handle_enter) ok.bind(&quot;&lt;Leave&gt;&quot;, handle_leave) ok.bind(&quot;&lt;ButtonRelease-1&gt;&quot;, handle_click) def run(): print(&quot;Lower window by hand!&quot;) window.after(1000, raise_window) print(&quot;Done&quot;) def raise_window(): #window.lift() window.attributes('-topmost', True) window.after(500, remove_topmost) def remove_topmost(): window.attributes('-topmost', False) window.mainloop() </code></pre> <p>How do I raise a window with tkinter on KDE? One work around is to run the <a href="https://www.freedesktop.org/wiki/Software/wmctrl/" rel="nofollow noreferrer">wmctrl</a> command-line program from within my script. This seems to be able to raise windows, but I would prefer the solution to be cross platform.</p> <h1>About my system</h1> <p>plasma version: 5.27.11, KDE framework version: 5.115.0, qt: 5.15.13. I'm using X11.</p>
<python><tkinter><window><kde-plasma>
2025-02-01 11:46:33
1
1,947
Att Righ
79,404,581
1,371,949
Python SqlManagementClient Azure MSAL Login: Add Firewall Rules Programmatically Not Working
<p>In Python notebook, I want to connect to my Azure SQL DB using MSAL. In the second step, after logging in successfully, I need to configure the firewall by adding the public IP to the Firewall settings:</p> <pre><code>from msal import PublicClientApplication from azure.identity import DefaultAzureCredential from azure.mgmt.sql import SqlManagementClient .... def add_firewall_rule(public_ip): def add_firewall_rule(public_ip): credential = DefaultAzureCredential() sql_client = SqlManagementClient(credential, subscription_id) firewall_rule = sql_client.firewall_rules.create_or_update( resource_group_name=resource_group_name, server_name=sql_server_name, firewall_rule_name=firewall_rule_name, parameters={ &quot;start_ip_address&quot;: public_ip, &quot;end_ip_address&quot;: public_ip, } ) ... app = PublicClientApplication(client_id, authority=f&quot;https://login.microsoftonline.com/{tenant_id}&quot;) result = app.acquire_token_interactive(scopes=[&quot;https://database.windows.net/.default&quot;]) # Add the public IP address to the SQL server firewall rule firewall_rule = add_firewall_rule(public_ip) conn = pyodbc.connect( f&quot;DRIVER={{ODBC Driver 18 for SQL Server}};&quot; f&quot;SERVER={server};&quot; f&quot;DATABASE={database};&quot; f&quot;Encrypt=yes;&quot; # Critical for Azure connections f&quot;TrustServerCertificate=no;&quot; f&quot;AccessToken={token};&quot; ) print(&quot;Connected successfully!&quot;) ... </code></pre> <p>But I got</p> <blockquote> <p>Error: 'SqlManagementClient' object has no attribute 'firewall_rules'</p> </blockquote> <p>Also tried other way like:</p> <pre><code>firewall_rule = sql_client.firewall_rules.create_or_update( resource_group_name, sql_server_name, firewall_rule_name, { 'start_ip_address': public_ip, 'end_ip_address': public_ip } ) </code></pre> <p>But I'm still getting the same error with no luck... As a last check, all packages are the latest. Guess it's a coding issue rather than a credential problem, as the credential works when used in Azure Data Studio.</p> <p>I have read through the references:</p> <ol> <li><a href="https://learn.microsoft.com/en-us/python/api/azure-mgmt-network/azure.mgmt.network.operations.azurefirewallsoperations?view=azure-python" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/azure-mgmt-network/azure.mgmt.network.operations.azurefirewallsoperations?view=azure-python</a></li> <li><a href="https://learn.microsoft.com/en-us/python/api/azure-mgmt-sql/azure.mgmt.sql.models?view=azure-python" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/azure-mgmt-sql/azure.mgmt.sql.models?view=azure-python</a></li> </ol>
<python><sql-server><azure><firewall><rules>
2025-02-01 07:22:54
1
2,117
felixwcf
79,404,534
10,255,994
How to set up Crawl4AI with managed browser
<p>Hello, I am trying to set up a Crawl4AI script with a managed browser from this guide:</p> <p><code>https://docs.crawl4ai.com/advanced/identity-based-crawling/</code></p> <p>I am running with WSL2 on Windows 11. When I run my script, it can't seem to run with a managed browser, and I see the error below:</p> <p>That's pretty much it; I can provide more details if needed. Stack Overflow won't let me post the question until I &quot;add more details&quot;.</p> <p><a href="https://i.sstatic.net/ZnyREBmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZnyREBmS.png" alt="enter image description here" /></a></p>
<python><web-scraping><playwright><large-language-model>
2025-02-01 06:32:18
0
483
Bud Linville
79,404,513
647,987
Fast loading of a Solr streaming response (JSON) into Polars
<p>I want to load large responses of the Solr streaming API into polars (python), efficiently. The Solr streaming API returns JSON of the following form:</p> <pre><code>{ &quot;result-set&quot;:{ &quot;docs&quot;:[{ &quot;col1&quot;:&quot;value&quot;, &quot;col2&quot;:&quot;value&quot;} ,{ &quot;col1&quot;:&quot;value&quot;, &quot;col2&quot;:&quot;value&quot;} ... ,{ &quot;EOF&quot;:true, &quot;RESPONSE_TIME&quot;:12345}]}} </code></pre> <p>That is: I need every element of <code>result-set.docs</code>-- except for the last one, which marks the end of the response.</p> <p>For now, my fastest solution is to convert this first to ndjson using the <a href="https://github.com/bcicen/jstream" rel="nofollow noreferrer">jstream</a> and GNU head and then use <code>pl.read_ndjson</code>:</p> <pre><code>cat result.json | jstream -d 3 | head -n -1 &gt; result.ndjson </code></pre> <p>This clocks in at around 8s for a 770MiB file, which is perfectly fine for me. If I manually change the JSON to just have a top-level list, I can load this even faster using <code>pl.read_json(result_manipulated).head(-1)</code>, clocking in at around 3s -- at least, if specify the schema manually so the last line does not produce any schema errors.</p> <p>So, I wonder whether there is any fast way to import this file without leaving python?</p>
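<p>A pure-Python route that stays close to what <code>read_json</code> does, as a sketch (whether it beats the jstream pipeline on a 770 MiB file would need measuring): parse once, drop the trailing EOF document, and hand the list of dicts to Polars, ideally with an explicit schema. The schema in the comment is a placeholder.</p> <pre class="lang-py prettyprint-override"><code>import json
import polars as pl

def load_solr_stream(path, schema=None):
    with open(path, 'rb') as fh:
        docs = json.loads(fh.read())['result-set']['docs']
    return pl.DataFrame(docs[:-1], schema=schema)   # drop the EOF/RESPONSE_TIME marker

# df = load_solr_stream('result.json', schema={'col1': pl.Utf8, 'col2': pl.Utf8})
</code></pre>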
<python><solr><python-polars>
2025-02-01 06:13:32
1
3,687
Lars Noschinski
79,404,210
2,084,503
How to cancel trigonometric expressions in SymPy
<p>I have a bunch of expressions <code>deque([-6*cos(th)**3 - 9*cos(th), (11*cos(th)**2 + 4)*sin(th), -6*sin(th)**2*cos(th), sin(th)**3])</code>.</p> <p>Then I run them through some code that iteratively takes a derivative, adds, and then divides by <code>sin(th)</code>:</p> <pre class="lang-py prettyprint-override"><code>import sympy as sp th = sp.symbols('th') order = 4 for nu in range(order + 1, 2*order): # iterate order-1 more times to reach the constants q = 0 for mu in range(1, nu): # Terms come from the previous derivative, so there are nu - 1 of them here. p = exprs.popleft() term = q + sp.diff(p, th) exprs.append(sp.cancel(term/sp.sin(th))) q = p exprs.append(sp.cancel(q/sp.sin(th))) print(nu, exprs) </code></pre> <p>The output is a bunch of junk:</p> <pre><code>5 deque([18*cos(th)**2 + 9, (-22*sin(th)**2*cos(th) + 5*cos(th)**3 - 5*cos(th))/sin(th), 6*sin(th)**2 - cos(th)**2 + 4, -3*sin(th)*cos(th), sin(th)**2]) 6 deque([-36*cos(th), (22*sin(th)**4 - 19*sin(th)**2*cos(th)**2 + 14*sin(th)**2 - 5*cos(th)**4 + 5*cos(th)**2)/sin(th)**3, (-8*sin(th)**2*cos(th) + 5*cos(th)**3 - 5*cos(th))/sin(th)**2, (9*sin(th)**2 - 4*cos(th)**2 + 4)/sin(th), -cos(th), sin(th)]) 7 deque([36, (24*sin(th)**4*cos(th) + 39*sin(th)**2*cos(th)**3 - 24*sin(th)**2*cos(th) + 15*cos(th)**5 - 15*cos(th)**3)/sin(th)**5, (30*sin(th)**4 - 34*sin(th)**2*cos(th)**2 + 19*sin(th)**2 - 15*cos(th)**4 + 15*cos(th)**2)/sin(th)**4, (9*sin(th)**2*cos(th) + 9*cos(th)**3 - 9*cos(th))/sin(th)**3, (10*sin(th)**2 - 4*cos(th)**2 + 4)/sin(th)**2, 0, 1]) </code></pre> <p>I expect the formulas to become simpler over the time steps and end up as a bunch of constants. Here's a correct output:</p> <pre><code>5 deque([9*cos(2*th) + 18, -27*sin(2*th)/2, 7*sin(th)**2 + 3, -3*sin(2*th)/2, sin(th)**2]) 6 deque([-36*cos(th), 36*sin(th), -13*cos(th), 13*sin(th), -cos(th), sin(th)]) 7 deque([36, 0, 49, 0, 14, 0, 1]) </code></pre> <p>I can add various <code>.simplify()</code> calls and get it to work for <code>order = 4</code>, but I'm trying to get it to work with more complicated expressions with higher orders, and I'm finding SymPy is just not reliably figuring out how to cancel a <code>sin(th)</code> at each stage (even though I know it's possible to do so). How can I coax it in the right direction?</p> <p>I'm finding <code>.trigsimp()</code> and <code>.simplify()</code> sometimes create factors of higher frequency, like <code>sin(2th)</code>, and then <code>.cancel()</code> can't figure out how to eliminate a lower-frequency <code>sin(th)</code> from those. Yet if I don't simplify at all, or try to simplify only at the end, sympy does markedly worse, both in terms of runtime and final complexity.</p> <p>Here are the expression deques for higher orders. The example above comes from order 4 only for the sake of simplicity while explaining the setup. The real problem is that I can't find a solution that works beyond order 6. 
I'd like a solution that works on all of these:</p> <ul> <li>5: <code>deque([-24*cos(th)**4 - 72*cos(th)**2 - 9, (50*cos(th)**3 + 55*cos(th))*sin(th), (-35*cos(th)**2 - 10)*sin(th)**2, 10*sin(th)**3*cos(th), -sin(th)**4])</code></li> <li>6: <code>deque([-120*cos(th)**5 - 600*cos(th)**3 - 225*cos(th), (274*cos(th)**4 + 607*cos(th)**2 + 64)*sin(th), (-225*cos(th)**3 - 195*cos(th))*sin(th)**2, (85*cos(th)**2 + 20)*sin(th)**3, -15*sin(th)**4*cos(th), sin(th)**5])</code></li> <li>7: <code>deque([-720*cos(th)**6 - 5400*cos(th)**4 - 4050*cos(th)**2 - 225, (1764*cos(th)**5 + 6552*cos(th)**3 + 2079*cos(th))*sin(th), (-1624*cos(th)**4 - 2842*cos(th)**2 - 259)*sin(th)**2, (735*cos(th)**3 + 525*cos(th))*sin(th)**3, (-175*cos(th)**2 - 35)*sin(th)**4, 21*sin(th)**5*cos(th), -sin(th)**6])</code></li> <li>8: <code> deque([-5040*cos(th)**7 - 52920*cos(th)**5 - 66150*cos(th)**3 - 11025*cos(th), (13068*cos(th)**6 + 73188*cos(th)**4 + 46575*cos(th)**2 + 2304)*sin(th), (-13132*cos(th)**5 - 38626*cos(th)**3 - 10612*cos(th))*sin(th)**2, (6769*cos(th)**4 + 9772*cos(th)**2 + 784)*sin(th)**3, (-1960*cos(th)**3 - 1190*cos(th))*sin(th)**4, (322*cos(th)**2 + 56)*sin(th)**5, -28*sin(th)**6*cos(th), sin(th)**7])</code></li> </ul> <p>If you're curious where these come from, you can see my <a href="https://github.com/pavelkomarov/spectral-derivatives/blob/main/notebooks/chebyshev_domain_endpoints.ipynb" rel="nofollow noreferrer">full notebook here</a>, which is part of <a href="https://pypi.org/project/spectral-derivatives/" rel="nofollow noreferrer">a larger project</a> with <a href="https://pavelkomarov.com/spectral-derivatives/math.pdf" rel="nofollow noreferrer">a lot of math</a>.</p>
<python><sympy>
2025-02-01 00:00:52
1
1,266
Pavel Komarov
79,404,186
1,119,340
How to display an image in retina resolution from Python
<p>I'd like to open a window and display an image, in full resolution on a retina display, on MacOS.</p> <p>I can't find a way to do this, so I'm looking for a solution. I don't mind if it's via matplotlib or Tk or anything else. I don't need the script to do anything apart from generate the image and display it until I close the window. The image is stored in a NumPy array.</p> <p>Note, this isn't quite as trivial as it sounds. It's not just a case of resizing the image, it's a case of changing the mode that the window is displayed in. This is an OS feature, and so it's a question of whether there are any Python libraries that support activating retina mode and displaying an image using it.</p> <p>I've mostly tried poking around with Matplotlib, changing the backend and the <code>figure.dpi</code> in the <code>rcparams</code>, but it didn't seem to have any effect. It's difficult to google for solutions, because most people are using Jupyter notebooks but I really just want to use a script.</p>
<python><macos><retina-display>
2025-01-31 23:40:30
1
8,479
N. Virgo
79,404,090
3,324,751
Python 3.13.1 interpreter accepts y = set[x] as correct syntax
<p>Just noticed my Python 3.13 interpreter accepts the following syntax:</p> <pre><code>&gt;&gt;&gt; x (1, 2, 3) &gt;&gt;&gt; y = set[x] &gt;&gt;&gt; y set[1, 2, 3] &gt;&gt;&gt; y = set[1,2,3] &gt;&gt;&gt; y set[1, 2, 3] &gt;&gt;&gt; d = dict[1,2,3] &gt;&gt;&gt; d dict[1, 2, 3] &gt;&gt;&gt; type(d) &lt;class 'types.GenericAlias'&gt; </code></pre> <p>I am wondering if this a valid syntax, and if so, then what is the use and interpretation of it?</p>
<python><python-3.x>
2025-01-31 22:07:36
2
577
hagh
79,404,077
20,554,684
How to fix alignment of projection from (x,y,z) coordinates onto xy-plane in matplotlib 3d plot?
<p>I was trying to make a 3D visualization of the joint probability mass function with the following code:</p> <pre><code>import math import numpy as np import matplotlib.pyplot as plt def f(x, y): if(1 &lt;= x + y and x + y &lt;= 4): return (math.comb(3, x) * math.comb(2, y) * math.comb(3, 4 - x - y)) / math.comb(8, 4) else: return 0.0 x_domain = np.array([0, 1, 2, 3]) y_domain = np.array([0, 1, 2]) X, Y = np.meshgrid(x_domain, y_domain) X = np.ravel(X) Y = np.ravel(Y) Z = np.zeros_like(X, dtype=float) for i in range(len(X)): Z[i] = f(X[i], Y[i]) fig = plt.figure(figsize=(8, 4)) ax = fig.add_subplot(1, 1, 1, projection=&quot;3d&quot;) ax.scatter(X, Y, Z) # plots the induvidual points for i in range(len(X)): # draws lines from xy plane up (x,y,z) ax.plot([X[i], X[i]], [Y[i], Y[i]], [0, Z[i]], color=&quot;r&quot;) ax.set_xticks(x_domain) ax.set_yticks(y_domain) plt.show() </code></pre> <p>Which gave the following result:</p> <p><a href="https://i.sstatic.net/lQQRrIW9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lQQRrIW9.png" alt="3D scatter plot" /></a></p> <p>As you can see, the stems of lines from the xy-plane do not align with the integer coordinates. I have searched through the matplotlib documentation and cant find the cause for this (unless I have overlooked something). Does anyone know how to fix the alignment issue?</p> <p><strong>EDIT</strong> It seems stack overflow could not render the mathjax equation, so I removed it.</p>
<python><matplotlib><visualization>
2025-01-31 21:53:23
1
399
a_floating_point
79,404,070
10,951,092
Is there a way to tell where a user has clicked a 3D surface plot in matplotlib?
<p>I am creating a 3D plot using Matplotlib. The plot contains points, lines and surfaces displayed in 3D space. Using the “picker=true” option for the lines and points I can make them clickable. And when the user clicks on them I can return the location of their pointer in 3D space using “get_data_3d”. I can’t get this to work with the 3d surfaces though. They are poly3dcollections instead of line3d. They don’t have a “get_data_3d” function. Any idea how I can return where the user clicks on a 3d surface?</p> <pre><code># Imports import matplotlib.pyplot as plt import numpy # If a point is selected, print its location def onPick(event): points = event.artist print(points.get_data_3d()) # Create a 3D plot fig = plt.figure() ax = fig.add_subplot(111, projection='3d') # Create a plane in 3D space x = numpy.arange(-50, 50, 1) y = numpy.arange(-50, 50, 1) z = numpy.array([[5 for _ in x] for _ in y]) x, y = numpy.meshgrid(x, y) # Plot the plane ax.plot_surface(x, y, z, alpha=0.2, color=&quot;y&quot;, picker=True, pickradius=5) # Call a function if the plane is clicked on fig.canvas.mpl_connect('pick_event', onPick) # Show the plot plt.show() </code></pre> <p>clicking mouse on one point of the surface I get:</p> <pre><code>AttributeError: 'Poly3DCollection' object has no attribute 'get_data_3d' </code></pre>
<python><matplotlib><mplot3d>
2025-01-31 21:45:28
1
382
PetSven
79,403,767
1,678,371
How to create or copy folders to Windows AppData folders Using Python
<p>Using standard-library Python functions like <code>os.makedirs</code> or <code>shutil.copy2</code>, I can put folders inside the <code>%HOME%/AppData/Roaming/MyApp</code> folder, but they are invisible to Explorer, PowerShell and any other app.</p> <p>I know they exist because I can see them via Python using <code>os.path.exists</code>.</p> <p>Can someone explain why, and/or how to create folders that are accessible using the standard library? I have a workaround that calls out to <code>subprocess.run([&quot;powershell ...&quot;])</code>, but that's 100% hacky in my opinion.</p>
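<p>For reference, a minimal sketch of the standard-library approach described above, with placeholder folder and file names. On a stock setup these calls do produce folders that Explorer can see, which suggests the behaviour in the question is environment-specific (possibly a store-packaged interpreter virtualizing AppData writes); treat that as a guess, not a diagnosis.</p>
<pre class="lang-py prettyprint-override"><code># Minimal sketch of the standard-library approach described above.
# 'MyApp', 'config' and the file name are placeholders.
import os

appdata = os.environ.get('APPDATA', os.path.expanduser('~/AppData/Roaming'))
target = os.path.join(appdata, 'MyApp', 'config')

os.makedirs(target, exist_ok=True)                 # create the folder tree
with open(os.path.join(target, 'settings.ini'), 'w') as f:
    f.write('[main]\n')                            # drop a test file into it
print(os.path.exists(target), os.listdir(target))
</code></pre>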
<python><windows><appdata>
2025-01-31 18:54:04
1
2,285
Lucian Thorr
79,403,655
459,745
Click: How to propagate the exit code back to the group
<p>I have a script which uses <code>click.group</code> to provide subcommands. Each subcommand might pass or fail. How do I propagate the exit code from the subcommands back to <code>main</code>?</p> <pre class="lang-py prettyprint-override"><code>import click import sys @click.group() def main(): # How do I get the exit code from `get`? sys.exit(0) # use that exit code here @main.command() def get(): exit_code = 1 # How do I send the exit code back to main? </code></pre>
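<p>For illustration, a hedged sketch of one common pattern: rather than handing the code back to the group, the subcommand ends the process itself through the click context. This sidesteps the propagation question rather than answering it exactly, so treat it as one possible shape, not the only one.</p>
<pre class="lang-py prettyprint-override"><code># Hedged sketch: the subcommand exits the process itself via the click context.
import click

@click.group()
def main():
    pass

@main.command()
@click.pass_context
def get(ctx):
    exit_code = 1        # whatever the subcommand computed
    ctx.exit(exit_code)  # raises SystemExit(exit_code) for the whole process

if __name__ == '__main__':
    main()
</code></pre>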
<python><python-click>
2025-01-31 18:05:22
1
41,381
Hai Vu
79,403,612
10,145,953
Calculate the count of distinct values appearing in multiple tables
<p>I have three pyspark dataframes in Databricks: <code>raw_old</code>, <code>raw_new</code>, and <code>master_df</code>. These are placeholders to work out the logic on a smaller scale (actual tables contain billions of rows of data). There is a column in all three called <code>label</code>. I want to calculate the number of labels that appear in:</p> <ul> <li><code>raw_old</code> and <code>raw_new</code> (the answer is 3: A789, B456, D123)</li> <li><code>raw_new</code> and <code>master_df</code> (the answer is 2: C456, D123)</li> <li><code>raw_old</code> and <code>master_df</code> (the answer is 4: A654, B987, C987, D123)</li> <li><code>raw_old</code>, <code>raw_new</code>, and <code>master_df</code> (the answer is 1: D123)</li> </ul> <p>The three tables are below. How do I calculate the above bullet points?</p> <p><code>raw_old</code></p> <pre><code>+---+-----+ | id|label| +---+-----+ | 1| A987| | 2| A654| | 3| A789| | 4| B321| | 5| B456| | 6| B987| | 7| C321| | 8| C654| | 9| C987| | 10| D123| +---+-----+ </code></pre> <p><code>raw_new</code></p> <pre><code>+---+-----+ | id|label| +---+-----+ | 1| A123| | 2| A456| | 3| A789| | 4| B123| | 5| B456| | 6| B789| | 7| C123| | 8| C456| | 9| C789| | 10| D123| +---+-----+ </code></pre> <p><code>master_df</code></p> <pre><code>+---+-----+ | id|label| +---+-----+ | 1| A999| | 2| A654| | 3| A000| | 4| B111| | 5| B000| | 6| B987| | 7| C999| | 8| C456| | 9| C987| | 10| D123| +---+-----+ </code></pre>
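<p>As an illustrative sketch (assuming the three DataFrames from the question already exist, and setting aside how well this plan scales to billions of rows), the first three bullets are pairwise inner joins of the distinct labels, and the last bullet chains a third join:</p>
<pre class="lang-py prettyprint-override"><code># Illustrative sketch: count labels shared between tables via distinct + inner joins.
old_labels = raw_old.select('label').distinct()
new_labels = raw_new.select('label').distinct()
master_labels = master_df.select('label').distinct()

old_and_new = old_labels.join(new_labels, 'label')         # A789, B456, D123
new_and_master = new_labels.join(master_labels, 'label')   # C456, D123
old_and_master = old_labels.join(master_labels, 'label')   # A654, B987, C987, D123
all_three = old_and_new.join(master_labels, 'label')       # D123

print(old_and_new.count(), new_and_master.count(),
      old_and_master.count(), all_three.count())
</code></pre>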
<python><pyspark><databricks>
2025-01-31 17:48:58
2
883
carousallie
79,403,409
14,802,285
How to prevent certain input from impacting certain output of neural networks in pytorch?
<p>I have an LSTM model that receives 5 inputs to predict 3 outputs:</p> <pre><code>import torch import torch.nn as nn class LstmModel(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(LstmModel, self).__init__() self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True) self.fc = nn.Linear(hidden_size, output_size) def forward(self, x): None </code></pre> <p>I want to prevent a certain input from having any impact on a certain output. Let's say the first input should not have any effect on the prediction of the second output. In other words, the second prediction should not be a function of the first input.</p> <p>One solution I have tried is using separate LSTMs for each output:</p> <pre><code>class LstmModel(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(LstmModel, self).__init__() self.lstm1 = nn.LSTM(input_size, hidden_size, batch_first=True) self.lstm2 = nn.LSTM(input_size, hidden_size, batch_first=True) self.lstm3 = nn.LSTM(input_size, hidden_size, batch_first=True) self.fc1 = nn.Linear(hidden_size, output_size) self.fc2 = nn.Linear(hidden_size, output_size) self.fc3 = nn.Linear(hidden_size, output_size) def forward(self, x): # Assume x is of shape (batch_size, seq_length, input_size) # Split inputs input1, input2, input3, input4, input5 = x.split(1, dim=2) # Mask inputs for each output # For output1, exclude input2 input1_for_output1 = torch.cat((input1, input3, input4, input5), dim=2) # For output2, exclude input3 input2_for_output2 = torch.cat((input1, input2, input4, input5), dim=2) # For output3, exclude input4 input3_for_output3 = torch.cat((input1, input2, input3, input5), dim=2) # Process through LSTM _, (hn1, _) = self.lstm1(input1_for_output1) output1 = self.fc1(hn1[-1]) _, (hn2, _) = self.lstm2(input2_for_output2) output2 = self.fc2(hn2[-1]) _, (hn3, _) = self.lstm3(input3_for_output3) output3 = self.fc3(hn3[-1]) return output1, output2, output3 </code></pre> <p>The problem with this approach is that it takes at least 3 times longer to run the model (since I am running the LSTM 3 times, once for each output). Is it possible to do what I want to achieve more efficiently, with one run?</p>
<python><deep-learning><pytorch><neural-network><lstm>
2025-01-31 16:39:37
1
3,364
bird
79,403,323
1,559,388
Beam/Dataflow pipeline writing to BigQuery fails to convert timestamps (sometimes)
<p>I have a beam/dataflow pipeline that reads from Pub/Sub and writes to BiqQuery with <code>WriteToBigQuery</code>. I convert all timestamps to <code>apache_beam.utils.timestamp.Timestamp</code>. I am sure all timestamps are converted but I do get this error for some rows:</p> <pre><code>Error message from worker: generic::unknown: org.apache.beam.sdk.util.UserCodeException: java.lang.UnsupportedOperationException: Converting BigQuery type 'class java.lang.String' to 'LOGICAL_TYPE&lt;beam:logical_type:micros_instant:v1&gt;' is not supported </code></pre> <p>The volume of data is too high to isolate the failed rows and <code>WriteToBigQuery.failed_rows_with_errors</code> does not have anything. It works fine with <code>STREAMING_INSERTS</code> mode only <code>STORAGE_WRITE_API</code> has this issue.</p> <p>My pipeline is like this:</p> <pre><code>( p | &quot;ReadFromPubSub&quot; &gt;&gt; beam.io.ReadFromPubSub( subscription=config.get_arg(&quot;pub_sub_subscription&quot;)) | &quot;DecodeMessage&quot; &gt;&gt; beam.ParDo(ParsePubSubMessage()) | &quot;FilterEvents&quot; &gt;&gt; beam.Filter(lambda element: element[0][1] == &quot;event&quot;) | &quot;ExtractParsedMessage&quot; &gt;&gt; beam.Map(lambda element: element[1]) | &quot;LogBeforeSerialize&quot; &gt;&gt; beam.Map(lambda x: log_element(x, &quot;Before serialize&quot;)) | &quot;SerializeDataField&quot; &gt;&gt; beam.ParDo(SerializeDataFieldDoFn(events_schema)) | &quot;LogAfterSerialize&quot; &gt;&gt; beam.Map(lambda x: log_element(x, &quot;After serialize&quot;)) | &quot;FilterValidRows&quot; &gt;&gt; beam.Filter(lambda row: row is not None) | &quot;WriteToBigQuery&quot; &gt;&gt; WriteToBigQuery( table=&quot;xxx&quot;, dataset=&quot;xxx&quot;, project=&quot;xxx&quot;, schema={&quot;fields&quot;: events_schema}, method=WriteToBigQuery.Method.STORAGE_WRITE_API, use_at_least_once=True, validate=False, ignore_unknown_columns=True, write_disposition=BigQueryDisposition.WRITE_APPEND, create_disposition=BigQueryDisposition.CREATE_NEVER, ) ) </code></pre> <p>Any help will be much appreciated.</p>
<python><google-bigquery><runtime-error><google-cloud-dataflow><apache-beam>
2025-01-31 16:05:56
2
802
Jonathan
79,403,118
13,219,123
Pandas rolling rank issue
<p>I am trying to create a rolling rank column for a float column. However, the output is not as expected. Below I have an example:</p> <pre><code>import pandas as pd data = { 'time': pd.date_range(start='2025-01-01', periods=5, freq='H'), 'zone': ['A'] * 5, 'price': [1.0, 1.5, 1.7, 1.9, 2.0], } dummy_df = pd.DataFrame(data) class RollingRank: def __init__(self, rank_col: str, group_by_col: str, window_length: int, time_col: str): self.rank_col = rank_col self.time_col = time_col self.group_by_col = group_by_col self.window_length = window_length def fit_transform(self, df): df = df.sort_values(by=[self.group_by_col, self.time_col]) df.set_index(self.time_col, inplace=True) dfg = ( df.groupby(self.group_by_col) .rolling(window=f&quot;{self.window_length}H&quot;, center=True, closed=&quot;both&quot;) .rank(method=&quot;min&quot;) ) df.loc[:, f&quot;{self.rank_col}_rank&quot;] = dfg[self.rank_col].values return df df_rank = RollingRank(rank_col=&quot;price&quot;, group_by_col=&quot;zone&quot;, window_length=3, time_col=&quot;time&quot;).fit_transform(dummy_df) </code></pre> <p>As output I get the rank <code>[2.0, 3.0, 3.0, 3.0, 3.0]</code>, which does not make sense with <code>center=True</code> and <code>closed=&quot;both&quot;</code>.</p> <p>As an easy example, for the middle row with the timestamp <code>2025-01-01 02:00:00</code> I would expect the rank to be 2 (and not 3 as in the output), as the values used for ranking would be <code>[1.5, 1.7, 1.9]</code>.</p> <p>Any idea what I am doing wrong?</p>
<python><pandas>
2025-01-31 14:53:46
1
353
andKaae
79,403,003
1,194,864
Conda installation in an ssh server
<p>I want to install <code>Miniconda</code> and create an environment where I can install specific packages. For installing Miniconda I am following these <a href="https://bash%20/miniconda3/miniconda.sh%20-b%20-u%20-p%20/miniconda3" rel="nofollow noreferrer">instructions</a>. On the server, there is a <code>home</code> partition but also <code>local</code> and <code>data</code> partitions. The <code>home</code> directory has a strict storage limitation, so I would like to install everything in <code>/local/$USER</code> instead of <code>/home/$USER</code>.</p> <p>So I am doing the <code>Miniconda</code> installation in <code>/local/$USER</code> and then I create an environment with a specific <code>Python</code> version using:</p> <pre><code>conda create -y -name my_username python=3.12 </code></pre> <p>That command, unfortunately, automatically installs the environment in the <code>/home/$USER</code> directory. Moreover, I found an ugly command that can force the environment to be installed in a specific directory:</p> <pre><code>conda create -p /local/$USER/conda_envs/my_conda_env </code></pre> <p>Then I need to do something like:</p> <pre><code>source activate /local/$USER/conda_envs/my_conda_env </code></pre> <p>And install all my packages. During the installation, though, I noticed that all these packages are still installed in the <code>/home/$USER/</code> directory and I still run out of storage. Even when I do the whole Miniconda installation in <code>/local/$USER/</code>, if I then run <code>conda info</code> I see that the user config file (<code>/home/$USER/.condarc</code>) and cache files are still in the home directory (<code>/home/$USER/.conda/envs</code>).</p> <p>How can I properly deal with this? I guess there was an old conda installation with instructions somewhere to install environments in the <code>/home/$USER</code> folder; how can I make <code>/local/$USER</code> the default one?</p> <p>The steps that I have followed for installing Miniconda are the following:</p> <pre><code>cd /local/$USER mkdir -p miniconda3/ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda3/miniconda.sh bash miniconda3/miniconda.sh -b -u -p miniconda3/ rm miniconda3/miniconda.sh </code></pre> <p>Is there something in the installation that could point to the <code>/home/</code> directory? I have inspected the <code>miniconda.sh</code> and there is this <code>PREFIX=&quot;${HOME:-/opt}/miniconda3&quot;</code>, but during the installation there is a message that says <code>PREFIX=&quot;local/$USER/miniconda3&quot;</code>.</p>
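<p>For reference, a hedged sketch of the user configuration that usually steers conda away from the home directory: conda reads <code>envs_dirs</code> and <code>pkgs_dirs</code> from <code>~/.condarc</code>, and they can be set with <code>conda config</code>. The paths below are placeholders matching the question, and whether this fully explains the behaviour described above is not certain.</p>
<pre class="lang-none prettyprint-override"><code>conda config --add envs_dirs /local/$USER/conda_envs
conda config --add pkgs_dirs /local/$USER/conda_pkgs

# resulting ~/.condarc (the small config file itself still lives in $HOME):
# envs_dirs:
#   - /local/$USER/conda_envs
# pkgs_dirs:
#   - /local/$USER/conda_pkgs
</code></pre>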
<python><server><anaconda><conda>
2025-01-31 14:14:21
1
5,452
Jose Ramon
79,402,996
17,160,160
Paginated API requests
<p><strong>Description</strong><br /> I am trying to make a paginated request to a public api.<br /> The request limit is 100 and so looped paginated requests are required to pull all records.<br /> The api contains some number of erroneous records that, if contained within the paginated request, will cause to the request to fail.<br /> I have attempted to create a loop that identifies and then skips the erroneous records in an efficient way while maintaining the largest batch limits where possible. However, I feel my approach of halving the batch limit is a bit simplistic and I wonder if there is a more efficient approach than mine?</p> <p><strong>Current Approach</strong></p> <ol> <li>Set the initial parameters for the API request, including limit and offset.</li> <li>Create a loop that continues until all records are fetched.</li> <li>In each iteration, make a request to the API with the current limit and offset.</li> <li>If the request is successful (status code 200), process the data and extend a results list.</li> <li>If an error 500 occurs an erroneous record is contained with the batch, halve the current batch limit until a successful request can be made.</li> <li>If the limit has been reduced to 1 and a 500 error is received an erroneous record is identified. Increment the offset by 1 to skip the record and return the batch limit to maximum.</li> <li>Continue until all records are fetched.</li> </ol> <p><strong>Code</strong></p> <pre><code>import requests # set initial parameters url = &quot;https://api.empire.britned.com/v1/public/auctions&quot; headers = {'accept': 'application/json'} params = { 'limit':100, 'offset':0, 'timescales' : 'LONG_TERM', &quot;statuses&quot;: &quot;&quot;, 'productTypes': 'LT_EXPLICIT_ANNUAL, LT_EXPLICIT_SEASONAL, LT_EXPLICIT_QUARTERLY, LT_EXPLICIT_MONTHLY', 'sortBy': 'BIDDING_PERIOD_START_ASC' } # empty list to store data all_data = [] # initial pull request to get total records count total_records = requests.get(url = url, headers = headers,params = params).json()['totalCount'] # loop to paginate and get all records while params['offset'] &lt; total_records: response = requests.get(url=url, headers=headers, params=params) # successful request if response.status_code == 200: print(f&quot;Success: {params['offset']} to {params['offset'] + params['limit']}&quot;) data = response.json() all_data.extend(data['entries']) params['offset'] += params['limit'] # Move to the next set of records # failed request elif response.status_code == 500: print(f&quot;Fail: {params['offset']} to {params['offset'] + params['limit']}&quot;) if params['limit'] &gt; 1: params['limit'] = params['limit'] // 2 # Halve the limit to narrow down the search else: # If limit is 1 and we get a 500 error, skip the problematic record params['offset'] += 1 # Increment offset to skip the problematic record params['limit'] = 100 # Reset limit back to 100 for the next batch </code></pre>
<python><python-requests>
2025-01-31 14:12:45
0
609
r0bt
79,402,927
4,289,400
A puzzle with Robot framework, scroll into view. I can't get a slider of a web element to start scrolling down
<p>A profile <a href="https://stackoverflow.com/users/16991720/ltl">LTL</a> on stackoverflow did a great attempt to do the <a href="https://obstaclecourse.tricentis.com/Obstacles/List" rel="nofollow noreferrer">obstacle course</a> of Tosca but then done with Robot framework (real clever to train test automation and you can compare along)</p> <p>If you're NEW with test automation this is really a nice way to start. It's a really good help to the community.</p> <p>I took this idea a bit further and put it on a <a href="https://github.com/valentijnpeters/Tosca-Obstacle-Course-With-RobotFramework" rel="nofollow noreferrer">Github repo</a>. Including a beautifull README. But I need the help of the community because I got stuck with one of the puzzles.</p> <p>So there is this <a href="https://obstaclecourse.tricentis.com/Obstacles/99999/" rel="nofollow noreferrer">puzzle 99999</a> scroll into view and the solution how to do it with Tosca is <a href="https://www.youtube.com/playlist?list=PLyN4Fw7MBsaJWcK1nyXzeOu_-nav-ZXUC" rel="nofollow noreferrer">really plain and simple</a>. (3th video)</p> <p>What I have consists of 2 parts...</p> <p>The custom Library code I made for that (filename is CustomActionsIntegrated.py)</p> <pre><code> from robot.api.deco import keyword @keyword('Draw Dot and Click At Coordinates') # New combined keyword def draw_and_click_at_coordinates(x, y): &quot;&quot;&quot; Draw a red dot at the specified coordinates (x, y) for debugging purposes, then simulate a click event at the same coordinates, followed by holding the LEFT mouse button, scrolling down with the mouse wheel, and moving the mouse down simultaneously. &quot;&quot;&quot; # JavaScript to draw the red dot draw_script = f&quot;&quot;&quot; var marker = document.createElement('div'); marker.style.position = 'absolute'; marker.style.left = '{x}px'; marker.style.top = '{y}px'; marker.style.width = '20px'; marker.style.height = '20px'; marker.style.backgroundColor = 'blue'; marker.style.borderRadius = '50%'; marker.style.zIndex = '9999'; marker.style.pointerEvents = 'none'; marker.style.border = '2px solid black'; document.body.appendChild(marker); &quot;&quot;&quot; # JavaScript to simulate the click click_script = f&quot;&quot;&quot; var event = new MouseEvent('click', {{ clientX: {x}, clientY: {y}, bubbles: true, cancelable: true }}); document.elementFromPoint({x}, {y}).dispatchEvent(event); &quot;&quot;&quot; # JavaScript to simulate LEFT mouse button down mouse_down_script = f&quot;&quot;&quot; var mouseDownEvent = new MouseEvent('mousedown', {{ clientX: {x}, clientY: {y}, button: 0, // 0 = LEFT mouse button bubbles: true, cancelable: true }}); document.elementFromPoint({x}, {y}).dispatchEvent(mouseDownEvent); &quot;&quot;&quot; # JavaScript to simulate mouse wheel scroll down (page down) wheel_scroll_script = f&quot;&quot;&quot; var wheelEvent = new WheelEvent('wheel', {{ deltaY: 1000, // Positive value scrolls down clientX: {x}, clientY: {y}, bubbles: true }}); document.elementFromPoint({x}, {y}).dispatchEvent(wheelEvent); &quot;&quot;&quot; # JavaScript to simulate mouse movement down mouse_move_script = f&quot;&quot;&quot; var mouseMoveEvent = new MouseEvent('mousemove', {{ clientX: {x}, clientY: {y + 1000}, // Move 1000 pixels down bubbles: true, cancelable: true }}); document.elementFromPoint({x}, {y}).dispatchEvent(mouseMoveEvent); &quot;&quot;&quot; # JavaScript to simulate LEFT mouse button up mouse_up_script = f&quot;&quot;&quot; var mouseUpEvent = new MouseEvent('mouseup', {{ clientX: {x}, clientY: {y 
+ 1}, // Release at the new position button: 0, // 0 = LEFT mouse button bubbles: true, cancelable: true }}); document.elementFromPoint({x}, {y +1}).dispatchEvent(mouseUpEvent); &quot;&quot;&quot; # Combine all actions with minimal delays return f&quot;&quot;&quot; {draw_script} setTimeout(function() {{ {click_script} setTimeout(function() {{ {mouse_down_script} setTimeout(function() {{ {wheel_scroll_script} {mouse_move_script} setTimeout(function() {{ {mouse_up_script} }}, 50); // Delay after combined actions before releasing mouse button }}, 10); // Minimal delay after mouse down before combined actions }}, 10); // Minimal delay after click before mouse down }}, 10); // Minimal delay after drawing before click &quot;&quot;&quot; </code></pre> <p>and the .robot file is this:</p> <pre><code> *** Settings *** Library SeleniumLibrary Library CustomAction1.py # Import the custom actions file Library CustomAction2.py # Import the custom actions file Library CustomActionsIntegrated.py Variables variables.py *** Variables *** ${URL} https://obstaclecourse.tricentis.com/Obstacles/99999 *** Test Cases *** Visueel Klikken Onder Submit Open Browser ${URL} Chrome Maximize Browser Window Wait Until Element Is Visible xpath=//*[@id='submit'] # Get element's position using JavaScript ${location}= Execute JavaScript ... var rect = document.evaluate(&quot;//*[@id='submit']&quot;, document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue.getBoundingClientRect(); ... return [rect.left + (rect.width / 2) + 132, rect.top + window.scrollY + 130]; # Access the location array using indexes for x and y ${click_x}= Set Variable ${location}[0] ${click_y}= Set Variable ${location}[1] # Scroll to make the click area visible Execute JavaScript window.scrollTo(0, ${click_y} - 200); #Sleep 1s # Draw the red dot using the custom keyword Draw Red Dot ${click_x} ${click_y} #Sleep 1s # Click the page at the specified coordinates using the custom keyword #Click Page At Coordinates ${click_x} ${click_y} #Sleep 1s #combined , blue dot ${js_code}= Draw Dot and Click At Coordinates ${click_x} ${click_y} Execute JavaScript ${js_code} #Draw Dot and Click At Coordinates ${click_x} ${click_y} #Draw Dot and Click At Coordinates ${click_x} ${click_y} # Wait for the page to respond before proceeding # Click at the exact location (Now with improved visibility) #Click Element At Coordinates xpath=//*[@id='submit'] 132 130 # Wait for the page to respond before proceeding # Scroll down to bring up more content (if required) #Press Keys //body PAGE_DOWN Sleep 2s # Ensure overlay2 is visible (if needed) #Execute JavaScript document.getElementById('textfield').scrollIntoView() # Input text in the text field Input Text xpath=//*[@id='textfield'] Tosca Sleep 1s # Drag overlay2 down #Drag And Drop By Offset id=overlay2 0 100 Sleep 2s # Click the submit button again Click Element xpath=//*[@id='submit'] # Validate success message Element Should Contain xpath=//body You solved this automation problem. </code></pre> <p>But I can't get it to work. Maybe I'm thinking way to complex. I tried the help of DeepSeek and ChatGPT, but I didn't find a proper working solution yet.</p> <p>So if you see any mistakes or you see a solution. please help.</p> <p>Also if you would like to improve and upgrade this repo to the point that all the puzzles have a solution, let me know then I e-mail you so you can do commits as well.</p>
<python><slider><robotframework><tosca>
2025-01-31 13:48:34
1
418
tijnn
79,402,794
3,906,713
Python vectorized minimization of a multivariate loss function without jacobian
<p>I have a loss function that needs to be minimized</p> <pre><code>def loss(x: np.ndarray[float]) -&gt; float </code></pre> <p>My problem has <code>nDim=10</code> dimensions. Loss function works for 1D arrays of shape <code>(nDim,)</code>, and with 2D arrays of shape <code>(nSample, nDim)</code> for an arbitrary number of samples. Because of the nature of the implementation of the loss function (numpy), it is significantly faster to make a single call to the loss function with several samples packed into 2D argument than to make several 1D calls.</p> <p>The minimizer I am currently running is</p> <pre><code>sol = scipy.optimize.basinhopping(loss, x0, minimizer_kwargs={&quot;method&quot;: &quot;SLSQP&quot;}) </code></pre> <p>It does the job, but is too slow. As of current, the minimizer is making single 1D calls to the loss function. Based on observing the sample points, it seems, SLSQP is performing numerical differentiation, thus sampling 11 points for each 1 sample to calculate the gradient. Theoretically, it should be possible to implement this minimizer with vectorized function calls, requesting all 11 sample points from the loss function simultaneously.</p> <p>I was hoping that there would be a <code>vectorize</code> flag for SLSQP, but it does not seem to be the case, please correct me if I am wrong.</p> <p>Note also that the loss function is far too complicated for analytic calculation of derivatives, so explicit Jacobian not an option.</p> <p><strong>Question:</strong> Does Scipy or any other minimization library for python support a <em>global</em> optimization strategies (such as basinhopping) with vectorized loss function and no available Jacobian?</p>
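<p>One tentative pointer rather than a definitive answer: <code>scipy.optimize.differential_evolution</code> is a global, derivative-free optimizer that accepts <code>vectorized=True</code> (SciPy 1.9+), in which case it calls the objective with an array of shape <code>(n_dim, n_samples)</code> - note this is the transpose of the <code>(nSample, nDim)</code> layout above, hence the small wrapper. The toy loss below is a stand-in for the real one.</p>
<pre class="lang-py prettyprint-override"><code># Tentative sketch: batched objective evaluations via differential_evolution.
import numpy as np
from scipy.optimize import differential_evolution

def loss(x):                  # stand-in for the real 10-D loss, works on (..., nDim)
    return np.sum((x - 0.5) ** 2, axis=-1)

def batched_loss(x_cols):     # scipy passes shape (n_dim, n_samples) when vectorized
    return loss(x_cols.T)     # returns shape (n_samples,)

bounds = [(-5.0, 5.0)] * 10
result = differential_evolution(batched_loss, bounds, vectorized=True,
                                updating='deferred', polish=False)
print(result.x, result.fun)
</code></pre>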
<python><scipy><minimization>
2025-01-31 12:55:49
2
908
Aleksejs Fomins
79,402,617
6,618,051
Top or Bottom navigation bar
<p>For some reason, only the bottom bar is shown (independently of the <code>is_mobile</code> state). How can it be fixed?</p> <pre class="lang-none prettyprint-override"><code>BoxLayout: orientation: 'vertical' ActionBar: hidden: app.is_mobile size_hint_y: 0 if app.is_mobile else None ActionView: use_separator: True ActionPrevious: with_previous: True on_release: app.next_screen('main') ActionButton: text: 'Shuffle 1' ActionButton: text: 'Check' ActionButton: text: 'Next' Accordion: AccordionItem: title: '...' Label: text: '...' AccordionItem: title: '...' Label: text: '...' ActionBar: hidden: False if app.is_mobile else True size_hint_y: None if app.is_mobile else 0 ActionView: use_separator: True ActionPrevious: with_previous: True on_release: app.next_screen('main') ActionButton: text: 'Shuffle' ActionButton: text: 'Check' ActionButton: text: 'Next' </code></pre> <p>where <code>is_mobile</code> is calculated as:</p> <pre class="lang-py prettyprint-override"><code>from kivy.app import App from kivy.properties import BooleanProperty from kivy.utils import platform class MainApp(App): is_mobile = BooleanProperty(False) def build(self): if platform in ['android', 'ios']: self.is_mobile = True </code></pre>
<python><kivy>
2025-01-31 11:39:46
1
1,939
FieryCat
79,402,584
2,629,034
Obtaining diff for multiple columns in pandas
<p>I have a pandas dataframe with multiple columns (on which I am computing the cumulative value).</p> <p>I would now like to get the incremental values.</p> <p>This is my current dataset:</p> <pre><code>Gender Cubic_Cap Branch UWYear yhat_all M 1000 A 2015 19 M 1000 A 2015 20 M 1000 A 2015 26 M 1000 A 2015 30 F 1500 B 2016 1 F 1500 B 2016 25 F 1500 B 2016 36 F 1500 B 2016 49 </code></pre> <p>My desired result is:</p> <pre><code>yhat_incremental 0 1 6 4 1 24 11 13 </code></pre> <p>I've tried the following methods (but to no avail):</p> <pre><code>all_c1['incremental_yhat'] = all_c1.groupby(rating_factors)['yhat_all'].diff().fillna(all_c1['yhat_all']) </code></pre> <p>I've also tried this:</p> <pre><code>all_c1['incremental_yhat'] = all_c1['yhat_all'].shift().where(all_c1[rating_factors].eq(all_c1[rating_factors].shift())) </code></pre> <p>Are there any other methods I can use to obtain this?</p>
<python><pandas>
2025-01-31 11:26:48
1
571
galeej
79,402,505
923,095
Exclude BOM(byte order mark) character when combining files in Python
<p>I've created a Python script to combine multiple SQL files into one. Sometimes the script inserts a BOM (byte order mark) character, which obviously breaks the SQL script and needs to be manually corrected. I can't do a simple string replace because the BOM is interpreted as binary, and I have tried various decoding methods on both the files that I'm reading and the one that I write to; none of those have worked so far. Any tips on how I can solve this?</p> <p>What the BOM looks like:</p> <p><a href="https://i.sstatic.net/Jpcvgiq2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jpcvgiq2.png" alt="Byte Order Mark as seen in Notepad++" /></a></p> <p>Script:</p> <pre><code>import os from FileDto import FileDto def getFileItems(path): directory = os.fsencode(path) items = [] for file in os.listdir(directory): fileName = os.fsdecode(file) if os.path.isfile(path + '\\' + fileName): datePart = fileName[0:12] fileDto = FileDto(datePart,fileName) items.append(fileDto) items.sort(key=lambda x: x.id, reverse=False) return items path = input('Enter a path where the SQL scripts that need to be combined live:') fileItems = getFileItems(path) counter = 1 outFileName = 'Combined Release Script.sql' outFilePath = path + &quot;\\&quot; + outFileName with open(outFilePath, 'w', encoding='utf-8') as outfile: for names in fileItems: outfile.write('--' + str(counter) + '.' + names.fileName) outfile.write('\n\n') with open(path + &quot;\\&quot; + names.fileName) as infile: for line in infile: outfile.write(line) outfile.write('\nGO\n') counter += 1 print(&quot;Done.%s SQL scripts have been combined into one called - %s&quot; % (len(fileItems), outFileName)) </code></pre>
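<p>For what it's worth, a small hedged sketch of the usual way a BOM is neutralised on the reading side: opening each input script with <code>encoding='utf-8-sig'</code> decodes and silently drops a leading BOM if one is present. The file names below are placeholders.</p>
<pre class="lang-py prettyprint-override"><code># Hedged sketch: 'utf-8-sig' strips a leading BOM while reading, so the
# combined output never receives one. File names are placeholders.
in_files = ['script1.sql', 'script2.sql']

with open('combined.sql', 'w', encoding='utf-8') as outfile:
    for name in in_files:
        with open(name, encoding='utf-8-sig') as infile:  # BOM, if any, dropped here
            outfile.write(infile.read())
        outfile.write('\nGO\n')
</code></pre>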
<python>
2025-01-31 10:49:02
2
17,067
Denys Wessels
79,402,318
9,525,238
In an array of counters that reset, find the start-end index for counter
<p>Given an array that looks like this:</p> <pre><code>values [0, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 6, 0, 0, 1, 2, 3] index 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 </code></pre> <p>If I search for index 3, I want to get the start and end indexes of that counter's run, before it is reset again, which is 2 - 6.</p> <p>And for index 10, I want to get 8 - 13. And for index 16, I want to get 16 - 18.</p> <p>How can I achieve this in numpy?</p>
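<p>A tentative numpy sketch of one way to express this, assuming the queried index is never one of the zero positions itself: locate the zeros, then use <code>searchsorted</code> to find the last reset at or before the query and the next reset after it. The boundaries below reproduce the examples above (2-6, 8-13, 16-18).</p>
<pre class="lang-py prettyprint-override"><code># Tentative sketch: map a query index to the start/end of its counter run.
import numpy as np

values = np.array([0, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 6, 0, 0, 1, 2, 3])
zeros = np.flatnonzero(values == 0)               # positions of the resets

def run_bounds(i):
    pos = np.searchsorted(zeros, i, side='right') # number of resets at or before i
    start = zeros[pos - 1] + 1                    # run starts right after the last reset
    end = zeros[pos] - 1 if pos &lt; len(zeros) else len(values) - 1
    return start, end

print(run_bounds(3), run_bounds(10), run_bounds(16))   # (2, 6) (8, 13) (16, 18)
</code></pre>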
<python><numpy>
2025-01-31 09:40:48
4
413
Andrei M.
79,402,275
2,790,838
How to fix inconsistent method resolution order when deriving from ctypes.Structure and Mapping
<p>Given the following Python code:</p> <pre><code>import ctypes from collections.abc import Mapping class StructureMeta(type(ctypes.Structure), type(Mapping)): pass class Structure(ctypes.Structure, Mapping, metaclass=StructureMeta): pass struct = Structure() assert isinstance(struct, ctypes.Structure) assert isinstance(struct, Mapping) </code></pre> <p>The metaclass is needed to avoid a metaclass conflict when deriving from both ctypes.Structure (metaclass <code>_ctypes.PyCStructType</code>) and Mapping (metaclass <code>abc.ABCMeta</code>).</p> <p>This works fine when executed with Python 3.11. Alas, pylint 3.3.4 reports two errors:</p> <pre><code>test.py:5:0: E0240: Inconsistent method resolution order for class 'StructureMeta' (inconsistent-mro) test.py:9:0: E1139: Invalid metaclass 'StructureMeta' used (invalid-metaclass) </code></pre> <p>How do I need to change the meta class to fix the error reported by pylint? Is it even a problem?</p>
<python><multiple-inheritance><pylint><metaclass>
2025-01-31 09:25:44
1
3,397
Markus
79,402,253
782,392
Can you specify a buffer size when opening a file in text I/O mode
<p>I'm confused by the documentation regarding the buffering when reading a file opened with the <code>open</code> function.</p> <p>Does the <code>buffering</code> parameter do anything when <code>mode</code> is <code>r</code> so that the file is opened in text I/O mode?</p> <p>The <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow noreferrer">docs</a> state:</p> <blockquote> <p>Note that specifying a buffer size this way applies for binary buffered I/O, but TextIOWrapper (i.e., files opened with mode='r+') would have <strong>another</strong> buffering.</p> </blockquote> <p>What does <code>another</code> mean? An <em>additional</em> buffering or a <em>different</em> buffering?</p> <p>I'd assume it's the latter, which means that the <code>buffering</code> parameter does not specify the size in bytes of a fixed-size chunk buffer. Why would they state that the parameter has a different meaning but not explain it?</p> <p>What's the conclusion to this?</p> <p><code>open('myfile.txt', 'r', 1024)</code></p> <p>Does this use a buffer of 1024 bytes? Does the parameter change anything in this mode?</p>
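<p>Not an authoritative answer, but the layering is easy to poke at interactively: a text-mode file object is a <code>TextIOWrapper</code> on top of a <code>BufferedReader</code> on top of a raw <code>FileIO</code>. The integer <code>buffering</code> argument is generally understood to size that binary middle layer, while the text layer keeps its own internal read chunking (a CPython implementation detail exposed as <code>_CHUNK_SIZE</code>).</p>
<pre class="lang-py prettyprint-override"><code># Exploratory look at the I/O stack created by open() in text mode.
import io

with open('example.txt', 'w') as f:          # make a small file to reopen below
    f.write('hello\n' * 10)

f = open('example.txt', 'r', buffering=1024)
print(type(f))                               # TextIOWrapper (the text layer)
print(type(f.buffer))                        # BufferedReader (buffering=1024 applies here)
print(type(f.buffer.raw))                    # FileIO (the unbuffered OS-level layer)
print(io.DEFAULT_BUFFER_SIZE)                # default binary buffer size
print(getattr(f, '_CHUNK_SIZE', None))       # CPython-private text-layer chunk size
f.close()
</code></pre>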
<python><python-3.x>
2025-01-31 09:17:05
1
2,674
T3rm1
79,402,169
6,751,456
Django customize validation error message
<p>I have the following serializer definitions that validate the request payload:</p> <pre><code>from rest_framework import serializers # diagnosis serializer class ICD10Serializer(serializers.Serializer): icd_10 = serializers.IntegerField(required=True, allow_null=False) class DetailsSerializer(serializers.Serializer): diagnosis_details = ICD10Serializer(many=True) class ChartUpdateSerializer(serializers.Serializer): diagnosis = DetailsSerializer(many=True) </code></pre> <p>Its usage:</p> <pre><code>payload = ChartUpdateSerializer(data=request.data) if not payload.is_valid(): raise serializers.ValidationError(payload.errors) </code></pre> <p>This throws a validation error message in the following format:</p> <pre><code>{ &quot;diagnosis&quot;: [ { &quot;diagnosisDetails&quot;: [ {}, &lt;- valid {}, &lt;- valid {}, &lt;- valid {}, &lt;- valid { &quot;icd10&quot;: [ &quot;This field may not be null.&quot; ] } ] } ] } </code></pre> <p>Here <code>{}</code> is also shown for the valid ones. Can we raise the validation error for the invalid ones only? Better still, can we know which field failed and its message, so a custom message can be generated?</p>
<python><django><django-rest-framework>
2025-01-31 08:39:59
1
4,161
Azima
79,402,087
7,798,669
Issue in detecting the bubble of OMR sheet
<p>I am working on an OMR sheet processing system where users bubble answers in four columns (A, B, C, D). My goal is to detect each individual bubble inside each column.</p> <p>I am successfully detecting all four columns, but there is an issue:</p> <ul> <li>For columns 1 &amp; 4, I can correctly detect the bubbles inside.</li> <li>For columns 2 &amp; 3, my algorithm fails and instead detects entire columns as a single contour, instead of individual bubbles.</li> </ul> <p><a href="https://i.sstatic.net/VC5JewLt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VC5JewLt.png" alt="This is the thresh image for counter 1" /></a></p> <p><a href="https://i.sstatic.net/iVQyyzcj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVQyyzcj.png" alt="Corresponding Bubble Detection from Thresh 1" /></a></p> <p><a href="https://i.sstatic.net/2fUoGh8M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fUoGh8M.png" alt="THresh Image for Counter 2" /></a></p> <p><a href="https://i.sstatic.net/4aMtgAbL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aMtgAbL.png" alt="Corresponding Bubble Detection from Thresh 2" /></a></p> <p>My code for the same is</p> <pre><code>from imutils.perspective import four_point_transform from imutils import contours import numpy as np import imutils import cv2 import matplotlib.pyplot as plt # Define Answer Key (Correct Answers for Each Question) counter_needed = 2 # Load the image image_name = &quot;omr_image_dummy.jpg&quot; image = cv2.imread(image_name) ANSWER_KEY = {0: 1, 1: 1, 2: 1, 3: 3, 4: 1, 5: 0, 6: 2, 7: 1, 8: 3, 9: 2} # Convert the image to grayscale gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Apply Gaussian Blur blurred = cv2.GaussianBlur(gray, (5, 5), 0) # Apply Canny edge detection edged = cv2.Canny(blurred, 75, 200) # Find contours cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = imutils.grab_contours(cnts) docCnt = None count_counter = 0 # Ensure that at least one contour was found if len(cnts) &gt; 0: # Sort contours by area in descending order cnts = sorted(cnts, key=cv2.contourArea, reverse=True) # Loop over sorted contours for c in cnts: # Approximate the contour peri = cv2.arcLength(c, True) approx = cv2.approxPolyDP(c, 0.02 * peri, True) # If the approximated contour has four points, we assume it's the document if len(approx) == 4 and count_counter &lt; 6: if count_counter == counter_needed: print(count_counter) docCnt = approx break count_counter += 1 paper = four_point_transform(image, docCnt.reshape(4, 2)) warped = four_point_transform(gray, docCnt.reshape(4, 2)) # apply Otsu's thresholding method to binarize the warped piece of paper thresh = cv2.threshold(warped, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1] show_image(&quot;Threshold Image 2&quot;, thresh) contour_image = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR) # Convert to BGR for colored visualization # Apply thresholding (assuming `thresh` is already computed) cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = imutils.grab_contours(cnts) # print(cnts) for idx, c in enumerate(cnts): color = (np.random.randint(0, 255), np.random.randint(0, 255), np.random.randint(0, 255)) cv2.drawContours(contour_image, [c], -1, color, 2) # Random color for each contour # Get bounding box for labeling x, y, w, h = cv2.boundingRect(c) # print(x, y, w, h) # Add index number to each contour cv2.putText(contour_image, f&quot;{idx}&quot;, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 
0.5, (255, 255, 255), 2) # Display the image with all contours show_image(&quot;Counter 1&quot;, contour_image) </code></pre>
<python><opencv><image-processing><computer-vision><omr>
2025-01-31 07:55:15
0
675
NoobCoder
79,401,951
8,668,595
How to correctly change layers in onnx model and restore them in onnxruntime
<p>I want to change onnx model and then restore the weights when using it with onnxruntime. However, the model doesn't seem to be changed.</p> <p>First, I load the existing model and change the weights based on <a href="https://github.com/onnx/onnx/issues/2978" rel="nofollow noreferrer">this</a> output. Then I save the model and try to check if the outputs are the same as in the original model.</p> <p>The model could be restored (if restore_layer is True) with onnxruntime as in <a href="https://stackoverflow.com/a/74139719/8668595">this</a> example based on <a href="https://github.com/microsoft/onnxruntime/issues/14545" rel="nofollow noreferrer">this</a> output, but it doesn't seem to work, they are always the same and the model doesn't seem to be changed. The expected behavior is when change_layer = True and restore_layer = False the script should raise an error, but it doesn't.</p> <p>I tried to change a part of a model weights when loading it with onnxruntime like in <a href="https://stackoverflow.com/a/74139719/8668595">this</a> example based on <a href="https://github.com/microsoft/onnxruntime/issues/14545" rel="nofollow noreferrer">this</a> issue, but it doesn't seem to work.</p> <p>How can I fix it?</p> <p>Here is the code to reproduce.</p> <pre class="lang-py prettyprint-override"><code>import onnx # onnx==&quot;1.17.0&quot; import torch # torch==&quot;2.3.0&quot; from onnx import numpy_helper import numpy as np # numpy==1.26.4 import torchvision.models as models # torchvision==0.18.0 from torchvision.models import ResNet18_Weights change_layer = True restore_layer = False no_changed_layers = 20 model = models.resnet18(weights=ResNet18_Weights.DEFAULT) model.eval() input_data = torch.randn(1, 3, 224, 224) torch.onnx.export(model, input_data, &quot;some_model.onnx&quot;, verbose=False) MODEL_PATH = &quot;some_model.onnx&quot; _model = onnx.load(MODEL_PATH) INTIALIZERS=_model.graph.initializer Weight= {} initializer_num_name = {} for num, initializer in enumerate(INTIALIZERS): W = numpy_helper.to_array(initializer) Weight[num] = W initializer_num_name[num] = initializer.name Weight_num_name = {} if change_layer: for weight_zeros in range(0, no_changed_layers): old_weight = numpy_helper.to_array(INTIALIZERS[weight_zeros]) new_weight = np.zeros_like(Weight[weight_zeros]) updated_weight = numpy_helper.from_array(new_weight) updating_weight_name = _model.graph.initializer[weight_zeros].name updated_weight.name = updating_weight_name Weight_num_name[weight_zeros] = updating_weight_name _model.graph.initializer[weight_zeros].CopyFrom(updated_weight) onnx.save(_model, &quot;model.onnx&quot;) import onnxruntime import numpy as np if restore_layer: options = onnxruntime.SessionOptions() # options.log_verbosity_level=2 # options.log_severity_level=0 # options.enable_profiling=1 ortvalue_initializers = [] for num in range(0, no_changed_layers): initializer = onnxruntime.OrtValue.ortvalue_from_numpy( Weight[num] ) ortvalue_initializers.append(initializer) options.add_initializer(Weight_num_name[num], initializer) import logging logging.basicConfig(level=logging.DEBUG) onnx_session3 = onnxruntime.InferenceSession(&quot;model.onnx&quot;, sess_options=options, providers=[&quot;CPUExecutionProvider&quot;]) else: onnx_session3 = onnxruntime.InferenceSession(&quot;model.onnx&quot;, providers=[&quot;CPUExecutionProvider&quot;]) onnx_session = onnxruntime.InferenceSession(MODEL_PATH) input = onnx_session.get_inputs()[0] generated_dummy_input_data = torch.randn(size=(input.shape[0], input.shape[1], 
input.shape[2], input.shape[3])) onnx_inputs = {input.name: generated_dummy_input_data.numpy()} onnx_output = onnx_session.run(None, onnx_inputs) onnx_inputs3 = {onnx_session3.get_inputs()[0].name: generated_dummy_input_data.numpy()} onnx_output3 = onnx_session3.run(None, onnx_inputs3) def softmax(x: np.ndarray, axis: int = -1) -&gt; np.ndarray: x_max = np.max(x, axis=axis, keepdims=True) tmp = np.exp(x - x_max) s = np.sum(tmp, axis=axis, keepdims=True) return tmp / s onnx_model_predictions = onnx_output[0][0] onnx_model_predictions3 = onnx_output3[0][0] diff = np.abs(onnx_model_predictions - onnx_model_predictions3) assert (diff &gt; 1e-2).any(), &quot;Big difference between logits of an original model and onnx one: \n%s\n%s&quot;%(onnx_model_predictions[:50], onnx_model_predictions3[:50]) print(&quot;all good&quot;) </code></pre>
<python><pytorch><onnx><onnxruntime>
2025-01-31 06:41:44
1
307
Eugene
79,401,801
1,570,408
ciphering services not available error on pysnmp v3 get using pysnmp7 - pycryptodome and cryptography not installed
<p>I am unable to perform an SNMP GET using pysnmp 7; cryptography and pycryptodome are installed.</p> <p>I am getting the error indication &quot;Ciphering services not available&quot;.</p> <p>Referring to <a href="https://stackoverflow.com/questions/79030219/ciphering-services-not-available-error-on-pysnmp-v3-walk">Ciphering services not available error on pysnmp v3 walk</a> did not help.</p> <pre class="lang-none prettyprint-override"><code>$# pip show pysnmp Name: pysnmp Version: 7.1.16 $:/etc/zabbix# pip show cryptography Name: cryptography Version: 42.0.5 $# pip show pycryptodome Name: pycryptodome Version: 3.21.0 </code></pre> <pre><code>import asyncio from pysnmp.hlapi.v3arch.asyncio import * async def main(): iterator = get_cmd( SnmpEngine(), UsmUserData( 'admin', authKey='privateprivate', privKey='privateprivate', authProtocol=usmHMACMD5AuthProtocol, privProtocol=usmDESPrivProtocol ), await UdpTransportTarget.create((&quot;10.20.7.108&quot;, 161)), ContextData(), ObjectType(ObjectIdentity('1.3.6.1.4.1.248.11.22.1.8.10.2.0')), ) errorIndication, errorStatus, errorIndex, varBinds = await iterator print(f&quot;Error Indication: {errorIndication}, Error Status: {errorStatus}, Error Index: {errorIndex}&quot;) print(f&quot;VarBinds: {varBinds}&quot;) if __name__ == '__main__': try: asyncio.run(main()) except RuntimeError: pass # to avoid RuntimeError: Event loop is closed print('out') </code></pre> <p>The output is:</p> <pre><code>Error Indication: Ciphering services not available, Error Status: 0, Error Index: 0 VarBinds: () out </code></pre>
<python><cryptography><snmp><pysnmp><pycryptodome>
2025-01-31 04:59:44
1
6,107
Ulysses
79,401,768
6,141,238
In Python, what is the fastest way to append numerical tabular data to a data file?
<p>I am trying to write a script that appends new rows of a table to an existing file of tabular data, so that not all progress is lost if an error is encountered. Here are several ways to do this in Python:</p> <ul> <li>Write the data to a text file (a module-free method).</li> <li>Write the data to a CSV file using <code>writerows</code> of the <code>csv</code> module.</li> <li>Write the data to a CSV file using <code>polars</code>.</li> <li>Structure the data as a <code>pandas</code> DataFrame and <code>pickle</code> it.</li> <li>Structure the data as a <code>polars</code> DataFrame and <code>pickle</code> it.</li> <li>Write the data to a database using <code>sqlite3</code> (or another database library).</li> <li>Structure the data as a <code>pandas</code> DataFrame and save it as a Parquet file using <code>fastparquet</code>.</li> </ul> <p>The following code measures a runtime of each of these options, excluding the time required to build the data, header, and any single-row DataFrames. (I have tried to make this code both concise and readable.)</p> <pre><code>from numpy.random import rand from time import perf_counter as pc import csv import polars as pl import pandas as pd import pickle as pkl import sqlite3 import fastparquet as fp header = list('ABCDE') n_rows = 10**3 data = rand(n_rows, 5) df_pl_header = pl.DataFrame(schema=header) df_pl = pl.DataFrame(data, schema=header) df_pd = pd.DataFrame(data, columns=header) print('----------------------------------------------------------------------') print('[no module]') fname = 'file0.txt' pc0 = pc() with open(fname, mode='w') as f: f.write('\t'.join(header) + '\n') f.flush() for k in range(n_rows): f.write('\t'.join(map(str, data[k])) + '\n') f.flush() print(f'To write data to file: {pc()-pc0} sec') print('----------------------------------------------------------------------') print('csv') fname = 'file1.csv' pc0 = pc() with open(fname, mode='w', newline='') as f: csvwriter = csv.writer(f) csvwriter.writerow(header) f.flush() for row in data: csvwriter.writerow(row) f.flush() print(f'To write data to file: {pc()-pc0} sec') print('----------------------------------------------------------------------') print('polars') fname = 'file2.csv' pc0 = pc() with open(fname, mode='w', encoding='utf8') as f: df_pl_header.write_csv(f) for k in range(n_rows): df_pl[k].write_csv(f, include_header=False) print(f'To write data to file: {pc()-pc0} sec') print('----------------------------------------------------------------------') print('pandas, pickle') fname = 'file3.pkl' pc0 = pc() df_new = df_pd.iloc[[0]] df_new.to_pickle(fname) for k in range(1, n_rows): df_new = pd.concat([df_new, df_pd.iloc[[k]]], ignore_index=True) df_new.to_pickle(fname) print(f'To write data to file: {pc()-pc0} sec') print('----------------------------------------------------------------------') print('polars, pickle') fname = 'file4.pkl' pc0 = pc() df_new = df_pl[0] df_ser = pkl.dumps(df_new) with open(fname, mode='wb') as f: f.write(df_ser) for k in range(1, n_rows): df_new = df_new.vstack(df_pl[k]) df_ser = pkl.dumps(df_new) with open(fname, mode='wb') as f: f.write(df_ser) print(f'To write data to file: {pc()-pc0} sec') print('----------------------------------------------------------------------') print('sqlite3') fname = 'file5.db' pc0 = pc() conn = sqlite3.connect(fname) cur = conn.cursor() cur.execute('DROP TABLE IF EXISTS data_table') cur.execute('CREATE TABLE IF NOT EXISTS data_table (idx integer PRIMARY KEY, A real, B real, C real, D real, E real)') for k in 
range(n_rows): cur.execute('INSERT INTO data_table VALUES (?, ?, ?, ?, ?, ?)', [k, *data[k]]) conn.commit() conn.close() print(f'To write data to file: {pc()-pc0} sec') print('----------------------------------------------------------------------') print('pandas, fastparquet') fname = 'file6.parquet' pc0 = pc() fp.write(fname, df_pd.iloc[[0]]) for k in range(1, n_rows): fp.write(fname, df_pd.iloc[[k]], append=True) print(f'To write data to file: {pc()-pc0} sec') print('----------------------------------------------------------------------') </code></pre> <p>On my machine, running this script yields the following times.</p> <pre><code>---------------------------------------------------------------------- [no module] To write data to file: 0.014803300029598176 sec ---------------------------------------------------------------------- csv To write data to file: 0.01561770000262186 sec ---------------------------------------------------------------------- polars To write data to file: 0.10634330002358183 sec ---------------------------------------------------------------------- pandas, pickle To write data to file: 0.7372764999745414 sec ---------------------------------------------------------------------- polars, pickle To write data to file: 1.6317000000271946 sec ---------------------------------------------------------------------- sqlite3 To write data to file: 7.876829700020608 sec ---------------------------------------------------------------------- pandas, fastparquet To write data to file: 26.618547399993986 sec ---------------------------------------------------------------------- </code></pre> <p>So it appears that the first two approaches are much faster than the other five alternatives for this specific task and test.</p> <p>But am I missing faster ways of implementing the five slower alternatives or other faster ways of appending numerical data to a tabular file besides those listed?</p> <p>(I am posting this question in part because I and others I know have encountered this question several times, and we do not find many posted attempts at speed comparisons between the numerous available methods, which are discussed mostly separately online.)</p>
<python><pandas><csv><append><tabular>
2025-01-31 04:24:39
0
427
SapereAude
79,401,703
3,296,786
Pytest not executing mock statement
<p>I am writing a pytest for this method:</p> <pre><code>def upload(type=&quot;&quot;, local_file=None, remote_file=None, target_config=None, verify_existence=True,throw_on_error=False): if not local_file or not target_config: return None logger = default_logger target_path = remote_file if remote_file else local_file pdir = os.path.dirname(target_path) cip = getattr( target_config, &quot;address&quot;, None) or target_config.cip if verify_existence: target_check = &quot;test -f '%s'&quot; % target_path _, _, ret = ssh(ip=cip,command=[target_check],throw_on_error=False) if ret == 0: message = (&quot;Target file %s(%s) already exists. &quot; &quot;Skipping upload&quot; % (cip, target_path)) logger.info(message) return local_file else: message = (&quot;Target file %s(%s) not present. Uploading file&quot; % (cip, target_path)) logger.info(message) for _ in range(10): _, _, ret = ssh(ip=cip, command=[&quot;mkdir&quot;, &quot;-p&quot;, pdir], throw_on_error=False) _, _, ret = scp( ip=cip,target_path=target_path,files=[local_file], log_on_error=True, throw_on_error=False, timeout=None) if ret == 0: return local_file else: message = (&quot;Failed to upload %s(%s) to %s:%s&quot; % (type, local_file, cip, target_path)) logger.error(message) if throw_on_error: raise Exception(message) </code></pre> <p>My test case:</p> <pre><code> def test_upload(self): default_logger = self.mock.CreateMockAnything() pdir =&quot;tmp/sample_path&quot; local_file = &quot;tmp/local_path&quot; cip = &quot;10.1.1.1&quot; verify_existence=True objectName.ssh(cip, command=[&quot;test&quot;, &quot;-f&quot;, 'tmp/sample_path'], throw_on_error=False).AndReturn((&quot;&quot;, &quot;&quot;,0)) self.mock.ReplayAll() result = objectName.upload( local_file= &quot;tmp/local_path&quot; , verify_existence=True) print(result) self.mock.VerifyAll() </code></pre> <p>But it fails with <code>test_upload_local_file - mox3.mox.ExpectedMethodCallsError: Verify: Expected methods never called</code>, and it also returns <code>None</code> as the result; I want <code>local_file</code> as my output. What is wrong in my test case?</p>
<python><pytest>
2025-01-31 03:22:14
0
1,156
aΨVaN
79,401,690
958,266
Coverage Discrepancies in pytest: Code Executed but Not Tracked
<p>I discovered that when I run <code>pytest</code> with coverage, the coverage report shows that the tests do not fully cover the code, even though the tests execute the relevant code.</p> <h2>Full Coverage Command (100% Coverage)</h2> <p><code>coverage run --source=/root/MyProject/lib_python -m pytest /root/MyProject/tests/lib_python/test_something.py</code></p> <h2>Partial Coverage Command (50% Coverage)</h2> <ul> <li><code>cd /root/MyProject; coverage run --source=lib_python -m pytest</code></li> <li><code>cd /root/MyProject; coverage run --source=lib_python -m pytest /root/MyProject/tests/lib_python/</code></li> </ul> <p>Using the second command, the coverage report indicates that the lines <code>return a + b</code> and <code>return a - b</code> in <code>lib_python/something.py</code> are not covered.</p> <h2>Context:</h2> <ul> <li><strong>PYTHONPATH</strong>: <code>/root/MyProject/lib_python</code></li> <li>My module doesn't have a package name. It is unusual but it's due to legacy reasons.</li> </ul> <h2>Code Files:</h2> <h3><code>lib_python/something.py</code></h3> <pre class="lang-py prettyprint-override"><code>def add(a, b): return a + b def sub(a, b): return a - b </code></pre> <h3><code>tests/lib_python/test_something.py</code></h3> <p>python</p> <pre class="lang-py prettyprint-override"><code>from something import add, sub def test_add(): assert add(1, 2) == 3 assert add(-1, 1) == 0 assert add(0, 0) == 0 def test_sub(): assert sub(2, 1) == 1 assert sub(1, 1) == 0 assert sub(0, 1) == -1 </code></pre> <p>How do I get my coverage command to find and match coverage with the actual file?</p>
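<p>A hedged sketch of the configuration sometimes used in this situation: pin the measured source in a <code>.coveragerc</code> and declare the two spellings of the package path as equivalent in the <code>[paths]</code> section. Whether this resolves the exact discrepancy here depends on how pytest and the working directory interact, so treat it as a starting point rather than a verified fix.</p>
<pre class="lang-none prettyprint-override"><code># .coveragerc placed at /root/MyProject (a starting point, not a verified fix)
[run]
source = lib_python

[paths]
source =
    lib_python
    /root/MyProject/lib_python
</code></pre>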
<python><pytest><code-coverage><coverage.py><pytest-cov>
2025-01-31 03:09:27
0
1,227
jinhwan