| QuestionId int64 (74.8M to 79.8M) | UserId int64 (56 to 29.4M) | QuestionTitle stringlengths (15 to 150) | QuestionBody stringlengths (40 to 40.3k) | Tags stringlengths (8 to 101) | CreationDate stringdate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount int64 (0 to 44) | UserExpertiseLevel int64 (301 to 888k) | UserDisplayName stringlengths (3 to 30) |
|---|---|---|---|---|---|---|---|---|
79,313,217
| 11,141,816
|
python sympy does not simplify the complex roots of unity
|
<p>Consider the following sympy expression</p>
<pre><code>from sympy import Rational, exp, I, pi

(1 - (-1)**(Rational(1,3)) + exp(2*I*pi/3))/(3*(1 + exp(I*pi/3)))
</code></pre>
<p>Neither <code>.simplify()</code> nor <code>.rewrite(exp)</code> reduces it to the identity <code>0</code>.</p>
<p>Basically, the simplification functions do not handle the complex roots of unity very well, and are a bit unstable. Is there a way to tell SymPy that the above expression is <code>0</code>, or is there a way to simplify it to <code>0</code>?</p>
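<p>For reference, a quick numeric sanity check (not the symbolic simplification being asked about) confirms that the expression does evaluate to zero:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import Rational, exp, I, pi

expr = (1 - (-1)**(Rational(1,3)) + exp(2*I*pi/3))/(3*(1 + exp(I*pi/3)))
# Evaluate numerically; the magnitude is ~0 up to floating-point noise
print(abs(complex(expr.evalf())))
</code></pre>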
|
<python><sympy>
|
2024-12-28 04:27:20
| 1
| 593
|
ShoutOutAndCalculate
|
79,313,134
| 200,783
|
How do I import from a package in a subdirectory?
|
<p>I have <a href="https://ics.uci.edu/%7Eeppstein/PADS/" rel="nofollow noreferrer">David Eppstein's PADS</a> library of Python Algorithms and Data Structures in a subdirectory next to my Python script:</p>
<pre><code>- my-script.py
- PADS
  - __init__.py
  - Automata.py
  - Util.py
  - ...
</code></pre>
<p>How can I use the <code>RegExp</code> class in my script? It's defined in <code>PADS/Automata.py</code> but when I use <code>import PADS.Automata</code>, I get an error from a line inside that file:</p>
<pre><code> File ".../PADS/Automata.py", line 9, in <module>
from Util import arbitrary_item
ModuleNotFoundError: No module named 'Util'
</code></pre>
<p><strong>Edit:</strong> Do I need to change the line inside <code>Automata.py</code> to <code>from .Util import arbitrary_item</code>? That seems to work, but then I can't run <code>python Automata.py</code> from the command line. Also, I'd rather avoid modifying the files within <code>PADS</code> if possible.</p>
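<p>One workaround I'm considering (a sketch, not sure it is the intended approach): put the <code>PADS</code> directory itself on <code>sys.path</code>, so that its internal <code>from Util import ...</code> lines resolve without editing any of its files:</p>
<pre><code>import os, sys

# Make PADS's own top-level imports (e.g. "from Util import arbitrary_item") resolvable
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "PADS"))

from Automata import RegExp  # imported as a top-level module rather than PADS.Automata
</code></pre>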
|
<python><module><package><python-import>
|
2024-12-28 02:53:32
| 1
| 14,493
|
user200783
|
79,313,103
| 8,116,305
|
asof-join with multiple inequality conditions
|
<p>I have two dataframes: <strong>a (~600M rows)</strong> and <strong>b (~2M rows)</strong>. What is the best approach for joining b onto a, when using 1 equality condition and <strong>2 inequality conditions</strong> on the respective columns?</p>
<ul>
<li>a_1 = b_1</li>
<li>a_2 >= b_2</li>
<li>a_3 >= b_3</li>
</ul>
<p>I have explored the following paths so far:</p>
<ul>
<li><strong>Polars</strong>:
<ul>
<li>join_asof(): only allows for 1 inequality condition</li>
<li>join_where() with filter(): even with a small tolerance window, the standard Polars installation exceeds the 4.3B-row limit during the join, and the polars-u64-idx installation runs out of memory (512GB) - see the sketch after this list</li>
</ul>
</li>
<li><strong>DuckDB</strong>: ASOF LEFT JOIN: also only allows for 1 inequality condition</li>
<li><strong>Numba</strong>: As the above didn't work, I tried to create my own join_asof() function - see code below. It works fine but with increasing lengths of a, it becomes prohibitively slow. I tried various different configurations of for/ while loops and filtering, all with similar results.</li>
</ul>
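<p>For reference, the <code>join_where()</code> attempt looked roughly like this (a sketch, assuming Polars &gt;= 1.7 where <code>join_where</code> is available; column names are illustrative):</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

# Inequality join followed by the equality filter; the intermediate result of the
# inequality join is what blows past the row limit / memory on the real data.
result = (
    a.join_where(
        b,
        pl.col("a_2") >= pl.col("b_2"),
        pl.col("a_3") >= pl.col("b_3"),
    )
    .filter(pl.col("a_1") == pl.col("b_1"))
)
</code></pre>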
<p>Now I'm running a bit out of ideas... What would be a more efficient way to implement this?</p>
<p>Thank you</p>
<pre class="lang-py prettyprint-override"><code>import numba as nb
import numpy as np
import polars as pl
import time
@nb.njit(nb.int32[:](nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:]), parallel=True)
def join_multi_ineq(a_1, a_2, a_3, b_1, b_2, b_3, b_4):
output = np.zeros(len(a_1), dtype=np.int32)
for i in nb.prange(len(a_1)):
for j in range(len(b_1) - 1, -1, -1):
if a_1[i] == b_1[j]:
if a_2[i] >= b_2[j]:
if a_3[i] >= b_3[j]:
output[i] = b_4[j]
break
return output
length_a = 5_000_000
length_b = 2_000_000
start_time = time.time()
output = join_multi_ineq(a_1=np.random.randint(1, 1_000, length_a, dtype=np.int32),
a_2=np.random.randint(1, 1_000, length_a, dtype=np.int32),
a_3=np.random.randint(1, 1_000, length_a, dtype=np.int32),
b_1=np.random.randint(1, 1_000, length_b, dtype=np.int32),
b_2=np.random.randint(1, 1_000, length_b, dtype=np.int32),
b_3=np.random.randint(1, 1_000, length_b, dtype=np.int32),
b_4=np.random.randint(1, 1_000, length_b, dtype=np.int32))
print(f"Duration: {(time.time() - start_time):.2f} seconds")
</code></pre>
|
<python><dataframe><python-polars><numba><duckdb>
|
2024-12-28 02:24:15
| 2
| 343
|
usdn
|
79,313,033
| 10,941,410
|
How to repair a PDF file that was transmitted with a wrong MIME type
|
<p>I have a service A (flask) that transmits a file to service B (Django) using python's <code>requests</code> library.</p>
<pre class="lang-py prettyprint-override"><code>from typing import TYPE_CHECKING
import magic
if TYPE_CHECKING:
from werkzeug.datastructures import FileStorage
from backendssc.access.viewer_context import ViewerContext
def multipartify(data: DictData, file: FileStorage) -> DictData:
converted = {}
for key, value in data.items():
converted[key] = (None, value) # multipart representation of value
converted["file"] = ( # type: ignore
file.filename,
file.stream,
magic.from_buffer(file.read(2048), mime=True),
)
return converted
@retry_send_request()
def send_request(
method: str,
url: str,
queries: Optional[dict] = None,
body: Optional[dict] = None,
file: Optional[dict] = None,
api_token: Optional[str] = None,
jwt_token: Optional[str] = None,
extra_headers: Optional[Dict[str, Any]] = None,
tries: int = 1,
exp_backoff: int = 1,
) -> Response:
# Process header, queries and body
header = attach_api_auth_data(
url=url,
jwt_token=jwt_token,
api_token=api_token,
**(extra_headers if extra_headers else {}),
)
url = add_url_params(url, queries if queries else {})
response = requests.request(
method=method,
url=url,
json=body,
headers=header,
files=file,
)
return response
def create(
self,
*,
body: EvidenceLockerCreateRequestBody,
file: FileStorage,
viewer_context: ViewerContext,
raise_for_status: bool = False,
) -> Tuple[EvidenceLockerCreateApiResponse, bool]:
payload = multipartify(
data={
**body.as_dict(remove_null=True),
"ip_address": viewer_context.ip_address,
"user_agent": viewer_context.user_agent,
},
file=file,
)
response = send_request(
method="POST",
url=self.url,
file=payload,
jwt_token=viewer_context.jwt_token,
)
if raise_for_status:
response.raise_for_status()
return response.json(), response.ok
</code></pre>
<p>If I remove the MIME type from <code>file</code>, i.e....</p>
<pre class="lang-py prettyprint-override"><code>converted["file"] = (file.filename, file.stream)
</code></pre>
<p>...then the file can be downloaded back from s3 without problems.</p>
<p>I'm trying to repair the files that were transmitted with the wrong MIME type so folks don't need to reupload them.</p>
<p>I've tried a couple of different libraries/solutions such as <code>convertapi</code>, <code>pikepdf</code>, <code>ghostscript</code>, <code>I love pdf</code> (and similar tools)... but no luck so far.</p>
<p>Any ideas?</p>
|
<python><pdf><mime-types><corruption><corrupt-data>
|
2024-12-28 00:44:56
| 0
| 305
|
Murilo Sitonio
|
79,312,947
| 8,276,973
|
PyCharm creates new thread when I attach debugger to running process
|
<p>About three months ago I wrote a set of steps to attach the PDB debugger to a running process in Python (in Ubuntu 20.04). These instructions worked well, but I haven't used them in about three weeks. Today I tried this again, and instead it created a new thread, and I can't debug.</p>
<p>These are the instructions I wrote three months ago:</p>
<hr />
<ol>
<li><p>Put a line at the point where you want to start debugging:</p>
<p>input("Press Enter to continue...")</p>
</li>
</ol>
<p>and put a breakpoint right below that line.</p>
<ol start="2">
<li><p>On Ubuntu 20.04, start by issuing the command "echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope" as the article says.</p>
</li>
<li><p>Start the program in the PyCharm terminal (Alt-F12).</p>
</li>
<li><p>At the command prompt, enter "python3 Trans_01.py"</p>
</li>
<li><p>When it hits the breakpoint with "Press Enter to continue..." DO NOT PRESS ENTER. Instead, attach the debugger with "Run" then "Attach to Process." Now the debugger is attached, but the terminal now shows "Connected" but the prompt to press Enter to continue is gone.</p>
</li>
<li><p>Press Alt-F12 again and the terminal reappears. Press Enter to continue, and it stops at my first breakpoint.</p>
</li>
</ol>
<p>See <a href="https://www.jetbrains.com/help/pycharm/attach-to-process.html" rel="nofollow noreferrer">https://www.jetbrains.com/help/pycharm/attach-to-process.html</a></p>
<hr />
<p>Today when I followed those steps, I was presented with these three options, and I chose to attach to 4947 Trans_01.py</p>
<p>-- 901 Python3; [New Thread 0x7fa26ffbc700 (LWP 15725)]
usr/bin/networkd-dispatcher --run-startup-triggers</p>
<p>-- 1075 usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal</p>
<p>-- 4947 Trans_01.py</p>
<pre class="lang-none prettyprint-override"><code>import sys; print('Python %s on %s' % (sys.version, sys.platform))
/usr/bin/python3.8 /snap/pycharm-community/439/plugins/python-ce/helpers/pydev/pydevd_attach_to_process/attach_pydevd.py --port 39251 --pid 4947
PYDEVD_GDB_SCAN_SHARED_LIBRARIES not set (scanning all libraries for needed symbols).
Running: gdb --nw --nh --nx --pid 4947 --batch --eval-command='set scheduler-locking off' --eval-command='set architecture auto' --eval-command='call (void*)dlopen("/snap/pycharm-community/439/plugins/python-ce/helpers/pydev/pydevd_attach_to_process/attach_linux_amd64.so", 2)' --eval-command='sharedlibrary attach_linux_amd64' --eval-command='call (int)DoAttach(0, "import sys;sys.path.append("/snap/pycharm-community/439/plugins/python-ce/helpers/pydev");sys.path.append("/snap/pycharm-community/439/plugins/python-ce/helpers/pydev/pydevd_attach_to_process");import attach_script;attach_script.attach(port=39251, host="127.0.0.1", protocol="", debug_mode="");", 0)'
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f836e5451f2 in __GI___libc_read (fd=0, buf=0xe349680, nbytes=1024) at ../sysdeps/unix/sysv/linux/read.c:26
The target architecture is set automatically (currently i386:x86-64)
26  ../sysdeps/unix/sysv/linux/read.c: No such file or directory.
$1 = (void *) 0xe2cb080
[Detaching after fork from child process 4966]
[New Thread 0x7f8369af0700 (LWP 4967)]
[New Thread 0x7f83692ef700 (LWP 4968)]</code></pre>
<p>But when I return to the PyCharm debug window, I am not able to debug. F7, F8 and F9 do not work at all. This is not how it worked before, and I don't know why.</p>
<p>How can I attach the debugger to the running process and debug? It appears to be assigned to a new thread, unlike several months ago. Can I switch to that thread, and how?</p>
<p>Thanks very much for any help.</p>
|
<python><python-3.x><pycharm>
|
2024-12-27 23:03:26
| 0
| 2,353
|
RTC222
|
79,312,881
| 3,990,451
|
boost:python expose const std::string&
|
<p>I understand that std::string is exposed to Python by implicit converters that don't need to be registered.
However, const std::string&amp; isn't.</p>
<p>I have</p>
<pre><code>class Cal {
    const std::string&amp; get_name() const;
};
</code></pre>
<p>and the equivalent Python wrapper:</p>
<pre><code>namespace bp=boost::python;

bp::class_&lt;Cal, Cal*&gt;("Cal")
    .add_property("name",
        make_function(&amp;Cal::get_name, bp::return_value_policy&lt;bp::reference_existing_object&gt;()));
</code></pre>
<p>and the registration of the converter</p>
<pre><code>bp::to_python_converter<const Cal*, Cal_ptr_to_python_obj>{};
</code></pre>
<p>However, from Python when I call</p>
<pre><code>cal.name
</code></pre>
<p>I get this error:</p>
<pre class="lang-none prettyprint-override"><code>No Python class registered for C++ class std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >
</code></pre>
<p>which I don't understand, because the std::string converter is registered implicitly.</p>
<p>I tried to register a converter for const std::string&amp;, but that fails to compile because the boost::python code tries to take a pointer to const std::string&amp; and then asserts "pointer to reference".</p>
|
<python><c++><boost><boost-python>
|
2024-12-27 22:13:02
| 0
| 982
|
MMM
|
79,312,664
| 10,607,049
|
How is memory usage calculated for function calls in Python's `tracemalloc`?
|
<p>I have a question about how Python's <code>tracemalloc</code> calculates memory usage. I will give an example. My code is structured like this:</p>
<pre class="lang-py prettyprint-override"><code>def measure_performance(func):
@wraps(func)
def wrapper_func(*args, **kwargs):
tracemalloc.start()
start_time = perf_counter()
res = func(*args, **kwargs)
current, peak = tracemalloc.get_traced_memory()
finish_time = perf_counter()
print("\n")
print(f'Function: {func.__name__}')
print(f'Memory usage:\t\t {current / 10**6:.6f} MB \n'
f'Peak memory usage:\t {peak / 10**6:.6f} MB ')
print(f'Time elapsed is seconds: {finish_time - start_time:.6f}')
print(f'Run on: {datetime.today().strftime("%Y-%m-%d %H:%M:%S")}')
print(f'{"-"*40}')
tracemalloc.stop()
return res
return wrapper_func
class ClassName:
def __init__():
...
@measure_performance
def A():
...
@measure_performance
def B():
...
@measure_performance
def C():
.
.
.
@measure_performance
def main():
classname = ClassName()
a = classname.A()
b = classname.B()
c = classname.C()
...
if __name__ == "__main__":
main()
</code></pre>
<p>On running the above code, I am getting this kind of result</p>
<pre><code>Function: A
Memory usage: 191.501034 MB
Peak memory usage: 352.158040 MB
Time elapsed is seconds: 79.373798
Run on: 2024-12-28 00:37:05
Function: B
Memory usage: 1.591783 MB
Peak memory usage: 14.774033 MB
Time elapsed is seconds: 2.095723
Run on: 2024-12-28 00:37:07
Function: C
Memory usage: 2.196528 MB
Peak memory usage: 750.238682 MB
Time elapsed is seconds: 190.336022
Run on: 2024-12-28 00:40:18
Function: main
Memory usage: 0.000000 MB
Peak memory usage: 0.000000 MB
Time elapsed is seconds: 272.908980
Run on: 2024-12-28 00:40:18
</code></pre>
<p>My question: for each function we get some memory usage and peak memory usage, but for <code>main</code> we get 0.0 for both. Why is that, even though we call <code>ClassName</code> in <code>main</code>? Is this normal behavior, and how does this work?</p>
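<p>A minimal standalone check of the behaviour I am asking about (does <code>get_traced_memory()</code> report zeros once <code>stop()</code> has been called?):</p>
<pre class="lang-py prettyprint-override"><code>import tracemalloc

tracemalloc.start()
blocks = [bytearray(10_000) for _ in range(100)]  # allocate while tracing
tracemalloc.stop()  # an inner decorated call ends tracing and clears the traces

# Not tracing any more, so nothing is reported:
print(tracemalloc.get_traced_memory())  # (0, 0)
</code></pre>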
|
<python><memory-management>
|
2024-12-27 19:42:15
| 0
| 477
|
Pritam Sinha
|
79,312,644
| 5,561,472
|
Extracting substring between optional substrings
|
<p>I need to extract a substring that lies between two other substrings. But I would like to make the border substrings optional - if neither is found, the whole string should be extracted.</p>
<pre class="lang-py prettyprint-override"><code>patt = r"(?:bc)?(.*?)(?:ef)?"
a = re.sub(patt, r"\1", "bcdef") # d - as expected
a = re.sub(patt, r"\1", "abcdefg") # adg - as expected
# I'd like to get `d` only without `a` and `g`
# Trying to remove `a`:
patt = r".*(?:bc)?(.*?)(?:ef)?"
a = re.sub(patt, r"\1", "bcdef") # empty !!!
a = re.sub(patt, r"\1", "abcdef") # empty !!!
# make non-greedy
patt = r".*?(?:bc)?(.*?)(?:ef)?"
a = re.sub(patt, r"\1", "bcdef") # d - as expected
a = re.sub(patt, r"\1", "abcdef") # `ad` instead of `d` - `a` was not captured
# make `a` non-captured
patt = r"(?:.*?)(?:bc)?(.*?)(?:ef)?"
a = re.sub(patt, r"\1", "abcdef") # ad !!! `a` still not captured
</code></pre>
<p>I also tried to use <code>re.search</code> without any success.</p>
<p>How can I extract <code>d</code> only (a substring between optional substrings <code>bc</code> and <code>ef</code>) from <code>abcdefg</code>?</p>
<p>The same pattern should return <code>hij</code> when applied to <code>hij</code>.</p>
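<p>For comparison, one direction I have been looking at (a sketch using <code>re.fullmatch</code> so the optional borders are anchored and consumed rather than echoed back):</p>
<pre class="lang-py prettyprint-override"><code>import re

patt = re.compile(r"(?:.*?bc)?(.*?)(?:ef.*)?")

for s in ["bcdef", "abcdefg", "hij"]:
    print(s, "->", patt.fullmatch(s).group(1))
# bcdef   -> d
# abcdefg -> d
# hij     -> hij
</code></pre>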
|
<python><regex><regex-greedy>
|
2024-12-27 19:26:05
| 1
| 6,639
|
Andrey
|
79,312,245
| 17,309,108
|
How to know the simultaneous query number limit of an HTTP ClickHouse connection?
|
<p>I used the following code in a Jupyter notebook to send several queries simultaneously:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import clickhouse_connect
async def query(order: int):
await asyncio.sleep(0.2 * order)
return await client.query(sleep_query)
n = 10
client = await clickhouse_connect.get_async_client(
host=config["host"],
username=config["username"],
password=config["password"],
port=config["port"],
database=config["database"],
secure=False,
send_receive_timeout=7200,
)
sleep_query = "SELECT sleep(3)"
await asyncio.gather(*(query(i) for i in range(n)))
</code></pre>
<p>As long as n does not exceed 6 it works fine, but beyond that I may get the following exception:</p>
<pre><code>---------------------------------------------------------------------------
OperationalError Traceback (most recent call last)
Cell In[31], line 13
1 client = await clickhouse_connect.get_async_client(
2 host=config["host"],
3 username=config["username"],
(...)
8 send_receive_timeout=7200,
9 )
11 sleep_query = "SELECT sleep(3)"
---> 13 await asyncio.gather(*(query(i) for i in range(10)))
Cell In[30], line 3, in query(order)
1 async def query(order: int):
2 await asyncio.sleep(0.2 * order)
----> 3 return await client.query(sleep_query)
File ~\miniforge3\envs\sample_size_calculator\Lib\site-packages\clickhouse_connect\driver\asyncclient.py:96, in AsyncClient.query(self, query, parameters, settings, query_formats, column_formats, encoding, use_none, column_oriented, use_numpy, max_str_len, context, query_tz, column_tzs, external_data)
89 return self.client.query(query=query, parameters=parameters, settings=settings, query_formats=query_formats,
90 column_formats=column_formats, encoding=encoding, use_none=use_none,
91 column_oriented=column_oriented, use_numpy=use_numpy, max_str_len=max_str_len,
92 context=context, query_tz=query_tz, column_tzs=column_tzs,
93 external_data=external_data)
95 loop = asyncio.get_running_loop()
---> 96 result = await loop.run_in_executor(self.executor, _query)
97 return result
File ~\miniforge3\envs\sample_size_calculator\Lib\concurrent\futures\thread.py:59, in _WorkItem.run(self)
56 return
58 try:
---> 59 result = self.fn(*self.args, **self.kwargs)
60 except BaseException as exc:
61 self.future.set_exception(exc)
File ~\miniforge3\envs\sample_size_calculator\Lib\site-packages\clickhouse_connect\driver\asyncclient.py:89, in AsyncClient.query.<locals>._query()
88 def _query():
---> 89 return self.client.query(query=query, parameters=parameters, settings=settings, query_formats=query_formats,
90 column_formats=column_formats, encoding=encoding, use_none=use_none,
91 column_oriented=column_oriented, use_numpy=use_numpy, max_str_len=max_str_len,
92 context=context, query_tz=query_tz, column_tzs=column_tzs,
93 external_data=external_data)
File ~\miniforge3\envs\sample_size_calculator\Lib\site-packages\clickhouse_connect\driver\client.py:222, in Client.query(self, query, parameters, settings, query_formats, column_formats, encoding, use_none, column_oriented, use_numpy, max_str_len, context, query_tz, column_tzs, external_data)
220 return response.as_query_result()
221 return QueryResult([response] if isinstance(response, list) else [[response]])
--> 222 return self._query_with_context(query_context)
File ~\miniforge3\envs\sample_size_calculator\Lib\site-packages\clickhouse_connect\driver\httpclient.py:229, in HttpClient._query_with_context(self, context)
227 fields = None
228 headers['Content-Type'] = 'text/plain; charset=utf-8'
--> 229 response = self._raw_request(body,
230 params,
231 headers,
232 stream=True,
233 retries=self.query_retries,
234 fields=fields,
235 server_wait=not context.streaming)
236 byte_source = RespBuffCls(ResponseSource(response)) # pylint: disable=not-callable
237 context.set_response_tz(self._check_tz_change(response.headers.get('X-ClickHouse-Timezone')))
File ~\miniforge3\envs\sample_size_calculator\Lib\site-packages\clickhouse_connect\driver\httpclient.py:457, in HttpClient._raw_request(self, data, params, headers, method, retries, stream, server_wait, fields, error_handler)
455 if response.status in (429, 503, 504):
456 if attempts > retries:
--> 457 self._error_handler(response, True)
458 logger.debug('Retrying requests with status code %d', response.status)
459 elif error_handler:
File ~\miniforge3\envs\sample_size_calculator\Lib\site-packages\clickhouse_connect\driver\httpclient.py:385, in HttpClient._error_handler(self, response, retried)
382 else:
383 err_str = 'The ClickHouse server returned an error.'
--> 385 raise OperationalError(err_str) if retried else DatabaseError(err_str) from None
OperationalError: HTTPDriver for https://ch.<domain>.com:443 returned response code 429
</code></pre>
<p>As far as I am aware, HTTP error 429 means "Too many requests". But how can I get the limit parameters explicitly, via a ClickHouse SELECT query or any other way, so that I can set up an asyncio semaphore at the maximum allowed number and avoid the exception?</p>
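<p>For context, the semaphore workaround I have in mind is roughly this (a sketch; <code>MAX_CONCURRENT</code> is the server-side limit I am trying to discover, and <code>client</code>/<code>sleep_query</code> are from the snippet above):</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

MAX_CONCURRENT = 6  # placeholder for whatever limit the server actually enforces
sem = asyncio.Semaphore(MAX_CONCURRENT)

async def query(order: int):
    await asyncio.sleep(0.2 * order)
    async with sem:  # never more than MAX_CONCURRENT queries in flight
        return await client.query(sleep_query)
</code></pre>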
|
<python><python-asyncio><clickhouse>
|
2024-12-27 15:56:55
| 0
| 782
|
Vovin
|
79,312,159
| 8,229,029
|
How to run a parallel for loop in Python when filling an array?
|
<p>I am using the metpy package to calculate many different weather parameters for many different locations across North America for many different hours. I want to fill arrays containing these weather parameters that look like: [hrs,stns]. I am not able to vectorize these operations, unfortunately (see metpy package documentation and notice that many of these calculations cannot operate on the original arrays this data normally comes in).</p>
<p>Here is a very simple example of my code. How would I run the following code in parallel?</p>
<pre><code>wx_array1 = np.empty(shape=(3000,600))
wx_array2 = np.empty(shape=(3000,600))

for hr in range(3000):
    for stn in range(600):
        wx_array1[hr,stn] = hr * stn
        wx_array2[hr,stn] = hr + stn
</code></pre>
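<p>One shape such a parallel version could take (a sketch, not benchmarked, assuming the real per-hour work is independent across hours as it is here) is to parallelise over the hour index and stack the per-hour results afterwards:</p>
<pre><code>import numpy as np
from joblib import Parallel, delayed

def one_hour(hr, n_stn=600):
    row1 = np.empty(n_stn)
    row2 = np.empty(n_stn)
    for stn in range(n_stn):
        row1[stn] = hr * stn   # stand-ins for the per-station metpy calculations
        row2[stn] = hr + stn
    return row1, row2

results = Parallel(n_jobs=-1)(delayed(one_hour)(hr) for hr in range(3000))
wx_array1 = np.stack([r[0] for r in results])
wx_array2 = np.stack([r[1] for r in results])
</code></pre>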
|
<python><numpy><for-loop><parallel-processing>
|
2024-12-27 15:18:05
| 1
| 1,214
|
user8229029
|
79,312,133
| 10,474,998
|
Getting all leaf words (reverse stemming) into one Python List
|
<p>Along the same lines as the solution provided <a href="https://stackoverflow.com/questions/65559962/get-all-leaf-words-for-a-stemmed-keyword">in this link</a>, I am trying to get all leaf words of one stem word. I am using the community-contributed (@Divyanshu Srivastava) package <code>get_word_forms</code>.</p>
<p>Imagine I have a shorter sample word list as follows:</p>
<pre><code>my_list = [' jail', ' belief',' board',' target', ' challenge', ' command']
</code></pre>
<p>If I work through it manually, I go word by word, which is very time-consuming if I have a list of 200 words:</p>
<pre><code>get_word_forms("command")
</code></pre>
<p>and get the following output:</p>
<pre><code>{'n': {'command',
'commandant',
'commandants',
'commander',
'commanders',
'commandership',
'commanderships',
'commandment',
'commandments',
'commands'},
'a': set(),
'v': {'command', 'commanded', 'commanding', 'commands'},
'r': set()}
</code></pre>
<p>'n' is noun, 'a' is adjective, 'v' is verb, and 'r' is adverb.</p>
<p>If I try to reverse-stem the entire list in one go:</p>
<pre><code>[get_word_forms(word) for word in sample]
</code></pre>
<p>I only get empty sets back:</p>
<pre><code>[{'n': set(), 'a': set(), 'v': set(), 'r': set()},
{'n': set(), 'a': set(), 'v': set(), 'r': set()},
{'n': set(), 'a': set(), 'v': set(), 'r': set()},
{'n': set(), 'a': set(), 'v': set(), 'r': set()},
{'n': set(), 'a': set(), 'v': set(), 'r': set()},
{'n': set(), 'a': set(), 'v': set(), 'r': set()},
{'n': set(), 'a': set(), 'v': set(), 'r': set()}]
</code></pre>
<p>I think I am failing to collect the output from the returned dictionaries. Eventually, I would like my output to be a single list, without breaking it down into noun, adjective, adverb, or verb:</p>
<p>something like:</p>
<pre><code>['command','commandant','commandants', 'commander', 'commanders', 'commandership',
'commanderships','commandment', 'commandments', 'commands','commanded', 'commanding', 'commands', 'jail', 'jailer', 'jailers', 'jailor', 'jailors', 'jails', 'jailed', 'jailing'.....] .. and so on.
</code></pre>
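<p>A flattening sketch I am considering (assuming each <code>get_word_forms()</code> call returns the dict-of-sets shown above; note the <code>strip()</code>, since the words in my list carry leading spaces):</p>
<pre><code>all_forms = sorted({
    form
    for word in my_list
    for forms in get_word_forms(word.strip()).values()
    for form in forms
})
print(all_forms)
</code></pre>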
|
<python><nlp><nltk>
|
2024-12-27 15:04:05
| 1
| 1,079
|
JodeCharger100
|
79,311,933
| 179,581
|
How to solve multiple and nested discriminators with Pydantic v2?
|
<p>I am trying to validate Slack interaction payloads, that look like these:</p>
<pre class="lang-yaml prettyprint-override"><code>type: block_actions
container:
type: view
...
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>type: block_actions
container:
type: message
...
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>type: view_submission
...
</code></pre>
<p>I use 3 different models for payloads coming to the same interaction endpoint:</p>
<pre class="lang-py prettyprint-override"><code>class MessageContainer(BaseModel):
type: Literal["message"]
...
class ViewContainer(BaseModel):
type: Literal["view"]
...
class MessageActions(ActionsBase):
type: Literal["block_actions"]
container: MessageContainer
...
class ViewActions(ActionsBase):
type: Literal["block_actions"]
container: ViewContainer
...
class ViewSubmission(BaseModel):
type: Literal["view_submission"]
...
</code></pre>
<p>and I was planning to use</p>
<pre class="lang-py prettyprint-override"><code>BlockActions = Annotated[
MessageActions | ViewActions,
Field(discriminator="container.type"),
]
SlackInteraction = Annotated[
ViewSubmission | BlockActions,
Field(discriminator="type"),
]
SlackInteractionAdapter = TypeAdapter(SlackInteraction)
</code></pre>
<p>but cannot make it work with v2.10.4.</p>
<p>Do I have to dispatch them manually, or is there a way to solve this with Pydantic?</p>
|
<python><slack><pydantic>
|
2024-12-27 13:33:19
| 1
| 11,136
|
Andy
|
79,311,930
| 7,082,564
|
Empty Plotly Candlestick chart with yfinance.download()
|
<p>I am trying to plot a simple Candlestick chart from OHLCV data retrieved by yfinance.</p>
<p>This is my code:</p>
<pre><code>import yfinance as yf
import pandas as pd
import plotly.graph_objects as go
from datetime import datetime

tf = '1d'       # Time frame (daily)
asset = 'AAPL'  # Asset ticker (e.g., Apple)
start = '2019-01-01'                        # Start date
end = datetime.now().strftime('%Y-%m-%d')   # End date is current date

df = yf.download(asset, start=start, end=end, interval=tf)
df['pct_chg'] = df['Close'].pct_change() * 100
df.index.name = 'timestamp'

# now plot the chart
hover_text = [f"Open: {open}&lt;br&gt;Close: {close}&lt;br&gt;Pct: {pct_chg:.2f}%" for open, close, pct_chg in zip(df['Open'], df['Close'], df['pct_chg'])]

# Create a candlestick chart using Plotly
fig = go.Figure(data=[go.Candlestick(
    x=df.index,
    open=df['Open'],
    high=df['High'],
    low=df['Low'],
    close=df['Close'],
    hovertext=hover_text,
    hoverinfo='text'
)])

# Update layout
fig.update_layout(
    title='Candlestick chart',
    xaxis_title='Date',
    yaxis_title='Price',
    xaxis_rangeslider_visible=False,
    template='plotly_dark')

# Show the plot
fig.show()
</code></pre>
<p>The data is downloaded correctly. However, the chart does not show any candles.</p>
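<p>One thing I am not sure about (an assumption, not verified): recent yfinance versions return the OHLC columns as a MultiIndex of (field, ticker), in which case <code>df['Open']</code> is a one-column DataFrame rather than a Series. A sketch of flattening it before plotting:</p>
<pre><code># Only relevant if df.columns really is a MultiIndex like ('Open', 'AAPL')
if isinstance(df.columns, pd.MultiIndex):
    df.columns = df.columns.droplevel(1)
</code></pre>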
|
<python><pandas><plotly><yfinance><candlestick-chart>
|
2024-12-27 13:30:59
| 1
| 346
|
soo
|
79,311,873
| 2,386,113
|
Vertical lines from 3D surface plot to 2D contour plot
|
<p>I have a 3D surface plot and a 2D contour plot in the xy plane. I want to create a few vertical lines from the surface plot down to the contour plot (ideally, one vertical line per contour).</p>
<p><strong>Sample requirement (copied from <a href="https://docs.tibco.com/pub/stat/14.1.0/doc/html/UserGuide/6-working-with-graphs/conceptual-overviews-contour-plots.htm" rel="nofollow noreferrer">here</a>):</strong></p>
<p><a href="https://i.sstatic.net/M6Km0nop.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6Km0nop.png" alt="enter image description here" /></a></p>
<p>However, I am not able to achieve a plot similar to the one above. My dummy minimal working example is below; I have added a red vertical line with hard-coded coordinates, but I would like similar vertical lines for each contour.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
# Create a quadratic surface
x = np.linspace(-3, 3, 100)
y = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, y)
Z = X**2 + Y**2 # Quadratic surface
Z = Z + 20
# Mask a circular region (e.g., radius > 2)
mask = (X**2 + Y**2) > 4
Z_masked = np.ma.masked_where(mask, Z)
# Create plot
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(111, projection='3d')
# Plot the surface
surf = ax.plot_surface(X, Y, Z_masked, cmap=cm.viridis, edgecolor='none', alpha=0.8)
# Add contours on the XY plane at Z_min level
z_min = Z_masked.min() - 20
contour = ax.contour(X, Y, Z_masked, zdir='z', offset=z_min, colors='black', linewidths=1)
# Calculate the center of the valid region
valid_mask = ~Z_masked.mask
x_valid = X[valid_mask]
y_valid = Y[valid_mask]
z_valid = Z_masked[valid_mask]
# Find the approximate center
x_center = np.mean(x_valid)
y_center = np.mean(y_valid)
z_center = np.mean(z_valid)
# Plot the vertical line in red
ax.plot([x_center, x_center], [y_center, y_center], [z_min, z_center],
color='red', linewidth=2, alpha=0.9)
# Add markers for clarity
ax.scatter(x_center, y_center, z_center, color='red', s=50, label='Surface Center')
ax.scatter(x_center, y_center, z_min, color='red', s=50, label='Contour Center')
# Labels and legend
ax.set_xlabel('X-axis')
ax.set_ylabel('Y-axis')
ax.set_zlabel('Z-axis')
ax.legend()
# Show plot
plt.show()
</code></pre>
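<p>For the "one line per contour" part, the direction I am exploring looks roughly like this (a sketch; it assumes the contour set exposes its per-level vertices via <code>contour.allsegs</code>, and recomputes the surface height as <code>x**2 + y**2 + 20</code>):</p>
<pre><code># One vertical line per contour level, from the z_min plane up to the surface
for level_segs in contour.allsegs:
    if not level_segs:
        continue
    x0, y0 = level_segs[0][0]      # first vertex of the first path at this level
    z0 = x0**2 + y0**2 + 20        # surface height at that (x, y)
    ax.plot([x0, x0], [y0, y0], [z_min, z0], color='red', linewidth=1.5)
</code></pre>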
|
<python><matplotlib><plot>
|
2024-12-27 13:04:19
| 1
| 5,777
|
skm
|
79,311,809
| 12,415,855
|
Automate Chrome Extension using Selenium?
|
<p>I am trying to automate a Chrome extension using the following code.</p>
<p>This is the Chrome extension:
<a href="https://chromewebstore.google.com/detail/email-hunter/mbindhfolmpijhodmgkloeeppmkhpmhc" rel="nofollow noreferrer">https://chromewebstore.google.com/detail/email-hunter/mbindhfolmpijhodmgkloeeppmkhpmhc</a></p>
<p>And this is the code I am trying:</p>
<pre><code>import os, sys
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
import pyautogui
path = os.path.abspath(os.path.dirname(sys.argv[0]))
fnExtension = os.path.join(path, "EmailHunter.crx")
print(f"Checking Browser driver...")
options = Options()
options.add_argument("start-maximized")
options.add_argument('--log-level=3')
options.add_extension(fnExtension)
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
waitWD = WebDriverWait (driver, 10)
link = f"https://www.orf.at"
driver.get (link)
fnExtensionIcon = os.path.join(path, "IconExtension.png")
img_location = pyautogui.locateOnScreen(fnExtension, confidence=0.5)
image_location_point = pyautogui.center(img_location)
x, y = image_location_point
pyautogui.click(x, y)
</code></pre>
<p>But I only get this error:</p>
<pre><code>(seleniumALL) C:\DEVNEU\Fiverr2024\ORDER\robalf\TRYuseExtension>python test.py
Checking Browser driver...
Traceback (most recent call last):
File "C:\DEVNEU\Fiverr2024\ORDER\robalf\TRYuseExtension\test.py", line 29, in <module>
img_location = pyautogui.locateOnScreen(fnExtension, confidence=0.5)
File "C:\DEVNEU\.venv\seleniumALL\Lib\site-packages\pyautogui\__init__.py", line 172, in wrapper
return wrappedFunction(*args, **kwargs)
File "C:\DEVNEU\.venv\seleniumALL\Lib\site-packages\pyautogui\__init__.py", line 210, in locateOnScreen
return pyscreeze.locateOnScreen(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\DEVNEU\.venv\seleniumALL\Lib\site-packages\pyscreeze\__init__.py", line 405, in locateOnScreen
retVal = locate(image, screenshotIm, **kwargs)
File "C:\DEVNEU\.venv\seleniumALL\Lib\site-packages\pyscreeze\__init__.py", line 383, in locate
points = tuple(locateAll(needleImage, haystackImage, **kwargs))
File "C:\DEVNEU\.venv\seleniumALL\Lib\site-packages\pyscreeze\__init__.py", line 231, in _locateAll_opencv
needleImage = _load_cv2(needleImage, grayscale)
File "C:\DEVNEU\.venv\seleniumALL\Lib\site-packages\pyscreeze\__init__.py", line 193, in _load_cv2
raise IOError(
...<3 lines>...
)
OSError: Failed to read C:\DEVNEU\Fiverr2024\ORDER\robalf\TRYuseExtension\EmailHunter.crx because file is missing, has improper permissions, or is an unsupported or invalid format
</code></pre>
<p>How can I automate this extension using Selenium
(make several clicks on the icon and then get the data out of the text box that shows the emails found by the extension)?</p>
|
<python><selenium-webdriver><pyautogui>
|
2024-12-27 12:27:03
| 1
| 1,515
|
Rapid1898
|
79,311,792
| 6,312,979
|
Set column names using values from a specific row in Polars
|
<p>I am bringing in the data from an Excel spreadsheet.</p>
<p>I want to make all the info from <code>df.row(8)</code> into the column header names.</p>
<p>In pandas it was just:</p>
<pre><code>c = [ 'A', 'B', 'C', 'D', 'E', 'F' ]
df.columns = c
</code></pre>
<p>Using <code>rename</code> does not seem very practical here. Is there an easier way?</p>
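<p>The kind of thing I am after, sketched (assuming the values in row 8 are usable as column names and that everything up to and including that row should be dropped):</p>
<pre><code>new_names = [str(v) for v in df.row(8)]  # values of row 8 as strings
df = df.slice(9)                         # keep only the rows after the header row
df.columns = new_names
</code></pre>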
|
<python><excel><dataframe><python-polars>
|
2024-12-27 12:19:48
| 1
| 2,181
|
diogenes
|
79,311,706
| 7,052,505
|
How to properly use poetry with different versions of python on windows?
|
<p>I am trying to migrate our projects to using Poetry. What I want to achieve is that when people open the project using PyCharm, it automatically creates the environment using Poetry and installs the dependencies. PyCharm does support this; however, when a different Python version is specified in the <code>pyproject.toml</code> file, it cannot find the executable.</p>
<p>This is due to the newly installed Python versions not being added to the <code>PATH</code> environment variable. From what I have read, this is normal.</p>
<p>As the Python launcher correctly identifies the versions, I am able to get the path of the Python interpreter executable using <code>py</code> and pass it to <code>poetry</code> with this line:</p>
<pre><code>poetry env use (py -3.9 -c "import sys; print(sys.executable)")
</code></pre>
<p>However this requires user input which goes against what I was trying to do with setting up the environment using PyCharm.</p>
<p>What is the ideal way to make <code>poetry env use 3.x</code> work on Windows when I am working with multiple versions? Does <code>pyenv</code> help with this problem?</p>
|
<python><pycharm><python-poetry><pyenv>
|
2024-12-27 11:41:49
| 0
| 350
|
Monata
|
79,311,355
| 9,960,809
|
Dynamically create modules inside __init__ if they don't exist
|
<p>I would like to dynamically create and import modules inside an inner <code>__init__.py</code> file, if one or several of a set of indexed submodules don't exist.</p>
<p>I have a set of module layers, say:</p>
<pre><code>top_module/
    __init__.py
    sub_module/
        __init__.py
        a1/
            __init__.py
            n1.py
            b1/
                __init__.py
                b1.py
            b2/
                __init__.py
                b2.py
            b3/
                __init__.py
                b3.py
        a2/
            __init__.py
            a2.py
            b*/...
        a*/...
</code></pre>
<p>Where the <code>top_module/__init__.py</code> does a <code>from .sub_module import *</code>.</p>
<p>From within the <code>top_module/sub_module/__init__.py</code>, say I have several of these <code>a*</code> folders that are just indexed iteratively. I know that I can do something like this to iterate the imports over an index, for those modules that exist:</p>
<pre class="lang-py prettyprint-override"><code>from importlib import import_module
for a in range(some_max_a):
import_module(f'.a{a}', package='top_module.sub_module')
</code></pre>
<p>And I know that I can do something like this to just ignore modules that don't exist yet:</p>
<pre class="lang-py prettyprint-override"><code>from importlib import import_module
for a in range(some_max_a):
try:
import_module(f'.a{a}', package='top_module.sub_module')
except ModuleNotFoundError:
pass
</code></pre>
<p>What I would like to be able to do is dynamically create and import these modules if they don't exist.</p>
<p>What I have so far is</p>
<pre class="lang-py prettyprint-override"><code>from importlib import import_module
from sys import modules
from types import ModuleType
PKG = 'top_module.sub_module'
for a in range(some_max_a):
try:
import_module(f'.a{a}', package=PKG)
except ModuleNotFoundError:
modules[f'{PKG}.a{a}'] = ModuleType(f'{PKG}.a{a}')
for b in range(some_max_b):
modules[f'{PKG}.a{a}.b{b}'] = ModuleType(f'{PKG}.a{a}.b{b}')
def function_all_should_have(*args, **kwargs):
raise NotImplementedError
modules[f'{PKG}.a{a}.b{b}'].function_all_should_have = function_all_should_have
import_module(f'{PKG}.a{a}')
</code></pre>
<p>I've tried with and without the <code>{PKG}</code> in the <code>import_module</code> call and/or the <code>ModuleType</code> call.</p>
<p>If I import the created package, I can see that all the modules that I'm expecting this to create exist in the <code>sys.modules</code>, but trying to access any of them with a call to something like <code>top_module.sub_module.a3.b3.function_all_should_have()</code> yields an error along the lines of</p>
<pre><code>AttributeError: module 'top_module.sub_module' has no attribute 'a3'.
</code></pre>
<p>Yet I can see that there is a <code>top_module.sub_module.a3</code> module along with all the <code>top_module.sub_module.a3.b*</code> modules.</p>
<p>I'm not really sure why the modules would be created and exist in <code>sys.modules</code> but be unreachable after importing.</p>
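<p>My current suspicion (unverified) is that putting a module in <code>sys.modules</code> does not, on its own, make it an attribute of its parent package; a minimal sketch of what I mean:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from types import ModuleType

parent = ModuleType("pkg_demo")
child = ModuleType("pkg_demo.child")
sys.modules["pkg_demo"] = parent
sys.modules["pkg_demo.child"] = child

import pkg_demo
print("pkg_demo.child" in sys.modules)  # True
print(hasattr(pkg_demo, "child"))       # False - nothing ever set the attribute
parent.child = child                    # the import system normally does this step
print(hasattr(pkg_demo, "child"))       # True
</code></pre>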
<p>If there's no easy answer I could just go back to my second example and <code>pass</code> if the modules don't exist, but I would still like to understand what's happening here. The only closest question I could find to this was <a href="https://stackoverflow.com/questions/2931950/dynamic-module-creation">dynamic module creation</a>.</p>
|
<python><python-internals>
|
2024-12-27 08:59:39
| 0
| 1,420
|
Skenvy
|
79,311,280
| 12,411,536
|
dask `var` and `std` with ddof in groupby context and other aggregations
|
<p>Suppose I want to compute variance and/or standard deviation with non-default <code>ddof</code> in a groupby context, I can do:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby("a")["b"].var(ddof=2)
</code></pre>
<p>If I want that to happen together with other aggregations, I can use:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby("a").agg(b_var = ("b", "var"), c_sum = ("c", "sum"))
</code></pre>
<p>My understanding is that to be able to use a non-default <code>ddof</code> I should create a custom aggregation.</p>
<p>Here is what I have so far:</p>
<pre class="lang-py prettyprint-override"><code>def var(ddof: int = 1) -> dd.Aggregation:
import dask.dataframe as dd
return dd.Aggregation(
name="var",
chunk=lambda s: (s.count(), s.sum(), (s.pow(2)).sum()),
agg=lambda count, sum_, sum_sq: (count.sum(), sum_.sum(), sum_sq.sum()),
finalize=lambda count, sum_, sum_sq: (sum_sq - (sum_ ** 2 / count)) / (count - ddof),
)
</code></pre>
<p>Yet, I encounter a <code>RuntimeError</code>:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby("a").agg({"b": var(2)})
</code></pre>
<p>RuntimeError('Failed to generate metadata for DecomposableGroupbyAggregation(frame=df, arg={'b': &lt;dask.dataframe.groupby.Aggregation object at 0x7fdfb8469910&gt;}</p>
<p>What am I missing? Is there a better way to achieve this?</p>
<p>Replacing <code>s.pow(2)</code> with <code>s**2</code> also results in an error.</p>
<p>Full script:</p>
<pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd
data = {
"a": [1, 1, 1, 1, 2, 2, 2],
"b": range(7),
"c": range(10, 3, -1),
}
df = dd.from_dict(data, 2)
def var(ddof: int = 1) -> dd.Aggregation:
import dask.dataframe as dd
return dd.Aggregation(
name="var",
chunk=lambda s: (s.count(), s.sum(), (s.pow(2)).sum()),
agg=lambda count, sum_, sum_sq: (count.sum(), sum_.sum(), sum_sq.sum()),
finalize=lambda count, sum_, sum_sq: (sum_sq - (sum_ ** 2 / count)) / (count - ddof),
)
df.groupby("a").agg(b_var = ("b", "var"), c_sum = ("c", "sum")) # <- no issue
df.groupby("a").agg(b_var = ("b", var(2)), c_sum = ("c", "sum")) # <- RuntimeError
</code></pre>
|
<python><dask><dask-dataframe>
|
2024-12-27 08:18:04
| 2
| 6,614
|
FBruzzesi
|
79,311,210
| 418,246
|
Why is Python Rich printing in green?
|
<p>Using <a href="https://github.com/Textualize/rich" rel="nofollow noreferrer">Rich</a> I get what I think is a spurious green output in the console.</p>
<p>In the following code, the "d:c" is coloured green, the rest of the text is as expected.</p>
<pre><code>from rich.logging import RichHandler
import logging

logging.basicConfig(
    handlers=[RichHandler()],
    format='%(message)s'
)
logging.getLogger().setLevel(logging.DEBUG)

logging.info("Why is this green d:c?")
</code></pre>
<p><a href="https://i.sstatic.net/nS6J2SyP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nS6J2SyP.png" alt="Logging output showing unexpected green foreground for just the letters d:c " /></a></p>
<p>I've read the <a href="https://rich.readthedocs.io/en/stable/introduction.html" rel="nofollow noreferrer">rich documentation</a> and I've tried:</p>
<pre><code>RichHandler(markup=False)
</code></pre>
<pre><code>logging.info(escape("Why is this green d:c?"))
</code></pre>
<p>It's the same in Jupyter, on Windows (11) and on Linux (Ubuntu 24.04).</p>
<p>I'm using Rich version 13.9.3.</p>
<p>Is this expected, and what do I need to escape or disable to stop the green output?</p>
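<p>One more knob I have not tried yet (an assumption on my part that it is even the relevant one) is turning off Rich's automatic highlighting on the handler:</p>
<pre><code>from rich.highlighter import NullHighlighter
from rich.logging import RichHandler
import logging

logging.basicConfig(
    # markup=False plus a no-op highlighter, so nothing in the message gets styled
    handlers=[RichHandler(markup=False, highlighter=NullHighlighter())],
    format='%(message)s'
)
logging.getLogger().setLevel(logging.DEBUG)
logging.info("Why is this green d:c?")
</code></pre>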
|
<python><logging><escaping><rich>
|
2024-12-27 07:41:46
| 1
| 4,641
|
Daniel James Bryars
|
79,310,820
| 2,941,322
|
Error: Explicit Conversion from Data Type ntext to vector Not Allowed in Azure SQL Database
|
<p>I'm working on implementing hybrid search functionality using Azure SQL Database. I am trying to execute <a href="https://github.com/Azure-Samples/azure-sql-db-vector-search/blob/main/Hybrid-Search/hybrid_search.py" rel="nofollow noreferrer">https://github.com/Azure-Samples/azure-sql-db-vector-search/blob/main/Hybrid-Search/hybrid_search.py</a> and encountered the following error in my Python script:</p>
<blockquote>
<p>pyodbc.DataError: ('22018', '[22018] [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]Explicit conversion from data type ntext to vector is not allowed. (529) (SQLExecDirectW); [22018] [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)')</p>
</blockquote>
<p>Following is the simple SQL Statement which is failing</p>
<pre><code>INSERT INTO dbo.documents (id, content, embedding) VALUES (?, ?, CAST(? AS VECTOR(384)));
</code></pre>
<p>This error occurs when executing a SQL query that involves vector operations. It appears that the embeddings are being treated as ntext, and the current setup doesn't recognize the vector data type.</p>
<p>Environment Details:</p>
<ul>
<li>Python version: 3.11.x</li>
<li>ODBC Driver: Microsoft ODBC Driver 18 for SQL
Server</li>
<li>Azure SQL Database</li>
</ul>
<p>Steps Taken:</p>
<ul>
<li>Verified that the database schema defines the relevant column with the appropriate data type for storing vector embeddings.</li>
<li>Ensured that the Python environment is using the latest version (18) of the ODBC driver.</li>
<li>Confirmed that the SQL Server instance supports vector data types and operations.</li>
</ul>
|
<python><sql><azure><vector><azure-sql-database>
|
2024-12-27 03:01:46
| 1
| 541
|
UVData
|
79,310,810
| 943,222
|
Windows pip can't install concurrent due to 'could not find a version that satisfies the requirement' but PyCharm can
|
<p>I am writing a small script with multithreading examples, but I can't get it to work outside the PyCharm environment. When I run it in PyCharm (Community Edition), it is able to install <a href="https://docs.python.org/3.11/library/concurrent.html" rel="nofollow noreferrer">concurrent.futures</a> and execute the code perfectly. But since I want to see the threads in the resource manager, I need to run it outside PyCharm.</p>
<p>Now my python on windows is 3.11:</p>
<pre><code>python -V
Python 3.11.2
pip -V
pip 24.3.1 from C:\Program Files\Python311\Lib\site-packages\pip (python 3.11)
</code></pre>
<p>My Python venv interpreter in PyCharm is 3.11 too.</p>
<p><a href="https://i.sstatic.net/iVNyS5yj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVNyS5yj.png" alt="pycharm interpreter 311" /></a>
My code uses the concurrent.futures module, and when I do it from PyCharm it installs it correctly without issues.
<a href="https://i.sstatic.net/8MwyWnYT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MwyWnYT.png" alt="import the package" /></a></p>
<p>And yet, when I try to do it on Windows it doesn't work.</p>
<pre><code>C:\WINDOWS\system32>pip install concurrent
ERROR: Could not find a version that satisfies the requirement concurrent (from versions: none)
ERROR: No matching distribution found for concurrent
</code></pre>
<p>I have tried the following:
<a href="https://stackoverflow.com/questions/32302379/could-not-find-a-version-that-satisfies-the-requirement-package">Could not find a version that satisfies the requirement <package></a></p>
<p>I don't know if the package name is wrong or something. But if the package name worked in PyCharm, it should work in pip as well, right?</p>
<p>I also tried to downgrade to Python 3.8, but I got the same error:</p>
<pre><code>C:/Users/jhg/AppData/Local/Programs/Python/Python38/scripts/pip install concurrent
ERROR: Could not find a version that satisfies the requirement concurrent (from versions: none)
</code></pre>
<p>I tried to search for concurrent on PyPI but I only see a <a href="https://pypi.org/project/futures/" rel="nofollow noreferrer">backport for Python 2</a>. If there is a backport for Python 2, where is the Python 3 version, and how did my PyCharm know where to find it?</p>
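<p>For what it's worth, the script itself only needs this import, which I would expect to work on a stock Python 3 install without any pip install, since concurrent.futures ships with the standard library:</p>
<pre><code># concurrent.futures is part of the Python 3 standard library
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=4) as pool:
    print(list(pool.map(lambda x: x * x, range(8))))
</code></pre>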
<p>Here is the pip install verbose output:</p>
<pre><code>26/12/2024 21:25.44 /drives/c/Users/jhg/PycharmProjects/BulkSFTP (master) $ pip3 -vvv install concurrent
Using pip 24.3.1 from C:\Program Files\Python311\Lib\site-packages\pip (python 3.11)
Defaulting to user installation because normal site-packages is not writeable
Created temporary directory: C:\Users\jhg\Documents\MobaXterm\slash\tmp\pip-build-tracker-00c_7tot
Initialized build tracking at C:\Users\jhg\Documents\MobaXterm\slash\tmp\pip-build-tracker-00c_7tot
Created build tracker: C:\Users\jhg\Documents\MobaXterm\slash\tmp\pip-build-tracker-00c_7tot
Entered build tracker: C:\Users\jhg\Documents\MobaXterm\slash\tmp\pip-build-tracker-00c_7tot
Created temporary directory: C:\Users\jhg\Documents\MobaXterm\slash\tmp\pip-install-a68ugnfx
Created temporary directory: C:\Users\jhg\Documents\MobaXterm\slash\tmp\pip-ephem-wheel-cache-l8k7j7vf
1 location(s) to search for versions of concurrent:
* https://pypi.org/simple/concurrent/
Fetching project page and analyzing links: https://pypi.org/simple/concurrent/
Getting page https://pypi.org/simple/concurrent/
Found index url https://pypi.org/simple/
Looking up "https://pypi.org/simple/concurrent/" in the cache
Request header has "max_age" as 0, cache bypassed
No cache entry available
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "GET /simple/concurrent/ HTTP/1.1" 404 13
Status code 404 not in (200, 203, 300, 301, 308)
Could not fetch URL https://pypi.org/simple/concurrent/: 404 Client Error: Not Found for url: https://pypi.org/simple/concurrent/ - skipping
Skipping link: not a file: https://pypi.org/simple/concurrent/
Given no hashes to check 0 links for project 'concurrent': discarding no candidates
ERROR: Could not find a version that satisfies the requirement concurrent (from versions: none)
Remote version of pip: 24.3.1
Local version of pip: 24.3.1
Was pip installed by pip? True
ERROR: No matching distribution found for concurrent
Exception information:
Traceback (most recent call last):
File "C:\Program Files\Python311\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 397, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "C:\Program Files\Python311\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 174, in _add_to_criteria
raise RequirementsConflicted(criterion)
pip._vendor.resolvelib.resolvers.RequirementsConflicted: Requirements conflict: SpecifierRequirement('concurrent')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Python311\Lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 95, in resolve
result = self._result = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 399, in resolve
raise ResolutionImpossible(e.criterion.information)
pip._vendor.resolvelib.resolvers.ResolutionImpossible: [RequirementInformation(requirement=SpecifierRequirement('concurrent'), parent=None)]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Program Files\Python311\Lib\site-packages\pip\_internal\cli\base_command.py", line 105, in _run_wrapper
status = _inner_run()
^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\pip\_internal\cli\base_command.py", line 96, in _inner_run
return self.run(options, args)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\pip\_internal\cli\req_command.py", line 67, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\pip\_internal\commands\install.py", line 379, in run
requirement_set = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 104, in resolve
raise error from e
pip._internal.exceptions.DistributionNotFound: No matching distribution found for concurrent
</code></pre>
|
<python><pycharm><pypi><concurrent.futures>
|
2024-12-27 02:52:13
| 1
| 816
|
D.Zou
|
79,310,713
| 5,567,893
|
How to apply capitalize with a condition?
|
<p>I'm wondering how to use the capitalize function when another column has a specific value.<br />
For example, I want to capitalize the first letter of the names of students with a Master's degree.</p>
<pre class="lang-py prettyprint-override"><code># importing pandas as pd
import pandas as pd
# creating a dataframe
df = pd.DataFrame({
'A': ['john', 'bODAY', 'minA', 'peter', 'nicky'],
'B': ['Masters', 'Graduate', 'Graduate', 'Masters', 'Graduate'],
'C': [27, 23, 21, 23, 24]
})
# Expected result
# A B C
#0 John Masters 27
#1 bODAY Graduate 23
#2 minA Graduate 21
#3 Peter Masters 23
#4 nicky Graduate 24
</code></pre>
<p>I tried it like this, but it didn't work:</p>
<pre class="lang-py prettyprint-override"><code>df[df['B']=='Masters']['A'].str = df[df['B']=='Masters']['A'].str.capitalize()
</code></pre>
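<p>The kind of conditional, in-place assignment I think I need (a sketch, untested):</p>
<pre class="lang-py prettyprint-override"><code>mask = df['B'] == 'Masters'
df.loc[mask, 'A'] = df.loc[mask, 'A'].str.capitalize()
</code></pre>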
|
<python><dataframe>
|
2024-12-27 01:06:54
| 2
| 466
|
Ssong
|
79,310,511
| 352,403
|
How do I point to specific Python libraries?
|
<p>In our environment, Ansible is installed on one of the servers that I have access to. However, I do not have root access, so I can only install packages as my own user.</p>
<p>I installed packages like requests, pywinrm and a few others as my user in a venv, but when I execute Ansible, it does not pick these up.</p>
<p>In a nutshell the issue is:</p>
<ol>
<li><p>Packages do not get displayed in system python (the one that Ansible is pointing to)</p>
<pre><code>/usr/bin/python3.9 -m pip list
</code></pre>
</li>
<li><p>Package display just fine in venv (this is under my venv)</p>
<pre><code>python -m pip list
</code></pre>
</li>
<li><p>End result is that I receive error</p>
<pre><code>winrm or requests is not installed: No module named 'winrm'
</code></pre>
</li>
</ol>
<p>Is anyone aware of a fix for this issue?</p>
<p>I tried specifying <code>ansible_python_interpreter</code> at the command line while invoking ansible, but that did not help.</p>
|
<python><ansible>
|
2024-12-26 21:57:53
| 1
| 6,134
|
souser
|
79,310,353
| 2,180,332
|
How to wait for page loading in evaluated code with Playwright?
|
<p>I want to fill a two-step login form with Playwright.
The Playwright execution is controlled by a library, so I cannot change the execution context.
To fill my form, I can only use some JavaScript code that will be executed by <code>evaluate</code>. I have no access to the <code>page</code> object.</p>
<p>In the following example, the first form is correctly loaded, but filling the password field fails. I suppose this is because the code is executed before the page is completely loaded.</p>
<p>How can I make this Javascript code wait for the second page to be completely loaded?</p>
<p>Again, I cannot edit the Playwright context, but if there is a generic solution, I could open a PR upstream.</p>
<p>Here is a schematic snippet of my issue (this is in Python, but that should not be relevant to the actual issue):</p>
<pre><code>from playwright.sync_api import sync_playwright

# this is the only thing I can edit:
javascript = """
document.querySelector('input[name=login]').value = "admin"
document.querySelector('form').submit()

# &lt;-- How to wait for the page to be loaded here?

document.querySelector('input[name=password]').value = "password"
document.querySelector('form').submit()
"""

# this is the library code I cannot edit
with sync_playwright() as pw:
    browser = pw.firefox.launch()
    context = browser.new_context()
    page = context.new_page()
    page.goto('http://localhost:5000')

    page.evaluate(javascript)

    browser.close()
</code></pre>
<p>The playwright error:</p>
<pre><code>playwright._impl._errors.Error: Page.evaluate: document.querySelector(...) is null
@debugger eval code line 234 > eval:4:18
evaluate@debugger eval code:234:30
@debugger eval code:1:44
</code></pre>
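<p>One direction I am considering (a sketch, untested): since <code>evaluate</code> waits for a returned Promise, the snippet could poll for the password field itself. Note this can only help if the second step is rendered into the same document; a real navigation would destroy the evaluation context:</p>
<pre><code>javascript = """
async () => {
    document.querySelector('input[name=login]').value = "admin";
    document.querySelector('form').submit();
    // poll until the password field exists (same-document rendering only)
    while (!document.querySelector('input[name=password]')) {
        await new Promise(r => setTimeout(r, 100));
    }
    document.querySelector('input[name=password]').value = "password";
    document.querySelector('form').submit();
}
"""
</code></pre>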
|
<python><playwright><playwright-python>
|
2024-12-26 20:02:48
| 1
| 4,656
|
azmeuk
|
79,310,312
| 3,264,147
|
Mojo not recognizing Python types
|
<p>Looking at <a href="https://docs.modular.com/mojo/manual/python/types#mojo-wrapper-objects" rel="nofollow noreferrer">this document</a>, I realize that this feature may not be supported yet, but I will give it a try. Here is my dilemma. I have a tiny Mojo file (magic initiated inside a Python project) which imports a Python library. It looks like this:</p>
<pre class="lang-none prettyprint-override"><code>from python import Python
def main():
datasets = Python.import_module("sklearn.datasets")
iris = datasets.load_iris(as_frame=True)
print(iris.data.head(3))
</code></pre>
<p>This works:</p>
<pre class="lang-bash prettyprint-override"><code>β mojo main.mojo
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
</code></pre>
<p>If I try to re-organize it just a little bit, it stops working:</p>
<pre class="lang-none prettyprint-override"><code>from python import Python
def load_iris_dataset():
datasets = Python.import_module("sklearn.datasets")
iris = datasets.load_iris(as_frame=True)
return iris
def main():
iris = load_iris_dataset()
print(iris.data.head(3))
</code></pre>
<p>This is the shell output when running it:</p>
<pre class="lang-bash prettyprint-override"><code><REDACTED>/main.mojo:6:12: error: ambiguous call to '__init__', each candidate requires 1 implicit conversion, disambiguate with an explicit cast
return iris
^~~~
<REDACTED>/main.mojo:1:1: note: candidate declared here
from python import Python
^
<REDACTED>/main.mojo:1:1: note: candidate declared here
from python import Python
^
mojo: error: failed to parse the provided Mojo source module
</code></pre>
<p>But I also get this error from VSCode's Errors:</p>
<blockquote>
<p>cannot implicitly convert 'PythonObject' value to 'object'</p>
</blockquote>
<p>Could someone please explain why this behavior kicks in only when trying to pass <code>PythonObject</code>s between methods? What am I missing? Is Mojo just not ready to work with Python's dynamic types in an organized manner?</p>
|
<python><mojo>
|
2024-12-26 19:41:22
| 0
| 1,002
|
vasigorc
|
79,310,169
| 2,422,705
|
How to read character input, in a terminal, without blocking, in Python?
|
<p>I have a loop that needs to keep running. Meanwhile, I want to know <em>if</em> the user has pressed a key within the loop, but I don't want to have to wait before continuing.</p>
<p>This also needs to work in a terminal session.</p>
<p>Libraries like <a href="https://pypi.org/project/readchar/" rel="nofollow noreferrer">readchar</a> and <a href="https://sshkeyboard.readthedocs.io/en/latest/" rel="nofollow noreferrer">SSHkeyboard</a> work in a terminal session, but block the loop until a key has been pressed.</p>
<p>Libraries like <a href="https://pynput.readthedocs.io/en/latest/" rel="nofollow noreferrer">pynput</a> can do it without blocking the loop, but they aren't reading input from a shell, they are monitoring the system itself.</p>
<p>What is a simple way to do this?</p>
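<p>For concreteness, here is a rough sketch of the loop I want to keep running; <code>key_pressed()</code> is the hypothetical non-blocking check I am looking for, not a real function:</p>
<pre class="lang-py prettyprint-override"><code>import time

def key_pressed():
    # hypothetical: return the pressed key, or None, without waiting
    return None

while True:
    # ... the real work of the loop goes here ...
    key = key_pressed()
    if key is not None:
        print(f"user pressed {key!r}")
    time.sleep(0.1)
</code></pre>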
|
<python><input><keypress>
|
2024-12-26 18:14:42
| 0
| 1,559
|
Daniele Procida
|
79,310,043
| 1,358,829
|
vscode: in python, is there a way to know in code if a script is being run with F5 ("Start Debugging") or Ctrl+F5 ("Run Without Debugging")
|
<p>I run a lot of scripts using the VSCode interface and its launch configurations. Since my scripts all produce a lot of artefacts from running, it would be very useful to have an automated way to know if the script is running in debug mode (with F5 in VSCode) as opposed to running without debugging (Ctrl+F5 in VSCode), so that I could perform some cleanups at the beginning and end of the script's run.</p>
<p>While I could set up a flag for debug mode in my configuration file, an automated way to identify it would be very useful because sometimes I or someone else running the scripts may forget to set the flag.</p>
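<p>A minimal sketch of the kind of check I am hoping exists (assuming the debugger installs a trace function via <code>sys.settrace</code>, which I have not verified for VSCode's debugger):</p>
<pre class="lang-py prettyprint-override"><code>import sys

# assumption: "Start Debugging" (F5) attaches a debugger that installs a trace
# function, while "Run Without Debugging" (Ctrl+F5) does not
started_with_debugger = sys.gettrace() is not None

if started_with_debugger:
    print("debug run: do the extra cleanups here")
else:
    print("normal run")
</code></pre>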
<p>Is there some way to do that?</p>
|
<python><python-3.x><visual-studio-code>
|
2024-12-26 17:13:47
| 2
| 1,232
|
Alb
|
79,309,886
| 1,358,308
|
Parsing units out of column
|
<p>I've got some data I'm reading into Python using Pandas and want to keep track of units with the <a href="https://pint.readthedocs.io/en/stable/" rel="nofollow noreferrer">Pint package</a>. The values have a range of scales, so have mixed units, e.g. lengths are mostly meters but some are centimeters.</p>
<p>For example the data:</p>
<pre class="lang-none prettyprint-override"><code>what,length
foo,5.3 m
bar,72 cm
</code></pre>
<p>and I'd like to end up with the <em>length</em> column in some form that Pint understands. <a href="https://pint-pandas.readthedocs.io/" rel="nofollow noreferrer">Pint's Pandas integration</a> suggests that it only supports the whole column having the same datatype, which seems reasonable. I'm happy with some arbitrary unit being picked (e.g. the first, most common, or just SI base unit) and everything expressed in terms of that.</p>
<p>I was expecting some nice way of getting from the data I have to what's expected, but I don't see anything.</p>
<pre><code>import pandas as pd
import pint_pandas
length = pd.Series(['5.3 m', "72 cm"], dtype='pint[m]')
</code></pre>
<p>Doesn't do the correct thing at all, for example:</p>
<pre><code>length * 2
</code></pre>
<p>outputs</p>
<pre class="lang-none prettyprint-override"><code>0 5.3 m5.3 m
1 72 cm72 cm
dtype: pint[meter]
</code></pre>
<p>so it's just leaving things as strings. Calling <code>length.pint.convert_object_dtype()</code> doesn't help and everything stays as strings.</p>
|
<python><pandas><pint>
|
2024-12-26 16:08:46
| 1
| 16,473
|
Sam Mason
|
79,309,520
| 227,317
|
Trying to invoke the dagster web GUI in Pycharm
|
<p>Using Pycharm on Windows, I've created a new project, and put dagster and dagster-webserver into my requirements file. I can successfully import dagster into scripts, and write code - it looks good.</p>
<p>However, I can't use the terminal to invoke 'dagster dev' to run the web GUI for example. It says it is not a valid command. Is there another way to invoke the web GUI?</p>
|
<python><pycharm><dagster>
|
2024-12-26 12:47:39
| 0
| 993
|
Peter Schofield
|
79,309,271
| 1,487,336
|
Pandas Series subtract Pandas Dataframe strange result
|
<p>I'm wondering why subtracting a pandas DataFrame from a pandas Series produces such a strange result.</p>
<pre><code>df = pd.DataFrame(np.arange(10).reshape(2, 5), columns='a-b-c-d-e'.split('-'))
df.max(axis=1) - df[['b']]
</code></pre>
<p>What are the steps for pandas to produce the result?</p>
<pre><code> b 0 1
0 NaN NaN NaN
1 NaN NaN NaN
</code></pre>
|
<python><pandas><dataframe>
|
2024-12-26 10:47:40
| 2
| 809
|
Lei Hao
|
79,309,252
| 28,063,240
|
Typehint a method that returns new instance using a superclass classmethod
|
<pre class="lang-py prettyprint-override"><code>from typing import Self, Union
class Superclass:
@classmethod
def from_dict(cls, dict_: dict[str, str]) -> Self:
return cls(**dict_)
class Subclass(Superclass):
def __init__(self, name: Union[str, None] = None):
self.name = name
def copy(self) -> Self:
return Subclass.from_dict({'name': self.name})
</code></pre>
<p>I get an error on the bottom line,</p>
<blockquote>
<p>Type "Subclass" is not assignable to return type "Self@Subclass"</p>
</blockquote>
<p>I've also tried</p>
<pre class="lang-py prettyprint-override"><code>from typing import Type, TypeVar, Union, Dict
T = TypeVar('T', bound='Superclass')
class Superclass:
@classmethod
def from_dict(cls: Type[T], dict_: dict[str, str]) -> T:
return cls(**dict_)
class Subclass(Superclass):
def __init__(self, name: Union[str, None] = None):
self.name = name
def copy(self: T) -> T:
return self.from_dict({'name': self.name})
</code></pre>
<p>but this one gives me an error</p>
<blockquote>
<p>Cannot access attribute "name" for class "Superclass*" Β Β Attribute "name" is unknown</p>
</blockquote>
<hr />
<p>How can I use a superclass' class method to generate an instance of a child class, inside the child class method?</p>
|
<python><python-typing>
|
2024-12-26 10:32:37
| 1
| 404
|
Nils
|
79,309,217
| 2,856,552
|
How can I plot polygon numbers from a shapefile in python?
|
<p>I would like to plot each polygon's number within the polygon (not usually done). I am able to write the polygon names to a csv file with the following code;</p>
<pre><code>import geopandas as gpd
import matplotlib.pyplot as plt

gdf1 = gpd.read_file("path/to/shapefile")
gdf = gpd.read_file("path/to/shapefile")
gdf = gdf['geometry'].representative_point()
fig, ax = plt.subplots(figsize=(5, 5))
gdf1.plot(ax=ax,color='none', edgecolor='r', linewidths=1.8)
gdf.plot(ax=ax)
plt.show()
</code></pre>
<p>I have tried to get the index of each polygon and write these to csv with</p>
<pre><code>gdf=shp['index']
gdf.to_csv("Shpindices.csv")
</code></pre>
<p>Once I get the indices, I can plot them with basemap.
Question is, is it possible to access the indices this way, or can one actually plot the indices in the shapefile. I cannot find anything, googling pandas tutorials.</p>
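<p>In other words, what I am aiming for is, for each polygon, its index drawn as text at its representative point, something along these lines (untested sketch, continuing from the code above):</p>
<pre class="lang-py prettyprint-override"><code>points = gdf1.representative_point()
fig, ax = plt.subplots(figsize=(5, 5))
gdf1.plot(ax=ax, color='none', edgecolor='r', linewidths=1.8)
# label each polygon with its index at its representative point
for idx, pt in zip(gdf1.index, points):
    ax.annotate(str(idx), xy=(pt.x, pt.y), ha='center')
plt.show()
</code></pre>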
|
<python><pandas>
|
2024-12-26 10:09:28
| 1
| 1,594
|
Zilore Mumba
|
79,309,190
| 2,013,747
|
numpy convention for storing time series of vectors and matrices - items in rows or columns?
|
<p>I'm working with discrete-time simulations of ODEs with time varying parameters. I have time series of various data (e.g. time series of state vectors generated by <code>solve_ivp</code>, time series of system matrices generated by my control algorithm, time series of system matrices in modal form, and so on).</p>
<p>My question: in what order should I place the indices? My intuition is that since numpy arrays are (by default) stored in row-major order, and I want per-item locality, each row should contain the "item" (i.e. a vector or matrix), and so the number of rows is the number of time points, and the number of columns is the dimension of my vector, e.g.:</p>
<pre><code>x_k = np.zeros((5000, 4))  # a time series of 5000, 4-vectors
display(x_k[25]) # the 26th timepoint
</code></pre>
<p>Or for matrices I might use:</p>
<pre><code>A_k = np.zeros((5000, 4, 4))  # a time series of 5000, 4x4-matrices
</code></pre>
<p>However, <code>solve_ivp</code> appears to do the opposite and returns a row-major array with the time series in columns (<code>sol.y</code> shape is <code>(4, 5000)</code>). Furthermore, transposing the result with <code>.T</code> just flips a flag (the view becomes column-major), so it is not really clear what the developers of <code>solve_ivp</code> and <code>numpy</code> intend me to do to write cache-efficient code.</p>
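<p>To make the question concrete, here is a minimal sketch of the two layouts; the explicit copy forces the time-major version to be genuinely C-contiguous:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, x: -x, (0, 1), np.ones(4), t_eval=np.linspace(0, 1, 5000))

y_state_major = sol.y                          # shape (4, 5000)
y_time_major = np.ascontiguousarray(sol.y.T)   # shape (5000, 4), real row-major copy

print(sol.y.T.flags['C_CONTIGUOUS'], y_time_major.flags['C_CONTIGUOUS'])
</code></pre>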
<p>What are the conventions? Should I use the first index for the time index, as in my examples above, or last index as <code>solve_ivp</code> does?</p>
|
<python><numpy>
|
2024-12-26 09:54:39
| 1
| 4,240
|
Ross Bencina
|
79,309,183
| 1,913,554
|
Python projects with pick-and-choose submodules
|
<p>I am building a database management project. I want it to be able to work with MariaDB and SQLServer, but I don't want the MariaDB users to have to install SQLServer drivers and libraries, or vice-versa. I've seen this done with, for instance, SQLAlchemy, but their approach doesn't look very straightforward.</p>
<p>What techniques are common for this kind of project structure? Is there a "best practice" for doing this?</p>
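<p>To make the goal concrete, the kind of structure I am imagining is lazy, per-backend imports, so a driver is only imported when that backend is actually selected (module and function names below are made up):</p>
<pre class="lang-py prettyprint-override"><code># hypothetical mypackage/backends/__init__.py
def get_backend(name: str):
    """Import and return the requested backend module on demand."""
    if name == "mariadb":
        from mypackage.backends import mariadb     # MariaDB driver imported only here
        return mariadb
    if name == "sqlserver":
        from mypackage.backends import sqlserver   # SQL Server driver imported only here
        return sqlserver
    raise ValueError(f"unknown backend: {name!r}")
</code></pre>
<p>I am not sure whether this, optional dependency extras, or something else entirely is the usual approach.</p>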
|
<python><build><project>
|
2024-12-26 09:51:51
| 0
| 669
|
Robert Rapplean
|
79,308,624
| 3,099,733
|
python subprocess pending on running a nohup bash script
|
<p>Given the following code example:</p>
<pre class="lang-py prettyprint-override"><code># save as test.py
import subprocess as sp
import time
script = '''#!/bin/bash
nohup sleep 10 &
'''
with open('test.sh', 'w') as f:
f.write(script)
start_ts = time.time()
cp = sp.run('bash test.sh', shell=True, capture_output=True)
print(f'Elapsed time: {time.time() - start_ts:.2f}s')
</code></pre>
<p>When running the above code with <code>python test.py</code>,
what I expect is it should stop immediately,
but instead it is pending for 10s.</p>
<p>Why is that? How can I fix it?</p>
|
<python><bash><subprocess><nohup>
|
2024-12-26 03:46:21
| 0
| 1,959
|
link89
|
79,308,396
| 1,112,406
|
Does colab misunderstand type hints?
|
<p>The following code produces a colab error marker, but it runs properly.</p>
<pre><code>from typing import List, Tuple

str1 = 'abcde'
zipped: List[Tuple[int, str]] = [(intgr, ltr) for intgr, ltr in enumerate(str1)]
# This is a clever way to "unzip"
(tuple3, _) = zip(*zipped)
lst3a: list = list(tuple3)
</code></pre>
<p>Everything is fine to this point. But the following line produces a colab error mark: a squiggly red line under <code>list(tuple3)</code>. (The program runs correctly.)</p>
<pre><code>lst3b: list[int] = list(tuple3)
print(f'{type(lst3a) == type(lst3b)}; {lst3a == lst3b}') # => True; True
</code></pre>
<p>Colab's error explanation is:</p>
<blockquote>
<p>Expression of type "list[int | str]" cannot be assigned to declared type "list[int]"</p>
</blockquote>
<p>Colab appears confused about the types produced by "unzip."</p>
<p>Or am I missing something?</p>
|
<python><google-colaboratory><python-typing>
|
2024-12-25 22:20:53
| 1
| 2,758
|
RussAbbott
|
79,308,356
| 12,350,600
|
FPDF2 render_toc spans multiple pages it writes over page headers
|
<p>Basically if my table of contents exceeds one-page then it the contents of the 2nd page overlaps with the document header.</p>
<p>I tried adding</p>
<pre><code>self.set_y(15)
</code></pre>
<p>at the end of the header function. However this didn't help.</p>
<p>(Submitted the issue : <a href="https://github.com/py-pdf/fpdf2/issues/1336" rel="nofollow noreferrer">here</a>)</p>
|
<python><python-3.x><fpdf><fpdf2>
|
2024-12-25 21:36:24
| 0
| 394
|
Kruti Deepan Panda
|
79,308,265
| 19,626,271
|
Fast(est) method to iterate through pairs of indices in Python, using numpy
|
<p>I'm constructing a graph with ~10000 nodes, each node having metadata that determine which other nodes it will be connected to, using an edge.<br>
Since the number of edge possibilities (~50M) is far greater than the actual number of edges that will be added (~300k), it is suboptimal to just iterate through node-pairs with for loops to check if an edge should be added between them. Using some logic to filter out many pairs to not have to check, with the help of <code>numpy</code>'s rapid methods I quickly reduced the possibilities to an array of ~30M pairs only.<br>
However, when iterating through them instead, the performance did not improve much - in fact iterating through a bigger 2D boolean matrix is twice as fast (compared to my method, which first collects the <code>True</code> values from the matrix and then iterates only through these ~30M instances). There must be a way to get the desired performance benefit, but I hit a dead end; I'm looking to understand why some methods are faster and how to improve my runtime.</p>
<p><br><strong>Context</strong>: In particular, every node is an artist with metadata such as locations, birth and death year.<br>
I connect two artists based on a method that calculates a measure of how close they lived to each other at one time approximately (e.g. two artists living at the same place at the same time, for long enough time, would get a high value). This is a typical way to achieve just that (iterating through indices is preferred over names):</p>
<pre class="lang-py prettyprint-override"><code>for i, j in itertools.combinations(range(len(artist_names)), 2): #~50M iterations
artist1 = artist_names[i]
artist2 = artist_names[j]
#...
artist1_data = artist_data[artist1]
artist2_data = artist_data[artist2]
val = process.get_loc_similarity(artist1_data, artist2_data)
if val > 0:
G.add_edge(artist1, artist2, weight=val)
</code></pre>
<p>As the number of pair of nodes is ~50M, this runs for <strong>~14 mins</strong>. I reduced the number of possibilities by sorting out pairs of artists whose lifetimes did not overlap. With <code>numpy</code>'s methods running C under the hood, this executed in less than 5 seconds and gathered ~30M pairs to have to check only:</p>
<pre class="lang-py prettyprint-override"><code>birth_condition_matrix = (birth_years < death_years.reshape(-1, 1))
death_condition_matrix = (death_years > birth_years.reshape(-1, 1))
overlap_matrix = birth_condition_matrix & death_condition_matrix
overlapping_pairs_indices = np.array(np.where(overlap_matrix)).T
overlapping_pairs_indices = np.column_stack((overlapping_pairs_indices[:, 0], overlapping_pairs_indices[:, 1]))
</code></pre>
<p>We can thus iterate through less pairs:</p>
<pre class="lang-py prettyprint-override"><code>for i, j in overlapping_pairs_indices: #~30M iterations
if i != j:
artist1 = artist_names[i]
artist2 = artist_names[j]
artist1_data = artist_data[artist1]
artist2_data = artist_data[artist2]
val = process.get_loc_similarity(artist1_data, artist2_data)
if val > 0:
G.add_edge(artist1, artist2, weight=val)
</code></pre>
<p>It comes as a surprise, that this still runs for over <strong>~13 mins</strong> - instead of improving runtime by 40% or so.</p>
<p>Surprisingly, iterating on the matrix indices is much faster, nevertheless looking at all 50M combinations:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(len(artist_names)):
for j in range(i + 1, len(artist_names)): #~50M iterations
if overlap_matrix[i, j]:
artist1 = artist_names[i]
artist2 = artist_names[j]
artist1_data = artist_data[artist1]
artist2_data = artist_data[artist2]
val = process.get_loc_similarity(artist1_data, artist2_data)
if val > 0:
G.add_edge(artist1, artist2, weight=val)
</code></pre>
<p>This ran for <strong>less than 5 minutes</strong> despite iterating again 50M times.<br>
That is surprising and promising, and I would like to figure out what makes this faster than the previous attempt, and how to modify that to be even faster.</p>
<p>How could I improve runtime by using the right methods?<br>
I wonder if there is a possibility of further utilizing <code>numpy</code>, e.g. not having to use for loops even when calling the calculation function, using a method similar to pandas dataframe's <code>.apply()</code> instead.</p>
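<p>Roughly what I am imagining (the vectorised similarity function is hypothetical, I have not written it) is evaluating the whole batch of index pairs in one call instead of a Python loop:</p>
<pre class="lang-py prettyprint-override"><code>i_idx = overlapping_pairs_indices[:, 0]
j_idx = overlapping_pairs_indices[:, 1]

# hypothetical numpy rewrite of process.get_loc_similarity that accepts whole arrays
vals = get_loc_similarity_vectorised(i_idx, j_idx)

mask = vals > 0
names = np.asarray(artist_names)
G.add_weighted_edges_from(zip(names[i_idx[mask]], names[j_idx[mask]], vals[mask]))
</code></pre>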
<p>(I also noticed that looping through a zip such as <code>for i, j in zip(overlap_pairs[:, 0], overlap_pairs[:, 1])</code> did not improve runtime.)</p>
|
<python><arrays><numpy><iteration><runtime>
|
2024-12-25 20:13:14
| 0
| 395
|
me9hanics
|
79,308,202
| 6,462,301
|
How to improve responsiveness of interactive plotly generated line plots saved as html files?
|
<p>I have some very long time series data (millions of data points) and generate interactive plotly html plots based on this data. I am using <code>Scattergl</code> from plotly's <code>graphical_obects</code>.</p>
<p>When I attempt to zoom in on these plots, the browser (both chrome and firefox) will go unresponsive.</p>
<p>Subsampling the data obviously helps at the expense of resolution.</p>
<p>I know that there are packages that claim to handle this <a href="https://github.com/predict-idlab/plotly-resampler" rel="nofollow noreferrer">for example</a>, but this is not an ideal approach for my use-case.</p>
<p>I am using an old macbook pro but when have observed the same phenomena on more modern, powerful machines.</p>
<p>Has anyone found a good solution for this problem?</p>
|
<python><plotly><zooming><large-data>
|
2024-12-25 19:16:20
| 1
| 1,162
|
rhz
|
79,308,067
| 893,254
|
Which Python or SQL Alchemy datatypes should be used for interfacing with databases containing hash values?
|
<p>SQL Alchemy supports multiple ways of specifying datatypes for SQL database columns. If regular Python datatypes are used, SQL Alchemy will try to sensibly map these to datatypes supported by the connected to database.</p>
<p>There is also the possibility of using more precicely defined types which are supplied by the SQL Alchemy framework. These types more closely match to what types are supported by specific databases. (For example Postgres, MySQL, sqlite3, ...)</p>
<p><a href="https://dba.stackexchange.com/questions/115271/what-is-the-optimal-data-type-for-an-md5-field">This question and answer seems to suggest that the UUID type should be used to represent MD5 hash values.</a></p>
<p>This seems slightly strange, since to generate a md5 hash, typically the Python code would look something like this</p>
<pre><code>hashlib.md5(input_string.encode('utf-8')).hexdigest()
</code></pre>
<p>or this</p>
<pre><code>hashlib.md5(input_string.encode('utf-8')).digest()
</code></pre>
<p>depending if the hash value is wanted as a <code>str</code> or a binary string (<code>bytes</code>).</p>
<p>Neither of these is exactly a <code>UUID</code>, however the <code>str</code> form appears to be implicitly convertible to and from a <code>UUID</code> type.</p>
<ul>
<li>Is <code>UUID</code> the correct datatype to use?</li>
<li>What type should be used for a SHA-256 hash?</li>
</ul>
<p>I have a feeling that in both cases, a fixed width binary data type would be better. However, this doesn't appear to be supported by SQL Alchemy, at least not in combination with Postgres.</p>
<p>This is because the <code>LargeBinary</code> SQL Alchemy datatype maps to <code>bytea</code> Postgres datatype. This is a variable length data type, and it cannot be forced to be of a fixed width in the same way that <code>varchar</code> can.</p>
<p>I have done some testing between <code>UUID</code> and <code>str</code> and I found that inserting data was slightly faster when using <code>UUID</code>, which may not be surprising since the length is fixed rather than variable. However, the difference was small. (233 vs 250 messages inserted / second.)</p>
<p>I intend to do further testing using the <code>bytes</code> datatype. My initial results suggest that it performs the same as <code>str</code>.</p>
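<p>For reference, this is roughly the shape of the column definitions being compared (simplified sketch using the SQLAlchemy 2.0 declarative style; the table and column names are made up):</p>
<pre class="lang-py prettyprint-override"><code>import uuid

from sqlalchemy import LargeBinary, String, Uuid
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class HashRow(Base):
    __tablename__ = "hash_test"

    id: Mapped[int] = mapped_column(primary_key=True)
    md5_as_uuid: Mapped[uuid.UUID] = mapped_column(Uuid)           # fixed 16 bytes
    md5_as_str: Mapped[str] = mapped_column(String(32))            # hex digest
    md5_as_bytes: Mapped[bytes] = mapped_column(LargeBinary(16))   # raw digest (bytea in Postgres)
</code></pre>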
|
<python><postgresql><sqlalchemy>
|
2024-12-25 17:21:18
| 2
| 18,579
|
user2138149
|
79,308,033
| 893,254
|
Which Python or SQL Alchemy datatype should be used to store the Python `bytes` datatype as a fixed width field?
|
<p>I want to store some data into a SQL database. (In this case, Postgres, however this may not be relevant.)</p>
<p>Some of the columns I want to store contain binary data. The Python datatype is <code>bytes</code>.</p>
<p>What datatype should I use to specify the <code>bytes</code> datatype with SQL Alchemy? (With the additional constraint that the column be of fixed width.)</p>
<p>For the <code>str</code> datatype, <code>Mapped[str]</code> can be used.</p>
<p>If the width should be of fixed rather than unconstrained length, <code>Mapped[str] = mapped_column(String(<length>))</code> can be used. Here is an example:</p>
<pre><code>class HtmlData(MyBase):
__tablename__ = 'html_data_table_name'
html_id: Mapped[int] = mapped_column(primary_key=True)
html_div_md5: Mapped[str] = mapped_column(String(16))
</code></pre>
<p>For binary data, the <code>bytes</code> datatype will work. However this is of unconstrained width.</p>
<p>How can I specify a fixed width for the <code>bytes</code> datatype?</p>
<p>While researching this question I found the <code>LargeBinary</code> SQL Alchemy datatype. This seems like it <em>might</em> be the right thing to use. However, if the length is restricted to a small value such as 16 bytes, is this still the appropriate datatype?</p>
<ul>
<li><a href="https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.LargeBinary" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.LargeBinary</a></li>
</ul>
|
<python><sqlalchemy>
|
2024-12-25 16:51:12
| 1
| 18,579
|
user2138149
|
79,307,990
| 893,254
|
How to specify a fixed length string for an SQL Alchemy column data type?
|
<p>The following code snippet is an SQL Alchemy ORM table definition class.</p>
<pre><code>class HtmlData(MyBase):
__tablename__ = 'html_data_table_name'
html_id: Mapped[int] = mapped_column(primary_key=True)
html_div: Mapped[str]
html_div_md5: Mapped[str]
</code></pre>
<p>If I run the <code>create_all</code> command from within Python code, this will create a table where the column <code>html_div_md5</code> has the type <code>varchar</code>.</p>
<p>However, this is a <code>varchar</code> of unconstrained length.</p>
<p>This doesn't make a huge amount of sense, since <code>md5</code> hashes are all 128 bits in length, or 16 bytes long.</p>
<p>It is likely that fixing the length at 16 bytes will improve performance, and it will almost certainly reduce the required data storage size. (I might be able to test this.)</p>
<p>How can I fix the length of <code>html_div_md5</code> using the <code>Mapped[str]</code> syntax?</p>
|
<python><sqlalchemy>
|
2024-12-25 16:33:47
| 1
| 18,579
|
user2138149
|
79,307,720
| 7,512,296
|
FastAPI: The asyncio extension requires an async driver to be used. The loaded 'psycopg2' is not async
|
<p>I'm learning FastAPI from their official tutorial guide and I'm currently at chapter 5 (Databases with SQLModel). I'm using Neon DB as stated in the docs.</p>
<p>My code is as follows:</p>
<p><code>src/__init__.py</code></p>
<pre><code>from sqlmodel import create_engine, text
from sqlalchemy.ext.asyncio import AsyncEngine
from src.config import Config
engine = AsyncEngine(create_engine(
url=Config.DATABASE_URL,
echo=True
))
async def initdb():
"""create a connection to our db"""
async with engine.begin() as conn:
statement = text("select 'Hello World'")
result = await conn.execute(statement)
print(result)
</code></pre>
<p><code>src/db/main.py</code></p>
<pre><code>from sqlmodel import create_engine, text
from sqlalchemy.ext.asyncio import AsyncEngine
from src.config import Config
engine = AsyncEngine(create_engine(
url=Config.DATABASE_URL,
echo=True
))
async def initdb():
"""create a connection to our db"""
async with engine.begin() as conn:
statement = text("select 'Hello World'")
result = await conn.execute(statement)
print(result)
</code></pre>
<p><code>.env</code></p>
<pre><code>DATABASE_URL="postgresql+asyncpg://username:password@server_domain/fastapi?sslmode"
</code></pre>
<p><code>src/config.py</code></p>
<pre><code>from pydantic_settings import BaseSettings, SettingsConfigDict
class Settings(BaseSettings):
DATABASE_URL: str
model_config = SettingsConfigDict(
env_file=".env",
extra="ignore"
)
# add this line
Config = Settings()
</code></pre>
<p>With the above code I'm getting InvalidRequestError:</p>
<blockquote>
<p>InvalidRequestError: The asyncio extension requires an async driver to be used. The loaded
'psycopg2' is not async.</p>
</blockquote>
<p>package versions are (from requirements.txt):</p>
<pre><code>SQLAlchemy==2.0.36
sqlmodel==0.0.22
</code></pre>
<p>I'm running Python in a virtual env on a Mac (M1). Does this error have anything to do with my hardware, or have they changed something in the package that they forgot to update in their tutorial docs?</p>
<p>Any kind of help will be appreciated.</p>
|
<python><fastapi><python-3.9><sqlmodel>
|
2024-12-25 13:59:46
| 2
| 1,596
|
psudo
|
79,307,527
| 13,392,257
|
Calling AIOKafkaConsumer via FastAPI raises "object should be created within an async function or provide loop directly" error
|
<p>I have a FastAPI application that subscribes to a Kafka topic using <em>asynchronous</em> code (i.e., <code>async</code>/<code>await</code>). I have to create a unit test for my application.</p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>def create_consumer() -> AIOKafkaConsumer:
"""Create AIOKafkaConsumer.
Returns:
AIOKafkaConsumer: The created AIOKafkaConsumer instance.
"""
return AIOKafkaConsumer(
settings.kafka_consumer_topic,
bootstrap_servers=f"{settings.kafka_consumer_host}:{settings.kafka_consumer_port}"
)
app = FastAPI()
consumer = create_consumer()
@app.on_event("startup")
async def startup_event():
"""Startup event for FastAPI application."""
log.info("Starting up...")
await consumer.start()
asyncio.create_task(consume())
async def consume(db: Session = next(get_db())):
"""Consume and print messages from Kafka."""
while True:
async for msg in consumer:
...
</code></pre>
<p>I am using FastAPI's <code>TestClient</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from fastapi.testclient import TestClient
from app.main import get_task_files
#from app.main import app
client = TestClient(app) # ERR: AIOKafkaConsumer The object should be created within an async function or provide loop directly.
</code></pre>
<p>However, I am getting the following error at <code>AIOKafkaConsumer</code> instatiation:</p>
<blockquote>
<p>The object should be created within an async function or provide loop directly.</p>
</blockquote>
<p>How do I properly test my application? It looks like I have to mock the Kafka functionality.</p>
|
<python><apache-kafka><fastapi><kafka-consumer-api><aiokafka>
|
2024-12-25 12:07:59
| 2
| 1,708
|
mascai
|
79,307,099
| 1,325,861
|
Add links in dash-leaflet geojson popups that have a callback when clicked
|
<p>I've created an interactive map with dash-leaflet, using its geojson module (hope module is the right word).</p>
<p>I have created custom HTML popups for each marker. I need to incorporate links in the popup that trigger a callback so that the app can fetch a file and provide a download through the user's browser.</p>
<pre><code>def create_geojson(data):
features = []
for _, row in data.iterrows():
feature = {
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [row['longitude'], row['latitude']]
},
"properties": {
"price": row['price'],
"description": row['desc'],
"popup": create_popup(....) # Multiple keys for row are used to generate HTML code for each popup
}
}
features.append(feature)
return {"type": "FeatureCollection", "features": features}
</code></pre>
<p>ChatGPT tells me that callbacks are not supported within the popup content. If this is true, can I use some hack to achieve the functionality?</p>
|
<python><plotly-dash><geojson><dash-leaflet>
|
2024-12-25 07:54:48
| 1
| 535
|
Gaurav Suman
|
79,306,936
| 1,307,905
|
how to fix ruff error on module level docstring between from future and normal imports
|
<p>Many of my Python files start with a future import and a module level docstring:</p>
<pre><code>from __future__ import annotations
"""
module level docstring
"""
from datetime import datetime as DateTime
print(DateTime.today())
</code></pre>
<p>The <code>from __future__</code> import has to be at the top of the file, and although it can come after a module level docstring, I have some tooling that assumes that import is on the first non-comment/non-empty line. I also want to have my module level docstring close to the top, and not below a, potentially long, list of imports.</p>
<p>The above tests fine when using <code>flake8</code> (which I use from tox).</p>
<p>I started looking into using <code>ruff</code> instead of <code>flake8</code>, but when I run <code>ruff check</code> on such a file I get:</p>
<pre><code>test.py:8:1: E402 Module level import not at top of file
|
6 | """
7 |
8 | from datetime import datetime as DateTime
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E402
9 |
10 | print(DateTime.today())
|
Found 1 error.
</code></pre>
<p>When I remove the <code>from __future__</code> import, <em>or</em> remove the docstring, <strong>or</strong> move the docstring before the future import (or below all the imports), the E402 error disappears.</p>
<p>How can I correct <code>ruff</code> behaviour, so I can keep the order of the special from future import, the module docstring, and the normal imports, without suppressing any real E402 errors (i.e. that follow some actual code and not a docstring)?</p>
<hr />
<p><sub>I know I can add <code># NOQA</code> after each of the normal module level imports, but with several hundred files in the 200+ packages I maintain, that is not a route I prefer to take. I can update my tooling to handle non-top-of-file <code>from __future__</code> imports, but I don't really look forward to hide those specific imports below a (longish) module docstring, especially since there might be breaking changes in the future.</sub></p>
|
<python><ruff>
|
2024-12-25 05:41:38
| 1
| 78,248
|
Anthon
|
79,306,829
| 4,421,975
|
MESH created with ezdxf crashed Autodesk Viewer
|
<p>Here is my code:</p>
<pre><code>import ezdxf
import numpy as np
x,y = np.meshgrid(range(-3,3),range(-3,3))
z = x**2 + y**2
doc = ezdxf.new('R2010', setup=True)
# Add a mesh entity
msp = doc.modelspace()
rows, cols = z.shape
# Add a MESH entity
mesh = msp.add_mesh()
msp.doc.layers.add("3d_surface")
mesh = msp.add_mesh(dxfattribs={"layer": "3d_surface"})
with mesh.edit_data() as mesh_data:
# Create vertices from X, Y, and dem_data (Z)
for i in range(rows):
for j in range(cols):
mesh_data.vertices.append((x[i, j], y[i, j], z[i, j]))
# Create faces (each grid cell becomes a quadrilateral face)
for i in range(rows - 1):
for j in range(cols - 1):
# Vertex indices for the four corners of the grid cell
v0 = i * cols + j # Top-left
v1 = v0 + 1 # Top-right
v2 = v0 + cols # Bottom-left
v3 = v2 + 1 # Bottom-right
# Add a quadrilateral face
mesh_data.faces.append((v0, v1, v3, v2))
dxf_file_path = 'example.dxf'
doc.saveas(dxf_file_path)
</code></pre>
<p>The created file can be read correctly by eDrawings and on sharecad.org, but not by FreeCAD or Autodesk.</p>
<p>Autodesk viewer gives the following:</p>
<blockquote>
<p>AutoCAD-InvalidFile Sorry, the drawing file is invalid and cannot be viewed. - Please try to recover the file in AutoCAD, and upload it again to view.</p>
<p>TranslationWorker-InternalFailure Unrecoverable exit code from extractor: -1073741831</p>
</blockquote>
<p>Am I doing something wrong?</p>
|
<python><dxf><ezdxf>
|
2024-12-25 03:52:23
| 1
| 1,481
|
OMRY VOLK
|
79,306,481
| 11,751,799
|
How can I save a figure to PDF with a specific page size and padding?
|
<p>I have generated a <code>matplotlib</code> figure that I want to save to a PDF. So far, this is straightforward.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
x = [1, 2, 3, 4]
y = [3, 5, 4, 7]
plt.scatter(x, y)
plt.savefig(
"example.pdf",
bbox_inches = "tight"
)
plt.close()
</code></pre>
<p>However, I would like the figure to appear in the middle of a standard page and have some white space around the figure, rather than fill up the entire page. How can I tweak the above to achieve this?</p>
|
<python><matplotlib><plot><graph>
|
2024-12-24 21:00:00
| 3
| 500
|
Dave
|
79,306,431
| 5,178,381
|
Having trouble accessing the values of an ElementTree root.iter object
|
<p>I have an xml document from the IRS and I'm trying to grab the values of specific tags that interest me. For example, in the following xml data, I'm interested in the value of <strong>CYTotalExpensesAmt</strong>, which is 12345:</p>
<pre><code><returndata>
<irs990>
<CYTotalExpensesAmt>12345</CYTotalExpensesAmt>
</irs990>
</returndata>
</code></pre>
<p>When I code the following, it returns a memory location:</p>
<pre><code>x = root.iter('CYTotalExpensesAmt')
print(x)
</code></pre>
<p>But when I try to grab the value, the 12345, with the following code:</p>
<pre><code>print(x.text)
</code></pre>
<p>or</p>
<pre><code>for e in root.iter('CYTotalExpensesAmt'):
print(e.text)
</code></pre>
<p>I get an error or nothing returned at all. Any ideas on what I can do differently to access the value of tags I know the name of but do not know their indexed location?</p>
|
<python><xml>
|
2024-12-24 20:17:27
| 1
| 461
|
Ryan Oliver Lanham
|
79,306,396
| 6,709,460
|
Python script deploys smart contract on sandbox local network, but application link fails with "Error: Application failed to load"
|
<p>Here is my code:</p>
<pre><code>from beaker import *
from pyteal import *
app = Application('Testing123')
if __name__ == '__main__':
# Build the app and export it
app.build().export('./TestFolder')
# Get the accounts (ensure the localnet is defined correctly)
accounts = localnet.get_accounts()
acc1 = accounts[0]
# Initialize the Algorand client
alogd_client = localnet.get_algod_client()
# Create an application client instance
app_client = client.ApplicationClient(
client=alogd_client,
app=app,
sender=acc1.address,
signer=acc1
)
# Create the application on the blockchain
app_id, address, txid = app_client.create()
# Print the transaction ID of the application creation
print(txid)
</code></pre>
<p>I have a Python script that successfully deploys a smart contract on a sandbox local network, but when I try to check the application through the link <a href="https://lora.algokit.io/localnet/application/1003" rel="nofollow noreferrer">https://lora.algokit.io/localnet/application/1003</a>, I get the following error:</p>
<p>Application Error: Application failed to load</p>
<p>However, when I use various APIs, I can interact with the deployed contract without any issues. The problem only occurs when accessing the application through the link.</p>
<p>Has anyone encountered this issue, or does anyone know what might be causing it? Any help would be greatly appreciated!</p>
|
<python><algorand>
|
2024-12-24 19:45:00
| 0
| 741
|
Testing man
|
79,306,325
| 1,103,595
|
Cannot instance custom environment with OpenAI Gymnasium
|
<p>I'm trying to make my own checkers bot to try and teach myself reinforcement learning. I decided to try using Gymnasium as a framework and have been following the tutorials at <a href="https://gymnasium.farama.org/introduction/create_custom_env/" rel="nofollow noreferrer">https://gymnasium.farama.org/introduction/create_custom_env/</a>.</p>
<p>For some reason, calling <code>env = gym.make('CheckerWorld-v0')</code> is causing the error</p>
<pre><code>Exception has occurred: NameNotFound
Environment `CheckerWorld` doesn't exist.
File "D:\dev\starling2\game\2024\checkers_ai\python\train_agent.py", line 17, in <module>
env = gym.make('CheckerWorld-v0')
^^^^^^^^^^^^^^^^^^^^^^^^^^^
gymnasium.error.NameNotFound: Environment `CheckerWorld` doesn't exist.
</code></pre>
<p>I am adding my environment to the registry in my checkers_gym.py module:</p>
<pre><code>class CheckerWorldEnv(gym.Env):
def __init__(self, playing_as_black:bool = True, max_turns:int = 200):
...
gym.register(
id="gymnasium_env/CheckerWorld-v0",
entry_point=CheckerWorldEnv,
)
</code></pre>
<p>I am also importing the module with my environment with <code>from checkers_gym import *</code></p>
<p>My custom environment is also listed in the registry as I can see with <code>print(gym.envs.registry.keys())</code></p>
<pre><code>dict_keys(['CartPole-v0', 'CartPole-v1', 'MountainCar-v0', 'MountainCarContinuous-v0', 'Pendulum-v1', 'Acrobot-v1', 'phys2d/CartPole-v0', 'phys2d/CartPole-v1', 'phys2d/Pendulum-v0', 'LunarLander-v3', 'LunarLanderContinuous-v3', 'BipedalWalker-v3', 'BipedalWalkerHardcore-v3', 'CarRacing-v3', 'Blackjack-v1', 'FrozenLake-v1', 'FrozenLake8x8-v1', 'CliffWalking-v0', 'Taxi-v3', 'tabular/Blackjack-v0', 'tabular/CliffWalking-v0', 'Reacher-v2', 'Reacher-v4', 'Reacher-v5', 'Pusher-v2', 'Pusher-v4', 'Pusher-v5', 'InvertedPendulum-v2', 'InvertedPendulum-v4', 'InvertedPendulum-v5', 'InvertedDoublePendulum-v2', 'InvertedDoublePendulum-v4', 'InvertedDoublePendulum-v5', 'HalfCheetah-v2', 'HalfCheetah-v3', 'HalfCheetah-v4', 'HalfCheetah-v5', 'Hopper-v2',
'Hopper-v3', 'Hopper-v4', 'Hopper-v5', 'Swimmer-v2', 'Swimmer-v3', 'Swimmer-v4', 'Swimmer-v5', 'Walker2d-v2', 'Walker2d-v3', 'Walker2d-v4', 'Walker2d-v5', 'Ant-v2', 'Ant-v3', 'Ant-v4', 'Ant-v5', 'Humanoid-v2', 'Humanoid-v3', 'Humanoid-v4', 'Humanoid-v5', 'HumanoidStandup-v2', 'HumanoidStandup-v4', 'HumanoidStandup-v5', 'GymV21Environment-v0', 'GymV26Environment-v0', 'gymnasium_env/CheckerWorld-v0'])
</code></pre>
<p>I've also tried using <code>env = gym.make('gymnasium_env/CheckerWorld-v0')</code>, but I still get the same error. How am I supposed to create a new instance of my environment?</p>
|
<python><openai-gym>
|
2024-12-24 18:50:15
| 0
| 5,630
|
kitfox
|
79,306,315
| 13,392,257
|
Can't create second celery queue
|
<p>I want to create two Celery queues (for different types of tasks).</p>
<p>My Celery configuration is below. I expect this config to create two queues, <code>'celery'</code> and <code>'celery:1'</code>:</p>
<pre><code># celery.py
import os
from celery import Celery
from core_app.settings import INSTALLED_APPS
# this code copied from manage.py
# set the default Django settings module for the 'celery' app.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core_app.settings')
app = Celery("my_app")
# To start scheduling tasks based on priorities
# you need to configure queue_order_strategy transport option.
# ['celery', 'celery:1',] - two queues, the highest priority queue will be named celery
app.conf.broker_transport_options = {
'priority_steps': list(range(2)),
'sep': ':',
'queue_order_strategy': 'priority',
}
# read config from Django settings, the CELERY namespace would make celery
# config keys has `CELERY` prefix
app.config_from_object('django.conf:settings', namespace='CELERY')
# load tasks.py in django apps
app.autodiscover_tasks(lambda: INSTALLED_APPS)
</code></pre>
<p>I am defining and running celery task like this:</p>
<pre><code>@shared_task(queue="celery", soft_time_limit=600, time_limit=650)
def check_priority_urls(parsing_result_ids: List[int]):
check_urls(parsing_result_ids)
@shared_task(queue="celery:1", soft_time_limit=600, time_limit=650)
def check_common_urls(parsing_result_ids: List[int]):
check_urls(parsing_result_ids)
# running task
check_priority_urls.delay(parsing_results_ids)
</code></pre>
<p>But I see only one queue</p>
<pre><code>celery -A core_app inspect active_queues
-> celery@d1a287d1d3b1: OK
* {'name': 'celery', 'exchange': {'name': 'celery', 'type': 'direct', 'arguments': None, 'durable': True, 'passive': False, 'auto_delete': False, 'delivery_mode': None, 'no_declare': False}, 'routing_key': 'celery', 'queue_arguments': None, 'binding_arguments': None, 'consumer_arguments': None, 'durable': True, 'exclusive': False, 'auto_delete': False, 'no_ack': False, 'alias': None, 'bindings': [], 'no_declare': None, 'expires': None, 'message_ttl': None, 'max_length': None, 'max_length_bytes': None, 'max_priority': None}
1 node online.
</code></pre>
<p>And Celery Flower sees only one queue:
<a href="https://i.sstatic.net/wivzvcqY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wivzvcqY.png" alt="enter image description here" /></a></p>
<p>My celery in docker-compose</p>
<pre><code>celery:
build: ./project
command: celery -A core_app worker --loglevel=info --concurrency=15 --max-memory-per-child=1000000
volumes:
- ./project:/usr/src/app
- ./project/media:/project/media
- ./project/logs:/project/logs
env_file:
- .env
environment:
# environment variables declared in the environment section override env_file
- DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
- CELERY_BROKER=redis://redis:6379/0
- CELERY_BACKEND=redis://redis:6379/0
depends_on:
- django
- redis
</code></pre>
|
<python><django><celery>
|
2024-12-24 18:44:17
| 1
| 1,708
|
mascai
|
79,306,282
| 8,512,262
|
Segmentaiton fault or Trap in pyobjc keyboard hook
|
<p>I'm trying to write a small application for macOS in Python where opening characters such as <code>[ { ( < " '</code> are paired/closed automatically (similar to how it's handled in editors like VS Code, but globally).</p>
<p>I'm using <code>pyobjc</code> to set up a keyboard hook. So far I've been able to get the hook callback to fire <em>once</em> (e.g., when I type <code><</code> the callback fires and adds <code>></code>) before hitting either...</p>
<p>a segmentation fault</p>
<pre><code>zsh: segmentation fault /<project path>/.venv/bin/python
</code></pre>
<p>or trace trap</p>
<pre><code>zsh: trace trap /<project path>/.venv/bin/python
</code></pre>
<p>I'm not sure what's causing either, and the error I get isn't always the same (it's most often the <code>TRAP</code>, but sometimes <code>SEGV</code>)</p>
<p>Here's my working example:</p>
<pre class="lang-none prettyprint-override"><code>import Quartz as qz
from AppKit import NSEvent
from time import sleep
PAIRS = {
'\'': '\'',
'"': '"',
'(': ')',
'[': ']',
'{': '}',
'<': '>',
'`': '`',
}
def is_synthetic_event(event: qz.CGEventRef) -> bool:
"""Check if the event is synthetic"""
source_state = qz.CGEventGetIntegerValueField(
event,
qz.kCGEventSourceStateID
)
# NOTE: seems like this is always 1...
print(f'Event Source State: {source_state}') # DEBUG
return source_state == qz.kCGEventSourceStateHIDSystemState
def kbd_callback(
_proxy: object,
_event_type: int,
event: qz.CGEventRef,
_refcon: object
) -> qz.CGEventRef:
"""Intercept key press events"""
try:
# convert to NSEvent to access character information
key_event = NSEvent.eventWithCGEvent_(event)
print(key_event) # DEBUG
repeat = key_event.isARepeat()
# skip synthetic events to prevent recursion
# NOTE: this is not working as expected - negation is wrong, but
# the callback doesn't fire without it
if not is_synthetic_event(event):
return event
# check if the character matches an opening pair
if (char := key_event.characters()) in PAIRS and not repeat:
# post the opening character
qz.CGEventPost(qz.kCGAnnotatedSessionEventTap, event)
sleep(0.01)
closing_char = PAIRS[char]
# create and post a key down event for the closing character
press_event = qz.CGEventCreateKeyboardEvent(None, 0, True)
qz.CGEventKeyboardSetUnicodeString(press_event, 1, closing_char)
qz.CGEventPost(qz.kCGAnnotatedSessionEventTap, press_event)
qz.CFRelease(press_event)
# give the system time to process the event
sleep(0.01)
# create and post a key release event for the closing character
release_event = qz.CGEventCreateKeyboardEvent(None, 0, False)
qz.CGEventKeyboardSetUnicodeString(release_event, 1, closing_char)
qz.CGEventPost(qz.kCGAnnotatedSessionEventTap, release_event)
qz.CFRelease(release_event)
except Exception as err:
print(f'Error in callback: {err}')
return event
if __name__ == '__main__':
try:
# create a global event tap
event_tap = qz.CGEventTapCreate(
qz.kCGSessionEventTap, # tap location
qz.kCGHeadInsertEventTap, # insert at head of event queue
qz.kCGEventTapOptionDefault, # read-write access
qz.CGEventMaskBit(qz.kCGEventKeyDown), # key-down events only
kbd_callback, # callback function
None # no user data
)
if event_tap:
loop_source = qz.CFMachPortCreateRunLoopSource(None, event_tap, 0)
qz.CFRunLoopAddSource(
qz.CFRunLoopGetCurrent(),
loop_source,
qz.kCFRunLoopDefaultMode
)
qz.CGEventTapEnable(event_tap, True)
qz.CFRunLoopRun()
else:
print('Failed to create event tap')
except Exception as err:
print(f'Error in main: {err}')
</code></pre>
<p>I suspect this is related to the synthetic events triggering the callback, but I'm not sure. I've also found that my <code>is_synthetic_event</code> function doesn't work as expected. I have to invert it or the callback doesn't fire at all.</p>
<p>Any help is welcome! I'm quite comfortable in Python, but much less so with <code>pyobjc</code>.</p>
|
<python><macos><segmentation-fault><hook><pyobjc>
|
2024-12-24 18:22:03
| 0
| 7,190
|
JRiggles
|
79,306,280
| 6,550,398
|
Preserving Anchors for Numeric Value 0 in ruamel.yaml
|
<p>I'm encountering an issue with preserving YAML anchors for a numeric value when using <code>ruamel.yaml</code>: it fails specifically for the number 0, while all other numeric values work fine. Here's what's happening:</p>
<p>Context: I'm using <code>ruamel.yaml</code> to parse and manipulate YAML files in Python. I need to keep anchors for numeric values intact, but here's the problem:</p>
<pre class="lang-py prettyprint-override"><code>from ruamel.yaml import YAML, ScalarInt, PlainScalarString
# Custom loader to attempt to preserve anchors for numeric values
class CustomLoader(YAML):
def __init__(self):
super().__init__(typ='rt')
self.preserve_quotes = True
self.explicit_start = True
self.default_flow_style = False
def construct_yaml_int(self, node):
value = super().construct_yaml_int(node)
if node.anchor:
# Preserve the anchor for numeric values
if value == 0:
return PlainScalarString("0", anchor=node.anchor.value)
else:
return ScalarInt(value, anchor=node.anchor.value)
return value
yaml = CustomLoader()
# Load the YAML file
with open('current.yaml', 'r') as current_file:
current_data = yaml.load(current_file)
print("Debug: current_data after load:", current_data)
for key, value in current_data.items():
print(f"Debug: Key '{key}', value type: {type(value)}, has anchor: {hasattr(value, 'anchor')}, anchor value: {getattr(value, 'anchor', None)}")
</code></pre>
<p><code>current.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>person: &person_age 0
person: &person_age 1 # this works
</code></pre>
<p>Expected Behavior: The anchor <code>&person_age</code> should be preserved for the person key with the value 0.</p>
<p>Actual Behavior: The anchor is not preserved; <code>hasattr(value, 'anchor')</code> returns <code>False</code>, and the value type is <code><class 'int'></code> rather than <code>ScalarInt</code> or <code>PlainScalarString</code> with an anchor.</p>
<p>What I've tried: I've tried to override <code>construct_yaml_int</code> in a custom loader to manually preserve anchors for integers, but it doesn't seem to work. I've ensured that <code>ruamel.yaml</code> is configured with <code>typ='rt'</code> for round-trip preservation. I've experimented with quoting the 0 in the YAML file (person: <code>&person_age "0"</code>), which does preserve the anchor, but this isn't a feasible solution for my use case where users might not quote their numeric values.</p>
<p>Question: How can I ensure that anchors are preserved for numeric value 0, when using <code>ruamel.yaml</code>? Is there a way to force <code>ruamel.yaml</code> to handle anchors for numbers without needing them to be quoted in the source YAML?</p>
<p>Any insights or alternative approaches would be greatly appreciated.</p>
<p>Version- [Python:3.12.5, ruamel.yaml:0.18.6]</p>
|
<python><yaml><ruamel.yaml>
|
2024-12-24 18:21:09
| 1
| 462
|
Ravi
|
79,306,264
| 7,233,155
|
Python typing for a generic instance method decorator
|
<p>I am creating a generic decorator for an instance method:</p>
<pre class="lang-py prettyprint-override"><code>def my_dec(func):
def wrapper(self, *args, **kwargs):
print("do something")
return func(self, *args, **kwargs)
return wrapper
</code></pre>
<p>All works fine, except typing. When I type it as follows:</p>
<pre class="lang-py prettyprint-override"><code>def my_dec(
func: Callable[[*Tuple[Any, ...]], Any]
) -> Callable[[*Tuple[Any, ...]], Any]:
def wrapper(
self: Any,
*args: Any,
**kwargs: Any
) -> Any:
print("do something")
return func(self, *args, **kwargs)
return wrapper
</code></pre>
<p>When I use the decorator I receive errors from <code>mypy</code> such as:</p>
<pre><code> Argument 1 to "my_dec" has incompatible type
"Callable[[datetime, datetime | str, str | NoInput], float | None]";
expected "Callable[[VarArg(Any)], Any]" [arg-type]
</code></pre>
<p>Shouldn't these be allowed, given everything is typed as <code>Any</code>? What should I do to fix this?
Obviously this cannot be function-specific; the decorator is applied to many different functions.</p>
|
<python><decorator><python-typing><mypy>
|
2024-12-24 18:15:29
| 0
| 4,801
|
Attack68
|
79,306,155
| 10,474,998
|
Move a string from a filename to the end of the filename in Python
|
<p>If I have documents labeled:</p>
<pre class="lang-none prettyprint-override"><code>2023_FamilyDrama.pdf
2024_FamilyDrama.pdf
2022-beachpics.pdf
2020 Hello_world bring fame.pdf
2019-this-is-my_doc.pdf
</code></pre>
<p>I would like them to be</p>
<pre class="lang-none prettyprint-override"><code>FamilyDrama_2023.pdf
FamilyDrama_2024.pdf
beachpics_2022.pdf
Hello_world bring fame_2020.pdf
this-is-my_doc_2019.pdf
</code></pre>
<p>So far, I know how to remove a string from the beginning of the filename, but I do not know how to save it and append it to the end of the filename.</p>
<pre><code>import os
for root, dir, fname in os.walk(PATH):
for f in fname:
os.chdir(root)
if f.startswith('2024_'):
os.rename(f, f.replace('2024_', ''))
</code></pre>
|
<python><filenames>
|
2024-12-24 17:26:57
| 6
| 1,079
|
JodeCharger100
|
79,305,935
| 28,063,240
|
Generate dataclass for typing programmatically
|
<p>How can I express</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Measurements:
width: int
height: int
head_left: int
head_right: int
head_width: int
head_top: int
head_bottom: int
head_height: int
space_above_head: int
space_below_chin: int
eyes_to_bottom_edge: int
eyes_to_top_edge: int
@dataclass
class Constraints:
width: Bounds = (None, None)
height: Bounds = (None, None)
head_left: Bounds = (None, None)
head_right: Bounds = (None, None)
head_width: Bounds = (None, None)
head_top: Bounds = (None, None)
head_bottom: Bounds = (None, None)
head_height: Bounds = (None, None)
space_above_head: Bounds = (None, None)
space_below_chin: Bounds = (None, None)
eyes_to_bottom_edge: Bounds = (None, None)
eyes_to_top_edge: Bounds = (None, None)
</code></pre>
<p>without so much duplication?</p>
<p>I've tried</p>
<pre class="lang-py prettyprint-override"><code>Constraints = dataclasses.make_dataclass('Constraints', [(f.name, Bounds, dataclasses.field(default=(None, None))) for f in dataclasses.fields(Measurements)])
</code></pre>
<p>But now because I've defined <code>Constraints</code> like that, I get this typing error when I use <code>Constraint</code> as a type,</p>
<pre class="lang-py prettyprint-override"><code>def crop(image: npt.NDArray, constraints: Constraints) -> npt.NDArray:
^
</code></pre>
<blockquote>
<p>Variable not allowed in type expression</p>
</blockquote>
<p>Can anyone think of a better solution?</p>
|
<python><python-typing><python-dataclasses>
|
2024-12-24 15:22:31
| 1
| 404
|
Nils
|
79,305,824
| 8,229,265
|
Efficiently Removing a Single Page from a Large Multi-page TIFF with JPEG Compression in Python
|
<p>I am working with a large multi-page TIFF file that is JPEG-compressed, and I need to remove a single page from it. I am using the tifffile Python package to process the TIFF, and I already know which page I want to remove based on metadata tags associated with that page. My current approach is to read all pages, modify the target page (either by skipping or replacing it), and write the rest back to a new TIFF file.</p>
<p>Hereβs what Iβve tried so far:</p>
<pre class="lang-py prettyprint-override"><code>import tifffile
with tifffile.TiffFile('file') as tif:
for i, page in enumerate(tif.pages):
if some condition with tags is true:
# Skip the page to delete or replace with a dummy page
image_data = page.asarray(memmap=True) # Memory-mapped access to the page's data
# Write the page to the output file
writer.write(
image_data,
compression='jpeg',
photometric=page.photometric,
metadata=page.tags,
)
</code></pre>
<p>However, this approach has several issues:</p>
<ul>
<li><p>Memory Usage: Processing a large file consumes almost all available memory (I have 32GB of RAM, but it uses up to 28GB), which makes it unfeasible for large files.</p>
</li>
<li><p>Compression Issues: Different compression methods like LZW, ZSTD, and JPEG create files of vastly different sizes, and some are much larger than the original.</p>
</li>
<li><p>Performance: Using methods like strips or chunking leads to very slow processing, taking too long to delete a single page.</p>
</li>
<li><p>Output file size: The size of the output file with using a different compression method makes it too big! (3GB Input on JPEG to 50GB+ output on LZW)</p>
</li>
</ul>
<p>Is there any way in Python to efficiently remove a single page from a large multi-page TIFF file without consuming too much memory or taking forever? I've seen some .NET packages that can delete a page in place; does Python have a similar solution?</p>
|
<python><out-of-memory><tiff>
|
2024-12-24 14:29:38
| 1
| 310
|
Zenquiorra
|
79,305,588
| 7,715,250
|
Use YOLO with unbounded input exported to an mlpackage/mlmodel file
|
<p>I want to create an .mlpackage or .mlmodel file which I can import in Xcode to do image segmentation. For this, I want to use the segmentation package within YOLO to check out if it fit my needs.</p>
<p>The problem now is that this script creates an .mlpackage file which only accepts images with a fixed size (640x640):</p>
<pre><code>from ultralytics import YOLO
model = YOLO("yolo11n-seg.pt")
model.export(format="coreml")
</code></pre>
<p>I want the change something here, probably with <code>coremltools</code>, to handle unbounded ranges (I want to handle arbitrary sized images). It's described a bit here: <a href="https://apple.github.io/coremltools/docs-guides/source/flexible-inputs.html#enable-unbounded-ranges" rel="nofollow noreferrer">https://apple.github.io/coremltools/docs-guides/source/flexible-inputs.html#enable-unbounded-ranges</a>, but I don't understand how I can implement it with my script.</p>
|
<python><machine-learning><yolo><coreml><mlmodel>
|
2024-12-24 12:23:06
| 1
| 13,448
|
J. Doe
|
79,305,343
| 12,466,687
|
How to move x-axis on top of the plot in plotnine?
|
<p>I am using <code>plotnine</code> with a <code>date</code> x-axis plot and want to put x-axis <strong>date values on top</strong> of the chart but couldn't find a way to do it.</p>
<p>I have seen that in ggplot2 this can be done using <code>scale_x_discrete(position = "top")</code>, but with plotnine's <code>scale_x_datetime()</code> I couldn't find any <code>position</code> parameter.</p>
<p>Below is the sample code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from plotnine import *
# Create a sample dataset
data = pd.DataFrame({
'date': pd.date_range('2022-01-01', '2022-12-31',freq="M"),
'value': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]
})
# Create the line chart
(ggplot(data, aes(x='date', y='value')) +
geom_line() +
labs(title='Line Chart with Dates on X-Axis', x='Date', y='Value') +
theme_classic()
)
</code></pre>
<p>Would really appreciate any help !!</p>
<p><strong>Update</strong> (facet plot) after the answer:</p>
<pre><code># sample dataset
new_data = {
'date': pd.date_range('2022-01-01', periods=8, freq="ME"),
'parent_category': ['Electronics', 'Electronics', 'Fashion', 'Fashion', 'Home Goods', 'Electronics', 'Fashion','Electronics'],
'child_category': ['Smartphones', 'Laptops', 'Shirts', 'Pants', 'Kitchenware','Laptops', 'Shirts', 'Smartphones']
}
# Create DataFrame
new_data = pd.DataFrame(new_data)
new_data
</code></pre>
<p>plot with answer tips</p>
<pre><code>(ggplot(new_data, aes(x="date", y="child_category", group="child_category")) +
geom_line(size=1, color="pink") +
geom_point(size=3, color="grey") +
facet_wrap("parent_category", ncol=1, scales="free_y") +
theme_538() +
theme(axis_text_x=element_text(angle=45, hjust=1),
panel_grid_major=element_blank(),
figure_size=(8, 6),
axis_line_x=element_line(position=('axes',1)))
)
</code></pre>
<p><a href="https://i.sstatic.net/UDA5klBE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDA5klBE.png" alt="enter image description here" /></a></p>
<p>In the facet plot it doesn't go to the top of the chart.</p>
|
<python><ggplot2><plotnine>
|
2024-12-24 10:31:26
| 1
| 2,357
|
ViSa
|
79,305,311
| 15,018,688
|
Keras Attention layer not returning attention scores
|
<p>I'm using keras=3.7.0 and trying to implement a custom Temporal Convolutional Attention Network (TCAN) block. While the Attention layer works in a standalone test case, I encounter an issue when integrating it into my custom model. Specifically, the error occurs when I attempt to unpack the output of the Attention layer.</p>
<p>The following works:</p>
<pre class="lang-py prettyprint-override"><code>
import tensorflow as tf
from tensorflow.keras.layers import Attention, Input
# Example inputs
batch_size, time_steps, features = 2, 8, 16
query = tf.random.uniform((batch_size, time_steps, features))
value = tf.random.uniform((batch_size, time_steps, features))
key = tf.random.uniform((batch_size, time_steps, features))
# Attention layer with return_attention_scores=True
attention_layer = Attention(use_scale=True, dropout=0.1)
output, attention_scores = attention_layer(
[query, value, key], return_attention_scores=True, use_causal_mask=True
)
print(f"Output shape: {output.shape}")
print(f"Attention scores shape: {attention_scores.shape}")
</code></pre>
<p>Gives:</p>
<pre class="lang-bash prettyprint-override"><code>Output shape: (2, 8, 16)
Attention scores shape: (2, 8, 8)
</code></pre>
<p>Why is it not working in my main code?</p>
<pre class="lang-py prettyprint-override"><code>def tcan_block(inputs, filters, kernel_size, activation, dilation_rate, d_k, atn_dropout):
"""
A single block of TCAN.
Arguments:
inputs: Tensor, input sequence.
filters: Integer, number of filters for the convolution.
kernel_size: Integer, size of the convolution kernel.
dilation_rate: Integer, dilation rate for the convolution.
d_k: Integer, dimensionality of the attention keys/queries.
Returns:
Tensor, output of the TCAN block.
"""
# Temporal Attention
query = Dense(d_k)(inputs)
key = Dense(d_k)(inputs)
value = Dense(d_k)(inputs)
# Apply Keras Attention with causal masking
attention_output, attention_scores = Attention(use_scale=True, dropout=atn_dropout)(
[query, value, key],
use_causal_mask=True,
return_attention_scores=True,
)
# Dilated Convolution
conv_output = Conv1D(
filters, kernel_size, dilation_rate=dilation_rate, padding="causal", activation=activation
)(attention_output)
# Enhanced Residual
# Calculate Mt using cumulative sum up to each time step
importance = Lambda(lambda x: K.cumsum(x, axis=1))(attention_scores)
enhanced_residual = Lambda(lambda x: x[0] * x[1])([inputs, importance])
# Add residual connection
output = Add()([inputs, conv_output, enhanced_residual])
return output
</code></pre>
<p>Error:</p>
<pre class="lang-bash prettyprint-override"><code> File "/home/furkan/Documents/Deep-Learning-Model/src/utils/models/tcan.py", line 138, in build_tcan_model
x = tcan_block(
^^^^^^^^^^^
File "/home/furkan/Documents/Deep-Learning-Model/src/utils/models/tcan.py", line 87, in tcan_block
attention_output, attention_scores = Attention(use_scale=True, dropout=atn_dropout)(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/furkan/Documents/Deep-Learning-Model/.venv/lib/python3.11/site-packages/keras/src/backend/common/keras_tensor.py", line 167, in __iter__
raise NotImplementedError(
NotImplementedError: Iterating over a symbolic KerasTensor is not supported.
</code></pre>
|
<python><tensorflow><keras><deep-learning>
|
2024-12-24 10:08:47
| 0
| 556
|
Furkan Öztürk
|
79,305,200
| 561,243
|
Static typing of Python regular expression: 'incompatible type "str"; expected "AnyStr | Pattern[AnyStr]" '
|
<p>Just to be clear, this question has nothing to do with the regular expression itself and my code is perfectly running even though it is not passing mypy strict verification.</p>
<p>Let's start from the basic, I have a class defined as follows:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
import re
from typing import AnyStr
class MyClass:
def __init__(self, regexp: AnyStr | re.Pattern[AnyStr]) -> None:
if not isinstance(regexp, re.Pattern):
regexp = re.compile(regexp)
self._regexp: re.Pattern[str] | re.Pattern[bytes]= regexp
</code></pre>
<p>The user can build the class by passing either a compiled re pattern or AnyStr.
I want the class to store the compiled value in the private _regexp attribute. So if the user did not provide a compiled pattern, I compile it and assign it to the private attribute.</p>
<p>So far so good, even though I would have expected self._regexp to be of type re.Pattern[AnyStr] instead of the union of the two pattern types. Anyhow, up to here everything is OK with mypy.</p>
<p>Now, in some (or most) cases, the user provides the regexp string via a configuration TOML file, which is read in and parsed into a dictionary. For this case I have a class method constructor defined as follows:</p>
<pre class="lang-py prettyprint-override"><code> @classmethod
def from_dict(cls, d: dict[str, str]) -> MyClass:
r = d.get('regexp')
if r is None:
raise KeyError('missing regexp')
return cls(regexp=r)
</code></pre>
<p>The type of the dictionary will be dict[str, str].
I have to check that the dictionary contains the right key to prevent a NoneType in case the get function cannot find it.</p>
<p>I get the error:</p>
<blockquote>
<p>error: Argument "regexp" to "MyClass" has incompatible type "str"; expected "AnyStr | Pattern[AnyStr]" [arg-type]</p>
</blockquote>
<p>That looks bizarre, because str should be compatible with AnyStr.</p>
<p>Let's say that I modify the dictionary typing to dict[str, AnyStr]. Instead of fixing the problem, it multiplies it because I get two errors:</p>
<blockquote>
<p>error: Argument "regexp" to "MyClass" has incompatible type "str"; expected "AnyStr | Pattern[AnyStr]" [arg-type]
<br>
error: Argument "regexp" to "MyClass" has incompatible type "bytes"; expected "AnyStr | Pattern[AnyStr]" [arg-type]</p>
</blockquote>
<p>It looks like I am in a loop: when I think I have fixed something, I just moved the problem back elsewhere.</p>
|
<python><python-typing><mypy><python-re>
|
2024-12-24 09:10:20
| 1
| 367
|
toto
|
79,304,881
| 10,024,860
|
Optimize performance of SubstrateInterface.decode_scale
|
<p>I am using Python's substrate interface to look at the extrinsics of a particular block of a Substrate-based blockchain. However, the decode_scale method is very slow: it takes about 1 second for 500 extrinsics (about 1 million bytes total). A simple baseline with binascii is 100-1000x faster, although it doesn't do the SCALE-codec part of the decoding.</p>
<p>Current code:</p>
<pre><code>substrate = SubstrateInterface(url="ws://localhost:9944")
...
async with session.post("http://localhost:9944", json=payload) as resp:
res = await resp.json()
for hex_extrinsic in res['result']:
decoded = substrate.decode_scale("Extrinsic", hex_extrinsic)
... # process decoded
</code></pre>
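<p>For comparison, the binascii baseline I mentioned is essentially just the hex-decoding step, with no SCALE decoding at all (assuming the extrinsics come back as <code>0x</code>-prefixed hex strings):</p>
<pre><code>import binascii

for hex_extrinsic in res['result']:
    raw = binascii.unhexlify(hex_extrinsic[2:])  # strip the "0x" prefix, get raw bytes
</code></pre>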
<p>Is there a way to speed this up? Such as a performance-optimized library for this.</p>
|
<python><substrate>
|
2024-12-24 06:23:39
| 0
| 491
|
Joe C.
|
79,304,770
| 8,772,888
|
Python Virtual Environment Activation Not Working
|
<p>I am trying to activate a Python virtual environment on Windows and it seems like it took OK:</p>
<p><a href="https://i.sstatic.net/JpBMxc02.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpBMxc02.png" alt="enter image description here" /></a></p>
<p>I installed a module I needed for my venv, but when I deactivate the venv, the module I just installed is still listed under <code>pip list</code>. I would have thought that after deactivating the venv, the module would no longer be considered "installed" since the venv is no longer active.</p>
<p>I tried the <a href="https://stackoverflow.com/questions/1871549/how-to-determine-if-python-is-running-inside-a-virtualenv">sys.prefix test</a> and it returns <code>False</code> suggesting that I am not actually in my venv. I took a look at this <a href="https://stackoverflow.com/questions/24011399/activating-a-virtual-env-not-working">link</a> also.</p>
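<p>The check I ran (from the linked answer) was essentially this, executed where I believed the venv was active:</p>
<pre><code>import sys
print(sys.prefix != sys.base_prefix)  # prints False for me, i.e. no venv detected
</code></pre>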
<p>Any ideas?</p>
|
<python><python-venv>
|
2024-12-24 05:10:17
| 1
| 3,821
|
ravioli
|
79,304,741
| 200,783
|
How should I convert this recursive function into iteration?
|
<p>I have a recursive function in the following form:</p>
<pre><code>def f():
if cond1:
...
f()
elif cond2:
...
</code></pre>
<p>I've "mechanically" converted it to an iterative function like this:</p>
<pre><code>def f():
while True:
if cond1:
...
elif cond2:
...
break
else:
break
</code></pre>
<p>I believe this conversion is valid, but is there a more elegant way to do it? For example, one that doesn't need multiple <code>break</code> statements?</p>
|
<python><recursion><iteration>
|
2024-12-24 04:43:45
| 1
| 14,493
|
user200783
|
79,304,665
| 624,533
|
In Python Flask new project, cannot Activate the environment as per official doc
|
<p>I'm new to Python Flask. Here is my terminal commands as per official flask doc.</p>
<p>Official flask link: <a href="https://flask.palletsprojects.com/en/stable/installation/" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/stable/installation/</a></p>
<pre class="lang-bash prettyprint-override"><code>WORK/2025/Python
[08:45]❯ cd FlaskIntro/
2025/Python/FlaskIntro
[08:45]❯ python3 -m venv .venv
2025/Python/FlaskIntro
[08:46]❯ . .venv/bin/activate
.venv/bin/activate (line 40): ‘case’ builtin not inside of switch block
case "$(uname)" in
^~~^
from sourcing file .venv/bin/activate
.: Error while reading file '.venv/bin/activate'
2025/Python/FlaskIntro
[08:46]❯ python --version
Python 3.13.1
2025/Python/FlaskIntro
[08:46]❯ python3 --version
Python 3.13.1
2025/Python/FlaskIntro
[08:46]❯
</code></pre>
<p><strong>SOLUTION:</strong> <code>❯ . .venv/bin/activate.fish</code></p>
<blockquote>
<p>I tried all the files and found that I was using <code>fish shell</code>.</p>
</blockquote>
|
<python><flask>
|
2024-12-24 03:21:27
| 1
| 960
|
Sudhakar Krishnan
|
79,304,408
| 28,063,240
|
Get top (crown) of the head in Python?
|
<p>How can I get the pixel coordinates of the top of the head for a headshot image in Python?</p>
<p>I've taken a look at <code>dlib</code> with <a href="https://github.com/codeniko/shape_predictor_81_face_landmarks" rel="nofollow noreferrer">https://github.com/codeniko/shape_predictor_81_face_landmarks</a> as well as the <a href="https://github.com/ageitgey/face_recognition" rel="nofollow noreferrer"><code>face_recognition</code></a> module, but neither of them seems to provide a landmark for the top (crown) of the head.</p>
<p>Here's the script I've written</p>
<pre class="lang-py prettyprint-override"><code># /// script
# dependencies = [
# "cmake",
# "pillow",
# "face_recognition",
# "dlib",
# ]
# ///
import argparse
from pathlib import Path
import dlib
import face_recognition
from PIL import Image, ImageDraw
def main(image_path):
image = face_recognition.load_image_file(image_path)
return features(image)
def features(image):
# face_recognition
face_landmarks_list = face_recognition.face_landmarks(image)
print("I found {} face(s) in this photograph.".format(len(face_landmarks_list)))
pil_image = Image.fromarray(image)
d = ImageDraw.Draw(pil_image)
for face_landmarks in face_landmarks_list:
for facial_feature in face_landmarks.keys():
print("The {} in this face has the following points: {}".format(facial_feature, face_landmarks[facial_feature]))
for facial_feature in face_landmarks.keys():
d.line(face_landmarks[facial_feature], width=2)
locations = face_recognition.face_locations(image)
for top, right, bottom, left in locations:
d.line([(left, top), (right, top), (right, bottom), (left, bottom), (left, top)], width=2)
# dlib
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(str(Path(__file__).parent / 'shape_predictor_81_face_landmarks.dat'))
faces = detector(image)
for face in faces:
x1 = face.left()
y1 = face.top()
x2 = face.right()
y2 = face.bottom()
landmarks = predictor(image, face)
for part in landmarks.parts():
d.circle((part.x, part.y), 2, width=0, fill='white')
# Show the picture
pil_image.show()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Process an image to create a proportional passport photo.")
parser.add_argument("image", help="Path to the image file (jpeg, png, tiff)")
args = parser.parse_args()
main(args.image)
</code></pre>
<p>and its output:</p>
<p><a href="https://i.sstatic.net/ibZyddj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ibZyddj8.png" alt="enter image description here" /></a></p>
<p>But the point I am interested in is this one, where the crown of the head would be predicted to be:</p>
<p><a href="https://i.sstatic.net/TMr94QuJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMr94QuJ.png" alt="enter image description here" /></a></p>
<p>How can I get that point in Python?</p>
|
<python><face-recognition>
|
2024-12-23 22:48:44
| 0
| 404
|
Nils
|
79,304,389
| 2,751,179
|
select dataframe column and replace values by indices if True
|
<p>Hard to find the right title...here is what I want:</p>
<p>I have a dataframe and a column col1 with values: val1, val2, val3.</p>
<p>I want to select the rows with the values val2 or val3 in this specific column and replace them with the value val4, but not all of them, just a "slice" between index x and y:</p>
<pre><code>import pandas as pd
data = {'col1':["val1","val3","val3","val2","val1","val2","val3","val1"],'col2':["val3","val1","val2","val1","val2","val3","val2","val2"]}
df = pd.DataFrame(data)
df
col1 col2
0 val1 val3
1 val3 val1
2 val3 val2
3 val2 val1
4 val1 val2
5 val2 val3
6 val3 val2
7 val1 val2
</code></pre>
<p>Select rows from col1 with val2 or val3 values:</p>
<pre><code>(df['col1']=="val2") | (df['col1']=="val3")
0 False
1 True
2 True
3 True
4 False
5 True
6 True
7 False
</code></pre>
<p>Now I want to replace the first 4 True rows for col1 (rows with indices 1, 2, 3, 5) with val4 in order to obtain:</p>
<pre><code> col1 col2
0 val1 val3
1 val4 val1
2 val4 val2
3 val4 val1
4 val1 val2
5 val4 val3
6 val3 val2
7 val1 val2
</code></pre>
<p>I thought of something like:</p>
<p><code>df[((df['col1']=="val2") | (df['col1']=="val3"))==True][0:4] = "val4"</code></p>
<p>but it doesn't work (no surprise...).</p>
<p>I think I need to use something like .loc.</p>
<p>Thanks for any clue.</p>
|
<python><pandas><dataframe>
|
2024-12-23 22:31:15
| 4
| 455
|
Fabrice
|
79,304,247
| 9,759,769
|
Polars transform meta data of expressions
|
<p>Is it possible in Python Polars to transform the root_names of an expression's metadata?
E.g. if I have an expression like</p>
<pre class="lang-py prettyprint-override"><code>expr = pl.col("A").dot(pl.col("B")).alias("AdotB")
</code></pre>
<p>to add suffixes to the root_names, e.g. transforming the expression to</p>
<pre class="lang-py prettyprint-override"><code>pl.col("A_suffix").dot(pl.col("B_suffix")).alias("AdotB_suffix")
</code></pre>
<p>I know that <code>expr.meta.root_names()</code> gives back a list of the column names, but I could not find a way to transform them.</p>
|
<python><python-polars>
|
2024-12-23 21:11:17
| 1
| 690
|
Max
|
79,304,172
| 2,876,983
|
Opening all files from S3 folder into dataframe
|
<p>I am currently opening a csv file as is:</p>
<pre><code>request_csv = s3_client.get_object(Bucket='bucketname', Key='dw/file.csv')
</code></pre>
<p>I'd like to change this to open all files inside <code>dw/folder</code> (they are all CSV) into a single Dataframe. How can I approach this? Any pointers would be appreciated.</p>
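<p>Roughly what I'm imagining (a sketch only, not tested; the bucket name and prefix are the same placeholders as above) is to list the keys under the prefix and concatenate the CSVs:</p>
<pre><code>import io
import pandas as pd

resp = s3_client.list_objects_v2(Bucket='bucketname', Prefix='dw/folder/')
frames = []
for item in resp.get('Contents', []):
    obj = s3_client.get_object(Bucket='bucketname', Key=item['Key'])
    frames.append(pd.read_csv(io.BytesIO(obj['Body'].read())))
df = pd.concat(frames, ignore_index=True)
</code></pre>
<p>I'm not sure whether this is the idiomatic way, or whether there's a better option (pagination, s3fs, etc.).</p>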
|
<python><amazon-s3><boto3>
|
2024-12-23 20:39:02
| 1
| 321
|
user2876983
|
79,304,160
| 4,352,047
|
Use aggregate result again on another
|
<p>I am trying to use a <code>GROUP BY</code> query with DuckDB. I am having trouble with some nested aggregates and am unsure of how to approach it (in SQL land). For each aggregate group, I want to:</p>
<ol>
<li>Compute the mean of aggregate</li>
<li>Then, within each group, subtract that mean from each price and aggregate the result.</li>
</ol>
<p>I have tried looking at Arrow vectorized UDF functions, but I can't seem to find details on whether they're supported with DuckDB.</p>
<p>The current code I'm working with, from a parquet file that has a <code>price</code> & <code>size</code> at a given timestamp:</p>
<pre><code> df = duckdb.sql(f"""
SELECT timestamp,
first(price) AS start
sum(size) AS vol,
sum(price * size).divide(vol) AS mean,
sum(price - mean) AS subtract_mean, ## This line does not ###work
FROM '{file_raw_parquet}'
GROUP BY timestamp
""")
</code></pre>
<p>I marked the line I'm having issues with. Should I alternatively compute the <code>mean</code> in a separate table, <code>JOIN</code> it to the main table, and then compute the <code>GROUP BY</code>?</p>
<p>I found a similar thread talking about re-using the group by statement: <a href="https://stackoverflow.com/questions/74625046/re-use-aggregate-result">Re-use aggregate result</a> but was slightly different than what I was after.</p>
<p>Thanks!</p>
|
<python><sql><duckdb>
|
2024-12-23 20:31:10
| 1
| 379
|
Deftness
|
79,304,114
| 10,292,638
|
Click on button on async website - Unable to locate element: {"method":"css selector","selector":".Icon__oBwY4"}
|
<p>I am trying to click this button (which shows when the website is accessed for the first time) on this website (<a href="https://www.popmart.com/sg/user/login" rel="nofollow noreferrer">https://www.popmart.com/sg/user/login</a>) using <code>selenium</code>:</p>
<p><a href="https://i.sstatic.net/AM7w4B8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AM7w4B8J.png" alt="enter image description here" /></a></p>
<p>I am trying to use the <code>find_element()</code> function, and in the website's elements I can see the class name written exactly as expected. However, when I try the following:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
import time
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
print("Imported packages successfully!")
########### launch website ##################
options = webdriver.ChromeOptions()
options.add_experimental_option("detach", True)
driver = webdriver.Chrome(options=options)
driver.get("https://www.popmart.com/sg/user/login")
time.sleep(10)
driver.find_element(By.CLASS_NAME, "Icon__oBwY4").click()
</code></pre>
<p>I got the following error:</p>
<blockquote>
<p>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".Icon__oBwY4"}</p>
</blockquote>
<p>Do you know if there's any other workaround to be able to click this button through
<code>selenium</code>?</p>
|
<python><selenium-webdriver><bots>
|
2024-12-23 20:08:14
| 2
| 1,055
|
AlSub
|
79,304,079
| 15,852,600
|
Better way to create column containing sequential values
|
<p>I have the following dataframe:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'sku' : ['A','A','A','A', 'B', 'B', 'C','C','C'],
'price': [10,10,10,10, 8,8,9,9,9]})
</code></pre>
<p><strong>out:</strong></p>
<pre class="lang-py prettyprint-override"><code> sku price
0 A 10
1 A 10
2 A 10
3 A 10
4 B 8
5 B 8
6 C 9
7 C 9
8 C 9
</code></pre>
<p><strong>My issue</strong></p>
<p>I want to create a new column (named <code>t</code>) containing sequential values according to column <code>sku</code> in order to get the following output:</p>
<pre class="lang-py prettyprint-override"><code> sku price t
0 A 10 1
1 A 10 2
2 A 10 3
3 A 10 4
4 B 8 1
5 B 8 2
6 C 9 1
7 C 9 2
8 C 9 3
</code></pre>
<p><strong>My current code:</strong></p>
<pre class="lang-py prettyprint-override"><code>t = [1]
for i in range(1, len(df)):
if df['sku'][i-1] != df['sku'][i]:
t.append(1)
else:
t.append(t[i-1] + 1)
df['t'] = t
</code></pre>
<p>Is there a better way to do it?</p>
|
<python><pandas>
|
2024-12-23 19:47:45
| 0
| 921
|
Khaled DELLAL
|
79,304,055
| 10,140,821
|
extract variables from list and run a loop
|
<p>I have a scenario like below in <code>Python</code>.</p>
<p>The contents of <code>run_main.py</code> are below:</p>
<pre><code>system_code = 'Ind'
if system_code == 'Ind':
ft_tb = ['B_FT', 'S_FT', 'D_FT']
bt_tb = ['B_BT', 'S_BT', 'D_BT']
else:
ft_tb = ['T_FT', 'T_FT', 'T_FT']
bt_tb = ['T_BT', 'T_BT', 'T_BT']
</code></pre>
<p>Now I want to run the code below in a loop for each list, based on <code>system_code</code>.</p>
<p>for example:</p>
<pre><code>if system_code == 'Ind':
    # ft_tb list
    element_1, element_2, element_3 = ft_tb[:3]
    print("SELECT * FROM {} WHERE application = {}".format(element_1, element_3))
    print("DELETE FROM {} WHERE application = {}".format(element_2, element_3))

    # bt_tb list
    element_1, element_2, element_3 = bt_tb[:3]
    print("SELECT * FROM {} WHERE application = {}".format(element_1, element_3))
    print("DELETE FROM {} WHERE application = {}".format(element_2, element_3))
</code></pre>
|
<python>
|
2024-12-23 19:36:39
| 1
| 763
|
nmr
|
79,304,021
| 3,008,410
|
Pyspark getting date from struct field with dollar sign in it
|
<p>I am using PySpark, and this is a pyspark.sql.dataframe.DataFrame.
I am trying to get at the value "entrytimestamp.$date" (which can be null) from a Mongo JSON file that has this schema. I can get everything else but this field. Any suggestions?</p>
<pre><code> root
|-- ENTRY: struct (nullable = true)
| |-- log: struct (nullable = true)
| | |-- activity_fields: string (nullable = true)
|-- ENTRYTIMESTAMP: string (nullable = true)
|-- PART_ID: string (nullable = true)
|-- SESSIONID: string (nullable = true)
|-- entrytimestamp: struct (nullable = true)
| |-- $date: string (nullable = true)
|-- id: string (nullable = true)
</code></pre>
<p>The field contains:
$date="2024-12-01 00:00:00.047+0000"</p>
<p>Edit: I should mention this is a Mongo JSON file.</p>
|
<python><mongodb><apache-spark><pyspark>
|
2024-12-23 19:22:37
| 1
| 856
|
user3008410
|
79,303,980
| 338,479
|
Can you override the default formatter for f-strings?
|
<p>Reading through <a href="https://peps.python.org/pep-3101/#user-defined-formatting" rel="nofollow noreferrer">PEP 3101</a>, it discusses how to subclass string.Formatter to define your own formats, but then you use it via <code>myformatter.format(string)</code>. Is there a way to just make it work with f-strings?</p>
<p>E.g. I'm looking to do something like <code>f"height = {height:.2f}"</code> but I want my own float formatter that handles certain special cases.</p>
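<p>For reference, the <code>string.Formatter</code> route from the PEP looks roughly like this (my own toy "special case" of rendering NaN differently; the class and its behaviour are just illustrative):</p>
<pre class="lang-py prettyprint-override"><code>import math
import string

class MyFormatter(string.Formatter):
    def format_field(self, value, format_spec):
        # Toy special case: render NaN floats as "n/a" instead of "nan"
        if isinstance(value, float) and math.isnan(value):
            return "n/a"
        return super().format_field(value, format_spec)

fmt = MyFormatter()
print(fmt.format("height = {height:.2f}", height=1.234))         # height = 1.23
print(fmt.format("height = {height:.2f}", height=float("nan")))  # height = n/a
</code></pre>
<p>That works, but it forces every call through <code>fmt.format(...)</code>, which is exactly what I'd like to avoid.</p>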
|
<python><formatting>
|
2024-12-23 18:59:46
| 1
| 10,195
|
Edward Falk
|
79,303,873
| 3,696,153
|
Programmatically add an alias to an enum
|
<p>Consider the following:</p>
<pre class="lang-py prettyprint-override"><code> class DayOfWeek( Enum ):
SUN = 0
MON = 1
TUE = 2
WED = 3
THU = 4
FRI = 5
SAT = 6
</code></pre>
<p>I now need to add an alias, but the <code>_add_alias_</code> member function only accepts one value,
and the Python docs do not show how to programmatically add a new enumeration member that is an alias of an existing value.</p>
<p>An example of what I want to do is this:</p>
<pre class="lang-py prettyprint-override"><code>if pay_day_is_friday():
DayOfWeek._add_alias_( "PAYDAY", DayOfWeek.FRI )
elif pay_day_is_monday():
DayOfWeek._add_alias_( "PAYDAY", DayOfWeek.MON )
</code></pre>
|
<python><enums>
|
2024-12-23 18:01:42
| 1
| 798
|
user3696153
|
79,303,820
| 6,046,760
|
Camelot won't import even though its in sys.path
|
<p>Having a nightmare installing camelot-py for jupyter notebook</p>
<p>I tried installing it with pip inside the notebook, then through PowerShell, then through the Anaconda/Miniconda terminal.
The package finally installs through PowerShell, and pip list shows it while in PowerShell.</p>
<p>In the notebook, pip list doesn't show camelot, but the install location is on sys.path; I still can't import camelot.</p>
<p>Also, conda info --envs in PowerShell shows the same environment location as sys.prefix in the notebook - insane!!</p>
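<p>For what it's worth, the comparison I did from inside the notebook was essentially:</p>
<pre><code>import sys
print(sys.executable)  # which interpreter the notebook actually runs
print(sys.prefix)      # matches what `conda info --envs` showed in PowerShell
</code></pre>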
<p>please help</p>
|
<python><python-camelot>
|
2024-12-23 17:38:23
| 2
| 524
|
user6046760
|
79,303,657
| 12,671,057
|
Why is bytes(lst) slower than bytearray(lst)?
|
<p>With <code>lst = [0] * 10**6</code> I get times like these:</p>
<pre><code> 5.4 Β± 0.4 ms bytearray(lst)
5.6 Β± 0.4 ms bytes(bytearray(lst))
13.1 Β± 0.7 ms bytes(lst)
Python:
3.13.0 (main, Nov 9 2024, 10:04:25) [GCC 14.2.1 20240910]
namespace(name='cpython', cache_tag='cpython-313', version=sys.version_info(major=3, minor=13, micro=0, releaselevel='final', serial=0), hexversion=51183856, _multiarch='x86_64-linux-gnu')
</code></pre>
<p>I would've expected <code>bytes(lst)</code> and <code>bytearray(lst)</code> to be equally fast or <code>bytearray(lst)</code> to be the slower one, since it's the more complicated type (has more functionality, as it's mutable). But it's the other way around. And even the detour <code>bytes(bytearray(lst))</code> is much faster than <code>bytes(lst)</code>! Why is <code>bytes(lst)</code> so slow?</p>
<p>Benchmark script (<a href="https://ato.pxeger.com/run?1=ZVJBTsMwEJQ4-hV7s12F0IKKUKT-gXuIUJpsaETtRLZTKar6Ei69wJ0v8Axeg71xKRW5WJ7dzM6O5-2jH92m08fj--Ca64fvq6IxnQLXKmwdtKrvjIs3RhXrStda11b2VFVY6sTjNe5YhEyp606dbna0jFl0Qw8r4Fvr_JHPC5jBYj6b3XPGqq5GG1AG_uPr0aEVvlHy5IyUxpTjP9SKy5rkrGAsKA6E-yqDvICmM1BBq4EGHViNDS1iRSUz4nI03gVNeEf9LvRbrx9rQXR5Vcg8WxYF_WD8PkZDw_dhf-GszJbpojnA1yfsyQzC7ghTFjgLpM-B1LvzguJ2GUdPZqV2MzTNFgVJlFS5lD11k1ivdXoTUXnng7MJ6EGt0awWcwk33thzc9Seln2P2u8i2S9vXI_oE3jFcUWuRGG9abUTJ58SqCRjE8af9CPlJuMyQv6R0x0a23b6L-QjsEWFOqTGV6aUxbCdQvcD" rel="nofollow noreferrer">Attempt This Online!</a>):</p>
<pre class="lang-py prettyprint-override"><code>from timeit import timeit
from statistics import mean, stdev
import random
import sys
setup = 'lst = [0] * 10**6'
codes = [
'bytes(lst)',
'bytearray(lst)',
'bytes(bytearray(lst))'
]
times = {c: [] for c in codes}
def stats(c):
ts = [t * 1e3 for t in sorted(times[c])[:5]]
return f'{mean(ts):5.1f} Β± {stdev(ts):3.1f} ms '
for _ in range(25):
random.shuffle(codes)
for c in codes:
t = timeit(c, setup, number=10) / 10
times[c].append(t)
for c in sorted(codes, key=stats):
print(stats(c), c)
print('\nPython:')
print(sys.version)
print(sys.implementation)
</code></pre>
<p>Inspired by an <a href="https://stackoverflow.com/a/79302320">answer</a> that also found <code>bytearray</code> to be faster.</p>
|
<python><performance><python-internals>
|
2024-12-23 16:20:31
| 1
| 27,959
|
Kelly Bundy
|
79,303,456
| 1,471,980
|
insert into MS Access table using .to_sql()
|
<p>I am currently using MS SQL Server to insert and/or select data using the pyodbc library. I need to be able to do the same thing with an MS Access db. Any ideas how I could insert into an MS Access table using pyodbc?</p>
<p>I am currently doing this in ms sql server:</p>
<pre><code>import pyodbc
from sqlalchemy import create_engine

try:
    engine = create_engine("mssql+pyodbc://sqlserver_unc,15001/db_name?driver=ODBC Driver 17 for SQL Server?trusted_connection=yes", fast_executemany=True)
    df.to_sql('<table_name>', con=engine, if_exists='append', method='multi', index=False, dtype=None)
except Exception as e:
    print(str(e))
</code></pre>
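<p>For the Access side, the only piece I'm reasonably sure of is the raw pyodbc connection below (the driver string is the standard Access ODBC driver name; the .accdb path and the table/column names are just placeholders). How to get from this to a <code>.to_sql()</code>-style insert is the part I'm missing:</p>
<pre><code>import pyodbc

conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\path\to\database.accdb;"
)
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
cursor.execute("INSERT INTO my_table (col1, col2) VALUES (?, ?)", ("a", 1))
conn.commit()
</code></pre>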
|
<python><pandas><ms-access><pyodbc>
|
2024-12-23 15:03:34
| 1
| 10,714
|
user1471980
|
79,303,440
| 5,627,817
|
How do I implement 2 different builds for my python project? One for uploading to pypi and another for local usage and mypycify optimizations
|
<p>I have built a basic python project. The project at its core is using a <code>toml</code> file to store all the configurations and build rules (see below):</p>
<pre><code>[project]
authors = [{ name = "name", email = "email@email.com" }]
name = "name"
version = "0.0.0"
description = "description"
readme = { file = "README.md", content-type = "text/markdown" }
license = { file = "LICENSE" }
keywords = ["key", "words"]
dependencies = ["requests", "filetype", "pypdf", "mypy", "types-requests"]
classifiers = [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.11",
"Operating System :: POSIX :: Linux",
"Operating System :: Unix",
]
[project.urls]
Repository = "https://www.myrepo.com/"
[project.scripts]
scripts = "package_name.main:main"
[build-system]
requires = ["setuptools>=42", "wheel", "mypy", "types-requests"]
build-backend = "setuptools.build_meta"
[tool.black]
line-length = 120
[tool.isort]
line_length = 120
profile = "black"
[tool.mypy]
ignore_missing_imports = true
[tool.commitizen]
name = "cz_conventional_commits"
tag_format = "$version"
version_files = ["./pyproject.toml", "./package_name/_version.py:__version__"]
version_scheme = "semver"
version_provider = "pep621"
update_changelog_on_bump = true
</code></pre>
<p>I found the code execution quite slow compared to something like Go, and I wanted some easy gains. So I decided to build my project using <code>mypycify</code>. For this, however, I need to locate all the Python files that I want to pass to <code>mypycify</code> using <code>glob</code>, therefore I need to have <code>setup.py</code> as part of my project build as well. The contents of <code>setup.py</code> are:</p>
<pre><code>#!/usr/bin/env python
from glob import glob
from mypyc.build import mypycify
from setuptools import find_packages, setup
paths: list[str] = glob("package_name/**.py")
paths += glob("package_name/src/**.py")
setup(
packages=find_packages(exclude=["*tests*"]),
ext_modules=mypycify(paths),
)
</code></pre>
<p>Running <code>python3 -m build</code> with these configurations generates these files:</p>
<ul>
<li><code>package_name-0.0.0-cp311-cp311-macosx_15_0_arm64.whl</code></li>
<li><code>package_name-0.0.0.tar.gz</code></li>
</ul>
<p>Here is my issue. I need my project to build a wheel suitable for uploading to pypi for any system, not strictly for my own (as we generated above). To do this, I need to remove or comment out <code>ext_modules=mypycify(paths),</code> from setup.py and I generate:</p>
<ul>
<li><code>package_name-0.0.0-py3-none-any.whl</code></li>
<li><code>package_name-0.0.0.tar.gz</code></li>
</ul>
<p>Excellent! <strong>But what solution can I apply such that I can generate both?</strong> One for uploading to pypi, and another for when devs want to build locally with <code>mypycify</code> optimizations.</p>
<p>The easiest solution seems to be to have 2 packages with different names. One unique <code>setup.py</code> for each where the only difference is one has <code>ext_modules=mypycify(paths),</code> and the other does not. But this seems silly in my case as that line in <code>setup.py</code> would be the only difference.</p>
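<p>The only alternative I've come up with so far is gating the mypyc step behind an environment variable in <code>setup.py</code>, roughly like this (<code>USE_MYPYC</code> is a name I made up), but I don't know whether that is considered acceptable packaging practice:</p>
<pre><code>#!/usr/bin/env python
import os
from glob import glob

from setuptools import find_packages, setup

ext_modules = []
# Only compile with mypyc when a developer opts in explicitly.
if os.environ.get("USE_MYPYC") == "1":
    from mypyc.build import mypycify

    paths = glob("package_name/**.py") + glob("package_name/src/**.py")
    ext_modules = mypycify(paths)

setup(
    packages=find_packages(exclude=["*tests*"]),
    ext_modules=ext_modules,
)
</code></pre>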
|
<python><setuptools><mypy><python-wheel>
|
2024-12-23 14:57:13
| 0
| 399
|
Death_by_Ch0colate
|
79,303,258
| 7,646,621
|
Where are Qt tools after installing PySide6 on macOS?
|
<p>After running <code>pip3 install pyside6</code> on Windows, we can see many Qt tools under PySide6.</p>
<p><a href="https://i.sstatic.net/kIPPUqb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kIPPUqb8.png" alt="enter image description here" /></a>
Where are these tools after installing PySide6 on macOS?</p>
<p>There is no result in the search.
<img src="https://ddgobkiprc33d.cloudfront.net/24a7479c-70b4-41d7-8dd1-4b45e17d01e9.png" alt="68576d42-03e4-4ad8-9e21-64e483668411-image.png" /></p>
<p>Please tell me where the tools are after installing PySide6 on macOS.</p>
<p>===</p>
<p>Before asking this question, I searched the official documentation:
<a href="https://doc.qt.io/qtforpython-6/tools/index.html#package-tools" rel="nofollow noreferrer">https://doc.qt.io/qtforpython-6/tools/index.html#package-tools</a></p>
<p>There is only an intro for Windows there.</p>
|
<python><qt><pyside>
|
2024-12-23 13:39:54
| 1
| 3,639
|
qg_java_17137
|
79,303,139
| 7,403,431
|
maptloblib, adjust_text and errorbar
|
<p>I use <code>adjust_text</code> to properly place text on the plot.</p>
<p>I don't want to have text placed on the connection lines between the points or the error bars.</p>
<p>My current best version for example code is:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import scienceplots # type: ignore
from adjustText import adjust_text
plt.style.use(["science", "notebook"])
fig, ax = plt.subplots()
x = np.linspace(0, 1, 50) * np.pi
y = np.sin(x)
errbar_container = ax.errorbar(x, y, xerr=0.25, yerr=0.05)
texts = []
for idx in range(0, len(x),2):
texts.append(
ax.annotate(
idx,
xy=(x[idx], y[idx]),
size="large",
zorder=100,
)
)
adjust_text(
texts,
x=x,
y=y,
expand=(2, 2),
objects=[
*[mpl.transforms.Bbox(seg) for seg in errbar_container.lines[-1][0].get_segments()],
*[mpl.transforms.Bbox(seg) for seg in errbar_container.lines[-1][1].get_segments()],
],
arrowprops=dict(arrowstyle="->", color="red"),
)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/TpNrbvuJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TpNrbvuJ.png" alt="enter image description here" /></a></p>
<p>I also tried to use <code>autoalign</code> and <code>only_move</code> to e.g. shift only in the <code>x</code>-direction as mentioned <a href="https://stackoverflow.com/questions/50713258/matplotlib-y-adjustment-of-texts-using-adjusttext">here</a>, but those options seem to be ignored as well.</p>
<p>How do I ensure that text is not placed on the error bars and the connection lines?</p>
<p>EDIT 1:</p>
<p>When I reran the above code I realized that the problem is actually caused by the plot style from <code>scienceplots</code>. (When I first tried it I used a Jupyter notebook where the style was already set.)</p>
<p>EDIT 2:</p>
<p>Link to GitHub issue: <a href="https://github.com/Phlya/adjustText/issues/190" rel="nofollow noreferrer">https://github.com/Phlya/adjustText/issues/190</a></p>
|
<python><matplotlib>
|
2024-12-23 12:50:28
| 0
| 1,962
|
Stefan
|
79,303,054
| 2,006,921
|
Python/pybind11 environment
|
<p>I would like to start a new project using Python, pybind11 and OpenGL on Windows. Can you give me some guidance as to how to set up an environment for that? On Linux that would be no problem, but on Windows I don't know what would be a good, convenient option. A long time ago I cobbled something together with MSYS2, using MinGW as the compiler. It worked back then, but now I am having some trouble getting it to work again, so I might as well use the opportunity to set up a new, stable, and well-proven environment from scratch.</p>
<p>Maybe my question boils down to: what is a good package management system for this purpose on Windows that includes support for C++ compilation of Python modules using pybind11?</p>
<p>Any help? Or is the question too broad to make sense?</p>
|
<python><windows><pybind11>
|
2024-12-23 12:21:23
| 0
| 1,105
|
zeus300
|
79,302,883
| 1,183,071
|
How to define local dependencies for deployment using Poetry?
|
<p>As I've been building my app, I separated various utilities into their own packages that I may publish later if they turn out to be useful. My project structure ended up as:</p>
<pre><code>root
[app]
..main.py
[lib]
..[lib_a]
..[lib_b]
..
pyproject.toml
</code></pre>
<p>I'm guessing the relative path is the problem, with the production environment being different (and out of my control, as I am using replit). What is the correct way to manage these local dependencies?</p>
<p><a href="https://i.sstatic.net/kEZ2dgMb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kEZ2dgMb.png" alt="workspace" /></a></p>
|
<python><deployment><python-poetry><replit><pyproject.toml>
|
2024-12-23 11:03:04
| 1
| 8,092
|
GoldenJoe
|
79,302,863
| 1,517,782
|
How to use python to create ORC file compressed with ZLIB compression level 9?
|
<p>I want to create an ORC file compressed with ZLIB compression level 9.
The thing is, when using pyarrow.orc, I can only choose between "Speed" and "Compression" mode,
and I can't control the compression level.</p>
<p>E.g.</p>
<pre><code>orc.write_table(table, '{0}_zlib.orc'.format(file_without_ext),
compression='ZLIB', compression_strategy='COMPRESSION')
</code></pre>
<p>Ideally I'm looking for a non existing <code>compression_level</code> parameter, any help would be appreciated.</p>
|
<python><compression><zlib><orc>
|
2024-12-23 10:57:00
| 1
| 1,862
|
Y.S
|
79,302,825
| 122,792
|
How do you insert a map-reduce into a Polars method chain?
|
<p>I'm doing a bunch of filters and other transform applications including a <code>group_by</code> on a polars data frame, the objective being to count the number of html tags in a single column per date per publisher. Here is the code:</p>
<pre><code>120 def contains_html3(mindate, parquet_file = default_file, fieldname = "text"):
121 """ checks if html tags are in field """
122
123
124 html_tags = [
125 "<html>", "</html>", "<head>", "</head>", "<title>", "</title>", "<meta>", "</meta>", "<link>", "</link>", "<style>",
126 "<body>", "</body>", "<header>", "</header>", "<footer>", "</footer>", "<nav>", "</nav>", "<main>", "</main>",
127 "<section>", "</section>", "<article>", "</article>", "<aside>", "</aside>", "<h1>", "</h1>", "<h2>", "</h2>",
128 "<h3>", "</h3>", "<h4>", "</h4>", "<h5>", "</h5>", "<h6>", "</h6>", "<p>", "</p>", "<ul>", "</ul>", "<ol>", "</ol>",
129 "<li>", "</li>", "<div>", "</div>", "<span>", "</span>", "<a>", "</a>", "<img>", "</img>", "<table>", "</table>",
130 "<thead>", "</thead>", "<tbody>", "</tbody>", "<tr>", "</tr>", "<td>", "</td>", "<th>", "</th>", "<form>", "</form>",
131 "<input>", "</input>", "<textarea>", "</textarea>", "<button>", "</button>", "<select>", "</select>", "<option>",
132 "<script>", "</script>", "<noscript>", "</noscript>", "<iframe>", "</iframe>", "<canvas>", "</canvas>", "<source>"]
133
134 gg = (pl.scan_parquet(parquet_file)
135 .cast({"date": pl.Date})
136 .select("publisher", "date", fieldname)
137 .drop_nulls()
138 .group_by("publisher", "date")
139 .agg(pl.col(fieldname).str.contains_any(html_tags).sum().alias(fieldname))
140 .filter(pl.col(fieldname) > 0)
141 .sort(fieldname, descending = True)).collect()
142
143 return gg
</code></pre>
<p>Here is example output for <code>fieldname = "text"</code>:</p>
<pre><code>Out[8]:
shape: (22_925, 3)
┌───────────────────────────┬────────────┬──────┐
│ publisher                 ┆ date       ┆ text │
│ ---                       ┆ ---        ┆ ---  │
│ str                       ┆ date       ┆ u64  │
╞═══════════════════════════╪════════════╪══════╡
│ Kronen Zeitung            ┆ 2024-11-20 ┆ 183  │
│ Kronen Zeitung            ┆ 2024-10-25 ┆ 180  │
│ Kronen Zeitung            ┆ 2024-11-14 ┆ 174  │
│ Kronen Zeitung            ┆ 2024-11-06 ┆ 172  │
│ Kronen Zeitung            ┆ 2024-10-31 ┆ 171  │
│ …                         ┆ …          ┆ …    │
│ The Faroe Islands Podcast ┆ 2020-03-31 ┆ 1    │
│ Sunday Standard           ┆ 2024-07-16 ┆ 1    │
│ Stabroek News             ┆ 2024-08-17 ┆ 1    │
│ CivilNet                  ┆ 2024-09-01 ┆ 1    │
│ The Star                  ┆ 2024-06-23 ┆ 1    │
└───────────────────────────┴────────────┴──────┘
</code></pre>
<p>The issue is that instead of just passing a single <code>fieldname = "text"</code> argument, I would like to pass a list (for example <code>["text", "text1", "text2", ...]</code>). The idea would be to run the bottom three lines in the chain for each element of the list. I could wrap the whole polars method chain in a for loop and then join the resulting data frames, but is there a better way? For example to insert a map, or foreach, or other such construct after the <code>group_by</code> clause, and then have polars add a new column for each field name without using a loop?</p>
<p>What's the best way of handling this?</p>
<h2>EDIT WITH REPRODUCIBLE CODE</h2>
<p>This will produce a dataframe <code>df</code> and a sample output <code>tc</code>, with all four columns text1 through text4, summed and sorted, but not using polars for the last step.</p>
<pre><code>#colorscheme orbital dark
import polars as pl
import datetime as dt
from math import sqrt
import random
random.seed(8472)
from functools import reduce
html_tags = [
"<html>", "</html>", "<head>", "</head>", "<title>", "</title>", "<meta>", "</meta>", "<link>", "</link>", "<style>", "</style>",
"<body>", "</body>", "<header>", "</header>", "<footer>", "</footer>", "<nav>", "</nav>", "<main>", "</main>",
"<section>", "</section>", "<article>", "</article>", "<aside>", "</aside>", "<h1>", "</h1>", "<h2>", "</h2>",
"<h3>", "</h3>", "<h4>", "</h4>", "<h5>", "</h5>", "<h6>", "</h6>", "<p>", "</p>", "<ul>", "</ul>", "<ol>", "</ol>",
"<li>", "</li>", "<div>", "</div>", "<span>", "</span>", "<a>", "</a>", "<img>", "</img>", "<table>", "</table>",
"<thead>", "</thead>", "<tbody>", "</tbody>", "<tr>", "</tr>", "<td>", "</td>", "<th>", "</th>", "<form>", "</form>",
"<input>", "</input>", "<textarea>", "</textarea>", "<button>", "</button>", "<select>", "</select>", "<option>", "</option>",
"<script>", "</script>", "<noscript>", "</noscript>", "<iframe>", "</iframe>", "<canvas>", "</canvas>", "<source>", "</source>"]
def makeword(alphaLength):
"""Make a dummy name if none provided."""
consonants = "bcdfghjklmnpqrstvwxyz"
vowels = "aeiou"
word = ''.join(random.choice(consonants if i % 2 == 0 else vowels)
for i in range(alphaLength))
return word
def makepara(nwords):
"""Make a paragraph of dummy text."""
words = [makeword(random.randint(3, 10)) for _ in range(nwords)]
tags = random.choices(html_tags, k=3)
parawords = random.choices(tags + words, k=nwords)
para = " ".join(parawords)
return para
def generate_df_with_tags(rows = 100, numdates = 10, num_publishers = 6):
publishers = [makeword(5) for _ in range(num_publishers)]
datesrange = pl.date_range(start := dt.datetime(2024, 2, 1),
end = start + dt.timedelta(days = numdates - 1),
eager = True)
dates = sorted(random.choices(datesrange, k = rows))
df = pl.DataFrame({
"publisher": random.choices(publishers, k = rows),
"date": dates,
"text1": [makepara(15) for _ in range(rows)],
"text2": [makepara(15) for _ in range(rows)],
"text3": [makepara(15) for _ in range(rows)],
"text4": [makepara(15) for _ in range(rows)]
})
return df
def contains_html_so(parquet_file, fieldname = "text"):
""" checks if html tags are in field """
gg = (pl.scan_parquet(parquet_file)
.select("publisher", "date", fieldname)
.drop_nulls()
.group_by("publisher", "date")
.agg(pl.col(fieldname).str.contains_any(html_tags).sum().alias(fieldname))
.filter(pl.col(fieldname) > 0)
.sort(fieldname, descending = True)).collect()
return gg
if __name__ == "__main__":
df = generate_df_with_tags(100)
df.write_parquet("/tmp/test.parquet")
tc = [contains_html_so("/tmp/test.parquet", fieldname = x) for x in ["text1", "text2", "text3", "text4"]]
tcr = (reduce(lambda x, y: x.join(y, how = "full", on = ["publisher", "date"], coalesce = True), tc)
.with_columns((
pl.col("text1").fill_null(0)
+ pl.col("text2").fill_null(0)
+ pl.col("text3").fill_null(0)
+ pl.col("text4").fill_null(0)).alias("sum")).sort("sum", descending = True))
print(tcr)
</code></pre>
<p>Desired output is below, but you'll see that in the bottom of the code I have run a <code>functools.reduce</code> on four dataframes, outside of the polars ecosystem, to join them, and it's basically this reduce that I want to put into the polars method chain somehow. [As an aside, my multiple <code>(textX).fill_null(0)</code> are also a bit clumsy but I'll leave that for a separate question]</p>
<pre><code>In [59]: %run so_question.py
shape: (45, 7)
┌───────────┬────────────┬───────┬───────┬───────┬───────┬─────┐
│ publisher ┆ date       ┆ text1 ┆ text2 ┆ text3 ┆ text4 ┆ sum │
│ ---       ┆ ---        ┆ ---   ┆ ---   ┆ ---   ┆ ---   ┆ --- │
│ str       ┆ date       ┆ u64   ┆ u64   ┆ u64   ┆ u64   ┆ u64 │
╞═══════════╪════════════╪═══════╪═══════╪═══════╪═══════╪═════╡
│ desob     ┆ 2024-02-10 ┆ 5     ┆ 5     ┆ 5     ┆ 5     ┆ 20  │
│ qopir     ┆ 2024-02-03 ┆ 5     ┆ 5     ┆ 5     ┆ 4     ┆ 19  │
│ jerag     ┆ 2024-02-04 ┆ 5     ┆ 5     ┆ 5     ┆ 4     ┆ 19  │
│ jerag     ┆ 2024-02-07 ┆ 5     ┆ 4     ┆ 5     ┆ 5     ┆ 19  │
│ wopav     ┆ 2024-02-07 ┆ 4     ┆ 5     ┆ 3     ┆ 5     ┆ 17  │
│ …         ┆ …          ┆ …     ┆ …     ┆ …     ┆ …     ┆ …   │
│ jerag     ┆ 2024-02-06 ┆ 1     ┆ null  ┆ 1     ┆ 1     ┆ 3   │
│ desob     ┆ 2024-02-05 ┆ 1     ┆ 1     ┆ null  ┆ 1     ┆ 3   │
│ cufeg     ┆ 2024-02-04 ┆ 1     ┆ 1     ┆ 1     ┆ null  ┆ 3   │
│ cufeg     ┆ 2024-02-05 ┆ 1     ┆ null  ┆ 1     ┆ 1     ┆ 3   │
│ wopav     ┆ 2024-02-06 ┆ null  ┆ 1     ┆ 1     ┆ 1     ┆ 3   │
└───────────┴────────────┴───────┴───────┴───────┴───────┴─────┘
</code></pre>
<p>So basically, tag counts by columns <code>["text1", "text2", "text3", "text4"]</code>, then summed ignoring nulls, and sorted descending on the sum. Join should be on publisher and date, outer (= "full"), and coalescing.</p>
|
<python><dataframe><python-polars>
|
2024-12-23 10:40:59
| 2
| 25,088
|
Thomas Browne
|
79,302,793
| 17,795,398
|
How to reduce unnecessary white spaces in matplotlib subplot2grid?
|
<p>I'm creating some plots with histograms and a color bar, but I'm struggling with the huge white gaps between subplots and I don't know how to reduce them. This is an example taken from a more complex piece of code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
ihist = np.array([
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
])
vhist = np.array([
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
])
mins = [1014.3484983803353, -168.92777938399416]
maxs = [5420.578637599565, 1229.7294914536292]
labels = ['x ($\\AA$)', 'y ($\\AA$)']
fig = plt.figure()
# SIAs
iax = plt.subplot2grid((9, 2), (1, 0), rowspan=8)
iax.set_xlabel(labels[0])
iax.set_xlim(mins[0], maxs[0])
iax.set_ylabel(labels[1])
iax.set_ylim(mins[1], maxs[1])
iax.imshow(ihist, origin="lower", extent=[mins[0], maxs[0], mins[1], maxs[1]])
# Vacancies
vax = plt.subplot2grid((9, 2), (1, 1), rowspan=8)
vax.set_xlabel(labels[0])
vax.set_xlim(mins[0], maxs[0])
vax.set_ylabel(labels[1])
vax.set_ylim(mins[1], maxs[1])
vax.yaxis.set_label_position("right")
vax.yaxis.tick_right()
vax_img = vax.imshow(vhist, origin="lower", extent=[mins[0], maxs[0], mins[1], maxs[1]])
# Color bar
cax = plt.subplot2grid((9, 2), (0, 0), colspan=2)
cbar = fig.colorbar(vax_img, cax=cax, orientation="horizontal")
cbar.set_label("Counts per ion")
cbar.ax.xaxis.set_ticks_position("top")
cbar.ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.show()
</code></pre>
<p>And this is the output:</p>
<p><a href="https://i.sstatic.net/wjvsBkbY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wjvsBkbY.png" alt="enter image description here" /></a></p>
<p>as you can see there are unnecessary white spaces between the color bar and the histograms, and between the histograms and the bottom of the figure. I want to remove them.</p>
<p>I think they are caused by things like <code>plt.subplot2grid((9, 2), (1, 0), rowspan=8)</code> but I did it that way to reduce the vertical size of the color bar.</p>
<p>Note: in the real code, limits are obtained on the fly, so the histograms might or might not have the same height and width.</p>
|
<python><matplotlib>
|
2024-12-23 10:22:25
| 2
| 472
|
Abel Gutiérrez
|
79,302,693
| 7,218,871
|
Do Not Print Plot
|
<p>I have a requirement to create multiple plots; the code for one such plot is here.</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from io import BytesIO
df = pd.DataFrame({'Label': ['a', 'b', 'c', 'd', 'e', 'f'],
'Val': [np.random.rand() for _ in range(6)],
})
sns.set_style('whitegrid')
plt.figure(figsize=(6, 4))
sns.barplot(df, x='Label', y='Val', color='steelblue')
plt.xlabel('Bin', fontsize=9)
plt.ylabel('Value', fontsize=9)
plt.title('Trend', fontsize=11)
plt.tight_layout()
all_plots = dict()
buffer = BytesIO()
plt.savefig(buffer, format='png', bbox_inches='tight')
buffer.seek(0)
all_plots['p1'] = buffer
</code></pre>
<p>I do not want to display these plots when running the code in Jupyter, but rather save them in another 'object' which I can use later to write the images to an Excel file.</p>
<p>However, running the above code displays the plot. How do I turn off this "printing"?</p>
|
<python><seaborn>
|
2024-12-23 09:35:42
| 2
| 620
|
Abhishek Jain
|
79,302,589
| 6,234,139
|
Passing on a variable created in a class to a function in that class in which the variable is defined as a global variable
|
<p>Initially, before using a class, I could easily pass on a variable <code>a</code> to the function <code>fit_func_simple_a_fixed()</code> as follows (<code>a</code> is defined as a global variable in the function):</p>
<pre><code>import pandas as pd
import math
from scipy.optimize import curve_fit
import numpy as np
time = [i for i in range(0, 9)]
cumulative_rate = [0, 0.14, 0.28, 0.36, 0.43, 0.50, 0.57, 0.60, 0.62]
df = pd.DataFrame({'time': time,
'cumulative_rate': cumulative_rate})
def fit_func_simple_a_fixed(x, b):
global a
x2 = -b*x
def fun(input):
return(1-math.exp(input))
x3 = np.array(list(map(fun, x2)))
return a*x3
# Code before using a class
a = 0.8
x = np.array(df.time)
y = np.array(df.cumulative_rate)
params = curve_fit(fit_func_simple_a_fixed, x, y)
print(params)
</code></pre>
<p>This worked well. However, when I change the part after the <code># Code before ...</code> to:</p>
<pre><code>class Curve_Fit():
def __init__(self, data):
self.data = data
self.param = self.fit_curve()
def fit_curve(self):
a = 0.8
x = np.array(self.data.time)
y = np.array(self.data.cumulative_rate)
params = curve_fit(fit_func_simple_a_fixed, x, y)
return params
Curve_Fit(data=df)
</code></pre>
<p>Unsurprisingly, I get the error message <code>NameError: name 'a' is not defined</code>. I can't determine <code>a</code> outside the class, as it is computed in the class itself for different groups. Is there a way I can make this work (maybe even by somehow passing <code>a</code> to <code>curve_fit(fit_func_simple_a_fixed, x, y)</code>)?</p>
|
<python>
|
2024-12-23 08:42:38
| 1
| 701
|
koteletje
|
79,302,542
| 2,695,990
|
How to process and update (change attribute, add node, etc) XML file with a DOCTYPE in Python, without removing nor altering the "DOCTYPE"
|
<p>I have a couple of XML files whose nodes/attributes I would like to process and update. I have a couple of example scripts which can do that, but all of them change the XML structure a bit and remove or alter the DOCTYPE.
A simplified example of the XML is:</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE note:note SYSTEM "note.dtd">
<note:note xmlns:note="http://example.com/note">
<to checksum="abc">Tove</to>
</note:note>
</code></pre>
<p>the DTD note.dtd is:</p>
<pre><code><!ELEMENT note:note (to)>
<!ELEMENT to (#PCDATA)>
<!ATTLIST to
checksum CDATA #REQUIRED
>
</code></pre>
<p>Example python script which updates argument value is:</p>
<pre class="lang-py prettyprint-override"><code> @staticmethod
def replace_checksum_in_index_xml(infile, checksum_new, outfile):
from lxml import etree
parser = etree.XMLParser(remove_blank_text=True)
with open(infile, "rb") as f:
tree = etree.parse(f, parser)
for elem in tree.xpath("//to[@checksum]"):
elem.set("checksum", checksum_new)
with open(outfile, "wb") as f:
tree.write(f, pretty_print=True, xml_declaration=True, encoding="UTF-8", doctype=tree.docinfo.doctype)
</code></pre>
<p>I call the script like that:</p>
<pre class="lang-py prettyprint-override"><code> infile = "Input.xml"
check_sum = "aaabbb"
outfile = "Output.xml"
Hashes.replace_checksum_in_index_xml(infile, check_sum, outfile)
</code></pre>
<p>And the result xml file is:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE note SYSTEM "note.dtd">
<note:note xmlns:note="http://example.com/note">
<to checksum="aaabbb">Tove</to>
</note:note>
</code></pre>
<p>The output DOCTYPE has changed: instead of<br />
DOCTYPE <strong>note:note</strong><br />
there is now<br />
DOCTYPE <strong>note</strong>.<br />
I would like to keep the DOCTYPE exactly as it was.
How can I achieve the desired result in Python?</p>
|
<python><xml><xml-parsing><dtd><doctype>
|
2024-12-23 08:17:34
| 2
| 3,174
|
fascynacja
|
79,302,514
| 17,174,267
|
inspect.signature: invalid method signature for lambda in class member
|
<p>The following code throws a ValueError ('invalid method signature')</p>
<pre class="lang-py prettyprint-override"><code>import inspect
class Foo:
boo = lambda: 'baa'
print(inspect.signature(Foo().boo))
</code></pre>
<p>Why? Changing it to <code>boo = lambda x: 'baa'</code> does not throw an error. It seems that Python automatically binds the object (class instance) to the lambda function, but this seems strange to me (so lambdas are effectively the same as normal defs?).</p>
<p>When we run the following</p>
<pre class="lang-py prettyprint-override"><code>import inspect
class Foo:
boo = lambda x: x
print(Foo().boo())
</code></pre>
<p>We get <code><__main__.Foo object at 0x7dd9a4813dd0></code>, so it is indeed just binding the object to the lambda.</p>
<p>But this does not explain why <code>inspect.signature</code> fails.</p>
|
<python><python-inspect>
|
2024-12-23 08:07:27
| 4
| 431
|
pqzpkaot
|
79,302,301
| 14,149,761
|
Regex: get all matches in between two set substrings
|
<p>If I have a sentence similar to the following:</p>
<p><code>aabcadeaaabbacababe</code></p>
<p>and I want to get all of the substrings that start with <code>c</code> and end with <code>e</code>, so that the matches are:</p>
<p><code>['ad', 'abab']</code></p>
<p>Is it possible to achieve this? I have tried using lookarounds and <code>c(.*)e</code>, but instead of returning separate entries in a list, everything is combined into one substring that spans from the first <code>c</code> to the last <code>e</code>.</p>
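<p>Concretely, the greedy attempt that merges everything looks like this:</p>
<pre><code>import re

s = "aabcadeaaabbacababe"
print(re.findall(r"c(.*)e", s))  # ['adeaaabbacabab'] -- one big match instead of ['ad', 'abab']
</code></pre>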
|
<python><regex>
|
2024-12-23 06:18:12
| 1
| 303
|
skoleosho97
|
79,302,105
| 1,512,250
|
Query length in Custom Search API
|
<p>I'm sending requests to Custom Search API like this:</p>
<pre class="lang-py prettyprint-override"><code>GOOGLE_SEARCH_URL=f'https://www.googleapis.com/customsearch/v1?key={GOOGLE_SEARCH_API_KEY}&cx={SEARCH_ENGINE_ID}&q={encoded_text}'
</code></pre>
<p>The text in encoded_text could be very large sometimes, and I received a 413 error.
What is the length limit for this parameter?</p>
|
<python><google-custom-search>
|
2024-12-23 03:58:35
| 1
| 3,149
|
Rikki Tikki Tavi
|
79,302,087
| 8,947,822
|
GCP Cloudbuild Git Submodule Python Installation Failing in Docker Build
|
<p>I'm trying to use GCP Cloudbuild to deploy a python project managed with UV. The project has a private git submodule that I'm cloning with an ssh setup in cloudbuild yaml (taken from <a href="https://cloud.google.com/build/docs/access-github-from-build#configure_the_build" rel="nofollow noreferrer">this example</a>)</p>
<p>The build seemingly clones the repo fine, installs the top level dependencies and then fails on <code>uv pip install -e ./lib/db-client</code>.</p>
<pre><code>...
Step #1: Submodule 'lib/db-client' (git@github.com:path/db-client.git) registered for path 'lib/db-client'
Step #1: Cloning into '/workspace/Service/lib/db-client'...
Step #1: Submodule path 'lib/db-client': checked out '28f72f4...3443'
... goes on to install the dependencies ...
error: /app/lib/db-client does not appear to be a Python project, as neither `pyproject.toml` nor `setup.py` are present in the directory
</code></pre>
<p>I have successfully run my service locally from scratch with:</p>
<pre><code>git clone git@github.com:path/service.git
cd service
uv sync
uv pip install -e ./lib/db-client
uv run uvicorn app:app --host 0.0.0.0 --port 8080 --reload
</code></pre>
<p>My cloudbuild is the following</p>
<pre><code> steps:
- name: 'gcr.io/cloud-builders/git'
secretEnv: ['SSH_KEY']
entrypoint: 'bash'
args:
- -c
- |
echo "$$SSH_KEY" >> /root/.ssh/id_rsa
chmod 400 /root/.ssh/id_rsa
cp known_hosts.github /root/.ssh/known_hosts
volumes:
- name: 'ssh'
path: /root/.ssh
# Clone the repository
- name: 'gcr.io/cloud-builders/git'
args:
- clone
- --recurse-submodules
- git@github.com:path/service.git
volumes:
- name: 'ssh'
path: /root/.ssh
# Initialize and update submodules
- name: 'gcr.io/cloud-builders/git'
entrypoint: 'bash'
args:
- -c
- |
cd EmailService
git submodule init
git submodule update --init --recursive
volumes:
- name: 'ssh'
path: /root/.ssh
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/service', '.']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/$PROJECT_ID/service']
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
entrypoint: gcloud
args: ['run', 'deploy', 'service', '--image', 'gcr.io/$PROJECT_ID/service', '--region', 'us-central1']
images:
- gcr.io/$PROJECT_ID/service
availableSecrets:
secretManager:
- versionName: projects/$PROJECT_ID/secrets/github-ssh-key/versions/latest
env: 'SSH_KEY'
options:
logging: CLOUD_LOGGING_ONLY
</code></pre>
<p>Any help on why it's not correctly handling the submodule would be super helpful.</p>
|
<python><docker><git-submodules><google-cloud-build>
|
2024-12-23 03:44:10
| 1
| 423
|
Connor
|
79,302,073
| 825,227
|
Dealing with `StopIteration` return from a next() call in Python
|
<p>I'm using the below to skip a group of records when a certain condition is met:</p>
<pre><code>if (condition met):
...
[next(it) for x in range(19)]
</code></pre>
<p>Where <code>it</code> is an <code>itertuples</code> object created to speed up looping through a large dataframe (yes, the loop is necessary).</p>
<pre><code>it = df.itertuples()
for row in it:
...
</code></pre>
<p>What's the idiomatic way of dealing with a <code>StopIteration</code> return from the <code>next</code> call (presumably due to reaching the end of the dataframe)?</p>
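<p>For context, the pattern I'm weighing against the list comprehension is the consume-style helper from the itertools recipes, which stops quietly when the iterator runs out (the dataframe and condition below are made up):</p>
<pre class="lang-py prettyprint-override"><code>from itertools import islice

import pandas as pd

df = pd.DataFrame({"a": range(100)})  # illustrative frame

def consume(iterator, n):
    # Advance the iterator by n items; never lets StopIteration
    # escape if the iterator runs out early.
    next(islice(iterator, n, n), None)

it = df.itertuples()
for row in it:
    if row.a % 10 == 0:   # stand-in for the real condition
        consume(it, 19)
</code></pre>
<p>The alternative is wrapping the skipping loop in <code>try/except StopIteration</code> and breaking out of the outer loop, which is more explicit but noisier.</p>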
|
<python><python-itertools>
|
2024-12-23 03:33:22
| 2
| 1,702
|
Chris
|
79,302,069
| 328,347
|
How to save and resume state for a chess exploration script
|
<p>I have a small Python script that uses the chess library. It simply iterates over all possible games starting from the initial set of legal moves. It ends a branch when the maximum number of plies is reached or when a checkmate is found, and it logs the game if it ended in checkmate.</p>
<pre><code>import chess
import chess.pgn
import logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(message)s',
filename='chess_game_exploration.log', # Log to this file
filemode='w' # Overwrite the log file every time the script runs
)
logger = logging.getLogger()
checkmate_count = 0
total_games_count = 0
def save_game(board, current_moves):
with open("checkmates.pgn", "a") as pgn_file:
game = chess.pgn.Game.from_board(board)
game.headers["Event"] = "Checkmate Search"
game.headers["PlyCount"] = str(len(current_moves))
pgn_file.write(str(game) + "\n\n")
def explore_games(board, current_moves, max_plies=5):
global checkmate_count, total_games_count
# Check if we hit checkmate
if board.is_checkmate():
checkmate_count += 1
total_games_count += 1
print("Check Mate Found")
save_game(board, current_moves)
return
total_games_count += 1
if total_games_count % 100000 == 0:
logger.info(f"Total games: {total_games_count} Total checkmates: {checkmate_count}")
# Stop if the maximum plies (moves) are reached
if len(current_moves) >= max_plies:
return
# Recursively explore each legal move
for move in board.legal_moves:
board.push(move)
explore_games(board.copy(), current_moves + [move], max_plies)
board.pop() # Undo the move for backtracking
logger.info("Starting chess game exploration.")
board = chess.Board()
explore_games(board, [], max_plies=5) # Set a reasonable number of plies (maximum moves)
logger.info(f"Total checkmates found: {checkmate_count}")
logger.info(f"Total full games computed: {total_games_count}")
</code></pre>
<p>My question is: how can I save out some kind of state that can later be reloaded? This would be a long-running process, so it's likely the machine will be rebooted, or the script will need to be moved or modified and restarted. In those cases, I would want the search to resume where it left off rather than starting over.</p>
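<p>One direction I'm considering is rewriting the recursion as an explicit depth-first stack, since the stack of pending move lists plus the counters is plain data that can be pickled periodically and reloaded on restart. This is a rough sketch with illustrative names (the function and checkpoint file are not from my script):</p>
<pre class="lang-py prettyprint-override"><code>import os
import pickle

import chess

CHECKPOINT = "explore_checkpoint.pkl"  # illustrative file name

def explore_iterative(max_plies=5):
    # Each stack entry is the list of UCI moves leading to a position
    # that still needs to be expanded.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            stack, checkmates, total = pickle.load(f)
    else:
        stack, checkmates, total = [[]], 0, 0

    while stack:
        moves = stack.pop()
        board = chess.Board()
        for uci in moves:
            board.push_uci(uci)

        total += 1
        if board.is_checkmate():
            checkmates += 1
        elif len(moves) < max_plies:
            for move in board.legal_moves:
                stack.append(moves + [move.uci()])

        if total % 100000 == 0:
            # Periodically persist everything needed to resume.
            with open(CHECKPOINT, "wb") as f:
                pickle.dump((stack, checkmates, total), f)

    return checkmates, total
</code></pre>
<p>Storing moves as UCI strings keeps each checkpoint small and picklable; the trade-off is replaying the move list to rebuild each board, which is cheap at 5 plies.</p>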
|
<python><recursion><chess>
|
2024-12-23 03:30:15
| 0
| 3,532
|
Roge
|
79,301,958
| 26,579,940
|
Changing the size of TransformedBbox in matplotlib
|
<p>Zooming is done by changing xlim and ylim, but this is very slow.</p>
<p>Even if xlim and ylim are changed, restore_region works on the size of the axes, so the change does not affect previously created backgrounds.</p>
<p>It is possible to shift the xy coordinates used by restore_region or by the saved background. However, I don't know how to zoom the saved background or limit the area when creating it.</p>
<p>Is this possible: limiting the xlim and ylim of the bbox when creating a background for blitting?</p>
<p>Since figure.canvas.draw() is very slow, I think I could use matplotlib much more responsively if this were possible.</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.axes import Axes
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.transforms import TransformedBbox
fig, ax = plt.subplots(1, 1)
cv: FigureCanvasAgg = fig.canvas
ax: Axes
ax.plot([1, 2, 3])
bx: TransformedBbox = ax.bbox
xmin, xmax = ax.get_xlim()
ymin, ymax = ax.get_ylim()
bg = []
a = [True]
def lim(*args):
if not bg: bg.append(cv.copy_from_bbox(bx))
bg[0].set_x(0)
bg[0].set_y(0)
if a[0]:
ax.set_xlim(0.5, 1.5)
ax.set_ylim(1, 2.5)
a[0] = False
else:
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
a[0] = True
cv.draw()
if not a[0]: cv.restore_region(bg[0])
fig.canvas.mpl_connect('button_press_event', lim)
plt.show()
</code></pre>
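<p>One thing I'm unsure about (a rough sketch, not verified): since copy_from_bbox takes a bbox in display coordinates, maybe a TransformedBbox built from a data-space Bbox and ax.transData would let me capture only the region of interest instead of the whole axes. Using the <code>ax</code> and <code>cv</code> names from the snippet above, with illustrative extents:</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib.transforms import Bbox, TransformedBbox

# Region of interest in data coordinates (illustrative values)
data_bbox = Bbox.from_extents(0.5, 1.0, 1.5, 2.5)
display_bbox = TransformedBbox(data_bbox, ax.transData)

cv.draw()
region = cv.copy_from_bbox(display_bbox)  # background limited to that area
# ... later ...
cv.restore_region(region)
cv.blit(display_bbox)
</code></pre>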
|
<python><matplotlib>
|
2024-12-23 01:22:05
| 0
| 404
|
white.seolpyo.com
|
79,301,659
| 3,952,885
|
AssertionError: 'detection' in self.models Using InsightFace on Linux Docker Container
|
<p>I'm developing a Python application that uses Flask, running in a Docker container on a Linux server with NGINX. The application works perfectly on my local machine, but when I deploy it on the server, I encounter the following error:</p>
<pre><code>ERROR:app:Exception: Traceback (most recent call last):
  File "/app/app.py", line 32, in analyze_face
    analyzer = FaceFeatureAnalyzer()  # Create an instance here
  File "/app/face_feature_analyzer/main_face_analyzer.py", line 43, in __init__
    self.face_app = FaceAnalysis(name='antelopev2', root=self.model_root)
  File "/usr/local/lib/python3.9/site-packages/insightface/app/face_analysis.py", line 43, in __init__
    assert 'detection' in self.models
AssertionError
</code></pre>
<p>Here is the code</p>
<pre><code>class FaceFeatureAnalyzer:
def __init__(self):
self.model_root = "/root/.insightface"
self.model_path = os.path.join(self.model_root, "models/antelopev2")
self.zip_path = os.path.join(self.model_root, "models/antelopev2.zip")
self.model_url = "https://github.com/deepinsight/insightface/releases/download/v0.7/antelopev2.zip"
# Initialize FaceAnalysis
self.face_app = FaceAnalysis(name='antelopev2', root=self.model_root)
self.face_app.prepare(ctx_id=0, det_size=(640, 640))
</code></pre>
<p>I have also tried to download it in same directory but that attempt also results in same error.. here is what i additionally tried</p>
<pre><code>class FaceFeatureAnalyzer:
def __init__(self):
# Initialize the InsightFace model
self.face_app = FaceAnalysis(name='antelopev2')
self.face_app.prepare(ctx_id=0, det_size=(640, 640))
logger.info("Initialized FaceAnalysis with model 'antelopev2'.")
</code></pre>
<p><strong>What I've Observed and Tried:</strong>
Model download and extraction logs: during startup, the model antelopev2 is downloaded and extracted to /root/.insightface/models/antelopev2. The logs confirm this:</p>
<pre><code>Download completed.
Extracting /root/.insightface/models/antelopev2.zip to /root/.insightface/models/antelopev2...
Extraction completed.
</code></pre>
<p>However, when checking the directory, it appears empty or the program cannot detect the models.</p>
<p><strong>Manually Adding the Models</strong>
Previously, manually downloading the antelopev2 model and placing it in /root/.insightface/models/antelopev2 resolved the issue. I also set appropriate permissions using:</p>
<pre><code>chmod -R 755 /root/.insightface/models/antelopev2
</code></pre>
<p>After making updates to the codebase and rebuilding the Docker container, the issue reappeared.</p>
<p><strong>Directory Contents:</strong>
The following files exist in /root/.insightface/models/antelopev2:</p>
<pre><code>1k3d68.onnx
2d106det.onnx
genderage.onnx
glintr100.onnx
scrfd_10g_bnkps.onnx
</code></pre>
<p>These are the expected .onnx files for antelopev2.</p>
<p>The application works locally without any errors. The issue only arises in the Docker container on the Linux server.</p>
<p>Even though the files are present and permissions are set correctly, the application seems unable to detect them. How can I debug or fix this issue?</p>
<p>Dockerfile</p>
<pre><code>FROM python:3.9-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Set the working directory in the container
WORKDIR /app
# Install system dependencies including libgl1-mesa-glx and others
RUN apt-get update && apt-get install -y --no-install-recommends \
libgl1-mesa-glx \
libglib2.0-0 \
g++ \
build-essential \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Copy the requirements file into the container
COPY requirements.txt /app/
# Install Python dependencies
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code into the container
COPY . /app
EXPOSE 7002
# Run the Flask application
CMD ["python", "app.py"]
</code></pre>
<p>Docker-compose.yml</p>
<pre><code>version: '3.8'
services:
flask-app:
build:
context: ./backend
container_name: flask-app
ports:
- "7000:7000"
environment:
- FLASK_RUN_HOST=0.0.0.0
- FLASK_RUN_PORT=7000
volumes:
- ./backend:/app
depends_on:
- nginx
nginx:
image: nginx:latest
container_name: nginx
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx:/etc/nginx/sites-enabled
- ./nginx-certificates:/etc/letsencrypt
</code></pre>
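<p>For debugging, I could log exactly what the container sees under the model root, including whether the zip extracted into a nested antelopev2/antelopev2 folder, which might explain the loader finding nothing. This is an illustrative snippet to run inside the container (e.g. via docker exec):</p>
<pre class="lang-py prettyprint-override"><code>import os

model_dir = "/root/.insightface/models/antelopev2"
for root, dirs, files in os.walk(model_dir):
    print(root, files)
</code></pre>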
|
<python><docker><insightface>
|
2024-12-22 20:12:25
| 1
| 2,762
|
Amir Dora.
|