| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,446,618
| 1,442,731
|
Update a prepackaged Protobuf message from **args
|
<p>I have a Python function that sends pre-canned Protobuf messages based on a command id that's passed in. I also need to be able to modify fields in that message based on <code>**args</code> that are passed into the function:</p>
<pre><code>def sendCmd(self, id, **args):
    cmdType = mod_com.commandToMessage[id]  # Find command template
    cmd = cmdType()                         # instantiate command message
    # Apply parameters
    updateArgs(cmd, args)
    ...

def updateArgs(self, cmd:message, args):
    for key in args:
        if type(cmd) == google._upb._message.RepeatedCompositeContainer:
            if (len(cmd) == 0):
                obj = cmd.add()
                self.updateArgs(obj, args, level=level + 1)
        elif args[key] == None:
            pass
        elif isinstance(args[key], message):
            pass
        else:
            if isinstance(args[key], dict):
                self.updateArgs(getattr(cmd, key), args[key], level=level + 1)
            else:
                setattr(cmd, key, args[key])
</code></pre>
<p>This works great if I call <code>sendCmd</code> like</p>
<pre><code>mod_com.sendCmd(study_pb2.STUDY_LIST, type='xxx')
</code></pre>
<p>The protobuf message type is passed in, translated to a message template, and then instantiated. <code>updateArgs()</code> then walks through the message, replacing fields based on the argument names (I probably could have used a dict for this, but I'm exploring Python and <code>**args</code> looked interesting).</p>
<p>The problem is that some of my canned messages contain submessages. To get around this I've tried several possibilities; the most recent was to create the updated submessage, attempt to pass it in via the args, and then plug it into the base message. Unfortunately Protobuf won't let me simply overwrite the submessage; I need to use <code>CopyFrom</code>. I tried this in <code>updateArgs()</code>:</p>
<pre><code>attr=getattr(cmd, key)
if isinstance(attr, google.protobuf.message.Message):
attr.CopyFrom(args[key])
else:
setattr(cmd, key, args[key])
</code></pre>
<p>But now, when I hit my Timestamp submessage, I get the error:</p>
<pre><code>TypeError: Parameter to MergeFrom() must be instance of same class: expected <class 'timestamp_pb2.Timestamp'> got <class 'google._upb._message.MessageMeta'>.
</code></pre>
<p>Protobuf seems to have some interesting ideas on how to create object hierarchies under the covers.</p>
<p>Any ideas where I'm going wrong here? Any better ways to do this?</p>
<p><code>.Proto</code> file:</p>
<pre class="lang-none prettyprint-override"><code>message StudyStart {
string name = 1;
string filename = 2;
bool compressed = 3;
int32 period = 4;
int32 cycles = 5;
google.protobuf.Timestamp timeToStart = 6; // Time to start
</code></pre>
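<p><em>Note:</em> the <code>TypeError</code> above says <code>MergeFrom()</code> received <code>MessageMeta</code>, i.e. the <code>Timestamp</code> class itself rather than an instance of it. A minimal sketch of that distinction (assuming the standard <code>timestamp_pb2</code> well-known type, and a <code>StudyStart</code> class hypothetically generated from the <code>.proto</code> above):</p>
<pre class="lang-py prettyprint-override"><code>from google.protobuf import timestamp_pb2

ts = timestamp_pb2.Timestamp()   # an instance
ts.GetCurrentTime()

cmd = study_pb2.StudyStart()     # hypothetical: generated from the .proto above
cmd.timeToStart.CopyFrom(ts)     # OK: an instance of the same message class

# cmd.timeToStart.CopyFrom(timestamp_pb2.Timestamp)  # fails: this is the class, not an instance
</code></pre>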
|
<python><protocol-buffers>
|
2025-02-17 21:23:51
| 0
| 6,227
|
wdtj
|
79,446,466
| 3,936,496
|
pandas: remap two new columns based on another column
|
<p>I have a pandas table where I want to create a new column and fill it with data based on another column's values. I also want to know if the new column's value was updated.
So I have a dictionary like this:</p>
<pre><code>update_values = {"Groub_A": {"aadff2": "Mark", "aasd12": "Otto", "asdd2": "Jhon"},"Groub_B": {"aadfaa": "Josh", "aa1113": "Math", "967323sd": "Marek"}}
</code></pre>
<p>And I want my table to look like this:</p>
<pre><code>Column_1 | Column_new_2 | Column_new_3
aadff2 | Mark | Groub_A
aadff2 | Mark | Groub_A
aasd12 | Otto | Groub_A
asdd2 | Jhon | Groub_A
967323sd | Marek | Groub_B
967323sd | Marek | Groub_B
aa1113 | Math | Groub_B
</code></pre>
<p>So far I have just copied Column_1 and used <code>df.replace("Column_new_2":update_values["Groub_A"])</code>, and the same thing with Groub_B, but then I don't know how to make Column_new_3.
There must be an easy solution, but I just can't figure it out.</p>
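<p>For reference, a minimal sketch of one possible approach (not necessarily the best one): flatten the nested dictionary into two lookup maps and use <code>Series.map</code> for both new columns:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

update_values = {"Groub_A": {"aadff2": "Mark", "aasd12": "Otto", "asdd2": "Jhon"},
                 "Groub_B": {"aadfaa": "Josh", "aa1113": "Math", "967323sd": "Marek"}}

# code -> name and code -> group lookup tables
name_map = {code: name for group in update_values.values() for code, name in group.items()}
group_map = {code: group for group, codes in update_values.items() for code in codes}

df = pd.DataFrame({"Column_1": ["aadff2", "aadff2", "aasd12", "asdd2", "967323sd", "967323sd", "aa1113"]})
df["Column_new_2"] = df["Column_1"].map(name_map)
df["Column_new_3"] = df["Column_1"].map(group_map)
</code></pre>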
|
<python><pandas>
|
2025-02-17 19:54:52
| 3
| 401
|
pinq-
|
79,446,445
| 7,519,434
|
Selection not working after changing Text widget focus
|
<p>I have an application with two Text widgets in two Frames. Clicking a link (formatted text with a tag bind) in one Text widget (<code>Results.text</code>) should select text in another (<code>Editor.text</code>).</p>
<p>When I click once on the link, the <code>see</code> part works but <code>mark_set</code> and applying the selection do not.</p>
<p>If I double click the link it works as intended, but I'd like it to work with a single click. I have tried returning <code>"break"</code> from the event handler and using <code>focus_set</code>, <code>focus_force</code>, <code>update_idletasks</code> and <code>update</code> in the <code>goto</code> method with no success.</p>
<p>I'm using Python 3.13, Tcl/Tk 8.6.15 and Windows 10.</p>
<pre><code>import math
import tkinter as tk
from tkinter.font import Font
from typing import Any, Callable
LIPSUM = """\
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque et lectus lacinia enim dictum posuere. Quisque nunc tellus, luctus quis eleifend a, placerat maximus ipsum.
Nam egestas, nisi in varius tempus, enim tortor consectetur eros, ac pretium urna ipsum at orci. Praesent ut dui eu lectus efficitur ultrices. In vehicula leo faucibus tempor posuere.
Nam et est in nisi iaculis rhoncus eu et nibh. Aliquam eget risus lacus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus."""
class Editor(tk.Frame):
def __init__(self, *args: Any, **kwargs: Any):
tk.Frame.__init__(self, *args, **kwargs)
self.grid_rowconfigure(1, weight=1)
self.grid_columnconfigure(1, weight=1)
self.line_num = tk.Text(
self,
width=3,
yscrollcommand=self.scroll_update,
cursor="arrow",
)
self.line_num.tag_configure("rjust", justify="right")
self.line_num.insert("1.0", "1", "rjust")
# disable selection
self.line_num.bindtags((str(self.line_num), str(self), "all"))
self.line_num.grid(row=1, column=0, sticky="ns")
self.text = tk.Text(
self,
yscrollcommand=self.scroll_update,
wrap="none",
)
self.text.bind("<KeyRelease>", lambda _: self.update_lines())
self.text.grid(row=1, column=1, sticky="nesw")
self.editor_v_sbr = tk.Scrollbar(self, command=self.editor_scroll)
self.editor_v_sbr.grid(row=1, column=2, sticky="ns")
self.editor_h_sbr = tk.Scrollbar(
self, command=self.text.xview, orient="horizontal" # type:ignore
)
self.editor_h_sbr.grid(row=2, column=0, columnspan=2, sticky="ew")
self.text.config(xscrollcommand=self.editor_h_sbr.set)
def editor_scroll(self, _: Any, position: float):
self.line_num.yview_moveto(position)
self.text.yview_moveto(position)
def scroll_update(self, first: float, last: float):
self.text.yview_moveto(first)
self.line_num.yview_moveto(first)
self.editor_v_sbr.set(first, last)
def update_lines(self, new_text: str | None = None):
if not new_text:
new_text = self.text.get("1.0", "end")
self.line_num.config(state="normal")
self.line_num.delete("1.0", "end")
num_lines = len(new_text.splitlines(True))
if num_lines >= 1000:
new_width = int(math.log10(num_lines)) + 1
self.line_num.config(width=new_width)
self.line_num.insert(
"1.0",
"\n".join([str(x + 1) for x in range(num_lines)]),
"rjust",
)
self.line_num.config(state="disabled")
def view_file(self):
self.text_content = LIPSUM
self.update_lines(self.text_content)
self.text.delete("1.0", "end")
self.text.insert("1.0", self.text_content)
def goto(self, start: tuple[int, int], end: tuple[int, int]):
pos: Callable[[tuple[int, int]], str] = lambda x: f"{x[0]}.{x[1]-1}"
self.text.focus_set()
self.text.see(pos(start))
self.text.mark_set("insert", pos(start))
self.text.tag_remove("sel", "1.0", "end")
self.text.tag_add("sel", pos(start), pos(end))
class Results(tk.LabelFrame):
def __init__(
self,
master: tk.Tk,
goto: Callable[[tuple[int, int], tuple[int, int]], None],
*args: Any,
**kwargs: Any,
):
tk.LabelFrame.__init__(self, master, *args, text="Results", **kwargs)
self.grid_columnconfigure(0, weight=1)
self.grid_rowconfigure(0, weight=1)
self.text = tk.Text(
self,
wrap="none",
)
self.text.insert("end", "Lorem ipsum dolor sit amet\n")
self.text.insert("end", "View in editor", "link")
def callback():
self.goto((1, 1), (1, 27))
return "break"
self.text.tag_bind("link", "<Button-1>", lambda _: callback())
self.link_font = Font(self, "TkDefaultFont")
self.link_font.configure(underline=True)
self.text.tag_configure("link", font=self.link_font, foreground="SlateBlue3")
self.text.configure(state="disabled")
self.text.tag_bind("link", "<Enter>", self.link_enter)
self.text.tag_bind("link", "<Leave>", self.link_leave)
self.text.grid(row=0, column=0, sticky="nesw", padx=(10, 0), pady=(10, 0))
self.y_sbr = tk.Scrollbar(self, command=self.text.yview) # type:ignore
self.y_sbr.grid(row=0, column=1, sticky="ns", padx=(0, 10), pady=(10, 0))
self.x_sbr = tk.Scrollbar(
self, command=self.text.xview, orient="horizontal" # type:ignore
)
self.x_sbr.grid(row=1, column=0, sticky="ew", pady=(0, 10), padx=(10, 0))
self.text.configure(yscrollcommand=self.y_sbr.set)
self.text.configure(xscrollcommand=self.x_sbr.set)
self.goto = goto
def link_enter(self, _: Any):
self.text.config(cursor="hand2")
def link_leave(self, _: Any):
self.text.config(cursor="")
class App(tk.Tk):
def __init__(self):
tk.Tk.__init__(self)
self.editor = Editor(self)
self.editor.grid(row=0, column=1, sticky="nesw", padx=10, pady=10)
self.results = Results(self, self.editor.goto)
self.results.grid(row=0, column=2, sticky="nesw", padx=10, pady=10)
self.editor.text.focus()
self.editor.view_file()
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
|
<python><tkinter><tkinter-text>
|
2025-02-17 19:44:44
| 1
| 3,989
|
Henry
|
79,446,437
| 825,227
|
Convert dictionary of lists with entries as dictionaries into dataframe with top level key as additional column value in resulting dataframe
|
<p>I have a dictionary of lists, keyed by a string (stock ticker), where each value is a list of dicts, which looks like this:</p>
<pre><code>data
Out[88]:
{'NVDA': [{'open': 144.75, 'high': 144.21, 'low': 174.33, 'close': 210.47},
{'open': 123.97, 'high': 128.5, 'low': 110.25, 'close': 154.09},
{'open': 118.19, 'high': 134.81, 'low': 104.37, 'close': 149.72},
{'open': 225.35, 'high': 126.81, 'low': 104.77, 'close': 209.46},
{'open': 247.2, 'high': 243.25, 'low': 220.44, 'close': 186.01}],
'MSFT': [{'open': 175.78, 'high': 213.98, 'low': 229.75, 'close': 206.59},
{'open': 142.98, 'high': 168.42, 'low': 188.33, 'close': 232.52},
{'open': 184.14, 'high': 163.42, 'low': 194.81, 'close': 153.03},
{'open': 199.54, 'high': 130.26, 'low': 101.05, 'close': 102.1},
{'open': 243.91, 'high': 119.21, 'low': 190.2, 'close': 223.31}],
'AAPL': [{'open': 202.06, 'high': 162.54, 'low': 212.3, 'close': 226.78},
{'open': 191.17, 'high': 153.49, 'low': 135.13, 'close': 151.83},
{'open': 187.15, 'high': 149.75, 'low': 123.28, 'close': 247.32},
{'open': 194.29, 'high': 175.34, 'low': 244.14, 'close': 207.45},
{'open': 228.9, 'high': 133.26, 'low': 100.59, 'close': 129.35}]}
import random

ticks = ['NVDA', 'MSFT', 'AAPL']
data = {}
for s in ticks:
    data[s] = []
    for _ in range(5):
        entry = {
            'open': round(random.uniform(100, 250), 2),
            'high': round(random.uniform(100, 250), 2),
            'low': round(random.uniform(100, 250), 2),
            'close': round(random.uniform(100, 250), 2)
        }
        data[s].append(entry)
</code></pre>
<p>I'd like to convert this to a dataframe which looks like this:</p>
<pre><code>df
Out[98]:
tick open high low close
0 NVDA 215.44 124.29 121.61 244.35
1 NVDA 214.89 184.49 157.39 239.31
2 NVDA 221.42 204.17 148.83 215.00
3 NVDA 182.49 104.29 175.36 226.59
4 NVDA 127.31 182.31 228.92 173.52
5 MSFT 217.79 147.98 120.40 239.97
6 MSFT 108.66 222.83 177.20 172.62
7 MSFT 138.16 116.36 241.62 231.15
8 MSFT 160.53 234.88 154.93 127.49
9 MSFT 168.22 127.77 224.75 207.59
10 AAPL 119.95 106.36 150.28 195.93
11 AAPL 117.71 142.54 210.08 116.37
12 AAPL 147.07 204.46 223.98 104.91
13 AAPL 135.71 211.83 210.11 102.34
14 AAPL 216.45 136.08 130.27 236.48
</code></pre>
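<p>A minimal sketch of one way to do this (assuming the structure shown above): tag each record with its ticker before building the frame.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

rows = [{"tick": tick, **record} for tick, records in data.items() for record in records]
df = pd.DataFrame(rows, columns=["tick", "open", "high", "low", "close"])
</code></pre>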
|
<python><pandas><dataframe><dictionary>
|
2025-02-17 19:35:35
| 5
| 1,702
|
Chris
|
79,446,423
| 3,637,646
|
"File Load Error for...Invalid response: 404 Not Found" when loading .ipynb on Spyder Notebook
|
<p>I am using Spyder v6 and I am trying to load a .ipynb file using the Notebook plugin. The Notebook plugin is successfully installed, since I can see the Notebook interface when I open Spyder.</p>
<p>However, when I try to load the .ipynb file I do get the following error:</p>
<pre><code>File Load Error for test_file.ipynb
Invalid response: 404 Not Found
</code></pre>
<p>I checked that the problem was not with the .ipynb file by opening it with Jupyter Notebook, and it worked. So I guess the problem is with Spyder.</p>
<p>The file I am trying to load can be downloaded from here: <a href="https://github.com/dnanexus/OpenBio/blob/master/UKB_notebooks/ukb-rap-pheno-basic.ipynb" rel="nofollow noreferrer">https://github.com/dnanexus/OpenBio/blob/master/UKB_notebooks/ukb-rap-pheno-basic.ipynb</a></p>
<p>Note that Spyder was installed using conda on a specific environment.</p>
|
<python><jupyter-notebook><spyder><ipynb>
|
2025-02-17 19:25:45
| 0
| 1,268
|
CafféSospeso
|
79,446,382
| 11,246,348
|
Negative lookahead regex in `re.subn()` context
|
<p>I am trying to use regular expressions to replace numeric ranges in text, such as <code>"4-5"</code>, with the phrase <code>"4 to 5"</code>.</p>
<p>The text also contains dates such as <code>"2024-12-26"</code> that should <em>not</em> be replaced (should be left as is).</p>
<p>The regular expression <code>(\d+)(\-)(\d+)</code> (attempt one below) is clearly wrong, because it falsely matches dates.</p>
<p>Using a negative lookahead expression, I came up with the regex <code>(?!\d+\-\d+\-)(\d+)(\-)(\d+)</code> instead (attempt two below), which correctly matches <code>"4-5"</code> while rejecting <code>"2024-12-26"</code>.</p>
<p>However, <code>attempt_two</code> does not behave correctly in a <code>re.subn()</code> context, because although it rejects <code>"2024-12-26"</code>, the search continues on to match (and replace) the substring <code>"12-26"</code>:</p>
<pre class="lang-py prettyprint-override"><code>import re
text = """
2024-12-26
4-5
78-79
"""
attempt_one = re.compile(r"(\d+)(\-)(\d+)")
attempt_two = re.compile(r"(?!\d+\-\d+\-)(\d+)(\-)(\d+)")
print("Attempt one:")
print(re.match(attempt_one, "4-5")) # Match: OK
print(re.match(attempt_one, "2024-12-26")) # Match: False positive
new_text, _ = re.subn(attempt_one, r"\1 to \3", text) # Incorrect substitution
print(new_text)
print("Attempt two:")
print(re.match(attempt_two, "4-5")) # Match: OK
print(re.match(attempt_two, "2024-12-26")) # Doesn't match: OK
new_text, _ = re.subn(attempt_two, r"\1 to \3", text) # Still incorrect
print(new_text)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Attempt one:
<re.Match object; span=(0, 3), match='4-5'>
<re.Match object; span=(0, 7), match='2024-12'>
2024 to 12-26
4 to 5
78 to 79
Attempt two:
<re.Match object; span=(0, 3), match='4-5'>
None
2024-12 to 26
4 to 5
78 to 79
</code></pre>
<p>What regular expression can I use so that the substitution returns the following instead?</p>
<pre class="lang-none prettyprint-override"><code>2024-12-26
4 to 5
78 to 79
</code></pre>
<p>(As my goal is to learn about regular expressions, I am not interested in workarounds such as matching the whitespace or newline after <code>"12-26"</code>.)</p>
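<p>For reference, here is a sketch of one pattern that produces the desired output on the sample text: it uses a lookbehind as well as a lookahead, so a hyphenated number that touches another hyphen-digit sequence (as in a date) is skipped entirely. Worth verifying against the real data:</p>
<pre class="lang-py prettyprint-override"><code>import re

attempt_three = re.compile(r"(?<![\d-])(\d+)-(\d+)(?![\d-])")

new_text, _ = re.subn(attempt_three, r"\1 to \2", text)
print(new_text)
# 2024-12-26
# 4 to 5
# 78 to 79
</code></pre>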
|
<python><regex>
|
2025-02-17 19:09:31
| 3
| 887
|
Max
|
79,446,162
| 405,017
|
Check if one array contains another in psql with psycopg3
|
<p>My PostgreSQL 14 database has a table with a field that is an array of strings, <code>tags TEXT[]</code>. I have a list of strings in Python <code>["foo", "bar"]</code>, and I want to select rows in the table that have all the tags in the list.</p>
<p>Based on <a href="https://www.crunchydata.com/blog/tags-aand-postgres-arrays-a-purrfect-combination#what-cats-have-these-three-tags-2" rel="nofollow noreferrer">this article</a> I believe the SQL I want to execute is: <code>SELECT * FROM mytable WHERE tags @> ARRAY['foo', 'bar']</code>.</p>
<p>I cannot figure out how to do this with the psycopg3 module.</p>
<pre class="lang-py prettyprint-override"><code>sqlcmd = "SELECT * FROM mytable WHERE tags @> ARRAY[%(tags)s]"
params = {"tags": ["foo", "bar"]}
cursor = conn.cursor()
print(cursor.mogrify(sqlcmd, params))
#=> SELECT * FROM mytable WHERE tags @> ARRAY['{foo,bar}']
</code></pre>
<p>If I supply just one tag in the list I get <code>… tags @> ARRAY['{foo}']</code>.</p>
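<p>One possibility worth trying (a sketch based on the assumption that psycopg 3 adapts a whole Python list to a PostgreSQL array when it is passed as a single parameter, rather than spliced into an <code>ARRAY[...]</code> constructor):</p>
<pre class="lang-py prettyprint-override"><code>sqlcmd = "SELECT * FROM mytable WHERE tags @> %(tags)s::text[]"
params = {"tags": ["foo", "bar"]}

cursor = conn.cursor()
cursor.execute(sqlcmd, params)
rows = cursor.fetchall()
</code></pre>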
|
<python><postgresql><psycopg2><psycopg3>
|
2025-02-17 17:23:21
| 1
| 304,256
|
Phrogz
|
79,446,161
| 2,440,505
|
Problem while retrieving entire word match from regex with findall()
|
<p>I'm trying to get the entire word that matches the expression. I can do it for one match with <code>search()</code>, but there are several in the string.</p>
<p>With this code I get exactly what I want, but only the first one.</p>
<pre><code>re_pattern = r"(\d+(\.\d+){2,})-(20\d{2})(\d{2})(\d{2})-XPTO"
s = "sdfljkfslk fdjf dslkjfkfj \n | ksldfjflkj \n 1.3.6-20241129-XPTO slkj lkj | ## lkjlkjd ssss jkjkj$$=1.3.6-20241129-XPTO"
matches = re.search(re_pattern, s)
print(matches[0])
</code></pre>
<p>Prints <code>1.3.6-20241129-XPTO</code></p>
<p>When trying to use <code>findall()</code> I get very different results:</p>
<pre><code>matches = re.findall(re_pattern, s)
print(matches)
</code></pre>
<p>Produces <code>[('1.3.6', '.6', '2024', '11', '29'), ('1.3.6', '.6', '2024', '11', '29')]</code></p>
<p>Can't figure out why. Any help is appreciated, thanks.</p>
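<p>For reference, a small sketch using <code>finditer()</code> instead: <code>findall()</code> returns the capture groups whenever the pattern contains groups, while <code>finditer()</code> yields match objects whose <code>group(0)</code> is the whole match:</p>
<pre class="lang-py prettyprint-override"><code>matches = [m.group(0) for m in re.finditer(re_pattern, s)]
print(matches)
# ['1.3.6-20241129-XPTO', '1.3.6-20241129-XPTO']
</code></pre>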
|
<python><regex>
|
2025-02-17 17:23:14
| 1
| 864
|
Christian Dechery
|
79,445,973
| 1,356,926
|
Copy Optuna study while slightly altering trial scores
|
<p>I am using Optuna to optimize the parameters of a non-ML task. Now, each trial consists in processing several files in sequence, each of which gets a score. The scores are summed cumulatively in order to allow Optuna to prune unfavorable trials, and the final sum of all scores is reported.</p>
<p>In order to optimize pruning, I want to have the highest variance files be processed first, as those are the most relevant to determine whether a run should be stopped or not. However, in order to know which files have the highest variance, I first must run some trials without any pruning. Once I have done that, I can determine from the results the optimal order in which to process the files.</p>
<p>However, there is no reason to discard the previous results. So I have written a small script that extracts the existing scores, including the intermediate steps, and re-arranges them as if the optimal file processing order had always been used. I'd now like to create a new study with some trials that report my newly "re-arranged" scores.</p>
<p>However, I can't figure out a way to also pass the original parameter values to the new trials. With my current code, I get a study that reports the correct score, but doesn't show any parameter at all (since I'm not currently copying them). Is there a way to do so?</p>
<p>Below is an excerpt of my script to give an idea of what I'm looking for.</p>
<pre><code>new_study = optuna.create_study(
    storage=new_storage,
    study_name=args.study_name,
    direction="maximize")

for i, trial in enumerate(completed_trials):
    new_trial = new_study.ask()
    # How to actually copy these? What else should be copied?
    #
    # new_trial.distributions = cp.deepcopy(trial.distributions)
    # new_trial.params = cp.deepcopy(trial.params)
    new_trial_score = cum_reordered_scores[i]
    for i in range(len(new_trial_score)):
        new_trial.report(new_trial_score[i], i)
    new_study.tell(new_trial, new_trial_score[-1])
</code></pre>
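<p>One direction that may be worth exploring (a sketch only, assuming <code>optuna.trial.create_trial</code> and <code>Study.add_trial</code> exist in the installed Optuna version; check its API reference): building frozen trials that carry the original params and distributions along with the re-arranged intermediate values:</p>
<pre class="lang-py prettyprint-override"><code>import copy
import optuna

new_study = optuna.create_study(storage=new_storage, study_name=args.study_name, direction="maximize")
for i, trial in enumerate(completed_trials):
    scores = cum_reordered_scores[i]
    frozen = optuna.trial.create_trial(
        params=copy.deepcopy(trial.params),
        distributions=copy.deepcopy(trial.distributions),
        intermediate_values={step: score for step, score in enumerate(scores)},
        value=scores[-1],
    )
    new_study.add_trial(frozen)
</code></pre>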
|
<python><optuna>
|
2025-02-17 16:06:05
| 0
| 5,637
|
Svalorzen
|
79,445,887
| 893,254
|
How to create a system-wide environment to install pip packages when using pip inside a Docker container?
|
<p>I am trying to build an existing <code>Dockerfile</code>, which needs to be migrated from an older ubuntu version base image to a more up-to-date base image.</p>
<p>The existing <code>Dockerfile</code> contains commands such as</p>
<pre><code>pip install pandas
</code></pre>
<p>There are many such <code>pip</code> commands, each of which triggers the following error message.</p>
<pre><code>error: externally-managed-environment
</code></pre>
<p>This is not unexpected. Recent versions of Ubuntu produce this error when the user attempts to install <code>pip</code> packages without first activating a virtual environment.</p>
<p>This can be fixed by creating and activating a virtual environment. The disadvantage is that inside a Docker container this shouldn't really be needed, since a container is its own isolated environment. In addition, it creates an additional layer which is slightly inconvenient. <code>RUN python3 my_file.py</code> no longer works directly, as the venv has to be activated first. (There are two ways to do this, the easiest of which is to do <code>RUN /path/to/.venv/bin/python3 /path/to/my_file.py</code>.)</p>
<p>The error could also be "fixed" by passing the <code>--break-system-packages</code> argument. I do not know in detail what the consequences of this are, so I do not know if this could be a recommended solution in this context.</p>
<p>There is a third possibility, which would be to install <code>python3-pandas</code> (assuming it exists). This is an <code>apt</code> package which provides an installation of <code>pandas</code> via <code>apt</code>. I would prefer not to use this method, since not all <code>pip</code> packages are available as <code>apt</code> packages. I aim to try and avoid a fragmented install whereby some packages are provided through one method and other packages are provided through a different method.</p>
<p>To review:</p>
<ul>
<li>What does the <code>--break-system-packages</code> command line option do? How "safe" is this inside a Docker container? (Rather than frequently creating and destroying this particular container, it tends to persist for a significant period of time. Typically several weeks to a few months.)</li>
<li>If this isn't a suitable or recommended approach, is there a way that I can conveniently create a system-wide virtual environment, and somehow cause it to be "permanently" activated. (In other words, to create some kind of "transparent" virtual environment, which isn't noticeable to the user - so that running <code>python3 main.py</code> will run <code>main.py</code> with the virtual environment active, automatically. Can this be done?)</li>
</ul>
|
<python><python-3.x><docker><pip>
|
2025-02-17 15:33:53
| 1
| 18,579
|
user2138149
|
79,445,863
| 9,430,509
|
How can I prevent the "Telemetry client instance id changed from AAAAAAAAAAAAAAAAAAAAAA to" log in a confluent-kafka-python consumer?
|
<p>When the consumer (which is a very simple confluent-kafka-python consumer) starts, we see this log message after the assignment</p>
<blockquote>
<p>%6|1739802885.947|GETSUBSCRIPTIONS|<consumer id>#consumer-1| [thrd:main]: Telemetry client instance id changed from AAAAAAAAAAAAAAAAAAAAAA to <some random string></p>
</blockquote>
<ul>
<li><p>I tried running the consumer locally (in contrast to the Kubernetes cluster) and see no such logs.</p>
</li>
<li><p>I tried googling for this log message but found no bugs or help avoiding this (though I am not the only person with such logs)</p>
</li>
</ul>
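<p>One thing that might be worth checking (an assumption, not a confirmed fix: newer librdkafka releases added KIP-714 client telemetry, and the consumer configuration may expose a switch for it; verify the exact property name against the CONFIGURATION.md of the librdkafka version in use):</p>
<pre class="lang-py prettyprint-override"><code>from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "my-consumer-group",
    # assumed property name -- confirm it exists in the installed librdkafka
    "enable.metrics.push": False,
})
</code></pre>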
|
<python><kubernetes><apache-kafka><confluent-kafka-python>
|
2025-02-17 15:26:09
| 1
| 935
|
Sergej Herbert
|
79,445,857
| 2,648,504
|
Pandas DataFrame returning only 1 column after creating from a list
|
<p>I'm using something similar to this as <code>input.txt</code></p>
<pre><code> 040525 $$$$$ 9999 12345
040525 $$$$$ 8888 12345
040525 $$$$$ 7777 12345
040525 $$$$$ 6666 12345
</code></pre>
<p>Due to the way this input is being pre-processed, I cannot correctly use pd.read_csv. I must first create a list from the input, then create a DataFrame from the list.</p>
<pre><code>data_list = []

with open('input.txt', 'r') as data:
    for line in data:
        data_list.append(line.strip())

df = pd.DataFrame(data_list)
</code></pre>
<p>This results in each row being considered 1 column</p>
<pre><code>print(df.shape)
print(df)
print(df.columns.tolist())
(4, 1)
0
0 040525 $$$$$ 9999 12345
1 040525 $$$$$ 8888 12345
2 040525 $$$$$ 7777 12345
3 040525 $$$$$ 6666 12345
[0]
</code></pre>
<p>How can I create 4 columns in this DataFrame? Desired output would be:</p>
<pre><code>(4, 4)
a b c d
0 40525 $$$$$ 9999 12345
1 40525 $$$$$ 8888 12345
2 40525 $$$$$ 7777 12345
3 40525 $$$$$ 6666 12345
['a', 'b', 'c', 'd']
</code></pre>
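<p>A minimal sketch of one way to get there while keeping the list-based approach (assuming the fields are whitespace-separated, as in the sample): split each line into its fields before building the DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

data_list = []
with open('input.txt', 'r') as data:
    for line in data:
        data_list.append(line.split())   # one list of fields per row

df = pd.DataFrame(data_list, columns=['a', 'b', 'c', 'd'])
</code></pre>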
|
<python><pandas>
|
2025-02-17 15:23:46
| 1
| 881
|
yodish
|
79,445,822
| 6,128,612
|
Getting tweet replies with tweepy gives '429 Too Many Requests'
|
<p>I am trying to get tweet replies of a chosen tweet using tweepy (academic access, API 2.0)</p>
<pre><code>import tweepy
import time

bearer_token = "bearer_token"
twitter_client = tweepy.Client(bearer_token=bearer_token)

tweet = "numbers"

def check_replys(tweet_ID):
    time.sleep(1)  # where to put it?
    query = f"conversation_id:{tweet_ID} is:reply"
    replys = twitter_client.search_recent_tweets(query=query)
    return replys

check_replys(tweet)
</code></pre>
<p>In response I get <code>TooManyRequests: 429 Too Many Requests Too Many Requests</code>. I was googling around and learned that tweepy might in some cases send more requests than the server allows (because of academic access? Can someone confirm that?), so I added <code>time.sleep(1)</code>; however, it did not help at all. Can someone give me a hint on how to fix it?</p>
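<p>For reference, a small sketch using <code>wait_on_rate_limit</code>, which makes the client sleep until the rate-limit window resets instead of raising <code>TooManyRequests</code> (it does not raise the limits themselves):</p>
<pre class="lang-py prettyprint-override"><code>twitter_client = tweepy.Client(bearer_token=bearer_token, wait_on_rate_limit=True)

def check_replys(tweet_ID):
    query = f"conversation_id:{tweet_ID} is:reply"
    return twitter_client.search_recent_tweets(query=query)
</code></pre>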
|
<python><twitter><tweepy>
|
2025-02-17 15:11:22
| 1
| 343
|
Alexandros
|
79,445,265
| 12,760,550
|
Python script to run Jupyter Notebook in loop and save results to HTML
|
<p>Currently I have a Jupyter notebook data analysis that I run for multiple countries, and the only thing I need to do is replace the "country" variable at the top with the country I want to analyze. After the analysis is complete, I have to save the notebook and only then run this code:</p>
<pre><code>!jupyter nbconvert OneHRHelpDeskTicketCountryReport.ipynb --no-input --no-prompt --to html
</code></pre>
<p>This would generate an HTML file I have to rename; then I change the "country" variable again for another country, run the script (except the last line, which is commented out), save the result after the graphs are displayed, run the last line again, and so on.</p>
<p>I want to create a Python file that executes this notebook and saves the reports (with any name, as long as it identifies the country) in a "Reports" folder, in a loop. I tried this with the help of ChatGPT but have no idea what to do: it generates .ipynb files in the "Reports" folder, but when I open them, the "country = " assignment has not been replaced by anything; it is the same notebook for all countries. It is also not saved and converted to HTML:</p>
<pre><code>import os
import re

# List of countries for which reports are needed
countries = ['HQ Queue', 'Czech Queue', 'Switzerland Queue', 'TMC Queue', 'AMS Queue', 'Netherlands Queue', 'Portugal Queue',
             'Peru Queue', 'London Queue', 'Sweden Queue', 'Slovakia Queue', 'Finland Queue', 'Denmark Queue', 'UAE Queue',
             'Norway Queue', 'Spain Queue', 'York Queue', 'France Queue']

notebook_input = "OneHRHelpDeskTicketCountryReport.ipynb"  # Your original notebook
output_dir = "Reports"

os.makedirs(output_dir, exist_ok=True)
print(f"Saving reports to: {os.path.abspath(output_dir)}")

for country in countries:
    output_notebook = os.path.join(output_dir, f"{country.replace(' ', '_')}_Report.ipynb")
    output_html = os.path.join(output_dir, f"{country.replace(' ', '_')}_Report.html")

    print(f"\nGenerating report for {country}...")

    # Read the notebook content
    with open(notebook_input, "r", encoding="utf-8") as f:
        notebook_content = f.read()

    # Replace the existing "country =" assignment properly
    notebook_content = re.sub(r'country\s*=\s*".*?"', f'country = "{country}"', notebook_content)

    # Write the updated notebook
    with open(output_notebook, "w", encoding="utf-8") as f:
        f.write(notebook_content)

    # Execute the notebook
    exit_code = os.system(f"jupyter nbconvert --execute --inplace {output_notebook}")
    if exit_code != 0:
        print(f"Error executing notebook for {country}. Check the notebook manually.")
        continue

    # Convert executed notebook to HTML
    os.system(f"jupyter nbconvert {output_notebook} --no-input --no-prompt --to html --output={output_html}")

print("\nAll reports generated successfully!")
input("\nPress Enter to exit...")  # Prevents the command prompt from closing immediately
</code></pre>
<p>I need a Python script that does the following:</p>
<ul>
<li>Open my Jupyter Notebook file;</li>
<li>Find the "country =" which should be in the first line and add one of the "countries" in the "countries" list</li>
<li>Run the Jupyter notebook so graphs and tables are displayed</li>
<li>Save the new displayed graphs</li>
<li>Print the page in HTML (before using this line on Jupyter itself, now it is commented out !jupyter nbconvert OneHRHelpDeskTicketCountryReport.ipynb --no-input --no-prompt --to html)</li>
<li>Do it again for each country on the countries list</li>
</ul>
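<p>A sketch of an alternative that avoids editing the notebook JSON by hand: <a href="https://papermill.readthedocs.io/" rel="nofollow noreferrer">papermill</a> (a separate package, <code>pip install papermill</code>) injects parameters and executes the notebook; it assumes the cell containing <code>country = ...</code> is tagged <code>parameters</code>:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
import papermill as pm

for country in countries:
    stem = country.replace(' ', '_')
    executed = f"Reports/{stem}_Report.ipynb"
    pm.execute_notebook(
        "OneHRHelpDeskTicketCountryReport.ipynb",  # original notebook
        executed,                                  # executed copy per country
        parameters={"country": country},
    )
    subprocess.run(
        ["jupyter", "nbconvert", executed, "--no-input", "--no-prompt", "--to", "html"],
        check=True,
    )
</code></pre>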
|
<python><html><jupyter-notebook><executable><nbconvert>
|
2025-02-17 11:33:43
| 0
| 619
|
Paulo Cortez
|
79,445,198
| 11,277,108
|
Does an errback method always get called regardless of retries?
|
<p>The <a href="https://docs.scrapy.org/en/latest/topics/request-response.html#using-errbacks-to-catch-exceptions-in-request-processing" rel="nofollow noreferrer">scrapy docs</a> state:</p>
<blockquote>
<p>The errback of a request is a function that will be called when an exception is raised while processing it.</p>
</blockquote>
<p>How does this function interact with <a href="https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#module-scrapy.downloadermiddlewares.retry" rel="nofollow noreferrer"><code>RetryMiddleware</code></a>? Is the <code>errback</code> method always called regardless of whether the request is being retried?</p>
<hr />
<p><strong>Update:</strong></p>
<p><a href="https://stackoverflow.com/a/79445299/11277108">This answer</a> pointed me to the scrapy docs however it didn't quite clear things up for me as it doesn't explain <code>errback</code> interaction(s). However, in the next paragraph in the docs is the following:</p>
<blockquote>
<p>If it [the <code>process_request</code> method] raises an IgnoreRequest exception, the process_exception() methods of installed downloader middleware will be called. If none of them handle the exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).</p>
</blockquote>
<p>However, this seems a little confusing, as that <code>errback</code> interaction appears to apply specifically to the <code>IgnoreRequest</code> exception?</p>
|
<python><scrapy>
|
2025-02-17 11:04:33
| 1
| 1,121
|
Jossy
|
79,445,122
| 687,331
|
Sniffing bluetooth packet using Scapy on raspberry pi5
|
<p>I have been working with Scapy to sniff Wi-Fi packets and it works like a champ. Out of interest, I started to read about other supported features, like the Bluetooth support in the Scapy <a href="https://scapy.readthedocs.io/en/latest/layers/bluetooth.html#what-is-bluetooth" rel="nofollow noreferrer">framework</a>.</p>
<p>I searched for a few samples but had no luck, so I decided to write the code myself, following the instructions mentioned in the Scapy docs.</p>
<pre><code>sudo hciconfig hci0 up
sudo hciconfig hci0 piscan # Enable the Bluetooth adapter to be discoverable
sudo hcidump -X
</code></pre>
<p>snippet:</p>
<pre><code>from scapy.all import *
import sys

# This callback will process Bluetooth packets
def packet_callback(packet):
    if packet.haslayer(Bluetooth_HCI_Hdr):
        if packet.haslayer(Bluetooth_HCI_Data):
            # Check if it's a Probe Request (you might need to inspect the specific packet format for your platform)
            if packet[Bluetooth_HCI_Data].opcode == 0x04:  # Probe Request opcode
                print("Probe Request captured:")
                print(packet.show())

# Start sniffing for Bluetooth packets
def start_sniffing(interface):
    print(f"Starting Bluetooth sniffing on {interface}...")
    sniff(iface=interface, prn=packet_callback, store=0)

# Make sure the script is run with root privileges to access the Bluetooth interface
if __name__ == "__main__":
    # You need to specify your Bluetooth interface, e.g., 'hci0' for Linux
    interface = "hci0"
    start_sniffing(interface)
</code></pre>
<p>On running the above code getting an error stating</p>
<pre><code>File "/home/scapy_bluetooth/probe_request.py", line 22, in <module>
start_sniffing(interface)
File "/home/scapy_bluetooth/probe_request.py", line 16, in start_sniffing
sniff(iface=interface, prn=packet_callback, store=0)
File "/usr/lib/python3/dist-packages/scapy/sendrecv.py", line 1311, in sniff
sniffer._run(*args, **kwargs)
File "/usr/lib/python3/dist-packages/scapy/sendrecv.py", line 1171, in _run
sniff_sockets[_RL2(iface)(type=ETH_P_ALL, iface=iface,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/scapy/arch/linux.py", line 499, in __init__
set_promisc(self.ins, self.iface)
File "/usr/lib/python3/dist-packages/scapy/arch/linux.py", line 179, in set_promisc
mreq = struct.pack("IHH8s", get_if_index(iff), PACKET_MR_PROMISC, 0, b"")
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/scapy/arch/linux.py", line 399, in get_if_index
return int(struct.unpack("I", get_if(iff, SIOCGIFINDEX)[16:20])[0])
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/scapy/arch/unix.py", line 42, in get_if
return ioctl(sck, cmd, struct.pack("16s16x", iff.encode("utf8")))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 19] No such device
</code></pre>
<p>When I used <strong>ifconfig hci0</strong> on the command line:</p>
<pre><code>hci0: error fetching interface information: Device not found
</code></pre>
<p><strong>rfkill list bluetooth</strong></p>
<pre><code>1: hci1: Bluetooth
Soft blocked: no
Hard blocked: no
4: hci0: Bluetooth
Soft blocked: no
Hard blocked: no
</code></pre>
<p>Not sure why hci0 is not detected.</p>
<p>thanks for reading.</p>
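<p><em>Update:</em> one direction I am considering (a sketch only, under the assumption that Scapy's <code>BluetoothHCISocket</code> from the linked Bluetooth docs is the intended entry point): <code>hci0</code> is not a network interface, so <code>sniff(iface="hci0")</code> goes down the AF_PACKET path and fails; opening an HCI socket and passing it to <code>sniff()</code> would avoid that:</p>
<pre class="lang-py prettyprint-override"><code>from scapy.layers.bluetooth import BluetoothHCISocket
from scapy.all import sniff

hci = BluetoothHCISocket(0)   # 0 -> hci0; may need 1 here given the rfkill output
packets = sniff(opened_socket=hci, prn=lambda p: p.summary(), store=0, timeout=30)
</code></pre>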
|
<python><ubuntu><bluetooth><scapy><raspberry-pi5>
|
2025-02-17 10:35:30
| 0
| 1,985
|
Anand
|
79,445,120
| 3,059,001
|
llama3 responding only function call?
|
<p>I am trying to make Llama3 Instruct able to use function calls from tools. It does work, but now it answers only with function calls! If I ask something like <code>who are you?</code> or <code>what is an Apple device?</code> it answers back with a function call. I believe it is something in the chat template? Or is something still missing in my code?</p>
<pre><code>from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
import os
import torch
from huggingface_hub import login

def get_current_temperature(location: str, unit: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
        unit: The unit to return the temperature in. (choices: ["celsius", "fahrenheit"])
    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

def get_current_wind_speed(location: str) -> float:
    """
    Get the current wind speed in km/h at a given location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current wind speed at the given location in km/h, as a float.
    """
    return 6.  # A real function should probably actually get the wind speed!

tools = [get_current_temperature, get_current_wind_speed]

# Suppress MPS log message (optional)
os.environ["TORCH_MPS_DEVICE"] = "1"

checkpoint = "models/Llama-3.2-1B-Instruct"

messages = [
    {"role": "user", "content": "Hey, who are you ?"}
]

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="cpu")

inputs = tokenizer.apply_chat_template(messages, tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):], skip_special_tokens=True))
</code></pre>
|
<python><artificial-intelligence><chatbot><llama><llama3>
|
2025-02-17 10:35:20
| 1
| 14,460
|
Kodr.F
|
79,444,966
| 4,194,483
|
Error in BLE communication between ESP32 and Python
|
<p>I am running a BLE server on an ESP32 which is sending the value of time periodically. This code is being run on Arduino IDE. On the other hand, I have a Python code running on my PC which needs to receive that value and display it on the GUI.</p>
<p>When I run the Python code, it connects to the ESP32 successfully but throws an error while trying to subscribe to the notifications from the ESP32. The error is as follows:</p>
<pre><code>Exception in thread Thread-1 (start_ble_loop):
Traceback (most recent call last):
File "C:\Program Files\Python312\Lib\threading.py", line 1073, in _bootstrap_inner
self.run()
File "C:\Program Files\Python312\Lib\threading.py", line 1010, in run
self._target(*self._args, **self._kwargs)
File "E:\AC_Work\Task_9_Radar implementation\BLE GUI\gui_code.py", line 42, in start_ble_loop
loop.run_until_complete(connect_and_receive())
File "C:\Program Files\Python312\Lib\asyncio\base_events.py", line 684, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "E:\AC_Work\Task_9_Radar implementation\BLE GUI\gui_code.py", line 28, in connect_and_receive
await client.start_notify(BLE_CHARACTERISTIC_UUID, notification_handler)
File "D:\Users\26101179\AppData\Roaming\Python\Python312\site-packages\bleak\__init__.py", line 844, in start_notify
await self._backend.start_notify(characteristic, wrapped_callback, **kwargs)
File "D:\Users\26101179\AppData\Roaming\Python\Python312\site-packages\bleak\backends\winrt\client.py", line 981, in start_notify
await winrt_char.write_client_characteristic_configuration_descriptor_async(
OSError: [WinError -2140864509] The attribute cannot be written
</code></pre>
<p>My Arduino code is as follows:</p>
<pre><code>#include <BLEDevice.h>
#include <BLEUtils.h>
#include <BLEServer.h>
#include "esp_bt_device.h"
#define SERVICE_UUID "12345678-1234-5678-1234-56789abcdef0"
#define CHARACTERISTIC_UUID "87654321-4321-6789-4321-67890abcdef0"
BLECharacteristic *pCharacteristic;
bool deviceConnected = false;
void printBLEMacAddress() {
const uint8_t* mac = esp_bt_dev_get_address();
Serial.print("ESP32 BLE MAC Address: ");
for (int i = 0; i < 6; i++) {
Serial.printf("%02X", mac[i]);
if (i < 5) Serial.print(":");
}
Serial.println();
}
class MyServerCallbacks: public BLEServerCallbacks {
void onConnect(BLEServer* pServer) {
deviceConnected = true;
}
void onDisconnect(BLEServer* pServer) {
deviceConnected = false;
pServer->getAdvertising()->start(); // Restart advertising
}
};
void setup() {
Serial.begin(115200);
BLEDevice::init("ESP32_BLE");
// Print BLE MAC Address
printBLEMacAddress();
BLEServer *pServer = BLEDevice::createServer();
pServer->setCallbacks(new MyServerCallbacks());
BLEService *pService = pServer->createService(SERVICE_UUID);
pCharacteristic = pService->createCharacteristic(
CHARACTERISTIC_UUID,
BLECharacteristic::PROPERTY_READ |
BLECharacteristic::PROPERTY_NOTIFY
);
pService->start();
pServer->getAdvertising()->start();
Serial.println("BLE Server Started");
}
void loop() {
if (deviceConnected) {
String millisStr = String(millis()); // Get millis() value
pCharacteristic->setValue(millisStr.c_str());
pCharacteristic->notify(); // Send the value
Serial.println("Sent: " + millisStr);
}
delay(1000);
}
</code></pre>
<p>And my Python code is as follows:</p>
<pre><code>import tkinter as tk
from bleak import BleakClient, BleakError
import asyncio
import time
import threading
# Values determined through nrfConnect
ESP32_BLE_MAC = "74:4D:BD:61:D2:6D"
BLE_CHARACTERISTIC_UUID = "87654321-4321-6789-4321-67890abcdef0"
async def connect_and_receive():
while True: # Keep retrying connection
try:
print("Attempting to connect to ESP32 BLE...")
label.config(text="Connecting to ESP32...")
async with BleakClient(ESP32_BLE_MAC) as client:
if client.is_connected:
print("Connected to ESP32 BLE")
label.config(text="Connected to ESP32")
def notification_handler(sender, data):
"""Callback function when new data is received."""
millis_value = int.from_bytes(data, byteorder="little")
label.config(text=f"Millis: {millis_value}")
# Subscribe to notifications
await client.start_notify(BLE_CHARACTERISTIC_UUID, notification_handler)
while True:
await asyncio.sleep(1) # Keep listening for data
except BleakError as e:
print(f"Connection failed: {e}")
label.config(text="ESP32 not found! Retrying...")
time.sleep(5) # Wait before retrying
# Function to run asyncio loop in a separate thread
def start_ble_loop():
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_until_complete(connect_and_receive())
# Tkinter GUI Setup
root = tk.Tk()
root.title("ESP32 BLE Monitor")
label = tk.Label(root, text="Waiting for connection...", font=("Arial", 16))
label.pack(pady=20)
# Start BLE communication in a separate thread
threading.Thread(target=start_ble_loop, daemon=True).start()
# Run the GUI
root.mainloop()
</code></pre>
<p>Any ideas what could be causing the problem?</p>
|
<python><bluetooth-lowenergy><arduino-esp32>
|
2025-02-17 09:33:18
| 1
| 645
|
Mobi Zaman
|
79,444,566
| 4,451,521
|
Gradio dataframe dynamically changes its height
|
<p>This is something that I have begun noticing lately.
I have a dataframe in Gradio like</p>
<pre><code>dataframe_output = gr.DataFrame(
    label="Results Dataframe",
    headers=["Image Path"],  # , "Result", "Check"],
    wrap=True,
    interactive=False,
    type="array",
    height=400
)
</code></pre>
<p>This dataframe receives data from a function and displays it correctly. It also responds to clicking on it.</p>
<p>The only thing that bugs me is that when I scroll the mouse over it to see the lower rows, the height of the dataframe dynamically increases.</p>
<p>This is not a big error in the behavior of my application, but it still hurts the ease of use of the application.</p>
<p>Does anyone know how to set the height? (As you see, I have tried with <code>height=400</code>)</p>
|
<python><gradio>
|
2025-02-17 06:27:04
| 0
| 10,576
|
KansaiRobot
|
79,444,501
| 11,156,131
|
Fpdf header and background
|
<p>I need to create a pdf with a header, footer and background color. The following code generates all 3, but it seems the header is getting hidden behind the pdf rect.</p>
<pre><code>from fpdf import FPDF

class PDF(FPDF):
    def header(self):
        self.set_font(family='Helvetica', size=8)
        self.cell(0, 10, 'test_header', align='L')

    def footer(self):
        self.set_y(-15)
        self.set_font(family='Helvetica', size=8)
        self.cell(0, 10, 'test_footer', align='L')

pdf = PDF()
pdf.add_page()
pdf.set_font("Times", size=12)

# BG
pdf.set_fill_color(r=249, g=247, b=242)
pdf.rect(h=pdf.h, w=pdf.w, x=0, y=0, style="F")
</code></pre>
<p>With the above, only the footer is visible, but without the rect both header and footer are visible.</p>
<p>How can I achieve the desired outcome?</p>
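<p>One possible direction (a sketch, not a verified fpdf2 recipe): since <code>header()</code> runs as part of <code>add_page()</code> and page content is painted in call order, drawing the background rectangle at the start of <code>header()</code> should keep it underneath everything drawn afterwards on that page:</p>
<pre class="lang-py prettyprint-override"><code>from fpdf import FPDF

class PDF(FPDF):
    def header(self):
        # paint the background first, then the header text on top of it
        self.set_fill_color(r=249, g=247, b=242)
        self.rect(h=self.h, w=self.w, x=0, y=0, style="F")
        self.set_font(family='Helvetica', size=8)
        self.cell(0, 10, 'test_header', align='L')

    def footer(self):
        self.set_y(-15)
        self.set_font(family='Helvetica', size=8)
        self.cell(0, 10, 'test_footer', align='L')

pdf = PDF()
pdf.add_page()
pdf.set_font("Times", size=12)
pdf.output("test.pdf")
</code></pre>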
|
<python><fpdf2>
|
2025-02-17 05:49:24
| 1
| 4,376
|
smpa01
|
79,444,425
| 9,448,637
|
Altair line chart with last values labeled - how to stop overlapping labels?
|
<p>How can I stop line labels from overlapping when the last values that the labels are pinned to are close together? I am using Altair 5.5.</p>
<pre><code>import altair as alt
from vega_datasets import data

# Import example data
source = data.stocks()

# Create a common chart object
chart = alt.Chart(source).transform_filter(
    alt.datum.symbol != "IBM"  # A reduction of the dataset to clarify our example. Not required.
).encode(
    alt.Color("symbol").legend(None)
)

# Draw the line
line = chart.mark_line().encode(
    x="date:T",
    y="price:Q"
)

# Use the `argmax` aggregate to limit the dataset to the final value
label = chart.encode(
    x='max(date):T',
    y=alt.Y('price:Q').aggregate(argmax='date'),
    text='symbol'
)

# Create a text label
text = label.mark_text(align='left', dx=4)

# Create a circle annotation
circle = label.mark_circle()

# Draw the chart with all the layers combined
line + circle + text
</code></pre>
|
<python><altair>
|
2025-02-17 04:44:04
| 1
| 641
|
footfalcon
|
79,444,392
| 3,163,618
|
How does PyPy implement integers?
|
<p>CPython implements arbitrary-precision integers as <a href="https://docs.python.org/3/c-api/long.html#c.PyLongObject" rel="nofollow noreferrer">PyLongObject</a>, which extends PyObject. On 64-bit platforms they take at least 28 bytes, which is quite memory intensive. Also well-known is that it keeps a cache of small integer objects from -5 to 256.</p>
<p>I am interested in seeing how PyPy implements these, in particular what optimizations there are for limited-size integer objects. It is difficult to find documentation online. The <a href="https://doc.pypy.org/en/latest/interpreter-optimizations.html#integers-as-tagged-pointers" rel="nofollow noreferrer">PyPy docs mention a tagged pointer optimization</a> for "small ints" of 63 bits (signed?). The most obvious optimization to me is treating an integer as a primitive instead of a general-purpose object where possible.</p>
|
<python><integer><implementation><pypy>
|
2025-02-17 04:20:43
| 2
| 11,524
|
qwr
|
79,443,999
| 7,437,143
|
How to open an image, parse input from user, and close the image afterwards in Python?
|
<p><a href="https://stackoverflow.com/a/31751501/7437143">This</a> answer did not work for me, nor for some Mac users, and I did not find a working solution among the following similar questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/69576881/how-to-open-an-image-in-python-and-close-afterwards?noredirect=1&lq=1">How to open an image in Python and close afterwards?</a></li>
<li><a href="https://stackoverflow.com/questions/3135328/how-to-close-an-image?rq=3">How to Close an Image?</a></li>
<li><a href="https://stackoverflow.com/questions/6725099/how-can-i-close-an-image-shown-to-the-user-with-the-python-imaging-library?rq=3">How can I close an image shown to the user with the Python Imaging Library?</a></li>
<li><a href="https://stackoverflow.com/questions/73665184/how-do-i-close-figure-in-matplotlib">How do I close figure in matplotlib?</a></li>
<li><a href="https://stackoverflow.com/questions/6725099/how-can-i-close-an-image-shown-to-the-user-with-the-python-imaging-library">How can I close an image shown to the user with the Python Imaging Library?</a></li>
<li><a href="https://stackoverflow.com/questions/68470767/how-to-use-user-input-to-display-image">How to use user input to display image?</a></li>
</ul>
<p>How can one open an image in Python, then let the user answer questions about it in the CLI, and close the image afterwards?</p>
<h2>Constraints</h2>
<ol start="0">
<li>Don't use sub-process.</li>
<li>Don't start killing processes that happen to match a substring.</li>
<li>In essence, use a Python-only method that opens an image and then closes that, and only that image, with control, whilst allowing other code (that allows user interaction) to be executed in between.</li>
</ol>
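<p>A minimal sketch of one Python-only possibility under these constraints, using tkinter plus Pillow (both assumed to be available): the window is created and drawn with <code>update()</code> rather than <code>mainloop()</code>, so <code>input()</code> can run in the CLI in between, and <code>destroy()</code> closes only that window:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from PIL import Image, ImageTk

def ask_about_image(image_path, questions):
    root = tk.Tk()
    root.title(image_path)
    photo = ImageTk.PhotoImage(Image.open(image_path))
    tk.Label(root, image=photo).pack()
    root.update()            # draw the window without blocking in mainloop()
    answers = []
    for question in questions:
        answers.append(input(f"{question} "))
        root.update()        # keep the window drawn between questions
    root.destroy()           # close only this window
    return answers

# answers = ask_about_image("example.png", ["What do you see?"])
</code></pre>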
|
<python><image>
|
2025-02-16 21:53:11
| 1
| 2,887
|
a.t.
|
79,443,888
| 1,951,507
|
How can python type var and type aliases be used to simplify function definitions with multiple generics
|
<p>Say I have a definition:</p>
<pre><code>def foo[A: AMin, B: BMin, C: CMin](d: D[A, B, C], e: E[A, B]) -> F[C]: ...
</code></pre>
<p>Is there a way (in Python 3.13) to simplify this (via type vars/aliases) and make it more concise, while keeping type checkers like pyright happy?</p>
<p>E.g. import <code>A, B, C</code> as bound <code>TypeVar</code>s from a common module, as well as type definitions like <code>type DType = ...</code></p>
<p>I tried the following, with the simplicity (in the foo/foo2 defs) I hoped to achieve:</p>
<pre><code>from typing import TypeVar
class AMin: pass
class BMin: pass
class CMin: pass
class D: pass
class E: pass
class F: pass
A = TypeVar('A', bound=AMin)
B = TypeVar('B', bound=BMin)
C = TypeVar('C', bound=CMin)
type DType = D[A, B, C]
type EType = E[A, B]
type FType = F[C]
def foo(d: DType, e: EType) -> FType: ...
def foo2(d: DType) -> C: ...
</code></pre>
<p>But that still gives many typing errors.</p>
|
<python><generics><python-typing>
|
2025-02-16 20:34:36
| 0
| 1,052
|
pfp.meijers
|
79,443,799
| 1,860,376
|
Is there an idiomatic way of raising an error in polars
|
<p>When doing some types of data processing, I want to receive an indicative error message in polars. For example, if I have the following transformation</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
lf = pl.LazyFrame(
{
"first_and_middle_name": ["mister banana", "yoda the jedi", "not gonna"],
"middle_and_last_name": ["banana muffin", "jedi master", "work at all"],
}
)
split_first_name = pl.col("first_and_middle_name").str.split(" ").list
split_last_name = pl.col("middle_and_last_name").str.split(" ").list
lf.with_columns(
pl.when(split_first_name.last() == split_last_name.first())
.then(
pl.col("first_and_middle_name")
+ " "
+ split_last_name.slice(1, split_last_name.len()).list.join(" ")
)
.otherwise(pl.lit(None))
.alias("full_name")
).collect()
</code></pre>
<p>I want to receive an informative error that the last row was problematic, instead of a "null".</p>
<p>I couldn't find in the polars documentation what a good way to do that is.
I found hacks like defining a UDF to run there and throw an exception, but this feels like a strange detour.</p>
|
<python><dataframe><error-handling><python-polars>
|
2025-02-16 19:27:25
| 3
| 609
|
nadavge
|
79,443,495
| 7,366,596
|
Given A and B, find the number of all integer X values such that X*(X+1) falls between [A, B], both inclusive
|
<p>Given A and B, find the number of all integer X values such that X*(X+1) falls between [A, B], both inclusive.</p>
<p>My approach:</p>
<p>X^2 + X >= A --> X^2 + X - A >= 0</p>
<p>X^2 + X <= B --> X^2 + X - B <= 0</p>
<ol>
<li><p>Find the roots of both equations using find_quadratic_roots() below.</p>
</li>
<li><p>Use the higher and lower roots of both equations to derive all positive & negative X integers (this is where I am stuck)</p>
</li>
</ol>
<pre class="lang-none prettyprint-override"><code>from math import sqrt, floor, ceil
def find_quadratic_roots(a: int, b: int, c: int) -> tuple:
""" Returns roots of ax² + bx + c = 0
For our case, a and b will always be 1 """
discriminant = b*b - 4*a*c
if discriminant < 0:
return None
root1 = (-b + sqrt(discriminant)) / (2*a)
root2 = (-b - sqrt(discriminant)) / (2*a)
return (min(root1, root2), max(root1, root2))
def solution(A: int, B: int) -> int:
if A > B: return 0
# For lower bound: x² + x - A ≥ 0
lower_roots = find_quadratic_roots(1, 1, -A)
# For upper bound: x² + x - B ≤ 0
upper_roots = find_quadratic_roots(1, 1, -B)
assert solution(-1, 0) == 2 # -1, 0
assert solution(0, 0) == 2 # -1, 0
assert solution(0, 1) == 2 # -1, 0
assert solution(0, 2) == 4 # -2, -1, 0, 1
assert solution(-5, 5) == 4 # -2, -1, 0, 1
assert solution(0, 6) == 6 # -3, -2, -1, 0, 1, 2
assert solution(6, 6) == 2 # -3, 2
assert solution(3, 6) == 2 # -3, 2
</code></pre>
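<p>For step 2, a sketch of one way to finish the counting without enumerating candidates: <code>x*(x+1)</code> takes the same value for <code>x = n</code> and <code>x = -(n+1)</code>, so it is enough to count non-negative <code>n</code> with <code>n*(n+1) <= limit</code>, which <code>math.isqrt</code> gives exactly (this sidesteps the quadratic-roots route above, and the floating-point issues of <code>sqrt</code>):</p>
<pre class="lang-py prettyprint-override"><code>from math import isqrt

def count_le(limit: int) -> int:
    """Number of integers n >= 0 with n*(n+1) <= limit."""
    if limit < 0:
        return 0
    return (isqrt(4 * limit + 1) - 1) // 2 + 1

def solution(A: int, B: int) -> int:
    if A > B:
        return 0
    # each n >= 0 pairs with -(n+1), which gives the same product
    return 2 * (count_le(B) - count_le(A - 1))

assert solution(-1, 0) == 2
assert solution(0, 2) == 4
assert solution(6, 6) == 2
</code></pre>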
|
<python><math>
|
2025-02-16 16:14:17
| 1
| 402
|
bbasaran
|
79,443,476
| 2,249,312
|
Constant Volume chart in python
|
<p>I want to create a constant volume chart in python. Here is an example with a constant volume of 50 and some sample data:</p>
<pre><code>import pandas as pd
import numpy as np
date_rng = pd.date_range(start='2024-01-01', end='2024-12-31 23:00:00', freq='h')
# Create a dataframe with the date range
df = pd.DataFrame(date_rng, columns=['timestamp'])
# Add the 'price' column with random floating numbers between 70 and 100
df['price'] = np.round(np.random.uniform(70, 100, size=(len(date_rng))), 2)
# Add the 'volume' column with random integers between 1 and 10
df['volume'] = np.random.randint(1, 11, size=(len(date_rng)))
constantvolume = 50
df['cumsum'] = np.cumsum(df['volume'])
df['mod'] = df['cumsum']/ constantvolume
df['whole'] = np.ceil(df['mod'])
df['next_num'] = df['whole'].shift(-1) - df['whole']
df['mod2'] = df[df['next_num'] > 0]['cumsum'] % constantvolume
df['mod2'] = df['mod2'].fillna(0)
dfa = df.groupby(df['whole']).agg({'price': ['min', 'max', 'last', 'first'], 'timestamp': 'first', 'volume': 'sum'})
dfa.columns = ['low', 'high', 'close', 'open', 'timestamp', 'volume']
dfa['timestamp'] = pd.to_datetime(dfa['timestamp'])
dfa.set_index('timestamp', inplace=True)
dfa
</code></pre>
<p>Now this is very close to what I want to do. The only issue is that the volume in each row is not exactly the defined quantity of 50, because the cumsum doesn't always add up to exactly 50.</p>
<p>So what I would have to do is: where next_num > 0, check whether the volume equals the defined constant volume; if yes, fine; if not, split that row into two rows with the same timestamp and the same price, dividing the volume into two parts so that the mod is zero, and then move on.</p>
<p>The desired result is that in the final dataframe the volume equals constantvolume in all rows exactly, with the exception of the last row, where it could be different.</p>
<p>The only way I can think of is a loop, which I don't think is the best way and will be very slow, as the actual dataframe has 1mn rows...</p>
|
<python><pandas>
|
2025-02-16 16:02:31
| 1
| 1,816
|
nik
|
79,443,038
| 971,904
|
How to make viewport auto-scroll to the bottom in Renpy
|
<p>I am currently trying to make a phone texting game. I have got most of the functionality working, except the part where I want the viewport of a screen to automatically scroll to the bottom when a new text comes in. I have already tried using <code>YScrollValue</code>, but Ren'Py doesn't seem to recognize it, so I'm stuck on how to make it auto-scroll.</p>
<p>I had to revert the code back to how it was before and this is what I currently have for my script.rpy and screens.rpy files:</p>
<p><strong>script.rpy</strong></p>
<pre><code># The script of the game goes in this file.
# Declare characters used by this game. The color argument colorizes the
# name of the character.
define e = Character("Testing")
default chat_messages = []
default chat_scroll_pos = 1.0 # Default to bottom
python early:
import enum
class ChatAlign(enum.Enum):
LEFT = 0
RIGHT = 1
# Function to add a message and refresh the screen
def add_message(text, color = "#000000", align = ChatAlign.LEFT):
chat_messages.append((text, color, align))
renpy.restart_interaction() # Forces UI refresh
# The game starts here.
label start:
# Show a background. This uses a placeholder by default, but you can
# add a file (named either "bg room.png" or "bg room.jpg") to the
# images directory to show it.
scene trudy_phone_message
show screen phone_chat
$ add_message("This is a text from the sender", align = ChatAlign.RIGHT)
$ renpy.pause()
$ add_message("This is a text from the receiver", align = ChatAlign.LEFT)
$ renpy.pause()
$ add_message("This is a text from the sender", align = ChatAlign.RIGHT)
$ renpy.pause()
$ add_message("This is a text from the receiver", align = ChatAlign.LEFT)
$ renpy.pause()
$ add_message("This is a text from the sender", align = ChatAlign.RIGHT)
$ renpy.pause()
$ add_message("This is a text from the receiver", align = ChatAlign.LEFT)
$ renpy.pause()
$ add_message("This is a text from the sender", align = ChatAlign.RIGHT)
$ renpy.pause()
$ add_message("This is a text from the receiver", align = ChatAlign.LEFT)
$ renpy.pause()
$ add_message("This is a text from the sender", align = ChatAlign.RIGHT)
$ renpy.pause()
$ add_message("This is a text from the receiver", align = ChatAlign.LEFT)
$ renpy.pause()
$ add_message("This is a text from the sender", align = ChatAlign.RIGHT)
$ renpy.pause()
$ add_message("This is a text from the receiver", align = ChatAlign.LEFT)
$ renpy.pause()
$ add_message("This is a text from the sender", align = ChatAlign.RIGHT)
$ renpy.pause()
$ add_message("This is a text from the receiver", align = ChatAlign.LEFT)
$ renpy.pause()
return
</code></pre>
<p><strong>screens.rpy</strong></p>
<pre><code>screen phone_chat():
frame:
xsize 430
ysize 700
xpos 745
ypos 220
background "#FFFFFF"
viewport:
id "chat_scroll"
draggable True
mousewheel True
ysize 694
vbox:
spacing 10
xalign 0.5
for msg, color, align in chat_messages:
hbox:
xfill True
if align == ChatAlign.LEFT:
frame:
background Frame("gui/leftBubble.png", 15, 15)
xpadding 10
ypadding 5
xfill False
xmaximum 300
xalign 0.0
xmargin 5
text msg color "#000000" size 15
elif align == ChatAlign.RIGHT:
null width 100 # Pushes message to the right
frame:
background Frame("gui/rightBubble.png", 15, 15)
xpadding 10
ypadding 5
xfill False
xmaximum 300
xalign 1.0
xmargin 5
text msg color "#FFFFFF" size 15
vbar:
value YScrollValue("chat_scroll")
xalign 1.0
xsize 3
unscrollable "hide"
</code></pre>
|
<python><renpy>
|
2025-02-16 11:22:19
| 0
| 9,664
|
Danny
|
79,443,020
| 1,485,926
|
Python script to detect stale users in GitHub organizations fails getting members activity
|
<p>I have developed a Python script to detect stale users in GitHub organizations. In particular, a script that gets all the users of a given GitHub organization and prints each user's last activity date.</p>
<p>The script is as follows (explained below):</p>
<pre><code>import requests
def get_github_organization_members(token, organization):
url = f"https://api.github.com/orgs/{organization}/members"
headers = {
"Authorization": f"token {token}",
"Accept": "application/vnd.github.v3+json"
}
members = []
while url:
response = requests.get(url, headers=headers)
if response.status_code == 200:
members_page = response.json()
members.extend(members_page)
url = response.links.get('next', {}).get('url')
else:
print(f"Failed to retrieve members: {response.status_code}")
break
for member in members:
member_login = member['login']
events_url = f"https://api.github.com/users/{member_login}/events/orgs/{organization}"
events_response = requests.get(events_url, headers=headers)
if events_response.status_code == 200:
events = events_response.json()
if events:
last_event = events[0]
last_activity = last_event['created_at']
print(f"{member_login}: Last activity on {last_activity}")
else:
print(f"{member_login}: No recent activity")
else:
print(f"Failed to retrieve events for {member_login}: {events_response.status_code}")
if __name__ == "__main__":
token = "<a given PAT token>"
organization = "<a given GitHub organization>"
get_github_organization_members(token, organization)
</code></pre>
<p>It works as follows:</p>
<ul>
<li>It gets a list of the users using the <a href="https://docs.github.com/en/rest/orgs/members?apiVersion=2022-11-28#list-organization-members" rel="nofollow noreferrer"><code>https://api.github.com/orgs/{organization}/members</code></a> GitHub API method.</li>
<li>For each member in that list, it gets activity information using the <a href="https://docs.github.com/en/rest/activity/events?apiVersion=2022-11-28#list-organization-events-for-the-authenticated-user" rel="nofollow noreferrer"><code>https://api.github.com/users/{member_login}/events/orgs/{organization}</code></a> GitHub API method.</li>
<li>The <code><a given GitHub organization></code> is actually an organization name</li>
<li>The <code><a given PAT token></code> is a GitHub token (as the API methods need authentication). It belongs to the <code>user42</code> (actual user obfuscated) which belongs to the <code><a given GitHub organization></code> organization with "Owner" role.</li>
</ul>
<p>With regards to the <code>user42</code> PAT (Personal Access Token), taking into account above API documentation, it needs following permissions:</p>
<blockquote>
<p>"Events" organization permissions (read)</p>
<p>...</p>
<p>"Members" organization permissions (read)</p>
</blockquote>
<p>So the token is configured in that way:</p>
<p><a href="https://i.sstatic.net/Cbb6tOQr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cbb6tOQr.png" alt="enter image description here" /></a></p>
<p>So far, so good.</p>
<p>But, when I run the script I get this:</p>
<pre><code>Failed to retrieve events for user1: 404
Failed to retrieve events for user2: 404
Failed to retrieve events for ...
Failed to retrieve events for user41: 404
user42: Last activity on 2025-02-15T20:56:16Z
Failed to retrieve events for user42: 404
...
Failed to retrieve events for user79: 404
Failed to retrieve events for user80: 404
</code></pre>
<p>So:</p>
<ul>
<li>The members of the organization are obtained correctly (the organization has 80 users, all them are printed)</li>
<li>Activity information is not obtained for any user, <strong>except</strong> <code>user42</code> (the one to whom the PAT token belongs). All the other cases return a 404.</li>
</ul>
<p>Probably I'm missing something... maybe the PAT token needs some other permissions? Maybe the users have to configure something like "I want my activity to be shared with the <code><a given GitHub organization></code>" in their profiles? Another reason?</p>
<p>In fact, any other way of achieving the goal of detecting stale users in GitHub organizations would suffice. Any feedback in this direction is also very welcome.</p>
|
<python><github><github-api>
|
2025-02-16 11:11:47
| 0
| 12,442
|
fgalan
|
79,442,983
| 13,222,679
|
How to do web scraping using pyspark
|
<p>Hello, I have a question about how to do web scraping and read the response in PySpark.
Here's my code:</p>
<pre><code>import requests
import pyspark
from pyspark.sql.functions import *
from pyspark.sql import SparkSession
r = requests.get('https://www.skysports.com/football-scores-fixtures')
spark=SparkSession.builder.getOrCreate()
df=spark.read.text(r.content)
</code></pre>
<p>But I think I am doing it wrong, so how can I read it with PySpark?</p>
|
<python><apache-spark><web-scraping><pyspark>
|
2025-02-16 10:47:29
| 2
| 311
|
Bahy Mohamed
|
79,442,953
| 545,537
|
Tracing the cause of drf_spectacular.plumbing.UnableToProceedError
|
<p>I have inherited a Django app in which DRF Spectacular was never fully installed. I want to get it working so I can easily see how the API has been set up.</p>
<p>There are quite a few models and serializers, and I suspect a custom field may be the cause of this error I am getting:
<code>drf_spectacular.plumbing.UnableToProceedError</code></p>
<p>There is a stack trace pointing to <code>drf_spectacular/plumbing.py</code>, but all that tells me is that a field type hint could not be generated, not which field or model or whatever caused it.</p>
<p>In the Spectacular settings, I have debugging set up:</p>
<pre><code>SPECTACULAR_SETTINGS = {
'DEBUG': True,
...
</code></pre>
<p>How else can I tell what actually went wrong?</p>
<p>I am trying to generate the schema inside a docker container with this command:
<code>docker compose run --rm site python manage.py spectacular --color --file schema.yml</code></p>
|
<python><django><drf-spectacular>
|
2025-02-16 10:21:20
| 0
| 3,040
|
koosa
|
79,442,836
| 7,290,715
|
Python asyncio not able to run the tasks
|
<p>I am trying to test Python <code>asyncio</code> and <code>aiohttp</code>. The idea is to fetch the data from the API <strong>in parallel</strong> and store the .html files on a local drive. Below is my code.</p>
<pre><code>import asyncio
import aiohttp
import time
import os
url_i = "<some_urls>-"
file_path = "<local_drive>\\asynciotest"
async def download_pep(pep_number: int) -> bytes:
url = url + f"{pep_number}/"
print(f"Begin downloading {url}")
async with aiohttp.ClientSession() as session:
async with session.get(url) as resp:
content = await resp.read()
print(f"Finished downloading {url}")
return content
async def write_to_file(pep_number: int, content: bytes) -> None:
with open(os.path.join(file_path,f"{pep_number}"+'-async.html'), "wb") as pep_file:
print(f"{pep_number}_Begin writing ")
pep_file.write(content)
print(f"Finished writing")
async def web_scrape_task(pep_number: int) -> None:
content = await download_pep(pep_number)
await write_to_file(pep_number, content)
async def main() -> None:
tasks = []
for i in range(8010, 8016):
tasks.append(web_scrape_task(i))
await asyncio.wait(tasks)
if __name__ == "__main__":
s = time.perf_counter()
asyncio.run(main())
elapsed = time.perf_counter() - s
print(f"Execution time: {elapsed:0.2f} seconds.")
</code></pre>
<p>The above code is throwing an error</p>
<pre><code>TypeError: Passing coroutines is forbidden, use tasks explicitly.
sys:1: RuntimeWarning: coroutine 'web_scrape_task' was never awaited
</code></pre>
<p>I am a complete novice with <code>asyncio</code>, hence I don't have a clue. I have followed the <a href="https://us-pycon-2019-tutorial.readthedocs.io/aiohttp_intro.html" rel="nofollow noreferrer">documentation</a> but have not found a solution.</p>
<p>Am I missing something here?</p>
<p><strong>Edit</strong></p>
<p>I am trying to call the APIs sequentially within each concurrent / parallel call. For this I am using asyncio.Semaphore() and restricting the concurrency to 2. I got the clue from <a href="https://stackoverflow.com/questions/56990958/how-to-get-response-time-and-response-size-while-using-aiohttp">here</a> and from the comments below.</p>
<p>I have made the revision in the code below:</p>
<pre><code>async def web_scrape_task(pep_number: int) -> None:
for i in range(8010, 8016):
content = await download_pep(i)
await write_to_file(pep_number, content)
##To limit concurrent call 2##
sem = asyncio.Semaphore(2)
async def main() -> None:
tasks = []
for i in range(8010, 8016):
async with sem:
tasks.append(asyncio.create_task(web_scrape_task(i)))
await asyncio.gather(*tasks)
if __name__ == "__main__":
s = time.perf_counter()
asyncio.run(main())
#await main()
elapsed = time.perf_counter() - s
print(f"Execution time: {elapsed:0.2f} seconds.")
</code></pre>
<p>Now the question is whether this is the correct approach?</p>
|
<python><python-asyncio>
|
2025-02-16 08:51:53
| 1
| 1,259
|
pythondumb
|
79,442,327
| 3,259,222
|
DataFrame values not computed during Dask Delayed.compute
|
<p>I want to make custom Dask task graphs, which consist of operations over Dask DataFrames.</p>
<p><a href="https://docs.dask.org/en/stable/custom-graphs.html" rel="nofollow noreferrer">Here</a>, I see two interfaces with the one constructing a Delayed object from a dictionary being the preferred one, as I hope of being able to request computation of multiple keys which is executed in an optimal way by Dask, e.g. compute common dependencies only once.</p>
<p>After calling .compute() of the Delayed object, the result still has lazy instead of materialised DataFrame results. The results can be materialised with a second .compute() but I wonder if there is a way to avoid this and get all requested keys computed in the first step?</p>
<pre><code>import dask.dataframe as dd
import pandas as pd
from dask.delayed import Delayed
from functools import partial
person = pd.DataFrame({
'name': ['John', 'Jane'],
'age': [30, 25]
})
task_graph = {
'df': (
partial(dd.from_pandas, npartitions=1),
person
),
'person_old': (
lambda x: x.assign(age=x.age*3),
'df'
),
}
request = Delayed(['df','person_old'], task_graph)
result = request.compute()
display(result[0])
display(result[0].compute())
</code></pre>
|
<python><dask>
|
2025-02-15 22:27:12
| 0
| 431
|
Konstantin
|
79,442,105
| 16,869,946
|
Subtle mistake in pandas .apply(lambda g: g.shift(1, fill_value=0).cumsum())
|
<p>I have a dataframe that records the performance of F1-drivers and it looks like</p>
<pre><code>Driver_ID Date Place
1 2025-02-13 1
1 2024-12-31 1
1 2024-11-03 2
1 2023-01-01 1
2 2025-01-13 5
2 2024-12-02 1
2 2024-11-12 2
2 2023-11-12 1
2 2023-05-12 1
</code></pre>
<p>and I want to create a new column <code>Total_wins</code> which counts the number of wins of the driver before today's race, so the desired column looks like</p>
<pre><code>Driver_ID Date Place Total_wins
1 2025-02-13 1 2
1 2024-12-31 1 1
1 2024-11-03 2 1
1 2023-01-01 1 0
2 2025-01-13 5 3
2 2024-12-02 1 2
2 2024-11-12 2 2
2 2023-11-12 1 1
2 2023-05-12 1 0
</code></pre>
<p>And here is my code:</p>
<pre><code>win = (df.assign(Date=Date)
.sort_values(['Driver_ID','Date'], ascending=[True,True])
['Place'].eq(1))
df['Total_wins']=(win.groupby(df['Driver_ID'], group_keys=False).apply(lambda g: g.shift(1, fill_value=0).cumsum()))
</code></pre>
<p>So the code works (mostly) fine. I used mostly because I checked the result manually and most of the results are correct, but for a few rows, it gives wrong results like</p>
<pre><code>Driver_ID Date Place Total_wins
1 2025-02-13 1 2
1 2024-12-31 1 4
1 2024-11-03 2 1
1 2023-01-01 1 0
</code></pre>
<p>I tried to debug it but I couldn't find anything wrong. Is there any subtle mistake in my code that might have caused this? Or what is the possible reason for it? My original dataframe is huge (~150,000 rows).</p>
<p>Thank you so much in advance</p>
|
<python><pandas><dataframe><group-by>
|
2025-02-15 19:13:19
| 1
| 592
|
Ishigami
|
79,442,094
| 7,706,098
|
Does Python read all lines of a file when numpy.genfromtxt() is executed?
|
<p>I have a really large ASCII file (63 million lines or more) that I would like to read using <code>numpy.genfromtxt()</code>. But it is taking up so much memory. I want to know what Python actually does when <code>numpy.genfromtxt()</code> is executed. Does it read all the lines at once?</p>
<p>Look at the below code, for example.</p>
<pre><code>import numpy as np
data = np.genfromtxt("large.file.txt")
</code></pre>
<p>When I execute the code above, would Python read all the contents of <code>large.file.txt</code> and load them into memory? If yes, is there another way of reading a large file line-by-line so that Python would not use as much memory?</p>
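<p>To clarify what I mean by line-by-line: something along these lines, where only one row is held in memory at a time (I just don't know whether this is the right or efficient way to combine it with numpy):</p>
<pre><code>import numpy as np

with open("large.file.txt") as f:
    for line in f:
        values = np.array(line.split(), dtype=float)  # one row at a time
        # ... process `values` here without keeping all rows in memory
</code></pre>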
|
<python><numpy>
|
2025-02-15 19:09:41
| 2
| 301
|
Redshoe
|
79,441,934
| 1,725,553
|
python venv install skips component file "pointer.png"
|
<p>This is a strange issue. I maintain the pi3d python module and it contains this file
<code>github.com/tipam/pi3d/blob/master/src/pi3d/util/icons/pointer.png</code></p>
<p>When I clone the repo locally it has the .png file, but when the package is installed using pip the file seems to be missing. This didn't use to be a problem. Is it something to do with the fact that pip insists on installing to a venv now, i.e. if I made pip install with --no-warn-script-location would it include the missing file?</p>
|
<python><pip><venv>
|
2025-02-15 16:59:47
| 1
| 2,277
|
paddyg
|
79,441,739
| 12,415,855
|
Trying to click on an element in a shadow root?
|
<p>I am trying to click a button inside a shadow root using the following code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
import time
print(f"Checking Browser driver...")
options = Options()
# options.add_argument('--headless=new')
options.add_argument("start-maximized")
options.add_argument('--log-level=3')
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
waitWD = WebDriverWait (driver, 10)
link = "https://www.firmy.cz/kraj-liberecky/liberec/1818-liberec?q="
driver.get (link)
time.sleep(5)
shadowHost = driver.find_element(By.XPATH,'//div[@class="szn-cmp-dialog-container"]')
shadowRoot = shadowHost.shadow_root
time.sleep(5)
shadowRoot.find_element(By.XPATH, '//button[@data-testid="cw-button-agree-with-ads"]').click()
input("Press!")
</code></pre>
<p>But i get the following error:</p>
<pre><code>(selenium) C:\DEVNEU\Fiverr2025\TRY\readingmadness>python test.py
Checking Browser driver...
Traceback (most recent call last):
File "C:\DEVNEU\Fiverr2025\TRY\readingmadness\test.py", line 29, in <module>
shadowRoot.find_element(By.XPATH, '//button[@data-testid="cw-button-agree-with-ads"]').click()
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\shadowroot.py", line 53, in find_element
return self._execute(Command.FIND_ELEMENT_FROM_SHADOW_ROOT, {"using": by, "value": value})["value"]
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\shadowroot.py", line 82, in _execute
return self.session.execute(command, params)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 384, in execute
self.error_handler.check_response(response)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "C:\DEVNEU\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 232, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: invalid locator
(Session info: chrome=133.0.6943.60)
Stacktrace:
GetHandleVerifier [0x00007FF64EE46F15+28773]
(No symbol) [0x00007FF64EDB2600]
(No symbol) [0x00007FF64EC48FAA]
(No symbol) [0x00007FF64EC9F05A]
(No symbol) [0x00007FF64EC9F4FC]
(No symbol) [0x00007FF64EC91CAC]
(No symbol) [0x00007FF64ECC728F]
(No symbol) [0x00007FF64EC91B36]
(No symbol) [0x00007FF64ECC7460]
(No symbol) [0x00007FF64ECEF6F3]
(No symbol) [0x00007FF64ECC7023]
(No symbol) [0x00007FF64EC8FF5E]
(No symbol) [0x00007FF64EC911E3]
GetHandleVerifier [0x00007FF64F19425D+3490733]
GetHandleVerifier [0x00007FF64F1ABA43+3586963]
GetHandleVerifier [0x00007FF64F1A147D+3544525]
GetHandleVerifier [0x00007FF64EF0C9DA+838442]
(No symbol) [0x00007FF64EDBD04F]
(No symbol) [0x00007FF64EDB9614]
(No symbol) [0x00007FF64EDB97B6]
(No symbol) [0x00007FF64EDA8CE9]
BaseThreadInitThunk [0x00007FFC4B4D259D+29]
RtlUserThreadStart [0x00007FFC4C7AAF38+40]
</code></pre>
<p>How can I click the button in the shadow root?</p>
|
<python><selenium-webdriver>
|
2025-02-15 15:01:17
| 1
| 1,515
|
Rapid1898
|
79,441,548
| 418,507
|
Forwarding multiple ports through one channel
|
<p>I have a simple Python channel (a socket on <code>asyncio</code> Streams) and client-server software that uses several TCP ports for its connection. Is it possible to forward this connection through this socket channel so that the software does not notice it?</p>
|
<python>
|
2025-02-15 12:56:57
| 1
| 24,878
|
Bdfy
|
79,441,532
| 72,437
|
How to specify the WhisperX models' storage location?
|
<p>I expected all model files to be downloaded into the <code>/app/.cache</code> folder using the following code, but they were not. Why is this happening? How can I specify the storage location for the models?</p>
<pre><code>import os
import whisperx
os.environ["HF_HOME"] = "/app/.cache"
# Store cached models inside /app/.cache
model = whisperx.load_model("large-v3", device=device, compute_type="int8")
</code></pre>
|
<python><openai-whisper>
|
2025-02-15 12:41:48
| 0
| 42,256
|
Cheok Yan Cheng
|
79,441,474
| 7,215,135
|
pytest isn't catching exception found in Qt Event Loop?
|
<p>I am writing an integration test for saving data disk as JSON. The desired behavior: when a record already exists in the JSON file, the application should raise an exception before overwriting existing JSON records. Since the exception being raised is desired behavior, the integration test should catch this exception, and pass.</p>
<p>Although the error is raised within the Qt Event Loop, it appears that pytest does not catch it. What can I do to ensure that pytest catches the ValueError I raise?</p>
<h2>The Error</h2>
<p>Pytest did not catch the exception...</p>
<pre><code> records = json.loads(database.read_text())["records"].keys()
if record_name in records:
> with pytest.raises(ValueError) as execinfo:
E Failed: DID NOT RAISE <class 'ValueError'>
</code></pre>
<p>... but it does appear in stderr -- caught by the qt event loop.</p>
<pre><code>---------------------------- Captured stderr call -----------------------------
Exceptions caught in Qt event loop:
________________________________________________________________________________
Traceback (most recent call last):
File "<string>", line 3, in save
File "C:\Users\me\AppData\Local\Programs\Python\Python312\Lib\unittest\mock.py", line 1137, in __call__
return self._mock_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\me\AppData\Local\Programs\Python\Python312\Lib\unittest\mock.py", line 1141, in _mock_call
return self._execute_mock_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\me\AppData\Local\Programs\Python\Python312\Lib\unittest\mock.py", line 1202, in _execute_mock_call
result = effect(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\me\Desktop\ProjectDir\.venv\Lib\site-packages\pytest_mock\plugin.py", line 177, in wrapper
r = method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\me\Desktop\ProjectDir\record\home.py", line 132, in save
record.toJSON()
File "c:\Users\me\Desktop\ProjectDir\record\record.py", line 53, in toJSON
raise ValueError(f"record '{self.name}' already exists in database")
ValueError: record 'existing_record_name' already exists in database
</code></pre>
<h2>The Code</h2>
<p>The Integration Test: this snippet is part of a larger integration test. I expect this mouse click signal to trigger a slot method that raises a ValueError.</p>
<pre class="lang-py prettyprint-override"><code>
## code within the test_save_new_record() integration test
records = json.loads(database.read_text())["records"].keys()
if record_name in records:
with pytest.raises(ValueError) as execinfo:
qtbot.mouseClick(home_widget.save_button, Qt.MouseButton.LeftButton) # << exception SHOULD be raised
else:
# rest of test...
</code></pre>
<p>The slot to which the signal is connected:</p>
<pre class="lang-py prettyprint-override"><code>class HomeWidget(QWidget, Ui_widget):
def __init__(self):
self.save_button.clicked.connect(self.save) # <<<
</code></pre>
<p>The body of the slot method:</p>
<pre class="lang-py prettyprint-override"><code>def save(self):
selected_record = self.record_list.currentItem()
if selected_record:
self.record_builder.name_edited = selected_record.text()
record = self.record_builder.build()
record.toJSON() # <<< this raises the exception
</code></pre>
<p>Finally, we come to the method where data is saved to disk, and the exception is raised:</p>
<pre class="lang-py prettyprint-override"><code>def toJSON(self):
# if the record already exists, raise ValueError exception
db = json.loads(self.record_db.read_text(encoding="UTF-8"))
if self.name in db["records"].keys():
raise ValueError(f"record '{self.name}' already exists in database") # <<<
else:
# rest of method ...
</code></pre>
|
<python><qt><pytest><pyside><pyside6>
|
2025-02-15 12:02:42
| 1
| 663
|
David
|
79,441,232
| 7,290,715
|
Parallel OData API calls using python asyncio to extract large data from SAP
|
<p>I am trying to develop code that will call the SAP OData API endpoints multiple times in parallel and fetch data as JSON. Let's assume that we want to send 5 concurrent requests at a time. Now in each request, the OData API attribute values are to be changed dynamically.</p>
<p>Below is the OData API url:</p>
<pre><code>url = f"https://<server_ip>:<port>/" + "sap/opu/odata/sap/ZRSO_BKPF&$format=json&$top={top_value}&$skip = {skip_value}"
</code></pre>
<p>It is clear that <code>top_value</code> and <code>skip_value</code> will change in each API call. So the ideal URLs for the 1st parallel call (containing 5 API endpoints with different <code>top_value</code> and <code>skip_value</code>) would look like:</p>
<pre><code>url_1 = https://<server_ip>:<port>/" + "sap/opu/odata/sap/ZRSO_BKPF&$format=json&$top=1000&$skip = 0" #for 1st call skip value is 0
url_2 = https://<server_ip>:<port>/" + "sap/opu/odata/sap/ZRSO_BKPF&$format=json&$top=1001&$skip = 2000"
url_3 = https://<server_ip>:<port>/" + "sap/opu/odata/sap/ZRSO_BKPF&$format=json&$top=2001&$skip = 3000"
url_4 = https://<server_ip>:<port>/" + "sap/opu/odata/sap/ZRSO_BKPF&$format=json&$top=3001&$skip = 4000"
url_5 = https://<server_ip>:<port>/" + "sap/opu/odata/sap/ZRSO_BKPF&$format=json&$top=4001&$skip = 5000"
</code></pre>
<p>I have developed code, but it does this serially using a for loop.</p>
<pre><code>import requests
from requests.auth import HTTPBasicAuth
import json
import os
num_iter = 5
dataChunkSize = 10000
fldr_to_write = '/local folder/on the drive'
for i in range(1,dataChunkSize*num_iter,dataChunkSize):
    if i == 1:
        data = requests.get(url = url + "&$top = {0}".format(dataChunkSize) + "&$skip=0",headers=headers,auth = HTTPBasicAuth(usr,pwd))
        if data.status_code == 200:
            data_f = json.loads(data.text)
            with open(os.path.join(fldr_to_write,'bkpf_1st.json'),'w',encoding='utf-8') as j:
                json.dump(data_f,j,ensure_ascii=False, indent=4)
    else:
        data = _make_http_call_to_sap(url = url + "&$filter = {0}".format(flt) + "&$top = {0}".format(i) + "&$skip={0}".format(i+999),headers=headers,auth = HTTPBasicAuth(usr,pwd))
        if data.status_code == 200:
            data_f = json.loads(data.text)
            with open(os.path.join(fldr_to_write,'bkpf_{}.json'.format(i)),'w',encoding='utf-8') as j:
                json.dump(data_f,j,ensure_ascii=False, indent=4)
</code></pre>
<p>I want to use <code>asyncio</code> and <code>aiohttp</code> to replicate the above logic and need some guidance on how to do it.</p>
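<p>This is the rough direction I am thinking of (names like <code>fetch_chunk</code> are just mine, and I am not sure the structure is correct):</p>
<pre><code># very rough sketch of the direction I am considering -- needs review
import asyncio
import aiohttp

async def fetch_chunk(session, top_value, skip_value):
    url = ("https://<server_ip>:<port>/sap/opu/odata/sap/ZRSO_BKPF"
           f"&$format=json&$top={top_value}&$skip={skip_value}")
    async with session.get(url) as resp:
        return await resp.json()

async def main():
    auth = aiohttp.BasicAuth(usr, pwd)  # same credentials as in the serial version
    async with aiohttp.ClientSession(auth=auth) as session:
        tasks = [fetch_chunk(session, 1000, skip) for skip in range(0, 5000, 1000)]
        return await asyncio.gather(*tasks)
</code></pre>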
|
<python>
|
2025-02-15 09:03:27
| 0
| 1,259
|
pythondumb
|
79,441,142
| 10,416,012
|
async filter function with generic typing
|
<p>I'm trying to implement the Python built-in <code>filter</code> but async; seems like an easy task, right?</p>
<pre><code>async def simulated_data() -> AsyncIterator[int|None]:
for i in [1, None,3,5]:
yield i
async def afilter[T](predicate, iterable):
async for item in iterable:
        if predicate is not None and predicate(item):
yield item
b = afilter(None, simulated_data())
# or just this!
b = (it async for it in simulated_data() if it is not None)
</code></pre>
<p>Even a comprehension does the trick :D</p>
<p>But what about typing? The type of b still shows "AsyncGenerator[int | None, None]" but it can't be None.</p>
<p>I tried with <code>TypeGuard</code>, but no luck; then I went to the original filter function, because this problem is already solved there.</p>
<pre><code>class filter(Generic[_T]):
@overload
def __new__(cls, function: None, iterable: Iterable[_T | None], /) -> Self: ...
@overload
def __new__(cls, function: Callable[[_S], TypeGuard[_T]], iterable: Iterable[_S], /) -> Self: ...
@overload
def __new__(cls, function: Callable[[_S], TypeIs[_T]], iterable: Iterable[_S], /) -> Self: ...
@overload
def __new__(cls, function: Callable[[_T], Any], iterable: Iterable[_T], /) -> Self: ...
def __iter__(self) -> Self: ...
def __next__(self) -> _T: ...
</code></pre>
<p>Well, it seems filter is not even a function but a generic class; at this point the task doesn't look so easy. Does anyone have the solution (with generic types), by any chance?</p>
|
<python><filter><python-asyncio><generator><python-typing>
|
2025-02-15 07:37:58
| 3
| 2,235
|
Ziur Olpa
|
79,441,048
| 7,290,715
|
Python: Values incremented by a user supplied number in each iteration using a loop
|
<p>Below is my problem statement:
I want to generate the following output using a <code>while</code> loop (or a <code>for</code> loop):</p>
<pre><code>c top skip
1 1000 0
2 1001 2000 --- added 999
3 2001 3000 --- added 999
4 3001 4000 --- added 999
--and so on.
</code></pre>
<p>Let's assume that c is set dynamically.
The requirement is not to create a pandas DataFrame but to print the above lines one by one.</p>
<p>To achieve this, I have applied the below python code:</p>
<pre><code>url = 'http://<ip>:<port>/sap/opu/odata/sap/ZRSO2_BKPF?$format=json&$'
top = 1000
skip = 1999
cnt_v = 3
i=1
while i <=cnt_v:
if i==1:
print(url + 'top = {}'.format(str(top)) + '&$skip = 0')
#break
else:
skip += top + 1
print(url + 'top = {}'.format(str(top)) + '&$skip = {}'.format(str(skip)))
top += top + 1
i+=1
</code></pre>
<p>It is working fine for the first condition (i.e. the first row of the above table), but it breaks from the 3rd row onward, where the skip and top values are populated incorrectly. Please see the output below:</p>
<pre><code>http://<ip>:<port>/sap/opu/odata/sap/ZRSO2_BKPF?$format=json&$top = 1000&$skip = 0
http://<ip>:<port>/sap/opu/odata/sap/ZRSO2_BKPF?$format=json&$top = 1000&$skip = 2000
http://<ip>:<port>/sap/opu/odata/sap/ZRSO2_BKPF?$format=json&$top = 2001&$skip = 3002
http://<ip>:<port>/sap/opu/odata/sap/ZRSO2_BKPF?$format=json&$top = 4003&$skip = 6006
</code></pre>
<p>Am I missing anything here?</p>
<p><em>Edit</em></p>
<p>Following the suggested answer, I have made the changes below and it is working fine.</p>
<pre><code>top = 1000
cnt_v = 10
def output(top, skip):
print(f'{url}{top = }&${skip = }')
for c in range(1, top*cnt_v, 1000):
if c==1:
output(top, 0)
else:
output(c , c + 999)
</code></pre>
|
<python>
|
2025-02-15 06:15:55
| 1
| 1,259
|
pythondumb
|
79,440,957
| 11,953,868
|
How to avoid AttributeError in multiple labels
|
<p>I have DICOM files from different sources and sometimes I get an error like this:</p>
<blockquote>
<p>AttributeError: 'FileDataset' object has no attribute 'PatientAge'</p>
</blockquote>
<p>This happens because some files don't have this attribute.</p>
<p>I've created a function to check objects for errors:</p>
<pre><code>def check_if_exists(self, object_attr):
try:
return str(object_attr)
except:
return 'NULL'
</code></pre>
<p>But I get the error on the setText line, before the code from my function executes:</p>
<pre><code>self.label.setText(self.check_if_exists(ds.PatientAge))
</code></pre>
<p>I have several labels like this, so I wanted to create one function that checks whether each attribute exists, but I'm not sure how to do this. Sometimes I use ds.PatientName or ds.PatientAge, and sometimes ds[0x0022,0x7031].value. Using try/except or hasattr() for each label is redundant and I'm trying to find the best solution.</p>
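<p>For context, the repetitive version I am trying to avoid looks roughly like this (<code>label_2</code> is just a placeholder for another widget):</p>
<pre><code># one try/except (or hasattr) block per label -- this is what I want to avoid
try:
    self.label.setText(str(ds.PatientAge))
except AttributeError:
    self.label.setText('NULL')

try:
    self.label_2.setText(str(ds.PatientName))
except AttributeError:
    self.label_2.setText('NULL')
</code></pre>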
<p>Can you give me some ideas?</p>
|
<python><attributeerror><dicom><pydicom>
|
2025-02-15 04:30:32
| 2
| 2,179
|
James Jacques
|
79,440,929
| 765,271
|
workaround for NotImplementedError: Initial conditions produced too many solutions for constants from dsolve?
|
<p>Is there a workaround for this? Using sympy 1.13.3 with Python 3.13.1, when trying to solve
<code>y'(x)=y(x)^(1/3)</code> with IC <code>y(0)=1</code>, it gives</p>
<blockquote>
<p>NotImplementedError: Initial conditions produced too many solutions
for constants</p>
</blockquote>
<p>Is sympy really not able to solve this, or do I need to use some option or setting? I am new to using sympy for solving ODEs. This ODE is just a quadrature ODE, so I expected no problem solving it in sympy.</p>
<p>Here is the complete code</p>
<blockquote>
<p>python
Python 3.13.1 (main, Dec 4 2024, 18:05:56) [GCC 14.2.1 20240910] on linux</p>
</blockquote>
<pre><code>from sympy import *
x=symbols('x')
y=Function('y')
dsolve(Eq(-y(x)**(1/3) + Derivative(y(x), x),0) , y(x), ics={y(0):1})
</code></pre>
<p>It gives</p>
<pre><code>Traceback (most recent call last):
  File "<python-input-4>", line 1, in <module>
    dsolve(Eq(-y(x)**(1/3) + Derivative(y(x), x),0) , y(x), ics={y(0):1})
    ~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/sympy/solvers/ode/ode.py", line 640, in dsolve
    return _helper_simplify(eq, hint, hints, simplify, ics=ics)
  File "/usr/lib/python3.13/site-packages/sympy/solvers/ode/ode.py", line 709, in _helper_simplify
    solved_constants = solve_ics([s], [r['func']], cons(s), ics)
  File "/usr/lib/python3.13/site-packages/sympy/solvers/ode/ode.py", line 817, in solve_ics
    raise NotImplementedError("Initial conditions produced too many solutions for constants")
NotImplementedError: Initial conditions produced too many solutions for constants
</code></pre>
<p>The solution should be</p>
<pre><code>ode:=diff(y(x),x)=y(x)^(1/3);
dsolve([ode,y(0)=1])
# y(x) = (9 + 6*x)^(3/2)/27
</code></pre>
|
<python><sympy>
|
2025-02-15 03:43:26
| 1
| 13,235
|
Nasser
|
79,440,854
| 6,591,667
|
LLDB running a Python debugging script. I'm getting error: invalid thread
|
<p>I'm trying to run a Python debug script in LLDB, but I'm getting invalid thread errors. I'm not sure why.</p>
<p>Here are my programs:</p>
<p><em><strong>main.c:</strong></em></p>
<pre><code>#include <stdio.h>
#include <stdlib.h>
int addNumbs(int a, int b)
{
return a+b;
}
int main()
{
int a =5;
int b= 24;
int total = addNumbs(a,b);
printf("%d\n", total);
return 0;
}
</code></pre>
<p>My Python debugging script
<em><strong>debugsample.py:</strong></em></p>
<pre><code>import lldb
debugger = lldb.debugger
debugger.HandleCommand("b main")
debugger.HandleCommand("r")
for i in range(0,3): #execute next 3 times
debugger.HandleCommand("n")
debugger.HandleCommand("bt")
</code></pre>
<p>Running LLDB:</p>
<pre><code>clang -g main -o main
lldb ./main
(lldb) command script import debugsample.py
Breakpoint 1: where = main`main + 15 at main.c:11:9, address = 0x0000000100003f0f
Process 5344 launched: '/Users/xxxx/CLionProjects/C_Practice/main' (x86_64)
error: invalid thread
error: invalid thread
error: invalid thread
error: invalid thread
Process 5344 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: 0x0000000100003f0f main`main at main.c:11:9
8
9 int main()
10 {
-> 11 int a =5;
12 int b= 24;
13
14 int* ptr = (int*)malloc(sizeof(int));
Target 0: (main) stopped.
</code></pre>
<p>Note: When I run these commands in LLDB Python's interactive interpreter line by line, it works fine.</p>
|
<python><debugging><lldb>
|
2025-02-15 01:42:00
| 1
| 8,891
|
tadm123
|
79,440,679
| 6,791,342
|
Is there a way to isolate redis-py pipelines so they are not prematurely flushed by async workers?
|
<p>Is there a way to isolate pipelines between jobs or threads?</p>
<p>My Redis wrapper has 2 methods:</p>
<pre><code>def queue_redis_message_on_pipeline(message, pipeline):
pipe = pipeline or get_websocket_redis_client().pipeline()
...
pipe.publish(message)
def execute_redis_pipeline(pipeline):
pipe = pipeline or get_websocket_redis_client().pipeline()
pipe.execute
</code></pre>
<p>It's all the same pipeline singleton. There are async workers that use the same pipeline and may either (A) flush the pipeline prematurely or (B) compound the pipeline and cause a network-connection out-of-memory error. Any suggestions?</p>
|
<python><redis><celery><redis-py>
|
2025-02-14 22:36:59
| 1
| 915
|
baumannalexj
|
79,440,210
| 1,442,731
|
Python: Shutting down child thread when parent dies
|
<p>I have a parent Python task that starts a child task to listen for a USB/BLE response. The problem is that if the parent task dies, the child listener task keeps running and the process has to be killed.</p>
<p>Parent Process:</p>
<pre><code> self.listenerTask = threading.Thread(target=self.listener, name="Listener", args=[interface])
</code></pre>
<p>Listener thread:</p>
<pre><code> def listener(self, interface):
logger.info(f"Listener started, threadid={threading.get_ident()}")
self.event = threading.Event()
while not self.stopListener:
responseMsg = asyncio.run((interface.readMsg(timeout=None)))
...
</code></pre>
<p>Is there any way to catch the parent's death and have it set self.stopListener? Any better way?</p>
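<p>One idea I have considered is simply marking the listener as a daemon thread so it cannot outlive the parent, roughly like this, but I am not sure it is safe given the <code>asyncio.run()</code> call inside the listener:</p>
<pre><code># possible workaround: a daemon thread dies with the process instead of keeping it alive
self.listenerTask = threading.Thread(target=self.listener, name="Listener",
                                     args=[interface], daemon=True)
</code></pre>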
|
<python><multithreading>
|
2025-02-14 17:58:58
| 1
| 6,227
|
wdtj
|
79,440,163
| 1,473,517
|
tqdm, multiprocessing and how to print a line under the progress bar
|
<p>I am using multiprocessing and tqdm to show the progress of the workers. I want to add a line under the progress bar to show which tasks are currently being processed. Unfortunately, whatever I do ends up with this line being printed on top of the progress bar, making a mess. Here is an MWE that shows the problem:</p>
<pre><code>from multiprocessing import Pool, Manager, Value
import time
import os
import tqdm
import sys
class ParallelProcessor:
def __init__(self, shared_data):
self.shared_data = shared_data
def process_task(self, args):
"""Worker function: Simulates task processing and updates progress"""
lock, progress, active_tasks, index, integer_arg = args
pid = os.getpid()
core_id = index % len(os.sched_getaffinity(0))
os.sched_setaffinity(pid, {core_id})
with lock:
active_tasks.append(f"Task {index+1}")
time.sleep(2) # Simulate processing time
with lock:
active_tasks.remove(f"Task {index+1}")
progress.value += 1
return self.shared_data
def progress_updater(self, total_tasks, progress, active_tasks):
"""Update tqdm progress bar and active task list on separate lines"""
sys.stdout.write("\n") # Move to the next line for active task display
sys.stdout.flush()
with tqdm.tqdm(total=total_tasks, desc="Processing Tasks", position=0, leave=True) as pbar:
while pbar.n < total_tasks:
time.sleep(0.1) # Update interval
pbar.n = progress.value
pbar.refresh()
# Move cursor down to the next line and overwrite active task display
sys.stdout.write("\033[s") # Save cursor position
sys.stdout.write(f"\033[2K\rActive: {', '.join(active_tasks[:5])}") # Clear line and print active tasks
sys.stdout.write("\033[u") # Restore cursor position
sys.stdout.flush()
def run_parallel(self, tasks, num_cores=None):
"""Runs tasks in parallel with a progress bar"""
num_cores = num_cores or len(os.sched_getaffinity(0))
manager = Manager()
lock = manager.Lock()
progress = manager.Value("i", 0) # Shared integer for progress tracking
active_tasks = manager.list() # Shared list for active tasks
# Start progress updater in the main process
from threading import Thread
progress_thread = Thread(target=self.progress_updater, args=(len(tasks), progress, active_tasks))
progress_thread.start()
# Prepare task arguments
task_args = [(lock, progress, active_tasks, idx, val) for idx, val in enumerate(tasks)]
# Run parallel tasks
with Pool(num_cores) as pool:
results = pool.map(self.process_task, task_args)
# Ensure progress bar finishes
progress_thread.join()
print("\n") # Move to the next line after processing
return results
if __name__ == "__main__":
processor = ParallelProcessor(shared_data=10)
processor.run_parallel(tasks=range(40), num_cores=4)
</code></pre>
|
<python><multiprocessing><tqdm>
|
2025-02-14 17:40:09
| 1
| 21,513
|
Simd
|
79,440,067
| 219,159
|
Retrieve server certificate in urllib3
|
<p>In Python's urllib3, can one retrieve the server certificate after making a successful HTTPS request? If so, how?</p>
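<p>For example, after a request along these lines, I would like to get at the certificate the server presented (the URL is just a placeholder):</p>
<pre><code>import urllib3

http = urllib3.PoolManager()
resp = http.request("GET", "https://example.com/")
# at this point, how can I access the server certificate used for this connection?
</code></pre>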
|
<python><https><urllib3>
|
2025-02-14 17:00:39
| 1
| 61,826
|
Seva Alekseyev
|
79,439,971
| 2,648,504
|
Pandas - only select rows containing a substring in a column
|
<p>I'm using something similar to this as <code>input.txt</code></p>
<pre><code>header
040525 $$$$$ 9999 12345
random stuff
040525 $$$$$ 8888 12345
040525 $$$$$ 7777 12345
random stuff
040525 $$$$$ 6666 12345
footer
</code></pre>
<p>Due to the way this input is being pre-processed, I cannot use pd.read_csv correctly. I must first create a list from the input, then create a DataFrame from the list.</p>
<pre><code>data_list = []
with open('input.txt', 'r') as data:
for line in data:
data_list.append(line.strip().split())
df = pd.DataFrame(data_list)
</code></pre>
<p>I only want to append lines that contain '$$$' in the second column. Desired output would be:</p>
<pre><code> 0 1 2 3
0 40525 $$$$$ 9999 12345
1 40525 $$$$$ 8888 12345
2 40525 $$$$$ 7777 12345
3 40525 $$$$$ 6666 12345
</code></pre>
|
<python><pandas>
|
2025-02-14 16:21:47
| 4
| 881
|
yodish
|
79,439,943
| 6,345,968
|
how to use/accept single line of suggested code in Google Colab
|
<p><strong>Problem Description</strong><br />
Google Colab gives suggestions on what your code might be, and you can hit <em>tab</em> to insert the suggested code. The thing is, often I don't need all of the suggested code, I just need a single line. Is there a command to get just the suggested code for the remaining line instead of <em>all</em> of the suggested code?</p>
<p><strong>Example of problem</strong><br />
I need to access a dataset from <em>sklearn</em>. First I get all the datasets, then I get access to a specific one using the line:</p>
<pre><code>from sklearn import datasets
iris = datasets.load_iris()
</code></pre>
<p>When I type in "iris" at the beginning of the line, code suggestions pop up:
<a href="https://i.sstatic.net/BOOJ9kFz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOOJ9kFz.png" alt="enter image description here" /></a></p>
<p>After typing in "iris" at the beginning of line 3, and getting the suggestions, I can hit <em>tab</em> to add the suggested code. However, I don't want <strong>all</strong> of the suggested code! I just want the suggested code on line 3!</p>
<p>Is there a command to just get the suggested line, not all of the code?</p>
|
<python><autocomplete><google-colaboratory>
|
2025-02-14 16:08:11
| 0
| 347
|
Bob
|
79,439,706
| 254,343
|
Is it possible to add a custom element locator strategy to Selenium (Python)
|
<p>I am writing a test and would like to replicate the user's workflow -- users don't go by CSS selectors, element names, or XPath queries, all they can see is what the user agent renders. To that end, I would like to find an element on the page based on its [rendered] <em>content</em>, and being case-insensitive at that (since that's what more closely resembles how the user will approach it). I'd accept the equivalent of the <code>//label[contains(lower-case(text()), "username")]</code> XPath 2.0 expression.</p>
<p>Since Selenium appears to defer XPath query execution to the driven browser, where my testing indicates it's only ever XPath 1.0, I can't use XPath 2.0 functions like <code>lower-case</code> or <code>matches</code>, in order to find an element that has the desired content.</p>
<p>None of the other locator strategies (e.g. using a CSS selector) match capabilities of XPath, certainly none of them allow me to find an element based on its <em>content</em>.</p>
<p>I thought if it were possible to write a custom locator strategy somehow, that would allow me to use it with the <code>find_element</code> procedure, but I can't find much information on whether that is even possible, let alone if it's possible with specifically the <code>selenium</code> Python module provided by Selenium, or how to accomplish it in the first place.</p>
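<p>Ideally I would like to end up with something like the following, where the "content" strategy is entirely hypothetical; that is exactly what I am asking how to register:</p>
<pre><code># hypothetical usage -- "content" is not a real locator strategy today
label = driver.find_element("content", "username")
# i.e. any element whose rendered text contains "username", case-insensitively
</code></pre>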
<p>Is it possible? How can it be done, if so? Alternatively, am I solving the wrong problem?</p>
|
<python><selenium-webdriver>
|
2025-02-14 14:36:36
| 1
| 9,260
|
Armen Michaeli
|
79,439,634
| 6,077,239
|
How can matrices be stored efficiently in a Polars DataFrame so that matrix operations can be performed on them?
|
<p>I have a lot of <code>dates</code>. For each <code>date</code>, there is a vector <code>v</code> (with length <code>n</code>) and a square matrix <code>M</code> (with dimension <code>n</code> by <code>n</code>). <code>v</code>, <code>M</code> and <code>n</code> will vary by <code>date</code> in terms of both values and lengths/dimensions.</p>
<p>The task is for each date, I want to perform a matrix operation to generate a constant scalar - <code>transpose(v) * M * v</code>.</p>
<p>The naïve way is to do this through a for loop, but the computation time will be huge given it is sequential.</p>
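<p>For illustration, the naïve sequential version I want to avoid looks roughly like this (assuming the per-date data is kept in plain dicts of NumPy arrays keyed by date):</p>
<pre><code>import numpy as np

# v_by_date: dict[date, np.ndarray of shape (n,)]
# M_by_date: dict[date, np.ndarray of shape (n, n)]
results = {}
for date, v in v_by_date.items():
    M = M_by_date[date]
    results[date] = float(v @ M @ v)  # transpose(v) * M * v
</code></pre>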
<p>I am wondering if I can store all information into a single <strong>Polars</strong> <code>Dataframe</code> such that I can do something like <code>df.group_by("date").agg(...)</code> which is parallel and efficient.</p>
<p>To give a concrete example:</p>
<ul>
<li>For <code>date 2020-01-01</code>, <code>v = [1, 2]</code>, <code>M = [[1, 2], [3, 4]]</code>, the indices for v and M are <code>["a", "b"]</code>.</li>
<li>For <code>date 2020-02-01</code>, <code>v = [1, 2, 3]</code>, <code>M = [[1, 2, 3], [3, 4, 5], [4, 5, 6]]</code>, the indices for v and M are <code>["a", "b", "c"]</code>.</li>
<li>For <code>date 2020-03-01</code>, <code>v = [1, 3, 5]</code>, <code>M = [[1, 5, 9], [2, 4, 6], [3, 6, 9]]</code>, the indices for v and M are <code>["b", "d", "e"]</code>.</li>
<li>Result:</li>
</ul>
<p><a href="https://i.sstatic.net/yrtkYU40.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrtkYU40.png" alt="enter image description here" /></a></p>
|
<python><python-polars><polars>
|
2025-02-14 14:09:52
| 2
| 1,153
|
lebesgue
|
79,439,301
| 16,383,578
|
How to get all Unicode ranges a font has glyphs for and the exceptions?
|
<p>I am trying to write a Python program to convert Markdown as used by Stack Exchange directly into images. The logic of Markdown is quite simple, I find MathJax to be challenging to render, but I have found <a href="https://www.bearnok.com/grva/it/knowledge/software/mathjax" rel="nofollow noreferrer">this</a>.</p>
<p>But I have trouble determining which font to use for a given character. I want to have total control over which individual character is rendered using which font. More specifically, I like this <a href="https://fonts.google.com/specimen/Andika" rel="nofollow noreferrer">Andika</a> font. It is sans-serif, and most sans-serif fonts are guilty of not distinguishing UPPER-CASE i from lower-case L; in Andika there are two bars on the UPPER-CASE i, and its lower-case a has no "hair" on its head...</p>
<p>And it supports Greek and Cyrillic scripts too, but sadly it doesn't support Hiragana, Katakana, Hangul, Chinese etc.
I will use the script to "serialize" text from many different languages, predominantly the text is in English, but it frequently contains whole lyrics of songs in other languages. I plan to use Andika for European languages and <a href="https://fonts.google.com/noto/specimen/Noto+Sans+SC" rel="nofollow noreferrer">Noto Sans SC</a> for East Asian script, but neither of them supports emojis, for that I need yet another <a href="https://fonts.google.com/noto/specimen/Noto+Color+Emoji" rel="nofollow noreferrer">font</a>.</p>
<p>I need a way to determine whether a chosen font has a glyph for a given character. This has to be done for every single character: a character will first be checked against Andika, with fallbacks to Noto Sans SC and then Noto Color Emoji, and if all three fail an exception needs to be raised which will be handled...</p>
<p>For English text this is simple enough, just <code>ord(c) < 128</code>, but to efficiently check if a given font supports a UNICODE codepoint is hard.</p>
<p>I have decided on the following method. First, the notation <code>(a, b)</code> specifies a range of integers with a >= 0 and b >= a, and both ends included; for example, (0, 0) is a valid range containing only 0, and of course (5, 9) contains (5, 6, 7, 8, 9).</p>
<p>Then the task is to find all consecutive ranges in a font, and do binary search for x in the list of starts, and then check if x is inside the corresponding range.</p>
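<p>The membership test itself would then be something like this (a minimal sketch, assuming <code>ranges</code> is the sorted list of (start, end) tuples produced below and <code>starts</code> holds their left ends):</p>
<pre><code>from bisect import bisect_right

def has_glyph(ranges, starts, codepoint):
    i = bisect_right(starts, codepoint) - 1  # last range starting at or before codepoint
    return i >= 0 and ranges[i][0] <= codepoint <= ranges[i][1]
</code></pre>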
<p>I did it with the following (you need <a href="https://pypi.org/project/fonttools/" rel="nofollow noreferrer">fontTools</a> and <a href="https://pypi.org/project/unicodedata2/" rel="nofollow noreferrer">unicodedata2</a>):</p>
<pre><code>import unicodedata2
from bisect import bisect_right
from fontTools.ttLib import TTFont
def get_glyphs_in_font(font_path):
return sorted(
{code for cmap in TTFont(font_path)["cmap"].tables for code in cmap.cmap}
)
def group_consecutive(lst):
result = []
start = end = -1e309
for i in lst:
if i == end + 1:
end = i
else:
if end != -1e309:
result.append((start, end))
start = end = i
result.append((start, end))
return result
def get_glyph_ranges(font_path):
return group_consecutive(get_glyphs_in_font(font_path))
</code></pre>
<p>Its output is correct, but it has 129 elements in the output for <code>"Andika-Regular.ttf"</code>. It would be inefficient to do this for every single character.</p>
<p>Also, UNICODE has a bunch of control characters, reserved characters, blank characters etc. they are all non-printable so I call all of them invalid, and the following finds all of them:</p>
<pre><code>INVALID_UNICODE = [
i
for i in range(1114112)
if unicodedata2.category(chr(i)) in {"Cc", "Cf", "Cn", "Co", "Cs", "Zl", "Zp", "Zs"}
]
INVALID_UNICODE_SET = set(INVALID_UNICODE)
</code></pre>
<p>Now let me explain the rules: let (a, b) (c, d) be two ranges, d >= c > b >= a, now if any of the following is a subset of INVALID_UNICODE: (b, c), (b + 1, c), (b + 1, c - 1), (b, c - 1), then the two ranges should combine to form (a, d). Now if the above doesn't apply, but c / b <= 1.05, then they should still combine, but now all <strong>valid</strong> codepoints in the range (b + 1, c - 1) should be accreted as exceptions.</p>
<p>For example, given the first 10 elements of <code>get_glyphs_in_font("Andika-Regular.ttf")</code>:</p>
<pre><code>[(32, 126),
(160, 328),
(330, 800),
(803, 831),
(838, 879),
(903, 903),
(915, 916),
(920, 920),
(926, 926),
(928, 928),
]
</code></pre>
<p>The first is adjacent to the range (0, 32) so the first iteration should have (0, 126), after that, (127, 160) are all invalid, so it should combine the second range to form (0, 328). Then there is a discontinuity between (0, 328) and (330, 800), but according to the second rule they should be joined and [329] be noted as an exception.</p>
<p>The outcome for the example should be:</p>
<pre><code>((0, 928),
[329,
801,
802,
832,
833,
834,
835,
836,
837,
880,
881,
882,
883,
884,
885,
886,
887,
890,
891,
892,
893,
894,
895,
900,
901,
902,
904,
905,
906,
908,
910,
911,
912,
913,
914,
917,
918,
919,
921,
922,
923,
924,
925,
927])
</code></pre>
<p>I had tried this half a dozen times but couldn't get the intended result. This is my latest attempt, it is both smart and stupid at the same time:</p>
<pre><code>def get_glyph_groups1(font_path):
ranges = group_consecutive(get_glyphs_in_font(font_path))
result = {}
cur_start, cur_end = 0, 32
cur_missing = []
for start, end in ranges:
s1 = cur_end + cur_end not in INVALID_UNICODE_SET
e1 = start - start not in INVALID_UNICODE_SET
i1 = bisect_right(INVALID_UNICODE, e1)
i2 = bisect_right(INVALID_UNICODE, s1)
if i1 - i2 == e1 - s1 and bool(i1 - i2) == bool(e1 - s1):
cur_end = end
continue
if start / cur_end <= 1.05:
cur_missing.extend(range(cur_end + 1, start))
cur_end = end
else:
result[(cur_start, cur_end)] = cur_missing
cur_start, cur_end = start, end
cur_missing = []
result[(cur_start, cur_end)] = cur_missing
return result
</code></pre>
<p>Needless to say it doesn't work:</p>
<pre><code>In [36]: get_glyph_groups1("C:/Windows/Fonts/Andika-Regular.ttf")
Out[36]: {(0, 122654): []}
</code></pre>
<p>How can I make it work?</p>
<hr />
<p>I have changed my code to this:</p>
<pre><code>INVALID_UNICODE = [
i
for i in range(1114112)
if unicodedata2.category(chr(i)) in {"Cc", "Cf", "Cn", "Co", "Cs", "Zl", "Zp", "Zs"}
]
INVALID_UNICODE_RANGES = group_consecutive(INVALID_UNICODE)
INVALID_UNICODE_STARTS = [start for start, _ in INVALID_UNICODE_RANGES]
INVALID_UNICODE_RANGES = dict(INVALID_UNICODE_RANGES)
def should_combine(a, b):
i = bisect_right(INVALID_UNICODE_STARTS, a)
start = INVALID_UNICODE_STARTS[max(i - 1, 0)]
end = INVALID_UNICODE_RANGES[start]
if not start <= a <= end:
i = bisect_right(INVALID_UNICODE_STARTS, a + 1)
start = INVALID_UNICODE_STARTS[max(i - 1, 0)]
end = INVALID_UNICODE_RANGES[start]
if not start <= a <= end:
return False
return start <= b - 1 <= end if not start <= b <= end else True
def get_glyph_groups3(font_path):
ranges = group_consecutive(get_glyphs_in_font(font_path))
result = {}
cur_start, cur_end = 0, 32
cur_missing = []
for start, end in ranges:
if should_combine(cur_end, start):
cur_end = end
continue
if start / cur_end <= 1.05:
cur_missing.extend(range(cur_end + 1, start))
cur_end = end
else:
result[(cur_start, cur_end)] = cur_missing
cur_start, cur_end = start, end
cur_missing = []
result[(cur_start, cur_end)] = cur_missing
return result
</code></pre>
<p>But it still doesn't work:</p>
<pre><code>In [56]: get_glyph_groups3("C:/Windows/Fonts/Andika-Regular.ttf").keys()
Out[56]: dict_keys([(0, 126), (160, 1327), (6832, 6842), (7424, 10217), (11360, 11835), (42752, 43877), (61744, 67726), (119820, 122654)])
</code></pre>
<p>The first one should be <code>(0, 1327)</code>. The output is closer to what I wanted, but sadly I cannot verify the correctness of the output.</p>
|
<python><algorithm><unicode><fonts>
|
2025-02-14 11:56:56
| 1
| 3,930
|
Ξένη Γήινος
|
79,439,244
| 2,125,540
|
Python Markdown extension for shifting headings
|
<p>I'm trying to inject markdown notes into templated pages. To do so, I need to shift the headings for the page to be consistent:</p>
<ul>
<li><code># bla</code> -> <code><h3>bla</h3></code>,</li>
<li><code>## foo</code> -> <code><h4>foo</h4></code>.</li>
</ul>
<p>I think the simplest way is to add <strong>#</strong> characters on the fly at the beginning of the heading lines with a Markdown preprocessor.</p>
<p>So I'm trying to write the following markdown extension <code>ShiftHeading</code>:</p>
<pre class="lang-py prettyprint-override"><code>from markdown.extensions import Extension
from markdown.preprocessors import Preprocessor
import re
SHIFT_HEADING_RE = r'^(#+ )'
class ShiftHeadingPreprocessor(Preprocessor):
""" Shift the level of headings by adding one or several '#' """
def __init__(self, shift=0, **kwargs):
self.config = {
'SHIFT' : [shift, 'shift level'],
}
self.shift = "#"*shift
super(ShiftHeadingPreprocessor, self).__init__(**kwargs)
def run(self, lines):
new_lines = []
for line in lines:
new_lines = re.sub(SHIFT_HEADING_RE, self.shift + r"\1", line)
return new_lines
class ShiftHeading(Extension):
def __init__(self, shift=0, **kwargs):
self.config = {
'shift' : [shift, 'shift level'],
}
super(ShiftHeading, self).__init__(**kwargs)
def extendMarkdown(self, md):
md.preprocessors.register(ShiftHeadingPreprocessor( shift=self.getConfig('shift', 0)), 'shift_heading', 175)
</code></pre>
<p>And here is my test script :</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*
text = """
# Heading 1
*Ceci* est du texte
## Heading 2
Un nouveau paragraphe
"""
from md_extension.shiftHeader import ShiftHeading
import markdown
md = markdown.Markdown( extensions=[ ShiftHeading(shift=1) ], )
print(md.convert(text))
</code></pre>
<p>And I get... nothing at all, when the expected output would have been:</p>
<pre class="lang-html prettyprint-override"><code><h2>Heading 1</h2>
<p><em>Ceci</em> est du texte</p>
<h3>Heading 2</h3>
<p>Un nouveau paragraphe</p>
</code></pre>
<p>Without the extension :</p>
<pre class="lang-py prettyprint-override"><code>md = markdown.Markdown()
</code></pre>
<p>I get a result as expected :</p>
<pre class="lang-html prettyprint-override"><code><h1>Heading 1</h1>
<p><em>Ceci</em> est du texte</p>
<h2>Heading 2</h2>
<p>Un nouveau paragraphe</p>
</code></pre>
<p>Clearly I have not understood the <a href="https://python-markdown.github.io/extensions/api/" rel="nofollow noreferrer">extension API</a> or the <a href="https://github.com/Python-Markdown/markdown/wiki/Tutorial-1---Writing-Extensions-for-Python-Markdown" rel="nofollow noreferrer">tutorial</a>...</p>
<p>Any solutions? Thanks</p>
|
<python><markdown>
|
2025-02-14 11:35:49
| 0
| 363
|
FrViPofm
|
79,439,132
| 769,486
|
sqlalchemy: associationproxy with custom setter
|
<p>I’m trying to build a model with a relationship being represented as a string key, but changes to this key should create or load instances instead of modifying the property.</p>
<p>Currently I am attempting to do this using an association proxy, as it does <em>almost</em> exactly what I want.</p>
<pre class="lang-py prettyprint-override"><code>class Category(db.Model):
id: orm.Mapped[int] = orm.mapped_column(primary_key=True)
name: orm.Mapped[str] = orm.mapped_column(sa.String(255), unique=True)
@classmethod
def load_or_create(cls, name):
cls.query.filter_by(name=name).first() or cls(name=name)
class Thing(db.Model):
id: orm.Mapped[int] = orm.mapped_column(primary_key=True)
category_id: orm.Mapped[int] = orm.mapped_column(sa.ForeignKey("category.id"))
category: orm.Mapped["Category"] = orm.relationship()
category_name = sqlalchemy.ext.associationproxy.association_proxy("category", "name", creator=Category.load_or_create)
</code></pre>
<p>Ideally this would fulfill the following test case:</p>
<pre class="lang-py prettyprint-override"><code>def test_category_name():
thing = Thing(category_name="foo")
assert thing.category.name == "foo" # So far so good
foo_category = thing.category
thing.category_name = "bar"
# This fails because the category name of the foo_category is edited instead.
assert thing.category is not foo_category
</code></pre>
<p>It seems that there is a way to <a href="https://docs.sqlalchemy.org/en/20/orm/extensions/associationproxy.html#sqlalchemy.ext.associationproxy.AssociationProxy.getset_factory" rel="nofollow noreferrer">pass a <code>getset_factory</code></a> but the documentation doesn’t give any hint on how to use it and I was unable to figure it out from reading the code.</p>
<p>Is there a way to make the AssociationProxy always use the creator function instead of setting the property? Is there a better way to achieve what I want here?</p>
|
<python><sqlalchemy>
|
2025-02-14 10:53:45
| 0
| 956
|
zwirbeltier
|
79,438,961
| 6,727,914
|
Is there a way to use symlink in site-packages to reduce disk space usage?
|
<p>Every time I create a new project, it copies the entire package codebase into the project folder, which I find wasteful. For example, I don't want each of my projects to occupy 1 gigabyte worth of Tensorflow v2.8 disk space.</p>
<p>In other languages, we can easily avoid this. For instance, in Node.js, we can use <code>pnpm</code> or <code>yarn berry</code>. In Dart, this functionality is built-in by default. They utilize a global cache directory and the engine refers directly to <code>~/cache/.some-language/some-package/version/files</code> (or its symlink in <code>pnpm</code>).</p>
<p>However, I can't seem to find a way to do this in Python. I read about the dozens of Python package managers and I tried using <code>uv</code> because they advertise on their GitHub page:</p>
<blockquote>
<p>💾 Disk-space efficient, with a global cache for dependency deduplication.</p>
</blockquote>
<p>Unfortunately, it turns out that this was completely false and misleading. I tried it, and it only caches the package to reduce network usage the next time it is installed, but it still copies an entire instance into each project's <code>.venv</code> folder. This does not improve disk space usage at all.</p>
<p>For example:</p>
<pre class="lang-none prettyprint-override"><code>myproject/.venv/site-packages % du -sh * | sort -hr | head -20
1.0G tensorflow
326M jaxlib
106M mediapipe
106M cv2
81M scipy
72M clang
46M numpy
</code></pre>
<p>no-go answers:</p>
<ul>
<li>using a global interpreter and installing packages on that</li>
<li>reusing environments (it slows down indexing of projects that don't require heavy packages like Tensorflow)</li>
</ul>
|
<python>
|
2025-02-14 09:50:16
| 3
| 21,427
|
TSR
|
79,438,809
| 9,686,427
|
Is is possible to define class methods in multiple cells in Jupyter notebooks?
|
<p>Say that I am creating a class with multiple methods on an <code>ipynb</code>:</p>
<pre><code>class Example:
def method1(self):
print("Hello!")
def method2(self):
print("Bye!")
</code></pre>
<p>What should I do if I want to include a Markdown in between both methods? Simply inserting the markdown will lead to an error, as <code>method2(self)</code> will then lie in a code cell without any class declaration.</p>
<hr />
<p><strong>Context:</strong> I love the fact that I can use LaTeX to explain portions of my code in Jupyter notebooks, yet often I find that in order to explain the mathematics of the methods of a class I have to either add the LaTeX code before or after <em>all</em> methods have been defined. It would be more elegant to write the code of the first method followed by its explanation, then write the code for the second method and so on.</p>
|
<python><python-3.x><jupyter-notebook>
|
2025-02-14 08:49:50
| 1
| 484
|
Sam
|
79,438,588
| 843,400
|
Fixing Python import paths for a package in src used both from entry points outside of src and as a dependency in other packages?
|
<p>I have a model package that runs in Sagemaker. It's structure looks something like this (domain-specific stuff redacted):</p>
<pre><code><My project root>
--> src
|--> potato
|----> a bunch of nested modules
|--> utils
|----> some modules
|--> datastuff
|----> more nested modules
--> test/
| ....
--> entrypoint1.py
--> entrypoint2.py
--> config and other stuff
</code></pre>
<p>Right now, the code is filled with a bunch of <code>from src.potato.some_module import some_class, some_method</code> etc. , both from code within <code>src</code> and from the entrypoints which aren't in src. This was working fine until now, but now we are trying to vend stuff within <code>potato</code> and <code>datastuff</code> to other packages (by publishing this package to a CA repo). Problems cropped up because in those dependent packages, I try to import something like <code>from potato.blarg import some_method </code>, but get errors about being unable to find a module called <code>src.datastuff.some_module</code> that exists in the blarg module's imports.</p>
<p>So my next step was to try and get rid of all the <code>src</code> pieces from the imports in the potato and datastuff packages. When I did this, VSCode was resolving the imports fine. But as soon as I tried to actually run either of the entrypoints (which live outside of src), I get the error:</p>
<pre><code>Traceback (most recent call last):
File "<...>/entrypoint.py", line 1, in <module>
from src.potato.cli.blarg_cli import blarg_main
File "<...>/src/potato/cli/blarg_cli.py", line 10, in <module>
from potato.cli.cli_utils import ( <--- this used to be src.potato.<...>
ModuleNotFoundError: No module named 'potato'
</code></pre>
<p>I don't think I'm understanding something about how the imports are supposed to work and how I can fix this issue. I have considered doing a refactoring and moving the entrypoints into src, or moving stuff out of <code>src</code>, etc. but the team I am working on really wants to keep the structure like this (with the code in src, and the entrypoints out of it) and I think there are valid reasons to keep things this way, so I'm feeling a bit stuck and trying to see if I can make this work with a minimal change.</p>
<p>Any help here would be much appreciated!</p>
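<p>The only stop-gap I have sketched so far (not what I want long term, and untested) is to make <code>src</code> importable at the top of each entrypoint, so that <code>import potato</code> resolves without the <code>src.</code> prefix. This assumes the entrypoints stay next to <code>src/</code> as in the layout above:</p>
<pre><code># entrypoint1.py (sketch)
import sys
from pathlib import Path

# assumption: the entrypoints sit next to src/, so prepending src/ to sys.path
# lets `from potato... import ...` work both here and inside the packages
sys.path.insert(0, str(Path(__file__).resolve().parent / "src"))

from potato.cli.blarg_cli import blarg_main
</code></pre>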
|
<python><amazon-sagemaker>
|
2025-02-14 07:01:50
| 0
| 3,906
|
CustardBun
|
79,438,495
| 7,498,328
|
How to speed up Python for highlighting cells for an Excel spreadsheet conditionally?
|
<p>I have the following Python code which tries to color rows of an Excel spreadsheet conditionally upon the values of the columns. Due to the number of rows, the run time is very slow at more than <code>30 mins</code>. I am wondering if there are ways to make this much faster? Thanks.</p>
<pre><code>import openpyxl
from openpyxl.styles import PatternFill, Font
import time
import os
from concurrent.futures import ThreadPoolExecutor
# Create sample data
wb = openpyxl.Workbook()
ws = wb.active
# Add headers
headers = ["ID", "Type", "Value"]
for col, header in enumerate(headers, 1):
ws.cell(row=1, column=col, value=header)
# Add 100000 rows of sample data
for row in range(2, 100002):
ws.cell(row=row, column=1, value=row-1)
ws.cell(row=row, column=2, value="Type 1" if row % 3 == 0 else
"Type 2" if row % 3 == 1 else "Type 3")
ws.cell(row=row, column=3, value=f"Value {row-1}")
# Define fills
fills = {
"Type 1": PatternFill(start_color="FFF2CC", end_color="FFF2CC", fill_type="solid"),
"Type 2": PatternFill(start_color="DBEEF4", end_color="DBEEF4", fill_type="solid"),
"Type 3": PatternFill(start_color="FFC0CB", end_color="FFC0CB", fill_type="solid")
}
# Loop approach
start = time.perf_counter()
for row_idx in range(2, ws.max_row + 1):
category = ws.cell(row=row_idx, column=2).value
fill = fills.get(category)
if fill:
for cell in ws[row_idx]:
cell.fill = fill
cell.font = Font(bold=True)
print(f"Run time: {time.perf_counter() - start:.2f} seconds")
wb.save("output.xlsx")
</code></pre>
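<p>One variation I have been considering (untested for speed, so I do not know how much it actually saves) is to apply the styles while the cells are created, reusing a single bold <code>Font</code> object, instead of looping over every row a second time. This assumes the <code>fills</code> dict is defined before the data loop:</p>
<pre><code>bold = Font(bold=True)
for row in range(2, 100002):
    category = "Type 1" if row % 3 == 0 else "Type 2" if row % 3 == 1 else "Type 3"
    fill = fills[category]
    # write value, fill and font in a single pass over each row
    for col, value in enumerate((row - 1, category, f"Value {row - 1}"), start=1):
        cell = ws.cell(row=row, column=col, value=value)
        cell.fill = fill
        cell.font = bold
</code></pre>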
|
<python><excel><openpyxl>
|
2025-02-14 06:02:08
| 2
| 2,618
|
user321627
|
79,438,489
| 86,072
|
What is the correct way to please the typechecker for a '(bytes | str) -> str' function?
|
<p>I have the following code:</p>
<pre><code>def from_utf8(string: bytes | str) -> str:
if isinstance(string, bytes):
return string.decode("utf-8")
else:
return string # <- type warning on this line
</code></pre>
<p>pylance gives me a type warning on the <code>return string</code> line:</p>
<pre><code>Type "bytearray | memoryview[_I@memoryview] | str" is not assignable to return type "str"
Type "bytearray | memoryview[_I@memoryview] | str" is not assignable to type "str"
"bytearray" is not assignable to "str"
</code></pre>
<p>My understanding is:<br />
the type annotation <code>x: bytes</code> is actually an alias for "runtime types" <code>x: bytes | bytearray | memoryview[_I@memoryview]</code>, but <code>isinstance(x, bytes)</code> only checks for <code>bytes</code>, not the two others.</p>
<p>I tried checking for types the other way around:</p>
<pre><code>def from_utf8(string: bytes | str) -> str:
if isinstance(string, str):
return string
else:
return string.decode("utf-8") # <- no attribute 'decode' for 'memoryview'
</code></pre>
<p>The error now becomes:</p>
<pre><code>Cannot access attribute "decode" for class "memoryview[_I@memoryview]"
Attribute "decode" is unknown
</code></pre>
<p>For context:</p>
<ul>
<li>my project uses python 3.11</li>
<li>I see these warnings in vscode, using pylance version 2025.2.1 and python (<code>ms-python.python</code>) extension version 2025.0.0</li>
</ul>
<hr />
<p>Do I have a convenient way to write a version of <code>from_utf8(string)</code> that passes the type checker ?</p>
<p>also: is my assumption correct, and is it documented somewhere ?</p>
|
<python><visual-studio-code><python-typing><python-3.11><pyright>
|
2025-02-14 05:59:13
| 1
| 53,340
|
LeGEC
|
79,438,335
| 10,034,073
|
How to make Pydantic's non-strict, coercive mode apply to integer literals?
|
<p>I'm validating inputs to a function using Pydantic's <code>@validate_call</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal
from pydantic import validate_call
@validate_call
def foo(a: Literal[0, 90, 180, 270]) -> None:
print(a, type(a))
</code></pre>
<p>I want Pydantic to perform its default type coercion like it does with the <code>int</code> type:</p>
<pre class="lang-py prettyprint-override"><code>foo(90) # Works as expected
foo('90') # Doesn't work, but I want it to
</code></pre>
<p>If I use the annotation <code>a: int</code>, it will coerce strings like <code>'180'</code>, but then I have to manually validate which integers are given.</p>
<p><strong>How do I make Pydantic perform type coercion on Literals?</strong></p>
<p><em>Note: I'll accept a solution that requires <code>a</code> to be a string type instead of an integer, as long as it still allows both integer and string input.</em></p>
<hr />
<h3>Bad Solutions</h3>
<ul>
<li><p>I don't want to add every literal case. <code>Literal[0, 90, 180, 270, '0', '90', '180', '270']</code> is bad because it doesn't allow the strings <code>'-0'</code> or <code>'180.0'</code>.</p>
</li>
<li><p>I could do <code>Annotated[int, Field(ge=0, le=0)] | Annotated[int, Field(ge=90, le=90)] | ...</code>, but that's stupidly verbose.</p>
</li>
<li><p>I don't want to define some separate function or model. At that point, it's easier to just accept <code>a: int</code> and validate the particular value inside the method.</p>
</li>
</ul>
|
<python><type-conversion><pydantic><pydantic-v2>
|
2025-02-14 03:55:40
| 1
| 444
|
kviLL
|
79,438,268
| 139,150
|
How to run a macro before converting a file to PDF?
|
<p>I need to convert a .txt file (unicode) file to PDF using libreoffice using AWS lamabda function. It can be achieved using the code found here...</p>
<p><a href="https://github.com/fritzpaz/lambda-libre-office/blob/main/main.py#L19" rel="nofollow noreferrer">https://github.com/fritzpaz/lambda-libre-office/blob/main/main.py#L19</a></p>
<p>The code is working as expected. And this line is the most important.</p>
<pre><code> result = subprocess.run(['/opt/libreoffice7.5/program/soffice', '--headless', '--nologo', '--nodefault', '--nofirststartwizard', '--convert-to', 'pdf', file, '--outdir', '/tmp'], capture_output=True)
</code></pre>
<p>I need to run a macro before converting the file to PDF. So I added this line just before the one mentioned above.</p>
<pre><code>macro_result = subprocess.run(
["/opt/libreoffice7.5/program/soffice", "--headless", "--invisible", "--norestore",
f"macro:///StyleLibrary.Module1.myStyleMacro2({file})"],
check=False, capture_output=True
</code></pre>
<p>This line gets executed and changes the file as indicated by timestamp of the file. But the contents of the macro are not actually applied to the text inside the file. I have made sure that the macro is installed through extension. And the macro is working as expected and there is no problem with the macro code. I must be missing something in subprocess.run method.</p>
|
<python><amazon-web-services><aws-lambda><libreoffice><libreoffice-macros>
|
2025-02-14 03:12:04
| 0
| 32,554
|
shantanuo
|
79,438,265
| 785,494
|
How to remove the "4 files to analyze" messages from vscode?
|
<p>As explained in <a href="https://stackoverflow.com/questions/75604975/my-vs-code-status-bar-is-showing-42-files-to-analyze-numbers-that-keeps-increa">My VS code status bar is showing "42 files to analyze" numbers that keeps increasing</a> , VS code flashes a message when it analyzes files (in Python). While I like and use the status bar, I find this message flashing on and off distracting. As I type and save, the quick flashing tends to make the message blink and catch my eye.</p>
<p>Is there a way I can keep the status bar on and suppress this message? Or, even better, only show this message if it takes more than x seconds to analyze the files? (That way, I'll know if there's a problem, or that vscode is behind my coding - but regular coding won't keep it flashing on and off.)</p>
|
<python><visual-studio-code><user-interface><configuration><statusbar>
|
2025-02-14 03:10:38
| 1
| 9,357
|
SRobertJames
|
79,438,248
| 2,893,712
|
Mysql rowcount always returns 1 on INSERT IGNORE statement
|
<p>I am using <code>pymysql</code> connector to do some inserts into my database. I am trying to return whether or not a record was added or updated.</p>
<p>My code is</p>
<pre><code>import pymysql
db = pymysql.connect(host='127.0.0.1',user='USERNAME',password='PASSWORD',database='DATABASE')
cursor = db.cursor()
sql = "INSERT IGNORE INTO `Table` (key, via) SELECT temp.`id`, 'Via_Bot' FROM (SELECT %s AS id) temp LEFT JOIN `Other_Table` ON `Other_Table`.id = temp.id WHERE `Other_Table`.id IS NULL;"
key_id = 'ab12cd'
rows = cursor.execute(sql, (key_id,))
db.commit()
</code></pre>
<p>In this situation <code>rows</code> and <code>cursor.rowcount</code> always return 1, even if a record was not inserted/modified. How do I correctly see whether a record has been updated/inserted?</p>
|
<python><mysql><sql-insert><pymysql><database-cursor>
|
2025-02-14 02:55:16
| 1
| 8,806
|
Bijan
|
79,438,224
| 9,538,252
|
Accessing output in Python HVPlot Panel for further operations (getting a variable from bind)
|
<p>I have a Jupyter Notebook with a Pandas Dataframe using Panel for interactive controls. My desired outcome is I want to manipulate a dataframe using the controls, then I would like to see the modified dataframe, then take additional action on this dataframe in subsequent cells. The way I currently have my script constructed, this is not possible as the results are nested in either the bind or the Panel Output. How can I access variable "filtered_frame" in a subsequent call without assigning it as a global variable?</p>
<pre><code>import panel as pn
import hvplot.pandas
pn.extension()
def dostuff(a):
filtered_frame = all_the_things
if a_opt.value != None:
filtered_frame = filtered_frame[filtered_frame['col_y'].isin(a_opt.value)]
return pn.Row(filtered_frame)
a_opt = pn.widgets.RadioBoxGroup(name='a', options=[None, True, False], inline=True)
output = pn.bind(dostuff, a=a_opt)
component = pn.Column(a_opt, output)
component
</code></pre>
<p>TLDR - I want to be able to access variable/dataframe filtered_frame in a new Jupyter cell after action has been taken on it in the interactive panel.</p>
|
<python><pandas><holoviz-panel>
|
2025-02-14 02:40:12
| 0
| 311
|
ByRequest
|
79,438,105
| 1,115,716
|
Running two functions in their own processes and terminating one
|
<p>I have a setup to run a render and measure the workload on the GPU in Python, using two functions that I launch independently with the <code>multiprocessing</code> module. However, I need to terminate one from the other, so I store the object returned by <code>subprocess.Popen()</code> globally to terminate it:</p>
<pre><code>import os
import multiprocessing
import subprocess
nvidia_cmd = None
blender_cmd_to_run = '/home/mickey/dev/blender-4.2.1-linux-x64/blender -b /home/mickey/dev/elastic_shared_memory/benchmark_scenes/bmw27/bmw27.blend -f 10 -- --cycles-device CUDA'
nvidia_cmd_to_run = 'nvidia-smi --query-gpu=gpu_bus_id,memory.used --format=csv -l 1'
def run_blender():
blender_cmd = subprocess.Popen(blender_cmd_to_run, shell=True, stdout=subprocess.PIPE)
for line in blender_cmd.stdout:
print(line.decode().strip())
nvidia_cmd.terminate()
def run_nvidia():
global nvidia_cmd
nvidia_cmd = subprocess.Popen(nvidia_cmd_to_run, shell=True, stdout=subprocess.PIPE)
for line in nvidia_cmd.stdout:
print(line.decode().strip())
if __name__ == '__main__':
blender_process = multiprocessing.Process(target=run_blender)
nvidia_process = multiprocessing.Process(target=run_nvidia)
nvidia_process.start()
blender_process.start()
</code></pre>
<p>However, when the <code>blender_process</code> function ends, I get an error with the terminate call:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'terminate'
</code></pre>
<p>But why? I'm storing the value as soon as the other command is launched. What am I missing?</p>
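<p>A sketch of the direction I am considering instead (untested): passing a <code>multiprocessing.Event</code> into both workers so the nvidia loop can be told to stop, in case the module-level global is not actually shared between the two processes:</p>
<pre><code># sketch, reusing blender_cmd_to_run / nvidia_cmd_to_run from above
def run_blender(stop_event):
    blender_cmd = subprocess.Popen(blender_cmd_to_run, shell=True, stdout=subprocess.PIPE)
    for line in blender_cmd.stdout:
        print(line.decode().strip())
    stop_event.set()  # tell the monitor to stop once the render is done

def run_nvidia(stop_event):
    nvidia_cmd = subprocess.Popen(nvidia_cmd_to_run, shell=True, stdout=subprocess.PIPE)
    while not stop_event.is_set():
        line = nvidia_cmd.stdout.readline()  # nvidia-smi -l 1 prints roughly once per second
        if line:
            print(line.decode().strip())
    nvidia_cmd.terminate()

if __name__ == '__main__':
    stop_event = multiprocessing.Event()
    nvidia_process = multiprocessing.Process(target=run_nvidia, args=(stop_event,))
    blender_process = multiprocessing.Process(target=run_blender, args=(stop_event,))
    nvidia_process.start()
    blender_process.start()
</code></pre>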
|
<python>
|
2025-02-14 00:58:19
| 4
| 1,842
|
easythrees
|
79,437,896
| 5,091,805
|
Chaining celery task results not working with signatures and kwargs
|
<p>This is what I was doing, and it was working up until today.</p>
<pre class="lang-py prettyprint-override"><code>workflow = chain(
signature(
"my.task",
args=(1, 2, 3),
options={"queue": "default"},
),
signature(
"my.other.task",
kwargs={
"explicity_param1": 10,
"explicit_param2": 12,
},
options={"queue": "default"},
),
)()
@shared_task(bind=True, name="my.task")
def my_task(self, a, b, c):
    return a + b + c
@shared_task(bind=True, name="my.other.task")
def my_other_task(self, from_task, explicity_param1, explicity_param2):
    return from_task + explicity_param1 + explicity_param2
</code></pre>
<p>I could pass a result from the first task to the next while also setting explicit params. Now it doesn't seem to be passing the result at all, but I still get the kwargs. One day I moved things around, and now the result of the first task comes up as <code>None</code> in the second task.</p>
|
<python><celery>
|
2025-02-13 22:36:54
| 0
| 6,289
|
Ari
|
79,437,815
| 1,532,974
|
How can I get mypy to handle subclassing EnumType/EnumMeta correctly?
|
<p><em>disclaimer: will refer to <code>EnumType</code> as opposed to the older <code>EnumMeta</code></em></p>
<p>The following is valid at runtime (and is typically what is recommended to do):</p>
<pre class="lang-py prettyprint-override"><code>import sys
from enum import Enum, IntEnum
from typing import Any, Type, TypeVar, Union, Callable, cast, TYPE_CHECKING
if sys.version_info >= (3, 11):
from enum import EnumType
else:
from enum import EnumMeta as EnumType
_E = TypeVar("_E", bound=Enum)
class MultipleEnumAccessMeta(EnumType):
"""
Enum Metaclass to provide a way to access multiple values all at once.
"""
def __getitem__(cls: Type[_E], key: Union[str, tuple[str, ...]]) -> Union[_E, list[_E]]:
getitem = cast(Callable[[Type[_E], str], _E], EnumType.__getitem__) # Ensure correct typing for __getitem__
if isinstance(key, tuple):
return [getitem(cls, name) for name in key] # Return list for tuple keys
return getitem(cls, key) # Return single value for single key
if TYPE_CHECKING:
reveal_type(EnumType.__getitem__) # Base method signature
reveal_type(MultipleEnumAccessMeta.__getitem__) # Overridden method signature
# Test Enum with metaclass
class Names(IntEnum, metaclass=MultipleEnumAccessMeta):
Alice = 0
Bob = 1
Charlie = 2
# Test cases
assert Names["Alice"] == Names.Alice
assert Names["Alice", "Bob"] == [Names.Alice, Names.Bob]
</code></pre>
<p>However, this gives the following typehint complaints</p>
<pre><code>test.py:17: error: Self argument missing for a non-static method (or an invalid type for self) [misc]
test.py:17: error: Return type "list[Never]" of "__getitem__" incompatible with return type "Never" in supertype "EnumMeta" [override]
test.py:25: note: Revealed type is "def [_EnumMemberT] (type[_EnumMemberT`3], builtins.str) -> _EnumMemberT`3"
test.py:26: note: Revealed type is "def [_E <: enum.Enum] (type[_E`4], Union[builtins.str, tuple[builtins.str, ...]]) -> Union[_E`4, builtins.list[_E`4]]"
test.py:36: error: Enum index should be a string (actual index type "tuple[str, str]") [misc]
test.py:36: error: Non-overlapping equality check (left operand type: "Names", right operand type: "list[Names]") [comparison-overlap]
</code></pre>
<p>So to first order, you say "fine", if I drop the <code>cls: Type[_E]</code> portion type-hint, <code>mypy</code> gets more confused:</p>
<pre><code>test.py:17: error: Return type "Union[_E, list[_E]]" of "__getitem__" incompatible with return type "Never" in supertype "EnumMeta" [override]
test.py:21: error: Argument 1 has incompatible type "MultipleEnumAccessMeta"; expected "type[_E]" [arg-type]
test.py:22: error: Argument 1 has incompatible type "MultipleEnumAccessMeta"; expected "type[_E]" [arg-type]
test.py:25: note: Revealed type is "def [_EnumMemberT] (type[_EnumMemberT`3], builtins.str) -> _EnumMemberT`3"
test.py:26: note: Revealed type is "def [_E <: enum.Enum] (atlas_schema.test.MultipleEnumAccessMeta, Union[builtins.str, tuple[builtins.str, ...]]) -> Union[_E`4, builtins.list[_E`4]]"
test.py:36: error: Enum index should be a string (actual index type "tuple[str, str]") [misc]
test.py:36: error: Non-overlapping equality check (left operand type: "Names", right operand type: "list[Names]") [comparison-overlap]
</code></pre>
<p>I have questions. Likely this is partially due to the metaclassing that's being very tricky and I'm doing my best to be careful here, but I'm apparently either not careful enough or <code>mypy</code> really has some bugs:</p>
<ol>
<li>why is <code>EnumType.__getitem__</code> revealed as both returning <code>Never</code> and <code>_EnumMemberT`3</code> ? This seems like a weird internal conflict with <code>mypy</code>.</li>
<li>How do we typehint the <code>cls</code> parameter of <code>MultipleEnumAccessMeta.__getitem__</code> correctly, both to ensure the <code>TypeVar</code> is evaluated consistently by <code>mypy</code>, but also so that the call to <code>super().__getitem__</code> [or to <code>EnumType.__getitem__</code>] is correct?</li>
</ol>
|
<python><enums><python-typing><mypy>
|
2025-02-13 21:55:33
| 0
| 621
|
kratsg
|
79,437,778
| 8,032,508
|
Run loop while writing to paramiko SFTP file is in progress
|
<p>I'm writing a script that will transfer large amounts of data to an SFTP server, and I'd like to have some sort of terminal print-out during the long loading time for troubleshooting/debugging. I'm using paramiko for the SFTP connection and file writing.</p>
<p>What I currently have it this:</p>
<pre><code>remote_zip_file = sftp_client.file(file_name_with_path, "wb")
remote_zip_file.write(my_data)
</code></pre>
<p>Is there any way to run a <code>while</code> loop (or something similar) that runs while <code>paramiko.sftp_file.SFTPFile</code> is being written?</p>
<p>What I'd like is something like this (pseudo code):</p>
<pre><code>remote_zip_file = sftp_client.file(file_name_with_path, "wb")
while remote_zip_file.write(my_data) == IN_PROGRESS:
time.sleep(1)
print('Some print out that shows that the file writing is in progress')
</code></pre>
<p>This question is concerning the actual <code>write</code> function, not the <code>put</code> function. Answers to question <a href="https://stackoverflow.com/questions/8313080/how-to-see-log-file-transfer-progress-using-paramiko">How to see (log) file transfer progress using paramiko?</a> mention the built-in callback function that can be used, but this callback parameter isn't present in the <code>paramiko.sftp_file.SFTPFile</code> class specified in the original question (<a href="https://docs.paramiko.org/en/stable/api/sftp.html#paramiko.sftp_file.SFTPFile.write" rel="nofollow noreferrer">paramiko documentation source</a>).</p>
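<p>For reference, the closest I have come is a sketch that slices <code>my_data</code> into chunks so I can print progress between the individual <code>write</code> calls (the chunk size is arbitrary, and this does not report progress <em>inside</em> a single write, which is what I am really after):</p>
<pre><code>remote_zip_file = sftp_client.file(file_name_with_path, "wb")
chunk_size = 1024 * 1024  # 1 MiB per write call, arbitrary
total = len(my_data)
for offset in range(0, total, chunk_size):
    remote_zip_file.write(my_data[offset:offset + chunk_size])
    written = min(offset + chunk_size, total)
    print(f"Wrote {written}/{total} bytes")
remote_zip_file.close()
</code></pre>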
|
<python><while-loop><paramiko>
|
2025-02-13 21:36:03
| 1
| 752
|
Jwok
|
79,437,667
| 20,591,261
|
How to count unique state combinations per ID in a Polars DataFrame
|
<p>I have a Polars DataFrame where each id can appear multiple times with different state values (either 1 or 2). I want to count how many unique ids have only state 1, only state 2, or both states 1 and 2.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"id": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 9, 9, 10, 10, 10, 11, 11, 12, 12, 13, 14, 15, 15, 16, 16, 17, 17, 18, 18, 19, 20, 20, 20],
"state": [1, 2, 1, 1, 2, 2, 1, 2, 1, 1, 2, 2, 1, 1, 2, 1, 2, 1, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 1, 2, 1, 1, 2, 2, 1, 1, 2, 2]
})
</code></pre>
<p>I want to count how many unique ids fall into each category:</p>
<p>• Only state 1 (e.g., IDs that only have 1)</p>
<p>• Only state 2 (e.g., IDs that only have 2)</p>
<p>• Both states 1 and 2 (e.g., IDs that have both 1 and 2)</p>
<p>Expected Result (Example):</p>
<pre><code>State combination [1] -> 20 IDs
State combination [2] -> 15 IDs
State combination [1, 2] -> 30 IDs
</code></pre>
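<p>For context, the closest I have got is this untested sketch, which flags per id whether each state occurs and then counts the three combinations, but I am not sure it is idiomatic Polars:</p>
<pre class="lang-py prettyprint-override"><code># one row per id, with a boolean per state
summary = df.group_by("id").agg(
    (pl.col("state") == 1).any().alias("has_1"),
    (pl.col("state") == 2).any().alias("has_2"),
)
only_1 = summary.filter(pl.col("has_1") & ~pl.col("has_2")).height
only_2 = summary.filter(~pl.col("has_1") & pl.col("has_2")).height
both = summary.filter(pl.col("has_1") & pl.col("has_2")).height
print(f"[1] -> {only_1} IDs, [2] -> {only_2} IDs, [1, 2] -> {both} IDs")
</code></pre>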
|
<python><python-polars>
|
2025-02-13 20:40:19
| 3
| 1,195
|
Simon
|
79,437,593
| 12,403,550
|
Update item in nested dictionary containing list
|
<p>I am writing a method that can update any item in a nested dictionary. The method takes the path, json, and value:</p>
<pre><code>from functools import reduce
import operator
json_test = {'request_version': 'v1.0',
'fields': [{'team': [{'name': 'Real Madrid', 'colour': 'white'}],
'location': {'type': 'Feature',
'geometry': {'type': 'Point', 'coordinates': [0, 53]}},
'query': {'filter': '2024/25'},
'player': 'Bellingham'}]}
def update_attribute(element, json, new_value):
*path, last = element.split('.')
target = reduce(operator.getitem, path, json)
target[last] = new_value
return json
update_attribute('fields.player', json_test, 'Mbappe')
</code></pre>
<p>However, this method won't work as there is a list in the dictionary.</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[114], line 1
----> 1 update_attribute("fields.player", json_name_ab_asr, "test")
Cell In[111], line 3, in update_attribute(element, json, new_value)
1 def update_attribute(element, json, new_value):
2 *path, last = element.split('.')
----> 3 target = reduce(operator.getitem, path, json)
4 target[last] = new_value
5 return json
TypeError: list indices must be integers or slices, not str
</code></pre>
<p>Any workarounds?</p>
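<p>The best workaround I have sketched so far (untested) is to treat numeric path segments as list indices, so the caller would write e.g. <code>'fields.0.player'</code>:</p>
<pre><code>def update_attribute(element, data, new_value):
    *path, last = element.split('.')
    target = data
    for key in path:
        # numeric segments index into lists, everything else into dicts
        target = target[int(key)] if isinstance(target, list) else target[key]
    if isinstance(target, list):
        target[int(last)] = new_value
    else:
        target[last] = new_value
    return data

update_attribute('fields.0.player', json_test, 'Mbappe')
</code></pre>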
|
<python><dictionary>
|
2025-02-13 20:02:24
| 1
| 433
|
prayner
|
79,437,460
| 14,614,150
|
How to filter a HAIL genetic table based on the a .txt file with the correct rows?
|
<p>If I have a Hail genetic table like this in my Jupyter Python notebook:</p>
<pre><code>variant_qc
gq_stats info
locus alleles filters a_index was_split mean stdev min max call_rate n_called p_value_excess_het AC AF AN homozygote_count ...
locus<GRCh38> array<str> set<str> int32 bool float64 float64 float64 float64 float64 int64 int64 int64 int64 int64 float64 float64 float64 array<int32> array<float64> int32 array<int32>
chr1:7756105 ["A","C"] {} 1 False 4.08e+01 3.98e+00 3.00e+00 9.90e+01 1.00e+00 414672 0 158 41333 42840 1.01e-01 1.79e-21 1.00e+00 [44347] [5.35e-02] 829344 [371832,1507]
chr1:8618725 ["C","G"] {} 1 True 8.29e+01 2.31e+01 1.00e+00 9.90e+01 1.00e+00 414829 0 1 2 2 4.82e-06 5.00e-01 5.00e-01 [2] [2.41e-06] 829658 [414827,0]
chr1:8618725 ["C","T"] {} 2 True 8.29e+01 2.31e+01 1.00e+00 9.90e+01 1.00e+00 414829 0 1 100530 403020 2.54e-01 1.05e-196 1.00e+00 [705510] [8.50e-01] 829658 [11809,302490]
</code></pre>
<p>I am interested in the locus and alleles columns. You can see that for some variants the value in the first column (locus) is repeated, but with different alleles (2nd column). Now I have a second .txt file which I want to use to filter the above table:</p>
<pre><code>CHR POS REF ALT A1 BETA
chr1 7756105 A C C -0.155523
chr1 8618725 C T C -0.13646
</code></pre>
<p>I want to filter out the rows in the first table that do not match the text file, e.g. chr1:8618725 ["C","G"] would be removed from the first table.</p>
<p>Expected output:</p>
<pre><code>variant_qc
gq_stats info
locus alleles filters a_index was_split mean stdev min max call_rate n_called p_value_excess_het AC AF AN homozygote_count ...
locus<GRCh38> array<str> set<str> int32 bool float64 float64 float64 float64 float64 int64 int64 int64 int64 int64 float64 float64 float64 array<int32> array<float64> int32 array<int32>
chr1:7756105 ["A","C"] {} 1 False 4.08e+01 3.98e+00 3.00e+00 9.90e+01 1.00e+00 414672 0 158 41333 42840 1.01e-01 1.79e-21 1.00e+00 [44347] [5.35e-02] 829344 [371832,1507]
chr1:8618725 ["C","T"] {} 2 True 8.29e+01 2.31e+01 1.00e+00 9.90e+01 1.00e+00 414829 0 1 100530 403020 2.54e-01 1.05e-196 1.00e+00 [705510] [8.50e-01] 829658 [11809,302490]
</code></pre>
<p>The code I have tried is:</p>
<pre><code># Load the filter file as a Pandas DataFrame
filter_df = pd.read_csv("list_uc_snp_list_sep.txt", sep="\t", dtype=str)
# Convert the filter file into a Hail Table
filter_ht = hl.Table.parallelize(
hl.literal(filter_df.to_dict(orient="records")),
hl.tstruct(CHR=hl.tstr, POS=hl.tstr, REF=hl.tstr, ALT=hl.tstr, A1=hl.tstr, BETA=hl.tstr)
)
# Create 'locus' and 'alleles' fields to match the Hail table
filter_ht = filter_ht.annotate(
locus=hl.locus(filter_ht.CHR, hl.int32(filter_ht.POS), reference_genome="GRCh38"),
alleles=[filter_ht.REF, filter_ht.ALT]
)
# Filter WGS_split_mt_full_list_ofSNPs by keeping only matching rows
filtered_mt = WGS_split_mt_full_list_ofSNPs.filter_rows(
hl.is_defined(filter_ht.key_by("locus", "alleles")[WGS_split_mt_full_list_ofSNPs.locus,
WGS_split_mt_full_list_ofSNPs.alleles])
)
</code></pre>
<p>but it doesn't work... I get an error at the first step. I have never used Python before, so I do not know how to do this. I am fluent in R. Is there a way to convert the Hail table to R, or if not, how can I do this in Python?</p>
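<p>For what it is worth, I also tried to sketch a version that skips pandas entirely and loads the .txt straight into a Hail Table before keying and filtering (untested, and I am not sure the field handling is right):</p>
<pre><code>filter_ht = hl.import_table("list_uc_snp_list_sep.txt", delimiter="\t", impute=True)
filter_ht = filter_ht.annotate(
    locus=hl.locus(filter_ht.CHR, hl.int32(filter_ht.POS), reference_genome="GRCh38"),
    alleles=[filter_ht.REF, filter_ht.ALT],
)
filter_ht = filter_ht.key_by("locus", "alleles")
# keep only rows of the MatrixTable whose (locus, alleles) appear in the filter table
filtered_mt = WGS_split_mt_full_list_ofSNPs.filter_rows(
    hl.is_defined(filter_ht[WGS_split_mt_full_list_ofSNPs.locus,
                            WGS_split_mt_full_list_ofSNPs.alleles])
)
</code></pre>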
|
<python><hail>
|
2025-02-13 18:54:27
| 0
| 507
|
HKJ3
|
79,437,319
| 1,564,659
|
Object is not subscriptable in Scrapy Fake User Agent
|
<p>I got this error:</p>
<pre><code> from fake_useragent import UserAgent
File "D:\Kerja\HIT\Python Projects\Ongoing Projects\Andrew Mancilla\mancilla-env\lib\site-packages\fake_useragent\__init__.py", line 4, in <module>
from fake_useragent.fake import FakeUserAgent, UserAgent
File "D:\Kerja\HIT\Python Projects\Ongoing Projects\Andrew Mancilla\mancilla-env\lib\site-packages\fake_useragent\fake.py", line 8, in <module>
from fake_useragent.utils import BrowserUserAgentData, load
File "D:\Kerja\HIT\Python Projects\Ongoing Projects\Andrew Mancilla\mancilla-env\lib\site-packages\fake_useragent\utils.py", line 42, in <module>
def load() -> list[BrowserUserAgentData]:
TypeError: 'type' object is not subscriptable
</code></pre>
<p>How to remove that error?</p>
<p>My spec</p>
<pre><code>scrape-fake-useragent==1.4.4
fake-useragent==2.0.0
Scrapy==2.11.2
Python 3.8
</code></pre>
|
<python><python-3.x><scrapy><user-agent><python-3.8>
|
2025-02-13 17:51:07
| 1
| 19,366
|
Aminah Nuraini
|
79,437,187
| 243,031
|
backward lookup is not working in django 5.x
|
<p>We are migrating our Django app from <code>django==3.2.25</code> to <code>django==5.1.6</code>. <code>OneToOneField</code> and <code>ManyToManyField</code> are giving errors on reverse lookups.</p>
<p>I created a fresh setup:</p>
<pre><code>python -m venv app_corp_1.0.X
./app_corp_1.0.X/bin/pip install django
mkdir djangotutorial
./app_corp_1.0.X/bin/django-admin startproject mysite djangotutorial
./app_corp_1.0.X/bin/python djangotutorial/manage.py shell
</code></pre>
<p>I have models as below.</p>
<pre><code>from django.db import models
class Switch(models.Model):
fqdn = models.CharField(max_length=45, unique=True)
class Meta:
managed = False
db_table = 'Switch'
app_label = 'myapp_models'
class ConfigState(models.Model):
switch = models.OneToOneField(Switch, models.CASCADE, db_column='switch', primary_key=True,
related_name='config_state')
class Meta:
managed = False
db_table = 'ConfigState'
app_label = 'myapp_models'
class EdgeSwitch(models.Model):
switch = models.OneToOneField(Switch, models.CASCADE, db_column='switch', primary_key=True,
related_name='edge_switch')
class Meta:
managed = False
db_table = 'EdgeSwitch'
app_label = 'myapp_models'
</code></pre>
<p>When I try to get backward lookup query in <code>DJango==3.X</code> it works.</p>
<pre><code>>>> print(EdgeSwitch.objects.filter(switch__config_state=1).query)
SELECT `EdgeSwitch`.`switch`, `EdgeSwitch`.`cluster`, `EdgeSwitch`.`sequence`, `EdgeSwitch`.`position`, `EdgeSwitch`.`role`, `EdgeSwitch`.`span`, `EdgeSwitch`.`loopback_v4`, `EdgeSwitch`.`loopback_v6` FROM `EdgeSwitch` INNER JOIN `Switch` ON (`EdgeSwitch`.`switch` = `Switch`.`id`) INNER JOIN `ConfigState` ON (`Switch`.`id` = `ConfigState`.`switch`) WHERE `ConfigState`.`switch` = 1
</code></pre>
<p>Same code gives error in <code>DJango==5.X</code></p>
<pre><code>>>> print(EdgeSwitch.objects.filter(switch__config_state=1).query)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/manager.py", line 87, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/query.py", line 1476, in filter
return self._filter_or_exclude(False, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/query.py", line 1494, in _filter_or_exclude
clone._filter_or_exclude_inplace(negate, args, kwargs)
File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/query.py", line 1501, in _filter_or_exclude_inplace
self._query.add_q(Q(*args, **kwargs))
File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/sql/query.py", line 1609, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/sql/query.py", line 1641, in _add_q
child_clause, needed_inner = self.build_filter(
^^^^^^^^^^^^^^^^^^
File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/sql/query.py", line 1555, in build_filter
condition = self.build_lookup(lookups, col, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/sql/query.py", line 1379, in build_lookup
lhs = self.try_transform(lhs, lookup_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/virtualenvs/app_corp_1.0.X/lib/python3.12/site-packages/django/db/models/sql/query.py", line 1423, in try_transform
raise FieldError(
django.core.exceptions.FieldError: Unsupported lookup 'config_state' for OneToOneField or join on the field not permitted.
</code></pre>
<p>How can I make it work as it did before?</p>
|
<python><django><django-orm>
|
2025-02-13 17:10:20
| 1
| 21,411
|
NPatel
|
79,437,175
| 616,514
|
Flask: Match folder hierarchy of `static` to `templates`
|
<p>In a blueprint folder hierarchy, the contents of <code>templates</code> shadows the hierarchy.</p>
<p>Ultimately, I'd like to</p>
<ul>
<li>Mirror the hierarchy of blueprint <code>templates</code> in blueprint <code>static</code>?</li>
<li>Add a <code>url_prefix</code> to the blueprint which applies to <code>templates</code> and <code>static</code>,</li>
</ul>
<p>From the folder hierachy displayed below, <code>./subfiles/auth</code> contains:</p>
<ul>
<li><code>./subfiles/auth/templates/auth/register.html</code></li>
<li><code>./subfiles/auth/static/auth/register.js</code></li>
</ul>
<p>(Note the addition of <code>auth</code> after <code>static</code>.)</p>
<h2>Issue:</h2>
<p>Even without the <code>url_prefix</code>:</p>
<pre class="lang-py prettyprint-override"><code># auth.py
@auth_bp.route("/auth/register", methods=["GET", "POST"])
def register():
# [...]
return render_template(f"{request.path[1:]}.html")
</code></pre>
<pre class="lang-html prettyprint-override"><code><!-- register.html -->
<!-- [...] -->
<script src="{{ url_for( request.blueprint + ".static", filename=request.path[1:] + ".js") }}"></script>
</code></pre>
<p>results in:</p>
<p><code>GET http://127.0.0.1:5000/static/auth/register.js net::ERR_ABORTED 404 (NOT FOUND)</code></p>
<p>whereas:</p>
<pre class="lang-html prettyprint-override"><code><!-- register.html -->
<!-- [...] -->
<p>{{ url_for( request.blueprint + ".static", filename=request.path[1:] + ".js") }}</p>
</code></pre>
<p>results in:</p>
<p><code>/static/auth/register.js</code></p>
<p>That should be correct, although I have a feeling that it is looking at the root level <code>./static</code> rather than in <code>./subfiles/auth/static/</code>.</p>
<h2>Folder Hierarchy:</h2>
<pre><code>% tree -I '__pycache__'
.
├── app.py
├── app.sh
├── blueprints_init.py
├── config.py
├── global_fun.py
├── pielection.db
├── pielection.sql
├── readme.txt
├── requirements.txt
├── static
│ ├── brand.png
│ └── styles.css
├── subfiles
│ ├── __init__.py
│ ├── auth
│ │ ├── __init__.py
│ │ ├── auth_fun.py
│ │ ├── auth_rt.py
│ │ ├── static
│ │ │ └── auth
│ │ │ └── register.js
│ │ └── templates
│ │ └── auth
│ │ ├── account.html
│ │ ├── login.html
│ │ └── register.html
└── templates
├── apology.html
├── layout.html
└── layoutmwe.html
</code></pre>
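<p>For reference, the variant of the blueprint definition I am experimenting with (a sketch; the real definition lives in <code>auth_rt.py</code> and may differ, and the <code>static_url_path</code> value is my own guess) gives the blueprint an explicit static folder and URL prefix so its files stop resolving against the app-level <code>/static</code>:</p>
<pre class="lang-py prettyprint-override"><code># auth_rt.py (sketch, assuming this is where auth_bp is created)
from flask import Blueprint

auth_bp = Blueprint(
    "auth",
    __name__,
    template_folder="templates",
    static_folder="static",
    static_url_path="/auth/static",  # assumption: keeps blueprint files off the app-level /static
)
</code></pre>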
|
<javascript><python><html><flask>
|
2025-02-13 17:05:42
| 0
| 615
|
kando
|
79,437,137
| 7,195,666
|
assert_type with callable
|
<p><code>pyright</code> seems to expect a name of the parameter in a <code>Callable</code> when using <code>assert_type</code></p>
<p>For such code:</p>
<pre><code>from typing import assert_type
from collections.abc import Callable
def tuple_of_nums(n: int) -> tuple[int,...]:
return tuple(range(n))
assert_type(tuple_of_nums, Callable[[int], tuple[int,...]])
</code></pre>
<p>running <code>pyright file.py</code> yields:</p>
<pre><code>file.py:5:13 - error: "assert_type" mismatch: expected "(int) -> tuple[int, ...]" but received "(n: int) -> tuple[int, ...]" (reportAssertTypeFailure)
</code></pre>
<p>The only difference being <code>n:</code> in the <code>received</code> function.</p>
<p>How do I make this type assertion work?</p>
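<p>The only workaround I have found so far (sketch, added to the same file) is to go through a variable explicitly annotated with the <code>Callable</code> type, so the assertion compares declared types rather than the inferred signature with its parameter name:</p>
<pre><code>f: Callable[[int], tuple[int, ...]] = tuple_of_nums  # the assignment itself is accepted
assert_type(f, Callable[[int], tuple[int, ...]])     # passes: declared types match exactly
</code></pre>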
|
<python><python-typing><pyright>
|
2025-02-13 16:55:27
| 1
| 2,271
|
Vulwsztyn
|
79,437,082
| 1,145,011
|
Unable to run Playwright test scripts in debug Mode
|
<p>I'm trying to learn Playwright, so I was exploring the Playwright Inspector concept and wrote the piece of code below. In the PyCharm terminal I run the following, i.e. set PWDEBUG=1 and use pytest:</p>
<pre><code>PWDEBUG=1 pytest -s test_saucedemo.py
</code></pre>
<p>I get this error message:</p>
<pre class="lang-none prettyprint-override"><code>PWDEBUG=1 : The term 'PWDEBUG=1' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and try again.
</code></pre>
<p><strong>Note</strong>: The path of the file is correct, because when I run <code>pytest -s test_saucedemo.py</code> the test passes.</p>
<pre><code>import pytest
from playwright.sync_api import Page,expect
def test_page_title(page:Page):
page.goto("https://www.saucedemo.com/v1/index.html") #pytest --base-url https://www.saucedemo.com
expect(page).to_have_title("Swag Labs")
page.get_by_placeholder("Username").fill("standard_user")
page.get_by_placeholder("Password").fill("secret_sauce")
page.locator("input.btn_action").click()
</code></pre>
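<p>For context, the terminal appears to be PowerShell (based on the wording of the error), and <code>VAR=value command</code> is POSIX-shell syntax, so I suspect I need something like the following instead (untested):</p>
<pre><code>$env:PWDEBUG=1
pytest -s test_saucedemo.py
</code></pre>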
|
<python><pytest><playwright><playwright-python>
|
2025-02-13 16:39:25
| 1
| 1,551
|
user166013
|
79,436,968
| 1,922,959
|
How to isolate problematic text in a large csv file with Python
|
<p>I'm pretty new with Python and text analysis in general...working on a project for a class. I'm reading in a bunch of free text from .csv files that came from excel. There are over 200,000 rows.</p>
<p>I read them in with just <code>pd.read_csv()</code> and then</p>
<pre><code>df['Text'].fillna('').apply(str)
df['Text'].str.replace(r"[^a-zA-Z]", " ", regex=True)
df.dropna()
</code></pre>
<p>Then I've defined</p>
<pre><code>def preprocess_text(text):
text = re.sub(r'\d+', '', text) # Remove numbers
text = text.lower() # Convert to lowercase
text = text.translate(str.maketrans('', '', string.punctuation)) # Remove punctuation
words = word_tokenize(text) # Tokenize text
words = [word for word in words if word not in stopwords.words('english')] # Remove stopwords
return words # Return list of words
</code></pre>
<p>But when I call that on my dataframe I get</p>
<pre><code>df['cleaned_text'] = df['Text'].apply(preprocess_text)
AttributeError: 'float' object has no attribute 'lower'
</code></pre>
<p>I went back and modified the function</p>
<pre><code>def preprocess_text(text):
try:
text = re.sub(r'\d+', '', text) # Remove numbers
except TypeError:
print(text)
except AttributeError:
print(text)
text = text.lower() # Convert to lowercase
text = text.translate(str.maketrans('', '', string.punctuation)) # Remove punctuation
words = word_tokenize(text) # Tokenize text
words = [word for word in words if word not in stopwords.words('english')] # Remove stopwords
return words # Return list of words
</code></pre>
<p>And the text that I get when the error occurs is just <code>nan</code></p>
<p>Any pointers on how to isolate where in this mass of text the error is occurring? Or better yet, a pre-processing step with which I can eliminate this?</p>
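<p>The closest I have come to isolating the rows myself is this sketch, which lists the rows whose <code>Text</code> value is not actually a string:</p>
<pre><code># rows where Text is not a str (e.g. NaN floats) and their positions
bad_rows = df[~df['Text'].apply(lambda x: isinstance(x, str))]
print(bad_rows.index.tolist())
print(bad_rows['Text'].head())
</code></pre>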
|
<python><pandas><text>
|
2025-02-13 16:04:23
| 0
| 1,299
|
jerH
|
79,436,915
| 3,420,542
|
Return error code and message to API caller
|
<p>I have a web application that puts some data into a database using SQLAlchemy: if the element does not exist in the table it is added, otherwise it is not. In both cases a popup message is shown to the user via the flash command, telling whether the operation was successful, so that part works fine.</p>
<p>The problem is that if I do a POST HTTP request to that service from an external device (for example an Android app or a separate Python script), the response always returns status code 200, even if the record already exists in the table.</p>
<p>What I would like to do is:</p>
<ol>
<li>The user add the record</li>
<li>If the record does not exists it is added into the database and redirect me to the homepage of the website <code>/home</code></li>
<li>If the record exists return an error code/message to the user so that I can manage that case on client side and redirect to the homepage of the website <code>/home</code></li>
</ol>
<p>Code of the python flask:</p>
<pre><code># creating of the model
class Product(db.Model):
id = db.Column(db.String(100), primary_key=True)
total = db.Column(db.Integer())
# home route
@app.route('/')
def home():
product = Product.query.all()
return render_template('index.html', products=product)
# route for adding todos
@app.route('/add', methods=['POST'])
def add():
if request.is_json:
data = request.get_json()
else:
data = request.form
product = Product(id=data['product_id'], total=data['product_quantity'])
product_exist = Product.query.filter_by(id=product.id).first()
if product_exist:
flash(f"{product.id} already in list", category="error")
else:
db.session.add(product)
db.session.commit()
flash(f"{product.total} {product.id} added successfully.", category="success")
return redirect(url_for('home'))
</code></pre>
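<p>The direction I am thinking of (a sketch; the 409 and 201 status codes are my own arbitrary choices) is to branch on <code>request.is_json</code> and return a JSON body with an explicit status code for API callers, while keeping flash + redirect for the browser:</p>
<pre><code>from flask import jsonify

# inside the /add route (sketch), replacing the existing if/else
if product_exist:
    if request.is_json:
        # API callers get an explicit error status instead of a 200 redirect
        return jsonify(error=f"{product.id} already in list"), 409
    flash(f"{product.id} already in list", category="error")
else:
    db.session.add(product)
    db.session.commit()
    if request.is_json:
        return jsonify(message=f"{product.total} {product.id} added successfully."), 201
    flash(f"{product.total} {product.id} added successfully.", category="success")
return redirect(url_for('home'))
</code></pre>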
|
<python><flask><sqlalchemy>
|
2025-02-13 15:44:23
| 1
| 748
|
xXJohnRamboXx
|
79,436,541
| 6,544,849
|
How to detect edges of the following image
|
<p>I am trying to detect the fringes/edges of the following image but it doesn't work</p>
<p>Here is the python code:<a href="https://i.sstatic.net/eAZLGVtv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAZLGVtv.png" alt="nevermind the title" /></a></p>
<pre><code>import cv2
import numpy as np
import matplotlib.pyplot as plt
# Load the image
image_path = 'your path to image'
image = cv2.imread(image_path)
# Convert to grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Apply Gaussian blur to reduce noise
blurred_image = cv2.GaussianBlur(gray_image, (5, 5), 0)
# Apply Canny edge detection
edges = cv2.Canny(blurred_image, threshold1=50, threshold2=150)
edges_colored = np.zeros_like(image)
edges_colored[:, :, 1] = edges # Set the red channel
overlay = cv2.addWeighted(image, 0.8, edges_colored, 1, 0)
# Display the overlaid image
plt.figure(figsize=(8, 6))
plt.imshow(cv2.cvtColor(overlay, cv2.COLOR_BGR2RGB))
plt.title('Original Image with Detected Edges in Red')
plt.axis('off')
plt.show()
</code></pre>
<p>This is the image I want to detect edges for; the detected edges should follow the lines of the fringes. I have tried almost everything (black and white conversion, increasing the contrast) but it doesn't work. I modified another image manually, and for that one it works, and I don't know what the reason can be. Here is the working example:
<a href="https://i.sstatic.net/v89lEWpo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v89lEWpo.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong that it doesn't work for the new image but for some reason works for the other? I have tried normalizing and played with the threshold, brightness and contrast, but it didn't help. Thanks for the help!</p>
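<p>For completeness, one preprocessing variant I have not fully explored yet (untested sketch; the CLAHE and Canny parameters are guesses): boosting local contrast before the blur and Canny steps, since the fringes in the failing image are quite low-contrast:</p>
<pre><code># after converting to grayscale, before the Gaussian blur
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.GaussianBlur(clahe.apply(gray_image), (5, 5), 0)
edges = cv2.Canny(enhanced, threshold1=30, threshold2=100)
</code></pre>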
|
<python><opencv><image-processing>
|
2025-02-13 14:39:13
| 1
| 321
|
wosker4yan
|
79,436,530
| 4,473,615
|
Py4JError: An error occurred while calling None.org.apache.spark.api.python.PythonFunction
|
<p>While reading a JSON file and converting a pandas DataFrame to Spark, the process stops with this error.</p>
<blockquote>
<p>Py4JError: An error occurred while calling None.org.apache.spark.api.python.PythonFunction. Trace:
py4j.Py4JException: Constructor org.apache.spark.api.python.PythonFunction([class [B, class java.util.HashMap, class java.util.ArrayList, class java.lang.String, class java.lang.String, class java.util.ArrayList, class org.apache.spark.api.python.PythonAccumulatorV2]) does not exist</p>
</blockquote>
<pre><code>import pandas as pd
import json
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
spark = SparkSession.builder.getOrCreate()
with open('data_file.json', 'r') as file:
data = json.load(file)
df = pd.json_normalize(data)
df = pd.DataFrame(df, columns=['Col1', 'Col2'])
df_aws = spark.createDataFrame(df)
</code></pre>
<p>Trying to use findspark</p>
<pre><code>import findspark
findspark.init('path/to/spark-3.5.4-bin-hadoop3')
</code></pre>
<p>As well as trying to use spark parameters</p>
<pre><code>import os
import sys
spark_path = r"path/to/spark-3.5.4-bin-hadoop3" # spark installed folder
os.environ['SPARK_HOME'] = spark_path
sys.path.insert(0, spark_path + "/bin")
sys.path.insert(0, spark_path + "/python/pyspark/")
sys.path.insert(0, spark_path + "/python/lib/pyspark.zip")
sys.path.insert(0, spark_path + "/python/lib/py4j-0.10.7-src.zip")
</code></pre>
<p>Neither approach is working; any assistance would be appreciated.</p>
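<p>One thing I still need to rule out is a mismatch between the pip-installed <code>pyspark</code> and the Spark distribution in <code>SPARK_HOME</code>, since I understand a version mismatch can produce constructor errors like this; a small check (sketch):</p>
<pre><code>import os
import pyspark

print(pyspark.__version__)            # version of the pip-installed pyspark
print(os.environ.get("SPARK_HOME"))   # should point at spark-3.5.4-bin-hadoop3
</code></pre>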
|
<python><apache-spark><pyspark>
|
2025-02-13 14:35:46
| 0
| 5,241
|
Jim Macaulay
|
79,436,464
| 118,549
|
Satisfying Python type checking for values from a dictionary
|
<p>I have a Python script that pulls in some config variables using dotenv_values and sets some initial constants using these. The constants are then passed as arguments to a function. If I define the constants in the global context, I get a type-checking error due to the function expecting a string. If I define the constants within the main function, the type checking is fine (although it doesn't like me using the constant naming style within main).</p>
<p>Here's a simple script that should demonstrate the issue:</p>
<pre class="lang-py prettyprint-override"><code>"""Type-checking test"""
from dotenv import dotenv_values
config = dotenv_values(".config")
URL = config.get("BASE_URL")
if URL is None:
raise ValueError("BASE_URL is not set in .config file")
def do_a_thing(url: str) -> None:
"""Do a thing"""
print("You passed in a URL:", url)
def main():
"""Main"""
do_a_thing(URL)
if __name__ == "__main__":
main()
</code></pre>
<p>This results in the checker flagging the "do_a_thing(URL)" line with:</p>
<pre><code>Argument of type "str | None" cannot be assigned to parameter "url" of type "str" in function "do_a_thing"
Type "str | None" is not assignable to type "str"
"None" is not assignable to "str"
</code></pre>
<p>How can I convince the type checker that the do_a_thing function will never be called with a None value?</p>
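<p>For reference, this is roughly the variant that does satisfy the checker, with the lookup and the <code>None</code> check moved inside <code>main</code> (it just clashes with my constant-naming preference):</p>
<pre class="lang-py prettyprint-override"><code>def main():
    """Main"""
    url = config.get("BASE_URL")
    if url is None:
        raise ValueError("BASE_URL is not set in .config file")
    do_a_thing(url)  # url is narrowed to str within this scope
</code></pre>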
|
<python><python-typing>
|
2025-02-13 14:12:55
| 0
| 752
|
Gordon Mckeown
|
79,436,352
| 2,490,392
|
How to insert a column at a specific index with values for some rows in a single operation?
|
<p>I want to insert a column at a specific index in a Pandas DataFrame, but only assign values to certain rows. Currently, I am doing it in two steps:</p>
<pre><code>df = pd.DataFrame({'A': [1, 2, 3, 4, 5],
'B': [10, 20, 30, 40, 50] })
df.insert(1, 'NewCol', None)
df.loc[[1, 3], 'NewCol'] = ['X', 'Y']
</code></pre>
<p>Is there a more concise way to achieve this in a single operation?</p>
|
<python><pandas><dataframe>
|
2025-02-13 13:40:55
| 1
| 389
|
kirgol
|
79,436,332
| 8,329,213
|
Modeling Time series with ARIMA
|
<p>I am trying to model a <code>time series</code>. I am not certain if my approach is correct, and that's why I am posting a question here. Maybe someone could point out the mistake I am making. Can someone propose a better model here?</p>
<p>Here is the data for <code>42 months</code>.</p>
<pre><code>Revenue
5200, 4989, 4500, 4812, 4675, 3661, 3898, 5690, 6049, 6919, 5539, 5592,
5396, 4733, 4061, 4657, 3485, 3616, 2823, 4394, 5324, 6071, 6327, 4210,
4393, 4366, 3704, 3449, 3240, 3529, 3077, 4438, 4932, 4158, 3864, 4171,
3444, 3111, 2521, 1961, 1459, 1884
</code></pre>
<p>I am training on the first 36 months and forecasting the remaining 6 months. First of all I create a graph and then test whether the time series is <code>stationary</code> or not.</p>
<pre><code>import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.statespace.sarimax import SARIMAX
from pmdarima.arima import auto_arima
df = pd.read_excel(os.path.join('TS Data.xlsx'))
train, test = df[df.index<36],df[df.index>=36]
plt.figure(figsize=(10,4))
plt.plot(train)
plt.plot(test)
plt.title('Revenue', fontsize=10)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/OssBJg18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OssBJg18.png" alt="enter image description here" /></a></p>
<p>I check for <code>Stationarity</code> using <a href="https://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test" rel="nofollow noreferrer"><code>Augmented_Dickey–Fuller (ADF) test</code></a>:</p>
<pre><code>result = adfuller(train)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
print('t-value at that Perc: %s' % str(result[4]))
ADF Statistic: -2.672480
p-value: 0.078923
t-value at that Perc: {'1%': -3.6327426647230316, '5%': -2.9485102040816327, '10%': -2.6130173469387756}
</code></pre>
<p>We see that there is <code>Non-Stationarity</code> because <code>p-value</code> is high and <code>ADF Statistic</code> is higher than <code>t-value</code> at 1% and 5% significance. So, yes, we must make the time series <code>Stationary</code>.</p>
<p>I took the <code>first-difference</code> and checked for <code>Stationarity</code>.</p>
<pre><code>first_diff = train.diff()[1:]
plt.figure(figsize=(10,4))
plt.plot(first_diff)
plt.title('First Difference', fontsize=10)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/jts0sYkF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jts0sYkF.png" alt="enter image description here" /></a></p>
<p>Now, I again ran the <code>ADF-Test</code> and got the following results:</p>
<pre><code>result = adfuller(first_diff)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
print('t-value at that Perc: %s' % str(result[4]))
ADF Statistic: -5.869300
p-value: 0.000000
t-value at that Perc: {'1%': -3.639224104416853, '5%': -2.9512301791166293, '10%': -2.614446989619377}
</code></pre>
<p>We see that <code>first-difference</code> is indeed <code>Stationary</code> because <code>p-value</code> is 0 and <code>ADF Statistic</code> is lower than <code>t-value</code> at 1%, 5% & 10% significance. So, this is the <code>time series</code> we will use to do our analysis.</p>
<p>We see that there is a <code>Seasonality</code> in the data at 12 months, so I will use use <a href="https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html" rel="nofollow noreferrer"><code>SARIMA</code></a> model.</p>
<p><strong>SARIMA Modeling</strong></p>
<pre><code>acf_plot = plot_acf(first_diff,lags=13)
</code></pre>
<p><a href="https://i.sstatic.net/xFUdZpxi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFUdZpxi.png" alt="enter image description here" /></a></p>
<p>We see that from <a href="https://en.wikipedia.org/wiki/Autocorrelation" rel="nofollow noreferrer"><code>Auto correlation</code></a> plot that there is no significant lag.</p>
<pre><code>pacf_plot = plot_pacf(first_diff,lags=13)
</code></pre>
<p><a href="https://i.sstatic.net/XWx7gB0c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWx7gB0c.png" alt="enter image description here" /></a></p>
<p>and there is a slightly significant <a href="https://en.wikipedia.org/wiki/Partial_autocorrelation_function" rel="nofollow noreferrer"><code>Partial auto correlation</code></a> at lag 9. I almost reject it, as it is the 9th lag and not huge.</p>
<p>Then, I ran <a href="https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html" rel="nofollow noreferrer"><code>SARIMA</code></a> Model.</p>
<pre><code>my_order = (0,1,0)
my_seasonal_order = (0, 0, 0, 12) # Since none of PACF and ACF had any significant lag.
model = SARIMAX(train, order=my_order, seasonal_order=my_seasonal_order)
model_fit = model.fit()
print(model_fit.summary())
SARIMAX Results
==============================================================================
Dep. Variable: Revenue No. Observations: 36
Model: SARIMAX(0, 1, 0) Log Likelihood -284.069
Date: Thu, 13 Feb 2025 AIC 570.137
Time: 12:56:59 BIC 571.693
Sample: 0 HQIC 570.674
- 36
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
sigma2 6.561e+05 1.46e+05 4.491 0.000 3.7e+05 9.42e+05
===================================================================================
Ljung-Box (L1) (Q): 0.06 Jarque-Bera (JB): 0.13
Prob(Q): 0.81 Prob(JB): 0.94
Heteroskedasticity (H): 0.50 Skew: -0.02
Prob(H) (two-sided): 0.24 Kurtosis: 3.30
===================================================================================
</code></pre>
<p>The predictions are as follows:</p>
<pre><code>predictions = model_fit.forecast(len(test))
print(predictions)
36 4171.0
37 4171.0
38 4171.0
39 4171.0
40 4171.0
41 4171.0
plt.figure(figsize=(10,4))
plt.plot(test)
plt.plot(predictions)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/OLYWKJ18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OLYWKJ18.png" alt="enter image description here" /></a></p>
<p>All predicted values are the same, and I understand that it's because the lags in <a href="https://en.wikipedia.org/wiki/Autoregressive_model" rel="nofollow noreferrer"><code>AR</code></a> and <a href="https://en.wikipedia.org/wiki/Moving-average_model" rel="nofollow noreferrer"><code>MA</code></a> process were taken as 0 for all <code>p,q,P,Q</code>.</p>
<p>When I run <a href="https://alkaline-ml.com/pmdarima/modules/generated/pmdarima.arima.auto_arima.html" rel="nofollow noreferrer"><code>Auto-Arima</code></a>, where all parameters are chosen on its own, I get the same results.</p>
<pre><code>model = auto_arima(train, start_p=1, start_q=1, test='adf', seasonal=True, trace=True,
max_p=12, max_q=12, d=1, max_order=None, error_action='ignore',
suppress_warnings=True, stepwise=True,
)
</code></pre>
<p>Following is the result:</p>
<pre><code>print('\n',model.summary())
Performing stepwise search to minimize aic
ARIMA(1,1,1)(0,0,0)[0] intercept : AIC=inf, Time=0.13 sec
ARIMA(0,1,0)(0,0,0)[0] intercept : AIC=478.256, Time=0.02 sec
ARIMA(1,1,0)(0,0,0)[0] intercept : AIC=480.045, Time=0.10 sec
ARIMA(0,1,1)(0,0,0)[0] intercept : AIC=480.076, Time=0.09 sec
ARIMA(0,1,0)(0,0,0)[0] : AIC=476.259, Time=0.01 sec
Best model: ARIMA(0,1,0)(0,0,0)[0]
Total fit time: 0.359 seconds
Our Model: ARIMA(0,1,0)(0,0,0)[0]
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 30
Model: SARIMAX(0, 1, 0) Log Likelihood -237.130
Date: Thu, 13 Feb 2025 AIC 476.259
Time: 11:58:34 BIC 477.627
Sample: 0 HQIC 476.688
- 30
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
sigma2 7.404e+05 1.92e+05 3.865 0.000 3.65e+05 1.12e+06
===================================================================================
Ljung-Box (L1) (Q): 0.08 Jarque-Bera (JB): 0.05
Prob(Q): 0.78 Prob(JB): 0.97
Heteroskedasticity (H): 0.43 Skew: -0.10
Prob(H) (two-sided): 0.20 Kurtosis: 3.06
===================================================================================
</code></pre>
<p><code>ARIMA(0,1,0)(0,0,0)[0]</code> is selected by it, which is essentially the same as mine in terms of the <code>lags p,q,P,Q</code>, except that the seasonal component <code>D</code> is also <code>[0]</code>.</p>
<pre><code>prediction, confint = model.predict(n_periods=len(test), return_conf_int=True)
print(prediction)
36 4171.0
37 4171.0
38 4171.0
39 4171.0
40 4171.0
41 4171.0
</code></pre>
<p>The predicted values are the same for both the <code>SARIMAX</code> and <code>Auto-Arima</code> models.</p>
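<p>For reference, this is the kind of variant I am considering trying next (forcing a seasonal search), though <code>m=12</code> and <code>D=1</code> are only my assumptions about monthly seasonality, which the ACF/PACF did not confirm (sketch only):</p>
<pre><code># Sketch: force a seasonal search with an assumed monthly period m=12 and D=1
model = auto_arima(train, seasonal=True, m=12, D=1,
                   start_p=0, start_q=0, max_p=3, max_q=3,
                   d=1, stepwise=True, trace=True,
                   suppress_warnings=True, error_action='ignore')
print(model.summary())
</code></pre>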
<p><strong>My Question:</strong> One can clearly see that there is a pattern in the time series, but my model is not able to capture it. Instead I get a constant value, making it a very poor <code>fit</code>. What am I doing wrong? Can someone suggest the changes I should make to the <code>lags</code> so that I can better capture the <code>time series</code> and get a good <code>fit</code>?</p>
<p>Thanks</p>
|
<python><time-series><arima><sarimax>
|
2025-02-13 13:32:17
| 0
| 7,707
|
cph_sto
|
79,436,279
| 10,666,216
|
KeyError when constructing a prompt with JSON text in LangChain
|
<p>I'm trying to create a few-shot prompt for a bot that summarizes the contents of a JSON file. No matter what input I give the FewShotPromptTemplate, it fails with a <code>KeyError: '\n "name"'</code>. Just doing <code>example_prompt.invoke(examples[0]).to_string()</code> works fine though.</p>
<pre class="lang-py prettyprint-override"><code>from langchain_core.prompts import PromptTemplate, FewShotPromptTemplate
json_raw = """{
"name": "Bernd",
"type": "Bread"
}"""
examples = [
{
"question": json_raw,
"answer": "A file describing a bread.",
}
]
example_prompt = PromptTemplate.from_template("Input file:\n{question}\n\nOutput file:\n{answer}\n")
prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
suffix="Input file:\n{input}",
input_variables=["input"],
)
command = prompt.invoke({"input": "asdf"})
print(command)
</code></pre>
<p>Full error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/jan/meine/ai/bpmnbot/fewshotbug.py", line 20, in <module>
command = prompt.invoke({"input": "input"})
File "/Users/jan/meine/ai/bpmnbot/venv/lib/python3.13/site-packages/langchain_core/prompts/base.py", line 208, in invoke
return self._call_with_config(
~~~~~~~~~~~~~~~~~~~~~~^
self._format_prompt_with_error_handling,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
serialized=self._serialized,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/jan/meine/ai/bpmnbot/venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 1925, in _call_with_config
context.run(
~~~~~~~~~~~^
call_func_with_variable_args, # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<4 lines>...
**kwargs,
^^^^^^^^^
),
^
File "/Users/jan/meine/ai/bpmnbot/venv/lib/python3.13/site-packages/langchain_core/runnables/config.py", line 396, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/Users/jan/meine/ai/bpmnbot/venv/lib/python3.13/site-packages/langchain_core/prompts/base.py", line 183, in _format_prompt_with_error_handling
return self.format_prompt(**_inner_input)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/jan/meine/ai/bpmnbot/venv/lib/python3.13/site-packages/langchain_core/prompts/string.py", line 286, in format_prompt
return StringPromptValue(text=self.format(**kwargs))
~~~~~~~~~~~^^^^^^^^^^
File "/Users/jan/meine/ai/bpmnbot/venv/lib/python3.13/site-packages/langchain_core/prompts/few_shot.py", line 197, in format
return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/string.py", line 190, in format
return self.vformat(format_string, args, kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jan/meine/ai/bpmnbot/venv/lib/python3.13/site-packages/langchain_core/utils/formatting.py", line 33, in vformat
return super().vformat(format_string, args, kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/string.py", line 194, in vformat
result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/string.py", line 234, in _vformat
obj, arg_used = self.get_field(field_name, args, kwargs)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/string.py", line 299, in get_field
obj = self.get_value(first, args, kwargs)
File "/opt/homebrew/Cellar/python@3.13/3.13.0_1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/string.py", line 256, in get_value
return kwargs[key]
~~~~~~^^^^^
KeyError: '\n "name"'
</code></pre>
<p>What's going on here?</p>
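<p>For what it's worth, one workaround I have been experimenting with is doubling the braces in the JSON so that <code>str.format</code> treats them as literal characters. This is only a guess based on Python formatting semantics, and I am not sure it is the intended way to embed JSON in LangChain templates:</p>
<pre class="lang-py prettyprint-override"><code># Escape literal braces so the default f-string/format templating ignores them
json_escaped = json_raw.replace("{", "{{").replace("}", "}}")

examples = [
    {
        "question": json_escaped,
        "answer": "A file describing a bread.",
    }
]
</code></pre>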
|
<python><langchain><large-language-model>
|
2025-02-13 13:09:49
| 0
| 1,073
|
Jan Berndt
|
79,436,180
| 14,743,705
|
How can I get the date from weeknr and year using strptime
|
<p>I'm trying to get the date of the Monday of a given week number and year,
but I feel like strptime is just returning the wrong date.
This is what I try:</p>
<pre><code>from datetime import date, datetime
today = date.today()
today_year = today.isocalendar()[0]
today_weeknr = today.isocalendar()[1]
print(today)
print(today_year, today_weeknr)
d = "{}-W{}-1".format(today_year, today_weeknr)
monday_date = datetime.strptime(d, "%Y-W%W-%w").date()
print(monday_date)
print(monday_date.isocalendar()[1])
</code></pre>
<p>Result:</p>
<pre><code>$ python test.py
2025-02-13
2025 7
2025-02-17
8
</code></pre>
<p>So how the hell am I in the next week now?</p>
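<p>For context, the workaround I am leaning towards for now is <code>date.fromisocalendar</code> (a sketch; I would still like to understand the <code>strptime</code> behaviour above):</p>
<pre><code>from datetime import date

today = date.today()
year, weeknr, _ = today.isocalendar()

# Monday (ISO weekday 1) of the current ISO week
monday_date = date.fromisocalendar(year, weeknr, 1)
print(monday_date)                   # 2025-02-10 for week 7 of 2025
print(monday_date.isocalendar()[1])  # 7
</code></pre>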
|
<python><strptime>
|
2025-02-13 12:33:09
| 2
| 305
|
Harm
|
79,436,039
| 13,860,719
|
How to plot polygons from categorical grid points in matplotlib? (phase-diagram generation)
|
<p>I have a dataframe that contains 1681 evenly distributed 2D grid points. Each data point has its x and y coordinates, a label representing its category (or phase), and a color for that category.</p>
<pre><code> x y label color
0 -40.0 -30.0 Fe #660066
1 -40.0 -29.0 Fe #660066
2 -40.0 -28.0 FeS #ff7f50
3 -40.0 -27.0 FeS #ff7f50
4 -40.0 -26.0 FeS #ff7f50
... ... ... ... ...
1676 0.0 6.0 Fe2(SO4)3 #8a2be2
1677 0.0 7.0 Fe2(SO4)3 #8a2be2
1678 0.0 8.0 Fe2(SO4)3 #8a2be2
1679 0.0 9.0 Fe2(SO4)3 #8a2be2
1680 0.0 10.0 Fe2(SO4)3 #8a2be2
[1681 rows x 4 columns]
</code></pre>
<p>I want to generate a polygon diagram that shows the linear boundary of each category (in my case also known as a "phase diagram"). So far I can only show this kind of diagram in a simple scatter plot like this:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
plt.figure(figsize=(8., 8.))
for color in df.color.unique():
df_color = df[df.color==color]
plt.scatter(
x=df_color.x,
y=df_color.y,
c=color,
s=100,
label=df_color.label.iloc[0]
)
plt.xlim([-40., 0.])
plt.ylim([-30., 10.])
plt.xlabel('Log pO2(g)')
plt.ylabel('Log pSO2(g)')
plt.legend(bbox_to_anchor=(1.05, 1.))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/3PceH2lD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3PceH2lD.png" alt="enter image description here" /></a>
However, what I want is a phase diagram with clear linear boundaries that looks something like this:
<a href="https://i.sstatic.net/0ko5IPCY.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0ko5IPCY.jpg" alt="enter image description here" /></a></p>
<p>Is there any way I can generate such phase diagram using <code>matplotlib</code>? Note that the boundary is not deterministic, especially when the grid points are not dense enough. Hence there needs to be some kind of heuristics, for example the boundary line should always lie in the middle of two neighboring points with different categories. I imagine there will be some sort of line fitting or interpolation needed, and <code>matplotlib.patches.Polygon</code> is probably useful here.</p>
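<p>For reference, the closest I have come so far is a filled grid plot over integer-coded labels (a rough sketch; it colours the regions, but the boundaries simply follow the grid instead of being clean straight lines):</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Integer-code the labels and pivot them onto the regular grid
uniq_labels = list(df.label.unique())
uniq_colors = [df.loc[df.label == lab, 'color'].iloc[0] for lab in uniq_labels]
codes = {lab: i for i, lab in enumerate(uniq_labels)}
grid = df.pivot(index='y', columns='x', values='label').replace(codes)

plt.figure(figsize=(8., 8.))
plt.pcolormesh(grid.columns, grid.index, grid.values.astype(float),
               cmap=ListedColormap(uniq_colors), shading='nearest')
plt.xlabel('Log pO2(g)')
plt.ylabel('Log pSO2(g)')
plt.show()
</code></pre>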
<p>For easy testing, I attach a code snippet for generating the data, but <strong>the polygon information shown below is not supposed to be used for generating the phase diagram</strong>.</p>
<pre><code>import numpy as np
import pandas as pd
from shapely.geometry import Point, Polygon
labels = ['Fe', 'Fe3O4', 'FeS', 'Fe2O3', 'FeS2', 'FeSO4', 'Fe2(SO4)3']
colors = ['#660066', '#b6fcd5', '#ff7f50', '#ffb6c1', '#c6e2ff', '#d3ffce', '#8a2be2']
polygons = []
polygons.append(Polygon([(-26.7243,-14.7423), (-26.7243,-30.0000), (-40.0000,-30.0000),
(-40.0000,-28.0181)]))
polygons.append(Polygon([(-18.1347,-0.4263), (-16.6048,1.6135), (-16.6048,-30.0000),
(-26.7243,-30.0000), (-26.7243,-14.7423), (-18.1347,-0.4263)]))
polygons.append(Polygon([(-18.1347,-0.4263), (-26.7243,-14.7423),
(-40.0000,-28.0181), (-40.0000,-22.2917), (-18.1347,-0.4263)]))
polygons.append(Polygon([(0.0000,-20.2615), (0.0000,-30.0000), (-16.6048,-30.0000),
(-16.6048,1.6135), (-16.5517,1.6865), (-6.0517,-0.9385), (0.0000,-3.9643)]))
polygons.append(Polygon([(-14.2390,10.0000), (-14.5829,7.5927), (-16.5517,1.6865),
(-16.6048,1.6135), (-18.1347,-0.4263), (-40.0000,-22.2917), (-40.0000,10.0000)]))
polygons.append(Polygon([(-6.0517,-0.9385), (-16.5517,1.6865), (-14.5829,7.5927),
(-6.0517,-0.9385)]))
polygons.append(Polygon([(0.0000,-3.9643), (-6.0517,-0.9385), (-14.5829,7.5927),
(-14.2390,10.0000), (0.0000,10.0000)]))
x_grid = np.arange(-40., 0.01, 1.)
y_grid = np.arange(-30., 10.01, 1.)
xy_grid = np.array(np.meshgrid(x_grid, y_grid)).T.reshape(-1, 2).tolist()
data = []
for coords in xy_grid:
point = Point(coords)
for i, poly in enumerate(polygons):
if poly.buffer(1e-3).contains(point):
data.append({
'x': point.x,
'y': point.y,
'label': labels[i],
'color': colors[i]
})
break
df = pd.DataFrame(data)
</code></pre>
|
<python><pandas><algorithm><matplotlib><plot>
|
2025-02-13 11:42:43
| 3
| 2,963
|
Shaun Han
|
79,435,652
| 1,487,336
|
How to avoid data copying in joblib parallel?
|
<p>I have a function <code>f(df, x)</code> where <code>df</code> is a large dataframe and <code>x</code> is a simple variable. The function <code>f</code> only reads from <code>df</code> and doesn't modify it. Is it possible to share the memory of <code>df</code> and avoid copying it to sub-processes when using <code>joblib.Parallel</code> or another <code>multiprocessing</code>-based module?</p>
<ol>
<li>I'd like to avoid turning <code>df</code> into a global variable, as I'd like to reuse the code to process other data.</li>
<li>It's not possible to turn <code>df</code> into a numpy array, as <code>f</code> needs to locate data using the index of <code>df</code>.</li>
</ol>
<p>Edit:</p>
<p>Will <code>df</code> be copied to sub-process while executing <code>Parallel</code> in the following code?</p>
<pre><code>def g(df):
def f(x):
nonlocal df
...
return z
list_res = Parallel(10)(delayed(f)(x) for x in iterables)
return list_res
</code></pre>
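<p>One alternative I am weighing is the threading backend, which shares <code>df</code> within the same process instead of copying it, at the cost of running under the GIL (a sketch; <code>iterables</code> stands in for my real inputs and I have not checked whether my workload releases the GIL enough for this to help):</p>
<pre><code>from joblib import Parallel, delayed

def f(df, x):
    # read-only work against df, e.g. index-based lookups
    return df.loc[x].sum()

# prefer='threads' keeps df in the same address space (no pickling, no copy)
list_res = Parallel(n_jobs=10, prefer='threads')(
    delayed(f)(df, x) for x in iterables
)
</code></pre>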
|
<python><python-3.x><memory-management><joblib>
|
2025-02-13 09:24:59
| 0
| 809
|
Lei Hao
|
79,435,641
| 1,553,368
|
Having problem to transform time attribute when transorming between stream and sql api
|
<p>I have a simple <code>Pyflink</code> application where I need a primary key for a temporal table join. Since my source uses the avro-confluent format, which has problems with primary keys, I use the transformation: SQL API -> Stream objects -> SQL API. And here I run into a problem: the round-tripped table no longer has a proper time attribute.</p>
<pre class="lang-none prettyprint-override"><code>sensor_readings_ddl = f"""
CREATE TABLE sensor_readings (
kafka_key_id VARCHAR not null,
...
ts TIMESTAMP(3),
WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
'connector' = 'kafka',
...
)
class TsExtractor(TimestampAssigner):
def extract_timestamp(self, element, record_timestamp):
return element.ts # Extracts timestamp from the `ts` field of the event
sensors_reading_stream = (
tenv.to_data_stream(sensor_readings_tab)
.assign_timestamps_and_watermarks(
WatermarkStrategy
.for_bounded_out_of_orderness(Duration.of_seconds(10)) # Define watermark strategy
.with_timestamp_assigner(TsExtractor()) # Use custom timestamp assigner
)
)
sensors_reading_schema = (Schema.new_builder().
column("kafka_key_id", DataTypes.STRING().not_null()).
...
column("ts", DataTypes.TIMESTAMP(3)).
primary_key("kafka_key_id").
watermark("ts", "ts - INTERVAL '5' SECOND").
build())
sensor_readings_view = tenv.from_data_stream(sensors_reading_stream, sensors_reading_schema)
sensor_readings_view_tab = tenv.create_temporary_view( "sensor_readings_view", sensors_reading_stream)
tumbling_w_sql = """
SELECT
sr.device_id,
das.metric_1,
das.metric_2,
TUMBLE_START(sr.ts, INTERVAL '30' SECONDS) AS window_start,
TUMBLE_END(sr.ts, INTERVAL '30' SECONDS) AS window_end,
SUM(sr.ampere_hour) AS charge_consumed
FROM sensor_readings_view FOR SYSTEM_TIME AS OF sr.ts AS sr
JOIN device_account_stats_view AS das ON sr.device_id = das.device_id
GROUP BY
TUMBLE(sr.ts, INTERVAL '30' SECONDS),
sr.device_id,
das.metric_1,
das.metric_2
"""
</code></pre>
<p>The error is:</p>
<blockquote>
<p>pyflink.util.exceptions.TableException: >org.apache.flink.table.api.TableException: Window aggregate can only >be defined over a time attribute column, but TIMESTAMP(3) >encountered.</p>
</blockquote>
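<p>For reference, one variant I have been experimenting with is letting the schema take the watermark from the underlying stream via <code>SOURCE_WATERMARK()</code>, modelled on the <code>from_data_stream</code> examples in the Flink docs (a sketch only; I am not sure whether <code>TIMESTAMP_LTZ</code> vs <code>TIMESTAMP</code> matters here, nor whether registering the converted table instead of the raw stream is what I should be doing):</p>
<pre><code>sensors_reading_schema = (Schema.new_builder().
                          column("kafka_key_id", DataTypes.STRING().not_null()).
                          column("ts", DataTypes.TIMESTAMP_LTZ(3)).
                          primary_key("kafka_key_id").
                          watermark("ts", "SOURCE_WATERMARK()").
                          build())

sensor_readings_view_tab = tenv.from_data_stream(sensors_reading_stream, sensors_reading_schema)
tenv.create_temporary_view("sensor_readings_view", sensor_readings_view_tab)
</code></pre>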
<p>Help is appreciated.</p>
|
<python><pyflink>
|
2025-02-13 09:21:06
| 1
| 329
|
Olga Gorun
|
79,435,496
| 2,700,041
|
Fastest way to map column from unique key dataframe to a duplicate-allowed dataframe
|
<p>I have two DataFrames:</p>
<ul>
<li><code>A</code>: Contains unique <code>(A1, A2)</code> pairs and a column <code>D</code> with numerical values.</li>
<li><code>B</code>: Contains <code>(A1, A2)</code> pairs, but allows duplicates.</li>
</ul>
<p>I need to efficiently map column <code>D</code> from <code>A</code> to <code>B</code> based on the <code>(A1, A2)</code> keys.</p>
<p>Currently, I’m using the following Pandas approach:</p>
<pre><code>import pandas as pd
A = pd.DataFrame({
'A1': [1, 2, 3],
'A2': ['X', 'Y', 'Z'],
'D': [10, 20, 30]
})
B = pd.DataFrame({
'A1': [2, 3, 4, 2],
'A2': ['Y', 'Z', 'W', 'Y'],
})
B = B.merge(A, how='left', on=['A1', 'A2'], suffixes=('', '_A'))
B.drop(columns=[col for col in B.columns if col.endswith('_A')], inplace=True)
print(B)
</code></pre>
<p>which gives the output of <code>B</code> with <code>D</code> filled in:</p>
<pre><code> A1 A2 D
0 2 Y 20.0
1 3 Z 30.0
2 4 W NaN
3 2 Y 20.0
</code></pre>
<p><strong>Concerns</strong>:</p>
<p>I am looking for a faster way to achieve the same mapping other than using <code>merge</code>. The output should retain all rows from B, filling in missing values from A where applicable. One drawback of the current approach is that I have to remove the unnecessary columns created by the left join to keep it compatible with my downstream code.</p>
<p><strong>What I’ve Tried:</strong></p>
<p>Using <code>update()</code>, but it doesn’t work well with multi-key joins.</p>
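<p>I have also sketched a lookup through a <code>MultiIndex</code>-keyed Series instead of <code>merge</code>, though I have not benchmarked whether it is actually faster:</p>
<pre><code>lookup = A.set_index(['A1', 'A2'])['D']

# Map each (A1, A2) pair in B against the lookup Series; missing keys become NaN
B['D'] = pd.MultiIndex.from_frame(B[['A1', 'A2']]).map(lookup)
</code></pre>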
<p><strong>Question:</strong></p>
<p>Is there a more efficient way to map <code>D</code> from <code>A</code> to <code>B</code> faster without unnecessary column operations?</p>
|
<python><pandas><performance><join><time>
|
2025-02-13 08:29:08
| 1
| 1,427
|
hanugm
|
79,435,219
| 3,076,866
|
Cannot Apply + Operator Between Series[float] and float
|
<p>I'm developing an indicator in <strong>Indie</strong>. I’m trying to calculate <strong>Bollinger Bands</strong> and <strong>Keltner Channels</strong>, but I’m running into an error when adding a <code>Series[float]</code> to a <code>float</code> value.</p>
<p>Here’s the error message:</p>
<blockquote>
<p>Error: 20:20 cannot apply operator <code>+</code> to operands of types: <class 'indie.Series[float]'> and <class 'float'></p>
</blockquote>
<p>Here’s the relevant code snippet:</p>
<pre class="lang-py prettyprint-override"><code>sDev = StdDev.new(self.close, length)
mid_line_bb = Sma.new(self.close, length)
lower_band_bb = mid_line_bb + num_dev_dn * sDev[0] # <-- ERROR HERE
upper_band_bb = mid_line_bb + num_dev_up * sDev[0] # <-- ERROR HERE
</code></pre>
<p><strong>What I’ve Tried:</strong></p>
<ol>
<li>Wrapping <code>sDev[0]</code> with <code>MutSeriesF.new()</code>, but that caused another issue.</li>
<li>Using <code>sDev.value_or(0)</code> (which doesn’t seem to exist in Indie v4).</li>
<li>Looking for an explicit type conversion function in Indie’s documentation.</li>
</ol>
<p>I assume the issue is that <code>mid_line_bb</code> is a <strong><code>Series[float]</code></strong>, while <code>sDev[0]</code> is a <strong>single float</strong>. How should I properly handle this type of mismatch in <strong>Indie</strong>?</p>
|
<python><indie><indie-v4>
|
2025-02-13 06:22:10
| 1
| 9,715
|
Hardik Satasiya
|
79,435,178
| 5,023,889
|
Python IO – show read speed interactively
|
<p>Is it possible to show the reading speed interactively while reading a file?</p>
<p>The reason is that I would like to measure the speed when reading a file from a network drive via Python and compare it against copying the file to a local drive manually.</p>
<p>Let's say I have a simple open read function:</p>
<pre class="lang-py prettyprint-override"><code>with open("path/to/network/drive/file", "rb") as f: # Show speed interactively when reading
# do stuff
</code></pre>
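<p>The closest I have come is reading the file in chunks and printing a running rate myself (a rough sketch; the 1 MiB chunk size is an arbitrary choice):</p>
<pre class="lang-py prettyprint-override"><code>import time

chunk_size = 1024 * 1024  # 1 MiB per read (arbitrary)
total_bytes = 0
start = time.monotonic()

with open("path/to/network/drive/file", "rb") as f:
    while chunk := f.read(chunk_size):
        total_bytes += len(chunk)
        elapsed = max(time.monotonic() - start, 1e-9)
        print(f"\r{total_bytes / elapsed / 1e6:.1f} MB/s", end="", flush=True)
print()
</code></pre>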
<p>Thanks!</p>
|
<python><io>
|
2025-02-13 05:56:58
| 1
| 4,949
|
Darren Christopher
|
79,434,994
| 678,228
|
Consolidating data with multiple columns in Pandas
|
<p>I have an excel file, with 4 columns, namely "Item", "Size", "Price", "Quantity".</p>
<p>I would like to combine all the items with the same values in "Item", "Size" and "Price", sum up their "Quantity" into a single entry, and keep the original row order.</p>
<p>For example:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Item.</th>
<th>Size.</th>
<th>Price</th>
<th>Quantity</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>2-3cm</td>
<td>0.5</td>
<td>10</td>
</tr>
<tr>
<td>A</td>
<td>1-2cm</td>
<td>0.6</td>
<td>20</td>
</tr>
<tr>
<td>B</td>
<td>2cm</td>
<td>0.7</td>
<td>30</td>
</tr>
<tr>
<td>A</td>
<td>1-2cm</td>
<td>0.6</td>
<td>40</td>
</tr>
</tbody>
</table></div>
<p>Desired Output:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Item.</th>
<th>Size.</th>
<th>Price</th>
<th>Quantity</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>2-3cm</td>
<td>0.5</td>
<td>10</td>
</tr>
<tr>
<td>A</td>
<td>1-2cm</td>
<td>0.6</td>
<td>60</td>
</tr>
<tr>
<td>B</td>
<td>2cm</td>
<td>0.7</td>
<td>30</td>
</tr>
</tbody>
</table></div>
<p>I tried to use the groupby method, but the results don't seem correct.
Any idea how to solve this with Excel 365 + Python Pandas?</p>
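<p>For reference, this is roughly the direction I attempted (<code>df</code> stands for the table read from the Excel file); I am not sure whether <code>sort=False</code> is enough to keep the original row order:</p>
<pre><code>out = (df.groupby(['Item', 'Size', 'Price'], as_index=False, sort=False)['Quantity']
         .sum())
print(out)
</code></pre>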
|
<python><excel><pandas>
|
2025-02-13 03:33:06
| 2
| 3,521
|
Feng-Chun Ting
|
79,434,990
| 704,265
|
In AWS CDK, how can we get the arn from the role that is created along with an aws_sso.CfnPermissionSet?
|
<p>To be explicit, the code uses the following class: <code>from aws_cdk.aws_sso import CfnPermissionSet</code>, with the <code>inline_policy</code> argument.</p>
<p>After it's deployed, there is a role that matches that <code>inline_policy</code>. I'm looking for a way to use that role arn within the stack, to allow me to set it as a principal in another role for a <code>sts:AssumeRole</code> grant.</p>
<p>I've read somewhere that taking the <code>ref</code> of the CfnPermissionSet yields an ARN that I could transform with some string replacement to get the role ARN, but that does not work. The automatically generated ids don't match between the PermissionSet ARN and the role ARN.</p>
|
<python><amazon-web-services><aws-cdk>
|
2025-02-13 03:25:08
| 0
| 3,166
|
Finch_Powers
|
79,434,771
| 1,115,716
|
launching multiple commands in parallel and parsing their outputs
|
<p>I have two functions, each of which launches a command, and I need to run both in parallel:</p>
<pre><code>def time_render(filename: str) -> str:
# Run the render and time the operation
# We expect the full path for the file
command_to_run = '{} -b {} -f 10 -- --cycles-device CUDA'.format(
render_executable, filename)
cmd = subprocess.Popen(command_to_run, shell=True, stdout=subprocess.PIPE)
elapsed_time_str = '-1' # if you see this output, something bad happened
for line in cmd.stdout:
decoded_line = line.decode().strip()
if "Time: " in decoded_line and "(Saving: " in decoded_line:
start_index = decoded_line.find(":")
end_index = decoded_line.find("(")
# this is the line we want with the render time
elapsed_time_str = convert_to_seconds(
decoded_line[start_index+1:end_index].strip())
return elapsed_time_str
def get_peak_GPU_mem() -> dict:
# Monitor the memory usage on the GPU and
# record the peak
peak_mem = {}
command_to_run = 'nvidia-smi --query-gpu=gpu_bus_id,memory.used --format=csv -l 1'
cmd = subprocess.Popen(command_to_run, shell=True, stdout=subprocess.PIPE)
for line in cmd.stdout:
decoded_line = line.decode().strip()
# The output of this command will usually look like this:
# 00000000:2D:00.0, 101 MiB <-- One GPU
# 00000000:99:00.0, 11 MiB <-- Another GPU
# Each GPU will have a unique PCI ID
split_tokens = decoded_line.split(',')
gpu_id = split_tokens[0].strip()
curr_mem = int(split_tokens[1].strip().split()[0])
if gpu_id not in peak_mem:
peak_mem[gpu_id] = curr_mem
else:
# we have a previous entry, compare the entries
if curr_mem > peak_mem[gpu_id]:
peak_mem[gpu_id] = curr_mem
</code></pre>
<p>I get that I could probably use the <code>multiprocessing</code> module to run these in parallel, but I need to be able to terminate the <code>get_peak_GPU_mem</code> function since that command never terminates. How can I terminate it such that it returns the value it's building? Is there a better way to organize this? I basically need to run those two commands and monitor their respective outputs in specific ways.</p>
<p>Edit: To clarify, I need to be able to terminate a function while getting a value back since the command it runs never stops.</p>
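<p>The rough direction I have been poking at is running the monitor in a thread and terminating its subprocess once the render finishes (a sketch only; the <code>noheader</code> format flag and the threading structure are my own guesses and I have not verified this shuts down cleanly):</p>
<pre><code>import subprocess
import threading

def monitor_gpu(stop_event, peak_mem):
    # Hypothetical stoppable variant of get_peak_GPU_mem;
    # 'noheader' skips the csv header line so every line parses
    cmd = subprocess.Popen(
        'nvidia-smi --query-gpu=gpu_bus_id,memory.used --format=csv,noheader -l 1',
        shell=True, stdout=subprocess.PIPE)
    for line in cmd.stdout:
        if stop_event.is_set():
            cmd.terminate()
            break
        gpu_id, mem = line.decode().strip().split(',')
        mem_mib = int(mem.strip().split()[0])
        peak_mem[gpu_id] = max(peak_mem.get(gpu_id, 0), mem_mib)

stop = threading.Event()
peak = {}
monitor = threading.Thread(target=monitor_gpu, args=(stop, peak))
monitor.start()
elapsed = time_render('/path/to/scene.blend')  # placeholder path
stop.set()
monitor.join()
print(elapsed, peak)
</code></pre>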
|
<python>
|
2025-02-13 00:07:37
| 0
| 1,842
|
easythrees
|
79,434,678
| 2,626,865
|
explain the following attribute lookup
|
<pre><code>import io

class Test(io.BufferedIOBase):
def __init__(self, f):
self.f = f
def __getattr__(self, name):
print("getattr: searching for: ", name)
#try:
# v = getattr(self.f, name)
#except Exception:
# print(" found nothing!")
# raise
#print(" found: ", v)
#return v
#return 22
raise AttributeError(name)
def __getattribute__(self, name):
print("1.getattribute@self: searching for: ", name, flush=True)
try:
v = super().__getattribute__(name)
except Exception as e:
print(" 1.exception: ", e)
print(" 1.found nothing!", flush=True)
pass
else:
print(" 1.found: ", v, flush=True)
return v
print("2.getattribute@self.f: searching for: ", name, flush=True)
#f = super().__getattribute__("f")
f = object.__getattribute__(self, "f")
try:
v = getattr(f, name)
except Exception as e:
print(" 2.exception: ", e)
print(" 2.found nothing!", flush=True)
raise
else:
print(" 2.found: ", v, flush=True)
return v
print("ERROR")
f1 = open(p, "rb")
f2 = Test(f1)
print("\nTest 01:")
v = f2.closed
print("result: ", v)
print("\nTest 02:")
v = f2.qwer
print("result: ", v)
</code></pre>
<pre><code>Test 01:
1.getattribute@self: searching for: closed
1.getattribute@self: searching for: __IOBase_closed
1.exception: 'Test' object has no attribute '__IOBase_closed'
1.found nothing!
2.getattribute@self.f: searching for: __IOBase_closed
2.exception: '_io.BufferedReader' object has no attribute '__IOBase_closed'
2.found nothing!
getattr: searching for: __IOBase_closed
1.found: False
result: False
Test 02:
1.getattribute@self: searching for: qwer
1.exception: 'Test' object has no attribute 'qwer'
1.found nothing!
2.getattribute@self.f: searching for: qwer
2.exception: '_io.BufferedReader' object has no attribute 'qwer'
2.found nothing!
getattr: searching for: qwer
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[532], line 49
47 print("result: ", v)
48 print("\nTest 02:")
---> 49 v = f2.qwer
50 print("result: ", v)
Cell In[532], line 15, in Test.__getattr__(self, name)
6 print("getattr: searching for: ", name)
7 #try:
8 # v = getattr(self.f, name)
9 #except Exception:
(...)
13 #return v
14 #return 22
---> 15 raise AttributeError(name)
AttributeError: qwer
</code></pre>
<p>Following <a href="https://docs.python.org/3/howto/descriptor.html#invocation-from-an-instance" rel="nofollow noreferrer"><code>object_getattribute()</code></a>, this is what I think happens:</p>
<ol>
<li><code>closed</code> resolves to the non-data property from <a href="https://github.com/python/cpython/blob/3.13/Lib/_pyio.py#L466" rel="nofollow noreferrer">IOBase</a>.</li>
<li>The property is called before the previous <code>__getattribute__</code> returns, looking up <code>self.__closed</code>, which is actually <code>_IOBase__closed</code> per <a href="https://docs.python.org/3/tutorial/classes.html#private-variables" rel="nofollow noreferrer">name-mangling rules</a> (a small standalone check of this mangling is sketched after this list).</li>
</ol>
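<p>For reference, this is the minimal standalone check I used to convince myself how the name mangling behaves in pure Python (it is not the CPython io code itself, just an illustration):</p>
<pre><code>class Demo:
    def probe(self):
        # inside the class body this mangles to self._Demo__closed
        return self.__closed

d = Demo()
d._Demo__closed = True
print(d.probe())                    # True
print('_Demo__closed' in vars(d))   # True
</code></pre>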
<p>Now the steps become strange.</p>
<ol start="3">
<li>My <code>__getattribute__</code> seems to be searching for <code>__IOBase_closed</code> (which doesn't exist), rather than <code>_IOBase__closed</code>. This I can attribute to mismatches between the Python and CPython code.</li>
<li>My <code>__getattribute__</code> fails and raises <code>AttributeError</code>, and then my <code>__getattr__</code> fails and raises <code>AttributeError</code>. And yet somehow, the value of <code>_IOBase__closed</code> is returned. I can only explain this if Python calls another getattr* function further up the MRO.</li>
</ol>
<p>So now I'm stumped.</p>
|
<python>
|
2025-02-12 22:50:14
| 0
| 2,131
|
user19087
|
79,434,667
| 1,188,381
|
pymongo create a dict based on a group match
|
<p>I'm new to pymongo and can't quite wrap my head around the logic of creating a nested group and match.</p>
<p>I'm trying to find all the unique model types and then create a list of the logical names used for each model.</p>
<p>The output I want to create (or something close to it):</p>
<pre><code>{
    "Model-1": {"Devices": ["Name-1", "Name-4"]},
    "Model-2": {"Devices": ["Name-2", "Name-3"]}
}
</code></pre>
<p>Mongo DB Data:</p>
<pre><code>[
{
"_id": "1",
"cdate": {
"$date": "2023-11-16T00:00:00.000Z"
},
"AP Name": "Name-1",
"Model": "Model-1"
},
{
"_id": "2",
"cdate": {
"$date": "2023-11-16T00:00:00.000Z"
},
"AP Name": "Name-2",
"Model": "Model-2"
},
{
"_id": "3",
"cdate": {
"$date": "2023-11-16T00:00:00.000Z"
},
"AP Name": "Name-3",
"Model": "Model-2"
},
{
"_id": "4",
"cdate": {
"$date": "2023-11-16T00:00:00.000Z"
},
"AP Name": "Name-4",
"Model": "Model-1"
}
]
</code></pre>
<p>My code: I don't think this is the best way to do this, so any help or suggestions would be great.</p>
<pre class="lang-py prettyprint-override"><code>def mongo_aggregate_tags(foo_coll, foo_keyword, foo_match, x):
agg_sites = foo_coll.aggregate(
[
{"$match": {f"{foo_match}": x}},
{"$group": {"_id": f"${foo_keyword}"}},
{"$sort": {foo_keyword: 1}},
]
)
return agg_sites
for f_item in coll_inv.distinct("Model"):
for x in mongo_aggregate_tags(coll_inv, "Model", "AP Name", f_item):
print(x, " ", f_item)
print(type(x))
</code></pre>
<p>The output I actually get looks like this:</p>
<pre><code>{'_id': 'Model-1'} Name-1
{'_id': 'Model-1'} Name-4
{'_id': 'Model-2'} Name-2
{'_id': 'Model-2'} Name-3
</code></pre>
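<p>From reading about the aggregation framework, I suspect I need <code>$push</code> inside a single <code>$group</code> stage rather than looping over <code>distinct</code>, something like the untested sketch below:</p>
<pre><code>pipeline = [
    {"$group": {"_id": "$Model", "Devices": {"$push": "$AP Name"}}},
    {"$sort": {"_id": 1}},
]

for doc in coll_inv.aggregate(pipeline):
    print(doc)
# hoped-for shape: {'_id': 'Model-1', 'Devices': ['Name-1', 'Name-4']}
</code></pre>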
|
<python><mongodb><pymongo-3.x>
|
2025-02-12 22:40:11
| 1
| 1,445
|
onxx
|
79,434,556
| 3,282,758
|
Best place to initialize a variable from a postgres database table after django project startup
|
<p>I have a django project where I have some database tables.</p>
<p>One of the database tables is designed to store messages and their titles. This helps me to create/alter these messages from my django-admin.</p>
<p>Now I want to initialize a variable (as a dictionary) from this table as follows :</p>
<pre><code>MY_MSGS = {record.name : {'title':record.title, 'message':record.message} for record in MyTable.objects.all()}
</code></pre>
<p>This must happen at the server startup for now.
<code>MY_MSGS</code> must be accessible to the different view-files.</p>
<p>Later I would want to periodically update <code>MY_MSGS</code> by reading MyTable again.</p>
<p>So I want <code>MY_MSGS</code> to behave as a global that is available to all my view files and is initialized after startup is complete.</p>
<p>FYI, I have multiple view files that are all imported from <code>views.py</code>. Also, this is a very small table with at most about 15 messages, so I do not mind holding this data in memory.</p>
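<p>The direction I was considering is populating a plain module from <code>AppConfig.ready()</code> (a sketch; the app and module names are placeholders, and I have read that database access inside <code>ready()</code> is discouraged, which is partly why I am asking):</p>
<pre><code># myapp/apps.py  ('myapp' and the module layout are placeholders)
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # imported here to avoid touching models before the app registry is ready
        from .models import MyTable
        from . import message_cache  # a plain module that just holds MY_MSGS

        message_cache.MY_MSGS = {
            record.name: {'title': record.title, 'message': record.message}
            for record in MyTable.objects.all()
        }
</code></pre>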
|
<python><django><django-models><django-views>
|
2025-02-12 21:52:58
| 1
| 1,493
|
user3282758
|