| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,280,318
| 2,451,542
|
Skip rows with pandas.read_csv(..., comment="#") but allow hash in the data?
|
<p>Is there any way in pandas to ignore #-commented lines in their entirety, but leave the # symbol alone in the CSV body?</p>
<pre><code>import pandas as pd
from io import StringIO
csv_content = """\
# title: Sample CSV
# description: This dataset
id,name,favourite_hashtag
1,John,#python
2,Jane,#rstats
"""
data = pd.read_csv(StringIO(csv_content),comment='#')
print(data)
</code></pre>
<p>Right now we get:</p>
<pre><code> id name favourite_hashtag
0 1 John NaN
1 2 Jane NaN
</code></pre>
<p>If not, is there any hope of making a feature request to that end?</p>
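<p>One workaround sketch (not a pandas feature; it assumes comments only ever occupy whole lines) is to strip full-line comments before handing the text to <code>read_csv</code>, so the <code>#</code> inside data fields is never touched:</p>
<pre><code>import pandas as pd
from io import StringIO

csv_content = """\
# title: Sample CSV
# description: This dataset
id,name,favourite_hashtag
1,John,#python
2,Jane,#rstats
"""

# Keep every line that does not start with '#' and parse the rest normally.
filtered = "\n".join(
    line for line in csv_content.splitlines()
    if not line.lstrip().startswith("#")
)
data = pd.read_csv(StringIO(filtered))
print(data)
</code></pre>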
|
<python><pandas>
|
2024-12-14 08:29:58
| 3
| 357
|
matentzn
|
79,280,126
| 3,358,488
|
Torchscript failure: 'RecursiveScriptModule' object has no attribute
|
<p>I am trying to use PyTorch's Torchscript to script a module defined in a third-party library.</p>
<p>The example below is an abstract version of the problem. Suppose some library I cannot modify defines <code>SomeClass</code> and <code>LibraryModule</code> classes, where the latter is a PyTorch module.</p>
<p><code>LibraryModule</code>'s main method is <code>compute</code>, which takes a tensor and an instance of <code>SomeClass</code>.</p>
<pre><code>import torch
import torch.nn as nn
class SomeClass:
"""A utility class in a library I cannot modify"""
def __init__(self, x):
self.x = x
class LibraryModule(nn.Module):
"""A module provided in a library I cannot modify"""
def __init__(self, in_features, out_features):
super().__init__()
self.linear = nn.Linear(in_features, out_features)
def compute(self, x, some_class_object: SomeClass):
"""
Main function of my module; like forward, but takes a non-tensor argument
"""
return self.linear(x) * some_class_object.x
</code></pre>
<p>This is what I tried to get a script for the class:</p>
<pre><code>script = torch.jit.script(LibraryModule(3, 2))
print(script.compute(torch.tensor([10, 20, 30]), SomeClass(2)))
</code></pre>
<p>but I get the following error:</p>
<pre><code> File "torchscript.py", line 25, in <module>
print(script.compute(torch.tensor([10, 20, 30]), SomeClass(2)))
^^^^^^^^^^^^^^
File "\Lib\site-packages\torch\jit\_script.py", line 826, in __getattr__
return super().__getattr__(attr)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "Lib\site-packages\torch\jit\_script.py", line 533, in __getattr__
return super().__getattr__(attr)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "Lib\site-packages\torch\nn\modules\module.py", line 1931, in __getattr__
raise AttributeError(
AttributeError: 'RecursiveScriptModule' object has no attribute 'compute'. Did you mean: 'compile'?
</code></pre>
<p>I also tried scripting the method directly:</p>
<pre><code>compute_script = torch.jit.script(LibraryModule(3, 2).compute)
print(compute_script(torch.tensor([10, 20, 30]), SomeClass(2)))
</code></pre>
<p>but then I get:</p>
<pre><code>RuntimeError:
'Tensor (inferred)' object has no attribute or method 'linear'.:
File "torchscript.py", line 21
Main function of my module; like forward, but takes a non-tensor argument
"""
return self.linear(x) * some_class_object.x
~~~~~~~~~~~ <--- HERE
</code></pre>
<p>How can I get a working script of <code>LibraryModule.compute</code>?</p>
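<p>One hedged workaround sketch: since <code>torch.jit.script</code> compiles only <code>forward</code> plus methods reachable from it (or marked with <code>@torch.jit.export</code>, which would require editing the library), a thin wrapper module can expose the same computation through <code>forward</code> and take the plain multiplier instead of the non-scriptable <code>SomeClass</code> instance:</p>
<pre><code>import torch
import torch.nn as nn

# Assumes LibraryModule is defined as in the question and cannot be edited.
class ComputeWrapper(nn.Module):
    def __init__(self, inner: nn.Module):
        super().__init__()
        self.inner = inner

    def forward(self, x: torch.Tensor, multiplier: float) -> torch.Tensor:
        # Reimplements compute() through the wrapped module's submodule,
        # passing a plain float because TorchScript cannot type SomeClass.
        return self.inner.linear(x) * multiplier

script = torch.jit.script(ComputeWrapper(LibraryModule(3, 2)))
print(script(torch.tensor([10.0, 20.0, 30.0]), 2.0))
</code></pre>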
|
<python><pytorch><torchscript>
|
2024-12-14 05:37:41
| 1
| 5,872
|
user118967
|
79,280,099
| 11,953,868
|
The best way to reduce the calculation time of formulas in nested loops?
|
<p>Could you tell me what is the best way to reduce the calculation time of formulas in nested loops? I have code like this:</p>
<pre><code>for k in range(0, 130):
for i in range(0, 1600):
array2[i,k] = 0
for j in range(0, 1600):
array2[i,k] = array2[i,k] + constant_value * (array1[j,k] * function1(i - j))
</code></pre>
<p>array1 and array2 contain double-precision numbers. This block of code takes 15-20 minutes to execute. The problem is that the block is nested in another loop that has 100 to 400 iterations, so sometimes the whole calculation takes more than 24 hours. Is there a way to speed up this piece of code?</p>
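<p>For reference, a vectorized sketch of the same computation, under the assumption that <code>function1</code> depends only on the offset <code>i - j</code> and can be evaluated on a whole array of offsets: the kernel is built once as a matrix, and the two inner loops collapse into a single matrix product.</p>
<pre><code>import numpy as np

n_i, n_j, n_k = 1600, 1600, 130

def function1(offset):
    # Placeholder for the real kernel; only here to make the sketch runnable.
    return np.exp(-np.abs(offset) / 100.0)

array1 = np.random.rand(n_j, n_k)
constant_value = 0.5

# kernel[i, j] = function1(i - j), computed once instead of 1600*1600 calls per k.
offsets = np.arange(n_i)[:, None] - np.arange(n_j)[None, :]
kernel = function1(offsets)

# array2[i, k] = constant_value * sum_j kernel[i, j] * array1[j, k]
array2 = constant_value * (kernel @ array1)
</code></pre>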
|
<python><for-loop><nested><nested-loops>
|
2024-12-14 05:10:59
| 3
| 2,179
|
James Jacques
|
79,280,091
| 5,942,779
|
Behavior of df.map() inside another df.apply()
|
<p>I find this <a href="https://stackoverflow.com/questions/55929264/using-a-dataframe-to-format-the-style-of-another-dataframe">code</a> very interesting. I modified the code a little to improve the question. Essentially, the code uses a DataFrame to format the style of another DataFrame using <code>pd.style</code>.</p>
<pre class="lang-py prettyprint-override"><code>t1 = pd.DataFrame({'x':[300,200,700], 'y':[100,300,200]})
t2 = pd.DataFrame({'x':['A','B','C'], 'y':['C','B','D']})
def highlight_cell(val, props=''):
return props if val > 200 else ''
t2.style.apply(lambda x: t1.map(highlight_cell, props='background-color:yellow'), axis=None)
</code></pre>
<p><a href="https://i.sstatic.net/wXBuO1Y8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wXBuO1Y8.png" alt="enter image description here" /></a></p>
<p>But can anyone explain how the last line works? I couldn't find Pandas documentation that clarifies the behavior of <code>df.map()</code> inside another <code>df.apply()</code>.</p>
<p>To me, the code reads like <strong>for each item in t1, apply <code>highlight_cell()</code> to the entire t2 at once</strong>, and then return the whole thing, as illustrated in this pseudocode.</p>
<pre><code>for x in all_items_in_t1:
yield [highlight_cell(y) for y in all_items_in_t2]
</code></pre>
<p>However, the output is saying <strong>for each item in t1, apply <code>highlight_cell()</code> only to the corresponding item in t2 that has the same (x, y) location as that item in t1</strong>, like this.</p>
<pre><code>for x, y in zip(all_items_in_t1, all_items_in_t2):
yield highlight_cell(y)
</code></pre>
<p>I'm still having trouble understanding this pattern because it seems a bit confusing. Can anyone explain it more clearly?</p>
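<p>One way to see what the expression produces, under the reading that <code>Styler.apply(..., axis=None)</code> expects a DataFrame of CSS strings with the same shape and labels as the styled frame, is to print the intermediate result by itself:</p>
<pre><code>import pandas as pd

t1 = pd.DataFrame({'x': [300, 200, 700], 'y': [100, 300, 200]})
t2 = pd.DataFrame({'x': ['A', 'B', 'C'], 'y': ['C', 'B', 'D']})

def highlight_cell(val, props=''):
    return props if val > 200 else ''

# The lambda never looks at t2's values: it returns a DataFrame of CSS
# strings, built elementwise over t1, whose labels match t2.
styles = t1.map(highlight_cell, props='background-color:yellow')
print(styles)

# Styler.apply(..., axis=None) then pairs each CSS string with the t2 cell
# at the same row/column label, which is why the result looks "zipped".
</code></pre>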
|
<python><pandas>
|
2024-12-14 05:05:49
| 1
| 689
|
Scoodood
|
79,280,060
| 1,625,455
|
Can Python 3 copy files without overwriting?
|
<p>In a 2016 blogpost, <a href="https://moltenform.com/articles/avoiding-race-conditions-copying/" rel="nofollow noreferrer">it is claimed</a>:</p>
<pre><code>def notActuallySafeCopy(srcfile, destfile):
if os.path.exists(destfile):
raise IOError('destination already exists')
shutil.copy(srcfile, destfile)
</code></pre>
<blockquote>
<p>there is a race condition here -- there is a brief window of time after the check in which if a file is created at "destfile", it will be replaced.</p>
</blockquote>
<blockquote>
<p>... the Python documentation explicitly says that shutil.copy and shutil.copy2 can silently overwrite the destination file if it already exists, and there is no flag to prevent this.</p>
</blockquote>
<p>Is this still true? Does Python supply a way to copy a file that does not overwrite a destination file?</p>
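<p>For reference, a sketch of one common pattern (not a <code>shutil</code> flag): open the destination in exclusive-create mode, which maps to <code>O_CREAT | O_EXCL</code>, so the create fails atomically if the file already exists and the check-then-copy window disappears. Note this copies only the data, not metadata the way <code>copy2</code> does.</p>
<pre><code>import shutil

def copy_noclobber(srcfile, destfile):
    # 'xb' raises FileExistsError if destfile already exists.
    with open(srcfile, 'rb') as src, open(destfile, 'xb') as dst:
        shutil.copyfileobj(src, dst)
</code></pre>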
|
<python><python-3.x><file><copy><race-condition>
|
2024-12-14 04:25:49
| 1
| 346
|
Atomic Tripod
|
79,279,979
| 968,132
|
GCP pub-sub errors
|
<p>I am building a simple webapp that accepts HTTP POSTs and should trigger a Cloud Pub-Sub event. Without pub-sub, the app deploys fine and I can reach the endpoint.</p>
<p>I've created a sample function. I've deployed it both through Cloud Run Functions (console.cloud.google.com/functions/), where Deployed By shows as "Cloud Run Functions", and through Cloud Run (console.cloud.google.com/run), where Deployed By shows as my account. Same outcome either way.</p>
<p>For the initial pre-packaged hello world example, both deploy no problem.</p>
<p>Now let's add the pub-sub capability. Add <code>google-cloud-pubsub</code> to the requirements file. Add the pub-sub client to the app. Create a Pub-Sub topic (which I do via the <a href="https://console.cloud.google.com/cloudpubsub" rel="nofollow noreferrer">UI</a>).</p>
<p>So now it looks like this...</p>
<pre><code>import json
import os
import functions_framework
from google.cloud import pubsub_v1
PROJECT_ID = "my project"
publisher = pubsub_v1.PublisherClient()
@functions_framework.http
def hello_http(request):
"""HTTP Cloud Function.
Args:
request (flask.Request): The request object.
<https://flask.palletsprojects.com/en/1.1.x/api/#incoming-request-data>
Returns:
The response text, or any set of values that can be turned into a
Response object using `make_response`
<https://flask.palletsprojects.com/en/1.1.x/api/#flask.make_response>.
"""
request_json = request.get_json(silent=True)
request_args = request.args
topic_name = 'projects/{project_id}/topics/{topic}'.format(
project_id=PROJECT_ID,
topic='test-topic',
)
message = {
'key': 'value',
'info': 'This is a test message'
}
future = publisher.publish(topic_name, json.dumps(message).encode("utf-8"))
future.result()
if request_json and 'name' in request_json:
name = request_json['name']
elif request_args and 'name' in request_args:
name = request_args['name']
else:
name = 'World'
return 'Hello {}!'.format(name)
</code></pre>
<p>I'll reliably get these warnings in the console. What do they mean?</p>
<blockquote>
<p>WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1734272617.454893 1 init.cc:229] grpc_wait_for_shutdown_with_timeout() timed out.</p>
</blockquote>
<p>Additionally, I'll get weird errors in the Google Cloud Platform Run testing console that look to be build related (e.g. <code>/bin/sh: 2c3d685a15e9: not found</code>). Sometimes it errors, sometimes it does not.</p>
<pre><code>[9:47:15 PM] - Preparing function sources ...
[9:47:15 PM] - Loading new function sources ...
[9:47:22 PM] - Starting function...
[9:47:22 PM] - WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1734144439.303879 20 init.cc:229] grpc_wait_for_shutdown_with_timeout() timed out.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1734144441.495104 7 init.cc:229] grpc_wait_for_shutdown_with_timeout() timed out.
/bin/sh: 2c3d685a15e9: not found
/bin/sh: 2c3d685a15e9: not found
</code></pre>
|
<python><google-cloud-platform><google-cloud-pubsub><google-cloud-run>
|
2024-12-14 02:54:05
| 0
| 1,148
|
Peter
|
79,279,969
| 654,019
|
Finding the last unique string in a list
|
<p>I was doing some coding on CodeSignal to refresh my knowledge.</p>
<p>I was asked this question:</p>
<blockquote>
<p>Suppose you've got a list of words, let's say ['apple', 'banana', 'apple', 'mango', 'banana']. Each word could be repeated an arbitrary number of times. Think of this list as a conveyor belt in a space-age fruit factory. Now, your task is to identify the last unique fruit on the belt, i.e., the one that didn't repeat. If all the fruits are repeating, then there ain't any unique fruit, and your function should return an empty string ('').</p>
<p>Your function should take a list of strings (the conveyor belt of fruits) as input. Now, a string can be any word, not just a fruit name, and the list can have any number of strings. There could also be an edge case where the list has no strings at all (Empty conveyor belt, eh?). For output, your function should return the last unique string in the list or an empty string if there are not any of them.</p>
<p>To solve this task, you are expected to use sets. Sets are efficient for tracking seen and duplicate elements due to their fast membership testing capability.</p>
</blockquote>
<p>and my solution is:</p>
<pre><code>def find_unique_string(words):
last_uniq = ''
seen = set()
for w in words:
if last_uniq == w:
last_uniq = ''
if w not in seen:
last_uniq = w
seen.add(w)
return last_uniq
</code></pre>
<p>and the test cases are:</p>
<pre><code>print(find_unique_string(['apple', 'banana', 'apple', 'mango', 'banana'])) # It should print: 'mango'
print(find_unique_string(['hello', 'world', 'hello'])) # It should print: 'world'
print(find_unique_string(['hello', 'world', 'hello', 'world'])) # It should print: ''
print(find_unique_string([])) # It should print: ''
print(find_unique_string(['apple', 'banana', 'apple', 'kiwi', 'banana', 'kiwi'])) # it should print ''
</code></pre>
<p>The CodeSignal AI doesn't accept my code as a solution and reports that it is not correct, but it cannot provide any case in which the result of my code is wrong.</p>
<p>Now my question:</p>
<p>Who is right? Me or CodeSignal AI?</p>
<p>If I am right, how can I prove to CodeSignal AI that I am right so I can pass the test and go to the next one?</p>
<p>If AI is right, can you give me a sample test case that breaks my code and doesn't generate the expected output?</p>
<h1>Note:</h1>
<p>I do not want you to write a solution which would have two sets, one for seen and one for duplicate as this is the result that AI is expecting (I think), but I want to know if my code is correct and if it is, how to prove that it is, or you have a test case to prove that my code is wrong.</p>
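<p>One way to probe the algorithm by hand (a test sketch, not a rewritten solution) is a belt where the most recent candidate turns out to be a duplicate while an older word is still unique, so the cleared <code>last_uniq</code> is never restored:</p>
<pre><code>def find_unique_string(words):
    last_uniq = ''
    seen = set()
    for w in words:
        if last_uniq == w:
            last_uniq = ''
        if w not in seen:
            last_uniq = w
        seen.add(w)
    return last_uniq

# 'x' never repeats, but once the second 'a' clears last_uniq nothing
# brings 'x' back, so this returns '' instead of 'x'.
print(find_unique_string(['x', 'a', 'a']))
</code></pre>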
|
<python><algorithm>
|
2024-12-14 02:39:02
| 9
| 18,400
|
mans
|
79,279,962
| 4,211,297
|
can't get doit task to accept parameters from command line
|
<p>the command:</p>
<p><code>doit -f ./srvapp/doit-build.py build --stype $argc_type --zone $argc_zone</code></p>
<p>the DAG</p>
<pre><code>def task_build():
"""build"""
def python_build(stype, zone):
print(f"TYPE: {stype}, ZONE: {zone}")
return {
'actions': [python_build],
'verbosity':2,
'params':[
{'name':'zone', 'default': 'BLANKZONE'},
{'name':'stype', 'default': 'BLANKSTYPE'}
]
}
</code></pre>
<p>the result:</p>
<pre><code>TYPE: email-sifter
ZONE: uat
ACTION: build
ERROR: Error parsing Task: option --stype not recognized (parsing options: [CmdOption({'name':'zone', 'short':'','long':'' }), CmdOption({'name':'stype', 'short':'','long':'' })]). Got: ['--stype', 'email-sifter', '--zone', 'uat']
</code></pre>
<p>Not sure why it's telling me <code>--stype</code> is not recognized.</p>
<p>Following doc from here: <a href="https://pydoit.org/task-args.html#task-action-parameters" rel="nofollow noreferrer">https://pydoit.org/task-args.html#task-action-parameters</a></p>
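<p>For comparison, a sketch of the same task with explicit command-line option names. This is an assumption read off the error message above, where both <code>CmdOption</code> entries show empty <code>'short'</code> and <code>'long'</code> fields: giving each param a <code>'long'</code> name is what would make <code>--stype</code>/<code>--zone</code> recognizable on the command line.</p>
<pre><code>def task_build():
    """build"""
    def python_build(stype, zone):
        print(f"TYPE: {stype}, ZONE: {zone}")
    return {
        'actions': [python_build],
        'verbosity': 2,
        'params': [
            # 'long' defines the --option name; 'short' would define -o style flags.
            {'name': 'zone', 'long': 'zone', 'default': 'BLANKZONE'},
            {'name': 'stype', 'long': 'stype', 'default': 'BLANKSTYPE'},
        ],
    }
</code></pre>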
|
<python><doit>
|
2024-12-14 02:32:48
| 0
| 2,351
|
Pompey Magnus
|
79,279,909
| 8,191,023
|
Python type hinter function matching Protocol with kwargs
|
<p>I am having trouble with the PyCharm python type hinter when defining a function that matches a Protocol. It is giving me a warning and I am not sure why.</p>
<pre><code>from typing import Protocol, List, Any
class MyProtocol(Protocol):
def __call__(self, docs: List[str], **kwargs: Any) -> str:
pass
def do_call(
mylist: List[str],
callback: MyProtocol,
**kwargs: Any,
) -> str:
return callback(mylist, **kwargs)
def my_func(mylist: List[str], **kwargs: Any) -> str:
return ",".join(mylist)
# this results in a warning
do_call(["a", "b", "c"], my_func)
</code></pre>
<p>The type checker yields the following warning:</p>
<blockquote>
<p>Expected type 'MyProtocol', got '(mylist: list[str], kwargs: dict[str, Any]) -> str' instead</p>
</blockquote>
<p>Even the example in the <a href="https://docs.python.org/3.11/library/typing.html" rel="nofollow noreferrer">python documentation</a> results in the warning on the line that should be ok.</p>
<pre><code>from collections.abc import Iterable
from typing import Protocol
class Combiner(Protocol):
def __call__(self, *vals: bytes, maxlen: int | None = None) -> list[bytes]: ...
def batch_proc(data: Iterable[bytes], cb_results: Combiner) -> bytes:
for item in data:
...
def good_cb(*vals: bytes, maxlen: int | None = None) -> list[bytes]:
...
def bad_cb(*vals: bytes, maxitems: int | None) -> list[bytes]:
...
batch_proc([], good_cb) # OK
batch_proc([], bad_cb) # Error! Argument 2 has incompatible type because of
# different name and kind in the callback
</code></pre>
<p>For the OK line:</p>
<blockquote>
<p>Expected type 'Combiner', got '(vals: tuple[bytes, ...], maxlen: int | None) -> list[bytes]' instead</p>
</blockquote>
<p>Is there a way to tell the type hinter that my function is indeed a "MyProtocol"? Is my implementation correct, or is this maybe a defect in PyCharm's built-in type hinter?</p>
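<p>One commonly suggested variant (a sketch, not a statement about how PyCharm's checker resolves this) is to make the first parameter positional-only in both the protocol and the implementation, so the differing names <code>docs</code>/<code>mylist</code> cannot matter to a caller:</p>
<pre><code>from typing import Any, List, Protocol

class MyProtocol(Protocol):
    # The '/' marks `docs` as positional-only, so implementations are free
    # to name their first parameter differently.
    def __call__(self, docs: List[str], /, **kwargs: Any) -> str: ...

def do_call(mylist: List[str], callback: MyProtocol, **kwargs: Any) -> str:
    return callback(mylist, **kwargs)

def my_func(mylist: List[str], /, **kwargs: Any) -> str:
    return ",".join(mylist)

print(do_call(["a", "b", "c"], my_func))
</code></pre>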
|
<python><pycharm><python-typing>
|
2024-12-14 01:21:40
| 0
| 374
|
MarkRx
|
79,279,855
| 1,316,365
|
Have numpy.concatenate return proper subclass rather than plain ndarray
|
<p>I have a numpy array subclass, and I'd like to be able to concatenate them.</p>
<pre><code>import numpy as np
class BreakfastArray(np.ndarray):
def __new__(cls, n=1):
dtypes=[("waffles", int), ("eggs", int)]
obj = np.zeros(n, dtype=dtypes).view(cls)
return obj
b1 = BreakfastArray(n=1)
b2 = BreakfastArray(n=2)
con_b1b2 = np.concatenate([b1, b2])
print(b1.__class__, con_b1b2.__class__)
</code></pre>
<p>this outputs <code><class '__main__.BreakfastArray'> <class 'numpy.ndarray'></code>, but I'd like the concatenated array to also be a <code>BreakfastArray</code> class. It looks like I probably need to add a <code>__array_finalize__</code> method, but I can't figure out the right way to do it.</p>
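<p>A minimal workaround sketch, assuming the subclass carries no extra per-instance attributes that would need propagating: view-cast the plain <code>ndarray</code> result back to the subclass (a fuller solution would intercept <code>np.concatenate</code> itself via <code>__array_function__</code>):</p>
<pre><code>import numpy as np

class BreakfastArray(np.ndarray):
    def __new__(cls, n=1):
        dtypes = [("waffles", int), ("eggs", int)]
        return np.zeros(n, dtype=dtypes).view(cls)

b1 = BreakfastArray(n=1)
b2 = BreakfastArray(n=2)

# view() does not call __new__, so the custom signature is not a problem here.
con_b1b2 = np.concatenate([b1, b2]).view(BreakfastArray)
print(con_b1b2.__class__)  # <class '__main__.BreakfastArray'>
</code></pre>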
|
<python><numpy><numpy-ndarray>
|
2024-12-14 00:17:22
| 3
| 995
|
I.P. Freeley
|
79,279,854
| 1,629,904
|
How to Enable Premove in InputHandler
|
<p>I am using cm-chessboard and I am trying to allow the player to drag his own pieces even though it is not his turn. I am trying to accomplish this so I can allow premoves. A premove means I am allowed to enter my next move during my opponent's time. The issue is that I cannot work out how to let the user drag pieces when it is not his turn.</p>
<pre><code>const inputHandler = (event) => {
if (event.type === INPUT_EVENT_TYPE.movingOverSquare) {
return;
}
if (event.type !== INPUT_EVENT_TYPE.moveInputFinished) {
event.chessboard.removeLegalMovesMarkers()
}
if (event.type === INPUT_EVENT_TYPE.moveInputStarted) {
const moves = game.moves({square: event.squareFrom, verbose: true})
event.chessboard.addLegalMovesMarkers(moves)
return moves.length > 0
} else if (event.type === INPUT_EVENT_TYPE.validateMoveInput) {
const move = {from: event.squareFrom, to: event.squareTo, promotion: event.promotion}
const result = game.move(move)
if (result) {
event.chessboard.state.moveInputProcess.then(() => {
event.chessboard.setPosition(game.fen(), true)
})
return true;
} else {
let possibleMoves = game.moves({square: event.squareFrom, verbose: true})
for (const possibleMove of possibleMoves) {
if (possibleMove.promotion && possibleMove.to === event.squareTo) {
event.chessboard.showPromotionDialog(event.squareTo, COLOR.white, (result) => {
if (result.type === PROMOTION_DIALOG_RESULT_TYPE.pieceSelected) {
let pmove = game.move({from: event.squareFrom, to: event.squareTo, promotion: result.piece.charAt(1)})
event.chessboard.setPosition(game.fen(), true)
} else {
event.chessboard.setPosition(game.fen(), true)
}
})
return true
}
}
}
return result
}
}
</code></pre>
|
<javascript><python><reactjs><chess><chessboard.js>
|
2024-12-14 00:17:17
| 0
| 332
|
dianesis
|
79,279,831
| 5,404,620
|
Meaning of _* in python (in reserved classes of identifiers)
|
<p>The Python documentation writes about <code>_*</code>:</p>
<p>"Not imported by <code>from module import *</code>."</p>
<p>What do they mean by that?</p>
<p><a href="https://docs.python.org/3/reference/lexical_analysis.html#:%7E:text=_*,import%20*." rel="nofollow noreferrer">https://docs.python.org/3/reference/lexical_analysis.html#:~:text=_*,import%20*.</a></p>
|
<python>
|
2024-12-13 23:58:34
| 1
| 2,887
|
Adler
|
79,279,737
| 1,689,179
|
PIL cannot identify specific JPG file
|
<p>It works with other JPGs but not this one specifically; I'm wondering what's wrong and how it could be fixed.</p>
<pre class="lang-py prettyprint-override"><code>import io
from aiohttp import ClientSession
from PIL import Image
session = ClientSession
async with session.get(f'https://rocket-league.com/content/media/itemshopPreviewDetailed/2024-12-12.jpg') as r:
if r.status == 200:
try:
img = Image.open(io.BytesIO(await r.read()))
except Exception as e:
logging.warning(e)
</code></pre>
<p>The error is:
<code>WARNING: cannot identify image file <_io.BytesIO object at 0x7f216a624810></code>.</p>
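<p>A small diagnostic sketch (it assumes the problem may lie in what the server actually returns rather than in PIL): printing the status, content type, and the leading bytes of the download shows whether the body is a real JPEG, since an HTML error page or a truncated response also produces "cannot identify image file".</p>
<pre><code>import asyncio
from aiohttp import ClientSession

URL = 'https://rocket-league.com/content/media/itemshopPreviewDetailed/2024-12-12.jpg'

async def inspect():
    async with ClientSession() as session:
        async with session.get(URL) as r:
            raw = await r.read()
            print(r.status, r.headers.get("Content-Type"), len(raw))
            print(raw[:12])  # a real JPEG starts with b'\xff\xd8\xff'

asyncio.run(inspect())
</code></pre>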
|
<python><python-3.x><python-imaging-library>
|
2024-12-13 22:39:38
| 0
| 3,519
|
dan
|
79,279,731
| 9,415,280
|
Why Is There No Change Using The pyspark.ml Feature VectorAssembler?
|
<p>The following is an example from Databricks with my own data; I can't get the <code>VectorAssembler</code> transformation working.</p>
<pre><code>from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler

string_indexer = StringIndexer(inputCol='ptype', outputCol='index_ptype', handleInvalid="skip")
string_indexer_model = string_indexer.fit(sample_df)
indexed_df = string_indexer_model.transform(sample_df)
ohe = OneHotEncoder(inputCol='index_ptype', outputCol='ohe_ptype', handleInvalid="keep")
ohe_model = ohe.fit(indexed_df)
ohe_df = ohe_model.transform(indexed_df)
ohe_df.show()
+-----+-----------+-------------+
|ptype|index_ptype| ohe_ptype|
+-----+-----------+-------------+
| 5.0| 2.0|(6,[2],[1.0])|
| 7.0| 4.0|(6,[4],[1.0])|
| 3.0| 1.0|(6,[1],[1.0])|
| 1.0| 0.0|(6,[0],[1.0])|
| 6.0| 3.0|(6,[3],[1.0])|
| 8.0| 5.0|(6,[5],[1.0])|
+-----+-----------+-------------+
assembler = VectorAssembler(inputCols=['ohe_ptype'], outputCol="features")
result_df_dense = assembler.transform(ohe_df)
result_df_dense.show(truncate=False)
+-----+-----------+-------------+-------------+
|ptype|index_ptype|ohe_ptype |features |
+-----+-----------+-------------+-------------+
|5.0 |2.0 |(6,[2],[1.0])|(6,[2],[1.0])|
|7.0 |4.0 |(6,[4],[1.0])|(6,[4],[1.0])|
|3.0 |1.0 |(6,[1],[1.0])|(6,[1],[1.0])|
|1.0 |0.0 |(6,[0],[1.0])|(6,[0],[1.0])|
|6.0 |3.0 |(6,[3],[1.0])|(6,[3],[1.0])|
|8.0 |5.0 |(6,[5],[1.0])|(6,[5],[1.0])|
+-----+-----------+-------------+-------------+
</code></pre>
<p>As seen, my features are exactly the same as <code>ohe_ptype</code>!</p>
<p>I expect to get something like this:</p>
<pre><code>+-----+-----------+-------------+-------------------------+
|ptype|index_ptype|ohe_ptype |features |
+-----+-----------+-------------+-------------------------+
|5.0 |2.0 |(6,[2],[1.0])|[0.0,0.0,1.0,0.0,0.0,0.0]|
</code></pre>
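<p>For what it's worth, a short sketch of one way to get the dense display (this assumes the goal is just the dense representation; <code>VectorAssembler</code> with a single vector input column passes the sparse vector through unchanged). <code>vector_to_array</code> is available in Spark 3.0+:</p>
<pre><code>from pyspark.ml.functions import vector_to_array

# Uses result_df_dense from the snippet above.
result_df_dense.withColumn("features_dense", vector_to_array("features")).show(truncate=False)
</code></pre>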
|
<python><apache-spark-mllib><one-hot-encoding><apache-spark-ml>
|
2024-12-13 22:35:11
| 1
| 451
|
Jonathan Roy
|
79,279,531
| 276,363
|
When are Jinja2 {% set %} operations actually run?
|
<p>Is it safe to assume that if a portion of a template is not included due to conditional operations, {% set %} operations within it are not evaluated? For example:</p>
<pre><code>{% if var == 1 %}
  {% set thing = function_with_side_effects() %}
{% endif %}
</code></pre>
<p>Is it safe to assume that if <code>var != 1</code>, <code>function_with_side_effects()</code> will never be called during expansion of this template?</p>
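<p>A quick empirical check (a sketch that stands in a counter for the real side-effecting function) suggests the block compiles to an ordinary Python <code>if</code>, so the <code>{% set %}</code> in the false branch never runs:</p>
<pre><code>from jinja2 import Environment

calls = []
def function_with_side_effects():
    calls.append("called")
    return "thing"

env = Environment()
env.globals["function_with_side_effects"] = function_with_side_effects
template = env.from_string(
    "{% if var == 1 %}{% set thing = function_with_side_effects() %}{% endif %}"
)

template.render(var=2)
print(calls)  # [] -- the set inside the false branch was not evaluated
template.render(var=1)
print(calls)  # ['called']
</code></pre>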
|
<python><jinja2>
|
2024-12-13 20:39:34
| 1
| 2,078
|
Christophe
|
79,279,510
| 3,071,582
|
Docker Flask Service: ModuleNotFoundError: No module named 'flask' When Running Container, but Works Manually
|
<p>I am trying to build and run a <code>Flask</code>-based Python service (<code>task_manager</code>) using Docker and Docker Compose, but I'm encountering a persistent issue. When I run the container via <code>docker-compose up --build</code>, the service fails to find the Flask module:</p>
<pre class="lang-none prettyprint-override"><code>task_manager | Traceback (most recent call last):
task_manager | File "/app/task_manager.py", line 1, in <module>
task_manager | from flask import Flask, jsonify
task_manager | ModuleNotFoundError: No module named 'flask'
</code></pre>
<p>However, the same service works <strong>perfectly</strong> if I manually execute the following commands in PowerShell:</p>
<pre class="lang-bash prettyprint-override"><code>docker run -it --name task_manager -v D:/Projects/my_ai_assistant/task_manager:/app -w /app python:3.11-slim bash
apt-get update && apt-get install -y --no-install-recommends build-essential libssl-dev libffi-dev python3-dev bash && rm -rf /var/lib/apt/lists/*
python -m venv /app/venv
source /app/venv/bin/activate
pip install --upgrade pip
pip install --no-cache-dir -r requirements.txt
/app/venv/bin/python /app/task_manager.py
</code></pre>
<p><strong>Context of the Setup:</strong></p>
<ul>
<li><strong>Host OS</strong>: Windows 11</li>
<li><strong>Environment</strong>:
<ul>
<li>Running Docker Desktop with Debian (via MS Store) as the WSL backend</li>
<li>Using PowerShell for commands</li>
</ul>
</li>
<li><strong>Directory Structure:</strong>
<ul>
<li><code>D:/Projects/my_ai_assistant/</code> is the root directory.</li>
<li>Each service (<code>task_manager</code> and <code>gui_dashboard</code>) has its own subfolder and <code>Dockerfile</code>.</li>
</ul>
</li>
</ul>
<hr />
<h3><strong>Working Service (<code>gui_dashboard</code>) for Reference:</strong></h3>
<p>The <code>gui_dashboard</code> service works without issue. Its setup:</p>
<p><strong>docker-compose.yml (relevant section):</strong></p>
<pre class="lang-yaml prettyprint-override"><code>gui_dashboard:
build:
context: ./gui_dashboard
container_name: gui_dashboard
working_dir: /app
volumes:
- ./gui_dashboard:/app
ports:
- "3000:3000"
restart: unless-stopped
command: ["npm", "start"]
</code></pre>
<hr />
<h3><strong>Problematic <code>task_manager</code> Service:</strong></h3>
<p><strong>docker-compose.yml (relevant section):</strong></p>
<pre class="lang-yaml prettyprint-override"><code>task_manager:
build:
context: ./task_manager
container_name: task_manager
working_dir: /app
volumes:
- ./task_manager:/app
ports:
- "5000:5000"
environment:
FLASK_RUN_HOST: "0.0.0.0"
FLASK_RUN_PORT: "5000"
restart: unless-stopped
</code></pre>
<p><strong>task_manager/Dockerfile:</strong></p>
<pre class="lang-none prettyprint-override"><code># Use the official Python image
FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Set the working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
libssl-dev \
libffi-dev \
python3-dev \
bash && \
rm -rf /var/lib/apt/lists/*
# Copy application files
COPY . /app/
# Create a virtual environment and install dependencies
RUN python -m venv /app/venv && \
source /app/venv/bin/activate && \
/app/venv/bin/pip install --upgrade pip && \
/app/venv/bin/pip install --no-cache-dir -r /app/requirements.txt
# Expose Flask's default port
EXPOSE 5000
# Command to run the application
CMD ["python", "/app/task_manager.py"]
</code></pre>
<hr />
<h3><strong>Troubleshooting Notes:</strong></h3>
<ul>
<li><p><strong>Manual Execution Works:</strong><br />
Manually running the steps in PowerShell, including activating the virtual environment and running the script, works flawlessly.</p>
</li>
<li><p><strong>Mounting Filesystem:</strong><br />
I use <code>-v D:/Projects/my_ai_assistant/task_manager:/app</code> to map the local directory for visibility and avoid copying files into the container.</p>
</li>
<li><p><strong>Experiment with CMD:</strong><br />
Changing the <code>CMD</code> to use the virtual environment explicitly:</p>
<pre class="lang-none prettyprint-override"><code>CMD ["/app/venv/bin/python", "/app/task_manager.py"]
</code></pre>
<p>Results in:</p>
<pre class="lang-none prettyprint-override"><code>Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/app/venv/bin/python": stat /app/venv/bin/python: no such file or directory: unknown
</code></pre>
</li>
</ul>
<hr />
<h3><strong>What I Need Help With:</strong></h3>
<p>Why does the Flask module fail to be found (<code>ModuleNotFoundError</code>) during <code>docker-compose up</code>, even though it works manually? How can I ensure the <code>task_manager</code> service works as expected using Docker Compose?</p>
|
<python><docker><flask><docker-compose><virtualenv>
|
2024-12-13 20:32:21
| 0
| 737
|
Paul-Jason
|
79,279,391
| 4,966,886
|
Predecessors from scipy depth_first_order
|
<p>I use scipy version 1.14.1 to traverse the minimum spanning tree in depth-first order, but I do not understand some results, namely the predecessors returned by scipy are not correct.</p>
<p>Here is an illustration for the following graph:</p>
<p><a href="https://i.sstatic.net/rUVhBQMkm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUVhBQMkm.png" alt="enter image description here" /></a></p>
<p>The following code</p>
<pre><code>import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.sparse.csgraph import depth_first_order
rows = np.array([0, 1, 2, 2, 4, 9, 2, 2, 10, 10, 8 ])
cols = np.array([1, 2, 3, 4, 9, 5, 6, 10, 11, 8, 7 ])
# construct undirected graph
X = coo_matrix( (12,12))
X.col = np.concatenate( (rows, cols), axis=0)
X.row = np.concatenate( (cols, rows), axis=0)
X.data = np.ones(len(X.row))
# the minimum spanning tree is the graph itself
tree = minimum_spanning_tree(X)
print(tree)
# traversing the graph
print(depth_first_order(tree, i_start=0, directed=False, return_predecessors=True))
</code></pre>
<p>gives the minimum spanning tree (the graph itself in fact):</p>
<pre><code> Coords Values
(0, 1) 1.0
(1, 2) 1.0
(2, 3) 1.0
(2, 4) 1.0
(2, 6) 1.0
(2, 10) 1.0
(4, 9) 1.0
(5, 9) 1.0
(7, 8) 1.0
(8, 10) 1.0
(10, 11) 1.0
</code></pre>
<p>and the depth-first order:</p>
<p><code>[ 0, 1, 2, 3, 4, 9, 5, 6, 10, 11, 8, 7]</code></p>
<p>predecessors: <code>[-9999, 0, 1, 2, 2, 9, 2, 8,10, 4, 2, 10]</code></p>
<p>So it says that 9 has 9 as ancestor, but it is 4, and from that position on results are not coherent.</p>
<p>Thanks for any help.</p>
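<p>One thing a short sketch can make explicit: <code>predecessors</code> is indexed by node id, not by position in the returned DFS order, so reading the two arrays side by side is misleading. Printing them keyed by node (reusing <code>tree</code> from the snippet above):</p>
<pre><code>order, preds = depth_first_order(tree, i_start=0, directed=False,
                                 return_predecessors=True)
for node in order:
    print(f"node {node}: predecessor {preds[node]}")
# e.g. preds[9] is 4 and preds[5] is 9, matching the edges 4-9 and 9-5
</code></pre>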
|
<python><scipy><tree><networkx><depth-first-search>
|
2024-12-13 19:22:22
| 2
| 306
|
user11634
|
79,279,376
| 4,200,859
|
Pytest import issue with VSCode
|
<p>I literally just had Pytest working fine, and I have the correct conda env selected in VSCode. I tried to run a test just now, and it can't find my folder 'modules', which is in my project directory. The import for modules in my tests/test_function.py file is:</p>
<p><code>from modules.my_module import my_func</code></p>
<pre><code>[pytest]
testpaths = tests
python_files = test_*.py
addopts = -v -s
filterwarnings =
ignore::DeprecationWarning
ignore::UserWarning
ignore::RuntimeWarning
</code></pre>
<p>Above is my pytest.ini file.</p>
<pre><code> "python.testing.pytestArgs": [
"tests",
"-s"
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.testing.autoTestDiscoverOnSaveEnabled": true,
"python.testing.pytestPath": "pytest",
"python.analysis.extraPaths": ["."]
</code></pre>
<p>I've tried adding the above to my user settings in VSCode.</p>
<p>If I run pytest from command line (same conda env):</p>
<p><code>python -m pytest tests/test_function.py::test_myfunc -v</code></p>
<p>It works totally fine. And VSCode worked fine 5 minutes ago; I haven't changed a thing. I tried restarting VSCode.
What have I done wrong here? I'm losing my mind. I keep running into this type of problem with Python imports, and I end up spending four hours figuring it out instead of actually working. Thanks!!</p>
|
<python><visual-studio-code><pytest>
|
2024-12-13 19:14:54
| 0
| 639
|
Max
|
79,279,317
| 13,132,640
|
Can you serialize a class instance within class method?
|
<p>I have a class which has a method which performs various calculations. Prior to beginning the calculations, I want the class instance to be saved in a format where I can easily load it again. Since the actual class contains various data types, I am using dill to accomplish this. However, I was wondering if the following is possible (it seems not, but I want to better understand the reason why not): I thought to have a method which dumps the class instance to a file.</p>
<p>I guess the better way of doing what I want to accomplish is to, at the location where I am going to call the "get_water_frequency" method, first dump the class instance and then call that function. But it's not totally clear why I can't do it within the class.</p>
<p>A dummy example:</p>
<pre><code>import dill
class Plant:
def __init__(self, name, height):
self.name = name
self.height = height
def get_water_frequency(self,save_path=None):
# before doing a task, I want to save my instance
if save_path is not None:
self.save(path=save_path)
if self.height>3:
water_frequency=2
else:
water_frequency=3
return water_frequency
def save(self,path):
with open(path, "wb") as dill_file:
dill.dump(self, dill_file)
return
#
my_rose = Plant('White Rose',4)
my_rose.save('C:/my_rose.dill')
with open('C:/my_rose.dill', "r") as dill_file:
rose_from_file = dill.load(dill_file)
</code></pre>
<p>Running the above gives the following error upon dill.load():</p>
<pre><code>Traceback (most recent call last):
File "testing_save.py", line 31, in <module>
rose_from_file = dill.load(dill_file)
File "\lib\site-packages\dill\_dill.py", line 297, in load
return Unpickler(file, ignore=ignore, **kwds).load()
File "\lib\site-packages\dill\_dill.py", line 452, in load
obj = StockUnpickler.load(self)
File "C:\Python310\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 488: character maps to <undefined>
</code></pre>
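<p>For reference, a sketch of the load step reopened in binary mode. This reading of the traceback is an assumption: dill writes a binary pickle stream, and opening the file with <code>"r"</code> makes Python decode it as cp1252 text, which is where the <code>UnicodeDecodeError</code> comes from; dumping <code>self</code> from inside a method is otherwise unobjectionable.</p>
<pre><code>import dill

with open('C:/my_rose.dill', "rb") as dill_file:
    rose_from_file = dill.load(dill_file)
print(rose_from_file.name, rose_from_file.height)
</code></pre>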
|
<python><serialization>
|
2024-12-13 18:53:57
| 0
| 379
|
user13132640
|
79,279,195
| 10,750,537
|
subprocess.Popen fails to set the name of executable
|
<p>According to the documentation of <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen" rel="nofollow noreferrer">Popen</a>, it seems possible, in POSIX systems, to set the name of the process as it is returned by <code>ps</code>:</p>
<blockquote>
<p>The <em>executable</em> argument specifies a replacement program to execute. It is very seldom needed. When <code>shell=False </code>, executable replaces the program to execute specified by <em>args</em>. However, the original <em>args</em> is still passed to the program. Most programs treat the program specified by <em>args</em> as the command name, which can then be different from the program actually executed. On POSIX, the <em>args</em> name becomes the display name for the executable in utilities such as <strong>ps</strong>.</p>
</blockquote>
<p>On my Ubuntu 22.04 machine, if I run the following python script:</p>
<pre><code>import subprocess
with subprocess.Popen(["foo", "10"], executable="sleep", shell=False) as process:
pass
</code></pre>
<p>and then run <code>ps -ae</code> in another terminal, the last lines of output are</p>
<pre><code> 22854 ? 00:00:00 python3.10
22857 ? 00:00:00 sleep
22860 pts/0 00:00:00 ps
</code></pre>
<p>I would expect to see <code>foo</code> instead of <code>sleep</code>. Could you please explain to me what I am getting wrong?</p>
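<p>A small POSIX-only sketch that may help separate the two names involved: plain <code>ps -e</code> prints the kernel's <code>comm</code> field (derived from the executable actually run), while the argv-based display name shows up in the full command line (<code>args</code>):</p>
<pre><code>import subprocess

with subprocess.Popen(["foo", "10"], executable="sleep", shell=False) as process:
    # comm= comes from the executable ("sleep"); args= comes from argv ("foo 10").
    subprocess.run(["ps", "-p", str(process.pid), "-o", "comm=,args="])
</code></pre>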
|
<python><ubuntu><popen>
|
2024-12-13 17:56:08
| 1
| 381
|
JtTest
|
79,279,193
| 13,190,759
|
How to convert from extendedSRGB color to standard sRGB outside of Swift or Objective-C
|
<p>I am trying to write some code in Python to display a color taken from a Swift program. However, that color is in the <code>extendedSRGB</code> format. <code>extendedSRGB</code> is similar to standard sRGB except RGB values can be outside of the typical 0 to 1 range. Upon further research, it seems that this is an Apple-made color format, so it is not supported in languages other than Swift or Objective-C. Note: it is called <code>extendedSRGBColorSpace</code> in Objective-C.</p>
<p>I attempted to clip the RGB channels using <code>srgb = np.clip(extended_srgb, 0, 1)</code>. I also tried taking the absolute value of the channels using <code>srgb = np.abs(extended_srgb)</code>. However, neither of these techniques gave the correct color, particularly for shades of green. According to the <a href="https://developer.apple.com/documentation/appkit/nscolorspace/extendedsrgb" rel="nofollow noreferrer">extendedSRGB Apple Developer docs</a>:</p>
<blockquote>
<p>This color space has the same colorimetry as sRGB, but component values below 0.0 and above 1.0 may be encoded in this color space. Negative values are encoded as the signed reflection of the original encoding function. y(x) = sign(x)*f(abs(x))</p>
</blockquote>
<p>I tried following this to create a formula for converting <code>extendedSRGB</code> values back into standard sRGB format, but was unable to. When I used this formula, it seemed to convert negative numbers back into negative numbers. I also do not know what f(x) is in this case.</p>
<p>My question is, what exactly is the formula for converting colors in the <code>extendedSRGB</code> format into standard sRGB format? Alternatively, is there any way in Python to convert from <code>extendedSRGB</code> colors to sRGB?</p>
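<p>For what it's worth, a sketch of one plausible conversion, under the assumption that f in the quoted formula is the standard sRGB decoding function: decode each component with the sign-extended rule, clip the linear values to [0, 1] (a deliberately crude gamut mapping), then re-encode to ordinary sRGB:</p>
<pre><code>import numpy as np

def srgb_decode(c):
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def srgb_encode(c):
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

def extended_to_srgb(extended):
    extended = np.asarray(extended, dtype=float)
    linear = np.sign(extended) * srgb_decode(np.abs(extended))  # y(x) = sign(x) * f(|x|)
    linear = np.clip(linear, 0.0, 1.0)                          # clip out-of-gamut values
    return srgb_encode(linear)

print(extended_to_srgb([-0.1, 0.5, 1.2]))
</code></pre>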
|
<python><swift><objective-c><colors><srgb>
|
2024-12-13 17:55:23
| 0
| 351
|
Evan93
|
79,279,191
| 13,598,320
|
Web3.py AsyncWeb3 WebSocket Connection Fails for Infura Sepolia
|
<p>I’m working on an Ethereum project that connects to the Sepolia network using Infura's WebSocket endpoint <code>(wss://sepolia.infura.io/ws/v3/<PROJECT_ID>)</code>. While the connection works fine in Postman, my Python implementation using Web3.py fails with the following error:</p>
<pre><code>ERROR:BlockchainLayer:Error connecting to WebSocket: Could not connect to: wss://sepolia.infura.io/ws/v3/<PROJECT_ID>. Retries exceeded max of 5.
</code></pre>
<p>I tried applying ExtraDataToPOAMiddleware as I did with the HTTP connection, since Sepolia uses a Proof-of-Authority (PoA) consensus, but no luck. The same setup works perfectly for the HTTP endpoint, but WebSocket connections keep failing.</p>
<p>Here is the relevant code for the WebSocket connection:</p>
<pre><code>async def connect_websocket(self):
"""
Connects to the WebSocket endpoint using AsyncWeb3 and manages its context.
"""
try:
async with AsyncWeb3(WebSocketProvider(self.websocket_url)) as w3:
# w3.middleware_onion.inject(ExtraDataToPOAMiddleware, layer=0)
self.web3_ws = w3
if not await self.web3_ws.is_connected():
raise ConnectionError(f"Failed to connect to WebSocket endpoint: {self.websocket_url}")
self.logger.info(f"Connected to {self.network_name} network via WebSocket.")
except ConnectionClosedError as e:
self.logger.error(f"WebSocket connection closed: {e}")
except Exception as e:
self.logger.error(f"Error connecting to WebSocket: {e}")
</code></pre>
<p><strong>What I’ve Tried</strong></p>
<ul>
<li>Verified the WebSocket endpoint using Postman, and it responds with
101 Switching Protocols.</li>
<li>Confirmed the API key and URL are correct.</li>
<li>Applied ExtraDataToPOAMiddleware for PoA compatibility.</li>
<li>Tested rate-limiting settings and ensured the free-tier limits are not
exceeded.</li>
<li>Updated Web3.py to the latest version available.</li>
</ul>
<p>Any insights or examples of working implementations would be greatly appreciated!</p>
|
<python><websocket><ethereum><web3py><infura>
|
2024-12-13 17:54:39
| 1
| 548
|
seddouguim
|
79,279,190
| 12,466,687
|
How to conditionally format data in Great Tables?
|
<p>I am trying to conditionally format table data using Great Tables, but I'm not sure how to do it.</p>
<p>I want to highlight (heatmap-style) all those cells whose value is higher than the Upper Range column.</p>
<p>Data:</p>
<pre><code>import polars as pl
gt_sample_df = pl.DataFrame({'Test': ['Test A','Test B','Test C','Test D','Test Z','Test E','Test F','Test X',
'Test G','Test H','Test I','Test J'],
'Lower Range': [35.3,2.5,85.0,0.0,None,3.0,200.0,None,3.0,400.0,None,7.0],
'Upper Range': [79.5,3.5,150.0,160.0,160.0,5.0,None,200.0,5.0,1000.0,150.0,30.0],
'2024-11-10': [43.0,3.14,135.82,162.7,None,None,206.0,None,4.76,519.52,134.4,26.88],
'2024-08-03': [36.0,4.31,152.98,None,175.5,5.94,None,211.0,None,512.08,112.6,22.52],
'2024-06-17': [47.0,3.38,158.94,None,182.0,4.87,None,229.0,None,550.24,115.3,23.06],
'2024-02-01': [44.0,3.12,136.84,None,154.1,4.51,None,198.0,None,465.04,86.3,17.26],
'2023-10-16': [45.0,3.11,140.14,None,162.0,4.6,None,207.0,None,501.44,109.3,21.86],
'2023-05-15': [42.0,3.8,159.58,None,192.0,5.57,None,234.0,None,597.68,162.1,32.42]})
gt_sample_df
</code></pre>
<p><a href="https://i.sstatic.net/F0AytAlV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0AytAlV.png" alt="enter image description here" /></a></p>
<p>The various date columns in this dataframe <code>gt_sample_df</code> contain the results and I want to compare with the Upper Range and highlight those whose values are higher than Upper Range column. There can be n number of date columns with any date so I can't use static names for columns.</p>
<p>I have tried:</p>
<pre><code>from great_tables import GT, md, style, loc, google_font
(GT(gt_sample_df)
.tab_style(style=[style.text(color="Navy"), style.fill(color="PaleTurquoise")],
locations=loc.body(columns=pl.exclude(["Test",'Lower Range','Upper Range']),
rows=pl.col(lambda x: x) > pl.col('Upper Range')))
)
</code></pre>
<pre><code>from great_tables import GT, md, style, loc, google_font
(GT(gt_sample_df)
.tab_style(style=[style.text(color="Navy"), style.fill(color="red")],
locations=loc.body(columns=[3:],
rows=pl.col(lambda x: x) > pl.col('Upper Range')))
)
</code></pre>
<p>As I only want to highlight values greater in the date columns so I was trying to exclude first 3 columns in the column selection but it didn't work and I am not sure how to automatically compare values of all other Date columns to Upper Range column.</p>
<p><strong>Update:</strong>
I am able to do the column selection, but not able to select the proper rows.</p>
<pre><code>columns_required = gt_sample_df.select(pl.exclude(["Test",'Lower Range','Upper Range'])).columns
(GT(gt_sample_df)
.tab_style(style=[style.text(color="Navy"), style.fill(color="PaleTurquoise")],
locations=loc.body(columns=columns_required, rows=pl.col(lambda x: x) > pl.col('Upper Range')))
)
</code></pre>
<pre><code>import polars.selectors as cs
(GT(gt_sample_df)
.tab_style(style=[style.text(color="Navy"), style.fill(color="PaleTurquoise")],
locations=loc.body(columns=cs.starts_with("20"), rows=pl.col(lambda x: x) > pl.col('Upper Range')))
)
</code></pre>
<pre><code>columns_required = gt_sample_df.select(pl.exclude(["Test",'Lower Range','Upper Range'])).columns
(GT(gt_sample_df)
.tab_style(style=[style.text(color="Navy"), style.fill(color="PaleTurquoise")],
for col_name in columns_required:
locations=loc.body(columns=[col_name],
rows=pl.col(col_name) > pl.col('Upper Range')))
)
</code></pre>
<p>This also didn't work.</p>
<p><strong>Desired Output</strong> probably something like this:</p>
<p><a href="https://i.sstatic.net/IlV6a2Wk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IlV6a2Wk.png" alt="enter image description here" /></a></p>
|
<python><dataframe><python-polars><great-tables>
|
2024-12-13 17:53:52
| 1
| 2,357
|
ViSa
|
79,279,124
| 2,475,195
|
Does pytorch CNN care about image size?
|
<p>I am playing with CNNs these days, and I have code like the snippet pasted below. My question is, would this work on any image size? It is not clear to me which parameter or channel, if any, depends on the image size. And if one does, how does the model know how many neurons it needs? Isn't that a function of the image size?</p>
<p>Related point on pretrained models - if I use pretrained models, do I need to reformat my images to be same as what the model was trained on in the first place, or how does that work?</p>
<pre class="lang-py prettyprint-override"><code>class CNN(nn.Module):
def __init__(self, num_classes, num_channels=1):
super(CNN, self).__init__()
self.num_classes = num_classes
self.conv1 = nn.Conv2d(num_channels, 32, kernel_size=3, padding=1)
self.relu1 = nn.ReLU()
self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
self.relu2 = nn.ReLU()
self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
self.fc = nn.Linear(64*7*7, num_classes)
</code></pre>
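<p>A sketch of the size dependence (assuming the model above was written for 28x28 inputs, since 28 becomes 14 and then 7 after two 2x2 poolings): the convolution and pooling layers accept any spatial size, but <code>nn.Linear(64*7*7, ...)</code> hard-codes one. Adaptive pooling is one way to make the head size-independent:</p>
<pre><code>import torch
import torch.nn as nn

class SizeAgnosticCNN(nn.Module):
    def __init__(self, num_classes, num_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.AdaptiveAvgPool2d((7, 7)),  # always ends at 7x7, whatever the input size
        )
        self.fc = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x):
        return self.fc(torch.flatten(self.features(x), 1))

print(SizeAgnosticCNN(10)(torch.randn(2, 1, 100, 80)).shape)  # torch.Size([2, 10])
</code></pre>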
|
<python><machine-learning><deep-learning><pytorch><conv-neural-network>
|
2024-12-13 17:24:14
| 1
| 4,355
|
Baron Yugovich
|
79,279,060
| 17,658,327
|
How to use numpy.where in a pipe function for pandas dataframe groupby?
|
<p>Here is a script to simulate the issue I am facing:</p>
<pre><code>import pandas as pd
import numpy as np
data = {
'a':[1,2,1,1,2,1,1],
'b':[10,40,20,10,40,10,20],
'c':[0.3, 0.2, 0.6, 0.4, 0.5, 0.2, 0.8],
'd':[3, 1, 5, 1, 7, 2., 2.],
}
df = pd.DataFrame.from_dict(data)
# I apply some custom function to populate column 'e'.
# For demonstration, I am using a very simple function here.
df['e']=df.apply(lambda x: x['c']<=0.3, axis=1)
# This is the column I need to obtain using groupby and pipe/transform
df['f']=[2., 1., 0., 2., 1., 2., 0.]
print(df)
</code></pre>
<p>Output:</p>
<pre><code> a b c d e f
0 1 10 0.3 3.0 True 2.0
1 2 40 0.2 1.0 True 1.0
2 1 20 0.6 5.0 False 0.0
3 1 10 0.4 1.0 False 2.0
4 2 40 0.5 7.0 False 1.0
5 1 10 0.2 2.0 True 2.0
6 1 20 0.8 2.0 False 0.0
</code></pre>
<p>The logic to be used to find column <code>f</code> is as follows:</p>
<p>For each group of <code>df.groupby(['a', 'b'])</code>:</p>
<ul>
<li>select entries with True value for <code>e</code>.</li>
<li>if there are any item in the selected array:
<ul>
<li>find entry with minimum <code>d</code> and return <code>d</code> (in real application, <code>d</code> needs to be manipulated in conjunction with other columns, and the result would be returned)</li>
</ul>
</li>
<li>else:
<ul>
<li>return 0</li>
</ul>
</li>
</ul>
<p><strong>What I have tried:</strong></p>
<pre><code>def func(x):
print(type(x))
print(x)
print('-'*50)
ind=np.where(x['e']) #<--- How can I implement this?
if len(ind)>0:
ind_min=np.argmin(x.iloc[ind]['d'])
return x.iloc[ind[ind_min]]['d']
else:
return 0
df['g']=df.groupby(['a', 'b']).pipe(func)
</code></pre>
<p>Output:</p>
<pre><code><class 'pandas.core.groupby.generic.DataFrameGroupBy'>
<pandas.core.groupby.generic.DataFrameGroupBy object at 0x000001B348735550>
--------------------------------------------------
...
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (3, 2) + inhomogeneous part.
</code></pre>
<p>I get the mentioned error on line: <code>ind=np.where(x['e']) #<--- How can I implement this?</code></p>
<p>So, how can I apply <code>np.where</code> on a <code>pandas.core.groupby.generic.DataFrameGroupBy</code> object?</p>
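<p>For reference, a sketch of the per-group logic written with <code>apply</code> instead of <code>pipe</code> (an important difference: <code>pipe</code> hands the whole GroupBy object to the function, while <code>apply</code> runs it once per group), broadcasting each group's result back to its rows so it lines up with column <code>f</code>:</p>
<pre><code>import pandas as pd

def per_group(g):
    # Boolean indexing plays the role of np.where inside each group.
    selected = g.loc[g['e'], 'd']
    value = selected.min() if len(selected) else 0.0
    return pd.Series(value, index=g.index)

df['g'] = df.groupby(['a', 'b'], group_keys=False)[['d', 'e']].apply(per_group)
print(df)
</code></pre>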
|
<python><pandas><dataframe><group-by><pipe>
|
2024-12-13 16:56:50
| 2
| 626
|
learner
|
79,278,987
| 405,017
|
Stop Plotly dash.dcc.Store() triggering an update on page load
|
<p>Summary: my Dash app is triggering a callback on initial page load from a dcc.Store, with sometimes-old data in it. How do I prevent this? Note: I am using <code>prevent_initial_call=True</code>.</p>
<p>The layout for my Dash app contains these:</p>
<pre class="lang-py prettyprint-override"><code>dcc.Location(id=self.url_id, refresh=False),
dcc.Store(id=self.getter_id, storage_type="session"),
dcc.Store(id=self.setter_id, data={}, storage_type="session"),
</code></pre>
<p>I have these callbacks:</p>
<pre class="lang-py prettyprint-override"><code>@self.app.callback(
Output(self.getter_id, "data"),
Input(self.url_id, "hash"),
)
def parse_hash(hash_str: str | None = None) -> HashValues:
return hash_to_dict(hash_str)
@self.app.callback(
Output(self.url_id, "hash"),
Input(self.setter_id, "data"),
State(self.url_id, "hash"),
prevent_initial_call=True,
)
def update_hash(
set_hash_data: HashValues | None = None,
current_hash: str | None = None,
) -> str:
if not set_hash_data:
raise dash.exceptions.PreventUpdate()
# merge new settings over existing values
params = hash_to_dict(current_hash) | set_hash_data
return dict_to_hash(params)
</code></pre>
<p>99% of the time everything works fine. I can use <code>(self.setter_id, "data")</code> as an output for other callbacks on the page, the <code>update_hash()</code> callback is invoked and the url hash shows what I want. I can modify the URL on the page and callbacks using <code>(self.getter_id, "data")</code> are properly invoked with the parsed data sent to them.</p>
<p>However, when I request the route fresh, with no hash on the URL, the last callback above usually triggers, with <code>set_hash_data</code> being some dict…that is often NOT even the last dict I set it to. (As a result, the bogus data is sent up into the URL's hash, which then happens to trigger updates to various inputs all over the page. The "pristine" page that I tried to load fresh starts off in a bad state.)</p>
<p>As you can see, I have <code>prevent_initial_call=True</code> set for that callback. I've put breakpoints and print statements in every callback in my app. This is the first callback that triggers on page load. I've looked at the network trace in Chrome, and this the first call to <code>_dash-update-component</code>.</p>
<p>Any suggestions on how to fix this, or trace the offending JS?</p>
|
<python><plotly-dash>
|
2024-12-13 16:34:47
| 0
| 304,256
|
Phrogz
|
79,278,878
| 1,406,168
|
Deploying files from GitHub to Azure
|
<p>I have a GitHub Action where everything looks like a success; however, it seems the application is not deployed.</p>
<p>yaml:</p>
<pre><code>name: Build and deploy Python app to Azure Web App - fast-api-port
on:
push:
branches:
- main
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python version
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Create and start virtual environment
run: |
python -m venv venv
source venv/bin/activate
- name: Install dependencies
run: pip install -r requirements.txt
# Optional: Add step to run tests here (PyTest, Django test suites, etc.)
- name: Zip artifact for deployment
run: zip release.zip ./* -r
- name: Upload artifact for deployment jobs
uses: actions/upload-artifact@v4
with:
name: python-app
path: |
release.zip
!venv/
deploy:
runs-on: ubuntu-latest
needs: build
environment:
name: 'Production'
url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
permissions:
id-token: write #This is required for requesting the JWT
steps:
- name: Download artifact from build job
uses: actions/download-artifact@v4
with:
name: python-app
- name: Unzip artifact for deployment
run: unzip release.zip
- name: Login to Azure
uses: azure/login@v2
with:
client-id: ${{ secrets.xx}}
tenant-id: ${{ secrets.xx}}
subscription-id: ${{ secrets.xx}}
- name: 'Deploy to Azure Web App'
uses: azure/webapps-deploy@v3
id: deploy-to-webapp
with:
app-name: 'fast-api-port'
slot-name: 'Production'
</code></pre>
<p>All the steps in the logs succeed:</p>
<p>The logs say 0 errors and 0 warnings. However, when I go into the file manager I can't find any uploaded files, which may be why I can't get my app to run:</p>
<p>Am I missing something, or should I not be able to see the files? I tested with an FTP connection just to be sure, but there are no .py files anywhere.</p>
|
<python><azure><github><azure-web-app-service><github-actions>
|
2024-12-13 15:49:57
| 1
| 5,363
|
Thomas Segato
|
79,278,850
| 11,999,452
|
openCV: video does not get saved correctly
|
<p>The output video is only 1 kB and cannot be opened. What is wrong?
This is my code:</p>
<pre><code>import cv2
import numpy as np
file_name = "Touhou - Bad Apple.mp4"
output_file_name = "Touhou - Bad Apple difference flip.mp4"
# Open the video file
video = cv2.VideoCapture(file_name)
# Get the dimensions of the video
frame_width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
frame_rate = video.get(cv2.CAP_PROP_FPS)
# Create a VideoWriter object to write the new video file
fourcc = cv2.VideoWriter_fourcc(*'mp4v') # Codec for .mp4 files
new_video = cv2.VideoWriter(output_file_name, fourcc, frame_rate, (frame_width, frame_height))
# Create first frame
new_frame = np.random.randint(0, 256, (frame_height, frame_width), dtype=np.uint8)
new_video.write(new_frame)
# Read and write the remaining frames
success, current_video_frame = video.read()
while True:
success, next_video_frame = video.read()
if not success:
break
difference = cv2.absdiff(current_video_frame, next_video_frame)
difference = cv2.cvtColor(difference, cv2.COLOR_BGR2GRAY)
new_frame = new_frame + difference
new_video.write(new_frame)
current_video_frame = next_video_frame
cv2.imshow('Difference', new_frame)
if cv2.waitKey(1) == ord('q'):
break
# Release the video objects
video.release()
new_video.release()
cv2.destroyAllWindows()
</code></pre>
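<p>One detail worth checking (an assumption about the cause, based on the frames being converted to grayscale): <code>cv2.VideoWriter</code> defaults to colour output, and single-channel frames written to a colour writer tend to be dropped, leaving a header-only file of roughly 1 kB. A sketch of the two usual fixes, reusing the variables from the code above:</p>
<pre><code># Option 1: open the writer for grayscale frames.
new_video = cv2.VideoWriter(output_file_name, fourcc, frame_rate,
                            (frame_width, frame_height), isColor=False)

# Option 2: keep the default colour writer and convert each frame back to BGR.
# new_video.write(cv2.cvtColor(new_frame, cv2.COLOR_GRAY2BGR))
</code></pre>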
|
<python><opencv><video>
|
2024-12-13 15:40:40
| 1
| 400
|
Akut Luna
|
79,278,835
| 120,440
|
How do I update a matplotlib plot in realtime and also have a REPL?
|
<p>I am in a situation where I have a primitive REPL with some simple commands. These commands generate data and put the data into a threadsafe queue. I then have a function that takes the data from the queue and plots it on a matplotlib plot.</p>
<p>What I would ultimately like to do is to be able to have a REPL command that creates a thread that generates data upon starting and puts the data into the queue. The data would then be plotted as it arrives from the queue into a matplotlib plot. I am having trouble understanding how to implement this.</p>
<p>My first idea was to just implement the REPL as while loop in the main thread and then have the function with matplotlib run in a separate thread. Unfortunately, matplotlib seems to require running in the main thread of the application. So I could not pursue that.</p>
<p>The next thing I tried was to put the REPL into its own thread and keep a matplotlib function in the main thread. Here's my main.py:</p>
<pre><code>import threading
from queue import Queue
from typing import List
from repl import REPL
from stripchart import live_stripchart
def main():
queue :Queue[List[float]] = Queue(maxsize = 0)
quit: threading.Event = threading.Event()
# A primitive REPL to control application. It runs
# in a separate thread. And puts data into the queue.
repl = REPL(queue, quit)
repl.start()
# live_stripchart is a function that plots data from the queue.
# It runs in the main thread (because matplotlib seems to require this?)
live_stripchart(queue, quit) #
repl.join()
if __name__ == "__main__":
main()
</code></pre>
<p>The REPL consists of a simple while loop that runs in a separate thread. I feel like this is a problem. Is it OK to use input statements that take data from keyboard input in a separate thread? <code>¯\_(ツ)_/¯</code>, <em>but it seems like it works</em>.</p>
<pre><code>import re
import threading
from queue import Queue
from typing import List
import random
fake_data_cmd_regex :re.Pattern = re.compile(r'^fake_data\s+\-n\s*(\d+)\s*$')
class REPL(threading.Thread):
def __init__(self, queue :Queue[List[float]], quit :threading.Event):
self.queue :Queue[List[float]] = queue
self.quit = quit
super().__init__()
def run(self):
try:
# Here's a primitive REPL to contol the application
while True:
command = input('> ')
if command == 'help':
print("quit")
print(" Exit the application.")
print("fake_data -n <number>")
print(" Generate some data and put it in queue.")
continue
if command == 'quit':
self.quit.set() # fire quit event to stop theads that check it.
break # exit this loop, terminating this thread.
match = fake_data_cmd_regex.match(command)
if match:
n, = match.groups()
print(f"generating {n} fake data points...")
self.queue.put([random.random() for _ in range(int(n))])
continue
if command == '':
continue
print("Unknown command. Type 'help' for help.")
except KeyboardInterrupt:
self.quit.set() # stop well monitor if it's running
print(f"cli done")
</code></pre>
<p>Finally, I have the stripchart, this takes the data from the queue and plots it. This function runs in the main thread of the program.</p>
<pre><code>import matplotlib.pyplot as plt
import threading
from queue import Queue, Empty
from typing import List
def live_stripchart(queue: Queue[List[float]], quit: threading.Event):
plt.ion()
fig, ax = plt.subplots()
x, y = [], []
i = 0
line, = ax.plot([], [], 'b-') # Initialize an empty plot line
while not quit.is_set():
try:
# grab data from the queue
data = queue.get(block=True, timeout=0.5)
# Update x and y
x.extend(range(i, i + len(data)))
y.extend(data)
i += len(data)
# Keep only the most recent 100 points
if len(x) > 100:
x = x[-100:]
y = y[-100:]
# autoscale the view
line.set_data(x, y)
ax.relim()
ax.autoscale_view()
# actually plot it
plt.draw()
plt.pause(0.1)
except Empty:
plt.pause(0.1) # gracefully handle timeout
print("plot_ui done", flush=True)
plt.close(fig)
</code></pre>
<p>When I run this, it mostly does what I want. The matplotlib window appears and I can add data to it with my REPL command <code>fake_data -n 100</code>. The problem is that the icon for the matplotlib window keeps blinking in the taskbar (this is windows 10). <em>This makes me think something is wrong.</em></p>
<p><a href="https://i.sstatic.net/UmzyhrkE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmzyhrkE.png" alt="icon for matplotlib blinks" /></a></p>
<p>Here are the commands I issue in the REPL:
<a href="https://i.sstatic.net/TnhxmyJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TnhxmyJj.png" alt="REPL example" /></a></p>
<p>I get even more problems when I try to continuously update the queue with more data generated from yet another thread. That's more involved to explain, so I am just posting a question about this simpler situation to see if this is even the right path.</p>
<p>Is this approach correct?
Why is the icon for matplotlib blinking continuously?
Is it OK to implement a REPL as I have done, in its own thread?</p>
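<p>For anyone comparing notes: below is a minimal sketch of the redraw step without <code>plt.pause()</code>, which on some backends re-shows the window on every call and can cause this kind of taskbar flashing. This is an assumption about the cause, not a confirmed diagnosis, and the snippet reuses the <code>fig</code>, <code>ax</code>, <code>line</code>, <code>x</code> and <code>y</code> objects from <code>live_stripchart</code> above.</p>
<pre><code># Inside the while-loop of live_stripchart, instead of plt.draw()/plt.pause():
line.set_data(x, y)
ax.relim()
ax.autoscale_view()
fig.canvas.draw_idle()      # request a redraw of the existing figure
fig.canvas.flush_events()   # let the GUI event loop process pending events
</code></pre>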
|
<python><matplotlib><python-multithreading>
|
2024-12-13 15:34:41
| 1
| 3,155
|
Angelo
|
79,278,749
| 520,275
|
Is there any way to optimize disk usage for virtualenv environments?
|
<p>I have several <strong>virtualenvs</strong> for the same Python version, each with a physical copy of each dependency file. Some AI-related dependencies (tensorflow, torch, nvidia, etc) tend to be immense.
As a consequence, a lot of disk space is consumed with a lot of duplicate files.</p>
<p><strong>I'm looking for a way to optimize disk usage by symlinking files or sharing environments differently</strong>.
It might be an option applied when creating the environments, or something done afterwards purely at the file-system level.</p>
<p>This is not the same as in <a href="https://stackoverflow.com/questions/36592470/how-to-share-libraries-between-virtual-environments-and-or-python-installation">How to share libraries between virtual environments and/or python installation</a>, since using "--system-site-packages" limits you to a single version of each dependency per Python version (which in principle may be the cleanest setup). But I would prefer the flexibility to have several versions.</p>
<p>For instance, here are 3 virtualenvs with similar dependencies
<a href="https://i.sstatic.net/iVGGddVj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVGGddVj.png" alt="several virtualenvs" /></a></p>
<p>More disk consuming dependencies:</p>
<p><a href="https://i.sstatic.net/oTjOTOOA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTjOTOOA.png" alt="virtualenvs deps size" /></a></p>
|
<python><virtualenv>
|
2024-12-13 15:05:38
| 0
| 10,232
|
Gonzalo Matheu
|
79,278,735
| 2,386,113
|
Horizontal and vertical colorbars on a single plot in python
|
<p>I have a KDE plot generated using seaborn, on which I overlay some scattered points with varying sizes. I need two colorbars: one for the KDE data and one for the size values of the scattered points (as shown below).</p>
<p>I tried to use the answer provided <a href="https://stackoverflow.com/a/33505522/2386113">here</a> at Stackoverflow, however, it is not generating the desired results as shown in the screenshot below</p>
<p><a href="https://i.sstatic.net/f80ZsM6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f80ZsM6t.png" alt="enter image description here" /></a></p>
<p><strong>Question:</strong> How can I correct the horizontal colorbar too, so that there is some vertical gap between it and the main plot? Also, the horizontal bar is a bit too long and should have the same length as the width of the main plot.</p>
<p><strong>MWE</strong>:</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits import axes_grid1
import seaborn as sns
import numpy as np
def add_colorbar(im, aspect=20, pad_fraction=0.5, orientation='vertical', **kwargs):
"""Add a color bar to an image plot with an option for horizontal or vertical colorbars."""
divider = axes_grid1.make_axes_locatable(im.axes)
if orientation == 'horizontal':
width = axes_grid1.axes_size.AxesX(im.axes, aspect=1./aspect) # Horizontal colorbar
pad = axes_grid1.axes_size.Fraction(pad_fraction, width)
cax = divider.append_axes("bottom", size=width, pad=pad) # Place at the bottom
else:
width = axes_grid1.axes_size.AxesY(im.axes, aspect=1./aspect) # Default vertical colorbar
pad = axes_grid1.axes_size.Fraction(pad_fraction, width)
cax = divider.append_axes("right", size=width, pad=pad) # Place at the right
return im.axes.figure.colorbar(im, cax=cax, orientation=orientation, **kwargs)
# Dummy data
x_values = np.random.rand(100)
y_values = np.random.rand(100)
size_values = np.random.rand(100)
kd = np.random.rand(100, 2)
kde_params_x = np.mean(kd[:, 0])
kde_params_y = np.mean(kd[:, 1])
# Create a plot
fig, ax = plt.subplots(figsize=(8, 8))
# Plot the multidimensional KDE for the winning data
kde = sns.kdeplot(x=kd[:, 0], y=kd[:, 1], fill=True, cmap='crest', bw_adjust=0.5, alpha=0.7, ax=ax)
# colorbar for the KDE plot
add_colorbar(kde.collections[0], orientation='horizontal') # orientation to 'horizontal'
# Overlay the empirical scatter plot with contrasting colors
scatter = ax.scatter(x_values, y_values, s=2, c=size_values, cmap='plasma', alpha=0.8, edgecolor='black', linewidth=0.5, label='Empirical Data')
# Add a colorbar for the scatter plot (vertical, as before)
add_colorbar(scatter, orientation='vertical') # vertical orientation
plt.show()
</code></pre>
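<p>For context on the layout question, here is a minimal sketch that lets <code>fig.colorbar</code> handle placement instead of <code>make_axes_locatable</code>; <code>pad</code> controls the gap to the main axes and <code>shrink</code> the bar's length relative to the axes. It assumes the <code>fig</code>, <code>ax</code>, <code>kde</code> and <code>scatter</code> objects from the MWE above and is only one possible approach, not the canonical fix.</p>
<pre><code># Horizontal bar below the axes, vertical bar to the right of the same axes.
fig.colorbar(kde.collections[0], ax=ax, orientation='horizontal', pad=0.08, shrink=0.9)
fig.colorbar(scatter, ax=ax, orientation='vertical', pad=0.02, shrink=0.9)
plt.show()
</code></pre>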
|
<python><matplotlib><seaborn><colorbar>
|
2024-12-13 14:59:43
| 1
| 5,777
|
skm
|
79,278,733
| 19,556,055
|
Initializing a local cluster in Dask takes forever
|
<p>I'm trying out some things with Dask for the first time, and while I had it running a few weeks ago, I now find that I can't get the LocalCluster initiated. I've cut it off after it had been running for 30 minutes at some point, and am unable to get it initiated. Does anybody know what the issue may be? The code I'm trying to run is below; it doesn't get past the LocalCluster line.</p>
<pre><code>import dask.dataframe as dd
from dask.distributed import LocalCluster, Client
from dask.diagnostics import ResourceProfiler
import multiprocessing as mp
import time
from globals import * # Custom file with some global variables
# https://stackoverflow.com/questions/53394935/what-is-the-right-way-to-close-a-dask-localcluster
cluster = LocalCluster(n_workers=int(0.9 * mp.cpu_count()), processes=True, threads_per_worker=1, memory_limit="2GB")
print(cluster)
client = Client(cluster)
print(client.dashboard_link)
</code></pre>
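<p>For reference, here is a minimal sketch of the same set-up behind a <code>__main__</code> guard. This assumes the hang comes from worker processes re-importing the module at start-up (a common issue with process-based clusters, especially on Windows/macOS), which is only a guess about the cause here.</p>
<pre><code>import multiprocessing as mp
from dask.distributed import LocalCluster, Client

if __name__ == "__main__":
    # Cluster creation only runs in the parent process, not in spawned workers.
    cluster = LocalCluster(n_workers=int(0.9 * mp.cpu_count()), processes=True,
                           threads_per_worker=1, memory_limit="2GB")
    client = Client(cluster)
    print(client.dashboard_link)
</code></pre>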
|
<python><dask><dask-distributed>
|
2024-12-13 14:59:22
| 0
| 338
|
MKJ
|
79,278,724
| 11,357,695
|
Monkeypatching pandas series to_csv with pytest
|
<p>I am testing a function, <code>write_query()</code>, in <strong>module.py</strong>. My test is in <strong>test_module.py</strong>, which is carried out using pytest. I am using the <a href="https://docs.pytest.org/en/4.6.x/monkeypatch.html" rel="nofollow noreferrer">pytest monkeypatch fixture</a> to monkeypatch <code>Series.to_csv()</code>, in order to prevent file writing. It works in part - my monkeypatched function performs checks that seem to complete, and raises an error that is expected and indicates a passed test (I did this so I could explicitly include a test in the test function, rather than calling the function in a test and have the test performed in the function alone). However, it is still writing a file, <code>doesnt_matter</code>, and I'm not sure why. Can anyone explain?</p>
<p>Thanks!</p>
<p>Tim</p>
<p><strong>folder/module.py</strong></p>
<pre><code>import pandas as pd
def write_query(df, outpath, entry_name_col):
#Make a list of queries. NB - no pandas objects/methods involved in
#build_query()
entry_names = build_query(df, entry_name_col)
#Write the queries to a csv
with open(outpath, 'w') as csv_file:
pd.Series(entry_names).to_csv(csv_file,
index=False,
lineterminator='\n',
header = False)
</code></pre>
<p><strong>folder/test_module.py</strong></p>
<pre><code>import os
import pytest
from folder.module import build_query, write_query
def temp_to_csv(self, csv_file, index, lineterminator, header):
#check csv_file is correct
assert csv_file.name == 'doesnt_matter'
#check values to be written are correct
if list(self.values) == ['(id:"entry_name_1")', '(id:"entry_name_2")']:
#if they are, raise an error with a specific message that can be checked
#in the main test.
raise ValueError('Correct values passed to to_csv')
else:
raise ValueError
class TestWriteQuery:
def test_normal(self,
good_input_queries, #dataframe fixture in conftest.py
monkeypatch):
#check file doesnt exist before function is called
assert 'doesnt_matter' not in os.listdir(os.getcwd()) #This passes
#monkeypatch Series.to_csv() - have also tried module.pd.Series.to_csv,
#pandas.Series.to_csv, pd.Series.to_csv, Series.to_csv
monkeypatch.setattr('folder.module.pd.Series.to_csv',
temp_to_csv)
#call function and test raised error
with pytest.raises(ValueError) as e: #This passes
write_query(good_input_queries,
outpath = 'doesnt_matter',
entry_name_col = 'Entry name')
#e is <ExceptionInfo ValueError('Correct values passed to to_csv') tblen=3>
assert 'Correct values passed to to_csv' in str(e) #This passes
#check file has not been written
assert 'doesnt_matter' not in os.listdir(os.getcwd()) #This fails
</code></pre>
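<p>One detail worth keeping in mind when reading <code>write_query</code>: the file is created by the <code>with open(outpath, 'w')</code> context manager itself, before <code>Series.to_csv</code> is ever called, so patching <code>to_csv</code> alone does not stop the file from appearing. A tiny self-contained demonstration:</p>
<pre><code>import os

# Opening a path in 'w' mode creates (and truncates) the file even if nothing
# is ever written to the handle.
with open('doesnt_matter', 'w') as f:
    pass
print(os.path.exists('doesnt_matter'))  # True
os.remove('doesnt_matter')
</code></pre>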
|
<python><pandas><unit-testing><pytest><monkeypatching>
|
2024-12-13 14:55:04
| 0
| 756
|
Tim Kirkwood
|
79,278,659
| 19,556,055
|
Can't load numpy when installed through Anaconda
|
<p>I am trying to run the vaex module which requires Python 3.6 and, apparently, a conda install rather than a pip install. I don't <em>really</em> know what I'm doing here, but I went ahead and installed Anaconda and created a new conda environment through the Anaconda prompt using <code>conda create -n research_project_conda python=3.6 vaex numpy pandas scikit-learn</code>. That went without any problems, and I can load the environment in VSCode. However, when I try to run a script using this environment I get the error below:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\marij\miniconda3\envs\research_project_conda\lib\site-packages\numpy\core\__init__.py", line 22, in <module>
from . import multiarray
File "C:\Users\marij\miniconda3\envs\research_project_conda\lib\site-packages\numpy\core\multiarray.py", line 12, in <module>
from . import overrides
File "C:\Users\marij\miniconda3\envs\research_project_conda\lib\site-packages\numpy\core\overrides.py", line 7, in <module>
from numpy.core._multiarray_umath import (
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\marij\.vscode\extensions\ms-python.python-2024.22.0-win32-x64\python_files\python_server.py", line 133, in exec_user_input
retval = callable_(user_input, user_globals)
File "<string>", line 1, in <module>
File "C:\Users\marij\miniconda3\envs\research_project_conda\lib\site-packages\numpy\__init__.py", line 140, in <module>
from . import core
File "C:\Users\marij\miniconda3\envs\research_project_conda\lib\site-packages\numpy\core\__init__.py", line 48, in <module>
raise ImportError(msg)
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.6 from "C:\Users\marij\miniconda3\envs\research_project_conda\python.exe"
* The NumPy version is: "1.19.2"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: DLL load failed: The specified module could not be found.
</code></pre>
<p>I tried to add some paths to my PATH according to <a href="https://stackoverflow.com/questions/54063285/numpy-is-already-installed-with-anaconda-but-i-get-an-importerror-dll-load-fail">this</a> thread, but it still doesn't work. Again, I'm a bit of a noob at this. Does anyone know what may be the issue?</p>
|
<python><numpy><anaconda>
|
2024-12-13 14:31:44
| 0
| 338
|
MKJ
|
79,278,599
| 3,895,259
|
How to properly set up django channels and websocket
|
<p>I'm trying to implement websockets with Django Channels. On the frontend I have React and on the backend Django.</p>
<p>I've installed <code>channels</code> and added it to <code>INSTALLED_APPS</code> in settings and I also added to settings.py</p>
<pre><code>ASGI_APPLICATION = 'VideoScreen.routing.application'
CHANNEL_LAYERS = {
'default': {
'BACKEND': 'channels.layers.InMemoryChannelLayer',
},
}
</code></pre>
<p>The routing is as follows:</p>
<pre><code>websocket_urlPattern = [
path("ws/device-key/", DeviceCodeConsumer.as_asgi()),
]
application = ProtocolTypeRouter(
{
"http": get_asgi_application(),
"websocket": AuthMiddlewareStack(URLRouter(websocket_urlPattern)),
}
)
channel_routing = {"http.request": StaticFilesHandler()}
</code></pre>
<p>And <code>DeviceCodeConsumer</code> is as follows:</p>
<pre><code>class DeviceCodeConsumer(WebsocketConsumer):
def connect(self):
self.group_name = 'device_key'
async_to_sync(self.channel_layer.group_add)(
self.group_name,
self.channel_name
)
print(f"Added to group {self.group_name}") # Debugging message
# Accept the WebSocket connection
self.accept()
def receive(self, text_data):
# Handle incoming JSON messages if needed
print('RECIEVED: ' + text_data)
text_data_json = json.loads(text_data)
message = text_data_json['message']
async_to_sync(self.channel_layer.group_send)(
self.group_name,
{
'type': 'update_device_key',
'message': message
}
)
def update_device_key(self, event):
message = event['message']
self.send(text_data=json.dumps({
'type': 'update_device_key',
'message': message
}))
</code></pre>
<p>Thus I create a group named device_key, and I want to send a message to that group when some code executes in a Django view, so that the frontend is notified about it. The view I have in Django is as follows:</p>
<pre><code>@login_required
def device_edit_view(request, device_id):
if device_id <= 0:
device = Device()
else:
device = Device.objects.get(pk=device_id)
if request.method == "POST":
print("POST method reached") # Debugging message to confirm POST method
form = DeviceEditForm(data=request.POST, instance=device)
if form.is_valid():
instance = form.save(commit=False)
instance.save()
device_id = instance.id
messages.success(request, "Your device has been saved")
# Notify the WebSocket group using the utility function
notify_websocket_group(
"device_key", # Group name
"update_device_key", # Type of the message
instance.key
)
return redirect("/devices")
else:
form = DeviceEditForm(instance=device)
form.fields["playlist"].queryset = Playlist.objects.filter(
company_id=request.session[COMPANY_ID]
)
context = {
"form": form,
"device": device,
}
return render(request, "play/device-edit.html", context)
</code></pre>
<p>Where I have <code># Notify the WebSocket group using the utility function</code> I want to notify the channel group that there is a change, and I should also get a notification on the frontend. <code>notify_websocket_group</code> is as follows:</p>
<pre class="lang-py prettyprint-override"><code>def notify_websocket_group(group_name, event_type, message):
"""
Notifies a WebSocket group with a given message.
Args:
group_name (str): The group to send the message to.
event_type (str): The type of the message (for example, 'device_key').
message (str): The message content.
"""
print(f"Notifying group: {group_name}, Event Type: {event_type}, Message: {message}") # Debugging message
channel_layer = get_channel_layer()
async_to_sync(channel_layer.group_send(
group_name,
{
"type": event_type, # The method to call on the consumer
"message": message,
}
))
print('Sent WebSocket message to group device_key') # Debugging message
</code></pre>
<p>I see all those prints in the terminal, but that should also call <code>receive</code> from <code>DeviceCodeConsumer</code> and I should see <code>print('RECIEVED: ' + text_data)</code>. But I do not see that.</p>
<p>What am I doing wrong?</p>
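<p>For readers comparing the snippets above: one thing that stands out (an observation, not necessarily the whole story) is that <code>async_to_sync</code> is normally applied to the callable first and the wrapper is then called with the arguments, whereas <code>notify_websocket_group</code> wraps the result of calling <code>group_send</code>. A minimal sketch of the usual calling convention, reusing the <code>device_key</code> group name:</p>
<pre><code>from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

channel_layer = get_channel_layer()
# Wrap the coroutine function, then call the wrapper with its arguments.
async_to_sync(channel_layer.group_send)(
    "device_key",
    {"type": "update_device_key", "message": "some message"},
)
</code></pre>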
|
<python><django><websocket><django-channels>
|
2024-12-13 14:09:31
| 0
| 12,124
|
Boky
|
79,278,528
| 386,861
|
Trying to plot a time series with ordinal data in Altair
|
<p>I'm slowly understanding a problem that I have with a time series plot involving data over three financial years.</p>
<p>Here's some sample data.</p>
<pre><code> Financial Year Month Category Value FY
0 2022-04-01 2023 Apr Total 53,616.40 2022-2023
1 2022-05-01 2023 May Total 56,652.97 2022-2023
2 2022-06-01 2023 Jun Total 41,240.00 2022-2023
3 2022-07-01 2023 Jul Total 37,039.38 2022-2023
4 2022-08-01 2023 Aug Total 46,683.56 2022-2023
</code></pre>
<p>I can plot the timeseries like this:</p>
<pre><code># Create the Altair chart
chart = alt.Chart(marketing_block_melted).mark_bar().encode(
x=alt.X('Date:T', sort=FY_MONTH_SHORT_ORDER),
y=alt.Y('Value:Q'),
color='FY:N',
tooltip=['Date:T', 'Value:Q', 'Financial Year:N']
).properties(
width=800,
height=400,
title='Year-by-Year Comparison of Financial Data'
)
chart
</code></pre>
<p><a href="https://i.sstatic.net/6p7AP7BM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6p7AP7BM.png" alt="enter image description here" /></a></p>
<p>Which is fine. But I want to plot each year's data across months from April to March, so I can compare finances for a given month across years.</p>
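<p>As a rough illustration of the month-on-month comparison (a sketch only, assuming the frame has the <code>Month</code>, <code>Value</code> and <code>FY</code> columns shown in the sample), one option is to put the month on a nominal axis with an explicit April-to-March sort order and facet by month, so the financial years sit side by side:</p>
<pre><code>import altair as alt

month_order = ['Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep',
               'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar']

chart = alt.Chart(marketing_block_melted).mark_bar().encode(
    x=alt.X('FY:N', title=None),
    y=alt.Y('Value:Q'),
    color='FY:N',
    column=alt.Column('Month:N', sort=month_order),  # one small panel per month
    tooltip=['Month:N', 'Value:Q', 'FY:N'],
).properties(width=60, height=300)
</code></pre>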
|
<python><altair>
|
2024-12-13 13:47:12
| 1
| 7,882
|
elksie5000
|
79,278,424
| 618,579
|
Python importing CSV, adding square brackets
|
<p>I have the following code:</p>
<pre><code># Read serial numbers from CSV
with open('serials.csv', mode='r') as infile:
reader = csv.reader(infile)
serials = [row for row in reader]
# Query warranty date for each serial number and update CSV
for row in serials:
data = {"sn": row}
response = requests.post(url, json=data)
if response.status_code == 200:
warranty_info = response.json()
warranty_expire_date = warranty_info.get('warrantyInfo', {}).get('warrantyExpireDate')
row.append(warranty_expire_date)
else:
row.append("Failed to retrieve")
</code></pre>
<p>This opens a CSV file and reads the serial number from each line; they are stored simply as:</p>
<pre><code>serial1
serial2
serial3
</code></pre>
<p>The idea is then I submit JSON in the format of:</p>
<pre><code>{'sn': "serial1"}
</code></pre>
<p>The issue I have is that it is trying to submit it in the format of:</p>
<pre><code>{'sn': "['serial1']"}
</code></pre>
<p>I presume <code>[]</code> means it's a list, but I can't understand why it thinks it's a list.</p>
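<p>A minimal sketch of the reading step for reference: <code>csv.reader</code> yields every line as a list of fields, so even a one-column file produces <code>['serial1']</code>; taking the first field gives the bare string.</p>
<pre><code>import csv

with open('serials.csv', mode='r', newline='') as infile:
    serials = [row[0] for row in csv.reader(infile) if row]  # first (only) field per line

print(serials[:3])  # e.g. ['serial1', 'serial2', 'serial3']
</code></pre>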
|
<python><python-requests>
|
2024-12-13 13:05:47
| 4
| 2,513
|
Nathan
|
79,278,373
| 1,635,450
|
Python reverse dictionary key value reference relationships
|
<p>If I have the following dictionary:</p>
<pre><code>{"1": {"2", "3"}, "2": {"3", "4"}, "3": {"2", "4"}}
</code></pre>
<p>In this dictionary, each key references the values in its set.</p>
<p>How do I construct a new dictionary that records, for each value, which keys reference it:</p>
<pre><code>{"1": {}, "2": {"1", "3"}, "3": {"1", "2"}, "4": {"2", "3"}}
</code></pre>
<p>This is what I currently do:</p>
<pre><code> for k,v in data.items():
if len(v) > 0:
for p in v:
references[p].add(k)
else:
for p,v in references.items():
v.add(k)
</code></pre>
<p>Is there any better / more optimized / more Pythonic way of doing it?</p>
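<p>For comparison, a minimal sketch with <code>collections.defaultdict</code>, assuming the <code>{}</code> for key <code>"1"</code> in the expected output is meant to be an empty set:</p>
<pre><code>from collections import defaultdict

data = {"1": {"2", "3"}, "2": {"3", "4"}, "3": {"2", "4"}}

references = defaultdict(set)
for k, values in data.items():
    references[k]            # ensure every source key appears, even if unreferenced
    for v in values:
        references[v].add(k)

print(dict(references))
# e.g. {'1': set(), '2': {'1', '3'}, '3': {'1', '2'}, '4': {'2', '3'}} (ordering may vary)
</code></pre>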
|
<python><dictionary><python-3.12>
|
2024-12-13 12:46:38
| 2
| 4,280
|
khteh
|
79,278,351
| 6,089,311
|
How to forward / backward fill null fields in a struct column using Polars?
|
<p>This code does not fill the null values in the struct columns. I want to forward and backward fill the nulls for some of the fields.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df1 = pl.LazyFrame({
"dt": [
"2024-08-30",
"2024-08-02",
"2024-09-03",
"2024-09-04"
],
"df1": {
"a": [0.1, 0.2, 0.3, 0.1],
"b": [0, -1, 2, 1]
},
}).with_columns(
pl.col("dt").str.to_datetime("%Y-%m-%d")
)
df2 = pl.LazyFrame({
"dt": [
"2024-08-29",
"2024-08-30",
"2024-09-02",
"2024-09-03"
],
"df2":{
"a": [100, 120, -80, 20],
"b": [1, -2, 0, 0]
},
}).with_columns(
pl.col("dt").str.to_datetime("%Y-%m-%d")
)
df = pl.concat([df1, df2], how="align")
df = df.with_columns(
*[
pl.col(c).struct.with_fields(
pl.field("a").forward_fill().backward_fill(),
pl.field("b").forward_fill().backward_fill(),
)
for c in ["df1", "df2"]
]
)
print(df.collect())
</code></pre>
<p>Null values appear in the output. I would expect them to be forward and backward filled, but they aren't.</p>
<pre><code>┌─────────────────────┬───────────┬───────────┐
│ dt ┆ df1 ┆ df2 │
│ --- ┆ --- ┆ --- │
│ datetime[μs] ┆ struct[2] ┆ struct[2] │
╞═════════════════════╪═══════════╪═══════════╡
│ 2024-08-02 00:00:00 ┆ {0.2,-1} ┆ null │
│ 2024-08-29 00:00:00 ┆ null ┆ {100,1} │
│ 2024-08-30 00:00:00 ┆ {0.1,0} ┆ {120,-2} │
│ 2024-09-02 00:00:00 ┆ null ┆ {-80,0} │
│ 2024-09-03 00:00:00 ┆ {0.3,2} ┆ {20,0} │
│ 2024-09-04 00:00:00 ┆ {0.1,1} ┆ null │
└─────────────────────┴───────────┴───────────┘
</code></pre>
<p>I would expect this output:</p>
<pre><code>┌─────────────────────┬───────────┬───────────┐
│ dt ┆ df1 ┆ df2 │
│ --- ┆ --- ┆ --- │
│ datetime[μs] ┆ struct[2] ┆ struct[2] │
╞═════════════════════╪═══════════╪═══════════╡
│ 2024-08-02 00:00:00 ┆ {0.2,-1} ┆ {100,1} │
│ 2024-08-29 00:00:00 ┆ {0.2,-1} ┆ {100,1} │
│ 2024-08-30 00:00:00 ┆ {0.1,0} ┆ {120,-2} │
│ 2024-09-02 00:00:00 ┆ {0.1,0} ┆ {-80,0} │
│ 2024-09-03 00:00:00 ┆ {0.3,2} ┆ {20,0} │
│ 2024-09-04 00:00:00 ┆ {0.1,1} ┆ {20,0} │
└─────────────────────┴───────────┴───────────┘
</code></pre>
<p>How can I do that, and why aren't the null values in the columns filled? It could probably be done with <code>pl.col(c).forward_fill().backward_fill()</code>, but what if I want only one field to be filled?</p>
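<p>My reading of the behaviour (not authoritative): the rows added by the align-concat are null at the struct level, so <code>struct.with_fields</code> operates on a null struct and never reaches the inner fields. Below is a sketch that rebuilds each struct from individually filled fields, assuming the field names <code>a</code> and <code>b</code> as above and filling only <code>a</code> to show the per-field case:</p>
<pre><code>df = df.with_columns(
    *[
        pl.struct(
            pl.col(c).struct.field("a").forward_fill().backward_fill().alias("a"),
            pl.col(c).struct.field("b").alias("b"),  # left unfilled on purpose
        ).alias(c)
        for c in ["df1", "df2"]
    ]
)
</code></pre>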
|
<python><dataframe><python-polars>
|
2024-12-13 12:36:46
| 2
| 586
|
Jan
|
79,278,344
| 5,838,180
|
Draw a Venn-diagram in python with one large area encompassing all other
|
<p>So far I have successfully used this code:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib_venn import venn2
# Define the areas for the Venn diagram
area1 = 13552 # Area of the left ellipse
area2 = 7887 # Area of the right ellipse
intersection = 7747 # Area of the intersection
# Calculate the unique areas for each set
only_area1 = area1 - intersection
only_area2 = area2 - intersection
# Create the Venn diagram
plt.figure(figsize=(8, 6))
v = venn2(subsets=(only_area1, only_area2, intersection), set_labels=("", ""))
# Customize the labels
v.get_label_by_id('10').set_text("SDSS-FIRST,\n$i-t_f > 2.5$\nN=13552")
v.get_label_by_id('01').set_text("SDSS-FIRST-\nQuaia-VLASS,\n$G-t > 2.5$\nN=7887")
v.get_label_by_id('11').set_text("$i-t_f > 2.5$,\n$G-t > 2.5$\nN=7747")
# Style the diagram with only margins colored
for subset, color in zip(['10', '01', '11'], ['red', 'blue', 'green']):
patch = v.get_patch_by_id(subset)
if patch:
patch.set_edgecolor(color)
patch.set_linewidth(2)
patch.set_facecolor('none')
# Customize the label colors
v.get_label_by_id('10').set_color('red')
v.get_label_by_id('01').set_color('blue')
v.get_label_by_id('11').set_color('green')
plt.show()
</code></pre>
<p>And it created this nice plot:</p>
<p><a href="https://i.sstatic.net/xVhd3zJi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xVhd3zJi.png" alt="enter image description here" /></a></p>
<p>But now I decided I want to encompass all of the areas in the diagram as a part of an all-encompassing area, so I want to get a diagram like this:</p>
<p><a href="https://i.sstatic.net/pzn6wYZf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzn6wYZf.png" alt="enter image description here" /></a></p>
<p>How do I code this? It seems like I have to use <code>venn3</code> but I am confused about how to tell that the black circle should contain everything else. And ChatGPT provides only nonsense. Thanks for any help!</p>
<hr />
<p>Using the help of tanglef I was able to create this code, which almost does the job:</p>
<pre><code>from collections import Counter
import matplotlib.pyplot as plt
from matplotlib_venn import venn3  # venn3 is used below but was missing from the imports
sets = Counter()
sets["100"] = 0.21346
sets["010"] = 0
sets["001"] = 0
sets["110"] = 0.13552
sets["101"] = 0.7887
sets["111"] = 0.7747
sets["011"] = 0
# Create the Venn diagram
plt.figure(figsize=(8, 6))
v = venn3(subsets=sets)
# Customize the labels
v.get_label_by_id('100').set_text("SDSS-FIRST,\n$i < 20.5$")
v.get_label_by_id('101').set_text("SDSS-FIRST,\n$i-t_f > 2.5$\nN=13552")
v.get_label_by_id('110').set_text("SDSS-FIRST-\nQuaia-VLASS,\n$G-t > 2.5$\nN=7887")
v.get_label_by_id('111').set_text("$i-t_f > 2.5$,\n$G-t > 2.5$\nN=7747")
# Style the diagram with only margins colored
for subset, color in zip(['100','110', '101', '111'], ['black','red', 'blue', 'green']):
patch = v.get_patch_by_id(subset)
if patch:
patch.set_edgecolor(color)
patch.set_linewidth(4)
patch.set_facecolor('none')
# Customize the label colors
v.get_label_by_id('100').set_color('black')
v.get_label_by_id('110').set_color('red')
v.get_label_by_id('101').set_color('blue')
v.get_label_by_id('111').set_color('green')
</code></pre>
<p><a href="https://i.sstatic.net/AJIz5EO8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJIz5EO8.png" alt="enter image description here" /></a></p>
<p>My problem now is that I want to get rid of the labels A, B, C, and I want to make the area sizes correspond to the numbers provided in <code>sets</code> (i.e. to the numbers in the labels), because as of now the area sizes are wrong. And I wonder if I have maybe chosen the wrong sets.</p>
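<p>On the first part (removing the A, B, C set labels), here is a minimal sketch that should work with <code>matplotlib_venn</code>, assuming <code>v</code> is the <code>venn3</code> object created above; the area-proportionality question is a separate issue that depends on choosing the subset sizes correctly.</p>
<pre><code># Hide the set labels 'A', 'B', 'C' drawn next to the circles.
for set_id in ('A', 'B', 'C'):
    label = v.get_label_by_id(set_id)
    if label is not None:
        label.set_text('')
</code></pre>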
|
<python><matplotlib><venn-diagram>
|
2024-12-13 12:33:57
| 2
| 2,072
|
NeStack
|
79,278,333
| 8,024,622
|
KivyMD delete line in file when button ist pressed
|
<p>I have a screen showing items saved in a file. Every line should be shown on the screen. This works well.</p>
<p>What I want to achieve is: when the button to the right of an element (icon: 'delete') is pressed, the corresponding line in that file is removed and the screen is refreshed.</p>
<p>I really do not know how to do this. Please help :)</p>
<p>Here's the python code</p>
<pre><code>from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.core.window import Window
Window.keyboard_anim_args = {'d': .2, 't': 'in_out_expo'}
Window.softinput_mode = "below_target"
Window.size = (450,800)
from kivymd.app import MDApp
from kivymd.uix.menu import MDDropdownMenu
from kivymd.uix.navigationbar import MDNavigationBar, MDNavigationItem
from kivymd.uix.boxlayout import MDBoxLayout
from kivymd.uix.label import MDLabel
from kivymd.uix.divider import MDDivider
from kivymd.uix.button import MDButton, MDButtonText, MDButtonIcon
from kivymd.uix.textfield import MDTextField
import os
KV = '''
<MainScreen>:
MDScreen:
md_bg_color: app.theme_cls.backgroundColor
<FavoritenScreen>:
MDScreen:
ScrollView:
id: favoriten_sv
size_hint: None, None
size: root.width, root.height
pos_hint: {'center_x': 0.5, 'center_y': 0.3}
MDBoxLayout:
id: favoriten_layout
orientation: 'vertical'
size_hint_y: None
height: self.minimum_height
MDBoxLayout:
orientation: "vertical"
ScreenManager:
id:sm
MainScreen:
name:
'Suche'
FavoritenScreen:
name:
'Favoriten'
MDNavigationBar:
on_switch_tabs: app.wechsel(*args)
MDNavigationItem:
MDNavigationItemIcon:
icon:'text-search'
MDNavigationItemLabel:
text:'Suche'
MDNavigationItem:
MDNavigationItemIcon:
icon:'heart'
MDNavigationItemLabel:
text:'Favoriten'
'''
class FavoritenScreen(Screen):
def on_enter(self):
self.ids.favoriten_layout.clear_widgets()
fav = self.manager.get_screen('Suche').favoriten_laden()
for entry in range(len(fav)):
if fav[entry].strip():
if entry > 0:
e = fav[entry].split(';')
button_layout = MDBoxLayout(orientation='horizontal',size_hint_y=None,height=126)
favorite_label = MDTextField(
text=e[1] + '\n' + e[0] + '\n' + e[2] + '\n' + e[3],
disabled=True,
multiline=True,
#halign="center",
)
favorite_button = MDButton(
pos_hint={'center_y': 0.45}
)
favorite_button.add_widget(MDButtonIcon(
icon='check'
))
delete_button = MDButton(
pos_hint={'center_y': 0.45})
delete_button.add_widget(MDButtonIcon(
icon='delete'
))
button_layout.add_widget(favorite_label)
button_layout.add_widget(favorite_button)
button_layout.add_widget(delete_button)
self.ids.favoriten_layout.add_widget(button_layout)
class MainScreen(Screen):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.fav = self.favoriten_laden()
def favoriten_laden(self):
if not os.path.exists('f'):
with open('f', 'w', encoding='utf-8') as w:
w.write('stadt;marke;adresse;kraftstoff')
with open('f', 'r', encoding='utf-8') as r:
fav = r.read()
return fav.split('\n')
class MyApp(MDApp):
def wechsel(self, bar: MDNavigationBar, item: MDNavigationItem, item_icon: str, item_text: str):
if item_text == 'Suche':
self.root.ids.sm.transition.direction = 'right'
elif item_text == 'Favoriten':
self.root.ids.sm.transition.direction = 'left'
self.root.ids.sm.current = item_text
def build(self):
self.title = 'Test'
self.theme_cls.theme_style = 'Dark'
self.theme_cls.primary_palette = 'Whitesmoke'
self.screen = Builder.load_string(KV)
return self.screen
if __name__ == '__main__':
MyApp().run()
</code></pre>
<p>And here's what the file looks like</p>
<pre><code>A;D;G;J
B;E;H;K
C;F;I;L
</code></pre>
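<p>For what it's worth, a minimal sketch of the file-side part (an illustration only; it assumes the favourites file is named <code>f</code> and that each favourite corresponds to one verbatim line in it). Inside <code>FavoritenScreen.on_enter</code>, the delete button could then be bound to it, for example with <code>delete_button.bind(on_release=lambda *_, line=fav[entry]: (delete_favorite_line('f', line), self.on_enter()))</code>.</p>
<pre><code>def delete_favorite_line(path, line_to_remove):
    """Remove one exact line from the favourites file."""
    with open(path, 'r', encoding='utf-8') as r:
        lines = [ln for ln in r.read().split('\n') if ln.strip()]
    with open(path, 'w', encoding='utf-8') as w:
        w.write('\n'.join(ln for ln in lines if ln != line_to_remove))
</code></pre>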
|
<python><kivy><kivymd>
|
2024-12-13 12:29:25
| 1
| 624
|
jigga
|
79,278,129
| 11,626,097
|
boto3 redshift client .describe_statement provides no error message
|
<p>I've run an sql query through boto3 redshift client's <em>execute_statement</em> function, and subsequently checked the status of the query with <em>describe_statement</em>.</p>
<pre><code>result = client_redshift.execute_statement(Database= 'my_database', SecretArn= secret_arn,
Sql= query_str, ClusterIdentifier= cluster_id,
ResultFormat = "CSV")
response = client_redshift.describe_statement(Id = result["Id"])
if response["Status"] == "FINISHED":
data = client_redshift.get_statement_result(Id = result["Id"])
break
if response["Status"] == "FAILED":
print(response.keys())
</code></pre>
<p>The status comes out as failed, and I'm trying to figure out why. The SQL is fine, I've checked it in another app.</p>
<p>According to boto3 1.35.79 documentation, there should be "Error" in the keys returned by describe_statement, but there is not.</p>
<p>Boto3 documentation:</p>
<p><a href="https://i.sstatic.net/wi1l1XAY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wi1l1XAY.png" alt="boto3 describe_statement documentation" /></a></p>
<p>Result of response.keys():</p>
<pre><code>dict_keys(['ClusterIdentifier', 'CreatedAt', 'Database', 'Duration', 'HasResultSet', 'Id', 'QueryString', 'RedshiftPid', 'RedshiftQueryId', 'ResultFormat', 'ResultRows', 'ResultSize', 'SecretArn', 'Status', 'UpdatedAt', 'ResponseMetadata'])
</code></pre>
<p>Before I report this as an issue on Github, am I missing something here?</p>
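<p>As a side note (an assumption about the flow rather than a definite answer): the Data API runs statements asynchronously, so <code>describe_statement</code> called immediately after <code>execute_statement</code> may still see a transient state. A minimal polling sketch that also guards the <code>Error</code> lookup, reusing <code>client_redshift</code> and <code>result</code> from above:</p>
<pre><code>import time

while True:
    response = client_redshift.describe_statement(Id=result["Id"])
    status = response["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(0.5)  # statement is still SUBMITTED/PICKED/STARTED

if status == "FAILED":
    print(response.get("Error", "no error message returned"))
</code></pre>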
|
<python><amazon-web-services><aws-lambda><amazon-redshift><boto3>
|
2024-12-13 11:23:02
| 0
| 679
|
tonneofash
|
79,278,086
| 19,356,117
|
How to convert Polars dataframe to numpy array which has certain dims?
|
<p>I have a Polars DataFrame with 300 basins, each basin having 100,000 time records, and each time record consisting of 40 variables, totaling 30 million rows and 40 variables. How can I reconstruct it into a NumPy array with the shape (300, 100,000, 40) without disrupting the corresponding data indices?</p>
<p>Example:</p>
<pre><code>shape: (10, 7)
┌──────────────┬─────────────┬─────────────┬─────────────┬─────────────┬─────────────┬─────────────┐
│ HQprecipitat ┆ IRprecipita ┆ precipitati ┆ precipitati ┆ randomError ┆ basin_id ┆ time │
│ ion ┆ tion ┆ onCal ┆ onUncal ┆ --- ┆ --- ┆ --- │
│ --- ┆ --- ┆ --- ┆ --- ┆ f32 ┆ str ┆ datetime[μs │
│ f32 ┆ f32 ┆ f32 ┆ f32 ┆ ┆ ┆ ] │
╞══════════════╪═════════════╪═════════════╪═════════════╪═════════════╪═════════════╪═════════════╡
│ null ┆ null ┆ null ┆ null ┆ null ┆ anhui_62909 ┆ 1980-01-01 │
│ ┆ ┆ ┆ ┆ ┆ 400 ┆ 09:00:00 │
│ null ┆ null ┆ null ┆ null ┆ null ┆ anhui_62909 ┆ 1980-01-01 │
│ ┆ ┆ ┆ ┆ ┆ 400 ┆ 12:00:00 │
│ null ┆ null ┆ null ┆ null ┆ null ┆ anhui_62909 ┆ 1980-01-01 │
│ ┆ ┆ ┆ ┆ ┆ 400 ┆ 15:00:00 │
│ null ┆ null ┆ null ┆ null ┆ null ┆ anhui_62909 ┆ 1980-01-01 │
│ ┆ ┆ ┆ ┆ ┆ 400 ┆ 18:00:00 │
│ null ┆ null ┆ null ┆ null ┆ null ┆ anhui_62909 ┆ 1980-01-01 │
│ ┆ ┆ ┆ ┆ ┆ 400 ┆ 21:00:00 │
│ null ┆ null ┆ null ┆ null ┆ null ┆ anhui_62909 ┆ 1980-01-02 │
│ ┆ ┆ ┆ ┆ ┆ 400 ┆ 00:00:00 │
│ null ┆ null ┆ null ┆ null ┆ null ┆ anhui_62909 ┆ 1980-01-02 │
│ ┆ ┆ ┆ ┆ ┆ 400 ┆ 03:00:00 │
│ null ┆ null ┆ null ┆ null ┆ null ┆ anhui_62909 ┆ 1980-01-02 │
│ ┆ ┆ ┆ ┆ ┆ 400 ┆ 06:00:00 │
│ null ┆ null ┆ null ┆ null ┆ null ┆ anhui_62909 ┆ 1980-01-02 │
│ ┆ ┆ ┆ ┆ ┆ 400 ┆ 09:00:00 │
│ null ┆ null ┆ null ┆ null ┆ null ┆ anhui_62909 ┆ 1980-01-02 │
│ ┆ ┆ ┆ ┆ ┆ 400 ┆ 12:00:00 │
└──────────────┴─────────────┴─────────────┴─────────────┴─────────────┴─────────────┴─────────────┘
# It should be reshaped to a numpy array which shape is (1, 10, 7)
# 1 means amount of basins, 10 means amount of times, 7 means width or amount of variables.
</code></pre>
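<p>A minimal sketch of one way to do this, assuming the eager frame is called <code>df</code>, every basin has exactly the same number of time steps, and only the numeric variable columns go into the array (the string <code>basin_id</code> and datetime <code>time</code> columns cannot live in a float array anyway):</p>
<pre><code>import numpy as np

n_basins = df["basin_id"].n_unique()

arr = (
    df.sort("basin_id", "time")        # group rows by basin, ordered in time
      .drop("basin_id", "time")        # keep only the variable columns
      .to_numpy()
      .reshape(n_basins, -1, df.width - 2)
)
print(arr.shape)  # (n_basins, n_times, n_variables)
</code></pre>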
|
<python><python-3.x><numpy><python-polars>
|
2024-12-13 11:07:05
| 1
| 1,115
|
forestbat
|
79,277,864
| 6,151,828
|
Define different class method depending on whether an input parameter is present/absent
|
<p>I would like to be able to define a different class method, depending on whether one of the inputs is present or not. Something like:</p>
<pre><code>class mcmc(object):
def __init__(self, X, Y=None):
if Y is None:
def input_vector(self, beta):
return beta.dot(X)
else:
def input_vector(self, beta):
return beta.dot(np.vstack([X, Y]))
</code></pre>
<p>I could have coded this directly in the method, like:</p>
<pre><code>class mcmc(object):
def __init__(self, X, Y=None):
self.X = X
if Y is not None:
self.Y = Y
def input_vector(self, beta):
        if hasattr(self, 'Y'):
            return beta.dot(np.vstack([self.X, self.Y]))
else:
return beta.dot(self.X)
</code></pre>
<p>but the function is used repeatedly (millions of times), so it probably adds a lot to the execution time.</p>
<p>Another similar possibility, inspired by <a href="https://stackoverflow.com/q/72048892/6151828">How can I call any method inside a class depending on an input to another method in the same class?</a>, is probably:</p>
<pre><code>class mcmc(object):
def __init__(self, X, Y=None):
self.X = X
if Y is not None:
self.Y = Y
self.method = 'input_vector_XY'
else:
self.method = 'input_vector_X'
def input_vector(self, beta):
return self.__getattribute__(self.method)(beta)
def input_vector_XY(self, beta):
        return beta.dot(np.vstack([self.X, self.Y]))
def input_vector_X(self, beta):
return beta.dot(self.X)
</code></pre>
<p>This also involves an "overhead" in terms of calling an additional function.</p>
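<p>One more variant worth mentioning (a sketch of an alternative, not a drop-in for the class above): since the only difference between the two branches is the matrix being multiplied, the stacked matrix can be precomputed once in <code>__init__</code>, so the hot path has no branching or indirection at all.</p>
<pre><code>import numpy as np

class mcmc(object):
    def __init__(self, X, Y=None):
        # Decide once, at construction time, which design matrix to use.
        self._design = np.asarray(X) if Y is None else np.vstack([X, Y])

    def input_vector(self, beta):
        return beta.dot(self._design)
</code></pre>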
|
<python>
|
2024-12-13 09:52:35
| 2
| 803
|
Roger V.
|
79,277,600
| 1,406,168
|
FastAPI deployed to an azure web app - getting environments variables
|
<p>I have deployed a FastAPI application to an Azure web app. I would like to access the environment variables from the web app in FastAPI. Is that possible, and if yes, how do you do that? Both from Azure of course, but also: how do you manage this locally when developing? As we speak I am only using a basic hello world template:</p>
<pre><code>from fastapi import FastAPI,Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi_azure_auth import SingleTenantAzureAuthorizationCodeBearer
import uvicorn
from fastapi import FastAPI, Security
import os
from typing import Dict
from settings import Settings
from pydantic import AnyHttpUrl,BaseModel
from contextlib import asynccontextmanager
from typing import AsyncGenerator
from fastapi_azure_auth.user import User
settings = Settings()
@asynccontextmanager
async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
"""
Load OpenID config on startup.
"""
await azure_scheme.openid_config.load_config()
yield
app = FastAPI(
swagger_ui_oauth2_redirect_url='/oauth2-redirect',
swagger_ui_init_oauth={
'usePkceWithAuthorizationCodeGrant': True,
'clientId': settings.xx,
'scopes': settings.xx,
},
)
if settings.BACKEND_CORS_ORIGINS:
app.add_middleware(
CORSMiddleware,
allow_origins=[str(origin) for origin in settings.BACKEND_CORS_ORIGINS],
allow_credentials=True,
allow_methods=['*'],
allow_headers=['*'],
)
azure_scheme = SingleTenantAzureAuthorizationCodeBearer(
app_client_id=settings.xx,
tenant_id=settings.xx,
scopes=settings.xx,
)
class User(BaseModel):
name: str
roles: list[str] = []
@app.get("/", dependencies=[Security(azure_scheme)])
async def root():
print("Yo bro")
return {"whoIsTheBest": "DNA Team is"}
@app.get("/test", dependencies=[Security(azure_scheme)])
async def root():
print("Yo test")
return {"whoIsTheBest": "Team is!"}
@app.get("/me", dependencies=[Security(azure_scheme)])
async def me(request: Request):
print("Me")
return User(roles=request.state.user.roles,name=request.state.user.name)
if __name__ == '__main__':
uvicorn.run('main:app', reload=True)
</code></pre>
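<p>For reference, a minimal sketch of the usual pattern (with <code>MY_SETTING</code> as a placeholder name): App Service exposes its application settings to the process as environment variables, so the same lookup works both in Azure and locally, where you can export the variable or load it from a <code>.env</code> file before starting uvicorn.</p>
<pre><code>import os

# Reads the App Service application setting in Azure, or whatever is exported
# locally; the second argument is a fallback for local development.
MY_SETTING = os.getenv("MY_SETTING", "local-default")
</code></pre>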
|
<python><azure><azure-web-app-service><fastapi><azure-webapps>
|
2024-12-13 08:18:38
| 1
| 5,363
|
Thomas Segato
|
79,277,584
| 13,392,257
|
How to clear task resources and zombie processes
|
<p>I am building a celery + django + selenium application. I am running selenium-based browsers in separate processes with the help of celery. Versions:</p>
<pre><code>celery==5.2.6
redis==3.4.1
selenium-wire==5.1.0
Django==4.0.4
djangorestframework==3.13.1
</code></pre>
<p>I found out that after several hours the application generates thousands of zombie processes. I also found out that the problem is related to the celery docker container, because after <code>sudo /usr/local/bin/docker-compose -f /data/new_app/docker-compose.yml restart celery</code> I have 0 zombie processes.</p>
<p>My code</p>
<pre><code>from rest_framework.decorators import api_view
@api_view(['POST'])
def periodic_check_all_urls(request): # web-service endpoint
...
check_urls.delay(parsing_results_ids) # call celery task
</code></pre>
<p>Celery task code</p>
<pre><code>from celery import shared_task
@shared_task()
def check_urls(parsing_result_ids: List[int]):
"""
Run Selenium-based parser
the parser exctracts data and saves in database
"""
try:
logger.info(f"{datetime.now()} Start check_urls")
parser = Parser() # open selenium browser
parsing_results = ParsingResult.objects.filter(pk__in=parsing_result_ids).exclude(status__in=["DONE", "FAILED"])
parser.check_parsing_result(parsing_results)
except Exception as e:
full_trace = traceback.format_exc()
finally:
if 'parser' in locals():
parser.stop()
</code></pre>
<p>Selenium browser stop function and destructor</p>
<pre><code>class Parser():
def __init__(self):
"""
Prepare parser
"""
if not USE_GUI:
self.display = Display(visible=0, size=(800, 600))
self.display.start()
""" Replaced with FireFox
self.driver = get_chromedriver(proxy_data)
"""
proxy_data = {
...
}
self.driver = get_firefox_driver(proxy_data=proxy_data)
def __del__(self):
self.stop()
def stop(self):
try:
self.driver.quit()
logger.info("Selenium driver closed")
except:
pass
try:
self.display.stop()
logger.info("Display stopped")
except:
pass
</code></pre>
<p>I also tried several settings to limit celery task resources and run time (it didn't help with the zombie processes).</p>
<p>My celery settings in dgango settings.py</p>
<pre><code># celery setting (documents generation)
CELERY_BROKER_URL = os.environ.get("CELERY_BROKER", "redis://redis:6379/0")
CELERY_RESULT_BACKEND = os.environ.get("CELERY_BROKER", "redis://redis:6379/0")
CELERY_IMPORTS = ("core_app.celery",)
CELERY_TASK_TIME_LIMIT = 10 * 60
</code></pre>
<p>My celery settings in dockers</p>
<pre><code>celery:
build: ./project
command: celery -A core_app worker --loglevel=info --concurrency=15 --max-memory-per-child=1000000
volumes:
- ./project:/usr/src/app
- ./project/media:/project/media
- ./project/logs:/project/logs
env_file:
- .env
environment:
# environment variables declared in the environment section override env_file
- DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
- CELERY_BROKER=redis://redis:6379/0
- CELERY_BACKEND=redis://redis:6379/0
depends_on:
- django
- redis
</code></pre>
<p>I read <a href="https://stackoverflow.com/questions/22717479/django-celery-how-to-kill-a-celery-task">Django/Celery - How to kill a celery task?</a> but it didn't help</p>
<p>Also read <a href="https://stackoverflow.com/questions/64347284/celery-revoke-leaving-zombie-ffmpeg-process">Celery revoke leaving zombie ffmpeg process</a> but my task already contains try/except</p>
<p>Example of zombie processes</p>
<pre><code>ps aux | grep 'Z'
root 32448 0.0 0.0 0 0 ? Z 13:45 0:00 [Utility Process] <defunct>
root 32449 0.0 0.0 0 0 ? Z 13:09 0:00 [Utility Process] <defunct>
root 32450 0.0 0.0 0 0 ? Z 11:13 0:00 [sh] <defunct>
root 32451 0.0 0.0 0 0 ? Z 13:44 0:00 [Utility Process] <defunct>
root 32452 0.0 0.0 0 0 ? Z 10:12 0:00 [Utility Process] <defunct>
root 32453 0.0 0.0 0 0 ? Z 09:52 0:00 [sh] <defunct>
root 32454 0.0 0.0 0 0 ? Z 10:40 0:00 [Utility Process] <defunct>
root 32455 0.0 0.0 0 0 ? Z 09:52 0:00 [Utility Process] <defunct>
root 32456 0.0 0.0 0 0 ? Z 10:13 0:00 [sh] <defunct>
root 32457 0.0 0.0 0 0 ? Z 10:51 0:00 [Utility Process] <defunct>
root 32459 0.0 0.0 0 0 ? Z 14:01 0:00 [Utility Process] <defunct>
root 32460 0.0 0.0 0 0 ? Z 13:16 0:00 [Utility Process] <defunct>
root 32461 0.0 0.0 0 0 ? Z 10:40 0:00 [Utility Process] <defunct>
root 32462 0.0 0.0 0 0 ? Z 10:12 0:00 [Utility Process] <defunct>
</code></pre>
|
<python><selenium-webdriver><celery>
|
2024-12-13 08:10:28
| 2
| 1,708
|
mascai
|
79,277,572
| 15,460,398
|
How to download the TestData dataset of open3d python
|
<p>I have installed open3d on a Raspberry Pi 4 using the <a href="https://www.open3d.org/docs/0.9.0/getting_started.html" rel="nofollow noreferrer">documentation</a>:
Command: <code>pip3 install open3d</code></p>
<p>I can successfully import open3d in a Python program. But if I run the program from the <a href="https://www.open3d.org/docs/0.9.0/tutorial/Basic/pointcloud.html#visualize-point-cloud" rel="nofollow noreferrer">tutorial</a>:</p>
<pre><code>import numpy as np
import open3d as o3d
if __name__ == "__main__":
print("Load a ply point cloud, print it, and render it")
pcd = o3d.io.read_point_cloud("../../TestData/fragment.ply")
print(pcd)
print(np.asarray(pcd.points))
o3d.visualization.draw_geometries([pcd])
</code></pre>
<p>It gives error:</p>
<pre><code>Load a ply point cloud, print it, and render it
RPly: Unable to open file
[Open3D WARNING] Read PLY failed: unable to open file: ../../TestData/fragment.ply
PointCloud with 0 points.
[]
[Open3D WARNING] The number of points is 0 when creating axis-aligned bounding box.
</code></pre>
<p>It looks like the TestData files are not downloaded.</p>
<p>So how can I download the TestData files?</p>
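<p>As a pointer (this assumes a reasonably recent open3d release, roughly 0.15 or newer; older builds such as the 0.9 docs referenced above predate it): the library ships a small dataset helper that downloads the sample fragment on first use.</p>
<pre><code>import open3d as o3d

# Downloads fragment.ply to a local cache on first call and exposes its path.
ply_point_cloud = o3d.data.PLYPointCloud()
pcd = o3d.io.read_point_cloud(ply_point_cloud.path)
print(pcd)
</code></pre>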
|
<python><python-3.x><open3d>
|
2024-12-13 08:06:31
| 1
| 361
|
BeamString
|
79,277,541
| 13,128,766
|
The "BadStatusLine" error when connecting to mail server with exchangelib
|
<p>I'm trying to connect to my mailbox with exchangelib. Options are: protocol SMTP, port 465. Code:</p>
<pre><code>credentials = Credentials(MAIL_ACCOUNT, PASS)
config = ExConfiguration(server=MAIL_SERVER + ":" + MAIL_SERVER_PORT, credentials=credentials)
account = Account(MAIL_ACCOUNT,config=config, credentials=credentials, autodiscover=False)
for msg in account.inbox:
print('Message')
</code></pre>
<p>The script throws the error "exchangelib.errors.TransportError: ('Connection aborted.', BadStatusLine('220 smtp.mydomain.com ESMTP ready'))".</p>
<p>The code 220 means that the connection is successful. I believe the error is caused by an unexpected string in the server response. How can I skip the check for this string in the response when connecting to the server?</p>
|
<python><exchangelib>
|
2024-12-13 07:49:44
| 1
| 451
|
Vad1m
|
79,277,352
| 1,084,174
|
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn while fine tuning llama
|
<p>This <a href="https://stackoverflow.com/questions/76780201/runtimeerror-element-0-of-tensors-does-not-require-grad-and-does-not-have-a-gr">answer</a> did not solve my problem.</p>
<p>I was trying to fine-tune a Llama model locally using LoRA with PEFT. Here is my code:</p>
<pre><code>import torch
from datasets import load_dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
TrainingArguments,
pipeline,
logging,
)
from peft import LoraConfig, PeftModel, get_peft_model, prepare_model_for_kbit_training
from trl import SFTTrainer
base_model_id = "/ml/models/Meta-Llama-3.1-8B-Instruct"
new_model = "./models/Meta-llama3-8b-SFT2"
dataset = load_dataset('json', data_files='./datasets/done.jsonl', split='train')
def preprocess_function(examples):
examples['text'] = [f"question: {q}, answer: {a}, url: {u}" for q, a, u in zip(examples['question'], examples['answer'], examples['url'])]
return {'text': examples['text']}
preprocessed_dataset = dataset.map(preprocess_function, batched=True)
preprocessed_dataset = preprocessed_dataset.remove_columns(['chunk_id', 'question', 'answer', 'url'])
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_model_id, add_eos_token=True, add_bos_token=True)
tokenizer.pad_token = tokenizer.eos_token
peft_config = LoraConfig(task_type="CAUSAL_LM", r=2, lora_alpha=16, lora_dropout=0.01)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
training_arguments = TrainingArguments(
output_dir="./results",
num_train_epochs=1,
per_device_train_batch_size=1,
gradient_accumulation_steps=16,
optim="paged_adamw_32bit",
save_steps=100,
logging_steps=5,
learning_rate=2e-4,
weight_decay=0.001,
bf16=True,
max_grad_norm=0.3,
max_steps=500,
warmup_ratio=0.03,
group_by_length=True,
gradient_checkpointing=True,
report_to="tensorboard",
)
trainer = SFTTrainer(
model=model,
train_dataset=preprocessed_dataset,
tokenizer=tokenizer,
args=training_arguments,
)
trainer.train()
trainer.model.save_pretrained(new_model)
</code></pre>
<p>However, this code throws an error:</p>
<pre><code>---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[2], line 68
60 trainer = SFTTrainer(
61 model=model,
62 train_dataset=dataset,
63 tokenizer=tokenizer,
64 args=training_arguments,
65 )
67 # Train the model
---> 68 trainer.train()
70 # Save the trained model
71 trainer.model.save_pretrained(new_model)
File ~/venvs/fine_tuning_llama/lib/python3.11/site-packages/transformers/trainer.py:2164, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
2162 hf_hub_utils.enable_progress_bars()
2163 else:
-> 2164 return inner_training_loop(
2165 args=args,
2166 resume_from_checkpoint=resume_from_checkpoint,
2167 trial=trial,
2168 ignore_keys_for_eval=ignore_keys_for_eval,
2169 )
File ~/venvs/fine_tuning_llama/lib/python3.11/site-packages/transformers/trainer.py:2522, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2516 context = (
2517 functools.partial(self.accelerator.no_sync, model=model)
2518 if i != len(batch_samples) - 1
2519 else contextlib.nullcontext
2520 )
2521 with context():
-> 2522 tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
2524 if (
2525 args.logging_nan_inf_filter
2526 and not is_torch_xla_available()
2527 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
2528 ):
2529 # if loss is nan or inf simply add the average of previous logged losses
2530 tr_loss = tr_loss + tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File ~/venvs/fine_tuning_llama/lib/python3.11/site-packages/transformers/trainer.py:3688, in Trainer.training_step(***failed resolving arguments***)
3686 scaled_loss.backward()
3687 else:
-> 3688 self.accelerator.backward(loss, **kwargs)
3689 # Finally we need to normalize the loss for reporting
3690 if num_items_in_batch is None:
File ~/venvs/fine_tuning_llama/lib/python3.11/site-packages/accelerate/accelerator.py:2248, in Accelerator.backward(self, loss, **kwargs)
2246 self.lomo_backward(loss, learning_rate)
2247 else:
-> 2248 loss.backward(**kwargs)
File ~/venvs/fine_tuning_llama/lib/python3.11/site-packages/torch/_tensor.py:581, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
571 if has_torch_function_unary(self):
572 return handle_torch_function(
573 Tensor.backward,
574 (self,),
(...)
579 inputs=inputs,
580 )
--> 581 torch.autograd.backward(
582 self, gradient, retain_graph, create_graph, inputs=inputs
583 )
File ~/venvs/fine_tuning_llama/lib/python3.11/site-packages/torch/autograd/__init__.py:347, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
342 retain_graph = create_graph
344 # The reason we repeat the same comment below is that
345 # some Python versions print out the first line of a multi-line function
346 # calls in the traceback and some print out the last line
--> 347 _engine_run_backward(
348 tensors,
349 grad_tensors_,
350 retain_graph,
351 create_graph,
352 inputs,
353 allow_unreachable=True,
354 accumulate_grad=True,
355 )
File ~/venvs/fine_tuning_llama/lib/python3.11/site-packages/torch/autograd/graph.py:825, in _engine_run_backward(t_outputs, *args, **kwargs)
823 unregister_hooks = _register_logging_hooks_on_whole_graph(t_outputs)
824 try:
--> 825 return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
826 t_outputs, *args, **kwargs
827 ) # Calls into the C++ engine to run the backward pass
828 finally:
829 if attach_logging_hooks:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
</code></pre>
<p>What should I do to resolve it?</p>
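<p>For readers hitting the same trace: one commonly suggested remedy (an assumption about the cause, not a guaranteed fix) is that with <code>gradient_checkpointing=True</code> the embedding outputs need to require gradients so the LoRA parameters get a <code>grad_fn</code>; transformers has a built-in hook for this.</p>
<pre><code># After loading the base model and before wrapping it with PEFT:
model.config.use_cache = False        # KV-cache and gradient checkpointing don't mix
model.enable_input_require_grads()    # make embedding outputs require grad
</code></pre>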
|
<python><machine-learning><llama><fine-tuning><peft>
|
2024-12-13 06:02:07
| 0
| 40,671
|
Sazzad Hissain Khan
|
79,277,053
| 1,043,677
|
Spline lineal continuation on borders
|
<p>I have code interpolating a spline from a CDF that seems valid in the range x=40 to x=110. Below 40, I want a flat line at 0, and after 110, a flat line at 1 extending towards infinity.</p>
<p>I have come up with this code:</p>
<pre><code>def _find_spline(x_orig: np.ndarray, cdf: np.ndarray) -> BSpline:
# Find the smoothing factor
smoothing_factor, iters = _find_highest_smoothing_factor(x_orig, cdf, 1e-5)
# Fit a linear spline to the first segment
t_begin, c_begin, k_begin = splrep([0, x_orig[0]], [0, cdf[0]], k=1)
# Fit the middle segment with the specified smoothing factor
t_middle, c_middle, k_middle = splrep(x_orig, cdf, s=smoothing_factor)
# Explicitly construct the final segment
t_end = np.array([x_orig[-1], x_orig[-1], 300, 300]) # Linear knots
c_end = np.array([cdf[-1], 1.0]) # Start from the evaluated value
# Combine the knots and coefficients
t_combined = np.concatenate([t_begin[:-1], t_middle[1:-1], t_end])
c_combined = np.concatenate([c_begin[:-1], c_middle[1:-1], c_end])
return BSpline(t_combined, c_combined, k_middle, extrapolate=True)
</code></pre>
<p>I have uploaded a full reproducible example <a href="http://tpcg.io/_MWR0OJ" rel="nofollow noreferrer">here</a>.</p>
<p>however, this generates something like in:</p>
<p><a href="https://i.sstatic.net/82nZCoZT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82nZCoZT.png" alt="enter image description here" /></a></p>
<p>So, the first segment up to 40 seems fine and evaluates to 0... but I don't know why the spline goes down to 0 at the end after 110; I want it to just continue straight.</p>
<p>What is wrong?</p>
|
<python><scipy><interpolation><spline>
|
2024-12-13 01:53:26
| 1
| 1,718
|
lqbweb
|
79,276,868
| 3,635,544
|
Exception has occurred: DatatypeMismatch column "occurence_timestamp" is of type timestamp without time zone but expression is of type bigint
|
<p>Here are the core steps and logic of my script below:</p>
<ol>
<li>Create and instantiate a <code>PostgreSQLDB</code> class object that does database operation</li>
<li>Use view <code>vw_valid_case_from_db1</code> to get a list of <code>case_id</code> which will be included</li>
<li><code>df_db1_case_table</code> is a pandas df that extracts information from the table named db1</li>
<li>Filter and include only cases from 2 for the rows in <code>df_db1_case_table</code></li>
<li>I have set up a table named <code>db2_case_table</code> that I want to batch insert the information from the filtered <code>df_db1_case_table</code> (or named <code>df_db1_case_table_filtered</code>). The name of columns from <code>df_db1_case_table_filtered</code> match with the columns from <code>db2_case_table</code></li>
<li>The problem is I am getting this error <code>Exception has occurred: DatatypeMismatch column "occurence_timestamp" is of type timestamp without time zone but expression is of type bigint LINE 1: ...7.0, 2259027.0, NULL, 'CA23307772', NULL, '1441', 1689711600... HINT: You will need to rewrite or cast the expression. File "some_path_to.py", line 170, in <module> cursor.executemany(insert_query, data_to_insert) psycopg2.errors.DatatypeMismatch: column "occurence_timestamp" is of type timestamp without time zone but expression is of type bigint LINE 1: ...7.0, 2259027.0, NULL, 'CA23307772', NULL, '1441', 1689711600... HINT: You will need to rewrite or cast the expression.</code></li>
<li>The section <code>for col in timestamp_columns:</code> of the code below is how I tried to solve this. Namely, if the data type of the (whatever)_timestamp is in 'int64' or 'float64' format, it will be enforced to be in datetime format. However, it doesn't work and the same error appears.</li>
<li>Additional information: if I remove all of the (whatever)_timestamp fields <code>['occurence_timestamp', 'reported_timestamp', 'created_timestamp', 'modified_timestamp', 'agency_extract_timestamp', 'city_extract_timestamp', 'pdf_extract_timestamp' ]</code>, the script works fine.</li>
</ol>
<p>Script below:</p>
<pre><code>import pandas as pd
import os
import logging
from datetime import datetime
from helper_db_operation import PostgreSQLDB
# Set up logging configuration
logging.basicConfig(level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s')
# Some database credential code
postgres_db = PostgreSQLDB(postgres_user, postgres_password, postgres_host, postgres_db_name)
# Query to fetch valid case IDs from db1 view
sql_query_get_valid_case_case_from_db1 = """
SELECT case_id
FROM vw_valid_case_from_db1
"""
# Execute query and load results into a Pandas DataFrame
try:
df_db1_valid_cases = pd.read_sql(sql_query_get_valid_case_case_from_db1, postgres_db.conn)
except Exception as e:
logging.error(f"Error while fetching data from db1 view: {e}")
raise
# 1.2) connect db1_case, then apply the filter from 1.1) to exclude invalid case
# Query to fetch all case from db1 table
sql_query_get_case_table_from_db1 = """
SELECT *
FROM public.db1_case
"""
# Execute query and load results into a Pandas DataFrame
df_db1_case_table = pd.read_sql(sql_query_get_case_table_from_db1, postgres_db.conn)
# Apply the filter to include only those case that are in the valid list from df_db1_valid_cases
valid_case_ids = df_db1_valid_cases['case_id'].tolist()
# Filter df_db1_case_table to include only rows where the ID is in the valid case IDs
df_db1_case_table_filtered = df_db1_case_table[df_db1_case_table['id'].isin(valid_case_ids)]
# Check the result of the filtering
logging.debug(f"Filtered {len(df_db1_case_table_filtered)} valid case.")
target_table = 'db2_case_table'
# Fetch the target table schema to determine the column names dynamically
try:
query_table_schema = f"""
SELECT column_name
FROM information_schema.columns
WHERE table_name = '{target_table}';
"""
df_table_schema = pd.read_sql(query_table_schema, postgres_db.conn)
target_columns = df_table_schema['column_name'].tolist()
except Exception as e:
logging.error(f"Error fetching schema for table {target_table}: {e}")
raise
# Convert epoch timestamps to datetime for `occurence_timestamp` and other timestamp columns
timestamp_columns = [
'occurence_timestamp', 'reported_timestamp', 'created_timestamp', 'modified_timestamp',
'agency_extract_timestamp', 'city_extract_timestamp', 'pdf_extract_timestamp'
]
for col in timestamp_columns:
if col in df_db1_case_table_filtered.columns:
if df_db1_case_table_filtered[col].dtype in ['int64', 'float64']:
df_db1_case_table_filtered[col] = pd.to_datetime(
df_db1_case_table_filtered[col], unit='s', errors='coerce'
)
elif pd.api.types.is_datetime64_any_dtype(df_db1_case_table_filtered[col]):
df_db1_case_table_filtered[col] = df_db1_case_table_filtered[col].dt.tz_localize(None)
# Ensure the correct columns are included in `df_for_insertion`
df_for_insertion = df_db1_case_table_filtered[
[col for col in df_db1_case_table_filtered.columns if col in target_columns]
]
logging.info(df_for_insertion.dtypes)
logging.info(df_for_insertion[['occurence_timestamp']].head())
# Insert the filtered and dynamically mapped data into PostgreSQL
try:
# Convert the DataFrame into a list of tuples
data_to_insert = df_for_insertion.to_records(index=False).tolist()
logging.info(data_to_insert[:5])
# Generate the INSERT query dynamically based on the DataFrame columns
columns = ', '.join(df_for_insertion.columns)
placeholders = ', '.join(['%s'] * len(df_for_insertion.columns))
insert_query = f"INSERT INTO {target_table} ({columns}) VALUES ({placeholders})"
# Execute the batch insert
with postgres_db.conn.cursor() as cursor:
cursor.executemany(insert_query, data_to_insert)
postgres_db.conn.commit()
except Exception as e:
logging.error(f"Error while inserting data into table {target_table}: {e}")
raise
</code></pre>
<p>Additional information: this is the printout of the dataframe's column dtypes:</p>
<pre><code>occurence_timestamp datetime64[ns]
reported_timestamp datetime64[ns]
</code></pre>
<p>This is a sample of the occurence_timestamp values:</p>
<pre><code>occurence_timestamp
0 2023-07-18 20:20:00
1 2023-09-21 17:00:00
2 2023-09-21 15:48:00
3 2023-09-21 21:30:00
4 2023-09-11 08:45:00
</code></pre>
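<p>For reference, here is a minimal sketch of the insertion step I am considering (reusing the names from the script above). The assumption, which I have not fully confirmed, is that <code>to_records(...).tolist()</code> turns <code>datetime64[ns]</code> values into integers (nanoseconds since the epoch), so Postgres sees a bigint instead of a timestamp; converting the timestamp columns to plain Python datetime objects (and NaT to None) before building the records would avoid that:</p>
<pre><code>df_obj = df_for_insertion.copy()

for col in timestamp_columns:
    if col in df_obj.columns:
        # build an object column of datetime.datetime / None so that
        # to_records() keeps the values instead of re-casting them to int64
        df_obj[col] = pd.Series(
            [ts.to_pydatetime() if pd.notna(ts) else None
             for ts in df_obj[col]],
            index=df_obj.index,
            dtype=object,
        )

data_to_insert = df_obj.to_records(index=False).tolist()

with postgres_db.conn.cursor() as cursor:
    cursor.executemany(insert_query, data_to_insert)
postgres_db.conn.commit()
</code></pre>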
|
<python><sql><python-3.x><pandas><postgresql>
|
2024-12-12 23:27:57
| 1
| 4,775
|
KubiK888
|
79,276,761
| 10,934,417
|
How to convert the column with lists into one hot encoded columns?
|
<p>Assume, there is one DataFrame such as following</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'id':range(1,4),
'items':[['A', 'B'], ['A', 'B', 'C'], ['A', 'C']]})
df
id items
1 [A, B]
2 [A, B, C]
3 [A, C]
</code></pre>
<p>Is there an efficient way to convert the above DataFrame into the following (one-hot encoded columns)? Many thanks in advance!</p>
<pre><code> id items A B C
1 [A, B] 1 1 0
2 [A, B, C] 1 1 1
3 [A, C] 1 0 1
</code></pre>
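<p>For reference, a minimal sketch of one vectorised approach I am aware of (explode the lists, one-hot encode the single items, then collapse back per row); there may well be faster options:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'id': range(1, 4),
                   'items': [['A', 'B'], ['A', 'B', 'C'], ['A', 'C']]})

# explode() repeats the index per list element, so groupby(level=0) folds
# the dummies back onto the original rows
dummies = pd.get_dummies(df['items'].explode()).groupby(level=0).max().astype(int)
out = df.join(dummies)
print(out)
#    id      items  A  B  C
# 0   1     [A, B]  1  1  0
# 1   2  [A, B, C]  1  1  1
# 2   3     [A, C]  1  0  1
</code></pre>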
|
<python><pandas><numpy>
|
2024-12-12 22:20:26
| 1
| 641
|
DaCard
|
79,276,690
| 417,678
|
Django + FactoryBoy -> Postgres not converting datetime to utc correctly
|
<p>Okay, this is a weird one. I've got a Django model that I'm writing to Postgres via FactoryBoy for testing.</p>
<pre><code>contract = ContractFactory(
effective_date=datetime.datetime(2023, 1, 1, tzinfo=pytz.timezone('US/Central'))
)
</code></pre>
<p>I'm in US/Central, and I expect the database to store the object in UTC. The datetime in Postgres should be 2023-01-01 06:00:00 UTC since the difference between timezones is 6 hours.</p>
<p>If I fetch the object back and print the date, it's wrong!</p>
<pre><code>2023-01-01 05:51:00+00:00
</code></pre>
<p>If I fetch it from Postgres directly, it's also wrong:</p>
<pre><code>2023-01-01 05:51:00.000000 +00:00
</code></pre>
<p>I checked the raw time in the database and it's correct. Postgres is running in a Docker container, and if I do a <code>select now()</code> the time is just fine in UTC.</p>
<p>Where could this be coming from?</p>
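<p>For reference, a small self-contained sketch of the pytz pitfall I suspect is involved here (the 05:51 is the giveaway): passing a pytz zone via <code>tzinfo=</code> uses the zone's first historical offset (LMT, -05:51 for US/Central) rather than CST, whereas <code>localize()</code> applies the correct offset:</p>
<pre><code>import datetime
import pytz

central = pytz.timezone('US/Central')

# tzinfo= attaches the zone's default offset, LMT (-05:51)
wrong = datetime.datetime(2023, 1, 1, tzinfo=central)
print(wrong.utcoffset())              # -1 day, 18:09:00  (i.e. -05:51)

# localize() picks the correct CST offset (-06:00)
right = central.localize(datetime.datetime(2023, 1, 1))
print(right.astimezone(pytz.utc))     # 2023-01-01 06:00:00+00:00
</code></pre>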
|
<python><django><postgresql><datetime><timezone>
|
2024-12-12 21:44:21
| 1
| 6,469
|
mj_
|
79,276,647
| 5,837,992
|
How do I pass a row to a function using df.apply in Pandas
|
<p>I have a fairly complicated function that I need to run on each row of my dataframe - lambda functions won't work here.
I want to pass multiple columns from the dataframe to the function.</p>
<p>In my initial test, I tried passing the row and naming a single column (note that I will need to use multiple columns for my program), but received <code>KeyError: 'rec_id'</code></p>
<p>What do I need to do to pass a row to the function so I can use the different column values (i.e. <code>row.Name</code> or <code>row.rec_id</code>)?</p>
<p>A simple sample that shows what I am trying to do:</p>
<pre><code>def testapply(row):
print(row)
if row['rec_id']<863238:
return "X"
if row['rec_id']>863238 :
return "Y"
data = {'Name': ['Alice', 'Bob', 'Charlie'],
'rec_id': [2532, 863238, 2212]}
df = pd.DataFrame(data)
df.apply(testapply)
</code></pre>
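<p>For reference, a minimal sketch of what I believe is the missing piece: by default <code>df.apply(func)</code> passes each <em>column</em> (axis=0) to the function, which is why indexing the 'Name' column with <code>'rec_id'</code> raises the KeyError; <code>axis=1</code> passes each row as a Series instead:</p>
<pre><code>import pandas as pd

def testapply(row):
    if row['rec_id'] < 863238:
        return "X"
    if row['rec_id'] > 863238:
        return "Y"

data = {'Name': ['Alice', 'Bob', 'Charlie'],
        'rec_id': [2532, 863238, 2212]}
df = pd.DataFrame(data)

# axis=1 -> testapply receives one row at a time
df['label'] = df.apply(testapply, axis=1)
print(df)
</code></pre>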
|
<python><pandas><apply>
|
2024-12-12 21:24:45
| 0
| 1,980
|
Stumbling Through Data Science
|
79,276,553
| 620,679
|
Sparse matrix in pandas/scipy with row and column indices
|
<p>I have a dataframe in pandas that looks like this:</p>
<pre><code>>>> df[['BranchNumber', 'ModelArticleNumber', 'ActualSellingPrice']].info()
<class 'pandas.core.frame.DataFrame'>
Index: 447970 entries, 0 to 500734
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 BranchNumber 447970 non-null int64
1 ModelArticleNumber 447970 non-null object
2 ActualSellingPrice 447970 non-null float64
dtypes: float64(1), int64(1), object(1)
memory usage: 13.7+ MB
</code></pre>
<p>Given that there are 463 unique branch numbers and 5,235 model article numbers, I'd like to create a 463x5,235 sparse matrix with indices. The following kludge is how far I've gotten:</p>
<pre><code>import numpy as np
import scipy
def index(df):
ix = { v: n for n, v in enumerate(df.unique()) }
return np.array(list(map(lambda value: ix[value], df)))
def csr(df, row_tag, col_tag, value_tag):
m = scipy.sparse.csr_matrix(
(
df[value_tag],
(
index(df[row_tag]),
index(df[col_tag])
)
)
)
return m
</code></pre>
<p>I'd like to be able to do things like <code>m.T.dot(m)</code> efficiently, and retain the ability to see which key is associated with the rows and columns. I've looked through the documentation on pandas indices and arrays, and through the scipy docs on sparse matrices, and I'm just not there. Can somebody point me in the right direction?</p>
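<p>For reference, a minimal sketch of one way I am considering, using <code>pd.Categorical</code> so the row/column labels are kept alongside the matrix (duplicate (row, col) pairs are summed, as in my kludge above); I am not sure it is the idiomatic way:</p>
<pre><code>import pandas as pd
import scipy.sparse as sp

def csr_with_labels(df, row_tag, col_tag, value_tag):
    rows = pd.Categorical(df[row_tag])
    cols = pd.Categorical(df[col_tag])
    m = sp.csr_matrix(
        (df[value_tag].to_numpy(), (rows.codes, cols.codes)),
        shape=(len(rows.categories), len(cols.categories)),
    )
    # rows.categories[i] labels row i, cols.categories[j] labels column j
    return m, rows.categories, cols.categories

# m, branches, articles = csr_with_labels(
#     df, 'BranchNumber', 'ModelArticleNumber', 'ActualSellingPrice')
# gram = m.T.dot(m)   # 5,235 x 5,235, labelled by `articles`
</code></pre>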
|
<python><pandas><scipy><sparse-matrix>
|
2024-12-12 20:46:06
| 1
| 4,041
|
Scott Deerwester
|
79,276,537
| 1,815,486
|
Calling a wrapped static method using self instead of class name passes self as arg
|
<p>This question is related to <a href="https://stackoverflow.com/questions/65461959/calling-a-static-method-with-self-vs-class-name">Calling a static method with self vs. class name</a> but I'm trying to understand the behavior when you wrap a static method so I can fix my wrapper.</p>
<p>For example:</p>
<pre><code>import functools
def wrap(f):
@functools.wraps(f)
def wrapped(*args, **kwargs):
print(f"{f.__name__} called with args: {args}, kwargs: {kwargs}")
return f(*args, **kwargs)
return wrapped
class Test:
@staticmethod
def static():
print("hello")
def method(self):
self.static()
Test.static = wrap(Test.static)
Test().method()
</code></pre>
<p>will produce:</p>
<pre><code>static called with args: (<__main__.Test object at 0x1050b3fd0>,), kwargs: {}
Traceback (most recent call last):
File "/Users/aiguofer/decorator_example.py", line 20, in <module>
Test().meth()
File "/Users/aiguofer/decorator_example.py", line 16, in meth
self.static()
File "/Users/aiguofer/decorator_example.py", line 7, in wrapped
return f(*args, **kwargs)
TypeError: Test.static() takes 0 positional arguments but 1 was given
</code></pre>
<p>However, if I change <code>self.static()</code> -> <code>Test.static()</code>, we get the expected output:</p>
<pre><code>static called with args: (), kwargs: {}
hello
</code></pre>
<p>My use case is that I need to wrap some methods from an external library, including a <code>staticmethod</code> on a class. Within that class, they call the static method from an instance method using <code>self.<method_name></code>, which is causing the above issue in my wrapper. I thought I might be able to deal with this issue with an <code>isinstance(f, staticmethod)</code> check, but that seems to return <code>False</code>.</p>
<p>I'd love to understand what is happening as well as potential solutions to this problem!</p>
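<p>For reference, a minimal sketch of what I have pieced together so far: <code>Test.static</code> already went through the descriptor, so it is a plain function (hence the <code>isinstance</code> returning <code>False</code>), and a plain function assigned to a class binds <code>self</code> when called as <code>self.static()</code>. Re-wrapping in <code>staticmethod</code> seems to restore the expected behaviour:</p>
<pre><code>import inspect

isinstance(Test.static, staticmethod)                              # False (descriptor already ran)
isinstance(inspect.getattr_static(Test, "static"), staticmethod)   # True
isinstance(Test.__dict__["static"], staticmethod)                  # True

# re-wrap so the wrapped function is not bound to the instance
Test.static = staticmethod(wrap(Test.static))
Test().method()
# static called with args: (), kwargs: {}
# hello
</code></pre>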
|
<python><python-decorators>
|
2024-12-12 20:39:31
| 1
| 2,165
|
aiguofer
|
79,276,505
| 657,003
|
Assignment to global variable attribute in for loop is not available outside the scope of the loop
|
<p>This is a simplified version of this SO question (<a href="https://stackoverflow.com/questions/79274789/an-assignment-to-a-networkx-graph-edge-attribute-in-a-for-loop-is-properly-refle">link</a>) with some program output.</p>
<p>Using Python and networkX under macOS, I am not able to assign a value to an edge attribute as follows:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd
import matplotlib.pyplot as plt
from pyvis.network import Network
from io import StringIO
csv_cities= """City,Location,Lon,Lat,x,y,,LonDat,LatDat
Los Angeles,"Los Angeles, California, United States",-118.254190,34.048050,140,328,q,-123.114980,25.761681
New York,"New York, New York, United States",-74.005994,40.712749,1415,591,q,-71.056800,51.045700
Atlanta,"Atlanta, Georgia, United States",-84.389854,33.750800,1116,316,q,-94.971032,39.002842
Chicago,"Chicago, Illinois, United States",-87.632360,41.881954,1022,638,q,52.05818,25.284019 """
csv_connections= """City1,City2,numTracks,NumCarSpots,Colors,SegmentTotalCarSpots
Chicago,Los Angeles,1,4,Blue,4
New York,Chicago,2,2,"Orange, Black",4
Atlanta,Chicago,1,3,Blue,3"""
cities = pd.read_csv(StringIO(csv_cities))
interCityConnections = pd.read_csv(StringIO(csv_connections))
G = nx.Graph()
#Add cities to graph
coords={}
for index, row in cities.iterrows():
print(f"{row['City']:<17} {int(row['x']):<6} {int(row['y']):<6}")
G.add_node(row['City'],city=True, x=row['x'], y=row['y'], physics=False,
label=row['City'], title=f'{row['x']},{row['y']}')
coords[row['City']]=int(row['x']),int(row['y'])
print("- 1 - ")
#Add intercity connections
for index, row in interCityConnections.iterrows():
print(f"{row['City1']:>17} <-{row['NumCarSpots']} {row['Colors']}-> {row['City2']:<17} ")
G.add_edge(row['City1'],row['City2'],interCity=True,numTracks=row['numTracks'])
G[row['City1']][row['City2']]['taxes']="Major" # This works
shortPaths=nx.all_shortest_paths(G,row['City1'], row['City2'])
G[row['City1']][row['City2']]['shortPaths']=shortPaths #some assignment happens here
# but value not usable outside loop?
for p in shortPaths: #This works
print(f"Path:{p}") #This works
print("- 2 - ")
print(G['Atlanta']['Chicago'])
print(f"type:{type(G['Atlanta']['Chicago']['shortPaths'])}")
for p in G['Atlanta']['Chicago']['shortPaths']:
print("For looping...") # This never executes
print(f"Path:{p}") # This never executes
print("Done. There should be a path displayed above this line.")
</code></pre>
<p>This is the output:</p>
<pre><code>q@q-mbp ttr % py simplebug.py
Los Angeles 140 328
New York 1415 591
Atlanta 1116 316
Chicago 1022 638
- 1 -
Chicago <-4 Blue-> Los Angeles
Path:['Chicago', 'Los Angeles']
New York <-2 Orange, Black-> Chicago
Path:['New York', 'Chicago']
Atlanta <-3 Blue-> Chicago
Path:['Atlanta', 'Chicago']
- 2 -
{'interCity': True, 'numTracks': 1, 'taxes': 'Major', 'shortPaths': <generator object _build_paths_from_predecessors at 0x10d107480>}
type:<class 'generator'>
Done. There should be a path displayed above this line.
</code></pre>
<p>So, how can I modify an edge attribute from inside that for loop and have the change persist outside the scope of the loop?</p>
<p>How can I access the result of this assignment:</p>
<pre><code>G[row['City1']][row['City2']]['shortPaths']=shortPaths
</code></pre>
<p>outside the for loop?</p>
<p>This feels like some kind of scoping bug, but I am admittedly quite new to both Python and networkX. I wonder if I should be passing G into the for loop somehow, but I am at a loss.</p>
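<p>For reference, a minimal sketch of the change I believe is needed: <code>nx.all_shortest_paths()</code> returns a generator, and the inner <code>for p in shortPaths</code> loop consumes it, so the object stored on the edge is an already-exhausted generator by the time it is read after the loop. Materialising it as a list first should make the attribute reusable (this is the relevant fragment of the connections loop above):</p>
<pre><code>shortPaths = list(nx.all_shortest_paths(G, row['City1'], row['City2']))  # materialise once
G[row['City1']][row['City2']]['shortPaths'] = shortPaths
for p in shortPaths:      # iterating a list does not consume it
    print(f"Path:{p}")
</code></pre>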
|
<python><networkx>
|
2024-12-12 20:26:49
| 1
| 2,809
|
Perry Horwich
|
79,276,455
| 7,233,155
|
Why doesn't mypy follow config in GitHub Actions?
|
<p>If I run mypy locally it will only type check certain files because the <code>pyproject.toml</code> file has the following inclusion:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.mypy]
files = [
"python/project/file1.py",
"python/project/file2.py"
]
strict = true
</code></pre>
<p>This is in a project with the following structure:</p>
<pre><code>- root
- pyproject.toml
- python
- project
- file1.py
- file2.py
- other_files.py
</code></pre>
<p>My workflow looks roughly like this:</p>
<pre class="lang-yaml prettyprint-override"><code> steps:
- uses: actions/checkout@v4
- name: Set up Rust
uses: actions-rust-lang/setup-rust-toolchain@v1
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install .[dev] -v
- name: Test with pytest and display Coverage
run: |
coverage run -m --source=project pytest
coverage report -m
- name: MyPy type checking
run: |
mypy --config-file pyproject.toml
</code></pre>
<p>But when it runs, it doesn't just type check the specified files; it also type checks all other files in the project and returns thousands of errors. Why?</p>
|
<python><github-actions><mypy>
|
2024-12-12 20:08:38
| 0
| 4,801
|
Attack68
|
79,276,411
| 5,547,553
|
How to resize figure to fit window on event in matplotlib?
|
<p>I've already got help with this problem, but stuck again.<br>
I'm trying to pre-generate and store matplotlib figures in python, and then display them on a keyboard event (left-right cursor keys).<br>
It is working fine, but will not fit the figures (after the first one) to the window.<br>
How can I do that?<br>
If I manually resize the window, then the figure will be adjusted to fit, but it is not working automatically.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
def new_figure(title, data):
fig,ax = plt.subplots(figsize=(12, 8), dpi=80)
plt.plot(data, label=title)
ax.set_xlabel('x-axis')
ax.set_ylabel('value')
plt.legend()
plt.title(title)
plt.close(fig)
return fig
def show_figure(fig):
dummy = plt.figure()
new_manager = dummy.canvas.manager
new_manager.canvas.figure = fig
fig.set_canvas(new_manager.canvas)
def redraw(event, cnt):
event.canvas.figure = figs[cnt]
event.canvas.mpl_connect('key_press_event', keypress)
event.canvas.draw()
def keypress(event):
global cnt
if event.key == 'right':
cnt += 1
cnt %= mx
elif event.key == 'left':
cnt -= 1
if cnt < 0:
cnt = mx-1
redraw(event, cnt)
d = range(0, 360)
data = []
data.append(np.sin(np.radians(d)))
data.append(np.cos(np.radians(d)))
data.append(np.tan(np.radians(d)))
titles = ['sin','cos','tan']
mx = len(data)
figs = []
for i in range(mx):
fig = new_figure(titles[i], data[i])
figs.append(fig)
cnt = 0
show_figure(figs[0])
figs[0].canvas.mpl_connect('key_press_event', keypress)
plt.show()
</code></pre>
|
<python><matplotlib>
|
2024-12-12 19:49:38
| 0
| 1,174
|
lmocsi
|
79,276,400
| 28,063,240
|
How to get the index of a text node in BeautifulSoup?
|
<p>How can I get the source index of a text node in an HTML string?</p>
<p>Tags have <code>sourceline</code> and <code>sourcepos</code> which is useful for this, but <code>NavigableString</code> does not have any directly-helpful properties like that (as far as I can find)</p>
<p>I've thought about using</p>
<pre class="lang-py prettyprint-override"><code>def get_index(text_node: NavigableString) -> int:
return text_node.next_element.sourcepos - len(text_node)
</code></pre>
<p>But this will not work perfectly because the length of the closing tag is unpredictable, e.g.</p>
<pre class="lang-py prettyprint-override"><code>>>> get_index(BeautifulSoup('<p>hello</p><br>', 'html.parser').find(text=True))
7
</code></pre>
<p>Is incorrect, and <code>'<p>hello</p >'</code> is also valid HTML and will produce an even more incorrect result, and I'm not sure how to solve this kind of case using the tools I've found so far in BeautifulSoup.</p>
<p>I would also be interested in an lxml or Python html module answer if they have simple solutions.</p>
<p>Desired results:</p>
<pre><code>>>> get_index(BeautifulSoup('hello', 'html.parser').find(text=True))
0
>>> get_index(BeautifulSoup('<p>hello</p><br>', 'html.parser').find(text=True))
3
>>> get_index(BeautifulSoup('<!-- hi -->hello', 'html.parser').find(text=True))
11
>>> get_index(BeautifulSoup('<p></p ><p >hello<br>there</p>', 'html.parser').find(text=True))
12
>>> get_index(BeautifulSoup('<p></p ><p >hello<br>there</p>', 'html.parser').find_all(string=True)[1])
21
</code></pre>
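<p>Since an answer based on Python's <code>html</code> module would also work for me: here is a rough sketch I am experimenting with, using <code>HTMLParser.getpos()</code> inside <code>handle_data</code> to compute absolute source indices (it reproduces the desired results above on the cases I tried, but I am not sure how robust it is, e.g. around character references):</p>
<pre class="lang-py prettyprint-override"><code>from html.parser import HTMLParser

class TextOffsets(HTMLParser):
    """Record (absolute_index, text) for every text node in the raw HTML."""
    def __init__(self, html):
        super().__init__(convert_charrefs=False)
        # absolute index at which each source line starts
        self.line_starts = [0]
        for i, ch in enumerate(html):
            if ch == "\n":
                self.line_starts.append(i + 1)
        self.offsets = []
        self.feed(html)

    def handle_data(self, data):
        line, col = self.getpos()   # 1-based line, 0-based column of the data start
        self.offsets.append((self.line_starts[line - 1] + col, data))

print(TextOffsets('<p></p ><p >hello<br>there</p>').offsets)
# [(12, 'hello'), (21, 'there')]
</code></pre>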
|
<python><beautifulsoup><lxml>
|
2024-12-12 19:46:42
| 1
| 404
|
Nils
|
79,276,360
| 5,312,606
|
attrs and class variables
|
<p>I really like the way one can define classes via the <code>attrs</code> library,
in particular how the usual syntax of Python class variables is hijacked to define instance variables.
But is there a way to get class variables with <code>attrs</code>?</p>
<p>If I have the following code</p>
<pre class="lang-py prettyprint-override"><code>from typing import ClassVar, Final
from attrs import define
@define
class Person:
name: Final[str]
age: Final[int]
goes_to_school: Final[bool]
my_class_var: ClassVar[str]
p = Person('Bart Simpson', 10, True)
p.my_class_var
</code></pre>
<p>it fails with <code>AttributeError: 'Person' object has no attribute 'my_class_var'</code>.
And indeed, the <a href="https://www.attrs.org/en/stable/api.html" rel="nofollow noreferrer">attrs API reference</a> says</p>
<blockquote>
<p>Attributes annotated as typing.ClassVar, and attributes that are neither annotated nor set to an field() are ignored.</p>
</blockquote>
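<p>For reference, a minimal sketch of what I understand that sentence to mean: because attrs ignores <code>ClassVar</code>-annotated attributes, giving the attribute a value should leave it as an ordinary class attribute rather than turning it into a field (this is my reading, not something I have confirmed beyond a quick test):</p>
<pre class="lang-py prettyprint-override"><code>from typing import ClassVar, Final
from attrs import define

@define
class Person:
    my_class_var: ClassVar[str] = "shared by all Persons"   # ignored by attrs
    name: Final[str]
    age: Final[int]
    goes_to_school: Final[bool]

p = Person('Bart Simpson', 10, True)
print(Person.my_class_var)   # shared by all Persons
print(p.my_class_var)        # shared by all Persons (normal attribute lookup)
</code></pre>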
|
<python><class-variables><python-attrs>
|
2024-12-12 19:30:43
| 1
| 1,897
|
mcocdawc
|
79,276,283
| 2,475,195
|
How to reshape image tensor in pytorch
|
<p>I am playing with the MNIST dataset, where each image is represented as a grayscale row of length 784, and I want to reformat it as 28x28:</p>
<pre><code>DIM = int(len(features)**0.5)
X_train_tensor = torch.tensor(X_train_aug[features].values).float()
X_train_tensor = X_train_tensor.view(len(X_train_tensor), 1, DIM, DIM) # Batch Size, Channels, Height, Width
</code></pre>
<p>Which produces the following error when I feed it to the model:</p>
<pre><code>ValueError: Expected input batch_size (1) to match target batch_size (42000).
</code></pre>
<p>I know I have 42,000 rows in the data set, and I am trying to feed them as a single batch. But somehow it seems the model thinks the entire tensor is a batch of size 1? Probably my <code>.view</code> above is wrong, but how do I fix it?</p>
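<p>For reference, a small diagnostic sketch of what I plan to check (here <code>y_train_tensor</code> is a hypothetical name for the label tensor; the model and feature names are from the snippet above). My understanding is that this error message refers to the tensor that reaches the loss function, so printing shapes at each step should show where the batch dimension collapses to 1:</p>
<pre><code>DIM = int(len(features) ** 0.5)                        # 28 for MNIST
X_train_tensor = torch.tensor(X_train_aug[features].values).float()
X_train_tensor = X_train_tensor.view(-1, 1, DIM, DIM)  # -1 infers the batch size
print(X_train_tensor.shape)   # expected: torch.Size([42000, 1, 28, 28])
print(y_train_tensor.shape)   # expected: torch.Size([42000])

out = model(X_train_tensor)
print(out.shape)              # expected: torch.Size([42000, num_classes]) before the loss
</code></pre>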
|
<python><deep-learning><pytorch><neural-network><conv-neural-network>
|
2024-12-12 18:58:49
| 0
| 4,355
|
Baron Yugovich
|
79,276,230
| 3,125,823
|
Callable default on unique field will not generate unique values upon migrating
|
<p>Using Django/DRF to create a CRUD api</p>
<p>I'm trying to use the RandomUUID to create a UUID from Postgres:</p>
<pre><code>from django.db import models
from django.contrib.postgres.functions import RandomUUID
class Year(models.Model):
year_id = models.UUIDField(
primary_key=True, default=RandomUUID, editable=False)
</code></pre>
<p>When I run python manage.py makemigrations</p>
<p>It gives me this error:</p>
<pre><code>Callable default on unique field year.year_id will not generate unique values upon migrating
</code></pre>
<p>What am I doing wrong?</p>
<p>I want Django ORM to tell Postgres to create the UUID. I don't want to use Python's uuid module.</p>
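<p>For reference, a minimal sketch of what I am considering instead, assuming Django 5.0 or newer where <code>db_default</code> was added; as I understand it, this pushes the default into the column definition so Postgres generates the UUID itself:</p>
<pre><code>from django.db import models
from django.contrib.postgres.functions import RandomUUID


class Year(models.Model):
    # db_default takes a database expression, so the DEFAULT lives in Postgres
    year_id = models.UUIDField(
        primary_key=True, db_default=RandomUUID(), editable=False)
</code></pre>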
|
<python><django><postgresql><django-models>
|
2024-12-12 18:38:05
| 1
| 1,958
|
user3125823
|
79,276,175
| 11,609,834
|
Is there a way to require Pytest to reload imports in a test file?
|
<p>TL;DR: I would like Pytest to reload imports in certain files when it runs those tests. Some of the imports in my code base have side effects that I need to control at test time. The behavior would basically be like <code>importlib.reload</code> for any module in the test file. Is this possible?</p>
<p>Longer explanation:</p>
<p>I have a class that works as a registry for certain kinds of functions:</p>
<pre class="lang-py prettyprint-override"><code># content of registry.py
class Registry:
_registry = []
@classmethod
def register_function(cls, func):
# decorator; registers a function in the registry.
cls._registry.append(func)
</code></pre>
<p>and I have a several files with registered functions:</p>
<pre class="lang-py prettyprint-override"><code># content of registered_func.py
@Registry.register_function
def registered_function(...):
...
</code></pre>
<p>I need to write tests that assure that the registered functions do what's expected when pulled from the registry. However, the way the registry is accessed may return registered functions besides the function of interest. Note the behavior: <code>registered_function</code> is registered on import of that file.</p>
<p>Running a suite of tests is creating issues; what functions are registered depends on the order of tests run. I have tried monkey patching:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from registry import Registry as _Registry
@pytest.fixture
def Registry(monkeypatch):
    monkeypatch.setattr(_Registry, '_registry', [])
return _Registry
def test_registered_function_registry(Registry):
import registered_function
# do tests
    assert registered_func.registered_function in Registry._registry
</code></pre>
<p>However, if registered function has been previously imported during any test, pytest won't
re-import in <code>test_registered_function_registry</code>.</p>
<p>Ideally, the syntax in <code>test_registered_function_registry</code> works. I don't want to couple tests to the modules that do the registering--any import inside a function would re-execute the code in the imported file and populate the registry accordingly. This is ideal because it's exactly how the registry would be populated in our code base.</p>
<p>A workaround is to explicitly <code>reload</code> the parent module with <code>importlib</code>, but it feels awkward to me. The behavior here is associated with Pytest--what happens when running a test depends on the state that Pytest creates, not the state that the test file creates. Is there a way to control this behavior of pytest?</p>
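<p>For reference, a minimal sketch of one alternative I am considering: dropping the module from <code>sys.modules</code> inside the fixture, so the plain <code>import registered_func</code> in the test genuinely re-executes the module (and monkeypatch restores <code>sys.modules</code> afterwards). I am not sure whether this is considered good practice:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import pytest

import registry


@pytest.fixture
def Registry(monkeypatch):
    # fresh registry, and forget the previous import so the next one re-registers
    monkeypatch.setattr(registry.Registry, '_registry', [])
    monkeypatch.delitem(sys.modules, 'registered_func', raising=False)
    return registry.Registry


def test_registered_function_registry(Registry):
    import registered_func   # re-executed, because it is no longer in sys.modules
    assert registered_func.registered_function in Registry._registry
</code></pre>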
|
<python><pytest>
|
2024-12-12 18:18:16
| 0
| 1,013
|
philosofool
|
79,276,109
| 412,137
|
Poetry install failing with SSLError: Max retries exceeded on GitHub HTTPSConnectionPool
|
<p>I am encountering an error when running the poetry install command in my Python project. The error message is as follows:</p>
<pre><code>HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: ... (Caused by SSLError(FileNotFoundError(2, 'No such file or directory')))
</code></pre>
<p>I have tried the following troubleshooting steps, but none have resolved the issue:</p>
<ol>
<li>Verified my internet connection.</li>
<li>Checked that I have the latest version of Poetry installed.</li>
<li>Ensured my Git and OpenSSL installations are up-to-date.</li>
<li>Confirmed that the repository URL is correct and accessible from a browser.</li>
</ol>
<p>Despite all this, the error persists. It seems related to SSL or Git configuration, but I am unsure how to resolve it.</p>
<pre><code>Environment Details:
OS: 14.7.1 (23H222)
Python version: 3.11.9
Poetry version: 1.8.5
Git version: 2.45.2
</code></pre>
<p>Has anyone encountered this issue before or can provide guidance on how to resolve it?</p>
|
<python><git><ssl><dependency-management><python-poetry>
|
2024-12-12 17:53:47
| 2
| 2,767
|
Nadav
|
79,275,888
| 1,987,477
|
Three plots sharing x-axis on top of each other as in figure
|
<p>I'm struggling with <code>matplotlib</code>.</p>
<p>I'd like to get a plot like <a href="https://imgur.com/a/FfLKQi3" rel="nofollow noreferrer">this one</a> but with the code below I can only get <a href="https://imgur.com/a/utdr9vs" rel="nofollow noreferrer">this plot</a>.</p>
<pre><code>f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True, sharey=False)
ax1 = plt.gca()
ax1.yaxis.set_inverted(True)
ax1.plot(bp_phase, bp_mag, 'o', markersize=7, markerfacecolor="#BFCFFF",
markeredgewidth=1, markeredgecolor='black')
ax1.plot(bp_phase_snd_cycle, bp_mag, 'o', markersize=7, markerfacecolor="#BFCFFF",
markeredgewidth=1, markeredgecolor='black')
ax2.yaxis.set_inverted(True)
ax2.plot(g_phase, g_mag, 'o', markersize=7, markerfacecolor="#99FF99",
markeredgewidth=1, markeredgecolor='black')
ax2.plot(g_phase_snd_cycle, g_mag, 'o', markersize=7, markerfacecolor="#99FF99",
markeredgewidth=1, markeredgecolor='black')
ax3.yaxis.set_inverted(True)
ax3.plot(rp_phase, rp_mag, 'o', markersize=7, markerfacecolor="#ffa500",
markeredgewidth=1, markeredgecolor='black')
ax3.plot(bp_phase_snd_cycle, rp_mag, 'o', markersize=7, markerfacecolor="#ffa500",
markeredgewidth=1, markeredgecolor='black')
plt.xticks(fontsize=14, rotation=90)
plt.yticks(fontsize=14)
f.subplots_adjust(hspace=0)
plt.setp([a.get_xticklabels() for a in f.axes[:-1]], visible=False)
</code></pre>
<p>I can't figure out why there is an empty slot while two functions got plotted on the same one.
NB: I can't provide the plotting data; maybe some <code>matplotlib</code> guru can spot my mistake anyway.</p>
<p>Thank you</p>
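<p>For reference, a self-contained sketch (with dummy data, since I cannot share mine) of what I suspect is going on: <code>plt.gca()</code> returns the <em>last</em> axes created by <code>plt.subplots</code>, so <code>ax1 = plt.gca()</code> makes <code>ax1</code> an alias of <code>ax3</code>, which would explain the empty top panel and the doubled bottom panel:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2, 60)
f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True, sharey=False)
# note: no "ax1 = plt.gca()" here -- gca() would return ax3,
# sending ax1's data to the bottom panel and leaving the top one empty
for ax, y in zip((ax1, ax2, ax3), (np.sin(np.pi * x), np.cos(np.pi * x), x**2)):
    ax.yaxis.set_inverted(True)
    ax.plot(x, y, 'o', markersize=7, markeredgewidth=1, markeredgecolor='black')
f.subplots_adjust(hspace=0)
plt.setp([a.get_xticklabels() for a in f.axes[:-1]], visible=False)
plt.show()
</code></pre>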
|
<python><matplotlib>
|
2024-12-12 16:35:07
| 0
| 1,325
|
user123892
|
79,275,886
| 2,537,394
|
Speed up numpy looking for best indices
|
<p>I have a numpy array that maps x-y-coordinates to the appropriate z-coordinates. For this I use a 2D array that represents x and y as its axes and contains the corresponding z values:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
x_size = 2000
y_size = 2500
z_size = 400
rng = np.random.default_rng(123)
z_coordinates = np.linspace(0, z_size, y_size) + rng.laplace(0, 1, (x_size, y_size))
</code></pre>
<p>So each of the 2000*2500 x-y-points is assigned a z-value (float between 0 and 400). Now I want to look up for each integer z and integer x which is the closest y-value, essentially creating a map that is of shape <code>(x_size, z_size)</code> and holds the best y-values.</p>
<p>The simplest approach is creating an empty array of target shape and iterating over each z value:</p>
<pre class="lang-py prettyprint-override"><code>y_coordinates = np.empty((x_size, z_size), dtype=np.uint16)
for i in range(z_size):
y_coordinates[:, i] = np.argmin(
np.abs(z_coordinates - i),
axis=1,
)
</code></pre>
<p>However, this takes about 11 s on my machine, which unfortunately is way too slow.</p>
<p>Surely using a more vectorised approach would be faster, such as:</p>
<pre class="lang-py prettyprint-override"><code>y_coordinates = np.argmin(
np.abs(
z_coordinates[..., np.newaxis] - np.arange(z_size)
),
axis=1,
)
</code></pre>
<p>Surprisingly this runs about 60% slower than the version above (tested at 1/10th size, since at full size this uses excessive memory).</p>
<p>Also wrapping the code blocks in functions and decorating them with numba's <code>@jit(nopython=True)</code> doesn't help.</p>
<p><strong>How can I speed up the calculation?</strong></p>
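<p>For reference, a sketch of one direction I am exploring: decorating the numpy-style code with numba did not help, presumably because it still builds the large temporaries, but an explicit loop that numba can parallelise avoids the allocations entirely. I have not benchmarked this at full size, so I am not sure how much it gains:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba import njit, prange

@njit(parallel=True, cache=True)
def nearest_y(z_coordinates, z_size):
    x_size, y_size = z_coordinates.shape
    out = np.empty((x_size, z_size), dtype=np.uint16)
    for x in prange(x_size):              # parallel over x
        row = z_coordinates[x]
        for z in range(z_size):
            best_j = 0
            best_d = abs(row[0] - z)
            for j in range(1, y_size):    # linear scan, no temporaries
                d = abs(row[j] - z)
                if d < best_d:
                    best_d = d
                    best_j = j
            out[x, z] = best_j
    return out

y_coordinates = nearest_y(z_coordinates, z_size)
</code></pre>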
|
<python><arrays><numpy><coordinates><coordinate-transformation>
|
2024-12-12 16:34:55
| 4
| 731
|
YPOC
|
79,275,864
| 2,626,865
|
Approaches and problems with massively concurrent processing using concurrent.futures and/or asyncio
|
<p>I want to run billions of IO-bound tasks across a pool of thousands of threads. These are my challenges:</p>
<ul>
<li>Reduce memory usage. The thread pool from <code>concurrent.futures</code> uses an unbound queue. Too many submissions and memory usage balloons but too few and not enough work is accomplished.</li>
<li>Collect results. Neither the pool nor its context manager store completed futures, so collecting results isn't necessary to reduce memory usage. I would like the option of collecting results if at some future time I need them.</li>
<li>Handle exceptions. I would like python exceptions in any of the tasks to be propagated to the main thread.</li>
<li>Incorporate asyncio via <code>loop.run_in_executor()</code></li>
</ul>
<p>Each future from <code>concurrent.futures</code> can have an <code>add_done_callback</code> that collects the result and handles any exceptions. Exceptions raised within these callbacks are ignored, but since callbacks run from the main thread, exceptions can be stored as a global variable and re-raised from the submission loop. This doesn't help in reducing memory usage, but I haven't figured out how to do that even through asyncio (see below).</p>
<p>To incorporate asyncio I create a consumer task and a producer task. The producer submits callables to the loop's executor in a synchronous loop, placing the resulting futures onto an unbound queue.</p>
<p>The consumer maintains a set of pending futures, awaiting the first result from either that set or the queue. It awaits the result of <code>asyncio.wait()</code> over both the queue and a second <code>asyncio.wait()</code> coroutine over the set of pending futures. If the result comes from the queue, it adds it to the set and resumes waiting. If the result is a completed future, it processes the results and handles exceptions before resuming the wait. The task is complete when the queue shuts down.</p>
<p>Am I making this overly convoluted? Is there a simpler approach?</p>
<p>I haven't figured out a neat way to reduce memory usage. The best I can come up with is to check the thread pool's <code>_work_queue.qsize()</code>, and if it exceeds a threshold, to simply <code>await asyncio.sleep(<small number>)</code>. Surely there must be a better way?</p>
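<p>For reference, a minimal sketch of the pattern I have been comparing against, without asyncio: a semaphore acquired before <code>submit()</code> and released in the done-callback caps how many futures exist at once, results are collected optionally, and the first exception is re-raised in the main thread after the pool drains. My understanding is that the same idea ports to the asyncio version by using <code>asyncio.Semaphore</code> around <code>loop.run_in_executor()</code>:</p>
<pre><code>import threading
from concurrent.futures import ThreadPoolExecutor

def run_all(callables, max_workers=1000, max_in_flight=10_000):
    sem = threading.BoundedSemaphore(max_in_flight)   # caps the executor's internal queue
    errors, results = [], []

    def done(fut):
        sem.release()
        exc = fut.exception()
        if exc is not None:
            errors.append(exc)        # re-raised in the main thread below
        else:
            results.append(fut.result())

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for fn in callables:
            if errors:
                break                 # stop submitting after the first failure
            sem.acquire()             # blocks the submission loop when too much is in flight
            pool.submit(fn).add_done_callback(done)
    if errors:
        raise errors[0]
    return results
</code></pre>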
|
<python><memory-management><concurrency><python-asyncio><concurrent.futures>
|
2024-12-12 16:30:08
| 0
| 2,131
|
user19087
|
79,275,860
| 3,486,684
|
Joining two dataframes that share "index columns" (id columns), but not data columns, so that the resulting dataframe has a full spine of ids?
|
<p>I find myself doing this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import sys
red_data = pl.DataFrame(
[
pl.Series("id", [0, 1, 2], dtype=pl.UInt8()),
pl.Series("red_data", [1, 0, 1], dtype=pl.UInt8()),
]
)
blue_data = pl.DataFrame(
[
pl.Series("id", [0, 2, 3], dtype=pl.UInt8()),
pl.Series("blue_data", [0, 1, 1], dtype=pl.UInt8()),
]
)
# in both red and blue
red_and_blue = red_data.join(blue_data, on=["id"])
# in red, but not blue
red_not_blue = red_data.join(blue_data, on=["id"], how="anti").with_columns(
blue_data=pl.lit(None, dtype=pl.UInt8())
)
# in blue, but not red
blue_not_red = blue_data.join(red_data, on=["id"], how="anti").with_columns(
red_data=pl.lit(None, dtype=pl.UInt8())
)
columns = ["id", "red_data", "blue_data"]
sys.displayhook(
pl.concat(
[
red_and_blue.select(columns),
red_not_blue.select(columns),
blue_not_red.select(columns),
]
)
)
</code></pre>
<pre><code>shape: (4, 3)
┌─────┬──────────┬───────────┐
│ id ┆ red_data ┆ blue_data │
│ --- ┆ --- ┆ --- │
│ u8 ┆ u8 ┆ u8 │
╞═════╪══════════╪═══════════╡
│ 0 ┆ 1 ┆ 0 │
│ 2 ┆ 1 ┆ 1 │
│ 1 ┆ 0 ┆ null │
│ 3 ┆ null ┆ 1 │
└─────┴──────────┴───────────┘
</code></pre>
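<p>For reference, a one-step sketch that seems to give the same full spine of ids; the exact spelling depends on the polars version (recent releases use <code>how="full"</code> with <code>coalesce=True</code>, older ones called it <code>how="outer"</code> / <code>"outer_coalesce"</code>):</p>
<pre class="lang-py prettyprint-override"><code># same red_data / blue_data frames as above
out = red_data.join(blue_data, on="id", how="full", coalesce=True).sort("id")
sys.displayhook(out)
</code></pre>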
|
<python><python-polars>
|
2024-12-12 16:28:34
| 1
| 4,654
|
bzm3r
|
79,275,745
| 3,765,883
|
Odd boolean expression
|
<p>I'm trying to debug (rewrite?) someone else's Python/cherrypy web app, and I ran across the following 'if' statement:</p>
<pre><code>if not filename.endswith(".dat") and (
filename.endswith(".dat") or not filename.endswith(".cup")
):
raise RuntimeError(
"Waypoint file {} has an unsupported format.".format(
waypoint_file.filename
)
)
</code></pre>
<p>I think this is the same as:</p>
<pre><code>if not A and (A or not B):
</code></pre>
<p>If so, then:</p>
<ul>
<li><p>if <code>A = False</code>, then it reduces to <code>if True and (False or not B):</code></p>
<ul>
<li><code>if True and not B</code> = <code>not B</code></li>
</ul>
</li>
<li><p>if <code>A = True</code>, then it reduces to <code>if False:</code> i.e. the <code>if</code> block will never execute</p>
</li>
</ul>
<p>I'm pretty sure that the intent of the <code>if</code> block is to warn the user that the extension of the file in question is neither <code>.DAT</code> nor <code>.CUP</code>, but it doesn't look to me like it actually implements that intent.</p>
<p>I think the <code>if</code> block should be:</p>
<pre><code>if(not .DAT and not .CUP) = if not(.DAT or .CUP)
</code></pre>
<p>Is that correct?</p>
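<p>For reference, this is the rewrite I have in mind, assuming my truth-table reduction above is right (<code>not A and (A or not B)</code> simplifies to <code>not A and not B</code>, i.e. <code>not (A or B)</code>):</p>
<pre><code># not A and (A or not B)  ==  not A and not B  ==  not (A or B)
if not (filename.endswith(".dat") or filename.endswith(".cup")):
    ...

# or, since str.endswith accepts a tuple of suffixes:
if not filename.endswith((".dat", ".cup")):
    ...
</code></pre>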
|
<python><boolean-expression>
|
2024-12-12 15:49:09
| 2
| 327
|
user3765883
|
79,275,700
| 19,356,117
|
Why there is 'Unpickling Error' when using polars to read data for pytorch?
|
<p>I have recently changed my data tool from xarray to polars, and I use <code>pl.DataFrame.to_torch()</code> to generate tensors for training my PyTorch model. The data source is in Parquet format.</p>
<p>To avoid forking child processes, I use <code>torch.multiprocessing.spawn</code> to start my training process; however, the process crashed with this:</p>
<pre><code>/home/username/.conda/envs/torchhydro1/bin/python3.11 -X pycache_prefix=/home/username/.cache/JetBrains/IntelliJIdea2024.3/cpython-cache /home/username/.local/share/JetBrains/IntelliJIdea2024.3/python-ce/helpers/pydev/pydevd.py --multiprocess --qt-support=auto --port 29781 --file /home/username/torchhydro/experiments/train_with_era5land_gnn_ddp.py
Console output is saving to: /home/username/torchhydro/experiments/results/train_gnn_ddp.txt
[20:38:51] DEBUG No module named 'forge' signatures.py:43
DEBUG No module named 'forge' signatures.py:43
[20:38:52] DEBUG Using selector: EpollSelector selector_events.py:54
……
DEBUG Using fontManager instance from font_manager.py:1580
/home/username/.cache/matplotlib/fontl
ist-v390.json
update config file
!!!!!!NOTE!!!!!!!!
-------Please make sure the PRECIPITATION variable is in the 1st location in var_t setting!!---------
If you have POTENTIAL_EVAPOTRANSPIRATION, please set it the 2nd!!!-
!!!!!!NOTE!!!!!!!!
-------Please make sure the STREAMFLOW variable is in the 1st location in var_out setting!!---------
[20:39:04] DEBUG No module named 'forge' signatures.py:43
DEBUG No module named 'forge' signatures.py:43
[20:39:06] DEBUG Using selector: EpollSelector selector_events.py:54
……
DEBUG Using fontManager instance from font_manager.py:1580
/home/username/.cache/matplotlib/fontl
ist-v390.json
……
Torch is using cuda:0
[2024-12-12 20:48:08,931] torch.distributed.distributed_c10d: [INFO] Using backend config: {'cuda': 'nccl'}
[W CUDAAllocatorConfig.h:30] Warning: expandable_segments not supported on this platform (function operator())
using 8 workers
Pin memory set to True
0%| | 0/22986 [00:00<?, ?it/s]
[20:48:40] DEBUG No module named 'forge' signatures.py:43
DEBUG No module named 'forge' signatures.py:43
[20:48:41] DEBUG Using selector: EpollSelector selector_events.py:54
……
DEBUG Using fontManager instance from font_manager.py:1580
/home/username/.cache/matplotlib/fontl
ist-v390.json
[20:49:28] DEBUG No module named 'forge' signatures.py:43
DEBUG No module named 'forge' signatures.py:43
[20:49:29] DEBUG Using selector: EpollSelector selector_events.py:54
……
DEBUG Using fontManager instance from font_manager.py:1580
/home/username/.cache/matplotlib/fontl
ist-v390.json
[20:50:19] DEBUG No module named 'forge' signatures.py:43
DEBUG No module named 'forge' signatures.py:43
[20:50:20] DEBUG Using selector: EpollSelector selector_events.py:54
……
DEBUG Using fontManager instance from font_manager.py:1580
/home/username/.cache/matplotlib/fontl
ist-v390.json
[20:51:12] DEBUG No module named 'forge' signatures.py:43
DEBUG No module named 'forge' signatures.py:43
[20:51:13] DEBUG Using selector: EpollSelector selector_events.py:54
……
DEBUG Using fontManager instance from font_manager.py:1580
/home/username/.cache/matplotlib/fontl
ist-v390.json
[20:52:07] DEBUG No module named 'forge' signatures.py:43
DEBUG No module named 'forge' signatures.py:43
[20:52:09] DEBUG Using selector: EpollSelector selector_events.py:54
……
[20:52:13] DEBUG CACHEDIR=/home/username/.cache/matplotlib __init__.py:341
DEBUG Using fontManager instance from font_manager.py:1580
/home/username/.cache/matplotlib/fontl
ist-v390.json
[20:53:11] DEBUG No module named 'forge' signatures.py:43
DEBUG No module named 'forge' signatures.py:43
[20:53:12] DEBUG Using selector: EpollSelector selector_events.py:54
……
DEBUG Using fontManager instance from font_manager.py:1580
/home/username/.cache/matplotlib/fontl
ist-v390.json
[20:55:12] DEBUG No module named 'forge' signatures.py:43
DEBUG No module named 'forge' signatures.py:43
[20:55:14] DEBUG Using selector: EpollSelector selector_events.py:54
……
[20:55:19] DEBUG CACHEDIR=/home/username/.cache/matplotlib __init__.py:341
DEBUG Using fontManager instance from font_manager.py:1580
/home/username/.cache/matplotlib/fontl
ist-v390.json
Traceback (most recent call last):
File "/home/username/.local/share/JetBrains/IntelliJIdea2024.3/python-ce/helpers/pydev/pydevd.py", line 1570, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/.local/share/JetBrains/IntelliJIdea2024.3/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/username/torchhydro/experiments/train_with_era5land_gnn_ddp.py", line 171, in <module>
test_run_model()
File "/home/username/torchhydro/experiments/train_with_era5land_gnn_ddp.py", line 56, in test_run_model
mp.spawn(gnn_train_worker, args=(world_size, config_data, None), nprocs=world_size, join=True)
File "/home/username/.conda/envs/torchhydro1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py", line 241, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method="spawn")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/.conda/envs/torchhydro1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
while not context.join():
^^^^^^^^^^^^^^
File "/home/username/.conda/envs/torchhydro1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py", line 140, in join
raise ProcessExitedException(
torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with signal SIGKILL
python-BaseException
Traceback (most recent call last):
File "/home/username/.conda/envs/torchhydro1/lib/python3.11/multiprocessing/spawn.py", line 132, in _main
self = reduction.pickle.load(from_parent)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_pickle.UnpicklingError: pickle data was truncated
python-BaseException
/home/username/.conda/envs/torchhydro1/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 30 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
</code></pre>
<p>Now I have two problems:</p>
<p>First, why does <code>_pickle.UnpicklingError</code> appear?</p>
<p>Second, after <code>0%|          | 0/22986 [00:00<?, ?it/s]</code> is printed, there are 7 <code>……</code>s in my process log, which means this DEBUG block was repeated 8 or 9 times! I have set <code>num_workers</code> of the PyTorch <code>DataLoader</code> to 8; is this problem connected with <code>num_workers</code>?</p>
<p>This problem started occurring after I switched to polars, so I think the problem comes from polars, or from some interaction between threads in polars and PyTorch.</p>
<p>How can I find out why the <code>UnpicklingError</code> occurs and solve it? Hoping for your reply.</p>
|
<python><python-3.x><pytorch><multiprocessing><python-polars>
|
2024-12-12 15:34:00
| 1
| 1,115
|
forestbat
|
79,275,645
| 13,440,165
|
Can I create a multidimensional array containing a unit matrix without nested loops?
|
<p>Suppose I have a NumPy array with <code>n</code> indices, where the first <code>n-2</code> represent some counting indices and the last 2 indices represent a square <code>MxM</code> matrix. I want to initialize this structure so it contains copies of the unit matrix.</p>
<p>Example (here N=3, M=2):</p>
<pre><code>A = numpy.zeros((3,2,2))
for k in range(3):
A[k,:,:] = numpy.eye(2)
</code></pre>
<p>Another Example (here N=4, M=5):</p>
<pre><code>B = numpy.zeros((3,4,5,5))
for k1 in range(3):
for k2 in range(4):
B[k1,k2,:,:] = numpy.eye(5)
</code></pre>
<p>Is there a way to do this without relying on nested loops?</p>
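<p>For reference, two loop-free sketches I have come across (shown for the second example's shape); I am not sure which one is considered more idiomatic:</p>
<pre><code>import numpy as np

# broadcast a single eye(5) over the leading axes; .copy() makes it writable
B = np.broadcast_to(np.eye(5), (3, 4, 5, 5)).copy()

# alternative: write the diagonal directly with fancy indexing on the last two axes
C = np.zeros((3, 4, 5, 5))
C[..., np.arange(5), np.arange(5)] = 1

assert (B == C).all()
</code></pre>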
|
<python><numpy><numpy-ndarray>
|
2024-12-12 15:18:02
| 3
| 883
|
Triceratops
|
79,275,524
| 695,134
|
Invalid escape sequences in Python 3.12 in a Docker image
|
<p>I have the following code:</p>
<pre><code>x = re.match("^(\d+\.\d+\.\d+\.\d+)$", url)
</code></pre>
<p>It all works fine and runs under a Docker instance with python 3.11 installed.</p>
<p>The image has been upgraded to version 3.12 (it's a public GitLab repo) and now, whenever we put an escape code in a string such as the line above, we get warnings like the following (it doesn't seem to matter which escape, even <code>\n</code>):</p>
<pre><code>Stderr: <string>:12621: SyntaxWarning: invalid escape sequence '\d'
<string>:12690: SyntaxWarning: invalid escape sequence '\d'
<string>:12699: SyntaxWarning: invalid escape sequence '\/'
<string>:12700: SyntaxWarning: invalid escape sequence '\/'
</code></pre>
<p>Without access to the Docker instance, I presume there must be an issue with the Docker image and not something that has changed in Python. Where might the fault lie: something I'm doing wrong, a change in Python 3.12+, or just a bad image build?</p>
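<p>For reference, my current understanding is that Python 3.12 upgraded invalid escape sequences (like <code>\d</code> in a normal string) from a DeprecationWarning to a SyntaxWarning that is shown by default, so the image itself may be fine; a raw string would be the sketch of a fix:</p>
<pre><code>import re

url = "10.0.0.1"
# raw string: the backslashes go to the regex engine instead of being string escapes
x = re.match(r"^(\d+\.\d+\.\d+\.\d+)$", url)
print(x.group(1))   # 10.0.0.1
</code></pre>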
|
<python><python-3.x>
|
2024-12-12 14:43:37
| 0
| 6,898
|
Neil Walker
|
79,275,501
| 4,701,426
|
Manipulation of a Pandas dataframe most time- and memory-efficiently
|
<p>Please imagine I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame(index=pd.Index(['1', '1', '2', '2'], name='from'), columns=['to'], data= ['2', '2', '4', '5'])
</code></pre>
<p>df:</p>
<p><a href="https://i.sstatic.net/UDavHvKE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDavHvKE.png" alt="enter image description here" /></a></p>
<p>Now, I would like to calculate a matrix comprising of the percentage of times each value in the index "from" transitions to each value in column 'to', which is known as a transition matrix. I can achieve this by creating an empty transition matrix first and then populating it with the percentages using a for loop:</p>
<pre><code>#Create an empty matrix to populate later (using sparse dtype to save memory):
matrix = pd.DataFrame(index=df.index.unique(), columns=df.to.unique(), data=0, dtype=pd.SparseDtype(dtype=np.float16, fill_value=0))
</code></pre>
<p>matrix:</p>
<p><a href="https://i.sstatic.net/lQYcXBb9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lQYcXBb9.png" alt="enter image description here" /></a></p>
<pre><code>for i in range(len(df)):
from_, to = df.index[i], df.to.iloc[i]
matrix[to] = matrix[to].sparse.to_dense() # Convert to dense format because sparse dtype does not allow value assignment with .loc in the next line:
matrix.loc[from_, to] += 1 # Do a normal insertion with .loc[]
matrix[to] = matrix[to].astype(pd.SparseDtype(dtype=np.float16, fill_value=0)) # Back to the original sparse format
matrix = (matrix.div(matrix.sum(axis=1), axis=0)*100) # converting counts to percentages
</code></pre>
<p>matrix:</p>
<p><a href="https://i.sstatic.net/19v4B5H3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19v4B5H3.png" alt="enter image description here" /></a></p>
<p>This works. For example, index "1" only transitioned to "2" (100% of the time) and index "2" transitioned to "4" 50% of the time and to "5" the other 50% of the time, as can be verified in <code>df.</code></p>
<p><strong>Issue:</strong> The actual matrix is about 500K by 500K and the for loop takes a really long time to finish. So, is there a vectorized or otherwise efficient way of calculating <code>matrix</code> from <code>df</code>?</p>
<p>Note: I would get MemoryError without using the whole Sparse dtype thing even with <code>dtype=float16</code> in <code>pd.DataFrame()</code> so I prefer to keep that if possible. It would be great if the 500K by 500K matrix will not take up more than 10-12Gb of RAM. Also, if it matters, these percentages will always have a 0-100 range, obviously.</p>
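<p>For reference, a sketch of the direction I am currently considering: building the counts directly as a scipy sparse matrix from categorical codes (duplicate (from, to) pairs are summed, which gives the counts) and row-normalising without ever densifying. I am not sure whether this is the best approach for the 500K x 500K case:</p>
<pre><code>import numpy as np
import pandas as pd
from scipy import sparse

s = df.reset_index()                                   # columns: 'from', 'to'
rows = pd.Categorical(s['from'])
cols = pd.Categorical(s['to'])

counts = sparse.coo_matrix(
    (np.ones(len(s), dtype=np.float32), (rows.codes, cols.codes)),
    shape=(len(rows.categories), len(cols.categories)),
).tocsr()                                              # duplicates summed -> transition counts

row_sums = np.asarray(counts.sum(axis=1)).ravel()
percentages = sparse.diags(100.0 / row_sums) @ counts  # row-normalised, still sparse

# rows.categories[i] labels row i ("from"), cols.categories[j] labels column j ("to")
</code></pre>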
|
<python><pandas><dataframe><numpy><vectorization>
|
2024-12-12 14:36:54
| 1
| 2,151
|
Saeed
|
79,275,434
| 4,537,160
|
VSCode background processes triggering SSO in Python code
|
<p>I have a Python repo that I'm working on in VSCode.</p>
<p>The repo structure is like this:</p>
<pre><code>main_repo
-- utils
---- client
------ my_client.py
</code></pre>
<p>my_client.py contains this code:</p>
<pre><code>class CustomClient(DataClient):
def __init__(self, client_type=None, lazy_init=False):
self.client_type = client_type
self._is_initialized = False # Tracks whether initialization has occurred
# Initialize eagerly if not lazy
if lazy_init is False:
self.initialize()
def initialize(self):
"""Explicitly initialize the base class if lazy_init=True."""
if not self._is_initialized:
super().__init__(...) # initialization logic
self._is_initialized = True
@staticmethod
def SpecificClient(lazy_init=False):
"""Factory method to create a Client of a certain type."""
return CustomClient(client_type="foo", lazy_init=lazy_init)
</code></pre>
<p>So, the idea is that I can create a client like this:</p>
<pre><code>client = CustomClient.SpecificClient(lazy_init=True)
client.initialize()
</code></pre>
<p>The client initialization triggers a web page for SSO login, and the issue is that this is happening every time I save a file in the IDE.
So, I imagine there is some background process that checks the code, goes through the <code>client.initialize()</code> calls, and executes them upon saving.</p>
<p>I tried changing the initialize method to:</p>
<pre><code> def initialize(self):
"""Explicitly initialize the base class if lazy_init=True."""
stack = inspect.stack()
for frame in stack:
if "pylint" in frame.filename or "mypy" in frame.filename:
return
if not self._is_initialized:
super().__init__(...) # initialization logic
self._is_initialized = True
</code></pre>
<p>but it's still launching the SSO every time.</p>
<p>I tried disabling the extensions, I left only the ones shown in this picture, but I still have this issue:</p>
<p><a href="https://i.sstatic.net/JpGYXxH2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpGYXxH2.png" alt="remaining_extensions" /></a></p>
<p>Any suggestions?</p>
|
<python><visual-studio-code>
|
2024-12-12 14:13:36
| 0
| 1,630
|
Carlo
|
79,274,939
| 3,497,293
|
retrieving pagerduty dedup key of an incident using API
|
<p>I have a Python program that generates <a href="https://developer.pagerduty.com/api-reference/" rel="nofollow noreferrer">PagerDuty (PD)</a> alerts using the <a href="https://pagerduty.github.io/pdpyras/user_guide.html" rel="nofollow noreferrer">pdpyras</a> library. I have a case where I need to resolve the PD alerts automatically if the issue is not found in the next run. When creating an alert, I pass <code>dedup_key</code> and <code>title</code>. The <code>title</code> varies according to the state of the infrastructure (offline, broken, down). I want to resolve the alerts using <code>dedup_key</code>. There is a <a href="https://pagerduty.github.io/pdpyras/user_guide.html#events-api-v2" rel="nofollow noreferrer">library function</a> for resolving alerts, but I have so many items in a good state that if I blindly send a resolve API call for each of them, I get a 429 (too many requests) from PagerDuty.</p>
<p>What I want to do is search for alerts matching the dedup_key (if any exist) and resolve only those alerts, but there is no v2 search API. I am struggling to get the PagerDuty incident dedup key via the API.</p>
<p>As a workaround, I also attempted to add a string equal to the dedup_key in the <code>custom_details</code> field of the trigger(...) function in pdpyras. I can see the fields in the PD UI, but when I retrieve the open incidents via the API, I don't see the custom_details field in the API response.</p>
<p>Need some help here.</p>
<p>Code snippet</p>
<pre class="lang-py prettyprint-override"><code> def search_alert(self, dedup_key):
self.all_active_incidents = self.session.list_all(
'incidents',
params={'statuses[]': ['triggered', 'acknowledged']}
)
for incident in self.all_active_incidents:
if incident['dedup_key'] == dedup_key:
return incident
return None
def resolve_alert(self ,dedup_key):
'''
Resolve incident in pagerduty
:param dedup_key: dedup key of the incident
:return: dedup key of the resolved incident
'''
return self.v2_session.resolve(dedup_key=dedup_key)
def create_alert(self, title, dedup_key, source="..."):
dedup_key = self.v2_session.trigger(title, source=source, dedup_key=dedup_key)
return dedup_key
</code></pre>
|
<python><pagerduty>
|
2024-12-12 11:31:33
| 0
| 1,454
|
Krishnom
|
79,274,781
| 10,482,316
|
Movesense Sensor Not Sending Data via Python Script but Works with Mobile App
|
<p>I am working with a Movesense sensor and attempting to collect data using the Python script provided in the documentation. The path and commands I am using have been verified as correct according to the Movesense API documentation and cross-checked with other reference code. The Python client successfully connects to the sensor and sends the command to start data streaming, but no data is being received.</p>
<p>Here’s what I’ve verified so far:</p>
<ol>
<li>The sensor is functional, as it streams data when connected to the Movesense mobile app.</li>
<li>The path I am using (e.g., "/Meas/ECG/200") is correct and matches the documentation.</li>
<li>The write command is being sent successfully (confirmed via debug logs), and notifications are enabled without errors.</li>
<li>The UUIDs for the write and notify characteristics are correct.</li>
</ol>
<p>Despite this, no notifications or data are received via the Python script. I have tried:</p>
<ul>
<li>Adding delays before and after sending the write command.</li>
<li>Resending the write command multiple times.</li>
<li>Verifying the sensor is not connected to any other device during testing.</li>
</ul>
<p>The Python script uses the Bleak library for BLE communication. Logging shows no errors, and the sensor acknowledges the write command, but it does not send any data.</p>
<p>Has anyone encountered a similar issue, or is there something I might be missing in the configuration or communication sequence? Any insights or suggestions would be greatly appreciated.</p>
<p>Here is the code that I am using.</p>
<pre><code># -*- coding: utf-8 -*-
"""
Python gatt_sensordata_app client example using the Bleak GATT client.
This example is based on the examples in the Bleak repo: https://github.com/hbldh/bleak
"""
import logging
import asyncio
import platform
import signal
from bleak import BleakClient
from bleak import _logger as logger
from bleak import discover
from functools import reduce
from typing import List
import struct
import sys
WRITE_CHARACTERISTIC_UUID = (
<WRITE_CHARACTERISTIC_UUID>
)
NOTIFY_CHARACTERISTIC_UUID = (
<NOTIFY_CHARACTERISTIC_UUID>
)
# https://stackoverflow.com/a/56243296
class DataView:
def __init__(self, array, bytes_per_element=1):
"""
bytes_per_element is the size of each element in bytes.
By default we are assume the array is one byte per element.
"""
self.array = array
self.bytes_per_element = 1
def __get_binary(self, start_index, byte_count, signed=False):
integers = [self.array[start_index + x] for x in range(byte_count)]
_bytes = [integer.to_bytes(
self.bytes_per_element, byteorder='little', signed=signed) for integer in integers]
return reduce(lambda a, b: a + b, _bytes)
def get_uint_16(self, start_index):
bytes_to_read = 2
return int.from_bytes(self.__get_binary(start_index, bytes_to_read), byteorder='little')
def get_uint_8(self, start_index):
bytes_to_read = 1
return int.from_bytes(self.__get_binary(start_index, bytes_to_read), byteorder='little')
def get_uint_32(self, start_index):
bytes_to_read = 4
binary = self.__get_binary(start_index, bytes_to_read)
return struct.unpack('<I', binary)[0] # <f for little endian
def get_int_32(self, start_index):
bytes_to_read = 4
binary = self.__get_binary(start_index, bytes_to_read)
return struct.unpack('<i', binary)[0] # < for little endian
def get_float_32(self, start_index):
bytes_to_read = 4
binary = self.__get_binary(start_index, bytes_to_read)
return struct.unpack('<f', binary)[0] # <f for little endian
async def run_queue_consumer(queue: asyncio.Queue):
while True:
data = await queue.get()
if data is None:
logger.info(
"Got message from client about disconnection. Exiting consumer loop..."
)
break
else:
# print to stdout
print(data)
PACKET_TYPE_DATA = 2
PACKET_TYPE_DATA_PART2 = 3
ongoing_data_update = None
async def run_ble_client( end_of_serial: str, sensor_types: List[str], queue: asyncio.Queue):
# Check the device is available
devices = await discover()
found = False
address = None
for d in devices:
logger.debug("device:", d)
if d.name and d.name.endswith(end_of_serial):
logger.info("device found")
address = d.address
found = True
break
# This event is set if device disconnects or ctrl+c is pressed
disconnected_event = asyncio.Event()
def raise_graceful_exit(*args):
disconnected_event.set()
def disconnect_callback(client):
logger.info("Disconnected callback called!")
disconnected_event.set()
async def notification_handler(sender, data):
"""Simple notification handler which prints the data received."""
logger.info(f"Notification received from {sender}: {data}")
print(f"Raw data: {data}")
d = DataView(data)
packet_type= d.get_uint_8(0)
reference = d.get_uint_8(1)
global ongoing_data_update
# print("packet ", packet_type, ", ongoing:",ongoing_data_update)
if packet_type == PACKET_TYPE_DATA:
# ECG (reference 100) fits in one packet
if reference == 100:
timestamp = d.get_uint_32(2)
for i in range(0,16):
# Interpolate timestamp within the data notification
row_timestamp = timestamp + int(i*1000/200)
## ECG package starts with timestamp and then array of 16 samples
# Sample scaling is 0.38 uV/sample
sample_mV = d.get_int_32(6+i*4) * 0.38 *0.001
msg_row = "ECG,{},{:.3f}".format(row_timestamp, sample_mV)
# queue message for later consumption (output)
await queue.put(msg_row)
else:
# print("PACKET_TYPE_DATA")
# Store 1st part of the incoming data
ongoing_data_update = d
elif packet_type == PACKET_TYPE_DATA_PART2:
# print("PACKET_TYPE_DATA_PART2. len:",len(data))
# Create combined DataView that contains the whole data packet
# (skip type_id + ref num of the data_part2)
d = DataView( ongoing_data_update.array + data[2:])
ongoing_data_update = None
# Dig data from the binary
# msg = "Data: offset: {}, len: {}".format(d.get_uint_32(2),
# len(d.array))
timestamp = d.get_uint_32(2)
for i in range(0,8):
# Interpolate timestamp within the data notification
row_timestamp = timestamp + int(i*1000/104)
## IMU9 package starts with timestamp and then three arrays (len 8*4 bytes) of xyz's
## Each "row" therefore starts (3*4 bytes after each other interleaving to acc, gyro and magn)
offset = 6 + i * 3* 4
skip = 3*8*4
msg_row = "IMU9,{},{:.2f},{:.2f},{:.2f},{:.2f},{:.2f},{:.2f},{:.2f},{:.2f},{:.2f}".format(row_timestamp
, d.get_float_32(offset)
, d.get_float_32(offset+4)
, d.get_float_32(offset+8)
, d.get_float_32(offset+skip+0)
, d.get_float_32(offset+skip+4)
, d.get_float_32(offset+skip+8)
, d.get_float_32(offset+2*skip+0)
, d.get_float_32(offset+2*skip+4)
, d.get_float_32(offset+2*skip+8))
# queue message for later consumption (output)
await queue.put(msg_row)
if found:
async with BleakClient(address, disconnected_callback=disconnect_callback) as client:
# Add signal handler for ctrl+c
signal.signal(signal.SIGINT, raise_graceful_exit)
signal.signal(signal.SIGTERM, raise_graceful_exit)
# Start notifications and subscribe to acceleration @ 13Hz
logger.info("Attempting to enable notifications")
await client.start_notify(NOTIFY_CHARACTERISTIC_UUID, notification_handler)
logger.info("Notifications successfully enabled")
if "ECG" in sensor_types:
logger.info("Sending write command for ECG")
await client.write_gatt_char(WRITE_CHARACTERISTIC_UUID, bytearray([1, 2])+bytearray("/Meas/ECG/200", "utf-8"), response=True)
logger.info("Write command for ECG sent successfully")
if "IMU9" in sensor_types:
logger.info("Sending write command for IMU9")
await client.write_gatt_char(WRITE_CHARACTERISTIC_UUID, bytearray([1, 99])+bytearray("/Meas/IMU9/104", "utf-8"), response=True)
logger.info("Write command for IMU9 sent successfully")
# Run until disconnect event is set
await disconnected_event.wait()
logger.info(
"Disconnect set by ctrl+c or real disconnect event. Check Status:")
# Check the conection status to infer if the device disconnected or crtl+c was pressed
status = client.is_connected
logger.info("Connected: {}".format(status))
# If status is connected, unsubscribe and stop notifications
if status:
logger.info("Unsubscribe")
await client.write_gatt_char(WRITE_CHARACTERISTIC_UUID, bytearray([2, 99]), response=True)
await client.write_gatt_char(WRITE_CHARACTERISTIC_UUID, bytearray([2, 100]), response=True)
logger.info("Stop notifications")
await client.stop_notify(NOTIFY_CHARACTERISTIC_UUID)
# Signal consumer to exit
await queue.put(None)
await asyncio.sleep(1.0)
else:
# Signal consumer to exit
await queue.put(None)
print("Sensor ******" + end_of_serial, "not found!")
async def main(end_of_serial: str, sensor_types: List[str]):
queue = asyncio.Queue()
client_task = run_ble_client(end_of_serial, sensor_types, queue)
consumer_task = run_queue_consumer(queue)
await asyncio.gather(client_task, consumer_task)
logger.info("Main method done.")
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
# print usage if command line arg not given
if len(sys.argv)<2:
print("Usage: python movesense_sensor_data <end_of_sensor_name> <sensor_type>")
print("sensor_type must be either 'IMU9', 'ECG', or omitted to run both")
exit(1)
end_of_serial = sys.argv[1]
sensor_type = sys.argv[2] if len(sys.argv) > 2 else ""
# Ensure valid sensor type and run the corresponding function
if sensor_type == "":
sensor_types = ["IMU9", "ECG"]
elif sensor_type in ["IMU9", "ECG"]:
sensor_types = [sensor_type]
else:
print("Error: sensor_type must be either 'IMU9' or 'ECG'")
exit(1)
asyncio.run(main(end_of_serial, sensor_types))
</code></pre>
|
<python><movesense><python-bleak>
|
2024-12-12 10:43:45
| 1
| 746
|
Shreyas Ramachandran
|
79,274,751
| 7,672,005
|
How to properly handle multi-GPU execution failing on one GPU due to OOM
|
<p>I'm working with multiple GPUs handling large amounts of data.
I want to create an out-of-memory (OOM) catch system that skips the current batch on all GPUs if any are out of memory.</p>
<p>However, for reasons I don't understand only the OOM GPU reaches the dist.all_reduce synchronisation point. The rest don't log anything beyond the first print and the execution freezes and ends without further message.</p>
<p>I feel like I'm missing something either simple or some distributed computing trivia I don't know. If anyone could point out my error I'd be grateful.</p>
<pre><code>def train_epoch(model, loader, optimizer, device, loss_fn):
for batch_idx, data in enumerate(loader):
if hasattr(data, 'stores') and isinstance(data.stores, list):
for store in data.stores:
if 'name' in store:
print(f"[rank {idr_torch.rank}] Batch {train_count} contains samples with names: {store['name']}")
# Initialize OOM flag
oom_flag = torch.tensor(0, device=device)
try:
# Move data to device
data = data.to(device)
optimizer.zero_grad()
# Forward pass
pred = model(data)
# Compute loss
loss = loss_fn(pred, data=data, device=device)
# Backward pass
loss.backward()
# Optimizer step
optimizer.step()
except RuntimeError as e:
if 'CUDA out of memory' in str(e):
print(f"[rank {idr_torch.rank}] CUDA OOM at batch {batch_idx}. Skipping batch...")
torch.cuda.empty_cache()
# Log problematic batch
if hasattr(data, 'stores'):
for store in data.stores:
if 'name' in store:
print(f"[rank {idr_torch.rank}] Problematic batch samples: {store['name']}")
# Set OOM flag
oom_flag = torch.tensor(1, device=device)
# Clear gradients and cache to prevent residue state
optimizer.zero_grad(set_to_none=True)
torch.cuda.empty_cache()
else:
raise e # Raise non-OOM exceptions
# Synchronize OOM flag across ranks (ensures all GPUs check if any had an OOM)
print(f"[rank {idr_torch.rank}] Waiting on OOM-flag synch in batch {batch_idx}...")
torch.distributed.all_reduce(oom_flag, op=torch.distributed.ReduceOp.MAX)
print(f"[rank {idr_torch.rank}] Synch complete at batch {batch_idx}.")
# If any rank had OOM, skip the batch
if oom_flag.item() > 0:
print(f"[rank {idr_torch.rank}] Skipping synchronized batch {batch_idx} due to OOM...")
skip_count += 1
continue # Skip optimizer step
# Ensure memory cleanup after epoch
torch.cuda.empty_cache()
return True
</code></pre>
<p>Output:</p>
<pre><code> <trimmed above prints>
2024-12-11 17:10:24,414 - INFO - [rank 14] Batch 26 contains samples with names: ['test1', 'test2']
2024-12-11 17:10:24,414 - INFO - [rank 8] Batch 26 contains samples with names: ['test2', 'test3']
2024-12-11 17:10:24,414 - INFO - [rank 15] Batch 26 contains samples with names: ['test4', 'test5']
2024-12-11 17:10:30,923 - INFO - [rank 3] CUDA OOM at batch 26. Skipping batch...
2024-12-11 17:10:30,925 - INFO - [rank 3] Problematic batch samples: ['test6', 'test7']
2024-12-11 17:10:30,932 - INFO - [rank 3] Waiting on OOM-flag synch in batch 26...
2024-12-11 17:10:30,934 - INFO - [rank 3] Synch complete at batch 26.
<execution dies silently>
</code></pre>
<p>I've also tried adding an extra <code>dist.barrier()</code> before the <code>all_reduce</code>, this causes the OOM GPU to hang on "Waiting on OOM-flag synch"</p>
|
<python><error-handling><pytorch><distributed-computing><multi-gpu>
|
2024-12-12 10:34:14
| 0
| 534
|
Zyzyx
|
79,274,733
| 8,650,926
|
IFDRational is not JSON serializable using Pillow
|
<p>I am using PIL in python to extract the metadata of an image.</p>
<p>Here is my code:</p>
<pre><code>import json
from PIL import Image, TiffImagePlugin
import PIL.ExifTags
img = Image.open("/home/user/DSCN0010.jpg")
dct = {
PIL.ExifTags.TAGS[k]: float(v) if isinstance(v, TiffImagePlugin.IFDRational) else v
for k, v in img._getexif().items()
if k in PIL.ExifTags.TAGS
}
print(json.dumps(dct))
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>Error processing EXIF data: Object of type IFDRational is not JSON serializable
</code></pre>
<p>As you can see in the code, I cast all the values of type <code>IFDRational</code> to <code>float</code> but I'm still getting the error.</p>
<p>Here is the link to the image: <a href="https://github.com/ianare/exif-samples/blob/master/jpg/gps/DSCN0010.jpg" rel="nofollow noreferrer">https://github.com/ianare/exif-samples/blob/master/jpg/gps/DSCN0010.jpg</a></p>
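<p>For what it is worth, a recursive cast along the following lines is what I am considering next (just an untested sketch; my assumption, which may be wrong, is that the leftovers are nested values such as the GPSInfo tuples/dict or <code>bytes</code> entries, and <code>cast_exif_value</code> is simply a name I made up):</p>
<pre><code>from PIL import TiffImagePlugin

def cast_exif_value(v):
    # Assumption: nested containers and bytes may still hold values
    # that json.dumps cannot handle after the top-level float() cast.
    if isinstance(v, TiffImagePlugin.IFDRational):
        return float(v)
    if isinstance(v, bytes):
        return v.decode(errors="replace")
    if isinstance(v, (tuple, list)):
        return [cast_exif_value(x) for x in v]
    if isinstance(v, dict):
        return {k: cast_exif_value(x) for k, x in v.items()}
    return v

dct = {
    PIL.ExifTags.TAGS[k]: cast_exif_value(v)
    for k, v in img._getexif().items()
    if k in PIL.ExifTags.TAGS
}
</code></pre>
<p>But I would like to understand why the simple top-level cast is not enough.</p>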
|
<python><json><python-imaging-library><metadata>
|
2024-12-12 10:29:06
| 1
| 808
|
Mike Delta
|
79,274,475
| 4,139,024
|
altair grouped bar chart with quantitive axis
|
<p>When using a quantitative x axis, the xOffset does not seem to work in altair, i.e. the bars are stacked, but I would like to have them next to each other. When encoding the x-axis as ordinal <code>x=alt.X("Genre:O")</code> it works, but the sorting is wrong. Any idea how to solve this?</p>
<pre><code>import altair as alt
import pandas as pd
data = pd.DataFrame(
[[1, 5, "F"], [2, 10, "F"], [1, 3, "M"], [2, 9, "M"]],
columns=["Genre", "Rating", "Gender"],
)
chart = (
alt.Chart(data)
.mark_bar()
.encode(
x=alt.X("Genre:Q", axis=alt.Axis(labelAngle=0)),
xOffset="Gender",
y=alt.Y("Rating", axis=alt.Axis(grid=False)),
color="Gender",
)
)
chart
</code></pre>
<p><a href="https://i.sstatic.net/GP5dmACQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GP5dmACQ.png" alt="enter image description here" /></a></p>
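<p>The closest workaround I have in mind (untested sketch; I am assuming an ordinal axis that is explicitly sorted by the underlying numeric genre value would be acceptable) is:</p>
<pre><code>chart = (
    alt.Chart(data)
    .mark_bar()
    .encode(
        # ordinal axis, sorted ascending by the underlying numeric value
        x=alt.X("Genre:O", sort="ascending", axis=alt.Axis(labelAngle=0)),
        xOffset="Gender",
        y=alt.Y("Rating", axis=alt.Axis(grid=False)),
        color="Gender",
    )
)
</code></pre>
<p>But I would still prefer a truly quantitative axis if that is possible.</p>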
|
<python><visualization><altair>
|
2024-12-12 09:20:58
| 1
| 3,338
|
timbmg
|
79,274,376
| 7,218,871
|
Slice a numpy 2d array using another 2d array
|
<p>I have a 2D array of shape (4,5) and another 2D array of shape (4,2). The second array contains the start and end indices that I need to extract from the first array, i.e., I want to slice the first array using the second array.</p>
<pre><code>np.random.seed(0)
a = np.random.randint(0,999,(4,5))
a
array([[684, 559, 629, 192, 835],
[763, 707, 359, 9, 723],
[277, 754, 804, 599, 70],
[472, 600, 396, 314, 705]])
</code></pre>
<pre><code>idx = np.array([[2,4],
[0,3],
[2,3],
[1,3]
])
</code></pre>
<p>Expected output - it can be either of the following two formats. The only reason for padding with zeros is that variable-length 2D arrays are not supported.</p>
<pre><code>[[629, 192, 835, 0, 0],
[763, 707, 359, 9, 0],
[804, 599, 0, 0, 0],
[600, 396, 314, 0, 0]
]
</code></pre>
<pre><code>[[0, 0, 629, 192, 835],
[763, 707, 359, 9, 0],
[0, 0, 804, 599, 0],
[0, 600, 396, 314, 0]
]
</code></pre>
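<p>For the second format, something like the following broadcasted mask is the kind of approach I have in mind (sketch only, continuing from <code>a</code> and <code>idx</code> above and assuming the end index is inclusive, as in the example):</p>
<pre><code>cols = np.arange(a.shape[1])                         # shape (5,)
mask = (cols >= idx[:, :1]) & (cols <= idx[:, 1:])   # shape (4, 5)
out = np.where(mask, a, 0)                           # zero-padded, second format
</code></pre>
<p>I am not sure whether this is idiomatic, or how best to get the first (left-aligned) format.</p>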
|
<python><numpy><numpy-ndarray>
|
2024-12-12 08:52:10
| 4
| 620
|
Abhishek Jain
|
79,274,365
| 4,393,852
|
Exception in the class destructor
|
<p>I'm getting an <code>ImportError</code> exception when executing this code:</p>
<pre><code>from requests import Session
class web_client(object):
def __init__(self, adapter: type):
self.adapter = adapter
self.s = Session()
self.s.verify = False
self.adapter.auth(self)
def __del__(self):
self.adapter.close()
self.s.close()
class adapter(type):
@classmethod
def process(cls):
cls.client.s.get('https://stackoverflow.com')
@classmethod
def auth(cls, client: web_client):
cls.client = client
cls.process()
@classmethod
def close(cls):
cls.process() # exception here
if __name__ == "__main__":
web_client(adapter)
</code></pre>
<p>Why does this code throw an error, and how can I avoid it?</p>
<pre><code>Exception ignored in: <function web_client.__del__ at 0x00000231EE1D8310>
Traceback (most recent call last):
File "C:\Users\me\Desktop\main.py", line 12, in __del__
File "C:\Users\me\Desktop\main.py", line 28, in close
File "C:\Users\me\Desktop\main.py", line 19, in process
File "C:\Anaconda3\lib\site-packages\requests\sessions.py", line 555, in get
File "C:\Anaconda3\lib\site-packages\requests\sessions.py", line 528, in request
File "C:\Anaconda3\lib\site-packages\requests\sessions.py", line 448, in prepare_request
File "C:\Anaconda3\lib\site-packages\requests\cookies.py", line 544, in merge_cookies
File "C:\Anaconda3\lib\site-packages\requests\cookies.py", line 352, in update
File "C:\Anaconda3\lib\copy.py", line 92, in copy
ImportError: sys.meta_path is None, Python is likely shutting down
</code></pre>
|
<python><python-3.x>
|
2024-12-12 08:48:01
| 1
| 741
|
shmnff
|
79,274,171
| 1,084,174
|
Getting error while fine tuning using peft
|
<p>I was trying to fine tune using peft. Following is my code:</p>
<pre><code>from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import TrainingArguments
from transformers import Trainer
from datasets import load_dataset
peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=2, lora_alpha=16, lora_dropout=0.01)
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
dataset = load_dataset('imdb')
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
def tokenize_function(examples):
return tokenizer(examples['text'], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = predictions[:, 0]
return metric.compute(predictions=predictions, references=labels)
training_args = TrainingArguments(
output_dir="./mt0-large-lora",
learning_rate=1e-3,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=2,
weight_decay=0.01,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
trainer.train()
</code></pre>
<p>I am getting following error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[2], line 49
27 training_args = TrainingArguments(
28 output_dir="./mt0-large-lora",
29 learning_rate=1e-3,
(...)
36 load_best_model_at_end=True,
37 )
39 trainer = Trainer(
40 model=model,
41 args=training_args,
(...)
46 compute_metrics=compute_metrics,
47 )
---> 49 trainer.train()
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1534 self.model_wrapped = self.model
1536 inner_training_loop = find_executable_batch_size(
1537 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1538 )
-> 1539 return inner_training_loop(
1540 args=args,
1541 resume_from_checkpoint=resume_from_checkpoint,
1542 trial=trial,
1543 ignore_keys_for_eval=ignore_keys_for_eval,
1544 )
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/transformers/trainer.py:1809, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1806 self.control = self.callback_handler.on_step_begin(args, self.state, self.control)
1808 with self.accelerator.accumulate(model):
-> 1809 tr_loss_step = self.training_step(model, inputs)
1811 if (
1812 args.logging_nan_inf_filter
1813 and not is_torch_tpu_available()
1814 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
1815 ):
1816 # if loss is nan or inf simply add the average of previous logged losses
1817 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/transformers/trainer.py:2654, in Trainer.training_step(self, model, inputs)
2651 return loss_mb.reduce_mean().detach().to(self.args.device)
2653 with self.compute_loss_context_manager():
-> 2654 loss = self.compute_loss(model, inputs)
2656 if self.args.n_gpu > 1:
2657 loss = loss.mean() # mean() to average on multi-gpu parallel training
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/transformers/trainer.py:2679, in Trainer.compute_loss(self, model, inputs, return_outputs)
2677 else:
2678 labels = None
-> 2679 outputs = model(**inputs)
2680 # Save past state if it exists
2681 # TODO: this needs to be fixed and made cleaner later.
2682 if self.args.past_index >= 0:
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/peft/peft_model.py:1080, in PeftModelForSeq2SeqLM.forward(self, input_ids, attention_mask, inputs_embeds, decoder_input_ids, decoder_attention_mask, decoder_inputs_embeds, labels, output_attentions, output_hidden_states, return_dict, **kwargs)
1078 peft_config = self.active_peft_config
1079 if not isinstance(peft_config, PromptLearningConfig):
-> 1080 return self.base_model(
1081 input_ids=input_ids,
1082 attention_mask=attention_mask,
1083 inputs_embeds=inputs_embeds,
1084 decoder_input_ids=decoder_input_ids,
1085 decoder_attention_mask=decoder_attention_mask,
1086 decoder_inputs_embeds=decoder_inputs_embeds,
1087 labels=labels,
1088 output_attentions=output_attentions,
1089 output_hidden_states=output_hidden_states,
1090 return_dict=return_dict,
1091 **kwargs,
1092 )
1094 batch_size = input_ids.shape[0]
1095 if decoder_attention_mask is not None:
1096 # concat prompt attention mask
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/transformers/models/mt5/modeling_mt5.py:1741, in MT5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1738 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device)
1740 # Decode
-> 1741 decoder_outputs = self.decoder(
1742 input_ids=decoder_input_ids,
1743 attention_mask=decoder_attention_mask,
1744 inputs_embeds=decoder_inputs_embeds,
1745 past_key_values=past_key_values,
1746 encoder_hidden_states=hidden_states,
1747 encoder_attention_mask=attention_mask,
1748 head_mask=decoder_head_mask,
1749 cross_attn_head_mask=cross_attn_head_mask,
1750 use_cache=use_cache,
1751 output_attentions=output_attentions,
1752 output_hidden_states=output_hidden_states,
1753 return_dict=return_dict,
1754 )
1756 sequence_output = decoder_outputs[0]
1758 # Set device for model parallelism
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/venvs/fine_tuning_llama_peft/lib/python3.11/site-packages/transformers/models/mt5/modeling_mt5.py:966, in MT5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
963 raise ValueError("You have to initialize the model with valid token embeddings")
964 inputs_embeds = self.embed_tokens(input_ids)
--> 966 batch_size, seq_length = input_shape
968 # required mask seq length can be calculated via length of past
969 mask_seq_length = past_key_values[0][0].shape[2] + seq_length if past_key_values is not None else seq_length
ValueError: not enough values to unpack (expected 2, got 1)
</code></pre>
<p>Any idea what's going wrong here? I'm running on my GeForce RTX 4090 GPU with 24 GB for testing purposes.</p>
|
<python><machine-learning><gpu><fine-tuning><peft>
|
2024-12-12 07:31:09
| 0
| 40,671
|
Sazzad Hissain Khan
|
79,273,969
| 26,579,940
|
Get list of kwargs for parent class
|
<p>For some Python packages, VS Code will fetch and display the list of keyword arguments for a class, even if they come from an inherited class.</p>
<p>The following is an example of code that inherits Django's model class.
I named the class's argument **kwargs, but VS Code picks up the parent class's keyword arguments and shows them.</p>
<p>How do I use this feature?</p>
<p>Please check the following image.
<a href="https://i.sstatic.net/nuNb6BEP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuNb6BEP.png" alt="Get list of kwargs for parent class" /></a></p>
|
<python>
|
2024-12-12 05:52:15
| 0
| 404
|
white.seolpyo.com
|
79,273,831
| 11,464
|
How do I export a darts' TCNModel to ONNX?
|
<p>I have a TCNModel that is trained on a time series:</p>
<pre><code>model_air = TCNModel(
input_chunk_length=13,
output_chunk_length=12,
n_epochs=3,
dropout=0.1,
dilation_base=2,
weight_norm=True,
kernel_size=5,
num_filters=3,
random_state=0,
save_checkpoints=True,
model_name=model_name,
force_reset=True,
)
</code></pre>
<p>According to <a href="https://unit8co.github.io/darts/generated_api/darts.models.forecasting.pl_forecasting_module.html?highlight=onnx#darts.models.forecasting.pl_forecasting_module.PLDualCovariatesModule.to_onnx" rel="nofollow noreferrer">darts documentation</a>, the inner <code>model_air.model</code> of type <code>darts.models.forecasting.tcn_model._TCNModule</code> (derived from <code>PLPastCovariatesModule</code>) can be serialized to ONNX via to_onnx call. However, no matter what I try, I get errors at export:</p>
<blockquote>
<p>model_air.model.to_onnx('model_air.onnx')</p>
</blockquote>
<p>ValueError: Could not export to ONNX since neither <code>input_sample</code> nor <code>model.example_input_array</code> attribute is set.</p>
<pre><code>dummy_input = torch.randn(1, 13, 1)
model_air.model.to_onnx('model_air.onnx', input_sample=dummy_input)
</code></pre>
<p>ValueError: not enough values to unpack (expected 2, got 1)</p>
<pre><code>dummy_input = (torch.randn(1, 13, 1), None)
model_air.model.to_onnx('model_air.onnx', input_sample=dummy_input)
</code></pre>
<p>TypeError: _TCNModule.forward() takes 2 positional arguments but 3 were given</p>
<p>What input value should I give to model_air.model?</p>
|
<python><onnx><darts-python>
|
2024-12-12 04:30:39
| 0
| 7,639
|
lmsasu
|
79,273,770
| 188,331
|
Run Flask app in a sub-directory with error log and WSGI Process Group
|
<p>I am trying to migrate a Flask app from an Ubuntu 22.04 server (where it is configured as a virtual host) to a RHEL 9.4 server (as a sub-directory script), both running the Apache web server with the WSGI module installed. From this <a href="https://stackoverflow.com/questions/14181364/running-a-flask-website-in-a-subdirectory-with-apache">question</a>, I know that I can set the WSGIScriptAlias to configure the sub-directory. But my question is, how can I convert a <code>VirtualHost</code> to a sub-directory script?</p>
<p>The configuration in Ubuntu (<code>/etc/apache2/sites-available/subdomain.example.com.conf</code>):</p>
<pre><code>WSGIPythonPath /var/www/www/path/to/flask/venv/lib/python3.10/site-packages
<VirtualHost *:443>
ServerName subdomain.example.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/path/to/flask
WSGIDaemonProcess flaskapp threads=5
WSGIScriptAlias / /var/www/path/to/flask/flaskapp.wsgi
WSGIApplicationGroup %{GLOBAL}
<Directory flaskapp>
WSGIProcessGroup flaskapp
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
ErrorLog ${APACHE_LOG_DIR}/flask-error.log
CustomLog ${APACHE_LOG_DIR}/flask-access.log combined
SSLCertificateFile /etc/letsencrypt/live/example.com-0002/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com-0002/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</code></pre>
<p>and here is the RHEL Apache config (<code>/etc/httpd/conf/httpd.conf</code>):</p>
<pre><code><Directory "/var/www">
AllowOverride None
# Allow open access:
Require all granted
# Flask app related
WSGIDaemonProcess flaskapp threads=5
WSGIScriptAlias /subfolder /var/www/path/to/flask/flaskapp.wsgi
WSGIApplicationGroup %{GLOBAL}
</Directory>
</code></pre>
<p>How can I set the <code>ErrorLog</code>, <code>CustomLog</code>, <code>WSGIProcessGroup</code>, <code>WSGIApplicationGroup</code> and the <code>WSGIPythonPath</code> (for the virtual environment)?</p>
<p>Please help, thanks.</p>
|
<python><apache><flask><mod-wsgi><rhel9>
|
2024-12-12 03:45:16
| 0
| 54,395
|
Raptor
|
79,273,711
| 1,540,785
|
MariaDB in Docker - MariaDB Connector/Python was build with MariaDB Connector/C version 3.3.3, but loaded Connector/C library has version 10.3.39
|
<p>I can't figure out the error in my Docker container.</p>
<p>I am using python:3.9-buster</p>
<p>I am downloading what I think is the correct Connector/C version for MariaDB:</p>
<pre><code>RUN wget https://dlm.mariadb.com/2678574/Connectors/c/connector-c-3.3.3/mariadb-connector-c-3.3.3-debian-bullseye-amd64.tar.gz \
-O mariadb-connector-c.tar.gz && \
tar -xzf mariadb-connector-c.tar.gz --strip-components=1 -C /usr && \
rm mariadb-connector-c.tar.gz
</code></pre>
<p>But I can't figure out why I'm still getting the error.</p>
<p>The full error line if that helps</p>
<pre><code>/app/.venv/lib/python3.9/site-packages/mariadb/connections.py:85: RuntimeWarning: MariaDB Connector/Python was build with MariaDB Connector/C version 3.3.3 but loaded Connector/C library has version 10.3.39
</code></pre>
<p>I followed the trace of this post: <a href="https://stackoverflow.com/questions/74429209/mariadb-in-docker-mariadb-connector-python-requires-mariadb-connector-c-3-2">Mariadb in Docker: MariaDB Connector/Python requires MariaDB Connector/C >= 3.2.4, found version 3.1.16</a> but it did not work</p>
|
<python><docker><mariadb>
|
2024-12-12 03:04:11
| 1
| 371
|
kanadianDri3
|
79,273,621
| 342,553
|
Why is python mock assert_any_call not matching
|
<p>Given the doc for assert_any_call</p>
<p><a href="https://i.sstatic.net/29chfiM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/29chfiM6.png" alt="enter image description here" /></a></p>
<p>I have a log statement I want to assert against</p>
<pre class="lang-py prettyprint-override"><code>...
logger.warning('Backup quantity is 0, supplied uuids %s, matched machines: %s', uuids, machines)
...
</code></pre>
<p>see the last 3 lines of my unit test (first 2 lines for investigation purpose only)</p>
<pre class="lang-py prettyprint-override"><code># deliberate fail so it prints what it expects
logger.warning.assert_not_called()
# prints the params used in the official assert
print('Backup quantity is 0, supplied uuids %s, matched machines: %s', [vm3_uuid], VirtualMachine.objects.none())
# actual official assert
logger.warning.assert_any_call('Backup quantity is 0, supplied uuids %s, matched machines: %s', [vm3_uuid], VirtualMachine.objects.none())
</code></pre>
<p>The output for the test <code>assert_not_called</code> was:</p>
<pre><code>AssertionError: Expected 'warning' to not have been called. Called 2 times.
Calls: [call('Backup quantity is 0, supplied uuids %s, matched machines: %s', ['232d7937-975c-457b-8a11-ac473d0e04a0'], <QuerySet []>),
call('%s %s cannot find proxmox UUID from BIS', 'DEF-456', 'vm4')]
</code></pre>
<p>The param print was</p>
<pre><code>Backup quantity is 0, supplied uuids %s, matched machines: %s ['232d7937-975c-457b-8a11-ac473d0e04a0'] <QuerySet []>
</code></pre>
<p>The offical assert failed</p>
<pre><code>AssertionError: warning('Backup quantity is 0, supplied uuids %s, matched machines: %s', ['232d7937-975c-457b-8a11-ac473d0e04a0'], <QuerySet []>) call not found
</code></pre>
<p>Unless I missed something very obvious, I don't see why the <code>assert_any_call</code> would fail.</p>
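<p>The only workaround I can think of (sketch; my assumption, which may be wrong, is that the <code>QuerySet</code> argument compares by object identity rather than by contents) is to loosen that argument with <code>mock.ANY</code>:</p>
<pre class="lang-py prettyprint-override"><code>from unittest import mock

logger.warning.assert_any_call(
    'Backup quantity is 0, supplied uuids %s, matched machines: %s',
    [vm3_uuid],
    mock.ANY,  # placeholder instead of comparing two distinct QuerySet objects
)
</code></pre>
<p>But I would like to understand why the exact-match assertion fails in the first place.</p>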
|
<python><mocking><python-mock>
|
2024-12-12 01:47:21
| 2
| 26,828
|
James Lin
|
79,273,568
| 7,399,336
|
Get image or video from tweet using tweet id or url (with a good rate limit)
|
<p>I would like to replicate <a href="https://twittervideodownloader.com/" rel="nofollow noreferrer">https://twittervideodownloader.com/</a> using Python; it gets all the video URLs.</p>
<p><a href="https://i.sstatic.net/eAH6zQPv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAH6zQPv.png" alt="example" /></a></p>
<p>I tried to web scrape a tweet, but it loads dynamically, so I discarded that option.</p>
<p>I'm trying to get it using Tweepy. It works, but only at 1 tweet per 15 minutes, which doesn't make sense to me. How does that webpage do it?</p>
<p>Is there a way to get tweet info without that ridiculous rate limit?</p>
|
<python><twitter><tweepy><twitter-oauth><twitter-api-v2>
|
2024-12-12 01:11:28
| 0
| 361
|
AndresSp
|
79,273,432
| 6,024,187
|
python multiprocessing gets slower with additional cpus
|
<p>I'm trying to parallelize code that should be embarrassingly parallel and it just seems to get slower the more processes I use. Here is a minimally (dys)functional example:</p>
<pre><code>import os
import time
import random
import multiprocessing
from multiprocessing import Pool, Manager, Process
import numpy as np
import pandas as pd
def pool_func(
number: int,
max_number: int
) -> dict:
pid = str(multiprocessing.current_process().pid)
print('[{:2d}/{:2d} {:s}] Starting ...'.format(number, max_number, pid))
t0 = time.time()
# # the following takes ~10 seconds on a separate node
# for i in range(2):
# print('[{:d}] Passed loop {:d}/2...'.format(number, i+1))
# time.sleep(5)
# the following takes ~3.3 seconds on a separate node
n = 1000
for _ in range(50):
u = np.random.randn(n, n)
v = np.linalg.inv(u)
t1 = time.time()
print('[{:2d}/{:2d} {:s}] Finished in {:.1f} seconds.'.format(number, max_number, pid, t1 - t0))
return {}
if __name__ == "__main__":
runs = []
count = 0
while count < 50:
runs.append(
(count, 50)
)
count += 1
print(f"Number of runs to perform: {len(runs):d}")
num_cpus = 4
print(f"Running job with {num_cpus:d} CPUs in parallel ...")
# with Pool(processes=num_cpus) as pool:
with multiprocessing.get_context("spawn").Pool(processes=num_cpus) as pool:
results = pool.starmap(pool_func, runs)
print('Main process done.')
</code></pre>
<p>There are three features I want to point out. First, <code>num_cpus</code> can be changed to increase the number of workers in the pool. Second, I can change from the default 'fork' pool to the 'spawn' method, this doesn't seem to change anything. Finally, inside pool_func, the process that is running can be either a CPU intensive matrix inversion or a CPU-absent wait function.</p>
<p>When I use the wait function, the processes run in approximately the correct time, about 10 seconds per process. When I use the matrix inversion, the process time increases with the number of processes in the following, approximate, way:</p>
<pre class="lang-none prettyprint-override"><code>1 CPU : 3 seconds
2 CPUs: 4 seconds
4 CPUs: 30 seconds
8 CPUs: 95 seconds
</code></pre>
<p>Here is a partial output of the script above, run as-is:</p>
<pre class="lang-none prettyprint-override"><code>Number of runs to perform: 50
Running job with 4 CPUs in parallel ...
[ 0/50 581194] Starting ...
[ 4/50 581193] Starting ...
[ 8/50 581192] Starting ...
[12/50 581191] Starting ...
[ 0/50 581194] Finished in 24.7 seconds.
[ 1/50 581194] Starting ...
[ 4/50 581193] Finished in 29.3 seconds.
[ 5/50 581193] Starting ...
[12/50 581191] Finished in 30.3 seconds.
[13/50 581191] Starting ...
[ 8/50 581192] Finished in 32.2 seconds.
[ 9/50 581192] Starting ...
[ 1/50 581194] Finished in 26.9 seconds.
[ 2/50 581194] Starting ...
[ 5/50 581193] Finished in 30.3 seconds.
[ 6/50 581193] Starting ...
[13/50 581191] Finished in 30.8 seconds.
[14/50 581191] Starting ...
[ 9/50 581192] Finished in 32.8 seconds.
[10/50 581192] Starting ...
...
</code></pre>
<p>The process ids look unique to me.</p>
<p>Clearly, there is some problem with scaling as adding more CPUs is causing the processes to run slower. There isn't any I/O at all in the processes that are being timed. These are pedestrian processes that I expected would work right out of the box. I have no idea why this is not working as I think it should. Why does this script have individual processes that take longer when I use more CPUs?</p>
<p>When I run this on my macOS laptop, it works as expected, but it has similar scaling issues on a different remote Linux computer I have access to. This might be a platform-specific problem, but I'll leave it up in case someone has seen it before and knows how to fix it.</p>
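<p>The only workaround I have sketched so far (untested; my assumption is that the multi-threaded BLAS behind <code>np.linalg.inv</code> is oversubscribing the cores once several worker processes run it at the same time) is to cap the BLAS thread pools before NumPy is imported:</p>
<pre><code>import os

# must be set before `import numpy` so the BLAS backend picks them up
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["NUMEXPR_NUM_THREADS"] = "1"

import numpy as np  # imported only after the thread caps are in place
</code></pre>
<p>I have not confirmed that this is actually the cause, which is part of why I am asking.</p>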
|
<python><multiprocessing>
|
2024-12-11 23:17:56
| 2
| 897
|
Finncent Price
|
79,273,274
| 13,118,009
|
How to use ntile() windows function or similar on hundreds of columns in AWS Databricks
|
<p>I have a table with 20 million rows and 400+ columns. Aside from the first column, I need to change all other columns to evenly distributed deciles, independently of the other columns. My data reside in AWS Databricks. I'm running the Python notebook as a job using a job cluster with the following configuration:</p>
<pre><code>Multi-Node with Enable Autoscaling checked
Driver: i3en.3xlarge · Workers: i3en.3xlarge · 2-10 workers · DBR: 14.3 LTS (includes Apache Spark 3.5.0, Scala 2.12)]
</code></pre>
<p>So the sample data and desired output (i.e., the desired output is the last two columns) may look like this:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>person_id</th>
<th>col2</th>
<th>col400</th>
<th>col2_decile</th>
<th>col400_decile</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>0.01</td>
<td>0.10</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>b</td>
<td>0.15</td>
<td>0.25</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>c</td>
<td>0.20</td>
<td>0.30</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>d</td>
<td>0.25</td>
<td>0.30</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>e</td>
<td>0.30</td>
<td>0.40</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td>f</td>
<td>0.35</td>
<td>0.45</td>
<td>4</td>
<td>5</td>
</tr>
<tr>
<td>g</td>
<td>0.40</td>
<td>0.50</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td>h</td>
<td>0.45</td>
<td>0.55</td>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>i</td>
<td>0.50</td>
<td>0.60</td>
<td>6</td>
<td>8</td>
</tr>
<tr>
<td>j</td>
<td>0.55</td>
<td>0.62</td>
<td>7</td>
<td>8</td>
</tr>
<tr>
<td>k</td>
<td>0.55</td>
<td>0.33</td>
<td>8</td>
<td>4</td>
</tr>
<tr>
<td>l</td>
<td>0.56</td>
<td>0.64</td>
<td>8</td>
<td>9</td>
</tr>
<tr>
<td>n</td>
<td>0.05</td>
<td>0.59</td>
<td>1</td>
<td>7</td>
</tr>
<tr>
<td>o</td>
<td>0.19</td>
<td>0.22</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>p</td>
<td>0.42</td>
<td>0.49</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>q</td>
<td>0.51</td>
<td>0.23</td>
<td>7</td>
<td>2</td>
</tr>
<tr>
<td>r</td>
<td>0.75</td>
<td>0.63</td>
<td>10</td>
<td>9</td>
</tr>
<tr>
<td>s</td>
<td>0.82</td>
<td>1.02</td>
<td>10</td>
<td>10</td>
</tr>
<tr>
<td>t</td>
<td>0.59</td>
<td>0.76</td>
<td>9</td>
<td>10</td>
</tr>
<tr>
<td>u</td>
<td>0.57</td>
<td>0.57</td>
<td>9</td>
<td>7</td>
</tr>
</tbody>
</table></div>
<p><strong>Attempt 1</strong>: My first attempt used the pandas <code>qcut</code> feature, but unsurprisingly that errored when converting to a pandas DataFrame.</p>
<pre><code>import pandas as pd
from pyspark.sql import SparkSession
#import data
df = spark.sql("select * from my_space.table1")
# Convert to a pandas DataFrame
pandas_df = df.toPandas()
# List of columns to calculate deciles
columns = pandas_df.columns[1:]
# Loop through each column and calculate deciles (1 to 10)
for col in columns:
# Use qcut to assign a decile rank (1 to 10)
pandas_df[col + '_decile'] = pd.qcut(pandas_df[col], 10, labels=False, duplicates='drop') + 1 # Deciles 1-10
# Convert the DataFrame to a Spark DataFrame
spark_df = spark.createDataFrame(pandas_df)
# Select the 'person_id' column along with columns that end with '_decile'
decile_columns = ['person_id'] + [col for col in spark_df.columns if col.endswith('_decile')]
# Create a new DataFrame with only 'person_id' and decile columns
decile_df = spark_df.select(decile_columns)
#some minor stuff here where I saved a new df as df_renamed and renamed some columns
# Write it to a table
df_renamed.write.mode("overwrite").saveAsTable("my_space.my_new_table")
</code></pre>
<p>This is the error when converting to pandas Dataframe. From my understanding, increasing this size may not even help because pandas is not very performant with large data.</p>
<pre><code>SparkException:
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 164 tasks (23.4 GiB) is bigger than local result size limit 23.3 GiB, to address it, set spark.driver.maxResultSize bigger than your dataset result size.
</code></pre>
<p><strong>Attempt 2</strong>: Using Pyspark's <code>ntile</code> function, but the job is still running after 28 hours.</p>
<pre><code>from pyspark.sql import functions as F
from pyspark.sql.window import Window
# import data
df = spark.sql("select * from my_space.table1")
# List of columns to calculate deciles
columns = df.columns[1:]
# define number of deciles
num_deciles = 10
# List to hold the transformations for each column
decile_columns = []
for col in columns:
window_spec = Window.orderBy(col)
# Apply the ntile function to create deciles based on the column's values
decile_columns.append(F.ntile(num_deciles).over(window_spec).alias(f'{col}_decile'))
# Apply the transformations to the DataFrame
df_with_deciles = df.select('*', *decile_columns)
# Select the 'person_id' column along with columns that end with '_decile'
decile_columns2 = ['person_id'] + [col for col in df_with_deciles.columns if col.endswith('_decile')]
# Create a new DataFrame with only 'person_id' and decile columns along with their values
decile_df = df_with_deciles.select(decile_columns2)
#some minor stuff here where I saved a new df as df_renamed and renamed some columns
# Write it to a table
df_renamed.write.mode("overwrite").saveAsTable("my_space.my_new_table")
</code></pre>
<p><strong>Question</strong>: Aside from increasing the cluster size or splitting up the columns for computing deciles and then stitching back together, is there anything else worth trying?</p>
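<p>One idea I have not tried yet is Spark ML's <code>QuantileDiscretizer</code>, which as far as I understand can bucket many columns in a single pass using approximate quantiles instead of a global sort per column. A rough sketch (assuming approximate deciles are acceptable, and noting that the output buckets come out as 0 to 9 doubles rather than 1 to 10):</p>
<pre><code>from pyspark.ml.feature import QuantileDiscretizer

df = spark.sql("select * from my_space.table1")
columns = df.columns[1:]

discretizer = QuantileDiscretizer(
    numBuckets=10,
    inputCols=columns,
    outputCols=[f"{c}_decile" for c in columns],
    relativeError=0.001,
    handleInvalid="keep",
)
decile_df = (
    discretizer.fit(df)
    .transform(df)
    .select(["person_id"] + [f"{c}_decile" for c in columns])
)
</code></pre>
<p>I would appreciate knowing whether this is a reasonable direction or whether there is something better.</p>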
|
<python><pyspark><databricks>
|
2024-12-11 21:47:10
| 1
| 6,494
|
Isolated
|
79,273,032
| 1,739,725
|
Run Python coroutine functions concurrently, but start additional coroutines as soon as particular coroutines have completed
|
<p>I have four total coroutines (<code>async def</code> functions): A, B, C, and D. C can't be called until I know the result of A, and D can't be called until I know the result of A <em>and</em> B. See the attached diagram. I want to run these coroutines concurrently such that the total time spent until I get the results from C and D is minimized.
<a href="https://i.sstatic.net/JpSbiRw2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpSbiRw2.png" alt="task dependency graph" /></a></p>
<p>We should start by running A and B concurrently, but it would be sub-optimal to use <code>a, b = asyncio.gather(get_a(), get_b())</code> because C should be started as soon as A finishes and doesn't need to wait around for B to complete.</p>
<p>Similarly, we could run A and C in sequence, while also running B concurrently (as shown below) but in this case we might be waiting around to start D longer than needed if A and B have completed but C is still running:</p>
<pre><code>async def group_1():
a = await get_a()
c = await get_c(a)
return c
c, b = asyncio.gather(
group_1(),
get_b(),
)
</code></pre>
<p>How can I execute these tasks without any unnecessary waiting?</p>
<p>I imagine I could use callbacks if necessary, but I would prefer to use a higher level construct if possible. Additionally, my actual use case has more tasks than this with more complex dependency chains, and I think callbacks would quickly become difficult to reason about.</p>
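<p>For concreteness, the closest I have come is wrapping A and B in tasks and expressing each dependency as an await on the tasks it needs (sketch only; <code>get_d</code> is a placeholder name for whatever consumes A and B):</p>
<pre><code>import asyncio

async def run_all():
    task_a = asyncio.create_task(get_a())
    task_b = asyncio.create_task(get_b())

    async def run_c():
        # C starts the moment A completes, regardless of B
        return await get_c(await task_a)

    async def run_d():
        # awaiting an already-finished task simply returns its result again
        a, b = await asyncio.gather(task_a, task_b)
        return await get_d(a, b)

    return await asyncio.gather(run_c(), run_d())
</code></pre>
<p>This seems to avoid the unnecessary waiting, but I am not sure it scales to the more complex dependency chains I mentioned, which is why I am asking about higher-level constructs.</p>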
|
<python><concurrency><python-asyncio>
|
2024-12-11 19:58:55
| 3
| 2,186
|
Pwnosaurus
|
79,272,853
| 11,357,695
|
Comparing empty dataframes
|
<p>I have a function, <code>extract_redundant_values</code>, to extract redundant rows from a pandas dataframe. I am testing it by running on <code>in_df</code> to generate <code>out_df</code>. I am then comparing this against my expected output <code>expected_out_df</code>. They seem to have the same index, columns and values, but do not qualify as equal according to <code>pd.DataFrame().equals()</code>:</p>
<pre><code>import numpy as np
import pandas as pd
def extract_redundant_values(df, col):
unique_df = df.drop_duplicates(subset=[col],
keep = False)
redundant_df = df[~df.index.isin(unique_df.index)]
return redundant_df
# =============================================================================
# setup
# =============================================================================
expected_columns = ['Col1', 'Col2', 'Col3']
in_df = pd.DataFrame(data = [[1, 2, 3],
[4, 6, 6],
[7, 8, 9]],
columns = expected_columns)
# =============================================================================
# run
# =============================================================================
out_df = extract_redundant_values(df = in_df,
col = "Col1")
# =============================================================================
# compare
# =============================================================================
expected_out_df = pd.DataFrame(columns = expected_columns)
#same values, index and dataframe
assert out_df.columns.equals(expected_out_df.columns) #fine
assert out_df.index.equals(expected_out_df.index) #fine
assert np.array_equal(expected_out_df.values, out_df.values) #fine
#not the same for some other reason...
assert out_df.equals(expected_out_df) #assertion error
</code></pre>
<p>I have also tried comparing two empty dataframes with the same columns, and these were fine as expected - so I don't see why <code>out_df</code> and <code>expected_out_df</code> are considered different:</p>
<pre><code>expected_columns = ['Col1', 'Col2', 'Col3']
eg_df1 = pd.DataFrame(columns = expected_columns)
eg_df2 = pd.DataFrame(columns = expected_columns)
assert eg_df1.equals(eg_df2) #fine
</code></pre>
<p>Can anyone offer an explanation?</p>
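<p>One difference I suspect matters is the column dtypes (I believe <code>equals</code> also compares dtypes, and a frame built from column names only defaults to <code>object</code> columns, while the filtered frame keeps <code>int64</code>). A quick check along these lines is what I have been looking at (sketch):</p>
<pre><code>print(out_df.dtypes)           # the filtered frame keeps int64 columns
print(expected_out_df.dtypes)  # the hand-built empty frame defaults to object

# sketch: align the dtypes before comparing, or use the testing helper
expected_out_df = pd.DataFrame(columns=expected_columns).astype(out_df.dtypes.to_dict())
pd.testing.assert_frame_equal(out_df, expected_out_df)
</code></pre>
<p>Is that really the reason, or is something else going on?</p>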
<p>Thanks!</p>
|
<python><pandas><dataframe><unit-testing><testing>
|
2024-12-11 18:50:46
| 1
| 756
|
Tim Kirkwood
|
79,272,800
| 13,147,413
|
Rearrange and encode columns in pandas
|
<p>I have data structured like this (working with pandas):</p>
<pre><code>ID|comp_1_name|comp_1_percentage|comp_2_name|comp_2_percentage|
1| name_1 | 13 | name_2 | 33 |
2| name_3 | 15 | name_1 | 46 |
</code></pre>
<p>There are six comp_name/comp_percentage pairs.
Names are not equally distributed across all six "*_name" columns.
I would like to obtain this kind of transformation:</p>
<pre><code>ID|name_1|name_2|name_3|
1| 13 | 33 | 0 |
2| 46 | 0 | 15 |
</code></pre>
<p>I tried transposing (<code>.T</code>) both the entire dataframe and the isolated comp_name/comp_percentage pairs, but to no avail.</p>
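<p>The closest sketch I have is stacking the six pairs into a long frame and pivoting (untested; it assumes the columns really are named <code>comp_1_name</code> through <code>comp_6_percentage</code>):</p>
<pre><code>import pandas as pd

pairs = [
    df[["ID", f"comp_{i}_name", f"comp_{i}_percentage"]]
      .rename(columns={f"comp_{i}_name": "name",
                       f"comp_{i}_percentage": "percentage"})
    for i in range(1, 7)
]
long = pd.concat(pairs, ignore_index=True).dropna(subset=["name"])
wide = (
    long.pivot_table(index="ID", columns="name",
                     values="percentage", fill_value=0)
        .reset_index()
)
</code></pre>
<p>Is there a cleaner way to express this?</p>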
|
<python><pandas>
|
2024-12-11 18:25:16
| 2
| 881
|
Alessandro Togni
|
79,272,692
| 1,303,562
|
Locust on_start() method not executing requests
|
<p>I am using <code>Locust</code> for performance testing, and I have a task that performs an <code>API</code> call in the <code>on_start</code> method. The <code>on_start</code> method prints the "started" message, but the request inside the try block does not execute. The same request works fine when executed outside of <code>Locust</code> (e.g., in a standalone <code>Python</code> script).</p>
<p>Here is my <code>Locust</code> script:</p>
<pre><code>import gevent
from locust import HttpUser, task, between
from locust.env import Environment
from locust.stats import stats_printer, stats_history
class QuickstartUser(HttpUser):
wait_time = between(1, 5)
host = 'https://MY_HOST'
@task
def hello_world(self):
self.client.get("/hello")
self.client.get("/world")
def on_start(self):
print('started')
try:
res = self.client.post(
"'https://MY_HOST'/api/auth/logon",
json={"username": "adminUser1", "password": "12345678"},
headers={'Content-Type': 'application/json'},
verify=False)
print(res.text)
except Exception as ex:
print(ex)
# Setup Environment and Runner
env = Environment(user_classes=[QuickstartUser])
runner = env.create_local_runner()
# Start a WebUI instance
web_ui = env.create_web_ui("127.0.0.1", 8089)
# Execute init event handlers (only really needed if you have registered any)
env.events.init.fire(environment=env, runner=runner, web_ui=web_ui)
# Start a greenlet that periodically outputs the current stats
gevent.spawn(stats_printer(env.stats))
# Start a greenlet that saves current stats to history
gevent.spawn(stats_history, env.runner)
# Start the test
runner.start(user_count=1)
</code></pre>
<p><strong>What I have tried:</strong>
Verified the <code>API</code> request works correctly outside <code>Locust</code> using requests in a standalone script.
Checked the <code>URL</code>, <code>payload</code>, and <code>headers</code> to ensure they are correct.
I removed the <code>on_start</code> method and tried the request from <code>@task</code></p>
<p><strong>Observations:</strong>
The print('started') message is displayed.
No output from the print(<code>res.text</code>) line, indicating the request isn't being sent or executed.
No exceptions are caught in the except block.</p>
<p><strong>Questions:</strong>
Why isn't the request in <code>on_start</code> executing?
How can I ensure the <code>on_start</code> method runs the request properly within <code>Locust</code>?
Any help or insights would be greatly appreciated!</p>
|
<python><locust>
|
2024-12-11 17:42:41
| 0
| 721
|
Dana Yeger
|
79,272,691
| 13,163,640
|
Kafka consumer is missing messages during deployment
|
<p>My consumers inherit from BasicKafkaConsumerV2. During deployments, when the pods are rotating, I am missing a few messages, which is visible from the offsets printed after the manual commit().
Kafka is not supposed to lose messages that have not been committed. What could be the problem here?</p>
<p>run command:</p>
<pre><code> - name: order-consumer
image: KUSTOMIZE_PRIMARY
imagePullPolicy: Always
command:
[
# Invoking wait for pgbouncer script
"/wait-for.sh",
"localhost:6432",
"-s",
"-t",
"30",
"--",
# Starting main process
"ddtrace-run",
"python",
"manage.py",
"run_order-consumer",
]
</code></pre>
<p>Consumer:</p>
<pre><code>class BasicKafkaConsumerV2:
group_id = None # str
consumer_name = None # str
newrelic_application = None
topic_handlers = {} # dict
DB_EXCEPTION_RETRY_TIMEOUT = 5 # seconds
DLQ_TOPIC = None
def __init__(self, latest_offset=False):
"""Inits the Consumer and subscribes to the topics"""
self.consumer = KafkaConsumer(
bootstrap_servers=["broker1", "broker2"],
group_id=self.group_id,
enable_auto_commit=False,
auto_offset_reset="latest",
)
self.topics_list = list(self.topic_handlers.keys())
self.consumer.subscribe(self.topics_list)
self.newrelic_application = newrelic.agent.application()
logger.info(
f"{[self.consumer_name]} subscribed to {self.topics_list} with auto_offset_reset {self.auto_offset_reset}"
)
def message_handler_wrapped(
self,
topic: str,
kafka_msg_value: bytes,
headers: dict,
consumed_message=None,
):
"""Processes the message
Also handles any DB exceptions by retrying the event after a period
"""
with tracer.trace(
settings.DD_KAFKA_RESOURCE_NAME,
service=settings.DD_SERVICE,
resource=self.group_id,
span_type="consumer",
) as span:
try:
json_data = json.loads(kafka_msg_value)
dict_headers = convert_tuple_to_dict(headers)
span.set_tag("topic", topic)
span.set_tag("event", self.get_event_name(json_data))
self.message_handler(topic, json_data, dict_headers)
except (InterfaceError, OperationalError) as e:
"""Sleep for sometime to allow the DB to heal
This will essentially infinitely loop (further processing of events is blocked)
"""
logger.info(f"[{self.consumer_name}] DB Exception: {e}")
span.set_tag("type", "retry")
span.set_exc_info(type(e), e, e.__traceback__)
time.sleep(self.DB_EXCEPTION_RETRY_TIMEOUT)
self.message_handler_wrapped(
topic, kafka_msg_value, headers, consumed_message
)
except Exception as e:
logger.exception(f"[{self.consumer_name}] Exception: {e}")
span.set_tag("type", "error")
span.set_exc_info(type(e), e, e.__traceback__)
sentry_sdk.capture_exception(e)
def message_handler(self, topic: str, data: dict, headers: dict):
"""Handles the message"""
event = self.get_event_name(data)
topic_handler = self.topic_handlers.get(topic)
topic_handler.handle_message(event, data, headers)
def start_consumer(self):
"""Starts consuming messages on the topic"""
logger.info(f"Consumer [{self.consumer_name}] is starting consuming")
for msg in self.consumer:
with LogGuidSetter() as _:
self.message_handler_wrapped(
msg.topic, msg.value, msg.headers, msg
)
self.consumer.commit()
logger.info(
f"[{self.consumer_name}] Consumed message from partition: {msg.partition} offset: {msg.offset} with key: {msg.key}"
)
def get_event_name(self, data):
return data.get("event") or data.get("event_name")
class TopicEventHandler:
topic = None
event_handler_mapping = {} # event name and their fn handlers
def handle_message(self, event, data, headers):
"""Handles the message"""
event_handler = getattr(
self, self.event_handler_mapping.get(event, ""), None
)
if event_handler is None:
logger.info(f"Topic <{self.topic}> unhandled event : {event}")
return
event_handler(data, headers)
</code></pre>
|
<python><django><apache-kafka><apache-zookeeper><kafka-python>
|
2024-12-11 17:42:21
| 2
| 567
|
Dev
|
79,272,593
| 9,002,634
|
How to use DO $$ block with psycopg3?
|
<h3>Context</h3>
<p>I have a table that is using temporal versioning where the "active" row (for a retailer) is where expire_tstamp is null.</p>
<h3>Problem</h3>
<p>I want to execute a single statement that performs an "upsert".</p>
<ol>
<li>If there are no existing active rows for a retailer, insert.</li>
<li>If there are existing active rows for a retailer, expire them, then insert.</li>
</ol>
<p>But I keep getting error:</p>
<pre><code>psycopg.errors.IndeterminateDatatype: could not determine data type of parameter $2
</code></pre>
<h3>What I've tried so far</h3>
<pre class="lang-py prettyprint-override"><code>QUERY = """
DO $$
BEGIN
IF EXISTS (
SELECT 1
FROM retailer_xml
WHERE retailer_id = %(retailer_id)s::integer AND expire_tstamp IS NULL
) THEN
UPDATE retailer_xml
SET expire_tstamp = CURRENT_TIMESTAMP
WHERE retailer_id = %(retailer_id)s::integer AND expire_tstamp IS NULL;
END IF;
INSERT INTO retailer_xml (retailer_id, xml)
VALUES (%(retailer_id)s::integer, %(xml)s::text);
END $$;
"""
def insert_retailer_xml(
aurora_cursor: PGCursor, retailer_id: RetailerId | ClientId, xml: str
) -> None:
params = {"retailer_id": retailer_id, "xml": xml}
aurora_cursor.execute(
query=QUERY,
params=params,
)
</code></pre>
<p>My table</p>
<pre class="lang-sql prettyprint-override"><code>CREATE TABLE public.retailer_xml (
retailer_xml_id bigserial NOT NULL,
retailer_id int4 NOT NULL,
"xml" text NOT NULL,
create_tstamp timestamptz DEFAULT now() NOT NULL,
modify_tstamp timestamptz DEFAULT now() NOT NULL,
expire_tstamp timestamptz NULL,
CONSTRAINT retailer_xml_pkey PRIMARY KEY (retailer_xml_id)
);
CREATE UNIQUE INDEX retailer_xml_retailer_id_idx ON public.retailer_xml USING btree (retailer_id) WHERE (expire_tstamp IS NULL);
</code></pre>
<p>I've tried the following, but I'm getting a unique constraint error:</p>
<pre class="lang-sql prettyprint-override"><code>WITH expired AS (
UPDATE remap_list_xml
SET expire_tstamp = CURRENT_TIMESTAMP
WHERE retailer_id = %s AND expire_tstamp IS NULL
RETURNING remap_list_xml_id
)
INSERT INTO remap_list_xml (retailer_id, xml)
VALUES (%s, %s);
</code></pre>
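<p>The fallback I am considering (sketch only; it assumes issuing two statements in one transaction is acceptable, since as far as I can tell an anonymous <code>DO</code> block cannot receive bound parameters at all) is:</p>
<pre class="lang-py prettyprint-override"><code>def insert_retailer_xml(aurora_cursor, retailer_id, xml) -> None:
    # expire any currently active row, then insert the new one;
    # the surrounding transaction keeps the two steps atomic
    aurora_cursor.execute(
        """
        UPDATE retailer_xml
        SET expire_tstamp = CURRENT_TIMESTAMP
        WHERE retailer_id = %(retailer_id)s AND expire_tstamp IS NULL
        """,
        {"retailer_id": retailer_id},
    )
    aurora_cursor.execute(
        """
        INSERT INTO retailer_xml (retailer_id, xml)
        VALUES (%(retailer_id)s, %(xml)s)
        """,
        {"retailer_id": retailer_id, "xml": xml},
    )
</code></pre>
<p>But I would still like to understand whether the single-statement <code>DO</code> / CTE approaches can be made to work.</p>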
|
<python><sql><postgresql><psycopg3>
|
2024-12-11 17:07:00
| 1
| 318
|
Viet Than
|
79,272,506
| 785,736
|
how to calculate ratio in group by in pandas
|
<p>I have the following pandas dataframe<br/></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>ts</th>
<th>tenant</th>
<th>result</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>t1</td>
<td>pass</td>
</tr>
<tr>
<td>1</td>
<td>t1</td>
<td>pass</td>
</tr>
<tr>
<td>1</td>
<td>t2</td>
<td>pass</td>
</tr>
<tr>
<td>2</td>
<td>t1</td>
<td>fail</td>
</tr>
<tr>
<td>2</td>
<td>t1</td>
<td>fail</td>
</tr>
<tr>
<td>2</td>
<td>t2</td>
<td>fail</td>
</tr>
<tr>
<td>2</td>
<td>t1</td>
<td>pass</td>
</tr>
<tr>
<td>2</td>
<td>t2</td>
<td>pass</td>
</tr>
<tr>
<td>3</td>
<td>t1</td>
<td>pass</td>
</tr>
<tr>
<td>3</td>
<td>t1</td>
<td>pass</td>
</tr>
<tr>
<td>3</td>
<td>t2</td>
<td>pass</td>
</tr>
</tbody>
</table></div>
<p>i want to perform something that will be similar to the following SQL statement</p>
<pre><code>select
ts,
tenant,
  count(case result when 'fail' then 1 end) * 1.0 / count(1) as fail_ratio
from tbl
group by ts, tenant
</code></pre>
<p>that will result with the following dataframe<br/></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>ts</th>
<th>tenant</th>
<th>fail_ratio</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>t1</td>
<td>0.0</td>
</tr>
<tr>
<td>1</td>
<td>t2</td>
<td>0.0</td>
</tr>
<tr>
<td>2</td>
<td>t1</td>
<td>0.666</td>
</tr>
<tr>
<td>2</td>
<td>t2</td>
<td>0.5</td>
</tr>
<tr>
<td>3</td>
<td>t1</td>
<td>0.0</td>
</tr>
<tr>
<td>3</td>
<td>t2</td>
<td>0.0</td>
</tr>
</tbody>
</table></div>
<p>I don't really understand how I can do this. Is it possible to do it directly on the dataframe, or would I require intermediate dataframes? Would I need to do it more programmatically and iterate over the groups?</p>
<p>The end game here is to create timeseries data on the fail ratio per tenant to plot.</p>
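<p>Something along these lines is roughly what I am hoping exists (sketch, treating the fail ratio as the mean of a boolean column):</p>
<pre><code>fail_ratio = (
    df.assign(fail=df["result"].eq("fail"))
      .groupby(["ts", "tenant"], as_index=False)["fail"]
      .mean()
      .rename(columns={"fail": "fail_ratio"})
)
</code></pre>
<p>Is that the idiomatic way, or is there a better pattern for this kind of conditional ratio?</p>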
<p>Thanks,</p>
|
<python><pandas><group-by>
|
2024-12-11 16:37:48
| 3
| 4,580
|
Mr T.
|
79,272,387
| 20,591,261
|
How to use join_where() in Polars for conditional joins with date ranges?
|
<p>I have two Polars DataFrames:</p>
<ol>
<li>One (df) records the dates when items were sold.</li>
<li>Another (df2) contains additional details, including dates and
values for items.</li>
</ol>
<p>I want to join these DataFrames where the <code>ID</code> matches, and the sale date in <code>df</code> falls within a 1-day range of the dates in <code>df2</code>.</p>
<p>Here’s the sample data:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from datetime import datetime
df = pl.DataFrame({
"ID" : [1,2,3],
"Time" : [datetime(1,1,2),datetime(1,1,3),datetime(1,1,3)],
})
df2 = pl.DataFrame({
"ID" : [1,1,1,2,2,3],
"Time" : [
datetime(1,1,2),
datetime(1,1,3),
datetime(1,1,4),
datetime(1,1,4),
datetime(1,1,5),
datetime(1,1,4),
],
"Value": [10,10,10,20,20,30],
})
</code></pre>
<p>I can achieve this using <code>.join()</code> and <code>filter()</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>(
df
.with_columns(
(pl.col("Time") - pl.duration(days=1)).alias("-1"),
(pl.col("Time") + pl.duration(days=1)).alias("+1"),
).join(df2, on="ID", how="left")
.filter(pl.col("Time_right").is_between("-1", "+1"))
)
</code></pre>
<p>This works, but I wanted to simplify it using the <code>join_where()</code> function. Here's my attempt:</p>
<pre class="lang-py prettyprint-override"><code>(
df
.with_columns(
(pl.col("Time") - pl.duration(days=1)).alias("-1"),
(pl.col("Time") + pl.duration(days=1)).alias("+1"),
).join_where(
df2,
pl.col("Time_right").is_between("-1", "+1"),
pl.col("ID") == pl.col("ID_right")
)
)
</code></pre>
<p>However, I encounter this error:<br />
<code>InvalidOperationError: only 1 binary comparison allowed as join condition</code></p>
<p>How can I use <code>join_where()</code> in this case? Is there a way to achieve the desired result with this function, or is it limited in handling multiple conditions?</p>
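<p>The only other thing I can think of trying (sketch; my reading of the error is that each predicate must be a single comparison between two columns, so <code>is_between</code> would need to be spelled out) is:</p>
<pre class="lang-py prettyprint-override"><code>(
    df
    .with_columns(
        (pl.col("Time") - pl.duration(days=1)).alias("-1"),
        (pl.col("Time") + pl.duration(days=1)).alias("+1"),
    )
    .join_where(
        df2,
        pl.col("ID") == pl.col("ID_right"),
        pl.col("Time_right") >= pl.col("-1"),
        pl.col("Time_right") <= pl.col("+1"),
    )
)
</code></pre>
<p>I have not confirmed that this is the intended usage, which is part of what I am asking.</p>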
|
<python><dataframe><python-polars>
|
2024-12-11 16:03:49
| 0
| 1,195
|
Simon
|
79,272,352
| 4,418,481
|
Re-override plotly-resampler reset axis behavior
|
<p>I created a Dash app that displays a figure in it. I have used plotly-resampler since I had to deal with large datasets.</p>
<p>At the documentation of plotly-resampler, they mentioned this:</p>
<blockquote>
<p>The plotly autoscale event (triggered by the autoscale button or a
double-click within the graph), does not reset the axes but autoscales
the current graph-view of plotly-resampler figures. This design choice
was made as it seemed more intuitive for the developers to support
this behavior with double-click than the default axes-reset behavior.
The graph axes can ofcourse be resetted by using the reset_axis button.</p>
</blockquote>
<p>However, I would like double-clicking to reset the axes, as it does in vanilla Plotly.</p>
<p>Are there any ideas on how to do this?</p>
|
<python><plotly><plotly-resampler>
|
2024-12-11 15:49:09
| 0
| 1,859
|
Ben
|