| QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
79,301,627
| 1,473,517
|
How to use Symbolic Regression to approximate Pascal's triangle?
|
<p>I have been trying out symbolic regression and was wondering how to use it to approximate a row of Pascal's triangle, for example. I make the data with:</p>
<pre><code>import math

def print_pascals_triangle_row(n):
    row = []
    for k in range(n + 1):
        coefficient = math.comb(n, k)
        row.append(coefficient)
    return row

n = 20
pascals_triangle_row = print_pascals_triangle_row(n)
</code></pre>
<p>This gives:</p>
<pre><code>[1,
20,
190,
1140,
4845,
15504,
38760,
77520,
125970,
167960,
184756,
167960,
125970,
77520,
38760,
15504,
4845,
1140,
190,
20,
1]
import numpy as np

y = pascals_triangle_row
X = np.array(range(len(y))).reshape(-1, 1)
</code></pre>
<p>Set up the model:</p>
<pre><code>from pysr import PySRRegressor

model = PySRRegressor(
    maxsize=15,
    niterations=5000,  # < Increase me for better results
    binary_operators=["+", "*"],
    unary_operators=[
        "log",
        "exp",
        "inv",
        "square",
        "sqrt",
        "sign",
        # ^ Custom operator (julia syntax)
    ],
    # ^ Define operator for SymPy as well
    elementwise_loss="loss(prediction, target) = (prediction - target)^2",
    # ^ Custom loss function (julia syntax)
)
</code></pre>
<p>And finally fit the model:</p>
<pre><code>model.fit(X, y)
</code></pre>
<p>This doesn't give a model that is anywhere near accurate. My guess is that the fundamental reason is that it can't do <code>x**x</code>, which is needed to <a href="https://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow noreferrer">approximate factorial</a>.</p>
<p>Is there any way to allow symbolic regression to use the binary operator ** or otherwise fit a row of Pascal's triangle?</p>
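<p>(A hedged aside: by Stirling's approximation, ln C(n, k) is roughly n*ln(n) - k*ln(k) - (n-k)*ln(n-k) plus a small correction, so the <em>logarithm</em> of this row is built from exactly the <code>*</code> and <code>log</code> operators already allowed, which suggests fitting <code>log(y)</code> as one option. The snippet below only checks that math with the standard library; nothing in it is PySR-specific, and the function name is mine.)</p>
<pre><code>import math

def stirling_log_comb(n, k):
    """Approximate ln(C(n, k)) via Stirling: ln(m!) ~ m*ln(m) - m + 0.5*ln(2*pi*m)."""
    def log_fact(m):
        return 0.0 if m == 0 else m * math.log(m) - m + 0.5 * math.log(2 * math.pi * m)
    return log_fact(n) - log_fact(k) - log_fact(n - k)

n = 20
for k in range(n + 1):
    exact = math.comb(n, k)
    approx = math.exp(stirling_log_comb(n, k))
    print(k, exact, round(approx))
</code></pre>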
|
<python><math><regression>
|
2024-12-22 19:52:41
| 1
| 21,513
|
Simd
|
79,301,529
| 22,407,544
|
How to alter data entered into django models to fit a criteria?
|
<p>I have a Django models.py file that I use to store information in my database. I store information on businesses and the countries that they operate in. I get the list of countries from their websites. However, they may call each country by a different name or use different formatting, for example 'The Gambia' vs 'Gambia', 'Eswatini' vs 'Swaziland', 'Trinidad and Tobago' vs 'Trinidad & Tobago', 'The United States' vs 'United States of America'. I want the names of these countries to automatically follow a set of rules when they are stored in my database, so that their formatting stays consistent. Additionally, some websites state something like 'We operate in 150 countries'. I want to enter this information into the database but not have it come up when I request a list of countries from the frontend.</p>
<p>Here is my models.py:</p>
<pre><code>class Company(models.Model):  # the newly created database model and below are the fields
    name = models.CharField(max_length=250, blank=True, null=True)  # TextField used for larger strings, CharField, smaller
    slug = models.SlugField(max_length=250, blank=True, db_index=True, unique=True)
    available_merchant_countries = models.TextField(max_length=2500, blank=True, null=True)
    available_merchant_countries_source = models.URLField(max_length=250, blank=True, null=True)
</code></pre>
<p>Countries aren't stored one-at-a-time. I store a whole list of comma-separated countries at a time.</p>
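<p>(For illustration only: a minimal normalization sketch in plain Python. The <code>COUNTRY_ALIASES</code> mapping and the helper name are hypothetical; the idea is to canonicalize the comma-separated list before it is saved, for example from an overridden <code>save()</code> method or a form's <code>clean()</code>.)</p>
<pre><code>COUNTRY_ALIASES = {
    "The Gambia": "Gambia",
    "Swaziland": "Eswatini",
    "Trinidad & Tobago": "Trinidad and Tobago",
    "The United States": "United States of America",
}

def normalize_countries(raw: str) -> str:
    # split the scraped comma-separated string, map known aliases, and re-join
    names = [c.strip() for c in raw.split(",") if c.strip()]
    return ", ".join(COUNTRY_ALIASES.get(n, n) for n in names)
</code></pre>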
|
<python><django>
|
2024-12-22 18:49:03
| 0
| 359
|
tthheemmaannii
|
79,301,465
| 5,837,773
|
Plotly not exporting images in Flask route with Kaleido
|
<p>I have a Flask app where I need to display some images. Besides that, I also want to store them locally with <code>pio.write_image</code> as explained in the <a href="https://plotly.com/python-api-reference/generated/plotly.io.write_image.html" rel="nofollow noreferrer">docs</a>. This works fine when used in regular Python, e.g. when I am testing in helperz.py:</p>
<pre><code>if __name__ == "__main__":
    ic("name=main in strategies_helperz")
    print(plotly.__version__, kaleido.__version__)
    # create data
    data_original = generate_dummy_data(symbol="BTCUSDT", num_records=100)
    # transform to dataframe
    df = klines_to_dataframe(data_original)
    fig = plot_ohlc(df=df, exchange="mexc")
    user = "ZEoJt367"
    src_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
    folder_path = os.path.join(src_path, "screenshots", user)
    if not os.path.exists(folder_path):
        ic(f'{folder_path} not existing, should create one!')
        os.makedirs(folder_path)
    fileName = os.path.join(folder_path, current_time + str(".png"))
    saveScreenshot(fig=fig, fileName=fileName, width=800, height=500)
</code></pre>
<p>..and this is saveScreenshot function:</p>
<pre><code>def saveScreenshot(fig, fileName, width=800, height=500):
    ic(f"def saveScreenshot to ${fileName}")
    try:
        pio.write_image(fig=fig, file=fileName, width=width, height=height, engine="kaleido")
        ic(f"image saved to ${fileName} !")
        return True
    except Exception as e:
        ic(f"Error saving image: {e}")
</code></pre>
<p>...but if I use this within the route in the app.py file, which is one level higher in the file hierarchy, it just hangs:</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/api/klines_for_plotly/<string:exchange>/<string:symbol>/<string:timeframe>', methods=['POST'])
def apiKlinesForPlotly(exchange, symbol, timeframe):
ic(f"def apiKlinesForPlotly - POST - for ${exchange}, symbol: ${symbol}, timeframe: ${timeframe}")
klines_with_names=[]
if exchange == "mexc":
klines_mexc=getKlinesDataMexc(symbol=symbol, interval=timeframe)
for data in klines_mexc:
kline = Mexc_KLine(data)
klines_with_names.append(kline)
elif exchange == "binance":
klines_binance=getKlinesDataBinance(symbol=symbol, interval=timeframe)
for data in klines_binance:
kline = Binance_KLine(data)
klines_with_names.append(kline)
# transforming klines_with_names to pandas df
df=klinesWithNamesToDf(klines_with_names=klines_with_names)
# creating a plotly fig based on df
fig = plot_ohlc(df=df, exchange=exchange)
user="ZEoJt367"
src_path = os.path.dirname(os.path.abspath(__file__))
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
folder_path = os.path.join(src_path, "screenshots", user)
if not os.path.exists(folder_path):
ic(f'{folder_path} not existing, should create one!')
os.makedirs(folder_path)
fileName = os.path.join(folder_path, current_time + str(".png"))
fig = plot_ohlc(df=df, exchange=exchange)
saveScreenshot(fig=fig, fileName=fileName, width=800, height=500)
fig_json = fig.to_json()
return fig_json
</code></pre>
<p>In this scenario saveScreenshot never finishes.</p>
<p>I did install a previous version of Kaleido, as many people with similar issues suggested, but still no luck. Could it be that, for use in a Flask route, the function should be async?</p>
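<p>(A sketch of one workaround, not a confirmed fix for the Kaleido hang: run the image export outside the request-handling thread, in a separate process, using only the standard library. The helper names below are mine and the figure is passed as JSON so the worker process can rebuild it; whether this resolves the hang depends on the Plotly/Kaleido versions in use.)</p>
<pre><code>from concurrent.futures import ProcessPoolExecutor

_executor = ProcessPoolExecutor(max_workers=1)

def _write_image_worker(fig_json, fileName, width, height):
    # runs in a separate process, keeping Kaleido out of the Flask request thread
    import plotly.io as pio
    fig = pio.from_json(fig_json)
    pio.write_image(fig=fig, file=fileName, width=width, height=height, engine="kaleido")
    return True

def save_screenshot_offloaded(fig_json, fileName, width=800, height=500):
    future = _executor.submit(_write_image_worker, fig_json, fileName, width, height)
    return future.result(timeout=60)
</code></pre>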
|
<python><flask><plotly><kaleido>
|
2024-12-22 18:08:10
| 0
| 409
|
Gregor Sotošek
|
79,301,108
| 3,331,037
|
Google's Gemini on local audio files
|
<p>Google <a href="https://cloud.google.com/vertex-ai/generative-ai/docs/samples/generativeaionvertexai-gemini-audio-transcription" rel="nofollow noreferrer">has a page</a> describing how to use one of their Gemini-1.5 models to transcribe audio. They include a sample script (see below). The script grabs the audio file from Google Storage via the <code>Part.from_uri()</code> command.</p>
<p>I would, instead, like to use a local file. Setting the URI to "file:///..." does not work. How can I do that?</p>
<p>Google's (working, cloud-based) code is here:</p>
<pre class="lang-py prettyprint-override"><code>import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig, Part
# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")
model = GenerativeModel("gemini-1.5-flash-002")
prompt = """
Can you transcribe this interview, in the format of timecode, speaker, caption.
Use speaker A, speaker B, etc. to identify speakers.
"""
audio_file_uri = "gs://cloud-samples-data/generative-ai/audio/pixel.mp3"
audio_file = Part.from_uri(audio_file_uri, mime_type="audio/mpeg")
contents = [audio_file, prompt]
response = model.generate_content(contents, generation_config=GenerationConfig(audio_timestamp=True))
print(response.text)
# Example response:
# [00:00:00] Speaker A: Your devices are getting better over time...
# [00:00:16] Speaker B: Welcome to the Made by Google podcast, ...
# [00:01:00] Speaker A: So many features. I am a singer. ...
# [00:01:33] Speaker B: Amazing. DeCarlos, same question to you, ...
</code></pre>
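<p>(A hedged sketch of the local-file variant: recent versions of the <code>vertexai</code> SDK expose <code>Part.from_data</code>, which takes raw bytes instead of a URI. Assuming that method is available in the installed SDK version, the only change to Google's script would be how <code>audio_file</code> is built; the local path below is a placeholder.)</p>
<pre><code>from vertexai.generative_models import Part

# read the local file and pass its bytes directly instead of a gs:// URI
with open("/path/to/local/pixel.mp3", "rb") as f:
    audio_bytes = f.read()

audio_file = Part.from_data(data=audio_bytes, mime_type="audio/mpeg")
contents = [audio_file, prompt]  # `prompt` as defined in the script above
</code></pre>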
|
<python><audio><large-language-model><google-gemini>
|
2024-12-22 13:48:53
| 1
| 498
|
Bill Bradley
|
79,301,100
| 19,556,055
|
Dask distributed.protocol.core - CRITICAL - Failed to deserialize with OutOfData exception
|
<p>I want to use XGBoost with Dask, which requires a client to be passed to the train method. When I try to read the data without defining a client, everything works fine, but when I run the code below I get <code>distributed.protocol.core - CRITICAL - Failed to deserialize</code> with an <code>OutOfData</code> exception.</p>
<pre><code>from distributed import Client
import dask.dataframe as dd
client = Client()
dda = dd.read_parquet(fr"C:\Users\marij\Documents\GitHub\Research_Project\Data\small.parquet", engine="pyarrow").to_dask_array(lengths=True)
</code></pre>
<p>The full error traceback:</p>
<pre><code>2024-12-22 14:27:30,821 - distributed.protocol.core - CRITICAL - Failed to deserialize
Traceback (most recent call last):
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 128, in unpackb
ret = unpacker._unpack()
^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 565, in _unpack
ret.append(self._unpack(EX_CONSTRUCT))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 592, in _unpack
ret[key] = self._unpack(EX_CONSTRUCT)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 565, in _unpack
ret.append(self._unpack(EX_CONSTRUCT))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 546, in _unpack
typ, n, obj = self._read_header()
^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 488, in _read_header
obj = self._read(n)
^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 405, in _read
self._reserve(n, raise_outofdata=raise_outofdata)
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 420, in _reserve
raise OutOfData
msgpack.exceptions.OutOfData
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\protocol\core.py", line 175, in loads
return msgpack.loads(
^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 130, in unpackb
raise ValueError("Unpack failed: incomplete input")
ValueError: Unpack failed: incomplete input
2024-12-22 14:27:30,832 - distributed.core - ERROR - Exception while handling op register-client
Traceback (most recent call last):
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 128, in unpackb
ret = unpacker._unpack()
^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 565, in _unpack
ret.append(self._unpack(EX_CONSTRUCT))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 592, in _unpack
ret[key] = self._unpack(EX_CONSTRUCT)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 565, in _unpack
ret.append(self._unpack(EX_CONSTRUCT))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 546, in _unpack
typ, n, obj = self._read_header()
^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 488, in _read_header
obj = self._read(n)
^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 405, in _read
self._reserve(n, raise_outofdata=raise_outofdata)
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 420, in _reserve
raise OutOfData
msgpack.exceptions.OutOfData
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\core.py", line 831, in _handle_comm
result = await result
^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\scheduler.py", line 5902, in add_client
await self.handle_stream(comm=comm, extra={"client": client})
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\core.py", line 886, in handle_stream
msgs = await comm.read()
^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\comm\tcp.py", line 247, in read
msg = await from_frames(
^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\comm\utils.py", line 78, in from_frames
res = _from_frames()
^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\comm\utils.py", line 61, in _from_frames
return protocol.loads(
^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\protocol\core.py", line 175, in loads
return msgpack.loads(
^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 130, in unpackb
raise ValueError("Unpack failed: incomplete input")
ValueError: Unpack failed: incomplete input
Task exception was never retrieved
future: <Task finished name='Task-321' coro=<Server._handle_comm() done, defined at c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\core.py:737> exception=ValueError('Unpack failed: incomplete input')>
Traceback (most recent call last):
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 128, in unpackb
ret = unpacker._unpack()
^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 565, in _unpack
ret.append(self._unpack(EX_CONSTRUCT))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 592, in _unpack
ret[key] = self._unpack(EX_CONSTRUCT)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 565, in _unpack
ret.append(self._unpack(EX_CONSTRUCT))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 546, in _unpack
typ, n, obj = self._read_header()
^^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 488, in _read_header
obj = self._read(n)
^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 405, in _read
self._reserve(n, raise_outofdata=raise_outofdata)
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 420, in _reserve
raise OutOfData
msgpack.exceptions.OutOfData
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\core.py", line 831, in _handle_comm
result = await result
^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\scheduler.py", line 5902, in add_client
await self.handle_stream(comm=comm, extra={"client": client})
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\core.py", line 886, in handle_stream
msgs = await comm.read()
^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\comm\tcp.py", line 247, in read
msg = await from_frames(
^^^^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\comm\utils.py", line 78, in from_frames
res = _from_frames()
^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\comm\utils.py", line 61, in _from_frames
return protocol.loads(
^^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\distributed\protocol\core.py", line 175, in loads
return msgpack.loads(
^^^^^^^^^^^^^^
File "c:\Users\marij\miniconda3\envs\research_project_dask\Lib\site-packages\msgpack\fallback.py", line 130, in unpackb
raise ValueError("Unpack failed: incomplete input")
ValueError: Unpack failed: incomplete input
</code></pre>
<p>Some package versions (let me know if more is required):</p>
<pre><code>dask 2024.8.2
dask-core 2024.8.2
dask-expr 1.1.13
dask-glm 0.3.2
dask-ml 2024.4.4
dask-xgboost 0.1.11
distributed 2024.8.2
pyarrow 18.1.0
pyarrow-core 18.1.0
</code></pre>
<p>What might be the reason for this?</p>
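<p>(Not a confirmed diagnosis for this traceback, but worth ruling out: on Windows, Dask starts local workers with the <code>spawn</code> method, so the Dask documentation recommends creating the <code>Client</code> inside an <code>if __name__ == "__main__":</code> guard; without it, the script gets re-imported by the child processes and the client/scheduler handshake can fail with confusing deserialization errors. Mismatched <code>dask</code>/<code>distributed</code>/<code>msgpack</code> versions in the environment are another common culprit. A minimal guarded layout:)</p>
<pre><code>from distributed import Client
import dask.dataframe as dd

if __name__ == "__main__":
    client = Client()
    dda = (
        dd.read_parquet(r"C:\Users\marij\Documents\GitHub\Research_Project\Data\small.parquet",
                        engine="pyarrow")
        .to_dask_array(lengths=True)
    )
</code></pre>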
|
<python><dask><dask-distributed>
|
2024-12-22 13:43:37
| 0
| 338
|
MKJ
|
79,301,038
| 2,414,934
|
solve Sympy functions in Scipy (optimize.root)
|
<p>I'm trying to find the roots with the root function in Scipy for functions/variables created in Sympy.</p>
<p>sympy:</p>
<pre><code>AllEquations = [x1 - 5, y1 - 5, z1 - 5, ((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**0.5 - 150, y2 - 100, z2, ((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**0.5 - 150]
AllVariables = [x1, y1, z1, x2, y2, z2]
</code></pre>
<p>Below works in Scipy, but is done manually:</p>
<pre class="lang-py prettyprint-override"><code>def equation(p):
x1, y1, z1, x2, y2, z2 = p
return x1 - 5, y1 - 5, z1 - 5, ((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**0.5 - 150, y2 - 100, z2, ((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**0.5 - 150
initial_guess = [5,5,5,100,100,0]
from scipy.optimize import root
sol = root(equation, initial_guess, method = 'lm")
>>>sol.x
array([ 5.00000000e+00, 5.00000000e+00, 5.00000000e+00, 1.20974135e+02,
1.00000000e+02, -2.00332805e-25])
</code></pre>
<p>The question now is: how can I automate this? The <code>equation</code> function above is static, but it should be built dynamically, as the number of parameters and functions will change continuously.</p>
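<p>(A sketch of one way to automate this with <code>sympy.lambdify</code>, which compiles each SymPy expression into a plain numeric function. The variable and equation lists are the ones from the question, rebuilt here so the snippet is self-contained.)</p>
<pre><code>import sympy as sp
from scipy.optimize import root

x1, y1, z1, x2, y2, z2 = sp.symbols('x1 y1 z1 x2 y2 z2')
AllVariables = [x1, y1, z1, x2, y2, z2]
AllEquations = [x1 - 5, y1 - 5, z1 - 5,
                ((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**0.5 - 150,
                y2 - 100, z2,
                ((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**0.5 - 150]

# compile every SymPy expression into a numeric function of all variables
funcs = [sp.lambdify(AllVariables, eq, 'numpy') for eq in AllEquations]

def equation(p):
    return [f(*p) for f in funcs]

initial_guess = [5, 5, 5, 100, 100, 0]
sol = root(equation, initial_guess, method='lm')
print(sol.x)
</code></pre>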
|
<python><scipy><sympy><scipy-optimize>
|
2024-12-22 13:07:45
| 2
| 673
|
Achaibou Karim
|
79,300,639
| 7,233,155
|
Configure AirspeedVelocity for Python package with PyO3 and Maturin
|
<p>I'm trying to set up Airspeed Velocity to use with my Python project. In other setups, such as GitHub workflows, the only build commands needed are:</p>
<pre><code> "python -m pip install --upgrade pip",
"pip install .[dev] -v"
</code></pre>
<p>But this won't work with ASV. I have rust-PyO3 extensions and build using <code>maturin</code> (configured in pyproject.toml). I receive the traceback:</p>
<pre><code>· Creating environments
· Discovering benchmarks
·· Uninstalling from virtualenv-py3.13
·· Building c03e16a1 <main> for virtualenv-py3.13...........
·· Installing c03e16a1 <main> into virtualenv-py3.13
·· Failed to build the project and import the benchmark suite.
</code></pre>
<p>This is with the following included in <code>asv.conf.json</code>:</p>
<pre><code> // To build the package using pyproject.toml (PEP518), uncomment the following lines
"build_command": [
"python -m pip install --upgrade pip",
"pip install .[dev] -v"
],
</code></pre>
|
<python><benchmarking><pyo3>
|
2024-12-22 07:39:30
| 0
| 4,801
|
Attack68
|
79,300,495
| 759,880
|
Making FastAPI ML server faster
|
<p>I have a FastAPI server where, upon an HTTP <code>GET</code> (<code>/get_documents/</code>), there are 2 bottlenecks to returning a response quickly: there is a call to another server via a client ("get vectors"), then there is a call to an LLM (in process memory) to generate text. Both are very slow. I'm thinking that the "get vectors" call to the other server could be put behind an <code>await</code>, so that the CPU is freed while waiting for the external call to come back. However, I don't think the call to the LLM in process would benefit from being behind an <code>await</code>, since it requires the CPU to run to completion to answer the query. Further, I am not sure whether I should make <code>/get_documents/</code> async for FastAPI. I keep reading about this, but get lost since there is talk of "threads" while Python has the GIL... What would be the best strategy for the external call (via the client) and the call to the LLM, as well as for the <code>/get_documents/</code> route that runs in FastAPI? I'm hoping for a very simple solution, if possible.</p>
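<p>(A minimal sketch of the usual pattern, with hypothetical helper names and URL: make the endpoint <code>async</code>, <code>await</code> the network call so the event loop stays free, and push the blocking in-process LLM call onto a worker thread with <code>asyncio.to_thread</code>. The thread only keeps the event loop responsive; because of the GIL it adds real parallelism only if the LLM backend releases the GIL, as most native inference libraries do.)</p>
<pre><code>import asyncio
import httpx
from fastapi import FastAPI

app = FastAPI()

async def get_vectors(client: httpx.AsyncClient, query: str):
    # hypothetical call to the external vector server
    resp = await client.post("http://vector-server/search", json={"q": query})
    return resp.json()

def run_llm(vectors):
    # hypothetical blocking, CPU/GPU-bound LLM call held in process memory
    return f"generated answer from {len(vectors)} vectors"

@app.get("/get_documents/")
async def get_documents(query: str):
    async with httpx.AsyncClient() as client:
        vectors = await get_vectors(client, query)    # I/O-bound: await frees the event loop
    text = await asyncio.to_thread(run_llm, vectors)  # blocking call moved off the event loop
    return {"text": text}
</code></pre>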
|
<python><async-await><concurrency><fastapi>
|
2024-12-22 05:07:58
| 0
| 4,483
|
ToBeOrNotToBe
|
79,300,354
| 2,403,945
|
How to insert a unicode text to PDF using PyMuPDF?
|
<p>I'm trying to use the PyMuPDF library to insert a Unicode text into a PDF file. I have the following code based on the documentation example:</p>
<pre class="lang-py prettyprint-override"><code>import pymupdf
doc = pymupdf.open()
page = doc.new_page()
p = pymupdf.Point(50, 72)
# String in Sinhala language
text = (
"ශ්රී දළදා මාලිගාව යනු බුදුරජාණන් වහන්සේගේ වම් දන්තධාතූන් වහන්සේ වර්තමානයේ තැන්පත් කර ඇති මාළිගාවයි."
)
font = pymupdf.Font(fontfile="ISKPOTAB.TTF") # Font file of the default Windows Sinhala font
page.insert_font(fontbuffer=font.buffer) # using font buffer since using name "Iskoola Pota Bold" produce an error
rc = page.insert_text(p, text, fontfile=font.buffer, fontsize=11, rotate=0)
print("%i lines printed on page %i." % (rc, page.number))
doc.save("text.pdf")
</code></pre>
<p>This code runs without any errors. However, the PDF file it produces only contains dots (".").
<a href="https://i.sstatic.net/Z4N0o2Qm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4N0o2Qm.png" alt="enter image description here" /></a></p>
<p>Am I missing anything here, or is it just that PyMuPDF does not support Unicode insertion?</p>
|
<python><pdf><unicode><pymupdf>
|
2024-12-22 01:46:25
| 2
| 1,459
|
paarandika
|
79,300,265
| 7,729,563
|
Is it possible to type annotate Python function parameter used as TypedDict key to make mypy happy?
|
<p>While working through code challenges I am trying to use type annotations for all function parameters/return types. I use mypy in strict mode with the goal of no errors.</p>
<p>I've spent some time on this one and can't figure it out - example of problem:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal, NotRequired, TypedDict
class Movie(TypedDict):
Title: str
Runtime: str
Awards: str
class MovieData(TypedDict):
Awards: NotRequired[str]
Runtime: NotRequired[str]
def get_movie_field(movies: list[Movie], field: Literal['Awards', 'Runtime']) -> dict[str, MovieData]:
return {movie['Title']: {field: movie[field]} for movie in movies}
</code></pre>
<pre class="lang-none prettyprint-override"><code># Check with mypy 1.14.0:
PS> mypy --strict .\program.py
program.py:13: error: Expected TypedDict key to be string literal [misc]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>Is there some way to make this work? I realize I can append <code># type: ignore</code> to the line but I'm curious if there's another way.</p>
<p>I've tried a bunch of different permutations such as ensuring field matches one of the literal values before the dict comprehension but nothing seems to help.</p>
<hr>
<p>I like <a href="https://stackoverflow.com/a/79301414/">@chepner's solution</a>. Unfortunately, I couldn't figure out how to make it work with mypy in strict mode - taking his example:</p>
<pre><code>def get_movie_field(movies, field):
    return {movie['Title']: {field: movie[field]} for movie in movies}
</code></pre>
<p>I get:</p>
<pre class="lang-none prettyprint-override"><code>PS> mypy --strict program.py
program.py:18: error: Function is missing a type annotation [no-untyped-def]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>As the last <code>get_movie_field</code> isn't type annotated.</p>
<p>From reviewing <a href="https://docs.python.org/3/library/typing.html#overload" rel="nofollow noreferrer">the mypy docs</a>, I updated the last <code>get_movie_field</code> function as follows - but it still doesn't fix the problem:</p>
<pre><code># @chepner's full solution updated with type annotations for
# last get_movie_field:
from typing import Literal, NotRequired, TypedDict, overload

class Movie(TypedDict):
    Title: str
    Runtime: str
    Awards: str

class MovieData(TypedDict):
    Awards: NotRequired[str]
    Runtime: NotRequired[str]

@overload
def get_movie_field(movies: list[Movie], field: Literal['Awards']) -> dict[str, MovieData]:
    ...

@overload
def get_movie_field(movies: list[Movie], field: Literal['Runtime']) -> dict[str, MovieData]:
    ...

def get_movie_field(movies: list[Movie], field: Literal['Awards'] | Literal['Runtime']
                    ) -> dict[str, MovieData]:
    return {movie['Title']: {field: movie[field]} for movie in movies}
</code></pre>
<pre class="lang-none prettyprint-override"><code>PS> mypy --strict program.py
program.py:19: error: Expected TypedDict key to be string literal [misc]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>However, this inspired me to find another approach. I'm not saying it's great but it does work in the sense that mypy doesn't complain:</p>
<pre><code>from typing import Literal, NotRequired, TypedDict

class Movie(TypedDict):
    Title: str
    Runtime: str
    Awards: str

class MovieData(TypedDict):
    Awards: NotRequired[str]
    Runtime: NotRequired[str]

def create_MovieData(key: str, value: str) -> MovieData:
    """Creates a new TypedDict using a given key and value."""
    if key == 'Awards':
        return {'Awards': value}
    elif key == 'Runtime':
        return {'Runtime': value}
    raise ValueError(f"Invalid key: {key}")

def get_movie_field(movies: list[Movie], field: Literal['Awards', 'Runtime']) -> dict[str, MovieData]:
    return {movie['Title']: create_MovieData(field, movie[field]) for movie in movies}
</code></pre>
<pre class="lang-none prettyprint-override"><code>PS> mypy --strict program.py
Success: no issues found in 1 source file
</code></pre>
|
<python><python-typing><mypy>
|
2024-12-21 23:34:41
| 2
| 529
|
James S.
|
79,300,021
| 7,349,864
|
Sock Bind error in Apache Functions Python in local
|
<p>I'm testing Apache Functions and I'm trying to test my function locally</p>
<p>For this, I have started the local server with</p>
<pre><code>fn start --log-level DEBUG --port 8080
</code></pre>
<p>and I'm trying to create, deploy and invoke a test function with</p>
<pre><code>fn use context default
fn update context registry mylocalregistry
fn -v init --runtime python fn-local
cd fn-local
fn -v deploy --create-app --app app-local --local
fn invoke app-local fn-local
</code></pre>
<p>But the invocation (last line) fails with <code>Error invoking function. status: 502 message: Container failed to initialize, please ensure you are using the latest fdk and check the logs</code></p>
<p>In the server logs I see that the problem is a python socket creation</p>
<pre><code>...
time="2024-12-21T19:48:40Z" level=debug msg=" File \"/python/fdk/__init__.py\", line 62, in start\n" app_id=01JFN9NJFZNG8G00GZJ0000001 container_id=01JFNC7J7ZNG8G00GZJ0000003 fn_id=01JFNC6GG1NG8G00GZJ0000001 image="mylocalregistry/fn-local:0.0.2" tag=stderr
time="2024-12-21T19:48:40Z" level=debug msg=" sock.bind(phony_socket_path)\n" app_id=01JFN9NJFZNG8G00GZJ0000001 container_id=01JFNC7J7ZNG8G00GZJ0000003 fn_id=01JFNC6GG1NG8G00GZJ0000001 image="mylocalregistry/fn-local:0.0.2" tag=stderr
time="2024-12-21T19:48:40Z" level=debug msg="PermissionError: [Errno 1] Operation not permitted\n" app_id=01JFN9NJFZNG8G00GZJ0000001 container_id=01JFNC7J7ZNG8G00GZJ0000003 fn_id=01JFNC6GG1NG8G00GZJ0000001 image="mylocalregistry/fn-local:0.0.2" tag=stderr
time="2024-12-21T19:48:40Z" level=info msg="hot function terminated" app_id=01JFN9NJFZNG8G00GZJ0000001 container_id=01JFNC7J7ZNG8G00GZJ0000003 cpus= error="container exit code 1" fn_id=01JFNC6GG1NG8G00GZJ0000001 idle_timeout=30 image="mylocalregistry/fn-local:0.0.2" memory=256
...
</code></pre>
<p>I have already tried running the server start as root with <code>sudo fn start</code>, with no change.</p>
<p>I'm using macOS Sequoia with Rancher.</p>
|
<python><rancher><rancher-desktop><fn>
|
2024-12-21 19:57:22
| 0
| 2,208
|
Sourcerer
|
79,299,996
| 13,975,447
|
bulk_update cloudflare KV attribute error python
|
<p>I am using python to bulk_update the KV using this
<a href="https://developers.cloudflare.com/api/python/resources/kv/subresources/namespaces/methods/bulk_update/" rel="nofollow noreferrer">https://developers.cloudflare.com/api/python/resources/kv/subresources/namespaces/methods/bulk_update/</a></p>
<p>When using code identical to the example, I get an error:</p>
<pre><code> response = client.kv.namespaces.bulk_update(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NamespacesResource' object has no attribute 'bulk_update'
</code></pre>
<p>my versions are:</p>
<pre><code>python 3.12.6
cloudflare == 3.1.1
pydantic == 2.9.*
httpx == 0.27.*
</code></pre>
<p>Any idea how to fix this? The API token works when using the D1 DB, so it is not a token issue.</p>
|
<python><cloudflare><cloudflare-kv>
|
2024-12-21 19:33:29
| 1
| 3,426
|
Abdulaziz
|
79,299,983
| 19,356,117
|
DistNetworkError when using multiprocessing_context parameter in pytorch dataloader
|
<p>For some special reasons I want to use the <code>spawn</code> method to create workers in the <code>DataLoader</code> of PyTorch; here is a demo:</p>
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.data import TensorDataset
import lightning

fabric = lightning.Fabric(devices=[0, 2], num_nodes=1, strategy='ddp')
fabric.launch()

class LinearModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 2)

    def forward(self, x):
        return self.linear(x)

if __name__ == '__main__':
    x = torch.randn(100, 10)
    y = torch.rand(100, 2)
    dataset = TensorDataset(x, y)
    # crashed because of multiprocessing_context='spawn'
    train_loader = fabric.setup_dataloaders(DataLoader(dataset, batch_size=10, shuffle=True,
                                                       num_workers=1, multiprocessing_context='spawn'))
    model = LinearModel()
    crit = nn.MSELoss()
    model, optimizer = fabric.setup(model, optim.Adam(model.parameters(), lr=0.01))
    for epoch in range(0, 10):
        print(f'Epoch {epoch}')
        for xs, ys in train_loader:
            output = model(xs)
            loss = crit(output, ys)
            fabric.backward(loss)
            optimizer.step()
</code></pre>
<p>But it crashed with this error:</p>
<pre><code># https://pastebin.com/BqA9mjiE
Epoch 0
Epoch 0
……
torch.distributed.DistNetworkError: The server socket has failed to listen on any local network address.
The server socket has failed to bind to [::]:55733 (errno: 98 - Address already in use).
The server socket has failed to bind to 0.0.0.0:55733 (errno: 98 - Address already in use).
</code></pre>
<p>Port 55733 is already being listened on by the training processes started earlier, so it crashes.</p>
<p>But I want to know: why does the port get bound repeatedly when <code>multiprocessing_context</code> is <code>spawn</code>?</p>
<p>My version of Pytorch is <code>2.2.2</code> and fabric's version is <code>2.4.0</code>.</p>
<p>Hope for your reply.</p>
|
<python><pytorch><python-multiprocessing><pytorch-lightning>
|
2024-12-21 19:27:59
| 0
| 1,115
|
forestbat
|
79,299,956
| 6,000,382
|
How to actually use cv2.estimateAffine3D to align 3d points in python?
|
<p>I want to compute a similarity alignment that overlays two sets of points in correspondence (rotation, scaling, translation). OpenCV has a lovely function for doing this, estimateAffine3D(). Despite a couple of other Stack Overflow questions about it, no one has presented an actual example of how to use this function to do what it is advertised to do (even the function documentation is opaque). I'm asking and answering my own question with a code example, just to get this documented and save you some time.</p>
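<p>(For reference, a minimal self-contained call pattern: OpenCV's Python binding returns <code>retval, affine, inliers</code>, where <code>affine</code> is the 3x4 matrix <code>[A | t]</code> estimated with RANSAC. The synthetic point cloud below is only there to make the snippet runnable.)</p>
<pre><code>import numpy as np
import cv2

# synthetic correspondence: rotate about z, scale and translate a random cloud
rng = np.random.default_rng(0)
src = rng.random((30, 3))
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
dst = 1.5 * src @ R.T + np.array([2.0, -1.0, 0.5])

retval, affine, inliers = cv2.estimateAffine3D(src.astype(np.float64),
                                               dst.astype(np.float64))
print(retval)           # 1 if a model was found
print(affine)           # 3x4 matrix [A | t]
print(inliers.ravel())  # per-point inlier flags
</code></pre>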
|
<python><opencv><geometry>
|
2024-12-21 19:05:43
| 1
| 974
|
welch
|
79,299,748
| 7,396,306
|
Count number of characters in each row and drop if all are below a certain number
|
<p>I have a dataframe with many columns, all of which contain text data mixed with <code>NaNs</code>.</p>
<p>I want to count the number of characters in each cell in each column and then drop any rows where <strong>all</strong> the columns have less than 5 characters (if any cells have more than 5 characters, then the row is not dropped).</p>
<p>I was considering making a new column with <code>str.len</code> for each column and then filter out rows using that, but it seems very cumbersome.</p>
<p>Example:</p>
<pre><code>>>> df
column_1 column_2 column_3
0 werhi dsfhjk dh ---> not filtered because some columns have more than 5 characters
1 sgds fuo g ---> filtered
2 wqyuio dsklh fhkjfj
3 fhi d fgho ---> filtered
4 sadfhkj sdjfkhs yyisdk
>>> df_filtered
column_1 column_2 column_3
0 werhio dsfhjk dh
2 wqyuio dsfjklh fhkjfj
4 sadfhkj sdjfkhs yyisdk
</code></pre>
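<p>(A minimal sketch of the usual vectorized approach: compute per-cell string lengths column-wise, then keep rows where <em>any</em> cell reaches the threshold. <code>NaN</code> lengths compare as <code>False</code>, so empty cells never keep a row on their own; switch <code>>= 5</code> to <code>> 5</code> if "more than 5" is meant literally.)</p>
<pre><code>lengths = df.apply(lambda col: col.str.len())   # NaN stays NaN
keep = (lengths >= 5).any(axis=1)               # row survives if any cell has 5+ characters
df_filtered = df[keep]
</code></pre>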
|
<python><pandas><string><dataframe>
|
2024-12-21 16:40:36
| 2
| 859
|
DrakeMurdoch
|
79,299,698
| 1,946,052
|
Python Gekko ignores variable types
|
<p>With this code</p>
<pre><code>from gekko import GEKKO
model = GEKKO(remote=False)
x = model.Var(name='x', lb=0, ub=1, integer=True)
y = model.Var(name='y', lb=0, ub=10, integer=True)
z = model.Var(name='z')
model.Equation(x + y + z >= 10)
model.Equation(y - 2 * z <= 5)
model.Obj(x + 2 * y + 3 * z)
model.options.SOLVER=3
model.solve(disp=False)
print(f"{x.name:>5s} = {x.value[0]:8.4f}")
print(f"{y.name:>5s} = {y.value[0]:8.4f}")
print(f"{z.name:>5s} = {z.value[0]:8.4f}")
print(f"{'obj':>5s} = {model.options.OBJFCNVAL:8.4f}")
</code></pre>
<p>I get the result:</p>
<pre><code>int_x = 1.0000
int_y = 7.6667
z = 1.3333
obj = 20.3333
</code></pre>
<p>Why is <code>integer=True</code> ignored? I expect to get</p>
<pre><code>1
7
2
21
</code></pre>
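<p>(One hedged observation: in Gekko, <code>integer=True</code> is only honored by the APOPT solver (<code>SOLVER=1</code>); <code>SOLVER=3</code> selects IPOPT, which is a continuous NLP solver and silently relaxes the integer requirement. A minimal change to test:)</p>
<pre><code>model.options.SOLVER = 1  # APOPT: the solver that supports integer (MINLP) variables
model.solve(disp=False)
</code></pre>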
|
<python><optimization><gekko><variable-types>
|
2024-12-21 16:00:52
| 0
| 2,283
|
Michael Hecht
|
79,299,600
| 3,070,181
|
How to show a dialog when using a spinbox in tkinter
|
<p>I have a spinbox in my tkinter application and I want to show a warning in certain circumstances</p>
<pre><code>import tkinter as tk
from tkinter import ttk, messagebox

widgets = None
root = None

def main() -> None:
    global widgets
    global root
    root = tk.Tk()
    root.title('Spinbox')
    root.geometry('400x400')

    widgets = tk.IntVar()
    spin_box = ttk.Spinbox(
        root,
        from_=1,
        to=40,
        textvariable=widgets,
        command=_widgets_change,
    )
    spin_box.grid(row=0, column=0)
    root.mainloop()

def _widgets_change() -> None:
    if widgets.get() > 2:
        root.update_idletasks()
        result = messagebox.showwarning(
            '',
            'Too big',
            parent=root
        )
        print(result)

if __name__ == '__main__':
    main()
</code></pre>
<p>This works if I increase the value in the entry, but if I use the up-down button I get into a loop with the error</p>
<blockquote>
<p>_tkinter.TclError: can't invoke "grab" command: application has been destroyed</p>
</blockquote>
<p>How can I get around this?</p>
<p>(Linux - Manjaro)</p>
|
<python><tkinter><dialog><tcl><spinner>
|
2024-12-21 14:54:27
| 0
| 3,841
|
Psionman
|
79,299,542
| 1,788,656
|
Geojsonio does not render jsonfile
|
<p>All,</p>
<p>The geojsonio package is not rendering simple JSON files on geojson.io as it should. I get an empty map, as shown below:
<p><a href="https://i.sstatic.net/xVCCGpei.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xVCCGpei.png" alt="enter image description here" /></a>
Here is the Python code I use to render the geojson file.</p>
<pre><code>import geopandas as gpd
states = gpd.read_file('us-states.json') # available at https://github.com/PublicaMundi/MappingAPI/tree/master/data/geojson
print(states.head())
states_ = states.to_json()
print(states_)
import geojsonio
geojsonio.display(states_)
</code></pre>
<p>However, I can render it by opening the JSON file via the Open tab at <a href="https://geojson.io/#map=2/0/20" rel="nofollow noreferrer">https://geojson.io/#map=2/0/20</a>, as shown below:
<a href="https://i.sstatic.net/FkO25mVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FkO25mVo.png" alt="enter image description here" /></a></p>
<p>Any ideas?
Thanks</p>
|
<python><python-3.x><geojson><geopandas>
|
2024-12-21 14:15:32
| 0
| 725
|
Kernel
|
79,299,276
| 1,788,656
|
geopandas is missing states.geojson file
|
<p>All,
I got the following error when trying to import the states.geojson file as described on this page <a href="https://www.twilio.com/en-us/blog/geospatial-analysis-python-geojson-geopandas-html" rel="nofollow noreferrer">https://www.twilio.com/en-us/blog/geospatial-analysis-python-geojson-geopandas-html</a>. I think that this file is among the files pre-installed with geopandas.</p>
<p>I am using geopandas version 0.14.4</p>
<pre><code>import geopandas as gpd
states = gpd.read_file('states.geojson')
</code></pre>
<p>Here is the error</p>
<pre><code>Traceback (most recent call last):
File fiona/ogrext.pyx:130 in fiona.ogrext.gdal_open_vector
File fiona/ogrext.pyx:134 in fiona.ogrext.gdal_open_vector
File fiona/_err.pyx:375 in fiona._err.StackChecker.exc_wrap_pointer
CPLE_OpenFailedError: states.geojson: No such file or directory
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
Cell In[1], line 2
states = gpd.read_file('states.geojson') # built in-file
File ~/anaconda3/lib/python3.11/site-packages/geopandas/io/file.py:289 in _read_file
return _read_file_fiona(
File ~/anaconda3/lib/python3.11/site-packages/geopandas/io/file.py:315 in _read_file_fiona
with reader(path_or_bytes, **kwargs) as features:
File ~/anaconda3/lib/python3.11/site-packages/fiona/env.py:457 in wrapper
return f(*args, **kwds)
File ~/anaconda3/lib/python3.11/site-packages/fiona/__init__.py:342 in open
colxn = Collection(
File ~/anaconda3/lib/python3.11/site-packages/fiona/collection.py:226 in __init__
self.session.start(self, **kwargs)
File fiona/ogrext.pyx:876 in fiona.ogrext.Session.start
File fiona/ogrext.pyx:136 in fiona.ogrext.gdal_open_vector
DriverError: Failed to open dataset (flags=68): states.geojson
</code></pre>
<p>Thanks</p>
|
<python><python-3.x><geopandas>
|
2024-12-21 10:54:38
| 1
| 725
|
Kernel
|
79,299,194
| 13,392,257
|
FeatureStore: 'NoneType' object has no attribute 'utctimetuple'
|
<p>I am using the Feast library.</p>
<p>I am creating a FeatureStore object in my script:</p>
<pre><code>def get_feature_store(config: Dict[str, str]):
    spark_conf = read_spark_conf(config["spark_config_master_path"])
    online_host, online_port = config["online_store_connection"].split(":")
    print(spark_conf)
    repo_config = RepoConfig(
        project=config["project_name"],
        # spark.engine
        provider="local",
        batch_engine={
            "type": "pim_spark.engine",
            "partitions": 1000
        },
        process_dt=config["process_dt"],
        distinct=(config["distinct"].lower() == "true"),
        update_all=(config["update_all"].lower() == "true"),
        run_validate_flag=(config["run_validate_flag"].lower() == "true"),
        last_part_name=config["last_part_name"],
        registry=RegistryConfig(
            registry_type="sql",
            path=f"postgresql://{config['registry_user']}:{config['registry_password']}@{config['registry_connection']}",
            cache_ttl_seconds=3,
        ),
        offline_store=SparkOfflineStoreConfig(
            type="spark",
            spark_conf=spark_conf,
        ),
        online_store=PostgreSQLOnlineStoreConfig(
            type="postgres",
            host=online_host,
            port=online_port,
            database=config["online_store_database"],
            db_schema=config["online_store_database_schema"],
            user=config["online_store_user"],
            password=config["online_store_password"],
            sslmode=None,
            sslkey_path=None,
            sslcert_path=None,
            sslrootcert_path=None,
        ),
        entity_key_serialization_version=2,  # value of 2 uses a newer serialization scheme, supported as of Feast 0.23.
    )
    return FeatureStore(config=repo_config)
</code></pre>
<p>I have an error:</p>
<pre><code>File "fs_runner.py", line 160, in run_demo
store = get_feature_store(config)
File "fs_runner.py", line 87, in get_feature_store
return FeatureStore(config=repo_config)
File "/srv/data10/hadoop/yarn/nm/usercache/tuz_dapp_p0_lab_ppml/appcache/application_1733519374776_597720/container_e82_1733519374776_597720_01_000001/feast_dapp_venv.zip/feast-venv/lib/python3.8/site-packages/feast/usage.py", line 362, in wrapper
raise exc.with_traceback(traceback)
File "/srv/data10/hadoop/yarn/nm/usercache/tuz_dapp_p0_lab_ppml/appcache/application_1733519374776_597720/container_e82_1733519374776_597720_01_000001/feast_dapp_venv.zip/feast-venv/lib/python3.8/site-packages/feast/usage.py", line 348, in wrapper
return func(*args, **kwargs)
File "/srv/data10/hadoop/yarn/nm/usercache/tuz_dapp_p0_lab_ppml/appcache/application_1733519374776_597720/container_e82_1733519374776_597720_01_000001/feast_dapp_venv.zip/feast-venv/lib/python3.8/site-packages/feast/feature_store.py", line 169, in __init__
self._registry = SqlRegistry(registry_config, None)
File "/srv/data10/hadoop/yarn/nm/usercache/tuz_dapp_p0_lab_ppml/appcache/application_1733519374776_597720/container_e82_1733519374776_597720_01_000001/feast_dapp_venv.zip/feast-venv/lib/python3.8/site-packages/feast/infra/registry/sql.py", line 198, in __init__
self.cached_registry_proto = self.proto()
File "/srv/data10/hadoop/yarn/nm/usercache/tuz_dapp_p0_lab_ppml/appcache/application_1733519374776_597720/container_e82_1733519374776_597720_01_000001/feast_dapp_venv.zip/feast-venv/lib/python3.8/site-packages/feast/infra/registry/sql.py", line 824, in proto
r.last_updated.FromDatetime(max(last_updated_timestamps))
File "/srv/data10/hadoop/yarn/nm/usercache/tuz_dapp_p0_lab_ppml/appcache/application_1733519374776_597720/container_e82_1733519374776_597720_01_000001/feast_dapp_venv.zip/feast-venv/lib/python3.8/site-packages/google/protobuf/internal/well_known_types.py", line 249, in FromDatetime
self.seconds = calendar.timegm(dt.utctimetuple())
AttributeError: 'NoneType' object has no attribute 'utctimetuple'
</code></pre>
<p>How can I understand what is wrong with my config, and how do I fix it?</p>
|
<python><feast>
|
2024-12-21 09:50:15
| 0
| 1,708
|
mascai
|
79,299,062
| 10,115,137
|
pip install PyGObject==3.50.0 complains of stale meson even though the latest meson is present
|
<p>Upgrading PyGObject using</p>
<pre class="lang-bash prettyprint-override"><code>pip install PyGObject==3.50.0 --verbose
</code></pre>
<p>gives the error (see full error at the end)</p>
<pre class="lang-py prettyprint-override"><code>meson-python: error: Could not find meson version 0.63.3 or newer, found 0.61.2.
[end of output]
</code></pre>
<p>even though I have the latest meson</p>
<pre class="lang-bash prettyprint-override"><code>>>>pip list | grep -i -e 'meson' -e 'pygo'
meson 1.6.1
meson-python 0.17.1
PyGObject 3.42.1
</code></pre>
<h2>full error</h2>
<pre class="lang-py prettyprint-override"><code>>>>pip install PyGObject==3.50.0 --verbose
Using pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)
Defaulting to user installation because normal site-packages is not writeable
Collecting PyGObject==3.50.0
Using cached pygobject-3.50.0.tar.gz (1.1 MB)
Running command pip subprocess to install build dependencies
Collecting meson-python>=0.12.1
Using cached meson_python-0.17.1-py3-none-any.whl (27 kB)
Collecting pycairo>=1.16
Using cached pycairo-1.27.0.tar.gz (661 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [2 lines of output]
meson-python: error: Could not find meson version 0.63.3 or newer, found 0.61.2.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/bin/python3 /tmp/pip-standalone-pip-ldyax5i4/__env_pip__.zip/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-mmevng98/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'meson-python>=0.12.
1' 'pycairo>=1.16'
cwd: [inherit]
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<h2>Environment:</h2>
<pre class="lang-bash prettyprint-override"><code>>>>uname -a # Pop!_OS 22.04 LTS
Linux $HOSTNAME 6.9.3-76060903-generic #202405300957~1732141768~22.04~f2697e1 SMP PREEMPT_DYNAMIC Wed N x86_64 x86_64 x86_64 GNU/Linux
>>>python --version
Python 3.10.12
>>>pip --version
pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)
</code></pre>
|
<python><installation><pip>
|
2024-12-21 07:56:30
| 1
| 920
|
lineage
|
79,298,879
| 15,632,586
|
ModuleNotFoundError: No module named 'huggingface_hub.inference._types'
|
<p>I am running a RAG pipeline, with LlamaIndex and quantized LLama3-8B-Instruct. I just installed these libraries:</p>
<pre><code>!pip install --upgrade huggingface_hub
!pip install --upgrade peft
!pip install llama-index bitsandbytes accelerate llama-index-llms-huggingface llama-index-embeddings-huggingface
!pip install --upgrade transformers
!pip install --upgrade sentence-transformers
</code></pre>
<p>Then I was looking to run the quantization pipeline like this:</p>
<pre><code>import torch
from llama_index.llms.huggingface import HuggingFaceLLM
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
)
</code></pre>
<p>However, I got this error returned to me: <code>ModuleNotFoundError: No module named 'huggingface_hub.inference._types'</code>. The last time I worked with this pipeline, two months ago, the code worked, so I think LlamaIndex has changed something; especially since, when I clicked on the error, it referred to <code> from huggingface_hub.inference._types import ConversationalOutput</code>, but a <code>ConversationalOutput</code> module doesn't exist in the Hugging Face docs.</p>
<p>So, what should I do to fix this error and be able to run this RAG pipeline?</p>
|
<python><large-language-model><huggingface><llama-index><retrieval-augmented-generation>
|
2024-12-21 04:34:23
| 1
| 451
|
Hoang Cuong Nguyen
|
79,298,628
| 107,158
|
How can I use Polars to stream the contents of a Parquet file as CSV text to standard output?
|
<p>Using <a href="https://www.python.org/" rel="nofollow noreferrer">Python</a> <a href="https://pola.rs/" rel="nofollow noreferrer">Polars</a>, how can I modify the following script to stream the contents of a <a href="https://parquet.apache.org/" rel="nofollow noreferrer">Parquet</a> file as <a href="https://en.wikipedia.org/wiki/Comma-separated_values" rel="nofollow noreferrer">CSV</a> text to standard output?</p>
<pre><code>import polars as pl
import sys
pl.scan_parquet("BTCUSDT-trades-2022-01.parquet").sink_csv(sys.stdout)
</code></pre>
<p>Python complains that <code>Lazyframe.sink_csv</code> expects a string argument and not a <code>TextIOWrapper</code>:</p>
<pre><code>Traceback (most recent call last):
File "/mnt/storage/Data/Binance/Market Data/Polars/select_btcusdt_aggtrades.py", line 4, in <module>
pl.scan_parquet("../BTCUSDT/2022/BTCUSDT-trades-2022-01.parquet").sink_csv(sys.stdout)
File "/home/derek/.cache/uv/archive-v0/bOaA51uU_dQEF2peOOxqI/lib/python3.12/site-packages/polars/_utils/unstable.py", line 58, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/derek/.cache/uv/archive-v0/bOaA51uU_dQEF2peOOxqI/lib/python3.12/site-packages/polars/lazyframe/frame.py", line 2717, in sink_csv
path=normalize_filepath(path),
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/derek/.cache/uv/archive-v0/bOaA51uU_dQEF2peOOxqI/lib/python3.12/site-packages/polars/_utils/various.py", line 225, in normalize_filepath
path = os.path.expanduser(path) # noqa: PTH111
^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen posixpath>", line 259, in expanduser
TypeError: expected str, bytes or os.PathLike object, not TextIOWrapper
</code></pre>
<p>Note that I wish to stream the Parquet file and not read the entire file into a DataFrame in memory, because each input file is quite large and there are twelve files for each year, so reading all of the input data for a year would quickly consume all available memory on my laptop.</p>
<h6>Data Source</h6>
<p><a href="https://data.binance.vision/data/spot/monthly/trades/BTCUSDT/BTCUSDT-trades-2022-01.zip" rel="nofollow noreferrer"><code>BTCUSDT-trades-2022-01.zip</code></a> contains the trades in CSV format between Bitcoin and USDT on the <a href="https://www.binance.com/en" rel="nofollow noreferrer">Binance</a> cryptocurrency exchange in January 2022. <code>BTCUSDT-trades-2022-01.parquet</code> contains this trade data in Parquet format.</p>
|
<python><dataframe><csv><parquet><python-polars>
|
2024-12-20 23:39:58
| 2
| 28,456
|
Derek Mahar
|
79,298,621
| 22,407,544
|
How to get my favicon at `example.com/favicon.ico` in my Django project?
|
<p>Google currently does not show my website's favicon in its search results, and I learned that it is because it should be located at <code>example.com/favicon.ico</code>. I'm looking for a simple way to do this, hopefully without relying on redirects. I've used redirection to the version in my static folder, but when I visit the URL it redirects to <code>example.com/static/favicon.ico</code>, which I do not want. I want that if a user or a crawler visits <code>example.com/favicon.ico</code>, they see my favicon. It is currently located at:</p>
<pre><code>my_project/
├── my_project/
│ ├── ...
│ ├── settings.py
│ ├── ...
├── manage.py
└── static/
└── img/
└── favicon.ico
</code></pre>
<p>I use gunicorn as my web server and whitenoise as my http server. Here is my urls.py:</p>
<pre><code>from django.contrib import admin  # assumed import, needed for admin.site.urls below
from django.urls import path, include # new
from django.contrib.sitemaps.views import sitemap # new
from finder.views import ProcessorDetailSitemap, ArticleDetailSitemap, StaticViewSitemap # Correct import path
#from django.conf import settings
#from django.conf.urls.static import static

sitemaps = {
    'processor': ProcessorDetailSitemap,
    'article': ArticleDetailSitemap,
    'static': StaticViewSitemap,
}

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('finder.urls')), #new
    path("sitemap.xml", sitemap, {"sitemaps": sitemaps}, name="django.contrib.sitemaps.views.sitemap",),
]
</code></pre>
<p>It is also linked in my template using <code><link rel="icon" type="image/x-icon" href="{% static 'img/favicon.ico' %}"></code></p>
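<p>(A hedged option that avoids redirects: WhiteNoise can serve extra files at the URL root via its <code>WHITENOISE_ROOT</code> setting, so placing <code>favicon.ico</code> in a dedicated directory and pointing that setting at it should make <code>example.com/favicon.ico</code> resolve directly. Sketch for <code>settings.py</code>, assuming <code>BASE_DIR</code> is a <code>Path</code> and a hypothetical <code>root_files/</code> directory next to <code>manage.py</code>:)</p>
<pre><code># settings.py
WHITENOISE_ROOT = BASE_DIR / "root_files"  # put favicon.ico (and e.g. robots.txt) in this folder
</code></pre>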
|
<python><django>
|
2024-12-20 23:35:03
| 1
| 359
|
tthheemmaannii
|
79,298,447
| 4,048,657
|
PyTorch scatter max for sparse tensors?
|
<p>I have the following PyTorch code</p>
<pre><code>value_tensor = torch.sparse_coo_tensor(indices=query_indices.t(), values=values, size=(num_lines, img_size, img_size)).to(device=device)
value_tensor = value_tensor.to_dense()
indices = torch.arange(0, img_size * img_size).repeat(len(lines)).to(device=device)
line_tensor_flat = value_tensor.flatten()
img, _ = scatter_max(line_tensor_flat, indices, dim=0)
img = torch.reshape(img, (img_size, img_size))
</code></pre>
<p>Note the line <code>value_tensor = value_tensor.to_dense()</code>; this is unsurprisingly slow.</p>
<p>However, I cannot figure out how to obtain the same results with a sparse tensor. The function in question calls <code>reshape</code>, which is not available on sparse tensors. I'm using <a href="https://pytorch-scatter.readthedocs.io/en/1.3.0/functions/max.html" rel="nofollow noreferrer">Scatter Max</a> but am open to using anything that works.</p>
|
<python><pytorch>
|
2024-12-20 21:36:06
| 1
| 1,239
|
Cedric Martens
|
79,298,424
| 219,153
|
Generating binary arrays with alternating values based on change indices in NumPy
|
<p>I have an array <code>a</code> of increasing indexes, e.g. <code>[2 5 9 10]</code>, which indicates positions of value change. Assuming the output values are <code>0</code> and <code>1</code>, I want to get array <code>b</code>:</p>
<pre><code>[0 0 1 1 1 0 0 0 0 1 0]
</code></pre>
<p>Is there a NumPy magic to transform <code>a</code> into <code>b</code>?</p>
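<p>(A sketch of one vectorized way: mark the change positions, take a cumulative sum, and reduce modulo 2; with <code>a = [2, 5, 9, 10]</code> and output length 11 this reproduces the expected result.)</p>
<pre><code>import numpy as np

a = np.array([2, 5, 9, 10])
n = 11                      # desired output length

b = np.zeros(n, dtype=int)
b[a] = 1                    # 1 at every change position
b = np.cumsum(b) % 2        # parity of changes so far -> alternating 0/1 runs
print(b)                    # [0 0 1 1 1 0 0 0 0 1 0]
</code></pre>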
|
<python><numpy>
|
2024-12-20 21:22:12
| 4
| 8,585
|
Paul Jurczak
|
79,298,368
| 4,907,639
|
Inspect all probabilities of BERTopic model
|
<p>Say I build a BERTopic model using</p>
<pre><code>from bertopic import BERTopic
topic_model = BERTopic(n_gram_range=(1, 1), nr_topics=20)
topics, probs = topic_model.fit_transform(docs)
</code></pre>
<p>Inspecting <code>probs</code> gives me just a single value for each item in <code>docs</code>.</p>
<pre><code>probs
array([0.51914467, 0. , 0. , ..., 1. , 1. ,
1. ])
</code></pre>
<p>I would like the entire probability vector across all topics (so in this case, where <code>nr_topics=20</code>, I want a vector of 20 probabilities for each item in <code>docs</code>). In other words, if I have N items in <code>docs</code> and K topics, I would like an NxK output.</p>
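<p>(One hedged pointer: BERTopic only computes the full document-topic distribution when it is constructed with <code>calculate_probabilities=True</code>; otherwise <code>probs</code> is just the probability of each document's assigned topic. Assuming the default HDBSCAN-based setup, the N-by-K matrix then comes back directly from <code>fit_transform</code>:)</p>
<pre><code>from bertopic import BERTopic

topic_model = BERTopic(n_gram_range=(1, 1), nr_topics=20, calculate_probabilities=True)
topics, probs = topic_model.fit_transform(docs)   # probs: shape (n_docs, n_topics)
</code></pre>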
|
<python><nlp><topic-modeling>
|
2024-12-20 20:49:34
| 1
| 2,109
|
coolhand
|
79,298,069
| 15,673,975
|
Importing a Python dependency from a private repository causes docker buildx build on Kubernetes to fail
|
<p>I have set up a Kubernetes pod to build Docker containers. The projects I'm building containers for have a Python dependency which comes from a private repository. Here is the issue: if I run the <code>docker buildx build ...</code> command in my terminal, everything works.
If, however, I run the command through a Python script, it fails with</p>
<pre><code>#1 ERROR: error for bootstrap "kube30": Unauthorized
------
> [internal] booting buildkit:
------
ERROR: error for bootstrap "kube30": Unauthorized
</code></pre>
<p>Note that it only fails if I explicitly import the private dependency. If I remove the import, without changing anything else, the building process works. And again, only when going through a script: when using the command directly from cmdline, it works in either case.</p>
<p>What could be the problem?</p>
|
<python><docker><kubernetes>
|
2024-12-20 18:06:09
| 1
| 374
|
ultrapoci
|
79,298,043
| 5,403,987
|
How to stop jupyter-book build from examining files in .venv directory
|
<p>I'm using jupyter-book. I recently switched from using a conda based environment to uv. uv creates a .venv directory at the top level. Now when I issue the command</p>
<pre><code>jupyter-book build .
</code></pre>
<p>I can see that jupyter-book is traversing the .venv directory as well and trying to build any markdown files, rst files, Jupyter notebooks, etc. that are part of any of the Python packages in the .venv directory. This is not ideal, so I'd like to exclude the .venv directory. I updated my _config.yml file to attempt to do so:</p>
<pre><code>execute:
  execute_notebooks: auto
  exclude_patterns:
    - '.venv'
</code></pre>
<p>But it appears to have no effect. I tried other pattern variants such as <code>'.venv/', '.venv/*', '.venv/**'</code>, with no luck.</p>
<p>Any suggestions would be appreciated.</p>
|
<python><python-sphinx><jupyterbook>
|
2024-12-20 17:57:52
| 1
| 2,224
|
Tom Johnson
|
79,297,758
| 990,549
|
How should I parse times in the Japanese "30-hour" format for data analysis?
|
<p>I'm considering a data analysis project involving information on Japanese TV broadcasts. The relevant data will include broadcast times, and some of those will be for programs that aired late at night.</p>
<p>Late-night Japanese TV schedules follow a non-standard time format called the <a href="https://ja.wikipedia.org/wiki/30%E6%99%82%E9%96%93%E5%88%B6" rel="nofollow noreferrer">30-hour system</a> (brief English explanation <a href="https://en.wikipedia.org/wiki/Date_and_time_notation_in_Japan#Time" rel="nofollow noreferrer">here</a>). Most times are given in normal Japan Standard Time, formatted as <code>%H:%M</code>. Times from midnight to 6 AM, however, are treated as an extension of the previous day and numbered accordingly, under the logic that that's how people staying up late experience them. For example, Macross Frontier was broadcast in Kansai at 1:25 AM, but it was written as 25:25.</p>
<p>I want to use this data in a Pandas or Polars DataFrame. Theoretically, it could be left as a string, but it'd be more useful to convert it to a standard format for datetimes -- either Python's built-in type, or the types used in NumPy or Polars. One simple approach could be:</p>
<pre><code>from datetime import date, time, datetime, timedelta
from zoneinfo import ZoneInfo

def process_30hour(d: date, t: str):
    h, m = [int(n) for n in t.split(':')]  # assumes format 'HH:MM' for t
    if h > 23:
        h -= 24
        d += timedelta(days=1)
    return datetime.combine(d, time(h, m), ZoneInfo('Japan'))
</code></pre>
<p>This could then be applied to a whole DataFrame with <code>DataFrame.apply()</code>. There may be a more performant way, however, especially considering the vectorization features of DataFrames -- both libraries recommend avoiding <code>DataFrame.apply()</code> if there's an alternative.</p>
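<p>For what it's worth, the most vectorized idea I have so far relies on the fact that a timedelta of, say, 25 hours naturally rolls over into the next day; the column names below are made up and this is untested:</p>
<pre><code>import pandas as pd

def process_30hour_vectorized(dates: pd.Series, times: pd.Series) -> pd.Series:
    # `dates` is datetime64[ns], `times` holds strings like '25:25'
    parts = times.str.split(':', expand=True).astype(int)
    offset = pd.to_timedelta(parts[0], unit='h') + pd.to_timedelta(parts[1], unit='m')
    return (dates + offset).dt.tz_localize('Asia/Tokyo')
</code></pre>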
|
<python><pandas><numpy><datetime><python-polars>
|
2024-12-20 15:52:43
| 2
| 1,050
|
Shay Guy
|
79,297,644
| 7,179,546
|
How to get response with Nones when using the multiprocessing library on Python?
|
<p>I'm using the multiprocessing package for Python to parallelize 2 functions. Each of those functions returns a dictionary that can have None as the value for some keys, and when that happens I'm getting this error:</p>
<blockquote>
<p>Error in multiprocessing my function: Error sending result: <code><Output of my function></code> Reason: 'TypeError("'NoneType' object is not callable")'</p>
</blockquote>
<p>However, if I run this code, everything works correctly</p>
<pre><code>import multiprocessing
def function1(param1):
    return {'a': param1, 'b': None}

def function2(param2):
    return {'c': param2, 'd': None}

if __name__ == '__main__':
    with multiprocessing.Pool(processes=2) as pool:
        a = 3
        future_1 = pool.apply_async(function1, (3,))
        future_2 = pool.apply_async(function2, [a])
        result_1 = future_1.get()
        result_2 = future_2.get()
        print(result_1)
        print(result_2)
</code></pre>
<p>Any idea what kind of issues could be the cause of that error?</p>
|
<python><python-multiprocessing>
|
2024-12-20 15:06:20
| 0
| 737
|
Carabes
|
79,297,329
| 1,358,829
|
FB-Hydra: calling compose during hydra main is not properly finding configs in the searchpath
|
<p>I'm trying to call compose within <code>hydra.main</code>, but it is not properly using the searchpath I set in my main config. This is the organization of my script:</p>
<pre><code>my_script
┣ config
┃ ┣ config.yaml
┗ my_script.py
</code></pre>
<p>My scripts depend on a config package within a library I'm developing. It is organized as follows:</p>
<pre><code>my_lib
┣ config
┃ ┣ config_group_A
┃ ┃ ┣ ...
┃ ┣ config_group_B
┃ ┃ ┣ some_config.yaml
| | ┗ __init__.py
| ┗ __init__.py
</code></pre>
<p>and so on.</p>
<p>In my <code>my_script/config/config.yaml</code> I added the following:</p>
<pre class="lang-yaml prettyprint-override"><code> searchpath:
- pkg://my_lib.config
- pkg://my_lib.config.config_group_B
</code></pre>
<p>The purpose is that the main config uses a lot of configs within <code>my_lib/config</code>, but also I want to, within <code>hydra.main</code>, use <code>hydra.compose</code> to compose the config <code>some_config.yaml</code></p>
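<p>To make that concrete, inside the <code>hydra.main</code>-decorated function the call looks roughly like this (simplified; the real call may also pass overrides):</p>
<pre class="lang-py prettyprint-override"><code># rough sketch of the call site, not the exact code
cfg_b = hydra.compose(config_name="some_config")
</code></pre>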
<p>The problem is that when I call <code>hydra.compose</code> within <code>hydra.main</code>, I get the following error:</p>
<pre><code>hydra.errors.MissingConfigException: Cannot find primary config 'some_config'. Check that it's in your config search path.
Config search path:
provider=hydra, path=pkg://hydra.conf
provider=main, path=file:///<path_to_scripts>/my_script/config
provider=schema, path=structured://
</code></pre>
<p>But I'm confused because I added both <code>my_lib.config</code> and <code>my_lib.config.config_group_B</code> to the searchpath. These even appear in <code>hydra.core.hydra_config.HydraConfig.get().searchpath</code> and <code>hydra.core.hydra_config.HydraConfig.get().runtime.config_sources</code>, so it is strange that I cannot compose <code>some_config.yaml</code> during <code>hydra.main</code>. For reference, in the script I call <code>hydra.main(version_base="1.3", config_path='config', config_name='config')</code>.</p>
<p>What is the issue here?</p>
|
<python><python-3.x><fb-hydra>
|
2024-12-20 13:09:56
| 1
| 1,232
|
Alb
|
79,297,217
| 8,458,083
|
How can I statically type check a dataclass created with make_dataclass?
|
<p>When I create and use a dataclass the "normal way", I can run and type check my code with mypy without problems.
For instance, this code works perfectly fine:</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Person2:
    name: str
    age: int
    height: float

def f2(p: Person2):
    print(p)
</code></pre>
<p>But if I try to create a dataclass with make_dataclass, I have a problem.
For instance:</p>
<pre><code>from dataclasses import make_dataclass

types_list = [('name', str), ('age', int), ('height', float)]
Person = make_dataclass('Person', types_list)
a = Person(name="n", age=2, height=4.3)

def f(p: Person):
    print(p)
</code></pre>
<blockquote>
<p>q.py:10: error: Variable "q.Person" is not valid as a type [valid-type]</p>
</blockquote>
<p>Why does this problem occur, and is it possible to fix it?</p>
|
<python><python-typing><mypy>
|
2024-12-20 12:19:42
| 1
| 2,017
|
Pierre-olivier Gendraud
|
79,297,127
| 19,573,290
|
Putting a Tkinter window below all other windows
|
<p>I'm trying to make a Tkinter window appear below all other windows.</p>
<p>I know how to make it appear over all other windows:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
root = tk.Tk()
root.attributes('-topmost', True)
root.mainloop()
</code></pre>
<p>But I'm trying to do the opposite of that - so that the window sits under all windows, on the desktop. <code>-bottommost</code> isn't a thing though, sadly.</p>
|
<python><tkinter>
|
2024-12-20 11:46:35
| 2
| 363
|
stysan
|
79,296,881
| 9,213,069
|
Replace pandas column value by previous column value if cell contain a substring
|
<p>I'm using Python 3.6.8. I have a dataframe df with columns name & sport. I want to replace the value of sport if it contains the substring "replace this value for".</p>
<p>My dataframe is generated using the following code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'name': ['Bob', 'Jane', 'Alice'],
'sport': ['tennis', 'replace this value for Jane', 'replace this value for Alice']})
</code></pre>
<p>So I wrote the following code:</p>
<pre><code>row_num=df.index[df.iloc[:,1].str.contains('replace this value for')]
for num in row_num:
    df.loc[num,1]=df.loc[num,0]
</code></pre>
<p>But I'm getting the following error:</p>
<pre><code> # If we have a listlike key, _check_indexing_error will raise
KeyError: 0
</code></pre>
<p>Can you please help me to resolve this error?</p>
|
<python><pandas>
|
2024-12-20 10:21:24
| 2
| 883
|
Tanvi Mirza
|
79,296,865
| 3,414,663
|
How to make a Python Dataclass mixin?
|
<p>To make a dataclass in Python you typically use the decorator. Something
like:</p>
<pre class="lang-none prettyprint-override"><code>from dataclasses import dataclass
@dataclass
class Foo:
    n: int = 1

    def bump(self):
        self.n += 1
</code></pre>
<p>Alternatively you can use <code>make_dataclass</code> to do something
like</p>
<pre class="lang-none prettyprint-override"><code>Foo = make_dataclass('Foo',
[('n', int)],
namespace={'bump': lambda self: self.n += 1})
</code></pre>
<p>Is there a way to make a mixin class, say <code>Dataclass</code>, so that you
could write the first form as follows?</p>
<pre class="lang-none prettyprint-override"><code>class Foo(Dataclass):
n: int = 1
def bump(self):
self.n += 1
</code></pre>
<p>Or is there a reason why that might be undesirable?</p>
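<p>One idea I have sketched myself, though I am not sure whether it is sound or has hidden drawbacks, is a base class that applies the decorator from <code>__init_subclass__</code> (this is my own guess, not something I found documented):</p>
<pre class="lang-none prettyprint-override"><code>from dataclasses import dataclass

class Dataclass:
    # assumption: decorating the freshly created subclass here is equivalent
    # to writing @dataclass above it; edge cases not checked
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        dataclass(cls)
</code></pre>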
|
<python><mixins><python-dataclasses>
|
2024-12-20 10:12:02
| 1
| 589
|
user3414663
|
79,296,853
| 329,680
|
sqlmodel ValueError: <class 'list'> has no matching SQLAlchemy type
|
<p>Given these classes and 1-to-N relationships, I get the error <code>ValueError: <class 'list'> has no matching SQLAlchemy type</code> at import time, referring to the <code>category.py</code> field <code>todos: list["Todo"]</code>, while the <code>hero/team</code> relation does work.</p>
<p>I also tried to replace the <code>UUID</code> type and use <code>int</code> in both classes, without success.</p>
<pre><code># hero.py (works)
class Hero(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    ....
    team_id: int | None = Field(default=None, foreign_key="team.id")
    team: Team | None = Relationship(back_populates="heroes")

# team.py (works)
class Team(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    .....
    heroes: list["Hero"] = Relationship(back_populates="team")

# todo.py (does not work)
class Todo(SQLModel, table=True):
    id: UUID | None = Field(
        default=uuid4(),
        primary_key=True,
    )
    ......
    category_id: UUID | None = Field(default=None, foreign_key="category.id")
    category: Category | None = Relationship(back_populates="todos")

# category.py (does not work => ValueError: <class 'list'>)
class Category(SQLModel, table=True):
    id: UUID | None = Field(default=uuid4(), primary_key=True)
    .....
    todos: list["Todo"] = Relationship(back_populates="category")
</code></pre>
<p>Full error log:</p>
<pre><code>import todo.models.category
File ".../todo/models/category.py", line 9, in <module>
class Category(SQLModel, table=True):
File ".../lib/python3.12/site-packages/sqlmodel/main.py", line 559, in __new__
col = get_column_from_field(v)
^^^^^^^^^^^^^^^^^^^^^^^^
File "../lib/python3.12/site-packages/sqlmodel/main.py", line 708, in get_column_from_field
sa_type = get_sqlalchemy_type(field)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../lib/python3.12/site-packages/sqlmodel/main.py", line 697, in get_sqlalchemy_type
raise ValueError(f"{type_} has no matching SQLAlchemy type")
ValueError: <class 'list'> has no matching SQLAlchemy type
</code></pre>
|
<python><orm><fastapi><sqlmodel>
|
2024-12-20 10:08:24
| 3
| 1,016
|
Fabio
|
79,296,798
| 2,915,050
|
Python unittest - how to unit test function to open files of specific file pattern
|
<p>I'm trying to write a unit test with <code>unittest</code> which tests that the behaviour of the below function only opens files that have a filename pattern of <code>test_*.json</code>.</p>
<pre><code>import glob
import json
def get_tests():
    tests = []
    for test in list(glob.iglob(f'**/test_*.json', recursive=True)):
        with open(test) as f:
            contents = f.read()
            to_json = json.loads(contents)
            tests.append(to_json)
    return tests
</code></pre>
<p>I've seen this SO post <a href="https://stackoverflow.com/questions/1289894/how-do-i-mock-an-open-used-in-a-with-statement-using-the-mock-framework-in-pyth">How do I mock an open used in a with statement (using the Mock framework in Python)?</a> which is to use <code>patch</code> and <code>mock_open</code>, however, trying to follow the top answer, it's mainly focussed on opening a mock file, rather than testing the function that OP has written <code>testme</code> with a mock file. Also, I'm unsure in how to tie in filenames of a specific pattern either with mock file.</p>
<p>I want the unit tests to prove that it only opens files of that pattern, and not any other file.</p>
|
<python><mocking><python-unittest>
|
2024-12-20 09:48:06
| 0
| 1,583
|
RoyalSwish
|
79,296,766
| 4,913,254
|
Heatmap (seaborn) doesn't show all Ylabels
|
<p>My heatmap doesn't want to show all Ylabels.</p>
<p>This is the plot
<a href="https://i.sstatic.net/UmjzJlHE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmjzJlHE.png" alt="enter image description here" /></a></p>
<p>And this is the code</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# Load the dataset from a file (e.g., 'data.csv')
file_path = 'data.csv' # Replace with your file path
df = pd.read_csv(file_path, sep='\t')
# Convert the "VAF from Genotyping" column to numeric after removing the percentage sign
df['VAF from Genotyping'] = df['VAF from Genotyping'].str.rstrip('%').astype(float)
# Sort the dataframe by "VAF from Genotyping" in descending order
df_sorted = df.sort_values(by='VAF from Genotyping', ascending=False)
# Modify Sample-Run labels to include #Reads in BAM and VAF from Genotyping
df_sorted["Sample-Run"] = df_sorted["Sample-Run"] + " (" + df_sorted["#Reads in BAM"].astype(str) + " Reads, " + df_sorted["VAF from Genotyping"].astype(str) + "% VAF)"
# Convert detection status to binary
threshold_cols = ["6K", "10K", "15K", "20K", "25K", "35K"]
df_binary = df_sorted[threshold_cols].replace({"No Mutation Detected": 0, "Variant": 1})
# Ensure binary index matches Sample-Run
df_binary.index = df_sorted["Sample-Run"]
print(len(df_binary))
# --- Heatmap Plot ---
plt.figure(figsize=(12, 8))
sns.heatmap(df_binary, cmap="Blues", annot=True, cbar=True, fmt="d")
plt.title("Mutation Detection Matrix Across Thresholds", fontsize=14)
plt.xlabel("Read Thresholds", fontsize=12)
plt.ylabel("Wnumber-RunID Read VAF", fontsize=12)
plt.gca().set_yticklabels(plt.gca().get_yticklabels(), fontsize=10)
# Save heatmap
heatmap_file = "detection_matrix_with_metadata.png"
plt.tight_layout()
plt.savefig(heatmap_file)
plt.close() # Close the figure
# Save the sorted dataset to a new file
output_file = 'sorted_data_with_metadata.csv'
df_sorted.to_csv(output_file, index=False, sep='\t')
# Confirmation
print(f"Sorted data saved to '{output_file}'")
print(f"Heatmap saved as '{heatmap_file}'")
</code></pre>
<p>The <code>print(len(df_binary))</code> showed me that the heatmap function takes 54 elements, and I can see that in the number of 0s in the first column of the heatmap. However, the plot only shows around 27 Y labels. Why?</p>
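<p>One thing I have not tried yet is forcing every label via the heatmap's own argument, assuming <code>yticklabels=True</code> does what I think it does:</p>
<pre><code># untested guess: yticklabels=True should draw a label for every row
sns.heatmap(df_binary, cmap="Blues", annot=True, cbar=True, fmt="d", yticklabels=True)
</code></pre>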
|
<python><seaborn>
|
2024-12-20 09:33:38
| 0
| 1,393
|
Manolo Dominguez Becerra
|
79,296,597
| 5,868,293
|
Rolling window selection with groupby in pandas
|
<p>I have the following pandas dataframe:</p>
<pre><code># Create the DataFrame
df = pd.DataFrame({
'id': [1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2],
'date': [1, 2, 3, 4, 5, 6, 7, 8, 5, 6, 7, 8, 9, 10, 11, 12],
'value': [11, 12, 13, 14, 15, 16, 17, 18, 21, 22, 23, 24, 25, 26, 27, 28]
})
df
id date value
0 1 1 11
1 1 2 12
2 1 3 13
3 1 4 14
4 1 5 15
5 1 6 16
6 1 7 17
7 1 8 18
8 2 5 21
9 2 6 22
10 2 7 23
11 2 8 24
12 2 9 25
13 2 10 26
14 2 11 27
15 2 12 28
</code></pre>
<p>I want to query the above dataframe, in a rolling window manner, for both <code>ids</code>. The rolling window should be of size <code>n</code>.</p>
<p>So, if <code>n==2</code>, in the 1st iteration I would like to query this:</p>
<p><code>df.query('(id==1 and (date==1 or date==2)) or (id==2 and (date==5 or date==6))')</code></p>
<pre><code>id date value
0 1 1 11
1 1 2 12
8 2 5 21
9 2 6 22
</code></pre>
<p>in the 2nd iteration I would like to query this:</p>
<p><code>df.query('(id==1 and (date==2 or date==3)) or (id==2 and (date==6 or date==7))')</code></p>
<pre><code>id date value
1 1 2 12
2 1 3 13
9 2 6 22
10 2 7 23
</code></pre>
<p>in the 3rd iteration I would like to query this:</p>
<p><code>df.query('(id==1 and (date==3 or date==4)) or (id==2 and (date==7 or date==8))')</code></p>
<pre><code>id date value
2 1 3 13
3 1 4 14
10 2 7 23
11 2 8 24
</code></pre>
<p>etc. How could I do that in pandas? My data has around 500 <code>ids</code>.</p>
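<p>For completeness, the naive loop I could write myself (which I suspect is far from idiomatic, hence the question) looks like this:</p>
<pre><code>n = 2
groups = [g for _, g in df.groupby('id')]
n_windows = min(len(g) for g in groups) - n + 1
for start in range(n_windows):
    window = pd.concat([g.iloc[start:start + n] for g in groups])
    print(window)  # one rolling window across all ids
</code></pre>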
|
<python><pandas>
|
2024-12-20 08:24:15
| 5
| 4,512
|
quant
|
79,296,589
| 26,579,940
|
pyqtgraph How to add multiple plots to one ax
|
<p>I want to open a window with pyqtgraph alone, without using pyqt.</p>
<p>I want to create two or more axes in one window through pyqtgraph, and draw two lines in one ax.
What should I do?</p>
<p>The following code creates multiple windows.</p>
<pre><code>import pandas as pd
import pyqtgraph as pg
a = [
{'a': 5, 'b': 10, 'c': 5},
{'a': 4, 'b': 0.5, 'c': 1},
{'a': 3.5, 'b': 15, 'c': 9},
{'a': 2.1, 'b': 5, 'c': 8},
{'a': 0.1, 'b': 1, 'c': 5},
]
df = pd.DataFrame(a)
pg.plot(df['a'].values)
pg.plot(df['b'].values)
pg.exec()
</code></pre>
<p>The following code does not create a window.</p>
<pre><code>import pandas as pd
import pyqtgraph as pg
a = [
{'a': 5, 'b': 10, 'c': 5},
{'a': 4, 'b': 0.5, 'c': 1},
{'a': 3.5, 'b': 15, 'c': 9},
{'a': 2.1, 'b': 5, 'c': 8},
{'a': 0.1, 'b': 1, 'c': 5},
]
df = pd.DataFrame(a)
widget = pg.MultiPlotWidget()
widget.addItem(pg.PlotWidget(df['a'].values))
widget.addItem(pg.PlotWidget(df['b'].values))
pg.exec()
</code></pre>
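<p>What I vaguely expect to need, after skimming the docs, is something like <code>GraphicsLayoutWidget</code>, but I am not sure this is the intended way and the calls below are my guess at the API:</p>
<pre><code>import pandas as pd
import pyqtgraph as pg

a = [
    {'a': 5, 'b': 10, 'c': 5},
    {'a': 4, 'b': 0.5, 'c': 1},
    {'a': 3.5, 'b': 15, 'c': 9},
    {'a': 2.1, 'b': 5, 'c': 8},
    {'a': 0.1, 'b': 1, 'c': 5},
]
df = pd.DataFrame(a)

win = pg.GraphicsLayoutWidget(show=True)
p1 = win.addPlot(row=0, col=0)     # first "ax"
p1.plot(df['a'].values)            # first line
p1.plot(df['b'].values, pen='r')   # second line on the same ax
p2 = win.addPlot(row=1, col=0)     # second "ax" in the same window
p2.plot(df['c'].values)
pg.exec()
</code></pre>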
|
<python><pyqtgraph>
|
2024-12-20 08:22:27
| 1
| 404
|
white.seolpyo.com
|
79,296,505
| 1,503,683
|
Optimized way to replace a large list of dicts
|
<p>Let's say I have a (large) list of (large) dicts, for instance:</p>
<pre class="lang-json prettyprint-override"><code>my_list = [
{
"foo": "bar",
"foobar": "barfoo",
"something": 1,
"useless_bool": True
},
{
"foo": "rab",
"foobar": "oofrab",
"something": 1,
"useless_str": "different_value"
},
...
]
</code></pre>
<p>Where this list contains <strong>thousands</strong> of dicts, and where each dicts can have <strong>hundreds</strong> of keys. Also I have a Jinja template, for instance:</p>
<pre class="lang-py prettyprint-override"><code>jinja_template = jinja2.Environment().from_string(
"{{ foo }} - {{ foobar }} - {{ something }}"
)
</code></pre>
<p>For each dict in my list, I want to add a new key containing the rendered Jinja template (using other existing keys), and remove all other keys, to end up with something like:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"new_key": "bar - barfoo - 1"
},
{
"new_key": "rab - oofrab - different_value"
},
...
]
</code></pre>
<p>So far I made a (dumb) loop to do so:</p>
<pre class="lang-py prettyprint-override"><code>import jinja2
new_message_template = jinja2.Environment().from_string("{{ foo }} - {{ foobar }} - {{ something }}")
for item in my_list:
    # Render the new value
    rendered_value = new_message_template.render(**item)
    # Remove all existing keys
    item.clear()
    # Set the new key/value
    item["new_key"] = rendered_value
</code></pre>
<p>This works.</p>
<p><strong>But considering this list contains thousands of dicts, with each having hundreds of keys</strong>, is there a more optimized way to perform this operation, in terms of performance and execution time?</p>
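<p>One variant I have considered, although I do not know whether it is actually faster, is building a fresh list instead of mutating each dict in place:</p>
<pre class="lang-py prettyprint-override"><code>new_list = [
    {"new_key": new_message_template.render(**item)}
    for item in my_list
]
</code></pre>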
|
<python><dictionary>
|
2024-12-20 07:48:32
| 1
| 2,802
|
Pierre
|
79,296,132
| 16,326,242
|
How to backup the whole group chat in telegram with telethon?
|
<p>I would like to clone or backup or forward the messages (including files, videos) to another group. The groups have topics.</p>
<h4>What I have tried</h4>
<p>I tried to use a Telegram Bot with <code>telethon</code>, but some functions cannot be accessed by a bot. So, I used a phone number, actually three. They all got banned. Maybe my code was very bad, and I was sending too many requests. I would love to get some code that will definitely work, please. I have only one telegram account left. Thanks, guys.
<br/>
If there are other ways around this that cost no money, I am open to trying them.</p>
|
<python><telegram><chat><telethon>
|
2024-12-20 03:57:55
| 0
| 1,084
|
Four
|
79,296,036
| 4,048,657
|
Construct a sparse tensor while propagating gradient?
|
<p>I have code similar to this I would like to make faster:</p>
<pre class="lang-py prettyprint-override"><code># indices: indices of a 3d tensor
# values associated to the indices
result = torch.zeros((L, N, N))
for idx, (i,j,k) in enumerate(indices):
mask = torch.zeros_like(result)
mask[i][j][k] = 1.0
img = img + mask * values[idx]
</code></pre>
<p>Now, even if I choose to make a sparse mask, I notice that each iteration runs slower. Is there a simple solution with a function that will propagate the gradient, of the form <code>img = func(indices, values)</code>?</p>
<p>I'm looking for a solution that takes advantage of vectorization and/or sparse data structures.</p>
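<p>The closest I have come to a vectorized version is plain advanced indexing, though I have not verified that the gradient w.r.t. <code>values</code> survives in every case (and it assumes each index triple appears only once):</p>
<pre class="lang-py prettyprint-override"><code># assumes indices has shape (K, 3) and values has shape (K,)
i, j, k = indices.unbind(dim=1)
img = torch.zeros((L, N, N))
img[i, j, k] = values  # hoping autograd tracks this back into `values`
</code></pre>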
|
<python><pytorch>
|
2024-12-20 02:24:31
| 1
| 1,239
|
Cedric Martens
|
79,296,005
| 759,880
|
How to patch a 3rd party lib in a unit test with FastAPI client
|
<p>I have an <code>app/main.py</code> for FastAPI which does:</p>
<pre><code>from qdrant_client import QdrantClient
...
qdrant_client = QdrantClient(url=...)
qdrant_client.create_collection(...)
...
app = FastAPI()
...
@app.get("/my_endpoint")
def handle_my_endpoint(query: str):
    ...
    qdrant_client.query_points(...)
</code></pre>
<p>And I want to write a unit test for <code>/my_endpoint</code>.</p>
<p>So I have in the unit test file:</p>
<pre><code>import unittest
from unittest.mock import MagicMock, patch

from fastapi.testclient import TestClient
from app.main import app

class Test(unittest.TestCase):
    @patch('app.main.QdrantClient', qdrant_client=MagicMock())
    def test_my_endpoint(self, qdrant_client):
        qdrant_client.create_collection.return_value = None
        with TestClient(app) as client:
            resp = client.get("/my_endpoint", params={...})
            ...
</code></pre>
<p>But it doesn't work: it insists on using the real client, which tries to call an actual Qdrant server and fails... What am I missing to set up that <code>patch</code> correctly?</p>
|
<python><fastapi><python-unittest><python-unittest.mock>
|
2024-12-20 01:48:27
| 1
| 4,483
|
ToBeOrNotToBe
|
79,295,912
| 888,367
|
How to get Gemini 1.5 to extract tabular data from an image?
|
<p>I need to extract pairs of code and description from a table with columns but no rows in an image like this:</p>
<p><a href="https://i.sstatic.net/TMHxZZ3J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMHxZZ3J.png" alt="table with four columns in Spanish" /></a></p>
<p>I tried Gemini 1.5 Flash, provided the image and the corresponding prompt to the chat, and it managed surprisingly well to extract the code-description pairs:</p>
<p><a href="https://i.sstatic.net/pKyHBGfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pKyHBGfg.png" alt="actual prompt and answer from Gemini 1.5 Flash" /></a></p>
<p>When I tried to create a python program that does the same, I only found documentation to extract the text from the image and then pass the text to the LLM to figure out how to pair the codes (under column "CÓDIGO") and descriptions (under "DESIGNACIÓN DE LA MERCANCÍA"). But since the text is out of context it's impossible for the model to figure out the pairs out of the text alone:</p>
<pre class="lang-py prettyprint-override"><code>import io
import os
from google.cloud import aiplatform
from google.cloud import language_v1
from google.cloud.vision_v1 import ImageAnnotatorClient
service_account_path = "my_key.json"
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = service_account_path
# Authenticate using the service account key
aiplatform.init(project="my_project", location="us-central1",
credentials=service_account_path)
def process_image(image_path):
client = ImageAnnotatorClient()
with io.open(image_path, 'rb') as image_file:
content = image_file.read()
image = client.document_text_detection(image=content)
text = image.text
prompt = f"""
The following text is extracted from a table in Spanish.
The table has columns: "CÓDIGO", "DESIGNACIÓN DE LA MERCANCÍA".
The "DESIGNACIÓN DE LA MERCANCÍA" column often starts with a hyphen "-".
Extract the data from the text and present it as a list of dictionaries,
where each dictionary has the following structure:
{{"CÓDIGO": "code_value", "DESIGNACIÓN DE LA MERCANCÍA": "description"}}
Text:
{text}
"""
client = language_v1.LanguageServiceClient()
document = language_v1.Document(
content=prompt, type_=language_v1.Document.Type.PLAIN_TEXT
)
response = client.analyze_sentiment(document=document)
return response.document_sentiment.text.split("\n")[1:-1]
</code></pre>
<p>Is there a way to get this or another GenAI to receive an image and process the prompt to extract the data the same way the chat version of Gemini 1.5 Flash did?</p>
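<p>I have also wondered whether the <code>google-generativeai</code> SDK lets me pass the image directly next to the prompt, roughly like below, but I have not confirmed the exact calls (the model name and argument shapes are my assumption):</p>
<pre class="lang-py prettyprint-override"><code>import PIL.Image
import google.generativeai as genai

genai.configure(api_key="MY_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")
image = PIL.Image.open("table.png")    # the screenshot of the table
prompt = ('Extract the pairs of "CÓDIGO" and "DESIGNACIÓN DE LA MERCANCÍA" '
          'from this table as a list of dictionaries.')
response = model.generate_content([prompt, image])
print(response.text)
</code></pre>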
|
<python><google-gemini>
|
2024-12-20 00:14:27
| 1
| 530
|
Andrés Meza-Escallón
|
79,295,631
| 2,072,516
|
Getting error related to lazy loading when adding many to many relationship
|
<p>I'm struggling with SQLAlchemy. I have a model with a many to many relationship set up:</p>
<pre><code>class User(Base):
    roles: Mapped[List["Role"]] = relationship(
        secondary="user_roles", back_populates="users"
    )
</code></pre>
<p>I'm setting up some code to populate the database with test data:</p>
<pre><code> user = await register_user(
email="me@email.com", username="username", password="test1234"
)
db_session.add(user)
await db_session.commit()
admin_role = Role(name="Admin", owner_id=user.id)
db_session.add(admin_role)
user.roles.append(admin_role)
db_session.add(user)
await db_session.commit()
</code></pre>
<p>But when I do this, I get</p>
<pre><code>sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_only() here. Was IO attempted in an unexpected place?
</code></pre>
<p>There's a link with that error which suggests that the problem is lazy loading. But in this case, I didn't load anything. I'm confused as to how to establish the cross-model relationship.</p>
<p>I'm using the asyncpg driver to run on Postgres and have the sessionmaker set to use <code>autocommit=False, expire_on_commit=False</code>.</p>
|
<python><sqlalchemy>
|
2024-12-19 21:17:13
| 1
| 3,210
|
Rohit
|
79,295,576
| 1,867,328
|
Return elements from a list that match a pattern
|
<p>I have a list of words as below:</p>
<pre><code>STR = ['aa', 'dffg', 'aa2', 'AAA3']
</code></pre>
<p>I want to get a list of elements from the above list that match a string:</p>
<pre><code>to_match = 'aa'
</code></pre>
<p>I tried the code below:</p>
<pre><code>import re
[re.findall(to_match, iWord) for iWord in STR]
# [['aa'], [], ['aa'], []]
</code></pre>
<p>However, I wanted to get a list like <code>['aa', 'aa2', 'AAA3']</code>.</p>
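<p>What I am effectively after is something like the list comprehension below (case-insensitive, keeping the original elements), unless there is a neater way:</p>
<pre><code>import re

STR = ['aa', 'dffg', 'aa2', 'AAA3']
to_match = 'aa'
pattern = re.compile(to_match, re.IGNORECASE)
result = [word for word in STR if pattern.search(word)]
# ['aa', 'aa2', 'AAA3']
</code></pre>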
|
<python><regex><string><list>
|
2024-12-19 20:54:51
| 2
| 3,832
|
Bogaso
|
79,295,537
| 8,372,455
|
SSL Certificate Verification Error with Hugging Face Transformers CLI
|
<p>I'm trying to download the <code>TheBloke/falcon-40b-instruct-GPTQ</code> model using the Hugging Face Transformers CLI in PowerShell on Windows 10, but I consistently encounter an SSL certificate error. The same issue appears whether I use a Python script or PowerShell, even if I only try to download the model once. Here's the command I'm running:</p>
<pre class="lang-bash prettyprint-override"><code>transformers-cli download TheBloke/falcon-40b-instruct-GPTQ
</code></pre>
<p><strong>Error Message:</strong></p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "C:\Users\ben\AppData\Local\Programs\Python\Python312\Lib\site-packages\urllib3\connectionpool.py", line 466, in _make_request
self._validate_conn(conn)
File "C:\Users\ben\AppData\Local\Programs\Python\Python312\Lib\site-packages\urllib3\connectionpool.py", line 1095, in _validate_conn
conn.connect()
...
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1000)
</code></pre>
<p><strong>Environment Details:</strong></p>
<ul>
<li><strong>OS:</strong> Windows 10</li>
<li><strong>Python Version:</strong> 3.12</li>
<li><strong>transformers-cli version:</strong> Latest (installed via pip)</li>
<li><strong>Network Setup:</strong> I'm running this in a corporate network, but I also tried it from home with the same result.</li>
</ul>
<p><strong>What I’ve Tried:</strong></p>
<ol>
<li>Ensuring that the CA certificates are updated via <code>pip install --upgrade certifi</code>.</li>
<li>Running <code>transformers-cli</code> with SSL verification disabled:
<pre class="lang-bash prettyprint-override"><code>$env:CURL_CA_BUNDLE = ""
transformers-cli download TheBloke/falcon-40b-instruct-GPTQ
</code></pre>
This didn't resolve the issue.</li>
<li>Using <code>curl</code> with <code>--insecure</code> works to fetch individual files from Hugging Face, but this doesn't integrate with the <code>transformers-cli</code>.</li>
<li>Tried different networks, including a home Wi-Fi network.</li>
<li>Verified that my Python environment and <code>requests</code> library are up to date:
<pre class="lang-bash prettyprint-override"><code>pip install --upgrade requests urllib3 transformers huggingface_hub
</code></pre>
</li>
</ol>
<p><strong>Questions:</strong></p>
<ol>
<li>Is there a way to bypass SSL verification for the Hugging Face CLI while ensuring the download completes?</li>
<li>Could this be related to my Python installation or specific SSL settings in Windows?</li>
<li>Are there alternative methods to manually download and use the model files in the Hugging Face cache directory?</li>
</ol>
<p><strong>Additional Info:</strong></p>
<ul>
<li>I’ve also tried downloading smaller models like <code>tiiuae/falcon-7b-instruct</code> but face the same issue.</li>
<li>The issue persists across multiple virtual environments.</li>
</ul>
|
<python><huggingface-transformers><huggingface><huggingface-trainer><huggingface-hub>
|
2024-12-19 20:36:51
| 0
| 3,564
|
bbartling
|
79,295,261
| 10,832,189
|
How to integrate a Stripe Terminal Reader to POS application using Django?
|
<p>I am developing a POS system using Django. I have a Stripe account, and through the system I am developing, I can process payments using credit or debit cards, with the money being deposited into my Stripe account. This is done by typing card information such as the card number, CVV, and expiration date.</p>
<p>Now, I have decided to use a Stripe Terminal Reader to simplify the process. Instead of manually entering card details, customers can swipe, insert, or tap their card on the Terminal Reader for payment. The model I have ordered is the BBPOS WisePOS E. I powered it on, and it generated a code that I entered into my Stripe account. The terminal's online or offline status is displayed in my Stripe account.</p>
<p>The idea is that when I select 'Debit or Credit Card' as the payment method, the amount to be paid should be sent to the terminal. However, this process is not working.</p>
<p>The terminal still shows the screen displayed in the attached image.</p>
<p><a href="https://i.sstatic.net/ZuTwf5mS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZuTwf5mS.jpg" alt="enter image description here" /></a></p>
<p>I don't know if I'm missing some steps that need to be done for this to work.</p>
<p>Below are my functions:</p>
<pre><code>@method_decorator(login_required)
def post(self, request, order_id):
"""Handles POST requests to process the checkout."""
order = get_object_or_404(Order, id=order_id)
# Ensure the order has items
if not order.items.exists():
modal_message = "Cette commande ne contient aucun produit. Le paiement ne peut pas être traité."
return render(request, 'pos/orders/checkout.html', {
'order': order,
'modal_message': modal_message,
'currency': None,
'stripe_publishable_key': settings.STRIPE_PUBLISHABLE_KEY
})
# Fetch the active currency
active_currency = Currency.objects.filter(is_active=True).first()
if not active_currency:
return render(request, 'pos/orders/checkout.html', {
'order': order,
'modal_message': 'Aucune devise active trouvée pour le magasin.',
'currency': None,
'stripe_publishable_key': settings.STRIPE_PUBLISHABLE_KEY
})
# Retrieve payment data
payment_method = request.POST.get('payment_method')
received_amount = request.POST.get('received_amount')
stripe_payment_method_id = request.POST.get('stripe_payment_method_id')
reader_id = request.POST.get('reader_id') # Added for terminal payments
discount_type = request.POST.get('discount_type')
discount_amount = request.POST.get('discount_amount')
# Convert received amount to Decimal
try:
received_amount = Decimal(received_amount) if received_amount else None
except (ValueError, InvalidOperation):
return render(request, 'pos/orders/checkout.html', {
'order': order,
'modal_message': 'Montant reçu invalide.',
'currency': active_currency,
'stripe_publishable_key': settings.STRIPE_PUBLISHABLE_KEY
})
# Apply discount if any
try:
if discount_type and discount_amount:
discount_amount = Decimal(discount_amount)
order.discount_type = discount_type
order.discount_amount = discount_amount
order.update_totals() # Recalculate totals
else:
order.discount_type = None
order.discount_amount = Decimal('0.00')
except (ValueError, InvalidOperation):
return render(request, 'pos/orders/checkout.html', {
'order': order,
'modal_message': 'Montant de remise invalide.',
'currency': active_currency,
'stripe_publishable_key': settings.STRIPE_PUBLISHABLE_KEY
})
# Ensure payment amount is rounded to 2 decimals
payment_amount = round(order.total_amount_with_tax, 2)
change = None
try:
if payment_method == 'cash':
if received_amount is None or received_amount < payment_amount:
raise ValueError("Le montant reçu est insuffisant.")
change = received_amount - payment_amount
order.status = 'completed'
elif payment_method in ['credit_card', 'debit_card']:
payment_service = PaymentService()
# Create a PaymentIntent
payment_intent = payment_service.create_payment_intent(
amount=payment_amount,
currency=active_currency.code,
payment_method_types=["card_present"]
)
order.payment_intent_id = payment_intent["id"]
# Send to terminal and process payment
try:
response = payment_service.send_to_terminal(payment_intent["id"])
if response["status"] == "succeeded":
order.status = 'completed'
received_amount = payment_amount
change = Decimal('0.00')
else:
raise ValueError("Échec du paiement par terminal.")
except Exception as e:
raise ValueError(f"Erreur lors du paiement avec le terminal: {str(e)}")
except stripe.error.CardError as e:
logging.error(f"Stripe Card Error: {e.error.message}")
return render(request, 'pos/orders/checkout.html', {
'order': order,
'modal_message': f"Erreur Stripe: {e.error.message}",
'currency': active_currency,
'stripe_publishable_key': settings.STRIPE_PUBLISHABLE_KEY
})
except Exception as e:
logging.error(f"Unexpected Error: {str(e)}")
return render(request, 'pos/orders/checkout.html', {
'order': order,
'modal_message': f"Erreur lors du traitement du paiement: {str(e)}",
'currency': active_currency,
'stripe_publishable_key': settings.STRIPE_PUBLISHABLE_KEY
})
# Create the bill and update the order
bill = Bill.objects.create(
order=order,
bill_id=f'{order.id}-{timezone.now().strftime("%Y%m%d%H%M%S")}',
payment_method=payment_method,
payment_amount=payment_amount,
received_amount=received_amount,
change_amount=change
)
order.user = request.user
order.payment_method = payment_method
order.save()
# Update user profile and handle notifications
self.update_user_profile_and_notifications(order, request.user)
# Redirect to the checkout completed page
return render(request, 'pos/orders/checkout_complete.html', {
'order': order,
'bill': bill,
'received_amount': received_amount,
'change': change,
'currency': active_currency,
'stripe_publishable_key': settings.STRIPE_PUBLISHABLE_KEY,
'success_message': 'Transaction terminée avec succès.'
})
class CheckoutCompleteView(View):
@method_decorator(login_required)
def get(self, request, order_id):
order = get_object_or_404(Order, id=order_id)
# Get the currency from the first order item
currency = None
if order.items.exists():
first_order_item = order.items.first()
if first_order_item and first_order_item.batch.product.currency:
currency = first_order_item.batch.product.currency
return render(request, 'pos/orders/checkout_complete.html', {
'order': order,
'currency': currency
})
@method_decorator(login_required)
def post(self, request, order_id):
order = get_object_or_404(Order, id=order_id)
print_receipt = request.POST.get('print_receipt') == 'yes'
if print_receipt:
return redirect('posApp:generate_pdf_receipt', order_id=order.id)
# If not printing the receipt, just render the checkout complete page
context = {
'order': order,
'currency': order.items.first().batch.product.currency if order.items.exists() else None,
}
return render(request, 'pos/checkout_complete.html', context)
# Backend Endpoint (send_to_terminal)
# Creating a backend view (send_to_terminal) to handle the terminal communication"
@login_required
def send_to_terminal(request, order_id):
"""
Send the payment amount to the terminal.
"""
if request.method == "POST":
try:
amount = Decimal(request.POST.get('amount', 0))
if amount <= 0:
return JsonResponse({'success': False, 'error': 'Montant non valide.'})
# Create a PaymentIntent
payment_service = PaymentService()
payment_intent = payment_service.create_payment_intent(
amount=amount,
currency="CAD",
payment_method_types=["card_present"]
)
# Fetch the online reader dynamically
readers = stripe.Terminal.Reader.list(status="online").data
if not readers:
return JsonResponse({'success': False, 'error': 'Aucun lecteur en ligne trouvé.'})
reader = readers[0] # Use the first online reader
# Send the payment to the terminal
response = stripe.Terminal.Reader.process_payment_intent(
reader["id"], {"payment_intent": payment_intent["id"]}
)
if response.get("status") == "succeeded":
return JsonResponse({'success': True, 'payment_intent_id': payment_intent["id"]})
else:
return JsonResponse({'success': False, 'error': response.get("error", "Erreur du terminal.")})
except Exception as e:
return JsonResponse({'success': False, 'error': str(e)})
</code></pre>
<p>I have this payment service code as well:</p>
<pre><code>import stripe
import logging
from decimal import Decimal
from django.conf import settings
class PaymentService:
def __init__(self):
"""Initialize the PaymentService with the Stripe API key."""
stripe.api_key = settings.STRIPE_SECRET_KEY
self.logger = logging.getLogger(__name__)
def get_online_reader(self):
"""
Fetch the first online terminal reader from Stripe.
:return: Stripe Terminal Reader object.
:raises: ValueError if no online reader is found.
"""
try:
readers = stripe.Terminal.Reader.list(status="online").data
if not readers:
self.logger.error("Aucun lecteur de terminal en ligne trouvé.")
raise ValueError("Aucun lecteur de terminal en ligne trouvé.")
return readers[0] # Return the first online reader
except stripe.error.StripeError as e:
self.logger.error(f"Erreur Stripe lors de la récupération des lecteurs: {str(e)}")
raise Exception(f"Erreur Stripe: {str(e)}")
    def create_payment_intent(self, amount, currency="CAD", payment_method_types=None, capture_method="automatic"):
"""
Create a payment intent for a terminal transaction.
:param amount: Decimal, total amount to charge.
:param currency: str, currency code (default: "CAD").
:param payment_method_types: list, payment methods (default: ["card_present"]).
:param capture_method: str, capture method for the payment intent.
:return: Stripe PaymentIntent object.
"""
try:
if payment_method_types is None:
payment_method_types = ["card_present"]
payment_intent = stripe.PaymentIntent.create(
amount=int(round(amount, 2) * 100), # Convert to cents
currency=currency.lower(),
payment_method_types=payment_method_types,
capture_method=capture_method
)
self.logger.info(f"PaymentIntent created: {payment_intent['id']}")
return payment_intent
except stripe.error.StripeError as e:
self.logger.error(f"Stripe error while creating PaymentIntent: {str(e)}")
raise Exception(f"Stripe error: {str(e)}")
except Exception as e:
self.logger.error(f"Unexpected error while creating PaymentIntent: {str(e)}")
raise Exception(f"Unexpected error: {str(e)}")
def send_to_terminal(self, payment_intent_id):
"""
Send a payment intent to the online terminal reader for processing.
:param payment_intent_id: str, ID of the PaymentIntent.
:return: Stripe response from the terminal reader.
"""
try:
# Retrieve the Reader ID from settings
reader_id = settings.STRIPE_READER_ID # Ensure this is correctly set in your configuration
# Send the payment intent to the terminal
response = stripe.Terminal.Reader.process_payment_intent(
reader_id, {"payment_intent": payment_intent_id}
)
self.logger.info(f"PaymentIntent {payment_intent_id} sent to reader {reader_id}.")
return response
except stripe.error.StripeError as e:
self.logger.error(f"Erreur Stripe lors de l'envoi au terminal: {str(e)}")
raise Exception(f"Erreur Stripe: {str(e)}")
except Exception as e:
self.logger.error(f"Unexpected error while sending to terminal: {str(e)}")
raise Exception(f"Unexpected error: {str(e)}")
</code></pre>
<p>Here is my checkout template's code:</p>
<pre><code> <!-- Content Section -->
<div class="content">
<div class="row">
<div class="col-md-8">
<label align="center">Commande N° {{ order.id }}</label>
<div class="table-responsive">
<table class="table table-striped">
<thead>
<tr>
<th>Produit</th>
<th>Quantité</th>
<th>Prix unitaire</th>
<th>Total</th>
</tr>
</thead>
<tbody>
{% for item in order.items.all %}
<tr>
<td>{{ item.product_batch.product.name }}</td>
<td>{{ item.quantity }}</td>
<td>
{% if item.product_batch.discounted_price %}
{{ item.product_batch.discounted_price }} {{ currency.symbol }}
{% else %}
{{ item.product_batch.price }} {{ currency.symbol }}
{% endif %}
</td>
<td>
{% if item.product_batch.discounted_price %}
{{ item.quantity|multiply:item.product_batch.discounted_price|floatformat:2 }} {{ currency.symbol }}
{% else %}
{{ item.quantity|multiply:item.product_batch.price|floatformat:2 }} {{ currency.symbol }}
{% endif %}
</td>
</tr>
{% endfor %}
</tbody>
<tfoot>
<tr>
<td colspan="3" class="text-right"><strong>Total à payer:</strong></td>
<td><strong>{{ order.total_amount_with_tax|floatformat:2 }} {{ currency.symbol }}</strong></td>
</tr>
</tfoot>
</table>
</div>
</div>
<!-- Payment Section -->
<div class="col-md-4">
<form id="checkout-form" method="post">
<input type="hidden" id="stripe_payment_method_id" name="stripe_payment_method_id" value="">
{% csrf_token %}
<!-- Mode de Paiement -->
<div class="form-group">
<label for="payment_method">Mode de Paiement</label>
<select class="form-control" id="payment_method" name="payment_method" required>
<option value="cash" selected>Cash</option>
<option value="credit_card">Credit Card</option>
<option value="debit_card">Debit Card</option>
<option value="holo">Holo</option>
<option value="huri_money">Huri Money</option>
</select>
</div>
<!-- Discount Type -->
<div class="form-group">
<label for="discount_type">Type de réduction</label>
<select class="form-control" id="discount_type" name="discount_type">
<option value="">Aucune</option>
<option value="rabais">Rabais</option>
<option value="remise">Remise</option>
<option value="ristourne">Ristourne</option>
</select>
</div>
<!-- Discount Amount -->
<div class="form-group">
<label for="discount_amount">Montant de la réduction</label>
<input type="number" class="form-control" id="discount_amount" name="discount_amount" min="0" step="0.01" value="0.00">
</div>
<!-- Montant reçu (for cash payment) -->
<div class="form-group" id="cash-payment">
<label for="received_amount">Montant reçu</label>
<input type="number" class="form-control" id="received_amount" name="received_amount" min="0" step="0.01">
<small id="change" class="form-text text-muted"></small>
</div>
<!-- Payment card fields for Stripe -->
<div id="card-element" class="form-group" style="display:none;">
<!-- A Stripe Element will be inserted here. -->
</div>
<div id="card-errors" role="alert" class="form-text text-danger"></div>
<button type="submit" class="btn btn-success btn-block">Confirmer la commande</button>
</form>
</div>
</div>
</div>
<!-- Stripe Integration & Checkout Form Handling -->
<script src="https://js.stripe.com/v3/"></script>
<script src="https://js.stripe.com/terminal/v1/"></script>
<script>
$(document).ready(function () {
console.log("Initializing Stripe...");
try {
// Initialize Stripe
const stripe = Stripe("{{ stripe_publishable_key }}");
const elements = stripe.elements();
// Create a card element
const card = elements.create('card', {
style: {
base: {
fontSize: '16px',
color: '#32325d',
'::placeholder': { color: '#aab7c4' }
},
invalid: {
color: '#fa755a',
iconColor: '#fa755a'
}
}
});
// Mount the card element
card.mount('#card-element');
console.log("Card element mounted successfully.");
// Function to toggle payment fields
function togglePaymentFields() {
const paymentMethod = $('#payment_method').val();
console.log("Selected payment method:", paymentMethod);
if (paymentMethod === 'cash') {
$('#cash-payment').show();
$('#card-element').hide();
card.unmount(); // Ensure card fields are unmounted
$('#received_amount').val('');
$('#change').text('');
} else if (paymentMethod === 'credit_card' || paymentMethod === 'debit_card') {
$('#cash-payment').hide();
$('#card-element').show();
card.mount('#card-element'); // Remount card fields
} else {
$('#cash-payment').hide();
$('#card-element').hide();
card.unmount();
}
}
// Initialize the field toggle
togglePaymentFields();
// Trigger toggle on payment method change
$('#payment_method').change(function () {
togglePaymentFields();
});
// Update change amount dynamically
$('#received_amount').on('input', function () {
const received = parseFloat($(this).val());
const total = parseFloat("{{ order.total_amount_with_tax }}");
if (!isNaN(received) && received >= total) {
const change = received - total;
$('#change').text('Montant à retourner: ' + change.toFixed(2) + ' {{ currency.symbol }}');
} else {
$('#change').text('');
}
});
$(document).ready(function () {
$('#checkout-form').on('submit', function (e) {
e.preventDefault();
const paymentMethod = $('#payment_method').val();
console.log("Form submission triggered. Selected payment method:", paymentMethod);
// Handle cash payment
if (paymentMethod === 'cash') {
const received = parseFloat($('#received_amount').val());
const total = parseFloat("{{ order.total_amount_with_tax }}");
const discountAmount = parseFloat($('#discount_amount').val()) || 0;
console.log("Received amount:", received, "Total amount:", total, "Discount amount:", discountAmount);
const finalTotal = total - discountAmount;
if (isNaN(received) || received < finalTotal) {
alert('Le montant reçu est insuffisant.');
return; // Prevent form submission
}
console.log("Cash payment validated. Submitting form...");
this.submit(); // Proceed with the form submission for cash payment
}
// Handle credit or debit card payment
else if (paymentMethod === 'credit_card' || paymentMethod === 'debit_card') {
const totalAmount = parseFloat("{{ order.total_amount_with_tax }}");
console.log("Initiating terminal payment for:", totalAmount);
// Send the payment amount to the terminal
$.ajax({
type: 'POST',
url: "{% url 'posApp:send_to_terminal' order.id %}",
data: {
'csrfmiddlewaretoken': '{{ csrf_token }}',
'amount': totalAmount
},
success: function (response) {
if (response.success) {
console.log("Amount successfully sent to terminal. Proceeding with payment confirmation...");
alert('Paiement traité avec succès.');
window.location.href = "{% url 'posApp:checkout_complete' order.id %}";
} else {
alert(response.error || "Une erreur s'est produite lors de l'envoi au terminal.");
}
},
error: function (xhr, status, error) {
console.error("Error sending payment to terminal:", error);
alert('Une erreur s\'est produite lors de l\'envoi au terminal: ' + error);
}
});
}
// Handle invalid payment method
else {
alert('Méthode de paiement invalide.');
console.error("Invalid payment method selected:", paymentMethod);
}
});
});
} catch (err) {
console.error("Error initializing Stripe or setting up payment fields:", err);
alert("An error occurred while setting up the payment system. Check the console for details.");
}
});
</script>
</code></pre>
<p>During the Terminal Reader configuration, I added the registration code and location ID to my Stripe account. Are there any other settings that need to be configured, such as using the terminal's serial number in the POS system or any other required steps?</p>
|
<python><django><stripe-payments>
|
2024-12-19 18:23:46
| 1
| 395
|
Mohamed Abdillah
|
79,295,148
| 9,315,690
|
How do I type hint a frame object in Python?
|
<p>I'm type hinting a large existing Python codebase, and in one part it sets a signal handler using <a href="https://docs.python.org/3/library/signal.html#signal.signal" rel="nofollow noreferrer"><code>signal.signal</code></a>. The signal handler is a custom function defined in the codebase, so I need to type hint it myself. However, while I can guess based on the description of <code>signal.signal</code> that the first parameter is an integer, the second one is just described as "<code>None</code> or a frame object" in the documentation.</p>
<p>I have no idea what a frame object is, but thankfully the aforementioned documentation links to a section called <a href="https://docs.python.org/3/reference/datamodel.html#frame-objects" rel="nofollow noreferrer">"Frame objects"</a> and the <a href="https://docs.python.org/3/library/inspect.html#module-inspect" rel="nofollow noreferrer"><code>inspect</code> module documentation</a>. However, after looking through both of these, I still don't really find any information about how to actually type hint a frame object, just information that might be useful to someone writing code that interacts with frame objects without typing it.</p>
<p>So, what type do I use for frame object type hints in Python?</p>
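<p>My best guess so far, which I have not been able to confirm from the docs, is that the frame parameter corresponds to <code>types.FrameType</code>, along these lines:</p>
<pre><code>import signal
from types import FrameType

def handler(signum: int, frame: FrameType | None) -> None:
    ...

signal.signal(signal.SIGTERM, handler)
</code></pre>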
|
<python><python-typing><mypy>
|
2024-12-19 17:33:11
| 3
| 3,887
|
Newbyte
|
79,295,143
| 1,269,646
|
sqlalchemy - how to get attributes from the query response into an object
|
<p>I am trying to get some calculated value from a SQL query into the object that should be instantiated; however, access to those values seems impossible.</p>
<p>given three tables: A, B, A_to_B</p>
<ul>
<li>A - some model
<ul>
<li>somevalues</li>
<li>"totalsum"</li>
</ul>
</li>
<li>B - some model</li>
<li>A_to_B - association table with extra values
<ul>
<li>a_id</li>
<li>b_id</li>
<li>count</li>
</ul>
</li>
</ul>
<p>I need to do a query like</p>
<pre><code>select a.*, sum(A_to_B.count) as totalsum from A, A_to_B where a.id = A_to_B.a_id group by a.id;
</code></pre>
<p>and other variations... SQLAlchemy throws an error:</p>
<pre><code>Label name totalsum is being renamed to an anonymous label due to disambiguation which is not supported right now. Please use unique names for explicit labels
</code></pre>
<p>Is there any way to get SQLAlchemy to bind the values of the association table to a model?</p>
<p>thanks in advance!</p>
|
<python><sqlalchemy><associations>
|
2024-12-19 17:31:19
| 2
| 1,090
|
hlv
|
79,295,111
| 8,554,611
|
pyaudio produces no sound when using explicit device
|
<p>This code results in no sound output. Neither it sounds nor is indicated anywhere.</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import suppress
from typing import Mapping
import pyaudio
import numpy as np
from numpy.typing import NDArray
freq: float = 130.0
p: pyaudio.PyAudio = pyaudio.PyAudio()
def main() -> None:
stream: pyaudio.Stream
info: Mapping[str, str | int | float] = {
"index": 0,
"name": "HDA Intel PCH: ALC892 Digital (hw:1,1)",
"maxInputChannels": 0,
"maxOutputChannels": 2,
"defaultSampleRate": 44100.0,
}
stream = p.open(
format=pyaudio.paFloat32,
channels=1,
rate=round(info["defaultSampleRate"]),
output=True,
output_device_index=info["index"],
)
phase: NDArray[np.float32] = np.linspace(
start=0.0,
stop=2.0 * np.pi,
num=round(info["defaultSampleRate"]),
endpoint=False,
dtype=np.float32,
)
wf: NDArray[np.float32] = np.sin(phase * freq)
# about 140 seconds of sound
with suppress(KeyboardInterrupt):
for _ in range(100):
stream.write(wf.tobytes())
stream.stop_stream()
stream.close()
main()
p.terminate()
</code></pre>
<p>The <code>info</code> value is actually what <code>p.get_device_info_by_index(0)</code> gives.</p>
<p>However, when I use other <code>info</code> values, corresponding to <code>pulse</code> (#9 for me) and <code>default</code> (#10) devices, the same code works fine. The sine is emitted via the same physical device as I specifically set earlier. With a USB sound card, it's the same: explicitly, it's silent, but when I set it as the default device, it works.</p>
<hr />
<p><code>stderr</code> shows the same errors regardless of the device:</p>
<pre class="lang-none prettyprint-override"><code>ALSA lib pcm_dmix.c:1000:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib confmisc.c:1377:(snd_func_refer) Unable to find definition 'cards.0.pcm.hdmi.0:CARD=0,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:5205:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5728:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM hdmi
ALSA lib confmisc.c:1377:(snd_func_refer) Unable to find definition 'cards.0.pcm.hdmi.0:CARD=0,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:5205:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5728:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM hdmi
ALSA lib confmisc.c:1377:(snd_func_refer) Unable to find definition 'cards.0.pcm.modem.0:CARD=0'
ALSA lib conf.c:5205:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5728:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline:CARD=0,DEV=0
ALSA lib confmisc.c:1377:(snd_func_refer) Unable to find definition 'cards.0.pcm.modem.0:CARD=0'
ALSA lib conf.c:5205:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5728:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline:CARD=0,DEV=0
ALSA lib confmisc.c:1377:(snd_func_refer) Unable to find definition 'cards.0.pcm.modem.0:CARD=0'
ALSA lib conf.c:5205:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5728:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM phoneline
ALSA lib confmisc.c:1377:(snd_func_refer) Unable to find definition 'cards.0.pcm.modem.0:CARD=0'
ALSA lib conf.c:5205:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5728:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2722:(snd_pcm_open_noupdate) Unknown PCM phoneline
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
ALSA lib pcm_oss.c:404:(_snd_pcm_oss_open) Cannot open device /dev/dsp
ALSA lib pcm_oss.c:404:(_snd_pcm_oss_open) Cannot open device /dev/dsp
ALSA lib pcm_a52.c:1036:(_snd_pcm_a52_open) a52 is only for playback
ALSA lib confmisc.c:160:(snd_config_get_card) Invalid field card
ALSA lib pcm_usb_stream.c:481:(_snd_pcm_usb_stream_open) Invalid card 'card'
ALSA lib confmisc.c:160:(snd_config_get_card) Invalid field card
ALSA lib pcm_usb_stream.c:481:(_snd_pcm_usb_stream_open) Invalid card 'card'
ALSA lib pcm_dmix.c:1000:(snd_pcm_dmix_open) unable to open slave
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for -1, skipping unlock
</code></pre>
<p>All of them are from <code>pyaudio.PyAudio()</code>.</p>
<hr />
<pre class="lang-none prettyprint-override"><code># lsb_release -a
LSB Version:    n/a
Distributor ID: ManjaroLinux
Description: Manjaro Linux
Release: 24.2.1
Codename: Yonada
$ python -c "import sys; print('Python %s on %s' % (sys.version, sys.platform))"
Python 3.12.7 (main, Oct 1 2024, 11:15:50) [GCC 14.2.1 20240910] on linux
</code></pre>
|
<python><pyaudio>
|
2024-12-19 17:18:13
| 0
| 796
|
StSav012
|
79,295,062
| 8,769,985
|
formatting output of gvar
|
<p>I use the <a href="https://pypi.org/project/gvar/" rel="nofollow noreferrer">gvar</a> library in python. This example, taken from the <a href="https://gvar.readthedocs.io/en/latest/overview.html#formatting-gvars-for-printing-display" rel="nofollow noreferrer">documentation</a> gives an error:</p>
<pre><code>>>> import gvar
>>> x = gvar.gvar(3.14159, 0.0236)
>>> print(f'{x:.2g}', f'{x:.3f}', f'{x:.4e}', f'{x:.^20.2g}')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "src/gvar/_gvarcore.pyx", line 92, in gvar._gvarcore.GVar.__format__
ValueError: Unknown format code 'g' for object of type 'str'
>>>
</code></pre>
<p>The version of gvar I am using is 11.11.12.</p>
<p>How should I fix the code with this version of gvar?</p>
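<p>For reference, a minimal workaround sketch (an assumption on my part, not the gvar documentation's recommendation): bypass <code>GVar.__format__</code> by formatting the <code>mean</code> and <code>sdev</code> attributes directly, or fall back to <code>str()</code>, until an upgrade to a gvar release that supports these format specs is possible.</p>
<pre class="lang-py prettyprint-override"><code>import gvar

x = gvar.gvar(3.14159, 0.0236)

# Workaround sketch: format the components yourself instead of passing a
# format spec to the GVar object (which fails in gvar 11.11.12).
print(f"{x.mean:.3f} +- {x.sdev:.3f}")   # 3.142 +- 0.024
print(f"{x.mean:.4e}")                   # 3.1416e+00
print(str(x))                            # gvar's default rendering, e.g. 3.142(24)
</code></pre>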
|
<python><printing><gvar>
|
2024-12-19 16:59:02
| 0
| 7,617
|
francesco
|
79,295,000
| 7,124,155
|
How can I group these related rows using PySpark?
|
<p>I need to parse some text data in Python PySpark on Databricks. The data look like this:</p>
<pre><code>df = spark.createDataFrame([("new entry", 1, 123),
("acct", 2, None),
("cust ID", 3, None),
("new entry", 4, 456),
("acct", 5, None),
("more text", 6, None),
("cust ID", 7, None)],
("value", "line num", "tracking ID"))
</code></pre>
<p>In the screenshot below I manually added a "need grouping" column to illustrate: rows from "new entry" to "cust ID" form one group, followed by another. The groups are not all the same length.</p>
<p><a href="https://i.sstatic.net/Gs4y32cQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gs4y32cQ.png" alt="enter image description here" /></a></p>
<p>I need to match up cust ID with the tracking ID a few lines before, so something like this:</p>
<p><a href="https://i.sstatic.net/nuk0ZEsP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuk0ZEsP.png" alt="enter image description here" /></a></p>
<p>How can I match cust ID with tracking ID? I thought of a window function but I'm not sure how to create the needed grouping.</p>
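<p>For reference, a minimal sketch of one possible approach (my own suggestion, using only the columns shown above): build a group id with a running count of <code>"new entry"</code> rows in <code>line num</code> order, then propagate each group's <code>tracking ID</code> to its <code>cust ID</code> row.</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import functions as F, Window

# Group id: cumulative count of "new entry" markers in line-number order.
w = Window.orderBy("line num")
grouped = df.withColumn(
    "group_id",
    F.sum(F.when(F.col("value") == "new entry", 1).otherwise(0)).over(w),
)

# Within each group, the first non-null tracking ID is the one set on the
# "new entry" row; attach it to the "cust ID" rows.
wg = Window.partitionBy("group_id").orderBy("line num")
result = (
    grouped
    .withColumn("group_tracking_id", F.first("tracking ID", ignorenulls=True).over(wg))
    .filter(F.col("value") == "cust ID")
    .select("value", "line num", "group_tracking_id")
)
result.show()
</code></pre>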
|
<python><pyspark><databricks>
|
2024-12-19 16:37:39
| 1
| 1,329
|
Chuck
|
79,294,915
| 3,130,747
|
How to read multiple parquet files from a gcp storage bucket using duckdb and python
|
<ul>
<li>python version: Python 3.12.3</li>
<li>duckdb version: "1.1.3"</li>
</ul>
<p>I'm trying to get duckdb working to read multiple parquet files from a storage bucket dir using a glob pattern.</p>
<p>I can load a single file using the following:</p>
<pre class="lang-py prettyprint-override"><code># works
duckdb.register_filesystem(filesystem("gcs"))
test_file = "gcs:///some_bucket/some_dir/some_file.parquet"
data = duckdb.sql(f"SELECT * FROM read_parquet('{test_file}')")
</code></pre>
<p>Based on <a href="https://duckdb.org/docs/data/multiple_files/overview.html" rel="nofollow noreferrer">https://duckdb.org/docs/data/multiple_files/overview.html</a> I expect to be able to use the following, but it fails:</p>
<pre class="lang-py prettyprint-override"><code># fail
duckdb.register_filesystem(filesystem("gcs"))
test_file = "gcs:///some_bucket/some_dir/*.parquet"
data = duckdb.sql(f"SELECT * FROM read_parquet('{test_file}')")
</code></pre>
<p>With the error:</p>
<pre><code>duckdb.duckdb.IOException: IO Error: No files found that match the pattern "gcs:///some_bucket/some_dir/*.parquet"
</code></pre>
<p>I know that there <em>is</em> a parquet file which would match that glob though, based on the initial (working) version which explicitly states the file to query.</p>
<h2>Loading multiple via a list</h2>
<p>Note, when I tried the following:</p>
<pre class="lang-py prettyprint-override"><code> a = "gs://some_bucket/some_dir/a.parquet"
b = "gs://some_bucket/some_dir/b.parquet"
data = duckdb.sql(f"SELECT * FROM read_parquet(['{a}', '{b}'], union_by_name=true)")
</code></pre>
<p>It works, though I'm not sure if this is relevant to my initial issue re globs or not.</p>
<h1>Edit: ListObjectV2 required for file glob</h1>
<p>Following answer:</p>
<p><em>> The documentation for the httpfs extension implies that globbing is only supported for s3. See <a href="https://duckdb.org/docs/extensions/httpfs/overview.html" rel="nofollow noreferrer">https://duckdb.org/docs/extensions/httpfs/overview.html</a></em></p>
<p>I was confused because (to me) the docs here: <a href="https://duckdb.org/docs/extensions/httpfs/s3api" rel="nofollow noreferrer">https://duckdb.org/docs/extensions/httpfs/s3api</a> seemed to imply that Google storage was also supported. Looking again (at <a href="https://duckdb.org/docs/extensions/httpfs/s3api" rel="nofollow noreferrer">https://duckdb.org/docs/extensions/httpfs/s3api</a>), they state:</p>
<pre><code>File glob ListObjectV2
</code></pre>
<p>Searching for support on this brings up:</p>
<ul>
<li>(7 years old) <a href="https://stackoverflow.com/questions/45638871/is-the-listobjectsv2-api-call-implemented-in-google-cloud-storage">Is the ListObjectsV2 api call implemented in Google Cloud Storage?</a> and the</li>
<li><a href="https://issuetracker.google.com/issues/155109631" rel="nofollow noreferrer">https://issuetracker.google.com/issues/155109631</a></li>
</ul>
<p>At the end of the <a href="https://issuetracker.google.com/issues/155109631" rel="nofollow noreferrer">https://issuetracker.google.com/issues/155109631</a> thread there's a link to: <a href="https://cloud.google.com/storage/docs/release-notes#July_1k2_2021" rel="nofollow noreferrer">https://cloud.google.com/storage/docs/release-notes#July_1k2_2021</a> which states:</p>
<pre><code>List object V2 for the XML APIPreview launched.
List object V2 provides improved interoperability with Amazon S3 tools and libraries.
</code></pre>
<p>I don't know enough about this to know if that's what duckdb needs.</p>
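<p>For reference, a minimal workaround sketch (my assumption, building on the fact that the explicit file list already works): expand the glob with the registered fsspec/gcsfs filesystem in Python and hand DuckDB the resulting list. Bucket and directory names are the placeholders from the question.</p>
<pre class="lang-py prettyprint-override"><code>import duckdb
from fsspec import filesystem

fs = filesystem("gcs")
duckdb.register_filesystem(fs)

# Expand the glob client-side with gcsfs, then query the explicit list.
files = ["gcs://" + p for p in fs.glob("some_bucket/some_dir/*.parquet")]
file_list = ", ".join(f"'{f}'" for f in files)

data = duckdb.sql(f"SELECT * FROM read_parquet([{file_list}], union_by_name=true)")
print(data)
</code></pre>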
|
<python><google-cloud-storage><parquet><duckdb>
|
2024-12-19 16:02:34
| 1
| 4,944
|
baxx
|
79,294,881
| 7,959,614
|
How can I get a certain number of evenly spaced points along the octagon perimeter?
|
<p>I want to get the coordinates of a number of points that together form an octagon.
For a circle this is done easily as follows:</p>
<pre><code>import numpy as np
n = 100
x = np.cos(np.linspace(0, 2 * np.pi, n))
y = np.sin(np.linspace(0, 2 * np.pi, n))
coordinates = list(zip(x, y))
</code></pre>
<p>By changing <code>n</code> I can increase/decrease the "angularity". Now I want to do the same for an octagon. I know that an octagon has 8 sides and that the exterior angle at each corner is 45 degrees.
Let's assume that the perimeter of the octagon is 30.72m. Each side therefore has a length of 3.84m.</p>
<pre><code>perimeter = 30.72
n_sides = 8
angle = 45
</code></pre>
<p>How can I get <code>n</code> coordinates that represent this octagon?</p>
<p><strong>Edit:</strong>
With the help of the answers of @lastchance and @mozway I am able to generate an octagon. My goal is to get evenly-spaced <code>n</code> coordinates from the perimeter of this octagon.
If <code>n = 8</code> these coordinates correspond to the corners of the octagon, but I'm interested in cases where <code>n > 8</code></p>
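<p>For reference, a minimal sketch of one way to do this (assuming a regular octagon centred at the origin with the stated perimeter): compute the 8 corners, then interpolate <code>n</code> points at equal arc-length spacing along the closed outline.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def regular_polygon_points(n, perimeter=30.72, n_sides=8):
    # Circumradius from the side length: side = 2 * R * sin(pi / n_sides)
    radius = (perimeter / n_sides) / (2 * np.sin(np.pi / n_sides))
    angles = np.arange(n_sides + 1) * 2 * np.pi / n_sides      # closed ring of corners
    corners = np.column_stack([radius * np.cos(angles), radius * np.sin(angles)])

    # Cumulative arc length along the outline, then n evenly spaced targets.
    seg_lengths = np.linalg.norm(np.diff(corners, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_lengths)])
    targets = np.linspace(0.0, arc[-1], n, endpoint=False)

    x = np.interp(targets, arc, corners[:, 0])
    y = np.interp(targets, arc, corners[:, 1])
    return list(zip(x, y))

coordinates = regular_polygon_points(100)   # 100 evenly spaced points on the octagon
</code></pre>
<p>For <code>n = 8</code> the targets fall exactly on the corners, and for larger <code>n</code> the points are spread evenly along the sides.</p>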
|
<python><matplotlib>
|
2024-12-19 15:52:27
| 6
| 406
|
HJA24
|
79,294,784
| 13,090,289
|
ImportError in Hugging Face Integration: `LocalEntryNotFoundError` when using Llama Index and transformers
|
<p>I am working on an AI project with <code>Llama Index</code> and the <code>transformers</code> library, integrating Hugging Face models. Below is my code snippet:</p>
<pre class="lang-py prettyprint-override"><code>from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceInferenceAPIEmbedding
import tiktoken
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler
from llama_index.core.service_context import ServiceContext
from llama_index.core.postprocessor import SentenceTransformerRerank
from llama_index.llms.huggingface import HuggingFaceInferenceAPI
</code></pre>
<p>However, when I run the code, I get the following error:</p>
<p>ImportError: cannot import name 'LocalEntryNotFoundError' from 'huggingface_hub.errors' (/usr/local/lib/python3.10/dist-packages/huggingface_hub/errors.py)</p>
<p>During handling of the above exception, this occurred:</p>
<p>RuntimeError: Failed to import transformers.trainer because of the following error: cannot import name 'LocalEntryNotFoundError' from 'huggingface_hub.errors'</p>
<p><strong>What I've tried:</strong></p>
<ol>
<li>Verified the installation versions of huggingface_hub and transformers.</li>
<li>Ensured the proper API key and models are set.</li>
<li>Upgrading the dependencies</li>
</ol>
<pre class="lang-py prettyprint-override"><code>%pip install --upgrade transformers huggingface_hub llama-index-llms-huggingface llama-index-llms-huggingface-api
</code></pre>
|
<python><huggingface-transformers><importerror><huggingface><llama-index>
|
2024-12-19 15:23:53
| 1
| 436
|
Deepak Singh Rajput
|
79,294,628
| 16,611,809
|
How to dynamically adjust colors depending on the dark mode?
|
<p>The default header color for DataFrames (<code>render.DataGrid()</code>, <code>render.DataTable()</code>) is a very light grey (probably <code>gainsboro</code>). If I switch an app to dark mode, the DataFrame header becomes unreadable. Is there any way to tell Shiny, for example via a dict, what color to use instead of <code>gainsboro</code> (or for a specific object) when I switch to dark mode?</p>
<p>Here's an (not so minimal) MRE:</p>
<pre class="lang-py prettyprint-override"><code>from shiny import App, render, ui
import polars as pl
app_ui = ui.page_fillable(
ui.layout_sidebar(
ui.sidebar(ui.input_dark_mode()),
ui.layout_columns(
ui.card(
ui.card_header("card_header1", style='background:gainsboro'),
ui.output_data_frame("card1"),
full_screen=True
),
col_widths=12
)
)
)
def server(input, output, session):
@output
@render.data_frame
def card1():
return render.DataGrid(pl.DataFrame({'a': ['a','b','c','d']}), filters=True)
app = App(app_ui, server)
</code></pre>
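<p>For reference, a hedged sketch of one possible approach (assuming the dark-mode toggle flips Bootstrap 5's <code>data-bs-theme</code> attribute, which I have not verified against the Shiny docs): scope the header colour with CSS instead of an inline style, so it can change per theme.</p>
<pre class="lang-py prettyprint-override"><code>from shiny import ui

# CSS that styles the card header differently in light and dark mode; include
# this inside ui.page_fillable(...) and drop the inline
# style='background:gainsboro' from ui.card_header.
theme_aware_css = ui.tags.style(
    """
    .card-header { background: gainsboro; }
    [data-bs-theme="dark"] .card-header { background: #343a40; color: #f8f9fa; }
    """
)
</code></pre>
<p>The DataGrid's own header would need an analogous selector targeting its header cells; the exact class names for that component are not something I can confirm here.</p>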
|
<python><darkmode><py-shiny>
|
2024-12-19 14:28:32
| 1
| 627
|
gernophil
|
79,294,574
| 2,859,206
|
In Pandas, how to reference and use a value from a dictionary based on column AND index values in a dataframe?
|
<p>I have data about how many times people are sick in certain locations (locations A and B) at certain times (index of dates). I need to divide each value by the population in that location (column) AND at that time (index), which is looked up in a separate dictionary.</p>
<p>Eg dataframe:</p>
<pre><code>import pandas as pd
data = [{'A': 1, 'B': 3}, {'A': 2, 'B': 20}, {'A': "Unk", 'B': 50}]
df = pd.DataFrame(data, index=[pd.to_datetime("2019-12-31")
, pd.to_datetime("2020-12-30")
, pd.to_datetime("2020-12-31")])
Out:
A B
2019-12-31 1 3
2020-12-30 2 20
2020-12-31 Unk 50
</code></pre>
<p>Population dictionary (location_year):</p>
<pre><code>dic = {"A_2019": 100, "B_2019": 200, "A_2020": 120, "B_2020": 150}
</code></pre>
<p>While it's not necessary to have the output in the same df, the output I'm trying to achieve would be:</p>
<pre><code> A B A1 B1
2019-12-31 1 3 0.01 0.015
2020-12-30 2 20 0.017 0.133
2020-12-31 Unk 50 nan 0.333
</code></pre>
<p>I've tried lots of different approaches, but almost always get an unhashable type error.</p>
<pre><code>for col in df.columns:
df[col + "1"] = df[col]/dic[col + "_" + df.index.strftime("%Y")]
Out: TypeError: unhashable type: 'Index'
</code></pre>
<p>I guess I don't understand how pandas is passing the df.index value to the dictionary(?). Can this be fixed, or is another approach necessary?</p>
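<p>For reference, a minimal sketch of one possible fix (my own suggestion, reusing <code>df</code> and <code>dic</code> as defined above): the error comes from using a whole <code>Index</code> as a single dictionary key, so build the key row by row instead, and coerce non-numeric entries such as <code>"Unk"</code> to NaN before dividing.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

years = df.index.strftime("%Y")
for col in ["A", "B"]:
    # Look up the "LOCATION_YEAR" population for every row; missing keys become NaN.
    population = pd.Series(
        [dic.get(f"{col}_{year}", np.nan) for year in years], index=df.index
    )
    df[col + "1"] = pd.to_numeric(df[col], errors="coerce") / population

print(df)
</code></pre>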
|
<python><pandas><dataframe><dictionary>
|
2024-12-19 14:10:46
| 2
| 2,490
|
DrWhat
|
79,294,507
| 7,334,912
|
Diffusers pipeline Instant ID with Ipadapter
|
<p>I want to use an implementation of <a href="https://github.com/instantX-research/InstantID" rel="nofollow noreferrer">InstantID</a> with <a href="https://huggingface.co/docs/diffusers/main/using-diffusers/ip_adapter" rel="nofollow noreferrer">Ipadapter</a> using the Diffusers library.
So far I have:</p>
<pre><code>import diffusers
from diffusers.utils import load_image
from diffusers.models import ControlNetModel
from transformers import CLIPVisionModelWithProjection
# Custom diffusers implementation Instantid & insightface
from insightface.app import FaceAnalysis
from pipeline_stable_diffusion_xl_instantid import StableDiffusionXLInstantIDPipeline, draw_kps
# Other dependencies
import cv2
import torch
import numpy as np
from PIL import Image
from compel import Compel, ReturnedEmbeddingsType
app_face = FaceAnalysis(name='antelopev2', root='./', providers=['CPUExecutionProvider', 'CPUExecutionProvider']) #CUDAExecutionProvider
app_face.prepare(ctx_id=0, det_size=(640, 640))
# prepare models under ./checkpoints
face_adapter = "./models/instantid/ip-adapter.bin"
controlnet_path = "./models/instantid/ControlNetModel/"
# load IdentityNet
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionXLInstantIDPipeline.from_single_file(
"./models/checkpoints/realvisxlV40_v40LightningBakedvae.safetensors",
controlnet=controlnet, torch_dtype=torch.float16
)
pipe.cuda()
# load adapter
pipe.load_ip_adapter_instantid(face_adapter)
# Load ipadapter
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
"./models/ipadapters",
subfolder="sdxl_models/image_encoder",
torch_dtype=torch.float16,
#weight_name="ip-adapter-plus_sdxl_vit-h.safetensors"
).to("cuda")
# Apply adapter to pipe
pipe.image_encoder = image_encoder
pipe.load_ip_adapter("./models/ipadapters", subfolder="sdxl_models", weight_name="ip-adapter-plus_sdxl_vit-h.safetensors")
pipe.set_ip_adapter_scale(1.3)
# Optimisation
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()
image = Image.open("img1.png")
face_info = app_face.get(cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR))
face_info = sorted(face_info, key=lambda x:(x['bbox'][2]-x['bbox'][0])*(x['bbox'][3]-x['bbox'][1]))[-1] # only use the maximum face
face_emb = face_info['embedding']
prompt = "prompt"
kps = Image.open("kps_standard.png")
ipadapter_image = Image.open("img2.png")
#encod = pipe.image_encoder(ipadapter_image)
prompt_embed, pooled = compel_proc(prompt)
image = pipe(
prompt,
width=768,
height=1024,
image_embeds=face_emb,
image=kps,
seed=42,
ip_adapter_image=ipadapter_image,
controlnet_conditioning_scale=0.7,
control_guidance_end = .7,
num_inference_steps=6,
guidance_scale=3,
).images[0]
</code></pre>
<p>And got :</p>
<pre><code>ValueError: <class 'diffusers.models.unet_2d_condition.UNet2DConditionModel'> has the config param `encoder_hid_dim_type` set to 'ip_image_proj' which requires the keyword argument `image_embeds` to be passed in `added_conditions`
</code></pre>
<p>Thus I tried to implement <code>image_embeds</code>:</p>
<pre><code>pipe(..., added_conditions={"image_embeds": face_emb}, ..)
</code></pre>
<p>I also tried <code>added_conditions="image_embeds"</code> and <code>added_cond_kwargs = { "image_embeds" : face_emb}</code>, without success.</p>
<p>Tested on:</p>
<blockquote>
<p>Windows 10</p>
<p>Python=3.10</p>
<p>Diffusers=0.28</p>
<p>transformers=4.40.2</p>
<p>torch=2.3.0</p>
<p>peft=0.0.11.1.</p>
</blockquote>
<p>The environment can run <code>instantid</code> and <code>ipadapter</code> generation in separate pipelines, but not in the same one.</p>
|
<python><deep-learning><stable-diffusion><diffusers>
|
2024-12-19 13:51:35
| 0
| 502
|
Felox
|
79,294,496
| 4,108,376
|
Writing Windows registry with backslash in sub-key name, in Python
|
<p>I'm trying to write into the Windows registry key "SOFTWARE/Microsoft/DirectX/UserGpuPreferences" using the Python winreg module.</p>
<p>The name of the sub-key needs to be the Python executable path, which contains backslashes. However, <code>winreg.SetValue()</code> creates a tree of keys following the components of the path instead of a single sub-key whose name contains backslashes.</p>
<pre><code>import winreg
path = ['SOFTWARE', 'Microsoft', 'DirectX', 'UserGpuPreferences']
registry = winreg.ConnectRegistry(None, winreg.HKEY_CURRENT_USER)
key = winreg.OpenKey(registry, '\\'.join(path), 0, winreg.KEY_WRITE)
sub_key = sys.executable
value = 'GpuPreference=2;'
winreg.SetValue(key, sub_key, winreg.REG_SZ, value)
</code></pre>
<p>Also escaping the backslashes (<code>sys.executable.replace('\\', '\\\\')</code>) does not change it.</p>
<p>Is there any way to insert such a registry value, using Python?</p>
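<p>For reference, a sketch of one interpretation (an assumption on my part): under <code>UserGpuPreferences</code> the executable paths are stored as <em>value names</em> rather than sub-keys, so <code>SetValueEx</code>, which never splits its name argument on backslashes, can be used instead of <code>SetValue</code>.</p>
<pre class="lang-py prettyprint-override"><code>import sys
import winreg

path = r"SOFTWARE\Microsoft\DirectX\UserGpuPreferences"
with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, path, 0, winreg.KEY_SET_VALUE) as key:
    # The executable path becomes the *name* of a REG_SZ value, backslashes intact.
    winreg.SetValueEx(key, sys.executable, 0, winreg.REG_SZ, "GpuPreference=2;")
</code></pre>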
|
<python><registry><winreg>
|
2024-12-19 13:46:56
| 1
| 9,230
|
tmlen
|
79,294,413
| 9,189,389
|
Second independent axis in Altair
|
<p>I would like to add a second (independent) x-axis to my figure, demonstrating a month for a given week.</p>
<p>Here is my snippet:</p>
<pre><code>import pandas as pd
import numpy as np
import altair as alt
from datetime import datetime, timedelta
weeks = list(range(0, 54))
start_date = datetime(1979, 1, 1)
week_dates = [start_date + timedelta(weeks=w) for w in weeks]
years = list(range(1979, 2024))
data = {
"week": weeks,
"date": week_dates,
"month": [date.strftime("%b") for date in week_dates],
}
# Precipitation
data.update({str(year): np.random.uniform(0, 100, size=len(weeks)) for year in years})
df = pd.DataFrame(data)
df["Mean_Precipitation"] = df[[str(year) for year in years]].mean(axis=1)
rain_calendar = alt.Chart(df, title=f"Weekly precipitation volumes").mark_bar().encode(
alt.X('week:O', title='Week in calendar year', axis=alt.Axis(labelAlign="right", labelAngle=-45)),
alt.Y(f'1979:Q', title='Rain'),
alt.Color(f"1979:Q", scale=alt.Scale(scheme='redyellowgreen')),
alt.Order(f"1979:Q", sort="ascending")).properties(width=850, height=350)
line = alt.Chart(df).mark_line(interpolate='basis', strokeWidth=2, color='#FB0000').encode(
alt.X('week:O'),
alt.Y('Mean_Precipitation:Q'))
month_labels = alt.Chart(df).mark_text(
align='center', baseline='top', dy=30
).encode(
alt.X('week:O', title=None), text='month:N')
combined_chart = alt.layer(rain_calendar, line, month_labels).resolve_scale(x='shared').configure_axisX(
labelAlign="right", labelAngle=-45, titlePadding=10, titleFontSize=14)\
.configure_axisY(titlePadding=10, titleFontSize=14).configure_bar(binSpacing=0)
</code></pre>
<p>So far, this is the best outcome that I've got:</p>
<p><a href="https://i.sstatic.net/DdZKoIf4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdZKoIf4.png" alt="enter image description here" /></a></p>
<p>This is related to the topic discussed here: <a href="https://stackoverflow.com/questions/72464829/second-x-axis-or-x-labels/72466179?noredirect=1#comment139824259_72466179">Second x axis or x labels</a></p>
|
<python><altair>
|
2024-12-19 13:16:23
| 2
| 464
|
Luckasino
|
79,294,236
| 5,457,202
|
ValueError: setting an array element with a sequence in Vaex dataframe
|
<p>I've been given a CSV file from a previous project and I'm supposed to prepare some Python scripts to plot the values it contains. The dataset in this CSV file holds data from electric and vibration signals. The data I'm interested in is stored in a column, "DecompressedValue", where each row holds a 16,000-element-long array of float values representing a vibration/electric signal.</p>
<p>I want to use Vaex to exploit its higher performance features, but I found what I think is a bug when processing the signals. I started adapting code which works in Pandas.</p>
<pre><code>import pandas as pd
import json
signal_df = pd.read_csv('csv_test.csv', sep=';')
# The DecompressedValue column, despite being stored as a regular array, is read a long string, so in order to turn it into an array, json.loads() has to be applied to each value of the column
signal_df.DecompressedValue = signal_df.DecompressedValue.apply(lambda r: json.loads(r))
</code></pre>
<p>However, when trying to replicate the same functionality in Vaex, even if this code runs correctly, trying to access the dataframe after that produces an error (find vaex_test.csv for testing this code <a href="https://github.com/user-attachments/files/18197545/vaex_test.csv" rel="nofollow noreferrer">here</a>).</p>
<pre><code>import vaex
test = vaex.from_csv('vaex_test.csv', sep=';')
test['DecompressedValue'] = test['DecompressedValue'].apply(lambda r: json.loads(r))
test.head()
</code></pre>
<p>This produces a ValueError:</p>
<pre><code>[12/19/24 12:50:48] ERROR error evaluating: DecompressedValue at rows 0-5 [dataframe.py](file:///C:/Users/user/AppData/Local/anaconda3/envs/py310env/lib/site-packages/vaex/dataframe.py):[4101](file:///C:/Users/user/AppData/Local/anaconda3/envs/py310env/lib/site-packages/vaex/dataframe.py#4101)
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File
"c:\Users\user\AppData\Local\anaconda3\envs\py310env\lib\mu
ltiprocessing\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File
"c:\Users\user\AppData\Local\anaconda3\envs\py310env\lib\si
te-packages\vaex\expression.py", line 1629, in _apply
result = np.array(result)
ValueError: setting an array element with a sequence. The requested
array has an inhomogeneous shape after 1 dimensions. The detected
shape was (5,) + inhomogeneous part.
"""
</code></pre>
<p>I've reviewed questions with the same error but I don't think they are applicable since those questions are usually related to numpy arrays and I feel my problem is more related to Vaex idiosyncrasy.</p>
|
<python><pandas><numpy><vaex>
|
2024-12-19 12:11:16
| 1
| 436
|
J. Maria
|
79,294,037
| 275,088
|
How does creating a Python venv with `--system-site-packages` work under the hood?
|
<p>I'm trying to understand how creating a Python venv with access to the system site-packages work. The <a href="https://docs.python.org/3/library/venv.html" rel="nofollow noreferrer">documentation</a> doesn't go much into details, so I've just checked the diff between two venvs with and without the <code>--system-site-packages</code> option. It seems that the difference is just in <code>pyvenv.cfg</code>:</p>
<pre><code>@@ -1,5 +1,5 @@
home = /usr/bin
-include-system-site-packages = true
+include-system-site-packages = false
version = 3.12.8
executable = /usr/bin/python3.12
-command = /usr/bin/python -m venv --system-site-packages /tmp/venv1
+command = /usr/bin/python -m venv /tmp/venv2
</code></pre>
<p>How does this make the system packages accessible in the virtual environment?</p>
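<p>For reference, a small experiment sketch (my understanding, not a quote from the docs): <code>site.py</code> reads <code>pyvenv.cfg</code> at every interpreter start, so flipping the flag in an existing venv is enough to make the system directories appear on <code>sys.path</code>, with no recreation needed.</p>
<pre class="lang-py prettyprint-override"><code>import sys

# Run this with the venv's interpreter before and after editing
# `include-system-site-packages` in its pyvenv.cfg.
outside_venv = [
    p for p in sys.path
    if p.startswith(sys.base_prefix) and "packages" in p
]
print("system package directories on sys.path:", bool(outside_venv))
print(outside_venv)
</code></pre>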
|
<python><python-venv>
|
2024-12-19 11:04:27
| 1
| 16,548
|
planetp
|
79,294,022
| 15,136,864
|
invalid_client when trying to authenticate with client_id and client_secret using django oauth toolkit and rest framework
|
<p>I’m running a Django service that exposes endpoints via Django REST Framework. I want to secure these endpoints using Django OAuth Toolkit for authentication.</p>
<p>When I create an application from the admin panel, I use the following settings:</p>
<p><a href="https://i.sstatic.net/Kh6elhGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kh6elhGy.png" alt="enter image description here" /></a></p>
<p>As shown in the screenshot, I disable client_secret hashing. With this configuration, everything works perfectly, and I can obtain an access token without any issues.</p>
<p><a href="https://i.sstatic.net/2fIKgpjM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fIKgpjM.png" alt="enter image description here" /></a></p>
<pre><code>* Preparing request to http://localhost:8000/o/token/
* Current time is 2024-12-19T10:47:03.573Z
* Enable automatic URL encoding
* Using default HTTP version
* Enable timeout of 30000ms
* Enable SSL validation
* Found bundle for host localhost: 0x159f21ed0 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#106) with host localhost
* Connected to localhost (127.0.0.1) port 8000 (#106)
> POST /o/token/ HTTP/1.1
> Host: localhost:8000
> User-Agent: insomnia/0.2.2
> Content-Type: application/x-www-form-urlencoded
> Accept: application/x-www-form-urlencoded, application/json
> Authorization: Basic MkVDdzAwWkN1cm1CRFc4TEFrSmpjSGt1bnh1OW9WZlRpY09DaGU5bDpKVTl6SURzd0Zob09JMzJhRjhUMVI2WnhYZDVVTU84TWwwbHdiZldNWFNxcHVuTGdsaXBLT2xLTFBNMTBublV1TGp3WGFWOVBPR2ZxYUpURzF5Smx2VGRMWHVPRzN0SVg4bE9tQ1N6U09lbTV4Z2ExaWZrNWRUdjVOYWdFV2djQQ==
> Content-Length: 29
| grant_type=client_credentials
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Thu, 19 Dec 2024 10:47:03 GMT
< Server: WSGIServer/0.2 CPython/3.12.8
< Content-Type: application/json
< Cache-Control: no-store
< Pragma: no-cache
< djdt-store-id: b377d48fbdae4db989aabb760af12619
< Server-Timing: TimerPanel_utime;dur=28.13699999999919;desc="User CPU time", TimerPanel_stime;dur=2.7200000000000557;desc="System CPU time", TimerPanel_total;dur=30.856999999999246;desc="Total CPU time", TimerPanel_total_time;dur=56.63733399705961;desc="Elapsed time", SQLPanel_sql_time;dur=5.738208987168036;desc="SQL 3 queries", CachePanel_total_time;dur=0;desc="Cache 0 Calls"
< Vary: Accept-Language, Cookie
< Content-Language: en
< X-Frame-Options: DENY
< Content-Length: 118
< X-Content-Type-Options: nosniff
< Referrer-Policy: same-origin
< Cross-Origin-Opener-Policy: same-origin
* Received 118 B chunk
* Connection #106 to host localhost left intact
| {"access_token": "C5RbvIhZIp3y5zLnC7xrezfuzTGNwe", "expires_in": 36000, "token_type": "Bearer", "scope": "read write"}
</code></pre>
<p>However, when I enable client_secret hashing, I receive the error: {"error": "invalid_client"}.</p>
<p><a href="https://i.sstatic.net/c3tT4MgY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c3tT4MgY.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/5PCTLxHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5PCTLxHO.png" alt="enter image description here" /></a></p>
<pre><code>* Preparing request to http://localhost:8000/o/token/
* Current time is 2024-12-19T10:50:41.587Z
* Enable automatic URL encoding
* Using default HTTP version
* Enable timeout of 30000ms
* Enable SSL validation
* Too old connection (217 seconds), disconnect it
* Connection 106 seems to be dead!
* Closing connection 106
* Trying 127.0.0.1:8000...
* Connected to localhost (127.0.0.1) port 8000 (#107)
> POST /o/token/ HTTP/1.1
> Host: localhost:8000
> User-Agent: insomnia/0.2.2
> Content-Type: application/x-www-form-urlencoded
> Accept: application/x-www-form-urlencoded, application/json
> Authorization: Basic eHp0cktWNmh1bDJjdFJNWWMxNW82ZHhWRk4wZHJTNzkyMjNldTQ3VTp6cUJwbW1zVHQ4dU1PaEk4Y29FZ3U3eW9rUUNRYnpoVE1rN2Y3bkVYclFlYnl5cEdYU1JpOWNFSGlIUXk1UnJ4blpvNTFOWVBldWhoTERkeWF3VkNrYkJGN05KVzNKZjRua1Z4OEI0a3p1V2tHRU9XdHNOOUo0NWFQRTIybHkzNw==
> Content-Length: 29
| grant_type=client_credentials
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Date: Thu, 19 Dec 2024 10:50:41 GMT
< Server: WSGIServer/0.2 CPython/3.12.8
< Content-Type: application/json
< Cache-Control: no-store
< Pragma: no-cache
< WWW-Authenticate: Bearer error="invalid_client"
< djdt-store-id: fc37d8dc93364fa1ab78cd392ddcfe20
< Server-Timing: TimerPanel_utime;dur=20.312999999998027;desc="User CPU time", TimerPanel_stime;dur=3.6159999999991754;desc="System CPU time", TimerPanel_total;dur=23.928999999997203;desc="Total CPU time", TimerPanel_total_time;dur=44.46612499305047;desc="Elapsed time", SQLPanel_sql_time;dur=2.3827080003684387;desc="SQL 2 queries", CachePanel_total_time;dur=0;desc="Cache 0 Calls"
< Vary: Accept-Language, Cookie
< Content-Language: en
< X-Frame-Options: DENY
< Content-Length: 27
< X-Content-Type-Options: nosniff
< Referrer-Policy: same-origin
< Cross-Origin-Opener-Policy: same-origin
* Received 27 B chunk
* Connection #107 to host localhost left intact
| {"error": "invalid_client"}
</code></pre>
<p>Here’s my custom authentication class, which I copied from <a href="https://stackoverflow.com/a/65718714/15136864">https://stackoverflow.com/a/65718714/15136864</a>:</p>
<pre class="lang-py prettyprint-override"><code>from oauth2_provider.contrib.rest_framework import OAuth2Authentication
from oauth2_provider.models import Application
class OAuth2ClientCredentialAuthentication(OAuth2Authentication):
"""OAuth2Authentication doesn't allows credentials to belong to an application (client).
This override authenticates server-to-server requests, using client_credential authentication.
"""
def authenticate(self, request):
authentication = super().authenticate(request)
if authentication is not None:
_, access_token = authentication
if self._grant_type_is_client_credentials(access_token):
authentication = access_token.application.user, access_token
return authentication
def _grant_type_is_client_credentials(self, access_token):
return access_token.application.authorization_grant_type == Application.GRANT_CLIENT_CREDENTIALS
</code></pre>
<p>I used this class because, as mentioned in the referenced post, logging in using client credentials isn’t supported out of the box, which isn’t explicitly stated in the documentation.</p>
<p>Here's my configuration</p>
<pre class="lang-py prettyprint-override"><code>REST_FRAMEWORK = {
"DEFAULT_SCHEMA_CLASS": "drf_spectacular.openapi.AutoSchema",
"DEFAULT_AUTHENTICATION_CLASSES": ("authentication.rest_framework.OAuth2ClientCredentialAuthentication",),
"DEFAULT_PERMISSION_CLASSES": ("rest_framework.permissions.IsAuthenticated",),
}
OAUTH2_PROVIDER_APPLICATION_MODEL = "authentication.Application"
OAUTH_2_PROVIDER = {
"SCOPES": {"read": "Read scope", "write": "Write scope", "groups": "Access to your groups"},
}
</code></pre>
<p>Edit:
I discovered that our application uses a custom password hasher:</p>
<pre class="lang-py prettyprint-override"><code>PASSWORD_HASHERS = [
"authentication.password.PBKDF2SHA256PasswordHasher",
]
</code></pre>
<p>It’s likely that Django OAuth Toolkit uses a different hasher, which might be causing the issue.</p>
|
<python><django><django-rest-framework><oauth-2.0><django-oauth-toolkit>
|
2024-12-19 10:59:12
| 0
| 310
|
Sevy
|
79,293,972
| 5,658,512
|
Need to fix Regex in the PostgreSQL Search
|
<p>I need to use the word boundary (\y) as a condition to match the exact word that is being searched as a query param. I am writing this regex and executing it in PostgreSQL, but it is returning FALSE.</p>
<p><strong>text_search_value = "apple."</strong> (which needs to be matched exactly in the {doc_table} table shown in the code)</p>
<p>My code</p>
<pre><code>vector_split_text = re.sub("\.|\-|/", " ", text_search_value)
text_search_value = re.escape((re.sub("\)",")",re.sub("\(","(",text_search_value))))
converted_part_query = f""""{doc_table}".lni IN (SELECT BTRIM(xxx) FROM "{docsplit_table}" WHERE yyy IN ({all_rcis}) AND doc_text_vector @@ phraseto_tsquery('simple', '{vector_split_text}'))"""
converted_part_query = (
"("
+ converted_part_query
+ f""" and "{doc_table}".document::text ~ '\y{text_search_value}\y'"""
+ ")"
)
</code></pre>
<p><strong>PostgreSQL Query</strong></p>
<pre><code>select 'apple.' ~ '\yapple\.\y'
</code></pre>
<p><a href="https://i.sstatic.net/82CLYsxT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82CLYsxT.png" alt="I might need help in fixing the regex to handle the exact search for the word "apple."" /></a></p>
<p><strong>Expected result</strong>
The SELECT query should return TRUE when executed against the PostgreSQL table once the regex is fixed.</p>
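<p>For reference, a sketch of why the check fails and one way around it (my own suggestion): <code>\y</code> only matches between a word character and a non-word character, and in <code>'apple.'</code> the position after the final <code>.</code> has no word character on either side, so <code>'\yapple\.\y'</code> can never match. Adding <code>\y</code> only on the sides of the term that start or end with a word character avoids this.</p>
<pre class="lang-py prettyprint-override"><code>import re

def word_boundary_pattern(term: str) -> str:
    # Escape the literal term, then anchor with \y only where a word
    # character actually touches the boundary.
    escaped = re.escape(term)
    prefix = r"\y" if re.match(r"\w", term) else ""
    suffix = r"\y" if re.search(r"\w$", term) else ""
    return f"{prefix}{escaped}{suffix}"

print(word_boundary_pattern("apple."))   # \yapple\.
# In PostgreSQL:  select 'apple.' ~ '\yapple\.'   -- returns true
</code></pre>
<p>Note that <code>re.escape</code> targets Python's regex dialect, so a few escapes may differ from PostgreSQL's; treat the escaping step as an assumption to be checked.</p>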
|
<python><sql><regex><postgresql><vector-search>
|
2024-12-19 10:41:01
| 2
| 4,280
|
Naresh Kumar P
|
79,293,889
| 12,813,584
|
Catalog sentences into 5 words that represent them
|
<p>I have a dataframe with 1000 text rows: <code>df['text']</code>.</p>
<p>I also have 5 words, and for each of them I want to know how well it represents each text (a score between 0 and 1).</p>
<p>Every score will go in <code>df["word1"]</code>, <code>df["word2"]</code>, etc.</p>
<p>I would be glad for recommendations on how to do that.</p>
<p><strong>edit</strong></p>
<p>Represent = the semantic closeness between the word and the text.</p>
<p>For example, let's say in row 1 the text is "i want to eat"
and I have 2 words: food and house.</p>
<p>Then <code>df["food"]</code> would get a higher score than <code>df["house"]</code>.</p>
|
<python><pandas><nlp><text-mining><similarity>
|
2024-12-19 10:16:47
| 1
| 469
|
rafine
|
79,293,738
| 12,297,666
|
Excessively increased training time for multiple Keras Conv1d networks within a for loop
|
<p>I am training several Keras models inside a for loop. Check the following code:</p>
<pre><code>import os
import time
import pandas as pd
import tensorflow as tf
import numpy as np
import unicodedata
from keras.models import Sequential
from keras.layers import InputLayer
from keras.layers import Conv1D
from keras.layers import MaxPooling1D
from keras.layers import Flatten
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
from keras.callbacks import EarlyStopping
import keras.backend as K
from openpyxl.styles import PatternFill
class vazao:
def __init__(self, modelo='M1', rede='Conv1D', opcao=4):
self.dir_usuario = os.getcwd()
self.tipo = 'VAZAO'
self.caminho_vazao = f'{self.dir_usuario}\\{self.tipo}'
self.caminho_deck_ons_dados_abertos = f'{self.caminho_vazao}\\DECK_ONS_DADOS_ABERTOS\\'
self.caminho_deck_ons_sintegre = f'{self.caminho_vazao}\\DECK_ONS_SINTEGRE'
self.caminho_psats = f'{self.caminho_deck_ons_sintegre}\\PSATS\\'
self.caminho_psats_previsao = f'{self.caminho_deck_ons_sintegre}\\PSATS_PREVISAO\\'
self.caminho_resultados_previsao = f'{self.dir_usuario}\\RESULTADOS PREVISAO\\{self.tipo}'
self.caminho_deck_dessem_destino = f'{self.dir_usuario}\\RESULTADOS PREVISAO\\DECK_DESSEM'
self.df_reservatorios = pd.read_excel(f'{self.caminho_deck_ons_dados_abertos}\\RESERVATORIOS.xlsx')
renomear_postos = {'EMBORCAÇÃO':'EMBORCACAO', 'C.BRANCO-1':'CAPIM BRANC1', 'C.BRANCO-2':'CAPIM BRANC2', 'SÃO SIMÃO':'SAO SIMAO', 'TRÊS MARIAS': 'TRES MARIAS',
'B. BONITA': 'BARRA BONITA', 'PROMISSÃO':'PROMISSAO', 'N. AVANHANDAVA':'NAVANHANDAVA', 'TAQUARUÇU':'TAQUARUCU', 'TRÊS IRMÃOS':'TRES IRMAOS',
'PORTO PRIMAVERA':'P. PRIMAVERA', 'M. MORAES':'M. DE MORAES', 'CORUMBA':'CORUMBA I', 'SERRA DA MESA':'SERRA MESA', 'SANTO ANTONIO':'STO ANTONIO',
'PEIXE ANGICAL':'PEIXE ANGIC', 'NILO PEÇANHA':'NILO PECANHA', 'PEREIRA PASSOS':'P. PASSOS', 'SANTA CECILIA':'STA CECILIA', 'C. DOURADA':'CACH.DOURADA',
'PORTO ESTRELA':'P. ESTRELA', 'G. B. MUNHOZ':'G.B. MUNHOZ', 'G. P. SOUZA':'G.P. SOUZA', 'SANTA CLARA-PR':'STA CLARA PR', 'FUNDÃO':'FUNDAO', 'SALTO SANTIAGO':'SLT.SANTIAGO',
'ITÁ':'ITA', 'PONTE DE PEDRA':'PONTE PEDRA', 'P. AFONSO 1,2,3':'P.AFONSO 123', 'P. AFONSO 4':'P.AFONSO 4', 'BOA ESPERANÇA':'B. ESPERANCA',
'PEDRA DO CAVALO':'P. CAVALO', 'COARACY NUNES':'COARACY NUNE', 'CACHOEIRA CALDEIRAO':'CACH.CALDEIR', 'STA.CLARA-MG':'STA CLARA MG', 'SUIÇA':'SUICA',
'S.R.VERDINHO':'SLT VERDINHO', 'QUEBRA QUEIXO':'QUEBRA QUEIX', 'CORUMBA-3':'CORUMBA III', 'CORUMBA-4':'CORUMBA IV', 'B.COQUEIROS':'B. COQUEIROS',
'FOZ DO RIO CLARO':'FOZ R. CLARO', 'S.DO FACÃO':'SERRA FACAO', 'PASSO SAO JOAO':'PASSO S JOAO', 'STO ANTONIO DO JARI':'STO ANT JARI',
'FERREIRA GOMES':'FERREIRA GOM', 'GUILM. AMORIM':'GUILMAN-AMOR', 'SALTO GRANDE CM': 'SALTO GRANDE', 'SALTO GRANDE CS': 'L.N. GARCEZ'}
remover_postos = ['ITAPARICA', 'MOXOTO', 'P.AFONSO 123', 'P.AFONSO 4', 'XINGO',
'STA CECILIA', 'ITUTINGA', 'FONTES', 'P. PASSOS', 'NILO PECANHA',
'SIMPLICIO', 'SANTONIO CM', 'P. ESTRELA', 'BELO MONTE', 'ITIQUIRA II',
'HENRY BORDEN', 'CANASTRA', 'SUICA']
self.df_reservatorios['nom_reservatorio'].replace(renomear_postos, inplace=True)
self.df_reservatorios = self.df_reservatorios[~self.df_reservatorios['nom_reservatorio'].isin(remover_postos)]
self.df_reservatorios.loc[self.df_reservatorios['nom_reservatorio'] == 'SUICA', 'cod_resplanejamento'] = 145
df_reservatorios = self.df_reservatorios.copy()
df_reservatorios = df_reservatorios[['nom_reservatorio', 'nom_bacia','nom_rio','val_latitude','val_longitude']]
self.df_config = pd.read_excel(f'{self.caminho_deck_ons_sintegre}\\Configuracao.xlsx')
lista_df = []
arquivos = os.listdir(self.caminho_deck_ons_dados_abertos)
for arquivo in arquivos:
if 'DADOS' in arquivo:
df_ano = pd.read_csv(f'{self.caminho_deck_ons_dados_abertos}{arquivo}', sep=';', index_col=None, header=0)
lista_df.append(df_ano)
self.df_vazao = pd.concat(lista_df, axis=0, ignore_index=True)
self.df_vazao['id_subsistema'].replace(to_replace='SE', value='SECO', inplace=True)
self.df_vazao['nom_reservatorio'].replace(renomear_postos, inplace=True)
ultimo_ano = pd.to_datetime(self.df_vazao['din_instante']).dt.year.max()
self.df_vazao_ultimo_ano_completo = self.df_vazao[pd.to_datetime(self.df_vazao['din_instante']).dt.year == ultimo_ano]
self.df_vazao_ultimo_ano_completo = self.df_vazao_ultimo_ano_completo[~self.df_vazao_ultimo_ano_completo['nom_reservatorio'].isin(remover_postos)]
self.df_vazao_ultimo_ano_completo.loc[self.df_vazao_ultimo_ano_completo['nom_reservatorio'] == 'SUICA', 'cod_usina'] = 145
self.dic_par_postos_jusante_montante = {
'DARDANELOS': '',
'SAMUEL': '',
'PORTO ESTRELA': 'SALTO GRANDE CM',
'ESPORA': '',
'BILLINGS': 'GUARAPIRANGA',
'GUARAPIRANGA': '',
'SOBRADINHO': 'TRES MARIAS',
'BALBINA': '',
'STO ANTONIO DO JARI': '',
'COARACY NUNES': 'CACHOEIRA CALDEIRAO',
'FERREIRA GOMES':'COARACY NUNES',
}
self.modelo = modelo
self.rede = rede
self.opcao = opcao
self.lista_subs = ['SECO', 'S', 'NE', 'N']
self.lista_horizontes = ['dM1', 'dM2', 'dM3', 'dM4', 'dM5', 'dM6', 'dM7', 'dM8', 'dM9']
# self.lista_horizontes = ['dM1', 'dM9']
self.lista_modelos = ['M1', 'M3']
self.lista_redes = ['Conv1D', 'MLP']
self.lista_modelos_previ_precipit = ['ECMWF', 'ETA40', 'GEFS']
self.associacoes_df = pd.read_csv(f'{self.caminho_vazao}\\associacoes_usina_posto.csv')
self.df_precipitacoes = pd.read_csv(f'{self.caminho_vazao}\\precipitacoes.csv')
self.df_correlacao_reservatorio_psat = pd.read_csv(f'{self.caminho_vazao}\\correlacao_usina_psat.csv')
def consulta_postos_em_bacias(self, subs, bacia):
caminho = f'{self.caminho_vazao}\\SUBS_{subs}\\{bacia}'
pastas = os.listdir(caminho)
return pastas
def treinamento(self, tela = False, pb = 0, progresso = 'text', subs='', bacia='', reservatorio='', epocas=500, es=250, num_inic=10, erro=''):
def remover_acentos(texto):
texto_sem_acento = ''.join(c for c in unicodedata.normalize('NFD', texto) if unicodedata.category(c) != 'Mn')
texto_sem_acento = texto_sem_acento.replace(' ', '_')
texto_sem_acento = texto_sem_acento.replace(',','.')
return texto_sem_acento
def conv1d():
nome_rede = f'{subs}_{bacia}_{reservatorio}_{self.modelo}_{self.rede}_{horizonte}_inicializacao_{inicializacao}'
nome_rede = remover_acentos(nome_rede)
# K.clear_session() # DID NOT WORK
rede_conv1d = Sequential(name=nome_rede)
rede_conv1d.add(InputLayer((timesteps, input_dim), name='Camada_Entrada'))
rede_conv1d.add(Conv1D(filters=64, kernel_size=4, activation=funcao, name='Camada_Conv_1'))
rede_conv1d.add(MaxPooling1D(pool_size=1))
rede_conv1d.add(Conv1D(filters=64, kernel_size=4, activation=funcao, name='Camada_Conv_2'))
rede_conv1d.add(MaxPooling1D(pool_size=1))
rede_conv1d.add(Flatten())
rede_conv1d.add(Dense(units=72, activation=funcao, name='Dense_1'))
rede_conv1d.add(Dense(units=neuronios_saida, activation=funcao, name='Camada_Saida'))
return rede_conv1d
tf.config.set_visible_devices([], 'GPU') # Disable GPU
input_dim = 1
val_size = 0.2
amostras_antes = 15
# amostras_depois = 1
funcao = 'tanh'
inicializacoes = num_inic
if self.opcao == 1:
pass
elif self.opcao == 2:
pass
elif self.opcao == 3:
reservatorios_na_bacia = self.consulta_postos_em_bacias(subs=subs, bacia=bacia)
for reservatorio in reservatorios_na_bacia:
df_treinamento = pd.read_csv(f'{self.caminho_vazao}\\SUBS_{subs}\\{bacia}\\{reservatorio}\\{reservatorio}_Dados_Treinamento_Norm.csv', index_col='din_instante')
diretorio_result_treinamento = f'{self.caminho_vazao}\\SUBS_{subs}\\{bacia}\\{reservatorio}\\AGREGADO\\Resultados Treinamento'
diretorio_elia = f'{self.caminho_vazao}\\SUBS_{subs}\\{bacia}\\{reservatorio}\\AGREGADO\\ELIA'
if not os.path.exists(diretorio_result_treinamento):
os.makedirs(diretorio_result_treinamento)
if not os.path.exists(diretorio_elia):
os.makedirs(diretorio_elia)
if len(df_treinamento) != 0:
writer = pd.ExcelWriter(f'{diretorio_result_treinamento}\\{subs}_{bacia}_{reservatorio}_MSE_{self.modelo}_{self.rede}.xlsx')
writer_loss = pd.ExcelWriter(f'{diretorio_result_treinamento}\\{subs}_{bacia}_{reservatorio}_Treinamento_{self.modelo}_{self.rede}.xlsx')
for horizonte in self.lista_horizontes:
horiz = int(horizonte[-1])
x_train = []
y_train = []
if self.modelo == 'M1':
for i in range(0, len(df_treinamento), 1):
final_x = i + amostras_antes
final_y = final_x + horiz
if final_y > len(df_treinamento):
break
previsor_vazao = df_treinamento['val_vazaoincremental'][i:final_x]
previsor_sen = pd.Series(df_treinamento['Seno'].loc[df_treinamento.index[final_y-1]])
previsor_cos = pd.Series(df_treinamento['Cosseno'].loc[df_treinamento.index[final_y-1]])
previsor_concat = pd.concat([previsor_vazao,
previsor_sen,
previsor_cos]).values
x_train.append(previsor_concat)
y_train.append(df_treinamento['val_vazaoincremental'][final_y - 1])
else:
melhor_lag_pearson = self.df_correlacao_reservatorio_psat.loc[self.df_correlacao_reservatorio_psat['nom_reservatorio'] == reservatorio, 'melhor_lag'].values[0]
if -melhor_lag_pearson > amostras_antes:
for i in range(-melhor_lag_pearson-amostras_antes, len(df_treinamento), 1):
final_x = i + amostras_antes
final_y = final_x + horiz
if final_y > len(df_treinamento):
break
previsor_vazao = df_treinamento['val_vazaoincremental'][i:final_x]
previsor_precipitacao = pd.Series(df_treinamento['precipitacao'].loc[df_treinamento.index[final_y + melhor_lag_pearson - 1]])
previsor_sen = pd.Series(df_treinamento['Seno'].loc[df_treinamento.index[final_y - 1]])
previsor_cos = pd.Series(df_treinamento['Cosseno'].loc[df_treinamento.index[final_y - 1]])
previsor_concat = pd.concat([previsor_vazao,
previsor_precipitacao,
previsor_sen,
previsor_cos]).values
x_train.append(previsor_concat)
y_train.append(df_treinamento['val_vazaoincremental'][final_y - 1])
else:
for i in range(0, len(df_treinamento), 1):
final_x = i + amostras_antes
final_y = final_x + horiz
if final_y > len(df_treinamento):
break
previsor_vazao = df_treinamento['val_vazaoincremental'][i:final_x]
previsor_precipitacao = pd.Series(df_treinamento['precipitacao'].loc[df_treinamento.index[final_y + melhor_lag_pearson - 1]])
previsor_sen = pd.Series(df_treinamento['Seno'].loc[df_treinamento.index[final_y - 1]])
previsor_cos = pd.Series(df_treinamento['Cosseno'].loc[df_treinamento.index[final_y - 1]])
previsor_concat = pd.concat([previsor_vazao,
previsor_precipitacao,
previsor_sen,
previsor_cos]).values
x_train.append(previsor_concat)
y_train.append(df_treinamento['val_vazaoincremental'][final_y - 1])
x_train = np.array(x_train)
y_train = np.array(y_train)
y_train = np.reshape(y_train, (y_train.shape[0], 1))
df_mse_train = pd.DataFrame(index=pd.RangeIndex(inicializacoes), columns=['mse treinamento'])
df_mse_val = pd.DataFrame(index=pd.RangeIndex(inicializacoes), columns=['mse validação'])
df_tempo = pd.DataFrame(index=pd.RangeIndex(inicializacoes), columns=['tempo (minutos)'])
df_melhor_epoca = pd.DataFrame(index=pd.RangeIndex(inicializacoes), columns=['melhor época'])
df_earlystop = pd.DataFrame(index=pd.RangeIndex(inicializacoes), columns=['época parou'])
num_amostras = x_train.shape[0]
timesteps = x_train.shape[1]
x_train = x_train.reshape(num_amostras, timesteps, input_dim)
melhor_historico = None
menor_loss_validacao = float('inf')
neuronios_saida = y_train.shape[1]
for inicializacao in range(inicializacoes):
if self.rede == 'Conv1D':
rede_ = conv1d()
rede_.compile(optimizer='adam', loss='mean_squared_error')
cp = ModelCheckpoint(
filepath=f'{diretorio_elia}\\{subs}_{bacia}_{reservatorio}_{self.modelo}_{self.rede}_{horizonte}_inicializacao_{inicializacao}.hdf5',
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min')
earlystop = EarlyStopping(monitor='val_loss',
patience=es,
verbose=1,
mode='min',
restore_best_weights=True)
t = time.time()
treinando = rede_.fit(x_train, y_train, batch_size=32,
epochs=epocas, verbose=2,
callbacks=[cp, earlystop],
validation_split=val_size, shuffle=True)
tempo_modelo = round((time.time() - t) / 60, 4)
# del rede_ # DID NOT WORK
# gc.collect() # DID NOT WORK
tempo_modelo = round((time.time() - t) / 60, 4)
menor_loss_validacao_inicializacao = min(treinando.history['val_loss'])
if menor_loss_validacao_inicializacao < menor_loss_validacao:
menor_loss_validacao = menor_loss_validacao_inicializacao
melhor_historico = treinando.history
df_mse_train.iloc[inicializacao] = treinando.history['loss'][np.argmin(treinando.history['val_loss'])]
df_mse_val.iloc[inicializacao] = min(treinando.history['val_loss'])
df_tempo.iloc[inicializacao] = tempo_modelo
df_melhor_epoca.iloc[inicializacao] = np.argmin(treinando.history['val_loss'])
df_earlystop.iloc[inicializacao] = earlystop.stopped_epoch
df_loss = pd.DataFrame.from_dict(melhor_historico)
df_loss.index.names = ['Época']
df_loss.to_excel(writer_loss, sheet_name=f'{horizonte}', index=True)
df_mse_train.loc['Media'] = df_mse_train.mean()
df_mse_train.loc['Desv_Pad'] = df_mse_train.std(ddof=0)
df_mse_val.loc['Media'] = df_mse_val.mean()
df_mse_val.loc['Desv_Pad'] = df_mse_val.std(ddof=0)
df_tempo.loc['Total (minutos)'] = df_tempo.sum()
melhor_inicializacao = np.argmin(df_mse_val[:-2])
linha_excel = melhor_inicializacao + 3
df_resultados = pd.concat([df_mse_train, df_mse_val, df_earlystop, df_melhor_epoca, df_tempo], axis=1)
df_resultados.index.names = ['Inicialização']
df_resultados.to_excel(writer, sheet_name=f'{horizonte}', index=True, startrow=1)
aba = writer.sheets[f'{horizonte}']
# Color best initialization
for col_num in range(1, aba.max_column + 1):
cell = aba.cell(row=linha_excel, column=col_num)
fill = PatternFill(start_color="00FF00", end_color="00FF00", fill_type="solid")
cell.fill = fill
for inicializacao in range(inicializacoes):
if inicializacao != melhor_inicializacao:
if os.path.exists(f'{diretorio_elia}\\{subs}_{bacia}_{reservatorio}_{self.modelo}_{self.rede}_{horizonte}_inicializacao_{inicializacao}.hdf5'):
os.remove(f'{diretorio_elia}\\{subs}_{bacia}_{reservatorio}_{self.modelo}_{self.rede}_{horizonte}_inicializacao_{inicializacao}.hdf5')
else:
print('Arquivo não existe')
else:
nome_antigo = f'{diretorio_elia}\\{subs}_{bacia}_{reservatorio}_{self.modelo}_{self.rede}_{horizonte}_inicializacao_{inicializacao}.hdf5'
nome_novo = f'{diretorio_elia}\\{subs}_{bacia}_{reservatorio}_{self.modelo}_{self.rede}_{horizonte}.hdf5'
if os.path.exists(nome_novo):
os.remove(nome_novo)
os.rename(nome_antigo, nome_novo)
else:
os.rename(nome_antigo, nome_novo)
writer._save()
writer_loss._save()
else:
pass
return None
if __name__ == "__main__":
vazao(modelo='M1', rede='Conv1D', opcao=3).treinamento(subs='SECO', bacia='SAO FRANCISCO')
</code></pre>
<p>When training starts, each epoch takes around 280 ms, but as training goes on, each epoch takes around 3 s. I have tried to solve this problem using some suggestions I found, like <code>clear_session()</code>, but nothing changed. I also tried deleting the model after <code>.fit</code> finishes and calling <code>gc.collect()</code>, but neither worked.</p>
<p>This increase in training time only happens if I use the <code>Conv1D</code> network. If I use the <code>MLP</code>, the training time stays constant at around 90 ms and does not increase. What is happening here? What is causing this increase in training time for the Conv1D, and how can it be fixed?</p>
<p>Here are some examples:</p>
<pre><code>At the very start:
Epoch 3: val_loss improved from 0.00015 to 0.00007, saving model to C:\Users\ldsp_\SIPREDVS\VAZAO\SUBS_SECO\SAO FRANCISCO\QUEIMADO\AGREGADO\ELIA\SECO_SAO FRANCISCO_QUEIMADO_M1_Conv1D_dM1_inicializacao_0.hdf5
164/164 - 0s - loss: 3.6645e-04 - val_loss: 7.4710e-05 - 284ms/epoch - 2ms/step
Epoch 4/500
After 10 minutes of training:
Epoch 392: val_loss did not improve from 0.00005
164/164 - 0s - loss: 2.2052e-04 - val_loss: 7.2596e-05 - 318ms/epoch - 2ms/step
Epoch 393/500
After 40 minutes of training:
Epoch 13: val_loss improved from 0.00025 to 0.00022, saving model to C:\Users\ldsp_\SIPREDVS\VAZAO\SUBS_SECO\SAO FRANCISCO\QUEIMADO\AGREGADO\ELIA\SECO_SAO FRANCISCO_QUEIMADO_M1_Conv1D_dM2_inicializacao_5.hdf5
164/164 - 0s - loss: 6.5353e-04 - val_loss: 2.2064e-04 - 450ms/epoch - 3ms/step
Epoch 14/500
</code></pre>
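<p>For reference, a sketch of one workaround (my assumption about the cause, not a verified diagnosis): run each initialization's training in a freshly spawned process so all TensorFlow graph and optimizer state is discarded when the worker exits, rather than accumulating inside the long-lived loop. The toy model and random data below only stand in for the real <code>conv1d()</code> and training arrays.</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing as mp
import numpy as np

def train_once(seed):
    # Import TF inside the worker so each process owns (and releases) its runtime.
    import tensorflow as tf
    from keras.models import Sequential
    from keras.layers import InputLayer, Conv1D, MaxPooling1D, Flatten, Dense

    tf.keras.utils.set_random_seed(seed)
    x = np.random.rand(512, 15, 1)
    y = np.random.rand(512, 1)

    model = Sequential([
        InputLayer((15, 1)),
        Conv1D(64, 4, activation="tanh"),
        MaxPooling1D(1),
        Flatten(),
        Dense(1, activation="tanh"),
    ])
    model.compile(optimizer="adam", loss="mean_squared_error")
    history = model.fit(x, y, epochs=5, batch_size=32, verbose=0)
    return history.history["loss"][-1]

if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    for seed in range(3):                 # one fresh process per initialization
        with ctx.Pool(1) as pool:
            print(pool.apply(train_once, (seed,)))
</code></pre>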
|
<python><tensorflow><keras>
|
2024-12-19 09:29:18
| 0
| 679
|
Murilo
|
79,293,591
| 8,335,151
|
How to define Output in AML pipeline?
|
<p>Here is my AML pipeline using python SDK v2.</p>
<pre><code>input_path = Input(type="uri_folder", path="azureml://datastores/test/paths/input")
output_path = Output(type="uri_folder", path="azureml://datastores/test/paths/output")
</code></pre>
<pre><code>@command_component(
name="test_com",
version="1",
display_name="test com",
description="test",
environment=custom_env,
)
def prepare_data_component(
input_data: Input(type="uri_folder"),
output_data: Output(type="uri_folder"),
):
print("input_data: ", input_data)
print("output_data: ", output_data)
</code></pre>
<pre><code>@pipeline(
default_compute=cpu_compute_target,
)
def pipeline_test(pipeline_input_data):
prepare_data_node = prepare_data_component(
input_data=pipeline_input_data,
output_data=output_data,
)
return {
"output_data": prepare_data_node.outputs.output_data,
}
# create a pipeline
pipeline_job = pipeline_test(
pipeline_input_data=input_data,
)
print(pipeline_job)
</code></pre>
<p>But it shows me this error:</p>
<pre><code>[component] test com() got an unexpected keyword argument 'output_data', valid keywords: 'input_data'.
</code></pre>
<p>Can an output be passed as a parameter to a component? Or can you give me some references on how to use outputs?</p>
<p>I don't want the official documentation because it is missing a lot of content, such as "How to use output in python sdk"</p>
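<p>For reference, a sketch based on my reading of the SDK v2 behaviour (an assumption, not the official docs, and reusing <code>input_path</code>, <code>cpu_compute_target</code> and <code>prepare_data_component</code> from the question): <code>Output</code> parameters of a <code>@command_component</code> are generated for you, so the component is called without them inside the pipeline; a custom datastore path can then be bound on the node's output afterwards.</p>
<pre class="lang-py prettyprint-override"><code>from azure.ai.ml import Output
from azure.ai.ml.dsl import pipeline

@pipeline(default_compute=cpu_compute_target)
def pipeline_test(pipeline_input_data):
    # Call the component with inputs only; output_data is created implicitly.
    prepare_data_node = prepare_data_component(input_data=pipeline_input_data)

    # Redirect the auto-generated output to the desired datastore location.
    prepare_data_node.outputs.output_data = Output(
        type="uri_folder", path="azureml://datastores/test/paths/output"
    )
    return {"output_data": prepare_data_node.outputs.output_data}

pipeline_job = pipeline_test(pipeline_input_data=input_path)
print(pipeline_job)
</code></pre>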
|
<python><azure><azure-machine-learning-service><azure-pipelines-yaml>
|
2024-12-19 08:37:30
| 1
| 3,765
|
AnonymousKKYY
|
79,293,528
| 2,695,990
|
How to replace xml node value in Python, without changing the whole file
|
<p>Taking my first steps in Python, I am trying to parse and update an XML file.
The XML is as follows:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet href="util/style/aaaa-2-0.xsl" type="text/xsl"?>
<!DOCTYPE eu:eu-backbone SYSTEM "../../util/dtd/eu-regional.dtd"[]>
<test dtd-version="3.2" xmlns:test="http://www.ich.org/test" xmlns:xlink="http://www.w3c.org/1999/xlink">
<mr>
<leaf checksum="88ed245997a341a4c7d1e40d614eb14f" >
<title>book name</title>
</leaf>
</mr>
</test>
</code></pre>
<p>I would like to update the value of the checksum. I have written a class with one method:</p>
<pre class="lang-py prettyprint-override"><code> @staticmethod
def replace_checksum_in_index_xml(xml_file_path, checksum):
logging.debug(f"ReplaceChecksumInIndexXml xml_file_path: {xml_file_path}")
try:
from xml.etree import ElementTree as et
tree = et.parse(xml_file_path)
tree.find('.//leaf').set("checksum", checksum)
tree.write(xml_file_path)
except Exception as e:
logging.error(f"Error updating checksum in {xml_file_path}: {e}")
</code></pre>
<p>I call the method:</p>
<pre class="lang-py prettyprint-override"><code> xml_file_path = "index.xml"
checksum = "aaabbb"
Hashes.replace_checksum_in_index_xml(xml_file_path, checksum)
</code></pre>
<p>The checksum is indeed updated, but the whole XML structure is changed as well:</p>
<pre class="lang-xml prettyprint-override"><code><test dtd-version="3.2">
<mr>
<leaf checksum="aaabbb">
<title>book name</title>
</leaf>
</mr>
</test>
</code></pre>
<p>How can I update only the given node, without changing anything else in the given XML file?</p>
<p><strong>UPDATE</strong></p>
<p>The solution provided by LRD_Clan is better than my original one, but it still changes the structure of the XML a bit. Also, when I take a more complex example, I again see that part of the XML is removed.
A more complex example with an additional DOCTYPE:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE eu:eu-backbone SYSTEM "../../util/dtd/eu-regional.dtd"[]>
<?xml-stylesheet href="util/style/aaaa-2-0.xsl" type="text/xsl"?>
<test dtd-version="3.2" xmlns:test="http://www.ich.org/test" xmlns:xlink="http://www.w3c.org/1999/xlink">
<mr>
<leaf checksum="88ed245997a341a4c7d1e40d614eb14f" >
<title>book name</title>
</leaf>
</mr>
</test>
</code></pre>
<p>After running the updated script, the result is:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version='1.0' encoding='UTF-8'?>
<?xml-stylesheet href="util/style/aaaa-2-0.xsl" type="text/xsl"?><test xmlns:test="http://www.ich.org/test" xmlns:xlink="http://www.w3c.org/1999/xlink" dtd-version="3.2">
<mr>
<leaf checksum="aaabbb">
<title>book name</title>
</leaf>
</mr>
</test>
</code></pre>
<p>I would really like only this one XML element to be changed and the rest of the document to be left intact.</p>
<p>I had a similar solution written in PowerShell, which looks like this:</p>
<pre class="lang-none prettyprint-override"><code> [xml]$xmlContent = Get-Content -Path $xmlFilePath
$element = $xmlContent.SelectSingleNode("//leaf")
$element.SetAttribute("checksum", "new text")
$xmlContent.Save((Resolve-Path "$xmlFilePath").Path)
</code></pre>
<p>I was hoping I would find something at least as elegant in Python.</p>
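<p>For reference, a sketch using lxml instead of the standard library (an assumption that lxml is acceptable here): lxml keeps attribute order and namespace prefixes, and the original declaration, DOCTYPE and processing instructions can be re-emitted from <code>tree.docinfo</code>, though minor details such as quote style may still be normalised.</p>
<pre class="lang-py prettyprint-override"><code>from lxml import etree

xml_file_path = "index.xml"
tree = etree.parse(xml_file_path)

# Change only the attribute we care about.
tree.find(".//leaf").set("checksum", "aaabbb")

# Serialise the whole tree; the DOCTYPE is re-emitted from docinfo and the
# top-level processing instructions are kept because the ElementTree (not
# just the root element) is serialised.
xml_bytes = etree.tostring(
    tree,
    xml_declaration=True,
    encoding=tree.docinfo.encoding,
    doctype=tree.docinfo.doctype or None,
)
with open(xml_file_path, "wb") as f:
    f.write(xml_bytes)
</code></pre>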
|
<python><xml><xpath><xml-parsing><elementtree>
|
2024-12-19 08:18:55
| 4
| 3,174
|
fascynacja
|
79,293,424
| 15,460,398
|
Visualizer freezes for down-sampled point cloud of OAK-D camera
|
<p>Currently, I am using an OAK-D PRO FF camera. The platform is Windows 10.</p>
<p>Following the <a href="https://docs.luxonis.com/software/depthai/examples/pointcloud_visualization/" rel="nofollow noreferrer">documentation</a>, I can visualize the point cloud from the camera. I want to downsample the point cloud using the voxel down-sampling function.</p>
<p>However, I cannot properly visualize the point cloud after downsampling. It looks like the Open3D visualizer stops updating the point cloud, and the downsampled cloud also looks strange (containing only some horizontal and vertical lines).</p>
<p>This is my code:</p>
<pre><code>#!/usr/bin/env python3
import depthai as dai
from time import sleep
import numpy as np
import cv2
import time
import sys
try:
import open3d as o3d
except ImportError:
sys.exit("Critical dependency missing: Open3D. Please install it using the command: '{} -m pip install open3d' and then rerun the script.".format(sys.executable))
FPS = 30
class FPSCounter:
def __init__(self):
self.frameCount = 0
self.fps = 0
self.startTime = time.time()
def tick(self):
self.frameCount += 1
if self.frameCount % 10 == 0:
elapsedTime = time.time() - self.startTime
self.fps = self.frameCount / elapsedTime
self.frameCount = 0
self.startTime = time.time()
return self.fps
pipeline = dai.Pipeline()
camRgb = pipeline.create(dai.node.ColorCamera)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
depth = pipeline.create(dai.node.StereoDepth)
pointcloud = pipeline.create(dai.node.PointCloud)
sync = pipeline.create(dai.node.Sync)
xOut = pipeline.create(dai.node.XLinkOut)
xOut.input.setBlocking(False)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
camRgb.setIspScale(1,3)
camRgb.setFps(FPS)
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
monoLeft.setCamera("left")
monoLeft.setFps(FPS)
monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
monoRight.setCamera("right")
monoRight.setFps(FPS)
depth.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
depth.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_7x7)
depth.setLeftRightCheck(True)
depth.setExtendedDisparity(False)
depth.setSubpixel(True)
depth.setDepthAlign(dai.CameraBoardSocket.CAM_A)
monoLeft.out.link(depth.left)
monoRight.out.link(depth.right)
depth.depth.link(pointcloud.inputDepth)
camRgb.isp.link(sync.inputs["rgb"])
pointcloud.outputPointCloud.link(sync.inputs["pcl"])
sync.out.link(xOut.input)
xOut.setStreamName("out")
with dai.Device(pipeline) as device:
isRunning = True
def key_callback(vis, action, mods):
global isRunning
if action == 0:
isRunning = False
q = device.getOutputQueue(name="out", maxSize=4, blocking=False)
vis = o3d.visualization.VisualizerWithKeyCallback()
vis.create_window()
vis.register_key_action_callback(81, key_callback)
pcd = o3d.geometry.PointCloud()
coordinateFrame = o3d.geometry.TriangleMesh.create_coordinate_frame(size=1000, origin=[0,0,0])
vis.add_geometry(coordinateFrame)
first = True
fpsCounter = FPSCounter()
while isRunning:
inMessage = q.get()
inColor = inMessage["rgb"]
inPointCloud = inMessage["pcl"]
cvColorFrame = inColor.getCvFrame()
# Convert the frame to RGB
cvRGBFrame = cv2.cvtColor(cvColorFrame, cv2.COLOR_BGR2RGB)
fps = fpsCounter.tick()
# Display the FPS on the frame
cv2.putText(cvColorFrame, f"FPS: {fps:.2f}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
cv2.imshow("color", cvColorFrame)
key = cv2.waitKey(1)
if key == ord('q'):
break
if inPointCloud:
t_before = time.time()
points = inPointCloud.getPoints().astype(np.float64)
pcd.points = o3d.utility.Vector3dVector(points)
colors = (cvRGBFrame.reshape(-1, 3) / 255.0).astype(np.float64)
pcd.colors = o3d.utility.Vector3dVector(colors)
#### voxel down-sampling
dwn = pcd.voxel_down_sample(voxel_size=5)
if first:
vis.add_geometry(dwn)
first = False
else:
vis.update_geometry(dwn)
vis.poll_events()
vis.update_renderer()
vis.destroy_window()
</code></pre>
<p>Point cloud preview without downsampling:</p>
<p><a href="https://i.sstatic.net/pBIYdXVf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBIYdXVf.png" alt="enter image description here" /></a></p>
<p>After downsampling:</p>
<p><a href="https://i.sstatic.net/TMgONotJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMgONotJ.png" alt="enter image description here" /></a></p>
<p>Am I missing something? What is the proper way to down-sample the point cloud?</p>
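<p>For reference, the usual Open3D pattern is to add one <code>PointCloud</code> object to the visualizer once and copy new data into that same object every frame; <code>voxel_down_sample</code> returns a brand-new cloud each iteration, so calling <code>update_geometry</code> on it targets an object the visualizer has never seen. A minimal sketch of that pattern (an assumption about the fix, not verified on the OAK-D):</p>
<pre class="lang-py prettyprint-override"><code># created once, before the loop
dwn_pcd = o3d.geometry.PointCloud()
vis.add_geometry(dwn_pcd)

# inside the loop, after pcd.points / pcd.colors have been filled:
dwn = pcd.voxel_down_sample(voxel_size=5)
dwn_pcd.points = dwn.points   # copy data into the geometry the visualizer already knows
dwn_pcd.colors = dwn.colors
vis.update_geometry(dwn_pcd)
vis.poll_events()
vis.update_renderer()
</code></pre>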
|
<python><python-3.x>
|
2024-12-19 07:33:05
| 0
| 361
|
BeamString
|
79,293,377
| 726,730
|
Cannot close correctly aiortc RTCPeerConnection after client disconnect
|
<p>offer:</p>
<pre class="lang-py prettyprint-override"><code>
async def offer(self, request):
try:
params = await request.json()
peer_connection = {
"name": params["name"],
"surname": params["surname"],
"pc": None,
"is_closed": False,
"dc": None,
"uid": uuid.uuid4(),
"audio_track": None,
"audio_track_for_local_use": None,
"audio_blackhole": None,
"video_track": None,
"video_track_for_local_use": None,
"video_blackhole": None,
"offer_in_progress": True,
"call_answered": False,
"manage_call_end_thread": None,
"stop_in_progress": False,
"call_number": None
}
if len(list(self.pcs.keys())) == 3:
return web.Response(content_type="application/json", text=json.dumps({"sdp": "", "type": ""}))
reserved_call_numbers = list(self.pcs.keys())
if 1 not in reserved_call_numbers:
call_number = 1
elif 2 not in reserved_call_numbers:
call_number = 2
else:
call_number = 3
peer_connection["call_number"] = call_number
self.pcs[call_number] = peer_connection
self.to_emitter.send({"type": "call_"+str(call_number)+"_offering", "name": peer_connection["name"], "surname": peer_connection["surname"],'uid':peer_connection['uid']})
timer = 0
while (timer < self.configuration["aiortc_time_window_for_answer_ms"]/1000 and self.queue.qsize() == 0 and self.pcs[call_number]["call_answered"] == False):
if request.transport is None or request.transport.is_closing():
self.to_emitter.send({"type":"transport-error","call-number":call_number})
try:
request.transport.close()
except:
pass
del self.pcs[call_number]
return web.Response(content_type="application/json", text=json.dumps({"sdp": "", "type": ""}))
timer += 0.1
await asyncio.sleep(0.1)
self.pcs[call_number]["call_answered"] = True
if self.queue.qsize() == 0:
return self.reject_offer(call_number,peer_connection)
else:
data = self.queue.get()
correct_types = ["call-1","call-2","call-3"]
if data["type"] in correct_types:
if (data["call"] == "reject"):
return self.reject_offer(call_number, peer_connection)
elif (data["call"] == "answer"):
while not self.queue.empty():
self.queue.get()
self.pcs[call_number]["pc"] = RTCPeerConnection(configuration=RTCConfiguration([RTCIceServer("stun:stun.l.google:19302"),]))
@self.pcs[call_number]["pc"].on("iceconnectionstatechange")
async def on_ice_connection_state_change():
pc = self.pcs[call_number]["pc"]
print(f"ICE connection state: {pc.iceConnectionState}")
if pc.iceConnectionState == "failed":
print("ICE connection failed. Attempting to restart ICE.")
await pc.restartIce()
await asyncio.sleep(5) # Adjust as needed
if pc.iceConnectionState == "disconnected":
print("Peer connection lost. Stopping connection.")
await self.stop_peer_connection(call_number, self.pcs[call_number]["uid"])
if pc.iceConnectionState == "disconnected":
# Wait before handling disconnection
await asyncio.sleep(5) # Adjust as needed
if pc.iceConnectionState == "disconnected":
print("Peer connection lost. Stopping connection.")
await self.stop_peer_connection(call_number, self.pcs[call_number]["uid"])
@self.pcs[call_number]["pc"].on("connectionstatechange")
async def on_connection_state_change():
pc = self.pcs[call_number]["pc"]
print(f"Connection state: {pc.connectionState}")
if pc.connectionState in ["failed", "disconnected", "closed"]:
print("Connection failed, disconnected, or closed. Stopping connection.")
await self.stop_peer_connection(call_number, self.pcs[call_number]["uid"])
@self.pcs[call_number]["pc"].on("datachannel")
async def on_datachannel(channel):
self.pcs[call_number]["dc"] = channel
# Send UID to the connecting peer
try:
channel.send(json.dumps({"type": "uid", "uid": str(peer_connection["uid"])}))
except Exception as e:
print(f"Error sending UID: {e}")
# Inform about other peers
try:
for uid, pc in self.pcs.items():
if pc['uid'] != peer_connection['uid']:
try:
pc['dc'].send(json.dumps({
"type": "other-uid",
"uid": str(peer_connection['uid']),
"name": peer_connection["name"],
"surname": peer_connection["surname"]
}))
except Exception as e:
print(f"Error sending other-uid message: {e}")
except Exception as e:
print(f"Error informing about other peers: {e}")
@channel.on("message")
async def on_message(message):
try:
message = json.loads(message)
msg_type = message.get("type")
if msg_type == "disconnected":
print('disconnected message from client')
await self.stop_peer_connection(call_number, self.pcs[call_number]["uid"])
elif msg_type in ["offer", "answer", "ice-candidate"]:
target_uid = message["to_uid"]
for uid, pc in self.pcs.items():
if str(pc['uid']) == target_uid:
try:
pc["dc"].send(json.dumps(message))
except Exception as e:
print(f"Error relaying {msg_type}: {e}")
else:
print(f"Unhandled message type: {msg_type}")
except Exception as e:
print(f"Error handling message: {e}")
@channel.on("close")
async def on_close():
await self.stop_peer_connection(call_number, self.pcs[call_number]["uid"])
# audio from server to client
if self.server_audio_stream_offer == None:
self.server_audio_stream_offer = Server_Audio_Stream_Offer(self.speackers_deck_queue,self.to_emitter)
#self.server_audio_stream_offer = AudioStreamTrack()
self.pcs[call_number]["pc"].addTrack(self.server_audio_stream_offer)
# video from server to client
if self.server_video_stream_offer is None:
self.server_video_stream_offer = self.create_local_tracks()
self.pcs[call_number]["pc"].addTrack(self.server_video_stream_offer)
# Attach video from server to QLabel
if self.server_video_track is None:
self.server_video_track = WebCamera(self.server_video_stream_offer,self.to_emitter)
if self.server_video_blackholde is None:
self.server_video_blackholde = MediaBlackhole()
self.server_video_blackholde.addTrack(self.server_video_track)
await self.server_video_blackholde.start()
@self.pcs[call_number]["pc"].on("track")
async def on_track(track):
if track.kind == "audio":
self.pcs[call_number]["audio_track"] = track
# audio from client (server use)
if call_number == 1:
correct_queue = self.ip_call_1_packet_queue
elif call_number == 2:
correct_queue = self.ip_call_2_packet_queue
else:
correct_queue = self.ip_call_3_packet_queue
self.put_to_q = True
self.pcs[call_number]["audio_track_for_local_use"] = ClientTrack(track, self, self.to_emitter,call_number,self.put_to_q,correct_queue)
self.pcs[call_number]["audio_blackhole"] = MediaBlackhole()
self.pcs[call_number]["audio_blackhole"].addTrack(
self.pcs[call_number]["audio_track_for_local_use"])
await self.pcs[call_number]["audio_blackhole"].start()
else:
self.pcs[call_number]["video_track"] = track
# video from client (server use)
self.pcs[call_number]["video_track_for_local_use"] = ClientWebCamera(track, self.to_emitter,
call_number, self)
self.pcs[call_number]["video_blackhole"] = MediaBlackhole()
self.pcs[call_number]["video_blackhole"].addTrack(
self.pcs[call_number]["video_track_for_local_use"])
await self.pcs[call_number]["video_blackhole"].start()
offer = RTCSessionDescription(sdp=params["sdp"], type=params["type"])
# handle offer
await self.pcs[call_number]["pc"].setRemoteDescription(offer)
# send answer
answer = await self.pcs[call_number]["pc"].createAnswer()
await self.pcs[call_number]["pc"].setLocalDescription(answer)
return web.Response(content_type="application/json", text=json.dumps(
{"sdp": self.pcs[call_number]["pc"].localDescription.sdp,
"type": self.pcs[call_number]["pc"].localDescription.type}))
else:
return self.reject_offer(call_number, peer_connection)
else:
return self.reject_offer(call_number, peer_connection)
except:
error = traceback.format_exc()
self.to_emitter.send({"type": "error", "error_message": error})
</code></pre>
<p>stop_peer_connection:</p>
<pre class="lang-py prettyprint-override"><code> async def stop_peer_connection(self, call_number, uid):
try:
if call_number not in self.pcs:
# Peer connection does not exist
return None
if self.pcs[call_number]["stop_in_progress"]:
# Stop process is already in progress
return None
self.pcs[call_number]["stop_in_progress"] = True
# 1. Notify PyQt5 about the stop
self.to_emitter.send({"type": "stop-peer-connection", "call_number": call_number})
# 2. Empty the correct queue
queue = self.call_queues[call_number - 1]
while not queue.empty():
queue.get()
# 4. Close data channel
try:
self.pcs[call_number]["dc"].close()
except Exception:
pass
# 5. Stop client audio track
try:
await self.pcs[call_number]["audio_blackhole"].stop()
except Exception:
pass
# 6. Notify PyQt5 about call status
self.to_emitter.send({"type": f"call-{call_number}-status", "status": "closed-by-client"})
# 7. Stop client video track
try:
self.pcs[call_number]["video_track_for_local_use"].stop()
await self.pcs[call_number]["video_blackhole"].stop()
except Exception:
pass
# 9. Release server resources if there are no remaining connections
if len(self.pcs.keys()) == 1: # No active peer connections
# Stop video relay and related blackholes
try:
if self.server_video_blackholde:
await self.server_video_blackholde.stop()
self.server_video_blackholde = None
if self.server_video_track:
await self.server_video_track.stop()
self.server_video_track = None
except Exception:
pass
# Stop server audio stream
try:
if self.server_audio_stream_offer:
self.server_audio_stream_offer.stop()
self.server_audio_stream_offer = None
except Exception:
pass
# Stop and release the webcam
try:
if self.webcam:
if self.webcam.video:
self.webcam.video.stop()
await asyncio.sleep(0.1) # Give time for cleanup
self.webcam = None
except Exception as e:
self.to_emitter.send({"type": "error", "error_message": str(e)})
# Attempt final resource cleanup
gc.collect() # Force garbage collection to release resources
self.to_emitter.send({"type": "log", "message": "Garbage collection triggered for cleanup."})
# 3. Close peer connection
try:
await self.pcs[call_number]["pc"].close()
print('Peer connection closed correctly!!!')
except Exception:
print(traceback.format_exc())
# 8. Cleanup
try:
del self.pcs[call_number]
except Exception:
pass
except Exception:
error = traceback.format_exc()
print(error)
self.to_emitter.send({"type": "error", "error_message": error})
</code></pre>
<p>The problem with this code is that when the client disconnects, it sends a disconnect message over the data channel and the <code>stop_peer_connection</code> coroutine runs, but the message</p>
<p><code>print('Peer connection closed correctly!!!')</code></p>
<p>is never printed, because <code>await self.pcs[call_number]["pc"].close()</code> never returns.</p>
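<p>As a hedge while debugging (an assumption about a workaround, not a root-cause fix), the <code>close()</code> call can be wrapped in a timeout so <code>stop_peer_connection</code> cannot hang forever:</p>
<pre class="lang-py prettyprint-override"><code># 3. Close peer connection, but never wait more than a few seconds for it
try:
    await asyncio.wait_for(self.pcs[call_number]["pc"].close(), timeout=5)
    print('Peer connection closed correctly!!!')
except asyncio.TimeoutError:
    print('pc.close() did not finish within 5 seconds')
except Exception:
    print(traceback.format_exc())
</code></pre>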
|
<python><p2p><rtcpeerconnection><aiortc>
|
2024-12-19 07:16:31
| 0
| 2,427
|
Chris P
|
79,293,270
| 3,003,072
|
Python: efficient unpacking large dictionary with list of strings as values to list of lists of keys
|
<p>I have a large dictionary with lists of strings as values. These strings can be duplicated. I would like to convert this dictionary to a list of lists of keys. The goal is to <strong>correlate the keys of original dictionary to unique strings so that later I can easily pick up all the keys for any unique string</strong> in subsequent analysis.</p>
<p>A simulation function is as follows:</p>
<pre><code>import cProfile
import time
import string
import random
import collections
import numpy as np
from typing import Dict, List, Set
def test_large_dict():
char_pool: str = string.ascii_uppercase
# simulate a large dictionary with 300,001 keys, 10,000 unique strings.
test_dict: Dict[int, List[str]] = collections.defaultdict(list)
num_keys: int = 300001
str_nums: np.ndarray = np.random.randint(10, 300, size=num_keys)
str_len: List[int] = np.random.randint(7, 30, size=10000).tolist()
rand_strs: List[str] = [
''.join(random.choice(char_pool) for _ in range(str_len[i]))
for i in range(10000)
]
for i in range(1, num_keys):
# no. of strings in current value: str_nums[i])
test_dict[i] = [random.choice(rand_strs) for _ in range(str_nums[i])]
# In original dictionary
print(f"1: {test_dict[1]}")
print(f"2: {test_dict[2]}")
# ===========================================
# The block I want to optimise.
t = time.time()
str_index: Dict[str, int] = {}
key_lists: List[Set[int]] = []
k: int = 0
for ix, str_list in test_dict.items():
for s in str_list:
if s in str_index:
g = str_index[s]
key_lists[g].add(ix)
else:
str_index[s] = k
key_lists.append({ix})
k += 1
print(f"Elapsed time for the conversion: {time.time() - t:.2f} seconds.")
# ===========================================
if __name__ == '__main__':
cProfile.run("test_large_dict()", sort="tottime")
</code></pre>
<p>The test time:</p>
<pre><code>Elapsed time for the conversion: 24.77 seconds.
</code></pre>
<p>Profiling results are:</p>
<pre><code>355043899 function calls in 148.398 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
46487731 37.495 0.000 109.065 0.000 random.py:341(choice)
46487731 35.055 0.000 56.797 0.000 random.py:242(_randbelow_with_getrandbits)
1 27.915 27.915 147.832 147.832 test_proteolysis.py:104(test_large_dict)
92975462 14.773 0.000 14.773 0.000 {built-in method builtins.len}
76097488 13.163 0.000 13.163 0.000 {method 'getrandbits' of '_random.Random' objects}
46297043 10.721 0.000 10.721 0.000 {method 'add' of 'set' objects}
46487731 8.579 0.000 8.579 0.000 {method 'bit_length' of 'int' objects}
1 0.566 0.566 148.398 148.398 <string>:1(<module>)
190688 0.076 0.000 0.476 0.000 test_proteolysis.py:112(<genexpr>)
10000 0.040 0.000 0.517 0.000 {method 'join' of 'str' objects}
2 0.005 0.003 0.005 0.003 {method 'randint' of 'numpy.random.mtrand.RandomState' objects}
10000 0.005 0.000 0.005 0.000 {method 'append' of 'list' objects}
3 0.005 0.002 0.005 0.002 {built-in method builtins.print}
1 0.000 0.000 0.000 0.000 {method 'tolist' of 'numpy.ndarray' objects}
2 0.000 0.000 0.000 0.000 {method 'reduce' of 'numpy.ufunc' objects}
1 0.000 0.000 148.398 148.398 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
2 0.000 0.000 0.000 0.000 fromnumeric.py:71(_wrapreduction)
2 0.000 0.000 0.000 0.000 fromnumeric.py:2979(prod)
2 0.000 0.000 0.000 0.000 {built-in method builtins.getattr}
2 0.000 0.000 0.000 0.000 {built-in method time.time}
3 0.000 0.000 0.000 0.000 {method 'items' of 'dict' objects}
2 0.000 0.000 0.000 0.000 fromnumeric.py:2974(_prod_dispatcher)
</code></pre>
<p>A lot of time is spent unpacking this large dictionary into lists. Is there a more efficient way to do it? I'm also open to any data structure that achieves the goal. In the provided example, I generate a <code>dict</code> <code>str_index</code> and a <code>list</code> <code>key_lists</code>. The keys of <code>str_index</code> are the unique strings, and the values are indexes into <code>key_lists</code>, so that all keys in the original large dictionary for the <em>i</em>th unique string are put in the <em>i</em>th entry of <code>key_lists</code>, <em>e.g.</em>, all keys for the unique string <code>sj</code> are <code>key_lists[str_index[sj]]</code>.</p>
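<p>For what it's worth, a sketch of an equivalent single-pass inversion that maps each unique string directly to the set of keys containing it (assuming a plain dict keyed by the strings is an acceptable substitute for the <code>str_index</code>/<code>key_lists</code> pair):</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict

keys_by_string: dict[str, set[int]] = defaultdict(set)
for key, str_list in test_dict.items():
    for s in str_list:
        keys_by_string[s].add(key)

# all keys whose value list contains the unique string sj:
# keys_by_string[sj]
</code></pre>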
|
<python>
|
2024-12-19 06:20:52
| 1
| 616
|
Elkan
|
79,293,171
| 2,037,570
|
import of a function from nested structure
|
<p>Consider this directory structure:</p>
<pre><code>a-a
| b
| c
| print.py
</code></pre>
<p>Basically a-a/b/c, with print.py inside that directory. The contents of print.py look like this:</p>
<pre><code>def print_5():
print("5")
def print_10():
print("10")
</code></pre>
<p>I want to import and use these functions in my current file at the level of a-a. The structure is:</p>
<pre><code>ls
a-a test.py
</code></pre>
<p>How do I do that?</p>
<p>Inside test.py, I tried importing the functions as:</p>
<pre><code>from a-a.b.c import f
print_5()
</code></pre>
<p>And it gives a SyntaxError: invalid syntax, which I understand. So I renamed 'a-a' to 'a_a', and it then gave me ModuleNotFoundError: No module named 'a_a'. I know this is probably trivial, but it just isn't coming together for me.</p>
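<p>For illustration, a minimal sketch of a layout that makes the import work, assuming the directory really is renamed to <code>a_a</code> (hyphens are not valid in module names) and package marker files are added:</p>
<pre><code>a_a/
    __init__.py
    b/
        __init__.py
        c/
            __init__.py
            print.py
test.py
</code></pre>
<p>and in <code>test.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from a_a.b.c.print import print_5, print_10

print_5()
</code></pre>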
|
<python>
|
2024-12-19 05:18:28
| 2
| 3,645
|
Hemant Bhargava
|
79,293,060
| 772,649
|
How to set multiple elements conditionally in Polars similar to .loc in Pandas?
|
<p>I am trying to set multiple elements in a Polars DataFrame based on a condition, similar to how it is done in Pandas. Here’s an example in Pandas:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(dict(
A=[1, 2, 3, 4, 5],
B=[0, 5, 9, 2, 10],
))
df.loc[df['A'] < df['B'], 'A'] = [100, 210, 320]
print(df)
</code></pre>
<p>This updates column <code>A</code> where <code>A < B</code> with <code>[100, 210, 320]</code>.</p>
<p>In Polars, I know that updating a DataFrame in place is not possible, and it is fine to return a new DataFrame with the updated elements. I have tried the following methods:</p>
<h3>Attempt 1: Using <code>Series.scatter</code> with <code>map_batches</code></h3>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(dict(
A=[1, 2, 3, 4, 5],
B=[0, 5, 9, 2, 10],
))
def set_elements(cols):
a, b = cols
return a.scatter((a < b).arg_true(), [100, 210, 320])
df = df.with_columns(
pl.map_batches(['A', 'B'], set_elements)
)
</code></pre>
<h3>Attempt 2: Creating an update DataFrame and using <code>update()</code></h3>
<pre class="lang-py prettyprint-override"><code>df = df.with_row_index()
df_update = df.filter(pl.col('A') < pl.col('B')).select(
'index',
pl.Series('A', [100, 210, 320])
)
df = df.update(df_update, on='index').drop('index')
</code></pre>
<p>Both approaches work, but they feel cumbersome compared to the straightforward Pandas syntax.</p>
<p><strong>Question:</strong><br />
Is there a simpler or more idiomatic way in Polars to set multiple elements conditionally in a column, similar to the Pandas <code>loc</code> syntax?</p>
|
<python><dataframe><python-polars>
|
2024-12-19 03:44:50
| 3
| 97,797
|
HYRY
|
79,292,925
| 4,701,426
|
Interacting with the "save as" window when downloading a webpage in Chrome using Selenium
|
<p>I have two scripts:</p>
<ol>
<li><code>other.py</code> looks like this:</li>
</ol>
<blockquote>
<pre><code># some stuff is done here and a list of urls is created such as:
urls = ['https://www.walmart.com/ip/Sabrina-Carpenter-Cherry-Pop-EDP-30ml-1oz/5492571361?classType=REGULAR&athbdg=L1600', 'https://www.walmart.com/ip/Hoey-5-1-Painless-Hair-Remover-Women-Facial-Removal-Electric-Cordless-Shaver-Set-Wet-Dry-Lady-Razor-Women-Bikini-Line-Nose-Hair-Eyebrow-Arm-Leg-USB-R/647670434?classType=REGULAR']
# Then, the script runs another script called get_url.py and passes the urls to it to be processed:
subprocess.Popen(['python', 'get_url.py', str(urls)])
#it is important that this does not block the code and the rest of the code in this script can run without waiting for get_url.py to complete.
</code></pre>
</blockquote>
<ol start="2">
<li><code>get_url.py</code> called above looks like this and downloads each url passed to it:</li>
</ol>
<blockquote>
<pre><code>import pandas as pd
import os
import time
from datetime import datetime
import pyautogui
from selenium.webdriver.chrome.options import Options
from selenium import webdriver
from concurrent.futures import ProcessPoolExecutor
import sys
import ast

options = Options()  # Chrome options used by webdriver.Chrome below; configure (e.g. headless) as needed
def get_page(url):
file_name = f"{url[:20]}_{pd.to_datetime(datetime.now()).strftime('%Y-%m-%d %H-%M-%S')}.html"
file_path = os.path.join(os.getcwd(), 'data', 'htmls')
path_and_name = os.path.join(file_path, file_name)
driver = webdriver.Chrome(options=options)
driver.get(url)
time.sleep(1)
pyautogui.hotkey('ctrl', 's') # open the save as window
time.sleep(1)
pyautogui.typewrite(path_and_name ) # enter the path and file name so the webpage is downloaded in the desired directory
time.sleep(.5)
pyautogui.hotkey('enter')
time.sleep(.2)
while True: # wait until the download is complete, then close the driver
files = os.listdir(file_path)
if file_name in files:
driver.close()
break
time.sleep(.1)
urls = sys.argv[1] # getting urls from other.py
#converting the string urls to an actual list:
urls = ast.literal_eval(urls.replace('[', '').replace(']', '').replace('\n', ', '))
if __name__ =='__main__': # multi-processing the urls to speed up things(necessary)
with ProcessPoolExecutor(max_workers=10) as executer:
executer.map(get_page, urls, chunksize = 1)
</code></pre>
</blockquote>
<p>The function works fine as long as I open one browser. However, as soon as multiple windows open by the <code>ProcessPoolExecutor</code>, it appears that the <code>pyautogui.typewrite</code> part of the function loses track of the windows, which may lead to <code>path_and_name</code> being typed multiple times in the "save as" window or typed incomplete leading to the page not downloading or being downloaded with a bad name/directory. Even worse, if I click somewhere like inside my code editor when the function is running, <code>pyautogui</code> may type the <code>path_and_name</code> value in the editor where the cursor is active. Running the browser in "headless" mode so that I don't accidentally mess with the windows does not help.</p>
<p>So, basically, how do I fix the above code?</p>
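<p>For what it's worth, one direction that sidesteps <code>pyautogui</code> and window focus entirely is to write <code>driver.page_source</code> to disk (a sketch that reuses the imports and <code>options</code> from the script above; note it saves only the HTML document, not a "complete webpage" with its assets):</p>
<pre class="lang-py prettyprint-override"><code>import re

def get_page(url):
    # strip characters that are not valid in file names (the URL contains '/' and ':')
    safe_prefix = re.sub(r'[^A-Za-z0-9._-]', '_', url[:20])
    file_name = f"{safe_prefix}_{datetime.now().strftime('%Y-%m-%d %H-%M-%S')}.html"
    path_and_name = os.path.join(os.getcwd(), 'data', 'htmls', file_name)
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        time.sleep(1)
        with open(path_and_name, 'w', encoding='utf-8') as f:
            f.write(driver.page_source)
    finally:
        driver.quit()
</code></pre>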
|
<python><selenium-webdriver>
|
2024-12-19 01:48:17
| 1
| 2,151
|
Saeed
|
79,292,914
| 8,458,083
|
Why does unpacking with the new generic syntax in Python 3.13 cause a mypy error?
|
<p>I am trying to use the new generic type syntax introduced in Python 3.13 for defining type aliases with unpacking (*). While the code executes correctly, mypy raises a type-checking error. The same code works fine when using the old generic syntax.
Here is my code using the old generic syntax, which works both at runtime and with mypy:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Callable
from typing import TypeAliasType, Unpack
RK_function_args = TypeAliasType("RK_function_args", tuple[float, int])
# Original function type
RK_function = TypeAliasType("RK_function", Callable[[Unpack[RK_function_args]], int])
# Attempted new function type with an additional int argument
RK_functionBIS = TypeAliasType(
"RK_functionBIS", Callable[[Unpack[RK_function_args], int], int]
)
def ff(a: float, b: int, c: int) -> int:
return 2
bis: RK_functionBIS = ff
res: int = bis(1.0, 2, 3) # OK
print(res)
</code></pre>
<p>However, when I rewrite this using the new generic syntax from Python 3.13, mypy raises an error:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Callable
from typing import Unpack
type RK_function_args = tuple[float, int]
type RK_function = Callable[[Unpack[RK_function_args]], int]
type RK_functionBIS = Callable[[*RK_function_args, int], int]
def f(a: float, b: int) -> int:
return 2
def ff(a: float, b: int, c: int) -> int:
return 2
bis: RK_functionBIS = ff
res: int = bis(1.0, 2, 3) # OK
print(res)
</code></pre>
<blockquote>
<p>main.py:17: error: Incompatible types in assignment (expression has
type "Callable[[float, int, int], int]", variable has type
"Callable[[VarArg(*tuple[*tuple[float, int], int])], int]")
[assignment]</p>
</blockquote>
|
<python><python-typing><mypy><python-3.13>
|
2024-12-19 01:41:26
| 0
| 2,017
|
Pierre-olivier Gendraud
|
79,292,913
| 432,509
|
Are there any drawbacks to assigning `__slots__` from `__annotations__` in a Python class?
|
<p>With Python's type annotations, classes that declare <code>__slots__</code> often end up duplicating the class identifiers.</p>
<p>A brief example:</p>
<pre><code>class Event:
location: tuple[float, float]
index: int
__slots__ = (
"location",
"index",
)
</code></pre>
<p>An alternative to this is to use annotations to declare slots.</p>
<pre><code>class Event:
location: tuple[float, float]
index: int
__slots__ = tuple(__annotations__)
</code></pre>
<p>Are there any reasons to avoid deriving <code>__slots__</code> from <code>__annotations__</code> in Python 3.10 and newer?</p>
<hr />
<p>Notes:</p>
<ul>
<li><p>This was suggested as an answer to this question:<br>
<a href="https://stackoverflow.com/questions/52553143">__slots__ type annotations for Python / PyCharm</a> by @mike-r.</p>
</li>
<li><p>A similar question here:<br>
<a href="https://stackoverflow.com/questions/68611263">Is it a good practice to use __annotations__.keys() for __slots__?</a> <br>
<em>...which I wasn't aware of when asking this.</em></p>
</li>
</ul>
|
<python><python-typing><slots>
|
2024-12-19 01:40:24
| 1
| 49,183
|
ideasman42
|
79,292,846
| 5,942,779
|
Use Python's logging to overwriting previous console line
|
<p>I want to use Python's logging library to overwrite the previous console line. It should display <code>...</code> on the console while the process is running, then overwrite it with <code>OK</code> once the process completes, as shown in this Python code.</p>
<pre class="lang-py prettyprint-override"><code>import time
for i in range(10):
print(f'Progress: {i+1}/10 ...', end='')
time.sleep(1)
print(f'\rProgress: {i+1}/10 OK')
</code></pre>
<p><a href="https://i.sstatic.net/rU6RKm7k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rU6RKm7k.png" alt="console output displays ... while it is running, and then overwrites it with OK on the same line once it completes" /></a></p>
<p>I want to achieve the same result using the <code>logging</code> library, but my current solution feels too verbose and un-pythonic. Does anyone have a better approach? Thanks!</p>
<pre><code>import logging
import sys
import time
class OverwriteStreamHandler(logging.StreamHandler):
def __init__(self, stream, overwrite=False):
super().__init__(stream)
self.overwrite = overwrite
def emit(self, record):
msg = self.format(record)
stream = self.stream
if self.overwrite:
stream.write('\r' + msg)
else:
stream.write(msg)
stream.flush()
def create_logger(name, overwrite=False):
handler = OverwriteStreamHandler(sys.stdout, overwrite=overwrite)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
logger = logging.getLogger(name)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
return logger
logger1 = create_logger('logger1')
logger2 = create_logger('logger2', overwrite=True)
for i in range(10):
logger1.info(f'Progress: {i+1}/10 ...')
time.sleep(1)
logger2.info(f'Progress: {i+1}/10 Ok\n')
</code></pre>
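<p>For comparison, a shorter sketch that leans on the handler's <code>terminator</code> attribute instead of a custom handler class (an assumption that letting each log message carry its own <code>\r</code>/<code>\n</code> is acceptable):</p>
<pre class="lang-py prettyprint-override"><code>import logging
import sys
import time

handler = logging.StreamHandler(sys.stdout)
handler.terminator = ''   # do not append '\n'; the messages decide their own line endings
logging.basicConfig(level=logging.INFO, format='%(message)s', handlers=[handler])
logger = logging.getLogger(__name__)

for i in range(10):
    logger.info(f'Progress: {i+1}/10 ...')
    time.sleep(1)
    logger.info(f'\rProgress: {i+1}/10 OK\n')
</code></pre>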
|
<python>
|
2024-12-19 00:37:00
| 2
| 689
|
Scoodood
|
79,292,593
| 6,041,629
|
How to parallelize scipy.optimize basinhopping?
|
<p>I am performing many optimizations on a vector of about 100 values, with the optimization inputs changing each timestep based on the results of the previous timestep's optimization.</p>
<p>Optimization is done using Scipy.Optimize.Basinhopping. Each optimization step takes 10-20 seconds to run. Is it possible to parallelize Basinhopping to run a single optimization across multiple cores, and therefore speed up each step? I can't seem to find an example of this. Any suggestions would be appreciated.</p>
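<p>For what it's worth, a sketch of the closest readily available workaround: run several independent basinhopping restarts in parallel and keep the best result. This parallelizes across restarts rather than inside a single run (as far as I can tell, SciPy does not split one basinhopping run across cores); the objective below is a placeholder.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.optimize import basinhopping

def objective(x):
    return float(np.sum((x - 1.0) ** 2))    # placeholder for the real objective

def one_restart(seed):
    rng = np.random.default_rng(seed)
    x0 = rng.normal(size=100)                # vector of ~100 values, as in the question
    return basinhopping(objective, x0, niter=50, seed=seed)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(one_restart, range(4)))
    best = min(results, key=lambda r: r.fun)
    print(best.fun)
</code></pre>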
|
<python><optimization><scipy>
|
2024-12-18 21:54:16
| 0
| 526
|
Kingle
|
79,292,473
| 1,361,752
|
Display holoviews in jupyter in an air-gapped (offline) environment
|
<p>How should you use Holoviews with a bokeh backend in an air-gapped system (one with no internet access)?</p>
<p>In such cases its possible to get bokeh to still display using inline in a jupyter notebook by putting the following at the top of your notebook:</p>
<pre class="lang-py prettyprint-override"><code>import bokeh.io
from bokeh.resources import INLINE
bokeh.io.output_notebook(INLINE)
</code></pre>
<p>However, these same lines do not seem to enable holoviews with a bokeh back-end to work in the same environment. Holoviews silently fails providing blank outputs. Is there any way to get holoviews with bokeh to work in an elegant way on air-gapped systems?</p>
|
<python><bokeh><holoviews>
|
2024-12-18 21:04:09
| 1
| 4,167
|
Caleb
|
79,292,404
| 620,679
|
Add a list property to a pd.DataFrame subclass
|
<p>I'm using a subclass of pandas' DataFrame class. The subclass needs to have a property that is a list. Here's an example:</p>
<pre><code>import pandas as pd
class MyDataFrame(pd.DataFrame):
def __init__(self, data, colors, *args, **kwargs):
m = pd.DataFrame(data)
super().__init__(m, *args, **kwargs)
self.colors = colors
my_df = MyDataFrame(
{
"name": ["Fred", "Wilma"],
"age": [42, 38]
},
colors=["red", "yellow", "green"])
</code></pre>
<p>This gets me the following warning on <code>self.colors = colors</code>:</p>
<blockquote>
<p>UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see <a href="https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access</a></p>
</blockquote>
<p>It appears that the problem is DataFrame's feature of treating column headers as attributes and interpreting the "<code>self.colors = colors</code>" line as a request to add a column to the DataFrame, which it very reasonably declines to do. I've tried adding a setter, with no effect. I also tried moving the attribute assignment above the <code>super().__init__</code> call, but ended up in an infinite recursion. What can I do to fix this?</p>
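<p>For reference, a sketch based on pandas' documented subclassing hooks (<code>_metadata</code> tells pandas that <code>colors</code> is a custom attribute rather than a column; behaviour across pandas versions is an assumption worth testing):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

class MyDataFrame(pd.DataFrame):
    _metadata = ["colors"]          # declare custom attributes pandas should leave alone

    def __init__(self, data, colors=None, *args, **kwargs):
        super().__init__(pd.DataFrame(data), *args, **kwargs)
        self.colors = colors        # no longer interpreted as "create a column"

    @property
    def _constructor(self):
        # keep producing MyDataFrame from operations that return new frames
        return MyDataFrame

my_df = MyDataFrame({"name": ["Fred", "Wilma"], "age": [42, 38]},
                    colors=["red", "yellow", "green"])
print(my_df.colors)
</code></pre>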
|
<python><pandas>
|
2024-12-18 20:30:00
| 1
| 4,041
|
Scott Deerwester
|
79,292,303
| 395,255
|
how to format output with newlines using ast module
|
<p>I am trying to generate some code using the AST module. This is what my code looks like:</p>
<p>I would like to add a newline after each key so that the output is properly formatted.</p>
<pre><code>my_awesome_properties = [
(ast.Constant(value="key1"), ast.Constant(value="value1")),
(ast.Constant(value="key2"), ast.Constant(value="value2")),
(ast.Constant(value="key3"), ast.Constant(value="value3")),
]
my_awesome_properties_dict = ast.Dict(
keys=[key for key, value in my_awesome_properties],
values=[value for key, value in my_awesome_properties]
)
my_awesome_properties_assignment = ast.Assign(
targets=[ast.Name(id='my_awesome_properties', ctx=ast.Store())],
value=my_awesome_properties_dict
)
my_awesome_properties_assignment.lineno = 1
my_awesome_module_node = ast.Module(
body=[
my_awesome_properties_assignment,
],
type_ignores=[]
)
print(ast.unparse(my_awesome_module_node))
</code></pre>
<p>This generates output as
<code>my_awesome_properties = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}</code></p>
<p>How do I add a newline after each key, with proper formatting, so that the output looks like this, with double quotes?</p>
<pre><code>my_awesome_properties = {
"key1": "value1",
"key2": "value2",
"key3": "value3"
}
</code></pre>
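<p>As far as I know, <code>ast.unparse</code> has no pretty-printing options, so here is a sketch that sidesteps it for this particular case: build the dictionary as plain Python values and let <code>json.dumps</code> produce the newlines, indentation and double quotes (this assumes the keys and values stay JSON-serializable strings):</p>
<pre class="lang-py prettyprint-override"><code>import json

my_awesome_properties = {
    "key1": "value1",
    "key2": "value2",
    "key3": "value3",
}

code = "my_awesome_properties = " + json.dumps(my_awesome_properties, indent=4)
print(code)
</code></pre>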
|
<python><abstract-syntax-tree>
|
2024-12-18 19:41:40
| 2
| 12,380
|
Asdfg
|
79,292,298
| 17,721,722
|
How to Automatically Detect PySpark Home Path from PyInstaller Executable?
|
<p>In my local development environment, I can easily run a PySpark application without configuring anything. However, on the server, we are using PyInstaller for EXE deployment. PyInstaller does not include the PySpark libraries' <code>_internal</code> folder in the executable, so I have to manually set the path.</p>
<p>Here's a snippet of my PyInstaller <code>manage.py</code> script:</p>
<pre class="lang-py prettyprint-override"><code># -*- mode: python ; coding: utf-8 -*-
# Analysis for manage.py
a_manage = Analysis(
['manage.py'],
pathex=['/app/app_name/app_name-backend-dev'],
# I tried adding .venv/lib/python3.11/site-packages to the pathex, but it didn't work
binaries=[
('/usr/lib/x86_64-linux-gnu/libpython3.11.so.1.0', './_internal/libpython3.11.so.1.0')
],
datas=[],
hiddenimports=[
# I tried adding pyspark imports, but it didn't work
'pyspark', 'pyspark.sql', 'pyspark.sql.session', 'pyspark.sql.functions', 'pyspark.sql.types', 'pyspark.sql.column',
'app_name2.apps', 'Crypto.Cipher', 'Crypto.Util.Padding', 'snakecase', 'cryptography.fernet',
'cryptography.hazmat.primitives', 'cryptography.hazmat.primitives.kdf.pbkdf2', 'apscheduler.triggers.cron',
'apscheduler.schedulers.background', 'apscheduler.events', 'oauth2_provider.contrib.rest_framework',
'app_name.apps', 'app_name.role_permissions', 'django_filters.rest_framework', 'app_name.urls',
'app_name.others.constants', 'app_name.models', 'app_name', 'sslserver'
],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
noarchive=False,
)
pyz_manage = PYZ(a_manage.pure)
exe_manage = EXE(
pyz_manage,
a_manage.scripts,
[],
exclude_binaries=True,
name='manage',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=True,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
coll_manage = COLLECT(
exe_manage,
a_manage.binaries,
a_manage.datas,
strip=False,
upx=True,
upx_exclude=[],
name='manage',
)
</code></pre>
<p>When I try to run the executable, I encounter the following error:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "portal/operations/load_data/load_data.py", line 57, in start
File "portal/pyspark/operations.py", line 498, in get_session
File "pyspark/sql/session.py", line 497, in getOrCreate
File "pyspark/context.py", line 515, in getOrCreate
File "pyspark/context.py", line 201, in __init__
File "pyspark/context.py", line 436, in _ensure_initialized
File "pyspark/java_gateway.py", line 97, in launch_gateway
File "subprocess.py", line 1026, in __init__
File "subprocess.py", line 1955, in _execute_child
FileNotFoundError: [Errno 2] No such file or directory: '/home/rhythmflow/Desktop/Reconciliation/reconciliation-backend-v3/dist/manage/_internal/./bin/spark-submit'
</code></pre>
<p>To resolve this, I created a global <code>.venv</code> in the Linux home directory and installed PySpark using <code>pip install pyspark</code>.</p>
<p>I then manually set the <code>SPARK_HOME</code> environment variable:</p>
<pre class="lang-bash prettyprint-override"><code>SPARK_HOME = /home/user_name/.venv/lib/python3.11/site-packages/pyspark
</code></pre>
<p>And used it in my code as follows:</p>
<pre class="lang-py prettyprint-override"><code>SPARK_HOME = env_var("SPARK_HOME")
SparkSession.builder.appName(app_name).config("spark.home", SPARK_HOME).getOrCreate()
</code></pre>
<p>This approach works fine in the development environment, but I want to simplify the process and avoid manually specifying the Spark home path.</p>
<h3>Question:</h3>
<p>Is there a way to automatically detect the PySpark home path in a PyInstaller executable, so that I don't have to manually set the <code>SPARK_HOME</code> environment variable?</p>
<h3>Edit:</h3>
<p>I tried this approach to get the Spark home directory:</p>
<pre class="lang-py prettyprint-override"><code>import pyspark
SPARK_HOME = os.path.dirname(pyspark.__file__)
</code></pre>
<p>However, I encountered the following error; I think PySpark is not getting included in the EXE build/dist.</p>
<pre class="lang-bash prettyprint-override"><code>Could not find valid SPARK_HOME while searching ['/home/user_name/Desktop/project_name/app_name/dist', '/home/user_name/Desktop/project_name/app_name/dist/manage/_internal/pyspark/spark-distribution', '/home/user_name/Desktop/project_name/app_name/dist/manage/_internal/pyspark', '/home/user_name/Desktop/project_name/app_name/dist/manage/_internal/pyspark/spark-distribution', '/home/user_name/Desktop/project_name/app_name/dist/manage/_internal/pyspark', '/home/user_name/Desktop/project_name/app_name/dist/manage']
Traceback (most recent call last):
File "pyspark/find_spark_home.py", line 73, in _find_spark_home
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "wsgiref/handlers.py", line 137, in run
File "django/contrib/staticfiles/handlers.py", line 80, in __call__
return self.application(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/wsgi.py", line 124, in __call__
response = self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/base.py", line 140, in get_response
response = self._middleware_chain(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "django/utils/deprecation.py", line 129, in __call__
response = response or self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "django/utils/deprecation.py", line 129, in __call__
response = response or self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "corsheaders/middleware.py", line 56, in __call__
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "django/utils/deprecation.py", line 129, in __call__
response = response or self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "django/utils/deprecation.py", line 129, in __call__
response = response or self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "django/utils/deprecation.py", line 129, in __call__
response = response or self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "django/utils/deprecation.py", line 129, in __call__
response = response or self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "django/utils/deprecation.py", line 129, in __call__
response = response or self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "portal/middleware.py", line 29, in __call__
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "portal/encdec_middleware.py", line 53, in __call__
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "django/utils/deprecation.py", line 129, in __call__
response = response or self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "portal/middleware.py", line 115, in __call__
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "django/utils/deprecation.py", line 129, in __call__
response = response or self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/views/decorators/csrf.py", line 65, in _view_wrapper
return view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "django/views/generic/base.py", line 104, in view
return self.dispatch(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "rest_framework/views.py", line 506, in dispatch
File "portal/views.py", line 174, in post
File "portal/operations/initializer.py", line 39, in initialize
File "portal/operations/operation_factory.py", line 52, in create
File "portal/operations/load_data/load_data.py", line 57, in start
File "portal/pyspark/operations.py", line 496, in get_session
File "pyspark/sql/session.py", line 497, in getOrCreate
File "pyspark/context.py", line 515, in getOrCreate
File "pyspark/context.py", line 201, in __init__
File "pyspark/context.py", line 436, in _ensure_initialized
File "pyspark/java_gateway.py", line 60, in launch_gateway
File "pyspark/find_spark_home.py", line 91, in _find_spark_home
SystemExit: -1
</code></pre>
|
<python><django><ubuntu><pyspark><pyinstaller>
|
2024-12-18 19:38:04
| 1
| 501
|
Purushottam Nawale
|
79,292,192
| 6,345,518
|
Regex match only if odd number of a character precede and follow
|
<p>I want to match the closing quote together with the opening quote of the following string if both are on the same line. Two strings may be separated either by a blank <code> </code> or a <strong>blank</strong>-plus-<strong>blank</strong> <code> + </code>.</p>
<p>Regex engine: Python</p>
<p>For instance, from</p>
<pre><code>this is "some string" "; which should match" 234
"and this" + "should also match\"" "\"and this"
but not this: " " a " + "
</code></pre>
<p>I'd like to see matches for:</p>
<ul>
<li>line 1: <code>" "</code> from between <code>some string</code> and <code>; which...</code></li>
<li>line 2:
<ul>
<li><code>" + "</code> from between <code>and this</code> and <code>should also match\"</code></li>
<li><code>" "</code> from between <code>should also match\"</code> and <code>\"and this</code></li>
</ul>
</li>
<li>line 3: No matches</li>
</ul>
<p>So in fact, I <strong>think</strong> it might be best to only match the groups <code>" "</code> and <code>" + "</code> if there is an <strong>odd number</strong> of quotes <strong>before and after</strong> the group. Since lookbehind/lookahead is fixed-length only, I didn't find a good way to do it.</p>
<p>I tried</p>
<pre><code>re.compile(r'(" \+ ")|(" ")(?!;|,)')
</code></pre>
<p>but this assumes that there may be no semicolon within a string</p>
<p>and also</p>
<pre><code>re.compile(r'"[^"]+")
</code></pre>
<p>but this only finds the strings themselves, not the "inter-string" quotes.</p>
|
<python><regex>
|
2024-12-18 18:54:22
| 3
| 5,832
|
JE_Muc
|
79,292,094
| 5,941,624
|
Print randomly progress bar in tqdm
|
<p>I want to use tqdm in a loop such as:</p>
<pre><code>def __process_each_iteration(self, imputer) -> tuple[int, float, float]:
progress_bar= tqdm(
range(self.base_imputed_df.shape[1]),
desc="Processing...: ",
bar_format=(
"{l_bar}{bar}| Iteration {n_fmt}/{total_fmt} "
"[{elapsed}<{remaining}, {rate_fmt}]"
),
)
for col_index in progress_bar:
pass
progress_bar.close()
def fit_transform(self):
for idx, imputer in enumerate(range(10)):
change, avg_train_metric, avg_val_metric = self.__process_each_iteration(imputer)
pass...
</code></pre>
<p>when I run the above code, it gives me the following output:</p>
<pre><code>Iteration 1/9
Processing...: 100%|██████████| Iteration 30/30 [00:09<00:00, 3.26it/s]
Processing...: 0%| | Iteration 0/30 [00:00<?, ?it/s]30 columns updated.
Average r2_score -> train: 0.9665220914801507, val: 0.7951696912960284
Iteration 2/9
Processing...: 100%|██████████| Iteration 30/30 [00:13<00:00, 2.30it/s]
Processing...: 0%| | Iteration 0/30 [00:00<?, ?it/s]19 columns updated.
Average r2_score -> train: 0.9849819806147938, val: 0.85501137134333
</code></pre>
<p>It prints two progress bars in each iteration. I also tried using tqdm as follows:</p>
<pre><code>with tqdm(...) as
</code></pre>
<p>but I had the same problem...</p>
|
<python><tqdm>
|
2024-12-18 18:13:31
| 0
| 331
|
fatemeakbari
|
79,292,053
| 1,028,270
|
keyrings.google-artifactregistry.auth for python AR repo without a service account JSON key
|
<p>I don't use non-expiring service account JSON keys; I use ambient creds and OIDC everywhere, so I can't generate and use json keys for AR.</p>
<p>I want to do pip installs in my Dockerfiles to install my private AR packages.</p>
<p>Doing something like this works:</p>
<pre><code># Passing in a non expiring key
docker build -f "Dockerfile" \
--secret id=gcp_creds,src="gsa_key.json" .
##### IN DOCKERFILE
# I don't want to generate json keys like this
ENV GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp_creds
RUN pip install keyring keyrings.google-artifactregistry-auth
COPY requirements.txt requirements.txt
RUN --mount=type=secret,id=gcp_creds \
--mount=type=cache,target=target=/root/.cache \
pip install -r requirements.txt
</code></pre>
<p>Does keyrings.google-artifactregistry-auth support tokens somehow?</p>
<p>This did not work for example:</p>
<pre><code># GSA is active through ambient creds or impersonation
gcloud auth print-access-token > /tmp/token.json
# also tried: gcloud config config-helper --format="json(credential)" > /tmp/token.json
docker build -f "Dockerfile" \
--secret id=gcp_creds,src="/tmp/token.json" .
</code></pre>
<p>I'd like to avoid resorting to doing something like building outside of the container and copying the built artifacts in the image. I want to do the build inside the Dockerfile.</p>
<h1>EDIT</h1>
<p>So I got this working with GCP provided GH actions tooling so I know it's possible to generate temp tokens in a format the keyring will accept. I still want to know how to do this with the gcloud cli for other scenarios.</p>
<p>Using the Dockerfile above and these GH actions steps I can auth via OIDC and generate a creds file the pip keyring provider accepts:</p>
<pre><code>- id: 'auth'
name: 'auth to gcp'
uses: 'google-github-actions/auth@v2'
with:
token_format: 'access_token'
workload_identity_provider: 'projects/xxxxx/locations/global/workloadIdentityPools/github/providers/github'
service_account: xxxxxxxx
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v6
with:
push: false
context: '.'
file: 'Dockerfile'
tags: myimage:mytag
load: true
cache-from: type=gha,scope=localbuild
cache-to: type=gha,mode=max,scope=localbuild
secret-files: |
gcp_creds=${{steps.auth.outputs.credentials_file_path}}
</code></pre>
|
<python><google-cloud-platform><pip><google-iam><google-artifact-registry>
|
2024-12-18 17:56:04
| 3
| 32,280
|
red888
|
79,291,973
| 5,920,776
|
Python Flask Saving/Sharing Query results between routes
|
<p>I'm developing a python flask web application that takes user input for which cities to plan a route for, geocodes the input cities by querying a third party API, and uses that result to query another API to determine the optimal route and driving directions.</p>
<p>I have two routes that use the output of the optimal route query to generate an itinerary table and a folium map.</p>
<p>Currently the API calls happen twice, once within the itinerary table route and once in the folium map route, and I was aiming to fix that by running them once and passing the results to each in turn.</p>
<p>The initial steps are fine, as I'm able to store the geocoded city results in a session value and access them with no problem. However, the optimal route directions are too large to fit in a session cookie; additionally, I'm getting a type error with the destination order that I'm saving to a session value as well.</p>
<p>The app is deployed on google cloud.</p>
<p>Any advice on what is the best/simplest method to make the list of latitude/longitude coordinates for the optimal route driving directions available to both the map and the itinerary?</p>
<p>I've seen some posts about KV session (<a href="https://pythonhosted.org/Flask-KVSession/" rel="nofollow noreferrer">https://pythonhosted.org/Flask-KVSession/</a>) and talk about using a server side database. Not sure if those are my only options or if there is anything else that might be easier to setup.</p>
<p>Skeleton code of my app</p>
<pre><code>@app.route('/', methods=['GET', 'POST'])
def homePage():
return render_template('index.html')
@app.route('/uploadFile', methods=['GET', 'POST'])
def uploadFile():
# takes user uploaded text file and saves as session['uploaded_data_file']
return redirect(url_for('geocodeUpload'))
@app.route('/checkUpload', methods=['GET', 'POST'])
def geocodeUpload():
# gets uploaded file of cities and sends to geocoding API and then saves to session.
session['geocodeTable'] = geocodeTable
return render_template("confirm_upload.html", geocodeTable=geocodeTable)
# user clicks on generate itinerary in html page
@app.route('/queryAPI', methods=['GET','POST'])
def queryAPI():
# this is the section I'm building to handle the optimal route query
return redirect(url_for('showItinerary'))
@app.route('/itinerary', methods=['GET','POST'])
def showItinerary():
# currently gets geocoded cities from session and queries API by itself
return render_template('itinerary.html', itinerary=itinerary)
@app.route('/map')
def displayMap():
# called from iframe within itinerary.html. gets geocoded cities and queries API to make map
m.save(map_file)
return open(map_file).read()
</code></pre>
<p>In case relevant, everything is wrapped in</p>
<pre><code>def create_app(test_config=None):
# create and configure the app
app = Flask(__name__, instance_relative_config=True)
return app
</code></pre>
<p>From (<a href="https://flask.palletsprojects.com/en/stable/patterns/appfactories/" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/stable/patterns/appfactories/</a>)</p>
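<p>For what it's worth, a sketch of the server-side-session direction using the Flask-Session extension (the Redis backend is an assumption; on Google Cloud some shared backend is needed because in-process memory is not shared between workers or instances):</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, session
from flask_session import Session
import redis

app = Flask(__name__)
app.config["SESSION_TYPE"] = "redis"
app.config["SESSION_REDIS"] = redis.Redis(host="localhost", port=6379)  # placeholder host
Session(app)

@app.route("/queryAPI")
def query_api():
    # large objects now live server-side; the cookie only carries the session id
    session["route_coordinates"] = [(51.5074, -0.1278), (48.8566, 2.3522)]
    return "stored"
</code></pre>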
|
<python><flask>
|
2024-12-18 17:21:16
| 0
| 438
|
PhDavey
|
79,291,938
| 3,070,181
|
How to get a reference to traced variable in tkinter
|
<p>I have a tkinter application with a variable number of entries, and I want to detect changes to the contents and, specifically, which entry has been changed.</p>
<pre><code>import tkinter as tk
from tkinter import ttk
DOGS = 3
def main() -> None:
root = tk.Tk()
root.title('')
root.geometry('400x400')
dogs = {}
entries = {}
for row in range(DOGS):
dogs[row] = tk.StringVar()
dogs[row].trace_add('write', dog_changed)
entries[row] = ttk.Entry(root, textvariable=dogs[row])
entries[row].grid(row=row, column=0)
root.mainloop()
def dog_changed(*args):
print(args)
if __name__ == '__main__':
main()
</code></pre>
<p>I can see in args that I have the name of the StringVar as a string (e.g. "PY_VAR0"), but is there a more elegant way of getting a reference to which entry has been changed, rather than taking a substring here?</p>
<p>I have tried this lambda function, but of course it only shows the final value of row:</p>
<pre><code> dogs[row].trace_add('write', lambda *args: dog_changed(row, *args))
</code></pre>
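<p>For reference, the usual fix for that is to bind the loop variable as a default argument, so each callback keeps its own value of <code>row</code> instead of the loop's final value (a sketch using the names from the code above):</p>
<pre class="lang-py prettyprint-override"><code>dogs[row].trace_add('write', lambda *args, row=row: dog_changed(row, *args))

def dog_changed(row, *args):
    print(f"Entry {row} changed")
</code></pre>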
|
<python><tkinter><callback>
|
2024-12-18 17:08:48
| 1
| 3,841
|
Psionman
|
79,291,893
| 12,115,498
|
Computing an iterated integral by iteratively applying the 1D trapezoidal rule
|
<p>I have a Python function, called <code>trap_1D</code>, which computes the (approximate) integral of a univariate function f over an interval [a,b] using the composite trapezoidal rule. Now I want to evaluate an iterated integral of the form \int_{a}^{b} \int_{c}^{d} f(x,y) dy dx by iteratively calling <code>trap_1D</code>. Below is <code>trap_1D</code>:</p>
<pre><code>import numpy as np
def trap_1D(func, a, b, N, *args):
#Computes the integral of a univariate function over [a, b] using the composite trapezoidal rule.
h = (b - a) / N #The grid spacing
x = np.linspace(a, b, N + 1) #The sub-division points: x_0, x_1,...,x_N.
y = func(x, *args)
return (np.sum(y) - (y[0] + y[-1]) / 2) * h
</code></pre>
<p>I've tested <code>trap_1D</code> on a few functions and it works. Next, I constructed a function I(x) := \int_{a}^{b} f(x,y) dy.</p>
<pre><code>def create_I(f, a, b, N):
def I(x):
return trap_1D(lambda y: f(x, y), a, b, N) #integrate the function y --> f(x,y)
return I
</code></pre>
<p>I've tested <code>create_I</code> and it works. Now to test out evaluating iterated integrals I'm using f(x,y) = sin(x) * sin(y) as my test function, which I want to integrate over the rectangle [0,pi/2] x [0,pi/2]:</p>
<pre><code>def f(x,y):
return np.sin(x)*np.cos(y)
N = 100
I = create_I(f, 0.0, np.pi/2, N) #The function I() is correct
I2 = trap_1D(I, 0.0, np.pi/2, N) #This gives an error
</code></pre>
<p>The last line gives the following error:</p>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_15880\410820249.py in <module>
26 N = 100
27 I = create_I(f, 0.0, np.pi/2, 100)
---> 28 trap_1D(I, 0.0, np.pi/2, N)
~\AppData\Local\Temp\ipykernel_15880\410820249.py in trap_1D(func, a, b, N, *args)
9 y = func(x, *args)
10
---> 11 return (np.sum(y) - (y[0] + y[-1]) / 2) * h
12
13
IndexError: invalid index to scalar variable.
</code></pre>
<p>I don't understand this error. What is causing the problem here?</p>
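<p>For context, the error happens because <code>trap_1D</code> passes its whole grid array to <code>I</code>, and <code>I(x)</code> then collapses everything into a single scalar, so <code>y[0]</code> fails. A sketch of one way to make the composition work is to evaluate <code>I</code> pointwise (assuming the per-point overhead of <code>np.vectorize</code> is acceptable here):</p>
<pre class="lang-py prettyprint-override"><code>def create_I(f, a, b, N):
    def I_scalar(x):
        # integrate y -&gt; f(x, y) for a single value of x
        return trap_1D(lambda y: f(x, y), a, b, N)
    return np.vectorize(I_scalar)   # now I(array) returns an array of the same shape

I = create_I(f, 0.0, np.pi / 2, N)
I2 = trap_1D(I, 0.0, np.pi / 2, N)   # ~1.0 for f(x, y) = sin(x) * cos(y) on [0, pi/2]^2
</code></pre>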
|
<python><numpy><numerical-methods><numerical-integration>
|
2024-12-18 16:52:04
| 2
| 783
|
Leonidas
|
79,291,863
| 902,570
|
Algorithm for Compact Representation of Tuples Using Cartesian Product
|
<p>I have the following set of tuples:</p>
<pre><code>('red', 'hot', 'big'),
('red', 'hot', 'small'),
('red', 'cold', 'big'),
('blue', 'hot', 'big'),
('blue', 'cold', 'big'),
('green', 'hot', 'big'),
('green', 'cold', 'big')
</code></pre>
<p>I would like to create a compact representation of the above, where a tuple element can be a list of allowed values. The intention is that given a representation like this, one can easily generate the original set by applying the Cartesian product. The result of the transformation might look like this:</p>
<pre><code>('red', ['hot', 'cold'], 'big'),
('red', 'hot', 'small'),
(['blue', 'green'], ['hot', 'cold'], 'big')
</code></pre>
<p>I want the representation to be as compact as possible. Can you suggest an algorithm, preferably a well-known one?</p>
<p>Edit</p>
<p>This is just an example. I'm looking for a general solution where the tuple can be of any length, and the number of alternatives at each position is not limited to 2.</p>
|
<python><algorithm><tuples><minimize>
|
2024-12-18 16:45:00
| 2
| 717
|
depthofreality
|
79,291,770
| 9,557,881
|
Fill pandas columns based on datetime condition
|
<p>Here is the sample code to generate a dataframe.</p>
<pre><code>import pandas as pd
import numpy as np
dates = pd.date_range("20241218", periods=9600,freq='1MIN')
df = pd.DataFrame(np.random.randn(9600, 4), index=dates, columns=list("ABCD"))
</code></pre>
<p>I want to fill all the columns with -1 for times between 1:35 and 1:45 for all the dates.
Similarly, I want to fill all the columns with -2 for the exact time of 1:00 for all the dates.
For all other time values, the columns need to be filled with zeros.
Please suggest the way forward.</p>
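<p>For illustration, a sketch using the <code>DatetimeIndex</code> position helpers (assuming both time windows are inclusive and that building a fresh frame of the same shape is acceptable):</p>
<pre class="lang-py prettyprint-override"><code>out = pd.DataFrame(0.0, index=df.index, columns=df.columns)        # everything starts at zero
out.iloc[df.index.indexer_between_time("01:35", "01:45")] = -1     # 1:35-1:45 on every date
out.iloc[df.index.indexer_at_time("01:00")] = -2                   # exactly 1:00 on every date
</code></pre>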
|
<python><pandas><dataframe><datetime>
|
2024-12-18 16:16:17
| 1
| 676
|
Abhishek Kulkarni
|
79,291,701
| 1,330,237
|
Conda python package installs in wrong place
|
<p>I have created a pure python conda package called <code>ssstack</code> using <code>conda build</code>, but when I install it (from the local build directory) using:</p>
<pre><code>conda create -n <env name> -c <path to local build> ssstack`
</code></pre>
<p>the python package and CLI executables get <strong>put in the wrong place</strong>. Instead of it putting the python package in:</p>
<pre><code><ENV_ROOT>/lib/python3.12/site-packages/ssstack
</code></pre>
<p>it incorrectly puts them in the root of the environment, i.e.:</p>
<pre><code><ENV_ROOT>/site-packages/ssstack
</code></pre>
<p>It also puts the python CLI scripts in the wrong place; instead of it being put in</p>
<pre><code><ENV_ROOT>/bin/ssstack
</code></pre>
<p>It ends up being left in:</p>
<pre><code><ENV_ROOT>/python-scripts/ssstack
</code></pre>
<p>Obviously, neither the python module or the CLI scripts can be found when using the environment.</p>
<p>It's almost like conda doesn't know that it is a <strong>python</strong> package, so doesn't put it under the <code>lib/python3.xx</code> directory.</p>
<p>Oddly, if I create a new environment with just a python installation first, then activate it and install the <code>ssstack</code> package in to the existing environment then <strong>everything gets put in the correct place</strong>, i.e.:</p>
<pre><code>conda create -n ssstack_env python==3.12
conda activate ssstack_env
conda install -c <path to local build> ssstack ## WORKS!
</code></pre>
<p>I can't for the life of me understand why the latter method works correctly, but not the former.</p>
<p>I've spent a lot of time searching for a reason, but have come up with nothing. Any advice will be gratefully recieved!</p>
<p>Here's my <code>meta.yaml</code> recipe for reference:</p>
<pre><code>{% set name = "ssstack" %}
{% set version = "0.4.0" %}
package:
name: "{{ name|lower }}"
version: "{{ version }}"
source:
path: ../../..
build:
noarch: python
number: 1
script: {{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation
requirements:
host:
- pip
- python>=3.8
- setuptools
run:
- conda
- conda-lock
- packaging
- pydantic
- pyyaml
- requests
- tabulate
- tqdm
- typer
</code></pre>
|
<python><conda><conda-build>
|
2024-12-18 15:47:09
| 1
| 3,230
|
ccbunney
|
79,291,557
| 536,262
|
python pyproject.toml - arch dependency solved on install
|
<p>In <code>pyproject.toml</code> we have a optional-dependencies for a windows package:</p>
<pre class="lang-ini prettyprint-override"><code>[project.optional-dependencies]
windows = [
"pywinpty>=2.0.14"
]
</code></pre>
<p>To install:</p>
<pre class="lang-bash prettyprint-override"><code># on windows
pip install .[windows]
# on linux/mac we use the enclosed pty
pip install .
</code></pre>
<p>Is it possible so <code>pip install .</code> does this check automatic?</p>
<p>Or <code>uv pip install .</code> (pywinpty is a rust package)</p>
|
<python><pip><python-packaging><pyproject.toml>
|
2024-12-18 15:05:07
| 1
| 3,731
|
MortenB
|
79,291,519
| 8,458,083
|
How to build a type from another type and make mypy checking work
|
<p>I'm working with Python's type hinting system and I'm trying to create a type alias for a function that's similar to an existing function type, but with one additional argument. Here's what I've attempted:</p>
<pre><code>from typing import Callable
# Original function type
RK_function = Callable[[float, int], int]
# Attempted new function type with an additional int argument
RK_functionBIS = Callable[[*RK_function.__args__[:-1], int], int]
</code></pre>
<p>I expected RK_functionBIS to represent Callable[[float, int, int], int], which it does when runned directly without checking it with mypy. However, when I run mypy for type checking, I get this error:
text</p>
<blockquote>
<p>error: Invalid type alias: expression is not a valid type
[valid-type] Q</p>
</blockquote>
|
<python><python-typing><mypy>
|
2024-12-18 14:51:22
| 2
| 2,017
|
Pierre-olivier Gendraud
|
79,291,512
| 16,611,809
|
How to target Shiny's inputs with CSS selectors?
|
<p>I want to CSS style my Shiny for Python input fields, but it somehow doesn't work. I can however style the card headers. Does anyone know, what I am doing wrong? Or can the ui.input* elements not be styled directly?</p>
<pre><code>from shiny import App, render, ui, reactive
from pathlib import Path
app_ui = ui.page_fillable(
ui.panel_title(
ui.row(
ui.column(12, ui.h1("title1")),
)
),
ui.layout_sidebar(
ui.sidebar(
ui.input_text("input_text1", "input_text1", value=""),
ui.input_selectize("input_selectize1", "input_selectize1", choices=["1", "2"]),
ui.input_numeric("input_numeric1", "input_numeric1", value=4),
ui.input_switch("input_switch1", "input_switch1", value=False),
ui.input_action_button("input_action_button1", "input_action_button1"),
width="350px"
),
ui.layout_columns(
ui.card(
ui.card_header("card_header1"),
ui.output_data_frame("card1"),
full_screen=True
),
col_widths=12
)
),
ui.tags.style(
".card-header { color:white; background:#2A2A2A !important; }",
".input-text { color:red; height:0px; }",
".input-numeric { color:red; height:0px; }")
)
def server(input, output, session):
@reactive.event(input.input_action_button1)
def reactive_function1():
pass
@output
@render.data_frame
def card1():
return reactive_function1()
src_dir = Path(__file__).parent / "src"
app = App(app_ui, server, static_assets=src_dir)
</code></pre>
|
<python><css><py-shiny>
|
2024-12-18 14:48:15
| 1
| 627
|
gernophil
|
79,291,434
| 10,311,645
|
Torch installation: No matching distribution found for torch>=2.0.0
|
<p>I am trying to install version of pytorch as per the requirements.txt file which looks like:</p>
<pre><code># --------- pytorch --------- #
torch>=2.0.0
torchvision>=0.15.0
lightning>=2.0.0
torchmetrics>=0.11.4
</code></pre>
<p>I create a new venv and then run <code>pip install -r requirements.txt</code></p>
<p>It cannot find the correct version of torch giving this error:</p>
<p><code>ERROR: No matching distribution found for torch>=2.0.0</code></p>
<p>What am I doing wrong?</p>
|
<python><pytorch><torch>
|
2024-12-18 14:26:39
| 1
| 313
|
tam63
|
79,291,415
| 2,534,342
|
Convert pythorch pth model to onnx with fixed width and variable height
|
<p>My code uses PyTorch to perform segmentation annotations on PNG images. The input images have a width of 512 pixels or a multiple of this, but the height can range from 400 to 900 pixels. The code, along with the PyTorch model (*.pth file), works as expected.</p>
<p>I am currently attempting to convert my *.pth model to *.onnx. The code itself hasn’t changed much (only modifications related to ONNX, naturally), but the issue I am encountering is with the model conversion.</p>
<p>Here is my code for the model conversion:</p>
<pre class="lang-py prettyprint-override"><code>import onnx
import torch
import torch.nn as nn
import torch.nn.functional as F
# pip install torch onnx
class UNet(nn.Module):
def __init__(self, n_channels, n_classes, bilinear=True):
super().__init__()
self.n_channels = n_channels
self.n_classes = n_classes
self.bilinear = bilinear
self.inc = DoubleConv(n_channels, 64)
self.down1 = Down(64, 128)
self.down2 = Down(128, 256)
self.down3 = Down(256, 512)
factor = 2 if bilinear else 1
self.down4 = Down(512, 1024 // factor)
self.up1 = Up(1024, 512, bilinear)
self.up2 = Up(512, 256, bilinear)
self.up3 = Up(256, 128, bilinear)
self.up4 = Up(128, 64 * factor, bilinear)
self.outc = OutConv(64, n_classes)
def forward(self, x):
x1 = self.inc(x)
x2 = self.down1(x1)
x3 = self.down2(x2)
x4 = self.down3(x3)
x5 = self.down4(x4)
x = self.up1(x5, x4)
x = self.up2(x, x3)
x = self.up3(x, x2)
x = self.up4(x, x1)
logits = self.outc(x)
return logits
class DoubleConv(nn.Module):
"""(convolution => [BN] => ReLU) * 2"""
def __init__(self, in_channels, out_channels, mid_channels=None):
super().__init__()
if not mid_channels:
mid_channels = out_channels
self.double_conv = nn.Sequential(
nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(mid_channels),
nn.ReLU(inplace=True),
nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
)
def forward(self, x):
return self.double_conv(x)
class Down(nn.Module):
"""Downscaling with maxpool then double conv"""
def __init__(self, in_channels, out_channels):
super().__init__()
self.maxpool_conv = nn.Sequential(nn.MaxPool2d(2), DoubleConv(in_channels, out_channels))
def forward(self, x):
return self.maxpool_conv(x)
class Up(nn.Module):
"""Upscaling then double conv"""
def __init__(self, in_channels, out_channels, bilinear=True):
super().__init__()
self.up: nn.Upsample | nn.ConvTranspose2d
# if bilinear, use the normal convolutions to reduce the number of channels
if bilinear:
self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True)
self.conv = DoubleConv(in_channels, out_channels // 2, in_channels // 2)
else:
self.up = nn.ConvTranspose2d(in_channels, in_channels // 2, kernel_size=2, stride=2)
self.conv = DoubleConv(in_channels, out_channels)
def forward(self, x1, x2):
x1 = self.up(x1)
# input is CHW
diffY = torch.tensor([x2.size()[2] - x1.size()[2]])
diffX = torch.tensor([x2.size()[3] - x1.size()[3]])
x1 = F.pad(
x1,
[
torch.div(diffX, 2, rounding_mode="floor"),
diffX - torch.div(diffX, 2, rounding_mode="floor"),
torch.div(diffY, 2, rounding_mode="floor"),
diffY - torch.div(diffY, 2, rounding_mode="floor"),
],
)
# if you have padding issues, see
# https://github.com/HaiyongJiang/U-Net-Pytorch-Unstructured-Buggy/commit/0e854509c2cea854e247a9c615f175f76fbb2e3a
# https://github.com/xiaopeng-liao/Pytorch-UNet/commit/8ebac70e633bac59fc22bb5195e513d5832fb3bd
x = torch.cat([x2, x1], dim=1)
return self.conv(x)
class OutConv(nn.Module):
def __init__(self, in_channels, out_channels):
super().__init__()
self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
def forward(self, x):
return self.conv(x)
def convert_pytorch_to_onnx(pytorch_model_path, onnx_model_path):
# Load the PyTorch model
model = UNet(n_channels=1, n_classes=1)
model.load_state_dict(torch.load(pytorch_model_path, map_location="cpu"))
model.eval()
# Create dummy input with dynamic size
dummy_input = torch.randn(1, 1, 512, 512) # Height [400 to 900], Width fixed at 512
# Export the model
torch.onnx.export(
model, # model being run
dummy_input, # model input (or a tuple for multiple inputs)
onnx_model_path, # where to save the model
export_params=True, # store the trained parameter weights inside the model file
opset_version=20, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=["input"], # the model's input names
output_names=["output"], # the model's output names
dynamic_axes={
"input": {0: "batch_size", 2: "height", 3: "width"}, # variable length axes
"output": {0: "batch_size", 2: "height", 3: "width"},
},
)
# Verify the model
onnx_model = onnx.load(onnx_model_path)
onnx.checker.check_model(onnx_model)
print(f"Model {pytorch_model_path} converted to {onnx_model_path}")
# List of models to convert
models = [
"models/case1.pth",
"models/case2.pth",
]
# Convert each model
for model_path in models:
onnx_path = model_path.replace(".pth", ".onnx")
convert_pytorch_to_onnx(model_path, onnx_path)
</code></pre>
<p>Using the ONNX models created with:</p>
<pre class="lang-py prettyprint-override"><code>dummy_input = torch.randn(1, 1, 512, 512) # Height [400 to 900], Width fixed at 512
</code></pre>
<p>Inference on images with heights between 400 and 600 pixels seems to work similarly to my PyTorch code. However, images with a height of 800 pixels produce incorrect results when they work at all.</p>
<p>Conversely, if I convert the model using:</p>
<pre class="lang-py prettyprint-override"><code>dummy_input = torch.randn(1, 1, 885, 512)
</code></pre>
<p>An image with a height of 885 pixels works perfectly.</p>
<p>I’m not an expert in PyTorch or ONNX. For now, the only “workable” solution I’ve found is to use 512x512 in the <code>dummy_input</code> and add a few lines to my Python code to crop the top and bottom of input images if they are taller than 512 pixels. However, the results are not identical to those of the original PyTorch code.</p>
<p>I’m unsure exactly what PyTorch does with variable-height inputs — I didn’t build the model — but from inspecting the UNet class, it’s clear that it is not cropping the input. It’s more likely “reducing” the height until it fits the model’s structure.</p>
<p>If that’s the case, how can I replicate this behaviour with an ONNX model?</p>
|
<python><pytorch><onnx><onnxruntime>
|
2024-12-18 14:22:31
| 1
| 612
|
alanwilter
|
79,291,388
| 1,176,347
|
How do you add min and max values to plotly.graph_objects.Box if I'm passing Precomputed Quartiles?
|
<p>I'm using code from the example official documentation <a href="https://plotly.com/python/box-plots/" rel="nofollow noreferrer">https://plotly.com/python/box-plots/</a> -> <code>Box Plot With Precomputed Quartiles</code>. In such cases, plotly using q1 for mix and q3 for max.</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Box(q1=[ 1, 2, 3 ], median=[ 4, 5, 6 ],
q3=[ 7, 8, 9 ], lowerfence=[-1, 0, 1],
upperfence=[7, 8, 9], mean=[ 2.2, 2.8, 3.2 ],
sd=[ 0.2, 0.4, 0.6 ], notchspan=[ 0.2, 0.4, 0.6 ], name="Precompiled Quartiles"))
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/fZY4PP6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fZY4PP6t.png" alt="enter image description here" /></a>
I already have calculated min and max and want to use them.</p>
<p>This is how the plot looks when I'm passing the data frame with all records. Plotly calculates itself q1,q3,max, min - everything is good except for the performance. I assume this is because values are displayed on the plot, and it makes it very heavy to render.
<a href="https://i.sstatic.net/lGjrxzD9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGjrxzD9.png" alt="enter image description here" /></a></p>
<p>I like to calc aggregates first and use them if possible. So, I've calculated q1,q3,max, min. You can see that max is greater than experience for the BrightData group.
<a href="https://i.sstatic.net/4DBRV0Lj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4DBRV0Lj.png" alt="enter image description here" /></a></p>
<p>My expectation is that experience and max can be displayed in one plot and have different values here.</p>
|
<python><plotly><jupyter-lab>
|
2024-12-18 14:11:10
| 1
| 543
|
greggyNapalm
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.