| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
75,123,191
| 132,042
|
Pandas' Series.plot() stacks disjoint segments instead of adjoining them
|
<p>I'm following the video from <a href="https://www.coursera.org/learn/python-statistics-financial-analysis/lecture/mfHGK/1-3-basics-of-dataframe" rel="nofollow noreferrer">2018 or 2019 on Coursera</a> that explains the basics of how to use <code>pandas</code>. The examples are based on the Facebook stock data that one can download in CSV format (e.g., from Yahoo Finance).</p>
<p>I came across a difference in how the <code>plot()</code> function works, and I can't figure out how to make it do what the video says it should be doing. According to them, and according to their screenshot (see below), when you plot multiple series for different x-axis values, you should end up with a continuous line chart, with the different segments colored differently.</p>
<p>Here's what it's supposed to look like:</p>
<img src="https://i.sstatic.net/L6pSG.png" height="300">
<p>However, my version of <code>pandas</code> (presumably, 1.5.2) plots the separate segments on top of each other, even though that makes no sense. Is this a bug or a new feature introduced into <code>pandas</code> since 2018? Is there some argument to <code>plot()</code> that controls this behavior?</p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
fb = pd.read_csv("C:\\Users\\me\\Downloads\\META.csv", index_col = 'Date')
fb.loc['2022-01-10':'2022-06-01', 'Close'].plot()
fb.loc['2022-06-02':, 'Close'].plot()
</code></pre>
<p>This is what I see plotted in my notebook (note the overlapping labels on the x-axis):</p>
<img src="https://i.sstatic.net/f9bBz.png" height="400">
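<p>A minimal sketch of what may be going on, assuming the CSV's <code>Date</code> column is being read as plain strings: with a non-datetime index, each <code>.plot()</code> call uses its own 0..n positional x values (which would also explain the overlapping tick labels), whereas a parsed <code>DatetimeIndex</code> gives both segments a shared date axis so they adjoin:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

# parse_dates=True turns the 'Date' index into a DatetimeIndex
fb = pd.read_csv("META.csv", index_col="Date", parse_dates=True)

ax = fb.loc["2022-01-10":"2022-06-01", "Close"].plot()
fb.loc["2022-06-02":, "Close"].plot(ax=ax)  # same axes, second color, adjoining dates
</code></pre>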
|
<python><pandas><windows><anaconda>
|
2023-01-15 05:49:55
| 2
| 1,313
|
Tatiana Racheva
|
75,122,995
| 1,146,785
|
Can I run a shell task in VSCode that uses a specific shell for a Python virtualenv?
|
<p>I'm writing some python to render stuff that I tweak and run a lot, and that runs inside a virtual env. I would like a keyboard command to run a bash script (that launches python) inside the known terminal and virtual env.</p>
<p>I played a bit with setting up a shell script and a custom task, but entering the virtual env is always a bit tricky.</p>
<p>I don't need a debugger or anything complicated, just a way to run the python code and attach a keystroke to it.</p>
<p><a href="https://code.visualstudio.com/docs/editor/debugging#_launch-configurations" rel="nofollow noreferrer">https://code.visualstudio.com/docs/editor/debugging#_launch-configurations</a>
<a href="https://code.visualstudio.com/docs/python/jupyter-support-py" rel="nofollow noreferrer">https://code.visualstudio.com/docs/python/jupyter-support-py</a></p>
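<p>A minimal sketch of what such a setup could look like, assuming a venv at <code>.venv</code> in the workspace and a script called <code>render.py</code> (both names are placeholders): pointing the task's command directly at the venv's interpreter avoids having to activate the environment at all.</p>
<pre class="lang-json prettyprint-override"><code>// .vscode/tasks.json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "run render script",
      "type": "shell",
      // calling the venv's python directly sidesteps activation entirely
      "command": "${workspaceFolder}/.venv/bin/python render.py",
      "problemMatcher": []
    }
  ]
}
</code></pre>
<p>A keystroke can then be attached in <code>keybindings.json</code>:</p>
<pre class="lang-json prettyprint-override"><code>{
  "key": "ctrl+shift+r",
  "command": "workbench.action.tasks.runTask",
  "args": "run render script"
}
</code></pre>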
|
<python><visual-studio-code><virtualenv>
|
2023-01-15 04:47:23
| 2
| 12,455
|
dcsan
|
75,122,991
| 10,411,973
|
Python ping by reading a txt file containing a list of IP addresses with a string or name next to each
|
<p>I'm testing a simple script that pings a few servers listed in a text file, iplist.txt. The script works if the txt file contains only IP addresses. Now I've added a hostname next to each IP address in iplist.txt, and the ping fails.</p>
<pre><code>Original iplist.txt, only IP addresses:
192.168.1.1
192.168.1.2

Updated iplist.txt, with a name string:
192.168.1.1 server1
192.168.1.2 server2
</code></pre>
<p>I think the issue is the strip part:</p>
<pre><code>import os
import subprocess

def iplist():
    # Grab list of IP from the file
    pingList = open("iplist.txt", "r")
    for i in pingList:
        ip = i[-15:]
        ip = ip.strip(' ')
        ip = ip.strip('\n')
        with open(os.devnull, 'w') as DEVNULL:
            try:
                subprocess.check_call(
                    ['ping', '-w', '1', ip],
                    stdout=DEVNULL,
                    stderr=DEVNULL
                )
            except subprocess.CalledProcessError:
                # ping returned non-zero (host unreachable or bad argument)
                print(ip, 'ping failed')
</code></pre>
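<p>For what it's worth, a minimal sketch of extracting the IP regardless of whether a name follows it, assuming the fields are whitespace-separated: <code>str.split()</code> makes the slicing by character count (<code>i[-15:]</code>) unnecessary, because the first token is always the IP.</p>
<pre><code>with open("iplist.txt") as ping_list:
    for line in ping_list:
        line = line.strip()
        if not line:
            continue          # skip blank lines
        ip = line.split()[0]  # first whitespace-separated field, works for both formats
        print(ip)
</code></pre>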
|
<python><ping><txt>
|
2023-01-15 04:46:36
| 1
| 565
|
chenoi
|
75,122,916
| 1,436,800
|
How to make a customized detail view using ModelViewSet?
|
<p>I am new to Django. I have a model:</p>
<pre><code>class Class(models.Model):
    name = models.CharField(max_length=128)
    students = models.ManyToManyField("Student")

    def __str__(self) -> str:
        return self.name
</code></pre>
<p>Now I want to create an API that displays the students in a particular class (the detail view), passing the name of the class as a parameter, using <code>ModelViewSet</code>.
Currently, I have the following viewset written:</p>
<pre><code>class ClassViewSet(ModelViewSet):
    serializer_class = serializers.ClassSerializer
    queryset = models.Class.objects.all()
</code></pre>
<p>How to do that?</p>
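<p>A minimal sketch of one way to do this, assuming a <code>Student</code> model and the <code>models</code>/<code>serializers</code> modules implied by the question: DRF's <code>lookup_field</code> makes the router resolve the detail view by <code>name</code> instead of <code>pk</code>, and a nested serializer exposes the students.</p>
<pre><code>from rest_framework import serializers
from rest_framework.viewsets import ModelViewSet

from . import models  # the app's models module, as in the question

class StudentSerializer(serializers.ModelSerializer):
    class Meta:
        model = models.Student  # assumed from the ManyToManyField("Student")
        fields = "__all__"

class ClassSerializer(serializers.ModelSerializer):
    # nested serializer so the detail response lists the students
    students = StudentSerializer(many=True, read_only=True)

    class Meta:
        model = models.Class
        fields = ("name", "students")

class ClassViewSet(ModelViewSet):
    serializer_class = ClassSerializer
    queryset = models.Class.objects.all()
    lookup_field = "name"  # resolve the detail view by class name instead of pk
</code></pre>
<p>With a default router registration like <code>router.register("classes", ClassViewSet)</code>, <code>GET /classes/SomeClassName/</code> would then return the class with its students.</p>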
|
<python><django><django-models><django-rest-framework><django-views>
|
2023-01-15 04:24:45
| 1
| 315
|
Waleed Farrukh
|
75,122,915
| 2,446,702
|
Python Pandas - how to write a string to a specific cell via index without using the dataframe
|
<p>I am trying to use pandas to write a value to a specific cell via index (1,1) in an xlsx file.
Let's say I currently have an xlsx file:</p>
<pre><code>A B C
1 2 3
</code></pre>
<p>How can I update 2 to another value without using the whole dataframe, please?
For the purpose of what I'm working on, I would like to specify the value as a string via index (1,1).</p>
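<p>A minimal sketch of writing a single cell without loading anything into a DataFrame, using openpyxl directly (the filename is a placeholder). Note that openpyxl is 1-based and the header occupies row 1, so the value 2 from the example sits at worksheet row 2, column 2:</p>
<pre class="lang-py prettyprint-override"><code>from openpyxl import load_workbook

wb = load_workbook("file.xlsx")
ws = wb.active
ws.cell(row=2, column=2, value="new value")  # overwrite the 2 with a string
wb.save("file.xlsx")
</code></pre>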
|
<python><python-3.x><pandas><xlsx>
|
2023-01-15 04:24:36
| 1
| 3,255
|
speedyrazor
|
75,122,794
| 12,224,591
|
Provide Specific Face Colors to trisurf? (MatPlotLib, PY 3.10)
|
<p>I'm attempting to find a way to provide different colors to the <code>trisurf</code> function, called on a <code>scatter</code> plot, in <code>Python 3.10</code> using the <code>MatPlotLib</code> module.</p>
<p>Let's say I have the following simple plot script:</p>
<pre><code>import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
X = [1, 3, 2, 4]
Y = [1, 1, 2, 2]
Z = [2, 2, 2, 2]
ax.scatter(X, Y, Z, s = 0)
ax.plot_trisurf(X, Y, Z)
ax.set_xlim(0, 5)
ax.set_ylim(0, 3)
ax.set_zlim(1.9, 2.1)
ax.set_xlabel('X Axis')
ax.set_ylabel('Y Axis')
ax.set_zlabel('Z Axis')
plt.show()
</code></pre>
<p>The script above generates 2 faces on a single plot, as such:</p>
<p><a href="https://i.sstatic.net/jQhg6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jQhg6.png" alt="enter image description here" /></a></p>
<p>As far as I understand, one is able to supply either a singular RGB color value to be applied to all generated faces, or a color map.</p>
<p>Here's the result of the script above, with the addition of the <code>color = [1, 0, 0]</code> argument to the <code>plot_trisurf</code> call:</p>
<p><a href="https://i.sstatic.net/PoYmD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PoYmD.png" alt="enter image description here" /></a></p>
<p>Now all the generated faces are painted in the single provided color.</p>
<p>I was wondering, however, whether it was possible to supply a specific color for each generated face, in a single plot? So for instance, in my example, if I wanted for one face to be red (<code>[1, 0, 0]</code>), and one to be blue (<code>[0, 0, 1]</code>)?</p>
<p>A color argument of <code>color = [[1, 0, 0], [0 ,0 ,1]]</code> (where each sub-list of the color list is supposed to be the color of each one of the faces in my example), gives an error of <code>RGBA sequence should have length 3 or 4</code>.</p>
<p>As far as colormaps go, I understand that you can generate custom ones.
In my example I could have something akin to:</p>
<pre><code>import matplotlib as mpl
from matplotlib.colors import LinearSegmentedColormap

colors = [(1, 0, 0), (0, 0, 1)]
cm = LinearSegmentedColormap.from_list("Custom", colors)
sm = mpl.cm.ScalarMappable(cmap = cm)
</code></pre>
<p>Which would give me the following colormap:</p>
<p><a href="https://i.sstatic.net/UHxewt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UHxewt.png" alt="enter image description here" /></a></p>
<p>However, I'm quite unsure of how to reference the color index of the colormap. It doesn't seem like I can provide just a list of indices. This is all the more confusing to me, as <a href="https://matplotlib.org/stable/api/_as_gen/mpl_toolkits.mplot3d.axes3d.Axes3D.html#mpl_toolkits.mplot3d.axes3d.Axes3D.plot_trisurf" rel="nofollow noreferrer">the documentation of the plot_trisurf function</a> doesn't provide much detail for the possible forms of the <code>color</code> argument. It just states <code>Color of the surface patches</code>.</p>
<p>Is it even possible to provide explicit face colors as a list, to the <code>plot_trisurf</code>? Is there a different & better way to achieve this?</p>
<p>Thanks for reading my post, and guidance is appreciated!</p>
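<p>One avenue that may be worth trying (a sketch, not a confirmed guarantee of the API): <code>plot_trisurf</code> returns a <code>Poly3DCollection</code>, and setting its face colors after the fact, with one RGB(A) tuple per triangle, is a commonly suggested workaround. The face order follows the triangulation of the points:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
X = [1, 3, 2, 4]
Y = [1, 1, 2, 2]
Z = [2, 2, 2, 2]

surf = ax.plot_trisurf(X, Y, Z)             # Poly3DCollection, one polygon per triangle
surf.set_facecolor([(1, 0, 0), (0, 0, 1)])  # one color per face: red, blue
plt.show()
</code></pre>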
|
<python><matplotlib>
|
2023-01-15 03:42:44
| 1
| 705
|
Runsva
|
75,122,712
| 422,348
|
How can I run a simple twisted client on top of asyncio?
|
<p>I have the following client code that I borrowed from twisted's docs:</p>
<p><a href="https://docs.twistedmatrix.com/en/twisted-20.3.0/web/howto/client.html#the-agent" rel="nofollow noreferrer">https://docs.twistedmatrix.com/en/twisted-20.3.0/web/howto/client.html#the-agent</a></p>
<p>And I am trying to run it with asyncio since I am building an asyncio project that requires compatibility with twisted. Here is the code:</p>
<pre><code>import asyncio

from twisted.internet import asyncioreactor
from twisted.web.client import Agent
from twisted.web.http_headers import Headers

asyncioreactor.install()

async def request():
    agent = Agent(asyncioreactor.AsyncioSelectorReactor)
    d = agent.request(
        b'GET',
        b'http://httpbin.com/anything',
        Headers({'User-Agent': ['Twisted Web Client Example']}),
        None)

    def cbResponse(ignored):
        print('Response received')
    d.addCallback(cbResponse)

    def cbShutdown(ignored):
        asyncioreactor.AsyncioSelectorReactor.stop()
    d.addBoth(cbShutdown)

    print("This is where it always get stuck")
    res = await d.asFuture(asyncio.get_event_loop())
    print("SUCCESSS!!!!")

if __name__ == "__main__":
    asyncio.run(request())
</code></pre>
<p>I saved this as a <code>request.py</code> file and ran it with <code>python request.py</code>, but it always hangs when reaching these lines:</p>
<pre><code> print("This is where it always get stuck")
res = await d.asFuture(asyncio.get_event_loop())
</code></pre>
<p>Is it possible to run this with asyncio? I am not too familiar with twisted and my ultimate goal is to be able to run a twisted client with asyncio.</p>
|
<python><python-asyncio><twisted><twisted.internet><twisted.client>
|
2023-01-15 03:16:50
| 1
| 2,482
|
Ruben Quinones
|
75,122,563
| 5,924,264
|
Assigning one column of dataframe to column of another dataframe with disparate indices?
|
<p>I have dataframes <code>first</code> and <code>second</code> of the same length, where the first one's index is in increments of 15 and the second's is in increments of 1. I would like to assign one column of <code>first</code> to another of <code>second</code>.</p>
<p>e.g., something like below</p>
<pre><code>import pandas as pd
first = pd.DataFrame({"index": [0, 15, 30], "value": [2.2, 2.2, 2.2]})
second = pd.DataFrame({"value": [3.2, 3.2, 3.2]})
first = first.set_index("index")
first.value = second.value
</code></pre>
<p>However, the indices are disparate, so the above gives <code>NaN</code>s for <code>first.value</code> after the first row. I think one approach is to call <code>reset_index()</code> prior to assignment, but I believe this is a costly op? Is there an approach that doesn't involve resetting the index?</p>
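<p>A minimal sketch of one index-free approach: <code>.to_numpy()</code> (or <code>.values</code>) strips the index from the right-hand side, so the assignment becomes purely positional and no label alignment (and hence no <code>NaN</code> filling) happens:</p>
<pre><code>import pandas as pd

first = pd.DataFrame({"index": [0, 15, 30], "value": [2.2, 2.2, 2.2]}).set_index("index")
second = pd.DataFrame({"value": [3.2, 3.2, 3.2]})

# .to_numpy() discards second's index, so values are copied by position
first["value"] = second["value"].to_numpy()
print(first)
</code></pre>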
|
<python><pandas><dataframe>
|
2023-01-15 02:24:10
| 1
| 2,502
|
roulette01
|
75,122,558
| 1,348,878
|
How to Import Integer as Numeric String and not in Scientific Notation
|
<p>Trying to import a JSON file which is a list of dictionaries, each of which contains two epoch time values (start time and end time). Now, if all instances include both times, no problem--pandas' <code>json_normalize</code> will load all values correctly, as they appear in the source data. But because some end-time values are missing altogether, Python needs to fill in a <code>NaN</code>, and so the entire column of the df apparently changes type, AND, apparently because of that (I don't know), all values that DO exist are changed to exponential notation, rendering the data useless.</p>
<p>Here's a typical JSON file:</p>
<pre><code>[ {
"id": "U1Q6MDpERVNLVE9Q",
"A": {
"ProcessName": "A3463453",
"SubProcess": "M1",
"Machine": "46"
},
"B": {
"user": "a12",
"polhode": "343282"
},
"C": {
"rotorState": "m-mode",
"startTime": 1671540600000,
"endTime": 1672963068453,
"ProcessElapsedMs": 6142877
},
"D": "fb6a9154-44a2-3d60-b978-f1d2ad1a68ff"
}, {
"id": "QVNUOjA6REVTS1RP",
"A": {
"ProcessName": "A3465453",
"SubProcess": "M1",
"Machine": "47"
},
"B": {
"user": "a12",
"polhode": "343282"
},
"C": {
"rotorState": "f-mode",
"startTime": 1671720693000,
"ProcessElapsedMs": 71973000
},
"D": "28e160c9-d954-35d7-a077-70fc70711baf"
}, {
"id": "NUOjA6REVTS1RPUA",
"A": {
"ProcessName": "A3465453",
"SubProcess": "M3",
"Machine": "48"
},
"B": {
"user": "a12",
"polhode": "343282"
},
"C": {
"rotorState": "m-mode",
"startTime": 1673000200000,
"endTime": 1673001028516,
"ProcessElapsedMs": 10160506
},
"D": "ed7077f2-b64c-3944-a0c3-9f0612826c85"
}, {
"id": "U1Q6MDpERVNLVE9Q",
"A": {
"ProcessName": "A3463853",
"SubProcess": "M3",
"Machine": "49"
},
"B": {
"user": "a12",
"polhode": "343282"
},
"C": {
"rotorState": "m-mode",
"startTime": 1673006529000,
"endTime": 1673001028516,
"ProcessElapsedMs": 3832128
},
"D": "0d671793-9679-3e72-9862-f31ad75cfd89"
}, {
"id": "zMzg5OkRFU0tUT1A",
"A": {
"ProcessName": "A3476553",
"SubProcess": "M18",
"Machine": "31"
},
"B": {
"user": "a12",
"polhode": "343282"
},
"C": {
"rotorState": "m-mode",
"startTime": 1671758829000,
"endTime": 1672916208140,
"ProcessElapsedMs": 3832128
},
"D": "1ab25dec-c7d8-3ea8-8dbf-c7c48beaa65a"
}
]
</code></pre>
<p>and, using this code</p>
<pre><code>json_file = open(full_json_path)
json_data = json.loads(json_file.read())
json_df = pd.json_normalize(json_data, max_level=1)
insert_df = json_df[["A.ProcessName", "A.SubProcess", "C.rotorState", "C.startTime", "C.endTime", "C.ProcessElapsedMs"]]
insert_df.columns=["ProcessName", "SubProcess", "GState", "StartEpoch", "EndEpoch", "ProcessElapsedMs"]
print(insert_df.to_string())
</code></pre>
<p>I get this result:</p>
<pre><code>  A.ProcessName A.SubProcess C.rotorState    C.startTime     C.endTime  C.ProcessElapsedMs
0      A3463453           M1       m-mode  1671540600000  1.672963e+12             6142877
1      A3465453           M1       f-mode  1671720693000           NaN            71973000
2      A3465453           M3       m-mode  1673000200000  1.673001e+12            10160506
3      A3463853           M3       m-mode  1673006529000  1.673001e+12             3832128
4      A3476553          M18       m-mode  1671758829000  1.672916e+12             3832128
</code></pre>
<p>On the other hand, if the data looks like this, no missing EndEpoch values:</p>
<pre><code>  ProcessName SubProcess  GState     StartEpoch       EndEpoch  ProcessElapsedMs
0    A3463453         M1  m-mode  1671540600000  1672963068453           6142877
1    A3465453         M1  f-mode  1671720693000  1672963068453          71973000
2    A3465453         M3  m-mode  1673000200000  1673001028516          10160506
3    A3463853         M3  m-mode  1673006529000  1673001028516           3832128
4    A3476553        M18  m-mode  1671758829000  1672916208140           3832128
</code></pre>
<p>then the result is what I need to have:</p>
<pre><code>  ProcessName SubProcess  GState     StartEpoch       EndEpoch  ProcessElapsedMs
0    A3463453         M1  m-mode  1671540600000  1672963068453           6142877
1    A3465453         M1  f-mode  1671720693000  1672963068453          71973000
2    A3465453         M3  m-mode  1673000200000  1673001028516          10160506
3    A3463853         M3  m-mode  1673006529000  1673001028516           3832128
4    A3476553        M18  m-mode  1671758829000  1672916208140           3832128
</code></pre>
<p>What have I done?</p>
<p>Well, explicitly creating the dataframe with an integer data type doesn't work, because pandas will complain when there's a NaN value that can't be loaded into an integer column.
Trying to replace the NaN with, say, a '0' doesn't work either, because that has no impact at all on the exponential-notation values that are already in the data frame.</p>
<p>I'm totally new to Python, and don't care what library function method I use--I just want my data to not be goofed up. I have to think this is a problem handled routinely, I just don't know what smart questions to ask.</p>
<p>On an unrelated note, I'd also like to be able to use something similar to record_path, if you will, to tell the json.normalize to totally ignore/bypass one or more entire dictionaries--such as "B:" in my example data, so I don't have to import it at all. If you can point me in a direction that would be very helpful.</p>
<p>My apologies for not using correct terminology wherever that may have happened.</p>
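<p>For what it's worth, a minimal sketch of the usual fix, assuming the column names from the code above: pandas' nullable integer dtype <code>Int64</code> (capital I) can hold missing values as <code>&lt;NA&gt;</code> without forcing the column to float, which is what produces the exponential display:</p>
<pre><code>import pandas as pd

json_df = pd.json_normalize(json_data, max_level=1)
# Int64 (capital I) is pandas' nullable integer dtype: missing entries stay <NA>,
# present entries stay exact integers instead of being cast to float
json_df["C.endTime"] = json_df["C.endTime"].astype("Int64")

# for the side question: json_normalize has no "exclude" option,
# but the unwanted "B" columns can simply be dropped afterwards
json_df = json_df.drop(columns=[c for c in json_df.columns if c.startswith("B.")])
</code></pre>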
|
<python><json><pandas>
|
2023-01-15 02:22:21
| 1
| 517
|
Kirk Fleming
|
75,122,503
| 3,591,044
|
Remove number patterns from string
|
<p>I have conversations that look as follows:</p>
<pre><code>s = "1) Person Alpha:\nHello, how are you doing?\n\n1) Human:\nGreat, thank you.\n\n2) Person Alpha:\nHow is the weather?\n\n2) Human:\nThe weather is good."
</code></pre>
<p>which renders as:</p>
<pre><code>1) Person Alpha:
Hello, how are you doing?

1) Human:
Great, thank you.

2) Person Alpha:
How is the weather?

2) Human:
The weather is good.
</code></pre>
<p>I would like to remove the enumeration at the beginning to get the following result:</p>
<pre><code>s = "Person Alpha:\nHello, how are you doing?\n\nHuman:\nGreat, thank you.\n\nPerson Alpha:\nHow is the weather?\n\nHuman:\nThe weather is good."
</code></pre>
<p>which renders as:</p>
<pre><code>Person Alpha:
Hello, how are you doing?

Human:
Great, thank you.

Person Alpha:
How is the weather?

Human:
The weather is good.
</code></pre>
<p>My idea is to search for 1), 2), 3),... in the text and replace it with an empty string. This might work but is inefficient (and can be a problem if e.g. 1) appears in the text of the conversation).</p>
<p>Is there a better / more elegant way to do this?</p>
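<p>A minimal sketch of doing this with a single anchored regex: with <code>re.MULTILINE</code>, <code>^</code> matches at the start of every line, so an enumeration like <code>1)</code> appearing in the middle of a sentence is left untouched:</p>
<pre><code>import re

s = "1) Person Alpha:\nHello, how are you doing?\n\n1) Human:\nGreat, thank you.\n\n2) Person Alpha:\nHow is the weather?\n\n2) Human:\nThe weather is good."

# remove "<digits>) " only at line starts
cleaned = re.sub(r"^\d+\)\s*", "", s, flags=re.MULTILINE)
print(cleaned)
</code></pre>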
|
<python><python-3.x><string><replace>
|
2023-01-15 02:04:30
| 5
| 891
|
BlackHawk
|
75,122,487
| 4,155,976
|
Graphviz running on SageMaker notebook instance but not SageMaker Studio
|
<p>I'm running a python script with PyTorch/Graphviz. It executes in a SageMaker notebook instance, but not in SageMaker Studio.</p>
<p>It appears the notebook instance with kernel <strong>conda_pytorch_p39</strong> already contains an installation of Graphviz so the script just works as is and I get my Graphviz png.</p>
<p>When using SageMaker Studio with the kernel <strong>PyTorch 1.8 Python 3.6 GPU</strong>, it seems to be a lot more complicated. Installing only torchviz didn't work, so I tried installing graphviz via <code>conda install -c fastchan python-graphviz</code>.</p>
<p>I know that on Windows, Linux, and macOS the graphviz executables need to be on the path, but I didn't need to provide that in my SageMaker notebook instance.</p>
<p><strong>SageMaker Studio Notebook</strong>
<em>(SageMaker kernel: PyTorch 1.8 Python 3.6 GPU)</em></p>
<p>The notebook I am debugging.</p>
<pre><code>#Torchviz install
%pip install torchviz
[out]:
Installing collected packages: torchviz
Successfully installed torchviz-0.0.2
location: /opt/conda/lib/python3.6/site-packages/torchviz
# Tried installing python-graphviz
%conda install -c fastchan python-graphviz
[out]:
environment location: /opt/conda
added / updated specs:
- python-graphviz
Downloading and Extracting Packages
graphviz-2.42.3
python-graphviz-0.16
from whych import whych
whych("graphviz")
[out]:
Python executable: /opt/conda/bin/python
Module "graphviz" found at location: /opt/conda/lib/python3.6/site-packages/graphviz
#However:
%cd /opt/conda/lib/python3.6/site-packages/graphviz
[out]:
No such file or directory
#It cannot find python3.6 even though the kernel is PyTorch 1.8 Python 3.6 GPU
# For testing only
import graphviz as gv
[out]:
AttributeError: module 'graphviz.backend' has no attribute 'ENCODING'
from torchviz import make_dot
make_dot(loss_tensor)
#Attempted fix:
import os
from torchviz import make_dot
os.environ['PATH'] += os.pathsep + '/opt/conda/lib/python3.6/site-packages/graphviz'
make_dot(loss_tensor)
[out]:
AttributeError: module 'graphviz.backend' has no attribute 'ENCODING'
#PATH
'/opt/amazon/openmpi/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/miniconda3/condabin:/tmp/anaconda3/condabin:/tmp/miniconda2/condabin:/tmp/anaconda2/condabin'
</code></pre>
|
<python><pytorch><graphviz><amazon-sagemaker><amazon-sagemaker-studio>
|
2023-01-15 01:59:04
| 1
| 12,017
|
Edison
|
75,122,437
| 19,094,667
|
Finding all positions of an object in an image
|
<p>My goal is to find the locations of a specific image on another PNG image, using Python. Take this example:</p>
<p><a href="https://i.sstatic.net/LctMn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LctMn.png" alt="subimage" /></a></p>
<p><a href="https://i.sstatic.net/BkuDj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BkuDj.png" alt="image with dots" /></a></p>
<p>I want to find the coordinates of all dots in the image. The image of the dots is known. Since it's a PNG without a background color, it would of course be even better if the spots could be found without needing the image of the dots at all (only if that's possible, of course).</p>
<p>At the moment I can only find one spot of one dot, and unfortunately not a random one, but always the same one, determined by the order of the pixels.</p>
<p><a href="https://i.sstatic.net/Ex2x2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ex2x2.png" alt="enter image description here" /></a></p>
<p>My working code (see the image above for when a spot is found):
I'm looking for that particular point, using the image of the point (the subimage), in the image with points.</p>
<pre><code>import base64
from PIL import Image, ImageDraw

def get_coordinates(canvas, image, _browser):
    canvas_base64 = _browser.execute_script("return arguments[0].toDataURL('image/png').substring(21);", canvas)
    canvas_png = base64.b64decode(canvas_base64)
    with open(r"canvas.png", 'wb') as f:
        f.write(canvas_png)
    canvas_seats = Image.open('canvas.png')
    i_size = canvas_seats.size
    seat = Image.open(image)
    w_size = seat.size
    x0, y0 = w_size[0] // 2, w_size[1] // 2
    pixel = seat.getpixel((x0, y0))[:-1]
    best = (1000, 0, 0)
    for x in range(i_size[0]):
        for y in range(i_size[1]):
            i_pixel = canvas_seats.getpixel((x, y))
            d = diff(i_pixel, pixel)
            if d < best[0]:
                best = (d, x, y)
    draw = ImageDraw.Draw(canvas_seats)
    x, y = best[1:]
    draw.rectangle((x - x0, y - y0, x + x0, y + y0), outline='red')
    draw.ellipse((x - 2, y - 2, x + 2, y + 2), fill='blue', outline='blue')
    canvas_seats.save('out.png')
    return [x, y]

def diff(a, b):
    return sum((a - b) ** 2 for a, b in zip(a, b))
</code></pre>
<p>How would I go about finding the points in the image? Ideally all of them, otherwise I would be satisfied with just finding a specific one but placing it randomly and not always the same place with every run.</p>
<p>Thanks!</p>
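<p>A minimal sketch of finding <em>all</em> occurrences at once with OpenCV template matching, assuming the filenames <code>canvas.png</code> and <code>dot.png</code> (placeholders): every position whose normalized correlation exceeds a threshold is reported, so nearby duplicate hits around each true dot may still need to be merged:</p>
<pre><code>import cv2
import numpy as np

img = cv2.imread("canvas.png")   # the image with dots
templ = cv2.imread("dot.png")    # the known dot image (placeholder filename)

res = cv2.matchTemplate(img, templ, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)    # every location scoring above the threshold

h, w = templ.shape[:2]
# res coordinates are the template's top-left corner; shift to the dot centers
centers = [(int(x) + w // 2, int(y) + h // 2) for x, y in zip(xs, ys)]
print(centers)
</code></pre>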
|
<python><python-imaging-library>
|
2023-01-15 01:40:06
| 1
| 517
|
Agan
|
75,122,111
| 17,696,880
|
Set regex pattern that concatenates one capture group or another depending on whether or not the input string starts with certain symbols
|
<pre class="lang-py prettyprint-override"><code>import re
word = ""
input_text = "Creo que July no se trata de un nombre" #example 1, should match with the Case 00
#input_text = "Creo que July Moore no se trata de un nombre" #example 2, should not match any case
#input_text = "Efectivamente esa es una lista de nombres. July Moore no se trata de un nombre" #example 3, should match with the Case 01
#input_text = "July Moore no se trata de un nombre" #example 4, should match with the Case 01
name_capture_pattern_00 = r"((?:\w+))?" # does not tolerate whitespace in middle
#name_capture_pattern_01 = r"((?:\w\s*)+)"
name_capture_pattern_01 = r"(^[A-Z](?:\w\s*)+)" # tolerates that there are spaces but forces it to be a word that begins with a capital letter
#Case 00
regex_pattern_00 = name_capture_pattern_00 + r"\s*(?i:no)\s*(?i:se\s*tratar[íi]a\s*de\s*un\s*nombre|se\s*trata\s*de\s*un\s*nombre|(?:ser[íi]a|es)\s*un\s*nombre)"
#Case 01
regex_pattern_01 = r"(?:^|[.;,]\s*)" + name_capture_pattern_01 + r"\s*(?i:no)\s*(?i:se\s*tratar[íi]a\s*de\s*un\s*nombre|se\s*trata\s*de\s*un\s*nombre|(?:ser[íi]a|es)\s*un\s*nombre)"
#Taking the regex pattern(case 00 or case 01), it will search the string and then try to extract the substring of interest using capturing groups.
n0 = re.search(regex_pattern_00, input_text)
if n0 and word == "":
word, = n0.groups()
word = word.strip()
print(repr(word)) # --> print the substring that I captured with the capturing group
n1 = re.search(regex_pattern_01, input_text)
if n1 and word == "":
word, = n1.groups()
word = word.strip()
print(repr(word)) # --> print the substring that I captured with the capturing group
</code></pre>
<p>If in front of the pattern there is a <code>.\s*</code>, a <code>,\s*</code>, or a <code>;\s*</code>, or if it is simply the beginning of the input string, then use this capture pattern, <code>name_capture_pattern_01 = r"((?:\w\s*)+)?"</code>; but if that is not the case, use this other capture pattern, <code>name_capture_pattern_00 = r"((?:\w+))?"</code>.</p>
<p>I think that in case 00 you should add something like this at the beginning of the pattern <code>(?:(?<=\s)|^)</code></p>
<p>That way you would get these 2 possible resulting patterns after concatenation, where perhaps an <code>or</code> condition <code>|</code> can be set inside the search pattern:</p>
<p>In <code>Case 00</code>...</p>
<p><code>(?:\.|\;|\,)</code> or the <code>start of the string</code> <code>+</code> <code>((?:\w\s*)+)?</code> <code>+</code> <code>r"\s*(?i:no)\s*(?i:se\s*tratar[íi]a\s*de\s*un\s*nombre|se\s*trata\s*de\s*un\s*nombre|(?:ser[íi]a|es)\s*un\s*nombre)"</code></p>
<p>In other case (<code>Case 01</code>)...</p>
<p><code>((?:\w+))??</code> <code>+</code> <code>r"\s*(?i:no)\s*(?i:se\s*tratar[íi]a\s*de\s*un\s*nombre|se\s*trata\s*de\s*un\s*nombre|(?:ser[íi]a|es)\s*un\s*nombre)"</code></p>
<p>But in both cases (<code>Case 00</code> or <code>Case 01</code>, depending on what the program identifies) it should match the pattern and extract the capturing group to store it in the variable called as <code>word</code> .</p>
<p>And the <strong>correct output</strong> for each of these cases would be the capture group that should be obtained and printed in each of these examples:</p>
<pre><code>'July' #for the example 1
'' #for the example 2
'July Moore' #for the example 3
'July Moore' #for the example 4
</code></pre>
<hr />
<p><strong>EDIT CODE:</strong></p>
<p>This code, although the regex patterns appear to be well established, fails by returning as output only the last part of the name, in this case <code>"Moore"</code>, and not the full name <code>"July Moore"</code>.</p>
<pre class="lang-py prettyprint-override"><code>import re
#Here are 2 examples where you can see this "capture error"
input_text = "HghD djkf ; July Moore no se trata de un nombre"
input_text = "July Moore no se trata de un nombre"
word = ""
#name_capture_pattern_01 = r"((?:\w\s*)+)"
name_capture_pattern_01 = r"([A-Z][a-z]+(?:\s*[A-Z][a-z]+)*)"
#Case 01
regex_pattern_01 = r"(?:^|[.;,]\s*)" + name_capture_pattern_01 + r"\s*(?i:no)\s*(?i:se\s*tratar[íi]a\s*de\s*un\s*nombre|se\s*trata\s*de\s*un\s*nombre|(?:ser[íi]a|es)\s*un\s*nombre)"
n1 = re.search(regex_pattern_01, input_text)
if n1 and word == "":
word, = n1.groups()
word = word.strip()
print(repr(word))
</code></pre>
<p>In both examples, since it complies with starting with <code>(?:^|[.;,]\s*)</code> and starting with a capital letter like this pattern <code>([A-Z][a-z]+(?:\s*[A-Z][a-z]+)*)</code>, it should print the full name in the console <code>July Moore</code>. It's quite curious but placing this pattern makes it impossible for me to capture a complete name under these conditions established by the search pattern.</p>
|
<python><python-3.x><regex><string><regex-group>
|
2023-01-15 00:03:56
| 1
| 875
|
Matt095
|
75,121,925
| 706,389
|
Why doesn't Python's logging.exception method log the traceback by default?
|
<p>When writing defensive code in python (e.g. you're handling some user input or whatever), I find it useful to return <code>Exception</code> objects alongside regular computation results, so they can be discarded/logged or processed in some other way. Consider the following snippet:</p>
<pre class="lang-py prettyprint-override"><code>import logging
from traceback import TracebackException
from typing import Union

logging.basicConfig(level=logging.INFO)

def _compute(x) -> int:
    return len(x)

def compute(x) -> Union[int, Exception]:
    try:
        return _compute(x)
    except Exception as e:
        return e

inputs = [
    'whatever',
    1,
    'ooo',
    None,
]

outputs = []
for i in inputs:
    r = compute(i)
    outputs.append(r)

for i, r in zip(inputs, outputs):
    logging.info('compute(%s)', i)
    if isinstance(r, Exception):
        logging.exception(r)
    else:
        logging.info(r)
</code></pre>
<p>This results in the following output</p>
<pre><code>INFO:root:compute(whatever)
INFO:root:8
INFO:root:compute(1)
ERROR:root:object of type 'int' has no len()
NoneType: None
INFO:root:compute(ooo)
INFO:root:3
INFO:root:compute(None)
ERROR:root:object of type 'NoneType' has no len()
NoneType: None
</code></pre>
<p>So you can see that useful exception information like stacktrace is lost, which makes it a bit hard to debug the cause of exception.</p>
<p>This can be fixed by logging exception as <code>logging.exception(r, exc_info=r)</code>:</p>
<pre><code>INFO:root:compute(whatever)
INFO:root:8
INFO:root:compute(1)
ERROR:root:object of type 'int' has no len()
Traceback (most recent call last):
File "/tmp/test.py", line 15, in compute
return _compute(x)
File "/tmp/test.py", line 10, in _compute
return len(x)
TypeError: object of type 'int' has no len()
INFO:root:compute(ooo)
INFO:root:3
INFO:root:compute(None)
ERROR:root:object of type 'NoneType' has no len()
Traceback (most recent call last):
File "/tmp/test.py", line 15, in compute
return _compute(x)
File "/tmp/test.py", line 10, in _compute
return len(x)
TypeError: object of type 'NoneType' has no len()
</code></pre>
<p>My question is -- why doesn't the <code>logging.exception</code> method do this by default, if the argument passed to it happens to be an <code>Exception</code>? I tried searching in PEPs etc., but it wasn't really fruitful.</p>
<p>My only guess is that <code>logging.exception</code> is essentially <a href="https://github.com/python/cpython/blob/3.11/Lib/logging/__init__.py#L1868-L1872" rel="nofollow noreferrer">just a special case of <code>logging.error</code></a>, so in principle the <code>logging.exception</code> method doesn't know whether it's passed an <code>Exception</code> object or something else. So supporting this would require some code, e.g. checking <code>isinstance(msg, Exception)</code>, and perhaps the authors of the logging library decided it's a bit too specific. But IMO it makes sense, considering that in practice <code>logging.exception</code> is passed an <code>Exception</code> object in most cases.</p>
|
<python><exception><python-logging>
|
2023-01-14 23:18:03
| 2
| 2,549
|
karlicoss
|
75,121,856
| 580,644
|
Beautifulsoup add attribute to first <td> item in a table
|
<p>I would like to get a table's HTML code from a website with BeautifulSoup, and I need to add an attribute to the first td element. I have:</p>
<pre><code>try:
    description = hun.select('#description > div.tab-pane-body > div > div > div > table')[0]
    description += "<style type=text/css>td:first-child { font-weight: bold; width: 5%; } td:nth-child(2) { width: 380px } td:nth-child(3) { font-weight: bold; }</style>"
except:
    description = None
</code></pre>
<p>The selected <code>description</code>'s code:</p>
<pre><code><table border="0" cellpadding="0" cellspacing="0" width="704">
<tbody>
<tr>
<td valign="top" width="704" style="">
<p><span>Short description </span></p>
</td>
</tr>
<tr>
<td valign="top" width="123" style="">
<p><span>Additional data</span></p>
</td>
</tr>
</tbody>
</table>
</code></pre>
<p>I would like to add a colspan attribute to the first <code><td></code> and keep changes in the <code>description</code> variable:</p>
<pre><code><table border="0" cellpadding="0" cellspacing="0" width="704">
<tbody>
<tr>
<td valign="top" width="704" style="" colspan="4">
<p><span>Short description </span></p>
</td>
</tr>
<tr>
<td valign="top" width="123" style="">
<p><span>Additional data</span></p>
</td>
</tr>
</tbody>
</table>
</code></pre>
<p>I tried:</p>
<pre><code>hun = BeautifulSoup(f, 'html.parser')
try:
    description2 = hun.select('#description > div.tab-pane-body > div > div > div > table')[0]
    description2 += "<style type=text/css>td:first-child { font-weight: bold; width: 5%; } td:nth-child(2) { width: 380px } td:nth-child(3) { font-weight: bold; }</style>"
    soup = BeautifulSoup(description2, 'html.parser')
    description = soup.td['colspan'] = 4
</code></pre>
<p>...but it is not working: the output is "4" instead of the table's HTML code with the attribute added.</p>
<p>I found it, it must be like this:</p>
<pre><code>hun = BeautifulSoup(f, 'html.parser')
try:
    description2 = hun.select('#description > div.tab-pane-body > div > div > div > table')[0]
    description2 += "<style type=text/css>td:first-child { font-weight: bold; width: 5%; } td:nth-child(2) { width: 380px } td:nth-child(3) { font-weight: bold; }</style>"
    soup = BeautifulSoup(description2, 'html.parser')
    soup.td['colspan'] = 4
    description = soup
</code></pre>
|
<python><beautifulsoup>
|
2023-01-14 23:03:45
| 1
| 2,656
|
Adrian
|
75,121,814
| 222,977
|
How to deal with NumPy array product underflow and overflow
|
<p>I have a 2D numpy array of shape (15077, 5). All the values are less than or equal to 1.0. I'm essentially trying to do the following:</p>
<pre class="lang-py prettyprint-override"><code>product = array.prod(axis=0)
product = product / product.sum()
</code></pre>
<p>So basically I want to return an array that represents the product of each column in the 2d array. The above code works fine for smaller inputs. But what I'm dealing with now has underflow and I'm ending up with a resulting array of all 0s. I've verified there are no 0s in the input array.</p>
<p>I've tried using the longdouble type and still seem to have the problem. I've tried to figure out ways of normalizing such as this:</p>
<pre><code> results = np.ones(len(array[0]))
multiplier = 1
for row in array:
results = results * (row * multiplier)
while results.max() > 1:
results = results / 2
multiplier = multiplier / 2
while results.max() > 0 and results.max() < 1:
results = results * 2
multiplier = multiplier * 2
return results / results.sum()
</code></pre>
<p>While the above code does end up returning an array that isn't all zeros, I'm not convinced it's doing the correct thing. One of the elements is 0. I'm unsure if that's because the algorithm is wrong, or because there's so much difference between that column and the other columns that it underflows anyway.</p>
<p>Is there a way to do this that correctly accounts for overflow and underflow?</p>
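<p>A minimal sketch of the standard log-space trick, which sidesteps the underflow entirely: sum logarithms instead of multiplying raw values, then normalize by subtracting the maximum before exponentiating (a manual log-sum-exp). This assumes, as stated, that the array has no zeros:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

log_prod = np.log(array).sum(axis=0)  # log of the column-wise products
log_prod -= log_prod.max()            # shift so the largest term is exp(0) = 1
result = np.exp(log_prod)
result /= result.sum()                # same normalization as product / product.sum()
</code></pre>
<p>Shifting by the maximum does not change the normalized result, since the same factor appears in numerator and denominator; a column may still come out as exactly 0 if it is astronomically smaller than the largest one, which is then the numerically correct answer.</p>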
|
<python><python-3.x><numpy><numpy-ndarray>
|
2023-01-14 22:56:29
| 1
| 583
|
Dan
|
75,121,807
| 3,672,883
|
what are keypoints in yolov7 pose?
|
<p>I am trying to understand the keypoint output of YOLOv7, but I didn't find enough information about it.</p>
<p>I have the following output:</p>
<pre><code>array([ 0, 0, 430.44, 476.19, 243.75, 840, 0.94348, 402.75, 128.5, 0.99902, 417.5, 114.25, 0.99658, 385.5, 115, 0.99609, 437.75, 125.5, 0.89209, 366.75, 128, 0.66406, 471, 229.62,
0.97754, 346.75, 224.88, 0.97705, 526, 322.75, 0.95654, 388.5, 340.75, 0.95898, 424.5, 314.75, 0.94873, 483.5, 335.5, 0.9502, 465.5, 457.75, 0.99219, 381.5, 456.25, 0.99219, 451.5, 649,
0.98584, 379.25, 649.5, 0.98633, 446.5, 818, 0.92285, 366, 829.5, 0.9248])
</code></pre>
<p>The paper <a href="https://arxiv.org/pdf/2204.06806.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2204.06806.pdf</a> says "So, in total there are 51 elements for 17 keypoints associated with an anchor.", but the length here is 58.</p>
<p>There are 18 numbers that are probably keypoint confidences:</p>
<pre><code>array([ 0.94348, 0.99902, 0.99658, 0.99609, 0.89209, 0.66406, 0.97754, 0.97705, 0.95654, 0.95898, 0.94873, 0.9502, 0.99219, 0.99219,
0.98584, 0.98633, 0.92285, 0.9248])
</code></pre>
<p>But the paper says there are 17 keypoints.</p>
<p>This repo <a href="https://github.com/retkowsky/Human_pose_estimation_with_YoloV7/blob/main/Human_pose_estimation_YoloV7.ipynb" rel="nofollow noreferrer">https://github.com/retkowsky/Human_pose_estimation_with_YoloV7/blob/main/Human_pose_estimation_YoloV7.ipynb</a> says that the keypoints are the following:</p>
<p><a href="https://i.sstatic.net/HG8dB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HG8dB.png" alt="enter image description here" /></a></p>
<p>but that shape doesn't match the prediction:</p>
<p><a href="https://i.sstatic.net/jwJUD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jwJUD.png" alt="enter image description here" /></a></p>
<p>Is the first image right about the keypoints?</p>
<p>And what are the first four values?</p>
<pre><code> 0, 0, 430.44, 476.19
</code></pre>
<p>Thanks</p>
<p><strong>EDIT</strong></p>
<p>This is not a complete answer, but by editing the plot function I can get the following information.</p>
<p>Given the following output keypoint:</p>
<pre><code>array([[ 0, 0, 312.31, 486, 291.75, 916.5, 0.94974, 304.5, 118.75, 0.99902, 320.75, 102.25, 0.99756, 287.75, 103.25, 0.99658, 345, 112, 0.96338, 268.25, 115.25, 0.69531, 394,
226.25, 0.98145, 228.25, 230.12, 0.98389, 428.5, 358.5, 0.95898, 192.88, 364.75, 0.96533, 407, 464.25, 0.95166, 215.75, 464.25, 0.9585, 363.75, 491, 0.99219, 257.75, 491.5, 0.99268,
361.5, 680, 0.9834, 250.88, 679, 0.98438, 361, 861.5, 0.91064, 247, 863, 0.91504]])
</code></pre>
<p>From position <code>output[7:]</code> you can get the points of each keypoint, in the following order, as you can see in the image:</p>
<p><a href="https://i.sstatic.net/IaqBo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IaqBo.png" alt="enter image description here" /></a></p>
<pre><code>array([ 304.5, 118.75, 0.99902, 320.75, 102.25, 0.99756, 287.75, 103.25, 0.99658, 345, 112, 0.96338, 268.25, 115.25, 0.69531, 394, 226.25, 0.98145, 228.25, 230.12, 0.98389, 428.5, 358.5, 0.95898,
192.88, 364.75, 0.96533, 407, 464.25, 0.95166, 215.75, 464.25, 0.9585, 363.75, 491, 0.99219, 257.75, 491.5, 0.99268, 361.5, 680, 0.9834, 250.88, 679, 0.98438, 361, 861.5, 0.91064,
247, 863, 0.91504])
</code></pre>
<p>But I am not sure what the rest of the values are:</p>
<p>0, 0, 312.31, 486, 291.75, 916.5, 0.94974</p>
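<p>For what it's worth, the arithmetic is at least consistent with a 7-value box prefix followed by the 51 keypoint values from the paper: 58 = 7 + 17 × 3. Under that (unconfirmed) assumption, the prefix would be <code>[batch_index, class_id, center_x, center_y, width, height, box_confidence]</code>, which matches the magnitudes of the values listed above, and the keypoints could be sliced like this:</p>
<pre><code>import numpy as np

out = np.asarray(output)       # one detection of length 58
box = out[:7]                  # assumed: batch_index, class_id, cx, cy, w, h, box_conf
kpts = out[7:].reshape(17, 3)  # 17 rows of (x, y, confidence)
</code></pre>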
|
<python><pytorch><yolo><yolov7>
|
2023-01-14 22:54:25
| 1
| 5,342
|
Tlaloc-ES
|
75,121,793
| 9,855,588
|
How would you import a variable from a library when you have a utility that references the variable, in Python?
|
<p>Say I am using library XYZ that stores a variable in <code>__init__.py</code>.</p>
<p>I have the following files:</p>
<pre><code># some_library/__init__.py
hello = "frog"

# file1.py
import some_library

def run():
    print(some_library.hello)

# file2.py (at this point I can access hello from some_library/__init__.py)
import file1 as f1

print(f1.some_library.hello)
</code></pre>
<p>The import convention when doing it this way seems a bit weird. file2.py is my main driver, and file1.py is a utility, while some_library is an external package that I need. Basically my utility uses some_library, and in my driver application I want to use the utility, but I need to check whether hello from some_library contains some value.</p>
<p>In my driver application, should I also <code>import some_library</code> and check <code>hello</code>, or is it better to rely on its value from <code>file1.py</code>, since I'm importing the full library there?</p>
<p>I need to be able to do something like</p>
<pre><code># file2.py
from file1 import run, file1.some_library

if file1.some_library == "hello":
    run()
</code></pre>
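<p>A minimal sketch of the straightforward option, which relies on a fact worth knowing here: Python caches imported modules in <code>sys.modules</code>, so importing <code>some_library</code> again in the driver is essentially free and refers to the exact same module object that <code>file1</code> sees:</p>
<pre><code># file2.py
import some_library   # same cached module object that file1 imported
from file1 import run

if some_library.hello == "frog":
    run()
</code></pre>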
|
<python><python-3.x>
|
2023-01-14 22:51:17
| 0
| 3,221
|
dataviews
|
75,121,671
| 12,603,542
|
Calling open() with 'append' mode throws: [Errno 2] No such file or directory exception in Python
|
<p>It is kind of a weird situation, and I cannot find a similar problem anywhere. In my situation we call <code>open</code> with the 'append' parameter, as below.</p>
<p>I am using a function that calls another method:</p>
<pre><code>fileManager.saveNewLine(result_line, results_path, "a")

# below is fileManager
def saveNewLine(text, path, mode):
    hs = open(path, mode)  # <-- throws: [Errno 2] No such file or directory...
    hs.write(text + "\n")
    hs.close()
hs.close()
</code></pre>
<p>And the file <code>path</code> is:</p>
<blockquote>
<p>'ml_models/signals/results/model-training-1h-Core-hourly-True-True-True-0.2-1-9-500-32-1-0.2-1-5-lstm-relu-linear-5000-300-10-10-250-250-6000-14-13000-col--MlModel-time-2023-01-14_23-12-24.301759.txt'</p>
</blockquote>
<p>The (relative) path is only <strong>198</strong> characters. The folder <strong>ml_models/signals/results</strong> exists. I have run out of ideas about what else could be wrong.</p>
<p>As far as I know, 'append' should create a new file if one does not exist. I also tested with the <code>w</code> parameter.</p>
<blockquote>
<p>"a" - Append - Opens a file for appending, creates the file if it does
not exist.</p>
<p>"w" - Write - Opens a file for writing, creates the file if
it does not exist</p>
</blockquote>
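<p>For reference, a minimal sketch of two checks that commonly resolve this error with relative paths: <code>[Errno 2]</code> when opening for append usually means a directory component of the path doesn't exist relative to the process's current working directory (open creates the file, but never its parent folders):</p>
<pre><code>import os

def saveNewLine(text, path, mode):
    print("cwd:", os.getcwd())  # is the relative path resolved from where you expect?
    os.makedirs(os.path.dirname(path), exist_ok=True)  # create missing parent folders
    with open(path, mode) as hs:
        hs.write(text + "\n")
</code></pre>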
|
<python><file><io>
|
2023-01-14 22:25:58
| 1
| 631
|
bakunet
|
75,121,419
| 19,003,861
|
Django - Annotate within for loop - what is the difference between my two codes
|
<p>I am trying to sum up two columns in a view with <code>values()</code> and <code>annotate()</code>.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Column 1</th>
<th>Column 2</th>
</tr>
</thead>
<tbody>
<tr>
<td>5</td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>-2</td>
</tr>
</tbody>
</table>
</div>
<p>Currently calling "total" will return the total for each row and not the overall total.</p>
<p>Returning in template</p>
<pre><code>5
3
</code></pre>
<p>instead of</p>
<pre><code>8
</code></pre>
<p>I believe this is because I print the total in a for loop. What confuses me is that I have almost identical code working perfectly fine in another view.</p>
<p>How can I get the total of the several rows together?</p>
<p><strong>Update</strong> to answer Willem's question - timestamp is used to order the list of model objects when they are created.</p>
<p>This was not the result I initially wanted when I wrote the code. But realised I could use this view to render a report of the objects as they are being created, so I added the timestamp to order the objects starting with the most recent one.</p>
<p>This is not relevant for my problem. I removed it to avoid confusion. Sorry for this.</p>
<p><strong>views</strong></p>
<pre><code>def function(request, userprofile_id):
    venue = UserProfile.objects.filter(user=request.user).values('venue')
    points_cummulated_per_user_per_venue = Itemised_Loyalty_Card.objects.filter(user=userprofile_id).filter(venue=request.user.userprofile.venue).values('venue__name', 'timestamp').annotate(sum_points=Sum('add_points')).annotate(less_points=Sum('use_points')).annotate(total=F('add_points')-F('use_points'))
    return render(request, "main/account/venue_loyalty_card.html", {'venue': venue, 'points_cummulated_per_user_per_venue': points_cummulated_per_user_per_venue})
</code></pre>
<p><strong>template</strong></p>
<pre><code>{%for model in points_cummulated_per_user_per_venue %}
Total: {{model.total}}
{%endfor%}
</code></pre>
<p><strong>models</strong></p>
<pre><code>class Itemised_Loyalty_Card(models.Model):
    user = models.ForeignKey(UserProfile, blank=True, null=True, on_delete=models.CASCADE)
    venue = models.ForeignKey(Venue, blank=True, null=True, on_delete=models.CASCADE)
    add_points = models.IntegerField(name='add_points', null=True, blank=True, default=0)
    use_points = models.IntegerField(name='use_points', null=True, blank=True, default=0)

class Venue(models.Model, HitCountMixin):
    id = models.AutoField(primary_key=True)
    name = models.CharField(verbose_name="Name", max_length=100, blank=True)
</code></pre>
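<p>A minimal sketch of getting one overall number instead of one per row, using <code>aggregate()</code> rather than <code>annotate()</code>: an aggregate collapses the whole queryset into a single value, while an annotation adds a value to each row, which is exactly the per-row behavior seen above:</p>
<pre><code>from django.db.models import F, Sum

totals = (Itemised_Loyalty_Card.objects
          .filter(user=userprofile_id, venue=request.user.userprofile.venue)
          .aggregate(total=Sum(F('add_points') - F('use_points'))))
# totals['total'] is a single number (8 for the example table),
# ready to pass straight to the template
</code></pre>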
|
<python><django><django-views>
|
2023-01-14 21:36:43
| 1
| 415
|
PhilM
|
75,121,353
| 19,678,835
|
Automate AWS ECR scanning
|
<p>I have tried to automate ECR image scanning using the AWS CLI, but I am stuck at the scanning step. When I call <code>aws ecr start-image-scan</code>, it starts the scan, but how do I know when the scan is finished? My images are large and it takes a few minutes. Could someone help me figure this out? I am using Python.</p>
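<p>Since this is Python anyway, a minimal sketch with boto3, whose ECR client ships an <code>image_scan_complete</code> waiter that polls until the scan finishes (repository and tag names are placeholders):</p>
<pre><code>import boto3

ecr = boto3.client("ecr")
repo = "my-repo"                 # placeholder repository name
image = {"imageTag": "latest"}   # placeholder tag

ecr.start_image_scan(repositoryName=repo, imageId=image)

# polls describe_image_scan_findings until the scan status is COMPLETE
waiter = ecr.get_waiter("image_scan_complete")
waiter.wait(repositoryName=repo, imageId=image)

findings = ecr.describe_image_scan_findings(repositoryName=repo, imageId=image)
print(findings["imageScanStatus"])
</code></pre>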
|
<python><amazon-ecr><scanning>
|
2023-01-14 21:24:32
| 2
| 488
|
Mark P
|
75,121,352
| 10,772,422
|
StableBaselines: segmentation fault when creating a model
|
<p>I am getting a segmentation fault when trying to create a stable_baselines3 PPO model on a CartPole-v1 OpenAI Gym environment.</p>
<p>So far, what I've tried is running a short example on Python 3.10 as well as Python 3.9. I'm running the Python script in a Conda environment. What I did was install stable-baselines3[extra] using pip. I also installed the OpenAI Gym library using conda.</p>
<p>With Python 3.10 I was getting a segmentation fault in the <code>threading.py</code> file, in a call to the <code>wait</code> function.
Using Python 3.9 I'm getting a different error, in the <code>constraints.py</code> file, in the <code>_IntegerInterval.check</code> method.</p>
<p>This is the example code:</p>
<pre><code>import sys
import gym
from stable_baselines3 import PPO, A2C
from stable_baselines3.common.env_util import make_vec_env
import faulthandler

faulthandler.enable()

def main():
    print("Going to create model")
    env = gym.make("CartPole-v1")
    model = PPO("MlpPolicy", env, verbose=1)
    print("Model created")
    model.learn(total_timesteps=25000)
    model.save("ppo_cartpole")

if __name__ == '__main__':
    main()
    sys.settrace(None)
</code></pre>
<p>This is the terminal output:</p>
<pre><code>(py39) ilijas-mbp:Doom_DQN_GC ilijavuk$ cd /Users/ilijavuk/Documents/Reinforcement_Learning/Doom_DQN_GC ; /usr/bin/env /Users/ilijavuk/opt/anaconda3/envs/py39/bin/python /Users/ilijavuk/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher 50143 -- /Users/ilijavuk/Documents/Reinforcement_Learning/Doom_DQN_GC/StableBaselinesTest.py
Going to create model
Backend MacOSX is interactive backend. Turning interactive mode on.
Using cpu device
Wrapping the env with a `Monitor` wrapper
Wrapping the env in a DummyVecEnv.
Fatal Python error: Segmentation fault
Thread 0x0000700007a7b000 (most recent call first):
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/threading.py", line 316 in wait
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/threading.py", line 581 in wait
File "/Users/ilijavuk/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/_vendored/pydevd/pydevd.py", line 261 in _on_run
File "/Users/ilijavuk/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_daemon_thread.py", line 49 in run
(py39) ilijas-mbp:Doom_DQN_GC ilijavuk$
</code></pre>
<p>So it seems like the model creation is failing somewhere in the DummyVecEnv wrapping step</p>
<p>EDIT:
I managed to run the test script with gdb. This is the current trace that I'm getting:</p>
<pre><code>Starting program: /Users/ilijavuk/opt/anaconda3/envs/py39/bin/python StableBaselinesTest.py
[New Thread 0x1603 of process 2803]
[New Thread 0x2003 of process 2803]
warning: unhandled dyld version (17)
Going to create model
Using cpu device
Wrapping the env with a `Monitor` wrapper
Wrapping the env in a DummyVecEnv.
Model created
[New Thread 0x1807 of process 2803]
[New Thread 0x2103 of process 2803]
[New Thread 0x2203 of process 2803]
[New Thread 0x2303 of process 2803]
[New Thread 0x2403 of process 2803]
[New Thread 0x2503 of process 2803]
[New Thread 0x2603 of process 2803]
[New Thread 0x2703 of process 2803]
[New Thread 0x2803 of process 2803]
[New Thread 0x2903 of process 2803]
[New Thread 0x2a03 of process 2803]
[New Thread 0x2b03 of process 2803]
[New Thread 0x3e03 of process 2803]
[New Thread 0x3f03 of process 2803]
Thread 3 received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x1807 of process 2803]
0x0000000000000000 in ?? ()
(gdb) backtrace
#0 0x0000000000000000 in ?? ()
#1 0x0000000103a48538 in ?? ()
#2 0x0000000000000000 in ?? ()
(gdb)
</code></pre>
<p>After quitting gdb I also get this. HOWEVER, this seems to be tied to the <code>model.learn</code> call, since this error disappears when I comment <code>model.learn</code> and <code>model.save</code> out:</p>
<pre><code>(gdb) q
A debugging session is active
Inferior 1 [process 2803] will be killed.
Quit anyway? (y or n) y
Fatal Python error: Segmentation fault
Thread 0x00007ff847bb84c0 (most recent call first):
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 114 in forward
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194 in _call_impl
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/container.py", line 204 in forward
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194 in _call_impl
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/stable_baselines3/common/torch_layers.py", line 263 in forward
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194 in _call_impl
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/stable_baselines3/common/policies.py", line 627 in forward
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194 in _call_impl
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 166 in collect_rollouts
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 248 in learn
File "/Users/ilijavuk/opt/anaconda3/envs/py39/lib/python3.9/site-packages/stable_baselines3/ppo/ppo.py", line 307 in learn
File "/Users/ilijavuk/Documents/Reinforcement_Learning/Doom_DQN_GC/StableBaselinesTest.py", line 16 in main
File "/Users/ilijavuk/Documents/Reinforcement_Learning/Doom_DQN_GC/StableBaselinesTest.py", line 20 in <module>
</code></pre>
|
<python><segmentation-fault><openai-gym><stable-baselines>
|
2023-01-14 21:24:21
| 1
| 361
|
Ilija Vuk
|
75,121,336
| 252,226
|
Cannot start dask client
|
<p>When I try and initiate a dask distributed cluster with:</p>
<pre><code>from dask.distributed import Client, progress
client = Client(threads_per_worker=1, n_workers=2)
client
</code></pre>
<p>I get the following error:</p>
<p><code>RuntimeError: Cluster failed to start: module 'numpy' has no attribute 'bool8'</code></p>
<p>I have <code>numpy==1.22.4</code> installed which is required by another library (it requires <code>< 1.23</code>). I have <code>dask==2023.1.0</code> installed.</p>
<p>Will it work with an older version of <code>dask</code>? If so which one should I use?</p>
|
<python><numpy><dask><dask-distributed>
|
2023-01-14 21:22:05
| 0
| 783
|
dbschwartz
|
75,121,204
| 17,561,414
|
JSON items data types python
|
<p>I have the following JSON structure.</p>
<p><a href="https://i.sstatic.net/Lc5KV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lc5KV.png" alt="enter image description here" /></a></p>
<p>The goal is to identify the data type of each nested key under the <code>items</code> hierarchy.</p>
<pre><code>for i, item in enumerate(data['_embedded']['items']):
    for key, values in item.items():
        if isinstance(values, dict):
            print(values, ' is dict')
        elif isinstance(values, list):
            print(values, ' is list')
        elif isinstance(values, str):
            print(values, ' is str')
</code></pre>
<p>output is:</p>
<pre><code>{'self':PCR-0006894-SAMKG0-PC'}} is dict
PCR-0006894-SAMKG0-PC is str
products is str
etc.
</code></pre>
<p>so it just gives me the type of the value for each key, but the desired output, based on the JSON structure attached above, should be:</p>
<pre><code>_links is dict
identifier is str
enabled is bool
family is str
categories is list
</code></pre>
<p>When I change <code>if isinstance(values, dict):</code> to <code>if isinstance(key, dict):</code>, it prints that everything is str, but that is not true.</p>
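<p>A minimal sketch of one way to get every type name without a chain of isinstance checks. (If you do keep the isinstance chain, note that <code>bool</code> must be tested before <code>int</code>, because <code>bool</code> is a subclass of <code>int</code>.)</p>
<pre><code>for item in data['_embedded']['items']:
    for key, value in item.items():
        # type(...).__name__ reports dict, list, str, bool, int, ... directly
        print(key, 'is', type(value).__name__)
</code></pre>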
|
<python><json><dictionary><flatten>
|
2023-01-14 21:01:30
| 0
| 735
|
Greencolor
|
75,121,127
| 2,584,721
|
Save all intermediate variables in a function, should the function fail
|
<p>I find myself frequently running into this sort of problem. I have a function like</p>
<pre class="lang-py prettyprint-override"><code>def compute(input):
    result = two_hour_computation(input)
    result = post_processing(result)
    return result
</code></pre>
<p>and <code>post_processing(result)</code> fails. Now the obvious thing to do is to change the function to</p>
<pre class="lang-py prettyprint-override"><code>import pickle

def compute(input):
    result = two_hour_computation(input)
    pickle.dump(result, open('intermediate_result.pickle', 'wb'))
    result = post_processing(result)
    return result
</code></pre>
<p>but I don't usually remember to write all my functions that way. What I wish I had was a decorator like:</p>
<pre class="lang-py prettyprint-override"><code>@return_intermediate_results_if_something_goes_wrong
def compute(input):
    result = two_hour_computation(input)
    result = post_processing(result)
    return result
</code></pre>
<p>Does something like that exist? I can't find it on google.</p>
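<p>Not pointing to a ready-made library, but a minimal sketch of such a decorator is fairly compact, assuming the interesting intermediate values live in the function's local variables when the failure happens: the traceback gives access to the decorated function's frame, and its picklable locals can be dumped before re-raising.</p>
<pre class="lang-py prettyprint-override"><code>import functools
import pickle
import sys

def return_intermediate_results_if_something_goes_wrong(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            tb = sys.exc_info()[2]
            frame = tb.tb_next.tb_frame   # the frame of func itself, one level below wrapper
            snapshot = {}
            for name, value in frame.f_locals.items():
                try:
                    pickle.dumps(value)   # keep only the locals that pickle cleanly
                    snapshot[name] = value
                except Exception:
                    pass
            with open(f"{func.__name__}_locals.pickle", "wb") as f:
                pickle.dump(snapshot, f)
            raise
    return wrapper
</code></pre>
<p>With the <code>compute</code> example above, the <code>result</code> from <code>two_hour_computation</code> would then be waiting in <code>compute_locals.pickle</code> after <code>post_processing</code> fails.</p>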
|
<python><python-decorators>
|
2023-01-14 20:47:22
| 7
| 14,710
|
Alex Lenail
|
75,121,034
| 11,462,274
|
Create a lambda function to set values in a column without being warned about a value set on a copy of a slice of a DataFrame
|
<p>Object <code>archive</code>:</p>
<pre class="lang-none prettyprint-override"><code>match_date,start_time,competition,team_home,team_away,match,tip,reliability,odds,home_goals,away_goals,score,result
2023-01-13,16:45,Italian Serie A,Napoli,Juventus,Napoli v Juventus,Under 2.5 Goals,3,1.8,,
2023-01-13,17:00,English Premier League,Aston Villa,Leeds,Aston Villa v Leeds,Over 2.5 Goals,3,1.73,,
2023-01-13,17:00,Spanish La Liga,Celta Vigo,Villarreal,Celta Vigo v Villarreal,Under 2.5 Goals,1,1.6,,
2023-01-13,17:15,Portuguese Primeira Liga,Portimonense,Santa Clara,Portimonense v Santa Clara,Santa Clara To Win,4,3.6,,
2023-01-14,09:30,English Premier League,Man Utd,Man City,Man Utd v Man City,Man City To Win,3,1.83,,
2023-01-14,09:30,English Football League - Championship,Rotherham,Blackburn,Rotherham v Blackburn,Under 2.5 Goals,3,1.73,,
</code></pre>
<p>Object <code>df_new</code>:</p>
<pre class="lang-none prettyprint-override"><code>match_date,start_time,competition,team_home,team_away,match,home_goals,away_goals,score
2023-01-13,FT,English Premier League,Aston Villa,Leeds,Aston Villa v Leeds,2,1,2 - 1
2023-01-13,FT,Spanish La Liga,Celta Vigo,Villarreal,Celta Vigo v Villarreal,1,1,1 - 1
2023-01-13,FT,Italian Serie A,Napoli,Juventus,Napoli v Juventus,5,1,5 - 1
2023-01-13,FT,Portuguese Primeira Liga,Portimonense,Santa Clara,Portimonense v Santa Clara,0,0,0 - 0
</code></pre>
<pre class="lang-python prettyprint-override"><code>import pandas as pd

def market_result(home, away, mkt, hg, ag):
    if (mkt == f'{home} To Win') and (hg > ag):
        return 'GREEN'
    if (mkt == f'{home} To Win') and (hg <= ag):
        return 'RED'
    if (mkt == f'{away} To Win') and (ag > hg):
        return 'GREEN'
    if (mkt == f'{away} To Win') and (ag <= hg):
        return 'RED'
    if (mkt == 'Both Teams To Score') and (hg > 0) and (ag > 0):
        return 'GREEN'
    if (mkt == 'Both Teams To Score') and ((hg == 0) or (ag == 0)):
        return 'RED'
    if (mkt == 'Both Teams To Score - No') and ((hg == 0) or (ag == 0)):
        return 'GREEN'
    if (mkt == 'Both Teams To Score - No') and ((hg > 0) or (ag > 0)):
        return 'RED'
    if (mkt == 'Under 2.5 Goals') and (hg + ag < 2.5):
        return 'GREEN'
    if (mkt == 'Under 2.5 Goals') and (hg + ag >= 2.5):
        return 'RED'
    if (mkt == 'Over 2.5 Goals') and (hg + ag > 2.5):
        return 'GREEN'
    if (mkt == 'Over 2.5 Goals') and (hg + ag <= 2.5):
        return 'RED'
    if (mkt == 'Under 3.5 Goals') and (hg + ag < 3.5):
        return 'GREEN'
    if (mkt == 'Under 3.5 Goals') and (hg + ag >= 3.5):
        return 'RED'
    if (mkt == 'Over 3.5 Goals') and (hg + ag > 3.5):
        return 'GREEN'
    if (mkt == 'Over 3.5 Goals') and (hg + ag <= 3.5):
        return 'RED'

def get_result(df):
    df = df[(df['score'].notnull()) & (df['result'].isnull())]
    df['result'] = df.apply(lambda x: market_result(x['team_home'], x['team_away'], x['tip'], int(x['home_goals']), int(x['away_goals'])), axis=1)
    return df

def append_matches(archive, df_new):
    df_csv = pd.read_csv(archive)
    df_csv.loc[df_csv["score"] == "", "score"] = float("NaN")
    df_csv.loc[df_csv["home_goals"] == "", "home_goals"] = float("NaN")
    df_csv.loc[df_csv["away_goals"] == "", "away_goals"] = float("NaN")
    dt = pd.read_csv(df_new)
    df_merge = df_csv.combine_first(df_csv[['match_date','competition','team_home','team_away','match']].merge(dt, "left"))[df_csv.columns.values]
    df_merge = df_merge[df_csv.columns]
    df_results = get_result(df_merge)
    df_merge.update(df_results)
    df_merge.to_csv(archive, index=False)

def main():
    append_matches('archive.csv', 'df_new.csv')

if __name__ == '__main__':
    main()
main()
</code></pre>
<p>Error received:</p>
<pre class="lang-none prettyprint-override"><code>A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
df['result'] = df.apply(lambda x: market_result(x['team_home'], x['team_away'], x['tip'], int(x['home_goals']), int(x['away_goals'])), axis=1)
</code></pre>
<p>How should I proceed to solve this problem?</p>
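<p>For context, a minimal sketch of what the warning's own suggestion usually amounts to here: take an explicit copy of the filtered slice before assigning to it, so the assignment no longer targets a view of the original DataFrame:</p>
<pre class="lang-python prettyprint-override"><code>def get_result(df):
    # .copy() makes the filtered frame independent of df_merge
    df = df[(df['score'].notnull()) &amp; (df['result'].isnull())].copy()
    df['result'] = df.apply(lambda x: market_result(x['team_home'], x['team_away'], x['tip'],
                                                    int(x['home_goals']), int(x['away_goals'])), axis=1)
    return df
</code></pre>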
|
<python><pandas><dataframe>
|
2023-01-14 20:31:11
| 1
| 2,222
|
Digital Farmer
|
75,121,012
| 997,832
|
Tensorflow model with multiple inputs
|
<p>I have the following neural net model. It takes an int sequence as input. There are also two other sub-networks that begin from the same type of input layer and get concatenated together; this concatenation is the final output of the model. If I specify the input of the model as <code>main_input</code>, and the <code>entity_extraction</code> and <code>relation_extraction</code> networks also start with <code>main_input</code> and their concatenated output is the final output, does that mean I have 3 inputs to this model? What is the underlying input/output mechanism in this model?</p>
<pre><code>main_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32', name='main_input')
x = embedding_layer(main_input)
x = CuDNNLSTM(KG_EMBEDDING_DIM, return_sequences=True)(x)
x = Avg(x)
x = Dense(KG_EMBEDDING_DIM)(x)
x = Activation('relu')(x)
# relation_extraction = Reshape([KG_EMBEDDING_DIM])(x)
relation_extraction = Transpose(x)
x = embedding_layer(main_input)
x = CuDNNLSTM(KG_EMBEDDING_DIM, return_sequences=True)(x)
x = Avg(x)
x = Dense(KG_EMBEDDING_DIM)(x)
x = Activation('relu')(x)
# entity_extraction = Reshape([KG_EMBEDDING_DIM])(x)
entity_extraction = Transpose(x)
final_output = Dense(units=20, activation='softmax')(Concatenate(axis=0)([entity_extraction,relation_extraction]))
m = Model(inputs=[main_input], outputs=[final_output])
</code></pre>
|
<python><tensorflow><deep-learning>
|
2023-01-14 20:25:04
| 1
| 1,395
|
cuneyttyler
|
75,120,918
| 11,462,274
|
When trying to use update to combine two DataFrames, the result is None
|
<p>Object <code>archive</code>:</p>
<pre class="lang-none prettyprint-override"><code>match_date,start_time,competition,team_home,team_away,match,tip,reliability,odds,home_goals,away_goals,score,result
2023-01-13,16:45,Italian Serie A,Napoli,Juventus,Napoli v Juventus,Under 2.5 Goals,3,1.8,,
2023-01-13,17:00,English Premier League,Aston Villa,Leeds,Aston Villa v Leeds,Over 2.5 Goals,3,1.73,,
2023-01-13,17:00,Spanish La Liga,Celta Vigo,Villarreal,Celta Vigo v Villarreal,Under 2.5 Goals,1,1.6,,
2023-01-13,17:15,Portuguese Primeira Liga,Portimonense,Santa Clara,Portimonense v Santa Clara,Santa Clara To Win,4,3.6,,
2023-01-14,09:30,English Premier League,Man Utd,Man City,Man Utd v Man City,Man City To Win,3,1.83,,
2023-01-14,09:30,English Football League - Championship,Rotherham,Blackburn,Rotherham v Blackburn,Under 2.5 Goals,3,1.73,,
</code></pre>
<p>Object <code>df_new</code>:</p>
<pre class="lang-none prettyprint-override"><code>match_date,start_time,competition,team_home,team_away,match,home_goals,away_goals,score
2023-01-13,FT,English Premier League,Aston Villa,Leeds,Aston Villa v Leeds,2,1,2 - 1
2023-01-13,FT,Spanish La Liga,Celta Vigo,Villarreal,Celta Vigo v Villarreal,1,1,1 - 1
2023-01-13,FT,Italian Serie A,Napoli,Juventus,Napoli v Juventus,5,1,5 - 1
2023-01-13,FT,Portuguese Primeira Liga,Portimonense,Santa Clara,Portimonense v Santa Clara,0,0,0 - 0
</code></pre>
<pre class="lang-python prettyprint-override"><code>def market_result(home,away,mkt,hg,ag):
if (mkt == f'{home} To Win') and (hg > ag):
return 'GREEN'
if (mkt == f'{home} To Win') and (hg <= ag):
return 'RED'
if (mkt == f'{away} To Win') and (ag > hg):
return 'GREEN'
if (mkt == f'{away} To Win') and (ag <= hg):
return 'RED'
if (mkt == 'Both Teams To Score') and (hg > 0) and (ag > 0):
return 'GREEN'
if (mkt == 'Both Teams To Score') and ((hg == 0) or (ag == 0)):
return 'RED'
if (mkt == 'Both Teams To Score - No') and ((hg == 0) or (ag == 0)):
return 'GREEN'
if (mkt == 'Both Teams To Score - No') and ((hg > 0) or (ag > 0)):
return 'RED'
if (mkt == 'Under 2.5 Goals') and (hg+ag < 2.5):
return 'GREEN'
if (mkt == 'Under 2.5 Goals') and (hg+ag >= 2.5):
return 'RED'
if (mkt == 'Over 2.5 Goals') and (hg+ag > 2.5):
return 'GREEN'
if (mkt == 'Over 2.5 Goals') and (hg+ag <= 2.5):
return 'RED'
if (mkt == 'Under 3.5 Goals') and (hg+ag < 3.5):
return 'GREEN'
if (mkt == 'Under 3.5 Goals') and (hg+ag >= 3.5):
return 'RED'
if (mkt == 'Over 3.5 Goals') and (hg+ag > 3.5):
return 'GREEN'
if (mkt == 'Over 3.5 Goals') and (hg+ag <= 3.5):
return 'RED'
def get_result(df):
df = df[(df['score'].notnull()) & (df['result'].isnull())]
df['result'] = df.apply(lambda x: market_result(x['team_home'], x['team_away'], x['tip'], int(x['home_goals']), int(x['away_goals'])), axis=1)
return df
def append_matches(archive,df_new):
df_csv = pd.read_csv(archive)
df_csv.loc[df_csv["score"] == "", "score"] = float("NaN")
df_csv.loc[df_csv["home_goals"] == "", "home_goals"] = float("NaN")
df_csv.loc[df_csv["away_goals"] == "", "away_goals"] = float("NaN")
dt = pd.read_csv(df_new)
df_merge = df_csv.combine_first(df_csv[['match_date','competition','team_home','team_away','match']].merge(dt, "left"))[df_csv.columns.values]
df_merge = df_merge[df_csv.columns]
df_results = get_result(df_merge)
df_merge = df_merge.update(df_results)
df_merge.to_csv(archive, index=False)
def main():
append_matches('archive.csv','df_new.csv')
if __name__ == '__main__':
main()
</code></pre>
<p>Error received:</p>
<pre class="lang-none prettyprint-override"><code>append_matches('infogol.csv',df)
df_merge.to_csv(archive, index=False)
AttributeError: 'NoneType' object has no attribute 'to_csv'
</code></pre>
<p>How should I proceed to solve this problem?</p>
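<p>Worth noting for the traceback above: <code>DataFrame.update</code> works in place and returns <code>None</code>, so reassigning its result discards the frame. A minimal sketch:</p>
<pre class="lang-python prettyprint-override"><code># update() mutates df_merge in place and returns None, so don't reassign it
df_merge.update(df_results)
df_merge.to_csv(archive, index=False)
</code></pre>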
|
<python><pandas><dataframe>
|
2023-01-14 20:07:43
| 1
| 2,222
|
Digital Farmer
|
75,120,795
| 15,724,084
|
python scrapy different result when running from shell vs. as script
|
<p><a href="https://i.sstatic.net/y4Z9K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y4Z9K.png" alt="enter image description here" /></a>I have my script when I run script my print statement gives 'None' value. But when the same thing is run from scrapy shell I can get result what i want;
What can be reason for such different results;
Code is below</p>
<pre><code>import scrapy
from scrapy.crawler import CrawlerProcess
class TestSpiderSpider(scrapy.Spider):
name = 'test_spider'
allowed_domains = ['dvlaregistrations.direct.gov.uk']
start_urls = ['https://dvlaregistrations.dvla.gov.uk/search/results.html?search=CO11CTD"&"action=index"&"pricefrom=0"&"priceto="&"prefixmatches="&"currentmatches="&"limitprefix="&"limitcurrent="&"limitauction="&"searched=true"&"openoption="&"language=en"&"prefix2=Search"&"super="&"super_pricefrom="&"super_priceto='
]
def parse(self, response):
price=response.css('div.resultsstrip p::text').get()
print(price)
print('---+---')
all_prices = response.css('div.resultsstrip p::text')
for element in all_prices:
yield print(element.css('::text').get())
process = CrawlerProcess()
process.crawl(TestSpiderSpider)
process.start()
</code></pre>
<p>When run as a script, this prints a None value, but in the scrapy shell <code>response.css('div.resultsstrip p::text').get()</code> returns '£250', the value I am looking for.</p>
|
<python><scrapy>
|
2023-01-14 19:49:14
| 1
| 741
|
xlmaster
|
75,120,783
| 9,256,321
|
Sum of a sequence in Python
|
<p>Let me present my problem in mathematical notation before diving into the programming aspect.</p>
<p>Let <code>a_n</code> be the sequence whose <code>i</code>th term is defined as <code>i^2 - (i-1)^2</code>. It is easy to see <code>a_i = 2i-1</code>. Hence (in mathematical notation) we have <code>a_n = {2-1, 4-1, ..., 2n-1} = {1, 3, 5, ..., 2n -1}</code>, the sequence of all odd integers in the range <code>[1, 2n]</code>.</p>
<p>On HackerRank, an exercise is to define a function that computes the sum <code>S_n = a_1 + a_2 + ... + a_n</code> and then finds <code>x</code> in the equation <code>S_n = x (mod 10^9 + 7)</code> (still using math notation). So we need to find the equivalence of the sum of all odd integers up to <code>2n</code> in <code>mod 10^9 + 7</code>.</p>
<p>Now, to go into the programming aspect, here's what I attempted:</p>
<pre><code>def summingSeries(n):
# Firstly, an anonimous function computing the ith term in the sequence.
a_i = lambda i: 2*i - 1
# Now, let us sum all elements in the list consisting of
# every a_i for i in the range [1, n].
s_n = sum([a_i(x) for x in range(1, n + 1)])
# Lastly, return the required modulo.
return s_n % (10**9 + 7)
</code></pre>
<p>This function passes <em>some</em>, but not all tests in HackerRank. However, I am oblivious as to what might be wrong with it. Any clues?</p>
<p>Thanks in advance.</p>
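<p>For reference, a closed-form sketch, assuming the failing tests are timeouts on large <code>n</code>: the sum of the first <code>n</code> odd integers is <code>n^2</code>, so no loop is needed:</p>
<pre><code>def summingSeries(n):
    # 1 + 3 + ... + (2n - 1) == n**2, so compute it directly
    return (n * n) % (10**9 + 7)
</code></pre>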
|
<python><math><lambda>
|
2023-01-14 19:47:27
| 1
| 350
|
lafinur
|
75,120,551
| 1,459,607
|
Python hashlib is giving different results
|
<p>For some reason, my code below is giving inconsistent results. The files in <code>files</code> do not ever change. However, <code>hasher.hexdigest()</code> gives a different value each time this function runs. My goal with this code is to generate a new settings file if and only if the checksum/hash in the current settings file does not match the hash of the three settings files computed with <code>hashlib</code>. Does anyone see what I might be doing wrong?</p>
<pre><code>def should_generate_new_settings(qt_settings_generated_path: Path) -> tuple[bool, str]:
""" compare checksum of user_settings.json and the current ini file to what is stored in the currently generated settings file """
generate = False
hasher = hashlib.new('md5')
if not qt_settings_generated_path.exists():
generate = True
try:
# if the file is corrupt, it may have a filesize of 0.
generated_file = qt_settings_generated_path.stat()
if generated_file.st_size < 1:
generate = True
files = [paths.user_settings_path, paths.settings_generated_path, Path(__file__)]
for path in files:
file_contents = path.read_bytes()
hasher.update(file_contents)
with qt_settings_generated_path.open('r') as file:
lines = file.read().splitlines()
checksum_prefix = '# checksum: '
for line in lines:
if line.startswith(checksum_prefix):
file_checksum = line.lstrip(checksum_prefix)
if file_checksum != hasher.hexdigest():
generate = True
break
except FileNotFoundError:
generate = True
return (generate, hasher.hexdigest())
</code></pre>
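<p>One suspect line worth highlighting: <code>str.lstrip(prefix)</code> strips any of the given <em>characters</em> from the left, not the prefix string, so the stored checksum can be mangled before the comparison. A sketch of a safer extraction, reusing the names from the code above:</p>
<pre><code># lstrip treats its argument as a character set; slice off the prefix instead
file_checksum = line[len(checksum_prefix):]
# (on Python 3.9+, line.removeprefix(checksum_prefix) also works)
</code></pre>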
|
<python><md5><hashlib>
|
2023-01-14 19:03:36
| 1
| 1,386
|
Ryan Glenn
|
75,120,494
| 8,584,998
|
Torpy Stream #4: closed already; Connection broken: IncompleteRead(634 bytes read, 3974 more expected)
|
<p>I am trying to use torpy to query Bitcoin balances over tor.</p>
<pre><code>from torpy.http.requests import TorRequests
import json
addr1 = r'34xp4vRoCGJym3xR7yCVPFHoCNxv4Twseo'
addr2 = r'bc1qgdjqv0av3q56jvd82tkdjpy7gdp9ut8tlqmgrpmv24sq90ecnvqqjwvw97'
with TorRequests() as tor_requests:
print("establish circuit")
with tor_requests.get_session() as sess:
for addr in [addr1, addr2]:
val = sess.get(f"https://api.blockcypher.com/v1/btc/main/addrs/{addr}") # https://stackoverflow.com/a/71704333
mydict = json.loads(val.text)
balance = mydict['balance']/(10**8)
print(f'{addr} balance: {balance:.8f} BTC')
</code></pre>
<p>This works the first time I run it; it returns the balances. However, if I run it a second time, I receive a long traceback:</p>
<pre><code>Stream #4: closed (but received b'\xca4{\xee\xa6\xb0K\x8fq\xea\xb9p\x81\tr*\x15\x80\xa9 zA\x08\xe9^u\x9b\xd8V,\xe8\xd2=V\xdd\x12\xe2\x9d\xfdm\xef\xbc\xaf\\9\xeb\xbc\x9f\xaa\xc3XR\x95K\xc9\x0b\xe7\x0bv\xa8:f\xd8\x8cj>\x14\xcao:0XQ\xc8\x7f\xe3{\xfb4`&\xf5\xa6\x9ez\x9e>!\x0c\xa6\xee$&Vs\x1b\x16l\xe7]7\xe4\xb4o\x8f\xcbO\xc5\xd7\xaf\x9f\x8e7\xd8\xe7\xd1\x91\xe0}VBY\xc1W\x1a\xf9)\x04\x0b\x9c\x18\x07~\xc7\x9f\xd8!\xdb^\x8a\xa4h\xb7\xb9\x98\x122\x07\x8ft1\t\x16\xaf\xb2\x05W\xb1U\xd7\xfa[\xcdn\xecR\xd6\xcfo\xd8SgJY\xe4tf~yA\x07f\x83%\xbc\xbd\x04\x92.-\x1dr\xe8\xd4{\xe2|hY\xbf\x00S\xbf\xdd\xdal\x9eY\xa1^\xf42\xc5V\xf4\xa3\x8bd\x90t\xe2m\xbb\x87e0\x956\xb7W\xde\xb1/\xd3\x9e\xf2\xbb4\xd8\x1b\xe3\xd1j8\xf6\x17\xc6^\xcf\nJw\xe0g\xf7\xcb5;\r\x99h\x87\xd2r|\xe7\xc1{\xc1\xc08O-\xc3\xdeo\x7f\xbfc\xcc\x9c\x14\xfa\xd9\x13\xaf0\x1d\xab\x9b\x10\xa75\xd7\xea\x16\x91\xb8l\xb1$\x06nW\xcb\x82\xe3>T\xdf\xc0N\xc9\xc0>\xed\xfaND%\xbe\xbd\xee\xe1\x8don\xc4y\xd8\x9a\x99\xa0\xe1\x8d\n*9n\xaa\xb5/B\xec\xbb\xfbr\x0fK4\xab\xebi,\xcaa\xb1+\xb2RG\xe8\t\xb29w\x1a\xfcC\x91\xb6L\xbd\xa9B\xfc\xf4\x08+\t\xed\x87\xe5\x81 \xad\x9a-\xcaS\x18\xc0\x93\x08]M\x87`\x80?\xc1W\x03\xf1\x94\x01\x17\x8a\x13\xb4\x87\xcd\x99\xf7\xb9\xa2&\x82\xf4\x9b\xf8\x80\xcfc\x02\x16\xf4\x0e\xab\x82\xc9\x0bn\x06U\x10:\x842tRy.\x8eg\x15\x1a\xe1\x89\x00\xd4\xd69\x12\xe5#\x93\xaa\x89\x01Y15YD\x8c/N\xcc\xcf\x97\xfb\x14\x04\x0fe\xc9\xa4)\xee\xe4\x9fO\xd4\xcf\x1ek\x07\x8cq\xf32<m\xa3J\xa7\x80')
Stream #4 is already closed or was never opened (but received CellRelay(inner_cell = CellRelayData(data = ... (498 bytes)), stream_id = 4, digest = b'\xc1\xbb\x92T', circuit_id = 80000001))
Stream #4: closed already
Stream #4: closed already
Traceback (most recent call last):
File "C:\Program Files\Python\Python38\lib\site-packages\urllib3\response.py", line 443, in _error_catcher
yield
File "C:\Program Files\Python\Python38\lib\site-packages\urllib3\response.py", line 818, in read_chunked
chunk = self._handle_chunk(amt)
File "C:\Program Files\Python\Python38\lib\site-packages\urllib3\response.py", line 771, in _handle_chunk
returned_chunk = self._fp._safe_read(self.chunk_left)
File "C:\Program Files\Python\Python38\lib\http\client.py", line 610, in _safe_read
raise IncompleteRead(data, amt-len(data))
http.client.IncompleteRead: IncompleteRead(634 bytes read, 3974 more expected)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Python\Python38\lib\site-packages\requests\models.py", line 753, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "C:\Program Files\Python\Python38\lib\site-packages\urllib3\response.py", line 623, in stream
for line in self.read_chunked(amt, decode_content=decode_content):
File "C:\Program Files\Python\Python38\lib\site-packages\urllib3\response.py", line 844, in read_chunked
self._original_response.close()
File "C:\Program Files\Python\Python38\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Program Files\Python\Python38\lib\site-packages\urllib3\response.py", line 460, in _error_catcher
raise ProtocolError("Connection broken: %r" % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(634 bytes read, 3974 more expected)', IncompleteRead(634 bytes read, 3974 more expected))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Examples\Python\torpy\check_btc_addr.py", line 50, in <module>
val = sess.get(f"https://api.blockcypher.com/v1/btc/main/addrs/{addr}") # https://stackoverflow.com/a/71704333
File "C:\Program Files\Python\Python38\lib\site-packages\requests\sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "C:\Program Files\Python\Python38\lib\site-packages\requests\sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "C:\Program Files\Python\Python38\lib\site-packages\requests\sessions.py", line 697, in send
r.content
File "C:\Program Files\Python\Python38\lib\site-packages\requests\models.py", line 831, in content
self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
File "C:\Program Files\Python\Python38\lib\site-packages\requests\models.py", line 756, in generate
raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(634 bytes read, 3974 more expected)', IncompleteRead(634 bytes read, 3974 more expected))
</code></pre>
<p>I don't really understand why it doesn't work if I run it multiple times. Anybody know what's going on here, and how to fix it?</p>
|
<python><blockchain><bitcoin><tor>
|
2023-01-14 18:53:33
| 1
| 1,310
|
EllipticalInitial
|
75,120,329
| 3,247,006
|
Doesn't "__str__()" work properly in "admin.py" for Django Admin?
|
<p>For example, I define <a href="https://docs.djangoproject.com/en/4.1/ref/models/instances/#str" rel="nofollow noreferrer">__str__()</a> in <strong><code>Person</code> model</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "models.py"
from django.db import models
class Person(models.Model):
first_name = models.CharField(max_length=20)
last_name = models.CharField(max_length=20)
def __str__(self): # Here
return self.first_name + " " + self.last_name
</code></pre>
<p>Then, I define <strong><code>Person</code> admin</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
pass
</code></pre>
<p>Then, the full name is displayed in the message and list in <strong>"Change List" page</strong>:</p>
<p><a href="https://i.sstatic.net/VpuF6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VpuF6.png" alt="enter image description here" /></a></p>
<p>But, when I define <code>__str__()</code> in <strong><code>Person</code> admin</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
def __str__(self): # Here
return self.first_name + " " + self.last_name
</code></pre>
<p>Or, when I define <code>__str__()</code> then assign it to <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display" rel="nofollow noreferrer">list_display</a> in <strong><code>Person</code> admin</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
list_display = ('__str__',) # Here
def __str__(self): # Here
return self.first_name + " " + self.last_name
</code></pre>
<p>The full name is not displayed in the message and list in <strong>"Change List" page</strong>:</p>
<p><a href="https://i.sstatic.net/t9h10.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t9h10.png" alt="enter image description here" /></a></p>
<p>So, doesn't <code>__str__()</code> work properly in <strong><code>Person</code> admin</strong>?</p>
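<p>For context, a sketch of the usual pattern: in <code>list_display</code>, the string <code>'__str__'</code> refers to the <em>model's</em> <code>__str__</code>, not the admin's, while display methods defined on the admin receive the model instance as <code>obj</code>:</p>
<pre class="lang-py prettyprint-override"><code>@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
    list_display = ('full_name',)

    def full_name(self, obj):  # obj is the Person instance being listed
        return obj.first_name + " " + obj.last_name
</code></pre>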
|
<python><django><function><django-admin><django-messages>
|
2023-01-14 18:26:33
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
75,120,244
| 580,644
|
Beautifulsoup remove bracket from output
|
<p>I am trying to get html from a web page:</p>
<pre><code>try:
description=hun.select('#description > div.tab-pane-body > div > div > div > table')
except:
description=None
result = {"description":str(description)}
data.append(result)
print(json2xml.Json2xml(data, wrapper="all", pretty=True, attr_type=False).to_xml())
</code></pre>
<p>This works fine, but I have "<code>[<span>Test</span>]</code>" brackets in the output. How can I avoid these brackets from the output?</p>
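<p>A sketch of one way around it: <code>select()</code> returns a <em>list</em> of tags, and <code>str(list)</code> adds the brackets; <code>select_one()</code> returns a single tag (or <code>None</code>):</p>
<pre><code>tag = hun.select_one('#description > div.tab-pane-body > div > div > div > table')
description = str(tag) if tag is not None else None
</code></pre>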
|
<python><python-3.x><beautifulsoup>
|
2023-01-14 18:14:49
| 2
| 2,656
|
Adrian
|
75,120,176
| 5,947,182
|
How do you move mouse with Playwright Python?
|
<p>I'm writing a test code to <strong>check if the mouse is moving in the Playwright browser</strong>. I used <code>pyautogui</code> in this case to locate current mouse position but that might be the problem, so I was wondering if there is a similar method for Playwright Python?</p>
<p>Please have a look at the code below. It prints out the same coordinates for the start and final mouse positions, which means there's no mouse movement. I have tried adding <code>page.mouse.up()</code> and <code>page.mouse.down()</code> before and after the <code>page.mouse.move(100,200)</code> as shown on the official Playwright Docs <a href="https://playwright.dev/python/docs/api/class-mouse" rel="nofollow noreferrer">page</a> but to no avail. <strong>How do you move mouse with Playwright Python?</strong></p>
<pre><code>import pytest
import pyautogui
from playwright.sync_api import sync_playwright
def test_simple_move():
mouse_start_position = pyautogui.position()
print(mouse_start_position)
with sync_playwright() as playwright:
browser = playwright.chromium.launch(headless=False, slow_mo=10)
page = browser.new_page()
page.goto(r"http://www.uitestingplayground.com/")
page.mouse.move(100,200)
mouse_final_position = pyautogui.position()
print(mouse_final_position)
</code></pre>
|
<python><mouse><playwright>
|
2023-01-14 18:04:12
| 1
| 388
|
Andrea
|
75,120,076
| 6,534,818
|
Practically implementing CTCLoss
|
<p>This thread covers some of the nuances about CTC Loss and its unique way of capturing repeated characters and blanks in a sequence: <a href="https://stackoverflow.com/questions/55284586/ctc-what-is-the-difference-between-space-and-blank">CTC: What is the difference between space and blank?</a> but its practical implementation is unclear.</p>
<p>Lets say that I am trying predict these two sequences that correspond to two pictures.</p>
<pre><code>seq_list = ['pizza', 'a pizza']
</code></pre>
<p>and I map their characters to integers for the model with something like:</p>
<pre><code>mapping = {'p': 0,
'i': 1,
'z': 2,
'a': 3,
'blank': 4}
</code></pre>
<p>What do the individual labels look like?</p>
<pre><code>pizza_label = [0, 1, 2, 4, 3] # pizza
a_pizza_label = [3, 0, 1, 2, 4, 3] # a pizza
</code></pre>
<p>Then, what about combining them so that the shapes of the labels are the same for the model? Do we use blank for padding?</p>
<pre><code>pizza_label = [0, 1, 2, 4, 3, 4] # pizza
a_pizza_label = [3, 0, 1, 2, 4, 3] # a pizza
</code></pre>
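<p>For context, a minimal PyTorch sketch (one of the tagged frameworks, so an assumption about the intended library) of how <code>CTCLoss</code> is typically fed: targets contain label indices only, with no blank tokens, and padding/repeats are handled through <code>target_lengths</code> (the space character is dropped here for brevity):</p>
<pre><code>import torch
import torch.nn as nn

T, N, C = 50, 2, 5                                # time steps, batch size, classes incl. blank
log_probs = torch.randn(T, N, C).log_softmax(2)   # stand-in for model output
ctc = nn.CTCLoss(blank=4)

# concatenated targets: "pizza" then "a pizza" (no blanks inside the labels)
targets = torch.tensor([0, 1, 2, 2, 3,
                        3, 0, 1, 2, 2, 3])
target_lengths = torch.tensor([5, 6])
input_lengths = torch.full((N,), T)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
</code></pre>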
|
<python><tensorflow><machine-learning><pytorch><computer-vision>
|
2023-01-14 17:47:37
| 1
| 1,859
|
John Stud
|
75,120,012
| 11,462,274
|
Replace values in specific rows from one DataFrame to another when certain columns have the same values
|
<p>Unlike the other questions, I don't want to create a new column with the new values, I want to use the same column just changing the old values for new ones if they exist.</p>
<p>For a new column I would have:</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
df1 = pd.DataFrame(data = {'Name' : ['Carl','Steave','Julius','Marcus'],
'Work' : ['Home','Street','Car','Airplane'],
'Year' : ['2022','2021','2020','2019'],
'Days' : ['',5,'','']})
df2 = pd.DataFrame(data = {'Name' : ['Carl','Julius'],
'Work' : ['Home','Car'],
'Days' : [1,2]})
df_merge = pd.merge(df1, df2, how='left', on=['Name','Work'], suffixes=('','_'))
print(df_merge)
</code></pre>
<pre class="lang-none prettyprint-override"><code> Name Work Year Days Days_
0 Carl Home 2022 1.0
1 Steave Street 2021 5 NaN
2 Julius Car 2020 2.0
3 Marcus Airplane 2019 NaN
</code></pre>
<p>But what I really want is exactly like this:</p>
<pre class="lang-none prettyprint-override"><code> Name Work Year Days
0 Carl Home 2022 1
1 Steave Street 2021 5
2 Julius Car 2020 2
3 Marcus Airplane 2019
</code></pre>
<p>How can I make such a union?</p>
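<p>A sketch of one way, building on the merge above: fill the original column from the merged helper column and drop the helper in one step:</p>
<pre class="lang-python prettyprint-override"><code># pop() removes 'Days_' and returns it; fillna keeps the old value where no match was found
df_merge['Days'] = df_merge.pop('Days_').fillna(df_merge['Days'])
print(df_merge)
</code></pre>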
|
<python><pandas><dataframe>
|
2023-01-14 17:39:39
| 2
| 2,222
|
Digital Farmer
|
75,119,883
| 17,561,414
|
Json flattening python
|
<p>My goal is to identify which instance is a <code>dict</code>, <code>str</code> or <code>list</code> under the <code>items</code> hierarchy.</p>
<pre><code>def flatten(data):
for i, item in enumerate(data['_embedded']['items']):
if isinstance(item, dict):
print('Item', i, 'is a dict')
elif isinstance(item, list):
print('Item', i, 'is a list')
elif isinstance(item, str):
print('Item', i, 'is a str')
else:
print('Item', i, 'is unknown')
flatten(data)
</code></pre>
<p>Output of this code is:</p>
<pre><code>Item 0 is a dict
Item 1 is a dict
Item 2 is a dict
Item 3 is a dict
Item 4 is a dict
</code></pre>
<p>The desired output should access the keys (<code>identifier</code>, <code>enabled</code>, <code>family</code>) inside the <code>0</code>, <code>1</code>, etc. items.</p>
<p>for a better udnerstnading of the structure of the JSON file please see the image
<a href="https://i.sstatic.net/BfR3r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BfR3r.png" alt="enter image description here" /></a></p>
|
<python><json><flatten><json-flattener>
|
2023-01-14 17:21:28
| 1
| 735
|
Greencolor
|
75,119,644
| 7,197,067
|
Synth of AWS gateway load balancer in python CDK is failing
|
<p>I am trying to create an AWS Gateway Load Balancer configuration in AWS CDK (python). I already have a working version in Cloud Formation. The synth step is failing, seemingly, because CDK is not recognizing a "list" as a Sequence.</p>
<p>Below is the key bit of python. Note that I'm using L1 constructs since there do not yet seem to be L2 constructs for GWLB.</p>
<pre><code> gwlb = elbv2.CfnLoadBalancer(
self,
"GatewayLoadBalancer",
name=f"GWLB-{self.stack_name}",
type="gateway",
subnets=gwlb_subnet_ids,
scheme="internal",
load_balancer_attributes=[
elbv2.CfnLoadBalancer.LoadBalancerAttributeProperty(
key="load_balancing.cross_zone.enabled", value="true"
)
],
)
gw_endpoint_service = ec2.CfnVPCEndpointService(
self,
"VPCEndpointService",
acceptance_required=False,
gateway_load_balancer_arns=[gwlb.get_att("Arn")],
)
</code></pre>
<p>When I run the synth, I get this error:</p>
<pre><code> File "/Users/pmryan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/typeguard/__init__.py", line 757, in check_type
checker_func(argname, value, expected_type, memo)
File "/Users/pmryan/.pyenv/versions/3.8.12/lib/python3.8/site-packages/typeguard/__init__.py", line 558, in check_union
raise TypeError('type of {} must be one of ({}); got {} instead'.
TypeError: type of argument gateway_load_balancer_arns must be one of (Sequence[str], NoneType); got list instead
</code></pre>
<p>Wondering if this is a CDK bug. In every other CDK construct, I can pass a python list to an argument that expects a Sequence.</p>
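<p>One possible workaround, assuming the type check rejects the <code>IResolvable</code> returned by <code>get_att</code> rather than the list itself: for <code>AWS::ElasticLoadBalancingV2::LoadBalancer</code>, <code>Ref</code> resolves to the load balancer ARN, and <code>gwlb.ref</code> is a plain string token, which should satisfy <code>Sequence[str]</code>:</p>
<pre><code># sketch: pass a string token instead of the IResolvable from get_att
gw_endpoint_service = ec2.CfnVPCEndpointService(
    self,
    "VPCEndpointService",
    acceptance_required=False,
    gateway_load_balancer_arns=[gwlb.ref],  # Ref on this resource type resolves to the ARN
)
</code></pre>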
|
<python><amazon-web-services><aws-cdk><amazon-elb><gateway>
|
2023-01-14 16:50:20
| 1
| 314
|
Pat
|
75,119,478
| 13,359,498
|
ValueError: Dimensions must be equal, but are 4 and 224 for '{{node Equal}} = Equal[T=DT_FLOAT, incompatible_shape_error=true]
|
<p>I am trying to build a neural network, but I am facing problems fitting it.</p>
<p>Data shapes:</p>
<p>X_train = (555, 224, 224, 3)</p>
<p>X_test = (99, 224, 224, 3)</p>
<p>y_train = (555, 4)</p>
<p>y_test = (99, 4)</p>
<p>X_val = (116, 224, 224, 3)</p>
<p>y_val = (116, 4)</p>
<p>Code snippet:</p>
<pre><code>from keras.layers import Conv2D, AveragePooling2D, MaxPooling2D, Flatten, Dense, Concatenate, Input
from keras.layers import BatchNormalization  # needed for the BatchNormalization calls below
from keras import Model
# Defining model input
input_ = Input(shape=(224, 224, 3))
# Defining first parallel layer
in_1 = Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same')(input_)
conv_1 = BatchNormalization()(in_1)
conv_1 = AveragePooling2D(pool_size=(2, 2), strides=(3, 3))(conv_1)
# Defining second parallel layer
in_2 = Conv2D(filters=16, kernel_size=(5, 5), activation='relu', padding='same')(input_)
conv_2 = BatchNormalization()(in_2)
conv_2 = AveragePooling2D(pool_size=(2, 2), strides=(3, 3))(conv_2)
# Defining third parallel layer
in_3 = Conv2D(filters=16, kernel_size=(5, 5), activation='relu', padding='same')(input_)
conv_3 = BatchNormalization()(in_3)
conv_3 = MaxPooling2D(pool_size=(2, 2), strides=(3, 3))(conv_3)
# Defining fourth parallel layer
in_4 = Conv2D(filters=16, kernel_size=(9, 9), activation='relu', padding='same')(input_)
conv_4 = BatchNormalization()(in_4)
conv_4 = MaxPooling2D(pool_size=(2, 2), strides=(3, 3))(conv_4)
# Concatenating layers
concat = Concatenate()([conv_1, conv_2, conv_3, conv_4])
flat = Flatten()(concat)
out = Dense(units=4, activation='softmax')(flat)
# build the model (this line appears to have been omitted from the snippet as posted)
model = Model(inputs=input_, outputs=out)
# tell the model what cost and optimization method to use
from tensorflow.keras.optimizers import Adam
model.compile(
optimizer = Adam(learning_rate=0.00001),
loss='categorical_crossentropy',
metrics=['accuracy']
)
model1 = model.fit(X_train,
y_train,
validation_data = (X_val, y_val),
epochs = 100,
batch_size=batch_size
# callbacks=[es_callback]
)
</code></pre>
<p>error:
<strong>ValueError: Dimensions must be equal but are 4 and 224 for '{{node Equal}} = Equal[T=DT_FLOAT, incompatible_shape_error=true](IteratorGetNext:1, Cast_1)' with input shapes: [?,4], [?,224,224].</strong></p>
|
<python><tensorflow><keras><deep-learning><neural-network>
|
2023-01-14 16:30:02
| 0
| 578
|
Rezuana Haque
|
75,119,381
| 9,974,205
|
Problem Following Web Scraping Tutorial Using Python
|
<p>I am following this <a href="https://www.makeuseof.com/python-scrape-web-images-how-to/#python-package-set-up" rel="nofollow noreferrer">web scraping tutorial</a> and I am getting an error.</p>
<p>My code is as follows:</p>
<pre><code>import requests
URL = "http://books.toscrape.com/" # Replace this with the website's URL
getURL = requests.get(URL, headers={"User-Agent":"Mozilla/5.0"})
print(getURL.status_code)
from bs4 import BeautifulSoup
soup = BeautifulSoup(getURL.text, 'html.parser')
images = soup.find_all('img')
print(images)
imageSources=[]
for image in images:
imageSources.append(image.get("src"))
print(imageSources)
for image in imageSources:
webs=requests.get(image)
open("images/"+image.split("/")[-1], "wb").write(webs.content)
</code></pre>
<p>Unfortunately, I am getting an error in the line <code>webs=requests.get(image)</code>, which is as follows:</p>
<p><code>MissingSchema: Invalid URL 'media/cache/2c/da/2cdad67c44b002e7ead0cc35693c0e8b.jpg': No schema supplied. Perhaps you meant http://media/cache/2c/da/2cdad67c44b002e7ead0cc35693c0e8b.jpg?</code></p>
<p>I am totally new to web scraping and I don't know what this means. Any suggestion is appreciated.</p>
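<p>The error means the <code>src</code> values are relative URLs. A sketch of resolving them against the page URL with the standard library:</p>
<pre><code>from urllib.parse import urljoin

for image in imageSources:
    full_url = urljoin(URL, image)  # turns 'media/cache/...' into an absolute http URL
    webs = requests.get(full_url)
    with open("images/" + full_url.split("/")[-1], "wb") as f:
        f.write(webs.content)
</code></pre>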
|
<python><image><web-scraping><url><beautifulsoup>
|
2023-01-14 16:17:51
| 1
| 503
|
slow_learner
|
75,119,303
| 10,924,836
|
Plotting density chart
|
<p>I am trying to plot a density chart. Below you can see the data and the code.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data = {'type_sale':[100,200,400,400,200,400,300,200,210,300],
'bool':[0,1,0,1,1,0,1,1,0,1],
}
df1 = pd.DataFrame(data, columns = ['type_sale',
'bool'])
df1['bool']= df1['bool'].astype('int32')
</code></pre>
<p>I tried the command below, but it is not working. Can anybody help me solve this problem?</p>
<pre><code>plot_density_chart(df1[['type_sale', 'bool']], "bool", 'type_sale',
category_var="type_sale", title='prevalence',
xlabel='Type_sale', logx="Yes", vline=None,
save_figure_name = 'type_sale_prevalence.pdf')
</code></pre>
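<p>Since <code>plot_density_chart</code> is not defined in the snippet, here is a minimal sketch with plain pandas/matplotlib (the KDE requires scipy), drawing one density curve of <code>type_sale</code> per <code>bool</code> value:</p>
<pre><code>for flag, grp in df1.groupby('bool'):
    grp['type_sale'].plot.density(label=f'bool={flag}')
plt.xlabel('Type_sale')
plt.legend()
plt.show()
</code></pre>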
|
<python><matplotlib>
|
2023-01-14 16:05:26
| 1
| 2,538
|
silent_hunter
|
75,119,153
| 10,829,044
|
pandas transform n columns to n/3 columns and n/3 rows
|
<p>I have a dataframe like as shown below</p>
<pre><code>data = {
'key':['k1','k2'],
'name_M1':['name', 'name'],'area_M1':[1,2],'length_M1':[11,21],'breadth_M1':[12,22],
'name_M2':['name', 'name'],'area_M2':[1,2],'length_M2':[11,21],'breadth_M2':[12,22],
'name_M3':['name', 'name'],'area_M3':[1,2],'length_M3':[11,21],'breadth_M3':[12,22],
'name_M4':['name', 'name'],'area_M4':[1,2],'length_M4':[11,21],'breadth_M4':[12,22],
'name_M5':['name', 'name'],'area_M5':[1,2],'length_M5':[11,21],'breadth_M5':[12,22],
'name_M6':['name', 'name'],'area_M6':[1,2],'length_M6':[11,21],'breadth_M6':[12,22],
}
df = pd.DataFrame(data)
</code></pre>
<p>Input data looks like below in wide format</p>
<p><a href="https://i.sstatic.net/qU699.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qU699.png" alt="enter image description here" /></a></p>
<p>I would like to convert it into a time-based long format like below. We call it time-based because you can see that each row has 3 months of data, and the subsequent rows are shifted by 1 month.</p>
<p>ex: sample shape of data looks like below (with only one column for each month)</p>
<pre><code>k1,Area_M1,Area_M2,Area_M3,Area_M4,Area_M5,Area_M6
</code></pre>
<p>I would like to convert it like below (subsequent rows are shifted by one month)</p>
<pre><code>k1,Area_M1,Area_M2,Area_M3
K1,Area_M2,Area_M3,Area_M4
K1,Area_M3,Area_M4,Area_M5
K1,Area_M4,Area_M5,Area_M6
</code></pre>
<p>But in the real data, instead of one column for each month, I have multiple columns for each month. So we need to convert/transform all those columns. I tried something like below, but it doesn't work:</p>
<pre><code>pd.wide_to_long(df, stubnames=["name_1st","area_1st","length_first","breadth_first",
"name_2nd","area_2nd","length_2nd","breadth_2nd",
"name_3rd","area_3rd","length_3rd","breadth_3rd"],
i="key", j="name",
sep="_", suffix=r"(?:\d+|n)").reset_index()
</code></pre>
<p>But I expect my output to be like as below</p>
<p><a href="https://i.sstatic.net/lMsTP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lMsTP.png" alt="enter image description here" /></a></p>
<p><strong>updated error screenshot</strong></p>
<p><a href="https://i.sstatic.net/8KUSD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8KUSD.png" alt="enter image description here" /></a></p>
<p><strong>Updated error screenshot</strong></p>
<p><a href="https://i.sstatic.net/q1YWT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q1YWT.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe><pivot><transformation>
|
2023-01-14 15:44:55
| 1
| 7,793
|
The Great
|
75,119,141
| 19,009,577
|
Get list of tuples of stacked for loop values
|
<p>While trying to speed up:</p>
<pre><code>l = some_value
for i in range(1000):
for j in range(1000):
for k in range(1000):
function(i, j, k, l)
</code></pre>
<p>I stumbled upon <code>multiprocessing.Pool().starmap()</code>, however it requires the iterated values to be passed in as an iterator <code>[(0, 0, 0), (1, 0, 0), ...]</code>. Is there a fast way to get this list of tuples containing the for loop values?</p>
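<p>A sketch using the standard library, reusing <code>function</code> and <code>l</code> from above; <code>itertools.product</code> yields the index combinations without nested loops (note that a full 1000³ grid is a billion tuples, so smaller ranges or chunking may be needed in practice):</p>
<pre><code>from itertools import product
from multiprocessing import Pool

args = ((i, j, k, l) for i, j, k in product(range(1000), repeat=3))
with Pool() as pool:
    pool.starmap(function, args, chunksize=10_000)
</code></pre>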
|
<python><for-loop><multiprocessing><python-multiprocessing>
|
2023-01-14 15:43:17
| 1
| 397
|
TheRavenSpectre
|
75,119,070
| 1,849,163
|
How to get the last date of the current week or quarter in python?
|
<p>I would like to find a simple way to get the last date of the current week or quarter.</p>
<p>To get the last date of the current month I can use the <code>relativedelta</code> function from <code>dateutil</code>:</p>
<pre><code>import pandas as pd
today_date = pd.Timestamp.today().date() #get today's date
from dateutil.relativedelta import relativedelta
current_month_last_date = today_date + relativedelta(day=31) #get last date of current month
</code></pre>
<p>What is an equivalent way to get the last date of the current week or quarter?</p>
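<p>A sketch using pandas periods, which cover both cases (the week period ends on Sunday by default):</p>
<pre><code>today = pd.Timestamp.today()
current_week_last_date = pd.Period(today, freq='W').end_time.date()
current_quarter_last_date = pd.Period(today, freq='Q').end_time.date()
</code></pre>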
|
<python><python-datetime>
|
2023-01-14 15:30:38
| 1
| 351
|
econlearner
|
75,119,036
| 275,002
|
How to export loads of MysQL record in CSV in chunks in Python?
|
<p>To clarify, I am on a shared hosting plan, so <code>mysqldump</code> and <code>OUTFILE</code> are not available to me, at least on NameCheap hosting. I am using the following code:</p>
<pre><code>def fetch(connection, id_count, offset):
records = None
try:
if connection is not None:
with connection.cursor(dictionary=True) as cursor:
sql = "SELECT * FROM options_data WHERE id > {} LIMIT {},3000".format(id_count, offset)
print(sql)
cursor.execute(sql, records)
records = cursor.fetchall()
connection.commit()
except Exception as ex:
print('Exception in Store')
print(ex)
finally:
return records
</code></pre>
<pre><code>def get_connection(host, user, password, db_name):
connection = None
try:
connection = mysql.connector.connect(
host=host,
user=user,
use_unicode=True,
password=password,
database=db_name
)
connection.set_charset_collation('utf8')
print('Connected')
except Exception as ex:
print(str(ex))
finally:
return connection
</code></pre>
<p>and then</p>
<pre><code>connection = get_connection(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME)
if connection is None:
print('Unable to connect MySQL Server')
exit()
result = []
final_rercords = []
idx = 0
for x in range(0, 37206, 3000):
idx += 1
records = fetch(connection, x, x)
for r in records:
result.append(r['instrument_name'])
result.append(str(r['m']))
result.append(str(r['p']))
result.append(str(r['e']))
result.append(str(r['ts'].strftime('%Y-%m-%d %H:%m:%S')))
final_rercords.append(','.join(result))
with open('{}_options.csv'.format(idx), 'w', encoding='utf8') as f:
f.write('\n'.join(final_rercords))
f.write('\n')
with open('{}_options.csv'.format(idx), 'rb') as f_in:
with gzip.open('{}_options.csv.gz'.format(idx), 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
print('{}_options.csv.gz created'.format(idx))
sleep(1)
</code></pre>
<p>I noticed that it is not the SQL query but the process of iterating over the records and then dumping them into a CSV file that is taking both time and memory. On NameCheap the script gets killed for exceeding resource limits.</p>
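<p>Two details worth noting in the snippet above: <code>result</code> is created once outside the loops, so every appended row re-joins all previous values, which grows memory quadratically; and <code>'%H:%m:%S'</code> uses the month code <code>%m</code> where minutes <code>%M</code> were probably intended. A streaming sketch with the standard <code>csv</code> and <code>gzip</code> modules, reusing <code>records</code> and <code>idx</code> from the loop above:</p>
<pre><code>import csv
import gzip

# write each chunk straight to a gzipped CSV instead of accumulating in memory
with gzip.open('{}_options.csv.gz'.format(idx), 'wt', encoding='utf8', newline='') as f_out:
    writer = csv.writer(f_out)
    for r in records:
        writer.writerow([r['instrument_name'], r['m'], r['p'], r['e'],
                         r['ts'].strftime('%Y-%m-%d %H:%M:%S')])
</code></pre>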
|
<python><namecheap>
|
2023-01-14 15:24:18
| 1
| 15,089
|
Volatil3
|
75,118,894
| 20,266,647
|
Issue with dynamic allocation in PySpark session (under MLRun and in K8s)
|
<p>I would like to maximize the power of the Spark cluster in the MLRun solution for my calculations, and I used these session settings for the Spark cluster in the MLRun solution (it runs under a Kubernetes cluster):</p>
<pre><code>spark = SparkSession.builder.appName('Test-Spark') \
.config("spark.dynamicAllocation.enabled", True) \
.config("spark.shuffle.service.enabled", True) \
.config("spark.executor.memory", "12g") \
.config("spark.executor.cores", "4") \
.config("spark.dynamicAllocation.enabled", True) \
.config("spark.dynamicAllocation.minExecutors", 3) \
.config("spark.dynamicAllocation.maxExecutors", 6) \
.config("spark.dynamicAllocation.initialExecutors", 5)\
.getOrCreate()
</code></pre>
<p>The issue is that I cannot utilize the cluster's full power; in many cases only 1, 2 or 3 executors with a small number of cores are used.</p>
<p>Do you know how to get the Spark session to use more resources/performance (it seems that dynamic allocation does not work correctly with MLRun &amp; K8s &amp; Spark)?</p>
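<p>For reference, a sketch of one commonly cited adjustment, assuming Spark 3.0+ on Kubernetes: the external shuffle service is not available on K8s, so dynamic allocation is usually paired with shuffle tracking instead:</p>
<pre><code>spark = SparkSession.builder.appName('Test-Spark') \
    .config("spark.dynamicAllocation.enabled", True) \
    .config("spark.dynamicAllocation.shuffleTracking.enabled", True) \
    .config("spark.dynamicAllocation.minExecutors", 3) \
    .config("spark.dynamicAllocation.maxExecutors", 6) \
    .getOrCreate()
</code></pre>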
|
<python><apache-spark><kubernetes><dynamic-memory-allocation><mlrun>
|
2023-01-14 15:03:46
| 1
| 1,390
|
JIST
|
75,118,729
| 17,160,160
|
Python. DFS graph traversal, correct output?
|
<p>I'm currently getting to grips with graph traversal in Python.</p>
<p>Given the following graph:</p>
<p><a href="https://i.sstatic.net/aCStw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aCStw.png" alt="enter image description here" /></a></p>
<p>Implemented using this dictionary:</p>
<pre><code>
graph = {'0': set(['1', '2', '3']),
'1': set(['0','2']),
'2': set(['0','1','4']),
'3': set(['0']),
'4': set(['2'])}
</code></pre>
<p>Am I correct in thinking a depth first search traversal beginning from node 0 should return <code>[0,1,2,4,3]</code>?</p>
<p>My dfs function returns <code>[0,3,1,2,4]</code> and so I am wondering if I have something wrong in my implementation:</p>
<pre><code>def dfs(graph, node,visited=None):
if visited is None:
visited=set()
if node not in visited:
print (node,end=' ')
visited.add(node)
for neighbour in graph[node]:
dfs(graph,neighbour,visited=visited)
dfs(graph,'0')
</code></pre>
<p>Help and advice appreciated.</p>
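<p>One point worth illustrating: both orders are valid depth-first traversals. The graph uses <code>set</code>s, whose iteration order is arbitrary, so the neighbour visit order is not deterministic. A sketch with lists for a reproducible order:</p>
<pre><code>graph = {'0': ['1', '2', '3'],
         '1': ['0', '2'],
         '2': ['0', '1', '4'],
         '3': ['0'],
         '4': ['2']}

dfs(graph, '0')  # now prints 0 1 2 4 3 deterministically
</code></pre>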
|
<python><graph-theory><depth-first-search>
|
2023-01-14 14:38:12
| 2
| 609
|
r0bt
|
75,118,691
| 1,506,145
|
How to convert a string to a numpy matrix? Inverse of numpy.array_str?
|
<p>I have a string I want to convert to a 2d numpy matrix. I created it by using <code>numpy.array_str</code>.</p>
<pre class="lang-py prettyprint-override"><code>
s = '[[ 82. 0. 0. 17.]\n [ 72. 0. 0. 30.]\n [ 79. 0. 0. 131.]\n [ 72. 0. 0. 27.]]'
np.array(s)
np.fromstring(s)
</code></pre>
<p>However, none of the two methods work. <code>np.array</code> just returns the string as a numpy array and <code>np.fromstring</code> gives the error message: <code>string size must be a multiple of element size</code>.</p>
<p>Got any tips what to do? Is there an "inverse" to <code>np.array_str</code>?</p>
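<p>One minimal sketch that parses the bracketed string back, stripping the brackets and letting <code>split()</code> handle the whitespace:</p>
<pre class="lang-py prettyprint-override"><code>rows = s.replace('[', ' ').replace(']', ' ').splitlines()
a = np.array([row.split() for row in rows if row.strip()], dtype=float)
print(a.shape)  # (4, 4)
</code></pre>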
|
<python><numpy>
|
2023-01-14 14:33:28
| 1
| 5,316
|
user1506145
|
75,118,636
| 293,995
|
Open Chromium with selenium in read-mode
|
<p>My goal is to be able to get the main text content of a webpage without anything else.
Firefox does this with its Reader View feature. It seems that Chrome has this as an experimental feature. Despite activating the feature from code, the icon doesn't show up.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service as ChromiumService
from webdriver_manager.chrome import ChromeDriverManager
from webdriver_manager.core.utils import ChromeType
chrome_options = Options()
chrome_options.add_argument("--reader-mode=true")
chrome_options.add_argument("--enable-reader-mode")
driver = webdriver.Chrome(service=ChromiumService(ChromeDriverManager(chrome_type=ChromeType.CHROMIUM).install()), options=chrome_options)
driver
url = "https://www.google.com"
driver.get(url)
print(driver.page_source)
driver.quit()
</code></pre>
<p>The command line information <em>chrome://version/</em> in are showing the parameters :</p>
<pre><code>Command Line
/Applications/Google Chrome.app/Contents/MacOS/Google Chrome --allow-pre-commit-input --disable-background-networking --disable-client-side-phishing-detection --disable-default-apps --disable-hang-monitor --disable-popup-blocking --disable-prompt-on-repost --disable-sync --enable-automation --enable-blink-features=ShadowDOMV0 --enable-logging --enable-reader-mode --log-level=0 --no-first-run --no-service-autorun --password-store=basic --reader-mode=true --remote-debugging-port=0 --test-type=webdriver --use-mock-keychain --user-data-dir=/var/folders/qx/y0sp0wpn5g524st9yn6f3cdm0000gn/T/.com.google.Chrome.L3pFOZ --flag-switches-begin --flag-switches-end
</code></pre>
<p><em>chrome://flags/</em> doesn't show the reader mode activated. Even if I activate it manually and restart, the reader mode icon is not shown, neither in the Chromium binaries nor in Chrome on Mac.</p>
<p>Is this feature still exists? If yes, how do I use it from selenium?</p>
|
<python><selenium-chromedriver>
|
2023-01-14 14:23:56
| 1
| 2,631
|
hotips
|
75,118,581
| 5,684,405
|
Installing black with pipx does not install the dependency aiohttp
|
<p>I've installed <code>pipx</code> with <code>brew</code> and then <code>black</code> with <code>pipx</code>:</p>
<pre class="lang-bash prettyprint-override"><code>$ brew install pipx
...
$ pipx install black
...
$ pipx list
venvs are in /Users/mc/.local/pipx/venvs
apps are exposed on your $PATH at /Users/mc/.local/bin
package black 22.12.0, installed using Python 3.11.1
- black
- blackd
</code></pre>
<p>However, I keep getting an import error when running <code>blackd</code></p>
<pre class="lang-bash prettyprint-override"><code>$ /Users/mc/.local/bin/blackd
Traceback (most recent call last):
File "/Users/mc/.local/bin/blackd", line 5, in <module>
from blackd import patched_main
File "/Users/mc/.local/pipx/venvs/black/lib/python3.11/site-packages/blackd/__init__.py", line 14, in <module>
raise ImportError(
ImportError: aiohttp dependency is not installed: No module named 'aiohttp'. Please re-install black with the '[d]' extra install to obtain aiohttp_cors: `pip install black[d]`
</code></pre>
<p>How to fix this? Why is <code>pipx</code> not installing the required dependency aiohttp_cors?</p>
<p>Also, why does it use Python <code>3.11.1</code> when my system Python is <code>3.9.6</code>?</p>
<pre class="lang-bash prettyprint-override"><code>$ python3 --version
Python 3.9.6
</code></pre>
<p>Doing as advised by @KarlKnechtel below:</p>
<pre><code>$ brew install python@3.10
...
==> Pouring python@3.10--3.10.9.arm64_monterey.bottle.tar.gz
...
Python has been installed as
/opt/homebrew/bin/python3
Unversioned symlinks `python`, `python-config`, `pip` etc. pointing to
`python3`, `python3-config`, `pip3` etc., respectively, have been installed into
/opt/homebrew/opt/python@3.10/libexec/bin
...
</code></pre>
<p>I then get:</p>
<pre><code>$ python3 --version
Python 3.10.9
$brew list python python3
...
/opt/homebrew/Cellar/python@3.10/3.10.9/bin/pip3
/opt/homebrew/Cellar/python@3.10/3.10.9/bin/pip3.10
...
/opt/homebrew/Cellar/python@3.10/3.10.9/bin/python3
/opt/homebrew/Cellar/python@3.10/3.10.9/bin/python3.10
...
</code></pre>
<p>but still, when I install <code>black</code> it installs python <code>3.11</code>:</p>
<pre class="lang-bash prettyprint-override"><code>$pipx install black[d]
zsh: no matches found: black[d]
$pipx install black
installed package black 22.12.0, installed using Python 3.11.1
These apps are now globally available
- black
- blackd
done! ✨ 🌟 ✨
</code></pre>
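<p>Note on the <code>zsh: no matches found</code> line above: zsh treats the square brackets as a glob pattern, so the extras spec needs quoting, e.g. <code>pipx install 'black[d]'</code>, which installs black together with the aiohttp extra that <code>blackd</code> needs.</p>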
|
<python><python-3.x><macos><pipx>
|
2023-01-14 14:16:57
| 2
| 2,969
|
mCs
|
75,118,425
| 3,247,006
|
How to display values in multiple lines by indentation in Django Admin?
|
<p>I have <strong><code>Person</code> model</strong> below:</p>
<pre class="lang-py prettyprint-override"><code># "models.py"
from django.db import models
class Person(models.Model):
first_name = models.CharField(max_length=20)
last_name = models.CharField(max_length=20)
</code></pre>
<p>Then, I put <code>\n</code> between <code>obj.first_name</code> and <code>obj.last_name</code> as shown below to display the first name and last name on 2 separate lines:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
list_display = ('person',)
def person(self, obj): # ↓↓ Here
return obj.first_name + "\n" + obj.last_name
</code></pre>
<p>But, the first name and last name were displayed on one line, as shown below:</p>
<p><a href="https://i.sstatic.net/bfl1A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bfl1A.png" alt="enter image description here" /></a></p>
<pre class="lang-none prettyprint-override"><code>John Smith # One line
</code></pre>
<p>So, how can I display the first name and last name on 2 separate lines, as shown below:</p>
<pre class="lang-none prettyprint-override"><code>John # 1st line
Smith # 2nd line
</code></pre>
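<p>A sketch of the usual approach: the admin escapes HTML and the browser collapses plain <code>\n</code>, so a <code>&lt;br&gt;</code> rendered through <code>format_html</code> is typically used instead:</p>
<pre class="lang-py prettyprint-override"><code>from django.utils.html import format_html

@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
    list_display = ('person',)

    def person(self, obj):
        # format_html escapes the values but keeps the <br> tag intact
        return format_html('{}<br>{}', obj.first_name, obj.last_name)
</code></pre>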
|
<python><django><django-admin><admin><indentation>
|
2023-01-14 13:51:27
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
75,118,407
| 1,259,374
|
How do I get the next X day of the week
|
<p>So I have this <code>function</code> that retrieves the <code>date</code> a given number of days from today:</p>
<pre><code>def get_date_from_today(d):
tomorrow = datetime.date.today() + datetime.timedelta(days=d)
return tomorrow.strftime("%Y/%m/%d")
</code></pre>
<p>How do I get, for example, the date of the next <code>Thursday</code>?</p>
<p>If today is <code>Thursday</code> I want to get the current <code>date</code>, and if not I want the closest upcoming <code>Thursday</code> date.</p>
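<p>A sketch in the same style as the function above; Thursday is weekday 3, and the modulo makes today count as the closest match when it already is that weekday:</p>
<pre><code>def get_next_weekday(weekday):  # Monday=0 ... Thursday=3 ... Sunday=6
    today = datetime.date.today()
    days_ahead = (weekday - today.weekday()) % 7  # 0 when today is that weekday
    return (today + datetime.timedelta(days=days_ahead)).strftime("%Y/%m/%d")

next_thursday = get_next_weekday(3)
</code></pre>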
|
<python><datetime><weekday>
|
2023-01-14 13:48:20
| 2
| 1,139
|
falukky
|
75,118,399
| 20,443,541
|
How to set up a Tor-Server (Hidden Service) as a proxy?
|
<p>The goal is being able to access the proxy anonymously, such that the host (proxy) doesn't know where the request came from (of course with credentials).</p>
<p>The client should be able to access <code>www.example.com</code> over the host's IP, without the host knowing the client's IP.</p>
<p>Here's a example request route to <code>www.example.com</code>:
<a href="https://i.sstatic.net/inDYD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/inDYD.png" alt="![Network-route" /></a></p>
<ul>
<li>How would I <strong>hookup a browser</strong> to it?</li>
<li>How would I connect using Python? (something proxy-chain like?)</li>
</ul>
<p>Note: OS doesn't depend, programming language preferably Python</p>
<p>EDIT:</p>
<ul>
<li>The Client in my case should be able to specify:
<ul>
<li>Headers</li>
<li>request-method</li>
<li>site-url</li>
</ul>
</li>
<li>For the request which the hidden-service makes (so basically a proxy)</li>
</ul>
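<p>On the Python side, a common sketch is to talk to the local Tor SOCKS port with <code>requests</code> (needs <code>pip install requests[socks]</code>); the onion address below is a placeholder for the hidden service:</p>
<pre><code>import requests

proxies = {
    "http":  "socks5h://127.0.0.1:9050",   # socks5h resolves .onion names through Tor
    "https": "socks5h://127.0.0.1:9050",
}
r = requests.get("http://youronionaddress.onion/", proxies=proxies, auth=("user", "pass"))
</code></pre>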
|
<python><proxy><tor>
|
2023-01-14 13:46:39
| 2
| 1,159
|
kaliiiiiiiii
|
75,118,159
| 16,872,314
|
Generate specific Toeplitz covariance matrix
|
<p>I want to generate a statistical sample from a multidimensional normal distribution. For this I need to generate one specific kind of covariance matrix:</p>
<pre><code>1 0.99 0.98 0.97 ...
0.99 1 0.99 0.98 ...
0.98 0.99 1 0.99 ...
0.97 0.98 0.99 1 ...
... ... ... ...
</code></pre>
<p>Is there a way to generate this kind of matrix for multiple dimensions easily, without writing it by hand? (I need to have matrices with 50-100 dimensions, so doing it by hand is very tedious.)</p>
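<p>A sketch with <code>scipy.linalg.toeplitz</code>, which builds the whole matrix from its first column:</p>
<pre><code>import numpy as np
from scipy.linalg import toeplitz

n = 50
cov = toeplitz(1 - 0.01 * np.arange(n))   # first column: 1, 0.99, 0.98, ...
sample = np.random.multivariate_normal(np.zeros(n), cov)
</code></pre>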
|
<python><numpy><matrix><scipy><toeplitz>
|
2023-01-14 13:06:37
| 2
| 720
|
Marcello Zago
|
75,118,153
| 11,824,828
|
Iterate dataframe and sum transactions by condition
|
<p>I have the following sample of data:</p>
<pre><code> id year type num
1 1994 A 0
2 1950 A 2333
3 1977 B 4444
4 1995 B 555
1 1994 A 0
6 1955 A 333
7 2006 B 4123
6 1975 A 0
9 1999 B 123
3 1950 A 1234
</code></pre>
<p>I'm looking for the easiest way to sum column 'num' based on the conditions type == 'A' and year < 1999.</p>
<p>I'm iterating through the dataframe df with the data:</p>
<pre><code> data = pd.read_csv('data.csv')
df = pd.DataFrame(data)
df_sum = pd.DataFrame
for index, row in df.iterrows():
if row['type'] == 'A' and row['year'] < 1999:
        df_sum = df_sum.append(row)  # This doesn't work
</code></pre>
<p>and trying to store the rows that match the conditions into df_sum, where I'd build the summarized num by id. I have no idea how to iterate and store the data matching the condition into a new dataframe.</p>
<p>The desired output would be:</p>
<pre><code>id num_sum
1 0
2 2333
6 333
.....
</code></pre>
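<p>For reference, a vectorised sketch without <code>iterrows</code>, using boolean indexing plus <code>groupby</code>:</p>
<pre><code>df_sum = (df.loc[(df['type'] == 'A') & (df['year'] < 1999)]
            .groupby('id', as_index=False)['num'].sum()
            .rename(columns={'num': 'num_sum'}))
</code></pre>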
|
<python><pandas><dataframe>
|
2023-01-14 13:06:04
| 2
| 325
|
vloubes
|
75,118,111
| 4,321,525
|
How to iterate over a numpy array, getting two values per loop?
|
<p>I envision something like</p>
<pre><code>import numpy as np
x = np.arange(10)
for i, j in x:
print(i,j)
</code></pre>
<p>and get something like</p>
<pre><code>0 1
2 3
4 5
6 7
8 9
</code></pre>
<p>But I get this traceback:</p>
<pre><code>Traceback (most recent call last):
File "/home/andreas/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/223.8214.51/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 1, in <module>
TypeError: cannot unpack non-iterable numpy.int64 object
</code></pre>
<p>I also tried to use <code>np.nditer(x)</code> and <code>itertools</code> with <code>zip(x[::2], x[1::2])</code>, but that does not work either, with different error messages.</p>
<p>This should be super simple, but I can't find solutions online.</p>
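<p>A sketch: reshaping into rows of two gives exactly the pairwise unpacking (assuming the array length is even):</p>
<pre><code>x = np.arange(10)
for i, j in x.reshape(-1, 2):   # each row holds one consecutive pair
    print(i, j)
</code></pre>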
|
<python><numpy><loops>
|
2023-01-14 13:00:21
| 3
| 405
|
Andreas Schuldei
|
75,117,522
| 18,806,499
|
E1101: Module 'mysql.connector' has no 'errors' member (no-member)
|
<p>I have written a Python program and it's working fine on my computer, but when I try to lint it with pylint I get this error: <code>E1101: Module 'mysql.connector' has no 'errors' member (no-member)</code></p>
<p>I have pieces of code like this in my code:</p>
<pre><code>try:
...
except mysql.connector.errors.DatabaseError:
...
</code></pre>
<p>To catch MySQL errors. And, I repeat, it works fine on my computer, but it is an error in the GitHub Actions pipeline.
Here is the pipeline, by the way:</p>
<pre><code>name: Pylint
on: [push]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10"]
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install pylint
- name: Analysing the code with pylint
run: |
pylint $(git ls-files '*.py')
</code></pre>
<p>What am I missing?</p>
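<p>One commonly used way to silence this kind of dynamic-member false positive, sketched for a <code>.pylintrc</code> (assuming the member really exists at runtime, as it does here):</p>
<pre><code>[TYPECHECK]
generated-members=mysql.connector.*
</code></pre>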
|
<python><github-actions><pylint>
|
2023-01-14 11:15:26
| 0
| 305
|
Diana
|
75,117,517
| 15,724,084
|
python spider scrapy cannot launch the code
|
<p>I used Selenium before, but now the client needs the Scrapy framework to be used in his project.</p>
<p>I have read and watched tutorials and got as far as writing a first request spider, but I need some more assistance.</p>
<pre><code>import scrapy
class QuotesSpider(scrapy.Spider):
name = 'quotes'
plate_num = "EA66LEE"
start_urls = [
f'https://dvlaregistrations.dvla.gov.uk/search/results.html?search={plate_num}&action=index&pricefrom=0&priceto=&prefixmatches=&currentmatches=&limitprefix=&limitcurrent=&limitauction=&searched=true&openoption=&language=en&prefix2=Search&super=&super_pricefrom=&super_priceto='
,
]
def parse(self, response):
for quote in response.xpath('div[@class="resultsstrip"]/a/p'):
yield {
'plate number': plate_num,
'price': quote.xpath('div[@class="resultsstrip"]/a/p[@class="resultsstripprice"/text()]').get(),
}
</code></pre>
<p>I want to scrape the <a href="https://dvlaregistrations.dvla.gov.uk/search/results.html?search=EA66LEE&action=index&pricefrom=0&priceto=&prefixmatches=&currentmatches=&limitprefix=&limitcurrent=&limitauction=&searched=true&openoption=&language=en&prefix2=Search&super=&super_pricefrom=&super_priceto=" rel="nofollow noreferrer">url</a>; if the plate number exists, then grab the <p> price tag web element.</p>
<pre><code><a id="buy_EA66LEE" class="resultsstripplate plate" href="/buy.html?plate=EA66 LEE&amp;price=999" title="Buy now">EA66 LEE </a>
<p class="resultsstripprice">£999</p>
</code></pre>
<p>Even from the terminal I cannot get the right values from the XPath: <code> response.xpath('div/a/p/text()').get()</code></p>
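<p>For illustration, a sketch of the selector with an absolute path (a leading <code>//</code> is needed from the response root) and <code>text()</code> moved out of the predicate brackets, using the class names from the markup above:</p>
<pre><code>def parse(self, response):
    prices = response.xpath('//div[@class="resultsstrip"]/a/p[@class="resultsstripprice"]/text()').getall()
    for price in prices:
        yield {'plate number': self.plate_num, 'price': price.strip()}
</code></pre>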
|
<python><scrapy>
|
2023-01-14 11:15:02
| 1
| 741
|
xlmaster
|
75,117,431
| 3,591,044
|
Splitting string on several delimiters without considering new line
|
<p>I have a string representing conversation turns as follows:</p>
<pre><code>s = "person alpha:\nHow are you today?\n\nperson beta:\nI'm fine, thank you.\n\nperson alpha:\nWhat's up?\n\nperson beta:\nNot much, just hanging around."
</code></pre>
<p>In plain text, it looks as follows.</p>
<pre><code>person alpha:
How are you today?

person beta:
I'm fine, thank you.

person alpha:
What's up?

person beta:
Not much, just hanging around.
</code></pre>
<p>Now, I would like to split the string on <code>person alpha</code> and <code>person beta</code>, so that the resulting list looks as follows:</p>
<p>["person alpha:\nHow are you today?", "person beta:\nI'm fine, thank you.", "person alpha:\nWhat's up?", "person beta:\nNot much, just hanging around."]</p>
<p>I have tried the following approach</p>
<pre><code>import re
res = re.split('person alpha |person beta |\*|\n', s)
</code></pre>
<p>But the results is as follows:</p>
<pre><code>['person alpha:', 'How are you today?', '', 'person beta:', "I'm fine, thank you.", '', 'person alpha:', "What's up?", '', 'person beta:', 'Not much, just hanging around.']
</code></pre>
<p>What is wrong with my regex?</p>
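<p>Splitting on <code>\n</code> discards the delimiters and produces the empty strings. A sketch that splits only on the blank line <em>before</em> each speaker, keeping the turns intact via a lookahead:</p>
<pre><code>import re

parts = re.split(r'\n\n(?=person (?:alpha|beta):)', s)
# ["person alpha:\nHow are you today?", "person beta:\nI'm fine, thank you.", ...]
</code></pre>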
|
<python><python-3.x><regex><string><split>
|
2023-01-14 10:59:44
| 3
| 891
|
BlackHawk
|
75,117,392
| 5,865,393
|
Create a default guest user on flask run
|
<p>How can I create a default <strong>guest</strong> user with username <code>guest</code> and password equal to <code>password</code> when I start the web server; i.e. <code>flask run</code>?</p>
<blockquote>
<p>The purpose of this default guest user is to be a demo user so that the actual user doesn't have to register and be able to test and tour the web app.</p>
</blockquote>
<p><strong>models.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from flask_login import UserMixin
from app import bcrypt, db, login_manager
@login_manager.user_loader
def load_user(user_id: int):
return User.query.get(user_id)
class User(db.Model, UserMixin):
__tablename__ = "user"
id = db.Column(db.Integer, primary_key=True, autoincrement=True, unique=True)
username = db.Column(db.String(60), unique=True, nullable=False)
password = db.Column(db.String(60), nullable=False)
def __init__(self, username="guest", password="password"):
self.username = username
self.password = bcrypt.generate_password_hash(password).decode("UTF-8")
def __repr__(self):
return f"<User {self.username!r}>"
@classmethod
def authenticate(cls, username, password):
user = cls.query.filter_by(username=username).first()
if user and bcrypt.check_password_hash(user.password, password):
return user
return False
</code></pre>
<p><strong>views.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from flask import Blueprint, flash, redirect, render_template, url_for
from flask_login import current_user, login_user
from app.auth.forms import SigninForm
from app.models import User
auth = Blueprint("auth", __name__, url_prefix="/auth")
@auth.route("/signin", methods=["GET", "POST"])
def signin():
if current_user.is_authenticated:
return redirect(url_for("main.home"))
form = SigninForm()
if form.validate_on_submit():
user = User.authenticate(
username=form.username.data, password=form.password.data
)
if user:
login_user(user, remember=form.remember.data)
flash(f"Hello, {form.username.data}", category="info")
return redirect(url_for("main.home"))
else:
flash("Login Unsuccessful. Please check username and password", "danger")
return render_template(
"auth/signin.html", title="Sign in", icon="log-in", form=form
)
</code></pre>
<p><strong>EDIT:</strong></p>
<p>My idea is to add the <code>guest</code> user manually to the DB even before <code>flask run</code>. But what if the code is redistributed? I want the <code>guest</code> user creation to be automated in Python.</p>
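<p>A minimal sketch of what I have in mind, assuming an application-factory style setup where <code>db</code> and <code>User</code> are the objects defined above (the placement inside <code>create_app</code> is an assumption on my part):</p>
<pre class="lang-py prettyprint-override"><code>def ensure_guest_user():
    # create the demo account once, if it does not exist yet
    if User.query.filter_by(username="guest").first() is None:
        db.session.add(User())  # defaults: username="guest", password="password"
        db.session.commit()

# inside create_app(), after db.init_app(app):
# with app.app_context():
#     db.create_all()
#     ensure_guest_user()
</code></pre>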
|
<python><authentication><flask><sqlalchemy>
|
2023-01-14 10:52:59
| 1
| 2,284
|
Tes3awy
|
75,117,364
| 5,659,324
|
How to store directory, sub directory and file paths in a Python Dictionary?
|
<p>I am developing software that analyzes Excel files stored in year directories; each year directory contains month directories, and each month directory holds the Excel files. The structure is shown below.
<a href="https://i.sstatic.net/JZF52.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JZF52.png" alt="enter image description here" /></a></p>
<p>In order to achieve my goal, I have used the code below</p>
<pre><code>os.walk("..\..\..\..\ema_monthly_reports")
</code></pre>
<p>The above call yields all the directories, subdirectories and files:</p>
<pre><code>('..\..\..\..\ema_monthly_reports', ['2022', '2023'], [])
('..\..\..\..\ema_monthly_reports\2022', ['1', '10', '11', '12', '2', '3', '4', '5', '6', '7', '8', '9'], [])
('..\..\..\..\ema_monthly_reports\2022\1', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\10', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\11', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\12', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\2', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\3', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\4', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\5', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\6', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\7', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\8', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2022\9', [], ['Basic Facilities.csv'])
('..\..\..\..\ema_monthly_reports\2023', ['1'], [])
('..\..\..\..\ema_monthly_reports\2023\1', [], ['Basic Facilities.xlsx'])
</code></pre>
<p>but I don't know how to organize the years, months and file names properly in a dictionary.</p>
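<p>A sketch of the nested dictionary I am aiming for, <code>{year: {month: [files]}}</code>; the assumption is that the layout is always <code>ema_monthly_reports\year\month\files</code>:</p>
<pre><code>import os

tree = {}
for dirpath, dirnames, filenames in os.walk(r"..\..\..\..\ema_monthly_reports"):
    parts = os.path.normpath(dirpath).split(os.sep)
    if filenames:  # only the month-level directories contain files
        year, month = parts[-2], parts[-1]
        tree.setdefault(year, {})[month] = filenames
# tree == {'2022': {'1': ['Basic Facilities.csv'], ...}, '2023': {...}}
</code></pre>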
|
<python><python-3.x><django><dictionary>
|
2023-01-14 10:49:59
| 1
| 659
|
hamid
|
75,117,330
| 1,833,328
|
Software lock key with Python
|
<p>I wrote a Python program to control a measurement instrument. This program can be transferred to other instruments, but I don't want people to do this without my agreement. I am therefore considering using some sort of lock-key mechanism, which allows unlocking the software with a key code that is specific to a given instrument. While it's easy to write a bit of code to do this in Python, that code will be visible to anyone, and it will therefore be easy to work around.</p>
<p>Is there a solution for Python to check the key code such that users will not be able (i) to work around it easily by making trivial changes to my code and (ii) to see the code that implements the secret rules to validate the key code?</p>
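<p>To make the question concrete, this is the sort of check I have in mind (a sketch only): an HMAC of the instrument ID under a secret. The problem is exactly that in plain Python the secret and the comparison are visible and trivially patched out:</p>
<pre><code>import hmac, hashlib

SECRET = b"..."  # the secret rule I want to keep hidden

def key_is_valid(instrument_id: str, key: str) -> bool:
    expected = hmac.new(SECRET, instrument_id.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(expected, key)
</code></pre>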
|
<python>
|
2023-01-14 10:44:13
| 1
| 621
|
mbrennwa
|
75,117,011
| 19,826,650
|
Php Shell exec didn't run Python file if there are imports like pandas
|
<p>I have Anaconda to run Python, an Apache localhost server (phpMyAdmin) and PHP, with Visual Studio Code. When I use <code>shell_exec</code> without pandas it runs fine, but when I want to use pandas, <code>shell_exec</code> doesn't run the Python code.</p>
<p>List of imports that prevent the Python script from being executed from PHP:</p>
<ol>
<li>pandas as pd</li>
<li>numpy as np</li>
<li>matplotlib.pyplot as plt</li>
<li>from sklearn.model_selection import train_test_split</li>
<li>from sklearn.neighbors import KNeighborsClassifier</li>
<li>from sklearn.metrics import accuracy_score,hamming_loss,classification_report</li>
</ol>
<p><strong>test.py</strong></p>
<pre><code>import sys
import json
# import pandas as pd
# from pandas import json_normalize
import os
import pymysql
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import mplcursors as mpl
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score,hamming_loss,classification_report
print("Hello world has been called")
</code></pre>
<p>Is there something that needs to be done? I really need these imports for the Python script.</p>
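<p>A quick diagnostic sketch: have PHP run a tiny script that reports which interpreter it actually launched. My guess (unconfirmed) is that Apache's PATH resolves a bare <code>python</code> that is not the Anaconda one, so the packages are missing there:</p>
<pre><code>import sys

print(sys.executable)  # the interpreter shell_exec really used
print(sys.path)        # where it looks for packages
</code></pre>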
|
<python><php><apache><anaconda>
|
2023-01-14 09:40:24
| 0
| 377
|
Jessen Jie
|
75,116,920
| 19,504,610
|
Delegates the Calculation of a Property of a Superclass to its Subclass
|
<p>In the book, Python in a Nutshell,</p>
<p>the authors claim the following code snippet is problematic:</p>
<pre class="lang-py prettyprint-override"><code>class B:
def f(self):
return 23
g = property(f)
class C(B):
def f(self):
return 42
c = C()
print(c.g) # prints: 23, not 42
</code></pre>
<p>And the solution, as the authors claimed, is to redirect the calculation of property <code>c.g</code> to the class <code>C</code>'s implementation of <code>f</code> at the superclass <code>B</code>'s level.</p>
<pre class="lang-py prettyprint-override"><code>class B:
def f(self):
return 23
def _f_getter(self):
return self.f()
g = property(_f_getter)
class C(B):
def f(self):
return 42
c = C()
print(c.g) # prints: 42, as expected
</code></pre>
<p>I disagree but please help me to understand why the authors claimed as they did.</p>
<p>Two reasons for my disagreement.</p>
<ol>
<li><p>Superclasses are far more likely to be in upstream modules, and it is not ideal for upstream modules to implement additional indirection to account for the possibility that subclasses in downstream modules reimplement the getter method for the superclass property.</p>
</li>
<li><p>The calculation of a property of a superclass is done after the instantiation of an instance of the superclass, and an instance of the subclass must be instantiated after the instantiation of an instance of the superclass. Therefore, if the writer of C does not explicitly "declare" <code>C.g</code> to be a property with a different implementation <code>C.f</code>, then it should rightly inherit the property <code>B.g</code>, i.e. <code>c.g</code> should be <code>b.g</code> (see the sketch after this list).</p>
</li>
</ol>
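<p>To make my point 2 concrete: a sketch of what I would expect the subclass author to write if they <em>did</em> want the property to use the new getter, by redeclaring it explicitly:</p>
<pre class="lang-py prettyprint-override"><code>class C(B):
    def f(self):
        return 42
    g = property(f)  # explicitly rebind the property to the new getter

print(C().g)  # prints: 42
</code></pre>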
<p>My question is:</p>
<p>am I right with this thought or are the authors right with their claims?</p>
|
<python><inheritance><properties>
|
2023-01-14 09:18:48
| 3
| 831
|
Jim
|
75,116,578
| 6,792,327
|
Swift: Incorrect Base64 Encoding
|
<p>I am attempting to convert a block of code from Python; it involves encoding a JSON string to Base64. My attempt in Swift does not produce the same Base64-encoded string.</p>
<p>Python:</p>
<pre><code>payload_nonce = datetime.datetime(2022, 10, 10, 0, 0, 0).timestamp()
payload = {"request": "/v1/mytrades", "nonce": payload_nonce}
encoded_payload = json.dumps(payload).encode()
b64 = base64.b64encode(encoded_payload)
print(b64)
//prints b'eyJyZXF1ZXN0IjogIi92MS9teXRyYWRlcyIsICJub25jZSI6IDE2NjUzMzEyMDAuMH0='
</code></pre>
<p>Swift:</p>
<pre><code>let formatter = DateFormatter()
formatter.dateFormat = "dd/MM/yyyy"
let date = formatter.date(from: "10/10/2022")
let payloadNonce = date!.timeIntervalSince1970
payload = [
"request": "/v1/mytrades",
"nonce": String(describing: payloadNonce)
]
do {
let json = try JSONSerialization.data(withJSONObject: payload)
let b64 = json.base64EncodedString()
print(b64)
//prints eyJyZXF1ZXN0IjoiXC92MVwvbXl0cmFkZXMiLCJub25jZSI6IjE2NjUzMzEyMDAuMCJ9
} catch {//handle error}
</code></pre>
<p>What am I missing?</p>
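<p>One thing I can already see (noted as a comparison sketch, not a fix): the two serializers format differently. Python's <code>json.dumps</code> adds spaces after <code>:</code> and <code>,</code>, while <code>JSONSerialization</code> emits compact JSON and escapes <code>/</code> as <code>\/</code>; my Swift code also sends the nonce as a string rather than a number. Mimicking that formatting on the Python side already changes the Base64:</p>
<pre><code>import json, base64

payload = {"request": "/v1/mytrades", "nonce": 1665331200.0}
compact = json.dumps(payload, separators=(",", ":")).replace("/", "\\/")
print(base64.b64encode(compact.encode()))
</code></pre>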
|
<python><swift>
|
2023-01-14 08:04:33
| 2
| 2,947
|
Koh
|
75,116,574
| 2,706,344
|
Interpolation using `asfreq('D')` in Multiindex
|
<p>The following code generates two DataFrames:</p>
<pre><code>frame1=pd.DataFrame({'dates':['2023-01-01','2023-01-07','2023-01-09'],'values':[0,18,28]})
frame1['dates']=pd.to_datetime(frame1['dates'])
frame1=frame1.set_index('dates')
frame2=pd.DataFrame({'dates':['2023-01-08','2023-01-12'],'values':[8,12]})
frame2['dates']=pd.to_datetime(frame2['dates'])
frame2=frame2.set_index('dates')
</code></pre>
<p>Using</p>
<pre><code>frame1.asfreq('D').interpolate()
frame2.asfreq('D').interpolate()
</code></pre>
<p>we can interpolate their values between the days to obtain</p>
<p><a href="https://i.sstatic.net/MAESY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MAESY.png" alt="result of the frame1 interpolation" /></a>
and
<a href="https://i.sstatic.net/UDW0E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDW0E.png" alt="enter image description here" /></a></p>
<p>However, consider now the concatenation table:</p>
<pre><code>frame1['frame']='f1'
frame2['frame']='f2'
concat=pd.concat([frame1,frame2])
concat=concat.set_index('frame',append=True)
concat=concat.reorder_levels(['frame','dates'])
concat
</code></pre>
<p><a href="https://i.sstatic.net/Lmnk0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lmnk0.png" alt="concat table" /></a></p>
<p>I want to do the interpolation using one command like</p>
<pre><code>concat.groupby('frame').apply(lambda g:g.asfreq('D').interpolate())
</code></pre>
<p>directly in the concatenation table. Unfortunately, my command above does not work but raises a <code>TypeError</code>:</p>
<pre><code>TypeError: Cannot convert input [('f1', Timestamp('2023-01-01 00:00:00'))] of type <class 'tuple'> to Timestamp
</code></pre>
<p>How do I fix that command to work?</p>
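<p>For reference, the workaround I am experimenting with (a sketch, not verified across pandas versions): dropping the group level inside the lambda so that <code>asfreq</code> sees a plain <code>DatetimeIndex</code> again:</p>
<pre><code>result = (concat.groupby(level='frame')
                .apply(lambda g: g.droplevel('frame').asfreq('D').interpolate()))
</code></pre>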
|
<python><pandas>
|
2023-01-14 08:02:21
| 1
| 4,346
|
principal-ideal-domain
|
75,116,527
| 20,240,835
|
Python filter larger text by quantile
|
<p>Assume I am processing a very large text file. I have the following pseudocode:</p>
<pre><code>xx_valueList = []
lines = []
for line in file:
    xx_value = calc_xxValue(line)
    xx_valueList.append(xx_value)
    lines.append(line)

# get_quantile_value is a function that returns the cutoff value at a specific quantile percent
cut_offvalue = get_quantile_value(xx_valueList, percent=0.05)
for line in lines:
    if calc_xxValue(line) > cut_offvalue:
        # do something here
</code></pre>
<p>Note that the file is very large and may come from a pipe, so I don't want to read it twice.</p>
<p>We must read the entire file before we can compute the cutoff used to filter it.</p>
<p>The above method can work, but it consumes too much memory. Is there some algorithmic optimization that can improve efficiency and reduce memory consumption?</p>
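<p>One direction I have considered (a sketch; <code>calc_xxValue</code> and <code>get_quantile_value</code> are the same functions as above): keep only the scores in memory and spool the raw lines to a temporary file, so the pipe is still read only once and the line text never has to live in RAM:</p>
<pre><code>import tempfile

scores = []
with tempfile.TemporaryFile(mode="w+") as spool:
    for line in file:
        scores.append(calc_xxValue(line))
        spool.write(line)
    cut_offvalue = get_quantile_value(scores, percent=0.05)
    spool.seek(0)
    for score, line in zip(scores, spool):
        if score > cut_offvalue:
            pass  # do something here
</code></pre>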
|
<python><algorithm><optimization><filter><quantile>
|
2023-01-14 07:51:00
| 1
| 689
|
zhang
|
75,116,521
| 19,238,204
|
Check My Code.. Why Python' Sympy integration taking so long?
|
<p>I have been working on plotting a function, revolving it about the y- and x-axes, and then using SymPy to obtain the surface area.</p>
<p>I tried from the terminal and by running the <code>.py</code> file; both take too long to calculate the integral for the surface area.</p>
<p>this is my code:</p>
<pre><code>import numpy as np
import sympy as sy
x = sy.Symbol("x")
def f(x):
return ((x**6) + 2)/ (8*x ** 2)
def fd(x):
return sy.simplify(sy.diff(f(x), x))
def vx(x):
return 2*np.pi*(f(x)*((1 + (fd(x) ** 2))**(1/2)))
vx = sy.integrate(vx(x), (x, 1, 3))
</code></pre>
<p>My questions:</p>
<ol>
<li>Why does <code>sy.integrate</code> take so long? Almost 30 minutes... Is this function hard to compute?</li>
</ol>
<p>From the terminal it had still not finished calculating the integral by the time I asked this question on Stack Overflow:</p>
<p><a href="https://i.sstatic.net/AUHj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AUHj8.png" alt="1" /></a></p>
<ol start="2">
<li>Are there errors in my code or way of improvements for my code?</li>
</ol>
<ol start="3">
<li>[Edited]</li>
</ol>
<p>This is the answer from the <code>sy.integrate</code>:</p>
<pre><code>0.392699081698724*Integral(2*(x**2 + 1)**1.0*Abs(x**4 - x**2 + 1)**1.0/x**5.0, (x, 1, 3)) + 0.392699081698724*Integral(x**1.0*(x**2 + 1)**1.0*Abs(x**4 - x**2 + 1)**1.0, (x, 1, 3))
</code></pre>
<p>Why does it not substitute the <code>x</code> limits into the definite integral?</p>
<p>Is</p>
<pre><code>sy.integrate(vx(x), (x, 1, 3))
</code></pre>
<p>wrong to calculate a definite integral?</p>
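<p>For comparison, a fully symbolic sketch I am considering. My assumption is that the float exponent from <code>**(1/2)</code> and the float <code>np.pi</code> are what stall SymPy, so this version keeps everything exact:</p>
<pre><code>import sympy as sy

x = sy.Symbol("x", positive=True)
f = (x**6 + 2) / (8 * x**2)
fd = sy.diff(f, x)
# exact pi and sqrt instead of np.pi and **(1/2)
area = sy.integrate(2 * sy.pi * f * sy.sqrt(1 + fd**2), (x, 1, 3))
print(sy.simplify(area))
</code></pre>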
|
<python><numpy><sympy>
|
2023-01-14 07:49:34
| 1
| 435
|
Freya the Goddess
|
75,116,507
| 17,347,824
|
While loop to append list or break based on value type and user input
|
<p>I'm trying to write a python program that asks the user to enter an integer or "q" (case-insensitive) to quit that will then take any integers and print the sum of the last 5.</p>
<p>I have created a holding list and some counter and test variables to help with this, but I can't seem to get it to work the way I'd like. I keep getting various errors.</p>
<p>The code I currently have is</p>
<pre><code>my_list = []
quit = 0
i = 0
while quit == 0:
value = eval(input("Please enter an integer or the letter 'q' to quit: ")
if value.isdigit()
my_list.append(value)
i += 1
print(sum(my_list[-1] + my_list[-2] + my_list[-3] + my_list[-4] + my_list[-5]))
if value == q:
quit += 1
elif
print("Your input is not an integer, please try again")
</code></pre>
<p>This is returning an error of invalid syntax for the <code>my_list.append(value)</code> line.</p>
<p>What I would like this to do is allow for me to enter any integer, have the loop test if it is an integer, and if so, add it to the holding list and print out the sum of the most recent 5 entries in the list (or all if less than 5). If I enter "q" or "Q" I want the loop to break and the program to end.</p>
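<p>For reference, a minimal sketch of the behaviour I am after, written from scratch rather than fixing the above, and avoiding <code>eval</code> (the negative-number handling is my own assumption about the requirements):</p>
<pre><code>my_list = []
while True:
    value = input("Please enter an integer or the letter 'q' to quit: ").strip()
    if value.lower() == "q":
        break
    if value.lstrip("-").isdigit():
        my_list.append(int(value))
        print(sum(my_list[-5:]))  # last 5 entries, or all of them if fewer
    else:
        print("Your input is not an integer, please try again")
</code></pre>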
|
<python>
|
2023-01-14 07:45:32
| 3
| 409
|
data_life
|
75,116,506
| 2,035,790
|
Improve download speed of images from s3
|
<p>I created a Streamlit app that gets and displays images from S3 for labeling purposes. The app is extremely slow! After using the code profiler, I discovered that the following section of code takes the most time (reaches 40-120 seconds).</p>
<pre><code>for obj in my_bucket.objects.filter(Prefix="images/"+item_data['Img']):
object = my_bucket.Object(obj.key)
file_stream = io.BytesIO()
object.download_fileobj(file_stream)
image = Image.open(file_stream)
</code></pre>
<p>Is there anything that might be done to reduce this time?</p>
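<p>One idea, sketched below: the downloads are sequential, so overlapping them with a thread pool may help. The caveat (which I have not fully verified) is that boto3 resources are not guaranteed thread-safe, so in practice each worker may need its own session/client:</p>
<pre><code>import io
from concurrent.futures import ThreadPoolExecutor
from PIL import Image

def fetch(obj):
    buf = io.BytesIO()
    my_bucket.Object(obj.key).download_fileobj(buf)
    return Image.open(buf)

objects = list(my_bucket.objects.filter(Prefix="images/" + item_data['Img']))
with ThreadPoolExecutor(max_workers=8) as pool:
    images = list(pool.map(fetch, objects))
</code></pre>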
|
<python><amazon-web-services><amazon-s3><boto3><streamlit>
|
2023-01-14 07:45:04
| 0
| 1,401
|
userInThisWorld
|
75,116,202
| 19,238,204
|
Check my Python Code to Obtain Area of the Surface Revolved About the x-axis
|
<p>I want to plot the surface of revolution of</p>
<p><code>(x^6 + 2)/(8x^2)</code></p>
<p>with <code>1 ≤ x ≤ 3</code></p>
<p>this is my Python code / MWE:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
n = 100
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222, projection='3d')
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224, projection='3d')
x = np.linspace(1, 3, 3)
y = ((x ** 6) + 2) / (8 * x ** 2)
t = np.linspace(0, np.pi * 2, n)
xn = np.outer(x, np.cos(t))
yn = np.outer(x, np.sin(t))
zn = np.zeros_like(xn)
for i in range(len(x)):
zn[i:i + 1, :] = np.full_like(zn[0, :], y[i])
ax1.plot(x, y)
ax1.set_title("$f(x)$")
ax2.plot_surface(xn, yn, zn)
ax2.set_title("$f(x)$: Revolution around $y$")
# find the inverse of the function
y_inverse = x
x_inverse = ((y_inverse ** 6) + 2) / ( 8 * x ** 2)
xn_inverse = np.outer(x_inverse, np.cos(t))
yn_inverse = np.outer(x_inverse, np.sin(t))
zn_inverse = np.zeros_like(xn_inverse)
for i in range(len(x_inverse)):
zn_inverse[i:i + 1, :] = np.full_like(zn_inverse[0, :], y_inverse[i])
ax3.plot(x_inverse, y_inverse)
ax3.set_title("Inverse of $f(x)$")
ax4.plot_surface(xn_inverse, yn_inverse, zn_inverse)
ax4.set_title("$f(x)$: Revolution around $x$")
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/axG6M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/axG6M.png" alt="py" /></a></p>
<p>Questions:</p>
<ol>
<li>I want to know whether the plot is correct for the surface area.</li>
<li>Is there any better way to plot besides this ?</li>
<li>I think the plot of <code>y</code>, the x-axis and y-axis limit need more adjustment, since I use Julia to plot the function and got this (more smooth curve not bending):</li>
</ol>
<p><a href="https://i.sstatic.net/d6mYG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d6mYG.png" alt="1" /></a></p>
|
<python><numpy>
|
2023-01-14 06:28:24
| 1
| 435
|
Freya the Goddess
|
75,116,164
| 11,402,025
|
AWS StateMachine AccessDeniedException in step: CleanUpOnError
|
<p>I am getting the following error when the state machine tries to execute the Lambda step:</p>
<pre><code>"errorType": "AccessDeniedException",
"errorMessage": "User: arn:aws:sts::14161:assumed-role/serverlessrepo-Functi-cleanerRole/serverlessrepo-=Function-p-cleaner is not authorized to perform: lambda:functionname on resource: arn:aws:lambda:function:functionname because no identity-based policy allows the lambda:functionname action",
</code></pre>
<pre><code>Resources:
FunctionExecutionRole: # Execution role for function
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: [
"sts:AssumeRole",
"lambda:InvokeAsync",
"lambda:InvokeFunction"
]
Resource: "*"
ManagedPolicyArns:
- arn:aws:iam::aws:policy/AWSLambda_FullAccess
- arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
- arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess
Tags:
- Key: Application
Value: !Sub '${ApplicationTag}'
Function1:
Type: AWS::Serverless::Function # Find or Create alias lambda function
Properties:
PackageType: Image
ImageConfig:
Command:
- function1.lambda_handler
ImageUri:
AutoPublishAlias: live # This property enables lambda function versioning.
Role: !GetAtt FindOrCreateAliasExecutionRole.Arn
Tags:
Application: !Sub '${ApplicationTag}'
</code></pre>
<p>I do not have permission to change user's IAM roles/policies/permissions</p>
|
<python><amazon-web-services><aws-lambda><amazon-iam><aws-step-functions>
|
2023-01-14 06:15:07
| 1
| 1,712
|
Tanu
|
75,116,124
| 19,561,210
|
Python - Reverse Linked List Issue
|
<p>I have been working on the reverse linked list method for a long time and I am trying to understand why it is wrong but I can't seem to get it. The following is my function.</p>
<pre class="lang-py prettyprint-override"><code>def reverse(self, head):
if head is None or head.next is None:
return head
prev = None
current = head
while current is not None:
temp = current
current.next = prev
prev = temp
current = prev.next
return prev
</code></pre>
<p>I thought the above reverse function should work. Example SLL: 1 -> 2 -> 3 -> None <br>
My thought process:</p>
<ol>
<li>Set <code>temp</code> to reference to the <code>current</code> which is the head of the linked list. Thus, <code>temp.val = 1</code> and <code>temp.next.val = 2</code>. <br></li>
<li><code>current.next = prev</code> means that the <code>current</code> is now pointing to <code>None</code>. <br></li>
<li><code>prev = temp</code> means that <code>prev.val = 1</code> and <code>prev.next.val = 2</code> <br></li>
<li><code>current = prev.next</code> means that <code>current.val = 2</code> and <code>current.next.val = 3</code> <br></li>
</ol>
<p>Is my thought process correct? If not, which step is incorrect?
I also searched up the "correct" way of doing this where</p>
<pre class="lang-py prettyprint-override"><code>while current is not None:
temp = current.next
current.next = prev
prev = current
current = temp
</code></pre>
<p>This works but I want to understand why my way of doing it is incorrect. I don't want to memorise blindly.</p>
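<p>To be explicit about how I traced my own version (nodes written as 1 -> 2 -> 3; this is my reading of the first iteration, which may be exactly where I go wrong):</p>
<pre class="lang-py prettyprint-override"><code># first iteration of my version:
# temp = current        -> temp IS node 1 (same object, not a copy)
# current.next = prev   -> node 1 now points to None, so temp.next is None too
# prev = temp           -> prev is node 1
# current = prev.next   -> prev.next was just set to None, so current = None
# the loop then exits after one pass, returning only node 1
</code></pre>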
|
<python><reverse>
|
2023-01-14 06:01:23
| 0
| 634
|
Jessica
|
75,116,047
| 17,696,880
|
How to remove the line where a specific string is found in a .txt file?
|
<pre class="lang-py prettyprint-override"><code>import os
word_to_replace, replacement_word = "", ""
if (os.path.isfile('data_association/names.txt')):
word_file_path = 'data_association/names.txt'
else:
open('data_association/names.txt', "w")
word_file_path = 'data_association/names.txt'
word = "Claudio"
with open(word_file_path) as f:
lineas = [linea.strip() for linea in f.readlines()]
numero = None
if word in lineas: numero = lineas.index(word)+1
if numero != None:
    pass  # Here you must remove the line with the name of that file
else:
    print("That person's name could not be found within the file, so no line has been removed")
</code></pre>
<p>I need to remove that name from the following .txt file (assuming there is a line with that name in the file; if the file has no such line, it should print that the name has not been found and that it cannot be deleted).</p>
<pre><code>Lucy
Samuel
María del Pilar
Claudia
Claudio
Katherine
Maríne
</code></pre>
<p>After removing the line with the example name <code>"Claudio"</code>, the file would look like this:</p>
<pre><code>Lucy
Samuel
María del Pilar
Claudia
Katherine
Maríne
</code></pre>
<p>PS: this list of names is a reduced fragment of the actual list, so it's better if instead of rewriting the whole file, you just edit the specific line where that name was found</p>
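<p>As far as I know, a line in the middle of a plain text file cannot be deleted in place, so my current direction is a read-filter-rewrite sketch (shown for discussion; it rewrites the whole file, which is what I was hoping to avoid):</p>
<pre class="lang-py prettyprint-override"><code>with open(word_file_path, encoding="utf-8") as f:
    lineas = [linea.strip() for linea in f]

if word in lineas:
    lineas.remove(word)  # removes only the first occurrence
    with open(word_file_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lineas) + "\n")
else:
    print("That person's name could not be found within the file, so no line has been removed")
</code></pre>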
|
<python><python-3.x><file><replace><txt>
|
2023-01-14 05:43:13
| 0
| 875
|
Matt095
|
75,115,582
| 18,587,779
|
How to load toml file in python
|
<p>How do I load a TOML file in Python? This is my code:</p>
<p>python file:</p>
<pre><code>import toml
toml.get("first").name
</code></pre>
<p>toml file :</p>
<pre><code>[first]
name = "Mark Wasfy"
age = 22
[second]
name = "John Wasfy"
age = 25
</code></pre>
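<p>A sketch of what I think the loading should look like (the filename <code>config.toml</code> is a placeholder; on Python 3.11+ the standard-library <code>tomllib</code> could be used instead):</p>
<pre><code>import toml

data = toml.load("config.toml")
print(data["first"]["name"])  # -> "Mark Wasfy"
</code></pre>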
|
<python><toml>
|
2023-01-14 03:30:03
| 3
| 318
|
Mark Wasfy
|
75,115,498
| 5,965,999
|
Connecting to a laser with Python's socket interface
|
<p>I'm trying to connect over ethernet to a piece of hardware (a laser) which listens for connections on a certain port. The laser's documentation on this is very minimal; the entirety of it is as follows:</p>
<blockquote>
<p>Ethernet TCP/IP Interface:
The IP address of the laser is shown on the front panel. Touching the screen where
the address is shown displays the network setup menu where you can change the
network settings.
The laser listens for connections on port 10001. The command must be sent as a
single string in a single packet. The individual commands are described in “Interface
Commands” on page 3-2.</p>
</blockquote>
<p>The commands alluded to are a list of few-character text strings, such as "ABN", which turns the laser on. I would like to talk to the laser using Python's socket interface. I've tried to follow the pattern of <a href="https://realpython.com/python-sockets/" rel="nofollow noreferrer">this tutorial</a>, but to no avail. Here's an example of what I tried:</p>
<pre><code>import socket
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
s.connect(("192.168.1.223",10001))
s.sendall(b"ABN")
</code></pre>
<p>This does nothing. However, I can talk to the laser using Putty and a raw connection to the same IP and port as above. This opens a terminal where I can type commands such as "ABN" (which works, turning the laser on) and read the laser's replies.</p>
<p><strong>Question:</strong> What is a Putty "raw" connection doing, and how can I replicate it with Python?</p>
<hr />
<p>Edit: The laser is an IPG YLR-100-1064-LP-WC.</p>
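<p>For completeness, the variant I plan to try next (a sketch; the <code>\r\n</code> terminator is a guess based on the fact that typing in a raw terminal sends a line ending along with the command):</p>
<pre><code>import socket

with socket.create_connection(("192.168.1.223", 10001), timeout=5) as s:
    s.sendall(b"ABN\r\n")
    print(s.recv(1024))  # read the laser's reply
</code></pre>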
|
<python><sockets>
|
2023-01-14 03:08:37
| 1
| 2,360
|
Yly
|
75,115,453
| 219,976
|
Django EmbeddedField raises ValidationError because of renamed field
|
<p>I've got a Django application with <code>djongo</code> as a database driver. The models are:</p>
<pre class="lang-py prettyprint-override"><code>class Blog(models.Model):
_id = models.ObjectIdField()
name = models.CharField(max_length=100, db_column="Name")
tagline = models.TextField()
class Entry(models.Model):
_id = models.ObjectIdField()
blog = models.EmbeddedField(
model_container=Blog
)
</code></pre>
<p>When I run this application, I got an error:</p>
<pre><code>File "\.venv\lib\site-packages\djongo\models\fields.py", line 125, in _validate_container
raise ValidationError(
django.core.exceptions.ValidationError: ['Field "m.Blog.name" of model container:"<class \'project.m.models.Blog\'>" cannot be named as "name", different from column name "Name"']
</code></pre>
<p>I want to keep the name of the field <code>name</code> in my model and database different because the database already exists, and I can't change it. The database uses camelCase for naming fields, whereas in the application, I want to use snake_case.</p>
<p>How to avoid this error?</p>
|
<python><django><mongodb><django-models><djongo>
|
2023-01-14 02:53:34
| 4
| 6,657
|
StuffHappens
|
75,115,422
| 3,361,013
|
Dash app, plotly chart data outside the chart area
|
<p>I have written the following code, which updates a Plotly chart with random values every 5 seconds; however, after a few seconds the new data falls outside the chart area and is not visible. Is there an easy way to reset the axes whenever needed?</p>
<p>Also how can I make this responsive so it will auto-scale to full window?</p>
<pre><code>import requests
import dash
from dash import dcc
from dash import html
from dash.dependencies import Input, Output
from datetime import datetime
import json
import plotly.graph_objs as go
import random
# Create empty lists to store x and y values
x_values = []
y_values = []
# Initialize the dash app
app = dash.Dash()
app.layout = html.Div([
dcc.Graph(id='live-graph', animate=True,responsive=True),
dcc.Interval(
id='graph-update',
interval=5000,
n_intervals=0
),
])
# Define the callback function
@app.callback(Output('live-graph', 'figure'), [Input('graph-update', 'n_intervals')])
def update_graph(n):
current_value= random.randint(2000, 8000)
# Get the current datetime
x = datetime.now()
x_values.append(x)
# Get the response value and append it to the y_values list
y = current_value
y_values.append(y)
# Create the line chart
trace = go.Scatter(x=x_values, y=y_values)
data = [trace]
layout = go.Layout(title='Real-time Data',
xaxis=dict(title='Datetime'),
yaxis=dict(title='Value'))
return go.Figure(data=data, layout=layout).update_layout(
autosize=True,
margin=dict(l=0, r=0, t=0, b=0)
)
if __name__ == '__main__':
app.run_server(debug=False)
</code></pre>
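<p>For reference, the direction I am currently testing (a sketch): recomputing explicit axis ranges on every callback, since my guess is that <code>animate=True</code> keeps the old ranges:</p>
<pre><code>layout = go.Layout(
    title='Real-time Data',
    xaxis=dict(title='Datetime', range=[min(x_values), max(x_values)]),
    yaxis=dict(title='Value', range=[min(y_values) * 0.9, max(y_values) * 1.1]),
)
</code></pre>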
<p>Many thanks!</p>
|
<python><plotly><plotly-dash>
|
2023-01-14 02:44:17
| 1
| 847
|
Petrik
|
75,115,379
| 7,071,794
|
How can I disable the gradient color with kdeplot?
|
<p>When I run the code below, I get a figure with a color gradient (from black to orange); please look at the attached figure. I want a figure with only a single color, orange, not a gradient. How can I do that?</p>
<p><strong>My code:</strong></p>
<pre><code>#!/usr/bin/python3
import numpy as np
import pylab as plot
import matplotlib.pyplot as plt
import numpy, scipy, pylab, random
from matplotlib.ticker import MultipleLocator
import matplotlib as mpl
from matplotlib.ticker import MaxNLocator
import seaborn as sns
import pandas as pd
fig, ax = plt.subplots(figsize=(4, 2))
df = pd.read_csv('input.txt', sep="\s\s+", engine='python')
sns.kdeplot(data=df, label = "s1", color = "orange", cmap=None)
plt.xlabel('x', fontsize=7)
plt.ylabel('y', fontsize=7)
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(0.5)
plt.savefig("plot.png", dpi=300, bbox_inches='tight')
</code></pre>
<p><strong>input.txt:</strong></p>
<pre><code> 0.43082 0.45386
0.35440 0.91632
0.16962 0.85031
0.07069 0.54742
0.31648 1.06689
0.57874 1.17532
0.18982 1.01678
0.31012 0.54656
0.31133 0.81658
0.53612 0.50940
0.36633 0.83130
0.37021 0.74655
0.28335 1.30949
0.11517 0.63141
0.24908 1.04403
-0.28633 0.46673
-0.13251 0.33448
-0.00568 0.53939
-0.03536 0.76191
0.24695 0.92592
</code></pre>
<p>The output figure that I get is here: <a href="https://i.sstatic.net/azadz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/azadz.png" alt="plot.png" /></a></p>
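<p>What I have noticed so far (and I may be misreading my own data): the file parses into two columns, and seaborn then draws one curve per column with colors taken from a gradient palette. If a single orange curve over one column is the goal, restricting the plot to that column gives a single color; which column is the one of interest is an assumption here:</p>
<pre><code>sns.kdeplot(data=df.iloc[:, 0], label="s1", color="orange", ax=ax)
</code></pre>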
|
<python><matplotlib><seaborn><kdeplot>
|
2023-01-14 02:29:28
| 1
| 437
|
qasim
|
75,115,246
| 17,090,926
|
Can you load a Polars dataframe directly into an s3 bucket as parquet?
|
<p>looking for something like this:</p>
<p><a href="https://stackoverflow.com/questions/38154040/save-dataframe-to-csv-directly-to-s3-python">Save Dataframe to csv directly to s3 Python</a></p>
<p>the api shows these arguments:
<a href="https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.DataFrame.write_parquet.html" rel="nofollow noreferrer">https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.DataFrame.write_parquet.html</a></p>
<p>but I'm not sure how to convert the DataFrame into a stream...</p>
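<p>A sketch of what I have pieced together from the linked API page (which suggests <code>write_parquet</code> accepts a file-like object); the bucket and key names are placeholders:</p>
<pre><code>import io
import boto3
import polars as pl

df = pl.DataFrame({"a": [1, 2, 3]})
buf = io.BytesIO()
df.write_parquet(buf)   # write into an in-memory buffer
buf.seek(0)
boto3.client("s3").upload_fileobj(buf, "my-bucket", "data.parquet")
</code></pre>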
|
<python><dataframe><amazon-s3><parquet><python-polars>
|
2023-01-14 01:46:21
| 1
| 415
|
rnd om
|
75,115,154
| 10,318,539
|
How to add two Bits using single Qubit on Quantum Computer
|
<p>As I understand it, a qubit has the capacity to store two bits at a time. I am curious how to add two bits using a single qubit. I have tried a lot but failed; please give some hints if you know how.</p>
<p>Code:</p>
<pre><code>from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
from numpy import pi
qreg_q = QuantumRegister(2, 'q')
creg_c = ClassicalRegister(2, 'c')
circuit = QuantumCircuit(qreg_q, creg_c)
circuit.x(qreg_q[0])
circuit.measure(qreg_q[0], creg_c[0])
circuit.measure(qreg_q[1], creg_c[0])
circuit.x(qreg_q[1])
</code></pre>
<p>Circuit Diagram:</p>
<p><a href="https://i.sstatic.net/r4Ubf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r4Ubf.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><quantum-computing><qiskit>
|
2023-01-13 22:51:07
| 1
| 485
|
Engr. Khuram Shahzad
|
75,114,982
| 1,825,360
|
Python: Positive integral solutions for Linear Equation
|
<p>I want to find all POSITIVE INTEGRAL solutions for a, b for this simple, toy linear equation, a + 2b = 5, for which the solutions are:</p>
<pre><code>a|5|3|1|
b|0|1|2|
</code></pre>
<p>I've tried a few things after going some posts here and the "python-constraint" module was helpful, for example:</p>
<pre><code>from constraint import *
problem = Problem()
problem.addVariables(['a', 'b'], range(10))
problem.addConstraint(lambda a, b: 5 == 2*b + a , ('a', 'b'))
solutions = problem.getSolutions()
print(solutions)
</code></pre>
<p>Prints out the solutions as follows:</p>
<pre><code>[{'a': 5, 'b': 0}, {'a': 3, 'b': 1}, {'a': 1, 'b': 2}]
</code></pre>
<p>I have two questions: 1. In the real example I know the bounds of the variables a, b, etc. Can I input the bounds or domains of a and b separately to <code>problem.addVariables()</code>, or do I have to use <code>problem.addVariable()</code> instead?</p>
<p>Secondly, is there any other way of doing this simply in Python, like using 'sympy' or 'pulp' or anything else? I'm not sure how to extract the solutions from sympy's solve() output:</p>
<pre><code>from sympy import symbols, solve

a, b = symbols('a b', integer=True)
solve(a*1 + b*2 - 5, [a, b])
</code></pre>
<p>Which gives me the following output:</p>
<pre><code>[(5 - 2*b, b)]
</code></pre>
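<p>Regarding question 1, a sketch of what I mean by per-variable domains (assuming <code>addVariable</code> is the right call for this):</p>
<pre><code>from constraint import Problem

problem = Problem()
problem.addVariable('a', range(0, 6))  # a separate domain for each variable
problem.addVariable('b', range(0, 3))
problem.addConstraint(lambda a, b: a + 2*b == 5, ('a', 'b'))
print(problem.getSolutions())
</code></pre>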
<p>Thanks</p>
|
<python><equation-solving>
|
2023-01-13 22:24:05
| 1
| 469
|
The August
|
75,114,947
| 16,978,074
|
read a csv file in 3 columns
|
<p>I want to read a CSV file with 3 columns, "source", "target", "genre_ids", with Python:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv('edges1.csv',encoding="ISO-8859-1", delimiter=';;', header=None,skiprows=1, names=columns,engine="python",index_col=False )
data = pd.concat([df.iloc[:,0].str.split(',', expand=True).rename(columns:=['source','target','genre_ids']), axis==1])
</code></pre>
<p>I want to get:</p>
<pre><code>source target genre_ids
apple green 21
strawberry red 23
son on
</code></pre>
<p>edge1.csv contains:</p>
<pre><code>source,target,genre_ids
apple,green,21
strawberry,red,23
and so on
</code></pre>
<p>When I read the <code>edges1.csv</code> file, all the data ends up in a single column. To separate the columns I found the <code>concat</code> approach above online, but it doesn't work.</p>
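<p>For comparison, since the sample file shown above is a plain comma-separated file with a header row, my understanding (to be confirmed) is that a single default <code>read_csv</code> call should already give three columns:</p>
<pre><code>df = pd.read_csv('edges1.csv', encoding="ISO-8859-1")
print(df[['source', 'target', 'genre_ids']])
</code></pre>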
|
<python><pandas><csv><split>
|
2023-01-13 22:18:29
| 2
| 337
|
Elly
|
75,114,894
| 17,696,880
|
Replace a string identified in a specific line of a .txt file by another string
|
<pre><code>import re, os
def replace_one_line_content_with_other(input_text):
word_to_replace, replacement_word = "", ""
if (os.path.isfile('data_association/names.txt')):
word_file_path = 'data_association/names.txt'
else:
open('data_association/names.txt', "w")
word_file_path = 'data_association/names.txt'
#name_capture_pattern = r"([A-Z][\wí]+\s*(?i:del|de\s*el|de)\s*[A-Z]\w+)"
name_capture_pattern = r"((?:\w+))?"
regex_pattern = r"(?i:no\s*es)\s*" + name_capture_pattern + r"\s*(?i:sino\s*que\s*es)\s*" + name_capture_pattern
n0 = re.search(regex_pattern, input_text)
if n0:
word_to_replace, replacement_word = n0.groups()
print(repr(word_to_replace)) # --> word that I need identify in the txt
print(repr(replacement_word)) # --> word by which I will replace the previous one
#After reaching this point, the algorithm will already have both words, both the one that must be replaced (in case it is identified within the txt) and the one with which it will be replaced.
numero = None
with open(word_file_path) as f:
lineas = [linea.strip() for linea in f.readlines()]
#Find the word in the .txt file where the words are stored
if word_to_replace in lineas: numero = lineas.index(word_to_replace)+1
#REPLACE word_to_replace with replacement_word in this .txt line
input_text = "No es Marín sino que es Marina"
replace_one_line_content_with_other(input_text)
</code></pre>
<p>After identifying the string stored in the <code>word_to_replace = 'Marín'</code> variable inside the .txt file, I must replace it with the string that is inside the <code>replacement_word = 'Marina'</code> variable.</p>
<p>Original content in the .txt file (it must be assumed that we do not know which line the word is on, so the line number could not be indicated)</p>
<pre><code>Lucy
Martin
Lucila
Samuel
Katherine
María del Pilar
Maríne
Marín
Steve
Robert
</code></pre>
<p>And the .txt content after the modification:</p>
<pre><code>Lucy
Martin
Lucila
Samuel
Katherine
María del Pilar
Maríne
Marina
Steve
Robert
</code></pre>
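<p>The direction I am exploring for the replacement itself (a sketch; as with deletion, I don't believe a middle line of a text file can be edited truly in place, so this rewrites the file):</p>
<pre><code>with open(word_file_path, encoding="utf-8") as f:
    lineas = f.read().splitlines()

if word_to_replace in lineas:
    lineas[lineas.index(word_to_replace)] = replacement_word
    with open(word_file_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lineas) + "\n")
</code></pre>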
|
<python><python-3.x><file><replace><txt>
|
2023-01-13 22:08:11
| 1
| 875
|
Matt095
|
75,114,879
| 1,360,276
|
Python code generation from OpenAPI specification
|
<p>Having an OpenAPI 3 specification, I'd like to generate stub code from it, defining the DTOs/serializers/deserializers, webframework-agnostic. The plan is to use this generated code not only for the client, but for the server as well. Marshmallow dataclasses models would be great, or maybe Pydantic. Any tool already existing to solve this?</p>
|
<python><openapi>
|
2023-01-13 22:06:32
| 0
| 620
|
saabeilin
|
75,114,867
| 19,276,569
|
If a list is unhashable in Python, why is a class instance with list attribute not?
|
<p>Firstly, we have a normal list:</p>
<pre><code>ingredients = ["hot water", "taste"]
</code></pre>
<p>Trying to print this list's hash will expectedly raise a TypeError:</p>
<pre><code>print(hash(ingredients))
>>> TypeError: unhashable type: 'list'
</code></pre>
<p>which means we cannot use it as a dictionary key, for example.</p>
<p>But now suppose we have a <code>Tea</code> class which only takes one argument; a list.</p>
<pre><code>class Tea:
def __init__(self, ingredients: list|None = None) -> None:
self.ingredients = ingredients
if ingredients is None:
self.ingredients = []
</code></pre>
<p>Surprisingly, creating an instance and printing its hash will <em>not</em> raise an error:</p>
<pre><code>cup = Tea(["hot water", "taste"])
print(hash(cup))
>>> 269041261
</code></pre>
<p>This hints at the object being hashable (although pretty much being identical to a list in its functionality). Trying to print its <code>ingredients</code> attribute's hash, however, <em>will</em> raise the expected error:</p>
<pre><code>print(hash(cup.ingredients))
>>> TypeError: unhashable type: 'list'
</code></pre>
<p>Why is this the case? Shouldn't the presence of the list — being an unhashable type — make it impossible to hash any object that 'contains' a list? For example, now it is possible to use our <code>cup</code> as a dictionary key:</p>
<pre><code>dct = {
    cup: "test"
}
</code></pre>
<p>despite the fact that the cup is more or less a list in its functionality. So if you really want to use a list (or another unhashable type) as a dictionary key, isn't it possible to do it in this way? (not my main question, just a side consequence)</p>
<p>Why doesn't the presence of the list make the entire datatype unhashable?</p>
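<p>While experimenting I noticed something that may be the heart of it (stated as an observation, not an explanation): the default hash comes from object identity, not from the attributes, so two cups with identical ingredients hash differently:</p>
<pre><code>cup_a = Tea(["hot water", "taste"])
cup_b = Tea(["hot water", "taste"])
print(hash(cup_a) == hash(cup_b))  # False: same contents, different objects
print(cup_a == cup_b)              # also False: __eq__ defaults to identity
</code></pre>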
|
<python><list><hashable>
|
2023-01-13 22:04:51
| 1
| 856
|
juanpethes
|
75,114,866
| 8,310,504
|
Is there a way to wrap an numpy `ndarray` interface around an existing binary file?
|
<p>I have a binary network capture (<code>.pcapng</code>) file that contains video data. I am parsing the <code>.pcapng</code> with scapy and I can extract the data, but the video sequences I am working with are very large and the operations I want to perform quickly grind my machine to a halt if I load very much data at once. One approach to deal with this would be to extract all the data and save it into a mmap file, or better yet, HDF5. However, before I sign up for making copies of all the data, I wanted to see if it is possible to memory map the existing files in place. Is there a way to make a discontinuous mmap into an existing file that tells an ndarray object where to find memory associated with a given index, when that memory may be in arbitrary locations within the file? I haven't found any good analogs in mmap, which assumes a contiguous file is available. I imagine some <code>ndarray</code> subclass that loads up the file, scans the file for the boundaries of all the relevant imagery data within the <code>.pcapng</code> file, and provides a custom implementation of <code>ndarray</code> <code>__index__</code> method that can return the appropriate file offset(s) for a given index or slice. Is this bonkers, or is there a better (already solved) method for doing this?</p>
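<p>The closest concrete thing I have found so far is a sketch along these lines: one small <code>np.memmap</code> view per extracted payload, given known byte offsets and lengths inside the capture (<code>payload_spans</code> here is hypothetical, produced by the scapy parsing pass):</p>
<pre><code>import numpy as np

frames = [np.memmap("capture.pcapng", dtype=np.uint8, mode="r",
                    offset=off, shape=(length,))
          for off, length in payload_spans]
</code></pre>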
|
<python><numpy><scapy><mmap><pcap>
|
2023-01-13 22:04:48
| 1
| 301
|
K. Nielson
|
75,114,862
| 1,464,515
|
Running idf.py runconfig only opens the "idf.py" file in vscode
|
<p>I installed the ESP-IDF extension ("express install"), but <code>idf.py</code> was not recognized, so</p>
<p><a href="https://i.sstatic.net/SGdBl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SGdBl.png" alt="enter image description here" /></a></p>
<p>i added manually the environment variables IDF_PATH, IDF_TOOLS_PATH and also added %IDF_PATH%/tools to the Path variable.</p>
<p><a href="https://i.sstatic.net/X9E0c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X9E0c.png" alt="enter image description here" /></a></p>
<p>Now, when I run "idf.py menuconfig" in the VS Code terminal, it just opens the idf.py file in VS Code:</p>
<p><a href="https://i.sstatic.net/oeN9U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oeN9U.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/2swOX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2swOX.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong? Also, if I run <code>python "$env:IDF_TOOLS_PATH\idf.py" menuconfig</code> I get the following:</p>
<p>Python 3.10.8</p>
<p><a href="https://i.sstatic.net/Dx7bK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dx7bK.png" alt="enter image description here" /></a></p>
|
<python><vscode-extensions><esp-idf>
|
2023-01-13 22:04:18
| 1
| 438
|
Cristóbal Felipe Fica Urzúa
|
75,114,841
| 21,003,650
|
Debugger warning from IPython: frozen modules
|
<p>I created a new environment using conda and wanted to add it to jupyter-lab. I got a warning about frozen modules? (shown below)</p>
<pre class="lang-none prettyprint-override"><code>$ ipython kernel install --user --name=testi2
0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
Installed kernelspec testi2 in /home/michael/.local/share/jupyter/kernels/testi2
</code></pre>
<p>All I had installed were ipykernel, ipython, ipywidgets, jupyterlab_widgets, ipympl</p>
<p>Python Version 3.11.0, Conda version 22.11.0</p>
<p>And I used <code>conda install nodejs -c conda-forge --repodata-fn=repodata.json</code> to get the latest version of nodejs</p>
<p>I also tried re-installing ipykernel to a previous version (6.20.1 -> 6.19.2)</p>
|
<python><ipython><python-3.11>
|
2023-01-13 22:01:50
| 2
| 383
|
Elijah
|
75,114,624
| 18,092,798
|
Multiline ruleorder in Snakemake
|
<p>I have 3 rules and their names are somewhat long. When using <code>ruleorder</code>, the line goes over my desired 80 character limit. Is it possible break up the <code>ruleorder</code> into multiple lines in such a way that the behaviour is <em>exactly</em> the same as if I wrote it all in one line?</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>ruleorder: long_rule_1 > long_rule_2 > long_rule_3
</code></pre>
<p>I would like to reformat it into something like this:</p>
<pre class="lang-py prettyprint-override"><code>ruleorder: (
long_rule_1
> long_rule_2
> long_rule_3
)
</code></pre>
|
<python><python-3.x><snakemake><directed-acyclic-graphs>
|
2023-01-13 21:28:31
| 3
| 581
|
yippingAppa
|
75,114,602
| 3,398,741
|
Poor model performances when doing multi-class classification
|
<h1>Context</h1>
<p>I have a dataset of medical X-Rays (<a href="https://thumbs.dreamstime.com/z/x-ray-human-head-skull-side-view-cranium-medical-analysis-xray-mri-ct-diagnostic-scan-photo-ray-human-head-skull-side-view-239898982.jpg" rel="nofollow noreferrer">example</a>). I want to train a model to recognize an <a href="https://d1l9wtg77iuzz5.cloudfront.net/assets/5501/248215/original.svg?1542210077" rel="nofollow noreferrer">overbite</a>. The potential values can be:</p>
<ul>
<li>Normal</li>
<li>1-2mm</li>
<li>2-4mm</li>
<li>[ ... ]</li>
<li>8mm+</li>
</ul>
<h2>Test Results</h2>
<p>I've built a CNN to process the images. My problem is that the validation accuracy is extremely low when comparing multiple classes of images. I tried different combinations of things and here are the results:</p>
<pre><code>| Image | Val Accuracy |
| ----------- | ------------ |
| A -> B | 56% |
| B -> C | 33% |
| A -> C | 75% |
| A -> B -> C | 17% |
</code></pre>
<p>When I compare the classes one against one, the model seems to train better than otherwise. Why is that the case? In total I have:</p>
<ul>
<li>1368 images of A</li>
<li>1651 images of B</li>
<li>449 images of C</li>
</ul>
<p>(I realize 3.5K images is not a lot of data but I'm trying to figure out the fundamentals of a good model first before downloading and training on more data. My DB only has 17K images)</p>
<h2>Code</h2>
<p>I have a custom input pipeline and generate a <code>tf.data.Dataset</code>.</p>
<pre><code>print(train_ds)
==> <ParallelMapDataset element_spec=(TensorSpec(shape=(512, 512, 1), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.uint8, name=None))>
</code></pre>
<p>Here is the CNN architecture:</p>
<pre><code>input_shape = (None, IMG_SIZE, IMG_SIZE, color_channels)
num_classes = len(class_names)
# Pre-processing layers
RESIZED_IMG = 256
resize_and_rescale = tf.keras.Sequential([
layers.Resizing(RESIZED_IMG, RESIZED_IMG),
layers.Rescaling(1./255)
])
medium = 0.2
micro = 0.10
data_augmentation = tf.keras.Sequential([
layers.RandomContrast(medium),
layers.RandomBrightness(medium),
layers.RandomRotation(micro, fill_mode="constant"),
layers.RandomTranslation(micro, micro, fill_mode="constant"),
layers.RandomZoom(micro, fill_mode="constant"),
])
# Hidden layers
model = Sequential([
data_augmentation,
resize_and_rescale,
Conv2D(16, 3, padding='same', activation='relu'),
Conv2D(24, 5, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(128, activation='relu'),
Dense(num_classes, activation='softmax'),
])
# Build
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.build(input_shape)
model.summary()
# Start training
epochs = 15
early_stopping_monitor = tf.keras.callbacks.EarlyStopping(
monitor='val_accuracy',
restore_best_weights=True,
patience=7
)
mcp_save = tf.keras.callbacks.ModelCheckpoint(
'.mdl_wts.hdf5',
save_best_only=True,
monitor='val_accuracy'
)
history = model.fit(
batch_train_ds,
validation_data=batch_val_ds,
epochs=epochs,
class_weight=class_weights,
callbacks=[early_stopping_monitor, mcp_save]
)
</code></pre>
<p>The only thing I've changed between runs is which classes of images are loaded in my input pipeline; I then recorded each run's accuracy. I have intentionally kept the CNN small because I don't have a lot of data.</p>
<h2>Questions</h2>
<ul>
<li>Why does my model perform worse when training on more classes?</li>
<li>Do I have the wrong data and the images do not have enough conclusive information?</li>
<li>Is my image count too low in order to train a decent ML model?</li>
<li>Is my CNN not deep enough for multi-class classification?</li>
</ul>
|
<python><tensorflow><machine-learning><keras><conv-neural-network>
|
2023-01-13 21:25:34
| 2
| 1,149
|
FrenchMajesty
|
75,114,570
| 615,743
|
How to batch sqlalchemy results by size
|
<p>I have a model <code>Post</code> and want to iterate through them in batches of 10 in a loop.</p>
<p>This is what I've tried, but it does not work:</p>
<pre class="lang-py prettyprint-override"><code>batched_posts = Post.query.yield_for(10)
for posts in batched_posts.partitions(): # error: 'Query' object has no attribute 'partitions'
print(len(posts)) # prints 10 ten times if I have 100 posts
</code></pre>
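<p>What I am trying now, pieced together from the 1.4/2.0 docs (a sketch; the session/engine wiring is assumed, and as far as I can tell <code>partitions()</code> lives on the <code>Result</code> rather than on <code>Query</code>):</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import select
from sqlalchemy.orm import Session

with Session(engine) as session:
    result = session.execute(select(Post).execution_options(yield_per=10))
    for partition in result.partitions(10):
        print(len(partition))  # 10 rows per partition
</code></pre>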
|
<python><sqlalchemy>
|
2023-01-13 21:20:12
| 1
| 350
|
ydnaklementine
|
75,114,565
| 7,071,794
|
could not convert string to float with sns.kdeplot
|
<p>I am trying to use <code>sns.kdeplot</code> to get a figure, but I get the error below:</p>
<pre><code>ValueError: could not convert string to float: ' 0.43082 0.45386'
</code></pre>
<p>Do you know how I can fix this error?</p>
<p><strong>Code snippet:</strong></p>
<pre><code>data=pd.read_csv('input.txt', sep="\t", header = None)
sns.kdeplot(data=data, common_norm=False, palette=('b'))
</code></pre>
<p><strong>input.txt:</strong></p>
<pre><code> 0.43082 0.45386
0.35440 0.91632
0.16962 0.85031
0.07069 0.54742
0.31648 1.06689
0.57874 1.17532
0.18982 1.01678
0.31012 0.54656
0.31133 0.81658
0.53612 0.50940
0.36633 0.83130
0.37021 0.74655
0.28335 1.30949
0.11517 0.63141
0.24908 1.04403
-0.28633 0.46673
-0.13251 0.33448
-0.00568 0.53939
-0.03536 0.76191
0.24695 0.92592
</code></pre>
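<p>My current suspicion (unverified): the sample file is separated by runs of spaces rather than tabs, so with <code>sep="\t"</code> everything lands in one string column. A sketch of the read I am going to try:</p>
<pre><code>data = pd.read_csv('input.txt', sep=r"\s+", header=None)
sns.kdeplot(data=data, common_norm=False)
</code></pre>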
|
<python><matplotlib><seaborn><kdeplot>
|
2023-01-13 21:19:34
| 1
| 437
|
qasim
|
75,114,510
| 2,889,716
|
FastAPI Mock is not working, Seems like that patch is not applied
|
<p>Would you please tell me what's wrong with this code?</p>
<p>app.py</p>
<pre class="lang-py prettyprint-override"><code>import uvicorn
from fastapi import FastAPI
from fastapi import status
from fastapi.testclient import TestClient
from app_dep import resp
app = FastAPI()
client = TestClient(app)
def test_create_item(mocker):
mocker.patch(
'app_dep.resp',
return_value={'name': 'mocked'}
)
r = client.post("/ehsan")
print(r.json())
@app.post("/ehsan", status_code=status.HTTP_201_CREATED)
async def create():
return resp()
if __name__ == "__main__":
uvicorn.run("boot:app", host="0.0.0.0", port=8100, reload=True)
</code></pre>
<p>app_dep.py</p>
<pre><code>def resp():
return {"name": "original"}
</code></pre>
<p>This is a whole code including a FastAPI app and a test. It is expected when I run <code>pytest app.py -s</code> see this output:</p>
<pre><code>{'name': 'mocked'}
</code></pre>
<p>But I see:</p>
<pre><code>{'name': 'original'}
</code></pre>
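<p>One detail that may matter (noted as a sketch of an alternative, not a confirmed fix): <code>app.py</code> does <code>from app_dep import resp</code>, so the name actually looked up at request time is <code>app.resp</code>. Patching the name where it is <em>used</em> would look like this:</p>
<pre class="lang-py prettyprint-override"><code>def test_create_item(mocker):
    mocker.patch('app.resp', return_value={'name': 'mocked'})
    r = client.post("/ehsan")
    print(r.json())
</code></pre>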
|
<python><mocking><fastapi>
|
2023-01-13 21:12:26
| 0
| 4,899
|
ehsan shirzadi
|
75,114,437
| 472,485
|
Sharing context between get and post
|
<p>What is the correct way to share context between a <code>get</code> and a <code>post</code> invocation of the same view in Django, without sending anything to the client? Can I do something like the below?</p>
<pre><code> class Req(TemplateView):
@login_required
def get(self, request, *args, **kwargs):
recepient=self.kwargs['recepient']
ftype=self.kwargs['type']
@login_required
def post(self, request, *args, **kwargs):
recepient=self.kwargs['recepient']
ftype=self.kwargs['type']
</code></pre>
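<p>Since (as far as I know) Django creates a fresh view instance per request, attributes set during <code>get</code> would not survive into a later <code>post</code>; the session is the mechanism I am considering instead (a sketch):</p>
<pre><code>def get(self, request, *args, **kwargs):
    request.session['recepient'] = self.kwargs['recepient']
    request.session['ftype'] = self.kwargs['type']
    ...

def post(self, request, *args, **kwargs):
    recepient = request.session.get('recepient')
    ftype = request.session.get('ftype')
    ...
</code></pre>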
|
<python><django><post><get>
|
2023-01-13 21:01:36
| 2
| 22,975
|
Jean
|
75,114,410
| 16,491,055
|
Check if there exists a value in each of 3 numpy arrays, that are within an interval of x?
|
<p>Suppose I have 3 <code>numpy</code> arrays. It could be more than 3 arrays, however.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
INTERVAL = 2
array1 = np.array([1,5,10,15,20,25,30])
array2 = np.array([1,10,50,100,150,200,250,300])
array3 = np.array([3,8,12])
</code></pre>
<p>For a given set of elements to match in the above arrays, each element must be within <code>INTERVAL</code> of every other. The actual index positions of the elements in the arrays do not matter in the comparison, and order is not guaranteed: the question is whether one element can be taken from each array such that all the chosen elements are within <code>INTERVAL</code> of each other.</p>
<p>Example matches that would be returned from the above 3 arrays:</p>
<pre><code>Example#1
array1 : 1
array2 : 1
array3 : 3
Example#2
array1 : 10
array2 : 10
array3 : 8
Example#3
array1 : 10
array2 : 10
array3 : 12
</code></pre>
<p>Bonus points :</p>
<p>In the event where there could be multiple matches with the same elements, only return the one with the lowest sum. For example, <code>Example#2</code> and <code>Example#3</code> share elements, but <code>Example#2</code> & <code>Example#1</code> should be returned and not <code>Example#3</code></p>
<p>Any suggestions on how I should go about this?</p>
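<p>The brute-force baseline I have so far (a sketch; it is O(n1*n2*n3), so it only works for small arrays, which is why I am asking for suggestions): a combination matches when its spread, max minus min, is at most <code>INTERVAL</code>, and sorting by sum handles the bonus requirement:</p>
<pre><code>import itertools

arrays = [array1, array2, array3]
matches = [c for c in itertools.product(*arrays) if max(c) - min(c) <= INTERVAL]
matches.sort(key=sum)
print(matches)  # e.g. (1, 1, 3) first, then the (10, 10, 8) match, ...
</code></pre>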
|
<python><arrays><numpy>
|
2023-01-13 20:57:00
| 1
| 771
|
geekygeek
|
75,114,230
| 338,479
|
Is there a way to add a timeout to a system call in a thread?
|
<p>My use case: I want to call <code>fcntl.flock()</code> on a file but have a timeout. Following the recipe in <a href="https://stackoverflow.com/questions/492519/timeout-on-a-function-call/494273#494273">Timeout on a function call</a>, I wrapped my code in a <code>contextmanager</code> that implements timeouts via a Unix signal:</p>
<pre><code>@contextmanager
def doTimeout(seconds):
"""Creates a "with" context that times out after the
specified time."""
def timeout_handler(signum, frame):
pass
original_handler = signal.signal(signal.SIGALRM, timeout_handler)
try:
signal.alarm(seconds)
yield
finally:
signal.alarm(0)
signal.signal(signal.SIGALRM, original_handler)
</code></pre>
<p>and used it as follows:</p>
<pre><code> with doTimeout(timeout):
try:
fcntl.flock(self.file, fcntl.LOCK_EX)
self.locked = True
return True
except (OSError, IOError) as e:
if e.errno == errno.EINTR:
return False
raise
</code></pre>
<p>This all worked perfectly, but unfortunately I can only do this from the main thread because only the main thread can catch signals. Is there a way to do it from another thread?</p>
<p>My alternatives at this point are to periodically test the lock and then sleep, or launch a subprocess. Neither of these is ideal; is there a better way?</p>
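<p>For reference, a sketch of the poll-and-sleep alternative mentioned above, using <code>LOCK_NB</code> so each attempt returns immediately instead of blocking:</p>
<pre><code>import errno, fcntl, time

def flock_with_timeout(f, timeout, poll=0.1):
    deadline = time.monotonic() + timeout
    while True:
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except (OSError, IOError) as e:
            if e.errno not in (errno.EAGAIN, errno.EACCES):
                raise
            if time.monotonic() >= deadline:
                return False
            time.sleep(poll)
</code></pre>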
|
<python><timeout><signals>
|
2023-01-13 20:35:15
| 1
| 10,195
|
Edward Falk
|
75,114,215
| 5,304,058
|
how to extract data inside a bracket in pandas
|
<p>I have a dataframe column whose values are wrapped in square brackets. I would like to keep only the string inside them.</p>
<pre><code>df:
ID col1
1 [2023/01/06:12:00:00 AM]
2 [2023/01/06:12:00:00 AM]
3 [2023/01/06:12:00:00 AM]
</code></pre>
<p>Expected:</p>
<pre><code>ID col1
1 2023/01/06:12:00:00 AM
2 2023/01/06:12:00:00 AM
3 2023/01/06:12:00:00 AM
</code></pre>
<p>I tried <code>str.findall(r"(?<=[)([^]]+)(?=])")</code> and also some other regexes, but it is not working.</p>
<p>Can anyone please help me?</p>
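<p>Two sketches of directions I have been pointed toward (either stripping the bracket characters or extracting the inside with a capture group):</p>
<pre><code>df['col1'] = df['col1'].str.strip('[]')
# or, with a regex capture:
# df['col1'] = df['col1'].str.extract(r'\[(.*)\]')
</code></pre>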
|
<python><pandas>
|
2023-01-13 20:33:50
| 3
| 578
|
unicorn
|
75,114,168
| 1,437,877
|
Python structural pattern matching for string containing float
|
<p>How can I use structural pattern matching for the following use case:</p>
<pre><code>values = ["done 0.0", "done 3.9", "failed system busy"]
for v in values:
vms = v.split()
match vms:
case ['done', float()>0]: # Syntax error
print("Well done")
case ['done', float()==0]: # Syntax error
print("It is okay")
case ['failed', *rest]:
print(v)
</code></pre>
<p>Please excuse me for the syntax errors, I have written this to demonstrate my thought process.</p>
<p>What could be the right syntax to achieve this pattern matching? Is it even possible?</p>
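<p>A sketch of the closest thing I have found while reading about <code>match</code>: a capture pattern combined with an <code>if</code> guard (the guard would still raise if the second token is not numeric, so some care is needed):</p>
<pre><code>match vms:
    case ['done', value] if float(value) > 0:
        print("Well done")
    case ['done', value] if float(value) == 0:
        print("It is okay")
    case ['failed', *rest]:
        print(v)
</code></pre>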
|
<python><pattern-matching>
|
2023-01-13 20:26:22
| 2
| 4,089
|
Abbas
|