QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
74,888,374
| 10,681,828
|
How to synchronize access inside async for?
|
<p>I found this library for asynchronously consuming kafka messages: <a href="https://github.com/aio-libs/aiokafka" rel="nofollow noreferrer">https://github.com/aio-libs/aiokafka</a></p>
<p>It gives this code example:</p>
<pre><code>from aiokafka import AIOKafkaConsumer
import asyncio
async def consume():
consumer = AIOKafkaConsumer(
'redacted',
bootstrap_servers='redacted',
auto_offset_reset="earliest"
#group_id="my-group"
)
# Get cluster layout and join group `my-group`
await consumer.start()
try:
# Consume messages
async for msg in consumer:
print("consumed: ", msg.topic, msg.partition, msg.offset,
msg.key, msg.value, msg.timestamp)
finally:
# Will leave consumer group; perform autocommit if enabled.
await consumer.stop()
asyncio.run(consume())
</code></pre>
<p>I would like to find out the biggest kafka message using this code. So, Inside <code>async for</code> I need to do <code>max_size = max(max_size, len(msg.value))</code>. But I think it won't be thread-safe, and I need to lock access to it?</p>
<pre><code>try:
max_size = -1
# Consume messages
async for msg in consumer:
max_size = max(max_size, len(msg.value)) # do I need to lock this code?
</code></pre>
<p>How do I do it in python? I've checked out this page: <a href="https://docs.python.org/3/library/asyncio-sync.html" rel="nofollow noreferrer">https://docs.python.org/3/library/asyncio-sync.html</a> and I'm confused because those synchronization primitives are not thread-safe? So I can't use them in a multithreaded context? I'm really confused. I come from a Java background and need to write this script, so, pardon me that I haven't read all the asyncio books out there.</p>
<p>Is my understanding correct that the body of the <code>async for</code> loop is a continuation that may be scheduled on a separate thread when the asynchronous operation is done?</p>
|
<python><async-await><python-asyncio>
|
2022-12-22 12:13:36
| 1
| 2,272
|
Pavel Orekhov
|
74,888,295
| 694,360
|
Detect almost grayscale image with Python
|
<p>Inspired by this <a href="https://stackoverflow.com/q/23660929/694360">question</a> and this <a href="https://stackoverflow.com/a/74834150/694360">answer</a> (which isn't very solid) I realized that I often find myself converting to grayscale a color image that is <em>almost</em> grayscale (usually a color scan from a grayscale original). So I wrote a function meant to measure a kind of <em>distance</em> of a color image from grayscale:</p>
<pre><code>import numpy as np
from PIL import Image, ImageChops, ImageOps, ImageStat
def distance_from_grey(img): # img must be a Pillow Image object in RGB mode
img_diff=ImageChops.difference(img, ImageOps.grayscale(img).convert('RGB'))
return np.array(img_diff.getdata()).mean()
img = Image.open('test.jpg')
print(distance_from_grey(img))
</code></pre>
<p>The number obtained is the average difference among all pixels of RGB values and their grayscale value, which will be zero for a perfect grayscale image.</p>
<p>What I'm asking to imaging experts is:</p>
<ul>
<li>is this approach valid or there are better ones?</li>
<li>at which <em>distance</em> an image can be safely converted to grayscale without checking it visually?</li>
</ul>
|
<python><colors><python-imaging-library><grayscale>
|
2022-12-22 12:07:02
| 2
| 5,750
|
mmj
|
74,888,098
| 13,158,157
|
pandas: expand and replace rows with rows from another data frames
|
<p>I have two data frames with many different column names but a few common ones. One frame has rows that have to be "expanded" with rows from the other data frame:</p>
<p>Example:</p>
<pre><code>df = pd.DataFrame({'option':['A', 'A', 'B', 'B', 'fill_A', 'fill_B', ], 'items':['11111', '22222', '33333', '11111', '', '', ], 'other_colA':['','', '','', '','' ]})
look_up_df = pd.DataFrame({'option':['A','A','A','B', 'B','B'], 'items':['11111', '22222', '33333', '44444', '55555', '66666'], 'other_colB':['','', '','', '','' ]})
df
</code></pre>
<p><a href="https://i.sstatic.net/FpreS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FpreS.png" alt="Data Frame to fill" /></a></p>
<p>Rows "fill_A" and "fill_B" in <code>df</code> have to be replace with rows from <code>look_up_df</code> like so:</p>
<p><a href="https://i.sstatic.net/xwIqw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xwIqw.png" alt="enter image description here" /></a></p>
<p>How do this expansion while leaving out of the rest of columns ?</p>
|
<python><pandas><dataframe>
|
2022-12-22 11:49:04
| 1
| 525
|
euh
|
74,887,925
| 7,826,511
|
How to allow CORS from Axios get request in Django backend?
|
<p>I've been looking for a solution to this problem but nothing seems to work. I've arrived at the <a href="https://github.com/adamchainz/django-cors-headers" rel="nofollow noreferrer">django-cors-headers</a> package but can't get it to work.</p>
<p>I'm sending an <code>axios</code> request form my <code>vue</code> frontend:</p>
<pre><code>axios.get('data/')
.then(res => { console.log(res) })
</code></pre>
<p>but it throws a <code>200 network error</code> error:</p>
<pre><code>Access to XMLHttpRequest at 'http://localhost:8000/data/' from origin 'http://localhost:3000' has been blocked by CORS policy: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
</code></pre>
<ul>
<li></li>
</ul>
<pre><code>GET http://localhost:8000/data/ net::ERR_FAILED 200
</code></pre>
<ul>
<li></li>
</ul>
<pre><code>AxiosError {message: 'Network Error', name: 'AxiosError', code: 'ERR_NETWORK', config: {…}, request: XMLHttpRequest, …}
code
:
"ERR_NETWORK"
config
:
{transitional: {…}, adapter: Array(2), transformRequest: Array(1), transformResponse: Array(1), timeout: 0, …}
message
:
"Network Error"
name
:
"AxiosError"
request
:
XMLHttpRequest {onreadystatechange: null, readyState: 4, timeout: 0, withCredentials: true, upload: XMLHttpRequestUpload, …}
stack
:
"AxiosError: Network Error\n
</code></pre>
<h3>Django backend</h3>
<p>I am redirecting the incoming request in <strong><code>myProject/urls.py</code></strong>:</p>
<pre><code>from django.urls import path, include
urlpatterns = [
path('', include('myApp.urls')),
]
</code></pre>
<p>to <strong><code>myApp/urls.py</code></strong>:</p>
<pre><code>from django.urls import path
from . import views
urlpatterns = [
path('data/', views.getData)
]
</code></pre>
<p>which invokes <strong><code>myApp/views.py</code></strong>:</p>
<pre><code>from rest_framework.response import Response
from rest_framework.decorators import api_view
from base.models import Item
from .serializers import ItemSerializer
@api_view(['GET'])
def getData(request):
items = Item.objects.all()
serializer = ItemSerializer(items, many=True)
return Response(serializer.data)
</code></pre>
<p>with <strong><code>base/models.py</code></strong>:</p>
<pre><code>from django.db import models
class Item(models.Model):
name = models.CharField(max_length=200)
created = models.DateTimeField(auto_now_add=True)
</code></pre>
<p>and <strong><code>myApp/serializers.py</code></strong>:</p>
<pre><code>from rest_framework import serializers
from base.models import Item
class ItemSerializer(serializers.ModelSerializer):
class Meta:
model = Item
fields = '__all__'
</code></pre>
<p>I've installed the <code>django-cors-headers</code> package and configured it in <strong><code>myProject/settings.py</code></strong>:</p>
<pre><code>INSTALLED_APPS = [
'corsheaders',
...
]
MIDDLEWARE = [
"corsheaders.middleware.CorsMiddleware",
]
</code></pre>
<p>with either:</p>
<pre><code>CORS_ALLOWED_ORIGINS = [
"http://localhost:3000",
]
</code></pre>
<p>or</p>
<pre><code>CORS_ALLOW_ALL_ORIGINS = True
</code></pre>
<p>but none of both works. I've tried reading up on the <a href="https://github.com/adamchainz/django-cors-headers" rel="nofollow noreferrer">package docs</a> but can't find the mistake.</p>
|
<python><django><rest><axios><cors>
|
2022-12-22 11:32:27
| 1
| 6,153
|
Artur Müller Romanov
|
74,887,884
| 19,155,645
|
pandas: .isna() shows that whole column is NaNs, but it is strings
|
<p>I have a pandas dataframe with a column that is populated by "yes" or "no" strings.
When I do <code>.value_counts()</code> to this column, i receive the correct distribution. <br>
But, when I run <code>.isna()</code> it shows that the whole column is NaNs.</p>
<p>I suspect later it creates problems for me.</p>
<p>Example:</p>
<pre><code>df = pd.DataFrame(np.array([[0,1,2,3,4],[40,30,20,10,0], ['yes','yes','no','no','yes']]).T, columns=['A','B','C'])
len(df['C'].isna()) # 5 --> why?!
df['C'].value_counts() # yes : 3, no: 2 --> as expected.
</code></pre>
|
<python><pandas>
|
2022-12-22 11:28:33
| 1
| 512
|
ArieAI
|
74,887,868
| 12,474,157
|
Python transform synchronous request into asynchronous to download image (httpio)
|
<p>I'm using synchronous responses to get image data from a url into bytes format. But I would like to make it an asynchronous process. My approach is changing python requests for httpio. to achieve it.</p>
<p>My original code, which is working throws the following result</p>
<p><a href="https://i.sstatic.net/0JMJH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0JMJH.png" alt="enter image description here" /></a></p>
<h2>My Dataframe</h2>
<pre><code>df = pd.DataFrame({
"uuid": {
0: 86240171628346,
1: 165887752774427,
2: 175393314389900,
3: 273714316578343,
4: 167563092160852,
},
"raw_logo_url": {
0: "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTOYM0psiirjiZltDUm5xPtxtsdEUVIEIR4UGZMDMxNEmS8Fmr1N4ttV5db5lpJOeI0d64&usqp=CAU",
1: "https://cdn.logojoy.com/wp-content/uploads/2018/05/01104813/1268-768x591.png",
2: "https://res.cloudinary.com/crunchbase-production/image/upload/c_lpad,f_auto,q_auto:eco,dpr_1/v1444818641/snpqz6rutxy7azidex5w.png",
3: "https://res.cloudinary.com/crunchbase-production/image/upload/c_lpad,f_auto,q_auto:eco,dpr_1/uymioczowxzwibevii1b",
4: "https://www.companyfolders.com/blog/media/2015/01/adidas-300x207.jpg",
},
"version": {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
}
)
</code></pre>
<h2>Synchronous version</h2>
<pre><code>def get_image_from_url(url: str) -> Union[PngImagePlugin.PngImageFile, JpegImagePlugin.JpegImageFile]:
response = requests.get(url)
if response.status_code == 200:
# print(response.content[:100])
return Image.open(io.BytesIO(response.content))
print(response.status_code)
return None
def convert_image_to_bytes(image: Union[PngImagePlugin.PngImageFile, JpegImagePlugin.JpegImageFile]) -> bytes:
"""
Receives a PIL image file and returns a bytes object
"""
b = io.BytesIO()
image.save(b, 'png')
image_bytes = b.getvalue()
return image_bytes
def process_image_from_url(url: str) -> bytes:
image = get_image_from_url(url)
return convert_image_to_bytes(image)
df["company_logo"] = df["raw_logo_url"].apply(process_image_from_url)
</code></pre>
<h2>Asynchronous version, not working</h2>
<pre><code>import asyncio
import aiohttp
import pandas as pd
async def get_image_from_url_(url: str) -> Union[PngImagePlugin.PngImageFile, JpegImagePlugin.JpegImageFile]:
async with aiohttp.ClientSession() as session:
response = await session.get(url)
if response.status == 200:
print(response.__dict__.keys())
return response.text()
# return Image.open(io.BytesIO(response.read()))
print(response.status)
return None
async def process_image(df: pd.DataFrame, input_column: str = 'raw_logo_url', output_column: str = 'company_logo'):
df[output_column] = await asyncio.gather(*[get_image_from_url(v) for v in df[input_column]])
print(df)
df2 = asyncio.run(process_image(df))
</code></pre>
<h3>Issue</h3>
<p>In the async response I cannot get the <code>response.content</code> that I want to transform into an image to make a bytes object later.</p>
<pre><code>*** TypeError: unhashable type: 'JpegImageFile'
</code></pre>
<p>Notice that every image is transformed into <code>PNG</code> in the original execution</p>
|
<python><asynchronous><async-await><python-asyncio>
|
2022-12-22 11:27:12
| 0
| 1,720
|
The Dan
|
74,887,645
| 350,403
|
Retrieving object metadata while traversing S3 using Minio client's list_objects method
|
<p>I am using the <a href="https://min.io/docs/minio/linux/developers/python/API.html#list_objects" rel="nofollow noreferrer"><code>list_objects</code></a> method of the python minio client and trying to retrieve the metadata for each object while traversing folders in S3. I attempted to do this by setting the <code>include_user_meta</code> parameter of the method to <code>True</code>. However, when I look at the returned objects, the metadata of the returned objects is not included. I checked the S3 API Reference documentation for the <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html" rel="nofollow noreferrer">ListObjectsV2 endpoint</a> and there is no mention of a <code>metadata</code> parameter that could be passed to include the metadata during file traversal.</p>
<p>Is there some other way to retrieve the metadata for each object while traversing a folder in S3 (without firing off one request for each retrieved object), or is this simply not possible and there is a bug in the python minio client?</p>
|
<python><client><metadata><traversal><minio>
|
2022-12-22 11:09:20
| 2
| 1,327
|
evermean
|
74,887,383
| 12,559,770
|
Order lists according to another list in python
|
<p>I have multiple lists such as :</p>
<pre><code>List1=['Canis_lupus','Cattus_catus','Mus_musculus','Rattus_rattus','Bombyx']
List2=['Homo_sapiens','Homo_erectus','Pan_troglodys']
List3=['Cattus_cattus','Bombyx','Homo_erectus','Mus_musculus']
</code></pre>
<p>And a predefined ordered list with all the element that could be within the <strong>Lists</strong> above=</p>
<pre><code>Ordered_list=['Cattus_cattus','Bombyx','Mus_musculus','Homo_sapiens','Pan_troglodys','Canis_lupus','Rattus_rattus','Homo_erectus']
</code></pre>
<p>So I would like simple to reorder the 3 lists by comparing with the order of elements in <code>Ordered_list</code></p>
<p>The new ordered list should then be :</p>
<pre><code>List1=['Cattus_catus','Bombyx','Mus_musculus','Canis_lupus','Rattus_rattus']
List2=['Homo_sapiens','Pan_troglodys','Homo_erectus']
List3=['Cattus_cattus','Bombyx','Mus_musculus','Homo_erectus']
</code></pre>
<p>Does someone have an idea please ?</p>
|
<python><python-3.x>
|
2022-12-22 10:45:22
| 3
| 3,442
|
chippycentra
|
74,887,245
| 11,023,647
|
How to alter foreignkey with Alembic
|
<p>I handle my PostgreSQL migrations with <code>Alembic</code>. This is how I create a table <code>items</code>:</p>
<pre><code>from alembic import op
import sqlalchemy as sa
def upgrade():
items_table = op.create_table(
"items",
sa.Column("id", UUID(as_uuid=True), primary_key=True),
sa.Column("user_id", UUID(as_uuid=True), nullable=False),
sa.PrimaryKeyConstraint("id"),
sa.ForeignKeyConstraint(
["user_id"],
["users.id"],
),
)
</code></pre>
<p>I'd like to make a new migration file to add <code>ondelete="CASCADE"</code> after the <code>sa.ForeignKeyConstraint(...)</code>. How can I do this using <code>sqlalchemy</code>? How do I drop the <code>ForeignKeyConstraint</code> and create a new one? Or do I need to drop the whole table and create it again?</p>
|
<python><postgresql><sqlalchemy><alembic>
|
2022-12-22 10:34:59
| 1
| 379
|
lr_optim
|
74,887,205
| 8,076,879
|
Paramiko equivalent of OpenSSH ssh -J switch
|
<p>I am trying to translate this <code>ssh</code> this command to Python using the <code>paramiko</code> library.</p>
<pre class="lang-bash prettyprint-override"><code>sshpass -p SomePassword ssh -J specificSshHost admin@11.0.0.0 \
-oHostKeyAlgorithms=+ssh-rsa \
-oKexAlgorithms=+diffie-hellman-group1-sha1 \
-o "StrictHostKeyChecking no"
</code></pre>
<p>Where <code>specificSshHost</code> points to this file in <code>.ssh/config</code> as follows</p>
<pre class="lang-none prettyprint-override"><code>Host specificSshHost
User admin
IdentityFile ~/.ssh/mySpecificRsaKey
</code></pre>
<p>What I have so far</p>
<pre class="lang-py prettyprint-override"><code>import paramiko
import os
client = paramiko.SSHClient()
client.load_host_keys("/home/name/.ssh/mySpecificRsaKey")
user = 'admin'
pswd = 'SomePassword'
ssh_keypath = ".ssh/mySpecificSshHost"
REMOTE_SERVER_IP = "11.0.0.0"
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname=str(REMOTE_SERVER_IP), username=user,
key_filename=ssh_keypath)
</code></pre>
<p>This is what I find in the <code>paramiko</code> log file</p>
<pre class="lang-none prettyprint-override"><code>INFO:paramiko.hostkeys:Not enough fields found in known_hosts in line 26 ('xPuIyxnS2aQoUvDVyCtJEJ47P6nH8su/bDGj6hrS1GBOFYLrCu4LBQ==')
INFO:paramiko.hostkeys:Unable to handle key of type RSA
</code></pre>
<p>I have read that <code>paramiko</code> supports the <code>rsa</code> and also those algorithms, so I do not understand why the <code>connect</code> command is just hanging there. The error trace triggered by a <code>keyboardInterrupt</code> is</p>
<pre class="lang-none prettyprint-override"><code> File "/tmp/ipykernel_202149/1488139442.py", line 36, in <module>
client.connect(hostname=str(REMOTE_SERVER_IP), username =str(user),
File "/home/david/miniconda3/lib/python3.9/site-packages/paramiko/client.py", line 358, in connect
retry_on_signal(lambda: sock.connect(addr))
File "/home/david/miniconda3/lib/python3.9/site-packages/paramiko/util.py", line 279, in retry_on_signal
return function()
File "/home/david/miniconda3/lib/python3.9/site-packages/paramiko/client.py", line 358, in <lambda>
retry_on_signal(lambda: sock.connect(addr))
</code></pre>
|
<python><ssh><paramiko>
|
2022-12-22 10:31:15
| 1
| 2,438
|
DaveR
|
74,887,176
| 5,368,122
|
Calculating difference of two columns with lag and storing result in other
|
<p>I have <strong>base dataframe</strong> that looks like this:</p>
<pre><code>mname p_code p_name fcval shotdate actual_1 actual_2 actual_3
0 101_1210 BankABC 5590890 2015-02-05 10 20 30
</code></pre>
<p>and a <strong>control dataframe</strong> that looks like this:</p>
<pre><code>mname p_code p_name fcval shotdate prd_1 prd_2 prd_3
30 101_1210 BankABC 5590890 2015-02-05 15 30 40
</code></pre>
<p><strong>Note:</strong> The number of feature/cols like actual and prd are 48 i.e actual_1, actual_2 ... actual_48, same for prd.</p>
<p>There could be multiple dataframes like the control one, the structure stays the same.</p>
<p>I want to calculate the difference between <strong>actual_</strong>* columns of base by shifting the <strong>prd_</strong>* columns of control by a lag, and store the result in control dataframe in new column called <strong>error</strong>. The lag is calulated as</p>
<pre><code>mname//30 = 1 in this case, could be 3 if mname=90 as 90//30=3, the the lag woudl be 3, ie shifting the **prd_*** cells by 3
</code></pre>
<p>In the above case, the difference would be like this</p>
<pre><code>actual_1 actual_2 actual_3
prd_1 prd_2 prd_3
</code></pre>
<p>will result in</p>
<pre><code>err1 = actual_2 - prd_1 = 20-15 = 5
err2 = actual_3 - prd_2 = 30-30 = 0
err3 = NaN because there is no matching actual
</code></pre>
<p>The resulting dataframe looks like this:</p>
<pre><code>mname p_code p_name fcval shotdate mNumber error
30 101_1210 BankABC 5590890 2015-02-05 1 5
30 101_1210 BankABC 5590890 2015-02-05 2 0
30 101_1210 BankABC 5590890 2015-02-05 3 NaN
</code></pre>
<p>Also, if nay of the acutal is nan, then error should be NaN.</p>
<p>I have been trying this with apply and lag but unsuccessfull.</p>
<p>Thanks in advance!</p>
|
<python><pandas>
|
2022-12-22 10:28:48
| 2
| 844
|
Obiii
|
74,886,735
| 9,363,181
|
Python unittest case expected not matching with the actual
|
<p>I am trying to mock the <code>secrets manager client</code>. Earlier the variables weren't in the class so I was able to mock the client directly using a patch like below:</p>
<pre><code>@patch('my_repo.rc.client')
</code></pre>
<p>and now since I am using an instance method, I need to mock the instance method.</p>
<p><strong>rc.py</strong></p>
<pre><code>import boto3
import json
from services.provisioner_logger import get_provisioner_logger
from services.exceptions import UnableToRetrieveDetails
class MyRepo(object):
def __init__(self, region):
self.client = self.__get_client(region)
def id_lookup(self, category):
logger = get_provisioner_logger()
try:
response = self.client.get_secret_value(SecretId=category)
result = json.loads(response['SecretString'])
logger.info("Got value for secret %s.", category)
return result
except Exception as e:
logger.error("unable to retrieve secret details due to ", str(e))
raise Exception("unable to retrieve secret details due to ", str(e))
def __get_client(self, region):
return boto3.session.Session().client(
service_name='secretsmanager',
region_name=region
)
</code></pre>
<p><strong>test_secrt.py</strong></p>
<pre><code>from unittest import TestCase
from unittest.mock import patch, MagicMock
from my_repo.rc import MyRepo
import my_repo
class TestSecretManagerMethod(TestCase):
def test_get_secret_value(self):
with patch.object(my_repo.rc.MyRepo, "id_lookup") as fake_bar_mock:
fake_bar_mock.get_secret_value.return_value = {
"SecretString": '{"secret": "gotsomecreds"}',
}
actual = MyRepo("eu-west-1").id_lookup("any-name")
self.assertEqual(actual, {"secret": "gotsomecreds"})
</code></pre>
<p>Now, I tried a <a href="https://stackoverflow.com/questions/8469680/using-mock-patch-to-mock-an-instance-method">SO post</a> to implement the same but the end result isn't matching. It gives results like below:</p>
<pre><code>self.assertEqual(actual, {"secret": "gotsomecreds"})
AssertionError: <MagicMock name='id_lookup()' id='4589498032'> != {'secret': 'gotsomecreds'}
</code></pre>
<p>I think I am close but unable to find out what exactly am I missing here.</p>
|
<python><python-3.x><unit-testing><python-unittest><python-unittest.mock>
|
2022-12-22 09:49:31
| 2
| 645
|
RushHour
|
74,886,418
| 3,415,597
|
Cannot install and import tensorflow_federated in colab
|
<p>I want to try a simple federated learning example in python. For it, I need to import tensorflow_federated package.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow_federated as tff
</code></pre>
<p>Here is the stack trace</p>
<pre class="lang-py prettyprint-override"><code>TypeError Traceback (most recent call last)
<ipython-input-6-961ae1555cfa> in <module>
----> 1 import tensorflow_federated as tff
14 frames
/usr/lib/python3.8/typing.py in _type_check(arg, msg, is_argument)
147 return arg
148 if not callable(arg):
--> 149 raise TypeError(f"{msg} Got {arg!r:.100}.")
150 return arg
151
TypeError: Callable[[arg, ...], result]: each arg must be a type. Got Ellipsis.
</code></pre>
<p>How should I resolve this error?<br />
BTW, I read in a forum that the problem might be resolved by updating the python version, however it still exists despite I updated it to v3.9<br />
The full stack trace is as follows (I had to submit a screenshot of it was misinterpreted by stackoverflow as some quotes and codes that are not in the right format)
<a href="https://i.sstatic.net/Skmez.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Skmez.png" alt="enter image description here" /></a></p>
|
<python><tensorflow-federated><federated-learning>
|
2022-12-22 09:18:37
| 0
| 465
|
HoOman
|
74,886,375
| 4,377,521
|
Best way to suppress mypy's "has no attribute" for global variable that is initialized later
|
<p>I have read other questions on this topic - and all of them are about another level (classes, methods or function).
I'm trying to find a way to suppress "has not attribute" message from mypy for a <strong>module level</strong> name.</p>
<p>Code looks like this</p>
<pre><code>service: Service | None = None
@on_startup
def init():
global service
inject.configure(config)
service = get_service()
</code></pre>
<p>So I am 100% sure that service will be not <code>None</code> whenever I will access it. The only way I see now is to mark each line with <code>service.*</code> as <code># type: ignore</code>. It leads to huge amount of comments. Is there any other way to let this code pass mypy?</p>
<p>Error message is: <code>Item "None" of "Optional[Service]" has no attribute "..."</code></p>
|
<python><mypy><linter>
|
2022-12-22 09:14:46
| 1
| 2,938
|
sashaaero
|
74,886,225
| 4,879,688
|
How can I get `stdout` back in a running cell of a reloaded Jupyter notebook?
|
<p>I run a Jupyter notebook. One cell contains a control loop of a CPU/memory intensive FEniCS computations. The loop uses <code>print(i)</code> as a progress indicator (where <code>i</code> is the iteration count). As <a href="https://stackoverflow.com/questions/74864501/how-can-i-get-pid-of-a-running-jupyter-notebook">there have been other notebooks running in parallel</a>, swapping happened and the frontend of the notebook has been reloaded. That left me with:</p>
<ul>
<li>the kernel process running (I can see it in the <code>htop</code> output),</li>
<li>the kernel is busy (I requested to execute another cell and I am still waiting, the cell is marked <code>[*]</code>),</li>
<li>the <code>stdio</code> output of the cell is frozen as it has been somewhen before the reloading (37%),</li>
<li>the cell is marked as neither executed nor running (<code>[ ]</code>),</li>
<li>the notebook favicon is "busy" (a hourglass),</li>
<li>the kernel ignoring (?) interrupt requests - it does not stop when I click the stop button (but I am used to such behaviour when I use FEniCS).</li>
</ul>
<p>From experience I know, that either:</p>
<ul>
<li>the cell is actually running, the state is there, and I can access it will be accessible after the execution is finished,</li>
<li>the cell has hanged (e.g. kernel is deadlocked performing some kind of busy waiting or so, IDK).</li>
</ul>
<p>I have interrupted the kernel (with either Jupyter notebook frontend or <code>kill -2</code> - I had tried both). The cell was actually running and the control loop was at 98% (no data lost though). But I still want to know <strong>whether there is any way I can get access to <code>stdout</code> stream in cases like this</strong> (without interrupting the execution unless it can be resumed exactly where it was interrupted).</p>
|
<python><python-3.x><jupyter-notebook><stdout><python-3.7>
|
2022-12-22 09:00:52
| 0
| 2,742
|
abukaj
|
74,886,170
| 11,883,900
|
use different array lists as a dropdown list on streamlit app
|
<p>I am developing a streamlit dashboard and I have different array lists that look like this</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import streamlit as st
svm_pred = np.array([1.50, 2.51, 3.34, 4.123, 5.22])
linear_pred = np.array([0.31, 2.11, 0.33, 4.0, 5])
bayesian_pred = np.array([1, 2.56, 3.89, 4, 5])
</code></pre>
<p>Now, I am using this arrays to plot different graphs and due to that I want the arrays to be selected from a dropdown list where when selected, it will automatically plot the graph.</p>
<p>Here is how I created my dropdown list:</p>
<pre class="lang-py prettyprint-override"><code>preds = {
'SVM Predictions': svm_pred,
'Polynomial Regression Predictions': linear_pred,
'Bayesian Ridge Regression Predictions': bayesian_pred,
}
model_predict = st.sidebar.selectbox(
"Select the model to predict : ", list(preds.keys()))
</code></pre>
<p>On plotting code, I call the model_predict selectbox to plot the chart,otherwise the chart will be empty.</p>
<pre class="lang-py prettyprint-override"><code>plot_predictions(adjusted_dates, world_cases, model_predict,
'SVM Predictions', 'purple')
plot_predictions(adjusted_dates, world_cases, model_predict,
'Polynomial Regression Predictions', 'orange')
plot_predictions(adjusted_dates, world_cases, model_predict,
'Bayesian Ridge Regression Predictions', 'green')
</code></pre>
<p>When I run the code, I get this error</p>
<blockquote>
<p>ValueError: Illegal format string "Polynomial Regression Predictions"; two marker symbols</p>
</blockquote>
<p>What am I missing and how can I resolve this?</p>
|
<python><arrays><numpy><streamlit>
|
2022-12-22 08:55:35
| 1
| 1,098
|
LivingstoneM
|
74,886,169
| 6,459,056
|
pytest no module named common
|
<p>I'm trying to get started with python and pytest, I have the following project structure :</p>
<pre><code>.
├── my_module
│ ├── __init__.py
│ ├── common
│ │ ├── __init__.py
│ │ └── utils.py
│ └── my_script.py
└── test
├── __init__.py
└── test_my_script.py
</code></pre>
<p>When I run tests (using <code>pytest</code>), I get an error:</p>
<pre><code>no module named 'common'.
</code></pre>
<p>I have also the following all configs files:</p>
<ul>
<li>tox.ini</li>
<li>setup.py</li>
<li>setup.cfg</li>
<li>pyproject.toml</li>
</ul>
<p><br> someone know what I missed?
<br> <strong>EDIT</strong>
here is how I import utils from test_my_script.py :</p>
<pre><code>from common.util import func1,func2
</code></pre>
|
<python><python-3.x><pytest><tox>
|
2022-12-22 08:55:31
| 1
| 3,121
|
aName
|
74,886,164
| 913,098
|
Pycharm doesn't recognize packages with remote interpreter
|
<h2>TL;DR - This is a PyCharm remote interpreter question.</h2>
<p>Remote libraries are not properly synced, and PyCharm is unable to index properly when using remote interpreter. Everything runs fine.</p>
<p>Following is the entire <s>(currently unsuccessful)</s> debug process</p>
<p><strong>See update section for a narrowing down of the problem</strong></p>
<hr />
<p>I am using a virtual environment created with <code>python -m venv venv</code>, then pointing to it as I always have using ssh interpreter. The exact same happens with conda as well.</p>
<p><a href="https://i.sstatic.net/CCWEu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CCWEu.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/O18vK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O18vK.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/3oGbx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3oGbx.png" alt="enter image description here" /></a></p>
<p>After configuring the interpreter, many of the installed packages are marked red by PyCharm, not giving auto complete, and not knowing these packages.</p>
<p>Here is the requirements.txt file, which is used with <code>pip install -r requirements.txt</code></p>
<pre><code>--index https:<our_internal_pypi_server>
--extra-index-url <some_external_pypi_server>
algo_api>=2.5.0
algo_flows>=2.4.0
DateTime==4.7
fastapi==0.88.0
imagesize==1.4.1
numpy==1.23.1
opencv_python==4.6.0.66
overrides==6.1.0
pydantic==1.9.0
pymongo==4.1.1
pytest==7.1.2
pytorch_lightning==1.6.4
PyYAML==6.0
scikit_learn==1.1.3
setuptools==59.5.0
tinytree==0.2.1
#torch==1.10.2+cu113
#torchvision==0.11.3+cu113
tqdm==4.64.0
uv_build_utils==1.4.0
uv_python_utils>=1.11.1
allegroai
pymongo[srv]
</code></pre>
<p>Here is <code>pip freeze</code></p>
<pre><code>absl-py==1.3.0
aggdraw==1.3.15
aiohttp==3.8.3
aiosignal==1.3.1
albumentations==1.3.0
algo-api==2.5.0
algo-flows==2.4.0
allegroai==3.6.1
altair==4.2.0
amqp==5.1.1
anomalib==0.3.2
antlr4-python3-runtime==4.9.3
anyio==3.6.2
astunparse==1.6.3
async-timeout==4.0.2
attrs==20.3.0
bcrypt==4.0.1
bleach==5.0.1
boto3==1.26.34
botocore==1.29.34
cachetools==5.2.0
certifi==2022.12.7
cffi==1.15.1
charset-normalizer==2.1.1
clearml==1.8.3
click==8.1.3
commonmark==0.9.1
contourpy==1.0.6
cpu-cores==0.1.3
cryptography==38.0.4
cycler==0.11.0
DateTime==4.7
decorator==5.1.1
deepmerge==1.1.0
dnspython==2.2.1
docker-pycreds==0.4.0
docopt==0.6.2
docutils==0.19
dotsi==0.0.3
efficientnet==1.0.0
einops==0.6.0
entrypoints==0.4
fastapi==0.88.0
ffmpy==0.3.0
fire==0.5.0
Flask==2.2.2
flatbuffers==1.12
focal-loss==0.0.7
fonttools==4.38.0
frozenlist==1.3.3
fsspec==2022.11.0
furl==2.1.3
future==0.18.2
gast==0.4.0
gitdb==4.0.10
GitPython==3.1.29
google-auth==2.15.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
gradio==3.15.0
grpcio==1.51.1
gunicorn==20.1.0
h11==0.14.0
h5py==3.7.0
httpcore==0.16.3
httpx==0.23.1
humanfriendly==9.2
idna==3.4
image-classifiers==1.0.0
imageio==2.23.0
imagesize==1.4.1
imgaug==0.4.0
importlib-metadata==5.2.0
importlib-resources==5.10.1
imutils==0.5.4
inflection==0.5.1
iniconfig==1.1.1
itsdangerous==2.1.2
jaraco.classes==3.2.3
jeepney==0.8.0
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.2.0
jsonschema==3.2.0
keras==2.9.0
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.2
keyring==23.13.1
kiwisolver==1.4.4
kmeans1d==0.3.1
kornia==0.6.8
libclang==14.0.6
linkify-it-py==1.0.3
luqum==0.11.0
Markdown==3.4.1
markdown-it-py==2.1.0
MarkupSafe==2.1.1
maskrcnn-benchmark==1.1.2+cu113
matplotlib==3.6.2
mdit-py-plugins==0.3.3
mdurl==0.1.2
ml-distillery==1.0.1
more-itertools==9.0.0
multidict==6.0.3
networkx==2.8.8
numpy==1.23.1
oauthlib==3.2.2
omegaconf==2.3.0
opencv-python==4.6.0.66
opencv-python-headless==4.6.0.66
opt-einsum==3.3.0
orderedmultidict==1.0.1
orjson==3.8.3
overrides==6.1.0
packaging==22.0
pandas==1.5.2
paramiko==2.12.0
pathlib==1.0.1
pathlib2==2.3.7.post1
pathtools==0.1.2
pika==1.3.1
Pillow==9.3.0
pkginfo==1.9.2
pluggy==1.0.0
ply==3.11
promise==2.3
protobuf==3.19.6
psd-tools==1.9.23
psutil==5.9.4
py==1.11.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyclipper==1.3.0.post4
pycocotools==2.0.6
pycparser==2.21
pycpd==2.0.0
pycryptodome==3.16.0
pydantic==1.9.0
pyDeprecate==0.3.2
pydub==0.25.1
pygit2==1.11.1
Pygments==2.13.0
pyhumps==3.8.0
PyJWT==2.4.0
pymongo==4.1.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyrsistent==0.19.2
pytest==7.1.2
python-dateutil==2.8.2
python-multipart==0.0.5
pytorch-lightning==1.6.4
pytz==2022.7
PyWavelets==1.4.1
PyYAML==6.0
qudida==0.0.4
readme-renderer==37.3
requests==2.28.1
requests-oauthlib==1.3.1
requests-toolbelt==0.10.1
rfc3986==1.5.0
rich==12.6.0
rsa==4.9
s3transfer==0.6.0
scikit-image==0.19.3
scikit-learn==1.1.3
scipy==1.9.3
SecretStorage==3.3.3
segmentation-models==1.0.1
sentry-sdk==1.12.1
setproctitle==1.3.2
shapely==2.0.0
shortuuid==1.0.11
six==1.16.0
sklearn==0.0.post1
smmap==5.0.0
sniffio==1.3.0
starlette==0.22.0
tensorboard==2.9.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.9.1
tensorflow-estimator==2.9.0
tensorflow-io-gcs-filesystem==0.29.0
termcolor==2.1.1
threadpoolctl==3.1.0
tifffile==2022.10.10
timm==0.5.4
tinytree==0.2.1
tomli==2.0.1
toolz==0.12.0
torch==1.10.2+cu113
torchmetrics==0.9.0
torchtext==0.11.2
torchvision==0.11.3+cu113
tqdm==4.64.0
twine==4.0.2
typing-utils==0.1.0
typing_extensions==4.4.0
uc-micro-py==1.0.1
urllib3==1.26.13
uv-build-utils==1.4.0
uv-envyaml==2.0.1
uv-python-serving==2.0.1
uv-python-utils==1.12.0
uvicorn==0.20.0
uvrabbit==1.4.1
validators==0.20.0
vine==5.0.0
wandb==0.12.17
webencodings==0.5.1
websockets==10.4
Werkzeug==2.2.2
windshield-grid-localisation==1.0.0.dev5
wrapt==1.14.1
yacs==0.1.8
yarl==1.8.2
zipp==3.11.0
zope.interface==5.5.2
</code></pre>
<p>The following minimal test program</p>
<pre><code>import pytest
import uv_python_utils
from importlib_metadata import version as version_query
from pkg_resources import parse_version
import requests
installed_pytest_version = parse_version(version_query('pytest'))
installed_uv_python_utils_version = parse_version(version_query('uv_python_utils'))
installed_importlib_metadata_version = parse_version(version_query('importlib_metadata'))
print(installed_pytest_version)
print(installed_uv_python_utils_version)
print(installed_importlib_metadata_version)
</code></pre>
<p>runs with output</p>
<pre><code>7.1.2
1.12.0
5.2.0
</code></pre>
<p>but in the IDE, it looks like this:</p>
<p><a href="https://i.sstatic.net/yWHTc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yWHTc.png" alt="enter image description here" /></a></p>
<p><a href="https://intellij-support.jetbrains.com/hc/en-us/requests/4567907" rel="nofollow noreferrer">Here</a> is the support ticket for JetBrains (not sure if visible for everyone or not). They were not able to help yet.</p>
<p>They offered, and I have done all of the following which did not help:</p>
<ol>
<li>Delete <code>~/.pycharm_helpers</code> on remote</li>
<li>Go to Help | Find Action... and search for "Registry...".
In the registry, search for python.use.targets.api and disable it.
Reconfigure your project interpreter.</li>
</ol>
<p>They looked in "the logs" (not sure which log), coming from Help --> "Collect Logs and Diagnostic Data", and saw the following</p>
<pre><code>at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:92)
2022-12-15 11:14:42,932 [ 478638] WARN - net.schmizz.sshj.xfer.FileSystemFile - Could not set permissions for C:\Users\noam.s\AppData\Local\JetBrains\PyCharm2022.3\remote_sources\-2115534621\.\site-packages__1.zip to 1a4
2022-12-15 11:14:42,986 [ 478692] WARN - net.schmizz.sshj.xfer.FileSystemFile - Could not set permissions for C:\Users\noam.s\AppData\Local\JetBrains\PyCharm2022.3\remote_sources\-2115534621\.\.state.json to 1a4
2022-12-15 11:14:43,077 [ 478783] WARN - net.schmizz.sshj.xfer.FileSystemFile - Could not set permissions for C:\Users\noam.s\AppData\Local\JetBrains\PyCharm2022.3\remote_sources\-2115534621\.\python3.8.zip to 1a4
</code></pre>
<p>I could not find any permission irregularities though.</p>
<p>I also tried to purge everything from Pycharm from both local and remote, and reinstall, and this persists.</p>
<ol>
<li>Uninstall PyCharm, resinstall an older version that works for a colleague (works on the same remote in the same directory for the colleague, so the problem is local)</li>
<li>Delete .idea</li>
<li>Delete <code>C:\Users\noam.s\AppData\Roaming\JetBrains</code></li>
<li>Obviously I tried invalidate caches & restart.</li>
</ol>
<p><strong><s>The libraries just don't get downloaded to the External Libraries</s> [See update below], as shown in the Project menu, which doesn't agree with <code>pip freeze</code></strong></p>
<p>In the venv case:</p>
<p><a href="https://i.sstatic.net/jbPPO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jbPPO.png" alt="enter image description here" /></a></p>
<p>In the conda case, the downloaded remote libraries don't even agree with the Pycharm interpreter screen!</p>
<p><a href="https://i.sstatic.net/MoeV6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MoeV6.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/KHKhy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KHKhy.png" alt="enter image description here" /></a></p>
<p>This really makes it hard for me to work and I am not able to find any workaround.
Any ideas?</p>
<hr />
<h2>Update - The problem occurs when Pycharm tries to unpack from <code>skeletons.zip</code>.</h2>
<p>I found a workaround to avoid the "reds":</p>
<ol>
<li>Open the Remote Libraries in explorer</li>
</ol>
<p><a href="https://i.sstatic.net/9vhF4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9vhF4.png" alt="enter image description here" /></a></p>
<ol start="2">
<li>Delete that folder.</li>
<li>Manually extract the folder from skeletons.zip</li>
<li>Reindex pycharm</li>
</ol>
<p>This gave the folowing warnings:</p>
<pre><code>! Attempting to correct the invalid file or folder name
! Renaming C:\Users\noam.s\AppData\Local\Temp\Rar$DRa30340.29792\756417188\uvrabbit\aux.py to C:\Users\noam.s\AppData\Local\Temp\Rar$DRa30340.29792\756417188\uvrabbit\_aux.py
</code></pre>
<p><a href="https://i.sstatic.net/ZLDMk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZLDMk.png" alt="enter image description here" /></a></p>
<p>but allowed me to start working. This is not a valid solution in my opinion though, as it required manual handling, rather then let the IDE do it's one job.</p>
<hr />
<ol>
<li>Why does this happen?</li>
<li>How to fix it?</li>
<li>How to avoid it?</li>
</ol>
|
<python><windows><pycharm><remote-debugging>
|
2022-12-22 08:54:58
| 3
| 28,697
|
Gulzar
|
74,885,954
| 11,332,693
|
Checking if any string element of the column is matching with other column string list in python
|
<p>df</p>
<pre><code> CAR1 CAR2
['ford','hyundai'] ['ford','hyundai']
['ford','hyundai'] ['hyundai','nissan']
['ford','hyundai'] ['bmw', 'audi']
</code></pre>
<p>Expected output :</p>
<pre><code> CAR1 CAR2 Flag
['ford','hyundai'] ['ford','hyundai'] 1
['ford','hyundai'] ['hyundai','nissan'] 1
['ford','hyundai'] ['bmw', 'audi'] 0
</code></pre>
<p>Raise flag 1 in case of any elements/string from CAR1 matches with CAR2, else raise flag 0</p>
<p>My try is:</p>
<pre><code>df[[x in y for x,y in zip(df['CAR1'], df['CAR2'])]
</code></pre>
|
<python><pandas><string><list><dataframe>
|
2022-12-22 08:31:31
| 2
| 417
|
AB14
|
74,885,777
| 4,619,958
|
How to implement associated types in Python/Mypy? Or, what to do when wanting sub-classes to have subclass arguments?
|
<p>Consider the (simplified) code, where I want <code>Lean*</code> and <code>Isabelle*</code> extend the <code>Base*</code>.</p>
<pre class="lang-py prettyprint-override"><code>class BaseProblem: ...
class BaseStep: ...
class LeanProblem(BaseProblem): ...
class LeanStep(BaseStep): ...
class IsabelleProblem(BaseProblem): ...
class IsabelleStep(BaseStep): ...
class BaseProver:
def f(self, problem: BaseProblem, step: BaseStep): ...
class LeanProver(BaseProver):
def f(self, problem: LeanProblem, step: LeanStep): ...
class IsabelleProver(BaseProver):
def f(self, problem: IsabelleProblem, step: IsabelleStep): ...
</code></pre>
<p>However, the <code>f</code> function will have problem in mypy:</p>
<pre><code>Argument 1 of "f" is incompatible with supertype "LeanProblem";
supertype defines the argument type as "BaseProblem" [override]
</code></pre>
<p>I know it can be solved by generics, such as:</p>
<pre class="lang-py prettyprint-override"><code>TProblem = TypeVar('TProblem', bound=BaseProblem)
TStep = TypeVar('TStep', bound=BaseStep)
class BaseProver(Generic[TProblem, TStep]):
def f(self, problem: TProblem, step: TStep): ...
class LeanProver(BaseProver[LeanProblem, LeanStep]):
def f(self, problem: LeanProblem, step: LeanStep): ...
...
</code></pre>
<p>However, instead of only "Problem" and "Step", indeed I have more of such types (say, 10). Thus, the approach to use generic will be quite ugly IMHO.</p>
<p>When using Rust, I know we can have <a href="https://doc.rust-lang.org/beta/rust-by-example/generics/assoc_items/types.html" rel="nofollow noreferrer">associated types</a> and it can solve the problem; but I have not found an equivalent in Python.</p>
|
<python><types><mypy><type-systems>
|
2022-12-22 08:15:36
| 0
| 17,865
|
ch271828n
|
74,885,555
| 10,710,625
|
Transform one row to a data frame with multiple rows
|
<p>I have a data frame containing one row:</p>
<pre class="lang-py prettyprint-override"><code>df_1D = pd.DataFrame({'Day1':[5],
'Day2':[6],
'Day3':[7],
'ID':['AB12'],
'Country':['US'],
'Destination_A':['Miami'],
'Destination_B':['New York'],
'Destination_C':['Chicago'],
'First_Agent':['Jim'],
'Second_Agent':['Ron'],
'Third_Agent':['Cynthia']},
)
Day1 Day2 Day3 ID ... Destination_C First_Agent Second_Agent Third_Agent
0 5 6 7 AB12 ... Chicago Jim Ron Cynthia
</code></pre>
<p>I'm wondering if there's an easy way, to transform it into a dataframe with three rows as shown here:</p>
<pre><code> Day ID Country Destination Agent
0 5 AB12 US Miami Jim
1 6 AB12 US New York Ron
2 7 AB12 US Chicago Cynthia
</code></pre>
|
<python><pandas><dataframe>
|
2022-12-22 07:55:31
| 5
| 739
|
the phoenix
|
74,885,464
| 7,826,511
|
How to configure BaseURL in Django4?
|
<p>I'm new to <code>django</code> and trying to set a <code>baseURL</code> in <code>django4</code>. I came upon <a href="https://stackoverflow.com/questions/17420478/how-to-set-base-url-in-django">How to set base URL in Django</a> of which the solution is:</p>
<pre><code>from django.conf.urls import include, url
from . import views
urlpatterns = [
url(r'^someuri/', include([
url(r'^admin/', include(admin.site.urls) ),
url(r'^other/$', views.other)
])),
]
</code></pre>
<p>but this import statement:</p>
<pre><code>from django.conf.urls import url
</code></pre>
<p>shows:</p>
<pre><code>Cannot find reference 'url' in '__init__.py'
</code></pre>
<p>What am I doing wrong?</p>
|
<python><django>
|
2022-12-22 07:44:21
| 1
| 6,153
|
Artur Müller Romanov
|
74,885,462
| 3,247,006
|
"&" and "|" vs "and" and "or" for "AND" and "OR" operators in Django
|
<p>I have <strong><code>Blog</code> model</strong> below. *I use <strong>Django 3.2.16</strong> and <strong>PostgreSQL</strong>:</p>
<pre class="lang-py prettyprint-override"><code># "store/models.py"
from django.db import models
class Blog(models.Model):
post = models.TextField()
def __str__(self):
return self.post
</code></pre>
<p>Then, <strong><code>store_blog</code> table</strong> has <strong>2 rows</strong> below:</p>
<h3><code>store_blog</code> table:</h3>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>post</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>1</strong></td>
<td><strong>Python is popular and simple.</strong></td>
</tr>
<tr>
<td><strong>2</strong></td>
<td><strong>Java is popular and complex.</strong></td>
</tr>
</tbody>
</table>
</div>
<p>Then, when running <strong>the <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet.filter" rel="nofollow noreferrer">filter()</a> code</strong> using <strong><code>&</code></strong> or <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.Q" rel="nofollow noreferrer"><strong>Q()</strong></a> and <strong><code>&</code></strong> or using <strong><code>and</code></strong> or <strong><code>Q()</code></strong> and <strong><code>and</code></strong> in <strong><code>test()</code> view</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from .models import Blog
from django.db.models import Q
from django.http import HttpResponse
def test(request):
# With "&"
# ↓ Here
qs = Blog.objects.filter(post__contains="popular") & \
Blog.objects.filter(post__contains="simple")
print(qs)
# With "Q()" and "&"
# ↓ Here # ↓ Here
qs = Blog.objects.filter(Q(post__contains="popular") &
Q(post__contains="simple"))
print(qs) # ↑ Here
# With "and"
# ↓ Here
qs = Blog.objects.filter(post__contains="popular") and \
Blog.objects.filter(post__contains="simple")
print(qs)
# With "Q()" and "and"
# ↓ Here # ↓ Here
qs = Blog.objects.filter(Q(post__contains="popular") and
Q(post__contains="simple"))
print(qs) # ↑ Here
return HttpResponse("Test")
</code></pre>
<p>I got the same result below:</p>
<pre class="lang-none prettyprint-override"><code><QuerySet [<Blog: Python is popular and simple.>]> # With "&"
<QuerySet [<Blog: Python is popular and simple.>]> # With "Q()" and "&"
<QuerySet [<Blog: Python is popular and simple.>]> # With "and"
<QuerySet [<Blog: Python is popular and simple.>]> # With "Q()" and "and"
[22/Dec/2022 16:04:45] "GET /store/test/ HTTP/1.1" 200 9
</code></pre>
<p>And, when running <strong>the <code>filter()</code> code</strong> using <strong><code>|</code></strong> or <strong><code>Q()</code></strong> and <strong><code>|</code></strong> or using <strong><code>or</code></strong> or <strong><code>Q()</code></strong> and <strong><code>or</code></strong> in <strong><code>test()</code> view</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from .models import Blog
from django.db.models import Q
from django.http import HttpResponse
def test(request):
# With "|"
# ↓ Here
qs = Blog.objects.filter(post__contains="popular") | \
Blog.objects.filter(post__contains="simple")
print(qs)
# With "Q()" and "|"
# ↓ Here # ↓ Here
qs = Blog.objects.filter(Q(post__contains="popular") |
Q(post__contains="simple"))
print(qs) # ↑ Here
# With "or"
# ↓ Here
qs = Blog.objects.filter(post__contains="popular") or \
Blog.objects.filter(post__contains="simple")
print(qs)
# With "Q()" and "or"
# ↓ Here # ↓ Here
qs = Blog.objects.filter(Q(post__contains="popular") or
Q(post__contains="simple"))
print(qs) # ↑ Here
return HttpResponse("Test")
</code></pre>
<p>I got the same result below:</p>
<pre class="lang-none prettyprint-override"><code><QuerySet [<Blog: Python is popular and simple.>, <Blog: Java is popular and complex.>]> # With "|"
<QuerySet [<Blog: Python is popular and simple.>, <Blog: Java is popular and complex.>]> # With "Q()" and "|"
<QuerySet [<Blog: Python is popular and simple.>, <Blog: Java is popular and complex.>]> # With "or"
<QuerySet [<Blog: Python is popular and simple.>, <Blog: Java is popular and complex.>]> # With "Q()" and "or"
[22/Dec/2022 16:20:27] "GET /store/test/ HTTP/1.1" 200 9
</code></pre>
<p>So, are there any differences between <strong><code>&</code></strong> and <strong><code>and</code></strong> and <strong><code>|</code></strong> and <strong><code>or</code></strong> for <strong><code>AND</code> and <code>OR</code> operators</strong> in Django?</p>
|
<python><python-3.x><django><sql>
|
2022-12-22 07:44:05
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
74,885,227
| 16,727,671
|
How to run command from variable instead of those line in python?
|
<p>I have below commands for run az aks command:</p>
<pre><code>from azure.cli.core import get_default_cli
az_cli = get_default_cli()
res = az_cli.invoke(['login', '--service-principal', '-u', client_id, '-p', client_secret,'--tenant',tenant_id])
az_cli.invoke(['aks','command','invoke','--resource-group',resourcegroup,'--name',clustername,'--command','kubectl apply -f',outfile_final])
</code></pre>
<p>I want as below,</p>
<pre><code>azcmd = "az login --service-principal -u " + client_id + " -p " + client_secret + " --tenant " + tenant_id
**res = az_cli.invoke([azcmd])**
</code></pre>
<p>but Above script is giving error like <strong>args should be a list or tuple</strong>,
and 2nd error:
<a href="https://i.sstatic.net/bmptr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bmptr.png" alt="enter image description here" /></a>
Is there anyways to run invoke with get input from variable.</p>
<p>Edit1:
I'm applying deployment file as below:</p>
<pre><code>namespace = "kubectl apply -f abc.yaml"
</code></pre>
|
<python><azure><azure-cli>
|
2022-12-22 07:20:04
| 1
| 448
|
microset
|
74,885,225
| 11,479,825
|
Cast features to ClassLabel
|
<p>I have a dataset with type dictionary which I converted to <code>Dataset</code>:</p>
<p>ds = datasets.Dataset.from_dict(bio_dict)</p>
<p>The shape now is:</p>
<pre><code>Dataset({
features: ['id', 'text', 'ner_tags', 'input_ids', 'attention_mask', 'label'],
num_rows: 8805
})
</code></pre>
<p>When I use the <code>train_test_split</code> function of <code>Datasets</code> I receive the following error:</p>
<pre><code>train_testvalid = ds.train_test_split(test_size=0.5, shuffle=True, stratify_by_column="label")
</code></pre>
<blockquote>
<p>ValueError: Stratifying by column is only supported for ClassLabel
column, and column label is Sequence.</p>
</blockquote>
<p>How can I change the type to ClassLabel so that stratify works?</p>
|
<python><huggingface-transformers><huggingface-datasets>
|
2022-12-22 07:19:48
| 1
| 985
|
Yana
|
74,885,183
| 1,762,051
|
How to create zoom bot which can join and record meeting using official zoom API or SDK way?
|
<p>I want to create zoom bot which can join and record meeting using official way using zoom API or SDK.</p>
<p>I saw there exists <a href="https://marketplace.zoom.us/docs/api-reference/introduction/" rel="nofollow noreferrer">ZOOM API Reference</a>, <a href="https://marketplace.zoom.us/docs/sdk/native-sdks/introduction/" rel="nofollow noreferrer">Zoom Meeting SDKs</a> and <a href="https://marketplace.zoom.us/docs/zoom-apps/guides/meeting-bots-sdk-media-streams/" rel="nofollow noreferrer">Meeting Bots: Accessing Media Streams</a>.<br />
In <a href="https://marketplace.zoom.us/docs/api-reference/introduction/" rel="nofollow noreferrer"><code>ZOOM API Reference</code></a> I did not find anything using which I bot can join and record meeting.<br />
In <a href="https://marketplace.zoom.us/docs/sdk/native-sdks/introduction/" rel="nofollow noreferrer"><code>Zoom Meeting SDKs</code></a> I did not find anything which I can use in my <a href="https://en.wikipedia.org/wiki/Python_(programming_language)" rel="nofollow noreferrer">Python</a> or <a href="https://en.wikipedia.org/wiki/Node.js" rel="nofollow noreferrer">Node.js</a> script for automation of joining and recording meeting.<br />
In <a href="https://marketplace.zoom.us/docs/zoom-apps/guides/meeting-bots-sdk-media-streams/" rel="nofollow noreferrer"><code>Meeting Bots: Accessing Media Streams</code></a> I found that its exists for Window and MacOS so I am not sure if I can use it in my standalone python or node.js script.</p>
<p>So what is right API or SDK for creating bot which can join and record meeting?</p>
|
<python><node.js><zoom-sdk>
|
2022-12-22 07:14:58
| 1
| 10,924
|
Alok
|
74,885,052
| 16,082,534
|
FHIR Converter for Python
|
<p>Is there is any function available in python to convert the given json input into HL7 FHIR format. By passing the Liquid Template (Shopify) and the input source data.</p>
|
<python><shopify><hl7-fhir><liquid-template>
|
2022-12-22 06:55:53
| 1
| 1,326
|
Akram
|
74,885,035
| 11,922,765
|
Import data from another dataframe for matching cells
|
<p>I have two data frames that I import from the excel sheets. There is some information I need to import from auxiliary dataframe to main dataframe if there is a matching.
My code:</p>
<pre><code>auxdf =pd.DataFrame({'prod':['look','duck','chik']},index=['prod_team1','prod_team2','prod_team3'])
auxdf =
prod
prod_team1 look
prod_team2 duck
prod_team3 chik
# Main dataframe after importing from an excel sheet
maindf =
col1 col2
mar_team1 aoo auxdf['prod_team1']
mar_team2 auxdf.loc['prod_team2'] bla
mar_team3 foo auxdf['prod_team3']
# I want to import information from auxdf into maindf
for i in range(len(mdf)):
for j in range(len(mdf.columns)):
# Check if a cell value has a string called 'auxdf', if so, change its value
try:
if 'auxdf' in maindf[maindf.columns[0]].iloc[0]:
maindf[maindf.columns[0]].iloc[0] = eval(maindf[maindf.columns[0]].iloc[0])
except:
pass
</code></pre>
<p>Expected output:</p>
<pre><code>maindf =
col1 col2
mar_team1 aoo look
mar_team2 duck bla
mar_team3 foo chik
</code></pre>
<p>Need help to find most pythonic way of doing it. Thanks</p>
|
<python><excel><pandas><dataframe><numpy>
|
2022-12-22 06:53:49
| 3
| 4,702
|
Mainland
|
74,885,010
| 988,279
|
Flask. Roles_required decorator no more working after Flask upgrade
|
<p>When I upgrade my Flask project (Flask 1.0.2, Flask-Login 0.4.1) to the latest versions, the roles_required decorator no longer works. The "login" itself is working.</p>
<p>models.py</p>
<pre><code>...
from flask_user import UserMixin
class ApiUserRoles(db.Model):
__tablename__ = 'apiuser_roles'
user_id = db.Column("user_id", db.Text, db.ForeignKey("public.apiuser.id", use_alter=True), nullable=False,
primary_key=True)
role_id = db.Column("role_id", db.Integer, db.ForeignKey("public.apiroles.id", use_alter=True), nullable=False,
primary_key=True)
description = db.Column("description", db.String(50))
class ApiUser(db.Model, UserMixin):
__tablename__ = 'apiuser'
id = db.Column("id", db.Text, nullable=False, primary_key=True)
roles = db.relationship('ApiRoles', secondary='public.apiuser_roles')
password_hash = db.Column("password_hash", db.Text, nullable=False)
description = db.Column("description", db.String(50))
</code></pre>
<p>auth.py</p>
<pre><code>from flask import session
from flask_httpauth import HTTPBasicAuth
from flask_user import UserManager
class ApiUserManager(UserManager):
auth = None
def __init__(self, app, db, User, auth):
super(ApiUserManager, self).__init__(app, db, User)
self.auth = auth
def unauthenticated_view(self):
return self.auth.auth_error_callback(401)
def unauthorized_view(self):
return self.auth.auth_error_callback(403)
class ApiAuth(HTTPBasicAuth):
def authenticate(self, auth, stored_password):
result = super(ApiAuth, self).authenticate(auth, stored_password)
if result and auth:
session['user_id'] = auth.get('username')
return result
auth = ApiAuth()
</code></pre>
<p>init.py</p>
<pre><code>from auth import ApiUserManager, auth as auth_object
from models import ApiUser
def init(app):
app.app.config['SQLALCHEMY_DATABASE_URI'] = os.getenv('database_url')
db.init_app(app.app)
db.app = app.app
# Flask-User settings
app.app.config['USER_APP_NAME'] = 'api'
app.app.config['USER_ENABLE_EMAIL'] = False
...
user_manager = ApiUserManager(app.app, db, ApiUser, auth_object)
# the following lines are no more working
login_manager = user_manager.login_manager
@login_manager.user_loader
def load_user(user_id):
try:
return ApiUser.query.filter(ApiUser.id == user_id).first()
except:
return None
...
def create_app(config_name):
app = connexion.FlaskApp(__name__)
with app.app.app_context():
init(app)
return app.app
</code></pre>
<p>When I add a breakpoint in my load_user method, the "old Flask application" is hitting the breakpoint. The "new Flask" does not reach this method.</p>
<p>When I call some endpoint e.g.</p>
<pre><code>@auth.login_required
@roles_required('Read')
def getFoo():
...
</code></pre>
<p>I get an "Unauthorized Access"</p>
<p>When I remove the @roles_required decorator from the endpoint, everything works fine.
Has someone an idea?</p>
|
<python><flask>
|
2022-12-22 06:50:56
| 1
| 522
|
saromba
|
74,884,921
| 1,588,857
|
How to add attributes to a `enum.StrEnum`?
|
<p>I have an <code>enum.StrEnum</code>, for which I want to add attributes to the elements.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>class Fruit(enum.StrEnum):
APPLE = ("Apple", { "color": "red" })
BANANA = ("Banana", { "color": "yellow" })
>>> str(Fruit.APPLE)
"Apple"
>>> Fruit.APPLE.color
"red"
</code></pre>
<p>How can I accomplish this? (I'm running Python 3.11.0.)</p>
<p><sub>This question is not a duplicate of <a href="https://stackoverflow.com/questions/12680080/python-enums-with-attributes">this one</a>, which asks about the original <code>enum.Enum</code>.</sub></p>
|
<python><python-3.x><enums>
|
2022-12-22 06:36:15
| 1
| 2,844
|
midrare
|
74,884,887
| 4,377,095
|
Pandas setting column values to adjacent ones for duplicates found in a key column
|
<p>Assuming we have the following table:</p>
<pre><code>+---------+------+------+------+------+------+------+------+------+
| COL_ID | ColB | COLC | COLD | COLE | COLF | COLG | COLH | COLI |
+---------+------+------+------+------+------+------+------+------+
| aa1 | 1 | 1 | | | | | | |
| aa1 | 2 | 1 | | | | | | |
| aa2 | 3 | 1 | | | | | | |
| ab3 | 6 | 2 | | | | | | |
| ab3 | 5 | 2 | | | | | | |
| ab3 | 7 | 1 | | | | | | |
| ab3 | 1 | 1 | | | | | | |
+---------+------+------+------+------+------+------+------+------+
</code></pre>
<p>How can we assign the values of duplicates in the adjacent column if a duplicate is found?</p>
<pre><code>+---------+------+------+------+------+------+------+------+------+
| COL_ID | ColB | COLC | COLD | COLE | COLF | COLG | COLH | COLI |
+---------+------+------+------+------+------+------+------+------+
| aa1 | 1 | 1 | 1 | 1 | 2 | 1 | | |
| aa2 | 3 | 1 | | | | | | |
| ab3 | 6 | 2 | 5 | 2 | 7 | 1 | 1 | 1 |
+---------+------+------+------+------+------+------+------+------+
</code></pre>
<p>Here is the sample code to generate this table</p>
<pre><code>import pandas as pd
import numpy as np
my_dic = {'COL_ID': ['aa1', 'aa1', 'aa2', 'ab3','ab3','ab3','ab3'],
'COLB': [1,2,3,6,5,7,1],
'COLC': [1,1,1,2,2,1,1],
'COLD':[np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan],
'COLF':[np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan],
'COLG':[np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan],
'COLH':[np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan],
'COLI:':[np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan]}
dp = pd.DataFrame(my_dic)
</code></pre>
|
<python><pandas>
|
2022-12-22 06:31:42
| 1
| 537
|
Led
|
74,884,821
| 10,710,625
|
Remove rows from grouped data frames based on column values
|
<p>I would like to remove from each subgroup in a data frame, the rows which satisfy certain conditions. The subgroups are grouped based on the two columns <code>Days</code> & <code>ID</code>, here's my data frame:</p>
<pre><code>df = pd.DataFrame({'Days':[5,5,5,5,6,6],
'ID':['A11','A11','A11','A11','B12','B12'],
'Country':['DE','DE','FR','US','US','US'],
'Car1':['BMW','Volvo','Audi','BMW','Mercedes','BMW'],
'Car2':['Volvo','Mercedes','BMW','Volvo','Volvo','Volvo'],
'Car3':['Mercedes',nan,'Volvo',nan,nan,nan]},
)
Days ID Country Car1 Car2 Car3
0 5 A11 DE BMW Volvo Mercedes
1 5 A11 DE Volvo Mercedes nan
2 5 A11 FR Audi BMW Volvo
3 5 A11 US BMW Volvo nan
4 6 B12 US Mercedes Volvo nan
5 6 B12 US BMW Volvo nan
</code></pre>
<p>I would like to remove the rows from each group satisfying the following three conditions:</p>
<pre><code>1. Car3=nan
2. Car1=Car2 (from another row within the group)
3. Car2=Car3 (from another row within the group)
</code></pre>
<p>The expected data frame I would like to have:</p>
<pre><code> Days ID Country Car1 Car2 Car3
0 5 A11 DE BMW Volvo Mercedes
1 5 A11 FR Audi BMW Volvo
2 6 B12 US Mercedes Volvo nan
3 6 B12 US BMW Volvo nan
</code></pre>
|
<python><pandas><dataframe><filter><group-by>
|
2022-12-22 06:23:13
| 1
| 739
|
the phoenix
|
74,884,772
| 10,226,040
|
Regex - how can I get the last tag in this string
|
<p>I have a string</p>
<pre><code>"<li style="-moz-float-edge: content-box">... that in <i><b><a href="/wiki/La%C3%9Ft_uns_sorgen,_la%C3%9Ft_uns_wachen,_BWV_213" title="Lat uns sorgen, lat uns wachen, BWV 213">Die Wahl des Herkules</a></b></i>, Hercules must choose between the good cop and the bad cop?<br style="clear:both;" />"
</code></pre>
<p>and I want to get the <strong>last tag</strong></p>
<pre><code>"<br style="clear:both;" />"
</code></pre>
<p>My re - <code>r'[<]([\w]+\b)(.^<)+[/][>]'</code> doesn't work. I expected to find match by excluding <code>'<'</code> symbol.</p>
<p><a href="https://regex101.com/r/BDD30S/1" rel="nofollow noreferrer">https://regex101.com/r/BDD30S/1</a></p>
|
<python><regex>
|
2022-12-22 06:16:30
| 4
| 311
|
Chainsaw
|
74,884,688
| 19,321,677
|
How to aggregate dataframe and sum by boolean columns?
|
<p>I have this df and want to aggregate it so that the last 2 columns sum up and reduce duplicates per user id.</p>
<p>current</p>
<pre><code>user_id | name | product | ...| purchase_flag | retention_flag
123 | John | book | ...| 0 | 1
123 | John | book | ...| 1 | 0
....
</code></pre>
<p>desired state</p>
<pre><code>user_id | name | product | ...| purchase_flag | retention_flag
123 | John | book | ...| 1 | 1
....
</code></pre>
<p>I have a total of 100 columns, so doing a groupby manually in pandas will not be feasible. How do I group by all columns in the df and then sum by the purchase_flag and retention_flag?</p>
<p>I attempted:</p>
<pre><code>df.groupby([how to put all cols here expect the flag columns?]).agg({'purchase_flag':'sum','retention_flag':'sum',})
</code></pre>
<p>How do I finish this?</p>
|
<python><pandas>
|
2022-12-22 06:05:08
| 1
| 365
|
titutubs
|
74,884,660
| 149,900
|
pip specify package version either '==x.y' or '>=a.b'
|
<p>When installing with <code>pip</code> -- or by specifying through <code>requirements.txt</code> -- how do I specify that the version is <em>either</em>:</p>
<ul>
<li><code>==x.y</code>, <em>or</em></li>
<li><code>>=a.b</code></li>
</ul>
<p>where <code>x.y < a.b</code>.</p>
<p>For example, I want a package to be <em>either</em> <code>==5.4</code> <em>or</em> <code>>=6.1</code>.</p>
<p>Let's say I need to do this because:</p>
<ul>
<li>I want my program to run on Python>=3.7</li>
<li>The last package supported for Python 3.7 is "5.4"</li>
<li>For Python>3.7, the latest package is "6.1.*"</li>
<li>I am avoiding "6.0.*" because of a slight incompatibility, which had been fixed in "6.1.*", and I want <code>pip</code> to not spend any time trying to check the "6.0.*" line</li>
</ul>
|
<python><pip>
|
2022-12-22 06:00:48
| 2
| 6,951
|
pepoluan
|
74,884,618
| 574,308
|
Dockerized python.django app does not run in the browser
|
<p>I have a python-Django app built in ubuntu OS that I am trying to containerize. Docker builds and runs with no errors. But when I try to browse the site in a browser, I do not see it.</p>
<p>This is my allowed hosts in settings.py</p>
<pre><code>ALLOWED_HOSTS = ["localhost", "127.0.0.1","0.0.0.0"]
</code></pre>
<p>And this is my dockerfile</p>
<pre><code>#base image
FROM python:3.10
# setup environment variable
ENV DockerHOME=/home/app
RUN mkdir -p $DockerHOME
WORKDIR $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip3 install --upgrade pip
# copy the whole project to your docker home directory.
COPY . $DockerHOME
# run this command to install all dependencies
RUN pip3 install -r requirements.txt
# port where the Django app runs
EXPOSE 8000
# start server
CMD python3 manage.py runserver 0.0.0.0:8000
</code></pre>
<p>I tried to run it by the following commands with no luck.</p>
<pre><code>sudo docker run -p 8000:8000 myapp:v0
#also the following
sudo docker run myapp:v0
</code></pre>
<p>I am browsing the site with <a href="http://0.0.0.0:8000/" rel="nofollow noreferrer">http://0.0.0.0:8000/</a>
I tried with the docker IP <a href="http://172.17.0.2:80000" rel="nofollow noreferrer">http://172.17.0.2:80000</a></p>
<p>So not sure what am I missing. Any idea would be very much appreciated.</p>
<p>EDIT:</p>
<p>I have added docker-compose file. This is what I have.</p>
<pre><code>version: '3.8'
services:
web:
build: .
command: python3 manage.py runserver 0.0.0.0:8000
container_name: my_app
volumes:
- .:/home/app
ports:
- "8000:8000"
</code></pre>
<p>And then I tried</p>
<pre><code>docker-compose build
docker-compose up -d
</code></pre>
<p>still do not see the app in browser.
When I run "docker ps", I can see the container ID and status and other things.</p>
|
<python><django><docker>
|
2022-12-22 05:53:50
| 1
| 2,750
|
Reza.Hoque
|
74,884,538
| 3,004,472
|
python regex to extract only specific pattern file names from a list
|
<p>I would like to extract only certain files from a list. I need to apply following rules while extracting only selected files from a list.</p>
<p>if the file contains patterns like <strong>f[1-99]</strong> or <strong>t[1-99]</strong> or <strong>v[1-99]</strong> or combination of <strong>f[1-9]_v[1-9]_t[1-9]</strong>. below are some sample.</p>
<pre><code>phone_football_androind_1_v1_te_t1_fe
phone_football_ios_v1_t1
foot_cricket2345678_f12_t4
tfd_fr_ve_t1_v1_f3_201234_yyymmmdd
def_000_t4_f1
file_job_1234567_f1_t55
ROKLOP_f33_t44
agdcv_t45
gop_gop_f1_t14_v14
file_op_v1_t1
fop_f1_v1_1223
</code></pre>
<p>could u lease help how to check if the above patterns contains in the files and take only file with following patterns? I have tried following but stuck with reges in python. not sure how to add OR condition in regex</p>
<pre><code>import re
# Take input from users
MyString1 = "tfd_fr_ve_t1_v1_f3_201234_yyymmmdd"
# re.search() returns a Match object
# if there is a match anywhere in the string
if re.search('(_v(\d+)).*', MyString1):
print("YES,it is present in string ")
else:
print("NO,string is not present")
</code></pre>
|
<python><regex><python-re>
|
2022-12-22 05:43:53
| 1
| 880
|
BigD
|
74,884,385
| 5,928,682
|
Print a string with " " in place of '' in python
|
<p>I am trying a create a role in AWS.</p>
<p>These role details are coming in from a json object.</p>
<pre><code>shared ={
"mts":{
"account_id":"11111",
"workbench":"aaaaa",
"prefix":"zzzz"
},
"tsf":{
"account_id":"22222",
"workbench":"bbbbb",
"prefix":"yyyy"
}
}
role_arn = []
for x in shared:
role = f"arn:aws:iam::'{shared[x]['account_id']}':role/'{shared[x]['prefix']}'_role"
role_arn.append(role)
print(role_arn)
</code></pre>
<p>the out output:</p>
<pre><code>["arn:aws:iam::'11111':role/'zzzz'_role", "arn:aws:iam::'22222':role/'yyyy'_role"]
</code></pre>
<p>the <code>account_id</code> is being represented in <code>''</code> quotes which I want to avoid.</p>
<p>What I am expecting is something like this</p>
<pre><code>["arn:aws:iam::11111:role/zzzz_role", "arn:aws:iam::22222:role/yyyy_role"]
</code></pre>
<p>How can I achieve this programmatically?</p>
|
<python><amazon-web-services>
|
2022-12-22 05:16:47
| 4
| 677
|
Sumanth Shetty
|
74,884,154
| 9,542,989
|
Dynamically Enable/Disable Form Fields in Streamlit
|
<p>I have a form in Streamlit that looks something like this,</p>
<pre><code>with st.form("my_form"):
text_input_1 = st.text_input("Input 1")
drop_down_1 = st.selectbox("Dropdown 1", ["Option 1", "Option 2"])
drop_down_2 = st.selectbox("Dropdown 2", ["Option 3", "Option 4"])
submitted = st.form_submit_button("Submit")
if submitted:
submission_state = st.text('Form Submitted!')
</code></pre>
<p>Now, I want my second dropdown field, i.e. <code>drop_down_2</code> to appear only if a certain value in <code>drop_down_1</code> is selected, say <code>Option 1</code>.</p>
<p>How can I achieve this?</p>
<p>I tried something like this, but it did not work.</p>
<pre><code>with st.form("my_form"):
is_drop_down_2_disabled = True
text_input_1 = st.text_input("Input 1")
drop_down_1 = st.selectbox("Dropdown 1", ["Option 1", "Option 2"])
if drop_down_1 == 'Option 1':
is_drop_down_2_disabled = False
drop_down_2 = st.selectbox("Dropdown 2", ["Option 3", "Option 4"], disabled=is_drop_down_2_disabled)
submitted = st.form_submit_button("Submit")
if submitted:
submission_state = st.text('Form Submitted!')
</code></pre>
<p>I assume that this is because the variables within the form are not assigned any values until the <code>Submit</code> button is clicked.</p>
|
<python><python-3.x><streamlit>
|
2022-12-22 04:34:15
| 1
| 2,115
|
Minura Punchihewa
|
74,884,117
| 3,247,006
|
How to run "AND" operator with "filter()" without "SyntaxError: keyword argument repeated:" error in Django?
|
<p>I have <strong><code>Blog</code> model</strong> below. *I use <strong>Django 3.2.16</strong> and <strong>PostgreSQL</strong>:</p>
<pre class="lang-py prettyprint-override"><code># "store/models.py"
from django.db import models
class Blog(models.Model):
post = models.TextField()
def __str__(self):
return self.post
</code></pre>
<p>Then, <strong><code>store_blog</code> table</strong> has <strong>2 rows</strong> below:</p>
<h3><code>store_blog</code> table:</h3>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>post</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>1</strong></td>
<td><strong>Python is popular and simple.</strong></td>
</tr>
<tr>
<td><strong>2</strong></td>
<td><strong>Java is popular and complex.</strong></td>
</tr>
</tbody>
</table>
</div>
<p>Then, when writing <strong>the <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet.filter" rel="nofollow noreferrer">filter()</a> code</strong> with <strong>2 <code>post__contains</code> arguments</strong> in <strong><code>test()</code> view</strong> to run <strong><code>AND</code> operator</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/views.py"
from .models import Blog
from django.http import HttpResponse
def test(request):
qs = Blog.objects.filter(
post__contains="popular", post__contains="simple"
) # ↑ ↑ ↑ Here ↑ ↑ ↑ # ↑ ↑ ↑ Here ↑ ↑ ↑
print(qs)
return HttpResponse("Test")
</code></pre>
<p>I got the error below:</p>
<blockquote>
<p>SyntaxError: keyword argument repeated: post__contains</p>
</blockquote>
<p>So, how to run <strong><code>AND</code> operator</strong> with <strong><code>filter()</code></strong> without <strong><code>SyntaxError: keyword argument repeated:</code> error</strong> in Django?</p>
|
<python><python-3.x><django><django-filter><sql>
|
2022-12-22 04:24:57
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
74,884,113
| 5,769,497
|
Read a CSV file with Json column using Pandas
|
<p>I've a CSV file that contains couple of JSON columns and I'm reading it using python panda. The sample file data looks like the following :</p>
<pre><code>12345,67890,{"key1":"value1","key2":"value2","key3":"value3"},abcdefgh,{"key4":"value4"}
12345,67890,NONE,abcdefgh,{"key4":"value4"}
</code></pre>
<p>I'm using <code>,</code> as a separator while reading the CSV but this is causing an issue since the JSON data also contain <code>,</code> and eventually the row isn't correctly delimited.</p>
<p><code>pd.read_csv('s3://bucket-name/file.csv', sep=",")</code></p>
<p>I've also tried another regex <code>[a-zA-Z0-9],|[}],</code> as a separator but this removes the last character(1 character before <code>,</code>) from the column data.</p>
|
<python><pandas><csv>
|
2022-12-22 04:24:11
| 1
| 2,489
|
Sanket Makani
|
74,883,991
| 2,392,358
|
awk + adding a column baed on values of another column + adding a field name in the 1 command
|
<p>I want to add a new column at the end, based on the text of another column(with an if statement), and then I want to add a new column/field name.
I am close but I am struggling with the syntax, I am using awk, but apologies its been a while since I used this. and I am wondering if I should use python/anaconda(jupyter notebook), but going with the easiest env I have available to me at the minute, awk .</p>
<p>This is my file:</p>
<pre><code>$ cat file1
f1,f2,f3,f4,f5
row1_1,row1_2,row1_3,SBCDE,row1_5
row2_1,row2_2,row2_3,AWERF,row2_5
row3_1,row3_2,row3_3,ASDFG,row3_5
</code></pre>
<p>Here I want, based on the text in column 4, create a new column at the end and, but I am winging this a bit, that is I got it to work.</p>
<pre><code> $ awk -F, '{if (substr($4,1,1)=="A")
print $0 (NR>1 ? FS substr($4,1,4) : "")
else
print $0 (NR>1 ? FS substr($4,1,2) : "")
}' file1
f1,f2,f3,f4,f5
row1_1,row1_2,row1_3,SBCDE,row1_5,SB
row2_1,row2_2,row2_3,AWERF,row2_5,AWER
row3_1,row3_2,row3_3,ASDFG,row3_5,ASDF
</code></pre>
<p>But here I wnat to add a field/column name at the end, which I am close, I believe.</p>
<pre><code> $ awk -F, -v OFS=, 'NR==1{ print $0, "test"}
NR>1
{
if (substr($4,1,1)=="A")
print $0 (NR>1 ? FS substr($4,1,4) : "")
else
print $0 (NR>1 ? FS substr($4,1,2) : "")
}
' file1
f1,f2,f3,f4,f5,test
f1,f2,f3,f4,f5
row1_1,row1_2,row1_3,SBCDE,row1_5
row1_1,row1_2,row1_3,SBCDE,row1_5,SB
row2_1,row2_2,row2_3,AWERF,row2_5
row2_1,row2_2,row2_3,AWERF,row2_5,AWER
row3_1,row3_2,row3_3,ASDFG,row3_5
row3_1,row3_2,row3_3,ASDFG,row3_5,ASDF
</code></pre>
<p>What I want is this:</p>
<pre><code>f1,f2,f3,f4,f5,test
row1_1,row1_2,row1_3,SBCDE,row1_5,SB
row2_1,row2_2,row2_3,AWERF,row2_5,AWER
row3_1,row3_2,row3_3,ASDFG,row3_5,ASDF
</code></pre>
<h2>EDIT1</h2>
<p>for my ref: this is the awk I want:</p>
<pre><code>awk -F, '{if (substr($4,1,1)=="P")
print $0 (NR>1 ? FS substr($4,5,4) : "")
else
print $0 (NR>1 ? FS substr($4,1,4) : "")
}' file1
</code></pre>
<p>outputting it to file2:</p>
<pre><code>awk -F, '{if (substr($4,1,1)=="P")
print $0 (NR>1 ? FS substr($4,5,4) : "")
else
print $0 (NR>1 ? FS substr($4,1,4) : "")
}' file1 > file2
$
$
</code></pre>
<p>2 files, file2 has other column added:</p>
<pre><code>$ls
file1 file2
$cat file1
f1,f2,f3,f4,f5
row1_1,row1_2,row1_3,SBCDE,row1_5
row2_1,row2_2,row2_3,AWERF,row2_5
row3_1,row3_2,row3_3,ASDFG,row3_5
row4_1,row4_2,row4_3,PRE-ASDQG,row4_5
$cat file2
f1,f2,f3,f4,f5
row1_1,row1_2,row1_3,SBCDE,row1_5,SBCD
row2_1,row2_2,row2_3,AWERF,row2_5,AWER
row3_1,row3_2,row3_3,ASDFG,row3_5,ASDF
row4_1,row4_2,row4_3,PRE-ASDQG,row4_5,ASDQ
</code></pre>
<h2>EDIT2 -- Correction</h2>
<p>file 2 is what I want:</p>
<pre><code>cat file1
f1,f2,f3,f4,f5
row1_1,row1_2,row1_3,SBCDE,row1_5
row2_1,row2_2,row2_3,AWERF,row2_5
row3_1,row3_2,row3_3,ASDFG,row3_5
row4_1,row4_2,row4_3,PRE-ASDQG,row4_5
cat file2
f1,f2,f3,f4,f5,test
row1_1,row1_2,row1_3,SBCDE,row1_5,SBCD
row2_1,row2_2,row2_3,AWERF,row2_5,AWER
row3_1,row3_2,row3_3,ASDFG,row3_5,ASDF
row4_1,row4_2,row4_3,PRE-ASDQG,row4_5,ASDQ
awk -F, -v OFS=, 'NR==1{ print $0, "test"}
NR>1 {
if (substr($4,1,1)=="P")
print $0 (NR>1 ? FS substr($4,5,4) : "")
else
print $0 (NR>1 ? FS substr($4,1,4) : "")
}
' file1
f1,f2,f3,f4,f5,test
row1_1,row1_2,row1_3,SBCDE,row1_5,SBCD
row2_1,row2_2,row2_3,AWERF,row2_5,AWER
row3_1,row3_2,row3_3,ASDFG,row3_5,ASDF
row4_1,row4_2,row4_3,PRE-ASDQG,row4_5,ASDQ
</code></pre>
|
<python><awk>
|
2022-12-22 03:57:23
| 4
| 4,703
|
HattrickNZ
|
74,883,933
| 11,225,821
|
Django logger refuse to show full traceback in file message
|
<p>Currently, i'm setting up logging in Django following the document <a href="https://docs.djangoproject.com/en/2.2/topics/logging/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/2.2/topics/logging/</a> (I'm using Django 2.2 with python3.7) and Django rest framework.
Here is my settings.py:</p>
<pre><code># LOGGING
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'simple': {
'format': '{levelname} {asctime} {message}',
'style': '{',
},
},
'handlers': {
'file': {
'level': 'ERROR',
'class': 'logging.FileHandler',
'filename': '/var/log/django_errors.log',
'formatter': 'simple'
},
},
'loggers': {
'django': {
'handlers': ['file'],
'level': 'ERROR',
},
},
}
</code></pre>
<p>Here is my view using django-rest-framework:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
class Countries(APIView):
permission_classes = ()
authentication_classes = []
def get(self, request):
try:
... some API process
except Exception as e:
import traceback
print(traceback.format_exc())
logger.error(traceback.format_exc())
return JsonResponse({'code': 500, 'message': str(e)}, status=500)
</code></pre>
<p>From the console I can see the print() show the correct full stack traceback. But in the logging file, it doesn't show any traceback.</p>
<p>django_errors.log:</p>
<pre><code>ERROR 2022-12-22 11:35:21,051 "GET /api/countries HTTP/1.1" 500 72
ERROR 2022-12-22 11:35:21,065 "GET /api/countries HTTP/1.1" 500 72
</code></pre>
<p>I also tried <code>logger.exception()</code> the same thing happens, file doesn't log full stack traceback</p>
<p>I tried looking online for solutions and tried them but to no prevail, hope someone can help me figure out why</p>
<p>expected log to show correct traceback for example like console print():</p>
<pre><code> Traceback (most recent call last):
File "/app/api/views/countries.py", line 40, in get
country["currency"] =
get_currency_code_from_country_code(country["data"])
File "/app/utils/django_utils.py", line 470, in
get_currency_code_from_country_code
return currency.alpha_3
AttributeError: 'NoneType' object has no attribute 'alpha_3
</code></pre>
<p>'</p>
<p>I even tried adding {stack_info} to my formatter, but it just returns None:</p>
<pre><code>ERROR 2022-12-22 11:58:18,921 "GET /api/countries HTTP/1.1" 500 72 None
</code></pre>
|
<python><django><logging><django-rest-framework>
|
2022-12-22 03:45:43
| 1
| 3,960
|
Linh Nguyen
|
74,883,922
| 313,042
|
Removing pixels below a certain threshold
|
<p>I have a grayscale image with something written in the front and something at the back. I'd like to filter out the back part of the letters and only have the front. It's only grayscale and not RGB, and I'd rather not have to calculate pixels manually.</p>
<p>Is there any library function I can use to do this? I'm new to python and at the moment, using PIL library, so that's my preference. But if there are other libraries, I'm open to that as well.</p>
<p>Here's the image:
<a href="https://i.sstatic.net/jFmbt.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jFmbt.jpg" alt="enter image description here" /></a></p>
|
<python><image-processing><python-imaging-library>
|
2022-12-22 03:42:53
| 2
| 11,966
|
Rajath
|
74,883,814
| 13,079,519
|
How to send message to a specific discord channel
|
<p>I am trying to send a message to a specific channel and I can run the code below just nothing show up on the channel and I have no idea why... I put the channel Id in the get_channel input, just putting a random number here.</p>
<pre><code>import discord
client = discord.Client()
async def send_msg(msg):
channel = client.get_channel(123456456788)
await channel.send('hello')
</code></pre>
|
<python><discord>
|
2022-12-22 03:14:35
| 1
| 323
|
DJ-coding
|
74,883,691
| 14,397,434
|
what does a variable mean on its own in a python list?
|
<pre><code>list1 = ["Mike", "", "Emma", "Kelly", "", "Brad"]
[i for i in list1 if i]
['Mike', 'Emma', 'Kelly', 'Brad']
</code></pre>
<p>Why is it that simply saying "if i" works?</p>
<p>Why doesn't i==True work? Or why doesn't i==False return anything?</p>
<p>I ask because the following code returns a list of booleans:</p>
<pre><code>for i in list1:
print (i != "")
True
False
True
True
False
True
</code></pre>
<p>Thank you,<br />
R user</p>
|
<python><list><for-loop><boolean>
|
2022-12-22 02:43:51
| 1
| 407
|
Antonio
|
74,883,192
| 911,971
|
Intentionally returning None instead of object
|
<p>Suppose I have two simple classes:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Item():
Name: str
Valie: float
class Registry():
items = []
def __init__(self):
# load some items from external source, etc
""" find an element in an array or return None """
def find(self, name: str) -> Item:
for i in self.Items:
if i.Name = name: # items are unique,
return i
return None # ignore type
</code></pre>
<p>Without this <code>#ignore type</code> got warning <code>Expression of type "None" cannot be assigned to return type "Item"</code>. Ok, I understand why. But is this the right approach or is there a better more "pythonic" way to solve this problem? i.e. returns nothing if it is not in the list.</p>
<p>On the "other side" is something like:</p>
<pre class="lang-py prettyprint-override"><code>item = registry.find(name)
if item != None:
doSomething()
</code></pre>
|
<python><python-typing>
|
2022-12-22 00:51:05
| 1
| 506
|
parasit
|
74,883,180
| 1,897,151
|
dataclasss python reconstruct value
|
<p>lets say i have this dataclass</p>
<pre><code>@dataclass_json
@dataclass
class TestMetaData:
edit_meta: MetaData
@dataclass_json
@dataclass
class MetaData:
list_meta: list[MetaDataList]
@dataclass_json
@dataclass
class MetaDataList:
name: str
fields: list[MetaDataFields]
class MetaDataFields:
label: str
name: str
</code></pre>
<p>i have a processing method to add data into this dataclass</p>
<pre><code>data_args = {
'test': TestMetaData(
edit_meta=MetaData(
list_meta=[
some_process_method(
meta
)
for meta in meta_info['metafields']
]
)
)
}
return data_args
</code></pre>
<p>now i have data_args and when i wanted to return to frontend as dictionary i can do something like this</p>
<pre><code>return {
**data_args
}
</code></pre>
<p>but my question is, if i wanted to process data_args again before returning to frontend, how can i access data_args? using data_args['edit_meta'] as loop also wont help me to access the data inside. for my information i would like to learn how to access the data if is already converted into dataclass json</p>
|
<python>
|
2022-12-22 00:48:23
| 0
| 503
|
user1897151
|
74,883,102
| 2,444,023
|
Is it possible to simulate a distribution from a glm fit in statsmodels?
|
<p>I would like to fit a multiple linear regression. It will have a couple input parameters and no intercept.</p>
<p>Is it possible to fit a glm fit with statsmodels then use that fit to simulate a distribution of predicted values?</p>
|
<python><statistics><statsmodels>
|
2022-12-22 00:33:02
| 1
| 2,838
|
Alex
|
74,883,093
| 213,759
|
How to deal with system Python and `brew link python` on MacOS?
|
<p>I cannot link Python 3.8 over system Python 3.9.</p>
<p>I have few installed python versions (3.8, 3.10, etc) by brew and system Python 3.9.</p>
<p>P.S. I cannot uninstall system one (it does not appear in Applications).</p>
<pre class="lang-bash prettyprint-override"><code>$ python3 --version
Python 3.10.9
$ brew unlink python@$(python3 --version | grep -oE '[0-9]+\.[0-9]+'); brew link python@3.8
Unlinking /opt/homebrew/Cellar/python@3.10/3.10.9... 25 symlinks removed.
Warning: Already linked: /opt/homebrew/Cellar/python@3.8/3.8.16
To relink, run:
brew unlink python@3.8 && brew link python@3.8
$ python3 --version
Python 3.9.6
$ type python3
python3 is /usr/bin/python3
$ python3[press TAB]
python3 python3-config python3.10 python3.10-config python3.11 python3.11-config
$ python3.8 --version
zsh: command not found: python3.8
$ echo $PATH
/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin
</code></pre>
<p>Questions:</p>
<ol>
<li>How to set brewed Python over system one?</li>
<li><s>How to uninstall system python?</s> (not recommended)</li>
<li>How to link <code>python</code> in addition to <code>python3</code>?</li>
<li><s>Why there is no <code>python3.8</code> available?</s> (solved with <code>brew reinstall python@3.8</code>)</li>
</ol>
<p><strong>UPD:</strong></p>
<ol start="4">
<li><code>python3.8</code> became available after <code>brew reinstall python@3.8</code>.</li>
</ol>
<p><strong>UPD 2:</strong></p>
<p>It looks like there is no way to fix this properly. Must be a bug of 3.8 installer.</p>
<p>Here is workaround commands:</p>
<pre class="lang-bash prettyprint-override"><code>mkdir ~/bin/
ln -s /opt/homebrew/bin/python3.8 ~/bin/python
chmod u+x,og-rwx ~/bin/python
</code></pre>
<p>And this <code>~/bin</code> to your <code>PATH</code>.</p>
<p>@micromoses described the same in Method #4.</p>
|
<python><macos><homebrew>
|
2022-12-22 00:32:09
| 2
| 3,127
|
Kirby
|
74,882,649
| 12,361,700
|
multiprocessing.Pool not using all the cores in M1 Mac
|
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing.dummy import Pool
def process_board(elems):
# do something
for _ in range(1000):
with Pool(cpu_count()) as p:
_ = p.map(process_board, enumerate(some_array))
</code></pre>
<p>and this is the activity monitor of my mac while the code is running:
<a href="https://i.sstatic.net/RyUFc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RyUFc.png" alt="activity monitor" /></a></p>
<p>I can ensure that <code>len(some_array) > 1000</code>, so there is for sure more work that can be distributed, but seems not the case... what am I missing?</p>
<p><strong>Update</strong>:<br />
I tried chunking them, to see if there is any difference:</p>
<pre><code># elements per chunk -> time taken
# 100 -> 31.9 sec
# 50 -> 31.8 sec
# 20 -> 31.6 sec
# 10 -> 32 sec
# 5 -> 32 sec
</code></pre>
<p>consider that I have around 1000 elements, so 100 elements per chunk means 10 chunks, and this is my CPU loads during the tests:
<a href="https://i.sstatic.net/JE9Sz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JE9Sz.png" alt="enter image description here" /></a></p>
<p>As you can see, changing the number of chunks does not help to use the last 4 CPUS...</p>
|
<python><multithreading><cpu>
|
2022-12-21 23:09:46
| 1
| 13,109
|
Alberto
|
74,882,623
| 15,178,267
|
Django: how to write django signal to update field in django?
|
<p>i want to write a simple django signal that would automatically change the status of a field from <strong>live</strong> to <strong>finished</strong>, when i check a button <strong>completed</strong>.</p>
<p>I have a model that looks like this\</p>
<pre><code>
class Predictions(models.Model):
## other fields are here
user = models.ForeignKey(User, on_delete=models.SET_NULL, null=True)
status = models.CharField(choices=STATUS, max_length=100, default="in_review")
class PredictionData(models.Model):
predictions = models.ForeignKey(Predictions, on_delete=models.SET_NULL, null=True, related_name="prediction_data")
votes = models.PositiveIntegerField(default=0)
won = models.BooleanField(default=False)
</code></pre>
<p>when i check the <strong>won</strong> button that is in the <code>PredictionData</code> model, i want to immediately changes the <code>status</code> of the <code>Prediction</code> to finished.</p>
<p>NOTE: i have some tuple at the top of the model.</p>
<pre><code>STATUS = (
("live", "Live"),
("in_review", "In review"),
("pending", "Pending"),
("cancelled", "Cancelled"),
("finished", "Finished"),
)
</code></pre>
|
<python><django>
|
2022-12-21 23:05:04
| 1
| 851
|
Destiny Franks
|
74,882,555
| 1,551,027
|
How to log all permissions an application is using in Google Cloud's SDK
|
<p>I have a sandbox project that I am currently in an owner role for. This gives me great freedom in development and I've written a bunch of python code that uses the following:</p>
<pre><code>Storage
Security Center
Storage Notifications
Datastore
Secret Manager
Pub/Sub
</code></pre>
<p>I would like to log all of the permissions this application uses. Is there some way to do this in GCP? Perhaps in the Logging API, or similar?</p>
<p>I need this so I don't have to manually identify all of the permissions for a role I intend to create so the application follows the principle of least privilege.</p>
<p>Thanks!</p>
|
<python><google-cloud-platform><google-cloud-iam><google-cloud-logging>
|
2022-12-21 22:55:37
| 1
| 3,373
|
Dshiz
|
74,882,548
| 17,487,457
|
Calculate time difference of 2 adjacent datapoints for each user
|
<p>I have the following dataframe:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
{'user_id': [53, 53, 53, 53, 53, 53, 53, 53, 54, 54, 54, 54, 54, 54, 54],
'timestamp': [10, 15, 20, 25, 30, 31, 34, 37, 14, 16, 18, 20, 22, 25, 28],
'activity': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A',
'D', 'D', 'D', 'D', 'D', 'D', 'D']}
)
</code></pre>
<pre><code>df
user_id timestamp activity
0 53 10 A
1 53 15 A
2 53 20 A
3 53 25 A
4 53 30 A
5 53 31 A
6 53 34 A
7 53 37 A
8 54 14 D
9 54 16 D
10 54 18 D
11 54 20 D
12 54 22 D
13 54 25 D
14 54 28 D
</code></pre>
<p>I want to calculate the time difference between every
2 adjacent datapoints (rows) in each <code>user_id</code> and plot the CDF,
per <code>activity</code>. Assuming each user starts new activity from 0 seconds. <code>timestamp</code> column represents <code>unix</code> timestamp, I give last 2 digits for brevity.</p>
<p>Target <code>df</code> (required result):</p>
<pre><code> user_id timestamp activity timestamp_diff
0 53 10 A 0
1 53 15 A 5
2 53 20 A 5
3 53 25 A 5
4 53 30 A 5
5 53 31 A 1
6 53 34 A 3
7 53 37 A 3
8 54 14 D 0
9 54 16 D 2
10 54 18 D 2
11 54 20 D 2
12 54 22 D 2
13 54 25 D 3
14 54 28 D 3
</code></pre>
<p>My attempts (to calculate the time differences):</p>
<pre class="lang-py prettyprint-override"><code>df['shift1'] = df.groupby('user_id')['timestamp'].shift(1, fill_value=0)
df['shift2'] = df.groupby('user_id')['timestamp'].shift(-1, fill_value=0)
df['diff1'] = df.timestamp - df.shift1
df['diff2'] = df.shift2 - df.timestamp
df['shift3'] = df.groupby('user_id')['timestamp'].shift(-1)
df['shift3'].fillna(method='ffill', inplace=True)
df['diff3'] = df.shift3 - df.timestamp
</code></pre>
<pre><code>df
user_id timestamp activity shift1 shift2 diff1 diff2 shift3 diff3
0 53 10 A 0 15 10 5 15.0 5.0
1 53 15 A 10 20 5 5 20.0 5.0
2 53 20 A 15 25 5 5 25.0 5.0
3 53 25 A 20 30 5 5 30.0 5.0
4 53 30 A 25 31 5 1 31.0 1.0
5 53 31 A 30 34 1 3 34.0 3.0
6 53 34 A 31 37 3 3 37.0 3.0
7 53 37 A 34 0 3 -37 37.0 0.0
8 54 14 D 0 16 14 2 16.0 2.0
9 54 16 D 14 18 2 2 18.0 2.0
10 54 18 D 16 20 2 2 20.0 2.0
11 54 20 D 18 22 2 2 22.0 2.0
12 54 22 D 20 25 2 3 25.0 3.0
13 54 25 D 22 28 3 3 28.0 3.0
14 54 28 D 25 0 3 -28 28.0 0.0
</code></pre>
<p>I cannot reach to the target, none of <code>diff1, diff2</code> or <code>diff3</code> columns match the <code>timestamp_diff</code>.</p>
|
<python><pandas><dataframe>
|
2022-12-21 22:54:17
| 1
| 305
|
Amina Umar
|
74,882,495
| 5,568,409
|
Convert a Python dataframe date column in seconds
|
<p>I am reading a <code>.csv</code> data file using <code>pd.read_csv</code> and I get these first 5 rows from my global dataframe (containing thousands of rows):</p>
<pre><code> time id time_offset
0 2017-12-01 21:00:00 0 -60
1 2017-12-01 21:01:00 0 -59
2 2017-12-01 21:02:00 0 -58
3 2017-12-01 21:03:00 0 -57
4 2017-12-01 21:04:00 0 -56
</code></pre>
<p>I'm not very good at manipulating dates in Python and I haven't found how to do this manipulation:</p>
<ol>
<li>create in my dataframe a new <code>hour</code> column from the existing <code>time</code> column, containing only the <code>hours:minutes:seconds</code> data, which should be: <code>21:00:00</code>, <code>21:01:00</code>, <code>21:02:00</code>, etc...</li>
<li>then create another column <code>seconds</code> from the newly created <code>hour</code>, containing the number of seconds elapsed since time <code>0</code>, which should be: <code>75600</code> (calculated as 21x3600), <code>75601</code> (calculated ,as 21x3600 + 1), etc...</li>
</ol>
<p>Any help in sorting this out would be much appreciated.</p>
|
<python><pandas><dataframe><datetime>
|
2022-12-21 22:46:26
| 3
| 1,216
|
Andrew
|
74,882,381
| 2,326,896
|
Best practices for defining fields only relevant in subclasses in Python
|
<p>If I have a variable that is only used in a subclass, should I assign None to it in the superclass?</p>
<p>This is minimum example, there other subclasses where ‘number’ is different but the ‘show’ method is still relevant.</p>
<pre><code>class A:
def show(self):
print(self.number)
class C(A):
number = 5
c = C()
c.show()
</code></pre>
<p>Should I define <code>number=None</code> in A?</p>
<p>I’m asking because PyCharm keeps showing warnings about this, but I’m not sure filling the superclass with None’s is a good idea.</p>
|
<python><oop>
|
2022-12-21 22:29:57
| 1
| 891
|
Fernando César
|
74,882,139
| 5,212,614
|
How can we parse a JSON file for specific records of county borders and overlay that on a Folium HeatMap?
|
<p>I found a JSON file that has borders of US counties, right here.</p>
<p><a href="https://eric.clst.org/assets/wiki/uploads/Stuff/gz_2010_us_050_00_500k.json" rel="nofollow noreferrer">https://eric.clst.org/assets/wiki/uploads/Stuff/gz_2010_us_050_00_500k.json</a></p>
<p>How can I parse that file for specific records, like 'Durham' and 'Raleigh' and 'Charlotte' together, and plot these on a Folium map? When I run the code below, I have all counties plotted on the map, be because no specific counties are parsed out before mapping.</p>
<pre><code>from folium import GeoJson
geo=r"C:\\Users\\RShuell\\Downloads\\gz_2010_us_050_00_500k.json"
file = open(geo)
text = file.read()
m = folium.Map(width="%100",weight="%100")
GeoJson(text).add_to(m)
m
</code></pre>
<p><a href="https://i.sstatic.net/JWPO0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JWPO0.png" alt="enter image description here" /></a></p>
<p>Finally, how would I overlap a HeatMap on top of the plotted county borders? When I create a Folium HeatMap, it overwrites all the county borders!</p>
<pre><code>import folium
from folium.plugins import HeatMap
max_amount = float(df_2std['Total_Cust_Minutes'].max())
hmap = folium.Map(location=[35.5, -82.5], zoom_start=7, )
hm_wide = HeatMap(list(zip(df_2std.Circuit_Latitude.values,
df_2std.Circuit_Longitude.values,
df_2std.Total_Cust_Minutes.values)),
min_opacity=0.2,
max_val=max_amount,
radius=25,
blur=20,
max_zoom=1,
)
hmap.add_child(hm_wide)
</code></pre>
<p><a href="https://i.sstatic.net/cCnfh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cCnfh.png" alt="enter image description here" /></a></p>
|
<python><json><python-3.x><folium>
|
2022-12-21 21:55:41
| 1
| 20,492
|
ASH
|
74,882,136
| 5,604,562
|
Memory efficient dot product between a sparse matrix and a non-sparse numpy matrix
|
<p>I have gone through similar questions that has been asked before (for example <a href="https://stackoverflow.com/questions/41942115/numpy-efficient-large-dot-products">[1]</a> <a href="https://stackoverflow.com/questions/20983882/efficient-dot-products-of-large-memory-mapped-arrays">[2]</a>). However, none of them completely relevant for my problem.</p>
<p>I am trying to calculate a dot product between two large matrices and I have some memory constraint that I have to meet.</p>
<p>I have a <strong>numpy</strong> sparse matrix, which is a shape of (10000,600000). For example,</p>
<pre class="lang-py prettyprint-override"><code>from scipy import sparse as sps
x = sps.random(m=10000, n=600000, density=0.1).toarray()
</code></pre>
<p>The second numpy matrix is of size (600000, 256), which consists of only (-1, 1).</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
y = np.random.choice([-1,1], size=(600000, 256))
</code></pre>
<p>I need dot product of <code>x</code> and <code>y</code> at lowest possible memory required. Speed is not the primary concern.</p>
<p>Here is what I have tried so far:</p>
<h3>Scipy Sparse Format:</h3>
<p>Naturally, I converted the numpy sparse matrix to scipy <code>csr_matrix</code>. However, task is still getting killed due to memory issue. There is no error, I just get killed on the terminal.</p>
<pre><code>from scipy import sparse as sps
sparse_x = sps.csr_matrix(x, copy=False)
z = sparse_x.dot(y)
# killed
</code></pre>
<h3>Decreasing dtype precision + Scipy Sparse Format:</h3>
<pre><code>from scipy import sparse as sps
x = x.astype("float16", copy=False)
y = y.astype("int8", copy=False)
sparse_x = sps.csr_matrix(x, copy=False)
z = sparse_x.dot(y)
# Increases the memory requirement for some reason and dies
</code></pre>
<h3>np.einsum</h3>
<p>Not sure if it helps/works with sparse matrix. Found something interesting in this <a href="https://stackoverflow.com/questions/23322866/einsum-on-a-sparse-matrix">answer</a>. However, following doesn't help either:</p>
<pre><code>z = np.einsum('ij,jk->ik', x, y)
# similar memory requirement as the scipy sparse dot
</code></pre>
<h1>Suggestions?</h1>
<p>If you have any suggestions to improve any of these. Please let me know. Further, I am thinking in the following directions:</p>
<ol>
<li><p>It would be great If I can get rid of dot product itself somehow. My second matrix (i.e. <code>y</code> is randomly generated and it just has [-1, 1]. I am hoping if there is way I could take advantage of its features.</p>
</li>
<li><p>May be diving dot product into several small dot product and then, aggregate.</p>
</li>
</ol>
|
<python><numpy><matrix><scipy><sparse-matrix>
|
2022-12-21 21:55:02
| 3
| 3,604
|
Grayrigel
|
74,882,120
| 10,096,713
|
Scrape info from popup window with Playwright in Python and store in pandas df
|
<p>I'm using Playwright in a Jupyter Notebook to obtain building construction years from a property appraiser's website. Some web pages, <a href="https://www.pbcgov.org/papa/Asps/PropertyDetail/PropertyDetail.aspx?parcel=73414434010020070" rel="nofollow noreferrer">like this one</a>, have multiple buildings' data viewable only after the user clicks <code>View Building Details</code> partway down the page.</p>
<p>I can get Playwright to click the <code>View Building Details</code> button and click through to <code>Building 2</code>, <code>Building 3</code>, and <code>Building 4</code>.</p>
<p><strong>The issue</strong> is that I can't extract the <code>Year Built</code> data underneath the <code>Structural Element for Building ###</code> table.</p>
<p><strong>The goal</strong> is to have a script that will click the <code>View Building Details</code> button and cycle through Buildings 2 through <em>n</em> and collect each one's <code>Year Built</code> value.</p>
<p>I'm trying to use Pandas' <code>read_html</code> function to pull out the tables, but I'm open to other solutions.</p>
<p>This is what I have:</p>
<pre><code>from playwright.async_api import async_playwright
import pandas as pd
playwright = await async_playwright().start()
browser = await playwright.chromium.launch(headless = False)
page = await browser.new_page()
## Go to PAPA property address
await page.goto("https://www.pbcgov.org/papa/Asps/PropertyDetail/PropertyDetail.aspx?parcel=73414434010020070")
x = await page.content()
## Click text=View Building Details
await page.locator("text=View Building Details").click()
#######################################
## Click text=Building 2
await page.frame_locator("#MainContent_Iframe7").locator("text=Building 2").click()
x2 = await page.frame_locator("#MainContent_Iframe7").locator("html").inner_html()
## Click text=Building 3
await page.frame_locator("#MainContent_Iframe7").locator("text=Building 3").click()
x3 = await page.frame_locator("#MainContent_Iframe7").locator("html").inner_html()
## Click text=Building 4
await page.frame_locator("#MainContent_Iframe7").locator("text=Building 4").click()
x4 = await page.frame_locator("#MainContent_Iframe7").locator("html").inner_html()
x2s = pd.read_html(x2)
x3s = pd.read_html(x3)
x4s = pd.read_html(x4)
x2s[3] // When it works this is the table that I want
x3s[3]
x4s[3]
</code></pre>
<p>I think the issue has something to do with loading times. The script <em>kind of</em> works when each click to cycle through additional buildings is wrapped in a <code>try</code> and <code>except</code> block with instructions to wait for a certain selector. I copied the selector using Chrome's dev tools and tried both CSS selectors and relative xpaths. See example:</p>
<pre><code>try:
await page.wait_for_selector('//*[@id="frmPage"]/div[3]/div/div[1]/div[2]/fieldset/table/tbody/tr[3]/td[1]/table[1]/tbody')
except Exception as e:
print(f'BUILDING 3: {e}')
</code></pre>
<p>I did try using <code>time.sleep</code> but the script still failed and didn't return the right info. The docs caution against using <code>time.sleep</code> anyway.</p>
<p>I tried included putting <code>await page.wait_for_load_state("networkidle")</code> between each attempt.</p>
|
<python><pandas><async-await><playwright-python>
|
2022-12-21 21:52:58
| 1
| 335
|
Adam
|
74,882,060
| 12,506,687
|
Select related is not returning all the values from the relation in Django
|
<p>I'm doing this query</p>
<pre class="lang-sql prettyprint-override"><code>SELECT [User].[User_Id], [User].[Client_id], [User].[EMail], [User].[First_Name], [User].[Family_Name], [User].[Telephone], [Clients].[Client_Id], [Clients].[Name], [Clients].[Organization_type] FROM [User] INNER JOIN [Clients] ON ([User].[Client_id] = [Clients].[Client_Id]) WHERE [User].[EMail] = 'birna@athygli.is'
</code></pre>
<p>In the SQL server is working fine even when I print in django the queryset looks good, but, when getting the results is not getting it from the Clients` table, It has to be something with the relation between tables in Django but I really don't know where</p>
<p>Here are my two models</p>
<pre class="lang-py prettyprint-override"><code>class V2_Clients(models.Model):
Client_Id = models.CharField(primary_key=True, max_length=50)
Name = models.CharField(max_length=255, null=True)
Organization_type = models.CharField(max_length=128, null=True)
class Meta:
managed = True
db_table = "[Clients]"
class V2_Users(models.Model):
User_Id = models.CharField(primary_key=True, max_length=50)
Client = models.ForeignKey(V2_Clients, on_delete=models.CASCADE)
EMail = models.CharField(max_length=250)
First_Name = models.CharField(max_length=50, null=True)
Family_Name = models.CharField(max_length=50, null=True)
Telephone = models.CharField(max_length=50, null=True)
class Meta:
managed = True
db_table = "[User]"
</code></pre>
<p>This is where I do the query, even when I do <code>print(v2_user.query)</code> I get the same SQL shown at the top, but is not getting the values from the <code>Clients</code> table only the results from the <code>User</code></p>
<pre class="lang-py prettyprint-override"><code>v2_user = V2_Users.objects.using('sl_v2').filter(EMail=jsonData['Client']['Client_Email']).select_related()
</code></pre>
<p>What could be the issue?</p>
|
<python><django>
|
2022-12-21 21:44:55
| 2
| 476
|
Rinzler21
|
74,881,986
| 14,594,208
|
How to cross join (cartesian product) two Series?
|
<p>Consider the following two series.</p>
<p>Let <code>x</code> be:</p>
<pre class="lang-py prettyprint-override"><code>x
a 10
b 20
c 30
Name: x_value
</code></pre>
<p>And let <code>y</code> be:</p>
<pre class="lang-py prettyprint-override"><code>y
d 100
e 200
Name: y_value
</code></pre>
<p>Ideally, the result would have a MultiIndex along with the cartesian product of the series' cross values:</p>
<pre class="lang-py prettyprint-override"><code>
x_value y_value
x y
a d 10 100
e 10 200
b d 20 100
e 20 200
c d 30 100
e 30 200
</code></pre>
<p>I have seen similar questions (e.g. <a href="https://stackoverflow.com/questions/13269890/cartesian-product-in-pandas">cartesian product in pandas</a>) about <strong>cross merge</strong>, but I haven't found anything about Series so far (let alone a MultiIndex of initial indices approach).</p>
<p>The part that seems troublesome to me is how I'd get to work with Series, instead of DataFrames.</p>
|
<python><pandas><series>
|
2022-12-21 21:35:03
| 2
| 1,066
|
theodosis
|
74,881,956
| 1,901,114
|
Inconsistent default logarithm base in sqlite3
|
<p>I've been experiencing an issue where SQLite's log function returns inconsistent results between PyCharm's Query console and when running the same query in python environments. The code where I first spotted this used SQLAlchemy, but it can also be seen using sqlite3 module here:</p>
<pre><code>>>> from sqlite3 import connect
>>> cur = connect("data/db.sqlite3").cursor()
>>> print(cur.execute("select log(10);").fetchall())
[(0.9999999999999999,)]
</code></pre>
<p>When executed in python, <code>log(x)</code> is always evaluated as base 10 logarithm, as specified in <a href="https://www.sqlite.org/lang_mathfunc.html#log" rel="nofollow noreferrer">SQLite documentation</a>:</p>
<blockquote>
<p>Return the base-10 logarithm for X. Or, for the two-argument version, return the base-B logarithm of X.</p>
</blockquote>
<p>However, when I run the same query in PyCharm's query console, it returns the natural log.</p>
<pre><code>+-----------------+
|log(10) |
+-----------------+
|2.302585092994046|
+-----------------+
</code></pre>
<p>Why is this happening, and how do I ensure that my queries are evaluated consistently across all environments?</p>
|
<python><sqlite><pycharm>
|
2022-12-21 21:30:49
| 0
| 1,656
|
Mirac7
|
74,881,643
| 14,084,653
|
Passing lambda function as Parameter to threading.Thread
|
<p>Im new to python so forgive me if Im not applying best practices and I welcome any feedback to help doing the code below the python way. I have the following code and the out put is making me very confused and not sure how to fix in order to get the expected output. Im getting the name Sal on both greetings:</p>
<pre><code> from time import sleep
import threading
def printGreating(get_greeting):
sleep(5)
greeting = get_greeting()
print(greeting)
def createGreeting(greeting,name):
return greeting+' '+ name
nameslist = ["Sam","Sal"]
threads = []
for n in nameslist:
print(n)
if n == "Sam":
create_greeting_l = lambda : createGreeting("Hello",n)
else:
create_greeting_l = lambda : createGreeting("Greeting",n)
t = threading.Thread(target=printGreating, args=(create_greeting_l,))
t.start()
threads.append(t)
for t in threads:
t.join()
print('-------------------Completed--------------')
Expected output:
Sam
Sal
Hello Sam
Greeting Sal
-------------------Completed--------------
What Im getting:
Sam
Sal
Greeting Sal
Hello Sal
</code></pre>
<p>Expected output:</p>
<p>Sam
Sal
Hello Sam
Greeting Sal</p>
<p>What Im getting:
Sam
Sal
Greeting Sal
Hello Sal</p>
<p>UPDATED Dec 22,2022:</p>
<p>I finally figured out the fix. the name should be passed as parameter to the lambda funntion:</p>
<pre><code>if n == "Sam":
create_greeting_l = lambda a : createGreeting("Hello",a)
else:
create_greeting_l = lambda a : createGreeting("Greeting",a)
t = GreeterClass(create_greeting_l,n)
</code></pre>
|
<python><python-3.x>
|
2022-12-21 20:50:28
| 0
| 779
|
samsal77
|
74,881,441
| 14,867,609
|
Why is Scipy's cross correlation function giving incorrect results for audio signals
|
<p>I'm trying to calculate the location of a sound source using TDOA, by cross correlating audio recordings from different microphones and getting the time delay. I am fairly confident that the recording start times are synchronized to 0.01ms or less. The microphones are all plugged into a RaspberryPi.</p>
<p>However, when testing in a room with the microphones set up about 15 meters apart, the cross correlation is giving very incorrect and unreasonable results. I divide the lag in array distance (which are audio data recorded at a specific sample rate) by the sample rate, which should give the delay in seconds right? But the code would say that the delay is 1-2 second or more, when the entire recording is only 3 seconds. The recorded sound is otherwise crystal clear, with almost zero background noise, and several loud, prominent claps.</p>
<p>Here's the code I used:</p>
<pre class="lang-py prettyprint-override"><code>from scipy import signal
from scipy.io import wavfile
r1,s1 = wavfile.read('near output.wav'). #r1 is sample rate, s1 is data
r2,s2 = wavfile.read('far output.wav')
correlation = signal.correlate(s1, s2, mode="full")
lags = signal.correlation_lags(s1.size, s2.size, mode="full")
lag = lags[np.argmax(correlation)]
print(f"Delay: {lag / r1}")
</code></pre>
|
<python><audio>
|
2022-12-21 20:25:17
| 0
| 306
|
Jevon Mao
|
74,881,425
| 769,449
|
Python loop through nested JSON object without root element
|
<p>I've already checked <a href="https://stackoverflow.com/questions/34818782/iterate-through-nested-json-object-and-get-values-with-python">here</a> and <a href="https://stackoverflow.com/questions/45784067/scrapy-on-a-json-response">here</a>.</p>
<p>I have the following json which has no root element and I want to loop through each item in <code>objects</code> and print the value for <code>object_code</code>:</p>
<pre><code>{
"count": 3,
"objects": [
{
"object_code": "HN1059"
},
{
"object_code": "HN1060"
},
{
"object_code": "VO1013"
}
]
}
</code></pre>
<p>I tried:</p>
<pre><code>json='{"count": 3,"objects": [{"object_code": "HN1059"},{"object_code": "HN1060"},{"object_code": "VO1013"}]}'
for obj in json['objects']:
print(obj.get('object_code'))
for obj in json[0]['objects']:
print(obj.get('object_code'))
</code></pre>
<p>Neither work and I get the error:</p>
<blockquote>
<p>TypeError: string indices must be integers</p>
</blockquote>
<p><strong>UPDATE 1</strong></p>
<p>The suggested solutions don't work for me, maybe that's because I'm using it in the context of a Scrapy class, here's the full code that throws error</p>
<blockquote>
<p>TypeError: 'NoneType' object is not iterable</p>
</blockquote>
<pre><code>import json
import scrapy
class MySpider(scrapy.Spider):
name = 'mytest'
start_urls = []
def start_requests(self):
s='{"count": 3,"objects": [{"object_code": "HN1059"},{"object_code": "HN1060"},{"object_code": "VO1013"}]}'
obj = json.loads(s)
for o in obj['objects']:
print(o.get('object_code'))
</code></pre>
|
<python><for-loop><nested>
|
2022-12-21 20:23:06
| 2
| 6,241
|
Adam
|
74,881,392
| 1,218,712
|
Is there a way to simplify an undirected graph with OSMNX?
|
<p>In my research, I use OpenStreetMap data for traffic-related simulations. Part of the data preparation involves using the <em>osmnx</em> library to get a <a href="https://osmnx.readthedocs.io/en/stable/osmnx.html#osmnx.simplification.simplify_graph" rel="nofollow noreferrer">simplified graph</a> of the road network.</p>
<p>Currently, we <strong>do not want to consider one ways.</strong> In other words, every road should be represented as a single edge, regardless of whether or not it's a one-way or two-way street. This essentially means that I am looking to have an undirected graph rather than a directed graph.</p>
<p>The main problem is that osmnx's simplify graph only works with directed graphs.
If I call osmnx's <em>simplify_graph</em> function using a MultiDiGraph, I end up with something like this. In this example, the contiguous edges are not being merged because the part in purple is one-way whereas the pink and light blue parts are two-way streets. Relevant OpenStreetMap way IDs are <a href="https://www.openstreetmap.org/way/46678071" rel="nofollow noreferrer">46678071</a>, <a href="https://www.openstreetmap.org/way/110711994" rel="nofollow noreferrer">110711994</a> and <a href="https://www.openstreetmap.org/way/237298378" rel="nofollow noreferrer">237298378</a>. <strong>However</strong>, this is not what I am looking for; I would like these three edges to be merged, regardless of the fact that one of them is one-way.</p>
<p><a href="https://i.sstatic.net/SZ0v8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SZ0v8.png" alt="problem with graph simplification" /></a></p>
<pre class="lang-python prettyprint-override"><code>ox.settings.log_console = True
G = ox.graph_from_xml("osm_network_agglomeration_montreal.xml",
simplify=False,
retain_all=True,
bidirectional=False)
# Only retain graph that is inside a certain zone
G = ox.truncate.truncate_graph_polygon(G, boundary_polygon)
# Filter edges based on highway type
allowed_highway_types = ["primary",
"secondary",
"tertiary"]
edges_subset = []
for u, v, data in G.edges(data=True):
if data['highway'] in allowed_highway_types:
edges_subset.append((u, v, 0))
G_subset = G.edge_subgraph(edges_subset)
#G_subset = ox.get_undirected(G_subset) # Can't do this as simplify_graph only works with directed graphs.
# Simplify the graph: get rid of interstitial nodes
G_truncated = ox.simplify_graph(G_subset, strict=True, remove_rings=False)
# Convert to an undirected graph. We don't want parallel edges unless their geometries differ.
G_truncated = ox.get_undirected(G_truncated)
gdf_nodes, gdf_links = ox.graph_to_gdfs(G_truncated)
# Get rid of the variables we won't need anymore
#del G, edges_subset, G_subset
</code></pre>
<p>So, my question is: is there a way to simplify an <strong>undirected</strong> graph? I am fine with modifying OSMNX's code and submitting a pull request if that's what's required here.</p>
<p>Thanks!</p>
|
<python><networkx><openstreetmap><graph-theory><osmnx>
|
2022-12-21 20:19:34
| 1
| 571
|
David Murray
|
74,881,369
| 5,977,437
|
TypeError when creating Series from custom ExtensionArray
|
<p>I've created a basic example custom Pandas Extension Type for storing 2D coordinates, with source code below.</p>
<p>I'm able to successfully create arrays of this type using pd.array() which work as expected:</p>
<pre><code>arr = pd.array([(1.5, 2.0), (156, 21), (-120, 98.5)], dtype='coordinate')
</code></pre>
<p><code><CoordinateArray> [Coordinate(1.5, 2.0), Coordinate(156.0, 21.0), Coordinate(-120.0, 98.5)] Length: 3, dtype: <class '__main__.CoordinateDtype'></code></p>
<p>However I am getting the below error when using that array to initialise a Series, or initialising a Series directly and specifying the 'coordinate' dtype:</p>
<pre><code>Cell In [58], line 1
----> 1 pd.Series(coords, dtype='coordinate')
File ~/.local/lib/python3.9/site-packages/pandas/core/series.py:474, in Series.__init__(self, data, index, dtype, name, copy, fastpath)
472 manager = get_option("mode.data_manager")
473 if manager == "block":
--> 474 data = SingleBlockManager.from_array(data, index)
475 elif manager == "array":
476 data = SingleArrayManager.from_array(data, index)
File ~/.local/lib/python3.9/site-packages/pandas/core/internals/managers.py:1912, in SingleBlockManager.from_array(cls, array, index)
1907 @classmethod
1908 def from_array(cls, array: ArrayLike, index: Index) -> SingleBlockManager:
1909 """
1910 Constructor for if we have an array that is not yet a Block.
1911 """
-> 1912 block = new_block(array, placement=slice(0, len(index)), ndim=1)
1913 return cls(block, index)
File ~/.local/lib/python3.9/site-packages/pandas/core/internals/blocks.py:2181, in new_block(values, placement, ndim)
2178 klass = get_block_type(values.dtype)
2180 values = maybe_coerce_values(values)
-> 2181 return klass(values, ndim=ndim, placement=placement)
TypeError: Argument 'values' has incorrect type (expected numpy.ndarray, got CoordinateArray)
</code></pre>
<p>It seems to be an issue with initialising the Block to hold the data, but I'm not sure why. Extension Type definition:</p>
<pre><code>import numpy as np
import pandas as pd
from functools import total_ordering
from pandas.core.dtypes.base import register_extension_dtype
from pandas.core.dtypes.dtypes import PandasExtensionDtype
from pandas.api.extensions import ExtensionArray, ExtensionScalarOpsMixin
@total_ordering
class Coordinate(object):
"""
Simple class to represent a 2D coordinate with X and Y components.
Could extend with more useful methods etc
"""
def __init__(self, x, y):
self.x = float(x)
self.y = float(y)
def __getitem__(self, index):
"""
Allows object to act like (x, y) coordinate pair with indexing
"""
if index == 0:
return self.x
elif index == 1:
return self.y
else:
raise KeyError('Invalid coordinate index: {}'.format(index))
def as_tuple(self):
"""
Return as (x, y) coordinate pair
"""
return (self.x, self.y)
def __len__(self):
return 2
def __repr__(self):
return 'Coordinate({}, {})'.format(self.x, self.y)
# Operator support
def __add__(self, other):
"""
Add scalar value or other coordinate
"""
if isinstance(other, (int, float)):
return Coordinate(self.x + other, self.y + other)
other_coord = create_coordinate(other)
return Coordinate(self.x + other_coord.x, self.y + other_coord.y)
def __sub__(self, other):
"""
Subtract scalar value or other coordinate
"""
if isinstance(other, (int, float)):
return Coordinate(self.x - other, self.y - other)
other_coord = create_coordinate(other)
return Coordinate(self.x - other_coord.x, self.y - other_coord.y)
def __mul__(self, other):
if isinstance(other, (int, float)):
return Coordinate(self.x * other, self.y * other)
else:
raise TypeError('Cannot multiply coordinate by {}'.format(type(other)))
def __neg__(self):
return Coordinate(-self.x, -self.y)
def __eq__(self, other):
other_coord = create_coordinate(other)
return self.x == other_coord.x and self.y == other_coord.y
def __lt__(self, other):
other_coord = create_coordinate(other)
return self.x < other_coord.x and self.y < other_coord.y
def create_coordinate(val):
"""
Factory function for constructing a Coordinate from various
types of inputs
"""
if isinstance(val, Coordinate):
return val
if isinstance(val, (list, tuple)) and len(val) == 2:
# Construct from list-like of X,Y value pair
return Coordinate(val[0], val[1])
raise ValueError('Invalid value to create Coordinate from: {}'.format(val))
@register_extension_dtype
class CoordinateDtype(PandasExtensionDtype):
"""
Class to describe the custom Coordinate data type
"""
type = Coordinate # Scalar type for data
name = 'coordinate' # String identifying the data type (for display)
_metadata = ('name',) # List of attributes to uniquely identify this data type
@classmethod
def construct_array_type(cls):
"""
Return array type associated with this dtype
"""
return CoordinateArray
def __str__(self):
return self.name
class CoordinateArray(ExtensionArray, ExtensionScalarOpsMixin):
"""
Custom Extension Array type for an array of Coordinates
Needs to define:
- Associated Dtype it is used with
- How to construct array from sequence of scalars
- How data is stored and accessed
- Any custom array methods
"""
dtype = CoordinateDtype
def __init__(self, x_values, y_values, copy=False):
"""
Initialise array of coordinates from component X and Y values
(Allows efficient initialisation from existing lists/arrays)
"""
self.x_values = np.array(x_values, dtype=np.float64, copy=copy)
self.y_values = np.array(y_values, dtype=np.float64, copy=copy)
@classmethod
def _from_sequence(cls, scalars, *, dtype=None, copy=False):
# Construct new array from sequence of values (Unzip coordinates into x and y components)
x_values, y_values = zip(*[create_coordinate(val).as_tuple() for val in scalars])
return CoordinateArray(x_values, y_values, copy=copy)
@classmethod
def from_coordinates(cls, coordinates):
"""
Construct array from sequence of values (coordinates)
Can be provided as Coordinate instances or list/tuple like (x, y) pairs
"""
return cls._from_sequence(coordinates)
@classmethod
def _concat_same_type(cls, to_concat):
"""
Concatenate multiple arrays of this dtype
"""
return CoordinateArray(
np.concatenate(arr.x_values for arr in to_concat),
np.concatenate(arr.y_values for arr in to_concat),
)
@property
def nbytes(self):
"""
The number of bytes needed to store this object in memory.
"""
return self.x_values.nbytes + self.y_values.nbytes
def __getitem__(self, item):
"""
Retrieve single item or slice
"""
if isinstance(item, int):
# Get single coordinate
return Coordinate(self.x_values[item], self.y_values[item])
else:
# Get subset from slice or boolean array
return CoordinateArray(self.x_values[item], self.y_values[item])
def __eq__(self, other):
"""
Perform element-wise equality with a given coordinate value
"""
if isinstance(other, (pd.Index, pd.Series, pd.DataFrame)):
return NotImplemented
return (self.x_values == other[0]) & (self.y_values == other[1])
def __len__(self):
return self.x_values.size
def isna(self):
"""
Returns a 1-D array indicating if each value is missing
"""
return np.isnan(self.x_values)
def take(self, indices, *, allow_fill=False, fill_value=None):
"""
Take element from array using boolean index
"""
from pandas.core.algorithms import take
if allow_fill and fill_value is None:
fill_value = self.dtype.na_value
x_result = take(self.x_values, indices, fill_value=fill_value, allow_fill=allow_fill)
y_result = take(self.y_values, indices, fill_value=fill_value, allow_fill=allow_fill)
return CoordinateArray(x_result, y_result)
def copy(self):
"""
Return copy of array
"""
return CoordinateArray(np.copy(self.x_values), np.copy(self.y_values))
# Register operator overloads using logic defined in Coordinate class
CoordinateArray._add_arithmetic_ops()
CoordinateArray._add_comparison_ops()
</code></pre>
|
<python><pandas>
|
2022-12-21 20:16:35
| 1
| 369
|
Finn Andersen
|
74,881,303
| 738,794
|
Snips-nlu fit failed with error module 'numpy' has no attribute 'float' after following quick start
|
<p>With the latest version 0.20.2 snips-nlu library on a windows 10 machine (python 3.8.15 and numpy 1.24.0), I got attribute Error when fitting the engine. What could be the issue?</p>
<pre><code>(nlpenv) C:\Users\one>python -m snips_nlu train sample_dataset.json nlu_engine
Create and train the engine...
Traceback (most recent call last):
File "C:\Users\one\.conda\envs\nlpenv\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\one\.conda\envs\nlpenv\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\snips_nlu\__main__.py", line 6, in <module>
main()
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\snips_nlu\cli\__init__.py", line 52, in main
args.func(args)
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\snips_nlu\cli\training.py", line 23, in _train
return train(
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\snips_nlu\cli\training.py", line 56, in train
engine = SnipsNLUEngine(config, random_state=random_state).fit(dataset)
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\snips_nlu\common\log_utils.py", line 30, in wrapped
res = fn(*args, **kwargs)
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\snips_nlu\nlu_engine\nlu_engine.py", line 126, in fit
recycled_parser.fit(dataset, force_retrain)
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\snips_nlu\common\log_utils.py", line 30, in wrapped
res = fn(*args, **kwargs)
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\snips_nlu\intent_parser\probabilistic_intent_parser.py", line 77, in fit
self.intent_classifier.fit(dataset)
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\snips_nlu\common\log_utils.py", line 30, in wrapped
res = fn(*args, **kwargs)
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\snips_nlu\intent_classifier\log_reg_classifier.py", line 67, in fit
from sklearn.linear_model import SGDClassifier
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\sklearn\linear_model\__init__.py", line 12, in <module>
from ._least_angle import (Lars, LassoLars, lars_path, lars_path_gram, LarsCV,
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\sklearn\linear_model\_least_angle.py", line 30, in <module>
method='lar', copy_X=True, eps=np.finfo(np.float).eps,
File "C:\Users\one\.conda\envs\nlpenv\lib\site-packages\numpy\__init__.py", line 284, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'float'
</code></pre>
|
<python><nlu>
|
2022-12-21 20:08:45
| 1
| 697
|
YuMei
|
74,881,272
| 17,889,840
|
How to create Tensorflow dataset batches for variable shape inputs?
|
<p>I have dataset for image captioning. Each image has different number of captions (or sentences), let say some images have seven captions and other may have ten or more. I used the following code for dataset creation:</p>
<pre><code>def make_dataset(videos, captions):
dataset = tf.data.Dataset.from_tensor_slices((videos, tf.ragged.constant(captions)))
dataset = dataset.shuffle(BATCH_SIZE * 8)
dataset = dataset.map(process_input, num_parallel_calls=AUTOTUNE)
dataset = dataset.batch(BATCH_SIZE).prefetch(AUTOTUNE)
return dataset
</code></pre>
<p>this code is worked fine only when the <code>BATCH_SIZE = 1</code> .
when I try to use <code>BATCH_SIZE = 2</code> or more I get the following error:</p>
<pre><code>InvalidArgumentError: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [7,20], [batch]: [10,20] [Op:IteratorGetNext]
</code></pre>
<p>Is there a way to merge these data in batches without using padding?</p>
|
<python><tensorflow-datasets><batchsize>
|
2022-12-21 20:05:36
| 0
| 472
|
A_B_Y
|
74,881,230
| 3,200,552
|
How to use asyncio.gather with dynamically generated tasks?
|
<p>I have an async method called <code>send_notifications_async(token)</code>. This method users a library to sent apple push notifications asynchronously. Given a list of tokens how can I dynamically create a iterable set of tasks to run asynchronously with asyncio.gather?</p>
<pre><code>async def gather_notification_responses(tokens):
return await asyncio.gather(*[send_notification_async(token.token) async for token in tokens])
</code></pre>
<p>I'm running the code with this synchronous method:</p>
<pre><code>def send_notifications(tokens):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
results = loop.run_until_complete(gather_notification_responses(tokens))
loop.close()
return results
</code></pre>
<p>I keep facing the same 2 runtime errors:</p>
<pre><code>OSError: [Errno 9] Bad file descriptor
</code></pre>
<p>and</p>
<pre><code>RuntimeError: Event loop is closed
</code></pre>
|
<python><asynchronous><apple-push-notifications><python-asyncio>
|
2022-12-21 20:01:03
| 0
| 785
|
merhoo
|
74,881,208
| 7,984,318
|
pandas python how to convert time duration to milliseconds?
|
<p>I have a df ,you can have it by copy and run the following code:</p>
<pre><code>import pandas as pd
from io import StringIO
df = """
b_id duration1 duration2 user
384 28 days 21:05:16.141263 0 days 00:00:44.999706 Test
"""
df= pd.read_csv(StringIO(df.strip()), sep='\s\s+', engine='python')
df
</code></pre>
<p>My question is ,how can I convert the time duration from '28 days 21:05:16.141263' to milliseconds ?</p>
|
<python><pandas><dataframe><datetime><timedelta>
|
2022-12-21 19:58:54
| 2
| 4,094
|
William
|
74,881,196
| 5,716,633
|
I am looking for a comprehensive explanation of the `inputs` parameter of the `.backward()` method in PyTorch
|
<p>I am having trouble understanding the usage of the <code>inputs</code> keyword in the <code>.backward()</code> call.</p>
<p>The Documentation says the following:</p>
<blockquote>
<p><strong>inputs</strong> (sequence of Tensor) – Inputs w.r.t. which the gradient will be accumulated into .grad. All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used to compute the attr::tensors.</p>
</blockquote>
<p>From what I understand this allows us to specify the inputs against which we'll look at gradients.</p>
<p>Isn't the already specified if <code>.backward()</code> is called some tensor like a loss, <code>loss.backward()</code>?
wouldn't the computation graph ensure that gradients are calculated with respect to the relevant parameters.</p>
<p>I haven't found sources that explain this better. I'd appreciate if I could be directed to an explanation.</p>
|
<python><pytorch><autograd>
|
2022-12-21 19:57:49
| 1
| 449
|
Shashwat
|
74,881,158
| 9,415,280
|
deleat row with nan in a tensorflow dataset
|
<p>There is a way to do like pandas inside tensor dataset, deleating row with a nan like this???</p>
<pre><code>ds = ds[~np.isnan(ds).any(axis=1)]
</code></pre>
<p>My test exemple is:</p>
<pre><code>simple_data_samples = np.array([
[1, 11, 111, -1, -11],
[2, np.nan, 222, -2, -22],
[3, 33, 333, -3, -33],
[4, 44, 444, -4, -44],
[5, 55, 555, -5, -55],
[6, 66, 666, -6, -66],
[7, 77, 777, -7, -77],
[8, 88, 888, -8, -88],
[9, 99, 999, -9, np.nan],
[10, 100, 1000, -10, -100],
[11, 111, 1111, -11, -111],
[12, 122, 122, -12, -122]
])
ds = tf.data.Dataset.from_tensor_slices(simple_data_samples)
ds = dataset.window(4, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda x: x).batch(4)
ds = ds.shuffle(dim_dataset)
# clear nan row here
</code></pre>
<p>This must be done after shuffle.</p>
<p><strong>###############EDIT UPDATE##############</strong></p>
<p>next step is to split label with this short function:</p>
<pre><code>def split_feature_label(x):
return x[:input_sequence_length], x[input_sequence_length:,
slice(slice_size, None, None)]_test
</code></pre>
<p>and final transform like this...</p>
<pre><code>ds = ds.map(split_feature_label)
# split data train test set.
split = round(split_train_ratio * (dim_dataset - input_sequence_length - forecast_sequence_length))
ds_train = ds.take(split)
ds_valid = ds.skip(split)
ds_train = ds_train.batch(batch_size, drop_remainder=True)
ds_valid = ds.batch(batch_size, drop_remainder=True)
ds_train = ds_train.prefetch(1)
ds_valid = ds.prefetch(1)
return iter(ds_train), iter(ds_valid)
</code></pre>
<p>If I introduce this proposed solution:</p>
<pre><code>ds = ds.map(lambda x: tf.boolean_mask(x, tf.reduce_all(~tf.math.is_nan(x), axis=-1)))
</code></pre>
<p>It seem to work until I call the next step of splitting my input and label (last column == label). The code run but if I try to inspect my data after this splitting I get these messages:</p>
<pre><code>2022-12-23 10:15:05.514989: W tensorflow/core/framework/op_kernel.cc:1780] OP_REQUIRES failed at strided_slice_op.cc:111 : INVALID_ARGUMENT: slice index 3 of dimension 0 out of bounds.
and
raise core._status_to_exception(e) from None # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__IteratorGetNext_output_types_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} slice index 3 of dimension 0 out of bounds.
[[{{node strided_slice_1}}]] [Op:IteratorGetNext]
````
Something seem to change in the shape or structure!?!?
</code></pre>
|
<python><tensorflow><nan><tensorflow-datasets><tf.data.dataset>
|
2022-12-21 19:53:55
| 1
| 451
|
Jonathan Roy
|
74,881,051
| 360,557
|
Rendering an ellipse using matplotlib to a basemap with a projection
|
<p>I'm plotting an ellipse using matplotlib using this code:</p>
<pre><code>lat = float(ellipse['cy'])
lng = float(ellipse['cx'])
width = float(ellipse['rx']) / 111111.0
height = float(ellipse['ry']) / 111111.0
rot = float(ellipse['rotation']) # + 180?
xpt, ypt = lng, lat
if m != None:
xpt, ypt = m(lng, lat)
ell = Ellipse(xy=(xpt, ypt), width=width*2, height=height*2,
angle=rot, edgecolor='black', facecolor='none')
ax.add_patch(ell)
ax.annotate(ellipse['label'], (xpt, ypt))
</code></pre>
<p>I'm using this code to create the basemap:</p>
<pre><code># setup Lambert Conformal basemap.
m = Basemap(width=12000000, height=9000000, projection='lcc',
resolution='c', lat_1=45., lat_2=55, lat_0=50, lon_0=-107.)
# m.shadedrelief()
m.drawlsmask(land_color='coral', ocean_color='aqua', lakes=True)
</code></pre>
<p>The ellipse isn't rendered. If I don't use a basemap and set m to None, then the ellipse is rendered. Do I need to do anything specific for the ellipse when rendering to a projected map? I'm using matplotlib 3.6.2 on Python 3.11.0 (Windows 11).</p>
<p>I read <a href="https://stackoverflow.com/questions/8161144/drawing-ellipses-on-matplotlib-basemap-projections">Drawing ellipses on matplotlib basemap projections</a> but it's 11 years old, so I'm not sure if the code is still relevant.</p>
<p>Thanks</p>
|
<python><matplotlib><ellipse>
|
2022-12-21 19:40:58
| 1
| 534
|
Mike Stoddart
|
74,880,905
| 7,984,318
|
pandas how to get mean value of datetime timestamp with some conditions?
|
<p>I have a df ,you can have it by copy and run the following code:</p>
<pre><code>import pandas as pd
from io import StringIO
df = """
b_id duration1 duration2 user
366 NaN 38 days 22:05:06.807430 Test
367 0 days 00:00:05.285239 NaN Test
368 NaN NaN Test
371 NaN NaN Test
378 NaN 451 days 14:59:28.830482 Test
384 28 days 21:05:16.141263 0 days 00:00:44.999706 Test
466 NaN 38 days 22:05:06.807430 Tom
467 0 days 00:00:05.285239 NaN Tom
468 NaN NaN Tom
471 NaN NaN Tom
478 NaN 451 days 14:59:28.830482 Tom
484 28 days 21:05:16.141263 0 days 00:00:44.999706 Tom
"""
df= pd.read_csv(StringIO(df.strip()), sep='\s\s+', engine='python')
df
</code></pre>
<p>My question is ,how can I get the mean value of each duration of each user ?</p>
<p>The output should something like this(the mean value is a fake one for sample ,not the exactly mean value):</p>
<pre><code>mean_duration1 mean_duration2 user
8 days 22:05:06.807430 3 days 22:05:06.807430 Test
2 days 00:00:05.285239 4 days 22:05:06.807430 Tom
</code></pre>
|
<python><pandas><dataframe><numpy><pandas-timeindex>
|
2022-12-21 19:26:51
| 1
| 4,094
|
William
|
74,880,762
| 8,587,712
|
How to fill between two curves of different x and y ranges with matplotlib
|
<p>Say I have two lines, defined by the data</p>
<pre><code>x1 = [0,1,2,3]
y1 = [3,5,4,6]
x2 = [1.5,2.5,3.5,4.5]
y2 = [1,3,2,4]
</code></pre>
<p>which make the plot</p>
<pre><code>plt.figure(figsize=(10,10))
plt.plot(x1, y1)
plt.plot(x2, y2)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/z2zoZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z2zoZ.png" alt="enter image description here" /></a></p>
<p>How can I fill between those two lines? I want the polygon made by connecting the endpoints of these lines, but <code>plt.fill_between</code> and <code>plt.fill_betweenx</code> don't work, as they are on both different x and y ranges.</p>
|
<python><matplotlib><plot><astronomy>
|
2022-12-21 19:11:21
| 0
| 313
|
Nikko Cleri
|
74,880,750
| 5,399,268
|
Problem using exec within class' function in Python
|
<p>The following code works as expected</p>
<pre><code>name = "Test"
myname = ""
exec('myname ="' + name + '"')
print(myname)
</code></pre>
<p>Which shows as result:</p>
<pre><code>Test
</code></pre>
<h2>Problem</h2>
<p>However, if I define the same within a function in a <code>class</code> and execute it I get as result an empty string.</p>
<pre><code>class new(object):
def __init__(self, name):
self.print(name)
def print(self, name):
myname = ""
exec('myname ="' + name + '"')
print(myname)
a = new("My name")
</code></pre>
<p>The above is a toy example code of a bigger code.</p>
<h2>Question</h2>
<p>How to define the function so as to get the same result? The exec function is actually needed in the bigger code.</p>
|
<python><class><exec>
|
2022-12-21 19:10:15
| 1
| 4,793
|
Cedric Zoppolo
|
74,880,733
| 7,984,318
|
pandas error:None of ['b_id'] are in the columns
|
<p>I'm trying to create a dataframe from the following code:</p>
<pre><code>import pandas as pd
from io import StringIO
df = """
b_id Rejected Remediation user
366 NaN 38 days 22:05:06.807430 Test
367 0 days 00:00:05.285239 NaN Test
368 NaN NaN Test
371 NaN NaN Test
378 NaN 451 days 14:59:28.830482 Test
384 28 days 21:05:16.141263 0 days 00:00:44.999706 Test
"""
df= pd.read_csv(StringIO(df.strip()), sep='|')
df.set_index("b_id", inplace = True)
</code></pre>
<p>But I received error:</p>
<pre><code>"None of ['b_id'] are in the columns"
</code></pre>
<p>Any friends can help ?</p>
|
<python><pandas><dataframe>
|
2022-12-21 19:08:31
| 1
| 4,094
|
William
|
74,880,709
| 1,677,381
|
How to identify the user or machine making an HTTP request to an Apache web server on RHEL 7 using server side Python or client side script?
|
<p>I have a RHEL 7 Linux server using Apache 2.4 as the httpd daemon. One of the pages served by Apache is a simple https form that is generated using Python 3.11. Currently, the form is submitting and being processed properly, but we have no way to track where the form was submitted from.</p>
<p>Ideally, there would be a field for users to enter their user name, but we have no way of validating if the user name is valid or not.</p>
<p>I would like to add a hidden field to the form that would contain one of the following:</p>
<ul>
<li>User name used to log into the clients computer from where the form was submitted.</li>
<li>Computer name of the clients computer from where the form was submitted.</li>
<li>IP address of the clients computer from where the from was submitted.</li>
</ul>
<p>I do not care if this data is discovered by Python while the page is being generated, or by a client side script embedded in the generated web page.</p>
<p>The majority of users will be using Windows 10 and Chrome or Edge as their browser, but there will be Apple and Linux users and other browsers as well.</p>
<p>Is this possible? If so, how?</p>
|
<python><apache><authentication><https><rhel>
|
2022-12-21 19:06:31
| 1
| 363
|
Calab
|
74,880,628
| 6,382,242
|
OneHotEncoder - Predefined categories for SOME columns?
|
<p>Let's say I have this dataframe:</p>
<pre><code>df = pd.DataFrame({"a": [1,2,3], "b": ["d", "d", "d"]})
</code></pre>
<p>And I want to OneHotEncode both the "a" and "b" columns. But let's say that I know what the categories of the "a" column are: {1, 2, 3, 4, 5}, but I don't know what the categories for the "b" column are (and want them to be automatically inferred).</p>
<p>How can I use the default <code>categories='auto'</code> behavior for only the "b" feature, but pass the categories for the "a" feature? Looks like OneHotEncode doesn't allow that: either you pass in 'auto' for all features or predefined categories for ALL features.</p>
<p>I would like to keep the encoder for future transforms and the capability to handle unknown/unseen categories like the way Sklearn's OHE does.</p>
<p>I tried passing <code>categories=[[1,2,3,4,5], 'auto']</code>, <code>categories=[[1,2,3,4,5], None]</code>, <code>categories=[[1,2,3,4,5], []]</code>, but all of them errored out.</p>
<hr />
<p>Function snipped</p>
<pre><code>def one_hot_encode_categorical_columns(df, columns, categories="auto"):
ohe = OneHotEncoder(categories=categories, sparse=False, handle_unknown="ignore")
ohe_df = pd.DataFrame(ohe.fit_transform(df[columns]))
ohe_df.columns = ohe.get_feature_names_out(columns)
new_df = pd.concat([df, ohe_df], axis=1)
return ohe, new_df
df = pd.DataFrame({"a": [1,2,3], "b": ["d", "d", "d"]})
# call function here
</code></pre>
|
<python><pandas><one-hot-encoding>
|
2022-12-21 18:57:34
| 2
| 529
|
WalksB
|
74,880,421
| 7,598,461
|
Can't create venv in docker jupyter image (permissions)
|
<p>When trying to docker build from this Dockerfile:</p>
<pre><code>FROM jupyter/datascience-notebook:latest
ENV VIRTUAL_ENV=/opt/venv
RUN python -m venv $VIRTUAL_ENV
</code></pre>
<p>... I receive the following error:</p>
<pre><code>Error: [Errno 13] Permission denied: '/opt/venv'
The command '/bin/bash -o pipefail -c python -m venv $VIRTUAL_ENV' returned a non-zero code: 1
</code></pre>
<p>It works fine with other parent images.</p>
<p>I don't understand why it doesn't work. Since my Dockerfile doesn't contain a <code>USER ...</code> line, the <code>RUN</code> line which tries to create the venv is operating as <code>root</code> user isn't it?</p>
<p>I've tried the usual things like including a <code>RUN chown appuser /opt</code> higher up in the Dockerfile without success.</p>
|
<python><docker><permissions><python-venv>
|
2022-12-21 18:36:45
| 0
| 5,091
|
jsstuball
|
74,880,315
| 10,164,750
|
extract hypen separated values from a column and apply UDF
|
<p>I have a <code>dataframe</code> like as provided below:</p>
<pre><code>+--------+-------+-------+--------------+--------------------+-----------------+----------+--------------------+------------+
|sequence|recType|valCode|registerNumber| rest| errorCode|errorType | errorDescription|isSuccessful|
+--------+-------+-------+--------------+--------------------+-----------------+----------+--------------------+------------+
| 9| 11| 0| XXXX2288|110XXXX2288MKKKKK...| CHAR0088| ERROR|Records out of se...| N|
| 9| 12| 0| XXXX2288|130XXXX22880011ZZ...| CHAR0088| ERROR|Records out of se...| N|
| 9| 18| 0| XXXX2288|140XXXX2288 ...| CHAR0088| ERROR|Records out of se...| N|
+--------+-------+-------+--------------+--------------------+-----------------+----------+--------------------+------------+ N|
</code></pre>
<p>The below code uses <code>UDF</code> to populate the data for <code>errorType</code> and <code>errorDescription</code> columns.
The <code>UDFs</code> i.e. <code>resolveErrorTypeUDF</code> and <code>resolveErrorDescUDF</code> take one <code>errorCode</code> as input and provide the respective <code>errorType</code> and <code>errorDescription</code> in output respectively.</p>
<pre><code>errorFinalDf = errorDfAll.na.fill("") \
.withColumn("errorType", resolveErrorTypeUDF(col("errorCode"))) \
.withColumn("errorDescription", resolveErrorDescUDF(col("errorCode"))) \
.withColumn("isSuccessful", when(trim(col("errorCode")).eqNullSafe(""), "Y").otherwise("N")) \
.dropDuplicates()
</code></pre>
<p>Please notice that, I used to get only one <code>error code</code> in <code>errorCode</code> column. Now onwards, I will be getting single/multiple <code>-</code> separated <code>error codes</code> in the <code>errorCode</code> column. And I need to populate all the mapping <code>errorType</code> and <code>errorDescription</code> and write them into respective column with <code>-</code> separation.</p>
<p>The new <code>dataframe</code> would look like this.</p>
<pre><code>+--------+-------+-------+--------------+--------------------+-----------------+----------+--------------------+------------+
|sequence|recType|valCode|registerNumber| rest| errorCode|errorType | errorDescription|isSuccessful|
+--------+-------+-------+--------------+--------------------+-----------------+----------+--------------------+------------+
| 7| 1| 0| XXXX8822|010XXXX8822XBCDEF...|CHAR0009-CHAR0021|ERROR-WARN|Short Failed-Miss...| N|
| 7| 11| 0| XXXX8822|110XXXX8822LLLLLL...|CHAR0009-CHAR0021|ERROR-WARN|Short Failed-Miss...| N|
| 7| 12| 0| XXXX8822|120XXXX8822011GB ...|CHAR0009-CHAR0021|ERROR-WARN|Short Failed-Miss...| N|
| 7| 18| 0| XXXX8822|180XXXX8822 ...|CHAR0009-CHAR0021|ERROR-WARN|Short Failed-Miss...| N|
| 7| 18| 0| XXXX8822|180XXXX88220 ...|CHAR0009-CHAR0021|ERROR-WARN|Short Failed-Miss...| N|
+--------+-------+-------+--------------+--------------------+-----------------+----------+--------------------+------------+
</code></pre>
<p>What changes would be needed to accommodate the new scenario. Please help. Thank you.</p>
|
<python><apache-spark><pyspark>
|
2022-12-21 18:23:56
| 1
| 331
|
SDS
|
74,880,185
| 4,752,223
|
TypeError: cannot pickle '_hashlib.HASH' object
|
<p>I have a rather simple dataclass.</p>
<p>I saved it on a pickle (using dill instead of the real pickle).</p>
<p><code>import dill as pickle</code></p>
<p>After some other operations:</p>
<ul>
<li>Loading the same pickle fails</li>
<li>Trying to save the same object fails</li>
</ul>
<p>Error:</p>
<p><code>TypeError: cannot pickle '_hashlib.HASH' object</code></p>
<p>I am not using any hashlib library (that I am aware of).</p>
<p>Previously I was able to pickle/unpickle the same object/dataclass without issues.</p>
<p><strong>Note:</strong> The reason of putting the Q/A here is because that error message was leading me to very obscure places, far away from my real problem/scenario. I don't want others to think there is something wrong with the dataclass or pickle/dill when it is not the case.</p>
|
<python><pickle><dill>
|
2022-12-21 18:11:31
| 2
| 2,928
|
Rub
|
74,880,162
| 14,256,643
|
How to get specific part of any url using urlparse()?
|
<p>I have an url like this</p>
<pre><code>url = 'https://grabagun.com/firearms/handguns/semi-automatic-handguns/glock-19-gen-5-polished-nickel-9mm-4-02-inch-barrel-15-rounds-exclusive.html'
</code></pre>
<p>When I use <code>urlparse()</code> function, I am getting result like this:</p>
<pre><code>>>> url = urlparse(url)
>>> url.path
'/firearms/handguns/semi-automatic-handguns/glock-19-gen-5-polished-nickel-9mm-4-02-inch-barrel-15-rounds-exclusive.html'
</code></pre>
<p>Is it possible to get something like this:</p>
<blockquote>
<p>path1 = "firearms"<br />
path2 = "handguns"<br />
path3 = "semi-automatic-handguns"</p>
</blockquote>
<p>and I don't want to get any text which have ".html" at the end.</p>
|
<python><urllib><python-re><url-parsing>
|
2022-12-21 18:09:45
| 4
| 1,647
|
boyenec
|
74,880,134
| 14,791,134
|
BeautifulSoup find.text returns empty string although element exists
|
<p>I am scraping the following; <a href="https://www.espn.com/nfl/scoreboard/" rel="nofollow noreferrer">https://www.espn.com/nfl/scoreboard/</a> and trying to get the times of the games</p>
<pre class="lang-py prettyprint-override"><code>import requests
from bs4 import BeautifulSoup
r = requests.get("https://www.espn.com/nfl/scoreboard")
soup = BeautifulSoup(r.content, "lxml")
for section in soup.find_all("section", {"class": "Card gameModules"}):
for game in section.find_all("section", {"class": "Scoreboard bg-clr-white flex flex-auto justify-between"}):
print(game.find("div", {"class": "ScoreCell__Time ScoreboardScoreCell__Time h9 clr-gray-03"}).text)
</code></pre>
<p>Even though it should return the times of the games, it just returns empty strings. Why is this?</p>
|
<python><web-scraping><beautifulsoup><espn>
|
2022-12-21 18:07:03
| 1
| 468
|
earningjoker430
|
74,880,119
| 1,828,289
|
Get peak memory usage of process tree
|
<p>I have a commandline program that launches a bunch of subprocesses (which themselves can launch more subprocesses). I'd like to somehow determine the peak combined memory usage of all subprocesses in the whole tree. This is on linux; I don't have source code for the program.</p>
<p>I can run the program from python with <code>subprocess.call()</code>, and I can get the max resident set size with <code>resource.getrusage(RUSAGE_CHILDREN).ru_maxrss</code>. I think that only returns the value for the program itself, but not its children. What is the equivalent of getrusage() that applies to the whole process tree recursively?</p>
<p>P.S. man getrusage says this:</p>
<blockquote>
<p>For RUSAGE_CHILDREN, this is the resident set size of the largest
child, not the maximum resident set size of the process tree.</p>
</blockquote>
<p>So it's not the peak of the first child as I thought, but also not the instantaneous peak of the process tree or the sum of peaks of the children.</p>
|
<python><linux><memory><memory-profiling>
|
2022-12-21 18:05:43
| 0
| 20,357
|
Alex I
|
74,879,980
| 9,603,285
|
How to overlay two 2D-histograms in Matplotlib?
|
<p>I have two datasets (corresponding with the time-positional data of hydrogen atoms and time-positional data of alumina atoms) in the same system.
I want to plot the density of each element by overlaying two <code>hist2d</code> plots using matplotlib.</p>
<p>I am currently doing this by setting an alpha value on the second <code>hist2d</code>:</p>
<pre class="lang-py prettyprint-override"><code> fig, ax = plt.subplots(figsize=(4, 4))
v = ax.hist2d(x=alx, y=aly,
bins=50, cmap='Reds')
h = ax.hist2d(x=hx, y=hy,
bins=50, cmap='Blues',
alpha=0.7)
ax.set_title('Adsorption over time, {} K'.format(temp))
ax.set_xlabel('picoseconds')
ax.set_ylabel('z-axis')
fig.colorbar(h[3], ax=ax)
fig.savefig(savename, dpi=300)
</code></pre>
<p>I do get the plot that I want, however the colors seem washed out due to the alpha value.
Is there a more correct way to do generate such plots?</p>
<p><a href="https://i.sstatic.net/TThUo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TThUo.png" alt="Plot that I currently receive. As you can see, the colors are a bit washed out" /></a></p>
|
<python><matplotlib>
|
2022-12-21 17:52:55
| 1
| 601
|
lcdumort
|
74,879,738
| 16,009,435
|
unable to host flask app on a specific ip and port
|
<p>I wanted to host my flask app on a specific port but the method I am using is not working. What I did is assign the <code>host</code> and <code>port</code> properties in my <code>socket.run()</code>. When I go to the specified address the page doesn't load. Where did I go wrong and how can I properly host a flask app with specific ip address and port. Thanks in advance.</p>
<p><strong>EDIT:</strong> when I run the app with <code>python app.py</code> it works but when I run it with <code>flask run</code> it doesn't work.</p>
<pre><code>from flask import Flask, render_template, Response
from flask_socketio import SocketIO
app = Flask(__name__)
app.config['SECRET_KEY'] = 'blahBlah'
socket = SocketIO(app)
@app.route('/')
def index():
return render_template('index.html')
if __name__ == '__main__':
socket.run(app, host='127.0.0.1', port=7000)
</code></pre>
|
<python><flask>
|
2022-12-21 17:30:42
| 2
| 1,387
|
seriously
|
74,879,722
| 6,635,590
|
Check if a filename has multiple '.'/periods in it
|
<p>So I'm making a website and in one of the pages you can upload images.</p>
<p>I didn't think of this before when making my file upload function but files are allowed to have multiple <code>.</code> in them, so how can I differentiate between the "real" <code>.</code> and the fake <code>.</code> to get the filename and the extension.</p>
<p>This is my file upload function, which isn't especially relevant but it shows how I upload the files:</p>
<pre><code>def upload_files(files, extensions, path, overwrite=False, rename=None):
if not os.path.exists(path):
os.makedirs(path)
filepath = None
for file in files:
name, ext = file.filename.split('.')
if ext in extensions or extensions == '*':
if rename:
filepath = path + rename + '.' + ext if path else rename + '.' + ext
else:
filepath = path + file.filename if path else file.filename
file.save(filepath, overwrite=overwrite)
else:
raise Exception('[ FILE ISSUE ] - File Extension is not allowed.')
</code></pre>
<p>As you can see I am splitting the filename based on the <code>.</code> that is there but I now need to split it and figure out which <code>.</code> split pair is the actual pair for filename and extension, it also creates the issue of providing too many values for the declaration <code>name, ext</code> since there is a third var now at least.</p>
|
<python><file><bottle>
|
2022-12-21 17:29:31
| 2
| 734
|
tygzy
|
74,879,616
| 4,414,359
|
How to filter a pandas dataframe after groupby and mean
|
<p>Does anyone know off the top of their head how to filter after grouping a dataframe and applying the mean function?
I'd like to be able to get the <code>hour</code>s and <code>count</code>s for <code>day</code>s 0, 5, 6.</p>
<p><a href="https://i.sstatic.net/Cp3Mn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cp3Mn.png" alt="enter image description here" /></a></p>
|
<python><pandas><function><filter><group-by>
|
2022-12-21 17:19:52
| 1
| 1,727
|
Raksha
|
74,879,346
| 387,851
|
Upgrade SQLite version on Lambda Python3.9
|
<p>The built-in version of SQLite on the AWS Lambda runtime Python 3.9 is v3.7.17 which was released in 2013. I am unable to upgrade this version with a Layer or Code Deployment because library sources are arranged in such a away where it's not overridable since Python 3.9 is bundled with SQLite (batteries included and all). I believe Lambda finds <code>sqlite.so</code> in <code>/var/runtime/lib</code></p>
<p>Is there a way to update SQLite in Lambda?</p>
<p>Python dumb of Lambda env vars:</p>
<pre><code>'LAMBDA_RUNTIME_DIR': '/var/runtime',
'LAMBDA_TASK_ROOT': '/var/task',
'LANG': 'en_US.UTF-8',
'LD_LIBRARY_PATH': '/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib',
'PATH': '/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin',
'PWD': '/var/task',
'PYTHONPATH': '/var/runtime',
'SHLVL': '0',
'TZ': ':UTC',
</code></pre>
<p>And sys.path</p>
<pre><code>[
'/var/task',
'/opt/python/lib/python3.8/site-packages',
'/opt/python',
'/var/runtime',
'/var/lang/lib/python38.zip',
'/var/lang/lib/python3.8',
'/var/lang/lib/python3.8/lib-dynload',
'/var/lang/lib/python3.8/site-packages',
'/opt/python/lib/python3.8/site-packages',
'/opt/python'
]
</code></pre>
<p>Proof:</p>
<pre class="lang-py prettyprint-override"><code>import sqlite3
print("sqlite3:")
sqlite3_version = ".".join(str(x) for x in sqlite3.version_info)
sqlite3_lib_version = ".".join(str(x) for x in sqlite3.sqlite_version_info)
print(f"Version: v{sqlite3_version}")
print(f"SQLite Library Version: v{sqlite3_lib_version}")
</code></pre>
<pre><code>sqlite3:
Version: v2.6.0
SQLite Library Version: v3.7.17
</code></pre>
|
<python><sqlite><aws-lambda>
|
2022-12-21 16:58:29
| 1
| 1,748
|
four43
|
74,879,306
| 5,901,870
|
Convert a pyspark dataframe into a dictionary filtering and collecting values from columns
|
<p>I need to convert this DataFrame to a dictionary:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>value-1</th>
<th>value-2</th>
<th>value-3</th>
</tr>
</thead>
<tbody>
<tr>
<td>1A</td>
<td>Approve</td>
<td>NULL</td>
<td>NULL</td>
</tr>
<tr>
<td>2B</td>
<td>Approve</td>
<td>Approve</td>
<td>NULL</td>
</tr>
<tr>
<td>3C</td>
<td>NULL</td>
<td>NULL</td>
<td>Approve</td>
</tr>
</tbody>
</table>
</div>
<p>output:</p>
<pre><code>{'1A': [value-1], '2B': [value-1,value-2], '3C': [value-3]}
</code></pre>
<p>Notice that I am using values of the first column of the DataFrame as keys to the dictionary.</p>
|
<python><dataframe><dictionary><pyspark>
|
2022-12-21 16:55:13
| 1
| 400
|
Mikesama
|
74,879,287
| 11,693,768
|
Pandas not filtering unless dataframe is saved into a csv and read back as a csv, source is a json loaded into dataframe
|
<p>I have a json output which looks like this.</p>
<pre><code>{'pagination': {'limit': 100, 'offset': 0, 'count': 38, 'total': 38},
'data': [{'name': 'Ceco Environmental Corp',
'symbol': 'CECE',
'has_intraday': False,
'has_eod': True,
'country': None,
'stock_exchange': {'name': 'NASDAQ Stock Exchange',
'acronym': 'NASDAQ',
'mic': 'XNAS',
'country': 'USA',
'country_code': 'US',
'city': 'New York',
'website': 'www.nasdaq.com'}},
{'name': 'CEC CoreCast Corporation Ltd',
'symbol': '600764.XSHG',
'has_intraday': False,
'has_eod': True,
'country': None,
'stock_exchange': {'name': 'Shanghai Stock Exchange',
'acronym': 'SSE',
'mic': 'XSHG',
'country': 'China',
'country_code': 'CN',
'city': 'Shanghai',
'website': 'www.sse.com.cn'}},
{'name': 'CECEP WindPower Corp',
'symbol': '601016.XSHG',
'has_intraday': False,
'has_eod': True,
'country': None,
'stock_exchange': {'name': 'Shanghai Stock Exchange',
'acronym': 'SSE',
'mic': 'XSHG',
'country': 'China',
'country_code': 'CN',
'city': 'Shanghai',
'website': 'www.sse.com.cn'}},
{'name': 'CECONOMY AG INHABER-STAMMAKTIEN O.N.',
'symbol': 'CEC.XSTU',
'has_intraday': False,
'has_eod': True,
'country': None,
'stock_exchange': {'name': 'Börse Stuttgart',
'acronym': 'XSTU',
'mic': 'XSTU',
'country': 'Germany',
'country_code': 'DE',
'city': 'Stuttgart',
'website': 'www.boerse-stuttgart.de'}},
{'name': 'CECONOMY AG ST O.N.',
'symbol': 'CEC.XFRA',
'has_intraday': False,
'has_eod': True,
'country': None,
'stock_exchange': {'name': 'Deutsche Börse',
'acronym': 'FSX',
'mic': 'XFRA',
'country': 'Germany',
'country_code': 'DE',
'city': 'Frankfurt',
'website': 'www.deutsche-boerse.com'}},
{'name': 'CECONOMY AG ST O.N.',
'symbol': 'CEC.XETRA',
'has_intraday': False,
'has_eod': True,
'country': None,
'stock_exchange': {'name': 'Deutsche Börse Xetra',
'acronym': 'XETR',
'mic': 'XETRA',
'country': 'Germany',
'country_code': 'DE',
'city': 'Frankfurt',
'website': ''}},
{'name': 'CECEP COSTIN',
'symbol': '2228.XHKG',
'has_intraday': False,
'has_eod': True,
'country': None,
'stock_exchange': {'name': 'Hong Kong Stock Exchange',
'acronym': 'HKEX',
'mic': 'XHKG',
'country': 'Hong Kong',
'country_code': 'HK',
'city': 'Hong Kong',
'website': 'www.hkex.com.hk'}},
.....
</code></pre>
<p>I am trying to load it into a dataframe and filter the <code>stock_exchange</code> column by country.</p>
<p>Here is my code.</p>
<pre><code>import pandas as pd
data = api_result.json()
result = pd.DataFrame(data['data'])
result[result['stock_exchange'].str.contains('China')]
</code></pre>
<p>But I get the following error, <code>KeyError: "None of [Float64Index([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],\n dtype='float64')] are in the [columns]"</code></p>
<p>However, if I save the output to a csv, and reload it back into a dataframe like this,</p>
<pre><code>result.to_csv('result.csv')
result = pd.read_csv('result.csv')
result[result['stock_exchange'].str.contains('China')]
</code></pre>
<p>I get the filtered dataframe like this,</p>
<pre><code>
Unnamed: 0 name symbol has_intraday has_eod country stock_exchange
1 1 CEC CoreCast Corporation Ltd 600764.XSHG False True NaN {'name': 'Shanghai Stock Exchange', 'acronym':...
2 2 CECEP WindPower Corp 601016.XSHG False True NaN {'name': 'Shanghai Stock Exchange', 'acronym':...
</code></pre>
<p>Any idea why I can't filter the dataframe without saving the frame to csv and reloading first?</p>
|
<python><json><pandas><dataframe><filter>
|
2022-12-21 16:53:31
| 2
| 5,234
|
anarchy
|
74,879,238
| 598,057
|
How to reliably access root tag's namespace declarations and attributes using lxml?
|
<p>In the following example, is there a way to either:</p>
<ol>
<li>only print the root <code>REQ-IF</code> tags's complete attributes without having lxml to print the whole XML document
or</li>
<li>to enumerate the root tag's namespaces and attributes in the same order as they appear in an XML document?</li>
</ol>
<p>My goal is to get precisely this string and nothing else:</p>
<pre><code><REQ-IF xmlns="http://www.omg.org/spec/ReqIF/20101201" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.omg.org/spec/ReqIF/20110401/reqif.xsd reqif.xsd http://www.w3.org/1999/xhtml driver.xsd">
</code></pre>
<p>In the code below, I can either print the whole XML document or access independently the <code>nsmap</code> and <code>attrib</code> properties. I can also imagine making a deep copy of a document tree, removing the children nodes and calling <code>tostring()</code> on that parent node only but I was wondering if there was a more elegant solution to just get the exact XML string of the parent tag without accessing its child tags or hacking on the tree structure?</p>
<pre><code>import io
from lxml import etree
reqif_content = """<?xml version="1.0" encoding="UTF-8"?>
<REQ-IF xmlns="http://www.omg.org/spec/ReqIF/20101201" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.omg.org/spec/ReqIF/20110401/reqif.xsd reqif.xsd http://www.w3.org/1999/xhtml driver.xsd">
<THE-HEADER>
<REQ-IF-HEADER IDENTIFIER="3dd1a60c-59d1-11da-86ca-4bda04a730ce">
<COMMENT>Embedded OLE object with multiple representation forms.</COMMENT>
<CREATION-TIME>2005-05-23T12:00:00+02:00</CREATION-TIME>
<SOURCE-TOOL-ID>Manually written</SOURCE-TOOL-ID>
<TITLE>Test data RIF72</TITLE>
</REQ-IF-HEADER>
</THE-HEADER>
</REQ-IF>
"""
xml_reqif_root = etree.parse(
io.BytesIO(bytes(reqif_content, "UTF-8"))
)
print(etree.tostring(xml_reqif_root.getroot(), pretty_print=True).decode("utf8"))
print("nsmap:")
print(xml_reqif_root.getroot().nsmap)
print("attrib:")
print(xml_reqif_root.getroot().attrib)
</code></pre>
<p>In the output, it is clear that I get a full XML which I don't want and I get some non-identical representation of the namespaces and attributes:</p>
<pre><code><REQ-IF xmlns="http://www.omg.org/spec/ReqIF/20101201" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.omg.org/spec/ReqIF/20110401/reqif.xsd reqif.xsd http://www.w3.org/1999/xhtml driver.xsd">
<THE-HEADER>
<REQ-IF-HEADER IDENTIFIER="3dd1a60c-59d1-11da-86ca-4bda04a730ce">
<COMMENT>Embedded OLE object with multiple representation forms.</COMMENT>
<CREATION-TIME>2005-05-23T12:00:00+02:00</CREATION-TIME>
<SOURCE-TOOL-ID>Manually written</SOURCE-TOOL-ID>
<TITLE>Test data RIF72</TITLE>
</REQ-IF-HEADER>
</THE-HEADER>
</REQ-IF>
nsmap:
{None: 'http://www.omg.org/spec/ReqIF/20101201', 'xsi': 'http://www.w3.org/2001/XMLSchema-instance'}
attrib:
{'{http://www.w3.org/2001/XMLSchema-instance}schemaLocation': 'http://www.omg.org/spec/ReqIF/20110401/reqif.xsd reqif.xsd http://www.w3.org/1999/xhtml driver.xsd'}
</code></pre>
|
<python><lxml>
|
2022-12-21 16:49:16
| 0
| 11,408
|
Stanislav Pankevich
|
74,879,188
| 1,044,326
|
Limit the height of the violin plot data within range
|
<p>How can I make sure that violin plot contained within it's data range? I have binary classified feature categories (1,0). Within that values can range from 0 to 1. However as you can see when binary classification is 1 (orange). There is no values beyond 0.6. How can I fix that?</p>
<pre><code>for f in binary_feature_name:
x = master_df_copy['status']
vfig = sns.violinplot(x=f, y='value', data = eda_cis_df, palette='Set2', cut=0)
fig = vfig.get_figure()
fig.savefig('./output/eda/' + f + ".png")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/2oer7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2oer7.png" alt="enter image description here" /></a></p>
<p>I am expecting something like this</p>
<p><a href="https://i.sstatic.net/aeLbr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aeLbr.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><seaborn><violin-plot>
|
2022-12-21 16:44:56
| 1
| 1,550
|
MonteCristo
|
74,879,157
| 2,878,290
|
Azure Data Factory Trigger Azure Notebook Failure
|
<p>I am trying to execute the notebook via azure datafactory to Azure Databricks notebook but unable to success my ADF pipeline, if I run the azure databricks notebook separately on my pyspark scripts, there is no error but if run via the ADF pipeline, i am getting below like.</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'prophet'</p>
<p>ModuleNotFoundError Traceback (most recent call
last) in
6 import pandas as pd
7 import pyspark.pandas as ps
----> 8 from prophet import Prophet
9 from pyspark.sql.types import StructType, StructField, StringType, FloatType, TimestampType, DateType, IntegerType
10</p>
</blockquote>
<p><a href="https://i.sstatic.net/NFL2c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NFL2c.png" alt="enter image description here" /></a></p>
<p>I am not sure, if we installed everting in ADB cluster but throwing the error in ADF pipeline. i tried to restart the cluster and all the possibility. Kindly provide your advice.</p>
|
<python><pyspark><azure-databricks><azure-data-factory>
|
2022-12-21 16:41:48
| 1
| 382
|
Developer Rajinikanth
|
74,879,155
| 20,652,094
|
Pandas or R - Merge Rows By Same Value in Column Over NaN values - Look at Example
|
<p>I have a very specific dataset it looks something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>record_id</th>
<th>event_id</th>
<th>instrument</th>
<th>repeat_inst</th>
</tr>
</thead>
<tbody>
<tr>
<td>PI0005</td>
<td>v03_abc_1</td>
<td><code>NaN</code></td>
<td>1</td>
</tr>
<tr>
<td>PI0005</td>
<td>v03_abc_1</td>
<td>i_sensor</td>
<td><code>NaN</code></td>
</tr>
<tr>
<td>PI0005</td>
<td>v03_abc_1</td>
<td><code>NaN</code></td>
<td><code>NaN</code></td>
</tr>
<tr>
<td>PI0005</td>
<td>v02_abc_33</td>
<td>i_sensor</td>
<td><code>NaN</code></td>
</tr>
<tr>
<td>PI0005</td>
<td>v02_abc_33</td>
<td><code>NaN</code></td>
<td><code>NaN</code></td>
</tr>
<tr>
<td>PI0006</td>
<td>v02_abc_1</td>
<td>i_sensor</td>
<td>1</td>
</tr>
<tr>
<td>PI0006</td>
<td>v02_abc_1</td>
<td><code>NaN</code></td>
<td><code>NaN</code></td>
</tr>
</tbody>
</table>
</div>
<p>How do I make it look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>record_id</th>
<th>event_id</th>
<th>instrument</th>
<th>repeat_inst</th>
</tr>
</thead>
<tbody>
<tr>
<td>PI0005</td>
<td>v03_abc_1</td>
<td>i_sensor</td>
<td>1</td>
</tr>
<tr>
<td>PI0005</td>
<td>v02_abc_33</td>
<td>i_sensor</td>
<td><code>NaN</code></td>
</tr>
<tr>
<td>PI0006</td>
<td>v02_abc_2</td>
<td>i_sensor</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>Where rows with the same <code>record_id</code> and <code>event_id</code> get merged together, where <code>NaN</code> values are replaced with the other value, and if both values are <code>NaN</code>, then <code>NaN</code> can be kept (like in the forth and fifth row in the original dataframe).</p>
<p>Assume that only one of the related cells have a value and all others have <code>NaN</code>.</p>
<p>This should apply to all columns of the data, there are thousands of columns and rows.</p>
<p>I tried using group by, but don't know how to continue.</p>
|
<python><r><pandas>
|
2022-12-21 16:41:44
| 1
| 307
|
user123
|
74,879,121
| 10,682,580
|
Efficiently localize an array of datetimes with pytz
|
<p>What is the most efficient way of converting an array of naive <code>datetime.datetime</code> objects to an array of timezone-aware datetime objects?</p>
<p>Currently I have them in a numpy array. The answer doesn't necessarily need to end up as a numpy array, but should consider starting as one.</p>
<p>e.g. if I have this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pytz
from datetime import datetime
# Time zone information
timezone = pytz.FixedOffset(-480)
# Numpy array of datetime objects
datetimes = np.array([datetime(2022, 1, 1, 12, 0, 0), datetime(2022, 1, 2, 12, 0, 0)])
</code></pre>
<p>How can I make <code>datetimes</code> timezone-aware?</p>
<p>Obviously list comprehension could work, but for large arrays it doesn't seem like it is as efficient as it could be. I would like a vectorized operation.</p>
<p>ChatGPT told me this would work (spoiler alert, it doesn't)</p>
<pre class="lang-py prettyprint-override"><code># Add time zone information to each datetime object
datetimes_with_timezone = timezone.localize(datetimes, is_dst=None)
</code></pre>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Nick\Anaconda3\envs\pftools\lib\site-packages\pytz\tzinfo.py", line 317, in localize
if dt.tzinfo is not None:
AttributeError: 'numpy.ndarray' object has no attribute 'tzinfo'
</code></pre>
|
<python><arrays><numpy><datetime><pytz>
|
2022-12-21 16:38:37
| 1
| 2,419
|
alex_danielssen
|
74,879,104
| 16,389,095
|
Kivy MD: How to switch between different layouts using MD Tabs
|
<p>I'm trying to design a user interface in Python / Kivy MD. The layout should include a MDTab with some icons on top, a button at the bottom and some widgets in the middle. By clicking on the tab icons, the widget in the middle should change accordingly. For example modifying the text of a label (<em>see code</em>).
For this purpose, I'm trying to use a ScreenManager defined in the <strong>Designer.kv</strong> file. Anyway, I'm not sure is the most suitable object.</p>
<pre><code>MDBoxLayout:
screen_manager: screen_manager
orientation: "vertical"
padding: 10, 0, 10, 10
MDTabs:
id: tabs
on_tab_switch: app.on_tab_switch(*args)
MDScreenManager:
id: screen_manager
Screen:
name: 'screen1'
MDBoxLayout:
MDCheckbox:
MDLabel:
text: 'TAB 1'
Screen:
name: 'screen2'
MDBoxLayout:
MDCheckbox:
MDLabel:
text: 'TAB 2'
MDRaisedButton:
text: 'CONFIGURE'
size_hint_x: 1
pos_hint: {"center_y":0.5}
</code></pre>
<p>I recall the <em>screen_manager</em> object in the .py file using the <em>ObjectProperty</em>.</p>
<pre><code>from kivy.lang import Builder
from kivymd.app import MDApp
from kivymd.uix.tab import MDTabsBase
from kivymd.uix.floatlayout import MDFloatLayout
from kivy.properties import ObjectProperty
class Tab(MDFloatLayout, MDTabsBase):
'''Class implementing content for a tab.'''
class MainApp(MDApp):
screen_manager = ObjectProperty()
icons = ["clock", "video-3d"]
def build(self):
return Builder.load_file('Designer.kv')
def on_start(self):
for tab_name in self.icons:
self.root.ids.tabs.add_widget(Tab(icon=tab_name))
def on_tab_switch(
self, instance_tabs, instance_tab, instance_tab_label, tab_text
):
'''
Called when switching tabs.
:type instance_tabs: <kivymd.uix.tab.MDTabs object>;
:param instance_tab: <__main__.Tab object>;
:param instance_tab_label: <kivymd.uix.tab.MDTabsLabel object>;
:param tab_text: text or name icon of tab;
'''
count_icon = instance_tab.icon
if count_icon == self.icons[0]:
self.screen_manager.current = 'screen1'
elif count_icon == self.icons[1]:
self.screen_manager.current = 'screen2'
if __name__ == '__main__':
MainApp().run()
</code></pre>
<p>When I run the code I get this error:
<em>AttributeError: 'NoneType' object has no attribute 'current'</em>.</p>
<p>How can I fix it? Any alternative ideas to switch between screen/layouts by clicking on Tab elements?</p>
|
<python><layout><kivy><kivy-language><kivymd>
|
2022-12-21 16:37:18
| 1
| 421
|
eljamba
|
74,879,074
| 5,212,614
|
How can we search for a few strings in a column and multiply another column by a constant?
|
<p>I thought I could search for a string in a column, and if the result is found, multiply a value in another column by a string, like this.</p>
<pre><code>df_merged['MaintCost'] = df_merged.loc[df_merged['Code_Description'].str.contains('03 Tree','17 Tree'), 'AvgTotal_OH_Miles'] * 15
df_merged['MaintCost'] = df_merged.loc[df_merged['Code_Description'].str.contains('26 Vines'), 'AvgTotal_OH_Miles'] * 5
df_merged['MaintCost'] = df_merged.loc[df_merged['Code_Description'].str.contains('overgrown primary', 'Tree fails'), 'AvgTotal_OH_Miles'] * 12
</code></pre>
<p>This can't be working because I have a string like this '03 Tree' in the column named 'Code_Description' and in 'MaintCost' I have NAN. What am I missing here?</p>
<p>Here's an example to illustrate the point. I am using slightly different names for the dataframe and column names.</p>
<pre><code>data = [{'Month': '2020-01-01', 'Expense':1000, 'Revenue':-50000, 'Building':'03 Tree'},
{'Month': '2020-02-01', 'Expense':3000, 'Revenue':40000, 'Building':'17 Tree'},
{'Month': '2020-03-01', 'Expense':7000, 'Revenue':50000, 'Building':'Tree fails'},
{'Month': '2020-04-01', 'Expense':3000, 'Revenue':40000, 'Building':'overgrown primary'},
{'Month': '2020-01-01', 'Expense':5000, 'Revenue':-6000, 'Building':'Tree fails'},
{'Month': '2020-02-01', 'Expense':5000, 'Revenue':4000, 'Building':'26 Vines'},
{'Month': '2020-03-01', 'Expense':5000, 'Revenue':9000, 'Building':'26 Vines'},
{'Month': '2020-04-01', 'Expense':6000, 'Revenue':10000, 'Building':'Tree fails'}]
df = pd.DataFrame(data)
df
df['MaintCost'] = df.loc[df['Building'].str.contains('03 Tree','17 Tree'), 'Expense'] * 15
df['MaintCost'] = df.loc[df['Building'].str.contains('26 Vines'), 'Expense'] * 5
df['MaintCost'] = df.loc[df['Building'].str.contains('overgrown primary', 'Tree fails'), 'Expense'] * 12
df['MaintCost'] = df.loc[df['Building'].str.contains('Tree fails'), 'Expense'] * 10
df['MaintCost'] = df['MaintCost'].fillna(100)
df
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/XfRqG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XfRqG.png" alt="enter image description here" /></a></p>
<p>For one thing, I would expect to see 15000 in row zero but I am getting 100 because row zero is coming back as a NAN!</p>
|
<python><python-3.x><pandas>
|
2022-12-21 16:34:42
| 1
| 20,492
|
ASH
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.