| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,106,061
| 7,641,854
|
Azure ML Pipeline prohibit file upload
|
<p>When creating a Pipeline with Python SDK V2 for Azure ML, all contents of my current working directory are uploaded. Can I blacklist some files from being uploaded? E.g. I use <code>load_env(".env")</code> in order to read some credentials, but I don't want it to be uploaded.</p>
<p>Directory content:</p>
<pre><code>./src
utilities.py # contains helper function to get Azure credentials
.env # contains credentials
conda.yaml
script.py
</code></pre>
<p>A minimal pipeline example:</p>
<pre class="lang-py prettyprint-override"><code>import mldesigner
import mlflow
from azure.ai.ml import MLClient
from azure.ai.ml.dsl import pipeline
from src.utilities import get_credential
credential = get_credential() # calls `load_env(".env")` locally
ml_client = MLClient(
credential=credential,
subscription_id="foo",
resource_group_name="bar",
workspace_name="foofoo",
)
@mldesigner.command_component(
name="testcomponent",
display_name="Test Component",
description="Test Component description.",
environment=dict(
conda_file="./conda.yaml",
image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",
),
)
def test_component():
mlflow.log_metric("metric", 0)
cluster_name = "foobar"
@pipeline(default_compute=cluster_name)
def pipe():
test_component()
pipeline_job = pipe()
pipeline_job = ml_client.jobs.create_or_update(
pipeline_job, experiment_name="pipeline_samples"
)
</code></pre>
<p>After running <code>python script.py</code> the pipeline job is created and runs in Azure ML. If I have a look at the Pipeline in Azure ML UI and inspect <em>Test Component</em> and the tab <em>Code</em> I find all source files including <code>.env</code>.</p>
<p>How can I prevent uploading this file using the SDK while creating a pipeline job?</p>
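<p>A possible direction (worth verifying against your SDK version): the Azure ML snapshot upload is documented to honor an <code>.amlignore</code> file, which uses the same syntax as <code>.gitignore</code>, placed in the directory being uploaded. A minimal sketch:</p>
<pre><code># .amlignore -- placed next to script.py
.env
</code></pre>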
|
<python><azure><azure-machine-learning-service>
|
2023-01-13 07:31:26
| 1
| 760
|
Ken Jiiii
|
75,105,937
| 21,539
|
How to center text along a path with svgwrite
|
<p>I have a path I've created in svgwrite and I want to have my text to be centered along that path.</p>
<p>It's confusing, since the svgwrite API doesn't appear to offer any mechanism for doing so.</p>
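<p>A minimal sketch of one approach, assuming <code>svgwrite.text.TextPath</code> wraps SVG's <code>textPath</code> element (untested here): combine <code>startOffset="50%"</code> with <code>text-anchor: middle</code> so the text centers around the path's midpoint.</p>
<pre class="lang-py prettyprint-override"><code>import svgwrite

dwg = svgwrite.Drawing("out.svg")
# the path the text should follow; it must be added to the drawing so it gets an id
path = dwg.add(dwg.path(d="M 50,150 A 100,100 0 0 1 250,150", fill="none", stroke="black"))
text = dwg.add(dwg.text("", text_anchor="middle"))
# start the text at the path's midpoint; text-anchor centers it around that point
text.add(svgwrite.text.TextPath(path, "centered label", startOffset="50%"))
dwg.save()
</code></pre>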
|
<python><svg><svgwrite>
|
2023-01-13 07:16:38
| 1
| 24,816
|
Zain Rizvi
|
75,105,739
| 2,955,827
|
How can I make pandas apply faster if I only use pandas built-in function in it?
|
<p>For example I have a dataframe <code>df</code> :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">trade_date</th>
<th style="text-align: right;">01</th>
<th style="text-align: right;">02</th>
<th style="text-align: right;">03</th>
<th style="text-align: right;">04</th>
<th style="text-align: right;">05</th>
<th style="text-align: right;">06</th>
<th style="text-align: right;">07</th>
<th style="text-align: right;">08</th>
<th style="text-align: right;">09</th>
<th style="text-align: right;">10</th>
<th style="text-align: right;">11</th>
<th style="text-align: right;">12</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2010-01-04 00:00:00</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">12</td>
</tr>
<tr>
<td style="text-align: left;">2010-01-05 00:00:00</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">11</td>
</tr>
<tr>
<td style="text-align: left;">2010-01-06 00:00:00</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">11</td>
</tr>
<tr>
<td style="text-align: left;">2010-01-07 00:00:00</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">11</td>
</tr>
<tr>
<td style="text-align: left;">2010-01-08 00:00:00</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">11</td>
</tr>
<tr>
<td style="text-align: left;">2010-01-11 00:00:00</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">11</td>
</tr>
<tr>
<td style="text-align: left;">2010-01-12 00:00:00</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">11</td>
</tr>
<tr>
<td style="text-align: left;">2010-01-13 00:00:00</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">11</td>
</tr>
<tr>
<td style="text-align: left;">2010-01-14 00:00:00</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">11</td>
</tr>
<tr>
<td style="text-align: left;">2010-01-15 00:00:00</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">11</td>
</tr>
</tbody>
</table>
</div>
<p>and I want the result of this computation (my current slow approach):</p>
<pre><code>df.apply(lambda r: r.nlargest(2).index.max(), axis=1)
</code></pre>
<p>All the functions used in <code>apply</code> are numpy/pandas built-in functions, so I think there should be some way to get rid of the Python-level for loop and make this transform much faster.</p>
<p>How can I do that?</p>
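<p>A vectorized sketch with NumPy (one assumption: tie-handling may differ slightly from <code>nlargest</code>, which keeps the first occurrence on ties):</p>
<pre><code>import numpy as np

vals = df.to_numpy()
top2 = np.argpartition(vals, -2, axis=1)[:, -2:]        # column positions of the two largest values per row
labels = df.columns.to_numpy()[top2]                    # the corresponding column labels
result = pd.Series(labels.max(axis=1), index=df.index)  # max label per row, same as .index.max()
</code></pre>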
|
<python><pandas><numpy>
|
2023-01-13 06:50:33
| 1
| 3,295
|
PaleNeutron
|
75,105,658
| 19,950,360
|
How to use insert_rows_from_dataframe? Python to BigQuery
|
<p>Example My DataFrame</p>
<pre><code>df = pd.DataFrame({'a': [4,5,4], 'b': ['121233', '45a56', '0000'], 'c':['sas53d1', '1asdf23', '2asdf456']})
</code></pre>
<p>my Bigquery Table Path</p>
<pre><code>table = 'tutorial.b'
</code></pre>
<p>So I create a table with</p>
<pre><code>client.load_table_from_dataframe(df,table)
</code></pre>
<p>Then I have a BigQuery table at <code>tutorial.b</code>.
So I want to insert another dataframe</p>
<pre><code>df1 = pd.DataFrame({'a': [1,2,3], 'b': ['aaa', 'bbb', 'ccc'], 'c':['casd33', 'dfsf12', 'lsdfkj3']})
</code></pre>
<p>so I use <code>insert_rows_from_dataframe</code></p>
<pre><code>client.insert_rows_from_dataframe(table = table_path, dataframe = df1)
client.insert_rows_from_dataframe(df1, table_path)
</code></pre>
<p>I tried these two calls, but I always get the same error:</p>
<pre><code>The table argument should be a table ID string, Table, or TableReference
</code></pre>
<p>How do I insert my dataframe into the already existing table?</p>
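<p>A sketch of what typically resolves that error (assuming <code>table_path</code> should be the full table ID): fetch the <code>Table</code> object first and pass it positionally.</p>
<pre><code>from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("my-project.tutorial.b")   # full "project.dataset.table" ID (project is a placeholder)
errors = client.insert_rows_from_dataframe(table, df1)
print(errors)  # one list of errors per chunk; all empty on success
</code></pre>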
|
<python><sql><google-cloud-platform><google-bigquery>
|
2023-01-13 06:37:43
| 1
| 315
|
lima
|
75,105,373
| 10,715,700
|
Comparing two values in a structfield of a column in pyspark
|
<p>I have a column where each row is a struct. I want to get the max of two values in the struct.</p>
<p>I tried this</p>
<pre class="lang-py prettyprint-override"><code>trends_df = trends_df.withColumn("importance_score", max(col("avg_total")["max"]["agg_importance"], col("avg_total")["min"]["agg_importance"], key=max_key))
</code></pre>
<p>But it throws this error</p>
<pre><code>ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.
</code></pre>
<p>I am now getting it done with UDFs</p>
<pre class="lang-py prettyprint-override"><code>max_key = lambda x: x if x else float("-inf")
_get_max_udf = udf(lambda x, y: max(x,y, key=max_key), FloatType())
trends_df = trends_df.withColumn("importance_score", _get_max_udf(col("avg_total")["max"]["agg_importance"], col("avg_total")["min"]["agg_importance"]))
</code></pre>
<p>This works, but I want to know if there is a way I can avoid using the udf and get it done with just Spark.</p>
<p>Edit:
This is the result of <code>trends_df.printSchema()</code></p>
<pre><code>root
|-- avg_total: struct (nullable = true)
| |-- max: struct (nullable = true)
| | |-- avg_percent: double (nullable = true)
| | |-- max_index: long (nullable = true)
| | |-- max_val: long (nullable = true)
| | |-- total_percent: double (nullable = true)
| | |-- total_val: long (nullable = true)
| |-- min: struct (nullable = true)
| | |-- avg_percent: double (nullable = true)
| | |-- min_index: long (nullable = true)
| | |-- min_val: long (nullable = true)
| | |-- total_percent: double (nullable = true)
| | |-- total_val: long (nullable = true)
</code></pre>
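<p>A possible UDF-free sketch with <code>F.greatest</code>, which ignores nulls unless all inputs are null (the field names are copied from the question's code):</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import functions as F

trends_df = trends_df.withColumn(
    "importance_score",
    F.greatest(
        F.col("avg_total.max.agg_importance"),
        F.col("avg_total.min.agg_importance"),
    ),
)
</code></pre>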
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2023-01-13 05:55:11
| 1
| 430
|
BBloggsbott
|
75,105,205
| 818,072
|
Use an older python version in Ubuntu when executing a binary from Java using ProcessBuilder
|
<p>I have a Java SpringBoot app that does audio processing and builds to a docker container based on Ubuntu. The app uses a third-party binary tool (<a href="https://github.com/shaka-project/shaka-packager" rel="nofollow noreferrer">https://github.com/shaka-project/shaka-packager</a>) like</p>
<pre><code>ProcessBuilder pb = new ProcessBuilder();
pb.command("shaka-packager", "-input foo", "-output bar");
pb.start().waitFor();
</code></pre>
<p>Problem:
We started updating versions of Java, etc, and are using a newer version of Ubuntu 22.04, which comes with Python 3.10. This broke shaka-packager, because it has some python component that does <code>from collections import Mapping</code>, which has been deprecated and moved to <code>collections.abc</code>.</p>
<p>I'm trying to figure out if there is a way I can run the shaka packager in a virtual 3.9 python environment. I messed around with conda for a bit, but can't seem to get it to work with Java's <code>ProcessBuilder</code> calls. <code>conda activate</code> works in interactive shells, but <code>ProcessBuilder</code> creates a new shell for every invocation. I thought <code>conda run</code> would do it, but it seems to only work with python scripts and not compiled binaries.</p>
<p>Any suggestions on how to approach this? I could fork shaka packager and fix the python script that's failing (and submit a PR), but I would prefer not maintaining our own fork long-term if the owner doesn't merge it. I would also love to move away from a binary and use a Java library instead, but haven't found one that does everything shaka can do and I don't think I have the time to experiment with a mix of different libraries to get full feature parity.</p>
|
<python><java><ubuntu><conda>
|
2023-01-13 05:28:54
| 1
| 1,660
|
Egor
|
75,105,181
| 4,504,945
|
How do you create a polygon that fills the area between 2 circles?
|
<p>I'm trying to create a function that can create a specific polygon with Pygame.</p>
<p>My snippet of code:</p>
<pre class="lang-py prettyprint-override"><code>def draw_line_round_corners_polygon(surf, point_1, point_2, color, circle_radius):
point_1_vector = pygame.math.Vector2(point_1)
point_2_vector = pygame.math.Vector2(point_2)
line_vector = (point_2_vector - point_1_vector).normalize()
line_normalised_vector = pygame.math.Vector2(-line_vector.y, line_vector.x) * circle_radius // 2
points = [point_1_vector + line_normalised_vector, point_2_vector + line_normalised_vector, point_2_vector - line_normalised_vector, point_1_vector - line_normalised_vector]
pygame.draw.polygon(surf, color, points)
pygame.draw.circle(surf, color, point_1, round(2 * circle_radius))
pygame.draw.circle(surf, color, point_2, round(circle_radius))
</code></pre>
<p>Current output:</p>
<p><a href="https://i.sstatic.net/O0Ldq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O0Ldq.png" alt="" /></a></p>
<p>Desired output:</p>
<p><a href="https://i.sstatic.net/jvmQa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jvmQa.png" alt="" /></a></p>
<p>Question:</p>
<p>How can I converge from the width of the bigger circle to the smaller circle with PyGame?</p>
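<p>A rough sketch of one way to get the tapered shape: offset each end of the polygon by that end's own radius instead of a single shared width. This only approximates the true external tangent lines, but it is usually close enough visually.</p>
<pre class="lang-py prettyprint-override"><code>import pygame

def draw_tapered_line(surf, point_1, point_2, color, radius_1, radius_2):
    p1 = pygame.math.Vector2(point_1)
    p2 = pygame.math.Vector2(point_2)
    direction = (p2 - p1).normalize()
    normal = pygame.math.Vector2(-direction.y, direction.x)
    # offset each end by its own circle's radius so the quad tapers
    points = [p1 + normal * radius_1, p2 + normal * radius_2,
              p2 - normal * radius_2, p1 - normal * radius_1]
    pygame.draw.polygon(surf, color, points)
    pygame.draw.circle(surf, color, point_1, radius_1)
    pygame.draw.circle(surf, color, point_2, radius_2)
</code></pre>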
|
<python><python-3.x><pygame>
|
2023-01-13 05:25:35
| 1
| 1,984
|
3kstc
|
75,105,038
| 4,531,757
|
Pandas - Build sequence in the group while resetting on a defined condition
|
<p>I am stuck at the limit of my knowledge, so I thought this is the time to seek help from experts.</p>
<p>I have to build a 'result' column based on my patient & schedule columns.
-- Condition 1: The sequence needs to reset when the patient changes in the patient column.
-- Condition 2: The sequence needs to reset within a patient when schedule = 000.</p>
<p>This is my sample dataset:</p>
<pre><code>df2 = pd.DataFrame({'patient': ['one', 'one', 'one', 'one','two', 'two','two','two','two'],
'schedule': ['111', '111', '000', '111', '111', '000','111','111','111'],
'date': ['11/20/2022', '11/22/2022', '11/23/2022', '11/8/2022', '11/9/2022', '11/14/2022','11/20/2022', '11/22/2022', '11/23/2022']})
</code></pre>
<p>This is my intended data frame (the new column I would like to see is 'result'):</p>
<pre><code>result = pd.DataFrame({'patient': ['one', 'one', 'one', 'one','two', 'two','two','two','two'],
'schedule': ['111', '111', '000', '111', '111', '000','111','111','111'],
'date': ['11/20/2022', '11/22/2022', '11/23/2022', '11/8/2022', '11/9/2022', '11/14/2022','11/20/2022', '11/22/2022', '11/23/2022'],
'result': ['1st_Time', '2nd_Time', 'Reset', '1st_Time', '1st_Time', 'Reset','1st_Time','2nd_Time','3rd_Time']})
</code></pre>
<p>Thank you so much for all your help.</p>
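<p>A sketch of one groupby-based approach (the ordinal names beyond 3 are a guessed pattern):</p>
<pre><code>import pandas as pd

is_reset = df2['schedule'].eq('000')
block = is_reset.groupby(df2['patient']).cumsum()              # a new block starts at each '000'
seq = (~is_reset).groupby([df2['patient'], block]).cumsum()    # running count of non-reset rows per block
names = {1: '1st_Time', 2: '2nd_Time', 3: '3rd_Time'}
df2['result'] = seq.map(lambda n: names.get(n, f'{n}th_Time')).mask(is_reset, 'Reset')
</code></pre>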
|
<python><pandas><dataframe>
|
2023-01-13 05:02:01
| 1
| 601
|
Murali
|
75,105,035
| 14,261,423
|
Why doesn't set.discard throw an error when a set is passed to it in Python?
|
<p>My question is quite simple.</p>
<p>When I run</p>
<pre><code>someSet = {1,2,3,4}
someSet.discard([5])
</code></pre>
<p>It gives the error:</p>
<pre><code>Traceback (most recent call last):
File "File.py", line 2, in <module>
someSet.discard([5])
TypeError: unhashable type: 'list'
</code></pre>
<p>Just like lists, sets are also unhashable and can't be stored in a set.
So, I expect the following code to generate an error:</p>
<pre><code>someSet = {1,2,3,4}
someSet.discard({5})
</code></pre>
<p>But to my surprise, it did not generate any error. Why is that? Does this mean that I am getting an error for the list because there is something other than it being unhashable which gives rise to the error? If yes, then what is that thing?</p>
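<p>For context, the Python docs for <code>set</code> note that the <code>elem</code> argument to <code>__contains__()</code>, <code>remove()</code> and <code>discard()</code> may itself be a set; CPython temporarily converts it to a <code>frozenset</code> for the lookup. A small sketch illustrating that:</p>
<pre><code>someSet = {1, 2, 3, 4, frozenset({5})}
someSet.discard({5})   # retried internally as frozenset({5})
print(someSet)         # frozenset({5}) is gone: {1, 2, 3, 4}
</code></pre>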
|
<python><python-3.x><list><set>
|
2023-01-13 05:01:22
| 1
| 749
|
Keshav Saraf
|
75,104,870
| 9,510,800
|
Generate time series dataframe with min and max time with the given interval pandas
|
<p>How can I generate a time series dataset with a min and max date range at a specific interval in pandas?</p>
<pre><code> min_date = 18 oct 2022
Max_date = 20 Oct 2022
interval = 1 hour
Min_date Max_date
18/10/2022 00:00:00 18/10/2022 01:00:00
18/10/2022 01:00:00 18/10/2022 02:00:00
18/10/2022 02:00:00 18/10/2022 03:00:00
18/10/2022 03:00:00 18/10/2022 04:00:00
19/10/2022 22:00:00    19/10/2022 23:00:00
19/10/2022 23:00:00    19/10/2022 23:59:00
</code></pre>
<p>Thanks in advance</p>
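<p>A minimal sketch with <code>pd.date_range</code> (the last row of the sample suggests clipping the final end to 23:59, which would need an extra step):</p>
<pre><code>import pandas as pd

idx = pd.date_range('2022-10-18', '2022-10-20', freq='1H')
out = pd.DataFrame({'Min_date': idx[:-1], 'Max_date': idx[1:]})
</code></pre>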
|
<python><pandas><numpy>
|
2023-01-13 04:31:49
| 1
| 874
|
python_interest
|
75,104,841
| 14,791,134
|
How to match regular expression characters in right position, wrong order?
|
<p>Say my string is <code>bucs</code>.
I would want to match <strong>buc</strong>caneer<strong>s</strong>, tampa bay <strong>buc</strong>caneer<strong>s</strong>, and <strong>bucs</strong>, but not "falcons".</p>
<p>I'm fairly new to regex, I tried:</p>
<pre class="lang-py prettyprint-override"><code>re.findall("bucs", "buccaneers")
</code></pre>
<p>and it returned an empty list.</p>
<p>How would I go about doing this?</p>
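<p>A sketch of one way: require the letters of the query in order, with anything allowed in between.</p>
<pre class="lang-py prettyprint-override"><code>import re

query = "bucs"
pattern = ".*?".join(map(re.escape, query))   # "b.*?u.*?c.*?s"
for s in ["buccaneers", "tampa bay buccaneers", "bucs", "falcons"]:
    print(s, bool(re.search(pattern, s)))
# buccaneers True, tampa bay buccaneers True, bucs True, falcons False
</code></pre>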
|
<python><regex>
|
2023-01-13 04:24:41
| 1
| 468
|
earningjoker430
|
75,104,648
| 9,430,855
|
OpenAPI spec generated from FastAPI does not show that missing (null) values are allowed as dict values
|
<p>Take this simple service:</p>
<pre><code># main.py
import typing as t
from pydantic.types import StrictStr
from fastapi import FastAPI
from pydantic import BaseModel, StrictBool, StrictInt, StrictStr
Value = t.Optional[StrictBool] | t.Optional[StrictInt]
class Example(BaseModel):
x: dict[StrictStr, Value]
app = FastAPI()
@app.get("/read", response_model=Example)
def read():
return Example(x={"value": None})
</code></pre>
<p>the actual service works as intended:</p>
<pre><code>$ curl localhost:8000/read | jq
{
"x": {
"value": null
}
}
</code></pre>
<p>The issue is with the <code>openapi.json</code> and how it's translating the <code>Example</code> class -- it doesn't seem to allow <code>null</code> values:</p>
<pre><code>$ curl localhost:8000/openapi.json | jq '.components'
{
"schemas": {
"Example": {
"title": "Example",
"required": [
"x"
],
"type": "object",
"properties": {
"x": {
"title": "X",
"type": "object",
"additionalProperties": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "integer"
}
]
}
}
}
}
}
}
</code></pre>
<p>This is a problem because when I use it to generate client libraries, they fail on validating the output with something like</p>
<pre><code>ApiValueError: Invalid inputs given to generate an instance of <class>. None of the anyOf schemas matched the input data.
</code></pre>
<p>Any solutions here?</p>
|
<python><fastapi><openapi><pydantic>
|
2023-01-13 03:40:11
| 0
| 3,736
|
dave-edison
|
75,104,569
| 2,951,230
|
How to extend one tensor with another if they do not have the same size
|
<pre><code>a = tensor([ [101, 103],
[101, 1045]
])
b = tensor([ [101, 777, 227],
[101, 888, 228]
])
</code></pre>
<p>How do I get this tensor c from a and b:</p>
<pre><code>c = a + b = tensor([ [101, 103, 0],
[101, 1045, 0],
[101, 777, 227],
[101, 888, 228]
])
</code></pre>
<p>I tried <code>c = torch.cat((a, b), dim=0)</code> but it does not work.</p>
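<p>A sketch: right-pad <code>a</code> to <code>b</code>'s width first, then concatenate along dim 0 (assuming zero is the desired fill value, as in the example):</p>
<pre><code>import torch
import torch.nn.functional as F

pad_cols = b.shape[1] - a.shape[1]           # extra columns a needs
a_padded = F.pad(a, (0, pad_cols), value=0)  # pad the last dim on the right
c = torch.cat((a_padded, b), dim=0)
</code></pre>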
|
<python><python-3.x><pytorch><artificial-intelligence><tensor>
|
2023-01-13 03:25:32
| 1
| 1,239
|
Brana
|
75,104,512
| 17,103,465
|
Pandas where multiple conditions are met ; concatenate the result to create a new column
|
<p>I have a code below and you can see I am using <code>np.select</code> to identify if the string in my column contains any of the codes and create a reference column with the description based on the logic.</p>
<pre><code># Creating Score column
col = 'codes_desc'
conditions = [(df_merged[col].str.contains('R27', case=False)),
(df_merged[col].str.contains('R38', case=False)),
(df_merged[col].str.contains('R52', case=False)),
(df_merged[col].str.contains('R62', case=False)),
(df_merged[col].str.contains('R21', case=False)),
(df_merged[col].str.contains('R22', case=False)),
(df_merged[col].str.contains('R23', case=False)),
(df_merged[col].str.contains('R57', case=False)),
(df_merged[col].str.contains('R82', case=False)),
(df_merged[col].str.contains('R86', case=False)),
(df_merged[col].str.contains('R20', case=False)),
(df_merged[col].str.contains('R98', case=False))
]
choices = [
'The person is a Ninja',
'The person is a Pirate',
'The person is a Doctor',
'The person is a Samurai',
'The person is a Admiral',
'The person is a Police',
'The person is a Teacher',
'The person is a Singer',
'The person is a Guitarist',
'The person is a Chef',
'The person is a Runner',
'The person is a Wizard'
]
df_merged["reference"] = np.select(conditions, choices, default= 'Reason Unknown')
</code></pre>
<p>But I find cases in my dataframe where the column 'codes_desc' contains two codes, for example:</p>
<pre><code>codes_desc
The selected codes are R27, R22.
</code></pre>
<p>In this case I want my output in the 'reference' column to be:</p>
<pre><code>1. 'The person is a Ninja'
2. 'The person is a Police'
</code></pre>
<p>But since <code>np.select</code> works like a case statement, it picks only one code's description, so how do I handle this?</p>
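<p>A sketch of one alternative: keep the codes in a dict, find every code in the string, and join all matching descriptions (only two codes shown here; the dict would need all twelve).</p>
<pre><code>import re

code_map = {'R27': 'The person is a Ninja', 'R22': 'The person is a Police'}  # extend with the other codes
pattern = '|'.join(code_map)
found = df_merged['codes_desc'].str.findall(pattern, flags=re.IGNORECASE)
df_merged['reference'] = found.apply(
    lambda codes: '; '.join(code_map[c.upper()] for c in codes) if codes else 'Reason Unknown'
)
</code></pre>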
|
<python><pandas><dataframe>
|
2023-01-13 03:15:14
| 1
| 349
|
Ash
|
75,104,389
| 17,696,880
|
Define a capture regex to recognize compound people's names joined by a connector
|
<pre class="lang-py prettyprint-override"><code>import re
def register_new_persons_names_to_identify_in_inputs(input_text):
#Cases of compound human names:
name_capture_pattern = r"(^[A-Z](?:\w+)\s*(?:del|de\s*el|de)\s*^[A-Z](?:\w+))?"
regex_pattern = name_capture_pattern + r"\s*(?i:se\s*trata\s*de\s*un\s*nombre|(?:ser[íi]a|es)\s*un\s*nombre)"
n0 = re.search(regex_pattern, input_text) #distingue entre mayusculas y minusculas
if n0:
word, = n0.groups()
if(word == None or word == "" or word == " "): print("I think there was a problem, and although I thought you were giving me a name, I couldn't interpret it!")
else: print(repr(word))
input_text = "Creo que María del Pilar se trata de un nombre" #example 1
input_text = "Estoy segura que María dEl Pilar se tRatA De uN nOmbre" #example 2
input_text = "María del Carmen es un nombre viejo" #example 2
register_new_persons_names_to_identify_in_inputs(input_text)
</code></pre>
<p>In the Spanish language some names are compound, but in the middle they have the connector <code>"del"</code>, which is sometimes written in upper case and many other times left in lower case (even when it is part of a name).</p>
<p>Because the regex requires each part of the name to start with a capital letter, it fails and does not correctly capture the person's name. I think the error in my capture regex is in the captures for each of the names: <code>^[A-Z](?:\w+))</code></p>
<p>I would also like to know if there is any way to make it not matter whether any of these connector options <code>(?:del|de\s*el|de)</code> are written in uppercase or lowercase, while case still matters for the rest of the sentence. Something like <code>(?i:del|de\s*el|de)?-i</code>, but always without affecting the capture group (which is the person's name).</p>
<p>This is the <strong>correct output</strong> that I need:</p>
<pre><code>'María del Pilar' #for example 1
'María del Pilar' #for example 2
'María del Carmen' #for example 3
</code></pre>
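<p>A sketch of a corrected pattern: the stray <code>^</code> anchors are removed and the connector gets a scoped <code>(?i:...)</code> flag. Note that it captures the connector exactly as typed, so example 2 would yield <code>'María dEl Pilar'</code> unless the case is normalized afterwards.</p>
<pre class="lang-py prettyprint-override"><code>import re

name_pattern = r"([A-ZÁÉÍÓÚÑ]\w+\s+(?i:del|de\s+el|de)\s+[A-ZÁÉÍÓÚÑ]\w+)"
regex_pattern = name_pattern + r"\s+(?i:se\s+trata\s+de\s+un\s+nombre|(?:ser[íi]a|es)\s+un\s+nombre)"
m = re.search(regex_pattern, "Creo que María del Pilar se trata de un nombre")
if m:
    print(repr(m.group(1)))   # 'María del Pilar'
</code></pre>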
|
<python><python-3.x><regex><string><regex-group>
|
2023-01-13 02:49:01
| 1
| 875
|
Matt095
|
75,104,325
| 7,959,614
|
Use multiple INNER JOINS to transpose one column in multiple columns
|
<p>I have the following table</p>
<pre><code>CREATE TABLE "holes" (
"tournament" INTEGER,
"year" INTEGER,
"course" INTEGER,
"round" INTEGER,
"hole" INTEGER,
"stimp" INTEGER,
);
</code></pre>
<p>With the following small sample of data:</p>
<pre><code>33 2016 895 1 1 12
33 2016 895 1 2 18
33 2016 895 1 3 15
33 2016 895 1 4 11
33 2016 895 1 5 18
33 2016 895 1 6 28
33 2016 895 1 7 21
33 2016 895 1 8 14
33 2016 895 1 9 10
33 2016 895 1 10 11
33 2016 895 1 11 12
33 2016 895 1 12 18
33 2016 895 1 13 15
33 2016 895 1 14 11
33 2016 895 1 15 18
33 2016 895 1 16 28
33 2016 895 1 17 21
33 2016 895 1 18 14
</code></pre>
<p>The goal is to show each <code>hole</code> as a column.
At the moment I am using this query but it's very slow.</p>
<pre><code>SELECT h.tournament, h.year, h.course, h.round,
hole1.stimp AS "hole 1",
hole2.stimp AS "hole 2",
hole3.stimp AS "hole 3",
hole4.stimp AS "hole 4",
hole5.stimp AS "hole 5",
hole6.stimp AS "hole 6",
hole7.stimp AS "hole 7",
hole8.stimp AS "hole 8",
hole9.stimp AS "hole 9",
hole10.stimp AS "hole 10",
hole11.stimp AS "hole 11",
hole12.stimp AS "hole 12",
hole13.stimp AS "hole 13",
hole14.stimp AS "hole 14",
hole15.stimp AS "hole 15",
hole16.stimp AS "hole 16",
hole17.stimp AS "hole 17",
hole18.stimp AS "hole 18"
FROM holes h
INNER JOIN holes hole1
ON hole1.course = h.hole
AND hole1.hole = '1'
INNER JOIN holes hole2
ON hole2.course = h.hole
AND hole2.hole = '2'
INNER JOIN holes hole3
ON hole3.course = h.hole
AND hole3.hole = '3'
INNER JOIN holes hole4
ON hole4.course = h.hole
AND hole4.hole = '4'
INNER JOIN holes hole5
ON hole5.course = h.hole
AND hole5.hole = '5'
INNER JOIN holes hole6
ON hole6.course = h.hole
AND hole6.hole = '6'
INNER JOIN holes hole7
ON hole7.course = h.hole
AND hole7.hole = '7'
INNER JOIN holes hole8
ON hole8.course = h.hole
AND hole8.hole = '8'
INNER JOIN holes hole9
ON hole9.course = h.hole
AND hole9.hole = '9'
INNER JOIN holes hole10
ON hole10.course = h.hole
AND hole10.hole = '10'
INNER JOIN holes hole11
ON hole11.course = h.hole
AND hole11.hole = '11'
INNER JOIN holes hole12
ON hole12.course = h.hole
AND hole12.hole = '12'
INNER JOIN holes hole13
ON hole13.course = h.hole
AND hole13.hole = '13'
INNER JOIN holes hole14
ON hole14.course = h.hole
AND hole14.hole = '14'
INNER JOIN holes hole15
ON hole15.course = h.hole
AND hole15.hole = '15'
INNER JOIN holes hole16
ON hole16.course = h.hole
AND hole16.hole = '16'
INNER JOIN holes hole17
ON hole17.course = h.hole
AND hole17.hole = '17'
INNER JOIN holes hole18
ON hole18.course = h.hole
AND hole18.hole = '18'
GROUP BY h.tournament, h.year, h.course, h.round
</code></pre>
<p>Please advise!</p>
<hr />
<p>The suggestion of @Parfait looks as follows</p>
<pre><code>SELECT h.tournament, h.year, h.course, h.round,
MIN(CASE WHEN h2.hole = '1' THEN h2.stimp END) AS "hole 1",
MIN(CASE WHEN h2.hole = '2' THEN h2.stimp END) AS "hole 2",
MIN(CASE WHEN h2.hole = '3' THEN h2.stimp END) AS "hole 3",
MIN(CASE WHEN h2.hole = '4' THEN h2.stimp END) AS "hole 4",
MIN(CASE WHEN h2.hole = '5' THEN h2.stimp END) AS "hole 5",
MIN(CASE WHEN h2.hole = '6' THEN h2.stimp END) AS "hole 6",
MIN(CASE WHEN h2.hole = '7' THEN h2.stimp END) AS "hole 7",
MIN(CASE WHEN h2.hole = '8' THEN h2.stimp END) AS "hole 8",
MIN(CASE WHEN h2.hole = '9' THEN h2.stimp END) AS "hole 9",
MIN(CASE WHEN h2.hole = '10' THEN h2.stimp END) AS "hole 10",
MIN(CASE WHEN h2.hole = '11' THEN h2.stimp END) AS "hole 11",
MIN(CASE WHEN h2.hole = '12' THEN h2.stimp END) AS "hole 12",
MIN(CASE WHEN h2.hole = '13' THEN h2.stimp END) AS "hole 13",
MIN(CASE WHEN h2.hole = '14' THEN h2.stimp END) AS "hole 14",
MIN(CASE WHEN h2.hole = '15' THEN h2.stimp END) AS "hole 15",
MIN(CASE WHEN h2.hole = '16' THEN h2.stimp END) AS "hole 16",
MIN(CASE WHEN h2.hole = '17' THEN h2.stimp END) AS "hole 17",
MIN(CASE WHEN h2.hole = '18' THEN h2.stimp END) AS "hole 18"
FROM holes h
INNER JOIN holes h2
ON h2.course = h.hole
GROUP BY h.tournament, h.year, h.course, h.round
</code></pre>
<p>I replaced <code>MAX</code> with <code>MIN</code> because there are some blanks and the output only shows these using <code>MAX</code>.<a href="https://i.sstatic.net/ZmygU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZmygU.png" alt="enter image description here" /></a></p>
|
<python><sqlite><group-by><case><conditional-aggregation>
|
2023-01-13 02:36:10
| 2
| 406
|
HJA24
|
75,104,206
| 10,836,714
|
What causes a type error when subscripting list?
|
<p>I have the following code I'm running in Colab with Python version 3.8 (not 3.9):</p>
<pre><code>from typing import List, Dict
def get_embedding(text: str, model: str=EMBEDDING_MODEL) -> list[float]:
result = openai.Embedding.create(
model=model,
input=text
)
return result["data"][0]["embedding"]
def compute_doc_embeddings(df: pd.DataFrame) -> dict[tuple[str, str], list[float]]:
"""
Create an embedding for each row in the dataframe using the OpenAI Embeddings API.
Return a dictionary that maps between each embedding vector and the index of the row that it corresponds to.
"""
return {
idx: get_embedding(r.content) for idx, r in df.iterrows()
}
</code></pre>
<p>Which results in this error:</p>
<pre class="lang-none prettyprint-override"><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-32-bd6d3a14202c> in <module>
1 from typing import List, Dict
2
----> 3 def get_embedding(text: str, model: str=EMBEDDING_MODEL) -> list[float]:
4 result = openai.Embedding.create(
5 model=model,
TypeError: 'type' object is not subscriptable
</code></pre>
<p>Is there a workaround to fixing the declaration besides upgrading to 3.9?</p>
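<p>A sketch of the usual 3.8-compatible workarounds: either use the <code>typing</code> aliases, or defer evaluation of annotations (names such as <code>EMBEDDING_MODEL</code> mirror the question's code).</p>
<pre><code>from typing import Dict, List, Tuple

def get_embedding(text: str, model: str = EMBEDDING_MODEL) -> List[float]:
    ...

def compute_doc_embeddings(df: pd.DataFrame) -> Dict[Tuple[str, str], List[float]]:
    ...

# alternatively, as the first import of the module:
# from __future__ import annotations
</code></pre>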
|
<python><typeerror><python-typing>
|
2023-01-13 02:11:21
| 0
| 1,221
|
Mark Wagner
|
75,104,127
| 6,419,790
|
Why does my function return an all-uppercase answer instead of an all-lowercase answer?
|
<p>I want to write a function that greets ‘Batman’ or ‘Black Widow’ in uppercase and all others in lowercase text.</p>
<pre><code>def hello(name):
if name == "Batman" or "Black Widow":
print(f"Hello {name.upper()}")
else:
print(f"Hello {name.lower()}")
hello("Aqua Man")
</code></pre>
<p><a href="https://i.sstatic.net/1uqva.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1uqva.png" alt="Aqua Man should be all in lower but it shows as all upper" /></a></p>
<p>Here, Aqua Man should be all lowercase, but it shows as all uppercase.
Can someone help me with this problem? Thank you!</p>
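<p>For reference, a sketch of the condition written so both names are actually compared. As written, <code>name == "Batman" or "Black Widow"</code> parses as <code>(name == "Batman") or ("Black Widow")</code>, and the non-empty string is truthy for everyone.</p>
<pre><code>def hello(name):
    if name in ("Batman", "Black Widow"):
        print(f"Hello {name.upper()}")
    else:
        print(f"Hello {name.lower()}")

hello("Aqua Man")   # Hello aqua man
</code></pre>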
|
<python><function>
|
2023-01-13 01:55:54
| 1
| 927
|
Minho
|
75,104,089
| 2,951,230
|
How to pad a tensor
|
<p>How would I pad this tensor by adding element 100 on the end</p>
<pre><code>a = tensor([[ 101, 103],
[ 101, 1045, 223],
[ 101, 777, 665 , 889],
[ 101, 888]])
</code></pre>
<p>So the result would be:</p>
<pre><code> b = tensor([[ 101, 103, 100, 100],
[ 101, 1045, 223, 100],
[ 101, 777, 665 , 889],
[ 101, 888, 100, 100]])
</code></pre>
<p>I know the function is torch.nn.functional.pad(), but I could not find any simple example with a tensor like this, which is probably a 2d tensor.</p>
<p>That was surprising, because this is the most typical kind of padding.</p>
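<p>A sketch assuming the rows start out as a list of 1-D tensors (a ragged nested list is not a valid tensor): <code>pad_sequence</code> does exactly this kind of right-padding.</p>
<pre><code>import torch
from torch.nn.utils.rnn import pad_sequence

rows = [torch.tensor([101, 103]),
        torch.tensor([101, 1045, 223]),
        torch.tensor([101, 777, 665, 889]),
        torch.tensor([101, 888])]
b = pad_sequence(rows, batch_first=True, padding_value=100)
</code></pre>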
|
<python><python-3.x><pytorch><artificial-intelligence>
|
2023-01-13 01:49:47
| 2
| 1,239
|
Brana
|
75,103,799
| 2,951,230
|
How to extend one tensor with another. So the result contains all the elements from the 2 tensors, see example
|
<pre><code>a = tensor([ [101, 103],
[101, 1045]
])
b = tensor([ [101, 777],
[101, 888]
])
</code></pre>
<p>How do I get this tensor c from a and b:</p>
<pre><code>c = a + b = tensor([ [101, 103],
[101, 1045],
[101, 777],
[101, 888]
])
</code></pre>
<p>With Python lists this would simply be <code>c = a + b</code>, but with PyTorch it just adds the elements and does not extend the list.</p>
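<p>Since both tensors here have the same number of columns, a sketch with <code>torch.cat</code> should do it (while <code>+</code> is element-wise addition):</p>
<pre><code>import torch

c = torch.cat((a, b), dim=0)   # shape (4, 2): rows of a followed by rows of b
</code></pre>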
|
<python><python-3.x><pytorch><artificial-intelligence><tensor>
|
2023-01-13 00:52:08
| 1
| 1,239
|
Brana
|
75,103,692
| 8,081,835
|
Handling class mismatch in ImageDataGenerator for training and validation sets
|
<p>How do I work around my training image dataset having a different number of classes than the validation set?</p>
<p>Directory structure:</p>
<pre><code>- train
- class1
- class2
- class3
- test
- class1
- class3
</code></pre>
<pre class="lang-py prettyprint-override"><code>idg = ImageDataGenerator(
preprocessing_function=preprocess_input
)
train_gen = idg.flow_from_directory(
TRAIN_DATA_PATH,
target_size=(ROWS, COLS),
batch_size = 32
)
val_gen = idg.flow_from_directory(
TEST_DATA_PATH,
target_size=(ROWS, COLS),
batch_size = 32
)
input_shape = (ROWS, COLS, 3)
nclass = len(train_gen.class_indices)
base_model = applications.InceptionV3(weights='imagenet',
include_top=False,
input_shape=(ROWS, COLS,3))
base_model.trainable = False
model = Sequential()
model.add(base_model)
model.add(GlobalAveragePooling2D())
model.add(Dropout(0.5))
model.add(Dense(nclass, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.SGD(learning_rate=1e-4, momentum=0.9),
metrics=['accuracy'])
model.summary()
model.fit(
train_gen,
epochs=20,
verbose=True,
validation_data=val_gen
)
</code></pre>
<p>The error I get is related to the different number of classes in validation set.</p>
<pre><code>Node: 'categorical_crossentropy/softmax_cross_entropy_with_logits'
logits and labels must be broadcastable: logits_size=[32,206] labels_size=[32,189]
</code></pre>
<p>I have 206 classes in the train set and 189 in the validation set. Is it possible to have the same mapping as in the train set (the names of the image folders are the same, I'm just missing some of them)?</p>
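<p>A possible direction (untested assumption: both generators accept an explicit class list, and folders missing from the test directory simply contribute no samples): pass the same <code>classes</code> to both calls so the label indices line up.</p>
<pre class="lang-py prettyprint-override"><code>import os

classes = sorted(os.listdir(TRAIN_DATA_PATH))   # the full 206-class list

train_gen = idg.flow_from_directory(TRAIN_DATA_PATH, target_size=(ROWS, COLS),
                                    batch_size=32, classes=classes)
val_gen = idg.flow_from_directory(TEST_DATA_PATH, target_size=(ROWS, COLS),
                                  batch_size=32, classes=classes)
</code></pre>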
|
<python><tensorflow><keras>
|
2023-01-13 00:32:08
| 2
| 771
|
Mateusz Dorobek
|
75,103,687
| 8,901,144
|
Reindex Pyspark Dataframe with dates in Year-Week format for each group
|
<p>I have the following Pyspark dataframe:</p>
<pre><code>id1 id2 date col1 col2
1 1 2022-W01 5 10
1 2 2022-W02 2 5
1 3 2022-W03 3 8
1 5 2022-W05 5 3
2 2 2022-W03 2 2
2 6 2022-W05 4 1
2 8 2022-W07 3 2
</code></pre>
<p>I would like to fill the missing dates for each id1 and obtain something like this:</p>
<pre><code>id1 id2 date col1 col2
1 1 2022-W01 5 10
1 2 2022-W02 2 5
1 3 2022-W03 3 8
1 NA 2022-W04 NA NA
1 5 2022-W05 5 3
2 2 2022-W03 2 2
2 NA 2022-W04 NA NA
2 6 2022-W05 4 1
2    NA   2022-W06   NA    NA
2 8 2022-W07 3 2
</code></pre>
<p>I started with this code:</p>
<pre><code>df.groupby('id').agg(F.expr('max(date)').alias('max_date'),F.expr('min(date)').alias('min_date'))\
.withColumn('date',F.expr("explode(sequence(min_date,max_date,interval 1 week))"))\
.drop('max_date','min_date')
)
</code></pre>
<p>The main problem is that my date is in a particular format, '2022-W01'. I couldn't find a quick solution for it.</p>
|
<python><date><pyspark><group-by><reindex>
|
2023-01-13 00:29:32
| 1
| 1,255
|
Marco
|
75,103,628
| 6,676,101
|
What is an alternative to using `__getattr__()` method for wrapper classes?
|
<p>Suppose that I have two classes:</p>
<blockquote>
<ol>
<li>a class named <code>Swimmer</code></li>
<li>a class named <code>Person</code></li>
</ol>
</blockquote>
<p>For my particular application, we can <em><strong>NOT</strong></em> have <code>Swimmer</code> inherit from <code>Person</code>, although we want something like inheritance.</p>
<p>Instead of class inheritance each <code>Swimmer</code> will have an instance of the <code>Person</code> class as a member variable.</p>
<pre class="lang-python prettyprint-override"><code>class Person:
pass
class Swimmer:
def __init__(self, person):
self._person = person
def __getattr__(self, attrname:str):
try:
            attr = getattr(self._person, attrname)
return attr
except AttributeError:
raise AttributeError
</code></pre>
<hr />
<p>Perhaps the <code>Person</code> class has the following class methods:</p>
<blockquote>
<ul>
<li><code>kneel()</code></li>
<li><code>crawl()</code></li>
<li><code>walk()</code></li>
<li><code>lean_over()</code></li>
<li><code>lay_down()</code></li>
</ul>
</blockquote>
<hr />
<p>The <code>Swimmer</code> class has all of the same methods as the <code>Person</code> class, plus some additional methods:</p>
<blockquote>
<ul>
<li><code>run()</code></li>
<li><code>swim()</code></li>
<li><code>dive()</code></li>
<li><code>throw_ball()</code></li>
</ul>
</blockquote>
<p>When it comes to kneeling, crawling, walking, and laying down, a <code>Swimmer</code> is meant to be a transparent wrapper around the <code>Person</code> class.</p>
<hr />
<p>I want to write something like this:</p>
<pre class="lang-python prettyprint-override"><code>swimmer_instance = SwimmerClass(person_instance)
</code></pre>
<p>I wrote a <code>__getattr__()</code> method.</p>
<p>However, I ran into many headaches with <code>__getattr__()</code>.</p>
<p>Consider writing the code <code>self.oops</code>. There is no attribute of the <code>Swimmer</code> class named <code>oops</code>. We should not look for <code>oops</code> inside of <code>self._person</code>.</p>
<p>Anytime I mistyped the name of an attribute of <code>Swimmer</code>, my computer searched for that attribute in the instance of the <code>Person</code> class. Normally, fixing such spelling mistakes is easy. But with a <code>__getattr__()</code> method, tracking down the problem becomes difficult.</p>
<p>How can I avoid this problem?</p>
<hr />
<p>Perhaps one option is to create a sub-class of the <code>Swimmer</code> class. In the sub-class, have a method whose name is a misspelling of <code>__getattr__</code>. However, I am not sure about this idea; please advise me.</p>
<pre class="lang-python prettyprint-override"><code>class _Swimmer:
def __init__(self, person):
self._person = person
def run(self):
return "I ran"
def swim(self):
return "I swam"
def dive(self):
# SHOULD NOT LOOK FOR `oops` inside of self._person!
self.oops
return "I dove"
def _getattrimp(self, attrname:str):
# MISSPELLING OF `__getattr__`
try:
            attr = getattr(self._person, attrname)
return attr
except AttributeError:
raise AttributeError
class Swimmer(_Swimmer):
def __getattr__(self, attrname:str):
attr = self._getattrimp(attrname)
return attr
</code></pre>
<p>Really, it is important to me that we <em><strong>not</strong></em> look inside of <code>self._person</code> for anything except the following:</p>
<blockquote>
<ul>
<li><code>Kneel()</code></li>
<li><code>Crawl()</code></li>
<li><code>Walk()</code></li>
<li><code>Lean()</code></li>
<li><code>LayDown()</code></li>
</ul>
</blockquote>
<p>The solution must be more general than just something that works for the <code>Swimmer</code> class and <code>Person</code> class.</p>
<blockquote>
<h2>How do we write a function which accepts any class as input and pops out a class which has methods of the same name as the input class?</h2>
</blockquote>
<p>We can get a list of <code>Person</code> attributes by writing <code>person_attributes = dir(Person)</code>.</p>
<p>Is it appropriate to dynamically create a sub-class of <code>Swimmer</code> which takes <code>Person</code> as input?</p>
<pre class="lang-python prettyprint-override"><code>class Person:
def kneel(self, *args, **kwargs):
return "I KNEELED"
def crawl(self, *args, **kwargs):
return "I crawled"
def walk(self, *args, **kwargs):
return "I WALKED"
def lean_over(self, *args, **kwargs):
return "I leaned over"
################################################################
import functools
class TransparentMethod:
def __init__(self, mthd):
self._mthd = mthd
@classmethod
def make_transparent_method(cls, old_method):
new_method = cls(old_method)
new_method = functools.wraps(old_method)
return new_method
def __call__(self, *args, **kwargs):
ret_val = self._mthd(*args, **kwargs)
return ret_val
###############################################################
attributes = dict.fromkeys(dir(Person))
for attr_name in attributes.keys():
old_attr = getattr(Person, attr_name)
new_attr = TransparentMethod.make_transparent_method(old_attr)
name = "_Swimmer"
bases = (object, )
_Swimmer = type(name, bases, attributes)
class Swimmer(_Swimmer):
pass
</code></pre>
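<p>One hedged sketch of the whitelist idea, generalized to any class: delegate in <code>__getattr__</code> only for an explicit set of names, so a typo such as <code>self.oops</code> still raises a normal <code>AttributeError</code> on the wrapper itself.</p>
<pre class="lang-python prettyprint-override"><code>def make_wrapper(wrapped_cls, allowed=None):
    """Build a wrapper class that forwards only the whitelisted attribute names."""
    if allowed is None:
        # default: every public attribute of the wrapped class
        allowed = {n for n in dir(wrapped_cls) if not n.startswith("_")}
    allowed = frozenset(allowed)

    class Wrapper:
        def __init__(self, inner):
            self._inner = inner
        def __getattr__(self, name):
            if name in allowed:
                return getattr(self._inner, name)
            raise AttributeError(
                f"{type(self).__name__!r} object has no attribute {name!r}")
    return Wrapper

PersonWrapper = make_wrapper(Person, {"kneel", "crawl", "walk", "lean_over", "lay_down"})

class Swimmer(PersonWrapper):
    def swim(self):
        return "I swam"
</code></pre>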
|
<python><python-3.x><inheritance><attributes><dynamic-programming>
|
2023-01-13 00:17:50
| 1
| 4,700
|
Toothpick Anemone
|
75,103,548
| 2,132,478
|
How to access context from a custom django-admin command?
|
<p>I have been using the django.test client to examine the context of some URLs from several unit tests, with code similar to the following:</p>
<pre><code>from django.test import TestCase
class MyTests(TestCase):
def example_test(self):
response = self.client.get('/')
# I can access response.context at this point
</code></pre>
<p>Now I am trying to do the same thing from a <a href="https://docs.djangoproject.com/en/4.1/howto/custom-management-commands/" rel="nofollow noreferrer">custom management command</a> but surprisingly this is not working as expected as I can not access the context in the response object.</p>
<pre><code>from django.test import Client
class Command(BaseCommand):
def handle(self, *args, **kwargs):
c = Client()
response = c.get('/')
# response.context is always None at this point
</code></pre>
<p>Is there a way to access the context from a custom management command?
(Django 4.0)</p>
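<p>A sketch that may explain it: outside of <code>TestCase</code>, <code>response.context</code> is only populated when the test instrumentation is installed, which the test runner normally does via <code>setup_test_environment()</code>.</p>
<pre><code>from django.core.management.base import BaseCommand
from django.test import Client
from django.test.utils import setup_test_environment

class Command(BaseCommand):
    def handle(self, *args, **kwargs):
        setup_test_environment()   # installs the instrumentation that records template contexts
        response = Client().get('/')
        print(response.context)
</code></pre>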
|
<python><django><django-4.0>
|
2023-01-13 00:00:34
| 1
| 575
|
Alfonso_MA
|
75,103,514
| 11,462,274
|
How to find the last element loaded on the page to use in WebDriverWait to be as reliable as possible about full page loading?
|
<p>This page has dynamic loading with different elements loading at different times:</p>
<p><a href="https://www.globo.com/" rel="nofollow noreferrer">https://www.globo.com/</a></p>
<p>I use an element that I've noticed takes a little longer than the others:</p>
<pre class="lang-python prettyprint-override"><code>WebDriverWait(driver, 30).until(
EC.element_to_be_clickable(
        (By.XPATH, "//div[contains(@class,'tooltip-vitrine')]")))
</code></pre>
<p>But I would like to know if there is any way to track the sequence in which elements are loaded on the page, to find a pattern and use an element that always takes longer than the others, giving greater confidence about the complete loading of the page.</p>
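<p>One common supplement (not a guarantee for JavaScript-driven content): wait on the browser's own readiness signal in addition to a specific element.</p>
<pre class="lang-python prettyprint-override"><code>WebDriverWait(driver, 30).until(
    lambda d: d.execute_script("return document.readyState") == "complete"
)
</code></pre>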
|
<python><selenium><webdriverwait>
|
2023-01-12 23:54:44
| 2
| 2,222
|
Digital Farmer
|
75,103,492
| 3,247,006
|
What is "list_display_links" for Django Admin?
|
<p>I have <strong><code>Person</code> model</strong> below:</p>
<pre class="lang-py prettyprint-override"><code># "store/models.py"
from django.db import models
class Person(models.Model):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
age = models.IntegerField()
def __str__(self):
return self.first_name + " " + self.last_name
</code></pre>
<p>Then, I assigned <code>"first_name"</code>, <code>"last_name"</code> and <code>"age"</code> to <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display" rel="nofollow noreferrer">list_display</a> in <strong><code>Person</code> admin</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
list_display = ("first_name", "last_name", "age") # Here
</code></pre>
<p>Now, <strong>FIRST NAME</strong>, <strong>LAST NAME</strong> and <strong>AGE</strong> are displayed as shown below:</p>
<p><a href="https://i.sstatic.net/38a5G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/38a5G.png" alt="enter image description here" /></a></p>
<p>Next, I assigned <code>"first_name"</code>, <code>"last_name"</code> and <code>"age"</code> to <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display_links" rel="nofollow noreferrer">list_display_links</a> in <strong><code>Person</code> admin</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "store/admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
list_display = ("first_name", "last_name", "age")
list_display_links = ("first_name", "last_name", "age") # Here
</code></pre>
<p>But, nothing happened to <strong>the "change list" page</strong> as shown below:</p>
<p><a href="https://i.sstatic.net/BsU6i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BsU6i.png" alt="enter image description here" /></a></p>
<p>So, what is <code>list_display_links</code>?</p>
|
<python><django><hyperlink><django-admin><changelist>
|
2023-01-12 23:51:12
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
75,103,288
| 1,448,412
|
Python IDE autocomplete missing some methods from AppDaemon plugin class
|
<p>I'm trying to use autocomplete for the <code>Hass</code> class from the Python <a href="https://github.com/AppDaemon/appdaemon" rel="nofollow noreferrer">AppDaemon</a> package. Autocomplete is showing some of the inherited methods from the superclass such as <code>get_state()</code>, but some methods are missing, such as <code>log()</code> and <code>get_entity()</code>. This behavior is the same in VS Code and PyCharm Community.</p>
<p><a href="https://i.sstatic.net/QCE8b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QCE8b.png" alt="Autocomplete from superclass" /></a></p>
<p><a href="https://i.sstatic.net/RH9xx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RH9xx.png" alt="Autocomplete method from superclasses superclass" /></a></p>
<p>Here's the skeleton of a class I'm writing, which inherits from <code>hass.Hass</code>:</p>
<pre><code>import hassapi as hass
class AutocompleteTest(hass.Hass):
def initialize(self):
self.get
</code></pre>
<p>Here's the class it inherits from (<a href="https://github.com/AppDaemon/appdaemon/blob/dev/appdaemon/plugins/hass/hassapi.py" rel="nofollow noreferrer">GitHub link</a>):</p>
<pre><code>class Hass(adbase.ADBase, adapi.ADAPI):
</code></pre>
<p>The methods I want to autocomplete are in the superclass <code>adapi.ADAPI</code> (<a href="https://github.com/AppDaemon/appdaemon/blob/dev/appdaemon/adapi.py" rel="nofollow noreferrer">GitHub link</a>). Here are the method definitions from that class:</p>
<pre><code>class ADAPI:
# This method shows in autocomplete
@utils.sync_wrapper
async def get_state(
self,
entity_id: str = None,
attribute: str = None,
default: Any = None,
copy: bool = True,
**kwargs: Optional[Any],
) -> Any:
# This method does not show in autocomplete
def log(self, msg, *args, **kwargs):
# This method does not show in autocomplete
def get_entity(self, entity: str, **kwargs: Optional[Any]) -> Entity:
</code></pre>
<p>Can anyone help me understand what's going on, and how to get autocomplete fully working?</p>
<p>My requirements file:</p>
<pre><code>hassapi
iso8601
requests
</code></pre>
|
<python><plugins><autocomplete><package>
|
2023-01-12 23:18:19
| 1
| 725
|
Tim
|
75,103,237
| 2,515,265
|
How to install private Python package from an AWS S3 repository using Poetry?
|
<p>I'm using Poetry 1.3.2, created a virtual env with Python 3.8 and now trying to install the dependencies of my project via <code>poetry install</code>.</p>
<p>The project has a private dependency which is stored in an AWS S3 bucket repository, so I added the address of this channel via <code>poetry source add --secondary my-conda-repo s3://my-conda-repo/my-key</code> (the credentials are in my <code>~/.aws/credentials</code>).</p>
<p>In <code>pyproject.toml</code> I configured my private dependency as</p>
<pre><code>[tool.poetry.dependencies]
my-utils = { version = "5.0.1", source = "INTERNAL" }
</code></pre>
<p>When I run <code>poetry install</code> I get an error:</p>
<pre><code> • Installing my-utils (5.0.1): Pending...
• Installing my-utils (5.0.1): Failed
RuntimeError
Unable to find installation candidates for my-utils (5.0.1)
</code></pre>
<p><strong>How can I configure Poetry to read from my S3 repository?</strong></p>
|
<python><amazon-s3><dependencies><python-poetry>
|
2023-01-12 23:08:52
| 1
| 2,657
|
Javide
|
75,103,179
| 6,115,999
|
Why would I get an AttributeError here if I provide an if statement to get around it?
|
<p>I've been inspired by the code here: <a href="https://towardsdatascience.com/stock-news-sentiment-analysis-with-python-193d4b4378d4" rel="nofollow noreferrer">https://towardsdatascience.com/stock-news-sentiment-analysis-with-python-193d4b4378d4</a></p>
<p>I'm basically scraping news from a financial website using Beautiful Soup. This particular flavor of the code I made is where I'm stumped:</p>
<pre><code>for i, table_row in enumerate(df_tr):
if table_row.a.text is not None:
print(table_row.a.text)
</code></pre>
<p>I get this error:</p>
<p><code>AttributeError: 'NoneType' object has no attribute 'text'</code></p>
<p>But I literally provide an if statement that says if that table_row is None, it won't reach the print statement. I even did a version where I put an else statement that tells it to <code>continue</code> and I still get the same error. Why is this?</p>
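<p>The usual explanation, sketched: the exception is raised while evaluating <code>table_row.a.text</code> itself, because <code>table_row.a</code> is <code>None</code> for rows without an anchor tag, so the comparison never gets to run. Testing the tag, not its <code>.text</code>, avoids it.</p>
<pre><code>for i, table_row in enumerate(df_tr):
    a_tag = table_row.a          # None when the row has no <a> tag
    if a_tag is not None:
        print(a_tag.text)
</code></pre>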
|
<python><html><beautifulsoup>
|
2023-01-12 22:58:39
| 1
| 877
|
filifunk
|
75,103,140
| 11,925,053
|
Python process not recognized with service file
|
<p>I have a process <a href="https://github.com/YashTotale/goodreads-user-scraper" rel="nofollow noreferrer">goodreads-user-scraper</a> that runs fine within a cron scheduler script that I run from my Ubuntu terminal.</p>
<p>From my Ubuntu server terminal, I navigate to the directory containing scheduler.py and write:</p>
<pre class="lang-bash prettyprint-override"><code>python scheduler.py
</code></pre>
<p>This runs fine. It scrapes the site and saves files to the output_dir I have assigned inside the script.</p>
<p>Now, I want to run this function using a service file (socialAggregator.service).</p>
<p>When I set up a service file in my Ubuntu server to run scheduler.py, goodreads-user-scraper is not recognized. It's the exact same file I just ran from the terminal.</p>
<p>Why is goodreads-user-scraper not found when the service file calls the script?</p>
<p>Any ideas?</p>
<p>Error message form syslog file</p>
<pre><code>Jan 12 22:13:15 speedypersonal2 python[2668]: --user_id: 1: goodreads-user-scraper: not found
</code></pre>
<p>socialAggregator.service</p>
<pre><code>[Unit]
Description=Run Social Aggregator scheduler - collect data from API's and store in socialAggregator Db --- DEVELOPMENT ---
After=network.target
[Service]
User=nick
ExecStart= /home/nick/environments/social_agg/bin/python /home/nick/applications/socialAggregator/scheduler.py --serve-in-foreground
[Install]
WantedBy=multi-user.target
</code></pre>
<p>scheduler.py</p>
<pre class="lang-py prettyprint-override"><code>from apscheduler.schedulers.background import BackgroundScheduler
import json
import requests
from datetime import datetime, timedelta
import os
from sa_config import ConfigLocal, ConfigDev, ConfigProd
import logging
from logging.handlers import RotatingFileHandler
import subprocess
if os.environ.get('CONFIG_TYPE')=='local':
config = ConfigLocal()
elif os.environ.get('CONFIG_TYPE')=='dev':
config = ConfigDev()
elif os.environ.get('CONFIG_TYPE')=='prod':
config = ConfigProd()
#Setting up Logger
formatter = logging.Formatter('%(asctime)s:%(name)s:%(message)s')
formatter_terminal = logging.Formatter('%(asctime)s:%(filename)s:%(name)s:%(message)s')
#initialize a logger
logger_init = logging.getLogger(__name__)
logger_init.setLevel(logging.DEBUG)
#where do we store logging information
file_handler = RotatingFileHandler(os.path.join(config.PROJ_ROOT_PATH,'social_agg_schduler.log'), mode='a', maxBytes=5*1024*1024,backupCount=2)
file_handler.setFormatter(formatter)
#where the stream_handler will print
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(formatter_terminal)
logger_init.addHandler(file_handler)
logger_init.addHandler(stream_handler)
def scheduler_funct():
logger_init.info(f"- Started Scheduler on {datetime.today().strftime('%Y-%m-%d %H:%M')}-")
scheduler = BackgroundScheduler()
job_collect_socials = scheduler.add_job(run_goodreads,'cron', hour='*', minute='13', second='15')#Testing
scheduler.start()
while True:
pass
def run_goodreads():
logger_init.info(f"- START run_goodreads() -")
output_dir = os.path.join(config.PROJ_DB_PATH)
goodreads_process = subprocess.Popen(['goodreads-user-scraper', '--user_id', config.GOODREADS_ID,'--output_dir', output_dir], shell=True, stdout=subprocess.PIPE)
logger_init.info(f"- send subprocess now on::: goodreads_process.communicate() -")
_, _ = goodreads_process.communicate()
logger_init.info(f"- FINISH run_goodreads() -")
if __name__ == '__main__':
scheduler_funct()
</code></pre>
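<p>A likely cause worth testing: systemd services get a minimal <code>PATH</code>, so console scripts installed in the virtualenv are not found even though the interpreter is. Calling the script by absolute path (or adding an <code>Environment=PATH=...</code> line to the unit) sidesteps that; the exact location below is an assumption based on the venv path in the unit file.</p>
<pre class="lang-py prettyprint-override"><code>goodreads_process = subprocess.Popen(
    ['/home/nick/environments/social_agg/bin/goodreads-user-scraper',
     '--user_id', config.GOODREADS_ID, '--output_dir', output_dir],
    stdout=subprocess.PIPE)   # shell=True dropped: with a list, it would only run the first item through the shell
</code></pre>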
|
<python><ubuntu><service>
|
2023-01-12 22:52:00
| 1
| 309
|
costa rica
|
75,103,127
| 13,171,500
|
Getting "NotImplementedError: Could not run 'torchvision::nms' with arguments from CUDA backend" despite having all necessary libraries and imports
|
<p>The full error:</p>
<pre><code>NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
</code></pre>
<p>I get this when attempting to train a YOLOv8 model on a Windows 11 machine; everything works for the first epoch, then this occurs.</p>
<hr />
<p>I also get this error immediately after the first epoch ends but I don't think it is relevant.</p>
<pre><code>Error executing job with overrides: ['task=detect', 'mode=train', 'model=yolov8n.pt', 'data=custom.yaml', 'epochs=300', 'imgsz=160', 'workers=8', 'batch=4']
</code></pre>
<p>I was trying to train a YOLOv8 image detection model utilizing CUDA GPU.</p>
|
<python><pytorch><yolo>
|
2023-01-12 22:50:03
| 6
| 770
|
Vincent Casey
|
75,102,992
| 14,403,266
|
Fill blank cells of a pandas dataframe column by matching with another dataframe column
|
<p>I have a pandas dataframe, let's call it <code>df1</code>, that looks like this (the following is just a sample to give an idea of the dataframe):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Ac</th>
<th>Tp</th>
<th>Id</th>
<th>2020</th>
<th>2021</th>
<th>2022</th>
</tr>
</thead>
<tbody>
<tr>
<td>Efecty</td>
<td>FC</td>
<td>IQ_EF</td>
<td>100</td>
<td>200</td>
<td>45</td>
</tr>
<tr>
<td>Asset</td>
<td>FC</td>
<td></td>
<td>52</td>
<td>48</td>
<td>15</td>
</tr>
<tr>
<td>Debt</td>
<td>P&G</td>
<td>IQ_DEBT</td>
<td>45</td>
<td>58</td>
<td>15</td>
</tr>
<tr>
<td>Tax</td>
<td>Other</td>
<td></td>
<td>48</td>
<td>45</td>
<td>78</td>
</tr>
</tbody>
</table>
</div>
<p>And I want to fill the blank spaces in the <code>'Id'</code> column using the following auxiliary dataframe, let's call it <code>df2</code> (again, this is just a sample):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Ac</th>
<th>Tp</th>
<th>Id</th>
</tr>
</thead>
<tbody>
<tr>
<td>Efecty</td>
<td>FC</td>
<td>IQ_EF</td>
</tr>
<tr>
<td>Asset</td>
<td>FC</td>
<td>IQ_AST</td>
</tr>
<tr>
<td>Debt</td>
<td>P&G</td>
<td>IQ_DEBT</td>
</tr>
<tr>
<td>Tax</td>
<td>Other</td>
<td>IQ_TAX</td>
</tr>
<tr>
<td>Income</td>
<td>BAL</td>
<td>IQ_INC</td>
</tr>
<tr>
<td>Invest</td>
<td>FC</td>
<td>IQ_INV</td>
</tr>
</tbody>
</table>
</div>
<p>To get the <code>df1</code> dataframe looking like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Ac</th>
<th>Tp</th>
<th>Id</th>
<th>2020</th>
<th>2021</th>
<th>2022</th>
</tr>
</thead>
<tbody>
<tr>
<td>Efecty</td>
<td>FC</td>
<td>IQ_EF</td>
<td>100</td>
<td>200</td>
<td>45</td>
</tr>
<tr>
<td>Asset</td>
<td>FC</td>
<td>IQ_AST</td>
<td>52</td>
<td>48</td>
<td>15</td>
</tr>
<tr>
<td>Debt</td>
<td>P&G</td>
<td>IQ_DEBT</td>
<td>45</td>
<td>58</td>
<td>15</td>
</tr>
<tr>
<td>Tax</td>
<td>Other</td>
<td>IQ_TAX</td>
<td>48</td>
<td>45</td>
<td>78</td>
</tr>
</tbody>
</table>
</div>
<p>I tried with this line of code but it did not work:</p>
<pre><code>df1['Id'] = df1['Id'].mask(df1('nan')).fillna(df1['Ac'].map(df2('Ac')['Id']))
</code></pre>
<p>Can you guys help me?</p>
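<p>A minimal sketch of one possible approach — build a Series mapping <code>Ac</code> to <code>Id</code> from <code>df2</code> and fill only the missing values (assuming the blanks are NaN, or empty strings first coerced to NaN):</p>
<pre><code>import numpy as np

mapping = df2.set_index('Ac')['Id']
df1['Id'] = (df1['Id'].replace('', np.nan)
                      .fillna(df1['Ac'].map(mapping)))
</code></pre>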
|
<python><pandas><dataframe>
|
2023-01-12 22:35:06
| 1
| 337
|
Valeria Arango
|
75,102,924
| 6,676,101
|
How might we write a decorator which captures all standard output printed by some function?
|
<p>Suppose that a function contains a lot of print-statements.</p>
<p>I want to capture all of those print statements in a string, or save them to a text file.</p>
<p>What kind of function decorator might do that for us?</p>
<pre class="lang-python prettyprint-override"><code>log_file = open("log.txt", "w")
@copy_print_statements(log_file)
def printy_the_printer():
print("I print a lot")
# should print to both `sys.stdout` and `log_file`
printy_the_printer()
printy_the_printer()
printy_the_printer()
</code></pre>
<p>The following is one failed attempt. Feel free to ignore, or depart from the code below. The real goal is to write code for a decorator. The decorator replaces an old function with a new function. The old functions print a lot to console and the new functions send the print-statements somewhere else.</p>
<pre class="lang-python prettyprint-override"><code>import io
import sys
import functools
class MergedStream:
"""
"""
def __init__(self, lefty, righty):
"""
`lefty` and `righty` should be file-streams.
Examples of valid streams might be the values returned by
the following function calls:
getattr(sys, 'stdout')
io.StringIO()
open("foo.txt", "w")
"""
self._lefty = lefty
self._righty = righty
def write(self, *args, **kwargs):
"""
"""
self._lefty.write(*args, **kwargs)
self._righty.write(*args, **kwargs)
class CopyPrintStatements:
def __init__(_callable, file):
self._callable = _callable
self._file = _file
def __call__(*args, **kwargs):
old_stdout = sys.stdout
sys.stdout = MergedStream(sys.stdout, self._file)
try:
return self._callable(*args, **kwargs)
finally:
sys.stdout = old_stdout
@classmethod
def copy_print_statements(cls, file_stream):
"""
This class method is intended to decorate callables
An example usage is shown below:
@copy_print_statements(sys.stderr)
def foobar():
print("this message is printed to both `stdout` and `stderr`")
"""
decorator = cls.make_decorator(file_stream)
return decorator
@classmethod
def make_decorator(cls, old_callable, file):
new_callable = cls(old_callable, file)
new_callable = functools.wraps(old_callable)
return new_callable
</code></pre>
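<p>For reference, a minimal working sketch of such a decorator, built on <code>contextlib.redirect_stdout</code> and a small "tee" stream (all names here are illustrative):</p>
<pre class="lang-python prettyprint-override"><code>import sys
import functools
from contextlib import redirect_stdout

class Tee:
    """Forward writes to two streams at once."""
    def __init__(self, first, second):
        self._first, self._second = first, second
    def write(self, data):
        self._first.write(data)
        self._second.write(data)
    def flush(self):
        self._first.flush()
        self._second.flush()

def copy_print_statements(file_stream):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # temporarily route sys.stdout through the tee
            with redirect_stdout(Tee(sys.stdout, file_stream)):
                return func(*args, **kwargs)
        return wrapper
    return decorator
</code></pre>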
|
<python><python-3.x><printing><stream><stdout>
|
2023-01-12 22:26:27
| 2
| 4,700
|
Toothpick Anemone
|
75,102,809
| 2,744,388
|
Get databases from SQL managed instance using azure-mgmt-sql
|
<p>I use the library azure-mgmt-sql to get all the SQL servers and databases with the following code:</p>
<pre><code>resources = sql_client.servers.list_by_resource_group("myresourcegroup")
for r in resources:
    databases = sql_client.databases.list_by_server("myresourcegroup", r.name)
    for d in databases:
        print(d.name)
</code></pre>
<p>I also need to get the SQL managed instances and their databases. I found that using managed_instances instead of servers returns the SQL managed instances, but I didn't find a way to get the databases.</p>
<pre><code>resources = sql_client.managed_instances.list_by_resource_group("myresourcegroup")
for r in resources:
    databases = sql_client.databases.list_by_server("myresourcegroup", r.name)
    for d in databases:  # <- ERROR when accessing the iterator
        print(d.name)
</code></pre>
<p>The error I am getting is the following:</p>
<blockquote>
<p>azure.core.exceptions.ResourceNotFoundError: (ParentResourceNotFound)
Can not perform requested operation on nested resource. Parent
resource 'mymanagedinstancename>' not found.</p>
</blockquote>
<p>How can I get the databases from the SQL managed instance?</p>
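<p>A sketch of one likely option — recent versions of <code>azure-mgmt-sql</code> expose a separate <code>managed_databases</code> operations group for managed instances (treat the exact method name as an assumption to verify against your SDK version):</p>
<pre><code>resources = sql_client.managed_instances.list_by_resource_group("myresourcegroup")
for r in resources:
    # managed instances have their own operations group, not `databases`
    databases = sql_client.managed_databases.list_by_instance("myresourcegroup", r.name)
    for d in databases:
        print(d.name)
</code></pre>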
|
<python><azure><azure-sdk-python>
|
2023-01-12 22:09:41
| 1
| 2,183
|
Miguel Febres
|
75,102,685
| 1,045,755
|
Scattermapbox in Plotly super laggy with fairly few data points
|
<p>I am using Plotly and Mapbox to plot some data points on a map.</p>
<p>In a data frame I have roughly 700 rows, where each row contains a pair of coordinates, i.e. I need to draw a line between each pair. It also has some info about each line so that I can change the color and such for each individual line-pair.</p>
<p>In order to do that I can't just plot everything at once, and I instead need to do 700 plots using <code>add_trace</code>. My data frame looks something like:</p>
<pre><code>df =
coordinates col1 col2 text
--------------------------------------------------------------------------------
[(49.20, 17.51), (49.49, 17.48)] 100 "diamond" ["dog", "cat"]
[(51.31, 4.26), (51.26, 4.29)] 400 "diamond" ["milk", "beer"]
[(47.09, 18.04), (47.31, 18.78)] 200 "mocca" ["cow", "soda"]
...
</code></pre>
<p>My code for the plotting is something like:</p>
<pre><code>fig = go.Figure()

for _, row in df.iterrows():
    fig.add_trace(
        go.Scattermapbox(
            mode="markers+lines+text",
            lat=list(zip(*row["coordinates"]))[0],
            lon=list(zip(*row["coordinates"]))[1],
            text=row["text"],
            textposition="top center",
            textfont=dict(
                size=12,
                color="white",
            ),
            line=dict(
                width=2 if row["col1"] < 300 else 4,
                color="red" if row["col2"] == "diamond" else "blue",
            ),
            marker=dict(
                size=13 if row["col2"] == "diamond" else 7,
                symbol="diamond" if row["col2"] == "diamond" else "circle",
                color="red" if row["col2"] == "diamond" else "blue",
            ),
        )
    )

fig.update_layout(
    margin={"r": 0, "t": 0, "l": 0, "b": 0},
    mapbox={
        "accesstoken": mapbox_token,
        "center": {"lon": 10, "lat": 50},
        "style": "my_style",
        "zoom": 3,
    },
    showlegend=False,
)

fig.show()
</code></pre>
<p>So as stated, I have roughly 700 rows in my data frame. And right now, it almost can't be plotted. It takes a lot of time, and I can see it adding a few lines at a time. So basically really laggy, and unusable.</p>
<p>I know it probably has something to do with me using <code>add_trace</code> that many times. But still, it seems crazy to me.</p>
<p>I don't know if the solution would be to divide my data frame into red parts, diamond parts etc., and then plot each data frame with a full set of coordinates (instead of just pair wise), which would probably reduce the number of <code>add_trace</code> calls from 700 to a few.</p>
<p>But before I do that, I just wanted to make sure I wasn't doing anything stupid here, or that there may be some kind of other solution ?</p>
<p>The idea is to move it to a React webapp, I don't know if that will change anything ?</p>
<p>Thanks in advance.</p>
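<p>For what it's worth, a sketch of that exact idea — one trace per style group, with <code>None</code> coordinates breaking the line between segment pairs, which cuts ~700 <code>add_trace</code> calls down to a handful (styling details omitted):</p>
<pre><code>import plotly.graph_objects as go

fig = go.Figure()
for col2_value, group in df.groupby("col2"):
    lats, lons = [], []
    for coords in group["coordinates"]:
        # a None entry breaks the line between pairs within one trace
        lats.extend([c[0] for c in coords] + [None])
        lons.extend([c[1] for c in coords] + [None])
    fig.add_trace(go.Scattermapbox(mode="markers+lines", lat=lats, lon=lons))
</code></pre>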
|
<python><plotly><mapbox>
|
2023-01-12 21:52:58
| 1
| 2,615
|
Denver Dang
|
75,102,575
| 3,579,144
|
Python typevar signature for function that extracts objects of a specific type from a nested dictionary
|
<p>I have some data organized into the following structure:</p>
<pre><code>T = TypeVar("T")
my_data: Dict[type[T], Dict[str, List[T]]] = dict()
</code></pre>
<p>Thus, given a type <code>T</code>, this dictionary will return another dictionary that is keyed by a string identifier, mapping to a list of objects of the type <code>T</code>.</p>
<p>To appease my python typechecker, I want to write a function that extracts data from the dictionary when given a type <code>T</code> and the string identifier. Something like this:</p>
<pre><code>def _extract_by_type_and_id(self, some_data:Dict[Type[T], Dict[str, List[T]]], some_type:Type[T], batch_id:str) -> tuple[T, ...]:
    return tuple(some_data[some_type][batch_id])
</code></pre>
<p>Then I call my function like this:</p>
<pre><code>_extract_by_type_and_id(my_data, MyClass, my_id)
</code></pre>
<p>The problem is, when I try to use this function, and assign the output to something that expect a <code>Tuple[MyClass, ...]</code>, the type checker is still upset, saying that the function returns one of two types: the schema specified, or the generic type:</p>
<pre><code>Argument of type "tuple[MyClass | T@_extract_by_type_and_id, ...]" cannot be assigned to parameter "my_class_objects" of type "Tuple[MySchema, ...]"
</code></pre>
<p>The key here being that for some reason, a union of <code>MySchema</code> and <code>T@_extract_by_type_and_id</code> are being returned by that function. Is there some way to specify that the function can accept these generic types <code>T</code>, but whichever type <code>R</code> is passed into the <code>some_type</code> parameter is the one (and only one) that the function should return?</p>
<p>Thank you in advance</p>
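<p>One sketch of a workaround — a plain <code>Dict</code> cannot express that each value's element type depends on its key, so a common pattern is to keep the container loosely typed and narrow at the lookup boundary with <code>typing.cast</code>:</p>
<pre><code>from typing import Dict, List, Type, TypeVar, cast

T = TypeVar("T")

# the container itself is heterogeneous, so keep it loosely typed ...
my_data: Dict[type, Dict[str, list]] = {}

def _extract_by_type_and_id(
    some_data: Dict[type, Dict[str, list]],
    some_type: Type[T],
    batch_id: str,
) -> tuple[T, ...]:
    # ... and narrow here, where the key/value link is known by construction
    return tuple(cast(List[T], some_data[some_type][batch_id]))
</code></pre>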
|
<python><types><pyright>
|
2023-01-12 21:39:46
| 0
| 1,541
|
Vranvs
|
75,102,428
| 1,930,758
|
python package office365 installation crashes os on pqy5 dependency installation
|
<p>I noticed this behaviour twice: this package's installation is literally leaking memory and crashing the OS.</p>
<p>Any ideas how to solve this?</p>
<pre><code>
pip install office365
</code></pre>
<p><a href="https://i.sstatic.net/pEp9i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pEp9i.png" alt="Mac OS memory raising like crazy" /></a></p>
|
<python><installation><pip><package><office365>
|
2023-01-12 21:21:03
| 1
| 1,598
|
zhrist
|
75,102,298
| 6,932,839
|
Tkinter - Change Label Text/Entry on Button Click
|
<p>I have two labels and entry fields (<strong>A & B</strong>). When I enter the username/password for "<em>A Username/A Password</em>", I want to click the "Submit" button, then have the labels/entry fields change to "<em>B Username/B Password</em>" and be able to click the "Submit" button again, using <code>Tkinter</code>.</p>
<p><strong>Python Code</strong></p>
<pre><code>import tkinter as tk
root = tk.Tk()
a_user_var = tk.StringVar()
a_pass_var = tk.StringVar()
b_user_var = tk.StringVar()
b_pass_var = tk.StringVar()
def submit():
    a_user = a_user_var.get()
    a_pass = a_pass_var.get()
    a_user_var.set("")
    a_pass_var.set("")
    b_user = b_user_var.get()
    b_pass = b_pass_var.get()
    b_user_var.set("")
    b_pass_var.set("")
a_user_label = tk.Label(root, text="A Username")
a_user_entry = tk.Entry(root, textvariable=a_user_var)
a_pass_label = tk.Label(root, text="A Password")
a_pass_entry = tk.Entry(root, textvariable=a_pass_var, show="•")
b_user_label = tk.Label(root, text="B Username")
b_user_entry = tk.Entry(root, textvariable=b_user_var)
b_pass_label = tk.Label(root, text="B Password")
b_pass_entry = tk.Entry(root, textvariable=b_pass_var, show="•")
sub_btn = tk.Button(root, text="Submit", command=submit)
a_user_label.grid(row=0, column=0)
a_user_entry.grid(row=0, column=1)
a_pass_label.grid(row=1, column=0)
a_pass_entry.grid(row=1, column=1)
b_user_label.grid(row=0, column=0)
b_user_entry.grid(row=0, column=1)
b_pass_label.grid(row=1, column=0)
b_pass_entry.grid(row=1, column=1)
sub_btn.grid(row=2, column=0)
root.mainloop()
</code></pre>
<p><strong>Current Result</strong></p>
<p><a href="https://i.sstatic.net/zTUCS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zTUCS.png" alt="enter image description here" /></a></p>
<p><strong>Desired Result (after clicking Submit button)</strong></p>
<p><a href="https://i.sstatic.net/w4fE1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w4fE1.png" alt="enter image description here" /></a></p>
|
<python><tkinter><button><label><tkinter-entry>
|
2023-01-12 21:06:59
| 1
| 1,141
|
arnpry
|
75,102,295
| 3,987,085
|
Finding the singular or plural form of a word with regex
|
<p>Let's assume I have the sentence:</p>
<pre class="lang-py prettyprint-override"><code>sentence = "A cow runs on the grass"
</code></pre>
<p>If I want to replace the word <code>cow</code> with "some" special token, I can do:</p>
<pre class="lang-py prettyprint-override"><code>to_replace = "cow"
# A <SPECIAL> runs on the grass
sentence = re.sub(rf"(?!\B\w)({re.escape(to_replace)})(?<!\w\B)", "<SPECIAL>", sentence, count=1)
</code></pre>
<p>Additionally, if I want to replace it's plural form, I could do:</p>
<pre class="lang-py prettyprint-override"><code>sentence = "The cows run on the grass"
to_replace = "cow"
# The <SPECIAL> run on the grass
sentence = re.sub(rf"(?!\B\w)({re.escape(to_replace) + 's?'})(?<!\w\B)", "<SPECIAL>", sentence, count=1)
</code></pre>
<p>which does the replacement even if the word to replace remains in its singular form <code>cow</code>, while the <code>s?</code> does the job to perform the replacement.</p>
<p>My question is what happens if I want to apply the same in a more general way, i.e., find-and-replace words which can be singular, plural - ending with <code>s</code>, and also plural - ending with <code>es</code> <strong>(note that I'm intentionally ignoring many edge cases that could appear - discussed in the comments of the question)</strong>. Another way to frame the question would be how can add multiple optional ending suffixes to a word, so that it works for the following examples:</p>
<pre class="lang-py prettyprint-override"><code>to_replace = "cow"
sentence1 = "The cow runs on the grass"
sentence2 = "The cows run on the grass"
# --------------
to_replace = "gas"
sentence3 = "There are many natural gases"
</code></pre>
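<p>A sketch of one way to make both suffixes optional, deliberately ignoring the irregular-plural edge cases mentioned above:</p>
<pre class="lang-py prettyprint-override"><code>import re

def replace_word(sentence, to_replace):
    # optional "es" or "s" suffix; \b word boundaries keep e.g. "cowshed" safe
    pattern = rf"\b{re.escape(to_replace)}(?:es|s)?\b"
    return re.sub(pattern, "<SPECIAL>", sentence, count=1)

print(replace_word("The cow runs on the grass", "cow"))
print(replace_word("The cows run on the grass", "cow"))
print(replace_word("There are many natural gases", "gas"))
</code></pre>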
|
<python><regex>
|
2023-01-12 21:06:43
| 2
| 5,645
|
gorjan
|
75,102,245
| 15,537,675
|
Read txt file with pandas into dataframe
|
<p>I want to read the txt file from <a href="https://github.com/odota/core/wiki/MMR-Data" rel="nofollow noreferrer">here</a> with Dota 2 MMRs for different players. It has the form below:</p>
<pre><code> 1) "103757918"
2) "1"
3) "107361667"
4) "1"
5) "108464725"
6) "1"
7) "110818765"
8) "1"
9) "111436016"
10) "1"
11) "113518306"
12) "1"
13) "118896321"
14) "1"
15) "119780733"
16) "1"
17) "120360801"
18) "1"
19) "120870684"
20) "1"
21) "122616345"
22) "1"
23) "124393917"
24) "1"
25) "124487030"
</code></pre>
<p>With the account_id (e.g. 103757918) followed by the MMR of the player (e.g. 1). How can I read this into a pandas dataframe with two columns = account_id, mmr?</p>
<p>I don't need the index numbers.</p>
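<p>A sketch of one simple approach — pull the quoted value out of each line and split the stream into alternating ids and mmrs (assuming the file is saved locally, e.g. as <code>mmr.txt</code>):</p>
<pre><code>import pandas as pd

with open("mmr.txt") as f:
    # each line looks like: ' 1) "103757918"' -> keep the quoted part
    values = [line.split('"')[1] for line in f if '"' in line]

df = pd.DataFrame({
    "account_id": values[0::2],   # every other entry, starting at the first
    "mmr": pd.to_numeric(values[1::2]),
})
</code></pre>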
|
<python><pandas><dataframe><text-files>
|
2023-01-12 21:00:21
| 3
| 472
|
OLGJ
|
75,102,146
| 1,611,396
|
Flet page.update() does not update my table
|
<p>In my code I am trying to update a table called bag_table (in the row of the container of the right column). When running my code it does show the table initially, but when I hit the submit button only the backend does its work: it is actually updating the bag_table variable, but the table does not update in the GUI itself.</p>
<p>Below is my full code.</p>
<pre><code>pd.options.display.max_columns = 100
from services.bag import PCHN
from utils.convertors import dataframe_to_datatable
import flet as ft
def main(page: ft.page):
    def bag_service(e):
        pc = '9722LA' if postal_code_field.value == '' else postal_code_field.value
        hn = '29' if house_number_field.value == '' else house_number_field.value
        address = PCHN(pc,
                       hn).result
        bag_table = dataframe_to_datatable(address)
        page.add(bag_table)  # I added this for debugging; it is actually adding the table at the bottom of my page, so it is updating the actual bag_table
        page.update()  # This is not updating my bag_table in place though. It stays static as it is.

    # define form fields
    postal_code_field = ft.TextField(label='Postal code')
    house_number_field = ft.TextField(label='House number')
    submit_button = ft.ElevatedButton(text='Submit', on_click=bag_service)

    # fields for the right column
    address = PCHN('9722GN', '5').result
    bag_table = dataframe_to_datatable(address)

    # design layout
    # 1 column to the left as a frame and one to the right with two rows
    horizontal_divider = ft.Row
    left_column = ft.Column
    right_column = ft.Column

    # fill the design
    page.add(
        horizontal_divider(
            [left_column(
                [postal_code_field,
                 house_number_field,
                 submit_button
                 ]
            ),
             right_column(
                [
                    ft.Container(
                        ft.Row([bag_table],
                               scroll='always'),
                        bgcolor=ft.colors.BLACK,
                        width=800
                    )
                ]
            )
             ]
        )
    )

if __name__ == '__main__':
    ft.app(target=main,
           view=ft.WEB_BROWSER,
           port=666
           )
</code></pre>
<p>I have been troubleshooting this like crazy (hence all the print statements), but it's a classical case of looking at a thing for hours, and it's probably a stupid mistake. Any help would be much appreciated.</p>
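<p>For reference, a sketch of the pattern that usually works in Flet — keep a reference to a container that is already on the page, mutate its <code>controls</code>, and call <code>update()</code> on it, rather than rebinding the local <code>bag_table</code> name (which leaves the rendered control untouched):</p>
<pre><code># built once, before page.add(...), and placed inside the right column
table_row = ft.Row([bag_table], scroll='always')

def bag_service(e):
    pc = postal_code_field.value or '9722LA'
    hn = house_number_field.value or '29'
    address = PCHN(pc, hn).result
    table_row.controls[0] = dataframe_to_datatable(address)  # swap in place
    table_row.update()
</code></pre>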
|
<python><user-interface><flet>
|
2023-01-12 20:47:40
| 1
| 362
|
mtjiran
|
75,102,116
| 7,385,923
|
trim a string of text, after a hyphen media in python
|
<p>I just started with Python, and now I find myself needing the following. I have the following string:</p>
<pre><code>1184-7380501-2023-183229
</code></pre>
<p>What I need is to trim this string so that it keeps only a few characters after the first hyphen. It should be as follows:</p>
<pre><code>1184-738
</code></pre>
<p>How can I do this?</p>
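<p>A sketch, assuming the rule is "keep everything up to the first hyphen, plus the next three characters":</p>
<pre><code>s = "1184-7380501-2023-183229"
result = s[:s.index("-") + 4]   # first hyphen position, plus 3 more characters
print(result)                   # 1184-738
</code></pre>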
|
<python><string>
|
2023-01-12 20:44:07
| 2
| 1,161
|
FeRcHo
|
75,102,100
| 7,984,318
|
pandas groupby and count same column value
|
<p>I have a DataFrame, you can have it by running:</p>
<pre><code>import pandas as pd
from io import StringIO
df = """
case_id scheduled_date status_code
1213 2021-08 success
3444 2021-06 fail
4566 2021-07 unknown
12213 2021-08 unknown
34344 2021-06 fail
44566 2021-07 unknown
1213 2021-08 fail
"""
df= pd.read_csv(StringIO(df.strip()), sep='\s\s+', engine='python')
</code></pre>
<p>This outputs:</p>
<pre><code> case_id scheduled_date status_code
0 1213 2021-08 success
1 3444 2021-06 fail
2 4566 2021-07 unknown
3 12213 2021-08 unknown
4 34344 2021-06 fail
5 44566 2021-07 unknown
6 1213 2021-08 fail
</code></pre>
<p>How can I count the number of success, fail, and unknown of each month?</p>
<p>Output should look like:</p>
<pre><code>scheduled_date num of success num of fail num of unknown
2021-08 1 1 1
2021-06 0 2 0
2021-07 0 0 2
</code></pre>
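<p>A sketch using <code>pd.crosstab</code>, which counts one column's values per group of another:</p>
<pre><code>out = (pd.crosstab(df["scheduled_date"], df["status_code"])
         .reindex(columns=["success", "fail", "unknown"], fill_value=0)
         .add_prefix("num of ")
         .reset_index())
print(out)
</code></pre>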
|
<python><pandas><dataframe>
|
2023-01-12 20:42:30
| 2
| 4,094
|
William
|
75,102,042
| 517,752
|
Problem in generating logger file to a specific path in a greengrass
|
<p>I am trying to generate a log file in a specific folder and path in Greengrass v2; however, the log file is created in the current directory.</p>
<p>The current directory at which the logger file is generated is</p>
<pre><code>/sim/things/t1_gateway_iotgateway_1234/greengrass/packages/artifacts-unarchived/com.data.iot.RulesEngineCore/2.3.1-pp.38/package
</code></pre>
<p>Could you please help me figure out what I am missing?</p>
<p>The following is my program.</p>
<pre><code>import logging
from datetime import datetime
import os, sys
from logging.handlers import RotatingFileHandler
def getStandardStdOutHandler():
    formatter = logging.Formatter(
        fmt="[%(asctime)s][%(levelname)-7s][%(name)s] %(message)s (%(threadName)s[%(thread)d]:%(module)s:%(funcName)s:%(lineno)d)"
    )
    filename = datetime.now().strftime("rule_engine_%Y_%m_%d_%H_%M.log")
    path = "/sim/things/t1_gateway_iotgateway_1234/greengrass/logs/"
    _handler = RotatingFileHandler(path + filename, maxBytes=1000000, backupCount=5)
    _handler.setLevel(logging.DEBUG)
    _handler.setFormatter(formatter)
    return _handler


def getLogger(name: str):
    logger = logging.getLogger(name)
    logger.addHandler(getStandardStdOutHandler())
    return logger
</code></pre>
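<p>A sketch of a couple of defensive checks worth adding to the handler setup (reusing the <code>filename</code> variable above) — make sure the target directory exists, build the path with <code>os.path.join</code>, and print which file the handler actually resolved:</p>
<pre><code>import os
from logging.handlers import RotatingFileHandler

path = "/sim/things/t1_gateway_iotgateway_1234/greengrass/logs"
os.makedirs(path, exist_ok=True)          # a missing directory is a common culprit
log_file = os.path.join(path, filename)
_handler = RotatingFileHandler(log_file, maxBytes=1000000, backupCount=5)
print(_handler.baseFilename)              # absolute path the handler opened
</code></pre>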
|
<python><logging>
|
2023-01-12 20:36:28
| 2
| 1,342
|
Pankesh Patel
|
75,101,954
| 2,817,520
|
Can pip handle concurrency?
|
<p>I want to allow multiple users to manipulate the same virtual environment. What happens if multiple processes try to install/update/delete the same package using <code>pip</code>? Should I use <a href="https://en.wikipedia.org/wiki/File_locking#Lock_files" rel="nofollow noreferrer">file locking</a>?</p>
<p>Here is the situation. There is a web app which can have multiple admins. Admin A and B login and see an update is available. They both click on the update button. A request is sent to the server in order to update the app's package. Now what happens?</p>
|
<python><pip><concurrency>
|
2023-01-12 20:26:13
| 1
| 860
|
Dante
|
75,101,932
| 20,536,016
|
PagerDuty Export All Historical Incidents
|
<p>Does anyone have a way to export all historical incidents from PagerDuty? I can't seem to make it work by using any of the options in here:</p>
<p><a href="https://developer.pagerduty.com/api-reference/9d0b4b12e36f9-list-incidents" rel="nofollow noreferrer">https://developer.pagerduty.com/api-reference/9d0b4b12e36f9-list-incidents</a></p>
<p>So I've been trying to do it in python using <a href="https://pagerduty.github.io/pdpyras/" rel="nofollow noreferrer">https://pagerduty.github.io/pdpyras/</a></p>
<p>My simple script looks like this:</p>
<pre><code>import os
from pdpyras import APISession
api_key = os.environ['PD_API_KEY']
session = APISession(api_key, default_from="fake.email.com")
for incident in session.iter_all('incidents'):
    print(incident)
</code></pre>
<p>This only exports about the last month's worth of incidents. I can't seem to find a parameter to pass into this which will allow me to export ALL incidents.</p>
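<p>One sketch worth trying — the List Incidents endpoint only returns a limited default date window, so page through explicit <code>since</code>/<code>until</code> ranges (the six-month window size is an assumption based on the API's documented cap on the range):</p>
<pre><code>from datetime import datetime, timedelta

start = datetime(2015, 1, 1)              # earliest date you care about
while start < datetime.utcnow():
    end = start + timedelta(days=180)     # stay under the window cap
    params = {'since': start.isoformat(), 'until': end.isoformat()}
    for incident in session.iter_all('incidents', params=params):
        print(incident['id'], incident['created_at'])
    start = end
</code></pre>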
|
<python><pagerduty>
|
2023-01-12 20:24:25
| 1
| 389
|
Gary Turner
|
75,101,850
| 5,924,264
|
How to compute a column that is shifted from an existing column in a dataframe and truncate the first and last rows of each group?
|
<p>I have a dataframe as follows:</p>
<pre><code>df =
integer_id begin
0 13
0 15
0 18
0 19
1 10
1 15
1 17
</code></pre>
<p>I want to compute a 3rd column <code>end</code> where <code>df.end</code> is defined by the next <code>df.begin</code> for the given <code>integer_id</code>, so e.g.,</p>
<p>the above would become</p>
<pre><code>df =
integer_id begin end
0 13 15
0 15 18
0 18 19
0 19
1 10 15
1 15 17
1 17
</code></pre>
<p>Furthermore, for the last row of each <code>integer_id</code>, I want <code>end</code> to go to <code>20</code>, and for the first row of each <code>integer_id</code>, I want <code>begin</code> to be truncated to <code>10</code>, so ultimately we would have</p>
<pre><code>df =
integer_id begin end
0 10 15
0 15 18
0 18 19
0 19 20
1 10 15
1 15 17
1 17 20
</code></pre>
<p>I am not very good with pandas, but I think I will have to use the <code>apply</code> and <code>groupby('integer_id')</code> here, or is there another approach I can apply here?</p>
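<p>A sketch without <code>apply</code>, using a per-group shift and then filling the group boundaries (the boundary values 10 and 20 are taken from the desired output above):</p>
<pre><code>df["end"] = df.groupby("integer_id")["begin"].shift(-1)
df["end"] = df["end"].fillna(20).astype(int)            # last row of each group
df.loc[~df["integer_id"].duplicated(), "begin"] = 10    # first row of each group
</code></pre>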
|
<python><pandas><dataframe>
|
2023-01-12 20:16:55
| 1
| 2,502
|
roulette01
|
75,101,804
| 5,203,628
|
Import "jsonschema" could not be resolved from sourcePylance
|
<p>I am attempting to use the jsonschema package to validate uploaded JSON payloads from the user.</p>
<p>I have run <code>pip install jsonschema</code> inside my venv and received a message confirming it's installation.</p>
<p>Running <code>pip freeze</code> confirms this.<br />
<a href="https://i.sstatic.net/wuP12.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wuP12.png" alt="enter image description here" /></a></p>
<p>When I attempt to import the package VS Code underlines the package with a yellow squiggly and says "Import "jsonschema" could not be resolved from sourcePylance"</p>
<p><a href="https://i.sstatic.net/iErE0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iErE0.png" alt="enter image description here" /></a></p>
<p>I am able to access the validate function via autocomplete menus, however when I attempt to run via the VS Code run menu it fails with the following message</p>
<blockquote>
<p>"ModuleNotFoundError: No module named 'jsonschema'"</p>
</blockquote>
<p>If I run it manually from my terminal it executes the code as expected.</p>
<p>My launch.json is provided</p>
<pre><code>{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "app.py",
            "console": "integratedTerminal",
            "justMyCode": true
        }
    ]
}
</code></pre>
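<p>The symptoms (works in the terminal, fails from the Run menu) usually mean the Run configuration picked up a different interpreter than the venv where jsonschema is installed. A sketch of pinning it in <code>launch.json</code> — the <code>"python"</code> attribute and the venv path below are assumptions to adapt to your setup:</p>
<pre><code>{
    "name": "Python: Current File",
    "type": "python",
    "request": "launch",
    "program": "app.py",
    "console": "integratedTerminal",
    "justMyCode": true,
    "python": "${workspaceFolder}/venv/bin/python"
}
</code></pre>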
|
<python><visual-studio-code><jsonschema>
|
2023-01-12 20:12:04
| 1
| 1,146
|
Rob S.
|
75,101,629
| 1,289,801
|
How to encode Hex representation of bytes as Python byte values?
|
<p>The values below are hex representations of bytes, i.e. <code>41</code> at the top left is 'A'. How can I convert all values to Python bytes, i.e. <code>41</code> would become <code>b'\x41'</code>?</p>
<pre><code>41 61 30 41 61 31 41 61
32 41 61 33 41 61 34 41
61 35 41 61 36 41 61 37
41 61 38 41 61 39 41 62
30 41 62 31 41 62 32 41
62 33 41 62 34 41 62 35
41 62 36 41 62 37 41 62
38 41 62 39 41 63 30 41
</code></pre>
<p>As a next step, I would take the string of Python-encoded bytes, <code>s = b'\x41\x61\x30...'</code>, and write a binary file from it:</p>
<pre><code>f = open("binary", "wb")
f.write(s)
f.close()
</code></pre>
<p>It would also solve my problem, if the values could be somehow converted to bytes directly without the intermediate Python encoding.</p>
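<p>A direct sketch of that second route — <code>bytes.fromhex</code> ignores ASCII whitespace, so the dump can be converted without building escape sequences at all:</p>
<pre><code>hex_text = """41 61 30 41 61 31 41 61
32 41 61 33 41 61 34 41"""          # ... rest of the dump

s = bytes.fromhex(hex_text)         # b'Aa0Aa1Aa2Aa3Aa4A'
with open("binary", "wb") as f:
    f.write(s)
</code></pre>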
|
<python><byte>
|
2023-01-12 19:53:49
| 1
| 3,441
|
TMOTTM
|
75,101,591
| 2,474,025
|
Plotly dash local css only loaded by hot reload
|
<p>My local css file is not loaded at the start of the application.
But if I modify the css file and have hot reload active it loads.</p>
<p>In the example below I have css in the file assets/my.css which colors the dropdown dark after I start the server and then add a whitespace to the css file.</p>
<p>How can I make sure the app immediately uses the local stylesheet from the start?</p>
<p>CSS:</p>
<pre><code>.dash-bootstrap .Select-control {
height: calc(1.5em + 0.75rem + 2px);
font-size: 0.9375rem;
font-weight: 400;
line-height: 1.5;
color: #fff;
background-color: #222;
background-clip: padding-box;
border: 1px solid #444;
border-radius: 0.25rem;
transition: border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out;
}
</code></pre>
<p>--</p>
<pre><code># stylesheet in assets only works after hot reload.
import plotly.express as px
from dash import html, dcc, Dash
# Load Data
df = px.data.tips()
# Build App
app = Dash(__name__)
app.layout = html.Div([
    html.H1("JupyterDash Demo"),
    dcc.Graph(id='graph', figure=px.scatter(
        df, x="total_bill", y="tip", color="tip",
        render_mode="webgl", title="Tips"
    )),
    html.Label([
        "colorscale",
        dcc.Dropdown(
            className="dash-bootstrap",
            id='colorscale-dropdown', clearable=False,
            value='plasma', options=[
                {'label': c, 'value': c}
                for c in px.colors.named_colorscales()
            ])
    ]),
])
# Run app and display result inline in the notebook
app.run_server(dev_tools_hot_reload=True, port=8068)
</code></pre>
|
<python><plotly-dash>
|
2023-01-12 19:49:49
| 1
| 1,033
|
phobic
|
75,101,486
| 11,701,675
|
How to swap out single elements between two lists when they are greater then each other
|
<p>I have two lists where I want to compare the first element of both lists and, when the element of the second list is smaller, swap them.</p>
<p>This is my code for that:</p>
<pre><code>if yListe[0] < xListe[0]:
    xListe[0], yListe[0] = yListe[0], xListe[0]
if yListe[1] < xListe[1]:
    xListe, yListe = yListe, xListe
</code></pre>
<p>The elements inside these lists are coordinates on the x and y axes</p>
<p>But I am getting the following error:
<code>'tuple' object does not support item assignment</code></p>
<p>I tried to also do this:</p>
<pre><code>for x1, y2 in zip(xListe, yListe):
    if y2 < x1:
        xListe[x1], yListe[y2] = y2, x1
</code></pre>
<p>But the same error. How can I fix this?</p>
<p>SAMPLE DATA:
These are values on the chosen x and y axes</p>
<pre><code>List1 = [123,120]
List2 = [80,150]
</code></pre>
<p>UPDATE CODE:</p>
<pre><code>for i, (x1, y2) in enumerate(zip(xListe, yListe)):
    if y2 < x1:
        xListe[i], yListe[i] = y2, x1
</code></pre>
<p>UPDATED Sample data LATEST:</p>
<pre><code>List1 = [123,120]
List2 = [80,150]
</code></pre>
<p>In this case, we are checking if 80 is smaller than 123, and if yes we swap these two values, 80 and 123. Then we are checking 150 and 120, but because 150 is greater we are not doing anything. I know the code to swap the complete list, but what I want to do is swap out single elements.</p>
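<p>A sketch of the likely fix — the error means the sequences are actually tuples (immutable), so copy them into lists before swapping element-wise:</p>
<pre><code>xListe = list(xListe)   # tuples do not support item assignment
yListe = list(yListe)

for i, (x, y) in enumerate(zip(xListe, yListe)):
    if y < x:
        xListe[i], yListe[i] = y, x
</code></pre>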
|
<python>
|
2023-01-12 19:38:27
| 2
| 647
|
natyus
|
75,101,474
| 12,596,824
|
Assign operator method chaining str.join()
|
<p>I have the following method-chaining code and want to create a new column, but I'm getting an error when doing the following.</p>
<pre><code>(
    pd.pivot(test, index = ['file_path'], columns = 'year', values = 'file')
    .fillna(0)
    .astype(int)
    .reset_index()
    .assign(hierarchy = file_path.str[1:-1].str.join(' > '))
)
</code></pre>
<p>Before the assign method the dataframe looks something like this:</p>
<pre><code>file_path 2017 2018 2019 2020
S:\Test\A 0 0 1 2
S:\Test\A\B 1 0 1 3
S:\Test\A\C 3 1 1 0
S:\Test\B\A 1 0 0 1
S:\Test\B\B 1 0 0 1
</code></pre>
<p>The error is: name 'file_path' is not defined.</p>
<p>file_path exists in the data frame but I'm not calling it correctly. What is the proper way to create a new column based on another using assign?</p>
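<p>The usual pattern inside a chain is a lambda, which receives the intermediate dataframe — a sketch keeping the slicing/join expression exactly as written above:</p>
<pre><code>(
    pd.pivot(test, index=['file_path'], columns='year', values='file')
      .fillna(0)
      .astype(int)
      .reset_index()
      .assign(hierarchy=lambda d: d['file_path'].str[1:-1].str.join(' > '))
)
</code></pre>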
|
<python><pandas><string><method-chaining>
|
2023-01-12 19:37:21
| 1
| 1,937
|
Eisen
|
75,101,276
| 15,176,150
|
How do you set the opacity of a Plotly Chart's bars individually?
|
<p>I'm working on bar chart using <a href="https://plotly.com/python/discrete-color/" rel="nofollow noreferrer">plotly express</a>. I'd like to use discrete colours and set the opacity of each bar individually. At the moment I'm using the following code:</p>
<pre><code>import plotly.express as px
df1 = pd.DataFrame(dict(score=[93.3, 93.3, 92, 88], model=['model1', 'model2', 'model3', 'model4']))
fig = px.bar(df1, x='score', y='model', color='model',
color_discrete_map={
"model1": "gray",
"model2": "rgb(255, 10, 10)",
"model3": "rgb(255, 10, 10)",
"model4": "rgb(255, 10, 10)"},
opacity=[1, 1, 0.1, 0.1])
fig.update_layout(xaxis_range=[80,100], xaxis_title='Score', yaxis_title='')
fig.show()
</code></pre>
<p>To generate the plot below:</p>
<p><a href="https://i.sstatic.net/6H6BF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6H6BF.png" alt="Visualisation of a Plotly plot without colour." /></a></p>
<p>I've tried adding separate opacities using a list of values, but that doesn't seem to work. Instead, plotly takes the first value in the list and applies it to all the bar charts.</p>
<p>How can I add opacity to each bar individually?</p>
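<p>One sketch — since <code>px.bar</code> applies a single <code>opacity</code> to the whole figure, encode the per-bar opacity in the colours themselves as the rgba alpha component:</p>
<pre><code>fig = px.bar(df1, x='score', y='model', color='model',
             color_discrete_map={
                 "model1": "rgba(128, 128, 128, 1)",
                 "model2": "rgba(255, 10, 10, 1)",
                 "model3": "rgba(255, 10, 10, 0.1)",
                 "model4": "rgba(255, 10, 10, 0.1)"})
</code></pre>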
|
<python><graph><plotly><bar-chart><opacity>
|
2023-01-12 19:16:00
| 2
| 1,146
|
Connor
|
75,101,254
| 304,684
|
Use regex to remove a substring that matches a beginning of a substring through the following comma
|
<p>I haven't found any helpful Regex tools to help me figure this complicated pattern out.</p>
<p>I have the following string:</p>
<pre><code>Myfirstname Mylastname, Department of Mydepartment, Mytitle, The University of Me; 4-1-1, Hong,Bunk, Tokyo 113-8655, Japan E-mail:my.email@example.jp, Tel:00-00-222-1171, Fax:00-00-225-3386
</code></pre>
<p>I am trying to learn enough Regex patterns to remove the substrings one at a time:</p>
<p><code>E-mail:my.email@example.jp</code></p>
<p><code>Tel:00-00-222-1171</code></p>
<p><code>Fax:00-00-225-3386</code></p>
<p>So I think the correct pattern would be to remove a given word (i.e., "E-mail", "Tel") all the way through the following comma.</p>
<p><strong>Is this type of dynamic pattern possible in Regex?</strong></p>
<p>I am performing the match in <strong>Python</strong>, however, I don't think that would matter too much.</p>
<p>Also, I know the data string <em>looks</em> comma separated, and it is. However, there is no guarantee of preserving the order of those fields. That's why I'm trying to use a Regex match.</p>
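<p>A sketch, assuming none of these fields contain embedded commas — match the label, then everything up to (and including) the next comma:</p>
<pre><code>import re

s = ("Myfirstname Mylastname, Department of Mydepartment, Mytitle, "
     "The University of Me; 4-1-1, Hong,Bunk, Tokyo 113-8655, Japan "
     "E-mail:my.email@example.jp, Tel:00-00-222-1171, Fax:00-00-225-3386")
cleaned = re.sub(r"(?:E-mail|Tel|Fax):[^,]*,?\s*", "", s)
print(cleaned)
</code></pre>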
|
<python><regex>
|
2023-01-12 19:13:38
| 1
| 12,097
|
Brett
|
75,101,224
| 13,600,944
|
Inherit multiple Django choices classes in one choices class
|
<p>Let's say we have two <code>choices</code> classes in Django like:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
class C1(models.TextChoices):
attr1 = "ATTR1", "Attr 1"
attr2 = "ATTR2", "Attr 2"
class C2(models.TextChoices):
attr3 = "ATTR3", "Attr 3"
attr4 = "ATTR4", "Attr 4"
</code></pre>
<p>And we need to inherit these two classes in one choices class like this:</p>
<pre class="lang-py prettyprint-override"><code>class C(C1, C2):
pass
</code></pre>
<p>It throws an error like this: <code>TypeError: C: cannot extend enumeration 'C1'</code></p>
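<p>Python enums with members cannot be subclassed, which is what this error is saying. A sketch of one workaround — rebuild a combined class from both member sets via the functional enum API (untested, so treat the exact call shape as an assumption; writing the merged members out explicitly in a new class always works):</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models

C = models.TextChoices(
    "C",
    {m.name: (m.value, m.label) for m in [*C1, *C2]},
)
</code></pre>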
|
<python><django><django-models><django-rest-framework><enums>
|
2023-01-12 19:10:41
| 1
| 625
|
Hammad
|
75,101,128
| 12,871,587
|
Polars Reading CSV Files Causing Errors
|
<p>Often when reading messy csv files I end up seeing different kinds of errors due to inconsistency of the data types in a column, for instance:</p>
<pre><code>ComputeError: Could not parse `22.4` as dtype Int64 at column 59.
The current offset in the file is 433793 bytes.
</code></pre>
<p>When the file/data is not yet familiar, I likely do not know what is the name of the column at 59th position. I'm asking for advice for more efficient process than what I'm currently doing to overcome these kind of issues:</p>
<p>1 - First I read the file with the reader option set to 'infer_schema_length=0' (which reads the data in pl.Utf8 string format). Another option is to use 'ignore_errors = True', but to my understanding it converts the error values to nulls, which is often what I don't want.</p>
<p>2 - As I don't know yet which is the 59th column, I do a for loop to figure it out</p>
<pre><code>for i in enumerate(df.columns):
    print(i)
</code></pre>
<p>3 - Once I figured the column name raising the error, then I'll filter the dataframe to find that specific value to identify on which row(s) it appears on:</p>
<pre><code>(pl
    .read_csv(file="file_name.csv", infer_schema_length=0)
    .with_row_count()
    .select(
        [
            pl.col("row_nr"),
            pl.col("Error Column Name")
        ])
    .filter(pl.col("Error Column Name") == "22.4")
)
</code></pre>
<p>Output:</p>
<pre><code>shape: (1, 2)
┌────────┬───────────────────┐
│ row_nr ┆ Error Column Name │
│ --- ┆ --- │
│ u32 ┆ str │
╞════════╪═══════════════════╡
│ 842 ┆ 22.4 │
└────────┴───────────────────┘
</code></pre>
<p>4 - Then, depending on the file and case, I would adjust the value to what it should be ("224", "22" or "23"), either in the source of the file or by modifying the DF and converting all other column datatypes to the desired ones.</p>
<p><strong>Questions:</strong></p>
<ol>
<li>Is there an easier way to access the nth column in Polars than what I do in step 2? (see the sketch after this list)</li>
<li>Is there a more optimal way of overcoming the values causing the errors?</li>
<li>If I read the file and columns as pl.Utf8 and adjust the value causing the error, is there a convenient way to automatically detect the best datatypes for the df's columns after the data has been read rather than manually going column by column?</li>
</ol>
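<p>For question 1, a sketch — the column name at a given position is just an index into <code>df.columns</code>, and the column itself can be selected positionally:</p>
<pre><code>print(df.columns[59])   # name of the column at position 59
print(df[:, 59])        # the column itself, selected by position
</code></pre>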
|
<python><dataframe><csv><data-cleaning><python-polars>
|
2023-01-12 18:59:47
| 1
| 713
|
miroslaavi
|
75,101,127
| 7,800,760
|
Pytests doesn't find a test if explicitely named
|
<p>Have the following project tree (just a tutorial to learn pytest):</p>
<pre><code>(pytest) bob@Roberts-Mac-mini ds % tree
.
├── ds
│ └── __init__.py
└── tests
├── __pycache__
│ ├── test_compare.cpython-311-pytest-7.2.0.pyc
│ ├── test_square.cpython-311-pytest-7.2.0.pyc
│ └── test_stack.cpython-311-pytest-7.2.0.pyc
├── test_compare.py
└── test_square.py
4 directories, 6 files
</code></pre>
<p>and from its root all works as expected if I just run pytest:</p>
<pre><code>===================================================================================== test session starts =====================================================================================
platform darwin -- Python 3.11.0, pytest-7.2.0, pluggy-1.0.0
rootdir: /Users/bob/Documents/work/pytest_tut
collected 6 items
ds/tests/test_compare.py F.. [ 50%]
ds/tests/test_square.py ..F [100%]
========================================================================================== FAILURES ===========================================================================================
________________________________________________________________________________________ test_greater _________________________________________________________________________________________
def test_greater():
num = 100
> assert num > 100
E assert 100 > 100
ds/tests/test_compare.py:3: AssertionError
________________________________________________________________________________________ test_equality ________________________________________________________________________________________
def test_equality():
> assert 10 == 11
E assert 10 == 11
ds/tests/test_square.py:12: AssertionError
=================================================================================== short test summary info ===================================================================================
FAILED ds/tests/test_compare.py::test_greater - assert 100 > 100
FAILED ds/tests/test_square.py::test_equality - assert 10 == 11
================================================================================= 2 failed, 4 passed in 0.03s =================================================================================
</code></pre>
<p>but if I run:</p>
<pre><code>(pytest) bob@Roberts-Mac-mini pytest_tut % pytest test_square.py
===================================================================================== test session starts =====================================================================================
platform darwin -- Python 3.11.0, pytest-7.2.0, pluggy-1.0.0
rootdir: /Users/bob/Documents/work/pytest_tut
collected 0 items
==================================================================================== no tests ran in 0.00s ====================================================================================
ERROR: file or directory not found: test_square.py
</code></pre>
<p>the test_square.py module is not found, unlike when pytest was invoked without arguments.</p>
<p>What am I doing wrong? Thanks</p>
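<p>For reference, a sketch of the likely fix — pytest resolves file arguments relative to the current working directory, so from <code>pytest_tut</code> the path needs the package directories:</p>
<pre><code>pytest ds/tests/test_square.py
</code></pre>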
|
<python><pytest>
|
2023-01-12 18:59:34
| 1
| 1,231
|
Robert Alexander
|
75,101,106
| 14,391,779
|
Compare the two versions of dataframes for Upsert and list out changes
|
<p>I am doing an upsert operation in Databricks. Now I want to check what has changed between two upsert operations.</p>
<p>My original <code>df1</code> looks like this:
<a href="https://i.sstatic.net/2aajU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2aajU.png" alt="My 1st Dataframe" /></a></p>
<p>My upserted <code>df2</code> looks like this: <a href="https://i.sstatic.net/cN36e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cN36e.png" alt="My 2nd Dataframe" /></a></p>
<p>I want output like this:
<a href="https://i.sstatic.net/5Gu9J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Gu9J.png" alt="Output Dataframe" /></a></p>
<p>Here <code>id</code> is my <code>primary_key</code>.</p>
|
<python><apache-spark><pyspark><databricks><delta-lake>
|
2023-01-12 18:57:01
| 1
| 650
|
Suraj Shejal
|
75,101,058
| 11,313,496
|
Pandas: Create new column with repeating values based on non-repeating values in another column
|
<p>I have a dataframe with a column that follows this format (the following is just a sample to give an idea of the dataframe):</p>
<pre><code>df = pd.DataFrame(data={
    'value': [123, 456, 789, 111, 121, 34523, 4352, 45343, 623],
    'repeatVal': ['NaN', 2, 'NaN', 'NaN', 3, 'NaN', 'NaN', 'NaN', 'NaN'],
})
</code></pre>
<p>I want to create a new column that takes the values from 'value' and repeats it the number of times downward from 'repeatVal' so the output looks like 'result':</p>
<pre><code>df = pd.DataFrame(data={
    'value': [123, 456, 789, 111, 121, 34523, 4352, 45343, 623],
    'repeatVal': ['NaN', 2, 'NaN', 'NaN', 3, 'NaN', 'NaN', 'NaN', 'NaN'],
    'result': ['NaN', 456, 456, 'NaN', 121, 121, 121, 'NaN', 'NaN']
})
</code></pre>
<p>To be clear, I do not want to duplicate the rows, I only want to create a new col where values are repeated n times, where n is specified in a different column. The format of the column 'repeatVals' is such that there will never be overlap--that there will always be sufficient NaN values between the repeat indicators in 'repeatVals'</p>
<p>I have read the docs on np.repeat and np.tile but those don't appear to solve this issue.</p>
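<p>A sketch of one loop-based approach (assuming a default RangeIndex, and relying on the guaranteed non-overlap described above):</p>
<pre><code>import numpy as np
import pandas as pd

rep = pd.to_numeric(df["repeatVal"], errors="coerce")  # 'NaN' strings -> NaN
result = pd.Series(np.nan, index=df.index)
for i, n in rep.dropna().items():
    result.iloc[i : i + int(n)] = df["value"].iloc[i]
df["result"] = result
</code></pre>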
|
<python><pandas><numpy><repeat>
|
2023-01-12 18:52:35
| 2
| 316
|
Whitewater
|
75,101,002
| 979,099
|
List available topic on MQTT connection with paho
|
<p>I've got my MQTT client set up with Paho, and the broker I'm connecting to publishes a topic of <code>device/****/report</code>.</p>
<p>The issue is that **** is a dynamic value, the serial number of the device, which I need to pull into my code.</p>
<p>Is there any way that, in the on_connect method of paho, I can fetch the published topics so that I can parse this serial number?</p>
<pre><code>def try_connection(
    user_input: dict[str, Any],
) -> bool:
    """Test if we can connect to an MQTT broker."""
    # We don't import on the top because some integrations
    # should be able to optionally rely on MQTT.
    import paho.mqtt.client as mqtt  # pylint: disable=import-outside-toplevel

    client = mqtt.Client()
    result: queue.Queue[bool] = queue.Queue(maxsize=1)

    def on_connect(
        client_: mqtt.Client,
        userdata: None,
        flags: dict[str, Any],
        result_code: int,
        properties: mqtt.Properties | None = None,
    ) -> None:
        """Handle connection result."""
        LOGGER.debug(f"client: {client.__dict__}")
        LOGGER.debug(f"flags: {flags}")
        LOGGER.debug(f"result_code: {result_code}")
        LOGGER.debug(f"properties: {properties}")
        result.put(result_code == mqtt.CONNACK_ACCEPTED)

    client.on_connect = on_connect
    client.connect_async(user_input[CONF_HOST], 1883)
    client.loop_start()
    try:
        return result.get(timeout=5)
    except queue.Empty:
        return False
    finally:
        client.disconnect()
        client.loop_stop()
</code></pre>
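<p>A sketch of the usual MQTT pattern — the protocol has no "list topics" call, so subscribe with a single-level wildcard and parse the serial out of each message's topic:</p>
<pre><code>def on_connect(client_, userdata, flags, result_code, properties=None):
    client_.subscribe("device/+/report")   # '+' matches exactly one level

def on_message(client_, userdata, msg):
    serial = msg.topic.split("/")[1]       # device/&lt;serial&gt;/report
    print(f"device serial: {serial}")

client.on_connect = on_connect
client.on_message = on_message
</code></pre>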
|
<python><python-3.x><mqtt>
|
2023-01-12 18:47:01
| 1
| 6,315
|
K20GH
|
75,100,921
| 1,686,236
|
TensorFlow Keras make ThresholdedReLU Theta optimizable
|
<p>I've built a seq-to-seq model to take N periods of continuous input data and predict M periods of continuous response data (M<N). Because my response is sparse - usually 0 - I want to squish predictions that are "small" to 0, so I have added a Thresholded ReLU as my final layer. The model, with a single encoder and single decoder, is:</p>
<pre><code># encoder
encoder_inputs_11 = tf.keras.layers.Input(shape=(n_past, n_features))
encoder_layr1_11 = tf.keras.layers.LSTM(encoderLayers[0], return_state=True)
encoder_outputs1_11 = encoder_layr1_11(encoder_inputs_11)
encoder_states1_11 = encoder_outputs1_11[1:]
# decoder
decoder_inputs_11 = tf.keras.layers.RepeatVector(n_future)(encoder_outputs1_11[0])
decoder_layr1_11 = tf.keras.layers.LSTM(decoderLayers[0], return_sequences=True)(decoder_inputs_11, initial_state = encoder_states1_11)
decoder_outputs1_11 = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_response))(decoder_layr1_11)
# threshold layer
thresh_11 = tf.keras.layers.ThresholdedReLU(theta=reluTheta)(decoder_outputs1_11)
# entire model
result = tf.keras.models.Model(encoder_inputs_11, thresh_11)
</code></pre>
<p>How can I make <code>theta</code> in <code>tf.keras.layers.ThresholdedReLU(theta=reluTheta)</code> an optimizable parameter?</p>
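<p>A sketch of one approach — a custom layer whose <code>theta</code> is a trainable weight. Note the hard threshold has zero gradient with respect to <code>theta</code>, so this sketch substitutes a smooth sigmoid gate (the sharpness value is an assumption to tune):</p>
<pre><code>import tensorflow as tf

class TrainableThresholdedReLU(tf.keras.layers.Layer):
    def __init__(self, initial_theta=1.0, sharpness=10.0, **kwargs):
        super().__init__(**kwargs)
        self.initial_theta = initial_theta
        self.sharpness = sharpness

    def build(self, input_shape):
        self.theta = self.add_weight(
            name="theta", shape=(),
            initializer=tf.keras.initializers.Constant(self.initial_theta),
            trainable=True)

    def call(self, inputs):
        # gate ~ 1 for inputs well above theta, ~ 0 well below it
        gate = tf.sigmoid(self.sharpness * (inputs - self.theta))
        return inputs * gate

thresh_11 = TrainableThresholdedReLU(initial_theta=reluTheta)(decoder_outputs1_11)
</code></pre>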
|
<python><tensorflow><keras><deep-learning><lstm>
|
2023-01-12 18:39:17
| 0
| 2,631
|
Dr. Andrew
|
75,100,829
| 10,450,752
|
Shapely Buffering, not working as expected
|
<p>Why does buffering one of my geometries have an unexpected hole in it?</p>
<pre><code>from shapely import LineString
from geopandas import GeoDataFrame
l = LineString([
    (250, 447),
    (319, 446),
    (325, 387),
    (290, 374),
    (259, 378),
    (254, 385),
    (240, 409),
    (244, 440),
    (250, 447),
])

assert l.is_valid
assert l.is_simple

GeoDataFrame({'geometry': [
    l,
    l.buffer(80),
]}).plot(column='geometry')
</code></pre>
<p><a href="https://i.sstatic.net/BIpH7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BIpH7.png" alt="holey buffered geometry" /></a></p>
<ul>
<li>By removing a pair of coordinates, it doesn't have a hole.</li>
<li>When using Sedona's <code>ST_Buffer</code> this happened in more cases.</li>
</ul>
|
<python><gis><geopandas><shapely><apache-sedona>
|
2023-01-12 18:31:19
| 1
| 704
|
A. West
|
75,100,823
| 10,083,382
|
Pandas Group by and get corresponding Value
|
<p>Assuming that we have a Pandas Data Frame as below</p>
<pre><code>data = {'date':['2022-10-01', '2022-10-01', '2022-10-02', '2022-10-02', '2022-10-02'],
'price': [10, 20, 30, 40, 50],
'store': ['A', 'B', 'A', 'C', 'B']
}
df = pd.DataFrame(data)
</code></pre>
<p>I want to group by <code>date</code> and get the max <code>price</code> value, and for the max <code>price</code> I want the corresponding store value, i.e. I do not want to apply max aggregation on the <code>store</code> column.</p>
<p>How can I achieve that?</p>
<p><strong>Expected Output</strong></p>
<pre><code>+------------+-------+-------+
| date | price | store |
+------------+-------+-------+
| 2022-10-01 | 20 | B |
| 2022-10-02 | 50 | B |
+------------+-------+-------+
</code></pre>
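<p>A sketch using <code>idxmax</code> to pick, per date, the row holding the maximum price (ties resolve to the first occurrence):</p>
<pre><code>out = df.loc[df.groupby("date")["price"].idxmax()].reset_index(drop=True)
print(out)
</code></pre>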
|
<python><pandas><dataframe><group-by>
|
2023-01-12 18:30:34
| 1
| 394
|
Lopez
|
75,100,754
| 5,437,090
|
selenium.common.exceptions.TimeoutException: Message: timeout: Timed out receiving message from renderer: 298,437
|
<p>I am doing web scraping using <code>selenium</code> in python using the following code:</p>
<pre><code>from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
def get_all_search_details(URL):
    SEARCH_RESULTS = {}
    options = Options()
    options.headless = True
    options.add_argument("--remote-debugging-port=9222")
    driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)
    driver.get(URL)
    print(f"Scraping {driver.current_url}")
    try:
        medias = WebDriverWait(driver, 30).until(EC.presence_of_all_elements_located((By.CLASS_NAME, 'result-row')))
    except:
        print(f">> Selenium Exception: {URL}")
        return
    for media_idx, media_elem in enumerate(medias):
        outer_html = media_elem.get_attribute('outerHTML')
        result = scrap_newspaper(outer_html)  # some function to extract results
        SEARCH_RESULTS[f"result_{media_idx}"] = result
    return SEARCH_RESULTS

if __name__ == '__main__':
    in_url = "https://digi.kansalliskirjasto.fi/search?query=%22heimo%20kosonen%22&orderBy=RELEVANCE"
    my_res = get_all_search_details(in_url)
</code></pre>
<p>I applied <code>try except</code> to ensure I would not get trapped in a selenium timeout exception; however, here is the error I obtained:</p>
<pre><code>Scraping https://digi.kansalliskirjasto.fi/search?query=%22heimo%20kosonen%22&orderBy=RELEVANCE
>> Selenium Exception: https://digi.kansalliskirjasto.fi/search?query=%22heimo%20kosonen%22&orderBy=RELEVANCE
Traceback (most recent call last):
File "nationalbiblioteket_logs.py", line 274, in <module>
run()
File "nationalbiblioteket_logs.py", line 262, in run
all_queries(file_=get_query_log(QUERY=args.query),
File "nationalbiblioteket_logs.py", line 218, in all_queries
df = pd.DataFrame( df.apply( check_urls, axis=1, ) )
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/pandas/core/frame.py", line 8740, in apply
return op.apply()
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/pandas/core/apply.py", line 688, in apply
return self.apply_standard()
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/pandas/core/apply.py", line 812, in apply_standard
results, res_index = self.apply_series_generator()
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/pandas/core/apply.py", line 828, in apply_series_generator
results[i] = self.f(v)
File "nationalbiblioteket_logs.py", line 217, in <lambda>
check_urls = lambda INPUT_DF: analyze_(INPUT_DF)
File "nationalbiblioteket_logs.py", line 200, in analyze_
df["search_results"] = get_all_search_details(in_url)
File "/home/xenial/WS_Farid/DARIAH-FI/url_scraping.py", line 27, in get_all_search_details
driver.get(URL)
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 441, in get
self.execute(Command.GET, {'url': url})
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 429, in execute
self.error_handler.check_response(response)
File "/home/xenial/anaconda3/envs/py37/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 243, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: timeout: Timed out receiving message from renderer: 298,437
(Session info: headless chrome=109.0.5414.74)
Stacktrace:
#0 0x561d3b04c303 <unknown>
#1 0x561d3ae20d37 <unknown>
#2 0x561d3ae0b549 <unknown>
#3 0x561d3ae0b285 <unknown>
#4 0x561d3ae09c77 <unknown>
#5 0x561d3ae0a408 <unknown>
#6 0x561d3ae1767f <unknown>
#7 0x561d3ae182d2 <unknown>
#8 0x561d3ae28fd0 <unknown>
#9 0x561d3ae2d34b <unknown>
#10 0x561d3ae0a9c5 <unknown>
#11 0x561d3ae28d7f <unknown>
#12 0x561d3ae95aa0 <unknown>
#13 0x561d3ae7d753 <unknown>
#14 0x561d3ae50a14 <unknown>
#15 0x561d3ae51b7e <unknown>
#16 0x561d3b09b32e <unknown>
#17 0x561d3b09ec0e <unknown>
#18 0x561d3b081610 <unknown>
#19 0x561d3b09fc23 <unknown>
#20 0x561d3b073545 <unknown>
#21 0x561d3b0c06a8 <unknown>
#22 0x561d3b0c0836 <unknown>
#23 0x561d3b0dbd13 <unknown>
#24 0x7f80a698a6ba start_thread
</code></pre>
<p>Is there a better alternative to get rid of such a selenium timeout exception? To be more specific, I added:</p>
<pre><code>options.add_argument("--disable-extensions")
options.add_argument("--no-sandbox")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
</code></pre>
<p>But they were not helpful to tackle <code>selenium.common.exceptions.TimeoutException: Message: timeout: Timed out receiving message from renderer: 298,437</code>!</p>
<p>Here are some more details regarding libraries I use:</p>
<pre><code>>>> selenium.__version__
'4.5.0'
>>> webdriver_manager.__version__
'3.8.4'
</code></pre>
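<p>A sketch of a more targeted guard for inside <code>get_all_search_details</code> — the traceback shows the timeout is raised by <code>driver.get()</code> itself, which sits outside the <code>try</code> block, so set an explicit page-load timeout and wrap the navigation too:</p>
<pre><code>from selenium.common.exceptions import TimeoutException, WebDriverException

driver.set_page_load_timeout(60)   # fail fast instead of hanging the renderer
try:
    driver.get(URL)
except (TimeoutException, WebDriverException):
    print(f">> Selenium Exception on GET: {URL}")
    driver.quit()
    return
</code></pre>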
|
<python><selenium><web-scraping><timeoutexception>
|
2023-01-12 18:23:47
| 2
| 1,621
|
farid
|
75,100,563
| 496,289
|
Print an xlsx file with charts/pictures to pdf on Linux using Python
|
<p>I've read following and many more:</p>
<ol>
<li><a href="https://askubuntu.com/questions/437331/print-xlsx-file-from-command-line-using-ghostscript-and-libreoffice">print xlsx file from command line using ghostscript and libreoffice</a></li>
<li><a href="https://stackoverflow.com/questions/16683376/print-chosen-worksheets-in-excel-files-to-pdf-in-python">Print chosen worksheets in excel files to pdf in python</a></li>
<li><a href="https://stackoverflow.com/questions/62391152/python-converting-an-excel-file-xlsx-to-a-pdf-pdf">Python Converting an Excel file (.xlsx) to a PDF (.pdf)</a></li>
<li><a href="https://stackoverflow.com/questions/52326782/python-converting-xlsx-to-pdf">Python - Converting XLSX to PDF</a></li>
<li><a href="https://stackoverflow.com/questions/67577755/can-you-hard-print-an-xlsx-file-on-linux-with-python">Can you hard print an .xlsx file on Linux with Python?</a>
(unasnwered)</li>
<li><a href="https://stackoverflow.com/questions/46386143/python-library-to-open-existing-xlsx-workbook-with-charts">Python library to open existing .xlsx workbook with charts</a></li>
</ol>
<hr />
<ul>
<li>I have an xlsx file that contains lots of charts, tables and pictures (png watermarks inserted into the excel sheet as picture). E.g. <a href="https://templates.office.com/en-us/fitness-tracker-tm16400940" rel="nofollow noreferrer">see this sample template</a>.</li>
<li>I want to print this to a pdf.</li>
<li>I'm on Linux and preferably want to use Python.</li>
</ul>
<p>Closest I came was using <code>libreoffice</code> as described in answer #1 (Command used: <code>libreoffice --headless --invisible --convert-to pdf ./Book1.xlsx</code>) in list above. But all the text/formatting is messed up (whether I open the pdf file on Linux where I printed it or on Windows where I created the original xlsx file):</p>
<p><a href="https://i.sstatic.net/EXe9R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EXe9R.png" alt="enter image description here" /></a></p>
<p>Same xlsx file printed on Windows using Excel App's print to pdf:
<a href="https://i.sstatic.net/q4EWu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q4EWu.png" alt="enter image description here" /></a></p>
<p>File does look ok when I open it in libreoffice-calc on linux:
<a href="https://i.sstatic.net/B2hYS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B2hYS.png" alt="enter image description here" /></a></p>
|
<python><excel><linux><pdf><printing>
|
2023-01-12 18:05:00
| 0
| 17,945
|
Kashyap
|
75,100,542
| 7,984,318
|
How to modify date value to only keep month value
|
<p>I have a data frame, you can have it by running:</p>
<pre><code>import pandas as pd
from io import StringIO
df = """
case_id scheduled_date code
1213 2021-08-17 1
3444 2021-06-24 3
4566 2021-07-20 5
"""
df= pd.read_csv(StringIO(df.strip()), sep='\s\s+', engine='python')
</code></pre>
<p>How can I change <code>scheduled_date</code> to only keep year and month? The output should be:</p>
<pre><code> case_id scheduled_date code
0 1213 2021-08 1
1 3444 2021-06 3
2 4566 2021-07 5
</code></pre>
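<p>A sketch — parse the column as datetimes, then format back to year-month strings:</p>
<pre><code>df["scheduled_date"] = pd.to_datetime(df["scheduled_date"]).dt.strftime("%Y-%m")
# alternatively, keep it date-like as a period: .dt.to_period("M")
</code></pre>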
|
<python><pandas><dataframe>
|
2023-01-12 18:02:55
| 4
| 4,094
|
William
|
75,100,477
| 10,229,768
|
flake8 & pycodestyle/pep8 not showing E721 errors
|
<h5>Versions</h5>
<pre class="lang-bash prettyprint-override"><code>λ python --version
Python 3.10.6
λ flake8 --version
5.0.4 (mccabe: 0.7.0, pycodestyle: 2.9.1, pyflakes: 2.5.0) CPython 3.10.6 on Linux
# and on Windows
## Edit: after update
λ flake8 --version
6.0.0 (mccabe: 0.7.0, pycodestyle: 2.10.0, pyflakes: 3.0.1) CPython 3.10.6 on Linux
</code></pre>
<h5>Error to Catch</h5>
<ul>
<li><a href="https://www.flake8rules.com/rules/E721.html" rel="nofollow noreferrer">https://www.flake8rules.com/rules/E721.html</a>
<ul>
<li><strong>No:</strong> <code>type(user) == User</code></li>
<li><strong>Do:</strong> <code>isinstance(user, User)</code></li>
</ul>
</li>
</ul>
<h5>Example <em>(test.py)</em></h5>
<pre class="lang-py prettyprint-override"><code>test = []
if type(test) == list:
    print('test is a list')
else:
    print('test not a list')
</code></pre>
<p>Both <code>flake8 test.py</code> & <code>pycodestyle test.py</code> commands, in terminal, do not show any errors. Yet they should.<br />
I have no extra config, from what I'm reading this error should be enabled by default; Per <a href="https://pycodestyle.pycqa.org/en/2.9.1/intro.html#error-codes" rel="nofollow noreferrer">pycodestyle 2.9.1 Docs</a></p>
<ul>
<li>unless disabled per line with <code># noqa</code></li>
</ul>
<p>I've also tried:</p>
<ul>
<li><code>{flake8|pycodestyle} --select E721 test.py</code> to explicitly select the error</li>
<li><code>{flake8|pycodestyle} --ignore E302 test.py</code> to clear the default ignore list</li>
<li><code>{flake8|pycodestyle} --ignore E302 --select E721 test.py</code></li>
</ul>
<p>Am I missing something? I quite like this error, and now I'm worried it's not catching <strong>other</strong> errors as well.</p>
|
<python><pep8><flake8><pycodestyle>
|
2023-01-12 17:56:37
| 1
| 2,268
|
Nealium
|
75,100,323
| 2,706,344
|
Elements of one column that don't have a certain value in another column
|
<p>I have a DataFrame, let's assume the following one:</p>
<pre><code>df=pd.DataFrame({'person':['Sebastian','Sebastian','Sebastian', 'Maria', 'Maria', 'Maria', 'Achim','Achim','Achim'],'item':['house','garden','sink','sink','gold','house','stone','gold','wood']})
</code></pre>
<p>Now I want to get a list of all persons who don't own a certain item, for example gold. I've found a way to implement it, but I think there is a better way. This is how I have done it:</p>
<pre><code>allPersons=df['person'].unique()
personWithGold=df[df['item']=='gold']['person'].unique()
personWithoutGold=allPersons[~np.isin(allPersons,personWithGold)]
</code></pre>
<p>Any suggestions how to improve the code? I somehow feel that there is a pretty easy one-line solution.</p>
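<p>A sketch of a shorter version using <code>isin</code> directly on the dataframe, without numpy:</p>
<pre><code>with_gold = df.loc[df["item"] == "gold", "person"]
personWithoutGold = df.loc[~df["person"].isin(with_gold), "person"].unique()
</code></pre>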
|
<python><pandas><numpy>
|
2023-01-12 17:41:46
| 3
| 4,346
|
principal-ideal-domain
|
75,100,261
| 20,901,192
|
Is there a way to get the file import dependencies for different files in a Python project?
|
<p>I'm working with a Python project consisting of multiple files and nested directories. I want to get a dictionary consisting of all existing files and their references to other files (nesting not required), for example:</p>
<pre><code>dependencies = {
"file1.py": ["file2.py","file4.py"],
"file2.py": ["file3.py","file4.py",
"file3.py": ["file1.py"],
"file4.py": []
}
</code></pre>
<p>Is there a module or an existing approach already out there to accomplish this?</p>
<p>My current plan is to write a program to read each line in every file and just track what comes after any <code>from</code> or <code>import</code> statement, but I'm not sure if that method's airtight.</p>
<p>Pseudocode:</p>
<pre><code>dependencies = {}
for file in directory:
for line in file:
if line begins with "import" or "from":
dependencies[file] += everything_after_from_import(line)
return dependencies
</code></pre>
<p>I looked at modules like <code>pipdeptree</code>, but those only seem to track pip dependencies, rather than imports from file-to-file. I also don't need to worry about performance or scalability, as this is for generating an offline report for my own reference.</p>
<p>Is my current approach the best, or are there better ways out there?</p>
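<p>The line-scanning approach misses multi-line imports, strings that merely contain the word, and imports inside functions. Python's own parser handles all of that; a sketch with <code>ast</code> (note it yields <em>module names</em>, so mapping them back to project files is a separate resolution step):</p>
<pre><code>import ast
from pathlib import Path

def imports_in_file(path):
    """Return the top-level module names imported by one .py file."""
    tree = ast.parse(Path(path).read_text(encoding='utf-8'))
    mods = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split('.')[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split('.')[0])
    return mods

dependencies = {str(p): sorted(imports_in_file(p)) for p in Path('.').rglob('*.py')}
</code></pre>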
|
<python>
|
2023-01-12 17:36:43
| 1
| 374
|
Matt Eng
|
75,100,102
| 20,677,182
|
Get app version from pyproject.toml inside python code
|
<p>I am not very familiar with Python; I have only done automation with it, so I am new to packages and everything.<br />
I am creating an API with Flask, Gunicorn and Poetry.<br />
I noticed that there is a version number inside the pyproject.toml and I would like to create a route /version which returns the version of my app.<br />
My app structure looks like this at the moment:</p>
<pre><code>├── README.md
├── __init__.py
├── poetry.lock
├── pyproject.toml
├── tests
│ └── __init__.py
└── wsgi.py
</code></pre>
<p>Where <code>wsgi.py</code> is my main file which run the app.</p>
<p>I saw people using importlib, but I didn't find how to make it work, as it is used with:<br />
<code> __version__ = importlib.metadata.version("__package__")</code><br />
But I have no clue what this <code>__package__</code> means.</p>
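<p>A minimal sketch, assuming the project is installed in the environment (e.g. via <code>poetry install</code>) and that <code>name = "my-app"</code> is what <code>pyproject.toml</code> declares under <code>[tool.poetry]</code> (substitute your real project name; it is a placeholder here):</p>
<pre><code>import importlib.metadata

__version__ = importlib.metadata.version("my-app")  # the name from pyproject.toml

@app.route("/version")
def version():
    return {"version": __version__}
</code></pre>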
|
<python><python-packaging><python-poetry><python-importlib>
|
2023-01-12 17:23:32
| 6
| 365
|
pgossa
|
75,100,088
| 5,437,090
|
Error handling of python subprocess to run bash script with multiple input arguments vs successful running of bash script with hardcoded inputs
|
<p>I have a REST API call, issued via a <code>curl</code> command inside a bash script, that retrieves newspaper information from a national library website as follows. I can run this bash script in two different ways:</p>
<ol>
<li><code>$ bash sof.sh # input arguments are hardcoded! WORKS OK!</code></li>
<li><code>$ python file.py # input arguments are passed from a dictionary! ERROR!</code></li>
</ol>
<p>Here is <code>sof.sh</code> file:</p>
<pre><code>#!/bin/bash
: '
################## 1) Running via Bash Script & hardcoding input arguments ##################
myQUERY="Rusanen"
myORDERBY="DATE_DESC"
myFORMATS='["NEWSPAPER"]'
myFUZZY="false"
myPubPlace='["Iisalmi", "Kuopio"]'
myLANG='["FIN"]'
################## 1) Running via Bash Script & hardcoding input arguments ##################
# Result: OK! a json file with retreived expected information
'
#: '
################## 2) Running from python script with input arguments ##################
for ARGUMENT in "$@"
do
#echo "$ARGUMENT"
KEY=$(echo $ARGUMENT | cut -f1 -d=)
KEY_LENGTH=${#KEY}
VALUE="${ARGUMENT:$KEY_LENGTH+1}"
export "$KEY"="$VALUE"
done
echo $# "ARGS:" $*
################## 2) Running from python script with input arguments ##################
# Result: Error!!
#'
out_file_name="newspaper_info_query_${myQUERY// /_}.json"
echo ">> Running $0 | Searching for QUERY: $myQUERY | Saving in $out_file_name"
curl 'https://digi.kansalliskirjasto.fi/rest/binding-search/search/binding?offset=0&count=10000' \
-H 'Accept: application/json, text/plain, */*' \
-H 'Cache-Control: no-cache' \
-H 'Connection: keep-alive' \
-H 'Content-Type: application/json' \
-H 'Pragma: no-cache' \
--compressed \
--output $out_file_name \
-d @- <<EOF
{ "query":"$myQUERY",
"languages":$myLANG,
"formats":$myFORMATS,
"orderBy":"$myORDERBY",
"fuzzy":$myFUZZY,
"publicationPlaces": $myPubPlace
}
EOF
</code></pre>
<p>Running <code>$ bash sof.sh</code> with manually hardcoded input arguments in the bash script works fine with expected behavior, i.e., it returns a json file with expected information.</p>
<p>However, to automate my code, I need to run this bash script using <code>$ python file.py</code> with <code>subprocess</code> as follows:</p>
<pre><code>def rest_api_sof(params={}):
params = {'query': ["Rusanen"],
'publicationPlace': ["Iisalmi", "Kuopio"],
'lang': ["FIN"],
'orderBy': ["DATE_DESC"],
'formats': ["NEWSPAPER"],
}
print(f"REST API: {params}")
subprocess.call(['bash',
'sof.sh',
f'myFORMATS={params.get("formats", "")}',
f'myQUERY={",".join(params.get("query"))}',
f'myORDERBY={",".join(params.get("orderBy", ""))}',
f'myLANG={params.get("lang", "")}',
f'myPubPlace={params.get("publicationPlace", "")}',
])
if __name__ == '__main__':
rest_api_sof()
</code></pre>
<p>To replicate and see Error in json file, please comment <code>1) Running via Bash Script & hard coding input arguments</code> and correspondingly uncomment <code>2) Running from python script with input arguments</code> and run <code>$ python file.py</code>.</p>
<p>Here is the error in my json file after running <code>$ python file.py</code>:</p>
<pre><code><!doctype html>
<html lang="fi">
<head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# article: http://ogp.me/ns/article#">
<title>Digitaaliset aineistot - Kansalliskirjasto</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<meta name="robots" content="index, follow"/>
<meta name="copyright" content="Kansalliskirjasto. Kaikki oikeudet pidätetään."/>
<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
<base href="/">
<meta name="google-site-verification" content="fLK4q3SMlbeGTQl-tN32ENsBoaAaTlRd8sRbmTxlSBU" />
<meta name="msvalidate.01" content="7EDEBF53A1C81ABECE44A7A666D94950" />
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=DM+Serif+Display&family=Open+Sans:ital,wght@0,300;0,400;0,600;0,700;1,400&display=swap" rel="stylesheet">
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-10360577-3', 'auto');
</script>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-KF8NK1STFH"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
// see google-analytics.service
</script>
<!-- Matomo -->
<script>
var _paq = window._paq = window._paq || [];
(function() {
var u = "https://tilasto.lib.helsinki.fi/";
_paq.push(['setTrackerUrl', u + 'matomo.php']);
_paq.push(['setSiteId', '17']);
var d = document, g = d.createElement('script'), s = d.getElementsByTagName('script')[0];
g.async = true;
g.src = u + 'matomo.js';
s.parentNode.insertBefore(g, s);
}
)();
</script>
<noscript><p><img src="https://tilasto.lib.helsinki.fi/matomo.php?idsite=17&amp;rec=1" style="border:0;" alt=""/></p></noscript>
<!-- End Matomo Code --><style type="text/css">[ng-cloak] { display: none !important; }</style>
<script>
window.errorHandlerUrl = "/rest/js-error-handler";
window.commonOptions = {"localLoginEnabled":true,"localRegistrationEnabled":false,"marcOverlayEnabled":true,"opendataEnabled":true,"overrideHeader":"","hakaEnabled":true,"includeExternalResources":true,"legalDepositWorkstation":false,"jiraCollectorEnabled":true,"buildNumber":"8201672f226078f2cefbe8a0025dc03f5d98c25f","searchMaxResults":10000,"showExperimentalSearchFeatures":true,"bindingSearchMaxResults":1000,"excelDownloadEnabled":true,"giosgEnabled":true};
</script>
<style type="text/css">.external-resource-alt { display: none !important; }</style>
</head>
<body class="digiweb">
<noscript>
<h3>Sovellus vaatii JavaScriptin.</h3>
<p>Ole hyvä ja laita selaimesi JavaScript päälle, jos haluat käyttää palvelua.</p>
<h3>Aktivera Javascript.</h3>
<p>För att kunna använda våra webbaserade system behöver du ha Javascript aktiverat.</p>
<h3>This application requires JavaScript.</h3>
<p>Please turn on JavaScript in order to use the application.</p>
</noscript><app-digiweb></app-digiweb>
<div id="kk-server-error" style="display: none;">
<h1 align="center">Järjestelmässä tapahtui virhe.</h1></div>
<div id="kk-server-page" style="display: none;">
</div>
<script type="text/javascript">
window.language = "fi";
window.renderId = 1673541833124;
window.facebookAppId = "465149013631512"
window.reCaptchaSiteKey = "6Lf7xuASAAAAANNu9xcDirXyzjebiH4pPpkKVCKq";
</script>
<script src="/assets/runtime-es2015.f1ac93cb35b9635f0f7e.js" type="module"></script>
<script src="/assets/runtime-es5.f1ac93cb35b9635f0f7e.js" nomodule></script>
<script src="/assets/polyfills-es2015.8db02cde19c51f542c72.js" type="module"></script>
<script src="/assets/polyfills-es5.2273af7ef2cf66cdc0de.js" nomodule></script>
<script src="/assets/styles-es2015.a539381f703344410705.js" type="module"></script>
<script src="/assets/styles-es5.a539381f703344410705.js" nomodule></script>
<script src="" type="module"></script>
<script src="" type="module"></script>
<script src="" nomodule></script>
<script src="/assets/main-es2015.b5796f606e925a9d947d.js" type="module"></script>
<script src="/assets/main-es5.b5796f606e925a9d947d.js" nomodule></script>
</body>
</html>
</code></pre>
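<p>One concrete difference between the two runs, visible in the snippets (a likely cause, not a confirmed one): the hardcoded script sends valid JSON fragments such as <code>["NEWSPAPER"]</code>, while the f-strings interpolate Python list reprs such as <code>['NEWSPAPER']</code> with single quotes, which is not valid JSON, so the server answers with its HTML error page. Also, <code>myFUZZY</code> is never passed from Python, leaving <code>"fuzzy":</code> empty in the heredoc. A sketch that serializes each value with <code>json.dumps</code> before handing it to bash:</p>
<pre><code>import json
import subprocess

subprocess.call([
    'bash', 'sof.sh',
    f'myQUERY={",".join(params.get("query"))}',
    f'myORDERBY={",".join(params.get("orderBy"))}',
    'myFUZZY=false',
    f'myFORMATS={json.dumps(params.get("formats"))}',           # -> ["NEWSPAPER"]
    f'myLANG={json.dumps(params.get("lang"))}',                 # -> ["FIN"]
    f'myPubPlace={json.dumps(params.get("publicationPlace"))}',
])
</code></pre>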
|
<python><linux><bash><shell><subprocess>
|
2023-01-12 17:22:02
| 1
| 1,621
|
farid
|
75,100,067
| 2,463,570
|
Django Views not getting POST Data from api call
|
<p>I am not using Django REST framework,</p>
<p>but in a normal views.py I have a simple view</p>
<p>#views.py</p>
<pre><code>def api_post_operations(request):
pdb.set_trace()
if request.POST:
print(request.POST["name"])
print(request.POST["address"])
</code></pre>
<p>Now I call it:</p>
<pre><code>import requests
url = "http://localhost:8000/api_post_operations"
payload = {"name":"raj", "address": "asasass" }
rees = requests.post(url, data=payload, headers={})
</code></pre>
<p>It comes back as:</p>
<pre><code>(Pdb) request.POST
<QueryDict: {}>
</code></pre>
<p>Any reason why it is coming back blank (<code>{}</code>)?</p>
<pre><code>request.body is also coming back blank
</code></pre>
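<p>One common cause worth ruling out (a hedged guess, not confirmed from the question): the URL has no trailing slash, so with Django's default <code>APPEND_SLASH</code> behaviour the POST is answered with a redirect, and <code>requests</code> follows it as a GET, dropping the body. A minimal check:</p>
<pre><code>import requests

url = "http://localhost:8000/api_post_operations/"  # note the trailing slash
payload = {"name": "raj", "address": "asasass"}
rees = requests.post(url, data=payload)
print(rees.history)  # a 301/302 here means the original POST was redirected
</code></pre>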
|
<python><python-3.x><django>
|
2023-01-12 17:20:07
| 0
| 12,390
|
Rajarshi Das
|
75,100,065
| 6,676,101
|
If you were to explicitly call the `__enter__()` and `__exit__()` methods instead of using a `with` statement, what would the code look like?
|
<p>If you were to explicitly call the <code>__enter__()</code> and <code>__exit__()</code> methods instead of using a <code>with</code> statement, what would the code look like?</p>
<p>Code using a <code>with</code> statement:</p>
<pre class="lang-python prettyprint-override"><code>with open("test.txt", "w") as file:
file.write("Hello, World!")
</code></pre>
<p>Failed attempt to re-write the code</p>
<p>The goal is to replace the <code>with</code>-statement with explicit calls to <code>__enter__()</code> and <code>__exit__()</code></p>
<pre class="lang-python prettyprint-override"><code>file = open("test.txt", "w")
try:
file.__enter__()
file.write("Hello, World!")
file.__exit__()
except BaseException as exc:
exc_class, exc_object, traceback = something_some_some()
file.__exit__(exc_class, exc_object, traceback)
finally:
pass
</code></pre>
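<p>For reference, a sketch following the expansion described in PEP 343: <code>__enter__</code> runs <em>before</em> the try, <code>__exit__</code> receives the exception triple on failure (and suppresses the exception if it returns a truthy value), and gets three <code>None</code>s on success:</p>
<pre class="lang-python prettyprint-override"><code>import sys

file = open("test.txt", "w")
value = type(file).__enter__(file)
try:
    value.write("Hello, World!")
except BaseException:
    if not type(file).__exit__(file, *sys.exc_info()):
        raise
else:
    type(file).__exit__(file, None, None, None)
</code></pre>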
|
<python><python-3.x><with-statement>
|
2023-01-12 17:19:50
| 1
| 4,700
|
Toothpick Anemone
|
75,099,888
| 514,149
|
FileNotFoundError with shared memory on Linux
|
<p>I am trying to create a shared memory for my Python application, which should be used in the parent process and in another process that is spawned from that parent process. In most cases that works fine, however, sometimes I get the following stacktrace:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/usr/lib/python3.8/multiprocessing/spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 110, in __setstate__
self._semlock = _multiprocessing.SemLock._rebuild(*state)
FileNotFoundError: [Errno 2] No such file or directory: '/psm_47f7f5d7'
</code></pre>
<p>I want to emphasize that our code/application works fine in 99% of the time. We are spawning these new processes with new shared memory for each such process on a regular basis in our application (which is a server process, so it's running 24/7). Nearly all the time this works fine, only from time to time this error above is thrown, which then kills the whole application.</p>
<p><strong>Update:</strong> I noticed that this problem occurs mainly when the application was running for a while already. When I start it up the creation of shared memory and spawning new processes works fine without this error.</p>
<p>The shared memory is created like this:</p>
<pre class="lang-py prettyprint-override"><code># Spawn context for multiprocessing
_mp_spawn_ctxt = multiprocessing.get_context("spawn")
_mp_spawn_ctxt_pipe = _mp_spawn_ctxt.Pipe
# Create shared memory
mem_size = width * height * bpp
shared_mem = shared_memory.SharedMemory(create=True, size=mem_size)
image = np.ndarray((height, width, bpp), dtype=np.uint8, buffer=shared_mem.buf)
parent_pipe, child_pipe = _mp_spawn_ctxt_pipe()
time.sleep(0.1)
# Spawn new process
# _CameraProcess is a custom class derived from _mp_spawn_ctxt.Process
proc = _CameraProcess(shared_mem, child_pipe)
proc.start()
</code></pre>
<p>Any ideas what could be the issue here?</p>
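<p>A pattern that sidesteps pickling OS-level handles across the spawn (a sketch, not a confirmed diagnosis of the traceback above): send only the segment's <em>name</em> through the pipe and re-attach in the child, keeping the parent's <code>SharedMemory</code> object alive until the child is done:</p>
<pre class="lang-py prettyprint-override"><code># parent: pass the name, not the object
parent_pipe.send(shared_mem.name)

# child side (sketch):
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(name=child_pipe.recv())  # attach to existing block
# ... build the ndarray on shm.buf, use it ...
shm.close()  # the child only closes; the parent unlinks when everything is done
</code></pre>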
|
<python><linux><multiprocessing><python-multiprocessing><shared-memory>
|
2023-01-12 17:04:37
| 1
| 10,479
|
Matthias
|
75,099,842
| 1,440,565
|
Find poetry installation used by PyCharm
|
<p>I am trying to set up a Poetry virtual environment in PyCharm. I am doing the following:</p>
<ol>
<li>Press Ctrl+Alt+S to open the Project Settings</li>
<li>Go to Project -> Python Interpreter</li>
<li>Click on Add Intepreter -> Add Local Interpreter...</li>
<li>Select Poetry Environment</li>
<li>Select Poetry Environment</li>
<li>Make sure Python 3.11 is selected as the Base Environment</li>
<li>Click OK</li>
</ol>
<p>This gives the following error:</p>
<blockquote>
<p>RuntimeError: The lock file is not compatible with the current version of Poetry.</p>
</blockquote>
<p>I already installed poetry 1.3.2. I can confirm this from powershell:</p>
<pre><code>PS C:\src\my-project> poetry --version
Poetry (version 1.3.2)
</code></pre>
<p>I think I have another version of poetry installed somewhere and PyCharm is picking it up. How do I find this instance of poetry? Where do I look for it?</p>
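<p>One way to enumerate every <code>poetry</code> executable PyCharm might resolve is to scan <code>PATH</code> from Python; typical Windows locations are the official installer's <code>%APPDATA%\Python\Scripts</code>, pipx's <code>%USERPROFILE%\.local\bin</code>, and a per-interpreter <code>Scripts</code> folder (a sketch):</p>
<pre><code>import os

for d in os.environ["PATH"].split(os.pathsep):
    exe = os.path.join(d, "poetry.exe")
    if os.path.isfile(exe):
        print(exe)  # every candidate an IDE could pick up
</code></pre>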
|
<python><pycharm><python-poetry>
|
2023-01-12 17:00:38
| 0
| 83,954
|
Code-Apprentice
|
75,099,551
| 8,081,835
|
How to clean GPU memory when loading another dataset
|
<p>I'm training the CNN network on audio spectrograms comparing 2 types of input data (3 seconds and 30 seconds). This results in different spectrogram sizes in experiments.</p>
<p>I'm using this to get data:</p>
<pre class="lang-py prettyprint-override"><code>def get_data(data_type, batch_size):
    assert data_type in ['3s', '30s'], "data_type should be either 3s or 30s"
if data_type == '3s':
audio_dir = DATA_PATH / 'genres_3_seconds'
max_signal_length_to_crop = 67_500
elif data_type == '30s':
audio_dir = DATA_PATH / 'genres_original'
max_signal_length_to_crop = 660_000
input_shape = (max_signal_length_to_crop, 1)
train_ds, val_ds = tf.keras.utils.audio_dataset_from_directory(
directory=audio_dir,
batch_size=batch_size,
validation_split=0.2,
output_sequence_length=max_signal_length_to_crop,
subset='both',
label_mode='categorical'
)
test_ds = val_ds.shard(num_shards=2, index=0)
val_ds = val_ds.shard(num_shards=2, index=1)
return train_ds, val_ds, test_ds, input_shape
</code></pre>
<p>I'm using this function to create models.</p>
<pre class="lang-py prettyprint-override"><code>def get_model(model_type, data_type, input_shape):
if data_type == '3s':
WIN_LENGTH = 1024 * 2
        FRAME_STEP = int(WIN_LENGTH / 4)  # /4, not /2
elif data_type == '30s':
WIN_LENGTH = 1024 * 4
        FRAME_STEP = int(WIN_LENGTH / 2)  # /4, not /2
    spectrogram_layer = kapre.composed.get_melspectrogram_layer(input_shape=input_shape, win_length=WIN_LENGTH, hop_length=FRAME_STEP)
model = Sequential([
        spectrogram_layer,
*model_dict[model_type],
Dense(units=10, activation='softmax', name='last_dense')
])
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=START_LR),
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'],
)
return model
</code></pre>
<pre class="lang-py prettyprint-override"><code>model_dict = {
'CNN_Basic': [
Conv2D(filters=8, kernel_size=3, activation='relu'),
MaxPooling2D(2),
Conv2D(filters=16, kernel_size=3, activation='relu'),
MaxPooling2D(2),
Conv2D(filters=32, kernel_size=3, activation='relu'),
MaxPooling2D(2),
Flatten(),
Dense(units=128, activation='relu'),
],
...
}
</code></pre>
<p>I'm running several experiments on different architectures in a loop. This is my training loop:</p>
<pre class="lang-py prettyprint-override"><code>for data_type in ['3s', '30s']:
train_ds, val_ds, test_ds, input_shape = get_data(data_type=data_type, batch_size=30)
for model_type in ['CNN_Basic', ...]:
model = get_model(model_type, input_shape=input_shape, data_type=data_type)
model.fit(train_ds, epochs=epochs, validation_data=val_ds)
</code></pre>
<p>The error I get:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "...\lib\site-packages\tensorflow\python\trackable\base.py", line 205, in _method_wrapper
result = method(self, *args, **kwargs)
File "...\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "...\lib\site-packages\tensorflow\python\framework\ops.py", line 1969, in _create_c_op
raise ValueError(e.message)
ValueError: Exception encountered when calling layer "dense" (type Dense).
Dimensions must be equal, but are 17024 and 6272 for '{{node dense/MatMul}} = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false](Placeholder, dense/MatMul/ReadVariableOp)' with input shapes: [?,17024], [6272,128].
Call arguments received by layer "dense" (type Dense):
• inputs=tf.Tensor(shape=(None, 17024), dtype=float32)
</code></pre>
<p>I think it's caused by something with the datasets because I got this error only when I ran an experiment with a 3-second spectrogram after the 30-second one. I'm creating new models each time, and to load the data I use <code>tf.keras.utils.audio_dataset_from_directory</code> and load it to the same variable in the following loop iterations.</p>
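<p>One thing to check, since <code>model_dict</code> is defined at module level: its layer objects are created once and reused across loop iterations, so the <code>Dense</code> (and conv) layers stay built for the first experiment's feature size, which matches the 6272-vs-17024 mismatch in the traceback. A sketch that builds fresh layers per model instead:</p>
<pre class="lang-py prettyprint-override"><code>def make_layers(model_type):
    # fresh layer instances on every call, so built shapes cannot leak between runs
    if model_type == 'CNN_Basic':
        return [
            Conv2D(filters=8, kernel_size=3, activation='relu'),
            MaxPooling2D(2),
            Conv2D(filters=16, kernel_size=3, activation='relu'),
            MaxPooling2D(2),
            Conv2D(filters=32, kernel_size=3, activation='relu'),
            MaxPooling2D(2),
            Flatten(),
            Dense(units=128, activation='relu'),
        ]
    raise ValueError(model_type)

# then, inside get_model:  *make_layers(model_type),
</code></pre>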
|
<python><tensorflow><keras><gpu>
|
2023-01-12 16:36:39
| 1
| 771
|
Mateusz Dorobek
|
75,099,533
| 6,676,101
|
What would cause the inner contents of a `with` statement to not execute?
|
<p>When I run the following code, no error messages are printed, and it seems to fail silently. I expected the string <code>"BLAH"</code> to be printed to the console.</p>
<pre class="lang-python prettyprint-override"><code>from contextlib import redirect_stdout
import io # the initialism `io` represents `INPUT OUTPUT LIBRARY`
def run_solver():
print("solver is running")
with io.StringIO() as fake_std_out:
with redirect_stdout(fake_std_out):
print("BLAH") # THIS IS NEVER PRINTED
run_solver()
data = fake_std_out.getvalue()
print(data)
print("THE END")
</code></pre>
<p>The output I expect is:</p>
<pre class="lang-none prettyprint-override"><code>BLAH
solver is running
THE END
</code></pre>
<p>Instead, we have:</p>
<pre class="lang-none prettyprint-override"><code>THE END
</code></pre>
<h1>Edits</h1>
<ol>
<li>I realize now that I wanted to <em>copy</em> standard output, not re-direct it.</li>
<li>Using <code>print</code> to print the contents of the string stream won't work because the destination of the print function is now the string stream instead of the system console. After calling <code>getvalue()</code> it makes no sense to attempt to use print statements anymore.</li>
</ol>
|
<python><python-3.x><with-statement><silent>
|
2023-01-12 16:35:48
| 1
| 4,700
|
Toothpick Anemone
|
75,099,470
| 848,277
|
Getting current execution date in a task or asset in dagster
|
<p>Is there an easier way to get the current date in a <code>dagster</code> asset than what I'm currently doing?</p>
<pre><code>def current_dt():
return datetime.today().strftime('%Y-%m-%d')
@asset
def my_task(current_dt):
return current_dt
</code></pre>
<p>In <code>airflow</code> these are passed by default in the python callable function definition ex: <code>def my_task(ds, **kwargs):</code></p>
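<p>Two hedged notes: in the snippet above, naming the parameter <code>current_dt</code> makes dagster treat it as an <em>upstream asset</em> dependency rather than a plain argument, which is probably not intended; and computing the date inline avoids the helper entirely. A sketch:</p>
<pre><code>from datetime import datetime
from dagster import asset

@asset
def my_task():
    # no upstream dependency; the date is computed at materialization time
    return datetime.today().strftime('%Y-%m-%d')
</code></pre>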
|
<python><dagster>
|
2023-01-12 16:30:50
| 1
| 12,450
|
pyCthon
|
75,099,457
| 8,869,570
|
Is there a vectorized way to find overlap in intervals between 2 dataframes?
|
<p>I have 2 dataframes: <code>df1, df2</code>.
e.g.,</p>
<pre><code>df1:
id start end
1 0 15
1 15 30
1 30 45
2 0 15
2 15 30
2 30 45
df2 =
id start end
1 0 1.1
1 1.1 11.4
1 11.4 34
1 34 46
2 0 1.5
2 1.5 20
2 20 30
</code></pre>
<p>For each row in <code>df1</code>, I want to find the rows in <code>df2</code> such that <code>df1.id == df2.id</code> and there is overlap between the 2 intervals <code>[df1.start, df1.end]</code> and <code>[df2.start, df2.end]</code>. The overlap is defined by the condition: <code>df1.start <= df2.end AND df1.end > df2.start)</code></p>
<p>So for the above, the result should be</p>
<pre><code>[0, 1, 2]
[2]
[2, 3]
[4, 5]
[5, 6]
[6]
</code></pre>
<p>Is there a vectorized way to do this?</p>
<p>The output should be a dataframe (or some other structure) that has length <code>len(df1)</code> where each row are the indices in <code>df2</code> that fit the aforementioned constraints. <code>df2</code> is quite large, on the order of millions, and <code>df1</code> is on the order of 10s of the thousands, so there may be memory concerns. Technically, I only need 1 row at a time to process before moving on to the next row, so I don't necessarily need to store all the rows, but that may ruin the vectorization if I process a row at a time?</p>
<p>Ultimately, I want to use the output indices to compute weighted averages. <code>df2</code> has a 4th column, <code>quantity</code>, and I want to compute its weighted average where the weights are proportional to the amount of overlap. So technically I don't need to store the overlap indices, but I think the memory problem would still persist, since they will still be stored implicitly?</p>
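<p>One vectorized sketch is an inner merge on <code>id</code> followed by the overlap mask. Memory caveat: the merge materializes every same-<code>id</code> row pair, so with millions of rows in <code>df2</code> it may be safer to run this per <code>id</code> group:</p>
<pre><code>m = df1.reset_index().merge(df2.reset_index(), on='id', suffixes=('_1', '_2'))
hits = m[(m['start_1'] <= m['end_2']) & (m['end_1'] > m['start_2'])]
overlaps = hits.groupby('index_1')['index_2'].apply(list)  # one list per df1 row
</code></pre>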
|
<python><pandas><dataframe><vectorization>
|
2023-01-12 16:29:24
| 1
| 2,328
|
24n8
|
75,099,407
| 7,984,318
|
pandas how to check if the following 2 rows have the same column value
|
<p>I have a df:</p>
<pre><code>df = pd.DataFrame([[1,2], [9,0],[3,4], [3,4]], columns=list('AB'))
</code></pre>
<p>output:</p>
<pre><code> A B
0 1 2
1 9 0
2 3 4
3 3 4
</code></pre>
<p>The length of the df is always even. I need to process the rows in pairs: compare the first row with the second, the third with the fourth, ..., the (n-1)-th with the n-th.
My question is how to check whether the column values of each pair of rows are exactly the same.</p>
<p>For example, the first row and the second row are not the same, but the third row and the fourth row are the same.</p>
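<p>A sketch of one vectorized way: split the frame into even and odd rows, align them, and compare:</p>
<pre><code>left = df.iloc[::2].reset_index(drop=True)
right = df.iloc[1::2].reset_index(drop=True)
same = (left == right).all(axis=1)  # one boolean per pair: True where rows 2k and 2k+1 match
</code></pre>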
|
<python><pandas><dataframe>
|
2023-01-12 16:25:35
| 2
| 4,094
|
William
|
75,099,352
| 8,081,835
|
ReduceLRonPlateau keeps decreasing LR across multiple models
|
<p>I'm using <code>ReduceLROnPlateau</code> for multiple experiments, but I'm getting a lower and lower initial learning rate for each consecutive model run.</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.callbacks import ReduceLROnPlateau
for model in models:
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=10)
model.fit(dataset, epochs=10, validation_data=val_dataset, callbacks=reduce_lr)
</code></pre>
<p>The learning rate on the output log looks as follows:</p>
<pre><code>Model #1
Epoch 1 ... lr: 0.01
...
Epoch 21 ... lr: 0.005
Model #2
Epoch 1 ... lr: 0.005
...
Epoch 25 ... lr: 0.001
</code></pre>
<p>and so on (don't mind the exact numbers; I've simplified the output).</p>
<p>How do I tell the model or the callback to start from the same learning rate each time?</p>
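<p>The callback itself is re-created per loop, so what persists is the optimizer's learning-rate variable on reused models (or a shared optimizer). A sketch that resets the rate before each fit, assuming a TF2-style optimizer and that <code>0.01</code> is the intended starting value:</p>
<pre class="lang-py prettyprint-override"><code>initial_lr = 0.01  # assumption: your intended starting LR

for model in models:
    tf.keras.backend.set_value(model.optimizer.learning_rate, initial_lr)
    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=10)
    model.fit(dataset, epochs=10, validation_data=val_dataset, callbacks=[reduce_lr])
</code></pre>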
|
<python><tensorflow><keras><callback><learning-rate>
|
2023-01-12 16:21:20
| 0
| 771
|
Mateusz Dorobek
|
75,099,272
| 14,649,310
|
Drawbacks of executing code in an SQLAlchemy managed session and if so why?
|
<p>I have seen different "patterns" for handling this case, so I am wondering if one has any drawbacks compared to the other.
So let's assume that we wish to create a new object of class <code>MyClass</code> and add it to the database. We can do the following:</p>
<pre><code> class MyClass:
pass
def builder_method_for_myclass():
# A lot of code here..
return MyClass()
my_object=builder_method_for_myclass()
with db.managed_session() as s:
s.add(my_object)
</code></pre>
<p>which seems that only keeps the session open for adding the new object but I have also seen cases where the entire builder method is called and executed within the managed session like so:</p>
<pre><code> class MyClass:
pass
def builder_method_for_myclass():
# A lot of code here..
return MyClass()
with db.managed_session() as s:
my_object=builder_method_for_myclass()
</code></pre>
<p>Are there any downsides to either of these methods and, if yes, what are they? I can't find anything specific about this in the documentation.</p>
|
<python><sqlalchemy>
|
2023-01-12 16:14:53
| 1
| 4,999
|
KZiovas
|
75,099,182
| 20,025,773
|
Stable Diffusion Error: Couldn't install torch / No matching distribution found for torch==1.12.1+cu113
|
<p>I am trying to install Stable Diffusion locally. I follow the presented steps, but when I get to the last one, "run webui-user file", it opens the terminal and says "Press any key to continue...". If I do so, the terminal instantly closes.
I went to the SD folder, right-clicked, opened it in the terminal and used ./webui-user to run the file. The terminal no longer closes, but nothing is happening and I get those two errors:</p>
<p><code>Couldn't install torch</code>,</p>
<p><code>No matching distribution found for torch==1.12.1+cu113</code></p>
<p>I've researched online and I've tried installing the torch version from the error, also I tried <code>pip install --user pipenv==2022.1.8</code> but I get the same errors.</p>
|
<python><pytorch><stable-diffusion>
|
2023-01-12 16:07:58
| 4
| 404
|
Oliver
|
75,099,092
| 7,886,651
|
Where is it possible to find python documentation for training spacy model/pipelines
|
<p>I have been looking through the spacy documentation on training/fine-tuning spacy models or pipelines, however, after walking through the following guide <a href="https://spacy.io/usage/training" rel="nofollow noreferrer">https://spacy.io/usage/training</a> I found that it all comes down to creating and configuring a single file <code>config.cfg</code>. However, I didn't find a python guide to follow. Hence, I would like to know if this is the new norm, and if it is encouraged to follow this route rather than coding in python. Secondly, if there is a true python guide, I would love to get some references.</p>
<p>Thanks in advance!</p>
|
<python><spacy-3>
|
2023-01-12 16:00:21
| 1
| 2,312
|
I. A
|
75,098,999
| 9,354,364
|
Python beautifulsoup not able to extract a hyperlink from href tag
|
<p>I am trying to scrape the data from a website. It has an Excel sheet linked inside an <code>a</code> tag's <code>href</code>.
I have tried multiple ways using requests and BeautifulSoup, but I am not getting the link of the Excel sheet.</p>
<p>Website url - <a href="https://ppac.gov.in/prices/international-prices-of-crude-oil" rel="nofollow noreferrer">https://ppac.gov.in/prices/international-prices-of-crude-oil</a>
The item which I want to scrape is:</p>
<p><a href="https://i.sstatic.net/0rvpF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0rvpF.png" alt="enter image description here" /></a></p>
<p>After inspecting the element, I get the details below:
<a href="https://i.sstatic.net/Z08lz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z08lz.png" alt="enter image description here" /></a></p>
<p>I have tried the code below, but every time I try, I get all the links except this xlsx file.</p>
<pre><code>from bs4 import BeautifulSoup
import urllib
import re
import requests
url = "https://ppac.gov.in/prices/international-prices-of-crude-oil"
html_page = urllib.request.urlopen(url)
links = []
soup = BeautifulSoup(html_page, "html.parser")
for link in soup.findAll('a', attrs={'href': re.compile("^https://")}):
print(link.get('href'))
links.append(link.get('href'))
</code></pre>
<p>The output I get has all the links except the above-mentioned Excel file URL.
Can anyone help me get the URL? It changes daily, hence I need to scrape it by matching the href with an https or xlsx regex (I also tried <code>for link in soup.find_all(attrs={'href': re.compile("xlsx")})</code>).</p>
<p>Expected output is the url to excel file :- <a href="https://ppac.gov.in/uploads/reports/1673497201_english_1_Crude%20Oil%20FOB%20Price%20%28Indian%20Basket%29.xlsx" rel="nofollow noreferrer">https://ppac.gov.in/uploads/reports/1673497201_english_1_Crude%20Oil%20FOB%20Price%20%28Indian%20Basket%29.xlsx</a></p>
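<p>Before changing the regex, it is worth checking whether the link exists in the raw HTML at all; if the page injects it with JavaScript, <code>requests</code> will never see it (a hedged guess, since such download widgets are often rendered client-side), and the underlying API or a browser-driven tool would be needed. A minimal check:</p>
<pre><code>import requests

html = requests.get("https://ppac.gov.in/prices/international-prices-of-crude-oil").text
print("xlsx" in html)  # False suggests the link is added client-side
</code></pre>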
|
<python><web-scraping><beautifulsoup><python-requests>
|
2023-01-12 15:52:49
| 2
| 1,443
|
Hunaidkhan
|
75,098,860
| 5,595,377
|
Access to marshmallow field value
|
<p>I am just starting with marshmallow and I am trying to validate a field. My schema is very simple:</p>
<pre class="lang-py prettyprint-override"><code>class MySchema(Schema):
pid = fields.String(required=true)
visibility = fields.String(validate=OneOf(['public','private'])
@validates('visibility')
def visibility_changes(self, data, **kwargs):
# Load data from DB (based on ID)
db_record = load_data_from_db(self.pid) # <-- problem is here
# Check visibility changed
if db_record.get('visibility') != data:
do_some_check_here()
</code></pre>
<p>But using <code>self.pid</code> doesn't work. It raises an error: <code>AttributeError: 'MySchema' object has no attribute 'pid'</code>.
What's the correct way to access my "pid" field value inside my <code>@validates</code> function?</p>
<p>I tried using <code>self.fields</code>, <code>self.load_fields.get('pid').get_value()</code>, ... no easy way to access it, but I suppose that marshmallow has such a magic method.</p>
<p>Thanks for your help.</p>
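<p>A field-level <code>@validates</code> handler only receives that single field's value, and <code>self</code> is the schema instance, not the loaded data, so <code>self.pid</code> cannot exist. A schema-level validator sees the whole payload; a sketch reusing the question's own helpers:</p>
<pre class="lang-py prettyprint-override"><code>from marshmallow import Schema, fields, validates_schema
from marshmallow.validate import OneOf

class MySchema(Schema):
    pid = fields.String(required=True)
    visibility = fields.String(validate=OneOf(['public', 'private']))

    @validates_schema
    def visibility_changed(self, data, **kwargs):
        db_record = load_data_from_db(data['pid'])  # the whole payload is available here
        if db_record.get('visibility') != data.get('visibility'):
            do_some_check_here()
</code></pre>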
|
<python><marshmallow>
|
2023-01-12 15:42:02
| 1
| 389
|
Renaud Michotte
|
75,098,813
| 7,267,480
|
using scipy.optimize curve_fit to find parameters of a curve and getting 'Covariance of the parameters could not be estimated'
|
<p>I am trying to use scipy.optimize to fit experimental data and got:</p>
<pre><code>optimizeWarning: Covariance of the parameters could not be estimated
warnings.warn('Covariance of the parameters could not be estimated',
</code></pre>
<p>Here is the data I am trying to fit <strong>with exponential curve</strong>:</p>
<p><a href="https://i.sstatic.net/vt12d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vt12d.png" alt="enter image description here" /></a></p>
<p>here is the part of a code where I am trying to fit a data:</p>
<pre><code># using curve_fit
from scipy.optimize import curve_fit
# defining a function
# exponential curve
def _1_func(x, a0,b0,beta):
"""
calculates the exponential curve shifted by bo and scaled by a0
beta is exponential
"""
y = a0 * np.exp( beta * x ) + b0
return y
# the code to fit
# initial guess for exp fitting params
numpoints = spectrum_one.shape[0]
x = F[1:numpoints] # zero element is not used
y = np.absolute(spectrum_one[1:numpoints])/signal_size
# making an initial guess
a0 = 1
b0 = y.mean()
beta = -100
p0 = [a0, b0, beta]
popt, pcov = curve_fit(_1_func, x, y, p0=p0)
perr = np.sqrt(np.diag(pcov)) # errors
print('Popt')
print(popt)
print('Pcov')
print(pcov)
</code></pre>
<p><strong>UPDATE1:</strong>
The result is:</p>
<pre><code>Popt
[ 1.00000000e+00 7.80761109e-04 -1.00000000e+02]
Pcov
[[inf inf inf]
[inf inf inf]
[inf inf inf]]
</code></pre>
<p><strong>UPDATE 2</strong> - raw data for fitting is here in csv format: <a href="https://drive.google.com/file/d/1wUoS3Dq_3XdZwo3OMth4_PT-1xVJXdfy/view?usp=share_link" rel="nofollow noreferrer">https://drive.google.com/file/d/1wUoS3Dq_3XdZwo3OMth4_PT-1xVJXdfy/view?usp=share_link</a></p>
<p>As I understand it, pcov contains inf values; it shows that curve_fit can't calculate the covariance, and the popt parameters can't be used, as they are not optimal for this data.</p>
<p>If I visualize the data I have next results:</p>
<p><a href="https://i.sstatic.net/CIZP5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CIZP5.png" alt="enter image description here" /></a></p>
<p>Why am I getting this type of error?
(I thought it would be an easy task for curve_fit.)</p>
<p>Maybe I need to scale my data somehow?</p>
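<p>A plausible culprit, judging from the plot (an assumption, not a certainty): with x spanning large values and the guess <code>beta = -100</code>, <code>exp(beta * x)</code> underflows to zero for every point, the Jacobian vanishes, and the covariance cannot be estimated. Rescaling x so the exponent is O(1) often fixes this; a sketch:</p>
<pre><code>x_scaled = x / x.max()                      # exponent is now O(beta)
p0 = [y.max() - y.min(), y.min(), -5.0]     # assumption: a decaying curve
popt, pcov = curve_fit(_1_func, x_scaled, y, p0=p0, maxfev=10000)
a0, b0, beta_scaled = popt
beta = beta_scaled / x.max()                # map back to the original x units
</code></pre>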
|
<python><curve-fitting><scipy-optimize>
|
2023-01-12 15:37:40
| 1
| 496
|
twistfire
|
75,098,803
| 1,295,422
|
Robot 6 DOF pick object with camera
|
<p>I have a 6 DOF robotic arm (<a href="https://docs.niryo.com/dev/pyniryo2/v1.0.0/en/index.html" rel="nofollow noreferrer">Niryo Ned 2</a>) and a RGB-D camera (<a href="https://github.com/luxonis/depthai-python/blob/main/examples/SpatialDetection/spatial_tiny_yolo.py" rel="nofollow noreferrer">Depth AI OAK-D</a>).</p>
<p>My aim is to detect an object using a YOLO AI and pick it with the robotic arm.</p>
<p>So far, I've be able to:</p>
<ul>
<li>Detect the object and its location in space (relative to camera)</li>
<li>Converts coordinates to the end-effector base</li>
<li>Move the arm to that point</li>
</ul>
<p>Here is my code:</p>
<pre><code>def convert_to_arm_coordinates(self, object_pos_3d) -> np.array:
# Input distance is in milimeters while Niryo works in meters
object_pos_3d /= 1000.0
# Rotation matrix to rotate the coordinates from the camera frame to the robot frame
rotation_matrix = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, -1]])
# Translation vector to translate the coordinates from the camera frame to the robot frame
translation_vector = np.array([+0.004, -0.04, -0.085])
# Transform the coordinates using the rotation matrix
object_pos_robot = np.matmul(rotation_matrix, object_pos_3d + translation_vector)
return object_pos_robot
def loop_detections(detections):
for detection in detections:
label, x_ia, y_ia, z_ia = get_position(detection)
if int(x_ia) != 0 and int(y_ia) != 0 and int(z_ia) != 0:
if label == "label_to_grab":
print("[CAMERA] Grabing label {} ..".format(label), flush=True)
# Read detected position of the object from RGB-D camera
relative_pos = np.array([x_ia, y_ia, z_ia])
# Convert to robot coordinates system
world_pos = convert_to_arm_coordinates(relative_pos)
# Shift position
x, y, z, roll, pitch, yaw = robot.arm.get_pose().to_list()
print("[Niryo] Current pose ({}, {}, {})".format(x, y, z), flush=True)
x += world_pos[1]
y += world_pos[1]
z += world_pos[2]
x = max(-0.5, min(0.35, x))
y = max(-0.5, min(0.5, y))
z = max(0.17, min(0.6, z))
print("[DepthAI] {}: Detection ({}, {}, {}), World ({}, {}, {}), Shifted ({}, {}, {})".format(label, x_ia, y_ia, z_ia, world_pos[0], world_pos[1], world_pos[2], x, y, z), flush=True)
# Reach the coordinates and pick the object
robot.pick_place.pick_from_pose([x, y, z, roll, pitch, yaw])
# Move back to stand-by position
robot.arm.move_pose(robot.stand_by)
</code></pre>
<p>The robot moves correctly along the Y and Z axes but not along the X axis.
The robot falls 4 to 8 centimeters short on that axis. As the camera is not purely vertical, I am almost certain the AI-computed X is not correct.</p>
<p>Is there a way to compute X based on end effector pitch?</p>
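<p>One detail in the snippet that would produce exactly an X-only offset: <code>x += world_pos[1]</code> adds the Y component to X (index 1 instead of 0), so the converted X coordinate is never used. Worth fixing before investigating the camera tilt:</p>
<pre><code>x += world_pos[0]  # was world_pos[1] in the snippet above
y += world_pos[1]
z += world_pos[2]
</code></pre>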
|
<python><coordinates><robotics>
|
2023-01-12 15:36:56
| 1
| 8,732
|
Manitoba
|
75,098,734
| 12,985,993
|
Camelot pdf extraction has an issue while copying texts among span cells
|
<p>I am extracting data from PDFs using camelot and am faced with the following issue on the 3rd page of <a href="https://www.onsemi.com/pdf/datasheet/fdb9406_f085-d.pdf" rel="nofollow noreferrer">this</a> datasheet. The problematic table is shown below:</p>
<p><a href="https://i.sstatic.net/2NB9H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2NB9H.png" alt="The table with issue" /></a></p>
<p>The issue is inconsistency when copying the content of span cells. As you can see in the following picture, the span cells are correctly detected.</p>
<p><a href="https://i.sstatic.net/LPHWo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LPHWo.png" alt="The grid of the Table" /></a></p>
<p>Even though the cells are detected correctly, in the 3rd column the content is copied to only one of the two spanned cells, and in the 4th column it is copied to only two of the three spanned cells. You can see the data I extracted below; there is always one missing cell in each of these columns.</p>
<p><a href="https://i.sstatic.net/L6lDH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L6lDH.png" alt="The data I extracted from the table" /></a></p>
<p>And here is the code I used, if you want to try it out:</p>
<pre><code>table_areas=['86, 697, 529, 95'] # To ignore page borders
tables = camelot.read_pdf(single_source, pages='all',
flavor = 'lattice',
copy_text=['v'],
line_scale = 110,
table_regions=table_areas,
flag_size = False,
process_background=False)
</code></pre>
<p><strong>Code (Colab):</strong></p>
<pre><code>!pip install "camelot-py[cv]" -q
!pip install PyPDF2==2.12.1
!apt-get install ghostscript
</code></pre>
<pre><code>import camelot
import pandas as pd
from tabulate import tabulate
import re
import fitz
</code></pre>
<pre><code>single_source = '/content/FDB9406_F085-D.PDF'
print("Extracting ", single_source, "...")
table_areas=['86, 697, 529, 95']
tables = camelot.read_pdf(single_source, pages='all', flavor = 'lattice', copy_text=['v'], line_scale = 110, table_regions=table_areas, flag_size = False, process_background=False)
print("Extracting ", single_source, "is finished!")
</code></pre>
<p><strong>to visualize the tables:</strong></p>
<pre><code>for table in accurate_tables:
print(table.parsing_report, table.shape, table._bbox)
print(tabulate(table.df, headers='keys', tablefmt='psql'))
camelot.plot(table, kind='grid').show()
print("Extracting ", single_source, "is finished!")
</code></pre>
|
<python><pdf><python-camelot><pdf-extraction>
|
2023-01-12 15:32:07
| 0
| 498
|
Said Akyuz
|
75,098,696
| 13,827,112
|
Numpy loadtxt in python - except some colums
|
<p>Is it possible in np.loadtxt to load all columns except the first one, or except some particular ones, please?</p>
<p>What does usecols = (0,) mean, please?
Is it possible to use slices?</p>
<p>I have 55 columns, and I would like to load all except one. Is there a better way than writing <code>usecols = (1, 2, 3, 4, 5, 6, 7)</code> and continuing up to 55?</p>
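<p>For reference: <code>usecols=(0,)</code> means "load only column 0". Any iterable of indices is accepted, so a <code>range</code> can replace the long tuple (a sketch, with a placeholder filename):</p>
<pre><code>import numpy as np

data = np.loadtxt("data.txt", usecols=range(1, 55))  # all 55 columns except the first
</code></pre>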
|
<python><numpy><python-3.6>
|
2023-01-12 15:28:42
| 1
| 1,195
|
Elena Greg
|
75,098,621
| 12,568,761
|
How do I use torch.profiler.profile without a context manager?
|
<p>In the <a href="https://pytorch.org/docs/stable/autograd.html#profiler" rel="nofollow noreferrer">pytorch autograd profiler documentation</a>, it says that the profiler is a "Context manager that manages autograd profiler state and holds a summary of results." However, in a <a href="https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html#use-profiler-to-record-execution-events" rel="nofollow noreferrer">different part of the documentation</a> it demonstrates a non-context manager start/stop which it says is also supported. However, in torch 1.9.0 it appears this start/stop alternative has been removed:</p>
<pre><code>from torch.profiler import profile
prof = profile()
prof.start()
# --> AttributeError: 'profile' object has no attribute 'start'
</code></pre>
<p>I have looked into step() instead, but that also does not work (it does not initialize the profiler).</p>
<p>The use case is that I would like to profile the training run without needing to edit the code which actually calls the training script: I have access to the state before and after, but not the exact training script.<br />
Is this possible?</p>
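<p>If the installed version really lacks <code>start()</code>/<code>stop()</code>, one workaround (a sketch relying only on the context-manager protocol the class must implement) is to drive <code>__enter__</code>/<code>__exit__</code> from your before/after hooks:</p>
<pre><code>from torch.profiler import profile

prof = profile()
prof.__enter__()                  # what `with profile() as prof:` would call

# ... the training script runs here, outside your control ...

prof.__exit__(None, None, None)   # finalizes the profiling session
print(prof.key_averages().table())
</code></pre>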
|
<python><pytorch><profiler><torchvision><autograd>
|
2023-01-12 15:22:07
| 2
| 550
|
tiberius
|
75,098,618
| 12,596,824
|
Recursive Function to get all files from main folder and subdirectories inside it in Python
|
<p>I have a file directory that looks something like this. I have a larger directory, but showing this one just for explanation purposes:</p>
<pre><code>.
├── a.txt
├── b.txt
├── foo
│ └── w.txt
│ └── a.txt
└── moo
└── cool.csv
└── bad.csv
└── more
└── wow.csv
</code></pre>
<p>I want to write a recursive function to get year counts for files within each subdirectory within this directory.</p>
<p>I want the code to basically check if it's a directory or file. If it's a directory then I want to call the function again and get counts until there's no more subdirectories.</p>
<p>I have the following code (which keeps breaking my kernel when I test it). There's probably some logic error as well I would think..</p>
<pre><code>import os
import pandas as pd
dir_path = 'S:\\Test'
def getFiles(dir_path):
contents = os.listdir(dir_path)
# check if content is directory or not
for file in contents:
if os.path.isdir(os.path.join(dir_path, file)):
# get everything inside subdirectory
getFiles(dir_path = os.path.join(dir_path, file))
# it's a file
else:
# do something to get the year of the file and put it in a list or something
# at the end create pandas data frame and return
</code></pre>
<p>Expected output would be a pandas dataframe that looks something like this..</p>
<pre><code>Subdir 2020 2021 2022
foo 0 1 1
moo 0 2 0
more 1 0 0
</code></pre>
<p>How can I do this in Python?</p>
<p><strong>EDIT:</strong></p>
<p>Just realized os.walk() is probably extremely useful for my case here.</p>
<p>Trying to figure out a solution with os.walk() instead of doing it the long way..</p>
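<p>A sketch with <code>os.walk()</code>; "year of the file" is assumed here to be the modification year, so swap in whatever you actually extract per file:</p>
<pre><code>import os
import pandas as pd

rows = []
for root, dirs, files in os.walk(dir_path):
    for name in files:
        path = os.path.join(root, name)
        year = pd.Timestamp(os.path.getmtime(path), unit='s').year
        rows.append({'Subdir': os.path.basename(root), 'year': year})

counts = (pd.DataFrame(rows)
          .groupby(['Subdir', 'year']).size()
          .unstack(fill_value=0))
</code></pre>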
|
<python><pandas><operating-system>
|
2023-01-12 15:22:05
| 0
| 1,937
|
Eisen
|
75,098,522
| 15,112,773
|
tokenize apostrophe and dash python
|
<p>I'm tokenizing sentences with the following function using regular expressions.</p>
<pre><code>def tokenize2(text):
if text is None: return ''
if len(text) == 0: return text
# 1. Delete urls
text = re.sub(
r"((https?|ftps?|file)?:\/\/)?(?:[\w\d!#$&'()*\+,:;=?@[\]\-_.~]|(?:%[0-9a-fA-F][0-9a-fA-F]))+" +
"\.([\w\d]{2,6})(\/(?:[\w\d!#$&'()*\+,:;=?@[\]\-_.~]|(?:%[0-9a-fA-F][0-9a-fA-F]))+)*",
'', text)
# 2. Delete #hashtags and @usernames
text = re.sub(r'[@#]\w+', '', text)
# 3. Replace apostrophe to space
    text = re.sub(r"’", " ", text)
# 4. Delete ip, non informative digits
text = re.sub(r'\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b', '', text)
# 5. RT
text = re.sub(r'RT\s*:', '', text)
# 6. Tokenize
tokens = re.findall(r'''(?:\w\.)+\w\.?|\w{3,}-?\w*-*\w*''', text)
tokens = [w.lower() for w in tokens]
return tokens
</code></pre>
<p>The problem is that it removes words less than length three before the apostrophe and dash. For example for French words</p>
<pre><code>text = "L’apostrophe, l’île, l’histoire, s’il, qu’elle, t’aime, s’appelle, n’oublie pas, jusqu’à, d’une. Apporte-m’en un. Allez–y! Regarde–le!"
tokenize2(text)
['apostrophe',
'île',
'histoire',
'elle',
'aime',
'appelle',
'oublie',
'pas',
'jusqu',
'une',
'apporte-m',
'allez',
'regarde']
</code></pre>
<p>If the word length is greater than three, then the function works fine. For example</p>
<pre><code>text = "to cold-shoulder"
tokenize2(text)
['cold-shoulder']
</code></pre>
<p>That is, I need to remove words shorter than length 3, but if they occur in a combination with an apostrophe or a dash, then they should not be removed.</p>
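<p>A sketch of steps 3 and 6 adjusted so short pieces survive only when attached with an apostrophe or dash; note it assumes the typographic <code>’</code> is normalized to <code>'</code> instead of being replaced by a space, so the join survives into tokenization:</p>
<pre><code># step 3, adjusted: keep the apostrophe instead of dropping it
text = re.sub(r"’", "'", text)

# step 6, adjusted: 'x-y' / "x'y" combos of any length, else words of length >= 3
tokens = re.findall(r"\w+(?:['-]\w+)+|\w{3,}", text)
# "l'île" and "apporte-m'en" are kept whole; a bare "to" is dropped
</code></pre>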
|
<python><regex><tokenize>
|
2023-01-12 15:15:09
| 1
| 383
|
Rory
|
75,098,471
| 8,841,193
|
Extract a Word table from multiple docx files using python docx
|
<p>I have quite a few Word files that have the same table structure, which I need to extract and save into a csv/excel as a separate sheet (in .xls) for each word.docx.</p>
<p>The code below only extracts the first table and doesn't loop through the whole docx. Is there a way to loop through the entire document and all the files in the folder?</p>
<pre><code>import os
from docx import Document
import pandas as pd
folder = 'C:/Users/trans/downloads/test'
file_names = [f for f in os.listdir(folder) if f.endswith(".docx") ]
file_names = [os.path.join(folder, file) for file in file_names]
print(file_names)
tables = []
for file in file_names:
document = Document(file)
for table in document.tables:
df = [['' for i in range(len(table.columns))] for j in range(len(table.rows))]
for i, row in enumerate(table.rows):
for j, cell in enumerate(row.cells):
if cell.text:
df[i][j] = cell.text
tables.append(pd.DataFrame(df))
print(df)
for nr, i in enumerate(tables):
i.to_csv('C:/Users/trans/downloads/test/'"table_" + str(nr) + ".csv")
</code></pre>
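<p>For the "separate sheet per document" part, a sketch with <code>pd.ExcelWriter</code> (it assumes, for brevity, one table per file; with several tables per file you would add a per-table suffix to the sheet name):</p>
<pre><code>with pd.ExcelWriter('C:/Users/trans/downloads/test/tables.xlsx') as writer:
    for file, df in zip(file_names, tables):
        sheet = os.path.splitext(os.path.basename(file))[0][:31]  # Excel caps sheet names at 31 chars
        df.to_excel(writer, sheet_name=sheet, index=False)
</code></pre>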
|
<python><python-3.x><pandas><pip><python-docx>
|
2023-01-12 15:11:05
| 1
| 305
|
sunny babau
|
75,098,403
| 10,866,873
|
Python Mutli-Language XMLs as string resources
|
<p>I want to implement language XMLs in my project and change all hard-coded strings into language.xml references, based on the way Android uses string resources. (I have not found anything that does this.)</p>
<p>en_gb.xml:</p>
<pre class="lang-xml prettyprint-override"><code><resource>
<section1>
<string name="hello">Hello</string>
<string name="bye">Goodbye</string>
</section1>
<section2>
<string name="world">World</string>
<string name="end">!</string>
</section2>
</resource>
</code></pre>
<p>jp_tr.xml</p>
<pre class="lang-xml prettyprint-override"><code><resource>
<section1>
<string name="hello">こんにちは</string>
<string name="bye">さようなら</string>
</section1>
<section2>
<string name="world">世界</string>
<string name="end">!</string>
</section2>
</resource>
</code></pre>
<p><strong>Not using this anymore</strong>; see the Edit below.
Using ElementTree and exec() I can build classes based on these files:</p>
<pre class="lang-py prettyprint-override"><code>class en_gb:
tr = ET.parse(r'.\assets\local\en_gb.xml')
for rs in tr.getroot():
exec(f'class {rs.tag}:pass')
for c in rs:
exec(f'{rs.tag}.{c.attrib["name"]}=str("{c.text}")')
</code></pre>
<p>This creates a structure of the form 'lang.section.strname', which I can then use in code:</p>
<pre class="lang-py prettyprint-override"><code>from localization import en_gb, jp_tr
lang = en_gb
print(lang.section1.hello) #> Hello
lang = jp_tr
print(lang.section2.world) #> 世界
</code></pre>
<p>Now I want to programmatically create all the base classes to automatically build the structure. However, I cannot find a method to add a class to another class programmatically and construct it in the same way.</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET
import os
from pathlib import Path
for lang in os.listdir('.\\assets\local\\'):
lang = Path(language).stem
exec(f'class {lang}:pass')
for rs in ET.parse(fr'.\assets\local\{lang}.xml').getroot():
exec(f'class{lang}.{rs.tag} = class {rs.tag}:pass')
for c in rs:
exec(f'{rs.tag}.{c.attrib["name"]}=str("{c.text}")')
</code></pre>
<p>However, <code>en_gb.base = class base:pass</code> is not valid syntax, so I'm stuck.</p>
<p>After creation here is how it should look (as in the raw code):</p>
<pre class="lang-py prettyprint-override"><code>class en_gb:
class section1:
hello = "Hello"
bye = "Goodbye"
class section2:
world = "World"
end = "!"
class jp_tr:
class section1:
hello = "こんにちは"
bye = "さようなら"
class section2:
world = "世界"
end = "!"
</code></pre>
<p>Edit:
I have replaced the previous method with SimpleNamespace</p>
<pre class="lang-py prettyprint-override"><code>class local(object):
def __init__(self, lang="en_gb"):
l = localization()
self = getattr(l.langs, lang)
def set_lang(self, lang:str):
self = getattr(l.langs, lang)
class localization:
def __init__(self, lang="en_gb"):
langs = {}
for language in list(Path('./assets/local/').glob('*.xml')):
l_name = Path(language).stem
tree = ElementTree.parse(f".\\assets\\local\\{l_name}.xml")
d = {}
for child in tree.getroot():
tmp = {}
for c in child:
tmp[c.attrib['name']] = c.text
d[child.tag] = SimpleNamespace(**tmp)
langs[l_name] = SimpleNamespace(**d)
self.langs = SimpleNamespace(**langs)
def __get__(self):
print("getting")
return self.langs
</code></pre>
<p>However, I cannot change the language using the <code>local</code> class.</p>
<pre><code>l = localization()
print(l.langs.en_gb.section1.hello) ##this works
lg = local()
print(lg.section1.hello) ##this doesnt work
#this should be able to change on the fly
lg.set_lang('jp_tr')
print(lg.section1.hello) ##should now be こんにちは
</code></pre>
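<p>The reason <code>local</code> never takes effect: assigning to <code>self</code> inside a method only rebinds the local variable; it does not mutate the instance. A sketch that stores the active namespace on an attribute and delegates lookups instead:</p>
<pre class="lang-py prettyprint-override"><code>class Local:
    def __init__(self, lang="en_gb"):
        self._all = localization().langs
        self._lang = getattr(self._all, lang)

    def set_lang(self, lang: str):
        self._lang = getattr(self._all, lang)

    def __getattr__(self, name):
        # lg.section1.hello -> the active language's section1.hello
        return getattr(self._lang, name)
</code></pre>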
|
<python><python-3.x><python-class>
|
2023-01-12 15:06:27
| 1
| 426
|
Scott Paterson
|
75,098,321
| 10,260,806
|
Anyone know why my .bash_profile is adding "export pyenv" everytime i open terminal?
|
<p>I had an issue where VS Code was loading the terminal with a blank screen, and I got an error message in VS Code saying "Unable to resolve your shell environment".</p>
<p>So I decided to check my <code>.bash_profile</code> file and was surprised to find it was over 700 lines, where it was mainly just the following code repeated:</p>
<pre><code>export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
export PYENV_ROOT="$HOME/.pyenv"
command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"
</code></pre>
<p>I deleted the file and reopened the terminal, and realised that every time I open the terminal, it adds the following lines:</p>
<pre><code>export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
export PYENV_ROOT="$HOME/.pyenv"
command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"
</code></pre>
<p>I installed pyenv straightforwardly, following guides online, so I'm not sure what I'm doing wrong.</p>
<p>If I delete <code>.bash_profile</code> and then reopen the terminal, it recreates the <code>.bash_profile</code> and adds the following code.</p>
<pre><code>export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
</code></pre>
<p>Anyone have any ideas how to fix this?</p>
<p>Note: I also have a <code>.zshrc</code> with the following exports, which work as intended:</p>
<pre><code>export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
export PATH=${PATH}:/usr/local/mysql/bin/
export PATH="$PATH:/Applications/Visual Studio Code.app/Contents/Resources/app/bin"
</code></pre>
|
<python><macos><pyenv>
|
2023-01-12 15:01:34
| 1
| 982
|
RedRum
|
75,098,058
| 3,816,498
|
Gunicorn preload config parameter not working
|
<p>My <code>config.py</code> file for gunicorn looks like this:</p>
<pre><code>preload = True
loglevel = "debug"
</code></pre>
<p>I run gunicorn with the following command:</p>
<pre><code>gunicorn -c config.py --bind 0.0.0.0:1234 app.index:server
</code></pre>
<p>The log looks like this:</p>
<pre><code>service | preload: False
</code></pre>
<p><strong>Why is the preload parameter not showing up in the config print out when starting?</strong></p>
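<p>One thing worth checking: in a gunicorn config file the setting is named <code>preload_app</code> (the CLI flag is <code>--preload</code>); an unrecognized name such as <code>preload</code> is simply ignored, leaving the default <code>False</code> in the log. A sketch of the corrected config:</p>
<pre><code>preload_app = True   # config-file spelling of the --preload flag
loglevel = "debug"
</code></pre>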
|
<python><gunicorn>
|
2023-01-12 14:44:22
| 1
| 1,383
|
felice
|
75,098,036
| 10,286,813
|
Remove elements stored as a list in a dataframe column from list structures and convert to a string
|
<p>Is it possible to take the comma-separated date elements under the column <code>df['date']</code> out of the list structure and store them as a string instead?
Example dataframe:</p>
<pre><code>df=pd.DataFrame({'date':[['2022-06-24'],['2021-07-07','2021-07-14'],\
['2021-08-11','2021-12-17','2021-09-14','2022-02-15'],\
['2019-08-19','2019-09-25'],\
['2013-05-16']]})
</code></pre>
<p>Output should look like this:</p>
<pre><code>2022-06-24
2021-07-07,2021-07-14
2021-08-11,2021-12-17,2021-09-14,2022-02-15
2019-08-19,2019-09-25
2013-05-16
</code></pre>
<p>I tried:</p>
<pre><code>df['date_2'] = [','.join(map(str, l)) for l in df['date']]
</code></pre>
<p>but I am not getting the desired output.</p>
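<p>If the column really holds Python lists (and not strings that merely look like lists), the vectorized string accessor joins them directly; a sketch:</p>
<pre><code>df['date'] = df['date'].str.join(',')
# if the cells are strings like "['2022-06-24']", parse them first, e.g. with ast.literal_eval
</code></pre>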
|
<python><pandas><list><dataframe>
|
2023-01-12 14:42:48
| 1
| 1,049
|
Nev1111
|
75,097,671
| 6,296,626
|
Draw a text in the correct "pieslice" in a circle using PIL
|
<p>I was able to generate using Python <a href="https://pillow.readthedocs.io/en/stable/" rel="nofollow noreferrer">PIL</a> library the following wheel with colored segments:</p>
<p><a href="https://i.sstatic.net/h6ukn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h6ukn.png" alt="enter image description here" /></a></p>
<p>However, due to my limited math skills, I wasn't able to come up with the calculation to place the text at the correct <code>xy</code> location. I am trying to place the text in the center of the slice, near the edge of the circle (around <code>20</code> pixels from the edge).</p>
<p>My attempt (code snippet from a for loop that generates each slice):</p>
<pre class="lang-py prettyprint-override"><code>draw.pieslice(wheel_geometry, degree_1, degree_2, fill=color, outline="black", width=3)
draw.text(
xy=(
wheel_size/2 + (wheel_radius-20) * math.sin(math.radians(degree_1 + 5) + slice_degree/2),
wheel_size/2 + (wheel_radius-20) * math.cos(math.radians(degree_1 + 5) + slice_degree/2)
),
text=str(label),
fill="white"
)
</code></pre>
<p>However, as seen in the picture, the labels are in the wrong position.</p>
<pre><code>(1, 0xcc0011), # red
(2, 0xeeaa00), # yellow
(3, 0x10aded) # light blue
</code></pre>
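<p>A sketch of the placement math, assuming Pillow's convention for <code>pieslice</code> (angles measured clockwise from 3 o'clock, with the image y-axis pointing down, so plain cos/sin works without flipping):</p>
<pre class="lang-py prettyprint-override"><code>import math

mid_angle = math.radians((degree_1 + degree_2) / 2)  # middle of the slice
r = wheel_radius - 20

draw.text(
    xy=(wheel_size / 2 + r * math.cos(mid_angle),
        wheel_size / 2 + r * math.sin(mid_angle)),
    text=str(label),
    fill="white",
    anchor="mm",  # center the text on the point (Pillow >= 8.0)
)
</code></pre>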
|
<python><math><geometry><python-imaging-library><trigonometry>
|
2023-01-12 14:15:17
| 1
| 1,479
|
Programer Beginner
|
75,097,652
| 506,824
|
How to unit-test a Flask-Caching memoized function?
|
<p>I'm using:</p>
<pre><code>Python 3.9.13
pytest==7.2.0
pytest-mock==3.10.0
Flask==2.2.2
Flask-Caching==2.0.1
</code></pre>
<p>This is my class being tested:</p>
<pre><code>@dataclass(frozen=True)
class NominationData:
title: str
url: str
is_approved: bool
articles: list[ArticleData]
hooks: list[HookData]
@staticmethod
@cache.memoize(timeout=30)
def from_nomination(nomination):
"""Construct a NominationData from a dyk_tools.Nomination"""
return NominationData(
nomination.title(),
nomination.url(),
nomination.is_approved(),
[ArticleData.from_article(a) for a in nomination.articles()],
[HookData.from_hook(h) for h in nomination.hooks()],
)
</code></pre>
<p>I've got a unit test for <code>NominationData.from_nomination()</code> which previously worked in a non-memoized version. Now that I've added the <code>@cache.memoize()</code> decoration, the test crashes in <code>flask_caching/__init__.py:870</code> with <code>AttributeError: 'Cache' object has no attribute 'app'</code>. It's obvious what the problem is: I don't have a valid application context in <code>flask.current_app</code>. The question is, what's the best way to fix that?</p>
<p>One possibility would be to patch flask.current_app with a mock <code>AppContext</code>, but I'm hesitant to go down that path. Another possibility would be to split out the memoization into a distinct shim:</p>
<pre><code>@staticmethod
@cache.memoize(timeout=30)
def from_nomination(nomination):
return inner_from_nomination(nomination)
</code></pre>
<p>and then just call <code>inner_from_nomination()</code> in my unit test. That should work, but just feels wrong. Or maybe there's some cleaner way entirely? How would you do this?</p>
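<p>One clean option (a sketch, assuming <code>cache</code> is the module-level <code>Cache</code> instance the decorator uses): give the tests a real app context with a no-op cache backend, so <code>memoize</code> runs but stores nothing:</p>
<pre><code>import pytest
from flask import Flask

@pytest.fixture(autouse=True)
def app_ctx():
    app = Flask(__name__)
    cache.init_app(app, config={"CACHE_TYPE": "NullCache"})  # no-op backend for tests
    with app.app_context():
        yield
</code></pre>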
|
<python><flask><pytest><flask-caching>
|
2023-01-12 14:14:10
| 1
| 2,177
|
Roy Smith
|