| QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
75,022,951
| 3,656,056
|
Storing yaml into AWS SSM store
|
<p>I have the following YAML file (sample content in the section below) which I would like to store in AWS SSM. Each property in the YAML should be kept as a key/value pair in the AWS parameter store.</p>
<pre><code> doe: "a deer, a female deer"
ray: "a drop of golden sun"
pi: 3.14159
xmas: true
french-hens: 3
calling-birds:
- huey
- dewey
- louie
- fred
xmas-fifth-day:
calling-birds: four
french-hens: 3
golden-rings: 5
partridges:
count: 1
location: "a pear tree"
turtle-doves: two
</code></pre>
<p>In SSM it should be stored as,</p>
<pre><code>name=xmas-fifth-day/partridges/count
value=1
name=xmas-fifth-day/partridges/location
value=a pear tree
etc etc.
</code></pre>
<p>There is a library for <a href="https://github.com/b-b3rn4rd/json2ssm" rel="nofollow noreferrer">json2ssm</a>; it would be great if there were one for yaml2ssm, or if this could be achieved with an AWS CLI command I would be exceptionally grateful.</p>
<p>AWS SSM <code>put-parameter</code> supports <code>yaml-input</code>, but that YAML format is different from the one I have. If converting my YAML to a different format is the only solution, is there a simple way to convert it? Is there any Python/JS/Java library that converts the format?</p>
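<p>For what it's worth, a minimal sketch of the kind of flattening described above, assuming PyYAML is available; the file name and the boto3 call shown in the comment are only illustrative:</p>
<pre class="lang-py prettyprint-override"><code>import yaml

def flatten(node, prefix=""):
    """Recursively flatten nested mappings into ('path/to/key', value) pairs."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from flatten(value, f"{prefix}/{key}" if prefix else key)
    elif isinstance(node, list):
        # lists could be joined into one value, or enumerated; joining is one option
        yield prefix, ",".join(str(item) for item in node)
    else:
        yield prefix, node

with open("params.yaml") as fh:          # illustrative file name
    data = yaml.safe_load(fh)

for name, value in flatten(data):
    print(name, "=", value)
    # e.g. with boto3: ssm_client.put_parameter(Name=f"/{name}", Value=str(value),
    #                                           Type="String", Overwrite=True)
</code></pre>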
|
<python><amazon-web-services><yaml><aws-ssm>
|
2023-01-05 18:37:08
| 1
| 3,973
|
VelNaga
|
75,022,838
| 12,014,637
|
NotImplementedError: Cannot convert a symbolic tf.Tensor to a numpy array
|
<p>I have a tensorflow custom layer that I want to use with functional API for training. But I get a weird error that I don't know how to fix. Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>import math
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
def scaled_dot_product_attention(query, key, value, mask=None):
key_dim = tf.cast(tf.shape(key)[-1], tf.float32)
scaled_scores = tf.matmul(query, key, transpose_b=True) / np.sqrt(key_dim)
if mask is not None:
scaled_scores = tf.where(mask==0, -np.inf, scaled_scores)
softmax = tf.keras.layers.Softmax()
weights = softmax(scaled_scores)
return tf.matmul(weights, value), weights
class MultiHeadSelfAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads):
super(MultiHeadSelfAttention, self).__init__()
self.d_model = d_model
self.num_heads = num_heads
self.d_head = self.d_model // self.num_heads
self.wq = tf.keras.layers.Dense(self.d_model)
self.wk = tf.keras.layers.Dense(self.d_model)
self.wv = tf.keras.layers.Dense(self.d_model)
# Linear layer to generate the final output.
self.dense = tf.keras.layers.Dense(self.d_model)
def split_heads(self, x):
split_inputs = tf.reshape(x, (-1, self.num_heads, self.d_head))
return tf.transpose(split_inputs, perm=[1,0,2])
def merge_heads(self, x):
merged_inputs = tf.transpose(x, perm=[1,0,2])
return tf.reshape(merged_inputs, (-1, self.d_model))
def call(self, q, k, v, mask):
qs = self.wq(q)
ks = self.wk(k)
vs = self.wv(v)
qs = self.split_heads(qs)
ks = self.split_heads(ks)
vs = self.split_heads(vs)
output, attn_weights = scaled_dot_product_attention(qs, ks, vs, mask)
output = self.merge_heads(output)
return self.dense(output)
# -------- Model Building -----------
values = Input(shape=(3,12))
values_emb = MultiHeadSelfAttention(12, 3)(values,values,values, mask = None)
model = Model(values, values_emb)
model.summary()
</code></pre>
<p>The <code>MultiHeadSelfAttention</code> class successfully works if you give it a random input and it generates an output. But it throws an error when you want to put it in a model. The error is:</p>
<pre><code>NotImplementedError: Cannot convert a symbolic tf.Tensor (multi_head_self_attention_4/Cast:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.
</code></pre>
<p>Does anyone know how to fix this issue?</p>
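<p>As a point of comparison, here is a sketch of the same score computation written with TensorFlow ops only (avoiding <code>np.sqrt</code> and <code>np.inf</code> on symbolic tensors), which is one way this kind of NumPy-on-tensor error is usually avoided; it is not necessarily the only change needed:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

def scaled_dot_product_attention(query, key, value, mask=None):
    key_dim = tf.cast(tf.shape(key)[-1], tf.float32)
    # tf.math.sqrt keeps the computation symbolic instead of handing the tensor to NumPy
    scaled_scores = tf.matmul(query, key, transpose_b=True) / tf.math.sqrt(key_dim)
    if mask is not None:
        # a large negative constant instead of np.inf
        neg_inf = tf.constant(-1e9, dtype=scaled_scores.dtype)
        scaled_scores = tf.where(mask == 0, neg_inf, scaled_scores)
    weights = tf.nn.softmax(scaled_scores, axis=-1)
    return tf.matmul(weights, value), weights
</code></pre>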
|
<python><tensorflow><keras><deep-learning>
|
2023-01-05 18:26:49
| 0
| 618
|
Amin Shn
|
75,022,789
| 491,894
|
I've loaded a yaml file with `!ENV SOME_VAR` and replaced the string with the value. How do I save the original string and not the changed string?
|
<p>I'm using python 3.x and pyyaml. I'm not married to pyyaml if I need to replace it.</p>
<p>There are a number of questions (with answers) on how to replace a value in a yaml file with the value of an environment variable.</p>
<p>E.g. <code>db_password: !ENV DB_PASSWORD</code> becomes <code>db_password: s00p3rs3kr3t</code>.</p>
<p>The user and the program can make changes to other values (e.g., user sets db_table with cli option, program sets generated hash value).</p>
<p>I want to save those changes without saving the value of the environment variable for <code>db_password</code>.</p>
<p>A simplified example of what I have looks like the following code.</p>
<pre class="lang-py prettyprint-override"><code>def my_regex:
return regex
def resolve_env_vars(loader, node):
# replace string with environment variable value
loader = yaml.SafeLoader
loader.add_implicit_resolver('!ENV', my_regex(), None)
loader.add_constructor('!ENV', resolve_env_vars)
with open(yamlfile, 'r',) as raw:
cfg = yaml.load(raw, Loader=loader)
</code></pre>
<p>While this works fine for loading the value into the resulting dict, I need to figure out some way of noting the original value and which key it goes with.</p>
<p>I have stepped through the entire process with <code>pudb</code> and I cannot find a way to restore the original value when writing the config file. By the time the code gets to <code>resolve_env_vars</code> the associated key (e.g., <code>db_password</code> in the example above) is not accessible.</p>
<p>How do I save <code>db_password: !ENV DB_PASSWORD</code> instead of <code>db_password: s00p3rs3kret</code> when writing the data back to the config file?</p>
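<p>One direction that might be worth sketching (assuming PyYAML, and leaving the regex/implicit-resolver details as in the loader above): resolve the variable into a <code>str</code> subclass that remembers the original scalar text, and register a representer so that dumping writes the <code>!ENV</code> form back out instead of the secret. The file name below is illustrative.</p>
<pre class="lang-py prettyprint-override"><code>import os
import yaml

class EnvStr(str):
    """Resolved value that remembers the original '!ENV VAR' scalar."""
    def __new__(cls, resolved, original):
        obj = super().__new__(cls, resolved)
        obj.original = original
        return obj

def resolve_env_vars(loader, node):
    raw = loader.construct_scalar(node)            # e.g. 'DB_PASSWORD'
    return EnvStr(os.environ.get(raw, raw), raw)

def represent_env_str(dumper, data):
    # dump the original text with the !ENV tag instead of the resolved secret
    return dumper.represent_scalar('!ENV', data.original)

yaml.SafeLoader.add_constructor('!ENV', resolve_env_vars)
yaml.SafeDumper.add_representer(EnvStr, represent_env_str)

with open('config.yaml') as fh:                    # illustrative file name
    cfg = yaml.load(fh, Loader=yaml.SafeLoader)

print(yaml.dump(cfg, Dumper=yaml.SafeDumper))      # db_password: !ENV DB_PASSWORD
</code></pre>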
|
<python><yaml>
|
2023-01-05 18:22:54
| 1
| 1,304
|
harleypig
|
75,022,784
| 8,968,801
|
Dynamically Setting the Output Type of Python Function
|
<p>Okay, I have a set of dictionaries that contain a key/ID and a function. Said dictionaries are used as identifiers that point to a specific cache key and function to update said cache key's associated data.</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict, List
class CacheIdentifier(TypedDict):
key: str
func: Callable
def function_to_fetch_cache_data() -> List[str]:
return ["a", "b", "c"]
IDENTIFIER: CacheIdentifier = {
"key": "my_cache_key",
"func": function_to_fetch_cache_data
}
</code></pre>
<p>Now, the thing is, I have a function called <code>load_or_set_cache</code> which takes an identifier object and, like the name says, checks if there's data associated with a cache key and fetches the data associated with the cache key if it exists, otherwise it uses the <code>func</code> argument of the <code>CacheIdentifier</code> object provided to fetch new data.</p>
<pre class="lang-py prettyprint-override"><code>def load_or_set_cache(identifier: CacheIdentifier) -> Any:
# Logic to check if the data exists in the cache or not
if not cached:
cache_data = identifier["func"]()
cache_key = identifier["key"]
cache.set(cache_key, cache_data, TTL = 3600)
else:
cache_key = identifier["key"]
cache_data = cache.get(cache_key)
return cache_data
cache_data = load_or_set_cache(IDENTIFIER)
</code></pre>
<p>The thing is, the <code>load_or_set_cache</code> function returns the data that was fetched and stored in the cache, but as you can expect, the type of that data varies depending on the return type of each identifier's function. In my example above, if <code>function_to_fetch_cache_data</code> has a return type of <code>List[str]</code>, then the <code>load_or_set_cache</code> function will have the same return type, causing <code>cache_data</code> to have a <code>List[str]</code> type.</p>
<p>Currently the output type of the <code>load_or_set_cache</code> function is just set to <code>Any</code>, is there any way I could dynamically change the output type of the function depending on the output type of the associated <code>func</code> argument found in each cache identifier?</p>
<p>I've tried playing with TypeVars but don't feel like they really suit what I want to do.</p>
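<p>For context, a sketch of the usual pattern for this; note it swaps the <code>TypedDict</code> for a generic dataclass, since <code>TypedDict</code> cannot be made generic before Python 3.11. The idea is to parameterise the identifier with a <code>TypeVar</code> so the return type follows <code>func</code>:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Callable, Generic, List, TypeVar

T = TypeVar("T")

@dataclass
class CacheIdentifier(Generic[T]):
    key: str
    func: Callable[[], T]

def load_or_set_cache(identifier: CacheIdentifier[T]) -> T:
    # same cache lookup/set logic as above; only the annotations change
    return identifier.func()

def function_to_fetch_cache_data() -> List[str]:
    return ["a", "b", "c"]

IDENTIFIER = CacheIdentifier(key="my_cache_key", func=function_to_fetch_cache_data)
cache_data = load_or_set_cache(IDENTIFIER)   # revealed as List[str] by a type checker
</code></pre>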
|
<python><python-typing>
|
2023-01-05 18:22:05
| 1
| 823
|
Eddysanoli
|
75,022,631
| 1,877,527
|
Add a grouping level to an existing Pandas groupby object
|
<p>Let's say I have a frame with a dozen columns, <code>col1</code> through <code>col12</code>. This frame is large, so I want to minimize time spent in grouping.</p>
<p>So, I'd like to first group by 3 columns then by a 4th column. I'm using the aggregated data set and using the group as an entire object. So,</p>
<pre><code>dfNew1 = df.groupby(["col1", "col2", "col3"], as_index= False)
dfNew2 = df.groupby(["col1", "col2", "col3", "col4"], as_index= False)
</code></pre>
<p>This is expensive and IRL takes almost 10 minutes. What I'd like to do is retain some of my "work" for the second groupby. In an ideal world, something like</p>
<pre><code>baseGrouper = df.groupby(["col1", "col2", "col3"])
dfNew1 = baseGrouper.reset_index()
dfNew2 = baseGrouper.groupby(["col4"]).reset_index() # DataFrameGroupBy does not actually have a "groupby" method
</code></pre>
<p>Surely there's some way to keep my grouping work and not do it twice?</p>
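<p>For illustration only, and only valid when the aggregation is associative (sum, count, min, max): one sketch is to aggregate once at the finer four-column level and derive the three-column result from that much smaller frame, rather than grouping the raw frame twice. The choice of <code>col5</code> and <code>sum</code> below is illustrative.</p>
<pre class="lang-py prettyprint-override"><code># aggregate once at the finest level
fine = df.groupby(["col1", "col2", "col3", "col4"], as_index=False)["col5"].sum()

# the coarser result is recomputed from the already-aggregated frame,
# which is typically far smaller than the original df
coarse = fine.groupby(["col1", "col2", "col3"], as_index=False)["col5"].sum()
</code></pre>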
|
<python><pandas>
|
2023-01-05 18:07:35
| 1
| 732
|
Philip Kahn
|
75,022,532
| 15,171,387
|
Dealing with missing values in multiple columns of a dataframe in Python
|
<p>I am dealing with a huge dataframe with hundreds of columns, with the possibility of missing values in each of the columns. Here is a sample:</p>
<pre><code>import pandas as pd
import numpy as np
data = {'a': [1,1,0,1,1],
'b': ["a", "b", np.nan, 'c', np.nan],
'c': ['b1','b2',np.nan, 'c1', np.nan],
'd': [1,1,1, 2, np.nan],
'e': [4,4,4, 3, np.nan]
}
df = pd.DataFrame(data)
print(df)
a b c d e
0 1 a b1 1.0 4.0
1 1 b b2 1.0 4.0
2 0 NaN NaN 1.0 4.0
3 1 c c1 2.0 3.0
4 1 NaN NaN NaN NaN
</code></pre>
<p>In order to deal with missing values at once, I am doing something like this, which basically replaces missing values in the a, b, or c columns with a specific value.</p>
<pre><code>df=df.fillna({'a':0, 'b':'other', 'c':-1})
print (df)
a b c d e
0 1 a b1 1.0 4.0
1 1 b b2 1.0 4.0
2 0 other -1 1.0 4.0
3 1 c c1 2.0 3.0
4 1 other -1 NaN NaN
</code></pre>
<p>What I would like to do is: if the missing values are in any column other than those three, simply replace them with the value that appears most often in that column. For example, in <code>column d</code>, 1 is repeated the most, so I simply replace the missing value with <code>1.0</code>.</p>
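<p>A small sketch of one way to express that (the three fixed replacements plus the per-column mode for everything else):</p>
<pre class="lang-py prettyprint-override"><code>fixed = {'a': 0, 'b': 'other', 'c': -1}

# most frequent value per remaining column (mode() can return ties; take the first)
modes = {col: df[col].mode().iloc[0]
         for col in df.columns.difference(list(fixed))
         if df[col].notna().any()}

df = df.fillna({**fixed, **modes})
</code></pre>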
|
<python><pandas><missing-data>
|
2023-01-05 17:58:46
| 1
| 651
|
armin
|
75,022,459
| 14,517,622
|
Get Album Art of a music using ffmpeg in Webp
|
<p>I am struggling to output the thumbnail of an mp3 file in WebP format.</p>
<p>I tried this but it did not work for .webp.</p>
<p>I am running this command in an AWS Lambda function to extract the album art and save it in S3.</p>
<p><code>ffmpeg -i input.mp3 -an -vcodec copy cover.webp</code></p>
<p><a href="https://i.sstatic.net/NYFYK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NYFYK.png" alt="enter image description here" /></a></p>
<p>Any help would be appreciated.</p>
<p>Reference: <a href="https://stackoverflow.com/questions/13592709/retrieve-album-art-using-ffmpeg">link</a></p>
|
<javascript><python><linux><audio><ffmpeg>
|
2023-01-05 17:50:50
| 0
| 847
|
Arihant Jain
|
75,022,215
| 4,247,704
|
Routing requests to a specific Heroku Dyno
|
<p>I built a real-time collaboration application with <a href="http://prosemirror.net" rel="nofollow noreferrer">Prosemirror</a> that uses a centralised operational transform algorithm (described <a href="https://marijnhaverbeke.nl/blog/collaborative-editing.html#centralization" rel="nofollow noreferrer">here</a> by Marijn Haverbeke) with a Python server using <a href="https://github.com/django/channels" rel="nofollow noreferrer">Django Channels</a> with <a href="https://github.com/fellowapp/prosemirror-py" rel="nofollow noreferrer">prosemirror-py</a> as the central point.</p>
<p>The server creates a <code>DocumentInstance</code> for every document users are collaborating on, keeps it in memory and occasionally stores it in a Redis database. As long as there is only one Dyno, all requests are routed there. Then the server looks up the instance the request belongs to and updates it.</p>
<p>I would like to take advantage of Heroku's horizontal scaling and run more than one dyno. But as I understand, this would imply requests being routed to any of the running dynos. But since one <code>DocumentInstance</code> can only live on one server this would not work.</p>
<p>Is there a way to make sure that requests belonging to a specific <code>DocumentInstance</code> are only routed to the machine that keeps it?</p>
<p>Or maybe there is an alternative architecture that I am overlooking?</p>
|
<python><heroku><prose-mirror><operational-transform>
|
2023-01-05 17:27:20
| 1
| 427
|
nehalem
|
75,022,209
| 1,833,945
|
Why is this Apache Beam pipeline that reads an Excel file and creates a .CSV from it not working?
|
<p>I am pretty new to Apache Beam and I am experiencing the following problem with this simple task: I am trying to create a new <strong>.csv</strong> file starting from an <strong>.xlsx</strong> Excel file. To do this I am using Apache Beam with Python 3 and the <strong>Pandas</strong> library. I admit these are all topics pretty new to me.</p>
<p>I am working on Google Colab, but I don't think that is significant information.</p>
<p>I imported Apache Beam and Pandas in this way (the ! is just the way to run a shell command in Google Colab):</p>
<pre><code>!{'pip install --quiet apache-beam pandas'}
</code></pre>
<p>And this is my Python code implementing the Apache Beam pipeline:</p>
<pre><code>import apache_beam as beam
import pandas as pd
def parse_excel(line):
# Use the pandas library to parse the line into a DataFrame
df = pd.read_excel(line)
print("DATAFRAME")
# Convert the DataFrame to a list of dictionaries, where each dictionary represents a row in the DataFrame
# and has keys that are the column names and values that are the cell values
return [row.to_dict() for _, row in df.iterrows()]
def print_json(json_object):
# Print the JSON object
print(json_object)
def run(argv=None):
print("START run()")
p = beam.Pipeline()
# Read the Excel file as a PCollection
lines = (
p
| 'Read the Excel file' >> beam.io.ReadFromText('Pazienti_export_reduced.xlsx')
| "Convert to pandas DataFrame" >> beam.Map(lambda x: pd.DataFrame(x))
| "Write to CSV" >> beam.io.WriteToText(
'data/csvOutput', file_name_suffix=".csv", header=True
)
)
print("after lines pipeline")
# Parse the lines using the pandas library
#json_objects = lines | 'ParseExcel' >> beam.Map(parse_excel)
# Print the values of the json_objects PCollection
#json_objects | 'PrintJSON' >> beam.ParDo(print_json)
if __name__ == '__main__':
print("START main()")
print(beam.__version__)
print(pd.__version__)
run()
</code></pre>
<p>When I run it I obtain no error, but my <strong>data</strong> folder is still empty. Basically it seems that the expected <strong>csvOutput.csv</strong> output file was not created at the end of my pipeline.</p>
<p>What is wrong? What am I missing? How can I try to fix my code?</p>
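<p>For reference, here is a sketch of a variant that at least executes. Two things stand out in the code above: <code>ReadFromText</code> reads plain text files, while <strong>.xlsx</strong> is a binary format, and the pipeline object is built but <code>p.run()</code> is never called, so nothing actually runs. In this sketch pandas reads the workbook up front and the <code>with</code> block runs the pipeline; the CSV formatting is deliberately simplistic.</p>
<pre class="lang-py prettyprint-override"><code>import apache_beam as beam
import pandas as pd

def run(argv=None):
    # read the whole workbook up front with pandas (ReadFromText cannot parse .xlsx)
    df = pd.read_excel('Pazienti_export_reduced.xlsx')

    # the context manager calls p.run() and waits for the pipeline to finish
    with beam.Pipeline() as p:
        (
            p
            | 'Create rows' >> beam.Create(df.to_dict('records'))
            | 'To CSV line' >> beam.Map(lambda row: ','.join(str(v) for v in row.values()))
            | 'Write to CSV' >> beam.io.WriteToText('data/csvOutput', file_name_suffix='.csv')
        )

if __name__ == '__main__':
    run()
</code></pre>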
|
<python><pandas><apache-beam><apache-beam-io>
|
2023-01-05 17:26:54
| 1
| 43,315
|
AndreaNobili
|
75,022,186
| 715,036
|
The git process never exits when running as a Python subprocess
|
<h2>Background</h2>
<p>I'm writing a Python program to process LFS error messages as I have some repos with missing LFS files in Bitbucket Server. The output when running <code>git lfs fetch --all</code> isn't very helpful when you want to find out which LFS files are missing from a repo. That's why I'm making a tool for it. Maybe it can be contributed back to the git project in some form when I'm done with it?</p>
<h2>Code snippet</h2>
<pre class="lang-py prettyprint-override"><code>import subprocess
def git_lfs_fetch(repo_dir) -> list:
timeout_sec = 120
try:
completed_process = subprocess.run(
["git", "lfs", "fetch", "--all"], check=False, cwd=repo_dir,
capture_output=True, text=True, timeout=timeout_sec, shell=False)
return completed_process.stderr.split('\n')
except subprocess.TimeoutExpired as ex:
print(f'ERROR Could not complete "{ex.cmd}" before timeout of {timeout_sec} s!')
print(ex.stderr)
return []
</code></pre>
<h2>Problem</h2>
<p>Sometimes, and not in any consistent way, the <code>subprocess.run()</code> method never returns because the <code>git-lfs</code> process never exits. Usually running the <code>git lfs fetch --all</code> command in my test repos takes a few seconds to complete. As a workaround, I added a 2 min <code>timeout</code> to the <code>subprocess.run()</code> call. I figured I could get the output I'm interested in from the exception's stderr, since <code>git-lfs</code> was done with all it should do. However, that did not help. Python does not seem to be able to kill the <code>git</code> subprocess. I understand from the docs that it sends a <code>SIGKILL</code> to the process and then waits for it to exit. But it never exits, even with the timeout set.</p>
<p>If I manually kill the <code>git-lfs</code> process from the outside I get the expected output printed from <code>ex.stderr</code> so <code>git-lfs</code> sure looks like it is done, and my workaround does what it should.</p>
<h2>Environment</h2>
<ul>
<li><code>Windows 10, 64 bit</code></li>
<li><code>Python 3.10.7</code></li>
<li><code>git version 2.39.0.windows.2</code>
<ul>
<li>Upgraded from <code>2.35.2.windows.1</code></li>
</ul>
</li>
<li><code>git-lfs/3.3.0 (GitHub; windows amd64; go 1.19.3; git 77deabdf)</code>
<ul>
<li>Upgraded from <code>git-lfs/3.0.2 (GitHub; windows amd64; go 1.17.2)</code></li>
</ul>
</li>
</ul>
<h2>Fixing my workaround</h2>
<p>As I was writing this, in a typical <a href="https://en.wikipedia.org/wiki/Rubber_duck_debugging" rel="nofollow noreferrer">rubberducking</a> fashion, I had an idea.</p>
<p>Since Python fails to forcibly terminate the <code>git</code> subprocess I tried using <code>git-lfs</code> directly, instead of letting <code>git</code> call it. This made the timeout work.</p>
<pre class="lang-py prettyprint-override"><code>subprocess.run(["git-lfs", "fetch", "--all"], ...)
</code></pre>
<h2>What I'm looking for</h2>
<p>The solution I'm looking for is a way to figure out why <code>git-lfs</code> and consequently <code>git</code> won't terminate properly. Even better would be a fix for that problem.</p>
<p>I have seen similar problems when calling <code>git</code> from Java and C# on both Linux and Windows (several years ago), i.e. the <code>git</code> command actually completes all it should do but the <code>git</code> process never terminates. So I'm thinking this "hanging" could be a problem in git itself. I really would like to be able to find out why the <code>git-lfs</code> process won't exit. I don't know where to start looking.</p>
<h2>Update</h2>
<p>I used the tip from @CharlesDuffy to try <code>strace</code>. I got some output that did not really tell me anything useful.</p>
<p>In Task Manager I then happened to see that there were a few (7) ssh processes running. On inspection of their command lines it became clear that they were related to Git LFS and that they were all waiting for a response from the Bitbucket Server. The commands issued to the server were one of these:</p>
<ul>
<li><code>ssh -p 7999 git@server-url "git-lfs-authenticate <path> download"</code></li>
<li><code>ssh -p 7999 git@server-url "git-lfs-transfer <path> download"</code></li>
</ul>
<p>and they appeared to be left-over processes from all the times I had seen problems and forcibly killed the <code>git-lfs</code> process.</p>
<p>I upgraded Git to the latest version, 2.39.0 with git-lfs 3.3.0, and tried again. At first it seemed to work better but then I got <code>git-lfs</code> stuck again. This time running from the command line, not from Python, and Task Manager shows that <code>ssh.exe</code> is running this command line and seem stuck in it:</p>
<pre class="lang-bash prettyprint-override"><code>ssh -oControlMaster=Auto -oControlPath=C:\Users\username\Appdata\LocalTemp\sock-3553225979\sock-%C -p 7999 git@server-url "git-lfs-transfer <path> download"
</code></pre>
<p>I was unable to get any useful information from that hanging <code>ssh.exe</code> process using <code>strace</code> from <code>Git Bash</code>, and eventually it died while I was trying to get <code>strace</code> to attach to it again.</p>
<p>Since the <code>ssh</code> daemon on the server is living in the Bitbucket application I think the next step here is to involve Atlassian support for server side debugging. I still lack a reliable way of reproducing the problem though.</p>
|
<python><python-3.x><git><subprocess><git-lfs>
|
2023-01-05 17:25:03
| 1
| 721
|
cb2
|
75,021,988
| 2,482,149
|
Unable to Loop Over Nested Dictionary in Stored Procedure: Psycopg2
|
<p>I'm trying to add the values of a nested dictionary within a list to a table using a stored procedure in <code>psycopg2</code>. This is the schema I receive from a JSON application:</p>
<pre><code>[
{
"team_id": 236,
"lineup": [
{
"player_id": 3043,
"country": {
"id": 61,
"name": "Denmark"
}
}
]
},
{
"team_id": 237,
"lineup": [
{
"player_id": 3045,
"country": {
"id": 62,
"name": "Italy"
}
}
]
}
]
</code></pre>
<p>I'm trying to add the player's <code>country</code> into a table in Postgres. The <code>id</code> will be of type integer and <code>name</code> of type <code>VARCHAR</code>. Here is my stored procedure:</p>
<pre><code>'''
CREATE PROCEDURE insert_country_by_lineups(data JSON)
AS $$
BEGIN
FOR team IN SELECT * FROM json_array_elements(data) LOOP
FOR player in SELECT * FROM json_array_elements(team->'lineup') LOOP
INSERT INTO country(id,name)
VALUES (CAST(player->'country'->>'id' AS integer), player->'country'->>'name')
ON CONFLICT DO NOTHING RETURNING id;
END LOOP;
END LOOP;
END;
$$ LANGUAGE plpgsql;
'''
</code></pre>
<p>However, I keep getting this when I execute the procedure on my cursor:</p>
<pre><code>loop variable of loop over rows must be a record variable or list of scalar variables
LINE 5: FOR team IN SELECT * FROM json_array_elements(data...
</code></pre>
<p>What am I doing wrong?</p>
|
<python><sql><postgresql><stored-procedures><psycopg2>
|
2023-01-05 17:06:51
| 1
| 1,226
|
clattenburg cake
|
75,021,969
| 15,176,150
|
How does VSCode's Python Extension use the Paths you set?
|
<p>I'm troubleshooting an issue with VSCode's <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.python" rel="nofollow noreferrer">Python extension</a> trying to load the wrong version of Python.</p>
<p>I'd like to know how this extension uses the Path variables you give it in its settings. For example, see the picture below:</p>
<p><a href="https://i.sstatic.net/EDoaO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDoaO.png" alt="Example of the Python extension's Path variables." /></a></p>
<p>Intuitively, I assume it uses my environment variables and searches for the package in there, but does it?</p>
<p>I suppose another option is somehow using the interpreter path passed to it earlier:</p>
<p><a href="https://i.sstatic.net/lzTCh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lzTCh.png" alt="Python extension's interpreter path." /></a></p>
<p>How does the extension use this Path?</p>
|
<python><visual-studio-code><python-extensions>
|
2023-01-05 17:04:35
| 1
| 1,146
|
Connor
|
75,021,923
| 7,975,962
|
pandas - DataFrame.groupby.head with different values
|
<p>I have two dataframes. One of them has session ids and their cut-off points. The other dataframe has multiple rows for each session, and I want to take the first n rows of each session, where n is the cut-off point from the other dataframe. This is a screenshot of the two dataframes.</p>
<p><a href="https://i.sstatic.net/ImMAs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ImMAs.png" alt="enter image description here" /></a></p>
<p>For example session 0 has 20 rows and session 1 has 50 rows. Cut-off index for session 0 is 10 and it is 30 for session 1. I want to do a groupby or any vectorized operation which takes first 10 rows of session 0 and first 30 rows of session 1.</p>
<p>Is it possible without looping?</p>
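<p>A sketch of the vectorized pattern usually used for this; the column names (<code>session</code> in both frames, <code>cutoff</code> in the cut-off frame, and the name <code>df_cutoff</code> itself) are assumptions, since the frames are only shown as a screenshot:</p>
<pre class="lang-py prettyprint-override"><code># cut-off per session as a Series indexed by session id
cutoffs = df_cutoff.set_index("session")["cutoff"]

# row number within each session, compared against that session's cut-off
first_n = df[df.groupby("session").cumcount() < df["session"].map(cutoffs)]
</code></pre>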
|
<python><pandas><numpy><group-by>
|
2023-01-05 17:00:41
| 1
| 974
|
gunesevitan
|
75,021,879
| 3,575,623
|
make install using a conda environment
|
<p>I'm trying to install <a href="https://bitbucket.org/dmarnetto/haplostrips/src/master/" rel="nofollow noreferrer">HaploStrips</a>, which requires python 2.7 and pip (and a few other libraries). The author suggests to create an anaconda environment using <code>conda create -n haplostrips python=2.7 pandas=0.22 pysam</code>, then activating that environment and installing using that.</p>
<p>However, even when I do that, the installer can't find pandas or pysam, or the correct version of Python (it uses my default python install, which is 3.8).</p>
<p>How can I get <code>make install</code> to use the executables provided by anaconda? Is there a better way to get it to use python 2.7 + the other dependencies?</p>
<p>This is the output from running <code>conda create --name haplostripsb python=2.7</code>:</p>
<pre class="lang-none prettyprint-override"><code> >> conda create --name haplostripsb python=2.7
Collecting package metadata (current_repodata.json): failed
WARNING conda.exceptions:print_unexpected_error_report(1216): KeyError('pkgs_dirs')
Traceback (most recent call last):
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/exceptions.py", line 1082, in __call__
return func(*args, **kwargs)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/cli/main.py", line 87, in _main
exit_code = do_call(args, p)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/cli/conda_argparse.py", line 84, in do_call
return getattr(module, func_name)(args, parser)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/cli/main_create.py", line 41, in execute
install(args, parser, 'create')
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/cli/install.py", line 260, in install
unlink_link_transaction = solver.solve_for_transaction(
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 152, in solve_for_transaction
unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier,
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 195, in solve_for_diff
final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned,
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 300, in solve_final_state
ssc = self._collect_all_metadata(ssc)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/common/io.py", line 88, in decorated
return f(*args, **kwds)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 435, in _collect_all_metadata
_supplement_index_with_system(virtual_pkg_index)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/core/index.py", line 164, in _supplement_index_with_system
dist_name, dist_version = context.os_distribution_name_version
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/auxlib/decorators.py", line 268, in new_fget
cache[inner_attname] = func(self)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/base/context.py", line 863, in os_distribution_name_version
from conda._vendor.distro import id, version
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/_vendor/distro.py", line 1084, in <module>
_distro = LinuxDistribution()
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/_vendor/distro.py", line 599, in __init__
self._lsb_release_info = self._get_lsb_release_info() \
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/_vendor/distro.py", line 943, in _get_lsb_release_info
raise subprocess.CalledProcessError(code, cmd, stdout, stderr)
subprocess.CalledProcessError: Command 'lsb_release -a' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/exceptions.py", line 1214, in print_unexpected_error_report
message_builder.append(get_main_info_str(error_report['conda_info']))
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/cli/main_info.py", line 237, in get_main_info_str
info_dict['_' + key] = ('\n' + 26 * ' ').join(info_dict[key])
KeyError: 'pkgs_dirs'
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/exceptions.py", line 1082, in __call__
return func(*args, **kwargs)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/cli/main.py", line 87, in _main
exit_code = do_call(args, p)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/cli/conda_argparse.py", line 84, in do_call
return getattr(module, func_name)(args, parser)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/cli/main_create.py", line 41, in execute
install(args, parser, 'create')
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/cli/install.py", line 260, in install
unlink_link_transaction = solver.solve_for_transaction(
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 152, in solve_for_transaction
unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier,
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 195, in solve_for_diff
final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned,
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 300, in solve_final_state
ssc = self._collect_all_metadata(ssc)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/common/io.py", line 88, in decorated
return f(*args, **kwds)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 435, in _collect_all_metadata
_supplement_index_with_system(virtual_pkg_index)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/core/index.py", line 164, in _supplement_index_with_system
dist_name, dist_version = context.os_distribution_name_version
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/auxlib/decorators.py", line 268, in new_fget
cache[inner_attname] = func(self)
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/base/context.py", line 863, in os_distribution_name_version
from conda._vendor.distro import id, version
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/_vendor/distro.py", line 1084, in <module>
_distro = LinuxDistribution()
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/_vendor/distro.py", line 599, in __init__
self._lsb_release_info = self._get_lsb_release_info() \
File "/home/muthur/anaconda3/lib/python3.8/site-packages/conda/_vendor/distro.py", line 943, in _get_lsb_release_info
raise subprocess.CalledProcessError(code, cmd, stdout, stderr)
subprocess.CalledProcessError: Command 'lsb_release -a' returned non-zero exit status 1.
`$ /home/muthur/anaconda3/bin/conda create --name haplostripsb python=2.7`
environment variables:
conda info could not be constructed.
KeyError('pkgs_dirs')
An unexpected error has occurred. Conda has prepared the above report.
</code></pre>
<p>This is the output from running <code>lsb_release -a</code></p>
<pre><code> >> lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal
</code></pre>
|
<python><python-2.7><makefile><anaconda>
|
2023-01-05 16:56:47
| 0
| 507
|
Whitehot
|
75,021,798
| 15,176,150
|
How does VSCode's Python Extension handle a missing .env file?
|
<p>I'm trying to troubleshoot an issue with VSCode's <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.python" rel="nofollow noreferrer">Python extension</a> loading the wrong version of Python and I'd like to rule out incorrect environment variables being an issue.</p>
<p>In the Python Extension's settings there is the following option:</p>
<p><a href="https://i.sstatic.net/p4IGR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p4IGR.png" alt="Python: Env File option in VSCode's Python extension settings." /></a></p>
<p>My workspace doesn't contain a <code>./env</code> file. What will VSCode's Python extension default to in this situation?</p>
|
<python><visual-studio-code><python-extensions>
|
2023-01-05 16:49:32
| 1
| 1,146
|
Connor
|
75,021,750
| 12,127,578
|
DeltaTable schema not updating when using `ALTER TABLE ADD COLUMNS`
|
<p>I'm currently playing with <a href="https://docs.delta.io/latest/index.html" rel="nofollow noreferrer">Delta Tables</a> on my local machine and I encountered a behavior that I don't understand.
I create my DeltaTable like so:</p>
<pre class="lang-py prettyprint-override"><code>df.write \
.format('delta') \
.mode('overwrite') \
.option('overwriteSchema', 'true') \
.save(my_table_path)
dt = DeltaTable.forPath(spark, my_table_path)
</code></pre>
<p>Then, I run the following command.</p>
<pre class="lang-py prettyprint-override"><code>spark.sql(f"ALTER TABLE delta.`{my_table_path}` ADD COLUMNS (my_new_col string)")
</code></pre>
<p>This adds a new column to the schema as can be seen by running</p>
<pre class="lang-py prettyprint-override"><code>spark.sql(f"DESCRIBE TABLE delta.`{my_table_path}`").show()
</code></pre>
<p>It even shows as the last DeltaTable operation in the history, by running <code>dt.history().show()</code>.</p>
<p>However, this is not reflected in the DeltaTable object <code>dt</code>; in fact, if I run <code>dt.toDF().printSchema()</code>, the new column is not displayed.</p>
<p>On the other hand, if I do something like</p>
<pre class="lang-py prettyprint-override"><code>spark.sql(f"UPDATE delta.`{my_table_path}` SET existing_col = 'foo'")
</code></pre>
<p>and after I run <code>dt.toDF().show()</code>, the update is reflected and shown under <code>existing_col</code>, which now appears containing <code>foo</code> everywhere.</p>
<p>The only way I found out to have the <code>dt</code> object reflect the schema change, is to run <code>dt = DeltaTable.forPath(spark, my_table_path)</code> again after <code>ALTER TABLE</code>.</p>
<p>What am I missing?</p>
<hr />
<p><strong>Edit</strong>: Added repo link for reproducibility.</p>
<p><a href="https://github.com/wtfzambo/delta-bug-working-example" rel="nofollow noreferrer">https://github.com/wtfzambo/delta-bug-working-example</a></p>
<p><strong>Edit2</strong>: Repo uses Delta 1.0, but issue exists also in Delta 2.*</p>
|
<python><apache-spark><pyspark><delta-lake>
|
2023-01-05 16:45:57
| 1
| 678
|
wtfzambo
|
75,021,685
| 9,261,745
|
How to save the picture in a URL into Azure blob storage (ADLS Gen2) in Python
|
<p>I would like to save the picture at some URL into blob storage directly with Python.
I tried to use the code in <a href="https://stackoverflow.com/questions/56645082/download-web-images-by-url-from-excel-and-save-to-folders-in-python">Download web images by URL from excel and save to folders in Python</a>. This is how I adapted it:</p>
<pre><code>for index, row in data.iterrows():
url = row['url']
file_name = url.split('/')[-1]
r = requests.get(url)
abs_file_name = lake_reporting_root + file_name #blob storage folder name
if r.status_code == 200:
with open(abs_file_name, "wb") as f:
f.write(r.content)
</code></pre>
<p>It gives this error:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory:'abfss://datalake@xxx.dfs.core.windows.net/production/xx/test/xxxx
</code></pre>
<p>Any idea?</p>
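<p>A rough sketch using the Azure Blob SDK instead of <code>open()</code>, since a local <code>open()</code> cannot write to an <code>abfss://</code> URL; the connection string, container name and blob path below are placeholders:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("datalake")        # placeholder container

for index, row in data.iterrows():
    url = row['url']
    file_name = url.split('/')[-1]
    r = requests.get(url)
    if r.status_code == 200:
        blob_path = f"production/xx/test/{file_name}"        # placeholder folder
        container.upload_blob(name=blob_path, data=r.content, overwrite=True)
</code></pre>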
|
<python><pandas><url><azure-blob-storage>
|
2023-01-05 16:40:28
| 1
| 457
|
Youshikyou
|
75,021,620
| 2,237,820
|
How to select which request from the queue to get the resource in Simpy
|
<p>I am working on simulating a scheduling algorithm in Simpy. What I need to do is choose from the queue which request gets the resource. I have read the examples provided on the Simpy website and the documentation. However, I could not find anything regarding selecting requests from the queue to serve. Priority resources don't satisfy my requirement either. I also searched the questions on Stack Overflow, but I could not find something similar to what I am trying to do. I thought about generating the request objects and storing them in an array, and after deciding which one must be served, requesting the resource. But that does not seem valid to me as I am trying to simulate task scheduling.</p>
<p>Is it possible to override the queueing behavior in Simpy to achieve this task?<br />
I appreciate any help you can provide.</p>
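<p>As a sketch of one possible direction (not necessarily the intended scheduling policy): model the waiting requests with a <code>simpy.FilterStore</code> and let a server process decide which waiting item to take next, instead of relying on a resource's built-in FIFO queue. All names, timings and the priority rule below are illustrative.</p>
<pre class="lang-py prettyprint-override"><code>import simpy

def request_source(env, store):
    # illustrative requests arriving over time with different priorities
    for i, prio in enumerate([5, 1, 3]):
        yield env.timeout(1)
        yield store.put({"id": i, "priority": prio})

def server(env, store):
    while True:
        if not store.items:
            yield env.timeout(1)          # nothing waiting; poll again later
            continue
        # scheduling decision: pick the waiting request with the lowest priority value
        chosen = min(store.items, key=lambda r: r["priority"])
        request = yield store.get(lambda r: r is chosen)
        print(f"t={env.now}: serving request {request['id']}")
        yield env.timeout(2)              # service time

env = simpy.Environment()
queue = simpy.FilterStore(env)
env.process(request_source(env, queue))
env.process(server(env, queue))
env.run(until=20)
</code></pre>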
|
<python><simpy>
|
2023-01-05 16:35:07
| 1
| 551
|
Alaa
|
75,021,559
| 8,261,345
|
How do I prevent Google Cloud Functions caching results to calls?
|
<p>I have a simple flask app in a Google Cloud Function that returns different data based on the user's authorization, specified by a token in the header:</p>
<pre class="lang-py prettyprint-override"><code>@functions_framework.http
def main(request):
token = request.headers['Authorization']
data: List[str] = my_store.get_authorized_data(token)
print(data)
return data
</code></pre>
<p>This is deployed using:</p>
<pre><code>gcloud functions deploy myfunc \
--entry-point main \
--project=myproj \
--trigger-http \
--runtime=python39 \
--allow-unauthenticated \
--timeout=540 \
--memory=1024MB \
--min-instances 1
</code></pre>
<p>I then have a React/TypeScript front-end deployed to Firebase hosting that calls the function:</p>
<pre class="lang-js prettyprint-override"><code>axios.get(`cloud/function/url`, {
headers: {
Accept: "application/json, text/plain, */*",
Authorization: `Bearer ${token}`,
},
})
.then((result) => {
console.log(result.data);
return result.data;
});
</code></pre>
<p>The cloud functions are regular Google Cloud Functions, <strong>not</strong> Cloud Functions for Firebase.</p>
<h3>The problem</h3>
<p>If I send a request with a token authorized for all data <code>['a', 'b', 'c']</code>, everything works fine:</p>
<ul>
<li>Cloud function log: <code>['a', 'b', 'c']</code></li>
<li>Front-end log: <code>['a', 'b', 'c']</code></li>
</ul>
<p>If I then switch to a user with less data access, just <code>['a', 'b']</code>, it also works:</p>
<ul>
<li>Cloud function log: <code>['a', 'b']</code></li>
<li>Front-end log: <code>['a', 'b']</code></li>
</ul>
<p>But if I then switch back to the account with access to all data, there is an unexpected result:</p>
<ul>
<li>Cloud function log: <code>['a', 'b', 'c']</code></li>
<li>Front-end log: <code>['a', 'b']</code></li>
</ul>
<p>The front-end is not receiving the data that the cloud function is logging. This is intermittent, if I make repeat calls, around half get the correct <code>['a', 'b', 'c']</code> in the front-end, the other half always show the incorrect <code>['a', 'b']</code>. <em>All</em> logs in the cloud function show the correct <code>['a', 'b', 'c']</code> regardless of the result in the front-end.</p>
<p>It appears the cloud function is caching the result, and just sending an old result.</p>
<h3>What I've tried</h3>
<p>A similar question states that <a href="https://stackoverflow.com/questions/60908964/how-do-i-disable-firebase-functions-cache">cloud functions re-use server instances</a> and there's not any remedy.</p>
<p>Here are the steps I've taken:</p>
<ol>
<li>Redeploying the cloud function: this works, the issue disappears, but returns once I switch tokens again. <strong>I can't redeploy my cloud function every time a user needs to change account.</strong></li>
<li>Redeploying the front-end to Firebase: this does <strong>not</strong> fix the issue.</li>
<li>Disabling cache in the browser: this does <strong>not</strong> fix the issue.</li>
<li>Disabling cache in firebase hosting (as in <a href="https://stackoverflow.com/questions/56531235/invalidate-firebase-cache-for-google-cloud-function-on-deploy">this question</a>): this does <strong>not</strong> fix the issue, similarly I have tried setting this in my <code>firebase.json</code>, which also did <strong>not</strong> fix the issue:</li>
</ol>
<pre><code>"headers": [{
"source": "**",
"headers": [{
"key": "Cache-Control",
"value": "max-age=0"
}]
}]
</code></pre>
<ol start="5">
<li>Disabling caching on the axios.get: this does <strong>not</strong> fix the issue. I tried 3 different methods as in <a href="https://stackoverflow.com/questions/49263559/using-javascript-axios-fetch-can-you-disable-browser-cache">this question</a>: by adding an ever-changing parameter to the URL so it is always different, setting such a parameter in the call <code>params</code> and setting the caching headers directly. None of these fix the issue:</li>
</ol>
<pre><code>axios.get(`/cloud/function/url?timestamp=${new Date().getTime()}`, {
headers: {
Accept: "application/json, text/plain, */*",
Authorization: `Bearer ${token}`,
'Cache-Control': 'no-cache',
Pragma: 'no-cache',
Expires: '0',
},
params: {
t: new Date().getTime()
}
})
</code></pre>
<p>From disabling the front-end cache, redeploying the front-end, and the <code>console.log</code> output, I am confident the issue is not client-side. I curl the API directly to verify this:</p>
<pre><code>curl '/my/function/url' -H 'Authorization: Bearer '"$token"'
</code></pre>
<ol>
<li>Response for token 1: <code>{data: ["a", "b", "c"]}</code></li>
<li>Response for token 2: <code>{data: ["a", "b"]}</code></li>
<li>Response going back to token 1: <code>{data: ["a", "b"]}</code></li>
</ol>
<p>It seems clear the cloud function really is returning some type of cached response. This is further reinforced by the fact that redeploying the cloud function does fix the issue. How do I stop a Google Cloud Function from caching responses?</p>
<hr />
<h2>Edit: further details on how the backend retrieves the data for each token</h2>
<p>Data is stored in a Firestore NoSQL document database. Each entry has the user account as the document name and the allowed data as fields in the document.</p>
<pre><code>user1: {a: null, b: null, c: null}
user2: {a: null, b: null}
</code></pre>
<p>The <code>get_authorized_data</code> function gets the user account from the token and retrieves the allowed data items from the Firestore:</p>
<pre class="lang-py prettyprint-override"><code>from firebase_admin import auth, firestore
db = firestore.client()
def get_authorized_data(token):
# This returns {'uid': '123', 'email': 'example@example.com'}
decoded_token = auth.verify_id_token(id_token)
document = db.collection(u'allowed-data-per-user')\
.document(decoded_token['email']).get()
allowed_data = list(document.to_dict().keys())
return allowed_data
</code></pre>
<p>If you notice my comments about the <code>print</code> statement in the first block of python at the top of the question, this retrieves the correct data: always <code>['a', 'b', 'c']</code> for token 1 and always <code>['a', 'b']</code> for token 2. I do not believe that this is the part at fault.</p>
|
<javascript><python><firebase><google-cloud-platform><google-cloud-functions>
|
2023-01-05 16:30:19
| 0
| 694
|
Student
|
75,021,498
| 4,265,321
|
conda-provided gfortran cannot compile MPI program
|
<p>I have an account on a cluster which I access through SSH. The cluster uses Red Hat. My account does not have root privileges.</p>
<p>I do not have gfortran installed but I need it to compile a program that I will be using. This program is a Python wrapper of a Fortran code, which uses MPI.</p>
<p>I followed <a href="https://stackoverflow.com/questions/64242438/gfortran-linking-c-libraries-with-conda">this brilliant answer</a> to create a conda environment with Fortran compilers.</p>
<p>I activate the environment and then try to run <code>python setup.py install</code> for my Python code. I get this error:</p>
<pre class="lang-none prettyprint-override"><code>/disk/username/bin/x86_64-conda-linux-gnu-gfortran -ffree-line-length-none -cpp -fPIC -fno-stack-arrays -Ofast -DMPI -c utils.F90
utils.F90:5: Error: Can't open included file 'mpif.h'
</code></pre>
<p>Then I try to modify the Makefile of the original Fortran code, including an explicit reference to where <code>mpif.h</code> is, with <code>-I/path/to/conda/environment/include</code>, following <a href="https://stackoverflow.com/questions/26748141/cant-find-mpif-h-compiling-error">this other brilliant SO answer</a>. This helps, and the installation is then complete.</p>
<p>However, when I try to <code>import</code> the Python package I get the following error:</p>
<p><code>ImportError: undefined symbol: mpi_allreduce_</code></p>
<p>What is going wrong? Incidentally, if I compile the Fortran code without MPI, everything works fine.</p>
|
<python><fortran><mpi><gfortran>
|
2023-01-05 16:23:51
| 0
| 1,343
|
johnhenry
|
75,021,371
| 5,024,631
|
SKLearn & ElasticNet: Cross validation fails when using Accuracy as a metric
|
<p>I have a binary classification problem. I've been using <code>cross validation</code> to optimize the <code>ElasticNet</code> parameters. However, ElasticNet only seems to work when I supply <code>roc_auc</code> as the scoring method to be used during CV, whereas I also want to test out a wide range of scoring methods, in particular <code>accuracy</code>. Specifically, when using accuracy, ElasticNet returns this error:</p>
<pre><code>ValueError: Classification metrics can't handle a mix of binary and continuous targets
</code></pre>
<p>However my <code>y</code> targets are indeed binary. Below is a replication of my problem using the dataset from <a href="https://www.kaggle.com/datasets/uciml/breast-cancer-wisconsin-data?resource=download" rel="nofollow noreferrer">here</a>:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelBinarizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold
from sklearn.metrics import make_scorer, recall_score, accuracy_score, precision_score, confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import ElasticNet
data = pd.read_csv('data 2.csv')
# by default majority class (benign) will be negative
lb = LabelBinarizer()
data['diagnosis'] = lb.fit_transform(data['diagnosis'].values)
targets = data['diagnosis']
data.drop(['id', 'diagnosis', 'Unnamed: 32'], axis=1, inplace=True)
X_train, X_test, y_train, y_test = train_test_split(data, targets, stratify=targets)
#elastic net logistic regression
lr = ElasticNet(max_iter=2000)
scorer = 'accuracy'
param_grid = {
'alpha': [1e-4, 1e-3, 1e-2, 0.01, 0.1, 1, 5, 10],
'l1_ratio': np.arange(0.2, 0.9, 0.1)
}
skf = StratifiedKFold(n_splits=10)
clf = GridSearchCV(lr, param_grid, scoring=scorer, cv=skf, return_train_score=True,
n_jobs=-1)
clf.fit(X_train.values, y_train.values)
</code></pre>
<p>I figured that ElasticNet might be trying to solve a linear regression problem so I tried <code>lr = LogisticRegression(penalty='elasticnet', l1_ratios=[0.1, 0.5, 0.9], solver='saga')</code> as the classifier but the same problem persists.</p>
<p>If I use as the scoring metric <code>scorer = 'roc_auc'</code> then the model is built as expected.</p>
<p>Also, as a sanity check to see if there is something wrong with the data, I tried the same but with a random forest classifier, and here the problem disappears:</p>
<pre class="lang-py prettyprint-override"><code># random forest
clf = RandomForestClassifier(n_jobs=-1)
param_grid = {
'min_samples_split': [3, 5, 10],
'n_estimators' : [100, 300],
'max_depth': [3, 5, 15, 25],
'max_features': [3, 5, 10, 20]
}
skf = StratifiedKFold(n_splits=10)
scorer = 'accuracy'
grid_search = GridSearchCV(clf, param_grid, scoring=scorer,
cv=skf, return_train_score=True, n_jobs=-1)
grid_search.fit(X_train.values, y_train.values)
</code></pre>
<p>Has anyone got any ideas on what's happening here?</p>
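<p>For comparison, a sketch of the elastic-net penalty expressed through <code>LogisticRegression</code> (a classifier, so its predictions are class labels that <code>accuracy</code> can score); note the grid tunes <code>C</code> and a single <code>l1_ratio</code> rather than <code>alpha</code>/<code>l1_ratios</code>, and the values below are illustrative:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold

log_reg = LogisticRegression(penalty='elasticnet', solver='saga', max_iter=5000)

param_grid = {
    'C': [0.01, 0.1, 1, 10],
    'l1_ratio': np.arange(0.2, 0.9, 0.1),
}

clf = GridSearchCV(log_reg, param_grid, scoring='accuracy',
                   cv=StratifiedKFold(n_splits=10), return_train_score=True, n_jobs=-1)
clf.fit(X_train.values, y_train.values)
</code></pre>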
|
<python><scikit-learn><classification><elasticnet>
|
2023-01-05 16:13:27
| 1
| 2,783
|
pd441
|
75,021,259
| 9,415,280
|
How to continue recording training history of a TensorFlow model in the same history file each time
|
<p>I train a big tensorflow model with a lot of data. I need to stop/save and reload it to continue training on new data.</p>
<p>If I save the history file, can I (and how?) restart the training and continue compiling results into the same history file?</p>
<p>My two reasons:</p>
<p>Keep complete tracking of my model's training evolution.
Save the best model of all time, not the best of the last training session, which could be worse (bad data...) than previous training sessions.</p>
<p>Thanks for your input!</p>
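<p>A sketch of the callback-based approach often used for this, assuming a Keras <code>model.fit</code> loop (the <code>model</code>, data names and file names below are placeholders): <code>CSVLogger(append=True)</code> keeps appending epochs to the same history file across sessions, and <code>ModelCheckpoint(save_best_only=True)</code> keeps the best weights seen during a run; across restarts the previous best metric can be re-seeded via <code>initial_value_threshold</code> in newer TF versions.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

callbacks = [
    # append each new training session to the same CSV history file
    tf.keras.callbacks.CSVLogger("training_history.csv", append=True),
    # only overwrite the checkpoint when the monitored metric improves
    tf.keras.callbacks.ModelCheckpoint("best_model.h5", monitor="val_loss",
                                       save_best_only=True),
]

model.fit(new_x, new_y, validation_data=(val_x, val_y),
          epochs=10, callbacks=callbacks)
</code></pre>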
|
<python><tensorflow><resuming-training>
|
2023-01-05 16:04:05
| 1
| 451
|
Jonathan Roy
|
75,021,051
| 12,968,928
|
Convert pandas series of strings to a series of lists
|
<p>For instance, I have a dataframe as below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"col":['AM RLC, F C', 'AM/F C', 'DM','D C']})
|col
-------------------|
0 |"AM RLC, F C" |
1 |"AM/F C" |
2 |"DM" |
3 |"D C" |
</code></pre>
<p>My expected output is as following</p>
<pre><code> |col
----|-----------------------|
0 |["AM", "RLC", "F", "C"]|
1 |["AM", "F", "C"] |
2 |["DM" ] |
3 |["D", "C"] |
</code></pre>
<p>",", "/" and "space" should be treated as delimiter,</p>
<p>The answers in this <a href="https://stackoverflow.com/questions/52675820/pandas-convert-series-of-strings-to-series-of-lists-of-strings-of-size-1-for-e">question</a> do not answer my queries</p>
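<p>For reference, a one-line sketch of what is being asked, treating comma, slash and whitespace as delimiters:</p>
<pre class="lang-py prettyprint-override"><code>df['col'] = df['col'].str.split(r'[,/\s]+')
# 0    [AM, RLC, F, C]
# 1          [AM, F, C]
# 2               [DM]
# 3             [D, C]
</code></pre>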
|
<python><pandas>
|
2023-01-05 15:48:25
| 4
| 1,511
|
Macosso
|
75,020,740
| 11,644,523
|
dbt postgres - all models appending schema public_ to output
|
<p>I am testing a local setup of dbt-postgres. I have a simple model, but for some reason, any table created is being placed in a schema with the prefix <code>public_</code> prepended to it.</p>
<p>Desired output table:
<code>public.test</code></p>
<p>Current output table:
<code>public_public.test</code></p>
<p>As you can see, the public schema is being duplicated here. Using another schema in the model also creates a new schema with <code>public_</code> prefix.</p>
<p>Simple model test.sql file:</p>
<pre><code>{{ config(materialized='table', schema='public') }}
select a from table x
</code></pre>
<p>dbt_profile.yml</p>
<pre><code>name: 'abc'
version: '0.1'
config-version: 2
profile: 'abc'
model-paths: ["models"]
analysis-paths: ["analyses"]
test-paths: ["tests"]
seed-paths: ["seeds"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]
target-path: "target"
clean-targets:
- "target"
- "dbt_packages"
- "dbt_modules"
- "logs"
models:
abc:
materialized: table
</code></pre>
<p>profiles.yml</p>
<pre><code>abc:
outputs:
dev:
type: postgres
threads: 1
host: "localhost"
port: 5432
user: "admin"
pass: "admin"
dbname: database
schema: "public"
target: dev
</code></pre>
|
<python><postgresql><dbt>
|
2023-01-05 15:25:25
| 1
| 735
|
Dametime
|
75,020,695
| 5,680,286
|
Getting TypeError: 'NoneType' object is not iterable while iterating over a multi-level JSON object
|
<p>I have the following JSON GET response:</p>
<pre><code>{
"id": "mov_BO381oEATXonG6bj",
"object": "movement",
"amount": 59400,
"post_date": "2020-04-17T00:00:00.000Z",
"description": "Traspaso de:Fintoc SpA",
"transaction_date": "2020-04-16T11:31:12.000Z",
"currency": "CLP",
"reference_id": "123740123",
"type": "transfer",
"pending": false,
"recipient_account": null,
"sender_account": {
"holder_id": "771806538",
"holder_name": "Comercial y Producción SpA",
"number": "1530108000",
"institution": {
"id": "cl_banco_de_chile",
"name": "Banco de Chile",
"country": "cl"
}
},
"comment": "Pago factura 198"
},
{
"id": "mov_4Ag8xaHXzvq3ea2b",
"description": "0257478262 Transf.",
"amount": 450000,
"currency": "CLP",
"post_date": "2022-12-01T00:00:00Z",
"transaction_date": null,
"type": "transfer",
"recipient_account": null,
"sender_account": {
"holder_id": "257478262",
"number": null,
"institution": null,
"holder_name": "Maria Marco"
},
"comment": null,
"reference_id": "003922335",
"pending": false,
"object": "movement"
}
</code></pre>
<p>I have the following python script to iterate through the object and classify the transactions, but I am getting the following error while iterating over the inner transaction item. The outer transaction item does not give an error, but when I want to iterate through the inner item, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/berra/Python/movements.py", line 59, in <module>
for inner_transaction in outer_transaction['sender_account']:
TypeError: 'NoneType' object is not iterable
</code></pre>
<p>The code is the following:</p>
<pre><code>import requests
import json
url = "https://api.fintoc.com/v1/accounts/acc_qNDRKQeTpbAKvpnW/movements?link_token=token"
headers = {
"accept": "application/json",
"Authorization": "authorizedKey."
}
response = requests.get(url, headers=headers)
# Parse the JSON data
transactions = json.loads(response.text)
expenses = []
incomes = []
#print(response.text)
for outer_transaction in transactions:
# if movimiento['sender_account'] is None:
# expenses.append(movimiento)
if outer_transaction['description'] == 'Traspaso de:Fintoc SpA':
incomes.append(outer_transaction)
if 'sender_account' in outer_transaction:
#for inner_transaction in outer_transaction:
for inner_transaction in outer_transaction['sender_account']:
if inner_transaction['holder_name']=='Maria Marco':
incomes.append(outer_transaction)
print("Expenses:", expenses)
print("Incomes:", incomes)
</code></pre>
<p>The API documentation I am using is available here: <a href="https://docs.fintoc.com/reference/movements-list" rel="nofollow noreferrer">https://docs.fintoc.com/reference/movements-list</a></p>
<p>I have tried a lot of options to try to solve this but was not able to make it work. Does anyone have any clue about what I am missing or doing wrong? Thank you in advance.</p>
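<p>For illustration, a sketch of the loop with a <code>None</code> check; note that <code>sender_account</code> is a single object rather than a list, so it can be read directly instead of iterated over:</p>
<pre class="lang-py prettyprint-override"><code>for outer_transaction in transactions:
    if outer_transaction['description'] == 'Traspaso de:Fintoc SpA':
        incomes.append(outer_transaction)

    sender = outer_transaction.get('sender_account')   # may be None
    if sender and sender.get('holder_name') == 'Maria Marco':
        incomes.append(outer_transaction)
</code></pre>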
|
<python><json>
|
2023-01-05 15:22:00
| 1
| 317
|
Bernardo Troncoso
|
75,020,628
| 278,638
|
Avoid formatting issues when warnings.captureWarnings(True) is set
|
<p>I have created a log handler which logs to a database table, and I wanted to also capture warnings issued by the program into a different table, so used <code>warnings.captureWarnings(True)</code></p>
<p>I had some issues with the message formatting in the <code>py.warnings</code> logger - the message always showed as <code>%s</code>. I triangulated it to <a href="https://github.com/python/cpython/blob/3.10/Lib/logging/__init__.py#L2245" rel="nofollow noreferrer">this line</a> (<code>logger.warning("%s", s)</code>) in the logging library code, which implements warning emission when <code>captureWarnings</code> is True.</p>
<p>When it is replaced by <code>logger.warning(s)</code> the message appears as expected. I presume therefore that it's some sort of formatting issue, but I can't figure out what.</p>
<p>I also saw <a href="https://github.com/python/cpython/issues/90715" rel="nofollow noreferrer">BPO-46557</a> which actually implements this behaviour, seemingly for unrelated reasons (but only for Python3.11+, I am on 3.10).</p>
<p>My DB log handler code is below. Is there any way of fixing this without upgrading to Python 3.11 - seems like it should be unnecessary.</p>
<p>Code called before main code is run:</p>
<pre><code>logging.captureWarnings(True)
warnings_logger = logging.getLogger("py.warnings")
warnings_logger.setLevel(logging.DEBUG)
from db_logging import SQLAlchemyWarningHandler
handler = SQLAlchemyWarningHandler()
warnings_logger.addHandler(handler)
</code></pre>
<p>handler and LogRecord code (<code>db_logging.py</code>). However I don't believe it's anything in this code as I get:
<code><LogRecord: py.warnings, 30, /usr/lib/python3.10/warnings.py, 110, "%s"></code> when I print out the record before it's emitted</p>
<pre><code>from database import database
# lib imports
from sqlalchemy import Column
from sqlalchemy.types import DateTime, Integer, String
from sqlalchemy.sql import func
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.orm import sessionmaker
from sqlalchemy import DDL, event
# stdlib imports
import logging
import traceback
import sys
class Base(object):
@declared_attr
def __tablename__(cls):
return cls.__name__.lower()
__table_args__ = {"schema": "logs"}
id = Column(Integer, primary_key=True) # auto incrementing
logger = Column(String) # the name of the logger. (e.g. myapp.views)
level = Column(String) # info, debug, or error?
trace = Column(String) # the full traceback printout
msg = Column(String) # any custom log you may have included
created_at = Column(DateTime, default=func.now()) # the current timestamp
source_loc = Column(String)
def __init__(self, logger=None, level=None, trace=None, msg=None, source_loc=None):
self.logger = logger
self.level = level
self.trace = trace
self.msg = msg
self.source_loc = source_loc
def __unicode__(self):
return self.__repr__()
def __repr__(self):
return "<Log: %s - %s>" % (self.created_at.strftime('%m/%d/%Y-%H:%M:%S'), self.msg[:50])
Base = declarative_base(cls=Base)
event.listen(Base.metadata, 'before_create', DDL("CREATE SCHEMA IF NOT EXISTS logs"))
class Logs(Base):
"log class which writes all main db logs"
pass
class WarningLogs(Base):
"seperate log class for deprecation warnings which writes to a different db table"
pass
class SQLAlchemyHandler(logging.Handler):
"A very basic logger that commits a LogRecord to the SQL Db"
def __init__(self):
logging.Handler.__init__(self)
Base.metadata.create_all(database.engine)
Session = sessionmaker(bind=database.engine)
self.session = Session()
self.log_class = getattr(sys.modules[__name__], 'Logs')
def emit(self, record):
trace = None
exc = record.__dict__['exc_info']
if exc:
trace = traceback.format_exc()
log = self.log_class(
logger=record.__dict__['name'],
level=record.__dict__['levelname'],
trace=trace,
msg=record.__dict__['msg'],
source_loc=f"{record.__dict__['pathname']}:{record.__dict__['lineno']}")
self.session.add(log)
self.session.commit()
class SQLAlchemyWarningHandler(SQLAlchemyHandler):
"Extends SQLAlchemyHandler to use WarningLog objects, which use a different table"
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.log_class = getattr(sys.modules[__name__], 'WarningLogs')
</code></pre>
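<p>One workaround I am considering (and would like validated) is to format the record inside my own <code>emit()</code> with <code>record.getMessage()</code>, which applies <code>record.args</code> to <code>record.msg</code>, so the stored message would be the warning text rather than <code>%s</code>. A sketch of the replacement <code>emit</code> for <code>SQLAlchemyHandler</code>:</p>
<pre class="lang-py prettyprint-override"><code>def emit(self, record):
    trace = None
    if record.exc_info:
        trace = traceback.format_exc()
    log = self.log_class(
        logger=record.name,
        level=record.levelname,
        trace=trace,
        # getMessage() merges record.args into record.msg ("%s" -> warning text)
        msg=record.getMessage(),
        source_loc=f"{record.pathname}:{record.lineno}")
    self.session.add(log)
    self.session.commit()
</code></pre>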
|
<python><logging><sqlalchemy><warnings><python-logging>
|
2023-01-05 15:17:02
| 1
| 2,126
|
Stev_k
|
75,020,542
| 17,487,457
|
pandas: extract time-of--the day as feature for classification
|
<p>I have a time series data, with a unix seconds column (among other columns):</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
{
'user': [3,3,3,3,3,6,6,6],
'timestamp': [1459467971, 1459468020, 1459468026, 1459468031,
1459468036,1513974852, 1513974853, 1513974854]
}
)
</code></pre>
<p>The dataset is for classification of some events, some of which are frequent in the morning and others in the afternoon.</p>
<p>I think <code>time-of-day</code> could be an important discriminant as well in this task.</p>
<p>How do I extract number of seconds for time of the day from this?</p>
<p>Note:
I know that using <code>pandas to_datetime</code> object would give <code>HH:MM:SS</code> as:</p>
<pre class="lang-py prettyprint-override"><code>df['timestamp'] = pd.to_datetime(df['timestamp'], unit='s')
#df['date'] = df['timestamp'].dt.date
df['time'] = df['timestamp'].dt.time
df
user timestamp time
0 3 2016-03-31 23:46:11 23:46:11
1 3 2016-03-31 23:47:00 23:47:00
2 3 2016-03-31 23:47:06 23:47:06
3 3 2016-03-31 23:47:11 23:47:11
4 3 2016-03-31 23:47:16 23:47:16
5 6 2017-12-22 20:34:12 20:34:12
6 6 2017-12-22 20:34:13 20:34:13
7 6 2017-12-22 20:34:14 20:34:14
</code></pre>
<p>Isn't there an easy way to pick out the number of seconds representing the time, like this:</p>
<pre class="lang-py prettyprint-override"><code> user timestamp time-of-day
0 3 2016-03-31 23:46:11 85571
1 3 2016-03-31 23:47:00 85620
2 3 2016-03-31 23:47:06 85626
3 3 2016-03-31 23:47:11 85631
4 3 2016-03-31 23:47:16 85636
5 6 2017-12-22 20:34:12 74052
6 6 2017-12-22 20:34:13 74053
7 6 2017-12-22 20:34:14 74054
</code></pre>
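<p>The closest I have come so far is assembling the seconds from the datetime components (sketch below), but I wonder whether there is a more direct accessor I am missing:</p>
<pre class="lang-py prettyprint-override"><code>t = df['timestamp'].dt
df['time-of-day'] = t.hour * 3600 + t.minute * 60 + t.second
# equivalently: seconds elapsed since midnight
# df['time-of-day'] = (df['timestamp'] - df['timestamp'].dt.normalize()).dt.total_seconds().astype(int)
</code></pre>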
|
<python><pandas><dataframe>
|
2023-01-05 15:11:12
| 2
| 305
|
Amina Umar
|
75,020,539
| 10,731,820
|
Number of deleted messages is twice the number of received messages
|
<p>I created an SQS queue with Terraform:</p>
<pre><code>resource "aws_sqs_queue" "ses_queue" {
name = "ses_queue"
message_retention_seconds = 86400
receive_wait_time_seconds = 1
visibility_timeout_seconds = 15
}
resource "aws_lambda_event_source_mapping" "send_email_message" {
event_source_arn = aws_sqs_queue.ses_queue.arn
function_name = aws_lambda_function.send_email_message.function_name
batch_size = 5
}
</code></pre>
<p>I am sending emails using a Lambda function:</p>
<pre><code>for record in event.get("Records"):
receipt_handle = record.get("receiptHandle", "")
request_body = record.get("body")
response = send_email(request_body)
if response:
sqs_client.delete_message(QueueUrl=constants.SES_QUEUE_URL, ReceiptHandle=receipt_handle)
</code></pre>
<p>I am wondering why the number of deleted messages is twice the number of received messages.
<a href="https://i.sstatic.net/tFZl2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tFZl2.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/2YjcW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2YjcW.png" alt="enter image description here" /></a></p>
|
<python><amazon-web-services><aws-lambda><terraform><amazon-sqs>
|
2023-01-05 15:11:05
| 2
| 853
|
psowa001
|
75,020,279
| 14,720,380
|
PyBind11: How to implement Vector of Shared Pointers of Classes with Inheritence? Unable to load a custom holder type from a default-holder instance
|
<p>I have a base class in c++ that looks like:</p>
<pre class="lang-cpp prettyprint-override"><code>class EdgeAttributePipelineStep {
public:
virtual void setAttributes(Edge& edge) = 0;
};
</code></pre>
<p>And then some classes that inherit from this class:</p>
<pre class="lang-cpp prettyprint-override"><code>class FuelCostStep : public EdgeAttributePipelineStep {
public:
FuelCostStep(float fuelCostPerTonne)
: m_FuelCostPerTonne(fuelCostPerTonne) {};
void setAttributes(Edge& edge);
private:
float m_FuelCostPerTonne;
};
class HireCostStep : public EdgeAttributePipelineStep {
public:
HireCostStep(float hireCostPerDay) : m_HireCostPerHour(hireCostPerDay/24) {};
void setAttributes(Edge &edge);
private:
float m_HireCostPerHour;
};
</code></pre>
<p>And then finally, a class that accepts a vector of the base class in its constructor:</p>
<pre class="lang-cpp prettyprint-override"><code>class EdgeAttributePipeline {
public:
EdgeAttributePipeline(std::vector<std::shared_ptr<EdgeAttributePipelineStep>> steps);
private:
std::vector<std::shared_ptr<EdgeAttributePipelineStep>> m_steps;
};
</code></pre>
<p>I want to be able to create the pipeline class in python like:</p>
<pre class="lang-py prettyprint-override"><code>pipeline = EdgeAttributePipeline([
FuelCostStep(10),
HireCostStep(10)
])
</code></pre>
<p>And I attempted some python bindings using PyBind11 to do this:</p>
<pre class="lang-cpp prettyprint-override"><code>#include "edge_attributes.h"
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
PYBIND11_MODULE(WeatherRouting, m)
{
py::class_<EdgeAttributePipelineStep>(m, "EdgeAttribtuePipelineStep");
// I tried both using std::shared_ptr
py::class_<FuelCostStep, std::shared_ptr<FuelCostStep>>(m, "FuelCostStep")
.def(py::init<double>());
// And not using std::shared_ptr
py::class_<HireCostStep, EdgeAttributePipelineStep>(m, "HireCostStep")
.def(py::init<double>());
py::class_<EdgeAttributePipeline>(m, "EdgeAttributePipeline")
.def(py::init<std::vector<std::shared_ptr<EdgeAttributePipelineStep>>>());
}
</code></pre>
<p>However both of these attempts resulted in the same error:</p>
<pre class="lang-py prettyprint-override"><code>>>> EdgeAttributePipeline([FuelCostStep(10)])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Unable to load a custom holder type from a default-holder instance
>>> EdgeAttributePipeline([HireCostStep(10)])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Unable to load a custom holder type from a default-holder instance
</code></pre>
<p>I also had a look at making a <a href="https://pybind11.readthedocs.io/en/stable/advanced/classes.html#combining-virtual-functions-and-inheritance" rel="nofollow noreferrer">trampoline class</a>:</p>
<pre class="lang-cpp prettyprint-override"><code>class PyEdgeAttributePipelineStep : public EdgeAttributePipelineStep {
public:
using EdgeAttributePipelineStep::EdgeAttributePipelineStep;
};
</code></pre>
<p>And modifying the pybind code:</p>
<pre class="lang-cpp prettyprint-override"><code> py::class_<EdgeAttributePipelineStep, PyEdgeAttributePipelineStep, std::shared_ptr<EdgeAttributePipelineStep>>(m, "EdgeAttribtuePipelineStep");
py::class_<FuelCostStep, EdgeAttributePipelineStep>(m, "FuelCostStep")
.def(py::init<double>());
</code></pre>
<p>However this now makes a new error when ran in python:</p>
<pre><code>ImportError: generic_type: type "FuelCostStep" does not have a non-default holder type while its base "EdgeAttributePipelineStep" does
</code></pre>
<p>I am a bit stuck trying to implement this. How do I get this code to work?</p>
|
<python><c++><pybind11>
|
2023-01-05 14:51:21
| 1
| 6,623
|
Tom McLean
|
75,020,276
| 5,437,090
|
Drop Duplicates in a Pandas DataFrame for timestamps within a certain threshold
|
<p>I have a DataFrame as follows and would like to remove duplicates with respect to three columns: <code>user</code>, <code>url</code> and <code>timestamp</code> (only if a row is within <code><= 10 sec</code> of the last occurrence). Here I annotate rows with <code>#</code> comments:</p>
<pre><code> timestamp user url
0 2018-02-07 00:00:00+02:00 ip0 google.com # 1st occurrence
1 2018-02-07 00:00:02+02:00 ip1 xe.com # 1st occurrence
2 2018-02-07 00:00:10+02:00 ip7 facebook.com
3 2018-02-07 00:00:11+02:00 ip1 xe.com # duplicate <= 10 sec : drop
4 2018-02-07 00:00:13+02:00 ip1 xe.com # not a duplicate, comparing with 1st occurrence : leave it
5 2018-02-07 00:00:15+02:00 ip2 example.com
6 2018-02-07 00:00:20+02:00 ip3 ebay.com
7 2018-02-07 00:00:55+02:00 ip1 xe.com # not a duplicate: leave it
8 2018-02-07 00:00:59+02:00 ip5 amazon.com
9 2018-02-07 00:01:02+02:00 ip1 xe.com # duplicate <= 10 sec : drop
10 2018-02-07 00:01:28+02:00 ip0 google.com # not a duplicate: leave it
</code></pre>
<p>I tried <code>df = df.drop_duplicates(subset=['user', 'url'], keep='first')</code> which removes all possible duplicate regardless of <code>timestamp</code>.</p>
<p>My expected results should look something like this:</p>
<pre><code> timestamp user url
0 2018-02-07 00:00:00+02:00 ip0 google.com
1 2018-02-07 00:00:02+02:00 ip1 xe.com
2 2018-02-07 00:00:10+02:00 ip7 facebook.com
4 2018-02-07 00:00:13+02:00 ip1 xe.com
5 2018-02-07 00:00:15+02:00 ip2 example.com
6 2018-02-07 00:00:20+02:00 ip3 ebay.com
7 2018-02-07 00:00:55+02:00 ip1 xe.com
8 2018-02-07 00:00:59+02:00 ip5 amazon.com
10 2018-02-07 00:01:28+02:00 ip0 google.com
</code></pre>
<p>What is the most compact and easiest way to create some sort of mask (if that is a practical idea) to exclude rows which fall within a certain threshold, e.g. 10 seconds, of the first occurrence?</p>
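<p>The best I have come up with so far is a per-group loop that keeps a row only when it is more than 10 seconds after the previously kept row (which matches the behaviour described in my comments above), but it does not feel compact:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

threshold = pd.Timedelta(seconds=10)

def drop_close(group):
    keep, last_kept = [], None
    for ts in group["timestamp"]:
        ok = last_kept is None or ts - last_kept > threshold
        keep.append(ok)
        if ok:
            last_kept = ts
    return group[keep]

df["timestamp"] = pd.to_datetime(df["timestamp"])
out = (df.sort_values("timestamp")
         .groupby(["user", "url"], group_keys=False)
         .apply(drop_close)
         .sort_index())
</code></pre>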
<p>Cheers,</p>
|
<python><pandas><dataframe><duplicates><mask>
|
2023-01-05 14:51:15
| 2
| 1,621
|
farid
|
75,020,272
| 8,933,502
|
Calculating an average, when only some "checkpoints" are known
|
<p>I'm trying to wrap my head around pandas/numpy to do something that should probably be simple (but my lack of knowledge...)</p>
<p>The series looks something like this:</p>
<pre><code> 2022-01-01 | 100
2022-01-02 | nan
2022-01-03 | nan
2022-01-04 | 200
2022-01-05 | nan
2022-01-06 | nan
2022-01-07 | nan
2022-01-08 | 250
2022-01-09 | nan
2022-01-10 | 400
2022-01-11 | nan
</code></pre>
<p>This represents consumption. So, in the example above, on the 8th of January we see that it became 250, and the last known value was 200 on the 4th of January. So we know that over that period the average was 12.5 per day. What I want to obtain is exactly this: the daily average (retrospectively). From the example above, this is the result I'm looking for:</p>
<pre><code> 2022-01-01 | 33.3 | 100
2022-01-02 | 33.3 | 133.3
2022-01-03 | 33.3 | 166.6
2022-01-04 | 12.5 | 200
2022-01-05 | 12.5 | 212.5
2022-01-06 | 12.5 | 225
2022-01-07 | 12.5 | 237.5
2022-01-08 | 75 | 250
2022-01-09 | 75 | 325
2022-01-10 | 0 | 400
2022-01-11 | 0 | 400
</code></pre>
<p>The last column is just to check that everything accumulates as expected. It's not needed; I'm fine with having a <code>Series</code> as output.</p>
<p>I've already tried twisting the <code>Series</code> in many ways, including stepping through with the debugger. But I think it's better to ask someone who knows, ideally with some explanation so that I can better understand how to reach that result.</p>
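<p>For reference, the closest I have come (still not sure it is idiomatic) is to work on the known checkpoints only: divide each forward difference by the number of days until the next checkpoint, then forward-fill that daily rate over the full index. The interpolated column is only there as the check:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

s = pd.Series([100, np.nan, np.nan, 200, np.nan, np.nan, np.nan, 250, np.nan, 400, np.nan],
              index=pd.date_range("2022-01-01", periods=11, freq="D"))

known = s.dropna()
gap_days = known.index.to_series().diff().shift(-1).dt.days
rate = (known.diff().shift(-1) / gap_days).fillna(0)   # last checkpoint gets 0
daily_rate = rate.reindex(s.index).ffill()             # 33.3, 33.3, 33.3, 12.5, ...
check = s.interpolate(method="time").ffill()           # the cumulated column, for checking only
</code></pre>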
|
<python><pandas><numpy><time-series><series>
|
2023-01-05 14:51:09
| 1
| 842
|
Samuel GIFFARD
|
75,020,251
| 18,205,996
|
Adding values to cells in a specific column in HTML
|
<p>I am creating a very basic table in HTML for my Django project.
I got some help from <a href="https://stackoverflow.com/questions/75011084/generating-new-html-rows-for-each-document-from-firestore-in-python-django/75011558#75011558">Here</a> to learn how to pass a list from my Django app to my HTML and generate rows for my values.
My question this time is: how can I target the second, third, or any particular column for my list? For example, suppose I have a list of values and want to add all of them to Column 3 of my table.
Here is the code I tried, where I wanted the {{ name }} values added to column two, but it keeps adding the cells to the first column:</p>
<pre><code>
<html>
<body>
<table border="1" cellpadding = "5" cellspacing="5">
<tr>
<th> IDs </th>
<th> Names </th>
<th> Heights </th>
</tr>
{% for id in myIDs %}
<tr>
<td>{{ id }}</td>
</tr>
{% endfor %}
<tr>
{% for name in myNames %}
<td>
<tr>
<td>{{ name }}</td>
</tr>
</td>
{% endfor %}
</tr>
</table>
</body>
</html>
</code></pre>
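<p>What I suspect I actually need (but have not tried yet) is to build the rows in the view and loop over them once in the template, instead of looping per column. A minimal sketch of the view side, where the variable and template names are just placeholders:</p>
<pre class="lang-py prettyprint-override"><code>from django.shortcuts import render

def table_view(request):
    my_ids = [1, 2, 3]              # placeholder data
    my_names = ["a", "b", "c"]
    my_heights = [170, 180, 175]
    rows = zip(my_ids, my_names, my_heights)
    return render(request, "table.html", {"rows": rows})
</code></pre>
<p>The template would then contain a single <code>{% for id, name, height in rows %}</code> loop that emits one table row with three cells per iteration, which I believe avoids the column problem entirely.</p>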
|
<python><html><django>
|
2023-01-05 14:49:27
| 2
| 597
|
taha khamis
|
75,020,244
| 3,139,811
|
Pywin32 COM not able to set attribute keyerror
|
<p>I'm trying to automate an application (CANoe) using its COM interface. There are some problems which I can't get my head around.</p>
<pre><code>app = DispatchEx('CANoe.Application')
configs = app.Configuration.TestConfigurations
config = configs.Item(1)
unit = config.TestUnits.Item(1)
unit.Name
unit.Enabled
</code></pre>
<p>Up to here it is fine: Name and Enabled print out a string and a bool value. But as soon as I want to set Enabled to False or True, I get the following:</p>
<pre><code>>>> unit.Enabled = False
Traceback (most recent call last):
File "C:\Python\Python310\lib\site-packages\win32com\client\__init__.py", line 590, in __setattr__
args, defArgs = self._prop_map_put_[attr]
KeyError: 'Enabled'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python\Python310\lib\site-packages\win32com\client\__init__.py", line 592, in __setattr__
raise AttributeError(
AttributeError: '<win32com.gen_py.CANoe 12.0 Type Library.ITestUnit instance at 0x1977650030784>' object has no attribute 'Enabled'
</code></pre>
<p>This is odd, because in VBS it works without any problem (I need to port these scripts to Python). Also, it looks like some attributes, such as "Elements", are missing when I print the internals of the unit object:</p>
<pre><code>>>> print(dir(unit))
['Application', 'CLSID', 'Enabled', 'Name', 'Parent', 'Report', 'Verdict', '_ApplyTypes_', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_get_good_object_', '_get_good_single_object_', '_oleobj_', '_prop_map_get_', '_prop_map_put_', 'coclass_clsid']
>>>
</code></pre>
<p>When I print <code>_prop_map_put_</code>:</p>
<pre><code>>>> print(unit._prop_map_put_)
{}
>>>
</code></pre>
<p>I tried to use setattr to set Enabled to True; it didn't throw an exception, but after that I was not able to call unit.Enabled, because I apparently need to do something with <code>_ApplyTypes_</code>.</p>
|
<python><com><pywin32><canoe>
|
2023-01-05 14:49:08
| 1
| 857
|
John
|
75,020,192
| 13,994,829
|
Python-MultiThreading: Can MultiThreading improve "for loop" performance?
|
<p>As far as I understand:</p>
<ul>
<li>MultiThread is an ideal option for <strong>I/O</strong> applications.</li>
</ul>
<p>Therefore, I tested a "for loop" without any I/O
(see the following code).</p>
<p>However, it reduces the execution time from 6.3 s to 3.7 s.</p>
<p>Is this result correct?</p>
<p>Or is there a mistake in my assumption?</p>
<pre><code>from multiprocessing.dummy import Pool as ThreadPool
import time
# normal
l = []
s = time.time()
for i in range(0, 10000):
for j in range(i):
l.append(j * 10)
e = time.time()
print(f"case1: {e-s}") # 6.3 sec
# multiThread
def func(x):
for i in range(x):
l_.append(i * 10)
with ThreadPool(50) as pool:
l_ = []
s = time.time()
pool.map(func, range(0, 10000))
e = time.time()
print(f"case2: {e-s}") # 3.7 sec
</code></pre>
|
<python><multithreading><python-multithreading>
|
2023-01-05 14:45:13
| 3
| 545
|
Xiang
|
75,020,141
| 353,337
|
Add data to class with `__slots__`, wrap it?
|
<p>I have a Python object <code>a</code> of class <code>A</code> which is out of my control. I don't know anything about <code>A</code>, including its <code>__init__</code>. I would like to add some data to <code>a</code>, but <code>__slots__</code> in <code>A</code> prevents me from doing so. I'd like to work around it <em>somehow</em>.</p>
<p>The object with the extra data needs to function just like an instance of <code>A</code>, so I thought about subclassing <code>A</code> with a "wrapper" that simply adds <code>extra_data</code>. Not sure how to initialize <code>super()</code> though.</p>
<pre class="lang-py prettyprint-override"><code>class A:
"""
This class is out of my control. I know nothing about its internals,
or the data it contains. __slots__ prevents more attributes being added.
"""
__slots__ = ["data"]
def __init__(self, data):
self.data = data
# The initialization happens elsewhere. I don't know how A is initialized.
a = A([1, 2, 3])
# Won't work:
#
# a.t = 1
# setattr(a, "t", 1)
#
# AttributeError: 'A' object has no attribute 't'
class Wrap(A):
def __init__(self, a, extra_data):
self.extra_data = extra_data
super().__init__(a) # ??
w = Wrap(a, [27])
print(w.data) # [1, 2, 3]
print(w.extra_data) # [27]
</code></pre>
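<p>One direction I have considered is plain composition with attribute delegation instead of subclassing (rough sketch below). It behaves like <code>a</code> for attribute access, but it is not an instance of <code>A</code>, so <code>isinstance</code> checks would fail unless <code>__class__</code> were also proxied:</p>
<pre class="lang-py prettyprint-override"><code>class Wrap:
    def __init__(self, wrapped, extra_data):
        self._wrapped = wrapped
        self.extra_data = extra_data

    def __getattr__(self, name):
        # only called for attributes not found on Wrap itself
        return getattr(self._wrapped, name)

w = Wrap(a, [27])
print(w.data)        # [1, 2, 3]
print(w.extra_data)  # [27]
</code></pre>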
<p>Any hints?</p>
|
<python><class><subclass><wrapper>
|
2023-01-05 14:41:55
| 1
| 59,565
|
Nico Schlömer
|
75,019,942
| 12,883,297
|
Add time delta for 2 columns based on the condition on 1 column in pandas
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame([["A","9:00 AM","10:20 AM"],["A","11:12 AM","12:32 PM"],["A","1:03 PM","1:33 PM"],["A","1:34 PM","2:44 PM"],["B","9:00 AM","12:20 PM"],["B","12:56 PM","1:06 PM"],["B","1:07 PM","1:17 PM"],["B","1:18 PM","1:28 PM"]],columns=["id","start_time","end_time"])
</code></pre>
<pre><code>id start_time end_time
A 9:00 AM 10:20 AM
A 11:12 AM 12:32 PM
A 1:03 PM 1:33 PM
A 1:34 PM 2:44 PM
B 9:00 AM 12:20 PM
B 12:56 PM 1:06 PM
B 1:07 PM 1:17 PM
B 1:18 PM 1:28 PM
</code></pre>
<p>I want to add a time delta of 1 hour to both the start_time and end_time columns for each row where start_time >= 12:30 PM.</p>
<p>Expected Output:</p>
<pre><code>df_out = pd.DataFrame([["A","9:00 AM","10:20 AM"],["A","11:12 AM","12:32 PM"],["A","2:03 PM","2:33 PM"],["A","2:34 PM","3:44 PM"],["B","9:00 AM","12:20 PM"],["B","1:56 PM","2:06 PM"],["B","2:07 PM","2:17 PM"],["B","2:18 PM","2:28 PM"]],columns=["id","start_time","end_time"])
</code></pre>
<pre><code>id start_time end_time
A 9:00 AM 10:20 AM
A 11:12 AM 12:32 PM
A 2:03 PM 2:33 PM
A 2:34 PM 3:44 PM
B 9:00 AM 12:20 PM
B 1:56 PM 2:06 PM
B 2:07 PM 2:17 PM
B 2:18 PM 2:28 PM
</code></pre>
<p>How to do it?</p>
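<p>In case it helps to show where I am: my current attempt parses the times with a fixed format, builds a mask on the parsed start time, and shifts both columns where the mask is True (the <code>strftime</code>/<code>lstrip</code> at the end only converts back to the original 12-hour text format). I am not sure this is the idiomatic way:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

fmt = "%I:%M %p"
start = pd.to_datetime(df["start_time"], format=fmt)
mask = start >= pd.to_datetime("12:30 PM", format=fmt)

for col in ["start_time", "end_time"]:
    shifted = pd.to_datetime(df.loc[mask, col], format=fmt) + pd.Timedelta(hours=1)
    df.loc[mask, col] = shifted.dt.strftime(fmt).str.lstrip("0")
</code></pre>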
|
<python><pandas><dataframe><datetime><time-series>
|
2023-01-05 14:26:24
| 1
| 611
|
Chethan
|
75,019,865
| 10,891,675
|
Python unit tests - mocking imported class methods
|
<p>I would like to mock some imported class methods and module functions for my unit tests. I tried several ways to define the mocked values but I don't understand why they are not taken into account.</p>
<p>I wrote some tests following the advices in <a href="https://stackoverflow.com/questions/16134281/python-mocking-a-function-from-an-imported-module">Python Mocking a function from an imported module</a>.</p>
<p>Here is a piece of code representing the application to test:</p>
<pre><code>from services import myModule1
from services.spec1 import importedClass
class myClass(object):
def __init__(self, param1, param2):
self.param1 = param1
self.param2 = param2
self.param3 = 0
self.param4 = 0
self.myMethod()
def myMethod(self):
newVar = importedClass()
self.param3 = newVar.meth1(self.param2)
calcParam = myModule1.methodMod1(self.param1)
self.param4 = calcParam["keyParam3"]
</code></pre>
<p>I would like to unit test <code>myMethod</code> and I need to mock <code>importedClass</code> and <code>myModule1.methodMod1()</code>.</p>
<p>Here is a new piece of code that I tried for the tests (previous attempts below):</p>
<pre><code>import unittest
from unittest.mock import patch
from my_module import myClass
class test_myClass(unittest.TestCase):
@patch('my_module.importedClass')
@patch('my_module.myModule1')
def test_myMethod(self, mock_mod1, mock_class):
mock_mod1.methodMod1.return_value = {"keyParam3": 5, "keyParam4": 7}
mock_class.meth1.return_value = 2
test_parameters = (0, 0)
test_res = myClass(*test_parameters)
self.assertEqual(test_res.param3, 2)
self.assertEqual(test_res.param4, 5)
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>The mocking raises no error, but the mocked values are not taken into account.</p>
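<p>One thing I plan to try next (but have not verified) is configuring the <em>instance</em> returned by the class mock rather than the class mock itself, since <code>myMethod</code> calls <code>meth1</code> on <code>importedClass()</code>:</p>
<pre class="lang-py prettyprint-override"><code>import unittest
from unittest.mock import patch
from my_module import myClass

class test_myClass(unittest.TestCase):

    @patch('my_module.importedClass')
    @patch('my_module.myModule1')
    def test_myMethod(self, mock_mod1, mock_class):
        # importedClass() returns mock_class.return_value, and meth1 is called on that instance
        mock_class.return_value.meth1.return_value = 2
        mock_mod1.methodMod1.return_value = {"keyParam3": 5, "keyParam4": 7}

        test_res = myClass(0, 0)

        self.assertEqual(test_res.param3, 2)
        self.assertEqual(test_res.param4, 5)
</code></pre>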
<h1>Previous attempts</h1>
<p>What I tried for each of them, using 2 different approaches:</p>
<pre><code>import unittest
from unittest.mock import Mock, patch
from my_module import myClass
import services
class test_myClass(unittest.TestCase):
def setUp(self):
services.spec1 = Mock()
services.spec1.importedClass.meth1.return_value = 2
@patch('services.myModule1')
def test_myMethod(self, my_mock):
my_mock.methodMod1.return_value = {"keyParam3": 5, "keyParam4": 7}
test_parameters = (0, 0)
test_res = myClass(*test_parameters)
self.assertEqual(test_res.param3, 2)
self.assertEqual(test_res.param4, 5)
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>The result is that the calculated attributes are not updated and are still 0, so the test fails.</p>
<p>I also tried with <code>services = Mock()</code> and defined return values for each part, to regroup each mock in <code>setUp</code> method or in a <code>@patch</code>, but nothing worked.</p>
<p>I also tried <code>my_module.spec1 = Mock()</code>, to make the mock global, or even <code>self.spec1 = Mock()</code> to make it very local to the test's context (if I understood the differences correctly, which I'm not really sure about), but nothing worked.</p>
|
<python><unit-testing><mocking>
|
2023-01-05 14:19:51
| 1
| 696
|
Christophe
|
75,019,800
| 1,195,909
|
SQLAlchemy inheritance relations issue
|
<p>My SQLAlchemy model has a base class <code>Alpha</code> and two derived classes <code>Beta</code> and <code>Gamma</code>. <code>Beta</code> is also a field of <code>Gamma</code>. My implementation results in the error message below. How do I fix this join condition issue?</p>
<blockquote>
<p>AmbiguousForeignKeysError: Could not determine join condition between
parent/child tables on relationship Beta.gammas - there are multiple
foreign key paths linking the tables. Specify the 'foreign_keys'
argument, providing a list of those columns which should be counted as
containing a foreign key reference to the parent table.</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>class Alpha(declarative_base()):
__tablename__ = 'alpha'
id_ = Column(Integer, primary_key=True)
data = Column(Integer)
type_ = Column(String(50))
__mapper_args__ = {
"polymorphic_identity": "alpha",
"polymorphic_on": type_,
}
class Beta(Alpha):
__tablename__ = 'beta'
id_ = Column(Integer, ForeignKey('alpha.id_'), primary_key=True)
foo = Column(Integer)
# Related
gammas = relationship('Gamma', back_populates='beta')
__mapper_args__ = {
"polymorphic_identity": 'beta',
}
class Gamma(Alpha):
__tablename__ = 'gamma'
id_ = Column(Integer, ForeignKey('alpha.id_'), primary_key=True)
bar = Column(Integer)
beta_id = Column(Integer, ForeignKey('beta.id_'))
beta = relationship('Beta', back_populates='gammas')
__mapper_args__ = {
"polymorphic_identity": 'gamma',
}
declarative_base().metadata.create_all(engine, checkfirst=True)
beta = Beta()
</code></pre>
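<p>Following the hint in the error message, the version I am about to try passes <code>foreign_keys</code> on both sides so the join knows that <code>Gamma.beta_id</code> is the linking column (and uses a single <code>Base</code>, since I noticed I was calling <code>declarative_base()</code> twice). I have not confirmed that this is enough:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Alpha(Base):
    __tablename__ = 'alpha'
    id_ = Column(Integer, primary_key=True)
    data = Column(Integer)
    type_ = Column(String(50))
    __mapper_args__ = {"polymorphic_identity": "alpha", "polymorphic_on": type_}

class Beta(Alpha):
    __tablename__ = 'beta'
    id_ = Column(Integer, ForeignKey('alpha.id_'), primary_key=True)
    foo = Column(Integer)
    gammas = relationship('Gamma', back_populates='beta', foreign_keys='Gamma.beta_id')
    __mapper_args__ = {"polymorphic_identity": 'beta'}

class Gamma(Alpha):
    __tablename__ = 'gamma'
    id_ = Column(Integer, ForeignKey('alpha.id_'), primary_key=True)
    bar = Column(Integer)
    beta_id = Column(Integer, ForeignKey('beta.id_'))
    beta = relationship('Beta', back_populates='gammas', foreign_keys=[beta_id])
    __mapper_args__ = {"polymorphic_identity": 'gamma'}
</code></pre>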
|
<python><inheritance><sqlalchemy>
|
2023-01-05 14:15:47
| 1
| 3,463
|
msampaio
|
75,019,532
| 13,488,334
|
Local imports work in bundled PyInstaller app but not in Python source
|
<p>This issue has plagued me for the last few months, and I need a more experienced opinion. We have a CLI Python application that uses a gRPC server to communicate with other backend services. It's structured something like this:</p>
<pre><code>app
- gRPC_service
- __init__.py
- service_related_files.py
- service_tests
- __init__.py
- test_service.py
- src
- __init__.py
- src_files.py
- python-win
- python37.dll
- gRPC_service.spec
</code></pre>
<p>A few notes about the application:</p>
<ul>
<li><p>The <code>src</code> directory houses the lower level machine learning code. The <code>gRPC_service</code> directory acts as a wrapper around the <code>src</code> code and sends processed requests to a given client</p>
</li>
<li><p>The <code>python-win</code> directory is a specific version of the Python Interpreter. Many things throughout the code base are reliant on a specific version of Python (3.7.9). This was our solution to making the code base a bit more portable. A developer can clone the repository and immediately have the necessary version of Python installed along with the plethora of 3rd party dependencies after running a <code>Create-VirtualEnvironment.ps1</code> script that uses the <code>python.exe</code> in the <code>python-win</code> directory.</p>
</li>
</ul>
<p>The number one issue I have faced when developing this application is namespace and importing issues, and I'm not exactly sure what's causing it. We have a sub-application within the <code>src</code> directory that only imports modules from within the <code>src</code> package and uses 3rd party libraries. This works just fine with no <code>ModuleNotFound</code> errors.</p>
<p>Issues begin to surface when importing <code>src</code> modules from within the <code>gRPC_service</code> package. Even though both packages have <code>__init__.py</code> files, <code>ModuleNotFound</code> errors will be thrown if the PYTHONPATH is not modified at runtime. A work-around solution to this is to collect all the different file paths to each package within <code>app</code> and add them to <code>sys.path</code>. It goes without saying this is inconvenient.</p>
<p>This works for importing a majority of the modules, but to add to the confusion, some of the modules from the <code>src</code> package can only be imported after modifying <code>sys.path</code> AND adding a <code>src.</code> prefix to all of the local imports within the <code>src</code> package. Adding this prefix to local imports breaks the sub-application in the <code>src</code> package that I was speaking of earlier. A 'src can't be found' error gets thrown in the sub-app when doing this.</p>
<p>Additionally, without adding the <code>src.</code> prefixes to corresponding imports, no <code>ModuleNotFound</code> errors are thrown when the gRPC_service is bundled as a PyInstaller <code>.exe</code>. I have modified the pathex within the <code>.spec</code> file. The app works just fine when bundled - how do I get equivalent behavior when just running Python from source?</p>
<p>So I am looking for some advice from Python devs who have worked on large code bases. Is this issue common? What can I do to alleviate the inconvenience of modifying the PYTHONPATH at runtime? Is there a fix-all solution that tends to the needs of both applications within this codebase?</p>
|
<python><python-3.x><pyinstaller><python-import><python-internals>
|
2023-01-05 13:55:43
| 1
| 394
|
wisenickel
|
75,019,496
| 19,674,402
|
Enable "Try it out" in OpenAPI so that no need to click
|
<p>I'm using FastAPI and OpenAPI/Swagger UI to see and test my endpoints.</p>
<p>Each time I use an endpoint for the first time, in order to test it, I have to first click the <kbd>Try it out</kbd> button, which is getting tedious.</p>
<p>Is there a way to make it disappear and be able to test the endpoint instantly?</p>
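<p>The closest thing I have found so far is FastAPI's <code>swagger_ui_parameters</code> argument; my assumption (unverified) is that Swagger UI's <code>tryItOutEnabled</code> option can be passed through it:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI

# assumption: "tryItOutEnabled" is forwarded as-is to Swagger UI
app = FastAPI(swagger_ui_parameters={"tryItOutEnabled": True})

@app.get("/ping")
def ping():
    return {"ok": True}
</code></pre>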
|
<python><swagger><fastapi><swagger-ui><openapi>
|
2023-01-05 13:53:54
| 1
| 496
|
PythonForEver
|
75,019,472
| 8,868,699
|
Apply pandas groupby, sum and mean based on another column
|
<p>I'm working on a DataFrame, trying to group, sum, and take the mean based on another column, with all three operations in one go. I'm unable to find a solution. Please help.</p>
<p><a href="https://i.sstatic.net/nRxJH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nRxJH.png" alt="enter image description here" /></a></p>
<p>Expected Output:
<a href="https://i.sstatic.net/9pItO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9pItO.png" alt="enter image description here" /></a></p>
<pre><code>mydf = pd.DataFrame.from_records([{'Month-Year': 'Apr-2020',
'meets_target_str': 'Target Met',
'granularity_date': '2020-04-01',
'meets_target': 31,
'total' : 67,
'percentage':46.26},
{'Month-Year': 'Apr-2020',
'meets_target_str': 'Target not met',
'granularity_date': '2020-04-01',
'meets_target': 36,
'total' : 67,
'percentage':53.73},
{'Month-Year': 'May-2020',
'meets_target_str': 'Target not met',
'granularity_date': '2020-05-01',
'meets_target': 41,
'total' : 97,
'percentage':42.26},
{'Month-Year': 'May-2020',
'meets_target_str': 'Target Met',
'granularity_date': '2020-05-01',
'meets_target': 56,
'total' : 97,
'percentage':57.73},
{'Month-Year': 'Jun-2020',
'meets_target_str': 'Target not met',
'granularity_date': '2020-06-01',
'meets_target': 41,
'total' : 70,
'percentage':58.47},
{'Month-Year': 'Jun-2020',
'meets_target_str': 'Target Met',
'granularity_date': '2020-06-01',
'meets_target': 29,
'total' : 70,
'percentage':41.42
}])
</code></pre>
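<p>Since my expected output is only a screenshot, here is a guess at what I mean using named aggregation; which column should be summed versus averaged is an assumption on my part:</p>
<pre class="lang-py prettyprint-override"><code>out = (
    mydf.groupby("Month-Year", sort=False)
        .agg(meets_target_sum=("meets_target", "sum"),
             total=("total", "mean"),
             percentage_mean=("percentage", "mean"))
        .reset_index()
)
print(out)
</code></pre>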
|
<python><pandas><group-by>
|
2023-01-05 13:51:40
| 2
| 1,649
|
Hayat
|
75,019,357
| 5,084,566
|
How to write videos with half of duration using OpenCV?
|
<p>I have mp4/avi videos with a duration of 10 minutes at 30 FPS. I want to reduce the duration to 5 minutes but keep the FPS at 30. That means the new videos should drop half of the frames (for example, keeping <code>f0 f2 f4</code> compared with the original <code>f0 f1 f2 f3 f4</code>). How can I do this with OpenCV? This is my current code to get the duration and FPS of the video:</p>
<pre><code># import module
import cv2
import datetime
# create video capture object
data = cv2.VideoCapture('C:/Users/Asus/Documents/videoDuration.mp4')
# count the number of frames
frames = data.get(cv2.CAP_PROP_FRAME_COUNT)
fps = data.get(cv2.CAP_PROP_FPS)
# calculate duration of the video
seconds = round(frames / fps)
video_time = datetime.timedelta(seconds=seconds)
print(f"duration in seconds: {seconds}")
print(f"video time: {video_time}")
</code></pre>
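<p>To make the question concrete, this is the direction I have in mind (a rough sketch, with placeholder file names): keep every other frame and write the result with the original FPS.</p>
<pre class="lang-py prettyprint-override"><code>import cv2

cap = cv2.VideoCapture('input.mp4')          # placeholder path
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % 2 == 0:                           # keep f0, f2, f4, ... -> half the frames, same FPS
        out.write(frame)
    i += 1

cap.release()
out.release()
</code></pre>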
|
<python><python-3.x><opencv>
|
2023-01-05 13:42:58
| 2
| 395
|
Moon Lee
|
75,019,347
| 8,844,500
|
Style not compatible with Jupyter-Book when using python-sphinx autodoc to generate API documentation
|
<p>I'd love to use the elegant Jupyter-Book to construct documentation for my packages. I've tried quite a lot, and here is an example I'm finalizing these days:</p>
<ul>
<li>the package is available on a gitlab repository <a href="https://framagit.org/nlp/iamtokenizing" rel="nofollow noreferrer">https://framagit.org/nlp/iamtokenizing</a></li>
<li>the associated documentation is constructed (almost) automatically by calling <code>jupyter-book build documentation/</code> on CI/CL-deployment of the master branch of the repository, and visible there : <a href="https://nlp.frama.io/iamtokenizing" rel="nofollow noreferrer">https://nlp.frama.io/iamtokenizing</a></li>
</ul>
<p>One can see I use</p>
<ul>
<li>jupyter-notebook to generate user guides or tutorials,</li>
<li>markdown to generate simple static pages without codes, and</li>
<li><a href="https://jupyterbook.org/en/stable/advanced/developers.html" rel="nofollow noreferrer">sphinx autodoc</a> to generate api documentation. Note I'm fairly newbie using sphinx and autodoc.</li>
</ul>
<p>I'd like to enhance the presentation of the auto-generated API documentation. As one can see, e.g. on <a href="https://nlp.frama.io/iamtokenizing/api/chargrams.html" rel="nofollow noreferrer">this page</a>, there is no right sidebar in the API documentation, whereas one might want the names of the class methods or module functions listed there.</p>
<p>Also, the parameter list might be better presented as an indented list instead of being queued up in parentheses, and the class name might be shorter; for instance, only the name of the class, not the full path to it, might appear. In short, I'd prefer having</p>
<pre class="lang-py prettyprint-override"><code>class BaseTokenizer(string: str = '',
subtoksep: chr = ' ',
intervals: Union[None, EvenSizedSortedSet] = None,
tokens: list = [])
</code></pre>
<p>instead of the monolithic presentation</p>
<pre class="lang-py prettyprint-override"><code>class iamtokenizing.base_tokenizer.BaseTokenizer(string: str = '', subtoksep:
chr = ' ', intervals: Union[None,
extractionstring.even_sized_sorted_set.EvenSizedSortedSet] = None, tokens: list = [])
</code></pre>
<p>Is there a way to do all of that, or should I consider raising an issue instead?</p>
|
<python><jupyter-notebook><python-sphinx><documentation-generation><jupyterbook>
|
2023-01-05 13:42:29
| 0
| 329
|
FraSchelle
|
75,019,302
| 4,114,325
|
Python sum over map over list
|
<p>From Mathematica I am used to summing over a map over a list with a very short and concise syntax. E.g. to sum a map over a polynomial function:</p>
<pre><code>myList = {1,2,3};
output = Sum[ x^3+x^2+x , { x, myList } ]
</code></pre>
<p>To do the same thing in Python, I came up with the following syntax:</p>
<pre><code>myList = [1,2,3]
output = sum(list(map(lambda x: x*x*x+x*x+x , myList)))
</code></pre>
<p>My question is: is that the simplest/most efficient way of doing this? It seems to me that there should be a simpler way than nesting three or four built-in functions for such a simple task.</p>
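<p>For comparison, the shortest alternative I have found is a generator expression, although I do not know whether it is considered the canonical way:</p>
<pre class="lang-py prettyprint-override"><code>myList = [1, 2, 3]
output = sum(x**3 + x**2 + x for x in myList)
</code></pre>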
|
<python>
|
2023-01-05 13:39:04
| 2
| 1,023
|
Kagaratsch
|
75,019,188
| 4,847,250
|
How do I compute the barycenter of a histogram represented in polar coordinates
|
<p>I display a histogram as a polar plot and I want to display the barycenter to see if there is a dominant orientation of the histogram.
My issue is that when the histogram has a lot of values near 0 and 2π, the barycenter is wrong (because 0 and 2π are the same angle in the polar plot).</p>
<p>So I get something like this:
<a href="https://i.sstatic.net/aGrTQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aGrTQ.png" alt="enter image description here" /></a></p>
<p>I expect to see the red dot at 0 degrees.</p>
<p>Here's a minimal example code :</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
a = np.random.rand(1000)*3.14*2
a = np.hstack((a,np.ones(100)*0.1))
a = np.hstack((a,np.ones(100)*3.1*2))
f = plt.figure()
f.add_subplot(1,1,1,projection='polar')
n, bins = np.histogram(np.array(a), 50)
n = n / np.sum(n)
plt.hist((bins[0:-1] + bins[1:]) / 2, weights=n, bins=bins, rwidth=1.)
binscenter = (bins[0:-1] + bins[1:]) / 2
moyenne = np.sum(n * binscenter) / np.sum(n)
plt.scatter(moyenne ,0.1,c='r')
plt.show()
</code></pre>
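<p>My current guess is that I need a circular (angular) mean rather than a plain weighted mean, i.e. averaging the bin centres as unit vectors weighted by the bin heights. A sketch reusing <code>n</code> and <code>binscenter</code> from the code above:</p>
<pre class="lang-py prettyprint-override"><code>mean_angle = np.arctan2(np.sum(n * np.sin(binscenter)),
                        np.sum(n * np.cos(binscenter))) % (2 * np.pi)
plt.scatter(mean_angle, 0.1, c='r')
</code></pre>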
|
<python><polar-coordinates>
|
2023-01-05 13:29:58
| 1
| 5,207
|
ymmx
|
75,019,075
| 5,353,753
|
Comparing two dataframes columns
|
<p>I have two dataframes that are structurally identical.</p>
<p>Both has to following format:</p>
<pre><code>file_name | country_name | country_code | .....
</code></pre>
<p>I want to compare the two, and get a percentage of equality for each column.</p>
<p>The second dataframe is the test dataframe, which holds the true values. Some of the values are NaN and should be ignored. So far I have managed to compare the two and get the total number of equal samples for each column; my problem is dividing each of them by the number of relevant samples (those that don't have NaN in the second dataframe) in a "nice way".</p>
<p>For example:</p>
<p>df1</p>
<pre><code>file_name | country_name
1 a
2 b
3 d
4 c
</code></pre>
<p>df2</p>
<pre><code>file_name | country_name
1 a
2 b
3 nan
4 d
</code></pre>
<p>I expect an output of 66% for this column, because 2 of the 3 relevant samples have the same value, and the 4th is NaN so it is excluded from the calculation.</p>
<p>What I've done so far:</p>
<pre><code>test_set = pd.read_excel(file_path)
test_set = test_set.astype(str)
a_set = pd.read_excel(file2_path)
a_set = a_set.astype(str)
merged_df = a_set.merge(test_set, on='file_name')
for field in fields:
if field == 'file_name':
continue
merged_df[field] = merged_df.apply(lambda x: 0 if x[field + '_y'] == 'nan' else 1 if x[field + '_x'].lower() == x[field + '_y'].lower() else 0, axis=1)
scores = merged_df.drop('file_name', axis=1).sum(axis=0)
</code></pre>
<p>This gives me these (correct) results:</p>
<pre><code>country_name 14
country_code 0
state_name 4
state_code 59
city 74
...
</code></pre>
<p>But now I want to divide each of them by the number of samples that don't contain NaN in the corresponding field of the test_set dataframe. I can think of naive ways to do this, like creating another column that holds the number of non-NaN values for each of these columns, but I'm looking for a prettier solution.</p>
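<p>The least ugly idea I have so far (it still feels clunky) is to count the usable test values per field directly on the merged frame, relying on the fact that NaNs became the string 'nan' after <code>astype(str)</code>:</p>
<pre class="lang-py prettyprint-override"><code>y_cols = [f + '_y' for f in fields if f != 'file_name']
valid = (merged_df[y_cols] != 'nan').sum()      # rows with a usable test value, per field
valid.index = [c[:-2] for c in valid.index]     # drop the '_y' merge suffix
percentages = scores[valid.index] / valid * 100
</code></pre>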
|
<python><pandas><dataframe>
|
2023-01-05 13:20:41
| 2
| 40,569
|
sagi
|
75,018,977
| 1,325,133
|
Splitting nested lists into DataFrame
|
<p>I have the following data:</p>
<pre><code>{
"load_avg_1min": "0.40",
"load_avg_5min": "0.25",
"load_avg_15min": "0.28",
"processes_total": "719",
"processes_running": "3",
"cpu_state_user": "34.19",
"cpu_state_kernel": "10.36",
"cpu_state_idle": "55.44",
"TABLE_cpu_usage": {
"ROW_cpu_usage": [
{
"cpuid": "0",
"user": "32.98",
"kernel": "11.34",
"idle": "55.67"
},
{
"cpuid": "1",
"user": "34.37",
"kernel": "10.41",
"idle": "55.20"
}
]
},
"memory_usage_total": "8159468",
"memory_usage_used": "4310284",
"memory_usage_free": "3849184",
"vm_total": "0",
"vm_free": "0",
"mem_buffers": "130460",
"mem_cached": "2480340",
"current_memory_status": "OK"
}
</code></pre>
<p>I'm looking to create a single-row DataFrame. However, I want each element of <code>ROW_cpu_usage</code> to be flattened into its own columns, i.e.:</p>
<pre><code>cpuid.0 | user.0 ... cpuid.1 etc etc
</code></pre>
<p>I'm currently using the code below, but I can't see any clean way of using the <code>meta</code> or <code>record_path</code> arguments.</p>
<pre><code>df = pd.json_normalize(data=json_payload)
</code></pre>
<p>EDIT: The DF should also include the other data from <code>data</code> such as <code>load_avg_1min</code> etc.</p>
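<p>The closest I have come is pre-flattening the nested list manually before calling <code>json_normalize</code> (sketch below), but I would prefer something built in if it exists:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

flat = {k: v for k, v in json_payload.items() if k != "TABLE_cpu_usage"}
for row in json_payload["TABLE_cpu_usage"]["ROW_cpu_usage"]:
    cpu = row["cpuid"]
    for k, v in row.items():
        flat[f"{k}.{cpu}"] = v               # e.g. user.0, kernel.1, ...

df = pd.json_normalize(flat)                 # one row, one column per flattened key
</code></pre>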
<p>Thanks,</p>
|
<python><pandas><dataframe>
|
2023-01-05 13:13:45
| 1
| 16,889
|
felix001
|
75,018,833
| 3,444,107
|
Flink Postgres query to jsonb column gives jdbc failure
|
<p>I am using PyFlink 1.16 and flink-connector-jdbc-1.16.0.jar to query a Postgres table with a jsonb column:</p>
<pre><code>sql = "SELECT entity_id FROM event_files"
table2 = table_env.sql_query(sql)
table2.execute().print()
</code></pre>
<p>It failed with this error:</p>
<pre><code>Caused by: java.lang.UnsupportedOperationException: Doesn't support Postgres type 'jsonb' yet
at org.apache.flink.connector.jdbc.dialect.psql.PostgresTypeMapper.mapping(PostgresTypeMapper.java:173)
</code></pre>
<p>I get this error despite not using the jsonb column.</p>
<p>I patched PostgresTypeMapper.java to map <code>jsonb</code> to <code>varchar</code>. Now the above code is working. But I want to query by the jsonb column, payload. So I tried to cast it to a varchar column, P1:</p>
<pre><code>sql = "SELECT entity_id, CAST(payload AS varchar) AS P1 FROM event_files"
table2 = table_env.sql_query(sql)
table2.execute().print()
</code></pre>
<p>but that fails with:</p>
<pre><code>Caused by: java.lang.ClassCastException: class org.postgresql.util.PGobject cannot be cast to class java.lang.String (org.postgresql.util.PGobject is in unnamed module of loader org.apache.flink.util.ChildFirstClassLoader @38c50913; java.lang.String is in module java.base of loader 'bootstrap')
at org.apache.flink.connector.jdbc.converter.AbstractJdbcRowConverter.lambda$createInternalConverter$224afae6$10(AbstractJdbcRowConverter.java:176)
</code></pre>
<p>I have been trying to find ways around this including creating a view from event_files, but nothing has worked so far.</p>
<h3>Details on patching flink-connector-jdbc</h3>
<p>I cloned the flink-connector-jdbc project from GitHub and added a few lines to flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/psql/PostgresTypeMapper.java:</p>
<pre class="lang-java prettyprint-override"><code> private static final String PG_CHARACTER_VARYING_ARRAY = "_varchar";
private static final String PG_JSONB = "jsonb";
case PG_DATE_ARRAY:
return DataTypes.ARRAY(DataTypes.DATE());
case PG_JSONB:
return DataTypes.STRING();
</code></pre>
<p>I build it with Maven.</p>
|
<python><java><postgresql><apache-flink><pyflink>
|
2023-01-05 13:02:12
| 0
| 1,032
|
Sami Badawi
|
75,018,635
| 1,397,946
|
Detect equidistant lines on an image
|
<p>I am trying to detect horizontal distance markers on a measurement plate like in the picture below. The marked area shows which lines I need to detect. You will notice there is a problem in the image: an extra line (marked with an arrow) that should not be there. It's just an artifact, and it's messing with my initial algorithm for detecting the "true lines" (the lines that have corresponding numbers to the left).</p>
<p><a href="https://i.sstatic.net/f0Psw.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f0Psw.jpg" alt="plate" /></a></p>
<p>With kernel density estimation, I can establish what is the most common distance between the consecutive lines, e.g. 150 pixels. I am later looping through all lines, trying to find candidates for the ruler lines. The algorithm:</p>
<pre class="lang-py prettyprint-override"><code>margin = 30 # in pixels
ruler_distance = 150 # established with kernel density estimation, we could use median too
candidates = defaultdict(list)
for i, line1 in enumerate(merged):
candidates[i].append(line1)
for j, line2 in enumerate(merged):
if j > i:
avg_y1 = (line1[1] + line1[3]) / 2 # line[1] and line[3] are y-coordinates
avg_y2 = (line2[1] + line2[3]) / 2
diff = avg_y2 - avg_y1
if ruler_distance - margin <= diff <= ruler_distance + margin:
candidates[i].append(line2)
ruler_distance = diff + avg_ruler_distance
</code></pre>
<p>The trouble is, the algorithm is very volatile. It works for <code>margin=30</code>, but if I change it by +/- 10, it will fall apart. Obviously the issue is perspective. The lines are equidistant in reality, but not with perspective.</p>
<p>Maybe I need a radically different approach? The presented image is a result of detecting horizontal lines, merging them together and finding coordinates with Hough transform (in OpenCV). It could be tempting to further erode the marked line, but following this method leads to eliminating some markers in other images.</p>
<p>Data (can be read with e.g. <code>numpy.loadtxt</code>):</p>
<pre><code>23 18 882 17
1449 445 2841 454
508 625 1887 645
746 655 2092 670
548 1140 1998 1140
692 1180 1739 1180
480 1362 892 1362
807 1988 1234 1988
795 2116 1575 2116
783 2254 1717 2254
770 2395 1882 2395
756 2542 1878 2542
1008 2607 1332 2607
742 2695 1892 2695
729 2854 1900 2854
715 3016 1907 3016
387 3531 1949 3533
1515 4026 3023 4008
</code></pre>
|
<python><opencv><computer-vision>
|
2023-01-05 12:47:57
| 0
| 11,517
|
Lukasz Tracewski
|
75,018,630
| 1,058,671
|
Why doesn't pyright allow str.join of a non-literal set?
|
<p>The following code produces the error message:</p>
<blockquote>
<ol>
<li>Operator "|" not supported for types "set[str]" and "set[str]"</li>
</ol>
<p>...... Operator "|" not supported for types "set[str]" and "set[str]" when expected type is "Iterable[LiteralString]"</p>
</blockquote>
<pre><code>def update_no_proxy():
def _normalize_url(url: str) -> str:
return urlparse(url).netloc
old_no_proxy_urls = set(os.getenv('NO_PROXY', '').split(','))
no_proxy_new_urls = {_normalize_url(url) for url in NO_PROXY_URLS}
no_proxy_new_urls = {*no_proxy_new_urls, get_local_ip_address()}
no_proxy_urls: str = ','.join(old_no_proxy_urls | no_proxy_new_urls)
for env_name in NO_PROXY_ENV_NAMES:
os.environ[env_name] = no_proxy_urls.strip(', ')
</code></pre>
<p>However, the following code does not produce the error:</p>
<pre><code>def update_no_proxy():
def _normalize_url(url: str) -> str:
return urlparse(url).netloc
old_no_proxy_urls = set(os.getenv('NO_PROXY', '').split(','))
no_proxy_new_urls = {_normalize_url(url) for url in NO_PROXY_URLS}
no_proxy_new_urls = {*no_proxy_new_urls, get_local_ip_address()}
all_no_proxy_urls = old_no_proxy_urls | no_proxy_new_urls
no_proxy_urls = ','.join(all_no_proxy_urls)
for env_name in NO_PROXY_ENV_NAMES:
os.environ[env_name] = no_proxy_urls.strip(', ')
</code></pre>
<p>Why does putting <code>all_no_proxy_urls</code> in a new variable solve the problem?</p>
|
<python><python-3.x><python-typing><python-3.10><pyright>
|
2023-01-05 12:47:20
| 0
| 6,651
|
Yam Mesicka
|
75,018,374
| 20,078,696
|
Why is the turtle offset?
|
<p>I am trying to create a program to move the turtle to where the mouse is.
Right now I am doing:</p>
<pre><code>import turtle
t = turtle.Turtle()
canvas = turtle.getcanvas()
width = canvas.winfo_width()
height = canvas.winfo_height()
midpointX = width / 2
midpointY = height / 2
t.speed(0)
while True:
mouseX, mouseY = canvas.winfo_pointerxy()
turtleX = mouseX - midpointX
turtleY = (mouseY - midpointY) * -1
t.goto(turtleX, turtleY)
</code></pre>
<p>The turtle is offset when I run it in PyCharm or through the command line, but not when I run it on replit.</p>
<p>I am using Windows 11 if that helps.</p>
<p>This is the extent to which the turtle goes if my mouse is on the edge of the screen:
<a href="https://i.sstatic.net/ujMT3l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ujMT3l.png" alt="This is the extent to which the turtle goes if my mouse is on the edge of the screen" /></a></p>
|
<python><turtle-graphics><python-turtle>
|
2023-01-05 12:26:00
| 1
| 789
|
sbottingota
|
75,018,324
| 1,240,487
|
What is the difference between pydantic Field and Annotated?
|
<p>The two concepts <strong>Field</strong> and <strong>Annotated</strong> seem very similar in functionality. For example, I can define the same variable either way:</p>
<pre><code>temperature: float = Field(0.0, ge=0, le=1)
temperature: Annotated[confloat(ge=0, le=1),...] = 0.0
</code></pre>
<p>Is there any drawback of using only Field or Annotated?</p>
|
<python><pydantic>
|
2023-01-05 12:22:10
| 2
| 1,373
|
gench
|
75,018,307
| 12,210,908
|
Undefined Reference to major, minor and makedev in libpython*.so file during Python-2.7.13 and Python-3.5.7 build
|
<p>I am cross-compiling <code>python-2.7.13</code> and <code>python-3.5.7</code> from scratch using the following configuration command line options to the <code>./configure</code> binary:</p>
<pre><code>CONFIG_SITE=config.site ./configure \
--host=arm-linux-gnueabi \
--build=x86_64-pc-linux-gnu \
CC=/toolchain/gcc-arm-8.2-2018.08-x86_64-arm-linux-gnueabi/bin/arm-linux-gnueabi-gcc \
RANLIB=/toolchain/gcc-arm-8.2-2018.08-x86_64-arm-linux-gnueabi/bin/arm-linux-gnueabi-ranlib \
READELF=/toolchain/gcc-arm-8.2-2018.08-x86_64-arm-linux-gnueabi/bin/arm-linux-gnueabi-readelf \
--enable-shared --enable-ipv6 --without-cxx-main --without-doc-strings --with-system-ffi --with-pydebug
</code></pre>
<p>After the configuration is successful and the required Makefile is generated, when I am executing <code>make</code> to generate the <code>python</code> binary I am getting the following error:</p>
<pre><code>/toolchain/arm/gcc-arm-8.2-2018.08-x86_64-arm-linux-gnueabi/bin/arm-linux-gnueabi-gcc -Xlinker -export-dynamic -o python \
Modules/python.o \
-L. -lpython2.7 -lpthread -ldl -lpthread -lutil -lm
./libpython2.7.so: undefined reference to `minor'
./libpython2.7.so: undefined reference to `major'
./libpython2.7.so: undefined reference to `makedev'
collect2: error: ld returned 1 exit status
make: *** [python] Error 1
</code></pre>
<p>The toolchain which is being used for the cross-compilation is <code>gcc-arm-8.2-2018.08-x86_64-arm-linux-gnueabi</code> downloaded from here: <a href="https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads/8-2-2018-08" rel="nofollow noreferrer">https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads/8-2-2018-08</a></p>
<p>On further inspection of the <code>/Modules/posixmodule.o</code> object file using <code>readelf</code>, I found that the functions <code>major</code>, <code>minor</code> and <code>makedev</code> are in the <code>UNDEF</code> category, hence the linking error.</p>
<pre><code>readelf -s posixmodule.o | grep -i "major"
409: 00004cb8 20 FUNC LOCAL DEFAULT 1 os_major_impl
410: 00004ccc 124 FUNC LOCAL DEFAULT 1 os_major
1288: 0000065c 1 OBJECT LOCAL DEFAULT 5 os_major__doc__
1629: 00000000 0 NOTYPE GLOBAL DEFAULT UND major
readelf -s posixmodule.o | grep -i "major"
409: 00004cb8 20 FUNC LOCAL DEFAULT 1 os_major_impl
410: 00004ccc 124 FUNC LOCAL DEFAULT 1 os_major
1288: 0000065c 1 OBJECT LOCAL DEFAULT 5 os_major__doc__
1629: 00000000 0 NOTYPE GLOBAL DEFAULT UND major
readelf -s posixmodule.o | grep -i "minor"
404: 00004c28 20 FUNC LOCAL DEFAULT 1 os_minor_impl
405: 00004c3c 124 FUNC LOCAL DEFAULT 1 os_minor
1287: 00000658 1 OBJECT LOCAL DEFAULT 5 os_minor__doc__
1628: 00000000 0 NOTYPE GLOBAL DEFAULT UND minor
</code></pre>
<p>Could anyone help to resolve these errors?</p>
|
<python><python-3.x><python-2.7><cross-compiling>
|
2023-01-05 12:20:53
| 1
| 368
|
strikersps
|
75,018,094
| 10,896,385
|
Map dataframe function without lambda
|
<p>I have the following function:</p>
<pre><code>def summarize(text, percentage=.6):
import numpy as np
sentences = nltk.sent_tokenize(text)
sentences = sentences[:int(percentage*len(sentences))]
summary = ''.join([str(sentence) for sentence in sentences])
return summary
</code></pre>
<p>And I want to map it to dataframe rows. It works pretty well when I use the following code :</p>
<pre><code>df['summary'] = df['text'].map(summarize)
</code></pre>
<p>However, when I want to change the percentage variable in this call, e.g. <code>df['summary'] = df['text'].map(summarize(percentage=.8))</code>, it shows an error indicating that another argument, <code>text</code>, is required. Of course, it can be resolved using a <em>lambda</em> function as follows:</p>
<pre><code>df['summary'] = df['text'].map(lambda x: summarize(x, percentage=.8))
</code></pre>
<p>But I do not want to use a lambda in the call. Is there any other way to do it? For example, using <code>kwargs</code> inside the function to refer to the <code>text</code> column in the dataframe? Thank you</p>
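<p>P.S. The two alternatives I am weighing up so far are <code>functools.partial</code> and <code>Series.apply</code> (sketch below), but I am not sure which one is considered idiomatic:</p>
<pre class="lang-py prettyprint-override"><code>from functools import partial

# option 1: fix the keyword argument up front
df['summary'] = df['text'].map(partial(summarize, percentage=.8))

# option 2: Series.apply forwards extra keyword arguments to the function
df['summary'] = df['text'].apply(summarize, percentage=.8)
</code></pre>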
|
<python><pandas>
|
2023-01-05 12:03:28
| 1
| 303
|
Mus
|
75,017,751
| 13,839,945
|
Using multiple CPU in PyTorch
|
<p>I don't have access to any GPUs, but I want to speed up the training of my model created with PyTorch by using more than 1 CPU. I will use the most basic model as an example here.</p>
<p>All I want is for this code to run on multiple CPUs instead of just 1 (Dataset and Network classes in the <em>Appendix</em>).</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
df = pd.read_pickle('path/to/data')
X = df.drop(columns=['target'])
y = df[['target']]
train_data = CustomDataset(X, y)
train_loader = DataLoader(
dataset=train_data,
batch_size=64
)
model = Network(X.shape[-1])
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
epochs = 10
for e in range(1, epochs + 1):
train_loss = .0
model.train()
for batch_id, (data, labels) in enumerate(train_loader):
optimizer.zero_grad()
target = model(data)
loss = criterion(target, labels)
loss.backward()
optimizer.step()
train_loss += loss.item()
print('Epoch {}:\tTrain Loss: {:.4f}'.format(
e,
train_loss / len(train_loader)
))
</code></pre>
<p>Right now, this code uses only 1 CPU at 100% during training. I want the code to use 4 CPUs at 100% in the training process. There is so much conflicting information out there on the best way to do this, and none of it has really worked. I have tried different approaches, for example <code>model = torch.nn.parallel.DistributedDataParallel(model)</code>, but none of them worked.</p>
<p>Does someone know how to get the most performance out of this code using 4 CPUs? Thanks in advance!</p>
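<p>For completeness, the only CPU-side knobs I have found so far are PyTorch's thread-count settings (a sketch; I am not sure they are the right lever for such a small model):</p>
<pre class="lang-py prettyprint-override"><code>import torch

# intra-op parallelism: threads used inside individual ops (matmul, etc.)
torch.set_num_threads(4)
# inter-op parallelism: must be set before any parallel work has started
torch.set_num_interop_threads(4)
</code></pre>
<p>Increasing <code>num_workers</code> on the <code>DataLoader</code> would also parallelise data loading, but as far as I understand it does not speed up the forward/backward passes themselves.</p>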
<p><strong>APPENDIX</strong></p>
<pre class="lang-py prettyprint-override"><code>class CustomDataset(Dataset):
def __init__(
self,
X,
y,
):
self.X = torch.Tensor(X.values)
self.y = torch.Tensor(y.values)
def __getitem__(
self,
index
):
return self.X[index], self.y[index]
def __len__(self):
return len(X)
class Network(nn.Module):
def __init__(
self,
input_size
):
super(Network, self).__init__()
self.input_size = input_size
self.linear_1 = nn.Linear(self.input_size, 32)
self.linear_2 = nn.Linear(32, 1)
def forward(self, data):
output = self.linear_1(data)
output = self.linear_2(output)
return output
</code></pre>
|
<python><pytorch>
|
2023-01-05 11:30:27
| 1
| 341
|
JD.
|
75,017,723
| 10,866,873
|
Python get/set port and identify it without communication
|
<p>I am selecting a random port, which changes each time, for all my applications to communicate on.</p>
<pre class="lang-py prettyprint-override"><code>s = socket(AF_INET, SOCK_STREAM)
s.bind(('', 0))
addr = s.getsockname()
port = addr[1]
s.close()
</code></pre>
<p>Now I'm looking for a good way for other scripts to find that port without prior communication in any form (on Windows), using Windows commands or API calls. Something <strong>like</strong> registering the port in the registry, but without requiring administrative privileges (not that exact approach though, as explained below). A named service port would be nice, but I don't think that exists on Windows.</p>
<p>I know I can use a temp file to store it but that seems like a cheap way to do it and will require some clean-up and error checking.</p>
<p>Currently I am using sys.argv and opening the scripts with the args sent but there are cases where the user will open them without args so its not the best solution for me.</p>
<p>I have also tried using <code>netstat -a -o</code> to get all used ports then using the PIDs from that searching though <code>tasklist /fi pid eq #####</code> to get the name that matches what I'm looking for. However when developing the name is <code>python.exe</code> which can have multiple ports assigned and it also takes a long time to complete so I cannot use that either.</p>
|
<python><python-sockets>
|
2023-01-05 11:27:23
| 0
| 426
|
Scott Paterson
|
75,017,628
| 19,003,861
|
Generate QR Code to URL with variable (page/pk_ID)
|
<p>I am trying to generate a QR code that is unique to the <code>page_id</code>.</p>
<p>The aim is to send a user that <strong>is not</strong> <code>request.user</code> to a specific page (<code>loyalty_card/UserProfile_ID</code>).</p>
<p>As an example:</p>
<ol>
<li>site/loyaltycard/UserProfile_1 - would have qr code leading to site/pageID_1</li>
<li>site/loyaltycard/pageID_2 - would have qr code leading to site/pageID_2</li>
</ol>
<p>here is where I am at.</p>
<pre><code>from io import BytesIO
from qrcode.image.svg import SvgImage
@login_required
def venue_loyalty_card(request, userprofile_id):
profile = UserProfile.objects.filter(user=userprofile_id)
itemised_loyalty_cards = Itemised_Loyatly_Card.objects.filter(user=userprofile_id)
factory = qrcode.image.svg.SvgImage
stream = BytesIO()
qrcode_url = "....com/account/" + str(userprofile_id) + "/"
qrcode_img = qrcode.make(qrcode_url, image_factory=factory, box_size=10)
qrcode_img.save(stream)
return render(request,"main/account/venue_loyalty_card.html", {'itemised_loyalty_cards':itemised_loyalty_cards, 'profile':profile,'qrcode_url':qrcode_url,'qrcode_img':qrcode_img})
</code></pre>
<p><strong>template</strong></p>
<pre><code>{% block content %}
</br></br>
Your Loyalty Cards</br>
</br>
<img style="width: 100%" src="{{qrcode_img|safe}}">
{%for userprofile in profile%}
{%endfor%}
{%for itemised_loyatly_card in itemised_loyalty_cards %}</br>
{% if request.user.id == itemised_loyatly_card.user.id %}
Total Points: {{itemised_loyatly_card.sum}}</br>
User : {{itemised_loyatly_card.user.id}}</br>
Venue :{{itemised_loyatly_card.venue}}</br>
Add points here: {{itemised_loyatly_card.points}}</br>
{%else%}
{%endif%}
{%endfor%}
{%endblock%}
</code></pre>
<p><strong>url</strong></p>
<pre><code>path('loyalty_card/<userprofile_id>', views.venue_loyalty_card,name="venue-loyalty-card"),
</code></pre>
<p>Current error message on the console (edit following Rudra's request):</p>
<pre><code>System check identified no issues (0 silenced).
January 05, 2023 - 11:55:32
Django version 4.0.6, using settings 'mysite.settings.dev'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
[05/Jan/2023 11:55:35] "GET /loyalty_card/17 HTTP/1.1" 200 11017
Internal Server Error: /loyalty_card/<qrcode.image.svg.SvgImage object at 0x00000227BFB77EE0>
Traceback (most recent call last):
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\fields\__init__.py", line 1988, in get_prep_value
return int(value)
ValueError: invalid literal for int() with base 10: '<qrcode.image.svg.SvgImage object at 0x00000227BFB77EE0>'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\CeX\Virtual\lib\site-packages\django\core\handlers\exception.py", line 55, in inner
response = get_response(request)
File "C:\Users\CeX\Virtual\lib\site-packages\django\core\handlers\base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\CeX\Virtual\lib\site-packages\django\contrib\auth\decorators.py", line 23, in _wrapped_view
return view_func(request, *args, **kwargs)
File "C:\Users\CeX\Dev\mysite\MyApp\main\views.py", line 730, in venue_loyalty_card
profile = UserProfile.objects.filter(user=userprofile_id)
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\query.py", line 1071, in filter
return self._filter_or_exclude(False, args, kwargs)
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\query.py", line 1089, in _filter_or_exclude
clone._filter_or_exclude_inplace(negate, args, kwargs)
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\query.py", line 1096, in _filter_or_exclude_inplace
self._query.add_q(Q(*args, **kwargs))
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\sql\query.py", line 1502, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\sql\query.py", line 1532, in _add_q
child_clause, needed_inner = self.build_filter(
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\sql\query.py", line 1448, in build_filter
condition = self.build_lookup(lookups, col, value)
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\sql\query.py", line 1273, in build_lookup
lookup = lookup_class(lhs, rhs)
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\lookups.py", line 27, in __init__
self.rhs = self.get_prep_lookup()
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\fields\related_lookups.py", line 154, in get_prep_lookup
self.rhs = target_field.get_prep_value(self.rhs)
File "C:\Users\CeX\Virtual\lib\site-packages\django\db\models\fields\__init__.py", line 1990, in get_prep_value
raise e.__class__(
ValueError: Field 'id' expected a number but got '<qrcode.image.svg.SvgImage object at 0x00000227BFB77EE0>'.
[05/Jan/2023 11:55:35] "GET /loyalty_card/%3Cqrcode.image.svg.SvgImage%20object%20at%200x00000227BFB77EE0%3E HTTP/1.1" 500 134352
</code></pre>
|
<python><django><django-models><django-views>
|
2023-01-05 11:18:47
| 1
| 415
|
PhilM
|
75,017,489
| 3,484,477
|
Install older version of pytorch with GPU support using conda
|
<p>I would like to install <code>pytorch</code> <code>1.11</code> using <code>conda</code> with <code>gpu support</code> for a specific cuda version, e.g. <code>11.6</code>.</p>
<p>One can do the following to install latest version:</p>
<pre><code>conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
</code></pre>
<p>And <a href="https://pytorch.org/get-started/previous-versions/" rel="nofollow noreferrer">HERE</a> shows how to install an older version of pytorch, but without gpu support:</p>
<pre><code>conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 -c pytorch
</code></pre>
<p>So, then does the following install a <code>pytorch 1.11</code> with <code>gpu support</code> using conda?</p>
<pre><code>conda install pytorch==1.11.0 torchvision==0.12.0 pytorch-cuda=11.6 -c pytorch -c nvidia
</code></pre>
|
<python><pytorch><anaconda><conda>
|
2023-01-05 11:07:11
| 1
| 1,643
|
Meysam Sadeghi
|
75,017,331
| 3,608,005
|
RangeX in geoviews, trying to understand x axis range
|
<p>Can someone please explain to me the meaning of the x axis range retrieved using RangeX for geospatial data?</p>
<pre><code>import geoviews as gv
import holoviews as hv
background = gv.tile_sources.Wikipedia
rng = hv.streams.RangeX(source=background)
background
</code></pre>
<p><a href="https://i.sstatic.net/Zk0P6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zk0P6.png" alt="enter image description here" /></a></p>
<p>When I call the rng object I do get None at first for <code>x_range</code> and upon using the wheel zoom, I do get following output:</p>
<pre><code>rng
RangeX(x_range=(-18473547.316975493, 15590216.865764225))
</code></pre>
<p>I would expect <code>(-170, 149)</code> but not roughly 1e5 times that value. How can I interpret the output from <code>RangeX</code> in that case?</p>
|
<python><holoviews><geoviews>
|
2023-01-05 10:54:02
| 1
| 5,448
|
Moritz
|
75,017,330
| 14,534,957
|
A constructor for dataclasses with randomized attribute values
|
<p>Hi, could someone explain what's happening here? I want to instantiate objects with random values.</p>
<pre><code>import random
from dataclasses import dataclass

@dataclass
class Particle:
pos = (random.randint(0, 800), random.randint(0, 800))
for _ in range(3):
p = Particle()
print(p.pos)
</code></pre>
<p>prints:</p>
<ul>
<li>(123, 586)</li>
<li>(123, 586)</li>
<li>(123, 586)</li>
</ul>
<p>Expected behaviour would be three tuples with different values. What's happening here?</p>
<p>(when I use a normal class it works as expected; see the sketch below)</p>
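<p>A minimal sketch of the plain-class version I mean (this one prints three different tuples):</p>
<pre><code>import random

class Particle:
    def __init__(self):
        # evaluated on every instantiation, so each object gets fresh values
        self.pos = (random.randint(0, 800), random.randint(0, 800))

for _ in range(3):
    print(Particle().pos)
</code></pre>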
|
<python><python-dataclasses>
|
2023-01-05 10:53:51
| 1
| 418
|
user14534957
|
75,017,277
| 2,397,072
|
Dealing with multiple subprocesses writing to stdout in python
|
<p>I have multiple subprocesses (opened with <code>subprocess.Popen</code>) continuously writing lots of short lines to stdout (several MB/s).<br />
Sometimes, I guess when two subprocesses write at the exact same time to stdout, the output in stdout is an unreadable line.</p>
<p>For example, subprocess 1 writes AAAA, subprocess 2 write BBBB and stdout would sometimes show AABABABB instead of distinct AAAA and BBBB.</p>
<p>How can I catch the output of the subprocesses and redirect it to stdout while avoiding collisions that lead to unreadable output?</p>
<p>I am currently writing the output of the subprocesses to files (one per subprocess) but I'd rather have the output in stdout!</p>
<pre><code>import argparse, sys, json
from subprocess import Popen, PIPE
from multiprocessing import cpu_count
# Process parameters
parser = argparse.ArgumentParser()
parser.add_argument('--cmd', type=str, help='command to be executed', required=True)
parser.add_argument('--pattern', type=str, help='pattern for the output files', required=True)
args = parser.parse_args()
# Create worker subprocesses and open output files
processes,files = [],[]
for i in range(cpu_count()):
f = open(args.pattern.replace('{}',f'{i:02}'),'w')
p = Popen(json.loads(args.cmd), stdout=f, stdin=PIPE)
processes.append(p)
files.append(f)
# Process the stdin stream
i = 0 # index of the next subprocess to feed with stdin
for line in sys.stdin:
p = processes[i]
p.stdin.write(line.encode())
p.stdin.flush()
i += 1
if i==len(processes):
i = 0
# Close output files
for i in range(cpu_count()):
f = files[i]
f.close()
</code></pre>
|
<python><subprocess><stdout><multiprocess>
|
2023-01-05 10:48:39
| 1
| 301
|
Guillaume Adam
|
75,017,224
| 2,802,576
|
Pythonnet cannot convert object to target type
|
<p>Using <a href="https://github.com/pythonnet/pythonnet" rel="nofollow noreferrer">pythonnet</a> I am calling a Python function, which returns an object, from my C# script. Once I get the result from the Python function I want to be able to cast/convert this Python object into a C# user-defined class. I tried using the <code>AsManagedObject (Type t)</code> and <code>As<T> ()</code> functions, but at runtime I got an error -</p>
<blockquote>
<p>Cannot convert object to target type
Inner Exception PythonException: 'Result' value cannot be converted to Models.Result</p>
</blockquote>
<p>I could use some help to convert PyObjects to C# user defined types.</p>
<p>Python -</p>
<pre><code>#Result.py
class Result:
def __init__(self, Sum) -> None:
self.Sum = Sum
#Main.py
from Result import Result
def add(inputCsharp) -> Result:
    result = Result(inputCsharp.Input1 + inputCsharp.Input2)
return result
</code></pre>
<p>C# -</p>
<pre><code>//Result.cs
public class Result
{
public int Sum { get; set; }
}
//Program.cs
using (Py.GIL())
{
using (var scope = Py.CreateScope())
{
dynamic obj = Py.Import(@"Main");
var inputCsharp = new Input()
{
Input1 = 2,
Input2 = 3
};
var result = obj.add(inputCsharp.ToPython());
var t = (Result)result.AsManagedObject(typeof(Result)); //here it is throwing an error
}
}
</code></pre>
|
<python><c#><python.net>
|
2023-01-05 10:44:50
| 0
| 801
|
arpymastro
|
75,017,208
| 4,534,466
|
Dask using too much memory when reading parquet
|
<p>I have a rather large parquet file (~1.35Gb) that I'm trying to read. I'm using Google Colab Pro which gives me 25 Gb of RAM. I ran the following code:</p>
<pre><code>import dask.dataframe as dd
data = dd.read_parquet(DATA_DIR / 'train.parquet', chunksize=100)
data.head()
</code></pre>
<p>And I ran out of memory. Is there something I can do to improve the memory consumption? I tried different chunk sizes, as well as removing the chunksize entirely, but all attempts run out of memory.</p>
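<p>For reference, these are the kinds of variations I tried (the chunk sizes are just examples):</p>
<pre><code>import dask.dataframe as dd

# no chunksize at all
data = dd.read_parquet(DATA_DIR / 'train.parquet')

# much smaller and much larger chunk sizes
data = dd.read_parquet(DATA_DIR / 'train.parquet', chunksize=10)
data = dd.read_parquet(DATA_DIR / 'train.parquet', chunksize=10_000)

data.head()
</code></pre>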
|
<python><dataframe><dask><parquet><dask-dataframe>
|
2023-01-05 10:43:42
| 1
| 1,530
|
João Areias
|
75,017,187
| 17,124,619
|
Sudo apt-get install commands with Jenkins
|
<p>I have a jenkins server open and I'm attempting to sudo install:</p>
<pre><code>sudo apt-get install libsqlite3-dev
</code></pre>
<p>This is because I am using a pipeline that runs a Python script, and this script returns the following error:</p>
<blockquote>
<p>builtins.ModuleNotFoundError: No module named '_sqlite3'</p>
</blockquote>
<p>Online articles have suggested to sudo install the package above. However, I would need root-privileges to do so.</p>
<p>I have installed the <code>ssh-server</code> plugin within Jenkins as well as <code>jenkins-cli.jar</code>. Is there a way to install the package with either of these two options, or an easier alternative?</p>
|
<python><jenkins>
|
2023-01-05 10:42:02
| 0
| 309
|
Emil11
|
75,017,086
| 10,729,292
|
Extracting the largest rectangle like object from an image with any possible orientation using openCV?
|
<p>I have pictures of cheques which were scanned and are on A4 sheets now.
I need to extract those cheques, which can be a bit tilted or rotated.
Using an AI model would be my last preference. I tried the following script, and it returned every little rectangle in the image.</p>
<pre><code>import cv2
import numpy as np
from google.colab.patches import cv2_imshow
# Read the image into a NumPy array
image = cv2.imread('/content/img.jpg')
# Convert the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Use the Canny edge detector to find the edges of the objects in the image
edges = cv2.Canny(gray, 50, 150)
# Find the contours of the objects
contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Iterate through the contours
for contour in contours:
# Approximate the shape of the contour as a polygon
epsilon = 0.1 * cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, epsilon, True)
# If the approximated polygon has 4 sides, it is a rectangle
if len(approx) == 4:
# Extract the rectangle from the image
x, y, w, h = cv2.boundingRect(contour)
rectangle = image[y:y+h, x:x+w]
# Do something with the rectangle (e.g., display it, save it to a file, etc.)
cv2_imshow(rectangle)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>This is what my images look like. What would be the best non-YOLO type of solution here?</p>
<p><a href="https://i.sstatic.net/9JKk7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9JKk7.png" alt="enter image description here" /></a></p>
|
<python><opencv><computer-vision>
|
2023-01-05 10:33:59
| 0
| 1,558
|
Sadaf Shafi
|
75,017,033
| 2,418,162
|
Run Selenium directly on Pydroid3 on Android
|
<p>My goal is to run a Python script that uses Selenium to control the Chrome browser directly with the Pydroid 3 app on my Android smartphone (without a PC connection).</p>
<p>I followed the instructions at the link <a href="https://chromedriver.chromium.org/getting-started/getting-started---android" rel="nofollow noreferrer">https://chromedriver.chromium.org/getting-started/getting-started---android</a> to download the appropriate webdriver and set it correctly for Android.</p>
<p>In detail, the version of Chrome on my smartphone (Samsung S10) is 108.0.5359.128; therefore, following what is indicated here:
<a href="https://chromedriver.chromium.org/downloads#h.p_ID_32" rel="nofollow noreferrer">https://chromedriver.chromium.org/downloads#h.p_ID_32</a>
I downloaded webdriver "chromedriver_linux64.zip" present at the following link:
<a href="https://chromedriver.storage.googleapis.com/index.html?path=108.0.5359.71/" rel="nofollow noreferrer">https://chromedriver.storage.googleapis.com/index.html?path=108.0.5359.71/</a></p>
<p>But following this example
<a href="https://chromedriver.chromium.org/getting-started/getting-started---android#h.p_ID_390" rel="nofollow noreferrer">https://chromedriver.chromium.org/getting-started/getting-started---android#h.p_ID_390</a> and running the following Python3 script:</p>
<pre><code>import os

from selenium import webdriver
path_to_chromedriver = os.path.dirname(__file__) + '/chromedriver'
print('path_to_chromedriver: ', path_to_chromedriver)
options = webdriver.ChromeOptions()
options.add_experimental_option('androidPackage', 'com.android.chrome')
driver = webdriver.Chrome(path_to_chromedriver, 0, options=options)
driver.get('https://google.com')
driver.quit()
</code></pre>
<p>or this one:</p>
<pre><code>import os

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
path_to_chromedriver = os.path.dirname(__file__) + '/chromedriver'
print('path_to_chromedriver: ', path_to_chromedriver)
service = Service(path_to_chromedriver)
options = webdriver.ChromeOptions()
options.add_experimental_option('androidPackage', 'com.android.chrome')
driver = webdriver.Chrome(service=service, options=options)
driver.get('https://google.com')
</code></pre>
<p>I get this error:</p>
<pre><code>path_to_chromedriver: /storage/emulated/0/Download/ke-ce-verimme-justwatch-scraper/ke-ce-verimme-justwatch-scraper/scrapers/chromedriver
Traceback (most recent call last):
File "/data/user/0/ru.iiec.pydroid3/files/accomp_files/iiec_run/iiec_run.py", line 31, in <module>
start(fakepyfile,mainpyfile)
File "/data/user/0/ru.iiec.pydroid3/files/accomp_files/iiec_run/iiec_run.py", line 30, in start
exec(open(mainpyfile).read(), __main__.__dict__)
File "<string>", line 124, in <module>
File "/storage/emulated/0/Download/ke-ce-verimme-justwatch-scraper/ke-ce-verimme-justwatch-scraper/scrapers/justwatchScraper.py", line 168, in start
scrolledGenrePage = scrollGenrePageToTheEnd(genrePageURL)
File "/storage/emulated/0/Download/ke-ce-verimme-justwatch-scraper/ke-ce-verimme-justwatch-scraper/scrapers/justwatchScraper.py", line 47, in scrollGenrePageToTheEnd
driver = webdriver.Chrome(service=service, options=options)
File "/data/user/0/ru.iiec.pydroid3/files/aarch64-linux-android/lib/python3.9/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
super().__init__(
File "/data/user/0/ru.iiec.pydroid3/files/aarch64-linux-android/lib/python3.9/site-packages/selenium/webdriver/chromium/webdriver.py", line 103, in __init__
self.service.start()
File "/data/user/0/ru.iiec.pydroid3/files/aarch64-linux-android/lib/python3.9/site-packages/selenium/webdriver/common/service.py", line 106, in start
self.assert_process_still_running()
File "/data/user/0/ru.iiec.pydroid3/files/aarch64-linux-android/lib/python3.9/site-packages/selenium/webdriver/common/service.py", line 117, in assert_process_still_running
return_code = self.process.poll()
AttributeError: 'Service' object has no attribute 'process'
[Program finished]
</code></pre>
<p>Can anyone help me or suggest a workaround?</p>
|
<python><selenium><selenium-webdriver><selenium-chromedriver><pydroid>
|
2023-01-05 10:30:07
| 0
| 445
|
Fobi
|
75,016,854
| 11,945,144
|
How can I resolve write errors that I have in my data?
|
<p>I have a dataframe (more than 1 million rows) that has an open text column where customers can write whatever they want.
Misspelled words appear frequently, and I'm trying to group comments that are the same apart from the misspellings.</p>
<p>For example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>I want to change my credit card</td>
</tr>
<tr>
<td>2</td>
<td>I wannt change my creditt card</td>
</tr>
<tr>
<td>3</td>
<td>I want change credit caurd</td>
</tr>
</tbody>
</table>
</div>
<p>I have tried using <strong>Levenshtein Distance</strong> (a sketch of the approach is below), but computationally it is very expensive.
Can you tell me another way to do this task?</p>
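<p>A sketch of that attempt, assuming the dataframe is called <code>df</code> (the pairwise loop is what makes it so expensive):</p>
<pre><code>def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

comments = df['Comment'].tolist()

# compare every comment with every other one -- O(n^2) over ~1M rows
similar_pairs = []
for i, a in enumerate(comments):
    for b in comments[i + 1:]:
        if levenshtein(a, b) <= 3:   # threshold picked by hand
            similar_pairs.append((a, b))
</code></pre>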
<p>Thanks!</p>
|
<python><dataframe><nlp><misspelling><write-error>
|
2023-01-05 10:14:30
| 2
| 343
|
Maite89
|
75,016,612
| 4,317,058
|
How to draw 2 readily available Figure objects alongside
|
<p>Let's see a contrived example of 2 <code>matplotlib.pyplot</code> functions returning a plot, which then may be accessed through <code>plt.gcf()</code> or <code>plt.gca()</code> methods:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 11)
y = x
y_2 = x * x
plt.plot(x,y)
fig_1 = plt.gcf()
ax_1 = plt.gca()
#plt.close()
plt.plot(x,y_2)
fig_2 = plt.gcf()
ax_2 = plt.gca()
#plt.close()
</code></pre>
<p>How would I nicely draw the <code>fig_1</code> and <code>fig_2</code> objects side by side, e.g. through <code>plt.subplots(1,2)</code>, given that <code>x</code>, <code>y</code>, <code>y_2</code> are not accessible?</p>
|
<python><matplotlib>
|
2023-01-05 09:56:02
| 1
| 25,529
|
Sergey Bushmanov
|
75,016,581
| 11,653,949
|
Which catalog is Airflow viewing during initialization of SensorOperator?
|
<p>I try to open a file in the <code>__init__</code> method of a SensorOperator and get this error in Airflow:</p>
<blockquote>
<p>Broken DAG: [/opt/airflow/dags/get_stars_and_print.py] Traceback (most recent call last):
File "/opt/airflow/dags/get_stars_and_print.py", line 69, in <strong>init</strong>
self.starting_stars = self.read_stars_from_file()
File "/opt/airflow/dags/get_stars_and_print.py", line 72, in read_stars_from_file
with open(self.stars_filename) as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'stars.json'</p>
</blockquote>
<p>Here's my code:</p>
<pre><code>class GithubStarsSensor(BaseSensorOperator):
@apply_defaults
def __init__(self, repo, stars_filename, *args, **kwargs):
super().__init__(*args, **kwargs)
self.repo_url = f"http://api.github.com/repos/{repo}"
self.stars_filename = stars_filename
self.starting_stars = self.read_stars_from_file()
def read_stars_from_file(self):
with open(self.stars_filename) as fp:
data = json.load(fp)
return data["stars"]
</code></pre>
<p>This file exists in the user directory on the worker. It was written by Airflow like this:</p>
<pre><code>with open(self.stars_filename, "w") as stars:
json.dump(gh_repo.as_dict(), stars)
</code></pre>
<p>But when I try to read it during initialization, the file isn't found.</p>
<p>I use Docker Airflow image for my experiments.</p>
|
<python><airflow>
|
2023-01-05 09:53:42
| 1
| 602
|
Alexander Lopatin
|
75,016,478
| 14,868,949
|
Split and merge 2 columns pandas
|
<p>Is it possible to have this kind of layout in a pd.DataFrame?
<a href="https://i.sstatic.net/n73Pz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n73Pz.png" alt="enter image description here" /></a></p>
|
<python><pandas>
|
2023-01-05 09:44:35
| 2
| 648
|
ArrowRise
|
75,016,292
| 5,349,291
|
Modifying the return value of a reduce() expression modifies the input
|
<p>I wrote a function to collect all values into a single list from a dictionary where each value is a list. However, when I later modified that list, I found that my original dictionary was modified too!</p>
<pre><code>from functools import reduce
d = {'foo': [1,2,3]}
all_vals = reduce(lambda x, y: x + y, d.values())
all_vals.append(4)
print(d)
# {'foo': [1, 2, 3, 4]}
</code></pre>
<p>This doesn't happen if the dictionary has multiple key/values though:</p>
<pre><code>from functools import reduce
d = {'foo': [1,2,3], 'bar': [9]}
all_vals = reduce(lambda x, y: x + y, d.values())
all_vals.append(4)
print(d)
# {'foo': [1, 2, 3], 'bar': [9]}
</code></pre>
<p>The dictionary now stays unmodified. Can anybody explain why python has this behaviour?</p>
|
<python>
|
2023-01-05 09:29:55
| 3
| 2,074
|
mchristos
|
75,016,212
| 2,392,965
|
PaddleOCR Error flag 'flagfile' was defined more than once
|
<p>I am encountering an issue running PaddleOCR on an M1 MacBook:</p>
<p>ERROR: flag 'flagfile' was defined more than once (in files '/Users/paddle/xly/workspace/f2bafd01-b80e-4ac8-972c-1652775b2e51/Paddle/build/third_party/gflags/src/extern_gflags/src/gflags.cc' and '/tmp/gflags-20211021-3963-1mi18ai/gflags-2.2.2/src/gflags.cc').</p>
|
<python><paddle-paddle><paddleocr>
|
2023-01-05 09:21:40
| 2
| 443
|
user2392965
|
75,016,155
| 8,622,404
|
converting onnx model to tensorflow fails
|
<p>I am trying to convert <a href="https://github.com/facebookresearch/detr" rel="nofollow noreferrer">detr model</a> to tensor flow using onnx. I converted the model using <code>torch.onnx.export</code> with opset_version=12.(which produces a <code>detr.onnx</code> file)</p>
<p>Then I tried to convert the onnx file to tensorflow model using <a href="https://github.com/onnx/onnx-tensorflow/blob/main/example/onnx_to_tf.py" rel="nofollow noreferrer">this example</a>. I added <code>onnx.check_model</code> line to make sure model is loaded correctly.</p>
<pre><code>import math
from PIL import Image
import requests
import matplotlib.pyplot as plt
import torch
from torch import nn
from torchvision.models import resnet50
import onnx
from onnx_tf.backend import prepare
import torchvision.transforms as T
torch.set_grad_enabled(False)
model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
im = Image.open(requests.get(url, stream=True).raw)
transform = T.Compose([
T.Resize(800),
T.ToTensor(),
T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
img = transform(im).unsqueeze(0)
torch.onnx.export(model, img, 'detr.onnx', opset_version = 12)
onnx_model = onnx.load('./detr.onnx')
result = onnx.checker.check_model(onnx_model)
tf_rep = prepare(onnx_model)
tf_rep.export_graph('./model.pb')
</code></pre>
<p>This code raises an exception when it reaches <code>tf_rep.export_graph('./model.pb')</code> line.
<br>onnx version = 1.13.0 , torch version = 1.13.0+cu117 , onnx_tf = 1.10.0</p>
<p>message of exception :</p>
<pre><code>KeyError Traceback (most recent call last)
Cell In[19], line 26
23 result = onnx.checker.check_model(onnx_model)
25 tf_rep = prepare(onnx_model)
---> 26 tf_rep.export_graph('./model.pb')
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\backend_rep.py:143, in TensorflowRep.export_graph(self, path)
129 """Export backend representation to a Tensorflow proto file.
130
131 This function obtains the graph proto corresponding to the ONNX
(...)
137 :returns: none.
138 """
139 self.tf_module.is_export = True
140 tf.saved_model.save(
141 self.tf_module,
142 path,
--> 143 signatures=self.tf_module.__call__.get_concrete_function(
144 **self.signatures))
145 self.tf_module.is_export = False
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py:1239, in Function.get_concrete_function(self, *args, **kwargs)
1237 def get_concrete_function(self, *args, **kwargs):
1238 # Implements GenericFunction.get_concrete_function.
-> 1239 concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
1240 concrete._garbage_collector.release() # pylint: disable=protected-access
1241 return concrete
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py:1219, in Function._get_concrete_function_garbage_collected(self, *args, **kwargs)
1217 if self._stateful_fn is None:
1218 initializers = []
-> 1219 self._initialize(args, kwargs, add_initializers_to=initializers)
1220 self._initialize_uninitialized_variables(initializers)
1222 if self._created_variables:
1223 # In this case we have created variables on the first call, so we run the
1224 # defunned version which is guaranteed to never create variables.
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py:785, in Function._initialize(self, args, kwds, add_initializers_to)
782 self._lifted_initializer_graph = lifted_initializer_graph
783 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
784 self._concrete_stateful_fn = (
--> 785 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
786 *args, **kwds))
788 def invalid_creator_scope(*unused_args, **unused_kwds):
789 """Disables variable creation."""
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py:2523, in Function._get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2521 args, kwargs = None, None
2522 with self._lock:
-> 2523 graph_function, _ = self._maybe_define_function(args, kwargs)
2524 return graph_function
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py:2760, in Function._maybe_define_function(self, args, kwargs)
2758 # Only get placeholders for arguments, not captures
2759 args, kwargs = placeholder_dict["args"]
-> 2760 graph_function = self._create_graph_function(args, kwargs)
2762 graph_capture_container = graph_function.graph._capture_func_lib # pylint: disable=protected-access
2763 # Maintain the list of all captures
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py:2670, in Function._create_graph_function(self, args, kwargs)
2665 missing_arg_names = [
2666 "%s_%d" % (arg, i) for i, arg in enumerate(missing_arg_names)
2667 ]
2668 arg_names = base_arg_names + missing_arg_names
2669 graph_function = ConcreteFunction(
-> 2670 func_graph_module.func_graph_from_py_func(
2671 self._name,
2672 self._python_function,
2673 args,
2674 kwargs,
2675 self.input_signature,
2676 autograph=self._autograph,
2677 autograph_options=self._autograph_options,
2678 arg_names=arg_names,
2679 capture_by_value=self._capture_by_value),
2680 self._function_attributes,
2681 spec=self.function_spec,
2682 # Tell the ConcreteFunction to clean up its graph once it goes out of
2683 # scope. This is not the default behavior since it gets used in some
2684 # places (like Keras) where the FuncGraph lives longer than the
2685 # ConcreteFunction.
2686 shared_func_graph=False)
2687 return graph_function
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\func_graph.py:1247, in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, acd_record_initial_resource_uses)
1244 else:
1245 _, original_func = tf_decorator.unwrap(python_func)
-> 1247 func_outputs = python_func(*func_args, **func_kwargs)
1249 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
1250 # TensorArrays and `None`s.
1251 func_outputs = nest.map_structure(
1252 convert, func_outputs, expand_composites=True)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py:677, in Function._defun_with_scope.<locals>.wrapped_fn(*args, **kwds)
673 with default_graph._variable_creator_scope(scope, priority=50): # pylint: disable=protected-access
674 # __wrapped__ allows AutoGraph to swap in a converted function. We give
675 # the function a weak reference to itself to avoid a reference cycle.
676 with OptionalXlaContext(compile_with_xla):
--> 677 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
678 return out
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py:3317, in class_method_to_instance_method.<locals>.bound_method_wrapper(*args, **kwargs)
3312 return wrapped_fn(weak_instance(), *args, **kwargs)
3314 # If __wrapped__ was replaced, then it is always an unbound function.
3315 # However, the replacer is still responsible for attaching self properly.
3316 # TODO(mdan): Is it possible to do it here instead?
-> 3317 return wrapped_fn(*args, **kwargs)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\func_graph.py:1233, in func_graph_from_py_func.<locals>.autograph_handler(*args, **kwargs)
1231 except Exception as e: # pylint:disable=broad-except
1232 if hasattr(e, "ag_error_metadata"):
-> 1233 raise e.ag_error_metadata.to_exception(e)
1234 else:
1235 raise
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\func_graph.py:1222, in func_graph_from_py_func.<locals>.autograph_handler(*args, **kwargs)
1220 # TODO(mdan): Push this block higher in tf.function's call stack.
1221 try:
-> 1222 return autograph.converted_call(
1223 original_func,
1224 args,
1225 kwargs,
1226 options=autograph.ConversionOptions(
1227 recursive=True,
1228 optional_features=autograph_options,
1229 user_requested=True,
1230 ))
1231 except Exception as e: # pylint:disable=broad-except
1232 if hasattr(e, "ag_error_metadata"):
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\impl\api.py:439, in converted_call(f, args, kwargs, caller_fn_scope, options)
437 try:
438 if kwargs is not None:
--> 439 result = converted_f(*effective_args, **kwargs)
440 else:
441 result = converted_f(*effective_args)
File ~\AppData\Local\Temp\__autograph_generated_fileq0h7j9t_.py:30, in outer_factory.<locals>.inner_factory.<locals>.tf____call__(self, **kwargs)
28 node = ag__.Undefined('node')
29 onnx_node = ag__.Undefined('onnx_node')
---> 30 ag__.for_stmt(ag__.ld(self).graph_def.node, None, loop_body, get_state, set_state, (), {'iterate_names': 'node'})
31 outputs = ag__.converted_call(ag__.ld(dict), (), None, fscope)
33 def get_state_4():
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:463, in for_stmt(iter_, extra_test, body, get_state, set_state, symbol_names, opts)
459 _tf_distributed_iterable_for_stmt(
460 iter_, extra_test, body, get_state, set_state, symbol_names, opts)
462 else:
--> 463 _py_for_stmt(iter_, extra_test, body, None, None)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:512, in _py_for_stmt(***failed resolving arguments***)
510 else:
511 for target in iter_:
--> 512 body(target)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:478, in _py_for_stmt.<locals>.protected_body(protected_iter)
477 def protected_body(protected_iter):
--> 478 original_body(protected_iter)
479 after_iteration()
480 before_iteration()
File ~\AppData\Local\Temp\__autograph_generated_fileq0h7j9t_.py:23, in outer_factory.<locals>.inner_factory.<locals>.tf____call__.<locals>.loop_body(itr)
21 node = itr
22 onnx_node = ag__.converted_call(ag__.ld(OnnxNode), (ag__.ld(node),), None, fscope)
---> 23 output_ops = ag__.converted_call(ag__.ld(self).backend._onnx_node_to_tensorflow_op, (ag__.ld(onnx_node), ag__.ld(tensor_dict), ag__.ld(self).handlers), dict(opset=ag__.ld(self).opset, strict=ag__.ld(self).strict), fscope)
24 curr_node_output_map = ag__.converted_call(ag__.ld(dict), (ag__.converted_call(ag__.ld(zip), (ag__.ld(onnx_node).outputs, ag__.ld(output_ops)), None, fscope),), None, fscope)
25 ag__.converted_call(ag__.ld(tensor_dict).update, (ag__.ld(curr_node_output_map),), None, fscope)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\impl\api.py:439, in converted_call(f, args, kwargs, caller_fn_scope, options)
437 try:
438 if kwargs is not None:
--> 439 result = converted_f(*effective_args, **kwargs)
440 else:
441 result = converted_f(*effective_args)
File ~\AppData\Local\Temp\__autograph_generated_filetsq4l59p.py:62, in outer_factory.<locals>.inner_factory.<locals>.tf___onnx_node_to_tensorflow_op(cls, node, tensor_dict, handlers, opset, strict)
60 pass
61 handler = ag__.Undefined('handler')
---> 62 ag__.if_stmt(ag__.ld(handlers), if_body_1, else_body_1, get_state_1, set_state_1, ('do_return', 'retval_'), 2)
64 def get_state_2():
65 return ()
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1363, in if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
1361 _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
1362 else:
-> 1363 _py_if_stmt(cond, body, orelse)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1416, in _py_if_stmt(cond, body, orelse)
1414 def _py_if_stmt(cond, body, orelse):
1415 """Overload of if_stmt that executes a Python if statement."""
-> 1416 return body() if cond else orelse()
File ~\AppData\Local\Temp\__autograph_generated_filetsq4l59p.py:56, in outer_factory.<locals>.inner_factory.<locals>.tf___onnx_node_to_tensorflow_op.<locals>.if_body_1()
54 nonlocal retval_, do_return
55 pass
---> 56 ag__.if_stmt(ag__.ld(handler), if_body, else_body, get_state, set_state, ('do_return', 'retval_'), 2)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1363, in if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
1361 _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
1362 else:
-> 1363 _py_if_stmt(cond, body, orelse)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1416, in _py_if_stmt(cond, body, orelse)
1414 def _py_if_stmt(cond, body, orelse):
1415 """Overload of if_stmt that executes a Python if statement."""
-> 1416 return body() if cond else orelse()
File ~\AppData\Local\Temp\__autograph_generated_filetsq4l59p.py:48, in outer_factory.<locals>.inner_factory.<locals>.tf___onnx_node_to_tensorflow_op.<locals>.if_body_1.<locals>.if_body()
46 try:
47 do_return = True
---> 48 retval_ = ag__.converted_call(ag__.ld(handler).handle, (ag__.ld(node),), dict(tensor_dict=ag__.ld(tensor_dict), strict=ag__.ld(strict)), fscope)
49 except:
50 do_return = False
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\impl\api.py:439, in converted_call(f, args, kwargs, caller_fn_scope, options)
437 try:
438 if kwargs is not None:
--> 439 result = converted_f(*effective_args, **kwargs)
440 else:
441 result = converted_f(*effective_args)
File ~\AppData\Local\Temp\__autograph_generated_filec7_esoft.py:41, in outer_factory.<locals>.inner_factory.<locals>.tf__handle(cls, node, **kwargs)
39 nonlocal retval_, do_return
40 raise ag__.converted_call(ag__.ld(BackendIsNotSupposedToImplementIt), (ag__.converted_call('{} version {} is not implemented.'.format, (ag__.ld(node).op_type, ag__.ld(cls).SINCE_VERSION), None, fscope),), None, fscope)
---> 41 ag__.if_stmt(ag__.ld(ver_handle), if_body, else_body, get_state, set_state, ('do_return', 'retval_'), 2)
42 return fscope.ret(retval_, do_return)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1363, in if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
1361 _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
1362 else:
-> 1363 _py_if_stmt(cond, body, orelse)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1416, in _py_if_stmt(cond, body, orelse)
1414 def _py_if_stmt(cond, body, orelse):
1415 """Overload of if_stmt that executes a Python if statement."""
-> 1416 return body() if cond else orelse()
File ~\AppData\Local\Temp\__autograph_generated_filec7_esoft.py:33, in outer_factory.<locals>.inner_factory.<locals>.tf__handle.<locals>.if_body()
31 try:
32 do_return = True
---> 33 retval_ = ag__.converted_call(ag__.ld(ver_handle), (ag__.ld(node),), dict(**ag__.ld(kwargs)), fscope)
34 except:
35 do_return = False
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\impl\api.py:439, in converted_call(f, args, kwargs, caller_fn_scope, options)
437 try:
438 if kwargs is not None:
--> 439 result = converted_f(*effective_args, **kwargs)
440 else:
441 result = converted_f(*effective_args)
File ~\AppData\Local\Temp\__autograph_generated_filevddqx9qt.py:12, in outer_factory.<locals>.inner_factory.<locals>.tf__version(cls, node, **kwargs)
10 try:
11 do_return = True
---> 12 retval_ = ag__.converted_call(ag__.ld(cls)._common, (ag__.ld(node),), dict(**ag__.ld(kwargs)), fscope)
13 except:
14 do_return = False
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\impl\api.py:439, in converted_call(f, args, kwargs, caller_fn_scope, options)
437 try:
438 if kwargs is not None:
--> 439 result = converted_f(*effective_args, **kwargs)
440 else:
441 result = converted_f(*effective_args)
File ~\AppData\Local\Temp\__autograph_generated_filedezd6jrz.py:122, in outer_factory.<locals>.inner_factory.<locals>.tf___common(cls, node, **kwargs)
120 paddings = ag__.Undefined('paddings')
121 constant_values = ag__.Undefined('constant_values')
--> 122 ag__.if_stmt(ag__.ld(cls).SINCE_VERSION < 11, if_body_1, else_body_1, get_state_1, set_state_1, ('constant_values', 'paddings'), 2)
123 cond = ag__.converted_call(ag__.ld(tf).cond, (ag__.converted_call(ag__.ld(check_positive), (ag__.ld(paddings),), None, fscope), ag__.autograph_artifact(lambda : ag__.converted_call(ag__.ld(process_pos_pads), (ag__.ld(x), ag__.ld(paddings), ag__.ld(constant_values)), None, fscope)), ag__.autograph_artifact(lambda : ag__.converted_call(ag__.ld(process_neg_pads), (ag__.ld(x), ag__.ld(paddings), ag__.ld(constant_values)), None, fscope))), None, fscope)
124 try:
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1363, in if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
1361 _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
1362 else:
-> 1363 _py_if_stmt(cond, body, orelse)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1416, in _py_if_stmt(cond, body, orelse)
1414 def _py_if_stmt(cond, body, orelse):
1415 """Overload of if_stmt that executes a Python if statement."""
-> 1416 return body() if cond else orelse()
File ~\AppData\Local\Temp\__autograph_generated_filedezd6jrz.py:119, in outer_factory.<locals>.inner_factory.<locals>.tf___common.<locals>.else_body_1()
117 nonlocal paddings, constant_values
118 paddings = ag__.ld(tensor_dict)[ag__.ld(node).inputs[1]]
--> 119 constant_values = ag__.if_exp(ag__.converted_call(ag__.ld(len), (ag__.ld(node).inputs,), None, fscope) == 3, lambda : ag__.ld(tensor_dict)[ag__.ld(node).inputs[2]], lambda : 0, 'ag__.converted_call(len, (node.inputs,), None, fscope) == 3')
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\conditional_expressions.py:27, in if_exp(cond, if_true, if_false, expr_repr)
25 return _tf_if_exp(cond, if_true, if_false, expr_repr)
26 else:
---> 27 return _py_if_exp(cond, if_true, if_false)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\conditional_expressions.py:52, in _py_if_exp(cond, if_true, if_false)
51 def _py_if_exp(cond, if_true, if_false):
---> 52 return if_true() if cond else if_false()
File ~\AppData\Local\Temp\__autograph_generated_filedezd6jrz.py:119, in outer_factory.<locals>.inner_factory.<locals>.tf___common.<locals>.else_body_1.<locals>.<lambda>()
117 nonlocal paddings, constant_values
118 paddings = ag__.ld(tensor_dict)[ag__.ld(node).inputs[1]]
--> 119 constant_values = ag__.if_exp(ag__.converted_call(ag__.ld(len), (ag__.ld(node).inputs,), None, fscope) == 3, lambda : ag__.ld(tensor_dict)[ag__.ld(node).inputs[2]], lambda : 0, 'ag__.converted_call(len, (node.inputs,), None, fscope) == 3')
KeyError: in user code:
File "C:\Users\alihe\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\backend_tf_module.py", line 99, in __call__ *
output_ops = self.backend._onnx_node_to_tensorflow_op(onnx_node,
File "C:\Users\alihe\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\backend.py", line 347, in _onnx_node_to_tensorflow_op *
return handler.handle(node, tensor_dict=tensor_dict, strict=strict)
File "C:\Users\alihe\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\handlers\handler.py", line 59, in handle *
return ver_handle(node, **kwargs)
File "C:\Users\alihe\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\handlers\backend\pad.py", line 91, in version_11 *
return cls._common(node, **kwargs)
File "C:\Users\alihe\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\handlers\backend\pad.py", line 73, in _common *
constant_values = tensor_dict[node.inputs[2]] if len(
KeyError: ''
</code></pre>
|
<python><tensorflow2.0><onnx>
|
2023-01-05 09:16:59
| 2
| 356
|
kingGarfield
|
75,016,142
| 16,591,917
|
Did the Panel class move to another library in bokeh for new version of Bokeh
|
<p>I have a bokeh application using Tabs and Panel widgets. I used to do:
<code>from bokeh.models import Tabs,Panel</code></p>
<p>I recently did a new installation of python 3.8 and subsequently bokeh (3.0.2) and now <code>Panel</code> cannot be imported from <code>bokeh.models</code>. Online examples cite importing from <code>bokeh.models.widgets</code> but that does not work either. The error message I get is <code>cannot import name 'Panel' from 'bokeh.models.widgets' (C:\Users\_my_username_\AppData\Roaming\Python\Python38\site-packages\bokeh\models\widgets\__init__.py)</code></p>
|
<python><tabs><bokeh><panel>
|
2023-01-05 09:15:46
| 1
| 319
|
JacquesStrydom
|
75,016,130
| 6,007,601
|
Multi clients modbus to single modbus server
|
<p>I have two python scripts which need to get data from the same modbus server.</p>
<p>However they cannot be simultaneously connected to the same modbus server, so I am looking for a "proxy" software.</p>
<p>This proxy would then forward the modbus queries to the modbus server sequentially.
The solution should run on Debian 11 Lite.
Thanks!</p>
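<p>For context, each of the two scripts does something like this (a sketch using pymodbus; the host, register address and count are placeholders, and the client import path differs between pymodbus 2.x and 3.x):</p>
<pre><code>from pymodbus.client import ModbusTcpClient   # pymodbus 3.x; 2.x: pymodbus.client.sync

client = ModbusTcpClient('192.168.1.50', port=502)
client.connect()
result = client.read_holding_registers(0, count=10)   # placeholder address/count
print(result.registers)
client.close()
</code></pre>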
|
<python><network-programming><modbus>
|
2023-01-05 09:14:58
| 2
| 585
|
clouvis
|
75,016,050
| 11,264,031
|
How to perform async commit when using kafka-python
|
<p>I'm using the <code>kafka-python</code> library for my fastapi consumer app, and I'm consuming messages in batches of at most 100 records. Since the topic has huge traffic and has only one partition, consuming, processing and committing should be as quick as possible, hence I want to use <code>commit_async()</code> instead of the <strong>synchronous</strong> <code>commit()</code>.</p>
<p>But I'm not able to find a good example of <code>commit_async()</code>. I'm looking for an example of <code>commit_async()</code> with a callback so that I can log commit failures. I'm not sure what arguments that callback function takes or what fields those arguments contain.</p>
<p>However the docs related to <a href="https://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html#kafka.KafkaConsumer.commit_async" rel="nofollow noreferrer">commit_async</a> mentions the arguments, I'm not completely sure how to use them.</p>
<p>I need help in completing my callback function <code>on_commit()</code>, someone please help here</p>
<p><strong>Code</strong></p>
<pre><code>import logging as log
from kafka import KafkaConsumer
from message_handler_impl import MessageHandlerImpl
def on_commit():
pass
class KafkaMessageConsumer:
def __init__(self, bootstrap_servers: str, topic: str, group_id: str):
self.bootstrap_servers = bootstrap_servers
self.topic = topic
self.group_id = group_id
self.consumer = KafkaConsumer(topic, bootstrap_servers=bootstrap_servers, group_id=group_id, enable_auto_commit=False, auto_offset_reset='latest')
def consume_messages(self, max_poll_records: int,
message_handler: MessageHandlerImpl = MessageHandlerImpl()):
try:
while True:
try:
msg_pack = self.consumer.poll(max_records=max_poll_records)
for topic_partition, messages in msg_pack.items():
message_handler.process_messages(messages)
self.consumer.commit_async(callback=on_commit)
except Exception as e:
log.error("Error while consuming message due to: %s", e, exc_info=True)
finally:
log.error("Something went wrong, closing consumer...........")
self.consumer.close()
if __name__ == "__main__":
kafka_consumer = KafkaMessageConsumer("localhost:9092", "test-topic", "test-group")
kafka_consumer.consume_messages(100)
</code></pre>
|
<python><apache-kafka><kafka-consumer-api><kafka-python>
|
2023-01-05 09:08:34
| 1
| 426
|
Swastik
|
75,015,571
| 12,436,050
|
Error: Time out while performing API call in Python
|
<p>I have a list (lst) which is a list of lists. There are 19 elements in this list and each element has ~2500 strings.</p>
<pre><code>lst
[['A', 'B', 'C',...]['E', 'F', 'G',....][.....]]
</code></pre>
<p>I am using these strings (A,B....) to call an API endpoint ('q':element). However after ~1800 strings, I am getting a time out.</p>
<p>I am running following lines of code.</p>
<pre><code>import requests
from requests.exceptions import Timeout

def get_val(element):
url = 'https://www.xxxx/yyy/api/search'
headers = {'Content-Type': 'application/json'}
param = {'q': element, 'page' : 500}
try:
response = requests.get(url, headers = headers, params = param, timeout=(3.05, 27))
docs = response.json()['response']['docs']
for result in docs:
file.write("%s\t%s\n" % (element,result['short_form']))
except Timeout:
print('Timeout has been raised.')
#loop through elements of list
for i in lst:
for element in i:
get_val(element)
</code></pre>
<p>How can I modify my code to avoid this time out?</p>
|
<python><python-requests>
|
2023-01-05 08:21:32
| 1
| 1,495
|
rshar
|
75,015,557
| 516,268
|
How to set xaxis ticks order?
|
<p>df:</p>
<pre><code> USER_ID EventNum
0 1390 17
1 4452 15
2 995 14
3 532 14
4 3281 14
... ... ...
5897 4971 1
5898 2637 1
5899 792 1
5900 5622 1
5901 1 1
[5902 rows x 2 columns]
</code></pre>
<p>I want to plot a figure using <code>USER_ID</code> as the X-axis and <code>EventNum</code> as the Y-axis.
To avoid cluttering up the axis, I sample <code>USER_ID</code> values at a
fixed interval as xticks, like this:</p>
<pre><code>[1390, 4899, 4062, 366, 5001, 3383, 5003, 446, 2879, 3220, 4006, 4595, 1713, 2649, 2291, 5647, 2040, 5468, 3719, 4198, 5622]
</code></pre>
<p>I do it like this, but it seems the xtick values are not placed in the order given in <code>xticks</code>; instead they are placed in increasing numeric order (see the figure below).</p>
<pre><code>xticks = [1390, 4899, 4062, 366, 5001, 3383, 5003, 446, 2879, 3220, 4006, 4595, 1713, 2649, 2291, 5647, 2040, 5468, 3719, 4198, 5622]
ax = df.plot(x='USER_ID', y=['EventNum'], use_index=False, rot=270)
ax.set_xticks(xticks)
ax.set_xlabel('User ID')
ax.set_ylabel('Event Number')
</code></pre>
<p><a href="https://i.sstatic.net/CapRS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CapRS.png" alt="enter image description here" /></a></p>
<p>How can I fix this?</p>
|
<python><pandas><matplotlib>
|
2023-01-05 08:19:48
| 1
| 1,327
|
l4rmbr
|
75,015,435
| 9,257,578
|
How to flatten the dataset properly
|
<p>I have a dataset like this
<a href="https://i.sstatic.net/eHpu8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eHpu8.png" alt="enter image description here" /></a></p>
<p>The problem here is that I want to have columns like <code>General_officers_0_Name</code>, <code>General_officers_0_Title</code>, <code>General_officers_0_YearBorn</code>, and the same for the other nested fields with different values, like <code>MAtainance_name_type_work_ty</code>. How can I flatten the nested data types like this?
If this question has been asked previously and contains the answer, please link to it.</p>
<p>The output of <code>print (df.head().to_dict(orient='list')) </code> is
<a href="https://i.sstatic.net/mLhz1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLhz1.png" alt="enter image description here" /></a></p>
|
<python><json><pandas><dataframe>
|
2023-01-05 08:07:13
| 0
| 533
|
Neetesshhr
|
75,015,390
| 7,125,235
|
The boto3 `list_discovered_resources` method is returning deleted resources even when flag `includeDeletedResources` is set to False
|
<p>I am using a python script to fetch active resources from my AWS account. I am using the following method:</p>
<pre class="lang-py prettyprint-override"><code>member_session = boto3.session.Session(
aws_access_key_id=credentials["Credentials"]["AccessKeyId"],
aws_secret_access_key=credentials["Credentials"]["SecretAccessKey"],
aws_session_token=credentials["Credentials"]["SessionToken"],
)
member_session.client('config').list_discovered_resources(resourceType="AWS::CloudFormation::Stack", includeDeletedResources=False)
</code></pre>
<p>This returns all resources including the deleted ones of a specific resource type.
I validated this in the AWS console; the deleted ones appear under the deleted tab for CloudFormation stacks.</p>
<p>Is there something I am missing? Is there a time period after which the deleted resources are no longer visible in the response of this API? I couldn't find anything related in the docs.</p>
<p><a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/config.html#ConfigService.Client.list_discovered_resources" rel="nofollow noreferrer">Boto3 docs for the API Call</a></p>
<p>Why is the call returning deleted resources even though flag <code>includeDeletedResources</code> is set to False?</p>
|
<python><amazon-web-services><boto3><aws-config>
|
2023-01-05 08:03:03
| 1
| 613
|
sparkstar
|
75,015,228
| 3,103,957
|
Python import module approaches
|
<p>I have created a package named mod and placed a python file inside it (a.k.a: module). The name of the python file is printme.py.</p>
<p>I import the module in the below couple of ways.</p>
<pre><code>import mod.printme
['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'mod']
</code></pre>
<p>After this is executed, the namespace is appended with the value mod.
I expected this to contain the value printme (which is the actual module name).</p>
<pre><code>from mod import printme
['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'printme']
</code></pre>
<p>After this is executed, the namespace is correctly appended with the value printme, as expected.</p>
<p>Technically, both are expected to import the module printme into the local namespace, so I am a little puzzled why the first approach is NOT placing the printme module name in the local namespace.</p>
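<p>To make the difference concrete, this is how I end up referring to the module in each case (a sketch):</p>
<pre><code>import mod.printme
print(type(mod))          # the package object is what gets bound
print(type(mod.printme))  # the submodule is reachable as an attribute

# in a fresh interpreter
from mod import printme
print(type(printme))      # here the submodule itself is bound
</code></pre>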
<p>Could someone please help me to understand.</p>
<p>Thanks!</p>
|
<python><import><module><namespaces>
|
2023-01-05 07:45:54
| 1
| 878
|
user3103957
|
75,014,987
| 72,911
|
How do I add a chain of trust with x509.CertificateBuilder?
|
<p>I'm building a certificate with <a href="https://cryptography.io/en/latest/x509/reference/#x-509-certificate-builder" rel="nofollow noreferrer">cryptography.x509.CertificateBuilder</a>:</p>
<pre><code>from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import Encoding
private_key: rsa.RSAPrivateKey
ca_cert: x509.Certificate
ca_cert_private_key: rsa.RSAPrivateKey
cert = (
x509.CertificateBuilder()
.subject_name(x509.Name.from_rfc4514_string(f"CN={workstation_name}"))
.issuer_name(ca_cert.subject)
.public_key(private_key.public_key())
.serial_number(x509.random_serial_number())
.not_valid_before(today)
.not_valid_after(today + timedelta(days=60))
.add_extension(subject_alt_name, critical=False)
.sign(ca_cert_private_key)
)
Path("workstation.crt").write_bytes(cert.public_bytes(Encoding.PEM))
</code></pre>
<p>How do I add the ca_cert to the <a href="https://en.m.wikipedia.org/wiki/X.509#Certificate_chains_and_cross-certification" rel="nofollow noreferrer">certificate chain</a> for the cert I'm creating?</p>
<p>E.g. for stackoverflow, the cert looks like this, and you can see that there are 3 certs in the chain</p>
<pre><code>$ openssl s_client -connect stackoverflow.com:443
CONNECTED(00000003)
depth=2 C = US, O = Internet Security Research Group, CN = ISRG Root X1
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = R3
verify return:1
depth=0 CN = *.stackexchange.com
verify return:1
---
Certificate chain
0 s:CN = *.stackexchange.com
i:C = US, O = Let's Encrypt, CN = R3
a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
v:NotBefore: Dec 18 14:08:17 2022 GMT; NotAfter: Mar 18 14:08:16 2023 GMT
1 s:C = US, O = Let's Encrypt, CN = R3
i:C = US, O = Internet Security Research Group, CN = ISRG Root X1
a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
v:NotBefore: Sep 4 00:00:00 2020 GMT; NotAfter: Sep 15 16:00:00 2025 GMT
2 s:C = US, O = Internet Security Research Group, CN = ISRG Root X1
i:O = Digital Signature Trust Co., CN = DST Root CA X3
a:PKEY: rsaEncryption, 4096 (bit); sigalg: RSA-SHA256
v:NotBefore: Jan 20 19:14:03 2021 GMT; NotAfter: Sep 30 18:14:03 2024 GMT
---
</code></pre>
<p>For the cert I'm creating there is only 1 certificate in the chain. How do I add the ca_cert as the 2nd?</p>
<pre><code>$ openssl s_client -connect localhost:443
CONNECTED(00000003)
Can't use SSL_get_servername
depth=1 CN = [Redacted CA Cert Name]
verify return:1
depth=0 CN = [Redacted Workstation Name]
verify return:1
---
Certificate chain
0 s:CN = [Redacted Workstation Name]
i:CN = [Redacted CA Cert Name]
---
</code></pre>
|
<python><cryptography><x509><python-cryptography>
|
2023-01-05 07:18:59
| 0
| 9,642
|
Gary van der Merwe
|
75,014,976
| 3,082,759
|
How to fit long time period data into Regression models in scikit-learn?
|
<p>I'm working on a regression model with population and demand values. My data covers the period from 1980 to 2021 by country; in the example below, each year column holds the population and each year_dem column holds the demand for the item.</p>
<p><a href="https://i.sstatic.net/Kd80V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kd80V.png" alt="enter image description here" /></a></p>
<p>The task is to create a prediction model to forecast the demand for each country in the future.</p>
<pre><code># Import necessary libraries
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
# Load the dataset containing past data on vaccine demand and supply
data = df.iloc[0]
X = data.drop(['Country','ISO','1980_dem', '1981_dem', '1982_dem','1983_dem','1984_dem','1985_dem','1986_dem','1987_dem','1988_dem','1989_dem','1990_dem','1991_dem','1992_dem','1993_dem','1994_dem','1995_dem','1996_dem','1997_dem','1998_dem','1999_dem','2000_dem','2001_dem','2002_dem','2003_dem','2004_dem','2005_dem','2006_dem','2007_dem','2008_dem','2009_dem','2010_dem','2011_dem','2012_dem','2013_dem','2014_dem','2015_dem','2016_dem','2017_dem','2018_dem','2019_dem','2020_dem','2021_dem'])
y = data['1980_dem']
# Split the DataFrame into a training set and a testing set (this must happen before fitting)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=44)
model = RandomForestRegressor(n_estimators=50, max_features="auto", random_state=44)
model.fit(X_train, y_train)
# Use the trained model to make predictions on the test set
#predictions = model.predict(X_test)
# Calculate the accuracy of the predictions
#accuracy = model.score(X_test, y_test)
#print('Accuracy:', round(accuracy,2),'%.')
</code></pre>
<p>I expect to end up with a model whose accuracy is printed and the possibility to predict future values based on that model.</p>
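<p>As a side note, a rough reshaping sketch that such models usually need (assuming the wide columns really are named <code>1980</code>, <code>1981</code>, ... for population and <code>1980_dem</code>, ... for demand, which I am inferring from the screenshot): turn the wide table into one row per country and year, so year and population can act as features and demand as the target.</p>
<pre><code>import pandas as pd

years = [str(y) for y in range(1980, 2022)]

# Population columns to long format (one row per country/year).
pop = df.melt(id_vars=["Country", "ISO"], value_vars=years,
              var_name="year", value_name="population")

# Demand columns to long format, stripping the "_dem" suffix to align the years.
dem = df.melt(id_vars=["Country", "ISO"], value_vars=[f"{y}_dem" for y in years],
              var_name="year", value_name="demand")
dem["year"] = dem["year"].str.replace("_dem", "", regex=False)

long_df = pop.merge(dem, on=["Country", "ISO", "year"])
</code></pre>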
|
<python><scikit-learn><forecasting>
|
2023-01-05 07:18:00
| 0
| 387
|
PiotrK
|
75,014,968
| 5,753,454
|
What is the best way to handle boolean bounds with scipy's minimize function?
|
<p>I am trying to find the minimum value of a function that takes <code>float</code> and <code>bool</code> values as input.</p>
<pre class="lang-py prettyprint-override"><code>def function_to_optimize(a : float, b : float, c : bool) -> float:
# do some calculation...
</code></pre>
<p>How do I define the <code>bounds</code> parameter of the <code>scipy.optimize.minimize</code> regarding my <code>bool</code> variable <code>c</code>?</p>
<p>Say that the bounds for <code>a</code> and <code>b</code> are <code>(0,1)</code>. What I am doing currently is to modify <code>function_to_optimize</code> so that it takes 3 <code>float</code> parameters and doing the following:</p>
<pre class="lang-py prettyprint-override"><code>def new_function_to_optimize(a : float, b : float, c : float) -> float:
c = False if c < 0 else True
# do some calculation...
</code></pre>
<p>Then, I apply the bounds as <code>bounds = [(0,1), (0,1), (-1,1)]</code>. This way, for variable <code>c</code>, when the optimizer tries a value below 0, then c is assigned the value <code>False</code> and <code>True</code> otherwise.</p>
<p>But I am not sure whether this is the best way to do it, and whether the algorithms behind <code>minimize</code> could do some further optimization if they were aware that the variable is of type <code>bool</code>. Should I instead define the <code>bounds</code> as <code>(False, True)</code>?</p>
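<p>One alternative worth noting (a hedged sketch, not a claim about what <code>minimize</code> does internally): since <code>c</code> only has two possible values, you can run one purely continuous optimisation per boolean value and keep the better result, which avoids relaxing <code>c</code> to a float at all. The starting point <code>x0</code> below is an arbitrary illustration.</p>
<pre class="lang-py prettyprint-override"><code>from functools import partial
from scipy.optimize import minimize

def objective(x, c: bool) -> float:
    a, b = x
    return function_to_optimize(a, b, c)  # the original function from above

# One bounded continuous run per boolean value of c, then pick the best.
results = [minimize(partial(objective, c=c), x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
           for c in (False, True)]
best = min(results, key=lambda r: r.fun)
</code></pre>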
|
<python><scipy-optimize><scipy-optimize-minimize>
|
2023-01-05 07:17:12
| 0
| 2,104
|
FTM
|
75,014,768
| 4,414,359
|
Why is json.loads throwing JSONDecodeError: Expecting value
|
<p>I have a string like:</p>
<pre><code>j = '{"type":"x","key":"A","attributes":{"label":"test","visible":True,"advancedOptions":True},"fees":[{"type":"set","key":"fee","attributes":{"amount":"5","type":"percent","label":"Test","fee":{"value":"7","type":"percent"},"visible":True}}]}'
</code></pre>
<p>I'm just wondering why this code works:</p>
<pre><code>exec('t = ' + j)
print(json.dumps(t, indent=4))
</code></pre>
<p>but this code doesn't</p>
<pre><code>print(json.dumps(json.loads(j), indent=4))
</code></pre>
<p>It doesn't make sense to me that if the string is first converted into a dictionary you can dump it as JSON, but you can't parse that exact same string as JSON directly...</p>
<p><code>JSONDecodeError: Expecting value line 1 column 40 (char 39)</code></p>
<p>Has anyone run into anything like this before?</p>
<p>I read somewhere on the internet that it might have something to do with the size of the string, where json loads just can't handle it or something.</p>
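<p>For what it's worth, a small sketch of what is most likely going on: the string uses Python literals (<code>True</code>/<code>False</code>) where JSON requires <code>true</code>/<code>false</code>, which is exactly the kind of token that makes <code>json.loads</code> raise "Expecting value". <code>ast.literal_eval</code> parses Python literals safely, without resorting to <code>exec</code>:</p>
<pre><code>import ast
import json

t = ast.literal_eval(j)            # parses Python-style True/False safely
print(json.dumps(t, indent=4))     # dumps back out as valid JSON (true/false)
</code></pre>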
|
<python><json>
|
2023-01-05 06:52:38
| 1
| 1,727
|
Raksha
|
75,014,747
| 10,576,494
|
How to initialize locations of numpy array using dictionary keys and values?
|
<p>I have the following numpy array which is basically a 3 channel image:</p>
<pre><code>arr = np.zeros((6, 4, 3), dtype=np.float32)
# dictionary of values, key is array location
values_of_channel_0 = {
(0, 2) : 1,
(1, 0) : 1,
(1, 3) : 5,
(2, 1) : 2,
(2, 2) : 3,
(2, 3) : 1,
(3, 0) : 1,
(3, 2) : 2,
(4, 0) : 2,
(4, 2) : 20,
(5, 0) : 1,
(5, 2) : 10,
(5, 3) : 1
}
</code></pre>
<p>I am trying to find the most elegant way to set the values of one channel (channel 0 here) according to the dictionary. Here is what I tried:</p>
<pre><code>locations = list(values_of_channel_0.keys())
values = list(values_of_channel_0.values())
arr[locations, 0] = values # trying to set channel 0 at the listed locations
</code></pre>
<p>But this fails.</p>
<p>Is there a way in which this can be done without looping over keys and values?</p>
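<p>A minimal sketch of the vectorised assignment I believe is being attempted (assuming the target really is channel 0, as the dictionary name suggests), reusing <code>arr</code> and <code>values_of_channel_0</code> from above:</p>
<pre><code>rows, cols = zip(*values_of_channel_0.keys())             # split (row, col) keys into two index tuples
arr[rows, cols, 0] = list(values_of_channel_0.values())   # one fancy-indexed assignment, no loop
</code></pre>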
|
<python><python-3.x><numpy>
|
2023-01-05 06:48:52
| 1
| 4,781
|
Elad Maimoni
|
75,014,563
| 4,133,188
|
Pytorch find the matching 2x2 tensor in a batch of 2x2 tensors
|
<p>I have a <code>2x2</code> reference tensor and a batch of candidate <code>2x2</code> tensors. I would like to find the closest candidate tensor to the reference tensor by summed euclidean distance over the identically indexed (except for the batch index) elements.</p>
<p>For example:</p>
<pre><code>ref = torch.as_tensor([[1, 2], [3, 4]])
candidates = torch.rand(100, 2, 2)
</code></pre>
<p>I would like to find the <code>2x2</code> tensor <code>index</code> in <code>candidates</code> that minimizes:</p>
<pre><code>(ref[0][0] - candidates[index][0][0])**2 +
(ref[0][1] - candidates[index][0][1])**2 +
(ref[1][0] - candidates[index][1][0])**2 +
(ref[1][1] - candidates[index][1][1])**2
</code></pre>
<p>Ideally, this solution would work for arbitrary dimension reference tensor of size <code>(b, c, d, ...., z)</code> and an arbitrary <code>batch_size</code> of candidate tensors with equal dimensions to the reference tensor <code>(batch_size, b, c, d,..., z)</code></p>
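<p>A short sketch of one way to do this (hedged, since it assumes summed squared differences over all non-batch elements is the intended metric): flatten everything after the batch dimension, sum the squared differences per candidate, and take the <code>argmin</code>. This works for any trailing shape, not just <code>2x2</code>.</p>
<pre><code>import torch

ref = torch.as_tensor([[1, 2], [3, 4]], dtype=torch.float32)
candidates = torch.rand(100, 2, 2)

# Broadcast the reference against the batch, square, and sum over all non-batch dims.
diffs = (candidates - ref).flatten(start_dim=1).pow(2).sum(dim=1)
index = torch.argmin(diffs)
closest = candidates[index]
</code></pre>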
|
<python><pytorch>
|
2023-01-05 06:27:17
| 1
| 771
|
BeginnersMindTruly
|
75,014,470
| 10,450,242
|
Django Many to Many field dependent on another many to many field
|
<p>I'm creating two many-to-many fields based on the same model inside a single model. I would like the second many-to-many field to offer only the instances already selected in the first many-to-many field, so a further selection can be applied to them. Which approach should I use to handle this behaviour?</p>
<pre><code>from django.db import models

class Params(models.Model):
    name = models.CharField(max_length = 200)
    comments = models.TextField()
def __str__(self):
return self.name
class Selection(models.Model):
select_params = models.ManyToManyField(Params, blank=True, related_name = "selection_1")
    edit_selected_params = models.ManyToManyField(Params, blank=True, related_name="selection_from_selec_params")
</code></pre>
<p>Thanks and regards</p>
|
<python><django><django-models>
|
2023-01-05 06:15:21
| 1
| 346
|
Kashan
|
75,014,452
| 18,059,131
|
Cannot connect to chrome at 127.0.0.1 python undetected-chromedriver
|
<p>Running the following on a machine running Ubuntu 22.04 LTS:</p>
<pre><code>import undetected_chromedriver as uc
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--no-sandbox")
driver = uc.Chrome(executable_path='/home/samuel04/chromedriver', chrome_options=chrome_options)
</code></pre>
<p>I get the following error: <code>Message: unknown error: cannot connect to chrome at 127.0.0.1:42129</code></p>
<p>How could I fix this? My chromedriver and chrome are both on version 108 and undetected-chromedriver is 3.2.1.</p>
|
<python><ubuntu><selenium-webdriver><selenium-chromedriver><undetected-chromedriver>
|
2023-01-05 06:12:49
| 0
| 318
|
prodohsamuel
|
75,014,245
| 1,625,487
|
Python Gunicorn with Gevent workers very slow than pure flask for time.localtime
|
<p>I have a controller that is much slower when running under Gunicorn/Gevent (21 s) than under pure Flask (5 s).</p>
<p>Upon profiling there are 300K calls to Python's built-in method <code>time.localtime</code>. Here's a simplified controller that also shows this behavior</p>
<pre><code>import time

from flask import Flask

app = Flask(__name__)

def my_time():
    time.localtime()
@app.route('/my_test', methods=['GET'])
def my_test():
"""
Test endpoint
"""
for i in range(326457):
my_time()
return 'OK'
</code></pre>
<p>Profile output from running pure Flask</p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
326457 0.204 0.000 0.387 0.000 __init__.py:23(my_time)
326457 0.183 0.000 0.183 0.000 {built-in method time.localtime}
326457 0.010 0.000 0.010 0.000 {method 'disable' of '_lsprof.Profiler' objects}
0 0.000 0.000 profile:0(profiler)
</code></pre>
<p>Profile output w/ Gevent/Gunicorn.
Gunicorn server settings <code>gunicorn --workers=2 --threads=5 --timeout=300 --graceful-timeout=300 --worker-connections=100 --worker-class=gevent -b :8080 main:app</code></p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
326457 0.226 0.000 15.784 0.000 __init__.py:23(my_time)
326457 15.557 0.000 15.557 0.000 {built-in method time.localtime}
326457 0.016 0.000 0.016 0.000 {method 'disable' of '_lsprof.Profiler' objects}
0 0.000 0.000 profile:0(profiler)
</code></pre>
<p>What's causing time.localtime to take over 15s with Gunicorn/Gevent?</p>
<p>Versions</p>
<pre><code>Python 3.6
gevent==22.10.2
gunicorn==20.1.0
</code></pre>
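<p>A small isolation test that may help narrow this down (an assumption on my part: that gevent's monkey-patching, rather than Gunicorn itself, is the variable worth checking first). Run it once with and once without the <code>patch_all()</code> call and compare the timings:</p>
<pre><code>from gevent import monkey
monkey.patch_all()          # comment this out for the second run

import time
import timeit

# Same number of calls as the profiled endpoint.
print(timeit.timeit(time.localtime, number=326457))
</code></pre>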
|
<python><performance><gunicorn><gevent>
|
2023-01-05 05:36:48
| 0
| 2,432
|
Abs
|
75,014,117
| 5,640,258
|
What's the difference between Bazel's py_library and py_binary rules
|
<p>Bazel's <a href="https://bazel.build/reference/be/python#py_library" rel="nofollow noreferrer">documentation</a> explains how to use each, but I don't understand the difference in their nature. Aren't all Python files executable anyway? What difference does it make to declare one as a binary?</p>
|
<python><bazel><bazel-rules>
|
2023-01-05 05:16:56
| 2
| 494
|
A. Fenzry
|
75,013,812
| 2,148,718
|
Access Google Cloud Storage object in a project that I don't belong to
|
<p>There is a GCP project that contains a bucket that I have read and write permissions to, but I don't know the name of the project nor am I part of the project. None of the contents of this bucket are public.</p>
<p>I have successfully authenticated my user locally using <code>gcloud auth application-default login</code>.</p>
<p>I can successfully download from this bucket using <code>gsutil cat gs://BUCKET/PATH</code>.</p>
<p>However, if I use the <code>google.cloud.storage</code> Python API, it fails at the point of identifying the project, presumably because I don't have access to the project itself:</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import storage
client = storage.Client()
storage.Blob.from_string("gs://BUCKET/PATH", client=client).download_as_text()
</code></pre>
<blockquote>
<p>The billing account for the owning project is disabled in state closed: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)</p>
</blockquote>
<p>I can't use <code>storage.Client.create_anonymous_client()</code> since this is only relevant for public buckets, but I suspect that I could fix this by changing the <code>credentials</code> argument to <code>Client()</code>.</p>
<p>Can anyone help me download the file from Google Cloud in this case?</p>
|
<python><google-cloud-platform><google-cloud-storage><google-api-python-client>
|
2023-01-05 04:18:04
| 1
| 20,337
|
Migwell
|
75,013,720
| 12,883,297
|
Select the dataframe based on the certain time value in pandas
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame([["A","11:40 AM"],["B","12:51 PM"],["C","6:33 PM"],["D","11:13 AM"],["E","7:13 PM"]],columns=["id","time"])
</code></pre>
<pre><code>id time
A 11:40 AM
B 12:51 PM
C 6:33 PM
D 11:13 AM
E 7:13 PM
</code></pre>
<p>I want to select only those rows which are < 6:30 PM.</p>
<p>Expected output:</p>
<pre><code>df = pd.DataFrame([["A","11:40 AM"],["B","12:51 PM"],["D","11:13 AM"]],columns=["id","time"])
</code></pre>
<pre><code>id time
A 11:40 AM
B 12:51 PM
D 11:13 AM
</code></pre>
<p>I tried <code>df[(df['time'].dt.time < '18:30:00')]</code>, but it gives an error. How can I do this?</p>
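<p>A minimal sketch of what I believe is needed (the column holds plain strings, so the <code>.dt</code> accessor fails until it is parsed; the <code>format</code> string below assumes times always look like <code>11:40 AM</code>):</p>
<pre><code>import pandas as pd
from datetime import time

parsed = pd.to_datetime(df["time"], format="%I:%M %p")   # parse the 12-hour strings
out = df[parsed.dt.time < time(18, 30)]                  # keep rows strictly before 6:30 PM
</code></pre>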
|
<python><pandas><dataframe><time-series><timestamp>
|
2023-01-05 04:00:33
| 1
| 611
|
Chethan
|
75,013,708
| 10,483,893
|
Finding local minima and maxima
|
<p>The implementation from Geeksforgeeks <a href="https://www.geeksforgeeks.org/find-indices-of-all-local-maxima-and-local-minima-in-an-array/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/find-indices-of-all-local-maxima-and-local-minima-in-an-array/</a> is wrong.
If you have consecutive duplicates, things will fall apart!</p>
<pre><code>Example 1: values = [ 1, 2, 3, 7, 11, 15, 13, 12, 11, 6, 5, 7, 11, 8]
           The default implementation correctly identifies "15" as a peak.
Example 2: values = [ 1, 2, 3, 7, 11, 15, 15, 13, 12, 11, 6, 5, 7, 11, 8]
           The default implementation will mark "11" as a local maximum because there are two consecutive 15's.
</code></pre>
<p>Below is the code from GeeksforGeeks with the problem highlighted: when comparing against the left and right neighbours, if a neighbour's value is equal, you need to look further left or right:</p>
<pre><code>def findLocalMaximaMinima(n, arr):
# Empty lists to store points of
# local maxima and minima
mx = []
mn = []
# Checking whether the first point is
# local maxima or minima or neither
if(arr[0] > arr[1]):
mx.append(0)
elif(arr[0] < arr[1]):
mn.append(0)
# Iterating over all points to check
# local maxima and local minima
for i in range(1, n-1):
# Condition for local minima
if(arr[i-1] > arr[i] < arr[i + 1]): <-- Problem is here
mn.append(i)
# Condition for local maxima
elif(arr[i-1] < arr[i] > arr[i + 1]): <-- Problem is here
mx.append(i)
# Checking whether the last point is
# local maxima or minima or neither
if(arr[-1] > arr[-2]):
mx.append(n-1)
elif(arr[-1] < arr[-2]):
mn.append(n-1)
# Print all the local maxima and
# local minima indexes stored
if(len(mx) > 0):
print("Points of Local maxima"\
" are : ", end ='')
print(*mx)
else:
print("There are no points of"\
" Local maxima.")
if(len(mn) > 0):
print("Points of Local minima"\
" are : ", end ='')
print(*mn)
else:
print("There are no points"\
" of Local minima.")
</code></pre>
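<p>If SciPy is an option (an assumption, since the question only uses plain Python), <code>scipy.signal.find_peaks</code> treats a flat run of equal values as a single plateau and reports one index for it, which sidesteps the consecutive-duplicate problem described above. Note that, unlike the GeeksforGeeks code, it never reports the first or last element as an extremum.</p>
<pre><code>import numpy as np
from scipy.signal import find_peaks

values = np.array([1, 2, 3, 7, 11, 15, 15, 13, 12, 11, 6, 5, 7, 11, 8])

maxima, _ = find_peaks(values)    # plateau-aware local maxima
minima, _ = find_peaks(-values)   # local minima are peaks of the negated signal
print(maxima, minima)
</code></pre>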
|
<python><algorithm>
|
2023-01-05 03:57:02
| 2
| 1,404
|
user3761555
|
75,013,651
| 9,359,102
|
Django makemigrations : IndexError: list index out of range
|
<p>When I attempt to run python3.8 manage.py makemigrations, I get the following:</p>
<pre><code>File "/usr/lib/python3.8/gettext.py", line 436, in _parse
plural = v[1].split('plural=')[1]
IndexError: list index out of range
</code></pre>
<p>Upon detailed inspection by running python3.8 manage.py runserver, I observed the following:</p>
<pre><code>2023-01-05 03:33:22,179 django.utils.autoreload INFO Watching for file changes with StatReloader
2023-01-05 03:33:22,180 django.utils.autoreload DEBUG Waiting for apps ready_event.
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 110, in inner_run
autoreload.raise_last_exception()
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/core/management/__init__.py", line 375, in execute
autoreload.check_errors(django.setup)()
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/apps/registry.py", line 114, in populate
app_config.import_models()
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/apps/config.py", line 301, in import_models
self.models_module = import_module(models_module_name)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/contrib/auth/models.py", line 92, in <module>
class Group(models.Model):
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/db/models/base.py", line 161, in __new__
new_class.add_to_class(obj_name, obj)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/db/models/base.py", line 326, in add_to_class
value.contribute_to_class(cls, name)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/db/models/fields/related.py", line 1635, in contribute_to_class
self.remote_field.through = create_many_to_many_intermediary_model(self, cls)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/db/models/fields/related.py", line 1099, in create_many_to_many_intermediary_model
'verbose_name': _('%(from)s-%(to)s relationship') % {'from': from_, 'to': to},
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/functional.py", line 176, in __mod__
return str(self) % rhs
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/functional.py", line 140, in __text_cast
return func(*self.__args, **self.__kw)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/translation/__init__.py", line 94, in gettext
return _trans.gettext(message)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/translation/trans_real.py", line 355, in gettext
_default = _default or translation(settings.LANGUAGE_CODE)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/translation/trans_real.py", line 268, in translation
_translations[language] = DjangoTranslation(language)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/translation/trans_real.py", line 155, in __init__
self._add_installed_apps_translations()
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/translation/trans_real.py", line 203, in _add_installed_apps_translations
translation = self._new_gnu_trans(localedir)
File "/home/earthling/myEnv/lib/python3.8/site-packages/django/utils/translation/trans_real.py", line 177, in _new_gnu_trans
return gettext_module.translation(
File "/usr/lib/python3.8/gettext.py", line 613, in translation
t = _translations.setdefault(key, class_(fp))
File "/usr/lib/python3.8/gettext.py", line 261, in __init__
self._parse(fp)
File "/usr/lib/python3.8/gettext.py", line 436, in _parse
plural = v[1].split('plural=')[1]
IndexError: list index out of range
</code></pre>
<p>I have checked everything in INSTALLED_APPS and it looks correct, since that is the only thing shown in the error trace.</p>
<p>Another error (which seems unrelated) appears before the above error in the trace:</p>
<pre><code>botocore.utils DEBUG : Metadata service returned non-200 response with status code of 404 for url:http://169.254.169.254/latest/meta-data/iam/security-credentials/
</code></pre>
<p>I'll be posting a separate question for the botocore error, but I include it here to give the complete trace.</p>
<p>I have the following list of languages in settings.py, which is mentioned in the stack trace.</p>
<pre class="lang-py prettyprint-override"><code>
LANGUAGE_CODE = 'en'
LANGUAGES = [
('ar', _('Arabic')),
('es', _('Spanish')),
('ca', _('Catalan')),
('cs', _('Czech')),
('en-gb', _('British English')),
('el', _('Greek')),
('ko', _('Korean')),
('fi', _('Finnish')),
('pl', _('Polish')),
('en', _('English')),
('de', _('German')),
('nl', _('Dutch')),
('da', _('Danish')),
('hu', _('Hungarian')),
('sv', _('Swedish')),
('fr', _('French')),
('it', _('Italian')),
('tr', _('Turkish')),
('pt', _('Portuguese')),
('pt-br', _('Brazilian Portuguese')),
('ro', _('Romanian')),
('ru', _('Russian')),
('sk', _('Slovak')),
('uk', _('Ukrainian')),
('zh-cn', _('Simplified Chinese')),
]
</code></pre>
<p>EDIT:
What I have tried, based on this Stack Overflow post (<a href="https://stackoverflow.com/questions/1329278/using-settings-languages-with-properly-translated-names-using-gettext">Using settings.LANGUAGES with properly translated names using gettext()</a>):</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>from django.utils.translation import gettext_lazy as _
LANGUAGE_CODE = 'en'
LANGUAGES = [
('ar', _('Arabic')),
('ca', _('Catalan')),
('cs', _('Czech')),
('da', _('Danish')),
('de', _('German')),
('en', _('English')),
('en-gb', _('British English')),
('el', _('Greek')),
('es', _('Spanish')),
('fi', _('Finnish')),
('fr', _('French')),
('it', _('Italian')),
('ko', _('Korean')),
('nl', _('Dutch')),
('pl', _('Polish')),
('pt', _('Portuguese')),
('pt-br', _('Brazilian Portuguese')),
('ro', _('Romanian')),
('ru', _('Russian')),
('sk', _('Slovak')),
('uk', _('Ukrainian')),
('zh-cn', _('Simplified Chinese')),
]</code></pre>
<p>Also tried:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>LANGUAGE_CODE = 'en'
gettext_noop = lambda s: s
LANGUAGES = (
('ar', gettext_noop('Arabic')),
('ca', gettext_noop('Catalan')),
('cs', gettext_noop('Czech')),
('da', gettext_noop('Danish')),
('de', gettext_noop('German')),
('en', gettext_noop('English')),
('en-gb', gettext_noop('British English')),
('el', gettext_noop('Greek')),
('es', gettext_noop('Spanish')),
('fi', gettext_noop('Finnish')),
('fr', gettext_noop('French')),
('it', gettext_noop('Italian')),
('ko', gettext_noop('Korean')),
('nl', gettext_noop('Dutch')),
('pl', gettext_noop('Polish')),
('pt', gettext_noop('Portuguese')),
('pt-br', gettext_noop('Brazilian Portuguese')),
('ro', gettext_noop('Romanian')),
('ru', gettext_noop('Russian')),
('sk', gettext_noop('Slovak')),
('uk', gettext_noop('Ukrainian')),
('zh-cn', gettext_noop('Simplified Chinese')),
)</code></pre>
<p>The error persists!</p>
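<p>A diagnostic sketch that may help locate the culprit (the paths are assumptions taken from the traceback): the crash comes from <code>gettext</code> failing to parse the <code>Plural-Forms</code> header of some compiled <code>.mo</code> file, so trying to parse every <code>.mo</code> file the interpreter can see should point at the broken one.</p>
<pre class="lang-py prettyprint-override"><code>import gettext
from pathlib import Path

# Directories guessed from the traceback; adjust to your environment.
for root in ("/home/earthling/myEnv/lib/python3.8/site-packages", "/usr/share/locale"):
    for mo in Path(root).rglob("*.mo"):
        try:
            with mo.open("rb") as fp:
                gettext.GNUTranslations(fp)   # runs the same _parse that crashes
        except Exception as exc:
            print(mo, exc)
</code></pre>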
|
<python><django><django-i18n>
|
2023-01-05 03:46:25
| 1
| 489
|
Earthling
|
75,013,555
| 2,101,043
|
string conversion with exponential notation
|
<p>I have a value with exponential notation that I want to convert to a string in Python. I tried the ways below, but it always comes out as e+100:</p>
<pre><code>value=5066e97
str(value) results in '5.066e+100'
f'{value}' results in '5.066e+100'
value.__str__() results in '5.066e+100'
'{}'.format(value) results in '5.066e+100'
</code></pre>
<p>But I want the result to be '5066e97', i.e. any value written in exponential notation must be converted to a string exactly as written, with quotes around it. What is the Pythonic way to achieve this?</p>
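<p>One hedged observation with a sketch (assuming the value arrives as text at some point, e.g. embedded in JSON): once <code>5066e97</code> has become a Python <code>float</code>, the original spelling is gone and only the normalised <code>5.066e+100</code> form can be printed, so the string has to be captured at parse time rather than reconstructed afterwards.</p>
<pre><code>import json

raw = '{"value": 5066e97}'                 # hypothetical source text
parsed = json.loads(raw, parse_float=str)  # keep the number token as the original string
print(parsed["value"])                     # '5066e97'
</code></pre>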
|
<python><python-3.x><string><scientific-notation>
|
2023-01-05 03:23:45
| 1
| 2,062
|
Sai
|
75,013,480
| 5,621,387
|
error while adding multiple triggers to same glue workflow via boto3
|
<p>The task I want to achieve: create a workflow, add jobs and triggers to the workflow, and run the workflow.</p>
<p>I am using below code to create the workflow</p>
<pre><code>import boto3

client = boto3.client('glue')

def create_workflow(wf_name):
response = client.create_workflow(
Name=wf_name,
Description='string'
)
return response['Name']
wf_name=create_workflow('wf_dev_jan_5th')
</code></pre>
<p>Below is the code I have used to add a job and a trigger to the existing workflow:</p>
<pre><code>response_trigger=client.create_trigger(
Name="new_trigger",
WorkflowName="wf_dev_jan_5th",Type='ON_DEMAND',Actions=[
{
'JobName': 'abc_dev",
'Arguments': {
'string': 'string'
},
'Timeout': 123,
'SecurityConfiguration': 'string',
'NotificationProperty': {
'NotifyDelayAfter': 123
}
},
])
</code></pre>
<p>From the above code I could see the workflow created with that name. I have two jobs: 'abc_dev' and 'def_dev'.</p>
<p>When I created a trigger named new_trigger on workflow wf_dev_jan_5th, it didn't throw any error.</p>
<p>But when I created a trigger named new_trigger_1 for the other job (def_dev) on the same workflow wf_dev_jan_5th, it threw the error below:</p>
<pre><code> error:IdempotentParameterMismatchException: An error occurred (IdempotentParameterMismatchException) when calling the CreateTrigger operation: Trigger with name 'new_trigger' already submitted with different configuration
</code></pre>
<p>Can anyone help here please</p>
|
<python><amazon-web-services><boto3><aws-glue>
|
2023-01-05 03:09:29
| 1
| 789
|
Naveen Srikanth
|
75,013,374
| 18,308,621
|
How to send a Polars `Expr`/`PyExpr`/`LogicalPlan` from Python to Rust?
|
<p>In a single process, I can write a Polars <code>Expr</code> and then apply it to a <code>DataFrame</code> with <code>with_column</code> or <code>select</code>.</p>
<p>But in another scenario with realtime financial data, I have a Rust process acting as a server that holds a lot of historical and realtime data (<code>DataFrame</code>s). Multiple Python processes (clients) need to apply their own <code>Expr</code>/<code>PyExpr</code>/<code>LogicalPlan</code> to the same data.</p>
<p>Because the data changes over time, I need to avoid frequently copying data from Rust to Python. Is there a way to send the <code>Expr</code>/<code>PyExpr</code>/<code>LogicalPlan</code> from the Python client to the Rust server and evaluate it in Rust next to its data, so that far less has to be copied?</p>
|
<python><rust><python-polars><rust-polars>
|
2023-01-05 02:41:51
| 1
| 331
|
Hakase
|
75,013,274
| 14,584,978
|
How to re-group parquet files using polars (in file storage)
|
<p>I am using a dataset of multiple data frames as a makeshift data warehouse accessible to Power BI and python use cases. How can I regroup the data using polars into data frames of a particular size?</p>
<pre><code>from pathlib import Path
import uuid
import polars as pl
def repartition(directory_to_repartition, target_size):
repart_dir = Path(directory_to_repartition)
old_paths = [v for v in repart_dir.iterdir() if v.suffix == '.parquet']
frames = [pl.read_parquet(path) for path in old_paths]
    big_frame = pl.concat(frames) # memory usage may or may not be tolerable
#############This method is the missing link I am looking for assuming I have sufficient memory for these operations########
new_frames = big_frame.split_partitions(partition_size=target_size)
for frame in new_frames:
frame.write_parquet(repart_dir / f"{uuid.uuid4()}.parquet")
for old in old_paths:
try:
old.unlink()
except FileNotFoundError:
pass
</code></pre>
<p>Note: the function split_partitions is a pseudo-method of Polars DataFrames; it does not exist.</p>
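<p>A rough sketch of that missing piece, assuming <code>target_size</code> is meant as a row count per output file: Polars has no <code>split_partitions</code>, but <code>DataFrame.slice</code> can cut the concatenated frame into fixed-size chunks.</p>
<pre><code>def split_partitions(frame, partition_size):
    # Cut the frame into consecutive chunks of at most partition_size rows.
    return [frame.slice(offset, partition_size)
            for offset in range(0, frame.height, partition_size)]

new_frames = split_partitions(big_frame, target_size)
</code></pre>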
|
<python><dataframe><parquet><partitioning><python-polars>
|
2023-01-05 02:22:02
| 1
| 374
|
Isaacnfairplay
|
75,013,126
| 3,624,549
|
python - Convert Panda Dataframe string-column into bigquery.SchemaField "TIMESTAMP"
|
<p>I am trying to load a <code>BigQuery Table</code> from a Python pandas <code>DataFrame</code>.</p>
<p>the csv file has the content:</p>
<pre><code>t_time
2023-01-01 07:20:54.272000 UTC
2023-01-02 04:22:26.914000 UTC
2023-01-03 04:32:38.663000 UTC
</code></pre>
<p>the BigQuery table has one column <code>t_time</code> with datatype <code>TIMESTAMP</code></p>
<p>schema: <code>bigquery.SchemaField("t_time", "TIMESTAMP", mode="NULLABLE")</code></p>
<p>snippet code:</p>
<pre><code>from google.cloud import bigquery
import pandas as pd
import ...
client = bigquery.Client()
df=pd.read_csv("./my_times.csv", header=1, names=['t_time'])
print(f"> {df['t_time']}")
df.info()
job_config = bigquery.LoadJobConfig(
schema = [
bigquery.SchemaField("t_time", "TIMESTAMP"),
    ],
write_disposition="WRITE_TRUNCATE",
)
client.load_table_from_dataframe(df, "myproj.mydataset.mytable", job_config=job_config).result()
</code></pre>
<p>output:</p>
<pre><code> 0 2022-08-03 07:20:54.272000 UTC
1 2022-08-04 04:22:26.914000 UTC
2 2022-08-03 04:32:38.663000 UTC
Name: t_time, dtype: object
Error object of type <class 'str'> cannot be converted to int
</code></pre>
<p>The problem is in <code>bigquery.SchemaField("insert_timestamp", "TIMESTAMP"),</code>.
I am wondering why, since I have other tables with a timestamp column whose times are in that same format, <code><date> <time> UTC</code>.</p>
<p>I have also tried to convert the dataframe column <code>t_time</code> to a timestamp, but without success (I am not sure how to convert from that format into a timestamp).</p>
<p>What would be the correct approach to load the <code>bigquery</code> table with datatype <code>timestamp</code> for the given CSV format (with UTC)?</p>
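<p>A hedged sketch of one approach (assuming the client should receive real timestamps rather than strings): convert the column with <code>pandas.to_datetime</code> before calling <code>load_table_from_dataframe</code>, so the frame carries timezone-aware datetimes that map onto the <code>TIMESTAMP</code> schema field.</p>
<pre><code>import pandas as pd

df = pd.read_csv("./my_times.csv", header=1, names=["t_time"])
df["t_time"] = pd.to_datetime(df["t_time"], utc=True)   # parses the "... UTC" suffix
# then: client.load_table_from_dataframe(df, "myproj.mydataset.mytable", job_config=job_config)
</code></pre>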
|
<python><pandas><dataframe><google-bigquery>
|
2023-01-05 01:53:41
| 1
| 2,420
|
Alg_D
|