| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
74,992,493
| 17,501,206
|
Find all words in binary buffer using Python
|
<p>I want to find in a <strong>binary buffer</strong> (<code>bytes</code>) all the "words" built from ASCII lowercase letters and digits that are <strong>exactly</strong> 5 characters long.</p>
<p>For example:</p>
<p><code>bytes(b'a\x1109ertx01\x03a54bb\x05')</code> contains <code>a54bb</code> and <code>09ert</code>.</p>
<p>Note that the string <code>abcdef121212</code> is longer than 5 chars, so I don't want it.</p>
<p>I have built this set:</p>
<pre><code>set([ord(i) for i in string.ascii_lowercase + string.digits])
</code></pre>
<p>What is the fastest way to do that using Python?</p>
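<p>A minimal sketch of one possible approach, using a regular expression over the raw bytes (the buffer below is illustrative):</p>
<pre><code>import re

# Runs of exactly five ASCII lowercase letters or digits, with no such
# character immediately before or after.
WORD_RE = re.compile(rb'(?<![a-z0-9])[a-z0-9]{5}(?![a-z0-9])')

def find_words(buf: bytes) -> list:
    return WORD_RE.findall(buf)

print(find_words(b'a\x1109ert\x01\x03a54bb\x05'))  # [b'09ert', b'a54bb']
</code></pre>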
|
<python><python-3.x><set>
|
2023-01-03 11:05:34
| 1
| 334
|
vtable
|
74,992,220
| 925,913
|
Python: Mock class constructor to throw an Exception, inside another class's constructor
|
<h3>Problem</h3>
<p>To put it concisely, I have a class A whose constructor catches any exceptions that might occur. The only logic that <em>can</em> throw an exception, though, is the construction of another class B (from an external library, in case that matters). I want to test that when this inner constructor (B) throws an exception, the outer constructor (A) catches it.</p>
<pre><code># module_a.py
from external_module_b import B

class A:
    def __init__(self) -> None:
        try:
            self.b = B(
                b_param_1="...",
                b_param_2="..."
            )
            self.ok = True
            # ...
        except Exception as e:
            self.ok = False
            print_report_request(repr(e))
</code></pre>
<h3>Attempts</h3>
<p>First, I tried using @patch() with side_effect like this:</p>
<pre><code># test_module_a.py
from unittest import mock, TestCase
from module_a import A

class MyTestCase(TestCase):
    @mock.patch("external_module_b.B")
    def test_constructor_throws(self, mock_b: mock.Mock):
        mock_b.side_effect = Exception("test")
        a = A()
        self.assertFalse(a.ok)
</code></pre>
<p>This didn't seem to work—a.ok was True. I tried another suggestion to define side_effect in @patch() itself:</p>
<pre><code>    @mock.patch("external_module_b.B", side_effect=Exception("Test"))
    def test_constructor_throws(self, mock_b: mock.Mock):
        a = A()
        self.assertFalse(a.ok)
</code></pre>
<p>a.ok was still True.</p>
<p>I wondered if something's wrong with the string I'm giving to @patch(). But typing "external_module_b" in code, PyCharm's autocomplete did suggest "external_module_b.B". (I'm not sure whether that's a valid proof or not.) I tried yet another suggestion that uses raiseError. I also tried making side_effect a function (lambda). But I think it's more likely that I'm misunderstanding something fundamental. Possibly to do with mocking constructors / classes.</p>
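<p>A minimal sketch of how such a patch is usually targeted: the name is patched where it is looked up (in <code>module_a</code>), not in the module that defines it. Names follow the question; this is an illustrative test, not the original code:</p>
<pre><code># test_module_a.py
from unittest import mock, TestCase

from module_a import A

class MyTestCase(TestCase):
    # Patch the reference that module_a actually uses at call time.
    @mock.patch("module_a.B", side_effect=Exception("test"))
    def test_constructor_throws(self, mock_b: mock.Mock):
        a = A()
        self.assertFalse(a.ok)
        mock_b.assert_called_once()
</code></pre>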
|
<python><mocking>
|
2023-01-03 10:41:16
| 1
| 30,423
|
Andrew Cheong
|
74,992,183
| 7,052,830
|
Uvicorn gives "access is denied" in Anaconda Prompt
|
<p><strong>Problem</strong></p>
<p>I have two conda virtual environments. One can run uvicorn commands such as <code>uvicorn main:app --reload</code> to run <a href="https://fastapi.tiangolo.com/tutorial/first-steps/" rel="nofollow noreferrer">the example code from the FastAPI tutorial</a>, but the other environment returns <code>Access is denied.</code> for each uvicorn command.</p>
<p><strong>What I have tried</strong></p>
<p>The second (non-working) environment has the exact same versions of uvicorn, fastapi, and python installed, and I'm using the same Anaconda prompt. Hence it's not a problem of access rights on the underlying folder or admin rights, because in that case neither environment should work in the same prompt (right?).</p>
<p>I'm looking for a difference in both environments, but given that only fastapi and uvicorn are required to run Fast API code, I'm not sure what to try next.</p>
<p><strong>Question</strong></p>
<p>Are there additional requirements for running a FastAPI app, beyond the fastapi and uvicorn packages that could cause this problem and/or are there perhaps other matters I should look into?</p>
|
<python><anaconda><fastapi><uvicorn>
|
2023-01-03 10:38:08
| 0
| 545
|
Tim J
|
74,992,166
| 20,051,041
|
How to define dynamic column range in Excel using Python
|
<p>I am using Python 3 to generate an Excel file. In each run the number of columns is different.</p>
<p>With static number of columns, I would use e.g.:</p>
<pre><code>writer = pd.ExcelWriter("File.xlsx", engine = "xlsxwriter")
writer.sheets["Sheet"].set_column("A:G", 10)
</code></pre>
<p>What is a code expression to select all Excel columns (replacing "A:G" above)?</p>
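<p>A minimal sketch of one way to size every written column without a hard-coded "A:G" string; the sample frame is illustrative, and <code>set_column</code> also accepts 0-based numeric first/last column indices:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})  # any number of columns

with pd.ExcelWriter("File.xlsx", engine="xlsxwriter") as writer:
    df.to_excel(writer, sheet_name="Sheet")
    n_cols = len(df.columns) + 1  # +1 for the index column written by to_excel
    writer.sheets["Sheet"].set_column(0, n_cols - 1, 10)
</code></pre>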
|
<python><excel>
|
2023-01-03 10:36:14
| 2
| 580
|
Mr.Slow
|
74,992,120
| 19,580,067
|
Find the row and column index of the particular value in Dataframe
|
<p>I have to find the row and column index of a particular value in the dataframe. I have the code to find the row index based on the column name, but I am not sure how to find both the row and column indexes.</p>
<p>Current Table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
</tr>
</thead>
<tbody>
<tr>
<td>VT1</td>
<td>Date</td>
<td>Time</td>
<td>Glen</td>
<td>1600</td>
</tr>
<tr>
<td>VT2</td>
<td>04/16</td>
<td>4:00</td>
<td>Cof</td>
<td>1600</td>
</tr>
<tr>
<td>VT3</td>
<td>04/18</td>
<td>5.00</td>
<td>1750</td>
<td>NAN</td>
</tr>
<tr>
<td>VT4</td>
<td>04/19</td>
<td>7.00</td>
<td>1970</td>
<td>NAN</td>
</tr>
</tbody>
</table>
</div>
<p>From the above table, I need to find the row and column index of the value 'Date'.</p>
<p>Code to find the row index based on a column name:</p>
<pre><code>print(df[df[1]=='Date'].index.values)
</code></pre>
<p>But I need to find both indexes without giving the column name.</p>
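<p>A minimal sketch of one way to locate the value without naming a column, using a boolean mask over the whole frame (the sample frame reproduces the table above):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame([["VT1", "Date", "Time", "Glen", 1600],
                   ["VT2", "04/16", "4:00", "Cof", 1600],
                   ["VT3", "04/18", "5.00", 1750, np.nan],
                   ["VT4", "04/19", "7.00", 1970, np.nan]])

# Positional (row, column) indices of every cell equal to 'Date'.
rows, cols = np.where(df.values == "Date")
print(list(zip(rows, cols)))  # one match: row 0, column 1
</code></pre>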
|
<python><pandas><dataframe><python-camelot>
|
2023-01-03 10:32:19
| 2
| 359
|
Pravin
|
74,991,976
| 9,102,437
|
How to change the default charset in MySQL?
|
<p>I have Python code which adds rows to a MySQL database that runs on the same machine. The code is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import mysql.connector
config = {
'user': 'root',
'password': 'a$$enjoyer',
'host': 'localhost',
'database': 'db',
'raise_on_warnings': True
}
cnx = mysql.connector.connect(**config)
con = cnx.cursor()
</code></pre>
<p>Now, I have a table which has a <code>MEDIUMTEXT</code> column. The issue is that whenever I try to insert texts with emojis (yes, I need them), I get the error like: <code>1366 (HY000): Incorrect string value: '\xF0\x9F\xA5\xB6' for column 'epictext' at row 1</code>.</p>
<p>The code is as follows:</p>
<pre class="lang-py prettyprint-override"><code>con.execute('INSERT INTO nice VALUES(%s,%s,%s)', [1,2,'🥶'])
</code></pre>
<p>I have found a few questions here which addressed this error like, for example <a href="https://stackoverflow.com/questions/20411440/incorrect-string-value-xf0-x9f-x8e-xb6-xf0-x9f-mysql">this</a> one, but they didn't fix the issue.</p>
<p>The weird thing is that simply running <code>INSERT INTO nice VALUES(1,2,'🥶');</code> directly in MySql console works fine. Moreover adding the <code>'collation': 'utf8mb4_general_ci'</code> to <code>config</code> fixes the issue entirely, but when I last tried doing this, this line was not necessary, and I would really like to avoid using it if it is possible.</p>
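<p>A minimal sketch of the connection-level setting that is usually involved here; the credentials are placeholders, and the table/column must themselves use a utf8mb4-compatible charset for 4-byte characters such as emojis:</p>
<pre><code>import mysql.connector

config = {
    'user': 'root',
    'password': '...',
    'host': 'localhost',
    'database': 'db',
    'charset': 'utf8mb4',       # request a 4-byte-capable connection charset
    'raise_on_warnings': True,
}
cnx = mysql.connector.connect(**config)
con = cnx.cursor()
con.execute('INSERT INTO nice VALUES(%s,%s,%s)', [1, 2, '🥶'])
cnx.commit()
</code></pre>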
|
<python><mysql><sql><python-3.x><mysql-connector-python>
|
2023-01-03 10:21:10
| 1
| 772
|
user9102437
|
74,991,778
| 12,394,386
|
'BertModel' object has no attribute 'bert' error with German BERT model
|
<p>I want to replicate the work in this repo:
<a href="https://github.com/theartificialguy/NLP-with-Deep-Learning/blob/master/BERT/Multi-Class%20classification%20TF-BERT/multi_class.ipynb" rel="nofollow noreferrer">https://github.com/theartificialguy/NLP-with-Deep-Learning/blob/master/BERT/Multi-Class%20classification%20TF-BERT/multi_class.ipynb</a>,
but with German texts and other labels.
I prepared my data and went through the steps, making the necessary modifications.
Instead of
<code>tokenizer = BertTokenizer.from_pretrained('bert-base-cased')</code>
I used <code>tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")</code>
from this documentation: <a href="https://huggingface.co/dbmdz/bert-base-german-cased" rel="nofollow noreferrer">https://huggingface.co/dbmdz/bert-base-german-cased</a>.
And instead of
<code>model = TFBertModel.from_pretrained('bert-base-cased')</code>
I used <code>model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")</code>.
Then, when I came to the point where I had to execute this part,</p>
<pre><code># defining 2 input layers for input_ids and attn_masks
input_ids = tf.keras.layers.Input(shape=(256,), name='input_ids', dtype='int32')
attn_masks = tf.keras.layers.Input(shape=(256,), name='attention_mask', dtype='int32')
bert_embds = model.bert(input_ids, attention_mask=attn_masks)[1]  # 0 -> activation layer (3D), 1 -> pooled output layer (2D)
intermediate_layer = tf.keras.layers.Dense(512, activation='relu', name='intermediate_layer')(bert_embds)
output_layer = tf.keras.layers.Dense(5, activation='softmax', name='output_layer')(intermediate_layer)  # softmax -> calcs probs of classes
sentiment_model = tf.keras.Model(inputs=[input_ids, attn_masks], outputs=output_layer)
sentiment_model.summary()
</code></pre>
<p>I got this error:</p>
<pre><code>AttributeError                            Traceback (most recent call last)
<ipython-input-42-ed437bbb2d3e> in <module>
      3 attn_masks = tf.keras.layers.Input(shape=(256,), name='attention_mask', dtype='int32')
      4
----> 5 bert_embds = model.bert(input_ids, attention_mask=attn_masks)[1] # 0 -> activation layer (3D), 1 -> pooled output layer (2D)
      6 intermediate_layer = tf.keras.layers.Dense(512, activation='relu', name='intermediate_layer')(bert_embds)
      7 output_layer = tf.keras.layers.Dense(5, activation='softmax', name='output_layer')(intermediate_layer) # softmax -> calcs probs of classes

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1263         if name in modules:
   1264             return modules[name]
-> 1265         raise AttributeError("'{}' object has no attribute '{}'".format(
   1266             type(self).__name__, name))
   1267

AttributeError: 'BertModel' object has no attribute 'bert'
</code></pre>
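<p>For reference, a heavily hedged sketch of the difference that seems to matter here: <code>AutoModel.from_pretrained(...)</code> returns a PyTorch <code>BertModel</code> (which has no <code>.bert</code> attribute), while the Keras code in the notebook expects the TensorFlow class. Assumptions: TensorFlow and PyTorch are both installed, and <code>from_pt=True</code> converts the PyTorch weights when no TF weights ship with the checkpoint:</p>
<pre><code>from transformers import TFAutoModel

# TensorFlow counterpart of the checkpoint, usable inside tf.keras code.
model = TFAutoModel.from_pretrained("dbmdz/bert-base-german-cased", from_pt=True)
</code></pre>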
|
<python><nlp><model><huggingface-transformers><bert-language-model>
|
2023-01-03 10:04:31
| 1
| 323
|
youta
|
74,991,754
| 7,161,679
|
How to yield one array element and keep other elements in pyspark DataFrame?
|
<p>I have a pyspark DataFrame like:</p>
<pre>
+------------------------+
| ids|
+------------------------+
|[101826, 101827, 101576]|
+------------------------+
</pre>
<p>and I want to explode this dataframe like:</p>
<pre>
+------------------------+
| id| ids|
+------------------------+
|101826 |[101827, 101576]|
|101827 |[101826, 101576]|
|101576 |[101826, 101827]|
+------------------------+
</pre>
<p>How can I do this using a pyspark udf or other methods?</p>
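<p>A minimal sketch of one udf-free approach, combining <code>explode</code> with the SQL higher-order <code>filter</code> function (available in Spark 2.4+); the sample frame is illustrative:</p>
<pre><code>from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([([101826, 101827, 101576],)], ["ids"])

result = (
    df.withColumn("id", F.explode("ids"))                      # one row per element
      .withColumn("ids", F.expr("filter(ids, x -> x != id)"))  # drop that element
      .select("id", "ids")
)
result.show(truncate=False)
</code></pre>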
|
<python><pyspark><apache-spark-sql>
|
2023-01-03 10:01:57
| 1
| 1,438
|
littlely
|
74,991,684
| 4,796,942
|
Batch update validation and formatting cells using pygsheets
|
<p>I am using pygsheets and would like to batch validate cells instead of looping through each cell and doing it iteratively. I have gone through the <a href="https://readthedocs.org/projects/pygsheets/downloads/pdf/stable/" rel="nofollow noreferrer">pygsheets documentation</a> and have not found an example of this. Would this be possible, and if so, how would one do it? I did see an example of batching <a href="https://pygsheets.readthedocs.io/en/2.0.6/examples.html" rel="nofollow noreferrer">in the documentation (through unlinking and then linking again)</a>, but this did not work for me; instead, no update happened.</p>
<p>Below I have a working example of the code that I am trying to optimise by batching the update.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<pre><code>import pygsheets

spread_sheet_id = "...insert...spreadsheet...id"
spreadsheet_name = "...spreadsheet_name..."
wks_name_or_pos = "...worksheet_name..."
spreadsheet = pygsheets.Spreadsheet(client=service, id=spread_sheet_id)
wksheet = spreadsheet.worksheet('title', wks_name_or_pos)

header_list = ["A", "B", "C"]
for index, element in enumerate(header_list):
    cell_string = str(chr(65 + index) + "1")
    wksheet.cell(cell_string).set_text_format('bold', True).value = element
    header_cell = wksheet.cell(cell_string)
    header_cell.color = (0.9529412, 0.9529412, 0.9529412, 0)  # set background color of this cell as a tuple (red, green, blue, alpha)
    header_cell.update()
    wksheet.set_data_validation(
        start=cell_string, end=cell_string,
        condition_type='TEXT_CONTAINS',
        condition_values=[element], inputMessage=f"Value must be {element}", strict=True)
</code></pre>
<p>I have realised I can change the value in the cell by passing it in as a list of lists, but not sure how to batch the validation and batch format the cell.</p>
<pre><code>header_list = ["A","B","C"]
list_of_lists = [[col] for col in header_list]
# update values with list of lists (working)
wksheet.update_cells('A1:C1',list_of_lists)
# batch update to bold, change the colour to grey and make sure values fit in cell (increase cell size) ?
# wksheet.add_conditional_formatting(start='A1', end='C1',
# condition_type='CUSTOM_FORMULA',
# format={'backgroundColor':{'red':0.5,'green':0.5, 'blue':0.5, 'alpha':0}},
# condition_values=['=NOT(ISBLANK(A1))'])
# batch validate multiple cells so that the value is strictly the value provided ?
</code></pre>
<p>I also tried just unlinking, running the pygsheets commands then linking again as</p>
<pre><code>wksheet.unlink()
header_list = ["A", "B", "C"]
for index, element in enumerate(header_list):
    cell_string = str(chr(65 + index) + "1")
    wksheet.cell(cell_string).set_text_format('bold', True).value = element
    header_cell = wksheet.cell(cell_string)
    header_cell.color = (0.9529412, 0.9529412, 0.9529412, 0)  # set background color of this cell as a tuple (red, green, blue, alpha)
    header_cell.update()
    wksheet.set_data_validation(
        start=cell_string, end=cell_string,
        condition_type='TEXT_CONTAINS', condition_values=[element], inputMessage=f"Value must be {element}", strict=True)
wksheet.link()
</code></pre>
|
<python><google-sheets><google-sheets-api><pygsheets>
|
2023-01-03 09:56:20
| 1
| 1,587
|
user4933
|
74,991,641
| 2,320,153
|
Gunicorn: do not empty specified log files when restarting
|
<p>I have created a Gunicorn project, with <code>accesslog</code> and <code>errorlog</code> being specified in a config file, and then the server being started with only a <code>-c</code> flag to specify this config file.</p>
<p>The problem is, each time I restart the same Gunicorn process (via <code>pkill -F <pidfile, also specified in config></code>), the files specified in these configs are emptied. I've been told that it's because of the mode in which Gunicorn opens these files being "write" rather than "append", but I haven't found anything about this in the official settings.</p>
<p>How can I fix it? It's important because I tend to forget manually backing up these logs and had no capacity for automating it so far.</p>
|
<python><linux><logging><gunicorn>
|
2023-01-03 09:53:07
| 1
| 1,420
|
Zoltán Schmidt
|
74,991,629
| 14,122,835
|
Inner join in pandas
|
<p>I have two dataframes:</p>
<ul>
<li>The first one was extracted from the manifest database. The data contains the value, the route (origin and destination), and the actual SLA</li>
</ul>
<pre><code>awb_number route value sla_actual (days)
01 A - B 24,000 2
02 A - C 25,000 3
03 C - B 29,000 5
04 B - D 35,000 6
</code></pre>
<ul>
<li>The second dataframe contains the route (origin and destination) and the internal SLA (3PL SLA).</li>
</ul>
<pre><code>route sla_partner (days)
A - B 4
B - A 3
A - C 3
B - D 5
</code></pre>
<p>I would like to investigate the gap between the SLA actual and 3PL SLA, so what I do is to join these two dataframes based on the routes.</p>
<p>I supposed the result would be like this:</p>
<pre><code>awb_number route value sla_actual sla_partner
01 A - B 24,000 2 4
02 A - C 25,000 3 3
03 C - B 29,000 5 NaN
04 B - D 35,000 6 5
</code></pre>
<p>What I have done is:</p>
<pre><code>df_sla_check = pd.merge(df_actual, df_sla_partner, on = ['route_city_lazada'], how = 'inner')
</code></pre>
<p>The first dataframe has 36,000 rows while the second dataframe has 20,000 rows, but the result returns over 700,000 rows. Is there something wrong with my logic? Isn't it supposed to return around 20,000 rows - 36,000 rows?</p>
<p>Can somebody help me do this correctly?</p>
<p>Thank you in advance</p>
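<p>A minimal sketch of how the row explosion is usually avoided: duplicate route keys on either side multiply the matches, so de-duplicating the partner SLA table (and using a left join to keep unmatched routes as NaN) gives one output row per shipment. Data and column names follow the tables above; the real frames join on <code>route_city_lazada</code>:</p>
<pre><code>import pandas as pd

df_actual = pd.DataFrame({
    "awb_number": ["01", "02", "03", "04"],
    "route": ["A - B", "A - C", "C - B", "B - D"],
    "value": [24000, 25000, 29000, 35000],
    "sla_actual": [2, 3, 5, 6],
})
df_sla_partner = pd.DataFrame({
    "route": ["A - B", "B - A", "A - C", "B - D"],
    "sla_partner": [4, 3, 3, 5],
})

df_sla_check = df_actual.merge(
    df_sla_partner.drop_duplicates(subset="route"),  # one SLA row per route
    on="route",
    how="left",                                      # keep routes with no partner SLA
)
print(df_sla_check)
</code></pre>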
|
<python><pandas><inner-join>
|
2023-01-03 09:52:12
| 3
| 531
|
yangyang
|
74,991,599
| 5,470,966
|
Does PyCharm have a feature for executing a python script in parts/cells (like VsCode's "Run in Interactive Window")?
|
<p>when using <strong>VSCode</strong>, I can run python files in cells/parts as if it was Jupyter notebook, without actually having a notebook.
<a href="https://code.visualstudio.com/docs/python/jupyter-support-py" rel="nofollow noreferrer">https://code.visualstudio.com/docs/python/jupyter-support-py</a>
It means, you can run a python file, part by part, iteratively, like in Jupyter Notebooks, but in <code>.py</code> file.</p>
<p>it helps me to keep the code organized as a python file. (screenshot attached)</p>
<p>I wonder if the same feature exists in <strong>PyCharm</strong>. I couldn't find it.</p>
<p>I attach a screenshot of the feature in VsCode when I can run simple python file in interactive mode, part by part.</p>
<p>thanks.</p>
<p><a href="https://i.sstatic.net/YRuVG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YRuVG.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/pGDEi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pGDEi.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><jupyter-notebook><pycharm><ipython>
|
2023-01-03 09:49:53
| 2
| 488
|
ggcarmi
|
74,991,551
| 10,238,256
|
Summarize rows in pandas dataframe by column value and append specific column values as columns
|
<p>I have a dataframe as follows with multiple rows per id (maximum 3).</p>
<pre><code>dat = pd.DataFrame({'id':[1,1,1,2,2,3,4,4], 'code': ["A","B","D","B","D","A","A","D"], 'amount':[11,2,5,22,5,32,11,5]})
id code amount
0 1 A 11
1 1 B 2
2 1 D 5
3 2 B 22
4 2 D 5
5 3 A 32
6 4 A 11
7 4 D 5
</code></pre>
<p>I want to consolidate the df and have only one row per id so that it looks as follows:</p>
<pre><code> id code1 amount1 code2 amount2 code3 amount3
0 1 A 11 B 2 D 5
1 2 B 22 D 5 NaN NaN
2 3 A 32 NaN NaN NaN NaN
3 4 A 11 D 5 NaN NaN
</code></pre>
<p>How can I achieve this in pandas?</p>
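<p>A minimal sketch of one way to do this with <code>cumcount</code> plus <code>pivot</code>, using the sample frame from the question:</p>
<pre><code>import pandas as pd

dat = pd.DataFrame({'id': [1, 1, 1, 2, 2, 3, 4, 4],
                    'code': ["A", "B", "D", "B", "D", "A", "A", "D"],
                    'amount': [11, 2, 5, 22, 5, 32, 11, 5]})

# Number the rows within each id, pivot them into columns, then flatten the
# MultiIndex column labels into code1/amount1, code2/amount2, ...
dat["n"] = dat.groupby("id").cumcount() + 1
wide = dat.pivot(index="id", columns="n", values=["code", "amount"])
wide.columns = [f"{name}{n}" for name, n in wide.columns]
order = [f"{name}{n}" for n in range(1, 4) for name in ("code", "amount")]
print(wide[order].reset_index())
</code></pre>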
|
<python><pandas><transformation>
|
2023-01-03 09:46:04
| 1
| 375
|
sfluck
|
74,991,540
| 7,376,511
|
Python Requests: set proxy only for specific domain
|
<p>Is there a native way in Python with the requests library to only use a proxy for a specific domain?</p>
<p>Like how you can mount HTTP Adapters, but with proxies, like the following example:</p>
<pre><code>from requests import Session
from requests.adapters import HTTPAdapter
s = Session()
s.mount("http://www.example.org", HTTPAdapter(max_retries=retries))
</code></pre>
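<p>For reference, a minimal sketch of the proxy form that can be scoped to a single scheme and host via the <code>proxies</code> mapping (the proxy address is a placeholder):</p>
<pre><code>import requests

s = requests.Session()
# Only requests to http://www.example.org go through this proxy;
# other hosts are requested directly.
s.proxies.update({"http://www.example.org": "http://127.0.0.1:8080"})
r = s.get("http://www.example.org/")
</code></pre>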
|
<python><python-requests>
|
2023-01-03 09:45:18
| 2
| 797
|
Some Guy
|
74,991,284
| 273,506
|
pandas dataframe reset index
|
<p>I have a dataframe like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Attended</th>
<th>Email</th>
<th>JoinDate</th>
<th>JoinTime</th>
<th>JoinTime</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td></td>
<td>JoinTimeFirst</td>
<td>JoinTimeLast</td>
</tr>
<tr>
<td>Yes</td>
<td>009indrajeet</td>
<td>12/3/2022</td>
<td>12/3/2022 19:50</td>
<td>12/3/2022 21:47</td>
</tr>
<tr>
<td>Yes</td>
<td>09871143420.ms</td>
<td>12/18/2022</td>
<td>12/18/2022 20:41</td>
<td>12/18/2022 20:41</td>
</tr>
<tr>
<td>Yes</td>
<td>09s.bisht</td>
<td>12/17/2022</td>
<td>12/17/2022 19:51</td>
<td>12/17/2022 19:51</td>
</tr>
</tbody>
</table>
</div>
<p>and I need to change column headers like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Attended</th>
<th>Email</th>
<th>JoinDate</th>
<th>JoinTimeFirst</th>
<th>JoinTimeLast</th>
</tr>
</thead>
<tbody>
<tr>
<td>Yes</td>
<td>009indrajeet</td>
<td>12/3/2022</td>
<td>12/3/2022 19:50</td>
<td>12/3/2022 21:47</td>
</tr>
<tr>
<td>Yes</td>
<td>09871143420.ms</td>
<td>12/18/2022</td>
<td>12/18/2022 20:41</td>
<td>12/18/2022 20:41</td>
</tr>
<tr>
<td>Yes</td>
<td>09s.bisht</td>
<td>12/17/2022</td>
<td>12/17/2022 19:51</td>
<td>12/17/2022 19:51</td>
</tr>
</tbody>
</table>
</div>
<p>I tried multiple ways but nothing worked out; any help will be appreciated. To get to the first dataframe, this is what I did:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"Attended":["Yes","Yes","Yes"]
,"Email":["009indrajeet","09871143420.ms","09s.bisht"]
,"JoinTime":["Dec 3, 2022 19:50:52","Dec 3, 2022 20:10:52","Dec 3, 2022 21:47:32"]})
#convert JoinTime to timestamp column
df['JoinTime'] = pd.to_datetime(df['JoinTime'],format='%b %d, %Y %H:%M:%S', errors='raise')
#extract date from timestamp column
df['JoinDate'] = df['JoinTime'].dt.date
#created grouper dataset
df_grp = df.groupby(["Attended","Email","JoinDate"])
#define aggregations
dict_agg = {'JoinTime':[('JoinTimeFirst','min'),('JoinTimeLast','max'),('JoinTimes',set)]}
#do grouping with aggregations
df = df_grp.agg(dict_agg).reset_index()
</code></pre>
<p>print(df)</p>
<pre><code>print(df.columns)
MultiIndex([('Attended', ''),
( 'Email', ''),
('JoinDate', ''),
('JoinTime', 'JoinTimeFirst'),
('JoinTime', 'JoinTimeLast'),
('JoinTime', 'JoinTimes')],
)
</code></pre>
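<p>A minimal sketch of one way to flatten those columns, continuing from the <code>df</code> produced above: keep the second level of each column tuple where it exists, otherwise fall back to the first level:</p>
<pre><code>df.columns = [second if second else first for first, second in df.columns]
print(list(df.columns))
# ['Attended', 'Email', 'JoinDate', 'JoinTimeFirst', 'JoinTimeLast', 'JoinTimes']
</code></pre>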
|
<python><pandas><dataframe>
|
2023-01-03 09:20:01
| 3
| 1,069
|
Arjun
|
74,991,185
| 6,282,576
|
Groupby using Django's ORM to get a dictionary of lists from the queryset of model with foreignkey
|
<p>I have two models, <code>Business</code> and <code>Employee</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
class Business(models.Model):
name = models.CharField(max_length=150)
# ...
class Employee(models.Model):
business = models.ForeignKey(
Business,
related_name="employees",
on_delete=models.CASCADE,
)
name = models.CharField(max_length=150)
# ...
</code></pre>
<p>Here's a sample data:</p>
<pre class="lang-py prettyprint-override"><code>Business.objects.create(name="first company")
Business.objects.create(name="second company")
Employee.objects.create(business_id=1, name="Karol")
Employee.objects.create(business_id=1, name="Kathrine")
Employee.objects.create(business_id=1, name="Justin")
Employee.objects.create(business_id=2, name="Valeria")
Employee.objects.create(business_id=2, name="Krista")
</code></pre>
<p>And I want to get a dictionary of lists, keys being the businesses and values being the list of employees. I can do so using <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#prefetch-related" rel="nofollow noreferrer"><code>prefetch_related</code></a> on the Business model. A query like this:</p>
<pre class="lang-py prettyprint-override"><code>businesses = Business.objects.prefetch_related("employees")
for b in businesses:
print(b.name, '->', list(b.employees.values_list("name", flat=True)))
</code></pre>
<p>Which gives me this:</p>
<pre class="lang-py prettyprint-override"><code>first company -> ['Karol', 'Kathrine', 'Justin']
second company -> ['Valeria', 'Krista']
</code></pre>
<p>And this is exactly what I want and I can construct my dictionary of lists. But the problem is that I only have access to the <code>Employee</code> model. Basically I only have a QuerySet of all Employees and I want to achieve the same result. I figured I could use <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#select-related" rel="nofollow noreferrer"><code>select_related</code></a>, because I do need the business objects, but this query:</p>
<pre class="lang-py prettyprint-override"><code>Employee.objects.select_related("business")
</code></pre>
<p>Gives me this QuerySet:</p>
<pre class="lang-py prettyprint-override"><code><QuerySet [<Employee: Employee object (1)>, <Employee: Employee object (2)>, <Employee: Employee object (3)>, <Employee: Employee object (4)>, <Employee: Employee object (5)>]>
</code></pre>
<p>And I don't know how to group by business using Django's ORM from this QuerySet. How can I do that?</p>
<p>Here's how I'm doing it so far:</p>
<pre class="lang-py prettyprint-override"><code>employees = {}
for employee in Employee.objects.only("business"):
employees.setdefault(employee.business, []).append(employee.id)
</code></pre>
<p>I do this because I need the business objects to perform some operation on the list of their employees. And this works. But I want to do the same without the for loop and in a single query. Is it possible?</p>
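<p>A minimal sketch of one single-query alternative (PostgreSQL only, per the question's tags), using <code>ArrayAgg</code> to collect the employee ids per business; the keys here are business ids, and <code>in_bulk()</code> could map them back to Business objects if needed:</p>
<pre><code>from django.contrib.postgres.aggregates import ArrayAgg

qs = (
    Employee.objects
    .values("business")                      # group by business id
    .annotate(employee_ids=ArrayAgg("id"))   # aggregate employee ids per group
)
employees = {row["business"]: row["employee_ids"] for row in qs}
</code></pre>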
|
<python><django><postgresql><group-by><django-orm>
|
2023-01-03 09:10:31
| 2
| 4,313
|
Amir Shabani
|
74,990,801
| 7,175,247
|
AttributeError: module 'sqlalchemy.orm' has no attribute 'DeclarativeMeta'
|
<p>Getting the below error while running a flask application:</p>
<pre><code>ubuntu@ip-10-50-50-190:~/RHS_US/application$ python3 run.py
Traceback (most recent call last):
  File "run.py", line 1, in <module>
    from portal import create_app
  File "/home/ubuntu/RHS_US/application/portal/__init__.py", line 7, in <module>
    from flask_sqlalchemy import SQLAlchemy
  File "/home/ubuntu/RHS_US/application/rhs_us_venv/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py", line 5, in <module>
    from .extension import SQLAlchemy
  File "/home/ubuntu/RHS_US/application/rhs_us_venv/lib/python3.7/site-packages/flask_sqlalchemy/extension.py", line 17, in <module>
    from .model import _QueryProperty
  File "/home/ubuntu/RHS_US/application/rhs_us_venv/lib/python3.7/site-packages/flask_sqlalchemy/model.py", line 210, in <module>
    class DefaultMeta(BindMetaMixin, NameMetaMixin, sa.orm.DeclarativeMeta):
AttributeError: module 'sqlalchemy.orm' has no attribute 'DeclarativeMeta'
</code></pre>
<p>I have also checked model.py under site packages and it does contain</p>
<blockquote>
<p>class DefaultMeta(BindMetaMixin, NameMetaMixin,
sa.orm.DeclarativeMeta):</p>
</blockquote>
|
<python><flask><flask-sqlalchemy>
|
2023-01-03 08:33:04
| 1
| 814
|
Nagesh Singh Chauhan
|
74,990,755
| 4,451,521
|
Using pytest with dataframes to test specific columns
|
<p>I am writing pytest tests that use pandas dataframes, and I am trying to write the code as generally as I can. (I can always check element by element, but I am trying to avoid that.)</p>
<p>so I have an input dataframe that contains some ID column like this</p>
<pre><code>ID,othervalue, othervalue2
00001, 4, 3
00001, 3, 3
00001, 2, 0
00003, 5, 2
00003, 2, 1
00003, 2, 9
</code></pre>
<p>and I do</p>
<pre><code>def test_df_against_angle(df, angle):
result = do_some_calculation(df, angle)
</code></pre>
<p>Now, <code>result</code> is also a dataframe that contains an ID column, and it also contains a <code>decision</code> column that can take a value like "plus" or "minus" (or "pass", "fail", or something like that). Something like:</p>
<pre><code>ID, someresult, decision, someotherresult
00001, 4, plus, 3
00001, 2, plus, 2
00002, 2, minus, 2
00002, 1, minus, 5
00002, 0, minus, 9
</code></pre>
<p>I want to add an assertion (or several) that asserts the following (not all at once; I mean different assertions, since I have not yet decided which would be better):</p>
<ol>
<li>All decision values corresponding to an ID are the same</li>
<li>The decision values corresponding to an ID are different than the ones of the other ID</li>
<li>The decision of ID 00001 is plus and the one of 00002 is minus</li>
</ol>
<p>I know that pandas has some assertions to compare equal dataframes, but how can I approach this situation?</p>
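<p>A minimal sketch of how those three checks could be written as plain assertions over <code>groupby</code> results; column names follow the example output above, and the small frame is only for illustration:</p>
<pre><code>import pandas as pd

def check_decisions(result: pd.DataFrame) -> None:
    # 1. All decision values for a given ID are the same.
    assert (result.groupby("ID")["decision"].nunique() == 1).all()

    # 2. Different IDs carry different decision values.
    assert result.groupby("ID")["decision"].first().is_unique

    # 3. Specific expectations for known IDs.
    assert (result.loc[result["ID"] == "00001", "decision"] == "plus").all()
    assert (result.loc[result["ID"] == "00002", "decision"] == "minus").all()

result = pd.DataFrame({"ID": ["00001", "00001", "00002"],
                       "decision": ["plus", "plus", "minus"]})
check_decisions(result)
</code></pre>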
|
<python><pandas><pytest>
|
2023-01-03 08:28:48
| 1
| 10,576
|
KansaiRobot
|
74,990,683
| 19,325,656
|
Flask blueprint as a package | setuptools
|
<p>I have a flask blueprint that I'm pushing to GitHub as a separate repo. Then I install it via</p>
<p><em>pip install git+https://....</em></p>
<p>I get the package in my venv and it's all good; I can import the code etc.</p>
<p><strong>However</strong>, there is an issue: I don't see the HTML/CSS/js files that are visible on GitHub.</p>
<p>I found similar errors on here, but the solutions that I tested don't work. Maybe there is an error in my folder structure (if so, how do I do it correctly? Flask needs to see the files too), or maybe it's something different.</p>
<p>Here is my folder structure; all of the folders have __init__.py</p>
<pre><code>src
  updater
    SWupdater
      templates
        static
          js
            *.js
          css
            external_css
              *.css
            *.css
          images
            *.jpg
        SWupdater
          *.html
</code></pre>
<p>This is my setup without names/descriptions etc</p>
<pre><code>setuptools.setup(
packages=setuptools.find_packages(where='src', exclude=["*.tests", "*.tests.*"]),
package_dir={"updater": "src/updater"},
zip_safe=False,
include_package_data=True,
install_requires=[],
classifiers=[],
python_requires='>=3.7'
)
</code></pre>
<p>What I've tried to do in my setup.py</p>
<p>-added package_data</p>
<pre><code>package_data={"updater/SWupdater/templates": ["SWUpdater/*"],
"updater/SWupdater/templates/static":["css/*", "images/*", "js/*"]},
</code></pre>
<p>-added include_package_data</p>
<pre><code>include_package_data=True,
</code></pre>
<p>-a combination of package_data/include
-line order combination (one higher one lower)</p>
<p>I do not build this package, I only push it to GitHub and install it into my venv</p>
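<p>For reference, a hedged sketch of how <code>package_data</code> is usually keyed: the keys are dotted package names (not filesystem paths), and the glob patterns are relative to that package's directory. Names follow the layout above; this is an illustrative variant, not the original setup.py:</p>
<pre><code>import setuptools

setuptools.setup(
    packages=setuptools.find_packages(where="src", exclude=["*.tests", "*.tests.*"]),
    package_dir={"": "src"},
    include_package_data=True,
    package_data={
        "updater.SWupdater": [
            "templates/SWupdater/*.html",
            "templates/static/js/*.js",
            "templates/static/css/*.css",
            "templates/static/css/external_css/*.css",
            "templates/static/images/*.jpg",
        ],
    },
    zip_safe=False,
)
</code></pre>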
|
<python><flask><setuptools>
|
2023-01-03 08:20:52
| 1
| 471
|
rafaelHTML
|
74,990,601
| 12,991,231
|
How to convert the values in a Python Defaultdict to a Numpy array?
|
<p>I want multiple values to belong to the same key, so I used a Python defaultdict to work around this.
However, since the values in the defaultdict are now nested lists, how do I make each element of the nested lists a row of a Numpy ndarray?</p>
<p>Let's say my defaultdict looks like this:</p>
<pre><code>my_dict = defaultdict(list)
*** in some for loop ***
my_dict[key].append(value) # key is a string and value is a Numpy array of shape (1,10)
*** end of the for loop ***
</code></pre>
<p>I guess the slowest way would be using a nested for loop like:</p>
<pre><code>data = np.empty((0,10),np.uint8)
for i in my_dict:
for j in my_dict[i]:
data = np.append(data,j,axis=0)
</code></pre>
<p>Is there a faster way to do this?</p>
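<p>A minimal sketch of one faster alternative: collect all stored arrays and concatenate them once, instead of calling <code>np.append</code> repeatedly (the toy dict below mirrors the shape described in the question):</p>
<pre><code>import numpy as np
from collections import defaultdict
from itertools import chain

my_dict = defaultdict(list)
my_dict["a"].append(np.arange(10).reshape(1, 10))
my_dict["a"].append(np.arange(10, 20).reshape(1, 10))
my_dict["b"].append(np.arange(20, 30).reshape(1, 10))

# One concatenate call over every (1, 10) array in every list.
data = np.concatenate(list(chain.from_iterable(my_dict.values())), axis=0)
print(data.shape)  # (3, 10)
</code></pre>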
|
<python><numpy>
|
2023-01-03 08:13:10
| 2
| 337
|
sensationti
|
74,990,578
| 275,002
|
Go: How to deal with Memory leaks while returning a CString?
|
<p>I have the following function, which returns a JSON string:</p>
<pre><code>func getData(symbol, day, month, year *C.char) *C.char {
combine, _ := json.Marshal(combineRecords)
log.Println(string(combine))
return C.CString(string(combine))
}
</code></pre>
<p>The Go code is then being called in Python</p>
<pre><code>import ctypes
from time import sleep
library = ctypes.cdll.LoadLibrary('./deribit.so')
get_data = library.getData
# Make python convert its values to C representation.
# get_data.argtypes = [ctypes.c_char_p, ctypes.c_char_p,ctypes.c_char_p,ctypes.c_char_p]
get_data.restype = ctypes.c_char_p
for i in range(1,100):
j= get_data("BTC".encode("utf-8"), "5".encode("utf-8"), "JAN".encode("utf-8"), "23".encode("utf-8"))
# j= get_data(b"BTC", b"3", b"JAN", b"23")
print('prnting in Python')
# print(j)
sleep(1)
</code></pre>
<p>It works fine as expected on the Python side, but I fear memory leaks when the function is called in a loop at the Python end.</p>
<p>How do I deal with the memory leaks? Should I return <code>bytes</code> instead of a <code>CString</code> and deal with bytes at the Python end to avoid memory leaks? I did find <a href="https://fluhus.github.io/snopher/" rel="nofollow noreferrer">this</a> link that deals with it, but I do not know the size of the JSON string returned after marshalling.</p>
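<p>For reference, a hedged sketch of the usual ctypes pattern for this: keep the raw pointer (instead of letting <code>c_char_p</code> copy and discard it), copy the bytes into Python-owned memory, then hand the pointer back to a freeing function. <code>freeCString</code> is hypothetical here; on the Go side it would be an additional exported function that calls <code>C.free</code> on the pointer it receives:</p>
<pre><code>import ctypes

library = ctypes.cdll.LoadLibrary('./deribit.so')

get_data = library.getData
get_data.restype = ctypes.c_void_p        # keep the raw pointer

free_cstring = library.freeCString        # hypothetical extra Go export
free_cstring.argtypes = [ctypes.c_void_p]

ptr = get_data(b"BTC", b"5", b"JAN", b"23")
data = ctypes.string_at(ptr)              # copy into Python-owned bytes
free_cstring(ptr)                         # release the C-allocated buffer
print(data)
</code></pre>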
|
<python><go><memory-leaks><ctypes><cgo>
|
2023-01-03 08:10:28
| 2
| 15,089
|
Volatil3
|
74,990,485
| 3,801,744
|
Wrong `nbytes` value in a numpy array after broadcasting with `broadcast_to`
|
<p>I just noted this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import sys
arr = np.broadcast_to(0, (10, 1000000000000))
print(arr.nbytes) # prints "80000000000000"
print(sys.getsizeof(arr)) # prints "120"
</code></pre>
<p>Is this a bug or intended behavior? I.e., is <code>nbytes</code> meant to hold the amount of "logical" bytes, not accounting for 0-strides?</p>
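<p>A small check that illustrates what the numbers describe: <code>nbytes</code> is <code>size * itemsize</code>, i.e. the logical element count, while the zero strides show that no memory is actually repeated for the broadcast view:</p>
<pre><code>import numpy as np

arr = np.broadcast_to(0, (10, 1000000000000))
print(arr.strides)                             # (0, 0): every axis reuses one element
print(arr.nbytes == arr.size * arr.itemsize)   # True: nbytes reports the logical size
</code></pre>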
|
<python><arrays><numpy>
|
2023-01-03 07:59:43
| 1
| 652
|
SiLiKhon
|
74,990,427
| 2,134,767
|
ordering is not working with django model serializer
|
<p>I am using Django version 2.2.16 and django_rest_framework version 3.11.2. Using serializers.ModelSerializer, CRUD operations are working, but ordering is not.</p>
<p>models.py code is below,</p>
<pre><code>class Ticket(models.Model):
title = models.CharField(max_length=255)
description = models.CharField(blank=True, max_length=255)
notes = models.TextField()
</code></pre>
<p>e.g. code in serializers.py:</p>
<pre><code>from rest_framework import serializers

class Ticket(serializers.ModelSerializer):
    title = serializers.CharField()
    description = serializers.CharField(required=False)
    notes = serializers.CharField(required=False)

    class Meta(object):
        fields = '__all__'
        depth = 1
        model = Ticket
        ordering_fields = ["title"]

    def __init__(self, *args, **kwargs):
        super(Ticket, self).__init__(*args, **kwargs)
</code></pre>
<p>When I do a GET operation as below, ordering via query parameters is not working.</p>
<p>eg:</p>
<pre><code>http://localhost:4200/api/ticket?ordering=title
</code></pre>
<p>The above API call returns all data with the default ordering by ID (the auto-created field), but not in ascending order of the title field (which is a char field).</p>
<p>How can I fix this? Also, how can I add filtering to the same endpoint?</p>
<pre><code>eg: http://localhost:4200/api/ticket?title=abc # this should give me result of only matching title field with abc or starts with abc
</code></pre>
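<p>For reference, a hedged sketch of where DRF expects this configuration: ordering and filtering are driven by filter backends on the view, not by attributes on the serializer's Meta. Class names here are illustrative and reuse the question's model:</p>
<pre><code>from rest_framework import filters, viewsets

class TicketViewSet(viewsets.ModelViewSet):
    queryset = Ticket.objects.all()
    serializer_class = TicketSerializer          # the serializer from the question
    filter_backends = [filters.OrderingFilter, filters.SearchFilter]
    ordering_fields = ["title"]   # enables ?ordering=title
    search_fields = ["title"]     # enables ?search=abc (contains match)
</code></pre>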
|
<python><django><django-rest-framework>
|
2023-01-03 07:54:34
| 1
| 2,852
|
Naggappan Ramukannan
|
74,990,168
| 3,755,432
|
Include config files with setup.py install
|
<p>I have tried all the options available on Stack Overflow, but still could not get the etc directory copied to the install path with the <code>setup.py install</code> command. My code has the structure below.</p>
<pre><code>├── setup.py
├── MANIFEST.in
├── etc/
│ ├── config.yaml
│ └── message.xml
└── src/
└── my-app/
├── __init__.py
└── main.py
</code></pre>
<p>I am using setuptools version 65.6.3 with python 3.7.5</p>
<p>setup.py</p>
<pre><code>import setuptools
setuptools.setup(
name='my-app',
version='0.1',
package_dir=("","src"),
packages=setuptools.find_packages("src"),
include_package_data=True,
entry_points={
"console_scripts":[
"my_main = my-app.main:main"
]
})
</code></pre>
<p>MANIFEST.in</p>
<pre><code>recursive-include etc *.yml *.xml
</code></pre>
<p>I have also tried below in MANIFEST</p>
<pre><code>include etc/*.yml
include etc/*.xml
</code></pre>
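<p>For reference, a hedged sketch of one layout that setuptools can package: the config files live inside the package (e.g. <code>src/my_app/etc/</code>), the package uses an importable name (<code>my_app</code> rather than <code>my-app</code>), <code>package_dir</code> is a dict, and the patterns match the actual extensions (<code>.yaml</code>, not <code>.yml</code>). This is an illustrative variant, not the original setup.py:</p>
<pre><code>import setuptools

setuptools.setup(
    name='my-app',
    version='0.1',
    package_dir={"": "src"},                      # a dict, not a tuple
    packages=setuptools.find_packages("src"),
    include_package_data=True,
    package_data={"my_app": ["etc/*.yaml", "etc/*.xml"]},
    entry_points={
        "console_scripts": [
            "my_main = my_app.main:main"
        ]
    })
</code></pre>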
|
<python><setuptools>
|
2023-01-03 07:26:06
| 1
| 2,017
|
tempuser
|
74,990,097
| 1,186,624
|
How can I use scipy.integrate.solve_ivp() to solve an ODE in an interactive simulation?
|
<p>I have implemented a simple simulation on a uniform circular motion using the low level API <code>scipy.integrate.RK45()</code> like the following.</p>
<pre><code>import numpy as np
import scipy.integrate
import matplotlib.pyplot as plt
r = np.array([1, 0], 'float')
v = np.array([0, 1], 'float')
dt = 0.1
def motion_eq(t, y):
r, v = y[0:2], y[2:4]
return np.hstack([v, -r])
motion_solver = scipy.integrate.RK45(motion_eq, 0, np.hstack([r, v]),
t_bound = np.inf, first_step = dt, max_step = dt)
particle, *_ = plt.plot(*r.T, 'o')
plt.gca().set_aspect(1)
plt.xlim([-2, 2])
plt.ylim([-2, 2])
def update():
motion_solver.step()
r = motion_solver.y[0:2]
particle.set_data(*r.T)
plt.draw()
timer = plt.gcf().canvas.new_timer(interval = 50)
timer.add_callback(update)
timer.start()
plt.show()
</code></pre>
<p>At first I tried the high-level API <code>scipy.integrate.solve_ivp()</code>, but it seems it doesn't provide an interface to create an instance containing the state of the system and get states of the system iteratively. (I am calling this an interactive simulation, because you can pause, change the system state and resume, although that's not implemented in the sample code.)</p>
<p>Is this possible with <code>solve_ivp()</code>? If it is not, am I doing it right with <code>RK45</code>, especially in specifying the <code>t_bound</code>, <code>first_step</code> and <code>max_step</code> options? I can find plenty of resources on solving over a given time interval on the Internet, but I couldn't find one on solving like this.</p>
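<p>A minimal sketch of how <code>solve_ivp()</code> can still be used incrementally: integrate one small interval per frame and carry the final state over as the next initial condition (the loop below stands in for the timer callback):</p>
<pre><code>import numpy as np
import scipy.integrate

def motion_eq(t, y):
    r, v = y[0:2], y[2:4]
    return np.hstack([v, -r])

state = np.array([1.0, 0.0, 0.0, 1.0])
t, dt = 0.0, 0.1

for _ in range(5):                              # one iteration per animation frame
    sol = scipy.integrate.solve_ivp(motion_eq, (t, t + dt), state)
    state, t = sol.y[:, -1], t + dt
    print(t, state[:2])                         # current particle position
</code></pre>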
|
<python><scipy><simulation><ode>
|
2023-01-03 07:16:50
| 1
| 5,019
|
relent95
|
74,990,085
| 17,267,064
|
How do I transform text data with cells in rows separated by a pipe sign into a specific pattern of data via Python?
|
<p>I want to transform the data below into a pattern of rows with at most 4 cells. Please find a specimen of the data below.</p>
<pre><code>text = """A | B | Lorem | Ipsum | is | simply | dummy
C | D | text | of | the | printing | and
E | F | typesetting | industry. | Lorem
G | H | more | recently | with | desktop | publishing | software | like | Aldus
I | J | Ipsum | has | been | the | industry's
K | L | standard | dummy | text | ever | since | the | 1500s
M | N | took | a
O | P | scrambled | it | to | make | a | type | specimen | book"""
</code></pre>
<p>I am required to transform each row so that it contains no more than 4 cells. Any cells coming after the fourth cell should be moved to a new row that starts with the same first two cells as the original row, and that new row must also not have more than 4 cells. The transformation of the above text data should look like the one below.</p>
<pre><code>A | B | Lorem | Ipsum
A | B | is | simply
A | B | dummy
C | D | text | of
C | D | the | printing
C | D | and
E | F | typesetting | industry.
E | F | Lorem
G | H | more | recently
G | H | with | desktop
G | H | publishing | software
G | H | like | Aldus
.
.
and so on...
</code></pre>
<p>I have tried something on my own, but I am not even halfway there, as per the below code, which is incomplete.</p>
<pre><code>new_text = ""
for i in text.split('\n'):
    row = i.split(' | ')
    if len(row) == 4:
        new_text = new_text + i + '\n'
    elif len(row) > 4:
        for j in range(len(row)):
            if j < 3:
                new_text = new_text + row[0] + ' | ' + row[1] + ...
</code></pre>
<p>I am unable to figure out the logic to reuse the first two cells when the number of cells in a row is higher than 4.</p>
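<p>A minimal sketch of one way to express that logic: keep the first two cells as a prefix and emit the remaining cells two at a time (it assumes the <code>text</code> variable defined above):</p>
<pre><code>new_rows = []
for line in text.split('\n'):
    cells = line.split(' | ')
    prefix, rest = cells[:2], cells[2:]
    for i in range(0, len(rest), 2):
        new_rows.append(' | '.join(prefix + rest[i:i + 2]))

new_text = '\n'.join(new_rows)
print(new_text)
</code></pre>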
|
<python><python-3.x>
|
2023-01-03 07:15:17
| 2
| 346
|
Mohit Aswani
|
74,990,008
| 10,035,978
|
ping an IP constantly and write the output into a txt file
|
<p>I have this code</p>
<pre><code>import subprocess
import threading

def ping_google(command):
    with open('google.txt', 'a') as f:
        f.write(subprocess.check_output(command))

t1 = threading.Thread(target=ping_anel, args=("ping -t 8.8.8.8",))
</code></pre>
<p>And I would like to save the infinite pinging of Google to a txt file. Is it possible?</p>
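<p>For reference, a hedged sketch of one way to keep writing continuously: <code>check_output</code> only returns when the process exits, which an endless <code>ping -t</code> never does, so streaming the process output straight into the file avoids that (the command string follows the question and is Windows-style):</p>
<pre><code>import subprocess
import threading

def ping_google(command):
    with open('google.txt', 'a') as f:
        with subprocess.Popen(command.split(), stdout=f) as proc:
            proc.wait()          # runs (and writes) until the ping is stopped

t1 = threading.Thread(target=ping_google, args=("ping -t 8.8.8.8",), daemon=True)
t1.start()
</code></pre>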
|
<python><cmd><ping>
|
2023-01-03 07:05:39
| 1
| 1,976
|
Alex
|
74,989,976
| 19,106,705
|
nvidia-smi vs torch.cuda.memory_allocated
|
<p>I am checking the gpu memory usage in the training step.</p>
<p><strong>To start with the main question</strong>, checking the gpu memory using the <code>torch.cuda.memory_allocated</code> method is different from checking with <code>nvidia-smi</code>. And I want to know why.</p>
<p>Actually, I measured the gpu usage using the vgg16 model.</p>
<p>This code prints the <strong>theoretical feature map size and weight size</strong>:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
from functools import reduce
Model_number = 7
Model_name = ["alexnet", "vgg11_bn", "vgg16_bn", "resnet18", "resnet50", "googlenet", "vgg11", "vgg16"]
Model_weights = ["AlexNet_Weights", "VGG11_BN_Weights", "VGG16_BN_Weights", "ResNet18_Weights", "ResNet50_Weights", "GoogLeNet_Weights", "VGG11_Weights", "VGG16_Weights"]
exec(f"from torchvision.models import {Model_name[Model_number]}, {Model_weights[Model_number]}")
exec(f"weights = {Model_weights[Model_number]}.DEFAULT")
exec(f"model = {Model_name[Model_number]}(weights=None)")
weight_memory_allocate = 0
feature_map_allocate = 0
weight_type = 4 # float32 = 4, half = 2
batch_size = 128
input_channels = 3
input_size = [batch_size, 3, 224, 224]
def check_model_info(m):
global input_size
global weight_memory_allocate, feature_map_allocate
if isinstance(m, nn.Conv2d):
in_channels, out_channels = m.in_channels, m.out_channels
kernel_size, stride, padding = m.kernel_size[0], m.stride[0], m.padding[0]
# weight
weight_memory_allocate += in_channels * out_channels * kernel_size * kernel_size * weight_type
# bias
weight_memory_allocate += out_channels * weight_type
# feature map
feature_map_allocate += reduce(lambda a, b: a * b, input_size, 1) * weight_type
out_len = int((input_size[2] + 2 * padding - kernel_size)/stride + 1)
input_size = [batch_size, out_channels, out_len, out_len]
elif isinstance(m, nn.Linear):
input_size = [batch_size, reduce(lambda a, b: a * b, input_size[1:], 1)]
in_nodes, out_nodes = m.in_features, m.out_features
# weight
weight_memory_allocate += in_nodes * out_nodes * weight_type
# bias
weight_memory_allocate += out_nodes * weight_type
#feature map
feature_map_allocate += reduce(lambda a, b: a * b, input_size, 1) * weight_type
input_size = [batch_size, out_nodes]
elif isinstance(m, nn.MaxPool2d):
out_len = int((input_size[2] + 2 * m.padding - m.kernel_size)/m.stride + 1)
input_size = [batch_size, input_size[1], out_len, out_len]
model.apply(check_model_info)
print("---------------------------------------------------------")
print("origial memory allocate")
print(f"total = {(weight_memory_allocate + feature_map_allocate)/1024.0/1024.0:.2f}MB")
print(f"weight = {weight_memory_allocate/1024.0/1024.0:.2f}MB")
print(f"feature_map = {feature_map_allocate/1024.0/1024.0:.2f}MB")
print("---------------------------------------------------------")
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>---------------------------------------------------------
origial memory allocate
total = 4978.54MB
weight = 527.79MB
feature_map = 4450.75MB
---------------------------------------------------------
</code></pre>
<p>And this code checks gpu usage with <code>torch.cuda.memory_allocated</code>:</p>
<pre class="lang-py prettyprint-override"><code>def test_memory_training(in_size=(3,224,224), out_size=1000, optimizer_type=torch.optim.SGD, batch_size=1, use_amp=False, device=0):
sample_input = torch.randn(batch_size, *in_size, dtype=torch.float32)
optimizer = optimizer_type(model.parameters(), lr=.001)
model.to(device)
print(f"After model to device: {to_MB(torch.cuda.memory_allocated(device)):.2f}MB")
for i in range(5):
optimizer.zero_grad()
print("Iteration", i)
with torch.cuda.amp.autocast(enabled=use_amp):
a = torch.cuda.memory_allocated(device)
out = model(sample_input.to(device)).sum() # Taking the sum here just to get a scalar output
b = torch.cuda.memory_allocated(device)
print(f"After forward pass {to_MB(torch.cuda.memory_allocated(device)):.2f}MB")
print(f"Memory consumed by forward pass {to_MB(b - a):.2f}MB")
out.backward()
print(f"After backward pass {to_MB(torch.cuda.memory_allocated(device)):.2f}MB")
optimizer.step()
print(f"After optimizer step {to_MB(torch.cuda.memory_allocated(device)):.2f}MB")
print("---------------------------------------------------------")
def to_MB(a):
return a/1024.0/1024.0
test_memory_training(batch_size=batch_size)
</code></pre>
<p>Output:</p>
<pre><code>After model to device: 529.04MB
Iteration 0
After forward pass 9481.04MB
Memory consumed by forward pass 8952.00MB
After backward pass 1057.21MB
After optimizer step 1057.21MB
---------------------------------------------------------
Iteration 1
After forward pass 10009.21MB
Memory consumed by forward pass 8952.00MB
After backward pass 1057.21MB
After optimizer step 1057.21MB
---------------------------------------------------------
......
</code></pre>
<p>This is the result output by <code>nvidia-smi</code> when training:</p>
<p><a href="https://i.sstatic.net/LDELy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LDELy.png" alt="enter image description here" /></a></p>
<p>Here's a more detailed question:</p>
<p>I think Pytorch store the following 3 things in the training step.</p>
<ol>
<li>model parameters</li>
<li>input feature map in forward pass</li>
<li>model gradient information for optimizer</li>
</ol>
<p>And I think in the forward pass, input feature map should be stored. But in theory, I thought 4450.75MB should be stored in memory, but actually 8952.00MB is stored. Almost 2 times difference.</p>
<p>And if you check the memory usage using <code>nvidia-smi</code> and <code>torch.cuda.memory_allocated</code>, the memory usage using <code>nvidia-smi</code> shows about twice as much memory.</p>
<p>what makes this difference?</p>
<p>Thanks for reading the long question. Any help is appreciated.</p>
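<p>For reference, a small sketch of the counters that usually explain part of the gap: <code>memory_allocated</code> only counts live tensors, <code>memory_reserved</code> includes the caching allocator's pool, and <code>nvidia-smi</code> additionally sees the CUDA context and any non-PyTorch allocations:</p>
<pre><code>import torch

device = 0
print(f"allocated: {torch.cuda.memory_allocated(device) / 1024**2:.2f} MB")
print(f"reserved:  {torch.cuda.memory_reserved(device) / 1024**2:.2f} MB")
print(torch.cuda.memory_summary(device, abbreviated=True))
</code></pre>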
|
<python><pytorch><gpu><nvidia-smi><vram>
|
2023-01-03 07:01:31
| 1
| 870
|
core_not_dumped
|
74,989,931
| 4,041,387
|
converting the C# code to python on decryption
|
<pre><code>using System.Security.Cryptography;
public Task<byte[]> Decrypt(byte[] encryptedBytes, string encryptionKey)
{
if (encryptedBytes == null)
throw new ArgumentNullException("encryptedBytes can't be null");
if (string.IsNullOrEmpty(encryptionKey))
throw new ArgumentNullException("encryptionKey can't be null");
byte[] encryptedTextBytes = encryptedBytes;
this._encryptionKey = encryptionKey;
var encryptionKeyBytes = Encoding.UTF32.GetBytes(this._encryptionKey);
Rfc2898DeriveBytes generatedKey = new Rfc2898DeriveBytes(this._encryptionKey, new byte[] { 0x65, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65, 0x64, 0x76, 0x65, 0x64, 0x65, 0x65 });
var sessionKey = generatedKey.GetBytes(32);
var iV = generatedKey.GetBytes(16);
return Task.Run(() =>
{
return (Decrypt(encryptedTextBytes, sessionKey, iV));
});
}
public byte[] Decrypt(byte[] dataToDecrypt, byte[] key, byte[] iv)
{
using (var aes = new AesCryptoServiceProvider())
{
aes.Mode = CipherMode.CBC;
aes.Padding = PaddingMode.PKCS7;
aes.Key = key;
aes.IV = iv;
using (var memoryStream = new MemoryStream())
{
var cryptoStream = new CryptoStream(memoryStream, aes.CreateDecryptor(), CryptoStreamMode.Write);
cryptoStream.Write(dataToDecrypt, 0, dataToDecrypt.Length);
cryptoStream.FlushFinalBlock();
var decryptBytes = memoryStream.ToArray();
return decryptBytes;
}
}
}
</code></pre>
<p>Hi,
I need help converting the above-mentioned C# code to its equivalent Python code. As I am new to Python, I am not sure which libraries are relevant here. It would be great if someone could help me.</p>
<p>So far I have tried the below code, but it looks like it is not working:</p>
<pre><code>from Crypto.Cipher import AES
from Crypto.Protocol.KDF import PBKDF2
from PIL import Image
_encryptionKey='secret'.encode("utf-32")
salt = '\0x65\0x76\0x61\0x6e\0x20\0x4d\0x65\0x64\0x76\0x65\0x64\0x65\0x65'.encode("utf-32")
key_bytes = PBKDF2(_encryptionKey, salt, dkLen=64)
session_key=key_bytes[:32]
iv= key_bytes[:16]
cipher = AES.new(session_key, AES.MODE_CBC, iv)
val = cipher.decrypt(ency_img)
Image.open(BytesIO(val))
</code></pre>
<p>Here <code>ency_img</code> is encrypted image bytes object coming from the MySQL DB with column type as <code>longblob</code></p>
<p>error from PIL Image</p>
<blockquote>
<p>PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO
object at 0x7fb10d79a270></p>
</blockquote>
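<p>For reference, a hedged sketch of a closer port (assumptions: PyCryptodome; <code>Rfc2898DeriveBytes</code> defaults to PBKDF2-HMAC-SHA1 with 1000 iterations and encodes the password as UTF-8, and the two sequential <code>GetBytes</code> calls mean key and IV come from one 48-byte derived block):</p>
<pre><code>from Crypto.Cipher import AES
from Crypto.Protocol.KDF import PBKDF2
from Crypto.Util.Padding import unpad

def decrypt(encrypted_bytes: bytes, encryption_key: str) -> bytes:
    salt = bytes([0x65, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65, 0x64,
                  0x76, 0x65, 0x64, 0x65, 0x65])
    derived = PBKDF2(encryption_key, salt, dkLen=48, count=1000)  # HMAC-SHA1 default
    session_key, iv = derived[:32], derived[32:48]
    cipher = AES.new(session_key, AES.MODE_CBC, iv)
    return unpad(cipher.decrypt(encrypted_bytes), AES.block_size)
</code></pre>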
|
<python><c#><encryption><cryptography><aes>
|
2023-01-03 06:55:40
| 1
| 425
|
Imran
|
74,989,827
| 18,092,798
|
Snakemake rule higher priority than all other rules
|
<p>So I know that in order to set rule priority, you use <code>ruleorder</code>. Is there an efficient way to give a rule priority over all other rules? For example, suppose I have rules <code>a</code>, <code>b</code>, and <code>c</code>. I would like rule <code>b</code> to have higher priority over <code>a</code> and <code>c</code>. How would I do this other than manually doing <code>ruleorder: b > c</code> and <code>ruleorder: b > a</code>?</p>
|
<python><python-3.x><snakemake><directed-acyclic-graphs>
|
2023-01-03 06:43:44
| 2
| 581
|
yippingAppa
|
74,989,725
| 13,560,598
|
tensorflow sparse dense layer
|
<p>I want to create a custom dense layer in tf.keras, which works with sparse weight matrices. My weight matrices are zero almost everywhere, and I know the sparsity pattern. It would be a huge cost saving. How would I incorporate sparse matrices into a custom Dense layer? Could someone point me to a reference? I could not find this functionality in tf. Edit: the layer should be trainable.</p>
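<p>A minimal sketch of one common workaround: a trainable dense layer whose kernel is multiplied by a fixed binary mask encoding the known sparsity pattern. It enforces the pattern (and restricts gradients to it) but does not by itself reduce compute; the class and argument names are illustrative:</p>
<pre><code>import tensorflow as tf

class MaskedDense(tf.keras.layers.Layer):
    def __init__(self, units, mask, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.mask = tf.constant(mask, dtype=tf.float32)   # shape (in_dim, units)

    def build(self, input_shape):
        self.kernel = self.add_weight(
            name="kernel", shape=(input_shape[-1], self.units), trainable=True)
        self.bias = self.add_weight(name="bias", shape=(self.units,), trainable=True)

    def call(self, inputs):
        # Only the unmasked positions ever contribute or receive gradients.
        return tf.matmul(inputs, self.kernel * self.mask) + self.bias
</code></pre>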
|
<python><tensorflow><sparse-matrix><layer>
|
2023-01-03 06:27:44
| 0
| 593
|
NNN
|
74,989,658
| 7,347,925
|
How to apply scipy curve_fit to different magnitude data?
|
<p>I have two arrays: <code>x</code> ranges from 50 to 1000 while <code>y</code> ranges from 0 to 13.</p>
<p>I'm trying to fit them by the Gaussian function:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
x = np.array([954.57747687, 845.47601272, 746.69873108, 657.48101282,
577.09839969, 504.86539163, 440.13425151, 382.29381761,
330.7683239 , 285.01622822, 244.52904868, 208.83020828,
177.473888 , 150.04388862, 126.15250125, 105.4393871 ,
87.57046634])
y = np.array([ 0.2, 0.5, 0.6, 1.4, 2.7, 4. , 5. , 6.2, 8.6, 10.3, 11.6,
12.4, 12.7, 12.4, 7.6, 3. , 0.8])
# Function to calculate the Gaussian with constants a, b, and c
def gaussian(x, a, b, c):
return a*np.exp(-np.power(x - b, 2)/(2*np.power(c, 2)))
pars, cov = curve_fit(f=gaussian, xdata=x, ydata=y)
print(pars) # [1. 1. 1.]
pars, cov = curve_fit(f=gaussian, xdata=x/100, ydata=y)
print(pars) # [12.38265077 2.49976014 1.18267892]
fig, axs = plt.subplots()
axs.scatter(y, x/100)
axs.plot(gaussian(x/100, *pars), x/100, linewidth=2, linestyle='--')
</code></pre>
<p>I have to scale the x data to make it work ... How to fit the data without manual scaling?</p>
<p><a href="https://i.sstatic.net/LQ5EZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LQ5EZ.png" alt="example" /></a></p>
|
<python><scipy><curve-fitting><scipy-optimize>
|
2023-01-03 06:17:47
| 0
| 1,039
|
zxdawn
|
74,989,136
| 19,238,204
|
Question on Integration Formula and Negative Result with the Plot of the Volume
|
<p>I have created this code by modification from previous topics.</p>
<p><a href="https://i.sstatic.net/Cqjrm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cqjrm.png" alt="volume" /></a></p>
<p>I put the calculated volume on the volume plot. My questions are:</p>
<ol>
<li>My plots are correct right?</li>
<li>My volume calculations are correct too right?</li>
<li>Why would there be a negative volume? If I write the formula for vx(x) with r1 - r2 it will be negative. Should I use abs (absolute value) instead in the future, so that I don't have to care whether I write r1 - r2 or r2 - r1, since the numbers are the same and only the sign differs? What is the significance of a negative sign for a volume? Do we need to think carefully when calculating a volume through integration?</li>
<li>I do not use <code>sympy</code>; is sympy better at calculating integrals than numpy/scipy?</li>
</ol>
<p>Thanks.. this is my code / MWE:</p>
<pre><code># Compare the plot at xy axis with the solid of revolution toward x and y axis
# For region bounded by the line x - 2y = 0 and y^2 = 4x
# Plotting the revolution of the bounded region
# can be done by limiting the np.linspace of the y, u, and x_inverse values
# You can determine the limits by finding the intersection points of the two functions.
import matplotlib.pyplot as plt
import numpy as np
import sympy as sy
def r1(x):
    return x/2
def r2(x):
    return 2*(x**(1/2))
def r3(x):
    return 2*x
def r4(x):
    return (x/2)**(2)
def vx(x):
    return np.pi*(r2(x)**2 - r1(x)**2)
def vy(x):
    return np.pi*(r3(x)**2 - r4(x)**2)
x = sy.Symbol("x")
vx = sy.integrate(vx(x), (x, 0, 16))
vy = sy.integrate(vy(x), (x, 0, 8))
n = 200
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222, projection='3d')
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224, projection='3d')
y = np.linspace(0, 8, n)
x1 = (2*y)
x2 = (y / 2) ** (2)
t = np.linspace(0, np.pi * 2, n)
u = np.linspace(0, 16, n)
v = np.linspace(0, 2 * np.pi, n)
U, V = np.meshgrid(u, v)
X = U
Y1 = (2 * U ** (1/2)) * np.cos(V)
Z1 = (2 * U ** (1/2)) * np.sin(V)
Y2 = (U / 2) * np.cos(V)
Z2 = (U / 2) * np.sin(V)
Y3 = ((U / 2) ** (2)) * np.cos(V)
Z3 = ((U / 2) ** (2)) * np.sin(V)
Y4 = (2*U) * np.cos(V)
Z4 = (2*U) * np.sin(V)
ax1.plot(x1, y, label='$y=x/2$')
ax1.plot(x2, y, label='$y=2 \sqrt{x}$')
ax1.legend()
ax1.set_title('$f(x)$')
ax2.plot_surface(X, Y3, Z3, alpha=0.3, color='red', rstride=6, cstride=12)
ax2.plot_surface(X, Y4, Z4, alpha=0.3, color='blue', rstride=6, cstride=12)
ax2.set_title("$f(x)$: Revolution around $y$ \n Volume = {}".format(vy))
# find the inverse of the function
x_inverse = np.linspace(0, 8, n)
y1_inverse = np.power(2*x_inverse, 1)
y2_inverse = np.power(x_inverse / 2, 2)
ax3.plot(x_inverse, y1_inverse, label='Inverse of $y=x/2$')
ax3.plot(x_inverse, y2_inverse, label='Inverse of $y=2 \sqrt{x}$')
ax3.set_title('Inverse of $f(x)$')
ax3.legend()
ax4.plot_surface(X, Y1, Z1, alpha=0.3, color='red', rstride=6, cstride=12)
ax4.plot_surface(X, Y2, Z2, alpha=0.3, color='blue', rstride=6, cstride=12)
ax4.set_title("$f(x)$: Revolution around $x$ \n Volume = {}".format(vx))
plt.tight_layout()
plt.show()
</code></pre>
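<p>On question 3, a small check of why the sign flips: on 0 <= x <= 16 the outer radius is 2*sqrt(x) and the inner radius is x/2, so the integrand pi*(r2^2 - r1^2) is non-negative; swapping the radii only negates the integrand and therefore the integral, and the magnitude is unchanged:</p>
<pre><code>import sympy as sy

x = sy.Symbol("x")
outer_minus_inner = sy.integrate(sy.pi*((2*sy.sqrt(x))**2 - (x/2)**2), (x, 0, 16))
inner_minus_outer = sy.integrate(sy.pi*((x/2)**2 - (2*sy.sqrt(x))**2), (x, 0, 16))
print(outer_minus_inner, inner_minus_outer)   # 512*pi/3 and -512*pi/3
</code></pre>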
|
<python><numpy><matplotlib>
|
2023-01-03 04:50:10
| 1
| 435
|
Freya the Goddess
|
74,988,972
| 13,176,726
|
Submitting a Post to API from Flutter app always returning Exception: type 'int' is not a subtype of type 'String' in type cast
|
<p>I have the following API in a Django project where I am trying to send a log with some variables that are Int and String.</p>
<p>Here is the Api/views.py:</p>
<pre><code>@api_view(['POST', 'GET'])
@permission_classes([IsAuthenticated])
def addlog(request,username, id):
workout = Workout.objects.get(id=id, user=request.user)
if request.method == 'POST':
if workout.active:
active_session = ActiveSession.objects.get(id=ActiveSession.objects.last().id)
form = LogForm(request.POST)
if form.is_valid():
data = Log()
data.user = request.user
data.log_workout = request.POST.get('log_workout')
data.log_exercise = request.POST.get('log_exercise')
data.log_order = request.POST.get('log_order')
data.log_repetitions = form.cleaned_data['log_repetitions']
data.log_weight = form.cleaned_data['log_weight']
data.workout_id = Workout.pk
data.save()
</code></pre>
<p>In my flutter app I have created the api_service.dart as following:</p>
<pre><code> static Future<http.Response> addLog(int logWeight, int logRepetitions,
int logOrder, String logExercise, String logWorkout, int id) async {
var url = Uri.parse(Config.apiURL +
Config.userAddlogAPI.replaceFirst("{id}", id.toString()));
try {
final response = await http.post(url, headers: {
HttpHeaders.authorizationHeader:
'Token xxxxxxxxxxxx',
}, body: {
'log_weight': logWeight,
'log_repetitions': logRepetitions,
'log_order': logOrder,
'log_exercise': logExercise,
'log_workout': logWorkout,
});
#.......if to catch error but it doesn't reach this part........................
}
</code></pre>
<p>Here the screen.dart</p>
<pre><code> Form(
child: Expanded(
child: Column(
children: [
TextFormField(keyboardType:TextInputType.number,
onChanged:(value) {final int?parsedValue = int.tryParse(value);
if (parsedValue !=null) {setState(() { logWeight = parsedValue;});
} else {}
},),
TextFormField(keyboardType:TextInputType.number,
onChanged:(value) {
final int?parsedValue = int.tryParse(value);
if (parsedValue != null) {setState(() {logRepetitions = parsedValue;});
} else {}},),
OutlinedButton(child: Text('Add Log'),
onPressed:
() async {
final Map<String, dynamic> arguments = ModalRoute.of(context)!.settings.arguments as Map<String, dynamic>;
final int id = arguments['id'] ??0;
final String logWorkout = arguments['logWorkout'] ??'';
final String logExercise = snapshot.data![index].name; // <-- set logExercise here
final int logOrder = breakdown.order;
print("$logRepetitions, $logWeight, $logOrder, $logExercise, $logWorkout, $id");
print(id.runtimeType);
try {
final http.Response response = await APIService.addLog(
logWeight, logRepetitions, logOrder, logExercise, logWorkout, id);
if (response.statusCode == 200) {
print('Log submitted successfully');} else {
print('Failed to submit log');}
} catch (error) { print(error);
await showDialog( context: context, builder: (context) {
return AlertDialog(title:Text('Error'),
content: Text(error.toString()),
actions: [ OutlinedButton(
child: Text('OK'),
onPressed: () {Navigator.of(context).pop();
},),],); },); } }, ), ],),),),
</code></pre>
<p>So in the Dart file there are 2 text fields and an outlined button inside a form. The text fields are where users enter their numeric inputs, and when the user taps submit I receive this error. I have made sure that the only 2 strings, <code>logExercise</code> and <code>logWorkout</code>, are actually strings. I am not sure what I am doing wrong or how to fix it. I am fairly new to Flutter.</p>
|
<python><flutter><dart>
|
2023-01-03 04:09:31
| 2
| 982
|
A_K
|
74,988,964
| 9,218,680
|
create multi index column dataframe by transposing(pivoting) a column
|
<p>I have a dataframe that looks something like:</p>
<pre><code>date enddate category Low High
2023-01-02 06:00:00 2023-12-01 A 45 55
2023-01-02 06:00:00 2024-12-01 A 46 56
2023-01-02 06:00:00 2025-12-01 A 47 57
2023-01-02 06:00:00 2023-12-01 B 85 86
2023-01-02 06:00:00 2024-12-01 B 86 87
2023-01-02 06:00:00 2025-12-01 B 88 89
</code></pre>
<p>And am looking to convert to a dataframe that looks something like:</p>
<pre><code>date
2023-12-01 2024-12-01 2025-12-01
Category Low High Category Low High Category Low High
2023-01-02 06:00:00 A 45 55 A 46 47 A 47 57
2023-01-02 06:00:00 B 85 86 B 86 87 B 88 89
</code></pre>
<p>So it is essentially creating multi-index columns.
I am not sure what the efficient way to do this is. I played around with stacking/unstacking and pivoting a bit, but could not really wrap my head around it; a sketch of the direction I tried is shown after the note below.</p>
<p>Please suggest a good approach.</p>
<p>Please note the date values may not be 06:00:00 for all the rows.</p>
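<p>For reference, the stack/unstack direction I tried looks like this (a minimal sketch assuming the column names above; it keeps <code>category</code> in the row index rather than repeating it under each date, which is part of what I am stuck on):</p>
<pre><code>import pandas as pd

# df is the first dataframe shown above
out = (df.set_index(['date', 'category', 'enddate'])
         .unstack('enddate')        # columns become (value, enddate)
         .swaplevel(0, 1, axis=1)   # -> (enddate, value)
         .sort_index(axis=1, level=0))
print(out)
</code></pre>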
|
<python><pandas><multi-index>
|
2023-01-03 04:08:14
| 1
| 2,510
|
asimo
|
74,988,694
| 9,532,692
|
Plotly: how to draw two linecharts with two dataframes with colors
|
<p>I am trying to draw two subplots (top and bottom) that contains category of 3 colors and values for each date. With the dataset below, I can draw a single plot with data_A (as shown below)</p>
<pre><code>fig = px.line(data_A,
x="Date", y="Percent",
color='Color',
color_discrete_sequence=['green','red','gold'],
markers=True)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/Ixroi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ixroi.png" alt="image1" /></a></p>
<p>However, I can't seem to draw two plots using <code>add_trace</code></p>
<pre><code>from plotly.subplots import make_subplots
import plotly.express as px
fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.05)
fig.add_trace(x = data_A['Date'],
y= data_A['Percent'],
# color='Color',
# color_discrete_sequence=['green','red','gold'],
markers=True)
fig.add_trace(x = data_B['Date'],
y= data_B['Percent'],
# color='Color',
# color_discrete_sequence=['green','red','gold'],
markers=True)
</code></pre>
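<p>For completeness, this is the kind of thing I was expecting to have to do: build each subplot with <code>px.line</code> and copy its traces into the subplot figure (just a sketch; I am not sure this is the idiomatic approach):</p>
<pre><code>import pandas as pd
import plotly.express as px
from plotly.subplots import make_subplots

df_a, df_b = pd.DataFrame(data_A), pd.DataFrame(data_B)

fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.05)
for row, df in enumerate([df_a, df_b], start=1):
    sub = px.line(df, x="Date", y="Percent", color="Color",
                  color_discrete_sequence=['green', 'red', 'gold'], markers=True)
    for trace in sub.data:
        fig.add_trace(trace, row=row, col=1)
fig.show()
</code></pre>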
<p>Here is the sample data:</p>
<pre><code>data_A = [{'Date': '2022-01-01', 'Color': "Green", 'Percent': 40},
{'Date': '2022-01-01', 'Color': "Red", 'Percent': 20},
{'Date': '2022-01-01', 'Color': "Yellow", 'Percent': 30},
{'Date': '2022-01-02', 'Color': "Green", 'Percent': 45},
{'Date': '2022-01-02', 'Color': "Red", 'Percent': 30},
{'Date': '2022-01-02', 'Color': "Yellow", 'Percent': 25},
{'Date': '2022-01-03', 'Color': "Green", 'Percent': 40},
{'Date': '2022-01-03', 'Color': "Red", 'Percent': 20},
{'Date': '2022-01-03', 'Color': "Yellow", 'Percent': 30},
{'Date': '2022-01-04', 'Color': "Green", 'Percent': 45},
{'Date': '2022-01-04', 'Color': "Red", 'Percent': 25},
{'Date': '2022-01-04', 'Color': "Yellow", 'Percent': 30}]
data_B = [{'Date': '2022-01-01', 'Color': "Green", 'Percent': 30},
{'Date': '2022-01-01', 'Color': "Red", 'Percent': 50},
{'Date': '2022-01-01', 'Color': "Yellow", 'Percent': 20},
{'Date': '2022-01-02', 'Color': "Green", 'Percent': 65},
{'Date': '2022-01-02', 'Color': "Red", 'Percent': 10},
{'Date': '2022-01-02', 'Color': "Yellow", 'Percent': 25},
{'Date': '2022-01-03', 'Color': "Green", 'Percent': 40},
{'Date': '2022-01-03', 'Color': "Red", 'Percent': 30},
{'Date': '2022-01-03', 'Color': "Yellow", 'Percent': 20},
{'Date': '2022-01-04', 'Color': "Green", 'Percent': 55},
{'Date': '2022-01-04', 'Color': "Red", 'Percent': 35},
{'Date': '2022-01-04', 'Color': "Yellow", 'Percent': 10}]
</code></pre>
|
<python><plotly>
|
2023-01-03 02:57:51
| 0
| 724
|
user9532692
|
74,988,630
| 1,219,322
|
PyDeck in Streamlit - event handling for on_click
|
<p>Given the following code, the on_click event is not triggered, and I am not certain why.</p>
<pre><code>def on_click(info):
print('Testing...')
st.write("TEST TEST")
chart = pdk.Deck(
map_style="dark",
initial_view_state={
"country": "CA",
"latitude":45.458952,
"longitude": -73.571648 ,
"zoom": 11,
"pitch": 50,
},
layers=[
pdk.Layer(
"HexagonLayer",
data=df,
#get_position=[-73.571648, 45.458952],
get_position=['longitude', 'latitude'],
auto_highlight=True,
pickable=True,
on_click=on_click,
elevation=100,
elevation_scale=9,
),
],
)
st.pydeck_chart(chart)
</code></pre>
<p>The idea is just to print something for testing, to see whether clicking triggers the on_click handler, but nothing happens.</p>
|
<python><streamlit><pydeck>
|
2023-01-03 02:38:50
| 0
| 1,998
|
dirtyw0lf
|
74,988,623
| 1,019,129
|
MULTI level dict: Return reference instead of the content
|
<p>I have a tree. I have to calculate the cumulative sum up to every leaf, and finally keep only the top N leaves by sum.</p>
<p>So far I get the sum and I get the leaf contents back... the problem is that I want REFERENCES to the leaves back, so that I can delete the ones with a low cumulative sum.</p>
<p>Is there a way to do that?</p>
<pre><code>import numpy as np
class BeamTree(dict):
def __init__(self, *args, **kwargs):
super(BeamTree, self).__init__(*args, **kwargs)
self.__dict__ = self
self.nodes = []
def add(self, a, aa, q):
self.nodes.append( BeamTree({ 'a': a, 'aa': aa, 'qq': q, 'qs':0 }) )
return self.nodes[-1]
def qsum(self, q=0):
if len(self.nodes) == 0 : return []
leafs = []
for node in self.nodes:
node['qs'] = q + node['qq']
leafs.extend( node.qsum(node['qs']) )
if len(node.nodes) == 0 : leafs.append(node)
if len(leafs) > 0 : return leafs
return []
def generate(self, branch=3, depth=3):
if depth < 1 : return
for b in range(branch) :
sym = 's' + str(np.random.randint(100))
aix = np.random.randint(100)
q = np.random.rand()
node = self.add(sym, aix, q)
node.generate(branch, depth-1)
</code></pre>
<p>Here is a test:</p>
<pre><code>In [212]: b=BeamTree(); b.generate(2,2)
In [213]: l=b.qsum(0)
In [214]: b
Out[214]:
{'nodes': [{'a': 's80',
'aa': 56,
'qq': 0.673,
'qs': 0.673,
'nodes': [{'a': 's8', 'aa': 16, 'qq': 0.115, 'qs': 0.788, 'nodes': []}, {'a': 's64', 'aa': 10, 'qq': 0.599, 'qs': 1.272, 'nodes': []}]},
{'a': 's67',
'aa': 0,
'qq': 0.900,
'qs': 0.900,
'nodes': [{'a': 's69', 'aa': 23, 'qq': 0.801, 'qs': 1.700, 'nodes': []}, {'a': 's8', 'aa': 41, 'qq': 0.826, 'qs': 1.726, 'nodes': []}]}]}
In [215]: l
Out[215]:
[{'a': 's8', 'aa': 16, 'qq': 0.115, 'qs': 0.788, 'nodes': []},
{'a': 's64', 'aa': 10, 'qq': 0.599, 'qs': 1.272, 'nodes': []},
{'a': 's69', 'aa': 23, 'qq': 0.801, 'qs': 1.700, 'nodes': []},
{'a': 's8', 'aa': 41, 'qq': 0.826, 'qs': 1.726, 'nodes': []}]
In [216]: del l[0]
In [217]: l
Out[217]:
[{'a': 's64', 'aa': 10, 'qq': 0.599, 'qs': 1.272, 'nodes': []},
{'a': 's69', 'aa': 23, 'qq': 0.801, 'qs': 1.700, 'nodes': []},
{'a': 's8', 'aa': 41, 'qq': 0.826, 'qs': 1.726, 'nodes': []}]
In [218]: b
Out[218]:
{'nodes': [{'a': 's80',
'aa': 56,
'qq': 0.673,
'qs': 0.673,
'nodes': [{'a': 's8', 'aa': 16, 'qq': 0.115, 'qs': 0.788, 'nodes': []}, {'a': 's64', 'aa': 10, 'qq': 0.599, 'qs': 1.272, 'nodes': []}]},
{'a': 's67',
'aa': 0,
'qq': 0.900,
'qs': 0.900,
'nodes': [{'a': 's69', 'aa': 23, 'qq': 0.801, 'qs': 1.700, 'nodes': []}, {'a': 's8', 'aa': 41, 'qq': 0.826, 'qs': 1.726, 'nodes': []}]}]}
In [219]:
</code></pre>
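<p>For context, the kind of pruning I have in mind is something like the sketch below: walk the tree again and drop every leaf that is not in a keep-set built from the top-N list (assuming <code>id()</code> is an acceptable way to identify the nodes):</p>
<pre><code>def prune(tree, keep_ids):
    # keep inner nodes, drop leaf nodes whose id is not in keep_ids, then recurse
    tree.nodes = [n for n in tree.nodes if len(n.nodes) > 0 or id(n) in keep_ids]
    for n in tree.nodes:
        prune(n, keep_ids)

leafs = b.qsum(0)
top_n = sorted(leafs, key=lambda n: n['qs'], reverse=True)[:2]  # keep top 2 by cumulative sum
prune(b, {id(n) for n in top_n})
</code></pre>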
|
<python><reference><tree><leaf>
|
2023-01-03 02:36:15
| 1
| 7,536
|
sten
|
74,988,607
| 11,628,437
|
Why does GitHub pagination give varying results?
|
<p>If I perform a code search using the GitHub Search API and request 100 results per page, I get a varying number of results -</p>
<pre><code>import requests
# url = "https://api.github.com/search/code?q=torch +in:file + language:python+size:0..250&page=1&per_page=100"
url = "https://api.github.com/search/code?q=torch +in:file + language:python&page=1&per_page=100"
headers = {
'Authorization': 'Token xxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
}
response = requests.request("GET", url, headers=headers).json()
print(len(response['items']))
</code></pre>
<p>Thanks to <a href="https://stackoverflow.com/questions/74869773/how-do-i-get-all-1000-results-using-the-github-search-api">this</a> answer, I have the following workaround: I run the query multiple times until I get the required results on a page.</p>
<p>My current project requires me to iterate through the search API looking for files of varying sizes. I am basically repeating the procedure described <a href="https://stackoverflow.com/a/47828323">here</a>. Therefore my code looks something like this -</p>
<pre><code>url = "https://api.github.com/search/code?q=torch +in:file + language:python+size:0..250&page=1&per_page=100"
</code></pre>
<p>In this case, I don't know in advance the number of results a page should actually have. Could someone tell me a workaround for this? Maybe I am using the Search API incorrectly?</p>
|
<python><github><github-api><github-search>
|
2023-01-03 02:33:48
| 1
| 1,851
|
desert_ranger
|
74,988,529
| 7,968,024
|
How to send messages from server to client in a Python WebSockets server, AFTER initial handshake?
|
<p>Here is a small websockets client and server POC.
It sends a single hard-coded message string from the (Python) server to the Javascript client page.</p>
<p>The question is, how to send further, ad-hoc messages? From the server to the client.</p>
<p>Tiny HTML client page with embedded Javascript:</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html lang="en">
<body> See console for messages </body>
<script>
  // Create websocket
const socket = new WebSocket('ws://localhost:8000');
  // Add listener to receive server messages
socket.addEventListener('open', function (event) {
socket.send('Connection Established');
});
  // Add message to browser console
socket.addEventListener('message', function (event) {
console.log(event.data);
});
</script>
</html>
</code></pre>
<p>Here is the Python server code:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import websockets
import time
# Create handler for each connection
async def handler(websocket, path):
await websocket.send("message from websockets server")
# Start websocket server
start_server = websockets.serve(handler, "localhost", 8000)
# Start async code
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
</code></pre>
<p>This successfully sends a hard-coded message from server to client.
You can see the message in the browser console.
At this point the websocket is open.</p>
<p>The main application (not shown) now needs to send messages.
These will be dynamic messages, not hard-coded.</p>
<p>How can we send later, dynamic messages from the server?
<em>After</em> the code here runs?</p>
<p>I would like to put the socket into a global variable and call a send method but this is not possible because the server runs a continuous loop.</p>
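<p>One direction I was considering is to have the handler drain an <code>asyncio.Queue</code> that the rest of the application pushes to (a minimal sketch, assuming the application code runs on the same event loop):</p>
<pre><code>import asyncio
import websockets

message_queue = asyncio.Queue()

# Create handler for each connection
async def handler(websocket, path):
    await websocket.send("message from websockets server")
    while True:
        # wait for dynamic messages queued by the rest of the application
        msg = await message_queue.get()
        await websocket.send(msg)

def queue_message(text):
    # call this from other coroutines/callbacks running on the same loop
    message_queue.put_nowait(text)

start_server = websockets.serve(handler, "localhost", 8000)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
</code></pre>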
|
<javascript><python><websocket>
|
2023-01-03 02:09:56
| 1
| 1,536
|
CyclingDave
|
74,988,210
| 2,000,548
|
How to know actual result of `proc_output.waitFor` if `expected_output` is wrong?
|
<p>I am trying to follow a <a href="https://www.youtube.com/watch?v=h-1IhC01T1c" rel="nofollow noreferrer">ROS2 testing tutorial</a> which tests a topic listener to understand how ROS2 testing works. Here is a screenshot of the related code at 21:15</p>
<p><a href="https://i.sstatic.net/0gF67.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0gF67.jpg" alt="enter image description here" /></a></p>
<p>I have a node <code>target_control_node</code> which subscribes to the topic <code>turtle1/pose</code> and then moves the turtle to a new random pose.</p>
<pre><code>import math
import random
import rclpy
from geometry_msgs.msg import Twist
from rclpy.node import Node
from turtlesim.msg import Pose
class TargetControlNode(Node):
def __init__(self):
super().__init__("target_control_node")
self.get_logger().info("target_control_node")
self._target_pose = None
self._cmd_vel_publisher = self.create_publisher(Twist, "turtle1/cmd_vel", 10)
self.create_subscription(Pose, "turtle1/pose", self.subscribe_target_pose, 10)
self.create_timer(1.0, self.control_loop)
def subscribe_target_pose(self, msg):
self._target_pose = msg
def control_loop(self):
if self._target_pose is None:
return
target_x = random.uniform(0.0, 10.0)
target_y = random.uniform(0.0, 10.0)
dist_x = target_x - self._target_pose.x
dist_y = target_y - self._target_pose.y
distance = math.sqrt(dist_x**2 + dist_y**2)
msg = Twist()
# position
msg.linear.x = 1.0 * distance
# orientation
goal_theta = math.atan2(dist_y, dist_x)
diff = goal_theta - self._target_pose.theta
if diff > math.pi:
diff -= 2 * math.pi
elif diff < -math.pi:
diff += 2 * math.pi
msg.angular.z = 2 * diff
self._cmd_vel_publisher.publish(msg)
def main(args=None):
rclpy.init(args=args)
node = TargetControlNode()
rclpy.spin(node)
node.destroy_node()
rclpy.shutdown()
if __name__ == "__main__":
main()
</code></pre>
<p>I am trying to write a simple test for the subscription part based on the tutorial above to understand how it works.</p>
<p>Here is my initial test code. Note that inside it I am using <code>expected_output=str(msg)</code>; however, that is wrong, and I am not sure what to put there.</p>
<pre class="lang-py prettyprint-override"><code>import pathlib
import random
import sys
import time
import unittest
import uuid
import launch
import launch_ros
import launch_testing
import pytest
import rclpy
import std_msgs.msg
from geometry_msgs.msg import Twist
from turtlesim.msg import Pose
@pytest.mark.rostest
def generate_test_description():
src_path = pathlib.Path(__file__).parent.parent
target_control_node = launch_ros.actions.Node(
executable=sys.executable,
arguments=[src_path.joinpath("turtle_robot/target_control_node.py").as_posix()],
additional_env={"PYTHONUNBUFFERED": "1"},
)
return (
launch.LaunchDescription(
[
target_control_node,
launch_testing.actions.ReadyToTest(),
]
),
{
"target_control_node": target_control_node,
},
)
class TestTargetControlNodeLink(unittest.TestCase):
@classmethod
def setUpClass(cls):
rclpy.init()
@classmethod
def tearDownClass(cls):
rclpy.shutdown()
def setUp(self):
self.node = rclpy.create_node("target_control_test_node")
def tearDown(self):
self.node.destroy_node()
def test_target_control_node(self, target_control_node, proc_output):
pose_pub = self.node.create_publisher(Pose, "turtle1/pose", 10)
try:
msg = Pose()
msg.x = random.uniform(0.0, 10.0)
msg.y = random.uniform(0.0, 10.0)
msg.theta = 0.0
msg.linear_velocity = 0.0
msg.angular_velocity = 0.0
pose_pub.publish(msg)
success = proc_output.waitFor(
# `str(msg)` is wrong, however, I am not sure what to put here.
expected_output=str(msg), process=target_control_node, timeout=1.0
)
assert success
finally:
self.node.destroy_publisher(pose_pub)
</code></pre>
<p>When I run <code>launch_test src/turtle_robot/test/test_target_control_node.py</code>, it only prints this without telling me what is actual output:</p>
<pre class="lang-bash prettyprint-override"><code>[INFO] [launch]: All log files can be found below /home/parallels/.ros/log/2023-01-02-16-37-27-631032-ubuntu-linux-22-04-desktop-1439830
[INFO] [launch]: Default logging verbosity is set to INFO
test_target_control_node (test_target_control_node.TestTargetControlNodeLink) ... [INFO] [python3-1]: process started with pid [1439833]
[python3-1] [INFO] [1672706247.877402445] [target_control_node]: target_control_node
FAIL
======================================================================
FAIL: test_target_control_node (test_target_control_node.TestTargetControlNodeLink)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/my-ros/src/turtle_robot/test/test_target_control_node.py", line 91, in test_target_control_node
assert success
AssertionError
----------------------------------------------------------------------
Ran 1 test in 1.061s
FAILED (failures=1)
[INFO] [python3-1]: sending signal 'SIGINT' to process[python3-1]
[python3-1] Traceback (most recent call last):
[python3-1] File "/my-ros/src/turtle_robot/turtle_robot/target_control_node.py", line 59, in <module>
[python3-1] main()
[python3-1] File "/my-ros/src/turtle_robot/turtle_robot/target_control_node.py", line 53, in main
[python3-1] rclpy.spin(node)
[python3-1] File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/__init__.py", line 222, in spin
[python3-1] executor.spin_once()
[python3-1] File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/executors.py", line 705, in spin_once
[python3-1] handler, entity, node = self.wait_for_ready_callbacks(timeout_sec=timeout_sec)
[python3-1] File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/executors.py", line 691, in wait_for_ready_callbacks
[python3-1] return next(self._cb_iter)
[python3-1] File "/opt/ros/humble/local/lib/python3.10/dist-packages/rclpy/executors.py", line 588, in _wait_for_ready_callbacks
[python3-1] wait_set.wait(timeout_nsec)
[python3-1] KeyboardInterrupt
[ERROR] [python3-1]: process has died [pid 1439833, exit code -2, cmd '/usr/bin/python3 /my-ros/src/turtle_robot/turtle_robot/target_control_node.py --ros-args'].
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
</code></pre>
<p>I checked the source code of <a href="https://github.com/ros2/launch/blob/b129eb65c9f03980c724b17200236290fa797816/launch_testing/launch_testing/io_handler.py#L148-L206" rel="nofollow noreferrer">waitFor</a>, but still no clue.</p>
<p>Is there a way to print the actual output so that I can provide the correct <code>expected_output</code>? Thanks!</p>
|
<python><python-3.x><ros><ros2>
|
2023-01-03 00:47:13
| 1
| 50,638
|
Hongbo Miao
|
74,988,123
| 1,064,810
|
How do you get stdout from pexpect?
|
<pre><code>import pexpect
domain = "test"
username = "user"
password = "password"
child = pexpect.spawn("virsh console " + domain)
# ensure we get a login prompt:
child.sendline()
# log in:
child.expect("login:")
child.sendline(username)
child.expect("Password:")
child.sendline(password)
# run command:
child.sendline("touch foo.txt")
child.sendline("exit")
</code></pre>
<p>This works perfectly. The problem now is that I want to do an <code>ls</code> (after <code>touch</code>) and get the output but I can't figure out how to do that.</p>
<p>I saw an example using <code>child.logfile = sys.stdout</code> right after spawn (and changing the spawn line to include <code>encoding="utf-8"</code>) and this does produce some output, but doesn't show the output from ls.</p>
<p>From here:</p>
<p><a href="https://stackoverflow.com/questions/17632010/python-how-to-read-output-from-pexpect-child">Python how to read output from pexpect child?</a></p>
<p>I tried the following:</p>
<pre><code>import pexpect
domain = "test"
username = "user"
password = "password"
child = pexpect.spawn("virsh console " + domain)
# ensure we get a login prompt:
child.sendline()
# log in:
child.expect("login:")
child.sendline(username)
child.expect("Password:")
child.sendline(password)
# run command:
child.sendline("touch foo.txt")
child.sendline("ls")
child.expect(pexpect.EOF)
print(child.before)
child.sendline("exit")
</code></pre>
<p>But this just times out.</p>
<p>I also tried:</p>
<pre><code>import pexpect
domain = "test"
username = "user"
password = "password"
child = pexpect.spawn("virsh console " + domain)
# ensure we get a login prompt:
child.sendline()
# log in:
child.expect("login:")
child.sendline(username)
child.expect("Password:")
child.sendline(password)
# run command:
child.sendline("touch foo.txt")
child.sendline("ls")
child.expect(".*") # the shell prompt varies extensively
print(child.before)
child.sendline("exit")
</code></pre>
<p>And this returns the following line:</p>
<pre><code>b''
</code></pre>
<p>Which is clearly not the expected output from ls. How do I get stdout from ls in this case?</p>
<p>All the examples from: <a href="https://stackoverflow.com/questions/17632010/python-how-to-read-output-from-pexpect-child">Python how to read output from pexpect child?</a> do not have this extra layer of <code>virsh console</code> in the equation, which I think makes my problem much more difficult to solve.</p>
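<p>For reference, the direction I have been trying to make work is to expect the guest's shell prompt after sending <code>ls</code> and then read <code>child.before</code> (a sketch, assuming the prompt inside the guest ends with <code>$ </code>):</p>
<pre><code># after logging in as above
child.sendline("ls")
child.expect(r"\$ ")          # assumes the guest shell prompt ends with "$ "
print(child.before.decode())  # everything printed before the prompt, including the ls output
child.sendline("exit")
</code></pre>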
|
<python><pexpect><virsh>
|
2023-01-03 00:22:07
| 0
| 1,563
|
cat pants
|
74,988,009
| 14,729,820
|
How to implement Text2Image with CNNs and Transposed CNNs
|
<p>I want to implement a text2image neural network like the one in the image below, using CNNs and transposed CNNs with an embedding layer:
<a href="https://i.sstatic.net/MHgXL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MHgXL.png" alt="" /></a></p>
<pre><code>import torch
from torch import nn
</code></pre>
<p>Input text :</p>
<pre><code>text = "A cat wearing glasses and playing the guitar "
# Simple preprocessing the text
word_to_ix = {"A": 0, "cat": 1, "wearing": 2, "glasses": 3, "and": 4, "playing": 5, "the": 6, "guitar":7}
lookup_tensor = torch.tensor(list(word_to_ix.values()), dtype = torch.long) # a tensor representing words by integers
vocab_size = len(lookup_tensor)
</code></pre>
<p>architecture implementation :</p>
<pre><code>class TextToImage(nn.Module):
def __init__(self, vocab_size):
super(TextToImage, self).__init__()
self.vocab_size = vocab_size
self.noise = torch.rand((56,64))
# DEFINE the layers
# Embedding
self.embed = nn.Embedding(num_embeddings=self.vocab_size, embedding_dim = 64)
# Conv
self.conv2d_1 = nn.Conv2d(in_channels=64, out_channels=3, kernel_size=(3, 3), stride=(2, 2), padding='valid')
self.conv2d_2 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=(3, 3), stride=(2, 2), padding='valid')
# Transposed CNNs
self.conv2dTran_1 = nn.ConvTranspose2d(in_channels=16, out_channels=16, kernel_size=(3, 3), stride=(1, 1), padding=1)
self.conv2dTran_2 = nn.ConvTranspose2d(in_channels=16, out_channels=3, kernel_size=(3, 3), stride=(2, 2), padding=0)
self.conv2dTran_3 = nn.ConvTranspose2d(in_channels=6, out_channels=3, kernel_size=(4, 4), stride=(2, 2), padding=0)
self.relu = torch.nn.ReLU(inplace=False)
self.dropout = torch.nn.Dropout(0.4)
def forward(self, text_tensor):
#SEND the input text tensor to the embedding layer
emb = self.embed(text_tensor)
#COMBINE the embedding with the noise tensor. Make it have 3 dimensions
combine1 = torch.cat((emb, self.noise), dim=1, out=None)
#SEND the noisy embedding to the convolutional and transposed convolutional layers
conv2d_1 = self.conv2d_1(combine1)
conv2d_2 = self.conv2d_2(conv2d_1)
dropout = self.dropout(conv2d_2)
conv2dTran_1 = self.conv2dTran_1(dropout)
conv2dTran_2 = self.conv2dTran_2(conv2dTran_1)
#COMBINE the outputs having a skip connection in the image of the architecture
combine2 = torch.cat((conv2d_1, conv2dTran_2), dim=1, out=None)
conv2dTran_3 = self.conv2dTran_3(combine2)
#SEND the combined outputs to the final layer. Please name your final output variable as "image" so you that it can be returned
image = self.relu(conv2dTran_3)
return image
</code></pre>
<p><strong>Expected output
torch.Size( [3, 64, 64] )</strong></p>
<pre><code>texttoimage = TextToImage(vocab_size=vocab_size)
output = texttoimage(lookup_tensor)
output.size()
</code></pre>
<p>Generated random noisy image :</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(np.moveaxis(output.detach().numpy(), 0,-1))
</code></pre>
<p>The error I got :</p>
<pre><code>RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 56 for tensor number 1 in the list.
</code></pre>
<p>Does anyone know how to solve this issue? I think it comes from concatenating the noise with the embedding.</p>
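<p>For reference, here is a small standalone shape check (hypothetical code outside the model) that shows what I think is going on: the embedding comes out as <code>(8, 64)</code> and the noise is <code>(56, 64)</code>, so the two only line up when concatenated along <code>dim=0</code>:</p>
<pre><code>import torch
from torch import nn

emb = nn.Embedding(num_embeddings=8, embedding_dim=64)(torch.arange(8))
noise = torch.rand((56, 64))
print(emb.shape, noise.shape)                # torch.Size([8, 64]) torch.Size([56, 64])
print(torch.cat((emb, noise), dim=0).shape)  # torch.Size([64, 64])
</code></pre>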
|
<python><pytorch><conv-neural-network><encoder><text2image>
|
2023-01-02 23:53:48
| 2
| 366
|
Mohammed
|
74,987,896
| 12,708,740
|
Move file using specific nav path to another folder
|
<p>I have a list of navigation paths to specific files, which all come from different folders. I'd like to move them all to a new folder.</p>
<p>Specifically, my data is formatted in two columns in a dataframe, where I'd like to move each file to its new folder. (Each row describes a file.)</p>
<p>My input:</p>
<pre><code>df = pd.DataFrame({'old_path': ['/Users/myname/images/cat/file_0.jpg', '/Users/myname/images/dog/file_1.jpg', '/Users/myname/images/squirrel/file_2.jpg'],
'new_path': ['/Users/myname/mydir/file_0.jpg', '/Users/myname/mydir/file_1.jpg', '/Users/myname/mydir/file_2.jpg'],
})
</code></pre>
<p>However, I have yet to figure out how to modify the code below to do this in a loop or anything similarly helpful: <a href="https://stackoverflow.com/questions/8858008/how-to-move-a-file-in-python">How to move a file in Python?</a></p>
<pre><code>import os
import shutil
shutil.move("path/to/current/file.foo", "path/to/new/destination/for/file.foo")
</code></pre>
<p>In this example, all the files are being moved to the same folder, but it would be great if the answer were generalizable to sending files to different folders, too.</p>
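<p>The loop I have in mind is something like this (a minimal sketch, assuming every destination folder already exists):</p>
<pre><code>import shutil

for old_path, new_path in zip(df['old_path'], df['new_path']):
    shutil.move(old_path, new_path)
</code></pre>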
|
<python><pandas><file><directory><shutil>
|
2023-01-02 23:29:39
| 1
| 675
|
psychcoder
|
74,987,851
| 929,732
|
Unable to insert a zulu date into mysql database using flask_mysqldb
|
<p>When I run the following with the value of Job_date being "2023-01-02T21:00:00.504Z":</p>
<pre><code>@api.route('/addjob', methods=["POST"])
def addjob():
f = open('addjob.txt','w')
f.write(str(request.json))
cursor = mysql.connection.cursor()
sql = '''INSERT into jobs (job_date,cust_auto_id,cost,paid,mileage,job_desc,person_id) values (str_to_date(%s,"%%Y-%%m-%%dT%%H:%%i:%%sZ"),%s,%s,%s,%s,%s,%s)'''
try:
cursor.execute(sql,(request.json.get("jobdate", None),request.json.get("whichauto", None),request.json.get("jobcost", None),request.json.get("paystatus", None),request.json.get("mileage", None),request.json.get("jobdesc", None),request.json.get("whichperson", None)))
mysql.connection.commit()
except Exception as e:
f.write(str(e))
return jsonify([])
</code></pre>
<p>I get</p>
<pre><code>"Incorrect datetime value: '2023-01-02T21:00:00.504Z' for function str_to_date
</code></pre>
|
<python><flask-mysql>
|
2023-01-02 23:22:03
| 1
| 1,489
|
BostonAreaHuman
|
74,987,703
| 2,351,099
|
Are Python Unix-only APIs supported under WSL?
|
<p>Some Python APIs, such as <a href="https://docs.python.org/3/library/os.html#os.pread" rel="nofollow noreferrer"><code>os.pread</code></a>, are documented with availability "Unix", and indeed they are <a href="https://stackoverflow.com/questions/50902714/python-pread-pwrite-only-on-unix">not visible when using native Windows Python</a>.</p>
<p>Are they supported in Python installed via WSL (Windows subsystem for Linux)?</p>
|
<python><windows-subsystem-for-linux>
|
2023-01-02 22:52:05
| 1
| 17,008
|
Massimiliano
|
74,987,695
| 1,513,168
|
How to use Comparison Operator (>) with datetime?
|
<p>I am trying to do something with directories older than 4 days. Here is what I have:</p>
<pre><code>from datetime import datetime, date
#get current time
curret_time = datetime.now()
#get file creation time
stat = os.stat(my_directory)
creation_time = datetime.fromtimestamp(stat.st_birthtime)
#get the age of directory
age_of_directory=curret_time - creation_time
#I want to remove any directory that is older than 4 days
if age_of_directory > 4:
#shutil.rmtree(my_directory)
print(age_of_directory) #debugging line
</code></pre>
<p>Error I get is:</p>
<pre><code>TypeError: '>' not supported between instances of 'datetime.timedelta' and 'int'
</code></pre>
<p>How do I fix this issue?</p>
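<p>My guess is that I need to compare against a <code>timedelta</code> instead of an <code>int</code>, something like this (untested sketch):</p>
<pre><code>from datetime import timedelta

if age_of_directory > timedelta(days=4):
    #shutil.rmtree(my_directory)
    print(age_of_directory) #debugging line
</code></pre>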
|
<python>
|
2023-01-02 22:49:42
| 1
| 810
|
Supertech
|
74,987,667
| 10,292,638
|
How to properly connect to SQL Server from a python script when python packages are based on github?
|
<p>Suppose that, due to an HTTP 403 error, it is not possible to download packages from the PyPI repo (nor to run <code> pip install <package></code> commands), which forces me to install <code>pyodbc</code> by cloning the repo from GitHub (<a href="https://github.com/mkleehammer/pyodbc" rel="nofollow noreferrer">https://github.com/mkleehammer/pyodbc</a>) and running the following <code>.cmd</code> Windows file:</p>
<pre><code>cd "root_folder"
git activate
git clone https://github.com/mkleehammer/pyodbc.git --depth 1
</code></pre>
<p>Note that this package is downloaded to the same root folder where my python script is, after this I try to set a connection to Microsoft SQL Server:</p>
<pre><code>import pyodbc as pyodbc
# set connection settings
server="servername"
database="DB1"
user="user1"
password="123"
# establishing connection to db
conn = pyodbc.connect("DRIVER={SQL Server};SERVER="+server+";DATABASE="+database+";UID="+user+";PWD="+password)
cursor=conn.cursor()
print("Succesful connection to sql server")
</code></pre>
<p>However, when I run the above code the next traceback error arises:</p>
<blockquote>
<p>Traceback (most recent call last):
File "/dcleaner.py", line 47, in <br />
conn = pyodbc.connect("DRIVER={SQL Server};SERVER="+server+";DATABASE="+database+";UID="+user+";PWD="+password)
AttributeError: module 'pyodbc' has no attribute 'connect'</p>
</blockquote>
<p>Do you know how I can properly connect from a Python script to a SQL Server database?</p>
|
<python><sql-server><windows><git><odbc>
|
2023-01-02 22:46:21
| 1
| 1,055
|
AlSub
|
74,987,528
| 11,631,828
|
How to correctly call the API endpoint with Python 3 without getting error 500?
|
<p>I am following the instructions in the API documentation and querying the following endpoint:</p>
<pre><code>summary = 'https://api.pancakeswap.info/api/v2/summary'
</code></pre>
<p>However I am getting the following error:</p>
<pre><code>{'error': {'code': 500, 'message': 'GraphQL error: panic processing query: only derived fields can lead to multiple children here'}}
</code></pre>
<p>Here is my code:</p>
<pre><code>import requests
import json
summary = 'https://api.pancakeswap.info/api/v2/summary'
def get_data(endpoint):
data = requests.get(endpoint).json()
print(data)
get_data(summary)
</code></pre>
<p>What am I doing wrong, and how can I fix it?</p>
|
<python><python-requests><endpoint><pancakeswap>
|
2023-01-02 22:21:13
| 1
| 982
|
Pro Girl
|
74,987,486
| 443,836
|
How to use struct pointer from C in Python?
|
<p>This is a snippet from a C header for a Windows DLL that was generated by Kotlin Multiplatform/Native:</p>
<pre class="lang-c prettyprint-override"><code>typedef struct {
struct {
struct {
// ...
} root;
} kotlin;
} libnative_ExportedSymbols;
extern libnative_ExportedSymbols* libnative_symbols(void);
</code></pre>
<p>From C, you would access the struct <code>root</code> <a href="https://kotlinlang.org/docs/native-dynamic-libraries.html#use-generated-headers-from-c" rel="nofollow noreferrer">like this</a>:</p>
<pre><code>libnative_ExportedSymbols* lib = libnative_symbols();
lib->kotlin.root. // ...
</code></pre>
<p>How can I access it from Python? It seems that all official samples have been removed. So I tried this without success:</p>
<pre><code>import ctypes
libPath = "path/to/libnative.dll"
dll = ctypes.CDLL(libPath)
pythonInt = dll.libnative_symbols()
print(type(pythonInt)) # <class 'int'>
print(pythonInt) # A number, e.g. 1190351680
CPointer = ctypes.POINTER(ctypes.c_long)
cLong = ctypes.c_long(pythonInt)
cAddress = ctypes.addressof(cLong)
cPointer = ctypes.cast(cAddress, CPointer)
print(type(pythonInt) == type(cPointer.contents.value)) # true
print(pythonInt == cPointer.contents.value) # true
try:
print(cPointer.kotlin.root)
except AttributeError:
print("AttributeError") # AttributeError
try:
print(cPointer.contents.kotlin.root)
except AttributeError:
print("AttributeError") # AttributeError
try:
print(cPointer.contents.value.kotlin.root)
except AttributeError:
print("AttributeError") # AttributeError
</code></pre>
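<p>For reference, the direction I was going to try next is to mirror the struct layout with <code>ctypes.Structure</code> subclasses and set <code>restype</code> on the exported function (a sketch; the inner fields still need to be filled in from the generated header):</p>
<pre><code>import ctypes

class Root(ctypes.Structure):
    _fields_ = []  # fill in from the generated header

class Kotlin(ctypes.Structure):
    _fields_ = [("root", Root)]

class LibnativeExportedSymbols(ctypes.Structure):
    _fields_ = [("kotlin", Kotlin)]

dll = ctypes.CDLL("path/to/libnative.dll")
dll.libnative_symbols.restype = ctypes.POINTER(LibnativeExportedSymbols)

lib = dll.libnative_symbols()
root = lib.contents.kotlin.root
</code></pre>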
|
<python><ctypes><kotlin-native>
|
2023-01-02 22:14:55
| 2
| 4,878
|
Marco Eckstein
|
74,987,480
| 3,574,176
|
Failing to deploy python azure function when including git repository into requirements.txt
|
<p>I am using VS code and Azure functions extension to create and deploy a python HTTP trigger azure function.</p>
<p>The problem I am having is that when I include a git repository in the <code>requirements.txt</code>, the deployment fails without a clear error. As soon as I remove the reference, the deployment is successful.</p>
<p>The <code>requirements.txt</code>:</p>
<pre><code>azure-functions
numpy
git+https://github.com/my/3rdPartyRepo.git@main
</code></pre>
<p>Using the azure function tools for VS code I get the following message when failing:</p>
<blockquote>
<p>22:04:41 MyFunctionApp: Generating summary of Oryx build 22:04:41
MyFunctionApp: Deployment Log file does not exist in
/tmp/oryx-build.log 22:04:41 MyFunctionApp: The logfile at
/tmp/oryx-build.log is empty. Unable to fetch the summary of build
22:04:41 MyFunctionApp: Deployment Failed. deployer =
ms-azuretools-vscode deploymentPath = Functions App ZipDeploy. Extract
zip. Remote build. 22:05:07 MyFunctionApp: Deployment failed.</p>
</blockquote>
<p>Using the command to deploy a zip file:</p>
<pre><code>az functionapp deployment source config-zip -g dev-myapp -n myfunctionAppInAzureName --src myfunctionApp.zip --build-remote
</code></pre>
<p>Gives me this error</p>
<blockquote>
<p>Zip deployment failed. {'id': '3811f2d3-73f1-423d-ad6e-81e7f6125c3d',
'status': 3, 'status_text': '', 'author_email': 'N/A', 'author':
'N/A', 'deployer': 'Push-Deployer', 'message': 'Created via a push
deployment', 'progress': '', 'received_time':
'2023-01-01T18:36:05.0549119Z', 'start_time':
'2023-01-01T18:36:06.0940945Z', 'end_time':
'2023-01-01T18:36:42.9062073Z', 'last_success_end_time': None,
'complete': True, 'active': False, 'is_temp': False, 'is_readonly':
True, 'url':
'https://myfunctionAppInAzureName.scm.azurewebsites.net/api/deployments/latest',
'log_url':
'https://myfunctionAppInAzureName.scm.azurewebsites.net/api/deployments/latest/log',
'site_name': 'myfunctionAppInAzureName'}. Please run the command az
webapp log deployment show -n myfunctionAppInAzureName -g dev-myapp</p>
</blockquote>
<p>When opening <code>https://myfunctionAppInAzureName.scm.azurewebsites.net/api/deployments/latest</code>, it shows that there are no latest deployments.</p>
<p>When opening <code>https://myfunctionAppInAzureName.scm.azurewebsites.net/api/deployments/latest/log</code>, it shows that there are no latest logs.</p>
<p>I have tried deleting the function app in Azure, recreating it, and then repeating the same steps; I also tried deleting and recreating in the reverse order. I have also tried changing the Python version from 3.8 to 3.9.</p>
<p>The option to install the package via git is the recommended one by the owner of the git repository and the pip command installs package without any issues.</p>
<p>What could I do to fix the issue?
What can I do to see the real errors that explain why this is happening?</p>
|
<python><git><azure><deployment><azure-functions>
|
2023-01-02 22:13:47
| 1
| 1,555
|
vsarunov
|
74,987,453
| 8,864,226
|
Need to sort NaN in Python to the end of a list
|
<p>I'm trying to sort a list and work around Python's poor handling of <code>nan</code> and <code>inf</code>.</p>
<p>I need to partition a list into all the numbers sorted (or reversed) and at the end of the list any NaN or Inf. The order of the non-number (NaN/Inf) elements is not important, so long as they are arranged at the end of the list.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>n = [13, 4, 52, 3.1, float('nan'), float('inf'), 8.2]
print(n)
o = sorted(n, key = lambda x : float('inf') if (x != x) else x)
print(o)
print(reversed(o))
</code></pre>
<p>The <code>o</code> works, and outputs:</p>
<pre><code>[3.1, 4, 8.2, 13, 52, nan, inf]
</code></pre>
<p>But using <code>reversed</code> outputs:</p>
<pre><code>[inf, nan, 52, 13, 8.2, 4, 3.1]
</code></pre>
<p>Which is not what I want.</p>
<p>I want it to reverse only the values that aren't <code>nan</code> and <code>inf</code>.</p>
<p>Desired output:</p>
<pre><code>[52, 13, 8.2, 4, 3.1, nan, inf]
</code></pre>
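<p>One idea I had is to sort with <code>reverse=True</code> and a key that maps both <code>nan</code> and <code>inf</code> below every finite number, relying on sort stability to keep their relative order (a sketch; I am not sure it is the right way to do this):</p>
<pre><code>import math

def desc_key(x):
    # map nan and inf below every finite value so they end up last with reverse=True
    return float('-inf') if (x != x or math.isinf(x)) else x

print(sorted(n, key=desc_key, reverse=True))
# [52, 13, 8.2, 4, 3.1, nan, inf]
</code></pre>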
|
<python><list><sorting><nan>
|
2023-01-02 22:10:16
| 1
| 6,097
|
James Risner
|
74,987,358
| 2,487,653
|
How to embed Jupyter-notebook inside Python code with access to caller's environment
|
<p>Often when I develop code I want to work inside the environment where that code will run. Typically I do this by invoking an IPython interpreter [which I do by setting PYTHONBREAKPOINT to point to a routine that calls <code>IPython.terminal.embed.InteractiveShellEmbed()</code>]. I am thinking that it would be way better to do this from Jupyter, where I would end up with a notebook that I could then use to write the final code. To give an example,</p>
<pre><code>class Myobj(object):
def __init__(self,arg1,arg2):
self.a1 = arg1
self.a2 = arg2
def doSomething(self):
breakpoint()
test = Myobj(2,3)
test.doSomething()
</code></pre>
<p>Then when my breakpoint runs I end up in Pdb (or with my extension, IPython) and I can access self.a1, etc.</p>
<p>I have tried starting up a notebook server with the following code:</p>
<pre><code>j = 1
t = 'test string'
from notebook.notebookapp import NotebookApp
app = NotebookApp()
app.initialize(["--port", "8888"])
app.start()
</code></pre>
<p>This sort-of-works in that I get a browser window where I can start a notebook, but the global variables I have previously defined are not carried into the kernel.</p>
<p>I'd think that what I want to do must be possible. This is Python after all.</p>
|
<python><jupyter-notebook>
|
2023-01-02 21:53:40
| 1
| 858
|
bht
|
74,987,297
| 2,163,109
|
Matplotlib -- how to retreive polygons colors from choropleth map
|
<p>I made the choropleth map using GeoPandas and Matplotlib. I want to add value labels to each polygon of the map in a way that font label color must be a contrast to polygon fill color (white on a darker color and black on a lighter).</p>
<p>Thus, I need to know every polygon's fill color. I found the solution (see minimal working example code below).</p>
<p>But I suppose that a more simple and clear solution exists, so I post this question with the hope to find it with community help.</p>
<pre class="lang-python prettyprint-override"><code>import geopandas as gpd
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from itertools import islice, pairwise
from matplotlib.collections import PatchCollection
def contrast_color(color):
d = 0
r, g, b = (round(x*255, 0) for x in color[:3])
luminance = 1 - (0.299 * r + 0.587 * g + 0.114 * b) / 255
d = 0 if luminance < 0.5 else 255
return (d, d, d)
def get_colors(ax):
# get childrens
# to obtain a PatchCollection
_ = ax.axes.get_children()
collection = _[0] # suppose it is first element
if not isinstance(collection, PatchCollection):
raise TypeError("This is not Collection")
# get information about polygons fill colors
# .get_facecolors() returns ALL colors for ALL polygons
# that belongs to one multipolygon
# e. g. if we have two multipolygons,
# and the first consists of two polygons
# and second consists of one polygon
# we obtain THREE colors
poly_colors = collection.get_facecolors()
return poly_colors.tolist()
gpd.read_file("https://gist.githubusercontent.com/ap-Codkelden/72f988e2bcc90ea3c6c9d6d989d8eb3b/raw/c91927bdb6b199c4dd6df6759200a5a1e4b820f0/obl_sample.geojson")
dfm['coords'] = [x[0] for x in dfm['geometry'].apply(lambda x: x.representative_point().coords[:])]
fig, ax = plt.subplots(1, figsize=(10, 6))
ax.axis('off')
ax.set_title('Title', fontdict={'fontsize': '12', 'fontweight' : '3'})
dfm.plot(
ax=ax,
column='Average', cmap='Blues_r',
linewidth=0.5, edgecolor='k',
scheme='FisherJenks', k=2,
legend=True
)
out = [] # empty container for colors
# count polygons for every multipolygon
# since it can contains more than one
poly_count = dfm.geometry.apply(lambda x: len(x.geoms)).to_list()
poly_colors = get_colors(ax)
# we need split the polygon's colors list into sublists,
# where every sublist will contain all colors for
# every polygon that belongs to one multipolygon
slices = [(0, poly_count[0])] + [x for x in pairwise(np.cumsum(poly_count))]
# splitting
for s in slices:
out.append(
set(tuple(x) for x in islice(poly_colors, *s)),)
# remove transparency info
out = [next(iter(x))[:3] for x in out]
dfm['color'] = [tuple([y/255 for y in x]) for x in map(contrast_color, out)]
for idx, row in dfm.iterrows():
plt.annotate(
f"{row['reg_en']}\n{row['Average']:.2f}",
xy=row['coords'], horizontalalignment='center',
color=row['color'], size=9)
</code></pre>
<p>Desired labels are:</p>
<img src="https://i.sstatic.net/MA3nI.png" width="250"/>
|
<python><matplotlib><geopandas>
|
2023-01-02 21:44:05
| 0
| 424
|
codkelden
|
74,987,168
| 2,072,962
|
Keras prediction incorrect with scaler and feature selection
|
<p>I build an application that trains a Keras binary classifier model (0 or 1) every x time (hourly,daily) given the new data. The data preparation, training and testing works well, or at least as expected. It tests different features and scales it with MinMaxScaler (some values are negative).</p>
<p>On live data predictions with one single data point, the values are unrealistic (around 0.9987 to 1 most of the time, which is inaccurate). Since the result should be how close to "1" the prediction is, getting such high numbers constantly raises alerts.</p>
<p><strong>Code for live prediction is as follows</strong></p>
<p>current_df is a pandas DataFrame that contains the single row of data pulled live, along with the column headers. We select the "features" loaded from the db because we implement dynamic feature selection when training the model, which can mean that every model uses different features.</p>
<p>Get the features as a list:</p>
<pre><code># Convert literal str to list
features = ast.literal_eval(features)
</code></pre>
<p>Then select only the features that I need in the dataframe:</p>
<pre><code># Select the features
selected_df = current_df[features]
</code></pre>
<p>Get the values as a list:</p>
<pre><code> # Get the values of the df
current_list = selected_df.values.tolist()[0]
</code></pre>
<p>Then I reshape it:</p>
<pre><code> # Reshape to allow scaling and predicting
current_list = np.reshape(current_list, (-1, 1))
</code></pre>
<p>If I call "transform" instead of "fit_transform" in the line above, I get the following error: <strong>This MinMaxScaler instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.</strong></p>
<p>Reshape again:</p>
<pre><code># Reshape to be able to scale
current_list = np.reshape(current_list, (1, -1))
</code></pre>
<p>Loads the model using Keras (model_location is a Path) and predict:</p>
<pre><code># Loads the model from the local folder
reconstructed_model = keras.models.load_model(model_location)
prediction = reconstructed_model.predict(current_list)
prediction = prediction.flat[0]
</code></pre>
<p><strong>Updated</strong></p>
<p>The data gets scaled using fit_transform and transform (MinMaxScaler although it can be Standard Scaler):</p>
<pre><code>X_train = pd.DataFrame(scaler.fit_transform(X_train), columns=X_train.columns, index=X_train.index)
X_test = pd.DataFrame(scaler.transform(X_test), columns=X_test.columns, index=X_test.index)
</code></pre>
<p>And this is run when training the model (the "model" config is not shown):</p>
<pre><code># Compile the model
model.compile(optimizer=optimizer,
loss=loss,
metrics=['binary_accuracy'])
# build the model
model.fit(X_train, y_train, epochs=epochs, verbose=0)
# Evaluate using Keras built-in function
scores = model.evaluate(X_test, y_test, verbose=0)
testing_accuracy = scores[1]
# create model with sklearn KerasClassifier for evaluation
eval_model = KerasClassifier(model=model, epochs=epochs, batch_size=10, verbose=0)
# Evaluate model using RepeatedStratifiedKFold
accuracy = ML.evaluate_model_KFold(eval_model, X_test, y_test)
# Predict testing data
pred_test= model.predict(X_test, verbose=0)
pred_test = pred_test.flatten()
# extract the predicted class labels
y_predicted_test = np.where(pred_test > 0.5, 1, 0)
</code></pre>
<p>Regarding feature selection, the features are not always the same --I use both SelectKBest (10 or 15 features) or RFECV. And select the trained model with highest accuracy, meaning the features can be different.</p>
<p>Is there anything I'm doing wrong here? I'm thinking maybe the scaling should be done before the feature selection or there's some issue with the scaling (since maybe some values might be 0 when training and 100 when using it and the features are not necessarily the same when scaling).</p>
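<p>One thing I am considering is persisting the scaler fitted at training time and reusing it on the single live row, instead of reshaping the row and fitting a fresh scaler per prediction (a sketch, assuming <code>scaler</code> is the object fitted in the training code above):</p>
<pre><code>import joblib

# at training time, right after fitting the scaler on X_train
joblib.dump(scaler, "scaler.joblib")

# at prediction time
scaler = joblib.load("scaler.joblib")
current_scaled = scaler.transform(selected_df)  # selected_df keeps the row as shape (1, n_features)
prediction = reconstructed_model.predict(current_scaled)
</code></pre>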
|
<python><keras><scikit-learn>
|
2023-01-02 21:21:31
| 2
| 764
|
galgo
|
74,987,100
| 6,346,514
|
Python, change email to name format
|
<p>How can I parse the email strings below into just the expected output? These are not in a dataframe; they are separate strings, and I have a loop that iterates through each one. The rough approach inside my loop is sketched after the expected output.</p>
<p>example input</p>
<pre><code>Louis.Stevens@hotmail.com
Louis.a.Stevens@hotmail.com
Louis.Stevens@stackoverflow.com
Louis.Stevens2@hotmail.com
Mike.Williams2@hotmail.com
Lebron.A.James@hotmail.com
</code></pre>
<p>expected output:</p>
<pre><code>Louis Stevens
Louis Stevens
Louis Stevens
Louis Stevens
Mike Williams
Lebron James
</code></pre>
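<p>As mentioned above, the rough approach inside my loop is the following (a sketch, assuming the strings are in a list called <code>emails</code>, that middle initials are single letters, and that trailing digits can be dropped); I am wondering if there is a cleaner way:</p>
<pre><code>import re

def email_to_name(email):
    local = email.split('@')[0]
    parts = [p for p in local.split('.') if len(p) > 1]  # drop single-letter middle initials
    parts = [re.sub(r'\d+$', '', p) for p in parts]      # strip trailing digits
    return ' '.join(p.capitalize() for p in parts)

for email in emails:
    print(email_to_name(email))
</code></pre>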
<p>Thanks</p>
|
<python><pandas>
|
2023-01-02 21:12:20
| 3
| 577
|
Jonnyboi
|
74,987,090
| 11,058,930
|
Create column in DataFrame1 based on values from DataFrame2
|
<p>I have two Dataframes, and would like to create a new column in DataFrame 1 based on DataFrame 2 values.</p>
<p>But I don't want to join the two dataframes per se and make one big dataframe; rather, I want to use the second DataFrame simply as a look-up.</p>
<pre><code>#Main Dataframe:
df1 = pd.DataFrame({'Size':["Big", "Medium", "Small"], 'Sold_Quantity':[10, 6, 40]})
#Lookup Dataframe
df2 = pd.DataFrame({'Size':["Big", "Medium", "Small"], 'Sold_Quantiy_Score_Mean':[10, 20, 30]})
#Create column in Dataframe 1 based on lookup dataframe values:
df1['New_Column'] = when df1['Size'] = df2['Size'] and df1['Sold_Quantity'] < df2['Sold_Quantiy_Score_Mean'] then 'Below Average Sales' else 'Above Average Sales!' end
</code></pre>
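<p>The last block above is pseudocode; the map-based lookup I am trying to express looks roughly like this (a sketch, assuming every Size in df1 also exists in df2):</p>
<pre><code>import numpy as np

lookup = df2.set_index('Size')['Sold_Quantiy_Score_Mean']
df1['New_Column'] = np.where(df1['Sold_Quantity'] < df1['Size'].map(lookup),
                             'Below Average Sales', 'Above Average Sales!')
</code></pre>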
|
<python><pandas>
|
2023-01-02 21:11:15
| 2
| 1,747
|
mikelowry
|
74,987,080
| 1,546,710
|
Keras Concatenate and Lambda not recognized as Layer, TypeError: The added layer must be an instance of class Layer
|
<p>I have been unable to find a solution for my problem. The below (with and without Lambda) does not recognize the result of Concatenate as a Layer:</p>
<pre class="lang-py prettyprint-override"><code>tensors = []
for item in items:
tensor = tf.constant(item, dtype=tf.string, shape=[1])
tensors.append(tensor)
def constant_layer(tensor):
return tf.keras.layers.Input(tensor=tensor, shape=tensor.shape)
input_layers = []
for tensor in tensors:
input_layers.append(constant_layer(tensor))
model = tf.keras.Sequential()
model.add(tf.keras.layers.Lambda(lambda x: tf.keras.layers.Concatenate()(x))(input_layers))
model.add(tf.keras.layers.Dense(10, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='relu'))
model.add(tf.keras.layers.Dense(1))
# Compile model
model.compile(optimizer='adam', loss='mean_squared_error')
</code></pre>
<p>The error I receive is the following:</p>
<pre><code> File "script.py", line 25
model.add(tf.keras.layers.Lambda(lambda x: tf.keras.layers.Concatenate()(x))(input_layers))
File "./lib/python3.7/site-packages/tensorflow/python/trackable/base.py", line 205, in _method_wrapper
result = method(self, *args, **kwargs)
File "./lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "./lib/python3.7/site-packages/keras/engine/sequential.py", line 187, in add
"The added layer must be an instance of class Layer. "
TypeError: The added layer must be an instance of class Layer. Received: layer=KerasTensor(type_spec=TensorSpec(shape=(1,), dtype=tf.string, name=None), name='lambda/concatenate/concat/concat:0', description="created by layer 'lambda'") of type <class 'keras.engine.keras_tensor.KerasTensor'>.
</code></pre>
<p>I'm trying to apply TF variables to the model; the input is a list of strings. Any suggestions?</p>
|
<python><tensorflow><machine-learning><tf.keras>
|
2023-01-02 21:09:32
| 1
| 3,415
|
aug2uag
|
74,986,826
| 2,540,336
|
How does NumPy in-place sort work on views?
|
<p>Could you please help me understand the output of these two sorting attempts:</p>
<p>Attempt 1</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a = np.array([1, 2, 3])
a[::-1].sort()
print(a)
# prints [3 2 1]
</code></pre>
<p>I somehow understand that <code>a[::-1]</code> is a view and hence sorting in place leads to descending order instead of the usual ascending order.</p>
<p>Attempt 2</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a = np.array([1, 2, 3])
a = a[::-1]
a.sort()
print(a)
# prints [1 2 3]
</code></pre>
<p>What has changed here? We are still operating on a view so why is the output different?</p>
|
<python><numpy>
|
2023-01-02 20:35:53
| 2
| 597
|
karpan
|
74,986,796
| 13,536,496
|
Setting up HTTPS for Elastic Beanstalk single instance Python (FastAPI) application
|
<p>Please note: No Load Balancers. I'm specifically looking for solutions with single instances.</p>
<p>Greetings! As the title suggests, I'm having trouble setting up HTTPS with a self signed certificate (certbot) for an EB instance running python.</p>
<p>To get started, I did follow the instructions here <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-python.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-python.html</a>, replacing <code>server.crt</code> with <code>cert.pem</code> and <code>server.key</code> with <code>privkey.pem</code> generated by certbot, and <code>application.py</code> with <code>asgi.py</code> (the entry point to my application). All of these in <code>https-instance.config</code> and <code>https-instance-single.config</code> under <code>.ebextensions</code>.</p>
<p>I will include the files here in case I misconfigured something.</p>
<blockquote>
<p>https-instance.config</p>
</blockquote>
<pre><code>packages:
yum:
mod24_ssl: []
files:
/etc/httpd/conf.d/ssl.conf:
mode: "000644"
owner: root
group: root
content: |
LoadModule wsgi_module modules/mod_wsgi.so
WSGIPythonHome /opt/python/run/baselinenv
WSGISocketPrefix run/wsgi
WSGIRestrictEmbedded On
Listen 443
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile "/etc/pki/tls/certs/server.crt"
SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"
Alias /static/ /opt/python/current/app/static/
<Directory /opt/python/current/app/static>
Order allow,deny
Allow from all
</Directory>
WSGIScriptAlias / /opt/python/current/app/asgi.py
<Directory /opt/python/current/app>
Require all granted
</Directory>
WSGIDaemonProcess wsgi-ssl processes=1 threads=15 display-name=%{GROUP} \
python-path=/opt/python/current/app \
python-home=/opt/python/run/venv \
home=/opt/python/current/app \
user=wsgi \
group=wsgi
WSGIProcessGroup wsgi-ssl
</VirtualHost>
/etc/pki/tls/certs/server.crt:
mode: "000400"
owner: root
group: root
content: |
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
/etc/pki/tls/certs/server.key:
mode: "000400"
owner: root
group: root
content: |
-----BEGIN PRIVATE KEY-----
-----END PRIVATE KEY-----
container_commands:
01killhttpd:
command: "killall httpd"
02waitforhttpddeath:
command: "sleep 3"
</code></pre>
<blockquote>
<p>https-instance-single.config</p>
</blockquote>
<pre><code>Resources:
sslSecurityGroupIngress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: { "Fn::GetAtt": ["AWSEBSecurityGroup", "GroupId"] }
IpProtocol: tcp
ToPort: 443
FromPort: 443
CidrIp: 0.0.0.0/0
</code></pre>
<p>Upon deploying, I get <code>Instance deployment failed. For details, see 'eb-engine.log'.</code> with no further details as to what went wrong.</p>
<p>As for the configuration, the proxy server is Apache, WSGIPath <code>asgi.py</code>, the rest default.</p>
<p>Edit:</p>
<p>After building a cople of times and reviewing the logs under <code>eb-engine.log</code>, ended up with this <code>https-instance.config</code>:</p>
<pre><code>packages:
yum:
mod_ssl: []
files:
/etc/httpd/conf.d/ssl.conf:
mode: "000644"
owner: root
group: root
content: |
LoadModule wsgi_module modules/mod_wsgi.so
WSGIPythonHome /opt/python/run/baselinenv
WSGISocketPrefix run/wsgi
WSGIRestrictEmbedded On
Listen 443
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile "/etc/pki/tls/certs/server.crt"
SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"
Alias /static/ /opt/python/current/app/static/
<Directory /opt/python/current/app/static>
Order allow,deny
Allow from all
</Directory>
WSGIScriptAlias / /opt/python/current/app/asgi.py
<Directory /opt/python/current/app>
Require all granted
</Directory>
WSGIDaemonProcess wsgi-ssl processes=1 threads=15 display-name=%{GROUP} \
python-path=/opt/python/current/app \
python-home=/opt/python/run/venv \
home=/opt/python/current/app \
user=daemon \
group=daemon
WSGIProcessGroup wsgi-ssl
</VirtualHost>
/etc/pki/tls/certs/server.crt:
mode: "000400"
owner: root
group: root
content: |
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
/etc/pki/tls/certs/server.key:
mode: "000400"
owner: root
group: root
content: |
-----BEGIN PRIVATE KEY-----
-----END PRIVATE KEY-----
</code></pre>
<p>The deployment is successful using this, however I still get a <code>refused to connect</code> response.</p>
<p>Any suggestions are welcome, thanks in advance!</p>
|
<python><ssl><https><amazon-elastic-beanstalk><fastapi>
|
2023-01-02 20:31:05
| 0
| 352
|
Mark
|
74,986,791
| 11,400,377
|
Tkinter not updating when changing variables
|
<p>I'm tinkering with Tkinter and trying to check whether my program is open when a button is pressed, but Tkinter is not updating my label. Why?</p>
<pre class="lang-py prettyprint-override"><code>from win32gui import GetWindowRect, FindWindow
from tkinter import Tk, ttk, BooleanVar
class Bot:
root = Tk()
is_gta_open = BooleanVar(None, False)
def mta_open_string(self):
return "włączone" if self.is_gta_open.get() else "wyłączone"
def draw_gui(self):
frame = ttk.Frame(self.root, padding=10)
frame.grid()
ttk.Label(frame, text=f"Status gry: {self.mta_open_string()}").grid(column=0, row=0)
ttk.Button(frame, text="Odśwież", command=lambda: [self.try_finding_rect(), self.root.update()]).grid(column=1, row=0)
self.root.mainloop()
def try_finding_rect(self):
window_handle = FindWindow("Grand theft auto San Andreas", None)
if window_handle == 0:
self.is_gta_open.set(False)
return
self.is_gta_open.set(True)
def run(self):
self.try_finding_rect()
self.draw_gui()
if __name__ == "__main__":
Bot().run()
</code></pre>
<p>I'm updating the state using the <code>self.root.update</code> method and a <code>BooleanVar</code>, so I don't know why this is not working.</p>
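<p>One thing I was going to try next is re-setting the label text explicitly through a <code>StringVar</code>, since the f-string is only evaluated once when the label is created (a sketch of the relevant part of <code>draw_gui</code>):</p>
<pre><code>from tkinter import StringVar

# inside draw_gui
self.status_var = StringVar(value=f"Status gry: {self.mta_open_string()}")
ttk.Label(frame, textvariable=self.status_var).grid(column=0, row=0)
ttk.Button(
    frame, text="Odśwież",
    command=lambda: [self.try_finding_rect(),
                     self.status_var.set(f"Status gry: {self.mta_open_string()}")],
).grid(column=1, row=0)
</code></pre>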
|
<python><tkinter>
|
2023-01-02 20:30:02
| 2
| 993
|
Cholewka
|
74,986,775
| 159,072
|
How can I send a message from server to client using socketIO?
|
<p>I want to send a message from server app to the html page. So, I wrote the following code.</p>
<p>However, the html page is showing nothing.</p>
<p>What am I doing incorrectly?</p>
<p><strong>server_sends_client_receives.py</strong></p>
<pre><code>from flask import Flask, render_template
from flask_socketio import SocketIO, emit
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)
@app.route('/')
def index():
return render_template('server_sends_client_receives.html')
@socketio.on('my_event')
def handle_my_custom_event(data):
emit('my_response', {data: 'sent'})
if __name__ == '__main__':
socketio.run(app)
</code></pre>
<p><strong>server_sends_client_receives.html</strong></p>
<pre><code><!--client sends, server receives-->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
<script type="text/javascript" charset="utf8" src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.0.1/socket.io.js" integrity="sha512-q/dWJ3kcmjBLU4Qc47E4A9kTB4m3wuTY7vkFJDTZKjTs8jhyGQnaUrxa0Ytd0ssMZhbNua9hE+E7Qv1j+DyZwA==" crossorigin="anonymous"></script>
<script type="text/javascript" charset="utf-8">
socket.on('my_event', function (msg, cb)
{
$('#messages').text(msg.data).html());
if (cb)
cb();
});
</script>
</head>
<body>
<div id="messages"></div>
</body>
</html>
</code></pre>
|
<python><flask><flask-socketio>
|
2023-01-02 20:28:35
| 2
| 17,446
|
user366312
|
74,986,587
| 7,694,216
|
Python Structural Pattern Matching: str(True) doesn't match into str(True)
|
<p>I've found some unexpected behaviour of Python structural pattern matching that I want to discuss today.<br />
All the code here is run with CPython 3.10.8.</p>
<p>So, let's take a look on the code below</p>
<pre class="lang-py prettyprint-override"><code>match str(True):
case str(True): print(1)
case str(False): print(2)
case _: print(3)
</code></pre>
<p>I expect this code to print 1. The reason is that str(True) will evaluate to "True" in both the match part and the case part, and I expect "True" to match "True". <strong>However, surprisingly for me, this code prints 3.</strong></p>
<p>We can also rewrite this into this piece of code:</p>
<pre class="lang-py prettyprint-override"><code>match str(True):
case "True": print(1)
case "False": print(2)
case _: print(3)
</code></pre>
<p><strong>This time this code will print 1.</strong></p>
<p>What is happening here? Why does Python pattern matching work differently depending on whether there is an expression after the "case" keyword or not? Should evaluations be forbidden in pattern matching? What is the rationale behind this?</p>
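<p>For comparison, a guard-based variant does print 1, if I am reading the semantics right (a sketch):</p>
<pre class="lang-py prettyprint-override"><code># capture the subject with a class pattern and compare it in a guard
match str(True):
    case str() as s if s == str(True): print(1)
    case str() as s if s == str(False): print(2)
    case _: print(3)
</code></pre>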
|
<python><python-3.x><pattern-matching><python-3.10><structural-pattern-matching>
|
2023-01-02 20:04:46
| 1
| 456
|
inyutin
|
74,986,461
| 6,020,161
|
Load a Python XGBoost as SparkXGBoost?
|
<p>Is there a way to take a model object trained in base XGBoost and load it as a SparkXGBoost model? The docs aren't super clear on this split. I've tried:</p>
<pre><code>from xgboost.spark import SparkXGBClassifierModel
model2 = SparkXGBClassifierModel.load("xgboost-model")
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>Input path does not exist: /xgboost-model/metadata
</code></pre>
<p>I assume this means the on-disk format differs from that of a model originally trained as a SparkXGBoost model.</p>
|
<python><pyspark><xgboost>
|
2023-01-02 19:47:11
| 1
| 340
|
Alex
|
74,986,333
| 11,092,636
|
tqdm and pytest INTERNALERROR> UnicodeEncodeError: 'charmap' codec can't encode characters in position 245-254: character maps to <undefined>
|
<p>MRE:</p>
<p><code>test_test.py</code>:</p>
<pre><code>import main
def test_file() -> None:
main.main("qfqsdfsqdds")
assert True
</code></pre>
<p><code>main.py</code>:</p>
<pre><code>from tqdm import tqdm
def main(username: str):
for i in tqdm(range(10)):
a = i + 1
</code></pre>
<p>Running the test through <code>PyCharm</code> raises the following error:</p>
<pre><code>test_test.py::test_file PASSED [100%]
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\_pytest\main.py", line 270, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\_pytest\main.py", line 324, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_callers.py", line 39, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\_pytest\main.py", line 349, in pytest_runtestloop
INTERNALERROR> item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_callers.py", line 39, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\_pytest\runner.py", line 112, in pytest_runtest_protocol
INTERNALERROR> runtestprotocol(item, nextitem=nextitem)
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\_pytest\runner.py", line 131, in runtestprotocol
INTERNALERROR> reports.append(call_and_report(item, "call", log))
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\_pytest\runner.py", line 224, in call_and_report
INTERNALERROR> hook.pytest_runtest_logreport(report=report)
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\site-packages\pluggy\_callers.py", line 39, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pycharm\teamcity\pytest_plugin.py", line 300, in pytest_runtest_logreport
INTERNALERROR> self.report_test_output(report, test_id)
INTERNALERROR> File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pycharm\teamcity\pytest_plugin.py", line 208, in report_test_output
INTERNALERROR> dump_test_stderr(self.teamcity, test_id, test_id, data)
INTERNALERROR> File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pycharm\teamcity\common.py", line 78, in dump_test_stderr
INTERNALERROR> messages.testStdErr(test_id, chunk, flowId=flow_id)
INTERNALERROR> File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pycharm\teamcity\messages.py", line 190, in testStdErr
INTERNALERROR> self.message('testStdErr', name=testName, out=out, flowId=flowId)
INTERNALERROR> File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pycharm\_jb_runner_tools.py", line 117, in message
INTERNALERROR> _old_service_messages.message(self, messageName, **properties)
INTERNALERROR> File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pycharm\teamcity\messages.py", line 101, in message
INTERNALERROR> retry_on_EAGAIN(self.output.write)(self.encode(message))
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pycharm\teamcity\messages.py", line 68, in encode
INTERNALERROR> value = value.encode(self.encoding)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> File "C:\Users\lhott\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 12, in encode
INTERNALERROR> return codecs.charmap_encode(input,errors,encoding_table)
INTERNALERROR> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR> UnicodeEncodeError: 'charmap' codec can't encode characters in position 245-254: character maps to <undefined>
============================== 1 passed in 0.02s ==============================
Process finished with exit code 3
</code></pre>
<p>Running <code>pytest</code> with the terminal works though.</p>
<p>The problem does seem to come from <code>tqdm</code>.</p>
<p>I'm using <code>Python 3.11.1</code> and:</p>
<pre><code>PyCharm 2022.3.1 (Community Edition)
Build #PC-223.8214.51, built on December 20, 2022
Runtime version: 17.0.5+1-b653.23 amd64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o.
Windows 11 10.0
GC: G1 Young Generation, G1 Old Generation
Memory: 2030M
Cores: 16
Non-Bundled Plugins:
me.lensvol.blackconnect (0.5.0)
com.chesterccw.excelreader (2022.12.1-203.223)
com.github.copilot (1.1.38.2229)
</code></pre>
<p>As @aaron pointed out, the stack trace from PyCharm might be misleading: when I click on About it shows <code>2022.3.1</code>, but when I run the code the stack trace says <code>2022.1.3</code>.</p>
|
<python><pycharm><pytest><tqdm>
|
2023-01-02 19:29:33
| 1
| 720
|
FluidMechanics Potential Flows
|
74,986,298
| 275,002
|
Struggling to pass string and int parameters from Python to Go library
|
<p>I have the go library with the following signature:</p>
<pre><code>//export getData
func getData(symbol string, day int, month string, year int) string {
return "getData2"
}
</code></pre>
<p>In Python I did like the below:</p>
<pre><code>import ctypes
library = ctypes.cdll.LoadLibrary('./lib.so')
get_data = library.getData
# Make python convert its values to C representation.
get_data.argtypes = [ctypes.c_char_p, ctypes.c_int,ctypes.c_int,ctypes.c_int]
get_data.restype = ctypes.c_wchar
# j= get_data("BTC".encode('utf-8'), "3", "JAN".encode('utf-8'), "23".encode('utf-8'))
j= get_data(b"XYZ", 3, "JAN", 23)
print(j)
</code></pre>
<p>and it gives the error</p>
<pre><code>ctypes.ArgumentError: argument 3: <class 'TypeError'>: wrong type
</code></pre>
<p>I am using Python 3.9</p>
<p><strong>Updates</strong></p>
<p>I made changes in Go Function Signature like this:</p>
<pre><code>func getData(symbol, day, month, year *C.char) *C.char {
var instrumentName, combine string
x := C.GoString(symbol) + "-" + C.GoString(day) + C.GoString(month) + C.GoString(year)
log.Println(x)
....
</code></pre>
<p>And in Python like this:</p>
<pre><code>get_data = library.getData
# Make python convert its values to C representation.
# get_data.argtypes = [ctypes.c_char_p, ctypes.c_char_p,ctypes.c_char_p,ctypes.c_char_p]
get_data.restype = ctypes.c_wchar_p
j= get_data("BTC", "3", "JAN", "23")
# j= get_data(b"BTC", 3, "JAN", 23)
print(j.decode('utf-8'))
</code></pre>
<p>No more parameter error, but the issue I am getting now is that only the first character of each parameter comes through in the Go code, that is:</p>
<pre><code>x := C.GoString(symbol) + "-" + C.GoString(day) + C.GoString(month) + C.GoString(year)
log.Println(x)
</code></pre>
<p>So instead of printing <code>BTC-3JAN23</code>, it prints <code>B-3J2</code></p>
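<p>For reference, the fully bytes-based call I keep coming back to (a sketch — I am not certain this is the right way to match the Go signature):</p>
<pre><code>import ctypes

library = ctypes.cdll.LoadLibrary('./lib.so')
get_data = library.getData

# declare every parameter as a C string and the return value as a C string
get_data.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_char_p, ctypes.c_char_p]
get_data.restype = ctypes.c_char_p

result = get_data(b"BTC", b"3", b"JAN", b"23")
print(result.decode('utf-8'))
</code></pre>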
|
<python><go><ctypes><cgo>
|
2023-01-02 19:24:37
| 1
| 15,089
|
Volatil3
|
74,986,221
| 8,610,286
|
Given a list of Wikidata identifiers, is there a way to find which ones are directly related using Python and/or SPARQL?
|
<p>I have a list of Wikidata IDs and I want to find which of those are subclasses (P279) of others.</p>
<p>Let's suppose I have the list in pseudocode <code>["Q42" (Douglas Adams) , "Q752870" (motor vehicle) , "Q1420" (motor car), "Q216762" (hatchback car) </code>].</p>
<p>I'm trying to find a way to process this list and have as output something like:</p>
<p><code>[("Q752870", "Q1420")("Q1420","Q216762")]</code> with the subclass pairs.</p>
<p>I could iterate over the list and run a custom SPARQL query for each pair, in pseudocode:</p>
<pre><code>subclass_pairs = []
for a in list:
for b in list:
if custom_query_handler(a,b):
subclass_pairs.append((a,b))
</code></pre>
<p>But this implies a very large number of SPARQL requests.</p>
<p>How to do this in a single SPARQL request? Is there any other solution possible?</p>
|
<python><sparql><wikidata>
|
2023-01-02 19:14:38
| 1
| 349
|
Tiago Lubiana
|
74,986,098
| 13,583,054
|
How to use Async or Multithread on Azure Managed Endpoint
|
<p>I deployed a model using Azure ML managed endpoint, but I found a bottleneck.</p>
<p>I'm using Azure ML Managed Endpoint to host ML models for object prediction. Our endpoint receives a URL of a picture and is responsible for downloading and predicting the image.</p>
<p>The problem is the bottleneck: each image is downloaded one at a time (synchronously), which is very slow.</p>
<p>Is there a way to download images asynchronously or to create multiple threads? I expected there to be a way to make it faster.</p>
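<p>For what it's worth, this is the kind of thing I had in mind on the scoring side — a thread pool that downloads several images concurrently. It is only a sketch; the URLs, worker count and timeout are placeholders I made up:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor
import requests

def download(url):
    # fetch one image; the timeout is an arbitrary guess
    return requests.get(url, timeout=10).content

urls = ["https://example.com/img1.jpg", "https://example.com/img2.jpg"]

with ThreadPoolExecutor(max_workers=8) as pool:
    images = list(pool.map(download, urls))
</code></pre>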
|
<python><machine-learning><azure-machine-learning-service>
|
2023-01-02 19:00:09
| 1
| 384
|
Juliano Negri
|
74,985,998
| 8,588,743
|
Adding percentages to Venn-diagram using matplotlib_venn
|
<p>I have a data frame that looks like this:</p>
<pre><code> customerid brand
0 A2222242BG84 A
1 A2222255LD3L B
2 A2222255LD3L A
3 A2222263537U A
4 A2222265CE34 C
... ... ...
6679602 A9ZZ86K4VM97 B
6679603 A9ZZ9629MP6E B
6679604 A9ZZ9629MP6E C
6679605 A9ZZ9AB9RN5E A
6679606 A9ZZ9C47PZ8G C
</code></pre>
<p>where the brands are <code>A,B</code> and <code>C</code>. Many customers are customers of one brand, two brands or all three brands, and I want to draw a Venn diagram indicating how customers are shared across all brands. I've managed to write the code to show the different counts in thousands of units, but I struggle to make the Venn diagram show what percentage of the entire customer base each count represents.</p>
<p>Here is my complete code and should be completely reproducible:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib_venn as venn
def count_formatter(count, branch_counts):
# Convert count to thousands
count = count / 1000
# Return the count as a string, followed by the percentage
return f'{count:.1f}K ({100 * count / sum(branch_counts.values):.1f}%)'
# Get counts of each branch
branch_counts = df['brand'].value_counts()
# Convert counts to sets
branch_sets = [set(group_data['customerid']) for _, group_data in df.groupby('brand')]
plt.figure(figsize=(10, 10))
# Generate the Venn diagram
venn.venn3(
subsets=branch_sets,
set_labels=['A', 'B', 'C'],
subset_label_formatter=lambda count, branch_counts=branch_counts: count_formatter(count, branch_counts)
)
# Show the plot
plt.show()
</code></pre>
<p>The figure that's generated only shows 0.0% on all the instances. I don't see why this is.</p>
<p><a href="https://i.sstatic.net/WluXz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WluXz.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><venn-diagram><matplotlib-venn>
|
2023-01-02 18:45:44
| 1
| 903
|
Parseval
|
74,985,970
| 1,870,832
|
Convert np datetime64 column to pandas DatetimeIndex with frequency attribute set correctly
|
<p>Reproducing the data I have:</p>
<pre><code>import numpy as np
import pandas as pd
dts = ['2016-01-01', '2016-02-01', '2016-03-01', '2016-04-01',
'2016-05-01', '2016-06-01', '2016-07-01', '2016-08-01',
'2016-09-01', '2016-10-01', '2016-11-01', '2016-12-01',
'2017-01-01', '2017-02-01', '2017-03-01', '2017-04-01']
my_df = pd.DataFrame({'col1': range(len(dts)), 'month_beginning': dts})#, dtype={'month_beginning': np.datetime64})
my_df['month_beginning'] = my_df.month_beginning.astype(np.datetime64)
</code></pre>
<p>And what I want is to set <code>month_beginning</code> as a datetime index, and specifically <strong>I need it to have the <code>frequency</code> attribute set correctly as monthly</strong></p>
<p>Here's what I've tried so far, and how they have not worked:</p>
<p><strong>First attempt</strong></p>
<pre><code>my_df = my_df.set_index('month_beginning')
</code></pre>
<p>...however after executing the above, <code>my_df.index</code> shows a DatetimeIndex but with <code>freq=None</code>.</p>
<p><strong>Second attempt</strong></p>
<pre><code>dt_idx = pd.DatetimeIndex(my_df.month_beginning, freq='M')
</code></pre>
<p>...but that throws the following error:</p>
<pre><code>ValueError: Inferred frequency MS from passed values does not conform to passed frequency M
</code></pre>
<p>...This is particularly confusing to me given that, as can be checked in my data above, my <code>dts</code>/<code>month-beginning</code> series is in fact monthly and is not missing any months...</p>
|
<python><pandas><numpy><datetime><indexing>
|
2023-01-02 18:42:18
| 1
| 9,136
|
Max Power
|
74,985,825
| 8,059,615
|
Scraping Aliexpress search page does not return all products
|
<p>I have the below code, which I expect to return 60 products, but instead only returns 16:</p>
<pre><code># imports assumed by this snippet
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.firefox.service import Service
from webdriver_manager.firefox import GeckoDriverManager

driver = webdriver.Firefox(service=Service(GeckoDriverManager().install()))
url = 'https://www.aliexpress.com/w/wholesale-silicone-night-light.html?SearchText=silicone+night+light"&"catId=0"&"initiative_id=SB_20230101130255"&"spm=a2g0o.productlist.1000002.0"&"trafficChannel=main"&"shipFromCountry=US"&"g=y'
driver.get(url)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
html = driver.page_source
soup = BeautifulSoup(html, 'lxml')
product_links = []
def get_element_title(element):
return element.select('h1[class*="manhattan--titleText--"]')[0].text
def get_product_links(soup):
for element in soup.select('a[class*="manhattan--container--"]'):
link = f"http:{element['href']}"
product_links.append(link)
print(get_element_title(element))
get_product_links(soup)
</code></pre>
<p>I manually checked the class name for all the products, since I thought maybe some of them have different class names in an effort to stop scraping, but they all have the same class name.</p>
<p><a href="https://i.sstatic.net/Ey9Yk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ey9Yk.png" alt="enter image description here" /></a></p>
|
<python><selenium><selenium-webdriver><web-scraping><beautifulsoup>
|
2023-01-02 18:23:48
| 2
| 405
|
Yousef
|
74,985,816
| 13,745,926
|
How to use gradient descent for cosine similarity optimization?
|
<p>So I have a defined vector A with 1508 dimensions, and I have a list of n vectors called C. Each of the n vectors is given a weight, and a vector B is defined, for each dimension, as: the first weight times that dimension of its vector, plus the second weight times that dimension of its vector, and so on, all divided by the sum of the weights. Basically I want to find the best combination of the n vectors that forms the goal vector A, where the combination is done by averaging the dimensions of the vectors. Also, the weights can only take 1 or 0 as a value. I tried to use TensorFlow's gradient descent in the following way:</p>
<pre><code>import tensorflow as tf
tf.compat.v1.disable_eager_execution()
# Define the dimensions of the vectors
d = 1508
import numpy as np
import random
import math
# Define the variables
A = [random.uniform(0, 1) for _ in range(d)]
C = [[random.uniform(0, 1) for k in range(d)] for _ in range(4)]
a0 = a1 = a2 = a3 = 1
variables = [a0,a1,a2,a3]
# Define B as a linear combination of the vectors in C
B = []
for l in range(d):
value = 0
for i in range(4):
value = value + variables[i] * C[i][l]
B.append(value/sum(variables))
# Use a sigmoid activation to squash the variables to the range [0, 1]
variables = [1 / (1 + math.exp(-v)) for v in variables]
# Round the variables to 0 or 1
variables = [tf.round(v) for v in variables]
variables = [tf.Variable(v) for v in variables]
def loss(A,B):
return 1 - tf.keras.losses.cosine_similarity(A, B)
# Define the loss function
# Use an optimizer to minimize the loss
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
num_steps = 100
import sys
# Run the training loop
for _ in range(num_steps):
with tf.GradientTape() as tape:
B = []
for l in range(d):
value = 0
for i in range(4):
value = value + variables[i] * C[i][l]
B.append(value/sum(variables))
loss_value = loss(A, B)
grads = tape.gradient(loss_value, variables)
print(loss_value)
print(grads)
print(len(grads))
optimizer.apply_gradients(zip(grads, variables))
variables = [tf.clip_by_value(v, 0, 1) for v in variables]
variables = [tf.Variable(v) for v in variables]
</code></pre>
<p>In this example I used 4 weights. But for some reason I can never get the values of the variables to be printed out, and I have no clue if this is working.</p>
<p>That's what I've tried; I was wondering if anyone has ideas on finding the best combination of vectors to reach the goal vector, and what they used. Also, will gradient descent work for this? And I can't get the values out of the variables: <code>.numpy()</code> doesn't work, <code>tf.print</code> doesn't work, etc. Thanks! Here is what I get from running it for a bit:</p>
<pre><code>[<tf.Tensor 'AddN_1:0' shape=() dtype=float32>, <tf.Tensor 'AddN_2:0' shape=() dtype=float32>, <tf.Tensor 'AddN_3:0' shape=() dtype=float32>, <tf.Tensor 'AddN_4:0' shape=() dtype=float32>]
4
[<tf.Tensor 'AddN_6:0' shape=() dtype=float32>, <tf.Tensor 'AddN_7:0' shape=() dtype=float32>, <tf.Tensor 'AddN_8:0' shape=() dtype=float32>, <tf.Tensor 'AddN_9:0' shape=() dtype=float32>]
4
[<tf.Tensor 'AddN_11:0' shape=() dtype=float32>, <tf.Tensor 'AddN_12:0' shape=() dtype=float32>, <tf.Tensor 'AddN_13:0' shape=() dtype=float32>, <tf.Tensor 'AddN_14:0' shape=() dtype=float32>]
4
[<tf.Tensor 'AddN_16:0' shape=() dtype=float32>, <tf.Tensor 'AddN_17:0' shape=() dtype=float32>, <tf.Tensor 'AddN_18:0' shape=() dtype=float32>, <tf.Tensor 'AddN_19:0' shape=() dtype=float32>]
4
[<tf.Tensor 'AddN_21:0' shape=() dtype=float32>, <tf.Tensor 'AddN_22:0' shape=() dtype=float32>, <tf.Tensor 'AddN_23:0' shape=() dtype=float32>, <tf.Tensor 'AddN_24:0' shape=() dtype=float32>]
4
[<tf.Tensor 'AddN_26:0' shape=() dtype=float32>, <tf.Tensor 'AddN_27:0' shape=() dtype=float32>, <tf.Tensor 'AddN_28:0' shape=() dtype=float32>, <tf.Tensor 'AddN_29:0' shape=() dtype=float32>]
4
[<tf.Tensor 'AddN_31:0' shape=() dtype=float32>, <tf.Tensor 'AddN_32:0' shape=() dtype=float32>, <tf.Tensor 'AddN_33:0' shape=() dtype=float32>, <tf.Tensor 'AddN_34:0' shape=() dtype=float32>]
4
[<tf.Tensor 'AddN_36:0' shape=() dtype=float32>, <tf.Tensor 'AddN_37:0' shape=() dtype=float32>, <tf.Tensor 'AddN_38:0' shape=() dtype=float32>, <tf.Tensor 'AddN_39:0' shape=() dtype=float32>]
4
[<tf.Tensor 'AddN_41:0' shape=() dtype=float32>, <tf.Tensor 'AddN_42:0' shape=() dtype=float32>, <tf.Tensor 'AddN_43:0' shape=() dtype=float32>, <tf.Tensor 'AddN_44:0' shape=() dtype=float32>]
4
</code></pre>
<p>Thanks for the help!!!!</p>
|
<python><tensorflow><optimization><vector><gradient-descent>
|
2023-01-02 18:22:26
| 0
| 417
|
Constantly Groovin'
|
74,985,802
| 8,549,300
|
Mask python array based on multiple column indices
|
<p>I have a 64*64 array, and would like to mask certain columns. For one column I know I can do:</p>
<pre><code>mask = np.tri(64,k=0,dtype=bool)
col = np.zeros((64,64),bool)
col[:,3] = True
col_mask = col + np.transpose(col)
col_mask = np.tril(col_mask)
col_mask = col_mask[mask]
</code></pre>
<p>but how to extend this to multiple indices? I have tried doing <code>col[:,1] & col[:,2] = True</code> but got
<code>SyntaxError: cannot assign to operator</code></p>
<p>Also, I might have up to 10 columns I would like to mask, so is there a less unwieldy approach? I have also looked at <a href="https://www.stackoverflow.com/">numpy.indices</a> but I don't think this is what I need. Thank you!</p>
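<p>For what it's worth, the closest I have gotten is fancy indexing with a list of column indices (a sketch; I am not sure it is the cleanest approach):</p>
<pre><code>import numpy as np

col = np.zeros((64, 64), dtype=bool)
col[:, [1, 2, 3]] = True          # mark several columns at once

col_mask = col | col.T            # same as col + np.transpose(col) for booleans
col_mask = np.tril(col_mask)
col_mask = col_mask[np.tri(64, k=0, dtype=bool)]
</code></pre>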
|
<python><arrays><numpy>
|
2023-01-02 18:20:43
| 1
| 361
|
firefly
|
74,985,762
| 2,079,306
|
Curl issue. JSON.loads() works fine with python-requests, but fails when using curl to the flask API. Changes all double quotes to single
|
<p>TypeError: the JSON object must be str, bytes or bytearray, not 'dict'</p>
<p>I have a flask server that is running:</p>
<pre><code>@app.route('/getMyData', methods=['GET'])
def getMyData():
data = json.loads(request.get_json()) # get JSON string and load to python dict
# TYPE ERROR OCCURS HERE
</code></pre>
<p>I use a python script to send:</p>
<pre><code>PARAMS = {"files": ["file1", "file2", "file3", "file4"], "date": [["2000-06-01", "2001-08-01"], ["2005-11-01", "2006-01-01"]], "data": ["data1", "data2", "data3"]}
PARAMS_JSON = json.dumps(PARAMS) # dict to JSON
r = requests.get(url=URL, json=PARAMS_JSON)
</code></pre>
<p>No issues. json.loads on the flask server parses it fine.</p>
<p>I try to create an example for those not using python with a simple curl command. I send:</p>
<pre><code>curl http://127.0.0.1:5000/getMyData -X GET -d '{"files": ["file1", "file2", "file3", "file4"], "date": [["2000-06-01", "2001-08-01"], ["2005-11-01", "2006-01-01"]], "data": ["data1", "data2", "data3"]}' -H 'Content-Type:application/json'
</code></pre>
<p>This throws the type error.</p>
<p>Troubleshooting: I print request.get_json() on the flask server to see what is going on.</p>
<p>When I use the python script (That works) request.json() prints:</p>
<pre><code>{"files": ["file1", "file2", "file3", "file4"], "date": [["2000-06-01", "2001-08-01"], ["2005-11-01", "2006-01-01"]], "data": ["data1", "data2", "data3"]}
</code></pre>
<p>When I use the curl command request.json() prints:</p>
<pre><code>{'files': ['file1', 'file2', 'file3', 'file4'], 'date': [['2000-06-01', '2020-08-01'], ['2005-11-01', '2006-01-01']], 'data': ['data1', 'data2', 'data3']}
</code></pre>
<p>As you can see, curl seems to be changing all my double quotes to single quotes, which isn't a JSON string. Why? Why does curl torment me so?</p>
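<p>If it helps, this is roughly the extra debugging I can add on the Flask side to see what type <code>get_json()</code> actually returns in each case (a sketch):</p>
<pre><code>@app.route('/getMyData', methods=['GET'])
def getMyData():
    payload = request.get_json()
    print(type(payload))    # shows whether Flask parsed the body into a dict or left it as a str
    print(request.data)     # the raw request body, exactly as it arrived on the wire
    # ... rest of the handler unchanged
</code></pre>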
|
<python><json><flask><curl>
|
2023-01-02 18:14:56
| 1
| 1,123
|
john stamos
|
74,985,747
| 8,372,455
|
pyTest for UDP protocol scripts
|
<p>Would anyone have a pytest tip on whether it's possible to use it with a Python BACnet library called <a href="https://bac0.readthedocs.io/en/latest/" rel="nofollow noreferrer">BAC0</a>? I have never used pytest before; I'm just curious if there are any boilerplate scripts and whether my approach is on track or not.</p>
<p>I have a BACnet server app below with 1 discoverable BACnet (UDP protocol) point for a demand response signal level. BACnet is a protocol quite common with building automation technology.</p>
<pre><code>def make_bacnet_app():
def check_event_status():
adr_sig_object = bacnet.this_application.get_object_name('ADR-Event-Level')
adr_sig_object.presentValue = Real(ven_client.bacnet_payload_value)
# create discoverable BACnet object
_new_objects = analog_value(
name='ADR-Event-Level',
description='SIMPLE SIGNAL demand response level',
presentValue=0,is_commandable=False
)
# create BACnet app
bacnet = BAC0.lite()
_new_objects.add_objects_to_application(bacnet)
print('BACnet APP Created Success!')
bacnet_sig_handle = RecurringTask(check_event_status,delay=5)
bacnet_sig_handle.start()
return bacnet
if __name__ == "__main__":
print("Starting main loop")
t2 = threading.Thread(
target=lambda: make_bacnet_app())
t2.setDaemon(True)
t2.start()
</code></pre>
<p>There's some other code (not shown) that checks a server with <code>requests</code>; basically, in my testing I can get the BACnet point to change by sending a GET request to this server on my home test-bench laboratory:
<code>http://192.168.0.100:8080/trigger/1</code></p>
<p>Basically the server and BAC0 app run on the same device for my testing purposes...</p>
<p>And it works: just using a BACnet scan tool, I can see my BACnet point change the way I need it to... just curious if anyone has some tips for putting a pytest script together?</p>
<p>I don't even know if this is possible, but can pytest do this:</p>
<ol>
<li>start server.py file on remote machine (else I can just run them
myself thru SSH)</li>
<li>start client BAC0 script on remote machine (else I can just run them
myself thru SSH)</li>
<li>send GET request to server</li>
<li>perform a BACnet read request with BAC0 to my BAC0 app to verify the
point reads a <code>0</code></li>
<li>wait 100 seconds (built into my server for testing purposes)</li>
<li>perform a BACnet read request with BAC0 to my BAC0 app to verify the
point reads a <code>1</code></li>
<li>wait 100 seconds (built into my test)</li>
</ol>
<p>I can test my BACnet server app with a simple client script that I run on a remote machine, which I should be able to stuff into a pytest function to assert whether the <code>reading</code> equals a <code>1</code> or <code>0</code>. The server app and client code cannot run on the same device; the scripts will error out because only one BACnet instance can run on a given BACnet port at a time.</p>
<pre><code>import BAC0
bacnet = BAC0.lite()
reading = bacnet.read("192.168.0.100 analogValue 0 presentValue")
print("The signal is: ",reading)
</code></pre>
<p>Not a lot of wisdom here; any tips appreciated!</p>
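<p>For concreteness, this is the rough shape of the test I had in mind, to be run from the remote client machine (only a sketch — the IP, URL and the 100 second wait come from my test bench, and I have no idea if this is how people normally structure pytest for hardware/protocol tests):</p>
<pre><code>import time
import requests
import BAC0

def test_adr_event_level():
    bacnet = BAC0.lite()

    # before triggering, the point should read 0
    assert bacnet.read("192.168.0.100 analogValue 0 presentValue") == 0

    # trigger the demand response event on the server
    requests.get("http://192.168.0.100:8080/trigger/1", timeout=5)
    time.sleep(100)  # my server holds the event for 100 seconds

    # after triggering, the point should read 1
    assert bacnet.read("192.168.0.100 analogValue 0 presentValue") == 1
</code></pre>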
|
<python><pytest><bacnet><bac0><bacpypes>
|
2023-01-02 18:13:45
| 1
| 3,564
|
bbartling
|
74,985,713
| 11,806,116
|
Plotting a 3D vector field on 2D plane in Python
|
<p>I would like to plot a 3D vector field on a 2D plane.</p>
<p><a href="https://i.sstatic.net/AvE5P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AvE5P.png" alt="an example data" /></a>
I tried plotting but was not able to get a 3D view of the vector field.</p>
<p>Any help would be highly appreciated.</p>
<p>I tried plotting using matplotlib's 3D tools but with no success.</p>
|
<python><plot><3d>
|
2023-01-02 18:09:33
| 1
| 495
|
code-freeze
|
74,985,638
| 1,473,517
|
How to plot points over a violin plot?
|
<p>I have four pandas Series and I plot them using a violin plot as follows:</p>
<pre><code>import seaborn
seaborn.violinplot([X1['total'], X2['total'], X3['total'], X4['total']])
</code></pre>
<p>I would like to plot the values on top of the violin plot so I added:</p>
<pre><code>seaborn.stripplot([X1['total'], X2['total'], X3['total'], X4['total']])
</code></pre>
<p>But this gives:</p>
<p><a href="https://i.sstatic.net/BXzk2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BXzk2.png" alt="enter image description here" /></a></p>
<p>It plots all the points over the first violin plot.</p>
<p>What am I doing wrong?</p>
|
<python><matplotlib><seaborn><violin-plot>
|
2023-01-02 17:59:41
| 1
| 21,513
|
Simd
|
74,985,619
| 95
|
virtualenv creation fails with error "StopIteration"
|
<p>In attempt to slim down a Docker image with a large Python environment, I tried removing as many files and directories as possible (cached packages, <code>__pycache__</code> directories, <code>.pyc</code>, <code>.md</code>, <code>.txt</code> files, etc).</p>
<p>Now <code>pre-commit</code> initialization fails because it cannot create its virtual environment. I also cannot use <code>virtualenv</code> directly:</p>
<pre class="lang-bash prettyprint-override"><code>$ python -m virtualenv foo2
StopIteration:
</code></pre>
|
<python><virtualenv><python-packaging>
|
2023-01-02 17:57:01
| 1
| 17,398
|
Marek Grzenkowicz
|
74,985,498
| 6,843,153
|
why do I have to add a higher folder when opening a file in python when debugging?
|
<p>I have a directory tree similar to this:</p>
<pre><code>my_application
my_component
process_config.py
process_config.yaml
runner.py
</code></pre>
<p>I launch <code>python runner.py</code> and it calls a class inside <code>process_config.py</code> that reads the yaml file like this:</p>
<pre><code>with open(
os.path.join("my_component", "process_config.yaml"), "r"
) as proces_config_defaults_source:
</code></pre>
<p>And it works fine when I run <code>python runner.py</code>, but it can't locate the file when I run the file from the debugger. I have to use this option for it to work:</p>
<pre><code>with open(
os.path.join("my_application", "my_component", "process_config.yaml"), "r"
) as proces_config_defaults_source:
</code></pre>
<p>How can I make the debugger work the same way as a plain <code>python</code> run?</p>
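<p>For reference, the workaround I am considering is to build the path relative to the module file instead of the working directory (a sketch, assuming <code>process_config.py</code> and <code>process_config.yaml</code> sit in the same folder, as in my tree):</p>
<pre><code>import os

_THIS_DIR = os.path.dirname(os.path.abspath(__file__))

with open(
    os.path.join(_THIS_DIR, "process_config.yaml"), "r"
) as proces_config_defaults_source:
    ...
</code></pre>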
|
<python><visual-studio-code><debugging>
|
2023-01-02 17:42:54
| 0
| 5,505
|
HuLu ViCa
|
74,985,482
| 11,083,136
|
Sum of sines and cosines from DFT
|
<p>I have a signal and want to reconstruct it from its spectrum as a sum of sines and/or cosines. I am aware of the inverse FFT but I want to reconstruct the signal in this way.</p>
<p>An example would look like this:</p>
<pre class="lang-py prettyprint-override"><code>sig = np.array([1, 5, -3, 0.7, 3.1, -5, -0.5, 3.2, -2.3, -1.1, 3, 0.3, -2.05, 2.1, 3.05, -2.3])
fft = np.fft.rfft(sig)
mag = np.abs(fft) * 2 / sig.size
phase = np.angle(fft)
x = np.arange(sig.size)
reconstructed = list()
for x_i in x:
val = 0
for i, (m, p) in enumerate(zip(mag, phase)):
val += ... # what's the correct form?
reconstructed.append(val)
</code></pre>
<p>What's the correct code to write in the next-to-last line?</p>
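<p>For completeness, my current best guess for that line is below (unverified — in particular I am not sure how the DC and Nyquist bins should be scaled, given that <code>mag</code> was multiplied by 2 / sig.size for every bin):</p>
<pre class="lang-py prettyprint-override"><code>for x_i in x:
    val = 0
    for i, (m, p) in enumerate(zip(mag, phase)):
        # halve the DC bin (i == 0) and, for an even-length signal, the Nyquist bin,
        # because mag doubled every bin while those two appear only once in the spectrum
        scale = 0.5 if i == 0 or (sig.size % 2 == 0 and i == len(mag) - 1) else 1.0
        val += scale * m * np.cos(2 * np.pi * i * x_i / sig.size + p)
    reconstructed.append(val)
</code></pre>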
|
<python><numpy>
|
2023-01-02 17:41:17
| 0
| 566
|
Dominik Ficek
|
74,985,447
| 3,880,849
|
PyCharm: namedtuple: unexpected argument, unfilled parameter
|
<p>In PyCharm, you can declare a named tuple.</p>
<pre><code>from collections import namedtuple
InstTyp = namedtuple(
typename='InstTyp',
field_names='''
instance_type
memory
num_cpus
'''
)
</code></pre>
<p>Code that uses the named tuple runs without error.</p>
<pre><code>it = InstTyp(
instance_type='tx',
memory=64,
num_cpus=8
)
</code></pre>
<p>However, PyCharm raises "Unexpected argument" and "unfilled parameter" inspection warnings.</p>
<p><a href="https://i.sstatic.net/Z4WDp1Nm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4WDp1Nm.png" alt="enter image description here" /></a></p>
|
<python><collections><pycharm><namedtuple><code-inspection>
|
2023-01-02 17:37:36
| 2
| 936
|
Brian Fitzgerald
|
74,985,335
| 7,236,133
|
Pandas data frame - Group a column values then Randomize new values of that column
|
<p>I have one column (X) that contains some values with duplicates (several rows have the same value, and they are all in sequence).
I have a requirement to randomize new values for that column for testing one issue, so I tried:</p>
<pre><code>np.random.seed(RSEED)
df["X"] = np.random.randint(100, 500, df.shape[0])
</code></pre>
<p>But this is not enough: I need to keep the sequences. I mean to group by the same value, then assign one new random number to all of the rows of that value, and to do this for all grouped values of the original column. E.g.:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>X</th>
<th>new X (randomized)</th>
</tr>
</thead>
<tbody>
<tr>
<td>210</td>
<td>500</td>
</tr>
<tr>
<td>210</td>
<td>500</td>
</tr>
<tr>
<td>.</td>
<td>.</td>
</tr>
<tr>
<td>.</td>
<td>.</td>
</tr>
<tr>
<td>340</td>
<td>100</td>
</tr>
<tr>
<td>340</td>
<td>100</td>
</tr>
<tr>
<td>.</td>
<td>.</td>
</tr>
<tr>
<td>.</td>
<td>.</td>
</tr>
</tbody>
</table>
</div>
<p>I started looking at whether pandas has something built in. I can group with <code>pandas.DataFrame.groupby</code>, but I couldn't find a <code>pandas.DataFrame.random</code> that can be applied per group.</p>
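<p>The closest I have come up with myself is a plain dict of one random value per distinct X, mapped back onto the rows, rather than anything groupby-based (a sketch; the column names are mine):</p>
<pre><code>import numpy as np

np.random.seed(RSEED)

mapping = {val: np.random.randint(100, 500) for val in df["X"].unique()}
df["new X"] = df["X"].map(mapping)   # every row of the same group gets the same random value
</code></pre>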
|
<python><pandas>
|
2023-01-02 17:24:56
| 1
| 679
|
zbeedatm
|
74,985,248
| 3,457,513
|
How to combine Mixins and Protocols in Python
|
<p>I'm trying to understand the correct way to use <code>Protocol</code>s for ensuring the inheritor of a <code>Mixin</code> contains all the attributes needed by the <code>Mixin</code> to operate correctly. The <code>Mixin</code>s are part of an existing codebase, so it will be hard to change the design, but I will do it if it makes the design "right".</p>
<p>This is a simplified example of how the code is currently implemented:</p>
<pre class="lang-py prettyprint-override"><code>class A1Mixin(object):
_needed_for_a1: str
def a1(self):
print(self._needed_for_a1)
class A1UserV1(
A1Mixin,
object
):
def __init__(self, needed_for_a1: str):
self._needed_for_a1: str = needed_for_a1
</code></pre>
<p>But the issue with this is that type-checking on <code>A1UserV1</code> does not highlight if <code>self._needed_for_a1</code> is not defined in <code>__init__</code>.
So I thought this might work:</p>
<pre class="lang-py prettyprint-override"><code>class A1Protocol(Protocol):
_needed_for_a1: str
class A1UserV2(
A1Mixin,
A1Protocol,
object
):
def __init__(self, needed_for_a1: str):
self._needed_for_a1: str = needed_for_a1
</code></pre>
<p>But still no flagging from PyCharm if I do not define <code>self._needed_for_a1</code> in <code>__init__</code>. Is this even a proper use of <code>Protocol</code>?</p>
|
<python><python-3.x><protocols>
|
2023-01-02 17:13:02
| 0
| 1,045
|
vahndi
|
74,985,212
| 4,878,848
|
How can I Convert 'pyspark.dbutils.DBUtils' to 'dbruntime.dbutils.DBUtils' in Databricks
|
<p>I am working on a project where we have some helper functions that use dbutils; they were initially used as notebooks but have now been converted to Python modules. Now I cannot access those methods, as they cannot find dbutils.</p>
<p>I searched for a way of using dbutils so that I can call it from a notebook as well as from a Python module, and I found some Stack Overflow answers that suggest the methods below:</p>
<pre><code>def get_db_utils(spark):
dbutils = None
if spark.conf.get("spark.databricks.service.client.enabled") == "true":
print("Inside IDE Dbutils")
from pyspark.dbutils import DBUtils
dbutils = DBUtils(spark)
else:
print("Inside Notebook Dbutils")
import IPython
dbutils = IPython.get_ipython().user_ns["dbutils"]
return dbutils
def get_dbutils(spark):
from pyspark.dbutils import DBUtils
return DBUtils(spark)
</code></pre>
<p>Whenever I check the types of the dbutils reference variables after calling these functions, I get the following:</p>
<pre><code>dbutils1 = get_db_utils(spark)
dbutils2 = get_dbutils(spark)
print(type(dbutils1))
print(type(dbutils2))
</code></pre>
<p>It gives the output as <strong><class 'pyspark.dbutils.DBUtils'></strong>, whereas when I print the type of the actual dbutils I get <strong><class 'dbruntime.dbutils.DBUtils'></strong>.</p>
<p>When I try to read the secret value using the actual dbutils, it runs and works properly.
But whenever I use dbutils1 or dbutils2</p>
<pre><code>secret_value = dbutils1.secrets.get(scope=SECRET_SCOPE, key="Key")
</code></pre>
<p>it gives me below error:</p>
<pre><code>IllegalArgumentException: Invalid URI host: null (authority: null)
</code></pre>
<p>Is there any way I can get around this error?</p>
|
<python><pyspark><databricks><dbutils>
|
2023-01-02 17:09:12
| 0
| 3,076
|
Nikunj Kakadiya
|
74,985,163
| 10,045,805
|
How can I parse the unit : "g/100mL" using unit-parse in Python?
|
<p>I'm trying to parse strings in Python, looking for scientific values and units. I want to retrieve them in order to convert them to some other units.</p>
<p>I'm using the library <a href="https://pint.readthedocs.io/en/0.20.1/advanced/defining.html" rel="nofollow noreferrer">unit-parse</a> (based on <a href="https://pint.readthedocs.io/en/0.20.1/advanced/defining.html" rel="nofollow noreferrer">pint</a>) but it has trouble understanding this example : <code>12.5g/100ml</code>.</p>
<p>I managed a workaround : replacing <code>g/100mL</code> in the string by another word (<code>stuff</code> for example in the code below) and using this word as a new unit (equivalent to <code>(g/l) * 10</code>)</p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>import logging
import pint
u = pint.UnitRegistry()
U = Unit = u.Unit
Q = Quantity = u.Quantity
from unit_parse import parser, logger, config
def display(text):
text = text.replace(" ", "") # Suppress spaces.
result = parser(text)
print(f"RESULT = {result}")
print(f"VALUE = {result.m}")
print(f"UNIT = {result.u}")
print(f"to g/l = {result.to('g/L')}")
print(f"to g/ml = {result.to('g/ml')}")
print(f"to stuff = {result.to('stuff')}")
def main():
u.define('stuff = (g/l) * 10')
logger.setLevel(logging.INFO)
more_last_minute_sub = [["g/100mL", "stuff"]] # [bad text/regex, new text]
config.last_minute_sub += more_last_minute_sub # Here we are adding to the existing list of units
text = ("12.5g / 100mL")
</code></pre>
<p>Is there a better way to do this ? Or should I stick to this workaround ? Is there a better library to use ?</p>
|
<python><text-parsing><units-of-measurement><pint>
|
2023-01-02 17:04:06
| 2
| 380
|
cuzureau
|
74,985,086
| 1,169,091
|
Why do Categories columns take up more space than the Object columns?
|
<p>When I run this code and look at the output of info(), the DataFrame that uses Category types seems to take more space (932 bytes) than the DataFrame that uses Object types (624 bytes).</p>
<pre><code>def initData():
myPets = {"animal": ["cat", "alligator", "snake", "dog", "gerbil", "lion", "gecko", "hippopotamus", "parrot", "crocodile", "falcon", "hamster", "guinea pig"],
"feel" : ["furry", "rough", "scaly", "furry", "furry", "furry", "rough", "rough", "feathery", "rough", "feathery", "furry", "furry" ],
"where lives": ["indoor", "outdoor", "indoor", "indoor", "indoor", "outdoor", "indoor", "outdoor", "indoor", "outdoor", "outdoor", "indoor", "indoor" ],
"risk": ["safe", "dangerous", "dangerous", "safe", "safe", "dangerous", "safe", "dangerous", "safe", "dangerous", "safe", "safe", "safe" ],
"favorite food": ["treats", "fish", "bugs", "treats", "grain", "antelope", "bugs", "antelope", "grain", "fish", "rabbit", "grain", "grain" ],
"want to own": [1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1 ] }
petDF = pd.DataFrame(myPets)
petDF = petDF.set_index("animal")
#print(petDF.info())
#petDF.head(100)
return petDF
def addCategoryColumns(myDF):
myDF["cat_feel"] = myDF["feel"].astype("category")
myDF["cat_where_lives"] = myDF["where lives"].astype("category")
myDF["cat_risk"] = myDF["risk"].astype("category")
myDF["cat_favorite_food"] = myDF["favorite food"].astype("category")
return myDF
objectsDF = initData()
categoriesDF = initData()
categoriesDF = addCategoryColumns(categoriesDF)
categoriesDF = categoriesDF.drop(["feel", "where lives", "risk", "favorite food"], axis = 1)
print(objectsDF.info())
print(categoriesDF.info())
categoriesDF.head()
<class 'pandas.core.frame.DataFrame'>
Index: 13 entries, cat to guinea pig
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 feel 13 non-null object
1 where lives 13 non-null object
2 risk 13 non-null object
3 favorite food 13 non-null object
4 want to own 13 non-null int64
dtypes: int64(1), object(4)
memory usage: 624.0+ bytes
None
<class 'pandas.core.frame.DataFrame'>
Index: 13 entries, cat to guinea pig
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 want to own 13 non-null int64
1 cat_feel 13 non-null category
2 cat_where_lives 13 non-null category
3 cat_risk 13 non-null category
4 cat_favorite_food 13 non-null category
dtypes: category(4), int64(1)
memory usage: 932.0+ bytes
None
</code></pre>
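<p>For reference, this is the check I was planning to run next to see the per-column footprint, since I understand the numbers from <code>info()</code> may only be shallow estimates unless <code>memory_usage='deep'</code> is used (a sketch):</p>
<pre><code>print(objectsDF.memory_usage(deep=True))
print(categoriesDF.memory_usage(deep=True))

objectsDF.info(memory_usage="deep")
categoriesDF.info(memory_usage="deep")
</code></pre>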
|
<python><pandas>
|
2023-01-02 16:56:27
| 1
| 4,741
|
nicomp
|
74,985,024
| 19,363,912
|
Find a value in next td tag with bs4
|
<p>Any way to pick the value <code>6.543</code> (ignoring <code><b></code>), belonging to next <code><td></code> after <code>Hello Friend </code>?</p>
<pre><code> <tr>
<td align="right" colspan="4">
Hey Hello Friend
</td>
<td align="right">
2.123
</td>
</tr>
<tr>
<td align="right" colspan="4">
<b>
Hello Friend
<sup>
3
</sup>
</b>
</td>
<td align="right">
<b>
6.543
</b>
</td>
</tr>
</code></pre>
<p>Note there is 'Hey Hello Friend' and 'Hello Friend '.</p>
<p>Using <code>soup.find("td", text=re.compile("Hello Friend ")).find_next_sibling("td")</code> does not work. It returns <code>AttributeError: 'NoneType' object has no attribute 'find_next_sibling'</code>.</p>
|
<python><html><beautifulsoup>
|
2023-01-02 16:49:20
| 2
| 447
|
aeiou
|
74,984,988
| 18,756,733
|
Print None if pd.read_html can't find any tables
|
<p>I want to print the table if it exists.</p>
<pre><code>import pandas as pd
main_url='https://fbref.com/en/comps/9/2000-2001/2000-2001-Premier-League-Stats'
squad_advanced_goalkeeping=pd.read_html(main_url,match='Squad Advanced Goalkeeping')[0] if pd.read_html(main_url,match='Squad Advanced Goalkeeping') else None
squad_advanced_goalkeeping
</code></pre>
<p>I thought this code was the solution but I still get "ValueError: No tables found matching pattern 'Squad Advanced Goalkeeping'"</p>
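<p>The alternative I am considering is to catch the exception instead (a sketch):</p>
<pre><code>import pandas as pd

main_url = 'https://fbref.com/en/comps/9/2000-2001/2000-2001-Premier-League-Stats'

try:
    squad_advanced_goalkeeping = pd.read_html(main_url, match='Squad Advanced Goalkeeping')[0]
except ValueError:
    # read_html raises ValueError when no table matches the pattern
    squad_advanced_goalkeeping = None
</code></pre>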
|
<python><web-scraping>
|
2023-01-02 16:44:49
| 1
| 426
|
beridzeg45
|
74,984,983
| 19,797,660
|
Dataframe get max() from multiple columns in a rolling window - Shape Error
|
<p><strong>EDIT 2:</strong></p>
<p>Consider the question closed.
After I gave the example in the first EDIT I realised that I am senselessly looking through all of the columns for the highest value, since the <code>High</code> column will always have the highest value, and the following code is sufficient:</p>
<pre><code>_high[f'Highest {n1}'] = price_df['High'].rolling(n1).max()
</code></pre>
<p>I am trying to get max from multiple columns with this code:</p>
<pre><code>_high[f'Highest {n1}'] = price_df[['Open', 'Close', 'High', 'Low']].rolling(n1).max()
</code></pre>
<p>, but I am getting this error:</p>
<pre><code> File "C:\Users\...\main.py", line 1282, in chop_zone
_high[f'Highest {n1}'] = price_df[['Open', 'Close', 'High', 'Low']].rolling(n1).max()
File "C:\Users\...\venv\lib\site-packages\pandas\core\frame.py", line 3968, in __setitem__
self._set_item_frame_value(key, value)
File "C:\Users\...\venv\lib\site-packages\pandas\core\frame.py", line 4098, in _set_item_frame_value
raise ValueError("Columns must be same length as key")
ValueError: Columns must be same length as key
Process finished with exit code 1
</code></pre>
<p>And I don't know why, how can I fix this?</p>
<p>EDIT:</p>
<p><em>What I expect the output to be:</em></p>
<p>Lets say I have the following dataframe (it is <code>price_df[['Open', 'Close', 'High', 'Low']].tail(10)</code>)</p>
<pre><code> Open Close High Low
5168 14010.0 14016.00 14024.05 14005.50
5169 14016.0 14018.00 14018.50 14007.50
5170 14018.0 14015.50 14021.50 14012.50
5171 14015.5 14007.00 14018.00 14004.50
5172 14007.0 14013.00 14013.50 13999.50
5173 14013.0 14007.00 14013.50 14002.00
5174 14007.0 14009.60 14017.60 14003.55
5175 14009.6 14013.60 14015.00 14004.60
5176 14013.6 14020.00 14021.60 14009.00
5177 14020.0 14015.55 14022.60 14013.60
</code></pre>
<p>So I expect a single column with the maximum value over a rolling window of size <code>n1</code> across all columns.
For example, if the rolling window were 3, the first maximal value would be <code>14024.05</code> from the <code>High</code> column across all columns from index <code>5168</code> to <code>5170</code>, and the next maximal value would be <code>14021.50</code>, also from the <code>High</code> column, across indices <code>5169</code> to <code>5171</code>.</p>
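<p>For the general case (if the highest value were not guaranteed to sit in <code>High</code>), the version I was considering takes the row-wise maximum first and only then applies the rolling maximum (a sketch):</p>
<pre><code>_high[f'Highest {n1}'] = (
    price_df[['Open', 'Close', 'High', 'Low']].max(axis=1).rolling(n1).max()
)
</code></pre>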
|
<python><pandas>
|
2023-01-02 16:44:30
| 0
| 329
|
Jakub Szurlej
|
74,984,788
| 3,004,472
|
how to remove string ending with specific string
|
<p>I have file names like</p>
<pre><code>ios_g1_v1_yyyymmdd
ios_g1_v1_h1_yyyymmddhhmmss
ios_g1_v1_h1_YYYYMMDDHHMMSS
ios_g1_v1_g1_YYYY
ios_g1_v1_j1_YYYYmmdd
ios_g1_v1
ios_g1_v1_t1_h1
ios_g1_v1_ty1_f1
</code></pre>
<p>I would like to remove only the suffix when it matches the string YYYYMMDDHHMMSS OR yyyymmdd OR YYYYmmdd OR YYYY</p>
<p>my expected output would be</p>
<pre><code>ios_g1_v1
ios_g1_v1_h1
ios_g1_v1_h1
ios_g1_v1_g1
ios_g1_v1_j1
ios_g1_v1
ios_g1_v1_t1_h1
ios_g1_v1_ty1_f1
</code></pre>
<p>How can I achieve this in Python using regex? I tried something like the line below, but it didn't work:</p>
<pre><code>word_trimmed_stage1 = re.sub('.*[^YYYYMMDDHHMMSS]$', '', filename)
</code></pre>
|
<python><string>
|
2023-01-02 16:24:56
| 5
| 880
|
BigD
|
74,984,700
| 14,895,107
|
why am i getting this cloudflare error on replit?
|
<p>I am getting this cloudflare error</p>
<p><a href="https://i.sstatic.net/PWsvy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PWsvy.png" alt="enter image description here" /></a></p>
<p>Things I have tried :</p>
<p>1.) running <code>kill 1</code> in shell</p>
<p>2.) regenerating token</p>
<p>3.) created a new application on discord developer portal</p>
<p>but it didn't work</p>
<p>also, I am using uptimerobot to keep my bot running</p>
|
<python><discord><discord.py><cloudflare><replit>
|
2023-01-02 16:16:15
| 0
| 903
|
Abhimanyu Sharma
|
74,984,548
| 8,280,171
|
how to read specific lines of list in yaml to python
|
<p>I have 2 files, first is <code>ip_list.yaml</code> which is</p>
<pre><code>globals:
hosted_zone: "test.com"
endpoint_prefix: defalt
ip_v4:
- 123
- 234
- 456
ip_v6:
- 123
- 234
- 345
</code></pre>
<p>and the other one is <code>network.py</code></p>
<pre><code># Creating IP rules sets
ip_set_v4 = wafv2.CfnIPSet(
self,
"IPSetv4",
addresses=[
# how do i parse the ip_v4 from ip_list.yaml
],
ip_address_version="IPV4",
name="ipv4-set",
scope="CLOUDFRONT",
)
ip_set_v6 = wafv2.CfnIPSet(
self,
"IPSetv6",
addresses=[
# how do i parse the ip_v6 from ip_list.yaml
],
ip_address_version="IPV6",
name="ipv6-set",
scope="CLOUDFRONT",
)
</code></pre>
<p>How do I read all the values under ip_v4 and ip_v6 from <code>ip_list.yaml</code> and put them into the respective <code>addresses</code> lists in <code>network.py</code> for ip_set_v4 and ip_set_v6 (where I put the comments in <code>network.py</code>)?</p>
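<p>For reference, the loader helper I had in mind looks roughly like this (a sketch assuming PyYAML; whether <code>ip_v4</code>/<code>ip_v6</code> sit at the top level or under <code>globals</code> depends on the real indentation of my file):</p>
<pre><code>import yaml

with open("ip_list.yaml") as f:
    config = yaml.safe_load(f)

ip_v4_list = config["ip_v4"]   # or config["globals"]["ip_v4"] if they are nested
ip_v6_list = config["ip_v6"]

ip_set_v4 = wafv2.CfnIPSet(
    self,
    "IPSetv4",
    addresses=ip_v4_list,   # values taken verbatim from the yaml
    ip_address_version="IPV4",
    name="ipv4-set",
    scope="CLOUDFRONT",
)
</code></pre>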
|
<python><yaml>
|
2023-01-02 16:02:17
| 1
| 705
|
Jack Rogers
|
74,984,387
| 2,803,488
|
How do I avoid global variables when using MQTT callbacks
|
<p>I am incrementing a global variable in my on_receive callback, to track how many messages have been received. I need to know this count for testing purposes.</p>
<p>Since global variables are generally considered a code smell, is there a way to avoid using global variables in this situation?</p>
<p>Here is my callback:</p>
<pre><code>def on_message_callback_v3( message_client, userdata, message ):
global global_message_received_count
with my_mutex:
global_message_received_count += 1
msg = str( message.payload.decode( "utf-8" ) )
print( f"▼▼ ON MESSAGE ▼▼" )
print( f" Message received for client: {message_client}" )
print( f" Message user data: {userdata}" )
print( f" Message topic: {message.topic}" )
print( f" Message body: {msg}" )
</code></pre>
<p>When all messages have been published, I compare <code>global_message_received_count</code> to the <code>published_count</code> (incremented elsewhere), to determine if all messages have been received. Since the signature of the callback is enforced by Paho, I cannot pass in or return variables.</p>
<p>I would like to avoid relying on the global <code>global_message_received_count</code>.</p>
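<p>The only alternative I have thought of so far is to keep the counter on an object and register a bound method as the callback, so no module-level global is needed (a sketch; I have not verified this is the recommended pattern):</p>
<pre><code>import paho.mqtt.client as mqtt

class MessageCounter:
    """Holds the received-message count instead of a global variable."""

    def __init__(self):
        self.received_count = 0

    def on_message(self, message_client, userdata, message):
        self.received_count += 1
        print(f"Message topic: {message.topic}")

counter = MessageCounter()
client = mqtt.Client()
client.on_message = counter.on_message   # a bound method still matches Paho's callback signature
</code></pre>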
|
<python><callback><mqtt><global><paho>
|
2023-01-02 15:46:22
| 1
| 455
|
Adam Howell
|
74,984,186
| 9,778,828
|
Pandas read_SQL response is supposed to be in Hebrew, but I get gibberish instead
|
<p>I have a python script in which I make an SQL query to my Teradata server.</p>
<p>I use the teradatasql Python library for that:</p>
<pre><code>import pandas
import teradatasql as tdSQL  # import implied by the tdSQL alias used below

conn = tdSQL.connect(logmech=logmech, host=host)
query = "SELECT * FROM table"
df = pandas.read_sql(query, conn)
</code></pre>
<p>And instead of getting the "Hebrew" column, I get the "Gibberish" column:</p>
<pre><code> Hebrew Gibberish hebrew_char2hexint
0 אילת àéìú E0E9ECFA
1 אשדוד àùãåã E0F9E3E5E3
2 אשקלון àùœìåï E0F9F7ECE5EF
3 באר שבע áàø ùáò E1E0F820F9E1F2
4 בית שמש áéú ùîù E1E9FA20F9EEF9
5 בני ברק áÐé áøœ E1F0E920E1F8F7
6 דימונה ãéîåÐä E3E9EEE5F0E4
7 המשולש דרום äîùåìù ãøåí E4EEF9E5ECF920E3F8E5ED
8 המשולש צפון äîùåìù öôåï E4EEF9E5ECF920F6F4E5EF
9 הרצליה äøöìéä E4F8F6ECE9E4
10 חדרה çãøä E7E3F8E4
11 חולון çåìåï E7E5ECE5EF
12 חיפה çéôä E7E9F4E4
13 חצור הגלילית çöåø äâìéìéú E7F6E5F820E4E2ECE9ECE9FA
14 טבריה èáøéä E8E1F8E9E4
15 יהוד éäåã E9E4E5E3
16 ירושלים éøåùìéí E9F8E5F9ECE9ED
17 כפר סבא ëôø ñáà EBF4F820F1E1E0
18 לא קיים ìà œééí ECE020F7E9E9ED
19 מגדל העמק îâãì äòîœ EEE2E3EC20E4F2EEF7
20 מודיעין îåãéòéï EEE5E3E9F2E9EF
21 מעלה אדומים îòìä àãåîéí EEF2ECE420E0E3E5EEE9ED
22 נהריה Ðäøéä F0E4F8E9E4
23 נתיבות Ðúéáåú F0FAE9E1E5FA
24 נתניה ÐúÐéä F0FAF0E9E4
25 עפולה òôåìä F2F4E5ECE4
26 פתח תקוה ôúç úœåä F4FAE720FAF7E5E4
27 קריות œøéåú F7F8E9E5FA
28 קרית גת œøéú âú F7F8E9FA20E2FA
29 קרית טבעון œøéú èáòåï F7F8E9FA20E8E1F2E5EF
30 קרית שמונה œøéú ùîåÐä F7F8E9FA20F9EEE5F0E4
31 ראשון לציון øàùåï ìöéåï F8E0F9E5EF20ECF6E9E5EF
32 רחובות øçåáåú F8E7E5E1E5FA
33 רמת גן øîú âï F8EEFA20E2EF
34 תל אביב יפו úì àáéá éôå FAEC20E0E1E9E120E9F4E5
</code></pre>
<p>Any ideas why this happens and how to solve it?</p>
<p>I did manage to partially solve the problem -</p>
<pre><code>df2.Gibberish[0].encode('ISO-8859-1').decode('ISO-8859-8')
</code></pre>
<p>returns:</p>
<pre><code>'אילת'
</code></pre>
<p>It also works with the second row. But when I try it on the third row:</p>
<pre><code>df2.Gibberish[2].encode('ISO-8859-1').decode('ISO-8859-8')
</code></pre>
<p>I get this error:</p>
<pre><code>UnicodeEncodeError: 'latin-1' codec can't encode character '\u0153' in position 2: ordinal not in range(256)
</code></pre>
<p>The only way I managed not to receive any errors is with the following encoding and decoding:</p>
<pre><code>df.Gibberish[index].encode('ISO-8859-15').decode('cp1255')
</code></pre>
<p>But the translation is not perfect:</p>
<pre><code>0 באר שבע
1 ב׀י בר½
2 דימו׀ה
3 אילת
4 בית שמש
5 המשולש דרום
6 המשולש צפון
7 הרצליה
8 אשדוד
9 אש½לון
10 חולון
11 חדרה
12 חיפה
13 חצור הגלילית
14 ׀הריה
15 ׀ת׀יה
16 ׀תיבות
17 יהוד
18 טבריה
19 ירושלים
20 כפר סבא
21 לא ½יים
22 מודיעין
23 מגדל העמ½
24 מעלה אדומים
25 ראשון לציון
26 רחובות
27 ½ריות
28 ½רית גת
29 ½רית טבעון
30 ½רית שמו׀ה
31 רמת גן
32 עפולה
33 פתח ת½וה
34 תל אביב יפו
</code></pre>
|
<python><sql><pandas><encoding><teradata>
|
2023-01-02 15:27:27
| 2
| 505
|
AlonBA
|
74,984,168
| 9,439,097
|
Pydantic generate json-schema enforcing field requirements
|
<p>I have the following model, where the field <code>b</code> is a list whose length is additionally enforced based on the value of field <code>a</code>.</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, validator
from enum import Enum
class A(BaseModel):
a: int
b: list[int]
@validator("b")
def check_b_length(cls, v, values):
assert len(v) == values["a"]
a = A(a=1, b=[1])
A.schema_json()
</code></pre>
<p>Generating the following JSON-schema:</p>
<pre class="lang-json prettyprint-override"><code>{
"title": "A",
"type": "object",
"properties": {
"a": {
"title": "A",
"type": "integer"
},
"b": {
"title": "B",
"type": "array",
"items": {
"type": "string"
}
}
},
"required": [
"a",
"b"
]
}
</code></pre>
<p>Is there any way to enforce the additional validation being done by pydantic in the <code>check_b_length</code> function in JSON-schema? I.e. this could possibly be done via <code>oneOf</code>, as in this question: <a href="https://stackoverflow.com/questions/9029524/json-schema-specify-field-is-required-based-on-value-of-another-field">JSON Schema - specify field is required based on value of another field</a> ... So it's possible to enforce in JSON-schema, and possible to enforce in pydantic, but how can you map them?</p>
|
<python><json><validation><jsonschema><pydantic>
|
2023-01-02 15:25:27
| 0
| 3,893
|
charelf
|
74,984,025
| 4,451,315
|
when / then / otherwise with values from numpy array
|
<p>Say I have</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({'group': [1, 1, 1, 3, 3, 3, 4, 4]})
</code></pre>
<p>I have a numpy array of values which I'd like to use to replace the rows where <code>'group'</code> is 3:</p>
<pre class="lang-py prettyprint-override"><code>values = np.array([9, 8, 7])
</code></pre>
<p>Expected result:</p>
<pre><code>shape: (8, 1)
┌───────┐
│ group │
│ --- │
│ i64 │
╞═══════╡
│ 1 │
│ 1 │
│ 1 │
│ 9 │
│ 8 │
│ 7 │
│ 4 │
│ 4 │
└───────┘
</code></pre>
<p>Here's what I've tried:</p>
<pre class="lang-py prettyprint-override"><code>(
df
.with_columns(
pl.when(pl.col('group')==3)
.then(values)
.otherwise(pl.col('group'))
).alias('group')
)
</code></pre>
<pre><code>ShapeError: shapes of `self`, `mask` and `other` are not suitable for `zip_with` operation
</code></pre>
<p>How can I do this correctly?</p>
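<p>For reference, a sketch of one workaround that sidesteps <code>when/then</code> entirely by building the new column with numpy and writing it back (it assumes the number of rows where <code>group == 3</code> equals <code>len(values)</code>):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import polars as pl

df = pl.DataFrame({'group': [1, 1, 1, 3, 3, 3, 4, 4]})
values = np.array([9, 8, 7])

new_col = df['group'].to_numpy().copy()   # copy: to_numpy() may return a read-only view
new_col[new_col == 3] = values            # positional assignment into the masked slots
df = df.with_columns(pl.Series('group', new_col))
</code></pre>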
|
<python><dataframe><python-polars>
|
2023-01-02 15:09:28
| 3
| 11,062
|
ignoring_gravity
|
74,983,997
| 16,665,831
|
Decoding json message inside of string
|
<p>I have the following decoding function;</p>
<pre class="lang-py prettyprint-override"><code>def flatten_data(json_data):
"""
Arguments:
json_data (dict): json data
Returns:
dict : {a:1, b:2, b_c:1, b_d:2}
"""
out = {}
def flatten(x, name=''):
if type(x) is dict:
for a in x:
flatten(x[a], name + a + '_')
elif type(x) is list:
out[name[:-1]] = x
else:
out[name[:-1]] = x
flatten(json_data)
return out
</code></pre>
<p>If I give the following input to this function (note that the <code>region</code> value is itself a JSON string):</p>
<pre class="lang-json prettyprint-override"><code>{
"id": "123",
"name": "Jack",
"createdAt": 20221212,
"region": '{"country": "USA", "city": "NewYork"}'
}
</code></pre>
<p>I need to get the following output:</p>
<pre class="lang-json prettyprint-override"><code>{
"id": "123",
"name": "Jack",
"createdAt": 20221212,
"region_country": "USA",
"region_city": 'NewYork'
}
</code></pre>
<p>How can I modify my <code>flatten_data</code> function?</p>
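<p>For reference, a sketch of one possible modification: before the existing type checks, try <code>json.loads</code> on string values and, when that yields a dict, recurse into it with the same prefix:</p>
<pre class="lang-py prettyprint-override"><code>import json

def flatten_data(json_data):
    out = {}

    def flatten(x, name=''):
        if isinstance(x, str):
            # If the string itself holds a JSON object, parse it and recurse.
            try:
                parsed = json.loads(x)
            except ValueError:
                parsed = None
            if isinstance(parsed, dict):
                flatten(parsed, name)
                return
        if isinstance(x, dict):
            for a in x:
                flatten(x[a], name + a + '_')
        else:
            out[name[:-1]] = x

    flatten(json_data)
    return out
</code></pre>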
|
<python><json><json-deserialization><jsondecoder>
|
2023-01-02 15:07:27
| 2
| 309
|
Ugur Selim Ozen
|
74,983,882
| 12,415,855
|
Change value of the input date-box using selenium?
|
<p>I would like to change / select the date to "12/03/2022" on this website:</p>
<p><a href="https://www.cyberarena.live/schedule-efootball" rel="nofollow noreferrer">https://www.cyberarena.live/schedule-efootball</a></p>
<p>with the following code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
if __name__ == '__main__':
    options = Options()
    options.add_experimental_option('excludeSwitches', ['enable-logging'])
    options.add_argument("start-maximized")
    options.add_argument('window-size=1920x1080')
    options.add_argument('--no-sandbox')
    options.add_argument('--disable-gpu')

    srv = Service(ChromeDriverManager().install())
    driver = webdriver.Chrome(service=srv, options=options)
    waitWD = WebDriverWait(driver, 10)

    link = "https://www.cyberarena.live/schedule-efootball"
    driver.get(link)

    tdayString = "12/03/2022"
    driver.execute_script(
        "arguments[0].setAttribute('value',arguments[1])",
        waitWD.until(EC.element_to_be_clickable((By.XPATH, "//input[contains(@id,'input_comp')]"))),
        tdayString,
    )

    input("Press!")
</code></pre>
<p>But nothing changed on the opened site - the current date is still selected after running this code.
How can I change the date in the input / select box?</p>
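<p>For reference, a sketch of one thing worth trying (not verified against this particular site): setting the attribute alone usually does not notify the page's JavaScript, so assigning <code>value</code> and then dispatching <code>input</code>/<code>change</code> events may help. The locator and date format are copied from the code above.</p>
<pre><code>date_input = waitWD.until(EC.element_to_be_clickable(
    (By.XPATH, "//input[contains(@id,'input_comp')]")))
driver.execute_script(
    """
    arguments[0].value = arguments[1];
    arguments[0].dispatchEvent(new Event('input', { bubbles: true }));
    arguments[0].dispatchEvent(new Event('change', { bubbles: true }));
    """,
    date_input,
    "12/03/2022",
)
</code></pre>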
|
<python><selenium>
|
2023-01-02 14:56:15
| 1
| 1,515
|
Rapid1898
|
74,983,829
| 1,973,005
|
Why is python's datetime conversion wrong in one direction?
|
<p>I am trying to convert a timezone aware <code>datetime.datetime</code> object to UTC and make it naive. For the conversion, I use the following code:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
import pytz
dt: datetime = datetime(2023, 1, 2, 12, 0, 0, tzinfo=pytz.timezone("Europe/Amsterdam"))
print(pytz.timezone("Europe/Amsterdam").utcoffset(dt=datetime.now()))
print(dt)
print(dt.astimezone(pytz.timezone("UTC")))
</code></pre>
<p>This outputs the following:</p>
<pre><code>1:00:00
2023-01-02 12:00:00+00:18
2023-01-02 11:42:00+00:00
</code></pre>
<p>For some reason, the time offset ends up being only 18 minutes instead of one hour. If I try to accomplish the opposite (from UTC to Europe/Amsterdam), it does work correctly:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
import pytz
dt: datetime = datetime(2023, 1, 2, 12, 0, 0, tzinfo=pytz.timezone("UTC"))
print(dt)
print(dt.astimezone(pytz.timezone("Europe/Amsterdam")))
</code></pre>
<p>Output (just as expected):</p>
<pre><code>2023-01-02 12:00:00+00:00
2023-01-02 13:00:00+01:00
</code></pre>
<p>Could anybody tell me why this is happening and how I could fix it? Or is there even a simpler method to convert to UTC and make the datetime naive?</p>
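<p>For reference, a sketch of the usual fix: attach pytz zones with <code>localize()</code> (or use <code>zoneinfo</code> on Python 3.9+) instead of passing them as <code>tzinfo=</code>, which picks the zone's earliest historical entry - for Amsterdam that is the old local mean time, shown as +00:18 above:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
import pytz

ams = pytz.timezone("Europe/Amsterdam")
dt = ams.localize(datetime(2023, 1, 2, 12, 0, 0))          # 2023-01-02 12:00:00+01:00
utc_naive = dt.astimezone(pytz.utc).replace(tzinfo=None)   # drop tzinfo to make it naive
print(utc_naive)  # 2023-01-02 11:00:00
</code></pre>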
|
<python><datetime><timezone><pytz>
|
2023-01-02 14:51:28
| 1
| 1,097
|
Axel Köhler
|
74,983,751
| 4,451,521
|
Using parameters for pytest when parameters depend on each other
|
<p>I am writing a pytest test for a library similar to this</p>
<pre><code>from mylibrary import do_some_calculation

def test_df_against_angle():
    df = load_some_df()
    angle = 30
    result = do_some_calculation(df, angle)
    assert result
</code></pre>
<p>Now, as you can see, that test only works for a particular dataframe and for a single angle (30).</p>
<p>I have to run this test for several dataframes and several angles.
To complicate matters, the angles I should use are different for each dataset.</p>
<p>So I have to test that</p>
<ul>
<li>For data_set1.csv I have to try angles 0,30,60</li>
<li>For data_set2.csv I have to try angles 90,120,150</li>
<li>For data_set3.csv I have to try angles 180,210,240</li>
</ul>
<p>So I am guessing that I have to use pytest's parametrization for that.
I know how to pass simple values as parameters (for example, I know how to parametrize over those three CSV files, and even how to put them in a JSON file and read it into the test), but I am at a loss as to how to combine several kinds of parameters when they depend on each other.</p>
<p>Ideally, I would also like to put this in <code>conftest.py</code>.</p>
<p>Can someone give me some pointers on how to do this?</p>
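<p>For reference, a sketch of one direction using <code>pytest_generate_tests</code> in <code>conftest.py</code> (the file names, the mapping, and the fixture names <code>csv_path</code>/<code>angle</code> are illustrative, and it assumes <code>load_some_df</code> can take a path):</p>
<pre><code># conftest.py
ANGLES_BY_DATASET = {
    "data_set1.csv": [0, 30, 60],
    "data_set2.csv": [90, 120, 150],
    "data_set3.csv": [180, 210, 240],
}

def pytest_generate_tests(metafunc):
    # Parametrize any test that asks for both 'csv_path' and 'angle'.
    if "csv_path" in metafunc.fixturenames and "angle" in metafunc.fixturenames:
        params = [(csv, angle)
                  for csv, angles in ANGLES_BY_DATASET.items()
                  for angle in angles]
        metafunc.parametrize(("csv_path", "angle"), params)
</code></pre>
<p>The test then just takes the two arguments:</p>
<pre><code>from mylibrary import do_some_calculation

def test_df_against_angle(csv_path, angle):
    df = load_some_df(csv_path)
    assert do_some_calculation(df, angle)
</code></pre>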
|
<python><pytest>
|
2023-01-02 14:45:17
| 2
| 10,576
|
KansaiRobot
|