| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (stringdate) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
74,761,115
| 8,489,602
|
Collapse identical rows / columns in pandas DataFrame to intervals
|
<p>Let's imagine a DataFrame with some patterns that do intersect.<br />
A sample code for a <a href="https://stackoverflow.com/help/minimal-reproducible-example">minimal reproducible example</a>:</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
from random import randint as ri
from random import choice
get_pattern = lambda: [i for ii in [[choice([True,False])]*ri(2,6) for _ in range(0,5)] for i in ii]
patterns = [get_pattern()[:15] for _ in range(0,3)]
df = pd.DataFrame({
'A': patterns[0],
'B': patterns[1],
'C': patterns[2]
}, index = pd.interval_range(start=0, end=15, closed='both')).T.replace(False,'')
</code></pre>
<h5>Output:</h5>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">[0, 1]</th>
<th style="text-align: right;">[1, 2]</th>
<th style="text-align: right;">[2, 3]</th>
<th style="text-align: right;">[3, 4]</th>
<th style="text-align: right;">[4, 5]</th>
<th style="text-align: right;">[5, 6]</th>
<th style="text-align: right;">[6, 7]</th>
<th style="text-align: right;">[7, 8]</th>
<th style="text-align: right;">[8, 9]</th>
<th style="text-align: right;">[9, 10]</th>
<th style="text-align: right;">[10, 11]</th>
<th style="text-align: right;">[11, 12]</th>
<th style="text-align: right;">[12, 13]</th>
<th style="text-align: right;">[13, 14]</th>
<th style="text-align: right;">[14, 15]</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">A</td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: right;">B</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
</tr>
<tr>
<td style="text-align: right;">C</td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
</tr>
</tbody>
</table>
</div>
<p>I used <code>random</code> on purpose to produce different results for the question. I also transposed the table so it fits better and uses less vertical space; the <code>False</code> values are removed for a better visual representation.<br />
In this particular case one can observe identical columns:</p>
<ul>
<li>from 0 to 4</li>
<li>from 4 to 8</li>
<li>from 8 to 12</li>
<li>from 12 to 15</li>
</ul>
<p>The question is how to collapse those identical columns (or rows without transpose) and change the corresponding intervals.</p>
<p>After some searching, the only solution I've come up with is iterating through the rows / columns and searching for duplicates, then assigning the left and right interval for a newly created DataFrame.</p>
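<p>Roughly, that iterative idea looks like this sketch (untested, continuing from the DataFrame above):</p>
<pre class="lang-python prettyprint-override"><code>cols = list(df.columns)
new_cols, new_data = [], []
start = 0
for i in range(1, len(cols) + 1):
    # close the current run when the next column differs from the run's first column (or at the end)
    if i == len(cols) or not df[cols[i]].equals(df[cols[start]]):
        new_cols.append(pd.Interval(cols[start].left, cols[i - 1].right, closed='both'))
        new_data.append(df[cols[start]])
        start = i
collapsed = pd.concat(new_data, axis=1)
collapsed.columns = pd.IntervalIndex(new_cols)
</code></pre>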
<p>I suppose there might be a more elegant solution to get this:</p>
<pre class="lang-python prettyprint-override"><code>intervals = [
pd.Interval(left=0, right=4, closed='both'),
pd.Interval(left=4, right=8, closed='both'),
pd.Interval(left=8, right=12, closed='both'),
pd.Interval(left=12, right=15, closed='both')
]
pd.DataFrame({
'A': [False,False,True,False],
'B': [True,True,False,True],
'C': [False,True,True,False]
}, pd.IntervalIndex(intervals)).T.replace(False,'')
</code></pre>
<h5>Output</h5>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">[0, 4]</th>
<th style="text-align: right;">[4, 8]</th>
<th style="text-align: right;">[8, 12]</th>
<th style="text-align: right;">[12, 15]</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">A</td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;">True</td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: right;">B</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;"></td>
<td style="text-align: right;">True</td>
</tr>
<tr>
<td style="text-align: right;">C</td>
<td style="text-align: right;"></td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;"></td>
</tr>
</tbody>
</table>
</div>
|
<python><pandas><intervals>
|
2022-12-11 13:12:26
| 1
| 661
|
DiMithras
|
74,761,083
| 815,612
|
How can I have subcommands with a "fallthrough" behavior?
|
<p>I have a program which is basically a kind of web client, so mostly the user will be running it with a single parameter, a URL:</p>
<pre><code>myclient https://example.com
</code></pre>
<p>But occasionally the user might want to configure the program using a subcommand:</p>
<pre><code>myclient configure-this ...
myclient configure-that ...
myclient configure-other ...
</code></pre>
<p>So I'd like to use subparsers to handle the subcommands, but I want a kind of "fallthrough" where if the first argument isn't recognized as any subcommand, the script just assumes it's a URL.</p>
<p>Can this be done using <code>argparse</code>?</p>
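<p>The closest workaround I have so far is pre-processing <code>sys.argv</code> by hand before handing it to <code>argparse</code>. A sketch of that idea (the <code>fetch</code> subcommand name is just a placeholder I invented):</p>
<pre><code>import argparse
import sys

SUBCOMMANDS = {"configure-this", "configure-that", "configure-other"}

argv = sys.argv[1:]
# If the first argument is not a known subcommand, treat it as a URL by
# rewriting argv as if a hidden "fetch" subcommand had been given.
if argv and argv[0] not in SUBCOMMANDS and not argv[0].startswith("-"):
    argv = ["fetch"] + argv

parser = argparse.ArgumentParser()
sub = parser.add_subparsers(dest="command")
fetch = sub.add_parser("fetch")
fetch.add_argument("url")
for name in SUBCOMMANDS:
    sub.add_parser(name)

args = parser.parse_args(argv)
</code></pre>
<p>But this feels like working around <code>argparse</code> rather than with it.</p>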
|
<python><argparse>
|
2022-12-11 13:07:53
| 0
| 6,464
|
Jack M
|
74,761,075
| 4,865,723
|
Drop empty categories in sub groups using groupby in pandas?
|
<p>I have a resulting table</p>
<pre><code>Year mycat
2019 A 2
B 1
2020 A 0
B 1
</code></pre>
<p>In the 3rd row (<code>2020, A</code>) you see zero. I want to get rid of lines like this.</p>
<pre><code>Year mycat
2019 A 2
B 1
2020 B 1
</code></pre>
<p>How can I do this? Is there a way to let pandas handle that without "hacking" the resulting table after I've done <code>.groupby().size()</code>?</p>
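<p>By "hacking" I mean filtering the result after the fact, something like this sketch:</p>
<pre><code>counts = df.groupby(['Year', 'mycat']).size()
counts = counts[counts > 0]
</code></pre>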
<p>Here is the full code:</p>
<pre><code>>>> import pandas as pd
>>> df = pd.DataFrame({'Year': [2019, 2019, 2019, 2020], 'mycat': list('AABB')})
>>> df.mycat = df.mycat.astype('category')
>>> df
Year mycat
0 2019 A
1 2019 A
2 2019 B
3 2020 B
>>> df.groupby(['Year', 'mycat']).size()
Year mycat
2019 A 2
B 1
2020 A 0
B 1
dtype: int64
</code></pre>
|
<python><pandas>
|
2022-12-11 13:06:46
| 1
| 12,450
|
buhtz
|
74,761,068
| 4,613,465
|
Caching tensorflow dataset before iteration results in empty dataset
|
<p>I'm using the <code>from_generator</code> method of the Dataset API in TensorFlow.
However, when I iterate through the dataset (to count the samples) with caching enabled, the dataset is empty:</p>
<pre class="lang-py prettyprint-override"><code>output_signature = (tensorflow.TensorSpec(shape=(64, 64, 2), dtype=tensorflow.float32),
tensorflow.TensorSpec(shape=(), dtype=tensorflow.int16))
dataset = tensorflow.data.Dataset.from_generator(MSUMFSDFrameGenerator(pathlib.Path(dataset_locations["mfsd"]), True), output_signature=output_signature)
dataset = dataset.repeat().prefetch(buffer_size=tensorflow.data.AUTOTUNE).cache(filename)
size = dataset.reduce(0, lambda x,_: x+1).numpy()
</code></pre>
<p>But when I cache the dataset <strong>after</strong> iteration, the dataset seems to be fine (but isn't cached then and needs regeneration for training again):</p>
<pre class="lang-py prettyprint-override"><code>output_signature = (tensorflow.TensorSpec(shape=(64, 64, 2), dtype=tensorflow.float32),
tensorflow.TensorSpec(shape=(), dtype=tensorflow.int16))
dataset = tensorflow.data.Dataset.from_generator(MSUMFSDFrameGenerator(pathlib.Path(dataset_locations["mfsd"]), True), output_signature=output_signature)
dataset = dataset.repeat().prefetch(buffer_size=tensorflow.data.AUTOTUNE)
size = dataset.reduce(0, lambda x,_: x+1).numpy()
dataset = dataset.cache(filename)
</code></pre>
<p>Do you know how to iterate a cached dataset (to obtain the number of samples)?</p>
|
<python><tensorflow><machine-learning><caching><dataset>
|
2022-12-11 13:05:35
| 0
| 772
|
Fatorice
|
74,760,976
| 6,232,816
|
List of product categories to a dict of product categories
|
<p>I would like to create a product category representation as a python dictionary.
The input is the following list of product categories, represented as list of lists:</p>
<pre><code>categories_list = [
['computer', 'laptop'],
['computer', 'desktop'],
['computer', 'processor', 'Intel'],
['computer', 'processor', 'AMD'],
['computer', 'accessories', 'mouse'],
['computer', 'accessories', 'keyboard'],
['controller', 'raspberryPI'],
['controller', 'Arduino'],
['controller', 'accessories'],
['other'],
['electronics', 'other']
]
</code></pre>
<p>I would like to turn the list into a dictionary like this:</p>
<pre><code>categories_dict = {
'computer': {'laptop': None,
'desktop': None,
'processor': {'Intel': None,
'AMD': None},
'accessories': {'mouse': None,
'keyboard': None}},
'controller': {'raspberryPI': None,
'Arduino': None,
'accessories': None},
'other': None,
'electronics': {'other': None}
}
</code></pre>
<p>This represents the hierarchy. The main levels are: computer, controller, other, electronics.</p>
<p>Some of these have children, and some of those children also have children of their own.<br>
For example, the children of computer are:<br>
<strong>laptop, desktop, processor, accessories</strong><br>
and the processor and accessories also have sub-children:<br>
<strong>Intel, AMD</strong> - for processor<br>
<strong>mouse, keyboard</strong> - for accessories</p>
<p>None indicates that there are no further children/subcategories.</p>
<p>My code so far creates some tree structure, but this is still not what I need.</p>
<pre><code>def create_cat_dict():
dict_tree = {}
main_name = ""
for cat_list in all_categories:
for index, cat_name in enumerate(cat_list):
if index == 0 and cat_name not in dict_tree:
main_name = cat_name
dict_tree[cat_name] = {}
if index > 0 and cat_name not in dict_tree[main_name].keys():
dict_tree[main_name][cat_name] = None
print(dict_tree)
</code></pre>
<p>It creates this dictionary, which is still not correct:</p>
<pre><code>{'computer': {'AMD': None,
'Intel': None,
'accessories': None,
'desktop': None,
'keyboard': None,
'laptop': None,
'mouse': None,
'processor': None},
'controller': {'Arduino': None,
'accessories': None,
'raspberryPI': None},
'electronics': {'other': None},
'other': {}}
</code></pre>
<p>Under <strong>computer>processor should be {'Intel': None, 'AMD': None}</strong> <br>
Under <strong>computer>accessories should be {'mouse': None, 'keyboard': None}</strong></p>
<p>I would like to ask for help on how to read all the categories, even if a category is 4, 5 or 6 levels deep.</p>
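<p>For reference, this is roughly the direction I am experimenting with now - walking each path and creating nested dicts on the way down (untested sketch):</p>
<pre><code>def build_tree(categories):
    tree = {}
    for path in categories:
        node = tree
        for i, name in enumerate(path):
            is_leaf = (i == len(path) - 1)
            # create the key; a leaf stays None, an inner node becomes a dict
            if name not in node or node[name] is None:
                node[name] = None if is_leaf else {}
            if not is_leaf:
                node = node[name]
    return tree

categories_dict = build_tree(categories_list)
</code></pre>
<p>But I am not sure this handles every ordering of the input lists correctly.</p>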
|
<python>
|
2022-12-11 12:52:03
| 2
| 469
|
Attila Toth
|
74,760,884
| 4,169,571
|
Minimum per row in numpy array
|
<p>I have a numpy array and want to calculate the minimum in each row:</p>
<pre><code>import numpy as np
data=np.array([[ 9.052878e+07, 1.666794e+08, 9.783935e+07, 7.168723e+07],
[ 1.033552e+04, 1.902951e+04, 1.117015e+04, 8.184407e+03],
[ 1.000000e+15, 5.740625e+15, 3.419288e+15, 2.549149e+15],
[ 1.000000e+15, 5.740625e+15, 3.419288e+15, 2.549149e+15]])
print(np.min(data))
#8184.407
</code></pre>
<p><code>np.min(data)</code> calculates the total minimum and not row-wise.</p>
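<p>From the docs I suspect the <code>axis</code> argument is what I am missing, something like this sketch:</p>
<pre><code>row_mins = np.min(data, axis=1)   # one minimum per row
# expected: array([7.168723e+07, 8.184407e+03, 1.000000e+15, 1.000000e+15])
</code></pre>
<p>but I am not sure whether <code>axis=1</code> is the right choice here.</p>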
|
<python><numpy><numpy-ndarray>
|
2022-12-11 12:38:49
| 2
| 817
|
len
|
74,760,856
| 6,587,020
|
How to extract <p> matching specific text considering it also has <b>?
|
<p>How can I find a paragraph HTML element that has a bold element inside? The bold element changes: it can be Michael and then Luis.</p>
<pre><code>from bs4 import BeautifulSoup
import re
html = "<p>Hello<b>Michael</b></p>"
# it could be "<p>Hello<b>Luis</b></p>"
soup = BeautifulSoup(html, 'html.parser')
testing = soup.find('p', text=re.compile(r"Hello"))
</code></pre>
<p>The <code>testing</code> variable is <code>None</code>.</p>
<p>Consider that the HTML has tons of paragraphs, so I can't simply do a <code>soup.find_all("p")</code>.</p>
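<p>One thing I have started trying is matching on the combined text with a callable instead of the <code>text</code> argument (sketch):</p>
<pre><code>testing = soup.find(lambda tag: tag.name == "p" and "Hello" in tag.get_text())
</code></pre>
<p>but I am not sure this is the intended way to do it.</p>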
|
<python><html><web-scraping><beautifulsoup>
|
2022-12-11 12:33:49
| 1
| 333
|
dank
|
74,760,802
| 4,451,315
|
Percentage of total by group
|
<p>Say I start with:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: import polars as pl
In [2]: df = pl.DataFrame({
'group1': ['a', 'a', 'b', 'c', 'a', 'b'],
'group2': [0, 1, 1, 0, 1, 1]
})
In [3]: df
Out[3]:
shape: (6, 2)
ββββββββββ¬βββββββββ
β group1 β group2 β
β --- β --- β
β str β i64 β
ββββββββββͺβββββββββ‘
β a β 0 β
β a β 1 β
β b β 1 β
β c β 0 β
β a β 1 β
β b β 1 β
ββββββββββ΄βββββββββ
</code></pre>
<p>I'd like to get, for each <code>group1</code>, the distribution of <code>group2</code>.</p>
<p>My desired outcome is:</p>
<pre><code>shape: (4, 4)
ββββββββββ¬βββββββββ¬ββββββββ¬βββββββββββββ
β group1 β group2 β count β percentage β
β --- β --- β --- β --- β
β str β i64 β u32 β f64 β
ββββββββββͺβββββββββͺββββββββͺβββββββββββββ‘
β a β 0 β 1 β 0.333333 β
β a β 1 β 2 β 0.666667 β
β b β 1 β 2 β 1.0 β
β c β 0 β 1 β 1.0 β
ββββββββββ΄βββββββββ΄ββββββββ΄βββββββββββββ
</code></pre>
<hr />
<p>Here's one way I've found to do it - is there a more idiomatic way in polars?</p>
<pre class="lang-py prettyprint-override"><code>counts = df.group_by('group1', 'group2').len()
counts.with_columns(
(
counts['len']
/ counts.select(pl.col('len').sum().over('group1'))['len']
).alias('percentage')
).sort('group1', 'group2')
</code></pre>
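<p>The closest single-expression version I have managed is with a window expression (sketch; I am not sure it is the recommended idiom):</p>
<pre class="lang-py prettyprint-override"><code>out = (
    df.group_by('group1', 'group2')
      .len()
      .with_columns(
          (pl.col('len') / pl.col('len').sum().over('group1')).alias('percentage')
      )
      .sort('group1', 'group2')
)
</code></pre>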
|
<python><dataframe><python-polars>
|
2022-12-11 12:26:03
| 1
| 11,062
|
ignoring_gravity
|
74,760,548
| 3,394,220
|
Ubuntu 22.04 python3.10 and boost 1.80 compile warning
|
<p>After migrating our code to support Ubuntu 22.04 with Python 3.10 and the latest stable Boost 1.80, I cannot find the reason/solution for the compile warning below.</p>
<ul>
<li>Ubuntu 18.04 with Boost 1.76 and Python 3.6 was clean, without errors/warnings</li>
</ul>
<p>Boost is built from source using this configuration:</p>
<pre><code>./bootstrap.sh --with-python=/usr/bin/python3.10
./b2 --prefix=/tmp/ --with-python
</code></pre>
<p>The error is caused by including</p>
<pre><code>#include <boost/python.hpp>
</code></pre>
<p>which produces the following warning:</p>
<pre><code>/mnt/fs/users/iliak/system_clean/Strategies/../External/include/boost/python/call_method.hpp:61:26: warning: βPyObject* PyEval_CallMethod(PyObject*, const char*, const char*, ...)β is deprecated [-Wdeprecated-declarations]
61 | PyObject* const result =
| ~~~~~~~~~~~~~~~~~^~~
62 | PyEval_CallMethod(
| ~~~~~~~~~~~~~~~~~~
63 | self
| ~~~~
64 | , const_cast<char*>(name)
| ~~~~~~~~~~~~~~~~~~~~~~~~~
65 | , const_cast<char*>("(" BOOST_PP_REPEAT_1ST(N, BOOST_PYTHON_FIXED, "O") ")")
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
66 | BOOST_PP_REPEAT_1ST(N, BOOST_PYTHON_FAST_ARG_TO_PYTHON_GET, nil)
| ~
In file included from /usr/include/python3.10/Python.h:130,
from /mnt/fs/users/iliak/system_clean/Strategies/../External/include/boost/python/detail/wrap_python.hpp:178,
from /mnt/fs/users/iliak/system_clean/Strategies/../External/include/boost/python/detail/prefix.hpp:13,
from /mnt/fs/users/iliak/system_clean/Strategies/../External/include/boost/python/args.hpp:8,
from /mnt/fs/users/iliak/system_clean/Strategies/../External/include/boost/python.hpp:11,
from /mnt/fs/users/iliak/system_clean/Strategies/../Engine/CandlesManager.h:7:
</code></pre>
|
<python><c++><boost><python-3.10>
|
2022-12-11 11:50:58
| 1
| 554
|
Ilia
|
74,760,516
| 12,725,674
|
Extract pdf pages which contains an image
|
<p>I have a number of PDF files in which some pages contain images and others do not. Is there a way to extract only those pages of the PDF which contain an image? Unfortunately, it's not sufficient to simply extract the images. I need to make sure that the whole page that contains the image is extracted.</p>
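<p>One library I have been looking at is PyMuPDF; an untested sketch of what I imagine (assuming <code>page.get_images()</code> is the right way to detect images on a page):</p>
<pre><code>import fitz  # PyMuPDF

src = fitz.open("input.pdf")
out = fitz.open()
for i, page in enumerate(src):
    if page.get_images(full=True):            # page references at least one image
        out.insert_pdf(src, from_page=i, to_page=i)
out.save("pages_with_images.pdf")
</code></pre>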
|
<python><pdf>
|
2022-12-11 11:45:43
| 1
| 367
|
xxgaryxx
|
74,760,492
| 5,991,801
|
Convert a string, representing a list of timestamps, to list of timestamps
|
<p>I have a string input representing a list of timestamps.</p>
<p>An example of such string is shown below:</p>
<pre><code>'[Timestamp('2022-08-13 17:25:00'), Timestamp('2022-08-13 18:02:00'), Timestamp('2022-08-13 18:46:00')]'
</code></pre>
<p>I need to convert it to a list of timestamps in python.</p>
<p>How can I do that the right way?</p>
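<p>The only thing I have come up with so far is a regex-based sketch (untested):</p>
<pre><code>import re
import pandas as pd

s = "[Timestamp('2022-08-13 17:25:00'), Timestamp('2022-08-13 18:02:00'), Timestamp('2022-08-13 18:46:00')]"
timestamps = [pd.Timestamp(t) for t in re.findall(r"Timestamp\('([^']+)'\)", s)]
</code></pre>
<p>Is there a cleaner way?</p>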
|
<python><pandas><datetime><timestamp>
|
2022-12-11 11:42:25
| 3
| 598
|
HoseinPanahi
|
74,760,440
| 859,227
|
Appending dataframes from right column
|
<p>In the following code, in every iteration a CSV file is read as a data frame and concatenated to a result data frame (initially empty) from the right.</p>
<pre><code>result = pd.DataFrame()
for bench in benchmarks:
df = read_raw(bench)
print(df)
result = pd.concat([result, df], axis=1, join="inner")
print(result.info())
</code></pre>
<p>As you can see below, <code>df</code> is not empty, but the final result is empty.</p>
<pre><code>$ python3 test.py
launch
0 1
1 524
2 3611
3 3611
4 169
... ...
92515 143
92516 138
92517 169
92518 138
92519 1048
[92520 rows x 1 columns]
<class 'pandas.core.frame.DataFrame'>
Index: 0 entries
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 M1 0 non-null int64
</code></pre>
<p>How can I fix that?</p>
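<p>The only alternative I have thought of is collecting the frames in a list and concatenating once at the end (sketch), but I still don't understand why the loop version ends up empty:</p>
<pre><code>frames = [read_raw(bench) for bench in benchmarks]
result = pd.concat(frames, axis=1, join="inner")
</code></pre>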
|
<python><pandas><dataframe>
|
2022-12-11 11:34:36
| 1
| 25,175
|
mahmood
|
74,760,252
| 8,106,583
|
Selenium can't find file from '/tmp/' directory to populate file-input
|
<p>I want to upload a file using the selenium Firefox driver. The file is dynamically generated and does not need to persist so I would like to place it under <code>/tmp/</code>.</p>
<p>While selecting files from my <code>home</code> directory works perfectly fine:</p>
<pre><code>from selenium import webdriver
from pathlib import Path
driver = webdriver.Firefox()
driver.get("https://viljamis.com/filetest/")
Path(f"{Path.home()}/my_test_file").touch()
driver.find_element('xpath', '//input[@type="file"]').send_keys(f"{Path.home()}/my_test_file")
</code></pre>
<p>doing the same with a file located at <code>/tmp/my_test_file</code> like so:</p>
<pre><code>from selenium import webdriver
from pathlib import Path
driver = webdriver.Firefox()
driver.get("https://viljamis.com/filetest/")
Path(f"/tmp/my_test_file").touch()
driver.find_element('xpath', '//input[@type="file"]').send_keys(f"/tmp/my_test_file")
</code></pre>
<p>results in the following crude error message:</p>
<pre><code>Traceback (most recent call last):
File "/home/username/Downloads/test.py", line 12, in <module>
driver.find_element('xpath', '//input[@type="file"]').send_keys(f"/tmp/my_test_file")
File "/home/username/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webelement.py", line 233, in send_keys
self._execute(
File "/home/username/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webelement.py", line 410, in _execute
return self._parent.execute(command, params)
File "/home/username/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "/home/username/.local/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.InvalidArgumentException: Message: File not found: /tmp/my_test_file
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5
InvalidArgumentError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:311:5
interaction.uploadFiles@chrome://remote/content/marionette/interaction.sys.mjs:537:13
</code></pre>
<p>which is strange because the file definitely does exist.</p>
<p>I can't find a way to make it work. Why is there a difference when selecting files from <code>/tmp/</code>?</p>
<p>The access-rights are both times <code>-rw-rw-r--</code> and manually selecting the file by clicking the "browse..." button and selecting the file also works perfectly fine.</p>
<p><strong>Edit:</strong>
I am running a fairly fresh installation of Kubuntu 22.04.1</p>
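<p>In the meantime I am considering simply avoiding <code>/tmp/</code> altogether, roughly like this sketch, but I would still like to understand the difference:</p>
<pre><code>import tempfile
from pathlib import Path
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://viljamis.com/filetest/")

# create the temporary file somewhere the browser can definitely see,
# e.g. under the home directory
with tempfile.NamedTemporaryFile(dir=Path.home(), delete=False, suffix=".txt") as tmp:
    upload_path = tmp.name

driver.find_element('xpath', '//input[@type="file"]').send_keys(upload_path)
</code></pre>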
|
<python><selenium><selenium-webdriver><file-access>
|
2022-12-11 11:04:15
| 1
| 2,437
|
wuerfelfreak
|
74,760,205
| 1,312,718
|
How to get ONLY magic methods for given class/object in Python?
|
<p>I wonder what check I should make to determine whether a method is magic? Not just dunder (two underscores before and after): ONLY and ONLY magic methods. To be more specific, methods that do not go through <code>__getattr__</code>.</p>
<p>This is my current solution, but I want to simplify the set of <code>__ignore_attributes__</code> and make this solution more robust.</p>
<pre><code>class ParameterizedMeta(ABCMeta):
@_tp_cache
def __getitem__(cls, *values):
# methods have default implementation not places here
values = cls.__subclass_parseargs__(*values)
parameters = cls.__subclass_parameters__(*values)
name = cls.__subclass_name__(parameters)
bases = cls.__subclass_bases__(parameters)
subclass = type(name, bases, parameters)
subclass.__subclass_post_init__()
return subclass
__ignore_attributes__ = {
"__class__",
"__mro__",
"__new__",
"__init__",
"__setattr__",
"__getattr__",
"__delattr__",
"__getattribute__",
"__eq__",
"__ne__",
"__init_subclass__",
"__class_getitem__",
"__subclasshook__",
"__module__",
"__name__",
"__qualname__",
"__dict__",
"__slots__",
"__size__",
"__sizeof__",
"__weakref__",
"__abstractmethods__",
"__orig_class__",
"__post_init__",
"__type__",
...
}
__force_attributes__ = [
"__hash__",
"__str__",
"__repr__",
]
__proxy_meta_bypass__ = {"__type__", ...}
class ProxyMeta(ParameterizedMeta):
__type__: type[Serializable]
def __getattr__(self, item) -> Any:
# If requested attribute is one of predefined prevent infinite recursion
if item in __proxy_meta_bypass__ :
raise AttributeError
return getattr(self.__type__, item)
def __subclass_post_init__(self):
if not isinstance(self.__type__, type):
return
attributes = set(dir(self.__type__) + __force_attributes__)
def delegate(attribute: str):
def proxy(s, *args, **kwargs):
func = getattr(s.__value__, attribute)
return func(*args, **kwargs)
proxy.__name__ = str(attributes)
return proxy
def check_is_magic(attribute: str):
if not attribute.startswith("__") or not attribute.endswith("__"):
return False
if attribute in __ignore_attributes__:
return False
value = getattr(self.__type__, attribute)
if not callable(value):
return False
return True
for name in filter(check_is_magic, attributes):
setattr(self, name, delegate(name))
</code></pre>
|
<python><reflection>
|
2022-12-11 10:57:03
| 0
| 1,798
|
mblw
|
74,760,184
| 13,636,407
|
manim copy and property propagation (rectangle grid_xstep)
|
<p>I'm starting with manim (<code>Manim Community v0.17.1</code>) and I get behaviors that I can't explain in a very basic example; any help appreciated.</p>
<pre class="lang-py prettyprint-override"><code>class SquareScene(Scene):
def construct(self):
kwargs = {"fill_opacity": 1, "stroke_color": WHITE}
square1 = Square(5, color=RED, grid_xstep=1, grid_ystep=1, **kwargs)
square2 = square1.copy().set(color=BLUE, **kwargs).shift(UP + RIGHT)
self.add(square2, square1)
</code></pre>
<ul>
<li><code>square1</code> is displayed without a grid, I don't know why</li>
<li><code>square2</code> does have a grid</li>
</ul>
<p>EDIT: I managed to solve the problem by setting <code>fill_color=RED</code> instead of <code>color=RED</code>. As I noticed, the grid was there but also <code>RED</code>, hence not visible.</p>
<p>Still, it doesn't explain why the <code>BLUE</code> square's grid color was <code>WHITE</code> then.</p>
<p><a href="https://i.sstatic.net/0qVcG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0qVcG.png" alt="rectangles" /></a></p>
|
<python><manim>
|
2022-12-11 10:54:01
| 1
| 3,572
|
paime
|
74,760,102
| 12,902,027
|
How can I invoke python' class dunder method in C?
|
<p>I'm writing a Python module using the C extension API, and I want to implement insert() and is_greater() methods.
Both methods should take an argument of int type or any user-defined object which has implemented the __gt__() method.
To make it simple, I need an interface like below.</p>
<pre class="lang-py prettyprint-override"><code>mc = MyClass()
mc.insert(1)
isGreater = mc.is_greater(2)
</code></pre>
<pre class="lang-py prettyprint-override"><code>mc = MyClass()
obj1 = MyObject(1)
mc.insert(obj1)
obj2 = MyObject(2)
isGreater = mc.is_greater(obj2)
</code></pre>
<p>I'm using the PyArg_ParseTuple function to parse the variable into a raw C pointer, like below.</p>
<pre class="lang-c prettyprint-override"><code>typedef int(* COMPARE_FUNCP)(PyObject * , PyObject * );
typedef struct
{
PyObject_HEAD
PyObject * objp;
COMPARE_FUNCP * is_greater;
} MyClassObject;
static PyObject *
myClass_insert(MyClassObject * self, PyObject * args)
{
PyObject * objp1;
if (!PyArg_ParseTuple(args, "O", &objp1))
{
return NULL;
}
self->objp = objp1;
// here I want to register MyObject class's __gt__() method.
self->is_greater =
}
static PyObject *
myClass_compare(MyClassObject * self, PyObject * args)
{
PyObject * objp2;
if (!PyArg_ParseTuple(args, "O", &objp2))
{
return NULL;
}
if (self->is_greater(self->objp, objp2)>0)
return Py_True;
}
</code></pre>
<p>I guess I need to register the __gt__() method in C somehow, but how can I do this?</p>
<p>Someone please give me a clue. Thank you.</p>
|
<python><c><python-c-api>
|
2022-12-11 10:42:15
| 0
| 301
|
agongji
|
74,759,976
| 12,906,445
|
Using ffmpeg in Swift code using PythonKit
|
<p>I have a Python script that uses <code>ffmpeg</code>. I installed it using <code>brew install ffmpeg</code>. If I run the Python script in a Python IDE it runs perfectly.</p>
<p>But I wanted to run this python script on my macOS app, so using PythonKit I tried running it:</p>
<pre><code>static func make_audio(dict: [String: String]) {
let sys = Python.import("sys")
let dirPath = "/Users/seungjunlee/PycharmProjects/pythonProject8"
sys.path.append(dirPath)
let Media = Python.import("audio")
Media.make_audio(PythonObject(dict))
}
</code></pre>
<p>But if I run the above function it keeps saying it cannot find <code>ffmpeg</code>. So I installed ffmpeg again in the current directory (the macOS app project), but I'm still getting the error.</p>
<p>I already spent several hours fixing the <code>can't find ffmpeg</code> bug that appeared when running the Python script in a Python IDE. The solution was installing it using the command <code>brew install ffmpeg</code>. But now it is appearing again when I try to run the script from Swift code.</p>
<p>Can anyone help me with this? Thanks.</p>
|
<python><swift><swift-pythonkit>
|
2022-12-11 10:24:36
| 0
| 1,002
|
Seungjun
|
74,759,968
| 10,718,214
|
convert json to table and avoid memory error
|
<p>I have a large JSON file (38 GB) with this structure:</p>
<pre><code>{"Text": [
{
"a" : 6,
"b" : 2022,
"c" : 11,
"d" : "2022-11-24",
"e" : "567",
"f" : "ww",
"i" : "00",
"j" : 4,
"k" : "y",
"l" : null,
"m" : 3,
"n" : 7,
"o" : "54",
"b" : null,
"q" : "yes",
"r" : 77,
"c" : "yes",
"t" : 6,
"y" : 8,
"v" : "yy",
"w" : "yy",
"x" : "o",
"y" : "100 ",
"z" : "r"
},{...},{....}
]}
</code></pre>
<p>So I have created this code to convert the JSON to a DataFrame, but I get a memory error.</p>
<blockquote>
<pre><code> df=pd.DataFrame([])
with open('data.json',encoding='utf-8-sig') as f:
l=f.read()
c=json.loads(l)
v=list(c.keys())[0]
data=(c[v])
for line in itertools.islice(data,2):
json_nor=json_normalize(line)
df=df.append(json_nor, ignore_index = True)
</code></pre>
</blockquote>
<p>Any advice on how to convert this and avoid the memory error?</p>
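<p>One direction I have started looking at is streaming the file instead of loading it all at once, e.g. with the <code>ijson</code> package (untested sketch below), but I am not sure it is the right approach:</p>
<pre><code>import ijson          # streaming JSON parser
import pandas as pd

chunk, first = [], True
with open('data.json', 'rb') as f:
    # 'Text.item' walks the objects inside the top-level "Text" array one by one
    for record in ijson.items(f, 'Text.item'):
        chunk.append(record)
        if len(chunk) == 100_000:
            pd.DataFrame(chunk).to_csv('data.csv', mode='a', header=first, index=False)
            chunk, first = [], False
if chunk:
    pd.DataFrame(chunk).to_csv('data.csv', mode='a', header=first, index=False)
</code></pre>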
|
<python><json>
|
2022-12-11 10:23:12
| 0
| 495
|
Fatima
|
74,759,875
| 9,490,400
|
How to send a list of lists as query parameter in FastAPI?
|
<p>I'm using FastAPI framework and I want to send a list of lists using <code>Query</code> parameters. I can send a list using the below syntax, but I am unable to pass list of lists.</p>
<pre><code>sections_to_consider: Optional[List[str]] = Query(None)
</code></pre>
<p>I get the below output.</p>
<p><a href="https://i.sstatic.net/0jwhf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0jwhf.png" alt="enter image description here" /></a></p>
<p>What I want is something like this</p>
<pre><code>{
"sections_to_consider": [
["string", "string2"],
["string3", "string4"]
]
}
</code></pre>
<p>I tried below syntax but getting an error.</p>
<pre><code>sections_to_consider: Optional[List[list]] = Query(None)
sections_to_consider: Optional[List[List[str]]] = Query(None)
</code></pre>
<p>I need to accept list of lists. This is an optional parameter but, if passed, the inner list must have exactly 2 strings.
Is there any way to do it in FastAPI? Or any work around?</p>
<p>Thanks in advance.</p>
|
<python><fastapi>
|
2022-12-11 10:07:11
| 1
| 336
|
Prince
|
74,759,796
| 7,168,098
|
python pandas: mapping the column names with map to a dict & handling missing values
|
<p>I would like to apply the map function to the columns of a dataframe as follows:</p>
<pre><code>d = {'one': [1, 2], 'two': [3, 4], 'three':[3,3]}
df = pd.DataFrame(data=d)
recodes = {'one':'A', 'two':'B'}
c = df.columns.map(recodes)
c
#result: Index(['A', 'B', nan], dtype='object')
</code></pre>
<p>All fine</p>
<p>Now I would like to apply another kind of dict, with values being tuples:</p>
<pre><code>recodes2 = {'one':('A','H'), 'two':('AB','HB')}
c = df.columns.map(recodes2)
c
</code></pre>
<p>This does not work, the error being:</p>
<pre><code>TypeError: Expected tuple, got float
</code></pre>
<p>EXPECTED OUTPUT:</p>
<pre><code>MultiIndex([( 'A', 'H'),
( 'AB', 'HB'),
('_unknown', '_unknown')],
)
</code></pre>
<p>The question is threefold:</p>
<ul>
<li>On one side, why is this so? Why don't I get an automatic (nan, nan) default value?</li>
<li>Secondly, how to avoid this problem?</li>
<li>And thirdly, how to include a kind of default value, for instance a tuple ("_unknown", "_unknown"), for values not being part of the dictionary?</li>
</ul>
<ul>
<li>I am looking for a more pythonic answer than taking the set of column values and modifying the dict to include the default value for all keys not originally present in the dict.</li>
</ul>
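<p>The closest I have got to a one-liner is passing a callable with a default to <code>map</code> (sketch; I am not sure it is idiomatic):</p>
<pre><code>c = df.columns.map(lambda col: recodes2.get(col, ('_unknown', '_unknown')))
</code></pre>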
<p>A POSSIBLE SOLUTION IS:</p>
<pre><code>d = {'one': [1, 2], 'two': [3, 4],'three':[6,7],'four':[8,9]}
df = pd.DataFrame(data=d)
# original recodes dict
recodes3 = {'one':('one','A','H'), 'two':('two','AB','HB')}
# complete recodes dict
missing_values = [col for col in df.columns if col not in recodes3.keys()]
print(missing_values)
recodes_missing = {k:(k,'_unknown','_unknown') for k in missing_values}
#complete the recode dict:
recodes4 = {**recodes3,**recodes_missing}
print(recodes4)
c = df.columns.map(recodes4)
c
</code></pre>
<p>But there should be a way to treat missing values in the pandas map function (I guess).</p>
|
<python><pandas><dictionary><default>
|
2022-12-11 09:56:08
| 1
| 3,553
|
JFerro
|
74,759,702
| 17,034,564
|
Transfer learning (or fine-tuning) pre-trained model on non-text data
|
<p>I am currently fine-tuning a <a href="https://huggingface.co/unideeplearning/polibert_sa" rel="nofollow noreferrer">sentiment analysis bert-based model</a> using <a href="https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer" rel="nofollow noreferrer">PyTorch Trainer</a> from hugging face. So far, so good.</p>
<p>I have easily managed to fine-tune the model on my text data. However, I'd like to conduct an ablation study to see how the inclusion of linguistic features impacts the model's performance.</p>
<p>In other words, how the inclusion of, e.g., comment length, type-to-token ratio, and other features (stored in my dataset in a separate column) affects the performance of my model.</p>
<p>This is what my data kind of looks like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Text</th>
<th>Type-token ratio</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hi, my name is...</td>
<td>1.0</td>
</tr>
<tr>
<td>I cannot believe I did not...</td>
<td>0.95</td>
</tr>
</tbody>
</table>
</div>
<p>In the specific case above, for instance, I would like to fine-tune the model on the text column but also on the Type-token ratio one.</p>
<p>I know that some people concatenate the two columns into a string, but I am not sure that is the correct method. Is there a more methodologically sound way of doing this?</p>
<p>I was not able to find much information about it.</p>
<p><strong>EDIT</strong>:</p>
<p><em>(The below code works and, ideally, it should also include the <code>TTR</code> column.)</em></p>
<p>This is my code:</p>
<pre><code>dataset = pd.read_csv('/content/gdrive/MyDrive/.../data_train.csv')
train_roberta = dataset[['text_lower', 'label']].sample(frac=0.75)
validation_roberta = dataset[['text_lower', 'label']].drop(train_roberta.index)
train_roberta = train_roberta.dropna()
validation_roberta = validation_roberta.dropna()
train = Dataset.from_pandas(train_roberta, preserve_index=False)
validation = Dataset.from_pandas(validation_roberta, preserve_index=False)
tokenizer = AutoTokenizer.from_pretrained("a_model/a_bert_like_model")
def tokenize_function(example):
return tokenizer(example["text_lower"], padding="max_length", truncation=True)
tokenized_train_dataset = train.map(tokenize_function, batched=True)
tokenized_test_dataset = validation.map(tokenize_function, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
import os
os.environ["WANDB_DISABLED"] = "true"
training_args = TrainingArguments("test-trainer", evaluation_strategy="epoch") # default arguments for fine-tuning
model = AutoModelForSequenceClassification.from_pretrained("a_model/a_bert_like_model", num_labels=3)
def compute_metrics(eval_preds): # compute accuracy and f1-score
metric = load_metric("glue", "mrpc")
logits, labels = eval_preds
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
trainer = Trainer( # specifying trainer class
model,
training_args,
train_dataset=tokenized_train_dataset,
eval_dataset=tokenized_test_dataset,
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
trainer.train() # starts fine-tuning
</code></pre>
|
<python><artificial-intelligence><bert-language-model><pre-trained-model><fine-tuning>
|
2022-12-11 09:40:51
| 0
| 678
|
corvusMidnight
|
74,759,691
| 5,257,450
|
Unstable lightgbm results on every new process when I restarted the server
|
<p>This is the python code I used for generating trees from my data.</p>
<pre class="lang-py prettyprint-override"><code>params = {
'objective': 'binary',
'deterministic': True,
'force_row_wise': True,
'num_threads': 8,
'learning_rate': 0.05,
'max_depth': 3,
'tree_learner': 'data',
'boosting_type': 'gbdt',
'feature_fraction': 0.7,
'bagging_fraction': 0.8,
'bagging_freq': 5,
'verbose': -1,
'seed': 8,
}
random.seed(8)
np.random.seed(8)
dataset = lgb.Dataset(X, y)
bst = lgb.train(params, dataset, 10, keep_training_booster=True)
trees = bst.dump_model()["tree_info"]
</code></pre>
<p>I was testing it out on my local macbook M2 machine with a flask server.</p>
<ul>
<li>The code does give me stable results when I run the algorithm repeatedly without restarting the process.</li>
<li>The instability happens every time I reboot the server.</li>
<li>I've tested out every possible instruction in <a href="https://lightgbm.readthedocs.io/en/latest/Parameters.html" rel="nofollow noreferrer">https://lightgbm.readthedocs.io/en/latest/Parameters.html</a></li>
<li>I've even tried setting the random seed of python / numpy, but it doesn't work</li>
<li>I've also disabled parallelism by setting num_threads to 1, but it doesn't help</li>
</ul>
|
<python><random><lightgbm>
|
2022-12-11 09:38:47
| 0
| 3,284
|
xxx222
|
74,759,543
| 2,975,438
|
How to vectorize and speed-up double for-loop for pandas dataframe when doing text similarity scoring
|
<p>I have the following dataframe:</p>
<pre><code>d_test = {
'name' : ['South Beach', 'Dog', 'Bird', 'Ant', 'Big Dog', 'Beach', 'Dear', 'Cat'],
'cluster_number' : [1, 2, 3, 3, 2, 1, 4, 2]
}
df_test = pd.DataFrame(d_test)
</code></pre>
<p>I want to identify similar names in <code>name</code> column if those names belong to one cluster number and create unique id for them. For example <code>South Beach</code> and <code>Beach</code> belong to cluster number <code>1</code> and their similarity score is pretty high. So we associate it with unique id, say <code>1</code>. Next cluster is number <code>2</code> and three entities from <code>name</code> column belong to this cluster: <code>Dog</code>, <code>Big Dog</code> and <code>Cat</code>. <code>Dog</code> and <code>Big Dog</code> have high similarity score and their unique id will be, say <code>2</code>. For <code>Cat</code> unique id will be, say <code>3</code>. And so on.</p>
<p>I created a code for the logic above:</p>
<pre><code># pip install thefuzz
from thefuzz import fuzz
d_test = {
'name' : ['South Beach', 'Dog', 'Bird', 'Ant', 'Big Dog', 'Beach', 'Dear', 'Cat'],
'cluster_number' : [1, 2, 3, 3, 2, 1, 4, 2]
}
df_test = pd.DataFrame(d_test)
df_test['id'] = 0
i = 1
for index, row in df_test.iterrows():
for index_, row_ in df_test.iterrows():
if row['cluster_number'] == row_['cluster_number'] and row_['id'] == 0:
if fuzz.ratio(row['name'], row_['name']) > 50:
df_test.loc[index_,'id'] = int(i)
is_i_used = True
if is_i_used == True:
i += 1
is_i_used = False
</code></pre>
<p>Code generates expected result:</p>
<pre><code> name cluster_number id
0 South Beach 1 1
1 Dog 2 2
2 Bird 3 3
3 Ant 3 4
4 Big Dog 2 2
5 Beach 1 1
6 Dear 4 5
7 Cat 2 6
</code></pre>
<p>Note, for <code>Cat</code> we got <code>id</code> as <code>6</code> but it is fine because it is unique anyway.</p>
<p>While the algorithm above works for the test data, I am not able to use it for the real data I have (about 1 million rows), and I am trying to understand how to vectorize the code and get rid of the two for-loops.</p>
<p>Also <code>thefuzz</code> module has <code>process</code> function and it allows to process data at once:</p>
<pre><code>from thefuzz import process
out = process.extract("Beach", df_test['name'], limit=len(df_test))
</code></pre>
<p>But I don't see how it can help with speeding up the code.</p>
|
<python><pandas><vectorization><fuzzy-search><fuzzywuzzy>
|
2022-12-11 09:11:55
| 4
| 1,298
|
illuminato
|
74,759,494
| 4,882,300
|
Number of the possible arrays that can be formed from a string of digits ( Leetcode 1416. Restore The Array )
|
<p>Given the leetcode question <a href="https://leetcode.com/problems/restore-the-array/description/" rel="nofollow noreferrer">1416. Restore The Array</a>:</p>
<blockquote>
<p>A program was supposed to print an array of integers. The program
forgot to print whitespaces and the array is printed as a string of
digits s and all we know is that all integers in the array were in the
range [1, k] and there are no leading zeros in the array.</p>
<p>Given the string s and the integer k, return the number of the
possible arrays that can be printed as s using the mentioned program.</p>
</blockquote>
<p>I'm trying to solve this question using backtracking, but I can't understand why my solution returns an incorrect output.</p>
<pre class="lang-py prettyprint-override"><code>class Solution:
def numberOfArrays(self, s: str, k: int) -> int:
ans = 0
def backtrack(s, index, buffer):
nonlocal ans, k
# If we have reached the end of the string,
# we have found a valid array of integers.
if index == len(s):
ans += 1
return
# If the current digit is '0', we cannot form
# a valid number using any of the following digits.
if s[index] == "0":
return
# Try forming a number using the current digit and
# the following digits. If the number is valid,
# continue the backtracking process with the remaining
# digits in the string.
for i in range(index, len(s)):
buffer *= 10
buffer += int(s[i])
if buffer <= k:
backtrack(s, i + 1, buffer)
else:
# If the number is not valid, stop the backtracking
# process and undo any changes made to the buffer.
buffer //= 10
backtrack(s, 0, 0)
return ans
</code></pre>
<p>It is able to pass the following test cases:</p>
<pre><code>Input: s = "1000", k = 10000
Output: 1
Input: s = "1000", k = 10
Output: 0
Input: s = "1317", k = 2000
Output: 8
</code></pre>
<p>But not this one:</p>
<pre><code>Input: s = "2020", k = 30
Output: 0
Expected: 1
</code></pre>
<p>I don't understand why it can't detect the partition <code>[20, 20]</code>.</p>
|
<python><arrays><algorithm><backtracking>
|
2022-12-11 09:00:07
| 2
| 942
|
Ruan
|
74,759,447
| 12,575,770
|
What is the time complexity of this custom function?
|
<p>Is it correct to say that the time complexity of the following code snippet is O(n^2)? My main doubt is whether I can consider <code>triplets.append(sorted([array[i], array[j], twoSum - array[j]]))</code> as constant time since it does not depend on N and is always sorting an array of fixed size of length 3.</p>
<p><strong>Function</strong></p>
<pre><code>def threeNumberSum(array, targetSum):
triplets = []
# O(n^2)
for i in range(len(array)):
twoSum = targetSum - array[i]
hashTable = {}
for j in range(i+1, len(array)):
hashTable[array[j]] = twoSum - array[j]
y = hashTable.get(twoSum - array[j])
if y != None and twoSum-array[j] != array[j]:
triplets.append(sorted([array[i], array[j], twoSum - array[j]])) # O(1)?
triplets.sort() # O(nlogn)
return triplets
</code></pre>
|
<python><time-complexity>
|
2022-12-11 08:52:43
| 1
| 959
|
ChaoS Adm
|
74,759,417
| 20,554,684
|
"ModuleNotFoundError" when importing modules
|
<p>A similar question was asked at: <a href="https://stackoverflow.com/questions/66117109/python-subpackage-import-no-module-named-x">Python subpackage import "no module named x"</a>
But I was still not able to solve my problem.</p>
<p>For the first time, I divided my python code into modules and packages.
Here is how the files are set up (not with the real names):</p>
<pre><code>βββ projectfolder
|ββmain.py
|ββpackage1
|--__init.py__
|--module1.py
|--subpackage1
|--__init.py__
|--module2.py
</code></pre>
<p>Inside module2.py there is a function ("function1") that is supposed to be used inside a function ("function2") in module1.py. Function2 is then supposed to be used inside main.py.</p>
<p>This is firstly done by importing subpackage1 inside module1:</p>
<p><code>import subpackage1</code></p>
<p>This is the code inside the <code>__init__.py</code> inside subpackage1:</p>
<p><code>from .module2 import function1</code></p>
<p>Function1 is then used inside function2 which lies in module1.py like this:</p>
<p><code>subpackage1.function1()</code></p>
<p>Which does not create any error message.
But now I want to call function2 in main.py. Package1 is imported in main.py like this:</p>
<p><code>import package1</code></p>
<p>This is the code inside the <code>__init__.py</code> file inside package1:</p>
<p><code>from .module1 import function2</code></p>
<p>I was then expecting that I could use function2 without a problem in main.py like this:</p>
<p><code>package1.function2()</code></p>
<p>But when running the code from main.py I get this error message:</p>
<p>Error:</p>
<pre><code>"Inside main.py" import package1
"Inside __init__.py which lies in package1" from .module1 import function2
"Inside module1.py" ModuleNotFoundError: No module named 'subpackage1'
</code></pre>
<p>What have I done wrong? And is there a better way to organize packages and modules? Because this is already hard to manage with only a few files.</p>
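<p>One thing I have started experimenting with is making the import inside module1.py relative instead of absolute (sketch):</p>
<pre><code># package1/module1.py
from . import subpackage1

def function2():
    return subpackage1.function1()
</code></pre>
<p>but I don't know whether that is the proper way to structure it.</p>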
|
<python><code-organization><modulenotfounderror>
|
2022-12-11 08:44:59
| 2
| 399
|
a_floating_point
|
74,759,317
| 19,060,245
|
How to get only observations with maximum values after using groupby.sum?
|
<p>Sample data:</p>
<pre><code>df = pd.DataFrame({
'Company': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'],
'Model': ['A1', 'A2', 'A1', 'A3', 'A1', 'A2', 'A2', 'A3'],
'Units_sold': [55, 67, 58, 72, 52, 64, 68, 83]
})
</code></pre>
<p>After using groupby with the sum function</p>
<pre><code>df.groupby(['Company', 'Model'])['Units_sold'].agg('sum')
</code></pre>
<p>I get the following output:</p>
<pre><code>Company Model
A A1 113
A2 67
A3 72
B A1 52
A2 132
A3 83
</code></pre>
<p>I only want to get observations where <code>Units_sold</code> is maximum. That is, the expected output should be:</p>
<pre><code>Company Model
A A1 113
B A2 132
</code></pre>
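<p>The closest I have got is chaining a second groupby with <code>idxmax</code> (sketch), but I am not sure it is the cleanest way:</p>
<pre><code>s = df.groupby(['Company', 'Model'])['Units_sold'].agg('sum')
out = s.loc[s.groupby(level=0).idxmax()]
</code></pre>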
|
<python><pandas>
|
2022-12-11 08:24:54
| 1
| 314
|
salman
|
74,759,260
| 7,077,761
|
pip install -e . vs setup.py
|
<p>I have been locally editing (inside a conda env) the package <code>GSTools</code> cloned from the github repo <a href="https://github.com/GeoStat-Framework/GSTools" rel="nofollow noreferrer">https://github.com/GeoStat-Framework/GSTools</a>, to adapt it to my own purposes. The package is C++ wrapped in Python (Cython).</p>
<p>I've thus far used <code>pip install -e .</code> in the main package dir for my local changes. But I now want to use its <code>OpenMP</code> support by setting the env variable <code>export GSTOOLS_BUILD_PARALLEL=1</code>. Then doing <code>pip install -e .</code> I get, among other things, in the terminal ...</p>
<pre><code>Installing collected packages: gstools
Running setup.py develop for gstools
Successfully installed gstools-1.3.6.dev37
</code></pre>
<p>The issue is that nothing actually changed, because <code>setup.py</code> (shown below) is supposed to print <code>"OpenMP=True"</code> if the env variable is set to <code>GSTOOLS_BUILD_PARALLEL=1</code> in the Linux terminal, and print something else if it is not set to 1.</p>
<p>here is <code>setup.py</code>.</p>
<pre><code># -*- coding: utf-8 -*-
"""GSTools: A geostatistical toolbox."""
import os
import numpy as np
from Cython.Build import cythonize
from extension_helpers import add_openmp_flags_if_available
from setuptools import Extension, setup
# cython extensions
CY_MODULES = [
Extension(
name=f"gstools.{ext}",
sources=[os.path.join("src", "gstools", *ext.split(".")) + ".pyx"],
include_dirs=[np.get_include()],
define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")],
)
for ext in ["field.summator", "variogram.estimator", "krige.krigesum"]
]
# you can set GSTOOLS_BUILD_PARALLEL=0 or GSTOOLS_BUILD_PARALLEL=1
if int(os.getenv("GSTOOLS_BUILD_PARALLEL", "0")):
added = [add_openmp_flags_if_available(mod) for mod in CY_MODULES]
print(f"## GSTools setup: OpenMP used: {any(added)}")
else:
print("## GSTools setup: OpenMP not wanted by the user.")
# setup - do not include package data to ignore .pyx files in wheels
setup(ext_modules=cythonize(CY_MODULES), include_package_data=False)
</code></pre>
<p>I tried instead just <code>python setup.py install</code> but that gives</p>
<pre><code>UNKNOWN 0.0.0 is already the active version in easy-install.pth
Installed /global/u1/b/benabou/.conda/envs/healpy_conda_gstools_dev/lib/python3.8/site-packages/UNKNOWN-0.0.0-py3.8-linux-x86_64.egg
Processing dependencies for UNKNOWN==0.0.0
Finished processing dependencies for UNKNOWN==0.0.0
</code></pre>
<p>and <code>import gstools</code></p>
<p>no longer works correctly.</p>
<p>So how can I install my edited version of the package with OpenMP support?</p>
|
<python><openmp><cython>
|
2022-12-11 08:15:14
| 1
| 966
|
math_lover
|
74,759,087
| 1,455,354
|
How to read JSON file from Python Wheel Package
|
<p>I have created a .whl package using <code>python3.9 setup.py bdist_wheel</code> and I have also checked that all files get included in this .whl file. Below is the .whl file structure:</p>
<pre><code>MyPackage/
- api/
- workflow
- cfg
- data1.json
- data2.json
- data3.json
- scripts.py
</code></pre>
<p>and I am trying to read the JSON files present inside api/workflow/cfg and add them into a dictionary, using the function below from scripts.py:</p>
<pre><code>def read_cfg(storage_dir="api/workflow/cfg") -> Dict[str, wd]:
wf: Dict[str, wd] = {}
for json_file in Path(storage_dir).glob("*.json"):
with open(json_file) as f:
definition = json.load(f)
wf[definition["name"]] = wd(**definition)
return wf
</code></pre>
<p>However, it gives an empty dictionary. What am I doing wrong and what needs to be changed to read the json files using the .whl file?</p>
<p>This runs successfully when I try to run it directly without using the .whl file.</p>
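<p>One direction I have been reading about is <code>importlib.resources</code> instead of a relative filesystem path (untested sketch; it assumes <code>api.workflow.cfg</code> is importable as a package and that the JSON files are declared as package data):</p>
<pre><code>import json
from importlib.resources import files
from typing import Dict

def read_cfg() -> Dict[str, wd]:
    wf: Dict[str, wd] = {}
    for resource in files("api.workflow.cfg").iterdir():
        if resource.name.endswith(".json"):
            definition = json.loads(resource.read_text())
            wf[definition["name"]] = wd(**definition)
    return wf
</code></pre>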
|
<python><setuptools><python-packaging>
|
2022-12-11 07:35:36
| 1
| 4,616
|
Rakesh Shetty
|
74,758,832
| 13,313,572
|
How to Quit / terminate / exit a particular Excel File in Python?
|
<p>I want to close/quit/terminate a particular Excel file only. With my script, it closes all open/active Excel files. My intention is to force close one particular file only. How can I resolve this?</p>
<p>For example, in my case I have more than 3 Excel files open in active mode. Now I run the script to append data to a particular Excel file (<strong>example.xlsx</strong>). It's open, so it raises <strong>error code 13, permission error</strong>, hence my need to forcibly close that particular file programmatically. But my code closes all open Excel files.</p>
<p>I have googled many things, including xlwings.quit(), but xlwings.quit() also closed all open/active Excel files.</p>
<p>Please suggest a better way to resolve it.</p>
<pre><code>import sys
import os
from PyQt5.QtWidgets import QWidget,QApplication,QMessageBox,QSizePolicy
from PyQt5.QtGui import QIcon
from openpyxl import load_workbook
from win32com.client import Dispatch
my_filename = r'd:\me\openpyxl\example.xlsx'
companies = [['name1','address1','tel1','web1'], ['name2','address2','tel2','web2']]
class Force_to_close_open_excel_files(QWidget):
def __init__(self):
super(). __init__()
self.show_errormsg = "show"
self.append_data()
def append_data(self):
try:
active_workbook = load_workbook(filename=my_filename)
active_work_sheet = active_workbook.active
active_workbook.save(filename=my_filename)
for info in companies:
active_work_sheet.append(info)
print("data Added Sucessfully")
active_workbook.save(filename=my_filename)
except Exception as e:
if self.show_errormsg == 'show':
self.handle_error(e)
else:
pass
def handle_error(self, error):
exc_type, exc_value, exc_traceback = sys.exc_info()
filenamewithpath = exc_traceback.tb_frame.f_code.co_filename
head, tail = os.path.split((filenamewithpath))
lineno = exc_traceback.tb_lineno
name = exc_traceback.tb_frame.f_code.co_name
type = exc_type.__name__
message = exc_value
error_no = ([x[0] for x in [message.args]])
nl = '\n'
kk = f'File Name : {tail[:-3]}{nl}' \
f'Line No. : {lineno}{nl}' \
f'Type : {type}{nl}' \
f'Error Code : {error_no[0]}{nl}'\
f'Name : {name}{nl}'
self.msg = QMessageBox()
self.msg.setSizePolicy(QSizePolicy.MinimumExpanding,QSizePolicy.MinimumExpanding)
self.msg.setWindowTitle(" Error/Bugs Information")
self.msg.setWindowIcon(QIcon('icon\close006.png'))
self.msg.setText(f'{type} - Line No {lineno}')
self.msg.setIcon(QMessageBox.Information)
self.msg.setStandardButtons(QMessageBox.Ok)
self.msg.addButton(QMessageBox.Close)
self.btn_focestop = self.msg.button(QMessageBox.Close)
self.btn_focestop.setText("Force to Close - Excel Files")
self.msg.setDefaultButton(QMessageBox.Ok)
self.msg.setInformativeText("")
self.msg.setDetailedText(kk)
self.msg.show()
self.msg.exec_()
if self.msg.clickedButton() == self.btn_focestop:
excel = Dispatch("Excel.Application")
excel.Visible = False
workbook = excel.Workbooks.Open(my_filename)
map(lambda book: book.Close(False), excel.Workbooks)
excel.Quit()
self.append_data()
def main():
app = QApplication(sys.argv)
ex = Force_to_close_open_excel_files()
# ex.show()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
</code></pre>
|
<python><python-3.x><openpyxl><win32com>
|
2022-12-11 06:33:51
| 1
| 653
|
Kumar
|
74,758,688
| 9,257,578
|
Beautiful Soup returns None
|
<p>I am new to web scraping. <code>soup.find()</code> cannot find the data; it returns None. Here I want to find the price.
My code is:</p>
<pre><code>destination = "nepal"
check_in_year = 2022
check_in_month = 12
check_in_day = 13
check_out_year = 2022
check_out_month = 12
check_out_day = 17
adults = 2
total_children = 0
num_rooms = 1
url = "https://www.booking.com/searchresults.en-gb.html?label=gen173nr-1BCAEoggI46AdIM1gEaFCIAQGYAQm4ARfIAQzYAQHoAQGIAgGoAgO4AqnamocGwAIB0gIkNGEyODNlYTYtYTM2Yi00M2Y3LWE2YjItM2RmYWFlMTM5ZWI22AIF4AIB&sid=44053b754f64b58cfdde1ddc395974a0&sb=1&sb_lp=1&src=index&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Findex.en-gb.html%3Flabel%3Dgen173nr-1BCAEoggI46AdIM1gEaFCIAQGYAQm4ARfIAQzYAQHoAQGIAgGoAgO4AqnamocGwAIB0gIkNGEyODNlYTYtYTM2Yi00M2Y3LWE2YjItM2RmYWFlMTM5ZWI22AIF4AIB%3Bsid%3D44053b754f64b58cfdde1ddc395974a0%3Bsb_price_type%3Dtotal%26%3B&ss={}&is_ski_area=0&checkin_year={}&checkin_month={}&checkin_monthday={}&checkout_year={}&checkout_month={}&checkout_monthday={}&group_adults={}&{}&no_rooms={}&b_h4u_keep_filters=&from_sf=1&dest_id=&dest_type=&search_pageview_id=1be740bf37ad0063&search_selected=false".format(
destination,check_in_year,check_in_month,check_in_day,check_out_year,check_out_month,check_out_day,adults,total_children,num_rooms)
print(url)
</code></pre>
<p>It prints out the URL fine, and the website is also accessed with the same results that I want to search, but here's the main content:</p>
<pre><code>driver.get(url)
soup = BeautifulSoup(driver.page_source)
links = soup.find("span", {"data-testid":"price-and-discounted-price"})
print(links)
</code></pre>
<p>It prints out <code>None</code>.
Please help me out with this issue; I want to find the price.</p>
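<p>One thing I have started trying is waiting for the results to render before grabbing <code>page_source</code> (sketch, not sure this is the right fix):</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.get(url)
WebDriverWait(driver, 20).until(
    EC.presence_of_element_located(
        (By.CSS_SELECTOR, '[data-testid="price-and-discounted-price"]'))
)
soup = BeautifulSoup(driver.page_source, "html.parser")
links = soup.find("span", {"data-testid": "price-and-discounted-price"})
</code></pre>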
|
<python><web-scraping><beautifulsoup>
|
2022-12-11 05:59:39
| 1
| 533
|
Neetesshhr
|
74,758,616
| 480,118
|
vscode multi-root workspace: specifying python path in env files
|
<p>I have the following folder structure. Project1 and Project2 are part of a multi-root workspace.
<a href="https://i.sstatic.net/CGN9A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CGN9A.png" alt="enter image description here" /></a></p>
<p>I will be developing on Windows but running on Linux, so I would like to keep different environment files (.env and .env_linux) and load them based on the OS the code is running under. The .env file looks like this:</p>
<pre><code>PYTHONPATH=./src:./src/utils:./src/app:./src/services
</code></pre>
<p>My launch.json file looks like this:</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Run App",
"type": "python",
"request": "launch",
"program": "${workspaceFolder:project1}/src/app/app.py",
"console": "integratedTerminal",
"justMyCode": true,
"windows": {
"envFile": "${workspaceFolder:project1}/.env"
},
"linux": {
"envFile": "${workspaceFolder:project1}/.env_linux"
}
}
]
}
</code></pre>
<p>The code in app.py looks like below - just one line trying to import the utils module:</p>
<pre><code>from utils import utils
</code></pre>
<p>When the code runs, at the above line I get the error "No module named 'utils'".</p>
<p>So I next added the following to my settings.json:</p>
<pre><code>"terminal.integrated.env.windows": {
"python.envFile": "${workspaceFolder:project1}/.env",
},
"terminal.integrated.env.linux": {
"python.envFile": "${workspaceFolder:project1}/.env_linux",
},
</code></pre>
<p>This did not solve the problem. I figured that this env file approach just isn't going to work and then added the PYTHONPATH to the settings.json as seen below, but I still get the same error:</p>
<pre><code>"terminal.integrated.env.windows": {
"python.envFile": "${workspaceFolder:project1}/.env",
"python.pythonPath":"${workspaceFolder:project1}/src:${workspaceFolder:project1}/src/app:${workspaceFolder:project1}/utils",
"PYTHONPATH":"${workspaceFolder:project1}/src:${workspaceFolder:project1}/src/app:${workspaceFolder:project1}/utils",
},
</code></pre>
<p>Still the same error. I also tried changing the .env file to reference the absolute path, to no avail. <strong>What am I doing wrong?</strong></p>
<p>It's interesting to note that Pylance is able to find the packages/modules when editing; it's only at run time that I get that error.</p>
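<p>One thing I'm starting to wonder about is the path separator: as far as I know, Windows separates PYTHONPATH entries with <code>;</code> rather than <code>:</code>, so a Windows-specific .env might need to look like this (just a sketch of that assumption, same relative paths as above):</p>
<pre><code>PYTHONPATH=./src;./src/utils;./src/app;./src/services
</code></pre>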
|
<python><visual-studio-code>
|
2022-12-11 05:38:16
| 1
| 6,184
|
mike01010
|
74,758,611
| 586,543
|
solving math equation for data loading problem
|
<p>I have a dataframe df_N with n observations. I want to write code that creates a new dataframe df_M with records from df_N. The number of observations in df_M (i.e. m observations) is several orders of magnitude greater than the number of observations in df_N.
The number of observations in df_M can be represented by the following formula.</p>
<blockquote>
<p>m = (n*(2^x)) + n^y + z</p>
</blockquote>
<p>Note that the first part of the equation is the series n, n*2, n*4, n*8, i.e. n times 2^x.</p>
<p>Note that all values are integers.</p>
<p>For example, if n = 8 and m = 82,
the values of the formula would be
82 = (8*(2^3)) + 8^2 + 2 = 8*8 + 16 + 2 = 64 + 16 + 2 = 82,
with values x = 3, y = 2 and z = 2.</p>
<p>Also note that always (n*(2^x)) > n^y > z . This constraint will restrict the number of solutions for the equation.</p>
<p>Is there a way of solving this equation on python and finding the values of x y and z, given n and m?</p>
<p>Once the values of x, y and z are determined, I'd be able to write code to create additional records for each segment of the equation and combine them into a single dataframe df_M.</p>
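<p>For clarity, this is the kind of brute-force search I could fall back on if there is no smarter way (just a sketch; it assumes z is a non-negative integer and caps both exponents at an arbitrary limit so the loops always terminate):</p>
<pre><code>def solve(n, m, max_exp=64):
    # brute-force all integer x, y, z with
    #   m == n*(2**x) + n**y + z   and   n*(2**x) > n**y > z >= 0
    solutions = []
    for x in range(max_exp):
        first = n * (2 ** x)
        if first >= m:
            break
        for y in range(max_exp):
            second = n ** y
            if second >= first:
                break
            z = m - first - second
            if 0 <= z < second:
                solutions.append((x, y, z))
    return solutions
</code></pre>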
|
<python><numpy><math>
|
2022-12-11 05:36:45
| 1
| 409
|
Rajesh Kazhankodath
|
74,758,536
| 16,869,702
|
Unrecognized Option in subprocess.run()
|
<p>I want to run this command <code>ebsynth -style source_photo.png -guide source_segment.png target_segment.png -output output.png</code>.</p>
<p>This works perfectly in cmd but not in python subprocess.run()</p>
<p>Python Code</p>
<pre><code>import subprocess
process = subprocess.run([
'ebsynth',
'-style', 'source_photo.png',
'-guide', 'source_segment.png target_segment.png',
'-output', 'output.png'
], shell=True, cwd=dir)
</code></pre>
<p>Running this I am getting <code>error: unrecognized option 'output.png'</code></p>
<p>What is the problem?</p>
|
<python><subprocess>
|
2022-12-11 05:14:43
| 1
| 435
|
Meet Gondaliya
|
74,758,525
| 15,298,943
|
Python - trying to convert time from utc to cst in api response
|
<p>Below is the code I am using to get data from an API, and below that is the response. I am trying to convert <code>datetime</code> from UTC to CST and then present the data with that time zone instead, but I am having trouble isolating <code>datetime</code>.</p>
<pre><code>import requests
import json
weather = requests.get('...')
j = json.loads(weather.text)
print (json.dumps(j, indent=2))
</code></pre>
<p>Response:</p>
<pre><code>{
"metadata": null,
"data": [
{
"datetime": "2022-12-11T05:00:00Z",
"is_day_time": false,
"icon_code": 5,
"weather_text": "Clear with few low clouds and few cirrus",
"temperature": {
"value": 45.968,
"units": "F"
},
"feels_like_temperature": {
"value": 39.092,
"units": "F"
},
"relative_humidity": 56,
"precipitation": {
"precipitation_probability": 4,
"total_precipitation": {
"value": 0.0,
"units": "in"
}
},
"wind": {
"speed": {
"value": 5.144953471725125,
"units": "mi/h"
},
"direction": 25
},
"wind_gust": {
"value": 9.014853256979242,
"units": "mi/h"
},
"pressure": {
"value": 29.4171829577118,
"units": "inHg"
},
"visibility": {
"value": 6.835083114610673,
"units": "mi"
},
"dew_point": {
"value": 31.01,
"units": "F"
},
"cloud_cover": 31
},
{
"datetime": "2022-12-11T06:00:00Z",
"is_day_time": false,
"icon_code": 4,
"weather_text": "Clear with few low clouds",
"temperature": {
"value": 45.068,
"units": "F"
},
"feels_like_temperature": {
"value": 38.066,
"units": "F"
},
"relative_humidity": 56,
"precipitation": {
"precipitation_probability": 5,
"total_precipitation": {
"value": 0.0,
"units": "in"
}
},
"wind": {
"speed": {
"value": 5.167322834645669,
"units": "mi/h"
},
"direction": 27
},
"wind_gust": {
"value": 8.724051539012168,
"units": "mi/h"
},
"pressure": {
"value": 29.4213171559632,
"units": "inHg"
},
"visibility": {
"value": 5.592340730136005,
"units": "mi"
},
"dew_point": {
"value": 30.2,
"units": "F"
},
"cloud_cover": 13
},
{
"datetime": "2022-12-11T07:00:00Z",
"is_day_time": false,
"icon_code": 4,
"weather_text": "Clear with few low clouds",
"temperature": {
"value": 44.33,
"units": "F"
},
"feels_like_temperature": {
"value": 37.364,
"units": "F"
},
"relative_humidity": 56,
"precipitation": {
"precipitation_probability": 4,
"total_precipitation": {
"value": 0.0,
"units": "in"
}
},
"wind": {
"speed": {
"value": 4.988367931281317,
"units": "mi/h"
},
"direction": 28
},
"wind_gust": {
"value": 8.254294917680744,
"units": "mi/h"
},
"pressure": {
"value": 29.4165923579616,
"units": "inHg"
},
"visibility": {
"value": 7.456454306848007,
"units": "mi"
},
"dew_point": {
"value": 29.714,
"units": "F"
},
"cloud_cover": 22
}
],
"error": null
</code></pre>
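<p>For context, this is roughly the per-entry conversion I'm aiming for (just a sketch; it assumes the response structure shown above and uses <code>zoneinfo</code> with <code>America/Chicago</code> for the Central time zone):</p>
<pre><code>from datetime import datetime, timezone
from zoneinfo import ZoneInfo

central = ZoneInfo("America/Chicago")
for entry in j["data"]:
    # "2022-12-11T05:00:00Z" -> aware UTC datetime -> Central time
    utc_dt = datetime.strptime(entry["datetime"], "%Y-%m-%dT%H:%M:%SZ")
    utc_dt = utc_dt.replace(tzinfo=timezone.utc)
    entry["datetime"] = utc_dt.astimezone(central).isoformat()

print(json.dumps(j, indent=2))
</code></pre>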
|
<python><json><python-requests>
|
2022-12-11 05:11:54
| 1
| 475
|
uncrayon
|
74,758,516
| 2,720,402
|
Dask will often have as many chunks in memory as twice the number of active threads - How to understand this?
|
<p>I read the captioned sentence on <a href="https://docs.dask.org/en/stable/array-best-practices.html#select-a-good-chunk-size" rel="nofollow noreferrer">Dask's website</a> and wonder what it means. I have extracted the relevant part below for ease of reference:</p>
<blockquote>
<p>A common performance problem among Dask Array users is that they have chosen a chunk size that is either too small (leading to lots of overhead) or poorly aligned with their data (leading to inefficient reading).<br />
While optimal sizes and shapes are highly problem specific, it is rare to see chunk sizes below 100 MB in size. If you are dealing with float64 data then this is around (4000, 4000) in size for a 2D array or (100, 400, 400) for a 3D array. <br />
You want to choose a chunk size that is large in order to reduce the number of chunks that Dask has to think about (which affects overhead) but also small enough so that many of them can fit in memory at once. <strong>Dask will often have as many chunks in memory as twice the number of active threads.</strong></p>
</blockquote>
<p>Does it mean that the same chunk will co-exist on the mother node (or process, or thread?) and the child node? Is it really necessary to have the same chunk twice?</p>
<p>PS: I don't quite understand the difference among node, process and thread, so I just put all of them there.</p>
|
<python><dask><dask-distributed>
|
2022-12-11 05:09:19
| 2
| 2,553
|
Ken T
|
74,758,409
| 10,634,126
|
Update global variable when worker fails (Python multiprocessing.pool ThreadPool)
|
<p>I have a Python function that requests data via API and involves a rotating expiring key. The volume of requests necessitates some parallelization of the function. I am doing this with the multiprocessing.pool module ThreadPool. Example code:</p>
<pre><code>import json
import requests
from multiprocessing.pool import ThreadPool
from tqdm import tqdm
# Input is a list-of-dicts results of a previous process.
results = [...]
# Process starts by retrieving an authorization key.
headers = {"authorization": get_new_authorization()}
# api_call() is called on each existing result with the retrieved key.
results = thread(api_call, [(headers, result) for result in results])
# Function calls API with passed headers for given URL and returns dict.
def api_call(headers_plus_result):
headers, result = headers_plus_result
    r = requests.get(result["url"], headers=headers)
return json.loads(r.text)
# Threading function with default num_threads.
def thread(worker, jobs, num_threads=5):
pool = ThreadPool(num_threads)
results = list()
for result in tqdm(pool.imap_unordered(worker, jobs), total=len(jobs)):
if result:
results.append(result)
pool.close()
pool.join()
if results:
return results
# Function to get new authorization key.
def get_new_authorization():
...
return auth_key
</code></pre>
<p>I am trying to modify my mapping process so that, when the first worker fails (i.e. the authorization key expires), all other processes are paused until a new authorization key is retrieved. Then, the processes proceed with the new key.</p>
<p>Should this be inserted into the actual thread() function? If I put an exception in the api_call function itself, I don't see how I can stop the pool manager or update the header being passed to other workers.</p>
<p>Additionally: is using ThreadPool even the best method if I want this kind of flexibility?</p>
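<p>To make the intent concrete, this is roughly the kind of coordination I have in mind (just a sketch, not working code: a shared, lock-protected headers dict that one worker refreshes when it hits an auth failure, while the others keep reusing it; it assumes an expired key shows up as a 401):</p>
<pre><code>import threading

auth_lock = threading.Lock()
shared_headers = {"authorization": get_new_authorization()}

def api_call(result):
    r = requests.get(result["url"], headers=shared_headers)
    if r.status_code == 401:  # key presumably expired
        with auth_lock:
            # re-check inside the lock so only one worker refreshes the key
            r = requests.get(result["url"], headers=shared_headers)
            if r.status_code == 401:
                shared_headers["authorization"] = get_new_authorization()
                r = requests.get(result["url"], headers=shared_headers)
    return json.loads(r.text)
</code></pre>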
|
<python><multiprocessing><threadpool>
|
2022-12-11 04:41:51
| 1
| 909
|
OJT
|
74,758,399
| 5,618,172
|
Cast string to int64 is not supported
|
<p>I have a problem using the Keras subclassing API. First I read data from a TFRecord as follows:</p>
<pre><code>train_ds = tf.data.TFRecordDataset(['./data.tfrecord'])
val_ds = tf.data.TFRecordDataset(['./data_validate.tfrecord'])
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
feature_description = {
'label': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature2':tf.io.FixedLenFeature([], tf.string, default_value='')
}
def _parse_function(example_proto):
example = tf.io.parse_single_example(example_proto, feature_description)
return {'feature1': example['feature1'], 'feature2': example['feature2']}, example['label']
train_ds = train_ds.map(_parse_function).batch(256)
val_ds = val_ds.map(_parse_function).batch(256)
</code></pre>
<h1>Second</h1>
<p>I generated my model as follows:</p>
<pre><code>class VanillaModel(tf.keras.Model):
def __init__(self, number_of_classes):
super(VanillaModel, self).__init__()
self.number_of_classes=number_of_classes
max_tokens = 100_000
sequence_length = 1042
embedding_dim = 64
embedding = "https://tfhub.dev/google/nnlm-en-dim128-with-normalization/2"
self.feature2_embedding = hub.KerasLayer(embedding, input_shape=[],
dtype=tf.string, name="embedding", trainable=True)
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(
lowercase, r'<[^<>]*>', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
self.feature1_embedding = tf.keras.Sequential([
feature1_lookup,
Embedding(max_tokens, output_dim= 32, mask_zero=True),
GlobalAveragePooling1D()
])
self.hidden_layer_1 = Dense(embedding_dim, activation='relu', name="hidden_layer_1")
self.hidden_layer_2 = Dense(embedding_dim//2, activation='relu', name="hidden_layer_2")
self.drop_out_1 = Dropout(rate=0.15, name='drop_out_1')
self.hidden_layer_3 = Dense(embedding_dim//2, activation='relu',name="hidden_layer_3")
self.output_layer = Dense(number_of_classes, activation='sigmoid', name="output_layer")
def call(self, inputs):
x = tf.concat([
self.feature1_embedding(inputs["feature1"]),
self.feature2_embedding(inputs["feature2"])
], axis=1)
x = self.hidden_layer_1(x)
x = self.hidden_layer_2(x)
x = self.drop_out_1(x)
x = self.hidden_layer_3(x)
output=self.output_layer(x)
return output
</code></pre>
<h1>Finally</h1>
<p>I call the model to train it as follows, passing the dataset to <code>model.fit</code>:</p>
<pre><code>tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
steps_per_epoch = 85
model = ProblemTypeModel(number_of_classes)
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=False),
metrics=['accuracy'])
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=30,
steps_per_epoch=steps_per_epoch,
callbacks=[tensorboard_callback])
</code></pre>
<p>However, I receive this error: <code>Cast string to int64 is not supported</code></p>
<pre><code>"name": "UnimplementedError",
"message": "Graph execution error:\n\nDetected at node 'vanilla_model_22/sequential_45/Cast' defined at (most recent call last):\n
File \"c:\\Users\\fakeuser\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\n
return _run_code(code, main_globals, None,\n
File \"c:\\Users\\fakeuser\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n
File \"c:\\Users\\fakeuser\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\ipykernel_launcher.py\", line 17, in <module>\n app.launch_new_instance()\n
File \"c:\\Users\\fakeuser\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\traitlets\\config\\application.py\", line 965, in launch_instance\n app.start()\n
File \"c:\\Users\\fakeuser\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\ipykernel\\kernelapp.py\", line 712, in start\n self.io_loop.start()\n
File \"c:\\Users\\fakeuser\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\tornado\\platform\\asyncio.py\", line 199, in start\n
self.asyncio_loop.run_forever()
.
.
.
.
tensor = tf.cast(tensor, dtype=ref_input.dtype)\nNode: 'problem_type_model_22/sequential_45/Cast'\nCast string to int64 is not supported\n\t [[{{node vanilla_model_22/sequential_45/Cast}}]] [Op:__inference_train_function_32172]
</code></pre>
<p>My understanding is that I need to parse my dataset into a tuple of (features, label), which is what I am doing in <code>_parse_function</code>. Also, when I added a print statement in the <code>call</code> function, I found out that it works just fine for the first two examples and errors out after that. I don't know why TensorFlow throws the <code>Cast string to int64 is not supported</code> exception. I would appreciate your help if you could spot where the error is in my code. Thank you in advance for taking the time to look at my post.</p>
<p><code>tensorflow version: 2.8.3</code></p>
|
<python><tensorflow><keras><deep-learning>
|
2022-12-11 04:39:16
| 0
| 1,964
|
Kero
|
74,758,335
| 368,453
|
Build a graph where the biggest distance (in edges) between two nodes is equal to two
|
<p>I have to build an algorithm using Python:</p>
<ol>
<li>This algorithm has to build a graph that has the minimum possible number of edges given a number <code>n</code> of nodes.</li>
<li>we have to go from one node to another node using at most two edges.</li>
<li>some nodes cannot be adjacent to each other.</li>
<li>it is guaranteed that for a given input we will be able to find a solution that satisfies the three items above.</li>
</ol>
<p>The function signature can be <code>createGraph(n: int,forbidenEdge: list[int]) -> list[int]</code></p>
<p>One example: <code>n = 4</code>, <code>forbidenEdge = [[1,3]]</code></p>
<p>The function returns the edges that have to be built so we have a minimum number of edges in the graph.</p>
<p>For the given example we have this possible output (sometimes we can have more than one output): <code>[[1, 2],[4, 2],[2, 3]]</code></p>
<p>I am not sure if the solution I am thinking to implement is correct.</p>
<p>If we do not have any restrictions, or if there is at least one node that can be connected to every other node, the minimum number of edges is (n - 1): we can choose a node that can connect to all the other nodes. The problem is when no node can connect to all the other nodes. For that situation I am thinking about the following algorithm, but I am not sure if it is correct:</p>
<p>We have to check how many connections each node is allowed to make and order the nodes in a list according to this number. We then check whether the nodes that cannot connect to the chosen centre node are able to connect to all the other nodes. If it is possible, we connect the nodes that can be connected to the centre node to it, connect each node that cannot reach the centre to all the other nodes, and print the connections. If it is not possible, we have to choose another centre node, the next one in the sorted list. We repeat this until we find a centre node for which it is possible to build edges between the nodes in such a way that from any node we can move to any other node passing through at most two edges.</p>
<p>For this algorithm, if we have <code>n = 4</code>, <code>forbidenEdge = [[1,3],[2,4]]</code> ... node 3 can be the node center. Node 1 cannot connect to the node center, so it connects to all the other nodes. So we have the output: <code>[[1,2],[1,4],[2,3],[3,4]]</code>.</p>
<p>Is this algorithm going to get it right for any <code>n</code> and any <code>forbidenEdge</code> list? How could I prove it?</p>
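<p>To make the procedure above concrete, here is a direct transcription of it (just a sketch; it assumes nodes are numbered 1..n and that forbidden pairs are unordered):</p>
<pre><code>def create_graph(n, forbidden_edges):
    forbidden = {frozenset(e) for e in forbidden_edges}
    allowed = lambda a, b: frozenset((a, b)) not in forbidden
    nodes = list(range(1, n + 1))

    # candidate centres, ordered by how many nodes they are allowed to touch
    by_degree = sorted(nodes, key=lambda c: -sum(allowed(c, v) for v in nodes if v != c))
    for center in by_degree:
        outside = [v for v in nodes if v != center and not allowed(center, v)]
        # every node that cannot reach the centre must be connectable to all others
        if not all(allowed(o, v) for o in outside for v in nodes if v not in (o, center)):
            continue
        edges = {frozenset((center, v)) for v in nodes if v != center and allowed(center, v)}
        edges |= {frozenset((o, v)) for o in outside for v in nodes if v not in (o, center)}
        return [sorted(e) for e in edges]
    return None

print(create_graph(4, [[1, 3], [2, 4]]))  # e.g. [[1, 2], [1, 4], [2, 3], [3, 4]]
</code></pre>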
|
<python><algorithm>
|
2022-12-11 04:23:36
| 1
| 17,488
|
Alucard
|
74,757,907
| 1,935,424
|
python-docx how to add space before/after a table
|
<p>I have a table defined in python-docx. To add a bit of space before/after that table, I'm just using an empty paragraph. That's OK, but I would like to add half a line's worth of space instead of a full line's worth of space.</p>
<p>Found these:</p>
<ul>
<li><a href="http://www.datypic.com/sc/ooxml/a-w_topFromText-1.html" rel="nofollow noreferrer">http://www.datypic.com/sc/ooxml/a-w_topFromText-1.html</a></li>
<li><a href="http://www.datypic.com/sc/ooxml/a-w_bottomFromText-1.html" rel="nofollow noreferrer">http://www.datypic.com/sc/ooxml/a-w_bottomFromText-1.html</a></li>
</ul>
<p>Both are part of tblpPr:</p>
<ul>
<li><a href="http://www.datypic.com/sc/ooxml/e-w_tblpPr-1.html" rel="nofollow noreferrer">http://www.datypic.com/sc/ooxml/e-w_tblpPr-1.html</a></li>
</ul>
<p>These seem to do exactly what I want. But I haven't been able to figure out how to add those attributes.</p>
<p>I have tried:</p>
<p><code>tbl._tblPr.set(qn('w:bottomFromText'), '1440')</code>
Note that <code>tbl._tblPr</code> is defined and accessible in Python, but this does not generate any extra space. Note the units for 1440 are twentieths of a point.</p>
<p>What am I doing wrong?</p>
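<p>In case it helps, this is how I would expect the <code>w:tblpPr</code> element to be built, based on my reading of the schema (just a sketch and quite possibly incomplete; my assumption is that <code>topFromText</code>/<code>bottomFromText</code> only take effect once the table is a floating table, hence the extra anchor attributes):</p>
<pre><code>from docx.oxml import OxmlElement
from docx.oxml.ns import qn

tblPr = tbl._tblPr
tblpPr = OxmlElement('w:tblpPr')
# distances are in twentieths of a point (1440 = 72 pt)
tblpPr.set(qn('w:topFromText'), '1440')
tblpPr.set(qn('w:bottomFromText'), '1440')
# anchor attributes that (I assume) are needed to make the table floating;
# additional positioning attributes such as w:tblpY may also be required
tblpPr.set(qn('w:vertAnchor'), 'text')
tblpPr.set(qn('w:horzAnchor'), 'margin')
tblPr.append(tblpPr)
</code></pre>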
|
<python><python-docx>
|
2022-12-11 02:10:40
| 0
| 899
|
JohnA
|
74,757,873
| 986,612
|
Diverting the output of a c++ that calls python mangles it
|
<p>I have a c++ (VS2019) app</p>
<pre><code>#include <stdio.h>
#include <Python.h>
void main(int argc, char *argv[])
{
printf( "- before\n" );
FILE* file;
Py_SetPath( L"c:\\Python37\\Lib" );
Py_Initialize();
file = fopen( "abc.py", "r" );
PyRun_SimpleFile( file, "abc.py" );
Py_Finalize();
printf( "- after\n" );
return;
}
</code></pre>
<p>that calls a python (3.10) script abc.py</p>
<pre><code>print( "hi" );
</code></pre>
<p>When run (on Win11) from cmd (<code>main.exe</code>), the output is:</p>
<pre><code>- before
hi
- after
</code></pre>
<p>However, when diverted into a file (<code>main.exe > out.txt</code>, or used in a pipe to tee), I get:</p>
<pre><code>hi
- before
- after
</code></pre>
<p>In my real app, where I interleave python with c++, the python output is printed after the script terminates (again, only when diverting the app's output).</p>
<hr />
<p>Adding <code>setbuf( stdout, NULL );</code>, as @retired-ninja suggested, resolves the simple example above. However, in my more complicated app, it doesn't. I need to find a new simple example.</p>
<hr />
<p>I only needed to add auto-flush to my python script to resolve the issue:</p>
<pre><code>sys.stdout.reconfigure(line_buffering=True)
</code></pre>
|
<python><c++><windows>
|
2022-12-11 02:03:02
| 0
| 779
|
Zohar Levi
|
74,757,793
| 13,022,556
|
Module could not be found error loading dependencies Pytorch Python
|
<p>I'm trying to install PyTorch into my Conda environment using Anaconda Prompt (Miniconda3), but I am running into an issue similar to the one seen <a href="https://stackoverflow.com/questions/74594256/pytorch-error-loading-lib-site-packages-torch-lib-shm-dll-or-one-of-its-depen">here</a>.</p>
<p>First, I created my Conda environment:</p>
<pre><code>conda create --name token python=3.
</code></pre>
<p>Next, I activate my environment and download PyTorch without fail using the following:</p>
<pre><code>activate token
conda install pytorch pytorch-cuda=11.6 -c pytorch -c nvidia
</code></pre>
<p>Next, I go into my python console and try to import torch but find the following error:</p>
<pre><code>python
>>>import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\.conda\envs\token\lib\site-packages\torch\__init__.py", line 139, in <module>
raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\.conda\envs\token\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.
</code></pre>
<p>When running the list command:</p>
<pre><code>(base) C:\Users>conda list -n token pytorch
# packages in environment at C:\Users\.conda\envs\token:
#
# Name Version Build Channel
pytorch 1.13.0 py3.10_cuda11.6_cudnn8_0 pytorch
pytorch-cuda 11.6 h867d48c_0 pytorch
pytorch-mutex 1.0 cuda pytorch
</code></pre>
<p>Is there a way to check which dependencies are missing or know why PyTorch is not being recognised?</p>
|
<python><pytorch><conda><miniconda>
|
2022-12-11 01:40:55
| 0
| 414
|
Lui Hellesoe
|
74,757,750
| 18,308,393
|
Recursively find values for each key-value pair with matching keys
|
<p>I have an example dictionary for <code>rules, quantifiers, and transformations</code>; essentially, inside each <code>key</code> there is another key containing <code>ids</code> equal to <code>id</code>. I am trying to find all those that match and return the matching ids as a dictionary in this format:</p>
<pre><code>dictionary = {'rules':[...], 'quantifiers':[...], 'transformations':[...]}
</code></pre>
<p>Here is the sample:</p>
<pre><code>test_dict = {
'rules': [{'id': 123,'logic': '{"$or":[{"$and":[{"baseOperator":null,"operator":"does_not_contain_ignore_case","operand1":"metrics.123","operand2":"metrics.456"}]}]}',},
{'id': 589,
'logic': '{"$or":[{"$and":[{"baseOperator":null,"operator":"does_not_contain_ignore_case","operand1":"metrics.123","operand2":0}, {"baseOperator":null,"operator":"does_not_contain_ignore_case","operand1":"metrics.456","operand2":0}]}]}',},
{'id': 51,
'logic': '{"$or":[{"$and":[{"baseOperator":null,"operator":"does_not_contain_ignore_case","operand1":"metrics.789","operand2":"metrics.1"}]}]}',},],
'quant': [{'id':123,
'transIds': [1, 2, 3],
'qualifiedId': 'metrics.123'},
{'id':456,
'transIds': [1, 6],
'qualifiedId': 'metrics.456'},
{'id':789,
'transIds': [9],
'qualifiedId': 'metrics.789'}],
'trans': [{'id':1,
'rules': [123, 120]},
{'id':6,
'rules':[589, 2]}]
}
</code></pre>
<p>Here was my attempt. However, I realised that the lists <code>trans, rules</code> would be specific to each index ID; because <code>rules</code> comes first in <code>test_dict</code>, the loop won't capture it, since all the values alongside it are still empty.</p>
<p>Essentially, I want to go into <code>logic</code> and capture all <code>metric</code> values that belong to the <code>ids</code> in <code>quantifiers</code>,</p>
<p>and capture all <code>ids</code> from <code>quantifiers</code> that match the values inside <code>attr</code>.</p>
<pre><code>attr = [123, 456]
keys = list(test_dict.keys())
trans = []
rules = []
for iter in range(len(keys)):
for in_iter in range(len(test_dict[keys[iter]])):
if test_dict[keys[iter]][in_iter].get('id') in attr:
if test_dict[keys[iter]][in_iter].get('transIds') is not None:
for J in test_dict[keys[iter]][in_iter].get('transIds'):
trans.append(J)
if test_dict[keys[iter]][in_iter].get('id') in trans:
if test_dict[keys[iter]][in_iter].get('rules') is not None:
for K in test_dict[keys[iter]][in_iter].get('rules'):
rules.append(K)
if test_dict[keys[iter]][in_iter].get('id') in rules:
if test_dict[keys[iter]][in_iter].get('logic') is not None:
print(test_dict[keys[iter]][in_iter].get('logic'))
</code></pre>
|
<python>
|
2022-12-11 01:28:33
| 1
| 367
|
Dollar Tune-bill
|
74,757,531
| 14,460,824
|
How can color names be more accurately recognised and extracted from strings?
|
<p>It may be a naïve approach that I use to recognise and extract colour names despite slight variations or misspellings in texts; as a first attempt it also works better in English than in German, but the challenges seem to be approximately the same.</p>
<ul>
<li><p>Different spellings <code>grey/gray</code> or <code>weiß/weiss</code> where the similarity from a human perspective does not seem to be huge but from word2vec <code>grey</code> and <code>green</code> are more similar.</p>
</li>
<li><p>Colours not yet known or available in <code>color_list</code>, in the following case <code>brown</code> <em>May not be the best example, but perhaps it can be deduced from the context of the sentence, just as you as a human being get an idea that it could be a color.</em></p>
</li>
</ul>
<p>Both cases could presumably be covered by an extension of the vocabulary with a lot of other color names. But, not knowing about such combinations in the first place seems difficult.</p>
<p>Does anyone see another knob to turn, or even a completely different procedure, that could possibly achieve even better results?</p>
<pre><code>from collections import Counter
from math import sqrt
import pandas as pd
#list of known colors
colors = ['red','green','yellow','black','gray']
#dict or dataframe of sentences that contains color/s or not
df = pd.DataFrame({
'id':[1,2,3,4],
'text':['grey donkey with black mane',
'brown dog with sharp teeth',
'red cat with yellowish / seagreen glowing eyes',
'proud rooster with red comb']
}
)
#creating vector of the word
def word2vec(word):
cw = Counter(word)
sw = set(cw)
lw = sqrt(sum(c*c for c in cw.values()))
return cw, sw, lw
#check cosin distance between word and color
def cosdis(v1, v2):
common = v1[1].intersection(v2[1])
return sum(v1[0][ch]*v2[0][ch] for ch in common)/v1[2]/v2[2]
df['color_matches'] = [[(w,round(cd, 2),c) for w in s.split() for c in colors if (cd:=cosdis(word2vec(c), word2vec(w))) >= 0.85] for s in df.text]
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">id</th>
<th style="text-align: left;">text</th>
<th style="text-align: left;">color_matches</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">grey donkey with black mane</td>
<td style="text-align: left;">[('black', 1.0, 'black')]</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">brown dog with sharp teeth</td>
<td style="text-align: left;">[]</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">red cat with yellowish / seagreen glowing eyes</td>
<td style="text-align: left;">[('red', 1.0, 'red'), ('yellowish', 0.85, 'yellow'), ('seagreen', 0.91, 'green')]</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">4</td>
<td style="text-align: left;">proud rooster with red comb</td>
<td style="text-align: left;">[('red', 1.0, 'red')]</td>
</tr>
</tbody>
</table>
</div>
|
<python><string><search><word2vec><cosine-similarity>
|
2022-12-11 00:21:11
| 2
| 25,336
|
HedgeHog
|
74,757,483
| 5,967,025
|
What is the main intention of creating a subtype with NewType?
|
<p>I thought I could pass a value of a subtype where a value of a base type is expected. What I mean by saying that is as follows.</p>
<pre class="lang-py prettyprint-override"><code>from typing import NewType
BaseType = NewType("BaseType", int)
SubType1 = NewType("SubType1", BaseType)
SubType2 = NewType("SubType2", BaseType)
def return_age(age: BaseType) -> BaseType:
return age
return_age(SubType1(666)) # 1
return_age(SubType2(666)) # 2
</code></pre>
<p>And mypy says</p>
<pre><code># 1: error: Argument 1 to "SubType1" has incompatible type "int"; expected "BaseType" [arg-type]
# 2: error: Argument 1 to "SubType2" has incompatible type "int"; expected "BaseType" [arg-type]
</code></pre>
<p>I expected to be able to use a basetype and a subtype as a baseclass and a subclass.</p>
<pre class="lang-py prettyprint-override"><code>class BaseClass: pass
class SubClass1(BaseClass): pass
class SubClass2(BaseClass): pass
def return_class(klass: BaseClass) -> BaseClass:
return klass
return_class(SubClass1())
return_class(SubClass2())
</code></pre>
<p>The example above is completely valid usage, whereas the subtype example is not. I am just trying to understand the main distinction between being a subclass and being a subtype in that respect.</p>
|
<python><types>
|
2022-12-11 00:11:38
| 1
| 638
|
vildhjarta
|
74,757,418
| 4,169,571
|
Reading lines from file and assigning content to variables
|
<p>I have a file with many lines. Each line has the same length.</p>
<p>The first three lines are:</p>
<pre><code>0 MSG_201901010100.nc [98.22227, 0.00014308207] [3948.8948, 0.0057524233]
1 MSG_201901010200.nc [197.27554, 0.00028737469] [9986.71, 0.014547813]
2 MSG_201901010300.nc [218.46107, 0.00031823604] [12044.043, 0.017544765]
</code></pre>
<p>How can I read in the file and assign the content to lists, arrays or a DataFrame?</p>
<p>As lists I would like to have:</p>
<pre><code>a = [0,1,2]
b = [R10CH20P_MSG_201901010100.nc, R10CH20P_MSG_201901010200.nc, R10CH20P_MSG_201901010300.nc]
c1 = [98.22227, 197.27554, 218.46107]
c2 = [0.00014308207, 0.00028737469,0.00031823604]
d1 = [3948.8948, 9986.71, 12044.043]
d2 = [0.0057524233, 0.014547813, 0.017544765]
</code></pre>
<p>I tried to read the file with Pandas:</p>
<pre><code>import pandas as pd
df = pd.read_table(filename, sep='\s+', names=['a', 'b', 'c1', 'c2', 'd1', 'd2' ])
</code></pre>
<p>But this produces wrong assignments:</p>
<pre><code>print(df)
a b c1 c2 d1 d2
0 0 MSG_201901010100.nc [98.22227, 0.00014308207] [3948.8948, 0.0057524233]
1 1 MSG_201901010200.nc [197.27554, 0.00028737469] [9986.71, 0.014547813]
2 2 MSG_201901010300.nc [218.46107, 0.00031823604] [12044.043, 0.017544765]
</code></pre>
<p>For example, <code>print(df['c1'])</code> gives:</p>
<pre><code>0 [98.22227,
1 [197.27554,
2 [218.46107,
Name: c1, dtype: object
</code></pre>
<p>and</p>
<p><code>print(df['c1'].values)</code> shows:</p>
<pre><code>['[98.22227,' '[197.27554,' '[218.46107,']
</code></pre>
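<p>For what it's worth, the shape I'm after would come out of a plain line-by-line parse like this (just a sketch; it assumes every line has exactly the six fields shown above):</p>
<pre><code>a, b, c1, c2, d1, d2 = [], [], [], [], [], []
with open(filename) as fh:
    for line in fh:
        # drop the brackets and commas, then split on whitespace
        parts = line.replace('[', ' ').replace(']', ' ').replace(',', ' ').split()
        a.append(int(parts[0]))
        b.append(parts[1])
        c1.append(float(parts[2]))
        c2.append(float(parts[3]))
        d1.append(float(parts[4]))
        d2.append(float(parts[5]))
</code></pre>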
|
<python><pandas><dataframe><numpy><numpy-ndarray>
|
2022-12-10 23:56:04
| 1
| 817
|
len
|
74,757,344
| 7,211,014
|
python search a file for text based on other text found (look ahead)?
|
<p>I have a massive HTML file in which I need to find text for every .jpg image. The process I want to perform is:</p>
<ul>
<li>search for the image's name referenced in an href.</li>
<li>if found, look ahead for the first instance of a regex</li>
</ul>
<p>Here is a part of the file. There are many many entries like this. I need to grab that date.</p>
<pre><code> <div class="_2ph_ _a6-p">
<div>
<div class="_2pin">
<div>
<div>
<div>
<div class="_a7nf">
<div class="_a7ng">
<div>
<a href="folder/image.jpg" target="_blank">
<img class="_a6_o _3-96" src="folder/image.jpg"/>
</a>
<div>
Mobile uploads
</div>
<div class="_3-95">
Test Test Test
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="_2pin">
<div>
Test Test Test
</div>
</div>
</div>
</div>
<div class="_3-94 _a6-o">
<a href="https://www.example.com;s=518" target="_blank">
<div class="_a72d">
Jun 25, 2011 12:10:54pm
</div>
</a>
</div>
</div>
<div class="_2ph_ _a6-p">
<div>
<div class="_2pin">
<div>
<div>
<div>
<div class="_a7nf">
<div class="_a7ng">
<div>
<a href="folder/image2.jpg" target="_blank">
<img class="_a6_o _3-96" src="folder/image2.jpg"/>
</a>
<div>
Mobile uploads
</div>
<div class="_3-95">
Test Test Test Test
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="_2pin">
<div>
Test Test Test Test
</div>
</div>
</div>
</div>
<div class="_3-94 _a6-o">
<a href="https://www.example.com;s=518" target="_blank">
<div class="_b28q">
Feb 10, 2012 1:10:54am
</div>
</a>
</div>
</div>
<div class="_3-95 _a6-g"> == $0
<div class="_2pin">Testing </div>\
<div class="_3-95 _a6-p">
<div>
<div><div>
<div><div>
<div><div>
<div><div>
<div>
<div>
<a href="folder/image3.jpg" target="_blank">
<img class="_a6_o _3-96" src="folder/image.jpg"/>
</a>
<div></div>
</div>
</div>
</div>
</div>
<div class="_b28q">
Feb 10, 2012 1:10:54am
</div>
</div>
</code></pre>
<p>I already figured out a regex that works for the date:</p>
<pre><code> rx_date = re.compile('(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s\d{1,2},\s\d{4}\s\d{1,2}:\d{2}:\d{2}(?:AM|PM|am|pm)')
</code></pre>
<p>I need to find <code>Jun 25, 2011 12:10:54pm</code> for the reference of <code>image.jpg</code> and <code>Feb 10, 2012 1:10:54am</code> for the reference of <code>image2.jpg</code>. How can I accomplish that?</p>
<p>I messed around with using Beautiful Soup, but all I can do with that is gather parts of the file. I could not figure out how to look ahead and tell Beautiful Soup that. I tried using <code>.parent.parent.parent.parent.parent.parent.parent.parent.child</code> but that didn't work. Note every div class name is random, so I can't use that as a reference.</p>
<p><strong>EDIT:</strong>
I added one little monkey wrench to the logic. Sometimes the date is not in an <code>a</code> tag but in a div by itself. The HTML example above has been updated.</p>
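<p>To be explicit about the flow I'm picturing: find each <code>a</code> tag whose href ends in .jpg, then take the first date-looking string that occurs after it in document order (just a sketch; <code>find_next</code> with the compiled regex is my guess at the "look ahead" part, and <code>html</code> is assumed to hold the raw file contents):</p>
<pre><code>import re
from bs4 import BeautifulSoup

rx_date = re.compile(r'(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s\d{1,2},\s\d{4}\s\d{1,2}:\d{2}:\d{2}(?:AM|PM|am|pm)')

soup = BeautifulSoup(html, "html.parser")
dates = {}
for a in soup.find_all("a", href=re.compile(r"\.jpg$")):
    # first text node after this anchor that matches the date pattern
    date_text = a.find_next(string=rx_date)
    if date_text:
        dates[a["href"]] = date_text.strip()

print(dates)
</code></pre>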
|
<python><html><beautifulsoup>
|
2022-12-10 23:40:27
| 1
| 1,338
|
Dave
|
74,757,050
| 9,381,966
|
How to deploy locally multiple flows using Prefect 2.0?
|
<p>I'm reading the Prefect documentation and trying to understand how local deployment works. I can deploy a flow locally following the steps below.</p>
<p>First, I build the flow:</p>
<pre><code>prefect deployment build ./log_flow.py:log_flow -n log-simple -q test
</code></pre>
<p>Here <code>./log_flow.py</code> and <code>log_flow</code> are, respectively, the flow's location and entrypoint, <code>log-simple</code> is the name of the deployment, and <code>test</code> is the work queue.</p>
<p>Second, I start the worker using:</p>
<pre><code>prefect agent start -q 'test'
</code></pre>
<p>To apply the deployment, I use python running the below snippet:</p>
<pre class="lang-py prettyprint-override"><code>from log_flow import log_flow
from prefect.deployments import Deployment
deployment = Deployment.build_from_flow(
flow=log_flow,
name="log-simple",
parameters={"name": "Marvin"},
infra_overrides={"env": {"PREFECT_LOGGING_LEVEL": "DEBUG"}},
work_queue_name="test",
)
if __name__ == "__main__":
deployment.apply()
</code></pre>
<p>Well, that works fine for a single flow, but how can I deploy several flows at once? I can repeat the above process for every flow, but it looks a bit impractical to me since each build step generates another YAML file. I think it would be more practical if my deployment generated a single YAML file for all flows.</p>
<p>Is there a way to deploy several flows at once in Prefect 2.0?</p>
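<p>What I'm imagining is something along these lines: one script that loops over all the flows and applies a deployment for each (just a sketch; <code>etl_flow</code> is a made-up second flow, and I'm assuming repeated <code>build_from_flow</code>/<code>apply</code> calls are the intended pattern):</p>
<pre><code>from prefect.deployments import Deployment

from log_flow import log_flow
from etl_flow import etl_flow  # hypothetical second flow

flows = {
    "log-simple": log_flow,
    "etl-simple": etl_flow,
}

if __name__ == "__main__":
    for name, flow in flows.items():
        Deployment.build_from_flow(
            flow=flow,
            name=name,
            work_queue_name="test",
        ).apply()
</code></pre>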
|
<python><prefect>
|
2022-12-10 22:40:43
| 2
| 1,590
|
Lucas
|
74,756,687
| 4,653,234
|
Reading HLS stream from Amazon IVS with timestamped metadata using PYTHON
|
<p>I'm trying to read an HLS stream (Amazon IVS) using Python (so I can process it with OpenCV for computer vision purposes). I can read a simple stream with OpenCV, although I notice a huge latency (20+ seconds) compared to what I see in the AWS Live console. But putting that aside, I also want to be able to read timestamped metadata, which I need for frame processing. I can't see any reference in the docs for reading metadata with Python; I can see events for JS, iOS, Android, and the web in general, but nothing for server-side processing! This is extremely weird. I figured it has to be some kind of WSS; I asked ChatGPT and apparently it was available, but when I try the same approach I get a
<code>server rejected WebSocket connection: HTTP 200</code> error...</p>
<p>Is there anything I'm missing here? How can I read those events with python? The code I'm using is:</p>
<pre><code> # Establish a WebSocket connection to the IVS stream
try:
async with websockets.connect(STREAM_WS) as websocket:
# Subscribe to the timed metadata event
await websocket.send(json.dumps({"message": "subscribe", "data": "metadata"}))
# Continuously receive and print the timed metadata events
while True:
metadata_event = await websocket.recv()
print(metadata_event)
except Exception as e:
print(e)
</code></pre>
<p>where STREAM_WS ends with .m3u8 (I think that might be important, because ChatGPT tells me something about a player URL ending with /live)...</p>
<p>I'm literally out of ideas</p>
|
<python><amazon-web-services><amazon-ivs>
|
2022-12-10 21:33:57
| 0
| 763
|
JFCorleone
|
74,756,670
| 6,282,576
|
Groupby using Django's ORM to get a dictionary of lists between two models
|
<p>I have two models, <code>User</code> and <code>Gift</code>:</p>
<pre class="lang-py prettyprint-override"><code>class User(models.Model):
name = models.CharField(max_length=150, null=True, blank=True)
...
class Gift(models.Model):
user = models.ForeignKey(
"User",
related_name="users",
on_delete=models.CASCADE,
)
...
</code></pre>
<p>Now I want to create a dictionary of lists holding a list of <em>gifts</em> for every <em>user</em>, so that I can quickly look up the <code>ids</code> of gifts that certain users have, in a for loop, without having to query the database many times. I came up with this:</p>
<pre class="lang-py prettyprint-override"><code>from itertools import groupby
gifts_grouped = {
key: [v.id for v in value] for key, value in groupby(gifts, key=lambda v: v.user_id)
}
</code></pre>
<p>Now every time I wish to lookup the gifts of a user, I can simply do:</p>
<pre class="lang-py prettyprint-override"><code>gifts_grouped[id_of_a_user]
</code></pre>
<p>And it'll return a list with ids of gifts for that user. Now this feels like a Python solution to a problem that can easily be solved with Django. But I don't know how. Is it possible to achieve the same results with Django's ORM?</p>
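<p>For comparison, a Python-side grouping over a lighter query would look like this (just a sketch; <code>values_list</code> is there so only the two id columns are fetched):</p>
<pre><code>from collections import defaultdict

gifts_grouped = defaultdict(list)
for user_id, gift_id in Gift.objects.values_list("user_id", "id"):
    gifts_grouped[user_id].append(gift_id)
</code></pre>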
|
<python><django><django-models><group-by><django-orm>
|
2022-12-10 21:30:32
| 1
| 4,313
|
Amir Shabani
|
74,756,596
| 230,866
|
Error when running sudo snap revert through SSH using Fabric under Jenkins
|
<p>I am trying to use Fabric to run a <code>snap revert</code> command on a remote Ubuntu 18.14 machine through SSH. The command works when I run it manually, but it fails when I try to run it through Jenkins.</p>
<p>Here is the code I use:</p>
<pre><code>from fabric import Config, Connection
connection = Connection(
node_ip,
config=Config(overrides={"sudo": {"password": sudo_pass}}),
user=user,
connect_kwargs={"password": sudo_pass},
)
connection.sudo("snap revert mysnap-app")
</code></pre>
<p>The error I see is this:</p>
<pre><code>...
raise ThreadException(thread_exceptions)
invoke.exceptions.ThreadException:
Saw 1 exceptions within threads (OSError):
hread args: {'kwargs': {'buffer_': ['[sudo] password: \n'],
'hide': False,
'output': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>},
'target': <bound method Runner.handle_stderr of <fabric.runners.Remote object at 0x10fb06a00>>}
Traceback (most recent call last):
...
File "/.../site-packages/paramiko/channel.py", line 1198, in _send
raise socket.error("Socket is closed")
OSError: Socket is closed
</code></pre>
<p>I tried unsetting the pty with <code>pty=False</code>, with the same results. This is consistent: it works when run manually on the same machine where I run the Jenkins agent, but it always fails when run by the Jenkins agent.</p>
<p>Any idea on how to tackle this problem?</p>
|
<python><ubuntu><jenkins><fabric><snapcraft>
|
2022-12-10 21:18:46
| 1
| 1,101
|
chaos.ct
|
74,756,517
| 19,425,874
|
Appending data to a Google Sheet using Python
|
<p>I have 3 different tables I'm looking to directly push to 3 separate tabs in a Google Sheet. I set up the GSpread connection and that's working well. I started to adjust my first print statement into what I thought would append the information to Tab A (waveData), but no luck.</p>
<p>I'm looking to append the information to the FIRST blank row in a tab. Basically, so that the data will be ADDED to what is already in there.</p>
<p>I'm trying to use append_rows to do this, but am hitting a <code>gspread.exceptions.APIError: {'code': 400, 'message': 'Invalid value at 'data.values' (type.googleapis.com/google.protobuf.ListValue)</code> error.</p>
<p>I'm really new to this, just thought it would be a fun project to evaluate wave sizes in NJ across all major surf spots, but really in over my head (no pun intended).</p>
<p>Any thoughts?</p>
<pre><code>import requests
import pandas as pd
import gspread
gc = gspread.service_account(filename='creds.json')
sh = gc.open_by_key('152qSpr-4nK9V5uHOiYOWTWUx4ojjVNZMdSmFYov-n50')
waveData = sh.get_worksheet(0)
tideData = sh.get_worksheet(1)
lightData = sh.get_worksheet(2)
# AddValue = ["Test", 25, "Test2"]
# lightData.insert_row(AddValue, 3)
id_list = [
'/Belmar-Surf-Report/3683/',
'/Manasquan-Surf-Report/386/',
'/Ocean-Grove-Surf-Report/7945/',
'/Asbury-Park-Surf-Report/857/',
'/Avon-Surf-Report/4050/',
'/Bay-Head-Surf-Report/4951/',
'/Belmar-Surf-Report/3683/',
'/Boardwalk-Surf-Report/9183/',
]
for x in id_list:
waveData.append_rows(pd.read_html(requests.get('http://magicseaweed.com' + x).text)
[2].iloc[:9, [0, 1, 2, 3, 4, 6, 7, 12, 15]].to_json(), value_input_option="USER_ENTERED")
# print(pd.read_html(requests.get('http://magicseaweed.com' + x).text)[0])
# print(pd.read_html(requests.get('http://magicseaweed.com' + x).text)[1])
</code></pre>
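<p>For reference, my understanding is that <code>append_rows</code> wants a list of row lists rather than a JSON string, so the shape I should probably be passing looks more like this (just a sketch of that assumption, keeping the same column selection as above):</p>
<pre><code>for x in id_list:
    table = pd.read_html(requests.get('http://magicseaweed.com' + x).text)[2]
    rows = table.iloc[:9, [0, 1, 2, 3, 4, 6, 7, 12, 15]].values.tolist()
    # a list of row lists, e.g. [[...], [...], ...]
    waveData.append_rows(rows, value_input_option="USER_ENTERED")
</code></pre>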
|
<python><pandas><google-sheets><google-sheets-api><gspread>
|
2022-12-10 21:04:17
| 1
| 393
|
Anthony Madle
|
74,756,513
| 2,597,213
|
Iterate over json string items in jinja2
|
<p>Hi, I need to send data in string format to my Jinja templates and then render them. The only way I found is to format my data as JSON and send it as a string to the renderer. But I don't know how to use it in the templates; it seems that the <code>tojson</code> filter isn't meant for this purpose, because the result is still rendered as a string.</p>
<p>To keep it simple, I'm doing something similar to:</p>
<pre><code>import json
a = {"a":1,"b":[1,2,3,4]}
response = json.dumps(a)
</code></pre>
<p>in template:</p>
<pre><code>{{ response }}
{{ response|tojson }}
</code></pre>
<p>Both give a string response, not a dict or an object that I can use to render based on the values.</p>
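<p>To be clear about the goal, this is the kind of thing I'd like to be able to write in the template once the value behaves like a real dict again (just a sketch of the intended usage):</p>
<pre><code>{{ response["a"] }}
{% for item in response["b"] %}
  <li>{{ item }}</li>
{% endfor %}
</code></pre>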
|
<python><templates><jinja2>
|
2022-12-10 21:03:58
| 1
| 4,885
|
efirvida
|
74,756,452
| 12,035,877
|
How to close a pdf opened with fitz if I've replaced its variable name?
|
<p>This is a simple issue. I use Jupyter Notebook for Python and usually deal with PDFs using PyMuPDF.</p>
<p>I usually define <code>pdf = fitz.open('dir/to/file.pdf')</code>, but sometimes I forget to close the file before I redefine <code>pdf = fitz.open('dir/to/other_file.pdf')</code>.</p>
<p>Sometimes I need to (for example) move or delete <code>file.pdf</code> (the original file), but I can't because Python is using it.</p>
<p>Not being an expert, I don't know how to close this file once I have redefined the variable <code>pdf</code>, as obviously <code>pdf.close()</code> would close 'other_file.pdf', and I end up reinitializing my .ipynb file, which feels dumb.</p>
<p>How can I access an object whose variable name has been redefined?</p>
|
<python><pdf><pymupdf>
|
2022-12-10 20:51:37
| 2
| 547
|
José Chamorro
|
74,756,311
| 520,556
|
Efficient way for additional indexing of pandas dataframe
|
<p>I am working on a rather large, binary (n-hot encoded) dataframe, whose structure is similar to this toy example:</p>
<pre><code>import pandas as pd
data = {
'A' : [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
'B' : [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'C' : [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
'D' : [1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
'E' : [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
df = pd.DataFrame.from_dict(data)
df
A B C D E
0 0 0 0 1 0
1 0 0 0 1 0
2 0 0 0 1 0
3 1 0 0 0 0
4 1 0 0 0 0
5 1 0 0 0 0
6 1 0 0 0 0
7 1 0 0 0 0
8 0 0 0 1 0
9 0 0 0 1 0
10 0 0 1 0 0
11 0 0 1 0 0
12 0 0 1 0 0
13 0 0 1 0 0
</code></pre>
<p>To make some extractions and searches more efficient, I would like to generate some additional indexes, apart from the generic one, like this:</p>
<pre><code> A B C D E
0 1 D 0 0 0 1 0
1 1 D 0 0 0 1 0
2 1 D 0 0 0 1 0
3 1 A 1 0 0 0 0
4 1 A 1 0 0 0 0
5 1 A 1 0 0 0 0
6 1 A 1 0 0 0 0
7 1 A 1 0 0 0 0
8 2 D 0 0 0 1 0
9 2 D 0 0 0 1 0
10 1 C 0 0 1 0 0
11 1 C 0 0 1 0 0
12 1 C 0 0 1 0 0
13 1 C 0 0 1 0 0
</code></pre>
<p>Two new columns show which column contains a 1 and which appearance of that column it is. Even better would be to have yet another one:</p>
<pre><code> A B C D E
0 1 1 D 0 0 0 1 0
1 1 2 D 0 0 0 1 0
2 1 3 D 0 0 0 1 0
3 1 1 A 1 0 0 0 0
4 1 2 A 1 0 0 0 0
5 1 3 A 1 0 0 0 0
6 1 4 A 1 0 0 0 0
7 1 5 A 1 0 0 0 0
8 2 1 D 0 0 0 1 0
9 2 2 D 0 0 0 1 0
10 1 1 C 0 0 1 0 0
11 1 2 C 0 0 1 0 0
12 1 3 C 0 0 1 0 0
13 1 4 C 0 0 1 0 0
</code></pre>
<p>What would be the most efficient way to generate these indexes?</p>
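<p>To make the two extra columns concrete, this is the slow, loop-based construction I'd like to replace (just a sketch; it assumes exactly one 1 per row):</p>
<pre><code>labels, appearance, within = [], [], []
seen = {}    # column letter -> number of separate blocks seen so far
prev = None
for _, row in df.iterrows():
    col = row.idxmax()           # the column holding the 1 in this row
    if col != prev:              # a new block of identical rows starts here
        seen[col] = seen.get(col, 0) + 1
        pos = 0
    pos += 1
    labels.append(col)
    appearance.append(seen[col])
    within.append(pos)
    prev = col

indexed = df.copy()
indexed.insert(0, 'label', labels)
indexed.insert(0, 'within_block', within)
indexed.insert(0, 'appearance', appearance)
print(indexed)
</code></pre>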
|
<python><pandas>
|
2022-12-10 20:28:27
| 1
| 1,598
|
striatum
|
74,756,290
| 16,667,945
|
How to speed up my sequential port scanner using multi-threading & return successful values only?
|
<p>I was trying to convert the sequential code of a port scanner to make it faster, as it's so slow :(.</p>
<p><strong>Sequential code</strong></p>
<pre class="lang-py prettyprint-override"><code>import sys ,socket
from datetime import datetime
from threading import Thread
def target():
t=input(str("Enter target:"))
target=socket.gethostbyname(t)
try:
for p in range(1,1026):
s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
result=s.connect_ex((target,p))
if result==0:
service=socket.getservbyport(p)
print(f"port {p} is open service {service}")
else:
print(f"port {p} is close")
s.close()
except KeyboardInterrupt:
sys.exit()
except socket.error:
print("Host not responding")
def main():
target()
if __name__=="__main__":
main()
</code></pre>
<p>I successfully converted it to be faster, but I want to get only the successful output from the <code>ThreadPoolExecutor</code>, and I can't. Here is what I do.</p>
<p><strong>Fast Code</strong></p>
<pre class="lang-py prettyprint-override"><code>import socket
import threading
import concurrent.futures
import re
def scan(ip, port):
lock = threading.Lock()
scanner = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
scanner.settimeout(.1)
ip = re.sub("(https:// | http:// | \/)", '', ip)
ip = socket.gethostbyname(ip)
try:
scanner.connect((ip, port))
scanner.close()
with lock:
result = f"Port {port} is OPEN Running {socket.getservbyport(port)}"
print(result)
return result
except:
pass
def run(ip_num: str, scan_fn, nums_ports: int) -> list:
result = []
with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:
for port in range(nums_ports):
future = executor.submit(scan_fn, ip_num, port + 1)
result.append(future.result())
print(result) # empty
return result
def main():
ip = input("target> ")
run(ip, scan, 1025)
# with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:
# for port in range(1025):
# executor.submit(scan, ip, port + 1)
if __name__ == "__main__":
main()
</code></pre>
<p>Output if the target is <code>google.com</code>:</p>
<pre class="lang-bash prettyprint-override"><code>target> google.com
Port 80 is OPEN Running http
Port 443 is OPEN Running https
[None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, 'Port 80 is OPEN Running http', None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, 'Port 443 is OPEN Running https', None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, 
None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
</code></pre>
<p>The output above comes out after a long time. How can I make it faster and keep just the successful results, the ones printed first?</p>
|
<python><sockets><python-multithreading>
|
2022-12-10 20:25:52
| 1
| 321
|
AbdullahSaidAbdeaaziz
|
74,756,281
| 3,618,854
|
In numpy, multiply two structured matrices concisely
|
<p>I have two matrices. The first has the following structure:</p>
<pre><code>[[1, 0, a],
[0, 1, b],
[1, 0, c],
[0, 1, d]]
</code></pre>
<p>where <code>1</code>, <code>0</code>, <code>a</code>, <code>b</code>, <code>c</code>, and <code>d</code> are scalars. The matrix is 4 by 3.</p>
<p>The second is just a 2 by 3 matrix:</p>
<pre><code>[[r1],
[r2]]
</code></pre>
<p>where <code>r1</code> and <code>r2</code> are the first and second rows respectively, each having 3 elements.
I would like the output to be:</p>
<pre><code>[[r1, 0, a*r1],
[0, r1, b*r1],
[r2, 0, c*r2],
[0, r2, d*r2]]
</code></pre>
<p>which would be a 4 by 9 matrix.
This is similar to the Kronecker product, except applied separately for each row of the second matrix. Of course this could be done with cumbersome loops, which I want to avoid.
How can I do this concisely?</p>
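<p>For reference, the cumbersome loop version I'd like to avoid would be something like this (just a sketch with made-up numbers; it assumes rows 0-1 of the first matrix pair with <code>r1</code> and rows 2-3 with <code>r2</code>, as in the example above):</p>
<pre><code>import numpy as np

A = np.array([[1, 0, 2.0],
              [0, 1, 3.0],
              [1, 0, 4.0],
              [0, 1, 5.0]])      # a=2, b=3, c=4, d=5
B = np.array([[1.0, 2.0, 3.0],   # r1
              [4.0, 5.0, 6.0]])  # r2

rows = []
for i in range(A.shape[0]):
    r = B[i * B.shape[0] // A.shape[0]]   # r1 for rows 0-1, r2 for rows 2-3
    rows.append(np.kron(A[i], r))         # [A[i,0]*r, A[i,1]*r, A[i,2]*r]
out = np.vstack(rows)                     # shape (4, 9)
print(out)
</code></pre>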
|
<python><numpy><matrix-multiplication><kronecker-product>
|
2022-12-10 20:25:04
| 3
| 3,432
|
Yakov Dan
|
74,756,223
| 2,773,607
|
Factory-boy fuzzy DateTimeField always the same date when using create_batch
|
<p>I am using factory-boy for creating instances of a Django model, and I am always getting the same value returned when using <code>factory.fuzzy.FuzzyDateTime</code>.</p>
<p>Minimal example:</p>
<pre><code># factory class
class FooFactory(DjangoModelFactory):
class Meta:
# models.Foo has a dt_field that is a DateTimeField
model = models.Foo
# creation of object, in unit tests
# I can't move the dt_field declaration
# into the factory definition since different
# unit tests use different start / end points
# for the datetime fuzzer
now = datetime.datetime.now(tz=pytz.timezone("America/New_York"))
one_week_ago = now - datetime.timedelta(days=7)
FooFactory.create_batch(
10,
dt_field=factory.fuzzy.FuzzyDateTime(
start_dt=one_week_ago, end_dt=now
)
)
</code></pre>
<p>When inspecting the Foo models after factory creation, the <code>dt_field</code> has the same date:</p>
<pre><code>>>> [r.dt_field for r in Foo.objects.all()]
>>> [datetime.datetime(2022, 12, 10, 20, 15, 31, 954301, tzinfo=<UTC>), datetime.datetime(2022, 12, 10, 20, 15, 31, 961147, tzinfo=<UTC>),
datetime.datetime(2022, 12, 10, 20, 15, 31, 967383, tzinfo=<UTC>), ...]
</code></pre>
|
<python><django><datetime><factory-boy>
|
2022-12-10 20:13:02
| 1
| 2,481
|
mprat
|
74,756,178
| 3,199,871
|
How can I run a Selenium app with Chromedriver and Headless Chrome on Github Actions?
|
<p><strong>Context</strong></p>
<p>I've created a Python app that uses Selenium, Chromedriver & Chrome to automate actions on a website (roughly similar to the one here: <a href="https://yasoob.me/posts/web-automation-with-selenium/" rel="nofollow noreferrer">https://yasoob.me/posts/web-automation-with-selenium/</a>). I'd like to run it on a schedule (e.g. every Saturday at 9am) so, as the author suggested, I put it on GitHub Actions. I realize that for this to work I need to install Chromedriver & headless Chrome as part of the GitHub Actions workflow.</p>
<p>My yml file looks like this:</p>
<pre><code>name: Book
on:
schedule:
- cron: "0 4 * * *"
workflow_dispatch:
env:
ACTIONS_ALLOW_UNSECURE_COMMANDS: true
jobs:
scrape-latest:
runs-on: ubuntu-latest
services:
selenium:
image: selenium/standalone-chrome
steps:
- name: Checkout repo
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2.0.0
with:
python-version: '3.7'
- name: Install requirements
run: pip install -r requirements.txt
- name: Prepare Selenium
# https://github.com/marketplace/actions/setup-chromedriver
uses: nanasess/setup-chromedriver@master
- name: Run Booker
run: python app.py
</code></pre>
<p><strong>Issue</strong></p>
<p>I get the following error message:</p>
<pre><code>Run python app.py
Traceback (most recent call last):
File "app.py", line 19, in <module>
browser = webdriver.Chrome()
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
desired_capabilities=desired_capabilities)
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
self.start_session(capabilities, browser_profile)
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Error: Process completed with exit code 1.
</code></pre>
<p><strong>What I've tried</strong></p>
<p>I've tried different versions of this, where I put chromedriver as a binary in the requirements.txt, or tried to install it through the app.py file, but it seems like putting it in the workflow file is the most reasonable place to start.</p>
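<p>For reference, this is roughly how I understand the headless setup inside <code>app.py</code> is supposed to look (a sketch based on my reading, not something I've confirmed works on the runner):</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")            # no display available on the CI runner
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
browser = webdriver.Chrome(options=options)
</code></pre>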
<p><strong>My Ask</strong></p>
<ol>
<li><p>Conceptually: I've never really done anything like this before so I'm having a hard time understanding the logic of what I'm doing - I guess Github Actions runs an Ubuntu server, where each time I run the action, I need to install Chrome (in headless mode) and Chromedriver, so that I can then run the automation that's in app.py. Is that more or less correct?</p>
</li>
<li><p>Tactically: What am I doing wrong in the code above?</p>
</li>
</ol>
<p>Thank you!</p>
|
<python><selenium><google-chrome><selenium-webdriver><selenium-chromedriver>
|
2022-12-10 20:05:55
| 0
| 1,391
|
Sekoul
|
74,756,001
| 1,964,692
|
What other types of classes are there in Python?
|
<p>I just found out about <code>dataclasses</code> and was reading some tutorials on how to use them. One <a href="https://realpython.com/python-data-classes/#more-flexible-data-classes" rel="nofollow noreferrer">tutorial</a> described data classes as:</p>
<blockquote>
<p>A data class is a class typically containing mainly data, although there aren't really any restrictions.</p>
</blockquote>
<p>What other types of classes are there besides data classes? Are there also helpful tools/decorators for those classes? If a data class is a class that contains mainly data, is a class containing mainly helper methods something else?</p>
<p>Or should I think about using <code>@dataclass</code> decorator for any class I want to build as long as I'm using Python 3.7+?</p>
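<p>For context, this is the kind of minimal <code>@dataclass</code> usage I have in mind (a toy sketch, not from the tutorial):</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

# __init__, __repr__ and __eq__ are generated automatically
p = Point(1.0, 2.0)
</code></pre>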
|
<python><python-dataclasses>
|
2022-12-10 19:39:47
| 2
| 1,599
|
Korean_Of_the_Mountain
|
74,755,994
| 18,108,767
|
Closest true value to zero in Python
|
<p>A long time ago I read about the closest true value to zero, something like <code>zero = 0.000000001</code>. The article mentioned this value in Python and how to get it. Does anyone know about this? I have looked it up here on SO, but all the answers are about the closest value to zero in an array, and that's not my point.</p>
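<p>For example, if the article was about the smallest representable positive float (my assumption of what "closest true value to zero" means), this is the kind of thing I'd expect to be able to inspect:</p>
<pre><code>import sys

print(sys.float_info.min)      # smallest positive normalized float, about 2.2e-308
print(sys.float_info.epsilon)  # gap between 1.0 and the next representable float
</code></pre>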
|
<python>
|
2022-12-10 19:38:42
| 1
| 351
|
John
|
74,755,608
| 9,760,446
|
Plot multi-index series with trend line
|
<p>Sample data for MWE:</p>
<pre><code>import pandas as pd
from pandas import Timestamp

d = [{'Date': Timestamp('2022-08-02 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-14 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-01-18 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-01-19 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-01-20 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-01-21 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-01-22 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-01-23 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-01-24 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-01-25 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-01-26 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-01-27 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-01-28 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-01-29 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-01-30 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-01-31 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-02-01 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-02-02 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-02-03 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-02-04 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-02-05 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-02-06 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-02-07 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-02-20 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-02-21 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-02-22 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-02-23 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-02-24 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-02-25 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-02-26 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-02-27 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-02-28 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-03-01 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-03-02 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-03-03 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-03-04 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-03-05 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-03-06 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-03-07 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-03-08 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-03-09 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-03-10 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-03-11 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-03-12 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-03-13 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-03-14 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-03-15 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-03-16 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-03-17 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-03-18 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-03-19 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-02 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-14 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-01-18 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-01-19 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-01-20 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-01-21 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-03-26 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-03-27 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-03-28 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-03-29 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-03-30 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-03-31 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-04-01 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-02 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-03 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-04-04 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-05 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-06 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-07 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-08 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-09 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-10 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-11 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-12 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-04-13 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-14 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-15 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-16 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-17 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-18 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-19 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-20 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-21 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-22 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-04-23 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-04-24 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-04-25 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-04-26 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-27 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-28 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-04-29 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-04-30 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-05-01 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-02 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-05-03 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-04 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-05 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-06 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-07 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-05-08 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-09 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-10 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-11 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-12 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-05-13 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-05-14 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-15 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-16 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-05-17 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-05-18 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-19 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-05-20 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-05-21 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-05-22 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-05-23 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-05-24 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-05-25 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-26 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-27 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-28 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-05-29 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-05-30 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-05-31 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-01 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-02 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-06-03 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-06-04 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-05 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-06 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-06-07 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-06-08 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-09 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-10 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-11 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-06-12 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-13 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-06-14 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-15 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-16 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-06-17 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-06-18 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-19 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-06-20 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-06-21 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-06-22 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-23 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-06-24 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-25 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-06-26 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-27 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-06-28 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-06-29 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-06-30 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-01 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-02 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-03 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-04 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-05 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-06 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-07 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-08 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-07-09 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-10 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-11 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-12 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-07-13 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-07-14 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-15 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-07-16 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-17 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-17 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-16 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-17 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-17 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-16 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-17 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-17 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-07-16 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-07-17 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-07-17 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-07-28 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-07-29 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-30 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-07-31 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-01 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-02 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-03 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-04 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-08-05 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-06 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-08-07 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-08 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-09 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-10 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-11 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-08-12 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-13 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-08-14 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-15 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-16 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-08-17 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-08-18 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-08-19 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-20 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-21 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-22 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-23 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-24 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-08-25 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-08-26 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-27 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-28 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-29 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-08-30 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-08-31 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-01 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-09-02 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-03 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-09-04 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-05 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-06 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-07 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-08 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-09-09 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-10 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-11 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-12 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-13 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-09-14 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-09-15 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-16 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-17 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-18 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-19 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-20 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-09-21 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-22 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-23 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-24 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-09-25 00:00:00'), 'A': 'Yes'},
{'Date': Timestamp('2022-09-26 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-27 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-28 00:00:00'), 'A': 'Unknown'},
{'Date': Timestamp('2022-09-29 00:00:00'), 'A': 'No'},
{'Date': Timestamp('2022-09-30 00:00:00'), 'A': 'No'}]
df = pd.DataFrame(d)
</code></pre>
<p>I get a series representing the relative frequency where A=="Yes" in any given year/month (multi-index) like so (and per <a href="https://stackoverflow.com/a/74754742/9760446">this answer</a>):</p>
<pre><code>df_counts = (
df
.assign(Year=df["Date"].dt.year, Month=df["Date"].dt.month)
.pivot_table(index=["Year", "Month"], columns="A", aggfunc="count", fill_value=0)
.droplevel(0, axis=1)
)
result = df_counts["Yes"].div(df_counts.sum(axis=1))
</code></pre>
<p>This looks like:</p>
<pre><code>Year Month
2022 1 0.277778
2 0.125000
3 0.320000
4 0.233333
5 0.161290
6 0.466667
7 0.387097
8 0.272727
9 0.281250
Name: Yes, dtype: float64
</code></pre>
<p>I can also convert this back to a dataframe, if needed, like so:</p>
<pre><code>df2 = result.reset_index().rename(columns={0: "Freq"})
</code></pre>
<p>Making a bar plot of this is straightforward:</p>
<pre><code>result.plot(kind="bar", legend=False, figsize=(14, 8))
</code></pre>
<p>However, I'd like to show a trend line to better illustrate the upward trend in the data. Because this is multi-index data, I have been struggling to get seaborn/matplotlib/pandas to do this.</p>
<p>How do I plot this data with a trend line?</p>
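<p>A minimal sketch of the direction I've been considering (fitting a straight line against the positional index rather than the MultiIndex itself, which is an assumption on my part) looks like this, but I'm not sure it's the right approach:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

ax = result.plot(kind="bar", legend=False, figsize=(14, 8))
x = np.arange(len(result))                       # 0..8, one position per (Year, Month)
slope, intercept = np.polyfit(x, result.values, 1)
ax.plot(x, slope * x + intercept, color="red")   # trend line over the bars
plt.show()
</code></pre>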
|
<python><pandas><seaborn><trend>
|
2022-12-10 18:35:01
| 1
| 1,962
|
Arthur Dent
|
74,755,265
| 12,945,785
|
how to get different parameters in a python function
|
<p>I would like to programmatically list the different options that a Python function (or equivalent) accepts. For example, I would like to get the different options in</p>
<pre><code>sns.set_style("")
</code></pre>
<p>how to get the "grid", "dark"...parameters with python instruction ?</p>
<p>It is an example of what I would like. I would like to get all the different options in a API function</p>
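<p>A generic sketch of what I mean by "with a Python instruction" (just inspecting the function, nothing seaborn-specific):</p>
<pre><code>import inspect
import seaborn as sns

help(sns.set_style)                       # the docstring lists the accepted style names
print(inspect.signature(sns.set_style))   # parameter names and defaults
</code></pre>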
|
<python><seaborn>
|
2022-12-10 17:48:19
| 1
| 315
|
Jacques Tebeka
|
74,755,137
| 389,806
|
List static member with no value in a Protocol class
|
<p>This example only prints <code>MY_STATIC_MEMBER_1</code>. It does not list <code>MY_STATIC_MEMBER_2</code>. I'd like a way to list both of them.</p>
<pre><code>from typing import Protocol

class Foo(Protocol):
MY_STATIC_MEMBER_1: int = 42
MY_STATIC_MEMBER_2: int
print(dir(Foo))
</code></pre>
<p>I've tried using <code>inspect</code> and <code>__mro__</code>, but I cannot get it to list <code>MY_STATIC_MEMBER_2</code>.</p>
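<p>One thing I'm considering (assuming annotation-level information is enough for my use case) is reading the class annotations directly:</p>
<pre><code>print(Foo.__annotations__)
# {'MY_STATIC_MEMBER_1': <class 'int'>, 'MY_STATIC_MEMBER_2': <class 'int'>}
</code></pre>
<p>But I'm not sure whether that is the right way to enumerate members of a Protocol.</p>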
|
<python>
|
2022-12-10 17:30:44
| 1
| 1,981
|
Philip
|
74,755,130
| 15,893,581
|
DataFrame: IF-condition in lambda asks about Series, but single element is needed to be checked
|
<pre><code>import numpy as np
import pandas as pd
#create DataFrame
df = pd.DataFrame({'str': [700,705,710,715,720,1095,1100,1105,1110,1115,1120,1125,1130,1135,1205,1210,1215,1220,1225,1230,1235,1240,1245,1250,1255],
'P': [0.075,0.075,0.075,0.075,0.075,17.95,19.75,21.85,24.25,26.55,29.2,31.9,35.05,37.7,98.6,102.15,108.5,113.5,118.4,123.55,127.3,132.7,138.7,142.7,148.35],
'C': [407.8,403.65,398.3,391.65,387.8,30.05,26.65,23.7,21.35,19.65,16.05,14.3,11.95,9.95,0.475,0,0.525,0,0.2,0.175,0.15,0.375,0.125,0.075,0.175]})
df = df.assign(ot= lambda x: x['P'] if (x['str']<1105) else x['C'])
print(df)
</code></pre>
<blockquote>
<p><strong>ValueError</strong>: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
</blockquote>
<p>I suppose x['str'] is being taken as a whole Series, so perhaps the index of the current row is needed, though that seems strange to me, since I thought "lambda x" should refer only to the current row and not to the whole x['str'] Series. How can I make the condition "x['str']<1105" be checked correctly in such a lambda?</p>
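<p>One vectorized alternative I'm aware of (a sketch, assuming element-wise selection is what I actually want here) is <code>np.where</code>:</p>
<pre><code>import numpy as np

df = df.assign(ot=lambda x: np.where(x['str'] < 1105, x['P'], x['C']))
</code></pre>
<p>But I'd still like to understand why the plain if/else inside the lambda fails.</p>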
|
<python><dataframe><lambda>
|
2022-12-10 17:29:30
| 3
| 645
|
JeeyCi
|
74,754,965
| 13,285,583
|
How to load images with different image shape to tf.data pipe?
|
<p>My goal is to have preprocessing layers so the model can handle any image size. This is because the <a href="https://www.kaggle.com/datasets/chrisfilo/fruit-recognition" rel="nofollow noreferrer">data set that I use</a> has 2 different image shapes. The solution is simple: just resize the images when I load them. However, I believe this won't work when the model is deployed, since I can't do a manual resize like that. So I must use preprocessing layers.</p>
<p><a href="https://www.tensorflow.org/tutorials/images/data_augmentation" rel="nofollow noreferrer">The docs I used</a></p>
<p>What I've tried:</p>
<ol>
<li><p><a href="https://stackoverflow.com/questions/74753958/how-to-make-the-preprocessing-layers-part-of-the-cnn-model?noredirect=1#comment131932887_74753958">Put the preprocessing layers part of the model</a>, it does not work.</p>
</li>
<li><p>I am thinking to use <code>TensorSliceDataset.map(resize_and_rescale)</code>.</p>
<p>The problem is I need to convert the <code>[tensor image 1, tensor image 2]</code> to <code>TensorSliceDataset</code>. However, I can't convert it.</p>
<p>What I've tried:</p>
<ol>
<li><p><code>tf.data.Dataset.from_tensor_slices((X_train, y_train))</code></p>
<p>It throws error</p>
<pre><code>InvalidArgumentError: {{function_node __wrapped__Pack_N_9773_device_/job:localhost/replica:0/task:0/device:GPU:0}} Shapes of all inputs must match: values[0].shape = [258,320,3] != values[23].shape = [322,480,3]
[[{{node Pack}}]] [Op:Pack] name: component_0
</code></pre>
</li>
</ol>
</li>
</ol>
<p>The load images function:</p>
<pre><code>import tensorflow as tf

def load_images(df):
paths = df['path'].values
X = []
for path in paths:
raw = tf.io.read_file(path)
img = tf.image.decode_png(raw, channels=3)
X.append(img)
y = df['kind'].cat.codes
return X, y
</code></pre>
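<p>This is the rough sketch of what I'm trying to build instead: a dataset of file paths (strings all have the same shape) with the decode/resize done inside <code>map()</code>. The target size here is an assumption; it would be whatever the model expects:</p>
<pre><code>import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed target size

def load_and_resize(path, label):
    raw = tf.io.read_file(path)
    img = tf.image.decode_png(raw, channels=3)
    img = tf.image.resize(img, IMG_SIZE)        # every element now has the same shape
    img = img / 255.0
    return img, label

paths = df['path'].values
labels = df['kind'].cat.codes.values
ds = tf.data.Dataset.from_tensor_slices((paths, labels)).map(load_and_resize).batch(32)
</code></pre>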
|
<python><tensorflow>
|
2022-12-10 17:07:39
| 2
| 2,173
|
Jason Rich Darmawan
|
74,754,884
| 4,298,200
|
Error with missing kw parameters in Pytests tmpdir mkdir method
|
<p>I have a class <code>Foo</code>. It's supposed to persist files in some folder. First it ensures the folder and all its parents exist.</p>
<pre><code>from pathlib import Path

class Foo:
def persist_stuff(self, in_folder: Path):
in_folder.mkdir(parents=True, exist_ok=True)
# TODO: persist stuff
</code></pre>
<p>I test this code with pytest and use the <a href="https://docs.pytest.org/en/7.1.x/how-to/tmp_path.html#the-tmpdir-and-tmpdir-factory-fixtures" rel="nofollow noreferrer"><code>tmpdir</code></a> fixture, which helps clean up any created files:</p>
<pre><code>def test_foo(tmpdir):
    foo = Foo()
    foo.persist_stuff(tmpdir / "foo")
# TODO: test
</code></pre>
<p>Unfortunately, calling <a href="https://docs.python.org/3/library/os.html#os.mkdir" rel="nofollow noreferrer"><code>mkdir</code></a> on <code>tmpdir</code> with kw parameters raises a <code>TypeError</code>:</p>
<pre><code>TypeError: LocalPath.mkdir() got an unexpected keyword argument 'exist_ok'
</code></pre>
<p>It seems those newly introduced kw parameters have not been added to pytest's implementation of <code>LocalPath</code> yet.</p>
<p>There are two trivial solutions which I both don't like:</p>
<ol>
<li>Adapt tests to use my own custom test directory and delete it after every test</li>
<li>Rewrite my production code by calling a custom function that does the same job as <code>mkdir(parents=True, exist_ok=True)</code> without triggering this issue.</li>
</ol>
<p>I don't like either solution (especially the second is a no-go). Is there another - maybe a little less crude - way?</p>
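<p>One thing I'm considering (assuming my pytest version is recent enough to provide it) is switching the tests to the <code>tmp_path</code> fixture, which returns a <code>pathlib.Path</code> instead of a <code>LocalPath</code>:</p>
<pre><code>def test_foo(tmp_path):
    foo = Foo()
    foo.persist_stuff(tmp_path / "foo")   # pathlib.Path supports parents=True, exist_ok=True
</code></pre>
<p>But I'm not sure whether that is the intended replacement or just a workaround.</p>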
|
<python><pytest>
|
2022-12-10 16:56:47
| 1
| 6,008
|
Georg Plaz
|
74,754,779
| 6,619,692
|
How and where is PyTorch's cross-entropy loss implemented?
|
<p>Where is the workhorse code that actually implements cross-entropy loss in the PyTorch codebase?</p>
<p>Starting at <a href="https://github.com/pytorch/pytorch/blob/9ef1d55e6b85f089f5d1f5a221b2dda4e7c052b2/torch/nn/modules/loss.py#L1032" rel="nofollow noreferrer">loss.py</a>, I tracked the source code in PyTorch for the cross-entropy loss to <a href="https://github.com/pytorch/pytorch/blob/30fb2c4abaaaa966999eab11674f25b18460e609/torch/csrc/api/include/torch/nn/modules/loss.h#L739" rel="nofollow noreferrer">loss.h</a> but this just contains the following:</p>
<pre class="lang-cpp prettyprint-override"><code>struct TORCH_API CrossEntropyLossImpl : public Cloneable<CrossEntropyLossImpl> {
explicit CrossEntropyLossImpl(const CrossEntropyLossOptions& options_ = {});
void reset() override;
/// Pretty prints the `CrossEntropyLoss` module into the given `stream`.
void pretty_print(std::ostream& stream) const override;
Tensor forward(const Tensor& input, const Tensor& target);
/// The options with which this `Module` was constructed.
CrossEntropyLossOptions options;
/// A manual rescaling weight given to to each class.
Tensor weight;
};
/// A `ModuleHolder` subclass for `CrossEntropyLossImpl`.
/// See the documentation for `CrossEntropyLossImpl` class to learn what methods
/// it provides, and examples of how to use `CrossEntropyLoss` with
/// `torch::nn::CrossEntropyLossOptions`. See the documentation for
/// `ModuleHolder` to learn about PyTorch's module storage semantics.
TORCH_MODULE(CrossEntropyLoss);
</code></pre>
<p>Having looked at the <a href="https://pytorch.org/cppdocs/api/classtorch_1_1nn_1_1_module_holder.html" rel="nofollow noreferrer">ModuleHolder</a> template class, as a C++ newbie, I'm a little lost.</p>
<p>Can someone help me construct an accurate mental model of what is going on here?</p>
|
<python><c++><pytorch>
|
2022-12-10 16:40:16
| 2
| 1,459
|
Anil
|
74,754,552
| 17,795,398
|
SQLite: unable to use PRIMARY KEY (sqlite3.OperationalError)
|
<p>I'm trying to create a <code>PRIMARY KEY</code> with <code>sqlite3</code> but I get an error.</p>
<p>Code:</p>
<pre><code>import sqlite3
class DataBaseManager:
def __init__(self, database):
self.database = database
self.tablevideos = "videos"
self.cols = (
"idkey INTEGER NOT NULL PRIMARY KEY, "
"name INTEGER NOT NULL, "
"uploaded INTEGER DEFAULT NULL"
)
self.defaults = {
"uploaded": None
}
self.insertcols = "?,?"
self._create_database()
def _create_database(self):
"""
Creates the database if it does not exist
"""
connection = sqlite3.connect(self.database)
cursor = connection.cursor()
query = (
"CREATE TABLE IF NOT EXISTS "
"{}({})".format(self.tablevideos, self.cols)
)
cursor.execute(query)
connection.commit()
cursor.close()
connection.close()
def insert(self):
connection = sqlite3.connect(self.database)
cursor = connection.cursor()
query = (
"INSERT INTO {} "
"VALUES ({})".format(self.tablevideos, self.insertcols)
)
for i in [1,2,3]:
data = [
i,
*self.defaults.values()
]
cursor.execute(query, data)
connection.commit()
cursor.close()
connection.close()
databasemanager = DataBaseManager("data.db")
databasemanager.insert()
</code></pre>
<p>The error is:</p>
<pre><code>Traceback (most recent call last):
File "D:\Sync1\Code\Python3\FileTagerPy\example.py", line 59, in <module>
databasemanager.insert()
File "D:\Sync1\Code\Python3\FileTagerPy\example.py", line 53, in insert
cursor.execute(query, data)
sqlite3.OperationalError: table videos has 3 columns but 2 values were supplied
</code></pre>
<p>If I remove <code>"idkey INTEGER NOT NULL PRIMARY KEY, "</code> it works.</p>
<p>Thanks!</p>
|
<python><sqlite><sql-insert><primary-key><create-table>
|
2022-12-10 16:11:07
| 1
| 472
|
Abel GutiΓ©rrez
|
74,754,497
| 6,226,980
|
how do I find out what "pandas._libs.tslibs.timestamps.Timestamp" is aliased to without googling
|
<p><strong>Train of thoughts:</strong><br />
how do we check datatypes in pandas?<br />
many ways, <code>isinstance()</code>, <code>type()</code>, <code>sr.dtype</code>, <code>df.dtypes</code> etc</p>
<p>The next step is: what is the exact word I should put in my source code to do this?<br />
<code>type(myvar) == <what do I put in here?></code></p>
<p>OK, the simplest way to find out is to try it out:</p>
<pre><code>type(myvar)
>> pandas._libs.tslibs.timestamps.Timestamp
</code></pre>
<p>so, the following is good and all, but that's quite a mouthful,<br />
<code>type(myvar) == pandas._libs.tslibs.timestamps.Timestamp</code></p>
<p><strong>Question</strong><br />
I happen to know that <code>type(myvar) == pandas.Timestamp</code> would work as well for this particular data type. I would like to know a general way of finding out whether (and to what) Python types are aliased to something shorter.</p>
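<p>A minimal sketch of the kind of check I have in mind (just comparing the objects directly):</p>
<pre><code>import pandas as pd
import pandas._libs.tslibs.timestamps as ts

print(pd.Timestamp is ts.Timestamp)                        # True -> same class, shorter name
print(type(myvar).__module__, type(myvar).__qualname__)    # where the class actually lives
</code></pre>
<p>But that only works if I already guessed the shorter name, which is what I'm trying to avoid.</p>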
|
<python><pandas>
|
2022-12-10 16:04:53
| 0
| 2,489
|
eliu
|
74,754,313
| 5,080,612
|
SQLITE db files not closed. db-shm and db-wal files remain
|
<p>I want to write some basic code to run queries in read-only mode on SQLite databases.</p>
<p>These are daily db files, so it is important that after closing the connections no other files, like the associated <em>db-shm</em> or <em>db-wal</em> files, are left on the server.</p>
<p>I have been reading the documentation, and it seems that even though I try to close the connections explicitly, they are not fully closed, so these files stay there.</p>
<pre><code>import sqlite3
import pandas as pd
def _connect(f):
con = None
try:
con = sqlite3.connect("file:"+f+"?mode=ro", uri=True)
except sqlite3.Error as er:
print('SQLite error in the db connection: %s' % (' '.join(er.args)))
return con
def _disconnect(con):
try:
con.close()
except sqlite3.Error as er:
print('SQLite error in the db disconnection: %s' % (' '.join(er.args)))
return con
def fl_query(file):
'''Returns information about a db file
file : string
absolute path to the db file
returns
-------
list
'''
cn = _connect(file)
cur = cn.cursor()
query = """SELECT ....etc"""
cur.execute(query)
info = [(row[0],row[1],row[2],row[3],row[4],row[5]) for row in cur.fetchall()]
cur.close()
_disconnect(cn)
return info
folder = 'path to the db file'
file = folder+'6100000_2022-09-18.db'
info = fl_query(file)
</code></pre>
<p>I have read about how to cleanly close the databases, but so far nothing works and the db-shm and db-wal files stay there every time I open a file. Remark: it is a server with thousands of files, so it is important not to create more files.</p>
<p><a href="https://i.sstatic.net/qbcfy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qbcfy.png" alt="enter image description here" /></a></p>
|
<python><sqlite>
|
2022-12-10 15:38:37
| 1
| 1,104
|
gis20
|
74,753,898
| 14,661,648
|
psycopg2.ProgrammingError: no results to fetch (fetchall SELECT EXISTS)
|
<pre><code>Traceback (most recent call last):
File "/app/main.py", line 324, in that_function
links = cur.fetchall()[0]
File "/app/.heroku/python/lib/python3.10/site-packages/psycopg2/extras.py", line 104, in fetchall
res = super().fetchall()
psycopg2.ProgrammingError: no results to fetch
</code></pre>
<p>I don't understand why I get this exception in this code:</p>
<pre><code>cur.execute(
"SELECT EXISTS(SELECT 1 FROM images where link = %s)", (original_url,)
)
links = cur.fetchall()[0]
</code></pre>
<p>What is the logical error in this? What's weird is that in my infinitely looping Python script, this happens randomly. The database is updated throughout the script, however, so I'm not sure what is causing it.</p>
|
<python><postgresql><psycopg2>
|
2022-12-10 14:43:29
| 0
| 1,067
|
Jiehfeng
|
74,753,755
| 12,285,101
|
list comprehension with Regex to match whole item if it has matching number between two specific characters
|
<p>This question is the continuation <a href="https://stackoverflow.com/questions/74744103/regex-python-find-match-items-on-list-that-have-the-same-digit-between-the-sec/74744288?noredirect=1#comment131931854_74744288">of this post.</a>
I have the following list:</p>
<pre><code>list_paths = ["imgs/foldeer/img_ABC_21389_1.tif.tif",
              "imgs/foldeer/img_ABC_15431_10.tif.tif",
              "imgs/foldeer/img_GHC_561321_2.tif.tif",
              "imgs_foldeer/img_BCL_871125_21.tif.tif",
              ...]
</code></pre>
<p>I want to be able to run a for loop that matches strings containing a specific number, which is <strong>the number between the third occurrence of "_" and the ".tif.tif"</strong>;
for example, when the number is 1, the string to be matched is "imgs/foldeer/img_ABC_21389_1.tif.tif",</p>
<p>for number 2, the match string will be "imgs/foldeer/img_GHC_561321_2.tif.tif".</p>
<p>For that, I wanted to use regex expression using list comprehension. <a href="https://stackoverflow.com/questions/58294532/regex-match-adjacent-digits-after-second-occurrence-of-character">Based on this answer,</a> I have tested this regex expression on Regex101:</p>
<pre><code>import re

number = 10
pattern = rf"^\S*?/(?:[^\s_/]+_){{3}}{number}\.tif\b[^\s/]*$"
indices = [x for x in list_paths if re.search(pattern, x)]
</code></pre>
<p>But this doesn't match anything, and it also doesn't make sure that it takes the exact number, so if the number is 1, it might also select items with the number 10.</p>
<p><strong>My end goal is to be able to match items in the list that have the requested number between the 3rd occurrence of "_" and the first occurrence of ".tif", using a regex expression; I'm looking for help with the regex expression.</strong></p>
<p>The output should be the whole path and not only the number.</p>
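<p>A simpler anchor I'm experimenting with (an assumption that the number always sits directly before the trailing ".tif.tif") would be:</p>
<pre><code>import re

number = 1
pattern = rf"_{number}\.tif\.tif$"
matches = [p for p in list_paths if re.search(pattern, p)]
</code></pre>
<p>But I'd still like a version that follows the "number after the 3rd underscore" rule explicitly.</p>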
|
<python><regex><list>
|
2022-12-10 14:25:48
| 1
| 1,592
|
Reut
|
74,753,748
| 11,440,563
|
pytorch - optimizer that favors true positives
|
<p>I have a beginner, high-level question about PyTorch optimization, namely: is there a non-custom way to optimize for true positives?
Let's say that I have a list of labels:</p>
<pre><code>labels=[0,0,0,0,0,0,0,0,0,1]
</code></pre>
<p>And I would like a model to fit to those labels in a way that favors a true positive - i.e. I really would like it to return '1' as the last item, and I am not that concerned about false positive '1's for the other items in the list.</p>
<p>I presume I can work around this problem by defining a custom loss function which would give me weights for true positives / false positives, but I was wondering if there is an out-of-the-box solution for that.</p>
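<p>The closest out-of-the-box thing I'm aware of (assuming a binary setup trained on logits, which may not match my exact case) is the <code>pos_weight</code> argument of <code>BCEWithLogitsLoss</code>:</p>
<pre><code>import torch
import torch.nn as nn

# weigh missed positives more heavily; 9 negatives per positive in the example labels
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([9.0]))
</code></pre>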
<p>Kind regards.</p>
|
<python><pytorch><loss-function>
|
2022-12-10 14:24:54
| 1
| 363
|
pawelofficial
|
74,753,606
| 7,454,513
|
DRF - How to using serializer to load related data
|
<p>I have a self-related table <code>Employee</code>, and a <code>Project</code> table with a foreign key to the <code>Employee</code> table.</p>
<pre><code>class Employee(models.Model):
eid = models.CharField(primary_key=True, max_length=10)
name = models.CharField(max_length=10)
pmid = models.ForeignKey('self', models.RESTRICT, related_name='team_member', blank=True, null=True,)
class Project(models.Model):
pid = models.CharField(primary_key=True, max_length=10)
description = models.CharField(max_length=100)
teamleaderid = models.ForeignKey(Employee, models.RESTRICT)
</code></pre>
<p>and <code>serializers.py</code></p>
<pre><code>class SubEmployeeSerializer(serializers.ModelSerializer):
class Meta:
model = Employee
fields = '__all__'
class EmployeeSerializer(serializers.ModelSerializer):
team_member = SubEmployeeSerializer(many=True, read_only=True)
class Meta:
model = Employee
fields = '__all__'
class ProjectSerializer(serializers.ModelSerializer):
class Meta:
model = Project
fields = '__all__'
depth = 1
</code></pre>
<p><code>views.py</code></p>
<pre><code>class ProjectList(generics.ListAPIView):
queryset = Project.objects.all()
serializer_class = ProjectSerializer
</code></pre>
<p>I expect that when I request the <code>ProjectListView</code> I can get <code>teamleaderid</code> together with its <code>team_member</code> data,
but I don't know why <code>team_member</code> does not show up in my response.</p>
<pre><code>[
{
"pid": "p1",
"description": "p1 project",
"teamleaderid": {
"eid": "1",
"name": "n1",
"pmid": null,
###### how to show below data ###
# "team_member": [ #
# { #
# "eid": "5", #
# "name": "n5", #
# "pmid": "1" #
#} #
#################################
}
}
]
</code></pre>
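<p>The direction I'm experimenting with (my assumption being that <code>depth = 1</code> builds a default nested serializer and therefore ignores my custom <code>team_member</code> field) is to declare the nested serializer explicitly:</p>
<pre><code>class ProjectSerializer(serializers.ModelSerializer):
    teamleaderid = EmployeeSerializer(read_only=True)

    class Meta:
        model = Project
        fields = '__all__'
</code></pre>
<p>But I'm not sure whether that is the intended way to do it.</p>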
|
<python><django><django-rest-framework>
|
2022-12-10 14:05:46
| 1
| 683
|
Relax ZeroC
|
74,753,574
| 1,196,540
|
How to remove or disable unwanted languages in Django 4.1.1
|
<p>I had a question about translations in Django...
So I have a project with 4 languages defined in my settings.py</p>
<pre><code>LANGUAGES = [
('en', _('English')),
('fr', _('French')),
('de', _('German')),
('it', _('Italy')),
]
</code></pre>
<p>Now I want to disable all languages except English. I searched Google for how to do it and found this post on SO: <a href="https://stackoverflow.com/questions/28651706/how-to-remove-unwanted-languages-from-django-oscar">How to remove unwanted languages from django-oscar</a>. I tried to comment out all languages except English, then ran <code>./manage.py makemigrations</code> and <code>./manage.py migrate</code>; the migrations went through without any error, BUT the languages did not disappear from my language list. I also found the code that, I think, builds this list and changed the hardcoded list from <code>language_list = ['en', 'it', 'de', 'fr']</code> to <code>language_list = settings.LANGUAGES</code>, and again nothing happened to the language choices in my UI.</p>
<p>So, my question:
how can I properly disable unwanted languages in my Django application?</p>
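<p>What I expected to be enough (my assumption of what "English only" settings would look like) is:</p>
<pre><code># settings.py
LANGUAGE_CODE = 'en'

LANGUAGES = [
    ('en', _('English')),
]
</code></pre>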
<p>P.S. I'm new to Python and Django, so please, can anyone help me with this?</p>
|
<python><django>
|
2022-12-10 14:02:31
| 2
| 733
|
vladimir
|
74,753,477
| 2,482,149
|
Boto3 - Copy Object to a newly generated S3 location and return name in http body
|
<p>I have a lambda that runs via a http trigger in API Gateway.</p>
<p>My goal is to:</p>
<ol>
<li>Copy files from <code>input-bucket</code> (where the user uploads his/her files)</li>
<li>Create a UUID as a new location/folder in a separate bucket (<code>output-bucket</code>)</li>
<li>Paste those objects in that new location</li>
<li>Delete all objects from <code>input-bucket</code> and return a 200 http status with the new location name in the body</li>
</ol>
<p>However, I keep getting this:</p>
<pre><code>[ERROR] ClientError: An error occurred (AccessDenied) when calling the CopyObject operation: Access Denied
</code></pre>
<p>This is my lambda:</p>
<pre><code>LOGGER = logging.getLogger(__name__)
logging.basicConfig(level=logging.ERROR)
logging.getLogger(__name__).setLevel(logging.DEBUG)
session = boto3.Session()
client = session.client('s3')
s3 = session.resource('s3')
src_bucket = s3.Bucket('input-bucket')
def lambda_handler(event,context):
LOGGER.info('Reading files in {}'.format(src_bucket))
location = str(uuid.uuid4())
client.put_object(Bucket='output-bucket', Body='',Key = (location + "/"))
LOGGER.info('Generated folder location ' + "/"+ location + "/ in output-bucket")
for obj in src_bucket.objects.all():
copy_source = {
'Bucket': 'input-bucket',
'Key': obj.key
}
client.copy_object(Bucket='output-bucket',
Key=location + '/' + obj.key,CopySource = copy_source) ### ERROR OCCURS HERE
LOGGER.info('Copied: {} from {} to {} folder in {}'.format(obj.key,'input-bucket',location,'output-bucket'))
src_bucket.objects.all().delete()
LOGGER.info('Deleted all objects from {}'.format(src_bucket))
return { "statusCode": 200,
"body": json.dumps({
'folder_name': location
})
}
</code></pre>
<p>As far as I know, I have set up everything correctly in the S3 bucket policies (this is only for <code>output-bucket</code>; I have identical policies for <code>input-bucket</code>):</p>
<pre><code>resource "aws_s3_bucket" "file_output" {
bucket = "output-bucket"
acl = "private"
policy = <<EOF
{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"ModifySrcBucket",
"Effect":"Allow",
"Principal":{
"AWS":[
"<XXXXXX>"
]
},
"Action":[
"s3:PutObject",
"s3:PutObjectTagging",
"s3:GetObject",
"s3:GetObjectTagging",
"s3:DeleteObject"
],
"Resource":["arn:aws:s3:::output-bucket/*","arn:aws:s3:::output-bucket/*/*"]
},
{
"Effect":"Allow",
"Principal":{
"AWS":"<XXXXXXX>"
},
"Action":[
"s3:ListBucket"
],
"Resource":"arn:aws:s3:::output-bucket"
}
]
}
EOF
}
resource "aws_s3_bucket_ownership_controls" "disable_acl_output" {
bucket = aws_s3_bucket.file_output.id
rule {
object_ownership = "BucketOwnerPreferred"
}
}
</code></pre>
|
<python><amazon-web-services><amazon-s3><aws-lambda><terraform>
|
2022-12-10 13:47:51
| 2
| 1,226
|
clattenburg cake
|
74,753,455
| 2,276,831
|
How to split up an email body that contains replies (i.e. an email thread)?
|
<p>I am reading an email body using the python <code>imap</code> and <code>email</code> libraries:</p>
<pre class="lang-py prettyprint-override"><code>import imaplib
from email import message_from_bytes
imap = imaplib.IMAP4_SSL(host)
imap.login(email_address, pw)
_, output = imap.fetch("100", "(RFC822)")
raw_output = output[0]
msg = message_from_bytes(raw_output[1])
body = msg.get_payload(decode=True).decode()
</code></pre>
<p>However, the body has the replies of many other emails in a chain:</p>
<pre><code>second reply
________________________________
From: bob <bob@gmail.com>
Sent: 10 December 2022 11:35
To: susan <susan@outlook.com>
Subject: Re: creating a test thread
first reply
On 10 Dec 2022, at 11:34, blah dev <susan@outlook.com<mailto:susan@outlook.com>> wrote:
first message
</code></pre>
<p>Is there an easy way/library to strip/split the replies in the chain (especially given that the reply formatting seems to vary between email providers)?
i.e. something like a list <code>["second reply", "first reply", "first message" ]</code></p>
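<p>The only fallback I can think of is a hand-rolled split on the common reply markers (a rough sketch; it leaves the From/Sent/To/Subject header lines behind and will certainly miss other providers' formats):</p>
<pre><code>import re

parts = re.split(r"(?m)^_{5,}\s*$|^On .+ wrote:\s*$", body)
messages = [p.strip() for p in parts if p.strip()]
</code></pre>
<p>which is why I'm hoping there is a library that handles this properly.</p>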
|
<python><email><imap>
|
2022-12-10 13:44:23
| 0
| 4,515
|
mic
|
74,753,293
| 12,361,700
|
Tensorflow fit history is noisier than expected
|
<p>I've fitted a model and this is the plot I get with the following code:</p>
<pre><code>hist = model.fit(
xs, ys, epochs=300, batch_size=100, validation_split=0.1,
callbacks=[K.callbacks.EarlyStopping(patience=30)]
)
plt.figure(dpi=200)
plt.plot(hist.history["loss"])
plt.plot(hist.history["val_loss"])
plt.legend(["loss", "val_loss"])
</code></pre>
<p><a href="https://i.sstatic.net/LRXEP.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRXEP.jpg" alt="enter image description here" /></a></p>
<p>However, the training logs are the following:</p>
<pre><code>...
Epoch 200/300
9/9 [==============================] - 3s 384ms/step - loss: 514.2175 - val_loss: 584.2152
Epoch 201/300
9/9 [==============================] - 3s 385ms/step - loss: 510.9814 - val_loss: 581.8872
Epoch 202/300
9/9 [==============================] - 3s 391ms/step - loss: 518.9771 - val_loss: 582.4727
Epoch 203/300
9/9 [==============================] - 3s 383ms/step - loss: 521.8132 - val_loss: 582.9196
Epoch 204/300
9/9 [==============================] - 4s 393ms/step - loss: 516.8439 - val_loss: 584.0792
Epoch 205/300
9/9 [==============================] - 3s 391ms/step - loss: 513.7325 - val_loss: 582.6438
Epoch 206/300
9/9 [==============================] - 3s 390ms/step - loss: 514.4469 - val_loss: 583.5629
Epoch 207/300
9/9 [==============================] - 3s 391ms/step - loss: 522.0557 - val_loss: 581.7162
Epoch 208/300
9/9 [==============================] - 3s 389ms/step - loss: 518.6336 - val_loss: 582.8070
Epoch 209/300
9/9 [==============================] - 3s 391ms/step - loss: 518.0827 - val_loss: 582.4284
Epoch 210/300
9/9 [==============================] - 3s 389ms/step - loss: 514.1886 - val_loss: 582.4635
Epoch 211/300
9/9 [==============================] - 3s 390ms/step - loss: 514.4373 - val_loss: 582.1906
Epoch 212/300
9/9 [==============================] - 3s 391ms/step - loss: 514.9708 - val_loss: 582.1699
Epoch 213/300
9/9 [==============================] - 3s 388ms/step - loss: 521.1622 - val_loss: 582.3545
Epoch 214/300
9/9 [==============================] - 3s 388ms/step - loss: 513.5198 - val_loss: 582.7703
Epoch 215/300
9/9 [==============================] - 3s 392ms/step - loss: 514.6642 - val_loss: 582.3327
Epoch 216/300
9/9 [==============================] - 3s 392ms/step - loss: 514.0926 - val_loss: 583.9896
Epoch 217/300
9/9 [==============================] - 3s 385ms/step - loss: 520.9324 - val_loss: 583.9265
Epoch 218/300
9/9 [==============================] - 4s 397ms/step - loss: 510.2536 - val_loss: 584.6587
Epoch 219/300
9/9 [==============================] - 4s 394ms/step - loss: 515.7706 - val_loss: 583.2895
Epoch 220/300
9/9 [==============================] - 3s 391ms/step - loss: 520.9758 - val_loss: 582.2515
Epoch 221/300
9/9 [==============================] - 3s 386ms/step - loss: 517.8850 - val_loss: 582.1981
Epoch 222/300
9/9 [==============================] - 4s 395ms/step - loss: 514.2051 - val_loss: 583.0013
Epoch 223/300
9/9 [==============================] - 4s 402ms/step - loss: 509.3330 - val_loss: 583.7137
Epoch 224/300
9/9 [==============================] - 3s 388ms/step - loss: 516.6832 - val_loss: 582.0773
Epoch 225/300
9/9 [==============================] - 3s 387ms/step - loss: 515.5243 - val_loss: 582.2585
Epoch 226/300
9/9 [==============================] - 3s 389ms/step - loss: 517.6601 - val_loss: 582.3940
Epoch 227/300
9/9 [==============================] - 3s 388ms/step - loss: 515.7537 - val_loss: 582.3862
Epoch 228/300
9/9 [==============================] - 4s 394ms/step - loss: 516.1107 - val_loss: 582.7234
Epoch 229/300
9/9 [==============================] - 3s 389ms/step - loss: 517.5703 - val_loss: 583.3829
Epoch 230/300
9/9 [==============================] - 3s 388ms/step - loss: 516.7491 - val_loss: 583.4712
Epoch 231/300
9/9 [==============================] - 4s 392ms/step - loss: 520.6753 - val_loss: 583.2650
Epoch 232/300
9/9 [==============================] - 3s 388ms/step - loss: 516.1927 - val_loss: 581.9255
Epoch 233/300
9/9 [==============================] - 4s 393ms/step - loss: 512.5476 - val_loss: 583.1275
Epoch 234/300
9/9 [==============================] - 4s 392ms/step - loss: 513.5744 - val_loss: 583.0643
Epoch 235/300
9/9 [==============================] - 3s 385ms/step - loss: 520.2017 - val_loss: 582.6875
Epoch 236/300
9/9 [==============================] - 3s 386ms/step - loss: 518.7263 - val_loss: 583.0582
Epoch 237/300
9/9 [==============================] - 3s 382ms/step - loss: 521.4882 - val_loss: 582.8977
</code></pre>
<p>In fact, if I extract the training loss with some regex and plot it, I get the following plot:</p>
<p><a href="https://i.sstatic.net/LK236.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LK236.jpg" alt="enter image description here" /></a></p>
<p>What am I missing about <code>History</code>?...</p>
|
<python><tensorflow><keras><deep-learning><neural-network>
|
2022-12-10 13:22:39
| 0
| 13,109
|
Alberto
|
74,753,277
| 5,452,365
|
How to get first element from lxml using xpath
|
<p>Minimal example:</p>
<pre><code>In [1]: from lxml import etree
In [2]: etree.fromstring('<who>syslogd</who>').xpath('/who/text()')
Out[2]: ['syslogd']
</code></pre>
<p>currently I'm using helper function:</p>
<pre><code>from typing import Union

from lxml.etree import _Element as Element  # lxml's element class

def safe_xpath_one(tree: Element, xpath: str) -> Union[Element, None]:
res = tree.xpath(xpath)
if res:
return res[0]
</code></pre>
<p>Here I then need to take the first element from the result, which is an extra step. Is there a direct way of specifying that I want the first, and only the first, element?</p>
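<p>One built-in alternative I'm aware of is <code>find()</code>/<code>findtext()</code>, which return a single match (or <code>None</code>) instead of a list, but they take ElementPath expressions rather than full XPath:</p>
<pre><code>root = etree.fromstring('<who>syslogd</who>')
print(root.findtext('.'))   # 'syslogd' -- no list indexing needed
</code></pre>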
<p>P.S.: I think I got too used to bs4's <code>soup.select_one</code>.</p>
|
<python><xpath><lxml><libxml2>
|
2022-12-10 13:20:15
| 2
| 11,652
|
Rahul
|
74,753,081
| 20,078,696
|
How to create an MLPClassifier from weights and biases? (Python 3)
|
<p>I am trying to create an MLPClassifier with predefined weights and biases, so that I can save them to a file and then load them back later.</p>
<p>If I train the network like this:</p>
<pre><code>import numpy as np
from sklearn.neural_network import MLPClassifier
data = np.load("data.npy")
labels = np.load("labels.npy")
clf = MLPClassifier()
clf.fit(data, labels)
np.save("weights.npy", clf.coefs_)
np.save("biases.npy", clf.intercepts_)
</code></pre>
<p>and then access the weights and biases like this:</p>
<pre><code>import numpy as np
from sklearn.neural_network import MLPClassifier
weights = np.load("weights.npy")
biases = np.load("biases.npy")
</code></pre>
<p>I want to be able to create a new network like:</p>
<pre><code>clf = MLPClassifier(weights=weights, biases=biases)
</code></pre>
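<p>As a fallback (not what I'm asking for, but the only approach I know of) I could persist the whole fitted estimator instead of the raw arrays:</p>
<pre><code>import joblib

joblib.dump(clf, "mlp.joblib")      # stores weights, biases and all other fitted state
clf_restored = joblib.load("mlp.joblib")
</code></pre>
<p>But I'd still like to know whether an <code>MLPClassifier</code> can be constructed directly from <code>weights</code> and <code>biases</code> arrays.</p>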
|
<python><scikit-learn><deep-learning>
|
2022-12-10 12:53:20
| 1
| 789
|
sbottingota
|
74,753,015
| 12,500,949
|
Decode Httrack encoded urls in Python?
|
<p>I have downloaded a full website using Httrack Website Copier and now I want to retrieve all image source ('src') urls using Python 3.7.</p>
<p>I already did that, but for further use I need those URLs to be in plain text; instead they are something like this:</p>
<pre><code>cid:httpsX3aX2fX2fcommonsX2emX2ewikimediaX2eorgX2fstaticX2fimagesX2fmobileX2fcopyrightX2fcommonswikiX2dwordmarkX2esvg
cid:httpsX3aX2fX2fuploadX2ewikimediaX2eorgX2fwikipediaX2fcommonsX2fthumbX2f6X2f6eX2fBursitisX5fElbowX5fWCX2eJPGX2f800pxX2dBursitisX5fElbowX5fWCX2eJPGX3f20070925222131
cid:httpsX3aX2fX2fuploadX2ewikimediaX2eorgX2fwikipediaX2fcommonsX2fthumbX2f6X2f62X2fPDX2diconX2esvgX2f64pxX2dPDX2diconX2esvgX2epng
cid:httpsX3aX2fX2fuploadX2ewikimediaX2eorgX2fwikipediaX2fcommonsX2fthumbX2f6X2f6eX2fBursitisX5fElbowX5fWCX2eJPGX2f120pxX2dBursitisX5fElbowX5fWCX2eJPGX3f20070925222131
</code></pre>
<p>I don't know what these cid: URLs are, but Google sent me to this: <a href="https://stackoverflow.com/questions/66656067/replace-cidnumber-with-chars-using-python-when-extracting-text-from-pdf-fil">Replace (cid:<number>) with chars using Python when extracting text from PDF files</a>,
which seems somehow related to this problem, but it doesn't help me (or at least I don't know how).
Also, I thought they were somehow escaped URLs, so I tried the solution from here, which doesn't work either: <a href="https://stackoverflow.com/questions/8136788/decode-escaped-characters-in-url">Decode escaped characters in URL</a></p>
<p>This is my attempt at solving this problem in Python:</p>
<pre><code>import urllib.parse
mystr = "cid:httpsX3aX2fX2fcommonsX2emX2ewikimediaX2eorgX2fstaticX2fimagesX2fmobileX2fcopyrightX2fcommonswikiX2dwordmarkX2esvg"
print(mystr.encode('utf-8'))
print(mystr.encode('utf-8').decode('utf-8', errors='ignore'))
print(mystr.encode('utf-8').decode('utf-8'))
print(urllib.parse.unquote(mystr, encoding='utf-8', errors='replace'))
</code></pre>
<p>Note that I also tried to decode them from UTF-8, because I thought that might work, but it doesn't either.</p>
<p>Result of my code above is:</p>
<pre><code>b'cid:httpsX3aX2fX2fcommonsX2emX2ewikimediaX2eorgX2fstaticX2fimagesX2fmobileX2fcopyrightX2fcommonswikiX2dwordmarkX2esvg'
cid:httpsX3aX2fX2fcommonsX2emX2ewikimediaX2eorgX2fstaticX2fimagesX2fmobileX2fcopyrightX2fcommonswikiX2dwordmarkX2esvg
cid:httpsX3aX2fX2fcommonsX2emX2ewikimediaX2eorgX2fstaticX2fimagesX2fmobileX2fcopyrightX2fcommonswikiX2dwordmarkX2esvg
cid:httpsX3aX2fX2fcommonsX2emX2ewikimediaX2eorgX2fstaticX2fimagesX2fmobileX2fcopyrightX2fcommonswikiX2dwordmarkX2esvg
</code></pre>
<p>You can replace it with any of those strings from the beginning of my post and there will not be any change.</p>
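<p>My current working theory (just a guess at the encoding scheme: "X" followed by two hex digits stands for that character, e.g. X3a -> ":", X2f -> "/") is something like this, though I don't know if it is reliable:</p>
<pre><code>import re

def decode_httrack(s):
    if s.startswith("cid:"):
        s = s[4:]
    # replace every "X" + two hex digits with the corresponding character
    return re.sub(r"X([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), s)

print(decode_httrack("cid:httpsX3aX2fX2fcommonsX2emX2ewikimediaX2eorgX2fstaticX2fimagesX2fmobileX2fcopyrightX2fcommonswikiX2dwordmarkX2esvg"))
# https://commons.m.wikimedia.org/static/images/mobile/copyright/commonswiki-wordmark.svg
</code></pre>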
<p>Thanks in advance!</p>
|
<python><python-3.x><utf-8><urldecode><httrack>
|
2022-12-10 12:44:20
| 0
| 439
|
YoYoYo
|
74,752,844
| 6,551,439
|
How can I pass validated data to another custom validator class in DRF?
|
<p>I have this kind of serializer.py:</p>
<pre><code>class PostSerializer(serializers.ModelSerializer):
title = serializers.CharField(validators=[TitleValidator()])
slug = serializers.CharField(validators=[SlugsValidator()], max_length=100, required=False)
</code></pre>
<p>and I have two class validators for these fields:</p>
<pre><code>class TitleValidator:
MIN_TITLE_LENGTH = 20
def __call__(self, title: str):
if len(title) < self.MIN_TITLE_LENGTH:
raise ValidationError(f"Min title length is {self.MIN_TITLE_LENGTH}")
return title
class SlugsValidator:
def __call__(self, slug):
# Get title here
return slug
</code></pre>
<p>How can I pass the validated title into the <code>SlugsValidator</code> class?</p>
<p>I've tried to pass data directly to the <code>TitleValidator</code> instance, but the only thing I can get is the field itself, not the actual value.
Another way was to pass data inside the validate() method, but it seems like the custom class validators are executed first, and I'm getting an error that the title argument is not provided.
Is there any way I can achieve this?</p>
<p>It seems like it is possible to get all fields if you define all class validators inside the <code>Meta</code> class, but I'm wondering whether it is possible without using the <code>Meta</code> class.</p>
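<p>The pattern I keep coming back to (assuming I can live with doing the cross-field check at the object level instead of per field) is the serializer's <code>validate()</code> hook:</p>
<pre><code>class PostSerializer(serializers.ModelSerializer):
    # ... fields as above ...

    def validate(self, attrs):
        title = attrs.get('title', '')
        slug = attrs.get('slug', '')
        # cross-field check between title and slug goes here
        return attrs
</code></pre>
<p>But as mentioned, the per-field class validators run before it, so I'm still looking for a way to share the title with <code>SlugsValidator</code> itself.</p>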
|
<python><django><django-rest-framework>
|
2022-12-10 12:19:39
| 0
| 1,575
|
Andrew
|
74,752,810
| 10,266,059
|
Why do I get "ModuleNotFoundError: No module named 'TAP'" when TAP is installed?
|
<p>I am trying to use Rx to validate a YAML document against a schema, but python can't find the TAP module even though I've just installed it and I can see it's right there:</p>
<pre><code>[:~/git/Rx/python] [my-venv] master* Β± python3 rx-test.py
Traceback (most recent call last):
File "/home/atsaloli/git/Rx/python/rx-test.py", line 1, in <module>
from TAP.Simple import *
ModuleNotFoundError: No module named 'TAP'
[:~/git/Rx/python] [my-venv] master* 1 Β± pip3 freeze | grep -i tap
tap==0.2
tap.py==3.1
[:~/git/Rx/python] [my-venv] master* Β± ls /home/atsaloli/my-venv/lib/python3.10/site-packages/tap/
adapter.py formatter.py line.py __main__.py parser.py rules.py tests
directive.py __init__.py loader.py main.py __pycache__ runner.py tracker.py
[:~/git/Rx/python] [my-venv] master* Β±
</code></pre>
<p>Rx is <a href="https://rjbs.manxome.org/rx/index.html" rel="nofollow noreferrer">https://rjbs.manxome.org/rx/index.html</a></p>
<p>Similar questions (no accepted answer):</p>
<ul>
<li><a href="https://stackoverflow.com/questions/54956686/modulenotfounderror-no-module-named-script-but-script-actually-exist-and-dire">ModuleNotFoundError: No module named 'script' but script actually exist and directory precise</a></li>
<li><a href="https://stackoverflow.com/questions/60291357/modulenotfounderror-no-module-named-mysql-even-when-module-is-installed">ModuleNotFoundError: No module named 'mysql' even when module is installed</a></li>
<li><a href="https://stackoverflow.com/questions/338768/python-error-importerror-no-module-named?rq=1">Python error "ImportError: No module named"</a></li>
</ul>
<p>P.S. I did install <code>tap</code>:</p>
<pre><code>[:~/git/learn_python/yaml-validator] [my-venv] 1 $ pip install tap
Requirement already satisfied: tap in /home/atsaloli/my-venv/lib/python3.10/site-packages (0.2)
Requirement already satisfied: mc_bin_client in /home/atsaloli/my-venv/lib/python3.10/site-packages (from tap) (1.0.1)
[:~/git/learn_python/yaml-validator] [my-venv] $
</code></pre>
|
<python>
|
2022-12-10 12:13:39
| 0
| 1,676
|
Aleksey Tsalolikhin
|
74,752,799
| 4,169,571
|
Extract columns of numpy array consisting of lists
|
<p>I have a variable <code>var</code></p>
<p>When I print it in jupyter, it gives:</p>
<pre><code>var
#array([list([9166855000000.0, 13353516.0]),
# list([7818836000000.0, 11389833.0]),
# list([20269756000000.0, 29527304.0]),
# list([66886956000000.0, 97435384.0]),
# list([58686560000000.0, 85489730.0]),
# list([50809440000000.0, 74014984.0])], dtype=object)
</code></pre>
<p>or</p>
<pre><code>print(var)
[list([9166855000000.0, 13353516.0])
list([7818836000000.0, 11389833.0])
list([20269756000000.0, 29527304.0])
list([66886956000000.0, 97435384.0])
list([58686560000000.0, 85489730.0])
list([50809440000000.0, 74014984.0])]
</code></pre>
<p>The type is:</p>
<pre><code>print(type(var))
#<class 'numpy.ndarray'>
</code></pre>
<p>How can I divide the second element of each sublist by the first one?</p>
<p>I want to get the following values as an array or list:</p>
<pre><code>13353516.0/9166855000000.0
...
74014984.0/50809440000000.0
</code></pre>
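<p>For illustration, a possible sketch (assuming every sublist holds exactly two numeric entries, as in the printout above): convert the object array into a regular 2-D float array, then divide the second column by the first.</p>
<pre><code>import numpy as np

arr = np.array(var.tolist(), dtype=float)  # shape (n, 2)
ratios = arr[:, 1] / arr[:, 0]
print(ratios)
</code></pre>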
|
<python><list><numpy><numpy-ndarray>
|
2022-12-10 12:10:48
| 2
| 817
|
len
|
74,752,610
| 3,809,375
|
How to use argparse to create command groups like git?
|
<p>I'm trying to figure out how to properly use the built-in <a href="https://docs.python.org/3/library/argparse.html" rel="noreferrer">argparse</a> module to get output similar to tools
such as git, where I can display a help screen with all "root commands" nicely grouped, i.e.:</p>
<pre><code>$ git --help
usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           [--super-prefix=<path>] [--config-env=<name>=<envvar>]
           <command> [<args>]

These are common Git commands used in various situations:

start a working area (see also: git help tutorial)
   clone     Clone a repository into a new directory
   init      Create an empty Git repository or reinitialize an existing one

work on the current change (see also: git help everyday)
   add       Add file contents to the index
   mv        Move or rename a file, a directory, or a symlink
   restore   Restore working tree files
   rm        Remove files from the working tree and from the index

examine the history and state (see also: git help revisions)
   bisect    Use binary search to find the commit that introduced a bug
   diff      Show changes between commits, commit and working tree, etc
   grep      Print lines matching a pattern
   log       Show commit logs
   show      Show various types of objects
   status    Show the working tree status

grow, mark and tweak your common history
   branch    List, create, or delete branches
   commit    Record changes to the repository
   merge     Join two or more development histories together
   rebase    Reapply commits on top of another base tip
   reset     Reset current HEAD to the specified state
   switch    Switch branches
   tag       Create, list, delete or verify a tag object signed with GPG

collaborate (see also: git help workflows)
   fetch     Download objects and refs from another repository
   pull      Fetch from and integrate with another repository or a local branch
   push      Update remote refs along with associated objects

'git help -a' and 'git help -g' list available subcommands and some
concept guides. See 'git help <command>' or 'git help <concept>'
to read about a specific subcommand or concept.
See 'git help git' for an overview of the system.
</code></pre>
<p>Here's my attempt:</p>
<pre><code>from argparse import ArgumentParser


class FooCommand:
    def __init__(self, subparser):
        self.name = "Foo"
        self.help = "Foo help"
        subparser.add_parser(self.name, help=self.help)


class BarCommand:
    def __init__(self, subparser):
        self.name = "Bar"
        self.help = "Bar help"
        subparser.add_parser(self.name, help=self.help)


class BazCommand:
    def __init__(self, subparser):
        self.name = "Baz"
        self.help = "Baz help"
        subparser.add_parser(self.name, help=self.help)


def test1():
    parser = ArgumentParser(description="Test1 ArgumentParser")
    root = parser.add_subparsers(dest="command", description="All Commands:")

    # Group1
    FooCommand(root)
    BarCommand(root)

    # Group2
    BazCommand(root)

    args = parser.parse_args()
    print(args)


def test2():
    parser = ArgumentParser(description="Test2 ArgumentParser")

    # Group1
    cat1 = parser.add_subparsers(dest="command", description="Category1 Commands:")
    FooCommand(cat1)
    BarCommand(cat1)

    # Group2
    cat2 = parser.add_subparsers(dest="command", description="Category2 Commands:")
    BazCommand(cat2)

    args = parser.parse_args()
    print(args)
</code></pre>
<p>If you run <code>test1</code> you'd get:</p>
<pre><code>$ python mcve.py --help
usage: mcve.py [-h] {Foo,Bar,Baz} ...
Test1 ArgumentParser
options:
-h, --help show this help message and exit
subcommands:
All Commands:
{Foo,Bar,Baz}
Foo Foo help
Bar Bar help
Baz Baz help
</code></pre>
<p>Obviously this is not what I want: I just see all the commands in a flat list, with no groups whatsoever... so the next logical attempt was to group them. But if I run <code>test2</code> I get:</p>
<pre><code>$ python mcve.py --help
usage: mcve.py [-h] {Foo,Bar} ...
mcve.py: error: cannot have multiple subparser arguments
</code></pre>
<p>Which obviously means I'm not using argparse properly for the task at hand. So, is it possible to use argparse to achieve behaviour similar to git's? In the past I've relied on "hacks", so I thought the best practice here would be to use <code>add_subparsers</code>, but it seems I didn't understand that concept properly.</p>
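<p>For context, one commonly used workaround, shown only as a sketch and not as <em>the</em> way argparse intends this to be done: keep a single <code>add_subparsers()</code> call, hide its flat <code>{...}</code> listing via <code>metavar</code>, and render the grouped command overview yourself in the parser's <code>epilog</code> with <code>RawDescriptionHelpFormatter</code>.</p>
<pre><code>from argparse import ArgumentParser, RawDescriptionHelpFormatter

GROUPED_HELP = """\
category1 commands:
   Foo    Foo help
   Bar    Bar help

category2 commands:
   Baz    Baz help
"""

parser = ArgumentParser(
    description="Grouped-help sketch",
    epilog=GROUPED_HELP,
    formatter_class=RawDescriptionHelpFormatter,
)
# A single subparsers action; metavar replaces the flat {Foo,Bar,Baz} listing in usage.
sub = parser.add_subparsers(dest="command", metavar="<command>")
for name in ("Foo", "Bar", "Baz"):
    sub.add_parser(name)

args = parser.parse_args()
print(args)
</code></pre>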
|
<python><python-3.x><argparse>
|
2022-12-10 11:43:51
| 1
| 9,975
|
BPL
|
74,752,491
| 11,462,274
|
Boolean Series key reindexed when trying to generate a malleable filter to traverse a DataFrame
|
<p>I found <a href="https://stackoverflow.com/questions/41710789/boolean-series-key-will-be-reindexed-to-match-dataframe-index">this question</a> but I couldn't understand the problem and I couldn't adjust it for my case:</p>
<p>Usage example:</p>
<p>let's assume that I want to get the values of <code>competition</code> and <code>market_name</code> from line 5 of my CSV and filter lines 1, 2, 3 and 4 that have the same values in these columns, then sum the <code>back</code> column; if that sum is <code> > 0</code>, the new column called <code>invest</code> shows <code>TRUE</code>, otherwise <code>FALSE</code>. I want to do this while going through all the lines of the DataFrame, always calculating over only the previous lines. The idea is to know whether or not it would be profitable to invest, based on the existing records in the history.</p>
<p>In order for the code to become malleable for different types of filters, I did it this way:</p>
<p>My 100% executable code for testing:</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
from functools import reduce

df = pd.read_csv('test.csv')
df = df[df['result'].notnull()].reset_index(drop=True)

options = [
    'country',
    'competition',
    'market_name',
    'runner_name',
    'odds',
    'total_matched_vls',
    'minute_traded_vls'
]

def my_func(cb, i):
    dfilter = df[:i]
    filter = dfilter[reduce(lambda x, y : x & (df[y[0]] == y[1]), zip(cb, map(lambda x: df.iloc[i][x], cb)), True)]
    back_sum = filter['back'].sum()
    aaa = True
    if back_sum <= 0:
        aaa = False
    return aaa

def combs(a):
    if len(a) == 0:
        return [[]]
    cs = []
    for c in combs(a[1:]):
        cs += [c, c+[a[0]]]
    return cs

combinations = combs(options)
combinations = [x for x in combinations if x != []]

for cbnt in combinations:
    df['invest'] = [my_func(cbnt,i) for i in range(len(df))]
    true_backs = df[(df['invest'] == True)]['back']
    if (true_backs.sum() > 0):
        print(cbnt)
</code></pre>
<p>CSV sample example:</p>
<pre class="lang-none prettyprint-override"><code>clock_now,country,competition,match_id,match_name,market_id,market_name,runner_id,runner_name,total_matched,total_matched_vls,one,minute_traded,minute_traded_vls,odds,result,back,lay
2022/05/22 12:16,GB,English Premier League,31459682,Brighton v West Ham,1.199215861,First Half Goals 0.5,5851483,Over 0.5 Goals,2692.37,milhares,['149.12'β«Β§ '7.039999999999964'β«Β§ '48.190000000000055'β«Β§ '24.08999999999992'β«Β§ '75.65000000000009'],1797.9199999999998,milhares,1.7,WINNER,0.6545,-1
2022/05/22 12:20,HU,Hungarian NB II,31473686,Vasas v Soroksar,1.199438855,Over/Under 6.5 Goals,2542448,Under 6.5 Goals,5801.98,milhares,['125.76'β«Β§ '150.4699999999998'β«Β§ '200.79'β«Β§ '51.43000000000029'β«Β§ '478.25999999999976'],3011.07,milhares,1.01,LOSER,0.00935000000000001,-1
2022/05/22 12:21,LU,Luxembourg Division Nationale,31473752,FC Differdange 03 v Progres Niedercorn,1.199439863,Over/Under 0.5 Goals,5851483,Over 0.5 Goals,4451.85,milhares,['0.0'β«Β§ '12.889999999999873'β«Β§ '22.44000000000005'β«Β§ '1.2899999999999636'β«Β§ '219.71000000000004'],1038.0600000000004,milhares,1.71,WINNER,0.66385,-1
2022/05/22 12:16,GB,English Premier League,31459682,Brighton v West Ham,1.199215861,First Half Goals 0.5,5851483,Over 0.5 Goals,2692.37,milhares,['149.12'β«Β§ '7.039999999999964'β«Β§ '48.190000000000055'β«Β§ '24.08999999999992'β«Β§ '75.65000000000009'],1797.9199999999998,milhares,1.7,WINNER,0.6545,-1
2022/05/22 12:20,HU,Hungarian NB II,31473686,Vasas v Soroksar,1.199438855,Over/Under 6.5 Goals,2542448,Under 6.5 Goals,5801.98,milhares,['125.76'β«Β§ '150.4699999999998'β«Β§ '200.79'β«Β§ '51.43000000000029'β«Β§ '478.25999999999976'],3011.07,milhares,1.01,LOSER,0.00935000000000001,-1
2022/05/22 12:21,LU,Luxembourg Division Nationale,31473752,FC Differdange 03 v Progres Niedercorn,1.199439863,Over/Under 0.5 Goals,5851483,Over 0.5 Goals,4451.85,milhares,['0.0'β«Β§ '12.889999999999873'β«Β§ '22.44000000000005'β«Β§ '1.2899999999999636'β«Β§ '219.71000000000004'],1038.0600000000004,milhares,1.71,WINNER,0.66385,-1
2022/05/22 12:16,GB,English Premier League,31459682,Brighton v West Ham,1.199215861,First Half Goals 0.5,5851483,Over 0.5 Goals,2692.37,milhares,['149.12'β«Β§ '7.039999999999964'β«Β§ '48.190000000000055'β«Β§ '24.08999999999992'β«Β§ '75.65000000000009'],1797.9199999999998,milhares,1.7,WINNER,0.6545,-1
2022/05/22 12:20,HU,Hungarian NB II,31473686,Vasas v Soroksar,1.199438855,Over/Under 6.5 Goals,2542448,Under 6.5 Goals,5801.98,milhares,['125.76'β«Β§ '150.4699999999998'β«Β§ '200.79'β«Β§ '51.43000000000029'β«Β§ '478.25999999999976'],3011.07,milhares,1.01,LOSER,0.00935000000000001,-1
2022/05/22 12:21,LU,Luxembourg Division Nationale,31473752,FC Differdange 03 v Progres Niedercorn,1.199439863,Over/Under 0.5 Goals,5851483,Over 0.5 Goals,4451.85,milhares,['0.0'β«Β§ '12.889999999999873'β«Β§ '22.44000000000005'β«Β§ '1.2899999999999636'β«Β§ '219.71000000000004'],1038.0600000000004,milhares,1.71,WINNER,0.66385,-1
2022/05/22 12:16,GB,English Premier League,31459682,Brighton v West Ham,1.199215861,First Half Goals 0.5,5851483,Over 0.5 Goals,2692.37,milhares,['149.12'β«Β§ '7.039999999999964'β«Β§ '48.190000000000055'β«Β§ '24.08999999999992'β«Β§ '75.65000000000009'],1797.9199999999998,milhares,1.7,WINNER,0.6545,-1
2022/05/22 12:20,HU,Hungarian NB II,31473686,Vasas v Soroksar,1.199438855,Over/Under 6.5 Goals,2542448,Under 6.5 Goals,5801.98,milhares,['125.76'β«Β§ '150.4699999999998'β«Β§ '200.79'β«Β§ '51.43000000000029'β«Β§ '478.25999999999976'],3011.07,milhares,1.01,LOSER,0.00935000000000001,-1
2022/05/22 12:21,LU,Luxembourg Division Nationale,31473752,FC Differdange 03 v Progres Niedercorn,1.199439863,Over/Under 0.5 Goals,5851483,Over 0.5 Goals,4451.85,milhares,['0.0'β«Β§ '12.889999999999873'β«Β§ '22.44000000000005'β«Β§ '1.2899999999999636'β«Β§ '219.71000000000004'],1038.0600000000004,milhares,1.71,WINNER,0.66385,-1
2022/05/22 12:16,GB,English Premier League,31459682,Brighton v West Ham,1.199215861,First Half Goals 0.5,5851483,Over 0.5 Goals,2692.37,milhares,['149.12'β«Β§ '7.039999999999964'β«Β§ '48.190000000000055'β«Β§ '24.08999999999992'β«Β§ '75.65000000000009'],1797.9199999999998,milhares,1.7,WINNER,0.6545,-1
2022/05/22 12:20,HU,Hungarian NB II,31473686,Vasas v Soroksar,1.199438855,Over/Under 6.5 Goals,2542448,Under 6.5 Goals,5801.98,milhares,['125.76'β«Β§ '150.4699999999998'β«Β§ '200.79'β«Β§ '51.43000000000029'β«Β§ '478.25999999999976'],3011.07,milhares,1.01,LOSER,0.00935000000000001,-1
2022/05/22 12:21,LU,Luxembourg Division Nationale,31473752,FC Differdange 03 v Progres Niedercorn,1.199439863,Over/Under 0.5 Goals,5851483,Over 0.5 Goals,4451.85,milhares,['0.0'β«Β§ '12.889999999999873'β«Β§ '22.44000000000005'β«Β§ '1.2899999999999636'β«Β§ '219.71000000000004'],1038.0600000000004,milhares,1.71,WINNER,0.66385,-1
</code></pre>
<p>But i receive this Warning:</p>
<pre class="lang-none prettyprint-override"><code>UserWarning: Boolean Series key will be reindexed to match DataFrame index.
filter = dfilter[reduce(lambda x, y : x & (df[y[0]] == y[1]), zip(cb, map(lambda x: df.iloc[i][x], cb)), True)]
</code></pre>
<p>If I need the Series to keep the filter malleable like this, how should I proceed so that I don't get this warning?</p>
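<p>For what it's worth, a minimal sketch of one way to avoid the warning (an assumption about intent, not a full rewrite): the mask is currently built against the full <code>df</code>, so its index is longer than <code>dfilter</code>'s; building it against the slice keeps both indexes aligned.</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
from functools import reduce

def my_func(cb, i):
    dfilter = df[:i]
    row = df.iloc[i]
    # Build the boolean mask on dfilter itself so its index matches what is being filtered.
    mask = reduce(lambda acc, col: acc & (dfilter[col] == row[col]),
                  cb, pd.Series(True, index=dfilter.index))
    return dfilter[mask]['back'].sum() > 0
</code></pre>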
|
<python><pandas><dataframe>
|
2022-12-10 11:27:10
| 1
| 2,222
|
Digital Farmer
|
74,752,403
| 18,308,393
|
Button not clicking with scrapy playwright
|
<p>I am attempting to click an SSO login button for a platform to test its functionality with scrapy-playwright. I have entered an incorrect email, so after clicking the button it should show a text error saying the email is incorrect. However, nothing seems to happen.</p>
<p>For example:</p>
<pre><code>import scrapy
from scrapy_playwright.page import PageMethod
from path import Path

class telSpider(scrapy.Spider):
    name = 'tel'
    start_urls = 'https://my.tealiumiq.com/login/sso/'

    customer_settings = {
        'USER-AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.0 Safari/605.1.15',
        'CONNECTION': 'keep-alive',
        'ACCEPT': 'application/json, text/javascript, */*; q=0.01'
    }

    def start_requests(self):
        yield scrapy.Request(
            self.start_urls,
            meta = dict(
                playwright = True,
                playwright_include_page = True,
                playwright_page_methods = [
                    PageMethod('waitForLoadState', state = 'domcontentloaded'),
                    PageMethod("fill", selector = '#email', value = "incorrect@email.com"),
                    PageMethod('wait_for_selector', selector = "span[data-i18n='login.form.btn_submit']", state = 'visible'),
                    PageMethod("click", selector = "#submitBtn", button = "middle", delay = 2000, force=True),
                    PageMethod("waitForEvent", event = "click"),
                    PageMethod("screenshot", path=Path(__file__).parent / "tealium.png", full_page=True),
                ]
            ),
            callback = self.parse
        )

    def parse(self, response):
        print(response)
</code></pre>
<p>I have attempted <code>evaluate</code> instead, but for some reason it will only perform one event or the other. For example, it either inputs the email or clicks, not one after the other, and never both.</p>
<pre><code>
"""(function() {
    const setValue = Object.getOwnPropertyDescriptor(
        window.HTMLInputElement.prototype,
        "value"
    ).set;

    const modifyInput = (name, value) => {
        const input = document.getElementsByName(name)[0]
        setValue.call(input, value)
        input.dispatchEvent(new Event('input', { bubbles: true}))
    };

    modifyInput('email', "incorrect@email.com");
    document.querySelector("#submitBtn").click();
}());"""
</code></pre>
<p>Expected output:
<a href="https://i.sstatic.net/WxRXI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxRXI.png" alt="enter image description here" /></a></p>
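<p>As a hedged side note (an assumption, since nothing above confirms it is the cause): <code>PageMethod</code> forwards the method name to Playwright's <em>Python</em> page object, whose methods are snake_case, so camelCase names like <code>waitForLoadState</code> / <code>waitForEvent</code> would not resolve there. A minimal sketch of the same sequence with snake_case names:</p>
<pre><code>playwright_page_methods = [
    PageMethod("wait_for_load_state", "domcontentloaded"),
    PageMethod("fill", "#email", "incorrect@email.com"),
    PageMethod("wait_for_selector", "span[data-i18n='login.form.btn_submit']", state="visible"),
    PageMethod("click", "#submitBtn"),
    PageMethod("screenshot", path="tealium.png", full_page=True),
]
</code></pre>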
|
<javascript><python><scrapy><playwright>
|
2022-12-10 11:14:58
| 0
| 367
|
Dollar Tune-bill
|
74,752,394
| 7,032,967
|
Python file name on create
|
<p>I'm trying to learn files in Python and I have this piece of code. It used to create a normal file on the development server I was playing with, but in production the files come out as <strong>'2022-12-10 10:40:14.599578+00:00.webm'</strong>, enclosed in single quotes. I'm not sure whether the file will be a valid WebM file if I manually remove the single quotes. And if I'm doing something wrong, how can I improve this?</p>
<pre><code>time = timezone.now()
with open(path.join(path.dirname(path.realpath(__file__)), f'{time}.webm'), 'wb') as file:
    file.write(decode_string)
    file.close()
</code></pre>
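<p>As a side note, a small self-contained sketch of one option (assuming the goal is simply a readable, filename-safe timestamp): the default string form of an aware datetime contains spaces, colons and a timezone offset, which some filesystems and shells handle badly, so formatting it explicitly avoids that.</p>
<pre><code>from datetime import datetime, timezone

time = datetime.now(timezone.utc)  # stand-in for the timezone.now() call above
filename = f"{time.strftime('%Y-%m-%d_%H-%M-%S')}.webm"  # e.g. 2022-12-10_10-40-14.webm
</code></pre>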
|
<python><python-3.x><file>
|
2022-12-10 11:14:02
| 1
| 1,314
|
Ranu Vijay
|
74,752,390
| 14,082,033
|
How can I reduce outliers in the output of a regression model?
|
<p>I am using a Keras model for regression whose inputs are sensor measurements and whose output is the attitude of the sensor. This model consists of CuDNNLSTM and CNN layers. I need to reduce the number or range of outliers in the output.</p>
<p>The mean error is reasonable and low, but there are many outliers in the output. The mean error is around 1, but as you can see in the boxplot, I sometimes get errors of 180 (the maximum possible error).</p>
<p><a href="https://i.sstatic.net/UYETp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UYETp.png" alt="enter image description here" /></a></p>
<p>The training data has no outlier and has been preprocessed before.</p>
<p>How can I reduce the outliers in the output?
Are there any specific network topologies or layers that could handle this?</p>
<p>I tried normalizing the input or adding gaussian noise, but none of them had any impact on the number of outliers in the outputs. Also, I tried all possible loss functions (more than 38), and this is the best result.</p>
<p>The model is:</p>
<pre><code>Acc = Input((window_size, 3), name='acc')
Gyro = Input((window_size, 3), name='gyro')
AGconcat = concatenate([Acc, Gyro], axis=2, name='AGconcat')

fs = Input((1,), name='fs')

ACNN = Conv1D(filters=133,
              kernel_size=11,
              padding='same',
              activation=tfa.activations.mish,
              name='ACNN')(Acc)
ACNN = Conv1D(filters=109,
              kernel_size=11,
              padding='same',
              activation=tfa.activations.mish,
              name='ACNN1')(ACNN)
ACNN = MaxPooling1D(pool_size=3,
                    name='MaxPooling1D')(ACNN)
ACNN = Flatten(name='ACNNF')(ACNN)

GCNN = Conv1D(filters=142,
              kernel_size=11,
              padding='same',
              activation=tfa.activations.mish,
              name='GCNN')(Gyro)
GCNN = Conv1D(filters=116,
              kernel_size=11,
              padding='same',
              activation=tfa.activations.mish,
              name='GCNN1')(GCNN)
GCNN = MaxPooling1D(pool_size=3,
                    name='GyroMaxPool1D')(GCNN)
GCNN = Flatten(name='GCNNF')(GCNN)

AGconLSTM = Bidirectional(CuDNNGRU(128, return_sequences=True,
                                   # return_state=True,
                                   go_backwards=True,
                                   name='BiLSTM1'))(AGconcat)
FlattenAG = Flatten(name='FlattenAG')(AGconLSTM)

AG = concatenate([ACNN, GCNN, FlattenAG])
AG = Dense(units=256,
           activation=tfa.activations.mish)(AG)
Fdense = Dense(units=256,
               activation=tfa.activations.mish,
               name='Fdense')(fs)
AG = Flatten(name='AGF')(AG)

x = concatenate([AG, Fdense])
x = Dense(units=256,
          activation=tfa.activations.mish)(x)
x = Flatten(name='output')(x)
output = Dense(4, activation='linear', name='quat')(x)
</code></pre>
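<p>One idea sometimes used for quaternion (attitude) regression, offered here only as a hedged sketch rather than a fix known to remove these particular outliers: constrain the 4-unit output to unit length so every prediction lies on the valid rotation manifold. It assumes <code>x</code> is the last hidden tensor from the model above.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Dense, Lambda

raw_quat = Dense(4, activation='linear', name='quat_raw')(x)
output = Lambda(lambda q: tf.math.l2_normalize(q, axis=-1), name='quat')(raw_quat)
</code></pre>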
|
<python><tensorflow><keras><deep-learning>
|
2022-12-10 11:13:46
| 1
| 331
|
Arman Asgharpoor
|
74,752,386
| 5,381,753
|
API to get all pincodes inside a radius
|
<p>I'm working on a python application where we need to get all the pincodes within a specific radius.
We have a base location and a radius of 10Km is drawn from this base pincode.</p>
<p>Do we have any API where this can be achieved?</p>
<p>FYA - Mainly looking for Indian PostalCodes.</p>
|
<python><django><google-api><maps>
|
2022-12-10 11:13:23
| 1
| 771
|
Aravind Pillai
|
74,752,303
| 6,300,872
|
Backtrader, using mongodb as datafeed instead CSV
|
<p>I'm quite new to backtrader, and since I've started I couldn't stop wondering why there's no database support for the data feed. I've found a page on the official website describing how to implement a custom data feed. The implementation should be pretty easy, but on GitHub (or more generally on the web) I couldn't find a single implementation of a feed backed by MongoDB. I understand that CSVs are easier to manage and so on, but in some cases they can require a lot of RAM for storing all the data in memory at once. On the other hand, having a DB can be "RAM friendly" but will take longer during the backtesting process, even if the DB is a document store. Does anyone have any experience with both of these approaches? And if yes, is there some code I can take a look at?</p>
<p>Thanks!</p>
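<p>For context, a minimal sketch of the DataFrame route (the database, collection and field names here are assumptions, not from any existing integration): query MongoDB with pymongo, build a DataFrame, and hand it to backtrader's built-in <code>PandasData</code> feed.</p>
<pre><code>import backtrader as bt
import pandas as pd
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
cursor = client["market"]["ohlcv"].find({}, {"_id": 0}).sort("datetime", 1)

df = pd.DataFrame(list(cursor))
df["datetime"] = pd.to_datetime(df["datetime"])
df = df.set_index("datetime")  # expects open/high/low/close/volume columns

data = bt.feeds.PandasData(dataname=df)

cerebro = bt.Cerebro()
cerebro.adddata(data)
</code></pre>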
|
<python><mongodb><backtrader>
|
2022-12-10 11:02:32
| 0
| 847
|
Paolo Ardissone
|
74,752,253
| 5,684,405
|
Efficient edit pandas column values based on column value which is list of strings
|
<p>How can I efficiently (i.e. quickly) edit a pandas DataFrame column based on a string condition, where the column values are lists of strings?</p>
<p>e.g. find all rows whose <code>string_lists</code> column (a list of strings) contains a string from the set <code>{'Adam', 'bbb'}</code>, and remove these strings (<code>'Adam', 'bbb'</code>) from the <code>string_lists</code> value.</p>
<p>ex.</p>
<pre><code>id string_lists
0 ['aaa', 'Adam', 'bbb'] -> ['aaa']
</code></pre>
<p>EDIT:</p>
<p>I've solved it:</p>
<pre><code>drop_options={'?',2, 'abc'}
l1 = [1, '?', 2, 3, 4, 5, 'abc']
print(l1)
print([s for s in l1 if not any(s == drop for drop in drop_options) ])
</code></pre>
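<p>Applied to the DataFrame column itself, a minimal sketch of the same idea (assuming <code>string_lists</code> really holds Python lists, as in the example) would be:</p>
<pre><code>drop_options = {'Adam', 'bbb'}
df['string_lists'] = df['string_lists'].apply(
    lambda lst: [s for s in lst if s not in drop_options]
)
</code></pre>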
|
<python><pandas>
|
2022-12-10 10:53:17
| 1
| 2,969
|
mCs
|
74,752,251
| 10,773,616
|
Read Process Memory doesn't seem to give the right value
|
<p>I am trying to read memory from a process (a Game Boy Advance emulator) in Python using ReadProcessMemory. There is a memory viewer, and I am supposed to get 81 at 0xD273 (see picture). I am new to this; I tried to do everything correctly by passing the arguments to ReadProcessMemory by reference, but there might be some things that are wrong. I am pretty sure I have the right process id since it matches the one in the task manager.</p>
<p>When I run my code, I get random byte values that are different every time: 15, 255, 11, 195, but I think I should be getting 81.</p>
<p>I need to use 32-bit Python to run the script, otherwise I get error 299 (ERROR_PARTIAL_COPY).</p>
<p>Is there something that I'm doing wrong? I don't specify the base address, but I assumed it's handled by the <code>processHandle</code>.</p>
<p>Here is my code and an example of the output:</p>
<pre><code>result: 1, err code: 0, bytesRead: 1
data: 0000000000000015h
21
</code></pre>
<pre><code>import ctypes as c
from ctypes import wintypes as w
import psutil
# Must use py -3-32 vba_script.py
vba_process_id = [p.pid for p in psutil.process_iter() if "visualboyadvance" in p.name()][0]
pid = vba_process_id # I assume you have this from somewhere.
k32 = c.WinDLL('kernel32', use_last_error=True)
OpenProcess = k32.OpenProcess
ReadProcessMemory = k32.ReadProcessMemory
CloseHandle = k32.CloseHandle
processHandle = OpenProcess(0x10, False, pid)
addr = c.c_void_p(0xD273)
dataLen = 8
data = c.c_byte()
bytesRead = c.c_byte()
result = ReadProcessMemory(processHandle, c.byref(addr), c.byref(data), c.sizeof(data), c.byref(bytesRead))
e = c.get_last_error()
print('result: {}, err code: {}, bytesRead: {}'.format(result,e,bytesRead.value))
print('data: {:016X}h'.format(data.value))
print(data.value)
CloseHandle(processHandle)
</code></pre>
<p><a href="https://i.sstatic.net/9Jd3C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Jd3C.png" alt="enter image description here" /></a></p>
|
<python><emulation><ctypes><readprocessmemory><gameboy>
|
2022-12-10 10:52:57
| 1
| 371
|
JΓ©rΓ©my Talbot-PΓ’quet
|
74,752,181
| 17,795,398
|
Python and terminal: keep python environment after the file has been executed
|
<p>I have to run a Python file from the Windows terminal (Windows PowerShell). I want Python to stay open after the file has been executed (<code>python foo.py</code>), with the variables defined in the file still available.</p>
<p>If it is unclear what I want: I want the same behavior as IDLE, where after the file has been executed you can write code at the IDLE prompt and the variables defined in the file are still available in the current session.</p>
<p>I need this for Windows now, but I might also need this for Ubuntu.</p>
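<p>For reference, CPython's <code>-i</code> flag does exactly this on both Windows and Ubuntu: it runs the script and then drops into an interactive prompt with the script's variables still defined.</p>
<pre><code>python -i foo.py
</code></pre>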
|
<python><linux><windows>
|
2022-12-10 10:40:17
| 1
| 472
|
Abel GutiΓ©rrez
|
74,752,120
| 11,963,167
|
Waiting for an external response without consuming resources
|
<p>I have a simple Python program that needs to apply a function to circles based on their radius. My issue is that I need to fetch the radius from a third-party API, which may take up to several hours to send me the radius values (as a CSV file, for instance).</p>
<p>To be clearer:</p>
<pre class="lang-py prettyprint-override"><code>class Circle():
    def __init__(self, id):
        self.id = id
        self.radius = None

def fetch_radius(circle):
    ''' call an API and retrieve the radius based on
    the circle id.
    '''
    radius = fetch_from_api(circle.id)  # takes hours to return
    circle.radius = radius

def main():
    circle = Circle(id=2)
    fetch_radius(circle)
</code></pre>
<p>This design seems flawed, because my program and objects hang for hours waiting for an external signal, and consume resources.</p>
<p>How can I improve my design ?</p>
<p>Edit: I thought of pickling the object and closing the program while waiting for the answer, but I'm not convinced. Yehven also mentioned asyncio. I've added both tags.</p>
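<p>As one hedged sketch of the asyncio direction (assuming Python 3.9+ for <code>asyncio.to_thread</code>, and reusing <code>Circle</code> / <code>fetch_from_api</code> from above): the blocking API call runs in a worker thread, so the event loop can keep doing other work instead of hanging for hours.</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

async def fetch_radius_async(circle):
    # Run the slow, blocking call off the event loop.
    circle.radius = await asyncio.to_thread(fetch_from_api, circle.id)

async def main():
    circle = Circle(id=2)
    fetch_task = asyncio.create_task(fetch_radius_async(circle))
    # ... other work can happen here while the API call is pending ...
    await fetch_task

asyncio.run(main())
</code></pre>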
|
<python><python-asyncio><pickle><external>
|
2022-12-10 10:32:02
| 0
| 496
|
Clej
|
74,752,014
| 1,272,975
|
Structure of importable pytest plugin
|
<p>I have some data/fixtures that are common across several projects, so the sensible thing to do is to refactor them into a standalone project and import them into the other projects. The structure of the project I'm about to refactor looks like this:</p>
<pre><code>my-project/
src/
db/
__init__.py
module1.py
tests/
data/
test_data.csv
test_module1.py
</code></pre>
<p><code>test_module1.py</code> contains both the fixtures to be refactored and the unit tests for <code>module1.py</code>. The official way to share fixtures is via <a href="https://docs.pytest.org/en/latest/how-to/writing_plugins.html" rel="nofollow noreferrer">pytest-plugins</a>. The official documentation provides the following <code>pyproject.toml</code> example:</p>
<pre><code># sample ./pyproject.toml file
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "myproject"
classifiers = [
"Framework :: Pytest",
]
[tool.setuptools]
packages = ["myproject"]
[project.entry_points]
pytest11 = [
"myproject = myproject.pluginmodule",
]
</code></pre>
<p>My questions are:</p>
<ol>
<li>Do I refactor the fixtures and data from <code>tests/</code> into <code>new-fixture-project/src/test_data/</code> (like a regular package)? Or <code>new-fixture-project/tests/</code>?</li>
<li>In either case, what should the entry point be? Is it <code>new-fixture-project = test_data.data</code> (supposing I've refactored the fixtures into <code>test_data/data.py</code>)?</li>
</ol>
<p>[Edit]
Answering my own question from the comments: this is what you need to build it as a regular package.</p>
<p>In <code>pyproject.toml</code></p>
<pre><code>[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
</code></pre>
<p>In <code>setup.cfg</code>, add this section</p>
<pre><code>[options.entry_points]
pytest11 =
test_data = test_data.plugin
</code></pre>
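<p>For completeness, a minimal sketch of what the plugin module referenced by that entry point might contain (the module path and fixture name are hypothetical, taken only from the entry point above):</p>
<pre><code># test_data/plugin.py
import pathlib

import pytest

DATA_DIR = pathlib.Path(__file__).parent / "data"

@pytest.fixture
def test_data_csv():
    """Path to the shared CSV data file bundled with the package."""
    return DATA_DIR / "test_data.csv"
</code></pre>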
|
<python><pytest><directory-structure>
|
2022-12-10 10:13:16
| 1
| 734
|
stevew
|
74,751,979
| 13,860,217
|
How to scrape dynamic content displayed in tooltips using scrapy
|
<p>I'd like to extract the text inside the class "spc spc-nowrap" using Scrapy and the container software Docker to scrape dynamically loaded content.</p>
<pre><code><div id="tooltipdiv" style="position: absolute; z-index: 100; left: 637.188px; top: 625.609px; display: none;">
<span class="help">
<span class="help-box2 y-h wider">
<span class="wrap-help">
<span class="spc spc-nowrap" id="tooltiptext">
text to extract
<br>
text to extract
<strong>text to extract</strong>
<br>
</span>
</span>
</span>
</span>
</div>
</code></pre>
<p>Which XPath or CSS selector returns this data?</p>
<pre><code>response.css("span#tooltiptext.spc.spc-nowrap").extract()
</code></pre>
<p>yields an empty list.</p>
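<p>For reference, two selector sketches (assuming the markup above is actually present in the downloaded HTML — the id alone is enough to target the element, and <code>::text</code> / <code>//text()</code> is needed to pull out the text nodes):</p>
<pre><code>response.css("span#tooltiptext ::text").getall()
# or
response.xpath('//span[@id="tooltiptext"]//text()').getall()
</code></pre>
<p>If these still come back empty, the tooltip text is most likely injected by JavaScript after page load; plain Scrapy never sees it then, and a JavaScript-rendering backend (for example Splash or Playwright) is needed.</p>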
|
<python><web-scraping><scrapy><web-crawler><tooltip>
|
2022-12-10 10:07:51
| 1
| 377
|
Michael
|
74,751,950
| 1,265,955
|
How to set cell formula using odfpy
|
<p>I keep getting Err:508 or #Name? (Err:525) when opening a spreadsheet created with odfpy and putting a formula in a cell with the following code:</p>
<pre><code>tc = TableCell( valuetype="string", formula=calc, value=0 )
</code></pre>
<p>In the spreadsheet the formula looks fine, and any edit to it (together with reverting the edit, so there is no net change to the formula) makes it think it has changed, re-evaluate it, and then it works fine. But not until it is tweaked. What am I missing?</p>
<p>Here's the formula I'm using, in case that is relevant:</p>
<pre><code>=TEXT(NOW()+0*LEN(CONCAT(A2:F999));"YYYY.MM.DD.HH.MM.SS")
</code></pre>
<p>(purpose, to timestamp the most recent edit to a range of cells). I note that at the time the formula is inserted in row 1, that other rows haven't been inserted yet, but a few are in subsequent steps. But I wouldn't think any attempt to evaluate the range would occur until loaded into LibreOffice, so that doesn't seem a likely cause of the error.</p>
<hr />
<p>I already am using <code>;</code> not <code>,</code> in the function parameters, which seems to be the most successful answer for other people that encounter this error, and I'm using the English install, which is the other seeming issue some have with copy/pasting formulas. But still no joy, and not much that is relevant shows up in searching.</p>
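<p>As a hedged pointer rather than a confirmed fix: ODF files store formulas in OpenFormula syntax, with an <code>of:</code> namespace prefix and cell references wrapped in brackets, so a sketch of the same cell might look like the following; whether this removes the Err:508/Err:525 in this case is an assumption.</p>
<pre><code>calc = 'of:=TEXT(NOW()+0*LEN(CONCAT([.A2:.F999]));"YYYY.MM.DD.HH.MM.SS")'
tc = TableCell(valuetype="string", formula=calc, value=0)
</code></pre>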
|
<python><odfpy>
|
2022-12-10 10:03:14
| 1
| 515
|
Victoria
|
74,751,832
| 5,618,856
|
pandas dataframe to_sql - how to preserve data types
|
<p>I have a dataframe (from a nested json) with</p>
<pre><code>df.dtypes
name string
nested1 object
nested2 object
a_number float64
</code></pre>
<p>Writing it to a database with sqlite3 (<code>df.to_sql(table, connection)</code>) fails, presumably because it can't handle the object columns. My workaround <code>df.astype(str)</code> is ugly and I lose the data types. Is there a way to preserve DataFrame data types when writing to an SQLite db?</p>
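<p>A minimal sketch of one workaround (an assumption about acceptable output, since SQLite has no native nested type): serialize only the nested object columns to JSON text and leave the scalar columns' dtypes untouched.</p>
<pre><code>import json
import sqlite3

out = df.copy()
for col in ['nested1', 'nested2']:
    out[col] = out[col].apply(json.dumps)   # nested dicts/lists become TEXT

with sqlite3.connect("example.db") as connection:
    out.to_sql("my_table", connection, if_exists="replace", index=False)
</code></pre>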
|
<python><pandas><sqlite>
|
2022-12-10 09:42:47
| 1
| 603
|
Fred
|