QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
74,779,065
| 12,735,064
|
OR-Tools CP-Sat: Efficiently optimize toward an average value
|
<p>I'm new to OR-Tools and CP-SAT, and I'm looking for guidance on how to most efficiently optimize sums of variables toward a common average value. Here's a very simplified example to help explain:</p>
<pre class="lang-py prettyprint-override"><code>from ortools.sat.python import cp_model
from random import randint
model = cp_model.CpModel()
# There are three groups of 5 boolean variables: a, b, and c
a_variables = [model.NewBoolVar(f"a{n}") for n in range(5)]
sum_a = model.NewIntVar(0, 5, "sum_a")
model.Add(sum_a == cp_model.LinearExpr.Sum(a_variables))
b_variables = [model.NewBoolVar(f"b{n}") for n in range(5)]
sum_b = model.NewIntVar(0, 5, "sum_b")
model.Add(sum_b == cp_model.LinearExpr.Sum(b_variables))
c_variables = [model.NewBoolVar(f"c{n}") for n in range(5)]
sum_c = model.NewIntVar(0, 5, "sum_c")
model.Add(sum_c == cp_model.LinearExpr.Sum(c_variables))
# Assign a random sum for the boolean variables
model.Add(cp_model.LinearExpr.Sum(a_variables + b_variables + c_variables) == randint(0, 15))
# We want to optimize so that each group of variables a, b, and c has the same sum,
# or as close as possible.
desired_sum = 3
objectives = []
# Add objectives here
model.Minimize(cp_model.LinearExpr.Sum(objectives))
solver = cp_model.CpSolver()
solver.parameters.log_search_progress = True
solver.parameters.num_search_workers = 1
solver.Solve(model)
solution = {}
solution["sum_a"] = solver.Value(sum_a)
solution["sum_b"] = solver.Value(sum_b)
solution["sum_c"] = solver.Value(sum_c)
print(solution)
</code></pre>
<p>It seems to me that, in order to guarantee a good result, the objectives should penalize differences non-linearly, so that large differences get disproportionately heavier penalties than small ones. My best effort so far is to take the absolute value of the difference, then create a series of boolean variables that test for increasingly large differences, and penalize each of these variables more than the last.</p>
<pre class="lang-py prettyprint-override"><code>diff_a = model.NewIntVar(0, 3, "diff_a")
model.AddAbsEquality(diff_a, desired_sum - sum_a)
diff_b = model.NewIntVar(0, 3, "diff_b")
model.AddAbsEquality(diff_b, desired_sum - sum_b)
diff_c = model.NewIntVar(0, 3, "diff_c")
model.AddAbsEquality(diff_c, desired_sum - sum_c)
objectives = []
diff_a_penalties = [model.NewBoolVar(f"diff_a_{n}") for n in range(3)]
for n, penalty in enumerate(diff_a_penalties):
    model.Add(diff_a > n).OnlyEnforceIf(penalty)
    model.Add(diff_a <= n).OnlyEnforceIf(penalty.Not())
objectives.append(cp_model.LinearExpr.WeightedSum(diff_a_penalties, range(1, 4)))
diff_b_penalties = [model.NewBoolVar(f"diff_b_{n}") for n in range(3)]
for n, penalty in enumerate(diff_b_penalties):
    model.Add(diff_b > n).OnlyEnforceIf(penalty)
    model.Add(diff_b <= n).OnlyEnforceIf(penalty.Not())
objectives.append(cp_model.LinearExpr.WeightedSum(diff_b_penalties, range(1, 4)))
diff_c_penalties = [model.NewBoolVar(f"diff_c_{n}") for n in range(3)]
for n, penalty in enumerate(diff_c_penalties):
    model.Add(diff_c > n).OnlyEnforceIf(penalty)
    model.Add(diff_c <= n).OnlyEnforceIf(penalty.Not())
objectives.append(cp_model.LinearExpr.WeightedSum(diff_c_penalties, range(1, 4)))
</code></pre>
<p>This works well and is much faster than my first attempt, which used <code>AddMultiplicationEquality</code> to square the differences. But I'm wondering if anyone has come up with a more efficient way to accomplish this optimization; my method certainly adds a lot of variables to the model.</p>
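<p>For comparison, a minimal sketch of that first attempt with <code>AddMultiplicationEquality</code>, reusing the <code>diff_a</code>/<code>diff_b</code>/<code>diff_c</code> variables defined above:</p>
<pre class="lang-py prettyprint-override"><code># Squared-penalty variant: slower in my tests than the boolean thresholds above
sq_a = model.NewIntVar(0, 9, "sq_a")
model.AddMultiplicationEquality(sq_a, [diff_a, diff_a])
sq_b = model.NewIntVar(0, 9, "sq_b")
model.AddMultiplicationEquality(sq_b, [diff_b, diff_b])
sq_c = model.NewIntVar(0, 9, "sq_c")
model.AddMultiplicationEquality(sq_c, [diff_c, diff_c])
model.Minimize(sq_a + sq_b + sq_c)
</code></pre>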
|
<python><or-tools><cp-sat>
|
2022-12-13 01:27:41
| 0
| 1,047
|
schwartz721
|
74,778,994
| 315,168
|
Pandas open/close resample and rolling over values from the previous day
|
<p>I have transaction data that I hope to resample in a fashion similar to OHLC stock market prices.</p>
<ul>
<li>The goal is to display a meaningful price summary for each day</li>
</ul>
<p>However, my challenge is that the transactional data is sparse.</p>
<ul>
<li><p>There are days without transactions</p>
</li>
<li><p>The opening price of the day might happen in the middle of the day and does not roll over from the previous day automatically</p>
</li>
</ul>
<p>I can do the naive OHLC with <code>resample()</code>, as seen in the example below. Because the data is sparse, the straightforward resample gives less-than-ideal results. To get a meaningful price sample, the following conditions should also be met:</p>
<ul>
<li><p>The opening price is always set to the closing price of the previous day</p>
</li>
<li><p>If any day does not have transactions, all OHLC values are the closing price of the previous day ("price does not move")</p>
</li>
</ul>
<p>I can do this in pure Python, as it is not that difficult, but it is not very computationally efficient for high volumes of data. Thus, my question is, would Pandas offer any clever way of doing <code>resample</code> or <code>aggregate</code> satisfying the conditions above, but without needing to loop values in Python manually?</p>
<p>The example code is below:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# Transactions do not have regular intervals and may miss days
data = {
"timestamp": [
pd.Timestamp("2020-01-01 01:00"),
pd.Timestamp("2020-01-01 05:00"),
pd.Timestamp("2020-01-02 03:00"),
pd.Timestamp("2020-01-04 04:00"),
pd.Timestamp("2020-01-05 00:00"),
],
"transaction": [
100.00,
102.00,
103.00,
102.80,
99.88
]
}
df = pd.DataFrame.from_dict(data, orient="columns")
df.set_index("timestamp", inplace=True)
print(df)
</code></pre>
<pre><code> transaction
timestamp
2020-01-01 01:00:00 100.00
2020-01-01 05:00:00 102.00
2020-01-02 03:00:00 103.00
2020-01-04 04:00:00 102.80
2020-01-05 00:00:00 99.88
</code></pre>
<pre class="lang-py prettyprint-override"><code># https://stackoverflow.com/a/36223274/315168
naive_resample = df["transaction"].resample("1D").agg({'open': 'first', 'high': 'max', 'low': 'min', 'close': 'last'})
print(naive_resample)
</code></pre>
<p>In this result, you can see that:</p>
<ul>
<li><p>open/close do not match over daily boundaries</p>
</li>
<li><p>if a day does not have transactions price is marked as <code>NaN</code></p>
</li>
</ul>
<pre><code> open high low close
timestamp
2020-01-01 100.00 102.00 100.00 102.00
2020-01-02 103.00 103.00 103.00 103.00
2020-01-03 NaN NaN NaN NaN
2020-01-04 102.80 102.80 102.80 102.80
2020-01-05 99.88 99.88 99.88 99.88
</code></pre>
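<p>To make the conditions concrete, this is the kind of vectorized post-processing I am imagining on top of the naive resample (a sketch only; I have not verified the edge cases):</p>
<pre class="lang-py prettyprint-override"><code>fixed = naive_resample.copy()
# previous day's close, carried forward over empty days
prev_close = fixed["close"].ffill().shift(1)
# open is always the previous day's close (except the very first day)
fixed["open"] = prev_close.fillna(fixed["open"])
# days without transactions: all remaining OHLC values become the previous close
for col in ["high", "low", "close"]:
    fixed[col] = fixed[col].fillna(prev_close)
</code></pre>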
|
<python><pandas>
|
2022-12-13 01:13:46
| 1
| 84,872
|
Mikko Ohtamaa
|
74,778,959
| 15,542,245
|
Dynamic array not behaving as expected
|
<p>I am replacing words/phrases in a larger project. I can do it the first time around, but when attempting to build on the changed line I have an issue with the dynamic array <code>listings[]</code>. To summarize the issue, here is some basic Python:</p>
<pre><code># concurrentTest.py
import re
subString = [' string1',' string2',' string3']
word = ['new_word1', 'new_word2', 'new_word3']
lineString = ['This line, needs new_word','This line, needs new_word', 'This line, needs new_word']
# matches word with underscore
wordsRegex = r"(\S+_\S+[a-z])"
# matches word ending in comma
stringRegex = r"(\s[a-z]+,)"
# store results
listings = []
# make the first change to our line string with a new word for each line
for (x, y) in zip(subString, lineString):
    listings = re.sub(stringRegex, x, y)
    # print(listings)  # This shows that listings have been changed with a string number: 'This string1 needs new_word'

# try to make the second change to give 'This string1 needs new_word1' etc
for (x, y) in zip(word, listings):
    listings = re.sub(wordsRegex, x, y)
    # print(listings)  # If listings is declared [] like lineString, why are just the first 3 letters 'Thi' printed?
</code></pre>
|
<python><iteration><dynamic-arrays>
|
2022-12-13 01:04:01
| 0
| 903
|
Dave
|
74,778,901
| 15,412,256
|
Are pandas DataFrame methods picklable?
|
<p>I am developing an algorithm to process a pool of datasets in parallel. I would like to examine whether the DataFrame methods are picklable.</p>
<p>I wrote the code as the following:</p>
<pre><code>import pickle
import pandas as pd
# Define a custom function that we will use with the apply method
def my_custom_function(col):
    # Do something with the col of data
    return col + 1

# Create a DataFrame
df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Check if the apply method is picklable
try:
    pickle.dumps(df[0].apply(my_custom_function))
    print("The apply method is picklable.")
except:
    print("The apply method is not picklable.")
out:
The apply method is picklable.
</code></pre>
<p>Is this true?</p>
|
<python><multithreading><parallel-processing><multiprocessing><pickle>
|
2022-12-13 00:54:43
| 0
| 649
|
Kevin Li
|
74,778,842
| 18,308,393
|
Create a unique id from each id in a list of dictionaries
|
<p>I have a list of dictionaries with a key named <code>id</code>; however, some of these are duplicates. This is a sample of my dataset; my actual dicts have multiple keys, but I'm filtering by <code>id</code>. Essentially, I do not want to remove the duplicate ids; instead, I want to create a new id value by incrementing a number onto it, and this new number must not already exist in the current list of ids. However, I find that not all of them are transformed.</p>
<p>For example:</p>
<pre><code>import copy
ids = [{'id': 44},{'id': 49},{'id': 48},{'id': 53},{'id': 46},{'id': 51},{'id': 45},{'id': 50},{'id': 47},{'id': 52},{'id': 5091},{'id': 5060},{'id': 5002},{'id': 5071},{'id': 5011},{'id': 5027},{'id': 26},{'id': 29},{'id': 5034},{'id': 5086},{'id': 5063},{'id': 5022},{'id': 5014},{'id': 74},{'id': 5061},{'id': 4},{'id': 5013},{'id': 5076},{'id': 5055},{'id': 5006},{'id': 5051},{'id': 5032},{'id': 5008},{'id': 14},{'id': 35},{'id': 5},{'id': 7},{'id': 64},{'id': 5049},{'id': 5021},{'id': 5059},{'id': 5029},{'id': 6},{'id': 30},{'id': 23},{'id': 31},{'id': 5017},{'id': 8},{'id': 17},{'id': 24},{'id': 5007},{'id': 5033},{'id': 5065},{'id': 5020},{'id': 5085},{'id': 5025},{'id': 5068},{'id': 5041},{'id': 5048},{'id': 5056},{'id': 5080},{'id': 5070},{'id': 5072},{'id': 5077},{'id': 5073},{'id': 5067},{'id': 5088},{'id': 5010},{'id': 5040},{'id': 5075},{'id': 5035},{'id': 5043},{'id': 5012},{'id': 5052},{'id': 5081},{'id': 5004},{'id': 57},{'id': 56},{'id': 63},{'id': 62},{'id': 55},{'id': 54},{'id': 22},{'id': 59},{'id': 58},{'id': 61},{'id': 60},{'id': 21},{'id': 5046},{'id': 5024},{'id': 5036},{'id': 5058},{'id': 5053},{'id': 5044},{'id': 38},{'id': 36},{'id': 5050},{'id': 5047},{'id': 5079},{'id': 5062},{'id': 37},{'id': 13},{'id': 3},{'id': 27},{'id': 5078},{'id': 5009},{'id': 5069},{'id': 5092},{'id': 5090},{'id': 66},{'id': 81},{'id': 82},{'id': 70},{'id': 67},{'id': 75},{'id': 78},{'id': 76},{'id': 5001},{'id': 68},{'id': 69},{'id': 79},{'id': 65},{'id': 71},{'id': 77},{'id': 73},{'id': 72},{'id': 5031},{'id': 5083},{'id': 5037},{'id': 5003},{'id': 15},{'id': 16},{'id': 25},{'id': 32},{'id': 5023},{'id': 2},{'id': 5038},{'id': 5030},{'id': 5019},{'id': 5087},{'id': 5089},{'id': 5082},{'id': 5028},{'id': 5054},{'id': 5074},{'id': 5018},{'id': 5015},{'id': 5064},{'id': 5045},{'id': 5057},{'id': 5084},{'id': 5026},{'id': 5016},{'id': 12},{'id': 11},{'id': 10},{'id': 5066},{'id': 5042},{'id': 5005},{'id': 28},{'id': 80},{'id': 17131},{'id': 6646},{'id': 6440},{'id': 11253},{'id': 6254},{'id': 6240},{'id': 10547},{'id': 10495},{'id': 8179},{'id': 8139},{'id': 10726},{'id': 17285},{'id': 6566},{'id': 10760},{'id': 16521},{'id': 10732},{'id': 17627},{'id': 10179},{'id': 17433},{'id': 17437},{'id': 17435},{'id': 6554},{'id': 6560},{'id': 6562},{'id': 6664},{'id': 12507},{'id': 12509},{'id': 11275},{'id': 6606},{'id': 17287},{'id': 17289},{'id': 12511},{'id': 12221},{'id': 8705},{'id': 17129},{'id': 8691},{'id': 11078},{'id': 11697},{'id': 6604},{'id': 6590},{'id': 17413},{'id': 17217},{'id': 11076},{'id': 10724},{'id': 11487},{'id': 5188},{'id': 6049},{'id': 6556},{'id': 6558},{'id': 6700},{'id': 6548},{'id': 5437},{'id': 4},{'id': 6244},{'id': 5061},{'id': 10085},{'id': 12707},{'id': 35},{'id': 64},{'id': 18003},{'id': 6442},{'id': 6710},{'id': 12709},{'id': 11255},{'id': 11273},{'id': 17279},{'id': 17277},{'id': 17975},{'id': 16981},{'id': 6676},{'id': 6550},{'id': 6842},{'id': 37},{'id': 11054},{'id': 5444},{'id': 6426},{'id': 70},{'id': 67},{'id': 75},{'id': 6234},{'id': 8880},{'id': 8899},{'id': 13835},{'id': 14759},{'id': 7112},{'id': 5017},{'id': 6236},{'id': 9923},{'id': 16817},{'id': 5228},{'id': 5029}]
aID = copy.deepcopy(ids)
uniques = set([ x.get('id') for x in ids ])
listOf = [ x.get('id') for x in ids ]
G = True
d = 0
for items in aID:
    if items.get('id') in uniques and listOf.count(items.get('id')) > 1:
        G = True
        while(G):
            d += 1
            if(items.get('id') + d not in uniques):
                items.update({'id': items.get('id') + d})
                G = False
            else:
                continue
</code></pre>
<p>When I check for any duplicate ids I get a few:</p>
<pre><code>[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
</code></pre>
<p>Where did I go wrong in my loop?</p>
<p>I have attempted the following, which works fine on the data above but may produce duplicates on my actual dataset:</p>
<pre><code>aID = copy.deepcopy(ids)
uniques = set()
for item in aID:
    d = 0
    while item['id'] + d in uniques:
        d += 1
    item['id'] += d
    uniques.add(item['id'])

while items.get('id') + d not in uniques:
    d += 1
    items.update({'id': items.get('id') + d})
    break
</code></pre>
|
<python>
|
2022-12-13 00:43:57
| 1
| 367
|
Dollar Tune-bill
|
74,778,827
| 7,389,168
|
Adding error bar to scatter plot, existing examples don't work
|
<p>Hello, I am trying to add an error bar to the following scatter plot:</p>
<pre class="lang-py prettyprint-override"><code>(x, y) = self.rig_parameters["image_size"]
disparity_image = np.array(image)
locations = np.where(disparity_image > error_bound)
values = disparity_image[locations]
plt.xlim(0, x)
plt.ylim(y, 0)
plt.scatter(locations[1], locations[0], values, vmin = error_bound, vmax = 256)
plt.colorbar()
plt.show()
</code></pre>
<p>But the colorbar is not scaled correctly, and it doesn't look like the plot uses color as a scale either.</p>
<p><a href="https://i.sstatic.net/R9frH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R9frH.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><scatter-plot>
|
2022-12-13 00:41:53
| 1
| 601
|
FourierFlux
|
74,778,825
| 833,960
|
How can I have code coverage gutters in Visual Studio Code, but also have working breakpoints?
|
<p>I have code coverage working for my Python code in VSCode. However, I can't set or remove breakpoints if the gutters are displayed. This is apparently a <a href="https://github.com/ryanluker/vscode-coverage-gutters#tips-and-tricks" rel="nofollow noreferrer">known issue</a>:</p>
<blockquote>
<p>to both use the extension and code debugging breakpoints you need to disable the gutter coverage</p>
</blockquote>
<p><a href="https://github.com/ryanluker/vscode-coverage-gutters/issues/288" rel="nofollow noreferrer">https://github.com/ryanluker/vscode-coverage-gutters/issues/288</a></p>
<p>Is there really no workaround or alternative that doesn't involve turning off the coverage gutter shading?</p>
|
<python><unit-testing><pytest><code-coverage>
|
2022-12-13 00:41:49
| 0
| 6,525
|
MattG
|
74,778,812
| 12,361,700
|
Use Keras optimizer on TF tensor
|
<p>I mean, I get that the optimizer needs an ID to keep track of what it needs, like the last gradient for that variable and so on, but can't we just have an optimizer for a specific tensor?</p>
<pre><code>import tensorflow as tf

a = tf.convert_to_tensor([1.])
with tf.GradientTape() as tape:
    tape.watch(a)
    loss = a**2
grad = tape.gradient(loss, a)
print(grad)
# <tf.Tensor: shape=(1,), dtype=float32, numpy=array([2.], dtype=float32)>
</code></pre>
<p>Thus we can calculate the gradient of a tensor, but with that gradient we can't do anything, because it's not a <code>Variable</code>; thus we can't just do the following:</p>
<pre><code>K.optimizers.Adam().apply_gradients(zip(grad, a))
</code></pre>
<p>because we will get:</p>
<blockquote>
<p>AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute '_unique_id'</p>
</blockquote>
<p>But we should be able to: the optimizer is something like <code>w = w - stepsize * grad</code>; we have <code>w</code> and we have <code>grad</code>, so why can't we just do that inside an optimizer? Is there something I can do to apply the formula in the Adam paper to <code>w</code> without making it a <code>tf.Variable</code>?</p>
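<p>For illustration, the kind of plain update step I have in mind (a manual sketch with a made-up step size, not the Keras optimizer):</p>
<pre><code>step_size = 0.1
a = a - step_size * grad  # works on a plain tensor, but loses Adam's per-variable state
</code></pre>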
|
<python><tensorflow><keras>
|
2022-12-13 00:39:14
| 1
| 13,109
|
Alberto
|
74,778,755
| 7,903,749
|
Django ORM, how to "rebase" a QuerySet to a related model
|
<p>With a database model described in the simplified toy example below, we are trying to get a list of EAV attributes used for a specific <code>Product</code>.</p>
<p>The sample source code in the <strong>Actions</strong> section below serves the purpose; however, we feel the statement is overly verbose: We only need columns of the <code>template_attribute</code> table, but the values arguments need to maintain a fully qualified path starting from the original <code>Product</code> model. See the code below:</p>
<pre class="lang-py prettyprint-override"><code># Getting the `id` columns from multiple-orders of related models:
attribute_set = template.values(
    "id",                               # the base model, Product
    "template__id",                     # the related model, Template
    "template__templateattribute__id"   # the second related model, TemplateAttribute
)
</code></pre>
<p>So, we wonder if there is a way to refer to the columns directly from the containing model, e.g. <code>templateattribute__id</code>, or even better <code>id</code>, instead of <code>template__templateattribute__id</code>.</p>
<p>We are new to the Django ORM and appreciate any hints or suggestions.</p>
<p><strong>Actions:</strong></p>
<pre class="lang-py prettyprint-override"><code>template = Product.active_objects.filter(id='xxx').select_related('template')
attribute_set = template.values("id", "template__id", "template__templateattribute__id")
for i, attr in enumerate(attribute_set):
print("{:03}: {}".format(i, attr))
# Output:
# 000: {'id': xxx, 'template__id': xxxx, 'template__templateattribute__id': xxxxx}
# 001: {'id': xxx, 'template__id': xxxx, 'template__templateattribute__id': xxxxx}
# 002: {'id': xxx, 'template__id': xxxx, 'template__templateattribute__id': xxxxx}
# ...
</code></pre>
<p><strong>The models:</strong></p>
<pre class="lang-py prettyprint-override"><code># Simplified toy example
class Product(models.Model):
    product_template = models.ForeignKey(Template)
    sku = models.CharField(max_length=100)
    ...

class Template(models.Model):
    base_sku = models.CharField(max_length=100)
    ...

class TemplateAttribute(models.Model):
    product_template = models.ForeignKey(Template)
    attribute = models.ForeignKey(eav_models.Attribute)
    ...

# From the open-source Django EAV library
# imported as `eav_models`
#
class Attribute(models.Model):
    name = models.CharField(_(u"name"), max_length=100,
                            help_text=_(u"User-friendly attribute name"))
    ...
    slug = EavSlugField(_(u"slug"), max_length=50, db_index=True,
                        help_text=_(u"Short unique attribute label"))
    ...
</code></pre>
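<p>To make the goal concrete, this is the kind of "rebased" query we are hoping exists (a hypothetical sketch, assuming the default reverse relation names):</p>
<pre class="lang-py prettyprint-override"><code>attribute_set = TemplateAttribute.objects.filter(
    product_template__product__id='xxx'
).values("id")
</code></pre>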
|
<python><django>
|
2022-12-13 00:28:05
| 2
| 2,243
|
James
|
74,778,742
| 2,975,438
|
Convert function with for/while loop into Pandas dataframe.apply(lambda ...: ...) method if there is a change in each iteration
|
<p>I am looking for a way to convert structure:</p>
<pre><code>for ... in dataframe:
while ...:
if ...:
do smth
if ...:
do smth
</code></pre>
<p>to</p>
<pre><code>dataframe.apply(lambda ...: ...)
</code></pre>
<p>Here is example of function with for/while loop:</p>
<pre><code>import pandas as pd
from rapidfuzz import fuzz

d_test = {
    'name': ['South Beach', 'Dog', 'Bird', 'Ant', 'Big Dog', 'Beach', 'Dear', 'Cat', 'Fish', 'Dry Fish'],
    'cluster_number': [1, 2, 3, 3, 2, 1, 4, 2, 2, 2]
}
df_test = pd.DataFrame(d_test)
df_test = df_test.sort_values(['cluster_number', 'name'])
df_test.reset_index(drop=True, inplace=True)
df_test['id'] = 0
def loop_in_cluster(index, row, df_test, index_, row_, is_i_used, i):
    while index_ < len(df_test) and df_test.loc[index, 'cluster_number'] == df_test.loc[index_, 'cluster_number'] and df_test.loc[index_, 'id'] == 0:
        if row['name'] == df_test.loc[index_, 'name'] or fuzz.ratio(row['name'], df_test.loc[index_, 'name']) > 50:
            df_test.loc[index_, 'id'] = i
            is_i_used = True
        index_ += 1
    return df_test, is_i_used

i = 1
is_i_used = False
for index, row in df_test.iterrows():
    row_ = row
    index_ = index
    df_test, is_i_used = loop_in_cluster(index, row, df_test, index_, row_, is_i_used, i)
    if is_i_used == True:
        i += 1
        is_i_used = False
</code></pre>
<p>And here is my attempt to use <code>dataframe.apply()</code> method:</p>
<pre><code>i = 1
df_test.apply(lambda row: loop_in_cluster(i=i+1, index=row.name, row=row, df_test=df_test, index_=index, row_ = row, is_i_used=False) if is_i_used==True else loop_in_cluster(i=i, index=row.name, row=row, df_test=df_test, index_= index, row_=row, is_i_used=True), axis=1)
</code></pre>
<p>but I am getting a <code>StopIteration</code> error in the output. I tried other methods such as pandas <code>groupby.GroupBy</code>, but it seems that the <code>apply</code> method is closer to what I am looking for.</p>
|
<python><pandas><dataframe><vectorization>
|
2022-12-13 00:25:44
| 0
| 1,298
|
illuminato
|
74,778,698
| 1,896,222
|
AttributeError: 'DefaultAzureCredential' object has no attribute 'signed_session'
|
<p>I'm using the Python SDK to query Application Insights, with <code>azure-applicationinsights==0.1.1</code>.
Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from azure.applicationinsights import ApplicationInsightsDataClient
from azure.applicationinsights.models import QueryBody
def query_app_insights():
    query = 'myEvents | take 10'
    application = 'my-app'
    creds = DefaultAzureCredential()
    client = ApplicationInsightsDataClient(creds)
    result = client.query.execute(application, QueryBody(query=query))
</code></pre>
<p>I get this error:</p>
<pre><code>AttributeError: 'DefaultAzureCredential' object has no attribute 'signed_session'
</code></pre>
<p>also tried:</p>
<pre class="lang-py prettyprint-override"><code>creds = ClientSecretCredential(tenant_id='something',
                               client_id='something',
                               client_secret='something')
client = ApplicationInsightsDataClient(creds)
</code></pre>
<p>It gives me the same error.</p>
<p>also I tried:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from azure.applicationinsights import ApplicationInsightsDataClient
from azure.applicationinsights.models import QueryBody
from azure.common.credentials import ServicePrincipalCredentials
def query_app_insights():
    query = 'myEvents | take 10'
    application = 'my-app'
    creds = ServicePrincipalCredentials(tenant='something',
                                        client_id='something',
                                        secret='something')
    client = ApplicationInsightsDataClient(creds)
    result = client.query.execute(application, QueryBody(query=query))
</code></pre>
<p>It complains about my secret, but it is correct.
Any ideas?</p>
|
<python><azure><sdk>
|
2022-12-13 00:15:48
| 1
| 10,564
|
max
|
74,778,611
| 13,176,896
|
Why is the forward function called in a pytorch class without being called explicitly?
|
<p>I have the following simple example code for linear regression:</p>
<pre><code>import torch
import numpy as np
from torch.autograd import Variable

class linearRegression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super(linearRegression, self).__init__()
        self.linear = torch.nn.Linear(inputSize, outputSize)

    def forward(self, x):
        out = self.linear(x)
        return out
x = np.array([1], dtype = np.float32).reshape(-1, 1)
x = Variable(torch.from_numpy(x))
model = linearRegression(1, 1)
model(x)
</code></pre>
<p>the output is <code>tensor([[0.4512]], grad_fn=<AddmmBackward>)</code>.</p>
<p>My question is how the output is produced by <code>model(x)</code> rather than <code>model.forward(x)</code>.
In this code I never call <code>forward</code>, but it seems to be called.</p>
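<p>(My current guess, as a sketch: I suspect <code>torch.nn.Module</code> defines something like the following, where <code>__call__</code> dispatches to <code>forward</code>, but I have not confirmed the details.)</p>
<pre><code>class Module:
    def __call__(self, *args, **kwargs):
        # run hooks etc., then delegate to the user-defined forward
        return self.forward(*args, **kwargs)
</code></pre>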
|
<python><class><pytorch>
|
2022-12-13 00:00:10
| 1
| 2,642
|
Gilseung Ahn
|
74,778,347
| 554,521
|
pytest+celery: Test hangs
|
<p>I have a super simple celery task. During development, celery and pytest share the same AWS SQS broker, with DynamoDB for results. My task succeeds via the CLI.</p>
<p>However, when running within pytest, my test hangs forever. When I force-stop the test, it tells me it was stuck in the <code>wait_for</code> function of <code>celery/backends/base.py:756</code>:</p>
<pre><code>self = <celery.backends.dynamodb.DynamoDBBackend object at 0x1075354d0>, task_id = 'ac20975a-76db-495b-895f-f7905f2f6b81', timeout = None, interval = 0.5
no_ack = True, on_interval = <promise@0x10749af80>
</code></pre>
<p><em>Note: code is simplified for stackoverflow.</em></p>
<p>My env is an Apple M1 Pro, macOS 13.0.1.</p>
<p>Python: 3.11.0 (Homebrew)</p>
<p>Celery: 5.2.7</p>
<pre class="lang-py prettyprint-override"><code># proj/tasks.py
from celery import Celery
from celery.utils.log import get_task_logger
logger = get_task_logger(__name__)
broker_transport_options = {
    'region': 'us-east-2',
    'predefined_queues': {
        'celery': {
            'url': 'https://sqs.us-east-2.amazonaws.com/../awesome_queue',
        }
    }
}

app = Celery(
    "awesome_queue",
    broker="sqs://",
    broker_transport_options=broker_transport_options,
    backend="dynamodb://@us-east-2/awesome_queue"
)

@app.task
def read_dynamodb(region: str, tablename: str) -> list:
    # Read from DynamoDB
    logger.info(f"Reading {region}/{tablename}")
    dynamodb_list = ["greetings"]
    logger.info(dynamodb_list)
    return region + tablename
</code></pre>
<p>My PyTest:</p>
<pre class="lang-py prettyprint-override"><code># tests/conftest.py
import pytest

from proj.tasks import broker_transport_options

@pytest.fixture(scope="session")
def celery_config():
    return {
        "broker": "sqs://",
        "broker_transport_options": broker_transport_options,
        "backend": "dynamodb://@us-east-2/awesome_queue",
    }
</code></pre>
<pre class="lang-py prettyprint-override"><code># tests/test_tasks.py
import pytest
from customer_shard_restore.tasks import app, read_dynamodb
from unittest.mock import patch

@pytest.mark.usefixtures("celery_session_app")
@pytest.mark.usefixtures("celery_session_worker")
class TestRestore:
    def test_read_dynamodb(self):
        result = read_dynamodb.apply_async("us-west-2", "blank")
        print(result.get())
</code></pre>
<p>Running CLI, I get the correct output:</p>
<pre class="lang-bash prettyprint-override"><code># session 1
celery -A proj.tasks worker --loglevel=INFO
...
[2022-12-12 17:57:18,074: INFO/ForkPoolWorker-8] Task proj.tasks.read_dynamodb[829f0f0f-d177-4a4f-a045-7660f3b45ab6] succeeded in 0.9378940409806091s: 'somethingsomethingelse'
</code></pre>
<pre class="lang-py prettyprint-override"><code># session 2
Python 3.11.0 (main, Oct 26 2022, 19:06:18) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from proj.tasks import read_dynamodb
>>> result = read_dynamodb.apply_async(("something", "somethingelse"))
>>> result.get()
'somethingsomethingelse'
>>> exit()
</code></pre>
|
<python><pytest><celery>
|
2022-12-12 23:17:52
| 0
| 651
|
Ken
|
74,778,324
| 10,181,236
|
count how many times a record appears in a pandas dataframe and create a new feature with this counter
|
<p>I have these two dataframes, <code>df_t</code> and <code>df_u</code>. I want to count how many times a record in the <code>text</code> feature appears, and I want to create a new feature in <code>df_u</code> that associates the counter to each id. So <code>id_u = 1</code> and <code>id_u = 2</code> will both have <code>counter = 3</code>, since "hello" appears 3 times in <code>df_t</code> and both users published a post with "hello" in the text.</p>
<pre><code>import pandas as pd
import numpy as np
df_t = pd.DataFrame({'id_t': [0, 1, 2, 3, 4], 'id_u': [1, 1, 3, 2, 2], 'text': ["hello", "hello", "friend", "hello", "my"]})
print(df_t)
df_u = pd.DataFrame({'id_u': [1, 2, 3]})
print()
print(df_u)
df_u_new = pd.DataFrame({'id_u': [1, 2, 3], 'counter': [3, 3, 1]})
print()
print(df_u_new)
</code></pre>
<p>The code I wrote for the moment is this, but this is very slow and also I have a very huge dataset so it is impossible to do.</p>
<pre><code>user_counter_dict = {}
tmp = dict(df_t["text"].value_counts())
# to speed up the process we set the text column as the index
df_t.set_index(["text"], inplace=True)
for i, (k, v) in enumerate(tmp.items()):
    row = (k, v)
    text = row[0]
    counter = row[1]
    # this is slow and takes most of the time
    uniques_id = df_t.loc[text]["id_u"].unique()
    for elem in uniques_id:
        value = user_counter_dict.setdefault(str(elem), counter)
        if value < counter:
            user_counter_dict[str(elem)] = counter
# and now put the data from the dict into a new column in df_u
</code></pre>
<p>Is there a very fast way to compute this?</p>
|
<python><pandas>
|
2022-12-12 23:15:10
| 1
| 512
|
JayJona
|
74,778,254
| 2,262,424
|
Logging to the very same mlflow run across multiple scripts
|
<p>I have separate preprocessing and training Python scripts. I would like to track my experiments using <a href="https://www.mlflow.org/docs/latest/python_api/mlflow.html" rel="nofollow noreferrer">mlflow</a>.<br />
Because my scripts are separate, I am using a Powershell script (think of it as a shell script, but on Windows) to trigger the Python scripts with the same configuration and to make sure that all the scripts operate with the same parameters and data.</p>
<p>How can I track across scripts into the same <em>mlflow</em> run?</p>
<p>I am passing the same experiment name to the scripts. To make sure the same run is picked up, I thought to generate a (16-byte hex) run ID in my Powershell script and pass it to all the Python scripts.<br />
This doesn't work: when a run ID is given to <code>mlflow.start_run()</code>, <em>mlflow</em> expects that run ID to already exist, and it fails with <code>mlflow.exceptions.MlflowException: Run 'dfa31595f0b84a1e0d1343eedc81ee75' not found</code>.</p>
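<p>For concreteness, this is the pattern I tried (a sketch; the run ID shown is the generated one from the error above):</p>
<pre class="lang-py prettyprint-override"><code>import mlflow

mlflow.set_experiment("my-experiment")  # same experiment name passed to every script
# fails with "Run ... not found" because the run was never created:
with mlflow.start_run(run_id="dfa31595f0b84a1e0d1343eedc81ee75"):
    mlflow.log_param("step", "preprocessing")
</code></pre>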
<p>If I pass a run name, each of the scripts gets logged to a different run anyways (which is expected).</p>
<p>I cannot use <code>mlflow.last_active_run()</code> in subsequent scripts, because I need to preserve the ability to run and track/log each script separately.</p>
|
<python><mlflow>
|
2022-12-12 23:02:45
| 0
| 766
|
gevra
|
74,778,173
| 6,630,397
|
Finding the best plane which includes a maximum number of points in a given set of 3D points
|
<p>I have a set of N 3D points (these are the vertices of a closed line). Most of them are part of the same plane, whereas a few are slightly shifted. I need to figure out the plane which naturally already includes the maximum number of points, and to project the remaining points (i.e. the shifted ones) onto it.</p>
<p>To do so, I iterate through all triplets of points; (0,1,2), then (1,2,3),... until (n,0,1). For each step, I build a plane passing through these 3 points and I compute the distances from that plane to all other points.</p>
<p>This gives me a matrix of distances <code>D[i,j] = d</code> where <code>i</code> = the index of the first plane (i.e. the index of the first point of the triplet forming that plane), and <code>j</code> the index of any other point from the point set at the distance <code>d</code> of the plane <code>i</code>. If that distance <code>d</code> between a point and the plane is <code>0</code>, that means the point is already <em>on</em> the plane.</p>
<p>Here is an initial set of points:</p>
<pre class="lang-py prettyprint-override"><code>array([[ 8.563 , 8.2252, 18.6602],
[ 8.563 , 8.2252, 22.3125],
[ 11.7319, 1.729 , 22.3125],
[ 11.7319, 1.729 , -19.207 ],
[ 8.084 , 9.207 , -19.207 ],
[ 8.084 , 9.207 , 18.6602],
[ 8.563 , 8.2252, 18.6602]])
# the resulting distance matrix:
distance_matrix = array([
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, -6.27855770e-05, -6.27855770e-05],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, -6.27855770e-05, -6.27855770e-05],
[ 5.45419753e-05, 5.45419753e-05, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 5.45419753e-05, 5.45419753e-05, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, -4.15414572e-04,-4.15414572e-04, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 4.15414572e-04, 4.15414572e-04, 0.00000000e+00, 0.00000000e+00]])
</code></pre>
<p><a href="https://i.sstatic.net/rbZW7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rbZW7.png" alt="enter image description here" /></a></p>
<p><em>Each green block corresponds to the triplet of points defining the plane of row <code>i</code>.</em></p>
<p>At this stage, I pick one of the points with the highest distance to a plane, i.e. at position (4,2), and decide to project this point (4) onto that plane (2), replacing it in the original array. Then I redo the computation of the planes and distances.</p>
<p>After that first iteration, with the new coordinates of that point, the distance matrix changes to something really close to zero (because the points were initially all supposed to be part of the same plane):</p>
<pre class="lang-py prettyprint-override"><code>
array([
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 4.65661287e-10, 4.65661287e-10],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, -4.65661287e-10, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00]])
</code></pre>
<p>but it is stuck there... I mean, it never changes beyond what you can see in the array, whatever the number of iterations, and this number, <code>-4.65661287e-10</code>, is not small enough to be considered a 0; therefore, my set of points is never seen as part of the same plane by third-party tools.</p>
<p>I'm wondering if this number has some special meaning, and whether there is something I can do to lower it.</p>
<p>Here is the current code; it's a bit drafty, but it should work:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Dec 13 11:35:48 2022
@author: s.k.
LICENSE:
MIT License
Copyright (c) 2022-now() s.k.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Except as contained in this notice, the name of the copyright holders shall
not be used in advertising or otherwise to promote the sale, use or other
dealings in this Software without prior written authorization from the
copyright holders.
"""
import numpy as np
from shapely import wkt
from shapely.geometry import Polygon, MultiLineString
def find_plane(points):
    """Find the coefficients (a,b,c,d) satisfying the plane equation:
    ax + by + cz + d = 0 given a 3x3 array of row-wise points.
    """
    # Copy first, otherwise the numpy array passed in gets modified:
    pts = points.copy()
    p0 = points[2, :].copy()
    center = np.average(pts, axis=0, keepdims=True)
    pts -= center  # reduce large coordinates
    u, v = pts[1:, :] - pts[0, :]
    n = np.cross(u, v)
    n_unit = n / np.linalg.norm(n)
    d = -1 * (p0 @ n_unit)
    return (np.append(n_unit, d), center)

def closest_distance_to_plane(points, plane):
    if len(points.shape) == 1:
        points = points.reshape(1, len(points))  # reshape 1-dim vectors to 2D
    nbpts, lp = points.shape
    # here we can work with homogeneous coordinates
    points = np.append(points, np.ones(nbpts).reshape(nbpts, 1), axis=1)
    dists = points @ plane
    return dists

def project_points_on_plane(points, plane, pt_on_plane):
    new_points = None
    if len(points.shape) == 1:
        points = points.reshape(1, len(points))  # reshape 1-dim vectors to 2D
    nbpts, lp = points.shape
    n = plane[:-1]
    new_points = points - ((points - pt_on_plane) @ n) * n
    return new_points

def get_distances_to_planes(points):
    lp = np.size(points, 0)
    shift = 2
    p = np.append(points, points[:shift, :], axis=0)
    planes = []
    dists = np.zeros((lp, lp), dtype=np.double)
    for i in range(lp):  # loop over planes
        include_idx = np.arange(i, i + 3)
        mask = np.zeros(lp + shift, dtype=bool)
        mask[include_idx] = True
        plane, pt_on_plane = find_plane(p[mask, :])
        planes.append(plane)
        mask2 = mask.copy()
        if i > 1:
            mask2[:shift] = mask2[-shift:]
        mask2 = mask2[:-shift]
        for j, pt in enumerate(p[:-shift]):  # loop over remaining points
            if ~mask2[j]:
                dist = closest_distance_to_plane(pt, plane)
                dists[i, j] = dist
    return dists

def clean_plane(wkt_geom):
    k = 1
    dists = np.array([1])
    new_geom = wkt_geom
    P = wkt.loads(new_geom)
    # remove last point as it's a duplicate of the first:
    p = np.array(P.geoms[0].coords)[:-1]
    lp = np.size(p, 0)
    dists = get_distances_to_planes(p)
    max_dists = np.max(np.abs(dists))
    print(f"max_dists init: {max_dists}")
    while max_dists != 0 and k <= 20:
        print(f"Iter {k}...")
        idx_max_sum = np.argwhere(dists == np.amax(dists))
        planes_max, pts_max = set(idx_max_sum[:, 0]), set(idx_max_sum[:, 1])
        # pick only the first plane for the moment:
        plane_idx = list(planes_max)[0]
        include_idx = np.arange(plane_idx, plane_idx + 3)
        include_idx = include_idx % lp
        mask = np.zeros(lp, dtype=bool)
        mask[include_idx] = True
        # TODO: verify for singularities here:
        plane, pt_on_plane = find_plane(p[mask, :])
        for pt_max in pts_max:
            p[pt_max] = project_points_on_plane(p[pt_max], plane, pt_on_plane)
        new_geom = Polygon(p)
        dists = get_distances_to_planes(p)
        max_dists = np.max(np.abs(dists))
        print(f"max_dists: {max_dists}")
        k += 1 if max_dists != 0 else 21
    return new_geom.wkt

wkt_geom = '''MULTILINESTRING Z ((
2481328.563000001 1108008.2252000012 58.66020000015851,
2481328.563000001 1108008.2252000012 62.312500000349246,
2481331.731899999 1108001.7289999984 62.312500000349246,
2481331.731899999 1108001.7289999984 20.79300000029616,
2481328.083999999 1108009.2069999985 20.79300000029616,
2481328.083999999 1108009.2069999985 58.66020000015851,
2481328.563000001 1108008.2252000012 58.66020000015851
))'''
clean_plane(wkt_geom)
</code></pre>
<p>It should print:</p>
<pre class="lang-py prettyprint-override"><code>max_dists init: 0.00041541503742337227
Iter 1...
max_dists: 4.656612873077393e-10
Iter 2...
max_dists: 4.656612873077393e-10
Iter 3...
max_dists: 4.656612873077393e-10
Iter 4...
max_dists: 4.656612873077393e-10
Iter 5...
max_dists: 4.656612873077393e-10
Iter 6...
max_dists: 4.656612873077393e-10
Iter 7...
max_dists: 4.656612873077393e-10
Iter 8...
max_dists: 4.656612873077393e-10
Iter 9...
max_dists: 4.656612873077393e-10
Iter 10...
max_dists: 4.656612873077393e-10
Iter 11...
max_dists: 4.656612873077393e-10
Iter 12...
max_dists: 4.656612873077393e-10
Iter 13...
max_dists: 4.656612873077393e-10
Iter 14...
max_dists: 4.656612873077393e-10
Iter 15...
max_dists: 4.656612873077393e-10
Iter 16...
max_dists: 4.656612873077393e-10
Iter 17...
max_dists: 4.656612873077393e-10
Iter 18...
max_dists: 4.656612873077393e-10
Iter 19...
max_dists: 4.656612873077393e-10
Iter 20...
max_dists: 4.656612873077393e-10
</code></pre>
<p>This is the array containing the initial <code>i</code> planes with parameters <a href="https://en.wikipedia.org/wiki/Plane_(geometry)#Point%E2%80%93normal_form_and_general_form_of_the_equation_of_a_plane" rel="nofollow noreferrer">a,b,c,d</a> (=columns of the array):</p>
<pre><code>[array([ 8.98767249e-01, 4.38426085e-01, -0.00000000e+00, -2.71591655e+06]),
array([ 8.98767249e-01, 4.38426085e-01, 0.00000000e+00, -2.71591655e+06]),
array([ 8.98763941e-01, 4.38432867e-01, 0.00000000e+00, -2.71591586e+06]),
array([ 8.98763941e-01, 4.38432867e-01, 0.00000000e+00, -2.71591586e+06]),
array([ 8.98742050e-01, 4.38477740e-01, -0.00000000e+00, -2.71591126e+06]),
array([-8.98742050e-01, -4.38477740e-01, 0.00000000e+00, 2.71591126e+06])]
</code></pre>
|
<python><numpy><precision><mathematical-optimization><plane>
|
2022-12-12 22:48:42
| 1
| 8,371
|
swiss_knight
|
74,778,014
| 6,651,938
|
Hard to understand return_value for Python Mock Class in the following example
|
<p>I had a hard time understanding the choice of using <code>return_value</code>. See the following example:</p>
<pre><code># my_module.py
def query(connection):
    cursor = connection.cursor()
    cursor.execute("SELECT name FROM users WHERE id = 1")
    return cursor.fetchone()[0]
</code></pre>
<pre><code># test_query.py
import pytest
from unittest.mock import Mock

# Import the module containing the function to test
import my_module

def test_query():
    # Set up the mock database connection
    connection = Mock()
    cursor = connection.cursor.return_value
    cursor.fetchone.return_value = ("Alice",)
    # Call the function under test
    result = my_module.query(connection)
    # Assert that the function has the expected behavior
    assert result == "Alice"
</code></pre>
<p>My question is: in <code>test_query.py</code>, why do I need the first <code>return_value</code>? It looks like, since <code>cursor = connection.cursor.return_value</code>, I can substitute <code>cursor</code> into <code>cursor.fetchone.return_value = ("Alice",)</code>, which then becomes <code>connection.cursor.return_value.fetchone.return_value = ("Alice",)</code>. Why does this code require two <code>return_value</code>s?</p>
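<p>In other words, a sketch of the two set-ups that I believe should be equivalent:</p>
<pre><code># with the intermediate cursor variable
cursor = connection.cursor.return_value
cursor.fetchone.return_value = ("Alice",)

# versus the chained form
connection.cursor.return_value.fetchone.return_value = ("Alice",)
</code></pre>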
<p>Without the first <code>return_value</code>, I received <code>TypeError: 'Mock' object is not subscriptable</code> after running pytest</p>
<p>The broader question is when should I use <code>return_value</code>?</p>
|
<python><python-3.x><unit-testing><mocking>
|
2022-12-12 22:26:13
| 1
| 1,146
|
Bratt Swan
|
74,778,010
| 15,724,084
|
cannot convert my code to oop style in python
|
<p>I have solved a LeetCode question in a procedural way. I do not know if it is proper to ask the same question on this platform, but I have been listening to OOP podcasts and needed some practice, and LeetCode gave me the opportunity: I can solve the question by writing top-down procedural code, but LeetCode needs an OOP version. I will share the question, my procedural answer, and my OOP attempt. If this is not proper I will delete it, no problem.</p>
<p>The question:</p>
<blockquote>
<p>Given an array of integers nums which is sorted in ascending order, and an integer target, write a function to search target in nums. If target exists, then return its index. Otherwise, return -1.</p>
<p>You must write an algorithm with O(log n) runtime complexity.</p>
</blockquote>
<p>My response:</p>
<pre><code>def f_nums(nums, target):
    for i in range(len(nums)-1):
        if nums[i] == target:
            a = i
    try:
        if a:
            return a
        else:
            return -1
    except:
        return -1

res = f_nums([-1,0,3,5,9,12], 9)
print(res)
</code></pre>
<p>My attempt at an OOP version:</p>
<pre><code>class Solution:
    def search(self, nums: List[int], target: int) -> int:
        for i in range(len(self.nums)-1):
            if self.nums[i] == self.target:
                a = i
        try:
            if a:
                return a
            else:
                return -1
        except:
            return -1
</code></pre>
<p>which gives an error:</p>
<pre><code>AttributeError: 'Solution' object has no attribute 'nums'
for i in range(len(self.nums)-1):
Line 3 in search (Solution.py)
ret = Solution().search(param_1, param_2)
Line 37 in _driver (Solution.py)
_driver()
Line 48 in <module> (Solution.py)
</code></pre>
|
<python><oop>
|
2022-12-12 22:25:26
| 1
| 741
|
xlmaster
|
74,778,003
| 1,761,521
|
Flask-smorest returning an empty json string
|
<p>The JSON response of my endpoint returns <code>{}</code> even though I am logging the correct data.</p>
<pre><code>from flask import current_app
from flask_smorest import Blueprint
# assumption: these helpers come from flask-jwt-extended and werkzeug
from flask_jwt_extended import create_access_token, create_refresh_token
from werkzeug.security import check_password_hash

bp = Blueprint("auth", __name__, url_prefix="/api/v1/auth/")

@bp.route("/login", methods=["POST"])
@bp.arguments(LoginRequest)
@bp.response(200, JwtTokenResponse)
@bp.response(404, ErrorResponse)
def login(args):
    current_app.logger.debug(args)
    username = args.get("username", None)
    password = args.get("password", None)
    current_app.logger.debug(f"Username: {username}")
    current_app.logger.debug(f"Password: {password}")
    user = User.query.filter_by(username=username).first()
    if user is None:
        return dict(message="User does not exist"), 404
    if not check_password_hash(user.password, password):
        return dict(message="Unable to authenticate user."), 404
    access_token = create_access_token(identity=username)
    refresh_token = create_refresh_token(identity=username)
    response = dict(access_token=access_token, refresh_token=refresh_token)
    current_app.logger.debug(f"Response: {response}")
    return response, 200
</code></pre>
<p>My <code>JwtTokenResponse</code> and <code>ErrorResponse</code> schemas are defined as:</p>
<pre><code>from marshmallow import Schema, fields

class JwtTokenResponse(Schema):
    access_token = fields.String()
    refresh_token = fields.String()

class ErrorResponse(Schema):
    message = fields.String()
</code></pre>
<p>When I test the API with a user not in the database or with the wrong password, it produces the correct response with <code>ErrorResponse</code>; however, with the correct creds it just outputs <code>{}</code>, even though in the Flask logs I can see the access/refresh token dict. What am I doing wrong?</p>
|
<python><flask><marshmallow><flask-marshmallow><flask-smorest>
|
2022-12-12 22:24:01
| 1
| 3,145
|
spitfiredd
|
74,778,002
| 12,671,140
|
Hold reference to Numpy array inside C++ object
|
<p>I want to create a C++ wrapper for a Numpy array. For that purpose I need to hold a reference to the Numpy array (to return it to Python later) and a reference to the raw data of that array.</p>
<p>For that I've written this code:</p>
<p><strong>C++</strong></p>
<pre class="lang-cpp prettyprint-override"><code>// mynumpy.h
#include <boost/python.hpp>
#include <boost/python/numpy.hpp>

namespace py = boost::python;
namespace np = boost::python::numpy;

class MyNumpy
{
private:
    np::ndarray* _pydata;
    unsigned char* _data;
public:
    MyNumpy(np::ndarray& pydata);
    np::ndarray const& numpy();
};
</code></pre>
<pre class="lang-cpp prettyprint-override"><code>// mynumpy.cpp
#include "mynumpy.h"

namespace py = boost::python;
namespace np = boost::python::numpy;

MyNumpy::MyNumpy(np::ndarray& pydata)
{
    _pydata = &pydata;
    _data = reinterpret_cast<unsigned char*>(pydata.get_data());
}

np::ndarray const& MyNumpy::numpy()
{
    return *_pydata;
}

BOOST_PYTHON_MODULE(test)
{
    Py_Initialize();
    np::initialize();

    py::class_<MyNumpy>("MyNumpy", py::init<np::ndarray&>())
        .def("numpy", &MyNumpy::numpy, py::return_value_policy<py::copy_const_reference>());
}
</code></pre>
<p><strong>Python</strong></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import test
data = np.array([0])
mynp = test.MyNumpy(data)
print(mynp.numpy())
</code></pre>
<p>But the Python code above seems to crash silently when trying to call <code>mynp.numpy()</code>. I've noticed that after calling the constructor, <code>_pydata</code> points to an empty Python object. I'm not sure how I am supposed to hold those references.</p>
|
<python><boost-python>
|
2022-12-12 22:23:57
| 0
| 565
|
kacpo1
|
74,777,945
| 13,881,506
|
`eval('bar')` causes a `NameError`, but `print(bar); eval('bar')` doesn't
|
<p>The following code</p>
<pre><code>1 def foo():
2     def bar():
3         return 'blah'
4     def baz():
5         eval('bar')
6     baz()
7 foo()
</code></pre>
<p>causes a <code>NameError</code> -- the entire error message is</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 7, in <module>
foo()
File "main.py", line 6, in foo
baz()
File "main.py", line 5, in baz
eval('bar')
File "<string>", line 1, in <module>
NameError: name 'bar' is not defined
</code></pre>
<p>On the other hand, the following, almost identical code,</p>
<pre><code>1 def foo():
2     def bar():
3         return 'blah'
4     def baz():
5         print(bar); eval('bar')
6     baz()
7 foo()
</code></pre>
<p>does not cause any error. In fact, it prints <code><function foo.<locals>.bar at 0x7fd5fdb24af0></code>.</p>
<p>So I was wondering why, in the first piece of code, the variable named <code>bar</code> is not defined/available on line <code>5</code>, and why <code>print</code>ing <code>bar</code> before calling <code>eval('bar')</code> fixes this issue. From my understanding, <code>bar</code> should be a local variable of <code>foo</code>, so it should be accessible from <code>baz</code>. Is it something about <code>eval</code> that screws this up, which <code>print</code>ing beforehand fixes?</p>
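<p>(To illustrate what I mean by "accessible": a direct reference to <code>bar</code>, without <code>eval</code>, works fine:)</p>
<pre><code>def foo():
    def bar():
        return 'blah'
    def baz():
        return bar()  # direct reference: no NameError
    baz()
foo()
</code></pre>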
|
<python><scope><local-variables><nameerror>
|
2022-12-12 22:17:18
| 1
| 1,013
|
joseville
|
74,777,831
| 5,942,100
|
Tricky conversion based on specific conditions within Pandas
|
<p>I have a dataframe where, when the date matches the date in a specific column, I wish to convert all 0's to blanks, per area category, for specific columns.</p>
<h2>Data</h2>
<pre><code>date start end area stat1 stat2 stat3 final
10/1/2021 11/1/2021 12/1/2021 NY 5 0 0 11/1/2021
11/1/2021 12/1/2021 1/1/2022 NY 19 0 0 11/1/2021
12/1/2021 1/1/2022 2/1/2022 NY 10 0 0 11/1/2021
1/1/2022 2/1/2022 3/1/2022 NY 1 0 0 11/1/2021
10/1/2021 11/1/2021 12/1/2021 CA 1 0 0 11/1/2021
11/1/2021 12/1/2021 1/1/2022 CA 3 0 0 11/1/2021
12/1/2021 1/1/2022 2/1/2022 CA 3 0 0 11/1/2021
1/1/2022 2/1/2022 3/1/2022 CA 2 0 0 11/1/2021
</code></pre>
<h2>Desired</h2>
<pre><code>date start end area stat1 stat2 stat3 final
10/1/2021 11/1/2021 12/1/2021 NY 5 0 0 11/1/2021
11/1/2021 12/1/2021 1/1/2022 NY 19 11/1/2021
12/1/2021 1/1/2022 2/1/2022 NY 10 11/1/2021
1/1/2022 2/1/2022 3/1/2022 NY 1 11/1/2021
10/1/2021 11/1/2021 12/1/2021 CA 1 0 0 11/1/2021
11/1/2021 12/1/2021 1/1/2022 CA 3 11/1/2021
12/1/2021 1/1/2022 2/1/2022 CA 3 11/1/2021
1/1/2022 2/1/2022 3/1/2022 CA 2 11/1/2021
</code></pre>
<h2>logic</h2>
<p><strong>Above, we want to convert all zeros to blanks just for columns <code>stat2</code> and <code>stat3</code>, where the date in the <code>date</code> column is <code>'11/01/2021'</code> or greater.</strong></p>
<h2>Doing</h2>
<p>I am thinking that I must <strong>groupby</strong> and <strong>create a subset</strong> and then perform the conversion:</p>
<pre><code>df1 = df.groupby(['date', 'area'], as_index=False)
df1[df1.eq(0)] = np.nan
</code></pre>
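<p>For reference, this is the kind of subset-and-convert operation I have in mind (a sketch of the idea above, not a verified solution):</p>
<pre><code>cutoff = pd.to_datetime("11/01/2021")
mask = pd.to_datetime(df["date"]) >= cutoff
df.loc[mask, ["stat2", "stat3"]] = df.loc[mask, ["stat2", "stat3"]].replace(0, "")
</code></pre>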
<p>Any suggestion is appreciated.</p>
|
<python><pandas><numpy>
|
2022-12-12 22:01:40
| 1
| 4,428
|
Lynn
|
74,777,830
| 3,755,036
|
how can I import all python classes in a folder?
|
<p>This is the current structure of my project.</p>
<pre><code> cities
newyork.py
london.py
berlin.py
main.py
requirements.txt
</code></pre>
<p>Each of the city files contains a class. For instance, <code>class NewYork:</code> is defined in <code>newyork.py</code>.</p>
<p>In my <code>main.py</code> I am importing all the classes for use in this manner:</p>
<pre><code>from cities.newyork import NewYork
from cities.london import London
from cities.berlin import Berlin
</code></pre>
<p><strong>What I would like:</strong> to have something like this:</p>
<pre><code>from cities.* import NewYork, London, Berlin.
</code></pre>
<p>Is this possible in python?</p>
|
<python><python-3.x><pip>
|
2022-12-12 22:01:34
| 1
| 818
|
chickenman
|
74,777,822
| 10,959,670
|
Pandas rolling averages for dates after group by
|
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
{"date": [pd.Timestamp("2022-01-01"), pd.Timestamp("2022-01-01"), pd.Timestamp("2022-01-01"), pd.Timestamp("2022-01-03"), pd.Timestamp("2022-01-03"), pd.Timestamp("2022-01-03"), pd.Timestamp("2022-01-05")],
"numbers": [1,2,4,4,11,7,5],
"grouper": [1, 0, 1, 0,1, 0, 0]
}
)
</code></pre>
<p>Given the following df, how would I get the rolling mean of the <code>numbers</code> values dated before each row's <code>date</code>, e.g. the rolling averages for the past 3 days, grouped by <code>["grouper", "date"]</code>?</p>
<p>I know I can do something like the following, but it is not even close to a solution.</p>
<p>I am looking to build on this <a href="https://stackoverflow.com/questions/74767290/pandas-rolling-average-window-for-dates-excluding-row">solution</a></p>
<pre><code>df["av"] = df.shift(1).rolling(window=3).mean()
</code></pre>
<p>but this does not shift dynamically, so it includes today.</p>
<p>My expected output for the new av column for a 3 day window grouped by the two columns of the sample df would be</p>
<pre><code> date numbers grouper av
0 2022-01-01 1 1 NaN
1 2022-01-01 2 0 NaN
2 2022-01-01 4 1 NaN
3 2022-01-03 4 0 2.0
4 2022-01-03 11 1 2.5
5 2022-01-03 7 0 2.0
6 2022-01-05 5 0 5.5
</code></pre>
|
<python><pandas><group-by>
|
2022-12-12 22:00:58
| 2
| 331
|
Ben Muller
|
74,777,772
| 7,267,480
|
Generating a new, more dense grids using existing and scale
|
<p>I use Python for fitting various experimental data.</p>
<p>The data I have is stored in NumPy arrays, e.g.</p>
<pre><code>x = np.array([53.821593, 53.85265044, 53.88373501, 53.91484627, 53.94598466, 53.97714997, 54.00834218, 54.03956172, 54.07080797, 54.10208155, 54.13338225, 54.16471028,
54.19606522, 54.2274477])
</code></pre>
<p>where x stands for some coordinate of experimental data.</p>
<p>The distance between elements of the x array varies (the grid is nonuniform; some regions along the axis are denser than others).</p>
<p>I need a package or a function to create a new grid that is based on the initial grid but more dense, and that includes the coordinates of the initial grid.</p>
<pre><code>def resample_grid(initial_grid: np.array, scale: float) -> np.array:
</code></pre>
<p>Something that will add points between the given points using some provided parameter, e.g. <code>scale</code>, so the new grid will have approximately <code>x.size * scale</code> points, while the left and right borders of the new grid must be the same as in the initial one.</p>
<p>It's preferable that this function/tool does not produce random values, so applying it to the initial array always gives the same result.</p>
<p>Is there any package or solution?</p>
<p>Edited:
To be more specific, I need a new grid with a known number of points that includes the old coordinates and adds new ones to obtain a "more uniform" grid, e.g. by adding points in regions where the grid is too coarse while not touching regions of maximum density. The goal is to minimize the mean length of the intervals in the new grid (and the variance of the lengths) in comparison with the initial grid.</p>
<pre><code>def resample_grid(initial_grid: np.array, num_out_points: int) -> np.array:
</code></pre>
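<p>To illustrate the deterministic behaviour I am after, here is the simplest variant I can think of (a sketch that subdivides every interval linearly; it keeps the original coordinates and borders, but does not yet target uniformity):</p>
<pre><code>import numpy as np

def resample_grid(initial_grid: np.ndarray, scale: float) -> np.ndarray:
    """Insert evenly spaced points inside each interval, keeping the original points."""
    n_insert = max(int(scale) - 1, 0)  # extra points per interval
    pieces = []
    for left, right in zip(initial_grid[:-1], initial_grid[1:]):
        # drop the right endpoint to avoid duplicates with the next interval
        pieces.append(np.linspace(left, right, n_insert + 2)[:-1])
    pieces.append(initial_grid[-1:])
    return np.concatenate(pieces)
</code></pre>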
|
<python><numpy><grid>
|
2022-12-12 21:56:34
| 0
| 496
|
twistfire
|
74,777,697
| 2,011,779
|
Connecting to websocket with async inside async function
|
<p>I'm connecting to a websocket in an async function, to update a local mysql db</p>
<p>[edit: although only the coinbase websocket is shown here, it was later identified to be the gemini websocket that had the issue, by hitting the v1 instead of v2 websocket]</p>
<pre><code>async def write_websocket_to_sql(exchange, symbol):
# handle different inputs to get the correct url and subscription message, and error check
    url = "wss://ws-feed.exchange.coinbase.com"
subscription_message = {
"type": "subscribe",
"product_ids": ["BTCUSD"],
"channels": ["level2_batch", "heartbeat"]
}
async with websockets.connect(url) as ws:
# Subscribe to the websocket feed
await ws.send(json.dumps(subscription_message))
# Receive and process messages from the websocket
while True:
print("receiving message for ", ticker_symbol, " on ", exchange, " at ", datetime.datetime.now())
try:
message = await ws.recv()
except websockets.exceptions.ConnectionClosed:
print("Connection closed, reconnecting...")
await ws.send(json.dumps(subscription_message))
continue
print("message received success!")
data = json.loads(message)
</code></pre>
<p>I'm looking for an explanation as to why the <code>async with websockets.connect(url) as ws:</code> line needs <code>async</code>. <code>await ws.send</code> is going to wait for the response either way</p>
<p>copilot originally added it in, and indeed, when I remove it I get the error</p>
<pre><code>line 453, in write_websocket_to_sql
with websockets.connect(url) as ws:
AttributeError: __enter__
</code></pre>
<p>I have no idea why it's trying to use a method that doesn't exist.</p>
<p>After changing it back to <code>async</code>, I've started to get this error after a lot of successful pulls... but always within 5 seconds of starting the script.</p>
<pre><code> File "/home/path/main.py", line 573, in main
await asyncio.gather(task1, task2)
File "/home/path/main.py", line 463, in write_websocket_to_sql
await ws.send(json.dumps(subscription_message))
File "/home/anaconda3/envs/py39/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 619, in send
await self.ensure_open()
File "/home/anaconda3/envs/py39/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 920, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: no close frame received or sent
</code></pre>
<p>so my questions are:</p>
<ol>
<li>How does the logic differ and what is happening when using async inside async?</li>
<li>Do I need to just lengthen the timeout somehow or does this smell like a different problem?</li>
</ol>
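<p>A minimal sketch of what is behind question 1: <code>websockets.connect(...)</code> returns an async context manager, i.e. an object with <code>__aenter__</code>/<code>__aexit__</code> coroutines that must be awaited. A plain <code>with</code> looks for <code>__enter__</code>, which the object does not define, hence <code>AttributeError: __enter__</code>:</p>
<pre><code>import asyncio

class Conn:
    async def __aenter__(self):       # awaited by `async with` on entry
        return self

    async def __aexit__(self, *exc):  # awaited on exit, even after errors
        return False

async def main():
    async with Conn() as c:           # plain `with Conn():` raises AttributeError: __enter__
        print("connected", c)

asyncio.run(main())
</code></pre>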
|
<python><websocket>
|
2022-12-12 21:47:14
| 2
| 2,384
|
hedgedandlevered
|
74,777,664
| 2,418,347
|
regex: getting all matches for group
|
<p>In Python, I'm trying to extract all time-ranges (of the form HHmmss-HHmmss) from a string. I'm using this Python code.</p>
<pre><code>text = "random text 0700-1300 random text 1830-2230 random 1231 text"
regex = "(.*(\d{4,10}-\d{4,10}))*.*"
match = re.search(regex, text)
</code></pre>
<p>This only returns <code>1830-2230</code>, but I'd like to get <code>0700-1300</code> and <code>1830-2230</code>. In my application there may be zero or any number of time-ranges (within reason) in the text string. I'd appreciate any hints.</p>
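<p>One sketch of a fix: <code>re.search</code> returns a single match (and a repeated group keeps only its last capture), whereas <code>re.findall</code> returns every non-overlapping match:</p>
<pre><code>import re

text = "random text 0700-1300 random text 1830-2230 random 1231 text"
# without an enclosing group, findall returns each full match
matches = re.findall(r"\d{4,10}-\d{4,10}", text)
print(matches)  # ['0700-1300', '1830-2230']
</code></pre>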
|
<python><regex>
|
2022-12-12 21:43:25
| 2
| 335
|
4mla1fn
|
74,777,613
| 3,102,471
|
Python CSV module returning non string iterator when passed the STDOUT stream containing CSV data
|
<p>The following program calls <code>subprocess.run()</code> to invoke a program (<code>ffprobe</code>, but that's not important) and return a CSV result in STDOUT. I am getting an error enumerating the CSV result.</p>
<pre><code>import os
import subprocess
import csv
for file in os.listdir("/Temp/Video"):
if file.endswith(".mkv"):
print(os.path.join("/Temp/Video", file))
ps = subprocess.run(["ffprobe", "-show_streams", "-print_format", "csv", "-i", "/Temp/Video/" + file], capture_output = True)
print(ps.stdout)
reader = csv.reader(ps.stdout)
results = []
for row in reader:
results.append(row)
</code></pre>
<p>Produces the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\mbmas\eclipse-workspace\DolbyAtmosToFlac\com\mbm\main.py", line 12, in <module>
for row in reader:
_csv.Error: iterator should return strings, not int (the file should be opened in text mode)
</code></pre>
<p>The output from the <code>print(ps.stdout)</code> statement produces:</p>
<pre><code>b'stream,0,h264,H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10,High,video,[0][0][0][0],0x0000,1920,1080,1920,1080,0,0,2,1:1,16:9,yuv420p,40,unknown,unknown,unknown,unknown,left,progressive,1,true,4,N/A,24000/1001,24000/1001,1/1000,0,0.000000,N/A,N/A,N/A,N/A,8,N/A,N/A,N/A,46,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,eng,17936509,01:20:18.271791666,115523,10802870592,001011,MakeMKV v1.16.4 win(x64-release),2021-08-20 19:09:26,BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES SOURCE_ID,Lavc59.7.102 libx264,00:01:30.010000000\r\nstream,1,vorbis,Vorbis,unknown,audio,[0][0][0][0],0x0000,fltp,48000,3,3.0,0,0,N/A,0/0,0/0,1/1000,0,0.000000,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,3314,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,eng,Surround 3.0,2422660,01:20:18.272000000,451713,1459129736,001100,MakeMKV v1.16.4 win(x64-release),2021-08-20 19:09:26,BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES SOURCE_ID,Lavc59.7.102 libvorbis,00:01:30.003000000\r\n'
</code></pre>
<p>The suggestion Python provides, <em>the file should be opened in text mode</em>, is confusing. The above output looks like text to me. How do I use the <code>csv</code> module to read CSV data held in a stream?</p>
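<p>The catch is that <code>ps.stdout</code> is a <code>bytes</code> object, and iterating over bytes yields ints, which is what the csv module is complaining about. A sketch of one fix: ask <code>subprocess.run</code> for decoded text and feed the lines to the reader (here <code>path</code> stands for the file path built in the loop):</p>
<pre><code>import csv
import subprocess

ps = subprocess.run(
    ["ffprobe", "-show_streams", "-print_format", "csv", "-i", path],
    capture_output=True,
    text=True,          # decode stdout to str instead of bytes
)
reader = csv.reader(ps.stdout.splitlines())
results = list(reader)
</code></pre>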
|
<python><csv>
|
2022-12-12 21:37:43
| 1
| 1,150
|
mbmast
|
74,777,476
| 6,283,073
|
how can segmentation fault error be fixed OpenCV python
|
<p>I am trying to run a simple objection detection on webcam using Yolov5 but I keep getting the error below.</p>
<blockquote>
<blockquote>
<p>zsh: segmentation fault</p>
</blockquote>
</blockquote>
<p>The camera appears to open, then shuts off immediately, and the code exits with the above error.
Here is my code</p>
<pre><code>import cv2
import numpy as np
import torch
from PIL import Image

def object_detector():
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
# mmocr = MMOCR(det='TextSnake', recog='SAR')
cam = cv2.VideoCapture(0)
while(True):
ret, frame = cam.read()
# ocr_result = mmocr.readtext(frame, output='demo/cam.jpg', export='demo/', print_result=True, imshow=True)
# print("RESULT \n ", ocr_result)
frame = frame[:, :, [2,1,0]]
frame = Image.fromarray(frame)
frame = cv2.cvtColor(np.array(frame), cv2.COLOR_RGB2BGR)
# ocr_result = mmocr.readtext(frame, output='demo/cam.jpg', export='demo/', print_result=True, imshow=True)
# print("RESULT \n ", ocr_result)
result = model(frame,size=640)
# Results
# crops = result.crop(save=True)
cv2.imshow('YOLO', np.squeeze(result.render()))
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cam.release()
cv2.destroyAllWindows()
</code></pre>
<p>What am I doing wrong and how can I fix it?</p>
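<p>One common cause of a hard crash here (an assumption, not a confirmed diagnosis) is <code>cam.read()</code> returning <code>ret=False</code> and <code>frame=None</code>, which then gets handed to native code. A small guard sketch for the top of the loop:</p>
<pre><code>ret, frame = cam.read()
if not ret or frame is None:
    # a failed grab returns None; passing it downstream can crash at the C++ level
    break
</code></pre>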
|
<python><opencv><pytorch><yolov5>
|
2022-12-12 21:23:26
| 2
| 1,679
|
e.iluf
|
74,777,424
| 405,396
|
NotImplementedError when running dbt version check on installing dbt-bigquery
|
<p>I am trying to install dbt bigquery in my Windows system by running the following pip commands -</p>
<pre><code>pip install dbt-bigquery
</code></pre>
<p>Installation has finished successfully but when I run the dbt --version command here is the error I am getting</p>
<pre><code>Traceback (most recent call last):
  File "", line 198, in _run_module_as_main
  File "", line 88, in _run_code
  File "C:\Users\1354750\Documents\code\env\Scripts\dbt.exe\__main__.py", line 4, in <module>
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\dbt\main.py", line 2, in <module>
    from dbt.logger import log_cache_events, log_manager
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\dbt\logger.py", line 16, in <module>
    from dbt.dataclass_schema import dbtClassMixin
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\dbt\dataclass_schema.py", line 15, in <module>
    from mashumaro import DataClassDictMixin
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\mashumaro\__init__.py", line 4, in <module>
    from mashumaro.serializer.json import DataClassJSONMixin
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\mashumaro\serializer\json.py", line 28, in <module>
    class DataClassJSONMixin(DataClassDictMixin):
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\mashumaro\serializer\base\dict.py", line 16, in __init_subclass__
    builder.add_from_dict()
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\mashumaro\serializer\base\metaprogramming.py", line 270, in add_from_dict
    pre_deserialize = self.get_declared_hook(__PRE_DESERIALIZE__)
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\mashumaro\serializer\base\metaprogramming.py", line 255, in get_declared_hook
    if not is_dataclass_dict_mixin(cls):
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\mashumaro\meta\helpers.py", line 247, in is_dataclass_dict_mixin
    return type_name(t) == DataClassDictMixinPath
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\mashumaro\meta\helpers.py", line 93, in type_name
    elif is_generic(t) and not is_type_origin:
  File "C:\Users\1354750\Documents\code\env\Lib\site-packages\mashumaro\meta\helpers.py", line 161, in is_generic
    raise NotImplementedError
NotImplementedError
</code></pre>
<p>Prior to the dbt command, I am running this in a Python virtual environment in the 'code' folder using the following commands.</p>
<pre><code>python -m venv env
.\env\Scripts\activate
</code></pre>
<p>Can someone help me understand what this error is pointing at?</p>
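<p>The raise happens inside mashumaro's typing introspection (<code>is_generic</code>), which in older releases does not recognise the typing internals of newer Python interpreters. A commonly suggested workaround, offered here as an assumption rather than a verified fix, is to recreate the virtual environment with an older interpreter (or upgrade dbt so it pulls a newer mashumaro):</p>
<pre><code>py -3.10 -m venv env
.\env\Scripts\activate
pip install --upgrade dbt-bigquery
</code></pre>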
|
<python><python-3.x><google-bigquery><dbt>
|
2022-12-12 21:17:55
| 1
| 1,439
|
VKarthik
|
74,777,304
| 17,163,556
|
Get maximum value of y value within a specified range of x value
|
<p>I am using Python 3 in a Jupyter notebook and I have data that looks like this:</p>
<pre><code>x = [5100. 5100.05 5100.1 ... 5399.85 5399.9 5399.95] Angstrom
y = [1.83400998 1.26499399 0.85423358 ... 0.78314406 0.76861344 0.77460277]
</code></pre>
<p>Is there a way in Python 3 to grab the maximum y value for x between 5120 and 5250?</p>
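<p>A sketch with a boolean mask, assuming <code>x</code> and <code>y</code> are (or can be converted to) NumPy arrays of equal length:</p>
<pre><code>import numpy as np

x = np.asarray(x)
y = np.asarray(y)
mask = (x >= 5120) & (x <= 5250)   # True where x lies inside the window
y_max = y[mask].max()
</code></pre>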
|
<python>
|
2022-12-12 21:05:08
| 0
| 689
|
Dila
|
74,777,277
| 5,150,335
|
Yielding inside coroutine
|
<p>I have the following code.</p>
<pre class="lang-py prettyprint-override"><code>def gen():
while True: # line 2
d = yield # line 3
yield d # line 4
x = gen()
next(x)
for i in (3, 4, 5):
print(x.send(i)) # line 10
</code></pre>
<p>When I run this, I get:</p>
<pre><code>3
None
5
</code></pre>
<p>rather than:</p>
<pre><code>3
4
5
</code></pre>
<p>which is what I would expect. I've already gone through several similar questions (including <a href="https://stackoverflow.com/questions/31869593/yielding-a-value-from-a-coroutine-in-python-a-k-a-convert-callback-to-generato">this one</a>) but I still don't understand what is happening here.</p>
<p>My understanding is as follows:
We prime the generator by calling <code>next(x)</code>, it starts execution and enters the <code>while</code> loop on line 2. On line 3, we have a <code>yield</code> statement and execution pauses. On line 10, we send the value <code>3</code>. Execution resumes on line 3, <code>d</code> is set to <code>3</code>, and we yield back <code>d</code>. On line 10, <code>send</code> returns the yielded value (<code>3</code>). The generator continues execution until we are back to the <code>yield</code> statement on line 3, etc.</p>
<p>Is this incorrect? What am I doing wrong here?</p>
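<p>What actually happens is that the sends alternate between the two <code>yield</code> statements: <code>send(3)</code> resumes at line 3 and pauses at line 4 (yielding 3), but <code>send(4)</code> then resumes at line 4, so 4 becomes the (discarded) value of the <code>yield d</code> expression, and the generator pauses again at the bare <code>yield</code> on line 3, producing <code>None</code>. A sketch that echoes every sent value uses a single yield:</p>
<pre><code>def gen():
    d = None
    while True:
        d = yield d   # one yield both returns the last value and receives the next

x = gen()
next(x)               # prime: runs to the yield, produces None
for i in (3, 4, 5):
    print(x.send(i))  # 3, 4, 5
</code></pre>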
|
<python><python-3.x><generator>
|
2022-12-12 21:03:01
| 1
| 2,157
|
mlz7
|
74,777,165
| 1,473,517
|
How to only keep the earliest record in a pandas merge
|
<p>I have two dataframes. Dataframe X has columns:</p>
<pre><code>Index(['studentid', 'Year', 'MCR_NAME', 'QUAL_DETAILS'])
</code></pre>
<p>Dataframe Y has columns:</p>
<pre><code>Index(['studentid', 'total'])
</code></pre>
<p>I want to merge X and Y on 'studentid' using:</p>
<pre><code>Z = X.merge(Y, on="studentid")
</code></pre>
<p>However, the same "studentid" field can occur in different years in X. If this happens, I want to only keep the earliest record after the merge. That is I don't want the same studentid to occur twice in the merged dataframe. The years from the Year field are written "20/21", "21/22", "22/23".</p>
<p>A given studentid can only occur once per year.</p>
<p>How can I do that?</p>
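<p>A sketch: since the year strings sort correctly lexicographically ("20/21" &lt; "21/22" &lt; "22/23"), sort after the merge and keep the first row per student:</p>
<pre><code>Z = (
    X.merge(Y, on="studentid")
     .sort_values("Year")                        # earliest year first
     .drop_duplicates("studentid", keep="first") # one row per studentid
)
</code></pre>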
|
<python><pandas>
|
2022-12-12 20:51:17
| 1
| 21,513
|
Simd
|
74,777,128
| 10,620,003
|
Build array with size (n*k, m) with n matrix of size (k,m) with an efficient way
|
<p>I have three arrays (A, B, C) of size (<code>2,4</code>) and I want to build an array X of size (<code>2*3</code>, 4) from them.
The first row of X comes from A, the second from B, and the third from C; then the fourth row from A, the fifth from B, and the sixth from C.</p>
<pre><code>import numpy as np
A = np.array([[0, 2, 1, 2],
[3, 1, 4, 3]])
B = np.array([[1, 2, 1, 0],
[0, 4, 3, 1]])
C = np.array([[0, 4, 3, 2],
[3, 0, 1, 0]])
</code></pre>
<p>Now, the way that I am doing this is using a loop over the arrays. But, it is not efficient. Do you have any suggestion? Thank you.</p>
<p>The way that I am doing is:</p>
<pre><code>X = np.zeros((2*3, 4))
for i in range(2):
X[3*i] = A[i,:]
X[3*i+1] = B[i,:]
X[3*i+2] = C[i,:]
X
array([[0., 2., 1., 2.],
[1., 2., 1., 0.],
[0., 4., 3., 2.],
[3., 1., 4., 3.],
[0., 4., 3., 1.],
[3., 0., 1., 0.]])
</code></pre>
<p>Also, sometimes I have 5 or 6 arrays and have to build X with size (6*2, 4), so with this approach I have to add or remove lines of code to make it work. I am looking for a general and efficient way. Thank you.</p>
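<p>One general sketch: stack the arrays on a new middle axis and reshape, which interleaves the rows and works for any number of arrays:</p>
<pre><code>import numpy as np

arrays = [A, B, C]                  # works the same for 5 or 6 arrays
X = np.stack(arrays, axis=1).reshape(-1, A.shape[1])
# rows come out as A[0], B[0], C[0], A[1], B[1], C[1]
</code></pre>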
|
<python><numpy>
|
2022-12-12 20:47:28
| 3
| 730
|
Sadcow
|
74,776,904
| 9,915,864
|
Python tkinter passing object between callbacks
|
<p>I'm trying to update a label based on a button press that does some calculations first, but I'm getting an error I don't understand.</p>
<p>Background: This method was working fine in previous iterations. I have not changed the code. But I did make some changes to the class by adding a controller to the initial instantiation of the <code>ShelfDownloader</code> class, which these methods belong to. To clarify <code>ShelfDownloader</code> is only called once from a different module.</p>
<p>Description: On the initial call of this class, it displays the <code>self.total_books_label</code> correctly, but when I switch shelves it chokes. <strong>UPDATE:</strong> It seems I am passing a string; in the comments I posted what I tried, and I still don't understand the error. See the second error:</p>
<p>Question: Since that's probably not the problem, I'm thinking one of my calls is missing something. Any suggestions please?</p>
<p><em>I removed the widget formatting for this question. I included these bits of code to help with the Error message.</em></p>
<pre><code>class ShelfDownloader(ctk.CTkFrame):
def __init__(self, parent, controller):
ctk.CTkFrame.__init__(self, parent)
self.parent = parent
self.controller = controller
...
def shelf_option_callback(self, parent, *args):
if not self.downloader:
sys.exit()
shelf_choice = [s for s in SHELF_METADATA if self.shelf_choice_var.get() in s['shelf_name']][0]
for k, v in shelf_choice.items():
self.downloader_dict[k] = v
updated_dict = self.downloader.update_shelf(**self.downloader_dict)
for k, v in updated_dict.items():
self.downloader_dict[k] = v
self.display_total_books_and_pages(parent)
def draw_top_panel(self):
parent = self.top_frame
LL1 = ctk.CTkLabel(parent, text="Choose shelf to download:")
# LL1.pack()
shelf_opt_menu = ctk.CTkOptionMenu(master=parent, width=170, variable=self.shelf_choice_var,
values=self.shelf_list,
                                           command= lambda: self.shelf_option_callback(parent))
## ^^^^^^^^^^^^^^^^^^^^^^^^^^^
## This is my problem spot. Tried adding
## command=self.shelf_option_callback(parent)
## but different error.
shelf_opt_menu.pack()
def display_total_books_and_pages(self, parent):
if hasattr(self, 'total_books_label'):
self.total_books_label.destroy()
## GUI feedback info, partially a debugging tool
_books, _pages = (self.downloader_dict['total_book_count'], self.downloader_dict['total_page_count'])
self.display_text.set(f"Shelf has {_books} books, retrieving {_pages} pages.")
logger.debug(f"{self.display_text.get()}")
logger.debug(f"display_total_books_and_pages: {type(self.display_text)}")
self.total_books_label = ctk.CTkLabel(master=parent, width=180,
text=f"{self.display_text.get()}")
self.total_books_label.pack()
</code></pre>
<p>My debugging returns correct and expected output after I switch the shelf, ie <code>shelf_opt_menu</code>:</p>
<pre><code>2022-12-12 12:05:22,612, gui_download, 183: Shelf has 1624 books, retrieving 17 pages.
2022-12-12 12:05:22,612, gui_download, 184: display_total_books_and_pages: <class 'tkinter.StringVar'>
</code></pre>
<p>I get an error message that I'm passing a string in the code above. UPDATE: I trimmed the full error out since I confirmed that's what is happening with <code>logger.debug(f"{type(parent)}")</code></p>
<pre><code>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\megha\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
...
File "c:\MyProjects\gr_shelf_tools\src\gui_downloader.py", line 124, in shelf_option_callback
self.display_total_books_and_pages(parent)
File "c:\MyProjects\gr_shelf_tools\src\gui_downloader.py", line 184, in display_total_books_and_pages
self.total_books_label = ctk.CTkLabel(master=parent, width=180, text=f"{self.display_text.get()}")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
File "C:\Users\megha\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 2591, in _setup
self.tk = master.tk
^^^^^^^^^
AttributeError: 'str' object has no attribute 'tk'
</code></pre>
<p>Error 2 (when I change it to a lambda call):</p>
<pre><code>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\megha\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "C:\Users\megha\AppData\Local\Programs\Python\Python311\Lib\site-packages\customtkinter\windows\widgets\core_widget_classes\dropdown_menu.py", line 101, in <lambda>
command=lambda v=value: self._button_callback(v),
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\megha\AppData\Local\Programs\Python\Python311\Lib\site-packages\customtkinter\windows\widgets\core_widget_classes\dropdown_menu.py", line 106, in _button_callback
self._command(value)
File "C:\Users\megha\AppData\Local\Programs\Python\Python311\Lib\site-packages\customtkinter\windows\widgets\ctk_optionmenu.py", line 381, in _dropdown_callback
self._command(self._current_value)
TypeError: ShelfDownloader.draw_top_panel.<locals>.<lambda>() takes 0 positional arguments but 1 was given
</code></pre>
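<p>The second traceback spells out the mismatch: CustomTkinter invokes <code>command</code> with the selected value (<code>self._command(value)</code>), so the callback must accept one positional argument. A sketch of the option-menu creation:</p>
<pre><code>shelf_opt_menu = ctk.CTkOptionMenu(
    master=parent, width=170,
    variable=self.shelf_choice_var,
    values=self.shelf_list,
    # the option menu passes the chosen value; accept it even if unused
    command=lambda choice: self.shelf_option_callback(parent),
)
</code></pre>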
|
<python><python-3.x><tkinter>
|
2022-12-12 20:23:53
| 1
| 341
|
Meghan M.
|
74,776,898
| 1,715,544
|
Tell PIP to install only the dependencies which can be satisfied
|
<p>I'm installing a local module that relies on a bunch of local modules written by others. This means that sometimes everyone's versioning is out of sync, so running <code>pip install -e [package]</code> results in a bunch of errors when it comes to installing the dependencies that are other local modules. For example:</p>
<blockquote>
<p><code>Module A</code> relies on <code>Module B</code>. But <code>Module B</code> throws a syntax error when <code>pip</code> tries installing it.</p>
</blockquote>
<p><strong>For now, I'd like to tell <code>pip</code> to install every dependency it can install, and pipe all errors to a file or something.</strong></p>
<ul>
<li>Running each line in the module's <code>requirements.txt</code> won't work because I'm using <code>pip install -e [module]</code>. I do not want to change it or its <code>setup.py</code></li>
<li><code>--ignore-installed</code> only works if the dependency is already installed</li>
<li><code>--no-deps</code> doesn't try to install dependencies at all</li>
</ul>
<p>I'd specifically like <code>pip</code> to exit with something like: "Installed package with some errors: ..." (i.e., I'd like it to install all the dependencies it <em>can</em> install while ignoring the ones it can't)</p>
|
<python><pip>
|
2022-12-12 20:23:05
| 0
| 1,410
|
AmagicalFishy
|
74,776,886
| 12,468,387
|
How to convert pine script stdev to Python code?
|
<p>I'm trying to convert pine script <a href="https://www.tradingview.com/pine-script-reference/v4/#fun_stdev" rel="nofollow noreferrer">stdev</a> to Python code but it seems I'm doing it wrong.</p>
<p>Pine script:</p>
<pre class="lang-none prettyprint-override"><code>//the same on pine
isZero(val, eps) => abs(val) <= eps
SUM(fst, snd) =>
EPS = 1e-10
res = fst + snd
if isZero(res, EPS)
res := 0
else
if not isZero(res, 1e-4)
res := res
else
15
pine_stdev(src, length) =>
avg = sma(src, length)
sumOfSquareDeviations = 0.0
for i = 0 to length - 1
sum = SUM(src[i], -avg)
sumOfSquareDeviations := sumOfSquareDeviations + sum * sum
stdev = sqrt(sumOfSquareDeviations / length)
</code></pre>
<p>Python code:</p>
<pre class="lang-py prettyprint-override"><code>import talib as ta
def isZero(val, eps):
if abs(val) <= eps:
return True
else:
return False
def SUM(fst, snd):
EPS = 1e-10
res = fst + snd
if isZero(res, EPS):
res += 0
else:
if not isZero(res, 1e-4):
res = res
else:
res = 15
return res
def pine_stdev(src, length):
avg = ta.SMA(src, length)
sumOfSquareDeviations = 0.0
for i in range(length - 1):
s = SUM(src.iloc[i], -avg.iloc[i])
sumOfSquareDeviations = sumOfSquareDeviations + s * s
stdev = (sumOfSquareDeviations / length)*(sumOfSquareDeviations / length)
</code></pre>
<p>What am I doing wrong? And why does the SUM function return 15?</p>
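<p>Three differences from Pine stand out. Pine's <code>for i = 0 to length - 1</code> is inclusive, so the Python loop should cover <code>range(length)</code>; <code>src[i]</code> in Pine means "i bars back from the current bar", not the i-th element of the whole series; and the last line squares instead of taking the square root. As for the 15: <code>SUM</code> returns 15 exactly when <code>1e-10 &lt; |res| &lt;= 1e-4</code>, i.e. the branch where the sum is small but not treated as zero. A corrected sketch (assuming <code>src</code> is a pandas Series of floats; the SUM rounding quirk is dropped for clarity):</p>
<pre><code>import math

import pandas as pd
import talib as ta

def pine_stdev(src: pd.Series, length: int) -> pd.Series:
    avg = pd.Series(ta.SMA(src, length), index=src.index)
    out = pd.Series(float("nan"), index=src.index)
    for j in range(length - 1, len(src)):
        # deviations of the last `length` bars from the SMA at bar j
        s = sum((src.iloc[j - i] - avg.iloc[j]) ** 2 for i in range(length))
        out.iloc[j] = math.sqrt(s / length)   # sqrt, not squaring
    return out
</code></pre>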
|
<python><pine-script><ta-lib>
|
2022-12-12 20:22:05
| 2
| 449
|
Denzel
|
74,776,834
| 15,006,497
|
Conditional Statements in Python
|
<p>Having a weird issue with Python that I'm hoping someone can help me out with (syntax related).</p>
<p>I have a selenium web scraper to scrape linked in posts. It has a for loop as follows:</p>
<pre><code>for card in elements:
profileDiv = card.find_element(By.XPATH, ".//div[contains(@class, 'update-components-actor__meta')]") if card.find_element(By.XPATH, ".//div[contains(@class, 'update-components-actor__meta')]") else ""
profilePic = card.find_element(By.XPATH, ".//img[contains(@class, 'update-components-actor__avatar-image')]").get_attribute('src') if card.find_element(By.XPATH, ".//img[contains(@class, 'update-components-actor__avatar-image')]") else ""
profileName = profileDiv.find_element(By.XPATH, ".//span[contains(@class, 'update-components-actor__name')]").text
profileDescription = profileDiv.find_element(By.XPATH, ".//span[contains(@class, 'update-components-actor__description')]").text
text = card.find_element(By.XPATH, ".//div[contains(@class, 'feed-shared-update-v2__description-wrapper')]//following::span[1]").text
video = card.find_element(By.XPATH, ".//*[contains(@class, 'update-components-linkedin-video')]//preceding::video[1]").get_attribute('src') if card.find_element(By.XPATH, ".//*[contains(@class, 'update-components-linkedin-video')]//preceding::video[1]") else ""
image = card.find_element(By.XPATH, ".//*[contains(@class, 'update-components-image__image-link')]//descendant::img[1]").get_attribute('src') if card.find_element(By.XPATH, ".//*[contains(@class, 'update-components-image__image-link')]//descendant::img[1]") else ""
</code></pre>
<p>This <em>works</em>; however, it breaks and exits if it can't find one of these items, even though I have a ternary there to check whether the item exists.</p>
<p>So, i'm wondering, what is the correct way to do this in Python:</p>
<ol>
<li>Check if something exists</li>
<li>If it does, assign it to a variable</li>
<li>If not, skip.</li>
</ol>
<p>I don't want to put this in a try...catch because theoretically a post can have an image, a video, or neither.</p>
<p>I know in Javascript I would just do:</p>
<pre><code>let image = card.find_element(By.XPATH, ".//*[contains(@class, 'update-components-image__image-link')]//descendant::img[1]") ? card.find_element(By.XPATH, ".//*[contains(@class, 'update-components-image__image-link')]//descendant::img[1]").get_attribute('src') : null
</code></pre>
<p>thanks all</p>
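<p>The reason the ternary never helps: <code>find_element</code> raises <code>NoSuchElementException</code> instead of returning something falsy. The plural <code>find_elements</code> returns a (possibly empty) list, which supports check-then-assign without try/except. A sketch for one of the fields:</p>
<pre><code># find_elements returns [] when nothing matches, so the ternary works
matches = card.find_elements(
    By.XPATH,
    ".//*[contains(@class, 'update-components-image__image-link')]//descendant::img[1]",
)
image = matches[0].get_attribute("src") if matches else ""
</code></pre>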
|
<python><selenium>
|
2022-12-12 20:16:55
| 1
| 524
|
sylargaf
|
74,776,709
| 7,168,098
|
Adding a multindex to the columns of a pandas df based on several dictionaries
|
<p>Assuming I have a DF like:</p>
<pre><code>person_names = ['mike','manu','ana','analia','anomalia','fer']
df = pd.DataFrame(np.random.randn(5, 6), columns = person_names)
df
</code></pre>
<p>I also have two dictionaries, for easy purposes assuming only two:</p>
<pre><code># giving a couple of dictionaries like:
d = {'mike':{'city':'paris', 'department':2},
'manu':{'city':'london', 'department':1},
'ana':{'city':'barcelona', 'department':5}}
d2 = {'analia':{'functional':True, 'speed':'high'},
'anomalia':{'functional':True, 'speed':'medium'},
'fer':{'functional':False, 'speed':'low'}}
</code></pre>
<p>The result I would to achieve is a df having a multindex as shown in the excel screenshot here:</p>
<p><a href="https://i.sstatic.net/1jWn4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1jWn4.png" alt="enter image description here" /></a></p>
<p>The dictionaries contain values for SOME of the column names.</p>
<p>Not only do I need to create the multiindex based on the dictionaries while taking into account that the two dictionaries have different keys, I would also like to keep the original column names as the first level of the multiindex.</p>
<p>Any suggestion?</p>
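<p>Since the expected layout is only shown in the screenshot, here is a sketch under the assumption that the extra index levels are the two attribute values each dictionary stores per person (both dictionaries hold exactly two keys, so the level count lines up):</p>
<pre><code>merged = {**d, **d2}
df.columns = pd.MultiIndex.from_tuples(
    # first level: original name; remaining levels: that person's attribute values
    [(name, *[str(v) for v in merged[name].values()]) for name in df.columns]
)
</code></pre>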
|
<python><pandas><multi-index>
|
2022-12-12 20:03:56
| 2
| 3,553
|
JFerro
|
74,776,533
| 3,963,430
|
Amazon SNS service SMS only with requests package
|
<p>I need to send SMS with Amazon SNS service but I can only use the requests package not boto3.</p>
<p>Here is as far as I came.</p>
<pre><code>import json
import requests
url = "https://sns.eu-central-1.amazonaws.com"
params = {
"Action": "Publish",
"Version": "2010-03-31",
"PhoneNumber": "+49123456789",
"Message": "Hello World!",
}
aws_access_key_id = "KEY"
aws_secret_access_key = "SECRET"
headers = {"Content-Type": "application/x-www-form-urlencoded"}
response = requests.post(url, data=params, auth=(aws_access_key_id, aws_secret_access_key), headers=headers)
print(response.text)
</code></pre>
<p>but I get:</p>
<pre><code><ErrorResponse xmlns="http://sns.amazonaws.com/doc/2010-03-31/">
<Error>
<Type>Sender</Type>
<Code>MissingAuthenticationToken</Code>
<Message>Request is missing Authentication Token</Message>
</Error>
<RequestId>xxxxx</RequestId>
</ErrorResponse>
</code></pre>
<p>How do I get the token?</p>
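<p>Basic HTTP auth is not what AWS expects; the request has to carry an AWS Signature Version 4 signature, which is what <code>MissingAuthenticationToken</code> is really complaining about. A sketch using the third-party <code>requests-aws4auth</code> package to do the signing:</p>
<pre><code>import requests
from requests_aws4auth import AWS4Auth  # pip install requests-aws4auth

auth = AWS4Auth(aws_access_key_id, aws_secret_access_key, "eu-central-1", "sns")
response = requests.post(url, data=params, auth=auth)
print(response.text)
</code></pre>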
|
<python><amazon-sns>
|
2022-12-12 19:44:31
| 1
| 838
|
Gurkenkönig
|
74,776,500
| 12,760,550
|
Confirm if duplicated employees are in sequence in pandas dataframe
|
<p>Imagine I have the following dataframe with repetitive people by firstname and lastname:</p>
<pre><code>ID FirstName LastName Country
1 Paulo Cortez Brasil
2 Paulo Cortez Brasil
3 Paulo Cortez Espanha
1 Maria Lurdes Espanha
1 Maria Lurdes Espanha
1 John Page USA
2 Felipe Cardoso Brasil
2 John Page USA
3 Felipe Cardoso Espanha
2 Steve Xis UK
1 Peter Dave UK
np.nan Peter Dave UK
</code></pre>
<p>The issue I have is, if the person appears once, the ID should always be 1. If the person appears more than once (looking by only firstname and lastname) the ID should be sequential starting with 1 (in any row) and adding +1 for each other duplicated record.</p>
<p>I need a way to filter this dataframe to find people not following this logic (getting either the unique record, or all records of the person if duplicated), returning this data:</p>
<pre><code>ID FirstName LastName Country
1 Maria Lurdes Espanha
1 Maria Lurdes Espanha
2 Felipe Cardoso Brasil
3 Felipe Cardoso Espanha
2 Steve Xis UK
1 Peter Dave UK
np.nan Peter Dave UK
</code></pre>
<p>What would be the best way to achieve it?</p>
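<p>A sketch: a name group is valid exactly when its sorted IDs are 1..n, so flag every group where that fails (a NaN ID also fails the comparison, as in the Peter Dave rows):</p>
<pre><code>ok = df.groupby(["FirstName", "LastName"])["ID"].transform(
    lambda s: sorted(s) == list(range(1, len(s) + 1))
)
result = df[~ok]   # all rows of every person violating the numbering rule
</code></pre>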
|
<python><pandas><search><filter><duplicates>
|
2022-12-12 19:41:14
| 1
| 619
|
Paulo Cortez
|
74,776,482
| 5,065,952
|
Explode column of objects python pandas dataframe
|
<p>I am trying to explode a column to create new rows within pandas data frame. What would be the best approach to this?</p>
<p>Input:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>SKU</th>
<th>Quantity</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>YY-123-671</td>
<td>5</td>
<td>drawer</td>
</tr>
<tr>
<td>YY-345-111-WH,YY-345-111-RD,YY-345-111-BL</td>
<td>10</td>
<td>desk</td>
</tr>
<tr>
<td>LK-896-001</td>
<td>1</td>
<td>lamp</td>
</tr>
</tbody>
</table>
</div>
<p>Desired Output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>SKU</th>
<th>Quantity</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>YY-123-671</td>
<td>5</td>
<td>drawer</td>
</tr>
<tr>
<td>YY-345-111-WH</td>
<td>10</td>
<td>desk</td>
</tr>
<tr>
<td>YY-345-111-RD</td>
<td>10</td>
<td>desk</td>
</tr>
<tr>
<td>YY-345-111-BL</td>
<td>10</td>
<td>desk</td>
</tr>
<tr>
<td>LK-896-001</td>
<td>1</td>
<td>lamp</td>
</tr>
</tbody>
</table>
</div>
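<p>A sketch: split the comma-separated SKUs into lists, then let <code>DataFrame.explode</code> create one row per element while repeating the other columns:</p>
<pre><code>df["SKU"] = df["SKU"].str.split(",")
df = df.explode("SKU", ignore_index=True)
</code></pre>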
|
<python><pandas><dataframe><etl><pandas-explode>
|
2022-12-12 19:39:26
| 1
| 827
|
Defcon
|
74,776,348
| 14,167,846
|
Subtracting ints from dates
|
<p>I have some data that will looks like this:</p>
<pre><code> Dates Delta
0 2022-10-01 10
1 2022-10-01 21
2 2022-10-01 34
</code></pre>
<p>I am trying to add a new column where I subtract the number in the <code>Delta</code> column from the date in the <code>Dates</code> column. Ideally, the output will look like this (I did this by hand, so if the dates are wrong, please excuse me).</p>
<pre><code> Dates Delta CalculatedDate
0 2022-10-01 10 2022-09-21
1 2022-10-01 21 2022-09-10
2 2022-10-01 34 2022-08-23
</code></pre>
<p>I've tried various versions of this and I'm not having any luck.</p>
<pre><code># importing libraries to create and manipulate toy data
import pandas as pd
from datetime import datetime, timedelta
# create toy data
df = pd.DataFrame({'Dates': ['2022-10-01', '2022-10-01', '2022-10-01'],
'Delta': [10, 21, 34]})
# cast the `Dates` column as dates
df['Dates'] = pd.to_datetime(df['Dates'])
##### Need help here
# Create a new column, showing the calculated date
df['CalculatedDate'] = df['Dates'] - timedelta(days=df['Delta'])
</code></pre>
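<p><code>timedelta(days=...)</code> only accepts a scalar; the vectorised equivalent is <code>pd.to_timedelta</code> on the whole column. A sketch of the last line:</p>
<pre><code># convert the whole Delta column to day-sized timedeltas, then subtract
df['CalculatedDate'] = df['Dates'] - pd.to_timedelta(df['Delta'], unit='D')
</code></pre>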
|
<python><pandas><datetime>
|
2022-12-12 19:27:34
| 3
| 545
|
pkpto39
|
74,776,238
| 18,972,785
|
Python does not open default web browser to show visualization results in pyvis?
|
<p>In one part of my code, I need to visualize the generated graph using the pyvis library. When I was using Python 3.10 everything was fine and the graph was visualized in the default browser, but now, for unrelated reasons, I need to use Python 3.6.4. In this situation, when I generate a graph and want to open it in the default browser (which is currently Firefox), it always opens Windows Internet Explorer, shows nothing, and goes to Bing, even though my default browser is Firefox. Is there anything wrong with Python 3.6.4? The code below is used to visualize. How can I open the generated graph in the default browser? I really appreciate answers that solve my problem.</p>
<pre><code>palette = (sns.color_palette("Pastel1", n_colors=len(set(labelList.values()))))
palette = palette.as_hex()
colorDict = {}
counter = 0
for i in palette:
colorDict[counter] = i
counter += 1
N = Network(height='100%', width='100%', directed=False, notebook=False, heading = self.headerName)
for n in G.nodes:
N.add_node(n, color=(colorDict[labelList[n]]), size=5)
for e in G.edges.data():
N.add_edge(e[0], e[1])
N.show('./result.html')
</code></pre>
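<p>One way to sidestep <code>N.show</code>'s browser launching (a sketch, not a guaranteed fix for the 3.6.4 behaviour) is to write the HTML yourself and hand the file URL to the <code>webbrowser</code> module, which resolves the system default:</p>
<pre><code>import os
import webbrowser

N.save_graph("result.html")   # write the HTML without trying to open it
webbrowser.open("file://" + os.path.realpath("result.html"))
</code></pre>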
|
<python><pyvis>
|
2022-12-12 19:16:51
| 2
| 505
|
Orca
|
74,776,202
| 12,760,550
|
Identify duplicated rows with different value in another column pandas dataframe
|
<p>Suppose I have a dataframe of names and countries:</p>
<pre><code>ID FirstName LastName Country
1 Paulo Cortez Brasil
2 Paulo Cortez Brasil
3 Paulo Cortez Espanha
4 Maria Lurdes Espanha
5 Maria Lurdes Espanha
6 John Page USA
7 Felipe Cardoso Brasil
8 John Page USA
9 Felipe Cardoso Espanha
10 Steve Xis UK
</code></pre>
<p>I need a way to identify all people with the same first and last name that appear more than once in the dataframe where at least one of the records belongs to a different country, and return all of those duplicated rows, resulting in this dataframe:</p>
<pre><code>ID FirstName LastName Country
1 Paulo Cortez Brasil
2 Paulo Cortez Brasil
3 Paulo Cortez Espanha
7 Felipe Cardoso Brasil
9 Felipe Cardoso Espanha
</code></pre>
<p>What would be the best way to achieve it?</p>
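<p>A sketch: a name qualifies exactly when it maps to more than one distinct country, which <code>transform('nunique')</code> expresses per row:</p>
<pre><code>mask = df.groupby(["FirstName", "LastName"])["Country"].transform("nunique") > 1
result = df[mask]
</code></pre>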
|
<python><pandas><duplicates><drop><lines-of-code>
|
2022-12-12 19:14:08
| 3
| 619
|
Paulo Cortez
|
74,776,174
| 12,574,341
|
Expressing semantic content of tuple values in type annotations
|
<p>I'm modeling a financial exchange</p>
<pre class="lang-py prettyprint-override"><code>class Exchange(ABC):
@abstractproperty
def balances(self) -> Dict[str, Tuple[float, float]]:
...
</code></pre>
<p>The semantic content of <code>.balances</code> return type is a <code>dict</code> that is <code>{asset: (quantity, proportion), ...}</code></p>
<p>e.g.</p>
<pre class="lang-py prettyprint-override"><code>{"BTC": (0.0015, .30), "ETH": (0.10, .20), "LTC": (5, .50)}
</code></pre>
<p>The problem is that this fact is not obvious just by looking at <code>-> Dict[str, Tuple[float, float]]</code>.</p>
<p>Developers who are not familiar with the codebase and API responses will not immediately know that the two <code>float</code>s represent the held quantity and the proportion of the portfolio, respectively.</p>
<p>I attempt to solve this by creating a custom type alias.</p>
<pre class="lang-py prettyprint-override"><code>class Balance(NamedTuple):
quantity: float
proportion: float
class Exchange(ABC):
@abstractproperty
def balances(self) -> Dict[str, Balance]:
...
</code></pre>
<p>This introduces a new problem: IDE Intellisense literally shows <code>-> Dict[str, Balance]</code> with no elaboration on what <code>Balance</code> looks like.</p>
<p><a href="https://i.sstatic.net/bfcFP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bfcFP.png" alt="Screenshot of what vscode shows" /></a></p>
<p>I had hoped it would resolve the alias as something like <code>-> Dict[str, tuple(quantity: float, proportion: float)]</code>.</p>
<p>This leaves the same problem of sub-optimal expressiveness to unfamiliar developers. When a developer hovers over the function call, they will see unfamiliar custom type aliases in the return, of which they will have to go searching in the file to find it's definition to understand.</p>
<p>My goal is for developers to be able to jump into the codebase and immediately intuit the shapes and semantic content of function returns, without needing to ask me about API documentation or go searching for type declarations.</p>
<p>Any thoughts? What are the best practices here?</p>
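<p>One small mitigation (a sketch, not a full solution to the hover problem): give the NamedTuple a docstring, which most IDEs surface when a developer hovers over or jumps to <code>Balance</code> itself:</p>
<pre><code>from typing import NamedTuple

class Balance(NamedTuple):
    """Held quantity of an asset and its proportion of the portfolio."""
    quantity: float
    proportion: float
</code></pre>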
|
<python><python-typing><pyright>
|
2022-12-12 19:11:01
| 1
| 1,459
|
Michael Moreno
|
74,775,969
| 984,003
|
Left-align image using Python ReportLab with Platypus flowable?
|
<p>How do I left align an image that I've added to a PDF using reportlab platypus? By default, the image gets centered.</p>
<pre><code>from reportlab.lib.units import cm, inch
from reportlab.lib.pagesizes import LETTER
from reportlab.platypus import SimpleDocTemplate, Paragraph, Image
from reportlab.lib.styles import getSampleStyleSheet
text = "<b>BOLD</b> normal <br/><br/>" + \
" After newline <font color=red>red</font>" + \
" <font size=20>Larger</font> normal.<br/><br/>"
new_pdf_path = "pdf_path"
img_path = "img_path"
doc = SimpleDocTemplate(new_pdf_path,pagesize=LETTER,rightMargin=1*inch,leftMargin=1*inch,topMargin=1*inch,bottomMargin=1*inch)
parts = []
parts.append(Paragraph(text, getSampleStyleSheet()['Normal']))
# HOW left align this 1-inch wide image?
parts.append(Image(img_path, 1*inch, 1*inch))
doc.build(parts)
</code></pre>
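<p>Platypus flowables carry an <code>hAlign</code> attribute, so a sketch of left-aligning the image is:</p>
<pre><code>img = Image(img_path, 1*inch, 1*inch)
img.hAlign = "LEFT"   # 'LEFT', 'CENTER' or 'RIGHT'
parts.append(img)
</code></pre>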
|
<python><reportlab><platypus>
|
2022-12-12 18:52:30
| 1
| 29,851
|
user984003
|
74,775,959
| 7,746,166
|
Pycharm Docker remote unittest debugging with invalid volume specification
|
<p>I have the following problem. I have a project which I want to debug via PyCharm and a docker image via ssh remote connection to some server. For standard debugging it is no problem. It works!</p>
<p>Docker Desktop is installed on windows. PyCharm 2021.3.3 is set up. Windows Linux path conversion is setup in the enviromental variables. But when I start a debugging process of a unit test with the same docker image, I get the following error:</p>
<blockquote>
<p>Cannot run the remote Python interpreter: invalid volume specification: 'C:\project:/opt/project:rw'</p>
</blockquote>
<p><a href="https://i.sstatic.net/crRPH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/crRPH.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/2R1km.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2R1km.png" alt="enter image description here" /></a></p>
<p>This also worked for me before the system got a fresh Windows installation. My guess is that the unit-test Docker process somehow uses another part of the Docker engine where I do not have permission to apply the Windows-to-Linux path conversion. The "rw" in the exception means read/write, doesn't it? Furthermore, it seems to ignore the configuration file entirely: I set a path on K: to map my project to, but the exception message shows the project dir on C:.</p>
<p>I also tried different Docker Desktop versions and different PyCharm versions.</p>
|
<python><linux><windows><docker><pycharm>
|
2022-12-12 18:51:30
| 1
| 1,491
|
Varlor
|
74,775,817
| 4,710,409
|
Django-is it possible to use template filters inside "with"?
|
<p>I have a template filter called "get_data_id" that returns a value; I want to use that value as an argument for another filter called "get_data":</p>
<pre><code>{% with variable_v= "x|get_data_id" %}
<p> {{ variable_v|get_data }} </p>
{% endwith %}
</code></pre>
<p>But django returns:</p>
<pre><code>'with' expected at least one variable assignment
</code></pre>
<p>Is it possible to use template filters in "with clause" ?</p>
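<p>Two details trip this up: <code>with</code> does not allow a space after the <code>=</code>, and quoting <code>"x|get_data_id"</code> turns the whole expression into a string literal instead of applying the filter. Filters are allowed directly in the assignment, so a sketch:</p>
<pre><code>{% with variable_v=x|get_data_id %}
    <p>{{ variable_v|get_data }}</p>
{% endwith %}
</code></pre>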
|
<python><django><database><django-templates>
|
2022-12-12 18:38:14
| 1
| 575
|
Mohammed Baashar
|
74,775,797
| 14,403,266
|
Convert a column of dates of a pandas dataframe to the last date of the respective year
|
<p>I have a pandas dataframe, lets call it <code>df</code>, looking like this:</p>
<pre><code> Acount Type Id Date Value Per
0 Exp P IQ 2016-03-31 -23421.170324 3M
1 Exp P IQ 2017-03-31 -44803.599908 3M
2 Exp P IQ 2018-03-31 -29294.611346 3M
3 Exp P IQ 2019-03-31 -9463.281704 3M
</code></pre>
<p>I need the date column to have the last day of each year, for example: "2019/12/31" and <code>df</code> to look like this:</p>
<pre><code> Acount Type Id Date Value Per
0 Exp P IQ 2016-12-31 -23421.170324 3M
1 Exp P IQ 2017-12-31 -44803.599908 3M
2 Exp P IQ 2018-12-31 -29294.611346 3M
3 Exp P IQ 2019-12-31 -9463.281704 3M
</code></pre>
<p>Do you guys know what I have to do?</p>
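<p>A sketch using pandas' anchored offsets: <code>YearEnd(0)</code> rolls every timestamp forward to the end of its own year, and leaves dates already on December 31 unchanged:</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date']) + pd.offsets.YearEnd(0)
</code></pre>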
|
<python><pandas><dataframe><date>
|
2022-12-12 18:35:41
| 2
| 337
|
Valeria Arango
|
74,775,767
| 11,653,374
|
Reading a users file within module A from a function within module B in Python
|
<p>I have the following directory:</p>
<pre><code>└── myproject
├── moduleA
│ ├── __init__.py
│ ├── users.py
├── moduleB
│ ├── __init__.py
│ ├── test_in_B.py
├── test.py
</code></pre>
<p>I can easily run <code>test.py</code> whose content is the following:</p>
<pre><code>from moduleA import users
print(users.name)
</code></pre>
<p>However, I cannot run <code>test_in_B.py</code> whose content is the same as the above and get <code>ModuleNotFoundError: No module named 'moduleA'</code> error.</p>
<p><strong>Contents for replication:</strong></p>
<p><code>moduleA (__init__.py)</code></p>
<pre><code>from moduleB import *
</code></pre>
<p><code>moduleB (__init__.py)</code></p>
<pre><code>from moduleA import *
</code></pre>
<p><code>users</code></p>
<pre><code>name = 'user1'
</code></pre>
<p><strong>Question</strong></p>
<p>How can I run <code>test_in_B.py</code> keeping the same structure?</p>
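<p>Two sketches (note the circular <code>__init__</code> imports between the modules may still need untangling). Either run the file as a module from the project root, so <code>myproject</code> is on <code>sys.path</code>:</p>
<pre><code>cd myproject
python -m moduleB.test_in_B
</code></pre>
<p>or, keeping <code>python moduleB/test_in_B.py</code>, put the project root on the path before importing, a workaround sketch:</p>
<pre><code>import sys
from pathlib import Path

# make the directory that contains moduleA importable
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))

from moduleA import users
print(users.name)
</code></pre>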
|
<python><module>
|
2022-12-12 18:32:49
| 1
| 728
|
Saeed
|
74,775,684
| 7,782,597
|
ShopifyQL raise error "Field 'shopifyqlQuery' doesn't exist on type 'QueryRoot'"
|
<p>I am trying to run a ShopifyQL query through Python, but I get the error "Field 'shopifyqlQuery' doesn't exist on type 'QueryRoot'".</p>
<p>The reference used to build the query is found in: <a href="https://shopify.dev/api/shopifyql" rel="nofollow noreferrer">https://shopify.dev/api/shopifyql</a></p>
<p>Follow below the excerpt of the code used to get the query:</p>
<pre><code>import json
import requests
API_KEY = MYKEY
PASSWORD = MYSECRET
SHOP_NAME = MYSHOP
API_VERSION = '2022-07'
shop_url = "https://%s:%s@%s.myshopify.com/admin/api/%s" % (API_KEY, PASSWORD, SHOP_NAME, API_VERSION)
response = requests.post(shop_url+'/graphql.json', json={'query': GraphQLString})
answer = json.loads(response.text)
</code></pre>
<p>The full error follows:</p>
<pre><code>{'errors': [{'message': "Field 'shopifyqlQuery' doesn't exist on type 'QueryRoot'", 'locations': [{'line': 4, 'column': 3}], 'path': ['query', 'shopifyqlQuery'], 'extensions': {'code': 'undefinedField', 'typeName': 'QueryRoot', 'fieldName': 'shopifyqlQuery'}}]}
</code></pre>
<p>When I try to run other Shopify GraphQL queries the code works just fine, which makes me think I may be missing something simple in the query, but I can't figure out what.</p>
<p>Thanks all in advance!</p>
|
<python><graphql><shopify>
|
2022-12-12 18:25:10
| 1
| 338
|
Danilo Steckelberg
|
74,775,659
| 2,886,575
|
numpy argmin not returning ints on pandas objects
|
<p>I am trying to take the <a href="https://numpy.org/doc/stable/reference/generated/numpy.argmin.html" rel="nofollow noreferrer"><code>numpy.argmin</code></a> of a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html" rel="nofollow noreferrer"><code>pandas Series</code></a>. The numpy docs guarantee me that the <code>argmin</code> function returns an <code>ndarray</code> of ints. However, when called on a pandas <code>Series</code>, I get an element of the index.</p>
<p>For example:</p>
<pre><code>import pandas as pd
import numpy as np
foo = pd.Series(np.array([1,2,3]), index=["a","b","c"])
np.argmin(foo)
</code></pre>
<p>gives back <code>'a'</code>.</p>
<p>Is this expected behavior? Is there a different function that will give me the int index of the minimum argument, or do I need to include "if pandas" logic to deal with this?</p>
<p>Python 3.6.9 (default, Nov 25 2022, 14:10:45) [GCC 8.4.0] on linux</p>
<p>Ubuntu 18.04</p>
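<p>A sketch that always yields the integer position regardless of pandas version is to strip the index off first (historically, NumPy dispatched to pandas' own <code>argmin</code>, which in old versions behaved like <code>idxmin</code> and returned a label):</p>
<pre><code>import numpy as np

pos = int(np.argmin(foo.to_numpy()))   # 0: position of the minimum, index-free
</code></pre>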
|
<python><pandas><numpy>
|
2022-12-12 18:23:25
| 1
| 5,605
|
Him
|
74,775,585
| 11,710,304
|
How can I get a value based on two other columns when one column value is ambigous in Python?
|
<p>I want to assign a variable depending on two column values. The input table <code>df_input</code> looks like this:</p>
<pre><code>import pandas as pd
df_input = pd.DataFrame(
{
"entity": [
"Table_A",
"Table_A",
"Table_A",
"Table_B",
"Table_B",
"Table_C",
],
"field": [
"Column 1",
"Column 2",
"Column 3",
"Column 1",
"Column 2",
"Column 1",
],
"type": ["new",
"new",
"new",
"old",
"new",
"old",],
}
)
Table_A = pd.DataFrame(
{
"Column 1": [123],
"Column 2": ["XYZ"],
"Column 3": [True],
}
)
</code></pre>
<p>This is how the Data looks like.</p>
<p><a href="https://i.sstatic.net/gYjzh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYjzh.png" alt="enter image description here" /></a></p>
<p>My goal is to determine the <code>type</code>, because certain functions are triggered depending on this value. But that is outside the scope for this question. My problem right now is that I want the <code>type</code> based on the <code>entity</code> and the <code>field</code>. So I tried this piece of code:</p>
<pre><code>for field in df_input.field:
print(f"{field=}")
if field == Table_A.columns:
type_variable = df_input.iloc[field]['type']
print(f"{type=}")
</code></pre>
<p>In this case <code>Table_A</code> is a table which will be transformed based on the type value. But it is out of scope for this question. All I receive is this error:<br />
<code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</code></p>
<pre><code>Output:
new
new
new
old
new
old
</code></pre>
<p>I know why this error occurs, but right now I am not able to solve it. I would like the <code>type_variable</code> to be output depending on the <code>table</code> and the <code>column</code>, so I can move on with this variable in the next function. Can anybody help me with this issue?</p>
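<p>A sketch of the lookup without the ambiguous array comparison: filter the metadata frame by entity and by membership of the field in the table's columns, rather than comparing a scalar against the whole <code>columns</code> array:</p>
<pre><code>entity = "Table_A"   # hypothetical: the name of the table being processed
subset = df_input[
    (df_input["entity"] == entity) & (df_input["field"].isin(Table_A.columns))
]
for field, type_variable in zip(subset["field"], subset["type"]):
    print(field, "->", type_variable)   # e.g. Column 1 -> new
</code></pre>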
|
<python><pandas><for-loop><if-statement><reference>
|
2022-12-12 18:16:23
| 1
| 437
|
Horseman
|
74,775,562
| 1,914,781
|
plot error bar with plotly express not work
|
<p>I try to plot error bars with Plotly Express, but it reports an error: the input has 3 elements, yet it says the <code>error_y</code> size is 6:</p>
<pre><code>ValueError: All arguments should have the same length. The length of argument `error_y` is 6, whereas the length of previously-processed arguments ['x', 'y'] is 3
</code></pre>
<p>Code:</p>
<pre><code>import plotly.express as px
import pandas as pd
import numpy as np
from io import StringIO
def save_fig(fig,pngname):
fig.write_image(pngname,format="png", width=800, height=300, scale=1)
print("[[%s]]"%pngname)
#plt.show()
return
def date_linspace(start, end, steps):
delta = (end - start) / (steps-1)
increments = range(0, steps) * np.array([delta]*steps)
return start + increments
def plot_timedelta(x,y,colors,pngname):
fig = px.scatter(
x=x,
y=y,
color=colors,
error_y=dict(
type='data',
symmetric=False,
arrayminus=y,
array=[0] * len(y),
thickness=1,
width=0,
),
)
tickvals = date_linspace(x.min(),x.max(),15)
print(x.min(),x.max())
layout =dict(
title="demo",
xaxis_title="X",
yaxis_title="Y",
title_x=0.5,
margin=dict(l=10,t=20,r=0,b=40),
height=300,
xaxis=dict(
tickangle=-25,
tickvals = tickvals,
ticktext=[d.strftime('%m-%d %H:%M:%S') for d in tickvals]
),
yaxis=dict(
showgrid=True,
zeroline=False,
showline=False,
showticklabels=True
)
)
fig.update_traces(
marker_size=14,
)
fig.update_layout(layout)
save_fig(fig,pngname)
return
def get_delta(df):
df['delta'] = df['ts'].diff().dt.total_seconds()
df['prev'] = df['ts'].shift(1)
print("delta min:",df['delta'].min())
print("delta max:",df['delta'].max())
#df = df[df['delta'] >= 40]
return df
data = """ts,source
2022-12-12 15:46:20.350,izat
2022-12-12 15:46:36.372,skyhook
2022-12-12 15:46:37.181,skyhook
"""
csvtext = StringIO(data)
df = pd.read_csv(csvtext, sep=",")
df['ts'] = pd.to_datetime(df['ts'])
df = get_delta(df)
df['delta'] = df['delta'].fillna(0)
plot_timedelta(df['ts'],df['delta'],df['source'],"demo.png")
</code></pre>
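<p>Plotly Express does not accept the graph-objects style <code>error_y=dict(...)</code>; it expects per-point values, which is why the six dict entries are reported as "length 6". A sketch using the Express arguments and pushing the styling into <code>update_traces</code>:</p>
<pre><code>fig = px.scatter(
    x=x, y=y, color=colors,
    error_y=[0] * len(y),   # upper error per point
    error_y_minus=y,        # lower error per point
)
fig.update_traces(error_y_thickness=1, error_y_width=0, marker_size=14)
</code></pre>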
|
<python><plotly>
|
2022-12-12 18:14:50
| 1
| 9,011
|
lucky1928
|
74,775,404
| 2,146,381
|
interpolate, derivate and integrate a function -- some math fun
|
<p>I have a problem. I have three lists: list_umf holds the x values and list_kf holds the y values, while list_kfm holds y values too; kfm is the integral of kf. The values are the output of my code.</p>
<p>To show that kfm is the integral of kf, I want to calculate the derivative of kfm, which should be the same as kf. But the recalculated kf (list_kf_re) is just 101.0 every time.</p>
<p>What's wrong with my code?</p>
<pre><code>import numpy as np
from scipy import integrate, interpolate
from scipy.misc import derivative as deriv
import matplotlib.pyplot as plt
list_kfm = [15.348748494618041, 26.240336614039776, 37.76846357985518, 49.80068952374503, 62.25356792292074, 75.0692188764684, 88.20491343740369, 101.6276911997135,
115.31128207665246, 129.2342114999071, 143.37856687640036, 157.72915825067278, 172.27292637703843, 186.9985127198004, 201.89593919604192, 216.95636451973587]
list_kf = [168.08871431597626, 179.78615963605742, 188.728883379148, 196.0371678709251, 202.25334207341422, 207.68364358717665, 212.51893919883966, 216.88670040685466,
220.87653440371076, 224.55397301446894, 227.96847485999652, 231.15833919688876, 234.1538643061246, 236.97945558527186, 239.65507793294745, 242.19728380107006]
list_umf = [0.1, 0.15000000000000002, 0.20000000000000004, 0.25000000000000006, 0.30000000000000004, 0.3500000000000001, 0.40000000000000013, 0.45000000000000007,
0.5000000000000001, 0.5500000000000002, 0.6000000000000002, 0.6500000000000001, 0.7000000000000002, 0.7500000000000002, 0.8000000000000002, 0.8500000000000002]
f = interpolate.interp1d(
list_umf, list_kfm, bounds_error=False, fill_value=(15, 217))
list_kf_re = [deriv(f, x) for x in list_umf]
plt.plot(list_umf, list_kfm, label='kfm')
plt.plot(list_umf, list_kf, label='kf')
plt.plot(list_umf, list_kf_re, label='kfre')
print(list_kf_re)
print(list_kf)
</code></pre>
<p><a href="https://i.sstatic.net/3I264.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3I264.png" alt="enter image description here" /></a></p>
|
<python><numpy><scipy>
|
2022-12-12 18:00:10
| 1
| 322
|
tux007
|
74,775,249
| 11,347,405
|
How to plot a histogram of a list of dates?
|
<p>I have a list of times</p>
<pre><code>times = ['2022-12-09T16:06:34.000000000', '2022-12-09T16:06:34.000000000',
'2022-12-09T16:09:47.000000000', ... , '2022-12-09T17:46:10.000000000',
'2022-12-09T17:46:10.000000000', '2022-12-09T17:46:10.000000000',
'2022-12-09T17:49:10.000000000', '2022-12-09T17:49:10.000000000']
</code></pre>
<p>and I want to plot the number of occurrences per hour in a histogram using matplotlib.pyplot.</p>
<p>What I've done is this:</p>
<pre><code>times = pd.Series(extract_times()).astype("datetime64")
times.groupby([times.dt.day, times.dt.hour]).count()
times.plot()
</code></pre>
<p>which gives me this plot:</p>
<p><a href="https://i.sstatic.net/0jfsi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0jfsi.png" alt="enter image description here" /></a></p>
<p>I would like a more linear time scale on the x axis, and preferrably a rolling average, or bars alternatively.</p>
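<p>A sketch: the grouped counts need to be kept and plotted (the snippet above computes them but then plots the raw series). Flooring the timestamps to the hour gives a linear datetime axis, and bars fall out naturally:</p>
<pre><code>times = pd.Series(extract_times()).astype("datetime64[ns]")
counts = times.groupby(times.dt.floor("H")).size()   # occurrences per hour
counts.plot(kind="bar")
</code></pre>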
|
<python><pandas><matplotlib><datetime><time-series>
|
2022-12-12 17:47:58
| 1
| 550
|
naraghi
|
74,775,166
| 1,050,619
|
mock a function inside python pyramid view
|
<p>I have a python pyramid view and want to write a unittest.</p>
<pre><code>def home(request):
state = request.params.get('redirect', None)
cookie = request.headers.get('Cookie')
user = identify_session_user(cookie , request.registry.settings)
response = HTTPFound(location=state)
response.set_cookie('USERINFO',
base64.b64encode(json.dumps(user).encode('ascii')),
domain='test.com')
return response
</code></pre>
<p>Simple unittest:-</p>
<pre><code>def test_hello_world(self):
from tutorial import home
request = testing.DummyRequest()
    response = home(request)
self.assertEqual(response.status_code, 200)
</code></pre>
<p>How can I mock the idenfify_session_user function that is called inside my home view?</p>
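<p>A sketch with <code>unittest.mock.patch</code>; the key is patching the name in the module where the view looks it up. The module path <code>tutorial.views</code> here is an assumption: adjust it to wherever <code>home</code> actually lives:</p>
<pre><code>from unittest import mock
from pyramid import testing

def test_home(self):
    from tutorial import views
    request = testing.DummyRequest(params={"redirect": "https://test.com"})
    request.headers["Cookie"] = "session=abc"
    # patch where the view resolves the name, not where it is defined
    with mock.patch("tutorial.views.identify_session_user",
                    return_value={"name": "user1"}) as fake:
        response = views.home(request)
    fake.assert_called_once()
    self.assertEqual(response.status_code, 302)   # HTTPFound redirects
</code></pre>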
|
<python><pytest><python-unittest><pyramid>
|
2022-12-12 17:40:35
| 1
| 20,966
|
user1050619
|
74,774,994
| 10,620,003
|
Build an array with size (1,n) from an array with size (m, k) with a smarter way
|
<p>I have a very large array with size (5, n), I want to build an array with size (1,20) from it in each iteration. I have to use a very basic approach to build my new array.
Here is an example:</p>
<pre><code>A = np.array(
[[4, 2, 1, 4, 0, 1, 3, 2, 4, 4],
[4, 2, 0, 3, 1, 1, 4, 2, 2, 1],
[3, 2, 3, 2, 0, 3, 4, 1, 4, 3],
[1, 1, 1, 3, 1, 1, 3, 0, 2, 2],
[3, 3, 4, 1, 4, 1, 0, 1, 0, 2]])
</code></pre>
<p>I want to build an array of size (1,20) from A, where elements <code>0-4</code> come from row 0 of A, <code>4-8</code> from row 1, <code>8-12</code> from row 2, <code>12-16</code> from row 3, and <code>16-20</code> from row 4. I use this code:</p>
<pre><code>B = np.zeros((1, 20))
B[0, 0:4] = A[0, 0:4]
B[0, 4:8] = A[1, 0:4]
B[0, 8:12] = A[2, 0:4]
B[0, 12:16] = A[3, 0:4]
B[0, 16:20] = A[4, 0:4]
</code></pre>
<p>and my B is :</p>
<pre><code>array([[4., 2., 1., 4., 4., 2., 0., 3., 3., 2., 3., 2., 1., 1., 1., 3.,
3., 3., 4., 1.]])
</code></pre>
<p>However, since I have a lot of this type of array in my code, I want to ask, do you have any solution which does not to need to use all of this lines of code for it? Thank you.</p>
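<p>A sketch: slice the first four columns and flatten row-major, which reproduces B exactly and needs no per-row lines:</p>
<pre><code>B = A[:, :4].reshape(1, -1)   # rows of the slice are laid out one after another
</code></pre>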
|
<python><arrays><numpy>
|
2022-12-12 17:27:35
| 1
| 730
|
Sadcow
|
74,774,973
| 16,414,611
|
GraphQL API with Scrapy
|
<p>I'm trying to get data from <a href="https://www.ouedkniss.com/boutiques/immobilier" rel="nofollow noreferrer">https://www.ouedkniss.com/boutiques/immobilier</a>. I found that ouedkniss.com uses a GraphQL API. I tried to use this API but failed to pull data and to paginate; it shows the error <code>AttributeError: 'list' object has no attribute 'get'</code>. I don't know if I am missing something else here or not. Here is what I tried so far:</p>
<pre><code>import scrapy
import json
from ..items import OuedknissItem
from scrapy.loader import ItemLoader
class StoresSpider(scrapy.Spider):
name = 'stores'
allowed_domains = ['www.ouedkniss.com']
def start_requests(self):
payload = json.dumps([
{
"operationName": "SearchStore",
"query": "query Campaign($slug: String!) {\n project(slug: $slug) {\n id\n isSharingProjectBudget\n risks\n story(assetWidth: 680)\n currency\n spreadsheet {\n displayMode\n public\n url\n data {\n name\n value\n phase\n rowNum\n __typename\n }\n dataLastUpdatedAt\n __typename\n }\n environmentalCommitments {\n id\n commitmentCategory\n description\n __typename\n }\n __typename\n }\n}\n",
"variables": {
"q": "", "filter": {
"categorySlug": "immobilier",
"count": 12, "page": 1},
"categorySlug": "immobilier",
"count": 12,
"page": 1
},
}
])
headers= {
"Content-Type": "application/json",
# "X-Requested-With": "XMLHttpRequest",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
}
yield scrapy.Request(
url='https://api.ouedkniss.com/graphql',
method="POST",
headers=headers,
body=payload,
callback=self.parse
)
return super().start_requests()
def parse(self, response):
json_resp = json.loads(response.body)
# print(json_resp)
stores = json_resp.get('data')[0].get('stores').get('data')
for store in stores:
loader = ItemLoader(item=OuedknissItem())
loader.add_value('name', store.get('name'))
yield loader.load_item()
</code></pre>
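<p>The <code>AttributeError</code> comes from calling <code>.get</code> on a list: the payload is a batched list of operations, so the server answers with a JSON list, one element per operation. A sketch of unwrapping it first (the field names under <code>data</code> must be matched to the real response shape):</p>
<pre><code>json_resp = json.loads(response.body)
# batched requests come back as a list, one element per operation
first = json_resp[0] if isinstance(json_resp, list) else json_resp
data = first.get("data", {})
</code></pre>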
|
<python><web-scraping><graphql><scrapy>
|
2022-12-12 17:25:56
| 2
| 329
|
Raisul Islam
|
74,774,962
| 6,361,813
|
Geometric series: calculate quotient and number of elements from sum and first & last element
|
<p>Creating evenly spaced numbers on a log scale (a geometric progression) can easily be done for a given base and number of elements if the starting and final values of the sequence are known, e.g., with <code>numpy.logspace</code> and <code>numpy.geomspace</code>. Now assume I want to define the geometric progression the other way around, i.e., based on the properties of the resulting geometric series. If I know the sum of the series as well as the first and last element of the progression, can I compute the quotient and number of elements?</p>
<p>For instance, assume the first and last elements of the progression are <code>a_0</code> and <code>a_n</code> and the sum of the series is <code>s_n</code>. I know from trial and error that it works out for <code>n=9</code> and <code>r≈1.404</code>, but how could these values be computed?</p>
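<p>Yes: writing the series sum as s_n = (r * a_n - a_0) / (r - 1) and solving gives r = (s_n - a_0) / (s_n - a_n), and n then follows from a_n = a_0 * r^n. A sketch:</p>
<pre><code>import numpy as np

def geom_params(a0, an, sn):
    # from s_n (r - 1) = r * a_n - a_0
    r = (sn - a0) / (sn - an)
    # from a_n = a0 * r**n
    n = np.log(an / a0) / np.log(r)
    return r, n
</code></pre>
<p>Note that n will generally come out non-integer unless the inputs really form a geometric series, so round it and verify.</p>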
|
<python><arrays><algorithm><numpy><math>
|
2022-12-12 17:25:21
| 3
| 407
|
Pontis
|
74,774,627
| 19,079,397
|
How to break a current loop and go to the next loop when a condition is meet in python?
|
<p>I have a data frame like the one below. I want to iterate through the unique values of column Name and get the values of column Age where the age is 10; once the condition is met, the inner loop should break and continue with the next name. I tried to break it using a while loop but it is not working. What is the best way to write a loop that breaks once the condition is met and goes on to the next one?</p>
<pre><code>Data Frame:-
import pandas as pd
data = [['tom', 10], ['nick', 5], ['juli', 4],
['tom', 11], ['nick', 7], ['juli', 24],
['tom', 12], ['nick', 10], ['juli', 15],
['tom', 14], ['nick', 20], ['juli', 17]]
df = pd.DataFrame(data, columns=['Name', 'Age'])
Loop:-
for j in df['Name'].unique():
print(j)
o=0
t=[]
while o == 10:
for k in df['Age']:
if k == 10:
t.append(k)
o = k
output:-
tom
nick
juli
</code></pre>
<p>It is printing the values in column Name but not the values inside the while loop. How do I achieve this?</p>
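<p>The <code>while o == 10</code> body never runs because <code>o</code> starts at 0. A plain <code>break</code> inside an inner <code>for</code> already does what is asked: it ends the inner scan and falls through to the next name. A sketch:</p>
<pre><code>for name in df['Name'].unique():
    for age in df.loc[df['Name'] == name, 'Age']:
        if age == 10:
            print(name, age)
            break          # stop scanning this name, continue with the next one
</code></pre>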
|
<python><pandas><dataframe><for-loop><while-loop>
|
2022-12-12 16:57:42
| 2
| 615
|
data en
|
74,774,599
| 2,829,961
|
How to read HDF5 files in R without the memory error?
|
<h2>Goal</h2>
<p>Read the <code>data</code> component of a hdf5 file in R.</p>
<h2>Problem</h2>
<p>I am using <code>rhdf5</code> to read hdf5 files in R. Out of 75 files, it successfully read 61 files, but it throws an error about memory for the rest. Oddly, some of the failing files are smaller than files that were read successfully.<br />
I have tried running individual files in a fresh R session, but get the same error.<br />
Following is an example:</p>
<pre><code># Exploring the contents of the file:
library(rhdf5)
h5ls("music_0_math_0_simple_12_2022_08_08.hdf5")
group name otype dclass dim
0 / data H5I_GROUP
1 /data ACC_State H5I_DATASET INTEGER 1 x 1
2 /data ACC_State_Frames H5I_DATASET INTEGER 1
3 /data ACC_Voltage H5I_DATASET FLOAT 24792 x 1
4 /data AUX_CACC_Adjust_Gap H5I_DATASET INTEGER 24792 x 1
... CONTINUES ----
# Reading the file:
rhdf5::h5read("music_0_math_0_simple_12_2022_08_08.hdf5", name = "data")
Error in H5Dread(h5dataset = h5dataset, h5spaceFile = h5spaceFile, h5spaceMem = h5spaceMem, :
Not enough memory to read data! Try to read a subset of data by specifying the index or count parameter.
In addition: Warning message:
In h5checktypeOrOpenLoc(file, readonly = TRUE, fapl = NULL, native = native) :
An open HDF5 file handle exists. If the file has changed on disk meanwhile, the function may not work properly. Run 'h5closeAll()' to close all open HDF5 object handles.
Error: Error in h5checktype(). H5Identifier not valid.
</code></pre>
<h3>I can read the file via python:</h3>
<pre><code>import h5py
filename = "music_0_math_0_simple_12_2022_08_08.hdf5"
hf = h5py.File(filename, "r")
hf.keys()
data = hf.get('data')
data['SCC_Follow_Info']
#<HDF5 dataset "SCC_Follow_Info": shape (9, 24792), type "<f4">
</code></pre>
<p>How can I successfully read the file in R?</p>
|
<python><r><h5py><rhdf5>
|
2022-12-12 16:55:53
| 1
| 6,319
|
umair durrani
|
74,774,369
| 607,407
|
Is it possible to wait until multiprocessing.Value changes?
|
<p>I have a simple bool value to tell a subprocess to clean up and gracefully exit:</p>
<pre><code>self.mem_manager = multiprocessing.Manager()
self.exiting = self.mem_manager.Value(ctypes.c_byte, False)
</code></pre>
<p>Now I have a bunch of threads doing the actual work, so right now at the end of my <code>run()</code> method for my process there was basically this:</p>
<pre><code>while not self.exiting.value:
time.sleep(1)
self.terminate_threads()
return
</code></pre>
<p>This is really ugly. Whenever I dealt with something like this before, there was a condition variable to wait on.</p>
<p>But I have another condition for quitting already - failure in either thread. So I now have a condition variable:</p>
<pre><code>error_condition = threading.Condition()
... threads have access to this and will notify on error and stop ...
... more code ...
while not self.exiting.value:
with error_condition:
error_condition.wait(1)
if self.errors_happened():
... handle errors ...
break
self.terminate_threads()
return
</code></pre>
<p>I would prefer of course:</p>
<pre><code>while not self.exiting.value:
wait_for_either(error_contition, changed_condition(self.exiting))
</code></pre>
<p>Can I do that somehow?</p>
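<p>A sketch of one alternative I am considering, in case a waitable primitive is the way to go (this still leaves combining it with the error condition open):</p>
<pre><code>import multiprocessing

exiting = multiprocessing.Event()  # can be waited on instead of polled

# in the worker process:
exiting.wait(timeout=1)            # blocks until set() is called or timeout
if exiting.is_set():
    ...                            # clean up and exit

# in the parent process:
exiting.set()                      # wakes up any waiters
</code></pre>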
|
<python><python-3.x><python-multiprocessing><python-multithreading>
|
2022-12-12 16:39:11
| 0
| 53,877
|
Tomáš Zato
|
74,774,282
| 17,176,270
|
FastAPI SQLAlchemy delete Many-to-Many relation causes StaleDataError
|
<p>I get an error upon deleting a many-to-many relation from a table. I found that the problem is caused by duplicate rows in the <code>bills_dishes</code> table, but I need to be able to add duplicate items.</p>
<p>So, the code below works fine for items without duplicates like:</p>
<pre><code>bill_id | dish_id
1 | 1
1 | 2
</code></pre>
<p>Models:</p>
<pre><code>bills_dishes_association = Table(
"bills_dishes",
Base.metadata,
Column("bill_id", Integer, ForeignKey("bills.id")),
Column("dish_id", Integer, ForeignKey("dishes.id")),
)
class Waiter(Base):
"""Waiter model."""
__tablename__ = "waiters"
id = Column(Integer, primary_key=True, autoincrement=True)
username = Column(String(50), nullable=False)
password = Column(String(6), nullable=False)
bills = relationship("Bill")
def __repr__(self):
return f"Waiter(id={self.id}, username={self.username})"
class Bill(Base):
"""Bill model."""
__tablename__ = "bills"
id = Column(Integer, primary_key=True, autoincrement=True)
waiter_id = Column(Integer, ForeignKey("waiters.id"), nullable=False)
table_number = Column(Integer, nullable=False)
amount = Column(Float, nullable=False)
tip_percent = Column(Integer)
tip_included = Column(Boolean, default=False, nullable=False)
time = Column(DateTime(timezone=True), server_default=func.now())
dishes = relationship(
"Dish", secondary=bills_dishes_association, back_populates="ordered"
)
def __repr__(self):
return f"Bill(id={self.id}, table_number={self.table_number}, amount={self.amount})"
class Dish(Base):
"""Dish model."""
__tablename__ = "dishes"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(100), nullable=False)
description = Column(String(1024), nullable=False)
image_url = Column(String(500), nullable=False)
cost = Column(Float)
ordered = relationship(
"Bill",
secondary=bills_dishes_association,
back_populates="dishes",
cascade="all, delete",
passive_deletes=True,
)
</code></pre>
<p>crud.py:</p>
<pre><code>def delete_bill(db: Session, bill_id: int):
"""Delete a bill by id."""
db_bill = db.query(Bill).filter(Bill.id == bill_id).first()
db.delete(db_bill)
db.commit()
return db_bill
</code></pre>
<p>But it doesn't work for this case:</p>
<pre><code>bill_id | dish_id
1 | 2
1 | 2

sqlalchemy.orm.exc.StaleDataError: DELETE statement on table 'bills_dishes' expected to delete 1 row(s); Only 2 were matched.
</code></pre>
<p>How to handle this?</p>
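<p>A sketch of the direction I am considering (an association object with its own surrogate primary key, so identical <code>(bill_id, dish_id)</code> pairs become distinguishable rows; the class name is my own):</p>
<pre><code>class BillDish(Base):
    """Association object replacing the plain bills_dishes Table."""
    __tablename__ = "bills_dishes"

    id = Column(Integer, primary_key=True, autoincrement=True)  # surrogate key
    bill_id = Column(Integer, ForeignKey("bills.id"))
    dish_id = Column(Integer, ForeignKey("dishes.id"))
</code></pre>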
|
<python><postgresql><sqlalchemy><many-to-many><fastapi>
|
2022-12-12 16:31:07
| 1
| 780
|
Vitalii Mytenko
|
74,774,275
| 1,350,082
|
Finding concurrent occurences of values above/below a threshold in a Pandas DataFrame/Series
|
<p>I need to repeatedly scan columns in a Pandas DataFrame (I will refer to this as a Series) to determine the number of times a threshold is breached. What is making this more complicated is the following:</p>
<ul>
<li>The series is only considered to have breached the threshold if it is beyond the threshold for five consecutive points or more in the series.</li>
<li>The series is only considered to have returned to normative levels if for five consecutive points or more after breaching the limit the series is back within the limit.</li>
</ul>
<p>I'm trying to count the number of times the threshold is breached and the total number of points that are considered beyond the threshold.</p>
<p>For example, let's say the limit is x >= 3, with the following series (I've formatted this as a list so it is easier to read; in reality x would be a column in a DataFrame with N measurements):</p>
<p>x = [0, 0, 0, 5, 5, 5, 5, 1, <strong>5, 5, 5, 5, 5, 5, 0, 5</strong>, 0, 0, 0, 0, 0, 0, <strong>5, 5, 5, 5, 5</strong>, 0, 0, 0, 0, 0]</p>
<p>The desired result is that the threshold is breached twice, and the total number of points above the threshold is 13 (points in bold are considered in breach of the limit).</p>
<p>I can do this by looping through every point in the series however this is very slow. Through another question I found that the following can be used to find the number of times the series goes beyond a specified limit for more than five points:</p>
<pre><code>df['breach'] = df['x'] >= limit #Create a boolean column for when the series breaches the limit
df_limit = pd.DataFrame(df['breach'].values, columns=['breach_limit']) #create a new dataframe using the boolean column
df_limit['count'] = df_limit['breach_limit'].groupby(df_limit['breach_limit'].diff().ne(0).cumsum()).sum().replace(0, np.nan).dropna().reset_index(drop=True) #Count based aggregate of the column which results in the counts of every set of consecutive breaches in the series.
</code></pre>
<p>This would result in a count based aggregation of the total consecutive breaches. For example in the example above it would result in [4, 6, 1, 5]. You can then just discard the times that count is below five resulting in two breaches. However, this doesn't take into account allowing five points after the breach to return to normative levels or allow me to calculate the total points.</p>
<p>Has anyone got suggestions for how to do this without for loops? Looping over every point works, but it is prohibitively slow.</p>
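<p>To show the semantics I mean, here is a sketch that loops over runs instead of points (run-length encoding the boolean series first); the function name and the exact hysteresis rules are my interpretation:</p>
<pre><code>import numpy as np

def breach_stats(x, limit=3, min_run=5):
    x = np.asarray(x)
    above = x >= limit
    # Run-length encode: starts/ends/values describe runs of equal booleans.
    change = np.flatnonzero(np.diff(above.astype(int))) + 1
    starts = np.r_[0, change]
    ends = np.r_[change, len(x)]
    values = above[starts]

    breaches, points, in_breach, breach_start = 0, 0, False, 0
    for s, e, v in zip(starts, ends, values):
        if not in_breach and v and e - s >= min_run:
            in_breach, breach_start = True, s      # breach begins
            breaches += 1
        elif in_breach and not v and e - s >= min_run:
            points += s - breach_start             # breach ends here
            in_breach = False
    if in_breach:                                  # still breached at the end
        points += len(x) - breach_start
    return breaches, points

# breach_stats([0,0,0,5,5,5,5,1,5,5,5,5,5,5,0,5,0,0,0,0,0,0,5,5,5,5,5,0,0,0,0,0])
# -> (2, 13), matching the example above
</code></pre>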
|
<python><pandas><dataframe>
|
2022-12-12 16:30:44
| 0
| 317
|
speeder1987
|
74,774,269
| 5,110,870
|
Correct way of updating __repr__ in Python using dataclasses and inheritance
|
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import MISSING, asdict, dataclass
from typing import Any
from datetime import datetime
@dataclass()
class BookMetadata():
'''Parent class.'''
isbn: str
title: str
author: str
publisher: str
date_published: int
def format_time(self, unix: int) -> str:
'''Convert unix time to %Y-%m-%d.'''
return datetime.fromtimestamp(int(str(unix)[0:10])).strftime('%Y-%m-%d')
def __post_init__(self):
'''Change attributes after assignment.'''
# Change date from UNIX to YYYY-MM-DD
self.date_published = self.format_time(self.date_published)
@dataclass()
class RetailPrice(BookMetadata):
'''Child class.'''
def __init__(self,
isbn, title, author, publisher, date_published,
price_usd, price_aud, price_eur, price_gbp) -> None:
self.price_usd: float = price_usd
self.price_aud: float = price_aud
self.price_eur: float = price_eur
self.price_gbp: float = price_gbp
BookMetadata.__init__(self, isbn, title, author, publisher, date_published)
# Or: super(RetailPrice, self).__init__(isbn, title, author, publisher, date_published)
def stringify(self, obj: Any) -> str:
'''Turn object into string.'''
return str(obj)
def __post_init__(self):
'''Change attribute values after assignment.'''
self.price_usd = self.stringify(self.price_usd)
def __repr__(self) -> str:
'''Update representation including parent and child class attributes.'''
return f'Retailprice(isbn={super().isbn}, title={super().title}, author={super().author}, publisher={super().publisher}, date_published={super().date_published}, price_usd={self.price_usd}, price_aud={self.price_aud}, price_eur={self.price_eur}, price_gbp={self.price_gbp})'
</code></pre>
<p>My <code>__repr__</code> method is failing with the following message:
<code>AttributeError: 'super' object has no attribute 'isbn'</code>, so I am referencing the attributes of the parent class all wrong here.</p>
<p>As it's possible to call the parent dataclass under the <code>__init__</code> method of the child dataclass, (<code>BookMetadata.__init__(self, isbn, title, author, publisher, date_published)</code>), I thought that trying with <code>super(BookMetadata, self)</code> would work, but it failed with the same message.</p>
<p>How should I reference the attributes of the parent class in <code>__repr__</code> within the child dataclass?</p>
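<p>For reference, a sketch of my current understanding (inherited dataclass fields live on the instance itself, so perhaps they should be read via <code>self</code> rather than <code>super()</code> — though I am not sure this is the idiomatic way):</p>
<pre class="lang-py prettyprint-override"><code>def __repr__(self) -> str:
    # all fields, inherited or not, are plain instance attributes
    return (f'RetailPrice(isbn={self.isbn}, title={self.title}, '
            f'author={self.author}, publisher={self.publisher}, '
            f'date_published={self.date_published}, '
            f'price_usd={self.price_usd}, price_aud={self.price_aud}, '
            f'price_eur={self.price_eur}, price_gbp={self.price_gbp})')
</code></pre>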
|
<python><oop><inheritance><python-dataclasses><repr>
|
2022-12-12 16:30:11
| 2
| 7,979
|
FaCoffee
|
74,774,254
| 5,508,978
|
How can I retrieve the hyper-parameters that were used to train this xgboost booster type model?
|
<p>I have an xgboost model that is trained already. It was trained by the xgboost original API. I am trying to find the hyper-parameters upon which the trained model was trained. Most specifically, I want to retrieve the objective of the trained model.</p>
<pre><code>xgb.__versions__ # returns '1.7.2'
type(model) # returns xgboost.core.Booster
model.params() # AttibuteError: 'Booster' object has no attribute 'params'
model.get_params() # AttibuteError: 'Booster' object has no attribute 'get_params'
</code></pre>
<p>How can I retrieve the hyper-parameters that were used to train this xgboost booster type model?</p>
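<p>For reference, this is as far as I got (a sketch; I am not sure the JSON layout of <code>save_config()</code> is stable across xgboost versions):</p>
<pre><code>import json

config = json.loads(model.save_config())             # Booster.save_config() -> JSON string
objective = config["learner"]["objective"]["name"]   # key path assumed from 1.7.x
print(objective)
</code></pre>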
|
<python><xgboost>
|
2022-12-12 16:29:12
| 1
| 370
|
mansanto
|
74,774,193
| 13,454,049
|
Why is rb+ slower than r+? Python
|
<p>While I was writing another answer on StackOverflow, I encountered this very strange behaviour: <code>rb+</code> seems to be slower than <code>r+</code>:</p>
<pre class="lang-py prettyprint-override"><code>LINE_NUMBER = 1001
NEW_LINE_2 = ""
NEW_LINE_3 = "".encode()
def test2():
with open("temp.txt", "w") as temp:
temp.write("Foo\n" * 1000)
temp.write("REPLACE ME!\n")
temp.write("Bar\n" * 1000)
with open("temp.txt", "r+") as temp:
lines = temp.read().split("\n")
lines[LINE_NUMBER - 1] = NEW_LINE_2
temp.seek(0)
temp.write("\n".join(lines))
temp.truncate()
def test3():
with open("temp.txt", "wb") as temp:
temp.write(b"Foo\n" * 1000)
temp.write(b"REPLACE ME!\n")
temp.write(b"Bar\n" * 1000)
with open("temp.txt", "rb+") as temp:
lines = temp.read().split(b"\n")
lines[LINE_NUMBER - 1] = NEW_LINE_3
temp.seek(0)
temp.write(b"\n".join(lines))
temp.truncate()
from timeit import repeat
loops = 3_000
count = 1
print(loops * min(repeat("test2()", globals=globals(), repeat=loops, number=count)))
print(loops * min(repeat("test3()", globals=globals(), repeat=loops, number=count)))
</code></pre>
<p>Pydroid 3 (Python 3 on Android):</p>
<pre class="lang-none prettyprint-override"><code>1.5903121093288064
1.754219876602292 # < slower? How?
</code></pre>
<p><a href="https://www.online-python.com/" rel="nofollow noreferrer">https://www.online-python.com/</a></p>
<pre class="lang-none prettyprint-override"><code>1.2284908443689346
1.0201307013630867 # faster as expected
</code></pre>
<p>I thought that decoding and encoding in order to process the file as a string would be slower than processing the bytes themselves.
Could someone explain to me what's going on?
I don't understand why it could be slower on Android. Is this maybe a bug?</p>
|
<python><python-3.x>
|
2022-12-12 16:24:03
| 1
| 1,205
|
Nice Zombies
|
74,774,072
| 7,437,143
|
Transparant, and coloured, networkx nodes with a node colour dictionary?
|
<p>Whilst drawing a networkx graph, with differently coloured nodes, I'm trying to make some of them transparent. Based on the last <a href="https://groups.google.com/g/networkx-discuss/c/Q9SHbJ4Af6A" rel="nofollow noreferrer">answer in this post</a>, that seems to be possible. However, I am experiencing some difficulties in setting both the colour, and the transparency value.</p>
<p>Analog to the given answer in the linked post, I tried:</p>
<pre class="lang-py prettyprint-override"><code>for node_name in G.nodes:
if node_name[:4] == "red_":
colour_dict[node_name] = ["olive",0.5]
else:
colour_dict[node_name] = ["yellow",1]
nx.draw(
G,
nx.get_node_attributes(G, "pos"),
with_labels=True,
node_size=160,
font_size=6,
width=0.2,
node_color=color_map,
edge_color=edge_color_map,
# **options,
)
</code></pre>
<p>Which returns:</p>
<blockquote>
<p>ValueError: 'c' argument must be a color, a sequence of colors, or a sequence of numbers, not [['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1], ['olive', 1]]</p>
</blockquote>
<p>Hence, I would like to ask, how can one set the node colour and make it transparent (e.g. 50% transparent) at the same time?</p>
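<p>A sketch of one idea I am testing: RGBA tuples carry the alpha inside the colour itself, so no separate transparency value would be needed (assuming <code>draw()</code> accepts a list of RGBA tuples):</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib.colors import to_rgba

node_colours = [
    to_rgba("olive", 0.5) if str(n).startswith("red_") else to_rgba("yellow", 1.0)
    for n in G.nodes
]
nx.draw(G, nx.get_node_attributes(G, "pos"), node_color=node_colours)
</code></pre>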
|
<python><matplotlib><networkx><draw><transparency>
|
2022-12-12 16:14:34
| 2
| 2,887
|
a.t.
|
74,773,925
| 6,119,375
|
adding a vertical line to a time series plot in python
|
<p>I am plotting time series data, which will be split into a training and a test data set. Now, I would like to draw a vertical line in the plot that indicates where the training/test split happens.</p>
<pre><code># split_point indicates where the vertical line should be plotted.
df = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/a10.csv', parse_dates=['date'], index_col='date')
df
data_size=len(df)
split_point = data_size - data_size // 3
split_point
# Draw Plot
def plot_df(df, x, y, title="", xlabel='Date', ylabel='Value', dpi=100):
plt.figure(figsize=(16,5), dpi=dpi)
plt.plot(x, y, color='tab:red')
plt.gca().set(title=title, xlabel=xlabel, ylabel=ylabel)
plt.show()
plot_df(df, x=df.index, y=df.value, title='Monthly anti-diabetic drug sales in Australia from 1992 to 2008.')
</code></pre>
<p>How can this be added to the plot? I tried using <code>plt.axvline</code>, but don't know how to go from the split point to the date. Any ideas?</p>
<pre><code>plt.axvline(split_point)
</code></pre>
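<p>My guess is that only the mapping from position to date is missing; a sketch of that idea (assuming <code>split_point</code> is a valid positional index):</p>
<pre><code>split_date = df.index[split_point]           # positional index -> date label
plt.axvline(split_date, color='k', linestyle='--')
</code></pre>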
|
<python><matplotlib><time-series>
|
2022-12-12 16:03:33
| 1
| 1,890
|
Nneka
|
74,773,824
| 14,403,266
|
Sum the rows of a pandas dataframe grouping by the dates with the same year
|
<p>I have the following dataframe. lets call it <code>df</code>:</p>
<pre><code>|Account |Type |Date | Per | Value|
-----------------------------------------------
|A |FC |31/03/2019 |3M |a |
|A |FC |30/06/2019 |3M |b |
|A |FC |30/09/2019 |3M |c |
|A |FC |31/12/2019 |3M |d |
|B |P&G |31/03/2019 |3M |e |
|B |P&G |30/06/2019 |3M |f |
|B |P&G |30/09/2019 |3M |g |
|B |P&G |31/12/2019 |3M |h |
</code></pre>
<p>Where a, b, c, d, e, f, g, h are numerical values. For each element of the Account column I need to sum up the values of the Value column that fall in the same year, according to the Date column, getting something like this:</p>
<pre><code>|Account |Type |Date | Per | Value |
-------------------------------------------------
|A |FC |31/12/2019 |3M |a+b+c+d |
|B |P&G |30/12/2019 |3M |e+f+g+h |
</code></pre>
<p>Where the value of the date column of the resulting dataframe corresponds to the last period that was added.</p>
<p>I tried the next code:</p>
<pre><code>test_df = pd.DataFrame(df.groupby('Date').sum().reset_index())
</code></pre>
<p>And I get the next dataframe:</p>
<pre><code> Date Value
0 2016 294590.0158
1 2017 235216.0481
2 2018 326280.1496
3 2019 152482.2480
</code></pre>
<p>But clearly this is not what I was looking for.</p>
<p>Can you guys help me?</p>
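<p>A sketch of what I think the grouping should look like (assuming Value is numeric and the dates are day-first strings as shown above):</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df['Year'] = df['Date'].dt.year
out = (df.groupby(['Account', 'Type', 'Per', 'Year'], as_index=False)
         .agg({'Date': 'max', 'Value': 'sum'})   # last date + yearly total
         .drop(columns='Year'))
</code></pre>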
|
<python><pandas><dataframe>
|
2022-12-12 15:57:29
| 0
| 337
|
Valeria Arango
|
74,773,746
| 2,394,694
|
poetry install fail build with private dependency
|
<p>I have a module that depends on another module (<code>module_a</code>) stored in my private repository (Nexus), which in turn requires another module (<code>module_b</code>) at build time, also stored in the private repository.</p>
<p>I added a repository source in <code>pyproject.toml</code> in order to add my private repo.</p>
<pre><code>[[tool.poetry.source]]
name = "nexus"
url = "https://my_nexus_url/private_repo/simple"
secondary = true
</code></pre>
<p>Then I specify the dependency in the toml:</p>
<pre><code>[tool.poetry.dependencies]
python = ">=3.9.0,<3.11,"
module_a="1.0.0"
</code></pre>
<p>When I run <code>poetry install</code> it downloads <code>module_a</code> and builds it. During the build process I get this error:</p>
<pre><code> ERROR: Could not find a version that satisfies the requirement module_b==1.0.1
ERROR: No matching distribution found for module_b==1.0.1
</code></pre>
<p>When I try to install the module using <code>pip</code> with <code>--extra-index-url <my repo></code> everything works fine.</p>
<pre><code>pip install module_a --extra-index-url https://my_nexus_url/private_repo/simple
</code></pre>
<p>I guess that the problem is related to the pip command executed by <code>poetry</code>: it does not specify the extra-index-url pointing to my repo, so it tries to download the dependency (<code>module_b</code>) from the PyPI repository instead of from my repo.</p>
<p>Is there a way to instruct poetry to use my private repo when a source build is required?</p>
<p>I have already tried this:</p>
<pre><code>[tool.poetry.dependencies]
python = ">=3.9.0,<3.11,"
module_b={version="1.0.1", source="nexus"}
module_a={version="1.0.0", source="nexus"}
</code></pre>
<p>without any success.</p>
|
<python><python-poetry>
|
2022-12-12 15:51:42
| 0
| 1,549
|
theShadow89
|
74,773,700
| 607,407
|
Is there a way to sync a serializable structure with python multiprocessing?
|
<p>If you create a new Process in Python, it will serialize and copy the entire available scope, as far as I understand it. If you use <code>multiprocessing.Pipe()</code>, it also allows sending various things, not just raw bytes.</p>
<p>However, instead of sending, I simply want to update a variable that contains a simple POD object like this:</p>
<pre><code>class MyStats:
def __init__(self):
self.bytes_read = 0
self.bytes_written = 0
</code></pre>
<p>So say that in a process, when I update these stats, I want to tell Python to serialize them and send them to the parent process' side somehow. I don't want to have to create a <code>multiprocessing.Value</code> for each and every one of these things; that sounds super tedious.</p>
<p>Is there a way to tell python to pass and overwrite a specific object property somehow?</p>
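<p>A sketch of one direction I looked at, in case a manager proxy is the intended tool (attribute writes on a <code>Namespace</code> are forwarded to the manager process):</p>
<pre><code>import multiprocessing

manager = multiprocessing.Manager()
stats = manager.Namespace()
stats.bytes_read = 0
stats.bytes_written = 0

def worker(stats):
    # note: a read-modify-write like this is not atomic across processes
    stats.bytes_read = stats.bytes_read + 4096
</code></pre>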
|
<python><python-3.x><python-multiprocessing>
|
2022-12-12 15:47:50
| 1
| 53,877
|
Tomáš Zato
|
74,773,594
| 12,692,182
|
Wildly inconsistent and incorrect lighting in opengl
|
<p>I followed a tutorial to add simple diffuse lighting, but the lighting is very much broken:</p>
<p><a href="https://i.sstatic.net/v2KVQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v2KVQ.png" alt="Sample object" /></a></p>
<p>On top of being inconsistent, the diffuse component completely disappears at some camera angles (camera position seems to have no effect on this).</p>
<p>The vertex shader:</p>
<pre class="lang-c prettyprint-override"><code>#version 450 core
layout (location = 0) in vec4 vPosition;
layout (location = 1) in vec4 vNormal;
layout (location = 2) out vec4 fNormal;
layout (location = 3) out vec4 fPos;
uniform mat4 MVMatrix;
uniform mat4 PMatrix;
void main()
{
gl_Position = PMatrix * (MVMatrix * vPosition);
fNormal = normalize(inverse(transpose(MVMatrix))*vNormal);
fPos = MVMatrix * vPosition;
}
</code></pre>
<p>Fragment shader:</p>
<pre class="lang-c prettyprint-override"><code>#version 450 core
layout (location = 0) out vec4 fColor;
layout (location = 2) in vec4 fNormal;
layout (location = 3) in vec4 fPos;
uniform vec4 objColour;
void main()
{
vec3 lightColour = vec3(0.5, 0.0, 0.8);
vec3 lightPos = vec3(10, 20, 30);
float ambientStrength = 0.4;
vec3 ambient = ambientStrength * lightColour;
vec3 diffLightDir = normalize(lightPos - vec3(fPos));
float diff = max(dot(vec3(fNormal), diffLightDir), 0.0);
vec3 diffuse = diff * lightColour;
vec3 rgb = (ambient + diffuse) * objColour.rgb;
fColor = vec4(rgb, objColour.a);
}
</code></pre>
<p>Normal calculation (due to the Pythonic nature of my code I did not follow a tutorial here, and this is probably the issue):</p>
<pre class="lang-py prettyprint-override"><code>self.vertices = np.array([], dtype=np.float32)
self.normals = np.array([], dtype=np.float32)
data = Wavefront(r"C:\Users\cwinm\AppData\Local\Programs\Python\Python311\holder.obj", collect_faces=True)
all_vertices = data.vertices
for mesh in data.mesh_list:
for face in mesh.faces:
face_vertices = np.array([all_vertices[face[i]] for i in range(3)])
normal = np.cross(face_vertices[0]-face_vertices[1], face_vertices[2] - face_vertices[1])
normal /= np.linalg.norm(normal)
self.vertices = np.append(self.vertices, face_vertices)
for i in range(3): self.normals = np.append(self.normals, normal)
self.index = index_getter(len(self.vertices))
self.vertices.resize((len(self.vertices)//3, 3))
self.vertices = np.array(self.vertices * [0.5, 0.5, 0.5], dtype=np.float32)
</code></pre>
<p>(The local vertices and normals are then appended to a global vertex and normal buffer, which is pushed to OpenGL after initialisation)</p>
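<p>One thing I am unsure about is the edge order in the cross product; a sketch of the convention I believe applies (counter-clockwise front faces, so <code>(v1 - v0) x (v2 - v0)</code> points outward by the right-hand rule), plus a guard against zero-area faces:</p>
<pre class="lang-py prettyprint-override"><code>normal = np.cross(face_vertices[1] - face_vertices[0],
                  face_vertices[2] - face_vertices[0])
length = np.linalg.norm(normal)
if length > 0:          # skip degenerate faces instead of dividing by zero
    normal /= length
</code></pre>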
<p>VBO creation (also probably a problem)</p>
<pre class="lang-py prettyprint-override"><code>vPositionLoc = glGetAttribLocation(self.program, "vPosition")
vNormalLoc = glGetAttribLocation(self.program, "vNormal")
self.Buffers[self.PositionBuffer] = glGenBuffers(1)
glBindBuffer(GL_ARRAY_BUFFER, self.Buffers[self.PositionBuffer])
glBufferStorage(GL_ARRAY_BUFFER, self.vertices.nbytes, self.vertices, 0)
glVertexAttribPointer(vPositionLoc, 3, GL_FLOAT, False, 0, None)
glEnableVertexAttribArray(vPositionLoc)
self.Buffers[self.NormalBuffer] = glGenBuffers(1)
glBindBuffer(GL_ARRAY_BUFFER, self.Buffers[self.NormalBuffer])
glBufferStorage(GL_ARRAY_BUFFER, self.normals.nbytes, self.normals, 0)
glVertexAttribPointer(vNormalLoc, 3, GL_FLOAT, False, 0, None)
glEnableVertexAttribArray(vNormalLoc)
</code></pre>
<p>Ambient lighting, the matrices, and the vertex processing is all functional, things only broke when I added normals and (attempted) diffuse lighting</p>
|
<python><numpy><opengl><pyopengl><lighting>
|
2022-12-12 15:39:50
| 2
| 1,011
|
User 12692182
|
74,773,458
| 8,388,057
|
Cannot import name error in Django split models
|
<p>I have split the Django models into multiple model files with the following file tree structure:</p>
<pre><code>+-api(app)-+
+-__init__.py
+-models -+
|
+-__init__.py
+-model1.py
+-model2.py
+-model3.py
+-serializers-+
|
+-__init__.py
+- model1_serializer.py
+-views
+-apps.py
...
</code></pre>
<p>my <code>__init__.py</code> in models looks like,</p>
<pre><code>from .model1 import *
from .model2 import *
</code></pre>
<p>and serializer <code>__init__.py</code> files look like this,</p>
<pre><code>from .model1_serializer import MBTITypeSerializer
</code></pre>
<p>I have also split the views and serializer files. When I try to import models, some of them import without any problem, but some imports do not work. I have observed that if I change the import order in the <code>__init__.py</code> file, the set of working imports changes. This is how I tried to import the models,</p>
<p>in <code>serializers</code></p>
<pre><code>from api.models import MBTIType
...
</code></pre>
<p>Here is the error trace,</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\ \AppData\Local\Programs\Python\Python37\lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "C:\Users\ \AppData\Local\Programs\Python\Python37\lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "D:\ \implementation\backend\venv\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "D:\\implementation\backend\venv\lib\site-packages\django\core\management\commands\runserver.py", line 110, in inner_run
autoreload.raise_last_exception()
File "D:\\implementation\backend\venv\lib\site-packages\django\utils\autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "D:\\implementation\backend\venv\lib\site-packages\django\core\management\__init__.py", line 375, in execute
autoreload.check_errors(django.setup)()
File "D:\\implementation\backend\venv\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "D:\\implementation\backend\venv\lib\site-packages\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "D:\\implementation\backend\venv\lib\site-packages\django\apps\registry.py", line 114, in populate
app_config.import_models()
File "D:\\implementation\backend\venv\lib\site-packages\django\apps\config.py", line 301, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\\AppData\Local\Programs\Python\Python37\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "D:\\implementation\backend\api\models\__init__.py", line 2, in <module>
from .model1 import *
File "D:\\implementation\backend\api\models\model1.py", line 3, in <module>
from .model2 import Model2
File "D:\\implementation\backend\api\models\model2.py", line 5, in <module>
from api.serializers import serilizer1
File "D:\\implementation\backend\api\serializers\__init__.py", line 2, in <module>
from .model1_serializer import Model1Serializer
File "D:\\implementation\backend\api\serializers\model1_serializer.py", line 2, in <module>
from api.models import Model1
ImportError: cannot import name 'Model1' from 'api.models' (D:\\implementation\backend\api\models\__init__.py)
</code></pre>
<p>Hoping for any guidance to solve the issue.</p>
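<p>A sketch of one workaround I am considering for the cycle shown in the trace (deferring the serializer import into the method that needs it; the field is a placeholder):</p>
<pre><code># model2.py — import the serializer lazily instead of at module level
from django.db import models

class Model2(models.Model):
    name = models.CharField(max_length=100)   # placeholder field

    def some_method_that_needs_it(self):
        from api.serializers import serilizer1  # deferred: breaks the cycle
        ...
</code></pre>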
|
<python><django><django-models><python-module><django-serializer>
|
2022-12-12 15:29:24
| 1
| 1,215
|
Avishka Dambawinna
|
74,773,394
| 214,296
|
How to add OPTION to Click class implementation?
|
<p>Trying to add an "option" to a class implementation of click. Admittedly, Python is not my area of expertise, but it needs to be done. There area already a bunch of "arguments" implemented using this class approach. Anyone know how to get options to work here?</p>
<p><strong>test.py</strong></p>
<pre><code>import click
class OptionGroup(click.Option):
"""Customizing the default click option"""
def list_options(self, ctx: click.Context):
"""Sorts options in the specified order"""
# By default, click alphabetically sorts options
# This method will override that feature
return self.opts.keys()
@click.option(cls=OptionGroup)
def cli_opt():
"""Command Line Interface to configure options"""
pass
@cli_opt.command()
@click.option('-d', '--dest', 'dst-ip', type=str)
def dest_ip(dest_ip):
"""Specifies the destination controller IP address"""
print(f"Dest IP: ", dest_ip)
click.echo(dest_ip)
if __name__ == "__main__":
cli_opt()
</code></pre>
<p><strong>This is the output when I run the script...</strong></p>
<pre><code>$ python test.py --help
Traceback (most recent call last):
File "C:\Users\jfell\repos\test.py", line 13, in <module>
@click.option(cls=OptionGroup)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jfell\AppData\Roaming\Python\Python311\site-packages\click\decorators.py", line 308, in decorator
_param_memo(f, OptionClass(param_decls, **option_attrs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jfell\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 2495, in __init__
super().__init__(param_decls, type=type, multiple=multiple, **attrs)
File "C:\Users\jfell\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 2072, in __init__
self.name, self.opts, self.secondary_opts = self._parse_decls(
^^^^^^^^^^^^^^^^^^
File "C:\Users\jfell\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 2640, in _parse_decls
raise TypeError("Could not determine name for option")
TypeError: Could not determine name for option
</code></pre>
<p>Similar code that only implements an argument works fine...</p>
<pre><code>import click
class CommandGroup(click.Group):
"""Customizing the default click group"""
def list_commands(self, ctx: click.Context):
"""Sorts commands in the specified order"""
# By default, click alphabetically sorts commands
# This method will override that feature
return self.commands.keys()
@click.group(cls=CommandGroup)
def cli():
"""Command Line Interface to send commands"""
pass
@cli.command("goto-mode")
@click.argument("mode", type=str)
def goto_mode(mode: str):
"""Directs Application Mode Change"""
click.echo(mode)
if __name__ == "__main__":
cli()
</code></pre>
<p>Output for argument only...</p>
<pre><code>$ python test.py goto-mode Success!
Success!
</code></pre>
<p>Script with option added...</p>
<pre><code>import click
class OptionGroup(click.Option):
"""Customizing the default click option"""
def list_options(self, ctx: click.Context):
"""Sorts options in the specified order"""
# By default, click alphabetically sorts options
# This method will override that feature
return self.opts.keys()
@click.option(cls=OptionGroup)
def cli_opt():
"""Command Line Interface to configure options"""
pass
@cli_opt.command()
@click.option('--dest', '-d', 'dst-ip', type=str)
def dest_ip(dest_ip):
"""Specifies the destination controller IP address"""
print(f"Dest IP: ", dest_ip)
click.echo(dest_ip)
class CommandGroup(click.Group):
"""Customizing the default click group"""
def list_commands(self, ctx: click.Context):
"""Sorts commands in the specified order"""
# By default, click alphabetically sorts commands
# This method will override that feature
return self.commands.keys()
@click.group(cls=CommandGroup)
def cli():
"""Command Line Interface to send commands"""
pass
@cli.command("goto-mode")
@click.argument("mode", type=str)
def goto_mode(mode: str):
"""Directs Application Mode Change"""
click.echo(mode)
if __name__ == "__main__":
cli_opt()
cli()
</code></pre>
<p>...yields same error...</p>
<pre><code>$ python test.py --help
Traceback (most recent call last):
File "C:\Users\jfell\repos\test.py", line 13, in <module>
@click.option(cls=OptionGroup)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jfell\AppData\Roaming\Python\Python311\site-packages\click\decorators.py", line 308, in decorator
_param_memo(f, OptionClass(param_decls, **option_attrs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jfell\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 2495, in __init__
super().__init__(param_decls, type=type, multiple=multiple, **attrs)
File "C:\Users\jfell\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 2072, in __init__
self.name, self.opts, self.secondary_opts = self._parse_decls(
^^^^^^^^^^^^^^^^^^
File "C:\Users\jfell\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 2640, in _parse_decls
raise TypeError("Could not determine name for option")
TypeError: Could not determine name for option
</code></pre>
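<p>A sketch of my current guess (the group itself would use <code>@click.group</code>, while the customized option class is passed per option via <code>cls=</code>; an option also needs at least one parameter declaration, which is what the error complains about):</p>
<pre><code>@click.group(cls=CommandGroup)
def cli_opt():
    """Command Line Interface to configure options"""

@cli_opt.command()
@click.option('-d', '--dest', 'dest_ip', type=str, cls=OptionGroup)
def dest_ip(dest_ip):
    """Specifies the destination controller IP address"""
    click.echo(dest_ip)
</code></pre>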
|
<python><runtime-error><python-click>
|
2022-12-12 15:24:29
| 1
| 14,392
|
Jim Fell
|
74,773,240
| 1,232,660
|
Caveats of printing unicode characters in Python
|
<p>The following code:</p>
<pre class="lang-py prettyprint-override"><code>print('\N{WAVING BLACK FLAG}')
</code></pre>
<p>is as simple as it can be. Yet on some machines it prints the character as expected, while on others it raises a <code>UnicodeEncodeError</code> with the message <code>'ascii' codec can't encode character '\U0001f3f4' in position 0: ordinal not in range(128)</code>.</p>
<p><strong>Why can printing a character <em>sometimes</em> lead to a <code>UnicodeEncodeError</code>?</strong> There is no mention of any encoding in the <a href="https://docs.python.org/3/library/functions.html#print" rel="nofollow noreferrer">documentation</a>. And is there any way to make sure the string will be printed without raising any exceptions?</p>
<hr>
<p>I managed to isolate a reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
import sys
subprocess.run([sys.executable, 'test.py'], env=dict())
</code></pre>
<p>The <code>test.py</code> contains just the single print statement mentioned above. This example raises a <code>UnicodeEncodeError</code> on all tested machines... but only when tested with Python <code>3.6</code>. When tested with Python <code>3.7</code> it prints the character as expected.</p>
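<p>A diagnostic I would compare between the two interpreters (my assumption being that the stream encoding is the difference):</p>
<pre class="lang-py prettyprint-override"><code>import sys

print(sys.stdout.encoding)  # e.g. 'UTF-8' where it works,
                            # 'ANSI_X3.4-1968' (ASCII) where it fails
</code></pre>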
|
<python><unicode>
|
2022-12-12 15:12:09
| 1
| 3,558
|
Jeyekomon
|
74,773,066
| 9,182,743
|
Retain pandas dtype 'category' when using parquet file
|
<p>I am using parquet to store pandas dataframes, and would like to keep the dtype of the columns.
However, it sometimes isn't working.
Here is the example code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df = pd.DataFrame({
'a': [pd.NA, 'a', 'b', 'c'],
'b': [1,2,3,pd.NA] # dataframe has type pd.NA in it.
})
df['b'] = df['b'].astype("category")
df['a'] = df['a'].astype("category") # with this columsn works
print ("dtype before parquet write/read: ", df.dtypes, sep='\n')
df.to_parquet('trial.parquet.gzip',
compression='gzip')
df = pd.read_parquet('trial.parquet.gzip')
print ("dtype after parquet write/read:", df.dtypes, sep='\n')
**OUT**:
dtype before parquet write/read:
a category
b category
dtype: object
dtype after parquet write/read:
a category
b float64
dtype: object
</code></pre>
|
<python><pandas><parquet>
|
2022-12-12 15:00:52
| 0
| 1,168
|
Leo
|
74,773,052
| 12,242,085
|
How to drop duplicates in one column based on values in 2 other columns in DataFrame in Python Pandas?
|
<p>I have DataFrame in Python Pandas like below:</p>
<p>data types:</p>
<ul>
<li><p>ID - int</p>
</li>
<li><p>TYPE - object</p>
</li>
<li><p>TG_A - int</p>
</li>
<li><p>TG_B - int</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>TYPE</th>
<th>TG_A</th>
<th>TG_B</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>A</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>111</td>
<td>B</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>222</td>
<td>B</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>222</td>
<td>A</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>333</td>
<td>B</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>333</td>
<td>A</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
</div></li>
</ul>
<p>And I need to drop duplicates in above DataFrame, so as to:</p>
<ul>
<li>If the value in ID is duplicated -> drop the rows where TYPE = B and TG_A = 1, or TYPE = A and TG_B = 1</li>
</ul>
<p>So, as a result I need something like below:</p>
<pre><code>ID | TYPE | TG_A | TG_B
----|------|------|-----
111 | A | 1 | 0
222 | A | 1 | 0
333 | B | 0 | 1
</code></pre>
<p>How can I do that in Python Pandas ?</p>
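<p>A sketch of the rule as I understand it (<code>keep=False</code> marks every duplicated ID, and the mask drops the unwanted row of each pair):</p>
<pre><code>drop = df.duplicated('ID', keep=False) & (
    ((df['TYPE'] == 'B') & (df['TG_A'] == 1)) |
    ((df['TYPE'] == 'A') & (df['TG_B'] == 1))
)
result = df[~drop]
</code></pre>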
|
<python><pandas><duplicates><drop-duplicates>
|
2022-12-12 14:59:45
| 2
| 2,350
|
dingaro
|
74,772,998
| 13,115,582
|
How to mix / intersperse two .npy files?
|
<p>I have two <code>.npy</code> files that both contain a ndarray with shape <code>(1_000_000, 833)</code> (1M inputs for a neural network with 833 input neurons), however the exact shape should not matter except that it is the same among the two files.</p>
<p>I want to create two new <code>.npy</code> files that are taken from both files, one after the other. Let's say the first file contains <code>[1, 2, 3, 4, 5, 6]</code> and the second one <code>[a, b, c, d, e, f]</code>, then the new first file should contain <code>[1, a, 2, b, 3, c]</code> and the new second one <code>[4, d, 5, e, 6, f]</code>- the size of the files and the contents should remain the same, only its arrangement (and arrangement among the files) should change.</p>
<p>How could I achieve this behavior?</p>
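<p>A sketch of the interleaving I described (file names made up; it assumes both arrays fit in RAM — for two 1M x 833 arrays a <code>np.memmap</code> might be needed instead):</p>
<pre><code>import numpy as np

a = np.load('first.npy')
b = np.load('second.npy')

mixed = np.empty((len(a) + len(b),) + a.shape[1:], dtype=a.dtype)
mixed[0::2] = a          # a[0], b[0], a[1], b[1], ...
mixed[1::2] = b

half = len(mixed) // 2
np.save('new_first.npy', mixed[:half])
np.save('new_second.npy', mixed[half:])
</code></pre>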
|
<python><numpy><numpy-ndarray>
|
2022-12-12 14:56:33
| 1
| 687
|
leo848
|
74,772,980
| 3,979,919
|
Multiprocessing: Instantiate Processes individually
|
<p>I have an embarrassingly parallel problem in a Reinforcement-Learning context. I would like to let the neural network generate data in parallel. To achieve that each process needs its own model.</p>
<p>I have tried to use Pool to achieve this, but now I am not sure if this is the correct method.</p>
<pre class="lang-python prettyprint-override"><code>from multiprocessing import Pool
def run():
with Pool(processes=8) as p:
result = p.map_async(f, range(8))
p.close()
p.join()
print(result.get())
def f(x):
return x*x
if __name__ == '__main__':
run()
</code></pre>
<p>I know that you can use an initializer to set up the processes, but I think this is used to set up the processes with the same fixed data.</p>
<pre class="lang-python prettyprint-override"><code>model = None
def worker_init():
global model
model = CNN()
</code></pre>
<p>This does not work. So how can I give every Process its own model?</p>
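<p>My reading of the docs is that the initializer runs once in each worker process, so wiring it into the pool should give each worker a private global model — a sketch (assuming <code>CNN</code> is the model class from my project):</p>
<pre class="lang-python prettyprint-override"><code>from multiprocessing import Pool

model = None

def worker_init():
    global model
    model = CNN()          # constructed once per worker process

def f(x):
    return model(x)        # each worker uses its own model

def run():
    with Pool(processes=8, initializer=worker_init) as p:
        print(p.map(f, range(8)))
</code></pre>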
|
<python><multiprocessing>
|
2022-12-12 14:55:25
| 1
| 1,671
|
Nima Mousavi
|
74,772,951
| 607,846
|
Running the same test on different model objects
|
<p>I have three scenarios in my db that should give the same result when I call an endpoint:</p>
<pre><code>Model1.objects.create(name="a")
assert requests.delete("endpoint?pk=a").response == 204
Model2.objects.create(name="a")
assert requests.delete("endpoint?pk=a").response == 204
Model1.objects.create(name="a")
Model2.objects.create(name="a")
assert requests.delete("endpoint?pk=a").response == 204
</code></pre>
<p>So basically the setup() part of the test is different, where I create the model objects, however the test itself is the same in each case. What is the best way to implement this? Can I just create a Base TestCase class which implements <code>assert requests.delete("endpoint?pk=a").response == 204</code> and then inherit from it three times, creating the models in the setUpTestData() in each of the three classes?</p>
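<p>A sketch of that inheritance idea (a mixin keeps the base from being collected as a test on its own; model and endpoint names are placeholders):</p>
<pre><code>from django.test import TestCase

class DeleteEndpointMixin:
    def test_delete(self):
        response = self.client.delete("endpoint?pk=a")
        self.assertEqual(response.status_code, 204)

class Model1Tests(DeleteEndpointMixin, TestCase):
    @classmethod
    def setUpTestData(cls):
        Model1.objects.create(name="a")

class BothModelsTests(DeleteEndpointMixin, TestCase):
    @classmethod
    def setUpTestData(cls):
        Model1.objects.create(name="a")
        Model2.objects.create(name="a")
</code></pre>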
|
<python><pytest><django-testing>
|
2022-12-12 14:52:22
| 1
| 13,283
|
Baz
|
74,772,932
| 8,324,092
|
ComboBox not showing the options required after saving workorder
|
<p>I'm writing a GUI application and I'm having some problems. I am building this app with the help of ChatGPT, but I have hit an error and don't know how to fix it. When I open the app and click the combobox, I can see the options, which are the cities to be chosen. But after I save a work order and want to create a new one without closing the app, the QComboBox no longer shows any options. Also, when I create a new work order the order_id doesn't appear in the input field, although it gets saved in the database correctly and gets the last ID. How can I fix these issues? I tried to create a new function <code>def new_work_order</code>, but that didn't help either, or maybe I was doing something wrong. Here is my code:</p>
<pre><code>from PyQt5 import QtWidgets, QtGui
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QPushButton, QLineEdit, QVBoxLayout, QDateEdit, QComboBox, QStyleFactory, QTableView, QMessageBox
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QFont, QStandardItemModel, QStandardItem
import psycopg2
import datetime
from PyQt5.QtCore import QDate
from PyQt5.QtWidgets import QDateEdit
from PyQt5.QtCore import QTimer
current_date = QDate.currentDate()
# Connect to the database
conn = psycopg2.connect(
host="localhost",
database="mms",
user="postgres",
password="postgres"
)
# Create a cursor object
cur = conn.cursor()
# Check if the `maintenance` table exists
cur.execute("SELECT * FROM information_schema.tables WHERE table_name='t_wo_workorders'")
if not cur.fetchone():
# Create the `maintenance` table
cur.execute(
"CREATE TABLE t_wo_workorders (order_id integer Primary Key, defined DATE, org VARCHAR(255), order_type VARCHAR(255), scheduled DATE, status VARCHAR(255), request VARCHAR(255), address VARCHAR(255), customer VARCHAR(255), tel_no VARCHAR(255))"
)
conn.commit()
class MaintenanceManagementSystem(QWidget):
def __init__(self):
super().__init__()
self.org_model = QStandardItemModel()
self.org_model.appendRow(QStandardItem("Prizren"))
self.org_model.appendRow(QStandardItem("Suharekë"))
self.org_model.appendRow(QStandardItem("Malishevë"))
self.org_model.appendRow(QStandardItem("Dragash"))
self.org_input = QComboBox()
self.org_input.setModel(self.org_model)
cur = conn.cursor()
cur.execute("SELECT MAX(order_id) FROM t_wo_workorders")
last_saved_id = cur.fetchone()[0]
self.last_id = 1
self.last_id = last_saved_id or 0
self.counter = self.last_id
self.counter += 1
org = self.org_input.currentText()
save_button = QPushButton('Save')
save_button.clicked.connect(self.save_work_order)
# Set the window title and size
self.setWindowTitle('Maintenance Management System')
self.resize(600, 300)
# Create a label and set its font and alignment
label = QLabel('Maintenance Management System')
label.setFont(QFont('Arial', 20))
label.setAlignment(Qt.AlignCenter)
# Create labels and input fields
order_id_label = QLabel('Order ID')
self.order_id_input = QLineEdit()
self.order_id_input.setReadOnly(True)
self.counter = self.last_id
self.counter += 1
self.generate_order_id()
defined_label = QLabel('Defined')
self.defined_input = QDateEdit(current_date)
self.defined_input.setCalendarPopup(True)
org_label = QLabel('Org. Unit')
order_type_label = QLabel('Order Type')
self.order_type_input = QLineEdit()
scheduled_label = QLabel('Scheduled')
self.scheduled_input = QDateEdit(current_date)
self.scheduled_input.setCalendarPopup(True)
status_label = QLabel('Status')
self.status_input = QLineEdit()
request_label = QLabel('Request')
self.request_input = QLineEdit()
address_label = QLabel('Address')
self.address_input = QLineEdit()
customer_label = QLabel('Customer')
self.customer_input = QLineEdit()
tel_no_label = QLabel('Tel Number')
self.tel_no_input = QLineEdit()
self.table_view = QTableView()
# Set the size of the input fields
self.order_id_input.setFixedWidth(100)
self.defined_input.setFixedWidth(100)
self.org_input.setFixedWidth(100)
self.order_type_input.setFixedWidth(100)
self.scheduled_input.setFixedWidth(100)
self.status_input.setFixedWidth(100)
self.request_input.setFixedWidth(100)
self.address_input.setFixedWidth(100)
self.customer_input.setFixedWidth(100)
self.tel_no_input.setFixedWidth(100)
# Create a vertical box layout and add it to the main window
layout = QVBoxLayout()
layout.addWidget(label)
self.setLayout(layout)
# Add the label, button, and input fields
layout.addWidget(order_id_label)
layout.addWidget(self.order_id_input)
layout.addWidget(defined_label)
layout.addWidget(self.defined_input)
layout.addWidget(org_label)
layout.addWidget(self.org_input)
layout.addWidget(order_type_label)
layout.addWidget(self.order_type_input)
layout.addWidget(order_type_label)
layout.addWidget(self.order_type_input)
layout.addWidget(scheduled_label)
layout.addWidget(self.scheduled_input)
layout.addWidget(status_label)
layout.addWidget(self.status_input)
layout.addWidget(request_label)
layout.addWidget(self.request_input)
layout.addWidget(address_label)
layout.addWidget(self.address_input)
layout.addWidget(customer_label)
layout.addWidget(self.customer_input)
layout.addWidget(tel_no_label)
layout.addWidget(self.tel_no_input)
# Add the save button to the layout
layout.addWidget(save_button)
save_button.setStyleSheet("QPushButton:pressed { background-color: grey; }")
layout.addWidget(save_button)
# Create the `maintenance` table
def reset_input_fields(self):
self.order_id_input.setText("")
self.defined_input.setDate(current_date)
self.org_input.setCurrentIndex(0)
self.order_type_input.setText("")
self.scheduled_input.setDate(current_date)
self.status_input.setText("")
self.request_input.setText("")
self.address_input.setText("")
self.customer_input.setText("")
self.tel_no_input.setText("")
def update_table_view(self):
cur = conn.cursor()
cur.execute("SELECT * FROM t_wo_workorders")
rows = cur.fetchall()
model = QStandardItemModel()
model.setHorizontalHeaderLabels(['order_id', 'defined', 'org', 'order_type', 'scheduled', 'status', 'request', 'address', 'customer', 'tel_no'])
for row in rows:
items = [QStandardItem(str(cell)) for cell in row]
model.appendRow(items)
self.table_view.setModel(model)
def generate_order_id(self):
self.last_id += 1
self.order_id_input.setText(str(self.counter))
def save_work_order(self):
# Get the values entered by the user
order_id = self.generate_order_id()
defined = self.defined_input.date().toString("yyyy-MM-dd")
org = self.org_input.currentText()
order_type = self.order_type_input.text()
scheduled = self.scheduled_input.date().toString("yyyy-MM-dd")
status = self.status_input.text()
request = self.request_input.text()
address = self.address_input.text()
customer = self.customer_input.text()
tel_no = self.tel_no_input.text()
self.order_id_input.setText(str(self.counter))
# Insert the data into the `maintenance` table
cur.execute(
"INSERT INTO t_wo_workorders (order_id, defined, org, order_type, scheduled, status, request, address, customer, tel_no) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)",
(self.order_id_input.text(), defined, self.org_input.currentText(), self.order_type_input.text(), scheduled, self.status_input.text(), self.request_input.text(), self.address_input.text(), self.customer_input.text(), self.tel_no_input.text()),
)
conn.commit()
self.update_table_view()
# Create a new standard item model
model = QStandardItemModel()
# Set the horizontal header labels
model.setHorizontalHeaderLabels(["Order ID", "Defined", "Org. Unit", "Order Type", "Scheduled", "Status", "Request", "Address", "Customer", "Tel Number"])
# Define a list of rows, where each row is a list of cell values
data = [
[order_id, defined, org, order_type, scheduled, status, request, address, customer, tel_no]
]
# Iterate over the rows
for i, row in enumerate(data):
# Iterate over the cell values in the row
for j, value in enumerate(row):
# Set the value of the cell
model.setItem(i, j, QStandardItem(value))
# Set the model for the table view
self.table_view.setModel(model)
# Clear the input fields
self.order_id_input.clear()
self.defined_input.clear()
self.org_input.clear()
self.order_type_input.clear()
self.scheduled_input.clear()
self.status_input.clear()
self.request_input.clear()
self.address_input.clear()
self.customer_input.clear()
self.tel_no_input.clear()
# Generate a new order ID
self.generate_order_id()
response = QMessageBox.question(self, "Message", "Work-order with ID: <font color='red'>" + str(self.counter) + "</font> was saved successfully. Do you want to create another work order?", QMessageBox.Yes | QMessageBox.No, QMessageBox.No)
# response.setStyleSheet("color: red;")
if response == QMessageBox.Yes:
self.counter += 1
self.generate_order_id()
self.reset_input_fields()
else:
self.close()
# Create an instance of QApplication
app = QApplication(sys.argv)
# Create an instance of your application
mms = MaintenanceManagementSystem()
# Load the stylesheet
with open('styles.css', 'r') as f:
stylesheet = f.read()
# Apply the stylesheet to the application
app.setStyleSheet(stylesheet)
# Show the application window
mms.show()
# Run the application
sys.exit(app.exec_())
</code></pre>
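<p>While re-reading the code I noticed <code>self.org_input.clear()</code> in <code>save_work_order</code>; my assumption is that this is the culprit, since <code>QComboBox.clear()</code> removes the items themselves, not just the current selection. A sketch of the change:</p>
<pre><code># instead of self.org_input.clear(), which deletes the city items:
self.org_input.setCurrentIndex(0)   # reset the selection, keep the items
</code></pre>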
|
<python><pyqt5>
|
2022-12-12 14:50:53
| 1
| 429
|
Gent Bytyqi
|
74,772,909
| 17,487,457
|
Barplot of a dataframe by group
|
<p>I am having difficulty with this. I have the results from my initial model (<code>Unfiltered</code>) that I plot like so:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
{'class': ['foot', 'bike', 'bus', 'car', 'metro'],
'Precision': [0.7, 0.66, 0.41, 0.61, 0.11],
'Recall': [0.58, 0.35, 0.13, 0.89, 0.02],
'F1-score': [0.64, 0.45, 0.2, 0.72, 0.04]}
)
groups = df.melt(id_vars=['class'], var_name=['Metric'])
sns.barplot(data=groups, x='class', y='value', hue='Metric')
</code></pre>
<p>To produce this nice plot:
<a href="https://i.sstatic.net/2l5EG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2l5EG.png" alt="enter image description here" /></a></p>
<p>Now, I obtained a second results from my improved model (<code>filtered</code>), so I add a column (<code>status</code>) to my <code>df</code> to indicate the results from each model like this:</p>
<pre class="lang-py prettyprint-override"><code>df2 = pd.DataFrame(
{'class': ['foot','foot','bike','bike','bus','bus',
'car','car','metro','metro'],
'Precision': [0.7, 0.62, 0.66, 0.96, 0.41, 0.42, 0.61, 0.75, 0.11, 0.3],
'Recall': [0.58, 0.93, 0.35, 0.4, 0.13, 0.1, 0.89, 0.86, 0.02, 0.01],
'F1-score': [0.64, 0.74, 0.45, 0.56, 0.2, 0.17, 0.72, 0.8, 0.04, 0.01],
'status': ['Unfiltered', 'Filtered', 'Unfiltered','Filtered','Unfiltered',
'Filtered','Unfiltered','Filtered','Unfiltered','Filtered']}
)
df2.head()
class Precision Recall F1-score status
0 foot 0.70 0.58 0.64 Unfiltered
1 foot 0.62 0.93 0.74 Filtered
2 bike 0.66 0.35 0.45 Unfiltered
3 bike 0.96 0.40 0.56 Filtered
4 bus 0.41 0.13 0.20 Unfiltered
</code></pre>
<p>And I want to plot this, in a similar grouping as above (i.e. <code>foot</code>, <code>bike</code>, <code>bus</code>, <code>car</code>, <code>metro</code>). However, for each of the metrics, I want to place the two values side by side. Take, for example, the <code>foot</code> group: I would have two bars <code>Precision[Unfiltered, Filtered]</code>, then 2 bars for <code>Recall[Unfiltered, Filtered]</code> and also 2 bars for <code>F1-score[Unfiltered, Filtered]</code>. Likewise for all other groups.</p>
<p>My attempt:</p>
<pre class="lang-py prettyprint-override"><code>group2 = df2.melt(id_vars=['class', 'status'], var_name=['Metric'])
sns.barplot(data=group2, x='class', y='value', hue='Metric')
</code></pre>
<p><a href="https://i.sstatic.net/inylJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/inylJ.png" alt="enter image description here" /></a></p>
|
<python><pandas><matplotlib><seaborn><grouped-bar-chart>
|
2022-12-12 14:49:35
| 2
| 305
|
Amina Umar
|
74,772,855
| 12,785,645
|
Replace a specific line in a file without looping
|
<p>I have a huge file with a problematic character at line 9073245. So I want to replace/remove that character at that specific line while keeping the rest of the file intact. I found the following solution <a href="https://stackoverflow.com/a/39110/12785645">here</a>:</p>
<pre><code>from tempfile import mkstemp
from shutil import move, copymode
from os import fdopen, remove
def replace(file_path, pattern, subst):
#Create temp file
fh, abs_path = mkstemp()
with fdopen(fh,'w') as new_file:
with open(file_path) as old_file:
for line in old_file:
new_file.write(line.replace(pattern, subst))
#Copy the file permissions from the old file to the new file
copymode(file_path, abs_path)
#Remove original file
remove(file_path)
#Move new file
move(abs_path, file_path)
</code></pre>
<p>But instead of reading line by line, I just want to replace line number 9073245 and be done with it. I thought <code>getline</code> from <code>linecache</code> might work:</p>
<pre><code>import linecache
def lineInFileReplacer(file_path, line_nr, pattern, subst):
#Create temp file
fh, abs_path = mkstemp()
with fdopen(fh,'w') as new_file:
bad_line = linecache.getline(file_path, line_nr)
new_file.write(bad_line.replace(pattern, subst))
#Copy the file permissions from the old file to the new file
copymode(file_path, abs_path)
#Remove original file
remove(file_path)
#Move new file
move(abs_path, file_path)
</code></pre>
<p>but <code>new_file.write()</code> does not seem to include the replacement for <code>bad_line</code>.</p>
<p>How can I replace a line at a specific line number without looping through every line in the file?</p>
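<p>(For context, a streaming fallback — not quite what I asked for — would look like this: with variable-length lines there is no way to seek straight to line 9073245, but only that one line is transformed; it reuses the imports from the first snippet.)</p>
<pre><code>def replace_line(file_path, line_nr, pattern, subst):
    fh, abs_path = mkstemp()
    with fdopen(fh, 'w') as new_file, open(file_path) as old_file:
        for i, line in enumerate(old_file, start=1):
            new_file.write(line.replace(pattern, subst) if i == line_nr else line)
    copymode(file_path, abs_path)
    remove(file_path)
    move(abs_path, file_path)
</code></pre>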
|
<python>
|
2022-12-12 14:45:31
| 2
| 463
|
saQuist
|
74,772,785
| 2,527,629
|
what are the differences among mambaforge, mambaforge-pypy3, miniforge, miniforge-pypy3
|
<p>There have been explanations about the difference between <code>miniforge</code> and <code>miniconda</code>:</p>
<blockquote>
<p><code>miniforge</code> is the community (conda-forge) driven minimalistic conda installer. Subsequent package installations come thus from conda-forge channel.
<code>miniconda</code> is the Anaconda (company) driven minimalistic conda installer. Subsequent package installations come from the anaconda channels (default or otherwise).</p>
</blockquote>
<p>as for <a href="https://conda-forge.org/miniforge/" rel="noreferrer">mambaforge, mambaforge-pypy3, miniforge, miniforge-pypy3</a>, how do we choose which package to install?</p>
|
<python><pypy><mamba><mini-forge><mambaforge>
|
2022-12-12 14:40:09
| 2
| 3,690
|
wsdzbm
|
74,772,738
| 11,251,373
|
Django: get only objects with max foreignkey count
|
<p>The question is quite simple but possibly unsolvable with Django.</p>
<p>For example I have a model</p>
<pre class="lang-py prettyprint-override"><code>class MyModel(models.Model)
field_a = models.IntegerField()
field_b = models.CharField()
    field_c = models.ForeignKey(MyOtherModel)
</code></pre>
<p>The question is how to select only the objects that have the maximal count of relations with MyOtherModel, preferably (almost mandatory) with only a single queryset.</p>
<p>Let's say we have 100 entries altogether: 50 point to field_c_id=1, 40 point to field_c_id=2, and the remaining 10 entries point to field_c_id=3.
I need only those which point to field_c_id=1, as 50 would be the maximal count.</p>
<p>Thanks...</p>
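<p>A sketch of the closest single-queryset approach I could think of (one SQL query via a subquery; note that ties for the maximal count would return only one arbitrary group):</p>
<pre class="lang-py prettyprint-override"><code>from django.db.models import Count, Subquery

most_common = (MyModel.objects.values('field_c')
               .annotate(n=Count('id'))
               .order_by('-n')
               .values('field_c')[:1])
qs = MyModel.objects.filter(field_c__in=Subquery(most_common))
</code></pre>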
|
<python><django>
|
2022-12-12 14:37:13
| 1
| 2,235
|
Aleksei Khatkevich
|
74,772,625
| 10,413,816
|
In Python (and attrs), can I export a field to yaml using a different name?
|
<p>I would like to modify a field name, but only when exporting to yaml. For instance:</p>
<pre><code>import attrs
import yaml
from typing import List
from attr import fields, field
from attrs import define
@define
class Task:
id: int
@define
class Data:
all_tasks: List[Task]
x: int = field(default=5)
if __name__ == '__main__':
list_of_tasks = [Task(1), Task(2), Task(3),]
d = Data(list_of_tasks, 10)
print(yaml.dump(attrs.asdict(d)))
</code></pre>
<p>Running this code I get</p>
<pre><code>all_tasks:
- id: 1
- id: 2
- id: 3
x: 10
</code></pre>
<p>I would like to keep the variable name in the code as <code>all_tasks</code>, but change it in the yaml to just <code>tasks</code>. A generic answer is preferable, since there are several fields to change.</p>
<p><strong>The underlying issue:</strong></p>
<p>The underlying issue is that I have a "list of tasks" and calling that variable just "tasks" makes it very similar to a single "task", so I usually rename it to something else. That said, when exporting/importing from YAML, just "tasks" looks much better (for configuration purposes and for non-code aware people.</p>
<p>If there is a good way to do this without attrs, I will also accept that.</p>
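<p>One generic approach (a sketch, not an attrs built-in): store the export name in each field's <code>metadata</code> under an assumed key such as <code>yaml_name</code>, and walk the fields when serializing.</p>
<pre class="lang-py prettyprint-override"><code>import attrs
import yaml
from typing import List

@attrs.define
class Task:
    id: int

@attrs.define
class Data:
    # 'yaml_name' is an assumed convention, not an attrs feature.
    all_tasks: List[Task] = attrs.field(metadata={"yaml_name": "tasks"})
    x: int = attrs.field(default=5)

def asdict_renamed(inst):
    """Recursively build a dict, honoring each field's yaml_name metadata."""
    out = {}
    for f in attrs.fields(type(inst)):
        v = getattr(inst, f.name)
        if attrs.has(type(v)):
            v = asdict_renamed(v)
        elif isinstance(v, list):
            v = [asdict_renamed(i) if attrs.has(type(i)) else i for i in v]
        out[f.metadata.get("yaml_name", f.name)] = v
    return out

d = Data([Task(1), Task(2), Task(3)], 10)
print(yaml.dump(asdict_renamed(d)))  # dumps 'tasks:' instead of 'all_tasks:'
</code></pre>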
|
<python><yaml>
|
2022-12-12 14:29:43
| 1
| 572
|
Roberto Morávia
|
74,772,604
| 12,131,472
|
dataframe: parse a column containing list of dicts: Traceback ValueError: cannot reindex on an axis with duplicate labels
|
<p>I have one column (called 'data') in a dataframe which looks like the sample below. Each row holds a list of dicts, starting at 2022-01-04 and ending today; for example, the first row is {'value': 18.76, 'date': '2022-01-04'}, {'value': 18.59, 'date': '2022-01-05'}, {'value': 18.99, 'date': '2022-01-06'}...</p>
<pre><code>0 [{'value': 18.76, 'date': '2022-01-04'}, {'val...
1 [{'value': 38.58, 'date': '2022-01-04'}, {'val...
2 [{'value': 37.5, 'date': '2022-01-04'}, {'valu...
3 [{'value': 61.77, 'date': '2022-01-04'}, {'val...
4 [{'value': 110.54, 'date': '2022-01-04'}, {'va...
5 [{'value': 101.71, 'date': '2022-01-04'}, {'va...
6 [{'value': 86.45, 'date': '2022-01-04'}, {'val...
7 [{'value': 97.95, 'date': '2022-01-04'}, {'val...
8 [{'value': 38.39, 'date': '2022-01-04'}, {'val...
9 [{'value': 217.92, 'date': '2022-01-04'}, {'va...
10 [{'value': 86.94, 'date': '2022-01-04'}, {'val...
11 [{'value': 55.2, 'date': '2022-01-04'}, {'valu...
12 [{'value': 138.97, 'date': '2022-01-04'}, {'va...
13 [{'value': 4853125.0, 'date': '2022-01-04'}, {...
14 [{'value': 29.12, 'date': '2022-01-04'}, {'val...
15 [{'value': 90.77, 'date': '2022-01-04'}, {'val...
16 [{'value': 87.15, 'date': '2022-01-04'}, {'val...
</code></pre>
<p>I used a line of code which worked before</p>
<pre><code>df[['date','value']] = df['data'].apply(lambda x: [[i['date'],i['value']] for i in x]).explode().apply(pd.Series, index=['date','value'])
</code></pre>
<p>but now this line fails and gives</p>
<pre><code>ValueError: cannot reindex on an axis with duplicate labels
</code></pre>
<p>Is there an easy way to solve this? As there are 300 dates, and hence 300 data points per row, I am not sure which dates may contain duplicate data.</p>
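<p>A possible duplicate-safe rewrite (a sketch; it assumes the same <code>data</code> column and a long output with one row per date): build the flat frame directly from the exploded series instead of going through <code>apply(pd.Series, index=...)</code>, which is the step that reindexes.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

# One dict per row; the original index repeats once per date, which is fine here.
s = df['data'].explode()

# Construct columns without reindexing, keeping the duplicated index.
flat = pd.DataFrame(s.tolist(), index=s.index)[['date', 'value']]

# One row per (original row, date) pair.
long_df = df.drop(columns='data').join(flat)
</code></pre>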
|
<python><pandas><dictionary><pandas-explode>
|
2022-12-12 14:28:21
| 1
| 447
|
neutralname
|
74,772,561
| 5,110,870
|
Python dataclass with inheritance: __init__() missing 1 required positional argument
|
<p>Trying my luck with inheritance in data classes (Python 3.9.13).</p>
<p>Given the following:</p>
<pre><code>from dataclasses import dataclass
from datetime import datetime


@dataclass()
class BookMetadata():
    '''Parent class.'''
    isbn: str
    title: str
    author: str
    publisher: str
    date_published: int

    def __post_init__(self):
        '''Change attributes after assignment.'''
        # Change date from UNIX to YYYY-MM-DD HH:MM:SS
        self.date_published = datetime.fromtimestamp(int(str(self.date_published))).strftime('%Y-%m-%d %H:%M:%S')


@dataclass()
class RetailPrice(BookMetadata):
    '''Child class.'''
    def __init__(self,
                 isbn, title, author, publisher, date_published,
                 price_usd, price_aud, price_eur, price_gbp) -> None:
        BookMetadata.__init__(isbn, title, author, publisher, date_published)
        self.price_usd: float = price_usd
        self.price_aud: float = price_aud
        self.price_eur: float = price_eur
        self.price_gbp: float = price_gbp

    def __post_init__(self):
        self.price_usd = str(self.price_usd)
</code></pre>
<p>and the values assigned as such:</p>
<pre><code>book1 = RetailPrice(isbn='1234-5678-9000',
                    title='My book',
                    author='Name Surname',
                    publisher='My publisher',
                    date_published=1670536799,
                    price_usd=17.99,
                    price_aud=23.99,
                    price_eur=15.99,
                    price_gbp=16.99)
</code></pre>
<p>I get a <code>TypeError</code>: <code>__init__() missing 1 required positional argument: 'date_published'</code>, even though this argument was provided in the assignment.</p>
<p>Is this due to the fact that the parent class has no <code>__init__</code>?</p>
<p>PS: this is my attempt at reproducing line 21 in the image below, having to work with data classes instead of regular classes:
<a href="https://i.sstatic.net/epv0Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/epv0Z.png" alt="enter image description here" /></a></p>
|
<python><oop><inheritance><python-dataclasses>
|
2022-12-12 14:25:13
| 1
| 7,979
|
FaCoffee
|
74,772,395
| 14,882,862
|
How to declare additional attributes when subclassing str?
|
<p>Consider the following class:</p>
<pre><code>class StrWithInt(str):
    def __new__(cls, content, value: int):
        ret = super().__new__(cls, content)
        ret.value = value
        return ret
</code></pre>
<p>This class just works fine, but when using <code>mypy</code>, the following error occurs:
<code>"StrWithInt" has no attribute "value" [attr-defined]</code></p>
<p>Is there some way to explicitly state the attribute in this case? What is the proper way to solve this issue?</p>
<p>Note that this is a minimal example and not using a subclass of <code>str</code> is not an option in the non-minimal example.</p>
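<p>A minimal sketch of one way to satisfy mypy here (my understanding of the usual pattern): declare the attribute with a class-level annotation so the checker knows it exists.</p>
<pre class="lang-py prettyprint-override"><code>class StrWithInt(str):
    value: int  # annotation only; tells mypy the attribute exists

    def __new__(cls, content: str, value: int) -> "StrWithInt":
        ret = super().__new__(cls, content)
        ret.value = value
        return ret
</code></pre>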
|
<python><subclass><mypy>
|
2022-12-12 14:13:31
| 1
| 866
|
Henk
|
74,772,390
| 17,191,838
|
Freezing the value of type arguments in an inherited generic class
|
<pre class="lang-py prettyprint-override"><code>import typing as typ
T = typ.TypeVar("T")
X = typ.TypeVar("X")
class Base(typ.Generic[T, X]):
pass
class ChildInt(?):
pass
class InheritedInt(ChildInt[str]):
# should be equivalent to Base[int, str]
</code></pre>
<p>I want <code>ChildInt</code> to inherit from <code>Base[int, ???]</code>, where <code>ChildInt</code> is itself a generic that supplies the <code>???</code> value.</p>
<p><strong>How do I achieve this?</strong></p>
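<p>A sketch of what I think works (under my reading of standard typing generics): subclass the partially applied alias, leaving the remaining type variable free, so the child stays generic in it.</p>
<pre class="lang-py prettyprint-override"><code>import typing as typ

T = typ.TypeVar("T")
X = typ.TypeVar("X")


class Base(typ.Generic[T, X]):
    pass


# T is pinned to int; X stays free, so ChildInt is generic in X.
class ChildInt(Base[int, X]):
    pass


class InheritedInt(ChildInt[str]):  # behaves like Base[int, str]
    pass
</code></pre>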
|
<python><python-typing>
|
2022-12-12 14:12:40
| 1
| 537
|
TNTzx
|
74,772,381
| 8,406,122
|
Copying files from one location of a server to another using python
|
<p>Say I have a file that lists the locations of some <code>'.wav'</code> files on a server. For example, say the content of the text file <code>location.txt</code> is this:</p>
<pre><code>/home/user/test_audio_folder_1/audio1.wav
/home/user/test_audio_folder_2/audio2.wav
/home/user/test_audio_folder_3/audio3.wav
/home/user/test_audio_folder_4/audio4.wav
/home/user/test_audio_folder_5/audio5.wav
</code></pre>
<p>What I want to do is copy these files from their different locations on the server into one directory, for example <code>/home/user/final_audio_folder/</code>, which will then contain all the audio files from <code>audio1.wav</code> to <code>audio5.wav</code>.</p>
<p>I am trying to do this with <code>shutil</code>, but the main problem I am facing is that while copying the files, I need to name each destination file. I have written a demo version of what I am trying to do, but I don't know how to scale it to read the paths of the <code>'.wav'</code> files from the text file and copy them to my desired location in a loop.</p>
<p>My code for copying a single file goes as follows:</p>
<pre><code>import shutil

original = r'/home/user/test_audio_folder_1/audio1.wav'
target = r'/home/user/final_audio_folder_1/final_audio1.wav'
shutil.copyfile(original, target)
</code></pre>
<p>Any suggestions will be really helpful. Thank you.</p>
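<p>A sketch of what I have in mind (assuming <code>location.txt</code> lists one source path per line and the target directory already exists), reusing each file's own basename so no new names are needed:</p>
<pre class="lang-py prettyprint-override"><code>import shutil
from pathlib import Path

target_dir = Path('/home/user/final_audio_folder')

with open('location.txt') as paths:
    for line in paths:
        path = line.strip()
        if not path:  # skip blank lines
            continue
        src = Path(path)
        # Reuse the source file's name inside the target directory.
        shutil.copyfile(src, target_dir / src.name)
</code></pre>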
|
<python><server>
|
2022-12-12 14:12:11
| 2
| 377
|
Turing101
|
74,772,293
| 4,451,521
|
Warning when adding a column to a dataframe with the same value
|
<p>How can I add a column to a dataframe with the same value in every row?</p>
<p>I've tried</p>
<pre><code>dataframe['size'] = 10
</code></pre>
<p>or</p>
<pre><code>dataframe.loc[:, 'size'] = 10
</code></pre>
<p>but this gives</p>
<pre><code>A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
</code></pre>
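<p>A sketch of the usual remedy, assuming <code>dataframe</code> was produced by slicing another frame (which is what the warning is about): take an explicit copy first, then assign.</p>
<pre class="lang-py prettyprint-override"><code># An explicit copy breaks the link to the original frame,
# so the assignment is unambiguous and the warning goes away.
dataframe = dataframe.copy()
dataframe['size'] = 10
</code></pre>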
|
<python><pandas>
|
2022-12-12 14:06:28
| 0
| 10,576
|
KansaiRobot
|
74,772,252
| 12,304,000
|
No module named 'airflow.sensors.python_sensor'
|
<p>I am trying to use PythonSensor in my DAG, but I am unable to import it.</p>
<pre><code>from airflow.sensors.python_sensor import PythonSensor

wait_for_stg_completion = PythonSensor(
    task_id='wait_for_stg_completion',
    python_callable=fetch_stg_qa_status
)
</code></pre>
<p>How can I import it? What else can I try?</p>
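<p>One thing that might be worth checking (an assumption about the Airflow version in use): in Airflow 2.x the sensor lives under a different module path.</p>
<pre class="lang-py prettyprint-override"><code># Current location in Airflow 2.x; airflow.sensors.python_sensor is an older path.
from airflow.sensors.python import PythonSensor
</code></pre>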
|
<python><python-3.x><airflow>
|
2022-12-12 14:03:00
| 2
| 3,522
|
x89
|
74,772,165
| 7,959,614
|
Create triangular mesh of pentagon or higher
|
<p>I have the coordinates of the corners of a pentagon</p>
<pre><code>import math
import numpy as np
n_angles = 5
r = 1
angles = np.linspace(0, 2 * math.pi, n_angles, endpoint=False)
x = (r * np.cos(angles))
y = (r * np.sin(angles))
corners = np.array(list(zip(x, y)))
</code></pre>
<p>From these points I want to create a triangular mesh using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Delaunay.html" rel="nofollow noreferrer">scipy.spatial.Delaunay</a>.</p>
<pre><code>import matplotlib.pyplot as plt
from scipy.spatial import Delaunay
tri = Delaunay(points=corners)
plt.triplot(corners[:,0], corners[:,1], tri.simplices)
plt.plot(corners[:,0], corners[:,1], 'o')
for corner in range(len(corners)):
    plt.annotate(text=f'{corner + 1}', xy=(corners[:,0][corner] + 0.05, corners[:,1][corner]))
plt.axis('equal')
plt.show()
</code></pre>
<p>The result looks as follows:</p>
<p><a href="https://i.sstatic.net/aBOU5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aBOU5.png" alt="enter image description here" /></a></p>
<p>Why are the simplices (1, 3), (1, 4), (2, 4) missing, and how can I add them?</p>
<p>I think I need to specify <code>incremental=True</code> in <code>Delaunay()</code> and use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Delaunay.add_points.html" rel="nofollow noreferrer">scipy.spatial.Delaunay.add_points</a> but this results in an error:</p>
<pre><code>QhullError: QH6239 Qhull precision error: initial Delaunay input sites are cocircular or cospherical. Use option 'Qz' for the Delaunay triangulation or Voronoi diagram of cocircular/cospherical points; it adds a point "at infinity". Alternatively use option 'QJ' to joggle the input
</code></pre>
<p>Please advise.</p>
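<p>Two notes that may help. First, a convex pentagon triangulates into exactly three triangles, so only two of the five diagonals can ever appear; the other three are absent by construction, not by error. Second, for the Qhull error, <code>Delaunay</code> forwards Qhull options, so the <code>QJ</code> option the message suggests can be passed directly (a sketch; the added points are hypothetical):</p>
<pre class="lang-py prettyprint-override"><code>from scipy.spatial import Delaunay

# 'QJ' joggles the input, as the error message itself suggests;
# qhull_options is a documented Delaunay parameter.
tri = Delaunay(points=corners, incremental=True, qhull_options='QJ')
# tri.add_points(more_points)  # hypothetical extra points, as intended above
</code></pre>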
|
<python><matplotlib><scipy><triangulation><delaunay>
|
2022-12-12 13:55:25
| 1
| 406
|
HJA24
|
74,772,104
| 7,484,371
|
Numpy: Using an index array to set values in a 3D array
|
<p>I have an <code>indices</code> array of shape (2, 2, 3) which looks like this:</p>
<pre><code>array([[[ 0, 6, 12],
[ 0, 6, 12]],
[[ 1, 7, 13],
[ 1, 7, 13]]])
</code></pre>
<p>I want to use these as <strong>indices</strong> to set some values of a <code>np.zeros</code> matrix to 1. While the highest value in this example is 13, I know that it can go up to 18. This is why I created the <code>one_hot = np.zeros((2, 2, 18))</code> array:</p>
<pre><code>array([[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]])
</code></pre>
<p>Using the <code>indices</code> array, my desired outcome is this:</p>
<pre><code>array([[[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]],
[[0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]]])
</code></pre>
<p>I want to use numpy's advanced indexing sort of like this:</p>
<p><code>one_hot[indices] = 1</code></p>
<p>How can I do that?</p>
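<p>Two sketches that seem to do this, assuming <code>indices</code> and <code>one_hot</code> as above (the coordinate names <code>i</code> and <code>j</code> are mine):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# Option 1: np.put_along_axis treats `indices` as positions along the last axis.
np.put_along_axis(one_hot, indices, 1, axis=-1)

# Option 2: explicit advanced indexing; broadcast row/column coordinates
# against the (2, 2, 3) index array.
i, j = np.indices(indices.shape[:2])
one_hot[i[..., None], j[..., None], indices] = 1
</code></pre>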
|
<python><numpy><multidimensional-array><numpy-ndarray>
|
2022-12-12 13:51:02
| 1
| 4,244
|
Max S.
|
74,772,070
| 18,632,985
|
ModuleNotFoundError: No module named X when using foreach function with PySpark
|
<p>I currently encounter an error when using an <strong>external Python module</strong> (orjson) inside a <strong>foreach</strong> function with <strong>PySpark</strong>. Everything is fine if I use the module outside the <strong>foreach</strong> function (via the <strong>collect()</strong> method). Below is my simple code:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, DateType, StringType, IntegerType
import orjson

if __name__ == "__main__":
    spark = SparkSession \
        .builder \
        .master("local[*]") \
        .appName("HelloSparkSQL") \
        .getOrCreate()

    data = [[1, "male"], [2, "male"], [3, "female"], [4, "female"], [10, "male"]]
    schema = StructType([StructField("Age", IntegerType()), StructField("Gender", StringType())])
    surveyDF = spark.createDataFrame(data=data, schema=schema)

    countDF = surveyDF.select("Age", "Gender").limit(20)
    list1 = countDF.collect()
    for row in list1:
        data = {
            "age": row["Age"],
            "gender": row["Gender"]
        }
        newjson = orjson.dumps(data)
        print(newjson)

    # b'{"age":1,"gender":"male"}'
    # b'{"age":2,"gender":"male"}'
    # b'{"age":3,"gender":"female"}'
    # b'{"age":4,"gender":"female"}'
    # b'{"age":10,"gender":"male"}'
</code></pre>
<p>But as you know, it's never a good idea to iterate over big data after calling <strong>collect()</strong>. So I use a simple <strong>foreach</strong> function to iterate, like below (replacing everything from <code>list1</code> to the end):</p>
<pre><code>def jsontest(row):
    data = {
        "age": row["Age"],
        "gender": row["Gender"]
    }
    newjson = orjson.dumps(data)
    print(newjson)


countDF.foreach(jsontest)
</code></pre>
<p>Then I got this error</p>
<pre><code> File "C:\SparkEverything\spark3_3_0\python\lib\pyspark.zip\pyspark\worker.py", line 668, in main
File "C:\SparkEverything\spark3_3_0\python\lib\pyspark.zip\pyspark\worker.py", line 85, in read_command
File "C:\SparkEverything\spark3_3_0\python\lib\pyspark.zip\pyspark\serializers.py", line 173, in _read_with_length
return self.loads(obj)
File "C:\SparkEverything\spark3_3_0\python\lib\pyspark.zip\pyspark\serializers.py", line 471, in loads
return cloudpickle.loads(obj, encoding=encoding)
File "C:\SparkEverything\spark3_3_0\python\lib\pyspark.zip\pyspark\cloudpickle\cloudpickle.py", line 679, in subimport
__import__(name)
ModuleNotFoundError: No module named 'orjson'
</code></pre>
<p>I followed some guides on Stack Overflow (<a href="https://stackoverflow.com/questions/65010189/modulenotfounderror-in-pyspark-caused-in-serializers-py">link</a>), which said I have to add all the dependencies (in my case, the <strong>orjson</strong> module) to a zip file, then pass <strong>--py-files</strong> to <strong>spark-submit</strong>. But that didn't work either. Below is my orjson module folder:
<a href="https://i.sstatic.net/f1pB1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f1pB1.png" alt="enter image description here" /></a></p>
<p>After zipping the folder and doing as the guide said, I encountered another error:</p>
<pre><code>ModuleNotFoundError: No module named 'orjson.orjson' / 'orjson'
</code></pre>
<p>I think this method only works for a custom .py file with custom functions/modules. It won't work with a module installed via <strong>"pip install x"</strong>. I have had no luck opening the orjson.cp39-win_amd64.pyd file either.</p>
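<p>One more thing that might be worth ruling out (an assumption on my part): the executors may be launching a different interpreter than the driver, in which case <strong>orjson</strong> has to be installed into, or pointed at, the interpreter the workers use.</p>
<pre class="lang-py prettyprint-override"><code>import os

# Hypothetical path: point the workers at the same interpreter that has
# orjson installed, before the SparkSession is created.
os.environ["PYSPARK_PYTHON"] = r"C:\path\to\python.exe"
</code></pre>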
|
<python><apache-spark><pyspark><python-module><spark-submit>
|
2022-12-12 13:48:33
| 1
| 751
|
Hoang Minh Quang FX15045
|
74,772,068
| 4,451,521
|
How can I set the size of the markers in plotly express scatter_mapbox?
|
<p>I am trying to plot some latitudes and longitudes on a map. I do:</p>
<pre><code>import plotly.express as px
fig = px.scatter_mapbox(one_line, lat="lat", lon="lon",
zoom=15,
text = 'point_idx',
size='size',
size_max=15
)
fig.update_layout(mapbox_style="open-street-map", margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
</code></pre>
<p>where <code>one_line</code> is the dataframe with the values (only 17 rows).
For some reason this shows the map, and I can see the text when I hover over a point, but the markers are not drawn.</p>
<p>For an even stranger reason, in a similar script with 3176 points I can see the markers.
How can I set the marker size so that I can see them?</p>
<p>EDIT: The data:</p>
<pre><code> lat lon
0 35.843737 139.870344
1 35.843765 139.870372
2 35.843800 139.870407
3 35.843835 139.870441
4 35.843871 139.870476
5 35.843907 139.870511
6 35.843942 139.870545
7 35.843978 139.870580
8 35.844014 139.870614
9 35.844049 139.870648
10 35.844085 139.870682
11 35.844121 139.870715
12 35.844157 139.870749
13 35.844194 139.870783
14 35.844230 139.870817
15 35.844266 139.870850
16 35.844306 139.870887
</code></pre>
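<p>A sketch of a possible workaround (assuming a constant pixel size is acceptable instead of the data-driven <code>size</code> column): set the size on the trace after building the figure.</p>
<pre class="lang-py prettyprint-override"><code>fig = px.scatter_mapbox(one_line, lat="lat", lon="lon",
                        zoom=15,
                        text='point_idx')
# A constant marker size in pixels, independent of any 'size' column.
fig.update_traces(marker={"size": 12})
fig.update_layout(mapbox_style="open-street-map",
                  margin={"r": 0, "t": 0, "l": 0, "b": 0})
fig.show()
</code></pre>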
|
<python><plotly-express>
|
2022-12-12 13:48:24
| 0
| 10,576
|
KansaiRobot
|