| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
74,771,961
| 13,983,136
|
Fit linear model on all data after doing it just on training data in order to make predictions about future data
|
<p>In linear regression (but in general on any other model) is there a need to use <code>.fit()</code> on all input data after using it only on training data? I'll try to explain it better with an example. I will artificially create a trivial set of input and output data:</p>
<pre><code>import numpy as np
from random import uniform
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
X = np.arange(0, 500, 5).reshape(-1, 1)
y = np.zeros(100)
for i in range(len(X)):
    y[i] = 3 * X[i] + uniform(-10, 10)
X_train = X[:80]
y_train = y[:80]
X_test = X[80:]
y_test = y[80:]
reg = LinearRegression()
</code></pre>
<p>It could be seen as a time series (<code>X</code> contains the time periods).
Now, I train the model using the training sets:</p>
<pre><code>reg.fit(X_train, y_train)
</code></pre>
<p>and I make predictions using the testing set:</p>
<pre><code>y_pred = reg.predict(X_test)
</code></pre>
<p>Using <code>r2_score(y_test, y_pred)</code> I get about 0.99, so my model is able to fit the data correctly.</p>
<p>Now my question: I want to make a prediction about the future, i.e. about a series of data not contained in <code>X</code>, let's say:</p>
<pre><code>X_future = np.arange(500, 525, 5).reshape(-1, 1)
</code></pre>
<p>Should I proceed this way:</p>
<pre><code>y_future = reg.predict(X_future)
</code></pre>
<p>or in this:</p>
<pre><code>reg.fit(X, y)
y_future = reg.predict(X_future)
</code></pre>
<p>In other words, do I have to fit the model again on the entire <code>(X, y)</code> before making a prediction about the future? Is it possible that both approaches are correct?</p>
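<p>For reference, a minimal sketch of the refit-on-all-data variant, assuming the hold-out evaluation above is kept separate by cloning the estimator (<code>final_model</code> is just an illustrative name):</p>
<pre><code>from sklearn.base import clone

# keep `reg` as the model evaluated on the train/test split,
# and fit a fresh copy on all available data before predicting the future
final_model = clone(reg).fit(X, y)
y_future = final_model.predict(X_future)
</code></pre>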
<p>Thanks to anyone who will answer me.</p>
|
<python><machine-learning><scikit-learn><model><linear-regression>
|
2022-12-12 13:39:03
| 0
| 787
|
LJG
|
74,771,939
| 12,546,311
|
How to set xlim in seaborn barplot?
|
<p>I have created a barplot for given days of the year and the number of people born on each given day (figure a). I want to set the x-axis in my seaborn barplot to <code>xlim = (0,365)</code> to show the whole year.
But once I use <code>ax.set_xlim(0,365)</code> the bar plot is simply moved to the left (figure b).</p>
<p><a href="https://i.sstatic.net/GIafg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GIafg.png" alt="img" /></a></p>
<p>This is the code:</p>
<pre><code>#data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.DataFrame()
df['day'] = np.arange(41,200)
df['born'] = np.random.randn(159)*100
#plot
f, axes = plt.subplots(4, 4, figsize = (12,12))
ax = sns.barplot(x = df.day, y = df.born, data = df, ax = axes[0,0], color = 'skyblue')
ax.get_xaxis().set_label_text('')
ax.set_xticklabels('')
ax.set_yscale('log')
ax.set_ylim(0,10e3)
ax.set_xlim(0,366)
ax.set_title('SE Africa')
</code></pre>
<p>How can I set the x-axis limits to day 0 and 365 without the bars being shifted to the left?</p>
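<p>For reference, a sketch of one workaround I am considering, assuming a numeric day axis is acceptable (seaborn's barplot places bars at categorical positions 0..n-1, which is why <code>set_xlim</code> appears to shift them):</p>
<pre><code>f2, ax2 = plt.subplots(figsize=(6, 4))
ax2.bar(df['day'], df['born'], color='skyblue')  # bars at the actual day values
ax2.set_xlim(0, 365)                             # full-year axis, bars stay in place
ax2.set_title('SE Africa')
</code></pre>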
|
<python><seaborn><bar-chart>
|
2022-12-12 13:36:43
| 1
| 501
|
Thomas
|
74,771,737
| 20,732,098
|
Group by timestamp and get mean Dataframe
|
<p>I generate a CSV file for each week. The weeks are then merged into one file. The merged CSV, read into a DataFrame, looks like this:</p>
<pre><code> machineId | id | mean | min | max
machine1 | 2 | 00:00:03.47 | 00:00:00.02 | 00:00:06.11
machine1 | 1 | 00:00:01.30 | 00:00:00.74 | 00:00:01.86
machine1 | 2 | 00:00:00.35 | 00:00:00.01 | 00:00:00.99
machine1 | 2 | 00:00:01.63 | 00:00:00.67 | 00:00:02.60
machine1 | 3 | 00:00:00.66 | 00:00:00.03 | 00:00:01.91
</code></pre>
<p>Then I want to group the matching rows and calculate the mean per group. The first, third and fourth rows should be grouped together, and the average of the columns should be calculated.</p>
<p>I already used this method:</p>
<pre><code>df = df.groupby(['machineId','id']).agg({'mean': 'mean', 'min': 'mean', 'max': 'mean'})
</code></pre>
<p>but there is an error:</p>
<p>TypeError: Could not convert 00:00:03.47 to numeric</p>
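<p>For reference, a minimal sketch of one way around the error, assuming the duration columns should be parsed as timedeltas before averaging (column names as in the question):</p>
<pre><code>import pandas as pd

# parse the duration strings into timedeltas so they can be averaged
for col in ['mean', 'min', 'max']:
    df[col] = pd.to_timedelta(df[col])

out = df.groupby(['machineId', 'id'], as_index=False).agg(
    {'mean': 'mean', 'min': 'mean', 'max': 'mean'}
)
</code></pre>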
|
<python><pandas><dataframe><group>
|
2022-12-12 13:19:34
| 1
| 336
|
ranqnova
|
74,771,507
| 12,709,265
|
Parallelize a task on gpu
|
<p>I have a task in which I have to read in some images, perform some transformations such as resizing, and then write the images back to disk. I can parallelize this operation across my CPU. However, I was wondering if there is a way to parallelize it on the GPU, since that would make the task finish quicker.</p>
<p>I should specify that the images I have are quite large. Their size tends to be around 5000px x 4000px.</p>
<p>I perform this task in Python 3.</p>
<p>So I would like to know if this is even possible. If yes, where can I begin from?</p>
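<p>For reference, a minimal sketch of the kind of thing I mean, assuming PyTorch with CUDA is available (file names and the target size are placeholders):</p>
<pre><code>import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

img = np.asarray(Image.open('input.png').convert('RGB'), dtype=np.float32)  # H x W x C
t = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).to('cuda')          # 1 x C x H x W
resized = F.interpolate(t, size=(2000, 2500), mode='bilinear', align_corners=False)
out = resized.squeeze(0).permute(1, 2, 0).clamp(0, 255).byte().cpu().numpy()
Image.fromarray(out).save('output.png')
</code></pre>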
|
<python><python-3.x><parallel-processing><gpu>
|
2022-12-12 13:01:27
| 0
| 1,428
|
Shawn Brar
|
74,771,447
| 7,969,193
|
How to integrate Kafka and Flink in Python?
|
<p>I am trying to develop a test Flink application that reads from and writes to a Kafka topic.</p>
<p>However, I have been getting this error:</p>
<pre><code>start writing data to kafka
Traceback (most recent call last):
File "teste.py", line 71, in <module>
write_to_kafka(env)
File "teste.py", line 45, in write_to_kafka
env.execute()
File "/Users/lauracorssac/miniconda3/envs/pyflink_38/lib/python3.8/site-packages/pyflink/datastream/stream_execution_environment.py", line 764, in execute
return JobExecutionResult(self._j_stream_execution_environment.execute(j_stream_graph))
File "/Users/lauracorssac/miniconda3/envs/pyflink_38/lib/python3.8/site-packages/py4j/java_gateway.py", line 1321, in __call__
return_value = get_return_value(
File "/Users/lauracorssac/miniconda3/envs/pyflink_38/lib/python3.8/site-packages/pyflink/util/exceptions.py", line 146, in deco
return f(*a, **kw)
File "/Users/lauracorssac/miniconda3/envs/pyflink_38/lib/python3.8/site-packages/py4j/protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o0.execute.
: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:268)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1277)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
at akka.dispatch.OnComplete.internal(Future.scala:300)
at akka.dispatch.OnComplete.internal(Future.scala:297)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23)
at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:139)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:83)
at org.apache.flink.runtime.scheduler.DefaultScheduler.recordTaskFailure(DefaultScheduler.java:256)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:247)
at org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:240)
at org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:738)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:715)
at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:477)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:309)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:307)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:222)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:84)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:168)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at akka.actor.Actor.aroundReceive(Actor.scala:537)
at akka.actor.Actor.aroundReceive$(Actor.scala:535)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
at akka.actor.ActorCell.invoke(ActorCell.scala:548)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
at akka.dispatch.Mailbox.run(Mailbox.scala:231)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
... 4 more
Caused by: java.lang.NoSuchMethodError: org.apache.flink.api.common.functions.RuntimeContext.getMetricGroup()Lorg/apache/flink/metrics/MetricGroup;
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.initProducer(FlinkKafkaProducer.java:1365)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.initNonTransactionalProducer(FlinkKafkaProducer.java:1342)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.beginTransaction(FlinkKafkaProducer.java:990)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.beginTransaction(FlinkKafkaProducer.java:99)
at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.beginTransactionInternal(TwoPhaseCommitSinkFunction.java:436)
at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.initializeState(TwoPhaseCommitSinkFunction.java:427)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.initializeState(FlinkKafkaProducer.java:1195)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:189)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:171)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:94)
at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:283)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:726)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:702)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
at java.lang.Thread.run(Thread.java:748)
</code></pre>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code># Make sure that the Kafka cluster is started and the topic 'test_json_topic' is
# created before executing this job.
def write_to_kafka(env):
    type_info = Types.ROW([Types.INT(), Types.STRING()])
    ds = env.from_collection(
        [(1, 'hi'), (2, 'hello'), (3, 'hi'), (4, 'hello'), (5, 'hi'), (6, 'hello'), (6, 'hello')],
        type_info=type_info)

    serialization_schema = JsonRowSerializationSchema.Builder() \
        .with_type_info(type_info) \
        .build()
    kafka_producer = FlinkKafkaProducer(
        topic='test_json_topic',
        serialization_schema=serialization_schema,
        producer_config={'bootstrap.servers': 'localhost:9092', 'group.id': 'test_group'}
    )

    # note that the output type of ds must be RowTypeInfo
    ds.add_sink(kafka_producer)
    env.execute()


def read_from_kafka(env):
    deserialization_schema = JsonRowDeserializationSchema.Builder() \
        .type_info(Types.ROW([Types.INT(), Types.STRING()])) \
        .build()
    kafka_consumer = FlinkKafkaConsumer(
        topics='test_json_topic',
        deserialization_schema=deserialization_schema,
        properties={'bootstrap.servers': 'localhost:9092', 'group.id': 'test_group_1'}
    )
    kafka_consumer.set_start_from_earliest()
    env.add_source(kafka_consumer).print()
    env.execute('oi')


if __name__ == '__main__':
    logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")

    env = StreamExecutionEnvironment.get_execution_environment()
    env.add_jars("file:///Users/lauracorssac/HiWiProj/flink-connector-base-1.16.0.jar")
    env.add_jars("file:///Users/lauracorssac/HiWiProj/flink-sql-connector-kafka-1.16.0.jar")

    print("start writing data to kafka")
    write_to_kafka(env)

    print("start reading data from kafka")
    read_from_kafka(env)
</code></pre>
<p>As the code shows, I tried downloading many different jars from the Maven repository. Nothing worked.</p>
|
<python><apache-kafka><apache-flink><pyflink>
|
2022-12-12 12:56:44
| 1
| 1,385
|
Laura Corssac
|
74,771,398
| 12,871,587
|
Numpy datetime64[D] array to polars date series/column
|
<p><strong>Update:</strong> <code>pl.Series(datetime_array)</code> now creates a <code>date</code> type as expected.</p>
<hr />
<p>A numpy datetime array seems to be converted to an object series in Polars, but numerical or string arrays keep the proper format when converted to <code>pl.Series</code>. Am I using it wrong or could this be a bug?</p>
<p>In:</p>
<pre><code>datetime_array = np.array(['2022-02-11', '2022-02-11', '2022-02-11','2022-02-10','2022-02-11', '2022-02-11'], dtype='datetime64[D]')
</code></pre>
<p>Out:</p>
<pre><code>array(['2022-02-11', '2022-02-11', '2022-02-11', '2022-02-10',
'2022-02-11', '2022-02-11'], dtype='datetime64[D]')
</code></pre>
<p>Converting to series:</p>
<p>In:</p>
<pre><code>pl.Series(datetime_array)
</code></pre>
<p>Out:</p>
<pre><code>shape: (6,)
Series: '' [o][object]
[
2022-02-11
2022-02-11
2022-02-11
2022-02-10
2022-02-11
2022-02-11
]
</code></pre>
<p>If I'm trying to define the dtype in the series to be pl.Date it raises an exception as below</p>
<p>In:</p>
<pre><code>pl.Series(datetime_array, dtype=pl.Date)
</code></pre>
<p>Out:</p>
<pre><code>InvalidOperationError: cannot cast array of type ObjectChunked to arrow datatype
</code></pre>
<p>The workaround I have found is to convert the numpy datetime array to string type in numpy, and only then convert it to a Polars series, then use <code>.str.strptime()</code> in Polars to convert back to the date type.</p>
<p>In:</p>
<pre><code>pl.Series(np.datetime_as_string(datetime_array)).str.to_date()
</code></pre>
<p>Out:</p>
<pre><code>shape: (6,)
Series: '' [date]
[
2022-02-11
2022-02-11
2022-02-11
2022-02-10
2022-02-11
2022-02-11
]
</code></pre>
|
<python><numpy><datetime><python-polars>
|
2022-12-12 12:52:32
| 2
| 713
|
miroslaavi
|
74,771,345
| 7,195,897
|
SQLAlchemy update rows based on key:value like queries?
|
<p>I am using the SQLAlchemy session to handle some updates to the database.
I have two lists:</p>
<pre><code>keys = [1, 2, 3, 4, 5]
values = [6, 7, 8, 9, 0]
</code></pre>
<p>I want the row whose key is <code>keys[0]</code> to have its value set to <code>values[0]</code>, and so on. From what I am used to doing, this would be the solution:</p>
<pre><code>for key, value in zip(keys, values):
    db.session.query(Table).filter(Table.key == key).update({'value': value})
</code></pre>
<p>But this obviously would need 5 separate SQL queries.
Now, unfortunately I am neither a pro in SQL nor SQLAlchemy.</p>
<p>Is there any operation that allows me to just submit two same-length lists, and SQLAlchemy optimizes it itself, something like this?:</p>
<pre><code>db.session.query(Table).filter(Table.key == keys).update(value: values)
</code></pre>
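<p>For reference, a sketch of the kind of single-statement form I am hoping for, assuming SQLAlchemy 2.0 and that <code>key</code> is the primary-key column of <code>Table</code>:</p>
<pre><code>from sqlalchemy import update

# one executemany-style UPDATE instead of five separate queries
db.session.execute(
    update(Table),
    [{"key": k, "value": v} for k, v in zip(keys, values)],
)
db.session.commit()
</code></pre>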
<p>Thanks for any input!</p>
|
<python><sqlalchemy><flask-sqlalchemy>
|
2022-12-12 12:47:37
| 3
| 303
|
Stefan Wobbe
|
74,771,143
| 2,947,600
|
Segmentation fault when creating custom autograd function that calls Julia code
|
<p>I have a (vector-to-scalar) function and a corresponding derivative function written in Julia that I am unable to translate to Python. I would like to use these within PyTorch by defining a custom autograd function. For simplicity, let's assume this function is <code>sum()</code>. This gives the following MRE:</p>
<pre class="lang-python prettyprint-override"><code>import numpy as np
import torch
from julia import Main
class JuliaSum(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        x = input.cpu().detach().numpy()
        return torch.FloatTensor([Main.sum(x)]).to('cuda')
        # return torch.FloatTensor([np.sum(x)]).to('cuda')

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        x = input.cpu().detach().numpy()
        y = torch.FloatTensor(Main.ones(len(x))).to('cuda')
        # y = torch.FloatTensor(np.ones(len(x))).to('cuda')
        return grad_output * y


input = torch.FloatTensor([0.1, 0.2, 0.3]).to('cuda').requires_grad_()

# Works — outputs `tensor([0.6000], device='cuda:0', grad_fn=<JuliaSumBackward>)`
y = JuliaSum.apply(input)
print(y)

# Works — outputs `tensor([1., 1., 1.], device='cuda:0')`
x = input.cpu().detach().numpy().astype(np.float64)
y_test = torch.FloatTensor(Main.ones(len(x))).to('cuda')
print(torch.ones(1).to('cuda') * y_test)

# Doesn't work — segmentation fault
y.backward(torch.ones(1).to('cuda'))
print(input.grad)
</code></pre>
<p>Calling the forward method works fine, as does running the code contained in the <code>backward</code> method from the global scope. However, when I call the <code>backward</code> method, I receive:</p>
<pre><code>signal (11): Segmentation fault
in expression starting at none:0
Allocations: 3652709 (Pool: 3650429; Big: 2280); GC: 5
Segmentation fault (core dumped)
</code></pre>
<p>The exact command causing the issue is <code>Main.ones(len(x))</code>. Replacing this with <code>Main.ones(3)</code> still causes a segmentation fault, so it appears to be an issue with PyJulia accessing memory that has been deallocated.</p>
<p>Also note that when I replace the two calls to Julia with the corresponding NumPy commands (left commented-out), the <code>backward</code> method works fine. The code also works when all tensors are on the CPU but my application requires GPU-acceleration.</p>
<p>What is causing this segmentation fault, and how can I alter my code to avoid it whilst keeping the PyTorch tensors on the GPU?</p>
<hr>
<p>I've included a Dockerfile that matches my environment to make reproducing this issue as simple as possible. For reference, I am using an RTX 3060.</p>
<pre class="lang-docker prettyprint-override"><code>FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04
ARG PYTHON_VERSION=3.10.1
ARG JULIA_VERSION=1.7.1
ENV container docker
ENV DEBIAN_FRONTEND noninteractive
ENV LANG en_US.utf8
ENV MAKEFLAGS -j4
RUN mkdir /app
WORKDIR /app
# DEPENDENCIES
#===========================================
RUN apt-get update -y && \
apt-get install -y gcc make wget libffi-dev \
build-essential libssl-dev zlib1g-dev \
libbz2-dev libreadline-dev libsqlite3-dev \
libncurses5-dev libncursesw5-dev xz-utils \
git
# INSTALL PYTHON
#===========================================
RUN wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz && \
tar -zxf Python-$PYTHON_VERSION.tgz && \
cd Python-$PYTHON_VERSION && \
./configure --with-ensurepip=install --enable-shared && make && make install && \
ldconfig && \
ln -sf python3 /usr/local/bin/python
RUN python -m pip install --upgrade pip setuptools wheel && \
python -m pip install julia numpy torch
# INSTALL JULIA
#====================================
RUN wget https://raw.githubusercontent.com/abelsiqueira/jill/main/jill.sh && \
bash /app/jill.sh -y -v $JULIA_VERSION && \
export PYTHON="python" && \
julia -e 'using Pkg; ENV["PYTHON"] = "/usr/local/bin/python"' && \
python -c 'import julia; julia.install()'
# CLEAN UP
#===========================================
RUN rm -rf /app/jill.sh \
/opt/julias/*.tar.gz \
/app/Python-$PYTHON_VERSION.tgz
RUN apt-get purge -y gcc make wget zlib1g-dev libffi-dev libssl-dev \
libbz2-dev libreadline-dev \
libncurses5-dev libncursesw5-dev xz-utils && \
apt-get autoremove -y
CMD ["/bin/bash"]
</code></pre>
|
<python><pytorch><julia>
|
2022-12-12 12:31:25
| 0
| 342
|
Tim Hargreaves
|
74,771,050
| 4,153,059
|
Overwrite specific value with previous row value in Pandas
|
<p>I need to overwrite a spurious noise value every time it occurs in a Pandas data frame column. I need to overwrite it with a clean value from the previous row. If multiple adjacent noise values are encountered, all should be overwritten by the same recent good value.</p>
<p>The following code works but is too slow. Is there a better non-iterative Pandas'esque solution?</p>
<pre><code>def cleanData(df):
    lastGoodValue = 0
    for row in df.itertuples():
        if (df.at[row.Index, 'Barometric Altitude'] == 16383.997535000002):
            df.at[row.Index, 'Barometric Altitude'] = lastGoodValue
        else:
            lastGoodValue = df.at[row.Index, 'Barometric Altitude']
    return df
</code></pre>
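<p>For reference, a sketch of a vectorized alternative, assuming the noise value is exactly the sentinel shown above (column name as in the question):</p>
<pre><code>NOISE = 16383.997535000002
col = 'Barometric Altitude'
# replace noise values with NaN, forward-fill the last good value,
# and fall back to 0 if the series starts with noise
df[col] = df[col].mask(df[col] == NOISE).ffill().fillna(0)
</code></pre>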
|
<python><pandas><signal-processing>
|
2022-12-12 12:22:52
| 2
| 3,574
|
CodeCabbie
|
74,771,032
| 6,013,700
|
How to test an element from a generator without consuming it
|
<p>I have a generator <code>gen</code>, with the following properties:</p>
<ul>
<li>it's quite expensive to make it yield (more expensive than creating the generator)</li>
<li>the elements take up a fair amount of memory</li>
<li>sometimes all of the <code>__next__</code> calls will throw an exception, but creating the generator doesn't tell you when that will happen</li>
</ul>
<p>I didn't implement the generator myself.</p>
<p>Is there a way to make the generator yield its first element (I will do this in a try/except), without having the generator subsequently start on the second element if I loop through it afterwards?</p>
<p>I thought of creating some code like this:</p>
<pre><code>try:
    first = next(gen)
except StopIteration:
    return None
except Exception:
    print("Generator throws exception on a yield")

# looping also over the first element which we yielded already
for thing in (first, *gen):
    do_something_complicated(thing)
</code></pre>
<p>Solutions I can see which are not very nice:</p>
<ol>
<li>Create generator, test first element, create a new generator, loop through the second one.</li>
<li>Put the entire for loop in a try/except; not so nice because the exception thrown by the yield is very general and it would potentially catch other things.</li>
<li>Yield first element, test it, then reform a new generator from the first element and the rest of <code>gen</code> (ideally without extracting all of <code>gen</code>'s elements into a list, since this could take a lot of memory).</li>
</ol>
<p>For 3, which seems like the best solution, a nearly-there example would be the example I gave above, but I believe that would just extract all the elements of <code>gen</code> into a tuple before we start iterating, which I would like to avoid.</p>
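<p>For reference, a sketch of option 3 without materialising the generator, assuming <code>itertools.chain</code> is acceptable (it lazily continues with the remaining elements):</p>
<pre><code>from itertools import chain

try:
    first = next(gen)
except StopIteration:
    items = ()                    # empty generator: nothing to iterate
except Exception:
    items = ()
    print("Generator throws exception on a yield")
else:
    items = chain((first,), gen)  # lazily re-attach the first element

for thing in items:
    do_something_complicated(thing)
</code></pre>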
|
<python><generator><yield>
|
2022-12-12 12:21:33
| 1
| 1,602
|
Marses
|
74,770,983
| 972,647
|
dagster: how do I get modified assets only? (external api)
|
<p>I'm learning dagster, so maybe I haven't fully grasped the concepts.</p>
<p>My goal is to query an external web service / API and get only the modified records. I can make the external call a resource or put it into the asset directly, right? The external resource has a filter option for last modified.</p>
<p>For me the core question is: where and how do I pass in the value (in this case a date)? All records changed after this date (e.g. the last run) should be fetched from the external API.</p>
<p>So where do I store this date and how do I pass it to the asset?</p>
|
<python><dagster>
|
2022-12-12 12:17:42
| 1
| 7,652
|
beginner_
|
74,770,982
| 7,713,770
|
How to format template with django?
|
<p>I have a Django application, and I am trying to format some data.</p>
<p>So I have this method:</p>
<pre><code>
from __future__ import print_function
import locale
import re
from fileinput import filename
from itertools import zip_longest
from locale import LC_NUMERIC, atof
import pandas as pd
from tabulate import tabulate
class FilterText:
    def show_extracted_data_from_file(self):
        verdi_total_fruit = [12, 13, 14]
        verdi_fruit_name = ["watermeloenen", "appels", "winterpeen"]
        verdi_cost = [123, 55, 124, 88, 123, 123]

        regexes = [verdi_total_fruit, verdi_fruit_name, verdi_cost]
        matches = [(regex) for regex in regexes]

        return tabulate(
            zip_longest(*matches),  # type: ignore
            headers=[
                "aantal fruit",
                "naam fruit",
                "kosten fruit",
            ],
        )
</code></pre>
<p>views:</p>
<pre><code>def test(request):
    filter_text = FilterText()

    content = ""
    content = filter_text.show_extracted_data_from_file()
    context = {"content": content}

    return render(request, "main/test.html", context)
</code></pre>
<p>template</p>
<pre><code>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Create a Profile</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<link rel="stylesheet" type="text/css" href="{% static 'main/css/custom-style.css' %}" />
<link rel="stylesheet" type="text/css" href="{% static 'main/css/bootstrap.css' %}" />
</head>
<body>
<div class="container center">
<span class="form-inline" role="form">
<div class="inline-div">
<form class="form-inline" action="/controlepunt140" method="POST" enctype="multipart/form-data">
<div class="d-grid gap-3">
<div class="form-group">
</div>
<div class="form-outline">
<div class="form-group">
<div class="wishlist">
{{content}}
</div>
</div>
</div>
</div>
</div>
</span>
<span class="form-inline" role="form">
<div class="inline-div">
<div class="d-grid gap-3">
<div class="form-group">
</div>
</div>
</div>
</span>
</form>
</div>
</body>
</html>
</code></pre>
<p>that produces this result:</p>
<pre><code> aantal fruit naam fruit kosten fruit -------------- ------------- -------------- 12 watermeloenen 123 13 appels 55 14 winterpeen 124 88 123 123
</code></pre>
<p>but as you can see it is one horizontal output.</p>
<p>But I want every item underneath each other, so that it looks like this:</p>
<pre><code>6 Watermeloenen 577,50
75 Watermeloenen 69,30
9 watermeloenen 46,20
</code></pre>
<p>Question: how do I format the template?</p>
<p>if I do this:</p>
<pre><code>print(tabulate(
    zip_longest(*matches),  # type: ignore
    headers=[
        "aantal fruit",
        "naam fruit",
        "kosten fruit",
    ],
))
</code></pre>
<p>it looks correct:</p>
<pre><code> aantal fruit naam fruit kosten fruit
-------------- ------------- --------------
16 Watermeloenen 123,20
360 Watermeloenen 2.772,00
6 Watermeloenen 46,20
75 Watermeloenen 577,50
9 Watermeloenen 69,30
688 Appels 3.488,16
22 Sinaasappels 137,50
80 Sinaasappels 500,00
160 Sinaasappels 1.000,00
320 Sinaasappels 2.000,00
160 Sinaasappels 1.000,00
61 Sinaasappels 381,25
</code></pre>
<p>But not in the template</p>
<p>so this:</p>
<pre><code>return tabulate(
    zip_longest(*matches),  # type: ignore
    headers=[
        "aantal fruit",
        "naam fruit",
        "kosten fruit",
    ],
    tablefmt="html",
)
</code></pre>
<p>then the output is this:</p>
<pre><code><table> <thead> <tr><th style="text-align: right;"> aantal fruit</th><th>naam fruit </th><th style="text-align: right;"> kosten fruit</th></tr> </thead> <tbody> <tr><td style="text-align: right;"> 12</td><td>watermeloenen</td><td style="text-align: right;"> 123</td></tr> <tr><td style="text-align: right;"> 13</td><td>appels </td><td style="text-align: right;"> 55</td></tr> <tr><td style="text-align: right;"> 14</td><td>winterpeen </td><td style="text-align: right;"> 124</td></tr> <tr><td style="text-align: right;"> </td><td> </td><td style="text-align: right;"> 88</td></tr> <tr><td style="text-align: right;"> </td><td> </td><td style="text-align: right;"> 123</td></tr> <tr><td style="text-align: right;"> </td><td> </td><td style="text-align: right;"> 123</td></tr> </tbody> </table>
</code></pre>
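<p>A sketch of two common ways to keep the layout in the template (assuming Django's autoescaping and normal HTML whitespace collapsing are what flatten the output):</p>
<pre><code><div class="wishlist">
    <!-- plain-text tabulate output: a pre tag preserves the spacing -->
    <pre>{{ content }}</pre>

    <!-- or, when tablefmt="html" is used, mark the markup as safe so it is rendered -->
    {{ content|safe }}
</div>
</code></pre>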
|
<python><html><django>
|
2022-12-12 12:17:27
| 1
| 3,991
|
mightycode Newton
|
74,770,956
| 12,559,770
|
Assign unique group per consecutive values under a threshold in pandas
|
<p>I have a dataframe such as:</p>
<pre><code>Groups Names Values
G1 SP1 1
G1 SP1 5
G1 SP1 -2
G1 SP1 30
G1 SP1 50
G1 SP1 50
G1 SP1 -1
G1 SP1 2
G1 SP2 2
G1 SP2 20
G1 SP2 1
G2 SP3 30
G2 SP3 9
G2 SP3 3
G3 SP3 2
</code></pre>
<p>and I would like to add a <code>new_group</code> column for each <code>Groups-Names</code> group where I find consecutive <code>Values < 10</code>.</p>
<p>I should then get:</p>
<pre><code>Groups Names Values new_groups
G1 SP1 1 NG1
G1 SP1 5 NG1
G1 SP1 -2 NG1
G1 SP1 30 NG2
G1 SP1 50 NG3
G1 SP1 50 NG4
G1 SP1 -1 NG5
G1 SP1 2 NG5
G1 SP2 2 NG5
G1 SP2 20 NG6
G1 SP2 1 NG7
G2 SP3 30 NG8
G2 SP3 9 NG9
G2 SP3 3 NG9
G3 SP3 2 NG10
</code></pre>
<p>so for instance, since I get <strong>Values < 10</strong> for the first <strong>3 rows</strong>, I assign the first group: <code>NG1</code></p>
<p>Then, I have a value > 10 (which is 30), so I assign the second group: <code>NG2</code></p>
<p>Then, I get <code>value > 10</code> in <code>row5</code>, then I assign a new group : <code>NG3</code><br />
Then, I get again a <code>value > 10</code> in <code>row6</code>, then I assign a new group: <code>NG4</code></p>
<p>and so on...</p>
<p>Here is the dataframe in dict format if it can help;</p>
<pre><code>{'Groups': {0: 'G1', 1: 'G1', 2: 'G1', 3: 'G1', 4: 'G1', 5: 'G1', 6: 'G1', 7: 'G1', 8: 'G1', 9: 'G1', 10: 'G1', 11: 'G2', 12: 'G2', 13: 'G2',14:'G3'}, 'Names': {0: 'SP1', 1: 'SP1', 2: 'SP1', 3: 'SP1', 4: 'SP1', 5: 'SP1', 6: 'SP1', 7: 'SP1', 8: 'SP2', 9: 'SP2', 10: 'SP2', 11: 'SP3', 12: 'SP3', 13: 'SP3', 14 : 'SP3'}, 'Values': {0: 1, 1: 5, 2: -2, 3: 30, 4: 50, 5: 50, 6: -1, 7: 2, 8: 2, 9: 20, 10: 1, 11: 30, 12: 9, 13: 3, 14: 2}}
</code></pre>
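<p>For reference, a sketch of one interpretation that reproduces the expected output above (<code>d</code> is the dict from the question; a new group starts whenever <code>Groups</code> changes, the current value is >= 10, or the previous value was >= 10):</p>
<pre><code>import pandas as pd

df = pd.DataFrame(d)

below = df['Values'].lt(10)
new_block = (
    df['Groups'].ne(df['Groups'].shift())
    | ~below
    | ~below.shift(fill_value=True)
)
df['new_groups'] = 'NG' + new_block.cumsum().astype(str)
</code></pre>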
|
<python><python-3.x><pandas>
|
2022-12-12 12:14:55
| 2
| 3,442
|
chippycentra
|
74,770,810
| 10,729,292
|
what does @ mean in case of `pip install <package> @ path`?
|
<p>I just came across this from a project on GitHub</p>
<pre><code>pip install colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1602866480661/work
</code></pre>
<p>What does @ do?
Assuming it decides the path where to install, I tried using an arbitrary path and it wouldn't work like that.</p>
<p>Also, why would we want to do this?
Also, what is the significance of <code>file:///</code>?</p>
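<p>For reference, the command forms I am asking about look like this (a sketch; the local path is just a placeholder):</p>
<pre><code># PEP 508 "direct reference": install the named package from the URL after @,
# instead of resolving it from PyPI; file:/// is a local-filesystem URL
pip install "colorama @ file:///absolute/path/to/a/local/build"
pip install "requests @ git+https://github.com/psf/requests"
</code></pre>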
<p>Here is the link to the project</p>
<p><a href="https://github.com/sstzal/DFRF/blob/main/requirements.txt" rel="nofollow noreferrer">https://github.com/sstzal/DFRF/blob/main/requirements.txt</a></p>
<p>Thanks for your attention</p>
|
<python><pip>
|
2022-12-12 12:01:41
| 1
| 1,558
|
Sadaf Shafi
|
74,770,752
| 354,255
|
mock psycopg2 fetchone and fetchall to return different dataset doesn't work
|
<p>I'm trying to mock fetchone/fetchall to return different data sets for different cases within a suite, but it seems the first mock is global and can't be overwritten. All I get for the subsequent tests is the same value.</p>
<p>The first test case which sets up the mock:</p>
<pre><code>def test_rejected_scenario():
    with mock.patch('psycopg2.connect') as mock_connect:
        mock_con = mock_connect.return_value
        mock_cursor = mock_con.cursor.return_value
        mock_cursor.fetchall.return_value = REQUESTS
        mock_cursor.fetchone.return_value = REQUESTS[0]
        response = handler()
        assert response["statusCode"] == 200
</code></pre>
<p>All the subsequent tests get REQUESTS for fetchall() and REQUESTS[0] for fetchone(), no matter how the mock is set up.</p>
<p>This question is about setting different return data sets between test cases. Setting the mock for an individual case works for me.</p>
|
<python><mocking><pytest><psycopg2>
|
2022-12-12 11:56:11
| 0
| 641
|
Lys
|
74,770,697
| 17,762,566
|
Extracting values from complex and deeply nested list of dictionaires using python?
|
<p>I have a complex data structure consisting of a list of dictionaries, and these dictionaries consist of further lists of dictionaries. Now, I am trying to extract specific <code>key:value</code> pairs from the inner nested dicts. Hopefully the example below shows what I am trying to achieve:</p>
<pre><code>complex_data =
[[{'A': 'test1'},
{'A': 'test2'},
{'B': [{'C': {'testabc': {'A': 'xxx'}}},
{'C': {'test123': {'A': 'yyy'}, 'test456': {'A': '111abc'}}},
{'C': {'test123': {'A': 'yyy'}, 'test456': {'A': '111def'}}}]}],
.
.
[{'A': 'test11'},
{'A': 'test22'}],
.
.
[{'A': 'test33'},
{'A': 'test44'},
{'B': []}],
.
[{'A': 'test3'},
{'A': 'test4'},
{'B': [{'C': {'testabc': {'A': '111'}}},
{'C': {'test123': {'A': 'yyy'}, 'test456': {'A': '999abc'}}},
{'C': {'test123': {'A': 'yyy'}, 'test456': {'A': '999def'}}}]}]]
</code></pre>
<p>Now the output should be a nested list of dictionaries like:</p>
<pre><code>desired_output = [[{'A': 'test1'}, {'A': 'test2'}, 'test456': {'A': '111def'}],
.
.
[{'A': 'test3'}, {'A': 'test4'}, 'test456': {'A': '999def'}]]
</code></pre>
<p>I am doing</p>
<pre><code>for y in complex_data:
    desired_output.append([y[2]['B'][2]['C'] for y in row] for row in y)
</code></pre>
<p>But this won't work. Variable <code>y</code> doesn't iterate over list <code>B</code>. Can anyone please let me know what the issue is here and how to resolve it? I am using <code>python3.9</code>.</p>
<p>Update: In some cases, the complete list <code>B</code> could be missing or could be empty <code>{'B': []}</code>.</p>
<p>Thanks in advance.</p>
<p>P.S: Please let me know if any info is missing or not clear.</p>
|
<python><python-3.9>
|
2022-12-12 11:51:11
| 1
| 793
|
Preeti
|
74,770,686
| 11,167,163
|
Unable to label inside ring pie chart
|
<p>The following code :</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np

def autopct_format(values):
    def my_format(pct):
        total = sum(values)
        val = int(round(pct*total/100.0)/1000000)
        if val == 0:
            return ""
        else:
            return '{:.1f}%\n({v:,d})'.format(pct, v=val)
    return my_format

fig, ax = plt.subplots(1, 1, figsize=(10,10), dpi=100, layout="constrained")
ax.axis('equal')
width = 0.3

#Color
A, B, C = [plt.cm.Blues, plt.cm.Reds, plt.cm.Greens]

#OUTSIDE
cin = [A(0.5),A(0.4),A(0.3),B(0.5),B(0.4),B(0.3),B(0.2),B(0.1), C(0.5),C(0.4),C(0.3)]

Labels_Smalls = ['groupA', 'groupB', 'groupC']
labels = ['A.1', 'A.2', 'A.3', 'B.1', 'B.2', 'C.1', 'C.2', 'C.3',
          'C.4', 'C.5']
Sizes_Detail = [4,3,5,6,5,10,5,5,4,6]
Sizes = [12,11,30]

pie2, _, junk = ax.pie(Sizes_Detail, radius=1,
                       labels=labels, labeldistance=0.85,
                       autopct=autopct_format(Sizes_Detail), pctdistance=1.15,
                       colors=cin)

for ea, eb in zip(pie2, _):
    mang = (ea.theta1 + ea.theta2)/2
    tourner = 360 - mang
    eb.set_rotation(mang+tourner) # rotate the label by (mean_angle + 270)
    eb.set_va("center")
    eb.set_ha("center")

plt.setp(pie2, width=width, edgecolor='white')

#INSIDE
pie, _, junk = ax.pie(Sizes, radius=1-width,
                      autopct=autopct_format(Sizes), pctdistance=0.8,
                      colors=[A(0.6), B(0.6), C(0.6)])
plt.setp(pie, width=width, edgecolor='white')
plt.margins(0,0)

bbox_props = dict(boxstyle="square,pad=0.3", fc="w", ec="k", lw=0.72)
kw = dict(arrowprops=dict(arrowstyle="-"),
          bbox=bbox_props, zorder=3, va="center")

for i, p in enumerate(pie):
    ang = (p.theta2 - p.theta1)/2. + p.theta1
    y = np.sin(np.deg2rad(ang))
    x = np.cos(np.deg2rad(ang))
    horizontalalignment = {-1: "right", 1: "left"}[int(np.sign(x))]
    connectionstyle = "angle,angleA=0,angleB={}".format(ang)
    kw["arrowprops"].update({"connectionstyle": connectionstyle})
    ax.annotate(Labels_Smalls[i], xy=(x, y), xytext=(1.35*np.sign(x), 1.4*y),
                horizontalalignment=horizontalalignment, **kw)

for ea, eb in zip(pie, _):
    mang = (ea.theta1 + ea.theta2)/2. # get mean_angle of the wedge
    #print(mang, eb.get_rotation())
    tourner = 360 - mang
    eb.set_rotation(mang+tourner) # rotate the label by (mean_angle + 270)
    eb.set_va("center")
    eb.set_ha("center")
</code></pre>
<p>gives the following output :</p>
<p><a href="https://i.sstatic.net/2C3ca.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2C3ca.png" alt="enter image description here" /></a></p>
<p>But it only links the outside pie and not the inside one, so how would I make the same chart but with the arrow linking the inside pie to the bbox?</p>
<p><a href="https://i.sstatic.net/DB6pV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DB6pV.png" alt="enter image description here" /></a></p>
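<p>A sketch of the kind of change I have been experimenting with, assuming the arrow should start on the inner ring (the inner wedges end at radius <code>1 - width</code>, so the anchor point of the annotation is scaled by that factor):</p>
<pre><code>r = 1 - width   # outer radius of the inner ring
ax.annotate(Labels_Smalls[i], xy=(r * x, r * y), xytext=(1.35 * np.sign(x), 1.4 * y),
            horizontalalignment=horizontalalignment, **kw)
</code></pre>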
|
<python><matplotlib>
|
2022-12-12 11:49:42
| 0
| 4,464
|
TourEiffel
|
74,770,647
| 12,285,101
|
Replace character in python string every even or odd occurrence
|
<p>I have the following python string:</p>
<pre><code>
strng='1.345 , 2.341 , 7.981 , 11.212 , 14.873 , 7.121...'
</code></pre>
<p>How can I remove every ',' whose occurrence is odd, to get the following string:</p>
<pre><code>strng='1.345 2.341 , 7.981 11.212 , 14.873 7.121,...'
</code></pre>
<p>(Removed ",")</p>
<p>I know how to use replace, but that replaces all occurrences of a specific character, not only the odd or alternate ones.</p>
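<p>For reference, a minimal sketch of one way to do it, assuming the commas are always surrounded by single spaces as in the example:</p>
<pre><code>parts = strng.split(' , ')
# stitch the pieces back together, dropping the separator at every odd
# occurrence (1st, 3rd, ...) and keeping it at every even one (2nd, 4th, ...)
out = parts[0]
for i, p in enumerate(parts[1:], start=1):
    out += (' ' if i % 2 == 1 else ' , ') + p
</code></pre>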
|
<python><string><replace>
|
2022-12-12 11:46:44
| 3
| 1,592
|
Reut
|
74,770,604
| 4,896,449
|
Conditional in keras model based on input data / features
|
<p>I have a keras model which I would like to accept two input features, each feature would be encoded via its own embedding and dense layers. The two features are then summed to create the final output.</p>
<p>Dataset:</p>
<pre><code>row1 -> {x1: 'tag', x2: null, y: 'y1'}
row2 -> {x1: null, x2: 'long text field', y: 'y2'}
</code></pre>
<p>No rows contain both <code>x1</code> and <code>x2</code>, so the part of the model which encodes each feature needs to see the empty value and return a vector of zeros.</p>
<p>For the long text field I am not using my own model, but rather a pre-trained LM, this means I cannot add a special token to return all zeros - the tokenizer and embeddings are fixed.</p>
<p>How would I add a conditional into the model which, when the data is empty, would skip the layer and return zeros, allowing me to just sum the outputs of the two towers?</p>
|
<python><tensorflow><keras>
|
2022-12-12 11:43:12
| 1
| 3,408
|
dendog
|
74,770,373
| 2,604,247
|
How to Make Column Names Dynamic and Deal with them As Strings in SQLAlchemy ORM?
|
<p>I am learning some SQL from the book Essential SQLAlchemy by Rick Copeland</p>
<p>I have not really used SQL much, and have relied on frameworks like Pandas and Dask for data-processing tasks. As I am going through the book, I realise all the column names of a table are often part of the table attributes, and hence it seems they need to be hard-coded instead of being dealt with as strings. For example, from the book:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
class LineItem(Base):
__tablename__ = 'line_items'
line_item_id = Column(Integer(), primary_key=True)
order_id = Column(Integer(), ForeignKey('orders.order_id'))
cookie_id = Column(Integer(), ForeignKey('cookies.cookie_id'))
quantity = Column(Integer())
extended_cost = Column(Numeric(12, 2))
order = relationship("Order", backref=backref('line_items',
order_by=line_item_id))
cookie = relationship("Cookie", uselist=False)
</code></pre>
<p>When I work with pandas dataframe, I usually deal with it like</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
col_name:str='cookie_id'
df[col_name] # To access a column
</code></pre>
<p>Is there any way to make the column names in SQL-alchemy dynamic, i.e. be represented (and added to table) purely as strings, and tables be created dynamically with different column names (the strings coming from some other function or even user input etc.), that I can later access with strings as well?</p>
<p>Or is my expectation wrong, in the sense that SQL is somehow not supposed to be used like that?</p>
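<p>For context, a minimal SQLAlchemy Core sketch of the kind of thing I mean (the column names and types here are placeholders):</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Table, Column, Integer, MetaData

metadata = MetaData()
col_names = ['cookie_id', 'quantity']          # strings coming from elsewhere

line_items = Table(
    'line_items', metadata,
    Column('line_item_id', Integer, primary_key=True),
    *[Column(name, Integer) for name in col_names],
)

# the columns stay reachable by string afterwards
quantity_col = line_items.c['quantity']
</code></pre>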
|
<python><sql><database><sqlalchemy>
|
2022-12-12 11:22:56
| 1
| 1,720
|
Della
|
74,770,219
| 17,639,970
|
How to import a subset of a zip file into colab?
|
<p>I have a very big zip file in my Google Drive which contains several subfolders. Now, I'd like to extract only a few of those subfolders (not all of them) into Colab. Is there any way to do this?</p>
<p>For instance, suppose the zip file name is "MyBigFile.zip", which contains "folder1", "folder2", "folder3", "folder4", and "folder5". I only want to import and extract "folder1" and "folder4" into my Google Colab (and ideally import only 200 images from them). How is this possible? Any suggestions?</p>
<p>*if this is related: each of folders 1-5 contains around 50000 .png files</p>
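<p>For reference, a minimal sketch of the kind of thing I have in mind, assuming Drive is already mounted and the folder names are as above (the paths are placeholders):</p>
<pre><code>import zipfile
from itertools import islice

zip_path = '/content/drive/MyDrive/MyBigFile.zip'    # placeholder path, Drive mounted at /content/drive
wanted = ('folder1/', 'folder4/')

with zipfile.ZipFile(zip_path) as zf:
    names = [n for n in zf.namelist() if n.startswith(wanted) and n.endswith('.png')]
    for name in islice(names, 200):                   # only the first 200 images
        zf.extract(name, '/content/extracted')
</code></pre>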
|
<python><google-drive-api><google-colaboratory>
|
2022-12-12 11:08:57
| 2
| 301
|
Rainbow
|
74,770,208
| 12,242,085
|
How to aggregate 3 columns in DataFrame to have count and distribution of values in separated columns in Python Pandas?
|
<p>I have Pandas DataFrame like below:</p>
<p>data types:</p>
<ul>
<li><p>ID - int</p>
</li>
<li><p>TIME - int</p>
</li>
<li><p>TG - int</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>TIME</th>
<th>TG</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>20210101</td>
<td>0</td>
</tr>
<tr>
<td>111</td>
<td>20210201</td>
<td>0</td>
</tr>
<tr>
<td>111</td>
<td>20210301</td>
<td>1</td>
</tr>
<tr>
<td>222</td>
<td>20210101</td>
<td>0</td>
</tr>
<tr>
<td>222</td>
<td>20210201</td>
<td>1</td>
</tr>
<tr>
<td>333</td>
<td>20210201</td>
<td>1</td>
</tr>
</tbody>
</table>
</div></li>
</ul>
<p>And I need to aggregate above DataFrame so as to know:</p>
<ol>
<li>how many IDs are per each value in TIME</li>
<li>how many "1" from TG are per each value in TIME</li>
<li>how many "0" from TG are per each value in TIME</li>
</ol>
<p>So I need to something like below:</p>
<pre><code>TIME | num_ID | num_1 | num_0
---------|--------|-------|--------
20210101 | 2 | 0 | 2
20210201 | 3 | 2 | 1
20210301 | 1 | 1 | 0
</code></pre>
<p>How can I do that in Python Pandas?</p>
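<p>For reference, one possible approach using named aggregation (a sketch, assuming <code>TG</code> only takes the values 0 and 1):</p>
<pre><code>out = (
    df.groupby('TIME')
      .agg(num_ID=('ID', 'size'),
           num_1=('TG', 'sum'),
           num_0=('TG', lambda s: (s == 0).sum()))
      .reset_index()
)
</code></pre>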
|
<python><pandas><dataframe><group-by><aggregation>
|
2022-12-12 11:08:19
| 2
| 2,350
|
dingaro
|
74,770,033
| 3,678,257
|
Increase utilization of GPU for Sentence Transformer inference
|
<p>We're using a Sentence Transformer model to calculate 1024-dim vectors for the purpose of similarity search. The model is served by a FastAPI web server exposing an API for other services to request a vector for a given text.</p>
<p>So we are planning to calculate vectors for a large document set (about 10_000_000 documents) and are running some tests.</p>
<ul>
<li>Using a CPU, we're getting an encoding speed of about 0.1 seconds per sentence.</li>
<li>Using a GTX 1650 card we're getting about 0.01 seconds per sentence with an average GPU utilization of about 50-60%.</li>
</ul>
<p>So we decided to see if we could get further improvements by renting out a VPS with Tesla T4 card. The results are somewhat disappointing:</p>
<ul>
<li>We're still getting the same 0.01 seconds per sentence, but with an average GPU utilization of only 10-15%. So it looks like the card is heavily under-utilized.</li>
</ul>
<p>We tried increasing the number of workers, but it did not have any effect.</p>
<p>What can I do to improve utilization of the GPU and get faster performance?</p>
<p>The embeddings are calculated using this line of code:</p>
<pre class="lang-py prettyprint-override"><code>model.encode(text, device='cuda', normalize_embeddings=True).tolist()
</code></pre>
<p>I'm using this command to monitor GPU performance</p>
<pre><code>watch -d -n 0.5 nvidia-smi
</code></pre>
<p>and watching this column <code>GPU-Util</code></p>
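<p>One thing worth noting, assuming the requests can be collected into batches before they reach the model: <code>encode</code> accepts a list of sentences and a <code>batch_size</code>, so larger batches per call give the GPU more work at once.</p>
<pre class="lang-py prettyprint-override"><code># encode a batch of texts per call instead of one sentence at a time
embeddings = model.encode(
    texts,                      # a list of sentences collected into one batch
    batch_size=256,             # tune so the GPU memory is filled
    device='cuda',
    normalize_embeddings=True,
    convert_to_numpy=True,
).tolist()
</code></pre>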
|
<python><nlp><gpu><bert-language-model>
|
2022-12-12 10:54:16
| 0
| 664
|
ruslaniv
|
74,769,960
| 2,051,392
|
How does the Pandas Histogram Data Get to the Graph without Passing it In?
|
<p>Pretty straight forward question here.</p>
<p>I'm loading data in from a csv. The csv column for age is then converted into a histogram. Finally I'm showing a graph and the data is populated to it.</p>
<p>For the life of me though, I don't understand how the matplotlib <code>plt</code> is getting the data from the pandas command <code>dftrain.age.hist()</code> without me explicitly passing it in.</p>
<p>Is <code>hist</code> an extension method? That's the only thing that makes sense to me currently.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
#load csv files
##training data
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
#generate a histogram of ages
dftrain.age.hist()
#show the graph
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/MbNKp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MbNKp.png" alt="enter image description here" /></a></p>
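<p>A small check that illustrates what seems to be going on (assuming <code>hist</code> draws onto pyplot's current Axes, which is the same global state <code>plt.show()</code> reads):</p>
<pre><code>ax = dftrain.age.hist()        # pandas returns the matplotlib Axes it drew on
print(ax.figure is plt.gcf())  # True: plt.show() later displays that same current figure
</code></pre>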
|
<python><pandas><csv><matplotlib>
|
2022-12-12 10:48:04
| 2
| 9,875
|
DotNetRussell
|
74,769,915
| 5,101,926
|
Error Initalizing Stable diffusion with python 3.10.6. 'str' object has no attribute 'isascii'
|
<p>I'm trying to set up Stable Diffusion 1.5 from Git: <a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui" rel="nofollow noreferrer">https://github.com/AUTOMATIC1111/stable-diffusion-webui</a>.
I've followed a tutorial: <a href="https://www.youtube.com/watch?v=ycQJDJ-qNI8&t=0s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=ycQJDJ-qNI8&t=0s</a>.</p>
<p>To avoid problems with multiple Python versions I removed the older Python versions and installed only Python 3.10.6, then only 3.10.9, but I receive the same error with both versions.</p>
<p>When I call web-user.bat to initialize, it calls webui.bat and I receive this error:</p>
<pre><code>Python 3.10.9
venv "D:\Stable Diffusion\stable-diffusion-webui\venv\Scripts\Python.exe"
Fatal Python error: Py_Initialize: unable to load the file system codec
Traceback (most recent call last):
File "C:\Users\XX\AppData\Local\Programs\Python\Python310\lib\encodings\__init__.py", line 85, in search_function
File "C:\Users\XX\AppData\Local\Programs\Python\Python310\lib\encodings\__init__.py", line 64, in normalize_encoding
AttributeError: 'str' object has no attribute 'isascii'
Premere un tasto per continuare . . .
</code></pre>
<p>I've seen that this error is due to using an old Python version, but I'm using 3.10.</p>
<p>Thanks</p>
|
<python><encoding><python-3.10><stable-diffusion>
|
2022-12-12 10:45:10
| 1
| 996
|
Ale
|
74,769,873
| 12,559,770
|
Add overlapping coordinates within groups in pandas
|
<p>I have a dataframe such as :</p>
<pre><code>Gps1 Gps2 start end
G1 GA 106 205
G1 GA 102 203
G1 GA 106 203
G1 GA 106 203
G2 GB 9 51
G2 GB 48 135
G2 GB 131 207
G2 GB 207 279
G3 GC 330 419
G3 GC 266 315
G3 GC 257 315
G3 GC 266 407
G4 GC 10 30
G4 GC 60 90
</code></pre>
<p>and I would like, for each <code>['Gps1','Gps2']</code> group, to calculate for each row the <strong>number of overlapping</strong> coordinates between that row's <code>end</code> and the <code>start</code> of the next row.</p>
<p>So here is a detailed example :</p>
<p>for the first row of <code>G1-GA</code>:</p>
<pre><code>205-102 = 103
</code></pre>
<p>so I put 103 in the first row:</p>
<pre><code>Gps1 Gps2 start end Nb_overlapping
G1 GA 106 205 103
G1 GA 102 203
G1 GA 106 203
G1 GA 106 203
</code></pre>
<p>then the second row:</p>
<p><code>203-106= 97</code>, so I fill it and so on for the others :</p>
<pre><code>Gps1 Gps2 start end Nb_overlapping
G1 GA 106 205 103
G1 GA 102 203 97
G1 GA 106 203 113
G1 GA 90 210
</code></pre>
<p>The last row have to have the same value as the before-last value :</p>
<pre><code>Gps1 Gps2 start end Nb_overlapping
G1 GA 106 205 103
G1 GA 102 203 97
G1 GA 106 203 113
G1 GA 106 203 113
</code></pre>
<p>Then for the group <code>G2-GB</code> I do the same :</p>
<pre><code>Gps1 Gps2 start end Nb_overlapping
G2 GB 9 51 3
G2 GB 48 135 4
G2 GB 131 207 0
G2 GB 207 279 0
</code></pre>
<p>At the end I should get :</p>
<pre><code>Gps1 Gps2 start end
G1 GA 106 205 103
G1 GA 102 203 97
G1 GA 106 203 113
G1 GA 106 203 113
G2 GB 9 51 3
G2 GB 48 135 4
G2 GB 131 207 0
G2 GB 207 279 0
G3 GC 330 419 153
G3 GC 266 315 58
G3 GC 257 315 49
G3 GC 266 407 49
G4 GC 10 30 -30
G4 GC 60 90 -30
</code></pre>
<p>Does someone have an idea please ?</p>
<p>Here is the dict format of the dataframe if it can helps :</p>
<pre><code>{'Gps1': {0: 'G1', 1: 'G1', 2: 'G1', 3: 'G1', 4: 'G2', 5: 'G2', 6: 'G2', 7: 'G2', 8: 'G3', 9: 'G3', 10: 'G3', 11: 'G3', 12: 'G4', 13: 'G4'}, 'Gps2': {0: 'GA', 1: 'GA', 2: 'GA', 3: 'GA', 4: 'GB', 5: 'GB', 6: 'GB', 7: 'GB', 8: 'GC', 9: 'GC', 10: 'GC', 11: 'GC', 12: 'GC', 13: 'GC'}, 'start': {0: 106, 1: 102, 2: 106, 3: 106, 4: 9, 5: 48, 6: 131, 7: 207, 8: 330, 9: 266, 10: 257, 11: 266, 12: 10, 13: 60}, 'end': {0: 205, 1: 203, 2: 203, 3: 203, 4: 51, 5: 135, 6: 207, 7: 279, 8: 419, 9: 315, 10: 315, 11: 407, 12: 30, 13: 90}}
</code></pre>
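<p>For reference, a sketch of the rule as I understand it (each row's <code>end</code> minus the next row's <code>start</code> within the group, with the last row repeating the previous value):</p>
<pre><code>g = df.groupby(['Gps1', 'Gps2'])
nb = df['end'] - g['start'].shift(-1)            # a row's end minus the next row's start
df['Nb_overlapping'] = nb.groupby([df['Gps1'], df['Gps2']]).ffill()  # last row repeats the previous value
</code></pre>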
|
<python><python-3.x><pandas>
|
2022-12-12 10:41:19
| 1
| 3,442
|
chippycentra
|
74,769,732
| 6,119,375
|
AttributeError: 'list' object has no attribute 'strftime' in Python
|
<p>I have the following data:</p>
<pre><code>costs_for_roi.index.values
array(['2017-03-05T00:00:00.000000000', '2017-03-12T00:00:00.000000000',
       '2017-03-19T00:00:00.000000000', '2017-03-26T00:00:00.000000000',
       '2017-04-02T00:00:00.000000000', '2017-04-09T00:00:00.000000000',
       '2017-04-16T00:00:00.000000000', '2017-04-23T00:00:00.000000000'],
      dtype='datetime64[ns]')
</code></pre>
<p>and need to run this piece of code:</p>
<pre><code>costs_for_roi
datelist = list(costs_for_roi.index.values).strftime("%Y-%m-%d%H:%M:%S")
datelist
</code></pre>
<p>but I keep getting the aforementioned error.</p>
<p>Is this due to wrong formatting, or some module that is missing?</p>
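<p>For reference, a sketch of one way around it (assuming the goal is a list of formatted strings): <code>strftime</code> exists on the <code>DatetimeIndex</code> itself, not on a Python list.</p>
<pre><code>datelist = costs_for_roi.index.strftime("%Y-%m-%d%H:%M:%S").tolist()
</code></pre>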
|
<python><datetime><strftime>
|
2022-12-12 10:30:58
| 1
| 1,890
|
Nneka
|
74,769,325
| 8,760,298
|
Amazon Sagemaker - Create USA map by States using Jupyter notebook
|
<p>I have been trying to create a USA map in an Amazon SageMaker Jupyter notebook instance. I have used this code:</p>
<pre><code>import plotly.express as px
import numpy as np
import pandas as pd
dff = pd.read_excel("fold/map_data.xlsx",engine='openpyxl')
# Strip out white spaces
dff.columns = dff.columns.to_series().apply(lambda x: x.strip())
# Keep only Required cloumns,
dff=dff[['week_end','state','state_code','item','sales']]
# Filter the data and rename
dff=dff[(dff['week_end']=='12-12-2022') & (dff['item']=='Resd')]
dff.rename({'sales':'Sales Price ($)'},axis=1, inplace=True)
fig = px.choropleth(dff,
                    locations='state_code',
                    locationmode="USA-states",
                    scope="usa",
                    color='Sales Price ($)',
                    color_continuous_scale="Viridis_r",
                    )
fig.show()
</code></pre>
<p>Data used</p>
<p><a href="https://i.sstatic.net/1RFHP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1RFHP.png" alt="enter image description here" /></a></p>
<p>No map is showing up, not even an outline, and there is no error. When I searched in the browser console I am getting the warning "<strong>Default Kernel not found</strong>".</p>
<p>What am I missing here? Please help me understand. Are there any additional steps to be done since I am running inside SageMaker?</p>
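<p>One thing I am considering, assuming the figure itself is built correctly and only the front end fails to display it: picking an explicit plotly renderer.</p>
<pre><code>import plotly.io as pio
pio.renderers.default = "iframe"   # or "notebook", depending on the notebook front end
fig.show()
</code></pre>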
|
<python><jupyter-notebook><amazon-sagemaker>
|
2022-12-12 09:55:02
| 0
| 333
|
Tpk43
|
74,769,262
| 6,535,324
|
python validate input with decorator for functions with varying inputs
|
<p>I have many functions that all expect <code>config</code> as a parameter but vary with regards to their other parameters. I would like to validate <code>config</code>. I wrote another function for this, but it seems a decorator might be a cleaner solution:</p>
<pre class="lang-py prettyprint-override"><code>def validate_config(config):
    if config not in [1, 2, 3]:
        raise ValueError("config is expected to be 1, 2 or 3")

def f1(config, b):
    validate_config(config)
    pass

def f2(a, config):
    validate_config(config)
    pass
</code></pre>
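<p>A minimal sketch of the decorator idea, assuming the parameter is always literally named <code>config</code> and is always passed explicitly:</p>
<pre class="lang-py prettyprint-override"><code>import functools
import inspect

def with_validated_config(func):
    """Look up the `config` argument by name and validate it before calling func."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = inspect.signature(func).bind(*args, **kwargs)
        validate_config(bound.arguments["config"])
        return func(*args, **kwargs)
    return wrapper

@with_validated_config
def f1(config, b):
    pass
</code></pre>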
|
<python>
|
2022-12-12 09:48:18
| 1
| 2,544
|
safex
|
74,769,052
| 15,852,600
|
How to get a new df constituted by partialy transposed fragments of another dataframe
|
<p>I am struggling to get my dataframe transposed; not simply transposed, but I want to limit the number of columns to the number of rows in the index <code>slices</code>. To explain my problem properly, here is my dataframe:</p>
<pre><code>df = pd.DataFrame({
    'n'    : [0, 1, 2, 0, 1, 2, 0, 1, 2],
    'col1' : ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C'],
    'col2' : [9.6, 10.4, 11.2, 3.3, 6, 4, 1.94, 15.44, 6.17]
})
</code></pre>
<p>It has the following display :</p>
<pre><code> n col1 col2
0 0 A 9.60
1 1 A 10.40
2 2 A 11.20
3 0 B 3.30
4 1 B 6.00
5 2 B 4.00
6 0 C 1.94
7 1 C 15.44
8 2 C 6.17
</code></pre>
<p>From that dataframe I want to get the following <code>new_df</code>:</p>
<pre><code> 0 1 2
col1 A A A
col2 9.6 10.4 11.2
col1 B B B
col2 3.3 6.0 4.0
col1 C C C
col2 1.94 15.44 6.17
</code></pre>
<p>What I tried so far :</p>
<pre><code>new_df = df.values.reshape(3, 9)
new_w = [x.reshape(3,3).T for x in new_df]
df_1 = pd.DataFrame(new_w[0])
df_1.index = ['n', 'col1', 'col2']
df_2 = pd.DataFrame(new_w[1])
df_2.index = ['n', 'col1', 'col2']
df_3 = pd.DataFrame(new_w[2])
df_3.index = ['n', 'col1', 'col2']
new_df = df_1.append(df_2)
new_df = new_df.append(df_3)
new_df[new_df.index!='n']
</code></pre>
<p>The code I tried works, but it looks long; I would like a shorter solution.</p>
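<p>To give an idea of the kind of "shorter" I am after, something closer to a single groupby/concat expression, e.g. (sketch, not verified beyond this sample):</p>
<pre><code># transpose each col1-group separately, then stack the pieces vertically
new_df = pd.concat(
    [g.set_index('n')[['col1', 'col2']].T for _, g in df.groupby('col1', sort=False)]
)
</code></pre>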
<p>Any help from your side will be highly appreciated, thanks.</p>
|
<python><pandas><dataframe>
|
2022-12-12 09:29:50
| 3
| 921
|
Khaled DELLAL
|
74,768,892
| 15,537,675
|
Read truncated json file to pandas dataframe
|
<p>I have a json file that is truncated. I have been looking for a way to read it into a pandas dataframe, but have not been successful. Several posts refer to <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html#pandas.read_json" rel="nofollow noreferrer">read_json</a>.</p>
<p>I've tried below approach;</p>
<pre><code>import pandas as pd
df = pd.read_json("C:/Users/user1/Documents/history_truncated.json")
df.head()
### Results in ###
ValueError: Unexpected character found when decoding array value (2)
</code></pre>
<p>A similar issue is found <a href="https://stackoverflow.com/questions/27240982/valueerror-when-using-pandas-read-json">here</a> where the solution was to use an absolute path. In my case that did not work. Any suggestions on how to solve this issue?</p>
<h1>Edit adding last few lines to question</h1>
<pre><code>"departurePlatformName":null,"departureBoardingActivity":0,"departureBoardingActivitySpecified":true,"departureStopAssignment":null,
"departureOperatorRefs":null,"aimedHeadwayInterval":null,"expectedHeadwayInterval":null,"distanceFromStop":null,"numberOfStopsAway":null,"extensions":null},{"stopPointRef":{"value":"SE:276:Quay:9022012013003041"},"visitNumber":"1","order":"7","stopPointName":null,"item":false,"itemElementName":0,"predictionInaccurateSpecified":false,"occupancySpecified":false,"timingPointSpecified":false,"boardingStretch":false,"requestStop":false,"originDisplay":null,"destinationDisplay":null,"callNote":null,"facilityConditionElement":null,"facilityChangeElement":null,"situationRef":null,"aimedArrivalTime":"2022-08-30T09:33:00+02:00","aimedArrivalTimeSpecified":true,"expectedArrivalTime":"2022-08-30T09:33:00+02:00","expectedArrivalTimeSpecified":true,"expectedArrivalPredictionQuality":null,"arrivalStatus":0,"arrivalStatusSpecified":true,"arrivalProximityText":null,"arrivalPlatformName":null,"arrivalBoardingActivity":0,"arrivalBoardingActivitySpecified":true,"arrivalStopAssignment":null,"arrivalOperatorRefs":null,"aimedDepartureTimeSpecified":false,"expectedDepartureTimeSpecified":false,"provisionalExpectedDepartureTimeSpecified":false,"earliestExpectedDepartureTimeSpecified":false,"expectedDeparturePredictionQuality":null,"aimedLatestPassengerAccessTimeSpecified":false,"expectedLatestPassengerAccessTimeSpecified":false,"departureStatus":0,"departureStatusSpecified":true,"departureProximityText":null,"departurePlatformName":null,"departureBoardingActivity":0,
"departureBoardingActivitySpecified":true,
"departureStopAssignment":null,
"departureOperatorRefs":null,"aimedHeadwayInterval":null,
"expectedHeadwayInterval":null,"distanceFromStop":null,
"numberOfStopsAway":null,"extensions":null}],"isCompleteStopSequence":true,
"extensions":null}
}]
}]]}
}
</code></pre>
|
<python><json><pandas>
|
2022-12-12 09:15:20
| 0
| 472
|
OLGJ
|
74,768,805
| 1,780,761
|
python opencv - filter contours by position
|
<p>I use this code to find some blobs, and pick the biggest one.</p>
<pre><code>contours, hierarchy = cv2.findContours(th1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
c = max(contours, key=cv2.contourArea)
</code></pre>
<p>Now I need to change this code so that it returns the contour that is in the middle of the frame (i.e. the one whose bounding box covers the center pixel of the image).</p>
<p>I am not able to figure out how to do this except by getting the bounding box of every contour with</p>
<pre><code>xbox, ybox, wbox, hbox = cv2.boundingRect(cont)
</code></pre>
<p>and then checking that x and y are smaller than the centre, and that x+w and y+h are bigger than the centre. That does not look like an efficient way though, since there can be up to 500 small contours.</p>
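<p>For clarity, this is the brute-force version I would like to avoid (sketch):</p>
<pre><code>h, w = th1.shape[:2]
cx, cy = w // 2, h // 2

center_contour = None
for cont in contours:
    xbox, ybox, wbox, hbox = cv2.boundingRect(cont)
    # keep the contour whose bounding box contains the image centre
    if xbox <= cx <= xbox + wbox and ybox <= cy <= ybox + hbox:
        center_contour = cont
        break
</code></pre>
<p>I also wondered whether something like <code>cv2.pointPolygonTest</code> would be more appropriate here, but I am not sure it would be any faster.</p>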
|
<python><opencv><image-processing><computer-vision><contour>
|
2022-12-12 09:06:38
| 2
| 4,211
|
sharkyenergy
|
74,768,715
| 17,192,324
|
How to register multiple message handlers in aiogram
|
<p>I'm trying to make multiple different message handlers. <strong>Is this an acceptable way to register multiple ones?</strong></p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from aiogram import Bot, Dispatcher, types
from settings import BOT_TOKEN
async def start_handler(event: types.Message):
await event.answer(
f"Hello, {event.from_user.get_mention(as_html=True)} 👋!",
parse_mode=types.ParseMode.HTML,
)
async def echo_answer(event: types.Message):
await event.answer(event.text, parse_mode=types.ParseMode.HTML
)
async def main():
bot = Bot(token=BOT_TOKEN)
try:
disp = Dispatcher(bot=bot)
disp.register_message_handler(start_handler, commands={"start", "restart"})
disp.register_message_handler(echo_answer, lambda msg: msg.text)
await disp.start_polling()
finally:
await bot.close()
asyncio.run(main())
</code></pre>
<p>my settings.py file contains</p>
<pre class="lang-py prettyprint-override"><code>import os
BOT_TOKEN = os.getenv('BOT_TOKEN')
if not BOT_TOKEN:
print('You have forgot to set BOT_TOKEN')
BOT_TOKEN = 'missing'
quit()
</code></pre>
<p>This code runs; it sends echo responses to any message and replies with "Hello, @username 👋!" for the start and restart commands.
To reproduce, one must have a bot and set BOT_TOKEN in the environment variables before running the code.</p>
<p>I tried the code described above, looked at the <a href="https://docs.aiogram.dev/en/latest/dispatcher/index.html" rel="nofollow noreferrer">https://docs.aiogram.dev/en/latest/dispatcher/index.html</a> documentation, and modified the example on the source code page <a href="https://github.com/aiogram/aiogram#poll-botapi-for-updates-and-process-updates" rel="nofollow noreferrer">https://github.com/aiogram/aiogram#poll-botapi-for-updates-and-process-updates</a></p>
|
<python><python-3.x><telegram><aiogram><telebot>
|
2022-12-12 08:57:52
| 2
| 331
|
Kirill Setdekov
|
74,768,599
| 13,618,407
|
Error when calling AUTOTUNE from tensorflow version 2.9.2
|
<p>I have been using AUTOTUNE in my TensorFlow project for a long time. The project is in Google Colab and ran fine in the past. Now when I try to run the project, it raises an error. I think the TensorFlow version has changed, causing the error.</p>
<p>In the AUTOTUNE line it shows me:</p>
<blockquote>
<p>AttributeError: module 'tensorflow.data.experimental' has no attribute 'AUTOTUNE'</p>
</blockquote>
<p>I already try to implement the solution in <a href="https://stackoverflow.com/questions/66962099/getting-attribute-error-when-using-autotune-in-tensorflow">this thread</a> but the issue is still there.</p>
<p>Here is my code:</p>
<pre><code>try:
AUTOTUNE = tf.data.AUTOTUNE
except:
AUTOTUNE = tf.data.experimental.AUTOTUNE
</code></pre>
<p>When I try to print the <strong>TensorFlow version</strong> and <strong>tf.data.AUTOTUNE</strong></p>
<pre><code>import tensorflow as tf
print(tf.__version__, ',')
print(tf.data.AUTOTUNE)
</code></pre>
<p>it show me the below result:</p>
<pre><code>2.9.2 ,
-1
</code></pre>
<p>Need your help, please.</p>
|
<python><tensorflow><image-processing><deep-learning><computer-vision>
|
2022-12-12 08:47:19
| 0
| 561
|
stic-lab
|
74,768,557
| 19,303,365
|
Extracting Experience Section from resume
|
<p>Below is the code to extract the Experience section from a resume, but it's not giving the desired output:</p>
<pre><code>import re
def extract_experience(resume_text):
experience_pattern = r"(?:EXPERIENCE|Employment experience)\n(.*?)\n(?:Skills|Education)"
experience_match = re.search(experience_pattern, resume_text, re.DOTALL)
# Extract the Experience section from the match
if experience_match:
experience_section = experience_match.group(1)
return experience_section
else:
return "Experience section not found"
</code></pre>
<p>Below is the text that I converted from PDF using <code>PyPDF2</code>:</p>
<pre><code>resume_text =
xyz
66 Chetwynd Road
UK
Phone - 070040040040
Email - a18@hotmail.co.uk
PERSONAL PROFILE
I am an energetic, hardworking individual who has developed a responsible approach to any
task I undertake or problem I’m presented with. With previous experience in both customer
service and administration, I can skillfully work with others and help with situations I am
faced with in a calm collective manner.
EXPERIENCE
Play Centre
HOST (September 2021- Present)
▪ Working well under pressure, making decisions quickly and strategically
▪ Problem -solving and using initiative
xyz center
Sales Assistant (November 2020 -January 2020)
▪ Interacted with customers ensuring service was welcoming and helpful.
▪ Responded to any queries
Health & Wellness Club - Work Experience
Staff - (January 2020 - February 2020)
▪ Welcomed members into the gym
▪ Ensured all equipment was safe and clean before it was used by members
EDUCATION
September 2021 -2022
University College
</code></pre>
<p>It always gives the output as <code>Experience section not found</code></p>
<p><strong>Expected Output i am looking for :</strong></p>
<pre><code>Play Centre
HOST (September 2021- Present)
xyz center
Sales Assistant (November 2020 -January 2020)
Health & Wellness Club - Work Experience
Staff - (January 2020 - February 2020)
</code></pre>
<p>What am I missing?
Also, is there another, more effective way to extract this? Please guide me.</p>
|
<python><regex>
|
2022-12-12 08:42:55
| 1
| 365
|
Roshankumar
|
74,768,505
| 1,930,543
|
Get the frequency of all combinations in Pandas
|
<p>I am trying to get the purchase frequency of all combinations of products.</p>
<p>Suppose my transactions are the following</p>
<pre><code>userid product
u1 A
u1 B
u1 C
u2 A
u2 C
</code></pre>
<p>So the solution should be</p>
<pre><code>combination count_of_distinct_users
A 2
B 1
C 2
A, B 1
A, C 2
B, C 1
A, B, C 1
</code></pre>
<p>i.e. 2 users have purchased product A, one user has purchased product B, ..., 2 users have purchased products A and C, ...</p>
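<p>To make the counting rule explicit, here is a naive (non-vectorised) sketch of what I mean: each user contributes at most once to every subset of the products they bought. I would prefer something more pandas-native:</p>
<pre><code>from itertools import combinations
import pandas as pd

df = pd.DataFrame({'userid': ['u1', 'u1', 'u1', 'u2', 'u2'],
                   'product': ['A', 'B', 'C', 'A', 'C']})

# one entry per user: the set of products they bought
baskets = df.groupby('userid')['product'].apply(set)

counts = {}
for basket in baskets:
    for r in range(1, len(basket) + 1):
        for combo in combinations(sorted(basket), r):
            counts[combo] = counts.get(combo, 0) + 1

result = pd.DataFrame(
    [(', '.join(combo), n) for combo, n in counts.items()],
    columns=['combination', 'count_of_distinct_users'])
print(result)
</code></pre>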
|
<python><pandas>
|
2022-12-12 08:37:20
| 2
| 5,951
|
dimitris_ps
|
74,768,475
| 17,277,677
|
divide columns vertically by max value in a dataframe
|
<p>I have the following table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>summary</th>
<th>summary_len</th>
<th>apple</th>
<th>book</th>
<th>computer</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>....</td>
<td>210</td>
<td>2</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>...</td>
<td>120</td>
<td>3</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>...</td>
<td>50</td>
<td>2</td>
<td>2</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p><code>summary</code> is basically some sort of description, <code>summary_len</code> is the length of that description, and the rest (apple/book/computer) are the keywords; the values presented in the table are the occurrences of each keyword in each description.</p>
<p>I need to normalize this table by finding the max value PER COLUMN (vertically) and then dividing by this value, so the output will be as below (I put it in the format 2/3 just to emphasize the max value per column):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>summary</th>
<th>summary_len</th>
<th>apple</th>
<th>book</th>
<th>computer</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>....</td>
<td>210</td>
<td>2/3</td>
<td>1/2</td>
<td>0/1</td>
</tr>
<tr>
<td>2</td>
<td>...</td>
<td>120</td>
<td>3/3</td>
<td>0/2</td>
<td>1/1</td>
</tr>
<tr>
<td>3</td>
<td>...</td>
<td>50</td>
<td>2/3</td>
<td>2/2</td>
<td>1/1</td>
</tr>
</tbody>
</table>
</div>
<p>My problem here is that I don't need to find the max of every column, only of the keyword columns whose occurrences I am checking. I stored the keywords in a list and got the max value per column:</p>
<pre><code>max_per_col = df_freq[keywords].max()
max_per_col
</code></pre>
<p>this is how it looks (with the original data):
<a href="https://i.sstatic.net/Av0O0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Av0O0.png" alt="enter image description here" /></a></p>
<p>Could you help me apply it "back" to the original dataframe and divide each of those columns vertically by its max value?</p>
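<p>What I am imagining is something along these lines, but I am unsure whether the division aligns correctly on the keyword columns (sketch):</p>
<pre><code># divide only the keyword columns by their own column-wise max
df_norm = df_freq.copy()
df_norm[keywords] = df_freq[keywords].div(max_per_col)
</code></pre>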
|
<python><pandas><dataframe>
|
2022-12-12 08:34:41
| 1
| 313
|
Kas
|
74,768,423
| 9,778,828
|
Pandas sparse dataframe multiplication
|
<p>I have two pandas sparse dataframes, big_sdf and bigger_sdf.</p>
<p>When I try to multiply them:</p>
<pre><code>result = big_sdf @ bigger_sdf
</code></pre>
<p>I get an error:</p>
<pre><code>"numpy.core._exceptions.MemoryError: Unable to allocate 3.6 TiB for an array with shape (160815, 3078149) and data type int64"
</code></pre>
<p>So I tried to convert these sparse dataframes to SciPy's csr matrices and multiply it, but the conversion doesn't succeed:</p>
<pre><code>from scipy.sparse import csr_matrix
csr_big = csr_matrix(big_sdf)
csr_bigger = csr_matrix(bigger_sdf)
</code></pre>
<p>When I run the last row I get an error message:</p>
<pre><code>"ValueError: unrecognized csr_matrix constructor usage"
</code></pre>
<p>It only happens for the bigger matrix, the smaller one is converted with success.</p>
<p>Any ideas? Maybe there's a Pandas native method to multiply sparse dataframes which I missed?</p>
<p>Thanks in advance!</p>
|
<python><pandas><scipy><sparse-matrix><sparse-dataframe>
|
2022-12-12 08:29:17
| 0
| 505
|
AlonBA
|
74,768,399
| 4,652,534
|
FastAPI app still process the request though the connection has been disconnected from the client end
|
<p>I have a FastAPI app that runs on Gunicorn Webserver and the gateway interface is ASGI.</p>
<p>I simulate a response that takes a long time in my FastAPI app, and I expect that when I disconnect from a request that is still being processed (e.g. close the tab), ASGI should abort or terminate the job in the Gunicorn worker. However, I found that FastAPI completes the processing anyway. See the logs below.</p>
<pre><code>api | TRACE: HTTP connection made
api | TRACE: ASGI [4] Started scope={'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.0', 'server': None, 'client': None, 'scheme': 'http', 'root_path': '', 'headers': '<...>', 'method': 'GET', 'path': '/long-time', 'raw_path': b'/long-time', 'query_string': b''}
api-proxy | 172.23.0.1 - - [12/Dec/2022:07:31:46 +0000] "GET /long-time HTTP/1.1" 499 "e9224322f5f65f94a54e4a6ca812ae72" generate 0 bytes in 3.051 "http://localhost:8000/docs" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36" "-
api | TRACE: HTTP connection lost
api | [2022-12-12 15:31:53,295] [INFO] [/router.py:810]: Ready to return
api | TRACE: ASGI [4] Receive {'type': 'http.disconnect'}
api | TRACE: ASGI [4] Send {'type': 'http.response.start', 'status': 200, 'headers': '<...>'}
api | TRACE: ASGI [4] Send {'type': 'http.response.body', 'body': '<4 bytes>'}
api | TRACE: ASGI [4] Completed
</code></pre>
<pre><code>async def my_func_1():
"""
my func 1
"""
await asyncio.sleep(10)
return "zzzzzzzz"
@api_router.get("/long-time")
async def root():
"""
my home route
"""
a = await asyncio.gather(my_func_1())
fastapi_logger.info("Ready to return")
return a
</code></pre>
<p>So I wonder if there is any configurable/programmable way to terminate the request processing, or to send the response immediately rather than waiting until the API endpoint function exits? Thanks.</p>
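<p>To be clear about what I would accept as a solution: something where the endpoint notices the disconnect and cancels its own work, roughly along these lines (untested sketch, based on Starlette's <code>request.is_disconnected()</code>):</p>
<pre><code>from fastapi import Request

@api_router.get("/long-time")
async def root(request: Request):
    # run the slow work as a task and poll for a client disconnect
    task = asyncio.create_task(my_func_1())
    while not task.done():
        if await request.is_disconnected():
            task.cancel()
            fastapi_logger.info("Client went away, cancelling")
            return
        await asyncio.sleep(0.5)
    return [task.result()]
</code></pre>
<p>But this feels clumsy, so I am hoping there is a configuration option or a built-in mechanism for it.</p>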
|
<python><fastapi><gunicorn><uvicorn><asgi>
|
2022-12-12 08:27:07
| 0
| 911
|
Dayo Choul
|
74,768,346
| 1,354,398
|
read_csv() not retaining 'NUL' (C '\0') at the end of the string
|
<p>I am reading TSV data into a dataframe using the read_csv function.
My TSV file has a column 'Name' (6 bytes wide; shorter strings are padded with C '\0'). When opened in Notepad++, the content looks like this:</p>
<p>Name</p>
<p><a href="https://i.sstatic.net/MDUft.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MDUft.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/EYdRA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EYdRA.png" alt="enter image description here" /></a></p>
<p>But read_csv is reading it as:</p>
<p>"(ABC"</p>
<p>&</p>
<p>"(PQRST"</p>
<p>ignoring the NUL character and the ")" after it.</p>
<p>I have tried the different options suggested on the forum, like encoding, converters, engine, etc., but nothing served the purpose.</p>
<p>Appreciate your time and help. Any lead would be helpful.</p>
|
<python><pandas><dataframe>
|
2022-12-12 08:21:51
| 1
| 403
|
Arti
|
74,768,014
| 20,732,098
|
Round and Remove 3 digits Python Dataframe
|
<p>I have the following dataframe:
<a href="https://i.sstatic.net/KDKoC.png" rel="nofollow noreferrer">Dataframe</a></p>
<pre><code>print (df)
time
0 00:00:04.4052727
1 00:00:06.5798
</code></pre>
<p>My goal is to round the fractional seconds to milliseconds (three digits) and remove the remaining digits.</p>
<p>All rows should then look like this:</p>
<pre><code>print (df)
time
0 00:00:04.405
1 00:00:06.580
</code></pre>
|
<python><pandas><dataframe>
|
2022-12-12 07:45:21
| 1
| 336
|
ranqnova
|
74,767,745
| 2,975,438
|
Python Regex replace but only if two or more characters precede regex expression
|
<p>I have a pattern: <code>"two_or_more_characters - zero_or_more_characters"</code> and I want to replace it with <code>"two_or_more_characters"</code>, where <code>"-"</code> is a dash.</p>
<p>I created regex for it:</p>
<pre><code>re.sub(r'-[\w(){}\[\],.?! ]+', '', t)
</code></pre>
<p>and it works as expected for some cases. For example, for <code>t = "red-fox"</code> we get <code>red</code>. But it does not work as needed for e.g. <code>t = "r-fox"</code>: the result is <code>r</code>, but I am looking for a way to keep <code>r-fox</code> instead.</p>
<p>If the text has more than one dash, then the text should be removed only after the last dash. For example, for <code>t = "r-fox-dog"</code> the result should be <code>r-fox</code>.</p>
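<p>To restate the rule I am trying to express: only strip after the last dash, and only when at least two word characters precede that dash. Something like the following is the kind of pattern I have in mind, though I am not sure it is actually correct for all cases (sketch):</p>
<pre><code>import re

# require two word characters before the dash, and anchor to the end of the string
pattern = r'(?<=\w{2})-[\w(){}\[\],.?! ]+$'

for t in ["red-fox", "r-fox", "r-fox-dog"]:
    print(t, '->', re.sub(pattern, '', t))
# hoping for: red, r-fox, r-fox
</code></pre>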
|
<python><regex>
|
2022-12-12 07:13:40
| 1
| 1,298
|
illuminato
|
74,767,700
| 12,667,229
|
not able to remove duplicate image with hashing
|
<p>My aim is to remove identical images like the following:</p>
<p>Image 1: <a href="https://i.sstatic.net/8dLPo.png" rel="nofollow noreferrer">https://i.sstatic.net/8dLPo.png</a></p>
<p>Image 2: <a href="https://i.sstatic.net/hF11m.png" rel="nofollow noreferrer">https://i.sstatic.net/hF11m.png</a></p>
<p>Currently, I am using average hashing with</p>
<ul>
<li>a hash size of 32 (a hash size smaller than this gives collisions)</li>
<li>a threshold of 10-20</li>
</ul>
<p>I tried pHash as well, but it also removes almost-similar images like the following, which I don't want:</p>
<p>Image 3: <a href="https://i.sstatic.net/CwZ09.png" rel="nofollow noreferrer">https://i.sstatic.net/CwZ09.png</a></p>
<p>Image 4: <a href="https://i.sstatic.net/HvAaJ.png" rel="nofollow noreferrer">https://i.sstatic.net/HvAaJ.png</a></p>
<p>So I am looking for some technique through which I can identify that</p>
<ul>
<li>Image 1 and Image 2 are identical</li>
<li>Image 3 and Image 4 are Distinct</li>
</ul>
<p>kindly help because I have been stuck on this problem for so long.</p>
<p>Note: the type/kind of images will be different every time, so I can't even invest the time to learn deep learning and give it a try.</p>
|
<python><image-processing><computer-vision><duplicates><imagehash>
|
2022-12-12 07:09:23
| 2
| 330
|
Sahil Lohiya
|
74,767,568
| 3,668,129
|
How to print index with decreased font size?
|
<p>I want to print an equation on screen and print the indexes with a decreased font size.</p>
<p>For example (<code>i</code> and <code>i-1</code> have smaller font):</p>
<p><a href="https://i.sstatic.net/nCZC8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nCZC8.png" alt="enter image description here" /></a></p>
<p>How can I do it ?</p>
|
<python><python-3.x>
|
2022-12-12 06:54:07
| 1
| 4,880
|
user3668129
|
74,767,555
| 10,229,754
|
Taking equal number of elements from two arrays, such that the taken values have as few duplicates as possible
|
<p>Consider we have 2 arrays of size <code>N</code>, with their values in the range <code>[0, N-1]</code>. For example:</p>
<pre><code>a = np.array([0, 1, 2, 0])
b = np.array([2, 0, 3, 3])
</code></pre>
<p>I need to produce a new array <code>c</code> which contains exactly <code>N/2</code> elements from <code>a</code> and <code>b</code> respectively, i.e. the values must be taken evenly/equally from both parent arrays.</p>
<p>(For odd length, this would be <code>(N-1)/2</code> and <code>(N+1)/2</code>. Can also ignore odd length case, not important).</p>
<p>Taking equal number of elements from two arrays is pretty trivial, but there is an additional constraint: <code>c</code> should have as many unique numbers as possible / as few duplicates as possible.</p>
<p>For example, a solution to <code>a</code> and <code>b</code> above is:</p>
<pre><code>c = np.array([b[0], a[1], b[2], a[3]])
>>> c
array([2, 1, 3, 0])
</code></pre>
<p>Note that the position/order is preserved. Each element of <code>a</code> and <code>b</code> that we took to form <code>c</code> is in same position. If element <code>i</code> in <code>c</code> is from <code>a</code>, <code>c[i] == a[i]</code>, same for <code>b</code>.</p>
<hr />
<p>A straightforward solution for this is simply a sort of path traversal, easy enough to implement recursively:</p>
<pre><code>def traverse(i, a, b, path, n_a, n_b, best, best_path):
if n_a == 0 and n_b == 0:
score = len(set(path))
return (score, path.copy()) if score > best else (best, best_path)
if n_a > 0:
path.append(a[i])
best, best_path = traverse(i + 1, a, b, path, n_a - 1, n_b, best, best_path)
path.pop()
if n_b > 0:
path.append(b[i])
best, best_path = traverse(i + 1, a, b, path, n_a, n_b - 1, best, best_path)
path.pop()
return best, best_path
</code></pre>
<p>Here <code>n_a</code> and <code>n_b</code> are how many values we will take from <code>a</code> and <code>b</code> respectively, it's <code>2</code> and <code>2</code> as we want to evenly take <code>4</code> items.</p>
<pre><code>>>> score, best_path = traverse(0, a, b, [], 2, 2, 0, None)
>>> score, best_path
(4, [2, 1, 3, 0])
</code></pre>
<hr />
<p>Is there a way to implement the above in a more vectorized/efficient manner, possibly through numpy?</p>
|
<python><arrays><numpy><vectorization>
|
2022-12-12 06:52:21
| 2
| 4,171
|
Mercury
|
74,767,539
| 18,360,265
|
How to resolve eth-utils and eth-tester installation issues?
|
<p>While installing web3.py I am getting the following error message.
I am new to blockchain development. Can someone help me resolve this error?</p>
<p>Error:</p>
<p><a href="https://i.sstatic.net/uBePm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uBePm.png" alt="enter image description here" /></a></p>
|
<python><ethereum><web3py>
|
2022-12-12 06:51:12
| 2
| 409
|
Ashutosh Yadav
|
74,767,354
| 607,407
|
How can I ensure type hints still work for my class that inherits UserList from collections?
|
<p>I have a class that adds something on top of standard list behavior. To avoid re-implementing everything, I used <code>UserList</code>:</p>
<pre><code>from collections import UserList
class MyList(UserList):
...stuff...
</code></pre>
<p>But now, when I do something like:</p>
<pre><code>my_list = MyList([1,2,3,4])
</code></pre>
<p>My IDE just marks this as a <code>MyList</code> type and hints for things like loops don't work:</p>
<pre><code>for an_int in my_list:
# here, IDE does not know an_int is an int
</code></pre>
<p>I can force override this, but at the cost of losing hinting for my custom methods by using the type comment:</p>
<pre><code>my_list = MyList([1,2,3,4]) # type: list[int]
</code></pre>
<p>Is there something I can hint in the class definition that will tell Python static analysis that the element type is inferred from the list passed to the constructor?</p>
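<p>To make the goal concrete, I would like usage along these lines to type-check, with the element type inferred from the constructor argument (hypothetical sketch; I believe <code>UserList</code> is subscriptable on newer Python versions, otherwise I would presumably also need <code>typing.Generic</code>):</p>
<pre><code>from collections import UserList
from typing import TypeVar

T = TypeVar("T")

class MyList(UserList[T]):
    ...  # stuff

my_list = MyList([1, 2, 3, 4])  # ideally inferred as MyList[int]

for an_int in my_list:
    print(an_int + 1)  # IDE should know an_int is an int here
</code></pre>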
|
<python><python-3.x><type-hinting>
|
2022-12-12 06:27:39
| 1
| 53,877
|
Tomáš Zato
|
74,767,298
| 10,937,025
|
How to merge two DataFrame containing same keys but different values in same columns in python
|
<p>I have one dataframe that contains all ids</p>
<pre><code>df1 = pd.DataFrame({'id': ['A01', 'A02', 'A03', 'A04', 'A05', 'A06','A07'],
'Name': ['', '', '', '', 'MKI', 'OPU','']})
</code></pre>
<p>A second DataFrame contains some of the ids and has names for them:</p>
<pre><code>df2 = pd.DataFrame({'id': ['A01', 'A05', 'A06', 'A03'],
'Name': ['ABC', 'TUV', 'MNO', 'JKL']})
</code></pre>
<p>I want to merge both of them: where the same id appears and one frame contains a name, the empty name should be replaced.</p>
<p><strong>Also, DF2's name has to take priority while merging.</strong></p>
<p>MERGE OUTPUT DF:-</p>
<pre><code>df3 = {'id': ['A01', 'A02', 'A03', 'A04', 'A05', 'A06','A07'],
'Name': ['ABC','', 'JKL','', 'TUV', 'MNO','']}
</code></pre>
<p>Note: merge two dataframes with the same columns and some shared ids but different names; if a name is empty, replace it with the other dataframe's value. If there are two values for the same id, use the one from DF2
(<strong>consider DF2 as MAIN</strong>), and <strong>keep all data of DF1</strong>.</p>
|
<python><pandas><dataframe><merge><concatenation>
|
2022-12-12 06:21:52
| 1
| 427
|
ZAVERI SIR
|
74,767,053
| 4,876,561
|
Converting word2vec output into dataframe for sklearn
|
<p>I am attempting to use <a href="https://radimrehurek.com/gensim/models/word2vec.html" rel="nofollow noreferrer">gensim's word2vec</a> to transform a column of a pandas dataframe into a vector that I can pass to a <a href="https://scikit-learn.org/stable/supervised_learning.html" rel="nofollow noreferrer"><code>sklearn</code> classifier</a> to make a prediction.</p>
<p>I understand that I need to average the vectors for each row. I have tried <a href="https://machinelearningmastery.com/develop-word-embeddings-python-gensim/" rel="nofollow noreferrer">following this guide</a> but I am stuck, as I am getting models back but I don't think I can access the underlying embeddings to find the averages.</p>
<p>Please see <a href="https://stackoverflow.com/help/minimal-reproducible-example">a minimal, reproducible example</a> below:</p>
<pre><code>import pandas as pd, numpy as np
from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.feature_extraction.text import CountVectorizer
temp_df = pd.DataFrame.from_dict({'ID': [1,2,3,4,5], 'ContData': [np.random.randint(1, 10 + 1)]*5,
'Text': ['Lorem ipsum dolor sit amet', 'consectetur adipiscing elit.', 'Sed elementum ultricies varius.',
'Nunc vel risus sed ligula ultrices maximus id qui', 'Pellentesque pellentesque sodales purus,'],
'Class': [1,0,1,0,1]})
temp_df['text_lists'] = [x.split(' ') for x in temp_df['Text']]
w2v_model = Word2Vec(temp_df['text_lists'].values, min_count=1)
cv = CountVectorizer()
count_model = pd.DataFrame(data=cv.fit_transform(temp_df['Text']).todense(), columns=list(cv.get_feature_names_out()))
</code></pre>
<p>Using <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html" rel="nofollow noreferrer"><code>sklearn's CountVectorizer</code></a>, I am able to get a simple frequency representation that I can pass to a classifier. How can I get that same format using Word2vec?</p>
<p>This toy example produces:</p>
<pre><code>adipiscing amet consectetur dolor elementum elit id ipsum ligula lorem ... purus qui risus sed sit sodales ultrices ultricies varius vel
0 0 1 0 1 0 0 0 1 0 1 ... 0 0 0 0 1 0 0 0 0 0
1 1 0 1 0 0 1 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 1 0 0 0 0 0 ... 0 0 0 1 0 0 0 1 1 0
3 0 0 0 0 0 0 1 0 1 0 ... 0 1 1 1 0 0 1 0 0 1
4 0 0 0 0 0 0 0 0 0 0 ... 1 0 0 0 0 1 0 0 0 0
</code></pre>
<p>While this runs without error, I cannot get the embeddings into a format like the above that I can pass on. I would like to produce the same format, except that instead of counts, the values are the <code>word2vec</code> embeddings.</p>
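<p>In other words, the row-averaging I have in mind is roughly this (sketch, assuming gensim 4.x's <code>model.wv</code> accessor):</p>
<pre><code>import numpy as np

def avg_vector(words, model):
    # average the vectors of the tokens the model actually knows
    vecs = [model.wv[w] for w in words if w in model.wv]
    if not vecs:
        return np.zeros(model.vector_size)
    return np.mean(vecs, axis=0)

X = np.vstack([avg_vector(words, w2v_model) for words in temp_df['text_lists']])
# X should then be an (n_rows, vector_size) array I can pass to a sklearn classifier
</code></pre>
<p>Is that the right way to get at the underlying embeddings, or is there a built-in helper for this?</p>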
|
<python><scikit-learn><nlp><gensim><word2vec>
|
2022-12-12 05:48:08
| 1
| 7,351
|
artemis
|
74,766,991
| 6,283,073
|
how to receive whatsapp message in twilio
|
<p>I am trying to build an application that can receive and respond to an SMS or WhatsApp message. I have been able to set up and connect the Twilio number to the WhatsApp API.</p>
<p>I was able to successfully send a WhatsApp message with this function:</p>
<pre><code>def send_with_whatsapp():
client = Client(account_sid, auth_token)
message = client.messages.create(
body='Hello there!',
from_='whatsapp:+1xxxx',
to='whatsapp:+1xxxx'
)
print(message.sid)
</code></pre>
<p>I am able to receive and respond to SMS with this function:</p>
<pre><code>
@app.route("/sms", methods=['POST'])
def reply():
incoming_msg = request.form.get('Body').lower()
response = MessagingResponse()
print(incoming_msg)
</code></pre>
<p>The problem is that I could not figure out how to see the received WhatsApp messages. When a text is sent via regular SMS, the received message is printed at print(incoming_msg), but when a WhatsApp message is sent, nothing is printed. How can I print the received WhatsApp messages in Python?</p>
|
<python><twilio><whatsapp>
|
2022-12-12 05:38:18
| 1
| 1,679
|
e.iluf
|
74,766,757
| 4,733,871
|
Pytorch Expected more than 1 value per channel when training when using BatchNorm
|
<p>I've written this code:</p>
<pre><code>import numpy as np
import torch
from torch.utils.data import TensorDataset, dataloader
inputDim = 10
n = 1000
X = np.random.rand(n,inputDim)
y = np.random.rand(0,2,n)
tensor_x = torch.Tensor(X)
tensor_y = torch.Tensor(y)
Xy = (tensor_x, tensor_y)
XyLoader = dataloader.DataLoader(Xy, batch_size = 16, shuffle = True, drop_last = True)
model = torch.nn.Sequential(
torch.nn.Linear(inputDim, 200),
torch.nn.ReLU(),
torch.nn.BatchNorm1d(num_features=200),
torch.nn.Linear(200,100),
torch.nn.Tanh(),
torch.nn.BatchNorm1d(num_features=100),
torch.nn.Linear(100,1),
torch.nn.Sigmoid()
)
optimizer = torch.optim.Adam(model.parameters(), lr= 0.001)
loss_fn = torch.nn.BCELoss()
nepochs = 1000
for epochs in range(nepochs):
for X,y in XyLoader:
batch_size = X.shape[0]
y_hat = model(X.view(batch_size,-1))
loss = loss_fn(y_hat, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
with torch.no_grad():
xt = torch.tensor(np.random.rand(1,inputDim))
y2 = model(xt.float())
print(y2.detach().numpy()[0][0])
</code></pre>
<p>What am I doing wrong with torch.nn.BatchNorm1d?
If I run the code without the two BatchNorm1d lines, everything goes "ok", so what is the problem?</p>
|
<python><pytorch><batch-normalization>
|
2022-12-12 04:57:19
| 1
| 1,258
|
Dario Federici
|
74,766,701
| 4,451,521
|
Error while using kats module packaging version has no attribute legacyversion
|
<p>I am trying to use kats for the first time in order to run the code of <a href="https://towardsdatascience.com/how-to-detect-seasonality-outliers-and-changepoints-in-your-time-series-5d0901498cff" rel="nofollow noreferrer">this article</a></p>
<p>However I had the same error as in <a href="https://stackoverflow.com/questions/52889746/cant-import-annotations-from-future">this question</a> and in <a href="https://stackoverflow.com/questions/70515194/syntaxerror-future-feature-annotations-is-not-defined">this question</a> so I tried to solve it with the answers on those question. (I was using Python 3.6 and now I am using python 3.9)</p>
<p>However, now I have a different error, which is even harder to google.
Now it is:</p>
<pre><code>from kats.detectors.outlier import OutlierDetector
outlier_detector = OutlierDetector(ts_day, "additive")
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[19], line 1
----> 1 from kats.detectors.outlier import OutlierDetector
3 outlier_detector = OutlierDetector(ts_day, "additive")
File ~/miniconda3/envs/data_analysisPy39/lib/python3.9/site-packages/kats/__init__.py:6
1 # Copyright (c) Meta Platforms, Inc. and affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
----> 6 from . import compat # noqa # usort: skip
7 from . import consts # noqa # usort: skip
8 from . import data # noqa # usort: skip
File ~/miniconda3/envs/data_analysisPy39/lib/python3.9/site-packages/kats/compat/__init__.py:6
1 # Copyright (c) Meta Platforms, Inc. and affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
----> 6 from . import compat # noqa # usort: skip
7 from . import pandas # noqa # usort: skip
8 from . import statsmodels # noqa # usort: skip
File ~/miniconda3/envs/data_analysisPy39/lib/python3.9/site-packages/kats/compat/compat.py:19
14 from typing import Callable, Union
16 from packaging import version as pv
---> 19 V = Union[str, "Version", pv.Version, pv.LegacyVersion]
22 class Version:
23 """Extend packaging Version to allow comparing to version strings.
24
25 Wraps instead of extends, because pv.parse can return either a
26 pv.Version or a pv.LegacyVersion.
27 """
AttributeError: module 'packaging.version' has no attribute 'LegacyVersion'
</code></pre>
<p>Has anyone been able to run kats successfully? How can this error be corrected?</p>
|
<python><kats>
|
2022-12-12 04:45:57
| 1
| 10,576
|
KansaiRobot
|
74,766,562
| 15,982,771
|
Why does SendMessage not work for some applications?
|
<h2>Background:</h2>
<p>I was trying to program an auto clicker to click in the background to an application (<a href="https://en.wikipedia.org/wiki/Roblox" rel="nofollow noreferrer">Roblox</a>, not trying to do anything malicious). I was able to get the window and perform commands like closing it. However, when trying to send clicks to the window it returns 0. (I'm using SendMessage so I don't activate the window.)</p>
<h2>Minimum reproducible example:</h2>
<pre><code>import win32gui
import win32con
import win32api
</code></pre>
<pre><code>hwnd = win32gui.FindWindow(None, "Roblox")
while True:
lParam = win32api.MAKELONG(100, 100)
temp = win32gui.SendMessage(hwnd, win32con.WM_LBUTTONDOWN, None, lParam)
win32gui.SendMessage(hwnd, win32con.WM_LBUTTONUP, None, lParam)
print(temp)
</code></pre>
<h2>Things I tried:</h2>
<ol>
<li><p>I tried changing the window to see if it was the wrong window, or if it didn't see the window</p>
</li>
<li><p>I tried sending the message normally:</p>
<pre><code>lParam = win32api.MAKELONG(100, 100) # Get the coordinates and change to long
temp = win32gui.SendMessage(hwnd, win32con.WM_LBUTTONDOWN, None, lParam) # Send message to handle
win32gui.SendMessage(hwnd, win32con.WM_LBUTTONUP, None, lParam) # Release key from sent message to handle
</code></pre>
</li>
<li><p>I tried it with other windows, and it worked, but not for Roblox</p>
</li>
<li><p>I tried with other commands and it works, but clicks don't. This works: (So I know it's the right window)</p>
<pre><code>temp = win32gui.SendMessage(hwnd, win32con.WM_CLOSE, 0, 0) # Close window with SendMessage
</code></pre>
</li>
</ol>
|
<python><winapi><win32gui>
|
2022-12-12 04:14:57
| 1
| 1,128
|
Blue Robin
|
74,766,520
| 12,458,212
|
Mapping dict values based on keys in a list comprehension
|
<p>Let's say I have a dictionary where the keys are substrings I want to use for a string search. I want to see if the keys exist in the elements of a list; if they do, I'd like to map the element to the value of that key, and if there is no string match, set it to 'no match'.</p>
<pre><code>d = {'jun':'Junior', 'jr':'Junior', 'sr':'Senior', 'sen':'Senior'}
phrases = ['the Jr. title', 'sr jumped across the bridge', 'the man on the moon']
</code></pre>
<p>This is what I've tried; however, I just can't seem to fit the 'no match' fallback into the list comprehension. Help appreciated. PS: I would like to stick with a dict/list comprehension approach for my specific use case.</p>
<pre><code># Tried
[[v for k,v in d.items() if k in str(y).lower()] for y in phrases]
# Output
[['Junior'],['Senior'],[]]
# Desired Output
[['Junior'],['Senior'],['no match']]
</code></pre>
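<p>The missing piece is a fallback for the empty inner list. One idea I am unsure about is relying on the empty list's falsiness with <code>or</code> (sketch):</p>
<pre><code># fall back to ['no match'] whenever the inner comprehension comes back empty
[[v for k, v in d.items() if k in str(y).lower()] or ['no match'] for y in phrases]
# [['Junior'], ['Senior'], ['no match']]
</code></pre>
<p>Is that considered acceptable style, or is there a cleaner comprehension-based way?</p>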
|
<python>
|
2022-12-12 04:03:56
| 3
| 695
|
chicagobeast12
|
74,766,512
| 1,812,993
|
async test does not run in pytest
|
<p>I am trying to test an async function. Thus, I have an async test. When I use pycharm to set breakpoints in this test, they don't break. The things which are supposed to print to console do not print. And, the test takes 0 seconds. pytest lists the test as <code>PASSED</code>, but it clearly isn't actually executing the code in context.</p>
<p>Here is the code:</p>
<pre><code>import unittest.mock
import pytest
from app.listener import ListenerCog
class ListenerCogTestCase(unittest.TestCase):
def setUp(self):
# Create a mock discord.Client object
self.client = unittest.mock.Mock()
# Create a ListenerCog instance and pass the mock client as the bot attribute
self.listener_cog = ListenerCog(self.client)
@pytest.mark.asyncio
async def test_on_ready(self):
# Call the on_ready method of the ListenerCog instance
await self.listener_cog.on_ready()
# Assert that the "Running!" message is printed
self.assertIn("Running!", self.stdout.getvalue())
</code></pre>
<p>What I've tried:</p>
<ul>
<li>enable <code>python.debug.asyncio.repl</code> in pycharm registry</li>
<li>add arguments to pytest including <code>--capture=no -s</code> and <code>--no-conv</code></li>
</ul>
<p>Nothing seems to be working. Would really appreciate some guidance here if anyone is accustomed to debugging async tests.</p>
|
<python><pytest><python-asyncio>
|
2022-12-12 04:02:18
| 1
| 7,376
|
melchoir55
|
74,766,332
| 6,291,976
|
Python: Understanding indexing in the use of enumerate with readline
|
<p>Python code:</p>
<pre><code>myfile = open("test-file.csv", "r")
for k, line in enumerate(myfile,0):
if k == 0:
myline = myfile.readline()
print(myline)
break
myfile.close()
</code></pre>
<p>and test-file.csv is:</p>
<pre><code>0. Zeroth
1. First
2. Second
3. Third
</code></pre>
<p>The output is</p>
<pre><code>1. First
</code></pre>
<p>Why don't I get</p>
<pre><code>0. Zeroth
</code></pre>
<p>?</p>
|
<python><readline><enumerate>
|
2022-12-12 03:17:32
| 2
| 509
|
mike65535
|
74,766,283
| 11,229,812
|
How to detect button pressed and kept down and button up in Tkniter GUI?
|
<p>I am trying to build a tkinter GUI and use it to control my Raspberry Pi robot.
The robot will have several functions, but the main one will be to move forward, backward, left, and right using the W, S, A, and D keys on the keyboard respectively, or by clicking the buttons on the GUI.
Here are my current code and a screenshot of the GUI:</p>
<pre><code># Importing dependencies
import tkinter
import tkinter.messagebox
import customtkinter
# Setting up theme
customtkinter.set_appearance_mode("Dark") # Modes: "System" (standard), "Dark", "Light"
customtkinter.set_default_color_theme("blue") # Themes: "blue" (standard), "green", "dark-blue"
class App(customtkinter.CTk):
def __init__(self):
super().__init__()
# configure window
self.title("Cool Blue")
self.geometry(f"{1200}x{700}")
# configure grid layout (4x4)
self.grid_columnconfigure(1, weight=1)
self.grid_columnconfigure((2, 3), weight=0)
self.grid_rowconfigure((0, 1, 2), weight=1)
# create sidebar frame for controls
self.sidebar_frame = customtkinter.CTkFrame(self, width=200)
self.sidebar_frame.grid(row=0, column=0, rowspan=4, padx=(5, 5), pady=(10, 10), sticky="nsew")
self.sidebar_frame.grid_rowconfigure(4, weight=1)
self.logo_label = customtkinter.CTkLabel(self.sidebar_frame, text="Controls", font=customtkinter.CTkFont(size=20, weight="bold"))
self.logo_label.grid(row=0, column=1, padx=20, pady=(10, 10))
self.button_up = customtkinter.CTkButton(self.sidebar_frame, text="W", height=10, width=10, command=self.motion_event)
self.button_up.grid(row=1, column=1, padx=20, pady=10, ipadx=10, ipady=10)
self.button_down = customtkinter.CTkButton(self.sidebar_frame, text="S", height=10, width=10, command=self.motion_event)
self.button_down.grid(row=3, column=1, padx=20, pady=10, ipadx=10, ipady=10)
self.button_left = customtkinter.CTkButton(self.sidebar_frame, text="A", height=10, width=10, command=self.motion_event)
self.button_left.grid(row=2, column=0, padx=10, pady=10, ipadx=10, ipady=10)
self.button_right = customtkinter.CTkButton(self.sidebar_frame, text="D", height=10, width=10, command=self.motion_event)
self.button_right.grid(row=2, column=2, padx=10, pady=10, ipadx=10, ipady=10)
self.button_stop = customtkinter.CTkButton(self.sidebar_frame, text="Stop", height=10, width=10, command=self.motion_event)
self.button_stop.grid(row=2, column=1, padx=10, pady=10, ipadx=10, ipady=10)
# create Video Canvas
self.picam = customtkinter.CTkCanvas(self, width=800, background="gray")
self.picam.grid(row=0, column=1, rowspan=2, padx=(5, 5), pady=(20, 20), sticky="nsew")
self.picam_label = customtkinter.CTkLabel(master=self.picam, text="Live Video", font=customtkinter.CTkFont(size=20, weight="bold"))
self.picam_label.grid(row=0, column=2, columnspan=1, padx=10, pady=10, sticky="")
# create sidebar frame for Environmental Variable
self.measurements = customtkinter.CTkFrame(self, width=200)
self.measurements.grid(row=0, column=3, rowspan=4, padx=(5, 5), pady=(10, 10), sticky="nsew")
self.measurements.grid_rowconfigure(4, weight=1)
self.label_measurements = customtkinter.CTkLabel(master=self.measurements, text="Environment:", font=customtkinter.CTkFont(size=20, weight="bold"))
self.label_measurements.grid(row=0, column=2, columnspan=1, padx=10, pady=10, sticky="")
def motion_event(self):
# if button_up pressed down or physical key "w" on keyboard pressed and held down, print "Forward" until key released or button not being clicked anymore.
# else if button_down pressed down or physical key "s" on keyboard pressed and held down, print "Backward" until key released or button not being clicked anymore.
# else if button_left pressed down or physical key "a" on keyboard pressed and held down, print "Left" until key released or button not being clicked anymore.
# else if button_right pressed down or physical key "d" on keyboard pressed and held down, print "Right" until key released or button not being clicked anymore.
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p><a href="https://i.sstatic.net/c2pDz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c2pDz.png" alt="enter image description here" /></a></p>
<p>So I created a basic layout with a few buttons. What I am struggling with is the functionality of the buttons.
I need to implement the motion_event function so that it detects either when a key on my physical keyboard is pressed and held down, or when a button on the GUI is being clicked and held. Once that is detected, to start with I just need a printout of which key or button was pressed. Ultimately I want to replace this with the movement of the motors on the robot.</p>
<p>So for example, if the key "w" on the physical keyboard was pressed and held down or if the button "w" on GUI was pressed and held down, then print out "forward" and move my motors forward until the key or button was released.</p>
<p>Any suggestions on how I can accomplish this?
I understand you won't be able to help me with moving the motors, but if you can help me just print out the directions, I can implement the motor movements later.</p>
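<p>To show what I mean by "pressed and held down", here is a minimal plain-tkinter sketch of the press/release events I think I need (I am not sure how these map onto customtkinter's CTkButton):</p>
<pre><code>import tkinter as tk

root = tk.Tk()
btn = tk.Button(root, text="W")
btn.pack()

# bind press/release events instead of using the command= option
btn.bind("<ButtonPress-1>", lambda e: print("Forward (button down)"))
btn.bind("<ButtonRelease-1>", lambda e: print("Stop (button up)"))
root.bind("<KeyPress-w>", lambda e: print("Forward (key down)"))
root.bind("<KeyRelease-w>", lambda e: print("Stop (key up)"))

root.mainloop()
</code></pre>
<p>Note that holding a key down also triggers keyboard auto-repeat, so I assume I would need to handle repeated KeyPress events as well.</p>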
<p>Thank you in advance.</p>
|
<python><python-3.x><tkinter>
|
2022-12-12 03:08:05
| 1
| 767
|
Slavisha84
|
74,766,267
| 4,095,108
|
Pandas - create new column from values in other columns on condition
|
<p>I'm trying to create a new column by merging the non-NaN values from two other columns.<br />
I'm sure something similar has been asked, and I've looked at many questions, but most of them seem to check the value and return hard-coded values.<br />
Here is my sample code:</p>
<pre><code> test_df = pd.DataFrame({
'col1':['a','b','c',np.nan,np.nan],
'col2':[np.nan,'b','c','d',np.nan]
})
print(test_df)
col1 col2
0 a NaN
1 b b
2 c c
3 NaN d
4 NaN NaN
</code></pre>
<p>I need to add <code>col3</code> based on these checks:<br />
if col1 is not NaN, then col1<br />
if col1 is NaN and col2 is not NaN, then col2<br />
if col1 is NaN and col2 is NaN, then NaN</p>
<pre><code> col1 col2 col3
0 a NaN a
1 b b b
2 c c c
3 NaN d d
4 NaN NaN NaN
</code></pre>
|
<python><pandas>
|
2022-12-12 03:04:29
| 1
| 1,685
|
jmich738
|
74,766,212
| 17,103,465
|
Distribute data equally in all the bins based on a column : Pandas
|
<p>I have a large pandas dataframe; a sample is shown below.</p>
<p>I want to evenly distribute the "Good" flag between the ranges, so that each score range has an equal number of goods.</p>
<pre><code>Score Good
100 0
100 0
100 0
300 0
400 0
400 0
600 1
600 1
600 0
650 0
650 0
650 1
700 1
770 1
770 1
800 0
890 1
890 1
</code></pre>
<p>Sample Output:</p>
<pre><code>bins Goods
100 - 600 2
650-700 2
770-800 2
> 890 2
</code></pre>
<p>I tried using <code>pd.cut</code> and <code>pd.qcut</code> but wasn't able to figure this out.</p>
|
<python><pandas>
|
2022-12-12 02:51:53
| 2
| 349
|
Ash
|
74,766,171
| 8,416,255
|
What is the correct type hint to use when exporting a pydantic model as a dict?
|
<p>I'm writing an abstraction module which validates an excel sheet against a pydantic schema and returns the row as a dict using <code>dict(MyCustomModel(**sheet_row))</code>
. I would like to use type hinting so any function that uses the abstraction methods gets a type hint for the returned dictionary with its keys instead of just getting an unhelpful dict. Basically I'd like to return the keys of the dict that compose the schema so I don't have to keep referring to the schema for its fields and to catch any errors early on.</p>
<p>My current workaround is having my abstraction library return the pydantic model directly and type hint using the model itself. This means every field has to be accessed using dot notation instead of like a regular dictionary. I cannot annotate the dict as being the model itself, as it's a dict, not the actual pydantic model (which has some extra attributes as well).</p>
<p>I tried type hinting with the type <code>MyCustomModel.__dict__()</code>. That resulted in the error <code>TypeError: Parameters to generic types must be types. Got mappingproxy({'__config__': <class 'foo.bar.Config'>, '__fields__': {'lab.</code>. Is there a way to provide a type hint for the fields in the schema, but as a dictionary? I don't omit any keys during the dict export; all the fields in the model are present in the final dict being returned.</p>
|
<python><python-typing><pydantic>
|
2022-12-12 02:45:11
| 1
| 409
|
rsn
|
74,765,958
| 607,407
|
What is a nice, python style way to yield ranged subsets of a string/bytes/list several items at a time?
|
<p>I want to loop over bytes data, and I'd hope the same principle would apply to strings and lists, where I don't go item by item, but a few items at a time. I know I can do <code>mystr[0:5]</code> to get the first five characters, and I'd like to do that in a loop.</p>
<p>I can do it the C style way, looping over ranges and then returning the remaining elements, if any:</p>
<pre><code>import math
def chunkify(listorstr, chunksize:int):
# Loop until the last chunk that is still chunksize long
end_index = int(math.floor(len(listorstr)/chunksize))
for i in range(0, end_index):
print(f"yield ")
yield listorstr[i*chunksize:(i+1)*chunksize]
# If anything remains at the end, yield the rest
remainder = len(listorstr)%chunksize
if remainder != 0:
yield listorstr[end_index*chunksize:len(listorstr)]
[i for i in chunkify("123456789", 2)]
</code></pre>
<p>This works just fine, but I strongly suspect python language features could make this a lot more compact.</p>
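<p>The level of compactness I am hoping for is roughly a single range-with-step expression, e.g. (sketch; I am not sure whether this is considered idiomatic or whether something in the standard library already does it):</p>
<pre><code>def chunkify(listorstr, chunksize: int):
    # slicing past the end is safe, so the final short chunk comes for free
    return [listorstr[i:i + chunksize] for i in range(0, len(listorstr), chunksize)]

print(chunkify("123456789", 2))   # ['12', '34', '56', '78', '9']
print(chunkify(b"abcdef", 4))     # [b'abcd', b'ef']
</code></pre>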
|
<python><python-3.x><list-comprehension>
|
2022-12-12 01:50:40
| 1
| 53,877
|
Tomáš Zato
|
74,765,880
| 1,330,810
|
Cloudtrail console resources missing from event record
|
<p>I need to get all the resources referenced by the action for each AWS event record. I use Python and cloudaux/boto.
The documentation states a "resources" field: <a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html</a> (although it does say it's optional).</p>
<p>In some Cloudtrail events, like Attach Role Policy as in the picture below, I can see the "resources referenced" in the console, but they are missing from the event record and when I fetch it via the API.</p>
<p>Is there any way to get them programmatically? The alternative would be to compute them manually from the request parameters / response, but it's structured differently for each type of event.</p>
<p><a href="https://i.sstatic.net/uYyjo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uYyjo.png" alt="enter image description here" /></a></p>
|
<python><amazon-web-services><amazon-cloudtrail>
|
2022-12-12 01:33:01
| 1
| 5,825
|
Idan
|
74,765,857
| 9,477,246
|
How to count columns by row in python pandas?
|
<p>I am not certain how to describe this situation.
Suppose I have the following well-defined table in a pandas dataframe,</p>
<pre><code> 0 1 2 3 4 5 ... 2949 2950 2951 2952 2953 2954
0.txt html head meta meta meta meta ...
107.txt html head title meta meta meta ...
125.txt html head title style body div ...
190.txt html head meta title style body ...
202.txt html head meta title link style
</code></pre>
<p>And I want to spread this table out, with the columns representing the unique HTML tags and the values representing each tag's count in the given row.</p>
<pre><code> html head meta style link body ...
0.txt 1 1 4 2 1 2 ...
107.txt 1 2 3 0 0 1 ...
</code></pre>
<p>Something like the above. I counted 88 distinct HTML tags in total in the table, so the column count should be 88. If this turns out to be a success, then I will apply pandas' <code>describe()</code> and <code>value_counts()</code> functions to find out more about these tags' statistics. However, I am stuck with the above. Please give me some ideas to tackle this. Thank you.</p>
|
<python><html><pandas><tags>
|
2022-12-12 01:26:49
| 1
| 393
|
Hannah Lee
|
74,765,845
| 1,064,843
|
convert for loop to while (remove break <-- this is key)
|
<p>The <code>break</code> here is bothering me; after extensive research I want to ask if there is a pythonic way to convert this to a <code>while</code> loop:</p>
<pre><code>import re
file = open('parse.txt', 'r')
html = file.readlines()
def cleanup():
result = []
for line in html:
if "<li" and "</li>" in line:
stripped = re.sub(r'[\n\t]*<[^<]+?>', '', line).rstrip()
quoted = f'"{stripped}"'
result.append(quoted)
elif "INSTRUCTIONS" in line:
break
return ",\n".join(result)
</code></pre>
<p>I really am trying to practice designing more efficient loops.</p>
<p>added parse.txt</p>
<pre><code><p style="text-align:justify"><strong><span style="background-color:#ecf0f1">INGREDIENTS</span></strong></p>
<li style="text-align:justify"><span style="background-color:#ecf0f1">3 lb ground beef (80/20)</span></li>
<ul>
<li style="text-align:justify"><span style="background-color:#ecf0f1">1 large onion, chopped</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">2-3 cloves garlic, minced</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">2 jalapeño peppers, roasted, peeled, de-seeded, chopped</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">4-5 roma tomatoes, roasted peeled, chopped</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">1 15 oz can kidney beans, strained and washed</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">2 tsp salt</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">2 tsp black pepper</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">2 tsp cumin</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">¼ - ½ tsp cayenne pepper</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">1 tsp garlic powder</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">1 tsp Mexican oregano</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">1 tsp paprika</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">1 tsp smoked paprika</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">3 cups chicken broth</span></li>
<li style="text-align:justify"><span style="background-color:#ecf0f1">2 tbsp tomato paste</span></li>
</ul>
<p style="text-align:justify"><strong>INSTRUCTIONS</strong></p>
<ol>
<li style="text-align:justify">Heat a large put or Dutch oven over medium-high heat and brown the beef, while stirring to break it up. Cook until no longer pink. Drain out the liquid.</li>
<li style="text-align:justify">Stir in onions and cook for about 5 minutes until they are pale and soft. Add in minced garlic and jalapeño peppers, stirring for another minute.</li>
<li style="text-align:justify">Stir in the chopped tomatoes, all the spices, and tomato paste until well-distributed and tomato paste has broken up, then follow with the broth. Allow the pot to come to a gentle boil over medium heat, uncovered for about 20 minutes.</li>
<li style="text-align:justify">Reduce heat to low, cover and simmer for at least 3 hours, until liquid has reduced.</li>
<li style="text-align:justify">During the last 20-30 minutes of cook time, add in the kidney beans; uncover and allow liquid to reduce further during this time.</li>
<
li style="text-align:justify">Serve hot with jalapeño cornbread muffins, shredded cheese, avocado chunks, chopped cilantro, chopped green onion, tortilla chips.</li>
</ol>
</code></pre>
|
<python><loops>
|
2022-12-12 01:22:03
| 2
| 1,553
|
Snerd
|
74,765,825
| 1,077,539
|
Recursively traverse json and trim strings in Python
|
<p>I have a JSON object like the following, which has extra spaces:</p>
<pre><code>{"main": "test ","title": {"title": "something. ","textColor": "Dark"},"background": {"color": "White "}}
</code></pre>
<p>I want to make a new JSON object by removing the extra spaces:</p>
<pre><code>{"main": "test","title": {"title": "something","textColor": "Dark"},"background": {"color": "White"}}
</code></pre>
<p>So far what I have got can print each key and value:</p>
<pre><code>def trim_json_strings(json):
for k, v in json.items():
if isinstance(v, dict):
trim_json_strings(v)
else:
strippedValue = v.strip() if isinstance(v, str) else v
print(k.strip(), strippedValue, end = '\n')
</code></pre>
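<p>Ideally, instead of printing, the function would return a new trimmed structure. Roughly this shape is what I am aiming for (sketch; it does not handle lists or other nesting):</p>
<pre><code>def trim_json_strings(obj):
    if isinstance(obj, dict):
        # trim both keys and values, recursing into nested dicts
        return {k.strip(): trim_json_strings(v) for k, v in obj.items()}
    if isinstance(obj, str):
        return obj.strip()
    return obj
</code></pre>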
<p>Not an expert in Python. Thanks in Advance</p>
|
<python><json><python-3.x>
|
2022-12-12 01:16:43
| 1
| 4,390
|
sadat
|
74,765,507
| 5,041,045
|
How to properly close an http connection from python requests
|
<p>I saw the answers in <a href="https://stackoverflow.com/questions/44665767/closing-python-requests-connection">Closing python requests connection</a>
but I don't think the answers really specify how to close the connection, or maybe I am not really understanding what it does.
For example, the following code:</p>
<pre class="lang-py prettyprint-override"><code>import requests
with requests.Session() as s:
s.post("https://example.com", headers={'Connection':'close'})
print(s)
# My understanding is that the Session is still alive as per the outputs
print(s)
s.close()
print(s)
</code></pre>
<p>returns</p>
<pre><code><requests.sessions.Session object at 0x106264e80>
<requests.sessions.Session object at 0x106264e80>
<requests.sessions.Session object at 0x106264e80>
</code></pre>
<p>I was expecting only the first print statement to work, since the second is already outside the with statement (hence the connection should have been closed, according to the documentation) and the third is after an explicit session object close().
Same as when you open a file, then close it, it becomes inaccessible:</p>
<pre><code>with open("hello.txt") as f:
print(f.read())
print(f.read())
</code></pre>
<p>prints:</p>
<pre><code>Hello
Traceback (most recent call last):
File "/Users/simon/junk/myfile.py", line 4, in <module>
print(f.read())
ValueError: I/O operation on closed file.
</code></pre>
<p>I was expecting the 2nd and 3rd print to throw some kind of error.</p>
|
<python><http><tcp><python-requests>
|
2022-12-12 00:04:26
| 2
| 3,022
|
Simon Ernesto Cardenas Zarate
|
74,765,215
| 4,844,184
|
Make pip install .[option] install less packages than the default pip install . (so-called negative extra_requires)
|
<p>First of all let's assume the following:</p>
<ul>
<li>I am building a python package <code>mypackage</code> and want to make it available broadly</li>
<li>My package has the following python dependencies: "A","B","C" and "D" and we assume further that each dependency covers an independent use-case of the package (i.e. A is needed for users wanting to do A-type stuff, B is needed for B-type stuff, etc.)</li>
<li>A, B, C and D are all pretty heavy and take each tons of times to install.</li>
<li>The majority of the package's users are not developers and actually do not even know which type of stuff they will be interested in (whether it is one letter-stuff or any multiple letters simultaneously) and do not know how to do install with options</li>
<li>Some users are power-users and know from the get-go that they will only use C and D-stuff so they will only need C or D as dependencies. In fact some of the non developer users might actually turn into power users given enough time to practice</li>
</ul>
<p>From reading all hypotheses above then it makes perfect sense to have a default install installing A,B, C and D and having options available for power-users to install only C or D (or any package combination such as A and D).
Aka:</p>
<ul>
<li><code>pip install mypackage</code> => installs A, B, C and D</li>
<li><code>pip install mypackage[C, D]</code> => installs C and D <strong>but not A and B</strong></li>
</ul>
<p>This exact same problem is stated in this other question under the name <a href="https://stackoverflow.com/questions/36941264/negative-extra-requires-in-python-setup-py">negative extra_requires</a>. Because indeed the desired behavior is that <code>extra_requires</code> should install fewer packages than the default install. It is also connected to discussions and issues in <a href="https://discuss.python.org/t/possible-default-extras-dependency-categories/1537" rel="nofollow noreferrer">several</a> <a href="https://github.com/pypa/setuptools/issues/1139" rel="nofollow noreferrer">places</a>.</p>
<p>I wanted to know 1. has the situation changed or is it planned ? 2. what would be a way to circumvent this issue/go about this if not?</p>
|
<python><pip><setuptools><python-packaging>
|
2022-12-11 23:01:35
| 1
| 2,566
|
jeandut
|
74,765,171
| 19,989,634
|
Updating my live project to python 3.10 - ERROR python setup.py bdist_wheel did not run successfully
|
<p>I've been having some trouble when trying to install certain packages on the live version of my project and realized it's because it's running version 3.7 instead of my up-to-date version 3.10 on Windows 10.
Wheels and setup are both up to date.</p>
<p>When I try to pip install -r requirements.txt I get a number of errors which I'm having no luck in fixing.</p>
<p>installing mysql seems to be the reason for these errors appearing.</p>
<p>ERROR1:
python setup.py bdist_wheel did not run successfully.</p>
<p>ERROR2:
Running setup.py install for mysqlclient did not run successfully.</p>
<p>ERROR3:
error: command '/opt/rh/gcc-toolset-9/root/bin/gcc' failed: No such file or directory
error: legacy-install-failure</p>
|
<python><mysql><django>
|
2022-12-11 22:51:39
| 0
| 407
|
David Henson
|
74,765,166
| 8,312,634
|
MagicMock Mock'ed Function is not 'called'
|
<p>I am trying to mock an API function <code>send_message</code> from <code>smtplib.SMTP</code> in Python using MagicMock.</p>
<p>a simplified version of my function looks like</p>
<pre><code>#email_sender.py
def sendMessage():
msg = EmailMessage()
#Skipping code for populating msg
with smtplib.SMTP("localhost") as server:
server.send_message(msg)
</code></pre>
<p>I want to mock <code>server.send_message</code> call for my unit test. I have searched SO for some pointers and tried to follow a <a href="https://stackoverflow.com/a/72756380/8312634">similar question</a>.</p>
<p>Here is my unit test code based on the above question:</p>
<pre><code>#email_sender_test.py
import email_sender as es
def test_send_message_input_success() -> None:
with patch("smtplib.SMTP", spec=smtplib.SMTP) as mock_client:
mock_client.configure_mock(
**{
"send_message.return_value": None
}
)
es.sendMessage()
#assert below Passes
assert mock_client.called == True
#assert below Fails
assert mock_client.send_message.called == True
</code></pre>
<p>Any idea what I am doing wrong which is causing the <code>assert mock_client.send_message.called == True</code> to fail?</p>
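<p>A sketch of the other assertion I have been considering, based on my (possibly wrong) assumption that the <code>with</code> statement binds <code>server</code> to <code>mock_client.return_value.__enter__.return_value</code> rather than to the patched class itself:</p>
<pre><code>with patch("smtplib.SMTP", spec=smtplib.SMTP) as mock_client:
    es.sendMessage()
    # the SMTP class itself was called
    assert mock_client.called == True
    # send_message would then live on the instance yielded by the with-block
    server = mock_client.return_value.__enter__.return_value
    assert server.send_message.called == True
</code></pre>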
<p>Thanks!</p>
|
<python><pytest><python-unittest>
|
2022-12-11 22:50:21
| 1
| 447
|
N0000B
|
74,765,120
| 12,064,467
|
Empty Result on Pandas Merge?
|
<p>I have one large DataFrame <code>prices</code> with three columns - id, date, and price. See a sample below:</p>
<pre><code> id date price
0 1st 2022-09-11 3.4
1 1st 2022-09-10 43.2
2 1st 2022-09-09 8.1
3 1st 2022-09-08 32.2
4 2nd 2022-09-11 10.4
5 2nd 2022-09-10 41.1
6 2nd 2022-09-09 6.5
7 2nd 2022-09-08 39.3
</code></pre>
<p>I have another smaller DataFrame, called <code>results</code>, with four columns - id, date, color, and size.</p>
<pre><code> id date color size
0 1st 2022-09-11 blue s
1 1st 2022-09-10 red m
2 2nd 2022-09-09 green xl
</code></pre>
<p>I want to merge the <code>price</code> column from <code>volumes</code> into <code>results</code> based on matching <code>id</code> and <code>date</code>, such that there is a new column <code>price</code> in the <code>results</code> DataFrame. Based on the examples above, here's what <code>results</code> should be after the merge operation:</p>
<pre><code> id date color size price
0 1st 2022-09-11 blue s 3.4
1 1st 2022-09-10 red m 43.2
2 2nd 2022-09-09 green xl 6.5
</code></pre>
<p>When I do <code>results = pd.merge(results, prices)</code>, I get an empty DataFrame. What is going wrong? Any help would be appreciated.</p>
<p><strong>EDIT:</strong>
Because it was asked for in the comments, here are <code>results.dtypes</code> and <code>prices.dtypes</code></p>
<p><code>prices.dtypes</code>:</p>
<pre><code>id object
date object
price float64
</code></pre>
<p><code>results.dtypes</code>:</p>
<pre><code>id object
date object
color object
size object
</code></pre>
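<p>For reference, this is the explicit form of the merge I have been trying, with the join keys spelled out and a left join so unmatched rows show up as NaN instead of disappearing (the whitespace-stripping lines are only a guess about why the keys might not match):</p>
<pre><code>results = results.merge(prices, on=["id", "date"], how="left")

# guess: if price comes back as NaN, the key values may differ subtly
# results["id"] = results["id"].str.strip()
# results["date"] = results["date"].str.strip()
</code></pre>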
|
<python><pandas><dataframe>
|
2022-12-11 22:41:30
| 1
| 522
|
DataScienceNovice
|
74,765,022
| 15,587,184
|
R Tidyverse Alternative Code in Python for data wrangling
|
<p>I am trying to do the following in Python:</p>
<ol>
<li>Import iris data base</li>
<li>create a new column with the name target_measure that will be the natural log of Sepal.Width multiplied by the squared Petal.Width</li>
<li>create a new variable called categorical_measure that will classify the previous column into 3 labels like so:
if target_measure<1.5 then it will be "<1.5",
if target_measure>=1.5 and target_measure<3.5 then it will be "1.5-3.5",
any other value will be "out of target"</li>
<li>calculate the mean sepal and petal width grouping by species as well as the count of all labels in the column categorical_measure</li>
<li>finally filter all rows with "out of target" count is equal or greater than 5</li>
</ol>
<p>We can download/import the iris dataset here:</p>
<pre><code>data=pd.read_csv("https://gist.githubusercontent.com/curran/a08a1080b88344b0c8a7/raw/0e7a9b0a5d22642a06d3d5b9bcbad9890c8ee534/iris.csv")
</code></pre>
<p>My R code goes as follows</p>
<pre><code>library(tidyverse)
data=iris # R's built-in fun to import iris
#desired output
data %>% # this is known as a pipe in R and will exc the lines below feed from the data env object
group_by(Species) %>% #groups by species
mutate(target_measure=log(Sepal.Width)*(Petal.Width)^2)%>% #creates column target_measure
mutate(categorical_measure=case_when(target_measure<1.5~"<1.5", #creates column categorical_measure based on criteria
target_measure>=1.5 & target_measure<3.5~"1.5-3.5",
TRUE~"out of target")) %>%
summarise(mean_of_sepal=mean(Sepal.Width), #calculates mean of sepal.width of grouped data
mean_of_petal=mean(Petal.Width),
'No of 1.5'=sum(categorical_measure=="<1.5"), #calculates count label="<1.5" from column categorical_measure
'No of 1.5-3.5'=sum(categorical_measure=="1.5-3.5"),#calculates count label="1.5-3.5"
'No of out of target'=sum(categorical_measure=="out of target")) %>% #calculates count label="out of target"
filter(`No of out of target`>=5) # filters desired output
</code></pre>
<p>code without comments (for faster reading)</p>
<pre><code>data %>%
group_by(Species) %>%
mutate(target_measure=log(Sepal.Width)*(Petal.Width)^2)%>%
mutate(categorical_measure=case_when(target_measure<1.5~"<1.5",
target_measure>=1.5 & target_measure<3.5~"1.5-3.5",
TRUE~"out of target")) %>%
summarise(mean_of_sepal=mean(Sepal.Width),
mean_of_petal=mean(Petal.Width),
'No of 1.5'=sum(categorical_measure=="<1.5"),
'No of 1.5-3.5'=sum(categorical_measure=="1.5-3.5"),
'No of out of target'=sum(categorical_measure=="out of target")) %>%
filter(`No of out of target`>=5)
</code></pre>
<p>My desired output is:</p>
<pre><code># A tibble: 1 x 6
Species mean_of_sepal mean_of_petal `No of 1.5` `No of 1.5-3.5` `No of out of target`
<fct> <dbl> <dbl> <int> <int> <int>
1 virginica 2.97 2.03 0 11 39
</code></pre>
<p>Is there a way to achieve this level of simplicity in Python?</p>
<p>So far I have come across the pandas library and useful functions such as <code>data.groupby(['species'])</code>, but in every tutorial or YouTube video each step is done separately, or a function is created first and then used with <code>.apply</code>. I am looking for a solution that uses pipes or a similarly chained structure, as in the sketch below.</p>
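<p>This is roughly the kind of chained pipeline I am imagining (I am assuming the lowercase column names <code>sepal_width</code>, <code>petal_width</code>, <code>species</code> from that CSV, and I have only kept the "out of target" count to keep the sketch short - I am not sure this is idiomatic):</p>
<pre><code>import numpy as np
import pandas as pd

data = pd.read_csv("https://gist.githubusercontent.com/curran/a08a1080b88344b0c8a7/raw/0e7a9b0a5d22642a06d3d5b9bcbad9890c8ee534/iris.csv")

out = (
    data
    .assign(target_measure=lambda d: np.log(d["sepal_width"]) * d["petal_width"] ** 2)
    .assign(categorical_measure=lambda d: np.select(
        [d["target_measure"] < 1.5,
         (d["target_measure"] >= 1.5) & (d["target_measure"] < 3.5)],
        ["<1.5", "1.5-3.5"],
        default="out of target"))
    .groupby("species", as_index=False)
    .agg(mean_of_sepal=("sepal_width", "mean"),
         mean_of_petal=("petal_width", "mean"),
         no_out_of_target=("categorical_measure", lambda s: (s == "out of target").sum()))
    .query("no_out_of_target >= 5")
)
print(out)
</code></pre>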
|
<python><r><pandas>
|
2022-12-11 22:22:05
| 1
| 809
|
R_Student
|
74,764,791
| 7,585,973
|
How to create label column, based on index number (odd/even) on pySpark
|
<p>Here's my Input</p>
<pre><code> index date_id year month day hour minute
0 156454 20200801 2021 12 31 12 38
1 156454 20200801 2021 12 31 12 39
</code></pre>
<p>What I want is to just label odd rows 'poi1' and even rows 'poi2'</p>
<p>Here's my output</p>
<pre><code> index date_id year month day hour minute label
0 156454 20200801 2021 12 31 12 38 poi1
1 156454 20200801 2021 12 31 12 39 poi2
</code></pre>
<p>The pandas code is like this</p>
<pre><code>df_movmnt_2["label"] = np.where(((df_movmnt_2.index)+1)%2 != 0, "poi1", "poi2")
</code></pre>
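<p>For context, this is the rough pyspark shape I think I need, assuming <code>index</code> is an ordinary column of the Spark DataFrame (not a pandas index) - just a sketch, not something I have confirmed matches the pandas output:</p>
<pre><code>from pyspark.sql import functions as F

df_movmnt_2 = df_movmnt_2.withColumn(
    "label",
    F.when((F.col("index") + 1) % 2 != 0, "poi1").otherwise("poi2")
)
</code></pre>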
|
<python><pandas><pyspark>
|
2022-12-11 21:49:22
| 1
| 7,445
|
Nabih Bawazir
|
74,764,724
| 11,092,636
|
Text below a figure with matplotlib
|
<p>I managed to add text on the figure with</p>
<pre class="lang-py prettyprint-override"><code> fg_ax.text(
0.05,
0.1,
f"n = {len(df)}",
)
</code></pre>
<p><code>fg_ax</code> being my <code>matplotlib.axes</code> object</p>
<p>But I want it below the figure. Because my y coordinates go from 0 to 1, I figured after reading the documentation that doing something like this would put it below:</p>
<pre class="lang-py prettyprint-override"><code> fg_ax.text(
0.05,
-0.5,
f"n = {len(df)}",
)
</code></pre>
<p>But there is nothing that appears anymore, as if I was writing "outside" what is displayed.</p>
<p>I tried <code>plt.show()</code> and <code>fg_ax.figure.savefig</code>. None works.</p>
<p>Minimal reproducible example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = (30 * np.random.rand(N)) ** 2 # 0 to 15 point radii
_, fg_ax = plt.subplots()
fg_ax.scatter(x, y, s=area, c=colors, alpha=0.5)
fg_ax.text(
0.05,
-0.5,
f"n = qsdfqsdf",
)
plt.show()
</code></pre>
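<p>A small sketch of what I have been experimenting with, on the assumption that axes coordinates (<code>transform=fg_ax.transAxes</code>) are the right tool for placing text relative to the axes box rather than the data:</p>
<pre><code># place the text 5% from the left and 15% below the axes, in axes coordinates
fg_ax.text(0.05, -0.15, f"n = qsdfqsdf", transform=fg_ax.transAxes)
# leave some room under the axes so the text is not cut off
plt.subplots_adjust(bottom=0.2)
plt.show()
</code></pre>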
|
<python><matplotlib>
|
2022-12-11 21:39:29
| 2
| 720
|
FluidMechanics Potential Flows
|
74,764,610
| 1,012,010
|
Is there a way to generate combinations, one at a time?
|
<p>I can generate combinations from a list of numbers using itertools.combinations, such as the following:</p>
<pre><code>from itertools import combinations
l = [1, 2, 3, 4, 5]
for i in combinations(l,2):
print(list(i))
</code></pre>
<p>This generates the following:</p>
<pre><code>[1, 2]
[1, 3]
[1, 4]
[1, 5]
[2, 3]
[2, 4]
[2, 5]
[3, 4]
[3, 5]
[4, 5]
</code></pre>
<p>How can I generate just one of these list pairs at a time and save it to a variable? I want to use each pair of numbers, one pair at a time, and then go to the next pair of numbers. I don't want to generate all of them at once.</p>
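<p>This is roughly what I mean - pulling pairs on demand with <code>next()</code> instead of looping over everything (a small sketch):</p>
<pre><code>from itertools import combinations

l = [1, 2, 3, 4, 5]
pairs = combinations(l, 2)   # nothing is generated yet

first = next(pairs)          # (1, 2), computed on demand
second = next(pairs)         # (1, 3)
print(first, second)
</code></pre>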
|
<python><combinations>
|
2022-12-11 21:19:22
| 1
| 730
|
Alligator
|
74,764,480
| 815,612
|
How do I write a Python function that returns a given (json) body with a given HTTP response code?
|
<p>I have the following (default) code in an AWS lambda function:</p>
<pre><code>def lambda_handler(event, context):
return {"statusCode": 200, "body": "\"Hello from Lambda!\""}
</code></pre>
<p>I also have this hooked up to an HTTP endpoint in the REST API section of the AWS web interface. Seemingly, the idea of this is that this will get converted into an HTTP response with body <code>"Hello from Lambda!"</code> and status code 200, but what I actually see when I hit the endpoint with CURL is:</p>
<pre><code>$ curl -X POST https://redacted.execute-api.us-east-1.amazonaws.com/testing/
{"statusCode": 200, "body": "\"Hello from Lambda!\""}
</code></pre>
<p>So the return value of the function just seems to get converted to JSON and sent right through. I'm finding it very hard to Google for answers here. How do I set this thing up so I can control the status response, response body, and response headers from within the lambda function?</p>
|
<python><amazon-web-services><aws-lambda>
|
2022-12-11 21:00:06
| 1
| 6,464
|
Jack M
|
74,764,305
| 16,977,407
|
PackageNotFoundError even though required channel added to anaconda config?
|
<p>I am working with Ubuntu in WSL and tried to install the required packages for a repo with:</p>
<pre><code>$ conda install --file requirements.txt
</code></pre>
<p>I got a <code>PackageNotFoundError</code> for a bunch of different packages. I searched on <a href="https://anaconda.org" rel="nofollow noreferrer">anaconda.org</a> for the required channels and added them. But no matter which channels I add, I always get a <code>PackageNotFoundError</code> for the last two remaining packages:</p>
<pre><code>$ conda install --file requirements.txt
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- openssl==1.1.1=h7b6447c_0
- intel-openmp==2019.5=281
Current channels:
- https://conda.anaconda.org/fastchan/linux-64
- https://conda.anaconda.org/fastchan/noarch
- https://conda.anaconda.org/cctbx202208/linux-64
- https://conda.anaconda.org/cctbx202208/noarch
- https://conda.anaconda.org/pytorch/linux-64
- https://conda.anaconda.org/pytorch/noarch
- https://conda.anaconda.org/conda-forge/linux-64
- https://conda.anaconda.org/conda-forge/noarch
- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
</code></pre>
<p>Anaconda.org says <code>conda-forge, fastchan, cctbx202208</code> for openssl but even though I added all of them it's still not found.</p>
<p>The next thing I tried was to install it with pip:</p>
<pre><code>$ pip install openssl==1.1.1
ERROR: Could not find a version that satisfies the requirement openssl==1.1.1 (from versions: none)
ERROR: No matching distribution found for openssl==1.1.1
</code></pre>
<p>But pip detects none versions of this package. Same with intel-openmp, but pip does find packages but not the one I want <code>2019.5</code>:</p>
<pre><code>$ pip install intel-openmp==2019.5
ERROR: Could not find a version that satisfies the requirement intel-openmp==2019.5 (from versions: 2018.0.0, 2018.0.3, 2019.0, 2020.0.133, 2021.1.1, 2021.1.2, 2021.2.0, 2021.3.0, 2021.4.0, 2022.0.1, 2022.0.2, 2022.1.0, 2022.2.0, 2022.2.1)
ERROR: No matching distribution found for intel-openmp==2019.5
</code></pre>
<p>So my question is, is there another way to install the two packages or do they not exists anymore? Because the repo from which I got the code has its last commit from 3 years ago...</p>
<p><strong>Edit</strong>:</p>
<p>I tried this command:</p>
<pre><code>conda install -c anaconda openssl
</code></pre>
<p>and it installs openssl, but only the latest version, and then the code still says openssl is missing.</p>
<p>I also tried:</p>
<pre><code>conda install -c anaconda openssl=1.1.1
</code></pre>
<p>but I get the same error as in the beginning (PackageNotFoundError in the channels).</p>
<p><strong>Edit2</strong>:</p>
<p><a href="https://github.com/rohanchandra30/TrackNPred" rel="nofollow noreferrer">TrackNPred</a> is the repo I cloned and want to get working.</p>
<p>As for the required channel, I just searched for the package name on <a href="https://anaconda.org/search?q=intel-openmp" rel="nofollow noreferrer">anaconda.org</a> and add the channels i see to my anaconda config with:</p>
<pre><code>conda config --add channels new_channel
</code></pre>
<p>I'm not sure if I need the exact version of a package as it's listed in the requirements.txt or if the code also works with another version of the two missing packages.</p>
<p><strong>Edit3</strong>:</p>
<p>I changed in the <code>requirements.txt</code>:</p>
<pre><code>openssl=1.1.1*
intel-openmp=2019.5
</code></pre>
<p>and that worked.</p>
|
<python><linux><openssl><anaconda><conda>
|
2022-12-11 20:33:52
| 1
| 364
|
Gandhi
|
74,764,302
| 4,450,498
|
What event is associated with zooming an interactive matplotlib plot?
|
<p>As I understand it, when a user interacts with an interactive <code>matplotlib</code> plot (i.e. by clicking, pressing a key, etc.), an <a href="https://matplotlib.org/stable/api/backend_bases_api.html#matplotlib.backend_bases.Event" rel="nofollow noreferrer"><code>Event</code></a> is triggered, which can be <a href="https://matplotlib.org/stable/users/explain/event_handling.html#event-connections" rel="nofollow noreferrer">linked to an arbitrary callback function</a>, if desired.</p>
<p>Interactive <code>matplotlib</code> plots often come with a navigation toolbar that includes certain features like zooming and rubberband selection. My question is, is there a way to watch for these things from the backend and react when a user performs one of these actions using the nav bar/mouse?</p>
<p>I have gone through the list of event names on the <a href="https://matplotlib.org/stable/users/explain/event_handling.html" rel="nofollow noreferrer">event handling page</a> of matplotlib's documentation, as well as looked over the API reference for the <a href="https://matplotlib.org/stable/api/backend_bases_api.html#matplotlib.backend_bases.NavigationToolbar2" rel="nofollow noreferrer"><code>NavigationToolbar2</code></a> class, but I haven't been able to find any connection between the two. Is an event even the thing to be looking for, or is there some other way to detect these kinds of interactions?</p>
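<p>The closest thing I have found so far is the axes-level callback registry rather than a canvas event - a sketch of what I mean, assuming <code>'xlim_changed'</code> fires when the toolbar zoom changes the view (which is my guess, not something I have verified against the docs):</p>
<pre><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(10))

def on_xlim_changed(axes):
    print("new x-limits:", axes.get_xlim())

ax.callbacks.connect('xlim_changed', on_xlim_changed)
plt.show()
</code></pre>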
|
<python><matplotlib><event-handling>
|
2022-12-11 20:33:12
| 1
| 992
|
L0tad
|
74,764,189
| 1,302,551
|
Setting global variables from a dictionary within a function
|
<p>I am looking to use .yaml to manage several global parameters for a program. I would prefer to manage this from within a function, something like the below. However, it seems <code>globals().update()</code> does not work when included inside a function. Additionally, given the need to load an indeterminate number of variables with unknown names, using the basic <code>global</code> approach is not appropriate. Ideas?</p>
<p>.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>test:
- 12
- 13
- 14
- stuff:
john
test2: yo
</code></pre>
<p>Python</p>
<pre><code>import os
import yaml
def load_config():
with open(os.path.join(os.getcwd(), {file}), 'r') as reader:
vals = yaml.full_load(reader)
globals().update(vals)
</code></pre>
<p>Desired output</p>
<pre class="lang-none prettyprint-override"><code>load_config()
test
---------------
[12,13,14,{'stuff':'john'}]
test2
---------------
yo
</code></pre>
<p>What I get</p>
<pre class="lang-none prettyprint-override"><code>load_config()
test
---------------
NameError: name 'test' is not defined
test2
---------------
NameError: name 'test2' is not defined
</code></pre>
<p>Please note: <code>{file}</code> is for you, the code is not actually written that way. Also note that I understand the use of global is not normally recommended, however it is what is required for the answer of this question.</p>
|
<python><python-3.x><global-variables>
|
2022-12-11 20:17:38
| 1
| 1,356
|
WolVes
|
74,764,174
| 403,748
|
Pandas get rank on rolling with FixedForwardWindowIndexer
|
<p>I am using Pandas 1.51 and I'm trying to get the rank of each row in a dataframe in a rolling window that looks ahead by employing FixedForwardWindowIndexer. But I can't make sense of the results. My code:</p>
<pre><code>df = pd.DataFrame({"X":[9,3,4,5,1,2,8,7,6,10,11]})
window_size = 5
indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=window_size)
df.rolling(window=indexer).rank(ascending=False)
</code></pre>
<p>results:</p>
<pre><code> X
0 5.0
1 4.0
2 1.0
3 2.0
4 3.0
5 1.0
6 1.0
7 NaN
8 NaN
9 NaN
10 NaN
</code></pre>
<p>By my reckoning, it should look like:</p>
<pre><code> X
0 1.0 # based on the window [9,3,4,5,1], 9 is ranked 1st w/ascending = False
1 3.0 # based on the window [3,4,5,1,2], 3 is ranked 3rd
2 3.0 # based on the window [4,5,1,2,8], 4 is ranked 3rd
3 3.0 # etc
4 5.0
5 5.0
6 3.0
7 NaN
8 NaN
9 NaN
10 NaN
</code></pre>
<p>I am basing this on a backward-looking window, which works fine:</p>
<pre><code>>>> df.rolling(window_size).rank(ascending=False)
X
0 NaN
1 NaN
2 NaN
3 NaN
4 5.0
5 4.0
6 1.0
7 2.0
8 3.0
9 1.0
10 1.0
</code></pre>
<p>Any assistance is most welcome.</p>
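<p>The fallback I am using for now is an explicit loop over forward slices, which at least reproduces the column I expect (ties are ignored - an assumption on my part):</p>
<pre><code>vals = df["X"].to_numpy()
n = len(vals)
out = np.full(n, np.nan)
for i in range(n - window_size + 1):
    w = vals[i:i + window_size]
    out[i] = (w > w[0]).sum() + 1   # descending rank of the window's first element
df["fwd_rank"] = out
</code></pre>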
|
<python><pandas><dataframe><rank><rolling-computation>
|
2022-12-11 20:15:49
| 1
| 1,380
|
davej
|
74,764,170
| 16,056,216
|
How can I use my model for a single picture?
|
<p>I trained a model and have it saved now.
This is my code (it works fine, but I want to use it for a single picture):</p>
<pre><code>import numpy as np
import pandas as pd
import os
import cv2
import matplotlib.pyplot as plt
from tqdm import tqdm
from random import shuffle
from keras.utils import to_categorical
import pickle
def load_rand():
X=[]
dir_path='D:/dataset/train'
for sub_dir in tqdm(os.listdir(dir_path)):
print(sub_dir)
path_main=os.path.join(dir_path,sub_dir)
i=0
for img_name in os.listdir(path_main):
if i>=6:
break
img=cv2.imread(os.path.join(path_main,img_name))
img=cv2.resize(img,(100,100))
img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
X.append(img)
i+=1
return X
X=load_rand()
X=np.array(X)
X.shape
def show_subpot(X,title=False,Y=None):
if X.shape[0]==36:
f, ax= plt.subplots(6,6, figsize=(40,60))
list_fruits=['rottenoranges', 'rottenapples', 'freshbanana', 'freshoranges', 'rottenbanana', 'freshapples']
for i,img in enumerate(X):
ax[i//6][i%6].imshow(img, aspect='auto')
if title==False:
ax[i//6][i%6].set_title(list_fruits[i//6])
elif title and Y is not None:
ax[i//6][i%6].set_title(Y[i])
plt.show()
else:
print('Cannot plot')
show_subpot(X)
del X
def load_rottenvsfresh():
quality=['fresh', 'rotten']
X,Y=[],[]
z=[]
for cata in tqdm(os.listdir('D:/dataset/train')):
if quality[0] in cata:
path_main=os.path.join('D:/dataset/train',cata)
for img_name in os.listdir(path_main):
img=cv2.imread(os.path.join(path_main,img_name))
img=cv2.resize(img,(100,100))
img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
z.append([img,0])
else:
path_main=os.path.join('D:/dataset/train',cata)
for img_name in os.listdir(path_main):
img=cv2.imread(os.path.join(path_main,img_name))
img=cv2.resize(img,(100,100))
img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
z.append([img,1])
print('Shuffling your data.....')
shuffle(z)
for images, labels in tqdm(z):
X.append(images);Y.append(labels)
return X,Y
X,Y=load_rottenvsfresh()
Y=np.array(Y)
X=np.array(X)
y_ser=pd.Series(Y)
y_ser.value_counts()
def load_rottenvsfresh_valset():
quality=['fresh', 'rotten']
X,Y=[],[]
z=[]
for cata in tqdm(os.listdir('D:/dataset/test')):
if quality[0] in cata:
path_main=os.path.join('D:/dataset/test',cata)
for img_name in os.listdir(path_main):
img=cv2.imread(os.path.join(path_main,img_name))
img=cv2.resize(img,(100,100))
img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
z.append([img,0])
else:
path_main=os.path.join('D:/dataset/test',cata)
for img_name in os.listdir(path_main):
img=cv2.imread(os.path.join(path_main,img_name))
img=cv2.resize(img,(100,100))
img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
z.append([img,1])
print('Shuffling your data.....')
shuffle(z)
for images, labels in tqdm(z):
X.append(images);Y.append(labels)
return X,Y
X_val,Y_val=load_rottenvsfresh_valset()
Y_val=np.array(Y_val)
X_val=np.array(X_val)
y_ser=pd.Series(Y_val)
y_ser.value_counts()
import keras
from keras.layers import Dense,Dropout, Conv2D,MaxPooling2D , Activation, Flatten, BatchNormalization, SeparableConv2D
from keras.models import Sequential
X=X/255.0
X_val=X_val/255.0
from keras.models import Model, load_model
model=load_model('D:/dataset/rottenvsfresh.h5')
model.evaluate(X_val,Y_val)
new_model=load_model('D:/dataset/rotten.h5')
new_model.evaluate(X_val,Y_val)
plt.imshow(X_val[0])
model.predict(X_val[0].reshape(1,100,100,3))
show_subpot(X_val[-36*11:-36*10])
#model.predict(X_val[-36*11:-36*10])
y_pred =model.predict(X_val[-36*11:-36*10])
np.round(y_pred).astype(int)
</code></pre>
<p>I trained the model to classify rotten fruit from fresh fruit. I do not have enough experience in machine learning and this is my first project. I could do this in general, but now I want to try the model on single pictures as input. (I copy-pasted the code from Jupyter cells.)
I tried a lot to use the model for a single picture as input, but it doesn't work.</p>
<p>This is how I tried: (But it gives wrong output.)</p>
<pre><code># Load the image you want to classify
img_path = "C:/Users/d/Desktop/hjbjkh/lo.jpg"
img = load_image(img_path)
# Preprocess the image and prepare it for classification
img = np.array([img]) # Add a batch dimension
# Print a summary of the model's architecture
model.summary()
# Use the classification model to predict the class of the image
predictions = model.predict(img)
# Get the predicted class
predicted_class = np.argmax(predictions)
# If desired, display the image and the predicted class
print("Predicted class:", predicted_class)
plt.imshow(img[0])
</code></pre>
<p>Edited part based on what I got from the comments:</p>
<pre><code># Import PyTorch
import torch
# Load the image you want to classify
img_path = "C:/Users/d/Desktop/hjbjkh/ad.jpg"
img = cv2.imread(img_path)
# Resize the image to the desired dimensions
img = cv2.resize(img, (100, 100))
# Convert the image from BGR color space to RGB color space
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# Convert the image from a NumPy array to a PyTorch tensor
img = torch.from_numpy(img)
# Add a batch dimension to the image
img = img.unsqueeze(0)
# Convert the image from a PyTorch tensor to a NumPy array
img = img.numpy()
# Use the classification model to predict the class of the image
predictions = model.predict(img)
# Get the predicted class
predicted_class = np.argmax(predictions)
# If desired, display the image and the predicted class
print("Predicted class:", predicted_class)
plt.imshow(img[0])
</code></pre>
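<p>For what it's worth, this is the preprocessing I think should mirror the training pipeline (resize to 100x100, BGR to RGB, divide by 255, add a batch dimension). The 0/1 meaning of the output and the single-sigmoid output shape are my assumptions based on how the labels were built above:</p>
<pre><code>import cv2
import numpy as np

img_path = "C:/Users/d/Desktop/hjbjkh/lo.jpg"
img = cv2.imread(img_path)
img = cv2.resize(img, (100, 100))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = img.astype("float32") / 255.0          # same scaling as X and X_val
img = np.expand_dims(img, axis=0)            # shape (1, 100, 100, 3)

pred = model.predict(img)
predicted_class = int(np.round(pred[0][0]))  # assumption: 0 = fresh, 1 = rotten
print("Predicted class:", predicted_class)
</code></pre>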
|
<python><machine-learning><deep-learning><artificial-intelligence>
|
2022-12-11 20:15:12
| 0
| 362
|
Amirreza Hashemi
|
74,764,099
| 7,802,034
|
Having trouble solving this problem recursively
|
<p>Assume we are given a string variable named word.
We are playing a game with 2 players, player A, and player B.</p>
<p>Each player, at their respective turn (with player A always beginning), chooses either the first or the last letter of the string and gets points based on the ordered count of that letter in the ABC (i.e. a = 1, b = 2, c = 3 and so on, so it's ord(char) - 96 in python). Then the other player is given the same string but without the letter that was chosen.</p>
<p>At the end of the game, whoever has the most points wins.</p>
<p>We are given that player B's strategy is a greedy strategy, meaning he will always choose the best option from the current given options (so if the word was "abc" he would choose the letter "c" because it's better for him at the moment).</p>
<p>We define a string to be "good" if no matter what player A picks in his turn, at any given point in the game, player B will always win.</p>
<p>Need: I need to create a function that recursively finds whether a word is considered "good" (returns True), and if not it returns False.</p>
<p>Restriction: The only allowed input is the word, so the function would look like: is_word_good(word).</p>
<p>If needed, memoization is allowed.</p>
<p>I tried wrapping my head around this problem but I am having difficulties solving it recursively, specifically, I cannot find a way to efficiently pass/save the cumulative score between the function calls. Maybe I'm missing something that makes the problem simpler.</p>
<p>I would've added a code but I kept deleting my ideas and trying to redo them. My main idea was to (maybe) save using memoization every word and the respective score each player can get at that word, and the score will depend on the chosen letter currently + recursion of chosen letters later on in the recursion. I failed to implement it correctly (let alone efficiently).</p>
<p>Example of expected outputs:</p>
<pre><code>>>> is_word_good("abc")
False
>>> is_word_good("asa")
True
</code></pre>
<p>Help would be appreciated!</p>
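<p>To make my idea more concrete, here is a rough sketch of the recursive structure I keep circling around: a memoized helper that returns the worst-case margin (B's total minus A's total) with A to move, where A minimises and B replies greedily. It assumes B breaks ties by taking the first letter, and I have not convinced myself it covers every case:</p>
<pre><code>from functools import lru_cache

def is_word_good(word):
    def score(c):
        return ord(c) - 96

    @lru_cache(maxsize=None)
    def worst_margin(s):
        # B's total minus A's total for the rest of the game, A to move,
        # A minimising B's margin, B greedy.
        if not s:
            return 0
        branches = []
        for a_pick, rest in ((s[0], s[1:]), (s[-1], s[:-1])):
            margin = -score(a_pick)
            if rest:
                # greedy B: take the more valuable end (first letter on ties - an assumption)
                if score(rest[0]) >= score(rest[-1]):
                    margin += score(rest[0])
                    rest = rest[1:]
                else:
                    margin += score(rest[-1])
                    rest = rest[:-1]
            branches.append(margin + worst_margin(rest))
        return min(branches)

    return worst_margin(word) > 0

print(is_word_good("abc"))  # False in my hand-check
print(is_word_good("asa"))  # True in my hand-check
</code></pre>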
|
<python><string><recursion>
|
2022-12-11 20:05:48
| 1
| 551
|
EliKatz
|
74,764,012
| 470,081
|
Add HTML linebreaks with Python regex
|
<p>I need to add HTML linebreaks (<code><br /></code>) to a string at all line endings which are not followed by a blank line. This simple pattern works:</p>
<pre><code>body = re.sub(r'(.)\n(.)', r'\1<br />\2', body)
</code></pre>
<p>But I realized it will not work for an edge case where a line contains only a single character (because the character would have to be part of two different overlapping matches). So I tried the following pattern with lookaround subpatterns:</p>
<pre><code>body = re.sub(r'(?<=.)\n(?=.)', r'<br />', body)
</code></pre>
<p>This works as intended, except that the HTML tag is added after the linebreak (<code>\n</code>), and with an additional linebreak:</p>
<pre><code>linebreak
<br/>
!
<br/>
linebreak
<br/>
l
<br/>
works
</code></pre>
<p>I would expect that the matched linebreak is substituted by the HTML tag (thereby effectively removing the linebreaks from all matching areas) – why does the tag appear on a new line instead (i.e. increasing the number of linebreaks/lines)?</p>
<p>The equivalent pattern in vim does remove the linebreaks:</p>
<pre><code>s:\(.\)\zs\n\ze\(.\):\<br \/\>:ge
</code></pre>
|
<python><regex><regex-lookarounds>
|
2022-12-11 19:53:56
| 1
| 461
|
janeden
|
74,763,944
| 14,509,475
|
When indexing a DataFrame with a boolean mask, is it faster to apply the masks sequentially?
|
<p>Given a large DataFrame <code>df</code>, which is faster in general?</p>
<pre class="lang-py prettyprint-override"><code># combining the masks first
sub_df = df[(df["column1"] < 5) & (df["column2"] > 10)]
</code></pre>
<pre class="lang-py prettyprint-override"><code># applying the masks sequentially
sub_df = df[df["column1"] < 5]
sub_df = sub_df[sub_df["column2"] > 10]
</code></pre>
<p>The first approach only selects from the DataFrame once which may be faster, however, the second selection in the second example only has to consider a smaller DataFrame.</p>
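<p>A small benchmark sketch I would use to check this on representative data (the column values and sizes here are made up, so the result may not transfer to the real DataFrame):</p>
<pre><code>import timeit
import numpy as np
import pandas as pd

df = pd.DataFrame({"column1": np.random.randint(0, 10, 1_000_000),
                   "column2": np.random.randint(0, 20, 1_000_000)})

def combined():
    return df[(df["column1"] < 5) & (df["column2"] > 10)]

def sequential():
    sub = df[df["column1"] < 5]
    return sub[sub["column2"] > 10]

print("combined:  ", timeit.timeit(combined, number=20))
print("sequential:", timeit.timeit(sequential, number=20))
</code></pre>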
|
<python><pandas><dataframe>
|
2022-12-11 19:43:34
| 1
| 496
|
trivicious
|
74,763,652
| 14,374,599
|
Performing operations on column with nan's without removing them
|
<p>I currently have a data frame like so:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>treated</th>
<th>control</th>
</tr>
</thead>
<tbody>
<tr>
<td>9.5</td>
<td>9.6</td>
</tr>
<tr>
<td>10</td>
<td>5</td>
</tr>
<tr>
<td>6</td>
<td>0</td>
</tr>
<tr>
<td>6</td>
<td>6</td>
</tr>
</tbody>
</table>
</div>
<p>I want to apply get a log 2 ratio between treated and control i.e <code>log2(treated/control)</code>. However, the <code>math.log2()</code> ratio breaks, due to 0 values in the control column (a zero division). Ideally, I would like to get the log 2 ratio using method chaining, e.g a <code>df.assign()</code> and simply put nan's where it is not possible, like so:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>treated</th>
<th>control</th>
<th>log_2_ratio</th>
</tr>
</thead>
<tbody>
<tr>
<td>9.5</td>
<td>9.6</td>
<td>-0.00454</td>
</tr>
<tr>
<td>10</td>
<td>5</td>
<td>0.301</td>
</tr>
<tr>
<td>6</td>
<td>0</td>
<td>nan</td>
</tr>
<tr>
<td>6</td>
<td>6</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>I have managed to do this in an extremely round-about way, where I have:</p>
<ul>
<li>made a column <code>ratio</code> which is <code>treated/control</code></li>
<li>done <code>new_df = df.dropna()</code> on this dataframe</li>
<li>applied the log 2 ratio to this.</li>
<li>Left joined it back to it's the original df.</li>
</ul>
<p>As always, any help is very much appreciated :)</p>
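<p>The kind of one-step chain I am hoping for looks roughly like this - a sketch, assuming it is acceptable to turn the zeros in <code>control</code> into NaN before dividing:</p>
<pre><code>import numpy as np

df = df.assign(
    log_2_ratio=lambda d: np.log2(d["treated"] / d["control"].replace(0, np.nan))
)
</code></pre>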
|
<python><pandas><nan><method-chaining>
|
2022-12-11 19:01:51
| 3
| 497
|
KLM117
|
74,763,527
| 16,142,058
|
how to install requirement with pip
|
<p>I have a problem with <strong>pip</strong>. I have created my environment with miniconda. I have run this command</p>
<pre><code>conda env create --file enviroment.yml
</code></pre>
<p>enviroment.yml</p>
<pre><code>name: data-science-handbook
channels:
- conda-forge
dependencies:
- python=3.10
</code></pre>
<p>requirements.txt</p>
<pre><code>numpy==1.11.1
pandas==0.18.1
scipy==0.17.1
scikit-learn==0.17.1
scikit-image==0.12.3
pillow==3.4.2
matplotlib==1.5.1
seaborn==0.7.0
jupyter
notebook
line_profiler
memory_profiler
numexpr
pandas-datareader
netcdf4
</code></pre>
<p>I have searched Google for a couple of hours, but I have not solved it.</p>
<p>After run <strong>pip install -r requirements.txt</strong></p>
<pre><code>Collecting numpy==1.11.1
Downloading numpy-1.11.1.zip (4.7 MB)
---------------------------------------- 4.7/4.7 MB 6.3 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting pandas==0.18.1
Downloading pandas-0.18.1.zip (8.5 MB)
---------------------------------------- 8.5/8.5 MB 9.2 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting scipy==0.17.1
Downloading scipy-0.17.1.zip (13.8 MB)
---------------------------------------- 13.8/13.8 MB 9.8 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting scikit-learn==0.17.1
Downloading scikit-learn-0.17.1.tar.gz (7.9 MB)
---------------------------------------- 7.9/7.9 MB 10.1 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting scikit-image==0.12.3
Downloading scikit-image-0.12.3.tar.gz (20.7 MB)
---------------------------------------- 20.7/20.7 MB 8.2 MB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
To install scikit-image from source, you will need numpy.
Install numpy with pip:
pip install numpy
Or use your operating system package manager. For more
details, see http://scikit-image.org/docs/stable/install.html
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>I tried install with conda. <strong>conda install --file requirements.txt</strong> and still error</p>
<pre><code>Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- matplotlib==1.5.1
- scikit-image==0.12.3
- pandas==0.18.1
- scikit-learn==0.17.1
- numpy==1.11.1
- pillow==3.4.2
- scipy==0.17.1
- seaborn==0.7.0
Current channels:
- https://repo.anaconda.com/pkgs/main/win-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/win-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/msys2/win-64
- https://repo.anaconda.com/pkgs/msys2/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
</code></pre>
<p>How can I fix it? Please help me. Thanks</p>
|
<python>
|
2022-12-11 18:44:55
| 0
| 497
|
summerisbetterthanwinter
|
74,763,451
| 5,850,635
|
ValueError: x must be a label or position when I try to Plot 2 columns in x axis grouped in area stacked chart using Pandas
|
<p>I have a set of data with 3 columns: Label, Year and Total. The total count is grouped by label and year.</p>
<pre><code>+--------------------+-------+-------+
| Label| Year| Total|
+--------------------+-------+-------+
| FTP|02/2018| 193360|
| BBBB |01/1970| 14|
| BBBB |02/2018|4567511|
| SSSS|02/2018| 187589|
| Dddd|02/2018| 41508|
</code></pre>
<p>I want to plot the data like the image below.
<a href="https://i.sstatic.net/6SF1d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6SF1d.png" alt="enter image description here" /></a>
How can I achieve this with a stacked area chart in pandas? (My x-axis should show both the label and year values, and the y-axis should plot the totals for that grouping.)</p>
<p>The code I tried with seaborn as well normal</p>
<pre><code>dF.plot(figsize=(20,8), x =['Label','Year'], y ='Total', kind = 'area', stacked = True)
ax = df.plot(x="label", y="Total", legend=False, figsize=(10,8))
ax2 = ax.twinx()
df.plot(x="label", y="Dst_Port", ax=ax2, legend=False, color="r", figsize=(10,8))
ax.figure.legend()
plt.show()
</code></pre>
<p>My current graph only plots with a single x-axis column value.</p>
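<p>The rough direction I have been trying is to build one combined label+year column and plot against that - a sketch only, since I am not sure this is how an area chart is meant to handle categorical x values:</p>
<pre><code>plot_df = (df.assign(x=df["Label"].astype(str).str.strip() + " " + df["Year"].astype(str))
             .groupby("x")["Total"].sum())

ax = plot_df.plot(kind="area", figsize=(20, 8), stacked=True)
ax.set_xticks(range(len(plot_df)))
ax.set_xticklabels(plot_df.index, rotation=45)
</code></pre>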
|
<python><pandas><dataframe><matplotlib><pyspark>
|
2022-12-11 18:31:33
| 1
| 1,032
|
Kavya Shree
|
74,763,392
| 14,293,274
|
Error when recording sound with sounddevice
|
<p>I want to use sounddevice to capture (record?) audio that is coming out of my speakers. My speakers have two channels.</p>
<p>This is my code (which I found here <a href="https://realpython.com/playing-and-recording-sound-python/#python-sounddevice_1" rel="nofollow noreferrer">https://realpython.com/playing-and-recording-sound-python/#python-sounddevice_1</a>):</p>
<pre><code>import sounddevice as sd
from scipy.io.wavfile import write
fs = 44100/2 # Sample rate
seconds = 3 # Duration of recording
myrecording = sd.rec(int(seconds * fs), samplerate=fs, channels=2)
sd.wait() # Wait until recording is finished
write('output.wav', fs, myrecording) # Save as WAV file
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 24, in <module>
myrecording = sd.rec(int(seconds * fs), samplerate=fs, channels=2)
File "/usr/local/lib/python3.7/site-packages/sounddevice.py", line 277, in rec
ctx.input_dtype, callback, blocking, **kwargs)
File "/usr/local/lib/python3.7/site-packages/sounddevice.py", line 2587, in start_stream
**kwargs)
File "/usr/local/lib/python3.7/site-packages/sounddevice.py", line 1422, in __init__
**_remove_self(locals()))
File "/usr/local/lib/python3.7/site-packages/sounddevice.py", line 901, in __init__
f'Error opening {self.__class__.__name__}')
File "/usr/local/lib/python3.7/site-packages/sounddevice.py", line 2747, in _check
raise PortAudioError(errormsg, err)
sounddevice.PortAudioError: Error opening InputStream: Invalid number of channels [PaErrorCode -9998]
</code></pre>
<p>I'm on a Mac M1 running rosetta-emulated python.</p>
<p>When I set the channels to '1' it works, but it only records from one channel.</p>
<p>I tried to get the index of my speakers by running <code>print(sd.query_devices())</code>, this was the output:</p>
<pre><code>> 0 MacBook Pro Microphone, Core Audio (1 in, 0 out)
< 1 MacBook Pro Speakers, Core Audio (0 in, 2 out)
2 Microsoft Teams Audio, Core Audio (2 in, 2 out)
</code></pre>
<p>So I tried to manually set this index, like so:</p>
<pre><code>myrecording = sd.rec(int(seconds * fs), samplerate=fs, channels=2, device=1)
</code></pre>
<p>But this also produced the same error.</p>
|
<python><python-sounddevice>
|
2022-12-11 18:23:53
| 1
| 594
|
koegl
|
74,763,112
| 20,078,696
|
How to fix the screen size for a matplotlib plot
|
<p>I am trying to plot a tangent function (see below) and am trying to stop the plot 'jumping around':</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
lowerBound = 0
plt.ion()
while True:
x = np.arange(lowerBound, lowerBound + 50, 0.1)
y = np.tan(x / 2)
plt.cla()
plt.plot(x, y)
plt.pause(0.01)
lowerBound += 1
</code></pre>
<p>I tried</p>
<pre><code>plt.figure(figsize=(5, 4))
</code></pre>
<p>and</p>
<pre><code>fig = plt.figure()
fig.set_figheight(4)
</code></pre>
<p>but they just made the plot smaller, instead of fixing the height.</p>
<p>I was thinking I could do:</p>
<p><code>set y to maximum of -200 and y</code> and <code>set y to minimum of 200 and y</code>,
but I don't know how to get the min/max for all items in an array.</p>
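<p>A sketch of what I mean by capping the values - <code>np.clip</code> for the data and <code>plt.ylim</code> for the axes (assuming either would be an acceptable way to stop the jumping):</p>
<pre><code>plt.cla()
plt.plot(x, np.clip(y, -200, 200))  # cap every sample at +/-200
plt.ylim(-200, 200)                 # keep the axes fixed as well
plt.pause(0.01)
</code></pre>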
|
<python><matplotlib>
|
2022-12-11 17:43:58
| 1
| 789
|
sbottingota
|
74,762,955
| 7,386,609
|
Managing contents of defaultdict with concurrency in python
|
<p>Some questions have looked at non-nested <code>defaultdict</code> behavior when multiprocessing:</p>
<p><a href="https://stackoverflow.com/questions/9256687/using-defaultdict-with-multiprocessing">Using defaultdict with multiprocessing?</a></p>
<p><a href="https://stackoverflow.com/questions/9372007/python-defaultdict-behavior-possible-with-multiprocessing">Python defaultdict behavior possible with multiprocessing?</a></p>
<p>and it seems that managing something nested like <code>defaultdict(list)</code> isn't an entirely simple process, let alone something more complex like <code>defaultdict(lambda: defaultdict(list))</code></p>
<pre><code>import concurrent.futures
from collections import defaultdict
import multiprocessing as mp
from multiprocessing.managers import BaseManager, DictProxy, ListProxy
import numpy as np
def called_function1(hey, i, yo):
yo[i].append(hey)
class EmbeddedManager(BaseManager):
pass
def func1():
emanager = EmbeddedManager()
emanager.register('defaultdict', defaultdict, DictProxy)
emanager.start()
ddict = emanager.defaultdict(list)
with concurrent.futures.ProcessPoolExecutor(8) as executor:
for i in range(10):
ind = np.random.randint(2)
executor.submit(called_function1, i, ind, ddict)
for k, v in ddict.items():
print(k, v)
emanager.shutdown()
</code></pre>
<p>trying to register a normal <code>defaultdict</code> will fail for the contents inside it, as they aren't being managed, and only the keys are retained:</p>
<pre><code>func1()
1 []
0 []
</code></pre>
<p>a different approach i tried was to add a list within the function, which would be a reasonable compromise</p>
<pre><code>def called_function2(hey, i, yo):
if i not in yo:
yo[i] = []
yo[i].append(hey)
def func2():
manager = mp.Manager()
ddict = manager.dict()
with concurrent.futures.ProcessPoolExecutor(8) as executor:
for i in range(10):
ind = np.random.randint(2)
executor.submit(called_function2, i, ind, ddict)
for k, v in ddict.items():
print(k, v)
</code></pre>
<p>but it still isn't being managed</p>
<pre><code>func2()
1 []
0 []
</code></pre>
<p>I can get this to work by forcing a managed list inside a dictionary before the function is called</p>
<pre><code>def called_function3(hey, i, yo):
yo[i].append(hey)
def func3():
manager = mp.Manager()
ddict = manager.dict()
with concurrent.futures.ProcessPoolExecutor(8) as executor:
for i in range(10):
ind = np.random.randint(2)
if ind not in ddict:
ddict[ind] = manager.list()
executor.submit(called_function2, i, ind, ddict)
for k, v in ddict.items():
print(k, v)
</code></pre>
<p>But I wouldn't prefer this method because i don't necessarily know if I need this dictionary key to even exist before the function is ran</p>
<pre><code>func3()
0 [0, 2, 3, 4, 6, 8]
1 [1, 5, 7, 9]
</code></pre>
<p>trying to pass the manager to the function so it can create a managed list on the fly doesn't work</p>
<pre><code>def called_function4(hey, i, yo, man):
if i not in yo:
yo[i] = man.list()
yo[i].append(hey)
def func4():
manager = mp.Manager()
ddict = manager.dict()
with concurrent.futures.ProcessPoolExecutor(8) as executor:
futures = []
for i in range(10):
ind = np.random.randint(2)
            futures.append(executor.submit(called_function4, i, ind, ddict, manager))
for f in concurrent.futures.as_completed(futures):
print(f.result())
for k, v in ddict.items():
print(k, v)
</code></pre>
<pre><code>func4()
TypeError: Pickling an AuthenticationString object is disallowed for security reasons
</code></pre>
<p>and trying to create a new manager within the called function</p>
<pre><code>def called_function5(hey, i, yo):
if i not in yo:
yo[i] = mp.Manager().list()
yo[i].append(hey)
def func5():
manager = mp.Manager()
ddict = manager.dict()
with concurrent.futures.ProcessPoolExecutor(8) as executor:
futures = []
for i in range(10):
ind = np.random.randint(2)
futures.append(executor.submit(called_function5, i, ind, ddict))
for f in concurrent.futures.as_completed(futures):
print(f.result())
for k, v in ddict.items():
print(k, v)
</code></pre>
<p>raises another error</p>
<pre><code>func5()
BrokenPipeError: [Errno 32] Broken pipe
</code></pre>
<p>is there any better way of doing this?</p>
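<p>One alternative I am weighing is to avoid shared state entirely and merge in the parent process - a sketch (it assumes the worker can simply return the key and value instead of mutating a shared dict):</p>
<pre><code>import concurrent.futures
from collections import defaultdict
import numpy as np

def called_function6(hey, i):
    return i, hey                      # no shared dict, just return the pieces

def func6():
    ddict = defaultdict(list)
    with concurrent.futures.ProcessPoolExecutor(8) as executor:
        futures = [executor.submit(called_function6, i, np.random.randint(2))
                   for i in range(10)]
        for f in concurrent.futures.as_completed(futures):
            k, v = f.result()
            ddict[k].append(v)
    for k, v in ddict.items():
        print(k, v)
</code></pre>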
|
<python><multithreading><concurrency><multiprocessing><defaultdict>
|
2022-12-11 17:24:17
| 1
| 312
|
Estif
|
74,762,905
| 2,302,911
|
Mediapipe Pose: How to slice the pose.process(<imageRGB>).pose_landmarks? in order to draw only selected keypoints?
|
<p>Given the following example:</p>
<pre><code>import cv2              #OpenCV is the library that we will be using for image processing
import mediapipe as mp  #Mediapipe is the framework that will allow us to get our pose estimation
import time

mpDraw = mp.solutions.drawing_utils
mpPose = mp.solutions.pose
pose = mpPose.Pose()
#pose = mpPose.Pose(static_image_mode = False, upper_body_only = True) #ONLY UPPER_BODY_TRACKING

#cap = cv2.VideoCapture(0)
cap = cv2.VideoCapture('PoseVideos/1_girl_choreography.mp4')

pTime = 0 #previous time
while True:
success, img = cap.read() #that will give it our image and then we can write the cv2.imshow()
imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) #convert our image to RGB because Mediapipe use that format
results = pose.process(imgRGB) #we are simply going to send this image to our model
#print(enumerate(results.pose_landmarks.landmark)) #<enumerate object at 0x0000012312DD1A00>
#so then we will check if it is detected or not
if results.pose_landmarks:
mpDraw.draw_landmarks(img, results.pose_landmarks, mpPose.POSE_CONNECTIONS)
for id, lm in enumerate(results.pose_landmarks.landmark):
h, w, c = img.shape #get dimensions(h height, w width) and the c channel of image
print(id)
print(lm)
cx, cy = int(lm.x * w), int(lm.y * h)
cv2.circle(img, (cx, cy), 5, (255, 0, 0), cv2.FILLED)
cTime = time.time()
fps = 1 / (cTime - pTime)
pTime = cTime
cv2.putText(img, str(int(fps)), (70, 50), cv2.FONT_HERSHEY_PLAIN, 3, (255, 0, 0), 3)
cv2.imshow("Image", img)
cv2.waitKey(1)
</code></pre>
<p>I do not want to draw all the keypoints in <code>results.pose_landmarks</code>, but I would like to remove the first 10 points.</p>
<p>Basically I would like to do the following</p>
<pre><code>mpDraw.draw_landmarks(img, results.pose_landmarks[10:], mpPose.POSE_CONNECTIONS)
</code></pre>
<p>Doing that, I get the following error:</p>
<pre><code>landmark_list = keypoints.pose_landmarks[10:],
TypeError: 'NormalizedLandmarkList' object is not subscriptable
</code></pre>
<p>Any idea how to remove the first 10 elements from <code>pose_landmarks</code>?</p>
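<p>The workaround I am currently leaning towards is to skip <code>mpDraw</code> for the points and draw the circles myself, ignoring the first 10 ids (the connections would still need separate handling, which I am unsure about) - a sketch:</p>
<pre><code>if results.pose_landmarks:
    h, w, c = img.shape
    for id, lm in enumerate(results.pose_landmarks.landmark):
        if id < 10:
            continue                      # drop the first 10 keypoints
        cx, cy = int(lm.x * w), int(lm.y * h)
        cv2.circle(img, (cx, cy), 5, (255, 0, 0), cv2.FILLED)
</code></pre>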
|
<python><typeerror><mediapipe><pose>
|
2022-12-11 17:17:56
| 1
| 348
|
Dave
|
74,762,806
| 9,974,205
|
I cannot change the values of a column using python pandas
|
<p>I am working with the [UCI adult dataset][1]. I have added a row as a header to facilitate operation. I need to change the last column, which can take two values, '<=50k' and '>50k' and whose name is 'etiquette'. I have tried the following</p>
<pre><code>num_datos.loc[num_datos.loc[:,"etiquette"]=="<=50K", "etiquette"]=1
num_datos.loc[num_datos.loc[:,"etiquette"]==">50K", "etiquette"]=0
</code></pre>
<p>and the following</p>
<pre><code>num_datos['etiquette'].replace(['<=50K'], 1)
num_datos['etiquette'].replace(['>50K'], 0)
</code></pre>
<p>However, this seems to do nothing, since if I then execute</p>
<pre><code>print(num_datos.etiquette[0])
</code></pre>
<p>I still get a value of <code> <=50K</code>. Is there a way for me to replace the values of the column in question?</p>
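<p>In case the problem is the leading space in the raw values (just a guess on my part) plus the fact that <code>replace</code> returns a new Series unless it is assigned back, this is the kind of thing I have been trying:</p>
<pre><code># assumption: the raw file has values like " <=50K" with a leading space
num_datos["etiquette"] = num_datos["etiquette"].str.strip()
num_datos["etiquette"] = num_datos["etiquette"].replace({"<=50K": 1, ">50K": 0})
print(num_datos.etiquette[0])
</code></pre>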
|
<python><pandas><string><variables><integer>
|
2022-12-11 17:04:33
| 1
| 503
|
slow_learner
|
74,762,737
| 10,516,773
|
How to import Python Class or Function from parent directory?
|
<p>I have this project structure,</p>
<pre><code>.\src
.\api
test.py
.\config
config.py
app.py
</code></pre>
<p>When I'm trying to import a function or class from <code>test.py</code> inside <code>config.py</code> using this statement</p>
<p><code>from src.api.test import tes_func</code></p>
<p>I get this error</p>
<p><code>ModuleNotFoundError: No module named 'src.api'</code></p>
<p>If I use these 2 lines, I can import using</p>
<p><code>from api.test import tes_func</code>.</p>
<pre><code>import sys
sys.path.append("../")
</code></pre>
<p>Why is it not working when I use <code>from src.api.test import test_func</code>?
Is there a way to import Python files without <code>sys.path.append("../")</code>?</p>
<p>Thanks in advance.</p>
|
<python>
|
2022-12-11 16:54:54
| 1
| 1,120
|
pl-jay
|
74,762,618
| 16,527,170
|
Sum of Maximum Postive and Negative Consecutive rows in pandas
|
<p>I have a dataframe <code>df</code> as below:</p>
<pre><code># Import pandas library
import pandas as pd
# initialize list elements
data = [10,-20,30,40,-50,60,12,-12,11,1,90,-20,-10,-5,-4]
# Create the pandas DataFrame with column name is provided explicitly
df = pd.DataFrame(data, columns=['Numbers'])
# print dataframe.
df
</code></pre>
<p>I want the sum of the values in the longest consecutive positive run and in the longest consecutive negative run.</p>
<p>I am able to get the counts of the longest consecutive positive and negative runs, but I am unable to get their sums using the code below.</p>
<p>my code:</p>
<pre><code>streak = df['Numbers'].to_list()
from collections import defaultdict
from itertools import groupby
counter = defaultdict(list)
for key, val in groupby(streak, lambda ele: "plus" if ele >= 0 else "minus"):
counter[key].append(len(list(val)))
lst = []
for key in ('plus', 'minus'):
lst.append(counter[key])
print("Max Pos Count " + str(max(lst[0])))
print("Max Neg Count : " + str(max(lst[1])))
</code></pre>
<p>Current Output:</p>
<pre><code>Max Pos Count 3
Max Neg Count : 4
</code></pre>
<p>I am struggling to get the sum of the longest consecutive positive and negative runs.</p>
<p>Expected Output:</p>
<pre><code>Sum Pos Max Consecutive: 102
Sum Neg Max Consecutive: -39
</code></pre>
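<p>A pandas-side sketch of what I am attempting, which groups consecutive same-sign runs and then picks the longest run of each sign (if two runs tie in length, this just takes the first one - an assumption):</p>
<pre><code>s = df["Numbers"]
groups = (s.ge(0) != s.ge(0).shift()).cumsum()      # new group id whenever the sign flips
runs = s.groupby(groups).agg(["sum", "size", "first"])

pos = runs[runs["first"] >= 0]
neg = runs[runs["first"] < 0]
print("Sum Pos Max Consecutive:", pos.loc[pos["size"].idxmax(), "sum"])
print("Sum Neg Max Consecutive:", neg.loc[neg["size"].idxmax(), "sum"])
</code></pre>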
|
<python><pandas><dataframe>
|
2022-12-11 16:39:50
| 1
| 1,077
|
Divyank
|
74,762,578
| 4,225,430
|
How to integrate 'append' and 'print' Python code in a line?
|
<p>python newbie here. I wonder if I can write the <code>append</code> and <code>print</code> syntax in a line instead of two. I tried twice but failed, as shown below:</p>
<pre><code>a = [1,2,3,4]
y = 7
a.append(y)
print(a) #correct [1, 2, 3, 4, 7]
</code></pre>
<p>(1)</p>
<pre><code>print(a.append(y)) #retuns None
</code></pre>
<p>(2)</p>
<pre><code>print(a = a.append(y)) #returns 'a' is an invalid keyword argument for print()
</code></pre>
<p>A more important question is: May I know the reason of failure? Thank you for your answer.</p>
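<p>My current understanding (which may be wrong) is that <code>list.append</code> returns <code>None</code>, so printing its return value can never show the list. The closest one-liners I can think of are:</p>
<pre><code>a = [1, 2, 3, 4]
y = 7

print(a + [y])         # prints the combined list, but does not modify a
a.append(y); print(a)  # two statements on one line; a is modified
</code></pre>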
|
<python><append>
|
2022-12-11 16:34:28
| 2
| 393
|
ronzenith
|
74,762,418
| 1,219,322
|
Supervised ML to extract keywords from short texts
|
<p>I have a lot of training data ready and am looking for an ML algorithm to replace the current algorithms.
The input is a paragraph containing a short biography of a person, and the output is their date of birth and name.
What algorithm can output a set of keywords based on short texts?</p>
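<p>For concreteness, this is the kind of output I am after - a rough sketch using spaCy's pretrained NER (assuming the <code>en_core_web_sm</code> model is installed; the biography text is made up), though I am open to any supervised approach that uses my labelled data instead:</p>
<pre><code>import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Jane Doe was born on 4 March 1971 in Lyon and studied physics in Paris.")
for ent in doc.ents:
    print(ent.text, ent.label_)   # PERSON and DATE spans are the keywords I need
</code></pre>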
|
<python><machine-learning><supervised-learning>
|
2022-12-11 16:14:21
| 0
| 1,998
|
dirtyw0lf
|
74,762,417
| 14,789,957
|
Get body from POST in Flask
|
<p>I am very new to Flask and have the following Flask code:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
from flask import request
app = Flask(__name__)
@app.route('/')
def home():
return 'Hello, World!'
@app.route('/test/<email>', methods = ['GET', 'POST'], strict_slashes=False)
def test(email):
if request.method == 'GET':
print("USERS_EMAIL: ", email)
return 'GET REQUEST', 202
if request.method == 'POST':
print("USERS_EMAIL: ", email)
body = request.json['email_subject']
print("BODY: ", body)
return '', 201
return 'ERROR', 400
</code></pre>
<p>I need to send a JSON object from my React frontend and retrieve it in Flask. I am sending the following body in Postman, but I get this error every time (at the bottom of the post):</p>
<p><strong>BODY:</strong></p>
<pre class="lang-json prettyprint-override"><code>{
"email_subject": "Postman test",
"title": "Lysenko Ilya Igorevich"
}
</code></pre>
<p><strong>Things I have tried:</strong></p>
<p>I found these two links (<a href="https://dev.to/dev_elie/sending-data-from-react-to-flask-apm" rel="nofollow noreferrer">link1</a>, <a href="https://javascript.plainenglish.io/sending-a-post-to-your-flask-api-from-a-react-js-app-6496692514e" rel="nofollow noreferrer">link2</a>) which seemed to be saying two different things, so first I tried both of these options:</p>
<pre class="lang-py prettyprint-override"><code>body = request.json['body']
</code></pre>
<pre class="lang-py prettyprint-override"><code>body = request.get_json()
</code></pre>
<p>neither which worked. I was then wondering whether the name in the square brackets refers not to the body but to the key in the body, so I then tried the following:</p>
<pre class="lang-py prettyprint-override"><code>body = request.json['email_subject']
</code></pre>
<p>which did not work either. Any suggestions?</p>
<p><em>(screenshot of my postman error)</em></p>
<p><a href="https://i.sstatic.net/TK8dE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TK8dE.png" alt="enter image description here" /></a></p>
|
<python><flask>
|
2022-12-11 16:14:18
| 0
| 785
|
yem
|
74,762,188
| 10,829,044
|
Hurdle models - gridsearchCV
|
<p>I am currently trying to build a hurdle model - zero inflated regressor to predict the revenue from each of our customers.</p>
<p>We use zero inflated regressor because most (80%) of our customers have 0 as revenue and only 20% have revenue > 0.</p>
<p>So, we build two models like as shown below</p>
<pre><code>zir = ZeroInflatedRegressor(
classifier=ExtraTreesClassifier(),
regressor=RandomForestRegressor()
)
</code></pre>
<p>And I do gridsearchCV to improve the performance of our model. So, I do the below</p>
<pre><code>from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(
estimator=zir,
param_grid={
'classifier__n_estimators': [100,200,300,400,500],
'classifier__bootstrap':[True, False],
'classifier__max_features': ['sqrt','log2',None],
'classifier__max_depth':[2,4,6,8,None],
'regressor__n_estimators': [100,200,300,400,500],
'regressor__bootstrap':[True, False],
'regressor__max_features': ['sqrt','log2',None],
'regressor__max_depth':[2,4,6,8,None]
},
scoring = 'neg_mean_squared_error'
)
</code></pre>
<p>My question is: how does GridSearchCV work in the case of hurdle models?</p>
<p>Do hyperparameters from the classifier combine with those from the regressor to generate candidate pairs, or do only hyperparameters within the same model type combine?</p>
<p>Put simply, would the classifier have 150 combinations of hyperparameters and the regressor separately have another 150?</p>
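<p><em>For illustration: GridSearchCV takes the Cartesian product over every list in <code>param_grid</code>, so classifier and regressor settings are combined with each other rather than searched separately. A quick sketch to count the candidates for the grid above:</em></p>
<pre><code>from sklearn.model_selection import ParameterGrid

param_grid = {
    'classifier__n_estimators': [100, 200, 300, 400, 500],
    'classifier__bootstrap': [True, False],
    'classifier__max_features': ['sqrt', 'log2', None],
    'classifier__max_depth': [2, 4, 6, 8, None],
    'regressor__n_estimators': [100, 200, 300, 400, 500],
    'regressor__bootstrap': [True, False],
    'regressor__max_features': ['sqrt', 'log2', None],
    'regressor__max_depth': [2, 4, 6, 8, None],
}

# 150 classifier settings x 150 regressor settings = 22500 candidates,
# each refitted once per CV fold
print(len(ParameterGrid(param_grid)))  # 22500
</code></pre>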
|
<python><machine-learning><regression><cross-validation><grid-search>
|
2022-12-11 15:43:45
| 1
| 7,793
|
The Great
|
74,761,823
| 13,954,738
|
How can I pick the value from list of dicts using python
|
<p>I have a sample dict in the format below. I want to append more values to the <code>headers</code> lists. How should I proceed?</p>
<pre><code>data = {'heading': 'Sample Data', 'MetaData': [{'M1': {'headers': ['age', 'roll_no'], 'values': [15, 5]}}, {'M2': {'headers': [], 'values': []}}]}
</code></pre>
<p>To access one entry, I can try something like <code>data['MetaData'][0]['headers'].append('class')</code>. It works this way, but I want to access it via a loop.</p>
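<p><em>A minimal loop-based sketch, assuming the extra nesting level (<code>'M1'</code>/<code>'M2'</code>) between each list element and <code>headers</code> shown in the sample data has to be traversed; the appended values are placeholders:</em></p>
<pre><code>data = {'heading': 'Sample Data',
        'MetaData': [{'M1': {'headers': ['age', 'roll_no'], 'values': [15, 5]}},
                     {'M2': {'headers': [], 'values': []}}]}

# Walk every metadata entry and extend its headers/values lists
for entry in data['MetaData']:
    for name, payload in entry.items():
        payload['headers'].append('class')
        payload['values'].append(10)

print(data)
</code></pre>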
|
<python>
|
2022-12-11 14:52:19
| 1
| 336
|
ninjacode
|
74,761,701
| 11,459,517
|
How to put measures on x-axis with date and time combine in python using tkinter and matplotlib and numpy
|
<p>I have a small GUI application using tkinter, matplotlib and numpy. The user uploads an Excel file and gets a multi-line graph. The main problem is that I can't get the right values on the x-axis: the combination of date and time should be the x-axis measure, but only the year is shown. Here is my code:</p>
<pre><code>import tkinter as tk
from tkinter import filedialog
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
import pandas as pd
import matplotlib.dates
import numpy as np
from datetime import datetime
root= tk.Tk()
canvas1 = tk.Canvas(root, width = 1000, height = 300)
canvas1.pack()
label1 = tk.Label(root, text='Data Analyser')
label1.config(font=('Arial', 20))
canvas1.create_window(400, 50, window=label1)
def getExcel1 ():
global df
import_file_path = filedialog.askopenfilename()
df = pd.read_excel (import_file_path)
    daytime = df.apply(lambda r: datetime.combine(r['Day'], r['Time']), axis=1)  # datetime.combine instead of the deprecated pd.datetime
global bar1
x = daytime
y1 = df['Count']
y2 = df['Month']
figure1 = Figure(figsize=(8,3), dpi=100)
subplot1 = figure1.add_subplot(111)
subplot2 = figure1.add_subplot(111)
bar1 = FigureCanvasTkAgg(figure1, root)
bar1.get_tk_widget().pack(side=tk.LEFT, fill=tk.BOTH, expand=0)
subplot1.plot(x, y1, color='green', linestyle='solid', linewidth = 2, marker='o',
markerfacecolor='green', markersize=8, label='y1')
subplot2.plot(x, y2, color='red', linestyle='solid', linewidth = 2, marker='o',
markerfacecolor='red', markersize=8, label='y2')
def clear_charts():
bar1.get_tk_widget().pack_forget()
browseButton_Excel1 = tk.Button(text='Load File...', command=getExcel1, bg='green', fg='white', font=('helvetica', 12, 'bold'))
canvas1.create_window(400, 180, window=browseButton_Excel1)
button2 = tk.Button (root, text='Clear Chart', command=clear_charts, bg='green', font=('helvetica', 11, 'bold'))
canvas1.create_window(400, 220, window=button2)
button3 = tk.Button (root, text='Exit!', command=root.destroy, bg='green', font=('helvetica', 11, 'bold'))
canvas1.create_window(400, 260, window=button3)
root.mainloop()
</code></pre>
<p>The x-axis labels should look like '2021-09-06 16:35:00', but only '2021' is shown.
Here is my Excel file data:
<a href="https://i.sstatic.net/wWFKJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wWFKJ.png" alt="enter image description here" /></a></p>
<p>Please also suggest how to add a legend, axis labels, and a figure title. Please help me out with this problem.</p>
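<p><em>For reference, a hedged sketch of the kind of axis formatting usually needed here — a <code>DateFormatter</code> on the x-axis plus explicit labels, legend and title, placed inside <code>getExcel1</code> after the two <code>plot</code> calls; the format string and titles are placeholders:</em></p>
<pre><code>import matplotlib.dates as mdates

# Show full date and time instead of the default year-only tick labels
fmt = mdates.DateFormatter('%Y-%m-%d %H:%M:%S')
for ax in (subplot1, subplot2):
    ax.xaxis.set_major_formatter(fmt)
figure1.autofmt_xdate(rotation=30)   # tilt tick labels so they do not overlap

# Axis labels, legend (picks up the label='y1' / label='y2' set in plot) and title
subplot1.set_xlabel('Date / Time')
subplot1.set_ylabel('Value')
figure1.legend()
figure1.suptitle('Data Analyser')    # placeholder figure title
</code></pre>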
|
<python><pandas><numpy><matplotlib><tkinter>
|
2022-12-11 14:35:16
| 1
| 321
|
Sanghamitra Lahiri
|
74,761,664
| 8,162,211
|
Coherence values do not agree when trying to calculate using different methods
|
<p>I'm trying to see if I can obtain identical coherence values for two signals when computing them two different ways:</p>
<p><strong>Method 1:</strong> I first compute the cross-correlation and the two auto-correlations. Then I take the DFT of the results to obtain the cross-spectral density and the power spectral densities, from which the coherence can be computed.</p>
<p><strong>Method 2:</strong> I simply apply the <code>coherence</code> function from <code>scipy.signal</code>.</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from scipy.signal import correlate,coherence
t = np.arange(0, 1, 0.001);
dt = t[1] - t[0]
fs=1/dt
f0=250
f1=400
x=np.cos(2*np.pi*f0*t)+np.cos(2*np.pi*f1*t)
y=np.cos(2*np.pi*f0*(t-.035))+np.cos(2*np.pi*f1*(t-.05))
fig, ax = plt.subplots(2,sharex=True)
# Coherence using method 1:
Rxx = correlate(x,x)
Pxx= abs(np.fft.rfft(Rxx))
Ryy = correlate(y,y)
Pyy = abs(np.fft.rfft(Ryy))
Rxy = correlate(x,y)
Pxy = abs(np.fft.rfft(Rxy))
f = np.fft.rfftfreq(len(Rxy))*fs
Coh1=np.divide(Pxy**2,np.multiply(Pxx,Pyy))
# Start at a nonzero index, e.g. 50, since Pxx[0] and Pyy[0] both equal zero, so Coh1[0] would be undefined
ax[0].plot(f[50:], Coh1[50:])
ax[0].set_xlabel('frequency [Hz]')
ax[0].set_ylabel('Coherence')
# Coherence using method 2:
f,Coh2=coherence(x,y)
ax[1].plot(f*fs,Coh2)
ax[1].set_xlabel('frequency [Hz]')
ax[1].set_ylabel('Coherence')
plt.show()
</code></pre>
<p>I obtain the graphs shown below.</p>
<p>The results don't make sense to me. Not only are they different, but neither produces coherence values equal to one at 250 Hz and 400 Hz. The second method should yield an undefined coherence except at 250 Hz and 400 Hz.</p>
<p><a href="https://i.sstatic.net/9PP9U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9PP9U.png" alt="enter image description here" /></a></p>
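<p><em>One note for context: <code>scipy.signal.coherence</code> estimates the spectra with Welch averaging over segments, whereas method 1 uses a single full-length correlation, so the two are not expected to match; also, <code>coherence(x, y)</code> above uses the default <code>fs=1.0</code>. A hedged sketch of the call with the sampling rate passed explicitly (the segment length is chosen arbitrarily):</em></p>
<pre><code>from scipy.signal import coherence

# Passing fs makes the returned frequency axis come out in Hz directly;
# nperseg controls how many segments the Welch averaging uses
f2, Coh2 = coherence(x, y, fs=fs, nperseg=256)

plt.figure()
plt.plot(f2, Coh2)
plt.xlabel('frequency [Hz]')
plt.ylabel('Coherence')
plt.show()
</code></pre>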
|
<python><scipy><spectrum>
|
2022-12-11 14:30:51
| 1
| 1,263
|
fishbacp
|