qid int64 46k to 74.7M | question stringlengths 54 to 37.8k | date stringlengths 10 to 10 | metadata sequencelengths 3 to 3 | response_j stringlengths 29 to 22k | response_k stringlengths 26 to 13.4k | __index_level_0__ int64 0 to 17.8k
---|---|---|---|---|---|---
28,542,083 | I am currently working on a small project, experimenting with different areas of Python. I decided to make a multi-client TCP server in Python, and wanted to be able to send a "packet" through the server, have it received by the other clients, and then parsed. However, I get an error if I try to send the packet, saying I must send either bytes or a string. Is there a way to convert the object into bytes and back, or to send the packet object through the server itself?
**## EDIT ##**
I have researched UDP servers, and I do not believe that is what I am looking for; I probably provided too little information. I am creating a small trial game for me and some friends to mess about on. I need there to be a constant connection to the server, as information such as location, direction, speech, inventory, etc. is constantly being sent across the network, and I wanted a way to turn the entire Entity class into a byte array, send that, and have it turned back into an instance of the Entity class when it is received. | 2015/02/16 | [
"https://Stackoverflow.com/questions/28542083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4410007/"
] | You could use pickle to serialize/deserialize objects to strings and back. <https://docs.python.org/2/library/pickle.html> | The simplest possible approach would be to send (gzipped?) JSON'd or [msgpack](https://pypi.python.org/pypi/msgpack-python/)'d objects.
For example, using UDP, this could look something like the below code; note that you would want to reuse the socket object rather than instantiating a new one every time.
```py
import socket
import msgpack
def send_object(obj, ip, port):
    if obj is not None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(msgpack.dumps(obj), (ip, port))
``` | 3,256 |
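The accepted answer above mentions pickle without showing code. A minimal round-trip sketch follows; the `Entity` class here is a hypothetical stand-in for the asker's class, and note that pickle must never be used with untrusted peers, since unpickling can execute arbitrary code.

```python
import pickle

class Entity:
    # Hypothetical stand-in for the asker's Entity class.
    def __init__(self, x, y, name):
        self.x, self.y, self.name = x, y, name

original = Entity(10, 20, "player1")
payload = pickle.dumps(original)   # bytes, safe to pass to socket.send()
restored = pickle.loads(payload)   # back to an Entity instance
```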
55,749,206 | I've recently upgraded from `Ubuntu 18.04` to `19.04` which has `python 3.7`. But I work on many projects using `Python 3.6`.
Now when I try to create a `virtualenv` with `Python 36` in PyCharm, it raises:
```
ModuleNotFoundError: No module named 'distutils.core'
```
[](https://i.stack.imgur.com/kKMZC.png)
I can't figure out what to do.
I tried to install distutils:
```
milano@milano-PC:~$ sudo apt-get install python3-distutils
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3-distutils is already the newest version (3.7.3-1ubuntu1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
```
But as you can see I have the newest version.
Do you know what to do? | 2019/04/18 | [
"https://Stackoverflow.com/questions/55749206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2607447/"
] | **Other Cases**
This happened on my Python 3.7 installation, but not my main python3, after I upgraded my Ubuntu to 20.04.
[Solution](https://github.com/pypa/get-pip/issues/43#issuecomment-621262469):
```
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt install python3.7
``` | I still got an error message after trying to install python3.9-distutils for Python 3.9 in pipenv.
As I noticed [here](https://github.com/pypa/pipenv/issues/3890), python3.9-distutils is in conflict with earlier versions of that package and cannot be installed on Ubuntu 18.04.
I moved on by using `python_version = "3.6"` with pipenv; otherwise `$ pipenv install` would take the highest version of Python and write it into the `Pipfile` and `Pipfile.lock`.
```
$ pipenv --rm #To remove the old environment
$ rm Pipfile* #Remove both Pipfiles
$ pipenv install --python 3.6
``` | 3,257 |
23,873,821 | I'm not an experienced python coder, so be gentle.
I have a very large (100s of Gb) binary file, that requires a particular command line tool (called parseTool here) to parse it. The format of the output of parseTool is simple raw text that I am doing some basic processing of (counting values, etc.).
Before I think about running multiple instances of my code on a cluster I want to make sure I am doing things reasonably efficiently.
Here is what I am using to iterate over the output of parseTool:
```
proc = subprocess.Popen(parseTool + ' myFile.file',
                        shell=True,
                        stdout=subprocess.PIPE)
for line in proc.stdout:
    linep = line.split("\t")
    # process each line until end of stream
```
To be clear, my main concern is efficiently iterating over the output of parseTool. Any helpful tips out there? | 2014/05/26 | [
"https://Stackoverflow.com/questions/23873821",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/295182/"
] | Your code is perfectly fine and will "stream" the data efficiently, assuming that the "parseTool" also streams properly and that the text output does not have really long lines.
**If** it did have long lines (in particular, ones that scale as the input does, rather than having some upper bound) then you would want to take care to read chunks smaller than a line rather than reading a whole line and then splitting it. | You say your file is in binary.
If you use:
```
for line in proc.stdout:
...
```
Then the interpreter will read the binary data until it finds a newline character.
Since you said "binary", this seems to be a bad solution.
I would read in fixed chunks:
```
max_length = 1024
while True:
    chunk = proc.stdout.read(max_length)
    if not chunk:
        break
    print(chunk)
``` | 3,267 |
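The fixed-size-chunk approach from the second answer can be sketched end to end like this, using `sys.executable` to spawn a stand-in child process instead of the asker's parseTool:

```python
import subprocess
import sys
from functools import partial

# Spawn a stand-in child process (instead of parseTool) that writes
# 10,000 bytes to stdout, then consume its output in 4096-byte chunks.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write('x' * 10000)"],
    stdout=subprocess.PIPE,
)
# iter() with a sentinel keeps calling read(4096) until it returns b"" at EOF.
chunks = list(iter(partial(proc.stdout.read, 4096), b""))
proc.wait()
data = b"".join(chunks)
```

Reads from a pipe may return fewer bytes than requested, so the consumer should never assume a full 4096-byte chunk.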
13,585,857 | I'm using boto/python to launch a new EC2 instance that boots from an EBS volume. At the time I launch the instance, I'd like to override the default size of the booting EBS volume.
I found no boto methods or parameters that might fit into my launch code:
```
ec2 = boto.connect_ec2(ACCESS_KEY, SECRET_KEY, region=region)
reservation = ec2.run_instances(image_id=AMI_ID,
                                key_name=EC2_KEY_HANDLE,
                                instance_type=INSTANCE_TYPE,
                                security_groups=[SECGROUP_HANDLE, ])
```
[This web page](http://alestic.com/2010/02/ec2-resize-running-ebs-root) shows how to increase the size of a running EC2-instance's EBS volume using command-line tools, but I'd like to use boto at the time the EC2 instance is specified: | 2012/11/27 | [
"https://Stackoverflow.com/questions/13585857",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1856725/"
] | You have to create a block device mapping first:
```
dev_sda1 = boto.ec2.blockdevicemapping.EBSBlockDeviceType()
dev_sda1.size = 50 # size in Gigabytes
bdm = boto.ec2.blockdevicemapping.BlockDeviceMapping()
bdm['/dev/sda1'] = dev_sda1
```
After this you can give the block device map in your `run_instances` call:
```
reservation = ec2.run_instances(image_id=AMI_ID,
                                key_name=EC2_KEY_HANDLE,
                                instance_type=INSTANCE_TYPE,
                                security_groups=[SECGROUP_HANDLE, ],
                                block_device_map=bdm)
```
Unfortunately this is [not really well documented](https://github.com/boto/boto/issues/736), but the example [can be found in the source code](https://github.com/boto/boto/blob/develop/boto/ec2/blockdevicemapping.py#L85). | You can also use [CloudFormation](http://aws.amazon.com/cloudformation), which is used to document and automate your environment.
You can check the template for the EBS definition at: <https://s3.amazonaws.com/cloudformation-templates-us-east-1/EC2WithEBSSample.template>
```
"Resources" : {
"Ec2Instance" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"AvailabilityZone" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "TestAz" ]},
"SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
"KeyName" : { "Ref" : "KeyName" },
"ImageId" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "AMI" ]},
"Volumes" : [
{ "VolumeId" : { "Ref" : "NewVolume" },
"Device" : "/dev/sdk"
}
]
}
},
...
"NewVolume" : {
"Type" : "AWS::EC2::Volume",
"Properties" : {
"Size" : "100",
"AvailabilityZone" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "TestAz" ]}
}
}
```
You can then use [Boto CloudFormation API](http://boto.cloudhackers.com/en/latest/ref/cloudformation.html) to deploy your environment. | 3,268 |
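For readers on modern boto3 rather than the legacy boto shown above, the equivalent mapping is passed as `BlockDeviceMappings`. The sketch below only builds the structure; the commented `run_instances` call is an untested assumption, and `AMI_ID`, `EC2_KEY_HANDLE`, `INSTANCE_TYPE`, and `SECGROUP_HANDLE` are placeholders taken from the question.

```python
# Hypothetical sketch for modern boto3: the mapping is a plain list of
# dicts following the EC2 client's BlockDeviceMappings parameter shape.
block_device_mappings = [
    {
        "DeviceName": "/dev/sda1",
        "Ebs": {"VolumeSize": 50, "DeleteOnTermination": True},  # size in GiB
    }
]

# Untested call sketch; requires valid AWS credentials:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.run_instances(
#     ImageId=AMI_ID,
#     KeyName=EC2_KEY_HANDLE,
#     InstanceType=INSTANCE_TYPE,
#     SecurityGroups=[SECGROUP_HANDLE],
#     MinCount=1,
#     MaxCount=1,
#     BlockDeviceMappings=block_device_mappings,
# )
```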
13,744,473 | I'm calling a command line program in python using the `os.system(command)` call.
How can I call this command passing a different folder for execution? Is there a system call for this? Or should I save the current folder and, after execution, restore it? | 2012/12/06 | [
"https://Stackoverflow.com/questions/13744473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/329082/"
] | The [`subprocess`](https://docs.python.org/2/library/subprocess.html) module is a very good solution.
```
import subprocess
p = subprocess.Popen([command, argument1,...], cwd=working_directory)
p.wait()
```
It also has arguments for modifying environment variables, redirecting input/output to the calling program, etc. | Try to `os.chdir(path)` before invoking the command.
From [here](http://docs.python.org/2/library/os.html#os.chdir):
>
> os.chdir(path) Change the current working directory to path.
>
>
> Availability: Unix, Windows
>
>
>
**EDIT**
This will change the current working dir; you can get the current working directory with:
```
os.getcwd()
```
You may want to save it and restore it later if you need to do some work in the original working dir.
**EDIT 2**
In any case you should probably move to `subprocess` ([doc](http://docs.python.org/2/library/subprocess.html#module-subprocess)) as suggested [here](http://docs.python.org/2/library/os.html#os.system). If you use `subprocess`'s `Popen` you have the choice of providing `cwd` parameter to specify the working directory for the subprocess: [read this](http://docs.python.org/2/library/subprocess.html#popen-constructor).
```
subprocess.Popen(args, bufsize=0, executable=None, stdin=None,
stdout=None, stderr=None, preexec_fn=None, close_fds=False,
shell=False, cwd=None, env=None, universal_newlines=False,
startupinfo=None, creationflags=0)
```
>
> ...
>
>
> If cwd is not None, the child’s current directory will be changed to
> cwd before it is executed. Note that this directory is not considered
> when searching the executable, so you can’t specify the program’s path
> relative to cwd.
>
>
> | 3,270 |
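The `cwd` approach from the second answer, in modern `subprocess.run` form, spawning a Python one-liner as a stand-in command:

```python
import os
import subprocess
import sys
import tempfile

# Run a child process in a different working directory without touching
# the parent's cwd, by passing cwd= to subprocess.run.
parent_cwd = os.getcwd()
with tempfile.TemporaryDirectory() as workdir:
    result = subprocess.run(
        [sys.executable, "-c", "import os; print(os.getcwd())"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    child_cwd = result.stdout.strip()
```

The parent's working directory is unchanged afterwards, which avoids the save/restore dance that `os.chdir` requires.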
45,439,492 | I would like to be able to chain several layers together, but before specifying the input; something like the following:
```
# conv is just a layer, no application
conv = Conv2D(64, (3,3), activation='relu', padding='same', name='conv')
# this doesn't work:
bn = BatchNormalization()(conv)
```
Note that I don't want to specify the input nor its shape if it can be avoided, I want to use this as a shared layer for multiple inputs at a later point.
Is there a way to do that? The above gives the following error:
```
>>> conv = Conv2D(64, (3,3), activation='relu', padding='same', name='conv')
>>> bn = BatchNormalization()(conv)
Traceback (most recent call last):
File "/home/mitchus/anaconda3/envs/tf/lib/python3.6/site-packages/keras/engine/topology.py", line 419, in assert_input_compatibility
K.is_keras_tensor(x)
File "/home/mitchus/anaconda3/envs/tf/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 393, in is_keras_tensor
raise ValueError('Unexpectedly found an instance of type `' + str(type(x)) + '`. '
ValueError: Unexpectedly found an instance of type `<class 'keras.layers.convolutional.Conv2D'>`. Expected a symbolic tensor instance.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mitchus/anaconda3/envs/tf/lib/python3.6/site-packages/keras/engine/topology.py", line 552, in __call__
self.assert_input_compatibility(inputs)
File "/home/mitchus/anaconda3/envs/tf/lib/python3.6/site-packages/keras/engine/topology.py", line 425, in assert_input_compatibility
str(inputs) + '. All inputs to the layer '
ValueError: Layer batch_normalization_4 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.convolutional.Conv2D'>. Full input: [<keras.layers.convolutional.Conv2D object at 0x7f3f6e54b748>]. All inputs to the layer should be tensors.
```
Grabbing the output of the conv layer doesn't do the trick either:
```
>>> bn = BatchNormalization()(conv.output)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mitchus/anaconda3/envs/tf/lib/python3.6/site-packages/keras/engine/topology.py", line 941, in output
' has no inbound nodes.')
AttributeError: Layer conv has no inbound nodes.
``` | 2017/08/01 | [
"https://Stackoverflow.com/questions/45439492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/729288/"
] | Try this:
```
def create_shared_layers():
    layers = [
        Conv2D(64, (3,3), activation='relu', padding='same', name='conv'),
        BatchNormalization()
    ]
    def shared_layers(x):
        for layer in layers:
            x = layer(x)
        return x
    return shared_layers
```
Later, you can do something like:
```
shared_layers = create_shared_layers()
...
h1 = shared_layers(x1)
h2 = shared_layers(x2)
``` | What about using a Lambda layer?
```py
import functools
from typing import List

from tensorflow import keras

def compose_layers(layers: List[keras.layers.Layer], **kargs) -> keras.layers.Layer:
    return keras.layers.Lambda(
        lambda x: functools.reduce(lambda tensor, layer: layer(tensor), layers, x),
        **kargs,
    )
```
Then you can just call the `compose_layers` function to get the composition.
```py
layers = [
    Conv2D(64, (3,3), activation='relu', padding='same', name='conv'),
    BatchNormalization()
]
composed_layers = compose_layers(layers, name='composed_layers')
``` | 3,273 |
64,320,386 | I have a model which looks like this:
```
class InputTypeMap(models.Model):
    input_type = models.ForeignKey(InputType, on_delete=models.CASCADE)
    training = models.ForeignKey(Training, on_delete=models.CASCADE)
    category = models.ForeignKey(Category, on_delete=models.CASCADE)
    gender = models.ForeignKey(Gender, on_delete=models.CASCADE)
```
When I try to create instances of this model with:
```
InputTypeMap.objects.create(input_type=input_type,
                            training=training,
                            gender=gender,
                            category=category)
```
I get an exception
```
Traceback (most recent call last):
File "/home/hove/sleipner/venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.NotNullViolation: null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, Maintenance, Female, MareGielding, No).
```
From the error message it seems to me that a ID key for the new entry is not generated.
This code has worked as I expected for quite some time, but has "suddenly" started to fail locally, probably after an `apt-get upgrade`. When I run the same code with SQLite or Postgres 10.14, things continue to work as before. It is not clear to me whether this is a bug in my code (most probable ...), Django, or Postgres. I am using Django version 3.1.2.
I have version 9.6 of the Postgres server | 2020/10/12 | [
"https://Stackoverflow.com/questions/64320386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/422005/"
] | Most probably, that is a primary\_key problem. In my case, I added to my model something like:
```
person_id = models.AutoField(primary_key=True)
```
adapted my views to it, and that solved it. | [Changing primary key int type to serial](https://stackoverflow.com/questions/23578427/changing-primary-key-int-type-to-serial) solved it for me.
credit goes to [nishit chittora](https://stackoverflow.com/users/5081918/nishit-chittora). | 3,276 |
39,483,862 | Following examples and the numpy C-API (<http://docs.scipy.org/doc/numpy/reference/c-api.html>), I'm trying to access numpy array data in cpp, like this:
```
#include <Python.h>
#include <frameobject.h>
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION // TOGGLE OR NOT
#include "numpy/ndarraytypes.h"
#include "numpy/arrayobject.h"
...
// here I have passed "some_python_object" to the C code
// .. and "some_python_object" has member "infobuf" that is a numpy array
//
unsigned long* fInfoBuffer;
PyObject* infobuffer = PyObject_GetAttrString(some_python_object, "infobuf");
PyObject* x_array = PyArray_FROM_OT(infobuffer, NPY_UINT32);
fInfoBuffer = (unsigned long*)PyArray_DATA(x_array); // DOES NOT WORK WHEN API DEPRECATION IS TOGGLED
```
When the API deprecation is toggled, I get, when compiling:
```
error: cannot convert ‘PyObject* {aka _object*}’ to ‘PyArrayObject* {aka tagPyArrayObject*}’ for argument ‘1’ to ‘void* PyArray_DATA(PyArrayObject*)’
```
What would be the legitimate way of doing this in numpy 1.7+ ? | 2016/09/14 | [
"https://Stackoverflow.com/questions/39483862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4121210/"
] | You could try using a higher-level library that wraps numpy arrays in C++ containers with proper container semantics.
Try out `xtensor` and the `xtensor-python` bindings.
* Numpy to xtensor cheat sheet <http://xtensor.readthedocs.io/en/latest/numpy.html>
* The xtensor-python project
<http://xtensor-python.readthedocs.io/en/latest/>
There is also a cookiecutter to generate a minimal C++ extension project with all the boilerplate for testing, html documentation and setup.p...
**Example: C++ code**
```
#include <numeric>                    // Standard library import for std::accumulate
#include "pybind11/pybind11.h"        // Pybind11 import to define Python bindings
#include "xtensor/xmath.hpp"          // xtensor import for the C++ universal functions
#define FORCE_IMPORT_ARRAY            // numpy C api loading
#include "xtensor-python/pyarray.hpp" // Numpy bindings

double sum_of_sines(xt::pyarray<double> &m)
{
    auto sines = xt::sin(m);
    // sines does not actually hold any value, which are only computed upon access
    return std::accumulate(sines.begin(), sines.end(), 0.0);
}

PYBIND11_PLUGIN(xtensor_python_test)
{
    xt::import_numpy();
    pybind11::module m("xtensor_python_test", "Test module for xtensor python bindings");
    m.def("sum_of_sines", sum_of_sines,
          "Computes the sum of the sines of the values of the input array");
    return m.ptr();
}
```
**Python Code:**
```
import numpy as np
import xtensor_python_test as xt

a = np.arange(15).reshape(3, 5)
s = xt.sum_of_sines(a)
s
``` | It is because `PyArray_DATA` expects a `PyArrayObject*`.
You can try to change the type of `x_array`:
```
PyArrayObject* x_array = (PyArrayObject*) PyArray_FROM_OT(infobuffer, NPY_UINT32)
``` | 3,277 |
59,853,922 | In python, how to generate a random number such that it is not a power of 2? The output needs to be a list of 8 random numbers. This should be done in a single statement (comprehension style) in python. | 2020/01/22 | [
"https://Stackoverflow.com/questions/59853922",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4778195/"
] | You can use array\_sum with array\_map like below,
```
$array1 = [1, 2, 2, 3];
$array2 = [10, 20, 30, 50];
$array_sum1 = [];
foreach ($array1 as $key => $value) {
    $array_sum1[$value][] = $array2[$key];
}
$array_sum1 = array_map("array_sum", $array_sum1);
print_r($array_sum1);

$array3 = [4, 4, 4, 6];
$array4 = [10, 20, 30, 50];
$array_sum2 = [];
foreach ($array3 as $key => $value) {
    $array_sum2[$value][] = $array4[$key];
}
$array_sum2 = array_map("array_sum", $array_sum2);
print_r($array_sum2); die;
```
[Demo](https://3v4l.org/5sXjt)
**Output:-**
```
Array
(
    [1] => 10
    [2] => 50
    [3] => 50
)
Array
(
    [4] => 60
    [6] => 50
)
It is roundabout to perform two iterations over your data to group & sum.
Use the "id" values as keys in your output array. If a given "id" is encountered for the first time, then save the "val" value to the "id"; after the first encounter, add the "val" to the "id".
Code: ([Demo](https://3v4l.org/fjINV))
```
$ids = [1, 2, 2, 3];
$vals = [10, 20, 30, 50];
foreach ($ids as $index => $id) {
    if (!isset($result[$id])) {
        $result[$id] = $vals[$index];
    } else {
        $result[$id] += $vals[$index];
    }
}
var_export($result);
```
Output:
```
array (
    1 => 10,
    2 => 50,
    3 => 50,
)
```
Here are similar (near duplicate) answers:
* <https://stackoverflow.com/a/53141488/2943403>
* <https://stackoverflow.com/a/52485161/2943403>
* <https://stackoverflow.com/a/47926978/2943403>
* <https://stackoverflow.com/a/54421292/2943403> | 3,278 |
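Neither response above actually addresses the Python question. Assuming "power of 2" means an exact power of two, a rejection-sampling sketch in the comprehension style the question asks for (the 1–1000 range is an illustrative assumption):

```python
import random

def is_power_of_two(n):
    # For positive integers, n is a power of two iff n & (n - 1) == 0.
    return n > 0 and n & (n - 1) == 0

# Comprehension style: redraw inside a generator until the drawn value
# is not a power of two, eight times.
nums = [next(n for n in iter(lambda: random.randint(1, 1000), None)
             if not is_power_of_two(n))
        for _ in range(8)]
```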
74,320,720 | I have some existing code that uses boto3 (python) DynamoDB Table objects to query the database:
```py
import boto3
resource = boto3.resource("dynamodb")
table = resource.Table("my_table")
# Do stuff here
```
We now want to run the tests for this code using DynamoDB Local instead of connecting to DynamoDB proper, to try and get them running faster and save on resources. To do that, I gather that I need to use a client object, not a table object:
```py
import boto3
session = boto3.session.Session()
db_client = session.client(service_name="dynamodb", endpoint_url="http://localhost:8000")
# Do slightly different stuff here, 'cos clients and tables work differently
```
However, there's really rather a lot of the existing code, to the point that the cost of rewriting everything to work with clients rather than tables is likely to be prohibitive.
Is there any way to either get a table object while specifying the `endpoint_url` so I can point it at DynamoDB Local on creation, or else obtain a boto3 DynamoDB table object from a boto3 DynamoDB client object?
PS: I know I could also mock out the boto3 calls and not access the database at all. But that's also prohibitively costly, because for all of the existing tests we'd have to work out where they touch the database and what the appropriate mock setup and use is. For a couple of tests that's perfectly fine, but it's a lot of work if you've got a lot of tests. | 2022/11/04 | [
"https://Stackoverflow.com/questions/74320720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5734324/"
] | Yes, you can use the resource-level classes such as [Table](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#table) with both the real DynamoDB service and DynamoDB Local via the [DynamoDB service resource](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#service-resource), as follows:
```
resource = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
table = resource.Table(name)
The other answers correctly told you that if you liked the "resource" API, you can still use it even with DynamoDB Local (by the way, shameless plug: if you're looking for a self-installable version of DynamoDB, you can also consider the open-source ScyllaDB project, which has a DynamoDB API).
I just wanted to add that if you do want to switch to the "client" API - which I recommend (it's easier to use) - it's still possible to get a table object from a client. Just do:
```py
table = db_client.Table(name)
``` | 3,279 |
11,087,032 | I have a XML file which contains values having unwanted characters like
```
\xc2d
d\xa0
\xe7
\xc3\ufffdd
\xc3\ufffdd
\xc2\xa0
\xc3\xa7
\xa0\xa0
'619d813\xa03697'
\xe9.com
```
input examples could be
```
name : John Hinners\xc2d
email: abc@gmail\xe9.com
and others ....
```
desired output should be
```
name : John Hinners
email: abc@gmail.com
and others ....
```
I come from python background where this task can be done easily as
```
def remove_non_ascii(s):
return ''.join(i for i in s if ord(i)<128)
```
Is there some similar way to perform the same task in Java? | 2012/06/18 | [
"https://Stackoverflow.com/questions/11087032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/379235/"
In Java it will not be as pretty.
You can use a regexp, but if you don't have a simple definition of your characters, the best is probably to do this:
```
StringBuilder sb = new StringBuilder();
for (int i = 0; i < s.length(); i++) {
    if (((int) s.charAt(i)) < 128) sb.append(s.charAt(i));
}
``` | ```
String s = "WantedCharactersunwantedCharacters";
```
If I want the remaining String to be "WantedCharacters", I simply write:
```
s = s.replaceAll("unwantedCharacters", "");
```
[EDIT]:
You could, of course, also write
```
private static String removeNonAscii(String s) {
    StringBuffer sb = new StringBuffer();
    for (int i = 0; i < s.length(); ++i) {
        if (s.charAt(i) < 128) {
            sb.append(s.charAt(i));
        }
    }
    return sb.toString();
}
```
if that's a satisfying solution | 3,280 |
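For completeness on the Python side, the same stripping that the question's `remove_non_ascii` performs can also be written via codec error handling rather than the join/ord loop; a sketch:

```python
def remove_non_ascii(s: str) -> str:
    # Equivalent to the join/ord loop in the question: any code point
    # >= 128 is dropped by the 'ignore' error handler.
    return s.encode("ascii", errors="ignore").decode("ascii")
```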
62,333,071 | I have been using a working Anaconda install (Python 3.7) for about a year, but suddenly I'm getting this warning when I run the interpreter:
```none
> python
Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Warning:
This Python interpreter is in a conda environment, but the environment has
not been activated. Libraries may fail to load. To activate this environment
please see https://conda.io/activation
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
I quite often use virtual environments, but never with Conda. Note that I've been able to run Python from the command line with just `python` for a long time now, and have never had to use `conda activate base`. I don't even have Conda on my path.
I've found these answers, but neither gives any clarity into why this may have started happening:
* *[CMD warning: "Python interpreter is in a conda environment, but the environment has not been activated"](https://stackoverflow.com/questions/56623269/cmd-warning-python-interpreter-is-in-a-conda-environment-but-the-environment-h)*
* *[Python is in a Conda environment, but it has not been activated in a Windows virtual environment](https://stackoverflow.com/questions/56679132/python-is-in-conda-env-but-has-not-been-activated-in-windows-virtual-env)* | 2020/06/11 | [
"https://Stackoverflow.com/questions/62333071",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8605685/"
] | If you receive this warning, you need to activate your environment. To do so on Windows, use the Anaconda Prompt shortcut in your Windows start menu. If you have an existing cmd.exe session that you’d like to activate conda in run:
`call <your anaconda/miniconda install location>\Scripts\activate base` | I had the same problem; by following this post [conda-is-not-recognized-as-internal-or-external-command](https://stackoverflow.com/questions/44515769/conda-is-not-recognized-as-internal-or-external-command), I was able to solve it.
The reason may be that your default Python interpreter has been switched to the Conda Python (e.g. on my Windows 10, the path is `C:\Users\Xiang\anaconda3\python.exe`). Therefore, we need to add the Conda-related paths to the environment Path, with details explained in the link. | 3,283 |
43,110,228 | I'm learning to use rpy2 in Jupyter notebook. I'm having troubles with the plotting. When I use this example from the rpy2 [docs](http://rpy2.readthedocs.io/en/version_2.8.x/interactive.html?highlight=ipython#ipython-magic-integration-was-rmagic) for interactive work:
```
from rpy2.interactive import process_revents
from rpy2.robjects.packages import importr
from rpy2.robjects.vectors import IntVector
process_revents.start()
graphics = importr("graphics")
graphics.barplot(IntVector((1,3,2,5,4)), ylab="Value")
```
Jupyter opens a new window with the plot. The window "title" reads: R Graphics: Device 2 (ACTIVE) (Not Responding). My Jupyter kernel is active. When I try to close the window with the plot, windows claims that python.exe is not responsing and if I force close then the jupyter kernel restarts.
First: How can I make rpy2 plot inline? Second: If inline plotting is not possible, how to get the plot in a window without python.exe becoming unresponsive? | 2017/03/30 | [
"https://Stackoverflow.com/questions/43110228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2998998/"
] | It seems that this is the answer to your question: <https://bitbucket.org/rpy2/rpy2/issues/330/ipython-plotting-wrapper>
```
with rpy2.robjects.lib.grdevices.render_to_bytesio(grdevices.png, width=1024, height=896, res=150) as img:
    graphics.barplot(IntVector((1,3,2,5,4)), ylab="Value")
IPython.display.display(IPython.display.Image(data=img.getvalue(), format='png', embed=True))
``` | This is a slightly more sophisticated version of Christian's answer, which wraps the plotting and the inline embedding into the same context manager:
```py
from contextlib import contextmanager

from rpy2.robjects.lib import grdevices
from IPython.display import Image, display

@contextmanager
def r_inline_plot(width=600, height=600, dpi=100):
    with grdevices.render_to_bytesio(grdevices.png,
                                     width=width,
                                     height=height,
                                     res=dpi) as b:
        yield
    data = b.getvalue()
    display(Image(data=data, format='png', embed=True))
```
Usage:
```py
with r_inline_plot(width=1024, height=896, dpi=150):
    graphics.barplot(IntVector((1,3,2,5,4)), ylab="Value")
``` | 3,284 |
20,444,056 | I have a list of tuples each with 5 pieces of information in it. I need a way to search the list for a result or range of results from a search parameter or parameters.
So I'd like to search for an ID number (string), a name (string, whole name only), or a salary range (int, e.g. between 10000 and 20000).
I read on another post [How to search a list of tuples in Python](https://stackoverflow.com/questions/2917372/how-to-search-a-list-of-tuples-in-python) that you can use list comprehensions
```
t = [x[0] for x in l].index('12346')
```
but it wasn't explained all that well and it doesn't satisfy the range of values I need to enter.
This does its job for one entry. But if two people have the same first name or job, it will just display the first occurrence. To display the correct tuple, I take the value t and send it to my printing function to display. Which again won't work for multiple entries, because I would need to know how many tuples were returned beforehand.
Here are a few tuples, all are very similar.
```
('12349', '30000', 'Wizard', 'Harry', 'Potter')
('12350', '15000', 'Entertainer', 'Herschel Shmoikel', 'Krustofski')
('13123', '75000', 'Consultant', 'David', 'Beckham')
('13124', '150000', 'Manager', 'Alex', 'Ferguson')
('13125', '75000', 'Manager', 'Manuel', 'Pellegrini')
```
Thank you | 2013/12/07 | [
"https://Stackoverflow.com/questions/20444056",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2080298/"
] | The Checked property is set right during XAML loading, when you set IsChecked="True". The Tag may be loaded only later, when the XAML loading code decides to set that property. That's why you can see uninitialized properties. | A simple solution for all bugs/errors of this type:
```
1 - bool bFormLoaded; // = false;
2 - at [YourWinOrControlorWPF]_Loaded(object sender, RoutedEventArgs e),
    add this flag at the end of the function:
        bFormLoaded = true;
3 - at UseDefaultFoldersCB_Checked(...),
    add this line:
        if (bFormLoaded == false) return;
```
100% | 3,287 |
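The responses above do not address the Python question about searching a list of tuples. A sketch using list comprehensions (with the sample tuples from the question) that returns every match, so duplicate names are handled, and that supports salary ranges:

```python
records = [
    ('12349', '30000', 'Wizard', 'Harry', 'Potter'),
    ('12350', '15000', 'Entertainer', 'Herschel Shmoikel', 'Krustofski'),
    ('13123', '75000', 'Consultant', 'David', 'Beckham'),
    ('13124', '150000', 'Manager', 'Alex', 'Ferguson'),
    ('13125', '75000', 'Manager', 'Manuel', 'Pellegrini'),
]

def find_by_field(recs, index, value):
    # Every tuple whose column `index` equals `value`; duplicates included.
    return [r for r in recs if r[index] == value]

def find_by_salary(recs, low, high):
    # Salary is stored as a string, so compare numerically via int().
    return [r for r in recs if low <= int(r[1]) <= high]
```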
62,099,166 | I am trying to send data from a Python server to an Android client, but the client is not accepting any data from the server, even though it shows that it is connected to the server. I can't spot any error.
Here is my client code in Android Java:
```
package com.example.socketinput;
import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
public class MainActivity extends AppCompatActivity {

    Thread Thread1 = null;
    String SERVER_IP = "192.168.1.6";
    int SERVER_PORT = 9857;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Thread1 = new Thread(new Thread1());
        Thread1.start();
    }

    private PrintWriter output;
    private BufferedReader input;

    class Thread1 implements Runnable {
        public void run() {
            Socket socket;
            try {
                socket = new Socket(SERVER_IP, SERVER_PORT);
                output = new PrintWriter(socket.getOutputStream());
                input = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        Log.i("Server", "Connected");
                    }
                });
                new Thread(new Thread2()).start();
            } catch (IOException e) {
                Log.i("Server", "Error in thread 1");
            }
        }
    }

    class Thread2 implements Runnable {
        @Override
        public void run() {
            while (true) {
                try {
                    final String message = input.readLine();
                    if (message != null) {
                        runOnUiThread(new Runnable() {
                            @Override
                            public void run() {
                                Log.i("Server", message);
                            }
                        });
                    } else {
                        Thread1 = new Thread(new Thread1());
                        Thread1.start();
                        return;
                    }
                } catch (IOException e) {
                    Log.i("Server", "Error in thread 2");
                    e.printStackTrace();
                }
            }
        }
    }
}
```
Here is my Python server code:
```
#!/usr/bin/env python
# coding: utf-8

import socket

socket.gethostname()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 9857))
s.listen(5)

while True:
    clientsocket, address = s.accept()
    print(f"connection from {address} has been established!")
    clientsocket.send(bytes("welcome to the server!", "utf-8"))
    while True:
        msg = input()
        clientsocket.send(bytes(msg, "utf-8"))
```
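One thing worth noting: the Android client reads with `BufferedReader.readLine()`, which only returns once a newline (or EOF) arrives, and the server above never sends a `\n`, so the client can connect yet appear to receive nothing. Below is a self-contained localhost sketch (both ends in one script; the port is chosen by the OS here, so adjust for a real setup) showing a newline-terminated send being received:

```python
import socket
import threading

# Stand-in server on localhost: accept one client, send a *newline-terminated* greeting.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(b"welcome to the server!\n")   # note the trailing \n
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

# Minimal client, reading line-by-line like the Android BufferedReader does.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
greeting = client.makefile("r").readline().strip()
client.close()
srv.close()
print(greeting)
```

If the `\n` is removed from `sendall`, the `readline()` call blocks, which mirrors the symptom described in the question.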
The Android manifest file is:
```
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.socketinput">

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>
```
Note: the Android app and the Python server are running on the same network.
[](https://i.stack.imgur.com/tD0sN.png) | 2020/05/30 | [
"https://Stackoverflow.com/questions/62099166",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6938184/"
] | You should use fake content while testing a layout, **unless your fixed heights add up consistently** *(the left column is box1 + box5 = 1600px while the right column is 1300px, a 300px difference)*: the white space is there because of the arbitrary heights given to the children.
Your grid works fine and is fluid:
**Tip**: for testing, use `min-height` on the children or the container, or else use [fake content](https://lipsum.com/) / [htmlIpsum](https://www.webfx.com/web-design/html-ipsum/).
Your code with a `min-height` on the parent only:
```css
.container {
min-height:80vh;/* demo purpose*/
width: 505px;
border: solid red;
display: grid;
grid-template-columns: 1fr 1fr;
grid-template-areas: "b1 b2" "b1 b3" "b1 b4" "b1 b6" "b5 b6"
}
.box {
border: solid 1px;
width: 250px;
}
.box1 {
background-color: yellow;
grid-area: b1;
}
.box2 {
background-color: blue;
grid-area: b2;
}
.box3 {
background-color: green;
grid-area: b3;
}
.box4 {
background-color: orange;
grid-area: b4;
}
.box5 {
grid-area: b5;
background-color: purple;
}
.box6 {
background-color: gray;
grid-area: b6;
}
```
```html
<div class="container">
<div class="box box1">1</div>
<div class="box box2">2</div>
<div class="box box3">3</div>
<div class="box box4">4</div>
<div class="box box5">5</div>
<div class="box box6">6</div>
</div>
```
Using `min-height` instead of `height` on the children:
```css
.container {
width: 505px;
border: solid red;
display: grid;
grid-template-columns: 1fr 1fr;
grid-template-areas: "b1 b2" "b1 b3" "b1 b4" "b1 b6" "b5 b6"
}
.box {
border: solid 1px;
width: 250px;
}
.box1 {
min-height: 1000px;
background-color: yellow;
grid-area: b1;
}
.box2 {
min-height: 200px;
background-color: blue;
grid-area: b2;
}
.box3 {
min-height: 250px;
background-color: green;
grid-area: b3;
}
.box4 {
min-height: 250px;
background-color: orange;
grid-area: b4;
}
.box5 {
min-height: 600px;
grid-area: b5;
background-color: purple;
}
.box6 {
min-height: 400px;
background-color: gray;
grid-area: b6;
}
```
```html
<div class="container">
<div class="box box1">1</div>
<div class="box box2">2</div>
<div class="box box3">3</div>
<div class="box box4">4</div>
<div class="box box5">5</div>
<div class="box box6">6</div>
</div>
``` | If you want to maintain the designated heights, you can wrap the column 2 `divs` into a `flex` wrapper:
```
<div class="container">
<div class="box box1">1</div>
<div class="box box5">5</div>
<div class = "flex">
<div class="box box2">2</div>
<div class="box box3">3</div>
<div class="box box4">4</div>
<div class="box box6">6</div>
</div>
</div>
```
And then change your `grid` CSS a bit to accommodate:
```
.container {
width: 505px;
border: solid red;
display: grid;
grid-template-columns: 1fr 1fr;
grid-template-areas: "b1 b3" "b5 b3";
}
```
Now the only boxes that need `grid-area` are the 2 on the left (`b1` and `b5`) and the `flex` wrapper (`b3` in my code):
```
.box {
border: solid 1px;
width: 250px;
}
.box1 {
height: 1000px;
background-color: yellow;
grid-area: b1;
}
.box2 {
height: 200px;
background-color: blue;
}
.box3 {
height: 250px;
background-color: green;
grid-area: b3;
}
.box4 {
height: 250px;
background-color: orange;
}
.box5 {
height: 600px;
grid-area: b5;
background-color: purple;
}
.box6 {
height: 400px;
background-color: gray;
}
.flex {
display: flex;
flex-direction: column;
grid-area: b3;
}
```
Check this demo [JSfiddle](https://jsfiddle.net/repwn3jm/2/) | 3,288 |
5,166,488 | Python and Tkinter are processing Unicode characters correctly.
But they are not able to display Unicode-encoded characters correctly.
I am using Python 3.1 and Tkinter in Ubuntu. I am trying to use Tamil Unicode characters.
All the processing is done correctly. But the display is wrong?
Here is the Wrong display as in Tkinter

Here is the Correct display (as in gedit)

---
Still not solved:
```
from tkinter import *
root = Tk()
root.geometry('200x200')
var = StringVar()
label = Label( root, textvariable=var, relief=RAISED )
Entry(text="Placeholder text").pack()
var.set("கற்றதனால் ஆய பயனென்கொல் வாலறிவன்\nநற்றாள்தொழாஅர் எனின். ")
label.pack()
root.mainloop()
```
Manjaro:
[](https://i.stack.imgur.com/V1T3w.png)
Windows:
[](https://i.stack.imgur.com/G8r72.png) | 2011/03/02 | [
"https://Stackoverflow.com/questions/5166488",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/641040/"
] | I had faced similar problems and discovered I used the Zero Width Joiner (U+200D) to explicitly tell the rendering engine to join two characters. That used to work in 2010 but looks like there have been changes in the rendering engine (that I am now aware of) and now in 2011 I find that having the joiner creates the problem ! (It broke my working code) I had to remove the explicit zero width joiners to have my code work again. Hope this helps. | It looks like Tk is mishandling things like 'Class Zero Combining Marks', see:
<http://www.unicode.org/versions/Unicode6.0.0/ch04.pdf#G124820> (Table 4-4)
I assume one of the sequences that do not show correctly are the codepoints 0BA9 0BC6 (TAMIL SYLLABLE NNNE), where 0BC6 is a reordrant class zero combining mark according to the Unicode standard, which basically means the glyphs get swapped.
The only way to fix it is to file a bug at the Tk bug tracker and hope it gets fixed. | 3,289 |
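The combining-class claim above can be checked from Python with the stdlib `unicodedata` module (U+0BA9 and U+0BC6 are the codepoints mentioned in the answer; a canonical combining class of 0 means the renderer, not a normalization step, is responsible for the reordering):

```python
import unicodedata

for ch in "\u0BA9\u0BC6":
    # Print the official Unicode name and canonical combining class of each codepoint.
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)} ccc={unicodedata.combining(ch)}")
```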
3,589,214 | So here is the deal: I want to (for example) generate 4 pseudo-random numbers, that when added together would equal 40. How could this be done in python? I could generate a random number 1-40, then generate another number between 1 and the remainder, etc., but then the first number would have a greater chance of "grabbing" more.
"https://Stackoverflow.com/questions/3589214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/433493/"
] | Generate 4 random numbers, compute their sum, divide each one by the sum and multiply by 40.
If you want Integers, then this will require a little non-randomness. | Building on [@markdickonson](https://stackoverflow.com/a/3590105/75033) by providing some control over distribution between the divisors. I introduce a variance/jiggle as a percentage of the uniform distance between each.
```
import random

def constrained_sum_sample(n, total, variance=50):
    """Return a random-ish list of n positive integers summing to total.

    variance: int; percentage of the gap between the uniform spacing to vary the result.
    """
    divisor = total / n
    jiggle = divisor * variance / 100 / 2
    dividers = [int((x + 1) * divisor + random.random() * jiggle) for x in range(n - 1)]
    result = [a - b for a, b in zip(dividers + [total], [0] + dividers)]
    return result
```
Sample output:
```
[12, 8, 10, 10]
[10, 11, 10, 9]
[11, 9, 11, 9]
[11, 9, 12, 8]
```
The idea remains to divide the population equally, then randomly move them left or right within the given range. Since each value is still bound to the uniform point we don't have to worry about it drifting.
Good enough for my purposes, but not perfect. eg: the first number will always vary higher, and the last will always vary lower. | 3,291 |
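The normalize-and-rescale idea from the first answer can be sketched as follows (floats; as noted there, getting exact integers needs extra, slightly non-random rounding):

```python
import random

def random_partition(n, total):
    """Return n positive floats that sum (up to float error) to total."""
    xs = [random.random() for _ in range(n)]
    scale = total / sum(xs)
    return [x * scale for x in xs]

parts = random_partition(4, 40)
print(parts)
```

Because every draw is rescaled by the same factor, no position systematically "grabs" more than the others.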
74,271,418 | I'm pretty new at Power BI (so forgive my rough terminology), and I'm trying to create a bar chart from some existing financial data. Specifically, I'd like to know how to transform my data. I've looked at DAX and python, and can't quite figure out the right commands.
My existing table looks like the following. The set of categories are arbitrary (not known up front, so can't be hardcoded), same with the set of years.
```
Category 2002 2003 2004 2005
A $10 $75 $75 $75
B $75 $59 $75 $79
C $15 $32 $13 $5
B $23 $12 $75 $7
C $17 $88 $75 $15
```
And I want my output table to have the number of rows as the number of *unique* categories, totaling up the dollar amounts for each year.
```
Category 2002 2003 2004 2005
A $10 $75 $75 $75
B $98 $71 $150 $86
C $32 $120 $88 $20
```
What's the best way to roll up the data this way? I intend to use the resulting table to make a composite bar chart, one bar per year.
Thank you! | 2022/11/01 | [
"https://Stackoverflow.com/questions/74271418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4019700/"
] | 1. Avoid Excel-style cross-tables in Power BI. In the PowerQuery Editor transform your table by selecting Category and then **Unpivot other columns**
[](https://i.stack.imgur.com/UAHij.png)
2. Back in the designer view you can directly use this data to create a bar chart:
[](https://i.stack.imgur.com/Pc5a4.png)
3. If you like you can also create an aggregated table from your data with the **calculated table** expression
```
Aggregated =
SUMMARIZE(
'Table',
'Table'[Category],
'Table'[Year],
"Sum", SUM('Table'[Value])
)
```
but that's not needed for your purpose.
[](https://i.stack.imgur.com/FaCHF.png) | Here is the full M-Code to achieve your goal: Just change the source step with your source file:
```
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WclTSUTI0ABLmpkhErE60khOMb2oJl7EEyziD9ID4xkYgljFIDVyLEYhraATXgtBhDiQsLGASQANiYwE=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Category = _t, #"2002" = _t, #"2003" = _t, #"2004" = _t, #"2005" = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Category", type text}, {"2002", Int64.Type}, {"2003", Int64.Type}, {"2004", Int64.Type}, {"2005", Int64.Type}}),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Changed Type", {"Category"}, "Attribute", "Value"),
#"Renamed Columns" = Table.RenameColumns(#"Unpivoted Other Columns",{{"Attribute", "Year"}}),
#"Grouped Rows" = Table.Group(#"Renamed Columns", {"Category", "Year"}, {{"Total", each List.Sum([Value]), type number}}),
#"Pivoted Column" = Table.Pivot(#"Grouped Rows", List.Distinct(#"Grouped Rows"[Year]), "Year", "Total", List.Sum)
in
#"Pivoted Column"
```
If we test it:
[](https://i.stack.imgur.com/BGlcY.png) | 3,301 |
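Since the question mentions having looked at Python as well, the same roll-up can be sketched in plain Python (standard library only; the rows below are the sample data from the question):

```python
from collections import defaultdict

rows = [
    ("A", {"2002": 10, "2003": 75, "2004": 75, "2005": 75}),
    ("B", {"2002": 75, "2003": 59, "2004": 75, "2005": 79}),
    ("C", {"2002": 15, "2003": 32, "2004": 13, "2005": 5}),
    ("B", {"2002": 23, "2003": 12, "2004": 75, "2005": 7}),
    ("C", {"2002": 17, "2003": 88, "2004": 75, "2005": 15}),
]

# Sum the dollar amounts per (category, year), collapsing duplicate categories.
totals = defaultdict(lambda: defaultdict(int))
for category, years in rows:
    for year, amount in years.items():
        totals[category][year] += amount

for cat in sorted(totals):
    print(cat, dict(totals[cat]))
```

With pandas available, `df.groupby("Category").sum()` does the same in one line.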
13,391,549 | I am trying to use a Bixolon receipt printer with OE on Windows 7. I succeeded in printing directly from a small Python module using win32print (which comes with pywin32) with the code below.
win32print is not natively available in OE, so I pasted win32print.pyd into the OE server directory and put the code in a wizard of my OE module.
I can see my wizard and launch it without error, but then nothing happens: no print, no error message.
Any ideas ?
Thank you
```
import win32print

printer = OpenPrinter(win32print.GetDefaultPrinter())
hJob = win32print.StartDocPrinter(printer, 1, ("RVGI Print", None, "RAW"))
g = open('test3.txt', 'r')
raw_data = bytes(open('test3.txt', 'r').read())
try:
    win32print.StartPagePrinter(printer)
    win32print.WritePrinter(printer, raw_data)
    win32print.EndPagePrinter(printer)
finally:
    win32print.EndDocPrinter(printer)
    win32print.ClosePrinter(printer)
``` | 2012/11/15 | [
"https://Stackoverflow.com/questions/13391549",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1682857/"
] | Remember that the python code runs on the server. Is your printer connected to the server?
Also, you don't have an `except` section in your `try`. That makes errors go by silently. Try removing the `try` block so that errors are raised. Looking at them you might figure out the issue. | Well, I don't know if you typed here incorrectly, but the way you imported the `win32print` module force you to attach it to module function calls and you haven't done this in your first line:
```
printer = OpenPrinter(win32print.GetDefaultPrinter())
```
should be
```
printer = win32print.OpenPrinter(win32print.GetDefaultPrinter())
```
And there is another point that I don't really understand, but [Tim Golden](http://timgolden.me.uk/python/win32_how_do_i/print.html) covers it in his tutorial and it could be your problem:
```
raw_data = bytes('your_text')
```
could be just:
```
raw_data = 'your_text'
```
depending on your system version. You also haven't specified the encoding, but since you got no errors that may not be the problem.
For the version thing Tim Golden posted:
```
import sys
if sys.version_info >= (3,):
raw_data = bytes ("This is a test", "utf-8")
else:
raw_data = "This is a test"
```
Hope it helps! | 3,302 |
11,878,300 | I would like to serialize on machine A and deserialize on machine B a python lambda. There are a couple of obvious problems with that:
* the pickle module does not serialize or deserialize code. It only serializes the names of classes/methods/functions
* some of the answers I found with google suggest the use of the low-level marshal module to serialize the func\_code attribute of the lambda but they fail to describe how one could reconstruct a function object from the deserialized code object
* marshal(l.func\_code) will not serialize the closure associated with the lambda, which leads to the problem of detecting when a given lambda really needs a closure and warning the user that he is trying to serialize a lambda that uses a closure
Hence, my question(s):
* how would one reconstruct a function from the deserialized (demarshaled) code object ?
* how would one detect that a given lambda will not work properly without the associated closure ? | 2012/08/09 | [
"https://Stackoverflow.com/questions/11878300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/782529/"
] | Surprisingly, checking whether a lambda will work without its associated closure is actually fairly easy. According to the [data model documentation](http://docs.python.org/release/2.6.2/reference/datamodel.html), you can just check the `func_closure` attribute:
```
>>> def get_lambdas():
...     bar = 42
...     return (lambda: 1, lambda: bar)
...
>>> no_vars, vars = get_lambdas()
>>> print no_vars.func_closure
None
>>> print vars.func_closure
(<cell at 0x1020d3d70: int object at 0x7fc150413708>,)
>>> print vars.func_closure[0].cell_contents
42
>>>
```
Then serializing + loading the lambda is fairly straight forward:
```
>>> import marshal, types
>>> old = lambda: 42
>>> old_code_serialized = marshal.dumps(old.func_code)
>>> new_code = marshal.loads(old_code_serialized)
>>> new = types.FunctionType(new_code, globals())
>>> new()
42
```
It's worth taking a look at the documentation for the `FunctionType`:
```
function(code, globals[, name[, argdefs[, closure]]])
Create a function object from a code object and a dictionary.
The optional name string overrides the name from the code object.
The optional argdefs tuple specifies the default argument values.
The optional closure tuple supplies the bindings for free variables.
```
Notice that you can also supply a closure… Which means you might even be able to serialize the old function's closure then load it at the other end :) | I'm not sure exactly what you want to do, but you could try [dill](https://github.com/uqfoundation/dill). Dill can serialize and deserialize lambdas and I believe also works for lambdas inside closures. The pickle API is a subset of it's API. To use it, just "import dill as pickle" and go about your business pickling stuff.
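As a sketch of that last suggestion in Python 3 terms (the answer uses Python 2 attribute names; Python 3 spells them `__code__`/`__closure__`, and `types.CellType`, available since 3.8, can rebuild closure cells; note that `marshal` output is CPython-version-specific, so both machines must run the same interpreter version):

```python
import marshal
import types

def make_adder(n):
    return lambda x: x + n

add5 = make_adder(5)

# Serialize the code object and the closed-over values separately.
code_bytes = marshal.dumps(add5.__code__)
cell_values = [c.cell_contents for c in add5.__closure__]

# "Other end": rebuild the cells and the function.
cells = tuple(types.CellType(v) for v in cell_values)
restored = types.FunctionType(marshal.loads(code_bytes), globals(), closure=cells)
print(restored(2))
```

This only round-trips values that are themselves marshal/pickle-friendly; cells holding arbitrary objects would need their own serialization.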
```
>>> import dill
>>> testme = lambda x: lambda y:x
>>> _testme = dill.loads(dill.dumps(testme))
>>> testme
<function <lambda> at 0x1d92530>
>>> _testme
<function <lambda> at 0x1d924f0>
>>>
>>> def complicated(a,b):
...     def nested(x):
...         return testme(x)(a) * b
...     return nested
...
>>> _complicated = dill.loads(dill.dumps(complicated))
>>> complicated
<function complicated at 0x1d925b0>
>>> _complicated
<function complicated at 0x1d92570>
```
Dill registers it's types into the `pickle` registry, so if you have some black box code that uses `pickle` and you can't really edit it, then just importing dill can magically make it work without monkeypatching the 3rd party code. Or, if you want the whole interpreter session sent over the wire as an "python image", dill can do that too.
```
>>> # continuing from above
>>> dill.dump_session('foobar.pkl')
>>>
>>> ^D
dude@sakurai>$ python
Python 2.7.5 (default, Sep 30 2013, 20:15:49)
[GCC 4.2.1 (Apple Inc. build 5566)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> dill.load_session('foobar.pkl')
>>> testme(4)
<function <lambda> at 0x1d924b0>
>>> testme(4)(5)
4
>>> dill.source.getsource(testme)
'testme = lambda x: lambda y:x\n'
```
You can easily send the image across ssh to another computer, and start where you left off there as long as there's version compatibility of pickle and the usual caveats about python changing and things being installed. As shown, you can also extract the source of the lambda that was defined in the previous session.
Dill also has [some good tools](https://github.com/uqfoundation/dill/blob/master/dill/detect.py) for helping you understand what is causing your pickling to fail when your code fails. | 3,303 |
39,278,419 | I am trying to POST a request from the Android client side to the server side, using AsyncHttpClient:
For now I just want to check whether the response is coming back or not, so I have not implemented anything to parse request parameters on the server side and have just returned some JSON as the response.
```
RequestParams params = new RequestParams();
params.put("key", "value");
params.put("more", "data");

PAAPI.post("http://sairav.pythonanywhere.com", params, new JsonHttpResponseHandler() {
    @Override
    public void onSuccess(int statusCode, Header[] headers, JSONObject response) {
        // If the response is JSONObject instead of expected JSONArray
        Log.d("Response::", response.toString());
        Toast.makeText(act, response.toString(), Toast.LENGTH_LONG).show();
    }

    @Override
    public void onStart() {
        // called before request is started
        Toast.makeText(act, "Going to make API CALL", Toast.LENGTH_LONG).show();
    }

    @Override
    public void onFailure(int statusCode, Header[] headers, String responseString, Throwable throwable) {
        super.onFailure(statusCode, headers, responseString, throwable);
        Log.d("Failed: ", "" + statusCode);
        Log.d("Error : ", "" + throwable);
    }

    @Override
    public void onSuccess(int statusCode, Header[] headers, JSONArray timeline) {
        // Pull out the first event on the public timeline
        JSONObject firstEvent = null;
        try {
            firstEvent = timeline.getJSONObject(0);
        } catch (JSONException e) {
            e.printStackTrace();
        }
        String tweetText = null;
        try {
            tweetText = firstEvent.getString("text");
        } catch (JSONException e) {
            e.printStackTrace();
        }
        // Do something with the response
        Toast.makeText(act, tweetText, Toast.LENGTH_LONG).show();
    }
});
```
PAAPI class ::
```
class PAAPI {
    protected static final String BASE_URL = "http://sairav.pythonanywhere.com";

    private static AsyncHttpClient client = new AsyncHttpClient();

    public static void get(String url, RequestParams params, AsyncHttpResponseHandler responseHandler) {
        client.get(getAbsoluteUrl(url), params, responseHandler);
    }

    public static void post(String url, RequestParams params, AsyncHttpResponseHandler responseHandler) {
        client.post(getAbsoluteUrl(url), params, responseHandler);
    }

    private static String getAbsoluteUrl(String relativeUrl) {
        return BASE_URL + relativeUrl;
    }
}
```
I get the toast from onStart when the app runs, but then in logcat I get this exception:
java.net.UnknownHostException: Unable to resolve host "**sairav.pythonanywhere.comhttp**": No address associated with hostname
**Note: I have already reset the Wi-Fi connection and run again (after searching other similar questions), but the problem persists.**
Logcat :
```
24154-24228/com.example.sairav.moneytor W/System.err: java.net.UnknownHostException: Unable to resolve host "sairav.pythonanywhere.comhttp": No address associated with hostname
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at java.net.InetAddress.lookupHostByName(InetAddress.java:440)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at java.net.InetAddress.getAllByNameImpl(InetAddress.java:252)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at java.net.InetAddress.getAllByName(InetAddress.java:215)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:44)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.conn.DefaultClientConnectionOperator.resolveHostname(DefaultClientConnectionOperator.java:259)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:159)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:145)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:131)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:860)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at cz.msebera.android.httpclient.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at com.loopj.android.http.AsyncHttpRequest.makeRequest(AsyncHttpRequest.java:146)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at com.loopj.android.http.AsyncHttpRequest.makeRequestWithRetries(AsyncHttpRequest.java:177)
09-01 23:32:11.718 24154-24228/com.example.sairav.moneytor W/System.err: at com.loopj.android.http.AsyncHttpRequest.run(AsyncHttpRequest.java:106)
``` | 2016/09/01 | [
"https://Stackoverflow.com/questions/39278419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3820753/"
] | The way I found to do it is by using the token provider from the namespace manager. So:
```
var namespaceMngr = NamespaceManager.CreateFromConnectionString(namespaceConnString);
MessagingFactorySettings mfs = new MessagingFactorySettings();
mfs.TokenProvider = namespaceMngr.Settings.TokenProvider;
mfs.NetMessagingTransportSettings.BatchFlushInterval = TimeSpan.FromSeconds(timeToFlush);
MessagingFactory mf = MessagingFactory.Create(namespaceMngr.Address, mfs);
```
If you are trying to set the `AmqpTransportSettings.BatchFlushInterval` instead of the `NetMessagingTransportSettings.BatchFlushInterval` then I can't help you, I actually stumbled on this post looking for an answer. Trying to change the `AmqpTransportSettings.BatchFlushInterval` doesn't seem to stick to the `MessageFactory` even if the `MessageFactorySettings` reflect the change. | JordanSchillers answer fixes the token provider issue but my address was now using port 9355 instead of 9354.
I ended using a mixture of the ServiceBusConnectionStringBuilder and the NamespaceManager:
```
var serviceBusConnectionString = new ServiceBusConnectionStringBuilder(connection.ConnectionString);
MessagingFactorySettings factorySettings = new MessagingFactorySettings();
factorySettings.TransportType = serviceBusConnectionString.TransportType;
//Use the namespacemanager to create the token provider.
var namespaceManager = NamespaceManager.CreateFromConnectionString(connection.ConnectionString);
factorySettings.TokenProvider = namespaceManager.Settings.TokenProvider;
factorySettings.NetMessagingTransportSettings.BatchFlushInterval = TimeSpan.FromMilliseconds(batchTimeInMs);
MessagingFactory factory = MessagingFactory.Create(serviceBusConnectionString.Endpoints, factorySettings);
return factory.CreateTopicClient(topicName);
``` | 3,304 |
17,004,946 | I have some logging in my application (it happens to be log4cxx but I am flexible on that), and I have some unit tests using the boost unit test framework. When my unit tests run, I get lots of log output, from both the passing and failing tests (not just boost assertions logged, but my own application code's debug logging too). I would like to get the unit test framework to throw away logs during tests that pass, and output logs from tests that fail (I grew to appreciate this behaviour while using python/nose).
Is there some standard way of doing this with the boost unit test framework? If not, are there some start of test/end of test hooks that I could use to buffer my logs and conditionally output them to implement this behaviour myself? | 2013/06/08 | [
"https://Stackoverflow.com/questions/17004946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/99876/"
] | There are start of test and end of test hooks that you can use for this purpose. To set up these hooks you need to define a subclass of [boost::unit\_test::test\_observer](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/test_observer.html), create an instance of the class that will persist throughout the entire test (either a static global object or a [BOOST\_TEST\_GLOBAL\_FIXTURE](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost_test/utf_reference/test_org_reference/test_org_boost_test_global_fixture.html)), and then pass the class to [boost::unit\_test::framework::register\_observer](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/framework/register_observer.html).
The method to override with a start of test hook is `test_unit_start`, and the method to override with an end of test hook is `test_unit_finish`. However, these hooks fire both for test suites as well as individual test cases, which may be an issue depending on how the hooks are set up. The `test_unit_finish` hook also doesn't explicitly tell you whether a given test actually passed, and there doesn't seem to be one clear and obvious way to get that information. There is a [boost::unit\_test::results\_collector](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/results_collector_t.html) singleton, which has a [results()](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/results_collector_t.html#idm45779057427216-bb) method, and if you pass it the `test_unit_id` of the test unit provided to `test_unit_finish`, you get a [test\_results](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/test_results.html) object that has a [passed()](https://www.boost.org/doc/libs/1_70_0/libs/test/doc/html/boost/unit_test/test_results.html#idm45779057388688-bb) method. I can't really see a way to get the `test_unit_id` that is clearly part of the public API -- you can just directly access the p\_id member, but that could always change in a future boost version. You could also manually track whether each test is passing or failing using the `assertion_result`, `exception_caught`, `test_unit_aborted`, and `test_unit_timed_out` hooks from the test\_observer subclass (`assertion_result` indicates a failure of the current test whenever its argument is false and every other hook indicates a failure if it is called at all). | According to the [Boost.Test documentation](http://www.boost.org/doc/libs/1_53_0/libs/test/doc/html/utf/user-guide/runtime-config/reference.html), run your test executable with `--log_level=error`. This will catch only failing test cases.
I checked that it works using a `BOOST_CHECK(false)` on an otherwise correctly running project with a few thousand unit tests.
Running with `--log_level=all` gives the result of all assertions. I checked, by piping the output to `wc -l`, that the number of lines in the log is exactly the same as the number of assertions in the tests (which number is also reported by `--report_level=detailed`). You could of course also `grep` the log for the strings `error` or `failed`.
16,092,153 | I wish to create a 'find' procedure **myself**, which is capable of finding a sub-string in a string and it also should be able to read a string backward and give position of match- just like the original find function in python.
I am unable to figure out what logic should I use- also I don't know how the original find functions?
I just started to use python and am fairly new to programming as well.
Any guidance shall be highly appreciated! | 2013/04/18 | [
"https://Stackoverflow.com/questions/16092153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1050305/"
] | >
> also I don't know how the original find functions
>
>
>
A good way to learn about functions without googling is to use [Ipython](http://ipython.org/)and especially the [notebook variant](http://ipython.org/notebook.html/). These allow you to write python code interactively, and have some special features. Typing the name of a function in Ipython (either notebook or the interpreter) with a question mark returns some information about the function e.g
```
find?
Type: function
String Form:<function find at 0x2893cf8>
File: /usr/lib/pymodules/python2.7/matplotlib/mlab.py
Definition: find(condition)
Docstring: Return the indices where ravel(condition) is true
```
Typing two question marks reveals the source code
```
find??
Type: function
String Form:<function find at 0x2893cf8>
File: /usr/lib/pymodules/python2.7/matplotlib/mlab.py
Definition: find(condition)
Source:
def find(condition):
"Return the indices where ravel(condition) is true"
res, = np.nonzero(np.ravel(condition))
return res
```
You would then need to go down the rabbit hole further to find exactly how find worked. | There is a simple solution to this problem, however there are also much faster solutions which you may want to look at after you've implemented the simple version. What you want to be doing is checking each position in the string you're search over and seeing if the string you're searching for starts there. This is inefficient but works well enough for most purposes, if you're feeling comfortable with that then you may want to look at Boyer-Moore string searching, which is a much more complex solution but more efficient. It exploits the fact that you can determine that if a string doesn't start at a certain point you may not need to check some of the other positions. | 3,306 |
27,967,988 | So I was disappointed to find out that JavaScript's `for (var in array/object)` was not equivalent to Python's `for var in list:`.
In JavaScript you are iterating over the indices themselves e.g.
```
0,
1,
2,
...
```
where as with Python, you are iterating over the values pointed to by the indices e.g.
```
"string var at index 0",
46,
"string var at index 2",
["array","of","values"],
...
```
Is there a standard JavaScript equivalent to Python's looping mechanism?
Disclaimer:
===========
>
> I am aware that the for (var in object) construct is meant to be used to iterate over keys in a dictionary and not generally over indices of an array. I am asking a specific question that pertains to use cases in which I do not care about order(or very much about speed) and just don't feel like using a while loop.
>
>
> | 2015/01/15 | [
"https://Stackoverflow.com/questions/27967988",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3581485/"
] | For an array, the most similar is the `forEach` loop (of course, the index is optional):
```
[1,2,3,4,].forEach(function(value,index){
console.log(value);
console.log(index);
});
```
So you will get the following output:
```
1
0
2
1
3
2
4
3
``` | In the next version of ECMAScript (ECMAScript 6, aka Harmony) there will be a [for-of construct](http://tc39wiki.calculist.org/es6/for-of/):
```
for (let word of ["one", "two", "three"]) {
alert(word);
}
```
`for-of` could be used to iterate over various objects, Arrays, Maps, Sets and custom iterable objects. In that sense it's very close to Python's `for-in`. | 3,312 |
66,650,626 | Is there any way to restore files from the recycle bin in Python?
Here's the code:
```
from send2trash import send2trash
file_name = "test.txt"
operation = input("Enter the operation to perform[delete/restore]: ")
if operation == "delete":
send2trash(file_name)
print(f"Successfully deleted {file_name}")
else:
# Code to restore the file from recycle bin.
pass
```
Here when I type `"restore"` in the `input()` function, I want to restore my deleted file from the recycle bin.
Is there any way to achieve this in python?
It would be great if anyone could help me out.
EDIT:
Thanks for the answer @Kenivia, but I am facing one small issue:
```
import winshell
r = list(winshell.recycle_bin()) # this lists the original path of all the all items in the recycling bin
file_name = "C:\\test\\Untitled_1.txt" # This file is located in the recycle bin
index = r.index(file_name) # to determine the index of your file
winshell.undelete(r[index].original_filename())
```
When I run this code, I get an error: `ValueError: 'C:\\test\\Untitled_1.txt' is not in list`. Can you please help me out? | 2021/03/16 | [
"https://Stackoverflow.com/questions/66650626",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14909172/"
] | It would depend on your operating system.
**Linux**
it's as simple as moving it from the trash folder to the original path. The location of the trash folder differs from distro to distro, but this is where it typically is.
There is a [command line tool](https://github.com/andreafrancia/trash-cli) that you can use, or dig through the code to get some ideas.
```
import subprocess as sp # here subprocess is just used to run the command, you can also use os.system but that is discouraged
sp.run(['mv', '/home/USERNAME/.local/share/Trash/files/test.txt', '/ORIGINAL/PATH/'])
```
**macOS**
On macOS, you do the same thing as you do in Linux, except the trash path is `~/.Trash`
```
import subprocess as sp
sp.run(['mv', '~/.Trash/test.txt', '/ORIGINAL/PATH/'])
```
Note that macOS stores information about the files at `~/.Trash/.DS_Store`, where Linux stores them at `/home/USERNAME/.local/share/Trash/info/`. This can be useful if you don't know the original path of the files.
**Windows**
You have to use `winshell`. See [this article](https://medium.com/swlh/how-to-access-recycle-bin-in-python-192a685e31fb) for more details
```
import winshell
r = list(winshell.recycle_bin()) # this lists all the items in the recycle bin
# the items are shell objects, not plain strings, so compare their original paths
index = [item.original_filename() for item in r].index(r"C:\ORIGINAL\PATH\test.txt") # to determine the index of your file
winshell.undelete(r[index].original_filename())
``` | **Google Colab** (you are the `root` user)
Import the shell utility for Python:
```py
import shutil
```
Move the file from trash to a selected destination:
```py
shutil.move('/root/.local/share/Trash/files/<deleted-file>', '<destination-path>')
``` | 3,315 |
54,207,540 | I'm trying to find any python library or package which implements [newgrnn (Generalized Regression Neural Network)](https://www.mathworks.com/help/deeplearning/ref/newgrnn.html) using python.
Is there any package or library available where I can use a neural network for regression? I'm trying to find a Python equivalent of the [newgrnn (Generalized Regression Neural Network)](https://www.mathworks.com/help/deeplearning/ref/newgrnn.html) which is described here. | 2019/01/15 | [
"https://Stackoverflow.com/questions/54207540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5347207/"
] | I found the library neupy which solved my problem:
```
from neupy import algorithms
from neupy.algorithms.rbfn.utils import pdf_between_data
grnn = algorithms.GRNN(std=0.003)
grnn.train(X, y)
# In this part of the code you can do any moifications you want
ratios = pdf_between_data(grnn.input_train, X, grnn.std)
predicted = (np.dot(grnn.target_train.T, ratios) / ratios.sum(axis=0)).T
```
This is the link for the library: <http://neupy.com/apidocs/neupy.algorithms.rbfn.grnn.html> | A more upgraded form is [pyGRNN](https://github.com/federhub/pyGRNN) which offers in addition to the normal GRNN the Anisotropic GRNN, which optimizes the hyperparameters automatically:
```
from sklearn import datasets
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error as MSE
from pyGRNN import GRNN
# get the data set
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
X_train, X_test, y_train, y_test = train_test_split(preprocessing.minmax_scale(X),
preprocessing.minmax_scale(y.reshape((-1, 1))),
test_size=0.25)
# use Anisotropic GRNN with Limited-Memory BFGS algorithm
# to select the optimal bandwidths
AGRNN = GRNN(calibration = 'gradient_search')
AGRNN.fit(X_train, y_train.ravel())
sigma = AGRNN.sigma
y_pred = AGRNN.predict(X_test)
mse_AGRNN = MSE(y_test, y_pred)
mse_AGRNN ## 0.030437040
``` | 3,316 |
33,713,149 | I have a text file containing CPU stats as below (from sar/sysstat)
```
17:30:38 CPU %user %nice %system %iowait %steal %idle
17:32:49 all 14.56 2.71 3.79 0.00 0.00 78.94
17:42:49 all 12.68 2.69 3.44 0.00 0.00 81.19
17:52:49 all 12.14 2.67 3.22 0.01 0.00 81.96
18:02:49 all 12.28 2.67 3.20 0.03 0.00 81.82
```
My goal is to build lists for each column (except the CPU, %nice and %steal) so I can plot them using bokeh, so I tried to split each line into a list, but I don't know how to ignore certain values, i.e.
```
#!/usr/bin/python
cpu_time = []
cpu_user = []
cpu_system = []
cpu_iowait = []
cpu_idle = []
with open('stats.txt') as F:
for line in F:
time, ignore, user, ignore, system, iowait, ignore, idle = line.split()
cpu_time.append(time)
cpu_user.append(user)
cpu_system.append(system)
cpu_iowait.append(iowait)
cpu_idle.append(idle)
```
Is there a better/short way to do this? More specifically, the logic I used to ignore some of the items doesn't look good to me. | 2015/11/14 | [
"https://Stackoverflow.com/questions/33713149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1247154/"
] | Here is a more dynamic version that would scale to more columns. But there isn't really anything bad about your implementation.
```
# build a dict of column name -> list of column values
stats = {}
with open('stats.txt') as F:
header = None
for idx, line in enumerate(F):
# This is the header
if idx == 0:
# save the header for later use
header = line.split()
for word in header:
stats[word] = []
else:
# combine the header with the line to get a dict
line_dict = dict(zip(header, line.split()))
for key, val in line_dict.items():
stats[key].append(val)
# remove keys we don't want
stats.pop('%nice')
stats.pop('%steal')
``` | First you could use `_` or `__` to represent ignored values (this is a common convention).
Next you could store all values into a single list and then unpack the list into multiple lists using `zip`.
```
cpu_stats = []
with open('stats.txt') as stats_file:
for line in stats_file:
time, _, user, _, system, iowait, _, idle = line.split()
cpu_stats.append([time, user, system, iowait, idle])
cpu_time, cpu_user, cpu_system, cpu_iowait, cpu_idle = zip(*cpu_stats)
```
You could write this using a couple list comprehensions, but I don't think it's any more readable:
```
with open('stats.txt') as stats_file:
lines = (line.split() for line in stats_file)
cpu_stats = [
(time, user, system, iowait, idle)
for time, _, user, _, system, iowait, _, idle
in lines
]
cpu_time, cpu_user, cpu_system, cpu_iowait, cpu_idle = zip(*cpu_stats)
``` | 3,317 |
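If the columns to skip may vary, the header-driven idea from the first answer can also be sketched as a small function (illustrative; it assumes the same whitespace-separated sar output shown in the question):

```python
def parse_sar(lines, skip=("CPU", "%nice", "%steal")):
    """Build a dict of column name -> list of values, ignoring unwanted columns."""
    header = lines[0].split()
    cols = {name: [] for name in header if name not in skip}
    for line in lines[1:]:
        # pair each value with its column name from the header
        for name, value in zip(header, line.split()):
            if name not in skip:
                cols[name].append(value)
    return cols
```

Note that in sar output the first header token is itself a timestamp (e.g. `17:30:38`), so the time column ends up keyed by that string.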
21,881,748 | This may be a stupid question but I'm not sure how to phrase it in a google-friendly way...
In a terminal if you type something like:
```
nano some_file
```
then nano opens up an edit window inside the terminal. A text based application. Ctrl+X closes it again and you see the terminal as it was.
Here's another example:
```
man ls
```
How can I make a text based terminal application in python?
I hope this question makes sense, let me know if you need more clarification... | 2014/02/19 | [
"https://Stackoverflow.com/questions/21881748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/742082/"
] | You probably need to use the alternate screen buffer. To enable it, just print '\033[?1049h'; to disable it, print '\033[?1049l' (terminal control escape sequences).
<http://invisible-island.net/xterm/ctlseqs/ctlseqs.html#The%20Alternate%20Screen%20Buffer>
Example:
```
print('\033[?1049h', end='')
print('Alternative screen buffer')
s = input()
print('\033[?1049l', end='')
print('Normal mode')
print(s)
``` | This does the trick:
<http://docs.python.org/2/howto/curses.html>
Example:
```
import curses
oScreen = curses.initscr()
curses.noecho()
curses.curs_set(0)
oScreen.keypad(1)
oScreen.addstr("Woooooooooooooo\n\n",curses.A_BOLD)
while True:
oEvent = oScreen.getch()
if oEvent == ord("q"):
break
curses.endwin()
``` | 3,320 |
6,577,218 | I have a python application where I want to start doing more work in the background so that it will scale better as it gets busier. In the past I have used Celery for doing normal background tasks, and this has worked well.
The only difference between this application and the others I have done in the past is that I need to guarantee that these messages are processed, they can't be lost.
For this application I'm not too concerned about speed for my message queue; I need reliability and durability first and foremost. To be safe I want to have two queue servers, both in different data centers in case something goes wrong, one a backup of the other.
Looking at Celery it looks like it supports a bunch of different backends, some with more features then the others. The two most popular look like redis and RabbitMQ so I took some time to examine them further.
**RabbitMQ:**
Supports durable queues and clustering, but the problem with the way they have clustering today is that if you lose a node in the cluster, all messages in that node are unavailable until you bring that node back online. It doesn't replicate the messages between the different nodes in the cluster; it just replicates the metadata about the message and then goes back to the originating node to get the message. If the node isn't running, you are S.O.L. Not ideal.
The way they recommend to get around this is to setup a second server and replicate the file system using DRBD, and then running something like pacemaker to switch the clients to the backup server when it needs too. This seems pretty complicated, not sure if there is a better way. Anyone know of a better way?
**Redis:**
Supports a read slave and this would allow me to have a backup in case of emergencies but it doesn't support master-master setup, and I'm not sure if it handles active failover between master and slave. It doesn't have the same features as RabbitMQ, but looks much easier to setup and maintain.
**Questions:**
1. What is the best way to set up Celery so that it will guarantee message processing.
2. Has anyone done this before? If so, would you mind sharing what you did? | 2011/07/05 | [
"https://Stackoverflow.com/questions/6577218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356788/"
] | I suspect that Celery bound to existing backends is the wrong solution for the reliability guarantees you need.
Given that you want a distributed queueing system with strong durability and reliability guarantees, I'd start by looking for such a system (they do exist) and then figuring out the best way to bind to it in Python. That may be via Celery & a new backend, or not. | I've used Amazon SQS for this purpose and got good results. You will keep receiving a message until you delete it from the queue, and it allows you to grow your app as much as you need. | 3,321 |
21,669,632 | I am trying to open a Windows Media Video file on a macintosh using OpenCV. To view this video in MacOS I had to install a player called Flip4Mac. I am assuming that this came with the codecs for decoding WMV. Is there something I can now do to get OpenCV to open the videos using the codec?
In python/opencv2 opening a video should be super easy:
```
cap = cv2.VideoCapture('0009.wmv')
```
But I get this:
```
WARNING: Couldn't read movie file 0009.wmv
``` | 2014/02/10 | [
"https://Stackoverflow.com/questions/21669632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/391339/"
] | use split function.
```
var str = "Architecture, Royal Melbourne Institute of Technology";
console.log(str.split(",")[0]);// logs Architecture
```
The output array after splitting your string by `,` would have the expected result at the zeroth index. | It's again normal JavaScript; all the methods can be used in Node.js.
`var name = "any string";`
For example:
```
var str = "Hi, world",
arrayOfStrings = str.split(','),
output = arrayOfStrings[0]; // output contains "Hi"
```
You can update the required field by directly replacing the string ie.
```
arrayOfStrings[0] = "other string";
str = arrayOfStrings.join(' '); // "other string world"
```
Point to be noted:
If we update `output`, we are updating a copy of the string, NOT a reference, so joining still gives the same text, i.e. "Hi world".
So we need to change the value in the array itself, i.e. `arrayOfStrings[0]`; then `.join(' ')` will combine the required string. | 3,330 |
7,020,630 | I wish to run a long-running script in the background upon receiving a request. I read about `subprocess` but I require that the call is nonblocking so that the request can complete in time.
```
def controlCrawlers(request):
if request.method == 'POST' and 'type' in request.POST and 'cc' in request.POST:
if request.POST['type'] == '3':
if request.POST['cc'] == '1':
try: #temp solution checking socket is occupied by trying to connect
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('localhost',DISCOVERY_SOCKET))
s.close()
return HttpResponse(simplejson.dumps({'success':0,'message': 'Socket is occupied. Possible crawler is already running'}), \
mimetype='application/json')
except:
pid = os.fork()
if pid == 0:
#f = open('/home/foo/django','a')
#f.write('abc')
# f.close()
path = os.path.join(os.path.dirname(__file__), 'blogcontentReader/blogpost_crawler.py')
os.system("python %s" %path)
os._exit(0)
return HttpResponse(simplejson.dumps({'success':1,'message': 'Running...'}), \
mimetype='application/json')
```
I used os.fork as suggested from another [post](https://stackoverflow.com/questions/6441807/spawn-a-new-non-blocking-process-using-python-on-mac-os-x) but apparently control does not flow into my `if pid == 0` portion. Is this the correct method to do this? | 2011/08/11 | [
"https://Stackoverflow.com/questions/7020630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/357236/"
] | Yeah, don't do this, use [celery](http://docs.celeryproject.org/en/master/getting-started/introduction.html) instead. It makes running asynchronous tasks a lot easier, more reliable. | If you don't want to use asynchronous task queues with something like celery you can always just run a python script via cron. There are several options to do this. An example:
* create a model which save the values which are needed by your process
* write a standalone Python/Django script which gets the values from the model, executes the task and removes the database entries
* set up a cronjob to run your script | 3,331 |
19,742,451 | I'm trying to use Django with virtualenv. I actually got the Django hello world webpage to display with 127.0.0.1:8001. Later I had to do some minor tweaks and now it's giving me this error when I try to launch it again (I ctrl-Z'd out of the previous working gunicorn session, so I don't think it is because of that).
```
user myenv # /opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py myProject.wsgi
2013-11-02 08:26:37 [27880] [INFO] Starting gunicorn 18.0
2013-11-02 08:26:37 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:37 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:38 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:38 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:39 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:39 [27880] [ERROR] Retrying in 1 second.
^C2013-11-02 08:26:40 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:40 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:41 [27880] [ERROR] Connection in use: ('127.0.0.1', 8001)
2013-11-02 08:26:41 [27880] [ERROR] Retrying in 1 second.
2013-11-02 08:26:42 [27880] [ERROR] Can't connect to ('127.0.0.1', 8001)
user myenv #
```
Other commands I recently used include:
```
python manage.py syncdb
python manage.py startapp polls
```
I did 'killall python' to make sure they were not the cause.
gunicorn\_config.py:
```
command = '/opt/myenv/bin/gunicorn'
pythonpath = '/opt/myenv/myProject'
workers = 1
user = 'tim'
```
myProject.wsgi:
```
import os
# os.environ["DJANGO_SETTINGS_MODULE"] = "myProject.settings"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myProject.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
``` | 2013/11/02 | [
"https://Stackoverflow.com/questions/19742451",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1661745/"
] | `ctrl+z` halts the process, but does not close it. As a consequence, it does not release its ports. You can bring the process back with `fg` and then close it properly using `ctrl+c`. | The port 8001 was probably bound and thus unavailable for the connection. | 3,332 |
62,295,863 | I have this (python) list
my_list = [['dog','cat','mat','fun'],['bob','cat','pan','fun'],['dog','ben','mat','rat'],
['cat','mat','fun','dog'],['mat','fun','dog','cat'],['fun','dog','cat','mat'],
['rat','dog','ben','mat'],['dog','mat','cat','fun'], ...
]
my_list has 200704 elements
Note here
my_list[0] = ['dog','cat','mat','fun']
dog->cat->mat->fun->dog
my_list[3] = ['cat','mat','fun','dog']
cat->mat->fun->dog->cat
my_list[4] = ['mat','fun','dog','cat']
mat->fun->dog->cat->mat
my_list[5] = ['fun','dog','cat','mat']
fun->dog->cat->mat->fun
Going circular, they are all the same. So they should be marked duplicates.
Note:
my_list[0] = ['dog','cat','mat','fun']
my_list[7] = ['dog','mat','cat','fun']
These should NOT be marked duplicates since going circular, they are different.
Similarly,
my_list[2] = ['dog','ben','mat','rat']
my_list[6] = ['rat','dog','ben','mat']
They should be marked duplicates.
```
def remove_circular_duplicates(my_list):
# the quicker and more elegant logic here
# the function should identify that my_list[0], my_list[3], my_list[4] and my_list[5] are circular duplicates
# keep only my_list[0] and delete the rest 3
# same for my_list[2] and my_list[6] and so on
return (my_list_with_no_circular_duplicates)
```
----------------------------------------------------------------
My try:
----------------------------------------------------------------
This works, but takes more than 3 hrs to finish 200704 elements.
And it's not an elegant way either.. (pardon my level)
```
import datetime
start_time=datetime.datetime.now()
t=my_list
tLen=len(t)
i=0
c=0
finalT=[quad+['unmarked'] for quad in t] # each quad gets a status flag appended
while i<tLen:
c=c+1
if c>2000:
# this is just to keep you informed of the progress
print(f'{i} of {tLen} finished ..')
c=0
if (finalT[i][4]=='unmarked'):
# make 0-1-2-3 -> 1-2-3-0 and check any duplicates
x0,x1,x2,x3 = t[i][1],t[i][2],t[i][3],t[i][0]
# make 0-1-2-3 -> 2-3-0-1 and check any duplicates
y0,y1,y2,y3 = t[i][2],t[i][3],t[i][0],t[i][1]
# make 0-1-2-3 -> 3-0-1-2 and check any duplicates
z0,z1,z2,z3 = t[i][3],t[i][0],t[i][1],t[i][2]
while j<tLen:
if (finalT[j][4]=='unmarked' and j!=i):
#j!=i skips checking the same (self) element
tString=t[j][0]+t[j][1]+t[j][2]+t[j][3]
if (x0+x1+x2+x3 == tString) or (y0+y1+y2+y3 == tString) or (z0+z1+z2+z3 == tString):
# duplicate found, mark it as 'duplicate'
finalT[j][4]='duplicate'
tString=''
j=j+1
finalT[i][4] = 'original'
j=0
i=i+1
# make list of only those marked as 'original'
i=0
ultimateT = []
while i<tLen:
if finalT[i][4] == 'original':
ultimateT.append(finalT[i])
i=i+1
# strip the 'oritinal' mark and keep only the quad
i=0
ultimateTLen=len(ultimateT)
while i<ultimateTLen:
ultimateT[i].remove('original')
i=i+1
my_list_with_no_circular_duplicates = ultimateT
print (f'\n\nDONE!! \nStarted at: {start_time}\nEnded at {datetime.datetime.now()}')
return my_list_with_no_circular_duplicates
```
What I want is a quicker way of doing the same.
Thanks in advance. | 2020/06/10 | [
"https://Stackoverflow.com/questions/62295863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13717822/"
] | Your implementation is an n-squared algorithm, which means that the implementation time will grow dramatically for a large data set. 200,000 squared is a very large number. You need to convert this to an order n or n-log(n) algorithm. To do that you need to preprocess the data so that you can check whether a circularly equivalent item is also in the list without having to search through the list. To do that put each of the entries into a form that they can be compared without needing to iterate through the list. I would recommend that you rotate each entry so that it has the alphabetically first item first. For example change ['dog','cat','mat','fun'] to ['cat','mat','fun','dog']. That is an order n operation to process each element of the list once.
Then with them all in a common format you have several choices to determine if each entry is unique. I would use a set. For each item check if the item is in a set, if not, it is unique and should be added to the set. If the item is already in the set, then an equivalent item has already been found and this item can be removed. Checking if an item is in a set is a constant time operation in Python. It does this by using a hash table in order to index to find an item instead of needing to search. The result is this is is also an order n operation to go through each entry doing the check. Overall the algorithm is order n and will be dramatically faster than what you were doing. | @BradBudlong
Brad Budlong's answer is right.
Following is the implementation result of the same.
My method (given in the question):
Time taken: ~274 min
Result: len(my_list_without_circular_duplicates) >> 50176
Brad Budlong's method:
Time taken: ~12 sec (great !)
Result: len(my_list_without_circular_duplicates) >> 50176
Following is just the implementation of Brad Budlong's method:
```
# extract all individual words like 'cat', 'rat', 'fun' and put in a list without duplicates
all_non_duplicate_words_from_my_list = {.. the appropriate code here}
# and sort them alphabetically
alphabetically_sorted_words = sorted(all_non_duplicate_words_from_my_list)
# mark all as 'unsorted'
all_q_marked=[]
for i in my_list:
all_q_marked.append([i,'unsorted'])
# format my_list- in Brad's words,
# rotate each entry so that it has the alphabetically first item first.
# For example change ['dog','cat','mat','fun'] to ['cat','mat','fun','dog']
for w in alphabetically_sorted_words:
print(f'{w} in progress ..')
for q in all_q_marked:
if q[1]=='unsorted':
# check if the word exist in the quad
if w in q[0]:
# word exist, then rotate this quad to put that word in first place
# rotation_count=q[0].index(w) -- alternate method lines
quad=q[0]
for j in range(4):
quad=quad[-1:] + quad[:-1]
if quad[0]==w:
q[0]=quad
break
# mark as sorted
q[1]='sorted'
# strip the 'sorted' mark and keep only the quad
i=0
formatted_my_list=[]
while i<len(all_q_marked):
formatted_my_list.append(all_q_marked[i][0])
i=i+1
# finally remove duplicate lists in the list
my_list_without_circular_duplicates = [list(t) for t in set(tuple(element) for element in formatted_my_list)]
print (my_list_without_circular_duplicates)
```
Note here: although it iterates over and processes alphabetically_sorted_words (201) against the entire all_q_marked (200704), the time taken to process still reduces exponentially as elements in all_q_marked get marked as 'sorted'. | 3,335 |
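For reference, the canonical-rotation idea described in the accepted answer above can also be written much more compactly (a sketch, assuming the quads are lists of strings):

```python
def remove_circular_duplicates(quads):
    seen = set()
    result = []
    for quad in quads:
        # canonical key: the lexicographically smallest rotation of the quad
        rotations = (tuple(quad[i:] + quad[:i]) for i in range(len(quad)))
        key = min(rotations)
        if key not in seen:
            seen.add(key)
            result.append(quad)
    return result
```

Each quad is reduced to a canonical tuple in O(k^2) time for quad length k, and set membership checks are constant time on average, so the whole pass stays linear in the number of quads.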
58,909,624 | While reading this [article](https://pbpython.com/pandas_transform.html), I came across this statement.
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
```
Other than `reset_index()` method call, everything else is clear to me.
My question is what will happen if I don't call `reset_index()`, considering the sequence given below?
```
order_total = df.groupby('order')["ext price"].sum().rename("Order_Total").reset_index()
df_1 = df.merge(order_total)
df_1["Percent_of_Order"] = df_1["ext price"] / df_1["Order_Total"]
```
I tried to understand this method from <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html>, but couldn't understand what it means to *reset the index* of a dataframe. | 2019/11/18 | [
"https://Stackoverflow.com/questions/58909624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1465553/"
] | I think better here is to use [`GroupBy.transform`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html) to get a new `Series` of the same size as the original DataFrame, filled with the aggregated values, so `merge` is not necessary:
```
df_1 = pd.DataFrame({
'A':list('abcdef'),
'ext price':[5,3,6,9,2,4],
'order':list('aaabbb')
})
order_total1 = df_1.groupby('order')["ext price"].transform('sum')
df_1["Percent_of_Order"] = df_1["ext price"] / order_total1
print (df_1)
A ext price order Percent_of_Order
0 a 5 a 0.357143
1 b 3 a 0.214286
2 c 6 a 0.428571
3 d 9 b 0.600000
4 e 2 b 0.133333
5 f 4 b 0.266667
```
>
> My question is what will happen if I don't call reset\_index() considering the sequence?
>
>
>
Here is the `Series` before `reset_index()`; `reset_index` converts the `Series` into a two-column DataFrame, where the first column is named after the index name and the second column after the `Series` name.
```
order_total = df_1.groupby('order')["ext price"].sum().rename("Order_Total")
print (order_total)
order
a 14
b 15
Name: Order_Total, dtype: int64
print (type(order_total))
<class 'pandas.core.series.Series'>
print (order_total.name)
Order_Total
print (order_total.index.name)
order
print (order_total.reset_index())
order Order_Total
0 a 14
1 b 15
```
The reason why the two-column DataFrame is necessary in your code is that no parameter is passed to `merge`. That means it uses the `on` parameter as the intersection of common column names between both DataFrames, here the `order` column. | A simplified explanation is that:
`reset_index()` takes the current index, and places it in column 'index'. Then it recreates a new 'linear' index for the data-set.
```
df=pd.DataFrame([20,30,40,50],index=[2,3,4,5])
0
2 20
3 30
4 40
5 50
df.reset_index()
index 0
0 2 20
1 3 30
2 4 40
3 5 50
``` | 3,336 |
55,276,170 | I have been using Selenium and Python to web scrape for a couple of weeks now. It has been working fairly well. Been running on macOS and Windows 7. However, all of a sudden the headless web driver has stopped working. I have been using chromedriver with the following settings:
```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.add_argument("--headless")
options.add_argument('--no-sandbox')
options.add_argument('--disable-gpu')
options.add_argument("--window-size=1920x1080")
driver = webdriver.Chrome(chrome_options=options)
driver.get('url')
```
Initially I had to add the window, gpu and sandbox arguments to get it to work, and it did work up until now. However, when running the script now, it gets stuck at driver.get('url'). It doesn't produce an error or anything; it just seems to run indefinitely. When I run without headless and simply run:
```
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('url')
```
it works exactly as intended. This problem is also isolated to my windows machine. Where do I start? | 2019/03/21 | [
"https://Stackoverflow.com/questions/55276170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9428990/"
] | You could try giving your svg an id (or class) and then styling it like so:
```
#test{
opacity:0;
}
#test:hover{
opacity:1;
}
```
---
the id should be inside your svg:
```
<svg id="test" .............. >
</svg>
```
I'm not sure if this is exactly what you mean, but it's an easy way to do it. | I would suggest taking a look at [ngx-svg](https://www.npmjs.com/package/ngx-svg), which allows you to create containers and add multiple elements within those containers - in your case, circles. It has other elements as well, and there is documentation which helps you understand what you have to do. | 3,341 |
17,779,480 | Recently, I've been attempting to defeat one of my main weaknesses in programming in general, random generation. I thought it would be an easy thing to do, but the lack of simple information is killing me on it. I don't want to sound dumb, but it feels to me like most of the information from places like [this](http://freespace.virgin.net/hugo.elias/models/m_perlin.htm) is written for mathematicians who went to college to graduate in theoretical mathematics. I just don't understand what I'm meant to do with that information in order to apply it to programming in a language such as Python.
I've been working for a few days staring at equations and making attempt after attempt, but still, after all those days, after ripping my code apart again and again, all that's been working properly this entire time is this noise generator to generate basic noise:
```
import random
import math
random.seed(0)
def generateWhiteNoise(width,height):
noise = [[r for r in range(width)] for i in range(height)]
for i in range(0,height):
for j in range(0,width):
noise[i][j] = random.randint(0,1)
return noise
noise = generateWhiteNoise(50,12)
for i in noise:
print()
for o in i:
if(o == 0):
print('-',end='')
else:
print('#',end='')
```
This code produces this result:
```
##-######--#--#-#--##-###-###---#-##-#-----#--##-#
#-#-##-##-#----##------##--#####-#-##---#--#-##---
-------#-#------#---#-#---###--#--#-###-----##-#--
######--#-#-#--####-###---#---###-##--#-#-##--####
-#----###--------##--##--##-#-#--#----###-####--##
---####-#--#--###-#-#--#--#####--####-#-##-##--#--
----#--####-#-#-#-#-#---#--###------###--#-######-
--###--#-###-------#-##--###---#-####----###-#####
#----##--##-#--##-###--#----#-#-##--##-#-##---###-
##---##----##--##--#--#--###-###-#--#-##---#------
-##----#-###---######---#-#---#---###---#---###-##
#--##-##-###-###---#--##-##--##-##-#-#-##--#-#-##-
```
I'm wanting it to eventually produce something like this:
```
--------------------------------------------------
------------------####----------------------------
-----------------#####----------------------------
----------------#####-----------------------------
---------------#####--------------###-------------
---------------#####--------------###-------------
---------------------------------####-------------
---######------------------------####-------------
---######------------###--------------------------
----########---------###--------------------------
-----#######---------###--------------------------
------###-----------------------------------------
```
How can I manage to smooth out the white-noise I generate, and turn it into islands? Can anyone explain it in a very simplistic way for me?
I may be thinking about all of this very wrong. | 2013/07/22 | [
"https://Stackoverflow.com/questions/17779480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2577669/"
] | The direct answer to your question is "No, you cannot do what you are asking", and the second answer is "Yes, you are thinking about this all wrong".
The reason is that you are generating completely random noise. What you are asking for is coherent noise. They are two completely different animals and you cannot get coherent noise from random noise. Hence my answer.
To explain why, you must understand this simple statement which I am repeating from the [excellent libnoise documentation](http://libnoise.sourceforge.net/glossary/index.html#coherentnoise):
---
**Coherent noise**
==================
*A type of smooth pseudorandom noise.*
Coherent noise is generated by a coherent-noise function, which has three important properties:
* Passing in the same input value will always return the same output value.
* A small change in the input value will produce a small change in the output value.
* A large change in the input value will produce a random change in the output value.
---
Random noise does not have these properties, and therefore is completely unsuitable for what you are trying to achieve.
I would suggest studying [Ken Perlin's latest (improved) reference implementation](http://mrl.nyu.edu/~perlin/noise/) and his [SIGGRAPH 2002](http://mrl.nyu.edu/~perlin/paper445.pdf) notes.
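To make the "small change in input, small change in output" property concrete, here is a minimal value-noise sketch (much simpler than Perlin noise proper; the `scale` and `threshold` knobs are arbitrary choices): random values on a coarse grid are bilinearly interpolated up to full resolution and then thresholded into land and sea:

```python
import random

random.seed(0)

def lerp(a, b, t):
    # Linear interpolation between a and b.
    return a + (b - a) * t

def generate_island_map(width, height, scale=6, threshold=0.55):
    # Coherent "value noise": random values on a coarse grid,
    # bilinearly interpolated up to full resolution, then thresholded.
    gw, gh = width // scale + 2, height // scale + 2
    grid = [[random.random() for _ in range(gw)] for _ in range(gh)]
    rows = []
    for y in range(height):
        gy, ty = divmod(y, scale)
        row = ''
        for x in range(width):
            gx, tx = divmod(x, scale)
            top = lerp(grid[gy][gx], grid[gy][gx + 1], tx / scale)
            bottom = lerp(grid[gy + 1][gx], grid[gy + 1][gx + 1], tx / scale)
            row += '#' if lerp(top, bottom, ty / scale) > threshold else '-'
        rows.append(row)
    return rows

for line in generate_island_map(50, 12):
    print(line)
```

With a fixed seed the map is reproducible; increasing `scale` yields larger, smoother blobs.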
If you cannot understand or implement this, then just use a library such as [libnoise](http://libnoise.sourceforge.net/), an excellent and well-used LGPL library, originally in C++, which has also been ported to many other languages. | Rather, use cellular automata. The algorithm that you find [here](http://www.roguebasin.com/index.php?title=Cellular_Automata_Method_for_Generating_Random_Cave-Like_Levels) creates patterns similar to those you would like to see:
```
. . . . . . . . . . . . . . .
. . . . . # # . . . . . # . .
. . . . # # # # . . . # # # .
. . . . . # # # # . . # # # .
. . . . . . # # # # # # # . .
. . . . . . # # # # # # # . .
. . . . # # # # # # # # # . .
. . . # # # # # # # # # # . .
. . # # # # # # . # . # # . .
. . # # # # # . . # . . . . .
. . . # # # # . . . # # # . .
. . . # # # # . . . # # # # .
. . # # # # . . . . . # # # .
. . # # # # . . . . . # # . .
. . . . . . . . . . . . . . .
``` | 3,342 |
48,166,183 | I have a problem which my novice knowledge cannot solve.
I'm trying to copy some python-2.x code (which is working) to python-3.x. Now it gives me an error.
Here's a snippet of the code:
```
def littleUglyDataCollectionInTheSourceCode():
a = {
'Aabenraa': [842.86917819535, 25.58264089252],
'Aalborg': [706.92644963185, 27.22746146366],
'Aarhus': [696.60346488317, 25.67540525994],
'Albertslund': [632.49007681987, 27.70499807418],
'Allerød': [674.10474259426, 27.91964123274],
'Assens': [697.02257492453, 25.83386400960],
'Ballerup': [647.05121493736, 27.72466920284],
'Billund': [906.63431520239, 26.23136823557],
'Bornholm': [696.05765684503, 28.98396327957],
'Brøndby': [644.89390717471, 28.18974127413],
}
return a
```
and:
```
def calcComponent(data):
# Todo: implement inteface to set these values by
# the corresponding 'Kommune'
T = float(data.period)
k = 1.1
rH = 1.0
# import with s/\([^\s-].*?\)\t\([0-9.]*\)$/'\1':'\2',/
myDict = littleUglyDataCollectionInTheSourceCode();
#if data.kommune in myDict:
# https://docs.djangoproject.com/en/1.10/ref/unicode/
key = data.kommune.encode("utf-8")
rd = myDict.get(key.strip(), 0)
laP = float(rd[0]) # average precipitation
midV = float(rd[1]) # Middelværdi Klimagrid
print(("lap " + str(laP)))
print(("mid V" + str(midV)))
```
It gives the error:
```
line 14, in calcComponent
laP = float(rd[0]) # average precipitation
TypeError: 'int' object is not subscriptable
```
I've tried different approaches and read dozens of articles with no luck. Being a novice, it is like stumbling in the dark. | 2018/01/09 | [
"https://Stackoverflow.com/questions/48166183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6551344/"
] | In your example `myDict` is a dictionary with strings as keys and lists as values.
```
key = data.kommune.encode("utf-8")
```
will be a bytes object, so there can't ever be any corresponding value for that key in the dictionary. This worked in python2, where automatic conversion was performed, but not anymore in python3; you need to use the correct type for lookups.
```
rd = myDict.get(key.strip(), 0)
```
will always return the integer `0`, which means that `rd[0]` can not work because integers are not indexable, as the error message tells you.
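The mismatch is easy to reproduce in isolation; a minimal sketch using one entry from the dictionary above:

```python
myDict = {'Aabenraa': [842.86917819535, 25.58264089252]}

key = 'Aabenraa'.encode('utf-8')  # b'Aabenraa' -- a bytes object, not a str
rd = myDict.get(key, 0)           # bytes never match the str keys, so rd is 0
print(rd)                         # 0
# rd[0] would now raise: TypeError: 'int' object is not subscriptable

print(myDict.get('Aabenraa', 0))  # [842.86917819535, 25.58264089252]
```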
Generally the default value in a `get()` call should be compatible with what is returned in all other cases. Returning `0` as default where all non-default cases return lists can only lead to problems. | You are using `0` as a default value for `rd`, whereas the values in the dict are lists, so if the key is not found, `rd[0]` or `rd[1]` will fail. Instead, use a list or tuple as default, then it should work.
```
rd = myDict.get(key.strip(), [0, 0])
``` | 3,344 |
6,493,681 | I have a list of ids in python. For example:
```
x = [1,2,3,4,5,6]
```
And I want to select a list of records in my (mysql) database under the condition that the ids of these records are in x, something like below:
```
SELECT * FROM mytable WHERE id IN x
```
but I don't know how I can do this in python. I have seen some examples using %s in their sql string. However, this does not work when the variable is a list. Does anyone know how I can do this?
Thanks | 2011/06/27 | [
"https://Stackoverflow.com/questions/6493681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/313245/"
] | Try something like this:
```
'(%s)' % ','.join(map(str,x))
```
This will give you a string that you could use to send to MySql as a valid `IN` clause:
```
(1,2,3,4,5,6)
``` | Well, if all of those are known to be numbers of good standing, then you can simply call
```
"SELECT * FROM mytable WHERE ID IN ({0})".format(','.join(x))
```
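An alternative that avoids building the values into the SQL string at all is to generate only the placeholders and let the driver bind the values; a sketch assuming a DB-API driver with the `%s` paramstyle (e.g. MySQLdb or pymysql):

```python
x = [1, 2, 3, 4, 5, 6]
placeholders = ','.join(['%s'] * len(x))  # '%s,%s,%s,%s,%s,%s'
sql = "SELECT * FROM mytable WHERE id IN ({0})".format(placeholders)
print(sql)  # SELECT * FROM mytable WHERE id IN (%s,%s,%s,%s,%s,%s)
# cursor.execute(sql, x)  # the driver quotes and escapes each value itself
```

This way the values never get interpolated into the statement text, which also protects against SQL injection.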
If you know that they are numbers but *any* of them might have been from the user, then I might use:
```
"SELECT * FROM mytable WHERE ID IN ({0})".format(','.join(str(int(i)) for i in x))
```
`format` will perform the replacement at the appropriate index. `join` is used so that you don't have the `[]`, and it only accepts strings, so each value is passed through `str`. Applying `int` first validates that every value really is a number, which protects you if any of them came from user input. | 3,347
11,360,161 | I get this error while running a python script (called by ./waf --run):
TypeError: abspath() takes exactly 1 argument (2 given)
The problem is that it is indeed called with: obj.path.abspath(env).
This is not a python issue, because that code worked perfectly before, and it's part of a huge project (ns3) so I doubt this is broken.
However something must have changed in my settings, because this code worked before, and now it doesn't.
Can you help me figure out why I get this error?
Here is the python code: <http://pastebin.com/EbJ50BBt>. The error occurs at line 61. | 2012/07/06 | [
"https://Stackoverflow.com/questions/11360161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1502564/"
] | The documentation of the method [`Node.abspath()`](http://docs.waf.googlecode.com/git/apidocs_16/Node.html#waflib.Node.Node.abspath) states it does not take an additional `env` parameter, and I confirmed that it never did by checking the git history. I suggest replacing
```
if not (obj.path.abspath().startswith(launch_dir)
or obj.path.abspath(env).startswith(launch_dir)):
continue
```
with
```
if not obj.path.abspath().startswith(launch_dir):
continue
```
If this code worked before, this is probably due to the fact that the first operand of the `or` expression happened to always be `True`, so the second operand was never evaluated. It seems to be a bug in your code anyway. | You should have a file name and line number in the traceback. Go to that file and line and find out what "obj" and "obj.path.abspath" are. A simple solution would be to put the offending line in a try/except block to print (or log) more information, e.g.:
```
# your code here
try:
whatever = obj.path.abspath(env)
except Exception, e:
# if you have a logger
logger.exception("oops : obj is '%s' (%s)" % (obj, type(obj)))
# else
import sys
print >> sys.stderr, "oops, got %s on '%s' (%s)" % (e, obj, type(obj))
# if you can run this code directly from a shell,
# this will send you in the interactive debugger so you can
# inspect the offending object and the whole call stack.
# else comment out this line
import pdb; pdb.set_trace()
# and re-raise the exception
raise
```
My bet is that "obj.path" is NOT the python 'os.path' module, and that "obj.path.abspath" is an instance method that only takes "self" as an argument. | 3,348
48,264,720 | I am starting to learn the application of different types of classifiers in python sklearn module. The clf\_LR.predict(X\_predict) predicts the 'Loan\_Status' of the test data. In the training data it is either 1 or 0 depending on loan approval. But the predict gives a numpy array of float values around 0 and 1. I want to convert these values to nearest 1 or 0.
```
#regression
X = np.array(train_data.drop(['Loan_Status'],1))
y = np.array(train_data['Loan_Status'])
X_predict = np.array(test_data)
clf_LR = LinearRegression()
clf_LR.fit(X,y)
accuracy = clf_LR.score(X,y)
clf_LR.predict(X_predict)
```
The output is:
```
array([ 1.0531505 , 0.54463698, 0.66512836, 0.91817899, 0.81084038,
0.4400971 , 0.05132584, 0.5797642 , 0.72760712, 0.78624 ,
0.60043618, 0.79904144, 0.78164806, 0.63140686, 0.66746683,
0.56799806, 0.62462483, -0.27487531, 0.77595855, 0.62112923,
0.42499627, 0.21962665, 0.73747749, 0.62580336, 1.08242647,
0.60546731, 0.58980138, 0.68778534, 0.80729382, -0.25906255,
0.5911749 , 0.57754607, 0.71869494, 0.7414411 , 0.79574657,
1.053294 , 0.77238618, 0.84663303, 0.93977499, 0.39076889,
0.79835196, -0.31202102, 0.57969628, 0.6782184 , 0.62406822,
0.76141175, -0.14311827, 0.87284553, 0.45152395, 0.70505136,
0.80529711, 0.88614397, 0.0036123 , 0.59748637, 1.15082822,
0.6804735 , 0.64551666, -0.28882904, 0.71713245, 0.66373934,
0.5250008 , 0.81825485, 0.71661801, 0.74462875, 0.66047019,
0.62186449, -0.2895147 , 0.78990148, -0.198547 , 0.02752572,
1.0440052 , 0.58668459, 0.82012492, 0.50745345, -0.07448848,
0.56636204, 0.85462188, 0.4723699 , 0.5501792 , 0.91271145,
0.61796331, 0.47130567, 0.74644572, 0.38340698, 0.65640869,
0.75736077, -0.23866258, 0.89198235, 0.74552824, 0.58952803,
0.75363266, 0.44341609, 0.76332621, 0.60706656, 0.548128 ,
-0.05460422, 0.81488009, 0.51959111, 0.91001994, 0.71223763,
0.67600868, 0.79102218, -0.00530356, 0.20135057, 0.73923083,
0.56965262, 0.80045725, 0.67266281, 0.81694555, 0.70263141,
0.38996739, 0.38449832, 0.77388573, 0.92362979, 0.54006616,
0.76432229, 0.61683807, 0.44803386, 0.79751796, 0.55321023,
1.10480386, 1.03004599, 0.54718652, 0.74741632, 0.83907984,
0.86407637, 1.10821273, 0.6227142 , 0.94443767, -0.02906777,
0.68258672, 0.38914101, 0.86936186, -0.17331518, 0.35980983,
-0.32387964, 0.86583445, 0.5480951 , 0.5846661 , 0.96815188,
0.45474766, 0.54342586, 0.41997578, 0.73069535, 0.05828308,
0.4716423 , 0.70579418, 0.76672804, 0.90476146, 0.45363533,
0.78646442, 0.76841914, 0.77227952, 0.75068078, 0.94713967,
0.67417191, -0.16948404, 0.80726176, 1.12127705, 0.74715634,
0.44632464, 0.61668874, 0.6578295 , 0.60631521, 0.42455094,
0.65104766, -0.01636441, 0.87456921, -0.24877682, 0.76791838,
0.85037569, 0.75076961, 0.91323444, 0.27976108, 0.89643734,
0.14388116, 0.7340059 , 0.46372024, 0.91726212, 0.43539411,
0.44859789, -0.04401285, 0.28901989, 0.62105238, 0.56949422,
0.49728522, 0.65641239, 1.11183953, 0.76159204, 0.55822867,
0.79752582, 0.72726221, 0.49171728, -0.32777583, -0.30767082,
0.70702693, 0.91792405, 0.76112155, 0.68748705, 0.6172974 ,
0.70335159, 0.74522648, 1.01560133, 0.62808723, 0.50816819,
0.61760714, 0.55879101, 0.50060645, 0.87832261, 0.73523273,
0.60360986, 0.78153534, -0.2063286 , 0.85540569, 0.59231311,
0.75875401, 0.34422049, 0.58667666, -0.14887532, 0.81458285,
0.90631338, 0.5508966 , 0.93534451, 0.0048111 , 0.66506743,
0.5844512 , 0.67768398, 0.91190474, 0.39758323, 0.44284897,
0.47347625, 0.7603246 , 0.41066447, 0.50419741, 0.74437409,
0.44916515, 0.14160128, 0.72991652, 1.15215444, 0.50707437,
0.61020873, 0.8831041 , 0.78476914, 0.4953215 , 0.71862044,
0.66574986, 0.89547805, 0.93534669, 0.57742771, 0.9225718 ,
0.67209865, 0.34461023, 0.52848926, 0.95846303, 0.88237609,
-0.01603499, 0.94158916, 0.44069838, -0.17133448, 0.35288583,
0.55302018, 0.36446662, 0.62047864, 0.3803367 , 0.60398751,
0.9152663 , 0.48237299, 0.05646119, -0.65950771, 0.52644392,
-0.14182158, 0.65408783, -0.01741803, 0.76022561, 0.70883902,
0.56782191, 0.66484671, 0.79638622, 0.6668274 , 0.94365746,
0.76132423, 0.63407964, 0.43784118, 0.74599199, 0.69594847,
0.96794245, 0.49120557, -0.30985337, 0.48242465, 0.78788 ,
0.74562549, 0.61188416, -0.13990599, 0.59192289, 0.52577439,
0.62118612, 0.47292839, 0.38433912, 0.58535049, 0.61180443,
0.68363366, -0.17158279, -0.16752298, -0.12006642, 0.11420194,
0.54435597, 0.76707794, 0.94712879, 0.90341355, 0.41133755,
0.78063296, 1.06335948, 0.65061658, 0.55463919, -0.16184664,
0.45612831, 0.2974657 , 0.74769718, 0.73568274, 0.91792405,
0.69938454, 0.07815941, 0.73400855, 0.33905491, 0.48330823,
0.76760269, -0.03303408, 0.64432907, 0.44763337, 0.59214243,
0.78339532, 0.74755724, 0.70328769, 0.61766433, -0.34196805,
0.74271219, 0.66617484, 0.75939014, 0.46274977, 0.43760914,
-0.11568388, 1.12101126, 0.65718951, 0.74632966, -0.3918828 ,
0.29915035, 0.6155425 , 0.66089274, 0.8555285 , 0.54121081,
0.74758901, 0.84686185, 0.68150433, 0.44953323, 0.71672738,
0.86416735, 0.97374945, 0.36594854, 0.5508358 , 0.60524084,
-0.04479449, 0.56064679, 0.46826815, 0.75353414, 0.63092004,
0.52340796, 0.36622527, 0.42553235, 0.81877722, -0.03474048,
0.56185539, 0.57384744, 0.86959987, -0.35002778, 0.59209448,
0.43892519, 0.83366299, 0.55630127, 0.68092981, 0.79639642,
0.96289854, -0.15094804, 0.5866888 , 0.88245453, 0.65447514,
1.00194182, 0.45130259, -0.16774169, 0.66529484, 0.87330175,
0.12493249, 0.07427334, 0.79084776, 0.60848656, 0.7706963 ,
0.76846985, 0.74796571, 0.52316893, 0.62116966, 0.52497383,
0.05855483, 0.75575428, -0.20233853, 0.77693886, 0.15845594,
0.88457158, 0.0846857 , 0.7831948 , 0.54955829, 0.71151434,
1.23277406, 0.0153455 , 0.7111069 , 0.64140878, 0.69578766,
0.72386089, 0.3291767 , 0.8414526 , -0.14267676, 0.93841726,
0.94248916, 0.61492774, 0.60835432, -0.05542942, 1.01387972,
0.81980896, 0.39519755, 0.85483256, 0.79124875, 0.46196837,
0.5157149 , -0.2076404 , 0.57935033, 0.86477299, 0.62917312,
0.85446301, 0.40595525, 0.64527099, 0.7452028 , 0.58527638,
0.66419528, 0.49120555, 0.83966651, 0.86063059, 0.85615707,
-0.22704174])
```
I want to convert these values to the nearest 1 or 0. Is there any way of doing this in place? | 2018/01/15 | [
"https://Stackoverflow.com/questions/48264720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8380563/"
] | ```
import numpy as np
np.round(np.clip(clf_LR.predict(X_predict), 0, 1)) # floats
np.round(np.clip(clf_LR.predict(X_predict), 0, 1)).astype(bool) # binary
```
* [numpy.clip](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html)
* [numpy.round](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.round_.html)
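For instance, applied to a few of the raw predictions shown above:

```python
import numpy as np

preds = np.array([1.0531505, 0.54463698, -0.27487531, 0.05132584])
clipped = np.clip(preds, 0, 1)          # bound everything to [0, 1]
binary = np.round(clipped).astype(int)  # then round to the nearest class
print(binary)                           # [1 1 0 0]
```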
Technically the above code is not **in-place**, but it can be converted (using `out` arguments)!
(untested: try it!) | As said in @Pault's comment, what you need is a classifier; sklearn has many classifiers!
The choice of which classifier to use depends on many factors.
The following picture from [sklearn](http://scikit-learn.org/stable/tutorial/machine_learning_map/index.html) can help you choose:
[](https://i.stack.imgur.com/MLhG2.png)
Basically, for a logistic regression classifier, you can do the following:
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1.0, penalty='l1')
clf.fit(X, y)
clf.predict(X_predict) # will give you 0 or 1 as the class
``` | 3,350 |
36,427,747 | I'm using Ipython Notebook for my research. As my file grows bigger, I constantly extract code out - things like plot methods, fitting methods, etc.
I think I need a way to organize this. Is there any good way to do it??
---
Currently, I do this by:
```
data/
helpers/
my_notebook.ipynb
import_file.py
```
I store data at `data/`, and extract `helper method` into `helpers/`, and divide them into files like `plot_helper.py`, `app_helper.py`, etc.
I summarize the imports in `import_file.py`,
```
from IPython.display import display
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re
```
And then I can import everything I need in `.ipynb` at top cell as
[](https://i.stack.imgur.com/XHbi3.png)
The structure can be seen at <https://github.com/cqcn1991/Wind-Speed-Analysis>
One problem I have right now is that I have too many submodules in `helpers/`, and it's hard to decide which method should be put into which file.
I think a possible way is to organize in `pre-processing`, `processing`, `post-processing`.
UPDATE:
My big jupyter research notebook:
<https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html>
The top cell is `standard import` + `magic` + `extensions`
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import division
from import_file import *
load_libs()
``` | 2016/04/05 | [
"https://Stackoverflow.com/questions/36427747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794744/"
] | There are many ways to organise an ipython research project. I am managing a team of 5 Data Scientists and 3 Data Engineers, and I found these tips to work well for our use case:
This is a summary of my PyData London talk:
<http://www.slideshare.net/vladimirkazantsev/clean-code-in-jupyter-notebook>
**1. Create a shared (multi-project) utils library**
You most likely have to reuse/repeat some code in different research projects. Start refactoring those things into a "common utils" package. Make a setup.py file, push the module to github (or similar), so that team members can "pip install" it from VCS.
Examples of functionality to put in there are:
* Data Warehouse or Storage access functions
* common plotting functions
* re-usable math/stats methods
**2. Split your fat master notebook into smaller notebooks**
In my experience, a good length for a file of code (any language) is only a few screens (100-400 lines). A Jupyter Notebook is still the source file, but with output! Reading a notebook with 20+ cells is very hard. I like my notebooks to have 4-10 cells max.
Ideally, each notebook should have one "hypothesis-data-conclusions" triplet.
Example of splitting the notebook:
1\_data\_preparation.ipynb
2\_data\_validation.ipynb
3\_exploratory\_plotting.ipynb
4\_simple\_linear\_model.ipynb
5\_hierarchical\_model.ipynb
playground.ipynb
Save output of 1\_data\_preparation.ipynb to pickle `df.to_pickle('clean_data.pkl')`, csv or fast DB and use `pd.read_pickle("clean_data.pkl")` at the top of each notebook.
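The hand-off between notebooks is then just a pickle round-trip, for instance:

```python
import pandas as pd

df = pd.DataFrame({'speed': [5.2, 7.1, 3.9]})
df.to_pickle('clean_data.pkl')          # last cell of 1_data_preparation.ipynb
df2 = pd.read_pickle('clean_data.pkl')  # first cell of every later notebook
assert df.equals(df2)
```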
**3. It is not Python - it is IPython Notebook**
What makes a notebook unique is its **cells**. Use them well.
Each cell should be an "idea-execution-output" triplet. If a cell does not output anything, combine it with the following cell. An import cell should output nothing - that is its expected output.
If a cell has several outputs, it may be worth splitting it.
Hiding imports may or may not be good idea:
```
from myimports import *
```
Your reader may want to figure out what exactly you are importing to use the same stuff for her research. So use with caution. We do use it for `pandas, numpy, matplotlib, sql` however.
Hiding "secret sauce" in /helpers/model.py is bad:
```
myutil.fit_model_and_calculate(df)
```
This may save you typing and you will remove duplicate code, but your collaborator will have to open another file to figure out what's going on. Unfortunately, a notebook (jupyter) is a quite inflexible and basic environment, but you still don't want to force your reader to leave it for every piece of code. I hope that in the future IDEs will improve, but for now, **keep the "secret sauce" inside a notebook**, and the "boring and obvious utils" wherever you see fit. DRY still applies - you have to find the balance.
This should not stop you from packaging re-usable code into functions or even small classes. But "flat is better than nested".
**4. Keep notebooks clean**
You should be able to "reset & Run All" at any point in time.
Each re-run should be fast! Which means you may have to invest in writing some caching functions. Maybe you even want to put those into your "common utils" module.
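One such caching helper can be as small as a pickle-backed decorator; a sketch (the cache file name, the placeholder body, and the delete-the-file-to-invalidate policy are assumptions to adapt):

```python
import os
import pickle

def cached(path):
    # Cache a slow function's result in a pickle file.
    # Sketch only: delete the file to invalidate the cache.
    def decorator(func):
        def wrapper(*args, **kwargs):
            if os.path.exists(path):
                with open(path, 'rb') as f:
                    return pickle.load(f)
            result = func(*args, **kwargs)
            with open(path, 'wb') as f:
                pickle.dump(result, f)
            return result
        return wrapper
    return decorator

@cached('clean_data.pkl')
def load_clean_data():
    # slow query / preprocessing would go here (placeholder)
    return {'rows': 42}
```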
Each cell should be executable multiple times, without the need to re-initialise the notebook. This saves you time and keeps the code more robust.
But it may depend on state created by previous cells. Making each cell completely independent from the cells above is an anti-pattern, IMO.
After you are done with the research, you are not done with the notebook. Refactor.
**5. Create a project module, but be very selective**
If you keep re-using plotting or analytics function - do refactor it into this module. But in my experience, people expect to read and understand a notebook, without opening multiple util sub-modules. So naming your sub-routines well is even more important here, compared to normal Python.
"Clean code reads like well written prose" Grady Booch (developer of UML)
**6. Host Jupyter server in the cloud for the entire team**
You will have one environment, so everyone can quickly review and validate research without the need to match the environment (even though conda makes this pretty easy).
And you can configure defaults, like mpl style/colors, and make matplotlib inline by default:
In `~/.ipython/profile_default/ipython_config.py`
Add line `c.InteractiveShellApp.matplotlib = 'inline'`
**7. (experimental idea) Run a notebook from another notebook, with different parameters**
Quite often you may want to re-run the whole notebook, but with different input parameters.
To do this, you can structure your research notebook as following:
Place ***params*** dictionary **in the first cell** of "source notebook".
```
params = dict(platform='iOS',
start_date='2016-05-01',
retention=7)
df = get_data(params ..)
do_analysis(params ..)
```
And in another (higher logical level) notebook, execute it using this function:
```
def run_notebook(nbfile, **kwargs):
"""
example:
run_notebook('report.ipynb', platform='google_play', start_date='2016-06-10')
"""
def read_notebook(nbfile):
if not nbfile.endswith('.ipynb'):
nbfile += '.ipynb'
with io.open(nbfile) as f:
nb = nbformat.read(f, as_version=4)
return nb
ip = get_ipython()
gl = ip.ns_table['user_global']
gl['params'] = None
arguments_in_original_state = True
for cell in read_notebook(nbfile).cells:
if cell.cell_type != 'code':
continue
ip.run_cell(cell.source)
if arguments_in_original_state and type(gl['params']) == dict:
gl['params'].update(kwargs)
arguments_in_original_state = False
```
Whether this "design pattern" proves to be useful is yet to be seen. We had some success with it - at least we stopped duplicating notebooks only to change few inputs.
Refactoring the notebook into a class or module break quick feedback loop of "idea-execute-output" that cells provide. And, IMHO, is not "ipythonic"..
**8. Write (unit) tests for shared library in notebooks and run with py.test**
There is a Plugin for py.test that can discover and run tests inside notebooks!
<https://pypi.python.org/pypi/pytest-ipynb> | You should ideally have a library hierarchy. I would organize it as follows:
Package wsautils
----------------
Fundamental, lowest level package [No dependencies]
stringutils.py: Contains the most basic utilities, such as string manipulation methods
dateutils.py: Date manipulation methods
Package wsadata
---------------
* Parsing data, dataframe manipulations, helper methods for Pandas etc.
* Depends on [wsautils]
+ pandasutils.py
+ parseutils.py
+ jsonutils.py [this could also go in wsautils]
+ etc.
Package wsamath (or wsastats)
-----------------------------
Math related utilities, models, PDF, CDFs [Depends on wsautils, wsadata]
Contains:
- probabilityutils.py
- statutils.py
etc.
Package wsacharts [or wsaplot]
------------------------------
* GUI, Plotting, Matplotlib, GGplot etc
* Depends on [wsautils, wsamath]
+ histogram.py
+ pichart.py
+ etc. Just an idea, you could also just have a single file here called chartutils or something
You get the idea. Create more libraries as necessary without making too many.
Few other tips:
---------------
* Follow the principles of good python package management thoroughly. Read this <http://python-packaging-user-guide.readthedocs.org/en/latest/installing/>
* Enforce strict dependency management via a script or a tool such that there are no circular dependencies between packages
* Define the name and purpose of each library/module well so that other users also can intuitively tell where a method/utility should go
* Follow good python coding standards (see PEP-8)
* Write test cases for every library/package
* Use a good editor (PyCharm is a good one for Python/iPython)
* Document your APIs, methods
Finally, remember that there are many ways to skin a cat and the above is just one that I happen to like.
HTH. | 3,351 |
54,292,049 | I play the HackNet game and I have to guess a word to bypass a firewall.
The key is 6 characters long and contains the letters K, K, K, U, A, N.
What is the simplest way to generate all possible combinations, either in bash or in python? (bonus points for bash) | 2019/01/21 | [
"https://Stackoverflow.com/questions/54292049",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10945277/"
] | Git uses a tree organization to which new nodes (commits) can only be added. If you really want to delete a wrongly pushed commit, you must update your repository locally and then force-push to the corresponding remote. I found an issue talking about it.
[How to undo the initial commit on a remote repository in git?](https://stackoverflow.com/questions/18874613/how-to-undo-the-initial-commit-on-a-remote-repository-in-git) | use `git revert <commit_id_to_be_reverted>` | 3,361 |
1,265,078 | I want to use python to get an executable's file version, and I know about [pefile.py](http://code.google.com/p/pefile/)
how can I use it to do this?
note: the executable file may be incomplete. | 2009/08/12 | [
"https://Stackoverflow.com/questions/1265078",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/154106/"
] | This is the best answer I think you can find:
```
import pefile
pe = pefile.PE("/path/to/something.exe")
print hex(pe.VS_VERSIONINFO.Length)
print hex(pe.VS_VERSIONINFO.Type)
print hex(pe.VS_VERSIONINFO.ValueLength)
print hex(pe.VS_FIXEDFILEINFO.Signature)
print hex(pe.VS_FIXEDFILEINFO.FileFlags)
print hex(pe.VS_FIXEDFILEINFO.FileOS)
for fileinfo in pe.FileInfo:
if fileinfo.Key == 'StringFileInfo':
for st in fileinfo.StringTable:
for entry in st.entries.items():
print '%s: %s' % (entry[0], entry[1])
if fileinfo.Key == 'VarFileInfo':
for var in fileinfo.Var:
print '%s: %s' % var.entry.items()[0]
```
[From Ero Carrera's (the author of `pefile.py`) own blog](http://blog.dkbza.org/2007/02/pefile-parsing-version-information-from.html) | I'm not sure that I understand your problem correctly, but if it's something along the lines of using pefile to retrieve the version of a provided executable, then perhaps (taken from [the tutorial][1])
```
import pefile
pe = pefile.PE("/path/to/pefile.exe")
print pe.dump_info()
```
will provide you with the version information. I have no idea how sensible pefile is when parsing incomplete files, but conjecturing that the version information is somewhere in the header and that pefile uses a generator to read the file, then it should be possible to read the information if the header is parseable. | 3,364 |
62,017,437 | I am new to programming. I have made a python script. It runs without errors in pycharm. Using pyinstaller I tried to make an exe. When I run the exe in the build or dist folder, or even through the command prompt, it gives me the error 'Failed to execute Script Main'
I am attaching the warnings file link:
<https://drive.google.com/open?id=1cDQ2KGId0B8K9Qi1bWPIhL55hQO0dM-z>
Kindly help! | 2020/05/26 | [
"https://Stackoverflow.com/questions/62017437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13605404/"
] | There is one pip script for each virtual environment. So when you install a python module, it gets installed into the projectname\venv\Lib\site-packages directory.
When you run pyinstaller from the terminal to make the executable, pyinstaller checks for dependencies in `sys.path`. But that path does not include the projectname\venv\Lib\site-packages directory. Therefore pyinstaller cannot find those particular dependencies. In such cases it gives you warnings. Those warnings can be found in 'warnname.txt' near your executable file.
**How to Configure pycharm to run pyinstaller**
1. First you need to add pyinstaller into project interpreter.
[](https://i.stack.imgur.com/eGLSS.png)
2. Then you need to setup running configurations.
[](https://i.stack.imgur.com/3C37D.png)
[](https://i.stack.imgur.com/prRnh.png)
**Script name**: path to your python script
**working path**: Project location
**Leave interpreter options as it is in the image.**
3. Run pyinstaller. You can find your .exe in dist directory.
4. If the "Module not found" error still persists. You can add a hidden import hook and specify the names of the missing modules.Navigate to **Project Path\venv\Lib\site-packages\PyInstaller\hooks** and create a new "**hook-pandas.py**"(hook-modulename.py) script and make a list of hidden import modules like this:
```
hiddenimports = ['pandas._libs.tslibs.np_datetime','pandas._libs.tslibs.nattype','pandas._libs.skiplist']
```
5. And run pyinstaller again, and it should work now. | I know I am writing this 10 months later, but I ran into the same problem and I know the solution. So maybe some people who have the same problem can get help.
If your script has any additional files such as db, csv, png, etc., you should add these files to the same directory. This way you could solve the problem, I guess; at least my problem was solved this way. | 3,365
48,021,748 | I have two mysql databases: one is on localhost and the other is on a server. Now I am going to create a simple app in python using flask, and for that application I would like to connect to both mysql DBs (local and server).
Can anyone please suggest how to connect to multiple DBs from flask?
```
app = Flask(__name__)
client = MongoClient()
client = MongoClient('localhost', 27017)
db = client.sampleDB1
```
Sample code if possible.
Thanks | 2017/12/29 | [
"https://Stackoverflow.com/questions/48021748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5483189/"
] | I had the same issue, finally figured it out.
Instead of using
```
client = MongoClient()
client = MongoClient('localhost', 27017)
db = client.sampleDB1
```
Delete all that and try this:
```
from flask_pymongo import PyMongo  # needed import (flask_pymongo package)

mongo1 = PyMongo(app, uri='mongodb://localhost:27017/Database1')
mongo2 = PyMongo(app, uri='mongodb://localhost:27017/Database2')
```
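Each `PyMongo` handle above is bound to the database named in its URI path. A quick stdlib check of which database a URI points at (illustration only):

```python
from urllib.parse import urlparse

def db_name_from_uri(uri):
    """The database name is the path component of a mongodb:// URI."""
    return urlparse(uri).path.lstrip("/")

print(db_name_from_uri('mongodb://localhost:27017/Database1'))  # Database1
```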
Then, when you want to call a particular database you can use:
```
@app.route('/routenamedb1', methods=['GET'])
def get_data_from_Database1():
    Database1 = mongo1.db.CollectionName  ##Notice I use mongo1,
    #If I wanted to access database2 I would use mongo2
    #Walk through the collection and collect each document
    data = []
    for s in Database1.find():
        data.append(s)  #Modify each document here if needed
    return data
``` | create model.py and separate instances of 2 databases inside it, then in app.py:
```
app = Flask(__name__)
app.config['MODEL'] = model.my1st_database()
app.config['MODEL2'] = model.my2nd_database()
```
works for me :) | 3,366 |
57,010,207 | I want to use R to split some chat messages, here is an example:
```
example <- "[29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n[29.01.18, 23:45] Bob: Ja ist es!\n[29.01.18, 23:45] Bob: Der ist dazu da die funktionsweise des Parsers zu demonstrieren\n[29.01.18, 23:46] Alice: PTT-20180129-WA0025.opus (Datei angehängt)\n[29.01.18, 23:46] Bob: Ah, er kann also auch erkennen ob Voicemails gesendet wurden!\n[29.01.18, 23:46] Bob: Das ist praktisch!\n[29.01.18, 23:47] Bob: Oder?\n[29.01.18, 23:47] Alice: ja |Emoji_Grinning_Face_With_Smiling_Eyes| \n[29.01.18, 23:47] Alice: und Emojis gehen auch!\n[29.01.18, 23:47] Bob: Was ist mit normalen Smilies?\n[29.01.18, 23:49] Alice: Keine Ahnung, lass uns das doch mal ausprobieren\n[29.01.18, 23:50] Bob: Alles klar :) :D\n[29.01.18, 23:51] Alice: Scheint zu funktionieren!:P\n[29.01.18, 23:51] Bob: Meinst du, dass URLS auch erkannt werden?\n[29.01.18, 23:52] Bob: Schick doch mal eine zum ausprobieren!\n[29.01.18, 23:53] Alice: https://github.com/JuKo007\n[29.01.18, 23:58] Alice: Scheint zu funktionieren!\n[29.01.18, 23:59] Alice: Sehr schön!\n[30.01.18, 00:00] Alice: Damit sollten sich WhatsApp Verläufe besser quantifizieren lassen!\n[30.01.18, 00:02] Bob: Alles klar, los gehts |Emoji_Relieved_Face| \n"
```
Basically, I want to split the string right in front of the date-time indicator in the brackets, here is what I tried so far:
```
# Cutting the textblock into individual messages
chat <- strsplit(example,"(?=\\[\\d\\d.\\d\\d.\\d\\d, \\d\\d:\\d\\d\\])",perl=TRUE)
chat <- unlist(chat)
```
The weird thing is that, in the output, the split seems to occur *after* the first square bracket, not in front:
```
[1] "["
[2] "29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n"
[3] "["
[4] "29.01.18, 23:45] Bob: Ja ist es!\n"
[5] "["
[6] "29.01.18, 23:45] Bob: Der ist dazu da die funktionsweise des Parsers zu demonstrieren\n"
[7] "["
[8] "29.01.18, 23:46] Alice: PTT-20180129-WA0025.opus (Datei angehängt)\n"
[9] "["
[10] "29.01.18, 23:46] Bob: Ah, er kann also auch erkennen ob Voicemails gesendet wurden!\n"
[11] "["
[12] "29.01.18, 23:46] Bob: Das ist praktisch!\n"
[13] "["
[14] "29.01.18, 23:47] Bob: Oder?\n"
[15] "["
[16] "29.01.18, 23:47] Alice: ja |Emoji_Grinning_Face_With_Smiling_Eyes| \n"
[17] "["
[18] "29.01.18, 23:47] Alice: und Emojis gehen auch!\n"
[19] "["
[20] "29.01.18, 23:47] Bob: Was ist mit normalen Smilies?\n"
[21] "["
[22] "29.01.18, 23:49] Alice: Keine Ahnung, lass uns das doch mal ausprobieren\n"
[23] "["
[24] "29.01.18, 23:50] Bob: Alles klar :) :D\n"
[25] "["
[26] "29.01.18, 23:51] Alice: Scheint zu funktionieren!:P\n"
[27] "["
[28] "29.01.18, 23:51] Bob: Meinst du, dass URLS auch erkannt werden?\n"
[29] "["
[30] "29.01.18, 23:52] Bob: Schick doch mal eine zum ausprobieren!\n"
[31] "["
[32] "29.01.18, 23:53] Alice: https://github.com/JuKo007\n"
[33] "["
[34] "29.01.18, 23:58] Alice: Scheint zu funktionieren!\n"
[35] "["
[36] "29.01.18, 23:59] Alice: Sehr schön!\n"
[37] "["
[38] "30.01.18, 00:00] Alice: Damit sollten sich WhatsApp Verläufe besser quantifizieren lassen!\n"
[39] "["
[40] "30.01.18, 00:02] Bob: Alles klar, los gehts |Emoji_Relieved_Face| \n"
```
When I try to test the Regex pattern [online](https://regex101.com/r/T2wB2o/1) or use it in Python, it works just as intended, so to me it seems that this is a feature of the strsplit function? Any recommendations on how to change my R code to make this work are very welcome! I know that it would be easy to just paste this output back together to get my desired output, but I would really like to understand what's going on with strsplit and do it properly instead of patching it back together. What I want is:
```
[1] "[29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n"
[2] "[29.01.18, 23:45] Bob: Ja ist es!\n"
[3] "[29.01.18, 23:45] Bob: Der ist dazu da die funktionsweise des Parsers zu demonstrieren\n"
[4] "[29.01.18, 23:46] Alice: PTT-20180129-WA0025.opus (Datei angehängt)\n"
[5] "[29.01.18, 23:46] Bob: Ah, er kann also auch erkennen ob Voicemails gesendet wurden!\n"
[6] "[29.01.18, 23:46] Bob: Das ist praktisch!\n"
[7] "[29.01.18, 23:47] Bob: Oder?\n"
[8] "[29.01.18, 23:47] Alice: ja |Emoji_Grinning_Face_With_Smiling_Eyes| \n"
[9] "[29.01.18, 23:47] Alice: und Emojis gehen auch!\n"
[10] "[29.01.18, 23:47] Bob: Was ist mit normalen Smilies?\n"
[11] "[29.01.18, 23:49] Alice: Keine Ahnung, lass uns das doch mal ausprobieren\n"
[12] "[29.01.18, 23:50] Bob: Alles klar :) :D\n"
[13] "[29.01.18, 23:51] Alice: Scheint zu funktionieren!:P\n"
[14] "[29.01.18, 23:51] Bob: Meinst du, dass URLS auch erkannt werden?"
[15] "[29.01.18, 23:52] Bob: Schick doch mal eine zum ausprobieren!\n"
[16] "[29.01.18, 23:53] Alice: https://github.com/JuKo007\n"
[17] "[29.01.18, 23:58] Alice: Scheint zu funktionieren!\n"
[18] "[29.01.18, 23:59] Alice: Sehr schön!\n"
[19] "[30.01.18, 00:00] Alice: Damit sollten sich WhatsApp Verläufe besser quantifizieren lassen!\n"
[20] "[30.01.18, 00:02] Bob: Alles klar, los gehts |Emoji_Relieved_Face| \n"
``` | 2019/07/12 | [
"https://Stackoverflow.com/questions/57010207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6039913/"
] | You could add a negative lookahead `(?!^)` to assert not the start of the string.
Your updated line might look like:
```
chat <- strsplit(example,"(?!^)(?=\\[\\d\\d.\\d\\d.\\d\\d, \\d\\d:\\d\\d\\])",perl=TRUE)
```
[R demo](https://ideone.com/KlRaFp)
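Since the question notes the pattern already behaves as intended in Python, here is the equivalent zero-width split with Python's `re.split` for comparison (zero-width splitting needs Python 3.7+; the sample text is shortened):

```python
import re

example = ("[29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n"
           "[29.01.18, 23:45] Bob: Ja ist es!\n")

# Split in front of every "[dd.dd.dd, hh:mm]" marker except at the start.
chat = re.split(r"(?!^)(?=\[\d\d\.\d\d\.\d\d, \d\d:\d\d\])", example)
print(chat[1])
```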
Result
```
[1] "[29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n"
[2] "[29.01.18, 23:45] Bob: Ja ist es!\n"
[3] "[29.01.18, 23:45] Bob: Der ist dazu da die funktionsweise des Parsers zu demonstrieren\n"
[4] "[29.01.18, 23:46] Alice: PTT-20180129-WA0025.opus (Datei angehängt)\n"
[5] "[29.01.18, 23:46] Bob: Ah, er kann also auch erkennen ob Voicemails gesendet wurden!\n"
[6] "[29.01.18, 23:46] Bob: Das ist praktisch!\n"
[7] "[29.01.18, 23:47] Bob: Oder?\n"
[8] "[29.01.18, 23:47] Alice: ja |Emoji_Grinning_Face_With_Smiling_Eyes| \n"
[9] "[29.01.18, 23:47] Alice: und Emojis gehen auch!\n"
[10] "[29.01.18, 23:47] Bob: Was ist mit normalen Smilies?\n"
[11] "[29.01.18, 23:49] Alice: Keine Ahnung, lass uns das doch mal ausprobieren\n"
[12] "[29.01.18, 23:50] Bob: Alles klar :) :D\n"
[13] "[29.01.18, 23:51] Alice: Scheint zu funktionieren!:P\n"
[14] "[29.01.18, 23:51] Bob: Meinst du, dass URLS auch erkannt werden?\n"
[15] "[29.01.18, 23:52] Bob: Schick doch mal eine zum ausprobieren!\n"
[16] "[29.01.18, 23:53] Alice: https://github.com/JuKo007\n"
[17] "[29.01.18, 23:58] Alice: Scheint zu funktionieren!\n"
[18] "[29.01.18, 23:59] Alice: Sehr schön!\n"
[19] "[30.01.18, 00:00] Alice: Damit sollten sich WhatsApp Verläufe besser quantifizieren lassen!\n"
[20] "[30.01.18, 00:02] Bob: Alles klar, los gehts |Emoji_Relieved_Face| \n"
``` | You can use `stringi` and extract the info you want by slightly modifying the end of your pattern (i.e., matching everything until the next `[`). You could include more of your pattern to ensure there aren't any false-matches but this should get your started. Good luck!
```
library(stringi)
stri_extract_all(example, regex = "\\[\\d\\d.\\d\\d.\\d\\d, \\d\\d:\\d\\d\\][^\\[]*")
[[1]]
[1] "[29.01.18, 23:33] Alice: Ist das hier ein Chatverlauf?\n"
[2] "[29.01.18, 23:45] Bob: Ja ist es!\n"
[3] "[29.01.18, 23:45] Bob: Der ist dazu da die funktionsweise des Parsers zu demonstrieren\n"
[4] "[29.01.18, 23:46] Alice: \016PTT-20180129-WA0025.opus (Datei angehängt)\n"
[5] "[29.01.18, 23:46] Bob: Ah, er kann also auch erkennen ob Voicemails gesendet wurden!\n"
[6] "[29.01.18, 23:46] Bob: Das ist praktisch!\n"
[7] "[29.01.18, 23:47] Bob: Oder?\n"
[8] "[29.01.18, 23:47] Alice: ja |Emoji_Grinning_Face_With_Smiling_Eyes| \n"
[9] "[29.01.18, 23:47] Alice: und Emojis gehen auch!\n"
[10] "[29.01.18, 23:47] Bob: Was ist mit normalen Smilies?\n"
[11] "[29.01.18, 23:49] Alice: \016Keine Ahnung, lass uns das doch mal ausprobieren\n"
[12] "[29.01.18, 23:50] Bob: Alles klar :) :D\n"
[13] "[29.01.18, 23:51] Alice: Scheint zu funktionieren!:P\n"
[14] "[29.01.18, 23:51] Bob: Meinst du, dass URLS auch erkannt werden?\n"
[15] "[29.01.18, 23:52] Bob: \016Schick doch mal eine zum ausprobieren!\n"
[16] "[29.01.18, 23:53] Alice: https://github.com/JuKo007\n"
[17] "[29.01.18, 23:58] Alice: \016Scheint zu funktionieren!\n"
[18] "[29.01.18, 23:59] Alice: Sehr schön!\n"
[19] "[30.01.18, 00:00] Alice: Damit sollten sich WhatsApp Verläufe besser quantifizieren lassen!\n"
[20] "[30.01.18, 00:02] Bob: \016Alles klar, los gehts |Emoji_Relieved_Face| \n"
``` | 3,369 |
25,567,791 | I've been trying for several days now to send a python array by i2c.
```
data = [x,x,x,x] # `x` is a number from 0 to 127.
bus.write_i2c_block_data(i2c_address, 0, data)
bus.write_i2c_block_data(addr, cmd, array)
```
In the function above: addr - the Arduino's I2C address; cmd - not sure what this is; array - a Python array of int numbers.
Can this be done? What is actually the cmd?
---
FWIW, Arduino code, where I receive the array and put it on the `byteArray`:
>
>
> ```
> void receiveData(int numByte){
> int i = 0;
> while(wire.available()){
> if(i < 4){
> byteArray[i] = wire.read();
> i++;
> }
> }
> }
>
> ```
>
>
---
It gives me this error:
`bus.write_i2c_block_data(i2c_adress, 0, decodedArray) IOError: [Errno 5] Input/output error.`
I tried with this: `bus.write_byte(i2c_address, value)`, and it worked, but only for a `value` that goes from 0 to 127, but, I need to pass not only a value, but a full array. | 2014/08/29 | [
"https://Stackoverflow.com/questions/25567791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3866306/"
] | The function is the right one.
But you should take care of some points:
* bus.write\_i2c\_block\_data(addr, cmd, []) sends the value of cmd AND the values in the list on the I2C bus.
So
```
bus.write_i2c_block_data(0x20, 42, [12, 23, 34, 45])
```
doesn't send 4 bytes but 5 bytes to the device.
I don't know how the Wire library works on Arduino, but the device only reads 4 bytes; it doesn't send the ACK for the last byte, and the sender detects an output error.
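To make the byte count concrete, this is the frame that `write_i2c_block_data(addr, cmd, data)` puts on the wire after the address byte, sketched in plain Python (no hardware needed):

```python
def smbus_block_frame(cmd, data):
    """Bytes sent after the address byte by write_i2c_block_data."""
    return [cmd] + list(data)

frame = smbus_block_frame(42, [12, 23, 34, 45])
print(len(frame))  # 5: the cmd byte plus the four data bytes
```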
* Two conventions exist for I2C device addresses. The I2C bus has 7 bits for the device address and a bit to indicate a read or a write. Another (wrong) convention is to write the address in 8 bits, and say that you have one address for read and another for write. The smbus package uses the correct convention (7 bits).
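The conversion between the two conventions is just a one-bit shift, which is easy to check in Python:

```python
def seven_to_eight_bit(addr7):
    """Return the (write, read) 8-bit addresses for a 7-bit I2C address."""
    return (addr7 << 1, (addr7 << 1) | 1)

write_addr, read_addr = seven_to_eight_bit(0x23)
print(hex(write_addr), hex(read_addr))  # 0x46 0x47
```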
Example: 0x23 in the 7-bit convention becomes 0x46 for writing and 0x47 for reading. | It took me a while, but I got it working.
On the arduino side:
```
int count = 0;
...
...
void receiveData(int numByte){
while(Wire.available()){
if(count < 4){
byteArray[count] = Wire.read();
count++;
}
else{
count = 0;
byteArray[count] = Wire.read();
}
}
}
```
On the raspberry side:
```
def writeData(arrayValue):
for i in arrayValue:
bus.write_byte(i2c_address, i)
```
And that's it. | 3,370 |
12,758,591 | Are there any libraries in Python that does or allows Text To Speech Conversion using Mac Lion's built in text to speech engine?
I did google but most are windows based. I tried pyttx.
I tried to run
```
import pyttsx
engine = pyttsx.init()
engine.say('Sally sells seashells by the seashore.')
engine.say('The quick brown fox jumped over the lazy dog.')
engine.runAndWait()
```
But I get these errors
```
File "/Users/manabchetia/Documents/Codes/Speech.py", line 2, in <module>
engine = pyttsx.init()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/__init__.py", line 39, in init
eng = Engine(driverName, debug)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/engine.py", line 45, in __init__
self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/driver.py", line 64, in __init__
self._module = __import__(name, globals(), locals(), [driverName])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyttsx-1.0.egg/pyttsx/drivers/nsss.py", line 18, in <module>
ImportError: No module named Foundation
```
How do I solve these errors? | 2012/10/06 | [
"https://Stackoverflow.com/questions/12758591",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1657827/"
] | Wouldn't it be much simpler to do this?
```
from os import system
system('say Hello world!')
```
You can enter `man say` to see other things you can do with the `say` command.
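If the text to speak comes from a variable, `subprocess` is safer than `os.system` because nothing goes through a shell. A minimal sketch (only the argument-list construction runs anywhere; actually executing `say` requires macOS):

```python
import subprocess

def build_say_command(text, voice=None):
    """Build the argument list for macOS's `say`; -v selects a voice."""
    cmd = ["say"]
    if voice is not None:
        cmd += ["-v", voice]
    cmd.append(text)
    return cmd

# subprocess.run(build_say_command("Hello world!"))  # uncomment on macOS
print(build_say_command("Hello world!", voice="Alex"))
```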
However, if you want some more advanced features, importing `AppKit` would also be a possibility, although some Cocoa/Objective C knowledge is needed.
```
from AppKit import NSSpeechSynthesizer
speechSynthesizer = NSSpeechSynthesizer.alloc().initWithVoice_("com.apple.speech.synthesis.voice.Bruce")
speechSynthesizer.startSpeakingString_('Hi! Nice to meet you!')
```
If you would like to see more things you can do with NSSpeechSynthesizer take a look at Apple's documentation: <https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/ApplicationKit/Classes/NSSpeechSynthesizer_Class/Reference/Reference.html> | If you are targeting Mac OS X as your platform - PyObjC and NSSpeechSynthesizer is your best bet.
Here is a quick example for you
```
#!/usr/bin/env python
from AppKit import NSSpeechSynthesizer
import time
import sys
if len(sys.argv) < 2:
text = raw_input('type text to speak> ')
else:
text = sys.argv[1]
nssp = NSSpeechSynthesizer
ve = nssp.alloc().init()
for voice in nssp.availableVoices():
ve.setVoice_(voice)
print voice
ve.startSpeakingString_(text)
while not ve.isSpeaking():
time.sleep(0.1)
while ve.isSpeaking():
time.sleep(0.1)
```
Please note that the AppKit module is part of the PyObjC bridge and should already be installed on your Mac. No need to install it if you are using the OS-provided python (/usr/bin/python). | 3,373
53,622,737 | I have a Pandas Dataframe which has columns which look something like this:
```
df:
Column0 Column1 Column2
'MSC' '1' 'R2'
'MIS' 'Tuesday' '22'
'13' 'Finance' 'Monday'
```
So overall, in these columns are actual strings but also numeric values (integers) which are in string format.
I found [this](https://stackoverflow.com/questions/15891038/change-data-type-of-columns-in-pandas) nice post about the `pd.to_numeric` and `astype()` methods, but I can't see if or how I could use them in my case.
Using:
```
pd.to_numeric(df, errors = 'ignore')
```
just results in skipping the whole columns. Instead of skipping the whole columns, I only want to skip the strings in those columns which can't be converted, move on to the next entry and try to convert the next string.
So in the end, my dataframe would look like this:
```
df:
Column0 Column1 Column2
'MSC' 1 'R2'
'MIS' 'Tuesday' 22
13 'Finance' 'Monday'
```
Is there maybe an efficient way to loop over these columns and achieve that?
Best regards,
Jan
**EDIT:**
Thanks for all your suggestions! Since I am still a python beginner, @coldspeed and @sacul 's answers are easier to understand for me so I will go with one of them! | 2018/12/04 | [
"https://Stackoverflow.com/questions/53622737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10027078/"
] | 100% agree with the comments—mixing dtypes in columns is a terrible idea, performance wise.
For reference, however, I would do this with `pd.to_numeric` and `fillna`:
```
df2 = df.apply(pd.to_numeric, errors='coerce').fillna(df)
print(df2)
Column0 Column1 Column2
0 MSC 1 R2
1 MIS Tuesday 22
2 13 Finance Monday
```
Columns are cast to `object` dtype to prevent coercion. You can see this when you extract the `values`:
```
print(df2.values.tolist())
[['MSC', 1.0, 'R2'], ['MIS', 'Tuesday', 22.0], [13.0, 'Finance', 'Monday']]
``` | I would apply `pd.to_numeric` with `errors='coerce'`, and `update` the original dataframe according to the results (see caveats in comments):
```
# show original string type:
df.loc[0,'Column1']
# '1'
df.update(df.apply(pd.to_numeric, errors='coerce'))
>>> df
Column0 Column1 Column2
0 MSC 1 R2
1 MIS Tuesday 22
2 13 Finance Monday
# show updated float type:
df.loc[0,'Column1']
# 1.0
``` | 3,376 |
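Both answers rely on the same per-cell behaviour: convert when possible, keep the original value otherwise. It can be sketched without pandas (plain Python, for illustration only):

```python
def coerce_cell(value):
    """Mimic pd.to_numeric(errors='coerce') on a single cell, but fall
    back to the original value instead of NaN (what fillna/update do)."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return value

row = ['MSC', '1', 'R2']
print([coerce_cell(v) for v in row])  # ['MSC', 1.0, 'R2']
```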
7,504,129 | I have a variable, `fulltext`, which contains the full text of what I want the description of a new changelist in P4V to be. There are already files in the default changelist.
I want to use python to populate the description of a new changelist (based on default) with the contents of `fulltext`.
How can this be done? I've tried this:
```
os.sytem("p4 change -i")
print fulltext
```
But that doesn't create any new change list at all. I don't know how to tell p4 that I'm done editing the description. | 2011/09/21 | [
"https://Stackoverflow.com/questions/7504129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343381/"
] | If you're trying to write Python programs that work against Perforce, you might find P4Python helpful: <http://www.perforce.com/perforce/doc.current/manuals/p4script/03_python.html> | It is easiest if you have the changelist numbers that you know you are going to change.
```
#changeListIDNumber is the desired changelist to edit
from P4 import P4  # P4Python's connection class
p4 = P4()
p4.connect()
cl = p4.fetch_change(changeListIDNumber)
cl['Description'] = 'your description here'
p4.save_change(cl)
```
If you are using this for your default changelist, and you do not pre populate your description with anything, you will get an error as there will be no 'Description' key in your changelist dictionary. | 3,379 |
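For the command-line route from the question, the missing piece is that `p4 change -i` reads the changelist spec from stdin; a `print` after `os.system` does not feed it. A hedged sketch of building and piping a spec (the exact fields accepted depend on your server):

```python
import subprocess

def build_change_spec(description):
    """Minimal changelist spec in the shape `p4 change -i` expects (sketch)."""
    indented = "\n".join("\t" + line for line in description.splitlines())
    return "Change:\tnew\n\nDescription:\n%s\n" % indented

spec = build_change_spec("My new changelist")
# subprocess.run(["p4", "change", "-i"], input=spec, text=True)  # needs p4
print(spec)
```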
45,406,847 | I use Django to send email. Everything is OK when running in the development environment, which uses the command "python manage.py runserver 0.0.0.0:8100", but in the production environment, which is deployed with nginx+uwsgi+Django, it does not work.
Here is the code:
```
#Email settings
EMAIL_HOST='smtp.exmail.qq.com'
EMAIL_PORT='465'
EMAIL_HOST_USER='sender@qq.cn'
EMAIL_HOST_PASSWORD='password'
EMAIL_USE_SSL=True
RECEIVE_EMIAL_LIST=['receiver@qq.com']
send_mail('subject','content',setting.EMAIL_HOST_USER,setting.RECEIVE_EMIAL_LIST, fail_silently=False)
``` | 2017/07/31 | [
"https://Stackoverflow.com/questions/45406847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6133601/"
] | You could try wrapping the check in a `setTimeout`:
```
$(".menu-toggle").first().click(function () {
setTimeout(function() {
if (!$("#wrapper").hasClass("menu-active")) {
$("#wrapper").find("div:first").addClass("overlay");
}
if ($("#wrapper").hasClass("menu-active")) {
$("#wrapper").find("div:first").removeClass("overlay");
}
}, 1);
});
```
That *should* cause the check to happen after the browser has updated the DOM. | Try the following:
```
<link rel="preload" href="path-to-your-script.js" as="script">
<script>
var scriptPriority =
document.createElement('script');
scriptPriority.src = 'path-to-your-script.js';
document.body.appendChild(scriptPriority);
</script>
```
About: Link rel Preload
Link rel preload is a method that is used by a few developers; it's something that almost nobody knows about, and it's used to give priority to a script or CSS link.
More info in:
<https://developers.google.com/web/updates/2016/03/link-rel-preload>
<https://developer.mozilla.org/en-US/docs/Web/HTML/Preloading_content> | 3,382 |
71,461,517 | We have just updated our jenkins (2.337) and the python console output has gone weird:
[](https://i.stack.imgur.com/n2Yxn.png)
I've searched the jenkins settings (ANSI plugin etc) and I can change the inner colours but the gray background and line breaks remain. Does anyone know the settings to get it back to the plain old black and white it used to be? | 2022/03/13 | [
"https://Stackoverflow.com/questions/71461517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2325752/"
] | We had a similar problem ... we had an almost black background with black text.
We found that the Extra CSS in the Theme section of the Jenkins Configuration has changed.
After putting it through a code formatter (there are no new lines or whitespace in the field) we had the following for the console-output:
```
.console-output .error-inline{
color:#f44336
}
.console-output,.console-output *{
position:relative;
font-family:Roboto Mono,monospace!important;
font-size:14px;
background:#263238;
color:#e9eded;
cursor:text
}
.console-output{
padding:10px 20px
}
```
The "background:#263238;" was giving the Dark Gray background, while the output didn't specify a colour for the text.
Once I updated the '.console-output' CSS to be:
```
.console-output{
color:#fff!important;
padding:10px 20px
}
```
it was all resolved.
Looking at your picture, your console output is going to have a lighter gray background while the text block will be a type specified in your CSS, and will have a darker grey background.
Changing both to have the same colour background will resolve your issue. You could use the HTML code inspector (F12 in Chrome) to examine the page setup, and track down exactly which items are giving the look you don't like. | When you have broken console colors (black font on black screen) after a Jenkins update,
* Go to Manage Jenkins -> configure system
* scroll to theme
* click add -> extra CSS
put this in the new field:
```
.console-output{
color:#fff!important;
}
```
You can also add any other CSS to please your eye. | 3,383 |
14,081,949 | How to turn off collisions for some objects and then again turn it on using pymunk lib in python?
Let me show you the example, based on the code below. I want all red balls to go through first border of lines and stop on the lower border. Blue balls should still collide with upper border.
What needs to be changed in the code?
```
import pygame
from pygame.locals import *
from pygame.color import *
import pymunk as pm
from pymunk import Vec2d
import math, sys, random
def to_pygame(p):
"""Small hack to convert pymunk to pygame coordinates"""
return int(p.x), int(-p.y+600)
pygame.init()
screen = pygame.display.set_mode((600, 600))
clock = pygame.time.Clock()
running = True
### Physics stuff
space = pm.Space()
space.gravity = (0.0, -900.0)
## Balls
balls = []
### walls
static_body = pm.Body()
static_lines = [pm.Segment(static_body, (111.0, 280.0), (407.0, 246.0), 0.0),
pm.Segment(static_body, (407.0, 246.0), (407.0, 343.0), 0.0),
pm.Segment(static_body, (111.0, 420.0), (407.0, 386.0), 0.0),
pm.Segment(static_body, (407.0, 386.0), (407.0, 493.0), 0.0)]
for line in static_lines:
line.elasticity = 0.95
space.add(static_lines)
ticks_to_next_ball = 10
while running:
for event in pygame.event.get():
if event.type == QUIT:
running = False
elif event.type == KEYDOWN and event.key == K_ESCAPE:
running = False
ticks_to_next_ball -= 1
if ticks_to_next_ball <= 0:
ticks_to_next_ball = 100
mass = 10
radius = random.randint(10,40)
inertia = pm.moment_for_circle(mass, 0, radius, (0,0))
body = pm.Body(mass, inertia)
x = random.randint(115,350)
body.position = x, 600
shape = pm.Circle(body, radius, (0,0))
shape.elasticity = 0.95
space.add(body, shape)
balls.append(shape)
### Clear screen
screen.fill(THECOLORS["white"])
### Draw stuff
balls_to_remove = []
for ball in balls:
if ball.body.position.y < 200: balls_to_remove.append(ball)
p = to_pygame(ball.body.position)
if ball.radius > 25:
color = THECOLORS["blue"]
else:
color = THECOLORS["red"]
pygame.draw.circle(screen, color, p, int(ball.radius), 2)
for ball in balls_to_remove:
space.remove(ball, ball.body)
balls.remove(ball)
for line in static_lines:
body = line.body
pv1 = body.position + line.a.rotated(body.angle)
pv2 = body.position + line.b.rotated(body.angle)
p1 = to_pygame(pv1)
p2 = to_pygame(pv2)
pygame.draw.lines(screen, THECOLORS["lightgray"], False, [p1,p2])
### Update physics
dt = 1.0/60.0
for x in range(1):
space.step(dt)
### Flip screen
pygame.display.flip()
clock.tick(50)
pygame.display.set_caption("fps: " + str(clock.get_fps()))
``` | 2012/12/29 | [
"https://Stackoverflow.com/questions/14081949",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/789021/"
] | Chipmunk has a few options filtering collisions:
<http://chipmunk-physics.net/release/ChipmunkLatest-Docs/#cpShape-Filtering>
It sounds like you just need to use a layers bitmask though.
ex:
```
# This layer bit is for balls colliding with other balls
# I'm only guessing that you want this though.
ball_layer = 1
# This layer bit is for things that collide with red balls only.
red_ball_layer = 2
# This layer bit is for things that collide with blue balls only.
blue_ball_layer = 4
# Bitwise OR the layer bits together
red_ball_shape.layers = ball_layer | red_ball_layer
blue_ball_shape.layers = ball_layer | blue_ball_layer
# Upper border should collide with blue balls only
upper_border_shape.layers = blue_ball_layer
# Lower border should collide with red balls only
lower_border_shape.layers = red_ball_layer
```
I've never actually used Pymunk personally, but I'm guessing that it exposes the Chipmunk layers property simply as .layers | In Pymunk you can use the [ShapeFilter](http://www.pymunk.org/en/latest/pymunk.html#pymunk.ShapeFilter) class to set the categories (layers) with which an object can collide. I put the upper and lower lines into the categories 1 and 2 and then set the masks of the balls so that they ignore these layers. You need to understand how [bitmasking](https://en.wikipedia.org/wiki/Mask_(computing)) works.
Here's the complete example based on the code in the original question (press left and right mouse button to spawn the balls).
```
import sys
import pygame as pg
from pygame.color import THECOLORS
import pymunk as pm
def to_pygame(p):
"""Small hack to convert pymunk to pygame coordinates"""
return int(p[0]), int(-p[1]+600)
pg.init()
screen = pg.display.set_mode((600, 600))
clock = pg.time.Clock()
space = pm.Space()
space.gravity = (0.0, -900.0)
# Walls
static_body = space.static_body
static_lines = [
pm.Segment(static_body, (111.0, 280.0), (407.0, 246.0), 0.0),
pm.Segment(static_body, (407.0, 246.0), (407.0, 343.0), 0.0),
pm.Segment(static_body, (111.0, 420.0), (407.0, 386.0), 0.0),
pm.Segment(static_body, (407.0, 386.0), (407.0, 493.0), 0.0),
]
for idx, line in enumerate(static_lines):
line.elasticity = 0.95
if idx < 2: # Lower lines.
# The lower lines are in category 2, in binary 0b10.
line.filter = pm.ShapeFilter(categories=2)
else: # Upper lines.
# The upper lines are in category 1, in binary 0b1.
line.filter = pm.ShapeFilter(categories=1)
space.add(static_lines)
balls = []
running = True
while running:
for event in pg.event.get():
if event.type == pg.QUIT:
running = False
elif event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE:
running = False
if event.type == pg.MOUSEBUTTONDOWN:
radius = 15 if event.button == 1 else 30
mass = 10
inertia = pm.moment_for_circle(mass, 0, radius, (0,0))
body = pm.Body(mass, inertia)
body.position = to_pygame(event.pos)
shape = pm.Circle(body, radius, (0,0))
shape.elasticity = 0.95
if shape.radius > 25:
# bin(pm.ShapeFilter.ALL_MASKS ^ 1) is '0b11111111111111111111111111111110'
# That means all categories are checked for collisions except
# bit 1 (the upper lines) which are ignored.
shape.filter = pm.ShapeFilter(mask=pm.ShapeFilter.ALL_MASKS ^ 1)
else:
# Ignores category bin(2), '0b11111111111111111111111111111101'
# All categories are checked for collisions except bit 2 (the lower lines).
shape.filter = pm.ShapeFilter(mask=pm.ShapeFilter.ALL_MASKS ^ 2)
space.add(body, shape)
balls.append(shape)
screen.fill(THECOLORS["white"])
balls_to_remove = []
for ball in balls:
if ball.body.position.y < 100:
balls_to_remove.append(ball)
p = to_pygame(ball.body.position)
if ball.radius > 25:
color = THECOLORS["red"]
else:
color = THECOLORS["blue"]
pg.draw.circle(screen, color, p, int(ball.radius), 2)
for ball in balls_to_remove:
space.remove(ball, ball.body)
balls.remove(ball)
for line in static_lines:
body = line.body
pv1 = body.position + line.a.rotated(body.angle)
pv2 = body.position + line.b.rotated(body.angle)
p1 = to_pygame(pv1)
p2 = to_pygame(pv2)
pg.draw.lines(screen, THECOLORS["gray29"], False, [p1, p2])
# Update physics.
dt = 1.0/60.0
for x in range(1):
space.step(dt)
pg.display.flip()
clock.tick(50)
pg.quit()
sys.exit()
``` | 3,384 |
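The old-style `layers` check in the first answer boils down to a bitwise AND: two shapes interact only when their layer masks overlap. In plain Python:

```python
def layers_collide(a_layers, b_layers):
    """Old Chipmunk semantics: shapes interact iff their layers overlap."""
    return (a_layers & b_layers) != 0

BALL, RED, BLUE = 1, 2, 4
red_ball = BALL | RED      # 0b011
blue_ball = BALL | BLUE    # 0b101
upper_border = BLUE        # red balls pass through, blue balls stop here

print(layers_collide(red_ball, upper_border),
      layers_collide(blue_ball, upper_border))  # False True
```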
44,705,385 | I have this BT speaker , with in built mic , <http://www.intex.in/speakers/bluetooth-speakers/it-11s-bt>
I want to build something like Google Home with it, using Python. Please guide me. | 2017/06/22 | [
"https://Stackoverflow.com/questions/44705385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8071763/"
] | Try with that :
```
function cari($d,$p)
{
$this->db->select('cf_pakar,gejala');
$this->db->from('gejalapenyakit');
$this->db->where('id_penyakit',$p);
$this->db->where_in('id_gejala',$d);
return $this->db->get()->result();
}
```
And your `$d = ('1','2','3','4','5')` should be `$d = ['1','2','3','4','5']`
try to do that :
```
function cari($d,$p)
{
//load $d here with that :
$d = ['1','2','3','4','5'];
//or that :
$d = [1,2,3,4,5]
$this->db->select('cf_pakar,gejala');
$this->db->from('gejalapenyakit');
$this->db->where('id_penyakit',$p);
$this->db->where_in('id_gejala',$d);
return $this->db->get()->result();
}
```
If it works, then your problem comes from your $d. If not, your problem must come from your db. | You need to send ','-separated values in the query.
$d = implode(",",$d);
This will work. | 3,385 |
64,902,105 | I have a requirement below but I am getting some error:
Write a separate Privileges class. The class should have one attribute, privileges, that stores a list of strings.Move the show\_privileges() method to this class. Make a Privileges instance as an attribute in the Admin class. Create a new instance of Admin and use your method to show its privileges.
```
class User:
def __init__(self, first_name, last_name):
"""Initiating attributes of user class"""
self.first_name = first_name
self.last_name = last_name
self.login_attempts = 1
def describe_user(self):
"""Print summary of the user info"""
print(f"User's info is {self.first_name} {self.last_name}")
def greet_user(self):
""" Method greets user"""
print(f"Goodday, {self.first_name} {self.last_name}!")
def increment_login_attempts(self, login):
"""Method increments login attempts"""
self.login_attempts += login
print(f"This user has {self.login_attempts} login attempts")
def reset_login_attempts(self):
"""Method resets login attempts"""
self.login_attempts = 0
class Privileges:
def __init__(self, privileges):
""" Shows admin privileges"""
self.privileges = privileges
def show_privileges(self):
"""Lists admin privileges"""
print(f"This user {','' '.join(self.privileges)}")
class Admin(User):
""" Represents Admin privileges of a user"""
def __init__(self, first_name, last_name, privileges):
super().__init__(first_name, last_name)
self.privileges = Privileges()
my_Admin_user = Admin('john', 'olode', ["Can Add Post", "Can Delete Post", "Can Ban User"])
my_Admin_user.describe_user()
my_Admin_user.greet_user()
my_Admin_user.Privileges.show_privileges()
```
Error Below
```
Traceback (most recent call last):
File "userPrivilegeClass.py", line 43, in <module>
my_Admin_user = Admin('john', 'olode', ["Can Add Post", "Can Delete Post", "Can Ban User"])
File "userPrivilegeClass.py", line 39, in __init__
self.privileges = Privileges()
TypeError: __init__() missing 1 required positional argument: 'privileges'
```
Would appreciate any help.
FYI, I am very new to Python (about a month), please bear with me while you dumb it down for me. Thank you | 2020/11/18 | [
"https://Stackoverflow.com/questions/64902105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14179096/"
] | As stated in the related questions, the easiest thing to do is to use an index instead as it requires no unsafe code. I might write it like this:
```
pub fn insert<'a, K: Eq, V>(this: &'a mut Vec<(K, V)>, key: K, val: V) -> &'a mut V {
    let idx = this
        .iter()
        .enumerate()
        .find_map(|(i, (k, _))| if key == *k { Some(i) } else { None });
    let idx = idx.unwrap_or_else(|| {
        this.push((key, val));
        this.len() - 1
    });
    &mut this[idx].1
}
```
You should *perform benchmarking* to know if this is not fast enough for some reason. Only in that case should you opt in to `unsafe` code to get the last bit of speed. You should then benchmark *again* to see if the code is measurably faster.
For example, you might be able to get the speedup by using [`slice::get_unchecked_mut`](https://doc.rust-lang.org/std/primitive.slice.html#method.get_unchecked_mut) instead of `&mut this[idx].1`, which is a much easier bit of unsafe code to rationalize.
The nice thing about using indices in our safe code is that they directly translate into pointer offset logic. We can take this safe example and make minimal modifications to it to get a version using `unsafe` code:
```rust
pub fn insert<'a, K: Eq, V>(this: &'a mut Vec<(K, V)>, key: K, val: V) -> &'a mut V {
    // I copied this code from Stack Overflow without reading the surrounding
    // text which explained why this code is or is not safe.
    unsafe {
        let found = this
            .iter_mut()
            .find_map(|(k, v)| if key == *k { Some(v as *mut V) } else { None });
        match found {
            Some(v) => &mut *v,
            None => {
                this.push((key, val));
                &mut this.last_mut().unwrap().1
            }
        }
    }
}
```
The main points of safety revolve around the pointer to the value in `found`. It started as a mutable reference, so we know that it was valid when we were iterating. We know that `find_map` stops iterating on the first `Some`, and we know that iterating using `iter_mut()` shouldn't change our values anyway. Since `this` cannot change between the binding of `found` and the usage of it in the `match`, I believe that this piece of code is safe.
It's always valuable to exercise your code through Miri. You must actually *exercise* the code, as Miri only flags code that causes undefined behavior, ignoring any dormant code paths. This code is Miri-clean:
```
fn main() {
    let mut things = vec![(1, 2), (3, 4)];
    let v = insert(&mut things, 1, 2);
    println!("{} ({:p})", v, v);
    let v = insert(&mut things, 1, 2);
    println!("{} ({:p})", v, v);
    let v = insert(&mut things, 5, 6);
    println!("{} ({:p})", v, v);
    let v = insert(&mut things, 5, 6);
    println!("{} ({:p})", v, v);
}
```
```none
2 (0x2829c)
2 (0x2829c)
6 (0x41054)
6 (0x41054)
```
---
>
> Is [the original implementation] actually safe?
>
>
>
Miri reports no issues for the same test code I used above, and I don't see anything obviously wrong.
>
> Is this the recommended way to express the unsafe operations performed? Should I use pointers instead?
>
>
>
When it's possible to avoid `mem::transmute`, it *generally* should be avoided. `transmute` is The Big Hammer and can do quite a lot of things that you might not intend (changing *types* is a key one). Using pointers feels much simpler in this case.
I agree with the usage of a comment to demonstrate why the unsafe code is safe. Even if it's wrong it still demonstrates the mindset of the original author. A future reviewer may be able to say "ah, they didn't think about checklist item #42, let me test that!".
Specifically for the reasoning in your comment, it's overly dense / academic *to me*. I don't see why there's talk about multiple lifetimes or double borrows.
>
> Will the new Polonius borrow checker be able to reason about patterns like this?
>
>
>
Yes:
```none
% cargo +nightly rustc --
Compiling example v0.1.0 (/private/tmp/example)
error[E0499]: cannot borrow `*this` as mutable more than once at a time
--> src/main.rs:8:16
|
2 | pub fn insert<'a, K: Eq, V>(this: &'a mut Vec<(K, V)>, key: K, val: V) -> &'a mut V {
| -- lifetime `'a` defined here
3 | for (key1, val1) in &mut *this {
| ---------- first mutable borrow occurs here
4 | if key == *key1 {
5 | return val1;
| ---- returning this value requires that `*this` is borrowed for `'a`
...
8 | let this = &mut *this;
| ^^^^^^^^^^ second mutable borrow occurs here
% cargo +nightly rustc -- -Zpolonius
Compiling example v0.1.0 (/private/tmp/example)
Finished dev [unoptimized + debuginfo] target(s) in 0.86s
% ./target/debug/example
2 (0x7f97ea405b24)
2 (0x7f97ea405b24)
6 (0x7f97ea405ba4)
6 (0x7f97ea405ba4)
```
See also:
* [How to update-or-insert on a Vec?](https://stackoverflow.com/q/47395171/155423)
* [Double mutable borrow error in a loop happens even with NLL on](https://stackoverflow.com/q/50519147/155423)
* [Returning a reference from a HashMap or Vec causes a borrow to last beyond the scope it's in?](https://stackoverflow.com/q/38023871/155423)
* [When is it necessary to circumvent Rust's borrow checker?](https://stackoverflow.com/q/50440074/155423) | Safe alternative
----------------
Firstly, here is what I would suggest instead. You can iterate over the `Vec` once to get the index of the target value via `position(|x| x == y)`. You are then able to match the now owned value and continue like before. This should have very similar performance to your previous version (In fact, LLVM might even make it identical when built with release mode).
```rust
/// Insert a new data element at a given key.
pub fn insert<K: Eq, V>(this: &mut Vec<(K, V)>, key: K, val: V) -> &mut V {
    match this.iter().position(|(key1, _)| &key == key1) {
        Some(idx) => &mut this[idx].1,
        None => {
            this.push((key, val));
            &mut this.last_mut().unwrap().1
        }
    }
}
```
[Playground Link](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=a8688d354b5a460c263a9686bef775f7)
Explanation of Error
--------------------
Here is a quick explanation of why the compiler is getting confused. It is easier to view if I first rewrite it to separate the creation of the iterator. I also added a second lifetime to the function signature to make it less restrictive and easier to show the error. To be honest it kind of feels like a mistake on the part of the borrow checker, but I can see how it got there.
```rust
use std::slice::IterMut;
// Returns a reference of borrowed value 'a of lifetime 'b. Since this reference
// may exist up to the end of 'a, we know that 'b <= 'a.
pub fn insert<'a: 'b, 'b, K: Eq, V>(this: &'a mut Vec<(K, V)>, key: K, val: V) -> &'b mut V {
    // The problem comes from trying to identify an appropriate lifetime for 'c.
    // While iterating, each item also more or less shares the lifetime 'c.
    let iterator: IterMut<'c, (K, V)> = this.into_iter();
    for (ref mut key1, ref mut val1) in iterator {
        if key == *key1 {
            // Since this is the returned value, it must have lifetime 'b to match
            // the function signature. But at the same time it must also live for 'c.
            // Therefore 'b <= 'c.
            return val1
        }
    }
    // So at this point the constraints we have so far are as follows:
    // 'b <= 'a
    // 'c <= 'a
    // 'b <= 'c
    // Therefore 'b <= 'c <= 'a
    // Due to the next line, 'c mandates the iterator is still alive making this the
    // second mutable borrow.
    this.push((key, val));
    // This lives for 'b, but since 'b <= 'c then 'c still exists
    &mut this.last_mut().unwrap().1
}
```
Takeaways
---------
* **"Is this actually safe?"** Does it use `unsafe`? If it uses `unsafe` then it is not safe. Safe/unsafe is not about whether it should work. Just because C code works doesn't make it safe. It is about whether our code has the potential for human error causing the program to act in ways the compiler can't account for. We only deem something unsafe to be safe once we have tried it under a number of conditions and it reliably works as expected with no exceptions. So "is this actually safe?" is more a question of how much trust you have in this code.
* **"Is this the recommended way to express the unsafe operations performed? Should I use pointers instead?"** In terms of unsafe code, my personal preference would be what you have right now and just transmute the lifetimes. Using pointers just hides the transmute by making it implicit in the pointer dereference. Plus it adds pointers into the equation which just adds another layer of complexity.
* **"Will the new Polonius borrow checker be able to reason about patterns like this?"** No idea. Maybe someone with more knowledge on the subject will leave a comment answering this question.
* **Sidenote:** Try to avoid writing functions with `fn foo<'a>(&'a A) -> &'a B` lifetimes. This can be more restrictive because it forces the returned lifetime to be the same as the input. The implicit version looks more like `fn foo<'a: 'b, 'b>(&'a A) -> &'b B` and only requires that the input lifetime is longer than the returned lifetime. | 3,387 |
44,913,971 | I'm coding a little python program for ROT13.
If you don't know what it means, it means it will replace each letter of the alphabet with the 13th letter after it, therefore 'a' would become 'n'.
A user will ask for an input and I shall replace each character in the sentence to the 13th letter in front.
This means I need to replace each character, now how would I do that?
I tried importing the re module but it didn't work. This is what I got so far.
```
import re
Alpha = input("Input the word you would like translated")
Alpha = re.sub('[abcdefghijklmnopqrstuvwxyz]', 'nopqrstuvwxyzabcdefghijklm',
               Alpha)
print(Alpha)
```
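For context, here is a sketch of the mapping I'm after (this snippet is mine, not a working solution from anywhere): `str.maketrans` can express the whole letter-for-letter substitution at once, which is what the `re.sub` call above is attempting.

```python
# Sketch of the intended ROT13 mapping using str.maketrans (names are mine):
lower = 'abcdefghijklmnopqrstuvwxyz'
rot = lower[13:] + lower[:13]  # 'nopqrstuvwxyzabcdefghijklm'
table = str.maketrans(lower + lower.upper(), rot + rot.upper())

print('hello'.translate(table))  # prints: uryyb
```

Applying the same table twice gives back the original text, since ROT13 is its own inverse.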
Help would be very much appreciated. | 2017/07/04 | [
"https://Stackoverflow.com/questions/44913971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7637737/"
] | [Vertically concatenate](https://www.mathworks.com/help/matlab/ref/vertcat.html) the matrices inside the cell arrays and use `intersect` with the [`'rows'`](https://www.mathworks.com/help/matlab/ref/intersect.html#btcnv0p-12) flag. i.e.
```
Q1={[1 2 3 4], [3 2 4 1], [4 2 1 3]};
Q2={[2 4 3 1], [1 2 3 4], [1 2 4 3]};
Qout = intersect(vertcat(Q1{:}), vertcat(Q2{:}), 'rows');
%>> Qout
%Qout =
% 1 2 3 4
``` | You can do it by using two loops and checking all of them.
```
q1=[1 2 3 4; 3 2 4 1; 4 2 1 3];
q2=[2 4 3 1; 1 2 3 4; 1 2 4 3];
%find the size of matrix
[m1,n1] = size(q1);
[m2] = size(q2,1);
for (ii=1:m1)
    for (jj=1:m2)
        %if segments are equal, it will return 1
        %if sum of same segment = 4 it means they are same
        if ( sum( q1(ii,:) == q2(jj,:) ) == n1)
            ii %result of q1
            jj %result of q2
            break;
        end
    end
end
``` | 3,388 |
44,092,459 | Undertaking a task to write a function `power` that accepts two arguments, a and b, and calculates a raised to the power b.
Example
```
power(2, 3) => 8
```
Note: Don't use
```
2 ** 3
```
and don't use
```
Math.pow(2, 3)
```
I have tried this
```
def power(a,b):
    return eval(((str(a)+"*")*b)[:-1])
```
And it works but seems to fail one test which is to `return_1_when_exp_is_0`
and i also get the error
```
Unhandled Exception: unexpected EOF while parsing (, line 0)
```
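As far as I can tell, the empty string is the culprit: when `b` is 0 the repeated-string trick collapses to `eval('')`, which is exactly the "unexpected EOF" error above. A quick check (this snippet is just my own reproduction):

```python
a, b = 2, 0
expr = ((str(a) + "*") * b)[:-1]
print(repr(expr))  # -> ''  (nothing left to evaluate)
try:
    eval(expr)
except SyntaxError as exc:
    print("eval failed:", exc)
```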
Please, how do I solve this issue, considering that I am new to Python? | 2017/05/21 | [
"https://Stackoverflow.com/questions/44092459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7173798/"
] | You can use a for loop
```
x = 1
for i in range(b):
    x = x * a
print(x)
``` | ```
def power(a, b):
    if b == 0:
        return 1
    else:
        return a ** b
``` | 3,389 |
898,091 | I have previously read Spolsky's article on character-encoding, as well as [this from dive into python 3](http://diveintopython3.org/strings.html). I know php is getting Unicode at some point, but I am having trouble understanding why this is such a big deal.
If PHP-CLI is being used, OK, it makes sense. However, in the web server world, isn't it up to the browser to take this integer and turn it into a character (based on the character encoding)?
What am I not getting? | 2009/05/22 | [
"https://Stackoverflow.com/questions/898091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Well, for one thing you need to somehow generate the strings the browser displays :-) | There's an awesome FAQ section on Unicode and the Web [here.](http://unicode.org/faq/unicode_web.html) See if it answers some of your questions. | 3,394 |
42,512,141 | I have written the following simple program which should print out all events detected by `pygame.event.get()`.
```
import pygame, sys
from pygame.locals import *
display = pygame.display.set_mode((300, 300))
pygame.init()
while True:
    for event in pygame.event.get():
        print(event)
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
```
But when I run this I only have mouse events, and a KEYDOWN and KEYUP event when I hit caps-lock twice, being printed in terminal. When I use any other keys they only print to terminal as if I was writing in the terminal window.
```
<Event(4-MouseMotion {'pos': (102, 15), 'buttons': (0, 0, 0), 'rel': (-197, -284)})>
<Event(2-KeyDown {'unicode': '', 'scancode': 0, 'key': 301, 'mod': 8192})>
<Event(3-KeyUp {'key': 301, 'scancode': 0, 'mod': 0})>
wasd
```
I am using Mac OSX 10.12.1, python 3.5.2, and pygame 1.9.4.dev0.
I assume I'm missing something straight forward, but I found nothing similar online. Any help would be much appreciated. | 2017/02/28 | [
"https://Stackoverflow.com/questions/42512141",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4191155/"
] | If you're working in a virtualenv, don't use the `virtualenv` command.
Use `python3 -m venv`.
Then install pygame (*e.g.* `pip3 install hg+http://bitbucket.org/pygame/pygame`).
See [this thread](https://bitbucket.org/pygame/pygame/issues/203/window-does-not-get-focus-on-os-x-with#comment-32656108) for more details on this issue. | Firstly, I doubt you are, but pygame only registers inputs when you're focused on the pygame window, so there's that. I don't have a direct answer to your question, sorry, but I do have my workaround: because I dislike the normal event system I use pygame.key.get\_pressed() (<https://www.pygame.org/docs/ref/key.html>), just because I think it looks better and more readable. This is probably just a bad habit of mine, though. | 3,399 |
1,206,215 | In python I can use os.getpid() and os.name() to get information about the Process ID and
OS name. Is there something similar in C++? I tried GetProcessId() but was told that this is undeclared... I am using Cygwin under windows.
Thank you | 2009/07/30 | [
"https://Stackoverflow.com/questions/1206215",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Standard C++ has no such functionality. You need to use OS specific features to get this. In your case, you need to look up POSIX/UNIX functions such as [getpid()](http://www.opengroup.org/onlinepubs/009695399/functions/getpid.html).
Note that if you actually do want to call the Windows functions to get process ID etc, you should be using a C++ environment like [MinGW](http://www.mingw.org/), which allows you to build native Windows applications, rather than Cygwin, which is more aimed at porting POSIX apps to Windows. | To use [GetProcessId](http://msdn.microsoft.com/en-us/library/ms683215(VS.85).aspx) you need to include Windows.h and link to Kernel32.lib. See [Process and Thread Functions](http://msdn.microsoft.com/en-us/library/ms684847(VS.85).aspx) for more information.
I use [MSYS/mingw](http://www.mingw.org/) instead of [cygwin](http://www.cygwin.com/). So, you may need the [w32api](http://cygwin.com/packages/w32api/) package installed. | 3,402 |
24,435,697 | Python 3.4: From reading some other SO questions it seems that if a `moduleName.py` file is outside of your current directory, if you want to import it you must add it to the path with `sys.path.insert(0, '/path/to/application/app/folder')`, otherwise an `import moduelName` statement results in this error:
```
ImportError: No module named moduleName
```
Does this imply that Python automatically adds all other .py files in the same directory to the path? What's going on underneath the surface that allows you to import local files without appending to Python's path? And what does an `__init__.py` file do under the surface? | 2014/06/26 | [
"https://Stackoverflow.com/questions/24435697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3462076/"
] | Python adds the directory where the initial script resides as first item to [`sys.path`](https://docs.python.org/3/library/sys.html#sys.path):
>
> As initialized upon program startup, the first item of this list, `path[0]`, is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), `path[0]` is the empty string, which directs Python to search modules in the current directory first. Notice that the script directory is inserted before the entries inserted as a result of `PYTHONPATH`.
>
>
>
So what goes on underneath the surface is that Python appends (or rather, prepends) the 'local' directory to `sys.path` *for you*.
This simply means that the directory the script lives in is the first port of call when searching for a module.
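You can see this for yourself (a small check of my own, not part of the quoted docs) by printing `sys.path` from a script:

```python
# show_path.py — run as `python show_path.py`
import sys

# The first entry is the directory containing this script; the remaining
# entries come from PYTHONPATH and the installation defaults.
print(sys.path[0])
for entry in sys.path[1:]:
    print(repr(entry))
```

Run interactively instead, and the first entry becomes the empty string, which directs Python to search the current directory first.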
`__init__.py` has nothing to do with all this. `__init__.py` is needed to make a directory a [(regular) package](https://docs.python.org/3/reference/import.html#packages); any such directory that is found on the Python module search path is treated as a module. | I faced the same problem when running a Python script from IntelliJ IDEA.
There is a script in a
```
C:\Users\user\IdeaProjects\Meshtastic-python\meshtastic
```
It uses
```
from meshtastic import portnums_pb2, channel_pb2, config_pb2
```
and fails.
I have realized that it looks for
```
C:\Users\user\IdeaProjects\Meshtastic-python\meshtastic\meshtastic
```
and changed **working directory** of this script in **Run Configuration** from
```
C:\Users\user\IdeaProjects\Meshtastic-python\meshtastic
```
to
```
C:\Users\user\IdeaProjects\Meshtastic-python
```
so it can find this module **UNDERNEATH workdir** during execution
```
C:\Users\user\IdeaProjects\Meshtastic-python\meshtastic
``` | 3,405 |
From work I got a job to make a Python script which will click, for testing the product of a "secret application" for Windows 8.1.
The problem is that I can make it move the cursor but it can't click, and I searched for win32 documentation on the internet but with no luck. Has anyone had this problem?
This is the click code
```
def click(x,y):
    win32api.SetCursorPos((x, y))
    # Left click
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,x,y,0,0)
    time.sleep(0.05)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,x,y,0,0)
``` | 2015/03/29 | [
"https://Stackoverflow.com/questions/29333578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2984950/"
] | `body` is a string. You have to parse it as JSON first:
```
res.json(JSON.parse(body)._links.self);
``` | This question is a little old, yet the following may be more helpful. With `request`, you can pass `json: true` and the library returns the parsed JSON object.
Replace the following line,
>
>
> ```
> request('https://api.twitch.tv/kraken/streams/' + req.params.user, function ( error, response, body) {
>
> ```
>
>
with the one below
>
>
> ```
> request({'url':`https://api.twitch.tv/kraken/streams/${req.params.user}`, 'json': true }, function ( error, response, body) {
>
> ```
>
> | 3,406 |
12,578,943 | I'm writing a program to get a video feed from a web cam and display it in a Tkinter window. I wrote the following code which I ran on Ubuntu 12.04.
```
#!/usr/bin/env python
import sys, os, gobject
from Tkinter import *
import pygst
pygst.require("0.10")
import gst
# Goto GUI Class
class Prototype(Frame):
    def __init__(self, parent):
        gobject.threads_init()
        Frame.__init__(self, parent)
        # Parent Object
        self.parent = parent
        self.parent.title("WebCam")
        self.parent.geometry("640x560+0+0")
        self.parent.resizable(width=FALSE, height=FALSE)
        # Video Box
        self.movie_window = Canvas(self, width=640, height=480, bg="black")
        self.movie_window.pack(side=TOP, expand=YES, fill=BOTH)
        # Buttons Box
        self.ButtonBox = Frame(self, relief=RAISED, borderwidth=1)
        self.ButtonBox.pack(side=BOTTOM, expand=YES, fill=BOTH)
        self.closeButton = Button(self.ButtonBox, text="Close", command=self.quit)
        self.closeButton.pack(side=RIGHT, padx=5, pady=5)
        gotoButton = Button(self.ButtonBox, text="Start", command=self.start_stop)
        gotoButton.pack(side=RIGHT, padx=5, pady=5)
        # Set up the gstreamer pipeline
        self.player = gst.parse_launch("v4l2src ! video/x-raw-yuv,width=640,height=480 ! ffmpegcolorspace ! xvimagesink")
        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect("message", self.on_message)
        bus.connect("sync-message::element", self.on_sync_message)

    def start_stop(self):
        if self.gotoButton["text"] == "Start":
            self.gotoButton["text"] = "Stop"
            self.player.set_state(gst.STATE_PLAYING)
        else:
            self.player.set_state(gst.STATE_NULL)
            self.gotoButton["text"] = "Start"

    def on_message(self, bus, message):
        t = message.type
        if t == gst.MESSAGE_EOS:
            self.player.set_state(gst.STATE_NULL)
            self.button.set_label("Start")
        elif t == gst.MESSAGE_ERROR:
            err, debug = message.parse_error()
            print "Error: %s" % err, debug
            self.player.set_state(gst.STATE_NULL)
            self.button.set_label("Start")

    def on_sync_message(self, bus, message):
        if message.structure is None:
            return
        message_name = message.structure.get_name()
        if message_name == "prepare-xwindow-id":
            # Assign the viewport
            imagesink = message.src
            imagesink.set_property("force-aspect-ratio", True)
            imagesink.set_xwindow_id(self.movie_window.window.xid)

def main():
    root = Tk()
    app = Prototype(root)
    app.pack(expand=YES, fill=BOTH)
    root.mainloop()

if __name__ == '__main__':
    main()
```
My problem is that neither the ButtonBox nor the VideoBox shows in the output window when the program is running. How can I fix this? I did look at other sites for possible solutions (for instance <http://pygstdocs.berlios.de/#projects> or [Way to play video files in Tkinter?](https://stackoverflow.com/questions/7227162/way-to-play-video-files-in-tkinter)), however they have very limited information on what their code means.
After making the suggested alteration and a few others to get the buttons working, I realize that the display window is different from the main window when I run the program. Is there a way to get the video to display in the main window when using tkinter?? | 2012/09/25 | [
"https://Stackoverflow.com/questions/12578943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1696565/"
] | It looks like your Prototype class is a Tkinter Frame but you don't seem to have packed/placed it anywhere.
```
...
app = Prototype(root)
app.pack(expand=YES, fill=BOTH)
root.mainloop()
``` | I finally came up with a solution to the question. I realised that the error was in the line
imagesink.set\_xwindow\_id(self.movie\_window.window.xid)
which I changed to
imagesink.set\_xwindow\_id(self.movie\_window.winfo\_id())
The mistake is that I had used window.xid which is an attribute for gtk widgets. In tkinter winfo\_id() returns the window identifier for tkinter widgets. For more information <http://effbot.org/tkinterbook/widget.htm#Tkinter.Widget.winfo_id-method> | 3,407 |
28,422,787 | Using python 3, how would you change this code to print the sum of all numbers from 1 to 20?
```
n = 20
i=0
sum = 0
for i in range (1,n+1):
    sum =+ i
    i = i+1
print(sum)
``` | 2015/02/10 | [
"https://Stackoverflow.com/questions/28422787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4548170/"
] | The simplest way I can think about is:
```
sum(range(1, 21)) # includes 20
```
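(Not part of the original answer: for the specific 1..n case there is also the closed form n·(n+1)/2, which avoids iterating entirely.)

```python
n = 20
print(n * (n + 1) // 2)      # -> 210
print(sum(range(1, n + 1)))  # -> 210, same result
```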
You can also use a loop:
```
s = 0
for i in range(21):
    s += i
``` | ```
n = 20
# this isn't needed, the for loop sets i: i = 0
sum = 0
for i in range (1, n+1):
    sum += i
    # Remove this line: i = i+1
    # for i in range already increments i
print(sum)
```
You shouldn't use the variable name `sum` because there is already a builtin function `sum` which you can even use instead. | 3,408 |
24,213,905 | I have an account on OpenShift. I use Django and MySQL in this account. <https://github.com/ogurchik/pullover/tree/master/wsgi/openshift>.
I created models for a new table in the Mysql database. When I execute the command `python manage.py sqlall MY_APP`, it renders this log:
```
BEGIN;
CREATE TABLE `books_publisher` (
    `id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY,
    `name` varchar(30) NOT NULL,
    `address` varchar(50) NOT NULL,
    `city` varchar(60) NOT NULL,
    `state_province` varchar(30) NOT NULL,
    `country` varchar(50) NOT NULL,
    `website` varchar(200) NOT NULL
);
```
etc.
I think this log means the account's environment is set up correctly. But when I execute the command `python manage.py syncdb`, the log is:
```
Creating tables ...
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)
```
But the database has nothing. How do I solve this problem? I have tried google'ing but I find nothing similar. | 2014/06/13 | [
"https://Stackoverflow.com/questions/24213905",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2966342/"
] | Correct. The timestamp is a UNIX timestamp. That is - the number of whole seconds since Jan 1, 1970 UTC, not accounting for leap seconds.
You can verify the timestamp using a site like [epochconverter.com](http://www.epochconverter.com/)
```
1388613600 = 2014-01-01T22:00:00Z
```
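(A quick cross-check of my own, sketched in Python since the point is language-agnostic:)

```python
from datetime import datetime, timezone

ts = 1388613600
# Interpret the UNIX timestamp in UTC to confirm the conversion above.
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# -> 2014-01-01T22:00:00+00:00
```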
Then you can check the time zone details at [timeanddate.com](http://www.timeanddate.com).
* In January 2014, [Toronto was on EST](http://www.timeanddate.com/time/zone/canada/toronto), which is UTC-05:00.
* [This calculation](http://www.timeanddate.com/worldclock/converted.html?iso=20140101T22&p1=0&p2=250) clearly verifies that 22:00 UTC is 5:00 PM EST. | As Marc B mentioned, `date('r', 1388613600)` returned a formatted version of the date including the timezone offset which was set to `+0000`.
The output is in fact UTC.
Thanks Marc! | 3,409 |
32,046,360 | I'm using wxpython with wx.Grid...
I have a general grid with many columns, created with `SetColumn(self, column)`. I want to be able to show and hide specific columns based on user security permissions.
I read that `self.SetColMinimalAcceptableWidth(0)` might be useful? How do I use it on a specific column? How do I restore the column to its original size when I need to show it? | 2015/08/17 | [
"https://Stackoverflow.com/questions/32046360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2131325/"
] | The Grid manual has the following entry:
HideCol(self, col)
```
Hides the specified column.
To show the column later you need to call SetColSize with non-0 width or ShowCol to restore the previous column width.
If the column is already hidden, this method doesn’t do anything.
Parameters: col (int) – The column index.
```
So in the case of self.Mygrid:
```
self.Mygrid.HideCol(0)
```
would hide the first column. | Under wxPython 2.8:
```
grid.SetColMinimalAcceptableWidth(0)
grid.SetColSize(col, 0)
grid.ForceRefresh()
``` | 3,410 |
15,904,973 | Say i store a password in plain text in a variable called `passWd` as a string.
How does Python release this variable once I discard it (for instance, with `del passWd` or `passWd= 'new random data'`)?
Is the string stored as a byte-array, meaning it can be overwritten in the memory place where it originally existed, or is it fixed in a memory area which can't be modified, and therefore when assigning a new value a new memory area is created and the old area is discarded but not overwritten by null?
I'm questioning how Python implements the safety of memory areas and would like to know more about it, mainly because i'm curious :)
From what I've gathered so far, using `del` (or `__del__`) causes the interpreter to not release memory areas of that variable automatically, which can cause issues, and also I'm not sure that **del** is so thorough about deleting the values. But that's just from what I've gathered and not something in black or white :)
The main reason for me asking is I'm intending to write a hand-over application that gets a string, does some I/O, passes it along to another subsystem (bootloader for Raspberry Pi for instance), and the interface is written in Python (how odd that must sound in some people's ears..). I'm not worried that the data is compromised during the I/O calculations, but that a memory dump might be occurring in between the two subsystem handovers, or if the system is frozen (say a hibernation) say 20 min after the system is booted and I removed the variable as fast as I could, but somehow it's still in the memory despite me doing a `del passWd` :)
(Ps. I've asked on Superuser, they referred me here and I'm sorry for poor grammar!) | 2013/04/09 | [
"https://Stackoverflow.com/questions/15904973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/929999/"
] | Unless you use custom coded input methods to get the password, it will be in many more places than just your immutable string. So don't worry too much.
The OS should take care that any data from your process is cleared before the memory is allocated to another process. This may of course fail if the page is copied to disk (swapped out or hibernated).
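One practical aside (my own sketch, not a security guarantee): a mutable `bytearray` can at least be overwritten in place before being discarded, which an immutable `str` cannot:

```python
secret = bytearray(b"hunter2")
# ... use the secret ...
for i in range(len(secret)):  # scrub the buffer in place
    secret[i] = 0
print(secret)  # -> bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```

This still does nothing about copies made earlier (input buffers, swapped pages, and so on), which is the point above.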
Secure password entry is not easy. Maybe you can find a special library or module that handles this. | I finally went with two solutions.
Using ld\_preload to replace the functionality of Python's string handling at a lower level.
One other option which is a bit easier was to develop my own C library that has more functionality than what Python offers through the standard string handling.
Mainly, the C code has a shread() function that writes over the memory area where the string "was" stored, plus some other error checks.
However, @Ber gave me a good enough answer to start developing my own solution since (as he pointed out) there is no secure method in Python and Python stores strings in way too many places and relies on the OS (which, on its own, isn't a bad thing except when you don't trust the OS you are installing your relatively secure application on). | 3,411 |
3,422,775 | I have written a small Django App, that executes an interactive program based on user input and returns the output as the result. But for some reason, the subprocess hangs. On verification of the logs I found that a place where a '\n' has to be given as response to a challenge, the response seems to have never been made. Interestingly, if I run the same code from outside of Django, i.e either from a python module or from the interactive shell, subprocess works without a hitch. I am assuming some settings within the environment used by Django are the culprit here. Here are snippets of the code that I've written:
```
def runtests(test_name, selective=False, tests_file=''):
    if selective:
        run_cmd = ['runtest', '--runfromfile', tests_file, test_name]
    else:
        run_cmd = 'runtest %s' % (test_name)
    print 'Executing command .. '
    print run_cmd
    p = subprocess.Popen(run_cmd, shell=False, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return p.stdout.read()

def result(request):
    test_name = request.GET['test_name']
    if not test_name:
        return render_to_response('webrun/execute.html', {'error_flag':True})
    in_file = os.path.abspath('webrun/log/%s_in.xml' % test_name)
    suites = dict([(field[len('suite_'):], value)
                   for field, value in request.GET.items()
                   if field.startswith('suite_')])
    if suites:
        _from_dict_to_xml(suites, in_file, test_name)
    output = runtests(test_name, bool(suites), in_file)
    return render_to_response('webrun/result.html', {'output':output})
```
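(For what it's worth, a sketch of my own rather than the app's real code: `Popen.communicate` writes stdin and drains stdout/stderr together, which avoids the pipe deadlocks a bare `p.stdout.read()` can run into with interactive children.)

```python
import subprocess

# 'cat' just echoes stdin back, standing in for an interactive program.
p = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, _ = p.communicate(input=b'hello\n')
print(out)  # -> b'hello\n'
```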
I've tried replacing subprocess with the older os.system method. But even that hangs in the exact same place. Again, this runs fine if I execute the same code outside of Django. | 2010/08/06 | [
"https://Stackoverflow.com/questions/3422775",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/412888/"
] | This is because code is JITted on a per-method basis, so when you first try to invoke `CheckCrystal()`, .NET first tries to compile it, subsequently loading all required and not-yet-loaded assemblies.
.NET allows you to intercept the moment when assembly resolution fails. To do so, subscribe to the `AppDomain.AssemblyResolve` event. | You would probably want to handle the `AppDomain.AssemblyResolve` event. More information [here](http://msdn.microsoft.com/en-us/library/system.appdomain.assemblyresolve(VS.71).aspx).
A quick and dirty example:
```
AppDomain.CurrentDomain.AssemblyResolve += CurrentDomain_AssemblyResolve;
private static Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
{
if (args.Name == "CrystalReports")
{
PTrace.Error("Some dependencies needed to run Crystal Reports are not available.");
}
// return the located assembly here, or throw an exception, etc.
}
``` | 3,412 |
65,942,206 | 

Can anyone help me? I'm pretty new to Python and I'm trying to generate 10 files, each with increasingly harder questions. This code is for difficulty 2. I don't want the answers in difficulty 2 to be negative, so whenever I get a second number bigger than the first I swap the two. For some reason some of them still come out with the first number bigger than the second. I added the "it's less than" print statements for testing, and it will detect the fact that it's less than but won't do anything about it. | 2021/01/28 | [
"https://Stackoverflow.com/questions/65942206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15100687/"
] | Your issue is that you're casting your random numbers to strings **before** comparing their mathematical values. You need to compare them as integers, then cast them to strings. | I believe this is because you are checking for a comparison between two strings, not two integers. This will give bad results for this type of program:
num1 = str(r.choice(numbers))
num2 = str(r.choice(numbers))
Here you are storing strings and not integers.
and then below this you are checking if num1 <= num2.
Convert them to integers before comparing them and your code should work. | 3,414 |
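The difference both answers describe can be seen in a few lines (the values below are illustrative, not taken from the screenshots): string comparison is lexicographic, so it can disagree with numeric order.

```python
num1, num2 = 9, 12

# as strings, "9" > "12" because comparison is character by character
print(str(num1) > str(num2))  # True

# as integers, the mathematical order is used
print(num1 > num2)  # False

# so: compare and swap as ints first, convert to str only for display
if num1 < num2:
    num1, num2 = num2, num1
print(str(num1) + " - " + str(num2) + " = " + str(num1 - num2))  # 12 - 9 = 3
```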
38,657,109 | I am using *Python 3.4*.
I have a Python script `myscript.py` :
```
import sys
def returnvalue(str) :
if str == "hi" :
return "yes"
else :
return "no"
print("calling python function with parameters:")
print(sys.argv[1])
str = sys.argv[1]
res = returnvalue(str)
target = open("file.txt", 'w')
target.write(res)
target.close()
```
I need to call this python script from the java class `PythonJava.java`
```
public class PythonJava
{
String arg1;
public void setArg1(String arg1) {
this.arg1 = arg1;
}
public void runPython()
{ //need to call myscript.py and also pass arg1 as its arguments.
//and also myscript.py path is in C:\Demo\myscript.py
}
```
and I am calling `runPython()` from another Java class by creating an object of `PythonJava`
```
obj.setArg1("hi");
...
obj.runPython();
```
I have tried many ways but none of them are properly working. I used Jython and also ProcessBuilder, but the script did not write into file.txt. Can you suggest a way to properly implement this? | 2016/07/29 | [
"https://Stackoverflow.com/questions/38657109",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6013429/"
] | Have you looked at these? They suggest different ways of doing this:
[Call Python code from Java by passing parameters and results](https://stackoverflow.com/questions/27235286/call-python-code-from-java-by-passing-parameters-and-results)
[How to call a python method from a java class?](https://stackoverflow.com/questions/9381906/how-to-call-a-python-method-from-a-java-class)
In short one solution could be:
```
public void runPython()
{ //need to call myscript.py and also pass arg1 as its arguments.
//and also myscript.py path is in C:\Demo\myscript.py
String[] cmd = {
"python",
"C:/Demo/myscript.py",
this.arg1,
};
Runtime.getRuntime().exec(cmd);
}
```
edit: just make sure you change the variable name from str to something else, as noted by cdarke
Your python code (change str to something else, e.g. arg and specify a path for file):
```
def returnvalue(arg) :
if arg == "hi" :
return "yes"
return "no"
print("calling python function with parameters:")
print(sys.argv[1])
arg = sys.argv[1]
res = returnvalue(arg)
print(res)
with open("C:/path/to/where/you/want/file.txt", 'w') as target: # specify path or else it will be created where you run your java code
target.write(res)
``` | Calling Python from Java with an argument and printing the Python output in the Java console can be done with the simple method below:
```
String pathPython = "pathtopython\\script.py";
String [] cmd = new String[3];
cmd[0] = "python";
cmd[1] = pathPython;
cmd[2] = arg1;
Runtime r = Runtime.getRuntime();
Process p = r.exec(cmd);
BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
String s;
while((s = in.readLine()) != null){
System.out.println(s);
}
``` | 3,415 |
50,268,691 | I am trying to train my binary classifier over huge data. Previously, I could accomplish training via sklearn's fit method. But now I have more data and I cannot cope with it. I am trying to fit it partially but couldn't get rid of errors. How can I train on my huge data incrementally? When applying my previous approach, I get an error about the pipeline object. I have gone through the examples from [Incremental Learning](http://dask-ml.readthedocs.io/en/latest/incremental.html) but still running these code samples gives errors. I will appreciate any help.
```
X,y = transform_to_dataset(training_data)
clf = Pipeline([
('vectorizer', DictVectorizer()),
('classifier', LogisticRegression())])
length=len(X)/2
clf.partial_fit(X[:length],y[:length],classes=np.array([0,1]))
clf.partial_fit(X[length:],y[length:],classes=np.array([0,1]))
```
**ERROR**
```
AttributeError: 'Pipeline' object has no attribute 'partial_fit'
```
**TRYING GIVEN CODE SAMPLES:**
```
clf=SGDClassifier(alpha=.0001, loss='log', penalty='l2', n_jobs=-1,
#shuffle=True, n_iter=10,
verbose=1)
length=len(X)/2
clf.partial_fit(X[:length],y[:length],classes=np.array([0,1]))
clf.partial_fit(X[length:],y[length:],classes=np.array([0,1]))
```
**ERROR**
```
File "/home/kntgu/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 573, in check_X_y
ensure_min_features, warn_on_dtype, estimator)
File "/home/kntgu/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 433, in check_array
array = np.array(array, dtype=dtype, order=order, copy=copy)
TypeError: float() argument must be a string or a number
```
My dataset consists of some sentences with their part of speech tags and dependency relations.
```
Thanks NN 0 root
to IN 3 case
all DT 1 nmod
who WP 5 nsubj
volunteered VBD 3 acl:relcl
. . 1 punct
You PRP 3 nsubj
will MD 3 aux
remain VB 0 root
as IN 5 case
alternates NNS 3 obl
. . 3 punct
``` | 2018/05/10 | [
"https://Stackoverflow.com/questions/50268691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9079119/"
] | A `Pipeline` object from scikit-learn does not have the `partial_fit`, as seen in [the docs](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html).
The reason for this is that you can add any estimator you want to that `Pipeline` object, and not all of them implement the `partial_fit`. [Here is a list of the supported estimators](http://scikit-learn.org/stable/modules/scaling_strategies.html#incremental-learning).
As you see, using `SGDClassifier` (without `Pipeline`), you don't get this "no attribute" error, because this specific estimator is supported. The error message you get for this one is probably due to text data. You can use the [LabelEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) to process the non-numeric columns. | I was going through the same problem as `SGDClassifier` inside pipeline doesn't support the incremental learning (i.e. partial\_fit param). There is a way we could do incremental learning using sklearn but it is not with `partial_fit`, it is with `warm_start`. There are two algorithms in sklearn `LogisticRegression` and `RandomForest` that support warm\_start.
Warm start is another way of doing incremental learning; read [here](http://scikit-learn.org/dev/glossary.html#term-warm-start). | 3,417
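The warm-start idea — keep the fitted state and continue from it on new batches — can be illustrated in miniature without scikit-learn; the class below is a toy stand-in for an incrementally trainable estimator, not scikit-learn's actual API:

```python
class RunningMean:
    """Toy 'estimator' whose state can be updated batch by batch."""

    def __init__(self):
        self.n = 0
        self.total = 0.0

    def partial_fit(self, chunk):
        # fold one batch into the state without revisiting earlier data
        self.n += len(chunk)
        self.total += sum(chunk)
        return self

    @property
    def mean(self):
        return self.total / self.n


est = RunningMean()
est.partial_fit([1, 2, 3]).partial_fit([4, 5])
print(est.mean)  # 3.0
```

Real `partial_fit`-capable estimators such as `SGDClassifier` follow the same pattern: each call updates the fitted state rather than refitting from scratch.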
15,930,203 | I am using **zbarimg** to scan bar codes, and I want to redirect the output to a python script. How can I redirect the output of the following command:
```
zbarimg code.png
```
to a python script, and what should be the script like?
I tried the following script:
```
#!/usr/local/bin/python
s = raw_input()
print s
```
I made it an executable by issuing the following:
```
chmod +x in.py
```
Than I ran the following :
```
zbarimg code.png | in.py
```
I know it's wrong but I can't figure out anything else! | 2013/04/10 | [
"https://Stackoverflow.com/questions/15930203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251851/"
] | Using the pipe operator `|` from the command is correct, actually. Did it not work?
You might need to explicitly specify the path for the python script as in
```
zbarimg code.png | ./in.py
```
and as @dogbane says, reading from stdin like `sys.stdin.readlines()` is better than using `raw_input` | Use [`sys.stdin`](http://docs.python.org/2/library/sys.html#sys.stdin) to read from stdin in your python script. For example:
```
import sys
data = sys.stdin.readlines()
``` | 3,420 |
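Because `sys.stdin` is just a file-like object, the script's logic can be exercised without a real shell pipe by feeding it an `io.StringIO`; the sample bar-code lines below are made up for illustration:

```python
import io

def process(stream):
    # echo each piped line, e.g. the output of `zbarimg code.png`
    for line in stream:
        print(line.rstrip("\n"))

# simulate `zbarimg code.png | ./in.py` feeding two lines on stdin
fake_stdin = io.StringIO("QR-Code:hello\nQR-Code:world\n")
process(fake_stdin)
```

In the real script you would call `process(sys.stdin)` instead.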
45,430,966 | Why are functions considered a data type in Lua?
You can assign functions to variables and pass them as arguments in Python too, but there is no function data type in Python. | 2017/08/01 | [
"https://Stackoverflow.com/questions/45430966",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4931135/"
] | I think you are mistaken.
If you take a look into <https://docs.python.org/2/reference/datamodel.html#types>
You'll find that Python even has multiple function types.
Callable types:
* user defined function
* user defined methods
* generator functions
* built-in functions
* built-in methods
* ...
There are further sections in the Python documentation that provide detail on the various types the interpreter supports.
Why is there a function type? I guess because it makes sense in a typed language to have different types for different kinds of things. If you don't have different types you don't need types at all. Having a function type is just consistent. How else would you classify a reference to a function? | Python does actually have a function type; anonymous functions are created with `lambda`. In both of these programming languages, functions are first-class values, which is just a fancy way of saying you can pass them around to functions just like numbers or strings. It makes it possible to use [functional programming](https://en.wikipedia.org/wiki/Functional_programming) as a paradigm if it fits your purposes and the problem you are solving. | 3,422
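Both answers can be checked from the interpreter; in CPython, named and `lambda`-defined functions share one underlying function type, and either can be passed around as a value:

```python
def greet(name):
    return "hello " + name

anonymous = lambda name: "hi " + name

# named and lambda-defined functions have the same type
print(type(greet).__name__)            # function
print(type(greet) is type(anonymous))  # True

# functions are first-class values: assign and pass them around
def apply(func, arg):
    return func(arg)

print(apply(greet, "world"))           # hello world
```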
53,863,318 | First, I was able to fix the ImportError. I figured out that it was because the Django version on pythonanywhere was not up to date, so I upgraded Django on pythonanywhere from 1.x.x to 2.0.9.
The error came out like this:
>
> ImportError at /
> cannot import name 'path'
>
>
>
```
django version: 1.x.x
python version: 3.6.6
```
and, unfortunately, my app gave me another error:
>
> OperationalError at /
> no such column: blog\_post.published\_date
> Request Method: GET
> Request URL: http://*.pythonanywhere.com/
> Django Version: 2.0.9
> Exception Type: OperationalError
> Exception Value:
>
> no such column: blog\_post.published\_date
> Exception Location: /home/*/my-first-blog/myenv/lib/python3.6/site-packages/django/db/backends/sqlite3/base.py
> in execute, line 303
> Python Executable: /usr/local/bin/uwsgi
> Python Version: 3.6.6
>
>
>
I thought this error occurred because of some database issue, so I tried `migrate` or `makemigrations` on pythonanywhere, but I still could not fix it.
So, is there anyone who knows how to fix this database?
**Here is my `model.py`:**
```
from django.conf import settings
from django.db import models
from django.utils import timezone
class Post(models.Model):
author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
title = models.CharField(max_length=200)
text = models.TextField()
created_date = models.DateTimeField(default=timezone.now)
published_date = models.DateTimeField(blank=True, null=True)
def publish(self):
self.published_date = timezone.now()
self.save()
def __str__(self):
return self.title
```
**here is the output of `python manage.py showmigrations`:**
```
admin
[X] 0001_initial
[X] 0002_logentry_remove_auto_add
auth
[X] 0001_initial
[X] 0002_alter_permission_name_max_length
[X] 0003_alter_user_email_max_length
[X] 0004_alter_user_username_opts
[X] 0005_alter_user_last_login_null
[X] 0006_require_contenttypes_0002
[X] 0007_alter_validators_add_error_messages
[X] 0008_alter_user_username_max_length
[X] 0009_alter_user_last_name_max_length
blog
[X] 0001_initial
contenttypes
[X] 0001_initial
[X] 0002_remove_content_type_name
sessions
[X] 0001_initial
``` | 2018/12/20 | [
"https://Stackoverflow.com/questions/53863318",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10665552/"
] | The problem, as I see it, has to do with the database and Django migrations.
The `Post` object inside the blog app has the attribute that Django is trying to find. The migrations haven't been correctly applied to the database.
Now, considering the history of migrations, I do not know what's going wrong unless I can look around your database, which I'm assuming is SQLite.
One way to resolve this if you're having a newly constructed database is to get rid of the database and do the following:
* Delete all the migrations from `app/migrations` directory
* `python manage.py makemigrations`
* `python manage.py migrate`
Also, try to avoid `sqlite` as much as possible. The same migrations that ran on an `sqlite` db might be erroneous on Postgres or MySQL database which are more production grade databases.
**NOTE**: Please understand that this would lead to a complete data loss. Hence, try this only if you can afford to compromise on the existing/test data. | Don't forget to refresh your production server after every migration if you want the changes to take effect | 3,423 |
11,226,252 | Is there a way to loop in `while` if you start the script with `python -c`? This doesn't seem to be related to platform or python version...
**Linux**
```
[mpenning@Hotcoffee ~]$ python -c "import os;while (True): os.system('ls')"
File "<string>", line 1
import os;while (True): os.system('ls')
^
SyntaxError: invalid syntax
[mpenning@Hotcoffee ~]$
[mpenning@Hotcoffee ~]$ python -V
Python 2.6.6
[mpenning@Hotcoffee ~]$ uname -a
Linux Hotcoffee 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux
[mpenning@Hotcoffee ~]$
```
**Windows**
```
C:\Users\mike_pennington>python -c "import os;while True: os.system('dir')"
File "<string>", line 1
import os;while True: os.system('dir')
^
SyntaxError: invalid syntax
C:\Users\mike_pennington>python -V
Python 2.7.2
C:\Users\mike_pennington>
```
I have tried removing parenthesis in the `while` statement, but nothing seems to make this run. | 2012/06/27 | [
"https://Stackoverflow.com/questions/11226252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/667301/"
] | Multiline statements may not start after a statement-separating `;` in Python – otherwise, there might be ambiguities about the code blocks. Simply use line breaks instead of `;`. This "works" on Linux:
```
$ python -c "import os
while True: os.system('ls')"
```
Not sure how to enter this on Windows, but why not simply write the commands to a `.py` file if it's more than one line? | Don't know about Windows; if all you want is to be able to type in one-liners, you could consider line breaks inside quotes:
```
% python -c "import os;
while (True):
os.system('ls')"
``` | 3,424 |
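A further single-line workaround (besides the literal line breaks shown above) is to wrap the multi-line code in `exec()`, since the restriction only concerns compound statements following `;`. A small illustration of both halves:

```python
# a compound statement may not follow ';' on one line:
try:
    compile("import os;while True: pass", "<cmd>", "exec")
except SyntaxError as err:
    print("SyntaxError:", err.msg)

# but real newlines inside an exec'd string are fine
exec('for i in range(3):\n    print(i)')  # prints 0, 1, 2
```

On the shell this becomes something like `python -c "exec('import os\nwhile True: os.system(\'ls\')')"` (exact quoting varies by shell).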
52,119,496 | I am trying to write code to solve this python exercise: **I must use** the 'math' library, sqrt and possibly pow functions.
>
> "The distance between two points x and y is the square root of the sum
> of squared differences along each dimension of x and y.
>
>
> "Create a function that takes two vectors and outputs the distance
> between them.
>
>
> x = (0,0) y = (1,1)"
>
>
>
So far I've tried this - which certainly hasn't worked.
```
x = (0,0)
y = (1,1)
(c1, c2) = x
(c3, c4) = y
math.sqrt(sum((c1,**2)(c2,**2)(c3,**2)(c4,**2)))
```
>
>
> ```
> File "<ipython-input-14-ac0f3dc1fdeb>", line 1
> math.sqrt(sum((c1,**2)(c2,**2)(c3,**2)(c4,**2)))
> ^
> SyntaxError: invalid syntax
> ```
>
>
```
if c1 < c3:
difference1 = c3-c1
print(difference1)
```
>
> 1
>
>
>
... not even sure if that's the kind of calculation I should be working with.
```
def distance(x, y):
```
ummm... I expect the function starts by unpacking the tuples! But not sure how to write the rest of it, or cleanly.
I'm a beginner programmer & not a mathematician so I may be wrong in more than one sense... This exercise is from this HarvardX course: ['Using Python for Research'](https://courses.edx.org/courses/course-v1:HarvardX+PH526x+2T2018/4bdcc373b7a944f8861a3f190c10edca/).
It's OK to search for solutions via StackOverflow for learning on this course... not cheating to ask for pointers.
Many thanks for any ideas! I will keep searching around. | 2018/08/31 | [
"https://Stackoverflow.com/questions/52119496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10265759/"
] | ```
import math
def distance(x, y):
    value = math.sqrt((x[0] - y[0])**2 + (x[1] - y[1])**2)
    print(value)
distance((0,0), (1,1))
``` | Thanks so much for those ideas! I figured it out. So happy.
```
# x and y are the two points from the question, e.g. x = (0, 0), y = (1, 1)
dis = math.sqrt((y[0] - x[0])**2 + (y[1] - x[1])**2)
print(dis)
``` | 3,429 |
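Since the exercise statement says "along each dimension", the same approach generalizes to vectors of any length with `zip`; a stdlib-only sketch:

```python
import math

def distance(x, y):
    # square root of the sum of squared differences along each dimension
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

print(distance((0, 0), (1, 1)))        # 1.4142135623730951
print(distance((0, 0, 0), (1, 2, 2)))  # 3.0
```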
64,260,105 | I want to read all parquet files from an S3 bucket, including all those in the subdirectories (these are actually prefixes).
Using wildcards (\*) in the S3 url only works for the files in the specified folder. For example using this code will only read the parquet files below the `target/` folder.
```
df = spark.read.parquet("s3://bucket/target/*.parquet")
df.show()
```
Let's say I have a structure like this in my s3 bucket:
```
"s3://bucket/target/2020/01/01/some-file.parquet"
"s3://bucket/target/2020/01/02/some-file.parquet"
```
The above code will raise the exception:
```
pyspark.sql.utils.AnalysisException: 'Path does not exist: s3://mailswitch-extract-underwr-prod/target/*.parquet;'
```
**How can I read all the parquet files from the subdirectories from my s3 bucket?**
To run my code, I am using AWS Glue 2.0 with Spark 2.4 and python 3. | 2020/10/08 | [
"https://Stackoverflow.com/questions/64260105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1771155/"
] | If you want to read all parquet files below the target folder
```
"s3://bucket/target/2020/01/01/some-file.parquet"
"s3://bucket/target/2020/01/02/some-file.parquet"
```
you can do
```
df = spark.read.parquet("bucket/target/*/*/*/*.parquet")
```
The downside is that you need to know the depth of your parquet files. | This worked for me:
```
df = spark.read.parquet("s3://your/path/here/some*wildcard")
``` | 3,431 |
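The one-`*`-per-level behaviour mirrors ordinary filesystem globbing, where a recursive `**` pattern matches any depth. A local-filesystem illustration with the stdlib (the directory layout is fabricated to match the question; this shows `glob`-style matching on a local disk, not Spark's own path resolution):

```python
import glob
import os
import tempfile

root = tempfile.mkdtemp()
deep = os.path.join(root, "target", "2020", "01", "01")
os.makedirs(deep)
open(os.path.join(deep, "some-file.parquet"), "w").close()

# one '*' matches exactly one directory level, so the depth must be known:
print(glob.glob(os.path.join(root, "target", "*", "*", "*", "*.parquet")))

# '**' (with recursive=True) matches any number of levels:
print(glob.glob(os.path.join(root, "target", "**", "*.parquet"), recursive=True))
```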
40,446,084 | Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error:
```
selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH.
```
But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder.
```
driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs')
```
However, I get a new error-code now:
```
selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions.
```
Here's what I get when I check the file permissions of the path.
```
total 40
drwxr-xr-x 7 USER staff 238 Nov 6 00:07 .
drwxr-xr-x 17 USER staff 578 Nov 6 00:03 ..
-rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store
-rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py
drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__
-rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py
-rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py
``` | 2016/11/06 | [
"https://Stackoverflow.com/questions/40446084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7121239/"
] | I think the true reason for your problem is this: **the phantomjs binary which webdriver needs is not the one under the `selenium/webdriver` folder**. When you use anaconda to install this package, it's really confusing (at least for me).
* First install it with `conda install -c conda-forge phantomjs`, and test it with `phantomjs --version`.
* Then you can find the true phantomjs executable in this folder: `"path = /${home_path}/anaconda3/envs/${env_name}/bin/phantomjs"`. To test if it's the true path, run `/${home_path}/anaconda3/envs/${env_name}/bin/phantomjs --version`. It should output the version information correctly.
* Pass this path to `webdriver.PhantomJS(executable_path=path)` and it will be fixed.
So there's no need to use `chmod` or put it in `/usr/local/bin` (that way, the only benefit is that you can skip the `executable_path` parameter) | Strangely, for me it was fixed by putting phantomjs in `/usr/local/share` and adding some symbolic links. I followed [these steps](https://stackoverflow.com/questions/8778513/how-can-i-setup-run-phantomjs-on-ubuntu):
* move the phantomjs folder to `/usr/local/share/`:
+ `sudo mv phantomjs-2.1.1-linux-x86_64.tar.bz2 /usr/local/share/.`
* create the symbolic links:
+ `sudo ln -s /usr/local/share/phantomjs-1.8.1-linux-x86_64 /usr/local/share/phantomjs`
I'm no Linux expert so I don't know why this makes a difference. If anyone wants to pitch in, feel free. | 3,432 |
58,460,780 | **using python 3.7**
Hi. I am trying to get the selected treeview item and want to print it once I click the left menu item. This is my treeview list. When I right click, a menu appears with a stop process command. I am trying to get the selected item and print it, but it's giving me this error
```
AttributeError: 'str' object has no attribute 'x' in treeview item
```
**Here is my tree list**
[enter image description here](https://i.stack.imgur.com/EJRy1.png)
**Here is my code**
```
self.popup_menu.add_command(label="stop process",
command=lambda:self.delete_selected("<Button-3>"))
self.tree.bind('<Button-3>', self.popup)
def delete_selected(self, event):
item = self.tree.identify('name','ID',event.x, event.y)
print(item)
def popup(self, event):
"""action in event of button 3 on tree view"""
try:
self.popup_menu.tk_popup(event.x_root, event.y_root, 0)
finally:
self.popup_menu.grab_release()
``` | 2019/10/19 | [
"https://Stackoverflow.com/questions/58460780",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12241800/"
] | No, there is nothing like that, but there are tools that try to mimic this behavior, for example Lombok. Using the `@Data` annotation we get a default constructor, getters, setters, `toString`, `equals`, and `hashCode`. We can fine-tune it by using annotations like `@Getter`, `@NoArgsConstructor`, etc. | Neither Java nor Kotlin has anything similar to those Swift types you are talking about. Assignment *always* copies references to an object, rather than the object itself. What Kotlin's data classes do is that they create a `copy` method (among other things) that allows you to explicitly make a copy of an object, but you still have to actually call the method.
```
val b = a // b and a point to the same object, even if it is a data class
```
```
val b = a.copy() // this is what you need to do to create a copy of a data class
```
Java assignment copies references, not objects, and the same is true for Kotlin. There is no way around this, because it is a feature of the language itself. Copy constructors and methods (like what Kotlin's data class gives you) are the closest thing you have to such a feature. To get something like this in Java without having to manually write the code everytime, you could look into Project Lombok. | 3,442 |
29,943,146 | I am new to Python and am trying to port a script from 2.x to 3.x. I am encountering the error "TypeError: must use keyword argument for key function" in Python 3.x. Below is the piece of code. Please help:
```
def resort_working_array( self, chosen_values_arr, num ):
for item in self.__working_arr[num]:
data_node = self.__pairs.get_node_info( item )
new_combs = []
for i in range(0, self.__n):
# numbers of new combinations to be created if this item is appended to array
new_combs.append( set([pairs_storage.key(z) for z in xuniqueCombinations( chosen_values_arr+[item], i+1)]) - self.__pairs.get_combs()[i] )
# weighting the node
item.weights = [ -len(new_combs[-1]) ] # node that creates most of new pairs is the best
item.weights += [ len(data_node.out) ] # less used outbound connections most likely to produce more new pairs while search continues
item.weights += [ len(x) for x in reversed(new_combs[:-1])]
item.weights += [ -data_node.counter ] # less used node is better
item.weights += [ -len(data_node.in_) ] # otherwise we will prefer node with most of free inbound connections; somehow it works out better ;)
self.__working_arr[num].sort( key = lambda a,b: cmp(a.weights, b.weights) )
``` | 2015/04/29 | [
"https://Stackoverflow.com/questions/29943146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4846265/"
] | Looks like the problem is in this line.
```
self.__working_arr[num].sort( key = lambda a,b: cmp(a.weights, b.weights) )
```
The `key` callable should take only one argument. Try:
```
self.__working_arr[num].sort(key = lambda a: a.weights)
``` | The exact same error message appears if you try to pass the *key* parameter as a positional parameter.
Wrong:
```
sort(lst, myKeyFunction)
```
Correct:
```
sort(lst, key=myKeyFunction)
```
Python 3.6.6 | 3,444 |
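When porting Python 2 code that still carries a two-argument `cmp`-style comparator, the standard library offers `functools.cmp_to_key` as a direct bridge; shown here with a generic comparator rather than the original `weights` objects:

```python
from functools import cmp_to_key

def compare(a, b):
    # old Python 2 style comparator: negative, zero, or positive
    return (a > b) - (a < b)

lst = [3, 1, 2]
lst.sort(key=cmp_to_key(compare))
print(lst)  # [1, 2, 3]
```

Where possible, a plain `key=` function (as in the accepted answer) is still preferable, since it is called once per element rather than once per comparison.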
64,620,456 | I'm a beginner in Python and I want to use a comprehension to create a dictionary. Let's say I have the two lists below and want to convert them to a dictionary like `{'Key 1':['c','d'], 'Key 2':['a','f'], 'Key 3':['b','e']}`. I can only think of the code below, and I don't know how to change the value of the key and the filter using a comprehension. How should I change my code?
```
value = ['a','b','c','d','e','f']
key = [2, 3, 1, 1, 3, 2]
{"Key 1" : [value for key,value in list(zip(key,value)) if key==1]}
``` | 2020/10/31 | [
"https://Stackoverflow.com/questions/64620456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | This should do it:
```
value = ['a','b','c','d','e','f']
key = [2, 3, 1, 1, 3, 2]
answer = {}
for k, v in zip(key, value):
if k in answer:
answer[k].append(v)
else:
answer[k] = [v]
print(answer)
{2: ['a', 'f'], 3: ['b', 'e'], 1: ['c', 'd']}
```
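The membership test in the loop above can also be avoided with `collections.defaultdict`:

```python
from collections import defaultdict

value = ['a', 'b', 'c', 'd', 'e', 'f']
key = [2, 3, 1, 1, 3, 2]

answer = defaultdict(list)
for k, v in zip(key, value):
    answer[k].append(v)

print(dict(answer))  # {2: ['a', 'f'], 3: ['b', 'e'], 1: ['c', 'd']}
```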
EDIT: oops, jumped the gun. Apologies.
Here's the comprehension version, but it's not very efficient:
```
{
k: [v for i, v in enumerate(value) if key[i] == k]
for k in set(key)
}
```
EDIT 2:
Here's one that has better complexity:
```
import pandas as pd
series = pd.Series(key)
{
k: [value[i] for i in indices]
for k, indices in series.groupby(series).groups.items()
}
``` | You could do it with dictionary comprehension *and* list comprehension:
```
{f"Key {k}" : [value for key,value in zip(key,value) if key == k] for k in key}
```
Your lists would yield the following:
```
{'Key 2': ['a', 'f'], 'Key 3': ['b', 'e'], 'Key 1': ['c', 'd']}
```
As requested. | 3,447 |
35,697,643 | I have a `Frame` with two columns of `String`,
```
let first = Series.ofValues(["a";"b";"c"])
let second = Series.ofValues(["d";"e";"f"])
let df = Frame(["first"; "second"], [first; second])
```
How do I produce a third column as the concatenation of the two columns?
In `python` `pandas`, this can be achieved with simple `+` operator, but `deedle` gives error if i do that,
```
error FS0043: No overloads match for method 'op_Addition'.
``` | 2016/02/29 | [
"https://Stackoverflow.com/questions/35697643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1569058/"
] | It sounds like what you want is to have something that returns something like:
```
Series.ofValues(["ad"; "be"; "cf"])
```
Then I think you need to define an addition operator with something like this:
```
let additionOperator = (fun (a:string) (b:string) -> (a + b))
```
And then you can add them like this:
```
Series.zipInto additionOperator first second
```
I get as the result:
```
val it : Series<int,string> = series [ 0 => ad; 1 => be; 2 => cf]
```
However if you are alright with tuples as your result, you can just use:
```
Series.zip first second
``` | I come across this after facing the same issue, the trick is to get the values as seq and use Seq.map2 to concat the two seqs, my solution is
```
let first = Series.ofValues(["a";"b";"c"])
let second = Series.ofValues(["d";"e";"f"])
let df = Seq.map2 (fun x y -> x+y) first.Values second.Values
|> Series.ofValues
|> (fun x -> Frame.addCol "third" x (Frame(["first"; "second"], [first; second])))
```
Result:
```
df.Print()
first second third
0 -> a d ad
1 -> b e be
2 -> c f cf
``` | 3,452 |
62,030,549 | I have a directory filled with '.tbl' files. The file set up is as follows:
\STAR\_ID = "HD 74156"
\DATA\_CATEGORY = "Planet Radial Velocity Curve"
\NUMBER\_OF\_POINTS = "82"
\TIME\_REFERENCE\_FRAME = "JD"
\MINIMUM\_DATE = "2453342.23249"
\DATE\_UNITS = "days"
\MAXIMUM\_DATE = "2454231.60002"
....
I need to rename every file in the directory using the STAR\_ID, so in this case the file's name would be 'HD 74156.tbl'.
I have been able to do it for about 20 of the ~600 files. I am not sure why it will not continue through the rest of the files. My current code is:
```
for i in os.listdir(path):
with open(i) as f:
first_line = f.readline()
system = first_line.split('"')[1]
new_file = system + ".tbl"
        os.rename(file, new_file)
```
and the error message is:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-37-5883c060a977> in <module>
3 with open(i) as f:
4 first_line = f.readline()
----> 5 system = first_line.split('"')[1]
6 new_file = system + ".tbl"
7 os.rename(file, new_file)
IndexError: list index out of range
``` | 2020/05/26 | [
"https://Stackoverflow.com/questions/62030549",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13622725/"
] | This error occurs because `first_line.split('"')` is returning a list with fewer than 2 items.
You can try:
```
first_line_ls = first_line.split('"')
if len(first_line_ls) > 1:
system = first_line_ls[1]
else:
#other method
```
This code can help you prevent the error and handle cases where the first\_line string contains fewer than 2 `"` characters. | It looks like these `.tbl` files are not as uniform as you might have hoped. If this line:
```
----> 5 system = first_line.split('"')[1]
```
fails on some files, it's because their first line is not formatted as you expected, as @Leo Arad noted. You also want to make sure you're *actually* using the `STAR_ID` field. Perhaps these files usually put all the fields in the same order (as an aside, what are these `.tbl` files? What software did they come from? I've never seen it before), but since you've already found other inconsistencies with the format, better to be safe than sorry.
I might write a little helper function to parse the fields in this file. It takes a single line and returns a `(key, value)` tuple for the field. If the line does not look like a valid field it returns `(None, None)`:
```py
import re

# Dissection of this regular expression:
#   ^\\                : line begins with \
#   (?P<key>\w+)       : extract the key, which is one or more letters, numbers or underscores
#   \s*=\s*            : an equal sign surrounded by any amount of white space
#   "(?P<value>[^"]*)" : extract the value, which is between a pair of double-quotes
#                        and contains any characters other than double-quotes
# (Note: I don't know if this file format has a mechanism for escaping
# double-quotes inside the value; if so that would have to be handled as well)
_field_re = re.compile(r'^\\(?P<key>\w+)\s*=\s*"(?P<value>[^"]*)"')

def parse_field(line):
    # match the line against the regular expression
    match = _field_re.match(line)
    # if it doesn't match, return (None, None)
    if match is None:
        return (None, None)
    else:
        # return the key and value pair
        return match.groups()
```
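A quick self-contained check of the pattern (the sample field line here is made up, in the `\KEY = "value"` format described above):

```python
import re

_field_re = re.compile(r'^\\(?P<key>\w+)\s*=\s*"(?P<value>[^"]*)"')

print(_field_re.match('\\STAR_ID = "HD 12345"').groups())  # ('STAR_ID', 'HD 12345')
print(_field_re.match('not a field line'))                 # None
```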
Now open your file, loop over all the lines, and perform the rename once you find `STAR_ID`. If not, print a warning (this is mostly the same as your code with some slight variations):
```
for filename in os.listdir(path):
    filename = os.path.join(path, filename)
    star_id = None
    # NOTE: Do the rename outside the with statement so that the
    # file is closed; on Linux it doesn't matter but on Windows
    # the rename will fail if the file is not closed first
    with open(filename) as fobj:
        for line in fobj:
            key, value = parse_field(line)
            if key == 'STAR_ID':
                star_id = value
                break

    if star_id is not None:
        os.rename(filename, os.path.join(path, star_id + '.tbl'))
    else:
        print(f'WARNING: STAR_ID key missing from {filename}', file=sys.stderr)
```
If you are not comfortable with regular expressions (and really, who is?) it would be good to learn the basics as it's an extremely useful tool to have in your belt. However, this format is simple enough that you could get away with using simple string parsing methods like you were doing. Though I would still enhance it a bit to make sure you're actually getting the STAR\_ID field. Something like this:
```
def parse_field(line):
    if '=' not in line:
        return (None, None)

    key, value = [part.strip() for part in line.split('=', 1)]

    if key[0] != '\\':
        return (None, None)
    else:
        key = key[1:]

    if value[0] != '"' or value[-1] != '"':
        # still not a valid line assuming quotes are required
        return (None, None)
    else:
        return (key, value.split('"')[1])
```
This is similar to what you were doing, but a little more robust (and returns the key as well as the value). But you can see this is more involved than the regular expression version. It's actually more-or-less implementing the exact same logic as the regular expression, but more slowly and verbosely. | 3,455 |
48,675,435 | In a personal project, I am trying to use Django as my front end and then allow data entered by users in a particular form to be copied to google sheets.
Google's own docs recommend using <https://github.com/google/oauth2client> which is now deprecated, and the docs have not been updated. With this, I have started attempting to use [Python Social Auth](https://github.com/python-social-auth/social-core) and [Gspread](https://github.com/burnash/gspread). For Gspread to be able to function correctly, I need to be able to pass it not only an access token but also a refresh token. Python Social Auth however is not persisting the refresh token along with the rest of the "extra data". Looking at the data preserved and the URLs routed to, it seems to me more like somewhere it is routing through Google+.
I have the following configurations in my Django settings files:
```
AUTHENTICATION_BACKENDS = (
    'social_core.backends.google.GoogleOAuth2',
    'django.contrib.auth.backends.ModelBackend',
)

SOCIAL_AUTH_PIPELINE = (
    'social_core.pipeline.social_auth.social_details',
    'social_core.pipeline.social_auth.social_uid',
    'social_core.pipeline.social_auth.social_user',
    'social_core.pipeline.user.get_username',
    'social_core.pipeline.user.create_user',
    'social_core.pipeline.social_auth.associate_user',
    'social_core.pipeline.social_auth.load_extra_data',
    'social_core.pipeline.user.user_details',
    'social_core.pipeline.social_auth.associate_by_email',
)

SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = '...'
SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = '...'
SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = ['https://www.googleapis.com/auth/spreadsheets']
```
* Is there a better way to access a google sheet?
* Am I correct that PSA or google is redirecting me into a Google+ auth flow instead of the Google Oauth2?
* If not, what must change so that Python Social Auth keeps the refresh token? | 2018/02/08 | [
"https://Stackoverflow.com/questions/48675435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6883167/"
] | It's true that `python-social-auth` will use some bits of the Google+ platform, at least the API to retrieve details about the user to fill in the account.
From your settings, I see you have `associate_by_email` at the bottom; at that point it has no use, since the user has already been created. If you really plan to use it, it must be before the `create_user` one; you can check the [`DEFAULT_PIPELINE`](https://github.com/python-social-auth/social-core/blob/master/social_core/pipeline/__init__.py#L29) as a reference.
In order to get a `refresh_token` from Google, you need to tell it that you want one; to do that, you need to set the `offline` access type:
```
SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = {
    'access_type': 'offline'
}
```
With that setting Google will give you a `refresh_token` and it will automatically be stored in `extra_data`. | Just provide this in your `settings.py`:
```
SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = {
    'access_type': 'offline',
    'hd': 'xyzabc.com',
    'approval_prompt': 'force'
}
```
Remember there is `{'approval_prompt': 'force'}`, which will force the user to select the Gmail account; this way you will get a refresh token. | 3,456 |
67,519,212 | I have written a simple caesar cipher code to take a string and a positional shift argument i.e cipher to encrypt the string. However, I have realized some of the outputs won't decrypt correctly. For example:
`python .\caesar_cipher.py 'fortuna' 6771 --encrypt` outputs `☼↑↔▲↨`
`python .\caesar_cipher.py '☼↑↔▲↨' 6771 --decrypt` outputs `` `,/UC `` (the first character is a backtick; forgive my markdown skills)
I'm fairly certain there is some issue of encoding but I couldn't pinpoint it. Instead of printing and passing it as a command-line argument between two runs, if I were to just encrypt and decrypt in the same run, the output seems correct.
I'm using windows and I tried to run the above example (and a couple of others) both in cmd and PowerShell to test it.
Here is my code:
```
import argparse

# 127 number of chars in ascii
NO_OF_CHARS = 127

def encrypt(s: str) -> str:
    return ''.join([chr((ord(c)+cipher) % NO_OF_CHARS) for c in s])

def decrypt(s: str) -> str:
    return ''.join([chr((ord(c)-cipher) % NO_OF_CHARS) for c in s])

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("--encrypt", help="encrypt the string", action="store_true")
group.add_argument("--decrypt", help="decrypt the string", action="store_true")
parser.add_argument("string", type=str, help="string to encrypt/decrypt")
parser.add_argument("cipher", type=int,
                    help="positional shift amount for caesar cipher")
args = parser.parse_args()

string = args.string
encrypt_arg = args.encrypt
decrypt_arg = args.decrypt
cipher = args.cipher

if encrypt_arg:
    result = encrypt(string)
else:
    result = decrypt(string)

print (result)
``` | 2021/05/13 | [
"https://Stackoverflow.com/questions/67519212",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9169087/"
] | I think the problem is in copying and pasting the value after the encryption. When I tested this code, I found (and you mentioned this too) that directly transferring the encrypted value to the decrypt function by storing it in a variable doesn't cause any problem, but directly pasting it does.
To overcome this problem, write the encrypted text to a file encoded as binary, and then read it back from that file.
The file name and the **CIPHER** have to be passed to the CLI; it will give you the correct output.
This would work:
```
import argparse

# 127 number of chars in ascii
NO_OF_CHARS = 127

def encrypt(s: str) -> str:
    return ''.join([chr((ord(c)+cipher) % NO_OF_CHARS) for c in s])

def decrypt(s: str) -> str:
    return ''.join([chr((ord(c)-cipher) % NO_OF_CHARS) for c in s])

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("--encrypt", help="encrypt the string", action="store_true")
group.add_argument("--decrypt", help="decrypt the string", action="store_true")
parser.add_argument("string", type=str, help="string to encrypt/decrypt")
parser.add_argument("cipher", type=int,
                    help="positional shift amount for caesar cipher")
args = parser.parse_args()

string = args.string
encrypt_arg = args.encrypt
decrypt_arg = args.decrypt
cipher = args.cipher

if encrypt_arg:
    result = encrypt(string)
    with open('encrypt', 'wb') as f:
        a = result.encode(encoding='utf-8')
        f.write(a)
        f.close()
    print("Encrypted File created with name encrypt")
else:
    with open(string, 'rb') as r:
        text = r.readlines()
    print(decrypt(text[0].decode('utf-8')))
```
To test:
`$ python caesar_cipher.py 'fortuna' 6771 --encrypt`
```
Encrypted File created with name encrypt
```
`$ python caesar_cipher.py 'encrypt' 6771 --decrypt`
```
fortuna
``` | As @KnowledgeGainer mentioned, there is no problem with your code. The issue arises because you copied the output of your encryption from the terminal, and used that as your input for decryption. The terminal you're using is trying its best to interpret some potential non-printable control characters - `fortuna` has seven characters, but `☼↑↔▲↨` appears to be only five - but these are obviously unicode characters. In a caesar-cipher, your plaintext and encrypted message should be the same length, so it's clear that the terminal is mapping one or more "output bytes" to unicode characters. In what way it's doing this is not immediately obvious to me. | 3,459 |
34,284,737 | This is a part of my code for a hangman game. it is used for all four difficulties, but when it is used on my "insane" difficulty (which uses words from a word file) it adds an extra symbol to the end of the word meaning you can't win the game. it does this for every word in the .txt file.
This code works when using an array in the python window.
```
def insane():
    global score
    print ("This words may contain an apostrophe. \nStart guessing...")
    time.sleep(0.5)
    word = random.choice(words).lower()
    print (word)
    guesses = ''
    fails = 0
    while fails >= 0 and fails < 10: #try to fix this
        failed = 0
        for char in word:
            if char in guesses:
                print (char,)
            else:
                print ("_"),
                failed += 1
        if failed == 0:
            print ("\nYou won, WELL DONE!")
            score = score + 1
            print ("your score is,", score)
            difficultyINSANE()
        guess = input("\nGuess a letter:").lower()
        guesses += guess
        if guess not in word:
            fails += 1
            print ("\nWrong")
            if fails == 1:
                print ("You have", + fails, "fail....WATCH OUT!" )
            elif fails >= 2 and fails < 10:
                print ("You have", + fails, "fails....WATCH OUT!" )
            if fails == 10:
                print ("You Loose\n")
                print ("your score is, ", score)
                print ("the word was,", word)
                score = 0
                difficultyINSANE()
```
**Edit:**
this is how i read the words
```
INSANEWORDS = open("create.txt","r+")
words = []
for item in INSANEWORDS:
    words.append(item)
``` | 2015/12/15 | [
"https://Stackoverflow.com/questions/34284737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5653652/"
] | You have a `\n` at the end of every word.
You should strip the word of the `\n` before adding it:
```
INSANEWORDS = open("create.txt", "r+")
words = []
for item in INSANEWORDS:
    words.append(item.strip('\n'))
```
**Before:**
[](https://i.stack.imgur.com/d6mOJ.png)
**After:**
[](https://i.stack.imgur.com/WarCk.png) | If my guess is correct, when you are reading a line from your text file you are also reading the newline character **\n** at the end of the word, which you can remove using:
```
word = word.strip('\n')
``` | 3,461 |
10,589,933 | I've been trying to learn python recently, and ran across something that I'm having a hard time understanding exactly how it works. Primarily, it is the design of a list.
The list in question is from this security article talking about a simple fuzzing tool: <http://blog.securestate.com/post/2009/10/06/How-a-simple-python-fuzzer-brought-down-SMBv2-in-2-seconds.aspx>
The actual list in question is:
```
#Negotiate Protocol Request
packet = [chr(int(a, 16)) for a in """
00 00 00 90
ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00
00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00
00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50
52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d
41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66
6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e
31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c
41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20
30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00
""".split()]
```
He pulls a single byte (I think?) from it using the following lines:
```
what = packet[:]
where = choice(range(len(packet)))
which = chr(choice(range(256)))
what[where] = which
```
I have never seen a list designed this way, and can't seem to follow how it is selecting whatever it does. What is confusing me most is the `packet = [chr(int(a, 16)) for a in """`, where he houses all of that stuff in what appears to be a comment block... then does `.split()`. 0\_o
I know this is a vague question, but if anyone could either explain this to me or point me in the direction of some documentation that explains that style of list building I'd be exceptionally happy. This looks like a very efficient way to store/pull out a large number of bytes. | 2012/05/14 | [
"https://Stackoverflow.com/questions/10589933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1298775/"
] | This
```
"""
00 00 00 90
ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00
00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00
00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50
52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d
41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66
6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e
31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c
41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20
30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00
"""
```
is just [multiline string](http://docs.python.org/tutorial/introduction.html#strings).
```
"""
00 00 00 90
ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00
""".split()
```
[produces split](http://docs.python.org/library/stdtypes.html#str.split) with spaces of the abovementioned string:
```
['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00']
```
And this:
```
[chr(int(a, 16)) for a in ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00']]
```
is a [list comprehension](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) which goes through the formed list and converts all the values applying `chr(int(a,16))` to each `a`.
[`int(a,16)`](http://docs.python.org/library/functions.html#int) converts string containing string representation of hexadecimal into `int`.
[`chr`](http://docs.python.org/library/functions.html#chr) converts this integer into char.
The result is:
```
>>> [chr(int(a, 16)) for a in ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00']]
['\x00', '\x00', '\x00', '\x90', '\xff', 'S', 'M', 'B', 'r', '\x00', '\x00', '\x00', '\x00', '\x18', 'S', '\xc8', '\x00', '\x00', '\x00', '\x00']
``` | The
```
"""
content
"""
```
format is a simple way to define multiline string literals in python. This is **not** a comment block.
The `[chr(int(a, 16)) for a in "00 00 00...".split()]` is a list comprehension. The large string is split into an array (split by spaces), and for each item in the array, it parses it as a hexadecimal number (`int(a, 16)` means turn string `a` into an int, where `a` is in base 16) and then returns the ascii char (`chr(...)`) represented by that integer.
`packet[:]` returns a [shallow copy](http://docs.python.org/tutorial/introduction.html#lists) of the list `packet`.
`choice(range(len(packet)))` returns a random number in the range of the length of packet.
`chr(choice(range(256)))` picks a random number in the range 0,255 and interprets it as an ascii char, and then the final statement inserts that ascii char into the randomly selected location. | 3,462 |
36,461,925 | I am not even sure how to word my question due to me being quite new to python. The basic concept of what I want to accomplish is to be able to search for something in a 2D array and retrieve the right value as well as the values associated with that value (sorry for my bad explanation)
e.g.
`array=[[1,a,b],[2,x,d],[3,c,f]]`
if the user wants to find `2`, I want the program to retrieve `[2,x,d]` and if possible, put that into a normal (1D) array. Likewise, if the user searches for `3`, the program should retrieve `[3,c,f]`.
Thank you in advance (and if possible I want a solution that does not involve numpy) | 2016/04/06 | [
"https://Stackoverflow.com/questions/36461925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6168984/"
] | I am not sure if I understood the question correctly, but from what I got, I think you can create a model instance with initial values (placeholders) and allow your component to initialise with this model; then, when your data is ready, change the model instance values, which will be reflected in your component.
This way, your component doesn't need to wait; it just uses placeholder data, which you can of course test for inside the component and display your template accordingly, and when the data is ready from the parent, updating it will update the child.
I hope this helped. | Which version of Angular are you on? Not sure if you're copy-pasting the redacted code, but it seems as if you're missing the `implements` keyword there in your class.
`*ngIf` works good in this [plunker](https://plnkr.co/edit/jXsRvHZ33A1KrRxROGAK?p=preview).
From what I gather, something like \*ngIf is the proper way to do things in Ng2. Basically, only show the component if the conditions are good.
You might be running into a snag because your component gets *instantiated* before you expect it - because you require it in your parent component.
That might be because your component itself (or the template) expects some values, but they're not there (so your `constructor` breaks down).
According to [Lifecycle Hooks](https://angular.io/docs/ts/latest/guide/lifecycle-hooks.html) page on [angular.io](https://angular.io), that's exactly what OnInit interface is for.
---
Here's the code from the plunker directly (yours would be the SubComponent):
```
import {Component, OnInit} from 'angular2/core'

@Component({
  selector: 'sub-component',
  template: '<p>Subcomponent is alive!</p>'
})
class SubComponent {}

@Component({
  selector: 'my-app',
  providers: [],
  template: `
    <div>
      <h2>Hello {{name}}</h2>
      <div *ngIf="initialized">
        Initialized
        <sub-component>Sub</sub-component>
      </div>
      <div *ngIf="!initialized">Not initialized</div>
    </div>
  `,
  directives: [SubComponent]
})
export class App implements OnInit {
  initialized = false;

  constructor() {
    this.name = 'Angular2'
  }

  ngOnInit() {
    setTimeout(() => {
      this.initialized = true;
    }, 2000)
  }
}
``` | 3,471 |
39,816,500 | I've recently begun work on a Python program as seen in the fragment below.
```
# General Variables
running = False
new = True
timeStart = 0.0
timeElapsed = 0.0

def endProg():
    curses.nocbreak()
    stdscr.keypad(False)
    curses.echo()
    curses.endwin()
    quit()

# Draw
def draw():
    stdscr.addstr(1, 1, ">", curses.color_pair(6))
    stdscr.border()
    if running:
        stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.ctime( timeStart - timeElapsed ) ) )
    stdscr.redrawwin()
    stdscr.refresh()

# Calculate
def calc():
    if running:
        timeElapsed = t.clock() - timeStart
        stdscr.border()
        stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.gmtime( t.clock() - t.clock() ) ) )

# Main Loop
while True:
    # Get Input
    kInput = stdscr.getch()

    # Close the program
    if kInput == ord('q'):
        endProg()
    # Stop the current run
    elif kInput == ord('s'):
        stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.gmtime( t.clock() - t.clock() ) ) )
        running = False
        new = True
    # Start a run
    elif kInput == ord(' ') and new:
        running = not running
        new = not new
        timeStart = dt.datetime.now()
    # Toggle the timer
    elif kInput == ord('p') and not new:
        timeStart = dt.datetime.now() - timeStart
        running = not running

    calc()
    draw()
```
**My program is a bit between solutions currently**, sorry if something doesn't look right. I'll be more than happy to explain.
I've spent the last several hours reading online about the time and datetime modules for python, trying to figure out how I can use them to accomplish my goals, but however I've tried to implement them it's been no use.
Essentially, I need my program to measure the elapsed time from when a button is pressed and be able to display it in a hour:minute.second format. The subtraction has made it very difficult, having to implement things such as timedelta. From what I have read online there is no way to do what I'm wanting without the datetime module, but it's given me nothing but problems.
Is there an easier solution, does my code have any outstanding errors, and how stupid am I? | 2016/10/02 | [
"https://Stackoverflow.com/questions/39816500",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6911375/"
] | Install mocha and its types:
```sh
npm install mocha --save-dev
npm install @types/mocha --save-dev
```
Then, simply import mocha in your test files:
```js
import 'mocha';
describe('my test', () => {
  it('does something', () => {
    // your test
  });
});
``` | Since TypeScript 2.0, you can add `mocha` to the `types` configuration of your `tsconfig.json` and it will always be loaded:
```
{
  "compilerOptions": {
    "types": [
      "mocha"
    ]
  }
}
``` | 3,474 |
34,756,978 | I am trying to download py2exe, but every time I run the setup program it says "no python installation found in registry", even though I have downloaded Python 3.4 and have it working on my computer. Please help.
I'm using a 64-bit computer with the 64-bit py2exe; I downloaded Python from the python website. And I'm on Windows 8 | 2016/01/13 | [
"https://Stackoverflow.com/questions/34756978",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5781821/"
] | Try to use [cx\_Freeze](https://pypi.python.org/pypi/cx_Freeze) instead of py2exe. | I assume that you have installed everything properly. In your install settings you can choose if you want to assign the **system variable** python. As you can see from the [point 3.3 of the documentation](https://docs.python.org/3.4/using/windows.html#configuring-python), you should:
>
> 3.3.1. Excursus: Setting environment variables¶
>
>
> Windows has a built-in dialog for changing environment variables (following guide applies to XP classical view): Right-click the icon for your machine (usually located on your Desktop and called “My Computer”) and choose Properties there. Then, open the Advanced tab and click the Environment Variables button.
>
>
> In short, your path is:
>
>
>
```
My Computer / Properties / Advanced / Environment Variables
```
>
> In this dialog, you can add or modify User and System variables. To change System variables, you need non-restricted access to your machine (i.e. Administrator rights).
>
>
> Another way of adding variables to your environment is using the set command:
>
>
>
```
set PYTHONPATH=%PYTHONPATH%;C:\My_python_lib
``` | 3,477 |
31,256,397 | I have data of the following form:
```
#@ <abc>
<http://stackoverflow.com/questions/ask> <question> _:question1 .
#@ <def>
<The> <second> <http://line> .
#@ <ghi>
_:question1 <http#responseCode> "200"^^<http://integer> .
#@ <klm>
<The> <second> <http://line1.xml> .
#@ <nop>
_:question1 <date> "Mon, 23 Apr 2012 13:49:27 GMT" .
#@ <jkn>
<G> <http#fifth> "200"^^<http://integer> .
#@ <k93>
_:question1 <http#responseCode> "200"^^<http://integer> .
#@ <k22>
<This> <third> <http://line2.xml> .
#@ <k73>
<http://site1> <hasAddress> <http://addr1> .
#@ <i27>
<kd8> <fourth> <http://addr2.xml> .
```
Now whenever two lines are equal, like: **`_:question1 <http#responseCode> "200"^^<http://integer> .`**, then I want to delete the equal lines (lines which match each other character by character are equal lines) along with (i) the subsequent line (which ends with a full stop) and (ii) the line previous to the equal line (which begins with #@).
```
#@ <abc>
<http://stackoverflow.com/questions/ask> <question> _:question1 .
#@ <def>
<The> <second> <http://line> .
#@ <nop>
_:question1 <date> "Mon, 23 Apr 2012 13:49:27 GMT" .
#@ <jkn>
<G> <http#fifth> "200"^^<http://integer> .
#@ <k73>
<http://site1> <hasAddress> <http://addr1> .
#@ <i27>
<kd8> <fourth> <http://addr2.xml> .
```
Now one way to do this is to store all these lines in a set in python and whenever two lines are equal (i.e. they match character by character) the previous and subsequent two lines are deleted. However, the size of my dataset is 100GB (and I have RAM of size 64GB), therefore I can not keep this information in set form in main-memory. Is there some way by which I can delete the duplicate lines along with their previous and subsequent two lines in python with limited main-memory space (RAM size 64 GB) | 2015/07/06 | [
"https://Stackoverflow.com/questions/31256397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4360034/"
] | Keep a boolean hashtable of hash codes of lines already seen.
For each line:
* if line hash()es to something you have already seen, you have a potential match: scan the file to check if it really is a duplicate.
* if line hash()es to a new hash, just mark the hash for the first time.
Dedicate as much memory as you can to this hashtable, and the false positive rate will be low (i.e. fewer times you will have to scan for duplicates and find none).
Example:
```
table_size = 2**16
seen = [False] * table_size

with open('yourfile', 'r') as infile:
    for i, line in enumerate(infile):
        h = hash(line) % table_size
        if seen[h]:
            # possible duplicate: rescan only the lines *before* this one,
            # otherwise the line would always match itself
            dup = False
            with open('yourfile', 'r') as f:
                for j, line1 in enumerate(f):
                    if j >= i:
                        break
                    if line == line1:
                        dup = True
                        break
            if not dup:
                print(line)
        else:
            seen[h] = True
            print(line)
```
As it has been pointed out, since you cannot store all the lines in memory you don't have many options, but at least this option doesn't require scanning the file for every single line, because most of the entries in the table will be False, i.e. the algorithm is sub-quadratic if the table is not full; it will degenerate to O(n²) once the table is full.
You can make a very memory-efficient implementation of the hash table, that requires only 1 bit for each hash code (e.g. make it an array of bytes, where each byte can store 8 boolean values)
---
See also [Bloom Filters](https://en.wikipedia.org/wiki/Bloom_filter) for more advanced techniques. | One fairly straightforward way - make a version of your data such that each line includes a field with its line number. Use unix 'sort' to sort that new file, excluding the line number field. The sort utility will merge sort the file even if it exceeds the size of available memory. Now you have a new file in which the duplicates are ordered, along with their original line numbers. Extract the line numbers of the duplicates and then use that as input for linearly processing your original data.
In more detailed steps:
* Make a new version of your file such that each line is prepended by its line number. So, "someline" becomes "1, someline"
* sort this file using the unix sort utility: `sort -t"," -k2,2 file`
* Scan the new file for consecutive duplicate entries in the second field
* the line numbers (first field) of such entries are the line numbers of duplicate lines in your original file - extract these and use them as input to remove duplicates in the original data. Since you know exactly where they are, you need not read in the entire file or create a giant in-memory structure for duplicates
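The numbering/sorting/scanning steps above could be sketched in shell like this (filenames are placeholders, and the comma separator assumes your data lines contain no commas; pick another separator if they do):

```shell
# 1. prepend line numbers, comma-separated
nl -ba -s',' data.txt > numbered.txt
# 2. merge-sort by the content field; sort spills to disk for huge inputs
sort -t',' -k2 numbered.txt > sorted.txt
# 3. print line numbers of entries whose content repeats the previous entry
awk -F',' 'NR > 1 && $2 == prev {print $1} {prev = $2}' sorted.txt
```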
The advantage of this method compared to some of the others suggested - it always works, regardless of the size of the input and the size of your available memory and it does not fail due to hash collisions or other probabilistic artifacts. You are leveraging the merge sort in unix sort where the hard stuff - dealing with larger-than-memory input -
has been done for you. | 3,478 |
48,964,181 | I am currently trying to load a pickled file from S3 into AWS lambda and store it to a list (the pickle is a list).
Here is my code:
```
import pickle
import boto3
s3 = boto3.resource('s3')
with open('oldscreenurls.pkl', 'rb') as data:
    old_list = s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data)
```
I get the following error even though the file exists:
```
FileNotFoundError: [Errno 2] No such file or directory: 'oldscreenurls.pkl'
```
Any ideas? | 2018/02/24 | [
"https://Stackoverflow.com/questions/48964181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6327717/"
] | Super simple solution
```py
import pickle
import boto3
s3 = boto3.resource('s3')
my_pickle = pickle.loads(s3.Bucket("bucket_name").Object("key_to_pickle.pickle").get()['Body'].read())
``` | As shown in the documentation for [`download_fileobj`](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Bucket.download_fileobj), you need to open the file in binary *write* mode and save to the file first. Once the file is downloaded, you can open it for reading and unpickle.
```
import pickle
import boto3
s3 = boto3.resource('s3')
with open('oldscreenurls.pkl', 'wb') as data:
    s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data)

with open('oldscreenurls.pkl', 'rb') as data:
    old_list = pickle.load(data)
```
`download_fileobj` takes the name of an object in S3 plus a handle to a local file, and saves the contents of that object to the file. There is also a version of this function called [`download_file`](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Bucket.download_file) that takes a filename instead of an open file handle and handles opening it for you.
In this case it would probably be better to use [S3Client.get\_object](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.get_object) though, to avoid having to write and then immediately read a file. You could also write to an in-memory BytesIO object, which acts like a file but doesn't actually touch a disk. That would look something like this:
```
import pickle
import boto3
from io import BytesIO
s3 = boto3.resource('s3')
with BytesIO() as data:
    s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data)
    data.seek(0)  # move back to the beginning after writing
    old_list = pickle.load(data)
``` | 3,480 |
66,144,386 | I have a large list of numbers (4,718,515 of them) and I need to run a method with each of them.
I have a method `Check(number)` that needs to be run with the items in `numbers`.
I don't know the most efficient way to go about this in C#.
Here are some examples of how I was able to do it in other languages if it helps
python:
`pool.imap_unordered`
and in Java:
```
forkJoinPool = new ForkJoinPool(processes);
forkJoinPool.submit(() -> words.stream()
        .unordered()
        .parallel()
        .filter(CheckValidity::checkValidity)
``` | 2021/02/10 | [
"https://Stackoverflow.com/questions/66144386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10252192/"
] | There should not be any joining since the information is available in that table. Instead of selecting all the columns you can select only DebtAccountReferences as required.
```
select DebtAccountReferences
from DebtPayment_DL
where PaymentStartDate > '01/01/2021' and CloseDate is null;
``` | Based on the information you have given, you do not need a join statement. All of the information you need is stored in the table 'DebtPayment\_DL'
Your code is therefore correct and should generate the correct output. | 3,486 |
21,361,281 | I'm trying to modify a list in a tuple, the `append` method works, while `+=` operator works yet with an exception raised saying tuple could not be modified. I know a tuple is immutable, but I'm not trying to mutate it. Why this happen?
```
In [36]: t=([1,2],)
In [37]: t[0].append(123)
In [38]: t
Out[38]: ([1, 2, 123],)
In [39]: t[0]+=[4,5,]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-39-b5b3001fbe03> in <module>()
----> 1 t[0]+=[4,5,]
TypeError: 'tuple' object does not support item assignment
In [40]: t
Out[40]: ([1, 2, 123, 4, 5],)
``` | 2014/01/26 | [
"https://Stackoverflow.com/questions/21361281",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1150712/"
] | Because `t[0] += [4,5,]` is interpreted as:
```
t[0] = t[0].__iadd__([4,5,])
```
`t[0].__iadd__([4,5])` succeeds, while `t[0] = ...` fails.
---
`list.__iadd__` extends the list in place and returns the list itself.
```
>>> lst = [0]
>>> lst2 = lst.__iadd__([1])
>>> lst
[0, 1]
>>> lst2
[0, 1]
>>> lst is lst2
True
``` | In fact you **do** change the tuple:
The `+` operator for lists creates a new list, and you try to mutate your tuple by replacing the old list with the new one. `append` modifies the list inside the tuple, therefore it works. | 3,487 |
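Both answers can be seen in one short experiment: the in-place extend succeeds before the tuple item assignment raises, so the list is mutated even though a `TypeError` escapes:

```python
t = ([1, 2],)
try:
    t[0] += [3, 4]        # list.__iadd__ extends in place, then t[0] = ... raises
except TypeError:
    pass                  # 'tuple' object does not support item assignment
print(t)                  # ([1, 2, 3, 4],)
```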
50,314,242 | I want to save floating-point numbers as pixels in an image file. I am currently working in OpenCV-python, but I had also tried it with Pillow (PIL). Both packages convert `float` pixel data to integer before writing them to the file.
I want to save pixel values such as:
```none
(245.7865, 123.18788, 98.9866)
```
But when I read back the image file I get:
```none
(246, 123, 99)
```
Somehow my floating-point numbers get rounded off and converted to integers.
How to stop PIL or OpenCV from converting them to integer? | 2018/05/13 | [
"https://Stackoverflow.com/questions/50314242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5256558/"
] | Most likely you are looking for:
```
lapply(seq_along(x), function(i){
  quantile(x[1:i], probs = 0.95)
})
```
For each index `i` in `x`, subset `x` from `1` to `i` and return the `quantile`.
The output will be a list; you can convert it to a vector:
```
unlist(lapply(seq_along(x), function(i){
  quantile(x[1:i], probs=0.95)
}))
```
or better yet (as @Rui Barradas suggested in the comments) use `sapply`:
```
sapply(seq_along(x), function(i){
  quantile(x[1:i], probs=0.95)
})
``` | Using `rollapply` would be something like the following.
```
library(xts)
rollapply(x[, "random"], width = list(seq(-length(x[, "random"]), 0)), FUN = quantile, probs = 0.95, partial = 0)
``` | 3,492 |
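Back to the original image question: one way to keep float pixel values exactly is to avoid 8-bit image formats altogether and store the float array itself (a NumPy sketch; OpenCV can likewise write 32-bit float TIFF or EXR files):

```python
import numpy as np

# A tiny "image" whose pixels are the float values from the question
img = np.array([[[245.7865, 123.18788, 98.9866]]], dtype=np.float32)

np.save("pixels.npy", img)            # lossless, keeps the float32 dtype
restored = np.load("pixels.npy")

print(restored.dtype)                 # float32
print(np.array_equal(img, restored))  # True
```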
68,873,535 | I have a large MPEG (.ts) binary file, usually a multiple of 188 bytes in size. I use Python 3, and when I read 188 bytes at a time and parse them to get the required value, it is really slow. I must traverse each 188-byte packet to get the value of the PID (binary data).
* At the same time, any offline professional MPEG analyzer gets the list of all PID values and their total counts within about 45 seconds for a 5-minute TS file, whereas my program takes > 10 minutes to do the same.
* I don't understand how they can do it so quickly, even though they might be written in C or C++.
* I tried Python multiprocessing, but it is not helping much. This means my method of parsing and handling each 188 bytes of data is not proper and is causing a huge delay.
---
```
cnt = 0
with open(file2, 'rb') as f:
    while True:
        data = f.read(188)   # one 188-byte TS packet per iteration
        if len(data) == 0:
            break
        b = BitStream(data)
        ...  # parse b to get the required value
        ...  # and increase count when needed
        cnt = cnt + 188
        f.seek(cnt)          # redundant: read() already advanced the position
``` | 2021/08/21 | [
"https://Stackoverflow.com/questions/68873535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8014376/"
] | It's already been copied. A `c_char_p` return is automatically converted to an immutable Python `bytes` object. If the return type was `POINTER(c_char)` *then* you would have a pointer to the actual memory. Sometimes you need the explicit type if you need to pass that pointer to a function to free the memory later.
A quick proof:
```py
from ctypes import *
dll = CDLL('msvcrt')
dll.strcpy.argtypes = c_char_p,c_char_p
dll.strcpy.restype = c_char_p
# strcpy returns a pointer to the destination buffer 'b'
b = create_string_buffer(30)
b2 = dll.strcpy(b,b'hello, world!')
print(b2)
b[0] = b'm' # alter the destination
print(b.value)
print(b2) # no change to b2
print()
dll.strcpy.restype = POINTER(c_char)
b3 = dll.strcpy(b,b'hello there!')
print(b3)
print(b3[:12])
b[0] = b'm' # alter the destination
print(b.value)
print(b3[:12]) # changed!
```
Output:
```none
b'hello, world!'
b'mello, world!'
b'hello, world!' # no change
<ctypes.LP_c_char object at 0x000001B65E9A5840> # the pointer
b'hello there!' # copied data from pointer
b'mello there!' # change destination buffer
b'mello there!' # cpoied data from pointer again, changed!
``` | `c_char_p` by default returns bytes object. So it will print with `b'` bytes. If need to print as string, we can do with `.decode('utf-8')`
**Example:**
```
print(b2) # prints b'hello, world!' as bytes
print(b2.decode('utf-8')) # prints 'hello, world!' as string
``` | 3,494 |
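Returning to the MPEG-TS question itself, the usual fix for this kind of slowness is to read thousands of packets per `read()` call and pull the 13-bit PID out with plain integer bit operations, instead of constructing a bitstream object per packet. A sketch (the function name is ours; the PID sits in the low 5 bits of header byte 1 plus all of byte 2):

```python
from collections import Counter

def count_pids(path, chunk_packets=65536):
    """Count PID occurrences across an MPEG-TS file.

    Reads many 188-byte packets per read() call and extracts the
    13-bit PID with integer bit operations (no per-packet objects).
    """
    counts = Counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(188 * chunk_packets)
            if not chunk:
                break
            for off in range(0, len(chunk) - 187, 188):
                # PID = low 5 bits of header byte 1, then all of byte 2
                pid = ((chunk[off + 1] & 0x1F) << 8) | chunk[off + 2]
                counts[pid] += 1
    return counts
```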
38,736,872 | I am trying to understand more about `__iter__` in Python 3. For some reason I understand `__getitem__` better than `__iter__`. I think I somehow don't get the corresponding `__next__` implementation that goes with `__iter__`.
I have this following code:
```
class Item:
    def __getitem__(self, pos):
        return range(0, 30, 10)[pos]
item1 = Item()
print(item1[1]) # 10
for i in item1:
print (i) # 0 10 20
```
I understand the code above, but then again, how do I write the equivalent code using `__iter__` and `__next__()`?
```
class Item:
    def __iter__(self):
        return self
    # Lost here
    def __next__(self, pos):
        # Lost here
```
I understand that when Python sees a `__getitem__` method, it tries iterating over the object by calling the method with integer indices starting at `0`. | 2016/08/03 | [
"https://Stackoverflow.com/questions/38736872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2126725/"
] | In general, a really good approach is to make `__iter__` a generator by `yield`ing values. This might be *less* intuitive but it is straight-forward; you just yield back the results you want and `__next__` is then provided automatically for you:
```
class Item:
    def __iter__(self):
        for item in range(0, 30, 10):
            yield item
```
This just uses the power of `yield` to get the desired effect, when `Python` calls `__iter__` on your object, it expects back an `iterator` (i.e an object that supports `__next__` calls), a generator does just that, producing each item as defined in your generator function (i.e `__iter__` in this case) when `__next__` is called:
```
>>> i = iter(Item())
>>> print(i) # generator, supports __next__
<generator object __iter__ at 0x7f6aeaf9e6d0>
>>> next(i)
0
>>> next(i)
10
>>> next(i)
20
```
Now you get the same effect as `__getitem__`. The difference is that no `index` is passed in, you have to manually loop through it in order to yield the result:
```
>>> for i in Item():
... print(i)
0
10
20
```
Apart from this, there are two other alternatives for creating an object that supports iteration.
**One time looping: Make item an iterator**
Make `Item` an iterator by defining `__next__` and returning `self` from `__iter__` in this case, since you're not using `yield` the `__iter__` method returns `self` and `__next__` handles the logic of returning values:
```
class Item:
    def __init__(self):
        self.val = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.val > 2: raise StopIteration
        res = range(0, 30, 10)[self.val]
        self.val += 1
        return res
```
This also uses an auxiliary `val` to get the result from the range and check if we should still be iterating (if not, we raise `StopIteration`):
```
>>> for i in Item():
... print(i)
0
10
20
```
The problem with this approach is that it is a one time ride, after iterating once, the `self.val` points to `3` and iteration can't be performed again. (using `yield` resolves this issue). (Yes, you could go and set `val` to 0 but that's just being sneaky.)
**Many times looping: create custom iterator object.**
The second approach is to use a custom iterator object specifically for your `Item` class and return it from `Item.__iter__` instead of `self`:
```
class Item:
    def __iter__(self):
        return IterItem()

class IterItem:
    def __init__(self):
        self.val = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.val > 2: raise StopIteration
        res = range(0, 30, 10)[self.val]
        self.val += 1
        return res
```
Now every time you iterate a new custom iterator is supplied and you can support multiple iterations over `Item` objects. | `__iter__` returns an iterator, here a generator as @machineyearning noted in the comments, and with `next` you can iterate over the object; see the example:
```
class Item:
    def __init__(self):
        self.elems = range(10)
        self.current = 0

    def __iter__(self):
        return (x for x in self.elems)

    def __next__(self):
        if self.current >= len(self.elems):
            self.current = 0
            raise StopIteration
        value = self.elems[self.current]
        self.current += 1
        return value
>>> i = Item()
>>> a = iter(i)
>>> for x in a:
...     print(x)
...
0
1
2
3
4
5
6
7
8
9
>>> for x in i:
...     print(x)
...
0
1
2
3
4
5
6
7
8
9
``` | 3,495 |
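One more point worth checking yourself: the generator-based `__iter__` is not a one-time ride, because every `iter()` call builds a fresh generator:

```python
class GenItem:
    def __iter__(self):
        yield from range(0, 30, 10)

g = GenItem()
print(list(g))  # [0, 10, 20]
print(list(g))  # [0, 10, 20] again: each loop gets its own generator
```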
68,019,978 | I am building an AdaBoost model with sklearn. Last year I made the same model with the same data, and I was able to access the feature importances. This year, when I build the model with the same data, the feature importance attribute contains NaNs. I have read posts where people had the same problem because of NaNs in their data; however, mine has none.
I am at a loss as to what is different, but I have isolated the base\_estimator DecisionTree `max_depth` as the problem. The higher the `max_depth`, the greater the number of NaNs. However, I have identified that `max_depth=10` is best for my work. This is my code.
Can anyone point out where I am going wrong, explain what is happening, or suggest another way to get the feature importances?
I have recreated the same error with a sklearn dataset below.
I have an old version of sklearn with Python 2.7, and with the same data this error doesn't occur.
Thank you
Data that I am working with is available here: <https://github.com/scikit-learn/scikit-learn/discussions/20315>
```
import pandas
import xarray
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
train_data=pandas.read_csv('data_train.csv')
model_variables=['RH','t2m','tp_r5','swvl1','SM_r20','tp','cvh','vdi','SM_r10','SM_IDW']
X = train_data[model_variables] # Features
y = train_data.ignition_no
np.count_nonzero(np.isnan(y))
0
#no missing target variables
tree = DecisionTreeClassifier(max_depth=10, random_state=12)
ada_model= AdaBoostClassifier(base_estimator = tree, random_state=12)
model= ada_model.fit(X,y)
model.feature_importances_
/home/mo/morc/.virtualenvs/newroo/lib/python3.6/site-packages/sklearn/tree/_classes.py:605: RuntimeWarning: invalid value encountered in true_divide
return self.tree_.compute_feature_importances()
array([ nan, nan, nan, nan, nan,
nan, nan, 0.02568412, nan, nan])
>>>
#Here is the same error recreated with the load_digits dataset from sklearn
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import train_test_split
from sklearn.model_selection import learning_curve
from sklearn.datasets import load_digits
>>> dataset = load_digits()
>>> X = dataset['data']
>>> y = dataset['target']
>>>
>>> score = []
>>> for depth in [1, 2, 10]:
...     reg_ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=depth))
...     scores_ada = cross_val_score(reg_ada, X, y, cv=6)
...     score.append(scores_ada.mean())
...
score
>>>[0.2615310293571163, 0.6466908212560386, 0.9621609067261242]
#best depth is 10, so making ada_boost classifier with base_estimator of max_depth=10
reg_ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=10))
model=reg_ada.fit(X,y)
model.feature_importances_
/home/mo/morc/.virtualenvs/fox/lib/python3.6/site-packages/sklearn/tree/_classes.py:605: RuntimeWarning: invalid value encountered in true_divide
return self.tree_.compute_feature_importances()
array([0.00000000e+00, 3.97071545e-03, nan, 1.04739889e-02,
1.71911851e-02, 1.13877668e-02, 5.53334918e-03, 3.48635371e-03,
3.81562332e-16, 2.97882448e-04, 5.21107270e-03, 1.90482369e-03,
9.54317398e-03, nan, 4.04579846e-03, 2.85770367e-03,
2.41466161e-03, 2.22172771e-04, nan, nan,
2.64452796e-02, 2.35455672e-02, 5.91982800e-03, 9.63862404e-15,
2.51667106e-05, 8.22347398e-03, 3.53522516e-02, 3.49199633e-02,
nan, nan, 7.85924750e-03, 0.00000000e+00,
0.00000000e+00, 2.43861329e-02, nan, 4.52136284e-03,
2.84309340e-02, 8.70846798e-03, nan, 0.00000000e+00,
0.00000000e+00, 8.51258472e-03, nan, 4.08880381e-02,
6.47568594e-03, 1.75046890e-02, 1.37183583e-02, 3.95955193e-32,
0.00000000e+00, 6.36631892e-05, 2.06906508e-02, nan,
nan, nan, 9.47079562e-03, 3.71242630e-03,
0.00000000e+00, 7.14153611e-06, nan, 5.14482654e-03,
2.23621689e-02, 1.79753787e-02, 3.05869803e-03, 4.80512718e-03])
``` | 2021/06/17 | [
"https://Stackoverflow.com/questions/68019978",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8684167/"
] | Make an AJAX call to the specific endpoint and update the DOM accordingly. | Laravel is a PHP framework. PHP handles requests on the server and returns the response to the client, which normally requires a full page refresh. To achieve a live interchange of data you have a few options.
**option one**
Use jQuery AJAX; it works well with Laravel and Bootstrap. Get started [here](https://jquery.com/) on the official website.
**option two (recommended by me)**
Use Laravel Livewire. It is simple and easy: it is plain PHP and uses the same Laravel functions. Get started [here](https://laravel-livewire.com/) on the official website.
**option three** Use Vue.js. You can plug Vue.js into your Laravel application; Vue components can be used inside Laravel Blade templates. This may be hard if you have little experience with JavaScript frameworks. You can read more in the Laravel [docs](https://laravel.com/docs/8.x/mix#vue) | 3,496 |
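Regarding the original scikit-learn question, the `invalid value encountered in true_divide` warning is the classic signature of a 0/0 normalisation: if a boosted tree contributes zero total impurity decrease, normalising its importances divides zero by zero. A minimal reproduction of that arithmetic (not sklearn's actual code):

```python
import numpy as np

importances = np.zeros(3)                 # a tree with no impurity decrease
with np.errstate(invalid="ignore"):       # sklearn surfaces this as a RuntimeWarning
    normalized = importances / importances.sum()

print(normalized)                         # [nan nan nan]
```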
50,735,626 | I am trying to make a simple POST API in Flask (Python) but am getting this error:
```
TypeError: list object is not an iterator
```
but when I review my code it seems fine. What could be the problem?
My function which specifically has the problem:
```
def post(self, name):
    # return {'message': name}
    item = next(filter(lambda x: x['name'] == name, items), None)
    if item:
        return {'message': "An item with name '{}' already exists.".format(name)}, 400
    data = request.get_json()
    item = {'name': name, 'price': data['price']}
    items.append(item)
    return item, 201
```
When I try to post something from **Postman** I get this **error** log:
```
[2018-06-07 10:41:02,849] ERROR in app: Exception on /item/test [POST]
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\flask\app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Python27\lib\site-packages\flask\app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "C:\Python27\lib\site-packages\flask_restful\__init__.py", line 480, in wrapper
resp = resource(*args, **kwargs)
File "C:\Python27\lib\site-packages\flask\views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "C:\Python27\lib\site-packages\flask_restful\__init__.py", line 595, in dispatch_request
resp = meth(*args, **kwargs)
File "G:\flask_workspace\MealBookingApp\MealBookingApp\MealBookingApp\views.py", line 30, in post
item = next(filter(lambda x: x['name'] == name, items), None)
TypeError: list object is not an iterator
127.0.0.1 - - [07/Jun/2018 10:41:02] "POST /item/test HTTP/1.1" 500 -
```
**NB:**
***Line 30*** is the line below:
```
item = next(filter(lambda x: x['name'] == name, items), None)
``` | 2018/06/07 | [
"https://Stackoverflow.com/questions/50735626",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6687699/"
] | Try using `iter()`
**Ex:**
```
item = next(iter(filter(lambda x: x['name'] == name, items)), None)
``` | To elaborate on @Rakesh's answer, lists aren't iterators, and the output of `filter()` in Python 2 is a list. To fix this, you can use the `iter()` function to output an iterator corresponding to the problematic list so that `next()` can be called appropriately. The same code then should solve your problem:
```
item = next(iter(filter(lambda x: x['name'] == name, items)), None)
```
Note that using `iter()` on an iterator still works in Python 3, so this code is forward compatible. | 3,497 |
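In Python 3 the original line works as-is, because `filter()` returns a lazy iterator rather than a list; wrapping it in `iter()` is then a harmless no-op, which is why the fix is forward compatible. A quick check:

```python
items = [{"name": "chair", "price": 15.99}]

# Python 3: filter() is already an iterator, so next() accepts it directly
item = next(filter(lambda x: x["name"] == "chair", items), None)
print(item)        # {'name': 'chair', 'price': 15.99}

# iter() around it still works, and no match falls back to the default
missing = next(iter(filter(lambda x: x["name"] == "table", items)), None)
print(missing)     # None
```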
576,557 | If I learn Python 3.0 and code in it, will my code still be compatible with Python 2.6 (or 2.5 too)?
---
Remarkably similar to:
[If I'm Going to Learn Python, Should I Learn 2.x or Just Jump Into 3.0?](https://stackoverflow.com/questions/410609/if-im-going-to-learn-python-should-i-learn-2-x-or-just-jump-into-3-0/410626) | 2009/02/23 | [
"https://Stackoverflow.com/questions/576557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69746/"
] | No, 3.x is largely incompatible with 2.x (that was actually a major motivation for doing it). In fact, you probably shouldn't be using 3.0 at all-- it's rather unusable at the moment, and is still mostly intended for library developers to port to it so that it can be usable. | NO. Python 3 code is backwards incompatible with 2.6. I recommend to begin with 2.6, because your code will be more **useful**. | 3,498 |