Dataset columns (type and observed range; for string columns the range is the string length):

    Column                              Type           Min      Max
    GUI and Desktop Applications        int64          0        1
    A_Id                                int64          5.3k     72.5M
    Networking and APIs                 int64          0        1
    Python Basics and Environment       int64          0        1
    Other                               int64          0        1
    Database and SQL                    int64          0        1
    Available Count                     int64          1        13
    is_accepted                         bool           2 classes
    Q_Score                             int64          0        1.72k
    CreationDate                        stringlengths  23       23
    Users Score                         int64          -11      327
    AnswerCount                         int64          1        31
    System Administration and DevOps    int64          0        1
    Title                               stringlengths  15       149
    Q_Id                                int64          5.14k    60M
    Score                               float64        -1       1.2
    Tags                                stringlengths  6        90
    Answer                              stringlengths  18       5.54k
    Question                            stringlengths  49       9.42k
    Web Development                     int64          0        1
    Data Science and Machine Learning   int64          1        1
    ViewCount                           int64          7        3.27M

Each record below lists these 22 field values in this order, one value per line.
0
7,746,882
0
1
0
0
1
false
0
2011-10-12T20:57:00.000
0
2
0
in Python using the multiprocessing module, how can I determine which object caused a PicklingError?
7,746,484
0
python,multiprocessing
The error you're seeing could be caused by passing the wrong kind of function to the multiprocessing.Pool methods. The passed function must be directly importable from its parent module. It cannot be a method of a class, for instance.
I have a complex Python program. I'm trying to use the multiprocessing Pool to parallelize it. I get the error message PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed. The traceback shows the statement return send(obj). My hypothesis is that it's the "obj" that is causing the problem and that I need to make it pickle-able. How can I determine which object is the cause of the problem? The program is complex and simply guessing might take a long time.
0
1
82
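A minimal sketch of how to track down the offending object, along the lines of the answer above: probe each object you intend to hand to the Pool with pickle.dumps() before calling map. The work function and the args list are hypothetical, not from the original thread.

    import pickle
    from multiprocessing import Pool

    def work(x):
        # module-level functions pickle fine; lambdas and bound methods do not
        return x * x

    def find_unpicklable(objects):
        for i, obj in enumerate(objects):
            try:
                pickle.dumps(obj)
            except Exception as exc:          # PicklingError, TypeError, ...
                print("object %d (%r) cannot be pickled: %s" % (i, obj, exc))

    if __name__ == "__main__":
        args = [1, {"a": 2}, lambda x: x]     # the lambda is the culprit here
        find_unpicklable(args)
        with Pool(2) as pool:
            print(pool.map(work, [1, 2, 3]))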
0
37,692,584
0
1
0
0
1
false
2
2011-10-13T18:44:00.000
0
2
0
Graph colouring in python using adjacency matrix
7,758,913
0
python,graph-algorithm
Implementing this with an adjacency matrix is somewhat easier than with adjacency lists, as lists take more time and space. igraph has a quick method, neighbors, which can be used. However, with the adjacency matrix alone we can come up with our own graph-colouring version, which may not use the minimum chromatic number. A quick strategy may be as follows: Initialize: put one distinct colour on the nodes of each row (where a 1 appears). Start: with the highest-degree node (HDN) row as a reference, compare each row (meaning each node) with the HDN and see if it is also its neighbour by detecting a 1. If yes, then change that node's colour. Proceed like this to fine-tune. An O(N^2) approach! Hope this helps.
How can I implement graph colouring in Python using an adjacency matrix? Is it possible? I implemented it using lists, but it has some problems. I want to implement it using a matrix. Can anybody give me an answer or suggestions on this?
0
1
1,943
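A rough sketch of a greedy colouring driven directly by a 0/1 adjacency matrix, in the spirit of the answer above (highest-degree node first); like that strategy it does not guarantee the minimum chromatic number. The example graph is made up.

    import numpy as np

    def greedy_colour(adj):
        n = len(adj)
        # visit vertices from highest degree to lowest
        order = sorted(range(n), key=lambda v: -sum(adj[v]))
        colour = [None] * n
        for v in order:
            used = {colour[u] for u in range(n) if adj[v][u] and colour[u] is not None}
            c = 0
            while c in used:
                c += 1
            colour[v] = c
        return colour

    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]])
    print(greedy_colour(adj))    # [1, 2, 0, 1] for this example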
0
7,761,877
0
1
0
0
2
false
5
2011-10-14T00:13:00.000
2
2
0
Is it worth using a multithreaded blas implementation along with multiprocessing in Python?
7,761,859
0.197375
python,multithreading,numpy,multiprocessing,blas
If you are already using multiprocessing, and all cores are at max load, then there will be very little, if any, benefit to adding threads that will be waiting around for a processor. Depending on your algorithm and what you're doing, it may be more beneficial to use one type over the other, but that's very dependent.
Suppose I have a 16 core machine, and an embarrassingly parallel program. I use lots of numpy dot products and addition of numpy arrays, and if I did not use multiprocessing it would be a no-brainer: Make sure numpy is built against a version of blas that uses multithreading. However, I am using multiprocessing, and all cores are working hard at all times. In this case, is there any benefit to be had from using a multithreading blas? Most of the operations are (blas) type 1, some are type 2.
0
1
2,321
0
7,765,829
0
1
0
0
2
false
5
2011-10-14T00:13:00.000
6
2
0
Is it worth using a multithreaded blas implementation along with multiprocessing in Python?
7,761,859
1
python,multithreading,numpy,multiprocessing,blas
You might need to be a little careful about the assumption that your code is actually using multithreaded BLAS calls. Relatively few numpy operators actually use the underlying BLAS, and relatively few BLAS calls are actually multithreaded. numpy.dot uses either BLAS dot, gemv or gemm, depending on the operation, but of those, only gemm is usually multithreaded, because there is rarely any performance benefit in doing so for the O(N) and O(N^2) BLAS calls. If you are limiting yourself to Level 1 and Level 2 BLAS operations, I doubt you are actually using any multithreaded BLAS calls, even if you are using a numpy implementation built with a multithreaded BLAS, like ATLAS or MKL.
Suppose I have a 16 core machine, and an embarrassingly parallel program. I use lots of numpy dot products and addition of numpy arrays, and if I did not use multiprocessing it would be a no-brainer: Make sure numpy is built against a version of blas that uses multithreading. However, I am using multiprocessing, and all cores are working hard at all times. In this case, is there any benefit to be had from using a multithreading blas? Most of the operations are (blas) type 1, some are type 2.
0
1
2,321
0
7,779,373
0
0
0
0
2
false
22
2011-10-15T00:42:00.000
8
5
0
Equivalent of "whos" command in NumPy
7,774,964
1
python,matlab,numpy,octave
Python has a built-in function, dir(), which returns the list of names in the current local scope.
I am new to Numpy and trying to search for a function to list out the variables along with their sizes (both the matrix dimensions as well as memory usage). I am essentially looking for an equivalent of the "whos" command in MATLAB and Octave. Does there exist any such command in NumPy?
0
1
21,664
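A small sketch building on the dir()/namespace idea above: list the numpy arrays in a namespace with their shape, dtype and memory footprint, roughly what MATLAB's whos reports. The helper name and sample variables are made up.

    import numpy as np

    a = np.zeros((3, 4))
    b = np.arange(10, dtype=np.int64)

    def whos(namespace):
        for name, val in sorted(namespace.items()):
            if isinstance(val, np.ndarray):
                print("%-10s %-12s %-8s %8d bytes" % (name, val.shape, val.dtype, val.nbytes))

    whos(globals())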
0
60,568,982
0
0
0
0
2
false
22
2011-10-15T00:42:00.000
0
5
0
Equivalent of "whos" command in NumPy
7,774,964
0
python,matlab,numpy,octave
Try using type(VAR_NAME); this will output the class type of that particular variable, VAR_NAME.
I am new to Numpy and trying to search for a function to list out the variables along with their sizes (both the matrix dimensions as well as memory usage). I am essentially looking for an equivalent of the "whos" command in MATLAB and Octave. Does there exist any such command in NumPy?
0
1
21,664
0
7,786,451
0
0
0
0
1
true
1
2011-10-16T18:07:00.000
1
1
0
does Enthought Python distribution includes pyhdf and HDF 4
7,786,232
1.2
python,image-processing
For the versions of EPD that include pyhdf, you don't need to install HDF 4 separately. However, note that pyhdf is not included in all versions of EPD---in particular, it's not included in the 64-bit Windows EPD or the 64-bit OS X EPD, though it is in the 32-bit versions.
In the Enthought Python Distribution, I saw that it includes pyhdf and numpy. Since it includes pyhdf, does it also include HDF 4? I am using pylab to code at this moment, because I want to use a module of the pyhdf package called pyhdf.SD, and its prerequisites include the HDF 4 library. So do I still need to install HDF 4 if I want to use pyhdf.SD? Thanks
0
1
258
0
7,787,997
0
0
0
0
1
false
1
2011-10-16T22:18:00.000
1
3
0
How do I do matrix computations in python without rounding?
7,787,732
0.066568
python,numpy
Note that if you're serious about your comment that you require your solution vector to be integer, then you're looking for something called the "integer least squares problem", which is believed to be NP-hard. There are some heuristic solvers, but it all gets very complicated.
I have some integer matrices of moderate size (a few hundred rows). I need to solve equations of the form Ax = b where b is a standard basis vector and A is one of my matrices. I have been using numpy.linalg.lstsq for this purpose, but the rounding errors end up being too significant. How can I carry out an exact symbolic computation? (PS I don't really need the code to be efficient; I'm more concerned about ease of coding.)
0
1
403
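For the square, invertible case the question describes (Ax = b with b a standard basis vector), exact rational arithmetic avoids rounding entirely. A minimal sketch with sympy; the extra dependency is an assumption, not something the answer above prescribes.

    from sympy import Matrix

    A = Matrix([[2, 1], [1, 3]])
    b = Matrix([1, 0])           # a standard basis vector
    x = A.LUsolve(b)             # exact, no floating-point rounding
    print(x)                     # Matrix([[3/5], [-1/5]])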
1
7,847,894
0
1
0
0
1
false
2
2011-10-21T07:41:00.000
0
3
0
python: container to memorize a large number of images
7,846,413
0
python,database
You could pickle a dict that associates filenames to byte strings of RGBA data. Assuming you have loaded the image with PIL, make sure they have all the same size and color format. Build a dict with images[filename] = im.tostring() and dump() it with pickle. Use Image.fromstring with the right size and mode parameters to get it back.
I have to store/retrieve a large number of images to use in my program. Each image is small: an icon 50x50, and each one has associated a string which is the path the icon is related to. Since they are so small I was thinking if there is some library which allows to store all of them in a single file. I would need to store both the image and the path string. I don't know if pickle is a possible choice - I also heard about much more complicated libraries such as HDF5... thanks for your help! alessandro
0
1
165
0
7,855,863
0
0
0
0
1
false
5
2011-10-21T22:09:00.000
5
4
0
Python's random module made inaccessible by Numpy's random module
7,855,845
0.244919
python,numpy,random-sample
This shouldn't happen. Check your code for bad imports like from numpy import *.
When I call random.sample(arr,length) an error returns random_sample() takes at most 1 positional argument (2 given). After some Googling I found out I'm calling Numpy's random sample function when I want to call the sample function of the random module. I've tried importing numpy under a different name, which doesn't fix the problem. I need Numpy for the rest of the program, though. Any thoughts? Thanks
0
1
4,375
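A tiny sketch of the fix suggested in the answer above: import numpy under its own name rather than with a star import, so the stdlib random module stays accessible. The array contents are arbitrary.

    import random                 # stdlib random
    import numpy as np            # never `from numpy import *`

    arr = list(range(100))
    print(random.sample(arr, 5))          # stdlib: 5-element sample without replacement
    print(np.random.random_sample(3))     # numpy: 3 floats in [0, 1)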
0
11,026,071
0
0
0
0
2
false
2
2011-10-22T00:57:00.000
0
2
0
Anyone familiar with data format of Comfirmit?
7,856,725
0
python,spss
I was recently given a data set from Confirmit. There are almost 4000 columns in the Excel file. I want to enter it into a MySQL db. There is no way they are just producing that output from one table. Do you know how the table schema works for Confirmit?
I recently asked about accessing data from SPSS and got some absolutely wonderful help here. I now have an almost identical need to read data from a Confirmit data file. Not finding a ton of confirmit data file format on the web. It appears that Confirmit can export to SPSS *.sav files. This might be one avenue for me. Here's the exact needs: I need to be able to extract two different but related types of info from a market research study done using ConfirmIt: I need to be able to discover the data "schema", as in what questions are being asked (the text of the questions) and what the type of the answer is (multiple choice, yes/no, text) and what text labels are associated with each answer. I need to be able to read respondents answers and populate my data model. So for each of the questions discovered as part of step 1 above, I need to build a table of respondent answers. With SPSS this was easy thanks to a data access module available freely available by IBM and a nice Python wrapper by Albert-Jan Roskam. Googling I'm not finding much info. Any insight into this is helpful. Something like a Python or Java class to read the confirmit data would be perfect! Assuming my best option ends up being to export to SPSS *.sav file, does anyone know if it will meet both of my use cases above (contain the questions, answers schema and also contain each participant's results)?
0
1
768
0
10,631,838
0
0
0
0
2
true
2
2011-10-22T00:57:00.000
0
2
0
Anyone familiar with data format of Comfirmit?
7,856,725
1.2
python,spss
You can get the data schema from the Excel definition export from Confirmit. You can also export a txt file from Confirmit with the same template.
I recently asked about accessing data from SPSS and got some absolutely wonderful help here. I now have an almost identical need to read data from a Confirmit data file. Not finding a ton of confirmit data file format on the web. It appears that Confirmit can export to SPSS *.sav files. This might be one avenue for me. Here's the exact needs: I need to be able to extract two different but related types of info from a market research study done using ConfirmIt: I need to be able to discover the data "schema", as in what questions are being asked (the text of the questions) and what the type of the answer is (multiple choice, yes/no, text) and what text labels are associated with each answer. I need to be able to read respondents answers and populate my data model. So for each of the questions discovered as part of step 1 above, I need to build a table of respondent answers. With SPSS this was easy thanks to a data access module available freely available by IBM and a nice Python wrapper by Albert-Jan Roskam. Googling I'm not finding much info. Any insight into this is helpful. Something like a Python or Java class to read the confirmit data would be perfect! Assuming my best option ends up being to export to SPSS *.sav file, does anyone know if it will meet both of my use cases above (contain the questions, answers schema and also contain each participant's results)?
0
1
768
0
7,878,155
0
0
0
0
1
false
38
2011-10-24T12:34:00.000
9
4
0
How can I create a standard colorbar for a series of plots in python
7,875,688
1
python,matplotlib,colorbar
Easiest solution is to call clim(lower_limit, upper_limit) with the same arguments for each plot.
I am using matplotlib to plot some data in Python and the plots require a standard colour bar. The data consists of a series of NxM matrices containing frequency information, so that a simple imshow() plot gives a 2D histogram with colour describing frequency. Each matrix contains data in different, but overlapping, ranges. imshow normalizes the data in each matrix to the range 0-1, which means that, for example, the plot of matrix A will appear identical to the plot of the matrix 2*A (though the colour bar will show double the values). What I would like is for the colour red, for example, to correspond to the same frequency in all of the plots. In other words, a single colour bar would suffice for all the plots. Any suggestions would be greatly appreciated.
0
1
46,914
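A short sketch of the fixed-colour-scale idea from the answer above, using vmin/vmax (equivalent to calling clim with the same limits on every plot) so one colorbar is valid for the whole series. The data and limits are invented.

    import numpy as np
    import matplotlib.pyplot as plt

    data = [np.random.rand(10, 10) * s for s in (1.0, 2.0, 0.5)]
    vmin, vmax = 0.0, 2.0                      # common limits for every plot

    fig, axes = plt.subplots(1, 3)
    for ax, d in zip(axes, data):
        im = ax.imshow(d, vmin=vmin, vmax=vmax)
    fig.colorbar(im, ax=list(axes))            # one shared colorbar
    plt.show()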
0
7,891,137
0
0
0
1
1
true
12
2011-10-25T01:06:00.000
23
1
0
exporting from/importing to numpy, scipy in SQLite and HDF5 formats
7,883,646
1.2
python,sqlite,numpy,scipy,hdf5
Most of it depends on your use case. I have a lot more experience dealing with the various HDF5-based methods than traditional relational databases, so I can't comment too much on SQLite libraries for python... At least as far as h5py vs pyTables, they both offer very seamless access via numpy arrays, but they're oriented towards very different use cases. If you have n-dimensional data that you want to quickly access an arbitrary index-based slice of, then it's much more simple to use h5py. If you have data that's more table-like, and you want to query it, then pyTables is a much better option. h5py is a relatively "vanilla" wrapper around the HDF5 libraries compared to pyTables. This is a very good thing if you're going to be regularly accessing your HDF file from another language (pyTables adds some extra metadata). h5py can do a lot, but for some use cases (e.g. what pyTables does) you're going to need to spend more time tweaking things. pyTables has some really nice features. However, if your data doesn't look much like a table, then it's probably not the best option. To give a more concrete example, I work a lot with fairly large (tens of GB) 3 and 4 dimensional arrays of data. They're homogenous arrays of floats, ints, uint8s, etc. I usually want to access a small subset of the entire dataset. h5py makes this very simple, and does a fairly good job of auto-guessing a reasonable chunk size. Grabbing an arbitrary chunk or slice from disk is much, much faster than for a simple memmapped file. (Emphasis on arbitrary... Obviously, if you want to grab an entire "X" slice, then a C-ordered memmapped array is impossible to beat, as all the data in an "X" slice are adjacent on disk.) As a counter example, my wife collects data from a wide array of sensors that sample at minute to second intervals over several years. She needs to store and run arbitrary querys (and relatively simple calculations) on her data. pyTables makes this use case very easy and fast, and still has some advantages over traditional relational databases. (Particularly in terms of disk usage and speed at which a large (index-based) chunk of data can be read into memory)
There seems to be many choices for Python to interface with SQLite (sqlite3, atpy) and HDF5 (h5py, pyTables) -- I wonder if anyone has experience using these together with numpy arrays or data tables (structured/record arrays), and which of these most seamlessly integrate with "scientific" modules (numpy, scipy) for each data format (SQLite and HDF5).
0
1
3,647
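A small h5py round-trip along the lines of the accepted answer above: write an n-dimensional array, then read back an arbitrary slice without loading the whole dataset. The file name and shapes are placeholders.

    import numpy as np
    import h5py

    data = np.random.rand(100, 200, 50)
    with h5py.File("example.h5", "w") as f:
        f.create_dataset("vol", data=data, chunks=True)

    with h5py.File("example.h5", "r") as f:
        block = f["vol"][10:20, :, 5]      # only this slice is read from disk
    print(block.shape)                      # (10, 200)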
0
7,892,106
0
0
0
0
1
true
1
2011-10-25T15:19:00.000
3
3
0
Run python script (with numpy dependency) from java
7,891,586
1.2
java,python,numpy,jython
If you're using Numpy you probably have to just use C Python, as it's a compiled extension. I'd recommend saving the image to disk, perhaps as a temporary file, and then calling the Python as a subprocess. If you're dealing with binary data you could even try memory mapping the data in Java and passing in in the path to the subprocess. Alternatively, depending on your circumstances, you could set up a simple data processing server in Python which accepts requests and returns processed data.
In a Java application I need to use a specific image processing algorithm that is currently implemented in Python. What would be the best approach, knowing that this script uses the Numpy library? I already tried to compile the script to Java using the jythonc compiler, but it seems that it doesn't support dependencies on native libraries like Numpy. I also tried to use Jepp, but I get an ImportError when importing Numpy, too. Any suggestion?
1
1
2,255
0
12,100,118
0
0
0
1
1
false
6
2011-10-26T11:15:00.000
1
4
0
NumPy arrays with SQLite
7,901,853
0.049958
python,arrays,sqlite,numpy,scipy
This looks a bit older but is there any reason you cannot just do a fetchall() instead of iterating and then just initializing numpy on declaration?
The most common SQLite interface I've seen in Python is sqlite3, but is there anything that works well with NumPy arrays or recarrays? By that I mean one that recognizes data types and does not require inserting row by row, and extracts into a NumPy (rec)array...? Kind of like R's SQL functions in the RDB or sqldf libraries, if anyone is familiar with those (they import/export/append whole tables or subsets of tables to or from R data tables).
0
1
7,905
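A sketch in the spirit of the fetchall() suggestion above: pull a whole SQLite query result into a numpy structured array in one go. The table and dtype are invented for illustration.

    import sqlite3
    import numpy as np

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (x REAL, y REAL)")
    con.executemany("INSERT INTO t VALUES (?, ?)", [(1.0, 2.0), (3.0, 4.0)])

    rows = con.execute("SELECT x, y FROM t").fetchall()
    arr = np.array(rows, dtype=[("x", "f8"), ("y", "f8")])   # structured (record-like) array
    print(arr["x"])          # array([1., 3.])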
0
9,572,786
0
0
0
0
1
false
1
2011-10-26T14:21:00.000
1
1
0
Detect Hand using OpenCV
7,904,055
0.197375
python,opencv,computer-vision,motion-detection
You can try a very basic but effective and fast solution. On the upper half of the image: Canny edge detection; morphologyEx with an adequate structuring element (a simple combination of erode/dilate may also be enough); convert to BW using an adaptive threshold; XOR the result with a mask representing the expected covered area. The number of ones returned by the XOR in each area of the mask is the index that you should use. This is extremely fast; you can make more than one iteration within the 0.5 sec and use the average. You may also detect faces and use them to adapt the position of your mask, but this will be more expensive :) Hope that helps.
I want to use OpenCV to detect when a person raises or lowers a hand or both hands. I have looked through the tutorials provided by Python OpenCV and none of them seem to do the job. There is a camera that sits in front of the 2 persons, about 50 cm away from them (so you see them from the waist up). The person is able to raise or lower each arm, or both of the arms, and I have to detect when they do that. (The camera is mounted on the bars of the rollercoaster; this implies that the background is always changing.) How can I detect this in the fastest time possible? It does not have to be real-time detection, but it should not take more than 0.5 seconds. The whole image is 640x480. Now, since the hands can appear only in the top of the image, this would reduce the search area by half => 640x240. This reduces to the problem of searching for a certain object (the hands) in a constantly changing background. Thank you, Stefan F.
0
1
3,058
0
7,917,373
0
0
0
0
3
false
12
2011-10-26T20:52:00.000
1
6
0
Generating random numbers under very specific constraints
7,908,800
0.033321
python,algorithm,random
Blocked Gibbs sampling is pretty simple and converges to the right distribution (this is along the lines of what Alexandre is proposing). For all i, initialize ai = A / n and bi = B / n. Select i ≠ j uniformly at random. With probability 1/2, update ai and aj with uniform random values satisfying the constraints. The rest of the time, do the same for bi and bj. Repeat Step 2 as many times as seems to be necessary for your application. I have no idea what the convergence rate is.
I am faced with the following programming problem. I need to generate n (a, b) tuples for which the sum of all a's is a given A and sum of all b's is a given B and for each tuple the ratio of a / b is in the range (c_min, c_max). A / B is within the same range, too. I am also trying to make sure there is no bias in the result other than what is introduced by the constraints and the a / b values are more-or-less uniformly distributed in the given range. Some clarifications and meta-constraints: A, B, c_min, and c_max are given. The ratio A / B is in the (c_min, c_max) range. This has to be so if the problem is to have a solution given the other constraints. a and b are >0 and non-integer. I am trying to implement this in Python but ideas in any language (English included) are much appreciated.
0
1
2,895
0
7,908,987
0
0
0
0
3
true
12
2011-10-26T20:52:00.000
2
6
0
Generating random numbers under very specific constraints
7,908,800
1.2
python,algorithm,random
Start by generating as many identical tuples, n, as you need: (A/n, B/n). Now pick two tuples at random. Make a random change to the a value of one, and a compensating change to the a value of the other, keeping everything within the given constraints. Put the two tuples back. Now pick another random pair. This time, twiddle with the b values. Lather, rinse, repeat.
I am faced with the following programming problem. I need to generate n (a, b) tuples for which the sum of all a's is a given A and sum of all b's is a given B and for each tuple the ratio of a / b is in the range (c_min, c_max). A / B is within the same range, too. I am also trying to make sure there is no bias in the result other than what is introduced by the constraints and the a / b values are more-or-less uniformly distributed in the given range. Some clarifications and meta-constraints: A, B, c_min, and c_max are given. The ratio A / B is in the (c_min, c_max) range. This has to be so if the problem is to have a solution given the other constraints. a and b are >0 and non-integer. I am trying to implement this in Python but ideas in any language (English included) are much appreciated.
0
1
2,895
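A rough sketch of the accepted shuffle-and-compensate idea above. The step size, iteration count and rejection test are assumptions, and no claim is made about how uniform the resulting distribution is; it only keeps the sums fixed (up to floating-point error) and the ratios inside (c_min, c_max).

    import random

    def perturb(pairs, c_min, c_max, steps=10000, scale=0.1):
        n = len(pairs)
        for _ in range(steps):
            i, j = random.sample(range(n), 2)
            which = random.choice([0, 1])           # 0 -> twiddle a values, 1 -> b values
            delta = random.uniform(-scale, scale)
            new_i = list(pairs[i]); new_j = list(pairs[j])
            new_i[which] += delta
            new_j[which] -= delta                   # compensate, so the sums are unchanged
            ok = all(v[0] > 0 and v[1] > 0 and c_min < v[0] / v[1] < c_max
                     for v in (new_i, new_j))
            if ok:
                pairs[i], pairs[j] = tuple(new_i), tuple(new_j)
        return pairs

    A, B, n = 10.0, 4.0, 5
    pairs = perturb([(A / n, B / n)] * n, c_min=1.5, c_max=4.0)
    print(pairs, sum(a for a, _ in pairs), sum(b for _, b in pairs))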
0
7,908,989
0
0
0
0
3
false
12
2011-10-26T20:52:00.000
2
6
0
Generating random numbers under very specific constraints
7,908,800
0.066568
python,algorithm,random
I think the simplest thing is to: Use your favorite method to throw n-1 values such that sum_{i=0..n-1} a_i < A, and set a_n to get the right total. There are several SO questions about doing that, though I've never seen an answer I'm really happy with yet. Maybe I'll write a paper or something. Get the n-1 b's by throwing the c_i uniformly on the allowed range, and set the final b to get the right total and check the final c (I think it must be OK, but I haven't proven it yet). Note that since we have 2 hard constraints we should expect to throw 2n-2 random numbers, and this method does exactly that (on the assumption that you can do step 1 with n-1 throws).
I am faced with the following programming problem. I need to generate n (a, b) tuples for which the sum of all a's is a given A and sum of all b's is a given B and for each tuple the ratio of a / b is in the range (c_min, c_max). A / B is within the same range, too. I am also trying to make sure there is no bias in the result other than what is introduced by the constraints and the a / b values are more-or-less uniformly distributed in the given range. Some clarifications and meta-constraints: A, B, c_min, and c_max are given. The ratio A / B is in the (c_min, c_max) range. This has to be so if the problem is to have a solution given the other constraints. a and b are >0 and non-integer. I am trying to implement this in Python but ideas in any language (English included) are much appreciated.
0
1
2,895
0
7,909,874
0
1
0
0
3
true
3
2011-10-26T22:36:00.000
4
3
0
Python - Run numpy without the python interpreter
7,909,761
1.2
python,numpy
Find where numpy is installed on your system. For me, it's here: /usr/lib/pymodules/python2.7. Import it explicitly before importing numpy:
    import sys
    sys.path.append('/usr/lib/pymodules/python2.7')
If you need help finding the correct path, check the contents of sys.path while using your python interpreter:
    import sys
    print sys.path
I have an .x3d code which references a python script. I am trying to implement certain functions which make use of the numpy module. However, I am only able to import the builtin modules from Python. I am looking for a way to import the numpy module into the script without having to call the interpreter (i.e. "test.py", instead of "python test.py"). Currently I get "ImportError: No module named numpy". My question is: Is there a way to import the numpy module without having to call from the interpreter? Is there a way to include numpy as one of the built-in modules of Python?
0
1
1,209
0
7,909,895
0
1
0
0
3
false
3
2011-10-26T22:36:00.000
3
3
0
Python - Run numpy without the python interpreter
7,909,761
0.197375
python,numpy
I'm going to guess that your #! line is pointing to a different Python interpreter than the one you use normally. Make sure they point to the same one.
I have an .x3d code which references a python script. I am trying to implement certain functions which make use of the numpy module. However, I am only able to import the builtin modules from Python. I am looking for a way to import the numpy module into the script without having to call the interpreter (i.e. "test.py", instead of "python test.py"). Currently I get "ImportError: No module named numpy". My question is: Is there a way to import the numpy module without having to call from the interpreter? Is there a way to include numpy as one of the built-in modules of Python?
0
1
1,209
0
7,909,774
0
1
0
0
3
false
3
2011-10-26T22:36:00.000
1
3
0
Python - Run numpy without the python interpreter
7,909,761
0.066568
python,numpy
Add the numpy library's location to sys.path before you call import.
I have an .x3d code which references a python script. I am trying to implement certain functions which make use of the numpy module. However, I am only able to import the builtin modules from Python. I am looking for a way to import the numpy module into the script without having to call the interpreter (i.e. "test.py", instead of "python test.py"). Currently I get "ImportError: No module named numpy". My question is: Is there a way to import the numpy module without having to call from the interpreter? Is there a way to include numpy as one of the built-in modules of Python?
0
1
1,209
0
7,918,549
0
1
0
0
1
true
26
2011-10-27T14:07:00.000
17
2
0
Add footnote under the x-axis using matplotlib
7,917,107
1.2
python,matplotlib
One way would be to just use plt.text(x, y, 'text').
I couldn't find the right function to add a footnote in my plot. The footnote I want to have is something like an explanation of one item in the legend, but it is too long to put in the legend box. So, I'd like to add a ref number, e.g. [1], to the legend item, and add the footnote in the bottom of the plot, under the x-axis. Which function should I use? Thanks!
0
1
42,816
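A small sketch of the plt.text idea above, using figure coordinates so the footnote sits below the x-axis regardless of the data range. The footnote text, positions and margin are placeholders.

    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [1, 4, 9], label="data [1]")
    ax.legend()
    fig.subplots_adjust(bottom=0.2)      # leave room under the axis
    fig.text(0.1, 0.05, "[1] footnote explaining the legend entry", fontsize=8)
    plt.show()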
0
33,510,117
0
0
0
0
1
false
102
2011-10-27T21:14:00.000
0
4
0
How to transform numpy.matrix or array to scipy sparse matrix
7,922,487
0
python,numpy,scipy,sparse-matrix
As for the inverse, the function is inv(A), but I won't recommend using it, since for huge matrices it is very computationally costly and unstable. Instead, you should use an approximation to the inverse, or if you want to solve Ax = b you don't really need A^-1.
For SciPy sparse matrix, one can use todense() or toarray() to transform to NumPy matrix or array. What are the functions to do the inverse? I searched, but got no idea what keywords should be the right hit.
0
1
141,998
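A short sketch for the question above: converting a dense numpy array to a scipy sparse matrix, and solving Ax = b with spsolve instead of forming an explicit inverse, as the answer recommends. The matrix values are arbitrary.

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    A_dense = np.array([[4.0, 0.0], [1.0, 3.0]])
    A = sparse.csr_matrix(A_dense)        # dense -> sparse
    b = np.array([1.0, 2.0])
    x = spsolve(A, b)                     # solves Ax = b without inv(A)
    print(x, A.toarray())                 # sparse -> dense again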
1
7,940,923
0
0
0
0
1
false
2
2011-10-29T18:33:00.000
0
1
0
Error when importing OpenCV python module (when built with Qt and QtOpenGL)
7,940,848
0
python,qt,opencv,import
Probably need the qt dll's in the same place as the opencv dlls - and they have to be the version built with the same compiler as opencv (and possibly python)
I recently downloaded OpenCV 2.3.1, compiled with the CMake flags withQt and withQtOpenGL turned on. My Qt version is 4.7.4 and is configured with OpenGL enabled. Supposedly I only need to copy cv2.pyd to Python's site-package path: C:\Python27\Lib\site-packages And in the mean time make sure the OpenCV dlls are somewhere in my PATH. However, when I try to call import cv2 in ipython, it returned an error: ImportError: DLL load failed: The specified procedure could not be found. I also tried OpenCV 2.3, resulting the same error. If OpenCV is compiled without Qt, the import works just fine. Has anyone run into similar problem before? Or is there anyway to get more information, such as which procedure is missing from what DLL?
0
1
488
0
36,474,703
0
1
1
0
1
false
14
2011-11-02T23:14:00.000
2
5
0
Efficient way to generate and use millions of random numbers in Python
7,988,494
0.07983
python,random
Code to generate 10M random numbers:
    import random
    l = 10000000
    listrandom = []
    for i in range(l):
        value = random.randint(0, l)
        listrandom.append(value)
    print listrandom
Time taken, including the I/O time spent printing to the screen:
    real 0m27.116s
    user 0m24.391s
    sys  0m0.819s
I'm in the process of working on programming project that involves some pretty extensive Monte Carlo simulation in Python, and as such the generation of a tremendous number of random numbers. Very nearly all of them, if not all of them, will be able to be generated by Python's built in random module. I'm something of a coding newbie, and unfamiliar with efficient and inefficient ways to do things. Is it faster to generate say, all the random numbers as a list, and then iterate through that list, or generate a new random number each time a function is called, which will be in a very large loop? Or some other, undoubtedly more clever method?
0
1
12,900
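As an alternative to the pure-Python loop in the answer above (not part of that answer), bulk generation with numpy is usually much faster for Monte Carlo work; this sketch assumes NumPy 1.17 or newer for default_rng.

    import numpy as np

    n = 10000000
    rng = np.random.default_rng()
    samples = rng.random(n)        # n floats in [0, 1) in one vectorised call
    print(samples[:5], samples.mean())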
0
8,002,777
0
0
0
0
1
true
14
2011-11-03T22:13:00.000
17
1
0
Difference between HDF5 file and PyTables file
8,002,569
1.2
python,numpy,hdf5,pytables
PyTables files are HDF5 files. However, as I understand it, PyTables adds some extra metadata to the attributes of each entry in the HDF file. If you're looking for a more "vanilla" hdf5 solution for python/numpy, have a look a h5py. It's less database-like (i.e. less "table-like") than PyTables, and doesn't have as many nifty querying features, but it's much more straight-forward, in my opinion. If you're going to be accessing an hdf5 file from multiple different languages, h5py is probably a better route to take.
Is there a difference between HDF5 files and files created by PyTables? PyTables has two functions .isHDFfile() and .isPyTablesFile() suggesting that there is a difference between the two formats. I've done some looking around on Google and have gathered that PyTables is built on top of HDF, but I wasn't able to find much beyond that. I am specifically interested in interoperability, speed and overhead. Thanks.
0
1
2,253
0
8,019,217
0
0
1
0
2
false
1
2011-11-05T08:59:00.000
3
3
0
Coalition Search Algorithm
8,019,172
0.197375
java,c++,python,c,algorithm
In other words, you have an array X[1..n], and want to have all the subsets of it for which sum(subset) >= 1/2 * sum(X), right? That probably means the whole set qualifies. After that, you can drop any element k having X[k] < 1/2 * sum(X), and every such a coalition will be fine as an answer, too. After that, you can proceed dropping elements one by one, stopping when you've reached half of the sum. This is obviously not the most effective solution: you don't want to drop k1=1,k2=2 if you've already tried k1=2,k2=1—but I believe you can handle this.
I am looking for an algorithm that is implemented in C, C++, Python or Java that calculates the set of winning coalitions for n agents where each agent has a different amount of votes. I would appreciate any hints. Thanks!
0
1
373
0
8,019,235
0
0
1
0
2
false
1
2011-11-05T08:59:00.000
0
3
0
Coalition Search Algorithm
8,019,172
0
java,c++,python,c,algorithm
Arrange the number of votes for each of the agents into an array, and compute the partial sums from the right, so that you can find out SUM_i = k to n Votes[i] just by looking up the partial sum. Then do a backtrack search over all possible subsets of {1, 2, ...n}. At any point in the backtrack you have accepted some subset of agents 0..i - 1, and you know from the partial sum the maximum possible number of votes available from other agents. So you can look to see if the current subset could be extended with agents number >= i to form a winning coalition, and discard it if not. This gives you a backtrack search where you consider a subset only if it is already a winning coalition, or you will extend it to become a winning coalition. So I think the cost of the backtrack search is the sum of the sizes of the winning coalitions you discover, which seems close to optimal. I would be tempted to rearrange the agents before running this so that you deal with the agents with most votes first, but at the moment I don't see an argument that says you gain much from that. Actually - taking a tip from Alf's answer - life is a lot easier if you start from the full set of agents, and then use backtrack search to decide which agents to discard. Then you don't need an array of partial sums, and you only generate subsets you want anyway. And yes, there is no need to order agents in advance.
I am looking for an algorithm that is implemented in C, C++, Python or Java that calculates the set of winning coalitions for n agents where each agent has a different amount of votes. I would appreciate any hints. Thanks!
0
1
373
0
15,979,342
0
0
0
0
1
false
3
2011-11-08T03:24:00.000
4
2
0
How to force the functions of the optimize module of scipy to take a function and its gradient simultaneously
8,045,576
0.379949
python,optimization,scipy
The scipy.optimize.minimize method has a parameter called "jac". If set to True, minimize will expect the callable f(x) to return both the function value and its derivatives.
I have a fairly complex function f(x) that I want to optimize and I am using the fmin_bfgs function from the scipy.optimize module from Scipy. It forces me to give the function to minimize and the function of the gradient f'(x) separately, which is a pity because some of the computations for the gradient can be done when evaluating the function f(x). Is there a way of combining both functions? I was considering saving the intermediate values required for both functions, but I don't know if the fmin_bfgs function guarantees that f(x) is evaluated before than f'(x). Thank you
0
1
557
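A minimal sketch of the jac=True pattern described in the answer above: the objective returns (value, gradient), so work shared between f and f' is done once. The objective itself is a toy example.

    import numpy as np
    from scipy.optimize import minimize

    def f_and_grad(x):
        # f(x) = sum(x**2); gradient = 2*x; any shared intermediates live here
        return np.sum(x ** 2), 2.0 * x

    res = minimize(f_and_grad, x0=np.array([3.0, -4.0]), jac=True, method="BFGS")
    print(res.x)          # close to [0, 0]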
0
8,052,660
0
0
0
0
1
false
5
2011-11-08T15:10:00.000
0
2
0
exponential moving sum in numpy / scipy?
8,052,582
0
python,numpy,scipy,vectorization
You can try to improve Python loops by following good "practices" (like avoiding dots). Maybe you can code your function in C (into a "numpy library") and call it from Python.
I am looking for a function to calculate exponential moving sum in numpy or scipy. I want to avoid using python loops because they are really slow. to be specific, I have two series A[] and T[]. T[i] is the timestamp of value A[i]. I define a half-decay period tau. For a given time t, the exponential moving sum is the sum of all the values A[i] that happens before t, with weight exp(-(t-T[i])/tau) for each A[i]. Thanks a lot!
0
1
2,259
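For the question above, the moving sum obeys the recurrence S_k = A_k + exp(-(T_k - T_{k-1})/tau) * S_{k-1}, which gives a single-pass computation. A plain-loop sketch (variable names follow the question); if the loop is still too slow it is a natural candidate for C, Cython or numba, as the answer suggests.

    import numpy as np

    def exp_moving_sum(A, T, tau):
        S = np.empty_like(A, dtype=float)
        S[0] = A[0]
        for k in range(1, len(A)):
            S[k] = A[k] + np.exp(-(T[k] - T[k - 1]) / tau) * S[k - 1]
        return S

    A = np.array([1.0, 2.0, 0.5])
    T = np.array([0.0, 1.0, 3.0])
    print(exp_moving_sum(A, T, tau=2.0))   # moving sum evaluated at each timestamp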
0
11,141,963
0
0
0
0
1
true
2
2011-11-10T16:54:00.000
0
1
0
Using OpenCV and Python to stitch puzzle images together
8,083,263
1.2
python,image-processing,opencv,image-stitching
If this is a small fun project that you are trying to do, you can compare image histograms or use SIFT/SURF. I don't think there is an implementation of SIFT or SURF in the Python API. If you can find a compatible equivalent, you can do it. Comparing images is very much dependent on the data set that you have. Some techniques work better than others.
I am trying to use OpenCV and Python to stitch together several hundred puzzle piece images into one large, complete image. All of the images are digitized and are in a PNG format. The pieces were originally from a scan and extracted into individual pieces, so they have transparent backgrounds and are each a single piece. What is the process of comparing them and finding their matches using OpenCV? The plan is that the images and puzzle pieces will always be different and this python program will take a scan of all the pieces laid out, crop out the pieces (which it does now), and build the puzzle back.
0
1
1,769
0
8,091,830
0
0
0
0
3
false
6
2011-11-11T08:08:00.000
0
7
0
better algorithm for checking 5 in a row/col in a matrix
8,091,248
0
python,algorithm,matrix
I don't think you can avoid iteration, but you can at least do an XOR of all elements and if the result of that is 0 => they are all equal, then you don't need to do any comparisons.
Is there a good algorithm for checking whether there are 5 of the same elements in a row or a column or diagonally, given a square matrix, say 6x6? There is of course the naive algorithm of iterating through every spot and then, for each point in the matrix, iterating through that row, column and then the diagonal. I am wondering if there is a better way of doing it.
0
1
3,808
0
8,098,697
0
0
0
0
3
false
6
2011-11-11T08:08:00.000
0
7
0
better algorithm for checking 5 in a row/col in a matrix
8,091,248
0
python,algorithm,matrix
You can try improve your method with some heuristics: use the knowledge of the matrix size to exclude element sequences that do not fit and suspend unnecessary calculation. In case the given vector size is 6, you want to find 5 equal elements, and the first 3 elements are different, further calculation do not have any sense. This approach can give you a significant advantage, if 5 equal elements in a row happen rarely enough.
Is there a good algorithm for checking whether there are 5 of the same elements in a row or a column or diagonally, given a square matrix, say 6x6? There is of course the naive algorithm of iterating through every spot and then, for each point in the matrix, iterating through that row, column and then the diagonal. I am wondering if there is a better way of doing it.
0
1
3,808
0
8,091,403
0
0
0
0
3
false
6
2011-11-11T08:08:00.000
0
7
0
better algorithm for checking 5 in a row/col in a matrix
8,091,248
0
python,algorithm,matrix
Your best approach may depend on whether you control the placement of elements. For example, if you were building a game and just placed the most recent element on the grid, you could capture into four strings the vertical, horizontal, and diagonal strips that intersected that point, and use the same algorithm on each strip, tallying each element and evaluating the totals. The algorithm may be slightly different depending on whether you're counting five contiguous elements out of the six, or allow gaps as long as the total is five.
Is there a good algorithm for checking whether there are 5 of the same elements in a row or a column or diagonally, given a square matrix, say 6x6? There is of course the naive algorithm of iterating through every spot and then, for each point in the matrix, iterating through that row, column and then the diagonal. I am wondering if there is a better way of doing it.
0
1
3,808
0
8,288,377
0
0
0
0
1
true
7
2011-11-11T16:02:00.000
2
1
0
HDF5 for Python: high level vs low level interfaces. h5py
8,096,668
1.2
python,performance,hdf5,h5py
High-level interfaces generally come with a performance loss of some sort. After that, whether it is noticeable (worth investigating) will depend on what exactly you are doing with your code. Just start with the high-level interface. If the code is overall too slow, start profiling and move the bottlenecks down to the lower-level interface and see if it helps.
I've been working with HDF5 files with C and Matlab, both using the same way for reading from and writing to datasets: open file with h5f open dataset with h5d select space with h5s and so on... But now I'm working with Python, and with its h5py library I see that it has two ways to manage HDF5: high-level and low-level interfaces. And with the former it takes less lines of code to get the information from a single variable of the file. Is there any noticeable loss of performance when using the high-level interface? For example when dealing with a file with many variables inside, and we must read just one of them.
0
1
932
0
8,121,141
0
0
1
0
1
true
5
2011-11-14T10:01:00.000
1
2
0
Get the complete structure of a program?
8,119,900
1.2
python,coding-style
UML generation is provided by pyreverse - it's part of the pylint package. It generates UML in dot format - or png, etc. It creates a UML diagram, so you can easily see the basic structure of your code. I'm not sure if it satisfies all your needs, but it might be helpful.
I have a quite simple question, but can't find any suitable automated solution for now. I have developed an algorithm that performs a lot of stuff (image processing, in fact) in Python. What I want to do now is to optimize it. And for that, I would love to create a graph of my algorithm: kind of a UML chart or sequence chart, in fact, in which functions would be displayed with inputs and outputs. My algorithm does not imply complex stuff, and is mainly based on a = f(b) operations (no databases, hardware stuff, servers, ...). Would you have any hint? Thanks in advance!
0
1
435
0
8,147,354
0
0
0
0
1
false
4
2011-11-15T21:34:00.000
0
3
0
How do get matplotlib pyplot to generate a chart for viewing / output to .png at a specific resolution?
8,143,439
0
python,matplotlib
For resolution, you can use the dpi (dots per inch) argument when creating a figure, or in the savefig() function. For high quality prints of graphics dpi=600 or more is recommended.
I'm fed up with manually creating graphs in Excel and consequently, I'm trying to automate the process using Python to massage the .csv data into a workable form and matplotlib to plot the result. Using matplotlib and generating the plots is no problem, but what I can't work out is how to set the aspect ratio / resolution of the output. Specifically, I'm trying to generate scatter plots and stacked area graphs. Everything I've tried seems to result in one or more of the following: Cramped graph areas (small plot area covered with the legend, axes etc.). The wrong aspect ratio. Large spaces on the sides of the chart area (I want a very wide / not very tall image). If anyone has some working examples showing how to achieve this result I'd be very grateful!
0
1
1,578
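A short sketch of the dpi suggestion above, combined with figsize for the aspect ratio: figsize is in inches, and dpi sets the pixel resolution of the saved PNG. The sizes and file name are arbitrary.

    import matplotlib.pyplot as plt

    fig, ax = plt.subplots(figsize=(12, 3))     # wide, not very tall
    ax.scatter(range(100), [i % 7 for i in range(100)], s=8)
    fig.tight_layout()                          # reduce cramped margins
    fig.savefig("wide_plot.png", dpi=150)       # 1800 x 450 pixels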
0
16,157,331
0
0
0
0
1
false
6
2011-11-15T22:17:00.000
4
5
0
Modeling a linear system with Python
8,144,910
0.158649
simulation,python,modeling
As @Matt said, I know this is old. But this came up as my first google hit, so I wanted to edit it. You can use scipy.signal.lti to model linear, time invariant systems. That gives you lti.bode. For an impulse response in the form of H(s) = (As^2 + Bs + C)/(Ds^2 + Es + F), you would enter h = scipy.signal.lti([A,B,C],[D,E,F]). To get the bode plot, you would do plot(*h.bode()[:2]).
I would like to simulate/model a closed-loop, linear, time-invariant system (specifically a locked PLL approximation) with python. Each sub-block within the model has a known transfer function which is given in terms of complex frequency H(s) = K / ( s * tau + 1 ). Using the model, I would like to see how the system response as well as the noise response is affected as parameters (e.g. the VCO gain) are changed. This would involve using Bode plots and root-locus plots. What Python modules should I seek out to get the job done?
0
1
14,035
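A minimal sketch of the scipy.signal.lti/bode suggestion in the answer above, for a transfer function H(s) = K / (tau*s + 1); the K and tau values are made up.

    import matplotlib.pyplot as plt
    from scipy import signal

    K, tau = 10.0, 1e-3
    sys = signal.lti([K], [tau, 1.0])      # numerator, denominator of H(s)
    w, mag, phase = sys.bode()             # rad/s, dB, degrees

    plt.semilogx(w, mag)
    plt.xlabel("frequency (rad/s)")
    plt.ylabel("magnitude (dB)")
    plt.show()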
0
9,634,189
0
0
0
0
1
false
0
2011-11-16T13:34:00.000
0
2
0
Tracking two different colors using OpenCV 2.3 and Python
8,152,504
0
python,opencv
I don't really understand your concern. With the camera, you get an image object. With this image object, you can calculate as many different histograms as you want. Each histogram would be a different output object :). Basically, you could track hundreds of colors at the same time!
I'm looking for a way to track two different colors at the same time using a single camera with OpenCV 2.3 (python bindings). I've read through a number of papers regarding OpenCV but can't find any mention as to whether or not it's capable of analyzing multiple histograms at once. Is this is even technically possible or do I need a separate camera for each color?
0
1
4,911
0
8,194,069
0
0
0
0
1
false
9
2011-11-19T10:58:00.000
1
4
0
Predicting Values with k-Means Clustering Algorithm
8,193,563
0.049958
python,machine-learning,data-mining,k-means,prediction
If you are considering assigning a value based on the average value within the nearest cluster, you are talking about some form of "soft decoder", which estimates not only the correct value of the coordinate but your level of confidence in the estimate. The alternative would be a "hard decoder" where only values of 0 and 1 are legal (occur in the training data set), and the new coordinate would get the median of the values within the nearest cluster. My guess is that you should always assign only a known-valid class value (0 or 1) to each coordinate, and averaging class values is not a valid approach.
I'm messing around with machine learning, and I've written a K Means algorithm implementation in Python. It takes a two dimensional data and organises them into clusters. Each data point also has a class value of either a 0 or a 1. What confuses me about the algorithm is how I can then use it to predict some values for another set of two dimensional data that doesn't have a 0 or a 1, but instead is unknown. For each cluster, should I average the points within it to either a 0 or a 1, and if an unknown point is closest to that cluster, then that unknown point takes on the averaged value? Or is there a smarter method? Cheers!
0
1
14,656
0
8,232,587
0
0
0
0
1
false
6
2011-11-22T10:41:00.000
0
2
0
Getting matplotlib plots to refresh on mouse focus
8,225,460
0
python,matplotlib
Have you tried to call plt.figure(fig.number) before plotting on figure fig and plt.show() after plotting a figure? It should update all the figures.
I am using matplotlib with interactive mode on and am performing a computation, say an optimization with many steps where I plot the intermediate results at each step for debugging purposes. These plots often fill the screen and overlap to a large extent. My problem is that during the calculation, figures that are partially or fully occluded don't refresh when I click on them. They are just a blank grey. I would like to force a redraw if necessary when I click on a figure, otherwise it is not useful to display it. Currently, I insert pdb.set_trace()'s in the code so I can stop and click on all the figures to see what is going on Is there a way to force matplotlib to redraw a figure whenever it gains mouse focus or is resized, even while it is busy doing something else?
0
1
2,049
0
8,236,857
0
1
0
0
1
false
0
2011-11-23T03:02:00.000
1
5
0
Python Sorting Lists in Lists
8,236,823
0.039979
python,list
What's wrong with itemgetter? lst.sort(key=lambda l: list(reversed(l))) should do the trick.
Given: lst = [['John',3],['Blake',4],['Ted',3]] Result: lst = [['John',3],['Ted',3],['Blake',4]] I'm looking for a way to sort lists in lists first numerically then alphabetically without the use of the "itemgetter" syntax.
0
1
1,274
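For the question above, a tuple key gives the "numeric first, then alphabetic" order directly without itemgetter; a small sketch using the sample data from the question.

    lst = [['John', 3], ['Blake', 4], ['Ted', 3]]
    lst.sort(key=lambda pair: (pair[1], pair[0]))
    print(lst)    # [['John', 3], ['Ted', 3], ['Blake', 4]]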
0
8,289,322
0
0
0
0
1
false
11
2011-11-27T21:21:00.000
0
7
0
Alternative to scipy and numpy for linear algebra?
8,289,157
0
python
I sometimes have this problem... not sure if this works, but I often install it using my own account, then try to run it in an IDE (Komodo in my case) and it doesn't work. Like your issue, it says it cannot find it. The way I solve this is to use sudo -i to get into root and then install it from there. If that does not work, can you update your question to provide a bit more info about the type of system you're using (Linux, Mac, Windows), the version of Python/numpy, and how you're accessing it, so it'll be easier to help.
Is there a good (small and light) alternative to numpy for python, to do linear algebra? I only need matrices (multiplication, addition), inverses, transposes and such. Why? I am tired of trying to install numpy/scipy - it is such a pita to get it to work - it never seems to install correctly (esp. since I have two machines, one linux and one windows): no matter what I do: compile it or install from pre-built binaries. How hard is it to make a "normal" installer that just works?
0
1
10,052
0
8,314,002
0
0
0
0
1
false
1
2011-11-29T14:43:00.000
0
3
0
python - audio classification of equal length samples / 'vocoder' thingy
8,312,672
0
python,audio,classification,fft,pyaudioanalysis
Try searching for algorithms on "music fingerprinting".
Anybody able to supply links, advice, or other forms of help to the following? Objective - use python to classify 10-second audio samples so that I afterwards can speak into a microphone and have python pick out and play snippets (faded together) of closest matches from db. My objective is not to have the closest match and I don't care what the source of the audio samples is. So the result is probably of no use other than speaking in noise (fun). I would like the python app to be able to find a specific match of FFT for example within the 10 second samples in the db. I guess the real-time sampling of the microphone will have a 100 millisecond buffersample. Any ideas? FFT? What db? Other?
0
1
1,085
0
8,341,353
0
1
0
0
1
true
11
2011-12-01T11:23:00.000
6
2
0
Python ast to dot graph
8,340,567
1.2
python,grammar,abstract-syntax-tree
If you look at ast.NodeVisitor, it's a fairly trivial class. You can either subclass it or just reimplement its walking strategy to whatever you need. For instance, keeping references to the parent when nodes are visited is very simple to implement this way, just add a visit method that also accepts the parent as an argument, and pass that from your own generic_visit. P.S. By the way, it appears that NodeVisitor.generic_visit implements DFS, so all you have to do is add the parent node passing.
I'm analyzing the AST generated by Python code for "fun and profit", and I would like to have something more graphical than "ast.dump" to actually see the AST generated. In theory it is already a tree, so it shouldn't be too hard to create a graph, but I don't understand how I could do it. ast.walk seems to walk with a BFS strategy, and in the visitX methods I can't really see the parent, nor do I seem to find a way to create a graph... It seems like the only way is to write my own DFS walk function; does that make sense?
0
1
3,299
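A minimal sketch of the approach in the accepted answer above: a DFS over the AST that passes the parent explicitly and emits Graphviz "dot" edges. The helper name and output format details are illustrative.

    import ast

    def to_dot(tree):
        lines = ["digraph ast {"]
        counter = {"n": 0}

        def visit(node, parent_id=None):
            my_id = counter["n"]; counter["n"] += 1
            lines.append('  n%d [label="%s"];' % (my_id, type(node).__name__))
            if parent_id is not None:
                lines.append("  n%d -> n%d;" % (parent_id, my_id))
            for child in ast.iter_child_nodes(node):
                visit(child, my_id)

        visit(tree)
        lines.append("}")
        return "\n".join(lines)

    print(to_dot(ast.parse("x = 1 + 2")))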
0
8,385,718
0
1
0
0
1
false
84
2011-12-05T12:53:00.000
0
6
0
Why are NumPy arrays so fast?
8,385,602
0
python,arrays,numpy
Numpy arrays are extremely similar to 'normal' arrays such as those in C. Notice that every element has to be of the same type. The speedup is great because you can take advantage of prefetching and you can instantly access any element in the array by its index.
I just changed a program I am writing to hold my data as numpy arrays as I was having performance issues, and the difference was incredible. It originally took 30 minutes to run and now takes 2.5 seconds! I was wondering how it does it. I assume it is that the because it removes the need for for loops but beyond that I am stumped.
0
1
47,312
1
8,539,266
0
0
0
0
1
true
0
2011-12-05T16:41:00.000
0
1
0
How can I keep row selections in QItemSelectionModel when columns are sorted?
8,388,659
1.2
python,pyqt
Here is the way I ended up solving this problem: When row selections are made, put the unique IDs of each hidden row into a list, then hide all hidden rows Use self.connect(self.myHorizontalHeader, SIGNAL("sectionClicked(int)"), self.keepSelectionValues) to catch the event when a user clicks on a column header to sort the rows In self.keepSelectionValue, go through each row and if the unique ID is in the hidden row list, hide the row This effectively sorts and displays the non-hidden rows without displaying all the rows of the entire table.
I'm using QItemSelectionModel with QTableView to allow the users to select rows. The problem is that when the user then clicks on a column header to sort the rows, the selection disappears and all the sorted data is displayed. How can I keep the selection, and just sort that, rather than having all the rows appear? Thanks! --Erin
0
1
623
0
8,396,124
0
0
0
0
3
true
115
2011-12-06T06:20:00.000
221
4
0
Invert image displayed by imshow in matplotlib
8,396,101
1.2
python,image,matplotlib
Specify the keyword argument origin='lower' or origin='upper' in your call to imshow.
I wanted the imshow() function in matplotlib.pyplot to display images the opposite way, i.e upside down. Is there a simple way to do this?
0
1
82,762
0
67,577,958
0
0
0
0
3
false
115
2011-12-06T06:20:00.000
1
4
0
Invert image displayed by imshow in matplotlib
8,396,101
0.049958
python,image,matplotlib
You can use the extent argument. For example, if X values range from -10 to 10 and Y values range from -5 to 5, you should pass extent=(-10,10,-5,5) to imshow().
I wanted the imshow() function in matplotlib.pyplot to display images the opposite way, i.e upside down. Is there a simple way to do this?
0
1
82,762
0
68,682,366
0
0
0
0
3
false
115
2011-12-06T06:20:00.000
0
4
0
Invert image displayed by imshow in matplotlib
8,396,101
0
python,image,matplotlib
Use ax.invert_yaxis() to invert the y-axis, or ax.invert_xaxis() to invert the x-axis.
I wanted the imshow() function in matplotlib.pyplot to display images the opposite way, i.e upside down. Is there a simple way to do this?
0
1
82,762
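A tiny demonstration of the origin keyword from the accepted answer above; the sample image is arbitrary.

    import numpy as np
    import matplotlib.pyplot as plt

    img = np.arange(100).reshape(10, 10)
    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.imshow(img)                    # default: row 0 at the top
    ax2.imshow(img, origin='lower')    # flipped: row 0 at the bottom
    plt.show()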
0
8,410,307
0
0
0
0
1
true
2
2011-12-07T04:06:00.000
4
1
0
randomly select a 200x200 square inside an image in python
8,410,260
1.2
python,image-processing,matplotlib,python-imaging-library
If you convert to binary PPM format, then there should be an easy way to seek to the appropriate offsets - it's not compressed, so there should be simple relationships. So pick two random numbers between 0 and 100000-200-1, and go to town. (I'm assuming you don't have a system with 10's of gigabytes of RAM)
I am using Python to work on my project in image processing. Suppose I have a very large image (100000 x 100000), and I need to randomly select a 200 x 200 square from this large image. Is there any easy way to do this job? Please shed some light on this. Thank you ----------------------------- EDIT ------------------------------------ Sorry, I don't think it is 100000 x 100000; rather, the resolutions of the images are 1 km and 2 km. I am having trouble with selecting an area of 200 x 200.
0
1
465
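A short sketch of a random 200x200 crop from an array already in memory; for truly huge images the array could be backed by np.memmap so only the crop is touched, in the spirit of the seek-to-offset answer above. The image here is a random stand-in.

    import numpy as np

    big = np.random.rand(2000, 3000)          # stand-in for the large image
    size = 200
    r = np.random.randint(0, big.shape[0] - size + 1)
    c = np.random.randint(0, big.shape[1] - size + 1)
    crop = big[r:r + size, c:c + size]
    print(crop.shape)                         # (200, 200)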
0
8,414,694
0
1
0
0
2
false
1
2011-12-07T06:51:00.000
0
5
0
Python: large number of dict like objects memory use
8,411,476
0
python,memory,dictionary
Possibilities: (1) Benchmark the csv.DictReader approach and see if it causes a problem. Note that the dicts contain POINTERS to the keys and values; the actual key strings are not copied into each dict. (2) For each file, use csv.Reader, after the first row, build a class dynamically, instantiate it once per remaining row. Perhaps this is what you had in mind. (3) Have one fixed class, instantiated once per file, which gives you a list of tuples for the actual data, a tuple that maps column indices to column names, and a dict that maps column names to column indices. Tuples occupy less memory than lists because there is no extra append-space allocated. You can then get and set your data via (row_index, column_index) and (row_index, column_name). In any case, to get better advice, how about some simple facts and stats: What version of Python? How many files? rows per file? columns per file? total unique keys/column names?
I am using csv.DictReader to read some large files into memory to then do some analysis, so all objects from multiple CSV files need to be kept in memory. I need to read them as Dictionary to make analysis easier, and because the CSV files may be altered by adding new columns. Yes SQL can be used, but I'd rather avoid it if it's not needed. I'm wondering if there is a better and easier way of doing this. My concern is that I will have many dictionary objects with same keys and waste memory? The use of __slots__ was an option, but I will only know the attributes of an object after reading the CSV. [Edit:] Due to being on legacy system and "restrictions", use of third party libraries is not possible.
0
1
490
0
8,411,784
0
1
0
0
2
false
1
2011-12-07T06:51:00.000
0
5
0
Python: large number of dict like objects memory use
8,411,476
0
python,memory,dictionary
If all the data in one column are of the same type, you can use NumPy. NumPy's loadtxt and genfromtxt functions can be used to read CSV files. And because they return an array, the memory usage is smaller than with dicts.
I am using csv.DictReader to read some large files into memory to then do some analysis, so all objects from multiple CSV files need to be kept in memory. I need to read them as Dictionary to make analysis easier, and because the CSV files may be altered by adding new columns. Yes SQL can be used, but I'd rather avoid it if it's not needed. I'm wondering if there is a better and easier way of doing this. My concern is that I will have many dictionary objects with same keys and waste memory? The use of __slots__ was an option, but I will only know the attributes of an object after reading the CSV. [Edit:] Due to being on legacy system and "restrictions", use of third party libraries is not possible.
0
1
490
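A minimal sketch of the genfromtxt approach mentioned in the answer above; the file name is an assumption, and names=True assumes the CSV has a header row.

```python
import numpy as np

# names=True uses the header row as field names, giving a structured
# array whose columns can be accessed by name.
data = np.genfromtxt("data.csv", delimiter=",", names=True)

print(data.dtype.names)              # the column names found in the header
print(data[data.dtype.names[0]])     # first column, accessed by name
```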
0
8,450,596
0
0
0
0
1
false
0
2011-12-09T17:39:00.000
0
3
0
matlab: randomly permuting rows and columns of a 2-D array
8,449,501
0
python,matlab,random,permutation
Both solutions above are great, and will work, but I believe both will involve making a completely new copy of the entire matrix in memory while doing the work. Since this is a huge matrix, that's pretty painful. In the case of the MATLAB solution, I think you'll be possibly creating two extra temporary copies, depending on how reshape works internally. I think you were on the right track by operating on columns, but the problem is that it will only scramble along columns. However, I believe if you do randperm along rows after that, you'll end up with a fully permuted matrix. This way you'll only be creating temporary variables that are, at worst, 80,000 by 1. Yes, that's two loops with 60,000 and 80,000 iterations each, but internally that's going to have to happen regardless. The algorithm is going to have to visit each memory location at least twice. You could probably do a more efficient algorithm by writing a C MEX function that operates completely in place, but I assume you'd rather not do that.
I have a large matrix (approx. 80,000 X 60,000), and I basically want to scramble all the entries (that is, randomly permute both rows and columns independently). I believe it'll work if I loop over the columns, and use randperm to randomly permute each column. (Or, I could equally well do rows.) Since this involves a loop with 60K iterations, I'm wondering if anyone can suggest a more efficient option? I've also been working with numpy/scipy, so if you know of a good option in python, that would be great as well. Thanks! Susan Thanks for all the thoughtful answers! Some more info: the rows of the matrix represent documents, and the data in each row is a vector of tf-idf weights for that document. Each column corresponds to one term in the vocabulary. I'm using pdist to calculate cosine similarities between all pairs of papers. And I want to generate a random set of papers to compare to. I think that just permuting the columns will work, then, because each paper gets assigned a random set of term frequencies. (Permuting the rows just means reordering the papers.) As Jonathan pointed out, this has the advantage of not making a new copy of the whole matrix, and it sounds like the other options all will.
0
1
2,124
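The answer above is MATLAB-centric; since the question also mentions numpy, here is a hedged numpy sketch of the same idea. Note that fancy indexing makes copies, so for the full-size matrix this still allocates new arrays, as the answer warns.

```python
import numpy as np

A = np.arange(12).reshape(3, 4)   # small stand-in for the huge matrix

A = A[np.random.permutation(A.shape[0]), :]   # permute the rows
A = A[:, np.random.permutation(A.shape[1])]   # permute the columns
```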
0
8,458,779
0
0
0
0
1
false
6
2011-12-10T14:15:00.000
1
3
0
How to vectorize the evaluation of bilinear & quadratic forms?
8,457,110
0.066568
python,r,matlab,matrix,numpy
It's not entirely clear what you're trying to achieve, but in R, you use crossprod to form cross-products: given matrices X and Y with compatible dimensions, crossprod(X, Y) returns X^T Y. Similarly, matrix multiplication is achieved with the %*% operator: X %*% Y returns the product XY. So you can get X^T A Y as crossprod(X, A %*% Y) without having to worry about the mechanics of matrix multiplication, loops, or whatever. If your matrices have a particular structure that allows optimising the computations (symmetric, triangular, sparse, banded, ...), you could look at the Matrix package, which has some support for this. I haven't used Matlab, but I'm sure it would have similar functions for these operations.
Given any n x n matrix of real coefficients A, we can define a bilinear form b_A : R^n x R^n → R by b_A(x, y) = x^T A y, and a quadratic form q_A : R^n → R by q_A(x) = b_A(x, x) = x^T A x. (For most common applications of quadratic forms q_A, the matrix A is symmetric, or even symmetric positive definite, so feel free to assume that either one of these is the case, if it matters for your answer.) (Also, FWIW, b_I and q_I (where I is the n x n identity matrix) are, respectively, the standard inner product and squared L2-norm on R^n, i.e. x^T y and x^T x.) Now suppose I have two n x m matrices, X and Y, and an n x n matrix A. I would like to optimize the computation of both b_A(x_i, y_i) and q_A(x_i) (where x_i and y_i denote the i-th column of X and Y, respectively), and I surmise that, at least in some environments like numpy, R, or Matlab, this will involve some form of vectorization. The only solution I can think of requires generating diagonal block matrices [X], [Y] and [A], with dimensions mn x m, mn x m, and mn x mn, respectively, and with (block) diagonal elements x_i, y_i, and A, respectively. Then the desired computations would be the matrix multiplications [X]^T [A] [Y] and [X]^T [A] [X]. This strategy is most definitely uninspired, but if there is a way to do it that is efficient in terms of both time and space, I'd like to see it. (It goes without saying that any implementation of it that does not exploit the sparsity of these block matrices would be doomed.) Is there a better approach? My preference of system for doing this is numpy, but answers in terms of some other system that supports efficient matrix computations, such as R or Matlab, may be OK too (assuming that I can figure out how to port them to numpy). Thanks! Of course, computing the products X^T A Y and X^T A X would compute the desired b_A(x_i, y_i) and q_A(x_i) (as the diagonal elements of the resulting m x m matrices), along with the O(m^2) irrelevant b_A(x_i, y_j) and b_A(x_i, x_j) (for i ≠ j), so this is a non-starter.
0
1
3,364
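The answer above is phrased in R; since the asker prefers numpy, here is a hedged numpy sketch of the same diagonal quantities, computed without forming the full m x m products. The shapes and data are made up for illustration.

```python
import numpy as np

n, m = 5, 3
A = np.random.rand(n, n)
X = np.random.rand(n, m)
Y = np.random.rand(n, m)

# b_A(x_i, y_i) for every column i, i.e. the diagonal of X^T A Y.
b = np.einsum('ji,jk,ki->i', X, A, Y)
q = np.einsum('ji,jk,ki->i', X, A, X)   # q_A(x_i)

# Sanity check against the (wasteful) full matrix product.
assert np.allclose(b, np.diag(X.T.dot(A).dot(Y)))
```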
0
8,523,192
0
1
0
0
1
false
3
2011-12-15T15:51:00.000
0
4
0
Sorting a Python list by third element, then by first element, etc?
8,522,800
0
python,list,sorting
If you go further down the page Mr. White linked to, you'll see how you can specify an arbitrary function to compute your sort key (using the handy cmp_to_key function provided).
Say I have a list in the form [[x,y,z], [x,y,z] etc...] etc where each grouping represents a random point. I want to order my points by the z coordinate, then within each grouping of z's, sort them by x coordinate. Is this possible?
0
1
3,538
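A tiny sketch of the key-function approach described in the answer above; the sample points are invented for illustration.

```python
from operator import itemgetter

points = [[3, 7, 2], [1, 4, 2], [5, 0, 1]]   # [x, y, z] triples

# Sort by z first (index 2), then by x (index 0) within equal z.
points.sort(key=lambda p: (p[2], p[0]))

# Equivalent, using operator.itemgetter:
ordered = sorted(points, key=itemgetter(2, 0))
```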
0
8,590,831
0
0
0
0
1
false
3
2011-12-20T21:57:00.000
1
1
0
Retrain a pybrain neural network after adding to the dataset
8,582,498
0.197375
python,neural-network,pybrain
It depends on what your objective is. If you need an updated NN model you can perform online training, i.e. perform a single step of back-propagation with the sample acquired at time $t$, starting from the network you had at time $t-1$. Or you can discard the older samples in order to keep a fixed amount of training samples, or you can reduce the size of the training set by performing a sort of clustering (i.e. merging similar samples into a single one). If you explain your application in more detail, it would be easier to suggest solutions.
I have a pybrain NN up and running, and it seems to be working rather well. Ideally, I would like to train the network and obtain a prediction after each data point (the previous weeks figures, in this case) has been added to the dataset. At the moment I'm doing this by rebuilding the network each time, but it takes an increasingly long time to train the network as each example is added (+2 minutes for each example, in a dataset of 1000s of examples). Is there a way to speed up the process by adding the new example to an already trained NN and updating it, or am I overcomplicating the matter, and would be better served by training on a single set of examples (say last years data) and then testing on all of the new examples (this year)?
0
1
985
0
8,592,302
0
0
0
0
1
true
5
2011-12-21T15:15:00.000
8
2
0
Is random.expovariate equivalent to a Poisson Process
8,592,048
1.2
python,math,statistics,poisson
On a strict reading of your question, yes, that is what random.expovariate does. expovariate gives you random floating point numbers, exponentially distributed. In a Poisson process the size of the interval between consecutive events is exponential. However, there are two other ways I could imagine modelling Poisson processes: just generate uniformly distributed random numbers and sort them; or generate integers which have a Poisson distribution (i.e. they are distributed like the number of events within a fixed interval in a Poisson process) - use numpy.random.poisson to do this. Of course all three things are quite different. The right choice depends on your application.
I read somewhere that the python library function random.expovariate produces intervals equivalent to Poisson Process events. Is that really the case or should I impose some other function on the results?
0
1
12,528
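A minimal sketch of the first interpretation in the answer above (exponential inter-arrival times accumulated into event times); the rate and number of events are arbitrary assumptions.

```python
import random

rate = 2.0        # assumed events per unit time
n_events = 10

# Inter-arrival times of a Poisson process are exponentially distributed.
gaps = [random.expovariate(rate) for _ in range(n_events)]

# Cumulative sums of the gaps give the event times themselves.
times = []
t = 0.0
for g in gaps:
    t += g
    times.append(t)
```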
0
25,213,056
0
0
0
0
1
false
22
2011-12-21T16:30:00.000
0
6
0
Robust Hand Detection via Computer Vision
8,593,091
0
python,image-processing,opencv,computer-vision,skin
Well, my experience with skin modeling is bad, because: 1) lighting can vary - skin segmentation is not robust 2) it will mark your face also (as well as other skin-like objects) I would use machine learning techniques like Haar training, which, in my opinion, is a far better approach than modeling and fixing some constraints (like skin detection + thresholding...)
I am currently working on a system for robust hand detection. The first step is to take a photo of the hand (in HSV color space) with the hand placed in a small rectangle to determine the skin color. I then apply a thresholding filter to set all non-skin pixels to black and all skin pixels white. So far it works quite well, but I wanted to ask if there is a better way to solve this? For example, I found a few papers mentioning concrete color spaces for caucasian people, but none with a comparison for asian/african/caucasian color-tones. By the way, I'm working with OpenCV via Python bindings.
0
1
11,923
0
8,633,193
0
0
0
0
2
false
0
2011-12-26T05:17:00.000
0
2
1
Hadoop - Saving Log Data and Developing GUI
8,633,112
0
java,python,hadoop
I think you can use HIVE. I am new to Hadoop but read somewhere that HIVE is for Hadoop analytics. Not sure whether it has a GUI or not, but it certainly has SQL capability to query unstructured data.
I am doing research for my new project. Following are the details of my project, research and questions: Project: Save the logs (e.g. format is TimeStamp, LOG Entry, Location, Remarks, etc.) from different sources. Here "different sources" means getting the LOG data from different systems worldwide (just an overview). (After saving the LOG entries in Hadoop as specified in 1) Generate reports of the LOGs saved in Hadoop on demand, like drill down, drill up, etc. NOTE: Approximately every minute there will be 50 to 60 MB of LOG entries from the systems (I checked it). Research and questions: For saving log entries in Hadoop from different sources, we used Apache Flume. We are creating our own MR programs and servlets. Are there any good options other than Flume? Is there any Hadoop data analysis (open source) tool to generate reports, etc.? I am doing my research; any comments will be helpful.
0
1
321
0
8,633,276
0
0
0
0
2
false
0
2011-12-26T05:17:00.000
1
2
1
Hadoop - Saving Log Data and Developing GUI
8,633,112
0.099668
java,python,hadoop
Have you looked at Datameer? It provides a GUI to import all these types of files, and to create reports as well as dashboards.
I am doing research for my new project. Following are the details of my project, research and questions: Project: Save the logs (e.g. format is TimeStamp, LOG Entry, Location, Remarks, etc.) from different sources. Here "different sources" means getting the LOG data from different systems worldwide (just an overview). (After saving the LOG entries in Hadoop as specified in 1) Generate reports of the LOGs saved in Hadoop on demand, like drill down, drill up, etc. NOTE: Approximately every minute there will be 50 to 60 MB of LOG entries from the systems (I checked it). Research and questions: For saving log entries in Hadoop from different sources, we used Apache Flume. We are creating our own MR programs and servlets. Are there any good options other than Flume? Is there any Hadoop data analysis (open source) tool to generate reports, etc.? I am doing my research; any comments will be helpful.
0
1
321
0
8,660,103
0
0
0
0
1
false
4
2011-12-28T18:53:00.000
1
2
0
Markov chain on letter scale and random text
8,660,015
0.099668
python,markov-chains
If each character only depends on the previous character, you could just compute the probabilities for all 27^2 pairs of characters.
I would like to generate a random text using letter frequencies from a book in a .txt file, so that each new character (string.lowercase + ' ') depends on the previous one. How do I use Markov chains to do so? Or is it simpler to use 27 arrays with conditional frequencies for each letter?
0
1
1,880
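A rough sketch of the pairwise-probability idea from the answer above, counting character bigrams and sampling the next character from the conditional frequencies. The training text, the starting character and the output length are placeholders.

```python
import random
import string
from collections import defaultdict

alphabet = string.ascii_lowercase + " "
text = "some example training text goes here"   # stand-in for the book's text
text = "".join(c for c in text.lower() if c in alphabet)

# Count how often each character follows each other character (27 x 27 table).
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(text, text[1:]):
    counts[a][b] += 1

def next_char(prev):
    options = counts[prev]
    total = sum(options.values())
    r = random.uniform(0, total)
    for c, n in options.items():
        r -= n
        if r <= 0:
            return c
    return " "   # fallback for characters with no recorded successor

out = ["t"]
for _ in range(100):
    out.append(next_char(out[-1]))
print("".join(out))
```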
0
8,724,586
0
0
0
0
1
false
1
2012-01-02T10:19:00.000
0
1
0
Linking segments of edges
8,699,665
0
python,image-processing,edge-detection
You can use a technique called dynamic programming. A very good intro on this can be found in chapter 6 of Sonka's digital image processing book.
I have written a Canny edge detection algorithm for a project. I want to know whether there is any method to link the broken segments of an edge, since I am getting a single edge as a conglomeration of a few segments. I am getting around 100 segments, which I am sure can be decreased with some intelligence. Please help.
0
1
433
0
8,711,375
0
0
0
0
1
true
0
2012-01-03T10:52:00.000
3
2
1
Formatting a single row as CSV
8,711,147
1.2
python,csv
The csv module wraps the _csv module, which is written in C. You could grab the source for it and modify it to not require the file-like object, but poking around in the module, I don't see any clear way to do it without recompiling.
I'm creating a script to convert a whole lot of data into CSV format. It runs on Google AppEngine using the mapreduce API, which is only relevant in that it means each row of data is formatted and output separately, in a callback function. I want to take advantage of the logic that already exists in the csv module to convert my data into the correct format, but because the CSV writer expects a file-like object, I'm having to instantiate a StringIO for each row, write the row to the object, then return the content of the object, each time. This seems silly, and I'm wondering if there is any way to access the internal CSV formatting logic of the csv module without the writing part.
0
1
464
0
8,717,213
0
1
0
0
1
false
18
2012-01-03T18:52:00.000
2
2
0
Chunking data from a large file for multiprocessing?
8,717,179
0.197375
python,parallel-processing
I would keep it simple. Have a single program open the file and read it line by line. You can choose how many files to split it into, open that many output files, and write every line to the next file in turn. This will split the file into n equal parts. You can then run a Python program against each of the files in parallel.
I'm trying to parallelize an application using multiprocessing which takes in a very large csv file (64MB to 500MB), does some work line by line, and then outputs a small, fixed size file. Currently I do a list(file_obj), which unfortunately is loaded entirely into memory (I think), and then I break that list up into n parts, n being the number of processes I want to run. I then do a pool.map() on the broken-up lists. This seems to have a really, really bad runtime in comparison to a single threaded, just-open-the-file-and-iterate-over-it methodology. Can someone suggest a better solution? Additionally, I need to process the rows of the file in groups which preserve the value of a certain column. These groups of rows can themselves be split up, but no group should contain more than one value for this column.
0
1
14,016
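A minimal sketch of the line-by-line split described in the answer above. Note that this simple round-robin split ignores the grouping constraint mentioned at the end of the question; in practice you would switch output files only when the grouping column changes. The file name and part count are assumptions.

```python
def split_file(path, n_parts):
    """Distribute the lines of `path` round-robin into n_parts output files."""
    outputs = [open("%s.part%d" % (path, i), "w") for i in range(n_parts)]
    with open(path) as src:
        for i, line in enumerate(src):
            outputs[i % n_parts].write(line)
    for f in outputs:
        f.close()

# split_file("big.csv", 8)   # then run the worker script on each part in parallel
```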
0
8,741,894
0
1
0
0
1
false
2
2012-01-05T01:13:00.000
0
4
0
Parallel computing
8,736,396
0
python
Like the commentators have said, find someone to talk to in your university. The answer to your question will be specific to what software is installed on the grid. If you have access to a grid, it's highly likely you also have access to a person whose job it is to answer your questions (and they will be pleased to help) - find this person!
I have a two dimensional table (matrix). I need to process each line in this matrix independently from the others. The processing of each line is time consuming. I'd like to use the parallel computing resources in our university (Canadian Grid something). Can I have some advice on how to start? I never used parallel computing before. Thanks :)
0
1
1,155
0
21,651,546
0
1
0
0
1
false
82
2012-01-05T07:49:00.000
2
9
0
How to solve a pair of nonlinear equations using Python?
8,739,227
0.044415
python,numpy,scipy,sympy
You can use the openopt package and its NLP method. It has many solvers for nonlinear problems to choose from, including: goldenSection, scipy_fminbound, scipy_bfgs, scipy_cg, scipy_ncg, amsg2p, scipy_lbfgsb, scipy_tnc, bobyqa, ralg, ipopt, scipy_slsqp, scipy_cobyla, lincher, and algencan. Some of the latter algorithms can solve constrained nonlinear programming problems. So, you can introduce your system of equations to openopt.NLP() with a function like this: lambda x: (x[0] + x[1]**2 - 4, np.exp(x[0]) + x[0]*x[1] - 3)
What's the (best) way to solve a pair of nonlinear equations using Python (NumPy, SciPy or SymPy)? e.g.: x+y^2 = 4, e^x + xy = 3. A code snippet which solves the above pair would be great.
0
1
135,871
0
43,361,173
0
0
0
1
1
false
2
2012-01-05T23:13:00.000
0
3
0
does cassandra cql support aggregation functions, like group by and order by
8,751,293
0
python,cassandra,cql
The latest versions of Cassandra support aggregations within a single partition only.
For example, in CQL, SELECT * from abc_dimension ORDER BY key ASC; does not seem to be working. Any help?
0
1
3,328
0
8,775,931
0
0
0
0
1
false
6
2012-01-08T06:12:00.000
2
3
0
Efficient ways to write a large NumPy array to a file
8,775,786
0.132549
python,numpy,scientific-computing
I would recommend looking at the pickle module. The pickle module allows you to serialize python objects as streams of bytes (e.g., strings). This allows you to write them to a file or send them over a network, and then reinstantiate the objects later.
I've currently got a project running on PiCloud that involves multiple iterations of an ODE solver. Each iteration produces a NumPy array of about 30 rows and 1500 columns, with each iteration being appended to the bottom of the array of the previous results. Normally, I'd just let these fairly big arrays be returned by the function, hold them in memory and deal with them all at once. Except PiCloud has a fairly restrictive cap on the size of the data that can be out and out returned by a function, to keep down on transmission costs. Which is fine, except that means I'd have to launch thousands of jobs, each running one iteration, with considerable overhead. It appears the best solution to this is to write the output to a file, and then collect the file using another function they have that doesn't have a transfer limit. Is my best bet to do this just dumping it into a CSV file? Should I add to the CSV file each iteration, or hold it all in an array until the end and then just write once? Is there something terribly clever I'm missing?
0
1
8,395
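A small sketch of the pickle approach recommended in the answer above, appending one array per iteration to a single file and reading them all back later; the file name and array shape are placeholders.

```python
import pickle
import numpy as np

result = np.zeros((30, 1500))   # stand-in for one iteration's output

# Append each iteration's array to one pickle file.
with open("results.pkl", "ab") as f:
    pickle.dump(result, f)

# Later, read every pickled array back in order.
arrays = []
with open("results.pkl", "rb") as f:
    while True:
        try:
            arrays.append(pickle.load(f))
        except EOFError:
            break
```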
0
33,917,789
0
0
0
0
1
false
5
2012-01-12T18:15:00.000
0
2
0
OpenCV Lip Segmentation
8,840,127
0
python,opencv,face-detection,simplecv
The color segmentation involves a "gradient of the difference between the pseudo-hue and luminance (obtaining hybrid contours)". Try googling for the quoted string and you will find multiple research papers on this topic.
How do people usually extract the shape of the lips once the mouth region is found (in my case using haar cascade)? I tried color segmentation and edge/corner detection but they're very inaccurate for me. I need to find the two corners and the very upper and lower lip at the center. I've heard things about active appearance models but I'm having trouble understanding how to use this with python and I don't have enough context to figure out if this is even the conventional method for detecting different parts of the lips. Is that my best choice or do I have other options? If I should use it, how would I get started with it using python and simplecv?
0
1
4,301
0
8,860,314
0
1
0
0
1
true
5
2012-01-12T20:08:00.000
2
4
0
Find subject in incomplete sentence with NLTK
8,841,569
1.2
python,nlp,nltk
NLP techniques are relatively ill equipped to deal with this kind of text. Phrased differently: it is quite possible to build a solution which includes NLP processes to implement the desired classifier, but the added complexity doesn't necessarily pay off in terms of speed of development or classifier precision improvements. If one really insists on using NLP techniques, POS-tagging and its ability to identify nouns is the most obvious idea, but chunking and access to WordNet or other lexical sources are other plausible uses of NLTK. Instead, an ad-hoc solution based on simple regular expressions and a few heuristics such as those suggested by NoBugs is probably an appropriate approach to the problem. Certainly, such solutions bear two main risks: over-fitting to the portion of the text reviewed/considered while building the rules, and possible messiness/complexity of the solution if too many rules and sub-rules are introduced. Running some plain statistical analysis on the complete (or a very big sample of the) texts to be considered should help guide the selection of a few heuristics and also avoid the over-fitting concerns. I'm quite sure that a relatively small number of rules, associated with a custom dictionary, should be sufficient to produce a classifier with appropriate precision as well as speed/resources performance. A few ideas:
- Count all the words (and possibly all the bi-grams and tri-grams) in a sizable portion of the corpus at hand. This info can drive the design of the classifier by allowing you to allocate the most effort and the most rigid rules to the most common patterns.
- Manually introduce a short dictionary which associates the most popular words with: their POS function (mostly a binary matter here, i.e. nouns vs. modifiers and other non-nouns), their synonym root [if applicable], and their class [if applicable].
- If the pattern holds for most of the input text, consider using the last word before the end of the text or before the first comma as the main key to class selection. If the pattern doesn't hold, just give more weight to the first and to the last word.
- Consider a first pass where the text is re-written with the most common bi-grams replaced by a single word (even an artificial code word) which would be in the dictionary.
- Consider also replacing the most common typos or synonyms with their corresponding synonym root. Adding regularity to the input helps improve precision and also helps make a few rules / a few entries in the dictionary yield a big return on precision.
- For words not found in the dictionary, assume that words which are mixed with numbers and/or preceded by numbers are modifiers, not nouns.
- Consider a two-tier classification whereby inputs which cannot plausibly be assigned a class are put in a "manual pile" to prompt additional review, which results in additional rules and/or dictionary entries. After a few iterations the classifier should require fewer and fewer improvements and tweaks.
- Look for non-obvious features. For example some corpora are made from a mix of sources, but some of the sources may include particular regularities which help identify the source and/or be applicable as classification hints. For example some sources may only contain, say, uppercase text (or text typically longer than 50 characters, or truncated words at the end, etc.).
I'm afraid this answer falls short of providing Python/NLTK snippets as a primer towards a solution, but frankly such simple NLTK-based approaches are likely to be disappointing at best. Also, we should have a much bigger sample set of the input text to guide the selection of plausible approaches, including ones that are based on NLTK or NLP techniques at large.
I have a list of products that I am trying to classify into categories. They will be described with incomplete sentences like: "Solid State Drive Housing" "Hard Drive Cable" "1TB Hard Drive" "500GB Hard Drive, Refurbished from Manufacturer" How can I use python and NLP to get an output like "Housing, Cable, Drive, Drive", or a tree that describes which word is modifying which? Thank you in advance
0
1
2,952
0
8,858,983
0
0
0
0
1
false
5
2012-01-14T00:11:00.000
0
4
0
converting a space delimited file to a CSV
8,858,946
0
python
str.split() without any arguments will split by any length of whitespace. operator.itemgetter() takes multiple arguments, and will return a tuple.
I have a text file containing tabular data. What I need to do is automate the task of writing to a new text file that is comma delimited instead of space delimited, extract a few columns from the existing data, and reorder the columns. This is a snippet of the first 4 lines of the original data:
Number of rows: 8542
Algorithm |Date |Time |Longitude |Latitude |Country
1 2000-01-03 215926.688 -0.262 35.813 Algeria
1 2000-01-03 215926.828 -0.284 35.817 Algeria
Here is what I want in the end:
Longitude,Latitude,Country,Date,Time
-0.262,35.813,Algeria,2000-01-03,215926.688
Any tips on how to approach this?
0
1
18,146
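A rough sketch combining the two hints in the answer above (whitespace split plus itemgetter for reordering). The file names, the column positions and the header-skipping heuristic are assumptions based on the sample data in the question.

```python
import csv
from operator import itemgetter

# Assumed source column order: Algorithm, Date, Time, Longitude, Latitude, Country
pick = itemgetter(3, 4, 5, 1, 2)   # -> Longitude, Latitude, Country, Date, Time

with open("input.txt") as src, open("output.csv", "w") as dst:
    writer = csv.writer(dst)
    writer.writerow(["Longitude", "Latitude", "Country", "Date", "Time"])
    for line in src:
        fields = line.split()   # split() with no argument splits on any whitespace run
        if len(fields) >= 6 and fields[0].isdigit():   # crude filter for data rows
            writer.writerow(pick(fields))
```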
0
9,473,690
0
0
0
0
1
false
6
2012-01-14T19:03:00.000
0
7
0
Convert netcdf to image
8,864,599
0
python,netcdf
IDV is a good visualization tool for NetCDF, but, as far as I know, there is no command line interface. I would recommend Matlab. It has read and write functions for NetCDF as well as an extensive plotting library...probably one of the best. You can then compile the matlab code and run it from the command line.
I have a netcdf file that I would like to convert to an image (jpeg, png, gif) using a command line tool. Could someone please help me with the library name and possibly a link to how it is done. Regards David
0
1
8,951
0
8,959,115
0
1
0
0
1
false
2
2012-01-18T22:59:00.000
2
2
0
'Memory leak' when calling openopt SNLE in a loop
8,918,773
0.197375
python,memory-leaks,numpy,scipy
Yes, there is clearly a memory leak here. I ran the nlsp demo, which uses SNLE with interalg, under valgrind and found that 295k was leaked from running the solver once. This should be reported to them.
Whenever I run the solver 'interalg' (in the SNLE function call from OpenOpt) in a loop, my memory usage accumulates until the code stops running. It happens both on my Mac OS X 10.6.8 and on Slackware Linux. I would really appreciate some advice, considering that I am not extremely literate in Python. Thank you! Daniel
0
1
143
0
8,938,840
0
0
0
0
1
false
25
2012-01-20T08:15:00.000
2
5
0
How to extract data from matplotlib plot
8,938,449
0.07983
python,matplotlib
It's Python, so you can modify the source script directly so the data is dumped before it is plotted.
I have a wxPython program which reads from different datasets, performs various types of simple on-the-fly analysis on the data and plots various combinations of the datasets to matplotlib canvas. I would like to have the opportunity to dump currently plotted data to file for more sophisticated analysis later on. The question is: are there any methods in matplotlib that allow access to the data currently plotted in matplotlib.Figure?
0
1
31,312
0
8,954,018
0
0
0
0
2
true
0
2012-01-21T15:03:00.000
2
3
0
python: least-squares estimation?
8,953,991
1.2
python,matrix,numerical-methods
scipy and numpy are the obvious way to go here. Note that numpy uses the famous (and well-optimized) BLAS libraries, so it is also very fast. Much faster than any "pure python" will ever be.
I know how to implement least-squares with elementary matrix decomposition and other operations, but how can I do it in Python? (I've never tried to use matrices in Python) (clarification edit to satisfy the shoot-first-and-ask-questions-later -1'er) I was looking for help to find out how to use numerical programming in Python. Looks like numpy and scipy are the way to go. I was looking for how to use them, but I found a tutorial.
0
1
819
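A minimal numpy least-squares sketch along the lines of the answer above; the data and the straight-line model are invented for illustration.

```python
import numpy as np

# Fit y ~ a*x + b by least squares on some made-up data.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])

A = np.column_stack([x, np.ones_like(x)])          # design matrix
coeffs, residuals, rank, sv = np.linalg.lstsq(A, y)
a, b = coeffs
```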
0
8,954,011
0
0
0
0
2
false
0
2012-01-21T15:03:00.000
1
3
0
python: least-squares estimation?
8,953,991
0.066568
python,matrix,numerical-methods
Have a look at SciPy. It's got matrix operations.
I know how to implement least-squares with elementary matrix decomposition and other operations, but how can I do it in Python? (I've never tried to use matrices in Python) (clarification edit to satisfy the shoot-first-and-ask-questions-later -1'er) I was looking for help to find out how to use numerical programming in Python. Looks like numpy and scipy are the way to go. I was looking for how to use them, but I found a tutorial.
0
1
819
0
9,020,306
0
0
0
0
1
false
2
2012-01-26T14:52:00.000
1
2
0
Finding 'edge cases' in a dataset
9,019,949
0.099668
python,statistics,matplotlib
I think what you want is a variance plot. Create a dictionary keyed by the distinct x values. Put each value of y in the list associated with its x. Find the standard deviation (np.std) of the list associated with each x, call it s. Plot s vs. x.
I apologize in advance for not being very precise, as I don't know the mathematical expression for what I want. I am using matplotlib to analyze a large dataset. What I have now is a distribution of x,y points. I want to find out the cases in which the x values of my function are the same, but y differs the greatest. So if I plot it, one part of the cases is at the top of my graph, the other is at the bottom of the graph. So how do I get the points (x,y), (x,y') where f(x)=y and f(x)=y' and y-y'=max? Cheers
0
1
960
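A short sketch of the grouping-and-spread idea from the answer above; the sample points are placeholders, and the spread measure (np.std, or max minus min) is a choice left to the reader.

```python
import numpy as np
import matplotlib.pyplot as plt
from collections import defaultdict

# Placeholder data points.
x = [1, 1, 2, 2, 2, 3, 3]
y = [0.5, 4.5, 1.0, 1.2, 0.9, 2.0, 7.0]

groups = defaultdict(list)
for xi, yi in zip(x, y):
    groups[xi].append(yi)

xs = sorted(groups)
spread = [np.std(groups[xi]) for xi in xs]   # or max(...) - min(...) for the raw extremes

plt.plot(xs, spread, "o-")
plt.show()
```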
0
9,027,774
0
1
0
0
1
false
3
2012-01-27T00:07:00.000
3
3
0
merging records in python or numpy
9,027,355
0.197375
python,merge,numpy
You can use a dictionary if the values are lists. defaultdict in the collections module is very useful for this.
I have a csv file in which the first column contains an identifier and the second column associated data. The identifier is replicated an arbitrary number of times so the file looks like this. data1,123 data1,345 data1,432 data2,654 data2,431 data3,947 data3,673 I would like to merge the records to generate a single record for each identifier and get. data1,123,345,432 data2,654,431 data3,947,673 Is there an efficient way to do this in python or numpy? Dictionaries appear to be out due to duplicate keys. At the moment I have the lines in a list of lists then looping through and testing for identity with the previous value at index 0 in the list but this is very clumsy. Thanks for any help.
0
1
267
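A minimal sketch of the defaultdict approach from the answer above, reading the two-column CSV and writing one merged row per identifier; the file names are assumptions.

```python
import csv
from collections import defaultdict

merged = defaultdict(list)
with open("data.csv") as f:
    for key, value in csv.reader(f):   # assumes exactly two columns per row
        merged[key].append(value)

with open("merged.csv", "w") as out:
    writer = csv.writer(out)
    for key in sorted(merged):
        writer.writerow([key] + merged[key])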
0
9,144,538
0
0
0
0
2
false
0
2012-01-27T09:01:00.000
0
2
0
How to create columns in a csv file and insert row under them in python scrapy
9,030,953
0
python,scrapy
1) You can use a custom CSV feed exporter; each item key will be treated as a column heading and its value as the column value. 2) You can write a pipeline that writes the data to a CSV file using the Python csv library.
Please help me in creating columns and inserting rows under them in a csv file using python scrapy. I need to write scraped data into 3 columns. So first of all three columns are to be created and then data is to be entered in each row.
1
1
506
0
9,032,800
0
0
0
0
2
false
0
2012-01-27T09:01:00.000
1
2
0
How to create columns in a csv file and insert row under them in python scrapy
9,030,953
0.099668
python,scrapy
CSV is a Comma-Separated Values format. That basically means that it is a text file with strings separated by commas and line breaks: each line break creates a row and each comma creates a column in that row. I guess the simplest way to create a CSV file would be to create a Pythonic dict where each key is a column and the value for each column is a list of rows, where None stands for the obvious lack of value. You can then fill in your dict by appending values to the requested column (thus adding a row) and then easily transform the dict into a CSV file by iterating over list indexes: for each column either write "value," to the file, or just "," if the index is out of bounds or the value is None. For each row, add a line break.
Please help me in creating columns and inserting rows under them in a csv file using python scrapy. I need to write scraped data into 3 columns. So first of all three columns are to be created and then data is to be entered in each row.
1
1
506
0
9,042,475
0
0
0
0
1
false
1
2012-01-27T23:03:00.000
1
2
0
Load sparse scipy matrix into existing numpy dense matrix
9,041,236
0.099668
python,numpy,scipy,numerical-computing
It does seem like there should be a better way to do this (and I haven't scoured the documentation), but you could always loop over the elements of the sparse array and assign them to the dense array (probably zeroing out the dense array first). If this ends up too slow, that seems like an easy C extension to write...
Say I have a huge numpy matrix A taking up tens of gigabytes. It takes a non-negligible amount of time to allocate this memory. Let's say I also have a collection of scipy sparse matrices with the same dimensions as the numpy matrix. Sometimes I want to convert one of these sparse matrices into a dense matrix to perform some vectorized operations. Can I load one of these sparse matrices into A rather than re-allocate space each time I want to convert a sparse matrix into a dense matrix? The .toarray() method which is available on scipy sparse matrices does not seem to take an optional dense array argument, but maybe there is some other way to do this.
0
1
995
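A hedged sketch of the loop-free variant of the idea in the answer above: zero the preallocated dense array and scatter the sparse entries into it in place. It assumes the sparse matrix is in COO format with no duplicate entries (convert with .tocoo() first if needed); the sizes are placeholders.

```python
import numpy as np
import scipy.sparse as sp

A = np.empty((1000, 1000))                          # the big preallocated dense array
S = sp.rand(1000, 1000, density=0.01, format="coo") # stand-in sparse matrix

A[:] = 0.0                  # reuse the existing buffer rather than allocating a new one
A[S.row, S.col] = S.data    # scatter the nonzeros in place
```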
0
9,046,880
0
0
0
0
1
true
0
2012-01-28T14:16:00.000
1
1
0
Fitted curve on chart using ReportLab
9,045,888
1.2
python,charts,reportlab,curve-fitting
I would recommend using MatPlotLib. This is exactly the sort of thing it's designed to handle and it will be much easier than trying to piece together something in ReportLab alone, especially since you'll have to do all the calculation of the line on your own and figure out the details of how to draw it in just the right place. MatPlotLib integrates easily with ReportLab; I've used the combination several times with great results.
I'm preparing a set of reports using open source ReportLab. The reports contain a number of charts. Everything works well so far. I've been asked to take a (working) bar chart that shows two series of data and overlay a fitted curve for each series. I can see how I could overlay a segmented line on the bar graph by creating both a line chart and bar chart in the same ReportLab drawing. I can't find any reference for fitted curves in ReportLab, however. Does anyone have any insight into plotting a fitted curve to a series of data in ReportLab or, failing that, a suggestion about how to accomplish this task (I'm thinking that chart would need to be produced in matplotlib instead)?
0
1
447
0
66,779,646
0
0
0
0
3
false
14
2012-01-29T20:51:00.000
0
5
0
Python OpenCV - Find black areas in a binary image
9,056,646
0
python,opencv,colors,detection,threshold
I know this is an old question, but for completeness I wanted to point out that cv2.moments() will not always work for small contours. In this case, you can use cv2.minEnclosingCircle() which will always return the center coordinates (and radius), even if you have only a single point. Slightly more resource-hungry though, I think...
Is there any method/function in the Python wrapper of OpenCV that finds black areas in a binary image? (like regionprops in Matlab) Up to now I load my source image, transform it into a binary image via threshold and then invert it to highlight the black areas (that are now white). I can't use third party libraries such as cvblobslob or cvblob
0
1
26,428
0
15,779,624
0
0
0
0
3
false
14
2012-01-29T20:51:00.000
0
5
0
Python OpenCV - Find black areas in a binary image
9,056,646
0
python,opencv,colors,detection,threshold
Transform it to a binary image using threshold with the CV_THRESH_BINARY_INV flag; you get thresholding + inversion in one step.
Is there any method/function in the Python wrapper of OpenCV that finds black areas in a binary image? (like regionprops in Matlab) Up to now I load my source image, transform it into a binary image via threshold and then invert it to highlight the black areas (that are now white). I can't use third party libraries such as cvblobslob or cvblob
0
1
26,428
0
9,058,880
0
0
0
0
3
false
14
2012-01-29T20:51:00.000
2
5
0
Python OpenCV - Find black areas in a binary image
9,056,646
0.07983
python,opencv,colors,detection,threshold
After inverting the binary image to turn the black areas white, apply the cv.FindContours function. It will give you the boundaries of the region you need. Later you can use cv.BoundingRect to get a minimum bounding rectangle around the region. Once you have the rectangle vertices, you can find its center, etc. Or, to find the centroid of the region, use the cv.Moments function after finding contours, then use cv.GetSpatialMoment in the x and y directions. It is explained in the OpenCV manual. To find the area, use the cv.ContourArea function.
Is there any method/function in the Python wrapper of OpenCV that finds black areas in a binary image? (like regionprops in Matlab) Up to now I load my source image, transform it into a binary image via threshold and then invert it to highlight the black areas (that are now white). I can't use third party libraries such as cvblobslob or cvblob
0
1
26,428
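The answer above uses the older cv API; here is a hedged cv2 sketch of the same steps (contours, bounding rectangle, area, centroid via moments). The input file name is an assumption, and note that findContours returns two or three values depending on the OpenCV version.

```python
import cv2

img = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)   # assumed binary input image
inv = cv2.bitwise_not(img)                             # black areas become white

# findContours returns 2 or 3 values depending on the OpenCV version.
res = cv2.findContours(inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = res[0] if len(res) == 2 else res[1]

for c in contours:
    x, y, w, h = cv2.boundingRect(c)     # minimum upright bounding rectangle
    area = cv2.contourArea(c)
    m = cv2.moments(c)
    if m["m00"] != 0:                    # centroid from spatial moments
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
```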
0
9,071,479
0
1
0
0
2
true
3
2012-01-30T21:29:00.000
1
2
0
How to handle large memory footprint in Python?
9,071,031
1.2
python
Well, if you need the whole dataset in RAM, there's not much to do but get more RAM. Sounds like you aren't sure if you really need to, but keeping all the data resident requires the smallest amount of thinking :) If your data comes in a stream over a long period of time, and all you are doing is creating a histogram, you don't need to keep it all resident. Just create your histogram as you go along, write the raw data out to a file if you want to have it available later, and let Python garbage collect the data as soon as you have bumped your histogram counters. All you have to keep resident is the histogram itself, which should be relatively small.
I have a scientific application that reads a potentially huge data file from disk and transforms it into various Python data structures such as a map of maps, a list of lists, etc. NumPy is called in for numerical analysis. The problem is, the memory usage can grow rapidly. As swap space is called in, the system slows down significantly. The general strategies I have seen: lazy initialization: this doesn't seem to help in the sense that many operations require in-memory data anyway. shelving: this Python standard library seems to support writing data objects into a datafile (backed by some db). My understanding is that it dumps data to a file, but if you need it, you still have to load all of it into memory, so it doesn't exactly help. Please correct me if this is a misunderstanding. The third option is to leverage a database, and offload as much data processing to it as possible. As an example: a scientific experiment runs for several days and has generated a huge (terabytes of data) sequence of: co-ordinate (x,y), observed event E at time t. And we need to compute a histogram over t for each (x,y) and output a 3-dimensional array. Any other suggestions? I guess my ideal case would be that the in-memory data structure can be paged to disk based on a soft memory limit, and this process should be as transparent as possible. Can any of these caching frameworks help? Edit: I appreciate all the suggested points and directions. Among those, I found user488551's comments to be the most relevant. As much as I like Map/Reduce, for many scientific apps the setup and effort of parallelizing the code is an even bigger problem to tackle than my original question, IMHO. It is difficult to pick an answer as my question itself is so open ... but Bill's answer is closer to what we can do in the real world, hence the choice. Thank you all.
0
1
294
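A hedged sketch of the streaming-histogram idea from the answer above: accumulate the (x, y, t) histogram chunk by chunk so only the histogram itself stays resident. The bin edges and the chunk reader are placeholders standing in for the real data source.

```python
import numpy as np

# Assumed bin edges for x, y and t.
edges = [np.linspace(0, 100, 101),     # x
         np.linspace(0, 100, 101),     # y
         np.linspace(0, 86400, 25)]    # t

hist = np.zeros([len(e) - 1 for e in edges])

def read_chunks():
    """Placeholder: yield (n, 3) arrays of (x, y, t) records streamed from disk."""
    for _ in range(10):
        yield np.random.rand(10000, 3) * [100, 100, 86400]

for chunk in read_chunks():
    h, _ = np.histogramdd(chunk, bins=edges)
    hist += h          # only the accumulated histogram is kept in memory
```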
0
9,071,108
0
1
0
0
2
false
3
2012-01-30T21:29:00.000
3
2
0
How to handle large memory footprint in Python?
9,071,031
0.291313
python
Have you considered divide and conquer? Maybe your problem lends itself to that. One framework you could use for that is Map/Reduce. Does your problem have multiple phases such that Phase I requires some data as input and generates an output which can be fed to phase II? In that case you can have 1 process do phase I and generate data for phase II. Maybe this will reduce the amount of data you simultaneously need in memory? Can you divide your problem into many small problems and recombine the solutions? In this case you can spawn multiple processes that each handle a small sub-problem and have one or more processes to combine these results in the end? If Map-Reduce works for you look at the Hadoop framework.
I have a scientific application that reads a potentially huge data file from disk and transforms it into various Python data structures such as a map of maps, a list of lists, etc. NumPy is called in for numerical analysis. The problem is, the memory usage can grow rapidly. As swap space is called in, the system slows down significantly. The general strategies I have seen: lazy initialization: this doesn't seem to help in the sense that many operations require in-memory data anyway. shelving: this Python standard library seems to support writing data objects into a datafile (backed by some db). My understanding is that it dumps data to a file, but if you need it, you still have to load all of it into memory, so it doesn't exactly help. Please correct me if this is a misunderstanding. The third option is to leverage a database, and offload as much data processing to it as possible. As an example: a scientific experiment runs for several days and has generated a huge (terabytes of data) sequence of: co-ordinate (x,y), observed event E at time t. And we need to compute a histogram over t for each (x,y) and output a 3-dimensional array. Any other suggestions? I guess my ideal case would be that the in-memory data structure can be paged to disk based on a soft memory limit, and this process should be as transparent as possible. Can any of these caching frameworks help? Edit: I appreciate all the suggested points and directions. Among those, I found user488551's comments to be the most relevant. As much as I like Map/Reduce, for many scientific apps the setup and effort of parallelizing the code is an even bigger problem to tackle than my original question, IMHO. It is difficult to pick an answer as my question itself is so open ... but Bill's answer is closer to what we can do in the real world, hence the choice. Thank you all.
0
1
294
0
9,079,959
0
1
0
0
1
false
1
2012-01-31T09:31:00.000
1
2
0
For cycle in Python's way in Matlab
9,077,225
0.099668
python,matlab,for-loop
In addition to the given answer, be aware that MATLAB's for loop is very slow. Maybe programming in a functional style using arrayfun(), cellfun() and structfun() might be a handier solution, and quite close to Python's map().
It may seem stupid, but after using Matlab for a while (a couple of years), I've tried Python, and despite some of Matlab's features being really handy, I really like Python. Now, for work, I'm using Matlab again, and sometimes I miss a structure like Python's 'for' loop. Instead of using the standard 'for' that Matlab provides, is there a structure more suited to processing batches of similar data?
0
1
273
0
9,091,268
0
1
0
0
1
false
2
2012-02-01T05:31:00.000
0
4
0
Data structures with Python
9,091,252
0
python,data-structures
Given that all data structures exist in memory, and memory is effectively just a list (array)... there is no data structure that couldn't be expressed in terms of the basic Python data structures (with appropriate code to interact with them).
Python has a lot of convenient data structures (lists, tuples, dicts, sets, etc) which can be used to make other 'conventional' data structures (Eg, I can use a Python list to create a stack and a collections.dequeue to make a queue, dicts to make trees and graphs, etc). There are even third-party data structures that can be used for specific tasks (for instance the structures in Pandas, pytables, etc). So, if I know how to use lists, dicts, sets, etc, should I be able to implement any arbitrary data structure if I know what it is supposed to accomplish? In other words, what kind of data structures can the Python data structures not be used for? Thanks
0
1
2,410
0
9,136,461
0
1
0
0
1
false
0
2012-02-03T19:59:00.000
1
1
0
IronPython and setuptools/ez_install
9,134,717
0.197375
ironpython,setuptools
Distribute is a fork of setuptools that supports Python 3, among other things. ez_install is used to install setuptools/easy_install, and then easy_install can be used to install packages (although pip is better). Three years ago IronPython was missing a lot of the pieces needed, like zlib (2.7.0) and zipimport (upcoming 2.7.2). I haven't checked in a while to see if it works, though, but any changes now should be minor.
Our company has developed Python libraries to open and display data from files using our proprietary file format. The library only depends on numpy which has been ported to IronPython. The setup.py for our internal distribution imports from setuptools but apparently this is not yet supported in IronPython. Searching the wirenet produces many references to a blog by Jeff Hardy that was written three years ago. Can someone explain the relationship between setuptools, ez_install, and distutils? Is there a way to distribute our library that is compatible with both CPython and IronPython. Many thanks, Kenny
0
1
919
0
9,215,067
0
0
0
0
1
false
8
2012-02-09T15:33:00.000
0
3
0
Graphviz - Drawing maximal cliques
9,213,797
0
python,graphviz
I don't think you can do this. Clusters are done via subgraphs, which are expected to be separate graphs, not overlapping with other subgraphs. You could change the visualisation though; if you imagine that the members of a clique are members of some set S, then you could simply add a node S and add directed or dashed edges linking each member to the S node. If the S nodes are given a different shape, then it should be clear which nodes are in which cliques. If you really want, you can give the edges connecting members to their clique node high weights, which should bring them close together on the graph. Note that there would never be edges between the clique nodes; that would indicate that two cliques are maximally connected, which just implies they are in fact one large clique, not two separate ones.
I want to use graphviz in order to draw for a given graph all the maximal cliques that it has. Therefore I would like that nodes in the same maximal clique will be visually encapsulated together (meaning that I would like that a big circle will surround them). I know that the cluster option exists - but in all the examples that I saw so far - each node is in one cluster only. In the maximal clique situation, a node can be in multiple cliques. Is there an option to visualize this with graphviz? If not, are there any other tools for this task (preferably with a python api). Thank you.
0
1
3,488
0
11,412,849
0
0
0
0
1
false
5
2012-02-09T23:43:00.000
2
2
0
Image Conversion between cv2, cv, mahotas, and SimpleCV
9,220,720
0.197375
python,opencv,python-2.7,simplecv,mahotas
I have never used mahotas. But I'm currently working on SimpleCV. I have just sent a pull request for making SimpleCV numpy array compatible with cv2. So, basically, Image.getNumpy() -> numpy.ndarray for cv2 Image.getBitmap() -> cv2.cv.iplimage Image.getMatrix() -> cv2.cv.cvmat To convert cv2 numpy array to SimpleCV Image object, Image(cv2_image) -> SimpleCV.ImageClass.Image
I am having to do a lot of vision related work in Python lately, and I am facing a lot of difficulties switching between formats. When I read an image using Mahotas, I cannot seem to get it to cv2, though they are both using numpy.ndarray. SimpleCV can take OpenCV images easily, but getting SimpleCV image out for legacy cv or mahotas seems to be quite a task. Some format conversion syntaxes would be really appreciated. For example, if I open a greyscale image using mahotas, it is treated to be in floating point colour space by default, as I gather. Even when I assign the type as numpy.uint8, cv2 cannot seem to recognise it as an array. I do not know how to solve this problem. I am not having much luck with colour images either. I am using Python 2.7 32bit on Ubuntu Oneiric Ocelot. Thanks in advance!
0
1
5,140
0
9,245,481
0
0
0
0
1
false
1
2012-02-12T00:51:00.000
0
3
0
Efficient two dimensional numpy array statistics
9,245,466
0
python,numpy,statistics
How many grids are there? One option would be to create a 3D array that is 100x100xnumGrids and compute the median across the 3rd dimension.
I have many 100x100 grids, is there an efficient way using numpy to calculate the median for every grid point and return just one 100x100 grid with the median values? Presently, I'm using a for loop to run through each grid point, calculating the median and then combining them into one grid at the end. I'm sure there's a better way to do this using numpy. Any help would be appreciated! Thanks!
0
1
1,689
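A minimal sketch of the 3D-stacking idea from the answer above; the grids are random placeholders, and np.dstack is one convenient way to build the 100x100xnumGrids array before taking the median along the third axis.

```python
import numpy as np

grids = [np.random.rand(100, 100) for _ in range(20)]   # placeholder grids

stack = np.dstack(grids)                # shape (100, 100, numGrids)
median_grid = np.median(stack, axis=2)  # one 100x100 grid of per-point medians
```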
0
9,248,907
0
0
0
0
1
false
3
2012-02-12T12:37:00.000
1
1
0
Python Sparse matrix inverse and laplacian calculation
9,248,821
0.197375
python,linear-algebra,sparse-matrix,matrix-inverse
In general the inverse of a sparse matrix is not sparse which is why you won't find sparse matrix inverters in linear algebra libraries. Since D is diagonal, D^(-1/2) is trivial and the Laplacian matrix calculation is thus trivial to write down. L has the same sparsity pattern as A but each value A_{ij} is multiplied by (D_i*D_j)^{-1/2}. Regarding the issue of the inverse, the standard approach is always to avoid calculating the inverse itself. Instead of calculating L^-1, repeatedly solve Lx=b for the unknown x. All good matrix solvers will allow you to decompose L which is expensive and then back-substitute (which is cheap) repeatedly for each value of b.
I have two sparse matrices A (affinity matrix) and D (diagonal matrix) with dimension 100000*100000. I have to compute the Laplacian matrix L = D^(-1/2)*A*D^(-1/2). I am using scipy CSR format for the sparse matrices. I didn't find any method to find the inverse of a sparse matrix. How do I find L and the inverse of a sparse matrix? Also, is it efficient to do so using Python, or should I call a MATLAB function for calculating L?
0
1
1,503
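A hedged scipy sketch of the two points in the answer above: D^(-1/2) is trivial because D is diagonal, and L^-1 is avoided by solving L x = b instead. The matrix here is a small random stand-in with self-affinity added so no row sum is zero; the real affinity matrix would be used in its place.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000                                  # stand-in size; the real problem is 100000
A = sp.rand(n, n, density=0.001, format="csr")
A = A + A.T + sp.eye(n)                   # symmetric, with self-affinity so no row is empty
d = np.asarray(A.sum(axis=1)).ravel()     # diagonal of D (row sums)

D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))   # D^(-1/2) is trivial since D is diagonal
L = D_inv_sqrt.dot(A).dot(D_inv_sqrt)     # same sparsity pattern as A

# Instead of forming L^-1, solve L x = b for each right-hand side b you need.
b = np.ones(n)
x = spla.spsolve(L.tocsr(), b)
```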
0
9,254,885
0
0
0
0
1
false
2
2012-02-13T01:53:00.000
1
2
0
Generate volume curve from mp3
9,254,671
0.099668
python,waveform
An MP3 file is an encoded version of a waveform. Before you can work with the waveform, you must first decode the MP3 data into a PCM waveform. Once you have PCM data, each sample represents the waveform's amplitude at that point in time. If we assume an MP3 decoder outputs signed, 16-bit values, your amplitudes will range from -32768 to +32767. If you normalize the samples by dividing each by 32768, the waveform samples will then range between +/- 1.0. The issue really is one of MP3 decoding to PCM. As far as I know, there is no native Python decoder. You can, however, use LAME, called from Python as a subprocess or, with a bit more work, interface the LAME library directly to Python with something like SWIG. Not a trivial task. Plotting this data then becomes an exercise for the reader.
I'm trying to build something in python that can analyze an uploaded mp3 and generate the necessary data to build a waveform graphic. Everything I've found is much more complex than I need. Ultimately, I'm trying to build something like you'd see on SoundCloud. I've been looking into numpy and fft's, but it all seem more complicated than I need. What's the best approach to this? I'll build the actual graphic using canvas, so don't worry about that part of it, I just need the data to plot.
0
1
1,275
0
9,280,538
0
0
0
0
1
false
55
2012-02-14T16:06:00.000
52
5
0
Matplotlib python show() returns immediately
9,280,171
1
python,matplotlib
I think that using show(block=True) should fix your problem.
I have a simple python script which plots some graphs in the same figure. All graphs are created by the draw() and in the end I call the show() function to block. The script used to work with Python 2.6.6, Matplotlib 0.99.3, and Ubuntu 11.04. Tried to run it under Python 2.7.2, Matplotlib 1.0.1, and Ubuntu 11.10 but the show() function returns immediately without waiting to kill the figure. Is this a bug? Or a new feature and we'll have to change our scripts? Any ideas? EDIT: It does keep the plot open under interactive mode, i.e., python -i ..., but it used to work without that, and tried to have plt.ion() in the script and run it in normal mode but no luck.
0
1
84,640
0
9,280,574
0
0
0
0
1
false
5
2012-02-14T16:27:00.000
1
4
0
How to store numerical lookup table in Python (with labels)
9,280,488
0.049958
python,numpy
If you want to access the results by name, then you could use a Python nested dictionary instead of an ndarray, and serialize it to a JSON text file using the json module.
I have a scientific model which I am running in Python which produces a lookup table as output. That is, it produces a many-dimensional 'table' where each dimension is a parameter in the model and the value in each cell is the output of the model. My question is how best to store this lookup table in Python. I am running the model in a loop over every possible parameter combination (using the fantastic itertools.product function), but I can't work out how best to store the outputs. It would seem sensible to simply store the output as a ndarray, but I'd really like to be able to access the outputs based on the parameter values not just indices. For example, rather than accessing the values as table[16][5][17][14] I'd prefer to access them somehow using variable names/values, for example: table[solar_z=45, solar_a=170, type=17, reflectance=0.37] or something similar to that. It'd be brilliant if I were able to iterate over the values and get their parameter values back - that is, being able to find out that table[16]... corresponds to the outputs for solar_z = 45. Is there a sensible way to do this in Python?
0
1
1,600