Dataset columns (per-row schema, with the ranges reported for this split):
Answer: string, 18 to 5.54k chars; AnswerCount: int64, 1 to 31; System Administration and DevOps: int64, 0 to 1; Networking and APIs: int64, 0 to 1; is_accepted: bool, 2 classes; Q_Score: int64, 0 to 1.72k; Tags: string, 6 to 90 chars; Title: string, 15 to 149 chars; Users Score: int64, -11 to 327; Database and SQL: int64, 0 to 1; A_Id: int64, 5.3k to 72.5M; Other: int64, 0 to 1; Q_Id: int64, 5.14k to 60M; Score: float64, -1 to 1.2; Question: string, 49 to 9.42k chars; Data Science and Machine Learning: int64, always 1; Web Development: int64, 0 to 1; CreationDate: string, 23 chars; GUI and Desktop Applications: int64, 0 to 1; Python Basics and Environment: int64, 0 to 1; ViewCount: int64, 7 to 3.27M; Available Count: int64, 1 to 13
One thing to remember is that the Matlab compiler does not actually compile the Matlab code into native machine instructions. It simply wraps it into a standalone executable or a library with its own runtime engine that runs it. You would be able to run your code without Matlab installed, and you would be able to interface it with other languages, but it will still be interpreted Matlab code, so there would be no speedup.
[AnswerCount 4 | is_accepted false | Q_Score 14 | Users Score 5 | A_Id 1,659,332 | Q_Id 5,136 | Score 0.244919]
Tags: python,c,matlab
Title: Does anyone have experience creating a shared library in MATLAB?
Question:
A researcher has created a small simulation in MATLAB and we want to make it accessible to others. My plan is to take the simulation, clean up a few things and turn it into a set of functions. Then I plan to compile it into a C library and use SWIG to create a Python wrapper. At that point, I should be able to call the simulation from a small Django app. At least I hope so. Do I have the right plan? Has anyone else done something similar? Can you let me know if there are any serious pitfalls that I'm not aware of at the moment?
[CreationDate 2008-08-07T18:47:00.000 | ViewCount 2,313 | Available Count 3 | Topics: Data Science and Machine Learning, GUI and Desktop Applications]
I'd also try ctypes first. (1) Use the Matlab compiler to compile the code into C. (2) Compile the C code into a DLL. (3) Use ctypes to load and call code from this DLL. The hardest step is probably (1), but if you already know Matlab and have used the Matlab compiler, you should not have serious problems with it.
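A minimal sketch of step (3), assuming the build produced a plain C-callable library. The names libsim and simulate are hypothetical, and a real MATLAB-Compiler-generated library additionally requires initialising the MATLAB runtime first, with entry points that depend on the MATLAB version:

    import ctypes

    # Hypothetical library and function names, not real MATLAB artifacts.
    lib = ctypes.CDLL("./libsim.dll")           # "./libsim.so" on Linux
    lib.simulate.argtypes = [ctypes.c_double]   # declare the C signature
    lib.simulate.restype = ctypes.c_double

    result = lib.simulate(3.14)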
[AnswerCount 4 | is_accepted false | Q_Score 14 | Users Score 2 | A_Id 138,534 | Q_Id 5,136 | Score 0.099668 | CreationDate 2008-08-07T18:47:00.000 | ViewCount 2,313 | Available Count 3 | Topics: Data Science and Machine Learning, GUI and Desktop Applications]
Tags, Title, Question: identical to the Q_Id 5,136 entry above.
This won't help much, but I remember that I was able to wrap a MATLAB simulation into a DLL and then call it from a Delphi app. It worked really well.
[AnswerCount 4 | is_accepted true | Q_Score 14 | Users Score 3 | A_Id 5,302 | Q_Id 5,136 | Score 1.2 | CreationDate 2008-08-07T18:47:00.000 | ViewCount 2,313 | Available Count 3 | Topics: Data Science and Machine Learning, GUI and Desktop Applications]
Tags, Title, Question: identical to the Q_Id 5,136 entry above.
for loops in MATLAB used to be slow, but this is not true anymore. So vectorizing is not always the miracle solution. Just use the profiler, and tic and toc functions to help you identify possible bottlenecks.
[AnswerCount 7 | is_accepted false | Q_Score 14 | Users Score -2 | A_Id 138,886 | Q_Id 49,307 | Score -0.057081]
Tags: python,arrays,matlab,for-loop
Title: Can parallel traversals be done in MATLAB just as in Python?
Question:
Using the zip function, Python allows for loops to traverse multiple sequences in parallel. for (x,y) in zip(List1, List2): Does MATLAB have an equivalent syntax? If not, what is the best way to iterate over two parallel arrays at the same time using MATLAB?
[CreationDate 2008-09-08T08:25:00.000 | ViewCount 7,417 | Available Count 1 | Topics: Data Science and Machine Learning, Python Basics and Environment]
__reduce_ex__ is what __reduce__ should have been but never became. __reduce_ex__ works like __reduce__, except that it is also passed the pickle protocol version in use.
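A small sketch showing the protocol number arriving in __reduce_ex__ (the Point class is just an illustration):

    import pickle

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __reduce_ex__(self, protocol):
            # Unlike __reduce__, we are told which pickle protocol is in use.
            print("pickling with protocol", protocol)
            return (Point, (self.x, self.y))   # (callable, args) to rebuild

    data = pickle.dumps(Point(1, 2), protocol=2)   # prints: pickling with protocol 2
    p = pickle.loads(data)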
[AnswerCount 2 | is_accepted false | Q_Score 19 | Users Score 9 | A_Id 150,318 | Q_Id 150,284 | Score 1]
Tags: python,pickle
Title: What is the difference between __reduce__ and __reduce_ex__?
Question:
I understand that these methods are for pickling/unpickling and have no relation to the reduce built-in function, but what's the difference between the 2 and why do we need both?
[CreationDate 2008-09-29T19:31:00.000 | ViewCount 9,908 | Available Count 1 | Topics: Data Science and Machine Learning, Python Basics and Environment]
You can try pygame; it's very easy to handle and similar to SDL under C++.
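A minimal sketch of an on-the-fly animation loop in pygame (assuming pygame is installed):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((400, 300))
    clock = pygame.time.Clock()

    x, running = 0, True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.fill((0, 0, 0))
        # redraw a moving circle each frame: rendering on the fly, no prerendered images
        pygame.draw.circle(screen, (255, 0, 0), (x % 400, 150), 20)
        pygame.display.flip()
        x += 2
        clock.tick(60)   # cap at 60 frames per second
    pygame.quit()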
[AnswerCount 3 | is_accepted false | Q_Score 12 | Users Score 3 | A_Id 1,568,711 | Q_Id 169,810 | Score 0.197375]
Tags: python,animation,2d
Title: 2D animation in Python
Question:
I'm writing a simulator in Python, and am curious about options and opinions regarding basic 2D animations. By animation, I'm referring to rendering on the fly, not displaying prerendered images. I'm currently using matplotlib (Wxagg backend), and it's possible that I'll be able to continue using it, but I suspect it won't be able to sufficiently scale in terms of performance or capabilities. Requirements are: Cross-platform (Linux, MacOS X, Windows) Low complexity overhead Plays well with wxpython (at least won't step on each other's toes unduly) Interactivity. Detect when objects are clicked on, moused over, etc. Note that high performance isn't on the list, but the ability to handle ~100 bitmap objects on the screen would be good. Your thoughts?
[CreationDate 2008-10-04T05:36:00.000 | ViewCount 29,957 | Available Count 1 | Topics: Data Science and Machine Learning, GUI and Desktop Applications]
Seems to be pure inertia. Where it is in use, everyone is too busy to learn IDL or numpy in sufficient detail to switch, and they don't want to rewrite good working programs. Luckily that's not strictly true, but it is true enough in enough places that Matlab will be around a long time. Like Fortran (in active use where I work!).
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 5 | A_Id 181,127 | Q_Id 179,904 | Score 0.047583]
Tags: python,matlab
Title: What is MATLAB good for? Why is it so used by universities? When is it better than Python?
Question:
I've been recently asked to learn some MATLAB basics for a class. What makes it so cool for researchers and people who work in universities? I saw it's cool to work with matrices and plotting things... (things that can be done easily in Python using some libraries). Writing a function or parsing a file is just painful. I'm still at the start; what am I missing? In the "real" world, what should I think to use it for? When can it do better than Python? By better I mean: an easy way to write something performant. UPDATE 1: One of the things I'd like to know the most is "Am I missing something?" :D UPDATE 2: Thank you for your answers. My question is not about whether or not to buy MATLAB. The university has the possibility to give me a copy of an old version of MATLAB (MATLAB 5, I guess) for free, without breaking the license. I'm interested in its capabilities, and in whether it deserves a deeper study (I won't need anything more than basic MATLAB in order to pass the exam :P); will it really be better than Python for a specific kind of task in the real world?
[CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Matlab is good at number crunching, and at matrix manipulation in general. It has many helpful built-in libraries (depending on the version). I think it is easier to use than Python if you are going to be calculating equations.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 2 | A_Id 1,890,839 | Q_Id 179,904 | Score 0.019045 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
MATLAB is great for doing array manipulation, doing specialized math functions, and for creating nice plots quickly. I'd probably only use it for large programs if I could use a lot of array/matrix manipulation. You don't have to worry about the IDE as much as in more formal packages, so it's easier for students without a lot of programming experience to pick up.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 13 | A_Id 179,910 | Q_Id 179,904 | Score 1 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
The main reason it is useful in industry is the plug-ins built on top of the core functionality. Almost all active Matlab development for the last few years has focused on these. Unfortunately, you won't have much opportunity to use these in an academic environment.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 4 | A_Id 179,932 | Q_Id 179,904 | Score 0.038077 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
One reason MATLAB is popular with universities is the same reason a lot of things are popular with universities: there are a lot of professors familiar with it, and it's fairly robust. I've spoken to a lot of folks who are especially interested in MATLAB's nascent ability to tap into the GPU instead of working serially. Having used Python in grad school, I kind of wish I had the licks to work with MATLAB in that case. It sure would make vector space calculations a breeze.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 4 | A_Id 180,012 | Q_Id 179,904 | Score 0.038077 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
I think you answered your own question when you noted that Matlab is "cool to work with matrixes and plotting things". Any application that requires a lot of matrix maths and visualisation will probably be easiest to do in Matlab. That said, Matlab's syntax feels awkward and shows the language's age. In contrast, Python is a much nicer general purpose programming language and, with the right libraries can do much of what Matlab does. However, Matlab is always going to have a more concise syntax than Python for vector and matrix manipulation. If much of your programming involves these sorts of manipulations, such as in signal processing and some statistical techniques, then Matlab will be a better choice.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 3 | A_Id 181,295 | Q_Id 179,904 | Score 0.028564 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
Hold everything. When's the last time you programmed your calculator to play Tetris? Did you actually think you could write anything you want in those 128k of RAM? Likely not. MATLAB is not for programming unless you're dealing with huge matrices. It's the graphing calculator you whip out when you've got megabytes to gigabytes of data to crunch and/or plot. Learn just the basic stuff, but also don't kill yourself trying to make Python be a graphing calculator. You'll quickly get a feel for when you want to crunch, plot or explore in MATLAB and when you want to have all that Python offers. Lots of engineers turn to pre- and post-processing in Python or Perl, occasionally even just calling out to MATLAB for the hard bits. They are such completely different tools that you should learn their basic strengths first without trying to replace one with the other. Granted, for saving money I'd either use Octave or skimp on ease and learn to work with sparse matrices in Perl or Python.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 15 | A_Id 181,492 | Q_Id 179,904 | Score 1 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
I've been using matlab for many years in my research. It's great for linear algebra and has a large set of well-written toolboxes. The most recent versions are starting to push it into being closer to a general-purpose language (better optimizers, a much better object model, richer scoping rules, etc.). This past summer, I had a job where I used Python + numpy instead of Matlab. I enjoyed the change of pace. It's a "real" language (and all that entails), and it has some great numeric features like broadcasting arrays. I also really like the ipython environment. Here are some things that I prefer about Matlab:
- consistency: MathWorks has spent a lot of effort making the toolboxes look and work like each other. They haven't done a perfect job, but it's one of the best I've seen for a codebase that's decades old.
- documentation: I find it very frustrating to figure out some things in numpy and/or python because the documentation quality is spotty: some things are documented very well, some not at all. It's often most frustrating when I see things that appear to mimic Matlab, but don't quite work the same. Being able to grab the source is invaluable (to be fair, most of the Matlab toolboxes ship with source too).
- compactness: for what I do, Matlab's syntax is often more compact (but not always)
- momentum: I have too much Matlab code to change now
If I didn't have such a large existing codebase, I'd seriously consider switching to Python + numpy.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 34 | A_Id 193,386 | Q_Id 179,904 | Score 1 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
It's been some time since I've used Matlab, but from memory it does provide (albeit with extra plugins) the ability to generate source code to let you realise your algorithm on a DSP. Since Python is a general-purpose programming language, there is no reason why you couldn't do everything in Python that you can do in Matlab. However, Matlab does provide a number of other tools, e.g. a very broad array of DSP features, and a broad array of S- and Z-domain features. All of these could be hand-coded in Python (since it's a general-purpose language), but if all you're after is the results, perhaps spending the money on Matlab is the cheaper option? These features have also been tuned for performance; e.g. the documentation for numpy specifies that its Fourier transform is optimised for power-of-2-point data sets, whereas as I understand it, Matlab has been written to use the most efficient Fourier transform to suit the size of the data set, not just powers of 2. Edit: Oh, and in Matlab you can produce some sensational-looking plots very easily, which is important when you're presenting your data. Again, certainly not impossible using other tools.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 3 | A_Id 180,736 | Q_Id 179,904 | Score 0.028564 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
Personally, I tend to think of Matlab as an interactive matrix calculator and plotting tool with a few scripting capabilities, rather than as a full-fledged programming language like Python or C. The reason for its success is that matrix stuff and plotting work out of the box, and you can do a few very specific things in it with virtually no actual programming knowledge. The language is, as you point out, extremely frustrating to use for more general-purpose tasks, such as even the simplest string processing. Its syntax is quirky, and it wasn't created with the abstractions necessary for projects of more than 100 lines or so in mind. I think the reason why people try to use Matlab as a serious programming language is that most engineers (there are exceptions; my degree is in biomedical engineering and I like programming) are horrible programmers and hate to program. They're taught Matlab in college mostly for the matrix math, and they learn some rudimentary programming as part of learning Matlab, and just assume that Matlab is good enough. I can't think of anyone I know who knows any language besides Matlab, but still uses Matlab for anything other than a few pure number crunching applications.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 7 | A_Id 181,274 | Q_Id 179,904 | Score 1 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
I believe you have a very good point, and it's one that has been raised in the company where I work. The company is limited in its ability to apply Matlab because of the licensing costs involved. One developer proved that Python was a very suitable replacement, but it fell on deaf ears, because to the owners of those ears: no-one in the company knew Python, although many of us wanted to use it; MatLab has a name, a company, and a task force behind it to solve any problems; and there were some (but not a lot of) legacy MatLab projects that would need to be re-written. If it's worth £10,000 (??) it's gotta be worth it!! I'm with you here. Python is a very good replacement for MatLab. I should point out that I've been told the company uses maybe 5% to 10% of MatLab's capabilities, and that is the basis for my agreement with the original poster.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 6 | A_Id 180,017 | Q_Id 179,904 | Score 1 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
The most likely reason that it's used so much in universities is that the mathematics faculty are used to it, understand it, and know how to incorporate it into their curriculum.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 6 | A_Id 179,912 | Q_Id 179,904 | Score 1 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
I know this question is old, and therefore may no longer be watched, but I felt it was necessary to comment. As an aerospace engineer at Georgia Tech, I can say, with no qualms, that MATLAB is awesome. You can have it quickly interface with your Excel spreadsheets to pull in data about how high and fast rockets are flying, how the wind affects those same rockets, and how different engines matter. Beyond rocketry, similar concepts come into play for cars, trucks, aircraft, spacecraft, and even athletics. You can pull in large amounts of data, manipulate all of it, and make sure your results are as they should be. In the event something is off, you can add a breakpoint where an error occurs to debug your program without having to recompile every time you want to run it. Is it slower than some other programs? Well, technically. I'm sure that if you wanted to do the number crunching on an NVIDIA graphics processor instead, it would probably be faster, but that requires a lot more effort with harder debugging. As a general programming language, MATLAB is weak. It's not meant to work against Python, Java, ActionScript, C/C++ or any other general-purpose language. It's meant for the engineering and mathematics niche the name implies, and it serves that niche fantastically.
[AnswerCount 21 | is_accepted false | Q_Score 53 | Users Score 4 | A_Id 1,113,065 | Q_Id 179,904 | Score 0.038077 | CreationDate 2008-10-07T19:11:00.000 | ViewCount 200,558 | Available Count 13 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the first Q_Id 179,904 entry above.
If you hate numpy, get out RPy and your local copy of R, and use it instead. (I would also echo the advice to make sure you really need to invert the matrix. numpy's linalg.solve, and R's solve() function, for example, don't actually compute a full inversion, since it is unnecessary.)
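The same point in numpy terms, as a sketch with made-up data:

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([9.0, 8.0])

    x = np.linalg.solve(A, b)   # solves A @ x = b directly, no explicit inverse
    A_inv = np.linalg.inv(A)    # only if you truly need the inverse itself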
[AnswerCount 7 | is_accepted false | Q_Score 62 | Users Score 1 | A_Id 213,717 | Q_Id 211,160 | Score 0.028564]
Tags: python,algorithm,matrix,linear-algebra,matrix-inverse
Title: Python Inverse of a Matrix
Question:
How do I get the inverse of a matrix in python? I've implemented it myself, but it's pure python, and I suspect there are faster modules out there to do it.
[CreationDate 2008-10-17T05:30:00.000 | ViewCount 125,109 | Available Count 1 | Topics: Data Science and Machine Learning, Other]
Using deltaX:
- if deltaX is between 2 and 10: half increment
- if deltaX is between 10 and 20: unit increment
- if smaller than 2: we multiply by 10 and test again
- if larger than 20: we divide by 10 and test again
Then we get the position of the first unit or half increment on the width using xmin. I still need to test this solution.
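A sketch of that rule in Python (untested, like the answer itself says; the function and variable names are mine):

    import math

    def tick_step(xmin, xmax):
        delta, scale = xmax - xmin, 1.0
        while delta < 2:          # too small: multiply by 10 and test again
            delta, scale = delta * 10, scale / 10
        while delta >= 20:        # too large: divide by 10 and test again
            delta, scale = delta / 10, scale * 10
        # deltaX in [2, 10): half increment; in [10, 20): unit increment
        step = 0.5 if delta < 10 else 1.0
        return step * scale

    step = tick_step(-5.0, 5.0)                 # -> 1.0, matching the example ranges
    first_tick = math.ceil(-5.0 / step) * step  # first tick at or after xmin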
[AnswerCount 5 | is_accepted false | Q_Score 2 | Users Score 0 | A_Id 346,873 | Q_Id 346,823 | Score 0]
Tags: python,algorithm,math
Title: Ticking function grapher
Question:
I am trying to figure out the following problem. I am building yet another math function grapher. The function is drawn on its predefined x,y range; that's all good. Now I am working on the background and the ticking of the X and Y axes (if any axes are shown). I worked out the following. I have a fixed width of 250 px. The tick gap should be between 12.5 and 50 px. The ticks should indicate either a unit or half-unit range, by which I mean the following:
- x range (-5, 5): one tick = 1
- x range (-1, 1): one tick = 0.5 or 0.1, depending on the gap that each of these options would generate
- x range (0.1, 0.3): 0.05
Given an x range, how would you get the number of ticks for either a full or half unit range? Or maybe there are other ways to approach this type of problem.
[CreationDate 2008-12-06T21:40:00.000 | ViewCount 239 | Available Count 1 | Topics: Data Science and Machine Learning]
The following is a description of random weighted selection of an element of a set (or multiset, if repeats are allowed), both with and without replacement, in O(n) space and O(log n) time. It consists of implementing a binary search tree, sorted by the elements to be selected, where each node of the tree contains:
- the element itself (element)
- the un-normalized weight of the element (elementweight)
- the sum of all the un-normalized weights of the left-child node and all of its children (leftbranchweight)
- the sum of all the un-normalized weights of the right-child node and all of its children (rightbranchweight)
Then we randomly select an element from the BST by descending down the tree. A rough description of the algorithm follows. The algorithm is given a node of the tree. Then the values of leftbranchweight, rightbranchweight, and elementweight of the node are summed, and the weights are divided by this sum, resulting in the values leftbranchprobability, rightbranchprobability, and elementprobability, respectively. Then a random number between 0 and 1 (randomnumber) is obtained.
- if the number is less than elementprobability: remove the element from the BST as normal, updating leftbranchweight and rightbranchweight of all the necessary nodes, and return the element
- else if the number is less than (elementprobability + leftbranchprobability): recurse on leftchild (run the algorithm using leftchild as the node)
- else: recurse on rightchild
When we finally find, using these weights, which element is to be returned, we either simply return it (with replacement) or we remove it and update the relevant weights in the tree (without replacement). DISCLAIMER: the algorithm is rough, and a treatise on the proper implementation of a BST is not attempted here; rather, it is hoped that this answer will help those who really need fast weighted selection without replacement (like I do).
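A rough, runnable sketch of the descent step (with replacement only; deletion and weight updates are omitted, the tree must be built leaves-first, and no balancing is attempted):

    import random

    class Node:
        def __init__(self, element, weight, left=None, right=None):
            self.element, self.weight = element, weight
            self.left, self.right = left, right
            self.leftbranchweight = subtree_weight(left)
            self.rightbranchweight = subtree_weight(right)

    def subtree_weight(node):
        if node is None:
            return 0.0
        return node.weight + node.leftbranchweight + node.rightbranchweight

    def select(node):
        # Pick this node's element, or descend left/right,
        # in proportion to the un-normalized weights.
        r = random.random() * subtree_weight(node)
        if r < node.weight:
            return node.element
        elif r < node.weight + node.leftbranchweight:
            return select(node.left)
        else:
            return select(node.right)

    # weights 1:2:3 -> "a","b","c" drawn with probability 1/6, 2/6, 3/6
    root = Node("b", 2.0, Node("a", 1.0), Node("c", 3.0))
    print(select(root))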
[AnswerCount 9 | is_accepted false | Q_Score 52 | Users Score 4 | A_Id 9,827,070 | Q_Id 352,670 | Score 0.088656]
Tags: python,algorithm,random,random-sample
Title: Weighted random selection with and without replacement
Question:
Recently I needed to do weighted random selection of elements from a list, both with and without replacement. While there are well known and good algorithms for unweighted selection, and some for weighted selection without replacement (such as modifications of the reservoir algorithm), I couldn't find any good algorithms for weighted selection with replacement. I also wanted to avoid the reservoir method, as I was selecting a significant fraction of the list, which is small enough to hold in memory. Does anyone have any suggestions on the best approach in this situation? I have my own solutions, but I'm hoping to find something more efficient, simpler, or both.
[CreationDate 2008-12-09T13:15:00.000 | ViewCount 32,149 | Available Count 2 | Topics: Data Science and Machine Learning]
This is an old question for which numpy now offers an easy solution, so I thought I would mention it: numpy.random.choice (added in numpy 1.7) allows the sampling to be done with or without replacement and with given weights.
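For example (the items and weights here are invented):

    import numpy as np

    items = ["a", "b", "c", "d"]
    weights = [0.1, 0.2, 0.3, 0.4]   # must sum to 1 for the p= argument

    with_repl = np.random.choice(items, size=3, replace=True, p=weights)
    without_repl = np.random.choice(items, size=3, replace=False, p=weights)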
[AnswerCount 9 | is_accepted false | Q_Score 52 | Users Score 1 | A_Id 66,553,611 | Q_Id 352,670 | Score 0.022219 | CreationDate 2008-12-09T13:15:00.000 | ViewCount 32,149 | Available Count 2 | Topics: Data Science and Machine Learning]
Tags, Title, Question: identical to the Q_Id 352,670 entry above.
Maybe you can use Python Imaging Library (PIL). Also have a look at PyX, but this library is meant to output to PDF, ...
[AnswerCount 5 | is_accepted false | Q_Score 3 | Users Score 1 | A_Id 421,947 | Q_Id 418,835 | Score 0.039979]
Tags: python,python-3.x,plot,graphing,scatter-plot
Title: Are there any graph/plotting/anything-like-that libraries for Python 3.0?
Question:
As per the title. I am trying to create a simple scatter plot, but haven't found any Python 3.0 libraries that can do it. Note, this isn't for a website, so the web ones are a bit useless.
[CreationDate 2009-01-07T01:12:00.000 | ViewCount 1,540 | Available Count 1 | Topics: Data Science and Machine Learning]
In addition to the previous replies, I would like to introduce another function. numpy.random.shuffle as well as random.shuffle perform in-place shuffling. However, if you want to return a shuffled array numpy.random.permutation is the function to use.
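A quick illustration of the difference:

    import numpy as np

    a = np.arange(10)
    np.random.shuffle(a)            # shuffles a in place, returns None

    b = np.arange(10)
    c = np.random.permutation(b)    # b is untouched; c is a shuffled copy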
[AnswerCount 11 | is_accepted false | Q_Score 325 | Users Score 4 | A_Id 40,674,024 | Q_Id 473,973 | Score 0.072599]
Tags: python,arrays,random,shuffle
Title: Shuffle an array with python, randomize array item order with python
Question:
What's the easiest way to shuffle an array with python?
[CreationDate 2009-01-23T18:34:00.000 | ViewCount 285,289 | Available Count 1 | Topics: Data Science and Machine Learning, Python Basics and Environment]
I would open seven file streams, as accumulating the data might be quite memory intensive if there's a lot of it. Of course that is only an option if you can sort them live and don't first need all the data read to do the sorting.
[AnswerCount 2 | is_accepted false | Q_Score 0 | Users Score 2 | A_Id 555,159 | Q_Id 555,146 | Score 0.197375]
Tags: python,file-io
Title: Multiple output files
Question:
edit: Initially I was trying to be general, but it came out vague. I've included more detail below. I'm writing a script that pulls in data from two large CSV files, one of people's schedules and the other of information about their schedules. The data is mined and combined to eventually create pajek-format graphs for Monday-Sat of people's connections, with a seventh graph representing all connections over the week, with a string of 1's and 0's to indicate which days of the week the connections are made. This last graph is a break from the pajek format and is used by a separate program written by another researcher. Pajek format has a large header, and then lists connections as (vertex1 vertex2) unordered pairs. It's difficult to store these pairs in a dictionary, because there are often multiple connections on the same day between two pairs. I'm wondering what the best way to output to these graphs is. Should I make the large single graph and have a second script deconstruct it into several smaller graphs? Should I keep seven streams open and write to them as I determine a connection, or should I keep some other data structure for each and output them when I can (like a queue)?
[CreationDate 2009-02-17T00:29:00.000 | ViewCount 593 | Available Count 1 | Topics: Data Science and Machine Learning]
cPickle will be the fastest since it is saved in binary and no real Python code has to be parsed. Other advantages are that it is safer (since, unlike importing a .py file, loading the data does not execute commands) and you have no problems with setting $PYTHONPATH correctly.
[AnswerCount 6 | is_accepted false | Q_Score 11 | Users Score 1 | A_Id 556,961 | Q_Id 556,730 | Score 0.033321]
Tags: python,serialization,caching
Title: Python list serialization - fastest method
Question:
I need to load (de-serialize) a pre-computed list of integers from a file in a Python script (into a Python list). The list is large (up to millions of items), and I can choose the format I store it in, as long as loading is fastest. Which is the fastest method, and why? Using import on a .py file that just contains the list assigned to a variable; using cPickle's load; or some other method (perhaps numpy?). Also, how can one benchmark such things reliably? Addendum: measuring this reliably is difficult, because import is cached, so it can't be executed multiple times in a test. The loading with pickle also gets faster after the first time, probably because of page pre-caching by the OS. Loading 1 million numbers with cPickle takes 1.1 sec the first time it is run, and 0.2 sec on subsequent executions of the script. Intuitively I feel cPickle should be faster, but I'd appreciate numbers (this is quite a challenge to measure, I think). And yes, it's important for me that this performs quickly. Thanks
[CreationDate 2009-02-17T13:16:00.000 | ViewCount 8,359 | Available Count 1 | Topics: Data Science and Machine Learning, Other]
In addition to df's answer, if you want to know the specific prices that are above the base prices, you can do: prices[prices > (1.10 * base_prices)]
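Spelled out with invented data, boolean masking covers both counts and values:

    import numpy as np

    prices = np.array([105.0, 95.0, 130.0, 112.0])
    base_prices = np.array([100.0, 100.0, 100.0, 100.0])

    mask = prices > 1.10 * base_prices   # True where more than 10% above base
    how_many = np.sum(mask)              # count of such prices
    which_ones = prices[mask]            # the prices themselves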
[AnswerCount 4 | is_accepted false | Q_Score 1 | Users Score 1 | A_Id 570,197 | Q_Id 570,137 | Score 0.049958]
Tags: python,numpy
Title: Statistics with numpy
Question:
I am working at some plots and statistics for work and I am not sure how I can do some statistics using numpy: I have a list of prices and another one of basePrices. And I want to know how many prices are with X percent above basePrice, how many are with Y percent above basePrice. Is there a simple way to do that using numpy?
[CreationDate 2009-02-20T16:04:00.000 | ViewCount 1,834 | Available Count 1 | Topics: Data Science and Machine Learning]
Do you have the possibility of using Jython? I just mention it because using TreeMap, TreeSet, etc. there is trivial. Also, if you're coming from a Java background and you want to head in a Pythonic direction, Jython is wonderful for making the transition easier, though I recognise that use of TreeSet in this case would not be part of such a "transition". For Jython superusers I have a question myself: the blist package can't be imported because it is implemented as a C extension. But would there be any advantage of using blist instead of TreeSet? Can we generally assume the JVM uses algorithms which are essentially as good as those of CPython?
[AnswerCount 7 | is_accepted false | Q_Score 22 | Users Score 0 | A_Id 20,666,684 | Q_Id 628,192 | Score 0]
Tags: python,data-structures
Title: Python equivalent to java.util.SortedSet?
Question:
Does anybody know if Python has an equivalent to Java's SortedSet interface? Here's what I'm looking for: let's say I have an object of type foo, and I know how to compare two objects of type foo to see whether foo1 is "greater than" or "less than" foo2. I want a way of storing many objects of type foo in a list L, so that whenever I traverse the list L, I get the objects in order, according to the comparison method I define. Edit: I guess I could use a dictionary or a list and sort() it every time I modify it, but is this the best way?
[CreationDate 2009-03-09T21:58:00.000 | ViewCount 9,278 | Available Count 1 | Topics: Data Science and Machine Learning, Web Development, Python Basics and Environment]
Back-propagation works by minimizing the error. However, you can really minimize whatever you want. So, you could use back-prop-like update rules to find the artificial neural network inputs that minimize the output. This is a big question, sorry for the short answer. I should also add that my suggested approach sounds pretty inefficient compared to more established methods, and would only find a local minimum.
[AnswerCount 8 | is_accepted false | Q_Score 10 | Users Score 3 | A_Id 13,611,588 | Q_Id 652,283 | Score 0.07486]
Tags: python,artificial-intelligence,neural-network,minimization
Title: Can a neural network be used to find a functions minimum(a)?
Question:
I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest). Then I realized I didn't even know if a NN is good for minimization. What do you think?
[CreationDate 2009-03-16T21:53:00.000 | ViewCount 7,231 | Available Count 4 | Topics: Data Science and Machine Learning]
They're pretty bad for the purpose; one of the big problems of neural networks is that they get stuck in local minima. You might want to look into support vector machines instead.
[AnswerCount 8 | is_accepted false | Q_Score 10 | Users Score 0 | A_Id 652,348 | Q_Id 652,283 | Score 0 | CreationDate 2009-03-16T21:53:00.000 | ViewCount 7,231 | Available Count 4 | Topics: Data Science and Machine Learning]
Tags, Title, Question: identical to the Q_Id 652,283 entry above.
The training process of a back-propagation neural network works by minimizing the error from the optimal result. But having a trained neural network finding the minimum of an unknown function would be pretty hard. If you restrict the problem to a specific function class, it could work, and be pretty quick too. Neural networks are good at finding patterns, if there are any.
[AnswerCount 8 | is_accepted false | Q_Score 10 | Users Score 1 | A_Id 652,327 | Q_Id 652,283 | Score 0.024995 | CreationDate 2009-03-16T21:53:00.000 | ViewCount 7,231 | Available Count 4 | Topics: Data Science and Machine Learning]
Tags, Title, Question: identical to the Q_Id 652,283 entry above.
Neural networks are classifiers. They separate two classes of data elements. They learn this separation (usually) by preclassified data elements. Thus, I say: No, unless you do a major stretch beyond breakage.
[AnswerCount 8 | is_accepted true | Q_Score 10 | Users Score -5 | A_Id 652,362 | Q_Id 652,283 | Score 1.2 | CreationDate 2009-03-16T21:53:00.000 | ViewCount 7,231 | Available Count 4 | Topics: Data Science and Machine Learning]
Tags, Title, Question: identical to the Q_Id 652,283 entry above.
I found that array.fromfile is the fastest method for homogeneous data.
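A sketch of that approach; the header size and item count are placeholders you would take from your own file format:

    from array import array

    HEADER_SIZE = 128    # hypothetical: depends on your file's header section
    N_ITEMS = 1024       # hypothetical: number of uchars in the data section

    a = array("B")       # "B" = unsigned char, matching the uchar sequences
    with open("data.bin", "rb") as f:
        f.seek(HEADER_SIZE)       # skip the header section
        a.fromfile(f, N_ITEMS)    # bulk-read the uchar data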
[AnswerCount 4 | is_accepted false | Q_Score 6 | Users Score 0 | A_Id 703,588 | Q_Id 703,262 | Score 0]
Tags: python,input,binaryfiles
Title: Most efficient way of loading formatted binary files in Python
Question:
I have binary files no larger than 20Mb in size that have a header section and then a data section containing sequences of uchars. I have Numpy, SciPy, etc. and each library has different ways of loading in the data. Any suggestions for the most efficient methods I should use?
[CreationDate 2009-03-31T22:03:00.000 | ViewCount 450 | Available Count 1 | Topics: Data Science and Machine Learning, Other]
There was a company offering a dump of the social graph, but it was taken down and no longer available. As you already realized - it is kind of hard, as it is changing all the time. I would recommend checking out their social_graph api methods as they give the most info with the least API calls.
[AnswerCount 3 | is_accepted true | Q_Score 3 | Users Score 0 | A_Id 817,451 | Q_Id 785,327 | Score 1.2]
Tags: python,twitter,dump,social-graph
Title: Twitter Data Mining: Degrees of separation
Question:
What ready available algorithms could I use to data mine twitter to find out the degrees of separation between 2 people on twitter. How does it change when the social graph keeps changing and updating constantly. And then, is there any dump of twitter social graph data which I could use rather than making so many API calls to start over.
[CreationDate 2009-04-24T10:30:00.000 | ViewCount 2,436 | Available Count 1 | Topics: Data Science and Machine Learning]
I don't believe there is anything standard (but I could be wrong, I don't keep up with python that closely). It's very easy to implement though, and you may want to build on top of the numpy array as a container for it anyway, which gives you lots of good (and efficient) bits and pieces.
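For instance, a plain 3-element numpy array already behaves like a Vector3 for the usual operations:

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])
    w = np.array([4.0, 5.0, 6.0])

    length = np.linalg.norm(v)    # magnitude
    d = np.dot(v, w)              # dot product
    c = np.cross(v, w)            # cross product
    unit = v / np.linalg.norm(v)  # normalized copy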
[AnswerCount 2 | is_accepted false | Q_Score 6 | Users Score 2 | A_Id 786,758 | Q_Id 786,691 | Score 0.197375]
Tags: python,vector
Title: Is there a Vector3 type in Python?
Question:
I quickly checked NumPy, but it looks like it's using arrays as vectors? I am looking for a proper Vector3 type that I can instantiate and work on.
[CreationDate 2009-04-24T16:46:00.000 | ViewCount 12,341 | Available Count 1 | Topics: Data Science and Machine Learning]
There is a great library for this task called pymatreader. Just do as follows:
1. Install the package: pip install pymatreader
2. Import the relevant function of this package: from pymatreader import read_mat
3. Use the function to read the matlab struct: data = read_mat('matlab_struct.mat')
4. Use data.keys() to locate where the data is actually stored. The keys will usually look like: dict_keys(['__header__', '__version__', '__globals__', 'data_opp']), where data_opp is the actual key which stores the data. The name of this key can of course change between different files.
5. Last step - create your dataframe: my_df = pd.DataFrame(data['data_opp'])
That's it :)
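The steps above, collected into one snippet (the file name and the 'data_opp' key are the answer's own examples, not fixed names):

    import pandas as pd
    from pymatreader import read_mat

    data = read_mat("matlab_struct.mat")
    print(data.keys())                      # find where the data actually lives
    my_df = pd.DataFrame(data["data_opp"])  # swap in the key your file uses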
[AnswerCount 12 | is_accepted false | Q_Score 508 | Users Score 12 | A_Id 66,453,257 | Q_Id 874,461 | Score 1]
Tags: python,matlab,file-io,scipy,mat-file
Title: Read .mat files in Python
Question:
Is it possible to read binary MATLAB .mat files in Python? I've seen that SciPy has alleged support for reading .mat files, but I'm unsuccessful with it. I installed SciPy version 0.7.0, and I can't find the loadmat() method.
[CreationDate 2009-05-17T12:02:00.000 | ViewCount 566,507 | Available Count 1 | Topics: Data Science and Machine Learning]
"I'm looking for a simple solution using Python to store data as a flat file, such that each line is a string representation of an array that can be easily parsed." Is the data only ever going to be parsed by Python programs? If not, then I'd avoid pickle et al. (shelve and marshal), since they're very Python specific. JSON and YAML have the important advantage that parsers are easily available for most any language.
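For example, one JSON array per line gives an append-friendly flat file that nearly any language can parse (the file name is arbitrary):

    import json

    rows = [["a", 1, 2.5], ["b", 3, 4.5]]

    with open("data.jsonl", "a") as f:   # "a" appends; nothing is overwritten
        for row in rows:
            f.write(json.dumps(row) + "\n")

    with open("data.jsonl") as f:
        parsed = [json.loads(line) for line in f]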
[AnswerCount 5 | is_accepted false | Q_Score 3 | Users Score 2 | A_Id 875,525 | Q_Id 875,228 | Score 0.07983]
Tags: python,file-io,csv,multidimensional-array,fileparsing
Title: Simple data storing in Python
Question:
I'm looking for a simple solution using Python to store data as a flat file, such that each line is a string representation of an array that can be easily parsed. I'm sure Python has a library for doing such a task easily, but so far all the approaches I have found seemed like it would have been sloppy to get them to work, and I'm sure there is a better approach. So far I've tried: the array.tofile() method, but I couldn't figure out how to get it to work with nested arrays of strings; it seemed geared towards integer data. Lists and sets do not have a tofile method built in, so I would have had to parse and encode them manually. CSV seemed like a good approach, but this would also require manually parsing it, and did not allow me to simply append new lines at the end - so any new calls to the CSVWriter would overwrite the existing file data. I'm really trying to avoid using databases (maybe SQLite, but it seems a bit overkill) because I'm trying to develop this to have no software prerequisites besides Python.
[CreationDate 2009-05-17T19:00:00.000 | ViewCount 5,460 | Available Count 1 | Topics: Data Science and Machine Learning, Python Basics and Environment]
I attach a simple routine to convert a .npy to an image. Works 100% and it is a piece of cake!

    import numpy as np
    import matplotlib.image   # Pillow must be installed for JPEG output

    img = np.load('flair1_slice75.npy')
    matplotlib.image.imsave('G1_flair_75.jpeg', img)
[AnswerCount 21 | is_accepted false | Q_Score 370 | Users Score 0 | A_Id 72,331,083 | Q_Id 902,761 | Score 0]
Tags: python,image,numpy
Title: Saving a Numpy array as an image
Question:
I have a matrix in the type of a Numpy array. How would I write it to disk it as an image? Any format works (png, jpeg, bmp...). One important constraint is that PIL is not present.
[CreationDate 2009-05-24T00:08:00.000 | ViewCount 767,103 | Available Count 1 | Topics: Data Science and Machine Learning]
How about using a formula like r' = r - (g + b)?
[AnswerCount 3 | is_accepted false | Q_Score 1 | Users Score 0 | A_Id 968,351 | Q_Id 968,317 | Score 0]
Tags: python,image-processing,opencv,color-space
Title: How to isolate a single color in an image
Question:
I'm using the Python OpenCV bindings, and at the moment I am trying to isolate a color range. That means I want to filter out everything that is not reddish. I tried to take only the red color channel, but this includes the white spaces in the image too. What is a good way to do that?
[CreationDate 2009-06-09T05:36:00.000 | ViewCount 2,798 | Available Count 2 | Topics: Data Science and Machine Learning]
Use the HSV colorspace. Select pixels that have an H value in the range that you consider to contain "red," and an S value large enough that you do not consider it to be neutral, maroon, brown, or pink. You might also need to throw out pixels with low V's. The H dimension is a circle, and red is right where the circle is split, so your H range will be in two parts, one near the top of the H range, the other near 0.
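A sketch of that approach with the cv2 bindings (the threshold values are illustrative; note that OpenCV's 8-bit hue channel runs 0-179, so the wrap point is 180 rather than 255):

    import cv2

    img = cv2.imread("input.jpg")               # BGR image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # red straddles the hue wrap-around, so take two ranges and OR them
    low_reds = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
    high_reds = cv2.inRange(hsv, (170, 70, 50), (179, 255, 255))
    mask = cv2.bitwise_or(low_reds, high_reds)

    red_only = cv2.bitwise_and(img, img, mask=mask)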
[AnswerCount 3 | is_accepted false | Q_Score 1 | Users Score 1 | A_Id 2,204,755 | Q_Id 968,317 | Score 0.066568 | CreationDate 2009-06-09T05:36:00.000 | ViewCount 2,798 | Available Count 2 | Topics: Data Science and Machine Learning]
Tags, Title, Question: identical to the Q_Id 968,317 entry above.
I don't really know about NER, but judging from that example, you could make an algorithm that searches for capital letters in the words, or something like that. For that I would recommend regex as the easiest solution to implement if you're thinking small. Another option is to compare the texts with a database, which would match strings pre-identified as tags of interest. My 5 cents.
[AnswerCount 6 | is_accepted false | Q_Score 22 | Users Score -11 | A_Id 1,026,976 | Q_Id 1,026,925 | Score -1]
Tags: php,python,extract,analysis,named-entity-recognition
Title: Algorithms for named entity recognition
Question:
I would like to use named entity recognition (NER) to find adequate tags for texts in a database. I know there is a Wikipedia article about this and lots of other pages describing NER; I would preferably hear something about this topic from you: What experiences have you had with the various algorithms? Which algorithm would you recommend? Which algorithm is the easiest to implement (PHP/Python)? How do the algorithms work? Is manual training necessary? Example: "Last year, I was in London where I saw Barack Obama." => Tags: London, Barack Obama. I hope you can help me. Thank you very much in advance!
[CreationDate 2009-06-22T12:26:00.000 | ViewCount 9,656 | Available Count 1 | Topics: Data Science and Machine Learning]
I don't think there are methods that give you a metric for exactly what you want, but the methods it has, like RMS, take you a long way there. To do things with color, you can split the image into one layer per color and get the RMS on each layer, which tells you some of the things you want to know. You can also convert the image in different ways so that you only retain color information, etc.
[AnswerCount 1 | is_accepted false | Q_Score 4 | Users Score 1 | A_Id 1,037,217 | Q_Id 1,037,090 | Score 0.197375]
Tags: python,python-imaging-library
Title: Simple Image Metrics with PIL
Question:
I want to process uploaded photos with PIL and determine some "soft" image metrics like: is the image contrastful or dull? colorful or monochrome? bright or dark? is the image warm or cold (regarding light temperature)? is there a dominant hue? the metrics should be measured in a rating-style, e.g. colorful++++ for a very colorful photo, colorful+ for a rather monochrome image. I already noticed PIL's ImageStat Module, that calculates some interesting values for my metrics, e.g. RMS of histogram etc. However, this module is rather poorly documented, so I'm looking for more concrete algorithms to determine these metrics.
[CreationDate 2009-06-24T08:30:00.000 | ViewCount 609 | Available Count 1 | Topics: Data Science and Machine Learning]
test = csv.reader(c.split('\n'))
[AnswerCount 4 | is_accepted false | Q_Score 7 | Users Score 2 | A_Id 1,083,367 | Q_Id 1,083,364 | Score 0.099668]
Tags: python,csv
Title: python csv question
Question:
I'm just testing out the csv component in Python, and I am having some trouble with it. I have a fairly standard csv string, and the default options all seem to fit with my test, but shouldn't the result group 1, 2, 3, 4 in a row and 5, 6, 7, 8 in a row? Thanks a lot for any enlightenment provided!

    Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39)
    [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import csv
    >>> c = "1, 2, 3, 4\n 5, 6, 7, 8\n"
    >>> test = csv.reader(c)
    >>> for t in test:
    ...     print t
    ...
    ['1']
    ['', '']
    [' ']
    ['2']
    ['', '']
    [' ']
    ['3']
    ['', '']
    [' ']
    ['4']
    []
    [' ']
    ['5']
    ['', '']
    [' ']
    ['6']
    ['', '']
    [' ']
    ['7']
    ['', '']
    [' ']
    ['8']
    []
    >>>
[CreationDate 2009-07-05T02:49:00.000 | ViewCount 850 | Available Count 2 | Topics: Data Science and Machine Learning, Python Basics and Environment]
csv.reader expects an iterable. You gave it "1, 2, 3, 4\n 5, 6, 7, 8\n"; iteration produces characters. Try giving it ["1, 2, 3, 4\n", "5, 6, 7, 8\n"] -- iteration will produce lines.
[AnswerCount 4 | is_accepted true | Q_Score 7 | Users Score 8 | A_Id 1,083,376 | Q_Id 1,083,364 | Score 1.2 | CreationDate 2009-07-05T02:49:00.000 | ViewCount 850 | Available Count 2 | Topics: Data Science and Machine Learning, Python Basics and Environment]
Tags, Title, Question: identical to the Q_Id 1,083,364 entry above.
You could write the matrix in Python to a CSV file and read it in MATLAB using csvread.
7
0
0
false
51
python,matlab,file-io,import,matrix
Matrix from Python to MATLAB
4
0
1,095,296
1
1,095,265
0.113791
I'm working with Python and MATLAB right now and I have a 2D array in Python that I need to write to a file and then be able to read it into MATLAB as a matrix. Any ideas on how to do this? Thanks!
1
0
2009-07-07T22:55:00.000
0
0
69,405
2
I would probably use numpy.savetxt('yourfile.mat',yourarray) in Python and then yourarray = load('yourfile.mat') in MATLAB.
7
0
0
false
51
python,matlab,file-io,import,matrix
Matrix from Python to MATLAB
5
0
7,737,622
1
1,095,265
0.141893
I'm working with Python and MATLAB right now and I have a 2D array in Python that I need to write to a file and then be able to read it into MATLAB as a matrix. Any ideas on how to do this? Thanks!
1
0
2009-07-07T22:55:00.000
0
0
69,405
2
To work out the "worst" case, instead of using an entropy measure I look at the partition that has the maximum number of elements, then select the try that minimizes this maximum; this gives me the minimum number of remaining possibilities when I am not lucky (which happens in the worst case). This always solves the standard case in 5 attempts, but it is not a full proof that 5 attempts are really needed, because it could happen that at the next step a bigger set of possibilities would have given a better result than a smaller one (because it is easier to distinguish between its elements). Though for the "standard game" with 1680 possibilities I have a simple formal proof: for the first step, the try that gives the minimum for the partition with the maximum number is 0,0,1,1: 256. Playing 0,0,1,2 is not as good: 276. For each subsequent try there are 14 outcomes (3 placed and 1 misplaced is impossible) and 4 placed gives a partition of 1. This means that in the best case (all partitions the same size) we will get a maximum partition that is at least (number of possibilities - 1)/13 (rounded up, because we are dealing with integers, so necessarily some partitions will be smaller and others larger, and the maximum is rounded up). If I apply this: after the first play (0,0,1,1), 256 possibilities are left. After the second try: 20 = (256-1)/13. After the third try: 2 = (20-1)/13. Then I have no choice but to try one of the two left for the 4th try. If I am unlucky, a fifth try is needed. This proves we need at least 5 tries (but not that this is enough).
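A sketch of the minimax selection step described above, for the six-colour game from the question; for brevity it only considers remaining candidates as guesses, which is a simplification of the full strategy:

    from itertools import permutations

    def score(guess, secret):
        blacks = sum(g == s for g, s in zip(guess, secret))
        whites = sum(min(guess.count(c), secret.count(c)) for c in set(guess)) - blacks
        return blacks, whites

    def best_guess(candidates):
        # pick the guess whose largest feedback partition is smallest
        def worst_partition(guess):
            sizes = {}
            for secret in candidates:
                fb = score(guess, secret)
                sizes[fb] = sizes.get(fb, 0) + 1
            return max(sizes.values())
        return min(candidates, key=worst_partition)

    candidates = list(permutations(range(6), 4))  # four distinct colours out of six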
9
0
0
false
38
python,algorithm
How to solve the "Mastermind" guessing game?
0
0
9,515,347
0
1,185,634
0
How would you create an algorithm to solve the following puzzle, "Mastermind"? Your opponent has chosen four different colours from a set of six (yellow, blue, green, red, orange, purple). You must guess which they have chosen, and in what order. After each guess, your opponent tells you how many (but not which) of the colours you guessed were the right colour in the right place ["blacks"] and how many (but not which) were the right colour but in the wrong place ["whites"]. The game ends when you guess correctly (4 blacks, 0 whites). For example, if your opponent has chosen (blue, green, orange, red), and you guess (yellow, blue, green, red), you will get one "black" (for the red), and two whites (for the blue and green). You would get the same score for guessing (blue, orange, red, purple). I'm interested in what algorithm you would choose, and (optionally) how you translate that into code (preferably Python). I'm interested in coded solutions that are: Clear (easily understood) Concise Efficient (fast in making a guess) Effective (least number of guesses to solve the puzzle) Flexible (can easily answer questions about the algorithm, e.g. what is its worst case?) General (can be easily adapted to other types of puzzle than Mastermind) I'm happy with an algorithm that's very effective but not very efficient (provided it's not just poorly implemented!); however, a very efficient and effective algorithm implemented inflexibly and impenetrably is not of use. I have my own (detailed) solution in Python which I have posted, but this is by no means the only or best approach, so please post more! I'm not expecting an essay ;)
1
0
2009-07-26T21:43:00.000
0
0
37,167
1
I apply hierarchical Bayes models in R in combination with JAGS (Linux) or sometimes WinBUGS (Windows, or Wine). Check out the book by Andrew Gelman, as referred to above.
7
0
0
false
12
python,r,statistics
Hierarchical Bayes for R or Python
2
0
1,832,314
1
1,191,689
0.057081
Hierarchical Bayes models are commonly used in Marketing, Political Science, and Econometrics. Yet, the only package I know of is bayesm, which is really a companion to a book (Bayesian Statistics and Marketing, by Rossi, et al.) Am I missing something? Is there a software package for R or Python doing the job out there, and/or a worked-out example in the associated language?
1
0
2009-07-28T02:43:00.000
0
0
8,690
3
This answer comes almost ten years late, but it will hopefully help someone in the future. The brms package in R is a very good option for Bayesian hierarchical/multilevel models, using a syntax very similar to the lme4 package. The brms package uses the probabilistic programming language Stan in the back end to do the inference. Stan uses more advanced sampling methods than JAGS and BUGS, such as Hamiltonian Monte Carlo, which provides more efficient and reliable samples from the posterior distribution. If you wish to model more complicated phenomena, then you can use the rstan package to compile Stan models from R. There is also the Python alternative PyStan. However, in order to do this, you must learn how to use Stan.
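For the PyStan route, here is a sketch of the classic eight-schools hierarchical model (the PyStan 2 API is assumed):

    import pystan

    schools_code = """
    data {
        int<lower=0> J;
        real y[J];
        real<lower=0> sigma[J];
    }
    parameters {
        real mu;
        real<lower=0> tau;
        real theta[J];
    }
    model {
        theta ~ normal(mu, tau);   // school effects drawn from a shared prior
        y ~ normal(theta, sigma);  // observed estimates around the true effects
    }
    """
    data = {'J': 8,
            'y': [28, 8, -3, 7, -1, 1, 18, 12],
            'sigma': [15, 10, 16, 11, 9, 11, 10, 18]}
    fit = pystan.StanModel(model_code=schools_code).sampling(data=data, iter=1000, chains=4)
    print(fit)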
7
0
0
false
12
python,r,statistics
Hierarchical Bayes for R or Python
0
0
55,978,470
1
1,191,689
0
Hierarchical Bayes models are commonly used in Marketing, Political Science, and Econometrics. Yet, the only package I know of is bayesm, which is really a companion to a book (Bayesian Statistics and Marketing, by Rossi, et al.) Am I missing something? Is there a software package for R or Python doing the job out there, and/or a worked-out example in the associated language?
1
0
2009-07-28T02:43:00.000
0
0
8,690
3
The lme4 package, which estimates hierarchical models using frequentist methods, has a function called mcmcsamp that allows you to sample from the posterior distribution of the model using MCMC. This currently works only for linear models, quite unfortunately.
7
0
0
false
12
python,r,statistics
Hierarchical Bayes for R or Python
0
0
1,197,766
1
1,191,689
0
Hierarchical Bayes models are commonly used in Marketing, Political Science, and Econometrics. Yet, the only package I know of is bayesm, which is really a companion to a book (Bayesian Statistics and Marketing, by Rossi, et al.) Am I missing something? Is there a software package for R or Python doing the job out there, and/or a worked-out example in the associated language?
1
0
2009-07-28T02:43:00.000
0
0
8,690
3
Probably either a dict of lists or a list of dicts. Personally, I'd go with the former. So: parse the heading row of the CSV to get a dict from column heading to column index. Then, when you're reading through each row, work out what index you're at, grab the column heading, and append to the end of the list for that column heading.
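A minimal sketch of that approach (the input filename is hypothetical):

    import csv

    columns = {}                   # column heading -> list of values, in row order
    with open('data.csv') as f:    # hypothetical input file
        reader = csv.reader(f)
        headings = next(reader)
        for h in headings:
            columns[h] = []
        for row in reader:
            for h, value in zip(headings, row):
                columns[h].append(value)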
6
0
0
false
2
python,file,csv
Whats the best way of putting tabular data into python?
0
0
1,199,371
0
1,199,350
0
I have a CSV file which I am processing and putting the processed data into a text file. The entire data that goes into the text file is one big table(comma separated instead of space). My problem is How do I remember the column into which a piece of data goes in the text file? For eg. Assume there is a column called 'col'. I just put some data under col. Now after a few iterations, I want to put some other piece of data under col again (In a different row). How do I know where exactly col comes? (And there are a lot of columns like this.) Hope I am not too vague...
1
0
2009-07-29T10:44:00.000
0
1
798
2
Is SQLite an option for you? I know that you have CSV input and output. However, you can import all the data into the SQLite database. Then do all the necessary processing with the power of SQL. Then you can export the results as CSV.
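A rough sketch of that workflow, assuming a two-column input file (names and schema are hypothetical):

    import csv
    import sqlite3

    conn = sqlite3.connect(':memory:')   # or a file on disk
    conn.execute('CREATE TABLE data (a REAL, b REAL)')
    with open('input.csv') as f:         # hypothetical input file
        conn.executemany('INSERT INTO data VALUES (?, ?)', csv.reader(f))
    for row in conn.execute('SELECT a, b, a + b FROM data'):
        print(row)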
6
0
0
false
2
python,file,csv
Whats the best way of putting tabular data into python?
1
0
1,199,409
0
1,199,350
0.033321
I have a CSV file which I am processing and putting the processed data into a text file. The entire data that goes into the text file is one big table(comma separated instead of space). My problem is How do I remember the column into which a piece of data goes in the text file? For eg. Assume there is a column called 'col'. I just put some data under col. Now after a few iterations, I want to put some other piece of data under col again (In a different row). How do I know where exactly col comes? (And there are a lot of columns like this.) Hope I am not too vague...
1
0
2009-07-29T10:44:00.000
0
1
798
2
If you're using numpy 1.3, there's also numpy.lib.recfunctions.append_fields(). For many installations, you'll need to import numpy.lib.recfunctions explicitly to access this; a plain import numpy will not expose numpy.lib.recfunctions.
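A small sketch of append_fields in action:

    import numpy as np
    from numpy.lib import recfunctions   # a plain "import numpy" does not expose this

    a = np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
    b = recfunctions.append_fields(a, 'z', data=np.array([5.0, 6.0]), usemask=False)
    print(b['z'])   # [ 5.  6.]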
2
0
0
true
22
python,numpy
Adding a field to a structured numpy array
20
0
1,208,039
0
1,201,817
1.2
What is the cleanest way to add a field to a structured numpy array? Can it be done destructively, or is it necessary to create a new array and copy over the existing fields? Are the contents of each field stored contiguously in memory so that such copying can be done efficiently?
1
0
2009-07-29T17:24:00.000
0
0
9,033
1
You have to encode your input and your output as something that can be represented by the neural network's units (for example, 1 for "x has a certain property p" and -1 for "x doesn't have property p" if your units' range is [-1, 1]). The way you encode your input and decode your output depends on what you want to train the neural network for. Moreover, there are many "neural network" algorithms and learning rules for different tasks (backpropagation, Boltzmann machines, self-organizing maps).
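A hedged illustration of such an encoding; the field names and scaling constants are hypothetical:

    import numpy as np

    def encode(temp_c, humidity_pct, weather, categories=('sun', 'rain', 'snow')):
        # categorical field: one unit per category, +1/-1 as described above
        one_hot = [1.0 if weather == c else -1.0 for c in categories]
        # continuous fields scaled into roughly [-1, 1]
        return np.array([temp_c / 50.0, humidity_pct / 50.0 - 1.0] + one_hot)

    print(encode(25.0, 80.0, 'rain'))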
4
0
0
true
3
python,neural-network
Neural net input/output
3
0
1,205,509
0
1,205,449
1.2
Can anyone explain to me how to handle more complex data sets like team stats, weather, dice, or complex number types? I understand all the math and how everything works; I just don't know how to input more complex data, and then how to read the data it spits out. If someone could provide examples in Python, that would be a big help.
1
0
2009-07-30T09:22:00.000
0
0
671
4
You have to add the number of input and output units the problem requires. If the unknown function to approximate depends on n parameters, you will have n input units. The number of output units depends on the nature of the function: for real functions with n real parameters you will have one output unit, while in some problems, for example forecasting time series, you will have m output units for the m successive values of the function. The encoding is important and depends on the chosen algorithm. For example, in backpropagation for feedforward nets it is better, if possible, to transform as many features as you can into discrete inputs, as for classification tasks. Another aspect of the encoding is that you have to evaluate the number of input and hidden units as a function of the amount of data. Too many units relative to the data may give a poor approximation due to the curse of dimensionality. In some cases, you may have to aggregate some of the input data in some way to avoid that problem, or use a reduction mechanism such as PCA.
4
0
0
false
3
python,neural-network
Neural net input/output
0
0
20,683,280
0
1,205,449
0
Can anyone explain to me how to handle more complex data sets like team stats, weather, dice, or complex number types? I understand all the math and how everything works; I just don't know how to input more complex data, and then how to read the data it spits out. If someone could provide examples in Python, that would be a big help.
1
0
2009-07-30T09:22:00.000
0
0
671
4
Your features must be decomposed into parts that can be represented as real numbers. The magic of a neural net is that it's a black box; the correct associations will be made (with internal weights) during training. Inputs: choose as few features as are needed to accurately describe the situation, then decompose each into a set of real-valued numbers. Weather: [temp today, humidity today, temp yesterday, humidity yesterday...]; the association between today's temp and today's humidity is made internally. Team stats: [ave height, ave weight, max height, top score,...]. Dice: not sure I understand this one; do you mean how to encode discrete values?* Complex number: [a,ai,b,bi,...]. * Discrete-valued features are tricky, but can still be encoded as (0.0,1.0). The problem is they don't provide a gradient to learn the threshold on. Outputs: you decide what you want the output to mean, and then encode your training examples in that format. The fewer output values, the easier to train. Weather: [tomorrow's chance of rain, tomorrow's temp,...]** Team stats: [chance of winning, chance of winning by more than 20,...]. Complex number: [x,xi,...]. ** Here your training vectors would be: 1.0 if it rained the next day, 0.0 if it didn't. Of course, whether or not the problem can actually be modeled by a neural net is a different question.
4
0
0
false
3
python,neural-network
Neural net input/output
2
0
1,207,505
0
1,205,449
0.099668
Can anyone explain to me how to handle more complex data sets like team stats, weather, dice, or complex number types? I understand all the math and how everything works; I just don't know how to input more complex data, and then how to read the data it spits out. If someone could provide examples in Python, that would be a big help.
1
0
2009-07-30T09:22:00.000
0
0
671
4
More complex data usually means adding more neurons in the input and output layers. You can feed each "field" of your record, properly encoded as a real value (normalized, etc.), to an input neuron, or you can even decompose it further into bit fields, assigning saturated inputs of 1 or 0 to the neurons. For the output, it depends on how you train the neural network; it will try to mimic the training-set outputs.
4
0
0
false
3
python,neural-network
Neural net input/output
0
0
1,206,597
0
1,205,449
0
Can anyone explain to me how to handle more complex data sets like team stats, weather, dice, or complex number types? I understand all the math and how everything works; I just don't know how to input more complex data, and then how to read the data it spits out. If someone could provide examples in Python, that would be a big help.
1
0
2009-07-30T09:22:00.000
0
0
671
4
Yes, SciPy/Numpy is mostly concerned with arrays. If you can tolerate an approximate solution, and your functions only operate over a finite range of values, you can fill arrays with the sampled values and convolve the arrays. If you want something more "correct" calculus-wise, you would probably need a powerful symbolic solver (Mathematica, Maple...).
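A sketch of the sampling approach, scaling the discrete convolution by the grid step so it approximates the integral (the grid bounds and resolution are assumptions):

    import numpy as np

    def convolve_functions(f, g, lo, hi, n=1000):
        x = np.linspace(lo, hi, n)
        dx = x[1] - x[0]
        # discrete convolution times dx approximates the convolution integral
        return np.convolve(f(x), g(x)) * dx

    result = convolve_functions(lambda x: np.exp(-x**2), lambda x: np.exp(-x**2), -5.0, 5.0)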
2
0
0
true
2
python,convolution
Convolution of two functions in Python
1
0
1,226,509
0
1,222,147
1.2
I will have to implement a convolution of two functions in Python, but SciPy/Numpy appear to have functions only for the convolution of two arrays. Before I try to implement this by using the regular integration expression of convolution, I would like to ask if someone knows of an already available module that performs these operations. Failing that, which of the several kinds of integration that SciPy provides is best suited for this? Thanks!
1
0
2009-08-03T12:42:00.000
0
0
8,768
1
Are you likely to need all rows in order, or will you want only specific known rows? If you need to read all the data there isn't much advantage to having it in a database. Edit: if the data fits in memory then a simple CSV file is fine. Plain-text data formats are always easier to deal with than opaque ones if you can use them.
4
0
0
false
0
python,database,database-design,file-io
Store data series in file or database if I want to do row level math operations?
0
1
1,241,784
0
1,241,758
0
I'm developing an app that handles sets of financial series data (input as csv or open document); one set could be, say, 10's x 1000's of double-precision numbers (simplifying, but that's what matters). I plan to do operations on that data (e.g. sum, difference, averages etc.) as well as generation of, say, another column based on computations on the input. This will be between columns (row-level operations) on one set and also between columns on many (potentially all) sets at the row level. I plan to write it in Python and it will eventually need an intranet-facing interface to display the results/graphs etc.; for now, csv output based on some input parameters will suffice. What is the best way to store and manipulate the data? So far I see my choices as being either (1) to write csv files to disk and trawl through them to do the math or (2) to put them into a database and rely on the database to handle the math. My main concern is speed/performance as the number of datasets grows, as there will be inter-dataset row-level math that needs to be done. -Has anyone had experience going down either path and what are the pitfalls/gotchas that I should be aware of? -What are the reasons why one should be chosen over another? -Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design? -Is there any project or framework out there to help with this type of task? -Edit- More info: The rows will all be read in order, BUT I may need to do some resampling/interpolation to match the differing input lengths as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table/memory somewhere to hold the interpolated/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample/interpolate to a common higher length) or just regenerate it each time it's needed.
1
0
2009-08-06T21:58:00.000
0
0
581
3
What matters most is whether all the data will fit simultaneously into memory. From the sizes you give, it seems that this is easily the case (a few megabytes at worst). If so, I would discourage using a relational database and do all operations directly in Python. Depending on what other processing you need, I would probably rather use binary pickles than CSV.
4
0
0
false
0
python,database,database-design,file-io
Store data series in file or database if I want to do row level math operations?
0
1
1,241,787
0
1,241,758
0
I'm developing an app that handles sets of financial series data (input as csv or open document); one set could be, say, 10's x 1000's of double-precision numbers (simplifying, but that's what matters). I plan to do operations on that data (e.g. sum, difference, averages etc.) as well as generation of, say, another column based on computations on the input. This will be between columns (row-level operations) on one set and also between columns on many (potentially all) sets at the row level. I plan to write it in Python and it will eventually need an intranet-facing interface to display the results/graphs etc.; for now, csv output based on some input parameters will suffice. What is the best way to store and manipulate the data? So far I see my choices as being either (1) to write csv files to disk and trawl through them to do the math or (2) to put them into a database and rely on the database to handle the math. My main concern is speed/performance as the number of datasets grows, as there will be inter-dataset row-level math that needs to be done. -Has anyone had experience going down either path and what are the pitfalls/gotchas that I should be aware of? -What are the reasons why one should be chosen over another? -Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design? -Is there any project or framework out there to help with this type of task? -Edit- More info: The rows will all be read in order, BUT I may need to do some resampling/interpolation to match the differing input lengths as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table/memory somewhere to hold the interpolated/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample/interpolate to a common higher length) or just regenerate it each time it's needed.
1
0
2009-08-06T21:58:00.000
0
0
581
3
"I plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input." This is the standard use case for a data warehouse star-schema design. Buy Kimball's The Data Warehouse Toolkit. Read (and understand) the star schema before doing anything else. "What is the best way to store the data and manipulate?" A Star Schema. You can implement this as flat files (CSV is fine) or RDBMS. If you use flat files, you write simple loops to do the math. If you use an RDBMS you write simple SQL and simple loops. "My main concern is speed/performance as the number of datasets grows" Nothing is as fast as a flat file. Period. RDBMS is slower. The RDBMS value proposition stems from SQL being a relatively simple way to specify SELECT SUM(), COUNT() FROM fact JOIN dimension WHERE filter GROUP BY dimension attribute. Python isn't as terse as SQL, but it's just as fast and just as flexible. Python competes against SQL. "pitfalls/gotchas that I should be aware of?" DB design. If you don't get the star schema and how to separate facts from dimensions, all approaches are doomed. Once you separate facts from dimensions, all approaches are approximately equal. "What are the reasons why one should be chosen over another?" RDBMS slow and flexible. Flat files fast and (sometimes) less flexible. Python levels the playing field. "Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design?" Star Schema: central fact table surrounded by dimension tables. Nothing beats it. "Is there any project or framework out there to help with this type of task?" Not really.
4
0
0
true
0
python,database,database-design,file-io
Store data series in file or database if I want to do row level math operations?
2
1
1,245,169
0
1,241,758
1.2
I'm developing an app that handles sets of financial series data (input as csv or open document); one set could be, say, 10's x 1000's of double-precision numbers (simplifying, but that's what matters). I plan to do operations on that data (e.g. sum, difference, averages etc.) as well as generation of, say, another column based on computations on the input. This will be between columns (row-level operations) on one set and also between columns on many (potentially all) sets at the row level. I plan to write it in Python and it will eventually need an intranet-facing interface to display the results/graphs etc.; for now, csv output based on some input parameters will suffice. What is the best way to store and manipulate the data? So far I see my choices as being either (1) to write csv files to disk and trawl through them to do the math or (2) to put them into a database and rely on the database to handle the math. My main concern is speed/performance as the number of datasets grows, as there will be inter-dataset row-level math that needs to be done. -Has anyone had experience going down either path and what are the pitfalls/gotchas that I should be aware of? -What are the reasons why one should be chosen over another? -Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design? -Is there any project or framework out there to help with this type of task? -Edit- More info: The rows will all be read in order, BUT I may need to do some resampling/interpolation to match the differing input lengths as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table/memory somewhere to hold the interpolated/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample/interpolate to a common higher length) or just regenerate it each time it's needed.
1
0
2009-08-06T21:58:00.000
0
0
581
3
An ordered tree is usually better for these cases, but random access is going to be log(n). You should also take insertion and removal costs into account...
10
0
0
false
36
python,data-structures,collections,dictionary
Key-ordered dict in Python
4
0
1,319,790
0
1,319,763
0.07983
I am looking for a solid implementation of an ordered associative array, that is, an ordered dictionary. I want the ordering in terms of keys, not of insertion order. More precisely, I am looking for a space-efficient implementation of an int-to-float (or string-to-float for another use case) mapping structure for which: Ordered iteration is O(n) Random access is O(1) The best I came up with was gluing a dict and a list of keys, keeping the last one ordered with bisect and insert. Any better ideas?
1
0
2009-08-23T22:33:00.000
0
1
12,867
1
You could build an index which maps words to phrases and do something like: let matched = set of all phrases; for each word in the searched phrase: let wordMatch = all phrases containing the current word; let matched = intersection of matched and wordMatch. After this, matched would contain all phrases matching all words in the target phrase. It could be pretty well optimized by initializing matched to the set of all phrases containing only words[0], and then only iterating over words[1..words.length]. Filtering phrases which are too short to match the target phrase may improve performance, too. Unless I'm mistaken, a simple implementation has a worst-case complexity (when the search phrase matches all phrases) of O(n·m), where n is the number of words in the search phrase, and m is the number of phrases.
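A sketch of that index in Python:

    from collections import defaultdict

    def build_index(phrases):
        index = defaultdict(set)
        for i, phrase in enumerate(phrases):
            for word in phrase.split():
                index[word].add(i)   # word -> indices of phrases containing it
        return index

    def phrases_containing(target, index):
        words = target.split()
        matched = set(index.get(words[0], set()))
        for word in words[1:]:
            matched &= index.get(word, set())   # keep only the intersection
        return matched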
4
0
0
false
1
c#,java,c++,python,algorithm
Algorithm to filter a set of all phrases containing in other phrase
1
0
1,372,627
0
1,372,531
0.049958
Given a set of phrases, I would like to filter out all phrases that contain any of the other phrases. Contained here means that if a phrase contains all the words of another phrase it should be filtered out. Order of the words within the phrase does not matter. What I have so far is this: Sort the set by the number of words in each phrase. For each phrase X in the set: For each phrase Y in the rest of the set: If all the words in X are in Y then X is contained in Y; discard Y. This is slow given a list of about 10k phrases. Any better options?
1
0
2009-09-03T10:02:00.000
0
1
709
2
Sort the words within each phrase, i.e., 'Z A' -> 'A Z'; then eliminating phrases is easy, going from the shortest to the longer ones.
4
0
0
false
1
c#,java,c++,python,algorithm
Algorithm to filter a set of all phrases containing in other phrase
0
0
1,372,585
0
1,372,531
0
Given a set of phrases, I would like to filter out all phrases that contain any of the other phrases. Contained here means that if a phrase contains all the words of another phrase it should be filtered out. Order of the words within the phrase does not matter. What I have so far is this: Sort the set by the number of words in each phrase. For each phrase X in the set: For each phrase Y in the rest of the set: If all the words in X are in Y then X is contained in Y; discard Y. This is slow given a list of about 10k phrases. Any better options?
1
0
2009-09-03T10:02:00.000
0
1
709
2
If you are willing to consider a library, pandas (http://pandas.pydata.org/) is a library built on top of numpy which, amongst many other things, provides: "Intelligent data alignment and integrated handling of missing data: gain automatic label-based alignment in computations and easily manipulate messy data into an orderly form." I've been using it for almost a year in the financial industry, where missing and badly aligned data is the norm, and it has really made my life easier.
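A small illustration of how pandas treats missing values:

    import numpy as np
    import pandas as pd

    s = pd.Series([1.0, np.nan, 3.0])
    print(s.mean())       # 2.0; NaN is skipped automatically
    print(s.dropna())     # drop the missing values
    print(s.fillna(0.0))  # or replace them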
4
0
0
false
11
python,numpy,data-analysis
How do you deal with missing data using numpy/scipy?
4
0
11,086,822
0
1,377,130
0.197375
One of the things I deal with most in data cleaning is missing values. R deals with this well using its "NA" missing data label. In Python, it appears that I'll have to deal with masked arrays, which seem to be a major pain to set up and don't seem to be well documented. Any suggestions on making this process easier in Python? This is becoming a deal-breaker in moving to Python for data analysis. Thanks. Update: It's obviously been a while since I've looked at the methods in the numpy.ma module. It appears that at least the basic analysis functions are available for masked arrays, and the examples provided helped me understand how to create masked arrays (thanks to the authors). I would like to see if some of the newer statistical methods in Python (being developed in this year's GSoC) incorporate this aspect, and at least do the complete-case analysis.
1
0
2009-09-04T03:44:00.000
0
0
10,010
1
The best way to show multiple figures is to use matplotlib or pylab. (On Windows) with matplotlib you can prepare the figures and then, when you are finished with them, show them with the command pylab.show(); all figures will then be shown. (On Linux) you don't have problems adding changes to figures, because interactive mode is enabled (on Windows interactive mode doesn't work well).
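A minimal sketch of the interleaved workflow from the question, using pyplot's numbered figures:

    import matplotlib.pyplot as plt

    plt.figure(1)            # work with Figure 1
    plt.plot([1, 2, 3])
    plt.figure(2)            # work with Figure 2
    plt.plot([3, 2, 1])
    plt.figure(1)            # back to Figure 1; drawing continues there
    plt.plot([2, 2, 2])
    plt.show()               # shows both figures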
3
0
0
false
32
python,matplotlib,figures
Python with matplotlib - drawing multiple figures in parallel
0
0
14,591,411
0
1,401,102
0
I have functions that contribute to small parts of a figure generation. I'm trying to use these functions to generate multiple figures? So something like this: work with Figure 1 do something else work with Figure 2 do something else work with Figure 1 do something else work with Figure 2 If anyone could help, that'd be great!
1
0
2009-09-09T17:57:00.000
0
1
43,360
1
Speed depends on the ratio of hits to misses. To be Pythonic, choose the clearer method. Personally I think way #1 is clearer (it takes fewer lines to have an 'if' block rather than an exception block, and it also uses less brain space). It will also be faster when there are more hits than misses (an exception is more expensive than skipping an if block).
5
0
0
false
2
python
Which is more pythonic for array removal?
3
0
1,418,321
1
1,418,266
0.119427
I'm removing an item from an array if it exists. Two ways I can think of to do this: Way #1 # x array, r item to remove if r in x : x.remove( r ) Way #2 try : x.remove( r ) except : pass Timing it shows the try/except way can be faster (sometimes I'm getting:) 1.16225508968e-06 8.80804972547e-07 1.14314196588e-06 8.73752536492e-07 import timeit runs = 10000 x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113', 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow', 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig', ] r = 'a' code1 =""" x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113', 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow', 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig', ] r = 'a' if r in x : x.remove(r) """ print timeit.Timer( code1 ).timeit( runs ) / runs code2 =""" x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113', 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow', 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig', ] r = 'a' try : x.remove( r ) except : pass """ print timeit.Timer( code2 ).timeit( runs ) / runs Which is more pythonic?
1
0
2009-09-13T17:15:00.000
0
1
267
3
The try/except way
5
0
0
false
2
python
Which is more pythonic for array removal?
2
0
1,418,275
1
1,418,266
0.07983
I'm removing an item from an array if it exists. Two ways I can think of to do this: Way #1 # x array, r item to remove if r in x : x.remove( r ) Way #2 try : x.remove( r ) except : pass Timing it shows the try/except way can be faster (sometimes I'm getting:) 1.16225508968e-06 8.80804972547e-07 1.14314196588e-06 8.73752536492e-07 import timeit runs = 10000 x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113', 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow', 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig', ] r = 'a' code1 =""" x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113', 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow', 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig', ] r = 'a' if r in x : x.remove(r) """ print timeit.Timer( code1 ).timeit( runs ) / runs code2 =""" x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113', 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow', 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig', ] r = 'a' try : x.remove( r ) except : pass """ print timeit.Timer( code2 ).timeit( runs ) / runs Which is more pythonic?
1
0
2009-09-13T17:15:00.000
0
1
267
3
I've always gone with the first method. if in reads far more clearly than exception handling does.
5
0
0
true
2
python
Which is more pythonic for array removal?
6
0
1,418,310
1
1,418,266
1.2
I'm removing an item from an array if it exists. Two ways I can think of to do this: Way #1 # x array, r item to remove if r in x : x.remove( r ) Way #2 try : x.remove( r ) except : pass Timing it shows the try/except way can be faster (sometimes I'm getting:) 1.16225508968e-06 8.80804972547e-07 1.14314196588e-06 8.73752536492e-07 import timeit runs = 10000 x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113', 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow', 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig', ] r = 'a' code1 =""" x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113', 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow', 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig', ] r = 'a' if r in x : x.remove(r) """ print timeit.Timer( code1 ).timeit( runs ) / runs code2 =""" x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113', 'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow', 'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig', ] r = 'a' try : x.remove( r ) except : pass """ print timeit.Timer( code2 ).timeit( runs ) / runs Which is more pythonic?
1
0
2009-09-13T17:15:00.000
0
1
267
3
You only need to store the lower triangle of the matrix. Typically this is done with a single list of length n(n+1)/2. You'll need to overload the __getitem__ method to interpret what an entry means.
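A sketch of such a wrapper class:

    class SymmetricMatrix(object):
        # stores only the lower triangle in a flat list of length n(n+1)/2
        def __init__(self, n, fill=0):
            self.n = n
            self.data = [fill] * (n * (n + 1) // 2)

        def _index(self, i, j):
            if i < j:
                i, j = j, i          # (i, j) and (j, i) map to the same slot
            return i * (i + 1) // 2 + j

        def __getitem__(self, key):
            i, j = key
            return self.data[self._index(i, j)]

        def __setitem__(self, key, value):
            i, j = key
            self.data[self._index(i, j)] = value

    m = SymmetricMatrix(4)
    m[2, 3] = 7
    print(m[3, 2])   # 7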
4
0
0
false
1
python,data-structures,matrix
Symmetrically addressable matrix
1
0
1,425,181
0
1,425,162
0.049958
I'm looking to create a 2d matrix of integers with symmetric addressing ( i.e. matrix[2,3] and matrix[3,2] will return the same value ) in python. The integers will have addition and subtraction done on them, and be used for logical comparisons. My initial idea was to create the integer objects up front and try to fill a list of lists with some python equivalent of pointers. I'm not sure how to do it, though. What is the best way to implement this, and should I be using lists or another data structure?
1
0
2009-09-15T04:48:00.000
0
1
247
2
You're probably better off using a full square numpy matrix. Yes, it wastes half the memory storing redundant values, but rolling your own symmetric matrix in Python will waste even more memory and CPU by storing and processing the integers as Python objects.
4
0
0
false
1
python,data-structures,matrix
Symmetrically addressable matrix
2
0
1,425,305
0
1,425,162
0.099668
I'm looking to create a 2d matrix of integers with symmetric addressing ( i.e. matrix[2,3] and matrix[3,2] will return the same value ) in python. The integers will have addition and subtraction done on them, and be used for logical comparisons. My initial idea was to create the integer objects up front and try to fill a list of lists with some python equivalent of pointers. I'm not sure how to do it, though. What is the best way to implement this, and should I be using lists or another data structure?
1
0
2009-09-15T04:48:00.000
0
1
247
2
"10B booleans (1.25MB of memory, assuming Python is sane)": I think you have your arithmetic wrong. Stored super-compactly, 10B booleans would be 1.25 GIGA-, not MEGA-, bytes. A list takes at least 4 bytes per item, so you'd need 40GB to do it the way you want. You can store an array (see the array module in the standard library) in much less memory than that, so it might possibly fit.
6
0
0
false
2
python
long-index arrays in python
2
0
1,436,482
0
1,436,411
0.066568
I'm attempting to shorten the memory footprint of 10B sequential integers by referencing them as indexes in a boolean array. In other words, I need to create an array of 10,000,000,000 elements, but that's well into the "Long" range. When I try to reference an array index greater than sys.maxint the array blows up: x = [False] * 10000000000 Traceback (most recent call last): File "", line 1, in x = [0] * 10000000000 OverflowError: cannot fit 'long' into an index-sized integer Anything I can do? I can't seem to find anyone on the net having this problem... Presumably the answer is "python can't handle arrays bigger than 2B."
1
0
2009-09-17T02:18:00.000
0
1
4,919
2
A dense bit vector is plausible, but it won't be optimal unless you know you won't have more than about 10**10 elements, all clustered near each other, with a reasonably randomized distribution. If you have a different distribution, then a different structure will be better. For instance, if you know that in that range, [0, 10**10), only a few members are present, use a set(); or if the reverse is true, with nearly every element present except for a fraction, use a negated set, i.e. element not in mySet. If the elements tend to cluster around small ranges, you could use a run-length encoding, something like [xrange(0,10), xrange(10,15), xrange(15,100)], which you look up by bisecting until you find a matching range; if the index is even, then the element is in the set. Inserts and removals involve shuffling the ranges a bit. If your distribution really is dense, but you need a little more than what fits in memory (which seems to be typical in practice), then you can manage memory by using mmap and wrapping the mapped file with an adaptor that uses a mechanism similar to the array('I') solution already suggested. To get an idea of just how compressible your data is, try building a plain file with a reasonable corpus of data in packed form and then apply a general compression algorithm (such as gzip) to see how much reduction you see. If there is much reduction, then you can probably use some sort of space optimization in your code as well.
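A sketch of the dense bit vector for the case where it does fit (10**10 bits is about 1.25 GB):

    from array import array

    class BitVector(object):
        def __init__(self, size):
            # each 32-bit unsigned int holds 32 flags
            self.bits = array('I', [0]) * ((size + 31) // 32)

        def get(self, i):
            return (self.bits[i >> 5] >> (i & 31)) & 1

        def set(self, i):
            self.bits[i >> 5] |= 1 << (i & 31)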
6
0
0
false
2
python
long-index arrays in python
3
0
1,436,547
0
1,436,411
0.099668
I'm attempting to shorten the memory footprint of 10B sequential integers by referencing them as indexes in a boolean array. In other words, I need to create an array of 10,000,000,000 elements, but that's well into the "Long" range. When I try to reference an array index greater than sys.maxint the array blows up: x = [False] * 10000000000 Traceback (most recent call last): File "", line 1, in x = [0] * 10000000000 OverflowError: cannot fit 'long' into an index-sized integer Anything I can do? I can't seem to find anyone on the net having this problem... Presumably the answer is "python can't handle arrays bigger than 2B."
1
0
2009-09-17T02:18:00.000
0
1
4,919
2
Well, the .sort() method of lists sorts the list in place, while sorted() creates a new list. So if you have a large list, part of your performance difference will be due to copying. Still, an order of magnitude difference seems larger than I'd expect. Perhaps list.sort() has some special-cased optimization that sorted() can't make use of. For example, since the list class already has an internal Py_Object*[] array of the right size, perhaps it can perform swaps more efficiently. Edit: Alex and Anurag are right, the order of magnitude difference is due to you accidentally sorting an already-sorted list in your test case. However, as Alex's benchmarks show, list.sort() is about 2% faster than sorted(), which would make sense due to the copying overhead.
3
0
0
false
40
python,sorting
Python sort() method on list vs builtin sorted() function
9
0
1,436,981
0
1,436,962
1
I know that the built-in sorted() function works on any iterable. But can someone explain this huge (10x) performance difference between anylist.sort() vs sorted(anylist)? Also, please point out if I am doing anything wrong with the way this is measured. """ Example Output: $ python list_sort_timeit.py Using sort method: 20.0662879944 Using sorted builin method: 259.009809017 """ import random import timeit print 'Using sort method:', x = min(timeit.Timer("test_list1.sort()","import random;test_list1=random.sample(xrange(1000),1000)").repeat()) print x print 'Using sorted builin method:', x = min(timeit.Timer("sorted(test_list2)","import random;test_list2=random.sample(xrange(1000),1000)").repeat()) print x As the title says, I was interested in comparing list.sort() vs sorted(list). The above snippet showed something interesting: Python's sort function behaves very well on already-sorted data. As pointed out by Anurag, in the first case the sort method is working on already-sorted data, while in the second case sorted() is working on a fresh piece each time. So I wrote this one to test, and yes, they are very close. """ Example Output: $ python list_sort_timeit.py Using sort method: 19.0166599751 Using sorted builin method: 23.203567028 """ import random import timeit print 'Using sort method:', x = min(timeit.Timer("test_list1.sort()","import random;test_list1=random.sample(xrange(1000),1000);test_list1.sort()").repeat()) print x print 'Using sorted builin method:', x = min(timeit.Timer("sorted(test_list2)","import random;test_list2=random.sample(xrange(1000),1000);test_list2.sort()").repeat()) print x Oh, I see Alex Martelli with a response, as I was typing this one.. (I shall leave the edit, as it might be useful).
1
0
2009-09-17T06:07:00.000
0
1
63,711
1
Well, I can't quite tell exactly what you are looking for: a whole solution, or maybe just help with the colours. If it's the latter, here are some ideas. The simplest solution would be to create a 14-element array of colours that you pick by hand with the help of some graphics software, and then simply fetch the element for the delta value while drawing. You can also find an algorithm that draws a gradient, but instead of drawing, store its colours in an array; if you generate 130 values, you can then look up your colour like mycolors[delta*10]. Be more specific with your question; maybe then more people will be able to help you. I hope that my answer helps in some way. MTH
2
0
0
false
0
ironpython
Colour map in ipython
0
0
1,448,617
0
1,448,582
0
I am trying to plot some data in ipy. The data consists of three variables alpha,beta and delta. The alpha and beta values are the coordinates of the data points that I wish to plot using a hammer projection. I want to scale the colour of the markers according to the delta values, preferably in a rainbow scale colormap i.e. from red to blue. The delta values range from 0-13 and I want a linear colour correlation. Can anyone please help me, I am getting very frustrated. Many thanks Angela
1
0
2009-09-19T13:47:00.000
0
0
279
1
Do you mean OpenCV can't connect to your webcam, or that it can't read video files recorded by it? Have you tried saving the video in another format? OpenCV is probably the best-supported Python image-processing tool.
4
0
0
true
13
python,image-processing,video-processing
Most used Python module for video processing?
7
0
1,480,450
0
1,480,431
1.2
I need to: Open a video file Iterate over the frames of the file as images Do some analysis in this image frame of the video Draw in this image of the video Create a new video with these changes OpenCV isn't working for my webcam, but python-gst is working. Is this possible using python-gst? Thank you!
1
0
2009-09-26T04:23:00.000
0
1
12,979
2
Just build a C/C++ wrapper for your webcam and then use SWIG or SIP to access these functions from Python. Then use OpenCV in Python; it's the best open-source computer vision library in the wild. If you are worried about performance and you work under Linux, you could download free versions of the Intel Performance Primitives (IPP), which can be loaded at runtime with zero effort from OpenCV. For certain algorithms you could get a 200% performance boost, plus automatic multicore support for most of the time-consuming functions.
4
0
0
false
13
python,image-processing,video-processing
Most used Python module for video processing?
0
0
1,497,418
0
1,480,431
0
I need to: Open a video file Iterate over the frames of the file as images Do some analysis in this image frame of the video Draw in this image of the video Create a new video with these changes OpenCV isn't working for my webcam, but python-gst is working. Is this possible using python-gst? Thank you!
1
0
2009-09-26T04:23:00.000
0
1
12,979
2
Try axis('equal'). It's been a while since I worked with matplotlib, but I seem to remember typing that command a lot.
3
0
0
false
4
python,matplotlib,boxplot
Matplotlib square boxplot
3
0
1,506,741
0
1,506,647
0.197375
I have a plot of two boxplots in the same figure. For style reasons, the axis should have the same length, so that the graphic box is square. I tried to use the set_aspect method, but the axes are too different because of their range and the result is terrible. Is it possible to have 1:1 axes even if they do not have the same number of points?
1
0
2009-10-01T21:37:00.000
0
0
8,361
1
In early Python versions, the sort function implemented a modified version of quicksort. However, it was deemed unstable, and as of 2.3 Python switched to an adaptive mergesort algorithm (Timsort).
3
0
0
false
128
python,algorithm,sorting,python-internals
About Python's built in sort() method
10
0
1,517,357
0
1,517,347
1
What algorithm is the built in sort() method in Python using? Is it possible to have a look at the code for that method?
1
0
2009-10-04T20:48:00.000
0
1
71,971
1
It is good to know which version of numpy you run, but strictly speaking, if you just need a specific version on your system, you can write: pip install numpy==1.14.3. This will install the version you need and uninstall any other version of numpy.
17
0
0
false
340
python,numpy,version
How do I check which version of NumPy I'm using?
0
0
53,898,417
0
1,520,234
0
How can I check which version of NumPy I'm using?
1
0
2009-10-05T13:56:00.000
0
0
514,245
2
You can try this: pip show numpy
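From inside Python itself you can also do:

    import numpy
    print(numpy.__version__)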
17
0
0
false
340
python,numpy,version
How do I check which version of NumPy I'm using?
11
0
46,330,631
0
1,520,234
1
How can I check which version of NumPy I'm using?
1
0
2009-10-05T13:56:00.000
0
0
514,245
2
One possible option is to do a single pass through the file first to count the number of rows, without loading them. The other option is to double your table size each time, which has two benefits: You will only re-allocate memory log(n) times, where n is the number of rows. You only need 50% more RAM than your largest table size. If you take the dynamic route, you could measure the length of the first row in bytes, then guess the number of rows by calculating (num bytes in file / num bytes in first row). Start with a table of this size.
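A sketch of the doubling strategy (the column count and initial chunk size are assumptions):

    import numpy as np

    def load_rows(rows, ncols):
        data = np.empty((1024, ncols))
        n = 0
        for row in rows:
            if n == data.shape[0]:
                # double the allocation: only log(n) re-allocations in total
                data = np.resize(data, (2 * data.shape[0], ncols))
            data[n] = row
            n += 1
        return data[:n]   # trim the unused tail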
4
0
0
false
11
python,numpy,memory-management
Incrementally building a numpy array and measuring memory usage
1
0
1,534,340
0
1,530,960
0.049958
I have a series of large text files (up to 1 gig) that are output from an experiment and need to be analysed in Python. They would be best loaded into a 2D numpy array, which presents the first question: As the number of rows is unknown at the beginning of the loading, how can a very large numpy array be built most efficiently, row by row? Simply adding each row to the array would be inefficient in memory terms, as two large arrays would momentarily co-exist. The same problem would seem to occur if you use numpy.append. The stack functions are promising, but ideally I would want to grow the array in place. This leads to the second question: What is the best way to observe the memory usage of a Python program that heavily uses numpy arrays? To study the above problem, I've used the usual memory profiling tools - heapy and pympler - but am only getting the size of the outer array objects (80 bytes) and not the data they contain. Aside from crudely measuring how much memory the Python process is using, how can I get at the "full" size of the arrays as they grow? Local details: OSX 10.6, Python 2.6, but general solutions are welcome.
1
0
2009-10-07T11:08:00.000
0
0
6,464
1
You can also use GDAL, which has many functions for working with spatial data.
8
0
0
false
49
python,algorithm,cluster-analysis,k-means
Python k-means algorithm
0
0
1,545,672
0
1,545,606
0
I am looking for a Python implementation of the k-means algorithm, with examples, to cluster and cache my database of coordinates.
1
0
2009-10-09T19:16:00.000
0
0
89,635
1
Why not download some real open-source repos and use those? Have you thought about what goes into the files? Is that random data too?
4
0
0
true
2
python,algorithm
Generate random directories/files given number of files and depth
5
0
1,553,126
0
1,553,114
1.2
I'd like to profile some VCS software, and to do so I want to generate a set of random files, in randomly arranged directories. I'm writing the script in Python, but my question is briefly: how do I generate a random directory tree with an average number of sub-directories per directory and some broad distribution of files per directory? Clarification: I'm not comparing different VCS repo formats (eg. SVN vs Git vs Hg), but profiling software that deals with SVN (and eventually other) working copies and repos. The constraints I'd like are to specify the total number of files (call it 'N', probably ~10k-100k) and the maximum depth of the directory structure ('L', probably 2-10). I don't care how many directories are generated at each level, and I don't want to end up with 1 file per dir, or 100k all in one dir. The distribution is something I'm not sure about, since I don't know whether VCS' (SVN in particular) would perform better or worse with a very uniform structure or a very skewed structure. Nonetheless, it would be nice if I could come up with an algorithm that didn't "even out" for large numbers. My first thoughts were: generate the directory tree using some method, and then uniformly populate the tree with files (treating each dir equally, with no regard as to nesting). My back-of-the-envelope calcs tell me that if there are 'L' levels, with 'D' subdirs per dir, and about sqrt(N) files per dir, then there will be about D^L dirs, so N =~ sqrt(N)*(D^L) => D =~ N^(1/2L). So now I have an approximate value for 'D', how can I generate the tree? How do I populate the files? I'd be grateful just for some pointers to good resources on algorithms I could use. My searching only found pretty applets/flash.
1
0
2009-10-12T07:06:00.000
0
0
3,603
1
You haven't made it completely clear what you need. It sounds like itertools should have what you need. Perhaps what you want is an itertools.combinations of the itertools.product of the lists in your big list. @fortran: you can't have a set of sets. You can have a set of frozensets, but depending on what it really means to have duplicates here, that might not be what is needed.
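A sketch of that combination, pairing elements across different lists exactly once and never within the same list:

    from itertools import combinations, product

    def cross_list_pairs(lists):
        for list_a, list_b in combinations(lists, 2):   # each pair of lists once
            for a, b in product(list_a, list_b):        # every cross-list element pair
                yield a, b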
3
0
0
false
2
python
How to make all combinations of the elements in an array?
1
0
1,592,512
0
1,591,762
0.066568
I have a list. It contains x lists, each with y elements. I want to pair each element with all the other elements, just once, (a,b = b,a) EDIT: this has been criticized as being too vague. So I'll describe the history. My function produces random equations and, using genetic techniques, mutates and crossbreeds them, selecting for fitness. After a number of iterations, it returns a list of 12 objects, sorted by fitness of their 'equation' attribute. Using the 'parallel python' module to run this function 8 times, a list containing 8 lists of 12 objects (each with an equation attribute) is returned. Now, within each list, the 12 objects have already been cross-bred with each other. I want to cross-breed each object in a list with all the other objects in all the other lists, but not with the objects within its own list, with which it has already been cross-bred. (whew!)
1
0
2009-10-19T23:51:00.000
0
1
598
2
First of all, please don't refer to this as an "array". You are using a list of lists. In Python, an array is a different type of data structure, provided by the array module. Also, your application sounds suspiciously like a matrix. If you are really doing matrix manipulations, you should investigate the Numpy package. At first glance your problem sounded like something that the zip() function would solve or itertools.izip(). You should definitely read through the docs for the itertools module because it has various list manipulations and they will run faster than anything you could write yourself in Python.
3
0
0
false
2
python
How to make all combinations of the elements in an array?
0
0
1,591,802
0
1,591,762
0
I have a list. It contains x lists, each with y elements. I want to pair each element with all the other elements, just once, (a,b = b,a) EDIT: this has been criticized as being too vague. So I'll describe the history. My function produces random equations and, using genetic techniques, mutates and crossbreeds them, selecting for fitness. After a number of iterations, it returns a list of 12 objects, sorted by fitness of their 'equation' attribute. Using the 'parallel python' module to run this function 8 times, a list containing 8 lists of 12 objects (each with an equation attribute) is returned. Now, within each list, the 12 objects have already been cross-bred with each other. I want to cross-breed each object in a list with all the other objects in all the other lists, but not with the objects within its own list, with which it has already been cross-bred. (whew!)
1
0
2009-10-19T23:51:00.000
0
1
598
2
If you are I/O bound, the best way I have found to optimize is to read or write the entire file into/out of memory at once, then operate out of RAM from there on. With extensive testing I found that my runtime ended up bound not by the amount of data I read from/wrote to disk, but by the number of I/O operations I used to do it. That is what you need to optimize. I don't know Python, but if there is a way to tell it to write the whole file out of RAM in one go, rather than issuing a separate I/O for each byte, that's what you need to do. Of course the drawback to this is that files can be considerably larger than available RAM. There are lots of ways to deal with that, but that is another question for another time.
7
0
0
false
3
python,performance,optimization,file-io
How should I optimize this filesystem I/O bound program?
3
1
1,594,704
0
1,594,604
0.085505
I have a python program that does something like this: Read a row from a csv file. Do some transformations on it. Break it up into the actual rows as they would be written to the database. Write those rows to individual csv files. Go back to step 1 unless the file has been totally read. Run SQL*Loader and load those files into the database. Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind. There are a few ideas that I have to solve this: Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't? Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete. Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow. Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?
1
0
2009-10-20T13:27:00.000
0
0
2,504
5
Use buffered writes for step 4. Write a simple function that appends the output to a string, checks the string length, and only writes when you have accumulated enough, which should be some multiple of 4k bytes. I would say start with 32k buffers and time it. You would have one buffer per file, so that most "writes" won't actually hit the disk.
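A minimal sketch of such a per-file buffer; the 32k figure above is just a starting point to time, and a list of chunks is joined at flush time, which is cheaper in Python than repeated string concatenation:

```python
class BufferedFileWriter:
    """Accumulate output in memory and flush it in large chunks,
    so most write() calls never touch the disk."""

    def __init__(self, path, buffer_size=32 * 1024):
        self.f = open(path, "w")
        self.buffer_size = buffer_size
        self.chunks = []
        self.length = 0

    def write(self, s):
        self.chunks.append(s)
        self.length += len(s)
        if self.length >= self.buffer_size:
            self.flush()

    def flush(self):
        # One big write instead of many small ones.
        self.f.write("".join(self.chunks))
        self.chunks = []
        self.length = 0

    def close(self):
        self.flush()
        self.f.close()
```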
7
0
0
false
3
python,performance,optimization,file-io
How should I optimize this filesystem I/O bound program?
1
1
1,595,358
0
1,594,604
0.028564
I have a python program that does something like this: Read a row from a csv file. Do some transformations on it. Break it up into the actual rows as they would be written to the database. Write those rows to individual csv files. Go back to step 1 unless the file has been totally read. Run SQL*Loader and load those files into the database. Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind. There are a few ideas that I have to solve this: Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't? Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete. Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow. Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?
1
0
2009-10-20T13:27:00.000
0
0
2,504
5
Python already does IO buffering and the OS should handle both prefetching the input file and delaying writes until it needs the RAM for something else or just gets uneasy about having dirty data in RAM for too long. Unless you force the OS to write them immediately, like closing the file after each write or opening the file in O_SYNC mode. If the OS isn't doing the right thing, you can try raising the buffer size (third parameter to open()). For some guidance on appropriate values: given a 100MB/s, 10ms latency IO system, a 1MB IO size will result in approximately 50% latency overhead, while a 10MB IO size will result in 9% overhead. If it's still IO bound, you probably just need more bandwidth. Use your OS-specific tools to check what kind of bandwidth you are getting to/from the disks. Also useful is to check if step 4 is taking a lot of time executing or waiting on IO. If it's executing you'll need to spend more time checking which part is the culprit and optimize that, or split out the work to different processes.
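For reference, raising the buffer looks like this in Python; the 1MB value is illustrative, not tuned:

```python
# Ask Python for a larger write buffer via open()'s buffering
# parameter, so the OS sees ~1MB writes instead of many tiny ones.
out = open("rows.csv", "w", buffering=1024 * 1024)
out.write("some,row,data\n")     # buffered; flushed in large chunks
out.close()                      # flushes any remainder
```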
7
0
0
true
3
python,performance,optimization,file-io
How should I optimize this filesystem I/O bound program?
3
1
1,595,626
0
1,594,604
1.2
I have a python program that does something like this: Read a row from a csv file. Do some transformations on it. Break it up into the actual rows as they would be written to the database. Write those rows to individual csv files. Go back to step 1 unless the file has been totally read. Run SQL*Loader and load those files into the database. Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind. There are a few ideas that I have to solve this: Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't? Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete. Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow. Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?
1
0
2009-10-20T13:27:00.000
0
0
2,504
5
Can you use a ramdisk for step 4? Low millions sounds doable if the rows are less than a couple of kB or so.
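One easy way to try this from Python on Linux, without reconfiguring anything, is to write the intermediate files under /dev/shm, which is a RAM-backed tmpfs on most distributions; the suffix and row data here are just illustrative:

```python
import tempfile

# Files created under /dev/shm live in RAM, not on the physical disk,
# so step 4's writes become memory copies.
tmp = tempfile.NamedTemporaryFile(
    mode="w", dir="/dev/shm", suffix=".csv", delete=False
)
tmp.write("some,row,data\n")
tmp.close()
# tmp.name can then be handed to SQL*Loader in step 6.
```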
7
0
0
false
3
python,performance,optimization,file-io
How should I optimize this filesystem I/O bound program?
2
1
1,597,062
0
1,594,604
0.057081
I have a python program that does something like this: Read a row from a csv file. Do some transformations on it. Break it up into the actual rows as they would be written to the database. Write those rows to individual csv files. Go back to step 1 unless the file has been totally read. Run SQL*Loader and load those files into the database. Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind. There are a few ideas that I have to solve this: Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't? Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete. Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow. Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?
1
0
2009-10-20T13:27:00.000
0
0
2,504
5
Isn't it possible to collect a few thousand rows in RAM, then go directly to the database server and execute them? This would remove the save to and load from the disk that step 4 entails. If the database server is transactional, this is also a safe way to do it - just have the database begin a transaction before your first row and commit after the last.
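A sketch of that idea with the Python DB-API; sqlite3 stands in for whatever driver applies, and the table name, row shape, and batch size are made up:

```python
import sqlite3

def load_batches(rows, batch_size=5000):
    """Collect rows in RAM and hand them to the database in batches,
    inside a single transaction, skipping the intermediate CSV files."""
    conn = sqlite3.connect("example.db")
    cur = conn.cursor()
    batch = []
    for row in rows:                       # rows: iterable of tuples
        batch.append(row)
        if len(batch) >= batch_size:
            cur.executemany("INSERT INTO target VALUES (?, ?)", batch)
            batch = []
    if batch:                              # flush the final partial batch
        cur.executemany("INSERT INTO target VALUES (?, ?)", batch)
    conn.commit()                          # one commit after the last row
    conn.close()
```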
7
0
0
false
3
python,performance,optimization,file-io
How should I optimize this filesystem I/O bound program?
1
1
1,597,281
0
1,594,604
0.028564
I have a python program that does something like this: Read a row from a csv file. Do some transformations on it. Break it up into the actual rows as they would be written to the database. Write those rows to individual csv files. Go back to step 1 unless the file has been totally read. Run SQL*Loader and load those files into the database. Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind. There are a few ideas that I have to solve this: Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't? Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete. Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow. Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?
1
0
2009-10-20T13:27:00.000
0
0
2,504
5
Contra the other answers offered, I believe that we can make a strong argument about the recoverability of a pickle. That answer is: "Yes, an incomplete pickle always leads to an exception." Why are we able to do this? Because the "pickle" format is in fact a small stack-based language. In a stack-based language you write code that pushes item after item on a stack, then invoke an operator that does something with the data you've accumulated. And it just so happens that a pickle has to end with the command ".", which says: "take the item now at the bottom of the stack and return it as the value of this pickle." If your pickle is chopped off early, it will not end with this command, and you will get an EOF error. If you want to try recovering some of the data, you might have to write your own interpreter, or call into pickle.py somewhere that gets around its wanting to raise EOFError when done interpreting the stack without finding a ".". The main thing to keep in mind is that, as in most stack-based languages, big data structures are built "backwards": first you put lots of little strings or numbers on the stack, then you invoke an operation that says "put those together into a list" or "grab pairs of items on the stack and make a dictionary". So, if a pickle is interrupted, you'll find the stack full of pieces of the object that was going to be built, but you'll be missing that final code that tells you what was going to be built from the pieces.
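You can see this behavior directly. A small demonstration, using protocol 2 for a simple unframed stream (newer framed protocols may surface the truncation as UnpicklingError instead, so both are caught):

```python
import pickle

data = {"rows": list(range(10)), "label": "example"}
blob = pickle.dumps(data, protocol=2)

truncated = blob[:-1]      # chop off the trailing "." (STOP) opcode
try:
    pickle.loads(truncated)
except (EOFError, pickle.UnpicklingError) as exc:
    print("unpickling failed:", exc)   # the incomplete pickle fails loudly
```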
5
0
0
false
4
python
If pickling was interrupted, will unpickling necessarily always fail? - Python
7
0
1,654,390
0
1,653,897
1
Suppose my attempt to write a pickle object out to disk is incomplete due to a crash. Will an attempt to unpickle the object always lead to an exception or is it possible that the fragment that was written out may be interpreted as valid pickle and the error go unnoticed?
1
0
2009-10-31T09:38:00.000
0
1
1,351
4
I doubt you could make a claim that it will always lead to an exception. Pickles are actually programs written in a specialized stack language. The internal details of pickles change from version to version, and new pickle protocols are added occasionally. The state of the pickle after a crash, and the resulting effects on the unpickler, would be very difficult to summarize in a simple statement like "it will always lead to an exception".
5
0
0
false
4
python
If pickling was interrupted, will unpickling necessarily always fail? - Python
1
0
1,654,321
0
1,653,897
0.039979
Suppose my attempt to write a pickle object out to disk is incomplete due to a crash. Will an attempt to unpickle the object always lead to an exception or is it possible that the fragment that was written out may be interpreted as valid pickle and the error go unnoticed?
1
0
2009-10-31T09:38:00.000
0
1
1,351
4
To be sure that you have a "complete" pickle file, you need to pickle three things. Pickle a header of some kind that states how many objects follow and what the end-of-file flag will look like. A tuple of an integer and the EOF string, for example. Pickle the objects you actually care about. The count is given by the header. Pickle a tail object that you don't actually care about, but which simply matches the claim made in the header. This can be simply a string that matches what was in the header. When you unpickle this file, you have to unpickle three things: The header. You care about the count and the form of the tail. The objects you actually care about. The tail object. Check that it matches the header. Other than that, it doesn't convey much except that the file was written in its entirety.
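A minimal sketch of that header/payload/tail scheme; the sentinel value is arbitrary:

```python
import pickle

EOF_MARK = "END-OF-PICKLE"     # any stable sentinel value will do

def dump_all(objects, path):
    with open(path, "wb") as f:
        pickle.dump((len(objects), EOF_MARK), f)   # header: count + tail claim
        for obj in objects:
            pickle.dump(obj, f)                    # the objects you care about
        pickle.dump(EOF_MARK, f)                   # tail matching the header

def load_all(path):
    with open(path, "rb") as f:
        count, expected_tail = pickle.load(f)      # the header
        objects = [pickle.load(f) for _ in range(count)]
        if pickle.load(f) != expected_tail:        # the tail check
            raise ValueError("tail does not match header; file is incomplete")
    return objects
```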
5
0
0
false
4
python
If pickling was interrupted, will unpickling necessarily always fail? - Python
1
0
1,654,503
0
1,653,897
0.039979
Suppose my attempt to write a pickle object out to disk is incomplete due to a crash. Will an attempt to unpickle the object always lead to an exception or is it possible that the fragment that was written out may be interpreted as valid pickle and the error go unnoticed?
1
0
2009-10-31T09:38:00.000
0
1
1,351
4
Pickling an object returns a str object, or writes a str object to a file ... it doesn't modify the original object. If a "crash" (exception) happens inside a pickling call, the result won't be returned to the caller, so you don't have anything that you could try to unpickle. Besides, why would you want to unpickle some dud rubbish left over after an exception?
5
0
0
false
4
python
If pickling was interrupted, will unpickling necessarily always fail? - Python
2
0
1,654,329
0
1,653,897
0.07983
Suppose my attempt to write a pickle object out to disk is incomplete due to a crash. Will an attempt to unpickle the object always lead to an exception or is it possible that the fragment that was written out may be interpreted as valid pickle and the error go unnoticed?
1
0
2009-10-31T09:38:00.000
0
1
1,351
4
This is a tough one to answer. I recently switched some of my graphing workload from R to matplotlib. In my humble opinion, I find matplotlib's graphs to be prettier (better default colors, they look crisper and more modern). I also think matplotlib renders PNGs a whole lot better. The real motivation for me, though, was that I wanted to work with my underlying data in Python (and numpy) and not R. I think this is the big question to ask: in which language do you want to load, parse and manipulate your data? On the other hand, a bonus for R is that the plotting defaults just work (there's a function for everything). I find myself frequently digging through the matplotlib docs (they are thick) looking for some obscure way to adjust a border or increase a line thickness. R's plotting routines have some maturity behind them.
2
0
0
true
11
python,r,matplotlib,scipy,data-visualization
matplotlib for R user?
13
0
1,662,207
0
1,661,479
1.2
I regularly make figures (the exploratory data analysis type) in R. I also program in Python and was wondering if there are features or concepts in matplotlib that would be worth learning. For instance, I am quite happy with R - but its image() function will produce large files with pixelated output, whereas Matlab's equivalent figure (I also program regularly in Matlab) seems to be manageable in file size and also 'smoothed' - does matplotlib also provide such reductions...? But more generally, I wonder what other advantages matplotlib might confer. I don't mean this to be a trolling question. Thanks.
1
0
2009-11-02T14:01:00.000
0
0
10,755
2
I think that the largest advantage is that matplotlib is based on Python, which you say you already know. So, this is one language less to learn. Just spend the time mastering Python, and you'll benefit both directly for the plotting task at hand and indirectly for your other Python needs. Besides, IMHO Python is an overall richer language than R, with far more libraries that can help with various tasks. You have to access data for plotting, and data comes in many forms. In whatever form it comes, I'm sure Python has an efficient library for it. And how about embedding those plots in more complete programs, say simple GUIs? matplotlib binds easily with Python's GUI libs (like PyQt) and you can make stuff that only your imagination limits.
2
0
0
false
11
python,r,matplotlib,scipy,data-visualization
matplotlib for R user?
4
0
1,662,225
0
1,661,479
0.379949
I regularly make figures (the exploratory data analysis type) in R. I also program in Python and was wondering if there are features or concepts in matplotlib that would be worth learning. For instance, I am quite happy with R - but its image() function will produce large files with pixelated output, whereas Matlab's equivalent figure (I also program regularly in Matlab) seems to be manageable in file size and also 'smoothed' - does matplotlib also provide such reductions...? But more generally, I wonder what other advantages matplotlib might confer. I don't mean this to be a trolling question. Thanks.
1
0
2009-11-02T14:01:00.000
0
0
10,755
2
You can get out-of-the-box named entity chunking with the nltk.ne_chunk() method. It takes a list of POS-tagged tuples: nltk.ne_chunk([('Barack', 'NNP'), ('Obama', 'NNP'), ('lives', 'NNS'), ('in', 'IN'), ('Washington', 'NNP')]) results in: Tree('S', [Tree('PERSON', [('Barack', 'NNP')]), Tree('ORGANIZATION', [('Obama', 'NNP')]), ('lives', 'NNS'), ('in', 'IN'), Tree('GPE', [('Washington', 'NNP')])]) It identifies Barack as a person, but Obama as an organization. So, not perfect.
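For context, a runnable version of the full pipeline, assuming the required NLTK data models (tokenizer, POS tagger, and NE chunker) are installed:

```python
import nltk

sentence = "Barack Obama lives in Washington"
tokens = nltk.word_tokenize(sentence)   # ['Barack', 'Obama', ...]
tagged = nltk.pos_tag(tokens)           # [('Barack', 'NNP'), ...]
tree = nltk.ne_chunk(tagged)            # Tree with PERSON/GPE/... subtrees
print(tree)
```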
2
0
0
true
9
python,nlp,nltk,chunking
What is the default chunker for NLTK toolkit in Python?
9
0
1,687,712
0
1,687,510
1.2
I am using their default POS tagging and default tokenization... and it seems sufficient. I'd like their default chunker too. I am reading the NLTK toolkit book, but it does not seem like they have a default chunker?
1
0
2009-11-06T13:10:00.000
0
0
4,560
1
Sorry, may I ask which tool could judge the "difficulty level" of sentences? I wish to find sentences of a similar difficulty level for users to read.
5
0
0
false
6
python,text,nlp,words,wordnet
Does WordNet have "levels"? (NLP)
0
0
58,050,062
0
1,695,971
0
For example... Chicken is an animal. Burrito is a food. WordNet allows you to do "is-a"... the hierarchy feature. However, how do I know when to stop travelling up the tree? I want a LEVEL. That is consistent. For example, if presented with a bunch of words, I want WordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing "burrito" as a "thing" is too broad, yet "Mexican wrapped food" is too specific. I want to go up the hierarchy or down... until the right LEVEL.
1
0
2009-11-08T10:29:00.000
0
0
2,585
4