Title (string) | A_Id (int64) | Users Score (int64) | Q_Score (int64) | ViewCount (int64) | Database and SQL (int64, 0/1) | Tags (string) | Answer (string) | GUI and Desktop Applications (int64, 0/1) | System Administration and DevOps (int64, 0/1) | Networking and APIs (int64, 0/1) | Other (int64, 0/1) | CreationDate (string) | AnswerCount (int64) | Score (float64) | is_accepted (bool) | Q_Id (int64) | Python Basics and Environment (int64, 0/1) | Data Science and Machine Learning (int64, 0/1) | Web Development (int64, 0/1) | Available Count (int64) | Question (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Using MySQL in Pydev Eclipse | 70,125,088 | 0 | 2 | 6,768 | 1 | python,mysql,eclipse,pydev,mysql-python | Posting the answer here in case the URL changes in the future:
From Eclipse, choose Window / Preferences / PyDev / Interpreters / Python Interpreter, click on Manage with pip and enter the command:
install mysql-connector-python | 0 | 1 | 0 | 0 | 2010-05-05T16:36:00.000 | 3 | 0 | false | 2,775,095 | 0 | 0 | 0 | 2 | I am trying to access a MySQL database with Python through PyDev Eclipse. I have installed the necessary files to access MySQL from Python, and I can access the database only when I write code in the Python IDLE environment and run it from the command prompt. However, I am not able to run my applications from PyDev.
When I use "import MysqlDB" I get an error, but in IDLE there are no errors and my code runs smoothly.
Does anyone know where the problem is?
Thanks |
console window on top with Python? | 2,790,148 | 3 | 0 | 717 | 0 | python,windows | Don't. There's nothing worse than two windows that think they deserve to be the one on top fighting it out. I've seen CPUs dragged to their knees by it. | 0 | 1 | 0 | 0 | 2010-05-07T16:32:00.000 | 2 | 0.291313 | false | 2,790,108 | 1 | 0 | 0 | 2 | How do I force my console window to be always on top with Python? |
console window on top with Python? | 2,790,211 | 2 | 0 | 717 | 0 | python,windows | Unless you are using a console window written by yourself as a "real" window you can alter the state of, you'd have to talk to the window manager (be it Windows's or some of the Linux ones).
But I agree with Paul Tomblin. I think most window managers have that feature built in for users to activate it if they WANT it on top! | 0 | 1 | 0 | 0 | 2010-05-07T16:32:00.000 | 2 | 1.2 | true | 2,790,108 | 1 | 0 | 0 | 2 | How do I force my console window to be always on top with Python? |
How can I detect DOS line breaks in a file? | 2,798,651 | 0 | 14 | 21,186 | 0 | python,bash,file,line-breaks,line-endings | DOS line breaks are \r\n; Unix uses only \n. So just search for \r\n. | 0 | 1 | 0 | 1 | 2010-05-09T18:16:00.000 | 7 | 0 | false | 2,798,627 | 1 | 0 | 0 | 1 | I have a bunch of files. Some have Unix line endings, many have DOS. I'd like to test each file to see if it is DOS formatted before I switch the line endings.
How would I do this? Is there a flag I can test for? Something similar? |
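The search the answer suggests translates directly into a small Python helper; this sketch reads in binary mode so Python cannot translate the line endings before we look at them (the function name is made up):

```python
def uses_dos_line_endings(path):
    # Binary mode is essential: text mode may normalize CRLF to LF,
    # hiding exactly the bytes we are looking for.
    with open(path, "rb") as f:
        return b"\r\n" in f.read()
```

For very large files, reading a fixed-size chunk from the start is usually enough, since the first line break already tells you the convention.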
cd Terminal at a given directory after running a Python script? | 2,799,275 | 1 | 5 | 2,631 | 0 | python,bash,shell,directory,terminal | Have you tried simply running the program in the current shell?
i.e.
$ . script.py
instead of
$ script.py | 0 | 1 | 0 | 0 | 2010-05-09T21:30:00.000 | 3 | 0.066568 | false | 2,799,256 | 0 | 0 | 0 | 2 | I'm working on a simple Python script that can use subprocess and/or os to execute some commands, which is working fine.
However, when the script exits I'd like to cd the actual Terminal (in this case OS X) so on exit, the new files are ready to use in the directory where the have been created. All the following (subprocess.Popen, os.system, os.chdir) can do what I want from within the script (i.e. they execute stuff in the target directory) but on exit leave the Terminal at the script's own directory, not the target directory.
I'd like to avoid writing a shell script to temporary file just to achieve this, if this is at all possible anyway? |
cd Terminal at a given directory after running a Python script? | 2,799,281 | 10 | 5 | 2,631 | 0 | python,bash,shell,directory,terminal | Sadly, no. Processes are not allowed to change the environment of their parent process, and in this case your Python script is a child process of the shell. You could "fake" it by having your Python process set up a new shell - call subprocess to open a shell process and present it to the user, inheriting the modified environment from itself - but that has the downside of forcing the Python process to run continually.
This is really what shell scripts are for.. :-) Someone clearly needs to write a more traditional shell (e.g. closer to Bash than IPython) which can use python as its scripting language. | 0 | 1 | 0 | 0 | 2010-05-09T21:30:00.000 | 3 | 1.2 | true | 2,799,256 | 0 | 0 | 0 | 2 | I'm working on a simple Python script that can use subprocess and/or os to execute some commands, which is working fine.
However, when the script exits I'd like to cd the actual Terminal (in this case OS X) so on exit, the new files are ready to use in the directory where the have been created. All the following (subprocess.Popen, os.system, os.chdir) can do what I want from within the script (i.e. they execute stuff in the target directory) but on exit leave the Terminal at the script's own directory, not the target directory.
I'd like to avoid writing a shell script to temporary file just to achieve this, if this is at all possible anyway? |
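A sketch of the accepted answer's workaround: spawn a child process (or a whole new shell) with the target directory as its working directory. The helper names here are hypothetical, and as the answer explains, the parent terminal's own cwd stays untouched because a child can never change its parent's environment:

```python
import os
import subprocess

def run_in_dir(command, directory):
    # Run `command` with its working directory set to `directory`.
    # Only the child sees the new directory; the parent's cwd is unchanged.
    return subprocess.check_output(command, cwd=directory, shell=True, text=True)

def open_shell_in(directory):
    # Interactive variant: drop the user into a fresh shell that starts
    # in `directory`; the Python process blocks until that shell exits.
    subprocess.call(os.environ.get("SHELL", "/bin/sh"), cwd=directory)
```

The second helper is exactly the "fake it by opening a new shell" approach, with the downside the answer notes: the Python process has to keep running underneath it.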
How to find the file system type in python | 2,800,869 | -1 | 6 | 2,040 | 0 | python,windows,macos,filesystems | os.popen('/sbin/fdisk -l /dev/sda') on Linux | 0 | 1 | 0 | 0 | 2010-05-10T06:46:00.000 | 3 | -0.066568 | false | 2,800,798 | 0 | 0 | 0 | 1 | I'm looking for a way in python to find out which type of file system is being used for a given path. I'm wanting to do this in a cross platform way. On linux I could just grab the output of df -T but that won't work on OSX or windows. |
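On Linux specifically, a pure-Python alternative to shelling out to df or fdisk is to parse /proc/mounts. This is a simplified sketch; the longest-prefix match is naive (it would, for example, treat /homework as living under /home), and it is Linux-only:

```python
def filesystem_type(path, mounts_file="/proc/mounts"):
    # Scan the mount table and keep the most specific (longest) mount
    # point that prefixes `path`; return that mount's filesystem type.
    best_mount, best_type = "", "unknown"
    with open(mounts_file) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 3:
                continue
            _device, mount_point, fs_type = fields[:3]
            if path.startswith(mount_point) and len(mount_point) > len(best_mount):
                best_mount, best_type = mount_point, fs_type
    return best_type
```

For a genuinely cross-platform answer (Windows, OS X), a third-party library that wraps the per-OS APIs is the realistic route.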
How to "signal" interested child processes (without signals)? | 2,804,999 | 2 | 1 | 651 | 0 | python,unix,fork,subprocess,signals | I have come up with the idea of using a pipe file descriptor that the parent could write and then read/flush in combination with select, but this doesn't really qualify as a very elegant design.
In more detail: The parent would create a pipe, the subprocesses would inherit it, the parent process would write to the pipe, thereby waking up any subprocess select():ing on the file descriptor, but the parent would then immediately read from the read end of the pipe and empty it - the only effect being that those child processes that were select():ing on the pipe have woken up.
As I said, this feels odd and ugly, but I haven't found anything really better yet.
Edit:
It turns out that this doesn't work - some child processes are woken up and some aren't. I've resorted to using a Condition from the multiprocessing module. | 0 | 1 | 0 | 0 | 2010-05-10T17:39:00.000 | 2 | 0.197375 | false | 2,804,964 | 0 | 0 | 0 | 1 | I'm trying to find a good and simple method to signal child processes
(created through SocketServer with ForkingMixIn) from the parent
process.
While Unix signals could be used, I want to avoid them since only
children who are interested should receive the signal, and it would be
overkill and complicated to require some kind of registration
mechanism to identify to the parent process who is interested.
(Please don't suggest threads, as this particular program won't work
with threads, and thus has to use forks.) |
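A minimal sketch of the parent-wakes-interested-children idea the answer arrives at, using multiprocessing.Event rather than Condition: an Event stays set, so a child that starts waiting late still wakes up, avoiding the lost-wakeup race the edit describes. Names are invented, and the fork start method is assumed (Unix only):

```python
import multiprocessing as mp

_ctx = mp.get_context("fork")  # fork keeps this runnable on Unix without pickling

def child(go, results, ident):
    go.wait()              # blocks until the parent calls go.set()
    results.put(ident)

def signal_children(count=3):
    go = _ctx.Event()
    results = _ctx.Queue()
    kids = [_ctx.Process(target=child, args=(go, results, i)) for i in range(count)]
    for k in kids:
        k.start()
    go.set()               # one call wakes every interested child at once
    collected = sorted(results.get() for _ in kids)
    for k in kids:
        k.join()
    return collected
```

Children that are not interested simply never wait on the event, which sidesteps the registration mechanism the question wants to avoid.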
Reverse Engineer a .pyo python file | 2,814,647 | 0 | 3 | 7,406 | 0 | python,disassembly | This is Brian, the question asker.
I completed what I needed through trial and error with hex editing: hex edit, then convert the source, see what I changed, until I finally narrowed down what I was looking for. The constants (admin IDs) were in the hex file as converted hex (obviously), but stored backwards.
I still have no idea how or where I'd find the !=.
I heard the newest version of IDA Pro supports Python, but I haven't learned how to get Python to work on it. | 0 | 1 | 0 | 0 | 2010-05-11T09:20:00.000 | 5 | 0 | false | 2,809,578 | 0 | 0 | 1 | 3 | I have 2 .pyo Python files that I can convert to .py source files, but they don't compile perfectly, as hinted by decompyle's verify.
Therefore, looking at the source code, I can tell that config.pyo simply had variables in an array:
ADMIN_USERIDS = [116901,
141,
349244,
39,
1159488]
I would like to take the original .pyo and disassemble it, or do whatever I need to do in order to change one of these IDs.
Or...
in model.pyo the source indicates a
if (productsDeveloperId != self.getUserId()):
All I would want to do is hex edit the != to be a ==. Simple with a Windows exe program, but I can't find a good Python disassembler anywhere.
Any suggestions are welcome... I am new to reading bytecode and new to Python as well. |
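As an alternative to hex editing blindly, the standard library's dis module can disassemble compiled code and show where a comparison such as != sits in the bytecode. A sketch with a made-up expression; note that 2010-era .pyo files contain Python 2 bytecode, whose opcode encoding differs from a modern interpreter's, so a patch made for one version does not carry over:

```python
import dis

def comparison_ops(source):
    # Compile without executing, then scan the bytecode for COMPARE_OP
    # instructions -- these are what a != ultimately becomes, and where
    # a bytecode-level patch would have to land.
    code = compile(source, "<example>", "exec")
    return [ins.argrepr for ins in dis.get_instructions(code)
            if ins.opname == "COMPARE_OP"]
```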
Reverse Engineer a .pyo python file | 2,809,684 | 0 | 3 | 7,406 | 0 | python,disassembly | Convert the .pyo files to .py, edit the .py files, and then run Python on them. Python will regenerate the .pyo files. Don't edit the .pyo.
I don't know the Python bytecode, but I doubt that the strings == or != would appear in the .pyo file.
A much better way, though, is to get the original .py files and use them. If they gave you the wrong program, as implied by wanting to change != to ==, you could ask the supplier to fix the bug. | 0 | 1 | 0 | 0 | 2010-05-11T09:20:00.000 | 5 | 0 | false | 2,809,578 | 0 | 0 | 1 | 3 | I have 2 .pyo Python files that I can convert to .py source files, but they don't compile perfectly, as hinted by decompyle's verify.
Therefore, looking at the source code, I can tell that config.pyo simply had variables in an array:
ADMIN_USERIDS = [116901,
141,
349244,
39,
1159488]
I would like to take the original .pyo and disassemble it, or do whatever I need to do in order to change one of these IDs.
Or...
in model.pyo the source indicates a
if (productsDeveloperId != self.getUserId()):
All I would want to do is hex edit the != to be a ==. Simple with a Windows exe program, but I can't find a good Python disassembler anywhere.
Any suggestions are welcome... I am new to reading bytecode and new to Python as well. |
Reverse Engineer a .pyo python file | 3,895,279 | 0 | 3 | 7,406 | 0 | python,disassembly | IDA up to 6.0 doesn't have a .pyc decompilation module. | 0 | 1 | 0 | 0 | 2010-05-11T09:20:00.000 | 5 | 0 | false | 2,809,578 | 0 | 0 | 1 | 3 | I have 2 .pyo python files that I can convert to .py source files, but they don't compile perfectly as hinted by decompyle's verify.
Therefore, looking at the source code, I can tell that config.pyo simply had variables in an array:
ADMIN_USERIDS = [116901,
141,
349244,
39,
1159488]
I would like to take the original .pyo and disassemble it, or do whatever I need to do in order to change one of these IDs.
Or...
in model.pyo the source indicates a
if (productsDeveloperId != self.getUserId()):
All I would want to do is hex edit the != to be a ==. Simple with a Windows exe program, but I can't find a good Python disassembler anywhere.
Any suggestions are welcome... I am new to reading bytecode and new to Python as well. |
catch output from linux telnet to a python script | 2,825,228 | 1 | 1 | 1,408 | 0 | python | As xitrium mentioned, it would be better if you used telnetlib. You can dispense with the whole mess of shell redirection etc.
If you do something like telnet foo | process.py, you can read your program's stdin (sys.stdin) to get the output of the telnet program. When you're happy, you can exit and terminate the pipeline. subprocess.Popen would be used if you're trying to open the telnet program as a subprocess of the interpreter; I'm not sure you wanted that.
In any case, telnetlib seems the right way to go. If you simply want an output text processor, consider Perl. Its strengths lean in that direction. | 0 | 1 | 0 | 0 | 2010-05-13T07:38:00.000 | 3 | 0.066568 | false | 2,825,100 | 0 | 0 | 0 | 1 | My problem is that I want to do something like this in the Linux console:
telnet 192.168.255.28 > process.py
i.e. I would like to do some transformation on the console telnet output using a Python script. I see Popen in Python for this case, but I can't understand how I can get input from telnet if it does not stop the whole time.
Please, any ideas? |
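The pipeline the answer describes (telnet foo | process.py) only needs a script that consumes sys.stdin line by line; it never has to wait for telnet to stop, because each line arrives as it is produced. A sketch with an invented transformation (numbering each line):

```python
import sys

def transform(lines):
    # Hypothetical transformation: prefix every line of telnet output
    # with a running line number.
    for n, line in enumerate(lines, 1):
        yield f"{n:4d}  {line.rstrip()}"

if __name__ == "__main__" and not sys.stdin.isatty():
    # Usage: telnet host | python process.py
    for out in transform(sys.stdin):
        print(out, flush=True)
```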
Data munging and data import scripting | 2,833,559 | 3 | 0 | 819 | 0 | php,python,perl,shell,data-munging | import data from a file and possibly reformat it
Python excels at this. Be sure to read up on the csv module so you don't waste time inventing it yourself.
For binary data, you may have to use the struct module. [If you wrote the C++ program that produces the binary data, consider rewriting that program to stop using binary data. Your life will be simpler in the long run. Disk storage is cheaper than your time; highly compressed binary formats cost more than they're worth.]
Import the munged data into a database.
Extract data from the database
Perform calculations on the data and either insert or update tables in the database.
Use the MySQLdb module for MySQL. SQLite support is built into Python.
Often, you'll want to use object-relational mapping rather than write your own SQL. Look at SQLObject and SQLAlchemy for this.
Also, before doing too much of this, buy a good book on data warehousing. Your two "task groups" sound like you're starting down the data warehousing road. It's easy to get this all fouled up through poor database design. Learn what a "Star Schema" is before you do anything else. | 0 | 1 | 0 | 1 | 2010-05-14T10:10:00.000 | 3 | 1.2 | true | 2,833,312 | 0 | 0 | 0 | 1 | I need to write some scripts to carry out some tasks on my server (running Ubuntu server 8.04 LTS). The tasks are to be run periodically, so I will be running the scripts as cron jobs.
I have divided the tasks into "group A" and "group B" - because (in my mind at least), they are a bit different.
Task Group A
import data from a file and possibly reformat it - by reformatting, I mean doing things like santizing the data, possibly normalizing it and or running calculations on 'columns' of the data
Import the munged data into a database. For now, I am using MySQL for the vast majority of imports - although some files will be imported into an SQLite database.
Note: The files will be mostly text files, although some of the files are in a binary format (my own proprietary format, written by a C++ application I developed).
Task Group B
Extract data from the database
Perform calculations on the data and either insert or update tables in the database.
My coding experience is primarily as a C/C++ developer, although I have been using PHP as well for the last 2 years or so (+ a few other languages which are not relevant for the purpose of this question). I am from a Windows background, so I am still finding my feet in the Linux environment.
My question is this - I need to write scripts to perform the tasks I described above. Although I suppose I could write a few C++ applications to be used in the shell scripts, I think it may be better to write them in a scripting language, but this may be a flawed assumption. My thinking is that it would be easier to modify things in a script - no need to rebuild etc. for changes to functionality. Additionally, data munging in C++ tends to involve more lines of code than in "natural" scripting languages such as Perl, Python, etc.
Assuming that the majority of people on here agree that scripting is the way to go, herein lies my dilemma. Which scripting language do I use to perform the tasks above (given my background)?
My gut instinct tells me that Perl (shudder) would be the most obvious choice for performing all of the above tasks. BUT (and that is a big BUT) the mere mention of Perl makes my toes curl, as I had a very, very bad experience with it a while back (I bought the Perl Camel book + 'Data Munging with Perl' many years ago, but could still not 'grok' it; it just felt too alien). The syntax seems quite unnatural to me - despite how many times I have tried to learn it - so if possible, I would really like to give it a miss. I am also not sure PHP (which I already know) is a good candidate for scripting on the CLI (I have not seen many examples of how to do this etc., so I may be wrong).
The last thing I must mention is that IF I have to learn a new language in order to do this, I cannot afford (time constraint) to spend more than a day, in learning the key commands/features required in order to do this (I can always learn the details of the language later, once I have actually deployed the scripts).
So, which scripting language would you recommend (PHP, Python, Perl, [insert your favorite here]) - and most importantly WHY? Or, should I just stick to writing little C++ applications that I call in a shell script?
Lastly, if you have suggested a scripting language, can you please show with a FEW lines (Perl mongers - I'm looking in your direction [nothing too cryptic!]) how I can use the language you suggested to do what I am trying to do i.e.
load a CSV file into some kind of data structure where you can access data columns easily for data manipulation
dump the columnar data into a MySQL table
load data from mySQL table into a data structure that allows columns/rows to be accessed in the scripting language
Hopefully, the snippets will allow me to quickly spot the languages that will pose the steepest learning curve for me - as well as those that are simple, elegant and efficient (hopefully those two criteria [elegance and shallow learning curve] are not orthogonal - though I suspect they might be). |
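To make the requested snippets concrete in Python: here is the CSV-to-database-and-back round trip, using the stdlib csv and sqlite3 modules (for MySQL, swap sqlite3 for MySQLdb and the ? placeholders for %s). The table and column names are invented:

```python
import csv
import io
import sqlite3

def load_csv(conn, csv_text):
    # 1. Load CSV into a structure with easy named-column access.
    rows = [(r["sensor"], float(r["value"]))
            for r in csv.DictReader(io.StringIO(csv_text))]
    # 2. Dump the columnar data into a table.
    conn.execute("CREATE TABLE IF NOT EXISTS readings (sensor TEXT, value REAL)")
    conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)
    conn.commit()

def average_value(conn):
    # 3. Read the data back and run a calculation on a column.
    (avg,) = conn.execute("SELECT AVG(value) FROM readings").fetchone()
    return avg
```

With a file on disk, replace io.StringIO with open(path, newline="") and pass the file object to csv.DictReader.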
Group Chat XMPP with Google App Engine | 2,836,255 | 3 | 3 | 1,577 | 0 | java,python,google-app-engine,xmpp | No. App Engine apps can only directly handle HTTP requests - you can't run arbitrary servers on App Engine. | 0 | 1 | 0 | 0 | 2010-05-14T15:44:00.000 | 1 | 1.2 | true | 2,835,472 | 0 | 0 | 1 | 1 | Google App Engine has a great XMPP service built in. One of the few limitations it has is that it doesn't support receiving messages from a group chat. That's the one thing I want to do with it. :(
Can I run a 3rd party XMPP/Jabber server on App Engine that supports group chat?
If so, which one? |
If I start learning C on Ubuntu will it give me an edge when I start learning Objective-C later this summer? | 2,840,981 | 4 | 0 | 835 | 0 | python,c,objective-c,linux,macos | It's frequently helpful to learn programming languages in the order they were created. The folks that wrote Objective-C clearly had C and its syntax, peculiarities, and features in mind when they defined the language. It can't hurt you to learn C now. You may have some insight into why Objective-C is structured the way it is later.
C has a great, classic book on it, The C Programming Language by Kernighan & Ritchie, which is short and easy to digest if you already have another language under your belt. | 0 | 1 | 0 | 1 | 2010-05-15T16:54:00.000 | 5 | 0.158649 | false | 2,840,932 | 0 | 0 | 0 | 4 | I know Ruby right now, however I want to learn a new language. I am running Ubuntu 10.04 right now but I am going to get a Mac later this summer. Anyways I want something more for GUI development. I was wondering if I should learn C on Ubuntu right now, and then learn Objective-C when I get an iMac? Will learning C give me an edge? Or should I just learn Python on Ubuntu and then learn Objective-C when I get a new computer? |
If I start learning C on Ubuntu will it give me an edge when I start learning Objective-C later this summer? | 2,840,961 | 1 | 0 | 835 | 0 | python,c,objective-c,linux,macos | Sure, Objective-C is much easier to learn if you know C, and quite a few books on Objective-C even assume you know C.
Also consider learning a bit about MacRuby for GUI development ;) | 0 | 1 | 0 | 1 | 2010-05-15T16:54:00.000 | 5 | 0.039979 | false | 2,840,932 | 0 | 0 | 0 | 4 | I know Ruby right now, however I want to learn a new language. I am running Ubuntu 10.04 right now but I am going to get a Mac later this summer. Anyways I want something more for GUI development. I was wondering if I should learn C on Ubuntu right now, and then learn Objective-C when I get an iMac? Will learning C give me an edge? Or should I just learn Python on Ubuntu and then learn Objective-C when I get a new computer? |
If I start learning C on Ubuntu will it give me an edge when I start learning Objective-C later this summer? | 2,909,728 | 0 | 0 | 835 | 0 | python,c,objective-c,linux,macos | Learning C will definitely help, as Objective-C inherits many of its properties and adds to them.
You could learn Objective-C from 'Learn Objective-C on the Mac' (a really great book). Then, if you plan to learn Cocoa, get 'Learn Cocoa on the Mac' or the one by James Davidson; they should give you a fine head start. You can then consider moving to the one by Hillegass, and, for a stunner, the 'Objective-C Developer Handbook' by David Chisnall; that one is a keeper, and you can read it in a month or two.
For the compiler, I would point you to clang, though a gcc and GNUstep combination will work. clang is a better choice if you want to work on Objective-C 2.0 features, and it is under heavy development. | 0 | 1 | 0 | 1 | 2010-05-15T16:54:00.000 | 5 | 0 | false | 2,840,932 | 0 | 0 | 0 | 4 | I know Ruby right now, however I want to learn a new language. I am running Ubuntu 10.04 right now but I am going to get a Mac later this summer. Anyways I want something more for GUI development. I was wondering if I should learn C on Ubuntu right now, and then learn Objective-C when I get an iMac? Will learning C give me an edge? Or should I just learn Python on Ubuntu and then learn Objective-C when I get a new computer?
If I start learning C on Ubuntu will it give me an edge when I start learning Objective-C later this summer? | 5,783,941 | 0 | 0 | 835 | 0 | python,c,objective-c,linux,macos | Yes. Learn how to program in C. | 0 | 1 | 0 | 1 | 2010-05-15T16:54:00.000 | 5 | 0 | false | 2,840,932 | 0 | 0 | 0 | 4 | I know Ruby right now, however I want to learn a new language. I am running Ubuntu 10.04 right now but I am going to get a Mac later this summer. Anyways I want something more for GUI development. I was wondering if I should learn C on Ubuntu right now, and then learn Objective-C when I get an iMac? Will learning C give me an edge? Or should I just learn Python on Ubuntu and then learn Objective-C when I get a new computer? |
How to catch an exception thrown in ctypes? | 2,844,132 | 3 | 3 | 1,439 | 0 | python,exception,exception-handling,ctypes,abort | You might be able to set up a signal handler on SIGABRT to handle the signal caused by abort().
However, failed assertions might go along with corrupted memory and other bad things - there's usually a reason why an assertion failed. So usually terminating the applications is the best thing you can do (except displaying/logging an error in your handler before terminating). | 0 | 1 | 0 | 0 | 2010-05-16T14:32:00.000 | 1 | 1.2 | true | 2,844,121 | 0 | 0 | 0 | 1 | I am working with some C code called from Python using ctypes. Somewhere in the bowels of the C library, an exception is occurring and/or abort() is being called. Is there any way I can catch this in my Python caller code? (Platform is Linux) |
Where does GoogleAppEngineLauncher keep the local log files? | 20,735,387 | 5 | 9 | 7,373 | 0 | python,google-app-engine,logging | Many of these answers are now outdated. :)
In today's devappserver, use --logs_path=LOGS_FILE if you want to log to a file (in its native sqlite database format). Or as suggested in a comment, simply pipe the output if that's too complicated.
Since there's a log API, it actually now stores log entries in a file in --storage_path if not set. I have noticed a few bugs myself, though. (I'll assume they don't exist now, it's been a while since I used this.) | 0 | 1 | 0 | 0 | 2010-05-16T17:06:00.000 | 4 | 0.244919 | false | 2,844,635 | 0 | 0 | 1 | 1 | GoogleAppEngineLauncher can display the local log file of my app while it is running on my Mac during development. However, I can't change the font size there so I would like to use the tail command to watch the log file myself.
It's a shame but I can't find the log files. They are not under /var/log/, ~/Library/Logs or /Library/Logs. Do you know where they are?
(Maybe there are no physical files, just the stdout of the python development environment and so the log is only available in the launcher application.) |
What scripts should not be ported from bash to python? | 2,853,719 | 0 | 1 | 387 | 0 | python,linux,bash,scripting | Certain scripts that I write simply involve looping over a glob in some directories and then executing a piped series of commands on them. This kind of thing is much more tedious in Python. | 0 | 1 | 0 | 1 | 2010-05-17T20:13:00.000 | 4 | 0 | false | 2,852,397 | 1 | 0 | 0 | 3 | I decided to rewrite all our Bash scripts in Python (there are not so many of them) as my first Python project. The reason for it is that, although I am quite fluent in Bash, I feel it's a somewhat archaic language, and since our system is in the first stages of its development I think switching to Python now will be the right thing to do.
Are there scripts that should always be written in Bash? For example, we have an init.d daemon script - is it OK to use Python for it?
We run CentOS.
Thanks. |
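For comparison, the glob-and-pipe one-liner the first answer has in mind (roughly `gzip -k *.log`) written out in Python with only the standard library; the function name and the .log pattern are invented:

```python
import gzip
import shutil
from pathlib import Path

def compress_logs(directory):
    # What the shell does in one line with `gzip -k *.log`: compress
    # every matching file, keeping the original alongside the .gz copy.
    compressed = []
    for path in sorted(Path(directory).glob("*.log")):
        with open(path, "rb") as src, gzip.open(f"{path}.gz", "wb") as dst:
            shutil.copyfileobj(src, dst)
        compressed.append(path.name)
    return compressed
```

Whether this counts as "much more tedious" than the shell version is exactly the trade-off the answers are debating.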
What scripts should not be ported from bash to python? | 2,852,418 | 3 | 1 | 387 | 0 | python,linux,bash,scripting | It is OK in the sense that you can do it. But the scripts in /etc/init.d usually need to load config data and some functions (for example to print the nice green OK on the console) which will be hard to emulate in Python.
So try to convert those which make sense (i.e. those which contain complex logic). If you need job control (starting/stopping processes), then bash is better suited than Python. | 0 | 1 | 0 | 1 | 2010-05-17T20:13:00.000 | 4 | 1.2 | true | 2,852,397 | 1 | 0 | 0 | 3 | I decided to rewrite all our Bash scripts in Python (there are not so many of them) as my first Python project. The reason for it is that, although I am quite fluent in Bash, I feel it's a somewhat archaic language, and since our system is in the first stages of its development I think switching to Python now will be the right thing to do.
Are there scripts that should always be written in Bash? For example, we have an init.d daemon script - is it OK to use Python for it?
We run CentOS.
Thanks. |
What scripts should not be ported from bash to python? | 2,853,661 | 1 | 1 | 387 | 0 | python,linux,bash,scripting | Every task has languages that are better suited for it and less so. Replacing the backtick ` quote of sh is pretty ponderous in Python as would be myriad quoting details, just to name a couple. There are likely better projects to cut your teeth on.
And all that was said above about Python being relatively heavyweight and not necessarily available when needed. | 0 | 1 | 0 | 1 | 2010-05-17T20:13:00.000 | 4 | 0.049958 | false | 2,852,397 | 1 | 0 | 0 | 3 | I decided to rewrite all our Bash scripts in Python (there are not so many of them) as my first Python project. The reason for it is that, although I am quite fluent in Bash, I feel it's a somewhat archaic language, and since our system is in the first stages of its development I think switching to Python now will be the right thing to do.
Are there scripts that should always be written in Bash? For example, we have an init.d daemon script - is it OK to use Python for it?
We run CentOS.
Thanks. |
How do I run an interactive command line Python app inside of Emacs on Win32? | 2,884,416 | 2 | 5 | 820 | 0 | python,emacs | You should look into the other shell modes. TERM-MODE and ANSI-MODE. I believe they can support interactive command line programs. | 0 | 1 | 0 | 0 | 2010-05-18T19:25:00.000 | 3 | 0.132549 | false | 2,860,386 | 0 | 0 | 0 | 2 | If I use M-x shell and run the interactive Python interpreter, Emacs on Windows does not return any IO.
When I discovered M-x python-shell, I regained hope. However, instead of running the interactive Python shell, I want to run a specific Python script that features an interactive CLI. (See Python's cmd module for details).
Is there a way of launching a Python script in Emacs that is interactive? (stdout, stdin, stderr) |
How do I run an interactive command line Python app inside of Emacs on Win32? | 21,938,935 | -1 | 5 | 820 | 0 | python,emacs | You can run python in interactive mode by typing python.exe -i
Alternatively, you can download ipython and just run ipython.exe | 0 | 1 | 0 | 0 | 2010-05-18T19:25:00.000 | 3 | -0.066568 | false | 2,860,386 | 0 | 0 | 0 | 2 | If I use M-x shell and run the interactive Python interpreter, Emacs on Windows does not return any IO.
When I discovered M-x python-shell, I regained hope. However, instead of running the interactive Python shell, I want to run a specific Python script that features an interactive CLI. (See Python's cmd module for details).
Is there a way of launching a Python script in Emacs that is interactive? (stdout, stdin, stderr) |
Queue remote calls to a Python Twisted perspective broker? | 2,899,163 | -2 | 11 | 6,156 | 0 | python,asynchronous,twisted | You might also like the txRDQ (Resizable Dispatch Queue) I wrote. Google it, it's in the tx collection on LaunchPad. Sorry I don't have more time to reply - about to go onstage.
Terry | 0 | 1 | 0 | 0 | 2010-05-18T23:22:00.000 | 2 | -0.197375 | false | 2,861,858 | 0 | 0 | 0 | 1 | The strength of Twisted (for python) is its asynchronous framework (I think). I've written an image processing server that takes requests via Perspective Broker. It works great as long as I feed it less than a couple hundred images at a time. However, sometimes it gets spiked with hundreds of images at virtually the same time. Because it tries to process them all concurrently the server crashes.
As a solution I'd like to queue up the remote_calls on the server so that it only processes ~100 images at a time. It seems like this might be something that Twisted already does, but I can't seem to find it. Any ideas on how to start implementing this? A point in the right direction? Thanks! |
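Twisted itself ships twisted.internet.defer.DeferredSemaphore for exactly this bounded-concurrency pattern (a DeferredQueue-based dispatcher works too). The same idea expressed with the stdlib's asyncio, with a stand-in for the real image processing:

```python
import asyncio

async def process_image(image_id):
    # Stand-in for the real image-processing work.
    await asyncio.sleep(0)
    return image_id

async def process_all(image_ids, limit=100):
    # Admit at most `limit` tasks past the semaphore at any one time,
    # so a spike of hundreds of images cannot all run concurrently.
    sem = asyncio.Semaphore(limit)

    async def guarded(image_id):
        async with sem:
            return await process_image(image_id)

    return await asyncio.gather(*(guarded(i) for i in image_ids))
```

The requests beyond the limit simply queue up inside the semaphore instead of crashing the server.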
call external program in python, watch output for specific text then take action | 2,864,298 | 3 | 0 | 720 | 0 | python,linux,unix | Use Popen from subprocess like this
from subprocess import Popen, PIPE
process = Popen("cmd", shell=True, bufsize=-1, stdout=PIPE)
Then use process.stdout to read from the program's stdout (like reading from any other file-like object). | 0 | 1 | 0 | 0 | 2010-05-19T09:32:00.000 | 1 | 0.53705 | false | 2,864,277 | 1 | 0 | 0 | 1 | I'm looking for a way in Python to run an external binary and watch its output for "up to date". If "up to date" isn't returned, I want to run the original command again; once "up to date" is displayed, I would like to be able to run another script. So far I've figured out how to run the binary with options using subprocess, but that's as far as I've gotten. Thanks! |
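Putting the answer together for the question's actual goal (re-run a command until its output contains "up to date", then move on); the command string, retry count, and delay are placeholders:

```python
import subprocess
import time

def run_until_up_to_date(cmd, max_tries=10, delay=1.0):
    # Re-run `cmd` until its output contains "up to date";
    # return True on success, False if we give up.
    for attempt in range(max_tries):
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if "up to date" in result.stdout:
            return True
        if attempt + 1 < max_tries:
            time.sleep(delay)  # give the external tool time before retrying
    return False
```

On success the caller can then launch the follow-up script, e.g. with another subprocess.run call.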
Twisted Matrix and telnet server implementation | 2,864,683 | 1 | 4 | 2,482 | 0 | python,twisted,telnet | It sounds like you've got two separate tasks here:
Port the code from C to Python.
Rewrite the whole program to use Twisted.
Since you're new to Python, I would be inclined to do the first one first, before trying to make the program structure work in Twisted. If the program is old, there isn't likely to be any performance problems running it on modern hardware.
Converting the C code to Python first will give you the familiarity with Python you need to start on the port to Twisted. | 0 | 1 | 0 | 1 | 2010-05-19T10:33:00.000 | 2 | 0.099668 | false | 2,864,663 | 0 | 0 | 0 | 1 | I have a project which is essentially a game server where users connect and send text commands via telnet.
The code is in C, really old and non-modular, and has several bugs and missing features. The main function alone is half the code.
I came to the conclusion that rewriting it in Python, with Twisted, could actually result in faster completion, besides other benefits.
So, here is the questions:
What packages and modules should I use? I see a "telnet" module inside the "protocols" package. I also see a "conch" package with "ssh" and another "telnet" module.
I'm a complete novice regarding Python. |
Importing Sqlite data into Google App Engine | 2,873,946 | 0 | 5 | 2,231 | 1 | python,google-app-engine,sqlite | I have not had any trouble importing pysqlite2, reading data, then transforming it and writing it to AppEngine using the remote_api.
What error are you seeing? | 0 | 1 | 0 | 0 | 2010-05-20T00:47:00.000 | 4 | 0 | false | 2,870,379 | 0 | 0 | 1 | 1 | I have a relatively extensive sqlite database that I'd like to import into my Google App Engine python app.
I've created my models using the appengine API which are close, but not quite identical to the existing schema. I've written an import script to load the data from sqlite and create/save new appengine objects, but the appengine environment blocks me from accessing the sqlite library. This script is only to be run on my local app engine instance, and from there I hope to push the data to google.
Am I approaching this problem the wrong way, or is there a way to import the sqlite library while running in the local instance's environment? |
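The sqlite-reading half of such an import script needs nothing outside the stdlib sqlite3 module — only the saving half touches App Engine (via remote_api), so the two stages can be kept cleanly separated. A sketch (the table name is a placeholder and assumed trusted):

```python
import sqlite3

def read_rows(db_path, table):
    # Stream rows out of the old sqlite database as dicts; each dict
    # can then be mapped onto a datastore model and saved elsewhere.
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # allows access by column name
    try:
        for row in conn.execute("SELECT * FROM %s" % table):
            yield dict(row)
    finally:
        conn.close()
```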
Is there any use for Bash scripting anymore? | 3,203,607 | 1 | 76 | 50,948 | 0 | python,perl,bash,scripting,comparison | What I don't get is why people say bash when they mean any bourne-shell compatible shell.
When writing shell scripts, always try to use constructs that also work in older Bourne shell interpreters. It will save you lots of trouble some day.
And yes, there is plenty of use for shell scripts today, as the shell always exists on all Unixes, out of the box, contrary to Perl, Python, csh, zsh, ksh (possibly?), and so on.
Most of the time they only add extra convenience or different syntax for constructs like loops and tests. Some have improved redirection features.
Most of the time, I would say that ordinary bourne shell works equally well.
Typical pitfall:
if ! test $x -eq $y works as expected in bash that has a more clever builtin "if" operator, but the "correct" if test ! $x -eq $y should work in all environments. | 0 | 1 | 0 | 1 | 2010-05-20T08:20:00.000 | 18 | 0.011111 | false | 2,872,041 | 0 | 0 | 0 | 10 | I just finished my second year as a university CS student, so my "real-world" knowledge is lacking. I learned Java my first year, continued with Java and picked up C and simple Bash
scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past.
My question is, now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc, why does anyone write Bash scripts? Is there something I'm missing? I know my linux box has perl and python. Are they not ubiquitous enough? Is there really something
that's easier to do in Bash than in some other hll? |
Is there any use for Bash scripting anymore? | 3,147,413 | 1 | 76 | 50,948 | 0 | python,perl,bash,scripting,comparison | As mentioned, the GNU tools are great, and are easiest to use within the shell. It is especially nice if your data is already in a linear or tabular form of plain text. Just as an example, the other day I was able to build a script to create an XHTML word cloud of any text file in 8 lines of Bourne Shell, which is even less powerful (but more widely supported) than Bash. | 0 | 1 | 0 | 1 | 2010-05-20T08:20:00.000 | 18 | 0.011111 | false | 2,872,041 | 0 | 0 | 0 | 10 | I just finished my second year as a university CS student, so my "real-world" knowledge is lacking. I learned Java my first year, continued with Java and picked up C and simple Bash
scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past.
My question is, now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc, why does anyone write Bash scripts? Is there something I'm missing? I know my linux box has perl and python. Are they not ubiquitous enough? Is there really something
that's easier to do in Bash than in some other hll? |
Is there any use for Bash scripting anymore? | 2,877,257 | 2 | 76 | 50,948 | 0 | python,perl,bash,scripting,comparison | I'm a perl guy, but the number of the bash (or ksh) functions I use and create on a daily basis is quite significant. For anything involved, I'll write a perl script, but for navigating the directory structure, and specifically for manipulating environment variables bash/ksh/... are indispensable.
Again, especially for environment variables nothing beats shell, and quite a few programs use environment variables. In Perl, I have to write a bash alias or function that calls the Perl script, which writes out a temporary bash script, which then gets sourced after Perl exits in order to make the change in the same environment I'm launching from.
I've done this, especially for heavy-lifting on path variables. But there's no way to do it in just Perl (or python or ruby... or C-code for that matter). | 0 | 1 | 0 | 1 | 2010-05-20T08:20:00.000 | 18 | 0.022219 | false | 2,872,041 | 0 | 0 | 0 | 10 | I just finished my second year as a university CS student, so my "real-world" knowledge is lacking. I learned Java my first year, continued with Java and picked up C and simple Bash
scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past.
My question is, now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc, why does anyone write Bash scripts? Is there something I'm missing? I know my linux box has perl and python. Are they not ubiquitous enough? Is there really something
that's easier to do in Bash than in some other hll? |
Is there any use for Bash scripting anymore? | 2,875,733 | 1 | 76 | 50,948 | 0 | python,perl,bash,scripting,comparison | In my experience, Perl meets something like 99% of any need that might require a shell script. As a bonus, it is possible to write code that runs on Windows sans Cygwin. If I won't have a Perl install on a Windows box I want to target, I can use PAR::Packer or PerlApp to produce an executable. Python, Ruby and others should work just as well, too.
However, shell scripting isn't all that complicated--at least things that you should be scripting in a shell aren't all that complicated. You can do what you need to do with a fairly shallow level of knowledge.
Learn how to read and set variables. How to create and call functions. How to source other files. Learn how flow control works.
And most important, learn to read the shell man page. This may sound facetious, but I am 100% serious--don't worry about cramming every detail of shell scripting into your brain, instead learn to find what you need to know in the man page quickly and efficiently. If you find yourself using shell scripting often, the pertinent info will naturally stick in your brain.
So, yes, basic shell is worth learning. | 0 | 1 | 0 | 1 | 2010-05-20T08:20:00.000 | 18 | 0.011111 | false | 2,872,041 | 0 | 0 | 0 | 10 | I just finished my second year as a university CS student, so my "real-world" knowledge is lacking. I learned Java my first year, continued with Java and picked up C and simple Bash
scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past.
My question is, now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc, why does anyone write Bash scripts? Is there something I'm missing? I know my linux box has perl and python. Are they not ubiquitous enough? Is there really something
that's easier to do in Bash than in some other hll? |
Is there any use for Bash scripting anymore? | 2,872,083 | 2 | 76 | 50,948 | 0 | python,perl,bash,scripting,comparison | Easier, probably not. I actually prefer perl to bash scripting in many cases. Bash does have one advantage, though, especially on Linux systems: it's all but guaranteed to be installed. And if it's not, its largely-compatible father (sh) will be, cause almost all system scripts are written for sh. Even perl isn't that ubiquitous, and it's everyfreakingwhere. | 0 | 1 | 0 | 1 | 2010-05-20T08:20:00.000 | 18 | 0.022219 | false | 2,872,041 | 0 | 0 | 0 | 10 | I just finished my second year as a university CS student, so my "real-world" knowledge is lacking. I learned Java my first year, continued with Java and picked up C and simple Bash
scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past.
My question is, now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc, why does anyone write Bash scripts? Is there something I'm missing? I know my linux box has perl and python. Are they not ubiquitous enough? Is there really something
that's easier to do in Bash than in some other hll? |
Is there any use for Bash scripting anymore? | 2,872,064 | 8 | 76 | 50,948 | 0 | python,perl,bash,scripting,comparison | Well, when writing with bash, you can directly use every possible tool you have on the command line for your script. With any other language you would first have to execute that command and get some result etc. A simple script that (for example) gets a list of processes, runs through grep and gets some result would be a lot more complicated in other languages. As such, bash is still a good tool for writing quick things. | 0 | 1 | 0 | 1 | 2010-05-20T08:20:00.000 | 18 | 1 | false | 2,872,041 | 0 | 0 | 0 | 10 | I just finished my second year as a university CS student, so my "real-world" knowledge is lacking. I learned Java my first year, continued with Java and picked up C and simple Bash
scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past.
My question is, now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc, why does anyone write Bash scripts? Is there something I'm missing? I know my linux box has perl and python. Are they not ubiquitous enough? Is there really something
that's easier to do in Bash than in some other hll? |
Is there any use for Bash scripting anymore? | 2,874,267 | 2 | 76 | 50,948 | 0 | python,perl,bash,scripting,comparison | If you do lots of GUI stuff, you'll probably only meet bash whenever you're doing some sort of customization on your own machine. Various hacks and stuff. If you use the command line to do stuff, bash is just indispensable. In fact, being good on the command line requires bash or some other shell familiarity.
I get miles out of having learned Bash when I wanted to navigate around my harddrive quickly. I wrote a navigation/menu interface that let me beam to different folders and files quickly and easily. Writing it in bash was simple and easy. And there's lots of easily accessed, and free, stuff that'll show you how.
Also, learning Bash is great for understanding how Unix and some of the core stuff really works -- and how far we've come with tools like Python. | 0 | 1 | 0 | 1 | 2010-05-20T08:20:00.000 | 18 | 0.022219 | false | 2,872,041 | 0 | 0 | 0 | 10 | I just finished my second year as a university CS student, so my "real-world" knowledge is lacking. I learned Java my first year, continued with Java and picked up C and simple Bash
scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past.
My question is, now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc, why does anyone write Bash scripts? Is there something I'm missing? I know my linux box has perl and python. Are they not ubiquitous enough? Is there really something
that's easier to do in Bash than in some other hll? |
Is there any use for Bash scripting anymore? | 2,875,677 | 4 | 76 | 50,948 | 0 | python,perl,bash,scripting,comparison | Apart from what others have said, I'd like to point out what's in my opinion the main reason to learn Bash: it's the (almost) standard Linux shell.
Other scripting languages are surely useful, and maybe a lot more powerful, but what you'll be dealing with when you have a terminal in front of you is... Bash.
Being able to manage I/O, pipes, and processes, assign and use variables, and do at least some loop and condition evaluation is a must if you want to manage a Linux system.
scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past.
My question is, now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc, why does anyone write Bash scripts? Is there something I'm missing? I know my linux box has perl and python. Are they not ubiquitous enough? Is there really something
that's easier to do in Bash than in some other hll? |
Is there any use for Bash scripting anymore? | 2,872,085 | 2 | 76 | 50,948 | 0 | python,perl,bash,scripting,comparison | Bash (and the original Bourne sh and myriad derivatives) is - from one perspective - an incredibly high-level language. Where many languages use simple primitives, shell primitives are entire programs.
That it might not be the best language to express your tasks, doesn't mean it is dead, dying, or even moribund. | 0 | 1 | 0 | 1 | 2010-05-20T08:20:00.000 | 18 | 0.022219 | false | 2,872,041 | 0 | 0 | 0 | 10 | I just finished my second year as a university CS student, so my "real-world" knowledge is lacking. I learned Java my first year, continued with Java and picked up C and simple Bash
scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past.
My question is, now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc, why does anyone write Bash scripts? Is there something I'm missing? I know my linux box has perl and python. Are they not ubiquitous enough? Is there really something
that's easier to do in Bash than in some other hll? |
Is there any use for Bash scripting anymore? | 2,872,171 | 26 | 76 | 50,948 | 0 | python,perl,bash,scripting,comparison | The real difference between bash and python is that python is a general purpose scripting language, while bash is simply a way to run a myriad of small (and often very fast) programs in a series. Python can do this, but it is not optimized for it. The programs (sort, find, uniq, scp) might do very complex tasks very simply, and bash allows these tasks to interoperate very simply with piping, flushing output in and out from files or devices etc. While Python can run the same programs, you will be forced to do bash scripting in the python script to accomplish the same thing, and then you are stuck with both python and bash. Both are fine by them self, but a mix of these don't improve anything IMHO. | 0 | 1 | 0 | 1 | 2010-05-20T08:20:00.000 | 18 | 1 | false | 2,872,041 | 0 | 0 | 0 | 10 | I just finished my second year as a university CS student, so my "real-world" knowledge is lacking. I learned Java my first year, continued with Java and picked up C and simple Bash
scripting my second. This summer I'm trying to learn Perl (God help me). I've dabbled with Python a bit in the past.
My question is, now that we have very readable, very writable scripting languages like Python, Ruby, Perl, etc, why does anyone write Bash scripts? Is there something I'm missing? I know my linux box has perl and python. Are they not ubiquitous enough? Is there really something
that's easier to do in Bash than in some other hll? |
What's the most scalable way to handle somewhat large file uploads in a Python webapp? | 19,850,039 | 0 | 3 | 672 | 0 | python,asynchronous,file-upload | See if you can push the files straight from the user's browser to a static file store, like Amazon S3 or rackspace's cloudfiles service.
Then you can handle a ton of parallel users. | 0 | 1 | 0 | 0 | 2010-05-20T17:17:00.000 | 2 | 0 | false | 2,876,181 | 0 | 0 | 0 | 1 | We have a web application that takes file uploads for some parts. The file uploads aren't terribly big (mostly word documents and such), but they're much larger than your typical web request and they tend to tie up our threaded servers (zope 2 servers running behind an Apache proxy).
I'm mostly in the brainstorming phase right now and trying to figure out a general technique to use. Some ideas I have are:
Using a python asynchronous server like tornado or diesel or gunicorn.
Writing something in twisted to handle it.
Just using nginx to handle the actual file uploads.
It's surprisingly difficult to find information on which approach I should be taking. I'm sure there are plenty of details that would be needed to make an actual decision, but I'm more worried about figuring out how to make this decision than anything else. Can anyone give me some advice about how to proceed with this? |
Python 3.0 IDE - Komodo and Eclipse both flaky? | 11,520,785 | 0 | 3 | 584 | 0 | python,ide | You did not mention these so I'm not sure if you've tried them but there are:
- Aptana (aptana.com)
- The Eric Python IDE (http://eric-ide.python-projects.org/)
- WingWare Python IDE (wingware.com)
I haven't used any of them so I don't know if they will match your needs, but I'd expect them to be pretty close as they are all mature.
As for PyCharm, I've been using it for a while and it's fine; actually, I like it very much.
However I'm a Python noob and probably do not use many advanced features so YMMV. | 0 | 1 | 0 | 1 | 2010-05-21T01:18:00.000 | 2 | 0 | false | 2,879,008 | 0 | 0 | 0 | 1 | I'm trying to find a decent IDE that supports Python 3.x, and offers code completion/in-built Pydocs viewer, Mercurial integration, and SSH/SFTP support.
Anyhow, I'm trying Pydev, and I open up a .py file, it's in the Pydev perspective and the Run As doesn't offer any options. It does when you start a Pydev project, but I don't want to start a project just to edit one single Python script, lol, I want to just open a .py file and have It Just Work...
Plan 2, I try Komodo 6 Alpha 2. I actually quite like Komodo, and it's nice and snappy, offers in-built Mercurial support, as well as in-built SSH support (although it lacks SSH HTTP Proxy support, which is slightly annoying).
However, for some reason, this refuses to pick up Python 3. In Edit-Preferences-Languages, there's two option, one for Python and Python3, but the Python3 one refuses to work, with either the official Python.org binaries, or ActiveState's own ActivePython 3. Of course, I can set the "Python" interpreter to the 3.1 binary, but that's an ugly hack and breaks Python 2.x support.
So, does anybody who uses an IDE for Python have any suggestions on either of these accounts, or can you recommend an alternate IDE for Python 3.0 development?
Cheers,
Victor |
How can I hide the console window when freezing wxPython applications with cxFreeze? | 2,881,979 | 3 | 31 | 18,074 | 0 | python,wxpython,cx-freeze | If you're using Windows, you could rename your "main" script's extension (that launches the app) to .pyw | 1 | 1 | 0 | 0 | 2010-05-21T07:37:00.000 | 4 | 0.148885 | false | 2,880,316 | 0 | 0 | 0 | 1 | I'm developing a Python application using wxPython and freezing it using cxFreeze. All seems to be going fine apart from this following bit:
When I run the executable created by cxFreeze, a blank console window pops up. I don't want to show it. Is there any way I could hide it?
It doesn't seem to be documented on the cxFreeze site and Googling didn't turn up much apart from some similar sorta problems with Py2Exe.
Thanks. |
Google App Engine, Task Queues | 2,889,352 | 3 | 3 | 1,032 | 0 | python,google-app-engine,httpwebrequest,scheduled-tasks | A task is removed from the queue when it executes if and only if it returns an HTTP 200 response. For any other response, it will be retried until it successfully executes.
As David's answer indicates, they can also be manually removed. | 0 | 1 | 0 | 0 | 2010-05-21T18:29:00.000 | 3 | 0.197375 | false | 2,884,579 | 0 | 0 | 1 | 1 | How can I remove a task from a task queue? Does Google App Engine's Task Queue remove a task from the queue after it is executed?
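The remove-only-on-success rule is worth internalizing; here is a deliberately generic sketch of those semantics (this is not the App Engine API, just an in-memory illustration):

```python
from collections import deque

def drain(queue, handler, max_passes=5):
    # A task leaves the queue only when its handler succeeds
    # (the HTTP 200 case); any exception puts it back for a retry.
    for _ in range(max_passes):
        for _ in range(len(queue)):
            task = queue.popleft()
            try:
                handler(task)            # "200 OK": task is gone
            except Exception:
                queue.append(task)       # error: task will retry
        if not queue:
            return True
    return False
```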
Python 2.6 + PIL + Google App Engine issue | 2,885,231 | -1 | 3 | 1,343 | 0 | python,image,google-app-engine | How did you install PIL? If I remember correctly, I had to install it via MacPorts to get the App Engine SDK to recognize that it was installed.
You should probably install Python 2.5 and use that, while you're at it, since that is the Python version that App Engine uses and developing against 2.6 locally could potentially lead to surprising issues when you deploy your app. | 0 | 1 | 0 | 0 | 2010-05-21T19:38:00.000 | 5 | -0.039979 | false | 2,885,057 | 1 | 0 | 1 | 4 | I am using OS X 1.6 snow leopard and I successfully got PIL installed. I am able to open terminal and type import Image without any errors.
However, when using App Engine I still get an Image error saying that PIL is not installed. I am wondering if any of you have any thoughts as to how I can resolve this issue.
-Matthew |
Python 2.6 + PIL + Google App Engine issue | 2,889,386 | 0 | 3 | 1,343 | 0 | python,image,google-app-engine | You cannot use PIL with Appengine; it uses C extensions and won't run in the sandbox environment. You do need to have PIL installed on your local machine to use the images API in dev_appserver, because the SDK version of the images API itself uses PIL, but this doesn't mean you can use all of PIL through the images API; the images API is fairly limited.
Additionally, it's a good idea to use Python 2.5 for development, as the production environment uses version 2.5.2 and not all Python 2.6 syntax will work in production (notably "except FooError as bar"), and the purpose of the development server is to test that your code will work right in production. | 0 | 1 | 0 | 0 | 2010-05-21T19:38:00.000 | 5 | 0 | false | 2,885,057 | 1 | 0 | 1 | 4 | I am using OS X 1.6 snow leopard and I successfully got PIL installed. I am able to open terminal and type import Image without any errors.
However, when using App Engine I still get an Image error saying that PIL is not installed. I am wondering if any of you have any thoughts as to how I can resolve this issue.
-Matthew |
Python 2.6 + PIL + Google App Engine issue | 4,180,349 | 1 | 3 | 1,343 | 0 | python,image,google-app-engine | I had this same problem and found in GoogleAppEngineLauncher | Preferences that I needed to set the Python Path to /usr/local/bin/python2.5
After I did that it started working. | 0 | 1 | 0 | 0 | 2010-05-21T19:38:00.000 | 5 | 0.039979 | false | 2,885,057 | 1 | 0 | 1 | 4 | I am using OS X 1.6 snow leopard and I successfully got PIL installed. I am able to open terminal and type import Image without any errors.
However, when using App Engine I still get an Image error saying that PIL is not installed. I am wondering if any of you have any thoughts as to how I can resolve this issue.
-Matthew |
Python 2.6 + PIL + Google App Engine issue | 28,392,607 | 0 | 3 | 1,343 | 0 | python,image,google-app-engine | What David Scott said is actually correct.
I had the errors below when running, and for the heck of me couldn't resolve the issue no matter what patches I tried. What worked for me, apparently, was simply changing the Python directory found at C:\python27_x64 and targeting the pythonw.exe file in the Google App Engine Launcher.
FYI, I run with Windows 8.1
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\appcfg.py", line 127, in <module>
run_file(file, globals())
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\appcfg.py", line 123, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5399, in <module>
main(sys.argv)
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5390, in main
result = AppCfgApp(argv).Run()
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 2980, in Run
self.action(self)
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5046, in __call__
return method()
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3793, in Update
self._UpdateWithParsedAppYaml(appyaml, self.basepath)
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3814, in _UpdateWithParsedAppYaml
updatecheck.CheckForUpdates()
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\sdk_update_checker.py", line 243, in CheckForUpdates
runtime=runtime))
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appengine_rpc.py", line 424, in Send
f = self.opener.open(req)
File "C:\Python27\lib\urllib2.py", line 431, in open
response = self._open(req, data)
File "C:\Python27\lib\urllib2.py", line 449, in _open
'_open', req)
File "C:\Python27\lib\urllib2.py", line 409, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 1240, in https_open
context=self._context)
TypeError: do_open() got an unexpected keyword argument 'context'
2015-02-08 17:42:53 (Process exited with code 1)
You can close this window now. | 0 | 1 | 0 | 0 | 2010-05-21T19:38:00.000 | 5 | 0 | false | 2,885,057 | 1 | 0 | 1 | 4 | I am using OS X 1.6 snow leopard and I successfully got PIL installed. I am able to open terminal and type import Image without any errors.
However, when using App Engine I still get an Image error saying that PIL is not installed. I am wondering if any of you have any thoughts as to how I can resolve this issue.
-Matthew |
Which logging library to use for cross-language (Java, C++, Python) system | 2,926,570 | 1 | 11 | 2,806 | 0 | java,c++,python,logging | I'd use Apache log4cxx or Apache log4j.
It's efficient. It has logger hierarchies to modularize your logs. It's been proven technology for a while now.
Currently, appenders exist for the console, files, GUI components, remote socket servers, NT event loggers, and remote Unix syslog daemons. It is also possible to log asynchronously.
How can I support distributed logging?
With remote socket server appenders, for example. | 0 | 1 | 0 | 1 | 2010-05-21T21:49:00.000 | 3 | 0.066568 | false | 2,885,822 | 0 | 0 | 1 | 1 | I have a system where a central Java controller launches analysis processes, which may be written in C++, Java, or Python (mostly they are C++). All these processes currently run on the same server. What are your suggestions to
Create a central log to which all processes can write
What if in the future I push some processes to another server? How can I support distributed logging?
Thanks! |
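On the Python side of such a system, the stdlib logging module mirrors the appender design: swapping a StreamHandler for logging.handlers.SocketHandler (or SysLogHandler) is what turns per-process logging into the central/distributed case. A sketch (host and port are placeholders for a collector process you would have to run yourself):

```python
import logging
import logging.handlers

def make_logger(name, collector_host=None, collector_port=9020):
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    if collector_host:
        # Ships pickled LogRecords to a central collector process;
        # the socket is opened lazily, on the first emitted record.
        handler = logging.handlers.SocketHandler(collector_host,
                                                 collector_port)
    else:
        handler = logging.StreamHandler()
        # One shared format keeps a merged multi-process log readable.
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

The C++ and Java processes can ship to the same collector with log4cxx/log4j socket appenders, as the answer suggests.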
Check if NFS share is mounted in python script | 10,356,817 | 1 | 2 | 4,930 | 0 | python,nfs | You can parse the /proc/mounts file. Notice that on different platforms and kernel versions the file format may differ. | 0 | 1 | 0 | 0 | 2010-05-22T19:32:00.000 | 2 | 0.099668 | false | 2,889,490 | 0 | 0 | 0 | 1 | I wrote a Python script that depends on a certain NFS share being available. If the NFS share is not mounted it will happily copy the files to the local path where it should be mounted, but fail later when it tries to copy some files back that were created on the NFS server.
I'd like to catch this error specifically so I can print a useful error message that will tell the users of this script what they have to do.
My first idea would be to execute mount using subprocess and then check the output for this nfs share. But I'm wondering if there isn't a nicer and more robust method of doing it. |
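A sketch of the /proc/mounts approach from the answer — look for an nfs entry at the expected mount point (Linux-specific; the mount point below is a placeholder, and the file path is parameterized so it can be checked against a sample):

```python
def is_nfs_mounted(mount_point, mounts_file="/proc/mounts"):
    # Each line of /proc/mounts looks like:
    #   device mount_point fstype options dump pass
    with open(mounts_file) as f:
        for line in f:
            fields = line.split()
            if (len(fields) >= 3
                    and fields[1] == mount_point
                    and fields[2].startswith("nfs")):
                return True
    return False
```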
Python modules import error | 2,893,047 | 4 | 2 | 2,669 | 0 | python,linux,centos | What's dns.__file__ in the first case? I suspect it's not coming from the directory you cded into the second time (the current directory when you start Python goes at the front of sys.path) but rather from a package containing that crucial resolver module which the second one appears to be lacking. | 0 | 1 | 0 | 0 | 2010-05-23T19:06:00.000 | 1 | 1.2 | true | 2,893,033 | 0 | 0 | 0 | 1 | Very strange for me:
# uname -a
Linux localhost.localdomain 2.6.18-194.3.1.el5 #1 SMP Thu May 13 13:09:10 EDT 2010 i686 i686 i386 GNU/Linux
# pwd
/root
# python
Python 2.6.5 (r265:79063, Apr 11 2010, 22:34:44)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns
>>>
[3]+ Stopped python
# cd /home/user/dev/dns
[root@localhost dns]# python
Python 2.6.5 (r265:79063, Apr 11 2010, 22:34:44)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "dns.py", line 1, in <module>
import dns.resolver
ImportError: No module named resolver
>>>
[4]+ Stopped python
#
Summary: I can't import same python module from different path.
Any ideas? 0_o
P.S. SELINUX=disabled |
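The quickest way to settle which file satisfied an import — and so to spot a local dns.py shadowing the installed dns package — is to check the module's __file__, e.g.:

```python
import sys

def where_from(module_name):
    # Import the module and report which file satisfied the import.
    # The current directory sits at the front of sys.path, which is
    # exactly how a local dns.py can shadow an installed dns package.
    __import__(module_name)
    module = sys.modules[module_name]
    return getattr(module, "__file__", "<built-in>")
```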
Google App Engine appcfg.py data_upload Authentication fail | 2,940,702 | 2 | 3 | 1,823 | 0 | python,google-app-engine | If your administrator is an Apps for Domains account (eg, @yourdomain.com), and your app uses Google Accounts authentication, you won't be able to authenticate as an admin on your app. You need to add a Google Accounts (eg, @google.com) account as an administrator, and use that to upload. | 0 | 1 | 0 | 0 | 2010-05-24T07:22:00.000 | 2 | 0.197375 | false | 2,895,397 | 0 | 0 | 1 | 1 | I am using appcfg.py to upload data to datastore from a csv file.
But every time I try, I get the error:
[info ] Authentication failed
even though I am using the admin ID and password.
In my app.yaml file I have:
handlers:
- url: /remote_api
script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
login: admin
- url: .*
script: MainHandler.py
Can anybody please help me?
Thanks in advance. |
Twisted DTLS connection | 2,905,513 | 5 | 2 | 1,465 | 0 | python,twisted,m2crypto,pyopenssl | Neither pyOpenSSL nor M2Crypto exposes OpenSSL's DTLS features (as far as I know). So, the first step would be to extend one of these libraries to support it. After that, you could extend Twisted to use the new features you just added to the underlying SSL library. | 0 | 1 | 0 | 0 | 2010-05-25T05:51:00.000 | 2 | 1.2 | true | 2,902,320 | 0 | 0 | 0 | 1 | How to implement dtls protocol using twisted with m2crypto (or pyopenssl)? |
installing Python application with Python under windows | 2,903,621 | 1 | 0 | 220 | 0 | python,deployment | Maybe you should try to make your application run standalone with py2exe or PyInstaller.
It will generate an application which doesn't expect anything from the target machine. You'll have an exe file that the user can execute without knowing that Python is used. The Python interpreter and the needed libs will be included.
Then you can use Inno Setup to make a windows installer that will copy all the needed files.
I am not sure if Django is very easy to include in such a standalone version.
I hope it helps | 0 | 1 | 0 | 0 | 2010-05-25T09:41:00.000 | 2 | 0.099668 | false | 2,903,507 | 1 | 0 | 0 | 1 | My application uses many Python libraries (Django, Twisted, xmlrpc). I cannot expect that the end user has the Python installed with all needed libraries.
I've created a fancy installer for my application using Inno Setup, but I don't think that it is a good solution to execute 5 other setup programs from my installer. It would be annoying to the user to click "Next" button 15 times. Is there any way to do that quietly? |
How do I rename a process on Linux? | 2,907,901 | 0 | 3 | 4,944 | 0 | python,linux,shell,process | Writing to *argv will change it, but you'll need to do that from C or the like; I don't think Python is going to readily give you access to that memory directly.
I'd also recommend just leaving it alone. | 0 | 1 | 0 | 0 | 2010-05-25T19:28:00.000 | 2 | 0 | false | 2,907,864 | 0 | 0 | 0 | 2 | I'm using Python, for what it's worth, but will accept answers in any applicable language.
I've tried writing to /proc/$pid/cmdline, but that's a readonly file.
I've tried assigning a new string to sys.argv[0], but that has no perceptible impact.
Are there any other possibilities? My program is executing processes via os.system (equivalent to system(3)) so a general, *NIX-based solution using an additional spawning process would be fine. |
How do I rename a process on Linux? | 2,907,918 | 0 | 3 | 4,944 | 0 | python,linux,shell,process | If you use subprocess.Popen instead of os.system you can use the executable argument to specify the path to the actual file to execute, and pass the name you want to show as the first item in the list that is parameter args. | 0 | 1 | 0 | 0 | 2010-05-25T19:28:00.000 | 2 | 0 | false | 2,907,864 | 0 | 0 | 0 | 2 | I'm using Python, for what it's worth, but will accept answers in any applicable language.
I've tried writing to /proc/$pid/cmdline, but that's a readonly file.
I've tried assigning a new string to sys.argv[0], but that has no perceptible impact.
Are there any other possibilities? My program is executing processes via os.system (equivalent to system(3)) so a general, *NIX-based solution using an additional spawning process would be fine. |
Which Language to target on Ubuntu? | 2,912,360 | 2 | 2 | 309 | 0 | c#,python,ubuntu,mono | I cannot say much about the market for Ubuntu. And since business is your primary concern, the programming language is, as you say yourself, secondary. I would say that in any business, choose the language and tools that solve the business problem most effectively. When release day comes, do your end users really care?
That said, if you can do it with Mono/C# I would encourage you to do so since you already have C# and .Net experience. But knowing a second language and development environment will only make you stronger. | 0 | 1 | 0 | 1 | 2010-05-26T10:52:00.000 | 2 | 0.197375 | false | 2,912,216 | 0 | 0 | 0 | 1 | I'm a c# programmer by trade and looking to move my wares over to Ubuntu as a business concern. I have some experience of Python and like it a lot. My question is, as a developer which would be the best language to use when targeting ubuntu Mono c# or python as a commercial concern.
Please note that I am not interested in the technical aspects but strictly in the commercial side of where Ubuntu is heading. I see that there is a lot of work being done using Python, and I'm thinking that may matter given the whole Mono issue of who "might" purchase them.
ctypes DLL with optional dependencies | 2,914,833 | 0 | 0 | 971 | 0 | python,windows,visual-studio,dll,ctypes | I can load LibName64 when I use the 64 bit version of python. Should have tried that earlier! | 0 | 1 | 0 | 0 | 2010-05-26T15:52:00.000 | 1 | 1.2 | true | 2,914,585 | 1 | 0 | 0 | 1 | Disclaimer: I'm new to windows programming so some of my assumptions may be wrong. Please correct me if so.
I am developing a python wrapper for a C API using ctypes. The API ships with both 64 and 32 DLLs/LIBs. I can successfully load the DLL using ctypes.WinDLL('TheLibName') and call functions etc etc.
However some functions were not doing what they should. Upon further investigation it appears that the 32bit DLL is being used, which is what is causing the unexpected behaviour.
I have tried using ctypes.WinDLL('TheLibName64') but the module is not found. I have tried registering the DLL with regsvr32, but it reports there is no entry point (it also reports no entry point when I try to register TheLibName, which is found by WinDLL()).
The DLL came with a sample project in Visual Studio (I have 0 experience with VS so again please correct me here) which builds both 32 and 64 bit versions of the sample project. In the .vcsproj file the configurations for the 64 bit version include:
AdditionalDependencies="TheLibName64.lib"
in the VCLinkerTool section.
In windows/system32 there are both TheLibName.dll/.lib, and TheLibName64.dll/.lib.
So it seems to me that my problem is now to make the python ctypes DLL loader load these optional dependencies when the DLL is loaded. However I can't find any information on this (perhaps because, as a doze noob, I do not know the correct terminology) in the ctypes documentation.
Is there a way to do this in ctypes? Am I going about this in completely the wrong way? Any help or general information about optional DLL dependencies and how they are loaded in windows would be much appreciated.
Thanks |
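A quick way to confirm the 32-bit-vs-64-bit suspicion from this question is to check the interpreter's pointer size; the library names below are the placeholders from the question, not a real API:

```python
import struct

# A 32-bit process cannot load a 64-bit DLL and vice versa, so the
# interpreter's pointer size decides which library variant you need.
bits = struct.calcsize("P") * 8   # 32 or 64
libname = "TheLibName64" if bits == 64 else "TheLibName"
print(bits, libname)

# On Windows you would then load the matching variant with:
#   import ctypes
#   lib = ctypes.WinDLL(libname)
```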
Making GUI applications on Linux/Windows. What languages/tools to use? | 2,918,582 | 1 | 3 | 1,292 | 0 | python,user-interface,open-source,monodevelop,glade | To access really low end computers, and if you have no real graphics requirements, you could consider a text mode interface - curses/ncurses for one. | 0 | 1 | 0 | 1 | 2010-05-27T04:23:00.000 | 4 | 0.049958 | false | 2,918,445 | 0 | 0 | 0 | 1 | My student group and I are trying to continue working on a project we worked on this semester over the summer to become a professional, deployable app. We originally did it in Adobe AIR but it seems now that the computers this program will be running on will be very slow, maybe 600mhz and 128-256mb ram so flash just isn't going to cut it. It is basically a health diagnosis application that we will be shipping out to impoverished countries.
Now comes the real question. We are wondering what language to rebuild our application in. It has to have a good GUI builder associated with it, like the Adobe Flex/AIR GUI builder or Visual Studio's GUI builder, but the application should run on Linux primarily, and if it can run on Windows that's just a plus. We are all students, too, without really any outside help, so whatever we decide to do this in, there must be ample documentation available when we hit problems.
Some things we have considered so far are using python and glade or c# and monodevelop, but again we really are not experts on any of this which is why I am asking for help as I would rather spend the time now choosing the right tools instead of wasting time down the line when we hit a roadblock.
Thanks in advance! |
How can I open different Linux terminals to output different kinds of debug information in Python? | 2,933,610 | 3 | 3 | 1,349 | 0 | python,terminal,stream | Open a pipe, then fork off a terminal running cat reading from the read end of the pipe, and write into the write end of the pipe. | 0 | 1 | 0 | 0 | 2010-05-29T03:04:00.000 | 2 | 1.2 | true | 2,933,601 | 1 | 0 | 0 | 1 | I need to output different information to different terminal instances instead of printing it all to the same output stream, say stderr or stdout.
for example:
I have 5 kinds of information, say A-E, that need to be displayed on different terminal windows on the same desktop, like this:
[terminal 1] <- for displaying information A
[terminal 2] <- for displaying information B
[terminal 3] <- for displaying information C
[terminal 4] <- for displaying information D
[terminal 5] <- for displaying information E
I know I can output them into different files and then open terminals that read the files in a loop,
but what I want is for the Python program to open the terminals by itself and print to them directly when needed.
Is it possible?
Thanks!
KC
[edit]
I think the best solution for this case is to use a SOCKET as the IPC mechanism; if resources are not a concern, it gives the most compatible capability: a server/client mode.
A pipe / subprocess will also be a useful solution on the same platform.
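The pipe-plus-`cat` idea from the answer above can be sketched like this; the xterm line is left as a comment because it needs a running display, but the pipe mechanics are shown end to end:

```python
import os

# Everything written to the write end (w) can be read from the read end (r).
r, w = os.pipe()

# In the real program you would hand the read end to a terminal, e.g.:
#   import subprocess
#   subprocess.Popen(["xterm", "-e", "cat /dev/fd/%d" % r], pass_fds=(r,))
# and keep w open as the debug channel for that terminal.

os.write(w, b"debug message for terminal 1\n")
os.close(w)
data = os.read(r, 1024)
os.close(r)
print(data.decode(), end="")
```

One pipe per terminal gives you the five independent channels described in the question.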
How to change the OSX menubar in wxPython without any opened window? | 2,958,571 | 1 | 5 | 1,471 | 0 | python,macos,wxpython,pyobjc | Can you create a hidden window that is offscreen somewhere? It is a hack, but I remember having to do a lot of hacks to make my wxPython-based application work correctly on Mac OS X.
Note: You'll have to disable the close button and set up that hidden window so that it doesn't show up in the Window menu.
Aside: Have you considered factoring out the GUI portion of your Python application and using PyObjC on Mac OS X? You'll get more native behaviours... | 1 | 1 | 0 | 0 | 2010-05-29T09:02:00.000 | 2 | 1.2 | true | 2,934,435 | 0 | 0 | 0 | 1 | I am writing a wxPython application that remains open after closing all of its windows - so you can still drag & drop new files onto the OSX dock icon (I do this with myApp.SetExitOnFrameDelete(False)).
Unfortunately if I close all the windows, the OSX menubar will only contain a "Help" menu. I would like to add at least a File/Open menu item, or just keep the menubar of the main window. Is this somehow possible in wxPython?
In fact, I would be happy with a non-wxPython hack as well (for example, setting the menu in pyobjc, but running the rest of the GUI in wxPython). wxPython development in OSX is such a hack anyway ;)
UPDATE: I managed to solve this problem using the tip from Lyndsey Ferguson. Here's what I have done:
On startup I create a window which I show and hide immediately. I set its position to (-10000,-10000) so that it does not flicker on the screen (aargh, what a dirty hack!)
I create an empty EVT_CLOSE event handler in that window so that it cannot be closed.
It seems that destroying a window resets the OSX menu, but hiding does not... So when the last window is closed, I need to show and hide this window again (hiding is necessary so that the user cannot switch to this window using the Window menu or Cmd-`)
Yeah, this is really ugly... I will be very grateful if someone comes up with a prettier solution.
UPDATE 2: Actually it can be solved in a much easier way: if we do not close the last window, only hide it. And ensure that it does not respond to menu events anymore. |
Task Queue stopped working | 2,935,659 | 1 | 1 | 391 | 0 | python,google-app-engine,task-queue | The development server won't run tasks automatically; you have to trigger them yourself.
It's a design feature, so you can see what happens when you run them, instead of them running at any point.
Essentially, there's nothing wrong with your application; it's a feature of the development server. | 0 | 1 | 0 | 0 | 2010-05-29T15:35:00.000 | 2 | 1.2 | true | 2,935,631 | 0 | 0 | 1 | 1 | I was playing with the Google App Engine Task Queue API to learn how to use it, but I couldn't make it trigger locally. My application works like a charm when I upload it to Google's servers, but it doesn't trigger locally. All I see in the admin is the list of tasks. When their ETAs come, they just pass. It's as if the tasks run but fail and are waiting for retries, but I can't see these events on the command line.
When I click "Run" in the admin panel, the task runs successfully and I can see these requests on the command line. I'm using App Engine SDK 1.3.4 on Linux with google-app-engine-django. I've been trying to find the problem for 3 hours now and I couldn't find it. It's also very hard to debug GAE applications, because debug messages do not appear on the console screen.
Thanks. |
can a python script know that another instance of the same script is running... and then talk to it? | 2,935,858 | 1 | 14 | 3,867 | 0 | python,multithreading,command-line,ipc,interprocess | Perhaps try using sockets for communication? | 0 | 1 | 0 | 0 | 2010-05-29T16:46:00.000 | 4 | 0.049958 | false | 2,935,836 | 1 | 0 | 0 | 2 | I'd like to prevent multiple instances of the same long-running python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way?
Specifically, I'd like to enable the following behavior:
"foo.py" is launched from the command line, and it will stay running for a long time-- days or weeks until the machine is rebooted or the parent process kills it.
every few minutes the same script is launched again, but with different command-line parameters
when launched, the script should see if any other instances are running.
if other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit.
instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform.
So I'm looking for two things: how can a python program know another instance of itself is running, and then how can one python command-line program communicate with another?
Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls. Although if I need to have a Windows codepath and an *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible.
I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work) but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible, if a persistent-file-based approach is the only way to do it, I'm open to that option.
More details: I'm trying to do this because our servers are using a monitoring tool which supports running python scripts to collect monitoring data (e.g. results of a database query or web service call) which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query). So we've chosen to keep them running in an infinite loop until the parent process kills them.
This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread to one process with 100 threads, each executing the work that, previously, one script was doing.
But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command line params) over to the "old" script.
BTW, this is not something I want to do on a one-script basis. Instead, I want to package this behavior into a library which many script authors can leverage-- my goal is to enable script authors to write simple, single-threaded scripts which are unaware of multi-instance issues, and to handle the multi-threading and single-instancing under the covers. |
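The sockets suggestion above can be sketched like this; the port number is an arbitrary assumption, and a real library would add message framing and error handling:

```python
import socket

PORT = 43917  # arbitrary fixed port reserved for this script (an assumption)

def become_primary_or_send(args):
    """Return a listening socket if we are the first instance, or None
    after forwarding `args` to the instance that already owns the port."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        server.bind(("127.0.0.1", PORT))
    except OSError:
        # Someone is already listening: hand over our command line and quit.
        server.close()
        with socket.create_connection(("127.0.0.1", PORT)) as conn:
            conn.sendall(" ".join(args).encode())
        return None
    server.listen(5)
    return server

primary = become_primary_or_send(["--collect", "db-stats"])
print(primary is not None)  # the first caller becomes the server
```

Instance #1 would then accept() connections in a background thread and spin up a worker thread per received message, which covers the in-memory, cross-platform requirement without leftover files.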
can a python script know that another instance of the same script is running... and then talk to it? | 2,936,096 | 0 | 14 | 3,867 | 0 | python,multithreading,command-line,ipc,interprocess | Sounds like your best bet is sticking with a pid file but have it not only contain the process Id - have it also include the port number that the prior instance is listening on. So when starting up check for the pid file and if present see if a process with that Id is running - if so send your data to it and quit otherwise overwrite the pid file with the current process's info. | 0 | 1 | 0 | 0 | 2010-05-29T16:46:00.000 | 4 | 0 | false | 2,935,836 | 1 | 0 | 0 | 2 | I'd like to prevent multiple instances of the same long-running python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way?
Specifically, I'd like to enable the following behavior:
"foo.py" is launched from the command line, and it will stay running for a long time-- days or weeks until the machine is rebooted or the parent process kills it.
every few minutes the same script is launched again, but with different command-line parameters
when launched, the script should see if any other instances are running.
if other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit.
instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform.
So I'm looking for two things: how can a python program know another instance of itself is running, and then how can one python command-line program communicate with another?
Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls. Although if I need to have a Windows codepath and an *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible.
I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work) but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible, if a persistent-file-based approach is the only way to do it, I'm open to that option.
More details: I'm trying to do this because our servers are using a monitoring tool which supports running python scripts to collect monitoring data (e.g. results of a database query or web service call) which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query). So we've chosen to keep them running in an infinite loop until the parent process kills them.
This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread to one process with 100 threads, each executing the work that, previously, one script was doing.
But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command line params) over to the "old" script.
BTW, this is not something I want to do on a one-script basis. Instead, I want to package this behavior into a library which many script authors can leverage-- my goal is to enable script authors to write simple, single-threaded scripts which are unaware of multi-instance issues, and to handle the multi-threading and single-instancing under the covers. |
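The pid-file-plus-port idea from the answer above could look roughly like this (the file location and port are illustrative, and a robust version would also clean up stale files):

```python
import json
import os
import tempfile

PIDFILE = os.path.join(tempfile.gettempdir(), "monitor-demo.pid")

def write_pidfile(port):
    # Record both our pid and the port the primary instance listens on.
    with open(PIDFILE, "w") as f:
        json.dump({"pid": os.getpid(), "port": port}, f)

def read_pidfile():
    try:
        with open(PIDFILE) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return None  # missing or corrupt (e.g. after a hard shutdown)

def pid_is_running(pid):
    try:
        os.kill(pid, 0)  # signal 0: existence check only (POSIX)
        return True
    except OSError:
        return False

write_pidfile(51000)
info = read_pidfile()
print(info, pid_is_running(info["pid"]))
```

A new instance reads the file, checks whether the recorded pid is still alive, and either sends its parameters to the recorded port or takes over and rewrites the file.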
Python doesn't work properly when I execute a script after using Right Click >> Command Prompt Here | 2,943,171 | 0 | 0 | 2,823 | 0 | python,windows-7,path | You can check the currently present environment variables with the "set" command on the command line. For python to work you need at least PYTHONPATH pointing to your python libs and the path to python.exe should be included in your PATH variable. | 0 | 1 | 0 | 1 | 2010-05-31T11:03:00.000 | 3 | 0 | false | 2,943,071 | 1 | 0 | 0 | 2 | This is a weird bug. I know it's something funky going on with my PATH variable, but no idea how to fix it.
If I have a script C:\Test\test.py and I execute it from within IDLE, it works fine. If I open up Command Prompt using Run>>cmd.exe and navigate manually it works fine. But if I use Windows 7's convenient Right Click on folder >> Command Prompt Here then type test.py it fails with import errors.
I also cannot just type "python" to reach a python shell session if I use the latter method above.
Any ideas?
Edit: printing the python path for the command prompt that works yields the correct paths. Printing it on the non-working "Command prompt here" yields: "Environment variable python not defined".
Python doesn't work properly when I execute a script after using Right Click >> Command Prompt Here | 2,943,111 | 1 | 0 | 2,823 | 0 | python,windows-7,path | I don't use Windows much, but maybe when you open Right Click -> Command Prompt, the PATH is different from when you navigate manually. First try to print your PATH (I am not sure offhand how to do this) and see if it differs between the two situations. | 0 | 1 | 0 | 1 | 2010-05-31T11:03:00.000 | 3 | 0.066568 | false | 2,943,071 | 1 | 0 | 0 | 2 | This is a weird bug. I know it's something funky going on with my PATH variable, but no idea how to fix it.
If I have a script C:\Test\test.py and I execute it from within IDLE, it works fine. If I open up Command Prompt using Run>>cmd.exe and navigate manually it works fine. But if I use Windows 7's convenient Right Click on folder >> Command Prompt Here then type test.py it fails with import errors.
I also cannot just type "python" to reach a python shell session if I use the latter method above.
Any ideas?
Edit: printing the python path for the command prompt that works yields the correct paths. Printing it on the non-working "Command prompt here" yields: "Environment variable python not defined".
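To compare the two prompts, a tiny diagnostic script run from each one makes the difference visible (inspecting the variables the first answer suggests):

```python
import os
import sys

# Run this from both the working prompt and the "Command Prompt Here"
# window, then diff the output line by line.
print("interpreter:", sys.executable)
print("PATH:", os.environ.get("PATH", "<not set>"))
print("PYTHONPATH:", os.environ.get("PYTHONPATH", "<not set>"))
print("sys.path has", len(sys.path), "entries")
```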
How to backup an AppEngine site? | 15,256,477 | 2 | 14 | 2,873 | 0 | python,google-app-engine,backup | There is now a backup option available in the dashboard. See "datastore admin". | 0 | 1 | 0 | 0 | 2010-05-31T22:06:00.000 | 4 | 0.099668 | false | 2,946,183 | 0 | 0 | 1 | 1 | So, you build a great shiny cloudy 2.0 website on top of AppEngine, with thousands upon thousands of images saved into the datastore and gigs of data at the blobstore. How do you backup them? |
Equivalent of alarm(3600) in Python | 2,949,014 | 2 | 1 | 362 | 0 | python | Just for your information: it is much more self-descriptive to use multiplication when you set up timers, for example alarm(24 * 60 * 60) for 24 hours, instead of alarm(86400) for the same period. Hope this helps keep your code clean and easy to maintain :) | 0 | 1 | 0 | 1 | 2010-06-01T09:09:00.000 | 2 | 0.197375 | false | 2,948,455 | 0 | 0 | 0 | 1 | Starting a Perl script with alarm(3600) will make the script abort if it is still running after one hour (3600 seconds).
Assume I want to set an upper bound on the running time of a Python script, what is the easiest way to achieve that? |
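For reference, the closest Python equivalent of Perl's alarm() is signal.alarm, which is POSIX-only. A minimal sketch (using a 1-second limit so it is quick to try; the question's case would use 3600, or 60 * 60 per the answer's style advice):

```python
import signal

class Timeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise Timeout()

signal.signal(signal.SIGALRM, _on_alarm)
signal.alarm(1)          # alarm(3600) would abort after one hour

timed_out = False
try:
    while True:          # stand-in for the long-running work
        pass
except Timeout:
    timed_out = True
finally:
    signal.alarm(0)      # cancel any pending alarm

print(timed_out)
```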
Packaging a Python script on Linux into a Windows executable | 8,840,967 | 2 | 62 | 79,618 | 0 | python,linux,windows,py2exe | I have tested py2exe inside of wine, and it does function. You'll need to install python in wine for it to work, or if you only use the standard library, you can bundle py2exe with py2exe from the windows machine and then use it in wine. Just keep in mind you need the same version of the ms visual C libraries in wine as were used to compile python or things won't work properly. | 0 | 1 | 0 | 0 | 2010-06-01T15:18:00.000 | 6 | 0.066568 | false | 2,950,971 | 1 | 0 | 0 | 1 | I have a Python script that I'd like to compile into a Windows executable. Now, py2exe works fine from Windows, but I'd like to be able to run this from Linux. I do have Windows on my development machine, but Linux is my primary dev platform and I'm getting kind of sick of rebooting into Windows just to create the .exe. Nor do I want to have to buy a second Windows license to run in a virtual machine such as VirtualBox. Any ideas?
PS: I am aware that py2exe doesn't exactly compile the python file as much as package your script with the Python interpreter. But either way, the result is that you don't need Python installed to run the script. |
Robustly killing Windows programs stuck reporting 'problems' | 2,952,969 | 3 | 3 | 565 | 0 | python,windows,process,kill,windows-error-reporting | Wouldn't it be easier to disable the error reporting feature? | 0 | 1 | 0 | 0 | 2010-06-01T18:56:00.000 | 2 | 1.2 | true | 2,952,443 | 0 | 0 | 0 | 2 | I am looking for a means to kill a Windows exe program that, when being tested from a python script, crashes and presents a dialog to the user; as this program is invoked many times, and may crash repeatedly, this is not suitable.
The problem dialog is the standard reporting of a Windows error:
"Foo.exe has encountered a problem and needs to close. We are sorry for the inconvenience"
and offers a Debug, Send Error Report, and Don't Send buttons.
I am able to kill other forms of dialog resulting from crashes (e.g. a Debug build's assert failure dialog is OK.)
I have tried taskkill.exe, pskill, and the terminate() function on the Popen object from the subprocess module that was used to invoke the .exe
Has anyone encountered this specific issue, and found a resolution?
I expect automating user input to select the window, and press the "Don't Send" button is one possible solution, but I would like something far simpler if possible |
Robustly killing Windows programs stuck reporting 'problems' | 2,952,456 | 0 | 3 | 565 | 0 | python,windows,process,kill,windows-error-reporting | If you were to use CreateProcessEx or a WinAPI specific function, you might be able to call TerminateProcess or TerminateThread to forcibly end the process. | 0 | 1 | 0 | 0 | 2010-06-01T18:56:00.000 | 2 | 0 | false | 2,952,443 | 0 | 0 | 0 | 2 | I am looking for a means to kill a Windows exe program that, when being tested from a python script, crashes and presents a dialog to the user; as this program is invoked many times, and may crash repeatedly, this is not suitable.
The problem dialog is the standard reporting of a Windows error:
"Foo.exe has encountered a problem and needs to close. We are sorry for the inconvenience"
and offers a Debug, Send Error Report, and Don't Send buttons.
I am able to kill other forms of dialog resulting from crashes (e.g. a Debug build's assert failure dialog is OK.)
I have tried taskkill.exe, pskill, and the terminate() function on the Popen object from the subprocess module that was used to invoke the .exe
Has anyone encountered this specific issue, and found a resolution?
I expect automating user input to select the window, and press the "Don't Send" button is one possible solution, but I would like something far simpler if possible |
Aptana Studio is opening but not ever closing a python.exe process | 2,952,983 | 2 | 2 | 509 | 0 | python,django,aptana | You should try adding --noreload to the runserver argument | 0 | 1 | 0 | 1 | 2010-06-01T20:08:00.000 | 1 | 1.2 | true | 2,952,957 | 0 | 0 | 1 | 1 | I am developing a small testing website using Django 1.2 in Aptana Studio build 2.0.4.1268158907. I have a Django project that I test by running the command "runserver 8001" on my project. This command runs the project on a small server that comes with Django.
However the problem arises that every time I run this command Aptana opens two instances of the process "python.exe". Upon terminating the command only one of these instances is ended. The other process continues to run and use memory. My server is not online, and the process doesn't seem to do anything that I can find. This happens every time i run the runserver command on my project and therefore more and more python.exe instances will open up through my development period.
Any help discovering either the purpose of this extra python.exe or a way to prevent it from opening would be much appreciated. |
Windows path in Python | 60,505,487 | -4 | 230 | 481,405 | 0 | python,path,string-literals | Use PowerShell
In Windows, you can use / in your path just like Linux or macOS in all places as long as you use PowerShell as your command-line interface. It comes pre-installed on Windows and it supports many Linux commands like ls command.
If you use Windows Command Prompt (the one that appears when you type cmd in Windows Start Menu), you need to specify paths with \ just inside it. You can use / paths in all other places (code editor, Python interactive mode, etc.). | 0 | 1 | 0 | 0 | 2010-06-01T22:29:00.000 | 6 | -1 | false | 2,953,834 | 1 | 0 | 0 | 1 | What is the best way to represent a Windows directory, for example "C:\meshes\as"? I have been trying to modify a script but it never works because I can't seem to get the directory right, I assume because of the '\' acting as escape character? |
running a python script on a remote computer | 2,957,649 | 1 | 1 | 1,086 | 0 | python | Many ways - In the case of windows, even a simple looping batch file would probably do - just have it start the script in a loop (whenever it crashes it would return to the shell and be restarted). | 0 | 1 | 0 | 1 | 2010-06-02T12:21:00.000 | 3 | 1.2 | true | 2,957,588 | 1 | 0 | 0 | 1 | I have a python script and am wondering is there any way that I can ensure that the script runs continuously on a remote computer? Like for example, if the script crashes for whatever reason, is there a way to start it up automatically instead of having to remote desktop. Are there any other factors I have to be aware of? The script will be running on a Windows machine.
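A cross-platform variant of the looping-batch-file idea is a small Python supervisor; this sketch caps the restarts only so the demo terminates (a real watchdog would loop forever, probably with a delay between restarts):

```python
import subprocess
import sys

def keep_running(cmd, max_restarts):
    """Re-run `cmd` every time it exits or crashes."""
    restarts = 0
    while restarts < max_restarts:
        subprocess.call(cmd)   # blocks until the child exits
        restarts += 1
    return restarts

# Demo child that "crashes" immediately with a nonzero exit code:
count = keep_running([sys.executable, "-c", "raise SystemExit(1)"],
                     max_restarts=3)
print(count)
```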
Python script names in tasklist | 2,958,943 | 0 | 0 | 872 | 0 | python,batch-file | I'd simply create a lockfile in the local filesystem and exit if this exists already. | 0 | 1 | 0 | 0 | 2010-06-02T13:42:00.000 | 5 | 0 | false | 2,958,246 | 1 | 0 | 0 | 2 | I am wondering, is there a way to change the name of a script so that it is not called "python.exe" in the tasklist. The reason I am asking is that I am trying to make a batch file that runs a python script. I want the batch file to check to see if the script is already running. If the script is already running then the batch file will do nothing. Thanks
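The lockfile answer above can be sketched like this; os.O_EXCL makes the create-if-absent step atomic (the path is illustrative, and a robust version would also handle stale files left by a crash):

```python
import os
import tempfile

LOCKFILE = os.path.join(tempfile.gettempdir(), "myscript-demo.lock")

if os.path.exists(LOCKFILE):
    os.remove(LOCKFILE)       # start clean for the demo

def acquire_lock():
    """Atomically create the lock file; False means another instance runs."""
    try:
        fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.write(fd, str(os.getpid()).encode())  # record who holds the lock
    os.close(fd)
    return True

first = acquire_lock()
second = acquire_lock()       # a second instance would see this and quit
print(first, second)
os.remove(LOCKFILE)           # release on clean exit
```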
Python script names in tasklist | 2,958,673 | 0 | 0 | 872 | 0 | python,batch-file | This library does not work on Windows, and shouldn't be used in production code. Manipulating the argv array is a rather dirty hack.
Generally I'd not try to identify processes by scanning the process table. This is not really reliable, as process names aren't guaranteed to be unique. Instead I'd spawn a simple server on localhost inside the python script. If started, the script can then try to connect to the server, and quit, if the server is already running. This approach can later on also be expanded to support any kind of IPC. | 0 | 1 | 0 | 0 | 2010-06-02T13:42:00.000 | 5 | 0 | false | 2,958,246 | 1 | 0 | 0 | 2 | I am wondering, is there a way to change the name of a script so that it is not called "python.exe" in the tasklist. The reason I am asking is that I am trying to make a batch file that run's a python script. I want the batch file to check to see if the script is already running. if the script is already running then the batch file will do nothing. Thanks |
Unable to install pyodbc on Linux | 51,455,094 | 18 | 110 | 144,475 | 0 | python,linux,centos,pyodbc | Struggled with the same issue
After running:
sudo apt-get install unixodbc-dev
I was able to pip install pyodbc | 0 | 1 | 0 | 0 | 2010-06-02T18:14:00.000 | 19 | 1 | false | 2,960,339 | 1 | 0 | 0 | 2 | I am running Linux (2.6.18-164.15.1.el5.centos.plus) and trying to install pyodbc. I am doing pip install pyodbc and get a very long list of errors, which end in
error: command 'gcc' failed with exit status 1
I looked in /root/.pip/pip.log and saw the following:
InstallationError: Command /usr/local/bin/python -c "import setuptools; __file__='/home/build/pyodbc/setup.py'; execfile('/home/build/pyodbc/setup.py')" install --single-version-externally-managed --record /tmp/pip-7MS9Vu-record/install-record.txt failed with error code 1
Has anybody had a similar issue installing pyodbc? |
Unable to install pyodbc on Linux | 50,872,039 | 4 | 110 | 144,475 | 0 | python,linux,centos,pyodbc | An easy way to install pyodbc is by using 'conda', since conda automatically installs the required dependencies, including unixodbc.
conda upgrade --all (optional)
then
conda install pyodbc
it will install the following packages:
libgfortran-ng: 7.2.0-hdf63c60_3 defaults
mkl: 2018.0.3-1 defaults
mkl_fft: 1.0.2-py36_0 conda-forge
mkl_random: 1.0.1-py36_0 conda-forge
numpy-base: 1.14.5-py36hdbf6ddf_0 defaults
pyodbc: 4.0.17-py36_0 conda-forge
unixodbc: 2.3.4-1 conda-forge | 0 | 1 | 0 | 0 | 2010-06-02T18:14:00.000 | 19 | 0.04208 | false | 2,960,339 | 1 | 0 | 0 | 2 | I am running Linux (2.6.18-164.15.1.el5.centos.plus) and trying to install pyodbc. I am doing pip install pyodbc and get a very long list of errors, which end in
error: command 'gcc' failed with exit status 1
I looked in /root/.pip/pip.log and saw the following:
InstallationError: Command /usr/local/bin/python -c "import setuptools; __file__='/home/build/pyodbc/setup.py'; execfile('/home/build/pyodbc/setup.py')" install --single-version-externally-managed --record /tmp/pip-7MS9Vu-record/install-record.txt failed with error code 1
Has anybody had a similar issue installing pyodbc? |
Python retrieving windows service information | 2,962,625 | 1 | 1 | 128 | 0 | python,windows,windows-services | Sorted, I had the service run a python script with win32api.GetUserName() as it's output. | 0 | 1 | 0 | 0 | 2010-06-03T00:33:00.000 | 1 | 1.2 | true | 2,962,597 | 0 | 0 | 0 | 1 | Can python retrieve the name of the user that owns a windows service?
I've had a fiddle with win32serviceutil but to no avail, nor can I find much documentation on it beyond starting and stopping services.
Thanks! |
C++ code generation with Python | 2,966,999 | 1 | 12 | 23,299 | 0 | c++,python,code-generation | A few years ago I worked on a project to simplify interprocess shared memory management for large scale simulation systems. We used a related approach where the layout of data in shared memory was defined in XML files and a code generator, written in python, read the XML and spit out a set of header files defining structures and associated functions/operators/etc to match the XML description. At the time, I looked at several templating engines and, to my surprise, found it was easier and very straight-forward to just do it "by hand".
As you read the XML, just populate a set of data structures that match your code. Header file objects contain classes and classes contain variables (which may be of other class types). Give each object a printSelf() method that iterates over its contents and calls printSelf() for each object it contains.
It seems a little daunting at first but once you get started, it's pretty straight-forward. Oh, and one tip that helps with the generated code, add an indentation argument to printSelf() and increase it at each level. It makes the generated code much easier to read. | 0 | 1 | 0 | 1 | 2010-06-03T13:57:00.000 | 8 | 0.024995 | false | 2,966,618 | 0 | 0 | 0 | 1 | Can anyone point me to some documentation on how to write scripts in Python (or Perl or any other Linux friendly script language) that generate C++ code from XML or py files from the command line. I'd like to be able to write up some xml files and then run a shell command that reads these files and generates .h files with fully inlined functions, e.g. streaming operators, constructors, etc. |
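A rough sketch of the pattern described in the answer above, with hypothetical `Class`/`Variable` node types and a `print_self(indent)` that increases the indentation at each level (the XML-parsing step that would populate these objects is omitted):

```python
# Hypothetical node types mirroring the XML layout: a class holds
# variables, and every node knows how to print itself as C++.

class Variable:
    def __init__(self, ctype, name):
        self.ctype, self.name = ctype, name

    def print_self(self, indent=0):
        return " " * indent + "%s %s;" % (self.ctype, self.name)

class Class:
    def __init__(self, name, variables):
        self.name, self.variables = name, variables

    def print_self(self, indent=0):
        pad = " " * indent
        lines = [pad + "struct %s {" % self.name]
        for var in self.variables:
            # each nesting level bumps the indent, per the tip above
            lines.append(var.print_self(indent + 4))
        lines.append(pad + "};")
        return "\n".join(lines)

cls = Class("Point", [Variable("double", "x"), Variable("double", "y")])
header = cls.print_self()
print(header)
```

A real generator would add a `HeaderFile` node containing classes, plus nodes for constructors and streaming operators, but the recursive `print_self` structure stays the same.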
open() in Python does not create a file if it doesn't exist | 17,250,611 | 6 | 841 | 1,119,129 | 0 | python,linux,file-io,file-permissions | I think it's r+, not rw. I'm just a starter, and that's what I've seen in the documentation. | 0 | 1 | 0 | 0 | 2010-06-03T15:05:00.000 | 17 | 1 | false | 2,967,194 | 1 | 0 | 0 | 4 | What is the best way to open a file as read/write if it exists, or if it does not, then create it and open it as read/write? From what I read, file = open('myfile.dat', 'rw') should do this, right?
It is not working for me (Python 2.6.2) and I'm wondering if it is a version problem, or not supposed to work like that or what.
The bottom line is, I just need a solution for the problem. I am curious about the other stuff, but all I need is a nice way to do the opening part.
The enclosing directory was writeable by user and group, not other (I'm on a Linux system... so permissions 775 in other words), and the exact error was:
IOError: no such file or directory. |
open() in Python does not create a file if it doesn't exist | 2,967,244 | 31 | 841 | 1,119,129 | 0 | python,linux,file-io,file-permissions | Change "rw" to "w+"
Or use 'a+' for appending (not erasing existing content) | 0 | 1 | 0 | 0 | 2010-06-03T15:05:00.000 | 17 | 1 | false | 2,967,194 | 1 | 0 | 0 | 4 | What is the best way to open a file as read/write if it exists, or if it does not, then create it and open it as read/write? From what I read, file = open('myfile.dat', 'rw') should do this, right?
It is not working for me (Python 2.6.2) and I'm wondering if it is a version problem, or not supposed to work like that or what.
The bottom line is, I just need a solution for the problem. I am curious about the other stuff, but all I need is a nice way to do the opening part.
The enclosing directory was writeable by user and group, not other (I'm on a Linux system... so permissions 775 in other words), and the exact error was:
IOError: no such file or directory. |
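The 'w+' / 'a+' distinction suggested in the answer above can be demonstrated in a few lines ('demo.dat' is just a scratch file name for this illustration):

```python
# 'demo.dat' is a throwaway file name for this example.
with open("demo.dat", "w") as f:
    f.write("first line\n")

with open("demo.dat", "a+") as f:   # append: existing content is preserved
    f.write("second line\n")
    f.seek(0)                       # a+ positions at the end, so rewind to read
    appended = f.read()

with open("demo.dat", "w+") as f:   # w+ truncates: previous content is gone
    truncated = f.read()

print(appended, repr(truncated))
```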
open() in Python does not create a file if it doesn't exist | 2,967,395 | 6 | 841 | 1,119,129 | 0 | python,linux,file-io,file-permissions | What do you want to do with the file? Only write to it, or both read and write?
'w' and 'a' allow writing and will create the file if it doesn't exist.
If you need to read from a file, the file has to exist before you open it. You can test its existence before opening it, or use a try/except. | 0 | 1 | 0 | 0 | 2010-06-03T15:05:00.000 | 17 | 1 | false | 2,967,194 | 1 | 0 | 0 | 4 | What is the best way to open a file as read/write if it exists, or if it does not, then create it and open it as read/write? From what I read, file = open('myfile.dat', 'rw') should do this, right?
It is not working for me (Python 2.6.2) and I'm wondering if it is a version problem, or not supposed to work like that or what.
The bottom line is, I just need a solution for the problem. I am curious about the other stuff, but all I need is a nice way to do the opening part.
The enclosing directory was writeable by user and group, not other (I'm on a Linux system... so permissions 775 in other words), and the exact error was:
IOError: no such file or directory. |
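The try/except suggestion above, sketched for the question's 'myfile.dat' (open read/write if it exists, otherwise create it, without truncating any existing content):

```python
# Mirrors the question's 'myfile.dat': fall back to creating the file
# only when 'r+' fails because it is missing.
try:
    f = open("myfile.dat", "r+")
    created = False
except IOError:   # FileNotFoundError on Python 3; IOError still catches it
    f = open("myfile.dat", "w+")
    created = True
f.close()
print("created new file" if created else "opened existing file")
```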
open() in Python does not create a file if it doesn't exist | 33,069,611 | 4 | 841 | 1,119,129 | 0 | python,linux,file-io,file-permissions | Use w+ for reading and writing, truncating the file if it exists and creating it if it does not; r+ for reading and writing an existing file (it will fail if the file does not exist); or a+ for reading and appending, creating the file if it does not exist. | 0 | 1 | 0 | 0 | 2010-06-03T15:05:00.000 | 17 | 0.047024 | false | 2,967,194 | 1 | 0 | 0 | 4 | What is the best way to open a file as read/write if it exists, or if it does not, then create it and open it as read/write? From what I read, file = open('myfile.dat', 'rw') should do this, right?
It is not working for me (Python 2.6.2) and I'm wondering if it is a version problem, or not supposed to work like that or what.
The bottom line is, I just need a solution for the problem. I am curious about the other stuff, but all I need is a nice way to do the opening part.
The enclosing directory was writeable by user and group, not other (I'm on a Linux system... so permissions 775 in other words), and the exact error was:
IOError: no such file or directory. |
How to run a script in the background even after I logout SSH? | 2,975,852 | 37 | 145 | 252,941 | 0 | python,service,cron | If you've already started the process, and don't want to kill it and restart under nohup, you can send it to the background, then disown it.
Ctrl+Z (suspend the process)
bg (restart the process in the background)
disown %1 (assuming this is job #1, use jobs to determine) | 0 | 1 | 0 | 1 | 2010-06-04T15:39:00.000 | 12 | 1 | false | 2,975,624 | 0 | 0 | 0 | 2 | I have Python script bgservice.py and I want it to run all the time, because it is part of the web service I build. How can I make it run continuously even after I logout SSH? |
How to run a script in the background even after I logout SSH? | 69,754,988 | 7 | 145 | 252,941 | 0 | python,service,cron | Alternate answer: tmux
ssh into the remote machine
type tmux into cmd
start the process you want inside the tmux e.g. python3 main.py
leave the tmux session with Ctrl+b then d
It is now safe to exit the remote machine. When you come back use tmux attach to re-enter tmux session.
If you want to start multiple sessions, name each session using Ctrl+b then $, then type your session name.
to list all session use tmux list-sessions
to attach a running session use tmux attach-session -t <session-name>. | 0 | 1 | 0 | 1 | 2010-06-04T15:39:00.000 | 12 | 1 | false | 2,975,624 | 0 | 0 | 0 | 2 | I have Python script bgservice.py and I want it to run all the time, because it is part of the web service I build. How can I make it run continuously even after I logout SSH? |
ruby or python more suitable for scripting in all OSes? | 2,979,717 | 5 | 1 | 551 | 0 | python,ruby | Personally, I find the documentation for Python is much better than that for Ruby. The Docs for Ruby are full of cryptic examples that are terse, short, and just not very helpful.
On the other hand, docs for Python exist everywhere, but more importantly, in a useful, helpful form. | 0 | 1 | 0 | 1 | 2010-06-05T01:53:00.000 | 6 | 0.16514 | false | 2,978,801 | 0 | 0 | 0 | 6 | if i want to script a mini-application (in the Terminal) in mac and windows, which one is preferred: ruby or python?
or is there no major difference just a matter of taste?
cause I know Python definitely is a good scripting language.
thanks |
ruby or python more suitable for scripting in all OSes? | 2,978,961 | 2 | 1 | 551 | 0 | python,ruby | I believe python and ruby (since at least OS X 10.4) came pre-installed on Mac, that is a convenience.
There are easy installers for Windows. On Linux of course your mileage may vary.
As much as I like Python myself, I don't think one is better than the other for your purpose. | 0 | 1 | 0 | 1 | 2010-06-05T01:53:00.000 | 6 | 0.066568 | false | 2,978,801 | 0 | 0 | 0 | 6 | if i want to script a mini-application (in the Terminal) in mac and windows, which one is preferred: ruby or python?
or is there no major difference just a matter of taste?
cause I know Python definitely is a good scripting language.
thanks |
ruby or python more suitable for scripting in all OSes? | 2,979,070 | 2 | 1 | 551 | 0 | python,ruby | Python is perhaps a little more common, and arguably more mature, so on that basis alone, it may be worth choosing Python.
That said, both are available by default on Mac OS X, and neither are available on Windows by default, so in this case it really does not matter. | 0 | 1 | 0 | 1 | 2010-06-05T01:53:00.000 | 6 | 0.066568 | false | 2,978,801 | 0 | 0 | 0 | 6 | if i want to script a mini-application (in the Terminal) in mac and windows, which one is preferred: ruby or python?
or is there no major difference just a matter of taste?
cause I know Python definitely is a good scripting language.
thanks |
ruby or python more suitable for scripting in all OSes? | 2,979,691 | 2 | 1 | 551 | 0 | python,ruby | I would suggest going with Python over Ruby on Windows unless you are willing to port some gems, as a few of the gems (no, I cannot say what percentage) use unix/mac-specific stuff (anything from ENV[OSTYPE] to wget to unix processes) that I have seen break on Windows. | 0 | 1 | 0 | 1 | 2010-06-05T01:53:00.000 | 6 | 0.066568 | false | 2,978,801 | 0 | 0 | 0 | 6 | if i want to script a mini-application (in the Terminal) in mac and windows, which one is preferred: ruby or python?
or is there no major difference just a matter of taste?
cause I know Python definitely is a good scripting language.
thanks |
ruby or python more suitable for scripting in all OSes? | 3,028,950 | 0 | 1 | 551 | 0 | python,ruby | Both are excellent options; you won't go wrong no matter which one you choose. You should check out the availability of libraries for the task at hand and also how helpful the community is. The Python community is humongous and seems friendlier to me. Rubyists seem to have some anger management issues. | 0 | 1 | 0 | 1 | 2010-06-05T01:53:00.000 | 6 | 0 | false | 2,978,801 | 0 | 0 | 0 | 6 | if i want to script a mini-application (in the Terminal) in mac and windows, which one is preferred: ruby or python?
or is there no major difference just a matter of taste?
cause I know Python definitely is a good scripting language.
thanks |
ruby or python more suitable for scripting in all OSes? | 2,978,809 | 5 | 1 | 551 | 0 | python,ruby | Matter of taste, really. They each have a pretty good set of libraries and are cross-platform, so it'll be a matter of which one you prefer to code in. | 0 | 1 | 0 | 1 | 2010-06-05T01:53:00.000 | 6 | 1.2 | true | 2,978,801 | 0 | 0 | 0 | 6 | if i want to script a mini-application (in the Terminal) in mac and windows, which one is preferred: ruby or python?
or is there no major difference just a matter of taste?
cause I know Python definitely is a good scripting language.
thanks |
Should I use Python or Assembly for a super fast copy program | 2,982,842 | 5 | 18 | 3,559 | 0 | python,assembly | I don't think writing it in assembly will help you. Writing a routine in assembly could help you if you are processor-bound and think you can do something smarter than your compiler. But in a network copy, you will be IO bound, so shaving a cycle here or there almost certainly will not make a difference.
I think the general rule here is that it's always best to profile your process to see where you are spending the time before thinking about optimizations. | 0 | 1 | 0 | 0 | 2010-06-06T01:54:00.000 | 13 | 0.076772 | false | 2,982,829 | 1 | 0 | 0 | 9 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would but I only started really learning to program 18 months and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses, but I don't think it is an I/O limitation. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware, I can't afford 10K rpm 2 terabyte drives (if they even make them).
A number of you are suggesting a file synching solution. But that does not solve my problem. First off, the file synching solutions I have played with build a map (for want of a better term) of the data first, I have too many little files so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately, it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is if I have a backup failure the other two have to stay off line until the new one is built. So I need to copy from one of the three back up single drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup, but it is expensive to pull it off of there. |
Should I use Python or Assembly for a super fast copy program | 2,982,838 | 4 | 18 | 3,559 | 0 | python,assembly | I don't believe it will make a discernable difference which language you use for this purpose. The bottleneck here is not your application but the disk performance.
Just because a language is interpreted, it doesn't mean that every single operation in it is slow. As an example, it's a fairly safe bet that the lower-level code in Python will call assembly (or compiled) code to do copying.
Similarly, when you do stuff with collections and other libraries in Java, that's mostly compiled C, not interpreted Java.
There are a couple of things you can do to possibly speed up the process.
Buy faster hard disks (10K RPM rather than 7.5K or less, lower latency, larger caches and so forth).
Copying between two physical disks may be faster than copying on a single disk (due to the head movement).
If you're copying across the network, stage it. In other words, copy it fast to another local disk, then slow from there across the net.
You can also stage it in a different way. If you run a nightly (or even weekly) process to keep the copy up to date (only copying changed files) rather than three times a year, you won't find yourself in a situation where you have to copy a massive amount.
Also if you're using the network, run it on the box where the repository is. You don't want to copy all the data from a remote disk to another PC then back to yet another remote disk.
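The only-copy-changed-files idea above could be sketched roughly like this, comparing modification times between source and destination (directory names here are hypothetical examples):

```python
import os
import shutil

def incremental_copy(src_root, dst_root):
    """Copy only files that are missing or older at the destination."""
    copied = 0
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        dst_dir = dst_root if rel == "." else os.path.join(dst_root, rel)
        os.makedirs(dst_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_dir, name)
            # copy only if the target is absent or stale (older mtime)
            if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                shutil.copy2(src, dst)  # copy2 preserves timestamps
                copied += 1
    return copied

# tiny self-contained demo with throwaway directories
os.makedirs("src_demo", exist_ok=True)
with open(os.path.join("src_demo", "a.txt"), "w") as f:
    f.write("data")

first_pass = incremental_copy("src_demo", "dst_demo")   # copies the new file
second_pass = incremental_copy("src_demo", "dst_demo")  # nothing changed, copies nothing
print(first_pass, second_pass)
```

With 20 million files even the walk itself is expensive, so this is only a sketch of the idea; a production sync tool keeps additional state to avoid rescanning everything.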
You may also want to be careful with Python. I may be mistaken (and no doubt the Pythonistas will set me straight if I'm mistaken on this count ) but I have a vague recollection that its threading may not fully utilise multi-core CPUs. In that case, you'd be better off with another solution.
You may well be better off sticking with your current solution. I suspect a specialised copy program will already be optimised as much as possible since that's what they do. | 0 | 1 | 0 | 0 | 2010-06-06T01:54:00.000 | 13 | 0.061461 | false | 2,982,829 | 1 | 0 | 0 | 9 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would but I only started really learning to program 18 months and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses, but I don't think it is an I/O limitation. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware, I can't afford 10K rpm 2 terabyte drives (if they even make them).
A number of you are suggesting a file synching solution. But that does not solve my problem. First off, the file synching solutions I have played with build a map (for want of a better term) of the data first, I have too many little files so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately, it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is if I have a backup failure the other two have to stay off line until the new one is built. So I need to copy from one of the three back up single drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup, but it is expensive to pull it off of there. |
Should I use Python or Assembly for a super fast copy program | 2,982,852 | 1 | 18 | 3,559 | 0 | python,assembly | Right, here the bottleneck is not in the execution of the copying software itself but rather the disk access.
Going lower level does not mean that you will have better performance. Take the simple example of the open() and fopen() APIs: open() is much lower level and more direct, while fopen() is a library wrapper for the system open() function.
But in reality fopen has better performance because it adds buffering and optimizes a lot of stuff that is not done in the raw open() function.
Implementing optimizations at the assembly level is much harder and less efficient than in Python. | 0 | 1 | 0 | 0 | 2010-06-06T01:54:00.000 | 13 | 0.015383 | false | 2,982,829 | 1 | 0 | 0 | 9 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would but I only started really learning to program 18 months and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses, but I don't think it is an I/O limitation. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware, I can't afford 10K rpm 2 terabyte drives (if they even make them).
A number of you are suggesting a file synching solution. But that does not solve my problem. First off, the file synching solutions I have played with build a map (for want of a better term) of the data first, I have too many little files so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately, it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is if I have a backup failure the other two have to stay off line until the new one is built. So I need to copy from one of the three back up single drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup, but it is expensive to pull it off of there. |
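Python exposes an analogous split to the C open()/fopen() pair discussed in the answer above: os.open()/os.read() are thin wrappers over the raw system calls, while the built-in open() layers buffering on top of the same kind of descriptor. A quick sketch (the file name is a throwaway example):

```python
import os

# hypothetical scratch file for the demo
with open("demo_src.txt", "w") as f:
    f.write("hello buffered world\n")

# low-level: thin wrappers around the open()/read() system calls
fd = os.open("demo_src.txt", os.O_RDONLY)
raw = os.read(fd, 1024)
os.close(fd)

# high-level: the built-in open() returns a buffered file object
with open("demo_src.txt", "rb") as f:
    buffered = f.read()

print(raw == buffered)  # same bytes; the difference is buffering, not content
```

For bulk copying, the buffered high-level calls are usually the right default, exactly as the fopen() point in the answer suggests.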
Should I use Python or Assembly for a super fast copy program | 2,982,870 | 2 | 18 | 3,559 | 0 | python,assembly | Before you question the copying app, you should most likely question the data path. What are the theoretical limits and what are you achieving? What are the potential bottlenecks? If there is a single data path, you are probably not going to get a significant boost by parallelizing storage tasks. You may even exacerbate it. Most of the benefits you'll get with asynchronous I/O come at the block level - a level lower than the file system.
One thing you could do to boost I/O is decouple the fetch from source and store to destination portions. Assuming that the source and destination are separate entities, you could theoretically halve the amount of time for the process. But are the standard tools already doing this??
Oh - and on Python and the GIL - with I/O-bound execution, the GIL is really not quite that bad of a penalty. | 0 | 1 | 0 | 0 | 2010-06-06T01:54:00.000 | 13 | 0.03076 | false | 2,982,829 | 1 | 0 | 0 | 9 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would but I only started really learning to program 18 months and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses, but I don't think it is an I/O limitation. I am not going over a network; the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a FireWire port, so my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware, I can't afford 10K rpm 2 terabyte drives (if they even make them).
A number of you are suggesting a file synching solution. But that does not solve my problem. First off, the file synching solutions I have played with build a map (for want of a better term) of the data first, I have too many little files so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately, it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is if I have a backup failure the other two have to stay off line until the new one is built. So I need to copy from one of the three back up single drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup but it is expensive to pull if off of there. |
Should I use Python or Assembly for a super fast copy program | 2,982,837 | 42 | 18 | 3,559 | 0 | python,assembly | Copying files is an I/O bound process. It is unlikely that you will see any speed up from rewriting it in assembly, and even multithreading may just cause things to go slower as different threads requesting different files at the same time will result in more disk seeks.
Using a standard tool is probably the best way to go here. If there is anything to optimize, you might want to consider changing your file system or your hardware. | 0 | 1 | 0 | 0 | 2010-06-06T01:54:00.000 | 13 | 1 | false | 2,982,829 | 1 | 0 | 0 | 9 | As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.
I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies.
I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious, it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages.
Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on what will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss)
The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
Thanks for the early responses but I don't think it is I/O limitations. I am not going over a network, the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a Firewire port; my thinking is that both connections should allow faster transfer.
Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk.
I am bound by the hardware, I can't afford 10K rpm 2 terabyte drives (if they even make them).
A number of you are suggesting a file synching solution. But that does not solve my problem. First off, the file synching solutions I have played with build a map (for want of a better term) of the data first; I have too many little files so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately, it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is if I have a backup failure the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube.
At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives.
I rent some cloud space from someone where my data is also stored as the deepest backup, but it is expensive to pull it off of there.
Should I use Python or Assembly for a super fast copy program | 2,982,847 | 8 | 18 | 3,559 | 0 | python,assembly | There are 2 places for slowdown:
Per-file copy is MUCH slower than a disk copy (where you literally clone 100% of each sector's data). Especially for 20mm files. You can't fix that one with the most tuned assembly, unless you switch from cloning files to cloning raw disk data. In the latter case, yes, Assembly is indeed your ticket (or C).
Simply storing 20mm files and recursively finding them may be less efficient in Python. But that's more likely a function of finding a better algorithm and is not likely to be significantly improved by Assembly. Plus, that will NOT be the main contributor to 50 hrs
In summary - Assembly WILL help if you do raw disk sector copy, but will NOT help if you do filesystem level copy. | 0 | 1 | 0 | 0 | 2010-06-06T01:54:00.000 | 13 | 1 | false | 2,982,829 | 1 | 0 | 0 | 9 |
Should I use Python or Assembly for a super fast copy program | 2,982,913 | 1 | 18 | 3,559 | 0 | python,assembly | 1.5 TB in approximately 50 hours gives a throughput of (1.5 * 1024^2) MB / (50 * 60^2) s = 8.7 MB/s. A theoretical 100 Mbit/s bandwidth should give you 12.5 MB/s. It seems to me that your firewire connection is a problem. You should look at upgrading drivers, or upgrading to a better firewire/esata/usb interface.
That said, rather than the python/assembly question, you should look at acquiring a file syncing solution. It shouldn't be necessary to copy that data over and over again. | 0 | 1 | 0 | 0 | 2010-06-06T01:54:00.000 | 13 | 0.015383 | false | 2,982,829 | 1 | 0 | 0 | 9 |
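The back-of-the-envelope arithmetic in the answer above is easy to check (units are MB and seconds; the 100 Mbit/s link speed and 8 bits per byte are the answer's own assumptions):

```python
size_mb = 1.5 * 1024 ** 2        # 1.5 TB expressed in MB
elapsed_s = 50 * 60 ** 2         # 50 hours in seconds
achieved = size_mb / elapsed_s   # throughput actually achieved, MB/s
link_limit = 100 / 8             # 100 Mbit/s link expressed in MB/s
print(round(achieved, 1), link_limit)  # -> 8.7 12.5
```

The achieved rate sitting well below even a modest link's ceiling is what points the finger at the Firewire interface rather than the copy program.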
Should I use Python or Assembly for a super fast copy program | 2,983,371 | 0 | 18 | 3,559 | 0 | python,assembly | As already said, it is not the language that makes the difference here; assembly could be cool or fast for computations, but when the processor has to "speak" to peripherals, the limit is set by those peripherals. In this case the speed is given by your hard disk speed, and this is a limit you can hardly change without changing your hd and waiting for better hds in the future, but also by the way data are organized on the disk, i.e. by the filesystem. AFAIK, the most used filesystems are not optimized to handle tons of "small" files quickly; rather, they are optimized to hold "few" huge files.
So, changing the filesystem you're using could increase your copy speed, as far as it is more suitable to your case (and of course hd limits still apply!). If you want to "taste" the real limit of your hd, you should try a copy "sector by sector", replicating the exact image of your source hd to the dest hd. (But this option has some points to be aware of) | 0 | 1 | 0 | 0 | 2010-06-06T01:54:00.000 | 13 | 0 | false | 2,982,829 | 1 | 0 | 0 | 9 |
Should I use Python or Assembly for a super fast copy program | 3,181,017 | 0 | 18 | 3,559 | 0 | python,assembly | Neither. If you want to take advantage of OS features to speed up I/O, you'll need to use some specialized system calls that are most easily accessed in C (or C++). You don't need to know a lot of C to write such a program, but you really need to know the system call interfaces.
In all likelihood, you can solve the problem without writing any code by using an existing tool or tuning the operating system, but if you really do need to write a tool, C is the most straightforward way to do it. | 0 | 1 | 0 | 0 | 2010-06-06T01:54:00.000 | 13 | 0 | false | 2,982,829 | 1 | 0 | 0 | 9 |