Dataset schema (per-column dtype and observed min/max):

| Column                             | dtype         | min  | max   |
|------------------------------------|---------------|------|-------|
| Web Development                    | int64         | 0    | 1     |
| Data Science and Machine Learning  | int64         | 0    | 1     |
| Question                           | stringlengths | 28   | 6.1k  |
| is_accepted                        | bool          | 2 classes    |
| Q_Id                               | int64         | 337  | 51.9M |
| Score                              | float64       | -1   | 1.2   |
| Other                              | int64         | 0    | 1     |
| Database and SQL                   | int64         | 0    | 1     |
| Users Score                        | int64         | -8   | 412   |
| Answer                             | stringlengths | 14   | 7k    |
| Python Basics and Environment      | int64         | 0    | 1     |
| ViewCount                          | int64         | 13   | 1.34M |
| System Administration and DevOps   | int64         | 0    | 1     |
| Q_Score                            | int64         | 0    | 1.53k |
| CreationDate                       | stringlengths | 23   | 23    |
| Tags                               | stringlengths | 6    | 90    |
| Title                              | stringlengths | 15   | 149   |
| Networking and APIs                | int64         | 1    | 1     |
| Available Count                    | int64         | 1    | 12    |
| AnswerCount                        | int64         | 1    | 28    |
| A_Id                               | int64         | 635  | 72.5M |
| GUI and Desktop Applications       | int64         | 0    | 1     |

One question/answer record per block below; topic flags are summarized per record as "Topics".

---
Question:
Python seems to have functions for copying files (e.g. shutil.copy) and functions for copying directories. This also works with network paths. Is there a way to copy only part of a file from multiple sources and merge the parts afterwards, like a download manager that downloads parts of a single file from multiple sources to increase the overall download speed? I want to achieve the same over LAN. I have the same file on more than two machines on my network. How could I copy parts of the file to a single destination from multiple sources? Can it be done with the standard shutil library?
[is_accepted: false | Q_Id: 10,717,858 | Score: 0 | Users Score: 0]
Answer:
- Stat the file to find the size.
- Divvy up the start:end points that each reader will handle.
- Open your write file in binary mode.
- Open your readers in binary mode.
- Handle the merging/collating of data when writing it out.
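A minimal sketch of those steps, assuming every source path holds an identical copy of the file (the UNC paths and function name are illustrative, not from the answer):

```python
import os
import threading

def copy_ranges(sources, dest):
    # Stat one copy to find the size (all sources hold identical files).
    size = os.stat(sources[0]).st_size
    chunk = size // len(sources)
    # Divvy up the start:end points that each reader will handle.
    ranges = [(i * chunk, size if i == len(sources) - 1 else (i + 1) * chunk)
              for i in range(len(sources))]
    with open(dest, "wb") as f:        # write file in binary mode
        f.truncate(size)               # pre-size so workers can seek+write

    def worker(src, start, end):
        # Reader and writer both opened in binary mode.
        with open(src, "rb") as r, open(dest, "r+b") as w:
            r.seek(start)
            w.seek(start)
            w.write(r.read(end - start))

    threads = [threading.Thread(target=worker, args=(s, a, b))
               for s, (a, b) in zip(sources, ranges)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# e.g. copy_ranges([r"\\hostA\share\big.bin", r"\\hostB\share\big.bin"], "big.bin")
```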
[Title: copy parts of a file from multiple sources over LAN using python]
[Tags: python,copy | Created: 2012-05-23T10:15:00.000 | Topics: Networking and APIs]
[ViewCount: 358 | Q_Score: 0 | Available Count: 1 | AnswerCount: 2 | A_Id: 10,717,978]

---
Question:
In a block comment, I want to reference a URL that is over 80 characters long. What is the preferred convention for displaying this URL? I know bit.ly is an option, but the URL itself is descriptive. Shortening it and then having a nested comment describing the shortened URL seems like a crappy solution.
[is_accepted: false | Q_Id: 10,739,843 | Score: 1 | Users Score: 54]
Answer:
You can use the # noqa at the end of the line to stop PEP8/Flake8 from running that check. This is allowed by PEP8 via: Special cases aren't special enough to break the rules.
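A minimal example (the URL is a placeholder; `# noqa: E501` is the flake8 spelling that silences only the line-length check, while a bare `# noqa` silences every check on the line):

```python
# Rationale for this workaround is described at
# https://example.com/some/very/long/descriptive/path/that/blows/past/the/line-length/limit  # noqa: E501
```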
[Title: How should I format a long url in a python comment and still be PEP8 compliant]
[Tags: python,pep8 | Created: 2012-05-24T14:47:00.000 | Topics: Networking and APIs]
[ViewCount: 24,654 | Q_Score: 100 | Available Count: 1 | AnswerCount: 8 | A_Id: 25,034,769]

---
Question:
I'm writing a python program, to work on windows, the program has heavy threading and I/O, it heavily uses sockets in its I/O to send and receive data from remote locations, other than that, it has some string manipulation using regular expressions. My question is: performance wise, is python the best programming language for such a program, compared to for example Java, or C#? Is there another language that would better fit the description above?
[is_accepted: true | Q_Id: 10,745,931 | Score: 1.2 | Users Score: 1]
Answer:
Your requirements are:
- works on Windows;
- heavy threading and I/O;
- heavy use of sockets to send and receive data;
- some string manipulation using regular expressions.

The reason it is hard to say definitively which language is best for this task is that almost all languages match your requirements:
- Windows: all languages of note.
- Heavy use of threads: C#, Java, C, C++, Haskell, Scala, Clojure, Erlang. Process-based threads or other workarounds: Ruby, Python, and other interpreted languages without true fine-grained concurrency.
- Sockets: all languages of note.
- Regexes: all languages of note.

The most interesting constraint is the need to do massive concurrent IO. This means your bottleneck is going to be context switching, the cost of threads, and whether you can run thread pools on multiple cores. Depending on your scaling, you might want to use a compiled language, one with lightweight threads, that can use multiple cores easily. That reduces the list to C++, Haskell, Erlang, Java, Scala, etc. You can probably work around the global interpreter lock in Python by using forked processes; it just won't be as fine-grained.
[Title: Python socket I/O performance compared to other languages]
[Tags: python,performance,sockets,programming-languages,io | Created: 2012-05-24T22:03:00.000 | Topics: Python Basics and Environment, Networking and APIs]
[ViewCount: 1,715 | Q_Score: 2 | Available Count: 2 | AnswerCount: 2 | A_Id: 10,757,187]

---
Question:
I'm writing a python program, to work on windows, the program has heavy threading and I/O, it heavily uses sockets in its I/O to send and receive data from remote locations, other than that, it has some string manipulation using regular expressions. My question is: performance wise, is python the best programming language for such a program, compared to for example Java, or C#? Is there another language that would better fit the description above?
[is_accepted: false | Q_Id: 10,745,931 | Score: 0.197375 | Users Score: 2]
Answer:
Interesting question. The python modules that deal with sockets wrap the underlying OS functionality directly, so in a given operation you are not likely to see any speed difference depending on the wrapper language. Where you will notice speed issues with python is in really tight looping, like looking at every character in a stream. You did not indicate how much data you are sending; unless you are undertaking a solution that has to maintain a huge volume of I/O, python will likely do just fine. Implementing nginx or memcached or redis in python... not as good of an idea. And as always... benchmark. If it is fast enough, then why change? PS. You, the programmer, will likely get it done faster in python!
[Title: Python socket I/O performance compared to other languages]
[Tags: python,performance,sockets,programming-languages,io | Created: 2012-05-24T22:03:00.000 | Topics: Python Basics and Environment, Networking and APIs]
[ViewCount: 1,715 | Q_Score: 2 | Available Count: 2 | AnswerCount: 2 | A_Id: 10,746,007]

---
Question:
Node.js is a perfect match for our web project, but there are few computational tasks for which we would prefer Python. We also already have a Python code for them. We are highly concerned about speed, what is the most elegant way how to call a Python "worker" from node.js in an asynchronous non-blocking way?
[is_accepted: false | Q_Id: 10,775,351 | Score: 1 | Users Score: 7]
Answer:
If you arrange to have your Python worker in a separate process (either long-running server-type process or a spawned child on demand), your communication with it will be asynchronous on the node.js side. UNIX/TCP sockets and stdin/out/err communication are inherently async in node.
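A minimal sketch of such a long-running worker on the Python side, assuming a line-delimited JSON protocol (the field names are invented; on the node side, child_process.spawn gives you asynchronous stdin/stdout streams to pair with it):

```python
import json
import sys

# Read one JSON request per line from stdin, write one JSON reply per line.
def main():
    for line in sys.stdin:
        request = json.loads(line)
        result = {"id": request.get("id"), "answer": request.get("x", 0) ** 2}
        sys.stdout.write(json.dumps(result) + "\n")
        sys.stdout.flush()  # flush so node's 'data' callback fires promptly

if __name__ == "__main__":
    main()
```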
[Title: Combining node.js and Python]
[Tags: python,node.js,ipc | Created: 2012-05-27T16:03:00.000 | Topics: Python Basics and Environment, Networking and APIs]
[ViewCount: 116,462 | Q_Score: 134 | Available Count: 1 | AnswerCount: 7 | A_Id: 10,775,437]

---
Question:
For one of my python project I am using reportlab's canvas feature to generate pdf document. Can anyone please help me to print small subset of html (p, strong, ul, ol, li, img, alignments) on reportlab canvas? Thanks in advance...
[is_accepted: true | Q_Id: 10,811,720 | Score: 1.2 | Users Score: 2]
Answer:
If this is what you are trying to do, you should look at using Platypus with ReportLab, a built-in set of classes in ReportLab for building documents out of objects representing page elements. Or, if you want really simple, xhtml2pdf would probably be better.
[Title: Python: Printing Html in PDF using Reportlab Canvas]
[Tags: python,pdf-generation,reportlab | Created: 2012-05-30T07:21:00.000 | Topics: Data Science and Machine Learning, Networking and APIs]
[ViewCount: 4,178 | Q_Score: 3 | Available Count: 1 | AnswerCount: 1 | A_Id: 10,826,660]

---
Question:
This is using web.py with uwsgi. When I return page data from a POST handler, the browser receives a blank page instead. GET handlers are working fine for me. The handler is being called correctly, and redirects (web.seeother) will work.
[is_accepted: false | Q_Id: 10,841,854 | Score: 0.197375 | Users Score: 1]
Answer:
To answer my own question: you need to call web.input(), otherwise the returned data will be ignored (who knows why? is it a bug?).
[Title: Page returned by POST handler ignored - get blank response (web.py)]
[Tags: python,web.py,uwsgi | Created: 2012-05-31T21:57:00.000 | Topics: Data Science and Machine Learning, Networking and APIs]
[ViewCount: 257 | Q_Score: 1 | Available Count: 1 | AnswerCount: 1 | A_Id: 10,842,122]

---
Question:
I am planning to make a multiplayer game with a JavaScript based Client UI and Python on the server side. The game will be dynamic, so communication speed is very important - consequently I have decided to use UDP. Does anyone have any tips on implementations I could utilize. What tools would you recommend for this project?
[is_accepted: false | Q_Id: 10,858,172 | Score: 0.379949 | Users Score: 4]
Answer:
I recommend doing the dumbest simplest thing to get your project to work, meaning probably http and Json. Then deal with any performance problems. Otherwise you'll spend much of your project on a hard optimization problem that might not really matter.
[Title: UDP communication between JavaScript and Python]
[Tags: javascript,python,udp,multiplayer | Created: 2012-06-01T22:38:00.000 | Topics: Data Science and Machine Learning, Python Basics and Environment, Networking and APIs]
[ViewCount: 302 | Q_Score: 0 | Available Count: 2 | AnswerCount: 2 | A_Id: 10,858,211]

---
Question:
I am planning to make a multiplayer game with a JavaScript based Client UI and Python on the server side. The game will be dynamic, so communication speed is very important - consequently I have decided to use UDP. Does anyone have any tips on implementations I could utilize. What tools would you recommend for this project?
[is_accepted: false | Q_Id: 10,858,172 | Score: 0.099668 | Users Score: 1]
Answer:
I've been using SockJS + Tornado for this sort of thing. Easy to get started with, and well supported in modern browsers.
[Title: UDP communication between JavaScript and Python]
[Tags: javascript,python,udp,multiplayer | Created: 2012-06-01T22:38:00.000 | Topics: Data Science and Machine Learning, Python Basics and Environment, Networking and APIs]
[ViewCount: 302 | Q_Score: 0 | Available Count: 2 | AnswerCount: 2 | A_Id: 10,858,488]

---
Question:
I want to display visual/auditory stimuli inside a web browser for psychophysic experiments. I plan on using python, but I am concerned with timing. I obviously can not rely on screen refresh for timing which is common in these types of tasks. How much can I hope for in terms of accuracy for timing on the web and what are the best tools to use with Python. I am thinking of using FastCGI I just want to hear peoples thoughts on this.
[is_accepted: true | Q_Id: 10,858,337 | Score: 1.2 | Users Score: 2]
Answer:
Do your timing in JS: save the current time in ms on document.ready, and again when the user hits a key. Benchmark your test with either a high-speed camera or a test rig that "hits" a key, e.g. screen flash -> phototransistor -> usb device -> virtual keyboard.
[Title: Achieving best timing online for psychophysics experiments using Python on the web]
[Tags: python,web,fastcgi,timing,psychtoolbox | Created: 2012-06-01T22:58:00.000 | Topics: Other, Networking and APIs]
[ViewCount: 296 | Q_Score: 2 | Available Count: 1 | AnswerCount: 1 | A_Id: 10,999,764]

---
Question:
Currently on OSX Selenium driver start opens up a new Firefox icon on OSX. Also, the current application loses focus and thus interrupts e.g. your typing. Is it possible to make Selenium launch Firefox on OSX such way that it would not take focus or cause extra action in Dock?
[is_accepted: false | Q_Id: 10,862,982 | Score: 0.197375 | Users Score: 2]
Answer:
The problem is due to the fact that Firefox does not always fire events correctly when it does not have focus. This will be fixed soon as it is now a normative part of the HTML5 spec. I would recommend just having an extremely lightweight VM, in something like virtualbox or VMWare Fusion and just using Remote WebDriver.
[Title: Suppressing Firefox icon in OSX Dock when running tests]
[Tags: python,macos,firefox,selenium | Created: 2012-06-02T14:15:00.000 | Topics: Networking and APIs]
[ViewCount: 427 | Q_Score: 0 | Available Count: 1 | AnswerCount: 2 | A_Id: 10,869,190]

---
Question:
I am working on a project to make a web-based proxy using python and mechanize. I have a problem: the page that mechanize returns has URLs that are not mechanized, and if the user clicks on one, they will go through the link using their own computer's IP (not the server my code is installed on). Is there any way to fix that?
[is_accepted: false | Q_Id: 10,869,211 | Score: 0 | Users Score: 0]
Answer:
You can rewrite the urls, either by parsing the HTML with lxml, beautiful soup, etc - and then rewriting them and re-dumping the DOM to string before sending it to the user. Or by searching for URLs with regular expressions, and return a rewritten HTML. keep in mind that doing it properly, with links generated by javascript, etc - is almost impossible. That's why people use proxy servers.
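A minimal sketch of the lxml variant, assuming the proxy exposes some fetch endpoint (the proxy_prefix URL is hypothetical):

```python
import lxml.html

def proxify(html, proxy_prefix="https://myproxy.example/fetch?url="):
    doc = lxml.html.fromstring(html)
    # Rewrite every link in the returned page so it points back at the proxy.
    doc.rewrite_links(lambda url: proxy_prefix + url)
    return lxml.html.tostring(doc)
```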
[Title: how to mechanize the urls inside a returned page by mechanize in python?]
[Tags: python,mechanize | Created: 2012-06-03T09:29:00.000 | Topics: Data Science and Machine Learning, Networking and APIs]
[ViewCount: 104 | Q_Score: 0 | Available Count: 1 | AnswerCount: 1 | A_Id: 10,869,296]

---
Question:
What is the best solution in python, to monitor CPU, memory, and bandwidth usage per domain? This solution has to also work on multiple instances.
[is_accepted: false | Q_Id: 10,869,472 | Score: 0.099668 | Users Score: 1]
Answer:
CPU can be monitored by CloudWatch using the built-in metrics. For memory you can use custom metrics with the AWS command line tools or write powershell/ruby scripts with the official AWS SDK. You can monitor anything that's easily quantifiable using the AWS SDK. To monitor bandwidth usage per domain I'd recommend something like ntop.
[Title: Monitor bandwidth, memory, cpu per domain on EC2]
[Tags: python,amazon-ec2,amazon-web-services,amazon-cloudwatch | Created: 2012-06-03T10:14:00.000 | Topics: Other, Python Basics and Environment, Networking and APIs]
[ViewCount: 3,733 | Q_Score: 4 | Available Count: 1 | AnswerCount: 2 | A_Id: 11,124,827]

---
Question:
I guess it's socket programming. But I have never done socket programming except for running the tutorial examples while learning Python. I need some more ideas to implement this. What I specifically need is to run a monitoring program on a server which will poll or listen to traffic being exchanged between different IPs across different popular ports. For example, how do I get data received and sent through port 80 of 192.168.1.10 and 192.168.1.1 (which is the gateway)? I checked out a number of ready-made tools like MRTG, Bwmon, Ntop etc., but since we are looking at doing some specific pattern studies, we need to do the data capture within the program. The idea is to monitor some popular ports, do a study of network traffic across some periods, and compare them with some other data. We would like to figure out a way to do all this with Python...
[is_accepted: false | Q_Id: 10,871,752 | Score: 0 | Users Score: 0]
Answer:
IPTraf is an ncurses-based IP LAN monitoring tool. It can generate network statistics, including TCP, UDP, ICMP and more. Since you're thinking of executing it from python, you might consider using screen (a screen manager with VT100/ANSI terminal emulation) to get around the ncurses issues, and you may want to pass the logging and interval parameters to IPTraf, which force it to log to a file at a given interval. A little bit tricky, but eventually you can get what you are looking for by parsing the log file.
[Title: Python: how to calculate data received and sent between two ipaddresses and ports]
[Tags: python,sockets,network-traffic,traffic-measurement | Created: 2012-06-03T15:53:00.000 | Topics: Networking and APIs]
[ViewCount: 2,642 | Q_Score: 1 | Available Count: 1 | AnswerCount: 2 | A_Id: 10,872,601]

---
Question:
So I'm writing this program that uses python and sockets to talk over the internet, but that's not the issue. The issue is I wish to test how the program works in a realistic internet environment. ie: My client connects to a proxy somewhere over seas, then the data comes back to my router to my other computer. so I want my data to go overseas, then come back. Anyway this can be done? or is this impossible? I've tried using proxy servers and I can't for the life of me get them to work!
[is_accepted: false | Q_Id: 10,880,696 | Score: 0 | Users Score: 0]
Answer:
It depends on the type of program you are writing: do you just want your program to connect to another instance of the same program at some destination IP, or are you writing a server application? If so, what client connects to this server? I'm not certain you need to involve a proxy at all; TCP traffic will simply travel over the routers/switches it needs to get where it needs to go. If you can elaborate on your question, and specify whether you're writing a server that has a distinct client, or just a desktop app that is both the client and the server, that would be helpful.
[Title: Connect to your external IP to simulate the internet]
[Tags: python,sockets,proxy,ip,external | Created: 2012-06-04T11:44:00.000 | Topics: Networking and APIs]
[ViewCount: 1,546 | Q_Score: 1 | Available Count: 1 | AnswerCount: 4 | A_Id: 10,881,188]

---
Question:
I want to create a script in which, if an HTTP request is being executed (e.g. I have played a voice file using an HTTP operation Play() defined in my code), then while the file is being played I want a Pause() operation to be called which can pause the file being played. The problem I am facing is that, as the HTTP request for Play is made, the script gets back control only after the successful/failure execution of Play(), i.e. when the complete play operation has finished, due to which my pause operation returns failure because there isn't any file being played at that point. I can't use 2 scripts because both use the same data (Call-ID). Any help on this would be highly appreciated. Thanks in advance.
[is_accepted: false | Q_Id: 10,896,895 | Score: 0 | Users Score: 0]
Answer:
More detail would be helpful. What information does a thread need before it can construct a valid pause request? Could the thread that sends the play request write this information into some module-level data structure? Then the thread that is doing the pause could read this information and build a valid request.
[Title: Multiple Threads doing different operations in a single Grinder Script]
[Tags: python,jython,grinder | Created: 2012-06-05T11:59:00.000 | Topics: Networking and APIs]
[ViewCount: 219 | Q_Score: 0 | Available Count: 1 | AnswerCount: 1 | A_Id: 10,921,272]

---
Question:
Every Monday at Work, I have the task of printing out Account analysis (portfolio analysis) and Account Positions for over 50 accounts. So i go to the page, click "account analysis", enter the account name, click "format this page for printing", Print the output (excluding company disclosures), then I go back to the account analysis page and click "positions" instead this time, the positions for that account comes up. Then I click "format this page for printing", Print the output (excluding company disclosures).Then I repeat the process for the other 50 accounts. I haven't taken any programming classes in the past but I heard using python to automate a html response might help me do this faster. I was wondering if that's true, and if so, how does it work? Also, are there any other programs that could enable me automate this process and save time? Thank you so much
[is_accepted: false | Q_Id: 10,899,192 | Score: 0 | Users Score: 0]
Answer:
I think it will be easier for you to get a program like AutoIt.
[Title: Automating HTTP navigation and HTML printing using Python]
[Tags: javascript,python,html,automation | Created: 2012-06-05T14:26:00.000 | Topics: Data Science and Machine Learning, Networking and APIs]
[ViewCount: 773 | Q_Score: 2 | Available Count: 1 | AnswerCount: 3 | A_Id: 10,899,256]

---
Question:
I wrote a script that retrieves stock data on google finance and prints it out, nice and simple. It always worked, but since this morning I only get a page that tells me that I'm probably an automated script instead of the stock data. Of course, being a script, I can't pass the captcha. What can I do?
[is_accepted: false | Q_Id: 10,908,715 | Score: 0.197375 | Users Score: 1]
Answer:
Well, you have finally reached a quite challenging realm: decoding captchas. There do exist OCR approaches for decoding simple captchas, but none of them seem to work on Google's captcha. I have heard there are companies that provide manual captcha-decoding services; you could try one of those. ^_^ LOL. OK, to be serious: if Google doesn't want you to do it that way, then it is not going to be easy to decode those captchas. After all, why Google for finance data? There are a lot of other providers; try to scrape those websites instead.
[Title: Google Finance recognizes my Python script as a bot and blocks it]
[Tags: python,bots,google-finance | Created: 2012-06-06T05:42:00.000 | Topics: Data Science and Machine Learning, Networking and APIs]
[ViewCount: 1,173 | Q_Score: 1 | Available Count: 1 | AnswerCount: 1 | A_Id: 10,908,773]

---
Question:
I wrote a python script which downloads data via the Yahoo! Finance API and puts it into a file. After that, it uploads the file to Dropbox. The script does that every 10 minutes. How can I implement this at a minimal cost with a server? I don't want to let my computer run 24/7. Thank you in advance!
[is_accepted: true | Q_Id: 10,937,861 | Score: 1.2 | Users Score: 1]
Answer:
If the script is gonna run every 10 minutes, EC2 etc. won't do you any good since they are priced based on 15-minute time slices (and the server will always be used). The cheapest solution is a small VPS, which can be found for as little as 5$ / month from some providers. Install dropbox & python and you're good to go.
[Title: How to run a python script and upload the generated file to dropbox on a server?]
[Tags: python,dropbox | Created: 2012-06-07T18:39:00.000 | Topics: Networking and APIs]
[ViewCount: 729 | Q_Score: 0 | Available Count: 1 | AnswerCount: 2 | A_Id: 10,937,959]

---
Question:
XML is a good file format for storing documents: content with metadata. JSON is a good file format for storing data. Is there an analogous file format standard which is good at encoding operations? In other words, is there a standard file format which would be good for encoding small light-weight domain-specific languages? What I have in mind are simple DSLs consisting of only string data and no more than a dozen simple commands. My languages would consist of calling one command after another in a very simple manner (no conditionals or loops). Currently, I've used XML to encode a series of operations, where each tag represents a different command. A SAX parser dispatches each element as a function call. It's very difficult to look at; just doesn't feel like an elegant solution. Ideally, I'd be working in python and not writing my own parsers...trying to get the benefit of using an established standard file format. One fallback is to use python itself, but of course I'd prefer a language-neutral standard if one is to be found.
[is_accepted: false | Q_Id: 10,939,138 | Score: 0 | Users Score: 0]
Answer:
In the end, I implemented a simple interpreter in Python, using S-expressions. Parsers are easy to find online (about half a page of code) and implementing functions for the language can be made simple by use of function decorators.
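A sketch of what such a half-page interpreter can look like, assuming commands are plain atoms (the greet command is invented for illustration):

```python
import re

def parse(src):
    """Parse a tiny S-expression string into nested Python lists."""
    tokens = re.findall(r'\(|\)|[^\s()]+', src)
    def read(pos):
        if tokens[pos] == '(':
            out, pos = [], pos + 1
            while tokens[pos] != ')':
                node, pos = read(pos)
                out.append(node)
            return out, pos + 1
        return tokens[pos], pos + 1
    return read(0)[0]

COMMANDS = {}
def command(fn):
    # Decorator: register a function as a DSL command by name.
    COMMANDS[fn.__name__] = fn
    return fn

@command
def greet(name):
    print("hello", name)

def run(program):
    for name, *args in program:   # one command call after another, no control flow
        COMMANDS[name](*args)

run(parse("((greet world) (greet again))"))
```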
[Title: Domain Specific Language, Standard File Format]
[Tags: python,xml,json,standards,dsl | Created: 2012-06-07T20:11:00.000 | Topics: Python Basics and Environment, Networking and APIs]
[ViewCount: 615 | Q_Score: 1 | Available Count: 1 | AnswerCount: 3 | A_Id: 46,526,632]

---
Question:
I am trying to scrape a website, but the thing that I want to get is not in the source code. But it does appear when i use firebug. Is there a way to scrape from the firebug code as opposed to the source code?
[is_accepted: false | Q_Id: 10,942,469 | Score: 0 | Users Score: 0]
Answer:
If the answer's not in the source code (possibly obfuscated, encoded, etc), then it was probably retrieved after the page loaded with an XmlHTTPRequest. You can use the 'network' panel in Firebug to see what other pieces of data the page loaded, and what requests it made to load them. (You may have to enable the network panel and then reload the page/start over)
[Title: Scraping a website in python with firebug?]
[Tags: python,firebug,web-scraping | Created: 2012-06-08T02:44:00.000 | Topics: Data Science and Machine Learning, Networking and APIs]
[ViewCount: 1,907 | Q_Score: 0 | Available Count: 1 | AnswerCount: 2 | A_Id: 10,942,524]

---
Question:
In POSIX C we can use writev to write multiple arrays at once to a file descriptor. This is useful when you have to concatenate multiple buffers in order to form a single message to send through a socket (think of a HTTP header and body, for instance). This way I don't need to call send twice, once for the header and once for the body (what prevent the messages to be split in different frames on the wire), nor I need to concatenate the buffers before sending. My question is, is there a Python equivalent?
[is_accepted: false | Q_Id: 10,953,060 | Score: 0.197375 | Users Score: 2]
Answer:
Python supports os.writev() as well as socket.sendmsg(). These calls are atomic, so they are equivalent to calling write() and send() respectively with a concatenated buffer. There is also TCP_CORK: you can tell the kernel not to send partial frames until the socket is un-corked. With either technique you have some control over partial TCP frames.
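A minimal sketch of both gather-write calls on Python 3.3+ (POSIX only for os.writev; the buffers, host and file name are illustrative):

```python
import os
import socket

header = b"GET / HTTP/1.0\r\nHost: example.com\r\n"
body = b"\r\n"

# Gather-write on a socket: both buffers go down in a single call.
sock = socket.create_connection(("example.com", 80))
sock.sendmsg([header, body])

# Gather-write on a plain file descriptor.
fd = os.open("out.bin", os.O_WRONLY | os.O_CREAT, 0o644)
os.writev(fd, [header, body])
os.close(fd)
```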
[Title: Scatter/gather socket write in Python]
[Tags: python,sockets | Created: 2012-06-08T17:01:00.000 | Topics: Networking and APIs]
[ViewCount: 708 | Q_Score: 4 | Available Count: 1 | AnswerCount: 2 | A_Id: 37,355,354]

---
Question:
I am trying to implement a network protocol that listens on 2 separate TCP ports. One is for control messages and one is for data messages. I understand that I need two separate protocol classes since there are two ports involved. I would like to have one factory that creates both of these protocols since there is state information and data that is shared between them and they essential implement one protocol. Is this possible? If yes, how? If not, how can I achieve something similar? I understand that it is unusal to divide a protocol between 2 ports but that is the given situation. Thanks
[is_accepted: true | Q_Id: 10,999,627 | Score: 1.2 | Users Score: 0]
Answer:
Your factory's buildProtocol can return anything you want it to return. That's up to you. However, you might find that things are a lot simpler if you just use two different factories. That does not preclude sharing state. Just have them share a bunch of attributes, or collect all your state together onto a single new object and have the factories share that object.
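A minimal sketch of the shared-state-object approach the answer suggests (the ports and attribute names are invented):

```python
from twisted.internet import reactor
from twisted.internet.protocol import Factory, Protocol

class SharedState(object):
    """One object that both factories hold a reference to."""
    def __init__(self):
        self.sessions = {}

class ControlProtocol(Protocol):
    def dataReceived(self, data):
        # Factory.buildProtocol sets self.factory on each new protocol.
        self.factory.state.sessions[self.transport.getPeer()] = data

class DataProtocol(Protocol):
    def dataReceived(self, data):
        print(self.factory.state.sessions)  # sees what the control port stored

class ControlFactory(Factory):
    protocol = ControlProtocol
    def __init__(self, state):
        self.state = state

class DataFactory(Factory):
    protocol = DataProtocol
    def __init__(self, state):
        self.state = state

state = SharedState()
reactor.listenTCP(4000, ControlFactory(state))  # control port
reactor.listenTCP(4001, DataFactory(state))     # data port
reactor.run()
```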
[Title: A single factory for multiple protocols?]
[Tags: python,twisted | Created: 2012-06-12T15:14:00.000 | Topics: Networking and APIs]
[ViewCount: 153 | Q_Score: 0 | Available Count: 1 | AnswerCount: 1 | A_Id: 11,001,717]

---
Question:
I am trying to start some background processing through rabbitmq, but when I send the request, I get the error below in the rabbitmq log. I think I am providing the right credentials, as my celery workers are able to connect to the rabbitmq server using the same username/password combination.

    =ERROR REPORT==== 12-Jun-2012::20:50:29 ===
    exception on TCP connection from 127.0.0.1:41708
    {channel0_error,starting,
        {amqp_error,access_refused,
            "AMQPLAIN login refused: user 'guest' - invalid credentials",
            'connection.start_ok'}}
[is_accepted: false | Q_Id: 11,008,337 | Score: 0 | Users Score: 0]
Answer:
To resolve a connection problem with rabbitmq, inspect the points below:
- connectivity from the client machine to the rabbitmq server machine (in case client and server run on separate machines), including the port;
- credentials (username and password): a user must be onboarded into RabbitMQ before it can be used to connect;
- permissions must be granted to the user (permissions may be attached to a VHOST as well, so assign them carefully).
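A minimal sketch of connecting with explicit (non-guest) credentials using pika; the username, password and vhost are placeholders that must already exist on the broker:

```python
import pika

credentials = pika.PlainCredentials("myuser", "mypassword")
params = pika.ConnectionParameters(host="localhost", port=5672,
                                   virtual_host="/", credentials=credentials)
connection = pika.BlockingConnection(params)  # raises if the login is refused
channel = connection.channel()
```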
[Title: Rabbitmq connection issue when using a username and password]
[Tags: python,rabbitmq,celery | Created: 2012-06-13T04:47:00.000 | Topics: System Administration and DevOps, Networking and APIs]
[ViewCount: 3,691 | Q_Score: 2 | Available Count: 1 | AnswerCount: 2 | A_Id: 72,374,505]

---
Question:
How can I list files and folders if I only have an IP-address? With urllib and others, I am only able to display the content of the index.html file. But what if I want to see which files are in the root as well? I am looking for an example that shows how to implement username and password if needed. (Most of the time index.html is public, but sometimes the other files are not).
[is_accepted: false | Q_Id: 11,023,530 | Score: 0.158649 | Users Score: 4]
Answer:
HTTP does not work with "files" and "directories". Pick a different protocol.
[Title: Python to list HTTP-files and directories]
[Tags: python,html,directory,ip-address | Created: 2012-06-13T21:25:00.000 | Topics: Networking and APIs]
[ViewCount: 57,862 | Q_Score: 17 | Available Count: 2 | AnswerCount: 5 | A_Id: 11,023,595]

---
Question:
How can I list files and folders if I only have an IP-address? With urllib and others, I am only able to display the content of the index.html file. But what if I want to see which files are in the root as well? I am looking for an example that shows how to implement username and password if needed. (Most of the time index.html is public, but sometimes the other files are not).
[is_accepted: false | Q_Id: 11,023,530 | Score: 1 | Users Score: 13]
Answer:
You cannot get the directory listing directly via HTTP, as another answer says. It's the HTTP server that "decides" what to give you. Some will give you an HTML page displaying links to all the files inside a "directory", some will give you some page (index.html), and some will not even interpret the "directory" as one. For example, you might have a link to "http://localhost/user-login/": This does not mean that there is a directory called user-login in the document root of the server. The server interprets that as a "link" to some page. Now, to achieve what you want, you either have to use something other than HTTP (an FTP server on the "ip address" you want to access would do the job), or set up an HTTP server on that machine that provides for each path (http://192.168.2.100/directory) a list of files in it (in whatever format) and parse that through Python. If the server provides an "index of /bla/bla" kind of page (like Apache server do, directory listings), you could parse the HTML output to find out the names of files and directories. If not (e.g. a custom index.html, or whatever the server decides to give you), then you're out of luck :(, you can't do it.
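A minimal sketch of the parse-the-listing approach, assuming the server returns an Apache-style "Index of /" page; the regex is deliberately crude and the URL is illustrative:

```python
import re
import urllib.request

def list_links(url, user=None, password=None):
    """Fetch a server-generated index page and pull out its href targets."""
    if user is not None:
        mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, url, user, password)
        opener = urllib.request.build_opener(
            urllib.request.HTTPBasicAuthHandler(mgr))
    else:
        opener = urllib.request.build_opener()
    html = opener.open(url).read().decode("utf-8", "replace")
    # Skip absolute, parent and query links; good enough for plain listings.
    return re.findall(r'href="([^"?/][^"]*)"', html)

print(list_links("http://192.168.2.100/files/"))
```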
[Title: Python to list HTTP-files and directories]
[Tags: python,html,directory,ip-address | Created: 2012-06-13T21:25:00.000 | Topics: Networking and APIs]
[ViewCount: 57,862 | Q_Score: 17 | Available Count: 2 | AnswerCount: 5 | A_Id: 11,024,116]

---
Question:
Background - I am using paramiko to put files on a bunch of remote servers, running several different operating systems, and with no Python installed on the remote systems. I need to specify remote directories for where the file should be put. Because different operating systems specify paths differently, I wanted to use some module. I wanted to use os.path.join, but that gets its configuration from my local machine. Is there any way to specify the platform in one of the os module's methods, or something similar? EDIT: Also during ssh sessions with paramiko.
[is_accepted: true | Q_Id: 11,054,131 | Score: 1.2 | Users Score: 5]
Answer:
Usually all of the different path modules are included, os.path is just the one for your local machine. Import ntpath if you want to do Windows path manipulation, and posixpath for Unix path manipulation.
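For example (the paths are illustrative):

```python
import ntpath
import posixpath

# Build a path for the remote machine, not the one the script runs on.
posixpath.join("/var/log", "app", "out.log")  # -> '/var/log/app/out.log'
ntpath.join(r"C:\logs", "app", "out.log")     # -> 'C:\\logs\\app\\out.log'
```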
[Title: Join paths in Python given operating system]
[Tags: python | Created: 2012-06-15T15:50:00.000 | Topics: Python Basics and Environment, Networking and APIs]
[ViewCount: 173 | Q_Score: 3 | Available Count: 1 | AnswerCount: 1 | A_Id: 11,054,226]

---
Question:
I am building a web application as college project (using Python), where I need to read content from websites. It could be any website on internet. At first I thought of using Screen Scrapers like BeautifulSoup, lxml to read content(data written by authors) but I am unable to search content based upon one logic as each website is developed on different standards. Thus I thought of using RSS/ Atom (using Universal Feed Parser) but I could only get content summary! But I want all the content, not just summary. So, is there a way to have one logic by which we can read a website's content using lib's like BeautifulSoup, lxml etc? Or I should use API's provided by the websites. My job becomes easy if its a blogger's blog as I can use Google Data API but the trouble is, should I need to write code for every different API for the same job? What is the best solution?
[is_accepted: true | Q_Id: 11,061,135 | Score: 1.2 | Users Score: 7]
Answer:
Using the website's public API, when it exists, is by far the best solution. That is quite why the API exists, it is the way that the website administrators say "use our content". Scraping may work one day and break the next, and it does not imply the website administrator's consent to have their content reused.
[Title: Should I use Screen Scrapers or API to read data from websites]
[Tags: python,html,screen-scraping,web-scraping | Created: 2012-06-16T05:41:00.000 | Topics: Data Science and Machine Learning, Networking and APIs]
[ViewCount: 356 | Q_Score: 1 | Available Count: 1 | AnswerCount: 2 | A_Id: 11,061,160]

---
Question:
I would like to have a web server displaying the status of 2 of my python scripts. These scripts listen for incoming data on a specific port. What I would like to do is have it so that when the script is running the web server will return a HTTP200 and when the script is not running a 500. I have had a look at cherrypy and other such python web servers but I could not get them to run first and then while the web server is running continue with the rest of my code. I would like it so that if the script crashes so does the web server. Or a way for the web server to display say a blank webpage with just a 1 in the HTML if the script is running or a 0 if it is not. Any suggestions? Thanks in advance.
[is_accepted: false | Q_Id: 11,070,842 | Score: 0.197375 | Users Score: 2]
Answer:
I would break it apart further:
- script A, on port a;
- script B, on port b;
- web script C, which checks on A and B (by making simple requests to them) and returns the results in a machine-friendly format, i.e. JSON or XML;
- web page D, which calls C and formats the results for people, i.e. an HTML table.

There are existing programs which do this - Nagios springs to mind. A sketch of script C follows below.
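A minimal sketch of script C, assuming A and B listen on local ports 9001 and 9002 (both port numbers are invented):

```python
import json
import socket

def port_alive(host, port, timeout=2.0):
    # A script counts as "up" if its listening port accepts a connection.
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def status():
    return json.dumps({"script_a": port_alive("127.0.0.1", 9001),
                       "script_b": port_alive("127.0.0.1", 9002)})

print(status())
```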
[Title: Calling a python web server within a script]
[Tags: python,web-services,monitoring,status,cherrypy | Created: 2012-06-17T11:09:00.000 | Topics: Data Science and Machine Learning, Networking and APIs]
[ViewCount: 268 | Q_Score: 0 | Available Count: 1 | AnswerCount: 2 | A_Id: 11,071,410]

---
Question:
I need to use python-rest-client package into my project. I tried several times for installing python-rest-client into my linux python, it never worked. But it works well in Windows python. Would anybody tell me how to install python-rest-client in linux python.
[is_accepted: true | Q_Id: 11,081,209 | Score: 1.2 | Users Score: 1]
Answer:
avasal, you were right. I did it by pip install python-rest-client
[Title: how to install python-rest-client lib in linux]
[Tags: python,json,rest | Created: 2012-06-18T10:45:00.000 | Topics: System Administration and DevOps, Networking and APIs]
[ViewCount: 1,855 | Q_Score: 1 | Available Count: 1 | AnswerCount: 1 | A_Id: 11,184,777]

---
Question:
What's the best way to download a python package and its dependencies from pypi for offline installation on another machine? Is there any easy way to do this with pip or easy_install? I'm trying to install the requests library on a FreeBSD box that is not connected to the internet.
[is_accepted: false | Q_Id: 11,091,623 | Score: 1 | Users Score: 412]
Answer:
On the system that has access to internet, the pip download command lets you download packages without installing them:

    pip download -r requirements.txt

(In previous versions of pip, this was spelled pip install --download -r requirements.txt.)

On the system that has no access to internet, you can then use

    pip install --no-index --find-links /path/to/download/dir/ -r requirements.txt

to install those downloaded modules, without accessing the network.
[Title: How to install packages offline?]
[Tags: python,pip,freebsd,easy-install,python-requests | Created: 2012-06-18T21:51:00.000 | Topics: Python Basics and Environment, Networking and APIs]
[ViewCount: 394,353 | Q_Score: 256 | Available Count: 2 | AnswerCount: 12 | A_Id: 14,447,068]

---
Question:
What's the best way to download a python package and its dependencies from pypi for offline installation on another machine? Is there any easy way to do this with pip or easy_install? I'm trying to install the requests library on a FreeBSD box that is not connected to the internet.
[is_accepted: false | Q_Id: 11,091,623 | Score: 0.049958 | Users Score: 3]
Answer:
For Pip 8.1.2 you can use pip download -r requ.txt to download packages to your local machine.
[Title: How to install packages offline?]
[Tags: python,pip,freebsd,easy-install,python-requests | Created: 2012-06-18T21:51:00.000 | Topics: Python Basics and Environment, Networking and APIs]
[ViewCount: 394,353 | Q_Score: 256 | Available Count: 2 | AnswerCount: 12 | A_Id: 40,603,985]

---
Question:
Can somebody skilled look at Routes library in python and tell me why there is not python 3 support? I need Routes functionality for use with Cherrypy on python 3. And I am curious if it will be better to try to port Routes to python 3 or write my own dispatcher for python 3 from scratch. I know some porting basics for python 2to3, but if there are any significant problems or drawbacks (other than method names, syntax etc), I would like to know them before I start working on the port. Thank you very much for any tips! Edit: do not understand me incorrectly! I am not lazy to check it by myself, but there are some aspects that i will not discover until I try it. And maybe, somebody here tried it before :-)
[is_accepted: true | Q_Id: 11,102,659 | Score: 1.2 | Users Score: 0]
Answer:
I tried to port Routes to python 3. After one day of work, I got all the unit tests to pass, but the code was getting ugly and I was not successful in using the ported Routes with Cherrypy (probably something that is not covered by the unit tests). I was not patient enough to debug it. So I decided to write my own version, which I will maybe share as open source. It is not ready yet; I will update this answer later (for anyone interested :-). Thanks to the commenters above.
[Title: Routes library for python 3]
[Tags: python,python-3.x,routes,porting | Created: 2012-06-19T14:02:00.000 | Topics: Networking and APIs]
[ViewCount: 369 | Q_Score: 0 | Available Count: 1 | AnswerCount: 1 | A_Id: 11,502,743]

---
Question:
Is there a way to find out the current number of connection attempts awaiting accept() on a TCP socket on Linux? I suppose I could count the number of accepts() that succeed before hitting EWOULDBLOCK on each event loop, but I'm using a high-level library (Python/Twisted) that hides these details. Also it's using epoll() rather than an old-fashioned select()/poll() loop. I am trying to get a general sense of the load on a high-performance non-blocking network server, and I think this number would be a good characterization. Load average/CPU statistics aren't helping much, because I'm doing a lot of disk I/O in concurrent worker processes. Most of these stats on Linux count time spent waiting on disk I/O as part of the load (which it isn't, for my particular server architecture). Latency between accept() and response isn't a good measure either, since each request usually gets processed very quickly once the server gets around to it. I'm just trying to find out how close I am to reaching a breaking point where the server can't dispatch requests faster than they are coming in.
[is_accepted: false | Q_Id: 11,126,372 | Score: 0.132549 | Users Score: 2]
Answer:
There is no function for this in the BSD Sockets API that I have ever seen. I question whether it is really a useful measure of load. You are assuming no connection pooling by clients, for one thing, and you are also assuming that latency is entirely manifested as pending connections. But as you can't get the number anyway the point is moot.
[Title: Determine the current number of backlogged connections in TCP listen() queue]
[Tags: python,linux,sockets,tcp,twisted | Created: 2012-06-20T19:00:00.000 | Topics: System Administration and DevOps, Networking and APIs]
[ViewCount: 3,268 | Q_Score: 8 | Available Count: 1 | AnswerCount: 3 | A_Id: 11,127,146]

---
Question:
How can I read from and write to my Galaxy Nexus phone, using MTP over a USB cable in python? I'm on a windows 7 computer.
[is_accepted: false | Q_Id: 11,161,747 | Score: -1 | Users Score: -4]
Answer:
Simply connecting a USB cable between the phone and computer should work. It may be necessary to enable MTP transfers in the settings menu on your phone; the menu location is likely to differ between android versions and phone models. Try a google search for "galaxy nexus enable mtp", and make sure to include your android and phone version in the search. Make sure it is a good quality USB cable: poor quality cables will not make a good connection and therefore will not work reliably. A file management dialog comes up immediately on my desktop after hooking up a usb cable between my phone and laptop, showing both the phone internal storage and SD card. This allows me to transfer files both ways directly to the phone SD storage (Linux Mint <-> LG Android ver. 5.1). Note that it is also possible to transfer files using Bluetooth. After establishing a connection, you would need to find the device name; then it would be possible to open the device using standard python file constructs, i.e. popen(), etc.
[Title: How to access an MTP USB device with python]
[Tags: python,mtp | Created: 2012-06-22T18:03:00.000 | Topics: Python Basics and Environment, Networking and APIs]
[ViewCount: 15,627 | Q_Score: 25 | Available Count: 2 | AnswerCount: 3 | A_Id: 46,545,449]

---
Question:
How can I read from and write to my Galaxy Nexus phone, using MTP over a USB cable in python? I'm on a windows 7 computer.
[is_accepted: true | Q_Id: 11,161,747 | Score: 1.2 | Users Score: 5]
Answer:
One way to do this would be to install ADB (android debugging bridge, part of the SDK) and launch it as a child process from python. ADB can be used to, among other things, read from or write to, an android device.
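A minimal sketch of driving adb from Python; it assumes the adb binary from the Android SDK is on PATH, USB debugging is enabled on the phone, and the file paths are illustrative:

```python
import subprocess

def adb(*args):
    # Run one adb command and return its stdout, raising on a non-zero exit.
    return subprocess.check_output(("adb",) + args)

adb("pull", "/sdcard/DCIM/photo.jpg", "photo.jpg")  # device -> PC
adb("push", "notes.txt", "/sdcard/notes.txt")       # PC -> device
```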
[Title: How to access an MTP USB device with python]
[Tags: python,mtp | Created: 2012-06-22T18:03:00.000 | Topics: Python Basics and Environment, Networking and APIs]
[ViewCount: 15,627 | Q_Score: 25 | Available Count: 2 | AnswerCount: 3 | A_Id: 11,254,179]

---
Question:
I need to make a copy of a socket module to be able to use it and to have one more socket module monkey-patched and use it differently. Is this possible? I mean to really copy a module, namely to get the same result at runtime as if I've copied socketmodule.c, changed the initsocket() function to initmy_socket(), and installed it as my_socket extension.
[is_accepted: false | Q_Id: 11,170,949 | Score: 0.049958 | Users Score: 1]
Answer:
Physically copy the socket module to socket_monkey and go from there? I don't feel you need any "clever" work-around... but I might well be over simplifying!
[Title: How to make a copy of a python module at runtime?]
[Tags: python,python-c-extension | Created: 2012-06-23T15:59:00.000 | Topics: Networking and APIs]
[ViewCount: 7,543 | Q_Score: 39 | Available Count: 1 | AnswerCount: 4 | A_Id: 11,171,590]

---
Question:
I'd like to invoke PySocketModule_ImportModuleAndAPI function defined in socketmodule.h in my Python C-extension.
[is_accepted: false | Q_Id: 11,172,215 | Score: 0 | Users Score: 0]
Answer:
I had a similar problem and resolved it by:
1. adding the Modules\ directory (from your Python source) to the C/C++ Additional Include Directories;
2. #include "socketmodule.h"

Don't know if this is the best solution, but it worked for me!
[Title: Is it possible to include socketmodule.h in Python C extensions?]
[Tags: python-c-extension,python-extensions | Created: 2012-06-23T19:07:00.000 | Topics: Other, Networking and APIs]
[ViewCount: 142 | Q_Score: 1 | Available Count: 1 | AnswerCount: 2 | A_Id: 20,085,116]

---
Question:
I have a design problem: I have two threads, a heartbeat/control thread and a message-handler thread. Both share the same socket; however, the message-handler thread only sends messages and never receives, while the heartbeat thread sends and receives (it receives messages and reacts to heartbeats). The problem is I'm not sure if this is safe; there is no mechanism I implemented myself to see if the socket is being used. So is a socket shared between python threads automatically thread safe or not? Also, if it's not: the reason I put them in separate threads is that the heartbeat is more important than the message handling. This means that if the program gets flooded with messages, it still needs to send a heartbeat. So if I have to implement a bolt, is there a way I can prioritize things so my heartbeat/control thread can always send a heartbeat?
[is_accepted: false | Q_Id: 11,177,018 | Score: 0 | Users Score: 0]
Answer:
I don't know of a way to prioritize at the Python level. So I'd suggest using 2 processes, not threads, and prioritize at the OS level. On Unix you can use os.nice() to do that. You'd need to use 2 sockets then, and your sharing problem would be solved at the same time.
[Title: Python: Socket and threads?]
[Tags: python,multithreading,sockets | Created: 2012-06-24T11:21:00.000 | Topics: Web Development, Networking and APIs]
[ViewCount: 22,734 | Q_Score: 14 | Available Count: 3 | AnswerCount: 4 | A_Id: 11,177,059]

---
Question:
I have a design problem: I have two threads, a heartbeat/control thread and a message-handler thread. Both share the same socket; however, the message-handler thread only sends messages and never receives, while the heartbeat thread sends and receives (it receives messages and reacts to heartbeats). The problem is I'm not sure if this is safe; there is no mechanism I implemented myself to see if the socket is being used. So is a socket shared between python threads automatically thread safe or not? Also, if it's not: the reason I put them in separate threads is that the heartbeat is more important than the message handling. This means that if the program gets flooded with messages, it still needs to send a heartbeat. So if I have to implement a bolt, is there a way I can prioritize things so my heartbeat/control thread can always send a heartbeat?
[is_accepted: false | Q_Id: 11,177,018 | Score: 0 | Users Score: 0]
Answer:
If both threads are client threads, it is a good idea to open two client sockets to the server: one for the heartbeat and another for communication.
[Title: Python: Socket and threads?]
[Tags: python,multithreading,sockets | Created: 2012-06-24T11:21:00.000 | Topics: Networking and APIs]
[ViewCount: 22,734 | Q_Score: 14 | Available Count: 3 | AnswerCount: 4 | A_Id: 11,177,130]

---
Question:
I have a design problem: I have two threads, a heartbeat/control thread and a message-handler thread. Both share the same socket; however, the message-handler thread only sends messages and never receives, while the heartbeat thread sends and receives (it receives messages and reacts to heartbeats). The problem is I'm not sure if this is safe; there is no mechanism I implemented myself to see if the socket is being used. So is a socket shared between python threads automatically thread safe or not? Also, if it's not: the reason I put them in separate threads is that the heartbeat is more important than the message handling. This means that if the program gets flooded with messages, it still needs to send a heartbeat. So if I have to implement a bolt, is there a way I can prioritize things so my heartbeat/control thread can always send a heartbeat?
[is_accepted: true | Q_Id: 11,177,018 | Score: 1.2 | Users Score: 10]
Answer:
Unfortunately, a socket shared by multiple threads is not thread safe: think of a buffer two threads operate on with no lock. The normal way to implement this is with two sockets, just like FTP does: a cmd socket and a msg socket. If you want to implement it over a single socket, you can put the different message types into different queues and have a third thread consume the queues and send everything through the one socket. That way you can give heartbeat messages priority over data messages.
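A minimal sketch of that queue-plus-sender-thread design, assuming a priority queue where lower numbers win (the host and port are placeholders):

```python
import queue
import socket
import threading

HEARTBEAT, DATA = 0, 1           # lower number = higher priority
outbox = queue.PriorityQueue()

def sender(sock):
    while True:
        _prio, _seq, payload = outbox.get()
        sock.sendall(payload)    # only this thread ever writes to the socket

_seq = 0
def send(priority, payload):
    global _seq
    _seq += 1                    # tie-breaker so payloads are never compared
    outbox.put((priority, _seq, payload))

sock = socket.create_connection(("server.example", 9000))
threading.Thread(target=sender, args=(sock,), daemon=True).start()
send(DATA, b"bulk message")
send(HEARTBEAT, b"ping")         # jumps ahead of any queued data messages
```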
[Title: Python: Socket and threads?]
[Tags: python,multithreading,sockets | Created: 2012-06-24T11:21:00.000 | Topics: Networking and APIs]
[ViewCount: 22,734 | Q_Score: 14 | Available Count: 3 | AnswerCount: 4 | A_Id: 11,177,260]

---
Question:
What would be the simplest way to check a web page for changes? I want to scan a web page every so often, and compare it to an older scan. One problem is I also need the scan to ignore certain changes, such as the time of day, etc. I only want to check for relevant updates.
[is_accepted: false | Q_Id: 11,182,825 | Score: 0.664037 | Users Score: 4]
Answer:
I won't write code, but I'll give you the process I'd go through to solve this problem:
1. Retrieve the source of the page.
2. Replace out all of the parts of the page that we don't care to monitor.
3. Calculate an md5 or sha1 hash of the source after the replacements are made.
4. Compare the hash with the stored hash; if it's different, do whatever you need to do for an updated page.
5. Store the new hash.
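A minimal sketch of that process, assuming the volatile parts can be matched with regexes (the patterns and URL are illustrative):

```python
import hashlib
import re
import urllib.request

VOLATILE = [re.compile(r"\d{2}:\d{2}:\d{2}"),    # clocks / times of day
            re.compile(r"<!--.*?-->", re.S)]     # comments, trackers, etc.

def fingerprint(url):
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    for pattern in VOLATILE:        # 2. strip the parts we don't monitor
        html = pattern.sub("", html)
    return hashlib.sha1(html.encode("utf-8")).hexdigest()  # 3. hash

old = fingerprint("https://example.com")
# ... some time later ...
if fingerprint("https://example.com") != old:   # 4. compare
    print("page changed")                       # 5. then store the new hash
```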
[Title: Detecting web page updates with python]
[Tags: python,html,parsing,text,web | Created: 2012-06-25T02:24:00.000 | Topics: Networking and APIs]
[ViewCount: 2,212 | Q_Score: 1 | Available Count: 1 | AnswerCount: 1 | A_Id: 11,182,855]

---
Question:
I need a url to use in a template. Now, there are two ways of storing the url and using it again in python, I guess: one is using a session to store that URL and get it later whenever we need it, and the second is using cookies to store that URL and get it later. Which method is more appropriate in terms of security? Is there any other method in python which is better for storing the url and using it later, and which is more secure? With cookies somebody can easily change the information, I guess; with sessions somebody can also hijack it and make changes...
[is_accepted: false | Q_Id: 11,188,725 | Score: 0 | Users Score: 0]
Answer:
In terms of security, you should store it in session. If it's in cookie, the client can modify your url to whatever he wants.
[Title: Storing URL into cookies or session?]
[Tags: python,django,http | Created: 2012-06-25T11:43:00.000 | Topics: Other, Networking and APIs]
[ViewCount: 276 | Q_Score: 0 | Available Count: 2 | AnswerCount: 2 | A_Id: 11,188,777]

---
Question:
I need a url to use in a template. Now, there are two ways of storing the url and using it again in python, I guess: one is using a session to store that URL and get it later whenever we need it, and the second is using cookies to store that URL and get it later. Which method is more appropriate in terms of security? Is there any other method in python which is better for storing the url and using it later, and which is more secure? With cookies somebody can easily change the information, I guess; with sessions somebody can also hijack it and make changes...
[is_accepted: true | Q_Id: 11,188,725 | Score: 1.2 | Users Score: 0]
Answer:
I don't think "session hijacking" means what you think it means. The only thing someone can do with session hijacking is impersonate a user. The actual session data is stored on the back end (eg in the database), so if you don't give the user access to that particular data then they can't change it, whether they're the actual intended user or someone impersonating that user. So, the upshot of this is, store it in the session. Edit after comment Well, you'd better not allow any information to be sent to your server then, and make your website browse-only. Seriously, I don't see why "session data" is any less secure than anything else. You are being unreasonably paranoid. If you want to store data, you need to get that data from somewhere, either from a calculation on the server side, or from user submissions. If you can't calculate this specific URL on the server side, it needs to come from the user. And then you need to store it on the server against the particular user. I don't see what else you want to do.
[Title: Storing URL into cookies or session?]
[Tags: python,django,http | Created: 2012-06-25T11:43:00.000 | Topics: Other, Networking and APIs]
[ViewCount: 276 | Q_Score: 0 | Available Count: 2 | AnswerCount: 2 | A_Id: 11,188,963]

---
Question:
I am running a Graphite server to monitor instruments at remote locations. I have a "perpetual" ssh tunnel to the machines from my server (loving autossh) to map their local ports to my server's local port. This works well, data comes through with no hasstles. However we use a flaky satellite connection to the sites, which goes down rather regularly. I am running a "data crawler" on the instrument that is running python and using socket to send packets to the Graphite server. The problem is, if the link goes down temporarily (or the server gets rebooted, for testing mostly), I cannot re-establish the connection to the server. I trap the error, and then run socket.close(), and then re-open, but I just can't re-establish the connection. If I quit the python program and restart it, the connection comes up just fine. Any ideas how I can "refresh" my socket connection?
[is_accepted: false | Q_Id: 11,190,243 | Score: 0 | Users Score: 0]
Answer:
It's hard to answer this correctly without a code sample. However, it sounds like you might be trying to reuse a closed socket, which is not possible. If the socket has been closed (or has experienced an error), you must re-create a new connection using a new socket object. For this to work, the remote server must be able to handle multiple client connections in its accept() loop.
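A minimal sketch of a reconnect loop that builds a fresh socket object on every attempt; the host name is a placeholder, while 2003 and the "metric value timestamp" line format are Graphite's plaintext protocol:

```python
import socket
import time

def connect_forever(host, port, retry=10):
    # A closed socket object can never be reused; always create a new one.
    while True:
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError:
            time.sleep(retry)

sock = connect_forever("graphite.example", 2003)
while True:
    try:
        sock.sendall(b"instruments.site1.temp 21.5 1339000000\n")
    except OSError:
        sock.close()
        sock = connect_forever("graphite.example", 2003)  # new object
    time.sleep(10)
```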
[Title: python - can't restart socket connection from client if server becomes unavailable temporarily]
[Tags: python,sockets,ssh-tunnel,graphite | Created: 2012-06-25T13:24:00.000 | Topics: Networking and APIs]
[ViewCount: 1,295 | Q_Score: 0 | Available Count: 1 | AnswerCount: 1 | A_Id: 11,214,979]

---
Question:
What is the way to do urlopen in python such that even if the underlying machine has ipv6 networking enabled, the request is sent via ipv4 instead of ipv6?
[is_accepted: false | Q_Id: 11,231,244 | Score: 0.197375 | Users Score: 1]
Answer:
I had a look into the source code. Unfortunately, urllib.urlopen() seems to use httplib.HTTP(), which doesn't even allow setting a source address. urllib2.urlopen() uses httplib.HTTPConnection() which you could inherit from and create a class which by default sets a source address '0.0.0.0' instead of ''. Then you could somehow inject that new overridden class into the urllib2 stuff by creating a "new" HTTPHandler() (look how it's done in urllib2.py) and a new opener which you build_opener() and/or install_opener(). Sorry for not being very exact, but I never have done such a thing and don't know exactly how that works.
[Title: how to do urlopen over ipv4 by default]
[Tags: python,ipv6,urllib,ipv4 | Created: 2012-06-27T16:48:00.000 | Topics: Networking and APIs]
[ViewCount: 2,773 | Q_Score: 2 | Available Count: 1 | AnswerCount: 1 | A_Id: 11,231,476]

---
Question:
I'm making a simulator for a digital radio using Python. The radio relays over RF one each of an RS-232 port and an Ethernet port, with a pair of radios making seamless pipes. Thus, the simulator will be used in pairs, with pipes between them simulating the RF link, permitting users to connect to each end using physical interfaces, virtual interfaces, or tunnels. For the RF serial port, I'm using PySerial and virtual serial ports to make the simulator as versatile as possible: I can connect the simulator to either a physical serial port, to a network socket, or to another local program. Aside from the tools used to create the virtual serial ports on each different OS, this approach is completely cross-platform. I'd like the simulator to be able to network with a local program via a virtual interface, with a remote program via a shared network interface, and with a remote program via a local physical interface that would be dedicated to the simulator. But so far, I haven't found a straightforward way to do this. I've been looking at SLIP/PPP, TAP/DUN, pcap/SOCK_RAW, and other possibilities, and I see no obvious or general solution. The key difficulty seems to be that this involves an entire Ethernet interface, below the IP level, at the level of the Ethernet protocol itself: If it were only a few ports, the solution would be relatively simple. Or am I missing something blindingly obvious? How do I use Python to create and use an RF Ethernet interface in a way that is as versatile as the RF Serial interface solution?
[is_accepted: true | Q_Id: 11,234,197 | Score: 1.2 | Users Score: 3]
Answer:
The massive number of answers people posted encouraged me to think outside of the box. My approach will be to use Dummynet, a truly amazing and versatile tool. Unfortunately the Dummynet Windows and Linux ports are not well-maintained, which means I'll be running *BSD. But this simplifies things, since a *BSD image can also be run as a VM, which greatly simplifies dealing with virtual interfaces. And if I'm concerned about size, I can use picoBSD or nanoBSD to craft a tiny tailored system for my simulator.
[Title: Network interface simulation in Python?]
[Tags: python,networking,interface,simulation | Created: 2012-06-27T20:16:00.000 | Topics: Networking and APIs]
[ViewCount: 2,780 | Q_Score: 6 | Available Count: 1 | AnswerCount: 2 | A_Id: 11,315,321]

---
Question:
Am I correct in that ElementTree does not support DTD or XSD? Is there a means of plugging anything into ElementTree to support validation, preferrably via XML Schema?
[is_accepted: false | Q_Id: 11,300,979 | Score: 0.197375 | Users Score: 1]
Answer:
I would try using the lxml library; it supports etree representations and validation.
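A minimal example of XSD validation with lxml (the file names are placeholders):

```python
from lxml import etree

schema = etree.XMLSchema(etree.parse("order.xsd"))  # compile the schema
doc = etree.parse("order.xml")

if not schema.validate(doc):
    for error in schema.error_log:
        print(error.line, error.message)
```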
[Title: Validating XML parsed with ElementTree, possible?]
[Tags: python,xml,xsd,dtd,elementtree | Created: 2012-07-02T20:51:00.000 | Topics: Networking and APIs]
[ViewCount: 785 | Q_Score: 1 | Available Count: 1 | AnswerCount: 1 | A_Id: 11,301,443]

---
Question:
I have the following problem. I have a Server application that accepts socket connections wrapped over SSL. The client sends a user and password over it; once on the Server, I check if the user/password is correct. If the user/password is wrong, I want the Server to send the Client a socket.error. Right now the only idea that comes to mind is sending back to the client "Wrong Password", but I think it is safer to use the built-in errors; this way I can wrap the code in a try/except statement. Is there any way to send a socket error from the Server to the Client?
[is_accepted: true | Q_Id: 11,318,320 | Score: 1.2 | Users Score: 0]
Answer:
The easiest method would be to send back some sort of return code and when the client sees the return code from the server, it would throw the exception itself.
[Title: Raising socket.error from client to server]
[Tags: python | Created: 2012-07-03T20:02:00.000 | Topics: Networking and APIs]
[ViewCount: 227 | Q_Score: 0 | Available Count: 1 | AnswerCount: 1 | A_Id: 11,318,363]

---
Question:
I'm using python 2.7 and paramiko library. client app running on window sends ssh commands to server app running on linux. when I send vi command, I get the response <-[0m<-[24;2H<-[K<-[24;1H<-[1m~<-[0m<-[25;2H.... I don't know what these characters mean and how I process it. I'm struggling for hours, please help me.
[is_accepted: false | Q_Id: 11,342,314 | Score: 0 | Users Score: 0]
Answer:
Those look like ANSI/VT100 terminal control codes, which suggests that something which thinks it is attached to a terminal is sending them, but they are being received by something which doesn't know what to do with them. Google for 'VT100 control codes' to learn what you want to know.
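If you do need to feed such output through a script, a sketch of stripping the CSI escape sequences with a regex (the sample string is made up):

```python
import re

# ESC [ ... final-letter covers colour and cursor-movement sequences.
ANSI_RE = re.compile(r"\x1b\[[0-9;?]*[A-Za-z]")

raw = "\x1b[0m\x1b[24;2H\x1b[K\x1b[24;1H\x1b[1m~\x1b[0m"
print(ANSI_RE.sub("", raw))   # -> '~'
```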
[Title: vi command returns error format data?]
[Tags: python,linux,paramiko | Created: 2012-07-05T10:21:00.000 | Topics: Other, Networking and APIs]
[ViewCount: 223 | Q_Score: 3 | Available Count: 1 | AnswerCount: 1 | A_Id: 11,360,629]

---
Question:
I am trying to write an iPhone application that uses a Python server. The iPhone application will send an HTTP request to the server, which should then respond by sending back a file that is on the server. What is the best way to do this? Thanks.
[is_accepted: false | Q_Id: 11,353,206 | Score: 0 | Users Score: 0]
Answer:
If you're literally just serving content (ie, not doing any calculations or look-ups), then use the nginx webserver to serve it based on URL.
[Title: How to respond to an HTTP request using Python]
[Tags: python,ios,webserver,download | Created: 2012-07-05T22:06:00.000 | Topics: Data Science and Machine Learning, Networking and APIs]
[ViewCount: 750 | Q_Score: 1 | Available Count: 1 | AnswerCount: 4 | A_Id: 11,353,286]

---
Question:
I am a novice python programmer and I am having troubles finding a tool to help me get a form from a javascript. I have written a small script in python and also have a simple interface done in javascript. The user needs to select a few items in the browser and the javascript then returns a sendForm(). I would like to then to recover the form with my python script. I know I could generate an xml file with javascript and tell my python script to wait until its creation and then catch it (with a os.path.exist(..)) but i would like to avoid this. I have seen that libraries such as cgi, mechanize, pyjs,(selenium?) exist to interface python and html,javascript but I can't find which one to use or if there would be another tool that would handle recovering the form easily. More info: the python script generates an xml which is read by javascript. The user selects items in the javascript (with checkboxes) which are then tagged in the xml by javascript. The javascript then outputs the modified xml in a hidden field and it is this modified xml that I wish to retrieve with my python script after it is created. Thank you all a lot for your help
[is_accepted: false | Q_Id: 11,383,796 | Score: 0 | Users Score: 0]
Answer:
Is your system a web application? If so, your javascript can post to the python back-end using ajax. You can encode the form as a JSON string and send it to the back-end, where you can parse that string into a Python variable. Javascript by itself does not have access to your local files unless you run it locally (and even then it is really limited). I suggest you try a web framework like Django; it's easy to learn in one day.
[Title: getting javascript form content with python]
[Tags: javascript,python,xml,forms | Created: 2012-07-08T14:14:00.000 | Topics: Data Science and Machine Learning, Networking and APIs]
[ViewCount: 453 | Q_Score: 2 | Available Count: 1 | AnswerCount: 2 | A_Id: 11,383,885]

---
Question:
I have an ejabberd server at jabber.domain.com, with an xmpp component written in python (using sleekxmpp) at presence.domain.com. I wanted the component to get a notification each time a client changed his presence from available to unavailable and vice-versa. The clients themselves don't have any contacts. Currently, I have set up my clients to send their available presence stanzas to admin@presence.domain.com, and I do get their online/offline presence notifications. But I feel this isn't the right approach. I was hoping the clients wouldn't be aware of the component at presence.domain.com, and they would just connect to jabber.domain.com and the component should somehow get notified by the server about the clients presence. Is there a way to do that? Is my component setup correct? or should I think about using an xmpp plugin/module/etc.. Thanks
[is_accepted: false | Q_Id: 11,392,302 | Score: 0 | Users Score: 0]
Answer:
It is possible for a component to subscribe to a user's presence exactly the same way a user does. Also it is possible for the user to subscribe to a component's presence. You just have to follow the usual pattern, i.e. the component/user sends a <presence/> of type subscribe which the user/component can accept by sending a <presence/> of type subscribed. You can also have the user just send a presence to the component directly. There is no need to write custom hooks or create proxy users.
[Title: Getting ejabberd to notify an external module on client presence change]
[Tags: python,xmpp,ejabberd | Created: 2012-07-09T09:26:00.000 | Topics: Networking and APIs]
[ViewCount: 1,392 | Q_Score: 1 | Available Count: 2 | AnswerCount: 2 | A_Id: 11,941,846]

---
Question:
I have an ejabberd server at jabber.domain.com, with an xmpp component written in python (using sleekxmpp) at presence.domain.com. I wanted the component to get a notification each time a client changed his presence from available to unavailable and vice-versa. The clients themselves don't have any contacts. Currently, I have set up my clients to send their available presence stanzas to admin@presence.domain.com, and I do get their online/offline presence notifications. But I feel this isn't the right approach. I was hoping the clients wouldn't be aware of the component at presence.domain.com, and they would just connect to jabber.domain.com and the component should somehow get notified by the server about the clients presence. Is there a way to do that? Is my component setup correct? or should I think about using an xmpp plugin/module/etc.. Thanks
[is_accepted: true | Q_Id: 11,392,302 | Score: 1.2 | Users Score: 5]
Answer:
It is not difficult to write a custom ejabberd module for this. It will need to register to presence change hooks in ejabberd, and on each presence packet route a notification towards your external component. There is a pair of hooks 'set_presence_hook' and 'unset_presence_hook' that your module can register to, to be informed when the users starts/end a session. If you need to track other presence statuses, there is also a hook 'c2s_update_presence' that fires on any presence packets sent by your users. Other possibility, without using a custom module, is using shared rosters. Add admin@presence.domain.com to the shared rosters of all your users, but in this case they will see this item reflected on their roster.
0
1,392
0
1
2012-07-09T09:26:00.000
python,xmpp,ejabberd
Getting ejabberd to notify an external module on client presence change
1
2
2
11,926,839
0
0
0
Can anyone please tell me how to find the default x-offset and y-offset values of a slider in a webpage, using Python with Selenium WebDriver. Thanks in advance!
false
11,411,182
0
0
0
0
You can use the get_attribute(name) method on a webelement to retrieve attributes.
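A minimal sketch of how that looks in the Python bindings; the element id and attribute names below are assumptions, since they depend entirely on how the page implements its slider:

    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get('http://example.com/page-with-slider')  # hypothetical URL
    slider = driver.find_element_by_id('slider')       # hypothetical element id
    # the attribute names depend on how the slider widget exposes its offsets
    x_offset = slider.get_attribute('x-offset')
    y_offset = slider.get_attribute('y-offset')
    print(x_offset, y_offset)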
0
636
0
0
2012-07-10T10:17:00.000
python,webdriver
How to find x and y-offset for slider in python for a web-application
1
1
1
11,412,106
0
1
0
In my Pylons project, when I do request.accept_language.best_matches(), it returns None. I have set 2 languages in the browser (en-us and es-ar) by going to Preferences > Content > Languages in Firefox. How can I get the languages specified in the browser? repr(request.accept_language) gives <NilAccept: <class 'webob.acceptparse.Accept'>>
false
11,419,922
0.379949
0
0
2
Try looking at request.headers['accept-language'], or indeed the entire request.headers object. I suspect your browser is not providing those headers. Also, take a look at the browser request in wireshark, and the client request on the server.
0
519
0
0
2012-07-10T18:50:00.000
python,pylons
request.accept_language is always null in python
1
1
1
11,420,889
0
0
0
I'm using PyDev in Eclipse with Python 2.7 on Windows 7. I installed networkx and it runs properly within the Python shell, but Eclipse shows an error because it is unable to locate networkx. Can anyone tell me how to remove this error?
false
11,421,476
0
0
0
0
I think there are two options: rebuild your interpreter, or add it to your Python path by appending the location of networkx to sys.path; a sketch of the second option is below.
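For the second option, a minimal sketch (the site-packages path is an assumption; point it at wherever networkx actually landed on your machine):

    import sys
    # path is an assumption; adjust to your installation
    sys.path.append('C:/Python27/Lib/site-packages')
    import networkx as nx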
0
1,086
0
3
2012-07-10T20:32:00.000
python,eclipse,networkx
Integrate networkx in eclipse on windows
1
2
2
11,421,571
0
0
0
I'm using PyDev in Eclipse with Python 2.7 on Windows 7. I installed networkx and it runs properly within the Python shell, but Eclipse shows an error because it is unable to locate networkx. Can anyone tell me how to remove this error?
true
11,421,476
1.2
0
0
4
You need to rebuild your interpreter: go to project > properties > pyDev-Interpreter/Grammar, click "click here to configure", remove the existing interpreter, then hit the "Auto config" button and follow the prompts. Kind of a pain, but it's the only way I've found to auto-discover newly installed packages.
0
1,086
0
3
2012-07-10T20:32:00.000
python,eclipse,networkx
Integrate networkx in eclipse on windows
1
2
2
11,421,549
0
0
0
I am trying to make a rename feature for a TreeCtrl, where a TextCtrl goes over the element and I am able to rename it. I can't find this feature in the API, but I am sure that there has to be some way to do it.
true
11,431,593
1.2
0
0
0
I assume you want to rename tree elements (leaves), right? Well when you instantiate the TreeCtrl, give it the following style flags: wx.TR_DEFAULT_STYLE | wx.TR_EDIT_LABELS Now you should be able to click or double-click any of the items and rename them. See the wxPython demo for more cool tricks.
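A minimal runnable sketch of that flag combination:

    import wx

    app = wx.App(False)
    frame = wx.Frame(None, title='Editable tree')
    # TR_EDIT_LABELS lets the user click an item and type a new label
    tree = wx.TreeCtrl(frame, style=wx.TR_DEFAULT_STYLE | wx.TR_EDIT_LABELS)
    root = tree.AddRoot('root')
    tree.AppendItem(root, 'double-click me to rename')
    frame.Show()
    app.MainLoop()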
0
214
0
0
2012-07-11T11:23:00.000
python,wxpython,wxwidgets
Renaming Item in a TreeCtrl
1
1
1
11,434,766
0
0
0
I gave my CustomTreeCtrl the TR_HAS_VARIABLE_ROW_HEIGHT style, but I am not sure where to go from there to change the height of the items inside the tree. I can't really find anything in the API or online.
true
11,460,864
1.2
0
0
0
I think all you have to do is change the font size of the item. That's what it looks like in the wxPython demo anyway. You could ask on the wxPython users group though. The author of that widget is on there most of the time and is very helpful.
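A hedged sketch of that idea; SetItemFont and the agwStyle keyword come from the agw CustomTreeCtrl API, but the exact constructor signature can vary between wxPython releases:

    import wx
    import wx.lib.agw.customtreectrl as CT

    app = wx.App(False)
    frame = wx.Frame(None)
    tree = CT.CustomTreeCtrl(frame,
                             agwStyle=wx.TR_DEFAULT_STYLE | CT.TR_HAS_VARIABLE_ROW_HEIGHT)
    root = tree.AddRoot('root')
    item = tree.AppendItem(root, 'tall item')
    # a bigger font should make this row taller when variable row heights are on
    tree.SetItemFont(item, wx.Font(18, wx.SWISS, wx.NORMAL, wx.NORMAL))
    frame.Show()
    app.MainLoop()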
0
138
0
0
2012-07-12T21:20:00.000
python,wxpython,wxwidgets
how to use CustomTreeCtrl with the TR_HAS_VARIABLE_ROW_HEIGHT style to change the items height?
1
1
1
11,471,540
0
0
0
I want to make a very simple application in Python which: responds with code 200 when REST PUT/DELETE/GET calls are received, and responds with code 201 when a REST create call is received. I tried with sockets but I don't know how to send a 201.
true
11,470,856
1.2
0
0
2
Use an existing web framework such as Flask or Django. Doing this by yourself with sockets is way too much work, it's not worth it.
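For instance, a minimal Flask sketch returning 201 on create and 200 elsewhere; the routes and payloads are placeholders:

    from flask import Flask

    app = Flask(__name__)

    @app.route('/items', methods=['POST'])
    def create():
        # do the work, then tell the client the resource was created
        return 'created', 201

    @app.route('/items/<item_id>', methods=['GET', 'PUT', 'DELETE'])
    def handle(item_id):
        return 'ok', 200  # Flask sends 200 by default anyway

    if __name__ == '__main__':
        app.run()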
0
81
0
2
2012-07-13T12:49:00.000
python,rest,http-status-codes
Recive REST and Response http codes in Python
1
1
1
11,470,884
0
0
0
I would like to know if the following is possible: I want to connect my computer to either another computer or some consumer electronics (like a PS3 or Xbox or something) via a double-sided USB cable. I want to run a program on the first computer that will run a constant data stream through the USB cable, to trick the second computer into believing it is a USB flash drive it can read data from. The first computer can change the data stream according to the files that are supposed to be on the emulated flash drive. Essentially, I want to use a program on a computer to mimic USB hardware in another device. I don't know if I am wording this in a proper way or not, but is this possible? Diagram: | My Computer running this program | >-----emulated USB data stream-----> | Target |
false
11,478,690
0
0
0
0
This isn't possible without special hardware because USB doesn't support peer-to-peer networks. You might do better with Firewire (IEEE-1394), which can at least be used for TCP/IP with appropriate drivers.
0
988
0
0
2012-07-13T21:33:00.000
python,streaming,usb,emulation
Streaming data over USB in python / mimic usb drive with python?
1
1
1
11,478,774
0
1
0
I'm gonna write a web service which will allow upload/download of files, managing permissions and users. It will be the interface with which a desktop app or mobile app will communicate. I was wondering which of the web frameworks I should use to do that? It is a sort of remote storage for media files. I am going to host the web service on EC2 in a Linux environment. It should be fast (obviously) because it will have to handle tens of requests per second, transferring lots of data (GBs)... Communication will be done using JSON... But how do I deal with binary data? If I use base64, it will grow by 33%... I think web2py should be ok, because it is a very stable and mature project, but I wanted other suggestions before choosing. Thank you.
true
11,505,231
1.2
0
0
2
I'm no doubt going to be shot down for this answer, but it needs to be said... You're going to write a service that allows tens of transfers a second, with very large file sizes... Uptime is going to be essential, and so are transfer speeds etc... If this is for a business, and not just a personal pet project, get the person responsible for the IT budget to give "Box" or "DropBox" some pennies and use their services (I am not affiliated with either company). On a business level, this gets you up and running straight off, and would probably end up cheaper than you coding, designing, debugging, paying for EC2, etc... More related to your question: Flask seems to be an up-and-coming and usable "simple" framework. It should provide all the functionality without all the bells and whistles. The other one I would spend time looking at is Pyramid, which, when using a very basic starter template, is very simple, but you've got the machinery behind it to get quite complex things done. (You can mix URL dispatch and traversal where necessary, for instance.)
0
170
0
0
2012-07-16T13:20:00.000
python,json,web-services,web-frameworks
Python web framework suggestion for a web service
1
1
1
11,505,774
0
1
0
I am writing a bot that checks thousands of websites to determine whether or not they are in English. I am using Scrapy (a Python 2.7 framework) to crawl each website's first page. Can someone suggest the best way to check a website's language? Any help would be appreciated.
false
11,507,279
0.049958
0
0
2
If the sites are multilanguage you can send the "Accept-Language: en-US,en;q=0.8" header and expect the response to be in English. If they are not, you can inspect the response.headers dictionary and see if you can find any information about the language. If still unlucky, you can try mapping the IP to the country and then to the language in some way. As a last resort, try detecting the language from the text itself (I don't know how accurate this is).
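A minimal Scrapy sketch of the header approach; the spider name and URLs are placeholders, and BaseSpider is the base class in Scrapy of this era:

    from scrapy.spider import BaseSpider
    from scrapy.http import Request

    class LangSpider(BaseSpider):
        name = 'lang'
        start_urls = ['http://example.com']  # placeholder

        def start_requests(self):
            for url in self.start_urls:
                yield Request(url,
                              headers={'Accept-Language': 'en-US,en;q=0.8'})

        def parse(self, response):
            # Content-Language, when the server sets it, is a strong hint
            lang = response.headers.get('Content-Language')
            self.log('%s -> %s' % (response.url, lang))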
0
4,124
0
5
2012-07-16T15:16:00.000
python,scrapy,web-crawler,language-detection
python website language detection
1
1
8
11,507,534
0
1
0
I'm using selenium in python to check the page of a website that uses basic authentication on one frame. I'm checking to see if the password I am entering is correct or not, so the basic authentication often gets stuck because the password is wrong. Normally I use sel.set_page_load_timeout(8) and then catch the exception thrown when the page takes too long, but because the page loads except for the one frame, this function is not throwing an exception, and the page is getting stuck. How can I break out of the page?
false
11,529,450
0
0
0
0
Solution: go to the URL of the frame itself, and set the page load timeout on that page.
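A minimal sketch of that workaround, assuming frame_url holds the src of the basic-auth frame:

    from selenium.common.exceptions import TimeoutException

    sel.set_page_load_timeout(8)
    try:
        # load the frame's URL directly instead of the enclosing page
        sel.get(frame_url)
    except TimeoutException:
        print('frame did not load; the credentials are probably wrong')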
0
230
0
0
2012-07-17T19:25:00.000
python,selenium,webdriver,frame,basic-authentication
Selenium stuck loading frame with basic authentication
1
1
1
11,547,170
0
1
0
I'm working on automatically generating a local HTML file, and all the relevant data that I need is in a Python script. Without a web server in place, I'm not sure how to proceed, because otherwise I think an AJAX/json solution would be possible. Basically in python I have a few list and dictionary objects that I need to use to create graphs using javascript and HTML. One solution I have (which really sucks) is to literally write HTML/JS from within Python using strings, and then save to a file. What else could I do here? I'm pretty sure Javascript doesn't have file I/O capabilities. Thanks.
true
11,547,150
1.2
0
0
4
You just need to get the data you have in your python code into a form readable by your javascript, right? Why not just take the data structure, convert it to JSON, and then write a .js file that your .html file includes that is simply var data = { json: "object here" };
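A minimal sketch of that approach on the Python side (the data and file name are placeholders); the generated data.js is then pulled in with an ordinary script tag before your charting code:

    import json

    data = {'points': [1, 2, 3], 'labels': {'a': 'Alpha', 'b': 'Beta'}}
    with open('data.js', 'w') as f:
        # the page includes this file and reads the global 'data' variable
        f.write('var data = %s;' % json.dumps(data))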
1
10,066
0
1
2012-07-18T17:37:00.000
javascript,python,html,file
How to transfer variable data from Python to Javascript without a web server?
1
1
6
11,547,371
0
1
0
Is there a way to capture search data from other websites? For example, if a user visits any website with a search field, I am interested in what that user types into that search field to get to the desired blogpost/webpage/product. I want to know if this is possible by scraping the site, or by any other means. Also, is it illegal to perform a scraping operation to record such data on a third-party website? And is this possible using PHP or Python?
false
11,629,077
0
0
0
0
You can check the http referrer and see what values have been put in the GET variable, but this is restricted to GET variables ONLY!
0
220
0
2
2012-07-24T10:38:00.000
php,python,search,web-scraping
Capturing search data from other websites
1
1
1
11,629,117
0
0
0
How can my Python script get the URL of the currently active Google Chrome tab in Windows? This has to be done without interrupting the user, so sending key strokes to copy/paste is not an option.
false
11,645,123
0.066568
0
0
1
I'm quite new to Stack Overflow, so apologies if the comment is out of tone. After looking at: Selenium; launching chrome://history directly; doing keyboard emulation (copy/paste with pywinauto); trying to use SOCK_RAW connections to capture the headers as per the Network tab of the DevTools (this one was very interesting); trying to get the text of the omnibox/search bar window element; closing and reopening Chrome to read the history tables... I ended up copying the History file itself (\AppData\Local\Google\Chrome\User Data\Default\History) into my application folder whenever the title of the window (retrieved using the hwnd + win32) is missing from "my" urls table. This can be done even if the sqlite db is locked and it does not interfere with the user experience. Very basic solution that requires: sqlite3, psutil, win32gui. Hope that helps.
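A hedged sketch of the copy-and-query approach; the urls table and its columns match the Chrome History schema as commonly documented, but the schema is not a public API and may vary by Chrome version:

    import os
    import shutil
    import sqlite3

    history = os.path.expanduser(
        r'~\AppData\Local\Google\Chrome\User Data\Default\History')
    # copy first: Chrome keeps the live database locked
    shutil.copy(history, 'History_copy')
    conn = sqlite3.connect('History_copy')
    rows = conn.execute(
        'SELECT url, title FROM urls ORDER BY last_visit_time DESC LIMIT 10')
    for url, title in rows:
        print(url, title)
    conn.close()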
0
13,519
0
12
2012-07-25T07:56:00.000
python,winapi,google-chrome
How do I get the URL of the active Google Chrome tab in Windows?
1
1
3
57,915,124
0
1
0
In Selenium testing, there is HtmlUnitDriver, with which you can run tests without a browser. I need to do this with Windmill too. Is there a way to do this in Windmill? Thanks!
false
11,645,451
0
1
0
0
If you're looking to run Windmill in headless mode (no monitor), you can do it by running: Xvfb :99 -ac & DISPLAY=:99 windmill firefox -e test=/path/to/your/test.py
0
364
0
1
2012-07-25T08:17:00.000
python,selenium,windmill,browser-testing
Windmill-Without web browser
1
1
1
12,344,550
0
0
0
I have to write a listener for ActiveMQ in Python. Is there any Python package which could be used to write the listener? Also, what about the STOMP/OpenWire protocols? When I start ActiveMQ, I see three URLs with the protocols tcp, ssl and stomp. Any help will be appreciated. EDIT: Another question I have is: suppose we start the broker with both the STOMP and the OpenWire protocol, so the broker URLs are tcp://localhost:61616 and stomp://localhost:61613, and the broker is listening on two different ports. If a producer publishes a message on the TCP port, can that message be consumed by a subscriber on the STOMP port? Also, if two subscribers, on TCP and STOMP respectively, are waiting on the same queue, will they both receive the message?
false
11,647,268
0
0
0
0
Finally, I am using stomp.py (STOMP for Python) for listening to an ActiveMQ broker. PyActiveMQ is too unstable to be used, as it is no longer maintained.
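A minimal listener sketch against the stomp.py 3.x API of this era; the host, port and queue name are assumptions, and later stomp.py versions dropped start() and require an id on subscribe:

    import time
    import stomp

    class MyListener(stomp.ConnectionListener):
        def on_message(self, headers, message):
            print('received: %s' % message)

    conn = stomp.Connection([('localhost', 61613)])
    conn.set_listener('', MyListener())
    conn.start()
    conn.connect()
    conn.subscribe(destination='/queue/test', ack='auto')
    time.sleep(60)  # keep the process alive while messages arrive
    conn.disconnect()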
0
7,213
0
3
2012-07-25T10:00:00.000
python,python-2.7,activemq
ActiveMQ Listener in Python
1
1
2
11,719,245
0
0
0
Is there a way to list all my HIT types (not HITs or assignments) using the mturk api? I can't find any documentation on this. I'm using python, so it'd be nice if boto supported this query.
true
11,673,711
1.2
1
0
1
Looking through the MTurk API (http://docs.amazonwebservices.com/AWSMechTurk/latest/AWSMturkAPI/Welcome.html) I don't see anything that returns a list of HIT types. You should post a query to the MTurk forum (https://forums.aws.amazon.com/forum.jspa?forumID=11). It seems like a useful feature to add.
0
362
0
0
2012-07-26T16:21:00.000
python,boto,mechanicalturk
List all hitTypes through the mturk API?
1
2
2
11,677,299
0
0
0
Is there a way to list all my HIT types (not HITs or assignments) using the mturk api? I can't find any documentation on this. I'm using python, so it'd be nice if boto supported this query.
false
11,673,711
0.099668
1
0
1
Unfortunately there isn't. We resort to locally persisting every HitType we create through Turk's API at houdiniapi.com, which works just fine.
0
362
0
0
2012-07-26T16:21:00.000
python,boto,mechanicalturk
List all hitTypes through the mturk API?
1
2
2
11,678,042
0
1
0
How can I show continuous real-time updates in the browser, like the Facebook ticker or the meetup.com home page do? How would this be done in Python, PHP or node.js, and what would be the performance impact on the server side? Also, how could we achieve the same updates if the page is cached by a CDN like Akamai?
false
11,688,397
0
0
0
0
You could use a poll, a long-poll, or, if you want, a push system. The easiest would be a poll; however, all solutions require client-side coding. The performance impact depends on your solution. A poll with a short interval effectively makes a request every, say, 100ms to simulate real time. A long-poll would have less impact, but it keeps many requests open for longer periods of time.
0
5,085
0
8
2012-07-27T13:08:00.000
php,python,node.js,asynchronous,real-time
How to show continuous real time updates like facebook ticker, meetup.com home page does?
1
1
4
11,688,432
0
0
0
I'm working with urllib and urllib2 in Python and am using them to retrieve images from URLs. Using something similar to: try: response = urllib2.urlopen(urllib2.Request(url)) f.write(response.read()) f.close() except (errors that could occur): # network errors(?) print "Failed to retrieve "+url pass Now what happens often is that the image does not load / is broken when viewing the site via a normal web browser; this is presumably because of high server load, or because the image does not exist or could not be retrieved by the server. Whatever the reason may be, the image does not load, and a similar situation can also occur (and is likely to) when using the script. Since I do not know what error it might throw up, how do I handle it? I think mentioning all possible errors of the urllib2/urllib libraries in the except statement might be overkill, so I need a better way. (I also might need to handle broken Wi-Fi, an unreachable server and the like at times, so there are more possible errors.)
true
11,697,289
1.2
0
0
5
There are only two exceptions you'll see, HTTPError (HTTP status codes) and URLError (everything that can go wrong), so it's not like it's overkill handling both of them. You can even just catch URLError if you don't care about status codes, since HTTPError is a subclass of it.
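A minimal sketch of catching both; url and f come from the surrounding code:

    import urllib2

    try:
        response = urllib2.urlopen(urllib2.Request(url))
        f.write(response.read())
        f.close()
    except urllib2.HTTPError as e:
        # server responded, but with an error status code
        print('Failed to retrieve %s (HTTP %d)' % (url, e.code))
    except urllib2.URLError as e:
        # DNS failure, refused connection, no network, etc.
        print('Failed to retrieve %s (%s)' % (url, e.reason))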
0
1,816
0
5
2012-07-28T00:47:00.000
python,try-catch,urllib2,urllib
Python: trying to raise multi-purpose exception for multiple error types
1
1
3
11,697,336
0
1
0
I am making a social site where users can post content, and the content has views. Whenever a user from a different IP address views the content, the view count is incremented; multiple requests coming from the same IP address do not count. However, lately someone is iterating through a list of proxies or something and artificially increasing the view counts. How can I prevent this? Is there something I can do by checking headers or something? Thanks.
false
11,697,457
0.197375
0
0
2
The best way to do it is pattern-recognition, since most proxies won't tell you that they are a proxy: if you see certain spikes of traffic, flag them and don't add them to the hitcount. Alternatively, if (s)he's using the same proxies over and over again, just blacklist those IP addresses. You could also try to detect proxies by using some sort of API proxy list service or checking for listening proxy servers.
0
216
0
4
2012-07-28T01:22:00.000
php,python,http,header,hit
How can I prevent my website from being "hit-boosted"?
1
1
2
11,697,490
0
0
0
I need to call GET, POST, PUT, etc. requests to another URI because of search, but I cannot find a way to do that internally with pyramid. Is there any way to do it at the moment?
false
11,701,920
0.066568
1
0
1
Also check the response status code: response.status_int. I use it, for example, to introspect my internal URIs and see whether or not a given relative URI is really served by the framework (for instance, to generate breadcrumbs and make intermediate paths into links only if there are pages behind them).
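If you are on Pyramid 1.4 or later, a minimal subrequest sketch looks like this; the path and form data are placeholders:

    from pyramid.request import Request

    def my_view(request):
        # build a request against one of our own routes and invoke it in-process
        subreq = Request.blank('/internal/search', POST={'q': 'term'})
        response = request.invoke_subrequest(subreq)
        if response.status_int == 200:
            return {'results': response.body}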
0
552
0
5
2012-07-28T14:35:00.000
python,pyramid
Pyramid subrequests
1
1
3
13,202,389
0
1
0
I'm currently using urllib2 and BeautifulSoup to open and parse HTML data. However, I've run into a problem with a site that uses JavaScript to load the images after the page has been rendered (I'm trying to find the image source for a certain image on the page). I'm thinking Twill could be a solution, and am trying to open the page and use a regular expression with 'find' to return the HTML string I'm looking for. I'm having some trouble getting this to work, though, and can't seem to find any documentation or examples on how to use regular expressions with Twill. Any help or advice on how to do this, or on how to solve this problem in general, would be much appreciated.
false
11,705,835
0
0
0
0
I'd rather use CSS selectors or "real" regexps on the page source. Twill is, AFAIK, no longer being worked on. Have you tried BeautifulSoup or PyQuery with CSS selectors?
0
225
0
3
2012-07-29T00:57:00.000
python,regex,beautifulsoup,twill
Using Regular Expression with Twill
1
1
2
11,712,834
0
1
0
I'm trying to get the content of a HTML table generated dynamically by JavaScript in a webpage & parse it using BeautifulSoup to use certain values from the table. Since the content is generated by JavaScript it's not available in source (driver.page_source). Is there any other way to obtain the content and use it? It's table containing list of tasks, I need to parse the table and identify whether specific task I'm searching for is available.
false
11,706,424
0
0
0
0
You'd need to figure out what HTTP requests the Javascript is making, and make the same ones in your Python code. You can do this by using your favorite browser's development tools, or wireshark if forced.
0
1,222
0
1
2012-07-29T03:31:00.000
python,regex,selenium,webdriver,beautifulsoup
Get dynamic html table using selenium & parse it using beautifulsoup
1
1
2
11,707,106
0
0
0
Can each node of a Selenium grid run a different Python script/test? How do I set that up?
true
11,716,677
1.2
1
0
0
Yes, use different browser configurations in the hub, and use two or more programs to contact the grid with different browsers
0
382
0
0
2012-07-30T06:58:00.000
python,testing,selenium
Can each node of selenium grid run different script/test? - how to setup?
1
1
1
11,718,057
0
0
0
I am developing a group chat application to learn how to use sockets, threads (maybe), and the asyncore module (maybe). My thought was to have a client-server architecture so that when a client connects to the server, the server sends the client a list of other connections (other clients' user names and IP addresses); a person can then connect to one or more people at a time, and the server would set up a P2P connection between the clients. I have the socket part working, but the server can only handle one client connection at a time. What would be the best, most common, practical way to go about handling multiple connections? Do I create a new process/thread whenever a new connection comes into the server and then connect the different client connections together, or do I use the asyncore module, which from what I understand makes the server send the same data to multiple sockets (connections), and I just have to regulate where the data goes? Any help/thoughts/advice would be appreciated.
true
11,725,192
1.2
0
0
1
For a group chat application, the general approach will be: Server side (accept process): create the socket, bind it to a well-known port (on the appropriate interface) and listen; then, while app_running: client_socket = accept (using the server socket); spawn a new thread and pass this socket to the thread, which handles the client that just connected; continue, so that the server can accept more connections. Server-side client management thread: while app_running: read the incoming message and store it in a queue or something, then continue. Server side (group chat processing): for all connected clients, check their queues; if any message is present, send it to ALL the connected clients (including the client that sent this message, which serves as a sort of ACK). Client side: create a socket, connect to the server via IP address and port, then do send/receive. There can be lots of improvements on the above. For instance, the server could poll the sockets or use a "select" operation on a group of sockets. That would be more efficient, in the sense that having a separate thread for each connected client is overkill when there are many (think ~1MB of stack per thread). A minimal threaded sketch is shown below. PS: I haven't really used the asyncore module, but I am guessing that you would notice some performance improvement when you have lots of connected clients and very little processing.
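Here is the minimal threaded sketch mentioned above: a toy broadcast server, not production code; the port and buffer size are arbitrary:

    import socket
    import threading

    clients = []
    clients_lock = threading.Lock()

    def handle_client(conn):
        while True:
            data = conn.recv(4096)
            if not data:
                break
            with clients_lock:
                for c in clients:       # broadcast to everyone, sender included
                    c.sendall(data)
        with clients_lock:
            clients.remove(conn)
        conn.close()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('', 9999))
    server.listen(5)
    while True:
        conn, addr = server.accept()
        with clients_lock:
            clients.append(conn)
        t = threading.Thread(target=handle_client, args=(conn,))
        t.daemon = True
        t.start()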
1
1,547
0
2
2012-07-30T16:03:00.000
python,multithreading,sockets,asynchronous
Group chat application in python using threads or asycore
1
1
1
11,725,633
0
1
0
In a python cgi script I have many selects in a form (100 or so), and each select has 5 or 6 options to choose from. I don't want to have a separate submit button, so I am using onchange="submit();" to submit the form as soon as an option is selected from one of the many selects. When I read the form data with form.keys() the name of every select on the form is listed instead of just the one that was changed. This requires me to compare the value selected in each select with the starting value to find out which one changed and this of course is very slow. How can I just get the new value of the one select that was changed?
true
11,755,474
1.2
0
0
1
If every select should be the only value that's needed, then every select is basically a form on its own. You could either remove all other selects when you activate a single select (which is prone to errors), or simply put every select in its own form instead of using one giant form. Otherwise all data is going to be send.
0
130
0
0
2012-08-01T08:34:00.000
javascript,python,html,forms,cgi
many selects (dropdowns) on html form, how to get just the value of the select that was changed
1
1
1
11,755,603
0
0
0
Is there a Python library for OAuth which can be run on both Windows and Linux? On Windows I am using python-oauth, but I could not find an installation for Linux.
true
11,759,307
1.2
0
0
2
All Python libraries that don't rely on native code or platform-specific APIs are portable. I don't see any of that in python-oauth or python-oauth2. So your current library should work fine on Linux.
0
736
1
2
2012-08-01T12:33:00.000
python,oauth
Python: OAuth Library for Linux and Windows
1
1
2
11,759,545
0
0
0
In my app I send a packet by raw socket to another computer, then get a packet back and write the return packet to another computer by raw socket. My app is a C++ application that runs on Ubuntu and works with nfqueue. I want to test the packets sent to both computer1 and computer2 in order to check that they are as expected. I need to write an automated test that checks my program; this test needs to listen on the eth interface, load the sent packets and check whether they are as expected (IP, ports, payload). I am looking for a simple way (a tool with a simple API, or code) to do this. I would prefer that the test check the sender, but it might be difficult to find an API to listen on the eth interface (I send via raw socket), so a suggested API that checks the receiving computers is also good. The test application can be written in C++, Java or Python.
false
11,762,812
0
1
0
0
The only way to check if a packet has been sent correctly is by verifying it's integrity on the receiving end.
0
1,276
1
0
2012-08-01T15:40:00.000
c++,python,testing,networking
How to test if packet is sent correct?
1
2
3
11,763,064
0
0
0
In my app I send a packet by raw socket to another computer, then get a packet back and write the return packet to another computer by raw socket. My app is a C++ application that runs on Ubuntu and works with nfqueue. I want to test the packets sent to both computer1 and computer2 in order to check that they are as expected. I need to write an automated test that checks my program; this test needs to listen on the eth interface, load the sent packets and check whether they are as expected (IP, ports, payload). I am looking for a simple way (a tool with a simple API, or code) to do this. I would prefer that the test check the sender, but it might be difficult to find an API to listen on the eth interface (I send via raw socket), so a suggested API that checks the receiving computers is also good. The test application can be written in C++, Java or Python.
true
11,762,812
1.2
1
0
0
I run tcpdump on the receiver computer and save all packets to a file. I then analyze the tcpdump capture with Python and check in the test that the packets were sent as expected.
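A hedged sketch of that analysis step using scapy; the pcap name, receiver address and port are assumptions standing in for the real test values:

    from scapy.all import rdpcap, IP, TCP

    packets = rdpcap('capture.pcap')  # file produced by tcpdump -w
    for pkt in packets:
        if IP in pkt and TCP in pkt:
            assert pkt[IP].dst == '192.168.1.20'   # expected receiver (assumption)
            assert pkt[TCP].dport == 8080          # expected port (assumption)
            payload = str(pkt[TCP].payload)
            # compare payload against what the app was supposed to send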
0
1,276
1
0
2012-08-01T15:40:00.000
c++,python,testing,networking
How to test if packet is sent correct?
1
2
3
11,809,920
0
1
0
I'm trying to parse a web page using Python's beautiful soup Python parser, and am running into an issue. The header of the HTML we get from them declares a utf-8 character set, so Beautiful Soup encodes the whole document in utf-8, and indeed the HTML tags are encoded in UTF-8 so we get back a nicely structured HTML page. The trouble is, this stupid website injects gb2312-encoded body text into the page that gets parsed as utf-8 by beautiful soup. Is there a way to convert the text from this "gb2312 pretending to be utf-8" state to "proper expression of the character set in utf-8?"
true
11,767,001
1.2
0
0
1
The simplest way might be to parse the page twice, once as UTF-8, and once as GB2312. Then extract the relevant section from the GB2312 parse. I don't know much about GB2312, but looking it up it appears to at least agree with ASCII on the basic letters, numbers, etc. So you should still be able to parse the HTML structure using GB2312, which would hopefully give you enough information to extract the part you need. This may be the only way to do it, actually. In general, GB2312-encoded text won't be valid UTF-8, so trying to decode it as UTF-8 should lead to errors. The BeautifulSoup documentation says: In rare cases (usually when a UTF-8 document contains text written in a completely different encoding), the only way to get Unicode may be to replace some characters with the special Unicode character “REPLACEMENT CHARACTER” (U+FFFD, �). If Unicode, Dammit needs to do this, it will set the .contains_replacement_characters attribute to True on the UnicodeDammit or BeautifulSoup object. This makes it sound like BeautifulSoup just ignores decoding errors and replaces the erroneous characters with U+FFFD. If this is the case (i.e., if your document has contains_replacement_characters == True), then there is no way to get the original data back from document once it's been decoded as UTF-8. You will have to do something like what I suggested above, decoding the entire document twice with different codecs.
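A minimal sketch of the double-parse idea with BeautifulSoup 4's from_encoding argument; the id used to locate the injected section is a placeholder:

    from bs4 import BeautifulSoup

    raw = open('page.html', 'rb').read()
    # one tree per candidate encoding; use the UTF-8 one for structure
    soup_utf8 = BeautifulSoup(raw, from_encoding='utf-8')
    soup_gb = BeautifulSoup(raw, from_encoding='gb2312')

    # 'body-text' stands in for whatever id/class marks the injected section
    chinese_text = soup_gb.find(id='body-text').get_text()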
0
587
0
0
2012-08-01T20:23:00.000
python,encoding,character-encoding,web-scraping,beautifulsoup
Parsing a utf-8 encoded web page with some gb2312 body text with Python
1
1
1
11,767,390
0
1
0
So, the issue I'm running into is: how do I know which element of my page made a POST request? I have multiple elements that can make the POST request on the page, but how do I get the values from the element that created the request? It seems like this would be fairly trivial, but I have come up with nothing, and quite a few Google searches have turned up nothing as well. Is there any way to do this using Bottle? I had an idea to add a route for an SQL page (with authentication, of course) providing the action for the form, and to use the template to render the id in the action, but I was thinking there had to be a better way to do this without routing another page.
true
11,770,312
1.2
0
0
0
You could add a hidden input field to each form on the page with a specific value. On the server side, check the value of this field to detect which form the post request came from.
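A minimal Bottle sketch of the hidden-field approach; the form ids and field names are placeholders:

    from bottle import post, request, run

    @post('/submit')
    def submit():
        form_id = request.forms.get('form_id')  # value of the hidden input
        if form_id == 'search':
            return 'search form: %s' % request.forms.get('query')
        elif form_id == 'login':
            return 'login form: %s' % request.forms.get('username')
        return 'unknown form'

    run(host='localhost', port=8080)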
0
53
0
0
2012-08-02T02:36:00.000
python,html,post,bottle
Distinguishing post request's from possible poster elements
1
1
1
11,770,627
0
0
0
The HTTP file and its contents are already downloaded and are present in memory. I just have to pass the content to a decoder in GStreamer and play it. However, I am not able to find the connecting link between the two. After reading the documentation, I understood that GStreamer uses souphttpsrc for downloading and parsing HTTP files. But in my case, I have my own parser as well as my own file downloader. It takes the URL and returns the data in parts to be used by the decoder. I am not sure how to bypass souphttpsrc and use my parser instead, nor how to link it to the decoder. Please let me know if anyone knows how this can be done.
true
11,786,318
1.2
0
0
1
You can use appsrc. You can pass chunks of your data to app source as needed.
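A hedged sketch assuming the old pygst 0.10 bindings that were current at the time; the pipeline string is illustrative and your decoder chain will differ:

    import gst

    pipeline = gst.parse_launch('appsrc name=src ! decodebin2 ! autoaudiosink')
    src = pipeline.get_by_name('src')

    def feed(chunk):
        # chunk: bytes handed over by your own downloader/parser
        src.emit('push-buffer', gst.Buffer(chunk))

    pipeline.set_state(gst.STATE_PLAYING)
    # ... call feed() as data arrives, then signal the end of the stream:
    src.emit('end-of-stream')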
0
241
0
0
2012-08-02T21:49:00.000
gstreamer,python-gstreamer
How to hook custom file parser to Gstreamer Decoder?
1
1
1
11,802,443
0
0
0
Given a graph G, a node n and a length L, I'd like to collect all (non-cyclic) paths of length L that depart from n. Do you have any idea on how to approach this? By now, I my graph is a networkx.Graph instance, but I do not really care if e.g. igraph is recommended. Thanks a lot!
false
11,809,856
1
0
0
6
A very simple way to approach (and solve entirely) this problem is to use the adjacency matrix A of the graph. The (i,j) th element of A^L is the number of paths between nodes i and j of length L. So if you sum these over all j keeping i fixed at n, you get all paths emanating from node n of length L. This will also unfortunately count the cyclic paths. These, happily, can be found from the element A^L(n,n), so just subtract that. So your final answer is: Σj{A^L(n,j)} - A^L(n,n). Word of caution: say you're looking for paths of length 5 from node 1: this calculation will also count the path with small cycles inside like 1-2-3-2-4, whose length is 5 or 4 depending on how you choose to see it, so be careful about that.
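A minimal sketch of that calculation with networkx and numpy; remember it counts walks, including ones that revisit nodes, per the caution above:

    import networkx as nx
    import numpy as np

    def count_walks_from(G, n, L):
        nodes = G.nodes()
        A = nx.to_numpy_matrix(G, nodelist=nodes)
        AL = np.linalg.matrix_power(A, L)   # (i,j) entry: walks of length L
        i = nodes.index(n)
        # all walks of length L from n, minus the closed ones back to n
        return AL[i].sum() - AL[i, i]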
0
6,172
0
3
2012-08-04T15:38:00.000
python,graph,networkx,igraph
All paths of length L from node n using python
1
1
5
11,810,286
0
0
0
In Python's urlparse, you can use urlparse to parse the URL, and then parse_qsl to parse the query. I want to remove a query (name, value) pair, and then reconstruct the URL. There is a urlunparse method, but no unparse_qsl method. What is the correct way to reconstruct the query from the qsl list?
false
11,820,366
0.197375
0
0
2
The function urllib.urlencode is appropriate.
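A minimal sketch of the round trip, removing one query key and rebuilding the URL (Python 2 module names, matching the question):

    import urllib
    from urlparse import urlparse, parse_qsl, urlunparse

    def remove_param(url, key):
        parts = urlparse(url)
        query = [(k, v) for k, v in parse_qsl(parts.query) if k != key]
        return urlunparse((parts.scheme, parts.netloc, parts.path,
                           parts.params, urllib.urlencode(query),
                           parts.fragment))

    print(remove_param('http://example.com/p?a=1&b=2', 'b'))
    # -> http://example.com/p?a=1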
0
782
0
5
2012-08-05T21:52:00.000
python,urlparse
Python urlparse.unparse_qsl?
1
1
2
11,820,400
0
0
0
I have a site A where a web portal written in Python is installed. Then I have a site X (which is not static but changes dynamically), where some files are stored. Site A and site X communicate through FTP. How can I allow a registered user of the portal to download a file as if the file were on site A? Is there a standard way to do this? Since the files can be large, I would like to avoid passing them through the server. Thanks
true
11,885,062
1.2
0
0
1
The only way to grant access in the manner that you want is to pass the file through your server, write a frontend on the FTP server, or provide a limited download of the file on the FTP server (via a temporary account). The latter option is not secure and I wouldn't recommend it, although it would be easy to do. So that leaves either passing the file through your server and handing it off to the user that way, or having some kind of web frontend on the FTP server to serve the file. The frontend on the FTP server would be the best option, although it requires more work. The basic requirements are: link generation; a database of some kind to hold the links and the users allowed to access them; a method to pass the authentication to this frontend so the user doesn't have to log in again (a simple cookie/session would be easiest, but again this is difficult). It will require a lot of extra work but will be the most flexible, if it is possible to do at all; otherwise I would stick with passing the data through your server, or look into a third-party CDN.
0
57
0
0
2012-08-09T13:55:00.000
python,web-applications,ftp,download
Make accessible a remote file to a registered user
1
1
1
11,886,438
0
0
0
I am currently using my own domain name for my Pythonanywhere app. The original username.pythonanywhere.com still serves the same content as www.my-domain.com, and I wanted to know if there would be duplicate search engine results from this. My sitemap.xml file is written for www.my-domain.com in case that changes anything. I only want www.my-domain to be crawled.
true
11,892,953
1.2
0
0
1
At the moment it is certainly possible, but it would only realistically happen if people link to your .pythonanywhere.com domain. We are currently working on a major upgrade that will give each web app its own WSGI server, and the potential for this to occur will go away completely.
0
79
0
1
2012-08-09T22:31:00.000
search-engine,sitemap,pythonanywhere
Duplicate Search Engine results on Pythonanywhere
1
1
1
11,899,675
0
0
0
I am using boto library in Python to get Amazon SQS messages. In exceptional cases I don't delete messages from queue in order to give a couple of more changes to recover temporary failures. But I don't want to keep receiving failed messages constantly. What I would like to do is either delete messages after receiving more than 3 times or not get message if receive count is more than 3. What is the most elegant way of doing it?
false
11,901,273
0.158649
0
0
4
Another way could be to put an extra identifier at the end of the message in your SQS queue. This identifier can keep the count of the number of times the message has been read. Also, if you don't want your service to poll these messages again and again, you can create one more queue, say a "dead message queue", and transfer any message that has crossed the threshold to that queue.
0
21,434
0
8
2012-08-10T12:03:00.000
python,boto,amazon-sqs
How to get messages receive count in Amazon SQS using boto library in Python?
1
2
5
12,972,060
0
0
0
I am using boto library in Python to get Amazon SQS messages. In exceptional cases I don't delete messages from queue in order to give a couple of more changes to recover temporary failures. But I don't want to keep receiving failed messages constantly. What I would like to do is either delete messages after receiving more than 3 times or not get message if receive count is more than 3. What is the most elegant way of doing it?
false
11,901,273
0.039979
0
0
1
It can be done in a few steps. Create an SQS connection: sqsconnrec = SQSConnection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY). Create a queue object: request_q = sqsconnrec.create_queue("queue_Name"). Load the queue messages: messages = request_q.get_messages(). Now you get the array of message objects, and to find the total number of messages you just do len(messages). Should work like a charm.
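A minimal self-contained sketch of those steps with boto; the credentials and queue name are placeholders:

    from boto.sqs.connection import SQSConnection

    conn = SQSConnection('AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY')
    # create_queue returns the existing queue if it is already there
    queue = conn.create_queue('queue_name')
    messages = queue.get_messages(num_messages=10)
    print(len(messages))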
0
21,434
0
8
2012-08-10T12:03:00.000
python,boto,amazon-sqs
How to get messages receive count in Amazon SQS using boto library in Python?
1
2
5
11,960,623
0
0
0
First of all, this question has no malicious purpose. I had asked the same question yesterday on Stack Overflow but it was removed. I would like to learn whether I have to log into an account when sending emails with attachments using Python's smtplib module. The reason I don't want to log in to an account is that there is no account that I can use in my company. I could ask my company's IT department to set up an account, but until then I want to write the program code and test it. Please don't remove this question. Best Regards
true
11,920,330
1.2
1
0
1
You don't have to have an account (ie. authenticate to your SMTP server) if your company's server is configured to accept mail from certain trusted networks. Typically SMTP servers consider the internal network as trusted and may accept mail from it without authentication.
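A minimal sketch of sending without a login() call through such a trusting relay; all addresses and the relay host are placeholders:

    import smtplib
    from email.mime.text import MIMEText

    msg = MIMEText('test body')
    msg['Subject'] = 'test'
    msg['From'] = 'me@company.example'
    msg['To'] = 'you@company.example'

    # no login(): the relay must trust your network for this to work
    server = smtplib.SMTP('mail.company.example')  # hypothetical relay host
    server.sendmail(msg['From'], [msg['To']], msg.as_string())
    server.quit()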
0
95
0
0
2012-08-12T07:00:00.000
python,email,anonymous,smtplib
Do I have to log into an email account when sending emails using python smtplib?
1
1
1
11,920,368
0
0
0
I'm looking for a way to take gads of inbound SMTP messages and drop them onto an AMQP broker for further routing and processing. The messages won't actually end up in a mailbox, but instead SMTP is used as a message gateway. I've written a Postfix After-Queue Content Filter in Python that drops the inbound SMTP message onto a RabbitMQ broker. That works well - I get the raw message over a queue and it gets picked up nicely by a consumer. The issue is that the AMQP connection is created and torn down with each message... the Content Filter script gets re-executed from scratch each time. I imagine that will end up being a performance issue. If I could leverage something re-entrant I could reuse the connection. Or maybe I'm just approaching the whole thing incorrectly...
false
11,927,409
0.066568
1
0
1
Making an AMQP connection over plain TCP is pretty quick. Perhaps if you're using SSL it's another story, but are you sure that enqueueing the raw message onto the AMQP exchange is going to be the bottleneck? My guess would be that actually delivering the message via SMTP is going to be much slower, so how fast you can queue things up isn't going to affect the throughput of the system. If this piece does turn out to be a bottleneck, I rather like creating little web servers using Sinatra or Rack, but it sounds like you might prefer a Python-based solution. Have the Postfix content filter perform an HTTP POST using curl to a web server, which maintains a persistent connection to the AMQP server. Of course, now you have an extra moving part for which you need to think about monitoring, error handling and security.
0
1,843
0
4
2012-08-13T02:04:00.000
python,smtp,rabbitmq,postfix-mta,amqp
Sending raw SMTP messages to an AMQP broker
1
1
3
11,927,486
0
0
0
I use a socket in non-blocking mode. The client sends data continuously to the server, and although I set the socket buffer big enough to hold all the data from the client, EWOULDBLOCK is always raised and I don't know why. Could you explain this EWOULDBLOCK error to me in detail?
false
11,927,848
0.761594
0
0
5
EWOULDBLOCK means that the socket send buffer is full when sending, or that the socket receive buffer is empty when receiving. You are supposed to use select() to detect when these conditions become false.
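A minimal sketch of that pattern on a non-blocking socket; the host, port and payload are placeholders:

    import select
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(('localhost', 9999))
    sock.setblocking(0)

    # wait up to 5 seconds until the socket is readable and/or writable
    readable, writable, _ = select.select([sock], [sock], [], 5.0)
    if sock in readable:
        data = sock.recv(4096)   # receive buffer has data, won't raise EWOULDBLOCK
    if sock in writable:
        sock.send(b'payload')    # send buffer has room, won't raise EWOULDBLOCK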
0
12,052
0
3
2012-08-13T03:27:00.000
python,sockets
EWOULDBLOCK Error in socket programming
1
1
1
11,927,873
0
0
0
I have a string like "pe%20to%C5%A3i%20mai". When I apply urllib.parse.unquote to it, I get "pe to\u0163i mai". If I try to write this to a file, I get those exact symbols, not the expected glyph. How can I transform the string to UTF-8 so that the file has the proper glyph instead? Edit: I'm using Python 3.2 Edit2: So I figured out that urllib.parse.unquote was working correctly, and my problem actually is that I'm serializing to YAML with yaml.dump and that seems to screw things up. Why?
false
11,939,286
0.049958
0
0
1
The urllib.parse.unquote returned a correct UTF-8 string and writing that straight to the file returned did the expected result. The problem was with yaml. By default it doesn't encode with UTF-8. My solution was to do: yaml.dump("pe%20to%C5%A3i%20mai",encoding="utf-8").decode("unicode-escape") Thanks to J.F. Sebastian and Mark Byers for asking me the right questions that helped me figure out the problem!
1
4,404
0
0
2012-08-13T17:28:00.000
python,utf-8,python-3.x,yaml,urldecode
Decoding UTF-8 URL in Python
1
1
4
11,940,331
0
0
0
I have written a local search engine in Python, which I feel was a good idea. It requires constant little changes and Python appears to be always readable when I go back. And it is good with regular expressions too. But now the engine is in demand online. Should I stick with python? Is there a good module/library (I know urllib superficially, but I mean something more specialized) for wrapping a local search engine (as simple as a method taking the string/query) with a method that can communicate with Javascript and keep/sort/order the incoming queries?
true
11,949,777
1.2
0
0
2
If you like Python, I would use Django or even Ruby on Rails. Both are great MVC (Model, View, Controller) frameworks which have manageable learning curves. I suggest Ruby on Rails because I was able to transition into it from Python and I really enjoyed its conventions and ease of use. Check them out.
1
208
0
2
2012-08-14T09:51:00.000
javascript,python
Python search engine going online
1
1
1
11,953,490
0
0
0
I am looking for a Python library that is able to extract the actual data of an mp3 (the actual voices/sounds we listen to). I want to be able to use the data to compare with another mp3 file without the bitrate/encoding affecting the process. How do I go about it?
false
11,963,020
0
0
0
0
Python has the wave module and its Wave_read object, which has a function named readframes(n). It returns the raw sample bytes (these are basically the loudness/amplitude of the sound wave at successive points in time). You can compare two such byte streams, but you need to take care of bit depth and number of channels, as the stream output depends on them: one byte per frame for an 8-bit mono signal, two for 8-bit stereo, etc.
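A minimal sketch of reading the raw frames with the wave module; note that wave reads WAV, so an mp3 would have to be decoded to WAV first (the file name is a placeholder):

    import wave

    w = wave.open('sound.wav', 'rb')
    print(w.getnchannels(), w.getsampwidth(), w.getframerate())
    frames = w.readframes(w.getnframes())  # raw sample bytes
    w.close()
    # only compare 'frames' between files with identical channel count,
    # sample width and frame rate; otherwise the byte streams don't line up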
1
1,041
0
0
2012-08-15T01:33:00.000
python,mp3
python - Extract data from mp3 file
1
1
2
11,963,096
0
0
0
Is there a library available that can help make sure that JSON objects being sent back and forth between a server running Python and a Javascript client have not been corrupted? I'm thinking it would probably work by creating a hash of the object to be sent a long with the object. Then the receiver of the object could re-hash the object ans make sure that it matches the hash that it received. Is this something that I should even be concerned about, or is this something that browsers and clients normally have taken care of behind the scenes anyway? Thanks!
true
12,014,050
1.2
0
0
2
TCP has built-in error checking, and so do most link layer network protocols. So there's both per-link and end-to-end checks taking place. The only thing this doesn't protect against is intentional modification of the data, e.g. by a firewall, proxy, or network hacker. However, they can modify the hash as well as the JSON, so adding a hash doesn't protect against them. If you need real, secure protection you need to use cryptography, e.g. SSL.
1
115
0
1
2012-08-17T22:35:00.000
javascript,python,hash,corruption,validation
Python-Javascript Hash library to make sure that a JSON object did not get corrupted in transit
1
1
1
12,014,082
0