Python: How can I find all files with a particular extension? I am trying to find all the `.c` files in a directory using Python. I wrote this, but it is just returning me all files - not just `.c` files:
````
import os, re
results = []
for folder in gamefolders:
    for f in os.listdir(folder):
        if re.search('.c', f):
            results += [f]
print results
````
How can I just get the `.c` files?
````
for _, _, filenames in os.walk(folder):
    for file in filenames:
        fileExt = os.path.splitext(file)[-1]
        if fileExt == '.c':
            results.append(file)
````
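As an alternative that avoids regular expressions entirely, here is a minimal sketch using the standard `glob` module (assuming the same `gamefolders` list of directory paths as in the question):
````
import glob
import os

results = []
for folder in gamefolders:
    # '*.c' is matched literally per folder, so no regex escaping is needed
    results.extend(glob.glob(os.path.join(folder, '*.c')))
print(results)
````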
Foreach on Multiple 2 Dimensional Lists: Here is an example of the problem I am running into:
````
A = [[1, 2, 3, 4, 5, 6],
     [7, 8, 9, 10, 11, 12],
     [13, 14, 15, 16, 17, 18]]
B = [['1', '2', '3', '4', '5', '6'],
     ['7', '8', '9', '10', '11', '12'],
     ['13', '14', '15', '16', '17', '18']]
for a, b in A, B:
    for ai, bi in a, b:
        if ai == int(bi):
            print 'it worked!'
````
This code gives me an error on line 13: `ValueError: too many values to unpack`. What I would like to happen is to have `a` and `b` point to the lists of 6 elements, e.g. `[1, 2, 3, 4, 5, 6]` and `['1', '2', '3', '4', '5', '6']` respectively, for the first iteration. I have tried having one iterator for each 2D array like above, and I have also tried using 12 variables in case Python was trying to pass each element in the 6-element lists to its own variable (6 for `a` and 6 for `b`, as in `a1..a6, b1..b6`). Could anyone point out what is going on here and perhaps explain how to get the result I am looking for?
If you want to iterate through multiple lists at the same time, you need to use `zip` (<a href="https://docs.python.org/2/library/functions.html#zip" rel="nofollow">https://docs.python.org/2/library/functions.html#zip</a>):
````
for a, b in zip(A, B):
    # a and b are now the inner lists
    print "It worked!"
````
I am not sure what you want to achieve with your `a == int(b)` statement, so I left it out.
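For completeness, a minimal sketch of the full element-wise comparison, assuming the same `A` and `B` structure as in the question (the inner loop needs `zip` as well):
````
A = [[1, 2, 3], [4, 5, 6]]
B = [['1', '2', '3'], ['4', '5', '6']]

for a, b in zip(A, B):           # pair up the inner lists
    for ai, bi in zip(a, b):     # pair up the individual elements
        if ai == int(bi):
            print("it worked!")
````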
datetime with timezone field/template output strange behavior. Timezone settings in settings.py:
````
USE_TZ = True
TIME_ZONE = 'Europe/Moscow'  # +4
````
Record in the database table (PostgreSQL 9.1, timestamp with time zone):
<blockquote> 2012-12-19 15:30:51.164368+04 </blockquote>
Django date filter:
````
date(object.date, "d.m.Y H:i:s")
````
And after all these manipulations the datetime field is output in the template as:
<blockquote> 19.12.2012 11:30:51 </blockquote>
Why does this happen? Why does Django not use the TZ information?
When `USE_TZ` is `False`, this is the time zone in which Django will store all datetimes. When `USE_TZ` is `True`, this is the default time zone that Django will use to display datetimes in templates and to interpret datetimes entered in forms. Check this document: <a href="https://docs.djangoproject.com/en/dev/ref/settings/#time-zone" rel="nofollow">Django Doc</a>
Python 3 5 - Pandas - Call a method with a for loop from another method Background: I am using Python 3 5 with Pandas and Jupyter Notebook This is my first go at classes Working with the Jupyter Notebook one can simply run small bits of code one cell at a time I would like to start making scripts/programs that have a logical and more readably flow But there are basics that I just do not understand yet Know that I have spend a lot of time the last few days reading and trying things to get this to work I rarely ask questions on SO because I can usually get what I need from previous posts like most people I am sure For some reason I am just not getting how to do what I am sure is simple Below is a snippet from a large program I am writing There are at this time four methods and they are duplicate the same bit of code And that is to loop through the state_list in order to filter the states I want from the Pandas dataframe that I am reading in Each method's purpose is to read in a different file (xlsx and csv) and pull out data for a date and specific states Rather than repeating the for loop for in each method can I make it a method and then just call it from the other methods? I tried a few things but it is just happening Current Code: ````class GetData(object): report_date = '3/1/2016' state_list = ['AL' 'AZ' 'GA' 'IA' 'ID' 'IL' 'MN' 'MS' 'MT' 'NE' 'NM' 'NV' 'TN' 'UT' 'WI'] def data_getter(self): """Pulls in dataset and filters on specific date and states """ data = pd read_excel('C:\\datapath\\file xlsx') data = data[data['date'] == GetData report_date] states = [] for state in GetData state_list: df = data[data['state'] == state] states append(df) concat_data = pd concat(states axis=0) return concat_data ```` Then I instantiate it like: ````data = GetData() dataset = data data_getter() ```` Goal - soemthing like this? ````class GetData(object): report_date = '3/1/2016' state_list = ['AL' 'AZ' 'GA' 'IA' 'ID' 'IL' 'MN' 'MS' 'MT' 'NE' 'NM' 'NV' 'TN' 'UT' 'WI'] def data_getter(self): """Pulls in dataset and filters on specific date and states """ data = pd read_excel('C:\\datapath\\file xlsx') data = data[data['date'] == GetData report_date] # Call to state_filter here? data = GetData() data = data state_filter def state_filter(self): states = [] for state in GetData state_list: df = data[data['state'] == state] states append(df) concat_data = pd concat(states axis=0) return concat_data ````
<strong>UPDATE:</strong> well you can always write your own wrapper class but i would say there must be a good reason for that ````class GetData(object): #report_date = '3/1/2016' states = ['AL' 'AZ' 'GA' 'IA' 'ID' 'IL' 'MN' 'MS' 'MT' 'NE' 'NM' 'NV' 'TN' 'UT' 'WI'] def __init__(self df_or_file=None read_func=pd read_excel **kwargs): if df_or_file is not None: if isinstance(df_or_file (pd DataFrame pd Series pd Panel)): self data = df elif(os path isfile(df_or_file)): self data = read_func(df_or_file **kwargs) else: self data = pd DataFrame() def save(self filename savefunc=pd DataFrame to_excel **kwargs): savefunc(df filename **kwargs) ```` now you can do the following things : let us generate some random DF and prepare CSV and Excel files: ````In [53]: df = pd DataFrame(np random randint(0 10 size=(5 3)) columns=list('abc')) In [54]: df Out[54]: a b c 0 6 0 2 1 8 1 5 2 5 5 4 3 0 4 1 4 5 4 2 In [55]: df to_csv( would:/temp/test csv' index=False) In [56]: (df+100) to_excel( would:/temp/test xlsx' index=False) ```` now we can create our object: ````In [57]: x = GetData(df) In [58]: x data Out[58]: a b c 0 6 0 2 1 8 1 5 2 5 5 4 3 0 4 1 4 5 4 2 ```` or load it from CSV ````In [61]: x = GetData( would:/temp/test csv' read_func=pd read_csv sep=' ') In [62]: x data Out[62]: a b c 0 6 0 2 1 8 1 5 2 5 5 4 3 0 4 1 4 5 4 2 In [63]: x data[x data a == 5] Out[63]: a b c 2 5 5 4 4 5 4 2 ```` or load it from Excel file: ````In [64]: x = GetData( would:/temp/test xlsx') In [65]: x data Out[65]: a b c 0 106 100 102 1 108 101 105 2 105 105 104 3 100 104 101 4 105 104 102 ```` and save it: ````In [66]: x data c = 0 In [67]: x data Out[67]: a b c 0 106 100 0 1 108 101 0 2 105 105 0 3 100 104 0 4 105 104 0 In [68]: x save( would:/temp/new xlsx' index=False) In [69]: x save( would:/temp/new csv' savefunc=pd DataFrame to_csv sep=';' index=False) ````
invalid syntax when running cProfile. I tried to run `python -m cProfile simple_test_script.py`. I am on Windows 7, Python 2.7.10. simple_test_script.py:
````
import numpy as np
from numpy.linalg import eigvals

def run_experiment(niter=100):
    K = 100
    results = []
    for _ in xrange(niter):
        mat = np.random.randn(K, K)
        max_eigenvalue = np.abs(eigvals(mat)).max()
        results.append(max_eigenvalue)
    return results

some_results = run_experiment()
print 'Largest one we saw: %s' % np.max(some_results)
````
I get this error:
````
File "<ipython-input-13-6634cb53f497>", line 1
    python -m cProfile simple_test_script.py
           ^
SyntaxError: invalid syntax
````
I read this documentation: <a href="https://docs.python.org/2/library/profile.html" rel="nofollow">https://docs.python.org/2/library/profile.html</a>
<blockquote> (Use profile instead of cProfile if the latter is not available on your system.) </blockquote>
I tried profile instead of cProfile, but got the same error. Any clues how I can call cProfile?
It seems like you were running the following command inside IPython:
````
python -m cProfile simple_test_script.py
````
You should just run it in your shell.
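If you would rather stay inside IPython, a minimal sketch that drives the profiler from Python code instead (assuming the script is importable as `simple_test_script`):
````
import cProfile

# run a statement under the profiler and print stats sorted by cumulative time
cProfile.run("import simple_test_script", sort="cumulative")
````
IPython's `%prun` magic is also worth a look for the same purpose.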
Python add to MySQL database Using Scrapy to get some stuff and I want to store it in a Database Never done anything with MySQL or Python before so looking for some help as to why this will not work Here is my code: ````from __future__ import print_function from metacritic items import MetacriticItem from mysql connector import errorcode import mysql connector import json class MetacriticPipeline(object): DB_NAME = 'metacritic' TABLES = {} TABLES['titles'] = ( "CREATE TABLE `titles` (" " `name` varchar (14) NOT NULL " " PRIMARY KEY (`emp_no`)" ") ENGINE=InnoDB") cnx = mysql connector connect(user='root' password = 'andy') cursor = cnx cursor() def process_item(self item spider): if item['title']: return item class JsonWriterPipeline(object): def __init__(self): self file = open('items jl' 'wb') def process_item(self item spider): line = json dumps(dict(item)) "\n" self file write(line) return item class WriteToDatabasePipeline(object): def create_database(cursor): try: cursor execute( "CREATE DATABASE {} DEFAULT CHARACTER SET 'utf8'" format(DB_NAME)) except mysql connector Error as err: print("Failed creating database: {}" format(err)) exit(1) try: cnx database = DB_NAME except mysql connector Error as err: if err errno == errorcode ER_BAD_DB_ERROR: create_database(cursor) cnx database = DB_NAME else: print(err) exit(1) ```` I am getting this error in CMD when I try run it: ````File "metacritic\pipelines py" line 46 in WriteToDatabasePipeline cnx database = DB_NAME NameError: name 'DB_NAME' is not defined ```` Any idea why this is? It looks to me like DB_NAME is defined okay? I just want to make the database for now and try add in tables afterwards Thanks for any help
You are defining `DB_NAME` inside the `MetacriticPipeline` class; move it to the top of the script (or better, move it to the scrapy settings):
````
from __future__ import print_function
from metacritic.items import MetacriticItem
from mysql.connector import errorcode
import mysql.connector
import json

DB_NAME = 'metacritic'

class MetacriticPipeline(object):
````
dask distributed memory error. I got the following error on the scheduler while running Dask on a distributed job:
````
distributed.core - ERROR -
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/distributed/core.py", line 269, in write
    frames = protocol.dumps(message)
  File "/usr/local/lib/python3.4/dist-packages/distributed/protocol.py", line 81, in dumps
    frames = dumps_msgpack(small)
  File "/usr/local/lib/python3.4/dist-packages/distributed/protocol.py", line 153, in dumps_msgpack
    payload = msgpack.dumps(message, use_bin_type=True)
  File "/usr/local/lib/python3.4/dist-packages/msgpack/__init__.py", line 47, in packb
    return Packer(**kwargs).pack(o)
  File "msgpack/_packer.pyx", line 231, in msgpack._packer.Packer.pack (msgpack/_packer.cpp:231)
  File "msgpack/_packer.pyx", line 239, in msgpack._packer.Packer.pack (msgpack/_packer.cpp:239)
MemoryError
````
Is this running out of memory on the scheduler or on one of the workers? Or both?
The most common cause of this error is trying to collect too much data, such as occurs in the following example using dask.dataframe:
````
df = dd.read_csv('s3://bucket/lots-of-data-*.csv')
df.compute()
````
This loads all of the data into RAM across the cluster (which is fine) and then tries to bring the entire result back to the local machine by way of the scheduler (which probably cannot handle your 100's of GB of data all in one place). Worker-to-client communications pass through the scheduler, so it is the first single machine to receive all of the data and the first machine likely to fail. If this is the case then you instead probably want to use the `Executor.persist` method to trigger computation but leave it on the cluster:
````
df = dd.read_csv('s3://bucket/lots-of-data-*.csv')
df = e.persist(df)
````
Generally we only use `df.compute()` for small results that we want to view in our local session.
python: fast and easy way to compare these lists? I wrote a function for this, but I think it's probably wildly inefficient and over-complicated, so I wanted to ask if there was an easy way to do it. Given two lists of lists:
````
foo = [['one', 1], ['two', 1], ['three', 1]]
bar = [['three', 1], ['four', 1], ['five', 1]]
````
I need a function that will return:
````
final = [['one', 1], ['two', 1], ['three', 2], ['four', 1], ['five', 1]]
````
so it checks if there are any overlaps of the first term, adds the second terms together, and then returns a final list like above. EDIT: foo/bar[1:] are guaranteed to be ordered, but they could be like this:
````
foo = [['the', 100], ['at', 99], ['for', 32]]
bar = [['mitochondria', 20], ['at', 10], ['you', 9]]
````
In other words they would be relatively random words paired with descending numbers.
````
>>> foo = [['one', 1], ['two', 1], ['three', 1]]
>>> bar = [['three', 1], ['four', 1], ['five', 1]]
>>> from collections import Counter
>>> Counter(dict(foo)) + Counter(dict(bar))
Counter({'three': 2, 'four': 1, 'five': 1, 'two': 1, 'one': 1})
````
so
````
>>> (Counter(dict(foo)) + Counter(dict(bar))).items()
[('four', 1), ('five', 1), ('three', 2), ('two', 1), ('one', 1)]
````
if the order is important:
````
>>> from collections import OrderedDict
>>> counter = Counter(dict(foo)) + Counter(dict(bar))
>>> order = OrderedDict(foo + bar).keys()
>>> [[k, counter[k]] for k in order]
[['one', 1], ['two', 1], ['three', 2], ['four', 1], ['five', 1]]
````
If the items are gathered into a list `L`:
````
>>> foo = [['one', 1], ['two', 1], ['three', 1]]
>>> bar = [['three', 1], ['four', 1], ['five', 1]]
>>> from collections import Counter
>>> from collections import OrderedDict
>>> from itertools import chain
>>> L = [foo, bar]
>>> counter = Counter()
>>> for item in L:
...     counter.update(dict(item))
>>> order = OrderedDict(chain.from_iterable(L))
>>> [[k, counter[k]] for k in order]
[['one', 1], ['two', 1], ['three', 2], ['four', 1], ['five', 1]]
````
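If you would rather not use `Counter`, a minimal sketch with `collections.defaultdict` that also preserves first-seen order (assuming the same `foo`/`bar` structure as in the question):
````
from collections import defaultdict

def merge_counts(*lists):
    totals = defaultdict(int)
    order = []                       # first-seen order of the words
    for pairs in lists:
        for word, count in pairs:
            if word not in totals:
                order.append(word)
            totals[word] += count
    return [[word, totals[word]] for word in order]

foo = [['one', 1], ['two', 1], ['three', 1]]
bar = [['three', 1], ['four', 1], ['five', 1]]
print(merge_counts(foo, bar))
# [['one', 1], ['two', 1], ['three', 2], ['four', 1], ['five', 1]]
````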
Python function returns None unclear why I am pretty new to python and am hitting an issue I cannot explain I have tried searching through the forum answers here but what I am finding does not match up with my situation It feels like I am missing something pretty basic but I am not seeing it (obviously ) This code runs the way I expect: ````import string mults = [1 2 3 4 6 7 9 10 12 15 16 19 21 22 24] def factor_exp(lst): if lst[-1] == 1: lst pop() return lst+[1] if lst[-1] == 2: lst pop() return lst+[1 1] else: return "Should never get here" print factor_exp([1]) print factor_exp([2]) print factor_exp([1 2]) ```` This returns: ````&gt;&gt;&gt; [1] [1 1] [1 1 1] ```` Which is what I want I thought using append and extend on the list inside the function would work also One "append" added near the bottom of the code ````import string mults = [1 2 3 4 6 7 9 10 12 15 16 19 21 22 24] def factor_exp(lst): if lst[-1] == 1: lst pop() return lst+[1] if lst[-1] == 2: lst pop() return lst append([1 1]) else: return "Should never get here" print factor_exp([1]) print factor_exp([2]) print factor_exp([1 2]) ```` But this returns: ````&gt;&gt;&gt; [1] None None ```` Why are the "None's" appearing? Thanks in advance for any help or insights
I did not study your whole code, but I would say that it is because of this line:
````
return lst.append([1, 1])
````
`list.append()` always returns `None`. So `lst.append([1, 1])` will append `[1, 1]` to `lst` and return `None`.
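A minimal sketch of the fix, keeping the shape of the function from the question: mutate the list as a statement, then return the list itself (or stick with `lst + [1, 1]`, which builds and returns a new list):
````
def factor_exp(lst):
    if lst[-1] == 1:
        lst.pop()
        return lst + [1]
    elif lst[-1] == 2:
        lst.pop()
        lst.extend([1, 1])   # extend/append return None, so call them as statements
        return lst           # ...and return the list afterwards
    else:
        return "Should never get here"

print(factor_exp([1]))     # [1]
print(factor_exp([2]))     # [1, 1]
print(factor_exp([1, 2]))  # [1, 1, 1]
````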
Python MemoryError in dictionary of millions of GeoNames locations? I am trying to create a dictionary of location names and information from Geonames to use in a program that reads documents extracts location names and outputs their information Keys are location names and a list of tuples of the latitude and longitude country code feature class and GeoName ID corresponding to each name (as there can be multiple locations with the same name) are values Here is an example excerpt of the dictionary: ````{'xixerella': [(('42 55327' '1 48736') 'AD' 'PEOPLE' '3038816') (('42 55294' '1 48764') 'AD' 'ADMD' '3038817')] 'fonts vives': [(('42 5' '1 56667') 'AD' 'SPNG' '3038822')] 'roc del xeig': [(('42 56667' '1 48333') 'AD' 'RK' '3038820')] 'costa de xurius': [(('42 5' '1 48333') 'AD' 'SLP' '3038814')]} ```` The final dictionary has 9 088 105 keys When I try to dump it into a file with pickle so I can reference it in my other program it throws this error: ````Python(763 0xa03871a8) malloc: *** mach_vm_map(size=50331648) failed (error code=3) *** error: cannot allocate region *** set a breakpoint in malloc_error_break to debug Traceback (most recent call last): File "/Applications/Wing101 app/Contents/MacOS/src/debug/tserver/_sandbox py" line 31 in <module&gt; File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 1370 in dump Pickler(file protocol) dump(obj) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 224 in dump self save(obj) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 286 in save f(self obj) # Call unbound method with explicit self File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 649 in save_dict self _batch_setitems(obj iteritems()) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 663 in _batch_setitems save(v) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 286 in save f(self obj) # Call unbound method with explicit self File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 600 in save_list self _batch_appends(iter(obj)) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 615 in _batch_appends save(x) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 286 in save f(self obj) # Call unbound method with explicit self File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 562 in save_tuple save(element) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 286 in save f(self obj) # Call unbound method with explicit self File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 581 in save_tuple self memoize(obj) File "/Library/Frameworks/Python framework/Versions/2 7/lib/python2 7/pickle py" line 247 in memoize self memo[id(obj)] = memo_len obj MemoryError: ```` Is there a data structure I should be using instead of a dictionary? What can I do to cut down on memory usage? 
This is my program as is: ````import csv import sys import pickle geodict = {} ignore = ["" " " " " " " "-" " -" "- " " - "] csv field_size_limit(sys maxsize) reader = csv reader(open('allCountries-2 txt' 'rb') delimiter='\t') for row in reader: loc = [] loc append(row[2] lower()) if row[3] != '': altnames = row[3] split(' ') for entry in altnames: entry = "" join(x for x in entry if ord(x)<128) entry = entry lower() if entry not in loc: if entry not in ignore: loc append(entry) geoid = row[0] latlong = (row[4] row[5]) feature = row[7] country = row[8] for name in loc: if name in geodict: geodict[name] append((latlong country feature geoid)) else: geodict[name] = [(latlong country feature geoid)] with open('dict txt' 'wb') as handle: pickle dump(geodict handle) ```` If you are unfamiliar with the format/contents of the Geonames file: it is a 1 14 GB tab delimited text file row[2] is the location name in plain ASCII chars row[3] is alternative location names (sometimes there are no alt names; I strip non-ASCII because there are some crazy accented characters Chinese/Japanese/etc characters that Python does not like) If anything else is unclear just ask Please help! Thank you!
When working with data structures that big, you should probably switch to <a href="https://code.google.com/p/streaming-pickle/" rel="nofollow">streaming pickle</a>. It works in a very similar manner to regular pickle, but loads/saves in a streaming (incremental) manner, thus using far less memory.
Reading csv through DictReader. I am just trying to read a csv using the first row as keys for the dictionary, with no success. My file has two lines (test), items being delimited by tabs:
````
subjectID  logID    logTimestamp  gameKey   userID  writer   gameTime  gameCode  gameDesc
9991       1711774  6/13/14       E9E91B56  L11-13  general  358       1002      GAMESCRIBE_CREATED
````
Code:
````
def process_data(file_path):
    users = {}
    # Open the csv file and create a dict of actions
    with open(file_path, 'rb') as csvfile:
        spamreader = csv.DictReader(csvfile, delimiter='\t')
        print spamreader
        print spamreader.fieldnames
    for row in spamreader:
        print row
````
I keep receiving this error:
````
['subjectID', 'logID', 'logTimestamp', 'gameKey', 'userID', 'writer', 'gameTime', 'gameCode', 'gameDesc']
Traceback (most recent call last):
  File "logEvents.py", line 41, in <module>
    process_data(logs_path)
  File "logEvents.py", line 11, in process_data
    for row in spamreader:
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/csv.py", line 108, in next
    row = self.reader.next()
ValueError: I/O operation on closed file
````
I do not know what I am doing wrong.
You have `for row in spamreader` outside the `with` block in your actual code; it needs to be inside it:
````
with open(file_path, 'rb') as csvfile:
    spamreader = csv.DictReader(csvfile, delimiter='\t')
    print spamreader
    for row in spamreader:
        # your code
        print row
````
Once you leave the `with` block the file is closed, so trying to read from the file object fails.
Target KeyboardInterrupt to subprocess. I wish to launch a rather long-running subprocess in Python and would like to be able to terminate it with `^C`. However, pressing `^C` leads to the parent receiving `KeyboardInterrupt` and terminating (and sometimes leaves `sleep` as a defunct process).
````
import subprocess
subprocess.call("sleep 100".split())
````
How do I have it such that pressing `^C` only terminates the `sleep` process (as we would have on a shell command line) and allows the parent to continue? I believe I tried some combinations of using `preexec_fn`, `start_new_session` and `shell` flags to `call`, but with no success. <strong>Edit</strong>: I know I can wrap the `subprocess` invocation in a `try/except` block and ignore the keyboard interrupt; but I do not want to do that. My question is this: the keyboard interrupt should have killed the `sleep` and should have been the end of it. Why is it then propagated, as it were, to the parent? Or is it that the `sleep` process was never the one to receive the interrupt? If not, how would I make it the foreground process? Again, I am trying to emulate the parent-child relationship of a command line. If I were to do the equivalent on a command line, I can get away without needing extra handling.
Use `signal` to catch SIGINT and make the signal handler terminate the subprocess. Look at this for more information (if it is for Python 2.x): <a href="https://docs.python.org/2/library/signal.html" rel="nofollow">https://docs.python.org/2/library/signal.html</a>
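A minimal sketch of that idea, assuming the `sleep 100` child from the question: the parent installs a SIGINT handler that terminates the child and keeps running itself (details such as EINTR retries are left out):
````
import signal
import subprocess

proc = subprocess.Popen(["sleep", "100"])

def handle_sigint(signum, frame):
    # forward Ctrl-C to the child only; the parent carries on
    proc.terminate()

signal.signal(signal.SIGINT, handle_sigint)

proc.wait()   # returns once the child exits or is terminated
print("parent is still alive")
````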
Django packages for cloning Failblog. I am considering making something similar to <a href="http://failblog.org" rel="nofollow">Failblog</a> using Django. That is, a blog where every post has a main picture or a main video, people can comment and vote on them, people can upload pictures and (YouTube-hosted) videos so they will become posts, and there is a page where users can vote on those newly uploaded items before they are posted to the main stream - and that is about it (I want it to be less cluttered than Failblog). Could you point me at some Django packages that might be useful for such a project?
<a href="https://github com/nathanborror/django-basic-apps" rel="nofollow">Django basic apps </a>
What aspect of buildings became very rare?
Internal courtyards
Unhashable type 'dict' when trying to send an Elasticsearch geo query. I am trying to get some geodata out of ES via the following snippet:
````
result = es.search(
    index="loc",
    body={
        {
            "filtered" : {
                "query" : {
                    "field" : { "text" : "restaurant" }
                },
                "filter" : {
                    "geo_distance" : {
                        "distance" : "12km",
                        "location" : {
                            "lat" : 40,
                            "lon" : -70
                        }
                    }
                }
            }
        }
    }
)
````
The query, however, does not succeed due to the following error:
````
    "lon" : -70
TypeError: unhashable type: 'dict'
````
The location field is correctly mapped to the geo_point type, and the query is taken from the <a href="https://www.elastic.co/blog/geo-location-and-search" rel="nofollow">official examples</a>. Is there something wrong with the way I wrote the query?
You are nesting a `dict` inside a `set`. Remove the outer curly braces to resolve the issue. The error stems from the fact that sets cannot contain unhashable values such as a dict (thanks @Matthias).
````
body = {
    "filtered" : {
        "query" : {
            "field" : { "text" : "restaurant" }
        },
        "filter" : {
            "geo_distance" : {
                "distance" : "12km",
                "location" : {
                    "lat" : 40,
                    "lon" : -70
                }
            }
        }
    }
}
````
Python script results in internal server error. I am working on a python script that generates html. Everything was working until I changed this:
````
i=1;
for lijn in open("enquete.txt"):
    slecht=0;
    for lijn1 in open("enqueteres.txt"):
        var=lijn1.split(":");
````
to this:
````
i=1;
for lijn in open("enquete.txt"):
    slecht=0;
    for lijn1 in open("enqueteres.txt"):
        var=lijn1.split(":");
        if var[3+i] == "slecht":
            slecht+=1;
````
Now I am getting an internal server error. I tested the if test separately and it works. Here is my error log:
````
AH01215: File "C:/wamp/www/Test/enqueteres.py", line 53\r, referer: http://localhost/Test/verwerk.py
AH01215:     if var[3+i] == "slecht":\r, referer: http://localhost/Test/verwerk.py
AH01215:     ^\r, referer: http://localhost/Test/verwerk.py
AH01215: IndentationError: unexpected indent\r, referer: http://localhost/Test/verwerk.py
````
Line 53 is the location of the if test. Can someone help me? Thanks in advance. EDIT: I copied the code back into the editor as Alex suggested and it is working! I have no idea why, because it looks the same as before. Thanks!
There seems to be a '\r' character at the end of your line. Try deleting the line and retyping it. Also, I would suggest removing the semicolons; they are not necessary. Also ensure that you are not intermingling tabs and spaces.
Clock in Python I am currently working on creating a clock using python 2 7 5 and pygame Here is my code so far: ````screen = display set_mode((800 600)) running=True while running: for e in event get(): if e type == QUIT: running = False screen fill((0 0 0)) draw circle(screen (255 255 255) (400 300) 100 3) ang = 270 def clock(secs mins hours): ang = 270+6*secs dx = int(cos(radians(-ang))*100) dy = int(sin(radians(-ang))*100) draw line(screen (255 0 255) (400 300) (400+dx 300-dy) 1) time wait(1000) ang+=6 clock(1 1 1) display flip() quit() ```` I am currently having problems with the adding the minutes and hours hands What I would like to have happen I would like to be able to set any parameter(within 0 60) in the secs so that the seconds hand on the clock can begin from that point on the clock and continue to move 6 degrees clockwise from that point every second I can currently only have either the hand moving every second but only starting from 0 degrees (the 12 on the clock) or I can have it start according to the parameters that are imputed but it does not move I would also like to know how I would set different time waits for different hands on the clock if that is possible Thanks
Working example I use `clock tick(1)` to wait 1 second Use code from comments to get smoother move ````from pygame import * from pygame locals import * from math import * #---------------------------------------------------------------------- init() screen = display set_mode((800 600)) clock = time Clock() hours = 5 minutes = 59 seconds = 50 running=True while running: # events for e in event get(): if e type == QUIT: running = False # (variable) modifications seconds_ang = 270 6 * seconds seconds_dx = int(cos(radians(-seconds_ang))*200) seconds_dy = int(sin(radians(-seconds_ang))*200) minutes_ang = 270 6 * minutes #minutes_ang = 270 6 * (minutes seconds/60 ) # smoother move minutes_dx = int(cos(radians(-minutes_ang))*200) minutes_dy = int(sin(radians(-minutes_ang))*200) hours_ang = 270 30 * hours #hours_ang = 270 30 * (hours minutes/60 ) # smoother move hours_dx = int(cos(radians(-hours_ang))*150) hours_dy = int(sin(radians(-hours_ang))*150) seconds = 1 if seconds == 60: seconds = 0 minutes = 1 if minutes == 60: minutes = 0 hours = 1 if hours == 12: hours = 0 # draw screen fill((0 0 0)) draw circle(screen (255 255 255) (400 300) 200 3) draw line(screen (0 255 255) (400 300) (400 hours_dx 300 - hours_dy) 5) draw line(screen (255 255 0) (400 300) (400 minutes_dx 300 - minutes_dy) 2) draw line(screen (255 0 255) (400 300) (400 seconds_dx 300 - seconds_dy) 1) # flip display flip() # FPS (Frames Per Seconds) clock tick(1) quit() ```` <hr> Using `get_ticks()` I can set any `FPS` and I can have second clock working 10 times faster :) ````from pygame import * from pygame locals import * from math import * #---------------------------------------------------------------------- init() screen = display set_mode((800 600)) clock = time Clock() hours = 5 minutes = 59 seconds = 50 hours2 = 5 minutes2 = 59 seconds2 = 50 time_to_move_clock_1 = time get_ticks() time_to_move_clock_2 = time get_ticks() running=True while running: # events for e in event get(): if e type == QUIT: running = False # (variable) modifications if time_to_move_clock_1 <= time get_ticks(): time_to_move_clock_1 = 1000 # 1000ms = 1s seconds_ang = 270 6 * seconds seconds_dx = int(cos(radians(-seconds_ang))*200) seconds_dy = int(sin(radians(-seconds_ang))*200) minutes_ang = 270 6 * (minutes seconds/60 ) minutes_dx = int(cos(radians(-minutes_ang))*200) minutes_dy = int(sin(radians(-minutes_ang))*200) hours_ang = 270 30 * (hours minutes/60 ) hours_dx = int(cos(radians(-hours_ang))*150) hours_dy = int(sin(radians(-hours_ang))*150) seconds = 1 if seconds == 60: seconds = 0 minutes = 1 if minutes == 60: minutes = 0 hours = 1 if hours == 12: hours = 0 if time_to_move_clock_2 <= time get_ticks(): time_to_move_clock_2 = 100 # 100ms = 0 1s (10 times faster) seconds_ang2 = 270 6 * seconds2 seconds_dx2 = int(cos(radians(-seconds_ang2))*100) seconds_dy2 = int(sin(radians(-seconds_ang2))*100) minutes_ang2 = 270 6 * (minutes2 seconds2/60 ) minutes_dx2 = int(cos(radians(-minutes_ang2))*100) minutes_dy2 = int(sin(radians(-minutes_ang2))*100) hours_ang2 = 270 30 * (hours2 minutes2/60 ) hours_dx2 = int(cos(radians(-hours_ang2))*75) hours_dy2 = int(sin(radians(-hours_ang2))*75) seconds2 = 1 if seconds2 == 60: seconds2 = 0 minutes2 = 1 if minutes2 == 60: minutes2 = 0 hours2 = 1 if hours2 == 12: hours2 = 0 # draw screen fill((0 0 0)) draw circle(screen (255 255 255) (400 300) 200 3) draw line(screen (0 255 255) (400 300) (400 hours_dx 300 - hours_dy) 5) draw line(screen (255 255 0) (400 300) (400 minutes_dx 300 - minutes_dy) 2) draw line(screen (255 0 255) 
(400 300) (400 seconds_dx 300 - seconds_dy) 1) draw circle(screen (255 255 255) (100 100) 100 3) draw line(screen (0 255 255) (100 100) (100 hours_dx2 100 - hours_dy2) 5) draw line(screen (255 255 0) (100 100) (100 minutes_dx2 100 - minutes_dy2) 2) draw line(screen (255 0 255) (100 100) (100 seconds_dx2 100 - seconds_dy2) 1) # flip display flip() # FPS (Frames Per Seconds) clock tick(25) quit() ```` <hr> Using `class` I can get more clocks :) I removed `clock tick(60)` to get faster clocks but it makes my CPU very hot ````from pygame import * from pygame locals import * from math import * #---------------------------------------------------------------------- class Clock(object): def __init__(self hours minutes seconds cx cy are ticks): self hours = hours self minutes = minutes self seconds = seconds self cx = cx self cy = cy self are = r self ticks = ticks self time_to_move = 0 def update(self current_ticks): if current_ticks &gt; self time_to_move: self time_to_move = self ticks self seconds_ang = 270 6 * self seconds self seconds_dx = self cx int(cos(radians(-self seconds_ang))*self r) self seconds_dy = self cy - int(sin(radians(-self seconds_ang))*self r) self minutes_ang = 270 6 * (self minutes self seconds/60 ) self minutes_dx = self cx int(cos(radians(-self minutes_ang))*self r) self minutes_dy = self cy - int(sin(radians(-self minutes_ang))*self r) self hours_ang = 270 30 * (self hours self minutes/60 ) self hours_dx = self cx int(cos(radians(-self hours_ang))*self r* 75) self hours_dy = self cy - int(sin(radians(-self hours_ang))*self r* 75) self seconds = 1 if self seconds == 60: self seconds = 0 self minutes = 1 if self minutes == 60: self minutes = 0 self hours = 1 if self hours == 12: self hours = 0 def draw(self screen): draw circle(screen (255 255 255) (self cx self cy) self are 3) draw line(screen (0 255 255) (self cx self cy) (self hours_dx self hours_dy) 5) draw line(screen (255 255 0) (self cx self cy) (self minutes_dx self minutes_dy) 2) draw line(screen (255 0 255) (self cx self cy) (self seconds_dx self seconds_dy) 1) #---------------------------------------------------------------------- init() screen = display set_mode((800 600)) clock = time Clock() clock1 = Clock(5 59 0 400 300 200 1000) clock2 = Clock(5 59 0 100 100 100 100) clock3 = Clock(3 9 0 700 100 100 100) clock4 = Clock(7 24 0 100 500 100 10) clock5 = Clock(11 30 0 700 500 100 10) running=True while running: # events for e in event get(): if e type == QUIT: running = False # (variable) modifications clock1 update(time get_ticks()) clock2 update(time get_ticks()) clock3 update(time get_ticks()) clock4 update(time get_ticks()) clock5 update(time get_ticks()) # draw screen fill((0 0 0)) clock1 draw(screen) clock2 draw(screen) clock3 draw(screen) clock4 draw(screen) clock5 draw(screen) # flip display flip() # FPS (Frames Per Seconds) #clock tick(60) quit() ```` And the same with timers (`time set_timer()`) ````from pygame import * from pygame locals import * from math import * #---------------------------------------------------------------------- class Clock(object): def __init__(self hours minutes seconds cx cy r): self hours = hours self minutes = minutes self seconds = seconds self cx = cx self cy = cy self are = r self hours_dx = self cx self hours_dy = self cy - self r* 75 self minutes_dx = self cx self minutes_dy = self cy - self r self seconds_dx = self cx self seconds_dy = self cy - self r def update(self): self seconds_ang = 270 6 * self seconds self seconds_dx = self cx int(cos(radians(-self 
seconds_ang))*self r) self seconds_dy = self cy - int(sin(radians(-self seconds_ang))*self r) self minutes_ang = 270 6 * (self minutes self seconds/60 ) self minutes_dx = self cx int(cos(radians(-self minutes_ang))*self r) self minutes_dy = self cy - int(sin(radians(-self minutes_ang))*self r) self hours_ang = 270 30 * (self hours self minutes/60 ) self hours_dx = self cx int(cos(radians(-self hours_ang))*self r* 75) self hours_dy = self cy - int(sin(radians(-self hours_ang))*self r* 75) self seconds = 1 if self seconds == 60: self seconds = 0 self minutes = 1 if self minutes == 60: self minutes = 0 self hours = 1 if self hours == 12: self hours = 0 def draw(self screen): draw circle(screen (255 255 255) (self cx self cy) self are 3) draw line(screen (0 255 255) (self cx self cy) (self hours_dx self hours_dy) 5) draw line(screen (255 255 0) (self cx self cy) (self minutes_dx self minutes_dy) 2) draw line(screen (255 0 255) (self cx self cy) (self seconds_dx self seconds_dy) 1) #---------------------------------------------------------------------- init() screen = display set_mode((800 600)) clock = time Clock() clock1 = Clock(5 59 0 400 300 200) clock2 = Clock(5 59 0 100 100 100) clock3 = Clock(3 9 0 700 100 100) clock4 = Clock(7 24 0 100 500 100) clock5 = Clock(11 30 0 700 500 100) CLOCK1 = USEREVENT+1 CLOCK2 = USEREVENT+2 CLOCK3 = USEREVENT+3 CLOCK4 = USEREVENT+4 CLOCK5 = USEREVENT+5 timer1 = time set_timer(CLOCK1 1000) timer2 = time set_timer(CLOCK2 100) timer3 = time set_timer(CLOCK3 100) timer4 = time set_timer(CLOCK4 10) timer5 = time set_timer(CLOCK5 10) running=True while running: # events for e in event get(): if e type == QUIT: running = False elif e type == CLOCK1: clock1 update() elif e type == CLOCK2: clock2 update() elif e type == CLOCK3: clock3 update() elif e type == CLOCK4: clock4 update() elif e type == CLOCK5: clock5 update() # (variable) modifications # draw screen fill((0 0 0)) clock1 draw(screen) clock2 draw(screen) clock3 draw(screen) clock4 draw(screen) clock5 draw(screen) # flip display flip() # FPS (Frames Per Seconds) clock tick(60) quit() ```` <img src="http://i stack imgur com/NuhWn png" alt="enter image description here">
Reddit Script - Admin Account in example.ini. I have installed and set up reddit. It is working, but I am unable to add an admin account. How can I add an admin account? The documentation states:
````
Create an account and add it to the admins line in example.ini
````
But I am unable to find an admins line. My second question is: with which URL do I access the admin panel?
You are looking for <a href="https://github.com/reddit/reddit/blob/master/r2/example.ini#L746-L747" rel="nofollow">the `employees` section</a>:
````
# permissions! each user should have one of admin, sponsor, or employee as their permission level
employees = reddit:admin
````
An account with administrator privileges will have a "turn admin on" link in the top-right; after enabling that you will get an additional menu with links to various tools.
Modifying a Python dict while iterating over it. Let us say we have a Python dictionary `d` and we are iterating over it like so:
````
for k, v in d.iteritems():
    del d[f(k)]   # remove some item
    d[g(k)] = v   # add a new item
````
(`f` and `g` are just some black-box transformations.) In other words, we try to add/remove items to `d` while iterating over it using `iteritems`. Is this well defined? Could you provide some references to support your answer? (It is pretty obvious how to fix this if it is broken, so this is not the angle I am after.)
You cannot do that, at least with `d.iteritems()`. I tried it, and Python fails with:
````
RuntimeError: dictionary changed size during iteration
````
If you instead use `d.items()`, then it works. In Python 3, `d.items()` is a view into the dictionary, like `d.iteritems()` in Python 2. To do this in Python 3, instead use `d.copy().items()`. This will similarly allow us to iterate over a copy of the dictionary in order to avoid modifying the data structure we are iterating over.
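A minimal sketch of the copy-then-iterate approach that works on both Python 2 and 3, with hypothetical `f` and `g` stand-ins since the question leaves them as black boxes:
````
d = {1: 'a', 2: 'b', 3: 'c'}
f = lambda k: k          # hypothetical: which key to delete
g = lambda k: k + 10     # hypothetical: which key to add

for k, v in list(d.items()):   # snapshot of the items; safe to mutate d below
    del d[f(k)]
    d[g(k)] = v

print(d)   # {11: 'a', 12: 'b', 13: 'c'}
````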
Who must sign a Show Up?
null
Can the python client tcp socket be used for multiple sends? I tried using a socket for 2 sends The first one succeeds and the next one does not From the <a href="http://docs python org/howto/sockets html" rel="nofollow">http://docs python org/howto/sockets html</a> it would appear that multiple sends should be allowed For Better or worse I do not really need to read from the socket I have used twisted but for the present purpose I would like to stick to a socket if I can help it(partly because I am using it within an application already using twisted to communicate this is a seperate connection) "When the connect completes the socket s can be used to send in a request for the text of the page The same socket will read the reply and then be destroyed That’s right destroyed Client sockets are normally only used for one exchange (or a small set of sequential exchanges) " return value for the send that succeeds = 35 return value for the send that FAILS = 32 code with some minor editing to remove any business logic ````self _commandSock = socket socket(socket AF_INET socket SOCK_STREAM) def sendPrereqs(self id prereqs): self _commandSock connect(self _commandConnection) #parse prereqs temp = prereqs split(' ') for pair in temp: tup = pair partition(':') try: command = 'some command' logging info('sending command: ' command) ret = self _commandSock send(command) if ret == None: logging info('send called successfully: ' command) else: logging info('socket returned non-None: ' str(ret)) except: print 'Unexpected Exception ' sys exc_info()[0]() print sys exc_info() #logging info('Unexpected Exception '+ str(sys exc_info()[0]())) #logging info(' ' str(sys exc_info())) self _commandSock close()` ````
The documentation is only referring to a common scenario. You can call <a href="http://docs.python.org/library/socket.html#socket.socket.send" rel="nofollow">`send`</a>, <a href="http://docs.python.org/library/socket.html#socket.socket.sendall" rel="nofollow">`sendall`</a> and <a href="http://docs.python.org/library/socket.html#socket.socket.sendto" rel="nofollow">`sendto`</a> on all sockets as often as you want - as long as the socket is not closed. Note that these methods return the number of bytes sent; 32 and 35 simply mean you sent 32 bytes the first time and 35 bytes the second time. The fact that `socket.send` returns without an exception means that the data got handed to the operating system, but not that it actually reached the endpoint (or has been read correctly by an application there).
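Since `send` may also transmit fewer bytes than requested, a minimal sketch of the safer pattern for the sender in the question would use `sendall` for each command (the host, port and command strings here are placeholders):
````
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("localhost", 9000))   # hypothetical command connection

for command in ["first command\n", "second command\n"]:
    # sendall keeps calling send() until every byte is handed to the OS
    sock.sendall(command.encode("ascii"))

sock.close()
````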
Who was Madonna's first producer?
Mark Kamins
Commandline options with optional arguments using optparse I am looking for a way to make a commandline option with optional argument using `optparse` Python module For example there are two command lines: `foobar -you -v` and `foobar -you USERNAME -v` While latter gets `USERNAME` as an argument former should detect absence of an argument and get username from the environment variable or by other means and not treat `-v` as an argument but as an option instead I know there is possibility to take multiple arguments using `append` action but it requires at least one argument Are there any solutions? UPDATE: Why will not this script works if no arugment is provided ? I have given some value to Default also Cannot something like this can used ? ````#!/usr/bin/env python from optparse import OptionParser parser = OptionParser() parser add_option("-you" "--user" dest="in_dir" help="write report to FILE" action= "store" default='USERNAME') (options args) = parser parse_args() if options in_dir == 'USERNAME': print' argument supplied or taken from default ' else: print 'arugment supplied' ```` On executing this file we get /options py -f Usage: options py [options] options py: error: -f option requires an argument As I have given some values in the default Why cannot it take that value from there ? Is any there other solution to this?
AFAIK there is no solution with the `optparse` module Extract from The Python Standard Library (same for Python 2 2 and 3): <blockquote> Typically a given option either takes an argument or it doesn’t Lots of people want an “optional option arguments” feature meaning that some options will take an argument if they see it and won’t if they don’t This is somewhat controversial because it makes parsing ambiguous: if -a takes an optional argument and -b is another option entirely how do we interpret -ab? Because of this ambiguity optparse does not support this feature </blockquote> You could have more luck with the newer `argparse` module that explicitely support that kind of requirement Extracts from The Python Standard Library for `argparse` <blockquote> The add_argument() method ArgumentParser add_argument(name or flags [ action][ nargs][ const][ default][ type][ choices][ required][ help][ metavar][ dest]) Define how a single command-line argument should be parsed Each parameter has its own more detailed description below but in short they are: nargs - The number of command-line arguments that should be consumed </blockquote> and later in `nargs` description : <blockquote> '?' One argument will be consumed from the command line if possible and produced as a single item If no command-line argument is present the value from default will be produced </blockquote>
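A minimal sketch of that `argparse` approach for the user option from the question (the `getpass` fallback is an assumption about how the username would otherwise be obtained):
````
import argparse
import getpass

parser = argparse.ArgumentParser()
# nargs='?' makes the value optional: `const` is used when the flag is given
# without a value, `default` when the flag is absent entirely
parser.add_argument("-u", "--user", nargs="?", const=None, default=None)
parser.add_argument("-v", "--verbose", action="store_true")

args = parser.parse_args()
user = args.user or getpass.getuser()   # fall back to the current login name
print("user = %s, verbose = %s" % (user, args.verbose))
````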
Morningside Park interrupts which street?
120th Street
Why isn't the string value updated? I was writing a for loop that would take a list of strings and add a newline to the end of each string if it does not already have one. My first thought was the code below, which did not work:
````
for string in list:
    if not string.endswith('\n'):
        string += '\n'
````
I then came up with the version below, which did the trick:
````
for string in range(len(list)):
    if not list[string].endswith('\n'):
        list[string] += '\n'
````
I am confused as to why only the second one worked - could someone please help explain? Also, is there a better way to do this?
Since a string is an immutable object, in this code:
````
for string in list:
    if not string.endswith('\n'):
        string += '\n'
````
in every iteration the `string` variable gets assigned an element of `list`, then a new string is created with `'\n'` at the end, but this new string is never written back into the list.
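As for a better way: a minimal sketch that builds a new list with a list comprehension (assuming `lines` holds the original strings) avoids index bookkeeping entirely:
````
lines = ["alpha\n", "beta", "gamma\n"]

# append '\n' only to the entries that are missing it
lines = [s if s.endswith('\n') else s + '\n' for s in lines]

print(lines)   # ['alpha\n', 'beta\n', 'gamma\n']
````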
python binary version does not match rpm version. I have manually installed Python (2.7.3). How do I update the rpm version? `/usr/bin/python -V` reports `Python 2.7.3`, while `rpm -qf /usr/bin/python` reports `python-2.6.5-3.el6.x86_64`. Any suggestions? Linux version: RHEL 6.3.
Create a symlink in `/usr/bin/` called `python2.7`, point it to where you have installed the new Python, and use that. Do not attempt to upgrade or force the default python on a Red Hat box, because a lot of other tools will stop working.
How to delete/unset a cookie in web.py. In web.py you can get access to the request's cookies with `web.webapi.cookies()`, and you can set the value of a cookie with `web.webapi.setcookie(...)`. The documentation is not clear on how one <em>deletes</em> a cookie, however -- do you just `setcookie` with a value of None?
You are right, it is certainly not obvious from `setcookie()`'s docstring or from the online docs, but it is <a href="http://webpy.org/cookbook/cookies">there somewhere</a>:
<blockquote> The third (and optional) argument to `web.setcookie()`, "expires", allows you to set when you want your cookie to expire. <strong>Any negative number will expire the cookie immediately.</strong> </blockquote>
For example, here is part of what we do in our sign-out code (delete the user's session cookie):
````
web.setcookie('session', '', expires=-1, domain=session_cookie_domain)
````
Note that you must delete the cookie with the same domain and secure flag as you set it with, otherwise it will not delete. Also, with web.py you normally use `web.setcookie()` as a shortcut to `web.webapi.setcookie()`.
python: run external program and continue its execution independently. How can I run an external program, let us say "Firefox", from my python script and make sure that its process will remain alive after the termination of my python script? I want to make it cross-platform, if that is doable.
There is no cross-platform way to do this with just the stdlib. However, if you write code for POSIX and for Windows, that is usually good enough, right? On Windows you want to pass a <a href="http://docs.python.org/3/library/subprocess.html#subprocess.Popen" rel="nofollow">`creationflags`</a> argument. Read the docs (both there and <a href="http://msdn.microsoft.com/en-us/library/ms684863%28v=vs.85%29.aspx" rel="nofollow">at MSDN</a>) and decide whether you want a console-detached process, a new-console process, or a new-process-group process, then use the appropriate flag. You may also want to set some of the flags in `startupinfo`; again, <a href="http://msdn.microsoft.com/en-us/library/ms686331%28v=vs.85%29.aspx" rel="nofollow">MSDN</a> will tell you what they mean. On POSIX, if you just want the simplest behavior and you are using 3.2+, you want to pass `start_new_session=True`. In earlier Python versions, or for other cases, you want to pass a `preexec_fn` that allows you to do whatever daemonization you want. That could be as little as `os.setsid()` (what `start_new_session` does) or a whole lot more. See <a href="http://www.python.org/dev/peps/pep-3143/" rel="nofollow">PEP 3143 -- Standard daemon process library</a> for a discussion of all of the different things you might want to do here. So the simplest version is:
````
def launch_in_background(args):
    try:
        subprocess.CREATE_NEW_PROCESS_GROUP
    except AttributeError:
        # not Windows, so assume POSIX; if not, we will get a usable exception
        p = subprocess.Popen(args, start_new_session=True)
    else:
        # Windows
        p = subprocess.Popen(args, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
````
<hr>
If you are willing to go outside the stdlib, there are dozens of "shell-type functionality" libraries out there, many of which have some kind of "detach" functionality. Just search <a href="https://pypi.python.org/pypi?%3Aaction=search&amp;term=shell&amp;submit=search" rel="nofollow">`shell`</a>, <a href="https://pypi.python.org/pypi?%3Aaction=search&amp;term=cli&amp;submit=search" rel="nofollow">`cli`</a> or <a href="https://pypi.python.org/pypi?%3Aaction=search&amp;term=subprocess&amp;submit=search" rel="nofollow">`subprocess`</a> at PyPI and find the one you like best.
How to parse table in a text file? I am trying to scrape this page for the table of the holdings at the bottom to get the information in each column: <a href="https://www sec gov/Archives/edgar/data/1412093/000114036111027807/0001140361-11-027807 txt" rel="nofollow">https://www sec gov/Archives/edgar/data/1412093/000114036111027807/0001140361-11-027807 txt</a> What I have so far is this: ````from bs4 import BeautifulSoup import urllib2 import datetime import sys def scrape(url): htmlfile = urllib2 urlopen(url) htmltext = htmlfile read() bs = BeautifulSoup(htmltext) tables =bs find_all('table') for table in tables: print table if __name__ == '__main__': url = 'https://www sec gov/Archives/edgar/data/1412093/000114036111027807/0001140361-11-027807 txt' scrape(url) ```` However this only gets me the table and I cannot seem to further parse it going line by line Any help with this would be appreciated thanks!
The problem is that this is not an HTML table, but rather a space-delimited set of columns, which you would have to parse differently. Here is a quite naive but working solution, using `splitlines()` to split the table into rows and `split()` to split into columns:
````
import urllib2
from bs4 import BeautifulSoup

def scrape(url):
    htmlfile = urllib2.urlopen(url)
    htmltext = htmlfile.read()
    bs = BeautifulSoup(htmltext, "html.parser")
    data = bs.find('table').get_text().splitlines()[10:]
    for line in data:
        print([item for item in line.split()])

if __name__ == '__main__':
    url = 'https://www.sec.gov/Archives/edgar/data/1412093/000114036111027807/0001140361-11-027807.txt'
    scrape(url)
````
Prints:
````
['ADVENTRX', 'PHARMAMACEUTICALS', 'INC', 'COM', 'NEW', '00764X202', '289', '138,377', 'SH', 'SOLE', 'N/A', '138,377']
['AMGEN', 'INC', 'COM', '31162100', '54,519', '1,020,000', 'SH', 'SOLE', 'N/A', '1,020,000']
['SOUTHERN', 'UN', 'CO', 'NEW', 'COM', '844030106', '5,328', '186,154', 'SH', 'SOLE', 'N/A', '186,154']
['TAKE-TWO', 'INTERACTIVE', 'SOFTWAR', 'COM', '874054109', '151,310', '9,844,502', 'SH', 'SOLE', 'N/A', '9,844,502']
````
The most unreliable part is that `[10:]` slice; I am leaving this for you to improve.
Why was Prince Hall Freemasonry outlawed?
null
How to import lib in repy using seattle environment I must import a lib to my project in repy but it returns a foult I want to use module with Dijkstra algorithm to run it in Seattle vessels to show the shortest path between vessels ````from priodict import priorityDictionary def Dijkstra(G start end=None): D = {} # dictionary of final distances P = {} # dictionary of predecessors Q = priorityDictionary() # est dist of non-final vert Q[start] = 0 for v in Q: D[v] = Q[v] if v == end: break for w in G[v]: vwLength = D[v] G[v][w] if w in D: if vwLength < D[w]: raise ValueError \ "Dijkstra: found better path to already-final vertex" elif w not in Q or vwLength < Q[w]: Q[w] = vwLength P[w] = v return (D P) def shortestPath(G start end): D P = Dijkstra(G start end) Path = [] while 1: Path append(end) if end == start: break end = P[end] Path reverse() return Path ```` it throw error like this: ````Uncaught exception! Following is a full traceback and a user traceback The user traceback excludes non-user modules The most recent call is displayed last Full debugging traceback: "repy py" line 428 in <module&gt; "repy py" line 178 in main "D:\STUDIA\LAN\seattle\seattle_repy\virtual_namespace py" line 78 in __init _ User traceback: Exception (with type 'exceptions ValueError'): Code failed safety check! Error: ("<class 'safety_exceptions CheckNodeException'&gt; (4 'From')" ) ```` What should i do to make it works?
From what I can find on Google, you <a href="https://seattle.cs.washington.edu/wiki/PythonVsRepy#Importstatements" rel="nofollow">cannot use `import` in Repy</a>. Use something like
````
include priodict.repy
````
instead.
Django Model Class Missing in Admin For some reason the classes `archive` and `album` show all of their fields in the django admin panel but `image` is not showing an `album` field when I go to add an image to the image panel If I open a she will it shows that `album` is a subset of `image` it just is not showing up in the `image`'s admin interface (but it does through the CLI) Why? ````class archive(models Model): name = models CharField(max_length = 30) archivedata = models TextField(blank=True help_text="Documentation for album/image archivedata methodology is put here") def __unicode__(self): return self name class tag(models Model): archive = models ForeignKey(archive) tag = models CharField(max_length=60) def __unicode__(self): return self tag class album(models Model): archive = models ForeignKey(archive) title = models CharField(max_length=60) tags = models ManyToManyField(tag blank=True help_text="Searchable Keywords") archivedata = models TextField(blank=True null=True help_text="Data specific to particular archiving methods or processes can be stored here") def __unicode__(self): return self title class image(models Model): album = models ForeignKey(album) archive = models ForeignKey(archive) imagefile = models ImageField(upload_to='ns/') #image width/height title = models CharField(max_length=60 blank=True help_text="Descriptive image title") tags = models ManyToManyField(tag blank=True help_text="Searchable Keywords") ```` Update: Including my admin py per request: ````from django db models import get_models get_app from django contrib import admin from django contrib admin sites import AlreadyRegistered def autoregister(*app_list): for app_name in app_list: app_models = get_app(app_name) for model in get_models(app_models): try: admin site register(model) except AlreadyRegistered: pass autoregister('appname') ````
Your `admin.py` file should typically look like this:
````
from django.contrib import admin

admin.site.register(archive)
admin.site.register(album)
admin.site.register(image)
````
Based on your admin.py, I would do this:
````
autoregister('archive', 'album', 'image')
````
That said, a few pointers: your admin.py is a bit overly complicated and not needed when 3 lines will suffice. Additionally, you should be naming your models in uppercase (Archive, Album, Image).
python head and tail in one line. Is there a pythonic way of unpacking a list into the first element and the "tail" in a single command? For example:
````
>>> head, tail = **some_magic applied to** [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> head
1
>>> tail
[1, 2, 3, 5, 8, 13, 21, 34, 55]
````
Under Python 3.x you can do this nicely:
````
>>> head, *tail = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> head
1
>>> tail
[1, 2, 3, 5, 8, 13, 21, 34, 55]
````
A new feature in 3.x is to use the `*` operator in unpacking to mean any extra values. It is described in <a href="http://www.python.org/dev/peps/pep-3132/">PEP 3132 - Extended Iterable Unpacking</a>. This also has the advantage of working on any iterable, not just sequences. It is also <em>really</em> readable. As described in the PEP, if you want to do the equivalent under 2.x (without potentially making a temporary list), you have to do this:
````
it = iter(iterable)
head = it.next()
tail = list(it)
````
Naturally, if you are working on a list, the easiest way without the 3.x syntax is:
````
head, tail = seq[0], seq[1:]
````
What gambit did Nasser fail at in his bluster with Israel?
bluffing
Why is a list containing objects False when value tested? I would expect an empty list to value-test as False, but I am a bit confused why a reference to a list containing an object also reports as False when value tested, as in the following example:
````
>>> weapon = []
>>> weapon == True
False
>>> weapon.append("sword")
>>> weapon == True
False
>>> weapon
['sword']
````
If `weapon = []` is False, why would `weapon = ['sword']` also be False? According to the docs <a href="http://docs.python.org/release/2.4.4/lib/truth.html" rel="nofollow">http://docs.python.org/release/2.4.4/lib/truth.html</a> it should be True. What am I missing in my understanding of this?
`==` tests equality, not truthiness: a list never compares equal to `True`, so both `[] == True` and `['sword'] == True` are `False`. To value-test a list, use it directly in a boolean context (or call `bool()` on it):

````In [1]: w = []

In [2]: if w:
   ...:     print True
   ...: else:
   ...:     print False
   ...:
False

In [3]: w.append("sword")

In [4]: bool(w)
Out[4]: True
````
fastest way to iterate in python I have never had to concern myself with this problem so far but now I need to use some large number of vertices that need to be buffered by PyOpenGL and it seems like the python iteration is the bottleneck Here is the situation I have an array of 3D points `vertices` and at each step I have to compute a 4D array of colors for each vertices My approach so far is: ````upper_border = len(self vertices) / 3 #Only generate at first step otherwise use old one and replace values if self color_array is None: self color_array = numpy empty(4 * upper_border) for i in range(upper_border): #Obtain a color between a start>end color diff_activity = (activity[i] - self min) / abs_diff clr_idx = i * 4 self color_array[clr_idx] = start_colors[0] diff_activity * end_colors[0] self color_array[clr_idx 1] = start_colors[1] diff_activity * end_colors[1] self color_array[clr_idx 2] = start_colors[2] diff_activity * end_colors[2] self color_array[clr_idx 3] = 1 ```` Now I do not think there is anything else I can do to eliminate the operations from each step of the loop but I am guessing there has to be a more optimal performance way to do that loop I am saying that because in javascript for example the same calculus produces a 9FPS while in Python I am only getting 2-3 FPS Regards Bogdan
- First of all: <strong>profile your code</strong> with <a href="http://docs.python.org/library/profile.html">cProfile</a>
- You should use <a href="http://docs.python.org/library/functions.html#xrange">xrange</a> instead of range
- You should avoid looking up `self.color_array` 4 times on each loop iteration; create a local variable before the loop and use it inside the loop: `local_array = self.color_array`
- Try to pre-compute `start_colors[N]` and `end_colors[N]`: `start_color_0 = start_colors[0]`, etc.
- Try to use <a href="http://docs.python.org/tutorial/datastructures.html?highlight=extend">list.extend()</a> to reduce the number of statements in the loop:

````local_array.extend([
    start_colors_0 + diff_activity * end_colors_0,
    start_colors_1 + diff_activity * end_colors_1,
    start_colors_2 + diff_activity * end_colors_2,
    1,
])
````
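Since the per-vertex arithmetic is identical for every vertex, the whole loop can usually be replaced by vectorized NumPy operations. Below is a minimal sketch of that idea; the variable names (`activity`, `start_colors`, `end_colors`, the minimum and `abs_diff`) are taken from the question, and the reshaping into an `(n, 4)` array is my own assumption about the intended RGBA layout:

````import numpy as np

def build_color_array(activity, start_colors, end_colors, vmin, abs_diff):
    """Vectorized replacement for the per-vertex colour loop (sketch)."""
    diff_activity = (np.asarray(activity) - vmin) / abs_diff   # shape (n,)
    n = diff_activity.shape[0]

    colors = np.empty((n, 4))
    # start + diff * end for the RGB channels, broadcast over all vertices
    colors[:, 0] = start_colors[0] + diff_activity * end_colors[0]
    colors[:, 1] = start_colors[1] + diff_activity * end_colors[1]
    colors[:, 2] = start_colors[2] + diff_activity * end_colors[2]
    colors[:, 3] = 1.0                                         # constant alpha

    return colors.ravel()   # flat RGBA array, same layout as the original color_array
````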
Python 3 3 2 - 'Grouping' System with Characters I have a fun little problem I need to count the amount of 'groups' of characters in a file Say the file is ```` ## # # ## #### ### ### ## # ```` The code will then count the amount of groups of #'s For example the above would be `3` It includes diagonals Here is my code so far: ````build = [] height = 0 with open('file txt') as i: build append(i) height = 1 length = len(build[0]) dirs = {'up':(-1 0) 'down':(1 0) 'left':(0 -1) 'right':(0 1) 'upleft':(-1 -1) 'upright':(-1 1) 'downleft':(1 -1) 'downright':(1 1)} def find_patches(grid length): queue = [] queue append((0 0)) patches = 0 while queue: current = queue pop(0) line cell = path[-1] if ## This is where I am at I was making a pathfinding system ````
Here’s a naive solution I came up with Originally I just wanted to loop through all the elements once an check for each if I can put it into an existing group That didn’t work however as some groups are only combined later (e g the first `#` in the second row would not belong to the big group until the second `#` in that row is processed) So I started working on a merge algorithm and then figured I could just do that from the beginning So how this works now is that I put every `#` into its own group Then I keep looking at combinations of two groups and check if they are close enough to each other that they belong to the same group If that’s the case I merge them and restart the check If I completely looked at all possible combinations and could not merge any more I know that I’m done ````from itertools import combinations product def canMerge (g h): for i j in g: for x y in h: if abs(i - x) <= 1 and abs(j - y) <= 1: return True return False def findGroups (field): # initialize one-element groups groups = [[(i j)] for i j in product(range(len(field)) range(len(field[0]))) if field[i][j] == '#'] # keep joining until no more joins can be executed merged = True while merged: merged = False for g h in combinations(groups 2): if canMerge(g h): g extend(h) groups remove(h) merged = True break return groups # intialize field field = '''\ ## # # ## #### ### ### ## # ''' splitlines() groups = findGroups(field) print(len(groups)) # 3 ````
Ceaser Cipher-Python using large numbers for shifts I need help with my ceaser cipher code I need to be able to shift/decrypt a letter even when the shift value is greater than 122 My code only works for shift values that are less than 22 The code fails when the user specifies an input for the shifter that is greater than 122 How can I do this? For a with shifter 123 the output should be are ````k = int(raw_input("Please enter a value for k: ")) #Shifter number original = raw_input("plaintext: ") #Message user wants ciphered original_as_array = list(original) ##I am turning the input into an array for i in range(0 len(original)): ##Now seperating each character to add k char = original_as_array[i] charint = ord(char) if charint &gt;= 65 and charint <=90: cipher_int = ((charint-65 k) % 26)+65 code_char = chr(cipher_int) print code_char elif charint &gt;= 97 and charint <=122: cipher_int = ((charint-97 k) % 26)+97 code_char = chr(cipher_int) print code_char else: print char ````
Your code fails because you did not specify a condition when `ord(char)` > 122 Instead you threw it to the else clause You can just cut the part `<= 122`: ````for k in range(20): for i in range(0 len(original)): ##Now seperating each character to add k char = original[i] charint = ord(char) if charint &gt;= 65 and charint <=90: cipher_int = ((charint-65 k) % 26)+65 code_char = chr(cipher_int) print code_char elif charint &gt;= 97: cipher_int = ((charint-97 k) % 26)+97 code_char = chr(cipher_int) print code_char else: print char ```` With `k` set to 18 this will produce `r` If this solution was not what you meant feel free to coment below!
What genre was not eclectic?
null
How to add filters to a query dynamically in Django? In my viewSet I am doing a query ````queryset= Books objects all(); ```` Now from an ajax call I get my filter values from UI i e age gender etc of auther There will be a total of 5 filters Now the problem which I ran into is how am I going to add filters to my query(only those filters which have any value) What I tried is I checked for individual filter value and did query but that way it fails as if the user remove the filter value or add multiple filters Any better suggestion how to accomplish this?
You have not shown any code, so you have not really explained what the problem is. Start with the queryset `Book.objects.all()`. For each filter, check if there is a value for that filter in `request.POST`, and if so filter the queryset. Django querysets are lazy, so only the final queryset will be evaluated.

````queryset = Book.objects.all()
if request.POST.get('age'):
    queryset = queryset.filter(author__age=request.POST['age'])
if request.POST.get('gender'):
    queryset = queryset.filter(author__gender=request.POST['gender'])
````
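With five filters, the same pattern can be driven by a small mapping instead of repeating the `if` blocks. A minimal sketch - the lookup names `author__age`, `author__gender` etc. are just examples, adjust them to your models:

````FILTER_FIELDS = {
    'age': 'author__age',
    'gender': 'author__gender',
    # ... the remaining filters ...
}

queryset = Book.objects.all()
for param, field in FILTER_FIELDS.items():
    value = request.POST.get(param)
    if value:   # only apply filters that were actually submitted
        queryset = queryset.filter(**{field: value})
````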
Validate well-formed JSON using Python I would like to create a python script that makes a series of web requests to endpoints that will spit out JSON After receiving the response I want to validate that it is well-formed JSON and not some error page Also I will need to insert an API key into the request header to get a proper response what is the best way to go about doing this in python? Thanks!
To validate the JSON you can use <a href="http://python-jsonschema readthedocs org/en/latest/" rel="nofollow">http://python-jsonschema readthedocs org/en/latest/</a> To insert the API key in the header you can use Requests <a href="http://docs python-requests org/en/latest/user/quickstart/" rel="nofollow">http://docs python-requests org/en/latest/user/quickstart/</a>
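A minimal sketch of the whole round trip with the `requests` library: send the API key in a header, then let the standard `json` module tell you whether the body is well-formed JSON. The header name `X-Api-Key` and the URL are placeholders; use whatever your API actually expects:

````import json
import requests

def fetch_json(url, api_key):
    response = requests.get(url, headers={'X-Api-Key': api_key})
    try:
        return json.loads(response.text)   # raises ValueError if the body is not valid JSON
    except ValueError:
        raise RuntimeError('Response was not well-formed JSON: %r...' % response.text[:200])

data = fetch_json('https://example.com/endpoint', 'my-secret-key')
````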
Find index position of characters in string I am trawling through a storage area and the paths look a lot like this: storagearea/storage1/ABC/ABCDEF1/raw/2013/05/ABCFGM1 I will not always know what year is it I need to find the starting index position of the year Therefor I am looking for where I find the following in the file name (2010 2011 2012 2013 2014 etc ) I have set up a list as follows: ````list_ = ['2010' '2011' '2012' '2013' '2014' '2015' '2016'] ```` and I can find if it is in the file name ````if any(word in file for word in list_): print 'Yahooo' ```` But how do I find the character index of the year in the absolute path?
<a href="https://docs python org/2/library/string html#string index" rel="nofollow">Python string index</a> ````string index(s sub[ start[ end]])¶ Like find() but raise ValueError when the substring is not found ````
Convert every character in a String to a Dictionary Key Suppose i have a string `"abcdefghijklmnopqrstuvwxyz"`and i want to initialize dictionary keys with those values ````alphabet = 'abcdefghijklmnopqrstuvwxyz' alphabetDict = dict() for char in alphabet: alphabetDict[char] = 0 ```` Is there a better way of doing that?
You can use dictionary comprehensions in Python.

````alphabetDict = {char: 0 for char in alphabet}
````

<a href="https://docs.python.org/2/tutorial/datastructures.html#dictionaries" rel="nofollow">Dictionaries</a> (Python Docs)

There is a minor difference between this answer and Anand's above. Dict comprehensions evaluate the value for every key, while `fromkeys` only does it once. If you are using things like ints, this poses no problem. However, if you do

````d = {key: [] for key in <some set>}
d[3].append(5)
print(d[2])
````

it gives you

````[]
````

and you have distinct lists, while

````d = dict.fromkeys(<some set>, [])
d[3].append(5)
print(d[2])
````

gives you

````[5]
````

because `fromkeys` maps all the keys to the same list.
Using MacPython with MacPorts I have installed MacPython 2 7 from python org How to set it as default in MacPorts? PS: #port select offers variants: ' - none(active) - python2 5-apple - python2 6-apple
I think MacPorts is not aware of MacPython, and therefore your best bet would be to install Python 2.7 from the ports. That way you will be able to select Python 2.7 as the default. Another advantage of using Python from MacPorts is that if you install ports that require Python, MacPorts will not have to install Python 2.7 a second time.
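For reference, the MacPorts side of that looks roughly like the following. I have not checked the exact `port select` syntax against your MacPorts version, so treat the second line as an assumption (older releases used `python_select` or `port select python python27` instead):

````sudo port install python27
sudo port select --set python python27
````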
Explicitly typed version of Python? I rather like Python's syntactic sugar; and standard library functions However the one feature which I dislike; is implicit typing Is there a distribution of Python with explicit typing; which is still compatible with e g : packages on PyPi? [I was looking into RPython]
<em>Since comments are limited...</em>

Python is dynamically (and strongly) typed; explicit type declarations are simply not part of the language. Explicit typing is mostly a control that lets the programmer preempt a certain class of bugs at compile time, but in truth it will not stop logical bugs from happening any less, which makes the point somewhat moot. Even though the <a href="http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07" rel="nofollow">paper on RPython</a> makes its point, it is focused on Object Oriented Programming. You must bear in mind that Python is more an amalgamation of OOP and functional programming, and likely others too.

I encourage <a href="https://wiki.python.org/moin/StrongVsWeakTyping" rel="nofollow">reading this page</a>, it is very informative.
Passing a numpy array to a tensorflow Queue I have a NumPy array and would like to read it in TensorFlow's code using a <a href="https://www tensorflow org/versions/r0 10/api_docs/python/io_ops html#queues" rel="nofollow">Queue</a> I would like the queue to return the whole data shuffled some specified number of epochs and throw an error after that It would be best if I would not need to hardcode the size of an example nor the number of examples I think <a href="https://www tensorflow org/versions/r0 10/api_docs/python/io_ops html#shuffle_batch" rel="nofollow">shuffle batch</a> is meant to serve that purpose I have tried using it as follows: ````data = tf constant(train_np) # train_np is my numpy array of shape (num_examples example_size) batch = tf train shuffle_batch([data] batch_size=5 capacity=52200 min_after_dequeue=10 num_threads=1 seed=None enqueue_many=True) sess run(tf initialize_all_variables()) tf train start_queue_runners(sess=sess) batch eval() ```` The problem with that approach is that it reads all the data continuously and I cannot specify it to finish after some number of epochs I am aware I could use the <a href="https://www tensorflow org/versions/r0 10/api_docs/python/io_ops html#RandomShuffleQueue" rel="nofollow">RandomShuffleQueue</a> and insert the data into it few times but: a) I do not want to waste epoch*data of memory and b) it will allow the queue to shuffle between epochs Is there a nice way to read the shuffled data in epochs in Tensorflow without writing your own Queue?
You could create another queue enqueue your data onto it `num_epoch` times close it and then hook it up to your `batch` To save memory you can make this queue small and enqueue items onto it in parallel There will be a bit of mixing between epochs To fully prevent mixing you could take code below with `num_epochs=1` and call it `num_epochs` times ````tf reset_default_graph() data = np array([1 2 3 4]) num_epochs = 5 queue1_input = tf placeholder(tf int32) queue1 = tf FIFOQueue(capacity=10 dtypes=[tf int32] shapes=[()]) def create_session(): config = tf ConfigProto() config operation_timeout_in_ms=20000 return tf InteractiveSession(config=config) enqueue_op = queue1 enqueue_many(queue1_input) close_op = queue1 close() dequeue_op = queue1 dequeue() batch = tf train shuffle_batch([dequeue_op] batch_size=4 capacity=5 min_after_dequeue=4) sess = create_session() def fill_queue(): for i in range(num_epochs): sess run(enqueue_op feed_dict={queue1_input: data}) sess run(close_op) fill_thread = threading Thread(target=fill_queue args=()) fill_thread start() # read the data from queue shuffled tf train start_queue_runners() try: while True: print batch eval() except tf errors OutOfRangeError: print "Done" ```` BY THE WAY `enqueue_many` pattern above will hang when the queue is not large enough to load the entire numpy dataset into it You could give yourself flexibility to have a smaller queue by loading the data in chunks as below ````tf reset_default_graph() data = np array([1 2 3 4]) queue1_capacity = 2 num_epochs = 2 queue1_input = tf placeholder(tf int32) queue1 = tf FIFOQueue(capacity=queue1_capacity dtypes=[tf int32] shapes=[()]) enqueue_op = queue1 enqueue_many(queue1_input) close_op = queue1 close() dequeue_op = queue1 dequeue() def dequeue(): try: while True: print sess run(dequeue_op) except: return def enqueue(): for i in range(num_epochs): start_pos = 0 while start_pos < len(data): end_pos = start_pos+queue1_capacity data_chunk = data[start_pos: end_pos] sess run(enqueue_op feed_dict={queue1_input: data_chunk}) start_pos = queue1_capacity sess run(close_op) sess = create_session() enqueue_thread = threading Thread(target=enqueue args=()) enqueue_thread start() dequeue_thread = threading Thread(target=dequeue args=()) dequeue_thread start() ````
What genre emerged from performances associated with nonreligious and civic festivals?
null
django / python get image from url and display on site Given a url to an image is there a way in Django/Python to pull this image in and then display it on my site (resized if possible) Thanks
If you just want to hotlink it, print out the HTML snippet (`<img src="http://example.com/img.png" width="100" height="100" />`). If you want to store it on your server and resize it on the server side, you might want to look into <a href="http://wiki.python.org/moin/ImageMagick" rel="nofollow">ImageMagick</a> or <a href="http://www.pythonware.com/products/pil/" rel="nofollow">PIL</a> for processing, and <a href="http://docs.python.org/library/urllib.html" rel="nofollow">urllib</a> or <a href="http://pycurl.sourceforge.net/" rel="nofollow">pycurl</a> for downloading.
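A minimal server-side sketch of the second option, using urllib to download and PIL to resize (the URL, file paths and the 100x100 target size are placeholders):

````import urllib
from PIL import Image

def fetch_and_resize(url, dest_path, size=(100, 100)):
    tmp_path, _ = urllib.urlretrieve(url)      # download to a temporary file
    img = Image.open(tmp_path)
    img.thumbnail(size, Image.ANTIALIAS)       # resize in place, keeping aspect ratio
    img.save(dest_path)
    return dest_path

# then serve dest_path from your view/template with a normal <img> tag
fetch_and_resize('http://example.com/img.png', 'media/img_small.png')
````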
websocket recv() never returns inside another event loop I am currently developing a <strong>server</strong> program in Python that uses the websockets and asyncio packages I got a basic script handling websockets working (Exhibit A) This script locks when waiting for input which is not what I want The solution for this that I imagine is I can start two asynchronous tasks - one that handles inputs and one that handles outputs - and start them in a secondary event loop I had to do some research about coroutines and I came up with Exhibit B as a proof of concept for running two things simultaneously in an event loop Now what I am stuck on is Exhibit C When I attempted to use this in a practical scenario with the websockets package I found that websocket recv() never finishes (or the coroutine never un-pauses - I am not sure what is going on exactly) In exhibit A it works fine and I have determined that the coroutine definitely runs at least up until that point Any ideas? Exhibit A: ````#!/usr/bin/python3 import asyncio import websockets import time # This works great! async def hello(websocket path): while True: # This line waits for input from socket name = await websocket recv() print("< {}" format(name)) # "echo echo echo echo echo " greeting = '' join(name " " for x in range(5)) await websocket send(greeting) print("&gt; {}" format(greeting)) time sleep(0 1); start_server = websockets serve(hello '' 26231) asyncio get_event_loop() run_until_complete(start_server) asyncio get_event_loop() run_forever() ```` Exhibit B: ````#!/usr/bin/python3 import asyncio import time class Thing: def __init__(self): pass async def test(self): for x in range(10): print("Hello"+str(x)) await asyncio sleep(0) def run(self): # Add the task to the event loop twice asyncio ensure_future(self test()) asyncio ensure_future(self test()) t = Thing() t run() loop = asyncio get_event_loop(); loop run_forever() ```` Exhibit C: ````#!/usr/bin/python3 import asyncio import websockets import time class WebsocketRequest: def __init__(self websocket): self websocket = websocket # Works great async def handle_oputs(self): # This works fine - sends a message # every 10 seconds to the client while True: print("sending ") await self websocket send("Hello") print("&gt; {}" format("Hello")) time sleep(10) # Does not work async def handle_iputs(self): # This stops at the await and never receives # any input from the client :/ while True: try: print("receiving ") # This is the line that causes sadness data = await self websocket recv() print("< {}" format(data)) except: # This does not happen either print("Listener is dead") async def run(self): # This is the part where the coroutine for # a client get split off into two of them # to handle input and output separately loop = asyncio new_event_loop() asyncio set_event_loop(loop) asyncio ensure_future(self handle_iputs()) asyncio ensure_future(self handle_oputs()) loop run_forever() class WebsocketServer: def __init__(self address): self ip = address[0] self port = address[1] async def hello(self websocket path): req = WebsocketRequest(websocket) await req run() def run(self): loop = asyncio new_event_loop() asyncio set_event_loop(loop) start_server = websockets serve(self hello self ip self port) asyncio get_event_loop() run_until_complete(start_server) asyncio get_event_loop() run_forever() ````
Maybe the module websocket (unlike websockets) can help you. The use of WebSocketApp is very simple:

````import websocket

def on_message(ws, message):
    print "Message arrived:", message

def on_error(ws, error):
    print error

def on_close(ws):
    print "Connection closed"

def on_open(ws):
    ws.send("Hello!")

websocket.enableTrace(True)
ws = websocket.WebSocketApp("ws://echo.websocket.org",
                            on_open=on_open,
                            on_message=on_message,
                            on_error=on_error,
                            on_close=on_close)
ws.run_forever()
````

To download this module: <a href="https://pypi.python.org/pypi/websocket-client" rel="nofollow">https://pypi.python.org/pypi/websocket-client</a>
Django order_by ForeignKey set models I have following django models ````class Post(models Model): title = models CharField(max_length=240) class Comment(models Model): post = models ForeignKey(Post) date = models DateTimeField(auto_now_add=True) ```` I need a QuerySet of comments ordered first by post then by date But posts must be ordered by its latest comment If i could use model methods in QuerySet order_by it would be like this: ````class Post(models Model): title = models CharField(max_length=240) def get_last_comment_date(self): return self comment_set order_by('-date')[0] date ```` And the ordering that i needed could be: ````Comment objects all() order_by('post__get_last_comment_date' '-date') ```` But unfortunately methods in order_by are not allowed Please help Can i have such ordering?
You may not use methods in order_by <a href="https://docs.djangoproject.com/en/dev/topics/db/queries/#lookups-that-span-relationships" rel="nofollow">lookups</a> because <strong>they are converted to SQL</strong>. So why not <strong>convert get_last_comment_date into a field</strong>? E.g. keep it up to date with a <a href="https://docs.djangoproject.com/en/dev/topics/signals/" rel="nofollow">signal receiver</a>:

````from django.db.models import signals

class Post(models.Model):
    title = models.CharField(max_length=240)
    last_comment_date = models.DateTimeField(null=True, blank=True)

def post_last_comment_date(sender, instance=None, **kwargs):
    if instance is None:
        return
    try:
        last_comment_date = instance.post.comment_set.order_by('-date')[0].date
    except IndexError:
        return
    if last_comment_date != instance.post.last_comment_date:
        instance.post.last_comment_date = last_comment_date
        instance.post.save()

signals.post_save.connect(post_last_comment_date, sender=Comment)
````

Now you can: `Comment.objects.order_by('post__last_comment_date', '-date')`
How can I write a python script equivalent of mdfind using PyObjC bindings and NSMetadataQuery? I want to write the python equivalent of mdfind I want to use the Spotlight-V100 metadata and I cannot find a description for the metadata db format used but NSMetadataQuery seems to be what I need I would like to do this in python using the built in Obj-C bindings but have not been able to figure out the correct incantation to get it to work Not sure if the problem is the asynchronous nature of the call or I am just wiring things together incorrectly A simple example giving the equivalent of of "mdfind " would be fine for a start
I got a very simple version working. I do not quite have the predicate correct, as the equivalent <em>mdfind</em> call has additional results. Also it requires two args, the first being the base pathname to work from, with the second being the search term. Here is the code:

````from Cocoa import *
import sys

query = NSMetadataQuery.alloc().init()
query.setPredicate_(NSPredicate.predicateWithFormat_("(kMDItemTextContent = \"" + sys.argv[2] + "\")"))
query.setSearchScopes_(NSArray.arrayWithObject_(sys.argv[1]))

query.startQuery()
NSRunLoop.currentRunLoop().runUntilDate_(NSDate.dateWithTimeIntervalSinceNow_(5))
query.stopQuery()

print "count:", len(query.results())
for item in query.results():
    print "item:", item.valueForAttribute_("kMDItemPath")
````

The query call is asynchronous, so to be more complete I should register a callback and have the run loop go continuously. As it is, I do a search for 5 seconds, so if we have a query that would take longer we will get only partial results.
How do I troubleshoot a 500 response from pypi when attempting to release a new package version? I am attempting to release a new version of a package to pypi This is using python 2 7 and I am currently targeting pythons 2 6/2 7 for consumption The current release for the package in question is 0 0 2-1 (The `-1` was a build tag convention I read somewhere; I am changing this practice to use `b` for `beta` which is more relevant ) Basically if I have the combination of `version` (in the `setup()` call) and build tag (from `setup cfg`) that is anything other than the current version already on pypi both the `register` and `upload` commands fail: ````ethan@walrus:~/source/python-mandrel$ python setup py register running register running egg_info writing requirements to mandrel egg-info/requires txt writing mandrel egg-info/PKG-INFO writing top-level names to mandrel egg-info/top_level txt writing dependency_links to mandrel egg-info/dependency_links txt writing entry points to mandrel egg-info/entry_points txt reading manifest file 'mandrel egg-info/SOURCES txt' writing manifest file 'mandrel egg-info/SOURCES txt' running check Registering mandrel to http://pypi python org/pypi Server response (500): There is been a problem with your request ```` That is with a version of `0 0 3` and build tag of `b` But if I apply this patch: ````--- a/setup cfg + b/setup cfg @@ -1 3 1 3 @@ [egg_info] -tag_build = b tag_build = -1 different --git a/setup py b/setup py index 14761cf beb8278 100644 --- a/setup py + b/setup py @@ -3 7 3 7 @@ import os setup( name = "mandrel" - version = "0 0 3" version = "0 0 2" author = "Ethan Rowe" author_email = "ethan@the-rowes com" description = ("Provides bootstrapping for sane configuration management") ```` Then the `register` call (and presumably `upload`) will succeed: ````ethan@walrus:~/source/python-mandrel$ python setup py register running register running check Registering mandrel to http://pypi python org/pypi Server response (200): OK ```` If I change the build tag to `-2` say the `register` call will fail again This suggests the failure is related to any total version string that is not already known to pypi Unfortunately the `--show-response` option when using `upload` is unhelpful when the server responds with a 500 code; `distutils`' `upload` command merely reports the fact that the server experienced an error with nothing useful to go on Any suggestions on what I might do to troubleshoot?
I am having a 500 error also the issue for that with the diagnosis from them is here: <a href="https://sourceforge net/tracker/index php?func=detail&amp;aid=3573564&amp;group_id=66150&amp;atid=513503" rel="nofollow">https://sourceforge net/tracker/index php?func=detail&amp;aid=3573564&amp;group_id=66150&amp;atid=513503</a> I debugged it using pdb The show-response option is not really implemented in a useful way apparently I put an "import pdb; pdb set_trace()" in my Python dist in `distutils/command/register py` on line 291 which in my release is inside the method `post_to_server()` I do a "print req data" right there and then "next" through it in order to see the response installed inside the exception catch
Error while trying to install PythonMagic 0 9 11 on Fedora 21 The following is the output on the command line when I run `make` on the Pythin0 9 11 folder I am on fedora 21 ```` [user@localhost PythonMagick-0 9 11]$ make Making all in pythonmagick_src make[1]: Entering directory '/home/user/PythonMagick-0 9 11/pythonmagick_src' CXX libpymagick_la-_DrawableFillRule lo _DrawableFillRule cpp:3:28: fatal error: boost/python hpp: No such file or directory #include <boost/python hpp&gt; ^ compilation terminated Makefile:645: recipe for target 'libpymagick_la-_DrawableFillRule lo' failed make[1]: *** [libpymagick_la-_DrawableFillRule lo] Error 1 make[1]: Leaving directory '/home/user/PythonMagick-0 9 11 /pythonmagick_src' Makefile:641: recipe for target -recursive' failed make: *** [all-recursive] Error 1 `` ```` The following is the trace from config log ````gcc version 4 9 2 20141101 (Red Hat 4 9 2-1) (GCC) configure:3165: $? = 0 configure:3154: g++ -V &gt;&amp;5 g++: error: unrecognized command line option '-V' g++: fatal error: no input files compilation terminated configure:3165: $? = 4 configure:3154: g++ -qversion &gt;&amp;5 g++: error: unrecognized command line option '-qversion' g++: fatal error: no input files compilation terminated configure:3165: $? = 4 configure:3185: checking whether the C++ compiler works configure:3207: g++ conftest cpp -lposix &gt;&amp;5 /bin/ld: cannot find -lposix collect2: error: ld returned 1 exit status configure:3211: $? = 1 configure:3249: result: no configure: failed program was: | /* confdefs h */ | #define PACKAGE_NAME "PythonMagick" | #define PACKAGE_TARNAME "PythonMagick" | #define PACKAGE_VERSION "0 9 11" | #define PACKAGE_STRING "PythonMagick 0 9 11" | #define PACKAGE_BUGREPORT "" | #define PACKAGE_URL "" | #define PACKAGE "PythonMagick" | #define VERSION "0 9 11" | /* end confdefs h */ ```` I installed gcc-c++ I Am not sure where am I going wrong or is there something I am missing I am new to this
For all those who had the same set of problem well I understood the problem finally I had installed boost but boost-devel was not installed `sudo yum install boost-devel`
What is the purpose of subclassing the class "object" in Python? All the Python built-ins are subclasses of `object` and I come across many user-defined classes which are too Why? What is the purpose of the class `object`? It is just an empty class right?
Right but it marks the class as a <a href="http://www python org/doc/newstyle/" rel="nofollow">new-style</a> class Newly developed classes should use the `object` base because it costs little and future-proofs your code
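For illustration, in Python 2 the difference between the two is easy to see (in Python 3 every class is new-style, so the explicit base is optional there):

````class Old:             # old-style class in Python 2
    pass

class New(object):      # new-style class
    pass

print type(Old()), type(Old)    # <type 'instance'> <type 'classobj'>
print type(New()), type(New)    # <class '__main__.New'> <type 'type'>
````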
A script that uses data from another file to edit an xml file I am pretty inexperienced with scripting so I really need your help on this one I have an xml file where every odd line I have this: ````<![CDATA[<head&gt;Wallpaper</head&gt;<body&gt;1328758083110 jpg</body&gt;]]&gt;</photo&gt; ```` I would like to be able to edit each "Wallpaper" part in between the head tags to instead say something pulled from a text file where each element is separated by a line break It is a pretty broad question so I would appreciate any input at all Where should I start?
If I understand your question correctly, this python snippet should work. I am assuming there are the same number of lines in your text file as there are lines you want edited in the XML file, and the string "Wallpaper" is not in any of the lines that you do not want edited. Otherwise you may need something different:

````inFile = open('mytextfile.txt', 'r')
myTextData = []
for line in inFile:
    myTextData.append(line.strip())
inFile.close()

inFile = open('myXMLfile.xml', 'r')
outFile = open('myFinishedXMLfile.xml', 'w')

currentItem = 0
for line in inFile:
    if 'Wallpaper' not in line:
        outFile.write(line)
    else:
        left = line.find('Wallpaper')
        right = left + 9            # Wallpaper is 9 characters
        outFile.write(line[:left])
        outFile.write(myTextData[currentItem])
        outFile.write(line[right:])
        currentItem += 1

inFile.close()
outFile.close()
````
Where was copper glowing at in the Roman era?
null
Transmit pickle bytes over SSH I am trying to launch a python script remotely on a server from another python script on my computer The goal is to transmit over SSH some data which are in a class (for instance data is the string I want to transmit Sure it has no sense to use pickle for a string it is juste for the example) I would like something like that : Script (on computer): maCo is a SSH connexion ````import pickle data = 'é&amp;' data_bytes = pickle dumps(data) maCo sendCmd(['python3' 'serverScript py' '-ma' data_bytes]) ```` Script (on server) ````import argparse pickle parser = argparse ArgumentParser() parser add_argument("-ma" "--myarg" help="Arg" type=bytes) args = parser parse_args() data = pickle loads(args myarg) ```` I try to convert bytes to string but that was not a success Thanks for your help
I am not sure what you want to do from your question but I can guess If you are looking for a package that will help you make a ssh connection and then ship a python object `pathos` has the ability to establish a ssh-tunnel or a direct ssh connection -- then you can pipe whatever type of communications you want through it (typically a socket-to-socket communication like `zmq` provides or some other `rpc` connection -- I often use `parallelpython`) All the communications across the wire can be pickled but they just have to be serialized objects really `pathos` tries to give you a functional interface to work with ssh and parallel `pipe` and `map` constructs Also if you are looking to transmit a class object or a class instance and you are using `pickle` to serialize your objects you will need to have the class defined in a module that already lives on the remote computer and is on the remote `PYTHONPATH` If you use `dill` to serialize your class object or instance you do not need the class on the remote system ahead of time This is because `pickle` serializes classes by reference while `dill` can store the entire class object or instance (<them>or</them> serialize by reference) Get `pathos` and `dill` here: <a href="https://github com/uqfoundation" rel="nofollow">https://github com/uqfoundation</a> I believe `IPython` has recently been able to do some of this too but you will still need `dill` most likely for sending a serialized class instance (as noted above)
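To make that concrete, here is a small sketch of shipping an arbitrary object as a command-line argument, using `dill` for serialization and base64 so the bytes survive the shell. This is my own illustration of the idea, not part of pathos itself; `Payload` is a made-up example class:

````# sender side
import base64
import dill

class Payload(object):
    def __init__(self, data):
        self.data = data

blob = base64.b64encode(dill.dumps(Payload('some data'))).decode('ascii')   # ASCII-safe string
# maCo.sendCmd(['python3', 'serverScript.py', '-ma', blob])

# receiver side (serverScript.py), after argparse:
#   obj = dill.loads(base64.b64decode(args.myarg))
#   print(obj.data)
````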
python tuple name same as value name I was just playing around with tuples in an effort to learn python ````a=("1" "2") (a b) = a print a prints '1' ```` a ) If a tuple is a comma separated list of value (<a href="http://openbookproject net/thinkcs/python/english3e/tuples html" rel="nofollow">http://openbookproject net/thinkcs/python/english3e/tuples html</a>) why does python allow the value variable name to be the same as the tuple variable name b )In this case If I want to print the original tuple can I do anything else other than `print (a b)`
What you are doing is a multiple assignment Python interprets the following line: ````(a b) = a ```` as saying to take the object referred to by the variable `a` treat it as a 2-element sequence (raising an error if `a` is not such a sequence) and assign the elements to the variables `a` and `b` If that is not what you wanted to do do something else
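To keep access to the original tuple (part b of your question), simply do not reuse its name as one of the unpacking targets:

````original = ("1", "2")
a, b = original        # unpack into fresh names
print a, b             # 1 2
print original         # ('1', '2')
````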
python 2 6 import error when invoking python file I have the following error when i run a python file: ````Traceback (most recent call last): File "MySimulation py" line 14 in <module&gt; from random import * File "/usr/local/lib/python2 6/random py" line 48 in <module&gt; from binascii import hexlify as _hexlify ImportError: No such file or directory ```` Have you a solution?
Install `binascii` If it is installed ensure that your `PYTHONPATH` includes the location it is installed to
What is the first question to ask in order to define the two classes?
"what does reality consist of?"
Facebook - Getting a python function to repeat? I am hoping someone may be able to point out the error I am making; it is probably very straight forward! What I am trying to do is run some code previous what I have shown below then when I get to this point I need to get it to hold for the 600 seconds and then reload the download page: ````try: # Clicks OK in Download Requested window driver implicitly_wait(10) ClickOkay = driver find_element_by_css_selector(" _42ft _42fu layerCancel uiOverlayButton selected _42g- _42gy") ClickOkay click() except: print("error 2") # Wait Time # time sleep(600) # Allow Facebook to compile the archive # Reload Settings page GoToURL('https://www facebook com/' 'settings' driver) # Goes back through to Download page link = driver find_element_by_link_text('Download a copy') link click() ```` At this point if the archive has finished being created then the button changes from Start Archive to Download Archive However depending on the size of the profile the time taken to compile the archive varies so what i was attempting (with the code below and a couple of attempts with the if and while arguments) was to get it to check if the button exists and if not go back and wait 300 seconds before trying again Once the button appears it will then continue on to download using additional code ```` try: print("Checking if button exists") DownloadArchive = driver find_elements_by_css_selector(" _42ft _42fu selected _42gz _42gy") print(DownloadArchive count()) if(DownloadArchive count() &gt; 0): print("button exists") else: print("button does not exist") # Button to initiate password entry popup window #driver implicitly_wait(10) #while (DownloadArchive = driver find_element_by_css_selector(" _42ft _42fu selected _42gz _42gy")): # if (DownloadArchive = True): # DownloadArchive click() # print("wait") # else:time sleep(300) ```` Thanks in advance James
You are mixing the assignment operator (`=`) with the equality operator (`==`). So it should be:

````while (DownloadArchive == driver.find_element_by_css_selector("._42ft._42fu.selected._42gz._42gy")):
    if (DownloadArchive == True):
````

Or just:

````while DownloadArchive == driver.find_element_by_css_selector("._42ft._42fu.selected._42gz._42gy"):
````

Hope it helps!
Save user on database (using ModelForm) only after security questions has been answered? I am trying to develop a website where in the signup flow after entering the user credentials and successfully validating Google reCaptcha the user has to be directed to a page displaying a list of security questions the user has to answer one of the question to successfully signup on the website My forms py file is here ````import re from django import forms from django contrib auth forms import AuthenticationForm PasswordResetForm SetPasswordForm from models import CustomUser from django conf import settings from django utils translation import ugettext as _ import urllib import urllib request as urllib2 import json class SignUpForm(forms ModelForm): """ A form that creates a user with no privileges from the given username and password """ password1= forms CharField(label='Password' widget=forms PasswordInput) password2=forms CharField(label='Confirm Password' widget=forms PasswordInput) class Meta: #model to be used model = CustomUser #fields that have to be populated fields=['username'] def __init__(self *args **kwargs): self request=kwargs pop('requestObject' None) super(SignUpForm self) __init__(*args **kwargs) self fields['username'] required=True self fields['password1'] required=True self fields['password2'] required=True def clean(self): super(SignUpForm self) clean() ''' Test the Google Recaptcha ''' #url at which request will be sent url='https://www google com/recaptcha/api/siteverify' #dictionary of values to be sent for validation values={ 'secret': settings GOOGLE_RECAPTCHA_SECRET_KEY 'response': self request POST get(you'g-recaptcha-response' None) 'remoteip': self request META get("REMOTE_ADDR" None) } #making request data=urlllib parse urlencode(values) binary_data=data encode('utf-8') req= urllib request Request(url binary_data) response= urllib request urlopen(req) result = json loads(response read() decode('utf-8')) # result["success"] will be True on a success if not result["success"]: raise forms ValidationError("Only humans are allowed to submit this form ") return self cleaned_data def clean_password2(self): #checking whether the passwords match or not password1=self cleaned_data get('password1'); password2=self cleaned_data get('password2'); if password1 and password2 and password1!=password2: raise forms ValidationError("Passwords do not match!"); return password2; def save(self commit=True): #overriding the default save method for ModelForm user = super(UserCreationForm self) save(commit=False) user set_password(self cleaned_data["password1"]) if commit: user save() return user ```` The models py file is here ```` from django db import models from django conf import settings from django contrib auth models import AbstractUser # Create your models here #Choices of questions SECRET_QUESTION_CHOICES = ( ('WCPM' "In which city did your parents meet?") ('WCN' "What was your childhood nickname?") ('WPN' "What is your pet's name?") ) #Secret Questions class SecretQuestion(models Model): id=models AutoField(primary_key=True) secret_question = models CharField(max_length = 100 choices = SECRET_QUESTION_CHOICES default = SECRET_QUESTION_CHOICES[0] null=False blank=False) class Meta: db_table='Security Questions' class CustomUser(AbstractUser): profile_pic = models ImageField(upload_to = 'profile_pics/' blank = True null = True) following = models ManyToManyField('self' symmetrical = False related_name = 'followers' blank = True) ques_followed = models ManyToManyField('question 
Question' related_name = 'questions_followed' blank = True) topic_followed = models ManyToManyField('topic Topic' related_name = 'topic_followed' blank = True) security_questions = models ManyToManyField(SecretQuestion related_name = 'secret_question_user' through = "SecretQuestionAnswer") class Meta: db_table = 'User' def __str__(self): return self username class SecretQuestionAnswer(models Model): user = models ForeignKey(CustomUser) secret_question = models ForeignKey(SecretQuestion) answer = models CharField(max_length = 100 blank=False null=False) class Meta: db_table='SecretQuestionAnswer' ```` Now after successfully submitting the SignUpForm the user should only be registered(means saved on the database) if he answers one of the questions given above successfully How should work on this problem? Please help I have been banging my head on this problem for the past two days Can somebody help
Normally you need to have a boolean field which indicates whether the user has finished all the steps. By default this field will be `False`, and when the user is done you set it to `True`. You can allow login only when the field is set to `True`.

Another approach is using a wizard: <a href="http://django-formtools.readthedocs.org/en/latest/" rel="nofollow">http://django-formtools.readthedocs.org/en/latest/</a>

With this approach you need to protect the user's password somehow, because it is not a good idea for it to stay in the session.
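A minimal sketch of the first approach, adapted to the models in the question (the field name is my own placeholder):

````from django.db import models
from django.contrib.auth.models import AbstractUser

class CustomUser(AbstractUser):
    # ... the existing fields from the question ...
    security_question_answered = models.BooleanField(default=False)
````

The view that processes the security-question form then checks the submitted answer against `SecretQuestionAnswer`, sets `user.security_question_answered = True` and saves; the login view (or a custom authentication backend) refuses to log in any user whose flag is still `False`.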
Import from excel file a current range of cells to MySql using Python My Project involves 1) Import data from excel file ````path=" \dataexample xls" databook=xlrd open_workbook(path) mydatasheet=databook sheet_by_index(0) ```` 2) Connect to a localhost database ````database = MySQLdb connect (host=myhost user = myuser passwd = mypasswd db = dbname) ```` 3) Import a current range of cell of cells to the database `My` `dataexample xls` has 12 rows and 122 cols and for my INSERT QUERY I need only A3:J12 cells After some search I'am in the point where: - Preparation for the query and `cursor = database cursor() query = """INSERT INTO agiosathanasios(record Stn_Code Raw_Dist Snow Snow_corr Smp Raw_Dist_QC Snow_final Snow_final_pos) VALUES (%s %s %s %s %s %s %s %s %s)"""` - Collect the correct cells for the query ````for row in range(3 12): values=[] for col in range(0 10): values append(mydatasheet cell(row col) value) print values ```` I was trying to put after values append the following code `database cursor execute(query values)`so I can import the value I want But it does work how can I fix this? How can I put the current values to my query ?
So according to the tracebacks you provided, either not all parameters are being passed to your query, or there is some problem with converting the values to strings (a `None` among the values). Check `values` before calling `cursor.execute`!

Try to convert the excel values to strings:

````values.append(str(mydatasheet.cell(row, col).value))
````
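For completeness, the insert itself then runs once per row inside the outer loop, roughly like this. A sketch: note that your query has nine `%s` placeholders, so exactly nine values must be appended per row, and MySQLdb needs a `commit()` at the end:

````cursor = database.cursor()
for row in range(3, 12):
    values = []
    for col in range(0, 9):             # nine columns to match the nine %s placeholders
        values.append(str(mydatasheet.cell(row, col).value))
    cursor.execute(query, values)       # one INSERT per spreadsheet row
database.commit()                       # make the inserts permanent
````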
Why did China need to resettle people into Gorges?
null
How to extract python's list variable's value in html? This is a python in html related problem I have a python script with a list named `scalar_not_adv_row` and `tpdob scalar_not_adv` Both the lists have same number of rows But the code below is throwing syntax error I am not able to understand why It is showing the error for `row_num=${tpdob scalar_not_adv_row[${i}]}` where I want to get a particular element from the list without looping through it Looping through both the lists will increase time complexity of the code and thus I am trying to avoid it Please provide any suggestions you have to make the code work! And please tell if you have come across any website which can tell in detail how to use python in html! ````<table&gt; % for COMPNAME in tpdob scalar_not_adv: <% i=0 %&gt; <% row_num=${tpdob scalar_not_adv_row[${i}]} %&gt; <tr&gt; <td&gt;${row_num}</td&gt; <td&gt;${COMPNAME}</td&gt; </tr&gt; <% i=i+1 %&gt; % endfor </table&gt; ````
Replace `<% row_num=${tpdob.scalar_not_adv_row[${i}]} %>` with `<% row_num = tpdob.scalar_not_adv_row[i] %>`

And it will work fine :)
Who was the lead actor in Arsenal Stadium Mystery?
null
How to generate negative random value in python I am starting to learn python I tried to generate random values by passing in a negative and positive number Let say `-1` `1` How should I do this in python?
````import random

def r(minimum, maximum):
    return minimum + (maximum - minimum) * random.random()

print r(-1, 1)
````

EDIT: @San4ez's `random.uniform(-1, 1)` is the correct way. No need to reinvent the wheel… Anyway, `random.uniform()` is coded as:

````def uniform(self, a, b):
    "Get a random number in the range [a, b) or [a, b] depending on rounding."
    return a + (b-a) * self.random()
````
Google Endpoints Android Client Library for multiple api classes I split up the code for my endpoints api over several classes with the @endpoints api decorator This was the way recommended by Google (<a href="https://cloud google com/appengine/docs/python/endpoints/create_api#creating_an_api_implemented_with_multiple_classes" rel="nofollow">source</a>) Now I am trying to generate the Android client libraries using the command line tool I can generate the library for one class using the following command `endpointscfg py get_client_lib java -bs gradle api HelloWorldApi` and this works But I have other classes that need to also be part of the library eg `endpointscfg py get_client_lib java -bs gradle api GoodbyeWorldApi` I do not know how to generate one complete library encompassing both of these classes
Did some looking around the endpointscfg.py file in Google's tools folder and realized that it takes multiple remote service classes as arguments. So I was able to use the following command to create client libraries from all the classes:

`endpointscfg.py get_client_lib java api.GoodbyeWorldApi api.HelloWorldApi -bs gradle`
Python "TypeError: expected string or buffer" with re and csv So I am working on a program that grabs a Twitter username from a csv file and plugs it into a function that downloads all the tweets I pretty much have gotten it to work except I think the output of the row from the csv has brackets and apostrophes `['POTUS']` instead of `POTUS` which Twitter will not accept Here is the code I am using: ````with open('names csv') as namescsv: namereader = csv reader(namescsv) for row in namereader: row = re sub(r'[^\w=]' '' row) print row ```` I used re to try to remove the odd characters but when I execute the code I get this error: ````File "/home/ian/Desktop/tweepy_scripts/tweetdownloader_allcsv_v2 py" line 66 in <module&gt; row = re sub(r'[^\w=]' '' row) File "/usr/lib/python2 7/re py" line 151 in sub return _compile(pattern flags) sub(repl string count) TypeError: expected string or buffer ```` Some help would be awesome! I am a beginner and my attempts to solve the problem using previous articles has not yielded much
<blockquote> I think the output of the row from the csv has brackets and apostrophes `['POTUS']` instead of `POTUS` </blockquote> No it does not The output of the row from the csv module is a `list` of `str`s When you display the list (using for example `print`) it is displayed with the punctuation you describe Instead of passing `row` off to the Twitter API you might need to pass a single cell of the row The first cell is called `row[0]` so you might need: ````result = whatever the twitter api is called(row[0]) ````
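So the fix is simply to index into the row rather than regex-scrubbing its string representation, e.g.:

````import csv

with open('names.csv') as namescsv:
    namereader = csv.reader(namescsv)
    for row in namereader:
        username = row[0].strip()       # e.g. 'POTUS'
        print username
        # download_tweets(username)     # hypothetical: your existing tweet-download function
````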
What is the feeling of God being everywhere?
omnipresence
Emacs - Python Indent in parenthesis My default python mode in emacs indents the following multiline code this way: ````mydict = { "a": 1 "b": 2 } ```` which is okay as of <a href="http://legacy python org/dev/peps/pep-0008/#indentation" rel="nofollow">PEP8</a> I' d rather use the following style: ````mydict = { "a": 1 "b": 2 } ```` which is also ok with PEP8 How do I tell emacs to indent the last parenthesis accordingly to the beginning of the previous line?
Just install the latest python-mode el (from <a href="https://launchpad net/python-mode" rel="nofollow">https://launchpad net/python-mode</a>) In the latest version 6 1 4 the PEP8 "indent-alternatives when closing a list" is implemented (see <a href="https://mail python org/pipermail/python-mode/2014-January/001593 html" rel="nofollow">release notes</a> ) In your case you do not have to customize it and just accept the default By default `py-close-at-start-column-p` is `nil` which looks like: ````my_list = [ 1 2 3 4 5 6 ] result = some_function_that_takes_arguments( 'a' 'b' 'c' would' 'e' 'f' ) ````
How to change tab delimited in to comma delimited in pandas I do not know if this is something possible I am trying to append 12 files into a single file One of the files is tab delimited and the rest comma delimitted I loaded all the 12 files into dataframe and append it into an empty dataframe one by one in a loop ````list_of_files = glob glob(' /* txt') df = pd DataFrame() for filename in list_of_files: file = pd read_csv(filename) dfFilename = pd DataFrame(file) df = df append(dfFilename ignore_index=True) ```` But the big file is not in the format I wanted it to be And I think the problem is with the tab delimited file And I tried to run the code without the tab delimited file and the format of the appended file is fine So I was thinking if it is possible to change the tab delimited format into comma delimited using pandas Thank you for your help and suggestion
For the file that is tab-separated you should use: ````file = pd read_csv(filename sep="\t") ```` Pandas `read_csv` has quite a lot of parameters check it out in the <a href="http://pandas pydata org/pandas-docs/stable/generated/pandas read_csv html" rel="nofollow">docs</a>
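In the appending loop from the question that could look like the sketch below; alternatively `sep=None` with the python engine asks pandas to sniff the delimiter per file. The file name used to detect the tab-separated file is a hypothetical placeholder:

````import glob
import pandas as pd

list_of_files = glob.glob('./*.txt')
frames = []
for filename in list_of_files:
    sep = '\t' if filename.endswith('tab_file.txt') else ','   # hypothetical name of the tab file
    frames.append(pd.read_csv(filename, sep=sep))
    # or, letting pandas guess the delimiter for every file:
    # frames.append(pd.read_csv(filename, sep=None, engine='python'))

df = pd.concat(frames, ignore_index=True)
````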
"MetaClass" "__new__" "cls" and "super" - what is the mechanism exactly? I have read posts like these: - <a href="http://stackoverflow com/questions/100003/what-is-a-metaclass-in-python">What is a metaclass in Python?</a> - <a href="http://stackoverflow com/questions/392160/what-are-your-concrete-use-cases-for-metaclasses-in-python">What are your (concrete) use-cases for metaclasses in Python?</a> - <a href="http://fuhm net/super-harmful/" rel="nofollow">Python's Super is nifty but you cannot use it</a> But somehow I got confused Many confusions like: When and why would I have to do something like the following? ````# Refer link1 return super(MyType cls) __new__(cls name bases newattrs) ```` or ````# Refer link2 return super(MetaSingleton cls) __call__(*args **kw) ```` or ````# Refer link2 return type(self __name__ other __name__ (self other) {}) ```` How does super work exactly? What is class registry and unregistry in link1 and how exactly does it work? (I thought it has something to do with <a href="http://en wikipedia org/wiki/Singleton_pattern" rel="nofollow">singleton</a> I may be wrong being from C background My coding style is still a mix of functional and OO) What is the flow of class instantiation (subclass metaclass super type) and method invocation ( ````metaclass>__new__ metaclass>__init__ super>__new__ subclass>__init__ inherited from metaclass ```` ) with well-commented working code (though the first link is quite close but it does not talk about cls keyword and super( ) and registry) Preferably an example with multiple inheritance P S : I made the last part as code because Stack&nbsp;Overflow formatting was converting the text `metaclass>__new__` to metaclass-><strong>new</strong>
OK, you have thrown quite a few concepts into the mix here! I am going to pull out a few of the specific questions you have.

In general, understanding super, the MRO and metaclasses is made much more complicated because there have been lots of changes in this tricky area over the last few versions of Python.

<a href="http://docs.python.org/reference/datamodel.html" rel="nofollow">Python's own documentation</a> is a very good reference, and completely up to date. There is an <a href="http://www.ibm.com/developerworks/linux/library/l-pymeta.html" rel="nofollow">IBM developerWorks article</a> which is fine as an introduction and takes a more tutorial-based approach, but note that it is five years old and spends a lot of time talking about the older-style approaches to meta-classes.

<strong>`super`</strong> is how you access an object's super-classes. It is more complex than (for example) Java's `super` keyword, mainly because of multiple inheritance in Python. As <a href="http://fuhm.net/super-harmful/" rel="nofollow">Super Considered Harmful</a> explains, using `super()` can result in you implicitly using a chain of super-classes, the order of which is defined by the <a href="http://www.python.org/download/releases/2.3/mro/" rel="nofollow">Method Resolution Order</a> (MRO).

You can see the MRO for a class easily by invoking `mro()` on the class (not on an instance). Note that meta-classes are not in an object's super-class hierarchy.

<a href="http://stackoverflow.com/users/17624/thomas-wouters">Thomas</a>' description of meta-classes <a href="http://stackoverflow.com/questions/100003/what-is-a-metaclass-in-python">here</a> is excellent:

<blockquote>
A metaclass is the class of a class. Like a class defines how an instance of the class behaves, a metaclass defines how a class behaves. A class is an instance of a metaclass.
</blockquote>

In the examples you give, here is what is going on:

- The call to `__new__` is being bubbled up to the next thing in the MRO. In this case, `super(MyType, cls)` would resolve to `type`; calling `type.__new__` lets Python complete its normal instance creation steps.

- This example is using meta-classes to enforce a singleton. He is overriding `__call__` in the metaclass so that whenever a class instance is created, he intercepts that and can bypass instance creation if there already is one (stored in `cls.instance`). Note that overriding `__new__` in the metaclass will not be good enough, because that is only called when creating the <em>class</em>. Overriding `__new__` on the class would work, however.

- This shows a way to dynamically create a class. Here he is appending the supplied class's name to the created class name, and adding it to the class hierarchy too.

I am not exactly sure what sort of code example you are looking for, but here is a brief one showing meta-classes, inheritance and method resolution:

````class MyMeta(type):
    def __new__(cls, name, bases, dct):
        print "meta: creating %s %s" % (name, bases)
        return type.__new__(cls, name, bases, dct)

    def meta_meth(cls):
        print "MyMeta.meta_meth"

    __repr__ = lambda c: c.__name__

class A(object):
    __metaclass__ = MyMeta
    def __init__(self):
        super(A, self).__init__()
        print "A init"

    def meth(self):
        print "A meth"

class B(object):
    __metaclass__ = MyMeta
    def __init__(self):
        super(B, self).__init__()
        print "B init"

    def meth(self):
        print "B meth"

class C(A, B):
    __metaclass__ = MyMeta
    def __init__(self):
        super(C, self).__init__()
        print "C init"

>>> c_obj = C()
meta: creating A (<type 'object'>,)
meta: creating B (<type 'object'>,)
meta: creating C (A, B)
B init
A init
C init
>>> c_obj.meth()
A meth
>>> C.meta_meth()
MyMeta.meta_meth
>>> c_obj.meta_meth()
Traceback (most recent call last):
  File "mro.py", line 38, in <module>
    c_obj.meta_meth()
AttributeError: 'C' object has no attribute 'meta_meth'
````
Who was one of the presidential candidates to debate in 1992?
null
wrong in my script for download? I write this script ````import urllib urllib urlretrieve("URL" "path\ name jpg") ```` It is working But if there is no internet it is make wrong I want it if there is no internet wait to connect by internet then work again
You can write something like this:

````import urllib2

def wait_for_internet_connection():
    while True:
        try:
            response = urllib2.urlopen('http://74.125.113.99', timeout=1)
            return
        except urllib2.URLError:
            pass

def main():
    # your code here
    pass

wait_for_internet_connection()
main()
````

The while loop will execute until there is an active internet connection, then your code runs.
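Tying it back to the original script, you could then wrap the download itself in a retry loop that reuses the `wait_for_internet_connection()` function defined above. A sketch; "URL" and the destination path are the placeholders from the question:

````import urllib

def download(url, path):
    while True:
        wait_for_internet_connection()      # block until we are online
        try:
            urllib.urlretrieve(url, path)
            return
        except IOError:                     # connection dropped mid-download, retry
            pass

download("URL", "path/name.jpg")
````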
Using Tkinter user input for variables in functions I have two functions My first function creates a GUI where the user inputs min and max values for 8 different species My second function attempts to use those min and max values to create a simulation of 1000 mixtures within the boundaries of their respective min and max values whilst abiding by a number of different constraints However when I run the simulation I get no values I only get the CSV file with the headings of the species I also get no valuable error My code is below and I am out of ideas of how to make this work Any help would be much appreciated ````import Tkinter import pandas as pd import numpy as np class simulation_tk(Tkinter Tk): def __init__(self parent): Tkinter Tk __init__(self parent) self parent = parent self initialize() self grid() def initialize(self): self c2_low =Tkinter StringVar() self c3_low =Tkinter StringVar() self ic4_low =Tkinter StringVar() self nc4_low =Tkinter StringVar() self ic5_low =Tkinter StringVar() self nc5_low =Tkinter StringVar() self neoc5_low =Tkinter StringVar() self n2_low = Tkinter StringVar() self c2_high =Tkinter StringVar() self c3_high =Tkinter StringVar() self ic4_high =Tkinter StringVar() self nc4_high =Tkinter StringVar() self ic5_high =Tkinter StringVar() self nc5_high =Tkinter StringVar() self neoc5_high=Tkinter StringVar() self n2_high = Tkinter StringVar() self entry = Tkinter Entry(self textvariable = self c2_low) grid(column=0 row=1 sticky='EW') self entry = Tkinter Entry(self textvariable = self c2_high) grid(column=0 row=2 sticky='EW') self entry = Tkinter Entry(self textvariable = self c3_low) grid(column=0 row=3 sticky='EW') self entry = Tkinter Entry(self textvariable = self c3_high) grid(column=0 row=4 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic4_low) grid(column=1 row=1 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic4_high) grid(column=1 row=2 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc4_low) grid(column=1 row=3 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc4_high) grid(column=1 row=4 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic5_low) grid(column=0 row=5 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic5_high) grid(column=0 row=6 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc5_low) grid(column=0 row=7 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc5_high) grid(column=0 row=8 sticky='EW') self entry = Tkinter Entry(self textvariable = self neoc5_low) grid(column=1 row=5 sticky='EW') self entry = Tkinter Entry(self textvariable = self neoc5_high) grid(column=1 row=6 sticky='EW') self entry = Tkinter Entry(self textvariable = self n2_low) grid(column=1 row=7 sticky='EW') self entry = Tkinter Entry(self textvariable = self n2_high) grid(column=1 row=8 sticky='EW') self resizable(False False) button = Tkinter Button(self text=you"simulate" command =self simulation) button grid(column=3 row=9) def simulation(self): sample_runs =10000 # Sample Population needs to be higher than exporting population export_runs = 1000 # How many samples we actually take c2_low = self c2_low get() c2_high = self c2_high get() c3_low = self c3_low get() c3_high = self c3_high get() ic4_low = self ic4_low get() ic4_high =self ic4_high get() nc4_low =self nc4_low get() nc4_high = self nc4_high get() ic5_low = self ic5_low get() ic5_high = self ic5_high get() nc5_low = self nc5_low get() nc5_high = self nc5_high get() neoc5_low 
= self neoc5_low get() neoc5_high = self neoc5_high get() n2_low = self n2_low get() n2_high = self n2_high get() c2 = np random uniform(c2_low c2_high sample_runs) c3 = np random uniform(c3_low c3_high sample_runs) ic4 = np random uniform(ic4_low ic4_high sample_runs) nc4 = np random uniform(nc4_low nc4_high sample_runs) ic5 = np random uniform(ic5_low ic5_high sample_runs) nc5 = np random uniform(nc5_low nc5_high sample_runs) neoc5 = np random uniform(neoc5_low neoc5_high sample_runs) n2 = np random uniform(n2_low n2_high sample_runs) # SETS CONSTRAINTS BASED ON RANGES masked = np where((c3&gt;=c3_low) &amp; (c3<=c3_high) &amp; (c2&gt;=c2_low) &amp; (c2<= c2_high) &amp; (ic4&gt;=ic4_low) &amp; (ic4<= ic4_high) &amp; (nc4&gt;= nc4_low) &amp; (nc4<= nc4_high) &amp; (ic5&gt;= ic5_low) &amp; (ic5<= ic5_high)&amp; (nc5&gt;= nc5_low)&amp; (nc5<= nc5_high)&amp; (neoc5&gt;= neoc5_low)&amp; (neoc5<=neoc5_high) &amp; (n2&gt;=n2_low) &amp; (n2<= n2_high)) # MASKED CREATES AN INDEX (Where constraints are held) FOR LOOKING THROUGH DATA c2 = c2[masked][:export_runs] c3 = c3[masked][:export_runs] ic4 = ic4[masked][:export_runs] nc4 = nc4[masked][:export_runs] ic5 = ic5[masked][:export_runs] nc5 = nc5[masked][:export_runs] neoc5 = neoc5[masked][:export_runs] n2 = n2[masked][:export_runs] # DETERMINES CONC FROM METHANE BY BALANCE c1 = 100-c2-c3-nc4-ic4-nc5-ic5-neoc5-n2 #CREATES A SERIES FOR EACH COMPONENET AND ADDS COLUMNS TO A FINAL DATAFRAME c1_ser = pd Series(c1) c2_ser = pd Series(c2) c3_ser = pd Series(c3) ic4_ser = pd Series(ic4) nc4_ser = pd Series(nc4) ic5_ser = pd Series(ic5) nc5_ser = pd Series(nc5) neoc5_ser = pd Series(neoc5) n2_ser = pd Series(n2) #EXPORTS DATAFRAME TO CSV FILE NAMED LNG_DATA df = pd DataFrame([c1_ser c2_ser c3_ser ic4_ser nc4_ser ic5_ser nc5_ser neoc5_ser n2_ser]) T df columns = ['C1' 'C2' 'C3' 'nC4' 'iC4' 'nC5' 'iC5' 'neoC5' 'N2'] df to_csv(path to directory you want the saved file) if __name__ == "__main__": app = simulation_tk(None) app title('Simulation') app mainloop() ```` EDIT: The code for the original simulation function is below: ````import numpy as np import pandas as pd import time def LNG_SIMULATION(no_of_simulations): t0 = time time() # SET COMPOSITION RANGES HERE: c2_low =0; c2_high =14 c3_low =0; c3_high =4 nc4_low =0; nc4_high =1 5 ic4_low =0; ic4_high =1 2 nc5_low =0; nc5_high =0 1 ic5_low =0; ic5_high =0 1 neoc5_low =0; neoc5_high =0 01 n2_low =0; n2_high =1 5 # PRODUCES A RANDOM UNIFORM DISTRIBUTION BETWEEN LOW AND HIGH * runs sample_runs =10000 # Sample Population needs to be higher than exporting population export_runs = no_of_simulations # How many samples we actually take c2 = np random uniform(c2_low c2_high sample_runs) c3 = np random uniform(c3_low c3_high sample_runs) ic4 = np random uniform(ic4_low ic4_high sample_runs) nc4 = np random uniform(nc4_low nc4_high sample_runs) ic5 = np random uniform(ic5_low ic5_high sample_runs) nc5 = np random uniform(nc5_low nc5_high sample_runs) neoc5 = np random uniform(neoc5_low neoc5_high sample_runs) n2 = np random uniform(n2_low n2_high sample_runs) # SETS CONSTRAINTS BASED ON RANGES masked = np where((c3&gt;=0) &amp; (c3<=4) &amp; (c2&gt;=0) &amp; (c2<=14) &amp; (ic4&gt;=0) &amp; (ic4<=1 5) &amp; (nc4&gt;=0) &amp; (nc4<=1 2) &amp; (ic5&gt;=0) &amp; (ic5<=0 1)&amp; (nc5&gt;=0)&amp; (nc5<=0 1)&amp; (neoc5&gt;=0)&amp; (neoc5<=0 01) &amp; (n2&gt;=0) &amp; (n2<=1 5)) # MASKED CREATES AN INDEX (Where constraints are held) FOR LOOKING THROUGH DATA c2 = c2[masked][:export_runs] c3 = c3[masked][:export_runs] ic4 = 
ic4[masked][:export_runs] nc4 = nc4[masked][:export_runs] ic5 = ic5[masked][:export_runs] nc5 = nc5[masked][:export_runs] neoc5 = neoc5[masked][:export_runs] n2 = n2[masked][:export_runs] # DETERMINES CONC FROM METHANE BY BALANCE c1 = 100-c2-c3-nc4-ic4-nc5-ic5-neoc5-n2 #CREATES A SERIES FOR EACH COMPONENET AND ADDS COLUMNS TO A FINAL DATAFRAME c1_ser = pd Series(c1) c2_ser = pd Series(c2) c3_ser = pd Series(c3) ic4_ser = pd Series(ic4) nc4_ser = pd Series(nc4) ic5_ser = pd Series(ic5) nc5_ser = pd Series(nc5) neoc5_ser = pd Series(neoc5) n2_ser = pd Series(n2) print np min(c1); print np max(c1) # Check for methane range #EXPORTS DATAFRAME TO CSV FILE NAMED LNG_DATA df = pd DataFrame([c1_ser c2_ser c3_ser ic4_ser nc4_ser ic5_ser nc5_ser neoc5_ser n2_ser]) T df columns = ['C1' 'C2' 'C3' 'nC4' 'iC4' 'nC5' 'iC5' 'neoC5' 'N2'] df to_csv(filepath) t1 = time time() tfinal = t1-t0 'seconds' print tfinal LNG_SIMULATION(1000) ```` this gives the following output as a csv file: each row adds up to 100 hence the c1 = 100- (sum of all the other components) ````C1 C2 C3 nC4 iC4 nC5 iC5 neoC5 N2 0 82 85372539 12 99851014 2 642744858 0 129878248 0 800397967 0 002835756 0 01996335 0 00665644 0 545287856 1 97 53896049 1 246468861 0 00840227 0 616819596 0 340552181 0 093463733 0 0415282 0 002044789 0 11175988 2 96 06680372 1 005440722 0 427965685 0 944281965 0 354424967 0 029694142 0 046906668 0 001961002 1 122521133 3 92 152083 4 558717345 1 850648013 0 060053009 0 802721707 0 055533032 0 013490485 0 008897805 0 497855601 4 81 68486996 13 21690811 2 478113198 0 825638261 0 963227282 0 02162254 0 03812538 0 006329348 0 765165918 5 86 4237313 9 387647074 2 729233511 0 562534986 0 786110737 0 050537327 0 026122606 0 000290321 0 033792141 6 95 11319788 2 403944121 0 467770537 0 229967177 0 220494035 0 073742963 0 007893607 0 007473005 1 475516673 7 92 501114 2 677293658 2 742409857 0 608661787 0 237898432 0 073326044 0 030292277 0 002908029 1 126095919 8 89 83876672 5 850123215 2 598266005 0 060712896 0 29401403 0 037017143 0 048577495 0 001888549 1 270633946 9 84 14677099 13 9234657 0 214404288 0 535574576 0 677735065 0 061556983 0 015255684 0 006789481 0 418447232 10 94 73390493 2 302821233 1 478361587 0 500991046 0 022823156 0 030764131 0 024351373 0 009064709 0 896917832 ```` 1000 rows FINAL EDIT: ```` self entry = Tkinter Entry(self textvariable = self c2_low) grid(column=0 row=1 sticky='EW') self entry = Tkinter Entry(self textvariable = self c2_high) grid(column=1 row=1 sticky='EW') self entry = Tkinter Entry(self textvariable = self c3_low) grid(column=0 row=2 sticky='EW') self entry = Tkinter Entry(self textvariable = self c3_high) grid(column=1 row=2 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic4_low) grid(column=0 row=3 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic4_high) grid(column=1 row=3 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc4_low) grid(column=0 row=4 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc4_high) grid(column=1 row=4 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic5_low) grid(column=0 row=5 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic5_high) grid(column=1 row=5 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc5_low) grid(column=0 row=6 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc5_high) grid(column=1 row=6 sticky='EW') self entry = Tkinter Entry(self textvariable = self neoc5_low) grid(column=0 row=7 sticky='EW') 
self entry = Tkinter Entry(self textvariable = self neoc5_high) grid(column=1 row=7 sticky='EW') self entry = Tkinter Entry(self textvariable = self n2_low) grid(column=0 row=8 sticky='EW') self entry = Tkinter Entry(self textvariable = self n2_high) grid(column=1 row=8 sticky='EW') ````
The problem is that in your `np where` call your comparison is being performed between string values (i e the values in `c2_low` `c2_high` etc ) and numpy arrays That comparison will not work You need to convert those strings to floats like so: ````c2_low = float(self c2_low get()) ```` I will also note that I do not think you need your call to `np where` All you are doing there is making sure that the values of `c2` `c3` etc all lie within the specified ranges That should be true by default; those arrays were set up that way when you called `np random uniform` So you should be able to do away with your `masked` variable altogether If I make those changes to your code I am left with this: ````import Tkinter as Tkinter import pandas as pd import numpy as np class simulation_tk(Tkinter Tk): def __init__(self parent): Tkinter Tk __init__(self parent) self parent = parent self initialize() self grid() def initialize(self): self c2_low =Tkinter StringVar() self c3_low =Tkinter StringVar() self ic4_low =Tkinter StringVar() self nc4_low =Tkinter StringVar() self ic5_low =Tkinter StringVar() self nc5_low =Tkinter StringVar() self neoc5_low =Tkinter StringVar() self n2_low = Tkinter StringVar() self c2_high =Tkinter StringVar() self c3_high =Tkinter StringVar() self ic4_high =Tkinter StringVar() self nc4_high =Tkinter StringVar() self ic5_high =Tkinter StringVar() self nc5_high =Tkinter StringVar() self neoc5_high=Tkinter StringVar() self n2_high = Tkinter StringVar() self entry = Tkinter Entry(self textvariable = self c2_low) grid(column=0 row=1 sticky='EW') self entry = Tkinter Entry(self textvariable = self c2_high) grid(column=0 row=2 sticky='EW') self entry = Tkinter Entry(self textvariable = self c3_low) grid(column=0 row=3 sticky='EW') self entry = Tkinter Entry(self textvariable = self c3_high) grid(column=0 row=4 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic4_low) grid(column=1 row=1 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic4_high) grid(column=1 row=2 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc4_low) grid(column=1 row=3 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc4_high) grid(column=1 row=4 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic5_low) grid(column=0 row=5 sticky='EW') self entry = Tkinter Entry(self textvariable = self ic5_high) grid(column=0 row=6 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc5_low) grid(column=0 row=7 sticky='EW') self entry = Tkinter Entry(self textvariable = self nc5_high) grid(column=0 row=8 sticky='EW') self entry = Tkinter Entry(self textvariable = self neoc5_low) grid(column=1 row=5 sticky='EW') self entry = Tkinter Entry(self textvariable = self neoc5_high) grid(column=1 row=6 sticky='EW') self entry = Tkinter Entry(self textvariable = self n2_low) grid(column=1 row=7 sticky='EW') self entry = Tkinter Entry(self textvariable = self n2_high) grid(column=1 row=8 sticky='EW') self resizable(False False) button = Tkinter Button(self text=you"simulate" command =self simulation) button grid(column=3 row=9) def simulation(self): sample_runs =10000 # Sample Population needs to be higher than exporting population export_runs = 1000 # How many samples we actually take c2_low = float(self c2_low get()) c2_high = float(self c2_high get()) c3_low = float(self c3_low get()) c3_high = float(self c3_high get()) ic4_low = float(self ic4_low get()) ic4_high = float(self ic4_high get()) nc4_low = float(self nc4_low get()) nc4_high = 
float(self nc4_high get()) ic5_low = float(self ic5_low get()) ic5_high = float(self ic5_high get()) nc5_low = float(self nc5_low get()) nc5_high = float(self nc5_high get()) neoc5_low = float(self neoc5_low get()) neoc5_high = float(self neoc5_high get()) n2_low = float(self n2_low get()) n2_high = float(self n2_high get()) c2 = np random uniform(c2_low c2_high sample_runs) c3 = np random uniform(c3_low c3_high sample_runs) ic4 = np random uniform(ic4_low ic4_high sample_runs) nc4 = np random uniform(nc4_low nc4_high sample_runs) ic5 = np random uniform(ic5_low ic5_high sample_runs) nc5 = np random uniform(nc5_low nc5_high sample_runs) neoc5 = np random uniform(neoc5_low neoc5_high sample_runs) n2 = np random uniform(n2_low n2_high sample_runs) # SETS CONSTRAINTS BASED ON RANGES # masked = np where((c3&gt;=c3_low) &amp; (c3<=c3_high) &amp; (c2&gt;=c2_low) &amp; (c2<= c2_high) &amp; (ic4&gt;=ic4_low) &amp; # (ic4<= ic4_high) &amp; (nc4&gt;= nc4_low) &amp; (nc4<= nc4_high) &amp; (ic5&gt;= ic5_low) &amp; (ic5<= ic5_high)&amp; (nc5&gt;= nc5_low)&amp; # (nc5<= nc5_high)&amp; (neoc5&gt;= neoc5_low)&amp; (neoc5<=neoc5_high) &amp; (n2&gt;=n2_low) &amp; (n2<= n2_high)) # MASKED CREATES AN INDEX (Where constraints are held) FOR LOOKING THROUGH DATA c2 = c2[:export_runs] c3 = c3[:export_runs] ic4 = ic4[:export_runs] nc4 = nc4[:export_runs] ic5 = ic5[:export_runs] nc5 = nc5[:export_runs] neoc5 = neoc5[:export_runs] n2 = n2[:export_runs] # DETERMINES CONC FROM METHANE BY BALANCE c1 = 100-c2-c3-nc4-ic4-nc5-ic5-neoc5-n2 #CREATES A SERIES FOR EACH COMPONENET AND ADDS COLUMNS TO A FINAL DATAFRAME c1_ser = pd Series(c1) c2_ser = pd Series(c2) c3_ser = pd Series(c3) ic4_ser = pd Series(ic4) nc4_ser = pd Series(nc4) ic5_ser = pd Series(ic5) nc5_ser = pd Series(nc5) neoc5_ser = pd Series(neoc5) n2_ser = pd Series(n2) #EXPORTS DATAFRAME TO CSV FILE NAMED LNG_DATA df = pd DataFrame([c1_ser c2_ser c3_ser ic4_ser nc4_ser ic5_ser nc5_ser neoc5_ser n2_ser]) T df columns = ['C1' 'C2' 'C3' 'nC4' 'iC4' 'nC5' 'iC5' 'neoC5' 'N2'] df to_csv('output csv') if __name__ == "__main__": app = simulation_tk(None) app title('Simulation') app mainloop() ```` I have tested this with Python 2 7 and numpy 1 7 1 as well as Python 3 4 with numpy 1 9 2 (with an appropriate change to the tkinter import statement) and in both cases I get a fully populated CSV file where each row sums to 100
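For reference, here is the cast in isolation (the bound values are made up): `StringVar.get()` always returns a string, so it has to be converted before NumPy can treat it as a numeric bound:
````
import numpy as np

low, high = "0", "14"                      # what StringVar.get() hands back
# np.random.uniform(low, high, 5) would not work with string bounds,
# so cast to float first:
samples = np.random.uniform(float(low), float(high), 5)
print(samples)
````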
Heroku: command not found Working on Virtualenv I see similar questions but those are `Ruby on Rails` questions I am working on a `virtualenv` using `python` and `django` I installed `django-toolbelt` I also tried `pip install heroku` When I do: ````whereis heroku ```` I get this: ````heroku: /usr/bin/heroku ```` But when I try `heroku create` I get: ````heroku: command not found ```` ?? Thanks a lot for your help I am not sure how to solve this
This solved it for me: ````wget -qO- https://toolbelt heroku com/install sh | bash echo 'PATH="/usr/local/heroku/bin:$PATH"' &gt;&gt; ~/ profile ````
What name does the Quran use for itself that means "revelation"?
waḥy
What is the opposite of a grid system?
null
PyDev plugin for Eclipse I just installed the PyDev plugin for Eclipse Everything seems to be configured properly but I do not see syntax highlighting Do you know what might be the problem?
You are probably not opening the file with the PyDev editor Make sure that the file association in window > preferences > general > editors > file associations for py files is correct (it should be marked to be opened by Python Editor as the default option) If it still did not open you can try right-clicking the file and choosing 'Open With > Python Editor' Also I would recommend you follow the <a href="http://pydev org/manual_101_root html" rel="nofollow">PyDev Getting Started Guide</a>
Replace terms introduced by TinyMCE when the text is viewed I am designing a digital book in Django I have a glossary of terms the writers write the pages of the book using TinyMCE and when the readers view a page I want the glossary terms in this text to appear as links to the glossary definition The text I store in the database is HTML and I only need to search the plain text and replace each plain 'term' with a linked 'term' when the page is accessed Any ideas? Regards and sorry for my English
Dealing directly with HTML code is never a good idea if you simply do a replace on the html text you may get into problems like this: ````<img src="static example com/jinja-templating"/&gt; ```` becoming: ````<img src="static example com/<a href='/glossary?word=jinja'&gt;jinja</a&gt;-templating"/&gt; ```` which is absolutely destructive No words <h2>So what can I do?</h2> <h3>HTML Parser</h3> I highly recommend learning and using an HTML parser like <a href="http://www crummy com/software/BeautifulSoup/" rel="nofollow">BeautifulSoup</a> <h3>Regex</h3> Regex is also not considered safe when dealing directly with html however at times it can get the job done For your case I decided to come up with a regular expression which <them>might</them> get it done ````import re html = '<div id="term"&gt;<span style="term:10px"&gt;term</span&gt;<img src="static example com/term"/&gt;</div&gt;<div&gt;the technology term is amazing</div&gt;' glossaried = re sub(r'&gt;([^<&gt;]*)term([^<&gt;]*)<' r'&gt;\1<a href="/glossary?word=term"&gt;term</a&gt;\2<' html) print glossaried ```` ````'<div id="term"&gt;<span style="term:10px"&gt;<a href="/glossary?word=term"&gt;term</a&gt;</span&gt;<img src="static example com/term"/&gt;</div&gt;<div&gt;the technology <a href="/glossary?word=term"&gt;term</a&gt; is amazing</div&gt;' ````
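A rough sketch of the parser route recommended above, using BeautifulSoup so that only text nodes are rewritten (the word 'term', the sample HTML and the glossary URL are placeholders):
````
import re
from bs4 import BeautifulSoup

html = '<div id="term"><img src="static/term.png"/><p>the technology term is amazing</p></div>'
soup = BeautifulSoup(html, "html.parser")
word = re.compile(r"\bterm\b")

# find_all(string=...) only returns text nodes, so attributes and tag
# names such as id="term" or the image path are never rewritten
for node in soup.find_all(string=word):
    linked = word.sub('<a href="/glossary?word=term">term</a>', node)
    node.replace_with(BeautifulSoup(linked, "html.parser"))

print(soup)
````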
How to use PyOrient to create functions (stored procedures) in OrientDB? I am trying to create an OrientDB graph database using PyOrient and I cannot find enough documentation to allow me to get Functions working I have been able to create a function using `record_create` into the `ofunction` cluster but although it does not crash it does not appear to work either Here is my code: ````#!/usr/bin/python import pyorient ousername="user" opassword="pass" client = pyorient OrientDB("localhost" 2424) session_id = client connect( ousername opassword ) db_name="database" client db_create( db_name pyorient DB_TYPE_GRAPH pyorient STORAGE_TYPE_PLOCAL ) # Set up the schema of the database client command( "create class URL extends V" ) client command( "CREATE PROPERTY URL url STRING") client command( "CREATE PROPERTY URL id INTEGER") client command( "CREATE SEQUENCE urlseq") client command( "CREATE INDEX urls ON URL (url) UNIQUE") # Get the id numbers of all the clusters info=client db_reload() clusters={} for c in info: clusters[c name]=c id print(clusters) # Construct a test function # All this should do is create a new URL vertex Eventually it will check for uniqueness of url etc code="INSERT INTO URL SET id = sequence('urlseq') next() url='?'" addURL_func = { '@OFunction': { 'name': 'addURL' 'code':'orient getGraph() command("sql" "%s" [urlparam]);' % code 'language':'javascript' 'parameters':'urlparam' 'idempotent':False } } client record_create( clusters['ofunction'] addURL_func ) # Assume allURLs contains the list of URLs I want to store for url in allURLs: client command("select addURL('%s')" % url) vs = client command("select * from URL") for v in vs: print(v url) ```` Doing all the `select addURL` bits runs happily but doing `select * from URL` simply times out Presumably because (as I have discovered by examining the database in Studio) there are still no `URL` vertices Although why that should timeout rather than returning an empty list or giving a useful error message I am not sure What am I doing wrong and is there an easier way to create Functions through PyOrient? I do not want to just write the Functions in Studio because I am prototyping and want them written from the Python code rather than being lost every time I drop the mangled experimental graph! I have mainly been using the <a href="http://orientdb com/docs/2 0/orientdb wiki/Functions html" rel="nofollow">OrientDB wiki page</a> to find out about OrientDB functions and the <a href="https://github com/mogui/pyorient" rel="nofollow">PyOrient github page</a> as almost my only source of documentation for that <hr> Edit: I have been able to create a working Function in SQL (see my own answer below) but I still cannot create a working Javascript Function which creates a vertex My current best attempt is: ````code2="""var g=orient getGraph();g command('sql' 'CREATE VERTEX URL SET id = sequence(\\"urlseq\\") next() url = \\"'+urlparam+'\\"' [urlparam]);""" myFunction2 = 'CREATE FUNCTION addURL2 "' code2 '" parameters [urlparam] idempotent false language javascript' client command(myFunction2) ```` which runs without crashing when called from PyOrient but does not actually create any vertices But if I call it from Studio it works!?! I have no idea what is going on
You could try something like : ````code="var g=orient getGraph();\ng command(\\'sql\\' \\'%s\\' [urlparam]);" myFunction = "CREATE FUNCTION addURL '" code "' parameters [urlparam] idempotent false language javascript" client command(myFunction); ```` <strong>UPDATE</strong> I used this code (version 2 2 5) and it worked for me ````code="var g=orient getGraph() command(\\'sql\\' \\'%s\\' [urlparam]);" myFunction = "CREATE FUNCTION addURL '" code "' parameters [urlparam] idempotent false language javascript" client command(myFunction); ```` Hope it helps
Numpy: What is the correct way to upsample an array? ````octave:1&gt; a=[1 2 3] a = 1 2 3 octave:2&gt; k=[a;zeros(9 length(a))] k = 1 2 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ```` Is the below method the correct way to achieve it in Python: ````&gt;&gt;&gt; a=[1 2 3] &gt;&gt;&gt; np append(a np zeros((9 len(a)))) array([ 1 2 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]) ````
The octave solution results in a 10x3 matrix while your solution results in a 1-dimensional array with 30 elements I am assuming you want a matrix with the dimensions 10x3 right? ````&gt;&gt;&gt;a=np array((1 2 3)) &gt;&gt;&gt;k=np vstack((a np zeros((9 len(a))))) array([[ 1 2 3 ] [ 0 0 0 ] [ 0 0 0 ] [ 0 0 0 ] [ 0 0 0 ] [ 0 0 0 ] [ 0 0 0 ] [ 0 0 0 ] [ 0 0 0 ] [ 0 0 0 ]]) ````
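Another option that mirrors the Octave statement even more closely is to preallocate the zero matrix and assign the first row, for example:
````
import numpy as np

a = np.array([1, 2, 3])
k = np.zeros((10, len(a)))   # 10x3 block of zeros, like [a; zeros(9, 3)] in shape
k[0, :] = a                  # first row holds a, the remaining rows stay zero
print(k)
````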
Python - retrieved different result when using curl and requests library I am trying to build a python crawler using the `requests` library When I use the `get` method I retrieve a result that looks like: `THá» THAO` But when I use `curl` I get `THỂ THAO` which is my expected result Here is my code: ````def get_raw_channel(): r = requests get('http://vtv vn/') raw_html = r text soup = BeautifulSoup(raw_html) o_tags = soup find_all("option") for o_tag in o_tags: print o_tag text # raw_channel = RawChannel(o_tag text strip() o_tag['value']) # channels_file write(raw_channel __str__() '\n') ```` Here is my curl cmd: `curl http://vtv vn/` <strong>Question:</strong> why are the results different? How can I achieve `curl`'s result by using `requests`?
I tried your code and in my case the encoding was 'ISO-8859-1' try to encode your data into UTF-8 before processing it in BS something like: ```` raw_html = r text encode("utf-8") soup = BeautifulSoup(raw_html) ```` <strong>UPDATE:</strong> I made some more tests and it looks like everything worked for me because I explicitly set the encoding for the request take a look ````In [1]: import requests In [2]: from BeautifulSoup import BeautifulSoup In [3]: r = requests get('http://vtv vn/') In [4]: r encoding = "utf-8" In [5]: raw_html = r text In [6]: soup = BeautifulSoup(raw_html) In [7]: soup findAll("option") Out[7]: [<option value="1"> VTV1</option> stripped out some output VTVCab3 - Thể thao TV</option> <option value="13"> stripped out some output ] ````
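If you prefer not to hard-code the charset, requests can also guess the encoding from the response body itself; a small sketch of that variant:
````
import requests
from BeautifulSoup import BeautifulSoup   # bs3, as in the answer above

r = requests.get('http://vtv.vn/')
r.encoding = r.apparent_encoding   # chardet-based guess instead of the header value
soup = BeautifulSoup(r.text)
````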
What royal symbol appears on the club's crest?
Crown of Aragon
How to extract data from txt file that is separated by varying space size? I am trying to extract data (date and time) from a ` txt` file increase it by one second and then compare it with the next piece of data so I can see if the date and time are continuous For example: ````15 6 19 9 12 59 0000000 some other stuff 15 6 19 9 13 0 0000000 ```` The problem is that the number of spaces between the digits varies (one space if the data on its right is two digits two spaces if the data on its right is one digit) How can I extract the dates increment them by one second and then compare them with the next date in the text without having to check for every space variation?
The `split` method of the string class already splits at any contiguous whitespace So just do `your_string split()`
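A small sketch of the whole round trip, assuming the fields are day, month, two-digit year, hour, minute, second and a fraction column:
````
from datetime import datetime, timedelta

line = "15 6 19  9 12 59  0000000"           # spacing varies; split() ignores it
d, m, y, hh, mm, ss, frac = line.split()
t = datetime(2000 + int(y), int(m), int(d), int(hh), int(mm), int(ss))
print(t + timedelta(seconds=1))               # 2019-06-15 09:13:00 -> compare with the next stamp
````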
How do I recreate a nested list from a long string of information in Python? I have an algorithm that encrypts a string into a long list of numbers using an encryption key that the program generates It works by encrypting one word at a time and putting it into a nested list For example: ````"4 3 1 4 5 5 2 4 6 2 3" ```` This would be two words because the first number is a four meaning the next four numbers represent four letters of a word Then after those 5 numbers there is a five meaning that the next five numbers represent five letters of a word I do not know how to convert: ````"4 3 1 4 5 5 2 4 6 2 3" ```` Into the nested list: ````[[3 1 4 5] [2 4 6 2 3] ] ```` I have tried many concepts but cannot seem to figure anything out Any ideas? Here is the encryption code if you need it: ````import string import random def generateKey(): return('' join(random SystemRandom() choice(string ascii_uppercase string digits) for _ in range(2))) def encrypt(message encryptionKey = list(generateKey())): print('' join(encryptionKey)) message = message split(' ') array = [] z = 0 for word in message: array append([]) for word in message: array[z] append(len(word)*ord(encryptionKey[0])) for letter in word: array[z] append(ord(letter)*ord(encryptionKey[1])) z = 1 z = 0 for row in array: for _set in row: print(str(_set) ' ' end='') print('\n') return(array) #Even though it returns a list this data will be transferred from one person to another via one long string of text ````
Assuming your encryption function is well-formed this should produce the output you requested Basically it works by creating a shared iterator that will keep its place between calls of the subroutine ````def sentence(text): iter_text = iter(text split()) # split text on spaces and create a single iterator from it def word(i_text num_chars): """Helper function to return a list of the num_chars length pulled out of the iterator i_text""" return [next(i_text) for _ in range(num_chars)] # [int(next(i_text)) ] for your literal output but since you are # encrypting as a string it seems more consistent to DECRYPT to a string return [word(iter_text int(ch)) for ch in iter_text] ```` Example: ````&gt;&gt;&gt; text = '4 3 1 4 5 5 2 4 6 2 3' &gt;&gt;&gt; result = sentence(text) &gt;&gt;&gt; print(result) [['3' '1' '4' '5'] ['2' '4' '6' '2' '3']] ````
Using soffice within python Command works in terminal but not in Python subprocess I am having the most frustrating problem with libreoffice in Python when I run the following in terminal I have no problem at all the pdf file is produced where I want it and life is dandy: ````cd /Applications/LibreOffice app/Contents/MacOS/ /soffice --convert-to pdf --outdir {output_folder} {path_to_docx_file}/{title} docx ```` However when I have tried to add this to my python script: ````SOFFICE = r'/Applications/LibreOffice app/Contents/MacOS/soffice' subprocess Popen([SOFFICE "--convert-to" "pdf" "--outdir" "{output_folder} " "{path_to_docx_file}/{title} docx"]) ```` I get an error saying : <blockquote> Error: source file could not be loaded </blockquote> I have tried opening up all the permissions of all of the binaries and files and this still does not work in the python script What am I doing wrong?
It is because you need to <strong>change the current working directory</strong> not just give an absolute path to the command ````subprocess Popen(["/Applications/LibreOffice app/Contents/MacOS/soffice" "--convert-to" "pdf" "--outdir" "{output_folder} " "{path_to_docx_file}/{title} docx"]) ```` Should be replaced with: ````subprocess Popen(["soffice" "--convert-to" "pdf" "--outdir" "{output_folder} " "{path_to_docx_file}/{title} docx"] cwd="/Applications/LibreOffice app/Contents/MacOS/") ```` Even if it <strong>seems to be quite similar</strong> there is a major difference between those two calls: the current working directory With the script: ````subprocess Popen(["/Applications/LibreOffice app/Contents/MacOS/soffice" "--convert-to" "pdf" "--outdir" "{output_folder} " "file docx"]) ```` If you are calling the python script in the ~ directory it will try to reach ~/file docx But in the second one : ````subprocess Popen(["soffice" "--convert-to" "pdf" "--outdir" "{output_folder} " "file docx"] cwd="/Applications/LibreOffice app/Contents/MacOS/") ```` It will try to reach the file in "/Applications/LibreOffice app/Contents/MacOS/file docx" which is the <strong>same behaviour as what you are doing with the `cd` command</strong> (in fact the cd command changes the current directory so giving the cwd argument is the same as making a `cd` call) You can <strong>also use absolute paths</strong> for all your files and it will solve the problem too but it is not what you are trying to do It depends on the software you are trying to build and its purpose It is why the prompt says that the file does not exist The program cannot find the file in `WHERE_YOU_CALL_THE_SCRIPT/{path_to_docx_file}/{title} docx` because I suppose that the file is in `/Applications/LibreOffice app/Contents/MacOS/{path_to_docx_file}/{title} docx`
How to merge two pandas dataframe in parallel (multithreading or multiprocessing) Without doing it in parallel I can merge the left and right dataframe on the `key` column using the code below but it will be too slow since both are very large is there any way I can do it in parallel efficiently? I have 64 cores and so practically I can use 63 of them to merge these two dataframes ````left = pd DataFrame({'key': ['K0' 'K1' 'K2' 'K3'] 'A': ['A0' 'A1' 'A2' 'A3'] 'B': ['B0' 'B1' 'B2' 'B3']}) right = pd DataFrame({'key': ['K0' 'K1' 'K2' 'K3'] 'C': ['C0' 'C1' 'C2' 'C3'] 'D': ['D0' 'D1' 'D2' 'D3']}) result = pd merge(left right on='key') ```` output will be : ````left: A B key 0 A0 B0 K0 1 A1 B1 K1 2 A2 B2 K2 3 A3 B3 K3 right: C D key 0 C0 D0 K0 1 C1 D1 K1 2 C2 D2 K2 3 C3 D3 K3 result: A B key C D 0 A0 B0 K0 C0 D0 1 A1 B1 K1 C1 D1 2 A2 B2 K2 C2 D2 3 A3 B3 K3 C3 D3 ```` I want to do this in parallel so I can do it at speed
You can improve the speed (by a factor of about 3 on the given example) of your merge by making the `key` column the index of your dataframes and using `join` instead ````left2 = left set_index('key') right2 = right set_index('key') In [46]: %timeit result2 = left2 join(right2) 1000 loops best of 3: 361 µs per loop In [47]: %timeit result = pd merge(left right on='key') 1000 loops best of 3: 1 01 ms per loop ````
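If you really want to use many cores, one hedged option (not from the answer above) is to split one frame into chunks, merge each chunk in its own process and concatenate the results; whether this beats a single merge depends on how expensive pickling the frames between processes is:
````
import numpy as np
import pandas as pd
from concurrent.futures import ProcessPoolExecutor

def merge_chunk(args):
    chunk, right = args
    return pd.merge(chunk, right, on='key')

def parallel_merge(left, right, workers=4):
    chunks = np.array_split(left, workers)            # one piece of `left` per worker
    with ProcessPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(merge_chunk, [(c, right) for c in chunks])
    return pd.concat(list(parts), ignore_index=True)

if __name__ == '__main__':
    left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'], 'A': ['A0', 'A1', 'A2', 'A3']})
    right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'], 'C': ['C0', 'C1', 'C2', 'C3']})
    print(parallel_merge(left, right, workers=2))
````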
How to dynamically allocate memory in Python Is there any method in python that I can use to get a block of memory from the heap and use a variable to reference it Just like the keyword "new" or the function `malloc()` in other languages: ````Object *obj = (Object *) malloc(sizeof(Object)); Object *obj = new Object(); ```` In the project my program is waiting to receive some data in uncertain intervals and with a certain length of bytes when correct I used to it like this: ````void receive()// callback { if(getSize()<=sizeof(DataStruct)) { DataStruct *pData=malloc(sizeof(DataStruct)); if(recvData(pData)&gt;0) list_add(globalList pData); } } void worker() { init() while(!isFinish) { dataProcess(globalList); } } ```` Now I want to migrate these old project to python and I tried to do it like this: ````def reveive(): data=dataRecv() globalList append(data) ```` However I get the all item in the list are same and equal to the latest received item It is obvious that all the list items are point to the same memory adress and I want to get a new memory adress each the function is called
You should read the <a href="https://docs python org/3/tutorial/index html" rel="nofollow">Python tutorial</a> You can create lists dictionaries objects and closures in Python All these live in the (Python) <a href="https://en wikipedia org/wiki/Memory_management#HEAP" rel="nofollow">heap</a> and Python has a naive <a href="https://en wikipedia org/wiki/Garbage_collection_%28computer_science%29" rel="nofollow">garbage collector</a> (reference counting marking for circularity) <sup>(the Python GC is naive because it does not use sophisticated GC techniques; hence it is slower than e g Ocaml or many JVM generational copying garbage collectors; read the <a href="http://gchandbook org/" rel="nofollow">GC handbook</a> for more; however the Python GC is much more friendly to external C code)</sup>
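The symptom in the question (every list entry equal to the last item received) usually means the callback keeps appending the same mutable object instead of a fresh one; a tiny illustration with made-up data:
````
items_wrong, items_ok = [], []
shared = {}                        # one buffer reused for every "receive"

for i in range(3):
    shared["value"] = i            # mutate the same dict each time
    items_wrong.append(shared)     # every entry points at that one dict

    fresh = {"value": i}           # allocate a new dict per iteration instead
    items_ok.append(fresh)

print(items_wrong)                 # [{'value': 2}, {'value': 2}, {'value': 2}]
print(items_ok)                    # [{'value': 0}, {'value': 1}, {'value': 2}]
````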
How to find element on the page by keywords in list using Selenium? I am trying to insert data into contact pages in some sites but they have different html structure So on the first page I have 3 fields (name phone message) on the second page I have 3 fields (first_name phone comment) So I need using Python/Selenium find this fields (inputs/textareas) using <them>regexp</them> Main idea is build some lists with keywords (first_name name your_name firstname etc) then try to find text field with this keywords (example: name="name") Now i am write this: ````contact = ['telephone' 'cellphone' 'phone'] q = driver find_element_by_xpath("//*[contains(@name 'phone')]") ```` So the question is how to dynamically find all text fields and submit button on the contact pages of some sites by using lists of keyword?
As said in comment it can be done quite easily by chaining a xpath query with "or" one way to do it: ````# I use lxml to demo the xpath which should be the same as in selenium In [7]: from lxml import html # just a sample In [8]: s = """<div id="contact-area"&gt; : <form method="post" action="contactengine php"&gt; : <label for="Name"&gt;Name:</label&gt; : <input type="text" name="Name" id="Name" /&gt; : <label for="City"&gt;City:</label&gt; : <input type="text" name="City" id="City" /&gt; : <label for="Email"&gt;Email:</label&gt; : <input type="text" name="Email" id="Email" /&gt; : <label for="Message"&gt;Message:</label&gt;<br /&gt; : <textarea name="Message" rows="20" cols="20" id="Message"&gt;</textarea&gt; : <input type="submit" name="submit" value="Submit" class="submit-button" /&gt; : </form&gt; : <div style="clear: both;"&gt;</div&gt; : </div&gt;""" In [9]: tree = html fromstring(s) In [10]: contact = ["Name" "Phone" "Message" "Comment"] # construct the query with "or" chaining with all keywords In [11]: query = " or " join("contains(@name '%s')" % field for field in contact) In [12]: query Out[12]: "contains(@name 'Name') or contains(@name 'Phone') or contains(@name 'Message') or contains(@name 'Comment')" ```` Results: ````In [13]: tree xpath("//*[%s]" % query) Out[13]: [<InputElement 10e34c8e8 name='Name' type='text'&gt; <TextareaElement 10e34c9f0 name='Message'&gt;] ```` Hope this helps <strong>Edit</strong>: Since your elements are somehow invisible by the time the page loads (either by css or JavaScript) please refer to my another answer in this <a href="http://stackoverflow com/questions/29932091/selenium-webdriver-python-element-is-not-currently-visible-same-class-name/29933749#29933749">SO</a> to execute JavaScript to "enable" those elements' visibilities I will not give detailed explanation here as this should really belong to another question
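On the Selenium side the same chained query can be passed straight to the driver; method names follow the Selenium 2/3 API used in the question, and the URL is a placeholder:
````
from selenium import webdriver

driver = webdriver.Firefox()                      # or whichever driver the project uses
driver.get("http://example.com/contact")          # placeholder URL

contact = ['telephone', 'cellphone', 'phone', 'name', 'first_name', 'comment', 'message']
query = " or ".join("contains(@name, '%s')" % field for field in contact)
fields = driver.find_elements_by_xpath("//*[%s]" % query)
submit = driver.find_elements_by_xpath("//input[@type='submit'] | //button[@type='submit']")
````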
Create Copy and Paste function with Python I am looking for a solution to add a function to a program using Python I want to copy and paste selected data (selected with the mouse) Example: - Copy "Hello" using CTRL-C from the sentence "Hello everybody" when I select Hello - Copy a part of array selected using CTRL-C My main problem is how to use the selected data But now I just can copy string defined in the code (here "tt"): ````clipboard OpenClipboard() clipboard EmptyClipboard() clipboard SetClipboardText('tt') clipboard CloseClipboard() ```` I tried several codes found on the internet and in this website but none of them fixed my problem
You do not need to call `clipboard SetClipboardText()` When a program supports the clipboard then <kbd>Ctrl+C</kbd> will copy the currently selected text into the clipboard There is nothing you need to do to make this happen If your question is "How can I trigger <kbd>Ctrl+C</kbd> from outside of a program to copy the currently selected text into the clipboard" then the answer is: Usually you cannot For security reasons most programs do not respond to artificial key events which other programs send them The second error is something else entirely Your class `CopyEvent` does not have a property `list` so Python cannot invoke methods in it
django update view password and ForeignKey I currently use the django update/create view and I have some problems: - how can I update/create a password? - I can show the old password but it does not save the new one with the django hash algorithm so the password is ignored and the user cannot log in anymore ````class Update(UpdateView): model = User fields = ['username' 'password'] ```` - how can I update/create a Foreign Key? - is there a way to customize the fields? i e to show them as radio/checkbox/password? thanks
<blockquote> I can show the old password but it does not save the new one with the django hash algorithm so the password is ignored and the user cannot log in anymore </blockquote> That is because for security Django does not store raw passwords it stores a hash of the raw password which is sufficient to tell if a user entered the correct password To set the password use `User set_password()` ````user = request user # or another user source user set_password('raw password string') ```` So instead of changing the field directly change the password like above to store the hash (not the raw password) and do not bother with "showing old password" a secure system will not be able to <a href="https://docs djangoproject com/en/1 8/ref/contrib/auth/#django contrib auth models User set_password" rel="nofollow">https://docs djangoproject com/en/1 8/ref/contrib/auth/#django contrib auth models User set_password</a>
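One hedged way to wire this into the UpdateView from the question (the field handling and success URL are illustrative, not the only option):
````
from django.contrib.auth.models import User
from django.views.generic.edit import UpdateView

class Update(UpdateView):
    model = User
    fields = ['username']                 # keep the raw password field out of the form
    success_url = '/'                     # or wherever you normally redirect

    def form_valid(self, form):
        new_password = self.request.POST.get('password')
        if new_password:                  # only hash and store if something was entered
            form.instance.set_password(new_password)
        return super(Update, self).form_valid(form)
````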
Learning Postfix While I was going through the postfix at this <a href="http://interactivepython org/runestone/static/pythonds/BasicDS/InfixPrefixandPostfixExpressions html#tbl-example1" rel="nofollow">site</a> I am just confused as after the definition of infix prefix and postfix it explains its rule of how to apply postfix as it says: <strong>Prefix expression notation requires that all operators precede the two operands that they work on Postfix on the other hand requires that its operators come after the corresponding operands </strong> Examples: A + B * C = Normal used (Infix) A + B * C = Now if we want to convert this into prefix we have to move all operators just before the two operands they work on i e + will come before A and * will come before B Ok so far so good + A * B C = Prefix A + B * C = Now if we want to convert this into postfix we have to move operators just after the two operands they work on i e + <strong>should</strong> come after B and * will come after C According to the rule it should be like this: <strong>A B + C *</strong> but in the example it shows us this: <strong>A B C * +</strong> = Postfix Please explain to me where I am going wrong Thanks in advance -- Regards Pradeep
You need to read it in the order they will be applied First `*` will be applied to B and C; then `+` will be applied to the result of that calculation and A So the site is correct Note this has nothing to do with Python which does not support postfix notation
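A tiny stack evaluator makes the ordering concrete: operands are pushed, and each operator consumes the two most recently pushed values, which is exactly why `A B C * +` multiplies first:
````
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.truediv}

def eval_postfix(expr):
    stack = []
    for tok in expr.split():
        if tok in OPS:
            b, a = stack.pop(), stack.pop()   # the operator acts on the two items below it
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack[0]

print(eval_postfix("1 2 3 * +"))   # 7.0: * sees 2 and 3 first, then + sees 1 and 6
````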
Image classification in Caffe always returns same class I have an issue with an image classification in caffe I use the imagenet model (from the caffe tutorial) for classification of data I created but I always get the same classification result (same class i e class 3) This is how I proceed: I use caffe for windows and Python as an interface (1) I gather the data My sample-images (training &amp; testing) are images which have a size of 5x5x3 (RGB) uint8 so its pixelvalues reach from 0-255 (2) I resize them to the size which imagenet requires: 256x256x3 Therefore I use the resize function in matlab (nearest neighbor interpolation) (3) I create a LevelDB and image_mean (4) Train my network (3000 iterations) The only parameters I change in the imagenet definition is the path to the mean image and the LevelDBs The results I get: ````I0428 12:38:04 350100 3236 solver cpp:245] Train net output #0: loss = 1 91102 (* 1 = 1 91102 loss) I0428 12:38:04 350100 3236 sgd_solver cpp:106] Iteration 2900 lr = 0 0001 I0428 12:38:30 353361 3236 solver cpp:229] Iteration 2920 loss = 2 18008 I0428 12:38:30 353361 3236 solver cpp:245] Train net output #0: loss = 2 18008 (* 1 = 2 18008 loss) I0428 12:38:30 353361 3236 sgd_solver cpp:106] Iteration 2920 lr = 0 0001 I0428 12:38:56 351630 3236 solver cpp:229] Iteration 2940 loss = 1 90925 I0428 12:38:56 351630 3236 solver cpp:245] Train net output #0: loss = 1 90925 (* 1 = 1 90925 loss) I0428 12:38:56 351630 3236 sgd_solver cpp:106] Iteration 2940 lr = 0 0001 I0428 12:39:22 341891 3236 solver cpp:229] Iteration 2960 loss = 1 98917 I0428 12:39:22 341891 3236 solver cpp:245] Train net output #0: loss = 1 98917 (* 1 = 1 98917 loss) I0428 12:39:22 341891 3236 sgd_solver cpp:106] Iteration 2960 lr = 0 0001 I0428 12:39:48 334151 3236 solver cpp:229] Iteration 2980 loss = 2 45919 I0428 12:39:48 334151 3236 solver cpp:245] Train net output #0: loss = 2 45919 (* 1 = 2 45919 loss) I0428 12:39:48 334151 3236 sgd_solver cpp:106] Iteration 2980 lr = 0 0001 I0428 12:40:13 040398 3236 solver cpp:456] Snapshotting to binary proto file Z:/DeepLearning/S1S2/Stockholm/models_iter_3000 caffemodel I0428 12:40:15 080418 3236 sgd_solver cpp:273] Snapshotting solver state to binary proto file Z:/DeepLearning/S1S2/Stockholm/models_iter_3000 solverstate I0428 12:40:15 820426 3236 solver cpp:318] Iteration 3000 loss = 2 08741 I0428 12:40:15 820426 3236 solver cpp:338] Iteration 3000 Testing net (#0) I0428 12:41:50 398375 3236 solver cpp:406] Test net output #0: accuracy = 0 11914 I0428 12:41:50 398375 3236 solver cpp:406] Test net output #1: loss = 2 71476 (* 1 = 2 71476 loss) I0428 12:41:50 398375 3236 solver cpp:323] Optimization Done I0428 12:41:50 398375 3236 caffe cpp:222] Optimization Done ```` (5) I run following code in Python to classify a single image: ````# set up Python environment: numpy for numerical routines and matplotlib for plotting import numpy as np import matplotlib pyplot as plt # display plots in this notebook # set display defaults plt rcParams['figure figsize'] = (10 10) # large images plt rcParams['image interpolation'] = 'nearest' # do not interpolate: show square pixels plt rcParams['image cmap'] = 'gray' # use grayscale output rather than a (potentially misleading) color heatmap # The caffe module needs to be on the Python path; # we will add it here explicitly import sys caffe_root = ' /' # this file should be run from {caffe_root}/examples (otherwise change this line) sys path insert(0 caffe_root 'python') import caffe # If you get "No module named _caffe" 
either you have not built pycaffe or you have the wrong path caffe set_mode_cpu() model_def = 'C:/Caffe/caffe-windows-master/models/bvlc_reference_caffenet/deploy prototxt' model_weights = 'Z:/DeepLearning/S1S2/Stockholm/models_iter_3000 caffemodel' net = caffe Net(model_def # defines the structure of the model model_weights # contains the trained weights caffe TEST) # use test mode (e g do not perform dropout) #load mean image file and convert it to a npy file-------------------------------- blob = caffe proto caffe_pb2 BlobProto() data = open('Z:/DeepLearning/S1S2/Stockholm/S1S2train256 binaryproto' "rb") read() blob ParseFromString(data) nparray = caffe io blobproto_to_array(blob) f = file('Z:/DeepLearning/PythonCalssification/imgmean npy' "wb") np save(f nparray) f close() # load the mean ImageNet image (as distributed with Caffe) for subtraction mu1 = np load('Z:/DeepLearning/PythonCalssification/imgmean npy') mu1 = mu1 squeeze() mu = mu1 mean(1) mean(1) # average over pixels to obtain the mean (BGR) pixel values print 'mean-subtracted values:' zip('BGR' mu) print 'mean shape: ' mu1 shape print 'data shape: ' net blobs['data'] data shape # create transformer for the input called 'data' transformer = caffe io Transformer({'data': net blobs['data'] data shape}) # set the size of the input (we can skip this if we are happy transformer set_transpose('data' (2 0 1)) # move image channels to outermost dimension transformer set_mean('data' mu) # subtract the dataset-mean value in each channel transformer set_raw_scale('data' 255) # rescale from [0 1] to [0 255] transformer set_channel_swap('data' (2 1 0)) # swap channels from RGB to BGR # set the size of the input (we can skip this if we are happy # with the default; we can also change it later e g for different batch sizes) net blobs['data'] reshape(50 # batch size 3 # 3-channel (BGR) images 227 227) # image size is 227x227 #load image image = caffe io load_image('Z:/DeepLearning/PythonCalssification/380 tiff') transformed_image = transformer preprocess('data' image) #plt imshow(image) # copy the image data into the memory allocated for the net net blobs['data'] data[ ] = transformed_image ### perform classification output = net forward() output_prob = output['prob'][0] # the output probability vector for the first image in the batch print 'predicted class is:' output_prob argmax() ```` It does not matter which input image I use I always get class "3" as classification result Here is a sample image I train/classify: <a href="http://i stack imgur com/nYod0 png"><img src="http://i stack imgur com/nYod0 png" alt="enter image description here"></a> I would be very happy if someone has an idea what is wrong? Thanks in advance!
If you always get the same class it means that the NN was not properly trained - Make sure that the training set is balanced When a classifier predicts always the same class it often because one class is over represented according to the others For example let us say that you have two classes the first represented by 95 instances and the second by 5 If the classifier classifies everything as belonging to the first class then he is already right at 95% - One obvious thing is that you should normalize the inputs `image / 255 0 - 0 5` it will center the input and decrease the standard deviation - After make sure that you have at least 4 times more data into your training set that you have weights in your NN - Last but not least make sure that the training set is properly shuffled
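The normalisation suggested above is just a couple of NumPy operations applied before the image is handed to the net; here with a made-up image standing in for the real input:
````
import numpy as np

# stand-in for an image loaded with pixel values in 0-255
image = np.random.randint(0, 256, size=(256, 256, 3)).astype(np.float32)
image = image / 255.0 - 0.5        # values now roughly centred on 0 in [-0.5, 0.5]
print(image.min(), image.max())
````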
Include tag in Django template language -- what can I pass in to it? OK so again there is likely a "simple" solution to this but I am a beginner and nothing seems simple to me I have a view and a template that shows the attributes of an instance of a Car class that I have modeled This Car class has a ManyToMany relationship with my custom User class The template that show the attributes of a given instance of Car has many variables The view for each Car works fine Here is what I cannot get to work: I have a user profile page for each instance of User From that page I want to show the attributes of each Car that a particular User has "favorited " I am unable to figure out how to do this I have tried the {% include %} tag to include a snippet of the Car template and then use a for statement to iterate through the favorite set of the User In theory this would populate the User page with each Car that they have "favorited" and show its attributes However I do not know how to pass the {% include %} tag the proper context so the attributes are populated correctly for each instance of Car Is this possible? Is there a simpler way to do it that I am just overlooking? Any help is appreciated Thanks!
Use the <a href="https://docs djangoproject com/en/1 7/ref/templates/builtins/#include" rel="nofollow">`{% include with %}`</a> syntax: ````{% for car in user favorite_cars all %} {% include "car html" with name=car name year=car year %} {% endfor %} ```` Another alternative is the <a href="https://docs djangoproject com/en/1 7/ref/templates/builtins/#with" rel="nofollow">`{% with %}`</a> tag: ````{% for car in user favorite_cars all %} {% with name=car name year=car year %} {% with color=car color %} {% include "car html" %} {% endwith %} {% endwith %} {% endfor %} ```` <strong>UPDATE</strong>: If data for the template cannot be obtained from the `Car` model then you have to use the <a href="https://docs djangoproject com/en/1 7/howto/custom-template-tags/#inclusion-tags" rel="nofollow">custom inclusion tag</a>: ````from django import template register = template Library() @register inclusion_tag('car html') def show_car(car): history = get_history_for_car(car) return {'name': car name 'history': history} ```` And the in the template: ````{% load my_car_tags %} {% for car in user favorite_cars all %} {% show_car car %} {% endfor %} ````
Pyramid resource: In plain English I have been reading on the ways to implement authorization (and authentication) to my newly created Pyramid application I keep bumping into the concept called "Resource" I am using python-couchdb in my application and not using RDBMS at all hence no SQLAlchemy If I create a Product object like so: ````class Product(mapping Document): item = mapping TextField() name = mapping TextField() sizes = mapping ListField() ```` Can someone please tell me if this is also called the resource? I have been reading the entire documentation of Pyramids but no where does it explain the term resource in plain simple english (maybe I am just stupid) If this is the resource does this mean I just stick my ACL stuff in here like so: ````class Product(mapping Document): __acl__ = [(Allow AUTHENTICATED 'view')] item = mapping TextField() name = mapping TextField() sizes = mapping ListField() def __getitem__(self key): return <something&gt; ```` If I were to also use Traversal does this mean I add the <strong>getitem</strong> function in my python-couchdb Product class/resource? Sorry it is just really confusing with all the new terms (I came from Pylons 0 9 7) Thanks in advance
I think the piece you are missing is the traversal part Is Product the resource? Well it depends on what your traversal produces it could produce products Perhaps it might be best to walk this through from the view back to how it gets configured when the application is created Here is a typical view ```` @view_config(context=Product permission="view") def view_product(context request): pass # would do stuff ```` So this view gets called when context is an instance of Product AND if the <strong>acl</strong> attribute of that instance has the "view" permission So how would an instance of Product become context? This is where the magic of traversal comes in The very logic of traversal is simply a dictionary of dictionaries So one way that this could work for you is if you had a url like ````/product/1 ```` Somehow some resource needs to be traversed by the segments of the url to determine a context so that a view can be determined What if we had something like ```` class ProductContainer(object): """ container = ProductContainer() container[1] &gt;&gt;&gt; <Product(1)&gt; """ def __init__(self request name="product" parent=None): self __name__ = name self __parent__ = parent self _request = request def __getitem__(self key): p = db get_product(id=key) if not p: raise KeyError(key) else: p __acl__ = [(Allow Everyone "view")] p __name__ = key p __parent__ = self return p ```` Now this is covered in the documentation and I am attempting to boil it down to the basics you need to know The ProductContainer is an object that behaves like a dictionary The "<strong>name</strong>" and "<strong>parent</strong>" attributes are required by pyramid in order for the url generation methods to work right So now we have a resource that can be traversed How do we tell pyramid to traverse ProductContainer? We do that through the Configurator object ```` config = Configurator() config add_route(name="product" path="/product/*traverse" factory=ProductContainer) config scan() application = config make_wsgi_app() ```` The factory parameter expects a callable and it hands it the current request It just so happens that ProductContainer <strong>init</strong> will do that just fine This might seem a little much for such a simple example but hopefully you can imagine the possibilities This pattern allows for very granular permission models If you do not want/need a very granular permission model such as row level acl's you probably do not need traversal instead you can use routes with a single root factory ```` class RootFactory(object): def __init__(self request): self _request = request self __acl__ = [(Allow Everyone "view")] # todo: add more acls @view_config(permission="view" route_name="orders") def view_product(context request): order_id product_id = request matchdict["order_id"] request matchdict["product_id"] pass # do what you need to with the input the security check already happened config = Configurator(root_factory=RootFactory) config add_route(name="orders" path="/order/{order_id}/products/{product_id}") config scan() application = config make_wsgi_app() ```` note: I did the code example from memory obviously you need all the necessary imports etc in other words this is not going to work as a copy/paste
Copy constructor in python Is there a copy constructor in python ? If not what would I do to achieve something similar ? The situation is that I am using a library and I have extended one of the classes there with extra functionality and I want to be able to convert the objects I get from the library to instances of my own class
I think you want the <a href="http://docs python org/library/copy html">copy module</a> ````import copy x = copy copy(y) # make a shallow copy of y x = copy deepcopy(y) # make a deep copy of y ```` you can control copying in much the same way as you control <a href="http://docs python org/library/pickle html#module-pickle">pickle</a>
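For the situation in the question (converting library objects into instances of your own subclass) one hedged pattern is an alternate constructor plus `__copy__`, so `copy.copy` keeps returning your class; `LibraryThing` is a stand-in for the real library class:
````
import copy

class LibraryThing(object):                  # stand-in for the class you extended
    def __init__(self, value):
        self.value = value

class MyThing(LibraryThing):
    @classmethod
    def from_library(cls, obj, extra=None):  # "copy constructor" for library objects
        new = cls(obj.value)
        new.extra = extra
        return new

    def __copy__(self):                      # used by copy.copy()
        return MyThing.from_library(self, getattr(self, "extra", None))

lib_obj = LibraryThing(42)
mine = MyThing.from_library(lib_obj, extra="tag")
clone = copy.copy(mine)
print(type(clone).__name__, clone.value, clone.extra)   # MyThing 42 tag
````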
The winery of what notable director provided wines for True Grit?
null
scheduling embedded python processes I have been trying to create a C++ program that embeds multiple python threads Due to the nature of the program the advantage of multitasking comes from asynchronous I/O; but due to some variables that need to be altered between context switching I need to control the scheduling I thought that because of python's GIL lock this would be simple enough but it is turning out not to be: python wants to use POSIX threads rather than software threads I cannot figure out from the documentation what happens if I store the result of `PyEval_SaveThread()` and do not call `PyEval_RestoreThread()` in the same function--so presumably I am not supposed to be doing that etc Is it possible to create a custom scheduler for embedded python threads or was python basically designed so that it cannot be done?
It turns out that using `PyEval_SaveThread()` and `PyEval_RestoreThread()` is unnecessary basically I used coroutines to run the scripts and control the scheduling In this case from <a href="http://www xmailserver org/libpcl html" rel="nofollow">libPCL</a> However this is not really much of a solution because if python encounters a syntax error it will segfault if it is in a coroutine oddly enough even if there is only one python script running in one coroutine this will still happen But at the very least they do not seem to conflict with each other