100
51,093,232
Increase a variable up to a maximum size
<p>I'm working on a simple game that you put the number of enemies and you hit them or cure yourself. But, the player has a maximum amount of 500 of life. The cure uses <code>random.randint(10,14)</code>.</p> <pre><code>if player_sp &gt;= 10: if player_vida &lt; 500: cura = random.randint(10,14) player_vida += cura print("Foi adicionado %i de vida!"%cura) player_sp -= 10 </code></pre> <p>This is the code to cure, but if you are curing the player the amount of life may pass 500. How can I cure the player without exceeding 500 in python? I tried making another <code>if else</code> using player_vida >= 500 but it didn't work. </p>
<p>How about replacing:</p> <pre><code>player_vida += cura </code></pre> <p>with:</p> <pre><code>player_vida = min(500, player_vida + cura) </code></pre>
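A runnable sketch of the full healing step with the cap applied (the function name `curar` and the tuple-return style are my additions; the variable names come from the question):

```python
import random

MAX_VIDA = 500  # maximum health from the question


def curar(player_vida, player_sp):
    """Heal the player by 10-14 points, capped at MAX_VIDA, at a cost of 10 SP."""
    if player_sp >= 10 and player_vida < MAX_VIDA:
        cura = random.randint(10, 14)
        # min() clamps the result so healing never exceeds the cap
        player_vida = min(MAX_VIDA, player_vida + cura)
        player_sp -= 10
    return player_vida, player_sp


# Healing near the cap never exceeds 500:
vida, sp = curar(495, 20)
print(vida, sp)
```

Note that the `player_vida < 500` guard is still useful: it stops the player from spending SP on a cure that can no longer do anything.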
python|pygame
4
101
42,471,299
How to create a data structure in a Python class
<pre class="lang-python prettyprint-override"><code>class CD(object): def __init__(self,id,name,singer): self._id = id self.n = name self.s = singer def get_info(self): info = 'self._id'+':'+ ' self.n' +' self.s' return info class collection(object): def __init__(self): cdfile = read('CDs.txt','r') </code></pre> <p>I have a file 'CDs.txt' which has a list of tuples look like this:</p> <pre><code>[ ("Shape of you", "Ed Sheeran"), ("Shape of you", "Ed Sheeran"), ("I don't wanna live forever", "Zayn/Taylor Swift"), ("Fake Love", "Drake"), ("Starboy", "The Weeknd"), ...... ] </code></pre> <p>Now in my collection class, I want to create a CD object for each tuple in my list and save them in a data structure. I want each tuple to have a unique id number, it doesn't matter they are the same, they need to have different id....can anyone help me with this? </p>
<p>You can use a simple loop with <a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow noreferrer"><code>enumerate</code></a> for this.</p> <pre><code># I don't know what you mean by 'file has list of tuples', # so I assume you loaded it somehow tuples = [("Shape of you", "Ed Sheeran"), ("Shape of you", "Ed Sheeran"), ("I don't wanna live forever", "Zayn/Taylor Swift"), ("Fake Love", "Drake"), ("Starboy", "The Weeknd")] cds = [] for i, (title, author) in enumerate(tuples): cds.append(CD(i, title, author)) </code></pre> <p>Now you have all the CDs in a nice, clean list.</p> <p>If your file is just the list in the form <code>[('title', 'author')]</code>, you can load it by parsing its contents with <code>ast.literal_eval</code>, which is safer than <code>eval</code> because it only accepts Python literals:</p> <pre><code>import ast tuples = ast.literal_eval(open('CDs.txt').read()) </code></pre>
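Putting the pieces together in a self-contained sketch (the `CD` class is trimmed to the fields from the question, the file contents are inlined as a string, and `ast.literal_eval` stands in for `eval` as a safer way to parse the literal):

```python
import ast


class CD(object):
    def __init__(self, id, name, singer):
        self._id = id
        self.n = name
        self.s = singer

    def get_info(self):
        # Format the actual values, not literal strings like 'self._id'
        return '{}: {} {}'.format(self._id, self.n, self.s)


# Stands in for open('CDs.txt').read()
file_contents = '''[("Shape of you", "Ed Sheeran"),
 ("Fake Love", "Drake")]'''

tuples = ast.literal_eval(file_contents)

# enumerate() supplies a unique, sequential id for each tuple,
# even when the (title, singer) pairs themselves are duplicates
cds = [CD(i, title, singer) for i, (title, singer) in enumerate(tuples)]
print(cds[0].get_info())
```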
python|class
1
102
58,455,350
Memory leak using fipy with trilinos
<p>I am currently trying to simulate a suspension flowing around a cylindrical obstacle using fipy. Because I'm using fine mesh and my equations are quite complicated, the simulations take quite a long time to converge. Which is why I want to run them in parallel. However, when I do that the program keeps using more and more memory, until Linux eventually kills it (after around 3 hours when I use 4 procesors). What is more: trilinos increases memory usage even if I only use one processor. For example, when I run <a href="https://github.com/usnistgov/fipy/blob/master/examples/flow/stokesCavity.py" rel="nofollow noreferrer">this example</a> (changing no. of sweeps from 300 to 5,000 first): </p> <p><strong>python stokesCavity.py --trilinos</strong> -> memory usage goes from 638M to 958M in 10 minutes<br> <strong>python stokesCavity.py --pysparse</strong> -> memory usage goes from 616M to 635M in 10 minutes </p> <p>I saw <a href="https://www.mail-archive.com/fipy@nist.gov/msg03554.html" rel="nofollow noreferrer">here</a> that somebody had reported a similar problem before, but I could not find the solution. Any help would be appreciated. </p> <p>Some info: I am using Trilinos 12.12.1 (compiled against swig 3.0) and fipy 3.2.</p>
<p>This is a <a href="https://github.com/trilinos/Trilinos/issues/2327" rel="nofollow noreferrer">known issue that we have reported against PyTrilinos</a>. Until it is fixed upstream, running with <code>--pysparse</code> avoids the leak, as your own measurements suggest.</p>
python|fipy|trilinos
0
103
58,524,100
My Tensorflow lite model accuracy and Image Classification issues
<p>First off, I've succeed in deploying my fine tuned Xception model to my android application, it working fine, except some harsh image that it predicted wrong, however, on my computer, with that image, it's predicted correct even though the accuracy was around 50-60%. So, is converting to tensorflow lite model reduce my model accuracy a little bit. Secondly, my biggest question, if we have 4 label predicting model, what if we input another object that is not in the 4 label declared. I'm trying to solve this by increasing my object detected to around 1000 objects :) but it so difficult when also trying to adding my object (which is 1004 objects). Any solution that could clarify whether the object is in the label or not? Thank for solve my issue. the last issue which is derived from the second issue :(, are there anyway to adding label from an already-created model ? For example Xception with 1000 objects could be detected in default, now I want to add 4 or more extra object to the model, how could I do it? I've flicked through sites, they all said that we need to train our model again :( but 1004 objects is to computational expensive. Thankyou for solving my problem, appreciate.</p>
<p>Conversion to TensorFlow Lite <em>is</em> expected to reduce accuracy, especially for outlier inputs such as the one you describe.</p> <p>If you provide an input from a class that the model was not trained on, the output is 'undefined' - the logits will essentially be garbage.</p> <p>If you want a model that has more labels than the one you have, you will need to retrain :-) </p>
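On the second question, a common heuristic for flagging inputs that belong to none of the trained classes is to threshold the model's top softmax probability. A minimal standard-library sketch of the idea (the threshold of 0.6 is an assumption you would tune on validation data, and this heuristic is not a reliable out-of-distribution detector in general):

```python
import math


def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def classify_or_reject(logits, labels, threshold=0.6):
    """Return the predicted label, or None if the model is not confident."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best] if probs[best] >= threshold else None


labels = ["cat", "dog", "car", "tree"]
print(classify_or_reject([4.0, 0.1, 0.2, 0.1], labels))  # confident prediction
print(classify_or_reject([1.0, 1.1, 0.9, 1.0], labels))  # near-uniform: rejected
```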
android-studio|image-processing|deep-learning|tensorflow-lite
0
104
41,253,942
How do you get a field related by OneToOneField and ManyToManyField in Django?
<p>How do you get a field related by OneToOneField and ManyToManyField in Django?</p> <p>For example,</p> <pre><code>class A(models.Model): myfield = models.CharField() as = models.ManyToManyField('self') class B(models.Model): a = models.OneToOneField(A) </code></pre> <p>If I want to get a 'myfield' and all associated 'as' using class B, given a 'myfield' equal to a string like 'example', how is it done?</p>
<p><strong>models.py</strong></p> <pre><code>class Place(models.Model): name = models.CharField(max_length=50) address = models.CharField(max_length=80) def __str__(self): # __unicode__ on Python 2 return "%s the place" % self.name class Restaurant(models.Model): place = models.OneToOneField( Place, on_delete=models.CASCADE, primary_key=True, ) serves_hot_dogs = models.BooleanField(default=False) serves_pizza = models.BooleanField(default=False) def __str__(self): # __unicode__ on Python 2 return "%s the restaurant" % self.place.name </code></pre> <p>Let's create a <code>Place</code> instance:</p> <pre><code>p1 = Place.objects.create(name='Demon Dogs', address='944 W. Fullerton') </code></pre> <p>Then create a <code>Restaurant</code> object:</p> <pre><code>r = Restaurant.objects.create(place=p1, serves_hot_dogs=True, serves_pizza=False) </code></pre> <p>Now, to access the <code>Place</code> from the <code>Restaurant</code>:</p> <pre><code>&gt;&gt;&gt; r.place &lt;Place: Demon Dogs the place&gt; </code></pre> <p>And vice versa, to access the <code>Restaurant</code> from the <code>Place</code>:</p> <pre><code>&gt;&gt;&gt; p1.restaurant &lt;Restaurant: Demon Dogs the restaurant&gt; </code></pre> <p>I did not understand the many-to-many field part; can you please elaborate?</p>
python|django|django-models|django-orm
0
105
24,047,017
SWF file loads a new url, how to grab it using Python?
<p>I'll start with saying I'm not very familiar with AS3 coding at all, which I'm pretty sure SWF files are coded with (someone can correct me if I'm wrong)</p> <p>I have a SWF file which accepts an ID parameter, within the code it takes the ID and performs some hash routines on it, eventually produces a new 'token' and within the code loads a new url using this token</p> <p>I found this by taking the swf file to showmycode and decompiling</p> <p>My code is in Python and the SWF file is online, I could download and save it locally</p> <p>Is it possible to somehow execute the swf in python or by using urllib to grab this new url?</p> <p>It doesn't seem to act the same as a redirect url, as when I do:</p> <pre><code>request = urllib2.Request(url) response = urllib2.urlopen(request) print response.geturl() </code></pre> <p>Just returns the url that I am requesting, so I'm not sure how or even if I can grab what is being spit out</p> <p>Edit - This is the MD5 that is being used - <a href="https://code.google.com/p/as3corelib/source/browse/trunk/src/com/adobe/crypto/MD5.as?r=51" rel="nofollow">https://code.google.com/p/as3corelib/source/browse/trunk/src/com/adobe/crypto/MD5.as?r=51</a></p> <p>Trying to find a Python equivalent</p>
<p>Looks like I was making things too complicated</p> <p>I was able to just use python hashlib.md5 to produce the same results as the AS3 code</p> <pre><code>m = hashlib.md5() m.update('test') m.hexdigest() </code></pre>
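The snippet above is Python 2 (as is the question's `urllib2` code). In Python 3, `hashlib` requires bytes rather than `str`, so the equivalent would be:

```python
import hashlib

m = hashlib.md5()
m.update('test'.encode('utf-8'))  # str must be encoded to bytes in Python 3
print(m.hexdigest())  # -> 098f6bcd4621d373cade4e832627b4f6
```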
python|actionscript-3|flash
0
106
20,683,655
Why the two threading implementation behaves differently?
<p>Why the two threading implementations in python are behaving differently ?</p> <p>I have two codes:</p> <p>1.</p> <pre><code>from threading import Thread import pdb import time def test_var_kwargs(**kwargs): time.sleep(5) print kwargs['name'] for key in kwargs: print "another keyword arg: %s: %s" % (key, kwargs[key]) def get_call(): thr = Thread(target=test_var_kwargs(name='xyz', roll=12)) thr.start() print "!!!!!!!!!!!!!!!!!!" get_call() print "hohohoo" get_call() </code></pre> <p>2.</p> <pre><code>import threading import time class Typewriter(threading.Thread): def __init__(self, your_string): threading.Thread.__init__(self) self.my_string = your_string def run(self): for char in self.my_string: print "in run" time.sleep(5) typer = Typewriter("hehe") typer.start() print "HHHHHHHHHHHHHHHHHHHHHHHHHHH" # wait for it to finish if you want to typer.join() </code></pre> <p>In first code execution, the print stmt. and get_call() executed after 5 secs, which means the next line of execution of code got blocked. While in the second code, the print stmt. i.e. print "HHHHHHHHHHHHHHHHHHHHHHHHHHH" got printed without waiting for sleep() time.</p> <p>My question is why the first code execution got blocked while the second code executed unblocked ? </p>
<p>It took me a little while to figure the problem out... </p> <p>This line:</p> <pre><code>thr = Thread(target=test_var_kwargs(name='xyz', roll=12)) </code></pre> <p>is incorrect. Try:</p> <pre><code>thr = Thread(target=test_var_kwargs, kwargs={'name':'xyz', 'roll': 12}) </code></pre> <p>The first example is blocking on the five-second <code>time.sleep</code> because you are calling the target function in the main thread, before the thread object is even created. That call returns <code>None</code>, so the thread is actually created as: </p> <pre><code>thr = Thread(target=None) </code></pre> <p>While that is not an error, the thread finishes immediately; the blocking you observed happened earlier, when <code>test_var_kwargs(...)</code> ran (and slept) in the main thread. </p>
python|multithreading
2
107
53,439,796
Selecting Rows with DateTimeIndex without referring to date
<p>Is there a way to select rows with a DateTimeIndex without referring to the date as such e.g. selecting row index 2 (the usual Python default manner) rather than "1995-02-02"?</p> <p><a href="https://i.stack.imgur.com/gp3PH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gp3PH.png" alt="enter image description here"></a></p> <p>Thanks in advance.</p>
<p>Yes, you can use <a href="https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-integer" rel="nofollow noreferrer">.iloc</a>, the positional indexer:</p> <pre><code>df.iloc[2] </code></pre> <p>Basically, it indexes by actual position starting from <code>0</code> to <code>len(df)</code>, allowing slicing too:</p> <pre><code>df.iloc[2:5] </code></pre> <p>It also works for columns (by position, again):</p> <pre><code>df.iloc[:, 0] # All rows, first column df.iloc[0:2, 0:2] # First 2 rows, first 2 columns </code></pre>
python|python-3.x|pandas
1
108
21,420,868
How to fix the false positives rate of a linear SVM?
<p>I am an SVM newbie and this is my use case: I have a lot of unbalanced data to be binary classified using a linear SVM. I need to fix the false positives rate at certain values and measure the corresponding false negatives for each value. I am using something like the following code making use of scikit-learn svm implementation:</p> <pre><code># define training data X = [[0, 0], [1, 1]] y = [0, 1] # define and train the SVM clf = svm.LinearSVC(C=0.01, class_weight='auto') #auto for unbalanced distributions clf.fit(X, y) # compute false positives and false negatives predictions = [clf.predict(ex) for ex in X] false_positives = [(a, b) for (a, b) in zip(predictions,y) if a != b and b == 0] false_negatives = [(a, b) for (a, b) in zip(predictions,y) if a != b and b == 1] </code></pre> <p>Is there a way to play with a parameter (or a few parameters) of the classifier such that one the measurement metrics is effectively fixed?</p>
<p>The <code>class_weight</code> parameter allows you to push this false positive rate up or down. Let me use an everyday example to illustrate how this works. Suppose you own a night club, and you operate under two constraints:</p> <ol> <li>You want as many people as possible to enter the club (paying customers)</li> <li>You do not want any underage people in, as this will get you in trouble with the state</li> </ol> <p>On an average day, (say) only 5% of the people attempting to enter the club will be underage. You are faced with a choice: being lenient or being strict. The former will boost your profits by as much as 5%, but you are running the risk of an expensive lawsuit. The latter will inevitably mean some people who are just above the legal age will be denied entry, which will cost you money too. You want to adjust the <code>relative cost</code> of leniency vs strictness. Note: you cannot directly control how many underage people enter the club, but you can control how strict your bouncers are.</p> <p>Here is a bit of Python that shows what happens as you change the relative importance.</p> <pre><code>from collections import Counter import numpy as np from sklearn.datasets import load_iris from sklearn.svm import LinearSVC data = load_iris() # remove a feature to make the problem harder # remove the third class for simplicity X = data.data[:100, 0:1] y = data.target[:100] # shuffle data indices = np.arange(y.shape[0]) np.random.shuffle(indices) X = X[indices, :] y = y[indices] for i in range(1, 20): clf = LinearSVC(class_weight={0: 1, 1: i}) clf = clf.fit(X[:50, :], y[:50]) print i, Counter(clf.predict(X[50:])) # print clf.decision_function(X[50:]) </code></pre> <p>Which outputs:</p> <pre><code>1 Counter({1: 22, 0: 28}) 2 Counter({1: 31, 0: 19}) 3 Counter({1: 39, 0: 11}) 4 Counter({1: 43, 0: 7}) 5 Counter({1: 43, 0: 7}) 6 Counter({1: 44, 0: 6}) 7 Counter({1: 44, 0: 6}) 8 Counter({1: 44, 0: 6}) 9 Counter({1: 47, 0: 3}) 10 Counter({1: 47, 0: 3}) 11 Counter({1: 47, 0: 3}) 12 Counter({1: 47, 0: 3}) 13 Counter({1: 47, 0: 3}) 14 Counter({1: 47, 0: 3}) 15 Counter({1: 47, 0: 3}) 16 Counter({1: 47, 0: 3}) 17 Counter({1: 48, 0: 2}) 18 Counter({1: 48, 0: 2}) 19 Counter({1: 48, 0: 2}) </code></pre> <p>Note how the number of data points classified as <code>0</code> decreases as the relative weight of class <code>1</code> increases. Assuming you have the computational resources and time to train and evaluate 10 classifiers, you can plot the precision and recall of each one and get a figure like the one below (shamelessly stolen off the internet). You can then use that to decide what the right value of <code>class_weight</code> is for your use case.</p> <p><img src="https://i.stack.imgur.com/x9Sp8.jpg" alt="Precision-recall tradeoff"></p>
python|svm|scikit-learn
18
109
52,128,289
Python - Split PDF based on list
<p>I'm trying to split a PDF into separate PDF files into new files based on a list. Code as follows:</p> <pre><code>import sys import os from PyPDF2 import PdfFileReader, PdfFileWriter def splitByStudent(file, group): inputPdf = PdfFileReader(open(file,"rb")) output = PdfFileWriter() path = os.path.dirname(os.path.abspath(file)) os.chdir(path) numpages = int(inputPdf.numPages/len(group)) for s in group: startpage = group.index(s) * numpages endpage = startpage + numpages newfile = s + ".pdf" for i in range(startpage, endpage): output.addPage(inputPdf.getPage(i)) with open(newfile, "wb") as outputStream: output.write(outputStream) BIASL1 = ["Student One", "Student Two"] file = "filename.pdf" splitByStudent(file, BIASL1) </code></pre> <p>The PDF "filename" has 16 pages and the name of the first file produced is "Student One.pdf", which should have the correct 8 pages. "Student Two.pdf", however contains all 16 pages of the original. Any help would be appreciated!</p>
<p>The line <code>output = PdfFileWriter()</code> should be inside the <code>for</code> loop. Otherwise the same writer keeps accumulating pages across students, so the second file ends up containing every page written so far:</p> <pre><code>def splitByStudent(file, group): inputPdf = PdfFileReader(open(file,"rb")) path = os.path.dirname(os.path.abspath(file)) os.chdir(path) numpages = int(inputPdf.numPages/len(group)) for s in group: output = PdfFileWriter() startpage = group.index(s) * numpages endpage = startpage + numpages newfile = s + ".pdf" for i in range(startpage, endpage): output.addPage(inputPdf.getPage(i)) with open(newfile, "wb") as outputStream: output.write(outputStream) </code></pre>
python|python-3.x|pdf
0
110
54,556,450
How to find the rows having values between -1 and 1 in a given numpy 2D-array?
<p>I have a <code>np.array</code> of shape <code>(15,3)</code>.</p> <pre><code>final_vals = array([[ 37, -84, -143], [ 29, 2, -2], [ -18, -2, 0], [ -3, 6, 0], [ 361, -5, 2], [ -23, 4, 8], [ 0, -1, 0], [ -1, 1, 0], [ 62, 181, 83], [-193, -14, -2], [ 42, -154, -92], [ 16, -13, 1], [ -10, -3, 0], [-299, 244, 110], [ 223, -237, -110]]) </code></pre> <p>am trying to find the rows whose element values are between -1 and 1.In the array printed above ROW-6 and ROW-7 are target/result rows.</p> <p>I tried,</p> <pre><code>result_idx = np.where(np.logical_and(final_vals&gt;=-1, final_vals&lt;=1)) </code></pre> <p>which returns,</p> <pre><code>result_idx = (array([ 2, 3, 6, 6, 6, 7, 7, 7, 11, 12], dtype=int64), array([2, 2, 0, 1, 2, 0, 1, 2, 2, 2], dtype=int64)) </code></pre> <p><strong><em>I want my program to return only row numbers</em></strong></p>
<p>You could take the absolute value of all elements and check which rows' elements are all smaller than or equal to <code>1</code>. Then use <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.flatnonzero.html" rel="nofollow noreferrer"><code>np.flatnonzero</code></a> to find the indices of the rows where all columns fulfil the condition:</p> <pre><code>np.flatnonzero((np.abs(final_vals) &lt;= 1).all(axis=1)) </code></pre> <p><strong>Output</strong></p> <pre><code>array([6, 7], dtype=int64) </code></pre>
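For illustration, the same row test in plain Python (no NumPy) on the question's data: a row qualifies when every one of its elements has absolute value at most 1.

```python
final_vals = [
    [37, -84, -143], [29, 2, -2], [-18, -2, 0], [-3, 6, 0],
    [361, -5, 2], [-23, 4, 8], [0, -1, 0], [-1, 1, 0],
    [62, 181, 83], [-193, -14, -2], [42, -154, -92], [16, -13, 1],
    [-10, -3, 0], [-299, 244, 110], [223, -237, -110],
]

# Keep only row indices where every element is in [-1, 1]
rows = [i for i, row in enumerate(final_vals)
        if all(abs(v) <= 1 for v in row)]
print(rows)  # -> [6, 7]
```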
python|numpy
2
111
31,731,863
From JSON to JSON-LD without changing the source
<p>There are 'duplicates' to my question but they don't answer my question.</p> <p>Considering the following JSON-LD example as described in paragraph 6.13 - Named Graphs from <a href="http://www.w3.org/TR/json-ld/" rel="nofollow">http://www.w3.org/TR/json-ld/</a>:</p> <pre><code>{ "@context": { "generatedAt": { "@id": "http://www.w3.org/ns/prov#generatedAtTime", "@type": "http://www.w3.org/2001/XMLSchema#date" }, "Person": "http://xmlns.com/foaf/0.1/Person", "name": "http://xmlns.com/foaf/0.1/name", "knows": "http://xmlns.com/foaf/0.1/knows" }, "@id": "http://example.org/graphs/73", "generatedAt": "2012-04-09", "@graph": [ { "@id": "http://manu.sporny.org/about#manu", "@type": "Person", "name": "Manu Sporny", "knows": "http://greggkellogg.net/foaf#me" }, { "@id": "http://greggkellogg.net/foaf#me", "@type": "Person", "name": "Gregg Kellogg", "knows": "http://manu.sporny.org/about#manu" } ] } </code></pre> <p>Question:</p> <p>What if you start with only the JSON part without the semantic layer:</p> <pre><code>[{ "name": "Manu Sporny", "knows": "http://greggkellogg.net/foaf#me" }, { "name": "Gregg Kellogg", "knows": "http://manu.sporny.org/about#manu" }] </code></pre> <p>and you link the @context from a separate file or location using a http link header or rdflib parsing, then you are still left without the @id and @type in the rest of the document. Injecting those missing keys-values into the json string is not a clean option. The idea is to go from JSON to JSON-LD without changing the original JSON part.</p> <p>The way I see it to define a triple subject, one has to use an @id to map tot an IRI. It's very unlikely that JSON data has the @id key-values. So does this mean all JSON files cannot be parsed as JSON-LD without add the keys first? I wonder how they do it. Does someone have an idea to point me in the right direction? Thank you.</p>
<p>No, unfortunately that's not possible. There exist, however, libraries and tools that have been created exactly for that reason. <a href="https://github.com/antoniogarrote/json-ld-macros" rel="nofollow">JSON-LD Macros</a> is such a library. It allows declarative transformations of JSON objects to make them usable as JSON-LD. So, effectively, all you need is a very thin layer on top of an off-the-shelf JSON-LD processor.</p>
python|json|rdf|json-ld
4
112
32,007,291
What is the best manner to run many Scrapy with multiprocessing?
<p>currently I use Scrapy with multiprocessing. I made a POC, in order to run many spider. My code look like that:</p> <pre><code>#!/usr/bin/python # -*- coding: utf-8 -*- from multiprocessing import Lock, Process, Queue, current_process def worker(work_queue, done_queue): try: for url in iter(work_queue.get, 'STOP'): status_code = run_spider(action) except Exception, e: done_queue.put("%s failed on %s with: %s" % (current_process().name, action, e.message)) return True def run_spider(action): os.system(action) def main(): sites = ( scrapy crawl level1 -a url='https://www.example.com/test.html', scrapy crawl level1 -a url='https://www.example.com/test1.html', scrapy crawl level1 -a url='https://www.example.com/test2.html', scrapy crawl level1 -a url='https://www.example.com/test3.html', scrapy crawl level1 -a url='https://www.anotherexample.com/test4.html', scrapy crawl level1 -a url='https://www.anotherexample.com/test5.html', scrapy crawl level1 -a url='https://www.anotherexample.com/test6.html', scrapy crawl level1 -a url='https://www.anotherexample.com/test7.html', scrapy crawl level1 -a url='https://www.anotherexample.com/test8.html', scrapy crawl level1 -a url='https://www.anotherexample.com/test9.html', scrapy crawl level1 -a url='https://www.anotherexample.com/test10.html', scrapy crawl level1 -a url='https://www.anotherexample.com/test11.html', ) workers = 2 work_queue = Queue() done_queue = Queue() processes = [] for action in sites: work_queue.put(action) for w in xrange(workers): p = Process(target=worker, args=(work_queue, done_queue)) p.start() processes.append(p) work_queue.put('STOP') for p in processes: p.join() done_queue.put('STOP') for status in iter(done_queue.get, 'STOP'): print status if __name__ == '__main__': main() </code></pre> <p>According you, what is the best solution to run multiple instance of Scrapy ?</p> <p>It would be better to launch a Scrapy instance for each URL or launch a spider with x URL (ex: 1 spider with 100 links) ?</p>
<blockquote> <p>It would be better to launch a Scrapy instance for each URL or launch a spider with x URL (ex: 1 spider with 100 links) ?</p> </blockquote> <p>Launching an instance of Scrapy is definitely a bad choice, because for every URL you would be suffering from the overhead of Scrapy itself.</p> <p>I think it would be best to distribute URLs evenly across spiders.</p>
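A generic way to distribute the URL list evenly across a fixed number of spiders (plain Python, independent of Scrapy; round-robin keeps the group sizes within one of each other):

```python
def distribute(urls, n_spiders):
    """Split urls into n_spiders round-robin groups of near-equal size."""
    groups = [[] for _ in range(n_spiders)]
    for i, url in enumerate(urls):
        groups[i % n_spiders].append(url)
    return groups


urls = ['https://www.example.com/test%d.html' % i for i in range(5)]
groups = distribute(urls, 2)
print([len(g) for g in groups])  # -> [3, 2]
```

Each group can then be passed to one spider (e.g. as a comma-separated `-a` argument, or as the spider's `start_urls`), so the per-process Scrapy overhead is paid once per group instead of once per URL.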
python|python-2.7|web-scraping|scrapy|multiprocessing
0
113
47,058,742
Appending and formatting a new SubElement via ElementTree
<p>Using the following code, I can successfully add a subElement where I want, and functionally it works. For readability, I want to format the way <code>item.append</code> inserts the new sub elements. My Code:</p> <pre><code>import xml.etree.ElementTree as ET tree = ET.parse(file.xml) root = tree.getroot() for items in root.iter('items'): item = ET.SubElement(items, 'item') item.append((ET.fromstring('&lt;item&gt;cat&lt;/item&gt;'))) tree.write('output.xml') </code></pre> <p>XML File: </p> <pre><code>&lt;interface&gt; &lt;object&gt; &lt;items&gt; &lt;item&gt;dog&lt;/Item&gt; &lt;/items&gt; &lt;/object&gt; &lt;/interface&gt; </code></pre> <p>Expected output:</p> <pre><code>&lt;interface&gt; &lt;object&gt; &lt;items&gt; &lt;item&gt;dog&lt;/item&gt; &lt;item&gt;cat&lt;/item&gt; &lt;/items&gt; &lt;/object&gt; &lt;/interface&gt; </code></pre> <p>Actual output:</p> <pre><code>&lt;interface&gt; &lt;object&gt; &lt;items&gt; &lt;item&gt;dog&lt;/item&gt; &lt;item&gt;&lt;item&gt;cat&lt;/item&gt;&lt;/item&gt;&lt;/items&gt; &lt;/object&gt; &lt;/interface&gt; </code></pre> <p>Usually I wouldn't care about how it formats the output.xml file. 
However, I will be adding <code>&lt;item&gt;</code> subElements in a <code>while</code> loop, so when I have 4 or 5 additions, the code will get a little sloppy for readability's sake.</p> <p>I have looked at a lot of similar questions concerning this, but they are either unanswered, or don't apply specifically to what I am trying to do.</p> <p>Here is my code in the while loop, just incase it will add more clarification:</p> <pre><code>import xml.etree.ElementTree as ET tree = ET.parse(file.xml) root = tree.getroot() while True: item_add = input("Enter item to add: 'n") item_string = '&lt;item&gt;' item_string += item_add item_string += '&lt;/item&gt;' for items in root.iter('items'): item = ET.SubElement(items, 'item') item.append((ET.fromstring(item_string))) tree.write('output.xml') #Code asking for more input, if none break out of loop </code></pre> <p>I appreciate any help in advance. </p>
<p>Consider using <code>.find()</code> to walk down to your needed node and then simply use <code>SubElement</code> to add. No need for string versions of markup when working with DOM libraries like <code>etree</code>:</p> <pre><code>import xml.etree.ElementTree as ET tree = ET.parse("input.xml") root = tree.getroot() while True: item_add = input("Enter item to add: ") if item_add == 'x': break items = root.find('object').find('items') tmp = ET.SubElement(items, 'item') tmp.text = item_add # PRINT TO SCREEN print(ET.tostring(root).decode('utf-8')) # SAVE TO FILE tree.write('output.xml') </code></pre> <p><strong>Output</strong> <em>(after entering cat, frog, zebra in inputs, use x to end loop)</em></p> <pre><code>&lt;interface&gt; &lt;object&gt; &lt;items&gt; &lt;item&gt;dog&lt;/item&gt; &lt;item&gt;cat&lt;/item&gt;&lt;item&gt;frog&lt;/item&gt;&lt;item&gt;zebra&lt;/item&gt;&lt;/items&gt; &lt;/object&gt; &lt;/interface&gt; </code></pre> <p>Use built-in <code>minidom</code> or third-party <code>lxml</code> for <a href="https://stackoverflow.com/questions/3973819/python-pretty-xml-printer-for-xml-string">pretty printing</a>.</p>
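A minimal sketch of the pretty-printing mentioned at the end, using the standard library's `xml.dom.minidom` (the input XML is inlined here for the example):

```python
import xml.dom.minidom as minidom

raw = ('<interface><object><items>'
       '<item>dog</item><item>cat</item>'
       '</items></object></interface>')

# Re-parse the serialized tree and emit it with two-space indentation
pretty = minidom.parseString(raw).toprettyxml(indent='  ')
print(pretty)
```

Note that `toprettyxml` can insert extra whitespace-only text nodes if the input already contains indentation, so it works best on XML serialized without whitespace, as above.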
python|xml|python-3.x|elementtree
2
114
71,843,912
Is there a way to make a function run after a specified amount of time in Python without the after method?
<p>I am trying to create a simple program that tracks a user's clicks per second in Tkinter, but I have no idea how to make the program wait without freezing the program using the after method. The problem is that I need to log the high score after the time finishes, but using this method, the score logs before the click counter goes up. Here is my code:</p> <pre><code>from tkinter import * import time root = Tk() root.geometry('600x410') screen = Canvas(root) h = 6 #button height w = 12 #button width c = 0 #counts amount of times clicked start_btn = 0 #logs clicks of the start button high_score = 0 #logs the highest score time = 0 def count_hs(): high_score = c def remove_time(): global time time -= 1 def countdown(n): for i in range(n): time = n root.after(1000, remove_time()) #alternatively i tried this: #time.sleep(1) #remove_time() if time &lt;= 0: b[&quot;text&quot;] = &quot;Test done.&quot; break def start_test(): global start_btn b[&quot;text&quot;] = &quot;Click to begin.&quot; start_btn += 1 print(&quot;start button: &quot; + str(start_btn)) def button_click(): global start_btn global c c+=1 print(&quot;click counter: &quot; + str(c)) #resets the amount of clicks on the large button when the start button is pressed if c &gt;= 1 and start_btn &gt;= 1: print(&quot;test1&quot;) c = 1 start_btn = 0 if b[&quot;text&quot;] == &quot;Click to begin.&quot;: print(&quot;test2&quot;) b[&quot;text&quot;] = &quot;Click!&quot; countdown(6) count_hs() print(&quot;hs: &quot; +str(high_score)) #primary button b = Button(root, text=&quot; &quot;, font=(&quot;Arial&quot;, 40), height = h, width = w, command = lambda: button_click()) b.grid(row=0, column=0) #start button start = Button(root, text=&quot;Start.&quot;, command = lambda: start_test()) start.grid(row=0, column=1) root.mainloop() </code></pre>
<p>Give it a try</p> <pre><code>from tkinter import * root = Tk() root.geometry('600x410') screen = Canvas(root) h = 6 # button height w = 12 # button width c = 0 # counts amount of times clicked start_btn = 0 # logs clicks of the start button high_score = 0 # logs the highest score time = 0 def count_hs(): global high_score if c &gt; high_score: high_score = c return high_score def remove_time(): global time time -= 1 if time &gt; 0: root.after(1000, remove_time) else: show_score() def start_test(): global start_btn global c global time b[&quot;text&quot;] = &quot;Click to begin.&quot; start_btn += 1 print(&quot;start button: &quot; + str(start_btn)) # Reset your timer and counter time = 6 c = 0 def button_click(*args): global start_btn global c # resets the amount of clicks on the large button when the start button is pressed if c == 0 and start_btn &gt;= 1: start_btn = 0 b[&quot;text&quot;] = &quot;Click!&quot; root.after(1000, remove_time) print(&quot;hs: &quot; + str(high_score)) else: c += 1 print(&quot;click counter: &quot; + str(c)) def show_score(): global c score_label.configure(text=str(c)) high_score_label.configure(text=str(count_hs())) c = 0 b['text'] = &quot;&quot; # primary button b = Button(root, text=&quot;&quot;, font=(&quot;Arial&quot;, 40), height=h, width=w, command=button_click) b.grid(row=0, column=0, rowspan=5) # start button start = Button(root, text=&quot;Start.&quot;, command=lambda: start_test()) start.grid(row=0, column=1) Label(root, text=&quot;Your score&quot;).grid(row=1, column=1) score_label = Label(root, text=&quot;&quot;) score_label.grid(row=2, column=1) Label(root, text=&quot;High score&quot;).grid(row=3, column=1) high_score_label = Label(root, text=&quot;&quot;) high_score_label.grid(row=4, column=1) root.mainloop() </code></pre> <p>Few changes:</p> <ul> <li>In <code>count_hs</code> I assume you would update the highscore only if current score beats it.</li> <li>You can use <code>remove_time</code> as a timer by making it 
call itself until <code>time &lt;= 0</code>, in which case you should end your game.</li> <li>I've used the <code>start</code> button as a resetter, so that when it is clicked it will reset <code>c</code> and <code>time</code>.</li> <li>In <code>button_click</code> you now only need to bother with updating <code>c</code> (and changing the text at the beginning).</li> <li>Finally, I've added a few labels to show the final results, both current and high scores.</li> </ul> <p>A few suggestions to move on:</p> <ul> <li>Instead of <code>global</code> variables you could create a class for the app; it should make it easier for you to exchange info and avoid subtle errors.</li> <li>You could improve the layout, especially for the newly added labels.</li> <li>You could make your original timer a parameter (currently, it is set within <code>start_test</code>).</li> <li>Instead of <code>from tkinter import *</code>, I'd suggest something like <code>import tkinter as tk</code> or <code>from tkinter import ...</code>, since it increases readability and reduces the sources of errors.</li> </ul>
python|tkinter
0
115
15,424,895
Creating Lexicon and Scanner in Python
<p>I'm new here in the world of coding and I haven't received a very warm welcome. I've been trying to learn python via the online tutorial <a href="http://learnpythonthehardway.org/book/" rel="noreferrer">http://learnpythonthehardway.org/book/</a>. I've been able to struggle my way through the book up until exercise 48 &amp; 49. That's where he turns students loose and says "You figure it out." But I simply can't. I understand that I need to create a Lexicon of possible words and that I need to scan the user input to see if it matches anything in the Lexicon but that's about it! From what I can tell, I need to create a list called lexicon:</p> <pre><code>lexicon = [ ('directions', 'north'), ('directions', 'south'), ('directions', 'east'), ('directions', 'west'), ('verbs', 'go'), ('verbs', 'stop'), ('verbs', 'look'), ('verbs', 'give'), ('stops', 'the'), ('stops', 'in'), ('stops', 'of'), ('stops', 'from'), ('stops', 'at') ] </code></pre> <p>Is that right? I don't know what to do next? I know that each item in the list is called a tuple, but that doesn't really mean anything to me. How do I take raw input and assign it to the tuple? You know what I mean? So in exercise 49 he imports the lexicon and just inside python prints lexicon.scan("input") and it returns the list of tuples so for example:</p> <pre><code>from ex48 import lexicon &gt;&gt;&gt; print lexicon.scan("go north") [('verb', 'go'), ('direction', 'north')] </code></pre> <p>Is 'scan()' a predefined function or did he create the function within the lexicon module? I know that if you use 'split()' it creates a list with all of the words from the input but then how does it assign 'go' to the tuple ('verb', 'go')?</p> <p>Am I just way off? I know I'm asking a lot but I searched around everywhere for hours and I can't figure this one out on my own. Please help! I will love you forever!</p>
<p>I wouldn't use a list to make the lexicon. You're mapping words to their types, so make a dictionary.</p> <p>Here's the biggest hint that I can give without writing the entire thing:</p> <pre><code>lexicon = { 'north': 'directions', 'south': 'directions', 'east': 'directions', 'west': 'directions', 'go': 'verbs', 'stop': 'verbs', 'look': 'verbs', 'give': 'verbs', 'the': 'stops', 'in': 'stops', 'of': 'stops', 'from': 'stops', 'at': 'stops' } def scan(sentence): words = sentence.lower().split() pairs = [] # Iterate over `words`, # pull each word and its corresponding type # out of the `lexicon` dictionary and append the tuple # to the `pairs` list return pairs </code></pre>
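For reference, one possible way to fill in that loop (this completes the hint and is not part of the original answer; tagging unknown words as `'error'` is an assumed convention):

```python
lexicon = {
    'north': 'directions', 'south': 'directions',
    'east': 'directions', 'west': 'directions',
    'go': 'verbs', 'stop': 'verbs', 'look': 'verbs', 'give': 'verbs',
    'the': 'stops', 'in': 'stops', 'of': 'stops', 'from': 'stops',
    'at': 'stops',
}

def scan(sentence):
    words = sentence.lower().split()
    pairs = []
    for word in words:
        # Look up each word's type; unknown words get tagged 'error'
        # (an assumption -- pick whatever convention the exercise wants)
        word_type = lexicon.get(word, 'error')
        pairs.append((word_type, word))
    return pairs

print(scan("go north"))  # [('verbs', 'go'), ('directions', 'north')]
```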
python|lexicon
3
116
46,464,857
Removing rows that are similar in Python
<p>My data looks something like this:</p> <pre><code> Source Target Value 1 Charlie Mac 0.6530945 2 Dennis Frank 0.7296234 3 Charlie Frank 0.4750875 4 Mac Dennis 0.3961787 5 Charlie Dennis 0.6213751 6 Mac Frank 0.9727454 7 Frank Charlie 0.4750875 8 Mac Charlie 0.6530945 9 Frank Mac 0.9727454 10 Frank Dennis 0.7296234 11 Dennis Mac 0.3961787 12 Dennis Charlie 0.6213751 </code></pre> <p>I have 2 columns with names and the third gives a relationship value. So row 1 is basically the same as row 8 and row 2 is the same as row 10 etc. So the order of names in (source, target) does not matter. What I want to do is get rid of these unneeded rows to get something like this: </p> <pre><code> Source Target Value 1 Charlie Mac 0.6530945 2 Dennis Frank 0.7296234 3 Charlie Frank 0.4750875 4 Mac Dennis 0.3961787 5 Charlie Dennis 0.6213751 6 Mac Frank 0.9727454 </code></pre> <p>Obviously in this simple example I could just return the first six rows but my dataset is too large for that. I can't just return only unique items in the "Value" column because some unrelated rows might have the same values.</p>
<pre><code>df[~pd.DataFrame(np.sort(df[['Source', 'Target']], 1), df.index).duplicated()] Source Target Value 1 Charlie Mac 0.653095 2 Dennis Frank 0.729623 3 Charlie Frank 0.475087 4 Mac Dennis 0.396179 5 Charlie Dennis 0.621375 6 Mac Frank 0.972745 </code></pre>
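Spelled out step by step, the same idea looks like this (abbreviated sample frame for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Source': ['Charlie', 'Dennis', 'Mac',     'Frank'],
    'Target': ['Mac',     'Frank',  'Charlie', 'Dennis'],
    'Value':  [0.6530945, 0.7296234, 0.6530945, 0.7296234],
})
# Sort the two name columns within each row so that (A, B) and (B, A)
# become identical, then drop rows whose sorted pair was already seen.
key = pd.DataFrame(np.sort(df[['Source', 'Target']].to_numpy(), axis=1),
                   index=df.index)
result = df[~key.duplicated()]
print(result)
```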
python|pandas|dataframe
5
117
46,351,322
Pandas - moving average grouped by multiple columns
<p>New to Pandas, so bear with me.</p> <p>My dataframe is of the format</p> <pre><code>date,name,country,tag,cat,score 2017-05-21,X,US,free,4,0.0573 2017-05-22,X,US,free,4,0.0626 2017-05-23,X,US,free,4,0.0584 2017-05-24,X,US,free,4,0.0563 2017-05-21,X,MX,free,4,0.0537 2017-05-22,X,MX,free,4,0.0640 2017-05-23,X,MX,free,4,0.0648 2017-05-24,X,MX,free,4,0.0668 </code></pre> <p>I'm trying to come up with a way to find the X day moving average within the country/tag/category group, so I need:</p> <pre><code>date,name,country,tag,cat,score,moving_average 2017-05-21,X,US,free,4,0.0573,0 2017-05-22,X,US,free,4,0.0626,0.0605 2017-05-23,X,US,free,4,0.0584,0.0594 2017-05-24,X,US,free,4,0.0563,and so on ... 2017-05-21,X,MX,free,4,0.0537,and so on 2017-05-22,X,MX,free,4,0.0640,and so on 2017-05-23,X,MX,free,4,0.0648,and so on 2017-05-24,X,MX,free,4,0.0668,and so on </code></pre> <p>I tried something on the lines of grouping by the columns I need followed by using pd.rolling_mean but I end up with a bunch of NaN's</p> <pre><code>df.groupby(['date', 'name', 'country', 'tag'])['score'].apply(pd.rolling_mean, 2, min_periods=2) # window size 2 </code></pre> <p>How would I go about doing this properly?</p>
<p>IIUC:</p> <pre><code>(df.assign(moving_score=df.groupby(['name','country','tag'], as_index=False)[['score']] .rolling(2, min_periods=2).mean().fillna(0) .reset_index(0, drop=True))) </code></pre> <p>Output:</p> <pre><code> date name country tag cat score moving_score 0 2017-05-21 X US free 4 0.0573 0.00000 1 2017-05-22 X US free 4 0.0626 0.05995 2 2017-05-23 X US free 4 0.0584 0.06050 3 2017-05-24 X US free 4 0.0563 0.05735 4 2017-05-21 X MX free 4 0.0537 0.00000 5 2017-05-22 X MX free 4 0.0640 0.05885 6 2017-05-23 X MX free 4 0.0648 0.06440 7 2017-05-24 X MX free 4 0.0668 0.06580 </code></pre>
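An equivalent form uses `groupby().transform()`, which keeps the original index aligned and so avoids the `reset_index` bookkeeping (a variant, not from the original answer; toy data for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    'name':    ['X', 'X', 'X', 'X'],
    'country': ['US', 'US', 'MX', 'MX'],
    'tag':     ['free', 'free', 'free', 'free'],
    'score':   [0.06, 0.08, 0.05, 0.07],
})
# transform() returns a Series aligned with df's index, so it can be
# assigned straight back as a new column.
df['moving_average'] = (
    df.groupby(['name', 'country', 'tag'])['score']
      .transform(lambda s: s.rolling(2, min_periods=2).mean())
      .fillna(0)
)
print(df)
```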
python|pandas|moving-average
6
118
49,489,500
Find a word in a line no matter how it's written
<p>I'm trying to find a word in a simple string no matter how it's written. For example:</p> <p>'Lorem ipsum dolor sit amet lorem.'</p> <p>Let's say I search for 'lorem' written in lowercase and I'd like to replace both 'lorem' and 'Lorem' with 'example'. The thing is, I want to search and replace the word no matter how it's written. </p> <p>I think this should be done using regex but I'm not very familiar with it. Maybe you guys can help.</p>
<pre><code>import re sentence = "Lorem ipsum dolor sit amet lorem." search_key = "Lorem" print(re.sub(r'%s' % search_key.lower(), 'example', sentence.lower())) &gt;&gt;&gt; example ipsum dolor sit amet example. </code></pre>
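If the rest of the sentence should keep its original casing, the `re.IGNORECASE` flag avoids lowercasing everything (a variant on the answer above):

```python
import re

sentence = "Lorem ipsum dolor sit amet lorem."
# IGNORECASE matches any capitalization of the pattern without
# touching the case of the untouched parts of the string.
result = re.sub('lorem', 'example', sentence, flags=re.IGNORECASE)
print(result)  # example ipsum dolor sit amet example.
```

If the search word may contain regex metacharacters, wrap it in `re.escape()` first.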
python|python-3.x
1
119
49,590,870
How can I pass JSON of flask to Javascript
<p>I have an index page, which contains a form, fill in the form and sends the data to a URL, which captures the input, and does the search in the database, returning a JSON. How do I get this JSON, and put it on another HTML page using Javascript?</p> <p>Form of index:</p> <pre><code>&lt;form action="{{ url_for('returnOne') }}", method="GET"&gt; &lt;p&gt;Nome: &lt;input type="text" name="nome" /&gt;&lt;/p&gt; &lt;input type="submit" value="Pesquisar"&gt; &lt;/form&gt; </code></pre> <p>My function that returns JSON:</p> <pre><code>@app.route('/userQuery', methods=['GET']) def returnOne(): dao = Dao() nome = request.args.get('nome') return jsonify(json.loads(dao.select(nome))) </code></pre>
<p>Your HTML page after you submit the form; let's call it <code>response.html</code>:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;hello&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;p id="test"&gt;&lt;/p&gt; &lt;p id="test1"&gt;&lt;/p&gt; &lt;script&gt; var b = JSON.parse('{{ a | tojson | safe}}'); document.getElementById('test').innerHTML = b.test; document.getElementById('test1').innerHTML = b.test1; console.log(b); &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Your Flask function that sends the <code>JSON</code> data and renders <code>response.html</code>:</p> <pre><code>@app.route('/userQuery', methods=["POST", "GET"]) def returnOne(): a = { "test": "abcd", "test1": "efg" } return render_template("response.html", a=a) </code></pre>
javascript|python|flask
4
120
21,113,349
Is subclassing django base classes to intermediate ones a bad idea?
<p>I have subclassed ModelForm to an intermediate ModelForm2 to make sure some form elements have certain css classes / widgets and to remove the label suffix. My question is: Is this a bad idea since it makes the code less portable in case they drop ModelForm classes? This is the code:</p> <pre><code>class ModelForm2(forms.ModelForm): def __init__(self, *args, **kwargs): kwargs.setdefault('label_suffix', '') super(ModelForm2, self).__init__(*args, **kwargs) for field in self.fields: if isinstance(self.fields[field], forms.DateField): self.fields[field].format = '%d/%m/%Y' self.fields[field].widget.attrs['class'] = 'date' self.fields[field].widget.attrs['type'] = 'text' </code></pre> <p>I'm not looking for opinions, just a solid "best practices" answer</p>
<p>No, this is <em>not</em> a bad idea, this is very normal 'best practice' Django.</p> <p><code>ModelForm</code> is a core part of Django, it is pretty unthinkable they would drop it.</p> <p>Typical Django project will have many sub-classes from Django base classes.</p> <p>I will often have an app in my project I call <code>core</code>, or something similar, where I keep some classes like <code>BaseModel</code>, <code>BaseForm</code> etc where I have customised some behaviour that I want inherited by all my project's sub-classes.</p>
python|django|django-forms
2
121
62,573,223
What is the meaning of `[[[1,2],[3,4]],[[5,6],[7,8]]]` in Python?
<pre><code>x=[[[1,2],[3,4]],[[5,6],[7,8]]] print(x[1][0][1]) </code></pre> <p><em><strong>could you please tell me the output and how's it obtained</strong></em></p>
<p><code>list[i]</code> takes the (i+1)-th element of the list, so <code>[0]</code> means the first element and <code>[1]</code> the second element from the list. Counting in computer science always starts at 0.</p> <p><code>x[1]</code> = <code>[[5,6],[7,8]]</code></p> <p><code>x[1][0]</code> = <code>[5,6]</code></p> <p><code>x[1][0][1]</code> = <code>6</code></p>
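The same walk-through, runnable:

```python
x = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
print(x[1])        # [[5, 6], [7, 8]]  -- second element of the outer list
print(x[1][0])     # [5, 6]            -- first element of that
print(x[1][0][1])  # 6                 -- second element of that
```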
python|python-3.x|python-2.7|python-requests
2
122
53,513,063
How to convert dictionary of tuple coordinates keys to sparse matrix
<p>I stored the non-zero values of a sparse matrix in a dictionary. How would I this into an actual matrix?</p> <pre><code>def sparse_matrix(density,order): import random matrix = {} for i in range(density): matrix[(random.randint(0,order-1), random.randint(0,order-1))] = 1 return matrix </code></pre>
<p><strong>Option 1 :</strong> Instead of keeping values in a dictionary and later creating the matrix, you can create the matrix directly and update the values in it. Note that you may end up with fewer than <code>density</code> non-zero values, as <code>randint</code> can return the same coordinates more than once.</p> <p>Sample code : </p> <pre><code>import random import numpy as np def sparse_matrix(density,order): #matrix = [ [0 for i in range(order)] for i in range(order)] matrix = np.zeros((order,order)) for i in range(density): matrix[(random.randint(0,order-1))][random.randint(0,order-1)] = 1 return matrix </code></pre> <p>Output : </p> <pre><code>sparse_matrix(2,4) array([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 1.]]) </code></pre> <p><strong>Option 2</strong></p> <p>You can create the dictionary using your code and use that dictionary to update the values in a matrix.</p> <pre><code>def sparse_matrix(density,order): import random #matrix = [ [0 for i in range(order)] for i in range(order)] matrix = {} for i in range(density): matrix[(random.randint(0,order-1)),(random.randint(0,order-1))] = 1 return matrix #matrix of size order*order final_matrix = np.zeros((4,4)) for key, value in sparse_matrix(2,4).items(): final_matrix[key] = value </code></pre>
python|python-3.x|sparse-matrix
0
123
53,466,909
Python use curl with subprocess, write outpute into file
<p>If I use the following command in the Git Bash, it works fine. The Output from the curl are write into the file output.txt</p> <pre><code>curl -k --silent "https://gitlab.myurl.com/api/v4/groups?page=1&amp;per_page=1&amp;simple=yes&amp;private_token=mytoken&amp;all?page=1&amp;per_page=1" &gt; output.txt </code></pre> <p>Python Code:</p> <pre><code>import subprocess, shlex command = shlex.split("curl -k --silent https://gitlab.myurl.com/api/v4/groups?page=1&amp;per_page=1&amp;simple=yes&amp;private_token=mytoken&amp;all?page=1&amp;per_page=1 &gt; output.txt") subprocess.Popen(command) </code></pre> <p>The Python code write nothing in my file "output.txt". How can I write in the output.txt or get the Output direct in Python?</p>
<p>You cannot use redirection directly with subprocess, because it is a shell feature. Use <code>check_output</code>:</p> <pre><code>import subprocess command = ["curl", "-k", "--silent", "https://gitlab.myurl.com/api/v4/groups?page=1&amp;per_page=1&amp;simple=yes&amp;private_token=mytoken&amp;all?page=1&amp;per_page=1"] output = subprocess.check_output(command) </code></pre>
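If the goal is also to end up with <code>output.txt</code> on disk, as the shell redirect did, you can hand subprocess an open file as <code>stdout</code>. The child command below is a stand-in (a small Python one-liner) so the example runs without network access; the curl argument list from above works the same way:

```python
import subprocess
import sys

# Shell redirection ("> output.txt") is replaced by passing a file object
# as stdout. Substitute the curl command list here in the real case.
command = [sys.executable, "-c", "print('hello from the child')"]
with open("output.txt", "w") as f:
    subprocess.run(command, stdout=f, check=True)

print(open("output.txt").read().strip())  # hello from the child
```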
python|curl
1
124
46,063,345
Python - How to create a tunnel of proxies
<p>I asked this question: <a href="https://stackoverflow.com/questions/46021477/wrap-packets-in-connect-requests-until-reach-the-last-proxy/46023292#46023292">Wrap packets in connect requests until reach the last proxy</a></p> <p>And I learnt that to create a chains of proxies I have to:</p> <ul> <li>create a socket</li> <li>connect the socket to proxy A</li> <li>create a tunnel via A to proxy B - either with HTTP or SOCKS protocol similar </li> <li>create a tunnel via [A,B] to proxy C similar</li> <li>create a tunnel via [A,B,C] to D</li> <li>... until your last proxy is instructed to built the tunnel to the<br> final target T</li> </ul> <p>I got what I have to do until the second point, cause I think I just have to add the "CONNECT" header to the http request to the proxy A. But my question is, in this example http request:</p> <pre><code>CONNECT ipproxy:80 HTTP/1.1 Host: ?:80 </code></pre> <p>In the host header I should put again the proxy ip or something else? Like the proxy B ip or the final site domain?</p> <p>Also, I didn't understand how to go on from the third point to the next... because I don't know how to tell the proxy A to create a tunnel to proxyB and then proxy B to create a tunnel to proxy C that goes to the final site..</p> <p>Examples of how can I do it with python? Or some doc?</p>
<p>There is no Host header with CONNECT. I.e. to request HTTP proxy A to create a tunnel to HTTP proxy B you just use:</p> <pre><code>&gt;&gt;&gt; CONNECT B_host:B_port HTTP/1.0 &gt;&gt;&gt; &lt;&lt;&lt; 200 connections established &lt;&lt;&lt; </code></pre> <p>And then you have this tunnel to proxy B via proxy A. Inside this tunnel you then can create another tunnel to target T, i.e. on the same socket send and receive next:</p> <pre><code>&gt;&gt;&gt; CONNECT T_host:T_port HTTP/1.0 &gt;&gt;&gt; &lt;&lt;&lt; 200 connections established &lt;&lt;&lt; </code></pre> <p>Note that not all proxies allow you to CONNECT to arbitrary hosts and ports and they might also not allow arbitrary protocols like a tunnel inside a tunnel but only selected protocols like HTTPS. </p>
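A minimal sketch of that handshake in Python. `http_connect` is a hypothetical helper; real code needs timeouts, robust reply parsing, and error handling — this only shows the chaining idea:

```python
import socket

def http_connect(sock, host, port):
    # Ask the proxy at the far end of `sock` to open a tunnel to host:port.
    sock.sendall("CONNECT {}:{} HTTP/1.0\r\n\r\n".format(host, port).encode())
    reply = sock.recv(4096).decode(errors="replace")
    status = reply.split("\r\n", 1)[0]
    if " 200" not in status:
        raise OSError("proxy refused CONNECT: " + status)

# Usage sketch (hosts/ports are placeholders):
# sock = socket.create_connection((proxy_a_host, proxy_a_port))
# http_connect(sock, proxy_b_host, proxy_b_port)  # tunnel A -> B
# http_connect(sock, target_host, target_port)    # tunnel via A,B -> T
# ...from here on, sock talks to the target through both proxies
```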
python|python-3.x|sockets|proxy|tunnel
1
125
55,102,133
Replace the first characters if the object contain the symbol Python Pandas
<p>I have the string objects in the pandas Dataframe: </p> <pre><code>['10/2014', '2014','9/2013'] </code></pre> <p>How to replace them to get this result:</p> <pre><code>['2014','2014','2013'] </code></pre>
<p>If you want the last set of characters separated by <code>'/'</code>, try this :</p> <pre><code>[k.split('/')[-1] for k in ['10/2014', '2014','9/2013']] </code></pre> <p><strong>OUTPUT</strong> :</p> <pre><code>['2014', '2014', '2013'] </code></pre>
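Since the question mentions a pandas DataFrame, the vectorised version for a whole column (assuming the strings live in a Series) would be:

```python
import pandas as pd

s = pd.Series(['10/2014', '2014', '9/2013'])
# Split each string on '/' and keep the last piece
result = s.str.split('/').str[-1]
print(result.tolist())  # ['2014', '2014', '2013']
```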
python|pandas
3
126
55,053,618
SQLAlchemy - Return filtered table AND corresponding foreign table values
<p>I have the following SQLAlchemy mapped classes: </p> <pre><code>class ShowModel(db.Model): __tablename__ = 'shows' id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(100)) episodes = db.relationship('EpisodeModel', backref='episode', lazy='dynamic') class EpisodeModel(db.Model): __tablename__ = 'episodes' id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(200)) show_id = db.Column(db.Integer, db.ForeignKey('shows.id')) info = db.relationship('InfoModel', backref='episode', lazy='dynamic') class InfoModel(db.Model): __tablename__ = 'info' id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(100)) episode_id = db.Column(db.Integer, db.ForeignKey('episodes.id')) </code></pre> <p>I'm trying and failing to figure out how to perform a query that searches the <code>info</code> table for a specific <code>name</code> column value AND then return the <code>shows</code> and <code>episodes</code> table rows that are associated with it. </p> <p>Using the following query allows me to return the specific <code>info</code> row that matches the <code>filter_by(name=name)</code> query</p> <pre><code>InfoModel.query.filter_by(name=name).all())) </code></pre> <p>But I am really struggling to figure out how to also get the values of the corresponding foreign key rows that have a relationship with the specific <code>info</code> row. Is there a proper way to do this with the join statement or something similar? Thank you very much for any help on this, as I'm still trying to get the hang of working with SQL databases, and databases in general. 
</p> <p>Edit - </p> <p>If, for example, I use the query <code>InfoModel.query.filter_by(name="ShowName1").all()))</code>, my json() representation returns </p> <pre><code>{ "name": "InfoName1", "id": 1, "episode_id": 1 } </code></pre> <p>But I'm also wanting to return the associated foreign table values so that my json() representation returns -</p> <pre><code>{ "name": "ShowName1", "id": 1, "episodes": { "name": "EpisodeName1", "id": 1, "show_id": 1 "info": { "name": "InfoName1", "id": 1, "episode_id": 1 } }, } </code></pre> <p>And I apologize for fumbling over my use of jargon here, making my question appear more complicated than it is. </p>
<p>Because you have lazy loading enabled, the joined tables will only be set when they are accessed. What you can do is force a join. Something like the following should work for you (note the filter is on <code>InfoModel.name</code>, since you are searching the <code>info</code> table):</p> <pre><code>shows = (session.query(ShowModel) .join(EpisodeModel) .join(InfoModel) .filter(InfoModel.name == "foo") .all()) </code></pre> <p>You can also change your load configuration to be "eager", or any number of other options. I don't like to do this by default though, as it makes for accidentally expensive queries: <a href="https://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html</a></p>
python|sql|sqlalchemy
2
127
33,382,305
PyAudio IOError: [Errno Invalid input device (no default output device)] -9996
<p>I am attempting to run a simple python file that uses pyaudio to record input. However whenever I run this file I end up with this error. I had it working once and I have no idea what changed. I have tried </p> <pre><code>import pyaudio pa = pyaudio.PyAudio() print(pa.get_device_count()) 0 </code></pre> <p>So I am seeing that it does not detect any valid devices. Is there anyway to specify to pyaudio/portaudio where to look for my input devices. I am running elementary os freya. Any help would be appreciated!</p>
<p>I got this error because I accidentally ran</p> <pre><code># p = pyaudio.PyAudio() # ... p.terminate() </code></pre> <p>and then tried to open another stream.</p>
python|portaudio|pyaudio
4
128
24,871,188
Watershed Transform of Distance Image with OpenCV
<p>In Matlab, we can perform a watershed transform on the distance transform to separate two touching objects: </p> <p><img src="https://i.stack.imgur.com/tpA9p.png" alt="enter image description here"> <img src="https://i.stack.imgur.com/PyB2z.png" alt="enter image description here"></p> <p>The first image above is the image with touching objects that we wish to separate. The second image is its distance transform.</p> <p>So, if the black and white image is called <code>img</code>, in Matlab we can do:</p> <pre><code>D = -bwdist(~img); L = watershed(D); </code></pre> <p>Now to do the same thing with openCV: OpenCV has a marker based watershed segmentation function. It appears that to perform the same task of separating two touching objects with openCV, one would need to provide markers for both objects and for the background. </p> <pre><code>img = np.zeros((400, 400), np.uint8) cv2.circle(img, (150, 150), 100, 255, -1) cv2.circle(img, (250, 250), 100, 255, -1) dist = cv2.distanceTransform(img, cv2.cv.CV_DIST_L2, cv2.cv.CV_DIST_MASK_PRECISE) dist3 = np.zeros((dist.shape[0], dist.shape[1], 3), dtype = np.uint8) dist3[:, :, 0] = dist dist3[:, :, 1] = dist dist3[:, :, 2] = dist markers = np.zeros(img.shape, np.int32) markers[150,150] = 1 # seed for circle one markers[250, 250] = 2 # seed for circle two markers[50,50] = 3 # seeds for background cv2.watershed(dist3, markers) </code></pre> <p>In the following image, you see the <code>markers</code> image after watershed was performed. The original black and white <code>img</code> is superimposed on it in red. The problem is that the object boundaries in the resulting <code>markers</code> image are not the same as the original image. How can I ensure that object boundaries stay the same?</p> <p><img src="https://i.stack.imgur.com/ukRm7.png" alt="enter image description here"></p>
<p>You'd better get to know what really happens inside the watershed function: it starts flooding from its seeds and puts the coordinates and gradients of their neighbours into a priority queue.</p> <p>As you know, when you apply distanceTransform to img, the gradient of the circles becomes 0 or 1, while the background is always 0.</p> <p>So now you have 3 seeds, and flooding starts: background (seed3), neighbours (seed1), neighbours (seed2); they work in turn until seed1 or seed2 meet a gradient of 1, after which only seed3 can carry on working.</p> <p>When seed3 meets the boundaries of the circles, its gradient becomes 1, and they can all work in turn again.</p> <p>So if you want to ensure that object boundaries stay the same, you'd better increase the gradient where seed3 meets the boundaries of the circles, just like:</p> <pre><code>dist = cv2.distanceTransform(img, cv2.cv.CV_DIST_L2, cv2.cv.CV_DIST_MASK_PRECISE) dist[dist &gt; 0] += 2.0 </code></pre> <p>Here is the <a href="https://i.stack.imgur.com/HzrQf.png" rel="nofollow noreferrer">result</a>.</p> <p>There are still some open questions (when all the gradients in the queue are 1, which one pops first and which second).</p>
python|opencv|distance|image-segmentation|watershed
3
129
38,220,821
Regex replace conditional character
<p>I need to remove any 'h' in a string if it comes after a vowel.</p> <pre><code>E.g. John -&gt; Jon Baht -&gt; Bat Hot -&gt; Hot (no change) Rhythm -&gt; Rhythm (no change) </code></pre> <p>Finding the words isnt a problem, but removing the 'h' is as I still need the original vowel. Can this be done in one regex?</p>
<p>The regex for matching <code>h</code> after a vowel would be a positive lookbehind one</p> <pre><code>(?&lt;=a|e|i|o|u)h </code></pre> <p>And you can do</p> <pre><code>re.sub(r"([a-zA-Z]*?)(?&lt;=a|e|i|o|u)h([a-zA-Z]*)",r"\1\2",s) </code></pre> <p>However, if you can have more than one <code>h</code> after a vowel in a string, you would need to do several iterations, since regex doesn't support dynamic matching groups</p> <pre><code>import re s = "bahtbaht" s1 = s while True: s1 = re.sub(r"([a-zA-Z]*?)(?&lt;=a|e|i|o|u)h([a-zA-Z]*)",r"\1\2",s) if len(s1) == len(s): break s = s1 print(s1) </code></pre> <p>In a more proper form, using a function for <code>repl</code></p> <pre><code>import re def subit(m): match, = m.groups() return match s = "bahtbaht" print(re.sub(r"([a-zA-Z]*?)(?:(?&lt;=a|e|i|o|u)h|$)",subit,s)) </code></pre> <p>A much simpler answer, thanks to @tobias_k</p> <pre><code>re.sub(r"([aeiou])h", r"\1", s, flags = re.I) </code></pre>
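Wrapped in a function and checked against the question's four examples (using the simplified pattern suggested at the end of the answer):

```python
import re

def drop_h_after_vowel(s):
    # Remove any 'h' that directly follows a vowel, in either case
    return re.sub(r"([aeiou])h", r"\1", s, flags=re.I)

for word in ["John", "Baht", "Hot", "Rhythm"]:
    print(word, "->", drop_h_after_vowel(word))
# John -> Jon, Baht -> Bat, Hot -> Hot, Rhythm -> Rhythm
```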
java|python|regex
2
130
38,441,079
Python sum and then multiply values of several lists
<p>Is there a way to write the following code in one line?</p> <pre><code>my_list = [[1, 1, 1], [1, 2], [1, 3]] result = 1 for l in my_list: result = result * sum(l) </code></pre>
<p>Use <code>reduce</code> on the <em>summed</em> sublists gotten from <code>map</code>.</p> <p>This does it:</p> <pre><code>&gt;&gt;&gt; from functools import reduce &gt;&gt;&gt; reduce(lambda x,y: x*y, map(sum, my_list)) 36 </code></pre> <hr> <p>In python 2.x, the <code>import</code> will not be needed as <code>reduce</code> is a builtin:</p> <pre><code>&gt;&gt;&gt; reduce(lambda x,y: x*y, map(sum, my_list)) 36 </code></pre>
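On Python 3.8+, `math.prod` does the multiplication without `reduce`:

```python
from math import prod

my_list = [[1, 1, 1], [1, 2], [1, 3]]
result = prod(map(sum, my_list))  # sums are 3, 3, 4
print(result)  # 36
```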
python
5
131
38,231,521
What should be the size of input and hidden state in GRUCell of tensorflow (python)?
<p>I am new to tensorflow (1 day of experience).</p> <p>I am trying following small code to create a simple GRU based RNN with single layer and hidden size of 100 as follows:</p> <pre><code>import pickle import numpy as np import pandas as pd import tensorflow as tf # parameters batch_size = 50 hidden_size = 100 # create network graph input_data = tf.placeholder(tf.int32, [batch_size]) output_data = tf.placeholder(tf.int32, [batch_size]) cell = tf.nn.rnn_cell.GRUCell(hidden_size) initial_state = cell.zero_state(batch_size, tf.float32) hidden_state = initial_state output_of_cell, hidden_state = cell(input_data, hidden_state) </code></pre> <p>But I am getting following error for last line (i.e. call to <code>cell()</code>)</p> <pre><code>Linear is expecting 2D arguments: [[50], [50, 100]] </code></pre> <p>What am I doing wrong?</p>
<p>Inputs to the <code>GRUCell</code>'s call operator are expected to be 2-D tensors of type <code>tf.float32</code>. The following ought to work:</p> <pre><code>input_data = tf.placeholder(tf.float32, [batch_size, input_size]) cell = tf.nn.rnn_cell.GRUCell(hidden_size) initial_state = cell.zero_state(batch_size, tf.float32) hidden_state = initial_state output_of_cell, hidden_state = cell(input_data, hidden_state) </code></pre>
python|tensorflow|recurrent-neural-network|gated-recurrent-unit
0
132
38,372,786
python qt float precision from boost-python submodule
<p>I have made a cpp submodule with boost-python for my PyQt program that among others extracts some data from a zip data file.</p> <p>It works fine when testing it in python:</p> <pre><code>import BPcmods BPzip = BPcmods.BPzip() BPzip.open("diagnostics/p25-dev.zip") l=BPzip.getPfilenames() t=BPzip.getTempArray([l[1][4],l[1][3]],40.) print(t[11][:10]) &gt;&gt; [40.1, 40.44, 40.78, 41.11, 41.44, 41.77, 41.77, 42.09, 42.41, 42.73] </code></pre> <p>if I put the same code in the start of my <strong>main</strong> part, it also gives the same data, but if I put the code right after I call </p> <pre><code>main.app = QtGui.QApplication(sys.argv) </code></pre> <p>It suddenly yields:</p> <pre><code>[40.0, 40.0, 40.0, 41.0, 41.0, 41.0, 41.0, 42.0, 42.0, 42.0] </code></pre> <p>Why?? what happened? The numbers are still floats, but suddenly they have been rounded to nearest integer?!?</p>
<p>Well it was related to using std::stod to convert my strings of data from my datafile to doubles. I don't know why, but changing to:</p> <pre><code>boost::algorithm::trim(s); double val = boost::lexical_cast&lt;double&gt;(s); </code></pre> <p>made it work as it was supposed to, also in pyqt.</p> <p>(The likely cause: <code>QApplication</code> sets the process locale on startup, and <code>std::stod</code> is locale-sensitive, so in locales that use a comma as the decimal separator <code>"40.44"</code> parses only up to the dot, yielding <code>40</code>. <code>boost::lexical_cast</code> is not affected by the global locale in the same way, which is presumably why it keeps working.)</p>
python|c++|boost|pyqt4|precision
0
133
38,295,363
Difference between tuples (all or none)
<p>I have two sets as follows:</p> <pre><code>house_1 = {('Gale', '29'), ('Horowitz', '65'), ('Galey', '24')} house_2 = {('Gale', '20'), ('Horowitz', '65'), ('Gale', '29')} </code></pre> <p>Each tuple in each set contains attributes that represent a person. I need to find a special case of the symmetric set difference between <code>house_1</code> and <code>house_2</code>: the difference must be confirmed only if all elements of the tuples disagree, and not any one of them. </p> <p>For the above sets, <code>house_1 ^ house_2</code> yields:</p> <pre><code>{('Gale', '20'), ('Galey', '24')} </code></pre> <p>which is great. However, in the following sets:</p> <pre><code>house_1 = {('Gale', '24')} house_2 = {('Gale', '29')} </code></pre> <p>doing <code>house_1 ^ house_2</code> still yields:</p> <pre><code>{('Gale', '24'), ('Gale', '29')} </code></pre> <p>This isn't what I want. I need the set difference to be output only when both elements in the tuples do not match. In this case Gale matches, so both these tuples should not be in the output. </p> <p>Any insights are greatly appreciated.</p> <p><strong>SOLUTION:</strong></p> <p>I wrote the following function to solve this:</p> <pre><code>for counter_H1, member_H1 in enumerate(house_1): for counter_H2, member_H2 in enumerate(house_2): if (member_H1[0] == member_H2[0]) and (member_H1[1] == member_H2[1]): break if (member_H1[0] != member_H2[0]) and (member_H1[1] != member_H2[1]) and (counter_H2 == len(house_2) - 1): print(member_H1, member_H2) </code></pre>
<p>You could count the occurrences of names in the result and print only the tuples whose name has a count of 1:</p> <pre><code>from collections import defaultdict count = defaultdict(list) for x in house_1 ^ house_2: count[x[0]].append(x) for v in count.values(): if len(v) == 1: print(*v) </code></pre>
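The same counting idea wrapped in a function and checked against both examples from the question (sets are unordered, so the first result is sorted for display):

```python
from collections import defaultdict

def real_difference(house_1, house_2):
    count = defaultdict(list)
    for person in house_1 ^ house_2:
        count[person[0]].append(person)
    # Keep only people whose name shows up once in the symmetric difference
    return [v[0] for v in count.values() if len(v) == 1]

print(sorted(real_difference(
    {('Gale', '29'), ('Horowitz', '65'), ('Galey', '24')},
    {('Gale', '20'), ('Horowitz', '65'), ('Gale', '29')},
)))  # [('Gale', '20'), ('Galey', '24')]
print(real_difference({('Gale', '24')}, {('Gale', '29')}))  # []
```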
python|set|tuples
1
134
31,094,159
How do I tell lxml which charset to use?
<p>I'm working with html and using lxml to parse it. For testing purposes I have an html document saved as a string in a python file with encoding=utf-8 at the top.</p> <p>Whenever I try to parse the html using lxml I get weird html encodings if the html does not have the <code>&lt;meta charset="utf-8"&gt;</code> tag.</p> <p>This is the code:</p> <pre><code>t = lxml.html.fromstring(page_html) print lxml.html.tostring(t) </code></pre> <p>If the <code>page_html</code> string contains the meta charset above the <code>tostring()</code> method returns nicely encoded html (i.e. <code>’</code> turns into <code>&amp;#8217;</code>). If it does not have it, the results are garbled (i.e. <code>’</code> turns into <code>&amp;#226;&amp;#128;&amp;#153;</code>)</p> <p>This encoding happens in the <code>fromstring</code> method rather than the <code>tostring</code> one so I can't just insert a new meta element in the tree after it is created as by that time it is to late.</p> <p>The only solution I can think of is to try and modify the original html string and add the meta tag there before creating the tree. This seems kinda klunky though. Isn't there a better way to do it using lxml?</p>
<p>Turns out I just need to pass in a custom parser to the <code>fromstring</code> method. So this fixes it:</p> <pre><code>parser = html.HTMLParser(encoding="utf-8") t = lxml.html.fromstring(page_html, parser=parser) print lxml.html.tostring(t) </code></pre>
python|lxml
0
135
30,952,575
How to get real quotient in python 2
<p>The division operator "/" only returns the floor quotient. When the numerator or denominator is negative, "/" does not return the real (truncated) quotient. For example, -1/3 returns -1 rather than 0. How can I get the real quotient?</p>
<p>Try like this,</p> <pre><code>a = 1 b = 3 print -(a / b) </code></pre>
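The trick above only works when exactly one operand is negative and you negate by hand. A general truncate-toward-zero division (a variant, not from the original answer; works on both Python 2 and 3):

```python
def trunc_div(a, b):
    # Floor division, then step the quotient back toward zero when the
    # signs differ and the division was not exact.
    q = a // b
    if a % b != 0 and (a < 0) != (b < 0):
        q += 1
    return q

print(trunc_div(-1, 3))  # 0
print(trunc_div(7, 2))   # 3
print(trunc_div(-7, 2))  # -3
```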
python-2.7
0
136
40,159,812
Passing a class's "self" to a function outside it such that the class's variables are accessible and modifiable?
<p>Can I pass the <code>self</code> instance variable of the class from within the class to some helper function outside the class? For example, will the following work when an object of SomeClass is initialized? Isn't there any type casting required? Is this coding style reliable? Is there any possibility of this feature being dropped off in future releases of Python 2.x?</p> <pre><code>class SomeClass(): var1 = 7 def __init__(self): some_func(self) # passing self instead of self.var1 def some_func(instance): instance.var1 += 1 x = SomeClass() print x.var1 </code></pre> <p>I have <code>some_func</code> as a common helper function across many classes that undergo the same transformation, and the <code>var1</code> in my current application is a huge dataFrame which I cannot afford to pass a copy of it to the function due to memory constraints. Do you have any suggestions? </p>
<p>The easiest way would have been to try it out :) yes, it works, as you expect</p> <pre><code>class SomeClass():
    var1 = 7
    def __init__(self):
        some_func(self) # passing self instead of self.var1

def some_func(instance):
    instance.var1 += 1

x = SomeClass()
print x.var1
</code></pre> <p>But in most cases it would be better to make some_func a member of SomeClass</p>
python|python-2.7|oop
4
137
19,131,736
Zip a folder on S3 using Django
<p>I have an application where in I need to zip folders hosted on S3.The zipping process will be triggered from the model save method.The Django app is running on an EC2 instance.Any thoughts or leads on the same?</p> <p>I tried django_storages but haven't got a breakthrough</p>
<p>From my understanding you can't zip files directly on S3. You would have to download the files you'd want to zip, zip them up, then upload the zipped file. I've done something similar before and used s3cmd to keep a local synced copy of my S3 bucket, and since you're on an EC2 instance, network speed and latency will be pretty good.</p>
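The download/zip/upload flow described above can be sketched with boto3. This is a minimal sketch, not a drop-in solution: the function names and the in-memory-zip approach are assumptions of mine, and `zip_s3_prefix` assumes boto3 is installed and AWS credentials are configured.

```python
import io
import zipfile

# import boto3  # assumed available in the real environment

def zip_blobs(blobs):
    """Pack an iterable of (filename, bytes) pairs into an in-memory zip archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in blobs:
            zf.writestr(name, data)
    buf.seek(0)
    return buf

def zip_s3_prefix(bucket_name, prefix, dest_key):
    # Hypothetical wiring: download every object under the prefix,
    # zip the contents locally, then upload the archive back to S3.
    s3 = boto3.resource("s3")
    bucket = s3.Bucket(bucket_name)
    blobs = ((obj.key, obj.get()["Body"].read())
             for obj in bucket.objects.filter(Prefix=prefix))
    bucket.upload_fileobj(zip_blobs(blobs), dest_key)
```

The pure-zipping part (`zip_blobs`) can be exercised locally without any AWS access, which also makes it easy to hook into a Django model's save method.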
python|django|amazon-s3|amazon-ec2|zip
0
138
62,102,978
How to display y-bar values in the bar chart?
<p>Hello friends, I am creating a bar chart using <code>seaborn</code> or <code>matplotlib</code>. I make a successful graph, but I don't know how to display the y-bar values on the plot. Please give me suggestions and different techniques to display y-bar values on the plot.</p> <p>Please help me to solve the question.</p> <p>Thank you</p> <pre class="lang-py prettyprint-override"><code>plt.figure(figsize = (10,5))
sns.countplot(x='subject',data=udemy,hue='is_paid')
</code></pre> <p><img src="https://i.stack.imgur.com/pRCtc.jpg" alt="img" /></p>
<p><strong>Short answer:</strong> You need a customized auto-label mechanism.</p> <p>First let's make it clear. If you mean by</p> <blockquote> <p>I don't know how to display y bar values on the plot</p> </blockquote> <ul> <li><p>that <em>on bars</em> (inner), then this <a href="https://stackoverflow.com/a/19919397/10452700">answer</a> can be helpful.</p> </li> <li><p>that <em>on top of bars</em> (outer), then besides this answer for <a href="https://stackoverflow.com/questions/33179122/seaborn-countplot-with-frequencies">seaborn</a>, there are some auto-label solutions <a href="https://stackoverflow.com/questions/7423445/how-can-i-display-text-over-columns-in-a-bar-chart-in-matplotlib">here</a> you can use, as well as <a href="https://matplotlib.org/3.1.1/gallery/lines_bars_and_markers/barchart.html" rel="nofollow noreferrer">this example</a>. More recently, if you install the new matplotlib v. <code>3.4.0</code> you can use <code>bar_label()</code> (<a href="https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_label_demo.html" rel="nofollow noreferrer">Ref.</a>).</p> </li> </ul> <p>I can also offer you my approach, inspired by this <a href="https://matplotlib.org/3.1.1/gallery/lines_bars_and_markers/barchart.html" rel="nofollow noreferrer">matplotlib documentation</a>, using manual adjustment for the best fit to print value/text over bars in the bar chart using <code>matplotlib</code>, in the form of a function:</p> <pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
import numpy as np

def bar_plot(ax, data, colors=None, total_width=0.8, single_width=1, legend=True):
    # Check if colors were provided, otherwise use the default color cycle
    if colors is None:
        colors = plt.rcParams['axes.prop_cycle'].by_key()['color']

    # Number of bars per group
    n_bars = len(data)

    # The width of a single bar
    bar_width = total_width / n_bars

    # List containing handles for the drawn bars, used for the legend
    bars = []

    # Iterate over all data
    for i, (name, values) in enumerate(data.items()):
        # The offset in x direction of that bar
        x_offset = (i - n_bars / 2) * bar_width + bar_width / 2

        # Draw a bar for every value of that type
        for x, y in enumerate(values):
            bar = ax.bar(x + x_offset, y, width=bar_width * single_width, color=colors[i % len(colors)])

        # Add a handle to the last drawn bar, which we'll need for the legend
        bars.append(bar[0])

    # Draw legend if we need it
    if legend:
        ax.legend(bars, data.keys())

if __name__ == &quot;__main__&quot;:
    # Usage example:
    data = {
        &quot;False&quot;: [100.16, 30.04, 50.04, 120.19],
        &quot;True&quot;: [1100.08, 600.06, 650.06, 1050.17],
        #&quot;RMSE&quot;:[0.39, 0.19, 0.20, 0.44, 0.45, 0.26],
    }

    fig, ax = plt.subplots(figsize=(8, 6))
    y_pos = np.arange(len(data['False']))  # one slot per subject group

    for i, v in enumerate(data['False']):
        plt.text(y_pos[i] - 0.35, v + 10.213, str(v))
    for i, v in enumerate(data['True']):
        plt.text(y_pos[i] + 0.05, v + 10.213, str(v))

    bar_plot(ax, data, total_width=.8, single_width=.9)

    ax.set_xticklabels(('0',' ','Business Finance',' ','Graphic Design',' ', 'Musical Instruments',' ', 'Web Development'), rotation=45, ha=&quot;right&quot;)
    ax.set_ylabel('Count ')
    ax.set_xlabel('Subject ')
    ax.set_title('Figure 10/ Table 6 for is_paid')
    plt.ylim((0.0, 1500.0))
    plt.show()
</code></pre> <p>Output:</p> <p><img src="https://i.imgur.com/dvnJrVH.png" alt="img" /></p>
python|matplotlib|seaborn|data-science
0
139
62,440,669
How to get parameters from strings in Python?
<p>How can I get some parameters from a string in Python.</p> <p>Let's say the string contains two words which I want to use as parameters. These are of course separated by spaces. Example:</p> <pre class="lang-py prettyprint-override"><code>string = "Lorem Ipsum" def funct(hereLorem, hereIpsum) </code></pre>
<p>You can try a string.split(optional delimiter). This returns a list.</p> <p>Example:</p> <pre><code>l = string.split()
</code></pre> <p>and if you're expecting a certain pattern</p> <pre><code>arg1 = l[0]
arg2 = l[1]
...
</code></pre>
python|python-3.x|string|parameters|arguments
0
140
67,558,573
telegram doesn't show the image preview link always with my amazon afiliates
<p>I have a telegram channel and since yesterday it does not show <strong>the image preview of the links</strong> that I send with a <strong>bot</strong>.</p> <p>I send links with my ID of <strong>amazon afiliates</strong> and it doesn't work.</p> <p>Does anyone know how to solve it?</p> <pre><code>bot = telegram.Bot(bot_token)
bot.send_message(bot_chatID, text='&lt;b&gt;Hello&lt;/b&gt; \n &lt;a href=&quot;//www.amazon.es/dp/B076MMCQWW?psc=1&quot;&gt;https://www.amazon.es/dp/B076MMCQWW?psc=1&lt;/a&gt;', parse_mode=telegram.ParseMode.HTML)
</code></pre> <p><a href="https://i.stack.imgur.com/ABMXl.png" rel="nofollow noreferrer">Telegram message</a></p>
<p>If your goal is to create a link preview rather than to use <code>html</code> markup, then this might help you:</p> <pre class="lang-py prettyprint-override"><code>bot = telegram.Bot(bot_token)
bot.send_message(
    bot_chatID,
    text='''*Hello*
[https://www.amazon.es/dp/B076MMCQWW?psc=1](https://www.amazon.es/dp/B076MMCQWW?psc=1)
''',
    parse_mode=telegram.ParseMode.MARKDOWN,
    disable_web_page_preview = False
)
</code></pre>
python|telegram|python-telegram-bot
0
141
36,560,829
How to create a seaborn.heatmap() with frames around the tiles?
<p>I rendered a heatmap with seaborn.heatmap() works nicely. However, for a certain purpose I need frames around the plot. </p> <p><code>matplotlib.rcParams['axes.edgecolor'] = 'black'</code><br> <code>matplotlib.rcParams['axes.linewidth'] = 1</code></p> <p>both don't work.</p>
<pre><code>ax = sns.heatmap(x)

for _, spine in ax.spines.items():
    spine.set_visible(True)
</code></pre>
python|matplotlib|seaborn
16
142
19,800,454
processing a set of unique tuples
<p>I have a set of unique tuples that looks like the following. The first value is the name, the second value is the ID, and the third value is the type.</p> <blockquote> <p>('9', '0000022', 'LRA')<br> ('45', '0000016', 'PBM')<br> ('16', '0000048', 'PBL')<br> ('304', '0000042', 'PBL')<br> ('7', '0000014', 'IBL')<br> ('12', '0000051', 'LRA')<br> ('7', '0000014', 'PBL')<br> ('68', '0000002', 'PBM')<br> ('356', '0000049', 'PBL')<br> ('12', '0000051', 'PBL')<br> ('15', '0000015', 'PBL')<br> ('32', '0000046', 'PBL')<br> ('9', '0000022', 'PBL')<br> ('10', '0000007', 'PBM')<br> ('7', '0000014', 'LRA')<br> ('439', '0000005', 'PBL')<br> ('4', '0000029', 'LRA')<br> ('41', '0000064', 'PBL')<br> ('10', '0000007', 'IBL')<br> ('8', '0000006', 'PBL')<br> ('331', '0000040', 'PBL')<br> ('9', '0000022', 'IBL') </p> </blockquote> <p>This set includes duplicates of the name/ID combination, but they each have a different type. For example:</p> <blockquote> <p>('9', '0000022', 'LRA')<br> ('9', '0000022', 'PBL')<br> ('9', '0000022', 'IBL')</p> </blockquote> <p>What I would like to do is process this set of tuples so that I can create a new list where each name/ID combination would only appear once, but include all types. This list should only include the name/ID combos that have more than one type. For example, my output would look like this:</p> <blockquote> <p>('9', '0000022', 'LRA', 'PBL', 'IBL')<br> ('7', '0000014', 'IBL', 'PBL', 'LRA') </p> </blockquote> <p>but my output should not include name/ID combos that have only one type:</p> <blockquote> <p>('45', '0000016', 'PBM')<br> ('16', '0000048', 'PBL') </p> </blockquote> <p>Any help is appreciated!</p>
<p><a href="http://docs.python.org/2/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby</code></a> with some additional processing of what it outputs will do the job:</p> <pre><code>from itertools import groupby

data = {
    ('9', '0000022', 'LRA'),
    ('45', '0000016', 'PBM'),
    ('16', '0000048', 'PBL'),
    ...
}

def group_by_name_and_id(s):
    grouped = groupby(sorted(s), key=lambda (name, id_, type_): (name, id_))
    for (name, id_), items in grouped:
        types = tuple(type_ for _, _, type_ in items)
        if len(types) &gt; 1:
            yield (name, id_) + types

print '\n'.join(str(x) for x in group_by_name_and_id(data))
</code></pre> <p>outputs:</p> <pre><code>('10', '0000007', 'IBL', 'PBM')
('12', '0000051', 'LRA', 'PBL')
('7', '0000014', 'IBL', 'LRA', 'PBL')
('9', '0000022', 'IBL', 'LRA', 'PBL')
</code></pre> <p><strong>P.S.</strong> but I don't really like that design: the types could/should really be a list contained in the 3rd item of the tuple, not part of the tuple itself... because this way the tuple is dynamic in length, and that's ugly... tuples aren't meant to be used like that. So best to replace</p> <pre><code>        types = tuple(type_ for _, _, type_ in items)
        yield (name, id_) + types
</code></pre> <p>with</p> <pre><code>        types = [type_ for _, _, type_ in items]
        yield (name, id_, types)
</code></pre> <p>yielding the much cleaner looking</p> <pre><code>('10', '0000007', ['IBL', 'PBM'])
('12', '0000051', ['LRA', 'PBL'])
('7', '0000014', ['IBL', 'LRA', 'PBL'])
('9', '0000022', ['IBL', 'LRA', 'PBL'])
</code></pre> <p>for example then you can just iterate over the resulting data with <code>for name, id, types in transformed_data:</code>.</p>
python|tuples
3
143
22,265,689
Python the whole Module
<p>Normally when you execute a python file you do <code>python *.py</code>.</p> <p>But what if the whole module is a package which contains many .py files inside?</p> <p>For example, MyModule contains many .py files, and if I do</p> <p><code>python -m MyModule $*</code>, what would happen, as opposed to running an individual python file?</p>
<p>I think you may be confusing <em>package</em> with <em>module</em>. A python module is always a single .py file. A package is essentially a folder which contains a special module always named <code>__init__.py</code>, and one or more python modules. Running <code>python -m MyModule</code> on a package imports the package (which executes <code>__init__.py</code>) and then runs the package's <code>__main__.py</code> module; if the package has no <code>__main__.py</code>, Python exits with an error.</p>
python
2
144
43,678,408
How to create a conditional task in Airflow
<p>I would like to create a conditional task in Airflow as described in the schema below. The expected scenario is the following:</p> <ul> <li>Task 1 executes</li> <li>If Task 1 succeed, then execute Task 2a</li> <li>Else If Task 1 fails, then execute Task 2b</li> <li>Finally execute Task 3</li> </ul> <p><a href="https://i.stack.imgur.com/VS1Wm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VS1Wm.png" alt="Conditional Task"></a> All tasks above are SSHExecuteOperator. I'm guessing I should be using the ShortCircuitOperator and / or XCom to manage the condition but I am not clear on how to implement that. Could you please describe the solution?</p>
<p>Airflow has a <a href="https://airflow.apache.org/docs/stable/_api/airflow/operators/python_operator/index.html?highlight=branchpythonoperator#airflow.operators.python_operator.BranchPythonOperator" rel="noreferrer">BranchPythonOperator</a> that can be used to express the branching dependency more directly.</p> <p>The <a href="https://airflow.apache.org/docs/stable/concepts.html?highlight=xcom#branching" rel="noreferrer">docs</a> describe its use:</p> <blockquote> <p>The BranchPythonOperator is much like the PythonOperator except that it expects a python_callable that returns a task_id. The task_id returned is followed, and all of the other paths are skipped. The task_id returned by the Python function has to be referencing a task directly downstream from the BranchPythonOperator task.</p> <p>...</p> <p>If you want to skip some tasks, keep in mind that you can’t have an empty path, if so make a dummy task. </p> </blockquote> <p><img src="https://i.stack.imgur.com/zOW2B.png" alt=""></p> <h3>Code Example</h3> <pre><code>def dummy_test():
    return 'branch_a'

A_task = DummyOperator(task_id='branch_a', dag=dag)
B_task = DummyOperator(task_id='branch_false', dag=dag)

branch_task = BranchPythonOperator(
    task_id='branching',
    python_callable=dummy_test,
    dag=dag,
)

branch_task &gt;&gt; A_task
branch_task &gt;&gt; B_task
</code></pre> <h3><em>EDIT</em>:</h3> <p>If you're installing an Airflow version >=1.10.3, you can also <a href="https://issues.apache.org/jira/browse/AIRFLOW-3375" rel="noreferrer">return a list of task ids</a>, allowing you to skip multiple downstream paths in a single Operator and <a href="https://issues.apache.org/jira/browse/AIRFLOW-3823" rel="noreferrer">avoid using a dummy task before joining</a>. </p> <p><img src="https://user-images.githubusercontent.com/6249654/48800846-3a19e980-ed0b-11e8-89d0-29ceba2ce2fb.png" alt=""></p>
python|conditional-statements|airflow
83
145
43,785,228
Analyzing the probability distribution of nodes in a network through networkx
<p>I am using python's networkx to analyze the attributes of a network, and I want to draw a graph of the power law distribution. This is my code.</p> <pre><code>degree_sequence=sorted(nx.degree(G).values(),reverse=True)
plt.loglog(degree_sequence,marker='b*')
plt.show()
</code></pre> <p>This is my graph:<a href="https://i.stack.imgur.com/jzRpi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jzRpi.png" alt="enter image description here"></a></p> <p>But this is not a graph of the probability distribution of the nodes' degrees; it is the distribution of the degrees themselves. How can I instead draw a graph of the probability distribution of the nodes' degrees?</p>
<p>You just need to construct a histogram of the degrees, i.e. hist[x] = number of nodes with degree x, and then normalize hist to sum up to one.</p> <p>Alternatively flip your axes and normalize such that the values sum to one.</p>
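The histogram-and-normalize step described above can be sketched like this; the hard-coded degree list is a stand-in for `nx.degree(G).values()` from the question, so the snippet runs without networkx:

```python
from collections import Counter

# Stand-in for nx.degree(G).values()
degrees = [1, 1, 2, 2, 2, 3, 5]

hist = Counter(degrees)                        # hist[k] = number of nodes with degree k
total = float(sum(hist.values()))
pk = {k: n / total for k, n in hist.items()}   # normalize so the values sum to one

print(sorted(pk.items()))
# plt.loglog(sorted(pk), [pk[k] for k in sorted(pk)], 'b*') would then plot
# the probability distribution instead of the raw degree sequence.
```
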
python|networkx
0
146
43,730,137
Tkinter PIR sensor
<p>I'm trying to create a smart mirror that displays different information such as the weather, 3 day forecast, news feed, and a tweet. I have all of this information printing in a window arranged how I want, but the final piece of the program I need to get to function with the rest of the program is a PIR sensor.</p> <p>I have tested the sensor and it works by itself, but I can't think of a way to have the window be created, and then afterwards have the sensor start scanning for data to essentially turn the monitor on or off based upon if there is a person within range.</p> <p>Like I said, I have all of the code functioning other than the fact I can't seem to figure out a way to get the tkinter window displayed and then afterwards have the program start scanning for motion to turn the monitor on and off. At this point I do not need the information (weather, twitter etc) to update, I am only worried about getting the monitor to turn on and off, or just black out and re-display the previously pulled information when a person is in range. </p> <p>I have an example of what I have been using to change the state of the monitor at the beginning, because that is the last place I tried to place it before I decided to seek help. </p> <p>From what I have gained from working with it is that it seems when the mainloop portion of the program is called it is not able to call other functions. I could be wrong, but this is how it seems. </p> <p>I've thinned the program down to just the window functionality that I want. So with the code provided below I need to implement the PIR sensor to toggle the display on and off, after creating the display.</p> <pre><code>from tkinter import *
import os, subprocess
from gpiozero import *

pir = MotionSensor (4)

while True:
    if pir.motion_detected:
        subprocess.call('xset dpms force on', shell=True)
        print("yes")
    else:
        subprocess.call('xset dpms force off', shell=True)
        print("no")

#creates a fullscreen window and makes it black.
tk = Tk()
tk.configure(bg='black')
tk.attributes('-fullscreen', True)
tk.mainloop()
</code></pre>
<p>You can't have a tkinter program with an infinite while loop and expect the tkinter mainloop to run.</p> <p>I suggest that you make use of the callbacks in gpiozero to call a function to enable the screen when motion is detected, and perhaps a tkinter.after method to turn the screen off after a specified time period.</p> <pre><code>def myScreenOnFunction():
    subprocess.call('xset dpms force on', shell=True)
    print("Screen On")

....

pir = MotionSensor (4)
pir.when_motion = myScreenOnFunction

tk = Tk()
tk.configure(bg='black')
tk.attributes('-fullscreen', True)
tk.mainloop()
</code></pre> <p>Something like this should play nicely with Tkinter and allow the main loop to run.</p>
python-3.x|tkinter|motion-detection
0
147
54,272,461
Keras Neural Network accuracy only 10%
<p>I am learning how to train a keras neural network on the MNIST dataset. However, when I run this code, I get only 10% accuracy after 10 epochs of training. This means that the neural network is predicting only one class, since there are 10 classes. I am sure it is a bug in data preparation rather than a problem with the network architecture, because I got the architecture off of a tutorial (<a href="https://towardsdatascience.com/image-classification-in-10-minutes-with-mnist-dataset-54c35b77a38d" rel="nofollow noreferrer">medium tutorial</a>). Any idea why the model is not training?</p> <p>My code:</p> <pre><code>from skimage import io
import numpy as np
from numpy import array
from PIL import Image
import csv
import random
from keras.preprocessing.image import ImageDataGenerator
import pandas as pd
from keras.utils import multi_gpu_model
import tensorflow as tf

train_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
    directory="./trainingSet",
    class_mode="categorical",
    target_size=(50, 50),
    color_mode="rgb",
    batch_size=1,
    shuffle=True,
    seed=42
)
print(str(train_generator.class_indices) + " class indices")

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D, GlobalAveragePooling2D
from keras.optimizers import SGD
from keras import backend as K
from keras.layers import Input
from keras.models import Model
import keras
from keras.layers.normalization import BatchNormalization

K.clear_session()
K.set_image_dim_ordering('tf')
reg = keras.regularizers.l1_l2(1e-5, 0.0)

def conv_layer(channels, kernel_size, input):
    output = Conv2D(channels, kernel_size, padding='same',kernel_regularizer=reg)(input)
    output = BatchNormalization()(output)
    output = Activation('relu')(output)
    output = Dropout(0)(output)
    return output

model = Sequential()
model.add(Conv2D(28, kernel_size=(3,3), input_shape=(50, 50, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # Flattening the 2D arrays for fully connected layers
model.add(Dense(128, activation=tf.nn.relu))
model.add(Dropout(0.2))
model.add(Dense(10, activation=tf.nn.softmax))

from keras.optimizers import Adam
import tensorflow as tf

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

from keras.callbacks import ModelCheckpoint

epochs = 10
checkpoint = ModelCheckpoint('mnist.h5', save_best_only=True)
STEP_SIZE_TRAIN=train_generator.n/train_generator.batch_size
model.fit_generator(generator=train_generator,
                    steps_per_epoch=STEP_SIZE_TRAIN,
                    epochs=epochs,
                    callbacks=[checkpoint]
)
</code></pre> <p>The output I am getting is as follows:</p> <pre><code>Using TensorFlow backend.
Found 42000 images belonging to 10 classes.
{'0': 0, '1': 1, '2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9} class indices
Epoch 1/10
42000/42000 [==============================] - 174s 4ms/step - loss: 14.4503 - acc: 0.1035
/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/callbacks.py:434: RuntimeWarning: Can save best model only with val_loss available, skipping.
  'skipping.' % (self.monitor), RuntimeWarning)
Epoch 2/10
42000/42000 [==============================] - 169s 4ms/step - loss: 14.4487 - acc: 0.1036
Epoch 3/10
42000/42000 [==============================] - 169s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 4/10
42000/42000 [==============================] - 168s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 5/10
42000/42000 [==============================] - 169s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 6/10
42000/42000 [==============================] - 168s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 7/10
42000/42000 [==============================] - 168s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 8/10
42000/42000 [==============================] - 168s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 9/10
42000/42000 [==============================] - 169s 4ms/step - loss: 14.4480 - acc: 0.1036
Epoch 10/10
 5444/42000 [==&gt;...........................] - ETA: 2:26 - loss: 14.3979 - acc: 0.1067
</code></pre> <p>The trainingSet directory contains a folder for each 1-9 digit with the images inside the folders. I am training on an AWS EC2 p3.2xlarge instance with the Amazon Deep Learning Linux AMI.</p>
<p>Here is a list of some weird points that I see:</p> <ul> <li>Not rescaling your images -> <code>ImageDataGenerator(rescale=1/255)</code></li> <li>Batch size of 1 (you may want to increase that)</li> <li>MNIST is grayscale pictures, therefore <code>color_mode</code> should be <code>"grayscale"</code>.</li> </ul> <p>(Also you have several unused parts in your code that you may want to delete from the question)</p>
python|tensorflow|keras|conv-neural-network|mnist
1
148
71,301,069
How can I create a user in Django?
<p>I am trying to create a user in my Django and React application (full-stack), but my views.py fails to save the form I give it. Can someone explain the error to me, or maybe give me other ways to create a user? Below is the code:</p> <pre><code># Form folder
def Registration(request):
    if request.method == 'POST':
        form = UserForm(request.POST)
        if form.is_valid():
            username = form.cleaned_data['username']
            email = form.cleaned_data['email']
            password = form.cleaned_data['password']
            User.objects.create(
                email = email ,
                username = username,
                password = password,
            )
            user = form.save()
            login(request,user)
            return redirect('/profile/')
        else:
            context = {&quot;Error&quot; : &quot;Error during form loading&quot;}
            return render(request,'accounts/registration.html',context=context)
    return render(request,'accounts/registration.html',context=context)
</code></pre> <p>And that's my Forms.py</p> <pre><code>class UserForm(UserCreationForm):
    username = forms.TextInput()
    email = forms.EmailField()
    password = forms.TextInput()
    password2 = forms.TextInput()

    class Meta:
        model = User
        fields = (&quot;username&quot;, &quot;email&quot;, &quot;password&quot;, &quot;password2&quot;)

    def save(self, commit=True):
        user = super(UserForm, self).save(commit=False)
        if self.password != self.password2:
            raise forms.ValidationError('Input unvalid')
        elif commit:
            user.save()
        return user
</code></pre>
<p>In the module where you call <code>User</code> on the django server, you want to call something like</p> <pre><code>user = User.objects.create_user(username, email, password)
if not user:
    raise Exception(&quot;something went wrong with the DB!&quot;)
</code></pre> <p>It may be helpful to read the <a href="https://docs.djangoproject.com/en/4.0/topics/auth/default/#creating-users" rel="nofollow noreferrer">django docs</a> on the <code>User</code> model; I've linked directly to the part describing the <code>create_user()</code> method.</p> <p>The if block above is helpful to confirm things are working as intended.</p> <p>If you want to use the <a href="https://docs.djangoproject.com/en/3.0/topics/auth/default/#django.contrib.auth.forms.UserCreationForm" rel="nofollow noreferrer">UserCreationForm</a> approach, then remove the call to the <code>User</code> model. This is handled for you in the <code>UserCreationForm</code> once you save and commit.</p> <p>So the form of the if branch will look like this:</p> <pre><code>    if form.is_valid():
        user = form.save()
        login(request, user)
        return redirect('/profile/')
</code></pre> <p>Once you've got it working you will also want to write a unit test that confirms the user was created. You can confirm that by doing something like this:</p> <pre><code>class UserTests(TestCase):
    def test_logout_deletes_token(self):
        # some stuff here that creates a single user
        self.assertEqual(User.objects.count(), 1)
</code></pre>
python|django
1
149
71,414,133
how to find if any character in a string is a number in python? .isnumeric and .isdigit isn't working
<p>I need to determine if there are alphabetic and numeric characters in a string. My code for testing the alphabetic one seems to work fine, but numeric is only working if all of the characters are a digit, not if any.</p> <p>The alphabetic code that works:</p> <pre><code>from curses.ascii import isalnum, isalpha, isdigit

password = input(&quot;Enter your password: &quot;)

def contains_alphabetic():
    for i in password:
        if isalpha(i):
            print(&quot;Valid&quot;)
            return True
        else:
            print(&quot;Invalid&quot;)
            return False

contains_alphabetic()
</code></pre> <p>This returns &quot;Valid&quot; if at least one of the characters is alphabetic, and &quot;Invalid&quot; if none of them are, which is what I want.</p> <pre><code>def contains_numeric():
    for j in password:
        if isdigit(j):
            print(&quot;Valid&quot;)
            return True
        else:
            print(&quot;Invalid&quot;)
            return False

contains_numeric()
</code></pre> <p>This only returns &quot;Valid&quot; if all of the characters are numeric, not if at least one is. How do I fix this?</p> <p>Also, I tried using is.numeric() instead of is.digit() and it wouldn't even import.</p>
<p>As the comments have pointed out, both your <code>contains_alphabetic</code> and <code>contains_numeric</code> functions don't do what you think they're doing, because they terminate prematurely - during the very first iteration. You start a loop, inspect the current character (which will be the first character of the string during the first iteration of the loop), and immediately return something from the function based on that single character, which of course terminates the loop and the function.</p> <p>Other suggestions: There's no need to import things from <code>curses</code>. Strings already have <code>isalpha</code> and <code>isdigit</code> predicates available. Additionally, it's probably a good idea to have your functions accept a string parameter to iterate over.</p> <p>If the idea is to return <code>True</code> if any of the characters in a string satisfy a condition/predicate, and <code>False</code> otherwise (if none of the characters satisfy the condition), then the following would be a working implementation:</p> <pre><code>def contains_alpha(string):
    for char in string:
        if char.isalpha():
            return True # return True as soon as we see a character that satisfies the condition
    return False # Notice the indentation - we only return False if we managed to iterate over every character without returning True
</code></pre> <p>Alternatively:</p> <pre><code>def contains_alpha(string):
    found = False
    for char in string:
        if char.isalpha():
            found = True
            break
    return found
</code></pre> <p>Or:</p> <pre><code>def contains_alpha(string):
    for char in string:
        if char.isalpha():
            break
    else: # Notice indentation - special for-else syntax: If we didn't break out of the loop, execute the else
        return False
    return True
</code></pre> <p>Or:</p> <pre><code>def contains_alpha(string):
    return any(char.isalpha() for char in string)
</code></pre>
python
0
150
39,404,033
Is it possible to name a variable 'in' in python?
<p>I am making a unit converter for a homework assignment, and I am using abbreviations for the various units of measurement as variable names, but when I try to define 'in' as a variable, it thinks I mean to use the word 'in' as if I wanted to say 'for x in y' or something like that, and I get the following error:</p> <pre><code> File "unit_converter-v1.py", line 13 in = 12 ^ SyntaxError: invalid syntax </code></pre> <p>Is there any way to get around this?</p> <p>I know I could just use the word 'inch' or 'inches' but then I would have to type 'inch' or 'inches' into the program every time I needed to convert inches, and I want to make the process as efficient as possible (I'm a senior in highschool, and my phisics teacher will allow us to use unit converters on tests and quizzes if we write the code ourselves. The point is that time is valuable and I need to be as time-efficient as possible).</p>
<p><code>in</code> is a python keyword, so no you cannot use it as a variable or function or class name.</p> <p>See <a href="https://docs.python.org/2/reference/lexical_analysis.html#keywords" rel="nofollow">https://docs.python.org/2/reference/lexical_analysis.html#keywords</a> for the list of keywords in Python 2.7.12.</p>
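If you want to check candidate unit abbreviations programmatically before using them as variable names, the standard library's <code>keyword</code> module can do it:

```python
import keyword

# Reserved words can't be used as variable names; "inch" is a safe substitute.
print(keyword.iskeyword("in"))    # True
print(keyword.iskeyword("inch"))  # False
```
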
python|python-2.7|units-of-measurement|unit-conversion
4
151
55,471,260
How can I reduce a tensor's last dimension in PyTorch?
<p>I have tensor of shape <code>(1, 3, 256, 256, 3)</code>. I need to reduce one of the dimensions to obtain the shape <code>(1, 3, 256, 256)</code>. How can I do it? </p> <p>Thanks!</p>
<p>If you intend to apply mean over the last dimension, then you can do so with:</p> <pre><code>In [18]: t = torch.randn((1, 3, 256, 256, 3))

In [19]: t.shape
Out[19]: torch.Size([1, 3, 256, 256, 3])

# apply mean over the last dimension
In [23]: t_reduced = torch.mean(t, -1)

In [24]: t_reduced.shape
Out[24]: torch.Size([1, 3, 256, 256])

# equivalently
In [32]: torch.mean(t, t.ndimension()-1).shape
Out[32]: torch.Size([1, 3, 256, 256])
</code></pre>
arrays|python-3.x|numpy|pytorch|tensor
1
152
52,452,524
Searching for a key in python dictionary using "If key in dict:" seemingly not working
<p>I'm iterating through a csv file and checking whether a column is present as a key in a dictionary.</p> <p>This is an example row in the CSV file</p> <pre><code>833050,1,109,B147599,162560,0
</code></pre> <p>I'm checking whether the 5th column is a key in this dictionary</p> <pre><code>{162560: True, 165121: True, 162562: True, 153098: True, 168336: True}
</code></pre> <p>I pass in this dict. as the var. mt_budgets in the following code</p> <pre><code>def check(self, mt_budgets):
    present = {}
    cwd = os.getcwd()
    path = cwd
    with open(path + 'file.csv.part') as f:
        csv_f = csv.reader(f)
        for row in csv_f:
            if row[4] == '162560':
                print 'Yes STRING'
            if str(row[4]) in mt_budgets:
                print 'Yes it\'s here'
                present[row[4]] = True
            else:
                print 'No it\'s not'
                print row[4]
                print mt_budgets
</code></pre> <p>This is the output I'm getting</p> <pre><code>Yes STRING
No it's not
162560
{162560: True, 165121: True, 162562: True, 153098: True, 168336: True}
</code></pre> <p>I'm not sure why it's not picking it up as a key, what's going on here?</p> <p>Thanks!</p>
<pre><code>{162560: True} # {int:bool} {'162560': True} # {str:bool} </code></pre> <p>So, <code>mt_budgets</code> does not contain <code>'162560'</code> (str), it contains <code>162560</code> (int)</p> <p>Your code should be:</p> <pre><code>def check(self, mt_budgets): present = {} cwd = os.getcwd() path = cwd with open(path + 'file.csv.part') as f: csv_f = csv.reader(f) for row in csv_f: if int(row[4]) == 162560: # csv data is possibly str format. convert it to int and compare. print 'Yes STRING' if int(row[4]) in mt_budgets: print 'Yes it\'s here' present[row[4]] = True else: print 'No it\'s not' print row[4] print mt_budgets </code></pre>
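The mismatch is easy to reproduce in isolation:

```python
mt_budgets = {162560: True, 165121: True}  # int keys

print('162560' in mt_budgets)       # False: a str never equals an int
print(162560 in mt_budgets)         # True
print(int('162560') in mt_budgets)  # True, convert the CSV field first
```

The alternative fix is to build the dictionary with string keys in the first place, so membership tests line up with the str values the csv module produces.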
python|csv|dictionary
4
153
47,842,388
Read text files from website with Python
<p>Hello I have got problem I want to get all data from the web but this is too huge to save it to variable. I save data making it like this:</p> <pre><code>r = urlopen("http://download.cathdb.info/cath/releases/all-releases/v4_2_0/cath-classification-data/cath-domain-list-v4_2_0.txt") r = BeautifulSoup(r, "lxml") r = r.p.get_text() some operations </code></pre> <p>This was working good until I have to get data from this website: <a href="http://download.cathdb.info/cath/releases/all-releases/v4_2_0/cath-classification-data/cath-domain-description-file-v4_2_0.txt" rel="nofollow noreferrer">http://download.cathdb.info/cath/releases/all-releases/v4_2_0/cath-classification-data/cath-domain-description-file-v4_2_0.txt</a></p> <p>When I run same code as above on this page my program is stopping at line</p> <pre><code>r = BeautifulSoup(r, "lxml") </code></pre> <p>and this is taking forever, nothing happen. I don't know how to get this whole data not saving it to file to make on this some operations of searching key words and printing them. I can't save this to file I have to get this from website.</p> <p>I will be very thankful for every help.</p>
<p>I think the code below can do what you want. As mentioned in a comment by @alecxe, you don't need BeautifulSoup at all. This is really a question about reading the contents of a text file from a URL, which is answered in <a href="https://stackoverflow.com/questions/1393324/in-python-given-a-url-to-a-text-file-what-is-the-simplest-way-to-read-the-cont">Given a URL to a text file, what is the simplest way to read the contents of the text file?</a></p> <pre><code>from urllib.request import urlopen r = urlopen(&quot;http://download.cathdb.info/cath/releases/all-releases/v4_2_0/cath-classification-data/cath-domain-list-v4_2_0.txt&quot;) for line in r: do_something(line) </code></pre>
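One detail to watch in Python 3: iterating the object urlopen returns yields bytes, not str, so you usually want to decode each line. A sketch using io.BytesIO as a stand-in for the HTTP response (the sample lines are made up):

```python
import io

# BytesIO mimics the file-like object urlopen() returns:
# iterating it yields one bytes line at a time, without loading everything
response = io.BytesIO(b"1oaiA00 1 10 8 10 1\n1oksA00 1 10 8 10 2\n")

for raw_line in response:
    line = raw_line.decode("utf-8").rstrip("\n")  # bytes -> str
    print(line)
```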
python-3.x|web-scraping|beautifulsoup
1
154
37,573,759
How to format print outputs of a text file?
<p>I have a text file called animal.txt.</p> <pre><code>1.1 Animal Name : Dog Animal Type : Mammal Fur : Yes Scale : No 1.2 Animal Name : Snake Animal Type : Reptile Fur : No Scale : Yes 1.3 Animal Name : Frog Animal Type : Amphibian Fur : No Scale : No 1.x Animal Name : aaaaaa Animal Type : bbbbbb Fur : cccccc Scale : dddddd </code></pre> <p>My desired ouput is:</p> <pre><code>Dog Mammal Yes Snake Reptile No Frog Amphibian No </code></pre> <p>Output I am getting is:</p> <pre><code>Dog Mammal Yes Snake Reptile No Frog Amphibian No </code></pre> <p>This is the code I currently have that prints out my current input. </p> <pre><code>with open('animal.txt', 'r') as fp: for line in fp: header = line.split(':')[0] if 'Animal Name' in header: animal_name = line.split(':')[1].strip() print animal_name, elif 'Animal Type' in header: animal_type = line.split(':')[1].strip() print animal_type, elif 'Fur' in header: fur = line.split(':')[1].strip() print fur elif '1.x' in header: break </code></pre> <p>Is there a way to format or add to my existing code that will give me my desired output?</p>
<p>Print a tab between the items instead of relying on the trailing comma (in Python 2 a trailing comma inserts a single space, which is why you get the spacing you see). You can concatenate the tab yourself, e.g. <code>print animal_name + '\t',</code>, or, more cleanly, collect the three fields per animal and join them with a tab:</p> <pre><code>with open('animal.txt', 'r') as fp: fields = [] for line in fp: header = line.split(':')[0].strip() if header == '1.x': break if header in ('Animal Name', 'Animal Type', 'Fur'): fields.append(line.split(':')[1].strip()) if header == 'Fur': print '\t'.join(fields) fields = [] </code></pre>
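For reference, a self-contained Python 3 sketch of the same idea (the question's code uses Python 2 print statements), parsing the sample data from a string:

```python
sample = """\
1.1
Animal Name : Dog
Animal Type : Mammal
Fur : Yes
Scale : No
1.2
Animal Name : Snake
Animal Type : Reptile
Fur : No
Scale : Yes
1.x
Animal Name : aaaaaa
"""

rows = []
fields = []
for line in sample.splitlines():
    header = line.split(':')[0].strip()
    if header == '1.x':          # stop at the template record
        break
    if header in ('Animal Name', 'Animal Type', 'Fur'):
        fields.append(line.split(':')[1].strip())
        if header == 'Fur':      # last wanted field of one animal record
            rows.append('\t'.join(fields))
            fields = []

for row in rows:
    print(row)
```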
python|python-2.7|python-3.x
-1
155
34,408,304
Django - How to filter a ListView by user without using CBV?
<p>Is it possible to do this? I've been looking for quite a while, but every solution I've seen involves subclassing <code>ListView</code> which I don't want to do. I'm sure there's a way to filter results by user without having to resort to class-based views, I just can't seem to find good information on it, am I missing something?</p> <p>I've tried a few things similar to this, but I don't think it's going to work the way I'm trying, and the only other way I've seen is with CBV:</p> <pre><code>url(r'^$', ListView.as_view(queryset=Game.objects.filter(user=User.user), template_name = 'userprofile.html')), </code></pre>
<p>When a request reaches a function-based view, the current user is already available on the request object, so you can filter on it directly:</p> <p><strong>views.py</strong></p> <pre><code>from django.shortcuts import render def my_not_cb_view(request): user = request.user games = Game.objects.filter(user=user) context = {'games': games, 'user': user} return render(request, 'userprofile.html', context) </code></pre> <p><strong>urls.py</strong></p> <p><code>url(r'^$', my_not_cb_view)</code></p>
python|django|listview|django-class-based-views
1
156
66,288,017
Fitting Log-normal distribution (Python plot)
<p>I am trying to fit a log-normal distribution to the histogram data. I've tried to follow examples of other questions here on the Stack Exchange but I'm not getting the fit, because in this case I have a broken axis. I already put the broken axis on that plot, I tried to prevent the numbers from overlapping on the axes, I removed the numbers from the repeated axes, I reduced the size of the second subplot, but I'm not able to fit the log-normal. How can I fit the log-normal distribution for this data set?</p> <p>Code:</p> <pre><code>#amostra 17B (menor intervalo) import pandas as pd import matplotlib.pyplot as plt import numpy as np from scipy.stats import lognorm import matplotlib.ticker as tkr import scipy, pylab import locale import matplotlib.gridspec as gridspec from scipy.stats import lognorm locale.setlocale(locale.LC_NUMERIC, &quot;de_DE&quot;) plt.rcParams['axes.formatter.use_locale'] = True frequencia_relativa=[0.000, 0.000, 0.038, 0.097, 0.091, 0.118, 0.070, 0.124, 0.097, 0.059, 0.059, 0.048, 0.054, 0.043, 0.032, 0.005, 0.027, 0.016, 0.005, 0.000, 0.005, 0.000, 0.005, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.005, 0.000, 0.000] x=[0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00, 1.10, 1.20, 1.30, 1.40, 1.50, 1.60, 1.70, 1.80, 1.90, 2.00, 2.10, 2.20, 2.30, 2.40, 2.50, 2.60, 2.70, 2.80, 2.90, 3.00, 3.10, 3.20, 3.30, 3.40, 3.50, 3.60, 3.70, 3.80, 3.90, 4.00, 4.10, 4.20, 4.30, 4.40, 4.50, 4.60, 4.70, 4.80, 4.90, 5.00, 5.10, 5.20, 5.30, 5.40, 5.50, 5.60, 5.70, 5.80, 5.90, 6.00, 6.10, 6.20, 6.30, 6.40, 6.50, 6.60, 6.70, 6.80, 6.90, 7.00, 7.10, 7.20, 7.30, 7.40, 7.50, 7.60, 7.70, 7.80, 7.90, 8.00] 
plt.rcParams[&quot;figure.figsize&quot;] = [20,8] f, (ax,ax2) = plt.subplots(1,2, sharex=True, sharey=True, facecolor='w') axes = f.add_subplot(111, frameon=False) ax.spines['top'].set_color('none') ax2.spines['top'].set_color('none') gs = gridspec.GridSpec(1,2,width_ratios=[3,1]) ax = plt.subplot(gs[0]) ax2 = plt.subplot(gs[1]) ax.yaxis.tick_left() ax.xaxis.tick_bottom() ax2.xaxis.tick_bottom() ax.tick_params(labeltop='off') # don't put tick labels at the top ax2.yaxis.tick_right() ax.bar(x, height=frequencia_relativa, alpha=0.5, width=0.1, align='edge', edgecolor='black', hatch=&quot;///&quot;) ax2.bar(x, height=frequencia_relativa, alpha=0.5, width=0.1, align='edge', edgecolor='black', hatch=&quot;///&quot;) ax.tick_params(axis = 'both', which = 'major', labelsize = 18) ax.tick_params(axis = 'both', which = 'minor', labelsize = 18) ax2.tick_params(axis = 'both', which = 'major', labelsize = 18) ax2.tick_params(axis = 'both', which = 'minor', labelsize = 18) ax2.xaxis.set_ticks(np.arange(7.0, 8.5, 0.5)) ax2.xaxis.set_major_formatter(tkr.FormatStrFormatter('%0.1f')) plt.subplots_adjust(wspace=0.04) ax.set_xlim(0,2.5) ax.set_ylim(0,0.14) ax2.set_xlim(7.0,8.0) def func(x, pos): # formatter function takes tick label and tick position s = str(x) ind = s.index('.') return s[:ind] + ',' + s[ind+1:] # change dot to comma x_format = tkr.FuncFormatter(func) ax.xaxis.set_major_formatter(x_format) ax2.xaxis.set_major_formatter(x_format) # hide the spines between ax and ax2 ax.spines['right'].set_visible(False) ax2.spines['left'].set_visible(False) # This looks pretty good, and was fairly painless, but you can get that # cut-out diagonal lines look with just a bit more work. The important # thing to know here is that in axes coordinates, which are always # between 0-1, spine endpoints are at these locations (0,0), (0,1), # (1,0), and (1,1). 
Thus, we just need to put the diagonals in the # appropriate corners of each of our axes, and so long as we use the # right transform and disable clipping. d = .015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color='k', clip_on=False) ax.plot((1-d/3,1+d/3), (-d,+d), **kwargs) ax.plot((1-d/3,1+d/3),(1-d,1+d), **kwargs) kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d,+d), (1-d,1+d), **kwargs) ax2.plot((-d,+d), (-d,+d), **kwargs) ax2.tick_params(labelright=False) ax.tick_params(labeltop=False) ax.tick_params(axis='x', which='major', pad=15) ax2.tick_params(axis='x', which='major', pad=15) ax2.set_yticks([]) f.text(0.5, -0.04, 'Tamanho lateral do triângulo ($\mu m$)', ha='center', fontsize=22) f.text(-0.02, 0.5, 'Frequência relativa', va='center', rotation='vertical', fontsize=22) #ax.set_xlabel('Tamanho lateral do triângulo ($\mu m$)', fontsize=22) #ax.set_ylabel('Frequência relativa', fontsize=22) #x_axis = np.arange(0, 29, 0.001) #ax.plot(x_axis, norm.pdf(x_axis,2.232,1.888), linewidth=3) f.tight_layout() plt.show() #plt.savefig('output.png', dpi=500, bbox_inches='tight') </code></pre> <p><a href="https://i.stack.imgur.com/HQvcF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HQvcF.png" alt="enter image description here" /></a></p> <hr /> <p>Attempt with curve_fit:</p> <pre><code>#amostra 17B (menor intervalo) import pandas as pd import matplotlib.pyplot as plt import numpy as np from scipy.stats import lognorm import matplotlib.ticker as tkr import scipy, pylab import locale import matplotlib.gridspec as gridspec from scipy.stats import lognorm locale.setlocale(locale.LC_NUMERIC, &quot;de_DE&quot;) plt.rcParams['axes.formatter.use_locale'] = True from scipy.optimize import * frequencia_relativa=[0.000, 0.000, 0.038, 0.097, 0.091, 0.118, 0.070, 0.124, 0.097, 0.059, 0.059, 0.048, 0.054, 0.043, 0.032, 
0.005, 0.027, 0.016, 0.005, 0.000, 0.005, 0.000, 0.005, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.005, 0.000, 0.000] x=[0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00, 1.10, 1.20, 1.30, 1.40, 1.50, 1.60, 1.70, 1.80, 1.90, 2.00, 2.10, 2.20, 2.30, 2.40, 2.50, 2.60, 2.70, 2.80, 2.90, 3.00, 3.10, 3.20, 3.30, 3.40, 3.50, 3.60, 3.70, 3.80, 3.90, 4.00, 4.10, 4.20, 4.30, 4.40, 4.50, 4.60, 4.70, 4.80, 4.90, 5.00, 5.10, 5.20, 5.30, 5.40, 5.50, 5.60, 5.70, 5.80, 5.90, 6.00, 6.10, 6.20, 6.30, 6.40, 6.50, 6.60, 6.70, 6.80, 6.90, 7.00, 7.10, 7.20, 7.30, 7.40, 7.50, 7.60, 7.70, 7.80, 7.90, 8.00] plt.rcParams[&quot;figure.figsize&quot;] = [20,8] f, (ax,ax2) = plt.subplots(1,2, sharex=True, sharey=True, facecolor='w') axes = f.add_subplot(111, frameon=False) ax.spines['top'].set_color('none') ax2.spines['top'].set_color('none') gs = gridspec.GridSpec(1,2,width_ratios=[3,1]) ax = plt.subplot(gs[0]) ax2 = plt.subplot(gs[1]) def f(x, mu, sigma) : return 1/(np.sqrt(2*np.pi)*sigma*x)*np.exp(-((np.log(x)- mu)**2)/(2*sigma**2)) params, extras = curve_fit(f, x, frequencia_relativa) plt.plot(x, f(x ,params[0], params[1])) print(&quot;mu=%g, sigma=%g&quot; % (params[0], params[1])) plt.subplots_adjust(wspace=0.04) # hide the spines between ax and ax2 ax.spines['right'].set_visible(False) ax2.spines['left'].set_visible(False) d = .015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color='k', clip_on=False) ax.plot((1-d/3,1+d/3), (-d,+d), **kwargs) ax.plot((1-d/3,1+d/3),(1-d,1+d), **kwargs) kwargs.update(transform=ax2.transAxes) # switch to 
the bottom axes ax2.plot((-d,+d), (1-d,1+d), **kwargs) ax2.plot((-d,+d), (-d,+d), **kwargs) f.tight_layout() plt.show() #plt.savefig('output.png', dpi=500, bbox_inches='tight') </code></pre> <p><a href="https://i.stack.imgur.com/qserc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qserc.png" alt="enter image description here" /></a></p> <hr /> <p>Error:</p> <pre><code>import matplotlib.ticker as tkr import scipy, pylab import locale import matplotlib.gridspec as gridspec #from scipy.stats import lognorm locale.setlocale(locale.LC_NUMERIC, &quot;de_DE&quot;) plt.rcParams['axes.formatter.use_locale'] = True from scipy.optimize import curve_fit x=np.asarray([0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00, 1.10, 1.20, 1.30, 1.40, 1.50, 1.60, 1.70, 1.80, 1.90, 2.00, 2.10, 2.20, 2.30, 2.40, 2.50, 2.60, 2.70, 2.80, 2.90, 3.00, 3.10, 3.20, 3.30, 3.40, 3.50, 3.60, 3.70, 3.80, 3.90, 4.00, 4.10, 4.20, 4.30, 4.40, 4.50, 4.60, 4.70, 4.80, 4.90, 5.00, 5.10, 5.20, 5.30, 5.40, 5.50, 5.60, 5.70, 5.80, 5.90, 6.00, 6.10, 6.20, 6.30, 6.40, 6.50, 6.60, 6.70, 6.80, 6.90, 7.00, 7.10, 7.20, 7.30, 7.40, 7.50, 7.60, 7.70, 7.80, 7.90, 8.00], dtype=np.float64) frequencia_relativa=np.asarray([0.000, 0.000, 0.038, 0.097, 0.091, 0.118, 0.070, 0.124, 0.097, 0.059, 0.059, 0.048, 0.054, 0.043, 0.032, 0.005, 0.027, 0.016, 0.005, 0.000, 0.005, 0.000, 0.005, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.005, 0.000, 0.000], dtype=np.float64) f, (ax,ax2) = plt.subplots(1,2, sharex=True, sharey=True, facecolor='w') def fun(y, mu, sigma): return 1.0/(np.sqrt(2.0*np.pi)*sigma*y)*np.exp(-(np.log(y)-mu)**2/(2.0*sigma*sigma)) step = 0.1 xx = x nrm = 
np.sum(frequencia_relativa*step) # normalization integral print(nrm) frequencia_relativa /= nrm # normalize frequences histogram print(np.sum(frequencia_relativa*step)) # check normalizatio params, extras = curve_fit(fun, xx, frequencia_relativa) print(params[0]) print(params[1]) axes = f.add_subplot(111, frameon=False) axes.plot(x, fun(x, params[0], params[1]), &quot;b-&quot;, linewidth=3) ax.spines['top'].set_color('none') ax2.spines['top'].set_color('none') gs = gridspec.GridSpec(1,2,width_ratios=[3,1]) ax = plt.subplot(gs[0]) ax2 = plt.subplot(gs[1]) ax.axvspan(0.190, 1.616, label='Média $\pm$ desvio padrão', ymin=0.0, ymax=1.0, alpha=0.2, color='Plum') ax.yaxis.tick_left() ax.xaxis.tick_bottom() ax2.xaxis.tick_bottom() ax.tick_params(labeltop='off') # don't put tick labels at the top ax2.yaxis.tick_right() ax.bar(xx, height=frequencia_relativa, label='Frequência relativa do tamanho lateral triangular', alpha=0.5, width=0.1, align='edge', edgecolor='black', hatch=&quot;///&quot;) ax2.bar(xx, height=frequencia_relativa, alpha=0.5, width=0.1, align='edge', edgecolor='black', hatch=&quot;///&quot;) #plt.plot(xx, frequencia_relativa, &quot;ro&quot;) ax.tick_params(axis = 'both', which = 'major', labelsize = 18) ax.tick_params(axis = 'both', which = 'minor', labelsize = 18) ax2.tick_params(axis = 'both', which = 'major', labelsize = 18) ax2.tick_params(axis = 'both', which = 'minor', labelsize = 18) ax2.xaxis.set_ticks(np.arange(7.0, 8.5, 0.5)) ax2.xaxis.set_major_formatter(tkr.FormatStrFormatter('%0.1f')) plt.subplots_adjust(wspace=0.04) ax.set_xlim(0,2.5) ax.set_ylim(0,1.4) ax2.set_xlim(7.0,8.0) def func(x, pos): # formatter function takes tick label and tick position s = str(x) ind = s.index('.') return s[:ind] + ',' + s[ind+1:] # change dot to comma x_format = tkr.FuncFormatter(func) ax.xaxis.set_major_formatter(x_format) ax2.xaxis.set_major_formatter(x_format) # hide the spines between ax and ax2 ax.spines['right'].set_visible(False) 
ax2.spines['left'].set_visible(False) d = .015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color='k', clip_on=False) ax.plot((1-d/3,1+d/3), (-d,+d), **kwargs) ax.plot((1-d/3,1+d/3),(1-d,1+d), **kwargs) kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d,+d), (1-d,1+d), **kwargs) ax2.plot((-d,+d), (-d,+d), **kwargs) ax2.tick_params(labelright=False) ax.tick_params(labeltop=False) ax.tick_params(axis='x', which='major', pad=15) ax2.tick_params(axis='x', which='major', pad=15) ax2.set_yticks([]) f.text(0.5, -0.04, 'Tamanho lateral do triângulo ($\mu m$)', ha='center', fontsize=22) f.text(-0.02, 0.5, 'Frequência relativa', va='center', rotation='vertical', fontsize=22) #ax.set_xlabel('Tamanho lateral do triângulo ($\mu m$)', fontsize=22) #ax.set_ylabel('Frequência relativa', fontsize=22) #x_axis = np.arange(0, 29, 0.001) #ax.plot(x_axis, norm.pdf(x_axis,2.232,1.888), linewidth=3) ax.axvline(0.903, color='k', linestyle='-', linewidth=1.3) ax.axvline(0.190, color='k', linestyle='--', linewidth=1) ax.axvline(1.616, color='k', linestyle='--', linewidth=1) f.legend(loc=9, bbox_to_anchor=(.79,.99), labelspacing=1.5, numpoints=1, columnspacing=0.2, ncol=1, fontsize=18) ax.text(0.903*0.70, 1.4*0.92, '$\mu$ = (0,90 $\pm$ 0,71) $\mu m$', fontsize=20) f.tight_layout() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/2mVSG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2mVSG.png" alt="enter image description here" /></a></p>
<p>You're trying to do fancy graphs and fitting at the same time. I'll help you with the fit; the graphs are a secondary problem.</p> <p>First, use NumPy arrays for the data; it helps a lot. Second, your histogram is not normalized.</p> <p>So if, in the first of your programs, I normalize the frequency array</p> <pre><code>x=np.asarray([0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00, 1.10, 1.20, 1.30, 1.40, 1.50, 1.60, 1.70, 1.80, 1.90, 2.00, 2.10, 2.20, 2.30, 2.40, 2.50, 2.60, 2.70, 2.80, 2.90, 3.00, 3.10, 3.20, 3.30, 3.40, 3.50, 3.60, 3.70, 3.80, 3.90, 4.00, 4.10, 4.20, 4.30, 4.40, 4.50, 4.60, 4.70, 4.80, 4.90, 5.00, 5.10, 5.20, 5.30, 5.40, 5.50, 5.60, 5.70, 5.80, 5.90, 6.00, 6.10, 6.20, 6.30, 6.40, 6.50, 6.60, 6.70, 6.80, 6.90, 7.00, 7.10, 7.20, 7.30, 7.40, 7.50, 7.60, 7.70, 7.80, 7.90, 8.00], dtype=np.float64) frequencia_relativa=np.asarray([0.000, 0.000, 0.038, 0.097, 0.091, 0.118, 0.070, 0.124, 0.097, 0.059, 0.059, 0.048, 0.054, 0.043, 0.032, 0.005, 0.027, 0.016, 0.005, 0.000, 0.005, 0.000, 0.005, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.005, 0.000, 0.000], dtype=np.float64) step = 0.1 nrm = np.sum(frequencia_relativa*step) # normalization integral print(nrm) frequencia_relativa /= nrm print(np.sum(frequencia_relativa*step)) </code></pre> <p>and set the Y limit to 1.4, I get the graph below.</p> <p><a href="https://i.stack.imgur.com/XAfAI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XAfAI.png" alt="enter image description here" /></a></p> <p>Then, in the fitting part, I do a similar transformation and shift the X axis by half the step size, so that each histogram value sits in the middle of its bin. The fit then starts to work; the code below was tested with 
Python 3.9.1 Win 10 x64. I removed everything not related to the fit, just so it works for you, and plotted the fitted function against the input data.</p> <blockquote> <p>I also didn't quite understand the part of normalizing the integral (the sum of all the bars in the histogram gives 1 because it's the relative frequency) and I didn't understand the choice of step and shift. Could you explain this part better, please?</p> </blockquote> <p>Your function to fit is the two-parameter PDF of the log-normal distribution. It is normalized such that <sub>0</sub>∫<sup>∞</sup> PDF(x,μ,σ)=1. You have to condition your input data in the same way. For a histogram, the integral is the sum of the bins multiplied by the step. The step is obviously 0.1, so I compute this sum, check that it is not 1, and then divide the frequencies by the normalization value, so that the integral equals 1. You could try to fit a 3-parameter curve instead of a 2-parameter one, the third parameter being the normalization value, but more parameters in a non-linear fit mean more problems you could run into.</p> <p>Regarding the shift, one has to make an assumption about which value each bin describes. I assumed that the value of the bin should be the value in the middle of the bin. Again, this is an assumption; I don't know how your data were made, and maybe the histogram value is really the value at the left side of the bin. 
If that is so, you just remove the shift and rerun the code.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit x=np.asarray([0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00, 1.10, 1.20, 1.30, 1.40, 1.50, 1.60, 1.70, 1.80, 1.90, 2.00, 2.10, 2.20, 2.30, 2.40, 2.50, 2.60, 2.70, 2.80, 2.90, 3.00, 3.10, 3.20, 3.30, 3.40, 3.50, 3.60, 3.70, 3.80, 3.90, 4.00, 4.10, 4.20, 4.30, 4.40, 4.50, 4.60, 4.70, 4.80, 4.90, 5.00, 5.10, 5.20, 5.30, 5.40, 5.50, 5.60, 5.70, 5.80, 5.90, 6.00, 6.10, 6.20, 6.30, 6.40, 6.50, 6.60, 6.70, 6.80, 6.90, 7.00, 7.10, 7.20, 7.30, 7.40, 7.50, 7.60, 7.70, 7.80, 7.90, 8.00], dtype=np.float64) frequencia_relativa=np.asarray([0.000, 0.000, 0.038, 0.097, 0.091, 0.118, 0.070, 0.124, 0.097, 0.059, 0.059, 0.048, 0.054, 0.043, 0.032, 0.005, 0.027, 0.016, 0.005, 0.000, 0.005, 0.000, 0.005, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.005, 0.000, 0.000], dtype=np.float64) def f(y, mu, sigma): return 1/(np.sqrt(2.0*np.pi)*sigma*y)*np.exp(-(np.log(y)-mu)**2/(2.0*sigma*sigma)) step = 0.1 nrm = np.sum(frequencia_relativa*step) frequencia_relativa /= nrm xx = x - 0.5*step params, extras = curve_fit(f, xx, frequencia_relativa) mu = params[0] sigma = params[1] print((mu,sigma)) # calculate mean value, https://en.wikipedia.org/wiki/Log-normal_distribution print(np.exp(mu + sigma*sigma/2.0)) # calculate stddev as sq.root of variance z=np.sqrt((np.exp(sigma*sigma)-1)*np.exp(mu+mu+sigma*sigma)) print(z) xxx=np.linspace(0.001,8,1000) plt.plot(xxx, f(xxx, mu, sigma), &quot;b-&quot;) plt.plot(xx, frequencia_relativa, &quot;ro&quot;) plt.show() </code></pre> <p>and I'm getting lognorm 
curve which looks ok wrt input data. Both curves have majority of data in the [0...2] interval with peak value at ~(0.8, 1.2). Here is simplest graph which overlaps fitted curve (blue) with centers of the frequency histogram bins (red dots). Now you could try to put it into your fancy graphs, good luck.</p> <p>And just for reference, code which fits 3-parameters log-norm curve to apply to denormalized data. Seems to work as well</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit x=np.asarray([0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00, 1.10, 1.20, 1.30, 1.40, 1.50, 1.60, 1.70, 1.80, 1.90, 2.00, 2.10, 2.20, 2.30, 2.40, 2.50, 2.60, 2.70, 2.80, 2.90, 3.00, 3.10, 3.20, 3.30, 3.40, 3.50, 3.60, 3.70, 3.80, 3.90, 4.00, 4.10, 4.20, 4.30, 4.40, 4.50, 4.60, 4.70, 4.80, 4.90, 5.00, 5.10, 5.20, 5.30, 5.40, 5.50, 5.60, 5.70, 5.80, 5.90, 6.00, 6.10, 6.20, 6.30, 6.40, 6.50, 6.60, 6.70, 6.80, 6.90, 7.00, 7.10, 7.20, 7.30, 7.40, 7.50, 7.60, 7.70, 7.80, 7.90, 8.00], dtype=np.float64) frequencia_relativa=np.asarray([0.000, 0.000, 0.038, 0.097, 0.091, 0.118, 0.070, 0.124, 0.097, 0.059, 0.059, 0.048, 0.054, 0.043, 0.032, 0.005, 0.027, 0.016, 0.005, 0.000, 0.005, 0.000, 0.005, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.005, 0.000, 0.000], dtype=np.float64) def f(y, mu, sigma, N): return N/(np.sqrt(2.0*np.pi)*sigma*y)*np.exp(-(np.log(y)-mu)**2/(2.0*sigma*sigma)) step = 0.1 xx = x - 0.5*step params, extras = curve_fit(f, xx, frequencia_relativa) print(params) plt.plot(xx, f(xx, params[0], params[1], params[2]), &quot;b-&quot;) plt.plot(xx, frequencia_relativa, &quot;ro&quot;) plt.show() 
</code></pre> <p><a href="https://i.stack.imgur.com/LrIf9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LrIf9.png" alt="enter image description here" /></a></p>
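As a sanity check on the two-parameter form fitted above: a properly normalized log-normal PDF should integrate to 1 for any μ and σ. A quick NumPy-only check (the μ/σ pairs here are illustrative, not the fitted values):

```python
import numpy as np

def lognorm_pdf(y, mu, sigma):
    # two-parameter log-normal PDF, same form as the fitted function above
    return (1.0 / (np.sqrt(2.0 * np.pi) * sigma * y)
            * np.exp(-(np.log(y) - mu) ** 2 / (2.0 * sigma * sigma)))

y = np.linspace(1e-6, 60.0, 400_000)
for mu, sigma in [(0.0, 1.0), (-0.3, 0.6)]:  # illustrative parameter pairs
    pdf = lognorm_pdf(y, mu, sigma)
    area = np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(y))  # trapezoid rule
    print(mu, sigma, area)  # each area should be ~1.0
```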
python|matplotlib|graphics|distribution
1
157
7,583,022
Looking for method / module to determine the compiler which was used for building the CPython interpreter
<p>When I start the Python interpreter in command line mode I get a message saying which compiler was used for building it. Is there a way to get this information in Python ? I know I could start the interpreter with <code>subprocess.Popen</code> and parse the output, but I'm looking for an easier and more elegant method.</p> <p>The background is that I want to build Python extensions for a CMake based C++ framework, and I would like to write a CMake macro wich checks if the correct compiler is installed.</p>
<p>Use <a href="http://docs.python.org/library/platform.html#platform.python_compiler" rel="nofollow"><code>platform.python_compiler()</code></a>.</p>
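For example (the exact strings vary by platform, and sysconfig's CC variable may be None on builds that do not record it, such as the official Windows binaries):

```python
import platform
import sysconfig

# the compiler string shown in the interpreter's startup banner, e.g. 'GCC 12.2.0'
print(platform.python_compiler())

# on most Unix builds the build configuration also records the compile command
print(sysconfig.get_config_var('CC'))
```

A CMake macro could then run `python -c "import platform; print(platform.python_compiler())"` and match the output against the expected compiler.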
python|cmake
6
158
16,405,881
Google AppEngine images API suddenly corrupting images
<p>We have been using AppEngine's images API with no problem for the past year. Suddenly in the last week or so the images API seems to be corrupting the image. We use the images API to do a few different operations but the one that seems to be causing the problem is that we do an images.rotation(0) on TIFF data to convert it to a PNG. (We haven't tried other file type conversions but the point is that this was working for over a year so why should it suddenly stop working? Furthermore, we need it to work with TIFF to PNG as TIFF is the format of inbound data)</p> <p>This worked without problem for a long time and suddenly today I find that any TIFF that goes through the process is corrupted on output. It looks as though it's doubled and skewed.</p> <p>This is using the Python 2.7 API on AppEngine 1.7.7. We are using the Google images API directly not through PIL.</p> <p>Please help! This is killing our production environment.</p> <p>Example code:</p> <pre><code>from google.appengine.api import images import webapp2 def get_sample(): # sample.tiff is a 1bit black and white group3 tiff from a fax service with open("sample.tiff") as x: f = x.read() return f class MainHandler(webapp2.RequestHandler): def get(self): # Convert to PNG using AppEngine's images API by doing a rotation of 0 degrees. # This worked fine for over a year and now suddenly started corrupting the # output image with a grainy double image that looks like two of the # same image are layered on top of each other and vibrating. sample = get_sample() png = images.rotate(sample, 0) self.response.headers["Content-Type"] = "image/png" self.response.out.write(png) application = webapp2.WSGIApplication([('/', MainHandler)], debug=True) </code></pre>
<p>This turned out to be due to a recent change to the images API that introduced a bug which affected operations involving TIFF files, which has since been reverted. More information is in the original bug report.</p> <p><a href="https://code.google.com/p/googleappengine/issues/detail?id=9284" rel="nofollow">https://code.google.com/p/googleappengine/issues/detail?id=9284</a></p>
python|image|api|google-app-engine
1
159
31,801,400
Single line commands from Python
<p>I am trying to change certain entries in a file using python, which is possible in Perl with the command below , do we have anything similar in python, here the string in the file is replaced successfully. </p> <pre><code>[root@das~] perl -pi -w -e 's/unlock_time=1800/#unlock_time=1900/g;' /etc/pam.d/common-auth </code></pre> <p>For this i did try simple command in python to start off with, but no luck, any help in this direction would help, the code below does not give any output actually.</p> <pre><code>[root@das~] python -c 'import os ; os.uname()[1]' </code></pre>
<p>You need to add a print statement (with parentheses in Python 3; they are optional in Python 2.7).</p> <pre><code>[root@das~] python -c 'import os; print(os.uname()[1])' </code></pre> <p>The Perl one-liner could then be programmed this way (note that this rewrites the input file in place!):</p> <pre><code>import fileinput for line in fileinput.input('test.txt', inplace=True): line = line.rstrip('\n') if line.startswith('unlock_time'): print('# {}'.format(line)) else: print(line) </code></pre> <p>The <code>rstrip</code> is needed because each <code>line</code> still carries its trailing newline and <code>print()</code> adds another one.</p>
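If you would rather avoid fileinput's in-place mode, an equivalent and easier-to-test approach is to read the whole file, substitute, and write it back; the file path and contents below are made up for illustration:

```python
import os
import tempfile

# throwaway stand-in for /etc/pam.d/common-auth
path = os.path.join(tempfile.mkdtemp(), "common-auth")
with open(path, "w") as fh:
    fh.write("auth required pam_tally2.so deny=3 unlock_time=1800\n")

# the same substitution the Perl one-liner performs
with open(path) as fh:
    text = fh.read()
text = text.replace("unlock_time=1800", "#unlock_time=1900")
with open(path, "w") as fh:
    fh.write(text)

with open(path) as fh:
    print(fh.read())  # auth required pam_tally2.so deny=3 #unlock_time=1900
```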
perl|python-2.7
1
160
31,972,012
iMacros script questions timeout/errormsg/popupignore etc
<p>I have 1000+ URLs that I want to scrape to retrieve the title info from. After trying different things, I ultimately used iMacros scripts, which I don't know anything about. Nonetheless, I managed to make a script after reading guides.</p> <p>My script is working but has a few problems, and I have some queries.</p> <p>My script:</p> <pre><code>VERSION BUILD=9002379 TAB T=1 TAB CLOSEALLOTHERS SET !TIMEOUT_STEP 1 SET !ERRORIGNORE YES SET !EXTRACT_TEST_POPUP NO URL GOTO=http://google.com/ ADD !EXTRACT {{!URLCURRENT}} TAG POS=1 TYPE=TITLE ATTR=* EXTRACT=TXT SAVEAS TYPE=EXTRACT FOLDER=d:/ FILE=links.txt WAIT SECONDS=1 SET !TIMEOUT_STEP 1 SET !ERRORIGNORE YES SET !EXTRACT_TEST_POPUP NO URL GOTO=http://example.com:8087/ ADD !EXTRACT {{!URLCURRENT}} TAG POS=1 TYPE=TITLE ATTR=* EXTRACT=TXT SAVEAS TYPE=EXTRACT FOLDER=d:/ FILE=links.txt </code></pre> <p>What I want to ask is this:</p> <p>1- Do I have to use SET !TIMEOUT_STEP 1, SET !ERRORIGNORE YES and SET !EXTRACT_TEST_POPUP NO for every URL, or is using these commands once at the top enough?</p> <p>2- Even with SET !EXTRACT_TEST_POPUP NO, I get this error once <a href="http://i.imgur.com/8UP9uMD.jpg" rel="nofollow">http://i.imgur.com/8UP9uMD.jpg</a> at the beginning. How do I remove that?</p> <p>3- Out of my many URLs, a few are dead, so iMacros waits 60s before going to the next URL. How do I cut the wait down to 10s for dead or non-responding URLs? <a href="http://i.imgur.com/FGIXElq.jpg" rel="nofollow">http://i.imgur.com/FGIXElq.jpg</a> &lt;-- how to make it a 10s limit</p> <p>4- The script I made is for one URL. How can I run this script 1000+ times, each time with a different URL from a specific txt file? Either it duplicates this same script for all the URLs, or it pulls the URLs from the txt file as it runs: when the first URL is done, it takes the next URL from the file and inserts it into the script, so it runs all my URLs and at the end I have results for all of them.</p> <p>5- The final result I get is</p> <blockquote> <p><a href="http://google.com,Google" rel="nofollow">http://google.com,Google</a></p> </blockquote> <p>How can I change the "," after the URL to a tab or a double space, so my results look something like this:</p> <blockquote> <p><a href="http://google.com" rel="nofollow">http://google.com</a> Google</p> </blockquote> <p>Kindly reply to all my queries and, if possible, redo my script so I would know where to put which code.</p> <p>Thanks!</p>
<pre><code>SET !DATASOURCE urls.txt SET !DATASOURCE_LINE {{!LOOP}} SET !TIMEOUT_STEP 1 SET !TIMEOUT_PAGE 10 SET !ERRORIGNORE YES URL GOTO={{!COL1}} SET !ERRORIGNORE NO SET !EXTRACT_TEST_POPUP NO TAG POS=1 TYPE=TITLE ATTR=* EXTRACT=TXT SET dblSP " " SET !EXTRACT {{!COL1}}{{dblSP}}{{!EXTRACT}} SAVEAS TYPE=EXTRACT FOLDER=d:\ FILE=links.txt WAIT SECONDS=1 </code></pre> <p>Play the above macro in loop mode with the ‘Max:’ value equal to the number of lines in your txt-file.</p>
javascript|php|python|url|imacros
1
161
31,706,090
Counting the number of men and women from a CSV file
<p>I want to count the number of male and female riders (which are coded as 1 or 2) in a CSV file, but my code does not seem to be working. This is my code:</p> <pre><code>Men = 0 Women = 0 import csv with open('dec2week.csv') as csvfile: reader = csv.DictReader(csvfile) for row in reader: print(row['gender']) if 'gender' == 1: 'gender'Men += 1 esle: 'gender'Women += 1 print "Count for Men: ", Men print "Count for women: ", Women </code></pre>
<p>Use a Counter dict to do the counting:</p> <pre><code>import csv from collections import Counter from itertools import chain with open('dec2week.csv') as csvfile: next(csvfile) counts = Counter(chain.from_iterable(csv.reader(csvfile))) </code></pre> <p>Then just get the count using the key:</p> <pre><code>print("Total male = {}".format(counts["1"])) print("Total female = {}".format(counts["2"])) </code></pre> <p>To use a particular column either index each row or use your DictReader approach:</p> <pre><code> counts = Counter(row["gender"] for row in csv.DictReader(csvfile)) </code></pre> <p>Using your for loop you need to access by key and compare the values returned to "1":</p> <pre><code>with open('dec2week.csv') as csvfile: m,f = 0,0 for row in csv.DictReader(csvfile): if row["gender"] == "1": m += 1 else: f += 1 print("Total male = {}".format(m)) print("Total female = {}".format(f)) </code></pre>
python|csv
2
162
31,960,583
efficiency of Python's itertools.product()
<p>So I'm looking at different ways to compute the Cartesian product of <em>n</em> arrays, and I came across the rather elegant solution (here on SO) of using the following code:</p> <pre><code>import itertools for array in itertools.product(*arrays): print array </code></pre> <p>Looking at the <a href="https://docs.python.org/2/library/itertools.html#itertools.product" rel="nofollow">python doc page</a> (I'm using 2.7, btw) for <code>itertools.product()</code>, it says the code is equivalent to the following:</p> <pre><code>def product(*args, **kwds): # product('ABCD', 'xy') --&gt; Ax Ay Bx By Cx Cy Dx Dy # product(range(2), repeat=3) --&gt; 000 001 010 011 100 101 110 111 pools = map(tuple, args) * kwds.get('repeat', 1) result = [[]] for pool in pools: result = [x+[y] for x in result for y in pool] for prod in result: yield tuple(prod) </code></pre> <p>(It does note the following: <em>This function is equivalent to the following code, except that the actual implementation does not build up intermediate results in memory:</em>)</p> <p>I'm not a CS person - so I'm pretty bad at estimating the efficiency of this algorithm. My first guess would be <code>O(n^2)</code> (due to the nested for loop). </p> <p>Am I wrong? </p>
<p>You are absolutely right. That is, in the special case of two input arrays, both of size <em>n</em>. In the general case of <em>k</em> arrays with sizes <em>n</em>[<em>i</em>] for <em>i</em> in 1..<em>k</em>, it will be O(product of all <em>n</em>[<em>i</em>]).</p> <p>Why is this the case, and why is there no way to optimize it any further?</p> <p>Well, in this case the size of the output is exactly this "product of all <em>n</em>[<em>i</em>]", which lies in the nature of the function we are discussing. Python makes this even more obvious by implementing it as a generator: for each combination, the generator yields one element, so in the end there are as many yielded elements as the said product.</p> <p>Of course, if something so obviously does a thing <em>x</em> times, its efficiency cannot be better than O(<em>x</em>). It could be worse if the effort per element also depended on the input size. To be precise, the effort per element here depends on the number of arrays we put in, so the true effort would be</p> <p>O(<em>k</em> × Product of all <em>n</em>[<em>i</em>])</p>
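A quick way to convince yourself of this output-size argument is to count what the generator yields for some small input arrays (the sizes 2, 3 and 4 here are just illustrative):

```python
import itertools

# Three input arrays of sizes 2, 3 and 4
arrays = [range(2), range(3), range(4)]

# The generator yields exactly one tuple per combination, so the total
# number of yields equals the product of the input sizes: 2 * 3 * 4 = 24.
combos = list(itertools.product(*arrays))
print(len(combos))  # 24
```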
python|algorithm|time-complexity|cartesian-product
2
163
32,003,294
Sentence tokenization for texts that contains quotes
<p>Code:</p> <pre><code>from nltk.tokenize import sent_tokenize pprint(sent_tokenize(unidecode(text))) </code></pre> <p>Output:</p> <pre><code>[After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker.', 'Finally they pushed you out of the cold emergency room.', 'I failed to protect you.', '"Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015.',] </code></pre> <p>Input:</p> <blockquote> <p>After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you."</p> <p>Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015.</p> </blockquote> <p>The quote should be included in the previous sentence, instead of producing <code>" Li.</code></p> <p>It fails at <code>."</code> How can I fix this?</p> <p><strong>Edit:</strong> Explaining the extraction of the text.</p> <pre><code>html = open(path, "r").read() #reads html code article = extractor.extract(raw_html=html) #extracts content text = unidecode(article.cleaned_text) #changes encoding </code></pre> <p>Here, article.cleaned_text is in unicode. The idea behind using this is to change characters like “ to ".</p> <p>The solution from @alvas gives an incorrect result:</p> <pre><code>['After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker.', 'Finally they pushed you out of the cold emergency room.', 'I failed to protect you.', '"', 'Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015.'
] </code></pre> <p><strong>Edit2:</strong> (Updated) nltk and python version</p> <pre><code>python -c "import nltk; print nltk.__version__" 3.0.4 python -V Python 2.7.9 </code></pre>
<p>I'm not sure what is the desired output but I think you might need some paragraph segmentation before <code>nltk.sent_tokenize</code>, i.e.:</p> <pre><code>&gt;&gt;&gt; text = """After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you." ... ... Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015.""" &gt;&gt;&gt; from nltk import sent_tokenize &gt;&gt;&gt; paragraphs = text.split('\n\n') &gt;&gt;&gt; for pg in paragraphs: ... for sent in sent_tokenize(pg): ... print sent ... After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you." Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015. </code></pre> <p>Possibly, you might want <a href="https://stackoverflow.com/questions/22735440/extract-a-string-between-double-quotes">strings within the double quotes</a> too, if so you could try this:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; str_in_doublequotes = r'"([^"]*)"' &gt;&gt;&gt; re.findall(str_in_doublequotes, text) ['Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you.'] </code></pre> <p>Or maybe you would need this:</p> <pre><code>&gt;&gt;&gt; for pg in paragraphs: ... # Collects the quotes inside the paragraph ... in_quotes = re.findall(str_in_doublequotes, pg) ... for q in in_quotes: ... # Keep track of the quotes with tabs. ... pg = pg.replace('"{}"'.format(q), '\t') ... for _pg in pg.split('\t'): ... for sent in sent_tokenize(_pg): ... 
print sent ... try: ... print '"{}"'.format(in_quotes.pop(0)) ... except IndexError: # Nothing to pop. ... pass ... After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you." Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015. </code></pre> <p>When reading from file, try to use the <a href="https://docs.python.org/2/library/io.html" rel="nofollow noreferrer"><code>io</code></a> package:</p> <pre><code>alvas@ubi:~$ echo -e """After Du died of suffocation, her boyfriend posted a heartbreaking message online: \"Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you.\"\n\nLi Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015.""" &gt; in.txt alvas@ubi:~$ cat in.txt After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you." Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015. alvas@ubi:~$ python Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import io &gt;&gt;&gt; from nltk import sent_tokenize &gt;&gt;&gt; text = io.open('in.txt', 'r', encoding='utf8').read() &gt;&gt;&gt; print text After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. 
Finally they pushed you out of the cold emergency room. I failed to protect you." Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015. &gt;&gt;&gt; for sent in sent_tokenize(text): ... print sent ... After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you." Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015. </code></pre> <p>And with the paragraph and quote extraction hacks:</p> <pre><code>&gt;&gt;&gt; import io, re &gt;&gt;&gt; from nltk import sent_tokenize &gt;&gt;&gt; str_in_doublequotes = r'"([^"]*)"' &gt;&gt;&gt; paragraphs = text.split('\n\n') &gt;&gt;&gt; for pg in paragraphs: ... # Collects the quotes inside the paragraph ... in_quotes = re.findall(str_in_doublequotes, pg) ... for q in in_quotes: ... # Keep track of the quotes with tabs. ... pg = pg.replace('"{}"'.format(q), '\t') ... for _pg in pg.split('\t'): ... for sent in sent_tokenize(_pg): ... print sent ... try: ... print '"{}"'.format(in_quotes.pop(0)) ... except IndexError: # Nothing to pop. ... pass ... After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you." Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015. 
</code></pre> <p>For the magic to concatenate the pre-quote sentence with the quotes (don't blink, it looks quite the same as above):</p> <pre><code>&gt;&gt;&gt; import io, re &gt;&gt;&gt; from nltk import sent_tokenize &gt;&gt;&gt; str_in_doublequotes = r'"([^"]*)"' &gt;&gt;&gt; paragraphs = text.split('\n\n') &gt;&gt;&gt; for pg in paragraphs: ... # Collects the quotes inside the paragraph ... in_quotes = re.findall(str_in_doublequotes, pg) ... for q in in_quotes: ... # Keep track of the quotes with tabs. ... pg = pg.replace('"{}"'.format(q), '\t') ... for _pg in pg.split('\t'): ... for sent in sent_tokenize(_pg): ... print sent, ... try: ... print '"{}"'.format(in_quotes.pop(0)) ... except IndexError: # Nothing to pop. ... pass ... After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you." Li Na, 23, a migrant worker from a farming family in Jiangxi province, was looking forward to getting married in 2015. </code></pre> <p>The problem with the above code is that it is limited to sentences like:</p> <blockquote> <p>After Du died of suffocation, her boyfriend posted a heartbreaking message online: "Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. I failed to protect you."</p> </blockquote> <p>And cannot handle:</p> <blockquote> <p>"Losing consciousness in my arms, your breath and heartbeat became weaker and weaker. Finally they pushed you out of the cold emergency room. 
I failed to protect you," her boyfriend posted a heartbreaking message online after Du died of suffocation.</p> </blockquote> <p>Just to make sure, my python/nltk versions are:</p> <pre><code>$ python -c "import nltk; print nltk.__version__" '3.0.3' $ python -V Python 2.7.6 </code></pre> <hr> <p>Beyond the computational aspect of the text processing, there's something subtly different about the grammar of the text in the question.</p> <p>The fact that a quote is followed by a semi-colon <code>:</code> is untypical of the traditional English grammar. This might have been popularized in the Chinese news because in Chinese: </p> <blockquote> <p>啊杜窒息死亡后,男友在网上发了令人心碎的消息: "..."</p> </blockquote> <p>In traditional English in a very prescriptive grammatical sense, it would have been:</p> <blockquote> <p>After Du died of suffocation, her boyfriend posted a heartbreaking message online, "..."</p> </blockquote> <p>And a post-quotation statement would have been signalled by an ending comma instead of a fullstop, e.g.:</p> <blockquote> <p>"...," her boyfriend posted a heartbreaking message online after Du died of suffocation.</p> </blockquote>
python|nlp|nltk|tokenize
6
164
40,656,103
TypeError for cookielib CookieJar cookie in requests Session
<p>I'm trying to use a cookie from a mechanize browser that I use to log in to a site in a requests Session, but whenever I make a request from the session I get a TypeError.</p> <p>I've made a convenience class for using an api exposed by the site (most of the actually useful code is removed, this is a small example):</p> <pre><code>from __future__ import absolute_import, division, print_function, unicode_literals import requests import mechanize import cookielib class Requester: def __init__(self, api_root_url): self.api_root_url = api_root_url self.s = requests.Session() self.new_cookie() def new_cookie(self): br = mechanize.Browser() cookie_jar = cookielib.CookieJar() br.set_cookiejar(cookie_jar) # Acquire cookies by logging in with mechanize browser self.s.cookies.set('v_cookies', cookie_jar) def make_request(self, req_method, endpoint): url = self.api_root_url + endpoint method = getattr(self.s, method) response = method(url) return response </code></pre> <p>From another script I use this class to make requests like this:</p> <pre><code>from __future__ import absolute_import, division, print_function, unicode_literals from requester import Requester req = Requester(api_root) response = req.make_request('get', endpoint) </code></pre> <p>And I get this error from the <code>response = method(url)</code> line:</p> <pre><code>File "...\Anaconda2\lib\cookielib.py", line 1301, in _cookie_attrs self.non_word_re.search(cookie.value) and version &gt; 0): TypeError: expected string or buffer </code></pre> <p>When testing a simple get request with the code below, the line producing <code>r1</code> works but the line giving <code>r2</code> does not</p> <pre><code>def make_request(self, req_method, endpoint): url = self.api_root_url + endpoint cookies = self.s.cookies.get('v_cookies') r1 = requests.get(url, cookies=cookies) r2 = self.s.get(url) </code></pre> <p>How do I correctly use cookies with a requests.Session object?</p>
<p>You don't want to set the value of a single cookie in <code>cookies</code> to a <code>CookieJar</code>: it already <em>is</em> a <code>CookieJar</code>:</p> <pre><code>&gt;&gt;&gt; s = requests.Session() &gt;&gt;&gt; type(s.cookies) &lt;class 'requests.cookies.RequestsCookieJar'&gt; </code></pre> <p>You'll probably have a better time by simply setting <code>s.cookies</code> to your cookiejar:</p> <pre><code>def new_cookie(self): br = mechanize.Browser() cookie_jar = cookielib.CookieJar() br.set_cookiejar(cookie_jar) # Acquire cookies by logging in with mechanize browser self.s.cookies = cookie_jar </code></pre>
python|cookies|python-requests
1
165
9,839,606
Region finding based on VTK polylines
<p>I have the following domain that is made up of VTK poly lines -- each line starts and ends at an 'x', may have many points, and is assigned a left and right flag to denote the region on the left and right of that line, determined as if you were walking down the line from start to end.</p> <p><a href="https://i.stack.imgur.com/oI2OV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oI2OV.png" alt="the domain of interest" /></a></p> <p>For any random point <code>rp</code> in the domain, I need to work out which region it is in.</p> <p>So far, I have tried:</p> <ul> <li>Calculating the nearest vtk point to <code>rp</code>, finding the curve to which it belongs and then calculating if <code>rp</code> is on the left or right of the curve. This does not work for closed curves, such as the one around region <code>1</code> in the figure above, particularly if they are not continuous (i.e. a rectangle)</li> <li>Breaking the domain into buckets: initially, those buckets that contain a vtk point are filled with the region flags associated with that point; the remaining buckets are then filled based on their neighbours. The bucket in which <code>rp</code> falls then returns its set of flags. However, I am having trouble getting this to work when the bucket contains more than one region flag (i.e. when <code>rp</code> is close to a line).</li> </ul> <p>I assume that this is probably a solved problem, but I am not quite sure where to look. I have thought about the <a href="https://en.wikipedia.org/wiki/Point_in_polygon" rel="nofollow noreferrer">point-in-polygon</a> problem, but I am dealing with curves rather than polygons. Other ideas involve ray tracing, but it seems like that is more suited to 3D.</p> <p>Can anyone suggest an alternative, or a modification to what I have tried?</p>
<p>Every line on the perimeter of the sub-domain of interest (SDOI) must have the SDOI as one of its bordering domains. </p> <ul> <li>So you can flood fill (or expand a circle) in the domain that <code>rp</code> is in.</li> <li>Find the common domain neighboured by all these lines.</li> <li>That is your SDOI.</li> </ul> <p>UNLESS:</p> <p>Special case: <code>rp</code> is in a ring (i.e. domain 1). So we don't know if we are inside or outside the ring. We know we are in this 'special case' because only two domains are returned from the above method (1 or 2), but we don't know which one.</p> <p>Solution:</p> <ul> <li>Traverse a line from <code>rp</code> until it crosses the enclosing line, passing into another domain.</li> <li>Now do the flood fill in this domain, BUT exclude the line you crossed (and all lines from the first flood fill).</li> <li>The domain neighboured by all these lines is NOT your SDOI.</li> <li>So you can now deduce the SDOI from the first flood fill.</li> </ul>
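The inside/outside test in the special case is essentially the classic even-odd (ray casting) point-in-polygon rule mentioned in the question: count how many times a ray from the point crosses the closed curve. A minimal sketch, treating the closed curve as a polygon of sample points (the square coordinates are just illustrative data, not from the original domain):

```python
def point_in_ring(point, ring):
    # Even-odd ray casting: cast a ray in the +x direction and count
    # how many polygon edges it crosses; an odd count means "inside".
    x, y = point
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        # Edge straddles the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_ring((2, 2), square))  # True
print(point_in_ring((5, 2), square))  # False
```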
python|vtk|raytracing
0
166
25,975,690
Python while loop not breaking when conditions are met
<p>I'm just wondering why the loop doesn't break when it meets those conditions and filters over into my other functions? I fixed it by doing a while true loop and just breaking in each if statement, but I'd like to know what is wrong with doing this way.</p> <p>def main_entrance():</p> <pre><code>print "\n\tYou are in the main entrance. It is a large room with" print "\ttwo doors, one to the left and one to the right. There" print "\tis also a large windy stair case leading up to a second floor." print "\n\tWhat are you going to do?\n" print "\t #1 take the door on the left?" print "\t #2 take the door on the right?" print "\t #3 take the stairs to the second floor?" choice = 0 #This seems to be the part that isn't working as I would expect it to. # I have fixed it and have commented the fix out so that I can understand # why this way isn't working. #while True: while (choice != 1) or (choice != 2) or (choice != 3): try: choice = int (raw_input ('&gt; ')) if (choice == 1): door_one_dinning_room() #break (should not need this break if choice is == 1, 2, 3) elif (choice == 2): door_two_study() #break elif (choice == 3): stairs_to_landing() #there isn't anything in this function #but rather than breaking out from the program once it is # called, but somehow this while loop is still running. #break else: print "You must pick one!" except: print "Please pick a number from 1-3" continue </code></pre>
<p>Of course it doesn't break, your condition can never be false</p> <pre><code>(choice != 1) or (choice != 2) or (choice != 3) </code></pre> <p>Think about it for a minute, any selection of choice cannot make this expression false.</p> <p>choice = 1</p> <pre><code>False or True or True --&gt; True </code></pre> <p>choice = 2</p> <pre><code>True or False or True --&gt; True </code></pre> <p>choice = 3</p> <pre><code>True or True or False --&gt; True </code></pre> <p><strong>Solution</strong></p> <p>You need to <code>and</code> the conditions together</p> <pre><code>(choice != 1) and (choice != 2) and (choice != 3) </code></pre> <p>Or better yet</p> <pre><code>while choice not in [1,2,3] </code></pre>
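The truth table above can be verified directly; `original` below mirrors the condition from the question and `fixed` the corrected one:

```python
for choice in (1, 2, 3):
    # Original condition: true no matter which valid value is picked,
    # so the while loop can never exit.
    original = (choice != 1) or (choice != 2) or (choice != 3)
    # Corrected condition: false for every valid pick, so the loop exits.
    fixed = choice not in (1, 2, 3)
    print(choice, original, fixed)
```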
python
10
167
1,938,898
IronPython: Trouble building a WPF ShaderEffect
<p>I'm trying to build an extensible program where users, among other things, can build their own shader effects.</p> <p>Google searching got me this far;</p> <pre><code>class Test(ShaderEffect): inputProperty = ShaderEffect.RegisterPixelShaderSamplerProperty("Input", type(Test()), 0) </code></pre> <p>But I still get the error;</p> <blockquote> <p>TypeError: cannot access protected member RegisterPixelShaderSamplerProperty without a python subclass of ShaderEffect.</p> </blockquote> <p>Any help would be greatly appreciated.</p> <p>The best source on the net I could find <a href="http://ironpython.net/ironpython/documentation/dotnet/dotnet.html#accessing-protected-members-of-base-types" rel="nofollow noreferrer">is linked here</a></p>
<p>You will need to use reflection to access a protected member of a .NET class - you don't have a Python subclass where you can access such a member directly.</p> <p>Try something like this (I haven't tested it):</p> <pre><code>inputPropertyType = ShaderEffect.GetType().GetMember( 'RegisterPixelShaderSamplerProperty', BindingFlags.Instance | BindingFlags.NonPublic) inputProperty = inputPropertyType.GetValue(ShaderEffect, None) inputProperty("Input", type(Test()), 0) </code></pre>
wpf|ironpython|shader
0
168
2,176,511
How do I convert a string to a buffer in Python 3.1?
<p>I am attempting to pipe something to a <code>subprocess</code> using the following line:</p> <pre><code>p.communicate("insert into egg values ('egg');"); TypeError: must be bytes or buffer, not str </code></pre> <p>How can I convert the string to a buffer?</p>
<p>The correct answer is:</p> <pre><code>p.communicate(b"insert into egg values ('egg');"); </code></pre> <p>Note the leading b, telling you that it's a string of bytes, not a string of unicode characters. Also, if you are reading this from a file:</p> <pre><code>value = open('thefile', 'rt').read() p.communicate(value); </code></pre> <p>Then change that to:</p> <pre><code>value = open('thefile', 'rb').read() p.communicate(value); </code></pre> <p>Again, note the 'b'. Now if your <code>value</code> is a string you get from an API that only returns strings no matter what, <em>then</em> you need to encode it.</p> <pre><code>p.communicate(value.encode('latin-1')) </code></pre> <p>Latin-1, because unlike ASCII it supports all 256 bytes. But that said, having binary data in unicode is asking for trouble. It's better if you can make it binary from the start.</p>
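A self-contained round trip (Python 3) showing that `communicate` wants bytes when the pipes are binary; the child command here is just an echo process chosen for illustration:

```python
import subprocess
import sys

# Child process that copies stdin to stdout unchanged
child = [sys.executable, '-c',
         'import sys; sys.stdout.write(sys.stdin.read())']

p = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# Passing a str here would raise TypeError; bytes work.
out, _ = p.communicate(b"insert into egg values ('egg');")
print(out)  # b"insert into egg values ('egg');"
```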
python|python-3.x
12
169
1,667,341
Python: Removing characters from beginnings of sequences in fasta format
<p>I have sequences in fasta format that contain primers of 17 bp at the beginning of the sequences. The primers sometimes have mismatches. I therefore want to remove the first 17 chars of each sequence, but not of the fasta header.</p> <p>The sequences look like this:</p> <pre><code>&gt; name_name_number_etc SEQUENCEFOLLOWSHERE &gt; name_number_etc SEQUENCEFOLLOWSHERE &gt; name_name_number_etc SEQUENCEFOLLOWSHERE </code></pre> <p>How can I do this in Python?</p> <p>Thanks! Jon</p>
<p>If I understand correctly, you have to remove the primer only from the first 17 characters of a potentially multiline sequence. What you ask is a bit more difficult. Yes, a simple solution exists, but it can fail in some situations.</p> <p>My suggestion is: use <a href="http://biopython.org" rel="nofollow noreferrer">Biopython</a> to perform the parsing of the FASTA file. Straight from the tutorial</p> <pre><code>from Bio import SeqIO handle = open("ls_orchid.fasta") for seq_record in SeqIO.parse(handle, "fasta") : print seq_record.id print repr(seq_record.seq) print len(seq_record) handle.close() </code></pre> <p>Then rewrite the sequence down with the first 17 letters deleted. I don't have an installation of biopython on my current machine, but if you take a look at the tutorial, it won't take more than 15 lines of code in total.</p> <p>If you want to go hardcore, and do it manually, you have to do something like this (from the first poster, modified)</p> <pre><code>f = open('sequence.fsa') first_line = False for line in f.xreadlines(): if line[0] == "&gt;": first_line=True print line, else: if first_line: print line[17:], else: print line, first_line = False </code></pre>
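The same header-aware trimming can be wrapped in a small function for testing; the record below is made-up illustrative data, and the sketch assumes the 17 bp primer sits entirely on the first sequence line after each header (as in the manual version above):

```python
def strip_primer(fasta_text, primer_len=17):
    # Remove the first primer_len characters of the sequence line that
    # directly follows each '>' header; leave headers and any further
    # sequence lines untouched.
    out = []
    prev_was_header = False
    for line in fasta_text.splitlines():
        if line.startswith('>'):
            out.append(line)
            prev_was_header = True
        else:
            out.append(line[primer_len:] if prev_was_header else line)
            prev_was_header = False
    return '\n'.join(out)

# A fake record: 17 'A's standing in for the primer, then the sequence
record = '>seq1\n' + 'A' * 17 + 'GATTACA\nCCCC'
print(strip_primer(record))
```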
python|extract|character|sequences|fasta
1
170
32,274,199
Python ternary invalid syntax
<p>Writing a very simple function to mask all but the last 4 digits of a string with "#" characters. This is what I have so far:</p> <pre><code>def maskify(cc): res = "#" * (len(cc) - 4) if len(cc) &gt; 4 else return cc res += cc[len(cc) - 4:] return res print(maskify("12355644")) </code></pre> <p>If I write out the ifs as statements, instead of using them as ternary operators, the function works fine. If I try to do the above, I get an invalid syntax error on <code>res = "#" * (len(cc) - 4) if len(cc) &gt; 4 else return cc</code> The caret is pointing to the n in return. If I rewrite the above line to exclude the else part, then the caret points to the > 4.</p> <p>What am I missing here? The program works fine using a traditional if-else method, but with the ternary all I'm seeing is an expression. Replacing len(cc) with a variable doesn't change anything either.</p>
<p>You don't need a ternary expression at all here, just slice and use the length minus 4 times <code>"#'</code> to generate the prefix:</p> <pre><code>def maskify(cc): return "#" * (len(cc) - 4) + cc[-4:] </code></pre> <p>If the <code>len(cc) - 4</code> value is 0 or smaller the multiplication produces an empty string.</p> <p>Demo:</p> <pre><code>&gt;&gt;&gt; def maskify(cc): ... return "#" * (len(cc) - 4) + cc[-4:] ... &gt;&gt;&gt; maskify("12355644") '####5644' &gt;&gt;&gt; maskify("355644") '##5644' &gt;&gt;&gt; maskify("5644") '5644' &gt;&gt;&gt; maskify("44") '44' </code></pre> <p>Your syntax error stems from your use of <code>return</code> inside an expression. <code>return</code> is a <em>statement</em>, and you cannot use statements in expressions. Statements have places where expressions fit in, not the other way around.</p> <p>If you need to return when a condition is met, you have no option but to use statements (<code>if</code> followed by <code>return</code>):</p> <pre><code>if len(cc) &lt; 4: return cc res = "#" * (len(cc) - 4) res += cc[-4:] return res </code></pre> <p>but the <code>if</code> test is not really needed.</p>
python|ternary-operator
3
171
32,334,516
ValueError: invalid literal for float(): Reading in Latitude and Longitude Data
<p>Given the following script to read in latitude, longitude, and magnitude data:</p> <pre><code>#!/usr/bin/env python # Read in latitudes and longitudes eq_data = open('lat_long') lats, lons = [], [] for index, line in enumerate(eq_data.readlines()): if index &gt; 0: lats.append(float(line.split(',')[0])) lons.append(float(line.split(',')[1])) #Build the basemap from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt import numpy as np antmap = Basemap(projection='spstere', boundinglat=-20, lon_0=-60, resolution='f') antmap.drawcoastlines(color='0.50', linewidth=0.25) antmap.fillcontinents(color='0.95') x,y = antmap(lons, lats) antmap.plot(x,y, 'r^', markersize=4) plt.show() </code></pre> <p>I receive the following error when attempting to read in the latitudes, longitudes, and magnitudes:</p> <pre><code>Traceback (most recent call last): File "./basic_eqplot.py", line 10, in &lt;module&gt; lats.append(float(line.split(',')[0])) ValueError: invalid literal for float(): -18.381 -172.320 5.9 </code></pre> <p>The input file looks something like:</p> <pre><code>-14.990,167.460,5.6 -18.381,-172.320,5.9 -33.939,-71.868,5.9 -22.742,-63.571,5.9 -2.952,129.219,5.7 </code></pre> <p>Any ideas for why this would cause a hiccup?</p>
<p>It appears you have one or more lines of corrupt data in your input file. Your traceback says as much:</p> <pre><code>ValueError: invalid literal for float(): -18.381 -172.320 5.9 </code></pre> <p>Specifically what is happening:</p> <ol> <li>The line <code>-18.381 -172.320 5.9</code> is read in from eq_data.</li> <li><code>split(',')</code> is called on the string <code>"-18.381 -172.320 5.9"</code>. Since there is no comma in the string, the <code>split</code> method returns a list with a single element, the original string.</li> <li>You attempt to parse the first element of the returned array as a <code>float</code>. The string <code>"-18.381 -172.320 5.9"</code> cannot be parsed as a float and a <code>ValueError</code> is raised.</li> </ol> <p>To fix this issue, double check the format of your input data. You might also try surrounding this code snippet in a try/except block to give you a bit more useful information as to the specific source of the problem:</p> <pre><code>for index, line in enumerate(eq_data.readlines()): if index &gt; 0: try: lats.append(float(line.split(',')[0])) lons.append(float(line.split(',')[1])) except ValueError: raise ValueError("Unable to parse input file line #%d: '%s'" % (index + 1, line)) </code></pre>
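If lines with mixed delimiters are actually expected in the input rather than being corrupt, a more tolerant parser can split on either commas or runs of whitespace. This is a sketch, not part of the original script:

```python
import re

def parse_line(line):
    # Accept both '-14.990,167.460,5.6' and '-18.381 -172.320 5.9'
    parts = re.split(r'[,\s]+', line.strip())
    lat, lon, mag = (float(p) for p in parts[:3])
    return lat, lon, mag

print(parse_line('-14.990,167.460,5.6'))   # (-14.99, 167.46, 5.6)
print(parse_line('-18.381 -172.320 5.9'))  # (-18.381, -172.32, 5.9)
```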
python|matplotlib
5
172
13,953,639
check carriage return is there in a given string
<p>I'm reading some lines from a file and checking whether each line has a Windows-style CRLF or not. If either '\n' or '\r' is absent in any line, it has to report an error. I tried the code below, but even if a line doesn't have '\r', it does not report any error.</p> <pre><code>Open_file = open(File_Name,'r').readlines() while Loop_Counter!= Last_Line: Line_Read = Open_file[Loop_Counter] if('\r\n' in Line_Read): pass else: print Loop_Counter </code></pre> <p>Thank you</p>
<p>This isn't working because <code>Loop_Counter</code> is never adjusted at all; whatever the initial value is, it's not changing and the <code>while</code> loop either runs indefinitely or never passes. Your code is pretty unclear here; I'm not sure why you'd structure it that way.</p> <p>What you're suggesting would be easier to do like this:</p> <pre class="lang-py prettyprint-override"><code>infile = open(filename, 'rb') for index, line in enumerate(infile.readlines()): if line[-2:] != '\r\n': print index </code></pre> <p>The <code>'rb'</code> argument is necessary to make sure the newlines are read as <code>\r\n</code> and not just as <code>\n</code>.</p>
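<p>The snippet above is Python 2 (binary mode yields <code>str</code>, and <code>print</code> is a statement). A possible Python 3 sketch of the same check compares against bytes literals instead:</p>

```python
# Python 3 sketch: read the file in binary mode so line endings survive,
# then flag the index of every line that does not end in CRLF.
def lines_missing_crlf(data):
    bad = []
    for index, line in enumerate(data.splitlines(keepends=True)):
        if not line.endswith(b'\r\n'):
            bad.append(index)
    return bad

# in real use: data = open(filename, 'rb').read()
sample = b"first\r\nsecond\nthird\r\n"
print(lines_missing_crlf(sample))   # [1]
```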
python|python-2.7
8
173
34,429,694
Joining data sets in Spark
<p>What are the different ways to join data in Spark?</p> <p>Hadoop MapReduce provides distributed cache, map-side join and reduce-side join. What about Spark?</p> <p>Also, it would be great if you could provide simple Scala and Python code to join data sets in Spark.</p>
<p>Spark has two fundamental distributed data abstractions: RDDs and DataFrames.</p> <p>A special case of RDD, the pair RDD (an RDD of key/value tuples), can be joined on its keys. This is available using <code>PairRDDFunctions.join()</code>. See: <a href="https://spark.apache.org/docs/1.5.2/api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions" rel="nofollow">https://spark.apache.org/docs/1.5.2/api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions</a></p> <p>DataFrames also support SQL-like joins. See: <a href="http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.DataFrame" rel="nofollow">http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.DataFrame</a></p>
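<p>The calls above need a running Spark context, but the key-based join semantics are easy to illustrate in plain Python. This sketch computes what <code>left.join(right)</code> returns on two pair RDDs: an inner join on keys, with one output row per matching value pair.</p>

```python
from collections import defaultdict

# What PairRDDFunctions.join() computes, minus the distribution:
# an inner join of two key/value collections on their keys.
def pair_join(left, right):
    buckets = defaultdict(list)
    for key, value in right:
        buckets[key].append(value)
    # every (key, lv) on the left pairs with every right value on the same key
    return [(key, (lv, rv)) for key, lv in left for rv in buckets[key]]

left = [("a", 1), ("b", 2), ("a", 3)]
right = [("a", "x"), ("c", "y")]
print(pair_join(left, right))   # [('a', (1, 'x')), ('a', (3, 'x'))]
```

<p>In actual Spark code this would be <code>left_rdd.join(right_rdd)</code> on pair RDDs, or <code>df1.join(df2, on='key')</code> for DataFrames (the names here are illustrative).</p>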
python|scala|apache-spark
1
174
41,890,311
Dividing into sentences based on pattern
<p>I would like to divide a text into sentences based on a delimiter in Python. However, I do not want to split on decimal points between numbers, or on a comma between numbers. How do we ignore those?</p> <p>For example, I have a text like below.</p> <pre><code>I am xyz.I have 44.44$. I would like, to give 44,44 cents to my friend. </code></pre> <p>The sentences have to be</p> <pre><code>I am xyz I have 44.44$ I would like to give 44,44 cents to my friend </code></pre> <p>Could you please help me with the regular expression? I am sorry if this question has already been asked before. I could not find it.</p> <p>Thank you</p>
<p>This works for your example, although there's a trailing full stop (period) on the last part if that matters.</p> <pre><code>import re s = 'I am xyz. I have 44.44$. I would like, to give 44,44 cents to my friend.' for part in re.split('[.,]\s+', s): print(part) </code></pre> <p><strong>Output</strong></p> <pre class="lang-none prettyprint-override"><code>I am xyz I have 44.44$ I would like to give 44,44 cents to my friend. </code></pre> <hr> <p>A variation of Wiktor's expression, <code>\s*[.,](?!\d)\s*</code>, will work for your new example (the final <code>\s*</code> rather than <code>\s</code> is needed because there is no space after the period in <code>xyz.I</code>):</p> <pre class="lang-none prettyprint-override"><code>I am xyz.I have 44.44$. I would like, to give 44,44 cents to my friend. </code></pre> <p>Breaking this down:</p> <ul> <li><code>\s*</code> will match 0 to many whitespace characters.</li> <li><code>[.,]</code> will match either a <code>,</code> or a <code>.</code> character.</li> <li><code>(?!\d)</code> will cause the match to be discarded if a digit is matched at this point. This is necessary to avoid splitting within numbers.</li> <li><code>\s*</code> will again match 0 to many whitespace characters, so a delimiter with no space after it still splits; a side effect is a trailing empty string when the text ends with a delimiter, which you can filter out.</li> </ul> <p>Note that it will still fail for sentences like "I am 22.10 years ago I was 12.", though I don't think there's any way to get around that using regular expressions alone.</p>
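<p>A runnable check of the digit-aware split, using <code>\s*</code> at the end so a delimiter with no following space (as in <code>xyz.I</code>) also splits; empty trailing pieces are filtered out:</p>

```python
import re

s = "I am xyz.I have 44.44$. I would like, to give 44,44 cents to my friend."

# [.,] only splits when not followed by a digit, so 44.44 and 44,44 survive
parts = [p for p in re.split(r'\s*[.,](?!\d)\s*', s) if p]
print(parts)
# ['I am xyz', 'I have 44.44$', 'I would like',
#  'to give 44,44 cents to my friend']
```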
python|regex
4
175
41,810,021
Downloading file using multithreading in python
<p>I am trying to put multiple files (around 25k) into a zip file using multithreading in Python CGI. I have written the script below, but somehow the response I get has content length 0 and there is no data in the response. This is my first time using multithreading in Python. Is there anything I am missing in the code? Does the output get printed even before the data is posted?</p> <p>Any help will be appreciated.</p> <p>Here is my code:</p> <pre><code>b = StringIO() z = zipfile.ZipFile(b, 'w', zipfile.ZIP_DEFLATED) def read_file(link): fname = link.split('/') fname = fname[-1] z.write(link, fname) if __name__ == '__main__': form = cgi.FieldStorage() fileLinks = form.getvalue("fileLink") p = Pool(10) p.map(read_file, fileLinks) p.close() p.join() z.close() zipFilename = "DataFiles-" + str(time.time()) + ".zip" length = b.tell() sys.stdout.write( HEADERS % ('application/zip', zipFilename, zipFilename, length) ) b.seek(0) sys.stdout.write(b.read()) b.close() </code></pre> <p>Sequential version of the same code:</p> <pre><code> for fileLink in fileLinks: fname = fileLink.split('/') filename = fname[-1] z.write(fileLink, filename) z.close() </code></pre>
<p>The problem should be that <code>ZipFile.write()</code> (<code>ZipFile</code> in general) is not thread safe.</p> <p>You must somehow serialize thread access to the zip file. This is one way to do it (in Python 3):</p> <pre><code>ziplock = threading.Lock() def read_file(link): fname = link.split('/') fname = fname[-1] with ziplock: z.write(link, fname) </code></pre> <p>There should be no advantage to doing it that way because what the lock is effectively doing is serializing the zip file creation.</p> <p>Some parallelization may be achieved with this version, which reads the file contents before adding them to the zip file:</p> <pre><code>def read_file(link): fname = link.split('/') fname = fname[-1] # the file is read in parallel contents = open(link).read() with ziplock: # writes to the zip file are serialized z.writestr(fname, contents) </code></pre> <p>Yet, if the files reside on the same file system, it is likely that the reads will, in effect, act as if they had been serialized by the operating system.</p> <p>Because these are files, the possible target for parallelization would be the CPU-bound part of the process, which is the compression, and that doesn't seem possible with the <em>zip</em> format (because a <code>zip</code> file behaves like a directory, so every <code>write()</code> must leave the state ready to produce a complete archive upon <code>close()</code>). </p> <p>If you can use a different compression format, then parallelization would work without locks using <em>gzip</em> for compression and <em>tar</em> (<code>tarfile</code>) as the archive format, as each file could be read and compressed in parallel, and only the <em>tar</em> concatenation would be done serially (the <code>.tar.gz</code> or <code>.tgz</code> archive format).</p>
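<p>A sketch of that last idea: gzip each file independently in parallel (zlib releases the GIL during compression, so threads genuinely overlap), and only the tar assembly is serial. Here the archive holds per-file <code>.gz</code> members inside a plain tar, and the pool size and names are illustrative.</p>

```python
import gzip
import io
import tarfile
import time
from multiprocessing.pool import ThreadPool

def compress_one(item):
    name, payload = item
    # CPU-bound gzip compression of one file; safe to run in parallel
    return name + '.gz', gzip.compress(payload)

def build_archive(files, fileobj):
    # files: iterable of (name, bytes) pairs
    with ThreadPool(4) as pool:
        compressed = pool.map(compress_one, files)
    # only this loop must be serial: appending members to the tar
    with tarfile.open(fileobj=fileobj, mode='w') as tar:
        for name, blob in compressed:
            info = tarfile.TarInfo(name=name)
            info.size = len(blob)
            info.mtime = int(time.time())
            tar.addfile(info, io.BytesIO(blob))

buf = io.BytesIO()
build_archive([('x.txt', b'hello'), ('y.txt', b'world')], buf)
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    names = tar.getnames()
print(names)   # ['x.txt.gz', 'y.txt.gz']
```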
python|multithreading|python-2.7|cgi|python-multiprocessing
1
176
47,205,568
Mitmproxy, push own WebSocket message
<p>I inspect a HTTPS WebSocket traffic with <strong>Mitmproxy</strong>. Currently I can read/edit WS messages with:</p> <pre><code>class Intercept: def websocket_message(self, flow): print(flow.messages[-1]) def start(): return Intercept() </code></pre> <p>.. as attached script to Mitmproxy.</p> <p>How do I push/inject my own message to the client? Not edit existing one, but add a new message.</p>
<p>You can do this with <code>inject.websocket</code>:</p> <pre class="lang-py prettyprint-override"><code>from mitmproxy import ctx class Intercept: def websocket_message(self, flow): print(flow.messages[-1]) to_client = True ctx.master.commands.call(&quot;inject.websocket&quot;, flow, to_client, b&quot;Hello World!&quot;, True) def start(): return Intercept() </code></pre> <p>There are <a href="https://docs.mitmproxy.org/stable/addons-examples/#websocket-inject-message" rel="nofollow noreferrer">more examples</a> in the documentation.</p>
python|websocket|mitmproxy
1
177
57,532,917
Librosa Constant Q Transform (CQT) contains defects at the beginning and ending of the spectrogram
<p>Consider the following code</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from librosa import cqt s = np.linspace(0,1,44100) x = np.sin(2*np.pi*1000*s) fmin=500 cq_lib = cqt(x,sr=44100, fmin=fmin, n_bins=40) plt.imshow(abs(cq_lib),aspect='auto', origin='lower') plt.xlabel('Time Steps') plt.ylabel('Freq bins') </code></pre> <p>It will give a spectrogram like this</p> <p><a href="https://i.stack.imgur.com/132za.png" rel="noreferrer"><img src="https://i.stack.imgur.com/132za.png" alt="enter image description here"></a></p> <p>When you look closely at the beginning and the ending of the spectrogram, you can see that there's some defects there.</p> <p>When plotting out only the first and the last time step, you can see the frequency is not correct.</p> <h2>First Frame</h2> <pre><code>plt.plot(abs(cq_lib)[:,0]) plt.ylabel('Amplitude') plt.xlabel('Freq bins') plt.tick_params(labelsize=16) </code></pre> <p><a href="https://i.stack.imgur.com/lhIKB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lhIKB.png" alt="enter image description here"></a></p> <h2>Last and 2nd Last frame comparison</h2> <pre><code>plt.plot(abs(cq_lib)[:,-1]) plt.plot(abs(cq_lib)[:,-2]) plt.legend(['last step', '2nd last step'], fontsize=16) plt.ylabel('Amplitude') plt.xlabel('Freq bins') plt.tick_params(labelsize=16) </code></pre> <p><a href="https://i.stack.imgur.com/Mvneq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Mvneq.png" alt="enter image description here"></a></p> <h2>My attempt to solve it</h2> <p>According to my knowledge, it should be due to padding and putting the <code>stft</code> window at the center. But it seems <code>cqt</code> doesn't support the argument <code>center=False</code>.</p> <pre><code>cq_lib = cqt(x,sr=44100, fmin=fmin, n_bins=40,center=False) </code></pre> <blockquote> <p>TypeError: cqt() got an unexpected keyword argument 'center'</p> </blockquote> <p>Am I doing anything wrong? 
How to make <code>center=False</code> in <code>cqt</code>?</p>
<p>I think you might want to try out <code>pad_mode</code> which is supported in <a href="https://librosa.github.io/librosa/generated/librosa.core.cqt.html" rel="nofollow noreferrer">cqt</a>. If you checkout the np.pad <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html" rel="nofollow noreferrer">documentation</a>, you can see available options (or see the end of this post). With the <code>wrap</code> option, you get a result like this, though I suspect the phase is a mess, so you should make sure this meets your needs. If you are always generating your own signal, you could trying using the <code>&lt;function&gt;</code> instead of one of the available options.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from librosa import cqt s = np.linspace(0,1,44100) x = np.sin(2*np.pi*1000*s) fmin=500 cq_lib = cqt(x,sr=44100, fmin=fmin, n_bins=40, pad_mode='wrap') plt.imshow(abs(cq_lib),aspect='auto', origin='lower') plt.xlabel('Time Steps') plt.ylabel('Freq bins') </code></pre> <p><a href="https://i.stack.imgur.com/zEuTu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zEuTu.png" alt="enter image description here"></a></p> <p>If you look at the first frame and last two frames you can see it now looks much better. 
I tried this with librosa 0.6.3 and 0.7.0 and the results were the same.</p> <p><a href="https://i.stack.imgur.com/fG2x6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fG2x6.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/UWTED.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UWTED.png" alt="enter image description here"></a></p> <p>Try some of the options and hopefully you can find one of the padding options that will do the trick: <code>np.pad</code> <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html" rel="nofollow noreferrer">options</a>: <code>‘constant’, ‘edge’, ‘linear_ramp’, ‘maximum’, ‘mean’,‘median’,‘minimum’, ‘reflect’, ‘symmetric’, ‘wrap’, ‘empty’, &lt;function&gt;</code></p>
python-3.x|signal-processing|librosa
4
178
57,327,277
ImportError when importing tensorflow
<p>I've recently installed TensorFlow using <code>pip install --upgrade tensorflow</code> then when I import it, I get the following error:</p> <pre><code>ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed. Failed to load the native TensorFlow runtime. </code></pre>
<p>Try this:</p> <pre><code>pip install setuptools </code></pre> <p>If that doesn't change anything, uninstall TensorFlow and try this instead (if you have a conda env):</p> <pre><code>conda install tensorflow </code></pre>
python|tensorflow
0
179
11,626,793
How to run an attribute value through a regular expression after extracting via BeautifulSoup?
<p>I have a URL that I want to parse a part of, particularly the widgetid:</p> <pre><code>&lt;a href="http://www.somesite.com/process.asp?widgetid=4530"&gt;Widgets Rock!&lt;/a&gt; </code></pre> <p>I've written this Python (I'm a bit of a newbie at Python -- version is 2.7):</p> <pre><code>import re from bs4 import BeautifulSoup doc = open('c:\Python27\some_xml_file.txt') soup = BeautifulSoup(doc) links = soup.findAll('a') # debugging statements print type(links[7]) # output: &lt;class 'bs4.element.Tag'&gt; print links[7] # output: &lt;a href="http://www.somesite.com/process.asp?widgetid=4530"&gt;Widgets Rock!&lt;/a&gt; theURL = links[7].attrs['href'] print theURL # output: http://www.somesite.com/process.asp?widgetid=4530 print type(theURL) # output: &lt;type 'unicode'&gt; is_widget_url = re.compile('[0-9]') print is_widget_url.match(theURL) # output: None (I know this isn't the correct regex but I'd think it # would match if there's any number in there!) </code></pre> <p>I think I'm missing something with the regular expression (or my understanding of how to use them) but I can't figure it out.</p> <p>Thanks for your help! </p>
<p>This question doesn't have anything to do with BeautifulSoup.</p> <p>The problem is that, as <a href="http://docs.python.org/library/re.html#re.match" rel="noreferrer">the documentation explains</a>, <code>match</code> only matches at the beginning of the string. Since the digits you want to find are at the end of the string, it returns nothing.</p> <p>To match on a digit anywhere, use <code>search</code> - and you probably want to use the <code>\d</code> entity for digits.</p> <pre><code>matches = re.search(r'\d+', theURL) </code></pre>
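<p>To make the difference concrete: the URL's only digits sit near the end, so an anchored <code>match</code> finds nothing while <code>search</code> succeeds.</p>

```python
import re

url = "http://www.somesite.com/process.asp?widgetid=4530"

# match() is anchored at position 0, where 'h' is not a digit
assert re.match(r'\d+', url) is None

# search() scans the whole string for the first run of digits
assert re.search(r'\d+', url).group() == '4530'

# capturing the id via its query-parameter name is a bit more robust
m = re.search(r'widgetid=(\d+)', url)
print(m.group(1))   # 4530
```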
python|regex|url|unicode|beautifulsoup
5
180
11,681,014
Python Detect Alert
<p>I am trying to make my python script detect an alert box on a page</p> <pre><code>import urllib2 url = raw_input("Please enter your url: ") if urllib2.urlopen(url).read().find("&lt;script&gt;alert('alert');&lt;/script&gt;") == 0: print "Alert Detected!" </code></pre> <p>How can I make it detect the alert?</p>
<p>Change <code>urllib2.urlopen(url).read().find("&lt;script&gt;alert('alert');&lt;/script&gt;") == 0</code> to <code>urllib2.urlopen(url).read().find("&lt;script&gt;alert('alert');&lt;/script&gt;") &gt;= 0</code>. <code>str.find</code> returns the index of the first occurrence (or <code>-1</code> if the substring is absent), so comparing against <code>0</code> only succeeds when the script tag is at the very start of the page. An even clearer test is <code>"&lt;script&gt;alert('alert');&lt;/script&gt;" in urllib2.urlopen(url).read()</code>.</p>
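<p>The reason the original comparison never fires: <code>str.find</code> returns the index of the first occurrence (or <code>-1</code> when absent), and the script tag is rarely at index 0. A quick demonstration on a made-up page:</p>

```python
html = "<html><body><script>alert('alert');</script></body></html>"
needle = "<script>alert('alert');</script>"

# find() returns a position, not a boolean; here the tag starts at 12
print(html.find(needle))             # 12
assert html.find(needle) >= 0        # the fixed test from the answer

# a membership test is the clearest way to express this check
assert needle in html
assert "no-such-script" not in html
```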
python|urllib2
1
181
33,906,408
Count only the words in a text file Python
<p>I have to count all the words in a file and create a histogram of the words. I am using the following python code.</p> <pre><code>for word in re.split('[,. ]',f2.read()): if word not in histogram: histogram[word] = 1 else: histogram[word]+=1 </code></pre> <p>f2 is the file I am reading.I tried to parse the file by multiple delimiters but it still does not work. It counts all strings in the file and makes a histogram, but I only want words. I get results like this:</p> <pre><code>1-1-3: 3 </code></pre> <p>where "1-1-3" is a string that occurs 3 times. How do I check so that only actual words are counted? casing does not matter. I also need to repeat this question but for two word sequences, so an output that looks like:</p> <pre><code>and the: 4 </code></pre> <p>where "and the" is a two word sequence that appears 4 times. How would I group two word sequences together for counting?</p>
<pre><code>from collections import Counter from nltk.tokenize import RegexpTokenizer from nltk import bigrams from string import punctuation # preparatory stuff &gt;&gt;&gt; tokenizer = RegexpTokenizer(r'[^\W\d]+') &gt;&gt;&gt; my_string = "this is my input string. 12345 1-2-3-4-5. this is my input" # single words &gt;&gt;&gt; tokens = tokenizer.tokenize(my_string) &gt;&gt;&gt; Counter(tokens) Counter({'this': 2, 'input': 2, 'is': 2, 'my': 2, 'string': 1}) # word pairs &gt;&gt;&gt; nltk_bigrams = bigrams(my_string.split()) &gt;&gt;&gt; bigrams_list = [' '.join(x).strip(punctuation) for x in list(nltk_bigrams)] &gt;&gt;&gt; Counter([x for x in bigrams_list if x.replace(' ','').isalpha()]) Counter({'is my': 2, 'this is': 2, 'my input': 2, 'input string': 1}) </code></pre>
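<p>If pulling in NLTK is not an option, a standard-library sketch gives similar counts. Here a 'word' is just a run of letters (an assumption; it also drops tokens like <code>1-1-3</code>, as the question wants), and bigrams come from zipping the word list with itself shifted by one:</p>

```python
import re
from collections import Counter

text = "The cat and the dog. 1-1-3 1-1-3 1-1-3 and the bird and the and the end."

# letters-only tokens: '1-1-3' and punctuation disappear
words = re.findall(r"[a-z]+", text.lower())
unigrams = Counter(words)

# two-word sequences: pair each word with its successor
bigrams = Counter(" ".join(pair) for pair in zip(words, words[1:]))

print(unigrams["the"], unigrams["and"])   # 5 4
print(bigrams["and the"])                 # 4
```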
python|file|histogram
1
182
47,034,655
Python extract elements from Json string
<p>I have a Json string from which I'm able to extract few components like <code>formatted_address</code>,<code>lat</code>,<code>lng</code>, but I'm unable to extract feature(values) of other components like <strong>intersection, political, country, administrative_area_level_1 , administrative_area_level_2 , administrative_area_level_3 , administrative_area_level_4, administrative_area_level_5, colloquial_area , locality , ward, neighborhood, premise, subpremise etc</strong> which is under <code>long_name</code> I'm expecting datatable like</p> <pre><code>formatted_address px_val py_val political country administrative_area_level_1 .. .. Satya Niwas, Kanti Nagar.. 19.1096591 72.8674712 Kanti Nagar,JB Nagar India maharashtra .. .. 82, Bamanpuri, Ajit Nagar.. 19.109749 72.867249 Bamanpuri India maharashtra .. .. . . . </code></pre> <p>Here is the sample JSON string </p> <pre><code>{'results': [{'address_components': [{'long_name': 'Satya Niwas', 'short_name': 'Satya Niwas', 'types': ['establishment', 'point_of_interest', 'premise']}, {'long_name': 'Kanti Nagar', 'short_name': 'Kanti Nagar', 'types': ['political', 'sublocality', 'sublocality_level_3']}, {'long_name': 'J B Nagar', 'short_name': 'J B Nagar', 'types': ['political', 'sublocality', 'sublocality_level_2']}, {'long_name': 'Andheri East', 'short_name': 'Andheri East', 'types': ['political', 'sublocality', 'sublocality_level_1']}, {'long_name': 'Mumbai', 'short_name': 'Mumbai', 'types': ['locality', 'political']}, {'long_name': 'Mumbai Suburban', 'short_name': 'Mumbai Suburban', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Maharashtra', 'short_name': 'MH', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'India', 'short_name': 'IN', 'types': ['country', 'political']}, {'long_name': '400059', 'short_name': '400059', 'types': ['postal_code']}], 'formatted_address': 'Satya Niwas, Kanti Nagar, J B Nagar, Andheri East, Mumbai, Maharashtra 400059, India', 'geometry': 
{'bounds': {'northeast': {'lat': 19.1097923, 'lng': 72.8675306}, 'southwest': {'lat': 19.1095784, 'lng': 72.8673391}}, 'location': {'lat': 19.1096591, 'lng': 72.8674712}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 19.1110343302915, 'lng': 72.8687838302915}, 'southwest': {'lat': 19.1083363697085, 'lng': 72.86608586970848}}}, 'place_id': 'ChIJ4UsP5DjI5zsR8hgwhHo9wEk', 'types': ['establishment', 'point_of_interest', 'premise']}, {'address_components': [{'long_name': '82', 'short_name': '82', 'types': ['premise']}, {'long_name': 'Bamanpuri', 'short_name': 'Bamanpuri', 'types': ['neighborhood', 'political']}, {'long_name': 'Ajit Nagar', 'short_name': 'Ajit Nagar', 'types': ['political', 'sublocality', 'sublocality_level_3']}, {'long_name': 'J B Nagar', 'short_name': 'J B Nagar', 'types': ['political', 'sublocality', 'sublocality_level_2']}, {'long_name': 'Andheri East', 'short_name': 'Andheri East', 'types': ['political', 'sublocality', 'sublocality_level_1']}, {'long_name': 'Mumbai', 'short_name': 'Mumbai', 'types': ['locality', 'political']}, {'long_name': 'Mumbai Suburban', 'short_name': 'Mumbai Suburban', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Maharashtra', 'short_name': 'MH', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'India', 'short_name': 'IN', 'types': ['country', 'political']}, {'long_name': '400053', 'short_name': '400053', 'types': ['postal_code']}], 'formatted_address': '82, Bamanpuri, Ajit Nagar, J B Nagar, Andheri East, Mumbai, Maharashtra 400053, India', 'geometry': {'location': {'lat': 19.109749, 'lng': 72.867249}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 19.1110979802915, 'lng': 72.8685979802915}, 'southwest': {'lat': 19.1084000197085, 'lng': 72.86590001970849}}}, 'place_id': 'ChIJqYip4zjI5zsR0Yg8bdXQX3o', 'types': ['street_address']}, {'address_components': [{'long_name': 'Todi Building', 'short_name': 'Todi Building', 'types': ['premise']}, {'long_name': 
'Sheth Bhavanidas Benani Marg', 'short_name': 'Sheth Bhavanidas Benani Marg', 'types': ['route']}, {'long_name': 'Kanti Nagar', 'short_name': 'Kanti Nagar', 'types': ['political', 'sublocality', 'sublocality_level_3']}, {'long_name': 'J B Nagar', 'short_name': 'J B Nagar', 'types': ['political', 'sublocality', 'sublocality_level_2']}, {'long_name': 'Andheri East', 'short_name': 'Andheri East', 'types': ['political', 'sublocality', 'sublocality_level_1']}, {'long_name': 'Mumbai', 'short_name': 'Mumbai', 'types': ['locality', 'political']}, {'long_name': 'Mumbai Suburban', 'short_name': 'Mumbai Suburban', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Maharashtra', 'short_name': 'MH', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'India', 'short_name': 'IN', 'types': ['country', 'political']}, {'long_name': '400059', 'short_name': '400059', 'types': ['postal_code']}], 'formatted_address': 'Todi Building, Sheth Bhavanidas Benani Marg, Kanti Nagar, J B Nagar, Andheri East, Mumbai, Maharashtra 400059, India', 'geometry': {'location': {'lat': 19.1098265, 'lng': 72.86778869999999}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 19.1111754802915, 'lng': 72.86913768029149}, 'southwest': {'lat': 19.1084775197085, 'lng': 72.86643971970848}}}, 'place_id': 'ChIJo5bq3zjI5zsR2hRaNQF3xd0', 'types': ['premise']}, {'address_components': [{'long_name': 'KASI APARTMENTS', 'short_name': 'KASI APARTMENTS', 'types': ['establishment', 'point_of_interest', 'premise']}, {'long_name': 'Shriniwas Bagarka Road', 'short_name': 'Shriniwas Bagarka Rd', 'types': ['route']}, {'long_name': 'Bamanpuri', 'short_name': 'Bamanpuri', 'types': ['neighborhood', 'political']}, {'long_name': 'Kanti Nagar', 'short_name': 'Kanti Nagar', 'types': ['political', 'sublocality', 'sublocality_level_3']}, {'long_name': 'J B Nagar', 'short_name': 'J B Nagar', 'types': ['political', 'sublocality', 'sublocality_level_2']}, {'long_name': 'Andheri East', 
'short_name': 'Andheri East', 'types': ['political', 'sublocality', 'sublocality_level_1']}, {'long_name': 'Mumbai', 'short_name': 'Mumbai', 'types': ['locality', 'political']}, {'long_name': 'Mumbai Suburban', 'short_name': 'Mumbai Suburban', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Maharashtra', 'short_name': 'MH', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'India', 'short_name': 'IN', 'types': ['country', 'political']}, {'long_name': '400059', 'short_name': '400059', 'types': ['postal_code']}], 'formatted_address': 'KASI APARTMENTS, Shriniwas Bagarka Rd, Bamanpuri, Kanti Nagar, J B Nagar, Andheri East, Mumbai, Maharashtra 400059, India', 'geometry': {'location': {'lat': 19.1093338, 'lng': 72.8670515}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 19.1106827802915, 'lng': 72.86840048029151}, 'southwest': {'lat': 19.10798481970849, 'lng': 72.86570251970849}}}, 'place_id': 'ChIJoUz25DjI5zsRiMoiQtq5kXs', 'types': ['establishment', 'point_of_interest', 'premise']}, {'address_components': [{'long_name': 'Silver Line Apts.', 'short_name': 'Silver Line Apts.', 'types': ['premise']}, {'long_name': 'Bamanpuri', 'short_name': 'Bamanpuri', 'types': ['neighborhood', 'political']}, {'long_name': 'J.B. Nagar', 'short_name': 'J.B. 
Nagar', 'types': ['political', 'sublocality', 'sublocality_level_3']}, {'long_name': 'J B Nagar', 'short_name': 'J B Nagar', 'types': ['political', 'sublocality', 'sublocality_level_2']}, {'long_name': 'Andheri East', 'short_name': 'Andheri East', 'types': ['political', 'sublocality', 'sublocality_level_1']}, {'long_name': 'Mumbai', 'short_name': 'Mumbai', 'types': ['locality', 'political']}, {'long_name': 'Mumbai Suburban', 'short_name': 'Mumbai Suburban', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Maharashtra', 'short_name': 'MH', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'India', 'short_name': 'IN', 'types': ['country', 'political']}, {'long_name': '400047', 'short_name': '400047', 'types': ['postal_code']}], 'formatted_address': 'Silver Line Apts., Bamanpuri, J.B. Nagar, J B Nagar, Andheri East, Mumbai, Maharashtra 400047, India', 'geometry': {'location': {'lat': 19.1091075, 'lng': 72.8670776}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 19.1104564802915, 'lng': 72.86842658029151}, 'southwest': {'lat': 19.1077585197085, 'lng': 72.86572861970849}}}, 'place_id': 'ChIJEQ3_ZzjI5zsR9LxIP1h2b2c', 'types': ['premise']}, {'address_components': [{'long_name': 'Gokul panch chs', 'short_name': 'Gokul panch chs', 'types': ['establishment', 'point_of_interest']}, {'long_name': '81-B', 'short_name': '81-B', 'types': ['street_number']}, {'long_name': 'Sheth Bhavanidas Benani Marg', 'short_name': 'Sheth Bhavanidas Benani Marg', 'types': ['route']}, {'long_name': 'Bamanpuri', 'short_name': 'Bamanpuri', 'types': ['neighborhood', 'political']}, {'long_name': 'Ajit Nagar', 'short_name': 'Ajit Nagar', 'types': ['political', 'sublocality', 'sublocality_level_3']}, {'long_name': 'J B Nagar', 'short_name': 'J B Nagar', 'types': ['political', 'sublocality', 'sublocality_level_2']}, {'long_name': 'Andheri East', 'short_name': 'Andheri East', 'types': ['political', 'sublocality', 'sublocality_level_1']}, 
{'long_name': 'Mumbai', 'short_name': 'Mumbai', 'types': ['locality', 'political']}, {'long_name': 'Mumbai Suburban', 'short_name': 'Mumbai Suburban', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Maharashtra', 'short_name': 'MH', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'India', 'short_name': 'IN', 'types': ['country', 'political']}, {'long_name': '400047', 'short_name': '400047', 'types': ['postal_code']}], 'formatted_address': 'Gokul panch chs, 81-B, 81-B, Sheth Bhavanidas Benani Marg, Bamanpuri, Ajit Nagar, J B Nagar, Andheri East, Mumbai, Maharashtra 400047, India', 'geometry': {'location': {'lat': 19.1098713, 'lng': 72.86705669999999}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 19.1112202802915, 'lng': 72.8684056802915}, 'southwest': {'lat': 19.1085223197085, 'lng': 72.8657077197085}}}, 'place_id': 'ChIJpUuz4jjI5zsRpgQdmR5E1v0', 'types': ['establishment', 'point_of_interest']}, {'address_components': [{'long_name': 'Ajit Nagar', 'short_name': 'Ajit Nagar', 'types': ['political', 'sublocality', 'sublocality_level_3']}, {'long_name': 'J B Nagar', 'short_name': 'J B Nagar', 'types': ['political', 'sublocality', 'sublocality_level_2']}, {'long_name': 'Andheri East', 'short_name': 'Andheri East', 'types': ['political', 'sublocality', 'sublocality_level_1']}, {'long_name': 'Mumbai', 'short_name': 'Mumbai', 'types': ['locality', 'political']}, {'long_name': 'Mumbai Suburban', 'short_name': 'Mumbai Suburban', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Maharashtra', 'short_name': 'MH', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'India', 'short_name': 'IN', 'types': ['country', 'political']}, {'long_name': '400047', 'short_name': '400047', 'types': ['postal_code']}], 'formatted_address': 'Ajit Nagar, J B Nagar, Andheri East, Mumbai, Maharashtra 400047, India', 'geometry': {'bounds': {'northeast': {'lat': 19.1119198, 'lng': 72.8714133}, 
'southwest': {'lat': 19.1085396, 'lng': 72.8662167}}, 'location': {'lat': 19.1103164, 'lng': 72.8680732}, 'location_type': 'APPROXIMATE', 'viewport': {'northeast': {'lat': 19.1119198, 'lng': 72.8714133}, 'southwest': {'lat': 19.1085396, 'lng': 72.8662167}}}, 'place_id': 'ChIJPWPg4zjI5zsRJWPFphEkcxc', 'types': ['political', 'sublocality', 'sublocality_level_3']}, {'address_components': [{'long_name': 'Bamanpuri', 'short_name': 'Bamanpuri', 'types': ['neighborhood', 'political']}, {'long_name': 'J B Nagar', 'short_name': 'J B Nagar', 'types': ['political', 'sublocality', 'sublocality_level_2']}, {'long_name': 'Andheri East', 'short_name': 'Andheri East', 'types': ['political', 'sublocality', 'sublocality_level_1']}, {'long_name': 'Mumbai', 'short_name': 'Mumbai', 'types': ['locality', 'political']}, {'long_name': 'Mumbai Suburban', 'short_name': 'Mumbai Suburban', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Maharashtra', 'short_name': 'MH', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'India', 'short_name': 'IN', 'types': ['country', 'political']}, {'long_name': '400047', 'short_name': '400047', 'types': ['postal_code']}], 'formatted_address': 'Bamanpuri, J B Nagar, Andheri East, Mumbai, Maharashtra 400047, India', 'geometry': {'bounds': {'northeast': {'lat': 19.1102874, 'lng': 72.869838}, 'southwest': {'lat': 19.1060651, 'lng': 72.8635609}}, 'location': {'lat': 19.1084347, 'lng': 72.86574929999999}, 'location_type': 'APPROXIMATE', 'viewport': {'northeast': {'lat': 19.1102874, 'lng': 72.869838}, 'southwest': {'lat': 19.1060651, 'lng': 72.8635609}}}, 'place_id': 'ChIJIYgnUDjI5zsRK_Zl9Zy_QkY', 'types': ['neighborhood', 'political']}, {'address_components': [{'long_name': 'J B Nagar', 'short_name': 'J B Nagar', 'types': ['political', 'sublocality', 'sublocality_level_2']}, {'long_name': 'Andheri East', 'short_name': 'Andheri East', 'types': ['political', 'sublocality', 'sublocality_level_1']}, {'long_name': 'Mumbai', 
'short_name': 'Mumbai', 'types': ['locality', 'political']}, {'long_name': 'Mumbai Suburban', 'short_name': 'Mumbai Suburban', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Maharashtra', 'short_name': 'MH', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'India', 'short_name': 'IN', 'types': ['country', 'political']}, {'long_name': '400047', 'short_name': '400047', 'types': ['postal_code']}], 'formatted_address': 'J B Nagar, Andheri East, Mumbai, Maharashtra 400047, India', 'geometry': {'bounds': {'northeast': {'lat': 19.1161579, 'lng': 72.871533}, 'southwest': {'lat': 19.1008041, 'lng': 72.8606231}}, 'location': {'lat': 19.1110621, 'lng': 72.8655922}, 'location_type': 'APPROXIMATE', 'viewport': {'northeast': {'lat': 19.1161579, 'lng': 72.871533}, 'southwest': {'lat': 19.1008041, 'lng': 72.8606231}}}, 'place_id': 'ChIJt8_u6TjI5zsRR9eE5rMK45A', 'types': ['political', 'sublocality', 'sublocality_level_2']}, {'address_components': [{'long_name': 'Andheri East', 'short_name': 'Andheri East', 'types': ['political', 'sublocality', 'sublocality_level_1']}, {'long_name': 'Mumbai', 'short_name': 'Mumbai', 'types': ['locality', 'political']}, {'long_name': 'Mumbai Suburban', 'short_name': 'Mumbai Suburban', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Maharashtra', 'short_name': 'MH', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'India', 'short_name': 'IN', 'types': ['country', 'political']}], 'formatted_address': 'Andheri East, Mumbai, Maharashtra, India', 'geometry': {'bounds': {'northeast': {'lat': 19.1327276, 'lng': 72.89305499999999}, 'southwest': {'lat': 19.096748, 'lng': 72.843926}}, 'location': {'lat': 19.1154908, 'lng': 72.8726952}, 'location_type': 'APPROXIMATE', 'viewport': {'northeast': {'lat': 19.1327276, 'lng': 72.89305499999999}, 'southwest': {'lat': 19.096748, 'lng': 72.843926}}}, 'place_id': 'ChIJMbHfQRu25zsRMazdY3UpaKY', 'types': ['political', 'sublocality', 
'sublocality_level_1']}], 'status': 'OK'} </code></pre> <p>Here is the snippet of code</p> <pre><code> import json import pandas as pd line="json_str" json_st = json.loads(line) country=[] political=[] address_fields = { 'intersection': [], 'political': [], 'country': [] } for json_str in json_st: address_fields = { 'intersection': [], 'political': [], 'country': [] } if isinstance(json_st,dict): first_address_components = json_st['results'] #format_add = json_st['results'][0] else: first_address_components = json_st[0]['address_components'] for item in first_address_components: for field_key in address_fields.keys(): #address_fields[field_key].append( str(format_add['formatted_address'])) if field_key in item['types']: address_fields[field_key].append(item['long_name']) address_fields = {key: ', '.join(values) for key, values in address_fields.items()} country.append(address_fields['country']) political.append(address_fields['political']) </code></pre> <p>It gives error</p> <pre><code>json_st['results']['address_components'] Traceback (most recent call last): File "&lt;ipython-input-94-315fa8711f9d&gt;", line 1, in &lt;module&gt; json_st['results']['address_components'] TypeError: list indices must be integers or slices, not str </code></pre> <p>I'm getting the first 3 columns of expected O/P but unable to extract other columns. Any suggestion on the same will be helpful</p> <p>Thanks</p> <p>Domnick</p>
<p>I would go for <code>json_normalize</code>, thought of one line answer but I dont think its possible i.e (Here I did only for px_val and py_val you can do similar things for other columns) </p> <pre><code>from pandas.io.json import json_normalize import pandas as pd import json with open('dat.json') as f: data = json.load(f) result = json_normalize(data,'results') result['px_val'] = result['geometry'].apply(json_normalize).apply(lambda x : x['location.lat']) result['py_val'] = result['geometry'].apply(json_normalize).apply(lambda x : x['location.lng']) print(result[['formatted_address','px_val','py_val']]) </code></pre> <pre> formatted_address px_val py_val 0 Satya Niwas, Kanti Nagar, J B Nagar, Andheri E... 19.109659 72.867471 1 82, Bamanpuri, Ajit Nagar, J B Nagar, Andheri ... 19.109749 72.867249 2 Todi Building, Sheth Bhavanidas Benani Marg, K... 19.109827 72.867789 3 KASI APARTMENTS, Shriniwas Bagarka Rd, Bamanpu... 19.109334 72.867052 4 Silver Line Apts., Bamanpuri, J.B. Nagar, J B ... 19.109108 72.867078 5 Gokul panch chs, 81-B, 81-B, Sheth Bhavanidas ... 19.109871 72.867057 6 Ajit Nagar, J B Nagar, Andheri East, Mumbai, M... 19.110316 72.868073 7 Bamanpuri, J B Nagar, Andheri East, Mumbai, Ma... 19.108435 72.865749 8 J B Nagar, Andheri East, Mumbai, Maharashtra 4... 19.111062 72.865592 9 Andheri East, Mumbai, Maharashtra, India 19.115491 72.872695 </pre> <p>I try to parse political certainly not proud of this solution i.e </p> <pre><code>pol = [] for i in result['address_components'].apply(json_normalize): pol.append(','.join(i.apply(lambda x : x['long_name'] if 'political' in x['types'] else np.nan,1).dropna())) result['political'] = pol </code></pre> <p>Output <code>result['political']</code></p> <pre> 0 Kanti Nagar,J B Nagar,Andheri East,Mumbai,Mumb... 1 Bamanpuri,Ajit Nagar,J B Nagar,Andheri East,Mu... 2 Kanti Nagar,J B Nagar,Andheri East,Mumbai,Mumb... 3 Bamanpuri,Kanti Nagar,J B Nagar,Andheri East,M... 4 Bamanpuri,J.B. 
Nagar,J B Nagar,Andheri East,Mu... 5 Bamanpuri,Ajit Nagar,J B Nagar,Andheri East,Mu... 6 Ajit Nagar,J B Nagar,Andheri East,Mumbai,Mumba... 7 Bamanpuri,J B Nagar,Andheri East,Mumbai,Mumbai... 8 J B Nagar,Andheri East,Mumbai,Mumbai Suburban,... 9 Andheri East,Mumbai,Mumbai Suburban,Maharashtr... Name: political, dtype: object </pre> <p>To convert it to a method we can do </p> <pre><code>def get_cols(st): pol = [] for i in result['address_components'].apply(json_normalize): pol.append(','.join(i.apply(lambda x : x['long_name'] if st in x['types'] else np.nan,1).dropna())) return pol result['political'] = get_cols('political') # This will assign the new column political with data. </code></pre>
python|json|pandas|for-loop|dataframe
2
183
37,753,578
Interpreting numpy array obtained from tif file
<p>I need to work with some greyscale tif files and I have been using PIL to import them as images and convert them into numpy arrays:</p> <pre><code> np.array(Image.open(src)) </code></pre> <p>I want to have a transparent understanding of exactly what the values of these array correspond to and in particular, it was not clear what value was appropriate as a white point or black point for my images. For instance if I wanted to convert this array into an array of floats with pixel values of 1 for white values and 0 for black with other values scaled linearly in between.</p> <p>I have tried some naive methods including scaling by the maximum value in the array but opening the resulting files, there is always some amount of shift in the color levels.</p> <p>Is there any documentation for the proper way to understand the values stored in these tif arrays?</p>
<p>A <a href="https://en.wikipedia.org/wiki/Tagged_Image_File_Format" rel="nofollow noreferrer">TIFF</a> is basically a computer file format for storing raster graphics images. It has a lot of <a href="https://reference.wolfram.com/language/ref/format/TIFF.html" rel="nofollow noreferrer">specs</a> and quick search on the web will get you the resources you need.</p> <p>The thing is you are using PIL as your input library. The array you have is likely working with an <code>uint8</code> data type, which means your data can be anywhere within 0 to 255. To obtain the 0 to 1 color range do the following:</p> <pre><code>im = np.array(Image.open(src)).astype('float32')/255 </code></pre> <p>Notice your array will likely have 4 layers given in the third dimension <code>im[:,:, here]</code> (<code>im.shape = (i,j,k)</code>). So each trace <code>im[i,j,:]</code> (which represents a pixel) is going to be a quadruplet for an RGBA value.</p> <p>The R stands for Red (or quantity of Red), G for Green, B for Blue. A is the alpha channel and it is what enables you to have transparency (lower values means less opacity and more transparency).</p> <p>It can also have three layers for only RGB, or one layer if intended to be plotted in the grey-scale.</p> <p>In the case you have RGB (or RGBA but not considering alpha) but need a single value you should understand that there are quite a few different ways of doing this. In <a href="https://stackoverflow.com/questions/687261/converting-rgb-to-grayscale-intensity">this post</a> @denis recommends the use of the following formulation:</p> <pre><code>Y = .2126 * R^gamma + .7152 * G^gamma + .0722 * B^gamma </code></pre> <blockquote> <p>where gamma is 2.2 for many PCs. The usual R G B are sometimes written as R' G' B' (R' = Rlin ^ (1/gamma)) (purists tongue-click) but here I'll drop the '.</p> </blockquote> <p>And finally <code>L* = 116 * Y ^ 1/3 - 16</code> to obtain the luminance.</p> <p>I recommend you to read his post. 
Also consider looking into the following concepts:</p> <ul> <li><a href="https://en.wikipedia.org/wiki/RGB_color_model" rel="nofollow noreferrer">RGB Colors model</a></li> <li><a href="https://en.wikipedia.org/wiki/Gamma_correction" rel="nofollow noreferrer">Gamma correction</a></li> <li><a href="https://en.wikipedia.org/wiki/Tagged_Image_File_Format" rel="nofollow noreferrer">Tagged Image File Format</a></li> <li><a href="http://pillow.readthedocs.io/en/3.2.x/handbook/image-file-formats.html#tiff" rel="nofollow noreferrer">Pillow documentation of TIFF</a></li> <li><a href="https://stackoverflow.com/questions/7569553/working-with-tiffs-import-export-in-python-using-numpy">Working with TIFFs (import, export) in Python using numpy</a></li> </ul>
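To make the luminance recipe above concrete, here is a minimal runnable sketch (my own addition, not code from the cited post; it uses the quoted gamma of 2.2 and the Rec. 709 weights on a small synthetic array rather than a real TIFF):

```python
import numpy as np

def to_grayscale(rgb, gamma=2.2):
    """Convert an (h, w, 3) uint8 RGB array to linear luminance in [0, 1]."""
    lin = (rgb.astype('float32') / 255.0) ** gamma   # undo the gamma encoding
    # Rec. 709 luma weights applied to the linearized channels
    return 0.2126 * lin[..., 0] + 0.7152 * lin[..., 1] + 0.0722 * lin[..., 2]

# synthetic 2x2 image: black and white pixels on alternating corners
img = np.array([[[0, 0, 0], [255, 255, 255]],
                [[255, 255, 255], [0, 0, 0]]], dtype=np.uint8)
gray = to_grayscale(img)
print(gray.shape)  # (2, 2)
```

Since the three weights sum to 1, a pure white pixel maps to 1.0 and a pure black pixel to 0.0, which gives the 0-to-1 scale asked about in the question.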
python|image|numpy|tiff
1
184
37,919,319
Python reading Popen continuously (Windows)
<p>Im trying to <code>stdout.readline</code> and put the results (i.e each line, at the time of printing them to the terminal) on a <code>multiprocessing.Queue</code> for us in another .py file. However, the call:</p> <pre><code>res = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=1 ) with res.stdout: for line in iter(res.stdout.readline, b''): print line res.wait() </code></pre> <p>Will block and the results will be printed <em>after</em> the process is complete (or not at all if exit code isn't returned). </p> <p>I've browsed SO for answers to this, and tried setting bufsize=1, spawning threads that handle the reading, using filedescriptors, etc. None seem to work. I might have to use the module <code>pexpect</code> but I'm not sure how it works yet. </p> <p>I have also tried</p> <pre><code> def enqueue_output(self, out, queue): for line in iter(out.readline, b''): queue.put([line]) out.close() </code></pre> <p>To put the data on the queue, but since <code>out.readline</code> seems to block, the result will be the same.</p> <p>In short: How do I make the subprocess output available to me at the time of print? It prints chunks of 1-10 lines at a time, however these are returned to me when the process completes, separated by newlines as well..</p> <p>Related: </p> <p><a href="https://stackoverflow.com/questions/12419198/python-subprocess-readlines-hangs/12471855#12471855">Python subprocess readlines() hangs</a></p> <p><a href="https://stackoverflow.com/questions/2715847/python-read-streaming-input-from-subprocess-communicate">Python: read streaming input from subprocess.communicate()</a></p> <p><a href="https://stackoverflow.com/questions/375427/non-blocking-read-on-a-subprocess-pipe-in-python">Non-blocking read on a subprocess.PIPE in python</a></p>
<p>As explained by @eryksun, and confirmed by your comment, the cause of the buffering is the use of <code>printf</code> by the C application.</p> <p>By default, printf buffers its output, but the output is flushed on newline or if a read occurs <strong>when the output is directed to a terminal</strong>. When the output is directed to a file or a pipe, the actual output only occurs when the buffer is full.</p> <p>Fortunately on Windows, there is no low level buffering (*). That means that calling <code>setvbuf(stdout, NULL, _IONBF, 0);</code> near the beginning of the program would be enough. But unfortunately, you need no buffering at all (<code>_IONBF</code>), because line buffering on Windows is implemented as full buffering.</p> <p>(*) On Unix or Linux systems, the underlying system call can add its own buffering. That means that a program using low level <code>write(1, buf, strlen(buf));</code> will be unbuffered on Windows, but will still be buffered on Linux when standard output is connected to a pipe or a file.</p>
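For completeness, here is a runnable Python-side sketch (my addition, not from the original answer): when the child happens to be a Python script, passing <code>-u</code> makes its stdout unbuffered, which has the same effect as the <code>setvbuf</code> call described above, and <code>readline</code> then sees each line as soon as it is produced:

```python
import subprocess
import sys

# Child script that emits two lines; -u makes its stdout unbuffered,
# the Python-side analogue of setvbuf(stdout, NULL, _IONBF, 0) in C.
child = "for i in range(2):\n    print('line', i)"

proc = subprocess.Popen([sys.executable, "-u", "-c", child],
                        stdout=subprocess.PIPE)
lines = []
for raw in iter(proc.stdout.readline, b''):
    # each line is available here as soon as the child writes it
    lines.append(raw.decode().strip())
proc.wait()
print(lines)  # ['line 0', 'line 1']
```

For a C child you cannot recompile, the same buffering still applies, so the fix genuinely has to happen on the writing side as the answer explains.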
python|windows|io|subprocess
1
185
36,805,339
Django POST form validation to an another page
<p>I'm trying to make a TV show manager with Django and I have a problem with form validation and redirection. I have a simple page with a form where people can search a Tv show, and an other page where the result of the query is displaying. (for the query I'm using the API of TVDB I don't know if its useful) What I want to do is:</p> <ol> <li>If I submit the form and there are errors display the form with related erros</li> <li>If the form is valid go to the other page with the data to make the query with the API</li> </ol> <p><strong>forms.py</strong></p> <pre><code>class SearchShowForm(forms.Form): query = forms.CharField( max_length=120, required=True, widget=TextInput(attrs={'class': 'form-control', 'placeholder': 'Search'}) ) </code></pre> <p><strong>views.py</strong></p> <pre><code>def step1(request): form = SearchShowForm() context = {'form': form, 'step': 1} return render(request, 'tvshows_manager/step_1.html', context) def step2(request): print(request.POST) if request.method == 'POST': form = SearchShowForm(request.POST) if form.is_valid(): print(form.cleaned_data) return render(request, 'tvshows_manager/step_2.html', {'data': form.cleaned_data}) else: return render(request, 'tvshows_manager/step_1.html', {'form': form}) else: return HttpResponseRedirect('/step_1/') </code></pre> <p><strong>template for step_1.html</strong></p> <pre><code>&lt;form action="{% url 'step2' %}" class="form-inline" method="POST"&gt; {% csrf_token %} &lt;input type="submit" value="Rechercher" class="btn btn-default"&gt; &lt;div class="search-form-input"&gt; {{ form.query }} &lt;/div&gt; {% if form.errors %} &lt;p&gt;Ce champs est nécessaire&lt;/p&gt; {% endif %} &lt;/form&gt; </code></pre> <p>At the moment if I'm going to the step1 page and I submit a blank input the page 'step2' is displaying with the form and errors and I want to stay to the step1 page.</p> <p>Thanks in advance</p>
<p>Based on your description and views.py, you stay on step 2. Here's why:</p> <ol> <li>The user is on page '/step_1/'.</li> <li>He submits the form.</li> <li>Because the action param in the form points to '/step_2/', the browser goes to that url.</li> <li>In the view request.method == 'POST' is True, but the form is not valid.</li> <li>You are rendering the template from '/step_1', but not redirecting the user.</li> </ol> <p>So here is a sample fix:</p> <pre><code>def step2(request): print(request.POST) if request.method == 'POST': form = SearchShowForm(request.POST) if form.is_valid(): print(form.cleaned_data) return render(request, 'tvshows_manager/step_2.html', {'data': form.cleaned_data}) else: return HttpResponseRedirect('/step_1/') else: return HttpResponseRedirect('/step_1/') </code></pre>
python|django|forms
1
186
36,919,825
Pandas dataframe in pyspark to hive
<p>How to send a pandas dataframe to a hive table?</p> <p>I know if I have a spark dataframe, I can register it to a temporary table using </p> <pre><code>df.registerTempTable("table_name") sqlContext.sql("create table table_name2 as select * from table_name") </code></pre> <p>but when I try to use the pandas dataFrame to registerTempTable, I get the below error:</p> <pre><code>AttributeError: 'DataFrame' object has no attribute 'registerTempTable' </code></pre> <p>Is there a way for me to use a pandas dataFrame to register a temp table or convert it to a spark dataFrame and then use it register a temp table so that I can send it back to hive.</p>
<p>I guess you are trying to use pandas <code>df</code> instead of <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.functions" rel="noreferrer">Spark's DF</a>.</p> <p>Pandas DataFrame has no such method as <code>registerTempTable</code>.</p> <p>you may try to create Spark DF from pandas DF.</p> <p><strong>UPDATE:</strong></p> <p>I've tested it under Cloudera (with installed <a href="https://repo.continuum.io/pkgs/misc/parcels/" rel="noreferrer">Anaconda parcel</a>, which includes Pandas module).</p> <p>Make sure that you have set <code>PYSPARK_PYTHON</code> to your anaconda python installation (or another one containing Pandas module) on all your Spark workers (usually in: <code>spark-conf/spark-env.sh</code>)</p> <p>Here is result of my test:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; df = pd.DataFrame(np.random.randint(0,100,size=(10, 3)), columns=list('ABC')) &gt;&gt;&gt; sdf = sqlContext.createDataFrame(df) &gt;&gt;&gt; sdf.show() +---+---+---+ | A| B| C| +---+---+---+ | 98| 33| 75| | 91| 57| 80| | 20| 87| 85| | 20| 61| 37| | 96| 64| 60| | 79| 45| 82| | 82| 16| 22| | 77| 34| 65| | 74| 18| 17| | 71| 57| 60| +---+---+---+ &gt;&gt;&gt; sdf.printSchema() root |-- A: long (nullable = true) |-- B: long (nullable = true) |-- C: long (nullable = true) </code></pre>
python-2.7|pandas|hive|pyspark
5
187
48,766,723
Import error "No module named selenium" when returning to Python project
<p>I have a python project with Selenium that I was working on a year ago. When I came back to work on it and tried to run it I get the error <code>ImportError: No module named selenium</code>. I then ran <code>pip install selenium</code> inside the project which gave me <code>Requirement already satisfied: selenium in some/local/path</code>. How can I make my project compiler (is that the right terminology?) see my project dependencies?</p>
<p>Is it possible that you're using e.g. Python 3 for your project, and selenium is installed for e.g. Python 2? If that is the case, try <code>pip3 install selenium</code></p>
python|selenium|import
0
188
48,842,722
Python Virtualenv : ImportError: No Module named Zroya
<p>I was trying to work with python virtualenv on the <a href="https://github.com/malja/zroya" rel="nofollow noreferrer">Zroya python wrapper around win32 API</a>. Although I did installed the modules using pip, and although they are shown in cli using the command</p> <pre><code> pip freeze </code></pre> <p>,when trying to execute the .py file that uses the modules it shows the following error.</p> <pre><code> Traceback (most recent call last): File "TesT.PY", line 2, in &lt;module&gt; from zroya import NotificationCenter ImportError: No module named 'zroya' </code></pre> <p>What is the reason for this cause ? I'm using python 3.4. When checked on</p> <pre><code> &gt;&gt;&gt;help("modules") </code></pre> <p>on python cli, the modules that were installed using pip aren't listed.</p>
<p>Installing <code>zroya</code> should solve your problem.</p> <p>Installation instructions: <a href="https://pypi.python.org/pypi/zroya" rel="nofollow noreferrer">https://pypi.python.org/pypi/zroya</a></p>
python|virtualenv
0
189
66,972,885
Package and module import in python
<p>Here is my folder structure:</p> <pre><code>|sound |-__init__.py |-model |-__init__.py |-module1.py |-module2.py |-simulation |-sim.py </code></pre> <p>The file module1.py contains the code:</p> <pre><code>class Module1: def __init__(self,mod): self.mod = mod </code></pre> <p>The file module2.py contains the code:</p> <pre><code>class Module2: def __init__(self,mods=None): if mods is None: mods = [] self.mods = mods def append(self.mod): mods.append(mod) </code></pre> <p>Finally the file sim.py contains the code:</p> <pre><code>import sound sound_1 = sound.module2.Module2() </code></pre> <p>When I execute sim.py I get a <code>ModuleNotFoundError: No module named 'sound'</code></p> <p>I've tried pretty much everything such as <code>from sound.model import module2</code> etc. but I believe the problem comes from python not finding the <code>sound</code> package.</p> <p>I've read several tutos, docs and threads, and I don't understand what I'm doing wrong.</p>
<h3>The simple fix:</h3> <ul> <li>Move <code>sim.py</code> one folder up into <code>sound</code></li> <li>Try <code>import module2</code></li> <li><code>sound_1 = module2.Module2()</code></li> </ul>
python|python-3.x|module|package
1
190
4,496,882
Error while trying to parse a website url using python . how to debug it?
<pre><code>#!/usr/bin/python import json import urllib from BeautifulSoup import BeautifulSoup from BeautifulSoup import BeautifulStoneSoup import BeautifulSoup def showsome(searchfor): query = urllib.urlencode({'q': searchfor}) url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&amp;%s' % query search_response = urllib.urlopen(url) search_results = search_response.read() results = json.loads(search_results) data = results['responseData'] print 'Total results: %s' % data['cursor']['estimatedResultCount'] hits = data['results'] print 'Top %d hits:' % len(hits) for h in hits: print ' ', h['url'] resp = urllib.urlopen(h['url']) res = resp.read() soup = BeautifulSoup(res) print soup.prettify() print 'For more results, see %s' % data['cursor']['moreResultsUrl'] showsome('sachin') </code></pre> <p>What is the wrong in this code ?</p> <p>Note all the 4 links that I am getting out of the search , I am feeding it back to extract the contents out of it , and then use BeautifulSoup to parse it . How should I go about it ?</p>
<blockquote> <p>What is the wrong in this code ?</p> </blockquote> <p>Your indentation is all wonky in the for loop, and this line:</p> <pre><code>import BeautifulSoup </code></pre> <p>should be deleted, as it masks this earlier import:</p> <pre><code>from BeautifulSoup import BeautifulSoup </code></pre>
python
1
191
69,492,040
Calculating length between 2 dates using Tkinter calendar
<p>I am trying to create a Tkinter application where the user selects a date in a calendar and then presses a <kbd>button</kbd> and a <code>label</code> then displays the number of days between the current date and the date they have selected. I have figured out how to calculate the number of days between 2 set dates however when I introduced the calendar, it says the date <code>does not match format '%m/%d/%Y'</code> because the calendar sets the year of the date as 21 instead of 2021 e.g. <code>12/9/21</code>. Any solutions would be appreciated.</p> <pre><code>import datetime from tkinter import * from tkcalendar import * root = Tk() root.title('Date') root.geometry(&quot;600x400&quot;) from datetime import date from time import strftime from datetime import timedelta, datetime, date from datetime import datetime def calculate(): delta = b - a l1 = Label(root, text=delta.days) l1.pack() #calendar cal = Calendar(root, background=&quot;#99cbd8&quot;, disabledbackground=&quot;blue&quot;, bordercolor=&quot;#99cbd8&quot;, headersbackground=&quot;light blue&quot;, normalbackground=&quot;pink&quot;, foreground=&quot;blue&quot;, normalforeground='white', headersforeground='white', selectmode=&quot;day&quot;, year=2021, month=12, day=9) cal.pack(pady=20) plum = datetime.today().strftime(&quot;%m/%d/%Y&quot;) #getting the current date pear = cal.get_date() #getting the date from the calendar date_format = &quot;%m/%d/%Y&quot; a = datetime.strptime(plum, date_format) b = datetime.strptime(pear, date_format) button = Button(root, text=&quot;calc&quot;, command=calculate) button.pack() #delta = b - a #print(delta.days) root.mainloop() </code></pre>
<p>Add <code>date_pattern=&quot;m/d/y&quot;</code> to <code>Calender(...)</code>:</p> <pre class="lang-py prettyprint-override"><code>cal = Calendar(root, date_pattern=&quot;m/d/y&quot;, background=&quot;#99cbd8&quot;, disabledbackground=&quot;blue&quot;, bordercolor=&quot;#99cbd8&quot;, headersbackground=&quot;light blue&quot;, normalbackground=&quot;pink&quot;, foreground=&quot;blue&quot;, normalforeground='white', headersforeground='white', selectmode=&quot;day&quot;, year=2021, month=12, day=9) </code></pre> <p>Also you need to get the selected date inside <code>calculate()</code>:</p> <pre class="lang-py prettyprint-override"><code>def calculate(): pear = cal.get_date() b = datetime.strptime(pear, date_format) delta = b - a l1.config(text=delta.days) # update label ... button = Button(root, text=&quot;calc&quot;, command=calculate) button.pack() # create the label l1 = Label(root) l1.pack() ... </code></pre>
python|date|datetime|tkinter
1
192
73,641,447
How to modify single object inside dict values stored as a set?
<p>I have a dictionary which represents graph. Key is Node class and values are set of Nodes. So it has this structure: <code>dict[Node] = {Node, Node, Node}</code></p> <pre><code>class Node: def __init__(self, row, column, region): self.id = f'{str(row)}-{str(column)}' self.region = region self.visited = False </code></pre> <p>In code below I need to update visited property of Node class.</p> <pre><code> while nodes_queue: current_node = nodes_queue.pop() for edge in self.map[current_node]: if edge.region == root.region and not edge.visited: edge.visited = True # Not updated! nodes_queue.append(edge) </code></pre> <p>But it looks like I get view of Node objects instead of actual objects. When I update visited property in for loop and get it from next iteration, the property is still set to False</p>
<p>I've figured it out. I was storing <strong>different</strong> <code>Node</code> objects as the dictionary keys and as the members of the value sets. I created a context of all Nodes and now get each Node from there by its id.</p> <pre><code>def get_node_from_context(self, row, column, region): node = Node(row, column, region) if node not in self.__graph_context__: self.__graph_context__[node] = node else: node = self.__graph_context__[node] return node </code></pre>
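The fix above relies on <code>Node</code> comparing equal by its identity fields rather than by object identity; otherwise <code>node not in self.__graph_context__</code> would never find an existing entry. A standalone sketch of that equality contract (the <code>__eq__</code>/<code>__hash__</code> methods and the module-level context are my illustration, not code from the post):

```python
class Node:
    def __init__(self, row, column, region):
        self.id = f'{row}-{column}'
        self.region = region
        self.visited = False

    def __eq__(self, other):
        # two Nodes are "the same" when they sit at the same coordinates
        return isinstance(other, Node) and self.id == other.id

    def __hash__(self):
        return hash(self.id)

context = {}

def get_node_from_context(row, column, region):
    node = Node(row, column, region)
    if node not in context:   # uses __hash__ and __eq__ above
        context[node] = node
    return context[node]

a = get_node_from_context(0, 1, 'r')
a.visited = True
b = get_node_from_context(0, 1, 'r')
print(b.visited)  # the same object comes back, so this is True
```

With this contract, mutating <code>visited</code> on any Node fetched from the context is visible everywhere that Node appears in the graph.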
python|python-3.x|dictionary|set
0
193
73,623,323
I wrote a code that should identify which of the elements of the sequence are equal to the sum of the elements of two different arrays, but it's wrong
<p>I am given two int arrays of different lentgh. Also I'm given a sequence of integers. I need to write a code, that prints &quot;YES&quot; for each element of the sequence if it can be obtained as a result of the sum of any element from first array and any element in second one. Otherwise it must print &quot;NO&quot; So, I wrote the code which seems working for me, but it fails on one of the test. I have no access to it, so i am asking here. Where did I go wrong?</p> <pre><code>N = int(input()) A = [int(x) for x in input().split()] M = int(input()) B = [int(y) for y in input().split()] K = int(input()) C = [int(z) for z in input().split()] for z in C: flag = &quot;NO&quot; for i in range(N): for j in range(M): if z == A[i] + B[j]: flag = &quot;YES&quot; break print(flag) </code></pre>
<p>To make your solution work you have to break out of the second <code>for</code> loop as well:</p> <pre><code>for c in C: flag = False for a in A: for b in B: if c == a + b: flag = True break if flag: break print('YES' if flag else 'NO') </code></pre>
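When the inputs grow, the triple loop above is O(K·N·M). A common alternative (my addition, not part of the original answer) is to put one array in a set and test <code>c - a</code> membership, which drops each query to roughly O(N):

```python
def can_be_sum(A, B, c):
    """True if c == a + b for some a in A and b in B."""
    b_set = set(B)                              # built once, O(M)
    return any((c - a) in b_set for a in A)     # O(N) average-case membership tests

A = [1, 4, 7]
B = [2, 5]
for c in [9, 100, 3]:
    print("YES" if can_be_sum(A, B, c) else "NO")  # YES, NO, YES
```

In practice the set would be built once outside the loop over the query sequence, so only the <code>any(...)</code> scan runs per queried value.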
python
0
194
64,329,580
How to add samesite=None in the set_cookie function django?
<p>I want to add <code>samesite</code> attribute as <code>None</code> in the <code>set_cookie function</code></p> <p>This is the code where I call the <code>set_cookie function</code></p> <pre><code>redirect = HttpResponseRedirect( '/m/' ) redirect.set_cookie( 'access_token', access_token, max_age=60 * 60 ) </code></pre> <p>This is the function where I set the cookie</p> <pre><code>def set_cookie(self, key, value='', max_age=None, expires=None, path='/', domain=None, secure=False, httponly=False): self.cookies[key] = value if expires is not None: if isinstance(expires, datetime.datetime): if timezone.is_aware(expires): expires = timezone.make_naive(expires, timezone.utc) delta = expires - expires.utcnow() delta = delta + datetime.timedelta(seconds=1) expires = None max_age = max(0, delta.days * 86400 + delta.seconds) else: self.cookies[key]['expires'] = expires else: self.cookies[key]['expires'] = '' if max_age is not None: self.cookies[key]['max-age'] = max_age # IE requires expires, so set it if hasn't been already. if not expires: self.cookies[key]['expires'] = cookie_date(time.time() + max_age) if path is not None: self.cookies[key]['path'] = path if domain is not None: self.cookies[key]['domain'] = domain if secure: self.cookies[key]['secure'] = True if httponly: self.cookies[key]['httponly'] = True </code></pre>
<p>You can use this library to change the flag if you're using django2.x or older: <a href="https://pypi.org/project/django-cookies-samesite/" rel="nofollow noreferrer">https://pypi.org/project/django-cookies-samesite/</a></p> <p>If you're using django3.x, it should be built-in</p>
python|django|cookies|django-views|middleware
1
195
49,936,387
NotImplementedError: data_source='iex' is not implemented
<p>I am trying to get some stock data through pandas_datareader in jupyter notebook. I was using google, but that does not work anymore, so I am using iex.</p> <pre><code>import pandas_datareader.data as web import datetime start = datetime.datetime(2015,1,1) end = datetime.datetime(2017,1,1) facebook = web.DataReader('FB','iex',start,end) </code></pre> <p>However, it comes back with the following error.</p> <pre><code>NotImplementedError: data_source='iex' is not implemented </code></pre> <p>Can anyone help me how to solve this issue please?</p>
<p>Many DataReader sources are deprecated, see updated list <a href="https://pandas-datareader.readthedocs.io/en/latest/remote_data.html#remote-data-access" rel="nofollow noreferrer">here</a>.</p> <p>Many now require API key, IEX is one of them: </p> <blockquote> <p>Usage of all IEX readers now requires an <a href="https://pandas-datareader.readthedocs.io/en/latest/remote_data.html#iex" rel="nofollow noreferrer">API key</a>.</p> </blockquote> <p>Get API key from <a href="https://iexcloud.io/" rel="nofollow noreferrer">IEX Cloud Console</a>, which can be stored in the IEX_API_KEY environment variable. Just execute this is separate cell in Jupyter Notebook:</p> <p><code>os.environ["IEX_API_KEY"] = "pk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"</code></p> <p>With <a href="https://iexcloud.io/pricing/" rel="nofollow noreferrer">free IEX account</a> you can get 500,000 free Core messages/mo.</p>
pandas|pandas-datareader|elixir-iex
1
196
66,613,380
Downloading QtDesigner for PyQt6 and converting .ui file to .py file with pyuic6
<p>How do I download QtDesigner for PyQt6? If there's no QtDesigner for PyQt6, I can also use QtDesigner of PyQt5, but how do I convert this .ui file to .py file which uses PyQt6 library instead of PyQt5?</p>
<p>As they point out, you can use pyuic6:</p> <pre><code>pyuic6 -o output.py -x input.ui </code></pre> <p>but in some cases the command is not found in the CMD/console; in that case it can be invoked through Python:</p> <pre><code>python -m PyQt6.uic.pyuic -o output.py -x input.ui </code></pre>
python|pyqt5|qt-designer|pyqt6
7
197
64,847,851
Reading a text file and replacing it to value in dictionary
<p>I have a dictionary made in python. I also have a text file where each line is a different word. I want to check each line of the text file against the keys of the dictionary and if the line in the text file matches the key I want to write that key's value to an output file. Is there an easy way to do this. Is this even possible?</p> <p>for example I am reading my file in like this:</p> <pre><code>test = open(&quot;~/Documents/testfile.txt&quot;).read() </code></pre> <p>tokenising it and for each word token I want to look it up a dictionary, my dictionary is setup like this:</p> <pre><code>dic = {&quot;a&quot;: [&quot;ah0&quot;, &quot;ey1&quot;], &quot;a's&quot;: [&quot;ey1 z&quot;], &quot;a.&quot;: [&quot;ey1&quot;], &quot;a.'s&quot;: [&quot;ey1 z&quot;]} </code></pre> <p>If I come across the letter <code>'a'</code> in my file, I want it to output <code>[&quot;ah0&quot;, &quot;ey1&quot;]</code>.</p>
<p>you can try:</p> <pre><code>for line in all_lines: for val in dic: if line.count(val) &gt; 0: print(dic[val]) </code></pre> <p>This will look through all the lines in the file, and if a line contains a key from dic, it will print the items associated with that key in the dictionary (you will have to do something like <code>all_lines = test.readlines()</code> to get all the lines in a list). <code>dic[val]</code> gives the list assigned to the key, e.g. <code>[&quot;ah0&quot;, &quot;ey1&quot;]</code>, so you do not have to just print it; you can use it in other places.</p>
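Note that <code>line.count(val) &gt; 0</code> is a substring test, so the key <code>&quot;a&quot;</code> would match any line containing the letter a. If each line of the file holds exactly one word, a plain dictionary lookup per line is safer; a sketch (using a shortened version of the <code>dic</code> from the question):

```python
dic = {"a": ["ah0", "ey1"], "a's": ["ey1 z"]}

def lookup_lines(lines, dic):
    """Return dic values for lines whose stripped text exactly matches a key."""
    results = []
    for line in lines:
        word = line.strip()          # drop the trailing newline
        if word in dic:              # exact key test, not a substring test
            results.append(dic[word])
    return results

print(lookup_lines(["a\n", "cat\n", "a's\n"], dic))
# [['ah0', 'ey1'], ['ey1 z']]
```

Each matched value could be written to the output file with <code>out.write(str(value) + "\n")</code> instead of printed, which is what the question asks for.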
python|dictionary
0
198
65,287,582
How to move just two columns of pandas dataframe to specific positions?
<p>I have a dataset of 100 columns like follows:</p> <pre><code>citycode AD700 AD800 AD900 ... AD1980 countryname cityname </code></pre> <p>I want the output dataframe to have columns as follows:</p> <pre><code>citycode countryname cityname AD700 AD800 AD900 ... AD1980 </code></pre> <p>I can't use code like</p> <pre><code>cols = [A, B, C, D, E] df = df[cols] </code></pre> <p>because it would be too cumbersome. Thank you!</p>
<p>One of possible solutions is:</p> <pre><code>df = df[['citycode', 'countryname', 'cityname'] + list(df.loc[:, 'AD700':'AD1980'])] </code></pre> <p>Note that you compose the list of column names from:</p> <ul> <li>a &quot;by name&quot; list (first 3),</li> <li>a &quot;by range&quot; list (all other columns).</li> </ul> <p>This way, from a source DataFrame like:</p> <pre><code> citycode AD700 AD800 AD900 AD1980 countryname cityname 0 CC1 10 20 30 40 CN1 CT1 1 CC2 11 21 31 41 CN2 CT2 </code></pre> <p>you will get:</p> <pre><code> citycode countryname cityname AD700 AD800 AD900 AD1980 0 CC1 CN1 CT1 10 20 30 40 1 CC2 CN2 CT2 11 21 31 41 </code></pre>
python|python-3.x|pandas|numpy|dataframe
1
199
72,051,927
How to search via Enums Django
<p>I'm trying to write a search function for table reserving for a restaurant. I have a restaurant model:</p> <pre><code>class Restaurant(models.Model):
    &quot;&quot;&quot;
    Table Restaurant
    =======================
    This table represents a restaurant with all necessary information.
    &quot;&quot;&quot;
    name = models.CharField(max_length=70)
    caterer = models.ForeignKey(Caterer, on_delete=models.CASCADE, null=True)
    address = models.OneToOneField(Address, on_delete=models.CASCADE, null=True)
    kitchen_type = models.IntegerField(choices=KITCHEN_TYPE, null=True)
    opening_hours = models.OneToOneField(OpeningHours, on_delete=models.CASCADE, null=True)
    description = models.CharField(max_length=2000, null=True)
    phone = models.CharField(max_length=15, null=True)
    parking_options = models.BooleanField(default=False)
</code></pre> <p>which has an enum for kitchen_type:</p> <pre><code>KITCHEN_TYPE = [
    (1, &quot;Turkish&quot;),
    (2, &quot;Italian&quot;),
    (3, &quot;German&quot;),
    (4, &quot;English&quot;),
    (5, &quot;Indian&quot;),
]
</code></pre> <p>And this is the search function in view.py:</p> <pre><code>def search_result(request):
    if request.method == &quot;POST&quot;:
        searched = request.POST['searched']
        result = Restaurant.objects.filter(
            Q(name__icontains=searched) | Q(address__city__icontains=searched))
        return render(request, 'search_result.html', {'searched': searched, 'result': result})
    else:
        return render(request, 'search_result.html', {})
</code></pre> <p>So how am I able to search for <strong>kitchen_type</strong> in the view?</p>
<p>Instead of using a list of tuples I would recommend extending the <code>IntegerChoices</code> or <code>TextChoices</code> classes provided by Django. Here's an example of how you can use <code>IntegerChoices</code>:</p> <pre><code>&gt;&gt;&gt; class KitchenType(models.IntegerChoices): ... TURKISH = 1 ... ITALIAN = 2 ... GERMAN = 3 ... &gt;&gt;&gt; if 1 in KitchenType: ... print(True) ... True </code></pre> <p><a href="https://docs.djangoproject.com/en/4.0/ref/models/fields/#enumeration-types" rel="nofollow noreferrer">https://docs.djangoproject.com/en/4.0/ref/models/fields/#enumeration-types</a></p>
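Whichever representation you use, the view still has to translate the searched text back into the stored integer before it can filter on <code>kitchen_type</code>. A minimal sketch of that mapping, using the <code>KITCHEN_TYPE</code> list from the question (the helper name <code>matching_kitchen_values</code> is my own, not a Django API):

```python
KITCHEN_TYPE = [
    (1, "Turkish"),
    (2, "Italian"),
    (3, "German"),
    (4, "English"),
    (5, "Indian"),
]

def matching_kitchen_values(searched):
    # Collect every choice value whose label contains the searched
    # text, case-insensitively; e.g. "ital" -> [2].
    return [value for value, label in KITCHEN_TYPE
            if searched.lower() in label.lower()]
```

In the view you could then extend the query with something like `Q(kitchen_type__in=matching_kitchen_values(searched))`, ORed onto the existing name and city conditions.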
python|django
1