Dataset schema: qid (int64, 46k to 74.7M), question (string, 54 to 37.8k chars), date (string, 10 chars), metadata (sequence of 3), response_j (string, 29 to 22k chars), response_k (string, 26 to 13.4k chars), __index_level_0__ (int64, 0 to 17.8k)
53,581,563
Currently, I'm trying to make a game in which the character picks up an object when it is on top of it. This is what I have so far: ``` import turtle import time default = turtle.clone() scar = turtle.clone() def pickupScar(): if default.distance(-7,48) > 5.0: default.changeshape('defaultscar.gif') wn = turtle.Screen() wn.setup(500,500) wn.bgpic('TrumpTowers.gif') wn.register_shape('default.gif') wn.register_shape('scar.gif') wn.register_shape('defaultscar.gif') turtle.hideturtle() default.shape('default.gif') scar.shape('scar.gif') default.pu() default.left(90) default.bk(35) scar.pu() scar.left(90) scar.fd(45) scar.speed(-1) default.ondrag(default.goto) ``` Does anybody know how I would go about writing the `pickupScar` function? I'm new to Python and turtle. If you recognize what my game is about, please don't judge me; it's for a school project and I couldn't think of any game ideas.
2018/12/02
[ "https://Stackoverflow.com/questions/53581563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10735185/" ]
Since I don't have your images, nor recognize what your game is about, below is an example of the functionality you describe. On the screen is a black circle and pink square. You can drag the circle and if you drag it onto the square, it will sprout a head and legs becoming a turtle. Dragging off the square, it reverts to being a circle: ``` from turtle import Screen, Turtle def drag(x, y): default.ondrag(None) # disable handler inside handler default.goto(x, y) if default.distance(scar) < 40: default.shape('turtle') elif default.shape() == 'turtle': default.shape('circle') default.ondrag(drag) wn = Screen() wn.setup(500, 500) scar = Turtle('square', visible=False) scar.shapesize(4) scar.color('pink') scar.penup() scar.left(90) scar.forward(50) scar.showturtle() default = Turtle('circle', visible=False) default.shapesize(2) default.speed('fastest') default.penup() default.left(90) default.backward(50) default.showturtle() default.ondrag(drag) wn.mainloop() ```
I don't know `turtle-graphics`, but in the real world, to determine the distance between two points (on a 2D surface) we use the **Pythagorean theorem**. If one object is at `(x1, y1)` and another at `(x2, y2)`, the distance is ``` dist = sqrt((x1-x2)^2 + (y1-y2)^2) ``` So, if `dist <= R`, the turtle (or whatever) is within radius `R` of the desired point.
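For instance, a minimal Python sketch of that check (plain math, no turtle needed; the radius `R` is a placeholder for whatever pickup distance you choose):

```
import math

def dist(x1, y1, x2, y2):
    # Pythagorean distance between (x1, y1) and (x2, y2)
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

R = 5.0                       # hypothetical pickup radius
print(dist(0, 0, 3, 4) <= R)  # True: the distance is exactly 5.0
```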
3,742
29,871,209
I have compressed a file using python-snappy and put it in my hdfs store. I am now trying to read it in like so but I get the following traceback. I can't find an example of how to read the file in so I can process it. I can read the text file (uncompressed) version fine. Should I be using sc.sequenceFile ? Thanks! ``` I first compressed the file and pushed it to hdfs python-snappy -m snappy -c gene_regions.vcf gene_regions.vcf.snappy hdfs dfs -put gene_regions.vcf.snappy / I then added the following to spark-env.sh export SPARK_EXECUTOR_MEMORY=16G export HADOOP_HOME=/usr/local/hadoop export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:$HADOOP_HOME/lib/native export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:$HADOOP_HOME/lib/native export SPARK_CLASSPATH=$SPARK_CLASSPATH:$HADOOP_HOME/lib/lib/snappy-java-1.1.1.8-SNAPSHOT.jar I then launch my spark master and slave and finally my ipython notebook where I am executing the code below. a_file = sc.textFile("hdfs://master:54310/gene_regions.vcf.snappy") a_file.first() ``` --- ValueError Traceback (most recent call last) in () ----> 1 a\_file.first() /home/user/Software/spark-1.3.0-bin-hadoop2.4/python/pyspark/rdd.pyc in first(self) 1244 if rs: 1245 return rs[0] -> 1246 raise ValueError("RDD is empty") 1247 1248 def isEmpty(self): ValueError: RDD is empty ``` Working code (uncompressed) text file a_file = sc.textFile("hdfs://master:54310/gene_regions.vcf") a_file.first() ``` output: u'##fileformat=VCFv4.1'
2015/04/25
[ "https://Stackoverflow.com/questions/29871209", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4833015/" ]
The issue here is that python-snappy is not compatible with Hadoop's snappy codec, which is what Spark will use to read the data when it sees a ".snappy" suffix. They are based on the same underlying algorithm, but they aren't compatible: you cannot compress with one and decompress with the other. You can make this work either by writing your data out to snappy in the first place using Spark or Hadoop, or by having Spark read your data as binary blobs and then manually invoking the python-snappy decompression yourself (see binaryFiles here <http://spark.apache.org/docs/latest/api/python/pyspark.html>). The binary blob approach is a bit more brittle because it needs to fit each entire input file in memory. But if your data is small enough, that will work.
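A minimal sketch of that binary-blob approach, assuming the files were compressed with plain (non-framed) python-snappy, fit in memory, and that `sc` is the SparkContext from the question:

```
import snappy  # python-snappy

# each element is a (filename, file_contents_as_bytes) pair
raw = sc.binaryFiles("hdfs://master:54310/gene_regions.vcf.snappy")
lines = (raw.mapValues(snappy.uncompress)            # whole-file decompression
            .flatMap(lambda kv: kv[1].splitlines())) # then split into lines
print(lines.first())
```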
Alright, I found a solution! Build this... <https://github.com/liancheng/snappy-utils> On Ubuntu 14.10 I had to install gcc-4.4 to get it to build; I commented on the error I was seeing here: <https://code.google.com/p/hadoop-snappy/issues/detail?id=9> I can now compress the text files using snappy at the command line like so ``` snappy -c gene_regions.vcf -o gene_regions.vcf.snappy ``` dump it into hdfs ``` hdfs dfs -put gene_regions.vcf.snappy ``` and then load it in pyspark! ``` a_file = sc.textFile("hdfs://master:54310/gene_regions.vcf.snappy") a_file.first() ``` Voila! The header of the vcf... ``` u'##fileformat=VCFv4.1' ```
3,743
56,436,777
Referencing this question: [What's the canonical way to check for type in Python?](https://stackoverflow.com/questions/152580/whats-the-canonical-way-to-check-for-type-in-python) It is said that the best way to check for inputs is to not check them - that is, to let try/except blocks take care of bad inputs. My question is that if I want to design a function that handles multiple inputs, my intuition is to do something like this ```py def my_function(self, input): if isinstance(input, type): ... # do this elif isinstance(input, type2): ... # do that else: print("can only handle type and type2") raise TypeError ``` but this is un-pythonic. How should I structure it? In my specific use-case, I want to make a function that can handle a `list` or a `pandas DataFrame`, but from a function design POV, how should I design that in a try/except paradigm? It kind of feels "ugly," but I also haven't seen code directly that does this in Python yet.
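For illustration, here is the kind of EAFP sketch I'm imagining for the list/DataFrame case (untested; the `to_dict` call is just one way to normalize a DataFrame):

```py
def my_function(data):
    # EAFP: try the DataFrame path first, fall back to treating it as a list
    try:
        records = data.to_dict("records")  # succeeds for a pandas DataFrame
    except AttributeError:
        records = list(data)               # succeeds for a list (or any iterable)
    return records
```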
2019/06/04
[ "https://Stackoverflow.com/questions/56436777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4992644/" ]
there you go ``` For Each row As DataGridViewRow In DataGridView1.Rows For Each nextrow As DataGridViewRow In DataGridView1.Rows If row.Index <> nextrow.Index Then If row.Cells(0).Value = nextrow.Cells(0).Value Then MsgBox("Duplicate on col 0, index = " & row.Index.ToString) End If If row.Cells(2).Value = nextrow.Cells(2).Value Then MsgBox("Duplicate on col 2, index = " & row.Index.ToString) End If If row.Cells(3).Value = nextrow.Cells(3).Value Then MsgBox("Duplicate on col 3, index = " & row.Index.ToString) End If If row.Cells(8).Value = nextrow.Cells(8).Value Then MsgBox("Duplicate on col 8, index = " & row.Index.ToString) End If End If Next Next ``` This will also check all the columns for duplicates, not only the row under it like in your example. You were incrementing i and ii, so you were always checking only 2 rows at a time instead of comparing 1 row to all the others.
``` For Each row As DataGridViewRow In dtg3.Rows For Each nextrow As DataGridViewRow In dtg3.Rows If row.Index <> nextrow.Index Then If row.Cells(0).Value = nextrow.Cells(0).Value AndAlso row.Cells(2).Value = nextrow.Cells(2).Value AndAlso row.Cells(3).Value = nextrow.Cells(3).Value AndAlso row.Cells(8).Value = nextrow.Cells(8).Value Then dtg3.Rows.Remove(row) dtg3.Rows.Remove(nextrow) End If End If Next Next ```
3,749
9,014,991
So let's say I have this Python code: ``` def loopForEachFileInDirectory(self, conn): for filename in os.listdir(uploadedFilesDirectory): try: self.insertNewEntryForStagingFile(conn, filename) self.copyFilesToStagingDirectory(filename) except: ??? def copyFilesToStagingDirectory(self, filename): logging.info("copying %s to youtube_ready",filename) try: shutil.copy(uploadedFilesDirectory+filename, stagingDirectory) logging.info("move successful") except shutil.Error,e: logging.warn("move failed for reasons \n\t%d:%s", e.args[0],e.args[1]) raise ??? ``` Now, the "loopForEachFileInDirectory" method is going to have a few more methods in it - I'm doing a bit of clean coding (cheers, Robert Martin) here. What I'd like is to bubble up any exceptions from the sub-methods, and if anything happens in the main loop, bail on that loop and continue. The question is, what is the rule for bubbling up a generic exception? Do I just raise on its own? And if so, how do I generically throw an exception, and how do I catch and log the details of a generic exception?
2012/01/26
[ "https://Stackoverflow.com/questions/9014991", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1061426/" ]
> > what is the rule for bubbling up a generic exception? Do I just raise on its own? > > > Yes. The short answer is to just use `raise`. > > And if so, how do I generically throw an exception... > > > The above answer the other submitter posted is correct, but it doesn't provide much in the way of context. `Exception` is the base exception class. `except Exception` works across all types of `Exception` because all Python exceptions inherit from this class. `except` statements can specify an argument which points to the exception object. I don't believe that specifying it is strictly necessary in this context. In fact, it's likely sufficient to bubble up an exception with the default: ``` except: raise ``` without any real need to specify an exception type or the variable `e` referencing the particular exception object. > > ..and how do I catch and log the details of a generic exception? > > > `logging.exception` is a good way to go. Try it like so: ``` try: <something that raises an error> except Exception, e: logging.exception(e) ```
``` #!/usr/bin/python try: print 'Youll see me first.' try: 1[90] except Exception,e: print "************ UTOH!",str(e) raise e except Exception,e: print ">>>>>>>>>>>> I concur, THE JIG IS UP!",str(e) raise e ```
3,750
21,819,649
What's the difference between a namespace Python package (no `__init__.py`) and a regular Python package (has an `__init__.py`), especially when `__init__.py` is empty for a regular package? I am curious because recently I've been forgetting to make `__init__.py` in packages I make, and I never noticed any problems. In fact, they seem to behave identically to regular packages. Edit: Namespace packages are only supported from Python 3.3 ([see PEP 420](http://legacy.python.org/dev/peps/pep-0420/)), so naturally, this question only applies to Python 3.
2014/02/17
[ "https://Stackoverflow.com/questions/21819649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/469721/" ]
Namespace packages ------------------ As of [Python 3.3](https://docs.python.org/3/whatsnew/3.3.html#pep-420-implicit-namespace-packages), we get namespace packages. These are a special kind of package that allows you to unify two packages with the same name at different points on your Python-path. For example, consider path1 and path2 as separate entries on your Python-path: ``` path1 +--namespace +--module1.py +--module2.py path2 +--namespace +--module3.py +--module4.py ``` with this arrangement you should be able to do the following: ``` from namespace import module1, module3 ``` thus you get the unification of two packages with the same name in a single namespace. If either one of them gains an `__init__.py`, that becomes **the** package - and you no longer get the unification, as the other directory is ignored. If both of them have an `__init__.py`, the first one in the PYTHONPATH (`sys.path`) is the one used. `__init__.py` used to be required to make a directory a package ------------------------------------------------------------- Namespace packages are packages without the `__init__.py`. For an example of a simple package, if you have a directory: ``` root +--package +--file1.py +--file2.py ... ``` While you could run these files independently in the `package` directory, e.g. with `python2 file1.py`, under Python 2 you wouldn't be able to import the files as modules in the root directory, e.g. ``` import package.file1 ``` would fail, and in order for it to work, you at least need this: ``` package +--__init__.py +--file1.py +--file2.py ... ``` `__init__.py` initializes the package, so you can have code in the `__init__.py` that is run when the package is first imported: ``` run_initial_import_setup() ``` provide an `__all__` list of names to be imported, ``` __all__ = ['star_import', 'only', 'these', 'names'] ``` if the package is imported with the following: ``` from package import * ``` or you can leave the `__init__.py` completely empty if you only want to be able to import the remaining .py files in the directory. ### Namespaces with `__init__.py` using pkgutil: You could originally use [pkgutil](http://docs.python.org/2/library/pkgutil.html), available since Python 2.3, to accomplish adding namespaces, by adding the following into each separate package's `__init__.py`: ``` from pkgutil import extend_path __path__ = extend_path(__path__, __name__) ``` Setuptools uses a similar method; again, all `__init__.py` files should contain the following (with no other code): ``` import pkg_resources pkg_resources.declare_namespace(__name__) ``` Namespaces were more thoroughly addressed in [PEP 420](http://www.python.org/dev/peps/pep-0420/) See also more discussion on setuptools and Namespaces here: <http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages>
1. Having `__init__.py` makes it so you can import that package elsewhere. 2. Also, the `__init__.py` file can contain code you want executed each time the module is loaded.
3,753
21,272,497
I'm trying to see if this is the most efficient way to do a bubble sort on a list in Python, or if there are better ways. Some people tell me to use two loops; what are the benefits of doing it like that vs. the code below? ``` def sort_bubble(blist): n = 0 while n < len(blist) - 1: if blist[n] > blist[n + 1]: n1 = blist[n] n2 = blist[n + 1] blist[n] = n2 blist[n + 1] = n1 n = 0 else: n = n + 1 print blist ```
2014/01/22
[ "https://Stackoverflow.com/questions/21272497", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3221614/" ]
Your algorithm is technically a bubble sort in that it does exactly the swaps that it should. However, it's a *very inefficient* bubble sort, in that it does a *lot* more compares than are necessary. How can you *know* that? It's pretty easy to instrument your code to count the number of compares and swaps. And meanwhile, [Wikipedia](http://en.wikipedia.org/wiki/Bubble_sort) gives implementations of a simple bubble sort, and one with the skip-sorted-tail optimization, in a pseudocode language that's pretty easy to port to Python and similarly instrument. I'll show the code at the bottom. For a perfect bubble sort, given a random list of length 100, you should expect a bit under 10000 compares (100 \* 100), and a bit under 2500 swaps. And the Wikipedia implementation does exactly that. The "skip-sorted-tail" version should have just over half as many compares, and it does. Yours, however, has 10x as many compares as it should. The reason your code is inefficient is that it starts over at the beginning over and over, instead of starting where it swapped whenever possible. This causes an extra factor of `O(sqrt(N))`. Meanwhile, almost any sort algorithm is better than bubble sort for almost any input, so even an efficient bubble sort is not an efficient sort. --- I've made one minor change to your code: replacing the four-line swap with a more idiomatic single-line swap. Otherwise, nothing is changed but adding the `cmpcount` and `swapcount` variables, and returning the result instead of printing it. ``` def bogo_bubble(blist): cmpcount, swapcount = 0, 0 n = 0 while n < len(blist) - 1: cmpcount += 1 if blist[n] > blist[n + 1]: swapcount += 1 blist[n], blist[n+1] = blist[n+1], blist[n] n = 0 else: n = n + 1 return blist, cmpcount, swapcount ``` This is the [Pseudocode implementation](http://en.wikipedia.org/wiki/Bubble_sort#Pseudocode_implementation) from Wikipedia, translated to Python. I had to replace the `repeat… until` with a `while True… if not …: break`, but everything else is trivial. ``` def wp1_bubble(blist): cmpcount, swapcount = 0, 0 while True: swapped = False for i in range(1, len(blist)): cmpcount += 1 if blist[i-1] > blist[i]: swapcount += 1 blist[i-1], blist[i] = blist[i], blist[i-1] swapped = True if not swapped: break return blist, cmpcount, swapcount ``` This is the [Optimizing bubble sort](http://en.wikipedia.org/wiki/Bubble_sort#Optimizing_bubble_sort), which does the simple version of the skip-sorted-tail optimization, but not the more elaborate version (which comes right after it). ``` def wp2_bubble(blist): cmpcount, swapcount = 0, 0 n = len(blist) while True: swapped = False for i in range(1, n): cmpcount += 1 if blist[i-1] > blist[i]: swapcount += 1 blist[i-1], blist[i] = blist[i], blist[i-1] swapped = True n -= 1 if not swapped: break return blist, cmpcount, swapcount import random alist = [random.randrange(100) for _ in range(100)] bb, cb, sb = bogo_bubble(alist[:]) b1, c1, s1 = wp1_bubble(alist[:]) b2, c2, s2 = wp2_bubble(alist[:]) assert bb == b1 == b2 print('bogo_bubble: {} cmp, {} swap'.format(cb, sb)) print('wp1_bubble : {} cmp, {} swap'.format(c1, s1)) print('wp2_bubble : {} cmp, {} swap'.format(c2, s2)) ``` Typical output: ``` bogo_bubble: 100619 cmp, 2250 swap wp1_bubble : 8811 cmp, 2250 swap wp2_bubble : 4895 cmp, 2250 swap ```
This is how I would do it if I were forced to use bubble sort; you should probably always just use the default sort() function in Python, as it's very fast. ``` def BubbleSort(A): end = len(A)-1 swapped = True while swapped: swapped = False for i in range(0, end): if A[i] > A[i+1]: A[i], A[i+1] = A[i+1], A[i] swapped = True end -= 1 ``` It's basically a regular bubble sort, but instead of traversing the entire list every time, it only traverses up to the last swapped value; by definition, any value past that is already in place. Also, you do not need to use temp values in Python to swap; the pythonic way to do this is: ``` a , b = b , a ```
3,756
48,689,158
I want to send commands to the Linux terminal to run Python scripts. I have a list of Python files that I want to run, and I want to run them one after the other, reading the list sequentially. Once the first file is finished, it should send the second one to run, and so on. Something like the sketch below is what I'm after.
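A minimal sketch using Python's subprocess module (the script names are placeholders):

```
import subprocess

scripts = ["first.py", "second.py", "third.py"]  # placeholder file list
for script in scripts:
    # run() blocks until each script exits, so they run strictly one after another
    subprocess.run(["python", script], check=True)
```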
2018/02/08
[ "https://Stackoverflow.com/questions/48689158", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4782295/" ]
I would suggest gsl-like syntactic sugar to mark that it is not a pointer you manage. Something like: ``` template<class T> using observer = T; observer<library_managed_object *> foo = nullptr; ``` You can also use, as suggested elsewhere, the `observer_ptr`. And one final word: in the world of C++11 and later, using raw pointers is perfectly reasonable when you want to pass something like a nullable non-owning reference. If smart pointers show ownership, then the lack of one shows the lack of ownership. All in all, if you do not have legacy code with manually managed memory, don't be afraid to use raw pointers. As Herb Sutter says: > > Pass by \* or & to accept a widget independently of how the caller is > managing its lifetime. Most of the time, we don’t want to commit to a > lifetime policy in the parameter type, such as requiring the object be > held by a specific smart pointer, because this is usually needlessly > restrictive. As usual, use a \* if you need to express null (no > widget), otherwise prefer to use a &; and if the object is input-only, > write const widget\* or const widget&. > > > <https://herbsutter.com/2013/06/05/gotw-91-solution-smart-pointer-parameters/>
You can try [gsl::owner](https://github.com/Microsoft/GSL/blob/master/include/gsl/pointers) defined in the GSL project. It's not a type but more of a tag to define ownership. The [CPP core guidelines](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Ri-raw) define the use case of `gsl::owner`: > > mark owning pointers using owner from the guideline support library: > > > ``` owner<X*> compute(args) // It is now clear that ownership is transferred { owner<X*> res = new X{}; // ... return res; } ``` > > This tells analysis tools that res is an owner. That is, its value > must be deleted or transferred to another owner, as is done here by > the return. > > > owner is used similarly in the implementation of resource handles. > > > Another alternative is [observer\_ptr](http://en.cppreference.com/w/cpp/experimental/observer_ptr)
3,759
39,185,797
In Node.js, when I want to quickly check the value of something rather than busting out the debugger and stepping through, I quickly add a console.log(foo) and get a beautiful: ``` { lemmons: "pie", number: 9, fetch: function(){..} elements: { fire: 99.9 } } ``` Very clear! In Python I get this: ``` class LinkedList: head = None tail = None lemmons = 99 ``` `<__main__.LinkedList instance at 0x105989f80>` or with `vars()`, `{}` or with `dir()`, `['_LinkedList__Node', '__doc__', '__module__', 'append', 'get_tail', 'head', 'lemmons', 'remove', 'tail']` Yuck! Look at all that nonsense - I thought Python was supposed to be fast, beautiful and clean? Is this really how people do it? Do they implement custom `__str__` and `__repr__` for everything? Because that seems kind of crazy too...
2016/08/27
[ "https://Stackoverflow.com/questions/39185797", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5947872/" ]
Actually, there is a way to stop Java GC. Just use the Epsilon GC algorithm that was introduced as an experimental feature in Java 11. Just add the following two arguments to your JVM's startup script: ``` -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC ``` All or Nothing -------------- Now just keep in mind that this Java GC algorithm does no GC at all. So if you do any object allocation in your code, eventually you'll hit an `OutOfMemoryError` and your app will crash. But if your JVM is short lived, and you don't think that's an issue, give Epsilon GC a try. Just remember it's all or nothing. You can't [force Java GC](https://youtu.be/onjlJBDdeTk) and you can't [stop Java GC](https://youtu.be/aTMZGs0ZGPE) from happening if you use any of the other garbage collectors. The collector is non-deterministic, so control by programmers or admins just isn't possible out of the box.
By default the JVM runs the GC only when needed. This means you can't turn the GC off, or your program will fail. The simplest ways to reduce the impact of the JVM stopping are: * use a very small eden size, so that when it stops the pause is shorter than some acceptable time. * or make the eden size very large and delay the GC until it hardly matters, e.g. you can reduce your garbage rate and run for 24 hours or longer between minor GCs.
3,760
14,817,210
I have quite a simple question here. In Tkinter (Python), I was wondering how to use a button to go to different pages of my application, e.g. a register page and a login page. I am aware that a GUI does not have 'pages' like websites do. I've seen a few different ways, but what is the best way to make links to different pages?
2013/02/11
[ "https://Stackoverflow.com/questions/14817210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2061989/" ]
Make each page a frame. Then, all your buttons need to do is hide whatever is visible, then make the desired frame visible. A simple method to do this is to stack the frames on top of each other (this is one time when `place` makes sense) and then `lift()` the frame you want to be visible. This technique works best when all pages are the same size; in fact, it requires that you explicitly set the size of the containing frame. The following is a contrived example. This isn't the only way to solve the problem, just proof that it's not a particularly hard problem to solve: ``` import Tkinter as tk class Page(tk.Frame): def __init__(self, *args, **kwargs): tk.Frame.__init__(self, *args, **kwargs) def show(self): self.lift() class Page1(Page): def __init__(self, *args, **kwargs): Page.__init__(self, *args, **kwargs) label = tk.Label(self, text="This is page 1") label.pack(side="top", fill="both", expand=True) class Page2(Page): def __init__(self, *args, **kwargs): Page.__init__(self, *args, **kwargs) label = tk.Label(self, text="This is page 2") label.pack(side="top", fill="both", expand=True) class Page3(Page): def __init__(self, *args, **kwargs): Page.__init__(self, *args, **kwargs) label = tk.Label(self, text="This is page 3") label.pack(side="top", fill="both", expand=True) class MainView(tk.Frame): def __init__(self, *args, **kwargs): tk.Frame.__init__(self, *args, **kwargs) p1 = Page1(self) p2 = Page2(self) p3 = Page3(self) buttonframe = tk.Frame(self) container = tk.Frame(self) buttonframe.pack(side="top", fill="x", expand=False) container.pack(side="top", fill="both", expand=True) p1.place(in_=container, x=0, y=0, relwidth=1, relheight=1) p2.place(in_=container, x=0, y=0, relwidth=1, relheight=1) p3.place(in_=container, x=0, y=0, relwidth=1, relheight=1) b1 = tk.Button(buttonframe, text="Page 1", command=p1.show) b2 = tk.Button(buttonframe, text="Page 2", command=p2.show) b3 = tk.Button(buttonframe, text="Page 3", command=p3.show) b1.pack(side="left") b2.pack(side="left") b3.pack(side="left") p1.show() if __name__ == "__main__": root = tk.Tk() main = MainView(root) main.pack(side="top", fill="both", expand=True) root.wm_geometry("400x400") root.mainloop() ```
Could you do something like this? ``` import tkinter def page1(): page2text.pack_forget() page1text.pack() def page2(): page1text.pack_forget() page2text.pack() window = tkinter.Tk() page1btn = tkinter.Button(window, text="Page 1", command=page1) page2btn = tkinter.Button(window, text="Page 2", command=page2) page1text = tkinter.Label(window, text="This is page 1") page2text = tkinter.Label(window, text="This is page 2") page1btn.pack() page2btn.pack() page1text.pack() ``` It seems a lot simpler to me.
3,761
56,642,128
I have a data set with columns titled product name, brand, rating (1:5), review text, and review-helpfulness. What I need is to propose a recommendation algorithm using the reviews, coded in Python. The data set is in .csv format. To identify the nature of the data set, I need to use k-means on it. How do I use k-means on this data set? Thus I did the following: 1. data pre-processing, 2. review text data cleaning, 3. sentiment analysis, 4. giving a sentiment score from 1 to 5 according to the sentiment value (given by the sentiment analysis) and tagging reviews as very negative, negative, neutral, positive, or very positive. After these procedures I have these columns in my data set: product name, brand, rating (1:5), review text, review-helpfulness, sentiment-value, sentiment-tag. This is the link to the data set <https://drive.google.com/file/d/1YhCJNvV2BQk0T7PbPoR746DCL6tYmH7l/view?usp=sharing> I tried to get k-means using the following code. It runs without error, but I don't know whether this is something useful, or whether there are other ways to use k-means on this data set to get more useful outputs. To identify more about the data, how should I use k-means on this data set? ``` import pandas as pd import numpy as np from sklearn.cluster import KMeans import matplotlib.pyplot as plt df.info() X = np.array(df.drop(['sentiment_value'], 1).astype(float)) y = np.array(df['rating']) kmeans = KMeans(n_clusters=2) kmeans.fit(X) KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300, n_clusters=2, n_init=10, n_jobs=1, precompute_distances='auto', random_state=None, tol=0.0001, verbose=0) plt.show() ```
2019/06/18
[ "https://Stackoverflow.com/questions/56642128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9725182/" ]
You did not plot anything. So nothing shows up.
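For example, a minimal sketch that actually draws something, reusing the question's `kmeans`, `X` and `plt` (and assuming the first two columns of `X` are worth visualizing):

```
labels = kmeans.labels_
plt.scatter(X[:, 0], X[:, 1], c=labels)                # points, colored by cluster
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], marker='x')  # cluster centers
plt.show()
```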
Unless you are more specific about what you are trying to achieve we won't be able to help. Figure out what exactly you want to predict. Do you just want to cluster products according to their sentiment score which isn't especially promising or do you want to predict actual product preferences on a new dataset? If you want to build a recommendation system the only possibility (considering your dataset) would be to identify similar products according to the rating/sentiment. Is that what you want?
3,764
69,557,664
I have a custom python logger ``` # logger.py import logging #logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger(__name__) c_handler = logging.StreamHandler() c_handler.setLevel(logging.DEBUG) c_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s') c_handler.setFormatter(c_format) logger.addHandler(c_handler) ``` I have set the level to DEBUG, but only WARNINGS (and above) are shown ``` from ..logger import logger ... logger.debug('this is a debug log message') logger.warning('too hot to handle') ... ``` > > my\_module.logger:too hot to handle > > > if I uncomment the line ``` logging.basicConfig(level=logging.DEBUG) ``` then I get the DEBUG level, but two copies of the message > > my\_module.logger - DEBUG - this is a debug log message > > > > > DEBUG:my\_module.logger:this is a debug log message > > > > > my\_module.logger - WARNING - too hot to handle > > > > > WARNING:my\_module.logger:too hot to handle > > > I am not importing *logging* at any other point in the package How should I configure the logger?
2021/10/13
[ "https://Stackoverflow.com/questions/69557664", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3070181/" ]
TL;DR Use `logger.setLevel(logging.DEBUG)` --- According to [Python documentation](https://docs.python.org/3/library/logging.html#logging.Logger.setLevel), a handler processes messages with a level equal to or higher than the handler is set to (via `.setLevel()`). But also note, emphasis mine: > > When a logger is created, the level is set to `NOTSET` (which causes all messages to be processed when the logger is the root logger, **or delegation to the parent when the logger is a non-root logger**). Note that the root logger is created with level `WARNING`. > > > So without `logging.basicConfig`, there's no "root logger" at program startup, and your first `getLogger()` creates a stub root logger with default level WARNING, and your logger with level NOTSET (which falls back to that of the root logger). As a result, your `logger.debug` message is thrown away before it gets handled. With `logging.basicConfig`, you explicitly create a root logger with the given level *and [**a StreamHandler** with default Formatter](https://docs.python.org/3/library/logging.html#logging.basicConfig)*. Your new `getLogger()` is attached to the root logger and any log record is [propagated](https://docs.python.org/3/library/logging.html#logging.Logger.propagate) to the root logger - thus printing twice with a different formatter (the default one indeed). The stub root logger created by the first call to `getLogger()` has no handler attached, so any propagated record is not printed out. If you want to have full control over your logging facility, it's better to give your logger an explicit level than to rely on `basicConfig`, which creates a root logger that you may not want: ``` logger.setLevel(logging.DEBUG) ```
Having read the [docs](https://docs.python.org/3/library/logging.html#logging.Logger.setLevel) again, I realise that *propagate* is the attribute that I need to use to turn off the ancestor *logging* output. So my logger becomes ``` # logger.py import logging logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger(__name__) logger.propagate = False c_handler = logging.StreamHandler() c_handler.setLevel(logging.DEBUG) c_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s') c_handler.setFormatter(c_format) logger.addHandler(c_handler) ``` And I get just one log message and the debug level is used
3,765
65,343,093
I am working on a pipeline where the majority of code is within a python script that I call in the pipeline. In the script I would like to use the predefined variable System.AccessToken to make a call to the DevOps API that sets the status of a pull request. However, when I try to get the token using `os.environ['System.AccessToken']` I get a key error. Oddly though, it seems that System.AccessToken is set, because in the yaml file for the pipeline I am able to access the API like: ``` curl -u ":$(System.AccessToken)" URL ``` and get back a valid response. Is there something additional I need to do in Python to access this variable?
2020/12/17
[ "https://Stackoverflow.com/questions/65343093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11090784/" ]
After reviewing the page that Mani posted I found the answer. For most variables, something like System.AccessToken would have a corresponding SYSTEM\_ACCESSTOKEN. However, with a secret variable this is not the case. I was able to make it accessible to my python script by adding: ``` env: SYSTEM_ACCESSTOKEN: $(System.AccessToken) ``` to where the Python script is called in the pipeline's yaml file. See <https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#secret-variables> for more details.
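Then, inside the Python script, the token can be read under the mapped name:

```
import os

token = os.environ["SYSTEM_ACCESSTOKEN"]  # the name mapped in the YAML above
```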
It can work with this documentation: <https://learn.microsoft.com/de-de/azure/developer/python/azure-sdk-authenticate?tabs=cmd> (just change the page language to English). There must be a vault with an existing secret, a.k.a. SAS token. And I have to say, your code above is curl, not Python. --- ``` import os from azure.identity import DefaultAzureCredential from azure.keyvault.secrets import SecretClient # Acquire the resource URL vault_url = os.environ["KEY_VAULT_URL"] # Acquire a credential object credential = DefaultAzureCredential() # Acquire a client object secret_client = SecretClient(vault_url=vault_url, credential=credential) # Attempt to perform an operation retrieved_secret = secret_client.get_secret("secret-name-01") ``` Change the fields to your vault and secret, save the file as test.py, and run it. --- If you need the token outside, note that each environment has its own namespace. So either add it to the local context with export ..., or follow the Unix policy that "everything is a file" and write it to a file. Good practice here is to use ansible-vault or something similar: store it encrypted, use it when you need it, and read it from a file.
3,766
6,600,039
I'm trying to figure out if there is a quick way to test my Django view functions from either the Python or Django shell. How would I go about instantiating and passing in a faux HttpRequest object?
2011/07/06
[ "https://Stackoverflow.com/questions/6600039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/234723/" ]
If you're using Django 1.3, take a look at the included [RequestFactory](https://docs.djangoproject.com/en/1.3/topics/testing/#the-request-factory).
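A minimal sketch of how that looks from the shell (the view import path is hypothetical):

```
from django.test.client import RequestFactory
from myapp.views import my_view  # hypothetical view

factory = RequestFactory()
request = factory.get('/some/url/')
response = my_view(request)
print(response.status_code)
```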
Sounds like you want the django test client <https://docs.djangoproject.com/en/dev/topics/testing/#module-django.test.client>
3,768
19,130,113
I've got a database full of BlobKeys that were previously uploaded through the standard Google App Engine [create\_upload\_url()](https://developers.google.com/appengine/docs/python/blobstore/functions#create_upload_url) process, and each of the uploads went to the same Google Cloud Storage bucket by setting the `gs_bucket_name` argument. What I'd like to do is be able to decode the existing blobkeys so I can get their Google Cloud Storage filenames. I understand that I ***could*** have been using the [gs\_object\_name](https://developers.google.com/appengine/docs/python/blobstore/fileinfoclass#FileInfo_gs_object_name) property from the [FileInfo class](https://developers.google.com/appengine/docs/python/blobstore/fileinfoclass), except: > > You must save the gs\_object\_name yourself in your upload handler or > this data will be lost. (The other metadata for the object in GCS is stored > in GCS automatically, so you don't need to save that in your upload handler. > > > Meaning the `gs_object_name` property is only available in the upload handler, and if I haven't been saving it at that time, then it's lost. Also, [create\_gs\_key()](https://developers.google.com/appengine/docs/python/blobstore/functions#create_gs_key) doesn't do the trick because it instead takes a Google Storage filename and creates a blobkey. So, how can I take a blobkey that was previously uploaded to a Google Cloud Storage bucket through App Engine, and get its Google Cloud Storage filename? (python)
2013/10/02
[ "https://Stackoverflow.com/questions/19130113", "https://Stackoverflow.com", "https://Stackoverflow.com/users/361897/" ]
You can get the Cloud Storage filename only in the upload handler (fileInfo.gs\_object\_name) and store it in your database (see the sketch after this answer). After that it is lost, and it seems not to be preserved in BlobInfo or other metadata structures. > > Google says: Unlike BlobInfo metadata FileInfo metadata is not > persisted to datastore. (There is no blob key either, but you can > create one later if needed by calling create\_gs\_key.) You must save > the gs\_object\_name yourself in your upload handler or this data will > be lost. > > > <https://developers.google.com/appengine/docs/python/blobstore/fileinfoclass> Update: I was able to decode an SDK BlobKey in Blobstore-Viewer: "encoded\_gs\_file:base64-encoded-filename-here". However, the real thing is not base64 encoded. create\_gs\_key(filename, rpc=None) ... Google says: "Returns an encrypted blob key as a string." Does anyone have a guess why this is encrypted?
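For reference, a rough sketch of what "save it yourself in the upload handler" looks like (the `UploadRecord` datastore model is hypothetical):

```
from google.appengine.ext.webapp import blobstore_handlers

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        for info in self.get_file_infos():
            # persist the GCS object name now; it is not recoverable later
            UploadRecord(gs_object_name=info.gs_object_name).put()
```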
From the statement in the docs, it looks like the generated GCS filenames are lost. You'll have to use gsutil to manually browse your bucket. <https://developers.google.com/storage/docs/gsutil/commands/ls>
3,774
66,921,090
I am trying to create a SparkContext in a Jupyter notebook, but I am getting the following error: **Py4JError: org.apache.spark.api.python.PythonUtils.getPythonAuthSocketTimeout does not exist in the JVM** Here is my code ``` from pyspark import SparkContext, SparkConf conf = SparkConf().setMaster("local").setAppName("Groceries") sc = SparkContext(conf = conf) Py4JError Traceback (most recent call last) <ipython-input-20-5058f350f58a> in <module> 1 conf = SparkConf().setMaster("local").setAppName("My App") ----> 2 sc = SparkContext(conf = conf) ~/Documents/python38env/lib/python3.8/site-packages/pyspark/context.py in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls) 144 SparkContext._ensure_initialized(self, gateway=gateway, conf=conf) 145 try: --> 146 self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer, 147 conf, jsc, profiler_cls) 148 except: ~/Documents/python38env/lib/python3.8/site-packages/pyspark/context.py in _do_init(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, jsc, profiler_cls) 224 self._encryption_enabled = self._jvm.PythonUtils.isEncryptionEnabled(self._jsc) 225 os.environ["SPARK_AUTH_SOCKET_TIMEOUT"] = \ --> 226 str(self._jvm.PythonUtils.getPythonAuthSocketTimeout(self._jsc)) 227 os.environ["SPARK_BUFFER_SIZE"] = \ 228 str(self._jvm.PythonUtils.getSparkBufferSize(self._jsc)) ~/Documents/python38env/lib/python3.8/site-packages/py4j/java_gateway.py in __getattr__(self, name) 1528 answer, self._gateway_client, self._fqn, name) 1529 else: -> 1530 raise Py4JError( 1531 "{0}.{1} does not exist in the JVM".format(self._fqn, name)) 1532 Py4JError: org.apache.spark.api.python.PythonUtils.getPythonAuthSocketTimeout does not exist in the JVM ```
2021/04/02
[ "https://Stackoverflow.com/questions/66921090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7527164/" ]
This error is reported when the pyspark version is inconsistent with the Spark cluster version. Uninstall the current pyspark, then install the version that matches the Spark cluster. My Spark version is 3.0.2, so I run the following: ``` pip3 uninstall pyspark pip3 install pyspark==3.0.2 ```
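To verify the two match, a quick check on the driver side:

```
import pyspark
print(pyspark.__version__)  # should equal the cluster's Spark version, e.g. 3.0.2
```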
I have had the same error today and resolved it with the below code: Execute this in a separate cell before you have your spark session builder ``` from pyspark import SparkContext,SQLContext,SparkConf,StorageLevel from pyspark.sql import SparkSession from pyspark.conf import SparkConf SparkSession.builder.config(conf=SparkConf()) ```
3,777
4,787,291
I'm writing an application. No fancy GUIs or anything, just a plain old console application. This application, let's call it App, needs to be able to load plugins on startup. So, naturally, I created a class for the plugins to inherit from: ``` class PluginBase(object): def on_load(self): pass def on_unload(self): pass def do_work(self, data): pass ``` The idea being that on startup, App would walk through the current dir, including subdirs, searching for modules containing classes that themselves are subclasses of `PluginBase`. More code: ``` class PluginLoader(object): def __init__(self, path, cls): """ path=path to search (unused atm), cls=baseclass """ self.path=path def search(self): for root, dirs, files in os.walk('.'): candidates = [fname for fname in files if fname.endswith('.py') \ and not fname.startswith('__')] ## this only works if the modules happen to be in the current working dir ## that is not important now, I'll fix that later if candidates: basename = os.path.split(os.getcwd())[1] for c in candidates: modname = os.path.splitext(c)[0] modname = '{0}.{1}'.format(basename, modname) __import__(modname) module = sys.modules[modname] ``` After that last line in `search` I'd like to somehow a) find all classes in the newly loaded module, b) check if one or more of those classes are subclasses of `PluginBase` and c) (if b) instantiate that/those classes and add to App's list of loaded modules. I've tried various combinations of `issubclass` and others, followed by a period of intense `dir`:ing and about an hour of panicked googling. I did find a similar approach to mine [here](http://www.luckydonkey.com/2008/01/02/python-style-plugins-made-easy/) and I tried just copy-pasting that but got an error saying that Python doesn't support imports by filename, at which point I kind of lost my concentration and as a result of that, this post was written. I'm at my wits' end here; all help appreciated.
2011/01/24
[ "https://Stackoverflow.com/questions/4787291", "https://Stackoverflow.com", "https://Stackoverflow.com/users/350784/" ]
You would make this a lot easier if you forced some constraints on the plugin writer, for example that all plugins must be packages that contain a `load_plugin( app, config)` function that returns a Plugin instance. Then all you have to do is try to import these packages and run the function.
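A rough sketch of that convention (modern Python, 3.6+; the `plugins` package name is an assumption):

```
import importlib
import pkgutil

def load_plugins(app, config, package="plugins"):
    """Import every module in `package` and call its load_plugin()."""
    loaded = []
    pkg = importlib.import_module(package)
    for info in pkgutil.iter_modules(pkg.__path__):
        module = importlib.import_module(package + "." + info.name)
        if hasattr(module, "load_plugin"):
            loaded.append(module.load_plugin(app, config))
    return loaded
```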
Could you use execfile() instead of import with a specified namespace dict, then iterate over that namespace with issubclass, etc?
3,780
29,463,921
A frog wants to cross a river. There are 3 stones in the river she can jump to. She wants to choose, among all possible paths, the one that leads to the smallest longest jump. I.e., each of the possible paths will have one jump that is the longest; she needs to find the path where this longest jump is smallest. The 2 shores are 10 apart and are parallel to the y axis. Each stone position is given by a list x=[x1,x2,x3] of the x positions and y=[y1,y2,y3] of the y positions. Return both the longest jump in this path (rounded to the closest integer) and the path itself through a list of indices in the lists x and y of the stones in the path. Here is my Python code to find the longest jump. How would I track the path itself? Also, my code looks clumsy with 3 nested loops; is there a better/more elegant way to write this code? ``` def longestJump(x, y): best = 10 for i in range(0,3): for j in range(0,3): for k in range(0,3): # first jump from shore to a stone dist = x[i] # second jump between stones dist = max(dist, round(math.sqrt((x[i]-x[j])**2 + (y[i]-y[j])**2))) # third jump between stones dist = max(dist, round(math.sqrt((x[i]-x[k])**2 + (y[i]-y[k])**2))) dist = max(dist, round(math.sqrt((x[j]-x[k])**2 + (y[j]-y[k])**2))) # last jump from a stone to the opposite shore dist = max(dist, 10 - x[j]) best = min(dist, best) return best ```
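Is something like this brute-force sketch the right direction for tracking the path? (It assumes the near shore is at x=0, as in my first-jump distance, and uses math.dist from Python 3.8+.)

```
import itertools, math

def best_path(x, y):
    stones = list(zip(x, y))
    best, path = 10.0, []  # jumping straight across uses no stones
    for r in range(1, len(stones) + 1):
        for order in itertools.permutations(range(len(stones)), r):
            pts = [stones[i] for i in order]
            jumps = [pts[0][0]]                                       # shore -> first stone
            jumps += [math.dist(a, b) for a, b in zip(pts, pts[1:])]  # stone -> stone
            jumps.append(10 - pts[-1][0])                             # last stone -> far shore
            if max(jumps) < best:
                best, path = max(jumps), list(order)
    return round(best), path
```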
2015/04/06
[ "https://Stackoverflow.com/questions/29463921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4542063/" ]
I would use `Array()` to normalize the input, and then there is only one case left: ``` work.map! do |w| element = Array(w).first console.button_map[element] || element end ```
I settled on this, not sure if it can be cleaner: ``` work.map! do |w| if w.is_a? Array w.tap{|x| x[0] = console.button_map[x[0]] || x[0] } else console.button_map[w] || w end end ```
3,785
49,582,981
I have a Flask app in a Docker container that writes to a local copy of a SQLite db. What I want to do is move the db out of the container and have it reside on my host. How do I set up Docker so it runs the Python code from the container but reads and writes the SQLite db on the host?
2018/03/31
[ "https://Stackoverflow.com/questions/49582981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9577029/" ]
Use bind-mount to share host file to container. If you have the SQLite DB file as `app.db`, you can run your container with the `-v` flag (or the `--mount` flag): ``` docker run -v /absolute/path/to/app.db:/flask/app/app.db <IMAGE> ``` Docs: <https://docs.docker.com/storage/bind-mounts/>
You can either * set up ownership privileges on your host directory to match the `uid`:`gid` of the user in the container, or * change the `uid`:`gid` of the user in the container to numerically match the `uid`:`gid` of the host user who owns the directory with the SQLite db file. Great answers for both approaches are described [here](https://stackoverflow.com/questions/29245216/write-in-shared-volumes-docker/29251160#29251160)
3,788
52,710,878
I created conda environment and install pytorch and fastai (Mac OS Mojave) as below: ``` conda create -n fai_course python=3.7 source activate fai_course conda install -c pytorch pytorch-nightly-cpu conda install -c fastai torchvision-nightly-cpu jupyter notebook ``` When I import a package from jupyter notebook, I get OSError as shown below: ``` from fastai.imports import * ``` - ``` -------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/font_manager.py in <module> 1352 try: -> 1353 fontManager = json_load(_fmcache) 1354 if (not hasattr(fontManager, '_version') or /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/font_manager.py in json_load(filename) 887 """ --> 888 with open(filename, 'r') as fh: 889 return json.load(fh, object_hook=_json_decode) FileNotFoundError: [Errno 2] No such file or directory: '/Users/user/.matplotlib/fontlist-v300.json' During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-5-9f9378ae0f2a> in <module> ----> 1 from fastai.imports import * /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastai/__init__.py in <module> ----> 1 from .basic_train import * 2 from .callback import * 3 from .callbacks import * 4 from .core import * 5 from .data import * /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastai/basic_train.py in <module> 1 "Provides basic training and validation with `Learner`" ----> 2 from .torch_core import * 3 from .data import * 4 from .callback import * 5 /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastai/torch_core.py in <module> 1 "Utility functions to help deal with tensors" ----> 2 from .imports.torch import * 3 from .core import * 4 5 AffineMatrix = Tensor /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastai/imports/__init__.py in <module> ----> 1 from .core import * 2 from .torch import * /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/fastai/imports/core.py in <module> 1 import csv, gc, gzip, os, pickle, shutil, sys, warnings ----> 2 import math, matplotlib.pyplot as plt, numpy as np, pandas as pd, random 3 import scipy.stats, scipy.special 4 import abc, collections, hashlib, itertools, json, operator 5 import mimetypes, inspect, typing, functools /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/pyplot.py in <module> 30 from cycler import cycler 31 import matplotlib ---> 32 import matplotlib.colorbar 33 import matplotlib.image 34 from matplotlib import rcsetup, style /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/colorbar.py in <module> 30 import matplotlib.collections as collections 31 import matplotlib.colors as colors ---> 32 import matplotlib.contour as contour 33 import matplotlib.cm as cm 34 import matplotlib.gridspec as gridspec /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/contour.py in <module> 16 import matplotlib.colors as mcolors 17 import matplotlib.collections as mcoll ---> 18 import matplotlib.font_manager as font_manager 19 import matplotlib.text as text 20 import matplotlib.cbook as cbook 
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/font_manager.py in <module> 1361 raise 1362 except Exception: -> 1363 _rebuild() 1364 else: 1365 _rebuild() /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/font_manager.py in _rebuild() 1342 global fontManager 1343 -> 1344 fontManager = FontManager() 1345 1346 if _fmcache: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/font_manager.py in __init__(self, size, weight) 976 self.defaultFont = {} 977 --> 978 ttffiles = findSystemFonts(paths) + findSystemFonts() 979 self.defaultFont['ttf'] = next( 980 (fname for fname in ttffiles /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/font_manager.py in findSystemFonts(fontpaths, fontext) 268 # check for OS X & load its fonts if present 269 if sys.platform == 'darwin': --> 270 fontfiles.update(OSXInstalledFonts(fontext=fontext)) 271 272 elif isinstance(fontpaths, str): /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/font_manager.py in OSXInstalledFonts(directories, fontext) 216 directories = OSXFontDirectories 217 return [path --> 218 for directory in directories 219 for ext in get_fontext_synonyms(fontext) 220 for path in list_fonts(directory, ext)] /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/font_manager.py in <listcomp>(.0) 218 for directory in directories 219 for ext in get_fontext_synonyms(fontext) --> 220 for path in list_fonts(directory, ext)] 221 222 /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/font_manager.py in list_fonts(directory, extensions) 155 extensions = ["." + ext for ext in extensions] 156 return [str(path) --> 157 for path in filter(Path.is_file, Path(directory).glob("**/*.*")) 158 if path.suffix in extensions] 159 /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/matplotlib/font_manager.py in <listcomp>(.0) 154 """ 155 extensions = ["." 
+ ext for ext in extensions] --> 156 return [str(path) 157 for path in filter(Path.is_file, Path(directory).glob("**/*.*")) 158 if path.suffix in extensions] /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py in glob(self, pattern) 1080 raise NotImplementedError("Non-relative patterns are unsupported") 1081 selector = _make_selector(tuple(pattern_parts)) -> 1082 for p in selector.select_from(self): 1083 yield p 1084 /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py in _select_from(self, parent_path, is_dir, exists, scandir) 541 try: 542 successor_select = self.successor._select_from --> 543 for starting_point in self._iterate_directories(parent_path, is_dir, scandir): 544 for p in successor_select(starting_point, is_dir, exists, scandir): 545 if p not in yielded: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py in _iterate_directories(self, parent_path, is_dir, scandir) 531 if entry.is_dir() and not entry.is_symlink(): 532 path = parent_path._make_child_relpath(entry.name) --> 533 for p in self._iterate_directories(path, is_dir, scandir): 534 yield p 535 except PermissionError: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py in _iterate_directories(self, parent_path, is_dir, scandir) 531 if entry.is_dir() and not entry.is_symlink(): 532 path = parent_path._make_child_relpath(entry.name) --> 533 for p in self._iterate_directories(path, is_dir, scandir): 534 yield p 535 except PermissionError: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py in _iterate_directories(self, parent_path, is_dir, scandir) 531 if entry.is_dir() and not entry.is_symlink(): 532 path = parent_path._make_child_relpath(entry.name) --> 533 for p in self._iterate_directories(path, is_dir, scandir): 534 yield p 535 except PermissionError: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py in _iterate_directories(self, parent_path, is_dir, scandir) 531 if entry.is_dir() and not entry.is_symlink(): 532 path = parent_path._make_child_relpath(entry.name) --> 533 for p in self._iterate_directories(path, is_dir, scandir): 534 yield p 535 except PermissionError: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py in _iterate_directories(self, parent_path, is_dir, scandir) 531 if entry.is_dir() and not entry.is_symlink(): 532 path = parent_path._make_child_relpath(entry.name) --> 533 for p in self._iterate_directories(path, is_dir, scandir): 534 yield p 535 except PermissionError: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py in _iterate_directories(self, parent_path, is_dir, scandir) 531 if entry.is_dir() and not entry.is_symlink(): 532 path = parent_path._make_child_relpath(entry.name) --> 533 for p in self._iterate_directories(path, is_dir, scandir): 534 yield p 535 except PermissionError: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py in _iterate_directories(self, parent_path, is_dir, scandir) 531 if entry.is_dir() and not entry.is_symlink(): 532 path = parent_path._make_child_relpath(entry.name) --> 533 for p in self._iterate_directories(path, is_dir, scandir): 534 yield p 535 except PermissionError: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py in _iterate_directories(self, parent_path, is_dir, scandir) 529 entries = list(scandir(parent_path)) 530 for entry in entries: --> 531 if entry.is_dir() and not entry.is_symlink(): 532 path = 
parent_path._make_child_relpath(entry.name) 533 for p in self._iterate_directories(path, is_dir, scandir): OSError: [Errno 62] Too many levels of symbolic links: '.Trash/NETGEARGenie.app/Contents/Frameworks/QtPrintSupport.framework/Versions/5/5' ``` Can you please let me know what I am missing? Thanks
2018/10/08
[ "https://Stackoverflow.com/questions/52710878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3899975/" ]
The comments say you "should" do a print, but nothing says you cannot print anything else after the print. Nothing even *forces* you to do a print, otherwise it would be a *shall*.

---

To be honest, reading questions about homework like this one makes me unhappy. To me, the whole thing is useless, ugly, and does not teach anything valuable for future real-world software design and coding. The one thing it teaches is: given poor specifications, do poor work to please the client. *Well ... maybe that is a good lesson to learn?*

Since you did not give any piece of code of what you tried, nor any hint about what you understood, I took the liberty of proposing some kind of a solution that does the required prints. So here are 4 implementations. I tried to do this in a rough way. Maybe others would find other weird ways to do it.

```
import java.util.HashMap;
import java.util.Map;

public class Main {

    private static final String REQUEST_MUST_NOT_BE_EMPTY = "request must not be empty";
    private static final String REQUEST_IS_REQUIRED = "request is required";
    private static final String REQUEST = "request";
    private static final String EMPTY_STRING = "";

    private static final Map<String, Object> requestMessageMap = new HashMap<String, Object>();

    static {
        // requestMessageMap.put(REQUEST, Integer.valueOf(7));
        // requestMessageMap.put(REQUEST, Integer.valueOf(REQUEST.length()));
        requestMessageMap.put(EMPTY_STRING, REQUEST_MUST_NOT_BE_EMPTY);
        requestMessageMap.put(null, REQUEST_IS_REQUIRED);
    }

    public static void main(String[] args) {
        // Should print 7
        System.out.println(stringLength("request")); // this line cannot be changed

        // Should print "request must not be empty"
        System.out.println(stringLength("")); // this line cannot be changed

        // Should print "request is required"
        System.out.println(stringLength(null)); // this line cannot be changed
    }

    public static Integer stringLength(String request) // this line cannot be changed
    {
        return sillyMethod4(request);
    }

    private static Integer sillyMethod1(String request) {
        Integer returnValue = -1;
        if (request == null) {
            // do exactly what the specification required
            // (completely pointless)
            System.err.println(REQUEST_IS_REQUIRED);
        } else if (request.equals(EMPTY_STRING)) {
            // do exactly what the specification required
            // (completely pointless)
            System.err.println(REQUEST_MUST_NOT_BE_EMPTY);
        } else if (request.equals(REQUEST)) {
            // do exactly what the specification required
            // (completely pointless)
            returnValue = 7;
        } else {
            // my best guess about what we should really do
            returnValue = request.length();
        }
        return returnValue;
    }

    private static Integer lessSillyMethod2(String request) {
        Integer returnValue = -1;
        if (request == null) {
            // do exactly what the specification required
            // (completely pointless)
            System.err.println(REQUEST_IS_REQUIRED);
        } else if (request.equals(EMPTY_STRING)) {
            // do exactly what the specification required
            // (completely pointless)
            System.err.println(REQUEST_MUST_NOT_BE_EMPTY);
        } else {
            // my best guess about what we should really do
            returnValue = request.length();
        }
        return returnValue;
    }

    private static Integer sillyMethod3(String request) {
        Integer returnValue = -1;
        if (request == null) {
            // do exactly what the specification required
            // (completely pointless)
            System.err.println("request is required: ");
        } else {
            switch (request) {
            case EMPTY_STRING:
                // do exactly what the specification required
                // (completely pointless)
                System.err.println("request must not be empty: ");
                break;
            case REQUEST:
                // do exactly what the specification required
                // (completely pointless)
                returnValue = 7;
                // break;
            default:
                // my best guess about what we should really do
                returnValue = request.length();
                break;
            }
        }
        return returnValue;
    }

    private static Integer sillyMethod4(String request) {
        Integer returnValue = -1;
        if (requestMessageMap.containsKey(request)) {
            System.err.println(requestMessageMap.get(request));
        } else {
            returnValue = request.length();
        }
        return returnValue;
    }
}
```

Output:

```
7
request must not be empty
request is required
-1
-1
```
The method stringLength() should return "7" when you pass it the string "request", a different value when you pass it an empty string, and another different value when you pass null. Maybe you should take a look at control structures, especially selections. Also, your stringLength method needs to return values of type String.
3,789
32,042,679
I saw a [twitter post](https://twitter.com/kssreeram/status/627477751797121024) pointing out that -12/10 = -2 in Python. What causes this? I thought the answer should (mathematically) be -1. Why does Python "literally" round down like this?

```
>>> -12/10
-2
>>> 12/10
1
>>> -1*12/10
-2
>>> 12/10 * -1
-1
```
2015/08/17
[ "https://Stackoverflow.com/questions/32042679", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3474956/" ]
This is due to [int](https://docs.python.org/2/library/functions.html#int) rounding down divisions (aka [floor division](http://python-history.blogspot.com.br/2010/08/why-pythons-integer-division-floors.html)).

```
>>> -12/10
-2
>>> -12.0/10
-1.2
>>> 12/10
1
>>> 12.0/10
1.2
```
This is known as floor division (aka integer division). In Python 2, this is the default behavior for `-12/10`. In Python 3, the default behavior is to use floating point division. To enable the Python 3 behavior in Python 2, use the following import statement:

```
from __future__ import division
```

To use floor division in Python 3, or in Python 2 with this import in place, use `//`. More information can be found in the [Python documentation](https://docs.python.org/2/whatsnew/2.2.html#pep-238-changing-the-division-operator), "PEP 238: Changing the Division Operator".
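To make the behaviors concrete, a quick sketch (Python 3 shown; the `//` lines behave the same in Python 2):

```python
# True division always returns a float
print(-12 / 10)   # -1.2

# Floor division rounds toward negative infinity, not toward zero
print(-12 // 10)  # -2
print(12 // 10)   # 1

import math
print(math.floor(-1.2))  # -2, matching what // does
```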
3,790
27,102,518
I need to optimize this regular expression.

```
^(.+?)\|[\w\d]+?\s+?(\d\d\/\d\d\/\d\d\d\d\s+?\d\d:\d\d:\d\d\.\d\d\d)[\s\d]+?\s+?(\d+?)\s+?\d+?\s+?(\d+?)$
```

The input is something like this:

```
-tpf0q16|856B 11/20/2014 00:00:00.015 0 0 0 0 0 689 14 689 703 702 701 700
```

I've already replaced all greedy matches with lazy matches, but this didn't help. I've used DOTALL, but it didn't help either. I use Python and PCRE (the re module); I know about re2 but I can't use it :(
2014/11/24
[ "https://Stackoverflow.com/questions/27102518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/42371/" ]
The first step is to get rid of the unneeded reluctant (a.k.a. "lazy") quantifiers. According to RegexBuddy, your regex:

```
^(.+?)\|[\w\d]+?\s+?(\d\d\/\d\d\/\d\d\d\d\s+?\d\d:\d\d:\d\d\.\d\d\d)[\s\d]+?\s+?(\d+?)\s+?\d+?\s+?(\d+?)$
```

...takes 6425 steps to match your sample string. This one:

```
^(.+?)\|[\w\d]+\s+(\d\d\/\d\d\/\d\d\d\d\s+\d\d:\d\d:\d\d\.\d\d\d)[\s\d]+\s+(\d+)\s+\d+\s+(\d+)$
```

...takes 716 steps. Reluctant quantifiers reduce backtracking by doing more work up front. Your regex wasn't prone to excessive backtracking, so the reluctant quantifiers were *adding* quite a lot to the workload. This version brings it down to 237 steps:

```
^([^|]+)\|\w+\s+(\d\d/\d\d/\d\d\d\d\s+\d\d:\d\d:\d\d\.\d\d\d)(?:\s+\d+)+\s+(\d+)\s+\d+\s+(\d+)$
```

It also removes some noise, like the backslash before `/`; and `[\w\d]`, which is exactly the same as `\w`.
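As a quick sanity check, a minimal Python sketch (using the sample line from the question) showing that the streamlined pattern still captures the same groups:

```python
import re

# The optimized pattern from above, split for readability
pattern = re.compile(
    r'^([^|]+)\|\w+\s+'
    r'(\d\d/\d\d/\d\d\d\d\s+\d\d:\d\d:\d\d\.\d\d\d)'
    r'(?:\s+\d+)+\s+(\d+)\s+\d+\s+(\d+)$'
)

line = "-tpf0q16|856B 11/20/2014 00:00:00.015 0 0 0 0 0 689 14 689 703 702 701 700"
match = pattern.match(line)
if match:
    print(match.groups())
    # ('-tpf0q16', '11/20/2014 00:00:00.015', '702', '700')
```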
A bit more optimized:

```
>>> import re
>>> s = "-tpf0q16|856B 11/20/2014 00:00:00.015 0 0 0 0 0 689 14 689 703 702 701 700"
>>> re.findall(r'(?m)^([^|]+)\|[\w\d]+?\s+?(\d{2}\/\d{2}\/\d{4}\s+\d{2}:\d{2}:\d{2}\.\d{3})[\s\d]+?(\d+)\s+\d+\s+(\d+?)$', s)
[('-tpf0q16', '11/20/2014 00:00:00.015', '702', '700')]
```

[DEMO](http://regex101.com/r/zU7dA5/11)
3,791
24,995,438
I can run iPython, but when I try to initiate a notebook I get the following error: ``` ~ ipython notebook Traceback (most recent call last): File "/usr/local/bin/ipython", line 8, in <module> load_entry_point('ipython==2.1.0', 'console_scripts', 'ipython')() File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/__init__.py", line 120, in start_ipython return launch_new_instance(argv=argv, **kwargs) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 563, in launch_instance app.initialize(argv) File "<string>", line 2, in initialize File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 92, in catch_config_error return method(app, *args, **kwargs) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/terminal/ipapp.py", line 321, in initialize super(TerminalIPythonApp, self).initialize(argv) File "<string>", line 2, in initialize File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 92, in catch_config_error return method(app, *args, **kwargs) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/core/application.py", line 381, in initialize self.parse_command_line(argv) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/terminal/ipapp.py", line 316, in parse_command_line return super(TerminalIPythonApp, self).parse_command_line(argv) File "<string>", line 2, in parse_command_line File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 92, in catch_config_error return method(app, *args, **kwargs) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 475, in parse_command_line return self.initialize_subcommand(subc, subargv) File "<string>", line 2, in initialize_subcommand File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 92, in catch_config_error return method(app, *args, **kwargs) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 406, in initialize_subcommand subapp = import_item(subapp) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/utils/importstring.py", line 42, in import_item module = __import__(package, fromlist=[obj]) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/html/notebookapp.py", line 39, in <module> check_for_zmq('2.1.11', 'IPython.html') File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/utils/zmqrelated.py", line 37, in check_for_zmq raise ImportError("%s requires pyzmq >= %s"%(required_by, minimum_version)) ImportError: IPython.html requires pyzmq >= 2.1.11 ``` But as far as I can see, I already have the pyzmq package installed. ``` ~ pip install pyzmq Requirement already satisfied (use --upgrade to upgrade): pyzmq in /Library/Python/2.7/site-packages/pyzmq-14.3.1-py2.7-macosx-10.6-intel.egg Cleaning up... ```
2014/07/28
[ "https://Stackoverflow.com/questions/24995438", "https://Stackoverflow.com", "https://Stackoverflow.com/users/54564/" ]
Arg. The *ipython* install is a little idiosyncratic. Here's what I had to do to resolve this:

```
$ pip uninstall ipython
$ pip install "ipython[all]"
```

The issue is that notebooks have their own set of dependencies, which aren't installed with `pip install ipython`. However, having installed *ipython*, pip doesn't see the need to add anything if you then try the `[all]` form.

As mentioned in comments, for some shells (e.g. zsh) it's necessary to escape or quote the square brackets (`pip install ipython\[all\]` would also work).
For me (Ubuntu 14.04.2), installing via the Synaptic package manager worked: the package is called python3-zmq, and installing it pulls in libzmq3 as well. After that, check whether pyzmq is correctly installed:

```
pip list
```

Then I installed ipython:

```
pip install "ipython[all]"
```
3,792
48,452,294
I have a python script that accepts a `-f` flag, and appends multiple uses of the flag. For example, if I run `python myscript -f file1.txt -f file2.txt`, I would have a list of files, `files=['file1.txt', 'file2.txt']`. This works great, but I'm wondering how I can automatically use the results of a find command to append as many `-f` flags as there are files. I've tried:

```
find ./ -iname '*.txt' -print0 | xargs python myscript.py -f
```

But it only grabs the first file.
2018/01/25
[ "https://Stackoverflow.com/questions/48452294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4509191/" ]
With the caveat that this will fail if there are more files than will fit on a single command line (whereas `xargs` would run `myscript.py` multiple times, each with a subset of the full list of arguments): ``` #!/usr/bin/env bash args=( ) while IFS= read -r -d '' name; do args+=( -f "$name" ) done < <(find . -iname '*.txt' -print0) python myscript.py "${args[@]}" ``` If you want to do this **safely** in a way that tolerates an arbitrary number of filenames, you're better off using a long-form option -- such as `--file` rather than `-f` -- with the `=` separator allowing the individual name to be passed as part of the same argv entry, thus preventing `xargs` from splitting a filename apart from the sigil that precedes it: ``` #!/usr/bin/env bash # This requires -printf, a GNU find extension find . -iname '*.txt' -printf '--file=%p\0' | xargs -0 python myscript.py ``` ...or, more portably (running on MacOS, albeit still requiring a shell -- such as bash -- that can handle NUL-delimited reads): ``` #!/usr/bin/env bash # requires find -print0 and xargs -0; these extensions are available on BSD as well as GNU find . -iname '*.txt' -print0 | while IFS= read -r -d '' f; do printf '--file=%s\0' "$f"; done | xargs -0 python myscript.py ```
Your title seems to imply that you can modify the script. In that case, use the `nargs` (number of args) option to allow more arguments for the `-f` flag: ``` parser = argparse.ArgumentParser() parser.add_argument('--files', '-f', nargs='+') args = parser.parse_args() print(args.files) ``` Then you can use your find command easily: ``` 15:44 $ find . -depth 1 | xargs python args.py -f ['./args.py', './for_clint', './install.sh', './sys_user.json'] ``` Otherwise, if you can't modify the script, see @CharlesDuffy's answer.
3,794
22,597,089
There are a lot of questions about installing matplotlib on mac, but as far as I can tell I've installed it correctly using pip and it's just not working. When I try and run a script with matplotlib.pyplot.plot(x, y) nothing happens. No error, no nothing. ``` import matplotlib.pyplot x = [1,2,3,4] y = [4,3,2,1] matplotlib.pyplot.plot(x, y) ``` When I run this in the terminal in a file called pyplot.py I get this: ``` pgcudahy$ python pyplot.py pgcudahy$ ``` No errors, but no plot either. In an interactive python shell I get this: ``` >>> import matplotlib >>> print matplotlib.__version__ 1.1.1 >>> print matplotlib.__file__ /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/matplotlib/__init__.pyc ``` Which leads me to believe it's installed correctly. Any ideas?
2014/03/23
[ "https://Stackoverflow.com/questions/22597089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2930596/" ]
You need to call the `show` function:

```
import matplotlib.pyplot as plt

x = [1,2,3,4]
y = [4,3,2,1]
plt.plot(x, y)
plt.show()
```
It's likely that the plot is hidden behind the editor window or the Spyder window on the screen. Instead of changing matplotlib settings, just learn the trackpad gestures of the Mac; "App Exposé" is the one you need to make your plots visible (see System Preferences, Trackpad). Then click on the figure to raise it to the front.
3,795
14,938,541
I use matplotlib to plot a scatter chart:

![enter image description here](https://i.stack.imgur.com/Nllqp.png)

And label the bubble using a transparent box according to the tip at [How to annotate point on a scatter automatically placed arrow](https://stackoverflow.com/q/9074996/7758804). Here is the code:

```
if show_annote:
    for i in range(len(x)):
        annote_text = annotes[i][0][0]  # STK_ID
        ax.annotate(annote_text, xy=(x[i], y[i]), xytext=(-10, 3),
                    textcoords='offset points', ha='center', va='bottom',
                    bbox=dict(boxstyle='round,pad=0.2', fc='yellow', alpha=0.2),
                    fontproperties=ANNOTE_FONT)
```

and the resulting plot:

![enter image description here](https://i.stack.imgur.com/v5FET.png)

But there is still room for improvement to reduce overlap (for instance, the label box offset is fixed at (-10,3)). Are there algorithms that can:

1. dynamically change the offset of the label box according to the crowdedness of its neighbourhood
2. dynamically place the label box remotely and add an arrow line between bubble and label box
3. somewhat change the label orientation
4. treat label\_box overlapping bubble as better than label\_box overlapping label\_box?

I just want to make the chart easy for human eyes to comprehend, so some overlap is OK, not as rigid a constraint as <http://en.wikipedia.org/wiki/Automatic_label_placement> suggests. And the bubble quantity within the chart is less than 150 most of the time.

I find the so-called `Force-based label placement` <http://bl.ocks.org/MoritzStefaner/1377729> quite interesting. I don't know if there is any python code/package available to implement the algorithm.

I am not an academic guy and not looking for an optimal solution, and my python code needs to label many, many charts, so speed/memory is within the scope of consideration. I am looking for a quick and effective solution. Any help (code, algorithm, tips, thoughts) on this subject? Thanks.
2013/02/18
[ "https://Stackoverflow.com/questions/14938541", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1072888/" ]
It is a little rough around the edges (I can't quite figure out how to scale the relative strengths of the spring network vs the repulsive force, and the bounding box is a bit screwed up), but this is a decent start:

```
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

N = 15
scatter_data = np.random.rand(3, N)

G = nx.Graph()
data_nodes = []
init_pos = {}
for j, b in enumerate(scatter_data.T):
    x, y, _ = b
    data_str = 'data_{0}'.format(j)
    ano_str = 'ano_{0}'.format(j)
    G.add_node(data_str)
    G.add_node(ano_str)
    G.add_edge(data_str, ano_str)
    data_nodes.append(data_str)
    init_pos[data_str] = (x, y)
    init_pos[ano_str] = (x, y)

# keep the data nodes fixed; only the annotation nodes get pushed around
pos = nx.spring_layout(G, pos=init_pos, fixed=data_nodes)

ax = plt.gca()
ax.scatter(scatter_data[0], scatter_data[1], c=scatter_data[2],
           s=scatter_data[2] * 150)

for j in range(N):
    data_str = 'data_{0}'.format(j)
    ano_str = 'ano_{0}'.format(j)
    ax.annotate(ano_str,
                xy=pos[data_str], xycoords='data',
                xytext=pos[ano_str], textcoords='data',
                arrowprops=dict(arrowstyle="->",
                                connectionstyle="arc3"))

all_pos = np.vstack(list(pos.values()))
mins = np.min(all_pos, 0)
maxs = np.max(all_pos, 0)
ax.set_xlim([mins[0], maxs[0]])
ax.set_ylim([mins[1], maxs[1]])
plt.draw()
```

![sample image](https://i.stack.imgur.com/xhM4q.png)

How well it works depends a bit on how your data is clustered.
We can use plotly for this, but it can't place labels without overlap when there is a lot of data. Instead, we can zoom in and zoom out.

```
import plotly.express as px

df = px.data.gapminder().query("year==2007 and continent=='Americas'")
fig = px.scatter(df, x="gdpPercap", y="lifeExp", text="country",
                 log_x=True, size_max=100, color="lifeExp",
                 title="Life Expectancy")
fig.update_traces(textposition='top center')
fig.show()
```

Output:

[![enter image description here](https://i.stack.imgur.com/Ei4n6.gif)](https://i.stack.imgur.com/Ei4n6.gif)
3,796
34,314,022
The documentation linked below seems to say that top-level classes can be pickled, as well as their instances. But based on the answers to my previous [question](https://stackoverflow.com/q/34261379/3904031) it seems not to be correct. In the script I posted, the pickle accepts the class object and writes a file, but this is not useful.

THIS IS MY QUESTION: Is this documentation wrong, or is there something more subtle I don't understand? Also, should pickle be generating some kind of error message in this case?

<https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled>

> The following types can be pickled:
>
> * None, True, and False
> * integers, long integers, floating point numbers, complex numbers
> * normal and Unicode strings
> * tuples, lists, sets, and dictionaries containing only picklable objects
> * functions defined at the top level of a module
> * built-in functions defined at the top level of a module
> * **classes that are defined at the top level of a module** ( *my bold* )
> * instances of such classes whose `__dict__` or the result of calling `__getstate__()` is picklable (see section The pickle protocol for details).
2015/12/16
[ "https://Stackoverflow.com/questions/34314022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
Make a class that is *defined at the top level of a module*: **foo.py**: ``` class Foo(object): pass ``` Then running a separate script, **script.py**: ``` import pickle import foo with open('/tmp/out.pkl', 'w') as f: pickle.dump(foo.Foo, f) del foo with open('/tmp/out.pkl', 'r') as f: cls = pickle.load(f) print(cls) ``` prints ``` <class 'foo.Foo'> ``` --- Note that the pickle file, `out.pkl`, merely contains *strings* which name the defining module and the name of the class. It does not store the definition of the class: ``` cfoo Foo p0 . ``` Therefore, *at the time of unpickling* the defining module, `foo`, must contain the definition of the class. If you delete the class from the defining module ``` del foo.Foo ``` then you'll get the error ``` AttributeError: 'module' object has no attribute 'Foo' ```
It's totally possible to pickle a class instance in python… while also saving the code to reconstruct the class and the instance's state. If you want to hack together a solution on top of `pickle`, or use a "trojan horse" `exec` based method here's how to do it: [How to unpickle an object whose class exists in a different namespace (python)?](https://stackoverflow.com/questions/14238837/how-to-unpickle-an-object-whose-class-exists-in-a-different-namespace-python?rq=1) Or, if you use `dill`, you have a `dump` function that already knows how to store a class instance, the class code, and the instance state: [How to recover a pickled class and its instances](https://stackoverflow.com/questions/34261379/how-to-recover-a-pickled-class-and-its-instances/34397001#34397001) [Pickle python class instance plus definition](https://stackoverflow.com/questions/6726183/pickle-python-class-instance-plus-definition/28095208#28095208) I'm the `dill` author, and I created `dill` in part to be able to ship class instances and class methods across `multiprocessing`. [Can't pickle <type 'instancemethod'> when using python's multiprocessing Pool.map()](https://stackoverflow.com/questions/1816958/cant-pickle-type-instancemethod-when-using-pythons-multiprocessing-pool-ma/21345273#21345273)
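To illustrate, a minimal sketch of the `dill` workflow (assuming the class is defined in `__main__`, which dill serializes by value rather than by reference):

```python
import dill

class Foo(object):
    def __init__(self, x):
        self.x = x

obj = Foo(42)

# dill stores the class definition along with the instance state,
# so the file can be loaded in a fresh interpreter session
with open('/tmp/out.dill', 'wb') as f:
    dill.dump(obj, f)

with open('/tmp/out.dill', 'rb') as f:
    obj2 = dill.load(f)

print(obj2.x)  # 42
```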
3,802
22,734,148
I'm trying to check if a number is a perfect square. However, I am dealing with extraordinarily large numbers, so Python thinks it's infinity for some reason. It gets up to 1.1 X 10^154 before the code returns "Inf". Is there any way to get around this? Here is the code; the lst variable just holds a bunch of really really really really really big numbers.

```
import math
from decimal import Decimal

def main():
    for i in lst:
        root = math.sqrt(Decimal(i))
        print(root)
        if int(root + 0.5) ** 2 == i:
            print(str(i) + " True")
```
2014/03/29
[ "https://Stackoverflow.com/questions/22734148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3476226/" ]
I think that you need to take a look at the [BigFloat](https://pythonhosted.org/bigfloat/) module, e.g.: ``` import bigfloat as bf b = bf.BigFloat('1e1000', bf.precision(21)) print bf.sqrt(b) ``` Prints `BigFloat.exact('9.9999993810013282e+499', precision=53)`
math.sqrt() converts the argument to a Python float, which has a maximum value around 10^308. You should probably look at using the [gmpy2](https://code.google.com/p/gmpy/) library. gmpy2 provides very fast multiple-precision arithmetic.

If you want to check for arbitrary powers, the function `gmpy2.is_power()` will return `True` if a number is a perfect power. It may be a cube or fifth power, so you will need to check for the power you are interested in.

```
>>> gmpy2.is_power(456789**372)
True
```

You can use `gmpy2.isqrt_rem()` to check if it is an exact square.

```
>>> gmpy2.isqrt_rem(9)
(mpz(3), mpz(0))
>>> gmpy2.isqrt_rem(10)
(mpz(3), mpz(1))
```

You can use `gmpy2.iroot_rem()` to check for arbitrary powers.

```
>>> gmpy2.iroot_rem(13**7 + 1, 7)
(mpz(13), mpz(1))
```
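For completeness, a standard-library-only sketch (assuming Python 3.8+, where `math.isqrt` works on arbitrarily large ints with no float conversion at all):

```python
import math

def is_perfect_square(n):
    # math.isqrt computes the exact integer square root, so nothing
    # ever overflows to infinity no matter how large n gets
    if n < 0:
        return False
    root = math.isqrt(n)
    return root * root == n

print(is_perfect_square(12345678901234567890 ** 2))      # True
print(is_perfect_square(12345678901234567890 ** 2 + 1))  # False
```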
3,803
30,326,654
I'm following this for the django manage.py module: <http://docs.ansible.com/django_manage_module.html>. For example, one of my tasks looks like:

```
- name: Django migrate
  django_manage: command=migrate app_path={{app_path}} settings={{django_settings}}
  tags:
    - django
```

This works perfectly fine with python2 (the default in Ubuntu), but when I try it with a python3-django project it throws an error:

```
failed: [123.456.200.000] => (item=school) => {"cmd": "python manage.py makemigrations --noinput school --settings=myproj.settings.production", "failed": true, "item": "school", "path": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games", "state": "absent", "syspath": ["/home/ubuntu/.ansible/tmp/ansible-tmp-1432039779.41-30449122707918", "/usr/lib/python2.7", "/usr/lib/python2.7/plat-x86_64-linux-gnu", "/usr/lib/python2.7/lib-tk", "/usr/lib/python2.7/lib-old", "/usr/lib/python2.7/lib-dynload", "/usr/local/lib/python2.7/dist-packages", "/usr/lib/python2.7/dist-packages"]}
msg: :stderr: Traceback (most recent call last):
  File "manage.py", line 8, in <module>
    from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
```

From this error it seems Ansible uses Python 2 by default. Can we change this to python3, or is there any other workaround?

PS: pip freeze confirms that Django 1.8 is installed (for python3, using pip3).

Suggestions: when I run `ubuntu@ubuntu:/srv/myproj$ python3 manage.py migrate` it works fine. So I'm thinking of passing the command directly, something like:

```
- name: Django migrate
  command: python3 manage.py migrate
  tags:
    - django
```

But how do I pass the project path or the manage.py file's path? There is only an option to pass settings, something like `--settings=myproject.settings.main`. Can we do this by passing the command directly?
2015/05/19
[ "https://Stackoverflow.com/questions/30326654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4414786/" ]
From Ansible website <http://docs.ansible.com/intro_installation.html> > > Python 3 is a slightly different language than Python 2 and most Python programs (including Ansible) are not switching over yet. However, some Linux distributions (Gentoo, Arch) may not have a Python 2.X interpreter installed by default. On those systems, you should install one, and set the ‘ansible\_python\_interpreter’ variable in inventory (see Inventory) to point at your 2.X Python. Distributions like Red Hat Enterprise Linux, CentOS, Fedora, and Ubuntu all have a 2.X interpreter installed by default and this does not apply to those distributions. This is also true of nearly all Unix systems. If you need to bootstrap these remote systems by installing Python 2.X, using the ‘raw’ module will be able to do it remotely. > > >
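For instance, a minimal sketch of setting that variable in an inventory file (the host address is taken from the error output above; the interpreter path is an assumption for a typical Ubuntu box, and per the quoted docs it should point at a 2.X Python for Ansible's own modules):

```
[webservers]
123.456.200.000 ansible_python_interpreter=/usr/bin/python2.7
```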
Ansible is using `python` to run the django command: <https://github.com/ansible/ansible-modules-core/blob/devel/web_infrastructure/django_manage.py#L237> Your only solution is thus to override the executable that will be run, for instance by changing your PATH: ``` - file: src=/usr/bin/python3 dest=/home/user/.local/bin/python state=link - name: Django migrate django_manage: command=migrate app_path={{app_path}} settings={{django_settings}} environment: - PATH: "/home/user/.local/bin/:/bin:/usr/bin:/usr/local/bin" ```
3,808
25,863,769
I have a set (or a list) of numbers {1, 2.25, 5.63, 2.12, 7.98, 4.77}, and I want to find the best combination of numbers from this set/list which, when added, is closest to 10. How do I accomplish that in Python, using elements from the collection?
2014/09/16
[ "https://Stackoverflow.com/questions/25863769", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1578720/" ]
If the problem size permits, you can use some friends in `itertools` to quickly brute force through it: ``` s = {1, 2.25, 5.63, 2.12, 7.98, 4.77} from itertools import combinations, chain res = min(((comb, abs(sum(comb)-10)) for comb in chain(*[combinations(s, k) for k in range(1, len(s)+1)])), key=lambda x: x[1])[0] print res ``` Output: ``` (2.25, 5.63, 2.12) ```
It's an NP-hard problem. If your data are not too big, you can just test every single solution with code like this:

```
def combination(itemList):
    """ Returns all the combinations of items in the list """
    def wrapped(current_pack, itemList):
        if itemList == []:
            return [current_pack]
        else:
            head, tail = itemList[0], itemList[1:]
            return wrapped(current_pack + [head], tail) + wrapped(current_pack, tail)
    return wrapped([], itemList)

def select_best(combination_list, objective):
    """ Returns the element whose own elements sum nearest to the objective """
    def element_sum(combination):
        result = 0.0
        for element in combination:
            result += element
        return result

    best, weight = combination_list[0], element_sum(combination_list[0])
    for combination in combination_list:
        current_weight = element_sum(combination)
        if (abs(current_weight - objective) < abs(weight - objective)):
            best, weight = combination, current_weight
    return best

if __name__ == "__main__":
    items = [1, 2.25, 5.63, 2.12, 7.98, 4.77]
    combinations = combination(items)
    combinations.sort()
    print(combinations, len(combinations))  # 2^6 combinations -> 64
    best = select_best(combinations, 10.0)
    print(best)
```

This code will give you the best solution whatever input you give it. But as you can see, the number of combinations is 2^n, where n is the number of elements in your list. Try this with more than 50 elements and say goodbye to your RAM. While perfectly correct from an algorithmic point of view, you could wait more than your entire life to get a response for real-world problems. Metaheuristics and constraint-satisfaction algorithms could be useful for a more efficient approach.
3,810
62,787,056
I created virtual environment and installed both tensorflow and tensorflow-gpu. After that I installed keras. And then I checked in my conda terminal by importing keras and I was able to import keras in it. However, using jupyter notebook if I try to import keras then it gives me below error. ``` import keras ImportError Traceback (most recent call last) <ipython-input-5-88d96843a926> in <module> ----> 1 import keras ~\Anaconda3\lib\site-packages\keras\__init__.py in <module> 1 from __future__ import absolute_import 2 ----> 3 from . import utils 4 from . import activations 5 from . import applications ~\Anaconda3\lib\site-packages\keras\utils\__init__.py in <module> 4 from . import data_utils 5 from . import io_utils ----> 6 from . import conv_utils 7 from . import losses_utils 8 from . import metrics_utils ~\Anaconda3\lib\site-packages\keras\utils\conv_utils.py in <module> 7 from six.moves import range 8 import numpy as np ----> 9 from .. import backend as K 10 11 ~\Anaconda3\lib\site-packages\keras\backend\__init__.py in <module> ----> 1 from .load_backend import epsilon 2 from .load_backend import set_epsilon 3 from .load_backend import floatx 4 from .load_backend import set_floatx 5 from .load_backend import cast_to_floatx ~\Anaconda3\lib\site-packages\keras\backend\load_backend.py in <module> 88 elif _BACKEND == 'tensorflow': 89 sys.stderr.write('Using TensorFlow backend.\n') ---> 90 from .tensorflow_backend import * 91 else: 92 # Try and load external backend. ~\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py in <module> 4 5 import tensorflow as tf ----> 6 from tensorflow.python.eager import context 7 from tensorflow.python.framework import device as tfdev 8 from tensorflow.python.framework import ops as tf_ops ImportError: cannot import name 'context' from 'tensorflow.python.eager' (unknown location) ``` Already tried uninstalling and installing keras and tensorflow. I'm pretty new to programming so I am not sure how to go around it. Tried looking other threads but not helping. Can any one recommend what can I do to resolve it? Thanks
2020/07/08
[ "https://Stackoverflow.com/questions/62787056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13605650/" ]
Did you install the dependencies with conda? Like this:

```
$ conda install -c conda-forge keras
$ conda install -c conda-forge tensorflow
$ conda install -c anaconda tensorflow-gpu
```

If you installed them with `pip`, they will not work inside your virtual env. Look at your conda dependency list to see if tensorflow and keras are really there, using:

```
$ conda list
```

If they are, activate your virtual environment:

```
$ conda activate 'name_of_your_env'
```

And run jupyter inside that; it should look something like this (if your env shows in parentheses, the activation worked and you are now inside the virtual env):

```
(your_env)$ jupyter notebook
```
Doing the below solved my issue. I removed all the packages that were installed via pip and installed the packages through conda. I had an environment issue, so I created another environment from scratch and ran the commands below.

Create a virtual environment:

```
conda create -n <env_name>
```

Install tensorflow-gpu via conda, not pip. If you skip the create-environment command, type in the below, as it will create the new env from scratch while specifying the Python and TensorFlow versions:

```
conda create -n <env_name> python=3.6 tensorflow-gpu=2.2
```

And then I had to make sure that jupyter notebook opens with the environment that I want it to open with. For that, use the command below:

```
C:\Users\Adi(Your user here)\Anaconda3\envs\env_name\python.exe -m ipykernel install --user --name <env_name> --display-name "Python (env_name)"
```

When you go to Jupyter Notebook, in the top right corner you should see your virtual environment; make sure you select it. And that resolved it.
3,811
24,112,445
I am using Python 3.4.0 and I have Mac OSX 10.9.2. I have the following code saved as sublimePygame in Sublime Text.

```
import pygame, sys
from pygame.locals import *

pygame.init()

# set up the window
DISPLAYSURF = pygame.display.set_mode((400, 300))
pygame.display.set_caption('Drawing')

# set up the colors
BLACK = (  0,   0,   0)
WHITE = (255, 255, 255)
RED   = (255,   0,   0)
GREEN = (  0, 255,   0)
BLUE  = (  0,   0, 255)

# Draw on surface object
DISPLAYSURF.fill(WHITE)
pygame.draw.polygon(DISPLAYSURF, GREEN, ((146, 0), (291, 106), (236, 277), (56, 277), (0, 106)))
pygame.draw.line(DISPLAYSURF, BLUE, (60, 60), (120, 60), 4)
pygame.draw.line(DISPLAYSURF, BLUE, (120, 60), (60, 120))
pygame.draw.line(DISPLAYSURF, BLUE, (60, 120), (120, 120), 4)
pygame.draw.circle(DISPLAYSURF, BLUE, (300, 50), 20, 0)
pygame.draw.ellipse(DISPLAYSURF, RED, (300, 250, 40, 80), 1)
pygame.draw.rect(DISPLAYSURF, RED, (200, 150, 100, 50))

pixObj = pygame.PixelArray(DISPLAYSURF)
pixObj[480, 380] = BLACK
pixObj[482, 382] = BLACK
pixObj[48, 384] = BLACK
pixObj[486, 386] = BLACK
pixObj[488, 388] = BLACK
del pixObj

while True:  # main game loop
    for event in pygame.event.get():
        if event.type == QUIT:
            sys.exit()
    pygame.display.update()
```

I ran the code in my terminal; the Python window opened for a second and then closed. I got this error in the terminal:

```
Traceback (most recent call last):
  File "sublimePygame", line 29, in <module>
    pixObj[480, 380] = BLACK
IndexError: invalid index
Segmentation fault: 11
```

I checked the pygame documentation and my code seemed OK. I googled the error, and *Segmentation Error 11* seems to be a bug in Python, but I read that it was fixed in Python 3.4.0. Does anyone know what went wrong? Thanks in advance!

Edit: Marius found the bug in my program; however, when I run it, it opens a blank Python window and not what it was supposed to open. Does anyone know why this happened?
2014/06/09
[ "https://Stackoverflow.com/questions/24112445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3720830/" ]
Hope this link helps for part 1. Should be able to commit and push your changes using git push heroku master. <https://devcenter.heroku.com/articles/git#tracking-your-app-in-git> For part 2: will scaling your dynos back to 0 work for your case? [How to stop an app on Heroku?](https://stackoverflow.com/questions/2811453/how-to-stop-an-app-on-heroku)
1. When you make a change, just push the git repository to Heroku again with `git push heroku master`. The server will automatically restart with the changed system.
2. You seem to have a misconception. You can always run your local development server regardless of what Heroku is doing (unless some other service your app connects to would become confused, of course; but if that happens, there is probably something wrong with your design). Nonetheless, if you want to stop the application on Heroku, just scale it to zero web dynos: `heroku ps:scale web=0`.
3,812
53,147,752
I am saving a user's database connection. On the first time they enter in their credentials, I do something like the following: ``` self.conn = MySQLdb.connect ( host = 'aaa', user = 'bbb', passwd = 'ccc', db = 'ddd', charset='utf8' ) cursor = self.conn.cursor() cursor.execute("SET NAMES utf8") cursor.execute('SET CHARACTER SET utf8;') cursor.execute('SET character_set_connection=utf8;') ``` I then have the `conn` ready to go for all the user's queries. However, I don't want to re-connect every time the `view` is loaded. How would I store this "open connection" so I can just do something like the following in the view: ``` def do_queries(request, sql): user = request.user conn = request.session['conn'] cursor = request.session['cursor'] cursor.execute(sql) ``` --- **Update**: it seems like the above is not possible and not good practice, so let me re-phrase what I'm trying to do: I have a sql editor that a user can use after they enter in their credentials (think of something like Navicat or SequelPro). Note this is **NOT** the default django db connection -- I do not know the credentials beforehand. Now, once the user has 'connected', I would like them to be able to do as many queries as they like without me having to reconnect every time they do this. For example -- to re-iterate again -- something like Navicat or SequelPro. How would this be done using python, django, or mysql? Perhaps I don't really understand what is necessary here (caching the connection? connection pooling? etc.), so any suggestions or help would be greatly appreciated.
2018/11/05
[ "https://Stackoverflow.com/questions/53147752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/651174/" ]
I actually shared my solution to this exact issue. What I did was create a pool of connections whose maximum size you can specify, and then queue query requests asynchronously through this channel. This way you can leave a certain number of connections open, while requests are queued and pooled asynchronously, keeping the speed you are used to. This requires gevent and postgres.

[Python Postgres psycopg2 ThreadedConnectionPool exhausted](https://stackoverflow.com/questions/48532301/python-postgres-psycopg2-threadedconnectionpool-exhausted/49366850#49366850)
I'm no expert in this field, but I believe that [PgBouncer](https://pgbouncer.github.io/features.html) would do the job for you, assuming you're able to use a PostgreSQL back-end (that's one detail you didn't make clear). PgBouncer is a *connection pooler*, which allows you re-use connections avoiding the overhead of connecting on every request. According to their [documentation](https://pgbouncer.github.io/config.html): > > **user, password** > > > If user= is set, all connections to the destination database will be done with the specified user, meaning that there will be only one pool for this database. > > > Otherwise PgBouncer tries to log into the destination database with client username, meaning that there will be one pool per user. > > > So, you can have a single pool of connections per user, which sounds just like what you want. In MySQL land, the [mysql.connector.pooling](https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html) module allows you to do some connection pooling, though I'm not sure if you can do per-user pooling. Given that you can set up the pool name, I'm guessing you could use the user's name to identify the pool. Regardless of what you use, you will likely have occasions where reconnecting is unavoidable (a user connects, does a few things, goes away for a meeting and lunch, comes back and wants to take more action).
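To make the per-user pooling idea concrete, a minimal sketch using mysql.connector (the host/database values are copied from the question; caching the pools in a module-level dict is my own assumption about how you would keep them alive between requests):

```python
from mysql.connector import pooling

# One pool per user, kept for the lifetime of the process so that
# later requests reuse already-open connections instead of reconnecting.
_pools = {}

def get_user_connection(username, password):
    if username not in _pools:
        _pools[username] = pooling.MySQLConnectionPool(
            pool_name="pool_%s" % username,
            pool_size=5,
            host="aaa",        # from the question's connect() call
            user=username,
            password=password,
            database="ddd",    # from the question's connect() call
        )
    # Borrows a connection from the pool; closing it returns it to the pool
    return _pools[username].get_connection()
```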
3,815
54,494,842
I am totally new to python and basically new to programming in general. I have a college assignment that involves scanning through a CSV file and storing each row as a list. My file is a list of football data for the premier league season, so the CSV file is structured as follows:

```
date; home; away; homegoals; awaygoals; result;
01/01/2012; Man United; Chelsea; 1; 2; A;
01/02/2012; Man City; Arsenal; 1; 1; D;
```

etc etc. At the moment each column is stored in a variable:

```
date = row[0]
home = row[1]
away = row[2]
homegoals = int(row[3])
awaygoals = int(row[4])
```

So I can currently access, for example, all games with more than three goals:

```
totalgoals = homegoals + awaygoals
if totalgoals > 3:
    print(date, home, homegoals, awaygoals, away)
```

I can access all games which featured a certain team:

```
if "Man United" in (home, away):
    print(date, home, homegoals, awaygoals, away)
```

Very basic, I know. I am looking to be able to track things more in depth. So, for example, I would like to be able to access results where a team has not won in 3 games, etc. I would like to be able to find out if a team is on a low-scoring run. Now, from reading online for a while, it seems to me the way you do this is with a combination of a dictionary and list(s). So far:

```
import csv

with open('premier_league_data_1819.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter=';')

    dates = []
    hometeams = []
    awayteams = []
    homegoals = []
    awaygoals = []
    results = []

    next(readCSV)
    for row in readCSV:
        date = row[0]
        home = row[1]
        away = row[2]
        hg = int(row[3])  # Home Goals
        ag = int(row[4])  # Away Goals
        ftr = row[5]      # Result

        dates.append(date)
        hometeams.append(home)
        awayteams.append(away)
        homegoals.append(hg)
        awaygoals.append(ag)
        results.append(ftr)
```

If anyone could point me in the right direction on this I would be grateful. It would be good to know the best way of achieving this so I am not spinning my wheels getting more confused. I think to start I would need to first store all of a team's games in a list and then add that list to a dictionary that holds all teams' records with the team name as a key.
2019/02/02
[ "https://Stackoverflow.com/questions/54494842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11005752/" ]
One option would be to loop through the `list` ('list1'), `filter` the 'names' column based on the 'names' vector, convert it to a single dataset while creating an identification column with `.id`, `spread` from 'long' to 'wide' and remove the 'grp' column ``` library(tidyverse) map_df(list1, ~ .x %>% filter(names %in% !! names), .id = 'grp') %>% spread(names, values) %>% select(-grp) # a b c #1 25 13 11 #2 12 10 NA ``` --- Or another option is to bind the datasets together with `bind_rows`, created a grouping id 'grp' to specify the `list` element, `filter` the rows by selecting only 'names' column that match with the 'names' `vector` and `spread` from 'long' to 'wide' ``` bind_rows(list1, .id = 'grp') %>% filter(names %in% !! names) %>% spread(names, values) ``` NOTE: It is better not to use reserved keywords for specifying object names (`names`). Also, to avoid confusions, the object should be different from the column names of the dataframe object. --- It can be also done with only `base R`. Create a group identifier with `Map`, `rbind` the `list` elements to single dataset, `subset` the rows by keeping only the values from the 'names' `vector`, and `reshape` from 'long' to 'wide' ``` df1 <- subset(do.call(rbind, Map(cbind, list1, ind = seq_along(list1))), names %in% .GlobalEnv$names) reshape(df1, idvar = 'ind', direction = 'wide', timevar = 'names')[-1] ```
A mix of base R and `dplyr`. For every list element we create a dataframe with 1 row. Using `dplyr`'s `rbind_list` row bind them together and then subset only those columns which we need using `names`. ``` library(dplyr) rbind_list(lapply(list1, function(x) setNames(data.frame(t(x$values)), x$names)))[names] # a b c # <dbl> <dbl> <dbl> #1 25 13 11 #2 12 10 NA ``` Output without subset looks like this ``` rbind_list(lapply(list1, function(x) setNames(data.frame(t(x$values)), x$names))) # a b c x # <dbl> <dbl> <dbl> <dbl> #1 25 13 11 NA #2 12 10 NA 2 ```
3,825
70,075,290
I am trying to update a lambda via zappa. I created a virtualenv, activated it, and installed the libraries, but the moment I run zappa update for the environment, I get this problem. How can I fix this? :(

```
zappa update qa
(pip 18.1 (/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages), Requirement.parse('pip>=20.3'), {'pip-tools'})
Calling update for stage qa..
Downloading and installing dependencies..
Packaging project as zip.
Uploading maximo-copy-customers-qa-1637639364.zip (6.0MiB)..
100%|███████████████████████████████████████████████████████████████| 6.32M/6.32M [00:09<00:00, 664kB/s]
Updating Lambda function code..
Updating Lambda function configuration..

Oh no! An error occurred! :(

==============

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 2778, in handle
    sys.exit(cli.handle())
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 512, in handle
    self.dispatch_command(self.command, stage)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 559, in dispatch_command
    self.update(self.vargs['zip'], self.vargs['no_upload'])
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 979, in update
    layers=self.layers
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/core.py", line 1224, in update_lambda_configuration
    Layers=layers
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/botocore/client.py", line 676, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceConflictException: An error occurred (ResourceConflictException) when calling the UpdateFunctionConfiguration operation: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-1:937280411572:function:maximo-copy-customers-qa
```
2021/11/23
[ "https://Stackoverflow.com/questions/70075290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16488584/" ]
You should wait for the function code update to complete before proceeding with the update of the function configuration. Inserting the following shell script between the steps can keep the process waiting:

```
STATE=$(aws lambda get-function --function-name "$FN_NAME" --query 'Configuration.LastUpdateStatus' --output text)

while [[ "$STATE" == "InProgress" ]]
do
    echo "sleep 5sec ...."
    sleep 5s
    STATE=$(aws lambda get-function --function-name "$FN_NAME" --query 'Configuration.LastUpdateStatus' --output text)
    echo $STATE
done
```
Add to your zappa\_settings.json: ``` "lambda_description": "aws:states:opt-out" ``` [Zappa issue about it](https://github.com/zappa/Zappa/issues/1041)
3,832
20,858,336
I'm using IPython Qt Console, and when I copy code FROM IPython it comes out like this:

```
class notathing(object):
   ...:
   ...:     def __init__(self):
   ...:         pass
   ...:
```

Is there any way to copy it without those leading triple dots and colon?

P.S. I tried both `Copy` and `Copy Raw Text` in the context menu and it's still the same.

OS: Debian Linux 7.2 (KDE).
2013/12/31
[ "https://Stackoverflow.com/questions/20858336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2022518/" ]
This may be too roundabout for you, but you could use the %save magic function to save the lines in question and then copy them from the save file.
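For example (the line-number range here is hypothetical; use whatever history lines hold your class):

```
In [5]: %save notathing.py 1-4
```

This writes those input lines to `notathing.py` without the `...:` continuation prompts, and you can copy cleanly from that file.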
I tend to keep an open gvim window for this kind of thing. Paste your class definition as-is and then do something like:

```
:%s/^.*\.://
```

which, on every line, deletes everything up to and including the `.:` at the end of the `...:` prompt.
3,835
14,198,382
I have some Entrys in a python list. Each Entry has a creation date and creation time. The values are stored as python datetime.date and datetime.time (as two separate fields). I need to get the list of Entrys sorted so that a previously created Entry comes before the others.

I know there is a list.sort() function that accepts a key function. In this case, do I have to use the date and time to create a datetime and use that as the key to `sort()`? There is a `datetime.datetime.combine(date, time)` for this. But how do I specify this inside the sort function?

I tried `key = datetime.datetime.combine(created_date, created_time)`, but the interpreter complains that `the name created_date is not defined`.

```
class Entry:
    created_date = # datetime.date
    created_time = # datetime.time
    ...

my_entries_list = [Entry1, Entry2, ..., Entry10]
my_entries_list.sort(key=datetime.datetime.combine(created_date, created_time))
```
2013/01/07
[ "https://Stackoverflow.com/questions/14198382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1291096/" ]
You probably want something like: ``` my_entries_list.sort(key=lambda v: datetime.datetime.combine(v.created_date, v.created_time)) ``` Passing `datetime.datetime.combine(created_date, created_time)` tries to call `combine` immediately and breaks since `created_date` and `created_time` are not available as local variables. The `lambda` provides *delayed evaluation*: instead of executing the code immediately, it creates a *function* that will, when called, execute the specified code and return the result. The function also provides the parameter that will be used to access the `created_date` and `created_time` attributes.
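Equivalently, if you would rather avoid `lambda`, a named key function does the same thing (a small sketch reusing the names from the question):

```python
import datetime

def entry_key(entry):
    # Called once per element while sorting; merges the two fields
    # into one comparable datetime value.
    return datetime.datetime.combine(entry.created_date, entry.created_time)

my_entries_list.sort(key=entry_key)
```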
Use `lambda` (note this assumes `from datetime import datetime`; otherwise spell it `datetime.datetime.combine`):

```
from datetime import datetime

sorted(my_entries_list,
       key=lambda e: datetime.combine(e.created_date, e.created_time))
```
3,844
17,601,602
First, I'm extremely new to coding and self-taught, so models / views / DOM fall on deaf ears (but willing to learn!)

So I saved images into a database as blobs (BlobProperty), and now I'm trying to serve them.

**Relevant Code:** (I took out a ton for ease of reading)

```
class Mentors(db.Model):
    id = db.StringProperty()
    mentor_id = db.StringProperty()
    name = db.StringProperty()
    img_file = db.BlobProperty()
```

```
class ImageHandler (webapp2.RequestHandler):
    def get(self):
        mentor_id = self.request.get('mentor_id')
        mentor = db.GqlQuery("SELECT * FROM Mentors WHERE mentor_id = :1 LIMIT 1", mentor_id)
        if mentor.img_file:
            self.response.headers['Content-Type'] = "image/jpg"
            self.response.out.write(mentor.img_file)
        else:
            self.error(404)
```

```
application = webapp2.WSGIApplication([
    routes.DomainRoute('medhack.prebacked.com', medhack_pages),
    webapp2.Route(r'/', handler=HomepageHandler, name='home-main'),
    webapp2.Route(r'/imageit', handler=ImageHandler, name='image-handler')
    ], debug=True)
```

```
class MedHackHandler(webapp2.RequestHandler):
    def get(self, url="/"):
        # ... bunch of code to serve template etc.
        mentors_events = db.GqlQuery("SELECT * FROM Mentors_Events WHERE event_id = :1 ORDER BY mentor_type DESC, mentor_id ASC", current_event_id)
        mentors = mentors_events
```

html:

```
{% for m in mentors %} #here 'mentors' refers to the mentors_events query, and 'mentor' refers to the Mentors table above
<img src="imageit?mentor_id={{m.mentor.mentor_id}}" alt="{{m.mentor.name}} headshot"/>
{% endfor %}
```

It seems that imageit isn't actually being called, or the path is wrong, or... I don't know. So many attempts and failures.

Resources I've tried but fail to understand: <https://developers.google.com/appengine/articles/python/serving_dynamic_images> This seemed to be dang close, but I can't figure out how to implement it. Need a "for dummies" translation.

[How to load Blobproperty image in Google App Engine?](https://stackoverflow.com/questions/4283001/how-to-load-blobproperty-image-in-google-app-engine)
2013/07/11
[ "https://Stackoverflow.com/questions/17601602", "https://Stackoverflow.com", "https://Stackoverflow.com/users/646491/" ]
In the handler, you're getting the ID from `self.request.get('mentor_id')`. However, in the template you've set the image URL to `imageit?key=whatever` - so the parameter is "key" not "mentor\_id". Choose one or the other.
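Whichever name you keep, a minimal sketch with `mentor_id` on both sides might look like the following. Note also that `db.GqlQuery(...)` returns a query object, so you would call `.get()` on it to fetch the single entity before touching `img_file`; the handler as shown in the question would fail on that line too.

```python
class ImageHandler(webapp2.RequestHandler):
    def get(self):
        mentor_id = self.request.get('mentor_id')
        # .get() fetches the first matching entity, or None
        mentor = db.GqlQuery(
            "SELECT * FROM Mentors WHERE mentor_id = :1 LIMIT 1",
            mentor_id).get()
        if mentor and mentor.img_file:
            self.response.headers['Content-Type'] = "image/jpg"
            self.response.out.write(mentor.img_file)
        else:
            self.error(404)
```

And in the template, keep the matching parameter name: `<img src="/imageit?mentor_id={{m.mentor.mentor_id}}" ...>`.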
Finally figured it out. I'm using a subdomain, and wasn't setting up *that* route, only /img coming off of the www root. I also wasn't using the URL correctly and the 15th pass of <https://developers.google.com/appengine/articles/python/serving_dynamic_images> finally answered my problem.
3,850
47,528,696
I'm new to Docker, so if any of my assumptions are wrong, please point it out. Thanks~

I'm aiming at running a web server, developed by me or a team I belong to, inside Docker. So I thought out three steps: get an image, copy the web files into it, and run the container. So I did the steps below:

1- Get a docker image. I tried it like this: `docker pull centos`, so that I can get an image based on CentOS. Here, I did not care about the version of CentOS; its version is 6.7, or just tagged: latest. I checked the image with `docker images`, and I can see it like this:

```
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
docker.io/centos    latest              d123f4e55e12        3 weeks ago         196.6 MB
```

So I think this step succeeded.

2- Try copying the files from the local system into the container. I am at the path /tornado, which has a folder named fordocker. The fordocker folder contains the web-server files. I tried a command like this (based on the guide):

```
docker cp fordocker/ d123f4e55e12:/web
```

But! Here comes the error:

```
Error response from daemon: No such container: d123f4e55e12
```

3- If I copy the files successfully, I could try something like `docker run -d centos xxx python web.py`. Will this step produce an error? I don't know yet.

I searched a lot, but nothing explains the phenomenon. It seemed that everyone besides me using this command would succeed.

So here come the questions:

1- Is the method I thought out feasible? Must I create an image through a Dockerfile?

2- If the method is feasible, where does the error come from? Does the `cp` command depend on other things that I had not done?

3- What should I do if the method is not feasible? Create an image myself?

Have a good day~
2017/11/28
[ "https://Stackoverflow.com/questions/47528696", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8948738/" ]
You have docker images and docker containers. They are different things. You pull or build images. When you launch an image, it becomes a running container. An image is not a running container, and so you will not be able to copy a file inside an image.

I do not know if this is what you want to do, but you may:

1) launch an image: `docker run ...`

2) copy or modify a file: `docker ps` shows your running containers, and so you can `docker cp ...` a file inside a running container

3) maybe save this modified image with `docker commit ...`

But usually, if you want to modify an image, you modify the Dockerfile, and then you can easily build a new image with a command such as `docker build -t myuser/myimage:321 .` Notice the final dot; it means you use the local Dockerfile.

See for example a Dockerfile for Nginx: <https://github.com/nginxinc/docker-nginx/blob/c8bfb9b5260d4ad308deac5343b73b027f684375/mainline/stretch/Dockerfile>
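To tie this back to the question, a minimal sketch of such a Dockerfile (the folder name and entry point are taken from the question; the package-install line is an assumption, since the base centos image is not guaranteed to ship the Python you need):

```dockerfile
FROM centos:latest

# assumption: install a Python interpreter for the web server
RUN yum install -y python

# bake the web-server files into the image at build time,
# instead of docker cp-ing them into a container afterwards
COPY fordocker/ /web/
WORKDIR /web

CMD ["python", "web.py"]
```

You would then build and run it with `docker build -t myserver .` and `docker run -d myserver`.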
I aimed at deploying a Python web project with Docker, and the first method I thought about was: copy the server files to a container and run it with `python ***.py`. But I did not get the difference between images and containers.

Also, I found some other methods:

1- Build a Dockerfile. This way, we can run an image without an extra command like `python ***.py`, because we can write the command into the Dockerfile.

2- Get an image that has the right Python version, and try something like `docker run -v $PWD/myapp:/usr/src/myapp -w /usr/src/myapp python:3.5 python helloworld.py`, which does not require copying files into the container.

If I can master all the methods, I would choose to build a Dockerfile.
3,851
35,887,597
I am new to Odoo development. I want to add a product brand and country to the products. I just created the form view and menu for the brand under the product menu in Warehouse. Now I want to add a field for the brand in the product view. I am trying to extend the product.product model for it, but a "model not found" error occurs. I have no idea what is happening. Error details:

```
2016-03-09 09:18:15,609 2562 INFO hat_dev openerp.modules.loading: loading 1 modules...
2016-03-09 09:18:15,620 2562 INFO hat_dev openerp.modules.loading: 1 modules loaded in 0.01s, 0 queries
2016-03-09 09:18:15,648 2562 INFO hat_dev openerp.modules.loading: loading 55 modules...
2016-03-09 09:18:15,807 2562 INFO hat_dev openerp.modules.module: module openautoparts_erp: creating or updating database tables
2016-03-09 09:18:15,838 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/product_brand_views.xml
2016-03-09 09:18:15,893 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/partner.xml
2016-03-09 09:18:15,919 2562 ERROR hat_dev openerp.addons.base.ir.ir_ui_view: Model not found: product.product
Error context:
View `partner.brand`
[view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262]
2016-03-09 09:18:15,926 2562 INFO hat_dev werkzeug: 127.0.0.1 - - [09/Mar/2016 09:18:15] "POST /longpolling/poll HTTP/1.1" 500 -
2016-03-09 09:18:15,952 2562 ERROR hat_dev werkzeug: Error on request:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 177, in run_wsgi
    execute(self.server.app)
  File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 165, in execute
    application_iter = app(environ, start_response)
  File "/opt/odoo/odoo/openerp/service/server.py", line 290, in app
    return self.app(e, s)
  File "/opt/odoo/odoo/openerp/service/wsgi_server.py", line 216, in application
    return application_unproxied(environ, start_response)
  File "/opt/odoo/odoo/openerp/service/wsgi_server.py", line 202, in application_unproxied
    result = handler(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1290, in __call__
    return self.dispatch(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__
    return self.app(environ, start_wrapped)
  File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__
    return self.app(environ, start_response)
  File "/opt/odoo/odoo/openerp/http.py", line 1428, in dispatch
    ir_http = request.registry['ir.http']
  File "/opt/odoo/odoo/openerp/http.py", line 346, in registry
    return openerp.modules.registry.RegistryManager.get(self.db) if self.db else None
  File "/opt/odoo/odoo/openerp/modules/registry.py", line 339, in get
    update_module)
  File "/opt/odoo/odoo/openerp/modules/registry.py", line 370, in new
    openerp.modules.load_modules(registry._db, force_demo, status, update_module)
  File "/opt/odoo/odoo/openerp/modules/loading.py", line 351, in load_modules
    force, status, report, loaded_modules, update_module)
  File "/opt/odoo/odoo/openerp/modules/loading.py", line 255, in load_marked_modules
    loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks)
  File "/opt/odoo/odoo/openerp/modules/loading.py", line 176, in load_module_graph
    _load_data(cr, module_name, idref, mode, kind='data')
  File "/opt/odoo/odoo/openerp/modules/loading.py", line 118, in _load_data
    tools.convert_file(cr, module_name, filename, idref, mode, noupdate, kind, report)
  File "/opt/odoo/odoo/openerp/tools/convert.py", line 901, in convert_file
    convert_xml_import(cr, module, fp, idref, mode, noupdate, report)
  File "/opt/odoo/odoo/openerp/tools/convert.py", line 987, in convert_xml_import
    obj.parse(doc.getroot(), mode=mode)
  File "/opt/odoo/odoo/openerp/tools/convert.py", line 853, in parse
    self._tags[rec.tag](self.cr, rec, n, mode=mode)
  File "/opt/odoo/odoo/openerp/tools/convert.py", line 763, in _tag_record
    id = self.pool['ir.model.data']._update(cr, self.uid, rec_model, self.module, res, rec_id or False, not self.isnoupdate(data_node), noupdate=self.isnoupdate(data_node), mode=self.mode, context=rec_context )
  File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper
    return old_api(self, *args, **kwargs)
  File "/opt/odoo/odoo/openerp/addons/base/ir/ir_model.py", line 1064, in _update
    res_id = model_obj.create(cr, uid, values, context=context)
  File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper
    return old_api(self, *args, **kwargs)
  File "/opt/odoo/odoo/openerp/addons/base/ir/ir_ui_view.py", line 255, in create
    context=context)
  File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper
    return old_api(self, *args, **kwargs)
  File "/opt/odoo/odoo/openerp/api.py", line 372, in old_api
    result = method(recs, *args, **kwargs)
  File "/opt/odoo/odoo/openerp/models.py", line 4094, in create
    record = self.browse(self._create(old_vals))
  File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper
    return new_api(self, *args, **kwargs)
  File "/opt/odoo/odoo/openerp/api.py", line 508, in new_api
    result = method(self._model, cr, uid, *args, **old_kwargs)
  File "/opt/odoo/odoo/openerp/models.py", line 4285, in _create
    recs._validate_fields(vals)
  File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper
    return new_api(self, *args, **kwargs)
  File "/opt/odoo/odoo/openerp/models.py", line 1272, in _validate_fields
    raise ValidationError('\n'.join(errors))
ParseError: "ValidateError
Field(s) `arch` failed against a constraint: Invalid view definition

Error details:
Model not found: product.product

Error context:
View `partner.brand`
[view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262]" while parsing /opt/odoo/custom/openautoparts_erp/product_brand/partner.xml:5, near
<record model="ir.ui.view" id="product_brand_form_view">
    <field name="name">partner.brand</field>
    <field name="model">product.product</field>
    <field name="inherit_id" ref="product.product_normal_form_view"/>
    <field name="arch" type="xml">
        <notebook position="inside">
            <page string="Brands">
                <group>
                    <field name="brand"/>
                    <field name="brand_ids"/>
                </group>
            </page>
        </notebook>
    </field>
</record>
```

My model is:

```
# -*- coding: utf-8 -*-
from openerp import fields, models


class Product(models.Model):
    _inherit = 'product.product'

    # Add a new column to the product.product model
    brand = fields.Char("brand", required=True)
    brand_ids = fields.One2many(
        'product.brand',
        string='Brand Name', readonly=True)
```

And my view file is:

```
<?xml version="1.0" encoding="UTF-8"?>
<openerp>
    <data>
        <!-- Add brand field to existing view -->
        <record model="ir.ui.view" id="product_brand_form_view">
            <field name="name">partner.brand</field>
            <field name="model">product.product</field>
            <field name="inherit_id" ref="product.product_normal_form_view"/>
            <field name="arch" type="xml">
                <notebook position="inside">
                    <page string="Brands">
                        <group>
                            <field name="brand"/>
                            <field name="brand_ids"/>
                        </group>
                    </page>
                </notebook>
            </field>
        </record>
    </data>
</openerp>
```
2016/03/09
[ "https://Stackoverflow.com/questions/35887597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5123488/" ]
Here is a jsFiddle with your expectations: [jsFiddle](https://jsfiddle.net/y277r5zL/). Since you wanted the yellow border to be around the whole content, it is better to extend your wrapper's height.

```css
#wrapper{
  border: 1px solid #F68004;
  height: 150px;
}
#content{
  background-color: #0075CF;
  height: 100px;
}
```

```html
<div id="wrapper">
  <div id="content">
    <div id="box"></div>
  </div>
</div>
```
try this **CSS** ``` #wrapper{ border: 1px solid #F68004; } #content{ background-color: #0075CF; height: 100px; margin-bottom: 50px; } ```
3,852
54,262,301
I downloaded OpenCV and the YOLO weights in order to implement object detection for a certain project, using Python version 3.5. When I run this code:

```python
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
```

the console gives the error below:

```
ImportError                               Traceback (most recent call last)
in ()
----> 1 from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
      2 from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body

ImportError: No module named 'yolo_utils'
```

Note that I downloaded yolo_utils.py into the weights folder. How can I fix this issue?
2019/01/18
[ "https://Stackoverflow.com/questions/54262301", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10935479/" ]
Actually, you are importing a user-built module. yolo_utils was created by the Coursera course coordinators to make things easy, so this module is available only on their machines, and you are trying to import it on your machine.

Here is the GitHub link for the module: <https://github.com/JudasDie/deeplearning.ai/blob/master/Convolutional%20Neural%20Networks/week3/yolo_utils.py>

Save it to your local machine in .py format and copy the file into the lib files of your application (Anaconda or any other).
Copy the source code of [yolo_utils](https://github.com/iArunava/YOLOv3-Object-Detection-with-OpenCV/blob/master/yolo_utils.py). Paste it into your source code before importing yolo_utils. It worked for me. Hope this helps.
3,862
10,135,656
I had an existing Django project that I've just added South to.

* I ran syncdb locally.
* I ran `manage.py schemamigration app_name` locally
* I ran `manage.py migrate app_name --fake` locally
* I committed and pushed to heroku master
* I ran syncdb on heroku
* I ran `manage.py schemamigration app_name` on heroku
* I ran `manage.py migrate app_name` on heroku

I then receive this:

```
$ heroku run python notecard/manage.py migrate notecards
Running python notecard/manage.py migrate notecards attached to terminal... up, run.1
Running migrations for notecards:
- Migrating forwards to 0005_initial.
> notecards:0003_initial
Traceback (most recent call last):
  File "notecard/manage.py", line 14, in <module>
    execute_manager(settings)
  File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 438, in execute_manager
    utility.execute()
  File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 379, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 191, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 220, in execute
    output = self.handle(*args, **options)
  File "/app/lib/python2.7/site-packages/south/management/commands/migrate.py", line 105, in handle
    ignore_ghosts = ignore_ghosts,
  File "/app/lib/python2.7/site-packages/south/migration/__init__.py", line 191, in migrate_app
    success = migrator.migrate_many(target, workplan, database)
  File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 221, in migrate_many
    result = migrator.__class__.migrate_many(migrator, target, migrations, database)
  File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 292, in migrate_many
    result = self.migrate(migration, database)
  File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 125, in migrate
    result = self.run(migration)
  File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 99, in run
    return self.run_migration(migration)
  File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 81, in run_migration
    migration_function()
  File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 57, in <lambda>
    return (lambda: direction(orm))
  File "/app/notecard/notecards/migrations/0003_initial.py", line 15, in forwards
    ('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])),
  File "/app/lib/python2.7/site-packages/south/db/generic.py", line 226, in create_table
    ', '.join([col for col in columns if col]),
  File "/app/lib/python2.7/site-packages/south/db/generic.py", line 150, in execute
    cursor.execute(sql, params)
  File "/app/lib/python2.7/site-packages/django/db/backends/util.py", line 34, in execute
    return self.cursor.execute(sql, params)
  File "/app/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
    return self.cursor.execute(query, args)
django.db.utils.DatabaseError: relation "notecards_semester" already exists
```

I have 3 models: Section, Semester, and Notecards. I've added one field to the Notecards model and I cannot get it added on Heroku. Thank you.
2012/04/13
[ "https://Stackoverflow.com/questions/10135656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/722427/" ]
You must fake the migrations that create the tables, then run the other migrations as usual. ``` manage.py migrate app_name 000X --fake manage.py migrate app_name ``` With 000X being the number of the migration in which you create the table.
First of all, from the looks of 0003\_initial and 0005\_initial, you've done multiple `schemamigration myapp --initial` commands which add create\_table statements. Having two sets of these will definitely cause problems as one will create tables, then the next one will attempt creating existing tables. Your `migrations` folder is probably completely polluted with odd migrations. Anyways, while I understand the theory of running `schemamigration` on the local machine AND the remote machine, this is probably the root of your problem. Schemamigration generates a new migration - if you have to run it on your development server, commit it, push it, then generate yet another one on your production machine, you'll probably end up with overlapping migrations. Another thing: if you are running syncdb on your remote machine and it's generating tables, that means your database is 100% current -- no migrations needed. You'd do a full `migrate --fake` to match your migrations to your database. ``` I ran syncdb locally. I ran manage.py schemamigration app_name locally I ran manage.py migrate app_name --fake locally I commit and pushed to heroku master I ran syncdb on heroku I ran manage.py schemamigration app_name on heroku # if you ran syncdb, your DB would be in the final state. I ran manage.py migrate app_name on heroku # if you ran syncdb, your DB would be in the final state. Nothing to migrate. ```
3,863
64,160,347
I am trying to replicate a CASE statement within my Python script (involving pandas). It is applied to a dataframe and fills a new column based on how each row is processed, but it seems like every row is falling into the else condition, because every value in the new column ends up being `Other`. My first thought is that it is due to the `any()` condition that I have used, but I feel like I could be using the wrong approach completely. Any advice on the direction I should take?

**Example rows:**

```
index | source_name
1     | CLICK TO CALL - New Mexico
2     | Las Vegas Community Partner
3     | Facebook - Test Camp - Los Angeles
4     | Google - Test Camp - Los Angeles

index | landing_page_url
1     | NaN
2     | https://lp.example.com/fb/la/test/
3     | https://lp.example.com/fb/la/test/?utm_source=facebook
4     | https://lp.example.com/google/la/test/?utm_source=google
```

**Code Criteria:**

```
# Criteria
fb_landing_page_crit = [
    'utm_source=facebook',
    'fbclid',
    'test.com/fb/'
]

fb_source_crit = [
    'fb',
    'facebook'
]

google_landing_page_crit = [
    'gclid'
]

google_source_crit = [
    'click to call',
    'discovery',
    'call',
    'website',
    'landing page',
    'display - lp'
]

local_listings_source_crit = [
    'gmb'
]

partner_source_crit = [
    'vegas community',
    'new orleans community',
    'dc community',
]
```

**Conditional:**

```
def network_parse(df):
    if isinstance(df, str):
        if any(x in df['landing_page_url'] for x in fb_landing_page_crit):
            return 'Facebook'
        elif any(x in df['landing_page_url'] for x in google_landing_page_crit):
            return 'Google'
        elif any(x in df['source_name'] for x in fb_source_crit):
            return 'Facebook'
        elif any(x in df['source_name'] for x in google_source_crit):
            return 'Google'
        elif any(x in df['source_name'] for x in local_listings_source_crit):
            return 'Local Listings'
        elif any(x in df['source_name'] for x in partner_source_crit):
            return 'Partner - Community Partnership'
        else:
            return 'Other'
    else:
        return 'Other'
```

**Function Call:**

```
df['network'] = df.apply(network_parse, axis=1)
# Every row returns "Other"
```
2020/10/01
[ "https://Stackoverflow.com/questions/64160347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1061892/" ]
I figured out a better approach to the problem. Rather than using contains methods, I decided to run a regex search to see if the combined list values are found within the column row and if they are present, then apply that value. Found below are my updates: **Lists:** ``` fb_landing_page_crit = [ 'utm_source=facebook', 'fbclid', 'test.com\/fb\/' ] fb_landing_page_regex = "|".join(fb_landing_page_crit) google_landing_page_crit = [ 'gclid' ] google_landing_page_regex = "|".join(google_landing_page_crit) fb_source_crit = [ 'fb', 'facebook' ] fb_source_regex = "|".join(fb_source_crit) google_source_crit = [ 'click to call', 'discovery', 'call', 'website', 'landing page', 'display \- lp' ] google_source_regex = "|".join(google_source_crit) local_listings_source_crit = [ 'gmb' ] local_listings_source_regex = "|".join(local_listings_source_crit) partner_source_crit = [ 'vegas community', 'new orleans community', 'dc community', ] partner_source_regex = "|".join(partner_source_crit) ``` Function: ``` def network_parse(df): if isinstance(df['landing_page_url'], str): if bool(re.search(fb_landing_page_regex,df['landing_page_url'].lower())) or bool(re.search(fb_source_regex,df['source_name'].lower())): return 'Facebook' if bool(re.search(google_landing_page_regex,df['landing_page_url'].lower())) or bool(re.search(google_source_regex,df['source_name'].lower())): return 'Google' if bool(re.search(local_listings_source_regex,df['source_name'].lower())): return 'Local Listings' if bool(re.search(partner_source_regex,df['source_name'].lower())): return 'Partner - Community Partnership' else: return 'Other' else: return 'Other' ``` Function call: ``` df['network'] = df.apply(network_parse, axis=1) ```
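For anyone wanting to sanity-check this, here is a quick, hypothetical smoke test of `network_parse` against two rows modelled on the question's sample data. Note that the NaN URL falls straight into the outer `else`:

```python
import re
import pandas as pd

# Hypothetical two-row sample mirroring the question's data
df = pd.DataFrame({
    "source_name": ["CLICK TO CALL - New Mexico",
                    "Facebook - Test Camp - Los Angeles"],
    "landing_page_url": [float("nan"),
                         "https://lp.example.com/fb/la/test/?utm_source=facebook"],
})

df["network"] = df.apply(network_parse, axis=1)
print(df["network"].tolist())  # ['Other', 'Facebook']; NaN URLs short-circuit to 'Other'
```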
Right now, the problem is not the `any` but the `x in df['source_name']` part (I took `source_name` as it is simpler to explain there). You check whether any row of the dataframe is *equal* to (e.g.) `'Google'`, not whether it contains the word. To achieve the latter, you could nest the `for` statements:

```
...
if any(x in y for x in fb_landing_page_crit for y in df['landing_page_url']):
    return 'Facebook'
```

However, I am pretty sure this is not the most elegant and efficient way, as it loops multiple times over the same column, but for smallish dataframes it might be ok. Otherwise it might help you find a more efficient solution.

Edit: To investigate your problem, you could run the following two snippets, where the first one gives `False` and the second one gives `True`:

```
test = ['This', 'thought']
a = ['This is', 'a', 'longer Text than', 'I thought']
print(any(x in a for x in test))  # This is principally what you coded

test2 = ['This', 'I thought']
print(any(x in a for x in test2))
```
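One detail on that nesting: a generator object is truthy no matter what it would yield, so keeping inner parentheses (as in `any((x in y for y in col) for x in crit)`) makes the outer `any()` succeed immediately, which is why the flattened double-`for` form is used above. A tiny illustration:

```python
# Each element tested by the outer any() here is a generator object, which is
# truthy, so this prints True even though there is no real match:
print(any((x in y for y in ['abc']) for x in ['zzz']))  # True

# Flattening the loops tests the actual membership:
print(any(x in y for x in ['zzz'] for y in ['abc']))    # False
```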
3,864
58,971,323
I have an assignment in my class to implement something in Java and Python. I need to implement an IntegerStack in both languages. All the values are supposed to be held in an array, and there are some metadata values like the head() index. When I implement this in Java I just create an array with a max size (that I choose):

```
public class IntegerStack {

    public static int MAX_NUMBER = 50;

    private int[] _stack;
    private int _head;

    public IntegerStack() {
        _stack = new int[MAX_NUMBER];
        _head = -1;
    }

    public boolean emptyStack() {
        return _head < 0;
    }

    public int head() {
        if (_head < 0)
            throw new StackOverflowError("The stack is empty."); // underflow
        return _stack[_head];
    }

    // [...]
}
```

I'm really not sure how to do this in Python. I checked out a couple of tutorials, where they all say that an array in Python has the syntax `my_array = [1,2,3]`. But it's different, because I can use it as a list and append items as I want. So I could make a for loop and initialize 50 zero elements in a Python array, but would it be the same thing as in Java? It is not clear to me how a Python list is different from an array.
2019/11/21
[ "https://Stackoverflow.com/questions/58971323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10137268/" ]
In Python, if you declare an array like:

```
myarray = []
```

you are declaring an empty array with head -1, and you can append values to it with the `.append()` function and access them the same way you would in Java. For all intents and purposes, they are the same thing.
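To make the equivalence concrete, here is a rough, untested sketch of the Java class translated to Python, using a plain list as the fixed-size backing array (the method names are my own choices):

```python
class IntegerStack:
    MAX_NUMBER = 50

    def __init__(self):
        self._stack = [0] * IntegerStack.MAX_NUMBER  # fixed-size backing array
        self._head = -1                              # index of the top element

    def empty_stack(self):
        return self._head < 0

    def head(self):
        if self._head < 0:
            raise IndexError("The stack is empty.")  # underflow
        return self._stack[self._head]
```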
It's easier to use collections.deque for stacks in Python.

```
from collections import deque

stack = deque()

stack.append(1) # push
stack.append(2) # push
stack.append(3) # push
stack.append(4) # push

t = stack[-1]    # your 'head()'
tt = stack.pop() # pop

if not len(stack): # empty()
    print("It's empty")
```
3,865
5,325,858
I need to perform HTTP PUT operations from Python. Which libraries have been proven to support this? More specifically, I need to perform PUT on keypairs, not file upload.

I have been trying to work with restful_lib.py, but I get invalid results from the API that I am testing. (I know the results are invalid because I can fire off the same request with curl from the command line and it works.)

After attending PyCon 2011 I came away with the impression that pycurl might be my solution, so I have been trying to implement that. I have two issues here. First, pycurl renames "PUT" as "UPLOAD", which seems to imply that it is focused on file uploads rather than keypairs. Second, when I try to use it I never seem to get a return from the .perform() step.

Here is my current code:

```
import pycurl
import urllib
url='https://xxxxxx.com/xxx-rest'
UAM=pycurl.Curl()
def on_receive(data):
    print data
arglist= [\
('username', 'testEmailAdd@test.com'),\
('email', 'testEmailAdd@test.com'),\
('username','testUserName'),\
('givenName','testFirstName'),\
('surname','testLastName')]
encodedarg=urllib.urlencode(arglist)
path2= url+"/user/"+"99b47002-56e5-4fe2-9802-9a760c9fb966"
UAM.setopt(pycurl.URL, path2)
UAM.setopt(pycurl.POSTFIELDS, encodedarg)
UAM.setopt(pycurl.SSL_VERIFYPEER, 0)
UAM.setopt(pycurl.UPLOAD, 1) #Set to "PUT"
UAM.setopt(pycurl.CONNECTTIMEOUT, 1)
UAM.setopt(pycurl.TIMEOUT, 2)
UAM.setopt(pycurl.WRITEFUNCTION, on_receive)
print "about to perform"
print UAM.perform()
```
2011/03/16
[ "https://Stackoverflow.com/questions/5325858", "https://Stackoverflow.com", "https://Stackoverflow.com/users/662525/" ]
httplib should manage this: <http://docs.python.org/library/httplib.html>

There's an example on this page: <http://effbot.org/librarybook/httplib.htm>
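For a PUT with url-encoded key/value pairs, it could look roughly like this untested sketch, reusing the host, path and fields from the pycurl attempt in the question:

```python
import httplib
import urllib

params = urllib.urlencode([
    ('username', 'testUserName'),
    ('email', 'testEmailAdd@test.com'),
])
headers = {'Content-Type': 'application/x-www-form-urlencoded'}

conn = httplib.HTTPSConnection('xxxxxx.com')
conn.request('PUT', '/xxx-rest/user/99b47002-56e5-4fe2-9802-9a760c9fb966',
             params, headers)
response = conn.getresponse()
print(response.status, response.reason)
print(response.read())
conn.close()
```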
[urllib](http://docs.python.org/library/urllib.html) and [urllib2](http://docs.python.org/library/urllib2.html) are also suggested.
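urllib2 has no PUT verb out of the box, but the usual trick is to override get_method on the Request. A rough, untested sketch using the question's endpoint:

```python
import urllib
import urllib2

data = urllib.urlencode([('username', 'testUserName'),
                         ('email', 'testEmailAdd@test.com')])
url = 'https://xxxxxx.com/xxx-rest/user/99b47002-56e5-4fe2-9802-9a760c9fb966'

request = urllib2.Request(url, data=data)
request.get_method = lambda: 'PUT'  # urllib2 would otherwise send a POST when data is set
response = urllib2.urlopen(request)
print(response.read())
```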
3,874
26,699,356
I am using Spyder 2.3.1 under Windows 7 and have a running IPython 2.3 kernel on a Raspberry Pi (Raspbian Linux OS). I can connect to an external kernel, using a .json file and this tutorial: [Remote ipython console](https://pythonhosted.org/spyder/ipythonconsole.html)

But what now? If I "run" a script (F5), then the kernel tries to execute the script like:

```
%run "C:\test.py"
```

ERROR: File `u'C:\\test.py'` not found.

This comes back with an error, of course, because the script lies under C:\ on my machine and not on the remote machine/Raspberry Pi. How do I tell Spyder to first copy the script to the remote machine and then execute it there?

If I check the "this is a remote kernel" checkbox, I cannot connect to the existing kernel anymore. What does that box mean? Will it copy the script via SSH to the remote machine before execution? If I enter the SSH login information, I get an "It seems the kernel died unexpectedly" error.
2014/11/02
[ "https://Stackoverflow.com/questions/26699356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4153871/" ]
The tutorial that you mention is a little bit out of date, as Spyder now has the ability to connect to remote kernels. The "This is a remote kernel" checkbox, when checked, enables the portion of the dialog where you can enter your ssh connection credentials. (You will need this unless you have manually opened the required ssh tunnels to forward the process ports of your remote kernel...) Besides, the ipython connection info (the json file) must correspond to the remote kernel running on your Raspberry Pi.

Finally, there is no means at this time to copy the script lying on your local PC when you hit run. The preferred method would actually be the reverse: mount your Raspberry Pi's filesystem using a tool like sshfs and edit the files in place. The plan is to implement an sftp client in Spyder, so that this will not be required and you will be able to explore the remote filesystem from Spyder's file explorer.

To summarize:

1) Assuming that you are logged in to your Raspberry Pi, launch a local IPython kernel with `ipython kernel`. It should give you the name of the json file to use, which you should copy to your local PC.

2) In Spyder on your local PC, connect to a remote kernel with that json file and your ssh credentials.

I know that it is cumbersome, but it is a first step.
Another option is to use Spyder cells to send the whole contents of your file to the IPython console. I think this is easier than mounting your remote filesystem with Samba or sshfs (in case that's not possible or hard to do). Cells are defined by adding lines of the form `# %%` to your file. For example, let's say your file is: ``` # -*- coding: utf-8 -*- def f(x): print(x + x) f(5) ``` Then you can just add a cell at the bottom like this ``` # -*- coding: utf-8 -*- def f(x): print(x + x) f(5) # %% ``` and by pressing `Ctrl` + `Enter` above the cell line, the full contents of your file will be sent to the console and evaluated at once.
3,877
74,180,540
I'm trying to execute a Python project in the terminal, but this error appears:

```
(base) hopu@docker-manager1:~/bentoml-airquality$ python src/main.py
Traceback (most recent call last):
  File "src/main.py", line 7, in <module>
    from src import VERSION, SERVICE, DOCKER_IMAGE_NAME
ModuleNotFoundError: No module named 'src'
```

The project hierarchy is as follows: [Project hierarchy](https://i.stack.imgur.com/octVb.png)

If I execute the project from any IDE, it works well.
2022/10/24
[ "https://Stackoverflow.com/questions/74180540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15767572/" ]
Your PYTHONPATH is determined by the directory your Python executable is located in, not by where you're executing it from. For this reason, you should be able to import the files directly, and not via the `src` package. You're trying to import from `/src`, but your path is already in there. Maybe something like this might work:

```py
from . import VERSION, SERVICE, DOCKER_IMAGE_NAME
```
The interpreter is right. For `from src import VERSION, SERVICE, DOCKER_IMAGE_NAME` to be valid, `src` has to be a module or package accessible from the Python path.

The problem is that the `python` program looks in the current directory to search for the modules or packages to run, but the current directory is not added to the Python path. So it does find the `src/main.py` module, but *inside the interpreter* it cannot find the `src` package.

What can be done?

1. Add the directory containing src to the Python path. On a Unix-like system, it can be done simply with:

```
PYTHONPATH=".:$PYTHONPATH" python src/main.py
```

2. Start the module as a package element:

```
python -m src.main
```

That second way has an additional gift: you can then use the Pythonic `from . import ...`.
3,880
68,555,515
I am pretty new to Python and web scraping, but I have managed to get a well-working table to print. I am just curious how I would get this table into a CSV file in the exact same format as the print statement. Any logic explanations would be greatly appreciated and very helpful! My code is below...

```
from bs4 import BeautifulSoup
import requests
import time

htmlText = requests.get('https://www.fangraphs.com/teams/mariners/stats').text
soup = BeautifulSoup(htmlText, 'lxml', )
playerTable = soup.find('div', class_='team-stats-table')

def BattingStats():
    headers = [th.text for th in playerTable.find_all("th")]
    fmt_string = " ".join(["{:<25}", *["{:<6}"] * (len(headers) - 1)])
    print(fmt_string.format(*headers))
    for tr in playerTable.find_all("tr")[1:55]:
        tds = [td.text for td in tr.select("td")]
        with open('MarinersBattingStats.csv', 'w') as f:
            f.write(fmt_string.format(*tds))
        print(fmt_string.format(*tds))

if __name__ == '__main__':
    while True:
        BattingStats()
        timeWait = 100
        time.sleep(432 * timeWait)
        BattingStats()
```
2021/07/28
[ "https://Stackoverflow.com/questions/68555515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16523958/" ]
The charges API allows specifying a description. The description can be anything you want. It's your own tidbit of info you can have added to each transaction. When you export transactions on the Stripe site to a CSV, the description can be exported too. I assume it can be extracted with their APIs as well. Would that help / suffice? ``` const stripe = require('stripe') await stripe(stripe_secret_key).charges.create({ amount: .., currency: .., source: .., application_fee: .., description: "this here can be whatever" }, { stripe_account: accountId }); ```
There isn't really a way to do this on the Stripe dashboard, but you can certainly build something like this yourself. You'd start by [retrieving](https://stripe.com/docs/api/checkout/sessions/list) all the Checkout Sessions, then loop over the list and add up the [totals](https://stripe.com/docs/api/checkout/sessions/object#checkout_session_object-amount_total) based on the `reference_id` in metadata (or lack thereof). Rather than redoing the above logic every time you want to check the totals (which will get progressively slower as the number of completed Checkout Sessions increases) you could instead rely on [webhooks](https://stripe.com/docs/webhooks) to increment your totals as they come in via the `checkout.session.completed` event.
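For the retrieve-and-sum part, here is a rough, untested Python sketch; the `reference_id` metadata key is the one mentioned above, and the pagination and field names assume the standard `stripe` library, so adjust to your setup:

```python
import stripe

stripe.api_key = "sk_test_..."  # your secret key

totals = {}
sessions = stripe.checkout.Session.list(limit=100)
for session in sessions.auto_paging_iter():
    if session.payment_status != "paid":
        continue  # skip unpaid/abandoned sessions
    metadata = session.metadata or {}
    key = metadata.get("reference_id", "no_reference")
    totals[key] = totals.get(key, 0) + (session.amount_total or 0)

print(totals)  # amounts are in the smallest currency unit, e.g. cents
```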
3,881
71,213,873
I've been trying to run through this tutorial (<https://bedapub.github.io/besca/tutorials/scRNAseq_tutorial.html>) for the past day and constantly get an error after running this portion: `bc.pl.kp_genes(adata, min_genes=min_genes, ax = ax1)`

The error is the following:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/miniconda3/lib/python3.9/site-packages/besca/pl/_filter_threshold_plots.py", line 57, in kp_genes
    ax.set_yscale("log", basey=10)
  File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axes/_base.py", line 4108, in set_yscale
    ax.yaxis._set_scale(value, **kwargs)
  File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axis.py", line 761, in _set_scale
    self._scale = mscale.scale_factory(value, self, **kwargs)
  File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/scale.py", line 597, in scale_factory
    return scale_cls(axis, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'basey'
```

Anyone have any thoughts? I've uninstalled and reinstalled matplotlib to make sure it's updated, but that doesn't seem to have done anything either. Would appreciate any help, and thank you in advance! I'm a beginner!
2022/02/21
[ "https://Stackoverflow.com/questions/71213873", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18272784/" ]
It seems that `ax.set_yscale("log", basey=10)` does not recognise the keyword argument `basey`. This keyword was replaced in the most recent matplotlib releases; if you install an older version, it should work: `pip install matplotlib==3.3.4`

So why is this happening in the first place? The package you are using does not have specific dependencies pinned down, so it installs the most recent versions of its dependencies. If there are any API changes in the more recent versions of those packages, the code breaks. It's good practice to pin down the dependency versions of a project.
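If you would rather patch the besca source than downgrade, the replacement keyword in matplotlib 3.3+ is `base` (my reading of the rename, so double-check against your installed version):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# matplotlib >= 3.3: the basex/basey keywords were renamed to base
ax.set_yscale("log", base=10)
# matplotlib < 3.3 accepted the old spelling: ax.set_yscale("log", basey=10)
```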
I looked for posts with a similar issue ("wrong" keyword calls on `__init__`) on GitHub and SO, and it seems like you might need to update your matplotlib:

```
sudo pip install --upgrade matplotlib  # for Linux
pip install matplotlib --upgrade       # for Windows
```
3,882
65,383,598
I have read some posts but I have not been able to get what I want. I have a dataframe with ~4k rows and a few columns which I exported from Infoblox (a DNS server). One of them holds the dhcp attributes, and I would like to expand it into separate values. This is my df (I attach a screenshot from Excel): [excel screenshot](https://i.stack.imgur.com/LT4BB.png)

One of the columns is a dictionary of all the options; this is an example (sanitized):

```
[
    {"name": "tftp-server-name", "num": 66, "value": "10.70.0.27", "vendor_class": "DHCP"},
    {"name": "bootfile-name", "num": 67, "value": "pxelinux.0", "vendor_class": "DHCP"},
    {"name": "dhcp-lease-time", "num": 51, "use_option": False, "value": "21600", "vendor_class": "DHCP"},
    {"name": "domain-name-servers", "num": 6, "use_option": False, "value": "10.71.73.143,10.71.74.163", "vendor_class": "DHCP"},
    {"name": "domain-name", "num": 15, "use_option": False, "value": "example.com", "vendor_class": "DHCP"},
    {"name": "routers", "num": 3, "use_option": True, "value": "10.70.1.200", "vendor_class": "DHCP"},
]
```

I would like to expand this column into several columns (on the same row), using "name" as the df column and "value" as the row value. This would be the goal:

```
  tftp-server-name  voip-tftp-server          dhcp-lease-time  domain-name-server          domain-name  routers
0 10.71.69.58       10.71.69.58,10.71.69.59   86400            10.71.73.143,10.71.74.163   example.com  10.70.12.254
```

In order to have a global df with all the information, I guess I should create a new df keeping the index to merge with the primary one, but I wasn't able to do it. I have tried with expand, append, explode...

Please, could you help me?

Thank you so much for your solutions (to both). I could get it to work. I am adding the complete solution, just in case someone needs it (maybe there is a more pythonic way, but it works):

```
def formato(df):
    opciones = df['options']
    df_int = pd.DataFrame()
    for i in opciones:
        df_int = df_int.append(pd.DataFrame(i).set_index("name")[["value"]].T.reset_index(drop=True))
    df_int.index = range(len(df_int.index))
    df_global = pd.merge(df, df_int, left_index=True, right_index=True, how="inner")
    df_global = df_global.rename(columns={"comment": "Comentario", "end_addr": "IP Fin", "network": "Red",
                                          "start_addr": "IP Inicio", "disable": "Deshabilitado"})
    df_global = df_global[["Red", "Comentario", "IP Inicio", "IP Fin", "dhcp-lease-time", "domain-name-servers",
                           "domain-name", "routers", "tftp-server-name", "bootfile-name", "voip-tftp-server",
                           "wdm-server-ip-address", "ftp-file-server", "vendor-encapsulated-options"]]
    return df_global
```
2020/12/20
[ "https://Stackoverflow.com/questions/65383598", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13546976/" ]
You seem to be a little confused about [variable scope](https://www.php.net/manual/en/language.variables.scope.php). There's also an errant `$` in `$this->$cards`. While this [is valid syntax](https://www.php.net/manual/en/language.variables.variable.php), it's not doing what you expect. Consider the following to get an idea of what's going wrong with your code. See comments and output at the end for explanation. ``` <?php $cards = [4, 5, 6]; // Global scope class GamesManager { public $cards = []; // Class scope public function __construct() { $this->cards = [1, 2, 3]; // This will set the class variable $cards to [1, 2, 3]; var_dump($this->cards); // This will print the variable we've just set. } public function pullCard() { global $cards; // This refers to $cards defined at the top ([4, 5, 6]); var_dump($this->cards); // This refers to the class variable named $cards /* array(3) { [0]=> int(1) [1]=> int(2) [2]=> int(3) } */ var_dump($cards); // This refers to the $cards 'imported' by the global statement at the top of this method. /* array(3) { [0]=> int(4) [1]=> int(5) [2]=> int(6) } */ } } $gm = new GamesManager; $gm->pullCard(); /* array(3) { [0]=> int(1) [1]=> int(2) [2]=> int(3) } array(3) { [0]=> int(1) [1]=> int(2) [2]=> int(3) } array(3) { [0]=> int(4) [1]=> int(5) [2]=> int(6) } */ ```
In this case you don't need the 'global' keyword. Just access your class attributes using the $this keyword.

```
class GamesManager extends Main
{
    protected $DB;
    public $cards = array();

    public function __construct()
    {
        $this->cards = array('2' => 2, '3' => 3, '4' => 4, '5' => 5, '6' => 6, '7' => 7, '8' => 8,
                             '9' => 9, 'T' => 10, 'J' => 10, 'Q' => 10, 'K' => 10, 'A' => 11);
        var_dump($this->cards);
    }

    public function pullCard()
    {
        var_dump($this->cards);
    }
}
```
3,888
46,234,207
I have a multi-line string:

```
inputString = "Line 1\nLine 2\nLine 3"
```

I want to have an array where each element holds a maximum of 2 lines, as below:

```
outputStringList = ["Line 1\nLine 2", "Line 3"]
```

Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
You could try to find 2 lines (with a lookahead inside to avoid capturing the linefeed) or only one (to process the last, odd line). I expanded your example to show that it works for more than 3 lines (with a little "cheat": adding a newline at the end to handle all cases):

```
import re

s = "Line 1\nLine 2\nLine 3\nline4\nline5"

result = re.findall(r'(.+?\n.+?(?=\n)|.+)', s+"\n")

print(result)
```

result:

```
['Line 1\nLine 2', 'Line 3\nline4', 'line5']
```

The "add newline" cheat allows processing that properly:

```
s = "Line 1\nLine 2\nLine 3\nline4\nline5\nline6"
```

result:

```
['Line 1\nLine 2', 'Line 3\nline4', 'line5\nline6']
```
I wanted to post the grouper recipe from the itertools docs as well, but [PyToolz' `partition_all`](https://toolz.readthedocs.io/en/latest/api.html#toolz.itertoolz.partition_all) is actually a bit nicer.

```
from toolz import partition_all

s = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5"
result = ['\n'.join(tup) for tup in partition_all(2, s.splitlines())]
# ['Line 1\nLine 2', 'Line 3\nLine 4', 'Line 5']
```

---

Here's the `grouper` solution for the sake of completeness:

```
from itertools import zip_longest

# Recipe from the itertools docs.
def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

result = ['\n'.join((a, b)) if b else a for a, b in grouper(s.splitlines(), 2)]
```
3,889
47,659,731
My code is running fine for the first iteration, but after that it outputs the following error:

```
ValueError: matrix must be 2-dimensional
```

To the best of my knowledge (which is not much in Python), my code is correct, but I don't know why it is not running correctly for all the given iterations. Could anyone help me with this problem?

```
from __future__ import division
import numpy as np
import math
import matplotlib.pylab as plt
import sympy as sp
from numpy.linalg import inv

#initial guesses
x = -2
y = -2.5
i1 = 0

while i1 < 5:
    F = np.matrix([[(x**2)+(x*y**3)-9], [(3*y*x**2)-(y**3)-4]])
    theta = np.sum(F)
    J = np.matrix([[(2*x)+y**3, 3*x*y**2], [6*x*y, (3*x**2)-(3*y**2)]])
    Jinv = inv(J)
    xn = np.array([[x], [y]])
    xn_1 = xn - (Jinv*F)
    x = xn_1[0]
    y = xn_1[1]
    #~ print theta
    print xn
    i1 = i1 + 1
```
2017/12/05
[ "https://Stackoverflow.com/questions/47659731", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5507715/" ]
In a comment, you said,

> Yes that is the structure however const items, references, and class items can't be initialized in the body of constructors or in a non-constructor method.

A [delegating constructor](http://www.stroustrup.com/C++11FAQ.html#delegating-ctor) can be used to initialize reference member variables. Expanding your example code a bit (note the parameters are taken by const reference, so the static const defaults can bind to them), I can see something like:

```
class Obj
{
   static const AType defaultAType;
   const AType &aRef;

   static const BType defaultBType;
   const BType &bRef;

   public:

      // Delegate with default values for both references
      Obj() : Obj(defaultAType, defaultBType) {}

      // Delegate with default value for the B reference
      Obj(const AType &aType) : Obj(aType, defaultBType) {}

      // Delegate with default value for the A reference
      Obj(const BType &bType) : Obj(defaultAType, bType) {}

      // A constructor that has all the arguments.
      Obj(const AType& aType, const BType& bType) : aRef(aType), bRef(bType) {}
};
```
So there is a restriction that you can't initialize const variables other than in the constructor, so here is one approach that comes to mind. You can have one overloaded constructor with all the possible variable types as arguments, with the last argument being an integer that represents which argument to take care of, assuming you are considering one variable per overload. So you can do something like the code below. Hope this helps. You can increase the number of variables to initialize. Please note to just pass null for the pointers not in consideration.

```cpp
class Obj
{
    const typea var1;
    const typeb var2;
    const typec var3;
    const typed var4;

    public Obj(typea * ptr1, typeb * ptr2, typec * ptr3, typed * ptr4, int index)
    {
        switch (index)
        {
            case 1: var1 = * ptr1; break;
            case 2: var2 = * ptr2; break;
            case 3: var3 = * ptr3; break;
            case 4: var4 = * ptr4; break;
        }
    }
}
```
3,899
26,978,891
Using Maven I want to create 1) a JAR file for my current project with the current version included in the file name, myproject-version.jar, and 2) an overall artifact in tar.gzip format containing the project's JAR file and all dependency JARs in a lib directory and various driver scripts in a bin directory, but without the version number or an arbitrary ID in the name. I have this working somewhat using the assembly plugin, in that if I use the below pom.xml and assembly.xml then when I run 'mvn package' I can get a tar.gzip file with the JARs and scripts included as desired, however I don't seem to be able to get the naming/versioning correct -- either I get both the project's JAR file and the tar.gzip file with the version number or not depending on the build/finalName used. How can I specify these separately, and is it impossible to build the final artifact without an ID appended to the artifact name? For example I'd like to have my project's JAR file be named myproject-0.0.1-SNAPSHOT.jar and the overall "uber" artifact be named myproject.tar.gz (no version number or additional ID appended to the name). Is this possible? My current pom.xml and asssembly.xml are included below. pom.xml: ``` <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>mygroup</groupId> <artifactId>myproject</artifactId> <packaging>jar</packaging> <version>0.0.1-SNAPSHOT</version> <name>myproject</name> <url>http://maven.apache.org</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> <dependency> <groupId>joda-time</groupId> <artifactId>joda-time</artifactId> <version>2.3</version> </dependency> </dependencies> <build> <finalName>${project.artifactId}-${project.version}</finalName> <plugins> <plugin> <artifactId>maven-assembly-plugin</artifactId> <version>2.4.1</version> <configuration> <descriptors> <descriptor>src/main/maven/assembly.xml</descriptor> </descriptors> </configuration> <executions> <execution> <id>make-assembly</id> <!-- this is used for inheritance merges --> <phase>package</phase> <!-- bind to the packaging phase --> <goals> <goal>single</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> ``` assembly.xml: ``` <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd"> <id>bin</id> <formats> <format>tar.gz</format> </formats> <fileSets> <!-- the following file set includes the Python scripts/modules used to drive the monthly processing runs --> <fileSet> <directory>src/main/python</directory> <outputDirectory>bin</outputDirectory> <includes> <include>indices_processor.py</include> <include>concat_timeslices.py</include> </includes> </fileSet> <!-- the following file set includes the JAR artifact built by the package goal --> <fileSet> <directory>target</directory> <outputDirectory>lib</outputDirectory> <includes> <include>*.jar</include> </includes> </fileSet> </fileSets> <dependencySets> <!-- the following dependency set includes the dependency JARs needed by the main Java 
executable for indicator processing --> <dependencySet> <outputDirectory>lib</outputDirectory> <useProjectArtifact>true</useProjectArtifact> <scope>runtime</scope> <unpack>false</unpack> </dependencySet> </dependencySets> </assembly> ``` Thanks in advance for any suggestions.
2014/11/17
[ "https://Stackoverflow.com/questions/26978891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/85248/" ]
Just use `${project.artifactId}` as the value for your `finalName` in your assembly configuration. Example derived from your config (note the `finalName` element inside configuration): ``` <plugin> <artifactId>maven-assembly-plugin</artifactId> <version>2.4.1</version> <configuration> <descriptors> <descriptor>src/main/maven/assembly.xml</descriptor> </descriptors> <finalName>${project.artifactId}</finalName> </configuration> ... </plugin> ``` `finalName` for an assembly defaults to `${project.build.finalName}` if you don't change it. The default value for `${project.build.finalName}` is `${project.artifactId}-${project.version}`
I think you are looking for this: <http://maven.apache.org/plugins/maven-assembly-plugin/single-mojo.html#finalName>

Just put it in the configuration of the plugin. However, I think you shouldn't remove the version if you are planning to upload it to some repository.
3,901
612,253
I'm using Parallel LINQ, and I'm trying to download many URLs concurrently, using essentially code like this:

```
int threads = 10;
Dictionary<string, string> results = urls.AsParallel( threads ).ToDictionary( url => url, url => GetPage( url ) );
```

Since downloading web pages is network bound rather than CPU bound, using more threads than my number of processors/cores is very beneficial, since most of the time in each thread is spent waiting for the network to catch up.

However, judging from the fact that running the above with threads = 2 has the same performance as threads = 10 on my dual core machine, I'm thinking that the threads sent to AsParallel are limited to the number of cores. Is there any way to override this behavior? Is there a similar library available that doesn't have this limitation? (I've found such a library for Python, but need something that works in .NET.)
2009/03/04
[ "https://Stackoverflow.com/questions/612253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/30529/" ]
Do the URLs refer to the same server? If so, it could be that you are hitting the HTTP connection limit instead of the threading limit. There's an easy way to tell - change your code to: ``` int threads = 10; Dictionary<string, string> results = urls.AsParallel(threads) .ToDictionary(url => url, url => { Console.WriteLine("On thread {0}", Thread.CurrentThread.ManagedThreadId); return GetPage(url); }); ``` EDIT: Hmm. I can't get `ToDictionary()` to parallelise *at all* with a bit of sample code. It works fine for `Select(url => GetPage(url))` but not `ToDictionary`. Will search around a bit. EDIT: Okay, I still can't get `ToDictionary` to parallelise, but you can work around that. Here's a short but complete program: ``` using System; using System.Collections.Generic; using System.Threading; using System.Linq; using System.Linq.Parallel; public class Test { static void Main() { var urls = Enumerable.Range(0, 100).Select(i => i.ToString()); int threads = 10; Dictionary<string, string> results = urls.AsParallel(threads) .Select(url => new { Url=url, Page=GetPage(url) }) .ToDictionary(x => x.Url, x => x.Page); } static string GetPage(string x) { Console.WriteLine("On thread {0} getting {1}", Thread.CurrentThread.ManagedThreadId, x); Thread.Sleep(2000); return x; } } ``` So, how many threads does this use? 5. Why? Goodness knows. I've got 2 processors, so that's not it - and we've specified 10 threads, so that's not it. It still uses 5 even if I change `GetPage` to hammer the CPU. If you only need to use this for one particular task - and you don't mind slightly smelly code - you might be best off implementing it yourself, to be honest.
Monitor your network traffic. If the URLs are from the same domain it may be limiting the bandwidth. More connections might not actually provide any speed-up.
3,902
20,424,426
I have recently moved from Ubuntu to Mac osx. And my first thing is to bring my vim with me. I downloaded source from vim.org and compiled with gcc.( I'll put the version output at the bottom of my post) I added pathogen.vim to ~/.vim/autoload directory. But when I add the code in ~/.vim/vimrc: ``` execute pathogen#infect() ``` I got errors when tring to start vim, here is the error output: ``` Error detected while processing /Users/jack/.vim/vimrc: line 3: E117: Unknown function: pathogen#infect E15: Invalid expression: pathogen#infect() Press ENTER or type command to continue ``` First I though perhaps vim did not load pathogen.vim, but :scriptnames showed it did load! ``` 1: ~/.vim/vimrc 2: ~/.vim/bundle/vim-pathogen/autoload/pathogen.vim ``` After I ran :function, something caught my attention, there is a "abort" after the infect function, I google around, and found it did not solve my problem either: ``` function pathogen#legacyjoin(...) abort function pathogen#runtime_append_all_bundles(...) abort function pathogen#surround(path) abort function <SNR>2_Findcomplete(A, L, P) function pathogen#uniq(list) abort function pathogen#incubate(...) abort function pathogen#glob(pattern) abort function <SNR>2_warn(msg) function pathogen#runtime_findfile(file, count) abort function pathogen#separator() abort function pathogen#runtime_prepend_subdirectories(path) function pathogen#glob_directories(pattern) abort function pathogen#infect(...) abort function pathogen#is_disabled(path) function pathogen#join(...) abort function pathogen#cycle_filetype() function pathogen#split(path) abort function <SNR>2_find(count, cmd, file, lcd) function pathogen#fnameescape(string) abort function pathogen#execute(...) abort function pathogen#helptags() abort ``` Can anyone help point out what should I do to solve this problem? Here is the version output with command "vim --version": ``` JacktekiMac-Pro:.vim$ vim --version VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Dec 6 2013 17:01:30) MacOS X (unix) version Huge version without GUI. Features included (+) or not (-): +arabic +file_in_path +mouse_sgr +tag_binary +autocmd +find_in_path -mouse_sysmouse +tag_old_static -balloon_eval +float +mouse_urxvt -tag_any_white -browse +folding +mouse_xterm -tcl ++builtin_terms -footer +multi_byte +terminfo +byte_offset +fork() +multi_lang +termresponse +cindent -gettext -mzscheme +textobjects -clientserver -hangul_input +netbeans_intg +title +clipboard +iconv +path_extra -toolbar +cmdline_compl +insert_expand -perl +user_commands +cmdline_hist +jumplist +persistent_undo +vertsplit +cmdline_info +keymap +postscript +virtualedit +comments +langmap +printer +visual +conceal +libcall +profile +visualextra +cryptv +linebreak +python/dyn +viminfo -cscope +lispindent -python3 +vreplace +cursorbind +listcmds +quickfix +wildignore +cursorshape +localmap +reltime +wildmenu +dialog_con -lua +rightleft +windows +diff +menu -ruby +writebackup +digraphs +mksession +scrollbind -X11 -dnd +modify_fname +signs -xfontset -ebcdic +mouse +smartindent -xim +emacs_tags -mouseshape -sniff -xsmp +eval +mouse_dec +startuptime -xterm_clipboard +ex_extra -mouse_gpm +statusline -xterm_save +extra_search -mouse_jsbterm -sun_workshop +farsi +mouse_netterm +syntax system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" 2nd user vimrc file: "~/.vim/vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/local/share/vim" Compilation: gcc -c -I. 
-Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX -no-cpp-precomp -O2 -fno-strength-reduce -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 Linking: gcc -o vim -lm -lncurses -liconv -framework Cocoa ```
2013/12/06
[ "https://Stackoverflow.com/questions/20424426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1914683/" ]
I found the problem.

```
   system vimrc file: "$VIM/vimrc"
     user vimrc file: "$HOME/.vimrc"
 2nd user vimrc file: "~/.vim/vimrc"
      user exrc file: "$HOME/.exrc"
```

I had set $VIM to ~/.vim, which is the same as the 2nd user vimrc file, so the vimrc file loaded twice. After I changed $VIM to /etc/vim, everything turned out to be good.
I had a similar problem and found that I had not created the ~/.vim directory correctly. I had created it in the root by changing directory there and typing `mkdir /.vim`, but for some reason it was not working. Then I deleted this folder, did `mkdir ~/.vim`, and was able to install and use pathogen.
3,908
33,512,243
I am trying to understand what the better design choice is when we have functions in a class that do a bunch of things and should either return a string or raise a custom exception when a particular check fails.

Example: Suppose I have a class like:

```
#Division only for +ve numbers
class DivisionError(Exception):
    pass

class Division(object):
    def __init__(self, divisor, dividend):
        self.divisor = divisor
        self.dividend = dividend

    def divide(self):
        if self.divisor < 0:
            #return "-ve_divisor_error" or
            #raise DivisonError.divisorError
        if self.dividend < 0:
            #return "-ve_dividend_error" or
            #raise DivisionError.dividendError
        return self.dividend/self.divisor
```

1. Is it better to return a custom string or raise an exception, especially in the case of writing a Python library?
2. And do we need to write separate classes for all custom exceptions that we raise, or is there a way to have an enum of some kind on a single custom exception class?
2015/11/04
[ "https://Stackoverflow.com/questions/33512243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1982483/" ]
Your problem was due to your adding the JScrollPane and the JTextArea both to the `thePanel` JPanel, and so you see both: a JTextArea **without** JScrollPanes and an empty JScrollPane. * Don't add add the textArea itself to the JScrollPane and also to a JPanel, since you can only add a component to **one** container. Instead, add it to one component, the JScrollPane (actually you're adding it to its viewport view, but if you pass it into the JScrollPane's constructor, then you're doing this), and then add that JScrollPane to something else. * Also, **NEVER** set a text component's preferred size. You constrain the JTextArea so that it will never expand as more text is added, and so you'll never see scrollbars and not see text beyond this size. Set the visible columns and rows instead. e.g., `textArea1 = new JTextArea(rows, columns);` Note that this doesn't make much sense: ``` thePanel.setLayout(null); thePanel.setLayout(new FlowLayout(FlowLayout.LEFT)); ``` I'm not sure what you are trying to do here since 1) you want to set a container's layout only once, and 2) in general you will want to avoid use of `null` layouts. For example: ``` import java.awt.BorderLayout; import javax.swing.*; public class MyProgram extends JPanel { private static final int T_FIELD_COLS = 20; private static final int TXT_AREA_ROWS = 15; private static final int TXT_AREA_COLS = 20; private JButton button1 = new JButton("Button 1"); private JButton button2 = new JButton("Button 2"); private JTextField textField = new JTextField(T_FIELD_COLS); private JTextArea textArea = new JTextArea(TXT_AREA_ROWS, TXT_AREA_COLS); public MyProgram() { // Create a JPanel to hold your top line of components JPanel topPanel = new JPanel(); int gap = 3; topPanel.setBorder(BorderFactory.createEmptyBorder(gap, gap, gap, gap)); // set this JPanel's layout. Here I use BoxLayout. topPanel.setLayout(new BoxLayout(topPanel, BoxLayout.LINE_AXIS)); topPanel.add(button1); topPanel.add(Box.createHorizontalStrut(gap)); topPanel.add(textField); topPanel.add(Box.createHorizontalStrut(gap)); topPanel.add(button2); // so the JTextArea will wrap words textArea.setLineWrap(true); textArea.setWrapStyleWord(true); // add the JTextArea to the JScrollPane's viewport: JScrollPane scrollPane = new JScrollPane(textArea); scrollPane.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS); // set the layout of the main JPanel. setLayout(new BorderLayout()); add(topPanel, BorderLayout.PAGE_START); add(scrollPane, BorderLayout.CENTER); } private static void createAndShowGui() { MyProgram mainPanel = new MyProgram(); JFrame frame = new JFrame("My Program"); frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); frame.getContentPane().add(mainPanel); frame.pack(); // don't set the JFrame's size, preferred size or bounds frame.setLocationByPlatform(true); frame.setVisible(true); } public static void main(String[] args) { // start your program on the event thread SwingUtilities.invokeLater(new Runnable() { public void run() { createAndShowGui(); } }); } } ```
Try this:

```
textArea1 = new JTextArea();
textArea1.setColumns(20);
textArea1.setRows(5);
scroller.setViewportView(textArea1);
```
3,909
67,503,532
When I try to run my localhost server I get the following error: `FileNotFoundError: [Errno 2] No such file or directory: '/static/CSV/ExtractedTweets.csv'`

This error is due to the line `with open(staticfiles_storage.url('/CSV/ExtractedTweets.csv'), 'r', newline='', encoding="utf8") as csvfile:`

This line of code can be found in a custom Python module within my app folder. I have copies of /static/CSV/ExtractedTweets.csv in my project root folder, my app folder, and in the folder enclosing my project root and app folders. I also have an additional copy of ExtractedTweets.csv within my app folder.

settings.py

```
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
```

urls.py

```
from django.conf import settings

urlpatterns = [
    ...
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```

I have placed the file in all possible locations, yet Django cannot seem to find it. Interestingly, my templates have no problem finding my static CSS files. If anyone has any idea how to resolve this error, please let me know.
2021/05/12
[ "https://Stackoverflow.com/questions/67503532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9403355/" ]
I never found a solution for getting the static file path; however, the find() function seems to be a workable alternative.

custommodule.py

`from django.contrib.staticfiles.finders import find`

`with open(find('CSV/ExtractedTweets.csv'), 'r', newline='', encoding="utf8") as csvfile:`
If you are not looking to deploy this project, you can add:

```
from django.conf import settings

urlpatterns = [
    path(....),
    path(....),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```

Or you can try adding:

```
STATICFILES_DIRS = [
    BASE_DIR / "static",
]
```

to your settings.py.
3,910
64,483,271
I'm trying to install packages through pip, but every package I try to install fails with

```
ERROR: Could not find a version that satisfies the requirement numpy (from versions: none)
ERROR: No matching distribution found for numpy
```

When running the same command with `-vvv`, like `pip install numpy -vvv`, it gives the following output.

```
Using pip 20.2.1 from c:\program files\python38\lib\site-packages\pip (python 3.8)
Defaulting to user installation because normal site-packages is not writeable
Created temporary directory: C:\Users\d\AppData\Local\Temp\pip-ephem-wheel-cache-bo4luxtk
Created temporary directory: C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a
Initialized build tracking at C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a
Created build tracker: C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a
Entered build tracker: C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a
Created temporary directory: C:\Users\d\AppData\Local\Temp\pip-install-2se6s0ld
1 location(s) to search for versions of numpy:
* https://pypi.org/simple/numpy/
Fetching project page and analyzing links: https://pypi.org/simple/numpy/
Getting page https://pypi.org/simple/numpy/
Found index url https://pypi.org/simple
Looking up "https://pypi.org/simple/numpy/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655
Incremented Retry for (url='/simple/numpy/'): Retry(total=4, connect=None, read=None, redirect=None, status=None)
Retry: /simple/numpy/
Resetting dropped connection: pypi.org
Starting new HTTPS connection (2): pypi.org:443
https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655
Incremented Retry for (url='/simple/numpy/'): Retry(total=3, connect=None, read=None, redirect=None, status=None)
Retry: /simple/numpy/
Resetting dropped connection: pypi.org
Starting new HTTPS connection (3): pypi.org:443
https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655
Incremented Retry for (url='/simple/numpy/'): Retry(total=2, connect=None, read=None, redirect=None, status=None)
Retry: /simple/numpy/
Resetting dropped connection: pypi.org
Starting new HTTPS connection (4): pypi.org:443
https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655
Incremented Retry for (url='/simple/numpy/'): Retry(total=1, connect=None, read=None, redirect=None, status=None)
Retry: /simple/numpy/
Resetting dropped connection: pypi.org
Starting new HTTPS connection (5): pypi.org:443
https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655
Incremented Retry for (url='/simple/numpy/'): Retry(total=0, connect=None, read=None, redirect=None, status=None)
Retry: /simple/numpy/
Resetting dropped connection: pypi.org
Starting new HTTPS connection (6): pypi.org:443
https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655
Could not fetch URL https://pypi.org/simple/numpy/: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/numpy/ (Caused by ResponseError('too many 500 error responses')) - skipping
Given no hashes to check 0 links for project 'numpy': discarding no candidates
ERROR: Could not find a version that satisfies the requirement numpy (from versions: none)
ERROR: No matching distribution found for numpy
Exception information:
Traceback (most recent call last):
  File "c:\program files\python38\lib\site-packages\pip\_internal\cli\base_command.py", line 216, in _main
    status = self.run(options, args)
  File "c:\program files\python38\lib\site-packages\pip\_internal\cli\req_command.py", line 182, in wrapper
    return func(self, options, args)
  File "c:\program files\python38\lib\site-packages\pip\_internal\commands\install.py", line 324, in run
    requirement_set = resolver.resolve(
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 183, in resolve
    discovered_reqs.extend(self._resolve_one(requirement_set, req))
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 388, in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 339, in _get_abstract_dist_for
    self._populate_link(req)
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 305, in _populate_link
    req.link = self._find_requirement_link(req)
  File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 270, in _find_requirement_link
    best_candidate = self.finder.find_requirement(req, upgrade)
  File "c:\program files\python38\lib\site-packages\pip\_internal\index\package_finder.py", line 926, in find_requirement
    raise DistributionNotFound(
pip._internal.exceptions.DistributionNotFound: No matching distribution found for numpy
1 location(s) to search for versions of pip:
* https://pypi.org/simple/pip/
Fetching project page and analyzing links: https://pypi.org/simple/pip/
Getting page https://pypi.org/simple/pip/
Found index url https://pypi.org/simple
Looking up "https://pypi.org/simple/pip/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "GET /simple/pip/ HTTP/1.1" 500 655
Could not fetch URL https://pypi.org/simple/pip/: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by ResponseError('too many 500 error responses')) - skipping
Given no hashes to check 0 links for project 'pip': discarding no candidates
Removed build tracker: 'C:\\Users\\d\\AppData\\Local\\Temp\\pip-req-tracker-8z32xx1a'
```

My pip.ini file looks like

```
[global]
trusted-host = pypi.python.org
               pypi.org
               files.pythonhosted.org
```

How can the 'too many 500 error responses' mentioned in the error be fixed?

Edit: Reinstalling Python has not fixed the issue, and I am using Python 3.8.6. I've also tried restarting my computer.
2020/10/22
[ "https://Stackoverflow.com/questions/64483271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4601149/" ]
The main issue is the `alpha` argument together with `geom_line`. If you want the keys to show up as lines you set alpha to 1 in the legend via `guides(color = guide_legend(override.aes = list(alpha = c(1, 1, 1, 1))))`. If you want colored rectangles for the keys this could be achieved by adding `key_glyph = "rect"` to your `geom_line` layers Using the `economics` dataset as example data: ```r library(ggplot2) ggplot(economics, aes(x=date)) + geom_line(aes(y=`psavert`/100, color="Less Than HS Diploma"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`uempmed`/100, color="HS Diploma"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`psavert`/10, color="Some College / Associate's Degree"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`uempmed`/10, color="Bachelor's Degree and Higher"), size=2, alpha=0.5, linetype=1) + scale_color_manual(name="Educational Attainment", values = c("Less Than HS Diploma"="deepskyblue", "HS Diploma" = "firebrick1", "Some College / Associate's Degree"="mediumpurple", "Bachelor's Degree and Higher"="springgreen4")) + guides(color = guide_legend(override.aes = list(alpha = c(1, 1, 1, 1)))) + ggtitle("Unemployment Rate by Educational Attainment") + xlab("Time") + ylab("Unemployment Rate") + scale_y_continuous(labels = scales::percent) + theme(plot.title = element_text(hjust = 0.5), legend.position="bottom") ``` ![](https://i.imgur.com/Ltxkim2.png) And with `key_glyph="rect"`: ```r library(ggplot2) ggplot(economics, aes(x=date)) + geom_line(aes(y=`psavert`/100, color="Less Than HS Diploma"), size=2,alpha=0.5, linetype=1, key_glyph = "rect") + geom_line(aes(y=`uempmed`/100, color="HS Diploma"), size=2, alpha=0.5, linetype=1, key_glyph = "rect") + geom_line(aes(y=`psavert`/10, color="Some College / Associate's Degree"), size=2, alpha=0.5, linetype=1, key_glyph = "rect") + geom_line(aes(y=`uempmed`/10, color="Bachelor's Degree and Higher"), size=2, alpha=0.5, linetype=1, key_glyph = "rect") + scale_color_manual(name="Educational Attainment", values = c("Less Than HS Diploma"="deepskyblue", "HS Diploma" = "firebrick1", "Some College / Associate's Degree"="mediumpurple", "Bachelor's Degree and Higher"="springgreen4")) + ggtitle("Unemployment Rate by Educational Attainment") + xlab("Time") + ylab("Unemployment Rate") + scale_y_continuous(labels = scales::percent) + theme(plot.title = element_text(hjust = 0.5), legend.position="bottom") ``` ![](https://i.imgur.com/9gjMBAX.png) Created on 2020-10-22 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)
The `values` argument of `scale_color_manual` should contain color names rather than the line labels; the labels don't need to be passed there. Example: ``` scale_color_manual(name="Educational Attainment", values = c("red","yellow","white",...)) ```
3,911
32,400,048
I am trying to edit a .reg file in python to replace strings in a file. I can do this for any other file type such as .txt. Here is the python code: ``` with open ("C:/Users/UKa51070/Desktop/regFile.reg", "r") as myfile: data=myfile.read() print data ``` It returns an empty string
2015/09/04
[ "https://Stackoverflow.com/questions/32400048", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3473280/" ]
I am not sure why you are not seeing any output, perhaps you could try: `print len(data)` Depending on your version of Windows, your `REG` file will be saved using UTF-16 encoding, unless you specifically export it using the `Win9x/NT4` format. You could try using the following script: ``` import codecs with codecs.open("C:/Users/UKa51070/Desktop/regFile.reg", encoding='utf-16') as myfile: data = myfile.read() print data ```
It's probably not a good idea to edit `.reg` files manually. My suggestion is to search for a Python package that handles it for you. I think the [\_winreg](https://docs.python.org/2/library/_winreg.html) Python built-in library is what you are looking for.
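As an illustration (the key path and value name below are hypothetical placeholders, not from the original answer), a minimal sketch of reading one value back with `_winreg` on Python 2 looks like this:

```py
import _winreg

# Hypothetical key path and value name -- substitute the registry entries you care about.
key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, r"Software\MyApp")
try:
    value, value_type = _winreg.QueryValueEx(key, "SomeValue")
    print value  # Python 2 print statement, matching the question's code
finally:
    _winreg.CloseKey(key)
```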
3,913
64,256,474
I have to deploy a Python project on an AWS Lambda function. When I create its zip package it occupies around 80 MB (Lambda allows up to 50 MB). Also, I cannot upload it to s3 because the size of the uncompressed package is around 284 MB (S3 allows up to 250 MB). Any idea how to tackle this problem, or is there any alternative for it?
2020/10/08
[ "https://Stackoverflow.com/questions/64256474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9920934/" ]
To make it work, just include the jQuery, Popper.js, and Bootstrap JS CDNs. Note that jQuery must come first, then Popper.js, and then our JavaScript plugins. For more info click [here](https://getbootstrap.com/docs/4.5/getting-started/download/) ``` <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script> <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js" integrity="sha384-9/reFTGAW83EW2RDu2S0VKaIzap3H66lZH81PoYlFhbGU+6BZp6G7niu735Sk7lN" crossorigin="anonymous"></script> <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js" integrity="sha384-B4gt1jrGC7Jh4AgTPSdUtOBvfO8shuf57BaghqFfPlYxofvL8/KUEfYiJOMMV+rV" crossorigin="anonymous"></script> ``` You can add these scripts in the **head** tag or at the bottom of the **body**.
You forgot to add the Bootstrap CDN or to link your Bootstrap JavaScript at the bottom of the body. Here: ``` <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous"></script> ```
3,914
60,197,890
I'm new to Python and running the command: > > pip install pysam > > > Which results in: ``` Collecting pysam Using cached https://files.pythonhosted.org/packages/25/7e/098753acbdac54ace0c6dc1f8a74b54c8028ab73fb027f6a4215487d1fea/pysam-0.15.4.tar.gz ERROR: Command errored out with exit status 1: command: 'c:\path\programs\python\python38\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\path\\Local\\Temp\\pip-install-qzuue1yz\\pysam\\setup.py'"'"'; __file__='"'"'C:\\path\\Temp\\pip-install-qzuue1yz\\pysam\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base pip-egg-info Complete output (23 lines): # pysam: cython is available - using cythonize if necessary # pysam: htslib mode is shared # pysam: HTSLIB_CONFIGURE_OPTIONS=None '.' is not recognized as an internal or external command, operable program or batch file. '.' is not recognized as an internal or external command, operable program or batch file. File "<string>", line 1, in <module> File "C:\path\Local\Temp\pip-install-qzuue1yz\pysam\setup.py", line 241, in <module> htslib_make_options = run_make_print_config() File "C:\path\\Local\Temp\pip-install-qzuue1yz\pysam\setup.py", line 68, in run_make_print_config stdout = subprocess.check_output(["make", "-s", "print-config"]) File "c:\path\programs\python\python38\lib\subprocess.py", line 411, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "c:\path\programs\python\python38\lib\subprocess.py", line 489, in run File "c:\path\programs\python\python38\lib\subprocess.py", line 854, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "c:\path\programs\python\python38\lib\subprocess.py", line 1307, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, FileNotFoundError: [WinError 2] The system cannot find the file specified # pysam: htslib configure options: None ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ``` What is the problem here? Originally I got an error about cython not being installed, so I ran pip install cython and that ran without issue.
2020/02/12
[ "https://Stackoverflow.com/questions/60197890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1308743/" ]
There are many binary wheels [at PyPI](https://pypi.org/project/pysam/#files) but only for Linux and MacOS X. [The package at bioconda](https://anaconda.org/bioconda/pysam) is also compiled only for Linux and OS X. When you try to install pysam on Windows, `pip` downloads the source distribution `pysam-0.15.4.tar.gz`, unpacks it and runs `setup.py`. pysam's `setup.py` [configures](https://github.com/pysam-developers/pysam/blob/c818db502b8f8334e7bf29060685114dd9af9530/setup.py#L221) the library `htslib` by [running](https://github.com/pysam-developers/pysam/blob/c818db502b8f8334e7bf29060685114dd9af9530/setup.py#L56) the script [`htslib/configure`](https://github.com/pysam-developers/pysam/blob/master/htslib/configure). This is a shell script; it cannot be run on Windows without a Unix emulation layer. Hence the error. Bottom line: like many pieces of software related to genetics (I have some experience with software written in Python and Java), `pysam` seems to be usable only on Unix, preferably Linux or OS X.
If you have Anaconda, try this: `conda install -c bioconda pysam`
3,915
57,921,006
I have a Flask application via Python. On my page there are three images, but Flask only shows one of them. I could not figure out where the problem is. Here is my code. HTML ==== ``` <div class="col-xs-4"> <img style="width:40%;padding:5px" src="static/tomato.png"/> <br> <button class="btn btn-warning"><a style="color:white;" href="http://127.0.0.1:5000/detect">Tomato Analysis</a></button> </div> <div class="col-xs-4"> <img style="width:40%;padding:5px" src="static/grapes.png"/> <br> <button class="btn btn-warning"><a style="color:white;" href="http://127.0.0.1:5000/detect">Grape Analysis</a></button> </div> ``` PYTHON ====== ``` @app.route("/main") def index(): return render_template('gui2.html') ``` It shows tomato.png but it does not show grapes.png. What is the problem and how can I solve it? Also, I am using electron.js. After running the Python script, I am running npm start. The error output is: ======================= `GET /%7B%7B%20url_for('static',%20filename%20=%20'image/corn2.png')%20%7D%7D HTTP/1.1" 404 -` Any help is appreciated... Thanks
2019/09/13
[ "https://Stackoverflow.com/questions/57921006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11697825/" ]
You can change the route main to mainPage. Try the code below: ``` @app.route("/mainPage") def index(): return render_template('gui2.html') ```
The 404 error message clearly tells you that the resource you are looking for was not found at the given location. Make sure that the file exists at the path you give. Since tomato.png is displayed correctly, simply make sure that the other files are in the same location as tomato.png. Also try opening the page in an incognito or private browser window.
3,916
32,075,662
I'm facing a nearly-textbook diamond inheritance problem. The (rather artificial!) example below captures all its essential features: ``` # CAVEAT: error-checking omitted for simplicity class top(object): def __init__(self, matrix): self.matrix = matrix # matrix must be non-empty and rectangular! def foo(self): '''Sum all matrix entries.''' return sum([sum(row) for row in self.matrix]) class middle_0(top): def foo(self): '''Sum all matrix entries along (wrap-around) diagonal.''' matrix = self.matrix n = len(matrix[0]) return sum([row[i % n] for i, row in enumerate(matrix)]) class middle_1(top): def __init__(self, m, n): data = range(m * n) matrix = [[1 + data[i * n + j] for j in range(n)] for i in range(m)] super(middle_1, self).__init__(matrix) ``` In summary, classes `middle_0` and `middle_1` are both subclasses of class `top`, where `middle_0` overrides method `foo` and `middle_1` overrides method `__init__`. Basically, the classic diamond inheritance set up. The one elaboration on the basic pattern is that `middle_1.__init__` actually invokes the parent class's `__init__`. (The demo below shows these classes in action.) I want to define a class `bottom` that "gets"1 `foo` from `middle_0` and `__init__` from `middle_1`. What's the "pythonic way" to implement such a `bottom` class? --- Demo: ``` matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] print top(matrix).foo() # 45 print middle_0(matrix).foo() # 15 print middle_1(3, 3).foo() # 45 # print bottom(3, 3).foo() # 15 ``` --- 1I write "gets" instead of "inherits" because I suspect this problem can't be solved easily using standard Python inheritance.
2015/08/18
[ "https://Stackoverflow.com/questions/32075662", "https://Stackoverflow.com", "https://Stackoverflow.com/users/559827/" ]
`bottom` simply inherits from both; there is nothing specific about your classes that would make this case special: ``` class bottom(middle_0, middle_1): pass ``` Demo: ``` >>> class bottom(middle_0, middle_1): ... pass ... >>> bottom(3, 3).foo() 15 ``` This works as expected because Python arranges both `middle_0` and `middle_1` to be searched for methods before `top` is: ``` >>> bottom.__mro__ (<class '__main__.bottom'>, <class '__main__.middle_0'>, <class '__main__.middle_1'>, <class '__main__.top'>, <type 'object'>) ``` This shows the *Method Resolution Order* of the class; it is that order that is used to find methods. So `bottom.__init__` is found on `middle_1`, and `bottom.foo` is found on `middle_0`, as both are listed before `top`.
I think the > > a class bottom that "gets"1 foo from middle_0 and __init__ from middle_1. > > > can be achieved simply by ``` class bottom(middle_0, middle_1): pass ```
3,917
33,771,929
**Definition**: > > [Bag or Multiset](https://xlinux.nist.gov/dads/HTML/bag.html) is a set data structure which allows duplicate elements, provided the order of retrieval is not significant. > > > Now as I read python documentation it is told that a [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) behaves as a Bag data structure. But I am confused if we can use List or Tuple as an alternative? One possible flaw as I can see is that `removing` an element is not allowed in Bag. Also, normally retrieving an element in List or Tuple takes O(n) time but Bag can be implemented via hashing to allow constant time removal. **Question**: Can we use List or Tuple as a Bag data structure?
2015/11/18
[ "https://Stackoverflow.com/questions/33771929", "https://Stackoverflow.com", "https://Stackoverflow.com/users/867461/" ]
> > Can we use List or Tuple as a Bag data structure? > > > Yes. It would require some code to get the structure correct, and you'd likely want a list, since lists are mutable. But you can add duplicates to a list, count them, and remove them.
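For instance, here is a minimal sketch of treating a plain list as a bag (assuming, per the definition above, that retrieval order is ignored):

```py
bag = []             # an empty list used as a bag
bag.append('a')
bag.append('a')      # duplicates are allowed
bag.append('b')

print(bag.count('a'))   # 2 -- the multiplicity of 'a'
bag.remove('a')         # removes a single occurrence
print(bag.count('a'))   # 1
```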
No. * The elements of a bag are unordered and non-unique. * The elements of a Counter are unordered and non-unique. * The elements of a set are unordered and unique. * The elements of a list (and tuple) are ordered and non-unique. A Counter behaves like a bag of m&m's. A list behaves like a pez dispenser - the order of its elements is significant. ``` > a = {1, 2, 3} > b = {1, 2, 3} > c = {1, 3, 2} > a == b True > a == c True > a = Counter((1, 2, 3)) > b = Counter((1, 2, 3)) > c = Counter((1, 3, 2)) > a == b True > a == c True > a = [1, 2, 3] > b = [1, 2, 3] > c = [1, 3, 2] > a == b True > a == c False ```
3,918
54,483,013
I am using a ScanSnap scanner which generates PDF-1.3 where it will auto-correct the orientation (rotate 0 or 180 degrees) of scanned documents when the PDF is viewed within Adobe Reader. OCR is done by the scanning software and I am assuming the orientation is determined then and encoded into the PDF. Note that I know I can use Tesseract or other OCR tools to determine if rotation is needed, but I do not want to use it as the scanner software seems to have already determined it and telling PDF viewers if rotation is needed (or not). When I use image extraction tools (like xpdf pdfimages, python libraries) it does not properly rotate jpeg images 180 degrees (if needed). > > NB: pdfimages extracts the raw image data from the PDF file, without > performing any additional transforms. Any rotation, clipping, color > inversion, etc. done by the PDF content stream is ignored. > > > I have scanned a document twice with rotation (0 degrees, and 180 degrees). I cannot seem to reverse engineer what is telling Adobe/Foxit to rotate (or not) the image when viewing. I have looked at the PDF-1.3 specification doc, and compared the PDF binary data between the orientation-corrected and not-corrected. I can not determine what is correcting the orientation? * No /Page/Rotate (defaults to 0) in PDF * No EXIF orientation in JPEG * I do not see any transformation matrix (cm operator) in PDF In both cases the PDF binary looks like the following (stopped at the JPEG streamed data) **UPDATED:** links to PDF files [rotated-180](http://s000.tinyupload.com/?file_id=03294969585737255560) [rotated-0](http://s000.tinyupload.com/?file_id=00344136391322927294) ``` %PDF-1.3 %âãÏÓ 1 0 obj <</Metadata 20 0 R/Pages 2 0 R/Type/Catalog>> endobj 2 0 obj <</MediaBox[0.0 0.0 606.6 794.88]/Count 1/Type/Pages/Kids[4 0 R]>> endobj 4 0 obj <</Parent 2 0 R/Contents 18 0 R/PieceInfo<</PSL<</Private<</V(3.2.9)>>/LastModified(D:20190201125524-00'00')>>>>/MediaBox[0.0 0.0 606.6 794.88]/Resources<</XObject<</Im0 5 0 R>>/Font<</C0_0 11 0 R/T1_0 16 0 R>>/ProcSet[/PDF/Text/ImageC]>>/Type/Page/LastModified(D:20190201085524-04'00')>> endobj 5 0 obj <</Subtype/Image/Length 433576/Filter/DCTDecode/Name/X/BitsPerComponent 8/ColorSpace/DeviceRGB/Width 1685/Height 2208/Type/XObject>>stream ``` **Does anyone know how PDF viewers know to rotate an image 180 (or not). Is it meta-data within the PDF or JPEG image which can be extracted?** Does Adobe and other viewers do something dynamically on opening a document to determine if orientation correction is needed? I'm no expert with PDF specification. But I was hoping someone may have already found a solution to this problem.
2019/02/01
[ "https://Stackoverflow.com/questions/54483013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/297500/" ]
The image **Im0** in the resources of the page in "internetfile-180.pdf" is not rotated: [![internetfile-180.pdf image](https://i.stack.imgur.com/DS43A.jpg?s=256)](https://i.stack.imgur.com/DS43A.jpg) But the image **Im0** in the resources of the page in "internetfile.pdf" is rotated: [![enter image description here](https://i.stack.imgur.com/LXGif.jpg?s=256)](https://i.stack.imgur.com/LXGif.jpg) In the viewer both look upright, so in "internetfile.pdf" a technique must be used that rotates the image. There are two major techniques for this: * Setting the **Rotate** property of the page accordingly, i.e. here to 180. * Applying a rotation transformation to the current transformation matrix in the content stream of the page. Let's look at the page dictionary first, a bit pretty-printed: ``` 4 0 obj << /Parent 2 0 R /Contents 13 0 R /PieceInfo << /PSL << /Private <</V (3.2.9)>> /LastModified (D:20190204142537-00'00') >> >> /MediaBox [0.0 0.0 608.64 792.24] /Resources << /XObject <</Im0 5 0 R>> /Font <</T1_0 11 0 R>> /ProcSet [/PDF /Text /ImageC] >> /Type /Page /LastModified (D:20190204102537-04'00') >> ``` As we see, there is no **Rotate** entry present. Thus, we'll have to look at the page content stream. According to the page dictionary it's in object 13, generation 0. That object is a stream object with deflated stream data: ``` 13 0 obj << /Length 4014 /Filter /FlateDecode >> stream H‰”WÛŽÛF}Ÿ¯Ð[lÀÓÓ÷˾e½ [...] ÿüòÛÿ ´ß endstream endobj ``` After inflating the stream data, they start like this: ``` q -608.3999939 0 0 -792.9600067 608.3999939 792.9600067 cm /Im0 Do Q [...] ``` And this is indeed an application of the second technique, the **cm** instruction applies the rotation and the **Do** instruction paints the image with the rotation active! In detail, the **cm** instruction applies the affine transformation represented by the matrix ``` -608.3999939 0 0 0 -792.9600067 0 608.3999939 792.9600067 1 ``` In other words: ``` x' = -608.3999939 * x + 608.3999939 y' = -792.9600067 * y + 792.9600067 ``` This transformation actually is a combination of a rotation by 180°, a horizontal scaling by 608.3999939 and a vertical scaling by 792.9600067, and a translation by 608.3999939 horizontally and 792.9600067 vertically. The **Do** instruction now paints the image. Here one needs to know that this instruction first scales the image to fit into the unit 1×1 square at the origin and then applies the current transformation matrix. Thus, the image is drawn rotated by 180°, effectively filling the whole 608.64×792.24 **MediaBox** of the page.
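As a quick sanity check (not part of the original file analysis), you can plug the matrix entries quoted above into the PDF mapping `x' = a*x + c*y + e`, `y' = b*x + d*y + f`:

```py
a, b = -608.3999939, 0.0
c, d = 0.0, -792.9600067
e, f = 608.3999939, 792.9600067

def apply_cm(x, y):
    # Apply the PDF current-transformation-matrix to a point.
    return a * x + c * y + e, b * x + d * y + f

print(apply_cm(0, 0))  # (608.3999939, 792.9600067): the origin maps to the opposite corner
print(apply_cm(1, 1))  # (0.0, 0.0): the far corner of the unit square maps to the origin
```

The two unit-square corners swapping places is exactly the 180-degree rotation (plus page-size scaling) described above.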
**mkl** answered the question correctly, doing all the hard work of decoding the PDF for me. I thought I would add my Python (PyPDF2) code to search for the found rotation condition, in case it helps someone else. ```py import re import PyPDF2 input1 = PyPDF2.PdfFileReader(open(filepath, "rb")) totalPages = input1.getNumPages() for pgNum in range(0,totalPages): page0 = input1.getPage(pgNum) # Let's look to see if the page contains a transformation matrix to rotate it 180 degrees # (the ScanSnap iX500 encoded the PDF with a cm transformation matrix to rotate 180 degrees in PDF viewers # @see https://stackoverflow.com/questions/54483013/how-to-extract-rotation-transformation-information-for-pdf-extracted-images-i-e # @see 'PDF 1.3 Reference Manual March 11, 1999' Section 3.10 Transformation matrices which is applied to the scanned image # [[a b 0] # [c d 0] # [e f 1]] isPageRotated180 = False pgContent = page0['/Contents'].getData().decode('utf-8') FLOAT_REG = '([-+]?\d*\.\d+|\d+)' m = re.search( '{} {} {} {} {} {} cm'.format(FLOAT_REG,FLOAT_REG,FLOAT_REG,FLOAT_REG,FLOAT_REG,FLOAT_REG), pgContent ) if m: (a,b,c,d,e,f) = list(map(float,m.groups())) isPageRotated180 = (a == -e and d == -f) ```
3,919
54,044,022
I have an awkward CSV file which has multiple delimiters: the delimiter for the non-numeric part is `','`, for the numeric part `';'`. I want to construct a dataframe only out of the numeric part as efficiently as possible. I have made 5 attempts: among them, utilising the `converters` argument of `pd.read_csv`, using regex with `engine='python'`, using `str.replace`. They are all more than 2x slower than reading the entire CSV file with no conversions. This is prohibitively slow for my use case. I understand the comparison isn't like-for-like, but it does demonstrate the overall poor performance is *not* driven by I/O. Is there a more efficient way to read in the data into a numeric Pandas dataframe? Or the equivalent NumPy array? The below string can be used for benchmarking purposes. ``` # Python 3.7.0, Pandas 0.23.4 from io import StringIO import pandas as pd import csv # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def csv_reader_1(x): df = pd.read_csv(x, usecols=[3], header=None, delimiter=',', converters={3: lambda x: x.split(';')}) return df.join(pd.DataFrame(df.pop(3).values.tolist(), dtype=float)) def csv_reader_2(x): df = pd.read_csv(x, header=None, delimiter=';', converters={0: lambda x: x.rsplit(',')[-1]}, dtype=float) return df.astype(float) def csv_reader_3(x): return pd.read_csv(x, usecols=[3, 4, 5], header=None, sep=',|;', engine='python') def csv_reader_4(x): with x as fin: reader = csv.reader(fin, delimiter=',') L = [i[-1].split(';') for i in reader] return pd.DataFrame(L, dtype=float) def csv_reader_5(x): with x as fin: return pd.read_csv(StringIO(fin.getvalue().replace(';',',')), sep=',', header=None, usecols=[3, 4, 5]) ``` Checks: ``` res1 = csv_reader_1(StringIO(x)) res2 = csv_reader_2(StringIO(x)) res3 = csv_reader_3(StringIO(x)) res4 = csv_reader_4(StringIO(x)) res5 = csv_reader_5(StringIO(x)) print(res1.head(3)) # 0 1 2 # 0 34.23 562.45 213.5432 # 1 56.23 63.45 625.2340 # 2 34.23 562.45 213.5432 assert all(np.array_equal(res1.values, i.values) for i in (res2, res3, res4, res5)) ``` Benchmarking results: ``` %timeit csv_reader_1(StringIO(x)) # 5.31 s per loop %timeit csv_reader_2(StringIO(x)) # 6.69 s per loop %timeit csv_reader_3(StringIO(x)) # 18.6 s per loop %timeit csv_reader_4(StringIO(x)) # 5.68 s per loop %timeit csv_reader_5(StringIO(x)) # 7.01 s per loop %timeit pd.read_csv(StringIO(x)) # 1.65 s per loop ``` Update ------ I'm open to using command-line tools as a last resort. To that extent, I have included such an answer. My hope is there is a pure-Python or Pandas solution with comparable efficiency.
2019/01/04
[ "https://Stackoverflow.com/questions/54044022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9209546/" ]
### Use a command-line tool By far the most efficient solution I've found is to use a specialist command-line tool to replace `";"` with `","` and *then* read into Pandas. Pandas or pure Python solutions do not come close in terms of efficiency. Essentially, using CPython or a tool written in C / C++ is likely to outperform Python-level manipulations. For example, using [Find And Replace Text](http://fart-it.sourceforge.net/): ``` import os os.chdir(r'C:\temp') # change directory location os.system('fart.exe -c file.csv ";" ","') # run FART with character to replace df = pd.read_csv('file.csv', usecols=[3, 4, 5], header=None) # read file into Pandas ```
If this is an option, substituting the character `;` with `,` in the string is faster. I have written the string `x` to a file `test.dat`. ``` def csv_reader_4(x): with open(x, 'r') as f: a = f.read() return pd.read_csv(StringIO(unicode(a.replace(';', ','))), usecols=[3, 4, 5]) ``` The `unicode()` function was necessary to avoid a TypeError in Python 2. Benchmarking: ``` %timeit csv_reader_2('test.dat') # 1.6 s per loop %timeit csv_reader_4('test.dat') # 1.2 s per loop ```
3,920
21,616,994
I apologize if this question has been answered elsewhere. I haven't been able to find an answer yet through the search here or in the Pandas documentation (quite possibly I've just missed it though). I'm trying to import a html file into python through pandas and am unsure how to obtain the data I need from the result. I'm working on Windows 7 and using Python 3.3 along with Pandas. Using the read_html function in pandas appears to work and returns a list of dataframes. I'm new to Python (migrating from Matlab) and am unsure how to use a list of dataframes. The documentation describes how to use and manipulate dataframes, but how do I get a dataframe from a list of them? Some of the other answers on this site suggest using the lxml functions directly to parse html files, however it seems the read_html is working fine in my case. Here is the code I entered: ``` import pandas as pd file = 'F:\\Documents\\Python\\EA Performance Manager\\History.html' History = pd.read_html(file, header=0, infer_types=False) ``` Which gives: ``` >>> History [<class 'pandas.core.frame.DataFrame'> Int64Index: 428 entries, 1 to 428 Data columns (total 13 columns): Ticket 428 non-null values Strategy 428 non-null values Symbol 428 non-null values B/S 428 non-null values Amount (k) 428 non-null values Open Time 428 non-null values Open Price 428 non-null values Close Time 428 non-null values Close Price 428 non-null values High/Low 428 non-null values Rollover 428 non-null values Gross P/L 428 non-null values Pips 428 non-null values dtypes: object(13)] ``` I need to access the individual data columns for analysis (preferably storing them in array-like structures - still learning to use python properly, will have to convert the data somehow as infer_type is false, but I think that is another issue). The question is how do I do this? Note: The History.html file was downloaded from a web-based trading platform as History.xls, only after trying to use the excel reading functions to no avail did I find out it was actually a html file. The content of the file is the history of trade opens and closes for an automated trading system. The first row gives the heading for each column.
2014/02/07
[ "https://Stackoverflow.com/questions/21616994", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3264279/" ]
`History[0]` will give you the first element. FYI, generally uppercase names are used for classes; variable names are `like_this`. These are just conventions; `History` is a legal identifier.
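For example — assuming, as in the question, the list holds a single table — a sketch of pulling columns out looks like this (continuing from the `History` list in the question):

```py
df = History[0]             # the first (and here only) DataFrame in the list

tickets = df['Ticket']      # a single column, as a pandas Series
pips = df['Pips'].tolist()  # or converted to a plain Python list
```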
For each dataframe column you wish to convert to a list, you can transpose the values, and then convert it to a list as follows. Here is an arbitrary DataFrame with one column (imports added so the snippet runs; if there is more than one column, then slice into columns, and do this for each column): ``` import random from pandas import DataFrame s = DataFrame({'column 1': random.sample(range(10), 10)}) ``` Then obtain the values using `.values`, transpose using `.T`, and convert to a list using `.tolist()`: ``` s.values.T.tolist() ``` However, that might give you all of the values as Python longs (with an L at the end of each). If that's the case, then you can use a simple datatype conversion to obtain an integer or floating point, or whatever is desirable. I hope that helps! Let me know if not.
3,929
56,867,659
While debugging, `cmd is not recognized` is displayed and the program is not debugged. What could the problem be? I have already checked the `path` and `pythonpath` variables and those seem to be just fine. ``` bash C:\Users\rahul\Desktop\vscode\.vscode>cd c:\Users\rahul\Desktop\vscode\.vscode && cmd /C "set "PYTHONIOENCODING=UTF-8" && set "PYTHONUNBUFFERED=1" && C:\Users\rahul\AppData\Local\Programs\Python\Python37-32\python.exe c:\Users\rahul\.vscode\extensions\ms-python.python-2019.6.22090\pythonFiles\ptvsd_launcher.py --default --client --host localhost --port 50265 c:\Users\rahul\Desktop\vscode\.vscode\s.py " 'cmd' is not recognized as an internal or external command, operable program or batch file. ```
2019/07/03
[ "https://Stackoverflow.com/questions/56867659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8306141/" ]
> > TL;DR: `cmd` is not in your Windows Environment Path. > [![enter image description here](https://i.stack.imgur.com/9hZxD.png)](https://i.stack.imgur.com/9hZxD.png) > add `%SystemRoot%\system32` to your *System Variables* and restart VSCode. > > > --- Visual Studio Code has actually brought native support for selecting your terminal, so including cmd in your path is no longer necessary. * Press `CTRL + SHIFT + P` -> `Terminal: Select default shell` -> select your terminal. It will add this line to your settings.json: `"terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe"`. Or if you chose Powershell, it will look like this: `"terminal.integrated.shell.windows": "C:\\windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"` To view your settings.json file, simply: * `Ctrl + ,` scroll down to `Files: Associations` and click `Edit in settings.json`.
It means that `cmd` is not in your path. Either: * Add the path to the system or user variables in the control panel * Use the full path to `cmd` instead (typically `C:\Windows\System32\cmd.exe`), meaning something like: `cd c:\Users\rahul\Desktop\vscode\.vscode && C:\Windows\System32\cmd.exe /C "set "PYTHONIOENCODING=UTF-8" && set "PYTHONUNBUFFERED=1" && C:\Users\rahul\AppData\Local\Programs\Python\Python37-32\python.exe c:\Users\rahul\.vscode\extensions\ms-python.python-2019.6.22090\pythonFiles\ptvsd_launcher.py --default --client --host localhost --port 50265 c:\Users\rahul\Desktop\vscode\.vscode\s.py "`
3,930
14,506,717
I need to print some information directly (without user confirmation) and I'm using Python and the `win32print` module. I've already read the whole [Tim Golden win32print page](http://timgolden.me.uk/python/win32_how_do_i/print.html) (even read the [win32print doc](http://timgolden.me.uk/pywin32-docs/win32print.html), which is small) and I'm using the same example he wrote there himself, but I just print nothing. If I go to the interactive shell and take one step at a time, I get the document on the printer queue (after the `StartDocPrinter`), then I get the document size (after the `StartPagePrinter, WritePrinter, EndPagePrinter` block) and then the document disappears from the queue (after the `EndDocPrinter`) without printing. I'm aware of the `ShellExecute` method Tim Golden showed. It works here, but it needs to create a temp file and it prints this filename, two things I don't want. Any ideas? Thanks in advance. This is the code I'm testing (copy and paste of Tim Golden's): ``` import os, sys import win32print import time printer_name = win32print.GetDefaultPrinter() if sys.version_info >= (3,): raw_data = bytes ("This is a test", "utf-8") else: raw_data = "This is a test" hPrinter = win32print.OpenPrinter (printer_name) try: hJob = win32print.StartDocPrinter (hPrinter, 1, ("test of raw data", None, "RAW")) try: win32print.StartPagePrinter (hPrinter) win32print.WritePrinter (hPrinter, raw_data) win32print.EndPagePrinter (hPrinter) finally: win32print.EndDocPrinter (hPrinter) finally: win32print.ClosePrinter (hPrinter) ``` [EDIT] I installed a pdf printer on my computer to test with another printer (CutePDF Writer) and I could generate the `test of raw data.pdf` file, but when I look inside there is nothing. Meaning: all commands except `WritePrinter` appear to be doing what they were supposed to do. But again, as I said in the comments, `WritePrinter` returns the correct amount of bytes that were supposed to be written to the printer. I have no other idea how to solve this, but this at least proved there is nothing wrong with my printer.
2013/01/24
[ "https://Stackoverflow.com/questions/14506717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1814970/" ]
I'm still looking for the best way to do this, but I found an answer that satisfies me for the problem that I have. On Tim Golden's site (linked in the question) you can find this example: ``` import win32ui import win32print import win32con INCH = 1440 hDC = win32ui.CreateDC () hDC.CreatePrinterDC (win32print.GetDefaultPrinter ()) hDC.StartDoc ("Test doc") hDC.StartPage () hDC.SetMapMode (win32con.MM_TWIPS) hDC.DrawText ("TEST", (0, INCH * -1, INCH * 8, INCH * -2), win32con.DT_CENTER) hDC.EndPage () hDC.EndDoc () ``` I adapted it a little bit after reading a lot of the documentation. I'll be using the `win32ui` library and [`TextOut`](http://timgolden.me.uk/pywin32-docs/PyCDC__TextOut_meth.html) (a device context method). ``` import win32ui # X from the left margin, Y from top margin # both in pixels X=50; Y=50 multi_line_string = input_string.splitlines() # splitlines(), not split(), so we keep whole lines hDC = win32ui.CreateDC () hDC.CreatePrinterDC (your_printer_name) hDC.StartDoc (the_name_will_appear_on_printer_spool) hDC.StartPage () for line in multi_line_string: hDC.TextOut(X,Y,line) Y += 100 hDC.EndPage () hDC.EndDoc () ``` I searched on meta stackoverflow before answering my own question and [here](https://meta.stackexchange.com/questions/9933/is-there-a-convention-for-accepting-my-own-answer-to-my-own-question) I found it is an encouraged behavior, therefore I'm doing it. I'll wait a little more to see if I get any other answer.
``` # You must install pywin32 and import these modules: import win32print, win32ui, win32con # X from the left margin, Y from top margin # both in pixels X=50; Y=50 # Separate the lines of your string # (for example: input_string) and build a # list of lines, for example: multi_line_string multi_line_string = input_string.splitlines() hDC = win32ui.CreateDC () # Set default printer from Windows: hDC.CreatePrinterDC (win32print.GetDefaultPrinter ()) hDC.StartDoc (the_name_will_appear_on_printer_spool) hDC.StartPage () for line in multi_line_string: hDC.TextOut(X,Y,line) Y += 100 hDC.EndPage () hDC.EndDoc () # I like Python ```
3,932
56,612,386
I am trying to use the pre-made estimator `tf.estimator.DNNClassifier` on the MNIST dataset. I load the dataset from `tensorflow_dataset`. I pursue the following four steps: first, building the dataset pipeline and defining the input function: ```py ## Step 1 mnist, info = tfds.load('mnist', with_info=True) ds_train_orig, ds_test = mnist['train'], mnist['test'] def train_input_fn(dataset, batch_size): dataset = dataset.map(lambda x:({'image-pixels':tf.reshape(x['image'], (-1,))}, x['label'])) return dataset.shuffle(1000).repeat().batch(batch_size) ``` Then, in step 2, I define the feature column with a single key, and shape 784: ```py ## Step 2: image_feature_column = tf.feature_column.numeric_column(key='image-pixels', shape=(28*28)) image_feature_column NumericColumn(key='image-pixels', shape=(784,), default_value=None, dtype=tf.float32, normalizer_fn=None) ``` In step 3, I instantiate the estimator as follows: ```py ## Step 3: dnn_classifier = tf.estimator.DNNClassifier( feature_columns=image_feature_column, hidden_units=[16, 16], n_classes=10) ``` And finally, in step 4, I use the estimator by calling the `.train()` method: ```py ## Step 4: dnn_classifier.train( input_fn=lambda:train_input_fn(ds_train_orig, batch_size=32), #lambda:iris_data.train_input_fn(train_x, train_y, args.batch_size), steps=20) ``` But this results in the following error. It looks like the problem has arisen from the dataset. ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-21-95736cd65e45> in <module> 2 dnn_classifier.train( 3 input_fn=lambda: train_input_fn(ds_train_orig, batch_size=32), ----> 4 steps=20) ~/anaconda3/envs/tf2.0-beta/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx, accept_symbolic_tensors, accept_composite_tensors) 1183 graph = get_default_graph() 1184 if not graph.building_function: -> 1185 raise RuntimeError("Attempting to capture an EagerTensor without " 1186 "building a function.") 1187 return graph.capture(value, name=name) RuntimeError: Attempting to capture an EagerTensor without building a function. ```
2019/06/15
[ "https://Stackoverflow.com/questions/56612386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2191236/" ]
I think the graph construction gets weird if you load a tensorflow\_datasets dataset outside the `input_fn`. I followed the TF2.0 migration guide example and this does not give errors. Please note that I have not tested for model correctness and you will have to modify `input_fn` logic a bit to get the function for eval. ``` # Define the estimator's input_fn def input_fn(): datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True) mnist_train, mnist_test = datasets['train'], datasets['test'] dataset = mnist_train dataset = mnist_train.map(lambda x, y:({'image-pixels':tf.reshape(x, (-1,))}, y)) return dataset.shuffle(1000).repeat().batch(32) image_feature_column = tf.feature_column.numeric_column(key='image-pixels', shape=(28*28)) dnn_classifier = tf.estimator.DNNClassifier( feature_columns=[image_feature_column], hidden_units=[16, 16], n_classes=10) dnn_classifier.train( input_fn=input_fn, steps=200) ``` I get a bunch of deprecation warnings at this point, but seems like the estimator is trained.
Answer by @dgumo is correct. I just wanted to add a basic example. All tensors returned by the input function must be created within the input function. ```py #Raw data can be outside data_x = [0.0, 1.0, 2.0, 3.0, 4.0] data_y = [3.0, 4.9, 7.3, 8.65, 10.75] def supply_input(): #Tensors must be created inside the function train_x = tf.constant(data_x) train_y = tf.constant(data_y) feature = { 'x': train_x } return feature, train_y ```
3,935
17,363,611
My code works perfectly, but I want it to write the values to a text file. When I try to do it, I get 'invalid syntax'. When I use a python shell, it works. So I don't understand why it isn't working in my script. I bet it's something silly, but why won't it output the data to a text file? ``` #!/usr/bin/env python #standard module, needed as we deal with command line args import sys from fractions import Fraction import pyexiv2 #checking whether we got enough args, if not, tell how to use, and exits #if len(sys.argv) != 2 : # print "incorrect argument, usage: " + sys.argv[0] + ' <filename>' # sys.exit(1) #so the argument seems to be ok, we use it as an imagefile imagefilename = sys.argv[1] #trying to catch the exceptions in case of problem with the file reading try: metadata = pyexiv2.metadata.ImageMetadata(imagefilename) metadata.read(); #trying to catch the exceptions in case of problem with the GPS data reading try: latitude = metadata.__getitem__("Exif.GPSInfo.GPSLatitude") latitudeRef = metadata.__getitem__("Exif.GPSInfo.GPSLatitudeRef") longitude = metadata.__getitem__("Exif.GPSInfo.GPSLongitude") longitudeRef = metadata.__getitem__("Exif.GPSInfo.GPSLongitudeRef") # get the value of the tag, and make it float number alt = float(metadata.__getitem__("Exif.GPSInfo.GPSAltitude").value) # get human readable values latitude = str(latitude).split("=")[1][1:-1].split(" "); latitude = map(lambda f: str(float(Fraction(f))), latitude) latitude = latitude[0] + u"\u00b0" + latitude[1] + "'" + latitude[2] + '"' + " " + str(latitudeRef).split("=")[1][1:-1] longitude = str(longitude).split("=")[1][1:-1].split(" "); longitude = map(lambda f: str(float(Fraction(f))), longitude) longitude = longitude[0] + u"\u00b0" + longitude[1] + "'" + longitude[2] + '"' + " " + str(longitudeRef).split("=")[1][1:-1] ## Printing out, might need to be modified if other format needed ## i just simple put tabs here to make nice columns print " \n A text file has been created with the following information \n" print "GPS EXIF data for " + imagefilename print "Latitude:\t" + latitude print "Longitude:\t" + longitude print "Altitude:\t" + str(alt) + " m" except Exception, e: # complain if the GPS reading went wrong, and print the exception print "Missing GPS info for " + imagefilename print e # Create a new file or **overwrite an existing file** text_file = open('textfile.txt', 'w') text_file.write("Latitude" + latitude) # Close the output file text_file.close() except Exception, e: # complain if the GPS reading went wrong, and print the exception print "Error processing image " + imagefilename print e; ``` The error I see says: ``` text_file = open('textfile.txt','w') ^ SyntaxError: invalid syntax ```
2013/06/28
[ "https://Stackoverflow.com/questions/17363611", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2519572/" ]
`lis` is an empty list, *any* index will raise an exception. If you wanted to add elements to that list, use `lis.append()` instead. Note that you can loop over sequences *directly*, there is no need to keep your own counter: ``` def front_x(words): lis = [] words.sort() for word in words: if word.startswith("x"): lis.append(word) for entry in lis: print(entry) ``` You can reduce this further by immediately printing all words that start with `x`, no need to build a separate list: ``` def front_x(words): for word in sorted(words): if word.startswith("x"): print(word) ``` If you wanted to sort the list with all `x` words coming first, use a custom sort key: ``` def front_x(words): return sorted(words, key=lambda w: (not w.startswith('x'), w)) ``` sorts the words first by the boolean flag for `.startswith('x')`; `False` is sorted before `True` so we negate that test, then the words themselves. Demo: ``` >>> words = ['foo', 'bar', 'xbaz', 'eggs', 'xspam', 'xham'] >>> sorted(words, key=lambda w: (not w.startswith('x'), w)) ['xbaz', 'xham', 'xspam', 'bar', 'eggs', 'foo'] ```
> > i need to sort the list but the words starting with x should be the first ones. > > > Complementary to the custom sort key in @Martijn's extended answer, you could also try this, which is closer to your original approach and might be easier to understand: ``` def front_x(words): has_x, hasnt = [], [] for word in sorted(words): if word.startswith('x'): has_x.append(word) else: hasnt.append(word) return has_x + hasnt ``` Concerning what was wrong with your original code, there are actually *three* problems with the line ``` lis[j]=words.pop()[i] ``` 1. `lis[j]` only works if the list already has a `j`th element, but as you are adding items to an initially empty list, you should use `lis.append(...)` instead. 2. You want to remove the word starting with "x" at index `i` from the list, but `pop()` will always remove the *last* item. `pop()` is for stacks; never remove items from a list while looping it with an index! 3. You apply the `[i]` operator *after* you've popped the item from the list, i.e., you are accessing the `i`th *letter of the word*, which may be much shorter; thus the `IndexError`
3,936
9,570,637
Working on getting Celery setup (following the basic tutorial) with a mongodb broker as backend. Following the configuration guidelines set out in the official docs, my `celeryconfig.py` is setup as follows: ``` CELERY_RESULT_BACKEND = "mongodb" BROKER_BACKEND = "mongodb" BROKER_URL = "mongodb://user:pass@subdomain.mongolab.com:123456/testdb" CELERY_MONGODB_BACKEND_SETTINGS = { "host":"subdomain.mongolab.com", "port":123456, "database":"testdb", "taskmeta_collection":"taskmeta", "user":"user", "pass":"pass", } CELERY_IMPORTS = ("tasks",) ``` Running the celeryd with `--loglevel=INFO` returns the following exception, originating in pymongo but bubbling through both kombu and celery. ``` Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/celery/worker/__init__.py", line 230, in start component.start() File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 338, in start self.reset_connection() File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 596, in reset_connection on_decode_error=self.on_decode_error) File "/usr/local/lib/python2.7/dist-packages/celery/app/amqp.py", line 335, in get_task_consumer **kwargs) File "/usr/local/lib/python2.7/dist-packages/kombu/compat.py", line 187, in __init__ super(ConsumerSet, self).__init__(self.backend, queues, **kwargs) File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 285, in __init__ self.declare() File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 295, in declare queue.declare() File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 388, in declare self.queue_declare(nowait, passive=False) File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 408, in queue_declare nowait=nowait) File "/usr/local/lib/python2.7/dist-packages/kombu/transport/virtual/__init__.py", line 380, in queue_declare return queue, self._size(queue), 0 File "/usr/local/lib/python2.7/dist-packages/kombu/transport/mongodb.py", line 74, in _size return self.client.messages.find({"queue": queue}).count() File "/usr/local/lib/python2.7/dist-packages/kombu/transport/mongodb.py", line 171, in client self._client = self._open() File "/usr/local/lib/python2.7/dist-packages/kombu/transport/mongodb.py", line 97, in _open mongoconn = Connection(host=conninfo.hostname, port=conninfo.port) File "/usr/local/lib/python2.7/dist-packages/pymongo/connection.py", line 325, in __init__ nodes.update(uri_parser.split_hosts(entity, port)) File "/usr/local/lib/python2.7/dist-packages/pymongo/uri_parser.py", line 198, in split_hosts nodes.append(parse_host(entity, default_port)) File "/usr/local/lib/python2.7/dist-packages/pymongo/uri_parser.py", line 127, in parse_host raise ConfigurationError("Reserved characters such as ':' must be " ConfigurationError: Reserved characters such as ':' must be escaped according RFC 2396. An IPv6 address literal must be enclosed in '[' and ']' according to RFC 2732. ``` Something about the way Celery is handling the mongouri is not encoding correctly, since it is the uri parser within `pymongo` that is throwing this error. I have tried escaping the `:` characters in the uri string, but all this achieves is resetting the transport back to the default AMQP with a mangled connection string. ``` amqp://guest@localhost:5672/mongodb\http://user\:password@subdomain.mongolab.com\:29217/testdb ``` Which clearly isn't right. I've tried entering the uri in the config as a raw string using `r` and nothing changes. 
I know this kind of connection configuration has been supported in Celery since 2.4 (I'm using 2.5.1, pymongo 2.1.1) and the official docs all cite it as the preferred method to connect to a mongodb broker. Could this be a bug, perhaps an incompatibility with the latest pymongo build? If this approach doesn't work, how would one attach the task queue to a replica set, since I assume these have to be passed in the mongouri using the `?replicaSet` parameter. I should note that I'd rather not switch to using a RabbitMQ broker, since Mongo is already in the stack for the app in question and it just seems more intuitive to use what's already there. If there is a concrete reason why Mongo would be less effective for this purpose (the amount of tasks per day would be relatively small) I'd love to know! Thanks in advance.
2012/03/05
[ "https://Stackoverflow.com/questions/9570637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/215608/" ]
I think it's a bug. Celery passes the hostname instead of the server_uri to kombu, which causes this problem. After tracing the code, I found the following conf to bypass the bug until they fix it. ``` CELERY_RESULT_BACKEND = 'mongodb' BROKER_HOST = "subdomain.mongolab.com" BROKER_PORT = 123456 BROKER_TRANSPORT = 'mongodb' BROKER_VHOST = 'testdb' CELERY_IMPORTS = ('tasks',) CELERY_MONGODB_BACKEND_SETTINGS = { 'host': 'subdomain.mongolab.com', 'port': 123456, 'database': 'testdb', 'user': user, 'password': password, 'taskmeta_collection': 'teskmeta' } ```
Would it help if you remove "user", "pass", "port", and "database" from the CELERY\_MONGODB\_BACKEND\_SETTINGS dict, and do: ``` BROKER_URL = "mongodb://user:pass@subdomain.mongolab.com:123456/testdb" CELERY_MONGODB_BACKEND_SETTINGS = { "host":BROKER_URL, "taskmeta_collection":"taskmeta", } ```
3,937
23,320,954
How can I replace '1c' with '\x1c' in Python? I have a list with elements like '12', '13' etc and want to replace them with '\x12', '\x13' etc. Here is what I tried, which failed: ``` letters=[] for i in range(10,128,1): a=(str(hex(i))).replace('0x','\x') letters.append(a) print letters ``` **What I need is '31' to be replaced by '\x31' ---> '1', not '\x31' or \x31**
2014/04/27
[ "https://Stackoverflow.com/questions/23320954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3559830/" ]
You need to use the built-in function [`chr`](https://docs.python.org/2/library/functions.html#chr) to get the corresponding ASCII character (which is the string you are after): ``` >>> [chr(i) for i in range(10,20,1)] ['\n', '\x0b', '\x0c', '\r', '\x0e', '\x0f', '\x10', '\x11', '\x12', '\x13'] ```
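Applied to the loop in the question, the whole `letters` list collapses to a one-liner:

```py
letters = [chr(i) for i in range(10, 128)]  # same range as the question's loop
```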
Your code is fine, you just need to escape the `\` with a `\`. ``` letters=[] for i in range(10,128,1): a=(str(hex(i))).replace('0x','\\x') # you have to escape the \ letters.append(a) print letters ``` [DEMO](http://repl.it/Rvl/1)
3,938
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
**Minimal example** ``` from setuptools import setup, find_packages setup( name="foo", version="1.0", packages=find_packages(), ) ``` More info in [docs](https://packaging.python.org/tutorials/packaging-projects/)
Here you will find the simplest possible example of using distutils and setup.py: <https://docs.python.org/2/distutils/introduction.html#distutils-simple-example> This assumes that all your code is in a single file and shows how to package a project containing a single module.
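That example amounts to roughly the following (`foo` is a placeholder for your own single-file module `foo.py`):

```py
from distutils.core import setup

setup(
    name='foo',
    version='1.0',
    py_modules=['foo'],  # packages the single module foo.py
)
```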
3,940
46,229,543
I have written a fraction adder in Python for my computer science class. However, I am running into problems with the final answer reduction procedure. The procedure uses the "not equal" comparison operator **!=** at the start of a **for** loop to test whether, when dividing the numerator and denominator, there will be a remainder. If there will be a remainder (numerator % denominator ≠ 0), the procedure executes: each gets divided by **n**, then **n** increments and the **for** loop runs again. This continues until they divide evenly into each other. Firstly, I am receiving a syntax error: ``` python FractionAdder.py 2 4 6 8 File "FractionAdder.py", line 23 for ansnum % n != 0 and ansdenom % n != 0: ^ SyntaxError: invalid syntax ``` Secondly, the **for** loop is not fully robust. My intended purpose was to have it reduce the final answer to its simplest form, but right now, it is only continuing to increment **n** and reduce until the numerator and denominator divide into each other evenly. This is a problem: 3 divides evenly into 6, but 3/6 is not in its simplest form. May I have some suggestions as to how to improve the robustness of my procedure, such that **n** continues to increment and the loop keeps cycling until the simplest form has been achieved? (Is there a better way to structure my conditional to achieve this?) Full Code: ``` import sys num1 = int(sys.argv[1]) denom1 = int(sys.argv[2]) num2 = int(sys.argv[3]) denom2 = int(sys.argv[4]) n = 1 # Find common denominators and adjust both fractions accordingly. while denom1 != denom2: denom1 = denom1 * denom2 num1 = num1 * denom2 denom2 = denom2 * denom1 num2 = num2 * denom2 # Add the numerators and set the ansdenom (denom1 and denom2 should be equal by this point if LCD function worked) ansnum = num1 + num2 ansdenom = denom1 # Reduce the answer. n = 2 for ansnum % n != 0 and ansdenom % n != 0: ansnum = ansnum / n ansdenom = ansdenom / n n += 1 print("The sum of the two fractions is:" + str(ansnum) + "//" + str(ansdenom)) ``` Thanks in advance!
2017/09/14
[ "https://Stackoverflow.com/questions/46229543", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8580749/" ]
The error you see comes from using `for` where `while` is the right type of loop (`for` is for iteration, `while` is for a condition). Nevertheless, your logic for deciding the common denominator is flawed, and it leads to an infinite loop. Please read about [least common multiple](https://en.wikipedia.org/wiki/Least_common_multiple), and consider the following pseudocode for determining the "new" numerators:

```
lcm = lcm(den1, den2)
num1 *= lcm / den1
num2 *= lcm / den2
```
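For reference, a minimal runnable sketch of that pseudocode in Python (the `add_fractions` helper name is just for illustration; `math.gcd` requires Python 3.5+):

```
from math import gcd

def add_fractions(n1, d1, n2, d2):
    lcm = d1 * d2 // gcd(d1, d2)               # least common multiple of the denominators
    num = n1 * (lcm // d1) + n2 * (lcm // d2)  # rescale both numerators, then add
    common = gcd(num, lcm)                     # reduce the sum to simplest form
    return num // common, lcm // common

print(add_fractions(2, 4, 6, 8))               # -> (5, 4), i.e. 1/2 + 3/4 = 5/4
```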
You are trying to write a greatest-common-denominator finder, and your terminating condition is wrong. [Euclid's Algorithm](https://en.wikipedia.org/wiki/Euclidean_algorithm) repeatedly takes the modulo difference of the two numbers until the result is 0; then the next-to-last result is the GCD. The standard python implementation looks like

```
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a
```

There is an implementation already in the standard library, `math.gcd`.

```
from math import gcd
import sys

def add_fractions(n1, d1, n2, d2):
    """
    Return the result of n1/d1 + n2/d2
    """
    num = n1 * d2 + n2 * d1
    denom = d1 * d2
    div = gcd(num, denom)
    return num // div, denom // div

if __name__ == "__main__":
    if len(sys.argv) != 5:
        print("Usage: {} num1 denom1 num2 denom2".format(sys.argv[0]))
    else:
        n1, d1, n2, d2 = [int(i) for i in sys.argv[1:]]
        num, denom = add_fractions(n1, d1, n2, d2)
        print("{}/{} + {}/{} = {}/{}".format(n1, d1, n2, d2, num, denom))
```
3,950
16,514,570
I can get matplotlib to work in pylab (ipython --pylab), but when I execute the same command in a python script a plot does not appear. My workspace focus changes from a fullscreened terminal to a Desktop when I run my script, which suggests that it is trying to plot something but failing. The following code works in `ipython --pylab` but not in my script. ``` import matplotlib.pyplot as plt plt.plot(arange(10)) ``` I am on Mac OS X Mountain Lion. **What is causing this to fail when I run a script but not in the interactive prompt?**
2013/05/13
[ "https://Stackoverflow.com/questions/16514570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3749393/" ]
I believe you need `plt.show()`.
You need to add `plt.show()` after `plt.plot(...)`. `plt.plot()` just makes the plot, `plt.show()` takes the plot you made and displays it on the screen.
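As a minimal sketch, the corrected script from the question would look like this (adding the `numpy` import that `arange` needs when running outside `--pylab`):

```
import numpy as np
import matplotlib.pyplot as plt

plt.plot(np.arange(10))  # build the figure
plt.show()               # open the window and block until it is closed
```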
3,951
44,057,032
my python program isn't working properly and it's something with the submit button and it gives me an error saying: ``` TypeError: 'str' object is not callable ``` help please. Here is the part of the code that doesn't work: ``` def submit(): g_name = ent0.get() g_surname = ent1.get() g_dob = ent2.get() g_tutorg = ent3.get() #Gets all the entry boxes g_email = ent4.get() cursor = db.cursor() sql = '''INSERT into Students, (g_name, g_surname, g_dob, g_tutorg, g_email) VALUES (?,?,?,?,?)''' cursor.execute(sql (g_name, g_surname, g_dob, g_tutorg, g_email)) #Puts it all on to SQL db.commit() mlabe2=Label(mGui,text="Form submitted, press exit to exit").place(x=90,y=0) ``` I'm not sure what else you need so here's the rest of the SQL part that creates the table ``` cursor = db.cursor() cursor.execute(""" CREATE TABLE IF NOT EXISTS Students( StudentID integer, Name text, Surname text, DOB blob, Tutor_Grop blob, Email blob, Primary Key(StudentID)); """) #Will create if it doesn't exist db.commit() ``` I've been trying so long and couldn't find a solution to this problem so if you can help that would be great thanks
2017/05/18
[ "https://Stackoverflow.com/questions/44057032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8033270/" ]
`childByAutoId()` is for the iOS SDK. For `admin.database()`, use [push()](https://firebase.google.com/docs/reference/admin/node/admin.database.Reference#push).

```
var reference = admin.database().ref(path).push();
```
It should work like this: ``` exports.addPersonalRecordHistory = functions.database.ref('/personalRecords/{userId}/current/{exerciseId}').onWrite(event => { var path = 'personalRecords/' + event.params.userId + '/history/' + event.params.exerciseId; return admin.database().ref(path).set({ username: "asd", email: "asd" }); }); ```
3,952
14,626,189
> > **Possible Duplicate:** > > [python looping seems to not follow sequence?](https://stackoverflow.com/questions/4123266/python-looping-seems-to-not-follow-sequence) > > [In what order does python display dictionary keys?](https://stackoverflow.com/questions/4458169/in-what-order-does-python-display-dictionary-keys) > > > ``` d = {'x': 9, 'y': 10, 'z': 20} for key in d: print d[key] ``` The above code give different outputs every time I run it. Not exactly different outputs, but output in different sequence. I executed the code multiple times using Aptana 3. **First Execution Gave: 10 9 20** **Second Execution Gave: 20 10 9** **I also executed the code in an online IDE <http://labs.codecademy.com>. There the output was always 10 9 20** I just wanted to know why is this. Ideally it should have printed 9 10 20 every time I execute the above code. Please Explain.
2013/01/31
[ "https://Stackoverflow.com/questions/14626189", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1990185/" ]
A dictionary is a mapping of keys to values; it does not have an order. You want a `collections.OrderedDict`:

```
import collections

d = collections.OrderedDict([('x', 9), ('y', 10), ('z', 20)])
for key in d:
    print d[key]
```

Note, however, that dictionary ordering *is* deterministic -- if you iterate over the same dictionary twice, you will get the same results.
A dictionary is a collection that is not ordered. So in theory the order of the elements may change on each operation you perform on it. If you want the keys to be printed in order, you will have to sort them before printing(i.e. collect the keys and then sort them).
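For example, a minimal sketch of that "collect and sort" idea (Python 2 print syntax, matching the question):

```
d = {'x': 9, 'y': 10, 'z': 20}
for key in sorted(d):   # sorted() collects the keys and sorts them
    print d[key]        # prints 9 10 20 on every run
```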
3,953
70,023,042
I was wondering if anyone can help. I'm trying to take a CSV from a GCP bucket, run it into a dataframe, and then output the file to another bucket in the project. However, using this method my DAG runs but I'm not getting any output into my designated bucket, and the DAG takes ages to run. Any insight on this issue?

```
import gcsfs
from airflow.operators import python_operator
from airflow import models
import pandas as pd
import logging
import csv
import datetime

fs = gcsfs.GCSFileSystem(project='project-goes-here')
with fs.open('gs://path/file.csv') as f:
    gas_data = pd.read_csv(f)

def make_csv():
    # Creates the CSV file with a datetime with no index, and adds the map, collection and collection address to the CSV
    # Calisto changed their mind on the position of where the conversion factor and multiplication factor should go
    gas_data['Asset collection'] = 'Distribution'
    gas_data['Asset collection address 1'] = 'Distribution'
    gas_data['Asset collection address 2'] = 'Units1+2 Central City'
    gas_data['Asset collection address 3'] = 'ind Est'
    gas_data['Asset collection city'] = 'Coventry'
    gas_data['Asset collection postcode'] = 'CV6 5RY'
    gas_data['Multiplication Factor'] = '1.000'
    gas_data['Conversion Factor'] = '1.022640'
    gas_data.to_csv('gs://path/'
                    'Clean_zenos_data_' + datetime.datetime.today().strftime('%m%d%Y%H%M%S''.csv'),
                    index=False, quotechar='"', sep=',',
                    quoting=csv.QUOTE_NONNUMERIC)
    logging.info('Added Map, Asset collection, Asset collection address and Saved CSV')

make_csv_function = python_operator.PythonOperator(
    task_id='make_csv',
    python_callable=make_csv
)
```
2021/11/18
[ "https://Stackoverflow.com/questions/70023042", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12176250/" ]
With broadcasting ``` res = np.where(arr0[...,None] == entries, arr1[...,None], 0).max(axis=(0, 1)) ``` The result of `np.where(...)` is a (3, 3, 4) array, where slicing `[...,0]` would give you the same 3x3 array you get by manually doing the `np.where` with just `entries[0]`, etc. Then taking the max of each 3x3 subarray leaves you with the desired result. Timings ------- Apparently this method doesn't scale well for bigger arrays. The other answer using `np.unique` is more efficient because it reduces the maximum operation down to a few unique value regardless of how big the original arrays are. ``` import timeit import matplotlib.pyplot as plt import numpy as np def loops(): return [np.where(arr0==index,arr1,0).max() for index in entries] def broadcast(): return np.where(arr0[...,None] == entries, arr1[...,None], 0).max(axis=(0, 1)) def numpy_1d(): arr0_1D = arr0.ravel() arr1_1D = arr1.ravel() arg_idx = np.argsort(arr0_1D) u, idx = np.unique(arr0_1D[arg_idx], return_index=True) return np.maximum.reduceat(arr1_1D[arg_idx], idx) sizes = (3, 10, 25, 50, 100, 250, 500, 1000) lengths = (4, 10, 25, 50, 100) methods = (loops, broadcast, numpy_1d) fig, ax = plt.subplots(len(lengths), sharex=True) for i, M in enumerate(lengths): entries = np.arange(M) times = [[] for _ in range(len(methods))] for N in sizes: arr0 = np.random.randint(1000, size=(N, N)) arr1 = np.random.randint(1000, size=(N, N)) for j, method in enumerate(methods): times[j].append(np.mean(timeit.repeat(method, number=1, repeat=10))) for t in times: ax[i].plot(sizes, t) ax[i].legend(['loops', 'broadcasting', 'numpy_1d']) ax[i].set_title(f'Entries size {M}') plt.xticks(sizes) fig.text(0.5, 0.04, 'Array size (NxN)', ha='center') fig.text(0.04, 0.5, 'Time (s)', va='center', rotation='vertical') plt.show() ``` [![enter image description here](https://i.stack.imgur.com/gNOmv.png)](https://i.stack.imgur.com/gNOmv.png)
It's more convenient to work in the 1D case. You need to sort your `arr0`, then find the starting indices of every group and use `np.maximum.reduceat`.

```
arr0_1D = np.array([[0,3,0],[1,3,2],[1,2,0]]).ravel()
arr1_1D = np.array([[4,5,6],[6,2,4],[3,7,9]]).ravel()

arg_idx = np.argsort(arr0_1D)
>>> arr0_1D[arg_idx]
array([0, 0, 0, 1, 1, 2, 2, 3, 3])

u, idx = np.unique(arr0_1D[arg_idx], return_index=True)
>>> idx
array([0, 3, 5, 7], dtype=int64)

>>> np.maximum.reduceat(arr1_1D[arg_idx], idx)
array([9, 6, 7, 5], dtype=int32)
```
3,954
35,346,971
I'm having some problems with inheritance. I need to import simplejson, or install it if it can't be found and then import it. I'm doing this in another class and sending it via inheritance where needed. The way I'm doing it here works in python 2.6+ but not in 2.4.

```
# This class will hold all things needed over in all classes
import subprocess

class Global(object):
    def __init__(self):
        # Making sure simple json is installed and accessible
        try:
            import simplejson as json
            self.json = json
        except ImportError:
            subprocess.Popen(['apt-get -y install python-simplejson'], shell=True, stdout=subprocess.PIPE).wait()
            import simplejson as json
            self.json = json
```

And I'm passing it to this class

```
class Init(Global):
    # Holds json object
    INFO_OBJECT = {
        'filesystem': {
            'root': {},
            'archive': {},
            'buffer': {}
        },
        'mysql': {
            'is_corrupt': False,
            'corrupt_files': {},
            'version': ''
        }
    }

    def __init__(self):
        super(Init, self).__init__()
        self.create_log_folder()
        self.create_object()
        self.gather_info()

    # if json object not found in file create a empty on and save it
    def create_object(self):
        try:
            f = open('/usr/local/careview/video/archive/rcpchecker/info/info.txt')
            info_object = self.json.load(f)
            f.close()
            self.INFO_OBJECT = info_object
        except self.json.JSONDecodeError:
            f = open('/usr/local/careview/video/archive/rcpchecker/info/info.txt', 'wb')
            self.json.dump(self.INFO_OBJECT, f, sort_keys=True, indent=4)
            f.close()
```

This is my Error

```
Traceback (most recent call last):
  File "Main.py", line 42, in ?
    start()
  File "Main.py", line 11, in start
    Init()
  File "/home/careview/ibarron/rcptester/Init/init.py", line 29, in __init__
    self.create_object()
  File "/home/careview/ibarron/rcptester/Init/init.py", line 43, in create_object
    except self.json.JSONDecodeError:
AttributeError: 'module' object has no attribute 'JSONDecodeError'
```
2016/02/11
[ "https://Stackoverflow.com/questions/35346971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4840281/" ]
I know this is not the question, but you should not be using subprocess.popen for this. Use pip. It's great. ``` try: import simplejson as json except ImportError: import pip try: import os isAdmin = os.getuid() == 0 except AttributeError: import ctypes isAdmin = ctypes.windll.shell32.IsUserAnAdmin() != 0 if isAdmin: c = pip.main(['install', 'simplejson']) else: c = pip.main(['install', '--user', 'simplejson']) if c: print("Could not install 'simplejson'.") exit(c) # => or desired error code... don't use 0 (thanks to Håken Lid for pointing this out) because it indicates success import simplejson as json self.json = json ``` As for your error, open python2.4 interpreter and just simply run: ``` >>> import simplejson as json >>> 'JSONDecodeError' in dir(json) True # => or false? ``` If it does not exist (perhaps the 2.4 version does not support it?), you can easily grab the source code from the 2.7 module: ``` >>> from inspect import getsourcelines as gsl >>> import simplejson as json >>> json.JSONDecodeError <class 'simplejson.scanner.JSONDecodeError'> >>> x, _ = gsl(json.scanner.JSONDecodeError) >>> print(''.join(x)) class JSONDecodeError(ValueError): """Subclass of ValueError with the following additional properties: msg: The unformatted error message doc: The JSON document being parsed pos: The start index of doc where parsing failed end: The end index of doc where parsing failed (may be None) lineno: The line corresponding to pos colno: The column corresponding to pos endlineno: The line corresponding to end (may be None) endcolno: The column corresponding to end (may be None) """ # Note that this exception is used from _speedups def __init__(self, msg, doc, pos, end=None): ValueError.__init__(self, errmsg(msg, doc, pos, end=end)) self.msg = msg self.doc = doc self.pos = pos self.end = end self.lineno, self.colno = linecol(doc, pos) if end is not None: self.endlineno, self.endcolno = linecol(doc, end) else: self.endlineno, self.endcolno = None, None def __reduce__(self): return self.__class__, (self.msg, self.doc, self.pos, self.end) ``` Now, in the python2.4 error, you can easily check if it has the attribute. If not, add it. ``` if not hasattr('json.scanner', 'JSONDecodeError'): class myJSONDecodeError(ValueError): """Subclass of ValueError with the following additional properties: msg: The unformatted error message doc: The JSON document being parsed pos: The start index of doc where parsing failed end: The end index of doc where parsing failed (may be None) lineno: The line corresponding to pos colno: The column corresponding to pos endlineno: The line corresponding to end (may be None) endcolno: The column corresponding to end (may be None) """ # Note that this exception is used from _speedups def __init__(self, msg, doc, pos, end=None): ValueError.__init__(self, errmsg(msg, doc, pos, end=end)) self.msg = msg self.doc = doc self.pos = pos self.end = end self.lineno, self.colno = linecol(doc, pos) if end is not None: self.endlineno, self.endcolno = linecol(doc, end) else: self.endlineno, self.endcolno = None, None def __reduce__(self): return self.__class__, (self.msg, self.doc, self.pos, self.end) self.json.JSONDecodeError = self.json.scanner.JSONDecodeError = myJSONDecodeError ```
`apt-get install` won't guarantee that you are installing `simplejson` for all versions of python. It will only work for the *system installed* version of Python which may or may not be 2.4. That's going to depend highly on what underlying version of Linux or Ubuntu or Debian you are using. If you want to be portable across multiple Python versions, you should be using Python's method of managing dependencies instead of trying to do it via `apt-get`.
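For instance, a minimal guard (just a sketch) that fails loudly and names the interpreter that is actually missing the package:

```
import sys

try:
    import simplejson as json
except ImportError:
    # install the package for *this* interpreter, not whatever apt-get targets
    raise SystemExit("simplejson is not installed for %s" % sys.executable)
```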
3,955
1,900,956
Let's say I have the following dictionary in a small application.

```
dict = {'one': 1, 'two': 2}
```

What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in python that lets me do it? Or do I have to convert it to a string first? Not a problem to convert it, but maybe there is an easier way. I do not need a way to convert it to a string, that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is:

```
write_to_file("dict = {'one': 1, 'two': 2}")
```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
The `repr` function will return a string which is the exact definition of your dict (except for the order of the elements; dicts are unordered in Python). Unfortunately, I can't think of a way to automatically get a string which represents the variable name.

```
>>> dict = {'one': 1, 'two': 2}
>>> repr(dict)
"{'two': 2, 'one': 1}"
```

Writing to a file is pretty standard stuff, like any other file write:

```
f = open( 'file.py', 'w' )
f.write( 'dict = ' + repr(dict) + '\n' )
f.close()
```
You could do: ``` import inspect mydict = {'one': 1, 'two': 2} source = inspect.getsourcelines(inspect.getmodule(inspect.stack()[0][0]))[0] print([x for x in source if x.startswith("mydict = ")]) ``` Also: make sure not to shadow the dict builtin!
3,958
62,933,026
I am new to python and I am trying to loop through the list of urls in a `csv` file and grab the website `title`using `BeautifulSoup`, which I would like then to save to a file `Headlines.csv`. But I am unable to grab the webpage `title`. If I use a variable with single url as follows: ``` url = 'https://www.space.com/japan-hayabusa2-asteroid-samples-landing-date.html' resp = req.get(url) soup = BeautifulSoup(resp.text, 'lxml') print(soup.title.text) ``` It works just fine and I get the title `Japanese capsule carrying pieces of asteroid Ryugu will land on Earth Dec. 6 | Space` But when I use the loop, ``` import csv with open('urls_file2.csv', newline='', encoding='utf-8') as f: reader = csv.reader(f) for url in reader: print(url) resp = req.get(url) soup = BeautifulSoup(resp.text, 'lxml') print(soup.title.text) ``` I get the following `['\ufeffhttps://www.foxnews.com/us/this-day-in-history-july-16']` and an error message `InvalidSchema: No connection adapters were found for "['\\ufeffhttps://www.foxnews.com/us/this-day-in-history-july-16']"` I am not sure what am I doing wrong.
2020/07/16
[ "https://Stackoverflow.com/questions/62933026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13122812/" ]
As the previous answer already mentioned, the "\ufeff" means you need to change the encoding. The second issue is that when you read a CSV file, you get a list containing all the columns for each row. The keyword here is list. You are passing the request a list instead of a string. Based on the example you have given, I would assume that your urls are in the first column of the csv. Python lists start at index 0, not 1. So to extract the url, you need to take index 0, which refers to the first column.

```
import csv

with open('urls_file2.csv', newline='', encoding='utf-8-sig') as f:
    reader = csv.reader(f)
    for url in reader:
        print(url[0])
```

To read up more on lists, you can refer [here](https://www.w3schools.com/python/python_lists.asp). You can add more columns to the CSV file and experiment to see how the results would appear. If you would like to refer to the column name while reading each row, you can refer [here](https://stackoverflow.com/questions/41567508/read-csv-items-with-column-name), and see the sketch below.
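As a small sketch of that column-name idea, assuming (hypothetically) the CSV has a header row with a column named `url`:

```
import csv

with open('urls_file2.csv', newline='', encoding='utf-8-sig') as f:
    for row in csv.DictReader(f):  # each row becomes a dict keyed by the header names
        print(row['url'])
```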
You have a byte order mark `\\ufeff` on the URL you parse from your file. It looks like your file was saved with a signature (BOM), i.e. with an encoding like utf-8-sig. You need to read the file with `encoding='utf-8-sig'`. Read more [here](https://stackoverflow.com/a/49150749/7502914).
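A minimal sketch of the corrected loop from the question (also taking the first column of each row, per the other answer, since `csv.reader` yields lists):

```
import csv
import requests as req
from bs4 import BeautifulSoup

with open('urls_file2.csv', newline='', encoding='utf-8-sig') as f:  # utf-8-sig strips the BOM
    for row in csv.reader(f):
        resp = req.get(row[0])                   # row is a list; the URL is its first column
        soup = BeautifulSoup(resp.text, 'lxml')
        print(soup.title.text)
```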
3,968
31,039,972
I am trying to run a Python script from another Python script, and getting its `pid` so I can kill it later. I tried `subprocess.Popen()` with the argument `shell=True`, but the `pid` attribute returns the `pid` of the parent script, so when I try to kill the subprocess, it kills the parent. Here is my code:

```py
proc = subprocess.Popen(" python ./script.py", shell=True)
pid_ = proc.pid
.
.
.  # later in my code
os.system('kill -9 %s'%pid_) #IT KILLS THE PARENT :(
```
2015/06/25
[ "https://Stackoverflow.com/questions/31039972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4759209/" ]
`shell=True` starts a new shell process. `proc.pid` is the pid of that shell process. `kill -9` kills the shell process, making the grandchild python process an orphan. If the grandchild python script can spawn its own child processes and you want to kill the whole process tree then see [How to terminate a python subprocess launched with shell=True](https://stackoverflow.com/q/4789837/4279):

```
#!/usr/bin/env python
import os
import signal
import subprocess

proc = subprocess.Popen("python script.py", shell=True, preexec_fn=os.setsid)
# ...
os.killpg(proc.pid, signal.SIGTERM)
```

If `script.py` does not spawn any processes then use [@icktoofay suggestion](https://stackoverflow.com/a/31040013/4279): drop `shell=True`, use a list argument, and call `proc.terminate()` or `proc.kill()` -- the latter always works eventually:

```
#!/usr/bin/env python
import subprocess

proc = subprocess.Popen(["python", "script.py"])
# ...
proc.terminate()
```

If you want to run your parent script from a different directory, you might need the [`get_script_dir()` function](https://stackoverflow.com/a/22881871/4279). Consider importing the python module and running its functions, using its object (perhaps via `multiprocessing`) instead of running it as a script. Here's [code example that demonstrates `get_script_dir()` and `multiprocessing` usage](https://stackoverflow.com/a/30165768/4279).
So run it directly without a shell: ``` proc = subprocess.Popen(['python', './script.py']) ``` By the way, you may want to consider changing the hardcoded `'python'` to [`sys.executable`](https://docs.python.org/3.5/library/sys.html#sys.executable). Also, you can use [`proc.kill()`](https://docs.python.org/3.5/library/subprocess.html#subprocess.Popen.kill) to kill the process rather than extracting the PID and using that; furthermore, even if you did need to kill by PID, you could use [`os.kill`](https://docs.python.org/3.5/library/os.html#os.kill) to kill the process rather than spawning another command.
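Putting those pieces together, a minimal sketch might look like:

```
import subprocess
import sys

proc = subprocess.Popen([sys.executable, './script.py'])  # no intermediate shell
# ... later in the code ...
proc.terminate()   # polite SIGTERM; use proc.kill() if it must die now
proc.wait()        # reap the child so it does not linger as a zombie
```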
3,969
47,403,218
I am successfully logged into my Virtual Machine and I have uploaded my files to AWS as well (Amazon EC2). What I wish to do is execute my Python code on the server, but it says that the dependencies are not installed. When I run a pip install command, it returns the following error: PermissionError: [Errno 13] Permission denied: '/usr/local/lib64/python3.4/site-packages/apiclient How do I fix this? Is it even possible to install packages using pip? If yes, how?
2017/11/21
[ "https://Stackoverflow.com/questions/47403218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8888799/" ]
Assuming you have the number 3 in cell A1 on Sheet2, the following will display the value of column A in the row that has rank 3 in Sheet1. This can be copied down in Sheet2 if you have other numbers in the rows below.

```
=INDEX(Sheet1!A:AH,MATCH($A1,Sheet1!$AH:$AH,0),1)
```
Sounds like you need `INDEX/MATCH` like this `=INDEX(Sheet1!A:A,MATCH(3,Sheet1!AH:AH,0))` The `MATCH` function finds the position of 3 in column `AH` and then the `INDEX` function returns the value from column `A` in the same row. Is that what you need?
3,970
45,046,601
I have this weird problem that can be reproduced with the [simple tutorial](https://docs.docker.com/compose/django/) from Docker. If I follow the tutorial exactly, everything would work fine, i.e. after `docker-compose up` command, the web container would run and connect nicely to the db container. However, if I choose to create the same Django project on the host, change its settings for the postgres db, and copy it over to the web image in its Dockerfile, instead of mounting the host directory to the container and doing those things there as shown in the tutorial (using the command `docker-compose run web django-admin.py startproject composeexample .` and then change the settings file generated and located in the mounted directory on the host), the first time I run `docker-compose up`, the web container would have problems connecting to the db, with the error as below > > web\_1 | psycopg2.OperationalError: could not connect to server: Connection refused > web\_1 | Is the server running on host "db" (172.18.0.2) and accepting > web\_1 | TCP/IP connections on port 5432? > > > However, if I stop the compose with docker-compose down and then run it again with docker-compose up, the web container would connect to the db successfully with no problems. 'Connection refused' seems to be not an uncommon problem here but I have checked and verified that all the settings are correct and the usual causes like wrong port number, port not exposed or setting host as 'local' instead of 'db', etc. are not the problems in this case. Note: FWIW, I use CNTLM as the system proxy in the host and have to set the environment variables for the web image, and it works fine for other scenarios. EDIT: Please find additional info as below. In the host directory I have the following files and directories * composeexample (generated by another container following the same tutorial and copied over to here) * manage.py (generated by another container and copied over to here) * requirements.txt (exactly as the one in the tutorial) * Dockerfile (slightly modified from the one in the tutorial) * docker-compose.yml (slightly modified from the one in the tutorial) composeexample/settings.py: ``` ......... DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'postgres', 'USER': 'postgres', 'HOST': 'db', 'PORT': 5432, } } ......... ``` Dockerfile (mostly the same, with the added env vars): ``` FROM python:3.5 ENV PYTHONUNBUFFERED 1 ENV http_proxy "http://172.17.0.1:3128" ENV https_proxy "http://172.17.0.1:3128" ENV HTTP_PROXY "http://172.17.0.1:3128" ENV HTTPS_PROXY "http://172.17.0.1:3128" RUN mkdir /code WORKDIR /code ADD requirements.txt /code/ RUN pip install -r requirements.txt ADD . /code/ ``` docker-compose (I removed the mounted volume .:/code as the project files have already been copied to the web image when it's built. I tested with leaving it as in the original file and it made no difference): ``` version: '3' services: db: image: postgres web: build: . command: python3 manage.py runserver 0.0.0.0:8000 ports: - "8000:8000" depends_on: - db ```
2017/07/12
[ "https://Stackoverflow.com/questions/45046601", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3814824/" ]
Use [wait-for-it.sh](https://github.com/vishnubob/wait-for-it) to wait for Postgres to be ready. Download this well-known script: <https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh>

```
version: '3'

services:
  db:
    image: postgres
  web:
    build: .
    command: /wait-for-it.sh db:5432 -- python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - ./wait-for-it.sh:/wait-for-it.sh
    ports:
      - "8000:8000"
    depends_on:
      - db
```

It will wait until the db port is open and won't waste any further time.
You can use [healthcheck](https://docs.docker.com/compose/compose-file/#healthcheck). example from: [peter-evans/docker-compose-healthcheck: How to wait for container X before starting Y using docker-compose healthcheck](https://github.com/peter-evans/docker-compose-healthcheck#waiting-for-postgresql-to-be-healthy) ``` version: '3' services: db: image: postgres healthcheck: test: ["CMD-SHELL", "pg_isready -U postgres"] interval: 3s timeout: 30s retries: 3 web: build: . command: python3 manage.py runserver 0.0.0.0:8000 ports: - "8000:8000" depends_on: db: condition: service_healthy ```
3,971
40,062,854
I want to see some info and get info about my OS with Python, as in my tutorial, but I actually can't run this code:

```
import os
F = os.popen('dir')
```

and this:

```
F.readline()
' Volume in drive C has no label.\n'
F = os.popen('dir') # Read by sized blocks
F.read(50)
' Volume in drive C has no label.\n Volume Serial Nu'
os.popen('dir').readlines()[0] # Read all lines: index
' Volume in drive C has no label.\n'
os.popen('dir').read()[:50] # Read all at once: slice
' Volume in drive C has no label.\n Volume Serial Nu'
for line in os.popen('dir'): # File line iterator loop
... print(line.rstrip())
```

This is the error for the first one in the terminal (on IDLE it returns just an error):

```
f = open('dir')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'dir'
```

I know on Mac it should be different, but how do I get the same result using macOS Sierra?
2016/10/15
[ "https://Stackoverflow.com/questions/40062854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4716040/" ]
The problem is you can't save your custom array in NSUserDefaults. To do that, you should convert it to NSData, then save that in NSUserDefaults. Here is the code I used in my project. It's in Swift 2 syntax, and I don't think it's going to be hard to convert it to Swift 3.

```
let data = NSKeyedArchiver.archivedDataWithRootObject(yourObject);
NSUserDefaults.standardUserDefaults().setObject(data, forKey: "yourKey")
NSUserDefaults.standardUserDefaults().synchronize()
```

and to the get part use this combination

```
if let data = NSUserDefaults.standardUserDefaults().objectForKey("yourKey") as? NSData {
    let myItem = NSKeyedUnarchiver.unarchiveObjectWithData(data) as? yourType
}
```

Hope this will help
The closest type to a Swift struct that UserDefaults supports might be an NSDictionary. You could copy the struct elements into an Objective C NSDictionary object before saving the data.
3,973
65,849,470
I am writing a unit test in Python for a function that takes an object from an S3 bucket as the input parameter. The input parameter is of type `boto3.resources.factory.s3.ObjectSummary`. I don't want my unit test to access S3. I am writing a test that reads a .csv file into an object of type `pandas.core.frame.DataFrame`. Does anyone know how I can create an object of type `boto3.resources.factory.s3.ObjectSummary` from it? Thanks for your response.
2021/01/22
[ "https://Stackoverflow.com/questions/65849470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15061060/" ]
The answer is you shouldn't have a `loadingData()` Redux action in the first place. Loading or not is, as you correctly pointed out, every component's "local" state, so you should store it appropriately - inside each component's "normal" state. Redux store is designed for storing the data that is mutual to several components. And whether some component is ready or not is certainly NOT that.
It is good practice to have a separate `loading` flag for each subject you call a backend `api` for, for example a `loading` for the `books` api, a `loading` for the `movies` api, and so on. I recommend you create a `loadings` object in your state and fill it with the different loadings that you need, like this:

```
loadings: {
   books_loading,
   movie_loading
}
```

so in your components you won't depend on one general `loading` state which affects a lot of components; only those that need the specific `loading` will use it, and you will solve the problem you have
3,977
10,572,671
I'm new to c/c++ and I've been working with python for a long time, I didn't take any tutorials, but I got this error when I tried to declare an array of strings. code: ``` QString months[12]={'Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'}; ``` error: invalid conversion from 'int' to 'const char\*' What does that error mean?
2012/05/13
[ "https://Stackoverflow.com/questions/10572671", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1164958/" ]
Use double quotes for strings (`"`). `'` is for character literals.
In Python there is no difference between `'` and `"` (both are strings), but in C++ they are different:

```
char c = 'c';
string str = "string";
```

Don't forget that C++ has no `'''`, while in Python it is also a string delimiter. Your code:

```
... "Oct", "Nov", "Dec"};
```
3,980
63,381,325
I am using a Python script with the regex module, trying to process 2 files and create a final output as required, but I am getting some errors. cat links.txt

```
https://videos-a.jwpsrv.com/content/conversions/7kHOkkQa/videos/XXXXJD8C-32313922.mp4.m3u8?hdnts=exp=1596554537~acl=*/bGxpJD8C-32313922.mp4.m3u8~hmac=2ac95222f1693d11e7fd8758eb0a18d6d2ee187bb10e3c27311e627785687bd5
https://videos-a.jwpsrv.com/content/conversions/7kHOkkQa/videos/XXXXkxI1-32313922.mp4.m3u8?hdnts=exp=1596554733~acl=*/bM07kxI1-32313922.mp4.m3u8~hmac=dd0fc6f433a8ac74c9eaa2a376fa4324a65ae7c410cdcf8e869c6961f1a5b5ea
https://videos-a.jwpsrv.com/content/conversions/7kHOkkQa/videos/XXXXpGKZ-32313922.mp4.m3u8?hdnts=exp=1596554748~acl=*/onhIpGKZ-32313922.mp4.m3u8~hmac=d4030cf7813cef02a58ca17127a0bc6b19dc93cccd6add4edc72a2ee5154f236
https://videos-a.jwpsrv.com/content/conversions/7kHOkkQa/videos/XXXXLbgy-32313922.mp4.m3u8?hdnts=exp=1596554871~acl=*/xGXCLbgy-32313922.mp4.m3u8~hmac=7c515306c033c88d32072d54ba1d6aa4abf1be23070d1bb14d1311e4e74cc1d7
```

cat name.txt

```
Introduction Lecture 1
Questions Lecture 1B
Theory Lecture 2
Labour Costing Lecture 352 (Classroom Lecture)
```

Expected ( final.txt )

```
https://cdn.jwplayer.com/vidoes/XXXXJD8C-32313922.mp4
 out=Lecture 001- Introduction.mp4
https://cdn.jwplayer.com/vidoes/XXXXkxI1-32313922.mp4
 out=Lecture 001B- Questions.mp4
https://cdn.jwplayer.com/vidoes/XXXXpGKZ-32313922.mp4
 out=Lecture 002- Theory.mp4
https://cdn.jwplayer.com/vidoes/XXXXLbgy-32313922.mp4
 out=Lecture 352- Labour Costing (Classroom Lecture).mp4
```

cat sort.py ( my existing script )

```
import re

final = open('final.txt','w')
a = open('links.txt','r')
b = open('name.txt','r')

base = 'https://cdn.jwplayer.com/videos/'
kek = re.compile(r'(?<=\/)[\w\-\.]+(?=.m3u8)')

# find max lecture number
n = None
for line in b:
    b_n = int(''.join([c for c in line.rpartition(' ')[2] if c in '1234567890']))
    if n is None or b_n > n:
        n = b_n
n = len(str(n)) # string len of the max lecture number

b = open('name.txt','r')
for line in a:
    final.write(base + kek.search(line).group() + '\n')
    b_line = b.readline().rstrip()
    line_before_lecture, _, lecture = b_line.partition('Lecture')
    line_before_lecture = line_before_lecture.strip()
    lecture_no = lecture.rpartition(' ')[2]
    lecture_str = lecture_no.rjust(n, '0') + '-' + " " + line_before_lecture
    final.write(' out=' + 'Lecture ' + lecture_str + '.mp4\n')
```

Traceback

```
Traceback (most recent call last):
  File "sort.py", line 11, in <module>
    b_n = int(''.join([c for c in line.rpartition(' ')[2] if c in '1234567890']))
ValueError: invalid literal for int() with base 10: ''
```

**Edit** - It seems that the error is due to the last line in name.txt, as my script assumes all lines in name.txt end in the format Lecture X. One way to fix it, I guess, is to edit the script and add an **if** condition as follows: If any line in name.txt doesn't end in the format Lecture X, then shift the text succeeding Lecture X to before the word Lecture. For example, the 4th line of name.txt `Labour Costing Lecture 352 (Classroom Lecture)` could be converted to `Labour Costing (Classroom Lecture) Lecture 352`, and edit the line below in my script to match only the last occurrence of "Lecture" in a line in name.txt

```
line_before_lecture, _, lecture = b_line.partition('Lecture')
```

I basically need the expected output (final.txt) from those 2 files (names.txt and links.txt) using the script; if there's a better/smarter way to do it, I would definitely be happy to use it.
I just theoretically suggested one way of doing it; I have no clue how to do it myself.
2020/08/12
[ "https://Stackoverflow.com/questions/63381325", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13516930/" ]
If you are using regular expressions anyway, why not use them to pull out this information, too? ```py import re base = 'https://cdn.jwplayer.com/videos/' kek = re.compile(r'(?<=\/)[\w\-\.]+(?=.m3u8)') nre = re.compile(r'(.*)\s+Lecture (\d+)(.*)') with open('name.txt') as b: lecture = [] for line in b: parsed = nre.match(line) if parsed: lecture.append((int(parsed.group(2)), parsed.group(3), parsed.group(1))) else: raise ValueError('Unable to parse %r' % line) n = len(str(lecture[-1][0])) with open('links.txt','r') as a: for idx, line in enumerate(a): print(base + kek.search(line).group()) fmt=' out=Lecture {0:0' + str(n) + 'n}{1}- {2}.mp4' print(fmt.format(*lecture[idx])) ``` This only traverses the contents in `name.txt` once, and stores the results in a variable `lecture` which contains a tuple of the pieces we pulled out (number, suffix, title). I also changed this to write to standard output; redirect to a file if you like, or switch back to explicitly hard-coding the output file in the script itself. The splat syntax `*lecture` is just a shorthand to avoid having to write `lecture[0], lecture[1], lecture[2]` explicitly. Demo: <https://repl.it/repls/TatteredInexperiencedFibonacci#main.py>
The issue is with the last line of cat names.txt. ``` >>> line = "Labour Costing Lecture 352 (Classroom Lecture)" >>> [c for c in line.rpartition(' ')[2]] ['L', 'e', 'c', 't', 'u', 'r', 'e', ')'] ``` Clearly not what you are intending to extract. Since none of these is a number, it returns an empty string which cannot be cast to an int. If you are looking to extract the int, I would suggest looking at this question: [How to extract numbers from a string in Python?](https://stackoverflow.com/questions/4289331/how-to-extract-numbers-from-a-string-in-python)
3,981
45,026,566
i was try to use python API but its not working if i try to use multiple parameter **Not working** ``` from flask import Flask, request @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] UserPassword = req_json['password'] return str(UserName) ``` **Working** ``` from flask import Flask, request @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] return str(UserName) ``` **Error** <https://www.herokucdn.com/error-pages/application-error.html> **Logs** ``` State changed from crashed to starting 2017-07-11T06:44:13.760404+00:00 heroku[web.1]: Starting process with command `python server.py` 2017-07-11T06:44:16.078195+00:00 app[web.1]: File "server.py", line 29 2017-07-11T06:44:16.078211+00:00 app[web.1]: account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) 2017-07-11T06:44:16.078211+00:00 app[web.1]: ^ 2017-07-11T06:44:16.078213+00:00 app[web.1]: IndentationError: unexpected indent 2017-07-11T06:44:16.179785+00:00 heroku[web.1]: Process exited with status 1 2017-07-11T06:44:16.192829+00:00 heroku[web.1]: State changed from starting to crashed ``` **Server.py** ``` import os from flask import Flask, request from twilio.jwt.access_token import AccessToken, VoiceGrant from twilio.rest import Client import twilio.twiml ACCOUNT_SID = 'accountsid' API_KEY = 'apikey' API_KEY_SECRET = 'apikeysecret' PUSH_CREDENTIAL_SID = 'pushsid' APP_SID = 'appsid' app = Flask(__name__) @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] Password = req_json['password'] return str(UserName) @app.route('/accessToken') def token(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) grant = VoiceGrant( push_credential_sid=push_credential_sid, outgoing_application_sid=app_sid ) token = AccessToken(account_sid, api_key, api_key_secret, IDENTITY) token.add_grant(grant) return str(token) @app.route('/outgoing', methods=['GET', 'POST']) def outgoing(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have made your first oubound call! Good bye.") resp.say("Thanks for Calling! Please try again later.") return str(resp) @app.route('/incoming', methods=['GET', 'POST']) def incoming(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have received your first inbound call! Good bye.") resp.say("Thanks for Calling! 
Please try again later.") return str(resp) @app.route('/placeCall', methods=['GET', 'POST']) def placeCall(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] CALLER_ID = req_json['callerid'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) client = Client(api_key, api_key_secret, account_sid) call = client.calls.create(url=request.url_root + 'incoming', to='client:' + CALLER_ID, from_='client:' + IDENTITY) return str(call.sid) @app.route('/', methods=['GET', 'POST']) def welcome(): resp = twilio.twiml.Response() resp.say("Welcome") return str(resp) if __name__ == "__main__": port = int(os.environ.get("PORT", 5000)) app.run(host='0.0.0.0', port=port, debug=True) ```
2017/07/11
[ "https://Stackoverflow.com/questions/45026566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6472692/" ]
As you can see in the logs, the app crashed due to an indentation error. Please check the indentation of the `account_sid` variable in your code.
The hint is in your logs. ``` 2017-07-11T06:44:16.078195+00:00 app[web.1]: File "server.py", line 29 2017-07-11T06:44:16.078211+00:00 app[web.1]: account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) 2017-07-11T06:44:16.078211+00:00 app[web.1]: ^ 2017-07-11T06:44:16.078213+00:00 app[web.1]: IndentationError: unexpected indent ``` You have bad indentation in server.py on line 29. ``` req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) ``` should look like: ``` req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) ``` It looks like you have loads of other badly indented lines as well.
3,982
28,619,302
I'm using PyCharm (Python, and mapnik) on Windows 7. I just wanted to test if everything is in place after installation. I used an example from the net (here it is), and I get a Frame error. Could it be an installation problem? A compiler issue? I'm very new to Python. Thanks in advance for your time.

```
"""
This is a simple wxPython application demonstrates how to integrate mapnik, it do nothing but draw the map from the World Poplulation XML example:

https://github.com/mapnik/mapnik/wiki/GettingStartedInXML

Victor Lin. (bornstub@gmail.com)
Blog http://blog.ez2learn.com
"""

import mapnik
import wx

class Frame(wx.Frame):
    def __init__(self, *args, **kwargs):
        wx.Frame.__init__(self, size=(800, 500) ,*args, **kwargs)
        self.Bind(wx.EVT_PAINT, self.onPaint)

        self.mapfile = "population.xml"
        self.width = 800
        self.height = 500
        self.createMap()
        self.drawBmp()

    def createMap(self):
        """Create mapnik object
        """
        self.map = mapnik.Map(self.width, self.height)
        mapnik.load_map(self.map, self.mapfile)
        bbox = mapnik.Envelope(mapnik.Coord(-180.0, -75.0), mapnik.Coord(180.0, 90.0))
        self.map.zoom_to_box(bbox)

    def drawBmp(self):
        """Draw map to Bitmap object
        """
        # create a Image32 object
        image = mapnik.Image(self.width, self.height)
        # render map to Image32 object
        mapnik.render(self.map, image)
        # load raw data from Image32 to bitmap
        self.bmp = wx.BitmapFromBufferRGBA(self.width, self.height, image.tostring())

    def onPaint(self, event):
        dc = wx.PaintDC(self)
        memoryDC = wx.MemoryDC(self.bmp)
        # draw map to dc
        dc.Blit(0, 0, self.width, self.height, memoryDC, 0, 0)

if __name__ == '__main__':
    app = wx.App()
    frame = frame(None, title="wxPython Mapnik Demo")
    frame.Show()
    app.MainLoop()
```

here is the error message:

```
Traceback (most recent call last):
  File "C:/Python27/example.py", line 16, in <module>
    class Frame(wx.Frame):
  File "C:/Python27/example.py", line 56, in Frame
    frame = frame(None, title="wxPython Mapnik Demo")
NameError: name 'frame' is not defined

Process finished with exit code 1
```
2015/02/19
[ "https://Stackoverflow.com/questions/28619302", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4586167/" ]
A little insight on your code first: `all` will fetch ALL records from the database and pass them to your Ruby code; this is resource and time consuming. Then the `shuffle`, `sort_by` and `reverse` are all executed by Ruby. You will quickly hit performance issues as your database grows. Your solution is to let your database server do that work. DB servers are very optimized for all sorting operations. So if you are for example using MySQL you should use this instead:

```
@articles = Article.order('`articles`.`date_published` DESC, RAND()')
```

Which will sort primarily by date_published in reverse order, and secondarily randomly for all articles of the same date
Hmm, here's a fun hack that *should* work:

```
@articles = Article.
  all.
  sort_by{|t| (t.date_published.beginning_of_day.to_i * 1000) + rand(100)}
```

This works by forcing all the dates to be the beginning of the day (so that everything published on '2015-02-19', for example, will have the same `to_i` value). Then you multiply by 1000 and add a random number between 0 and 100 for the sort (any number less than 1000 would work).
3,985
12,121,260
I've run into a specific problem and thought of a solution. But since the solution is pretty involved, I was wondering if others have encountered something similar and could comment on best practices or propose alternatives.

The problem is as follows: I have a webapp written in Django which has a screen in which data from multiple tables is collected, grouped and aggregated in time intervals. It's basically a big Excel-like matrix where we have data aggregated in time intervals on one axis, against resources for the aggregated data per interval on the other axis. It involves many inner and left joins to gather all data, and because of the report-like character of the presented data, I use raw SQL to query everything together.

The problem is that multiple users can concurrently view & edit data in these intervals. They can also edit data on finer or coarser granularities than other users working with the same data, but in sub/overlapping intervals.

Currently, when a user edits some data, a Django request is fired, the data is altered, the affected intervals are aggregated & grouped again and presented back. But because of the volatile nature of this data, other users might have changed something before them. Also, grouping/aggregating and rerendering the table each time is a very heavy operation (depending on the amount of data and the range of the intervals). This gets worse with concurrent users editing.

My proposed solution: It's clear an HTTP request/response mechanism is not really ideal for this kind of thing; the grouping/aggregation is pretty heavyweight and not ideal to do per request, the concurrency would ideally be channeled amongst users, and feedback should be realtime like Google Docs instead of full page refreshes.

I was thinking about making a daemon process which reads in *flat* data of interest from the DBMS on request and caches this in memory. All changes to the data would then occur in memory with a write-through to the DBMS. This daemon channels access to the data through a lock, so the daemon can handle which users can overwrite others' changes.

The flat data is aggregated and grouped using Python code and only the slices required by the user are returned; user/daemon communication would run over websockets.

The daemon would provide a subscriber/publisher channel, where users interested in specific slices of data are notified when something changes. This daemon could be implemented using a framework like Twisted. But I'm not sure an event driven approach would work here, as we want to "channel" all incoming requests... Maybe these should be put in a queue and be run in a separate thread?

Would it be better to have Twisted run in a thread next to my scheduler, or should the Twisted main loop spin off a thread that works on this queue? My understanding is that threading works best for IO, and Python-heavy code basically blocks other threads. I have both (websockets/dbms and processing data), would that work?

Has anyone done something similar before? Thanks in advance!

Karl
2012/08/25
[ "https://Stackoverflow.com/questions/12121260", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1227852/" ]
I tried something similar and you might be interested in the solution. Here is my question: [python Socket.IO client for sending broadcast messages to TornadIO2 server](https://stackoverflow.com/questions/10950365/python-socket-io-client-for-sending-broadcast-messages-to-tornadio2-server) And this is the answer: <https://stackoverflow.com/a/10950702/675065> He also wrote a blog post about the solution: <http://blog.y3xz.com/blog/2012/06/08/a-modern-python-stack-for-a-real-time-web-application/> The software stack consists of: * [SockJS Client](https://github.com/sockjs/sockjs-client) * [SockJS Tornado Server](https://github.com/MrJoes/sockjs-tornado) * [Redis Pub/Sub](http://redis.io/commands#pubsub) * [Django Redis Client: Brukva](https://github.com/evilkost/brukva) I implemented this myself and it works like a charm.
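The pub/sub piece of that stack is small. As a minimal sketch with the plain `redis` Python client (the channel name and message are illustrative only):

```
import redis

r = redis.Redis()
p = r.pubsub()
p.subscribe('grid-updates')                      # interested clients subscribe to a slice
r.publish('grid-updates', 'interval 42 changed')

for message in p.listen():                       # blocking generator of pub/sub events
    if message['type'] == 'message':             # skip the initial subscribe confirmation
        print(message['data'])
        break
```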
The scheme Google implemented for the now abandoned Wave product's concurrent editing features is documented at <http://www.waveprotocol.org/whitepapers/operational-transform>. This aspect of Wave seemed like a success, even though Wave itself was quickly abandoned. As far as the questions you asked about implementing your proposed scheme:

1. An event driven system is perfectly capable of implementing this idea. Being event driven is a way to organize your code. It doesn't prevent you from implementing any particular functionality.
2. Threading doesn't work best for very much, particularly in Python.
   1. It has significant disadvantages for CPU-bound work, since CPython only runs a single Python thread at a time (regardless of available hardware resources). This means a multi-threaded CPU-bound Python program is typically no faster, or even slower, than the single-threaded equivalent.
   2. For IO, this shortcoming is less of a limitation, because IO does not involve running Python code on CPython (the IO APIs are all implemented in C). This means you can do IO in multiple threads concurrently, so threading is potentially a benefit. However, doing IO concurrently in a single thread is exactly what Twisted is for. Threading offers no benefits over doing the IO in a single thread, as long as you're doing the IO non-blockingly (or perhaps asynchronously).
3,986
40,499,702
I'm studying TensorFlow and want to test the slim example. When I run ./scripts/train_lenet_on_mnist.sh, the program gets to eval_image_classifier and gives a TypeError. The error information is as follows:

```
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so.8.0 locally
INFO:tensorflow:Scale of 0 disables regularizer.
INFO:tensorflow:Evaluating /tmp/lenet-model/model.ckpt-20002
INFO:tensorflow:Starting evaluation at 2016-11-09-02:55:57
I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 0 with properties:
name: Quadro K5000
major: 3 minor: 0 memoryClockRate (GHz) 0.7055
pciBusID 0000:03:00.0
Total memory: 3.94GiB
Free memory: 3.61GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:972] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Quadro K5000, pci bus id: 0000:03:00.0)
INFO:tensorflow:Executing eval ops
INFO:tensorflow:Executing eval_op 1/100
INFO:tensorflow:Error reported to Coordinator: <class 'TypeError'>, Fetch argument dict_values([<tf.Tensor 'accuracy/update_op:0' shape=() dtype=float32>, <tf.Tensor 'recall_at_5/update_op:0' shape=() dtype=float32>]) has invalid type <class 'dict_values'>, must be a string or Tensor. (Can not convert a dict_values into a Tensor or Operation.)
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 218, in __init__
    fetch, allow_tensor=True, allow_operation=True))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2455, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2547, in _as_graph_element_locked
    % (type(obj).__name__, types_str))
TypeError: Can not convert a dict_values into a Tensor or Operation.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "eval_image_classifier.py", line 191, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 30, in run
    sys.exit(main(sys.argv[:1] + flags_passthrough))
  File "eval_image_classifier.py", line 187, in main
    variables_to_restore=variables_to_restore)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/slim/python/slim/evaluation.py", line 359, in evaluate_once
    global_step=global_step)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/slim/python/slim/evaluation.py", line 260, in evaluation
    sess.run(eval_op, eval_op_feed_dict)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 717, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 902, in _run
    fetch_handler = _FetchHandler(self._graph, fetches, feed_dict_string)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 358, in __init__
    self._fetch_mapper = _FetchMapper.for_fetch(fetches)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 189, in for_fetch
    return _ElementFetchMapper(fetches, contraction_fn)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 222, in __init__
    % (fetch, type(fetch), str(e)))
TypeError: Fetch argument dict_values([<tf.Tensor 'accuracy/update_op:0' shape=() dtype=float32>, <tf.Tensor 'recall_at_5/update_op:0' shape=() dtype=float32>]) has invalid type <class 'dict_values'>, must be a string or Tensor. (Can not convert a dict_values into a Tensor or Operation.)
```

I don't know what happened to the program; I did not revise any code, just downloaded the code package from GitHub, and the data download and training steps gave correct results. Can anyone help me? I am waiting online. Thanks
2016/11/09
[ "https://Stackoverflow.com/questions/40499702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7134638/" ]
The problem is the compatibility of `python2` and `python3`. I used `python3` as the interpreter, but getting the keys from a dictionary differs between `python2` and `python3`. In `Python2`, simply calling `keys()` on a dictionary object will return what you expect; however, in `Python3`, `keys()` no longer returns a `list` but a view object, so the TypeError can be avoided and compatibility can be maintained by simply converting the `dict_keys` object into a list, which can then be indexed as normal in both `Python2` and `Python3`. I edited `eval_image_classifier` using `eval_op=list(names_to_updates.values())`, and then it works perfectly.
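The difference is easy to see in isolation; a small sketch (with the tensors replaced by plain floats for illustration):

```
d = {'accuracy': 0.9, 'recall_at_5': 0.8}
ops = d.values()      # Python 3: a dict_values view, not a list
print(type(ops))      # <class 'dict_values'>
ops = list(ops)       # the form that session.run() can actually consume
print(ops)            # [0.9, 0.8] (order may vary on older Pythons)
```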
The other Python 3 change needed in eval\_image\_classifier.py is

```
# before (Python 2 only):
for name, value in names_to_values.iteritems():
# after (works in Python 3):
for name, value in names_to_values.items():
```
3,987
56,689,803
I'm trying to remove part of the text in a given string. The problem is as follows: I have a string, say HTML code like this.

```
<!DOCTYPE html>
<html>
<head>
<style>
body {background-color: powderblue;}
h1 {color: blue;}
p {color: red;}
</style>
</head>
<body>
<h1>This is a heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
```

I want the code to remove all the CSS-related code, i.e. the string should now look like:

```
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<h1>This is a heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
```

I have tried this function in Python:

```
def css_remover(text):
    m = re.findall('<style>(.*)</style>$', text,re.DOTALL)
    if m:
        for eachText in text.split(" "):
            for eachM in m:
                if eachM in eachText:
                    text=text.replace(eachText,"")
    print(text)
```

But this doesn't work. I want the function to handle spaces and newline characters so that it removes everything between the `<style> </style>` tags. Also, any word attached to the tag shouldn't be affected: `hello<style> klasjdklasd </style>>` should yield `hello>`.
2019/06/20
[ "https://Stackoverflow.com/questions/56689803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7928692/" ]
You put the `$`, which means end of string. Try this:

```
import re  # needed if not already imported

x = re.sub('<style>.*?</style>', '', text, flags=re.DOTALL)
print(x)
```

You can check out [this website](https://regex101.com/r/Fti1aD/1), which has a nice regex demo.

**A little note**: I am not extremely familiar with CSS, so if there are nested `<style>` tags it might be a problem.
Note particularly the `?` character in the `<style>(.*?)</style>` portion of the RegExp expression so as not to be "too greedy". Otherwise, in the example below, it would also remove the `<title>` HTML tag. ``` import re text = """ <!DOCTYPE html> <html> <head> <style> body {background-color: powderblue;} h1 {color: blue;} p {color: red;} </style> <title>Test</title> <style> body {background-color: powderblue;} h1 {color: blue;} p {color: red;} </style> </head> <body> <h1>This is a heading</h1> <p>This is a paragraph.</p> </body> </html> """ regex = re.compile(r' *<style>(.*?)</style> *\n?', re.DOTALL|re.MULTILINE) text = regex.sub('', text, 0) print (text == """ <!DOCTYPE html> <html> <head> <title>Test</title> </head> <body> <h1>This is a heading</h1> <p>This is a paragraph.</p> </body> </html> """) ```
3,988
33,984,889
I want to use an array and its first derivative (diff) as features for training. Since the diff array is of a smaller size, I would like to fill it up so that I don't have problems with sizes when I stack them and use both as features. If I fill the diff(array) with a 0, how should I align them? Do I put the 0 at the beginning of the resulting diff(array) or at the end? What is the correct way of aligning an array with its derivative? e.g. in Python:

```
a = [1,32,43,54]
b = np.diff(np.array(a))
np.insert(b, -1, 0) # at the end?
np.insert(b, 0, 0) # or at the beginning?
```
2015/11/29
[ "https://Stackoverflow.com/questions/33984889", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1978146/" ]
Instead of left- or right-sided finite differences, you could use a centered finite difference (which is equivalent to taking the average of the left- and right-sided differences), and then pad both ends with appropriate approximations of the derivatives there. This will keep the estimation of the derivative aligned with its data value, and usually give a better estimate of the derivative. For example, ``` In [33]: y = np.array([1, 2, 3.5, 3.5, 4, 3, 2.5, 1.25]) In [34]: dy = np.empty(len(y)) In [35]: dy[1:-1] = 0.5*(y[2:] - y[:-2]) In [36]: dy[0] = y[1] - y[0] In [37]: dy[-1] = y[-1] - y[-2] In [38]: dy Out[38]: array([ 1. , 1.25 , 0.75 , 0.25 , -0.25 , -0.75 , -0.875, -1.25 ]) ``` The following script using matplotlib to create this visualization of the estimates of the derivatives: [![plot](https://i.stack.imgur.com/ugdVM.png)](https://i.stack.imgur.com/ugdVM.png) ``` import numpy as np import matplotlib.pyplot as plt y = np.array([1, 2, 3.5, 3.5, 4, 3, 2.5, 1.25]) dy = np.empty(len(y)) dy[1:-1] = 0.5*(y[2:] - y[:-2]) dy[0] = y[1] - y[0] dy[-1] = y[-1] - y[-2] plt.plot(y, 'b-o') for k, (y0, dy0) in enumerate(zip(y, dy)): t = 0.25 plt.plot([k-t, k+t], [y0 - t*dy0, y0 + t*dy0], 'c', alpha=0.4, linewidth=4) plt.grid() plt.show() ``` There are more sophisticated tools for estimating derivatives (e.g. [`scipy.signal.savgol_filter`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.savgol_filter.html) has an option for estimating the derivative, and if your data is periodic, you could use [`scipy.fftpack.diff`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.fftpack.diff.html)), but a simple finite difference might work fine as your training input.
According to the [documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.diff.html), diff is simply doing `out[n] = a[n+1] - a[n]`. This means that it is not a derivative approximated by finite difference, but the discrete difference. To calculate the finite difference, you need to divide by the step size, except if your step size is 1, of course. Example:

```
import numpy as np
x = np.linspace(0,2*np.pi,30)
y = np.sin(x)
dy = np.diff(y) / np.diff(x)
```

Here, `y` is a function of `x` at specific points, and `dy` is its derivative. The derivative by this formula is a central derivative, meaning that its location is between the points in `x`. If you need the derivatives at the same points, I would suggest calculating the derivative using the two neighbouring points:

```
(y[:-2]-y[2:])/(x[:-2]-x[2:])
```

This way, you could add a `0` to both sides of the derivative vector, or trim your input vector accordingly.
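To make the alignment concrete, here is a small runnable sketch (reusing the `x`/`y` arrays from the example above) that uses the two-neighbour estimate in the interior and one-sided differences at the ends, so each derivative stays aligned with its data point and both arrays keep the same length for stacking as features:

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 30)
y = np.sin(x)

dy = np.empty_like(y)
dy[1:-1] = (y[2:] - y[:-2]) / (x[2:] - x[:-2])  # two-neighbour estimate, aligned with x[1:-1]
dy[0] = (y[1] - y[0]) / (x[1] - x[0])           # one-sided difference at the left end
dy[-1] = (y[-1] - y[-2]) / (x[-1] - x[-2])      # one-sided difference at the right end

features = np.stack([y, dy], axis=1)            # shapes now match: (30, 2)
```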
3,989
38,596,674
I bet I am doing something very simple wrong. I want to start with an empty 2D numpy array and append arrays to it (with dimensions 1 row by 4 columns). ``` open_cost_mat_train = np.matrix([]) for i in xrange(10): open_cost_mat = np.array([i,0,0,0]) open_cost_mat_train = np.vstack([open_cost_mat_train,open_cost_mat]) ``` my error trace is: ``` File "/Users/me/anaconda/lib/python2.7/site-packages/numpy/core/shape_base.py", line 230, in vstack return _nx.concatenate([atleast_2d(_m) for _m in tup], 0) ValueError: all the input array dimensions except for the concatenation axis must match exactly ``` What am I doing wrong? I have tried append, concatenate, defining the empty 2D array as `[[]]`, as `[]`, `array([])` and many others.
2016/07/26
[ "https://Stackoverflow.com/questions/38596674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3948664/" ]
If `open_cost_mat_train` is large I would encourage you to replace the for loop by a **vectorized algorithm**. I will use the following functions to show how efficiency is improved by vectorizing loops:

```
def fvstack():
    import numpy as np
    np.random.seed(100)
    ocmt = np.matrix([]).reshape((0, 4))
    for i in xrange(10):
        x = np.random.random()
        ocm = np.array([x, x + 1, 10*x, x/10])
        ocmt = np.vstack([ocmt, ocm])
    return ocmt

def fshape():
    import numpy as np
    from numpy.matlib import empty
    np.random.seed(100)
    ocmt = empty((10, 4))
    for i in xrange(ocmt.shape[0]):
        ocmt[i, 0] = np.random.random()
    ocmt[:, 1] = ocmt[:, 0] + 1
    ocmt[:, 2] = 10*ocmt[:, 0]
    ocmt[:, 3] = ocmt[:, 0]/10
    return ocmt
```

I've assumed that the values that populate the first column of `ocmt` (shorthand for `open_cost_mat_train`) are obtained from a for loop, and that the remaining columns are a function of the first column, as stated in your comments to my original answer. As real cost data are not available, in the forthcoming example the values in the first column are random numbers, and the second, third and fourth columns are the functions `x + 1`, `10*x` and `x/10`, respectively, where `x` is the corresponding value in the first column.

```
In [594]: fvstack()
Out[594]: 
matrix([[  5.43404942e-01,   1.54340494e+00,   5.43404942e+00,   5.43404942e-02],
        [  2.78369385e-01,   1.27836939e+00,   2.78369385e+00,   2.78369385e-02],
        [  4.24517591e-01,   1.42451759e+00,   4.24517591e+00,   4.24517591e-02],
        [  8.44776132e-01,   1.84477613e+00,   8.44776132e+00,   8.44776132e-02],
        [  4.71885619e-03,   1.00471886e+00,   4.71885619e-02,   4.71885619e-04],
        [  1.21569121e-01,   1.12156912e+00,   1.21569121e+00,   1.21569121e-02],
        [  6.70749085e-01,   1.67074908e+00,   6.70749085e+00,   6.70749085e-02],
        [  8.25852755e-01,   1.82585276e+00,   8.25852755e+00,   8.25852755e-02],
        [  1.36706590e-01,   1.13670659e+00,   1.36706590e+00,   1.36706590e-02],
        [  5.75093329e-01,   1.57509333e+00,   5.75093329e+00,   5.75093329e-02]])

In [595]: np.allclose(fvstack(), fshape())
Out[595]: True
```

In order for the calls to `fvstack()` and `fshape()` to produce the same results, the random number generator is initialized in both functions through `np.random.seed(100)`. Notice that the equality test has been performed using [`numpy.allclose`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html) instead of `fvstack() == fshape()` to avoid the round-off errors associated with floating point arithmetic.

As for efficiency, the following interactive session shows that initializing `ocmt` with its final shape is significantly faster than repeatedly stacking rows:

```
In [596]: import timeit

In [597]: timeit.timeit('fvstack()', setup="from __main__ import fvstack", number=10000)
Out[597]: 1.4884241055042366

In [598]: timeit.timeit('fshape()', setup="from __main__ import fshape", number=10000)
Out[598]: 0.8819408006311278
```
You need to reshape your original matrix so that the number of columns match the appended arrays: ``` open_cost_mat_train = np.matrix([]).reshape((0,4)) ``` After which, it gives: ``` open_cost_mat_train # matrix([[ 0., 0., 0., 0.], # [ 1., 0., 0., 0.], # [ 2., 0., 0., 0.], # [ 3., 0., 0., 0.], # [ 4., 0., 0., 0.], # [ 5., 0., 0., 0.], # [ 6., 0., 0., 0.], # [ 7., 0., 0., 0.], # [ 8., 0., 0., 0.], # [ 9., 0., 0., 0.]]) ```
3,990
23,280,253
I have the following code - ``` from sys import version class ExampleClass(object): def get_sys_version(self): return version x = ExampleClass() print x.get_sys_version() ``` and it gets parsed by this code - ``` import ast source = open("input.py") code = source.read() node = ast.parse(code, mode='eval') ``` and results in this error - ``` Traceback (most recent call last): File "parse.py", line 5, in <module> node = ast.parse(code, mode='eval') File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ast.py", line 37, in parse return compile(source, filename, mode, PyCF_ONLY_AST) File "<unknown>", line 1 from sys import version ``` This appears to be a very simple file to parse - it certainly runs - why does the parser throw this error?
2014/04/24
[ "https://Stackoverflow.com/questions/23280253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827401/" ]
This is because you're using `mode='eval'`, which only works for single expressions. Your code has multiple statements, so use `mode='exec'` instead (it's the default). See the [documentation for `compile()`](https://docs.python.org/2/library/functions.html#compile) for an explanation of the `mode` argument, since that's what `ast.parse()` uses.
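A quick runnable check of both modes, reusing the import statement from the question:

```python
import ast

code = "from sys import version\n"

tree = ast.parse(code, mode='exec')  # 'exec' (the default) accepts any sequence of statements
print(isinstance(tree, ast.Module))  # True

try:
    ast.parse(code, mode='eval')     # 'eval' accepts only a single expression
except SyntaxError as exc:
    print("eval mode fails:", exc)
```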
It's not related to `ast`. You get the same error when you try:

```
In [1]: eval('from sys import version')
  File "<string>", line 1
    from sys import version
       ^
SyntaxError: invalid syntax
```

Try `exec` mode:

```
In [1]: exec('from sys import version')

In [2]:
```
3,991
48,821,856
I would like to strip leading occurrences of `'a'` from a list. That is, `['a', 'a', 'b', 'b']` should become `['b', 'b']`, while `['b', 'a', 'a', 'b']` should be kept unchanged.

```
def remove_leading_items(l):
    if len(l) == 1 or l[0] != 'a':
        return l
    else:
        return remove_leading_items(l[1:])
```

Is there a more Pythonic way to do it?
2018/02/16
[ "https://Stackoverflow.com/questions/48821856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/671013/" ]
Yes. Immediately, you should be using a for loop. Recursion is generally not Pythonic. Second, use built-in tools:

```
from itertools import dropwhile

def remove_leading_items(l, item):
    return list(dropwhile(lambda x: x == item, l))
```

Or

```
return list(dropwhile(item.__eq__, l))
```

### Edit

Out of curiosity, I did some experiments with different approaches to this problem:

```
from itertools import dropwhile
from functools import partial
from operator import eq

def _eq_drop(l, e):
    return dropwhile(e.__eq__, l)

def lam_drop(l, e):
    return dropwhile(lambda x:x==e, l)

def partial_drop(l, e):
    return dropwhile(partial(eq, e), l)
```

First, with a list that is entirely dropped: i.e. `(1, 1, 1, ...)`

```
In [64]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2);
    ...: list(_eq_drop(t0, 1))
    ...: 
1000 loops, best of 3: 389 µs per loop

In [65]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2);
    ...: list(lam_drop(t0, 1))
    ...: 
1000 loops, best of 3: 1.19 ms per loop

In [66]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2);
    ...: list(partial_drop(t0, 1))
    ...: 
1000 loops, best of 3: 893 µs per loop
```

So `__eq__` is clearly the fastest in this situation. I like it, but it makes use of a dunder-method directly, which is sometimes frowned upon. The `dropwhile(partial(eq...` approach (wordy, yet explicit) is somewhere in between, and the sluggish, clumsy `lambda` approach comes last. Not surprising.

---

Now, when half is dropped, i.e. `(1, 1, 1, ..., 0, 0, 0)`:

```
In [52]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2);
    ...: list(_eq_drop(t2, 1))
    ...: 
1000 loops, best of 3: 245 µs per loop

In [53]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2);
    ...: list(lam_drop(t2, 1))
    ...: 
1000 loops, best of 3: 652 µs per loop

In [54]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2);
    ...: list(partial_drop(t2, 1))
    ...: 
1000 loops, best of 3: 487 µs per loop
```

The difference isn't as pronounced.

---

As for why I say recursion isn't Pythonic, consider the following:

```
In [6]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2);
   ...: remove_leading_items(t0, 1)
   ...: 
1 loop, best of 3: 405 ms per loop

In [7]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2);
   ...: remove_leading_items(t1, 1)
   ...: 
10000 loops, best of 3: 34.7 µs per loop

In [8]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2);
   ...: remove_leading_items(t2, 1)
   ...: 
1 loop, best of 3: 280 ms per loop
```

It performs catastrophically worse on all but the degenerate case of dropping 0 (well, 1 item).

A fast, less flexible approach
------------------------------

Now, if you know you always want a list, consider a highly iterative approach:

```
def for_loop(l, e):
    it = iter(l)
    for x in it:
        if x != e:
            break
    else:
        return []
    return [x, *it]
```

It performs better than using built-ins!
``` In [33]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t0, 1) ...: 1000 loops, best of 3: 270 µs per loop In [34]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t1, 1) ...: 10000 loops, best of 3: 50.7 µs per loop In [35]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t2, 1) ...: 10000 loops, best of 3: 160 µs per loop ``` Less fast, but more flexible! ------------------------------ Perhaps a good compromise that maintains flexibility is to use a generator-based approach: ``` In [5]: def gen_drop(l, e): ...: it = iter(l) ...: for x in it: ...: if x != e: ...: break ...: yield x ...: yield from it ...: In [6]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t0, 1)) ...: 1000 loops, best of 3: 287 µs per loop In [7]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t1, 1)) ...: 1000 loops, best of 3: 359 µs per loop In [8]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t2, 1)) ...: 1000 loops, best of 3: 324 µs per loop ``` ### Using a deque Finally, the `deque` approach: ``` In [1]: from collections import deque ...: ...: def noLeadingZero(l, e): ...: d = deque(l) ...: for x in l: ...: if e == x: ...: d.popleft() ...: else: ...: break ...: return list(d) ...: In [2]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t0, 1) ...: 1000 loops, best of 3: 873 µs per loop In [3]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t1, 1) ...: 10000 loops, best of 3: 121 µs per loop In [4]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t2, 1) ...: 1000 loops, best of 3: 502 µs per loop ```
### Code: ``` def remove_leading(a_list, to_remove): i = 0 while i < len(a_list) and a_list[i] == to_remove: i += 1 return a_list[i:] ``` ### Test Code: ``` print(remove_leading(list('aabb'), 'a')) print(remove_leading(list('baab'), 'a')) print(remove_leading([], 'a')) ``` ### Results: ``` ['b', 'b'] ['b', 'a', 'a', 'b'] [] ```
3,992
50,534,429
I made a FC neural network with numpy, based on the videos from Welch Labs, but when I try to train it I seem to have exploding gradients at launch, which is weird. I will put down the whole code, which is testable in Python 3+. Only costFunctionPrime seems to break the gradient descent, but I have no idea what is happening. Can someone smarter than me help?

EDIT: the trng\_input and trng\_output are not the ones I actually use; I train on a big dataset.

```
import numpy as np
import random

trng_input = [[random.random() for _ in range(7)] for _ in range(100)]
trng_output = [[random.random() for _ in range(2)] for _ in range(100)]

def relu(x):
    return x * (x > 0)

def reluprime(x):
    return (x>0).astype(x.dtype)

class Neural_Net():
    def __init__(self, data_input, data_output):
        self.data_input = data_input
        self.trng_output = trng_output
        self.bias = 0
        self.nodes = np.array([7, 2])
        self.LR = 0.01
        self.weightinit()
        self.training(1000, self.LR)

    def randomweight(self, n):
        output = []
        for i in range(n):
            output.append(random.uniform(-1,1))
        return output

    def weightinit(self):
        self.weights = []
        for n in range(len(self.nodes)-1):
            temp = []
            for _ in range(self.nodes[n]+self.bias):
                temp.append(self.randomweight(self.nodes[n+1]))
            self.weights.append(temp)
        self.weights = [np.array(tuple(self.weights[i])) for i in range(len(self.weights))]

    def forward(self, data):
        self.Z = []
        self.A = [np.array(data)]
        for layer in range(len(self.weights)):
            self.Z.append(np.dot(self.A[layer], self.weights[layer]))
            self.A.append(relu(self.Z[layer]))
        self.output = self.A[-1]
        return self.output

    def costFunction(self):
        self.totalcost = 0.5*sum((self.trng_output-self.output)**2)
        return self.totalcost

    def costFunctionPrime(self):
        self.forward(self.data_input)
        self.delta = [[] for x in range(len(self.weights))]
        self.DcostDw = [[] for x in range(len(self.weights))]
        for layer in reversed(range(len(self.weights))):
            Zprime = reluprime(self.Z[layer])
            if layer == len(self.weights)-1:
                self.delta[layer] = np.multiply(-(self.trng_output-self.output), Zprime)
            else:
                self.delta[layer] = np.dot(self.delta[layer+1], self.weights[layer+1].T) * Zprime
            self.DcostDw[layer] = np.dot(self.A[layer].T, self.delta[layer])
        return self.DcostDw

    def backprop(self, LR):
        self.DcostDw = (np.array(self.DcostDw)*LR).tolist()
        self.weights = (np.array(self.weights) - np.array(self.DcostDw)).tolist()

    def training(self, iteration, LR):
        for i in range(iteration):
            self.costFunctionPrime()
            self.backprop(LR)
            if (i/1000.0) == (i/1000):
                print(self.costFunction())
        print(sum(self.costFunction())/len(self.costFunction()))

NN = Neural_Net(trng_input, trng_output)
```

As asked, this is the expected result (the result I got using the sigmoid activation function):

[![](https://i.stack.imgur.com/en6ty.jpg)](https://i.stack.imgur.com/en6ty.jpg)

As you can see, the numbers are going down, so the network is training. This is the result using the relu activation function:

[![](https://i.stack.imgur.com/wQrQq.jpg)](https://i.stack.imgur.com/wQrQq.jpg)

Here, the network is stuck and isn't training; it never trains with the relu activation function, and I would like to understand why.
2018/05/25
[ "https://Stackoverflow.com/questions/50534429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7974205/" ]
This conversion works. ``` #string[] path = File.ReadLines("C:\\Users\\M\\numb.txt").ToArray(); String[] path = {"1","2","3"}; int[] numb = Array.ConvertAll(path,int.Parse); for (int i = 0; i < path.Length; i++) { Console.WriteLine(path[i]); } for (int i = 0; i < numb.Length; i++) { Console.WriteLine(numb[i]); } ```
I can't imagine this wouldn't work: ``` string[] path = File.ReadAllLines("C:\\Users\\M\\numb.txt"); int[] numb = new int[path.Length]; for (int i = 0; i < path.Length; i++) { numb[i] = int.Parse(path[i]); } ``` I think your issue is that you are using `File.ReadLines`, which reads each line into a single string. Strings have no such `ToArray` function.
3,995
45,916,726
Here is my output.txt file:

```
4f337d5000000001
4f337d5000000001
0082004600010000
0082004600010000
334f464600010000
334f464600010000
[... many values omitted ...]
334f464600010000
334f464600010000
4f33464601000100
4f33464601000100
```

How can I convert these values to decimal with the help of Python and save them into a new .txt file?
2017/08/28
[ "https://Stackoverflow.com/questions/45916726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8523601/" ]
Since the values are 16 hex digits long, I assume these are 64-bit integers you want to play with. If the file is reasonably small then you can use `read` to bring in the whole string and `split` to break it into individual values:

```
with open("newfile.txt", 'w') as out_file, open("output.txt") as in_file:
    for value in in_file.read().split():      # 'value' avoids shadowing the built-in hex()
        print(int(value, 16), file=out_file)  # int(s, 16) parses a base-16 string
```

should do this for you.
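If the values might instead represent signed (two's-complement) 64-bit integers, `int(s, 16)` alone will never produce negative numbers; a sketch of one way to reinterpret them, under that assumption, using the standard `struct` module:

```python
import struct

def hex64_to_signed(s):
    """Interpret a 16-digit hex string as a signed, big-endian 64-bit integer."""
    return struct.unpack('>q', bytes.fromhex(s))[0]

print(hex64_to_signed('4f337d5000000001'))  # positive, since the high bit is 0
print(hex64_to_signed('ffffffffffffffff'))  # -1 when read as signed
```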
You can do this:

```
with open('output.txt') as f, open("new_file.txt", "w") as new_file:
    for item in f:
        new_file.write(str(int(item, 16)) + "\n")
```
3,997
29,692,140
If we make a pathological potato like this:

```
>>> import random
>>> class Potato:
...     def __eq__(self, other):
...         return False
...     def __hash__(self):
...         return random.randint(1, 10000)
... 
>>> p = Potato()
>>> p == p
False
```

We can break sets and dicts this way (*note:* it's the same even if `__eq__` returns `True`; it's mucking with the hash that broke them):

```
>>> p in {p}
False
>>> p in {p: 0}
False
```

Also `len({p: 0, p: 0}) == 2`, and `{p: 0}[p]` raises KeyError; basically all mapping-related stuff goes out the window, as expected. But what I didn't expect is that we *can't* break lists:

```
>>> p in [p]
True
```

Why is that? It seems that `list.__contains__` iterates, but it first [checks identity](https://docs.python.org/3/c-api/object.html#c.PyObject_RichCompareBool) before checking equality. Since it is not the case that identity implies equality (see for example the NaN object), what is the reason for lists short-circuiting on identity comparisons?
2015/04/17
[ "https://Stackoverflow.com/questions/29692140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/674039/" ]
`list`, `tuple`, etc., do indeed do an identity check before an equality check, and this behavior is motivated by [these invariants](http://bugs.python.org/issue4296#msg75735):

```
assert a in [a]
assert a in (a,)
assert [a].count(a) == 1

for a in container:
    assert a in container # this should ALWAYS be true
```

Unfortunately, `dict`s, `set`s, and friends operate by hashes, so if you mess with those you can indeed effectively break them. See [this issue](http://bugs.python.org/issue4296) and [this issue](http://bugs.python.org/issue11945) for some history.
In general, breaking the assumption that identity implies equality can break a variety of things in Python. It is true that NaN breaks this assumption, and thus NaN breaks some things in Python. Discussion can be found in [this Python bug](http://bugs.python.org/issue4296). In a pre-release version of Python 3.0, reliance on this assumption was removed, but the resolution of the bug was to put it back in (i.e., make Python 3 give the same behavior as Python 2, in which the identity check shortcut is done). The [documentation](https://docs.python.org/3/reference/expressions.html#comparisons) for Python 3 correctly says: > > For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression `x in y` is equivalent to `any(x is e or x == e for e in y)`. > > > However, it appears the documentation for Python 2 is incorrect, since it says: > > For the list and tuple types, x in y is true if and only if there exists an index i such that x == y[i] is true. > > > You could raise a documentation bug about this if you want, although it is a pretty esoteric issue so I doubt it will be high on anyone's priority list.
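A quick interactive check of the identity shortcut, using NaN instead of a custom class (standard Python 3, no assumptions beyond the documented float behavior):

```python
>>> nan = float('nan')
>>> nan == nan                                # NaN breaks "identity implies equality"
False
>>> nan in [nan]                              # but containment short-circuits on identity
True
>>> any(x is nan or x == nan for x in [nan])  # the documented Python 3 equivalence
True
```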
4,000