qid (int64) | question (string) | date (string) | metadata (sequence) | response_j (string) | response_k (string) | __index_level_0__ (int64)
---|---|---|---|---|---|---|
42,289,722 | I have the Python code where I pass the JSON file:
```
def home():
with open('file.json', 'a+') as f:
return render_template('index.html', json_data=f.read())
```
The file looks like this:
```
{"hosts": [{"shortname": "serv1", "ipadr": "10.0.0.1", "longname": "server1"}, {"shortname": "serv2", "ipadr": "10.0.0.2", "longname": "server2"}]}
```
On the client side, I wrote this code
```
<table id="placar" class="table table-condensed table-bordered">
<thead>
<tr>
<th>shortname</th>
<th>longname</th>
<th>ipadress</th>
</tr>
</thead>
<tbody></tbody>
</table>
</div>
<script>
var data = {{ json_data }}
var transform = {
tag: 'tr',
children: [{
"tag": "td",
"html": "${shortname}"
}, {
"tag": "td",
"html": "${ipadr}"
}, {
"tag": "td",
"html": "${longname}"
}]
};
$('#placar > tbody ').json2html(data, transform);
</script>
```
But it doesn't work with my file; if I write a simple array it works perfectly. Can anyone say what I did wrong: passing the file or creating the table? | 2017/02/17 | [
"https://Stackoverflow.com/questions/42289722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7528895/"
] | Try this: here `DATEADD(YY, DATEDIFF(YY,0,GETDATE()), 0)` will give the start of the year
```
DA.Access_Date >= DATEADD(YEAR, -2, DATEADD(YY, DATEDIFF(YY,0,GETDATE()), 0))
``` | Your condition should be like below. `DATEADD(YEAR, DATEDIFF(YEAR, 0, GETDATE())-2, 0)` will return the first day of the year `2015`.
```
DA.Access_Date >= DATEADD(YEAR,DATEDIFF(YEAR, 0, GETDATE())-2,0)
``` | 4,213 |
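Both answers rely on the same T-SQL idiom: `DATEDIFF(YEAR, 0, GETDATE())` counts whole years since the base date `0` (1900-01-01), and adding that count back (minus two) to `0` lands on January 1st two years ago. For readers following along in Python, the equivalent date arithmetic can be sketched like this (my own illustration; the helper name is hypothetical):

```python
from datetime import date

def start_of_year_minus(years, today=None):
    # Mirrors DATEADD(YEAR, DATEDIFF(YEAR, 0, GETDATE()) - years, 0):
    # truncate "today" to January 1st, then step back whole years.
    today = today or date.today()
    return date(today.year - years, 1, 1)

cutoff = start_of_year_minus(2, today=date(2017, 2, 17))
# With "today" pinned to 2017-02-17, the cutoff is date(2015, 1, 1).
```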
60,945,866 | I've created a Flask app and am trying to dockerize it. It uses machine learning libraries; I had some problems downloading them, so my Dockerfile is a little bit messy, but the image was successfully created.
```
from alpine:latest
RUN apk add --no-cache python3-dev \
&& pip3 install --upgrade pip
WORKDIR /app
COPY . /app
FROM python:3.5
RUN pip3 install gensim
RUN pip3 freeze > requirements.txt
RUN pip3 --no-cache-dir install -r requirements.txt
EXPOSE 5000
ENV PATH=/venv/bin:$PATH
ENV FLASK_APP /sentiment-service/__init__.py
CMD ["python","-m","flask", "run", "--host", "0.0.0.0", "--port", "5000"]
```
and when I try:
docker run my\_app:latest
I get
```
/usr/local/bin/python: No module named flask
```
Of course I have Flask==1.1.1 in my requirements.txt file.
Thanks for any help! | 2020/03/31 | [
"https://Stackoverflow.com/questions/60945866",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9802634/"
] | The problem is here:
`RUN pip3 freeze > requirements.txt`
The `>` operator in bash overwrites the content of the file. If you want to append to your `requirements.txt`, consider using the `>>` operator:
`RUN pip3 freeze >> requirements.txt` | Thank you all. Finally I rebuilt my app, simplified the requirements, excluded Alpine, and used Python 3.7 in my Dockerfile.
I could run the app locally, but Docker probably could not find some file on the path, or got some other error from the app; that is why it stopped just after starting. | 4,217 |
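The `>`-versus-`>>` distinction in the accepted answer has a direct analogue in Python's file modes: `'w'` truncates like `>`, while `'a'` appends like `>>`. A small sketch (the file name is just an illustration):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "requirements.txt")

with open(path, "w") as f:   # like: pip3 freeze >  requirements.txt
    f.write("Flask==1.1.1\n")
with open(path, "w") as f:   # a second ">" wipes what was written before
    f.write("gensim\n")
overwritten = open(path).read()

with open(path, "a") as f:   # like: pip3 freeze >> requirements.txt
    f.write("Flask==1.1.1\n")
appended = open(path).read()
```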
43,648,081 | I have a pickle file that was created with python 2.7 that I'm trying to port to python 3.6. The file is saved in py 2.7 via `pickle.dumps(self.saved_objects, -1)`
and loaded in python 3.6 via `loads(data, encoding="bytes")` (from a file opened in `rb` mode). If I try opening in `r` mode and pass `encoding=latin1` to `loads` I get UnicodeDecode errors. When I open it as a byte stream it loads, but literally every string is now a byte string. Every object's `__dict__` keys are all `b"a_variable_name"` which then generates attribute errors when calling `an_object.a_variable_name` because `__getattr__` passes a string and `__dict__` only contains bytes. I feel like I've tried every combination of arguments and pickle protocols already. Apart from forcibly converting all objects' `__dict__` keys to strings I'm at a loss. Any ideas?
\*\* **Skip to 4/28/17 update for better example**
**-------------------------------------------------------------------------------------------------------------**
\*\* **Update 4/27/17**
This minimum example illustrates my problem:
**From py 2.7.13**
```
import pickle
class test(object):
def __init__(self):
self.x = u"test ¢" # including a unicode str breaks things
t = test()
dumpstr = pickle.dumps(t)
>>> dumpstr
"ccopy_reg\n_reconstructor\np0\n(c__main__\ntest\np1\nc__builtin__\nobject\np2\nNtp3\nRp4\n(dp5\nS'x'\np6\nVtest \xa2\np7\nsb."
```
**From py 3.6.1**
```
import pickle
class test(object):
def __init__(self):
self.x = "xyz"
dumpstr = b"ccopy_reg\n_reconstructor\np0\n(c__main__\ntest\np1\nc__builtin__\nobject\np2\nNtp3\nRp4\n(dp5\nS'x'\np6\nVtest \xa2\np7\nsb."
t = pickle.loads(dumpstr, encoding="bytes")
>>> t
<__main__.test object at 0x040E3DF0>
>>> t.x
Traceback (most recent call last):
File "<pyshell#15>", line 1, in <module>
t.x
AttributeError: 'test' object has no attribute 'x'
>>> t.__dict__
{b'x': 'test ¢'}
>>>
```
**-------------------------------------------------------------------------------------------------------------**
**Update 4/28/17**
To re-create my issue I'm posting my actual raw pickle data [here](https://www.dropbox.com/s/qazbnorjgxu6q6r/raw_data.pkl?dl=1)
The pickle file was created in python 2.7.13, windows 10 using
```
with open("raw_data.pkl", "wb") as fileobj:
pickle.dump(library, fileobj, protocol=0)
```
(protocol 0 so it's human readable)
To run it you'll need `classes.py`
```
# classes.py
class Library(object): pass
class Book(object): pass
class Student(object): pass
class RentalDetails(object): pass
```
And the test script here:
```
# load_pickle.py
import pickle, sys, itertools, os
raw_pkl = "raw_data.pkl"
is_py3 = sys.version_info.major == 3
read_modes = ["rb"]
encodings = ["bytes", "utf-8", "latin-1"]
fix_imports_choices = [True, False]
files = ["raw_data_%s.pkl" % x for x in range(3)]
def py2_test():
with open(raw_pkl, "rb") as fileobj:
loaded_object = pickle.load(fileobj)
print("library dict: %s" % (loaded_object.__dict__.keys()))
return loaded_object
def py2_dumps():
library = py2_test()
for protcol, path in enumerate(files):
print("dumping library to %s, protocol=%s" % (path, protcol))
with open(path, "wb") as writeobj:
pickle.dump(library, writeobj, protocol=protcol)
def py3_test():
# this test iterates over the different options trying to load
# the data pickled with py2 into a py3 environment
print("starting py3 test")
for (read_mode, encoding, fix_import, path) in itertools.product(read_modes, encodings, fix_imports_choices, files):
py3_load(path, read_mode=read_mode, fix_imports=fix_import, encoding=encoding)
def py3_load(path, read_mode, fix_imports, encoding):
from traceback import print_exc
print("-" * 50)
print("path=%s, read_mode = %s fix_imports = %s, encoding = %s" % (path, read_mode, fix_imports, encoding))
if not os.path.exists(path):
print("start this file with py2 first")
return
try:
with open(path, read_mode) as fileobj:
loaded_object = pickle.load(fileobj, fix_imports=fix_imports, encoding=encoding)
# print the object's __dict__
print("library dict: %s" % (loaded_object.__dict__.keys()))
# consider the test a failure if any member attributes are saved as bytes
test_passed = not any((isinstance(k, bytes) for k in loaded_object.__dict__.keys()))
print("Test %s" % ("Passed!" if test_passed else "Failed"))
except Exception:
print_exc()
print("Test Failed")
input("Press Enter to continue...")
print("-" * 50)
if is_py3:
py3_test()
else:
# py2_test()
py2_dumps()
```
put all 3 in the same directory and run `c:\python27\python load_pickle.py` first, which will create 1 pickle file for each of the 3 protocols. Then run the same command with Python 3 and notice that this version converts the `__dict__` keys to bytes. I had it working for about 6 hours, but for the life of me I can't figure out how I broke it again. | 2017/04/27 | [
"https://Stackoverflow.com/questions/43648081",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2682863/"
] | In short, you're hitting [bug 22005](http://bugs.python.org/issue22005) with `datetime.date` objects in the `RentalDetails` objects.
That can be worked around with the `encoding='bytes'` parameter, but that leaves your classes with `__dict__` containing bytes:
```
>>> library = pickle.loads(pickle_data, encoding='bytes')
>>> dir(library)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: '<' not supported between instances of 'str' and 'bytes'
```
It's possible to manually fix that based on your specific data:
```
def fix_object(obj):
"""Decode obj.__dict__ containing bytes keys"""
obj.__dict__ = dict((k.decode("ascii"), v) for k, v in obj.__dict__.items())
def fix_library(library):
"""Walk all library objects and decode __dict__ keys"""
fix_object(library)
for student in library.students:
fix_object(student)
for book in library.books:
fix_object(book)
for rental in book.rentals:
fix_object(rental)
```
But that's fragile and enough of a pain you should be looking for a better option.
1) Implement [`__getstate__`/`__setstate__`](https://docs.python.org/2.7/library/pickle.html#object.__getstate__) that maps datetime objects to a non-broken representation, for instance:
```
class Event(object):
"""Example class working around datetime pickling bug"""
def __init__(self):
self.date = datetime.date.today()
def __getstate__(self):
state = self.__dict__.copy()
state["date"] = state["date"].toordinal()
return state
def __setstate__(self, state):
self.__dict__.update(state)
self.date = datetime.date.fromordinal(self.date)
```
2) Don't use pickle at all. Along the lines of `__getstate__`/`__setstate__`, you can just implement `to_dict`/`from_dict` methods or similar in your classes for saving their content as json or some other plain format.
A final note, having a backreference to library in each object shouldn't be required. | >
> **Question**: Porting pickle py2 to py3 strings become bytes
>
>
>
The `encoding='latin-1'` given below is OK.
Your problem with `b''` keys is the result of using `encoding='bytes'`.
This will result in dict keys being unpickled as bytes instead of as str.
The problem data are the `datetime.date` values `'\x07á\x02\x10'`, starting at line **56** in `raw-data.pkl`.
It's a known issue, as pointed out already:
[Unpickling python2 datetime under python3](https://stackoverflow.com/questions/24805105/unpickling-python2-datetime-under-python3)
<http://bugs.python.org/issue22005>
For a workaround, I have patched `pickle.py` and got `unpickled object`, e.g.
>
> book.library.books[0].rentals[0].rental\_date=2017-02-16
>
>
>
---
This will work for me:
```
t = pickle.loads(dumpstr, encoding="latin-1")
```
>
> **Output**:
>
> <**main**.test object at 0xf7095fec>
>
> t.\_\_dict\_\_={'x': 'test ¢'}
>
> test ¢
>
>
>
***Tested with Python:3.4.2*** | 4,218 |
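The `fix_object` helper from the accepted answer assumes every key is `bytes`; a slightly more defensive variant skips keys that are already `str`. A sketch (`Dummy` is just a stand-in class):

```python
def fix_dict_keys(obj):
    # Decode any bytes keys in obj.__dict__ in place (ASCII names assumed),
    # leaving keys that are already str untouched.
    obj.__dict__ = {
        (k.decode("ascii") if isinstance(k, bytes) else k): v
        for k, v in obj.__dict__.items()
    }
    return obj

class Dummy(object):
    pass

d = Dummy()
d.__dict__[b"x"] = "test"   # what encoding='bytes' unpickling produces
d.__dict__["ok"] = 1        # an already-clean key
fix_dict_keys(d)
```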
26,963,534 | I'm trying to complete a dice game python (3.4) programming assignment for school and I'm having some trouble passing a variable from one function to another using a return statement, but when I run the program the variable "diesum" is interpreted as undefined.
```
import random
def RollDice():
die1 = random.randint(1, 6)
die2 = random.randint(1, 6)
diesum = die1 + die2
return diesum
def Craps(diesum):
craps = [2, 3, 12]
natural = [7, 11]
established = [4, 5, 6, 8, 9, 10]
if (diesum == craps):
print(die1, "+", die2, "=", diesum, ",You lost")
elif (diesum == natural):
print(die1, "+", die2, "=", diesum, ",You Win")
elif (diesum == established):
print("Point is ", diesum)
diesum = roll
while diesum == roll:
RollDice()
if diesum == roll:
print("Same Number, You Won!")
elif (diesum != 7):
print("You Win")
else:
print("You Lost!")
break
RollDice()
Craps(diesum)
``` | 2014/11/16 | [
"https://Stackoverflow.com/questions/26963534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4259262/"
] | You are not passing the result of `RollDice` into `Craps`. Try this instead:
```
result = RollDice()
Craps(result)
```
There are some other issues in the snippet that you have pasted, but this is the main reason that you are seeing an error. The `return` statement returns a value from a function. You need to bind the value to a name (`result` in my case) before you can refer to it. You could also write `Craps(RollDice())` if you do not want to capture the result into an intermediate binding. | There are many reasons why it does not work. First, simplify the problem! This is a working initial example:
```
import random
def RollDice():
die1 = random.randint(1, 6)
die2 = random.randint(1, 6)
diesum = die1 + die2
print(diesum)
return diesum
def Craps(diesum):
craps = [2, 3, 12]
natural = [7, 11]
established = [4, 5, 6, 8, 9, 10]
for x in craps:
if diesum == x:
print("> You lost")
for x in natural:
if diesum == x:
print("> You Win")
for x in established:
if diesum == x:
print("> Point is ", diesum)
diesum = RollDice()
Craps(diesum)
```
Now write the second part of the game, and be careful with indentation in Python! | 4,220 |
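Both answers work around the root cause in the question's `Craps`: `diesum == craps` compares an `int` to a `list`, which is always `False`. The idiomatic test is membership with `in`; a minimal sketch:

```python
import random

def roll_dice():
    return random.randint(1, 6) + random.randint(1, 6)

def classify(diesum):
    # Come-out roll outcome, using membership tests instead of == on lists.
    if diesum in (2, 3, 12):   # craps
        return "lose"
    if diesum in (7, 11):      # natural
        return "win"
    return "point"             # 4, 5, 6, 8, 9, 10

outcome = classify(roll_dice())
```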
64,575,063 | ```
import pandas as pd
data = pd.read_excel (r'C:\Users\royli\Downloads\Product List.xlsx',sheet_name='Sheet1' )
df = pd.DataFrame(data, columns= ['Product'])
print (df)
```
**Error Message**
```
Traceback (most recent call last):
File "main.py", line 3, in <module>
Traceback (most recent call last):
File "main.py", line 3, in <module>
data = pd.read_excel (r'C:\Users\royli\Downloads\Product List.xlsx',sheet_name='Sheet1' )
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/pandas/util/_decorators.py", line 296, in wrapper
return func(*args, **kwargs)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 304, in read_excel
io = ExcelFile(io, engine=engine)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 867, in __init__
self._reader = self._engines[engine](self._io)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/pandas/io/excel/_xlrd.py", line 22, in __init__
super().__init__(filepath_or_buffer)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 353, in __init__
self.book = self.load_workbook(filepath_or_buffer)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/pandas/io/excel/_xlrd.py", line 37, in load_workbook
return open_workbook(filepath_or_buffer)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/xlrd/__init__.py", line 111, in open_workbook
with open(filename, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\royli\\Downloads\\Product List.xlsx'
KeyboardInterrupt
``` | 2020/10/28 | [
"https://Stackoverflow.com/questions/64575063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14526349/"
] | There are 3 ways to solve this:
1. If the git repository is on your Windows machine, [configure Beyond Compare as an external difftool](https://www.scootersoftware.com/support.php?zz=kb_vcs#gitwindows), then run
`git difftool --dir-diff` to launch a diff in the Folder Compare.
2. If you can install Beyond Compare for Linux on the remote machine, another option is to configure Beyond Compare as the diff tool for git on that machine, use an X-Window client on your Windows machine to [display BC for Linux remotely](https://www.scootersoftware.com/support.php?zz=kb_linuxremote), then run `git difftool --dir-diff`.
3. Export the revisions to be compared on the Linux machine to folders, then use Beyond Compare 4 Pro's built-in SFTP support to load the two folders in the Folder Compare on your Windows machine. `bcompare.exe sftp://user@server/1 sftp://user@server/2` | I just faced a similar problem, and wrote a script to allow using Beyond Compare as a Git difftool, with BC being installed locally, and the Git workspace residing on a remote machine: <https://github.com/mbikovitsky/beyond-ssh>. | 4,221 |
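The traceback in the question is a path problem: the `/opt/virtualenvs/python3/...` frames show the interpreter is running on a Linux host, where a Windows path like `C:\Users\...` cannot exist. A guard that checks the path first makes this failure mode obvious before pandas ever opens the file (a sketch; the helper name is mine):

```python
from pathlib import Path

def checked_path(raw):
    # Fail early with a clear message instead of a deep pandas traceback.
    p = Path(raw)
    if not p.exists():
        raise FileNotFoundError(
            "%s does not exist on this machine; a Windows path is "
            "invisible to a Linux host" % p
        )
    return p

try:
    checked_path(r"C:\Users\royli\Downloads\Product List.xlsx")
    found = True
except FileNotFoundError:
    found = False
```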
69,792,060 | I'm fairly new to programming in general and have been learning python3 for the last week or so. I tried building a dice roller and ran into an issue when asking the user if they wanted to repeat the roller or end the program.
```
import random as dice
d100 = dice.randint(1,100)
d20 = dice.randint(1,20)
d10 = dice.randint(1,10)
d8 = dice.randint(1,8)
d6 = dice.randint(1,6)
d4 = dice.randint(1,4)
d2 = dice.randint(1,2)
repeat = 'Y'
while repeat == 'Y' or 'y' or 'yes' or 'Yes':
roll = (input('What would you like to roll? A d100, d20, d10, d8, d6, d4, or d2?:'))
quantity = (input('How many would you like to roll?'))
quantity = int(quantity)
if roll == 'd100':
print('You rolled a: ' + str(d100 * quantity) + '!')
elif roll == 'd20':
print('You rolled a: ' + str(d20 * quantity) + '!')
elif roll == 'd10':
print('You rolled a: ' + str(d10 * quantity) + '!')
elif roll == 'd8':
print('You rolled a: ' + str(d8 * quantity) + '!')
elif roll == 'd6':
print('You rolled a: ' + str(d6 * quantity) + '!')
elif roll == 'd4':
print('You rolled a: ' + str(d4 * quantity) + '!')
elif roll == 'd2':
print('You rolled a: ' + str(d2 * quantity) + '!')
else:
print('That is not an available die! Please select a die.')
repeat = input('Would you like to continue?: ')
if repeat == 'yes' or 'Y' or 'y' or 'Yes':
continue
```
As of right now, despite what is input for the repeat variable it always continues even if it isn't "yes", "Y", "y", or "Yes". I'm sure the answer is simple and right in front of me but I'm stumped! Thanks in advance! | 2021/11/01 | [
"https://Stackoverflow.com/questions/69792060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17296020/"
] | It's a problem of precedence: `repeat == 'Y' or 'y' or 'yes' or 'Yes'` is interpreted as `(repeat == 'Y') or 'y' or 'yes' or 'Yes'` and then it tries to check whether `'y'` counts as true, which it does (it's a non-empty string).
What you want is `while repeat in ('Y', 'y', 'yes', 'Yes'):`
By the way, you don't need the `if` statement at the end of the loop since it will exit automatically if `repeat` is something other than `'Y'`, `'y'`, `'yes'`, or `'Yes'`. | Two things
`continue` means go to the top of the loop (and then check whether to re-enter it), not guaranteed to go through the loop again. It might be better named skip because it really means "skip the rest of this iteration". Hence you don't need `if ... continue` because you're already at the end of the iteration.
The real loop control is what follows `while`. You've made a common mistake by assuming Python can group those `or` operators as one set of options opposite the `==`. It can't. Only the first string is compared to `repeat` and the others are treated as individual conditions. A string on its own is `True` as long as it's not empty. Hence Python reads that as
>
> while `repeat` is `'Y'`, or `'y'` is not empty, or `'Yes'` is not empty, or `'yes'` is not empty
>
>
>
Since all three of those strings are by definition not empty, it doesn't matter if `repeat` is `'Y'`, the whole condition will always be `True`.
The way to do multiple options for equality is
`while repeat in ('Yes', 'yes', 'Y', 'y')`
This means that `repeat` must appear in that list of options.
Note that you can simplify by normalizing or casefolding repeat.
`while repeat.upper() in ('Y', 'YES')`
Or be even simpler and less strict
`while repeat.upper().startswith('Y')`
You should also strip `repeat` to further eliminate user error of whitespace:
`while repeat.strip().upper().startswith('Y')`
Then you begin to arrive at a best practice for user-ended loops :) | 4,222 |
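The precedence pitfall both answers describe can be demonstrated directly; `buggy` below is truthy no matter what the user typed (a sketch):

```python
repeat = "n"

# Parsed as (repeat == 'Y') or 'y' or 'yes' or 'Yes': the first comparison
# is False, so the expression evaluates to the non-empty (truthy) string 'y'.
buggy = repeat == 'Y' or 'y' or 'yes' or 'Yes'

# Membership test after normalizing case and whitespace.
fixed = repeat.strip().upper() in ('Y', 'YES')
tolerant = " yes ".strip().upper() in ('Y', 'YES')
```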
71,164,536 | I'm just trying to make a very simple entry widget and grid it on the window but I keep getting an error. Anyway I can fix it?
code:
```
e = tk.Entry(root, borderwidth=5, width=35)
e.grid(root, row=0,column=0, columnspan=3, padx=10, pady=10)
```
Error:
```
Traceback (most recent call last):
File "C:\Users\mosta\PycharmProjects\pythonProject\main.py", line 298, in <module>
e.grid(root, row=0, column=0, columnspan=3, padx=10, pady=10)
File "C:\Users\mosta\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 2522, in grid_configure
self.tk.call(
_tkinter.TclError: bad option "-bd": must be -column, -columnspan, -in, -ipadx, -ipady, -padx, -pady, -row, -rowspan, or -sticky
``` | 2022/02/17 | [
"https://Stackoverflow.com/questions/71164536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You need to remove the argument `root` from the `grid` command.
```
e.grid(row=0,column=0, columnspan=3, padx=10, pady=10)
``` | By using the .place() method instead of the .grid() method, I have successfully gotten the Entry widget to work.
```
from tkinter import *
root = Tk()
e = Entry(root, borderwidth=5)
e.place(x=10, y=10, height=25, width=180)
```
I hope that this helps :-) | 4,223 |
26,313,761 | I know that [**si**](https://stackoverflow.com/questions/12160766/install-packages-with-portable-python "One Stack Overflow question.")[*mi*](https://stackoverflow.com/questions/16754614/adding-libraries-to-portable-python?rq=1 "Another Stack Overflow question.")[**la**](https://stackoverflow.com/questions/13119671/pygame-not-working-with-portable-python "A third Stack Overflow question.")[*r-*](https://stackoverflow.com/questions/2746542/importing-modules-on-portable-python?lq=1 "And a final fourth Stack Overflow question.") questions about installing modules in Portable Python have been
asked but I have looked at all of them and another [website](http://portablepython.uservoice.com "The official Portable Python issue reporting website."). I didn't have success.
For me, I wanted to **install Pygame on Portable Python 3.2.5.1 (on a memory stick).** I nearly
managed to install it successfully but when I typed `import pygame` into the shell window,
there was a weird error in one of the files, displayed in the shell. See image below:
[](https://i.stack.imgur.com/kUsaC.png)
**Update**: [*Portable Python*](http://portablepython.com/ "The Portable Python project website") at time of writing has been discontinued (not being developed anymore)
and there are other alternatives available in suggested links on their website or internet search
engine query results. I have managed to add [the Pygame Python module](http://pygame.org "The Python Pygame module website") to my version
of [one of these continuing projects](http://winpython.github.io "WinPython portable Python project website") so this question is not of use to me anymore. | 2014/10/11 | [
"https://Stackoverflow.com/questions/26313761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3787376/"
] | Implement your `Oggetto` class using standard JavaFX Properties:
```
import javafx.beans.property.BooleanProperty ;
import javafx.beans.property.IntegerProperty ;
import javafx.beans.property.SimpleBooleanProperty ;
import javafx.beans.property.SimpleIntegerProperty ;
public class Oggetto {
private final IntegerProperty value = new SimpleIntegerProperty() ;
public final IntegerProperty valueProperty() {
return value ;
}
public final int getValue() {
return value.get();
}
public final void setValue(int value) {
this.value.set(value);
}
private final BooleanProperty valid = new SimpleBooleanProperty();
public final BooleanProperty validProperty() {
return valid ;
}
public final boolean isValid() {
return valid.get();
}
public final void setValid(boolean valid) {
this.valid.set(valid);
}
public Oggetto(int value, boolean valid) {
setValue(value);
setValid(valid);
}
}
```
This may be all you need, as you can just observe the individual properties. But if you want a class that notifies invalidation listeners if either property changes, you can extend `ObjectBinding`:
```
import javafx.beans.binding.ObjectBinding ;
public class OggettoObservable extends ObjectBinding {
private final Oggetto value ;
public OggettoObservable(int value, boolean valid) {
this.value = new Oggetto(value, valid);
bind(this.value.valueProperty(), this.value.validProperty());
}
@Override
public Oggetto computeValue() {
return value ;
}
}
``` | ```
import javafx.beans.InvalidationListener;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;
public class VerySimply implements ObservableValue<Integer> {
private int newValue;
public ChangeListener<Integer> listener = new ChangeListener<Integer>() {
@Override
public void changed(ObservableValue<? extends Integer> observable, Integer oldValue, Integer newValue) {
System.out.println(" :) "+ newValue.intValue());
}
};
@Override
public void addListener(ChangeListener<? super Integer> listener) {
}
@Override
public void removeListener(ChangeListener<? super Integer> listener) {
}
@Override
public Integer getValue() {
return newValue;
}
@Override
public void addListener(InvalidationListener listener) {
}
@Override
public void removeListener(InvalidationListener listener) {
}
public void setNewValue(int newValue) {
int oldValue = this.newValue;
this.newValue = newValue;
listener.changed(this,oldValue,this.newValue);
}
}
``` | 4,224 |
60,493,027 | I am reading the book Hacking: The art of exploitation and there is a format string exploit example which attempts to overwrite an address of the dtors
with the address of a shellcode environment variable.
I work on Kali Linux 64-bit and have already found out that there are no dtors (destructors of a C program), so now I try to overwrite the fini\_array or the address of exit in ".got.plt" (I thought this would also work with partial RELRO, so not being able to write into .got.plt is the biggest problem I seek help with).
I already verified that the exploit writes the right address to the address given but when I run it with the address of fini\_array or got.plt I get a SIGSEV or "Illegal instruction" error.
After reading [this](https://mudongliang.github.io/2016/07/11/relro-a-not-so-well-known-memory-corruption-mitigation-technique.html) I think the problem is that the partial [relro](https://ctf101.org/binary-exploitation/relocation-read-only/) won't let me overwrite fini\_array since it makes fini\_array among many others readonly.
This is the python program I use to exploit the vuln program:
```
import struct
import sys
num = 0
num1 = 0
num2 = 0
num3 = 0
test_val = 0
if len(sys.argv) > 1:
num = int(sys.argv[1], 0)
if len(sys.argv) > 2:
test_val = int(sys.argv[2], 0)
if len(sys.argv) > 3:
num1 = int(sys.argv[3], 0)# - num
if len(sys.argv) > 4:
num2 = int(sys.argv[4], 0)# - num1 - num
if len(sys.argv) > 5:
num3 = int(sys.argv[5], 0)# - num2 - num1 - num
addr1 = test_val+2
addr2 = test_val+4
addr3 = test_val+6
vals = sorted(((num, test_val), (num1, addr1), (num2, addr2), (num3, addr3)))
def pad(s):
return s+"X"*(1024-len(s)-32)
exploit = ""
prev_val = 0
for val, addr in vals:
if not val:
continue
val_here = val - prev_val
prev_val = val
exploit += "%{}x".format(val_here)
if addr == test_val:
exploit += "%132$hn"
elif addr == addr1:
exploit += "%133$hn"
elif addr == addr2:
exploit += "%134$hn"
elif addr == addr3:
exploit += "%135$hn"
exploit = pad(exploit)
exploit += struct.pack("Q", test_val)
exploit += struct.pack("Q", addr1)
exploit += struct.pack("Q", addr2)
exploit += struct.pack("Q", addr3)
print pad(exploit)
```
When I pass the address of the shellcode environment variable and the address of fini\_array obtained with
```
objdump -s -j .fini_array ./vuln
```
I just get a SegmentationFault.
It is also very strange that this happens as well when I try to overwrite an address in the .got.plt section, which should not be affected by partial RELRO; that means I should be able to write to it, but in reality I can't.
Moreover "ld --verbose ./vuln" shows this:
```
.dynamic : { *(.dynamic) }
.got : { *(.got) *(.igot) }
. = DATA_SEGMENT_RELRO_END (SIZEOF (.got.plt) >= 24 ? 24 : 0, .);
.got.plt : { *(.got.plt) *(.igot.plt) }
```
This is proof that .got.plt should not be readonly but why can I not write to it then?
Now my question is which workaround (maybe some gcc options) I could use to solve my problem. Even if it was not possible to actually overwrite .fini\_array why do I have the same problem with .got.plt and how can I resolve it?
I think that the problem I have with the .got.plt section might come from the fact that I am unable to execute the shellcode as it is part of the buffer. So are there any gcc options to make the buffer executable?
Here is vuln.c:
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char *argv[]) {
char text[1024];
static int test_val = -72;
fgets(text, sizeof(text), stdin);
printf("The right way to print user-controlled input:\n");
printf("%s\n", text);
printf("The wrong way to print user-controlled input:\n");
printf(text);
printf("\n");
printf("[*] test_val @ %p = %d 0x%08x\n", &test_val, test_val, test_val);
exit(0);
}
```
I compile vuln.c with gcc 9.2.1 like this:
```
gcc -g -o vuln vuln.c
sudo chown root:root ./vuln
sudo chmod u+s ./vuln
```
This is the shellcode:
```
\x48\xbb\x2f\x2f\x62\x69\x6e\x2f\x73\x68\x48\xc1\xeb\x08\x53\x48\x89\xe7\x50\x57\x48\x89\xe6\xb0\x3b\x0f\x05
```
I exported this as a binary into the SHELLCODE variable by copying the above hex into input.txt.
Then run:
```
xxd -r -p input.txt output.bin
```
Now export it:
```
export SHELLCODE=$(cat output.bin)
```
The script getenv.c is used to get the address of Shellcode:
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char const *argv[]) {
char *ptr;
if (argc < 3) {
printf("Usage: %s <environment var> <target program name>\n", argv[0]);
exit(0);
}
ptr = getenv(argv[1]);
ptr += (strlen(argv[0]) - strlen(argv[2]))*2;
printf("%s will be at %p\n", argv[1], ptr);
return 0;
}
```
To use it run:
```
./getenvaddr SHELLCODE ./vuln
```
This tells you which address the SHELLCODE variable will have when you execute the vuln program.
Last I find the address of the exit function in the global offset table by:
```
objdump -R ./vuln
DYNAMIC RELOCATION RECORDS
OFFSET TYPE VALUE
0000000000003de8 R_X86_64_RELATIVE *ABS*+0x0000000000001170
0000000000003df0 R_X86_64_RELATIVE *ABS*+0x0000000000001130
0000000000004048 R_X86_64_RELATIVE *ABS*+0x0000000000004048
0000000000003fd8 R_X86_64_GLOB_DAT _ITM_deregisterTMCloneTable
0000000000003fe0 R_X86_64_GLOB_DAT __libc_start_main@GLIBC_2.2.5
0000000000003fe8 R_X86_64_GLOB_DAT __gmon_start__
0000000000003ff0 R_X86_64_GLOB_DAT _ITM_registerTMCloneTable
0000000000003ff8 R_X86_64_GLOB_DAT __cxa_finalize@GLIBC_2.2.5
0000000000004060 R_X86_64_COPY stdin@@GLIBC_2.2.5
0000000000004018 R_X86_64_JUMP_SLOT putchar@GLIBC_2.2.5
0000000000004020 R_X86_64_JUMP_SLOT puts@GLIBC_2.2.5
0000000000004028 R_X86_64_JUMP_SLOT printf@GLIBC_2.2.5
0000000000004030 R_X86_64_JUMP_SLOT fgets@GLIBC_2.2.5
0000000000004038 R_X86_64_JUMP_SLOT exit@GLIBC_2.2.5
```
Here the address of exit would be 0x4038
Now I write the address of the shellcode let's say 0x7fffffffe5e5 to the address of the exit function 0x4038 so that the program should be redirected into a shell instead of exiting like this:
```
python pyscript.py 0xe5e5 0x4038 0xffff 0x7fff | ./vuln
```
This is the underlying principle:
```
python pyscript.py first_to_bytes_of_shellcode exit_address second_to_bytes_of_shellcode third_to_bytes_of_shellcode optional_fourth_to_bytes_of_shellcode | ./vuln
``` | 2020/03/02 | [
"https://Stackoverflow.com/questions/60493027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12737461/"
] | Relocations and low addresses like this one:
```
0000000000003de8 R_X86_64_RELATIVE *ABS*+0x0000000000001170
```
suggest that the executable has been built as PIE (position-independent executable), with full address space layout randomization (ASLR). This means that the addresses do not match the static view from `objdump` and are randomized on each run.
Typically, building with `gcc -no-pie` disables ASLR. If you use `gcc -no-pie -Wl,-z,norelro`, you will disable (partial) RELRO as well. | Probably, you can use 「-Wl,-z,norelro」 to disable RELRO. | 4,225 |
7,097,058 | >
> **Possible Duplicate:**
>
> [How to convert strings into integers in python?](https://stackoverflow.com/questions/642154/how-to-convert-strings-into-integers-in-python)
>
>
>
I need to change a list of strings into a list of integers. How do I do this?
i.e.
('1', '1', '1', '1', '2') into (1,1,1,1,2). | 2011/08/17 | [
"https://Stackoverflow.com/questions/7097058",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/899084/"
] | Use [list comprehensions](http://docs.python.org/tutorial/datastructures.html#list-comprehensions):
```
strtuple = ('1', '1', '1', '1', '2')
intlist = [int(s) for s in strtuple]
```
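If some of the strings might not be numeric, `int()` raises `ValueError`; a small variant that skips invalid entries (the sample data here is made up):

```python
strtuple = ('1', '2', 'x', '3')

ints = []
for s in strtuple:
    try:
        ints.append(int(s))
    except ValueError:
        pass  # skip entries that are not valid integers

print(ints)  # [1, 2, 3]
```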
Stuff for completeness:
=======================
As your “list” is in truth a [tuple](http://docs.python.org/library/stdtypes.html#typesseq), i.e. an immutable list, you would have to use a generator expression together with a tuple constructor to get another tuple back:
```
inttuple = tuple(int(s) for s in strtuple)
```
The “generator expression” I mentioned looks like this when not wrapped in a constructor call; used this way, it returns a generator object.
```
intgenerator = (int(s) for s in strtuple)
``` | Use the `map` function.
```
vals = ('1', '1', '1', '1', '2')
result = tuple(map(int, vals))
print result
```
Output:
```
(1, 1, 1, 1, 2)
```
A performance comparison with the list comprehension:
```
from timeit import timeit
print timeit("map(int, vals)", "vals = '1', '2', '3', '4'")
print timeit("[int(s) for s in strlist]", "strlist = ('1', '1', '1', '1', '2')")
```
Output:
```
3.08675879197
4.08549801721
```
And with longer lists:
```
print timeit("map(int, vals)", "vals = tuple(map(str, range(10000)))", number = 1000)
print timeit("[int(s) for s in strlist]", "strlist = tuple(map(str, range(10000)))", number = 1000)
```
Output:
```
6.2849350965
7.36635214811
```
It appears that, on my machine and in this case, the `map` approach is faster than the list comprehension. | 4,226 |
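One caveat worth adding (this applies to Python 3, not the Python 2 used above): there, `map` returns a lazy iterator, so wrap it in `tuple()` or `list()`, and note it can only be consumed once:

```python
vals = ('1', '1', '1', '1', '2')

result = tuple(map(int, vals))
print(result)  # (1, 1, 1, 1, 2)

m = map(int, vals)
print(list(m))  # [1, 1, 1, 1, 2]
print(list(m))  # [] -- the iterator is already exhausted
```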
429,648 | Is there a library to do pretty on screen display with Python (mainly on Linux but preferably available on other OS too) ? I know there is python-osd but it uses [libxosd](http://sourceforge.net/projects/libxosd) which looks quite old. I would not call it *pretty*.
Maybe a Python binding for [libaosd](http://cia.vc/stats/project/libaosd). But I did not find any. | 2009/01/09 | [
"https://Stackoverflow.com/questions/429648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/49808/"
] | Actually, xosd isn't all that old; I went to university with the original author (Andre Renaud, who is a superlative programmer). It is quite low level, but pretty simple - xosd.c is only 1365 lines long. It wouldn't be hard to tweak it to display pretty much anything you want. | Using PyGTK on X it's possible to scrape the screen background and composite the image with a standard Pango layout.
I have some code that does this at <http://svn.sacredchao.net/svn/quodlibet/trunk/plugins/events/animosd.py>. It's a bit ugly and long, but mostly straightforward. | 4,231 |
64,090,872 | I have a for loop in Pygame that is trying to slowly progress through a string, like how text scrolls in RPGs. I want it to wait around 7 milliseconds before displaying the next character in the string, but I don't know how to make the loop wait that long without stopping other stuff.
Please note that I am very new to pygame and python in general.
Here is my code:
```
mainText = pygame.font.Font(mainFont, 40)
finalMessage = ""
for letter in msg:
finalMessage = finalMessage + letter
renderMainText = mainText.render(finalMessage, True, white)
screen.blit(renderMainText, (100, 100))
renderMainText = mainText.render(finalMessage, True, white)
```
Do I need to do threading? Asyncio? | 2020/09/27 | [
"https://Stackoverflow.com/questions/64090872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14089022/"
] | You don't need the `for` loop at all. You have an application loop, so use it. The number of milliseconds since `pygame.init()` can be retrieved by [`pygame.time.get_ticks()`](https://www.pygame.org/docs/ref/time.html#pygame.time.get_ticks). See [`pygame.time`](https://www.pygame.org/docs/ref/time.html) module.
```py
next_letter_time = 0
next_letter = 0
run = True
while run:
current_time = pygame.time.get_ticks()
# [...]
if next_letter < len(msg):
if current_time > next_letter_time:
            next_letter_time = current_time + 7000 # 7000 milliseconds = 7 seconds
finalMessage = finalMessage + msg[next_letter]
next_letter += 1
renderMainText = mainText.render(finalMessage, True, white)
```
---
Minimal example:
[](https://i.stack.imgur.com/eHAAr.gif)
```py
import pygame
pygame.init()
window = pygame.display.set_mode((500, 500))
clock = pygame.time.Clock()
white = (255, 255, 255)
mainText = pygame.font.SysFont(None, 50)
renderMainText = None
finalMessage = ""
msg = "test text"
next_letter_time = 0
next_letter = 0
run = True
while run:
clock.tick(60)
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
current_time = pygame.time.get_ticks()
if next_letter < len(msg):
if current_time > next_letter_time:
next_letter_time = current_time + 500
finalMessage = finalMessage + msg[next_letter]
next_letter += 1
renderMainText = mainText.render(finalMessage, True, white)
window.fill(0)
if renderMainText:
window.blit(renderMainText, (100, 100))
pygame.display.flip()
``` | use this
```
@asyncio.coroutine
def my_func():
    mainText = pygame.font.Font(mainFont, 40)
    finalMessage = ""
    for letter in msg:
        finalMessage = finalMessage + letter
        renderMainText = mainText.render(finalMessage, True, white)
        screen.blit(renderMainText, (100, 100))
        yield from asyncio.sleep(0.007)  # non-blocking pause between letters
    renderMainText = mainText.render(finalMessage, True, white)

asyncio.async(my_func())
```
`yield from` is Python 3.4 syntax; you also need `import asyncio` at the top of your file.
for more different versions check <https://docs.python.org/3/>
your function will run independently without interrupting other tasks after `asyncio.async(my_func())` | 4,234 |
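For completeness, on Python 3.7+ the same idea reads more naturally with `async`/`await`. This is a sketch of the typing effect only; integrating asyncio with pygame's own event loop takes extra care:

```python
import asyncio

async def type_text(msg, delay=0.007):
    final = ""
    for letter in msg:
        final += letter
        # here you would re-render and blit `final` onto the screen
        await asyncio.sleep(delay)  # non-blocking pause between letters
    return final

print(asyncio.run(type_text("hello")))  # hello
```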
60,775,172 | I used pyenv to install python 3.8.2 and to create a virtualenv.
In the virtualenv, I used pipenv to install `pandas`.
However, when importing pandas, I'm getting the following:
```
[...]
File "/home/luislhl/.pyenv/versions/poc-prefect/lib/python3.8/site-packages/pandas/io/common.py", line 3, in <module>
import bz2
File "/home/luislhl/.pyenv/versions/3.8.2/lib/python3.8/bz2.py", line 19, in <module>
from _bz2 import BZ2Compressor, BZ2Decompressor
ModuleNotFoundError: No module named '_bz2'
```
After some googling, I found out some people suggesting I rebuild Python from source after installing bzip2 library in my system.
However, after trying installing it with `sudo dnf install bzip2-devel` I see that I already had it installed.
As far as I know, pyenv builds python from source when installing some version.
So, why wasn't it capable of including the bzip2 module when building?
How can I manage to rebuild Python using pyenv in order to make bzip2 available?
I'm in Fedora 30
Thanks in advance
**UPDATE**
I tried installing another version of python with pyenv in verbose mode, to see the compilation output.
There is this message in the end of the compilation:
```
WARNING: The Python bz2 extension was not compiled. Missing the bzip2 lib?
```
But as I stated before, I checked I already have bzip2 installed in my system. So I don't know what to do. | 2020/03/20 | [
"https://Stackoverflow.com/questions/60775172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3477266/"
] | On macOS Big Sur, to get pyenv (via Homebrew) to work I had to install zlib and bzip2 via Homebrew and then add the exports to my ~/.zshrc (or ~/.bashrc for bash). The answer above [by luislhl](https://stackoverflow.com/q/60775172/2117661) leads the way to my solution.
```
brew install zlib bzip2
#Add the following to your ~/.zshrc
# For pyenv to build
export LDFLAGS="-L/usr/local/opt/zlib/lib -L/usr/local/opt/bzip2/lib"
export CPPFLAGS="-I/usr/local/opt/zlib/include -I/usr/local/opt/bzip2/include"
# Then the install worked
pyenv install 3.7.9
``` | Ok, I have found the solution after some time. It was simple, but I took some time to realize it.
It turns out the problem was the `bzip2-devel` I had installed was a 32-bit version.
The compilation process was looking for the 64-bit one, and didn't find it.
So I had to specifically install the 64-bit version:
```
sudo dnf install bzip2-devel-1.0.6-29.fc30.x86_64
``` | 4,235 |
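Whichever route fixed the build, a quick way to confirm the rebuilt interpreter now has the module is a small round-trip smoke test:

```python
import bz2

data = b"hello world" * 100
compressed = bz2.compress(data)
assert bz2.decompress(compressed) == data
print("bz2 OK, compressed %d -> %d bytes" % (len(data), len(compressed)))
```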
59,118,639 | On a **Ubuntu 18.04** machine I am trying to use **opencv 4.1.2** [facedetect](https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad/html/gst-plugins-bad-plugins-facedetect.html) in a **gstreamer 1.14.5** pipeline but unfortunately the plugin is not installed.
I downloaded the gstreamer [bad plugin code](https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.14.5.tar.xz) and tried to build using meson
The size of the so files created does not look right.
How do I install the opencv plugin?
```
(cv) roy@hp:~$ cat /proc/version
Linux version 5.0.0-36-generic (buildd@lgw01-amd64-060) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #39~18.04.1-Ubuntu SMP Tue Nov 12 11:09:50 UTC 2019
(cv) roy@hp:~$ which gst-inspect-1.0
/usr/bin/gst-inspect-1.0
(cv) roy@hp:~$ gst-inspect-1.0 --version
gst-inspect-1.0 version 1.14.5
GStreamer 1.14.5
https://launchpad.net/distros/ubuntu/+source/gstreamer1.0
(cv) roy@hp:~$ gst-inspect-1.0 facedetect
No such element or plugin 'facedetect'
(cv) roy@hp:~$ python
Python 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> print(cv2.__version__)
4.1.2
>>> exit()
(cv) roy@hp:~$ ls -l /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstopen*
-rw-r--r-- 1 root root 39752 Jul 4 02:16 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstopenal.so
-rw-r--r-- 1 root root 23376 Jul 4 02:16 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstopenexr.so
-rw-r--r-- 1 root root 81896 Jul 4 02:16 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstopenglmixers.so
-rw-r--r-- 1 root root 253048 Jul 3 09:19 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstopengl.so
-rw-r--r-- 1 root root 48328 Jul 4 02:16 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstopenjpeg.so
-rw-r--r-- 1 root root 27368 Jul 4 02:16 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstopenmpt.so
(cv) roy@hp:~$ ls -l gst-plugins-bad-1.14.5/gst-libs/gst/opencv/
total 84
-rw-r--r-- 1 roy roy 6395 Mar 23 2018 gstopencvutils.cpp
-rw-r--r-- 1 roy roy 1700 Mar 23 2018 gstopencvutils.h
-rw-r--r-- 1 roy roy 8871 Mar 23 2018 gstopencvvideofilter.cpp
-rw-r--r-- 1 roy roy 4559 Mar 23 2018 gstopencvvideofilter.h
-rw-r--r-- 1 roy roy 746 Mar 23 2018 Makefile.am
-rw-r--r-- 1 roy roy 38511 May 29 2019 Makefile.in
-rw-r--r-- 1 roy roy 775 Mar 23 2018 meson.build
-rw-r--r-- 1 roy roy 1082 Mar 23 2018 opencv-prelude.h
(cv) roy@hp:~$ ls -l gst-plugins-bad-1.14.5/build/gst-libs/gst/opencv/
total 0
lrwxrwxrwx 1 roy roy 21 Nov 30 08:50 libgstopencv-1.0.so -> libgstopencv-1.0.so.0
lrwxrwxrwx 1 roy roy 28 Nov 30 08:50 libgstopencv-1.0.so.0 -> libgstopencv-1.0.so.0.1405.0
(cv) roy@hp:~$
``` | 2019/11/30 | [
"https://Stackoverflow.com/questions/59118639",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1431063/"
] | Please don't dirty your Ubuntu. Prefer whichever package manager in Ubuntu you like; if you use `apt`, just install the ready-made package available for you:
```
sudo apt install libgstreamer-plugins-bad1.0-dev
``` | I had the same problem, and my solution is if you want to use the GStreamer OpenCV Plugins described [here](https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad/html/gst-plugins-bad-plugins-plugin-opencv.html) and [here](https://gstreamer.freedesktop.org/documentation/opencv/?gi-language=c) you need to do:
```
sudo apt install gstreamer1.0-opencv
```
as explained [here](https://stackoverflow.com/questions/13744763/gstreamer-opencv-edgedetect/68514027#68514027), then:
```
gst-launch-1.0 autovideosrc ! video/x-raw,width=640,height=480 ! videoconvert ! facedetect min-size-width=60 min-size-height=60 profile=/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml ! videoconvert ! xvimagesink
```
Worked successfully in my [NVIDIA® Jetson Nano™ Developer Kit](https://developer.nvidia.com/embedded/jetson-nano-developer-kit). | 4,240 |
57,502,112 | I am getting an attribute error while running the code given below:
```py
import base64
import subprocess
from __future__ import absolute_import, print_function
from pprint import pprint
import unittest
import webbrowser
import docusign_esign as docusign
from docusign_esign import AuthenticationApi, TemplatesApi,EnvelopesApi,ApiClient
from PyPDF2 import PdfFileReader
import pandas as pd
from datetime import datetime
from os import path
import requests
integrator_key = "XYZ"
base_url = "https://www.docusign.net/restapi"
oauth_base_url = "account.docusign.com" #use account-d.docusign.com for sandbox
redirect_uri = "https://www.docusign.com/api"
user_id = 'MNO'
private_key_filename = "docusign_private_key.txt"
client_secret = 'ABC' #production
account_id = 'QRS'
api_client = docusign.ApiClient(base_url)
api_client.configure_jwt_authorization_flow(integrator_key, client_secret, redirect_uri)
```
ERROR:
```
AttributeError
Traceback (most recent call last)
<ipython-input-2-1abfece08e05> in <module>()
55 api_client = docusign.ApiClient(base_url)
56 # make sure to pass the redirect uri
---> 57 api_client.configure_jwt_authorization_flow(integrator_key, client_secret, redirect_uri)
AttributeError: 'ApiClient' object has no attribute 'configure_jwt_authorization_flow'
``` | 2019/08/14 | [
"https://Stackoverflow.com/questions/57502112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11929301/"
] | >
> this html code was built automatically by jquery so I can't add an id or "onclick" event on this tag
>
>
>
If you can't control when that happens, you can still use event delegation to get involved in the click event:
```
$(document).on('click', '.fc-day-grid-event', function() {
...//
});
```
That works even if the code runs before the element exists. The code in your question only works if the element exists as of when your code runs. See [the documentation](https://api.jquery.com/on/#on-events-selector-data-handler) for details. | ```
<a onclick="doStuff(this)">Click Me</a>
``` | 4,241 |
64,525,357 | Hello, I'm new to Python.
I'm working with lists in Python and I want to convert a `list` named **graph** to a `dictionary` named **graph**.
Here is my `list`:
```py
graph = [
['01 Mai',
[
['Musset', 5],
['Place 11 Decembre 1960', 4],
["Sidi M'hamed", 3],
['El Hamma (haut)', 6]
]
],
['Musset',
[
['Place 11 Decembre 1960', 4],
["Sidi M'hamed", 3],
['El Hamma (haut)', 6],
["Jardin d'Essai (haut)", 10]
],
]
]
```
I want the list to be a dictionary like this:
```py
graph = {
'01 mai':{
'Musset':5,
'Place 11 Decembre 1960':4,
"Sidi M'hamed":3,
"El Hamma (haut)":6,
},
'Musset':{
'Place 11 Decembre 1960':4,
"Sidi M'hamed":3,
"El Hamma (haut)":6,
"Jardin d'Essai (haut)": 10,
}
}
``` | 2020/10/25 | [
"https://Stackoverflow.com/questions/64525357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11818297/"
] | A simple dict comprehension would do:
```py
as_dict = {k: dict(v) for k,v in graph}
```
[Playground](https://www.online-python.com/njzoZagLfc) | An easy solution would be:
```
d = {}
for item in graph:
    d[item[0]] = {record[0]: record[1] for record in item[1]}
``` | 4,246 |
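Both answers rely on the fact that `dict()` accepts any iterable of key/value pairs, so the inner lists convert directly; a self-contained check using a shortened version of the data from the question:

```python
pairs = [['Musset', 5], ['Place 11 Decembre 1960', 4]]
assert dict(pairs) == {'Musset': 5, 'Place 11 Decembre 1960': 4}

graph = [['01 Mai', pairs]]
as_dict = {k: dict(v) for k, v in graph}
assert as_dict == {'01 Mai': {'Musset': 5, 'Place 11 Decembre 1960': 4}}
print(as_dict)
```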
57,532,371 | I have the following 8 (possibly non-unique) lists in python:
```
>>> a = [{9: {10:11}}, {}, {}]
>>> b = [{1:2}, {3:4}, {5:6}]
>>> c = [{}, {}, {}]
>>> d = [{1:2}, {3:4}, {5:6}]
>>> w = [{}, {}, {}]
>>> x = [{1:2}, {3:4}, {5:6}]
>>> y = [{}, {}, {}]
>>> z = [{1:2}, {3:4}, {5:6}]
```
I want to check if any combination of (a,b,c,d) is the same as any combination of (w,x,y,z). IE: if `{a, b, c, d} == {w, x, y, z}`. However, because of the datatypes of these lists, I cannot easily put them into a set. They are unhashable.
What's the most pythonic way to do it?
I wanted to do the following but it didn't work:
```
>>> set([a,b,c,d]) == set([w,x,y,z])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
```
So then I tried the following, but it didn't work either:
```
set([tuple(i) for i in [a,b,c,d]]) == set([tuple(i) for i in [w,x,y,z]])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
```
How can I write something pretty and efficient that will do the comparison that I need? | 2019/08/17 | [
"https://Stackoverflow.com/questions/57532371",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1742777/"
] | You can abuse [`frozenset`](https://docs.python.org/3/library/stdtypes.html#frozenset) by turning each list of dictionaries to a frozenset of frozensets, with the internal frozensets being each dictionary's items:
```
def freeze(li):
return frozenset(frozenset(d.items()) for d in li)
a = freeze(a)
b = freeze(b)
c = freeze(c)
d = freeze(d)
w = freeze(w)
x = freeze(x)
y = freeze(y)
z = freeze(z)
print(z)
# frozenset({frozenset({(3, 4)}), frozenset({(5, 6)}), frozenset({(1, 2)})})
print({a, b, c, d} == {w, x, y, z})
# True
``` | @DeepSpace's answer works only if each sub-dict in a list is unique, since `[a, b, c, d]` should not be considered the same as `[a, a, b, c, d]`, but with @DeepSpace's use of the `set` constructor, they will be treated as the same.
To correctly account for possible duplicating items in the list, you can use `collections.Counter` instead:
```
from collections import Counter
def freeze(li):
return frozenset(frozenset(d.items()) for d in li)
print(Counter(map(freeze, [a, b, c, d])) == Counter(map(freeze, [a, a, b, c, d])))
```
Also, in case the sub-dicts contain lists or dicts as values, you can make it a recursive function instead:
```
def freeze(o):
if isinstance(o, list):
return frozenset(Counter(map(freeze, o)).items())
if isinstance(o, dict):
return frozenset((k, freeze(v)) for k, v in o.items())
return o
print(freeze([a,b,c,d]) == freeze([x,w,y,z]))
``` | 4,249 |
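The core trick in both answers is that a plain `dict` is unhashable, while a `frozenset` of its items is hashable and compares by content; illustrated in isolation:

```python
d1 = {1: 2}
d2 = {1: 2}

f1 = frozenset(d1.items())
f2 = frozenset(d2.items())

# equal contents give equal, hashable frozensets
assert f1 == f2 and hash(f1) == hash(f2)
assert len({f1, f2}) == 1  # equal dicts collapse to a single set element
print("ok")
```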
16,127,493 | This error broke my python-mysql installation on Mac 10.7.5. Here are the steps
1. The installed python is 2.7.1, mysql is 64 bit for 5.6.11.
2. The being installed python-mysql is 1.2.4, also tried 1.2.3
3. Configurations for the installation
```
1) sudo ln -s /usr/local/mysql/lib /usr/local/mysql/lib/mysql
2) Edit the setup_posix.py and change the following mysql_config.path = "mysql_config" to mysql_config.path = "/usr/local/mysql/bin/mysql_config"
3) sudo python setup.py build
```
Here is the stacktrace for build
```
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.7-intel-2.7/MySQLdb
running build_ext
building '_mysql' extension
llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -Dversion_info=(1,2,4,'final',1) -D__version__=1.2.4 -I/usr/local/mysql/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.7-intel-2.7/_mysql.o -Wno-null-conversion -Os -g -fno-strict-aliasing -arch x86_64
cc1: error: unrecognized command line option "-Wno-null-conversion"
error: command 'llvm-gcc-4.2' failed with exit status 1
```
Welcome your suggestions and ideas. Thanks. | 2013/04/21 | [
"https://Stackoverflow.com/questions/16127493",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/351637/"
] | Try removing the cflags `-Wno-null-conversion -Wno-unused-private-field` in
```
/usr/local/mysql/bin/mysql_config.
```
like:
```
cflags="-I$pkgincludedir -Wall -Os -g -fno-strict-aliasing -DDBUG_OFF -arch x86_64 " #note: end space!
``` | Wow, I've been spending a couple of hours on this, trying to 'pip install MySQL-python'. I have been re-installing Xcode 4.6.3, the Xcode command line tools separately (on Mac OS X 10.7.5), and installing Kenneth Reitz' stuff (<https://github.com/kennethreitz/osx-gcc-installer>) to no avail while I was ...
Altering the cflags options finally helped!
Thanks! | 4,251 |
57,578,345 | Suppose I have the coefficients of a polynomial. How do I write it in the usual form we use with pen and paper? E.g. if I have coefficients 1, -2, 5 and the polynomial is quadratic, the program should print `x**2-2*x+5` (`1*x**2-2*x**1+5*x**0` will also do). It is preferable that the program works for large n too, like order 20 or 30 of the polynomial, and that there is some way to put a value of x into the result; e.g. if I set x=0, the above example should return 5.
So far, I have come to know that what I am asking for is symbolic computation, and there is a ready-made package in Python called sympy for doing this, but by only using its functions I could not gain insight into the logic of writing such a function; I looked at the lengthy source code of several functions in the sympy module and got totally confused. Is there a simple way to do this, preferably without using symbolic math packages directly? | 2019/08/20 | [
"https://Stackoverflow.com/questions/57578345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10444871/"
] | Here is a program that would work, without using the external packages. I have defined a Poly class and it has two methods: 1) evaluation 2) print the polynomial.
```
class Poly():
def __init__(self, coeff):
self.coeff = coeff
self.N = len(coeff)
def evaluate(self, x):
res = 0.0
for i in range(self.N):
res += self.coeff[i] * (x**(self.N-i-1))
return res
def printPoly(self):
for i in range(self.N):
if i == self.N-1:
print("%f" % (abs(self.coeff[i])))
else:
if self.coeff[i] != 0.0:
print("%f * x**%d" % (abs(self.coeff[i]), self.N-i-1), end='')
if self.coeff[i+1] > 0:
print(" + ", end='')
else:
print(" - ", end='')
p = Poly([1,-2,5]) # creating the polynomial object.
p.printPoly() # prints: 1.000000 * x**2 - 2.000000 * x**1 + 5.000000
print(p.evaluate(0.0)) # prints: 5.0
``` | For this task, you have to use python's symbolic module ([sympy](https://www.sympy.org/en/index.html)) since you specifically want your output to be a polynomial representation. The following code should do the job.
```
import sympy
from sympy import poly
x = sympy.Symbol('x') # Create a symbol x
coefficients = [1,-2,5] # Your coefficients a python list
p1 = sum(coef*x**i for i, coef in enumerate(reversed(coefficients))) # expression to generate a polynomial from coefficients.
print p1 # print(p1), depending on your python version
```
This statement: `p1.subs('x',2)` then evaluates your polynomial 'p1' at x=2. | 4,252 |
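Both answers evaluate with explicit powers of x; Horner's rule does the same evaluation with only multiplications and additions, which scales better to order 20 or 30. A minimal sketch (the function name is mine):

```python
def horner(coeffs, x):
    """Evaluate a polynomial; coeffs run from highest to lowest degree."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([1, -2, 5], 0))  # 5  (x**2 - 2*x + 5 at x=0)
print(horner([1, -2, 5], 2))  # 5  (4 - 4 + 5)
```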
8,337,686 | Here is my `.bash_profile`
```
PYTHONPATH=".:/home/miki725/django/django:$PYTHONPATH"
export PYTHONPATH
```
So then I open python however the directory I add in `.bash_profile` is not the first one:
```
Python 2.4.3 (#1, Sep 21 2011, 20:06:00)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> for i in sys.path:
... print i
...
/usr/lib/python2.4/site-packages/setuptools-0.6c9-py2.4.egg
/usr/lib/python2.4/site-packages/flup-1.0.2-py2.4.egg
/usr/lib/python2.4/site-packages/MySQL_python-1.2.3c1-py2.4-linux-i686.egg
/usr/lib/python2.4/site-packages/django_form_utils-0.1.7-py2.4.egg
/usr/lib/python2.4/site-packages/mechanize-0.2.1-py2.4.egg
/usr/lib/python2.4/site-packages/Django-1.2.1-py2.4.egg
/usr/lib/python2.4/site-packages/mercurial-1.6-py2.4-linux-i686.egg
/usr/lib/python2.4/site-packages/lxml-2.2.7-py2.4-linux-i686.egg
/usr/lib/python2.4/site-packages/django_registration-0.7-py2.4.egg
/usr/lib/python2.4/site-packages/sorl_thumbnail-3.2.5-py2.4.egg
/usr/lib/python2.4/site-packages/South-0.7.2-py2.4.egg
/usr/lib/python2.4/site-packages/django_keyedcache-1.4_1-py2.4.egg
/usr/lib/python2.4/site-packages/django_livesettings-1.4_3-py2.4.egg
/usr/lib/python2.4/site-packages/django_app_plugins-0.1.1-py2.4.egg
/usr/lib/python2.4/site-packages/django_signals_ahoy-0.1_2-py2.4.egg
/usr/lib/python2.4/site-packages/pycrypto-2.3-py2.4-linux-i686.egg
/usr/lib/python2.4/site-packages/django_threaded_multihost-1.4_0-py2.4.egg
/usr/lib/python2.4/site-packages/PIL-1.1.7-py2.4-linux-i686.egg
/usr/lib/python2.4/site-packages/pyOpenSSL-0.11-py2.4-linux-i686.egg
/usr/lib/python2.4/site-packages/ZSI-2.0_rc3-py2.4.egg
/usr/lib/python2.4/site-packages/PyXML-0.8.4-py2.4-linux-i686.egg
/usr/lib/python2.4/site-packages/pyquery-0.6.1-py2.4.egg
/usr/lib/python2.4/site-packages/pip-1.0.1-py2.4.egg
/usr/lib/python2.4/site-packages/virtualenv-1.6.1-py2.4.egg
/usr/lib/python2.4/site-packages/simplejson-2.1.6-py2.4-linux-i686.egg
/home/miki725
/home/miki725/django/django
/usr/lib/python24.zip
/usr/lib/python2.4
/usr/lib/python2.4/plat-linux2
/usr/lib/python2.4/lib-tk
/usr/lib/python2.4/lib-dynload
/usr/lib/python2.4/site-packages
/usr/lib/python2.4/site-packages/Numeric
/usr/lib/python2.4/site-packages/PIL
/usr/lib/python2.4/site-packages/gtk-2.0
>>>
>>>
>>>
>>>
>>> import django
>>> django.__file__
'/usr/lib/python2.4/site-packages/Django-1.2.1-py2.4.egg/django/__init__.pyc'
>>>
```
How can I add to a python path in `.bash_profile` so it would be in the beginning. This is for shared hosting. I need to be able to import my django install instead of using system default.
Thank you | 2011/12/01 | [
"https://Stackoverflow.com/questions/8337686",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/485844/"
] | Your best bet is to modify `sys.path` at runtime. In a shared hosting environment it's common to do this in your .wsgi file. You could do something like this:
```
import sys
sys.path.insert(0, '/home/miki725/django/django')
```
If you add `export PYTHONSTARTUP=/home/miki725/.pythonrc` to your `.bash_profile`, you can add that to your `.pythonrc` file, and it'll be executed before an interactive prompt is shown as well. | I'd say that your `PYTHONPATH` is being modified when the [site](http://docs.python.org/release/2.4.3/lib/module-site.html) module is imported. Please have a look at the [user](http://docs.python.org/release/2.4.3/lib/module-user.html) module to provide user-specific configuration (basically just prepend the directories you're interested in to `sys.path`).
Note: `user` module is currently deprecated, but for python 2.4 this should work.
Edit: Just for completeness, for python >= 2.6 (`user` module deprecated), you should create a `usercustomize.py` file in your local `site-packages` directory as explained [here](http://docs.python.org/tutorial/interpreter.html#the-customization-modules). | 4,253 |
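Whichever hook you use (`.wsgi` file, `PYTHONSTARTUP`, or `usercustomize.py`), the runtime fix is the same `sys.path.insert(0, ...)`; inserting at index 0 is what makes your directory win over the egg entries that `site` prepends. A small demonstration (the path is the one from the question):

```python
import sys

custom = '/home/miki725/django/django'
if custom in sys.path:        # avoid duplicates if run twice
    sys.path.remove(custom)
sys.path.insert(0, custom)

assert sys.path[0] == custom  # searched before everything else
print(sys.path[0])
```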
9,252,970 | It worked when I did the poll tutorial in linux, but I'm doing it again in Windows 7, and it does nothing.
I already set the environmental variables, and set the file association to my `python27.exe`
When I run `django-admin.py startproject mysite` from the DOS command prompt, it executes, but it shows me all the information (like the options, etc.) as though I had typed the help option instead. It's not actually creating project files in my directory. I appreciate the help.
also, I tried the solution found here (it appears to be the exact same problem).
It did not work
[django-admin.py is not working properly](https://stackoverflow.com/questions/3123688/django-admin-py-is-not-working-properly) | 2012/02/12 | [
"https://Stackoverflow.com/questions/9252970",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1159856/"
] | Try to run `python27 django-admin.py startproject mysite` from the command line; maybe a different (older) python.exe executes the `django-admin.py` file. If there's a program associated with `.py` files, things get mixed up, and your `path` environment variable doesn't matter.
I suggest you to use [virtualenv](http://pypi.python.org/pypi/virtualenv). When you use it, you should put the python.exe before every `.py` file you want to run, because the install of python will associate .py files to the installed python.exe, and will use that, whatever is in your path. :( | Great answers. But unfortunately it did not work for me. This is how I solved it
1. Opened `django_admin.py` as @wynston said. But the path at the first line was already showing `#!C:\` correctly, so I did not have to change it.
2. I had to put `"..."` around `django-admin.py` address. Navigated to the project directory in `cmd.exe` and ran this
python "C:\Users\ ......\Scripts\django-admin.py" startproject projectname
It worked only with the quotation marks. I am using Anaconda Python 2.7 64 bit, on Windows 7, 64 bit. Hope it helps | 4,256 |
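The reason the quotes matter: the command line is split on whitespace, so a path containing spaces must be quoted to arrive as a single argument. Python's `shlex` mimics that splitting (the path below is made up; forward slashes avoid shlex's POSIX backslash escaping):

```python
import shlex

cmd = 'python "C:/Users/me/My Scripts/django-admin.py" startproject mysite'
argv = shlex.split(cmd)
print(argv)
assert argv == ['python', 'C:/Users/me/My Scripts/django-admin.py',
                'startproject', 'mysite']
```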
30,029,625 | I can't install [Rodeo](https://github.com/yhat/rodeo) with pip, on Ubuntu 14.04.2 LTS 64 bit (installed on a Virtual Box)
For information I'm a Python and Ubuntu beginner and I installed pip by following this [tutorial](http://www.liquidweb.com/kb/how-to-install-pip-on-ubuntu-14-04-lts/)
`pip -V`
`pip 6.1.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)`
**Problem:**
When I execute `pip install -U rodeo` I have an error message. Here is the log:
```
Did not find libzmq via pkg-config:
Package libzmq was not found in the pkg-config search path.
Perhaps you should add the directory containing `libzmq.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libzmq' found
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -c build/temp.linux-x86_64-2.7/scratch/check_sys_un.c -o build/temp.linux-x86_64-2.7/scratch/check_sys_un.o
x86_64-linux-gnu-gcc -pthread build/temp.linux-x86_64-2.7/scratch/check_sys_un.o -o build/temp.linux-x86_64-2.7/scratch/check_sys_un
Configure: Autodetecting ZMQ settings...
Custom ZMQ dir:
************************************************
creating build/temp.linux-x86_64-2.7/scratch/tmp
cc -c /tmp/timer_createSSuyTd.c -o build/temp.linux-x86_64-2.7/scratch/tmp/timer_createSSuyTd.o
cc build/temp.linux-x86_64-2.7/scratch/tmp/timer_createSSuyTd.o -o build/temp.linux-x86_64-2.7/scratch/a.out
build/temp.linux-x86_64-2.7/scratch/tmp/timer_createSSuyTd.o: In function `main':
timer_createSSuyTd.c:(.text+0x15): undefined reference to `timer_create'
collect2: error: ld returned 1 exit status
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Izmq/utils -Izmq/backend/cython -Izmq/devices -c build/temp.linux-x86_64-2.7/scratch/vers.c -o build/temp.linux-x86_64-2.7/scratch/vers.o
build/temp.linux-x86_64-2.7/scratch/vers.c:4:17: fatal error: zmq.h: No such file or directory
#include "zmq.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Failed with default libzmq, trying again with /usr/local
Configure: Autodetecting ZMQ settings...
Custom ZMQ dir: /usr/local
************************************************
cc -c /tmp/timer_createcU4dvG.c -o build/temp.linux-x86_64-2.7/scratch/tmp/timer_createcU4dvG.o
Assembler messages:
Fatal error: can't create build/temp.linux-x86_64-2.7/scratch/tmp/timer_createcU4dvG.o: No such file or directory
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/local/include -Izmq/utils -Izmq/backend/cython -Izmq/devices -c build/temp.linux-x86_64-2.7/scratch/vers.c -o build/temp.linux-x86_64-2.7/scratch/vers.o
build/temp.linux-x86_64-2.7/scratch/vers.c:4:17: fatal error: zmq.h: No such file or directory
#include "zmq.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Warning: Failed to build or run libzmq detection test.
If you expected pyzmq to link against an installed libzmq, please check to make sure:
* You have a C compiler installed
* A development version of Python is installed (including headers)
* A development version of ZMQ >= 2.1.4 is installed (including headers)
* If ZMQ is not in a default location, supply the argument --zmq=<path>
* If you did recently install ZMQ to a default location,
try rebuilding the ld cache with `sudo ldconfig`
or specify zmq's location with `--zmq=/usr/local`
You can skip all this detection/waiting nonsense if you know
you want pyzmq to bundle libzmq as an extension by passing:
`--zmq=bundled`
I will now try to build libzmq as a Python extension
unless you interrupt me (^C) in the next 10 seconds...
************************************************
1...
Using bundled libzmq
already have bundled/zeromq
attempting ./configure to generate platform.hpp
Warning: failed to configure libzmq:
/bin/sh: 1: ./configure: not found
staging platform.hpp from: buildutils/include_linux
checking for timer_create
************************************************
************************************************
creating build/temp.linux-x86_64-2.7/tmp
cc -c /tmp/timer_createmVaK_l.c -o build/temp.linux-x86_64-2.7/tmp/timer_createmVaK_l.o
cc build/temp.linux-x86_64-2.7/tmp/timer_createmVaK_l.o -o build/temp.linux-x86_64-2.7/a.out
build/temp.linux-x86_64-2.7/tmp/timer_createmVaK_l.o: In function `main':
timer_createmVaK_l.c:(.text+0x15): undefined reference to `timer_create'
collect2: error: ld returned 1 exit status
no timer_create, linking librt
Using bundled libsodium
already have bundled/libsodium
staging buildutils/include_sodium/version.h to bundled/libsodium/src/libsodium/include/sodium/version.h
already have crypto_scalarmult_curve25519.h
already have crypto_stream_salsa20.h
************************************************
************************************************
building 'zmq.libsodium' extension
creating build/temp.linux-x86_64-2.7/buildutils
creating build/temp.linux-x86_64-2.7/bundled
creating build/temp.linux-x86_64-2.7/bundled/libsodium
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_verify
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_verify/32
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_verify/32/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_verify/16
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_verify/16/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_verify/64
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_verify/64/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_sign
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_sign/ed25519
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_sign/ed25519/ref10
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_sign/edwards25519sha512batch
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_sign/edwards25519sha512batch/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_core
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_core/hsalsa20
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_core/hsalsa20/ref2
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_core/salsa2012
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_core/salsa2012/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_core/salsa208
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_core/salsa208/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_core/salsa20
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_core/salsa20/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/sodium
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_aead
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_aead/chacha20poly1305
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_aead/chacha20poly1305/sodium
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_scalarmult
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_scalarmult/curve25519
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_scalarmult/curve25519/ref10
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_auth
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_auth/hmacsha512
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_auth/hmacsha512/cp
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_auth/hmacsha512256
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_auth/hmacsha512256/cp
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_auth/hmacsha256
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_auth/hmacsha256/cp
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/randombytes
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/randombytes/sysrandom
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/randombytes/salsa20
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_pwhash
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_pwhash/scryptsalsa208sha256
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_pwhash/scryptsalsa208sha256/sse
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_pwhash/scryptsalsa208sha256/nosse
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_generichash
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_generichash/blake2
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_generichash/blake2/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_hash
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_hash/sha512
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_hash/sha512/cp
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_hash/sha256
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_hash/sha256/cp
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_onetimeauth
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_onetimeauth/poly1305
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_onetimeauth/poly1305/donna
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_box
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_box/curve25519xsalsa20poly1305
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_box/curve25519xsalsa20poly1305/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/xsalsa20
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/xsalsa20/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/salsa2012
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/salsa2012/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/aes128ctr
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/aes128ctr/portable
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/chacha20
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/chacha20/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/salsa208
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/salsa208/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/salsa20
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_stream/salsa20/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_shorthash
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_shorthash/siphash24
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_shorthash/siphash24/ref
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_secretbox
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_secretbox/xsalsa20poly1305
creating build/temp.linux-x86_64-2.7/bundled/libsodium/src/libsodium/crypto_secretbox/xsalsa20poly1305/ref
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DNATIVE_LITTLE_ENDIAN=1 -Ibundled/libsodium/src/libsodium/include -Ibundled/libsodium/src/libsodium/include/sodium -I/usr/include/python2.7 -c buildutils/initlibsodium.c -o build/temp.linux-x86_64-2.7/buildutils/initlibsodium.o
buildutils/initlibsodium.c:10:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_ricol/pyzmq/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-KXbrbW-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_ricol/pyzmq
Storing debug log for failure in /home/ricol/.pip/pip.log
```
**Edit:** I followed eandersson's answer:
```
sudo apt-get install python-dev
sudo apt-get install libzmq-dev
sudo pip install -U rodeo
sudo pip install slugify
```
But there is still a problem when I execute `rodeo .`, even after rebooting:
```
ricol@ricol-VirtualBox:~$ rodeo .
_______ ___ ______ ________ ___
|_ __ \ .' `.|_ _ `.|_ __ | .' `.
| |__) | / .-. \ | | `. \ | |_ \_|/ .-. \
| __ / | | | | | | | | | _| _ | | | |
_| | \ \_\ `-' /_| |_.' /_| |__/ |\ `-' /
|____| |___|`.___.'|______.'|________| `.___.'
''''''''''''''''''''''''''''''''''''''''''''''''''
URL: http://localhost:5000/
DIRECTORY: /home/ricol
''''''''''''''''''''''''''''''''''''''''''''''''''
(process:2429): GLib-CRITICAL **: g_slice_set_config: assertion 'sys_page_size == 0' failed
WARNING: content window passed to PrivateBrowsingUtils.isWindowPrivate. Use isContentWindowPrivate instead (but only for frame scripts).
pbu_isWindowPrivate@resource://gre/modules/PrivateBrowsingUtils.jsm:25:14
pbs<@resource://unity/observer.js:38:71
Observer.prototype.observe@resource://unity/observer.js:77:24
get_contentWindow@chrome://global/content/bindings/browser.xml:412:54
get_securityUI@chrome://global/content/bindings/browser.xml:662:17
browser_XBL_Constructor@chrome://global/content/bindings/browser.xml:786:17
WARNING: content window passed to PrivateBrowsingUtils.isWindowPrivate. Use isContentWindowPrivate instead (but only for frame scripts).
pbu_isWindowPrivate@resource://gre/modules/PrivateBrowsingUtils.jsm:25:14
pbs<@resource://unity/observer.js:38:71
Observer.prototype.observe@resource://unity/observer.js:77:24
WARNING: content window passed to PrivateBrowsingUtils.isWindowPrivate. Use isContentWindowPrivate instead (but only for frame scripts).
pbu_isWindowPrivate@resource://gre/modules/PrivateBrowsingUtils.jsm:25:14
pbs<@resource://unity/observer.js:38:71
Observer.prototype.observe@resource://unity/observer.js:77:24
[ERROR]: Exception on / [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python2.7/dist-packages/rodeo/rodeo.py", line 33, in home
dirslug = slugify.slugify(dirname)
File "/usr/local/lib/python2.7/dist-packages/slugify.py", line 26, in slugify
unicodedata.normalize('NFKD', string)
TypeError: must be unicode, not str
WARNING: content window passed to PrivateBrowsingUtils.isWindowPrivate. Use isContentWindowPrivate instead (but only for frame scripts).
pbu_isWindowPrivate@resource://gre/modules/PrivateBrowsingUtils.jsm:25:14
pbs<@resource://unity/observer.js:38:71
Observer.prototype.observe@resource://unity/observer.js:77:24
@chrome://browser/content/content.js:37:5
addTab@chrome://browser/content/tabbrowser.xml:1642:13
ssi_restoreWindow@resource:///modules/sessionstore/SessionStore.jsm:2292:1
ssi_onLoad@resource:///modules/sessionstore/SessionStore.jsm:782:11
SessionStoreInternal.onBeforeBrowserWindowShown/<@resource:///modules/sessionstore/SessionStore.jsm:948:9
Handler.prototype.process@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:744:7
WARNING: content window passed to PrivateBrowsingUtils.isWindowPrivate. Use isContentWindowPrivate instead (but only for frame scripts).
pbu_isWindowPrivate@resource://gre/modules/PrivateBrowsingUtils.jsm:25:14
pbs<@resource://unity/observer.js:38:71
Observer.prototype.observe@resource://unity/observer.js:77:24
WARNING: content window passed to PrivateBrowsingUtils.isWindowPrivate. Use isContentWindowPrivate instead (but only for frame scripts).
pbu_isWindowPrivate@resource://gre/modules/PrivateBrowsingUtils.jsm:25:14
pbs<@resource://unity/observer.js:38:71
Observer.prototype.observe@resource://unity/observer.js:77:24
``` | 2015/05/04 | [
"https://Stackoverflow.com/questions/30029625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2236787/"
] | You will need to install python-dev/libzmq-dev for the installation to succeed. The problem is that while you can install most Python libraries using pip, some of them depend on C or C++ libraries. These libraries cannot be downloaded using PIP, so they need to be installed manually.
As PIP will only install Python libraries, any external dependencies have to be installed using apt-get. In this case you need the development library for zmq and/or python.
```
sudo apt-get install libzmq-dev
```
and/or
```
sudo apt-get install python-dev
``` | As of Rodeo v2.0, it is no longer installable via pip. On Ubuntu, you can install it using the Rodeo apt repo, commands are below:
```
#### add the yhat public key and the repo
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 33D40BC6
sudo add-apt-repository -u "deb http://rodeo-deb.yhat.com/ rodeo main"
#### install rodeo and run it
sudo apt-get -y install rodeo
/opt/Rodeo/rodeo
``` | 4,266 |
21,592,965 | I am writing a small script for a Tic Tac Toe game in python. I store the Tic Tac Toe grid in a list like this (example of an empty grid): `[[' ', ' ', ' ',], [' ', ' ', ' ',], [' ', ' ', ' ',]]`. These are the possible strings for the list:
* `' '` no player has marked this field
* `'X'` player X
* `'O'` player O
I have written a function that creates an empty grid (`create_grid`), a function that creates a random grid (`create_random_grid`) (this will be used for testing purposes later) and a function that prints the grid in such a manner that it's readable to the end user (`show_grid`). I am having trouble with the `create_random_grid` function; the other two methods work though. Here is how I approached the `create_random_grid` function:
* first create an empty grid using `create_grid`
* iterate over the grid line
* iterate over the character in the line
* change the character to an item randomly selected from here: `['X', 'O', ' ']`
* `return` the grid
**NOTE:** I do not expect that exact output for the *Expected Output*. For the *Actual Output*, I do not always get exactly that but all the lines are always the same.

I do not know why all the lines are the same. It seems that the last generated line is the one that is used. I added some debug lines in my code along with examples that clearly show my problem. I added lines in my code that show me the randomly chosen mark for each slot of the grid; however, my output does not correspond to that, except for the last line where they match. I have included other important information as comments in my code. Pastebin link [here](http://pastebin.com/yuGK8bRq)
***CODE:***
```
from random import choice
def create_grid(size=3):
"""
size: int. The horizontal and vertical height of the grid. By default is set to 3 because thats normal
returns: lst. A list of lines. lines is a list of strings
Creates a empty playing field
"""
x_lst = [' '] * size # this is a horizontal line of the code
lst = [] # final list
for y in range(size): # we append size times to the lst to get the proper height
lst.append(x_lst)
return lst
def show_grid(grid):
"""
grid: list. A list of lines, where the lines are a list of string
returns: None
"""
for line in grid:
print('[' + ']['.join(line)+']') # print each symbol in a box
def create_random_grid(size=3):
"""
size: int. The horizontal and vertical height of the grid. By default is set to 3 because thats normal
returns: lst. A list of lines. lines is a list of strings
Creates a grid with random player marks, used for testing purposes
"""
grid = create_grid()
symbols = ['X', 'O', ' ']
for line in range(size):
for column in range(size):
# grid[line][column] = choice(symbols) # what I want to use, but does not work
# debug, the same version as ^^ but in its smaller steps
random_item = choice(symbols)
            print 'line: ', line, 'column: ', column, 'symbol chosen: ', random_item # shows randomly written mark for each slot
grid[line][column] = random_item # over-write the indexes of grid with the randomly chosen symbol
return grid
hardcoded_grid = [['X', ' ', 'X'], [' ', 'O', 'O'], ['O', 'X', ' ']]
grid = create_random_grid()
print('\nThe simple list view of the random grid:\n'), grid
print('\nThis grid was created using the create_random_grid method:\n')
show_grid(grid)
print('\nThis grid was hard coded (to show that the show_grid function works):\n')
show_grid(hardcoded_grid)
``` | 2014/02/06 | [
"https://Stackoverflow.com/questions/21592965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2911408/"
] | ```
x_lst = [' '] * size
lst = []
for y in range(size):
lst.append(x_lst)
```
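A minimal demonstration of the aliasing (the variable names here are illustrative, not from the question's code): every row is the *same* list object, so a write through one shows up in all three.

```python
# All three "rows" are one shared list, so one write changes every row.
row = [' '] * 3
grid = [row, row, row]
grid[0][0] = 'X'
print(grid[1][0])            # 'X' -- the "other" rows changed too
print(grid[0] is grid[1])    # True

# A fresh list per row gives independent rows.
fixed = [[' '] * 3 for _ in range(3)]
fixed[0][0] = 'X'
print(fixed[0] is fixed[1])  # False
print(fixed[1][0])           # ' '
```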
All elements of `lst` are the same list object. If you want equal but independent lists, create a new list each time:
```
lst = []
for y in range(size):
lst.append([' '] * size)
``` | Your board consists of three references to a single row. You need to make three separate rows, like so:
```
lst = [[' ']*3 for _ in range(3)]
``` | 4,267 |
59,726,776 | My question :
I was working on my computer vision project. I use opencv(4.1.2) and python to implement it.
I need a faster way to pass the captured frame into image processing on my computer (Ubuntu 18.04, 8-core i7 3.00GHz, 32GB memory). The `cv2.VideoCapture.read()` call takes about 120~140ms per frame (frame size: 720x1280), **which is too slow.** My processing module takes about 40ms per run, and we desire 25~30 FPS.
here is my demo code so far:
```
import cv2
from collections import deque
from time import sleep, time
import threading
class camCapture:
def __init__(self, camID, buffer_size):
self.Frame = deque(maxlen=buffer_size)
self.status = False
self.isstop = False
self.capture = cv2.VideoCapture(camID)
def start(self):
print('camera started!')
t1 = threading.Thread(target=self.queryframe, daemon=True, args=())
t1.start()
def stop(self):
self.isstop = True
print('camera stopped!')
def getframe(self):
print('current buffers : ', len(self.Frame))
return self.Frame.popleft()
def queryframe(self):
while (not self.isstop):
start = time()
self.status, tmp = self.capture.read()
print('read frame processed : ', (time() - start) *1000, 'ms')
self.Frame.append(tmp)
self.capture.release()
cam = camCapture(camID=0, buffer_size=50)
W, H = 1280, 720
cam.capture.set(cv2.CAP_PROP_FRAME_WIDTH, W)
cam.capture.set(cv2.CAP_PROP_FRAME_HEIGHT, H)
# start the reading frame thread
cam.start()
# filling frames
sleep(5)
while True:
frame = cam.getframe() # numpy array shape (720, 1280, 3)
cv2.imshow('video',frame)
sleep( 40 / 1000) # mimic the processing time
if cv2.waitKey(1) == 27:
cv2.destroyAllWindows()
cam.stop()
break
```
What I tried :
1. multiThread - one thread just reads the frame, the other does the image processing.
**It's NOT what I want.** I could set a buffer deque saving 50 frames, for example, but the frame-reading thread works at ~130ms per frame while my image-processing thread works at ~40ms per frame, so the deque just runs out. I've already tried this solution, but it's not what I need.
2. this [topic](https://stackoverflow.com/questions/52655841/opencv-python-multithreading-seeking-within-a-videocapture-object) is the discussion I found that is closest to my question, but unfortunately I tried both of the two solutions below the discussion and neither worked for me.
**One of the solutions (6 upvotes) points out that he could read and save 100 frames at 1-second intervals on his Mac. Why can't my machine handle the frame-reading work? Am I missing something? My installation used conda and pip: `conda install -c conda-forge opencv`, `pip install opencv-python` (yes, I tried both.)**
**The other solution (1 upvote) uses ffmpeg, but it seems to work with video files rather than camera devices?**
3. adjust cv2.waitKey():
the parameter just controls how often the video is displayed; not a solution.
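(For reference, the buffer-drain arithmetic in point 1 can be sketched without a camera; a toy tick simulation using the timings above:)

```python
from collections import deque

# Producer adds a frame every 130 "ms", consumer removes one every 40 "ms";
# any finite buffer (here a 50-frame head start) must eventually drain.
buf = deque(range(50), maxlen=50)
next_produce, next_consume = 130, 40
for t in range(0, 20000, 10):            # simulate 20 s in 10 ms ticks
    if t >= next_produce:
        buf.append(t)
        next_produce += 130
    if t >= next_consume:
        if buf:
            buf.popleft()
        next_consume += 40
print(len(buf))  # at most 1 frame left: the 50-frame cushion drains in seconds
```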
Then, I know I just need some keywords to follow.
code above is my demo code so far, I want some method or guide to make me videoCapture.read() faster. maybe a way to use multithread inside videoCapture object or other camera reading module.
Any suggestions? | 2020/01/14 | [
"https://Stackoverflow.com/questions/59726776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9383559/"
] | This comes a bit late, but I was wondering this with my Logitech C920 HD Pro USB-camera on Ubuntu 20.04 and OpenCV. I tried to command the capture session to run Full HD @ 30 FPS but the FPS was fluctuating between 4-5 FPS.
The capture format for my camera defaulted as "YUYV 4:2:2". No matter how I tried to alter the video capture settings, OpenCV did not magically change the video format to match e.g. the desired FPS setting.
When I listed the video formats for my Logitech C920, it revealed:
```
ubuntu:~$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Type: Video Capture
[0]: 'YUYV' (YUYV 4:2:2)
<clip>
Size: Discrete 1600x896
Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 1920x1080
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 2304x1296
Interval: Discrete 0.500s (2.000 fps)
[1]: 'MJPG' (Motion-JPEG, compressed)
<clip>
Size: Discrete 1920x1080
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.042s (24.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)
```
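(As an aside, a FOURCC like 'MJPG' is just four ASCII bytes packed into one 32-bit integer. A pure-Python sketch of what `cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')` computes, shown here only for reference:)

```python
def fourcc(c1, c2, c3, c4):
    # Little-endian packing of four ASCII characters into one 32-bit code.
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

code = fourcc('M', 'J', 'P', 'G')
print(code)  # 1196444237
# Decode it back to the readable tag:
print(''.join(chr((code >> (8 * i)) & 0xFF) for i in range(4)))  # MJPG
```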
The solution was to manually command the OpenCV capture device to use the compressed 'MJPG' format:
```
import numpy as np
import cv2
capture = cv2.VideoCapture(0)
W, H = 1920, 1080
capture.set(cv2.CAP_PROP_FRAME_WIDTH, W)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, H)
capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
capture.set(cv2.CAP_PROP_FPS, 30)
``` | Long.
I checked using the following settings, and somehow if you increase the frame size, OpenCV will reduce the total FPS. Maybe this is a bug.
1920x1080 : FPS: 5.0, Width: 1920.0, Height: 1080.0 , delay = 150ms
<https://imgur.com/Vab61cF>
1280x720 : FPS: 10.0, Width: 1280.0, Height: 720.0, delay = 60ms
<https://imgur.com/QN6tsAO>
640x480 : FPS: 30.0, Width: 640.0, Height: 480.0, delay = 5ms
<https://imgur.com/RqkWSdK>
But by using other applications such as *cheese*, we still get a full 30fps at 1920x1080 resolution.
Note that setting the CAP\_PROP\_BUFFERSIZE value won't help either.
So the question arises: "How can we overcome this?" At this stage, you only have 2 choices :
1. Reduce the frame resolution to 640x480
2. Use another framework
Hope this helps. | 4,268 |
1,475,193 | My class contains a socket that connects to a server. Some of the methods of the class can throw an exception. The script I'm running contains an outer loop that catches the exception, logs an error, and creates a new class instance that tries to reconnect to the server.
Problem is that the server only handles one connection at a time (by design) and the "old" socket is still connected. So the new connection attempt hangs the script. I can work around this by forcing the old socket closed, but I wonder: why doesn't the socket automatically close?
When it is "stuck", netstat shows two sockets connected to the port. The server is waiting for input from the first socket though, it isn't handling the new one yet.
I run this against a dummy server that replies "error\n" to every incoming line.
EDIT: see my comment on [Mark Rushakoff's answer below](https://stackoverflow.com/questions/1475193/what-happens-to-a-python-object-when-you-throw-an-exception-from-it/1475198#1475198). An assert(False) [that I subsequently catch] from within the exception handler seems to force the socket closed.
```
import socket
class MyException(Exception):
pass
class MyClient(object):
def __init__(self, port):
self.sock = socket.create_connection(('localhost', port))
self.sockfile = self.sock.makefile()
def do_stuff(self):
self._send("do_stuff\n")
response = self._receive()
if response != "ok\n":
raise MyException()
return response
def _send(self, cmd):
self.sockfile.write(cmd)
self.sockfile.flush()
def _receive(self):
return self.sockfile.readline()
def connect():
c = MyClient(9989)
# On the second iteration, do_stuff() tries to send data and
# hangs indefinitely.
print c.do_stuff()
if __name__ == '__main__':
for _ in xrange(3):
try:
connect()
except MyException, e:
print 'Caught:', e
# This would be the workaround if I had access to the
# MyClient object:
#c.sock.close()
#c.sockfile.close()
```
EDIT: Here's the (ugly) server code:
```
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
s.bind(('localhost', 9989))
s.listen(5)
(c,a) = s.accept()
f = c.makefile()
print f.readline()
f.write('error\n')
f.flush()
(c2,a) = s.accept()
f = c.makefile()
print f.readline()
s.close()
``` | 2009/09/25 | [
"https://Stackoverflow.com/questions/1475193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/67022/"
] | This is an artifact of garbage collection. Even though the object is *out of scope*, it is not necessarily *collected* and therefore *destroyed* until a garbage collection run occurs -- this is not like C++ where a destructor is called as soon as an object loses scope.
You can probably work around this particular issue by changing `connect` to
```
def connect():
try:
c = MyClient(9989)
# On the second iteration, do_stuff() tries to send data and
# hangs indefinitely.
print c.do_stuff()
finally:
c.sock.close()
c.sockfile.close()
```
Alternatively, you could define `__enter__` and `__exit__` for `MyClient`, and do [a with statement](http://effbot.org/zone/python-with-statement.htm):
```
def connect():
with MyClient(9989) as c:
print c.do_stuff()
```
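For that to work, the class needs the context-manager protocol. A minimal self-contained sketch of those two methods (a stand-in class, not the real `MyClient`; the `__exit__` body is where the two `close()` calls would go):

```python
class Resource(object):
    """Stand-in for MyClient: guarantees cleanup even when the body raises."""
    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.closed = True   # here: self.sock.close(); self.sockfile.close()
        return False         # don't suppress the exception

r = Resource()
try:
    with r:
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass
print(r.closed)  # True: __exit__ ran despite the exception
```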
Which is effectively the same as a try-finally. | Ok, here's the final version. Explicitly close the socket objects when something gets borked.
```
import socket
class MyException(Exception):
pass
class MyClient(object):
def __init__(self, port):
self.sock = socket.create_connection(('localhost', port))
self.sockfile = self.sock.makefile()
def close(self):
self.sock.close()
self.sockfile.close()
def do_stuff(self):
self._send("do_stuff\n")
response = self._receive()
if response != "ok\n":
raise MyException()
return response
def _send(self, cmd):
self.sockfile.write(cmd)
self.sockfile.flush()
def _receive(self):
return self.sockfile.readline()
def connect():
try:
c = MyClient(9989)
print c.do_stuff()
except MyException:
print 'Caught MyException'
finally:
c.close()
if __name__ == '__main__':
for _ in xrange(2):
connect()
``` | 4,273 |
Hello Python community, I am an Angular and Node.js developer and I want to try Python as the backend of my server. Because I am new to Python, I want to ask how to serve the dist folder, which contains all the HTML, CSS, and JS files from an Angular 4 app, in a Flask Python server
Because my app is an SPA, I have set routes inside the Angular routing component
When I visit /about or any other route, I get this string message `'./dist/index.html'`
I know I return a string message, but I want to tell Flask that whatever route the user types in the URL, it should let Angular render the page, because inside my Angular app I have set up these pages and they work
Any help on how to start with Flask and Angular to build a simple REST API would be appreciated
Now I have this file structure
```
python-angular4-app
|___ dist
| |___ index.html
| |___ style.css
| |___ inline.js
| |___ polyfill.js
| |___ vendor.js
| |___ favicon.ico
| |___ assets
|
|___ server.py
```
My server.py has this content
------------------------------
```
from flask import Flask
app = Flask(__name__, )
@app.route('/')
def main():
return './dist/index.html'
@app.route('/about')
def about():
return './dist/index.html'
@app.route('/contact')
def contact():
return './dist/index.html'
if __name__ == "__main__":
app.run(debug=True)
```
Best regards George35mk, thanks for your help | 2017/08/15 | [
"https://Stackoverflow.com/questions/45692749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6600549/"
] | Since I had this same problem, I hope this answer will help someone looking for it again.
1. First create your angular application and build it. (You will get all the required JS files and the index.html file inside the 'dist' folder.)
2. Create your python + flask web app with required end points.
```
from flask import Flask,render_template
app = Flask(__name__)
@app.route("/")
def hello():
return render_template('index.html')
if __name__ == "__main__":
app.run()
```
3. Create a folder 'templates' inside your python app root folder.
4. Copy your index.html file from the angular dist folder to newly created 'templates' folder.
5. Create another folder called 'static' inside your python app root folder
6. Then copy all the other static files (JS files and CSS files) to this new folder.
7. Update the static file URLs in your index.html file like this.
```
<script type="text/javascript" src="/static/inline.bundle.js"></script>
```
>
> Flask looks for static files inside the '/root\_folder/static' folder, so update the
> URLs relative to this structure.
>
>
>
Done. Now your app will serve on localhost:5000 and the Angular app will be served.
The final folder structure will look like this:
```
/pythondApplication
|-server.py
|-templates
| -index.html
|-static
| -js files and css files
```
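An alternative sketch that avoids copying files is to point Flask straight at the Angular build output. This is an assumption-laden variant (folder name 'dist' and the route list are taken from the question, not from the steps above), so adjust paths to your layout:

```python
from flask import Flask

# Hypothetical: serve the Angular build output directly from dist/.
app = Flask(__name__,
            static_folder='dist',   # bundles and assets live here
            static_url_path='')     # serve them at the site root

@app.route('/')
@app.route('/about')
@app.route('/contact')
def index():
    # Always hand back index.html; Angular's router picks the view client-side.
    return app.send_static_file('index.html')
```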
Since this is my first answer on Stack Overflow, if there is anything to be corrected, feel free to mention it. | I don't think that it's possible to access the Angular 'dist' directory via a REST API. Any routing should be done on the client-side with Angular, and Flask should handle your endpoints.
In terms of building your REST API, I'd recommend something like this:
```
from flask import Flask, jsonify
app = Flask(__name__)
tasks = [
{
'id': 1,
'title': u'Buy groceries',
'description': u'Milk, Cheese, Pizza, Fruit, Tylenol',
'done': False
},
{
'id': 2,
'title': u'Learn Python',
'description': u'Need to find a good Python tutorial on the web',
'done': False
}
]
@app.route('/todo/api/v1.0/tasks', methods=['GET'])
def get_tasks():
return jsonify({'tasks': tasks})
if __name__ == '__main__':
app.run(debug=True)
```
This is from a very helpful [tutorial](https://blog.miguelgrinberg.com/post/designing-a-restful-api-with-python-and-flask) on building a basic REST API in Flask.
This will then plug in very nicely to your client-side in Angular:
```
getInfo() {
  return this.http.get(
    'http://myapi/id')
    .map((res: Response) => res.json());
}
``` | 4,278 |
35,253,338 | I am able to import the pandas package within the spyder ide; however, if I attempt to open a new jupyter notebook, the import fails.
I use the Anaconda package distribution on MAC OS X.
Here is what I do:
```
In [1]: import pandas
```
and this is the response I get:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-5-97925edf8fb0> in <module>()
----> 1 import pandas
//anaconda/lib/python2.7/site-packages/pandas/__init__.py in <module>()
11 "pandas from the source directory, you may need to run "
12 "'python setup.py build_ext --inplace' to build the C "
---> 13 "extensions first.".format(module))
14
15 from datetime import datetime
ImportError: C extension: hashtable not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first.
``` | 2016/02/07 | [
"https://Stackoverflow.com/questions/35253338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3902319/"
] | You have more than one Python 2 interpreter installed: one in your main OS platform, another one inside Anaconda's virtual environment. You need to install pandas on the latter.
Run in your Bash prompt:
```
which python
```
Then run the following in Jupyter/IPython and compare the result with the output you got from the Bash script:
```
from sys import executable
print(executable)
```
If they differ, you should note the result of the latter (i.e. copy it), and then go to your Bash prompt, and do as follows:
```
<the 2nd output> -m pip install pandas
```
so it would be something **like** this:
```
/usr/bin/anaconda/python2 -m pip install pandas
```
And Pandas will be installed for your Anaconda Python.
There is a way to add library paths to your existing environment, using `sys.path.append('path to alternative locations')`, but this has to be done every time you want to use the alternative environment as the effects are temporary.
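A minimal sketch of that temporary approach (the folder and the `extra_mod` module are made up for illustration):

```python
import os
import sys
import tempfile

# Create a throwaway folder with a module in it, standing in for the
# "alternative location" that holds the library you need.
folder = tempfile.mkdtemp()
with open(os.path.join(folder, 'extra_mod.py'), 'w') as f:
    f.write('VALUE = 42\n')

sys.path.append(folder)   # effective only for the current interpreter session
import extra_mod
print(extra_mod.VALUE)
```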
You can alternatively install everything in your main environment:
```
python -m pip install cython scipy pandas matplotlib jupyter notebook ipython
```
Update:
=======
Based on responses to the above section:
Install `homebrew` like so:
In your Terminal:
```
xcode-select --install
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```
then run:
```
brew doctor
brew update
brew upgrade
```
Now go ahead and install Python 2 through Brew:
```
brew install python
```
or for Python 3
```
brew install python3
```
Or do both. Then install other useful stuff!
```
brew install git conda gfortran clang pkg-config
```
Then you can go ahead and install your desired libraries either using brew, or using `pip`, but first you should ensure that `pip` itself is installed.
```
easy_install pip
```
then you can install Python packages like so (NumPy is included in SciPy, and SciPy and Matplotlib depend on Cython and C, Scipy additionally uses fortran for ODE):
```
python2 -m pip install cython scipy pandas matplotlib jupyter
```
you can do that same thing for Python 3.
This clean install should really solve the problem. If it didn't, download Python from Python.org and re-install it. `brew` sometimes refuses to install a package if it finds out that the package already exists. I don't recommend removing Python 2 so that you can install it through `brew`. That might cause issues with OS X. So the best alternative is to repair existing installations by installing the package downloaded from the website. OS X ensures that the package is installed in the right place. Once this is done, you can then go back to the instructions, but start from `brew install python3`. | I had the same issue on Mac OS X with Anaconda (Python 2). I tried importing the pandas package in python repl, and got this error:
```
ValueError: unknown locale: UTF-8
```
Therefore, I've added the following lines to my ~/.bash_profile:
```
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
```
And this has fixed the issue for me. | 4,279 |
67,558,323 | Here's the thing, I'm building a streamlit app to get the cohorts data. Just like explained here: <https://towardsdatascience.com/a-step-by-step-introduction-to-cohort-analysis-in-python-a2cbbd8460ea>. So, basically I'm now at the point where I have a dataframe with the cohort date (cohort), the number of customers that belongs to that cohort and are buying in that month (n\_customers) and the month of the payment (order month). Now, I have to get a column with respect to the period number. What I mean is, I have this:
```
cohort order_month n_customers
2009-12 2009-12 1045
2009-12 2010-01 392
2009-12 2010-02 358
.
.
.
```
And I'm trying to get this:
```
cohort order_month n_customers period_number
2009-12 2009-12 1045 0
2009-12 2010-01 392 1
2009-12 2010-02 358 2
.
.
.
```
The name of the dataframe is df\_cohort.
So, in month 12/2009, there were 1045 customers from cohort 12/2009 buying something. In month 01/2010, there were 392 customers from cohort 12/2009 buying something. And so on. I need to create the column **period\_number** in order to build my heatmap.
I tried running this:
```
df_cohort["period_number"] = (
df_cohort - df_cohort
).apply(attrgetter("n"))
```
But I got this error:
```
AttributeError: 'Timedelta' object has no attribute 'n'
```
I needed to build the dataframe a little differently from the tutorial, that's why I have this error. Is there any way I can fix this from now on? Without changing something before, but only from this.
Regarding the data types of each column, both order_month and cohort are datetime64[ns]. | 2021/05/16 | [
"https://Stackoverflow.com/questions/67558323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15317245/"
] | The function `enumerate` returns a tuple, which is causing the `TypeError`. You can unpack the tuple into `i` and a throwaway placeholder variable, like this:
```
print(*[min(abs(i - j) for j in b) for i,_ in enumerate(a)])
```
Or alternatively, not use `enumerate` at all.
```
print(*[min(abs(i - j) for j in b) for i in range(n)])
``` | the enumerate function is used to get the index and the data of the list at the same time.
so enumerate gives,
for index, data in enumerate(a) | 4,282 |
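For reference, a tiny sketch confirming the order in which `enumerate` yields its pairs:

```python
a = ['x', 'y', 'z']

# enumerate yields (index, value) tuples, index first.
pairs = list(enumerate(a))
print(pairs)

for index, value in enumerate(a):
    assert a[index] == value   # holds for every pair
```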
54,642,243 | I'm trying to make a program in python for a data networking class to read in a file that contains 8 characters such as 00111001 and put it in a packet to then be converted to ASCII. I want to iterate through the packet and if it's a 1 then add the number in the conversion_list = [128,64,32,16,8,4,2,1] according to the index of the for loop. I can't seem to get into any of my if statements.
file contains: 0, 0, 1, 1, 1, 0, 0, 1
Here is my output:
Accecpable number of arguments
Printing files message on next line
0, 0, 1, 1, 1, 0, 0, 1
00111001
['00111001']
here
0
1
0
```
import sys

filename = sys.argv[1]
if len(sys.argv) == 2:
    print("Accecpable number of arguments")
else:
    print("Wrong number of arguments")
    sys.exit(1)

message_data = open(filename, "r")
message_text = message_data.read()
if len(message_text) == 0:
    print("Mess has zero length, " + filename + "was empty")
print("Printing files message on next line")
print(message_text)

replace_message = message_text.replace(", ", "")
print(replace_message)

packets = []
for index in range(0, len(replace_message), 8):
    substring = replace_message[index:index+8]
    packets.append(substring)
print(packets)

conversion_list = [128,64,32,16,8,4,2,1]
running_total = 0
for packets_index, value in enumerate(packets):
    if value[packets_index] == 1:
        running_total + conversion_list[packets_index]
        print(conversion_list[packets_index] + " added")
    if value[packets_index] == 0:
        print(packets_index)
        continue
print (running_total)
``` | 2019/02/12 | [
"https://Stackoverflow.com/questions/54642243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7092778/"
] | ```
int[][] winner = {{0, 1, 2}, {3, 4, 5}, {6, 7, 8}, {0, 3, 6}, {1, 4, 7}, {2, 5, 8}, {0, 4, 8}, {2, 4, 6}};
```
These are all the possible cases in which there is a winner. The first 3 are horizontal, the next 3 are vertical, and the last 2 are diagonal, where the numbers are defined like this, as indicated in the previous code:
```
0 | 1 | 2
---+---+---
3 | 4 | 5
---+---+---
6 | 7 | 8
```
Then let's analyse the core code:
```
for (int[] columnWinner : winner) { // traverses all cases
if ( // if there is a case satisfied
// for a specified case for example {0, 1, 2}
playerChoices[columnWinner[0]] == playerChoices[columnWinner[1]] && // check if choice at 0 is the same as choice at 1
playerChoices[columnWinner[1]] == playerChoices[columnWinner[2]] && // check if choice at 1 is the same as choice at 2
// then choice at 0 1 2 are the same
playerChoices[columnWinner[0]] != Player.NO // and this "the same" is not "they are all empty"
) {
// then there is a winner
``` | I am assuming you are asking about the for each loop:
```
for (int[] columnWinner : winner) {
```
The loop is called a for each loop that creates a variable and gives it a value for every iteration in the loop.
In this case, the loop creates an array of length 3 named columnWinner for each possible row, column, and diagonal on the tic-tac-toe board.
Each time through the loop, it checks whether the person has won by verifying if all three elements in the columnWinner array are the same:
```
if (playerChoices[columnWinner[0]] == playerChoices[columnWinner[1]] && playerChoices[columnWinner[1]] == playerChoices[columnWinner[2]]
```
And then checking to make sure they are filled in, instead of empty.
```
&& playerChoices[columnWinner[0]] != Player.NO) {
``` | 4,283 |
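The same winner check can be sketched in Python (a rough, hypothetical translation of the Java above, with `None` standing in for `Player.NO`):

```python
# Board cells are indexed 0-8 as in the diagram above; empty cells are None.
WINNERS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
           (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
           (0, 4, 8), (2, 4, 6)]              # diagonals

def find_winner(cells):
    for a, b, c in WINNERS:
        # All three equal, and "equal" is not "all empty".
        if cells[a] == cells[b] == cells[c] and cells[a] is not None:
            return cells[a]
    return None

board = ['X', 'X', 'X',
         'O', 'O', None,
         None, None, None]
winner = find_winner(board)
print(winner)
```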
41,042,599 | I read that it is one of the advantages of xgboost that you can train on an existing model. Say I trained my model for 100 iterations, and want to restart from there to finish another 100 iterations, instead of redoing everything from scratch.
I found this in xgboost demo examples, from here <https://github.com/dmlc/xgboost/blob/master/demo/guide-python/evals_result.py>
```
bst = xgb.train( param, dtrain, 1, watchlist )
ptrain = bst.predict(dtrain, output_margin=True)
ptest = bst.predict(dtest, output_margin=True)
dtrain.set_base_margin(ptrain)
dtest.set_base_margin(ptest)
print ('this is result of running from initial prediction')
bst = xgb.train( param, dtrain, 1, watchlist )
```
but this particular example is for objective, binary:logistic.. if I do this, I'm getting this error on `set_base_margin`
```
TypeError: only length-1 arrays can be converted to Python scalars
```
I have a model that got trained for 100 iterations.. I want to do another 100 iterations, but don't want to begin from the start again.
Any help..?? | 2016/12/08 | [
"https://Stackoverflow.com/questions/41042599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/544102/"
] | figured it out, from this issue in xgboost repo <https://github.com/dmlc/xgboost/issues/235>
>
> Yes, this is something we overlooked when designing the interface, you should be able to set\_margin with flattened array.
>
>
>
`set_base_margin` expects a 1d array, so you just need to flatten the margined predictions and then pass them to `set_base_margin`.
In the above code, add these lines before setting the base margin:
```
ptrain = ptrain.reshape(-1, 1)
ptest = ptest.reshape(-1, 1)
```
and training on the new `dtrain` with updated base margins will continue iterating from that stage | Things have changed now....
```
bst = xgb.train(param, dtrain, 1, watchlist , xgb_model=bst )
``` | 4,285 |
21,613,906 | I've got a python script that writes some data to a pipe when called:
```
def send_to_pipe(s):
    send = '/var/tmp/mypipe.pipe'
    sp = open(send, 'w')
    sp.write(json.dumps(s))
    sp.close()

if __name__ == "__main__":
    name = sys.argv[1]
    command = sys.argv[2]
    s = {"name":name, "command":command}
    send_to_pipe(s)
```
Then I have this file that keeps the pipe open indefinitely and reads data in every time the above script is called:
```
def watch_pipe():
    receive = '/var/tmp/mypipe.pipe'
    os.mkfifo(receive)
    rp = os.open(receive, os.O_RDWR | os.O_NONBLOCK)
    p = select.poll()
    p.register(rp, select.POLLIN)
    while True:
        try:
            if p.poll()[0][1] == select.POLLIN:
                data = os.read(rp,512)
                # Do some stuff with the data
        except:
            os.close(rp)
            os.unlink(receive)

if __name__ == "__main__":
    t = Thread(target=watch_pipe)
    t.start()
    # Do some other stuff that goes on indefinitely
```
This code works perfectly when I use threads. The pipe stays open, the first file writes to the pipe, and the stuff gets done. The problem is I can't stop the thread when I want to close the program. So I switched from Thread to Process:
```
p = Process(target=watch_pipe)
p.start()
```
But with a process instead of a thread, when I run the writer script, `open(send, 'w')` deletes the pipe as if it were a file I wanted to overwrite. Why is this? The permissions and ownership of the file are the same in both cases, and the writer script does not change. The only thing that changed was replacing a Thread object with an analogous Process object.
**EDIT:** After changing the open to use 'a' instead of 'w', the pipe still disappears when using a process. | 2014/02/06 | [
"https://Stackoverflow.com/questions/21613906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3272993/"
] | I think I don't understand exactly what you want.
* Option A:
  + Signal R. Server: Hosted as a Windows Service
  + Signal R. Client: ASP.NET MVC Application.
* Option B:
  + Signal R. Server: ASP.NET MVC Application.
  + Signal R. Client: Windows Service
If what you need is Option A, you might want to take a look at the "[Signal R Self-Host Tutorial](http://www.asp.net/signalr/overview/signalr-20/getting-started-with-signalr-20/tutorial-signalr-20-self-host)".
If what you need is Option B, you need to create a .Net Signal R Client in the Windows Service. Please check out [this tutorial](http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-net-client) on how to do so.
Regardless of your hosting type each hub has a unique name which you need to use when establishing the connection.
Regarding:
>
> I'll have that in my Windows Server but if I send text to
> <http://myserver.com/signalr> on the server, how do I get it? What hub?
>
>
>
Signal R is an abstraction of realtime dual channel communication, so it really depends on the setup of your Hub.
Regarding:
>
> Also where best to put this in the Windows service? An example would
> be great!
>
>
>
I would say, go simple and start by declaring a [Singleton](http://msdn.microsoft.com/en-us/library/ff650316.aspx) to start.
Hope it helps. | This tutorial may help, and it also includes some sample code:
<http://www.asp.net/signalr/overview/signalr-20/getting-started-with-signalr-20/tutorial-getting-started-with-signalr-20> | 4,286 |
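Coming back to the named-pipe question above: a minimal stdlib sketch (using a temp path instead of /var/tmp) suggesting that, in a single process at least, `open(path, 'w')` on an existing FIFO writes into it rather than replacing it with a regular file:

```python
import os
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'mypipe.pipe')
os.mkfifo(path)

# Hold a non-blocking read end open, as watch_pipe() does, so the writer
# below does not block waiting for a reader.
rp = os.open(path, os.O_RDWR | os.O_NONBLOCK)

with open(path, 'w') as sp:          # same call as send_to_pipe()
    sp.write('{"name": "test"}')

received = os.read(rp, 512).decode()
still_fifo = stat.S_ISFIFO(os.stat(path).st_mode)
os.close(rp)
print(received, still_fifo)
```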
49,835,559 | After installing Ubuntu as WSL (Windows Subsystem for Linux) I've run:
```
root@teclast:~# python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 ...
```
and try to access this web server from my windows machine at `http://0.0.0.0:8000` or `http://192.168.1.178:8000`, but with no success; the web server is available only at `http://127.0.0.1:8000` or `http://localhost:8000`, which means I can't connect to this web server from another pc in my network. Is it possible to get access to WSL from outside? | 2018/04/14 | [
"https://Stackoverflow.com/questions/49835559",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1441863/"
] | Please follow the steps mentioned in the [link](https://www.nextofwindows.com/allow-server-running-inside-wsl-to-be-accessible-outside-windows-10-host) shared by @erazerbrecht and run your HTTP server by providing your ip address (instead of using localhost) and port number.
example:
```
Serving HTTP on 192.168.1.178 port 8000 (http://192.168.1.178:8000/) ...
```
Otherwise you can also do this instead of following the [link](https://www.nextofwindows.com/allow-server-running-inside-wsl-to-be-accessible-outside-windows-10-host):
1. Go to Windows Defender Firewall -> `Advanced settings` from the left menu
2. Select `inbound`
3. Create `new rule`; next
4. Select `Program` as a rule type; next
5. Select `All Program`; next
6. Select `allow the connection`; next
7. Check all 3 (Domain, Private, Public); next
8. Provide rule a name
9. Finish
10. You are good to go | I followed [the answer by @toran-sahu](https://stackoverflow.com/a/51998308/8917310) about adding an inbound rule to Windows Defender Firewall but recently (after adding a 2nd wsl2 instance) it stopped working again. I came across [this issue thread](https://github.com/microsoft/WSL/issues/4204) and running the following in cmd prompt got it working again for me.
```
wsl --shutdown
```
update: it seems this issue comes from having Fast Startup enabled <https://stackoverflow.com/a/66793101/8917310> | 4,291 |
39,878,262 | I have a very large log file which contains service restart messages. After I initiate a service restart with an external command, I need to tail this log file from the last occurrence of the reboot message and check the following messages to confirm a correct restart procedure. I'm analysing the messages in python, so I only need to find the last occurrence and follow the file; then I check the output line-by-line and simply close the connection when I have read everything I need.
```
.... # lots of previous data
[timestamp] previous message
[timestamp] Rebooting... # from this point i need to track messages
[timestamp] doing thing
[timestamp] doing other thing
[timestamp] doing final thing # final point, reboot successful
[timestamp] service activity message #
```
How can I perform such tailing?
```
tail -f <from last Rebooting... message>
``` | 2016/10/05 | [
"https://Stackoverflow.com/questions/39878262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1777415/"
] | give a generous buffer value, reverse, extract, reverse
```
$ tail -1000 file | tac | awk '1,/Rebooting/' | tac
```
or, replace `awk` script with `!p; /Rebooting/{p=1}` | Perhaps something like:
```
tail -fn +$(awk '/Rebooting/ { line = NR } END { print(line) }' log) log
```
which uses `awk` to find the line number of the last occurrence of the pattern and then tails starting at that line.
This still scans the entire file, though.
If you're really doing it from python, you can probably do better by searching the file in reverse directly in python. | 4,301 |
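A rough sketch of that pure-python route (the file name, marker, and the simple end-to-start scan are all illustrative choices; a real follower would keep polling for new lines afterwards, tail -f style):

```python
import os
import tempfile

# Build a small sample log like the one in the question.
log_path = os.path.join(tempfile.mkdtemp(), 'service.log')
with open(log_path, 'w') as f:
    f.write('[t1] previous message\n'
            '[t2] Rebooting...\n'
            '[t3] doing thing\n'
            '[t4] doing final thing\n')

with open(log_path) as f:
    lines = f.readlines()

# Scan from the end so the LAST "Rebooting" marker wins.
start = 0
for idx in range(len(lines) - 1, -1, -1):
    if 'Rebooting' in lines[idx]:
        start = idx
        break

tail = lines[start:]
print(''.join(tail), end='')
```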
18,520,203 | I just installed the 'eve demo' but I can't get it to start working.
The error is:
>
> eve.io.base.ConnectionException: Error initializing the driver. Make sure the database server is running. Driver exception: OperationFailure(u"command SON([('authenticate', 1), ('user', u'user'), ('nonce', u'cec66353cb35b6f5'), ('key', u'14817e596653376514b76248055e1d4f')]) failed: auth fails",)
>
>
>
I have mongoDB running, and I have installed [Eve](http://python-eve.org) and Python2.7.
I created the [run.py](http://pastebin.com/HVJykT43) and the [settings.py](http://pastebin.com/jNukQAvW "settings.py") required.
What is not working? Am I missing something? | 2013/08/29 | [
"https://Stackoverflow.com/questions/18520203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2615737/"
] | It looks like the MongoDB user/pw combo you configured in your `settings.py` has not been set at the db level. From the mongo shell type `use <dbname>`, then `db.system.users.find()` to get a list of authorized users for `<dbname>`. It is probably empty; add the user as needed (see the [MongoDB docs](http://docs.mongodb.org/manual/reference/method/db.addUser/)). | 1. Get your mongodb's dbname, username and password from settings.py, e.g.:
```
MONGO_USERNAME = 'username'
MONGO_PASSWORD = 'password'
MONGO_DBNAME = 'apitest'
```
2. Log in to the mongod server with mongo, and make sure your username is in dbname's system.users collection. You can query authenticated users in that database with the following operation:
```
use apitest
db.system.users.find()
```
3. If the username doesn't exist in system.users, then you can use the db.addUser command to add a user to the system.users collection, e.g.:
```
use apitest
db.addUser('username', 'password')
``` | 4,302 |
74,618,712 | I want to read the data in an excel file on the F drive. I am using python in Visual Studio Code to try to achieve this, however I am getting an error as seen in the pictures below. I installed pandas but I still get an error. How can I fix this issue?
[Coding Error](https://i.stack.imgur.com/XFyH4.png)
[Installed Pandas Library](https://i.stack.imgur.com/p1aiN.png)
I tried closing and opening visual studio. I tried uninstalling and reinstalling pandas.
[Python on Computer](https://i.stack.imgur.com/d3xEp.png) | 2022/11/29 | [
"https://Stackoverflow.com/questions/74618712",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20636206/"
] | You should try to open a terminal in VS Code and run `pip freeze` (and `pip3 freeze`). Check whether you find pandas in the results; it likely won't be there, because you have multiple installations of Python on your system. You may do any one of the below:
1. Get rid of all but one Python installation.
2. Install pandas on the VS Code installed python instance.
3. Configure the same installation of python that is referenced by your command prompt. (choose the correct python interpreter from VS Code Command Palette) | To read an Excel file with Python, you need to install the pandas library. To install pandas, open the command line or terminal and type:
```
pip install pandas
```
Once pandas is installed, you can read an Excel file like this:
```
import pandas as pd
df = pd.read_excel('file_name.xlsx')
print(df)
```
You should also make sure that the file path is correct and that you have the correct permissions to access the file.
If you are still having issues, try using the absolute file path instead of a relative file path. | 4,303 |
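A quick, hypothetical self-check for the multiple-interpreter theory above, run from the editor in question:

```python
import sys

# Which Python is actually executing, and is pandas importable from it?
print(sys.executable)
try:
    import pandas
    status = 'pandas found'
except ImportError:
    status = 'pandas is missing in this interpreter'
print(status)
```

If the path printed here differs from the one your terminal's `pip` installs into, that mismatch explains the import error.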
23,516,150 | I created a thread for a keylogger that logs in parallel to another thread that produces some sounds ( I want to catch reaction times).
Unfortunately, the thread never finishes although I invoke killKey() and "invoked killkey()" is printed.
I always get thread.isActive() = true from this thread.
```
class KeyHandler(threading.Thread):
    hm = pyHook.HookManager()

    def __init__(self):
        threading.Thread.__init__(self)

    def OnKeyboardCharEvent(self,event):
        print 'Key:', event.Key
        if event.Key=='E':
            ...
        return True

    def killKey(self):
        KeyHandler.hm.UnhookKeyboard()
        ctypes.windll.user32.PostQuitMessage(0)
        print "invoked killkey()"

    def run(self):
        print "keyHandlerstartetrunning"
        KeyHandler.hm.KeyDown = self.OnKeyboardCharEvent
        KeyHandler.hm.HookKeyboard()
        #print "keyboardhooked"
        pythoncom.PumpMessages()
```
To be more precise,
ctypes.windll.user32.PostQuitMessage(0) does nothing.
I would favor an external timeout that invokes killKey(), respectively ctypes.windll.user32.PostQuitMessage(0), in this thread. | 2014/05/07 | [
"https://Stackoverflow.com/questions/23516150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2720827/"
] | PostQuitMessage has to be posted from the same thread. To do so you need to introduce a global variable `STOP_KEY_HANDLER`. If you want to quit then just set global `STOP_KEY_HANDLER = True` from any thread you want and it will quit with the next keystroke. Your key handler has to run on the main thread.
```
STOP_KEY_HANDLER = False

def main():
    pass # here do all you want
    #bla bla
    global STOP_KEY_HANDLER
    STOP_KEY_HANDLER = True # This will kill KeyHandler

class KeyHandler:
    hm = pyHook.HookManager()

    def OnKeyboardCharEvent(self,event):
        if STOP_KEY_HANDLER:
            self.killKey()
        print 'Key:', event.Key
        if event.Key=='E':
            pass
        return True

    def killKey(self):
        global STOP_KEY_HANDLER
        if not STOP_KEY_HANDLER:
            STOP_KEY_HANDLER = True
            return None
        KeyHandler.hm.UnhookKeyboard()
        ctypes.windll.user32.PostQuitMessage(0)
        print "invoked killkey()"

    def _timeout(self):
        if self.timeout:
            time.sleep(self.timeout)
            self.killKey()

    def run(self, timeout=False):
        print "keyHandlerstartetrunning"
        self.timeout = timeout
        threading.Thread(target=self._timeout).start()
        KeyHandler.hm.KeyDown = self.OnKeyboardCharEvent
        KeyHandler.hm.HookKeyboard()
        #print "keyboardhooked"
        pythoncom.PumpMessages()

k = KeyHandler()
threading.Thread(target=main).start()
k.run(timeout=100) # You can specify the timeout in seconds or you can kill it directly by setting STOP_KEY_HANDLER to True.
``` | I guess pbackup's solution is fine. Just to conclude: I found a solution by simply sending a key myself instead of waiting for user input. It's probably not the best but it was the fastest and goes parallel in my timing thread with the other timing routines.
```
STOP_KEY_HANDLER = True
# send key to kill handler - not pretty but works
for hwnd in get_hwnds_for_pid(GUIWINDOW_to_send_key_to.pid):
    win32gui.PostMessage(hwnd, win32con.WM_KEYDOWN, win32con.VK_F5, 0)
# sleep to make sure processing is done
time.sleep(0.1)
# kill window
finished()
``` | 4,304 |
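The global-flag idea above is not pyHook-specific; a minimal sketch of the same shutdown pattern using the stdlib's `threading.Event` instead of a bare global:

```python
import threading
import time

stop_event = threading.Event()
result = []

def worker():
    # Stand-in for the keystroke/message loop: check the flag each iteration.
    while not stop_event.is_set():
        time.sleep(0.01)
    result.append('stopped cleanly')

t = threading.Thread(target=worker)
t.start()
stop_event.set()        # request shutdown from any thread
t.join(timeout=2)
print(result, t.is_alive())
```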
60,961,248 | ```py
bigger_list_of_names = ['Jim', 'Bob', 'Fred', 'Cam', 'Reagan','Alejandro','Dee','Rana','Denisha','Nicolasa','Annett','Catrina','Louvenia','Emmanuel','Dina','Jasmine','Shirl','Jene','Leona','Lise','Dodie','Kanesha','Carmela','Yuette',]
name_list = ['Jim', 'Bob', 'Fred', 'Cam']

search_people = re.compile(r'\b({})\b'.format(r'|'.join(name_list)), re.IGNORECASE)
print(search_people)

for names in bigger_list_of_names:
    found_them = search_people.search(names, re.IGNORECASE | re.X)
    print(names)
    if found_them:
        print('I found this person: {}'.format(found_them.group()))
    else:
        print('Did not find them')
```
The issue I am having is the regex does not find the names at all and keeps hitting the `else:`
I have tried `re.search`, `re.findall`, `re.find`, `re.match`, `re.fullmatch`, etc. They all return `None`. The only way for it to find anything is if I use `re.finditer` but that would not allow me to use `.group()`.
The output of the `re.compile` is `re.compile('\\b(Jim|Bob|Fred|Cam)\\b', re.IGNORECASE)`
I tested it on <https://regex101.com/> ([](https://i.stack.imgur.com/lL3mh.png)) and it looks like it's working, but not in python.
Here is my console output:
[](https://i.stack.imgur.com/kxdcc.png)
Am I missing anything? | 2020/03/31 | [
"https://Stackoverflow.com/questions/60961248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13176034/"
] | The second argument to a compiled regular expression is the position in the string to start searching, not flags to use with the regex (the third, also optional argument, is the ending position to search). See the docs for [Regular expression objects](https://docs.python.org/3/library/re.html?highlight=re#re.Pattern.search) for details.
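A tiny illustration of that point (the sample pattern and strings here are arbitrary):

```python
import re

# Flags belong in re.compile; the second argument of .search() is a
# start position (pos), not a flags value.
pattern = re.compile(r'\b(jim|bob)\b', re.IGNORECASE)

m1 = pattern.search('Jim and Bob')       # starts at index 0, matches 'Jim'
m2 = pattern.search('Jim and Bob', 4)    # starts at index 4, matches 'Bob'
print(m1.group(), m2.group())
```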
If you want to specify a case-insensitive search, pass `re.IGNORECASE` to `re.compile`. For this regex, `re.X` isn't needed. | What you are trying to do, does not require a regex search. You can achieve the same as follows.
```py
search_result = []
targets = set(name_list)

for name in set(bigger_list_of_names):
    if name in targets:
        search_result.append(name)
        print(f'Found name: {name}')
    else:
        print(f'Did not find name: {name}')
print(search_result)
```
**Shorter version** using list-comprehension
```py
search_result = [name for name in set(bigger_list_of_names) if name in targets]
``` | 4,305 |
41,965,187 | To test my tensorflow installation I am using the mnist example provided in tensorflow repository, but when I execute the convolutional.py script I have this output:
```
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcurand.so.8.0 locally
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 980 Ti
major: 5 minor: 2 memoryClockRate (GHz) 1.2405
pciBusID 0000:03:00.0
Total memory: 5.93GiB
Free memory: 5.83GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x29020c0
E tensorflow/core/common_runtime/direct_session.cc:137] Internal: failed initializing StreamExecutor for CUDA device ordinal 1: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_INVALID_DEVICE
Traceback (most recent call last):
File "convolutional.py", line 339, in <module>
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "convolutional.py", line 284, in main
with tf.Session() as sess:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1187, in __init__
super(Session, self).__init__(target, graph, config=config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 552, in __init__
self._session = tf_session.TF_NewDeprecatedSession(opts, status)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InternalError: Failed to create session.
```
My first idea was that maybe I had problems in cuda installation but I tested using one of the examples provided for nvidia. In this case I used this example:
>
> NVIDIA\_CUDA-8.0\_Samples/6\_Advanced/c++11\_cuda
>
>
>
And the output is this:
```
GPU Device 0: "GeForce GTX 980 Ti" with compute capability 5.2
Read 3223503 byte corpus from ./warandpeace.txt
counted 107310 instances of 'x', 'y', 'z', or 'w' in "./warandpeace.txt"
```
Then my conclusion is that cuda is installed correctly, but I do not have any idea what is happening here. If someone can help me I will appreciate it.
For more information, this is my GPU configuration:
```
Tue Jan 31 19:42:10 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57 Driver Version: 367.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 560 Ti Off | 0000:01:00.0 N/A | N/A |
| 25% 45C P0 N/A / N/A | 463MiB / 958MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 980 Ti Off | 0000:03:00.0 Off | N/A |
| 0% 31C P8 13W / 280W | 1MiB / 6077MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
```
**EDIT:**
Is it normal that the two NVIDIA cards have the same physical id?
```
sudo lshw -C "display"
*-display
description: VGA compatible controller
product: GM200 [GeForce GTX 980 Ti]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:03:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:50 memory:f9000000-f9ffffff memory:b0000000-bfffffff memory:c0000000-c1ffffff ioport:d000(size=128) memory:fa000000-fa07ffff
*-display
description: VGA compatible controller
product: GF114 [GeForce GTX 560 Ti]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:45 memory:f6000000-f7ffffff memory:c8000000-cfffffff memory:d0000000-d3ffffff ioport:e000(size=128) memory:f8000000-f807ffff
``` | 2017/01/31 | [
"https://Stackoverflow.com/questions/41965187",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3094625/"
] | The important points in the output you have shown are these:
```
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 980 Ti
major: 5 minor: 2 memoryClockRate (GHz) 1.2405
pciBusID 0000:03:00.0
Total memory: 5.93GiB
Free memory: 5.83GiB
```
i.e. the compute device you want is enumerated as device 0 and
```
E tensorflow/core/common_runtime/direct_session.cc:137] Internal: failed initializing StreamExecutor for CUDA device ordinal 1: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_INVALID_DEVICE
```
i.e. the compute device generating the error is enumerated as device 1. Device 1 is your display GPU, which can't be used for computation in Tensorflow. If you either mark that device as compute prohibited with `nvidia-smi`, or use the `CUDA_VISIBLE_DEVICES` environment variable to only make your compute device visible to CUDA, the error should probably disappear. | I encountered a similar error when I attempted to run the `classify_image.py` script that is part of [the image recognition tutorial](https://www.tensorflow.org/tutorials/image_recognition). Since I already had a running Python session (elpy) in which I had run some TensorFlow code, the GPUs were allocated there and thus were not available for the script I was attempting to run from shell.
Quitting the existing Python session resolved the error. | 4,307 |
39,775,489 | I'm trying to push a new git repo upstream using the gitpython module. Below are the steps that I'm doing, and I get an error 128.
```
# Initialize a local git repo
init_repo = Repo.init(gitlocalrepodir+"%s" %(gitinitrepo))
# Add a file to this new local git repo
init_repo.index.add([filename])
# Initial commit
init_repo.index.commit('Initial Commit - %s' %(timestr))
# Create remote
init_repo.create_remote('origin', giturl+gitinitrepo+'.git')
# Push upstream (Origin)
init_repo.remotes.origin.push()
```
While executing the push(), gitpython throws an exception:
```
'git push --porcelain origin' returned with exit code 128
```
Access to github is via SSH.
Do you see anything wrong in what I'm doing? | 2016/09/29 | [
"https://Stackoverflow.com/questions/39775489",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2170456/"
] | I tracked it down with a similar approach:
```
import git

class ProgressPrinter(git.RemoteProgress):
def line_dropped(self, line):
print("line dropped : " + str(line))
```
Which you can then call in your code:
```
init_repo.remotes.origin.push(progress=ProgressPrinter())
``` | You need to capture the output from the git command.
Given this Progress class:
```
import git
from git import GitCommandError

class Progress(git.RemoteProgress):
def __init__( self ):
super().__init__()
self.__all_dropped_lines = []
def update( self, op_code, cur_count, max_count=None, message='' ):
pass
def line_dropped( self, line ):
if line.startswith( 'POST git-upload-pack' ):
return
self.__all_dropped_lines.append( line )
def allErrorLines( self ):
return self.error_lines() + self.__all_dropped_lines
def allDroppedLines( self ):
return self.__all_dropped_lines
```
You can write code like this:
```
progress = Progress()
try:
for info in remote.push( progress=progress ):
# call info_callback with the push commands info
info_callback( info )
for line in progress.allDroppedLines():
log.info( line )
except GitCommandError:
for line in progress.allErrorLines():
log.error( line )
raise
```
When you run with this you will still get the 128 error, but you will also have the output of git to explain the problem. | 4,308 |
19,546,631 | I'm trying to extract values from numerous text files in python. The numbers I require are in scientific notation form. My result text files are as follows:
```
ADDITIONAL DATA
Tip Rotation (degrees)
Node , UR[x] , UR[y] , UR[z]
21 , 1.0744 , 1.2389 , -4.3271
22 , -1.0744 , -1.2389 , -4.3271
53 , 0.9670 , 1.0307 , -3.8990
54 , -0.0000 , -0.0000 , -3.5232
55 , -0.9670 , -1.0307 , -3.8990
Mean rotation variation along blade
Region , Rotation (degrees)
Partition line 0, 7.499739E-36
Partition line 1, -3.430092E-01
Partition line 2, -1.019287E+00
Partition line 3, -1.499808E+00
Partition line 4, -1.817651E+00
Partition line 5, -2.136372E+00
Partition line 6, -2.448321E+00
Partition line 7, -2.674414E+00
Partition line 8, -2.956737E+00
Partition line 9, -3.457806E+00
Partition line 10, -3.995106E+00
```
I've been using regexp successfully in the past, but it doesn't seem to want to pick up the numbers. The number of nodes changes in my results file, so I can't search by line. My python script is as follows.
```
import re
from pylab import *
from scipy import *
import matplotlib
from numpy import *
import numpy as np
from matplotlib import pyplot as plt
import csv
########################################
minTheta = -90
maxTheta = 0
thetaIncrements = 10
numberOfPartitions = 10
########################################
numberOfThetas = ((maxTheta - minTheta)/thetaIncrements)+1
print 'Number of thetas = '+str(numberOfThetas)
thetas = linspace(minTheta,maxTheta,numberOfThetas)
print 'Thetas = '+str(thetas)
part = linspace(1,numberOfPartitions,numberOfPartitions)
print 'Parts = '+str(part)
meanRotations = np.zeros((numberOfPartitions+1,numberOfThetas))
#print meanRotations
theta = minTheta
n=0
m=0
while theta <= maxTheta:
fileName = str(theta)+'.0.txt'
#print fileName
regexp = re.compile(r'Partition line 0, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[0,m]=(float((match.group(1))))
regexp = re.compile(r'Partition line 1, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[1,m]=(float((match.group(1))))
regexp = re.compile(r'Partition line 2, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[2,m]=(float((match.group(1))))
regexp = re.compile(r'Partition line 3, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[3,m]=(float((match.group(1))))
regexp = re.compile(r'Partition line 4, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[4,m]=(float((match.group(1))))
regexp = re.compile(r'Partition line 5, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[5,m]=(float((match.group(1))))
regexp = re.compile(r'Partition line 6, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[6,m]=(float((match.group(1))))
regexp = re.compile(r'Partition line 7, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[7,m]=(float((match.group(1))))
regexp = re.compile(r'Partition line 8, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[8,m]=(float((match.group(1))))
regexp = re.compile(r'Partition line 9, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[9,m]=(float((match.group(1))))
regexp = re.compile(r'Partition line 10, .*?([-+0-9.E]+)')
with open(fileName) as f:
for line in f:
match = regexp.match(line)
if match:
print (float((match.group(1))))
meanRotations[10,m]=(float((match.group(1))))
m=m+1
theta = theta+thetaIncrements
print 'Mean rotations on partition lines = '
print meanRotations
```
Any help would be much appreciated!! | 2013/10/23 | [
"https://Stackoverflow.com/questions/19546631",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2739143/"
] | Is this file format a standard one? If so, you can get all your float values with another technique.
So, here is the code:
```py
str = """ ADDITIONAL DATA
Tip Rotation (degrees)
Node , UR[x] , UR[y] , UR[z]
21 , 1.0744 , 1.2389 , -4.3271
22 , -1.0744 , -1.2389 , -4.3271
53 , 0.9670 , 1.0307 , -3.8990
54 , -0.0000 , -0.0000 , -3.5232
55 , -0.9670 , -1.0307 , -3.8990
Mean rotation variation along blade
Region , Rotation (degrees)
Partition line 0, 7.499739E-36
Partition line 1, -3.430092E-01
Partition line 2, -1.019287E+00
Partition line 3, -1.499808E+00
Partition line 4, -1.817651E+00
Partition line 5, -2.136372E+00
Partition line 6, -2.448321E+00
Partition line 7, -2.674414E+00
Partition line 8, -2.956737E+00
Partition line 9, -3.457806E+00
Partition line 10, -3.995106E+00
"""
arr = str.split()
for index in enumerate(arr):
print index # just to see the list
start = 59 # from this position the numbers begin
step = 4 # current number is each fourth
ar = []
for j in range(start, len(arr), step):
ar.append(arr[j])
floatAr = []
# or you can use this expression instead of the following loop
# floatAr = [float(x) for x in ar]
for n in range(len(ar)):
floatAr.append(float(ar[n]))
print floatAr
```
At the end you will receive a list called **floatAr** with all your float values. You can add a *try-except* block for better usability.
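For example, a minimal sketch of that *try-except* guard around the float conversion (the sample values here are just an assumption for illustration):

```python
values = ["7.499739E-36", "-3.430092E-01", "not-a-number"]

floatAr = []
for v in values:
    try:
        floatAr.append(float(v))
    except ValueError:
        pass  # skip tokens that are not valid floats

print(floatAr)  # [7.499739e-36, -0.3430092]
```

This way a stray token in the file no longer aborts the whole extraction.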
Or, alternatively, if you want to use regex, here is the code:
```python
str = """ ADDITIONAL DATA
Tip Rotation (degrees)
Node , UR[x] , UR[y] , UR[z]
21 , 1.0744 , 1.2389 , -4.3271
22 , -1.0744 , -1.2389 , -4.3271
53 , 0.9670 , 1.0307 , -3.8990
54 , -0.0000 , -0.0000 , -3.5232
55 , -0.9670 , -1.0307 , -3.8990
Mean rotation variation along blade
Region , Rotation (degrees)
Partition line 0, 7.499739E-36
Partition line 1, -3.430092E-01
Partition line 2, -1.019287E+00
Partition line 3, -1.499808E+00
Partition line 4, -1.817651E+00
Partition line 5, -2.136372E+00
Partition line 6, -2.448321E+00
Partition line 7, -2.674414E+00
Partition line 8, -2.956737E+00
Partition line 9, -3.457806E+00
Partition line 10, -3.995106E+00"""
regex = r'\s-?[1-9]+[0-9]*\.?[0-9]*E-?\+?[0-9]+\s?'
import re
values = re.findall(regex, str)
floatAr = [float(x) for x in values]
print floatAr
```
By the way, here is a good on-line regex checker for python [pythex](https://pythex.org/) | I don't get the need for regex, to be honest. Something like this should do what you need:
```
with open(fileName) as f:
for line in f:
if line.startswith('Partition line'):
number=float(line.split(',')[1])
print number # or do whatever you want with it
# read other file contents with different if clauses
``` | 4,309 |
58,033,457 | Hey, I'm trying to create a PostgreSQL db container. I'm running it using the command:
```
docker-compose up
```
on the following compose file:
```
version: '3.1'
services:
db:
image: postgres
restart: always
environment:
POSTGRES_USERNAME: admin
POSTGRES_PASSWORD: admin
POSTGRES_DB: default_db
ports:
- 54320:5432
```
However, when I try to connect to it using the following python code:
```
import sqlalchemy
engine = sqlalchemy.create_engine('postgres://admin:admin@localhost:54320/default_db')
engine.connect()
```
I get the following error:
```
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: password authentication failed for user "admin"
```
Does anyone know why this happens? | 2019/09/20 | [
"https://Stackoverflow.com/questions/58033457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5273907/"
] | Using POSTGRES\_USER instead of POSTGRES\_USERNAME solved this for me. | You should use POSTGRES\_USER instead of POSTGRES\_USERNAME.
Here is my postgres docker-compose configuration for your reference.
```
version: '3'
services:
postgres:
image: 'mdillon/postgis:latest'
environment:
- TZ=Asia/Shanghai
- POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
ports:
- '15432:5432'
``` | 4,310 |
46,708,708 | I'm looking at the best way to compare strings in a python function compiled using numba jit (nopython mode, python 3).
The use case is the following:
```
import numba as nb
@nb.jit(nopython = True, cache = True)
def foo(a, t = 'default'):
if t == 'awesome':
return(a**2)
elif t == 'default':
return(a**3)
else:
...
```
However, the following error is returned:
```
Invalid usage of == with parameters (str, const('awesome'))
```
I tried using bytes but couldn't succeed.
Thanks !
---
Maurice pointed out the question [Python: can numba work with arrays of strings in nopython mode?](https://stackoverflow.com/questions/32056337/python-can-numba-work-with-arrays-of-strings-in-nopython-mode) but I'm looking at native python and not the numpy subset supported in numba. | 2017/10/12 | [
"https://Stackoverflow.com/questions/46708708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3640767/"
] | For newer numba versions (0.41.0 and later)
===========================================
Numba (since version 0.41.0) supports [`str` in nopython mode](http://numba.pydata.org/numba-doc/0.42.0/reference/pysupported.html#str) and the code as written in the question will "just work". However, for your example, comparing the strings is **much** slower than the operation itself, so if you want to use strings in numba functions make sure the overhead is worth it.
```
import numba as nb
@nb.njit
def foo_string(a, t):
if t == 'awesome':
return(a**2)
elif t == 'default':
return(a**3)
else:
return a
@nb.njit
def foo_int(a, t):
if t == 1:
return(a**2)
elif t == 0:
return(a**3)
else:
return a
assert foo_string(100, 'default') == foo_int(100, 0)
%timeit foo_string(100, 'default')
# 2.82 µs ± 45.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit foo_int(100, 0)
# 213 ns ± 10.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
In your case the code is more than 10 times slower using strings.
Since your function doesn't do much it could be better and faster to do the string comparison in Python instead of numba:
```
def foo_string2(a, t):
if t == 'awesome':
sec = 1
elif t == 'default':
sec = 0
else:
sec = -1
return foo_int(a, sec)
assert foo_string2(100, 'default') == foo_string(100, 'default')
%timeit foo_string2(100, 'default')
# 323 ns ± 10.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
This is still a bit slower than the pure integer version but it's almost 10 times faster than using the string in the numba function.
But if you do a lot of numerical work in the numba function the string comparison overhead won't matter. But simply putting `numba.njit` on a function, especially if it doesn't do many array operations or number crunching, won't make it automatically faster!
For older numba versions (before 0.41.0):
=========================================
Numba doesn't support strings in `nopython` mode.
From the [documentation](http://numba.pydata.org/numba-doc/0.40.0/reference/pysupported.html#built-in-types):
>
> 2.6.2. Built-in types
> ---------------------
>
>
> ### 2.6.2.1. int, bool [...]
>
>
> ### 2.6.2.2. float, complex [...]
>
>
> ### 2.6.2.3. tuple [...]
>
>
> ### 2.6.2.4. list [...]
>
>
> ### 2.6.2.5. set [...]
>
>
> ### 2.6.2.7. bytes, bytearray, memoryview
>
>
> The `bytearray` type and, on Python 3, the `bytes` type support indexing, iteration and retrieving the `len()`.
>
>
> [...]
>
>
>
So strings aren't supported at all and bytes don't support equality checks.
However you can pass in `bytes` and iterate over them. That makes it possible to write your own comparison function:
```
import numba as nb
@nb.njit
def bytes_equal(a, b):
if len(a) != len(b):
return False
for char1, char2 in zip(a, b):
if char1 != char2:
return False
return True
```
Unfortunately the next problem is that numba cannot "lower" bytes, so you cannot hardcode the bytes in the function directly. But bytes are basically just integers, and the `bytes_equal` function works for all types that numba supports, that have a length and can be iterated over. So you could simply store them as lists:
```
import numba as nb
@nb.njit
def foo(a, t):
if bytes_equal(t, [97, 119, 101, 115, 111, 109, 101]):
return a**2
elif bytes_equal(t, [100, 101, 102, 97, 117, 108, 116]):
return a**3
else:
return a
```
or as global arrays (thanks @chrisb - see comments):
```
import numba as nb
import numpy as np
AWESOME = np.frombuffer(b'awesome', dtype='uint8')
DEFAULT = np.frombuffer(b'default', dtype='uint8')
@nb.njit
def foo(a, t):
if bytes_equal(t, AWESOME):
return a**2
elif bytes_equal(t, DEFAULT):
return a**3
else:
return a
```
Both will work correctly:
```
>>> foo(10, b'default')
1000
>>> foo(10, b'awesome')
100
>>> foo(10, b'awe')
10
```
However, you cannot specify a bytes array as default, so you need to explicitly provide the `t` variable. Also it feels hacky to do it that way.
My opinion: Just do the `if t == ...` checks in a normal function and call specialized numba functions inside the `if`s. String comparisons are really fast in Python, just wrap the math/array-intensive stuff in a numba function:
```
import numba as nb
@nb.njit
def awesome_func(a):
return a**2
@nb.njit
def default_func(a):
return a**3
@nb.njit
def other_func(a):
return a
def foo(a, t='default'):
if t == 'awesome':
return awesome_func(a)
elif t == 'default':
return default_func(a)
else:
return other_func(a)
```
But make sure you actually need numba for the functions. Sometimes normal Python/NumPy will be fast enough. Just profile the numba solution and a Python/NumPy solution and see if numba makes it significantly faster. :) | I'd suggest accepting @MSeifert's answer, but as another option for these types of problems, consider using an `enum`.
In python, strings are often used as a sort of enum, and `numba` has builtin support for enums, so they can be used directly.
```
import enum
class FooOptions(enum.Enum):
AWESOME = 1
DEFAULT = 2
import numba
@numba.njit
def foo(a, t=FooOptions.DEFAULT):
if t == FooOptions.AWESOME:
return a**2
elif t == FooOptions.DEFAULT:
        return a**3
else:
return a
foo(10, FooOptions.AWESOME)
Out[5]: 100
``` | 4,311 |
33,009,295 | Got kinda surprised with:
```
$ node -p 'process.argv' $SHELL '$SHELL' \t '\t' '\\t'
[ 'node', '/bin/bash', '$SHELL', 't', '\\t', '\\\\t' ]
$ python -c 'import sys; print sys.argv' $SHELL '$SHELL' \t '\t' '\\t'
['-c', '/bin/bash', '$SHELL', 't', '\\t', '\\\\t']
```
Expected the same behavior as with:
```
$ echo $SHELL '$SHELL' \t '\t' '\\t'
/bin/bash $SHELL t \t \\t
```
Which is how I need the stuff to be passed in.
Why the extra escape with `'\t'` and `'\\t'` in process argv? Why are they handled differently than `'$SHELL'`? Where is this actually coming from? And why is it different from the `echo` behavior?
At first I thought this was some *extra* behavior on the [minimist](https://github.com/substack/minimist) part, but then I got the same with both bare Node.js and Python. I might be missing something obvious here. | 2015/10/08 | [
"https://Stackoverflow.com/questions/33009295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/681785/"
] | Use the `$'...'` form to pass escape sequences like `\t`, `\n`, `\r`, `\0` etc. in BASH:
```
python -c 'import sys; print sys.argv' $SHELL '$SHELL' \t $'\t' $'\\t'
['-c', '/bin/bash', '$SHELL', 't', '\t', '\\t']
```
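A quick way to see what `$'...'` actually produces is to compare string lengths (bash assumed):

```shell
a='\t'    # single quotes: two characters, a backslash and a t
b=$'\t'   # $'...': one character, a literal tab
echo "a is ${#a} characters long, b is ${#b} character long"
```

The single-quoted form keeps the backslash; the `$'...'` form hands the program a real tab.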
As per `man bash`:
>
> Words of the form `$'string'` are treated specially. The word expands to string, with backslash-escaped characters replaced as specified by the ANSI C standard. Backslash escape sequences, if present, are decoded as follows:
>
>
>
```
\a alert (bell)
\b backspace
\e
\E an escape character
\f form feed
\n new line
\r carriage return
\t horizontal tab
\v vertical tab
\\ backslash
\' single quote
\" double quote
\nnn the eight-bit character whose value is the octal value nnn (one to three digits)
\xHH the eight-bit character whose value is the hexadecimal value HH (one or two hex digits)
\uHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHH (one to four hex digits)
\UHHHHHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHHHHHH (one to eight hex digits)
\cx a control-x character
``` | In both python and node.js, there is a difference between the way `print` works with scalar strings and the way it works with collections.
Strings are printed simply as a sequence of characters. The resulting output is generally what the user expects to see, but it cannot be used as the representation of the string in the language. But when a list/array is printed out, what you get is a valid list/array literal, which can be used in a program.
For example, in python:
```
>>> print("x")
x
>>> print(["x"])
['x']
```
When printing the string, you just see the characters. But when printing the list containing the string, python adds quote characters, so that the output is a valid list literal. Similarly, it would add backslashes, if necessary:
```
>>> print("\\")
\
>>> print(["\\"])
['\\']
```
node.js works in exactly the same way:
```
$ node -p '"\\"'
\
$ node -p '["\\"]'
[ '\\' ]
```
When you print the string containing a single backslash, you just get a single backslash. But when you print a list/array containing a string consisting of a single backslash, you get a quoted string in which the backslash is escaped with a backslash, allowing it to be used as a literal in a program.
As with the printing of strings in node and python, the standard `echo` shell utility just prints the actual characters in the string. In a standard shell, there is no mechanism similar to node and python printing of arrays. Bash, however, does provide a mechanism for printing out the value of a variable in a format which could be used as part of a bash program:
```
$ quote=\"
# $quote is a single character:
$ echo "${#quote}"
1
# $quote prints out as a single quote, as you would expect
$ echo "$quote"
"
# If you needed a representation, use the 'declare' builtin:
$ declare -p quote
declare -- quote="\""
# You can also use the "%q" printf format (a bash extension)
$ printf "%q\n" "$quote"
\"
```
(References: bash manual on [`declare`](http://www.gnu.org/software/bash/manual/bash.html#index-declare) and [`printf`](http://www.gnu.org/software/bash/manual/bash.html#index-printf). Or type `help declare` and `help printf` in a bash session.)
---
That's not the full story, though. It is also important to understand how the shell interprets what you type. In other words, when you write
```
some_utility \" "\"" '\"'
```
What does `some_utility` actually see in the argv array?
In most contexts in a standard shell (including bash), C-style escapes sequences like `\t` are not interpreted as such. (The standard shell utility `printf` does interpret these sequences when they appear in a format string, and some other standard utilities also interpret the sequences, but the shell itself does not.) The handling of backslash by a standard shell depends on the context:
* Unquoted strings: the backslash quotes the following character, whatever it is (unless it is a newline, in which case both the backslash and the newline are removed from the input).
* Double-quoted strings: backslash can be used to escape the characters `$`, `\`, `"`, ```; also, a backslash followed by a newline is removed from the input, as in an unquoted string. In bash, if history expansion is enabled (as it is by default in *interactive* shells), backslash can also be used to avoid history expansion of `!`, but the backslash is retained in the final string.
* Single-quoted strings: backslash is treated as a normal character. (As a result, there is no way to include a single quote in a single-quoted string.)
Bash adds two more quoting mechanisms:
* C-style quoting, `$'...'`. If a single-quoted string is preceded by a dollar sign, then C-style escape sequences inside the string *are* interpreted in roughly the same way a C compiler would. This includes the standard whitespace characters such as newline (`\n`), octal, hexadecimal and unicode escapes (`\010`, `\x0a`, `\u000A`, `\U0000000A`), plus a few non-C sequences including "control" characters (`\cJ`) and the ESC character `\e` or `\E` (the same as `\x1b`). Backslashes can also be used to escape `\`, `'` and `"`. (Note that this is a different list from the list of backslashable characters in double-quoted strings; here, a backslash before a dollar sign or a backtic is *not* special, while a backslash before a single quote is special; moreover, the backslash-newline sequence is not interpreted.)
* Locale-specific Translation: `$"..."`. If a double-quoted string is preceded by a dollar sign, backslashes (and variable expansions and command substitutions) are interpreted as with a normal double-quoted strings, and then the string is looked up in a message catalog determined by the current locale.
(References: [Posix standard](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_02), [Bash manual](http://www.gnu.org/software/bash/manual/bash.html#Quoting).) | 4,312 |
39,237,350 | How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]`. | 2016/08/30 | [
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] | [itertools.groupby()](https://docs.python.org/3/library/itertools.html#itertools.groupby) is your solution.
```
newlst = [k for k, g in itertools.groupby(lst)]
```
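A quick demonstration with the list from the question:

```python
import itertools

lst = [1, 2, 2, 4, 4, 4, 4, 1, 3, 3, 3, 5, 5, 5, 5, 5]
# groupby collapses runs of equal consecutive values; keep one key per run
newlst = [k for k, g in itertools.groupby(lst)]
print(newlst)  # [1, 2, 4, 1, 3, 5]
```

Note that unlike a `set`, this keeps the second `1` because only *consecutive* duplicates are merged.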
---
If you wish to group and limit the group size by the item's value, meaning 8 4's will be [4,4] and 9 3's will be [3,3,3], here are 2 options that do it:
```
import itertools
def special_groupby(iterable):
last_element = 0
count = 0
state = False
def key_func(x):
nonlocal last_element
nonlocal count
nonlocal state
if last_element != x or x >= count:
last_element = x
count = 1
state = not state
else:
count += 1
return state
return [next(g) for k, g in itertools.groupby(iterable, key=key_func)]
special_groupby(lst)
```
OR
```
def grouper(iterable, n, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return itertools.zip_longest(*args, fillvalue=fillvalue)
newlst = list(itertools.chain.from_iterable(next(zip(*grouper(g, k))) for k, g in itertools.groupby(lst)))
```
Choose whichever you deem appropriate. Both methods are for numbers > 0. | You'd probably want something like this.
```
lst = [1, 1, 2, 2, 2, 2, 3, 3, 4, 1, 2]
prev_value = None
for number in lst[:]: # the : means we're slicing it, making a copy in other words
if number == prev_value:
lst.remove(number)
else:
prev_value = number
```
So, we're going through the list, and if it's the same as the previous number, we remove it from the list, otherwise, we update the previous number.
There may be a more succinct way, but this is the way that looked most apparent to me.
HTH. | 4,313 |
44,859,860 | I want to implement the following function in python:
[](https://i.stack.imgur.com/MJfQu.png)
I will write the code using 2-loops:
```
for i in range(5):
for j in range(5):
sum += f(i, j)
```
But the issue is that I have 20 such sigmas, so I will have to write 20 nested for loops. It makes the code unreadable. In my case, all i and j variables take the same range (0 to 4). Is there some better way of coding it? | 2017/07/01 | [
"https://Stackoverflow.com/questions/44859860",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7737948/"
] | You can use [`itertools.product`](https://docs.python.org/3/library/itertools.html#itertools.product) to get the cartesian product (of indexes, in your case):
```
>>> import itertools
>>> for i, j, k in itertools.product(range(1, 3), repeat=3):
... print(i, j, k)
...
1 1 1
1 1 2
1 2 1
1 2 2
2 1 1
2 1 2
2 2 1
2 2 2
```
---
```
import itertools
total = 0
for indexes in itertools.product(range(5), repeat=20):
total += f(*indexes)
```
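As a small sanity check of this pattern (using `repeat=2` and a toy `f` so it runs instantly; the real `f` is whatever your 20-sigma expression computes):

```python
import itertools

def f(*indexes):
    return sum(indexes)  # toy stand-in for the real function

total = 0
for indexes in itertools.product(range(5), repeat=2):
    total += f(*indexes)

print(total)  # 100: each of the two index positions contributes 0+1+2+3+4 = 10, five times
```

Swapping `repeat=2` for `repeat=20` gives the full 20-fold sum without any extra code.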
* You should use `range(1,6)` instead of `range(5)` to mean `1` to `5`. (unless you meant indexes)
* Do not use `sum` as a variable name, it shadows builtin function [`sum`](https://docs.python.org/3/library/functions.html#sum). | Create arrays using NumPy.
```
import numpy as np
i = np.arange(5)
j = np.arange(5)
res = np.sum(f(i, j))
```
so you can avoid all loops. It is important to note that the function f needs to be able to work with arrays (a so-called ufunc). If your f is more complicated and doesn't allow arrays, you can use numpy's vectorize function. Not as fast as a ufunc, but better than nested loops:
```
from numpy import vectorize
f_vec = vectorize(f)
```
If you want to stay with plain python because you don't want arrays but lists, or the types don't match for an array, there is always a list comprehension, which speeds up the loop. Say I and J are the iterables for i and j respectively; then:
```
ij = [f(i,j) for i in I for j in J ]
res = sum(ij)
``` | 4,323 |
11,706,505 | I just started learning python and I am hoping you guys can help me comprehend things a little better. If you have ever played a pokemon game for the gameboy you'll understand more as to what I am trying to do. I started off with a text adventure where you do simple stuff, but now I am at the point of pokemon battling each other. So this is what I am trying to achieve.
* Pokemon battle starts
* You attack target
* Target loses HP and attacks back
* First one to 0 hp loses
Of course all of this is printed out.
This is what I have for the battle so far, I am not sure how accurate I am right now. Just really looking to see how close I am to doing this correctly.
```
class Pokemon(object):
sName = "pidgy"
nAttack = 5
nHealth = 10
nEvasion = 1
def __init__(self, name, atk, hp, evd):
self.sName = name
self.nAttack = atk
self.nHealth = hp
self.nEvasion = evd
def fight(target, self):
target.nHealth - self.nAttack
def battle():
print "A wild appeared"
    #pikachu = Pokemon("Pikachu", 18, 80, 21)
pidgy = Pokemon("Pidgy", 18, 80, 21)
pidgy.fight(pikachu)
#pikachu.fight(pidgy)
```
**Full code here:** <http://pastebin.com/ikmRuE5z>
I am also looking for advice on how to manage variables; I seem to be having a grocery list of variables at the top and I assume that is not good practice, where should they go? | 2012/07/29 | [
"https://Stackoverflow.com/questions/11706505",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1364915/"
] | If I was to have `fight` as a instance method (which I'm not sure I would), I would probably code it up something like this:
```
class Pokemon(object):
def __init__(self,name,hp,damage):
self.name = name #pokemon name
self.hp = hp #hit-points of this particular pokemon
self.damage = damage #amount of damage this pokemon does every attack
def fight(self,other):
if(self.hp > 0):
print("%s did %d damage to %s"%(self.name,self.damage,other.name))
print("%s has %d hp left"%(other.name,other.hp))
other.hp -= self.damage
return other.fight(self) #Now the other pokemon fights back!
else:
print("%s wins! (%d hp left)"%(other.name,other.hp))
return other,self #return a tuple (winner,loser)
pikachu=Pokemon('pikachu', 100, 10)
pidgy=Pokemon('pidgy', 200, 12)
winner,loser = pidgy.fight(pikachu)
```
Of course, this is somewhat boring since the amount of damage does not depend on type of pokemon and isn't randomized in any way ... but hopefully it illustrates the point.
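If you do want randomness later, one common tweak is to jitter the damage each turn; a minimal sketch (the +/-25% spread and the helper name are made up):

```python
import random

def randomized_damage(base, spread=0.25, rng=random):
    # Scale base damage by a random factor in [1 - spread, 1 + spread],
    # never dropping below 1 so every hit does something.
    factor = 1 + rng.uniform(-spread, spread)
    return max(1, int(round(base * factor)))

rng = random.Random(42)  # seeded so the sketch is reproducible
hits = [randomized_damage(10, rng=rng) for _ in range(3)]
```

You could then call `other.hp -= randomized_damage(self.damage)` inside `fight`.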
As for your class structure:
```
class Foo(object):
attr1=1
attr2=2
def __init__(self,attr1,attr2):
self.attr1 = attr1
self.attr2 = attr2
```
It doesn't really make sense (to me) to declare the class attributes if you're guaranteed to overwrite them in `__init__`. Just use instance attributes and you should be fine (i.e.):
```
class Foo(object):
def __init__(self,attr1,attr2):
self.attr1 = attr1
        self.attr2 = attr2
``` | 1. You don't need the variables up the top. You just need them in the `__init__()` method.
2. The fight method should return a value:
```
def fight(self, target):
target.nHealth -= self.nAttack
return target
```
3. You probably want to also check if someone has lost the battle:
```
def checkWin(myPoke, target):
    # Return 1 if myPoke wins, 0 if target wins, -1 if no winner yet.
    winner = -1
    if myPoke.nHealth <= 0:  # <= guards against HP overshooting below zero
        winner = 0
    elif target.nHealth <= 0:
        winner = 1
    return winner
```
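For completeness, here is how helpers like these could drive a whole battle; a self-contained sketch (the trimmed-down `Pokemon` class and the stat values are invented for illustration, and this `fight` clamps HP at zero so the win check stays simple):

```python
class Pokemon(object):
    def __init__(self, name, atk, hp):
        self.sName = name
        self.nAttack = atk
        self.nHealth = hp

def fight(attacker, target):
    # Clamp at 0 so an overshooting hit cannot leave negative HP.
    target.nHealth = max(0, target.nHealth - attacker.nAttack)

def check_win(my_poke, target):
    # 1 if my_poke wins, 0 if target wins, -1 if no winner yet.
    if my_poke.nHealth <= 0:
        return 0
    elif target.nHealth <= 0:
        return 1
    return -1

def battle(a, b):
    attacker, defender = a, b
    while check_win(a, b) == -1:
        fight(attacker, defender)
        attacker, defender = defender, attacker  # alternate turns
    return a if check_win(a, b) == 1 else b

winner = battle(Pokemon("Pidgy", 18, 80), Pokemon("Pikachu", 21, 100))
print("%s wins with %d hp left" % (winner.sName, winner.nHealth))
```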
Hope I helped. | 4,324 |
4,414,767 | I'm trying to modify an existing [Django Mezzanine](http://mezzanine.jupo.org/) setup to allow me to blog in Markdown. Mezzanine has a "Core" model that has content as an HtmlField which is defined like so:
```
from django.db.models import TextField
class HtmlField(TextField):
"""
TextField that stores HTML.
"""
def formfield(self, **kwargs):
"""
Apply the class to the widget that will render the field as a
        TinyMCE Editor.
"""
formfield = super(HtmlField, self).formfield(**kwargs)
formfield.widget.attrs["class"] = "mceEditor"
return formfield
```
The problem comes from the widget.attrs["class"] of mceEditor. My thoughts were to monkey patch the Content field on the Blog object
```
class BlogPost(Displayable, Ownable, Content):
def __init__(self, *args, **kwargs):
super(BlogPost, self).__init__(*args, **kwargs)
self._meta.get_field('content').formfield = XXX
```
My problem is that my Python skills aren't up to the task of replacing a bound method with a lambda that calls `super`.
formfield is called by the admin when it wants to create a field for display on the admin pages, so I need to patch that to make the BlogPost widget objects NOT have the class of mceEditor (I'm trying to leave mceEditor on all the other things)
How do you craft the replacement function? I'm pretty sure I attach it with
```
setattr(self._meta.get_field('content'), 'formfield', method_i_dont_know_how_to_write)
``` | 2010/12/11 | [
"https://Stackoverflow.com/questions/4414767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/147562/"
] | If you mean back references, then yes Java has this. You can refer to a capturing group inside a regular expression using the notation `\1` for the first group, `\2` for the second, etc. Note that inside a string literal the backslashes must be escaped. | The Java `java.util.regex.Pattern` class supports backreferences using the `\n` syntax.
See [the documentation](http://download.oracle.com/javase/1.5.0/docs/api/java/util/regex/Pattern.html) for more details. | 4,326 |
18,269,218 | I'm trying to use django's queryset API to emulate the following query:
```
SELECT EXTRACT(year FROM chosen_date) AS year,
EXTRACT(month FROM chosen_date) AS month,
date_paid IS NOT NULL as is_paid FROM
(SELECT (CASE WHEN date_due IS NULL THEN date ELSE date_due END) AS chosen_date, * FROM invoice_invoice) as t1;
```
The idea is mainly that in certain situations, I'd rather use the `date_due` column rather than the `date` column in some situations, but that , since `date_due` is optional, I sometimes have to use `date` as a fallback anyways, and create a computed column `chosen_date` to not have to change the rest of the queries.
Here was a first stab I did at emulating this; I was unable to see how to properly do the null test with the base API, so I went with `extra`:
```
if(use_date_due):
sum_qs = sum_qs.extra(select={'chosen_date': 'CASE WHEN date_due IS NULL THEN date ELSE date_due END'})
else:
sum_qs = sum_qs.extra(select={'chosen_date':'date'})
sum_qs = sum_qs.extra(select={'year': 'EXTRACT(year FROM chosen_date)',
'month': 'EXTRACT(month FROM chosen_date)',
'is_paid':'date_paid IS NOT NULL'})
```
But the issue I'm having is when I run the second query, I get an error on how the `chosen_date` column doesn't exist. I've had similar errors later on when trying to use computed columns (like from within `annotate()` calls), but haven't found anything in the documentation about how computed columns differ from "base" ones. Does anyone have any insight on this?
(edited python code because previous version had an obvious logic flaw (forgot the else branch). still doesn't work) | 2013/08/16 | [
"https://Stackoverflow.com/questions/18269218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122757/"
] | Well here're some workarounds
**1.** In your particular case you could do it with one extra:
```
if use_date_due:
sum_qs = sum_qs.extra(select={
'year': 'EXTRACT(year FROM coalesce(date_due, date))',
'month': 'EXTRACT(month FROM coalesce(date_due, date))',
'is_paid':'date_paid IS NOT NULL'
})
```
**2.** It's also possible to use plain python to get data you need:
```
for x in sum_qs:
chosen_date = x.date_due if use_date_due and x.date_due else x.date
print chosen_date.year, chosen_date.month
```
or
```
[(y.year, y.month) for y in (x.date_due if use_date_due and x.date_due else x.date for x in sum_qs)]
```
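A tiny dependency-free illustration of that selection logic (the made-up `Invoice` class stands in for queryset rows):

```python
import datetime

class Invoice(object):  # stand-in for a queryset row
    def __init__(self, date, date_due=None):
        self.date = date
        self.date_due = date_due

sum_qs = [
    Invoice(datetime.date(2013, 1, 15), date_due=datetime.date(2013, 2, 1)),
    Invoice(datetime.date(2013, 3, 10)),  # no due date: falls back to date
]

use_date_due = True
pairs = [(y.year, y.month)
         for y in (x.date_due if use_date_due and x.date_due else x.date
                   for x in sum_qs)]
# pairs == [(2013, 2), (2013, 3)]
```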
**3.** In the SQL world this type of calculated field is usually done using a subquery or a [common table expression](http://www.postgresql.org/docs/current/static/queries-with.html). I like CTEs more because of their readability. It could be like:
```
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
)
select
*,
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
from cte1
```
you can also chain as many cte as you want:
```
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
), cte2 as (
select
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
    from cte1
)
select
year, month, sum(is_paid) as paid_count
from cte2
group by year, month
```
so in django you can use [raw query](https://docs.djangoproject.com/en/1.5/topics/db/sql/#performing-raw-sql-queries) like:
```
Invoice.objects.raw('
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
)
select
*,
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
from cte1')
```
and you will have Invoice objects with some additional properties.
**4.** Or you can simply substitute fields in your query with plain python
```
if use_date_due:
chosen_date = 'coalesce(date_due, date)'
else:
chosen_date = 'date'
year = 'extract(year from {})'.format(chosen_date)
month = 'extract(month from {})'.format(chosen_date)
fields = {'year': year, 'month': month, 'is_paid': 'date_paid is not null', 'chosen_date': chosen_date}
sum_qs = sum_qs.extra(select = fields)
``` | Would this work?:
```
from django.db import connection, transaction
cursor = connection.cursor()
sql = """
SELECT
%s AS year,
%s AS month,
date_paid IS NOT NULL as is_paid
FROM (
SELECT
        (CASE WHEN date_due IS NULL THEN date ELSE date_due END) AS chosen_date, *
FROM
invoice_invoice
) as t1;
""" % (connection.ops.date_extract_sql('year', 'chosen_date'),
connection.ops.date_extract_sql('month', 'chosen_date'))
# Data retrieval operation - no commit required
cursor.execute(sql)
rows = cursor.fetchall()
```
I think it's pretty save both CASE WHEN and IS NOT NULL are pretty db agnostic, at least I assume they are, since they are used in django test in raw format.. | 4,327 |
67,800,225 | I have an Elasticsearch cluster.
I am currently designing a Python service for clients to read from and write to my Elasticsearch. The Python service will not be maintained by me; only internal callers will use the Python service to query our Elasticsearch for fetching and writing.
Is there any way to configure Elasticsearch so that we know the requests are coming from the Python service? Or is there any way to pass some extra fields while querying, so that based on those fields we can find the requests in the logs?
"https://Stackoverflow.com/questions/67800225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8989219/"
] | ```
SELECT id
FROM books EXCEPT (
SELECT bookid FROM books_authors WHERE authorId='A2'
)
``` | ```
SELECT * FROM books
WHERE id NOT IN
(SELECT bookid FROM books_authors WHERE authorid = 'A2')
``` | 4,334 |
63,483,417 | Say I have a dataframe
```
id category
1 A
2 A
3 B
4 C
5 A
```
And I want to create a new column with incremental values where `category == 'A'`. So it should be something like.
```
id category value
1 A 1
2 A 2
3 B NaN
4 C NaN
5 A 3
```
Currently I am able to do this with
```
df['value'] = np.nan
df.loc[df.category == "A", ['value']] = range(1, len(df[df.category == "A"]) + 1)
```
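(For reference, a self-contained version of what I am doing now; note that `nan` actually comes from NumPy, as there is no `pd.nan`:)

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3, 4, 5],
                   'category': ['A', 'A', 'B', 'C', 'A']})

df['value'] = np.nan
mask = df.category == "A"
df.loc[mask, 'value'] = range(1, mask.sum() + 1)
# df['value'] is now [1.0, 2.0, NaN, NaN, 3.0] -- float because of the NaNs
```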
Is there a better/pythonic way to do this (i.e. one where I don't have to initialize the value column with nan)? Also, this method currently assigns a float type instead of the integer type I want.
"https://Stackoverflow.com/questions/63483417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1676881/"
] | The following code for GNU sed:
```
sed 's/EndHere/&\n/g; s/\(StartHere\)[^\n]*\(EndHere\|$\)/\1\2/g; s/\n//g' <<EOF
StartHere Word1 EndHere
StartHere Word2
StartHere Word2 EndHere something else
something else StartHere Word2 EndHere something else
EOF
```
outputs:
```
StartHereEndHere
StartHere
StartHereEndHere something else
something else StartHereEndHere something else
```
>
> I am sure here that the word that i am deleting after is there only once per line
>
>
>
Then you could:
```
sed 's/\(StartHere\).*\(EndHere\)/\1\2/; t; s/\(StartHere\).*$/\1/'
```
The `t` command will end processing of the current line if the last `s` command was successful. So... it will work. | Instead of using `sed`, you could do it with Perl, which supports [negative lookahead](http://www.regular-expressions.info/lookaround.html).
Using the example you gave in your comment:
```
$ echo "oooo StartHere=Yo9897 EndHereYo" \
| perl -pe 's/(StartHere) (?: .*(EndHere) | .*(?!EndHere) )/$1$2/x'
```
would output "oooo StartHereEndHereYo".
`(?!...)` is a "negative lookahead"
The Perl's `x` regex option allows using spaces in the regex to make it (slightly) more readable | 4,339 |
42,357,563 | I am thinking of different ways to take the sum of squares in python. I have found that the following works using list comprehensions:
```
def sum_of_squares(n):
return sum(x ** 2 for x in range(1, n))
```
But, when using lambda functions, the following does not compute:
```
def sum_of_squares_lambda(n):
return reduce(lambda x, y: x**2 + y**2, range(1, n))
```
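(For reference, the two versions really do disagree; e.g. with n = 4:)

```python
from functools import reduce  # reduce is a builtin in Python 2

n = 4
a = sum(x ** 2 for x in range(1, n))               # 1 + 4 + 9 = 14
b = reduce(lambda x, y: x**2 + y**2, range(1, n))  # f(f(1, 2), 3) = (1**2 + 2**2)**2 + 3**2 = 34
```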
Why is this? | 2017/02/21 | [
"https://Stackoverflow.com/questions/42357563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4382972/"
] | Think about what `reduce` does. It takes the output of one call and uses it as the first argument when calling the same function again. So imagine n is 4. Suppose you call your lambda `f`. Then you are doing `f(f(1, 2), 3)`. That is equivalent to:
```
(1**2 + 2**2)**2 + 3**2
```
Because the first argument to your lambda is squared, your first sum of squares will be squared again on the next call, and then that sum will be squared again on the next, and so on. | You're only supposed to square each successive element. `x**2 + y**2` squares the running total (`x`) as well as each successive element (`y`). Change that to `x + y**2` and you'll get the correct result. Note that, as mentioned in comments, this requires a proper initial value as well, so you should pass `0` as the optional third argument.
```
>>> sum(x ** 2 for x in range(5,15))
985
>>> reduce(lambda x, y: x + y**2, range(5,15))
965
>>> reduce(lambda x, y: x + y**2, range(5,15), 0)
985
``` | 4,340 |
36,663,727 | I have two UI windows created with QT Designer. I have two separate python scripts for each UI. What I'm trying to do is the first script opens a window, creates a thread that looks for a certain condition, then when found, opens the second UI. Then the second UI creates a thread, and when done, opens the first UI.
This seems to work fine, here's the partial code that is fired when the signal is called:
```
def run_fuel(self):
self.st_window = L79Fuel.FuelWindow(self)
self.thread.exit()
self.st_window.show()
self.destroy()
```
So that appears to work fine. I am still unsure of the proper way to kill the thread, the docs seem to state exit() or quit(). But...the new window from the other script (L79Fuel.py) is shown and the old window destroyed.
Then the new window does some things, and again when a signal is called, it triggers an similar function that I'd like to close that window, and reopen the first window.
```
def start_first(self):
self.r_window = L79Tools.FirstWindow(self)
self.thread.exit()
self.r_window.show()
self.destroy()
```
And this just exits with a code 0. I stepped through it with a debugger, and what seems to be happening is it runs through `start_first`, does everything in the function, and then goes back to the first window's `sys.exit(app.exec_())`, does that line, and then loops back to the `start_first` function (the second window) and executes that code again, in a loop, over and over.
I'm stumped. I've read as much as I could find, but nothing seems to address this. I'm guessing there's something I'm doing wrong with the threading (both windows have a thread going) and I'm not killing the threads correctly, or something along those lines. | 2016/04/16 | [
"https://Stackoverflow.com/questions/36663727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5897068/"
] | Checkout the docs here:
<https://code.visualstudio.com/Docs/customization/colorizer>
You basically either get one from the marketplace or generate a basic editable file with yeoman.
You can also add themes even from color sublime as described here:
<https://code.visualstudio.com/docs/customization/themes> | Install theme from extensions from which you wish to start.
Then find where the theme got installed. On Windows it would be `%USERPROFILE%\.vscode\extensions`, see details in [Installing extensions](https://code.visualstudio.com/docs/extensions/install-extension).
There you'll find folder with theme, inside is `themes` folder and `<something>.tmTheme` file which is actually xml file. Open it inside VSCode and start editing :)
You'll find items and colors, syntax is described elsewhere, but common sense will help you.
To test change, open desired .cs file in same editor. Changes are applied after restart, so it's also good to make [key shortcut](https://code.visualstudio.com/docs/customization/keybindings) to restart the editor:
keybindings.json
```
...
{
"key": "ctrl+shift+alt+r",
"command": "workbench.action.reloadWindow"
}
...
```
Then try color, restart, see result, continue... | 4,346 |
26,259,870 | I am new to java (well I played with it a few times), and I am wondering:
=> How to do *fast* independent prototypes ? something like one file projects.
The last few years, I worked with python. Each time I had to develop some new functionality or algorithm, I would make a simple python module (i.e. file) just for it. I could then integrate all or part of it into my projects. So, how should I translate such "modular-development" workflow into a java context?
Now I am working on some relatively complex java DB+web project using spring, maven and intelliJ. And I can't see how to easily develop and run independent code into this context.
**Edit:**
I think my question is unclear because I confused two things:
1. fast developement and test of code snippets
2. incremental development
In my experience (with python, no web), I could pass from the first to the second seemlessly.
For the sake of consistency with the title, **the 1st has priority**. However it is good only for exploration purpose. In practice, the 2nd is more important. | 2014/10/08 | [
"https://Stackoverflow.com/questions/26259870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1206998/"
] | Definitely take a look at [Spring Boot](http://docs.spring.io/spring-boot/docs/1.2.2.RELEASE/reference/htmlsingle/#getting-started-introducing-spring-boot). Relatively new project. Its aim is to remove initial configuration phase and spin up Spring apps quickly. You can think about it as convention over configuration wrapper on top of Spring Framework.
It's also considered as good fit for micro-services architecture.
It has embedded servlet container (Jetty/Tomcat), so you don't need to configure it.
It also has various different bulk dependencies for different technology combinations/stacks. So you can pick good fit for your needs. | What does "develop and run independent code in this context" mean?
Do you mean "small standalone example code snippets?"
* Use the Maven exec plugin
* Write unit/integration tests
* Bring your Maven dependencies into something like a JRuby REPL | 4,347 |
32,934,653 | I am trying to load a CSV file into HDFS and read the same into Spark as RDDs. I am using Hortonworks Sandbox and trying these through the command line. I loaded the data as follows:
```
hadoop fs -put data.csv /
```
The data seems to have loaded properly as seen by the following command:
```
[root@sandbox temp]# hadoop fs -ls /data.csv
-rw-r--r-- 1 hdfs hdfs 70085496 2015-10-04 14:17 /data.csv
```
In pyspark, I tried reading this file as follows:
```
data = sc.textFile('/data.csv')
```
However, the following take command throws an error:
```
data.take(5)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/2.3.0.0-2557/spark/python/pyspark/rdd.py", line 1194, in take
totalParts = self._jrdd.partitions().size()
File "/usr/hdp/2.3.0.0-2557/spark/python/lib/py4j-0.8.2.1- src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/hdp/2.3.0.0-2557/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o35.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/data.csv
```
Can someone help me with this error? | 2015/10/04 | [
"https://Stackoverflow.com/questions/32934653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002903/"
] | I figured the answer out. I had to enter the complete path name of the HDFS file as follows:
```
data = sc.textFile('hdfs://sandbox.hortonworks.com:8020/data.csv')
```
The full path name is obtained from conf/core-site.xml | Error `org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/data.csv`
It is reading from your local file system instead of HDFS.
Try providing file path like below,
```
data = sc.textFile("hdfs:///data.csv")  # three slashes: use the default filesystem from core-site.xml
``` | 4,348 |
69,395,204 | I have a list of dict I want to group by multiple keys.
I have used sort by default in python dict
```
data = [
[],
[{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020}, {'value': 79, 'bot': 'DB', 'month': 10, 'year': 2020}, {'value': 126, 'bot': 'DB', 'month':8, 'year': 2021}],
[],
[{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020}, {'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}, {'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021}],
[{'value': 0, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}],
[{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020}, {'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021}, {'value': 1335, 'bot': 'DB', 'month': 10, 'year': 2020}, {'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021}, {'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021}, {'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}]
]
output_dict = {}
for i in data:
if not i:
pass
for j in i:
for key,val in sorted(j.items()):
output_dict.setdefault(val, []).append(key)
print(output_dict)
{'DB': ['bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot'], 9: ['month', 'month', 'month'], 8: ['value'], 2020: ['year', 'year', 'year', 'year', 'year'], 10: ['month', 'month'], 79: ['value'], 126: ['value'], 2021: ['year', 'year', 'year', 'year', 'year', 'year', 'year', 'year'], 'GEMBOT': ['bot', 'bot', 'bot', 'bot'], 11: ['month', 'month'], 222: ['value'], 4: ['month', 'month', 'month'], 623: ['value'], 628: ['value'], 0: ['value'], 703: ['value'], 3: ['month'], 1081: ['value'], 1335: ['value'], 1920: ['value'], 1: ['month'], 2132: ['value'], 2: ['month'], 2383: ['value']}
```
But I want the output like this.
```
[{ "bot": "DB",
"date": "Sept 20",
"value": 134
},{"bot": "DB",
"date": "Oct 20",
"value": 79
}.. So on ]
```
Is there an efficient way to flatten this list ?
Thanks in advance | 2021/09/30 | [
"https://Stackoverflow.com/questions/69395204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9538877/"
] | Maybe try:
```
from pprint import pprint
import datetime
output_dict = []
temp = {}  # must exist before the first temp["bot"] assignment below
for i in data:
if i:
for j in i:
for key, val in sorted(j.items()):
if key == "bot":
temp["bot"] = val
elif key == "value":
temp["value"] = val
elif key == "month":
month = datetime.datetime.strptime(str(val), "%m")
temp["date"] = month.strftime("%b")
elif key == "year":
temp["date"] = str(temp["date"]) + " " + str(val)
output_dict.append(temp)
temp = {}
pprint(output_dict)
```
The final results are shown as follows:
```
[{'bot': 'DB', 'date': 'Sep 2020', 'value': 8},
{'bot': 'DB', 'date': 'Oct 2020', 'value': 79},
{'bot': 'DB', 'date': 'Aug 2021', 'value': 126},
{'bot': 'GEMBOT', 'date': 'Nov 2020', 'value': 222},
{'bot': 'GEMBOT', 'date': 'Apr 2021', 'value': 623},
{'bot': 'GEMBOT', 'date': 'Sep 2021', 'value': 628},
{'bot': 'GEMBOT', 'date': 'Apr 2021', 'value': 0},
{'bot': 'DB', 'date': 'Nov 2020', 'value': 703},
{'bot': 'DB', 'date': 'Mar 2021', 'value': 1081},
{'bot': 'DB', 'date': 'Oct 2020', 'value': 1335},
{'bot': 'DB', 'date': 'Apr 2021', 'value': 1920},
{'bot': 'DB', 'date': 'Jan 2021', 'value': 2132},
{'bot': 'DB', 'date': 'Feb 2021', 'value': 2383}]
``` | Maybe try:
```
output = []
for i in data:
if not i:
pass
for j in i:
output.append(j)
```
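For the date case, one option is to linearize the (year, month) pair into a single sortable key; a sketch with toy records:

```python
output = [
    {'bot': 'DB', 'month': 9, 'year': 2020, 'value': 8},
    {'bot': 'GEMBOT', 'month': 4, 'year': 2021, 'value': 623},
    {'bot': 'DB', 'month': 11, 'year': 2020, 'value': 703},
]

# year * 12 + month turns each (year, month) pair into one comparable number.
by_date = sorted(output, key=lambda k: k['year'] * 12 + k['month'])
order = [(d['year'], d['month']) for d in by_date]
# order == [(2020, 9), (2020, 11), (2021, 4)]
```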
And then if you want to sort it, you can use `sorted_output = sorted(output, key=lambda k: k['bot'])` to sort it by `bot`, for example. If you want to sort it by date, maybe create a value that calculates the date in months and then sort it from there. | 4,350
1,586,423 | I wrote a basic [Hippity Hop](http://www.facebook.com/careers/puzzles.php?puzzle_id=7) program in C, Python, and OCaml. Granted, this is probably not a very good benchmark of these three languages. But the results I got were something like this:
* Python: .350 seconds
* C: .050 seconds
* *interpreted* OCaml: .040 seconds
* compiled OCaml: .010
The python performance doesn't really surprise me, but I'm rather shocked at how fast the OCaml is (especially the interpreted version). For comparison, I'll post the C version and the OCaml version.
C
=
```
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
long get_count(char *name);
int main(int argc, char *argv[])
{
if (argc != 2){
printf("Filename must be specified as a positional argument.\n");
exit(EXIT_FAILURE);
}
long count_no = get_count(argv[1]);
int i;
for (i = 1; i <= count_no; i++){
if (((i % 3) == 0) && ((i % 5) == 0)){
printf("Hop\n");
continue;
}
if ((i % 3) == 0){
printf("Hoppity\n");
}
if ((i % 5) == 0){
printf("Hophop\n");
}
}
return 0;
}
long get_count(char *name){
FILE *fileptr = fopen(name, "r");
if (!fileptr){
printf("Unable to open file %s.\n", name);
exit(EXIT_FAILURE);
}
size_t text_len = 20;
char *file_text = calloc(text_len, sizeof(char));
while (!feof(fileptr)){
fread(file_text, sizeof(char), text_len, fileptr);
assert(!ferror(fileptr));
text_len += 20;
file_text = realloc(file_text, text_len * sizeof(char));
}
long file_as_int = strtol(file_text, NULL, 10);
free(file_text);
return file_as_int;
}
```
OCaml
=====
```
open String;;
let trim str =
if str = "" then "" else
let search_pos init p next =
let rec search i =
if p i then raise(Failure "empty") else
match str.[i] with
| ' ' | '\n' | '\r' | '\t' -> search (next i)
| _ -> i
in
search init
in
let len = String.length str in
try
let left = search_pos 0 (fun i -> i >= len) (succ)
and right = search_pos (len - 1) (fun i -> i < 0) (pred)
in
String.sub str left (right - left + 1)
with
| Failure "empty" -> ""
;;
let rec iterate_over_numbers curr_num max_num =
(
if curr_num <= max_num then (
if ((curr_num mod 3) == 0) && ((curr_num mod 5) == 0) then
print_endline "Hop"
else if (curr_num mod 3) == 0 then
print_endline "Hoppity"
else if (curr_num mod 5) == 0 then
print_endline "Hophop";
iterate_over_numbers (curr_num + 1) max_num
))
;;
let fname = Sys.argv.(1);;
let infile = open_in fname;;
let file_text = trim (input_line infile);;
close_in infile;;
let input_number = int_of_string file_text;;
iterate_over_numbers 1 input_number;;
```
But I'm curious to know why I'm getting these results. Am I doing something dumb in my C program, or is this just something OCaml is faster at? It seems to me a bit strange that an interpreted program is running a little faster than the C version, and the compiled program is running 5 times as fast. | 2009/10/19 | [
"https://Stackoverflow.com/questions/1586423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] | Time under 0.05 can be a simple noise. Repeat the main program enough times to actually get ~1s of execution time in C. (I mean repeating it in a loop in the program itself, not by running it again)
Did you compile your code with optimisations turned on? Did you try reducing the number of branches? (and comparisons)
```
if (i % 3 == 0) {
if (i % 5 == 0) {
printf("Hop\n");
continue;
}
printf("Hoppity\n");
} else if (i % 5 == 0){
printf("Hophop\n");
}
```
Did you try looking at the assembler output?
Also printf is pretty slow. Try `puts("Hop")` instead, since you don't use the formating anyways. | I would be interested to see how much time is spent in get\_count().
I'm not sure how much it would matter, but you're reading in a long as a string, which means the string cannot be larger than 20 bytes, or 10 bytes (2^64 = some 20 character long decimal number, or 2^32 = some 10 character long decimal number), so you don't need your while loop in get\_count. Also, you could allocate file\_text on the stack, rather than calling calloc - but I guess you'd still need to zero it out, or otherwise find the length and set the last byte to null.
```
fseek(fileptr, 0, SEEK_END);          /* lseek() takes a file descriptor, not a FILE* */
long file_length = ftell(fileptr);
``` | 4,353 |
47,489,567 | I've been stuck on this one for hours now.
I'm making a homemade smarthome terminal, and have been tinkering with kivy for about 2 weeks and it's been great so far. I'm at the point where I want to show the temperature inside a label inside a screen. I've made an actionbar with 4 buttons that slides through screens when clicked. In the first screen called "Thermostaat", I want to display a label with a temperature read from another external script I've written. I can't seem to get the temperature inside a label, even with a dummy value'
Here's my main.py:
```
#!/usr/bin/env python3
from kivy.app import App
from kivy.uix.screenmanager import ScreenManager,Screen
from kivy.properties import ObjectProperty, StringProperty
from kivy.uix.boxlayout import BoxLayout
from kivy.clock import Clock
from kivy.uix.label import Label
from kivy.lang import Builder
class Menu(BoxLayout):
manager = ObjectProperty(None)
def __init__(self,**kwargs):
super(Menu, self).__init__(**kwargs)
Clock.schedule_interval(self.getTemp, 1)
def getTemp(self,dt):
thetemp = 55 #will be changed to temp.read()
self.ids.TempLabel.text = str(thetemp)
print(thetemp)
class ScreenThermo(Screen):
pass
class ScreenLight(Screen):
pass
class ScreenEnergy(Screen):
pass
class ScreenWeather(Screen):
pass
class Manager(ScreenManager):
screen_thermo = ObjectProperty(None)
screen_light = ObjectProperty(None)
screen_energy = ObjectProperty(None)
screen_weather = ObjectProperty(None)
class MenuApp(App):
def thermostaat(self):
print("Thermostaat")
def verlichting(self):
print("Verlichting")
def energie(self):
print("Energie")
def weer(self):
print("Het Weer")
def build(self):
Builder.load_file("test.kv")
return Menu()
if __name__ == '__main__':
MenuApp().run()
```
And here's my .kv file:
```
#:kivy 1.10.0
<Menu>:
manager: screen_manager
orientation: "vertical"
ActionBar:
size_hint_y: 0.05
ActionView:
ActionPrevious:
ActionButton:
text: "Thermostaat"
on_press: root.manager.current= 'thermo'
on_release: app.thermostaat()
ActionButton:
text: "Verlichting"
#I want my screens to switch when clicking on this actionbar button
on_press: root.manager.current= 'light'
on_release: app.verlichting()
ActionButton:
text: "Energieverbruik"
on_press: root.manager.current= 'energy'
on_release: app.energie()
ActionButton:
text: "Het Weer"
on_press: root.manager.current= 'weather'
on_release: app.weer()
Manager:
id: screen_manager
<ScreenThermo>:
Label:
#this is where i want my label that shows the temperature my sensor reads
text: "stuff1"
<ScreenLight>:
Button:
text: "stuff2"
<ScreenEnergy>:
Button:
text: "stuff3"
<ScreenWeather>:
Button:
text: "stuff4"
<Manager>:
id: screen_manager
screen_thermo: screen_thermo
screen_light: screen_light
screen_energy: screen_energy
screen_weather: screen_weather
ScreenThermo:
id: screen_thermo
name: 'thermo'
manager: screen_manager
ScreenLight:
id: screen_light
name: 'light'
manager: screen_manager
ScreenEnergy:
id: screen_energy
name: 'energy'
manager: screen_manager
ScreenWeather:
id: screen_weather
name: 'weather'
manager: screen_manager
```
I'm constantly getting the following error:
```
super(ObservableDict, self).__getattr__(attr))
AttributeError: 'super' object has no attribute '__getattr__'
```
Here's my traceback incase you're wondering:
```
Traceback (most recent call last):
File "main.py", line 57, in <module>
MenuApp().run()
File "/home/default/kivy/kivy/app.py", line 829, in run
runTouchApp()
File "/home/default/kivy/kivy/base.py", line 502, in runTouchApp
EventLoop.window.mainloop()
File "/home/default/kivy/kivy/core/window/window_pygame.py", line 403, in mainloop
self._mainloop()
File "/home/default/kivy/kivy/core/window/window_pygame.py", line 289, in _mainloop
EventLoop.idle()
File "/home/default/kivy/kivy/base.py", line 337, in idle
Clock.tick()
File "/home/default/kivy/kivy/clock.py", line 581, in tick
self._process_events()
File "kivy/_clock.pyx", line 367, in kivy._clock.CyClockBase._process_events
cpdef _process_events(self):
File "kivy/_clock.pyx", line 397, in kivy._clock.CyClockBase._process_events
raise
File "kivy/_clock.pyx", line 395, in kivy._clock.CyClockBase._process_events
event.tick(self._last_tick)
File "kivy/_clock.pyx", line 167, in kivy._clock.ClockEvent.tick
ret = callback(self._dt)
File "main.py", line 17, in getTemp
self.ids.TempLabel.text = str(thetemp)
File "kivy/properties.pyx", line 839, in kivy.properties.ObservableDict.__getattr__
super(ObservableDict, self).__getattr__(attr))
AttributeError: 'super' object has no attribute '__getattr__'
```
I hope someone is willing to help me with this, because I want to continue with this project. I've got most of the functionality of my SmartHome equipment working already, so the last part is to make a decent GUI to fit it all in (controlling the lights, controlling the temperature in the house etc...) | 2017/11/25 | [
"https://Stackoverflow.com/questions/47489567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4977709/"
] | There is no id TempLabel in your menu.
First, you must add the TempLabel in your kv:
```
...
<ScreenThermo>:
Label:
id: TempLabel
text: "stuff1"
...
```
then update the right label:
```
...
class Menu(BoxLayout):
manager = ObjectProperty(None)
def __init__(self,**kwargs):
super(Menu, self).__init__(**kwargs)
Clock.schedule_interval(self.getTemp, 1)
def getTemp(self,dt):
thetemp = 55 #will be changed to temp.read()
self.manager.screen_thermo.ids.TempLabel.text = str(thetemp)
print(thetemp)
...
``` | use a StringProperty
```
<ScreenThermo>:
    Label:
        #this is where i want my label that shows the temperature my sensor reads
        text: root.thermo_text
```
and on the Python side:
```
class ScreenThermo(BoxLayout):
    thermo_text = StringProperty("stuff")
    ...
```
then any time you want to set the text just do
```
my_screen.thermo_text = "ASD"
``` | 4,362 |
60,036,522 | **I am learning 'Automate the Boring Stuff with Python',
here is the code in the book:**
```
import csv, os
os.makedirs('headerRemoved', exist_ok=True)
#Loop through every file in the current working directory
for csvFilename in os.listdir('C://Users//Xinxin//Desktop//123'):
if not csvFilename.endswith('.csv'):
continue # skip non-csv files
print('Removing header from ' + csvFilename + '...')
# Read the CSV file in (skipping first row).
csvRows = []
csvFileObj = open(csvFilename)
readerObj = csv.reader(csvFileObj)
for row in readerObj:
if readerObj.line_num == 1:
continue # skip first row
csvRows.append(row)
csvFileObj.close()
# Write out the CSV file.
csvFileObj = open(os.path.join('headerRemoved', csvFilename), 'w', newline='')
csvWriter = csv.writer(csvFileObj)
for row in csvRows:
csvWriter.writerow(row)
csvFileObj.close()
```
According to the book, you should 'Run the above python program in that folder.'
It works, but when I move the python program out of the csv folder, and run the code,
then it shows
```
C:\Users\Xinxin\PycharmProjects\PPPP\venv\Scripts\python.exe C:/Users/Xinxin/Desktop/removeheader.py
Removing header from NAICS_data_1048.csv...
Traceback (most recent call last):
File "C:/Users/Xinxin/Desktop/removeheader.py", line 44, in <module>
csvFileObj = open(csvFilename)
FileNotFoundError: [Errno 2] No such file or directory: 'NAICS_data_1048.csv'
Process finished with exit code 1
```
Why can't the csv files be opened? I already wrote the absolute dir in line 4...
**Thank you so much for your help.** | 2020/02/03 | [
"https://Stackoverflow.com/questions/60036522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12831792/"
] | >
> but when I move the python program out of the csv folder, and run the code, then it shows
>
>
>
1) This is the problem. Note that `open()` resolves a bare filename against the current working directory (`sys.path` only affects imports), so change the working directory at the top of your removeheader.py (first lines):
```
import os
os.chdir(r'C:/Users/Xinxin/Desktop/123')
```
2) Store the files in the same location as the script to make your life easier | You can get the current directory, then join it with the file name.
Example:
```
currentDir = os.getcwd()
currentFileCSV = currentDir +"//" + csvFilename
csvFileObj = open(currentFileCSV)
``` | 4,365 |
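Either way, the crux is that `open(csvFilename)` resolves a bare filename against the current working directory. A runnable sketch of joining the folder with each filename first (using a temporary folder as a stand-in for the asker's directory):

```python
import csv
import os
import tempfile

# Hypothetical stand-in for the folder of CSV files.
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "data.csv"), "w", newline="") as f:
    csv.writer(f).writerows([["header"], ["row1"], ["row2"]])

rows = []
for name in os.listdir(folder):
    if not name.endswith(".csv"):
        continue
    # Build a full path so open() works regardless of the cwd.
    with open(os.path.join(folder, name), newline="") as f:
        reader = csv.reader(f)
        for row in reader:
            if reader.line_num == 1:
                continue  # skip the header row
            rows.append(row)

print(rows)  # [['row1'], ['row2']]
```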
31,090,479 | I think I'm not understanding something basic about python's argparse.
I am trying to use the Google YouTube API in a python script, but I am not understanding how to pass values to the script without using the command line.
For example, [here](https://developers.google.com/youtube/v3/docs/videos/insert) is the example for the API. The examples on github and elsewhere show this example as being called from the command line, from where the argparse values are passed when the script is called.
I don't want to use the command line. I am building an app that uses a decorator to obtain login credentials for the user, and when that user wants to upload to their YouTube account, they submit a form which will then call this script and have the argparse values passed to it.
How do I pass values to argparser (see below for portion of code in YouTube upload API script) from another python script?
```
if __name__ == '__main__':
argparser.add_argument("--file", required=True, help="Video file to upload")
argparser.add_argument("--title", help="Video title", default="Test Title")
argparser.add_argument("--description", help="Video description",
default="Test Description")
argparser.add_argument("--category", default="22",
help="Numeric video category. " +
"See https://developers.google.com/youtube/v3/docs/videoCategories/list")
argparser.add_argument("--keywords", help="Video keywords, comma separated",
default="")
argparser.add_argument("--privacyStatus", choices=VALID_PRIVACY_STATUSES,
default=VALID_PRIVACY_STATUSES[0], help="Video privacy status.")
args = argparser.parse_args()
if not os.path.exists(args.file):
exit("Please specify a valid file using the --file= parameter.")
youtube = get_authenticated_service(args)
try:
initialize_upload(youtube, args)
except HttpError, e:
print "An HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
```
---
EDIT: Per request, here is the traceback for the 400 Error I am getting using either the standard method to initialize a dictionary or using argparse to create a dictionary. I thought I was getting this due to badly formed parameters, but perhaps not:
```
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1102, in __call__
return handler.dispatch()
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "C:\Users\...\testapp\oauth2client\appengine.py", line 796, in setup_oauth
resp = method(request_handler, *args, **kwargs)
File "C:\Users\...\testapp\testapp.py", line 116, in get
resumable_upload(insert_request)
File "C:\Users\...\testapp\testapp.py", line 183, in resumable_upload
status, response = insert_request.next_chunk()
File "C:\Users\...\testapp\oauth2client\util.py", line 129, in positional_wrapper
return wrapped(*args, **kwargs)
File "C:\Users\...\testapp\apiclient\http.py", line 874, in next_chunk
return self._process_response(resp, content)
File "C:\Users\...\testapp\apiclient\http.py", line 901, in _process_response
raise HttpError(resp, content, uri=self.uri)
HttpError: <HttpError 400 when requesting https://www.googleapis.com/upload/youtube/v3/videos?alt=json&part=status%2Csnippet&uploadType=resumable returned "Bad Request">
``` | 2015/06/27 | [
"https://Stackoverflow.com/questions/31090479",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3449806/"
] | Whether it is the best approach or not is really for you to figure out. But using argparse without command line is **easy**. I do it all the time because I have batches that can be run from the command line. Or can also be called by other code - which is great for unit testing, as mentioned. argparse is especially good at defaulting parameters for example.
Starting with your sample.
```
import argparse
argparser = argparse.ArgumentParser()
argparser.add_argument("--file", required=True, help="Video file to upload")
argparser.add_argument("--title", help="Video title", default="Test Title")
argparser.add_argument("--description", help="Video description",
default="Test Description")
argparser.add_argument("--category", default="22",
help="Numeric video category. " +
"See https://developers.google.com/youtube/v3/docs/videoCategories/list")
argparser.add_argument("--keywords", help="Video keywords, comma separated",
default="")
VALID_PRIVACY_STATUSES = ("private","public")
argparser.add_argument("--privacyStatus", choices=VALID_PRIVACY_STATUSES,
default=VALID_PRIVACY_STATUSES[0], help="Video privacy status.")
#pass in any positional or required variables.. as strings in a list
#which corresponds to sys.argv[1:]. Not a string => arcane errors.
args = argparser.parse_args(["--file", "myfile.avi"])
#you can populate other optional parameters, not just positionals/required
#args = argparser.parse_args(["--file", "myfile.avi", "--title", "my title"])
print vars(args)
#modify them as you see fit, but no more validation is taking place
#so best to use parse_args.
args.privacyStatus = "some status not in choices - already parsed"
args.category = 42
print vars(args)
#proceed as before, the system doesn't care if it came from the command line or not
# youtube = get_authenticated_service(args)
```
output:
```
{'category': '22', 'description': 'Test Description', 'title': 'Test Title', 'privacyStatus': 'private', 'file': 'myfile.avi', 'keywords': ''}
{'category': 42, 'description': 'Test Description', 'title': 'Test Title', 'privacyStatus': 'some status not in choices - already parsed', 'file': 'myfile.avi', 'keywords': ''}
``` | Calling `parse_args` with your own list of strings is a common `argparse` testing method. If you don't give `parse_args` this list, it uses `sys.argv[1:]` - i.e. the strings that the shell gives. `sys.argv[0]` is the script name.
```
args = argparser.parse_args(['--foo','foovalue','barvalue'])
```
It is also easy to construct an `args` object.
```
args = argparse.Namespace(foo='foovalue', bar='barvalue')
```
In fact, if you print `args` from a `parse_args` call it should look something like that. As described in the documentation, a `Namespace` is a simple object, and the values are attributes. So it is easy to construct your own `namespace` class. All `args` needs to be is something that returns the appropriate value when used as:
```
x = args.foo
b = args.bar
```
Also as noted in the docs, `vars(args)` turns this namespace into a dictionary. Some code likes to use a dictionary, but evidently these youtube functions want a `Namespace` (or equivalent).
```
get_authenticated_service(args)
initialize_upload(youtube, args)
```
<https://docs.python.org/3/library/argparse.html#beyond-sys-argv>
<https://docs.python.org/3/library/argparse.html#the-namespace-object>
---
<https://developers.google.com/youtube/v3/guides/uploading_a_video?hl=id-ID>
has `get_authenticated_service` and `initialize_upload` code
```
def initialize_upload(youtube, options):
tags = None
if options.keywords:
tags = options.keywords.split(",")
body=dict(
snippet=dict(
title=options.title,
description=options.description,
tags=tags,
categoryId=options.category
),
status=dict(
privacyStatus=options.privacyStatus
)
)
....
```
The `args` from the parser is `options`, which it uses as `options.category`, `options.title`, etc. You could substitute any other object which has the same behavior and the necessary attributes. | 4,366 |
16,213,235 | a methodology question:
I have a "main" python script which runs on an infinite loop on my system, and I want to send information to it (a json data string for example) occasionally with some other python scripts that will be started later by myself or another program and will end just after sending the string.
I can't use subprocess here because my main script doesn't know when the other scripts will run or what code they will execute.
I'm thinking of making the main script listen on a local port and making the other scripts send it the strings on that port, but is there a better way to do it ? | 2013/04/25 | [
"https://Stackoverflow.com/questions/16213235",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1112326/"
] | zeromq: <http://www.zeromq.org/> - is the best solution for interprocess communication imho and has an excellent binding for python: <http://www.zeromq.org/bindings:python> | Since the "main" script looks like a service you can enhance it with a web API. [bottle](http://bottlepy.org/) is the perfect solution for this. With this additional code your python script is able to receive requests and process them:
```
import json
from bottle import run, post, request, response
@post('/process')
def my_process():
req_obj = json.loads(request.body.read())
# do something with req_obj
# ...
return 'All done'
run(host='localhost', port=8080, debug=True)
```
The client script may use the httplib to send a message to the server and read the response:
```
import httplib
c = httplib.HTTPConnection('localhost', 8080)
c.request('POST', '/process', '{}')
doc = c.getresponse().read()
print doc
# 'All done'
``` | 4,367 |
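The plain listen-on-a-port idea from the question can also be done with just the standard library via `multiprocessing.connection`; a minimal sketch (the authkey is illustrative, and port 0 asks the OS for a free port):

```python
import json
import threading
from multiprocessing.connection import Listener, Client

AUTHKEY = b'secret'  # illustrative shared key
received = []

# The long-running "main" script: bind first (port 0 picks a free port),
# then accept one sender and read its message.
listener = Listener(('localhost', 0), authkey=AUTHKEY)

def main_loop():
    with listener.accept() as conn:
        received.append(json.loads(conn.recv()))
    listener.close()

server = threading.Thread(target=main_loop)
server.start()

# A short-lived "other" script: connect, send one JSON string, exit.
with Client(listener.address, authkey=AUTHKEY) as client:
    client.send(json.dumps({'msg': 'hello'}))

server.join()
print(received)  # [{'msg': 'hello'}]
```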
34,471,188 | Doing python exercises, I've run into a problem with a string:
```
#!/usr/bin/python
str = 'mandarino'
indice = len(str)-1
#print ("indice is:",indice)
while indice > 0:
lett = str[indice]
print (lett)
indice = indice -1
```
Removing the "-1", the result is:
```
IndexError: string index out of range
``` | 2015/12/26 | [
"https://Stackoverflow.com/questions/34471188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4075480/"
] | ```
while indice > 0:
```
should be
```
while indice >= 0:
```
to print the first character (index `0`) at last.
---
BTW, if you use [`reversed`](https://docs.python.org/3/library/functions.html#reversed), you don't need to calculate index yourself:
```
s = 'mandarino'
for ch in reversed(s):
print(ch)
```
Side note: Don't use `str` as a variable name. It will shadow a built in function/type [`str`](https://docs.python.org/3/library/functions.html#str). | Though the above answers are correct... this is a more `pythonic` way...
```
string = 'mandarino'
indice = len(string)
while indice > 0:
indice -= 1
print (string[indice]),
``` | 4,369 |
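For completeness, the usual one-liner reverses the string with a negative-step slice:

```python
s = 'mandarino'
print(s[::-1])  # oniradnam

# Or character by character, same as the loops above:
for ch in s[::-1]:
    print(ch)
```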
43,957,412 | Is it possible to extract the text information from a popup page automatically using python?
I have google play store app link :
<https://play.google.com/store/apps/details?id=com.facebook.katana>
If you scroll down to the "ADDITIONAL INFORMATION" section, you will find "Permissions". Clicking 'View details' underneath will pop up a page. Is the text information in the popup extractable?
And how do I get the information from the main page source if it is doable?
Thanks a lot. | 2017/05/13 | [
"https://Stackoverflow.com/questions/43957412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5090248/"
] | You'll need to do the following:
1) Set up a webdriver to control the website.
<https://sites.google.com/a/chromium.org/chromedriver/getting-started>
2) Right click "view details" and select inspect source. This will open the source code of the page. The highlighted portion corresponds to that button. You can right click and copy the xpath and use this to call a click function.
3) Once the new page opens, navigate your driver to this page and follow the same instructions as in step 2 to select the text you want. You can then use the innerhtml function to grab the text from this element. | This is going to be rather complicated: you'll have to dig through the HTML to find out what the button does (the link is actually a `button` element). The best would be to use a Google Play Store API, which doesn't exist as of right now. The easiest option would therefore be to go through a third-party API which would crawl the play store for you. Here's an [example](https://apptweak.io/api).
I won't walk you through the whole process, but you'll probably have to use the [requests](http://docs.python-requests.org/en/master/) module. | 4,373 |
9,687,922 | I recently built an application, for a client, which has several python files. I use ubuntu, and now that I am finished, I would like to give this to the client in a way that would make it easy for her to use in windows.
I have looked into py2exe with wine, as well as cx\_freeze and some other stuff, but cannot find a straightforward tutorial or useful documentation for turning many python files in ubuntu into an easy-to-use windows application or executable or anything really.
Thanks! | 2012/03/13 | [
"https://Stackoverflow.com/questions/9687922",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1266969/"
] | [This page](http://bytes.com/topic/python/answers/26340-linux-wine-py2exe) appears to have a solution, as the asker didn't reply:
1. Install WINE.
2. Use WINE to install Python 2.3.
3. Use WINE to install py2exe.
4. Make a setup.py file for py2exe to compile your script:
>
>
> ```
> from distutils.core import setup
> import py2exe
>
> setup(name="vervang",
> scripts=["vervang.py"],
> )
>
> ```
>
>
* Run `wine python.exe setup.py py2exe`
[This page](http://wiki.python.org/moin/Py2Exe) says the resulting binaries might not be valid Win32 executables, though. | py2exe will not work on linux. Try [pyinstaller](http://www.pyinstaller.org/); it is a pure python implementation that will work on linux, mac and windows. | 4,374
71,040,315 | I have two builds at the same time when doing PR.
[](https://i.stack.imgur.com/QqREB.png)
According to docs, that could be turned off via [web interface](https://docs.travis-ci.com/user/web-ui/#build-pushed-branches)
[](https://i.stack.imgur.com/r9zaa.png)
I turned that off (I want to have only the PR build) and also added `only` to `.travis.yml`, but I still have two builds; the branch build now just sits in the expected stage. In the Travis web UI, no more builds are created for the branch.
[](https://i.stack.imgur.com/ukPJt.png)
```
branches:
only:
- master
language: python
os: linux
dist: xenial
jobs:
include:
- name: pytest
python:
- 3.7
install:
- pip install -U pip
- pip install -U pytest
- pip install -U PyYAML
- pip install -U Cerberus
script:
- pytest -vvs
``` | 2022/02/08 | [
"https://Stackoverflow.com/questions/71040315",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5712858/"
] | I had to set Gpu Acceleration for the integrated terminal from default `auto` to either `off` or `canvas`.
In the settings.json that is achieved with:
```json
{
"terminal.integrated.gpuAcceleration": "canvas",
}
``` | Try changing editor font size and window zoom level:
```
{
"window.zoomLevel": -1,
"editor.fontSize": 14,
"terminal.integrated.fontSize": 12,
}
``` | 4,375 |
1,643,643 | I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if there is group1 in one sh.cell(i, 1), the list is appended and if there is group1, group2 etc., the hlo is appended. | 2009/10/29 | [
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] | [argparse](http://docs.python.org/dev/library/argparse.html) is another powerful, easy to use module that parses sys.argv for you. Very useful for creating command line scripts. | I believe this would work, and would avoid iterating over sys.argv:
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value in sys.argv[1:]:
hlo.append(sh.cell(i, 8).value)
``` | 4,376 |
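A sketch of the argparse route suggested above (the argument name is illustrative): collecting the groups with `nargs='+'` gives a list you can run the same membership test against as with `sys.argv[1:]`:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('groups', nargs='+', help='one or more group names')
args = parser.parse_args(['group1', 'group2', 'group3'])

print(args.groups)              # ['group1', 'group2', 'group3']
print('group2' in args.groups)  # True
```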
2,832,646 | My dir layout (I am in a.py):
```
my_Project
|----blog
|-----__init__.py
|-----a.py
|-----blog.py
```
when i do 'from blog import something' in a.py, it shows this error:
```
from blog import BaseRequestHandler
ImportError: cannot import name BaseRequestHandler
```
i think it imports the blog folder, not blog.py
so how do i import blog.py?
**updated**
when i use 'blog.blog', it shows this:
```
from blog.blog import BaseRequestHandler
ImportError: No module named blog
```
**updated2**
my sys.path is :
```
['D:\\zjm_code', 'D:\\Python25\\lib\\site-packages\\setuptools-0.6c11-py2.5.egg', 'D:\\Python25\\lib\\site-packages\\whoosh-0.3.18-py2.5.egg', 'C:\\WINDOWS\\system32\\python25.zip', 'D:\\Python25\\DLLs', 'D:\\Python25\\lib', 'D:\\Python25\\lib\\plat-win', 'D:\\Python25\\lib\\lib-tk', 'D:\\Python25', 'D:\\Python25\\lib\\site-packages', 'D:\\Python25\\lib\\site-packages\\PIL']
zjm_code
|-----a.py
|-----b.py
```
a.py is :
```
c="ccc"
```
b.py is :
```
from a import c
print c
```
and when i execute b.py, it shows this:
```
> "D:\Python25\pythonw.exe" "D:\zjm_code\b.py"
Traceback (most recent call last):
File "D:\zjm_code\b.py", line 2, in <module>
from a import c
ImportError: cannot import name c
``` | 2010/05/14 | [
"https://Stackoverflow.com/questions/2832646",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/234322/"
] | When you are in `a.py`, `import blog` should import the local `blog.py` and nothing else. Quoting the [docs](http://docs.python.org/tutorial/modules.html#the-module-search-path):
>
> modules are searched in the list of directories given by the variable sys.path which is initialized from the directory containing the input script
>
>
>
So my guess is that somehow, the name `BaseRequestHandler` is not defined in the file `blog.py`. | what happens when you:
```
import blog
```
Try outputting your sys.path, in order to make sure that you have the right dir to call the module from. | 4,386 |
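For instance, a quick way to dump the search path and see which directory wins:

```python
import sys

# sys.path[0] is the directory of the script being run; a package named
# 'blog' earlier in this list shadows any blog.py that appears later.
for entry in sys.path:
    print(entry)
```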
16,168,836 | Here's a python-code-snippet:
```
import re
VARS='Variables: "OUTPUTFOLDER=installers","SETUP_ORDER=Product 4,Product 4 Library","SUB_CONTENTS=Product 4 Library","SUB_CONTENT_SIZES=9364256","SUB_CONTENT_GROUPS=Product 4 Library","SUB_CONTENT_DESCRIPTIONS=","SUB_CONTENT_GROUP_DESCRIPTIONS=","SUB_DISCS=Product 4,Product Disc",SUB_FILENAMES='
comp = re.findall(r'\w+=".*?"', VARS)
for var in comp:
print var
```
This is the output currently:
```
SUB_CONTENT_DESCRIPTIONS=","
SUB_CONTENT_GROUP_DESCRIPTIONS=","
```
However I'd like the output to extract all elements so it looks like this:
```
"OUTPUTFOLDER=installers"
"SETUP_ORDER=Product 4, Product 4 Library"
"SUB_CONTENTS=Product 4"
"SUB_CONTENT_SIZES=9364256"
...
```
What is wrong with my regex-pattern? | 2013/04/23 | [
"https://Stackoverflow.com/questions/16168836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/446835/"
] | ```
$(document).ready(function(){
$('#commentform').submit(function(){
var postname = $('h2.post-title').text();
ga('send', 'event', 'Engagement', 'Comment', postname, 5);
});
});
```
First of all. This code assigns the text of a `h2` tag with class `post-title` found in the `document`. A way more reliable way to get the title of the post would be an id.
Secondly, it may not work, because the form is submited before the Google Analitycs code fires. So you should stop the default behaviour and submit the form after the analitycs finishes sending it's data. (See: <https://developers.google.com/analytics/devguides/collection/analyticsjs/advanced#hitCallback>)
```
$( document ).ready( function() {
$( document ).on( 'submit', 'form#commentform', function( e ) {
var postname = $( '#post-title' ).text();
ga( 'send', {
'hitType': 'event',
'eventCategory': 'Engagement',
'eventAction': 'Comment',
'eventLabel': postname,
'eventValue': 5,
'hitCallback': function() {
//now you can submit the form
//$('#commentform').off('submit').trigger('submit');
$('#commentform').off('submit'); //unbind the event
$('#commentform')[0].submit(); //fire DOM element event, not jQuery event
}
});
return false;
});
});
```
**Edit:**
I just realised the code from `hitCallback` might not work. The revised version should call the event of the DOM element and in result - send the form.
**Edit2:**
Corrected the event binding in case the form doesn't exist when document.ready() is fired | Hard to tell without looking at an actual page, but likely the browser is redirecting to the form's submission before ga's network call is made. You'd need a way to wait for ga to finish, then finish submitting the form. | 4,387 |
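Back to the regex in the question itself: in the sample string the double quotes wrap the whole `NAME=value` pair, so a pattern that expects a quote right after the `=` only matches the empty-valued entries. Putting the quotes around the pair instead extracts every element; a sketch on a shortened sample:

```python
import re

VARS = 'Variables: "OUTPUTFOLDER=installers","SETUP_ORDER=Product 4,Product 4 Library","SUB_CONTENT_DESCRIPTIONS="'

# Quotes around the whole pair; the value is anything up to the closing quote.
for var in re.findall(r'"(\w+=[^"]*)"', VARS):
    print(var)
```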
71,672,487 | Can't import `Quartz` package.
I have installed it with this command `pip install pyobjc-framework-Quartz`. Tried reinstalling python, also tried `python -m pip install ...`. With `python2` or `sudo python3`, everything works fine but `python3` is giving me this error message every time I try importing `Quartz`
Python version - 3.10.4
Mac version - Big Sur 11.6.5
```
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import Quartz
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/Quartz/__init__.py", line 6, in <module>
import AppKit
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/AppKit/__init__.py", line 10, in <module>
import Foundation
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/Foundation/__init__.py", line 9, in <module>
import CoreFoundation
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/CoreFoundation/__init__.py", line 9, in <module>
import objc
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/objc/__init__.py", line 6, in <module>
from . import _objc
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/objc/_objc.cpython-310-darwin.so, 2): Symbol not found: _FSPathMakeRef
Referenced from: /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/objc/_objc.cpython-310-darwin.so
Expected in: flat namespace
in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/objc/_objc.cpython-310-darwin.so
``` | 2022/03/30 | [
"https://Stackoverflow.com/questions/71672487",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16965639/"
] | For investigation purposes, can you try :
```
cd /tmp
python3 -m venv venv
source venv/bin/activate
pip install pyobjc-framework-Quartz
python your-script.py
```
Can you try this to see if it works :
```
env -i /Library/Frameworks/Python.framework/Versions/3.10/bin/python3 my_script.py
```
You may have files only accessible by `root`, try to change ownership:
```
sudo chown -R $(id -u):$(id -g) /Library/Frameworks/Python.framework/Versions/3.10
```
and run `env -i ...` again.
If you run :
```
/Library/Frameworks/Python.framework/Versions/3.10/bin/python3 -c 'import sys;print(sys.path)'
sudo /Library/Frameworks/Python.framework/Versions/3.10/bin/python3 -c 'import sys;print(sys.path)'
```
Is there any difference between the two ?
Try following to see if it improves :
```
sudo /Library/Frameworks/Python.framework/Versions/3.10/bin/python3 -m pip install pyobjc-framework-Quartz
```
Try this to see if there is anything unusual :
```
/Library/Frameworks/Python.framework/Versions/3.10/bin/python3 -X importtest -v -c 'import Quartz'
``` | You might need to try:
```py
python3 -m pip install [...]
```
Hope this will help. | 4,388
40,909,099 | I'm new to image processing and I'm really having a hard time understanding stuff...so the idea is that how do you create a matrix from a binary image in python?

to something like this:

It's not the same image, but the point is there.
Thank you for helping, I appreciate it cheers | 2016/12/01 | [
"https://Stackoverflow.com/questions/40909099",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6019385/"
] | **Using cv2** - Read more [here](http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_image_display/py_image_display.html)
```
import cv2
img = cv2.imread('path/to/img.jpg')
resized = cv2.resize(img, (128, 128), cv2.INTER_LINEAR)
print resized
```
**Using skimage** - Read more [here](http://scikit-image.org/docs/dev/api/skimage.html)
```
import skimage.io as skio
faces = skio.imread_collection('path/to/images/*.pgm',conserve_memory=True) # can load multiple images at once
```
**Using Scipy** - Read more [here](http://www.scipy-lectures.org/advanced/image_processing/)
```
from scipy import misc
pic = misc.imread('path/to/img.jpg')
print pic
```
**Plotting Images**
```
import matplotlib.pyplot as plt
plt.imshow(faces[0],cmap=plt.cm.gray_r,interpolation="nearest")
plt.show()
``` | example I am currently working with.
====================================
```
"""
A set of utilities that are helpful for working with images. These are utilities
needed to actually apply the seam carving algorithm to images
"""
from PIL import Image
class Color:
"""
A simple class representing an RGB value.
"""
def __init__(self, r, g, b):
self.r = r
self.g = g
self.b = b
def __repr__(self):
return f'Color({self.r}, {self.g}, {self.b})'
def __str__(self):
return repr(self)
def read_image_into_array(filename):
"""
Read the given image into a 2D array of pixels. The result is an array,
where each element represents a row. Each row is an array, where each
element is a color.
See: Color
"""
img = Image.open(filename, 'r')
w, h = img.size
pixels = list(Color(*pixel) for pixel in img.getdata())
return [pixels[n:(n + w)] for n in range(0, w * h, w)]
def write_array_into_image(pixels, filename):
"""
Write the given 2D array of pixels into an image with the given filename.
The input pixels are represented as an array, where each element is a row.
Each row is an array, where each element is a color.
See: Color
"""
h = len(pixels)
w = len(pixels[0])
img = Image.new('RGB', (w, h))
output_pixels = img.load()
for y, row in enumerate(pixels):
for x, color in enumerate(row):
output_pixels[x, y] = (color.r, color.g, color.b)
img.save(filename)
``` | 4,389 |
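Whichever library loads the image, the step that turns grayscale values into the 0/1 matrix shown in the question is a plain threshold; sketched here on a nested list (the threshold value is illustrative):

```python
THRESHOLD = 128  # illustrative cut-off for 8-bit grayscale values

gray = [
    [0, 200, 255],
    [90, 130, 10],
]

# 1 where the pixel is at least the threshold, 0 elsewhere.
binary = [[1 if px >= THRESHOLD else 0 for px in row] for row in gray]
print(binary)  # [[0, 1, 1], [0, 1, 0]]
```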
67,740,573 | I have a huge file of around 5-10 GBs which has syntax as shown below.
"some text" condition1
"some text" condition2
"some text" condition3
"some text" condition1
"some text" condition4
& so on
The intent is to write a fast & efficient code to create separate files to store this text info based on conditions. All lines with condition1 will go to a file named "condition1.txt".
The limitation is that we do not know the unique conditions.
How can I dynamically generate new file handlers on the fly while reading the file line by line and keep track of these handlers using condition as key and handler as the value in a python dictionary? I can use other data structures as well. Need suggestions ! | 2021/05/28 | [
"https://Stackoverflow.com/questions/67740573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4991138/"
] | The first thing that comes to mind for me would be to use the append feature on Python file handlers. You could do something like this for each line of text:
```py
def writecond(text, cond):
    fname = cond + '.txt'
    with open(fname, 'a') as file:
        file.write(text)
```
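A self-contained way to sanity-check that append-based idea (the `write_line` name and the temporary directory are only for this demo):

```py
import os
import tempfile

def write_line(text, cond, directory):
    # 'a' mode appends, and also creates the file on first use,
    # so unseen conditions need no special handling
    path = os.path.join(directory, cond + '.txt')
    with open(path, 'a') as f:
        f.write(text + '\n')

tmpdir = tempfile.mkdtemp()
write_line('"some text"', 'condition1', tmpdir)
write_line('"more text"', 'condition1', tmpdir)
write_line('"some text"', 'condition2', tmpdir)

with open(os.path.join(tmpdir, 'condition1.txt')) as f:
    condition1_lines = f.read().splitlines()
```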
Another thing you could do is have a `dict` which maps your condition text to a list of open file handlers (although I think there might be a hard limit to the number of handlers you can have on some systems), but just be sure to close all of them before your function exits!
EDIT:
If you want the dictionary case, here's the code for that:
```py
fh_assign = {}

def writeline(text, condition):
    if condition not in fh_assign.keys():
        fh = open(f'{condition}.txt', 'w')
        fh.write(text)
        fh_assign[condition] = fh
    else:
        fh_assign[condition].write(text)
```
Once you're done with the calls to `writeline`, just iterate through the list as follows and close all the connections.
```py
for fh in fh_assign.values():
    fh.close()
``` | I figured out one option of handling objects dynamically and keeping track of it.
```
file_handler = {}
with open(file) as f:
    for line in f:
        if line.split()[1] not in file_handler.keys():
            file_handler[line.split()[1]] = open(line.split()[1], "w")
            file_handler[line.split()[1]].write(line.split()[0])
        else:
            file_handler[line.split()[1]].write(line.split()[0])
# no f.close() needed here: the with-block closes f on exit

for key in file_handler.keys():
    file_handler[key].close()
``` | 4,390 |
42,349,982 | I am trying to read a JSON file in Python and it reads successfully; however, some top values are skipped. I am trying to debug the reason. Here is the code.
```
data = json.load(open('pre.txt'))
for key, val in data['outputs'].items():
    print key
    print data['outputs'][key]['feat_left']
```
**EDIT**
Here is a snapshot of the file. I want to read each `key` and its `feat_left` from `outputs`:
```
{
    "outputs": {
        "/home/113267806.jpg": {
            "feat_left": [
                2.369331121444702,
                -1.1544183492660522
            ],
            "feat_right": [
                2.2432730197906494,
                -0.896904468536377
            ]
        },
        "/home/115061965.jpg": {
            "feat_left": [
                1.8996189832687378,
                -1.3713303804397583
            ],
            "feat_right": [
                1.908974051475525,
                -1.4422794580459595
            ]
        },
        "/home/119306609.jpg": {
            "feat_left": [
                -0.7765399217605591,
                -1.690917730331421
            ],
            "feat_right": [
                -1.1964678764343262,
                -1.9359161853790283
            ]
        }
    }
}
```
P.S: Thanks to Rahul K P for the code | 2017/02/20 | [
"https://Stackoverflow.com/questions/42349982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7396273/"
] | No top values are skipped. There are 45875 items in your `data['outputs']` object. Try the following code:
```
len(data['outputs'].items())
```
And there are exactly 45875 items in your JSON file. Just note that a JSON object is an unordered collection in Python, like a `dict`. | If you just want to print the content of the file by using a for-loop, you can try like this:
```
data = json.load(open('pre.txt'))
for key, val in data['outputs'].items():
    print key
    print val['feat_left']  # val is a dict, so index it by key; this prints the array below "feat_left", if the json is consistent
```
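For what it's worth, here is a runnable sketch of the same loop against an inline copy of the data (so no `pre.txt` is needed; `val` is a dict, so it is indexed by key rather than by position):

```py
import json

doc = json.loads("""
{"outputs": {
    "/home/113267806.jpg": {"feat_left": [2.37, -1.15], "feat_right": [2.24, -0.90]},
    "/home/115061965.jpg": {"feat_left": [1.90, -1.37], "feat_right": [1.91, -1.44]}
}}
""")

# collect each key and its feat_left array
feat_left_by_image = {}
for key, val in doc['outputs'].items():
    feat_left_by_image[key] = val['feat_left']
```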
A more robust solution could look like this:
```
data = json.load(open('pre.txt'))
for key, val in data['outputs'].items():
    print key
    for feat_key, feat_val in val.items():
        if feat_key == 'feat_left':
            print feat_val
```
Code is untested, give it a try. | 4,391 |
52,318,106 | HTML: I have a 'sign-up' form in a modal (index.html)
JS: The form data is posted to a python flask function: /signup\_user
```
$(function () {
    $('#signupButton').click(function () {
        $.ajax({
            url: '/signup_user',
            method: 'POST',
            data: $('#signupForm').serialize()
        })
        .done(function (data) {
            console.log('success callback 1', data)
        })
        .fail(function (xhr) {
            console.log('error callback 1', xhr);
        })
    })
});
```
Python/Flask:
```
@app.route('/signup_user', methods=['POST'])
def signup_user():
    # Request data from form and send to database
    # Check that username isn't already taken

    # if the username is not already taken
    if new_user:
        users.insert_one(new_user)
        message = "Success"
        return message
    # else if username is taken, send message to user viewable in the modal
    else:
        message = "Failure"
        return message

    return redirect(url_for('index'))
```
I cannot figure out how to get the flask function to return the "Failure" message to the form in the modal so that the user can change the username.
Right now, as soon as I click the submit button the form/modal disappears and the 'Failure' message refreshes the entire page to a blank page with the word Failure.
How do I get the error message back to display in the form/modal? | 2018/09/13 | [
"https://Stackoverflow.com/questions/52318106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5632508/"
] | I would do something like this:
```
student_marks.group_by { |k, v| v }.map { |k, v| [k, v.map(&:first)] }.to_h
#=> { 50 => ["Alex", "Matt"], 54 => ["Beth"]}
``` | Another way could be
```
student_marks.each.with_object(Hash.new([])){ |(k,v), h| h[v] += [k] }
#=> {50=>["Alex", "Matt"], 54=>["Beth"]}
``` | 4,396 |
34,734,436 | I am using pip on EC2 now, python version is 2.7. 'sudo pip' suddenly doesn't work anymore.
```none
[ec2-user@ip-172-31-17-194 ~]$ sudo pip install validate_email
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3138, in <module>
@_call_aside
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3124, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3151, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 663, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 676, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 849, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==6.1.1' distribution was not found and is required by the application
[ec2-user@ip-172-31-17-194 ~]$ which pip
/usr/local/bin/pip
``` | 2016/01/12 | [
"https://Stackoverflow.com/questions/34734436",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5326788/"
] | first, `which pip` is not going to return the same result as `sudo which pip`, so you should check that out first.
you may also consider not running pip as sudo at all.
[Is it acceptable & safe to run pip install under sudo?](https://stackoverflow.com/questions/15028648/is-it-acceptable-safe-to-run-pip-install-under-sudo)
second, can you try this:
```
easy_install --upgrade pip
```
if you get an error here (regarding pip's wheel support), try this, then run the above command again:
```
easy_install -U setuptools
``` | I fixed the same error ("The 'pip==6.1.1' distribution was not found") by using the tip of Wesm :
```
$> which pip && sudo which pip
/usr/local/bin/pip
/usr/bin/pip
```
So, it seems that the "pip" of the average user and of root are not the same. Will fix it later.
Then I ran "sudo easy_install --upgrade pip" => succeeded
Then I used "sudo /usr/local/bin/pip install " and it works. | 4,402 |
66,559,129 | I am writing a table to MySQL from Python using the pymysql `to_sql` function.
I have 1000 rows with 200 columns.
The query to connect to MySQL is below:
```
engine = create_engine("mysql://hostname:password#@localhostname/dbname")
conn = engine.connect()
writing query: df.to_sql('data'.lower(),schema=schema,conn,'replace',index=False)
```
I am getting below error:
```
OperationalError: (pymysql.err.OperationalError) (1118, 'Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.')
```
I have changed the column dtypes to string, but I am still getting the above error. Please help me to solve this error.
I am trying to save the table like below. Here, I am providing a few columns of the
create table query. I am getting the error while creating the table while saving:
```
CREATE TABLE dbname.table name(
    `08:00:00` TEXT,
    `08:08:00` TEXT,
    `08:16:00` TEXT,
    `08:24:00` TEXT,
    `08:32:00` TEXT,
    `08:40:00` TEXT,
    `08:48:00` TEXT,
    `08:56:00` TEXT,
    `09:04:00` TEXT,
    `09:12:00` TEXT,
    `09:20:00` TEXT,
    `09:28:00` TEXT)
``` | 2021/03/10 | [
"https://Stackoverflow.com/questions/66559129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12249443/"
] | You can store the data with 3 methods-
1. Use localStorage
if you are using a JSON object then you can use localStorage.setItem("data123", JSON.stringify(data))
and fetch the data using JSON.parse(localStorage.getItem("data123"))
2. sessionStorage
Syntax is the same as localStorage. Just replace `local` with `session`.
Difference - local is persistent; session gets deleted when the page is closed.
3. React state (preferred method if persistent storage is not required)
You can use useState hook (for functional component) or state = {} (for classful components).
Usage and examples are readily available one search away.
Note: If you are using React state and the component which takes in the input data is inside a parent component, then define the state in the parent and pass the hook definitions to the child component; otherwise, when the component changes, the hooks will be lost if defined in the child component. | I think the best way to save form data locally is to preserve it in your form handler class: you can create a new Handler class with a setter and getter which store the form fields' data/values as key-value pairs in an object, which you update whenever the fields update. | 4,408 |
12,028,496 | How do I pass a query string to an HTML frame?
I have the following HTML in index.html:
```
<HTML>
  <FRAMESET rows="200, 200" border=0>
    <FRAMESET>
      <FRAME name=top scrolling=no src="top.html">
      <FRAME name=main scrolling=yes src="/cgi-bin/main.py">
    </FRAMESET>
  </FRAMESET>
</HTML>
```
The frame src is main.py which is a python cgi script. This is in main.py:
```
import cgi
form = cgi.FieldStorage()
test = form.getvalue('test')
```
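(As a side note, the query-string parsing that `cgi.FieldStorage` does can be sketched with the standard library alone, which is handy for testing `main.py`'s logic without a web server; `parse_qs` lives in `urllib.parse` on Python 3 and in `urlparse` on the Python 2 of this question:)

```py
from urllib.parse import parse_qs  # urlparse.parse_qs on Python 2

# what main.py would see in the QUERY_STRING environment variable
# for a request to /cgi-bin/main.py?test=abcde
query = parse_qs('test=abcde')
test = query['test'][0]
```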
Suppose I call the url using index.html?test=abcde. How do I pass the query string to main.py? Is this possible using javascript? | 2012/08/19 | [
"https://Stackoverflow.com/questions/12028496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65406/"
] | [This](https://stackoverflow.com/a/2880929/1273830) should help you get variables from the query string, which you can use to build your custom queryString. Or if you want to pass the query string as-is to the frame, then you could get it in a simpler fashion.
`var queryString = window.location.href.split('index.html?')[1];`
Now, as for passing it to the frame, it should be easy because you'd just be appending the query string to the frame element's `src` attribute. | Not sure if this will work (untested), but perhaps you can load the query parameters onload using jQuery? Here is a proof of concept:
```
<html>
  <head>
    <!-- Load jQuery here, then do the following: -->
    <script type="text/javascript">
      $(document).ready(function(){
        // window.location.href holds the current window's URL with its query params;
        // you will have to write this function (fairly easy).
        var q = get_query_parameters(window.location.href);
        var top = $("<frame name=top scrolling=no src='top.html'></frame>"),
            main = $("<frame name=main scrolling=yes></frame>");
        main.attr("src", "/cgi-bin/main.py?" + q);
        $("#myframeset").append(top).append(main);
      });
    </script>
  </head>
  <body>
    <FRAMESET rows="200, 200" border=0>
      <FRAMESET id='myframeset'>
      </FRAMESET>
    </FRAMESET>
  </body>
</html>
```
Hope that helps. | 4,409 |
22,447,986 | I have the following list of strings:
```
mystring = [
    'FOO_LG_06.ip',
    'FOO_LV_06.ip',
    'FOO_SP_06.ip',
    'FOO_LN_06.id',
    'FOO_LV_06.id',
    'FOO_SP_06.id']
```
What I want to do is to print it out so that it gives this:
```
LG.ip
LV.ip
SP.ip
LN.id
LV.id
SP.id
```
How can I do that in python?
I'm stuck with this code:
```
for soth in mystring:
    print soth
```
In Perl we can do something like this for regex capture:
```
my ($part1,$part2) = $soth =~ /FOO_(\w+)_06(\.\w{2})/;
print "$part1";
print "$part2\n";
``` | 2014/03/17 | [
"https://Stackoverflow.com/questions/22447986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1380929/"
] | If you want to do this in a manner similar to the one you know in perl, you can use `re.search`:
```
import re
mystring = [
    'FOO_LG_06.ip',
    'FOO_LV_06.ip',
    'FOO_SP_06.ip',
    'FOO_LN_06.id',
    'FOO_LV_06.id',
    'FOO_SP_06.id']

for soth in mystring:
    matches = re.search(r'FOO_(\w+)_06(\.\w{2})', soth)
    print(matches.group(1) + matches.group(2))
```
`matches.group(1)` contains the first capture, `matches.group(2)` contains the second capture.
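An equivalent one-liner with `re.sub`, using backreferences to the same two capture groups instead of explicit `group` calls (shown here with a trimmed list):

```py
import re

mystring = ['FOO_LG_06.ip', 'FOO_LV_06.ip', 'FOO_LN_06.id']

# \1 and \2 are the two captures from the pattern in the answer above
out = [re.sub(r'FOO_(\w+)_06(\.\w{2})', r'\1\2', s) for s in mystring]
```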
[ideone demo](http://ideone.com/UjiNFm). | different regex:
```
>>> p = '[^_]+_([A-Z]+)[^.]+(\..*)'
>>> for soth in mystring:
...     match = re.search(p, soth)
...     print ''.join([match.group(1), match.group(2)])
```
Output:
```
LG.ip
LV.ip
SP.ip
LN.id
LV.id
SP.id
``` | 4,410 |
1,942,295 | Noob @ programming with python and pygtk.
I'm creating an application which includes a couple of dialogs for user interaction.
```
#!/usr/bin/env python
import gtk

info = gtk.MessageDialog(type=gtk.DIALOG_INFO, buttons=gtk.BUTTONS_OK)
info.set_property('title', 'Test info message')
info.set_property('text', 'Message to be displayed in the messagebox goes here')
if info.run() == gtk.RESPONSE_OK:
    info.destroy()
```
This displays my message dialog, however, when you click on the 'OK' button presented in the dialog, nothing happens, the box just freezes.
What am I doing wrong here? | 2009/12/21 | [
"https://Stackoverflow.com/questions/1942295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/234654/"
] | can you give me a last chance? ;)
there are some errors in your code:
* you did not close a bracket
* your syntax in `.set_property` is wrong: use: `.set_property('property', 'value')`
but i think they are copy/paste errors.
try this code, it works for me. maybe did you forget the `gtk.main()`?
```
import gtk

info = gtk.MessageDialog(buttons=gtk.BUTTONS_OK)
info.set_property('title', 'Test info message')
info.set_property('text', 'Message to be displayed in the messagebox goes here')

response = info.run()
if response == gtk.RESPONSE_OK:
    print 'ok'
else:
    print response
info.destroy()

gtk.main()
``` | @mg
My bad. Your code is correct (and I guess my initial code was too)
The reason my dialog was remaining on the screen is because my gtk.main loop is running on a separate thread.
So all I had to do was enclose your code (corrected version of mine) in between a
```
gtk.gdk.threads_enter()
```
and a
```
gtk.gdk.threads_leave()
```
and there it was.
Thanks for your response. | 4,411 |
15,852,455 | I need scipy on cygwin, so I figured the quickest way to make it work would have been installing enthought python. However, I then realized I have to make cygwin aware of enthought before I can use it, e.g. so that calling Python from the cygwin shell I get the enthought python (with scipy) rather than the cygwin one.
How do I do that?
I can guess my question is easy, but I'm just learning about all of this and so please be patient :-) | 2013/04/06 | [
"https://Stackoverflow.com/questions/15852455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1714385/"
] | There are better options than periodically polling the value of the variable. Polling could miss a variable change, and it requires computational resources even if nothing is happening.
You could wrap the variable in a wrapper class and change it only through a setter.
If you're using Eclipse, you can ask the debugger to stop whenever the value changes. | Using a wrapper class for your variable like:
```
class VarWrapper {
    private Object myVar;

    public Object getMyVar() {
        return myVar;
    }

    public void setMyVar(Object myVar) {
        // [1] Here you'll know myVar changed
        this.myVar = myVar;
    }
}
``` | 4,412 |
30,023,898 | I'm creating a little calculator as a project, and I want it to restart when I type yes when it's done. Problem is, I can't seem to figure out how. I'm not a whiz when it comes to Python.
```
import sys

OPTIONS = ["Divide", "divide", "Multiply", "multiply", "Add", "add", "Subtract", "subtract"]

def userinput():
    while True:
        try:
            number = int(input("Number: "))
            break
        except ValueError:
            print("NOPE...")
    return number

def operation():
    while True:
        operation = input("Multiply/Divide/Add: ")
        if operation in OPTIONS:
            break
        else:
            print("Not an option.")
    return operation

def playagain():
    while True:
        again = input("Again? Yes/No: ")
        if again == "Yes" or again == "yes":
            break
        elif again == "No" or again == "no":
            sys.exit(0)
        else:
            print("Nope..")

def multiply(x,y):
    z = x * y
    print(z)

def divide(x,y):
    z = x / y
    print(z)

def add(x,y):
    z = x + y
    print(z)

def subtract(x,y):
    z = x - y
    print(z)

while True:
    operation = operation()
    x = userinput()
    y = userinput()
    if operation == "add" or operation == "Add":
        add(x,y)
    elif operation == "divide" or operation == "Divide":
        divide(x,y)
    elif operation == "multiply" or operation == "Multiply":
        multiply(x,y)
    elif operation == "subtract" or operation == "Subtract":
        subtract(x,y)
    playagain()
```
I currently have a break in line 28 because I can't find out how to restart it. If anyone could help me, THANKS! | 2015/05/04 | [
"https://Stackoverflow.com/questions/30023898",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4861144/"
] | You don't need to restart your script, just have a little bit of thought about the design before you code. Taking the script you provided, there are two alterations for this issue:
```
def playagain():
while True:
again = input("Again? Yes/No: ")
if again == "Yes" or again == "yes":
return True
elif again == "No" or again == "no":
return False
else:
print("Nope..")
```
Then, where you call `playagain()`, change that to:
```
if not playagain(): break
```
I think I know why you want to restart the script, you have a bug.
Python functions are like any other object. When you say:
```
operation = operation()
```
that reassigns the reference to the `operation` function to the string returned by the function. So the second time you call it on restart it fails with:
```
TypeError: 'str' object is not callable
```
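A minimal, standalone reproduction of that shadowing problem (the `greet` name is hypothetical — nothing to do with the calculator):

```py
def greet():
    return "hi"

greet = greet()  # the name `greet` now refers to the string "hi"

try:
    greet()      # a second "call" fails: strings are not callable
    shadowed = False
except TypeError:
    shadowed = True
```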
RENAME your `operation` function something like `foperation`:
```
def foperation():
```
then:
```
operation = foperation()
```
So, the complete code becomes:
```
import sys

OPTIONS = ["Divide", "divide", "Multiply", "multiply", "Add", "add", "Subtract", "subtract"]

def userinput():
    while True:
        try:
            number = int(input("Number: "))
            break
        except ValueError:
            print("NOPE...")
    return number

def foperation():
    while True:
        operation = input("Multiply/Divide/Add: ")
        if operation in OPTIONS:
            break
        else:
            print("Not an option.")
    return operation

def playagain():
    while True:
        again = input("Again? Yes/No: ")
        if again == "Yes" or again == "yes":
            return True
        elif again == "No" or again == "no":
            return False
        else:
            print("Nope..")

def multiply(x,y):
    z = x * y
    print(z)

def divide(x,y):
    z = x / y
    print(z)

def add(x,y):
    z = x + y
    print(z)

def subtract(x,y):
    z = x - y
    print(z)

while True:
    operation = foperation()
    x = userinput()
    y = userinput()
    if operation == "add" or operation == "Add":
        add(x,y)
    elif operation == "divide" or operation == "Divide":
        divide(x,y)
    elif operation == "multiply" or operation == "Multiply":
        multiply(x,y)
    elif operation == "subtract" or operation == "Subtract":
        subtract(x,y)
    if not playagain(): break
```
There are many other improvements to this code that I could make, but let's just get this working first. | Use os.execv()....
[Restarting a Python Script Within Itself](http://blog.petrzemek.net/2014/03/23/restarting-a-python-script-within-itself/) | 4,415 |
15,612,362 | Here is python code:
```
def is_palindrome(s):
    return revers(s) == s

def revers(s):
    ret = ''
    for ch in s:
        ret = ch + ret
    return ret

print is_palindrome('RACECAR')
# that will print true
```
When I convert that function to PHP:
```
function is_palindrome($string){
    if (strrev($string) == $string) return true;
    return false;
}

$word = "RACECAR";
var_dump(is_palindrome($word));
// true
```
Both functions work fine, but how can I reverse a string with PHP in a loop?
```
$string = str_split('hello');
$output = '';
foreach($string as $c){
    $output .= $c;
}
print $output;
// output
hello
// i did this,
```
That works fine, but is there any better way to do it?
```
$string = "hello";
$lent = strlen($string);
$ret = '';
for($i = $lent; ($i > 0) or ($i == 0); $i--)
{
    $ret .= $string[$i];
    #$lent = $lent - 1;
}
print $ret;
//output
olleh
``` | 2013/03/25 | [
"https://Stackoverflow.com/questions/15612362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/740182/"
] | Replace
```
$output .= $c;
```
with
```
$output = $c . $output;
``` | strrev() is a function that reverses a string in PHP.
<http://php.net/manual/en/function.strrev.php>
```
$s = "foobar";
echo strrev($s); //raboof
```
If you want to check if a word is a palindrome:
```
function is_palindrome($word){ return strrev($word) == $word; }
$s = "RACECAR";
echo $s." is ".((is_palindrome($s))?"":"NOT ")."a palindrome";
``` | 4,418 |
24,931,465 | Hi, I am very new to Python. Here I am trying to open an xls file in Python code, but it is showing me the error below.
Code:
```
from xlrd import open_workbook
import os.path
wb = open_workbook('C:\Users\xxxx\Desktop\a.xlsx')
Error:

Traceback (most recent call last):
  File "C:\Python27\1.py", line 3, in <module>
    wb = open_workbook('C:\Users\xxxx\Desktop\a.xlsx')
  File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 429, in open_workbook
    biff_version = bk.getbof(XL_WORKBOOK_GLOBALS)
  File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 1545, in getbof
    bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8])
  File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 1539, in bof_error
    raise XLRDError('Unsupported format, or corrupt file: ' + msg)
xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found 'PK\x03\x04\x14\x00\x06\x00'
```
need help guyz | 2014/07/24 | [
"https://Stackoverflow.com/questions/24931465",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3872486/"
] | This is a version conflict issue. Your Excel sheet format and the format that xlrd expects are different. You could try to save the Excel sheet in a different format until you find what xlrd expects. | Not familiar with xlrd, but nothing wrong appears on my Mac.
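(One quick check worth doing first: the `'PK\x03\x04'` bytes in the traceback are the ZIP signature, i.e. the file really is an `.xlsx` container, which some xlrd versions cannot read. A hedged sketch of such a format sniff:)

```py
# .xlsx files are ZIP archives ('PK\x03\x04'); legacy .xls files
# are OLE2 compound documents ('\xd0\xcf\x11\xe0')
def sniff_spreadsheet(header):
    if header.startswith(b'PK\x03\x04'):
        return 'xlsx'
    if header.startswith(b'\xd0\xcf\x11\xe0'):
        return 'xls'
    return 'unknown'

# the bytes reported in the question's traceback
fmt = sniff_spreadsheet(b'PK\x03\x04\x14\x00\x06\x00')
```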
According to @jewirth, you can try to rename the suffix to xls which is the old version, and then reopen it or convert it into xlsx. | 4,424 |
33,686,880 | I have a `libpython27.a` file: how to know whether it is 32-bit or 64-bit, on Windows 7 x64? | 2015/11/13 | [
"https://Stackoverflow.com/questions/33686880",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/395857/"
] | Try `dumpbin /headers "libpython27.a"`. ([dumpbin reference](https://msdn.microsoft.com/en-us/library/c1h23y6c.aspx))
The output will contain
```
FILE HEADER VALUES
             14C machine (x86)
```
or
```
FILE HEADER VALUES
            8664 machine (x64)
```
---
Note that if you get an error message like:
```
E:\temp>dumpbin /headers "libpython27.a"
LINK: extra operand `libpython27.a'
Try `LINK --help' for more information.
```
It means there is a copy of the GNU link utility somewhere in the search path. Make sure you use the correct `link.exe` (e.g. the one provided in `C:\Program Files (x86)\Common Files\Microsoft\Visual C++ for Python\9.0\VC\bin`). It also requires `mspdb80.dll`, which is in the same folder or something in PATH, otherwise you'll get the error message:
[](https://i.stack.imgur.com/84mPY.png) | When starting the Python interpreter in the terminal/command line you may also see a line like:
>
> Python 2.7.2 (default, Jun 12 2011, 14:24:46) [MSC v.1500 64 bit
> (AMD64)] on win32
>
>
>
Where [MSC v.1500 64 bit (AMD64)] means 64-bit Python.
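Two more stdlib one-liners that report the same thing for the running interpreter (like the banner above, this tells you about the installed Python rather than about the `.a` file itself):

```py
import struct
import platform

bits = struct.calcsize("P") * 8    # pointer size of this interpreter, in bits
arch = platform.architecture()[0]  # e.g. '32bit' or '64bit'
```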
Or
Try using ctypes to get the size of a void pointer:
```
import ctypes
print ctypes.sizeof(ctypes.c_voidp)
```
It'll be 4 for 32 bit or 8 for 64 bit. | 4,427 |
68,764,541 | I was reading through the [PEP 526](https://www.python.org/dev/peps/pep-0526/) documentation and I was wondering what is the proper way to annotate a class instance.
I have not found the answer in the documentation.
I have the following module:
```py
class global_variables:
# Class body
global_variables_dictionary: global_variables = global_variables("application.yaml")
```
Is `something: <class_name> = class_name()` the correct way to do this?
Thanks | 2021/08/12 | [
"https://Stackoverflow.com/questions/68764541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10588212/"
] | **Note**: "Best Practice" for something like this is difficult to define, since everyone's situation is likely different.
That being said, one of our projects has a similar situation as yours: we use Git Flow, and our `develop` branch build numbers are always different than the `release` branch build numbers. Our potentially ideal solution has not been implemented, but would likely be similar to your suggested Approach 8, where we would inject the version into the build pipeline without it being hard-coded in a commit (i.e. don't even modify the version file at all). The downside of this though is you can't know what version is represented by a specific commit based on code alone. But you could tag the commit with a specific version, which is probably what we would do if we implemented that. We could also bake the commit ID along with version info into the artifact meta data for easy lookup.
The solution we currently use is a combination of Approaches 4, 5, and 7. We separate version files (your Approach 5), and every time we create a `release` (or `hotfix`) branch, the first commit only changes the version file to the upcoming release version (your Approach 7). We make sure that `release` *always* has the tip of `main` in it, so that anytime we deploy `release` to production we can cleanly merge `release` to `main`. (Note we still use `--no-ff` as suggested by Git Flow but the point is we *could* fast-forward if we wanted to.)
Now, after you complete the `release` branch into `main`, Git Flow suggests merging `release` back to `develop`, but we find merging `main` back to `develop` slightly more efficient so that the tip of `main` is also on `develop`, but occasionally we also merge `release` back into `develop` before deployment if important bug fixes appear on `release`. Either way, both of those merges back to `develop` will always have conflicts with the version files on `develop`, and we use your Approach 4 to automate choosing the `develop` version of those files. This enables the merge back to be fully automated, however, sometimes there are still other conflicts that have to be resolved manually, just as a course of normal development happening on `develop` and `release` simultaneously. But at least it's usually clean.
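(For reference, the usual Git mechanism for that "always keep this branch's copy of the version file" automation is a custom merge driver; the file name and driver name below are illustrative, not our exact setup:)

```
# .gitattributes — route merges of the version file through a custom driver
version.txt merge=keep-ours

# one-time setup; `true` exits 0 and leaves the current branch's copy (%A)
# untouched, so the merge always keeps "ours":
#   git config merge.keep-ours.driver true
```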
Note that a side effect of our approach is that our versions files are *always* different on `develop` and `main`, and that's fine with us. | What about using an external tool to manage the version?
We use [GitVersion](https://github.com/GitTools/GitVersion) for this. Now I am not sure if there is a smarter way, but a brute-force one is to have something like this `<version>${env.GitVersion_SemVer}</version>` in your pom.xml, where env.GitVersion\_SemVer is an output from GitVersion. | 4,430 |
26,472,868 | I have a python script (analyze.py) which takes a filename as a parameter and analyzes it. When it is done with analysis, it waits for another file name. What I want to do is:
1. Send file name as a parameter from PHP to Python.
2. Run analyze.py in the background as a daemon with the filename that came from PHP.
I can pass the parameter from PHP as a command-line argument to Python, but I cannot send a parameter to a Python script that is already running in the background.
Any ideas? | 2014/10/20 | [
"https://Stackoverflow.com/questions/26472868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1430739/"
] | The obvious answer here is to either:
1. Run `analyze.py` once per filename, instead of running it as a daemon.
2. Pass `analyze.py` a whole slew of filenames at startup, instead of passing them one at a time.
But there may be a reason neither obvious answer will work in your case. If so, then you need some form of *inter-process communication*. There are a few alternatives:
* Use the Python script's standard input to pass it data, by writing to it from the (PHP) parent process. (I'm not sure how to do this from PHP, or even if it's possible, but it's pretty simple from Python, sh, and many other languages, so …)
* Open a TCP socket, Unix socket, named pipe, anonymous pipe, etc., giving one end to the Python child and keeping the other in the PHP parent. (Note that the first one is really just a special case of this one—under the covers, standard input is basically just an anonymous pipe between the child and parent.)
* Open a region of shared memory, or an `mmap`-ed file, or similar in both parent and child. This probably also requires sharing a semaphore that you can use to build a condition or event, so the child has some way to wait on the next input.
* Use some higher-level API that wraps up one of the above—e.g., write the Python child as a simple HTTP service (or JSON-RPC or ZeroMQ or pretty much anything you can find good libraries for in both languages); have the PHP code start that service and make requests as a client. | Here is what I did.
PHP Part:
```
<?php
$param1 = "filename";
$command = "python analyze.py ";
$command .= " $param1";

$pid = popen( $command,"r");
echo "<body><pre>";
while( !feof( $pid ) )
{
    echo fread($pid, 256);
    flush();
    ob_flush();
}
pclose($pid);
?>
```
Python Part:
```
1. I used [JSON-RPC]: https://github.com/gerold-penz/python-jsonrpc to
create a http service that wraps my python script (runs forever)
2. Created a http client that calls the method of http service.
3. Printed the results in json format.
```
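As a contrast to the HTTP approach, the stdin option from the first answer can be sketched in a few lines — the daemon just reads one filename per line (the `analyze` body and the fake input stream here are placeholders; real PHP would need a writable pipe, e.g. via `proc_open`, rather than `popen`'s read-only one):

```py
def analyze(filename):
    # placeholder for the real analysis step
    return 'analyzed ' + filename

def serve(stream):
    # block on the next line of input, one filename per line, until EOF
    results = []
    for line in stream:
        name = line.strip()
        if name:
            results.append(analyze(name))
    return results

# stand-in for sys.stdin; in the daemon this would be serve(sys.stdin)
results = serve(iter(['a.txt\n', 'b.txt\n']))
```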
Works like a charm. | 4,431 |
46,877,384 | I am reading a text file in Python (500 rows) and it looks like:
```
File Input:
0082335401
0094446049
01008544409
01037792084
01040763890
```
I wanted to ask that is it possible to insert one space after 5th Character in each line:
```
Desired Output:
00823 35401
00944 46049
01008 544409
01037 792084
01040 763890
```
I have tried below code
```
st = " ".join(st[i:i + 5] for i in range(0, len(st), 5))
```
but the below output was returned on executing it:
```
00823 35401
0094 44604 9
010 08544 409
0 10377 92084
0104 07638 90
```
I am a novice in Python. Any help would make a difference. | 2017/10/22 | [
"https://Stackoverflow.com/questions/46877384",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Inside your k6 script use the url `host.docker.internal` to access something running on the host machine.
For example to access a service running on the host at `http://localhost:8080`
```js
// script.js
import http from "k6/http";
import { sleep } from "k6";
export default function () {
http.get("http://host.docker.internal:8080");
sleep(1);
}
```
Then on windows or mac this can be run with:
```sh
$ docker run -i loadimpact/k6 run - <script.js
```
for linux you need an extra flag
```sh
$ docker run --add-host=host.docker.internal:host-gateway -i loadimpact/k6 run - <script.js
```
References:
* Mac: <https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds>
* Windows: <https://docs.docker.com/docker-for-windows/networking/#known-limitations-use-cases-and-workarounds>
* Linux: <https://stackoverflow.com/a/61424570/3757139> | k6 inside the docker instance should be able to connect to the "public" IP on your host machine - the IP that is configured on your ethernet or Wifi interface. You can do a `ipconfig /all` to see all your interfaces and their IPs.
On my Mac I can do this:
```
$ python httpserv.py &
[1] 7824
serving at port 8000
$ ifconfig en1
en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether b8:09:8a:bb:f7:ed
        inet6 fe80::148f:5671:5297:fc24%en1 prefixlen 64 secured scopeid 0x5
        inet 192.168.0.107 netmask 0xffffff00 broadcast 192.168.0.255
        nd6 options=201<PERFORMNUD,DAD>
        media: autoselect
        status: active
$ echo 'import http from "k6/http"; export default function() { let res = http.get("http://192.168.0.107:8000"); console.log(res.status); };' |docker run -i loadimpact/k6 run -
```
I.e. I start a simple HTTP server on port 8000 of the host machine, then executes the k6 docker image and tells it to access a URL based on the IP address of the physical, outward-facing en1 interface on the host machine. In your case, on Windows, you can use `ipconfig` to find out your external-facing IP. | 4,432 |
36,711,810 | I'm going to come out with a disclaimer and say this is my homework problem. So I don't necessarily want you to solve it, I just want some clarification.
The exact problem is this:
>
> Write a function to swap odd and even bits in an integer with as few
> instructions as possible (e.g., bit 0 and bit 1 are swapped, bit 2 and
> bit 3 are swapped, and so on).
>
>
>
It also hints that no conditional statements are required.
I kind of looked into it and I discovered if I somehow separate the even and odd bits I can use shifts to accomplish this. What I don't understand is how to manipulate individual bits. In python (programming language I'm used to) it's easy with the index operator as you can just do number[0] for example and you can get the first bit. How do you do something like this for assembly?
EDIT:
So @jotik, thanks for your help. I implemented something like this:
```
mov edi, ebx
and edi, 0x5555555555555555
shl edi, 1
mov esi, ebx
and esi, 0xAAAAAAAAAAAAAAAA
shr esi, 1
or edi, esi
mov eax, edi
```
And when I saw the | operator, I was thinking OR was ||. Silly mistake. | 2016/04/19 | [
"https://Stackoverflow.com/questions/36711810",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1157549/"
] | In assembly one can use [bit masks](https://en.wikipedia.org/wiki/Mask_%28computing%29) together with other [bitwise operations](https://en.wikipedia.org/wiki/Bitwise_operation) to achieve your result.
```
result = ((odd-bit-mask & input) << 1) | ((even-bit-mask & input) >> 1)
```
where `odd-bit-mask` is a value with all odd bits set (`1`) and even bits unset (`0`); and `even-bit-mask` is a value with all even bits set (`1`) and odd bits unset. For 64-bit values, the odd and even bit masks would be (in hexadecimal notation) `0x5555555555555555` and `0xAAAAAAAAAAAAAAAA` respectively.
So the pseudocode of your assembly algorithm would probably look similar to the following:
```
oddbits = input & 0x5555555555555555
oddbits = oddbits << 1
evenbits = input & 0xAAAAAAAAAAAAAAAA
evenbits = evenbits >> 1
result = oddbits | evenbits
```
where `&` is a bitwise AND operation, `|` is a bitwise OR operation, `<<` and `>>` are the bitwise shift left and bitwise shift right operations respectively.
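The same pseudocode can be sketched in Python to sanity-check the masks before translating it to assembly (the 64-bit mask values are the ones discussed above; the helper name is mine):

```python
ODD_MASK = 0x5555555555555555   # bits 0, 2, 4, ... set
EVEN_MASK = 0xAAAAAAAAAAAAAAAA  # bits 1, 3, 5, ... set

def swap_adjacent_bits(x):
    # move each even-positioned bit up one place and each odd-positioned bit down one
    return ((x & ODD_MASK) << 1) | ((x & EVEN_MASK) >> 1)

print(bin(swap_adjacent_bits(0b0110)))  # -> 0b1001
```

Applying the function twice returns the original value, which is a quick way to check that the two masks are consistent.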
PS: You can find some other useful bit manipulation tricks on Sean Eron Anderson's [Bit Twiddling Hacks](https://graphics.stanford.edu/~seander/bithacks.html) webpage. | Here are a few hints:
* Bitwise boolean operations (these usually have 1:1 counterparts in assembly, but if everything else fails, you can construct them by cleverly combining several XOR calls)
+ bitwise AND: 0b10110101 & 0b00011000 → 0b00010000
+ bitwise OR: 0b10110101 | 0b00011000 → 0b10111101
+ bitwise XOR: 0b10110101 ^ 0b00011000 → 0b10101101
* Bit shifts
+ shift x by n bits to the left (x << n): 0b00100001 << 3 → 0b00001000
+ shift x by n bits to the right (x >> n): 0b00100001 >> 3 → 0b00000100
There's also rotating bit shift, where bits "shifted out" appear on the other side, but this is not as widely supported in hardware. | 4,433 |
41,132,864 | I have been trying to install OpenCV for ages now and finally I succeeded using this tutorial: <http://www.pyimagesearch.com/2016/12/05/macos-install-opencv-3-and-python-3-5/>.
However, whenever I try to import cv2 in IDLE, it is not found but I am certain I installed OpenCV.
The cv2.so file exists at:
/usr/local/lib/python3.5/site-packages/cv2.so
I believe it may have something to do with the interpreter but I am not sure how to fix it. In terminal, when I try importing it, it works. I included the terminal message to prove it.
[terminal screenshot](https://i.stack.imgur.com/xRQ12.png)
Any help is appreciated. Thank you. | 2016/12/14 | [
"https://Stackoverflow.com/questions/41132864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7293747/"
] | OK, I found the answer! After you activate the virtual environment with:
```
workon cv
```
type this on the terminal to open the IDLE with the current virtual environment
```
python -c "from idlelib.PyShell import main; main()"
```
or
```
python -m idlelib
```
and it will do the trick! | Dumb approach but does your IDLE run the same python environment as your terminal? | 4,434 |
12,990,462 | This is a repost of an issue I posted on the berkelium project on github (<https://github.com/sirikata/berkelium/issues/19>).
My question:
During chromium compilation on Linux (Debian testing, 64bit, gcc 4.7.1, cmake 2.8.9), the python script `action_makenames.py` fails with the following error:
```
...
ACTION webcore_bindings_sources_HTMLNames out/Release/obj/gen/webkit/HTMLNames.cpp
ACTION webcore_bindings_sources_SVGNames out/Release/obj/gen/webkit/SVGNames.cpp
ACTION webcore_bindings_sources_MathMLNames out/Release/obj/gen/webkit/MathMLNames.cpp
ACTION webcore_bindings_sources_XLinkNames out/Release/obj/gen/webkit/XLinkNames.cpp
ACTION webcore_bindings_sources_XMLNSNames out/Release/obj/gen/webkit/XMLNSNames.cpp
Unknown parameter math for tags/attrs
Traceback (most recent call last):
File "scripts/action_makenames.py", line 174, in <module>
sys.exit(main(sys.argv))
File "scripts/action_makenames.py", line 156, in main
assert returnCode == 0
AssertionError
make: *** [out/Release/obj/gen/webkit/MathMLNames.cpp] Error 1
make: *** Waiting for unfinished jobs....
Unknown parameter a for tags/attrs
Traceback (most recent call last):
File "scripts/action_makenames.py", line 174, in <module>
sys.exit(main(sys.argv))
File "scripts/action_makenames.py", line 156, in main
assert returnCode == 0
AssertionError
Unknown parameter a interfaceName for tags/attrs
make: *** [out/Release/obj/gen/webkit/SVGNames.cpp] Error 1
Traceback (most recent call last):
File "scripts/action_makenames.py", line 174, in <module>
sys.exit(main(sys.argv))
File "scripts/action_makenames.py", line 156, in main
assert returnCode == 0
AssertionError
make: *** [out/Release/obj/gen/webkit/HTMLNames.cpp] Error 1
Unknown parameter actuate for tags/attrs
Traceback (most recent call last):
File "scripts/action_makenames.py", line 174, in <module>
sys.exit(main(sys.argv))
File "scripts/action_makenames.py", line 156, in main
assert returnCode == 0
AssertionError
make: *** [out/Release/obj/gen/webkit/XLinkNames.cpp] Error 1
Unknown parameter xmlns for tags/attrs
Traceback (most recent call last):
File "scripts/action_makenames.py", line 174, in <module>
sys.exit(main(sys.argv))
File "scripts/action_makenames.py", line 156, in main
assert returnCode == 0
AssertionError
make: *** [out/Release/obj/gen/webkit/XMLNSNames.cpp] Error 1
Failed to install: chromium
```
It looks like the python script is calling a perl script, and the perl script is dying on line 209:
```
die "Unknown parameter $parameter for tags/attrs\n" if !defined($parameters{$parameter});
```
The 'unknown parameter's are:
* math
* a
* a interfaceName
* actuate
* xmlns
I'm not sure where these parameters are coming from.
Anyone have any idea how to correct this? | 2012/10/20 | [
"https://Stackoverflow.com/questions/12990462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/780281/"
] | Turns out to be a preprocessor bug for gcc 4.6. As a fix, you have to remove the `-P` parameter of the gcc preprocessor command in `make_names.pl`.
**Bug report**:
<http://code.google.com/p/chromium/issues/detail?id=46411>
**Bug fix**:
<http://trac.webkit.org/changeset/84123> | sounds like you may be missing a directory, a la
<http://aur.archlinux.org/packages.php?ID=45713> | 4,435 |
53,066,830 | I have a python program which I have made work in both Python 2 and 3, and it has more functionality in Python 3 (using new Python 3 features).
My script currently starts `#!/usr/bin/env python`, as that seems to be the mostly likely name for a python executable. However, what I'd like to do is "if python3 exists, use that, if not use python".
I would prefer not to have to distribute multiple files / and extra script (at present my program is a single distributed python file).
Is there an easy way to run the current script in python3, if it exists? | 2018/10/30 | [
"https://Stackoverflow.com/questions/53066830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/27074/"
] | Another better method modified from [this question](https://stackoverflow.com/questions/12070516/conditional-shebang-line-for-different-versions-of-python) is to check the `sys.version`:
```
import sys
py_ver = sys.version[0]
```
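A related sketch that avoids string slicing altogether is to compare `sys.version_info`, which exposes the version as a tuple of integers:

```python
import sys

# sys.version_info looks like (3, 6, 8, 'final', 0); index 0 is the major version
py_ver = sys.version_info[0]

if py_ver >= 3:
    print("Python 3 (or newer) features are available")
else:
    print("falling back to the Python 2 code path")
```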
Original answer: May not be the best method, but one way to do it is to test against a function that only exists in one version of Python to know what you are running on.
```
try:
raw_input
py_ver = 2
except NameError:
py_ver = 3
if py_ver==2:
... Python 2 stuff
elif py_ver==3:
... Python 3 stuff
``` | Try with version\_info from sys package | 4,436 |
58,192,211 | I'm trying to do something simple in Python. I'm a little rusty so I'm not sure what I'm doing wrong. I want to give random values to dictionary items. Each loop I want to subtract from the original value so if a house has 5 rooms then the total doesn't ever go over 5 for the combined items in the dictionary.
This is not homework. I work IT and I'm trying to practice my Python which is one of my weaker known scripting languages.
Simplified in the terminal it appears to work but when I put it in the code I get an error.
Terminal:
```
$ python3
Python 3.6.8 (default, Apr 25 2019, 21:02:35)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import random
>>> h1 = random.randint(1,5)
>>> size = random.randint(1, h1)
>>> h1 = h1 - size
>>> print(h1)
2
```
Script:
```
import random
h1 = random.randint(1,5)
rooms = {
"bed" : 0,
"bath": 0,
"study": 0
}
for z in rooms:
size = random.randint(1, h1)
room_types[z] = size
if h1_size != 0:
h1 = h1 - size
for x, y in rooms.items():
print(x, y)
```
I get the following error:
```
$ ./two.py
Traceback (most recent call last):
File "./two.py", line 13, in <module>
size = random.randint(1, h1)
File "/usr/lib64/python3.6/random.py", line 221, in randint
return self.randrange(a, b+1)
File "/usr/lib64/python3.6/random.py", line 199, in randrange
raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (1,1, 0)
``` | 2019/10/01 | [
"https://Stackoverflow.com/questions/58192211",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1112733/"
] | In the script you're re-assigning `h1` with `h1 - size` in each iteration of the `for` loop, and if `size` happens to be `h1` as it is the upper bound passed to `randint`, `h1` would become `0` after the assignment, so that in the next iteration you would be effectively calling `random.randint(1, 0)`, where the upper bound is less than the lower bound, which is disallowed and therefore produces the said error. | Let's consider a simple example. You randomly select a 4-room house. Your random numbers give you 3 bedrooms and one bath. Your loop continues to "study" and tries to generate a random number from 1 to 0. You neglected to reserve a room for that requirement. Python considers the inverted range to be an error.
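One minimal sketch of a fix is to guard the `randint` call so it is skipped once `h1` is exhausted (the seeding line is only there to make the example repeatable):

```python
import random

random.seed(0)                      # only for a repeatable example
h1 = random.randint(1, 5)
total = h1
rooms = {"bed": 0, "bath": 0, "study": 0}

for z in rooms:
    if h1 < 1:                      # nothing left to allocate -- avoid randint(1, 0)
        break
    size = random.randint(1, h1)
    rooms[z] = size
    h1 -= size

print(rooms)
```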
If you truly require at least one room of each type, then allocate those before you choose random numbers, allowing a choice of 0 thereafter. If you don't require at least one room, then reduce the lower bound of your `randrange`. | 4,438
33,306,221 | I need in python execute this command and enter password from keyboard, this is works:
```
import os
cmd = "cat /home/user1/.ssh/id_rsa.pub | ssh user2@host.net \'cat >> .ssh/authorized_keys\' > /dev/null 2>&1"
os.system(cmd)
```
As you can see I want append public key to remote host via ssh.
See here: [equivalent-of-ftp-put-and-append-in-scp](https://stackoverflow.com/questions/9971490/equivalent-of-ftp-put-and-append-in-scp) and here: [copy-and-append-files-to-a-remote-machine-cat-error](https://stackoverflow.com/questions/13650312/copy-and-append-files-to-a-remote-machine-cat-error)
Of course I want it do it without user input I've try pexpect and I think command is to weird for it:
```
import pexpect
child = pexpect.spawn(command=cmd, timeout=10, logfile=open('debug.txt', 'a+'))
matched = child.expect(['Password:', pexpect.EOF, pexpect.TIMEOUT])
if matched == 0:
child.sendline(passwd)
```
in debug.txt:
```
ssh-rsa AAAA..........vcxv233x5v3543sfsfvsv user1@host1
/bin/cat: |: No such file or directory
/bin/cat: ssh: No such file or directory
/bin/cat: user2@host.net: No such file or directory
/bin/cat: cat >> .ssh/authorized_keys: No such file or directory
/bin/cat: >: No such file or directory
/bin/cat: 2>&1: No such file or directory
```
I see two solution:
1. fix command for pexpect, that it recognize whole string as one command or,
2. inject/write passwd to stdin as fake user, but how!?!? | 2015/10/23 | [
"https://Stackoverflow.com/questions/33306221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2595216/"
] | From [the `pexpect` docs](http://pexpect.readthedocs.org/en/stable/api/pexpect.html#spawn-class):
>
> Remember that Pexpect does NOT interpret shell meta characters such as
> redirect, pipe, or wild cards (`>`, `|`, or `*`). This is a
> common mistake. If you want to run a command and pipe it through
> another command then you must also start a shell. For example::
>
>
>
> ```
> child = pexpect.spawn('/bin/bash -c "ls -l | grep LOG > logs.txt"')
> child.expect(pexpect.EOF)
>
> ```
>
> | That worked for me:
```
command = "/bin/bash -c \"cat /home/user1/.ssh/id_rsa.pub | ssh user2@host.net \'cat >> ~/.ssh/authorized_keys\' > /dev/null 2>&1\""
child = spawn(command=command, timeout=5)
``` | 4,439 |
2,286,276 | I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one.
Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist.
I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that. | 2010/02/18 | [
"https://Stackoverflow.com/questions/2286276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/275779/"
] | None of the answers shows how to delete just one table in an app. It's not too difficult. The [`dbshell`](https://docs.djangoproject.com/en/1.7/ref/django-admin/#django-admin-dbshell) command logs the user into the sqlite3 shell.
```
python manage.py dbshell
```
When you are in the shell, type the following command to see the structure of your database. This will show you all the table names in the database (and also the column names within tables).
```
SELECT * FROM sqlite_master WHERE type='table';
```
In general, Django names tables according to the following convention: "appname\_modelname". Therefore, SQL query that accomplishes your goal will look similar to the following:
```
DROP TABLE appname_modelname;
```
This should be sufficient, even if the table had relationships with other tables. Now you can log out of SQLITE shell by executing:
```
.exit
```
If you run syncdb again, Django will rebuild the table according to your model. This way, you can update your database tables without losing all of the app data. If you are running into this problem a lot, consider using South - a django app that will migrate your tables for you. | In Django 2.1.7, I've opened the `db.sqlite3` file in [SQLite browser](https://sqlitebrowser.org/) (there is also a Python package on [Pypi](https://pypi.org/project/sqlite_bro/)) and deleted the table using command
```
DROP TABLE appname_tablename;
```
and then
```
DELETE FROM django_migrations WHERE App='appname';
```
Then run again
```
python manage.py makemigrations appname
python manage.py migrate appname
``` | 4,440 |
59,039,858 | I am importing a large number of dates in the form DD/MM/YYYY from a csv file into python and want to group them by just MM-YYYY. One method I have tried is the following:
```
str=date.iloc[2]
```
which results in str=7/18/2019. But what I want to do is convert it to Jul 2019 to make groupings by month and year. I have tried doing this
```
datetime.datetime.strptime("str","%m/%d/%Y").strptime("%b %Y")
```
and get the following error "time data 'str' does not match format '%m/%d/%Y'. I have also tried str.replace("/","-") but that did not help either but when I manually type in the date
`datetime.datetime.strptime("7/18/2019","%m/%d/%Y").strptime("%b %Y")`
it works exactly as I'd like it to. This is an easy fix but my date dataframe contains hundreds of dates and would ultimately like to run it in a loop to automate it. I cannot seem to find why format does not match. I've done research but no one seems to be having the same issue.
Any help is appreciated. | 2019/11/25 | [
"https://Stackoverflow.com/questions/59039858",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12432098/"
] | ```
datetime.datetime.strptime("str","%m/%d/%Y").strptime("%b %Y")
```
This parses the literal string `"str"` into a date. Instead, you should do
```
datetime.datetime.strptime(str,"%m/%d/%Y").strftime("%b %Y")
``` | In this line
```
datetime.datetime.strptime("str","%m/%d/%Y").strptime("%b %Y")
```
`"str"` is a string literal. You want the variable `str`
```
datetime.datetime.strptime(str,"%m/%d/%Y").strftime("%b %Y")
``` | 4,450 |
4,707,941 | I have seen several Questions comparing different ECommerce CMS's:
1. [Prestashop compared to Zen-Cart and osCommerce](https://stackoverflow.com/questions/2040472/prestashop-compared-to-zen-cart-and-oscommerce)
2. [Magento or Prestashop, which is better?](https://stackoverflow.com/search?q=prestashop)
3. [Best php/ruby/python e-commerce
solution](https://stackoverflow.com/questions/76420/best-php-ruby-python-e-commerce-solution)
I was hoping to get some people to weigh in with which they prefer for a relatively small E-shop. I am now primarily looking at [PrestaShop](http://www.prestashop.com/) and [Shopify](http://www.shopify.com/). I really like that Shopify does the hosting, has quality service, and is simple to understand and theme. However PrestaShop is **free** and seems to be able to do just as much if not more than Shopify.
I have decided that Magento is too clunky for the project, and have read that many other solutions (osCommerce, ZenCart, OpenCart) are outdated, buggy, or just inferior. | 2011/01/16 | [
"https://Stackoverflow.com/questions/4707941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363701/"
] | "Free" in the e-commerce industry usually works out to a few thousand dollars a month of real cost. E-commerce stores are powering the livelihood of businesses, so there is no way to go with a value hosting company. Additionally security is a huge concern so updates are incredibly important. So this leaves you with a server configuration of at least 2 servers setup in HA environment and a part time operations person performing the maintenance. So once you ensure that you can keep your site up you then have to invest into things that most people don't think off:
* Email service that guarantees delivery
* CDN, your store needs to be fast or you won't sell.
* Fraud protection services ($$$)
Anyways, you get all of the above for 30 bucks a month from a hosted service.
Full disclosure: I'm founder of Shopify. I used to host my own e-commerce store before I started Shopify. 95% of our customers recover the monthly Shopify bill in the first few hours of the first day of each month. | The prices is not the main difference between Shopify and PrestaShop. Talking about the scope- I think both will suit you.
1. Technical Level
The choice of hosted Shopify or open-source PrestaShop may depends on the level of technical skills. Without doubt it is easier to maintain your store with hosted shopping cart which supplies you with regular updates and takes care of bug fixes.
PrestaShop administration is quite intuitive as well. It has the interface with numerous themes, modern designs and easily guided catalogues is attractive for the customers. It goes without saying, that such details help you raise your traffic rates.
1. Performance
Non-hosted installed platforms provide you with control over the speed. As the store is located on your server, you can maintain the necessary speed of connection. Unfortunately, this is not the case with hosted solutions. So, using Shopify you can face issues with low connection speed. | 4,452 |
12,624,324 | I am facing difficulty in getting the xml structure listing all the directories/ sub directories inside a given directory. I got that working using the recursion in the [given post](https://stackoverflow.com/questions/2104997/os-walk-python-xml-representation-of-a-directory-structure-recursion) My problem is little bit tougher than usual. I have directories that may have 10000 of files in it so checking every content to see if its a directory or not is going to be costly and its already taking to long to build the xml. I want to build the xml for directories only.
I know linux has some command like **find . -type d** to list the directories present (not the files). How can I achieve this in python.
Thanks in advance. | 2012/09/27 | [
"https://Stackoverflow.com/questions/12624324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1364646/"
] | [`os.walk`](http://docs.python.org/library/os.html#os.walk) already distinguishes between files and directories:
```
def find_all_dirs(root='.'):
for path,dirs,files in os.walk(root):
for d in dirs:
yield os.path.join(path, d)
``` | For just one directory...
```
import os
def get_dirs(p):
p = os.path.abspath(p)
return [n for n in os.listdir(p) if os.path.isdir(os.path.join(p, n))]
print "\n".join(get_dirs("."))
``` | 4,453 |
46,552,178 | I have two files. `functions.py` has a function and creates a pyspark udf from that function. `main.py` attempts to import the udf. However, `main.py` seems to have trouble accessing the function in `functions.py`.
functions.py:
```
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
def do_something(x):
return x + 'hello'
sample_udf = udf(lambda x: do_something(x), StringType())
```
main.py:
```
from functions import sample_udf, do_something
df = spark.read.load(file)
df.withColumn("sample",sample_udf(col("text")))
```
This results in an error:
```
17/10/03 19:35:29 WARN TaskSetManager: Lost task 0.0 in stage 3.0 (TID 6, ip-10-223-181-5.ec2.internal, executor 3): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/lib/spark/python/pyspark/worker.py", line 164, in main
func, profiler, deserializer, serializer = read_udfs(pickleSer, infile)
File "/usr/lib/spark/python/pyspark/worker.py", line 93, in read_udfs
arg_offsets, udf = read_single_udf(pickleSer, infile)
File "/usr/lib/spark/python/pyspark/worker.py", line 79, in read_single_udf
f, return_type = read_command(pickleSer, infile)
File "/usr/lib/spark/python/pyspark/worker.py", line 55, in read_command
command = serializer._read_with_length(file)
File "/usr/lib/spark/python/pyspark/serializers.py", line 169, in _read_with_length
return self.loads(obj)
File "/usr/lib/spark/python/pyspark/serializers.py", line 454, in loads
return pickle.loads(obj)
AttributeError: 'module' object has no attribute 'do_something'
```
If I bypass the `do_something` function and just put it inside the udf, eg: `udf(lambda x: x + ' hello', StringType())`, the UDF imports fine - but my function is a little longer and it would be nice to have it encapsulated in a separate function. What's the correct way to achieve this? | 2017/10/03 | [
"https://Stackoverflow.com/questions/46552178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5617110/"
] | Just adding this as answer:-
add your py file to sparkcontext in order to make it available to your executors.
```
sc.addPyFile("functions.py")
from functions import sample_udf
```
Here is my test notebook
<https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/3669221609244155/3140647912908320/868274901052987/latest.html>
Thanks,
Charles. | I think a cleaner solution would be to use the udf decorator to define your udf function :
```
import pyspark.sql.functions as F
from pyspark.sql.types import StringType
@F.udf
def sample_udf(x):
return x + 'hello'
```
With this solution, the udf does not reference any other function and you don't need the `sc.addPyFile` in your main code.
```
from functions import sample_udf, do_something
df = spark.read.load(file)
df.withColumn("sample",sample_udf(col("text")))
# It works :)
```
For some older versions of spark, the decorator doesn't support typed udf, so you might have to define a custom decorator as follows:
```
import pyspark.sql.functions as F
import pyspark.sql.types as t
# Custom udf decorator which accepts a return type
def udf_typed(returntype=t.StringType()):
def _typed_udf_wrapper(func):
return F.udf(func, returntype)
return _typed_udf_wrapper
@udf_typed(t.IntegerType())
def my_udf(x):
return int(x)
``` | 4,456 |
21,226,366 | I have a script to get and setup the latest NodeJS on my .deb system:
```
echo "Downloading, building and installing latest NodeJS"
sudo apt-get install python g++ make checkinstall
mkdir /tmp/node_build && cd $_
curl -O "http://nodejs.org/dist/node-latest.tar.gz"
tar xf node-latest.tar.gz && cd node-v*
NODE_VERSION="${PWD#*v}"
#NODE_VERSION=python -c "print '$PWD'.split('-')[-1][1:]"
echo "Installing NodeJS" $NODE_VERSION
./configure
sudo checkinstall -y --install=no --pkgversion NODE_VERSION
sudo dpkg -i node_$NODE_VERSION
```
Unfortunately it doesn't work; as the `echo` line outputs:
>
> Installing NodeJS i8/dir-where-runnning-script-from/node-v0.10.24
>
>
>
It does work from the shell though:
```
$ cd /tmp/node_build/node-v0.10.24 && echo "${PWD#*v}"
0.10.24
``` | 2014/01/20 | [
"https://Stackoverflow.com/questions/21226366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/587021/"
] | Is there another "v" in the path, like right before the "i8/"? `#*v` will remove through the *first* "v" in the variable; I'm pretty sure you want `##*v` which'll remove through the *last* "v" in the variable. (Technically, `#` removes the shortest matching prefix, and `##` removes the longest match). Thus:
```
NODE_VERSION="${PWD##*v}"
```
Should work. | Try this
```
sudo checkinstall -y --install=no --pkgversion "${NODE_VERSION##*v}"
``` | 4,457 |
37,445,901 | This question comes from [this one](https://stackoverflow.com/questions/37399965/refresh-web-page-using-a-cgi-python-script).
What I want is to be able to return the `HTTP 303` header from my python script, when the user clicks on a button. My script is very simple and as far as output is concerned, it *only* prints the following two lines:
```
print "HTTP/1.1 303 See Other\n\n"
print "Location: http://192.168.1.109\n\n"
```
I have also tried many different variants of the above (with a different number of `\r` and `\n` at the end of the lines), but without success; so far I always get `Internal Server Error`.
Are the above two lines enough for sending a `HTTP 303` response? Should there be something else? | 2016/05/25 | [
"https://Stackoverflow.com/questions/37445901",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/751115/"
] | Assuming you are using cgi ([2.7](https://docs.python.org/2/library/cgi.html))([3.5](https://docs.python.org/3.5/library/cgi.html))
The example below should redirect to the same page. The example doesn't attempt to parse headers, check what POST was send, it simply redirects to the page `'/'` when a POST is detected.
```
# python 3 import below:
# from http.server import HTTPServer, BaseHTTPRequestHandler
# python 2 import below:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import cgi
#stuff ...
class WebServerHandler(BaseHTTPRequestHandler):
def do_GET(self):
try:
if self.path.endswith("/"):
self.send_response(200)
self.send_header('Content-type', 'text/html')
self.end_headers()
page ='''<html>
<body>
<form action="/" method="POST">
<input type="submit" value="Reload" >
</form>
</body>
</html>'''
self.wfile.write(page)
except IOError:
self.send_error(404, "File Not Found {}".format(self.path))
def do_POST(self):
self.send_response(303)
self.send_header('Content-type', 'text/html')
self.send_header('Location', '/') #This will navigate to the original page
self.end_headers()
def main():
try:
port = 8080
server = HTTPServer(('', port), WebServerHandler)
print("Web server is running on port {}".format(port))
server.serve_forever()
except KeyboardInterrupt:
print("^C entered, stopping web server...")
server.socket.close()
if __name__ == '__main__':
main()
``` | Typically browsers like to see `/r/n/r/n` at the end of an HTTP response. | 4,458 |
29,848,351 | I have the following list of keys in python.
```
[{'country': None, 'percent': 100.0}, {'country': 'IL', 'percent': 100.0}, {'country': 'IT', 'percent': 100.0}, {'country': 'US', 'percent': 2.0202}, {'country': 'JP', 'percent': 11.1111}, {'country': 'US', 'percent': 6.9767}, {'country': 'SG', 'percent': 99.8482}, {'country': 'US', 'percent': 1.9127}, {'country': 'BR', 'percent': 95.1724}, {'country': 'IE', 'percent': 5.9041}, {'country': None, 'percent': 100.0}, {'country': None, 'percent': 100.0}]
```
So I need to add all the percentages for the same country and remove country that is `None` . Ideally the output would be.
```
[{'country': 'IL', 'percent': 100.0}, {'country': 'IT', 'percent': 100.0}, {'country': 'US', 'percent': 10.9096}, {'country': 'JP', 'percent': 11.1111}, {'country': 'SG', 'percent': 99.8482}, {'country': 'BR', 'percent': 95.1724}, {'country': 'IE', 'percent': 5.9041}, ]
```
I tried the following.
```
for i, v in enumerate(response):
for j in response[i:]:
if v['country'] == j['country']:
response[i]['percent'] = i['percent'] + j['percent']
```
But I could not succeed and am struggling. Could someone please point me out in the right direction. | 2015/04/24 | [
"https://Stackoverflow.com/questions/29848351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/567797/"
] | ```
result_map = {}
for item in response:
if item['country'] is None:
continue
if item['country'] not in result_map:
result_map[item['country']] = item['percent']
else:
result_map[item['country']] += item['percent']
results = [
{'country': country, 'percent': percent}
for country, percent in result_map.items()
]
``` | Change the condition of the if to:
```
if response.index(v) != response.index(j) and v['country'] == j['country']:
```
You're adding the elements twice. | 4,460
19,847,275 | My function is like
```
def calResult(w,t,l,team):
wDict={}
for item in team:
for x in w:
wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1])
for x in t:
wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1])
return wDict
```
say I create the empty dict then I use `wDict[item]` to assign value for each key(these are from a team list, we have team like a b c d...). the `x[item.index(" "):item.index(" ")+1]` part will return a value after the int method have run. But the python shell returned that
```
Traceback (most recent call last):
File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\_sandbox.py", line 66, in <module>
File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\_sandbox.py", line 59, in calResult
builtins.KeyError: 'Torino'
```
I can't understand what exactly is the error in my code. | 2013/11/07 | [
"https://Stackoverflow.com/questions/19847275",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2844097/"
] | You can not access `wDict[item]` the first time, since your dict is empty
This would be ok:
```
wDict[item] = 1
```
But you can not do this :
```
wDict[item] = wDict[item] + 1
```
Maybe you want to use this syntax :
```
wDict[item] = int(wDict.get(item, 0)) + int(x[item.index(" "):item.index(" ") + 1])
``` | Looks like you are trying to use wDict[item] as the rvalue and the lvalue in the same assignment statement, when wDict[item] is not yet initialized.
```
wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1])
```
You are trying to access the "value" of the key item, but there is no key value pair initialized. | 4,463 |