qid (int64, 46k to 74.7M) | question (string, 54 to 37.8k chars) | date (string, 10 chars) | metadata (sequence of 3) | response_j (string, 29 to 22k chars) | response_k (string, 26 to 13.4k chars) | __index_level_0__ (int64, 0 to 17.8k) |
---|---|---|---|---|---|---|
62,745,685 | I am a beginner in Python. Here's what I am trying to do:
```python
import numpy as np
r10 = np.array([[i for i in range(0,10)],[i*10 for i in range(0,10)]]).T
r6 = np.array([[i for i in range(0,6)],[i*10 for i in range(0,6)]]).T
r_comb = np.array([[r10],[r6]]).T
np.savetxt('out.txt',r_comb)
```
Using np.savetxt gives me the following error since it only supports 1-D arrays:
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~\AppData\Local\Programs\Python\Python38-32\lib\site-packages\numpy\lib\npyio.py in savetxt(fname, X, fmt, delimiter, newline, header, footer, comments, encoding)
1433 try:
-> 1434 v = format % tuple(row) + newline
1435 except TypeError:
TypeError: only size-1 arrays can be converted to Python scalars
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-88-c3147f076055> in <module>
----> 1 np.savetxt('out.txt',r_comb)
<__array_function__ internals> in savetxt(*args, **kwargs)
~\AppData\Local\Programs\Python\Python38-32\lib\site-packages\numpy\lib\npyio.py in savetxt(fname, X, fmt, delimiter, newline, header, footer, comments, encoding)
1434 v = format % tuple(row) + newline
1435 except TypeError:
-> 1436 raise TypeError("Mismatch between array dtype ('%s') and "
1437 "format specifier ('%s')"
1438 % (str(X.dtype), format))
TypeError: Mismatch between array dtype ('object') and format specifier ('%.18e %.18e')
```
Is there any other way of saving the contents of the variable r\_comb to a .txt file so that I can use it for other plotting programs?
Basically, I want the text file to look like this:
```python
0 0.0 0 0.0
1 0.1 1 0.1
2 0.2 2 0.2
3 0.3 3 0.3
4 0.4 4 0.4
5 0.5 5 0.5
6 0.6
7 0.7
8 0.8
9 0.9
```
Image showing how the contents of the text file should look
[](https://i.stack.imgur.com/dLpbW.png) | 2020/07/05 | [
"https://Stackoverflow.com/questions/62745685",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13872751/"
] | How about using a ref as the gatekeeper?
```
function useEffectIfReady(fn, deps = [], isReady = true) {
const readyWasToggled = useRef(isReady);
/*
There are 2 states:
0 - initial
1 - ready was toggled
*/
const getDep = () => {
if (readyWasToggled.current) {
return 1;
}
if (isReady) {
readyWasToggled.current = true;
}
return 0;
};
useEffect(() => {
if (!isReady) {
return;
}
return fn();
}, [...deps, fn, getDep()]);
}
```
[](https://codesandbox.io/s/optimistic-wu-cbfd5?fontsize=14&hidenavigation=1&theme=dark) | Try splitting the useEffect in this case for each state, just an idea based on your code
```
useEffect(() => {
// your codes here
if (!isTrue) {
return;
}
}, [isTrue]);
useEffect(() => {
// your another set of codes here
someFunction(dep);
}, [dep])
``` | 1,707 |
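For the ragged side-by-side layout shown in the question at the top of this row, `np.savetxt` can be bypassed by formatting rows manually; a minimal sketch (the `write_ragged` helper and the `itertools.zip_longest` padding are my own illustration, not from the question):

```python
import io
from itertools import zip_longest
import numpy as np

r10 = np.array([[i for i in range(10)], [i * 10 for i in range(10)]]).T
r6 = np.array([[i for i in range(6)], [i * 10 for i in range(6)]]).T

def write_ragged(dest, *arrays):
    # zip_longest pads exhausted arrays with None; skip those fields
    for row_parts in zip_longest(*arrays):
        fields = []
        for part in row_parts:
            if part is not None:
                fields.extend(str(v) for v in part)
        dest.write(" ".join(fields) + "\n")

buf = io.StringIO()
write_ragged(buf, r10, r6)
print(buf.getvalue().splitlines()[0])  # 0 0 0 0
```

Each row concatenates whatever arrays still have elements, which reproduces the short-column layout the question asks for.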
66,015,125 | I am running python3.9 on ubuntu 18.04. I already went ahead and ran the command `sudo apt-get install python-scipy` and got the message:
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-scipy is already the newest version (0.19.1-2ubuntu1).
The following packages were automatically installed and are no longer required:
linux-hwe-5.4-headers-5.4.0-42 linux-hwe-5.4-headers-5.4.0-53
linux-hwe-5.4-headers-5.4.0-56 linux-hwe-5.4-headers-5.4.0-58
linux-hwe-5.4-headers-5.4.0-59 linux-hwe-5.4-headers-5.4.0-60
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
```
Yet, when I try to run my python3.9 code which uses `from scipy import integrate`, I get the error:
```
ModuleNotFoundError: No module named 'scipy'
```
I already read this [post](https://stackoverflow.com/questions/55043789/importerror-no-module-named-scipy-sparse/55043942) and tried uninstalling and installing scipy using
```
sudo apt-get install python3-scipy
```
But this did not work. Any suggestions?
**Edit 1**: I tried `sudo pip3 install scipy` which produced the message:
```
The directory '/home/nick/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/nick/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
```
Yet, when I tried to run the code again, I still get the same `ImportError`. | 2021/02/02 | [
"https://Stackoverflow.com/questions/66015125",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13972346/"
] | Maybe try
```sh
python3.9 -m pip install scipy --user
```
which would use pip of python3.9 to install the package to a place without sudo privilege | Try
`pip3 install scipy`,
if that returns an `ERRNO 13: access denied` then try `pip3 install scipy --user` | 1,712 |
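Both suggestions come down to installing scipy for the same interpreter that runs the code; `apt`'s `python-scipy` targets the system Python, not a separately installed 3.9. A small diagnostic sketch to confirm which interpreter and search path are actually in use (the paths printed will differ per machine):

```python
import sys
import importlib.util

# Which interpreter is this, and can it see scipy?
print(sys.executable)          # e.g. /usr/bin/python3.9 (machine-dependent)
print(sys.version_info[:2])

spec = importlib.util.find_spec("scipy")
print(spec.origin if spec else "scipy is not importable from this interpreter")
```

If `find_spec` returns None here, the installed scipy lives under a different interpreter's site-packages.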
31,568,936 | I am using Selenium WebDriver with Firefox. I am wondering if there is a setting I can change so that it only requests resources from certain domains. (Specifically, I want it to request only content that is on the same domain as the webpage itself.)
My current set up, written in Python, is:
```python
from selenium import webdriver
firefox_profile = webdriver.FirefoxProfile()
## Here, I change various default setting in Firefox, and install a couple of monitoring extensions
driver = webdriver.Firefox(firefox_profile)
driver.get(web_address)
```
What I want to do is, if I specify the web address `www.domain.com`, only load content served by `domain.com`, and not e.g. all the tracking content hosted by other domains that would typically be requested. I am hoping this could be achieved by a change to the profile settings in Firefox, or via an extension.
Note - there is a similar question (without an answer) - [Restricting Selenium/Webdriver/HtmlUnit to a certain domain](https://stackoverflow.com/questions/6468624/restricting-selenium-webdriver-htmlunit-to-a-certain-domain) - but it is four years old, and i think Selenium has evolved a lot since then. | 2015/07/22 | [
"https://Stackoverflow.com/questions/31568936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2801069/"
] | With thanks to Vicky (whose approach of using proxy settings I followed, although directly from Selenium), the code below will change the proxy settings in Firefox so that it will not connect to any domain except those on the whitelist.
I suspect several setting changes are unnecessary and can be omitted for most purposes. Code in Python.
```py
from selenium import webdriver
firefox_profile = webdriver.FirefoxProfile()
## replace desired_domain.com below with whitelisted domain. Separate domains by comma.
firefox_profile.set_preference("network.proxy.no_proxies_on","localhost,127.0.0.1,desired_domain.com")
firefox_profile.set_preference("network.proxy.backup.ftp","0.0.0.0")
firefox_profile.set_preference("network.proxy.backup.ftp_port",1)
firefox_profile.set_preference("network.proxy.backup.socks","0.0.0.0")
firefox_profile.set_preference("network.proxy.backup.socks_port",1)
firefox_profile.set_preference("network.proxy.backup.ssl","0.0.0.0")
firefox_profile.set_preference("network.proxy.backup.ssl_port",1)
firefox_profile.set_preference("network.proxy.ftp","0.0.0.0")
firefox_profile.set_preference("network.proxy.ftp_port",1)
firefox_profile.set_preference("network.proxy.http","0.0.0.0")
firefox_profile.set_preference("network.proxy.http_port",1)
firefox_profile.set_preference("network.proxy.socks","0.0.0.0")
firefox_profile.set_preference("network.proxy.socks_port",1)
firefox_profile.set_preference("network.proxy.ssl","0.0.0.0")
firefox_profile.set_preference("network.proxy.ssl_port",1)
firefox_profile.set_preference("network.proxy.type",1)
firefox_profile.set_preference("network.proxy.share_proxy_settings",True)
driver = webdriver.Firefox(firefox_profile)
driver.get(web_address_desired)
``` | I think it is still impossible in Selenium. But you can still achieve this by using proxies like BrowserMob. WebDriver integrates well with the [browsermob](https://github.com/lightbody/browsermob-proxy) proxy.
**Sample pseudocode in Java**
```java
//LittleProxy-powered 2.1.0 release
LegacyProxyServer server = new BrowserMobProxyServer();
server.start(0);
// Blacklist websites
server.blacklistRequests("https?://.*\\.blocksite\\.com/.*", 410);//these sites will be blocked
/// get the Selenium proxy object
Proxy proxy = ClientUtil.createSeleniumProxy(server);
// configure it as a desired capability
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(CapabilityType.PROXY, proxy);
// initialize the driver with the capabilities ;
Webdriver driver = new FirefoxDriver(capabilities);
```
Hope this helps you. Kindly get back if you need any further help. | 1,715 |
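As a small addendum to the whitelist approach in the first answer: the `network.proxy.no_proxies_on` value is just a comma-separated host list, so it can be generated from the URLs you intend to visit. A sketch (the `whitelist_pref` helper is an illustrative name, not part of Selenium):

```python
from urllib.parse import urlparse

def whitelist_pref(urls):
    # Build the comma-separated host list for network.proxy.no_proxies_on
    hosts = ["localhost", "127.0.0.1"]
    for url in urls:
        host = urlparse(url).hostname
        if host and host not in hosts:
            hosts.append(host)
    return ",".join(hosts)

print(whitelist_pref(["https://www.domain.com/page"]))
# localhost,127.0.0.1,www.domain.com
```

The returned string can be passed directly to `firefox_profile.set_preference("network.proxy.no_proxies_on", ...)` as in the accepted answer.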
61,424,762 | I am a beginner in Python. I am currently building an online video study website like Udemy and Treehouse using Flask. The issue is that the videos on the site can be downloaded by viewing or inspecting the source code. Browsers with a video download extension (Firefox, Chrome, etc.) can easily download videos when the video page loads. The HTML and Python code are shown below:
```
<video id="videoElementID" width="100%" oncontextmenu="return false;" controlsList="nodownload" controls>
<source src="{{ videoclip }}" id="video" type="video/mp4">
</video>
```
```
@posts.route("/<int:post_id>/<int:chapters_id>/<int:video_id>", methods=['GET','POST'])
@login_required
def view_videos(post_id, chapters_id, video_id):
posts=Post.query.get_or_404(post_id)
if posts.author != current_user:
abort(403)
chapters=C.query.get_or_404(chapters_id)
videos=V.query.get_or_404(video_id)
videoclip = url_for('static', filename='stylesheets/v_uploads/' + posts.author.username + '/' + posts.course_name + '/' + videos.video_file)
return render_template('video.html', title="view video: ", videoclip=videoclip, posts=posts, chapters = chapters, videos=videos)
```
This is what I want:
1. to prevent browsers with file download extension from downloading the videos on the site
2. to hide the video url from the source code maybe by encrypting the path or the filename or the video itself
3. or more...
I have tried .htaccess but I think it only works with PHP. I tried to encrypt the code but I couldn't do it successfully. I have checked Stack Overflow questions, but wasn't successful. I know it's impossible to completely stop viewers from downloading, but I just want to make it harder to download. Please, I really need you guys to help me out. Thanks | 2020/04/25 | [
"https://Stackoverflow.com/questions/61424762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12172995/"
] | I don't think the problem comes from the Flask side, but from the frontend side, so you might check whether this is possible through JavaScript. I quickly looked into it and found the question below:
I think you are facing a problem related to the one discussed in [Prevent HTML5 video from being downloaded (right-click saved)?](https://stackoverflow.com/questions/9756837/prevent-html5-video-from-being-downloaded-right-click-saved). | You have a couple of options here to make it more difficult, in order of difficulty:
1. You absolutely can use .htaccess (it is a web server feature--nothing to do with PHP) to require the referrer to be your site. Don't allow access to the video file if the referrer doesn't contain your site. ([See here for how to do this in Apache or Nginx](https://geekflare.com/block-unwanted-requests/))
2. Use the canvas technique [described here](https://stackoverflow.com/questions/41703555/prevent-downloading-videos-in-temp-folder-while-watching-videos-in-web-browser)
3. HTTP live streaming ([For example with Nginx](https://www.nginx.com/products/nginx/modules/rtmp-media-streaming/))
4. Use [CSRF tokens](https://portswigger.net/web-security/csrf/tokens) | 1,716 |
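The referrer check from option 1 can also live in the Flask view itself. The core decision is framework-independent; a sketch (function and host names are illustrative, and note that the Referer header is trivially spoofed, so this only deters casual downloaders):

```python
from urllib.parse import urlparse

def referrer_allowed(referer_header, allowed_host):
    # Serve the video only when the Referer points back at our own site;
    # download extensions and direct requests usually send none.
    if not referer_header:
        return False
    return urlparse(referer_header).hostname == allowed_host

print(referrer_allowed("https://mysite.example/watch/42", "mysite.example"))  # True
print(referrer_allowed(None, "mysite.example"))                               # False
```

In a Flask view this would run before returning the video response, aborting with 403 when the check fails.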
58,487,038 | I have a `while` loop that exits when an `if(condition){break;}` is met, after an unknown number of iterations. Inside the `while` loop, a function is called, that will return an array of variable size at every iteration. I know in `python` I could just append the arrays one to each other, and in the end I would have an array of arrays of variable size. What is the right approach to do it in `C`?
Here is the relevant part of my code (I know it is not a MWE, many parts are missing, but I hope it is still understandable):
```
int find_zeros(double *kappa, double *zeros){
// reset the counter "z"
int z = 0;
// do some calculations and compute the zeros
// update the value of z for every zero found
// "zeros" now contains "z" elements
return z;
}
double *foo(){
// allocate memory for the "zeros" array (I know that N>z)
double *zeros = (double *) malloc (sizeof(double) *N);
// variable to count the elements of the "zeros" array
int z;
while (1){
z = find_zeros(kappa, zeros);
// if no zeros where found, exit the while loop
if (z==0){ break; }
// now we know how many zeros we have (z has been updated
// by find_zeros()), so we can reallocate the memory
zeros = (double *) realloc(zeros, sizeof(double) *z);
// append the zeros somewhere (how?!)
// in python I would do: zeros_list.append(zeros)
}
// free the memory for the "zeros" array
free(zeros);
// return all the zeros found
// in python I would do: return zeros_list
}
int main(){
double *zeros_list = what??
// how do I allocate memory of a thing that I don't
// know how big it is going to be?
zeros_list = foo();
}
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58487038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9104884/"
] | You need to store the zeros in `foo` independently from the values returned by `find_zeros`, as you would in Python, where you'd have separate variables `zeros_list` and `zeros`.
Python's `list.append` method is realized in two steps: first the array is reallocated to new capacity with `realloc` (you already have that part), and then the new value is assigned to the newly available space. Since you are creating an array of arrays, you also need to copy the values from their temporary location to the new one with `memcpy()`. For example (untested, in absence of a MWE):
```
struct floatvec {
double *values;
int length;
};
struct floatvec *foo(int *howmany) {
struct floatvec *zeros_list = NULL;
int zeros_cnt = 0;
static double zeros[N];
while (1) {
int z = find_zeros(kappa, zeros);
if (z == 0)
break;
struct floatvec new = {
.values = malloc(z * sizeof(double)),
.length = z
};
if (!new.values)
return NULL;
memcpy(new.values, zeros, sizeof(double) * z);
zeros_list = realloc(zeros_list, sizeof(struct floatvec) * (zeros_cnt + 1));
if (!zeros_list)
return NULL;
zeros_list[zeros_cnt++] = new;
}
*howmany = zeros_cnt;
return zeros_list;
}
```
Usage example:
```
int cnt;
struct floatvec *all_zeros = foo(&cnt);
for (int i = 0; i < cnt; i++) {
struct floatvec *curr = &all_zeros[i];
for (int j = 0; j < curr->length; j++)
printf("%lf\n", curr->values[j]);
}
```
A couple of unrelated notes:
* don't cast the result of [`malloc` and `realloc`](https://stackoverflow.com/a/605858/1600898).
* check whether `malloc` returned NULL and abort your program or make an error return to the caller.
**EDIT**: updated the answer to match the edited question. | regarding your question:
*// append the zeros somewhere (how?!)*
the easiest way is a call to `memcpy()` similar to:
```
memcpy( &zeros[ last used offset ], newzeros, sizeof( newzeros ) );
``` | 1,717 |
35,241,760 | I'm running this every minute to debug and it keeps returning with `com.apple.xpc.launchd[1] (com.me.DesktopChanger[16390]): Service exited with abnormal code: 2`
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<!-- The label should be the same as the filename without the extension -->
<string>com.me.DesktopChanger</string>
<!-- Specify how to run your program here -->
<key>ProgramArguments</key>
<array>
<string>/usr/bin/python</string>
<string>~/Library/Application\ Support/DesktopChanger/DesktopChanger.py</string>
</array>
<key>StartInterval</key>
<integer>60</integer>
</dict>
</plist>
```
The launchd script is called: `com.me.DesktopChanger.plist`
The python script is located at: `/Users/Tom/Library/Application Support/DesktopChanger/DesktopChanger.py`
and `which python` returns: `/usr/bin/python`
running `ls -l` also returns:
`-rw-r--r--@ 1 Tom staff 623 6 Feb 13:40 com.me.DesktopChanger.plist`
and the python script with `ls -l` shows:
`-rwxr-xr-x@ 1 Tom staff 672 2 Dec 14:24 DesktopChanger.py` | 2016/02/06 | [
"https://Stackoverflow.com/questions/35241760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3042759/"
] | You can use the `reduce` method.
```
let placeNames = results.reduce("") { (placeNames, place) -> String in
return placeNames + place.name + " "
}
```
Now you have a single `String` with the concatenation of all the place names.
Short notation
--------------
You can also write it as follows
```
let placeNames = results.reduce("") { $0 + $1.name + " " }
```
Example
-------
Let's say `Place` is defined as follows (I am using a `struct`; however, a `class` works as well)
```
struct Place {
let name: String
}
```
Now let's define `results` as an array of `Place(s)`
```
let results = [Place(name: "Italy"), Place(name: "UK"), Place(name: "USA")]
```
And finally let's test the code
```
let placeNames = results.reduce("") { $0 + $1.name + " " }
print(placeNames) // "Italy UK USA "
```
Removing the last blank space
-----------------------------
You may have noticed that a blank space is appended at the end of the generated `String`. We can get rid of that by updating the code as follows
```
let placeNames = String(
places
.reduce("") { $0 + $1.name + " " }
.characters
.dropLast()
)
```
Why is this solution better than a for loop?
--------------------------------------------
The code I am suggesting here does follow the **Functional Programming** paradigm. There are several advantages over the classic for loop:
1. It's thread safe: since I am only using immutable values you don't have to worry about other threads that could change your values while you are using them.
2. It's less error prone because it's more declarative: we are describing how the result should be, not how to build it.
That's it :) | You can do it without a loop: (the '$0' is the argument in the closure)
```
results.forEach{ print($0, terminator:" ") }
``` | 1,718 |
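Separately from the Swift answers, the launchd plist in the question has a likely cause for "abnormal code: 2": launchd performs no shell expansion, so `~` and the backslash-escaped space are passed to python literally, and CPython exits with status 2 when it cannot open the script file. A hedged sketch of the corrected arguments, using the absolute path already stated in the question:

```xml
<key>ProgramArguments</key>
<array>
    <string>/usr/bin/python</string>
    <string>/Users/Tom/Library/Application Support/DesktopChanger/DesktopChanger.py</string>
</array>
```

Plist `<string>` values need no backslash escaping; the space in "Application Support" is passed through as-is.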
18,782,620 | (disclaimer: this is my first stackoverflow question so forgive me in advance if I'm not too clear)
**Expected results:**
My task is to find company legal identifiers in a string representing a company name, then separate them from it and save them in a separate string.
The company names have already been cleaned so that they only contain alphanumeric lowercase characters.
Example:
```
company_1 = 'uber wien abcd gmbh'
company_2 = 'uber wien abcd g m b h'
company_3 = 'uber wien abcd ges mbh'
```
should result in
```
company_1_name = 'uber wien abcd'
company_1_legal = 'gmbh'
company_2_name = 'uber wien abcd'
company_2_legal = 'gmbh'
company_3_name = 'uber wien abcd'
company_3_legal = 'gesmbh'
```
**Where I am right now:**
I load the list of all company ids up from a csv file. Austria provides a good example. Two legal ids are:
```
gmbh
gesmbh
```
I use a regex expression that tells me **IF** the company name contains the legal identifier. However, this regex removes *all* spaces from the string in order to identify the legal id.
```
company_1_nospace = 'uberwienabcdgmbh'
company_2_nospace = 'uberwienabcdgmbh'
company_3_nospace = 'uberwienabcdgesmbh'
```
since I look for the regex in the string without spaces, I am able to see that all three companies have legal ids inside their name.
**Where I am stuck:**
I can say whether there is a legal id in `company_1`, `company_2`, and `company_3` but I can only remove it from `company_1`.
In fact, I cannot remove `g m b h` because it does not match, but I can say that it is a legal id. The only way I could remove it is to also remove spaces in the rest of the company name, which I don't want to do (it would only be a last resort option)
Even if I were to insert spaces into `gmbh` to match it with `g m b h`, I would then not pick up `ges mbh` or `ges m b h`.
(Note that the same thing happens for other countries)
**My code:**
```
import re
re_code = re.compile('^gmbh|gmbh$|^gesmbh|gesmbh$')
comp_id_re = re_code.search(re.sub('\s+', '', company_name))
if comp_id_re:
company_id = comp_id_re.group()
company_name = re.sub(re_code, '', company_name).strip()
else:
company_id = ''
```
Is there a way for python to *understand* which characters to remove from the original string?
Or would it just be easier if somehow (that's another problem) I find all possible alternatives for legal id spacing? ie from `gmbh` I create `g mbh`, `gm bh`, `gmb h`, `g m bh`, etc... and use that for matching/extraction?
I hope I have been clear enough with my explanation. Thinking about a title for this was rather difficult.
**UPDATE 1:** company ids are usually at the end of the company name string. They can occasionally be at the beginning in some countries.
**UPDATE 2:** I think this takes care of the company ids inside the company name. It works for legal ids at the end of the company name, but it does not work for company ids at the beginning
```
legal_regex = '^ltd|ltd$|^gmbh|gmbh$|^gesmbh|gesmbh$'
def foo(name, legal_regex):
#compile regex that matches company ids at beginning/end of string
re_code = re.compile(legal_regex)
#remove spaces
name_stream = name.replace(' ','')
#find regex matches for legal ids
comp_id_re = re_code.search(name_stream)
#save company_id, remove it from string
if comp_id_re:
company_id = comp_id_re.group()
name_stream = re.sub(re_code, '', name_stream).strip()
else:
company_id = ''
#restore spaced string (only works if id is at the end)
name_stream_it = iter(name_stream)
company_name = ''.join(next(name_stream_it) if e != ' ' else ' ' for e in name)
return (company_name, company_id)
``` | 2013/09/13 | [
"https://Stackoverflow.com/questions/18782620",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2775630/"
] | A non-regex solution would be easier here, and this is how I would do it:
```
legal_ids = """gmbh
gesmbh"""
def foo(name, legal_ids):
#Remove all spaces from the string
name_stream = name.replace(' ','')
#Now iterate through the legal_ids
for id in legal_ids:
#Remove the legal ID's from the string
name_stream = name_stream.replace(id, '')
#Now Create an iterator of the modified string
name_stream_it = iter(name_stream)
#Fill in the missing/removed spaces
return ''.join(next(name_stream_it) if e != ' ' else ' ' for e in name)
foo(company_1, legal_ids.splitlines())
'uber wien abcd '
foo(company_2, legal_ids.splitlines())
'uber wien abcd '
foo(company_3, legal_ids.splitlines())
'uber wien abcd '
``` | Here's the code I came up with:
```
company_1 = 'uber wien abcd gmbh'
company_2 = 'uber wien abcd g m b h'
company_3 = 'uber wien abcd ges mbh'
legalids = ["gmbh", "gesmbh"]
def info(company, legalids):
for legalid in legalids:
found = []
last_pos = len(company)-1
pos = len(legalid)-1
while True:
if len(legalid) == len(found):
newfound = found
newfound.reverse()
if legalid == ''.join(newfound):
return [company[:last_pos+1].strip(' '), legalid]
else:
break
if company[last_pos] == ' ':
last_pos -= 1
continue
elif company[last_pos] == legalid[pos]:
found.append(company[last_pos])
pos -= 1
else:
break
last_pos -= 1
return
print(info(company_1, legalids))
print(info(company_2, legalids))
print(info(company_3, legalids))
```
Output:
```
['uber wien abcd', 'gmbh']
['uber wien abcd', 'gmbh']
['uber wien abcd', 'gesmbh']
``` | 1,723 |
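The asker's last idea (generating spacing variants of each legal id) can be done without enumerating the variants: join the characters of each id with `\s*` so a single pattern tolerates arbitrary internal spaces. A sketch (`spaced` and `split_name` are illustrative names, not from the question):

```python
import re

legal_ids = ["gesmbh", "gmbh"]  # longer ids first, in case one is a prefix of another

def spaced(legal_id):
    # "gmbh" -> r"g\s*m\s*b\s*h": tolerates any spacing inside the id
    return r"\s*".join(map(re.escape, legal_id))

alts = "|".join(spaced(i) for i in legal_ids)
pattern = re.compile(r"^(?:%s)\b|\b(?:%s)$" % (alts, alts))

def split_name(name):
    m = pattern.search(name)
    if not m:
        return name.strip(), ""
    legal = re.sub(r"\s+", "", m.group())  # normalise "g m b h" -> "gmbh"
    return (name[:m.start()] + name[m.end():]).strip(), legal

print(split_name("uber wien abcd g m b h"))  # ('uber wien abcd', 'gmbh')
print(split_name("uber wien abcd ges mbh"))  # ('uber wien abcd', 'gesmbh')
```

Anchoring with `^` and `$` mirrors the asker's original regex, so ids are only stripped from the beginning or end of the name.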
51,030,872 | Using Tensorflow 1.8.0, we are running into an issue whenever we attempt to build a categorical column. Here is a full example demonstrating the problem. It runs as-is (using only numeric columns). Uncommenting the indicator column definition and data generates a stack trace ending in `tensorflow.python.framework.errors_impl.InternalError: Unable to get element as bytes.`
```
import tensorflow as tf
import numpy as np
def feature_numeric(key):
return tf.feature_column.numeric_column(key=key, default_value=0)
def feature_indicator(key, vocabulary):
return tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(
key=key, vocabulary_list=vocabulary ))
labels = ['Label1','Label2','Label3']
model = tf.estimator.DNNClassifier(
feature_columns=[
feature_numeric("number"),
# feature_indicator("indicator", ["A","B","C"]),
],
hidden_units=[64, 16, 8],
model_dir='./models',
n_classes=len(labels),
label_vocabulary=labels)
def train(inputs, training):
model.train(
input_fn=tf.estimator.inputs.numpy_input_fn(
x=inputs,
y=training,
shuffle=True
), steps=1)
inputs = {
"number": np.array([1,2,3,4,5]),
# "indicator": np.array([
# ["A"],
# ["B"],
# ["C"],
# ["A", "A"],
# ["A", "B", "C"],
# ]),
}
training = np.array(['Label1','Label2','Label3','Label2','Label1'])
train(inputs, training)
```
Attempts to use an embedding fare no better. Using only numeric inputs, we can successfully scale to thousands of input nodes, and in fact we have temporarily expanded our categorical features in the preprocessor to simulate indicators.
The documentation for `categorical_column_*()` and `indicator_column()` are awash in references to features we're pretty sure we're not using (proto inputs, whatever `bytes_list` is) but maybe we're wrong on that? | 2018/06/25 | [
"https://Stackoverflow.com/questions/51030872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3955068/"
] | The issue here is related to the ragged shape of the "indicator" input array (some elements are of length 1, one is length 2, one is length 3). If you pad your input lists with some non-vocabulary string (I used "Z" for example since your vocabulary is "A", "B", "C"), you'll get the expected results:
```
inputs = {
"number": np.array([1,2,3,4,5]),
"indicator": np.array([
["A", "Z", "Z"],
["B", "Z", "Z"],
["C", "Z", "Z"],
["A", "A", "Z"],
["A", "B", "C"]
])
}
```
You can verify that this works by printing the resulting tensor:
```
dense = tf.feature_column.input_layer(
inputs,
[
feature_numeric("number"),
feature_indicator("indicator", ["A","B","C"]),
])
with tf.train.MonitoredTrainingSession() as sess:
print(dense)
print(sess.run(dense))
``` | From what I can tell, the difficulty is that you are trying to make an indicator column from an array of arrays.
I collapsed your indicator array to
```
"indicator": np.array([
"A",
"B",
"C",
"AA",
"ABC",
])
```
... and the thing ran.
Moreover, I can't find any example where the vocabulary array is anything but a flat array of strings. | 1,725 |
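The padding fix in the accepted answer generalises to any ragged batch: pad every row to the length of the longest row with a token deliberately outside the vocabulary (here "Z", as in the answer). A pure-Python sketch (`pad_ragged` is an illustrative helper):

```python
def pad_ragged(rows, pad_token="Z"):
    # Pad variable-length rows to a rectangle so they can be fed to
    # numpy_input_fn; the pad token must not appear in the vocabulary.
    width = max(len(row) for row in rows)
    return [list(row) + [pad_token] * (width - len(row)) for row in rows]

rows = [["A"], ["B"], ["C"], ["A", "A"], ["A", "B", "C"]]
print(pad_ragged(rows))
# [['A', 'Z', 'Z'], ['B', 'Z', 'Z'], ['C', 'Z', 'Z'], ['A', 'A', 'Z'], ['A', 'B', 'C']]
```

The padded result can then be wrapped in `np.array(...)` to build the "indicator" input exactly as shown in the answer.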
28,654,247 | I want to remove the heteroatoms (HETATM)s from PDB text files that I have locally. I found a perl script that apparently needs a quick tweak to make it do what I want but I'm unsure of what that tweak is.
```
#!/usr/bin/env perl
open(FILE,"file.pdb");
@file=<FILE>;
foreach (@file){
if (/^HETATM/){
print $_,"\n";
}}
```
Also, if anyone has an existing perl or python script that they are OK with sharing, I would greatly appreciate it. | 2015/02/22 | [
"https://Stackoverflow.com/questions/28654247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4589414/"
] | In R you can use the [Bio3D package](http://thegrantlab.org/bio3d/):
```
library(bio3d)
# read pdb
pdb <- read.pdb("1hel")
# make a subset based on TYPE
new <- trim.pdb(pdb, type="ATOM")
# write new pdb to disk
write.pdb(new, file="1hel_ATOM.pdb")
```
This can also be combined with various other selection criteria, e.g. chain id, residue number, residue name, etc etc:
```
# select ATOM records for chain A
n1 <- trim.pdb(pdb, type="ATOM", chain="A")
# select residue numbers 10 through 20
n2 <- trim.pdb(pdb, resno=10:20)
``` | In PERL Try this
```
use warnings;
use strict;
my $filename = "4BI7.pdb";
die "Error opening file" unless (open my $handler , '<' , "$filename");
open my $newfile, '>', "filename.pdb" or die "New file not create";
while($_ = <$handler>){
print $newfile "$_" unless /^HETATM.*/;
}
``` | 1,726 |
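Since the question also asks for a Python version: removing HETATM records is a simple line filter on the record name. A minimal sketch (for serious work a PDB parser such as Biopython's `Bio.PDB` may be preferable; note the question's Perl loop prints the HETATMs instead of dropping them):

```python
def strip_hetatm(lines):
    # Keep every PDB record except HETATM
    return [line for line in lines if not line.startswith("HETATM")]

pdb_lines = [
    "ATOM      1  N   LYS A   1",
    "HETATM 1332  O   HOH A 201",
    "TER",
]
print(strip_hetatm(pdb_lines))  # ['ATOM      1  N   LYS A   1', 'TER']
```

To work on files, read the input with `open(path).readlines()` and write the filtered list back out.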
57,771,019 | I want to write something that, expressed in Python, looks like this:
```
a = int(input())
for i in range(a):
b = input()
print(b)
```
And this is what I actually wrote.
```
(let [a][(read-line)]
(for[i (range [a])]
(defn b[string]
(= (read-line) b)
(println [b]))))
```
But now there are many errors. | 2019/09/03 | [
"https://Stackoverflow.com/questions/57771019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5237648/"
] | Similar to the Python flow.
```
(doseq [_ (range (Integer. (read-line)))
:let [b (read-line)]]
(println b))
```
Even closer to Python code:
```
(let [a (Integer. (read-line))]
(doseq [i (range a)
:let [b (read-line)]]
(println b)))
```
More functional Code
```
(mapv println (repeatedly (Integer. (read-line)) read-line))
```
But more idiomatic (per Sean Corfield)
```
(run! println (repeatedly (Integer. (read-line)) read-line))
``` | Off the top of my head, you could do something like:
`(map (fn [_] (println (read-line))) (range (Integer/parseInt (read-line))))`
There may be something more appropriate than a map here, read the clojure documentation. The clojure standard library has a lot of cool stuff :)
Edit: @SeanCorfield brought up a good point in the comments, using [run!](https://clojuredocs.org/clojure.core/run!) would be a better choice here since we don't care about the return value of map. | 1,727 |
36,267,936 | Given a 2-dimensional array in python, I would like to normalize each row with the following norms:
* Norm 1: **L\_1**
* Norm 2: **L\_2**
* Norm Inf: **L\_Inf**
I have started this code:
```
from numpy import linalg as LA
X = np.array([[1, 2, 3, 6],
[4, 5, 6, 5],
[1, 2, 5, 5],
[4, 5,10,25],
[5, 2,10,25]])
print X.shape
x = np.array([LA.norm(v,ord=1) for v in X])
print x
```
Output:
```
(5, 4) # array dimension
[12 20 13 44 42] # L1 on each Row
```
How can I modify the code such that WITHOUT using LOOP, I can directly have the rows of the matrix normalized? (Given the norm values above)
I tried :
```
l1 = X.sum(axis=1)
print l1
print X/l1.reshape(5,1)
[12 20 13 44 42]
[[0 0 0 0]
[0 0 0 0]
[0 0 0 0]
[0 0 0 0]
[0 0 0 0]]
```
but the output is zero. | 2016/03/28 | [
"https://Stackoverflow.com/questions/36267936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4208169/"
] | This is the L₁ norm:
```
>>> np.abs(X).sum(axis=1)
array([12, 20, 13, 44, 42])
```
This is the L₂ norm:
```
>>> np.sqrt((X * X).sum(axis=1))
array([ 7.07106781, 10.09950494, 7.41619849, 27.67670501, 27.45906044])
```
This is the L∞ norm:
```
>>> np.abs(X).max(axis=1)
array([ 6, 6, 5, 25, 25])
```
To normalise rows, just divide by the norm. For example, using L₂ normalisation:
```
>>> l2norm = np.sqrt((X * X).sum(axis=1))
>>> X / l2norm.reshape(5,1)
array([[ 0.14142136, 0.28284271, 0.42426407, 0.84852814],
[ 0.39605902, 0.49507377, 0.59408853, 0.49507377],
[ 0.13483997, 0.26967994, 0.67419986, 0.67419986],
[ 0.14452587, 0.18065734, 0.36131469, 0.90328672],
[ 0.18208926, 0.0728357 , 0.36417852, 0.9104463 ]])
>>> np.sqrt((_ * _).sum(axis=1))
array([ 1., 1., 1., 1., 1.])
```
---
More direct is the `norm` method in `numpy.linalg`, if you have it available:
```
>>> from numpy.linalg import norm
>>> norm(X, axis=1, ord=1) # L-1 norm
array([12, 20, 13, 44, 42])
>>> norm(X, axis=1, ord=2) # L-2 norm
array([ 7.07106781, 10.09950494, 7.41619849, 27.67670501, 27.45906044])
>>> norm(X, axis=1, ord=np.inf) # L-∞ norm
array([ 6, 6, 5, 25, 25])
```
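Putting the pieces together, every row can be normalised in one vectorised step by dividing by its own norm; `keepdims=True` (NumPy >= 1.10) keeps the result as a column so broadcasting replaces the manual `reshape`. A sketch using the matrix from the question:

```python
import numpy as np
from numpy.linalg import norm

X = np.array([[1, 2, 3, 6],
              [4, 5, 6, 5],
              [1, 2, 5, 5],
              [4, 5, 10, 25],
              [5, 2, 10, 25]], dtype=float)

# keepdims=True returns a (5, 1) column, so each row of X is divided
# by that row's own norm via broadcasting -- no loop, no reshape
X_l1 = X / norm(X, axis=1, ord=1, keepdims=True)
X_l2 = X / norm(X, axis=1, ord=2, keepdims=True)
X_linf = X / norm(X, axis=1, ord=np.inf, keepdims=True)

print(norm(X_l2, axis=1, ord=2))  # [1. 1. 1. 1. 1.]
```

On older NumPy without `keepdims`, the `[:,None]` indexing shown below does the same job.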
---
*(after OP edit):* You saw zero values because `/` is an integer division in Python 2.x. Either upgrade to Python 3, or change dtype to float to avoid that integer division:
```
>>> linfnorm = norm(X, axis=1, ord=np.inf)
>>> X.astype(np.float) / linfnorm[:,None]
array([[ 0.16666667, 0.33333333, 0.5 , 1. ],
[ 0.66666667, 0.83333333, 1. , 0.83333333],
[ 0.2 , 0.4 , 1. , 1. ],
[ 0.16 , 0.2 , 0.4 , 1. ],
[ 0.2 , 0.08 , 0.4 , 1. ]])
``` | You can pass `axis=1` parameter:
```
In [58]: LA.norm(X, axis=1, ord=1)
Out[58]: array([12, 20, 13, 44, 42])
In [59]: LA.norm(X, axis=1, ord=2)
Out[59]: array([ 7.07106781, 10.09950494, 7.41619849, 27.67670501, 27.45906044])
``` | 1,729 |
28,975,468 | When I run ipython notebook; I get "ImportError: IPython.html requires pyzmq >= 13" error message in console. I already run " pip install "ipython[notebook]" " but I can not run the notebook. Could you pls assist how to solve this issue.
```
C:\Python27\Scripts>ipython notebook
Traceback (most recent call last):
File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "C:\Python27\lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "C:\Python27\Scripts\ipython.exe\__main__.py", line 9, in <module>
File "C:\Python27\lib\site-packages\IPython\__init__.py", line 120, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "C:\Python27\lib\site-packages\IPython\config\application.py", line 573, in launch_instance app.initialize(argv)
File "<string>", line 2, in initialize
File "C:\Python27\lib\site-packages\IPython\config\application.py", line 75, in catch_config_error
return method(app, *args, **kwargs)
File "C:\Python27\lib\site-packages\IPython\terminal\ipapp.py", line 321, in initialize super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "C:\Python27\lib\site-packages\IPython\config\application.py", line 75, in catch_config_error
return method(app, *args, **kwargs)
File "C:\Python27\lib\site-packages\IPython\core\application.py", line 369, in initialize self.parse_command_line(argv)
File "C:\Python27\lib\site-packages\IPython\terminal\ipapp.py", line 316, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "C:\Python27\lib\site-packages\IPython\config\application.py", line 75, in catch_config_error
return method(app, *args, **kwargs)
File "C:\Python27\lib\site-packages\IPython\config\application.py", line 471, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "C:\Python27\lib\site-packages\IPython\config\application.py", line 75, in catch_config_error
return method(app, *args, **kwargs)
File "C:\Python27\lib\site-packages\IPython\config\application.py", line 402, in initialize_subcommand
subapp = import_item(subapp)
File "C:\Python27\lib\site-packages\IPython\utils\importstring.py", line 42, in import_item
module = __import__(package, fromlist=[obj])
File "C:\Python27\lib\site-packages\IPython\html\notebookapp.py", line 29, in <module>
check_for_zmq('13', 'IPython.html')
File "C:\Python27\lib\site-packages\IPython\utils\zmqrelated.py", line 12, in check_for_zmq
raise ImportError("%s requires pyzmq >= %s"%(required_by,minimum_version))
ImportError: IPython.html requires pyzmq >= 13
``` | 2015/03/10 | [
"https://Stackoverflow.com/questions/28975468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2993130/"
] | Looks like you need the pyzmq package (version >= 13).
You can try installing it (or upgrading if need be) with:
`pip install --upgrade pyzmq` | These steps work for me (win8 + anaconda 2.7.10):
* Uninstall zmq 4.0.4.
* Install zmq 3.2.4.
* pip uninstall ipython
* pip install "ipython[all]"
* pip uninstall pyzmq
* pip install pyzmq | 1,730 |
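Either way, the `check_for_zmq` guard shown in the traceback boils down to a numeric comparison of dotted version strings. A pure-Python sketch of that kind of check (the helper name is illustrative, not part of IPython):

```python
def meets_minimum(installed, minimum):
    """Compare dotted version strings numerically, e.g. '14.3.1' >= '13'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split('.'))
    return as_tuple(installed) >= as_tuple(minimum)

print(meets_minimum('14.3.1', '13'))   # True: new enough
print(meets_minimum('2.2.0.1', '13'))  # False: would hit the ImportError path
```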
37,888,923 | I have a datafile like this:
```
# coating file for detector A/R
# column 1 is the angle of incidence (degrees)
# column 2 is the wavelength (microns)
# column 3 is the transmission probability
# column 4 is the reflection probability
14.2000 0.531000 0.0618000 0.938200
14.2000 0.532000 0.0790500 0.920950
14.2000 0.533000 0.0998900 0.900110
# it has lots of other lines
# datafile can be obtained from pastebin
```
The link to input datafile is:
<http://pastebin.com/NaNbEm3E>
I'd like to create 20 files from this input such that each file has the comment lines.
That is :
```
#out1.txt
#comments
first part of one-twentieth data
# out2.txt
# given comments
second part of one-twentieth data
# and so on upto out20.txt
```
How can we do so in python?
My intitial attempt is like this:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Author : Bhishan Poudel
# Date : May 23, 2016
# Imports
from __future__ import print_function
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# read in comments from the file
infile = 'filecopy_multiple.txt'
outfile = 'comments.txt'
comments = []
with open(infile, 'r') as fi, open (outfile, 'a') as fo:
for line in fi.readlines():
if line.startswith('#'):
comments.append(line)
print(line)
fo.write(line)
#==============================================================================
# read in a file
#
infile = infile
colnames = ['angle', 'wave','trans','refl']
print('{} {} {} {}'.format('\nreading file : ', infile, '','' ))
df = pd.read_csv(infile,sep='\s+', header = None,skiprows = 0,
comment='#',names=colnames,usecols=(0,1,2,3))
print('{} {} {} {}'.format('length of df : ', len(df),'',''))
# write 20 files
df = df
nfiles = 20
nrows = int(len(df)/nfiles)
groups = df.groupby( np.arange(len(df.index)) / nrows )
for (frameno, frame) in groups:
frame.to_csv("output_%s.csv" % frameno,index=None, header=None,sep='\t')
```
So far I have twenty split files. I just want to copy the comment lines to each of the files. But the question is: `how to do so?`
There should be an easier method than creating another 20 output files with comments only and appending the twenty split files to them.
Some useful links are following:
[How to split a dataframe column into multiple columns](https://stackoverflow.com/questions/18034361/how-to-split-a-dataframe-column-into-multiple-columns)
[How to split a DataFrame column in python](https://stackoverflow.com/questions/34733587/how-to-split-a-dataframe-column-in-python)
[Split a large pandas dataframe](https://stackoverflow.com/questions/17315737/split-a-large-pandas-dataframe) | 2016/06/17 | [
"https://Stackoverflow.com/questions/37888923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5200329/"
] | `inline-block` is `vertical-align: baseline` by default, and you are setting it to `middle` at first, but you need to set it to `top`:
```css
.logo-letter-text {
width: 1em;
text-align: center;
font-family: "Bebas Kai";
font-weight: 400;
color: rgba(246, 244, 229, 1.0);
}
.nav-menu {
position: relative;
position: fixed;
top: 0;
width: 100%;
height: 8%;
background: rgba(18, 18, 18, 1.0);
}
.nav-menu ul {
margin: 0;
padding: 0;
list-style-type: none;
height: 100%;
text-align: left;
padding-left: 30px;
border: 2px solid rgba(255, 255, 255, 1.0);
}
.nav-menu ul li {
border: 2px solid rgba(255, 0, 255, 1.0);
list-style-type: none;
line-height: 50px;
padding-left: 12px;
padding-right: 12px;
height: auto;
display: inline-block;
vertical-align: top;
}
.nav-menu ul li:nth-child(n+2) {
color: rgba(86, 86, 86, 1.0);
}
.nav-menu ul li:hover {
color: rgba(255, 255, 255, 1.0);
}
.nav-menu ul li:nth-child(1) {
border: 2px solid rgba(255, 0, 0, 1.0);
background: rgba(255, 102, 0, 1.0);
padding: 0;
margin: 0;
}
.nav-menu ul li:nth-child(n+2) {
font-size: 40px;
}
```
```html
<body class="bg-theme-main">
<nav class="nav-menu">
<ul>
<li><span class="logo-letter-text">M </span><span class="logo-letter-text">B </span><span class="logo-letter-text">A </span>
</li>
<li><span>A</span>
</li>
<li><span>B</span>
</li>
<li><span>C</span>
</li>
<li><span>D</span>
</li>
</ul>
</nav>
</body>
``` | Instead of using `.nav-menu ul li {display: inline-block}`
Use `.nav-menu ul li {float: left;}`
See fiddle <https://jsfiddle.net/4uggcyro/4/>
Or another solution would be to use `display: flex;`
```
.nav-menu ul {
display: flex;
flex-direction: row;
}
```
See fiddle <https://jsfiddle.net/4uggcyro/6/> | 1,731 |
12,405,322 | I have written code for parallel programming in Python. I am using the pp module for this.
`job_server = pp.Server(ncpus, ppservers=ppservers)`
where `ncpus=8`, which is the number of cores in my system.
python version:2.6.5.
pp version:1.6.2.
But I am facing an error as follows,
```
Traceback (most recent call last):
File "/home/a3rmt/LIVE_ECG/file_video.py", line 613, in <module>
MakeImagesThread(array_file)
File "/home/a3rmt/LIVE_ECG/file_video.py", line 502, in MakeImagesThread
job_server = pp.Server(ncpus, ppservers=ppservers)
File "/usr/lib/pymodules/python2.6/pp.py", line 366, in __init__
raise ValueError("secret must be set using command-line option or configuration file")
ValueError: secret must be set using command-line option or configuration file
``` | 2012/09/13 | [
"https://Stackoverflow.com/questions/12405322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1372331/"
] | The theme is not used as part of the FPC uri and therefore there is only one cache per package.
I wrote a little extension to fix the issue and you can grab it on Github.
<https://github.com/benjy14/MobileFpcFix> | I have a feeling that the design exceptions support in Enterprise/PageCache work at the *package* level and not the *theme* level. Take a look at the code referencing design exceptions in app/code/core/Enterprise/PageCache/Model/Observer.php. My first suggestion would be to contact EE support, perhaps they can provide an appropriate solution or a patch. Alternatively, you can modify the caching key mechanism by rewriting enterprise\_pagecache/processor:
```
public function prepareCacheId($id)
{
$package = Mage::getDesign()->getPackageName();
$templateTheme = Mage::getDesign()->getTheme('template');
$id = $id.'_'.$package.'_'.$templateTheme;
return parent::prepareCacheId($id);
}
``` | 1,732 |
66,209,089 | The problem:
------------
I have a column with a list of redundant values, which I need to be converted into a dictionary-like format in a new column of a PySpark dataframe.
The scenario:
Here's my PySpark dataframe:
| A | C | all\_classes |
| --- | --- | --- |
| 10 | RDK | [1, 1, 1, 2, 2] |
| 10 | USW | [1, 2, 2, 2, 2, 2, 2] |
| 8 | RDK | [21, 21, 21, 21, 20] |
| 8 | RDJ | [20, 20, 21] |
| 10 | RDK | [45, 45, 45, 45, 2, 2, 2] |
| 7 | SSW | [6, 6, 6, 19, 19] |
which I created using below line:
```
my_df.groupBy(['A', 'C']).agg(collect_list("Class").alias("all_classes"))
```
Now that I want a new column that will collate data as follows:
Consider the data of 1'st row from the above column: `[1, 1, 1, 2, 2]`
That needs to be converted into `{1: 3, 2: 2}`, which is basically each number and the number of times it is duplicated.
My attempt:
-----------
Since I'm good with Python, I chose to write a UDF, something like this:
```
custom_collect_function = udf(lambda li: {k: li.count(k) for k in set(li)}, StructType(li, ArrayType(elementType=IntegerType()), False))
display(my_df.withColumn("Distribution", custom_collect_function(my_df.all_classes)))
```
I'm clearly failing to set the StructType here and I welcome any other/optimized approach than this. I also welcome any Spark way approach to the same.
The expected output:
| A | C | Distribution | all\_classes |
| --- | --- | --- | --- |
| 10 | RDK | {1: 3, 2: 2} | [1, 1, 1, 2, 2] |
| 10 | USW | {1: 1, 2: 6} | [1, 2, 2, 2, 2, 2, 2] |
| 8 | RDK | {21: 4, 20: 1} | [21, 21, 21, 21, 20] |
| 8 | RDJ | {20: 2, 21: 1} | [20, 20, 21] |
| 10 | RDK | {45: 4, 2: 3} | [45, 45, 45, 45, 2, 2, 2] |
| 7 | SSW | {6: 3, 19: 2} | [6, 6, 6, 19, 19] | | 2021/02/15 | [
"https://Stackoverflow.com/questions/66209089",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8046443/"
] | Just introduce extra variables:
```
int main()
{
int a, b;
std::cin >> a >> b;
const int sum = a + b;
const int diff = a - b;
std::cout << sum << std::endl;
std::cout << diff;
}
```
or use computation when needed:
```
int main()
{
int a, b;
std::cin >> a >> b;
std::cout << a + b << std::endl;
std::cout << a - b;
}
```
Else you would have to use a convoluted way which would probably be less readable. | If you want to have:
```
a = a + b
b = a - b
```
in the end and don't want to use any other variable, you can also do it like this:
```
#include<iostream>
using namespace std;
int main()
{
int a,b;
cin>>a>>b;
a = a + b;
b = a - ( 2 * b);
cout<<a<<endl;
cout<<b;
}
```
but please note that it is not good practice to use `using namespace std;`. Instead, do it like this:
```
#include<iostream>
int main()
{
int a,b;
std::cin >> a >> b;
a = a + b;
b = a - ( 2 * b);
std::cout << a << std::endl;
std::cout << b << std::endl;
}
``` | 1,734 |
70,134,739 | I'm new to Python and would appreciate any help I can get.
I'm looking at this code :
```
if left[0] < right[0]:
result.append(left[0])
left = left[1:]
elif left[0] > right[0]:
result.append(right[0])
right = right[1:]
max_iter -= 1
```
I don't understand what it means; it's about changing the order of numbers in a sequence to ascending order, but what does [0] mean? | 2021/11/27 | [
"https://Stackoverflow.com/questions/70134739",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17303354/"
] | The `%y` code matches a two-digit year - for a four-digit year, you should use `%Y` instead.
```
date = datetime.strptime('2021-11-27 00:00', '%Y-%m-%d %H:%M')
``` | as per the [documentation](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior), `%y` is
>
> Year without century as a zero-padded decimal number.
>
>
>
and `%Y` is
>
> Year with century as a decimal number.
>
>
>
so
```
from datetime import datetime
date = datetime.strptime('2021-11-27 00:00', '%Y-%m-%d %H:%M')
date
```
will give
`datetime.datetime(2021, 11, 27, 0, 0)` | 1,737 |
60,610,009 | I tried to make a choice menu where each option does different things. For example, if you choose number 1, it works fine, but if you try to choose 2 or another number, it will first try to run 1, and I don't want this. Is there a way to make each option "independent"?
Example (this will work):
```
choice = input ("""
1. Make thing 1
2. Make thing 2
3. Make thing 3
4. Exit
Please select your choice:""")
if choice == "1":
print("thing 1")
if choice == "2":
print("thing 2")
if choice == "3":
print("thing 3")
if choice == "4":
print("thing 4")
```
but, if 1 later has more coding, and you want to work with option 2, python will run also 1... | 2020/03/10 | [
"https://Stackoverflow.com/questions/60610009",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13025132/"
] | Python lacks a switch/case statement (like C/C++) in which you CAN have it perform multiple (adjacent) case conditions, and then have it `break` before processing further cases. In Python you'll need to simulate using if-elif-else statements, perhaps utilizing comparison operators (like `==`, `<`) and/or boolean operators ( like `and`, `or`) in conditionals accordingly.
Here's an example of a C language switch/case [switch/case in python](https://bytebaker.com/2008/11/03/switch-case-statement-in-python/):
```
switch(n) {
case 0:
printf("You typed zero.\n");
break;
case 1:
case 9:
printf("n is a perfect square\n");
break;
case 2:
printf("n is an even number\n");
case 3:
case 5:
case 7:
printf("n is a prime number\n");
break;
case 4:
printf("n is a perfect square\n");
case 6:
case 8:
printf("n is an even number\n");
break;
default:
printf("Only single-digit numbers are allowed\n");
break;
}
```
Here's how you might take a first crack at simulating the switch/case in Python [switch/case in python](https://bytebaker.com/2008/11/03/switch-case-statement-in-python/):
```
if n == 0:
print "You typed zero.\n"
elif n == 1 or n == 9 or n == 4:
print "n is a perfect square\n"
elif n == 2 or n == 6 or n == 8:
print "n is an even number\n"
elif n == 3 or n == 5 or n == 7:
print "n is a prime number\n"
elif n > 9:
print "Only single-digit numbers are allowed\n"
```
And here's a much better, "Pythonic" way of doing it [switch/case in python](https://bytebaker.com/2008/11/03/switch-case-statement-in-python/):
```
def zero():
    print "You typed zero.\n"

def sqr():
    print "n is a perfect square\n"

def even():
    print "n is an even number\n"

def prime():
    print "n is a prime number\n"

# the functions must be defined before the dict that references them
options = {0 : zero,
           1 : sqr,
           4 : sqr,
           9 : sqr,
           2 : even,
           3 : prime,
           5 : prime,
           7 : prime,
           }

options[n]()
``` | Good news for you, if you are still interested in using the switch case in Python.
You can now use `match` with Python 3.10,
like this:
```py
match choice:
    case "1":
        print("thing 1")
    case "2":
        print("thing 2")
    case "3":
        print("thing 3")
    case "4":
        print("thing 4")
```
For more details, read [the Python 3.10 release notes](https://docs.python.org/3/whatsnew/3.10.html) | 1,739 |
40,332,032 | I'm thinking about writing a desktop application whose GUI is made with either HTML or PHP, but whose functions are run by separate Java or Python code. Are there any pointers I can look into? | 2016/10/30 | [
"https://Stackoverflow.com/questions/40332032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5517067/"
] | There are a couple of possible options:
1. Run your backend code as an embedded HTTP-server (like Jetty\* for Java or Tornado\* for Python). If the user starts the application, the backend runs the server and automatically starts the web browser with the URL of your server. This, however, may cause problems with the operating system firewall (running a server on the local machine)
2. You could also have a look at [CEF](https://en.wikipedia.org/wiki/Chromium_Embedded_Framework) (chromium embedded framework). It is made for exactly this purpose (running an HTML-Application inside your code). It uses the same codebase as the chromium (and chrome) web browser. It was developed originally for C++, but there is also a Java binding: [java-cef](https://bitbucket.org/chromiumembedded/java-cef)
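A minimal sketch of option 1 using only the Python standard library (the port is arbitrary, and a real application would register its own handlers rather than serving the current directory):

```python
import threading
import webbrowser
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8765  # arbitrary local port

# serve on localhost in a background thread so the main program keeps running
server = HTTPServer(('127.0.0.1', PORT), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# then point the user's default browser at the UI
webbrowser.open('http://127.0.0.1:%d/' % PORT)
```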
Oh and by the way, PHP is a server-side language. I would not recommend to use it in your scenario (since your backend code is Python or Java).
\*I have not enough reputation to add more than two links, so you'll have to google those ones yourself. | You could expose data from Java or Python as JSON via GET request and use PHP to access it. There are multiple libraries for each of these languages both for writing and reading JSON. GET request can take parameters if needed. | 1,740 |
12,311,348 | I am trying to implement a class in which an attempt to access any attributes that do not exist in the current class or any of its ancestors will attempt to access those attributes from a member. Below is a trivial version of what I am trying to do.
```
class Foo:
def __init__(self, value):
self._value = value
def __getattr__(self, name):
return getattr(self._value, name)
if __name__ == '__main__':
print(Foo(5) > Foo(4)) # should do 5 > 4 (or (5).__gt__(4))
```
However, this raises a `TypeError`. Even using the `operator` module's `attrgetter` class does the same thing. I was taking a look at [the documentation regarding customizing attribute access](http://docs.python.org/py3k/reference/datamodel.html?highlight=__get__#object.__getattr__), but I didn't find it an easy read. How can I get around this? | 2012/09/07 | [
"https://Stackoverflow.com/questions/12311348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/652722/"
] | If I understand you correctly, what you are doing is correct, but it still won't work for what you're trying to use it for. The reason is that implicit magic-method lookup does not use `__getattr__` (or `__getattribute__` or any other such thing). The methods have to actually explicitly be there with their magic names. Your approach will work for normal attributes, but not magic methods. (Note that if you do `Foo(5).__lt__(4)` explicitly, it will work; it's only the implicit "magic" lookup --- e.g., calling `__lt__` when `<` is used --- that is blocked.)
[This post](https://stackoverflow.com/questions/9057669/how-can-i-intercept-calls-to-pythons-magic-methods-in-new-style-classes) describes an approach for autogenerating magic methods using a metaclass. If you only need certain methods, you can just define them on the class manually. | `__*__` methods will not work unless they actually exist - so neither `__getattr__` nor `__getattribute__` will allow you to proxy those calls. You must create every single method manually.
Yes, this does involve quite a bit of copy&paste. And yes, it's perfectly fine in this case.
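For the `Foo` class from the question, the hand-written forwarding might look like this (a sketch showing only `__gt__`/`__lt__`; the same pattern applies to any other magic method you need):

```python
class Foo:
    def __init__(self, value):
        self._value = value

    def __getattr__(self, name):
        # still handles ordinary attribute access
        return getattr(self._value, name)

    # magic methods must exist on the class itself for implicit lookup
    def __gt__(self, other):
        return self._value > other._value

    def __lt__(self, other):
        return self._value < other._value

print(Foo(5) > Foo(4))  # True
```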
You might be able to use the [werkzeug LocalProxy](https://github.com/mitsuhiko/werkzeug/blob/master/werkzeug/local.py#L248) class as a base or instead of your own class; your code would look like this when using `LocalProxy`:
```
print(LocalProxy(lambda: 5) > LocalProxy(lambda: 4))
``` | 1,741 |
3,905,179 | I'm struggling with setting my scons environment variables for visual studio 2008.
Normally I do the following:
```
%VS90COMNTOOLS%vsvars32.bat
or
call %VS90COMNTOOLS%vsvars32.bat
```
And this works in my shell.
I try to do that in python using subprocess
```
subprocess.call([os.environ['VS90COMNTOOLS']+r"\vsvars32.bat"])
```
output:
```
d:\N\workspace>scons test
scons: Reading SConscript files ...
Setting environment for using Microsoft Visual Studio 2008 x86 tools.
KeyError: 'INCLUDE':
```
The above invokes the batch process, but environment variables are not inherited from it into my 'master' process.
When i use:
```
subprocess.Popen(["call ", os.environ['VS90COMNTOOLS']+r"\vsvars32.bat"])
```
I get:
```
d:\N\workspace>scons test
scons: Reading SConscript files ...
WindowsError: [Error 2] The system cannot find the file specified:
File "D:\N\workspace\SConstruct", line 17:
subprocess.Popen(["call ", os.environ['VS90COMNTOOLS']+r"\vsvars32.bat"])
File "C:\Python26\lib\subprocess.py", line 595:
errread, errwrite)
File "C:\Python26\lib\subprocess.py", line 821:
startupinfo)
```
How to achieve that? | 2010/10/11 | [
"https://Stackoverflow.com/questions/3905179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/112496/"
] | Write a batch file that runs `vsvars32.bat` and then outputs the values in the form `VARNAME=value`, then have your Python script parse the values and inject them into `os.environ`.
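A sketch of that approach (the helper names are illustrative; the batch half is Windows-only, while the parsing half is plain string handling):

```python
import subprocess

def parse_env_dump(dump):
    """Parse the NAME=value lines printed by cmd.exe's `set` command."""
    env = {}
    for line in dump.splitlines():
        name, sep, value = line.partition('=')
        if sep:
            env[name] = value
    return env

def env_from_batch(batch_file):
    # run the batch file, then dump the resulting child environment
    cmd = 'call "%s" && set' % batch_file
    dump = subprocess.check_output(cmd, shell=True)
    return parse_env_dump(dump.decode(errors='replace'))

# Windows usage (merges INCLUDE, LIB, PATH, ... into this process):
# import os
# os.environ.update(env_from_batch(os.environ['VS90COMNTOOLS'] + 'vsvars32.bat'))
```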
This is done in Python's own distutils module, [see the source here](http://hg.python.org/distutils2/file/0291648eb2b2/distutils2/compiler/msvc9compiler.py#l239). | In addition to the previous answer, here is an excerpt of my SConstruct:
```
for key in ['INCLUDE','LIB']:
if os.environ.has_key(key):
env.Prepend(ENV = {key.upper():os.environ[key]})
```
Please take care that variable names in Python are case-sensitive. Ensure that your `env['ENV']` dict has no duplicate variable names with different case, otherwise the Windows shell will only see one variant of the variable. | 1,742 |
24,024,920 | I have a python script which runs a fabfile. My issue is that I am asked for a password whenever I run the fabfile from my script. However, the login works fine with the specified key when I run the fabfile manually from the command line even though I am using the same fab parameters. Here are the contents of my fabfile:
```
[root@ip-10-10-20-82 bakery]# cat fabfile.py
from fabric.api import run
def deploy():
run('wget -P /tmp https://s3.amazonaws.com/LinuxBakery/httpd-2.2.26-1.1.amzn1.x86_64.rpm')
run('sudo yum localinstall /tmp/httpd-2.2.26-1.1.amzn1.x86_64.rpm')
```
Here is the syntax I use on the command line that works successfully:
```
fab -u ec2-user -i id_rsa -H 10.10.15.185 deploy
```
Here is the bit of python code which for some reason is prompting for a password instead of using the key:
```
import subprocess
subprocess.call(['fab', '-f', '/home/myhome/scripts/bakery/fabfile.py', '-u ec2-user', '-i', '/home/myhome/scripts/bakery/id_rsa', '-H', bakery_internalip, 'deploy'])
```
Here is what happens when I run it:
```
[10.10.15.185] Executing task 'deploy'
[10.10.15.185] run: wget -P /tmp https://s3.amazonaws.com/LinuxBakery/httpd-2.2.26-1.1.amzn1.x86_64.rpm
[10.10.15.185] Login password for ' ec2-user':
``` | 2014/06/03 | [
"https://Stackoverflow.com/questions/24024920",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3507094/"
] | You can use [ssh-agent](http://www.openbsd.org/cgi-bin/man.cgi?query=ssh-agent&sektion=1):
```
$ eval `ssh-agent -s`
$ ssh-add /home/myhome/scripts/bakery/id_rsa
$ fab -H 10.10.15.185 deploy
``` | I was being asked for a password even though I had specified a key because there was an extra space between the "u" and "ec2-user". Here is the snippet before:
```
'-u ec2-user'
```
And here it is after:
```
'-uec2-user'
```
The extra space meant that fab was trying to authenticate with " ec2-user" instead of "ec2-user". | 1,744 |
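The behaviour is easy to reproduce with `optparse`, which Fabric 1.x uses to parse its command line: a value attached directly to a short option keeps everything after the flag letter, including the space:

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option('-u', dest='user')

# '-u ec2-user' as a single argv element: the value keeps the leading space
opts, _ = parser.parse_args(['-u ec2-user'])
print(repr(opts.user))  # ' ec2-user'

# '-uec2-user' (or '-u', 'ec2-user' as two elements) gives the intended value
opts, _ = parser.parse_args(['-uec2-user'])
print(repr(opts.user))  # 'ec2-user'
```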
70,386,636 | I'm trying to load a large CSV file into Pandas, which I'm new to.
The input file should have 13 columns. However, Pandas is reading all of the column headings as one heading, and then just collecting the first few columns of data.
The code I am using is:

```
leases = pd.read_csv("/content/LEASES_FULL_2021_12.csv", sep=',', delimiter=None, header=0, names=None, index_col=False, usecols=None, squeeze=False, engine="python")
```
The CSV is formatted as follows:

```
Unique Identifier,Tenure,Register Property Description,County,Region,Associated Property Description ID,Associated Property Description,OS UPRN,Price Paid,Reg Order,Date of Lease,Term,Alienation Clause Indicator
"1608D08BFC5496E31C7926595EE2F1BE278ED436","Leasehold","19 Alcester Crescent, Clapton","GREATER LONDON","GREATER LONDON","501286752","19 ALCESTER CRESCENT, LONDON E5 9PX","10008240310","","2","13-02-1905","99 years from 25 December 1902","N"
"5D0FA4909B7C0FD9477C2275E1948C8F135E233F","Leasehold","7 Agnes Street, Limehouse","GREATER LONDON","GREATER LONDON","3125118","7 AGNES STREET, LONDON E14 7DG","6044926","","2","16-10-1866","99 years from 24 June 1862","N"
```

etc.
The DataFrame then loads with one column as follows:
[enter image description here](https://i.stack.imgur.com/mfStC.png)
Any help would be greatly appreciated. | 2021/12/16 | [
"https://Stackoverflow.com/questions/70386636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16950273/"
] | The problem is that through type erasure, the compiler will produce the method below for your generic method.
```
public static JsonNode of(String key, Object value) {
ObjectMapper objectMapper = new ObjectMapper();
ObjectNode root = objectMapper.createObjectNode();
root.put(key, value); // <== error shows up on this line
return root;
}
```
Per Jackson documentation, there's no such method `put(String key, Object value)`.
Instead of calling the method `of(String key, T value)`, I would just do the following:
```
ObjectNode root = new ObjectMapper().createObjectNode();
root.put(key, value);
```
Or you could write several overloaded methods `of(...)`. | There is another approach that works for any json object:
```
Map<String, Object> map = new ObjectMapper().readValue(json, new TypeReference<HashMap<String,Object>>() {});
```
The object in the value can be any value object (`String`, `Integer`, etc), another `Map<String, Object>` as a nested object or a `List<Object>`/`List<Map<String, Object>>` and so on down to any depth. | 1,745 |
6,866,802 | I'm fairly proficient in PHP and am also learning Python. I have been wanting to create a basic game for some time now and would like to create it in Python. But in order to have a fancy, smooth interface I need to use JavaScript (I don't much care for the Flash/Silverlight route). So I decided to start looking up game development in JavaScript. In doing so, most of the examples I have seen are games done nearly completely in JavaScript. And many of these games are quite nice.
But what I'm wondering is: if I combine Python and JavaScript together for creating games, should I rely more heavily on JavaScript and let Python do some of the background stuff? Or should I only use JavaScript for the basic UI elements, animation, flashy stuff and whatnot, and keep the core game engine in Python? Or some kind of mix between the two?
For PHP/Python I'm mostly used to MVC-style frameworks. So if I used JS only for the visuals, would I just make heavy use of it in the view files as I would with a normal application? I know I can make a game in either language completely, but to use the best of both worlds I'm not quite sure where I should draw the line between letting Python do the work and letting JS do the work.
Also, as a reference, I would like to use some advanced features like canvas/SVG and so on, not really so much WebGL. And the game I have in mind to practice on is an isometric-style RTS. The game will mostly be for learning; I'm not really planning on releasing it.
Any input is appreciated.
---
Clarification:
the game will be web based. My web server is nginx. The games I would like to do would be multiplayer games where multiple people are playing online at the same time. Think World of Warcraft: each server could hold up to N players that play against each other. When that one is, say, 80% full, a new one is spun up. Or something along those lines. I would like the players to actually be able to interact with each other live. Which is why in some ways I was also kind of looking into node.js, but I don't know as much about it yet; it seemed like it could fit.
--
Update: I'm also planning on using MongoDB as the database, if that matters to anyone's answer. | 2011/07/28 | [
"https://Stackoverflow.com/questions/6866802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/519990/"
] | I found that splitting the collection view item view into its own XIB and then rewiring the connections so that the collection view item prototype loads the new XIB will allow for you to create the bindings in interface builder without it crashing. I followed these steps...
1. Delete the collection view item view from the default xib.
2. Create a new view and XIB inheriting from NSCollectionViewItem.
3. Create your collection view item in the new view.
4. Bind the fields to the files owner of the new view.
5. Back in the collection view xib, update the properties of the collection view item to load the nib name of the new XIB.
I'm not sure that it's quicker than doing it programmatically, but it does allow you to manage the bindings without writing code. I've got a few apps working this way now. | Yup, I can confirm this bug too, even on Interface Builder 3.
The only workaround is to do the binding programmatically:
```
[textField bind:@"value" toObject:collectionViewItem withKeyPath:@"representedObject.foo" options:nil];
``` | 1,748 |
63,345,527 | I am trying to build a Docker application that uses Python's gensim library, version 3.8.3, which is being installed via pip from a requirements.txt file.
However, Docker seems to have trouble while trying to do `RUN pip install -r requirements.txt`.
My requirements.txt for reference:
```
boto==2.49.0
boto3==1.14.33
botocore==1.17.33
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
Cython==0.29.14
docutils==0.15.2
Flask==1.1.2
gensim==3.8.3
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.2
jmespath==0.10.0
MarkupSafe==1.1.1
numpy==1.19.1
python-dateutil==2.8.1
requests==2.24.0
s3transfer==0.3.3
scipy==1.5.2
six==1.15.0
smart-open==2.1.0
urllib3==1.25.10
Werkzeug==1.0.1
```
Dockerfile:
```
FROM python:3.8.2-alpine
WORKDIR /project
ADD . /project
RUN set -x && apk add --no-cache build-base && apk add --no-cache libexecinfo-dev
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD ["python","similarity.py"]
```
error:
```
(venv) C:\Users\verma\PycharmProjects\flaskTest>docker image build -t similarity-flask-api .
Sending build context to Docker daemon 302.7MB
Step 1/7 : FROM python:3.8.2-alpine
---> 6c32e2504283
Step 2/7 : WORKDIR /project
---> Using cache
---> 554b6bda89ad
Step 3/7 : ADD . /project
---> d085a645ecb1
Step 4/7 : RUN set -x && apk add --no-cache build-base && apk add --no-cache libexecinfo-dev
---> Running in e7117c1e18ff
+ apk add --no-cache build-base
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
(1/18) Installing libgcc (9.2.0-r4)
(2/18) Installing libstdc++ (9.2.0-r4)
(3/18) Installing binutils (2.33.1-r0)
(4/18) Installing libmagic (5.37-r1)
(5/18) Installing file (5.37-r1)
(6/18) Installing gmp (6.1.2-r1)
(7/18) Installing isl (0.18-r0)
(8/18) Installing libgomp (9.2.0-r4)
(9/18) Installing libatomic (9.2.0-r4)
(10/18) Installing mpfr4 (4.0.2-r1)
(11/18) Installing mpc1 (1.1.0-r1)
(12/18) Installing gcc (9.2.0-r4)
(13/18) Installing musl-dev (1.1.24-r2)
(14/18) Installing libc-dev (0.7.2-r0)
(15/18) Installing g++ (9.2.0-r4)
(16/18) Installing make (4.2.1-r2)
(17/18) Installing fortify-headers (1.1-r0)
(18/18) Installing build-base (0.5-r1)
Executing busybox-1.31.1-r9.trigger
OK: 182 MiB in 52 packages
+ apk add --no-cache libexecinfo-dev
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
(1/2) Installing libexecinfo (1.1-r1)
(2/2) Installing libexecinfo-dev (1.1-r1)
OK: 183 MiB in 54 packages
Removing intermediate container e7117c1e18ff
---> 9e7a97f8bddc
Step 5/7 : RUN pip install --upgrade pip
---> Running in 0286591e9e70
Collecting pip
Downloading pip-20.2.1-py2.py3-none-any.whl (1.5 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 20.1
Uninstalling pip-20.1:
Successfully uninstalled pip-20.1
Successfully installed pip-20.2.1
Removing intermediate container 0286591e9e70
---> ca837786d695
Step 6/7 : RUN pip install -r requirements.txt
---> Running in 7f124c100c0b
Collecting boto==2.49.0
Downloading boto-2.49.0-py2.py3-none-any.whl (1.4 MB)
Collecting boto3==1.14.33
Downloading boto3-1.14.33-py2.py3-none-any.whl (129 kB)
Collecting botocore==1.17.33
Downloading botocore-1.17.33-py2.py3-none-any.whl (6.5 MB)
Collecting certifi==2020.6.20
Downloading certifi-2020.6.20-py2.py3-none-any.whl (156 kB)
Collecting chardet==3.0.4
Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB)
Collecting click==7.1.2
Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)
Collecting Cython==0.29.14
Downloading Cython-0.29.14.tar.gz (2.1 MB)
Collecting docutils==0.15.2
Downloading docutils-0.15.2-py3-none-any.whl (547 kB)
Collecting Flask==1.1.2
Downloading Flask-1.1.2-py2.py3-none-any.whl (94 kB)
Collecting gensim==3.8.3
Downloading gensim-3.8.3.tar.gz (23.4 MB)
Collecting idna==2.10
Downloading idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting itsdangerous==1.1.0
Downloading itsdangerous-1.1.0-py2.py3-none-any.whl (16 kB)
Collecting Jinja2==2.11.2
Downloading Jinja2-2.11.2-py2.py3-none-any.whl (125 kB)
Collecting jmespath==0.10.0
Downloading jmespath-0.10.0-py2.py3-none-any.whl (24 kB)
Collecting MarkupSafe==1.1.1
Downloading MarkupSafe-1.1.1.tar.gz (19 kB)
Processing /root/.cache/pip/wheels/df/b2/64/111c431ca7f7d49afb42126b7351fe1a4894803d75026360de/numpy-1.19.1-cp38-cp38-linux_x86_64.whl
Collecting python-dateutil==2.8.1
Downloading python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
Collecting requests==2.24.0
Downloading requests-2.24.0-py2.py3-none-any.whl (61 kB)
Collecting s3transfer==0.3.3
Downloading s3transfer-0.3.3-py2.py3-none-any.whl (69 kB)
Collecting scipy==1.5.2
Downloading scipy-1.5.2.tar.gz (25.4 MB)
Installing build dependencies: started
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing wheel metadata: started
Preparing wheel metadata: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python /usr/local/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpoyjzx5wb
cwd: /tmp/pip-install-r078skp_/scipy
Complete output (139 lines):
lapack_opt_info:
lapack_mkl_info:
customize UnixCCompiler
libraries mkl_rt not found in ['/usr/local/lib', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
openblas_lapack_info:
customize UnixCCompiler
customize UnixCCompiler
libraries openblas not found in ['/usr/local/lib', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
openblas_clapack_info:
customize UnixCCompiler
customize UnixCCompiler
libraries openblas,lapack not found in ['/usr/local/lib', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
flame_info:
customize UnixCCompiler
libraries flame not found in ['/usr/local/lib', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/local/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib
customize UnixCCompiler
libraries satlas,satlas not found in /usr/local/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib
customize UnixCCompiler
libraries satlas,satlas not found in /usr/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/
customize UnixCCompiler
libraries satlas,satlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/local/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
accelerate_info:
NOT AVAILABLE
lapack_info:
customize UnixCCompiler
libraries lapack not found in ['/usr/local/lib', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
setup.py:460: UserWarning: Unrecognized setuptools command ('dist_info --egg-base /tmp/pip-modern-metadata-ujofw06w'), proceeding with generating Cython sources
and expanding templates
warnings.warn("Unrecognized setuptools command ('{}'), proceeding with "
Running from SciPy source directory.
/tmp/pip-build-env-mw61mr08/overlay/lib/python3.8/site-packages/numpy/distutils/system_info.py:1712: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
if getattr(self, '_calc_info_{}'.format(lapack))():
/tmp/pip-build-env-mw61mr08/overlay/lib/python3.8/site-packages/numpy/distutils/system_info.py:1712: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
if getattr(self, '_calc_info_{}'.format(lapack))():
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/tmp/pip-build-env-mw61mr08/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 157, in prepare_metadata_for_build_wheel
self.run_setup()
File "/tmp/pip-build-env-mw61mr08/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 248, in run_setup
super(_BuildMetaLegacyBackend,
File "/tmp/pip-build-env-mw61mr08/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 142, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 583, in <module>
setup_package()
File "setup.py", line 579, in setup_package
setup(**metadata)
File "/tmp/pip-build-env-mw61mr08/overlay/lib/python3.8/site-packages/numpy/distutils/core.py", line 137, in setup
config = configuration()
File "setup.py", line 477, in configuration
raise NotFoundError(msg)
numpy.distutils.system_info.NotFoundError: No lapack/blas resources found.
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/local/bin/python /usr/local/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_bu
ild_wheel /tmp/tmpoyjzx5wb Check the logs for full command output.
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
```
**I tried this thread - [Docker unable to install numpy, scipy, or gensim](https://stackoverflow.com/questions/44732303/docker-unable-to-install-numpy-scipy-or-gensim)
As suggested, I added lines 4 and 5 in my dockerFile, but it is still not working.** | 2020/08/10 | [
"https://Stackoverflow.com/questions/63345527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12138506/"
] | In the post you mention, they install `libc-dev` to compile packages ...
You don't.
```
RUN apt-get -y install libc-dev
RUN apt-get -y install build-essential
```
I have had problems trying to use "alpine" with Python...
so we chose ["slim-buster"](https://hub.docker.com/_/python) as the Docker image for Python.
So, if you can:
1 - I would try slim-buster
2 - Try a numpy-ready Docker image and install your Python packages. | To install `numpy` on an alpine image, you typically need a few more dependencies:
```
RUN apk update && apk add gfortran build-base openblas-dev libffi-dev
```
Namely the openblas-dev, which you are missing. That will at least get `numpy` to install | 1,751 |
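Putting the two answers together against the Dockerfile from the question, the fix amounts to one extra `apk add` line before the pip step. A sketch (keeping the questioner's base image and file names; the exact package set may vary by Alpine version):

```dockerfile
FROM python:3.8.2-alpine
WORKDIR /project
ADD . /project
# build-base supplies gcc/g++/make; gfortran and openblas-dev provide the
# Fortran compiler and the BLAS/LAPACK headers that scipy's build could not find
RUN apk add --no-cache build-base gfortran openblas-dev libffi-dev
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD ["python", "similarity.py"]
```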
54,299,203 | I've looked around and haven't found anything just yet. I'm going through emails in an inbox and checking for a specific word set. It works on most emails, but some of them don't parse. I checked the broken emails using:
```
print (msg.Body.encode('utf8'))
```
and my problem messages all start with **b'**.
like this
```
b'\xe6\xa0\xbc\xe6\xb5\xb4\xe3\xb9\xac\xe6\xa0\xbc\xe6\x85\xa5\xe3\xb9\xa4\xe0\xa8\x8d\xe6\xb4\xbc\xe7\x91\xa5\xe2\x81\xa1\xe7\x91\x
```
I think this is forcing Python to read the body as bytes, but I'm not sure. Either way, after the b, no matter what encoding I try, I get nothing but garbage text.
I've tried other encoding methods as well as decoding beforehand, but I'm just getting a ton of attribute errors.
```
import win32api
import win32com.client
import datetime
import os
import time
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
dater = datetime.date.today() - datetime.timedelta(days = 1)
dater = str(dater.strftime("%m-%d-%Y"))
print (dater)
#for folders in outlook.folders:
# print(folders)
Receipt = outlook.folders[8]
print(Receipt)
Ritems = Receipt.folders["Inbox"]
Rmessage = Ritems.items
for msg in Rmessage:
if (msg.Class == 46 and msg.CreationTime.strftime("%m-%d-%Y") == dater):
print (msg.CreationTime)
print (msg.Subject)
print (msg.Body.encode('utf8'))
print ('..............................')
```
End result is to have the message printed out in the console, or at least give Python a way to read it so I can find the text I'm looking for in the body. | 2019/01/21 | [
"https://Stackoverflow.com/questions/54299203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6054714/"
] | Is the dropdown populated on the server or after an ajax call?
It might be that you've tried to select a value before any are available, as it's still waiting for a response to provide data.
You could wait until that response is received and then select the first option from that data set.
You could reference this example:
<https://stackoverflow.com/a/16746700/2835914>
You could also try:
```
success: function (data) {
var markup = "";
markup += "<option value='0'>Seleccionar..</option>";
for (var x = 0; x < data.length; x++) {
markup += "<option value='" + data[x].Value + "' " + (x === 0 ? "selected" : "") + ">" + data[x].Text + "</option>";
}
$("#MyDropDown").html(markup).show();
},
``` | If you used jQuery ajax, you should do this:
```js
$(document).on("click", 'button',
function() {
$(".modal-body").load(YOURURL, function() {
$("#MyDropDown").prop('selectedIndex', 1);
});
$("#myModal").modal();
});
``` | 1,752 |
15,995,987 | In Python, I have this list containing
```
['HELLO', 'WORLD']
```
how do I turn that list into
```
['OLLEH', 'DLROW']
``` | 2013/04/14 | [
"https://Stackoverflow.com/questions/15995987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2278906/"
] | ```
>>> words = ['HELLO', 'WORLD']
>>> [word[::-1] for word in words]
['OLLEH', 'DLROW']
``` | Using a list comprehension:
```
reversed_list = [x[::-1] for x in old_list]
``` | 1,753 |
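An equivalent spelling of both answers, for readers who find the `[::-1]` slice cryptic, pairs `reversed` with `str.join`:

```python
words = ['HELLO', 'WORLD']

# reversed() walks the characters back to front; join() glues them into a string
flipped = [''.join(reversed(word)) for word in words]
print(flipped)  # ['OLLEH', 'DLROW']
```

The slice form is usually preferred as the idiomatic one-liner; this version just makes the intent explicit.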
16,560,497 | I am new to Python, so this may seem very easy. I am trying to remove all **#** and numbers, and if the same letter is repeated more than two times in a row, I need to change it to only two letters. This works perfectly, but not with ØÆÅ.
*Any ideas how this can be done with ØÆÅ letters?*
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
import math, re, sys, os, codecs
reload(sys)
sys.setdefaultencoding('utf-8')
text = "ån9d ånd ååååånd d9d flllllløde... :)asd "
# Remove anything other than digits
text = re.sub(r'#', "", text)
text = re.sub(r"\d", "", text)
text = re.sub(r'(\w)\1+', r'\1\1', text)
print "Phone Num : "+ text
```
The result I get now is:
```
Phone Num : ånd ånd ååååånd dd flløde... :)asd
```
What I want is:
```
Phone Num : ånd ånd åånd dd flløde... :)asd
``` | 2013/05/15 | [
"https://Stackoverflow.com/questions/16560497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/871742/"
] | You need to work with Unicode values, not with byte strings. UTF-8 encoded `å` is *two* bytes, and a regular expression matching `\w` *only* matches ascii letters, digits and underscores when operating in the default non-Unicode-aware mode.
From the [`re` module documentation](http://docs.python.org/2/library/re.html) on `\w`:
>
> When the `LOCALE` and `UNICODE` flags are not specified, matches any alphanumeric character and the underscore; this is equivalent to the set `[a-zA-Z0-9_]`. With `LOCALE`, it will match the set `[0-9_]` plus whatever characters are defined as alphanumeric for the current locale. If `UNICODE` is set, this will match the characters `[0-9_]` plus whatever is classified as alphanumeric in the Unicode character properties database.
>
>
>
Unfortunately, even when you switch to properly using Unicode values (using a unicode `u''` literal or by decoding your source data to unicode values), using a Unicode regular expression (`re.sub(ur'...')`) and using the `re.UNICODE` flag to switch `\w` to match Unicode alphanumeric characters, the Python `re` module has limited support for Unicode matching still:
```
>>> print re.sub(ur'(\w)\1+', r'\1\1', text, re.UNICODE)
ånd ånd ååååånd dd flløde... :)asd
```
because `å` is not being recognized as alphanumeric:
```
>>> print re.sub(ur'\w', '', text, re.UNICODE)
å å ååååå ø... :)
```
The solution is to use the external [`regex` library](http://pypi.python.org/pypi/regex) which is a version of the `re` library that adds proper full Unicode support:
```
>>> import regex
>>> print regex.sub(ur'(\w)\1+', r'\1\1', text, re.UNICODE)
ånd ånd åånd dd flløde... :)asd
```
That module can do more than just recognize more alphanumeric characters in Unicode values, see the linked package page for more details. | Change:
```
text = u"ån9d ånd åååååååånd d9d flllllløde... :)asd "
```
and
```
text = re.sub(r'(\w)\1+', r'\1\1', text)
```
**COMPLETE SOLUTION**
```
import math, re, sys, os, codecs
reload(sys)
sys.setdefaultencoding('utf-8')
text = u"ån9d ånd åååååååånd d9d flllllløde... :)asd "
# Remove anything other than digits
text = re.sub(r'#', "", text)
text = re.sub(r"\d", "", text)
text = re.sub(r'(\w)\1+', r'\1\1', text)
text = re.sub(r'(\W)\1+', r'\1\1', text)
print "1: "+ text
```
Print:
```
1: ånd ånd åånd dd flløde.. :)asd
``` | 1,759 |
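A note for readers on Python 3: both answers above are written for Python 2, where `\w` in a plain `str` pattern only covers ASCII. In Python 3 the standard `re` module matches Unicode word characters by default, so the original pattern handles å, æ and ø without the external `regex` package:

```python
import re

text = "ån9d ånd ååååånd d9d flllllløde... :)asd "
text = re.sub(r"#", "", text)   # strip hashes
text = re.sub(r"\d", "", text)  # strip digits
# In Python 3, \w matches Unicode letters (including å, æ, ø) by default
text = re.sub(r"(\w)\1+", r"\1\1", text)
print(text)  # ånd ånd åånd dd flløde... :)asd
```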
40,664,786 | I'm a beginner using python, and am writing a "guess my number game". So far I have everything working fine. The computer picks a random number between 1 and 3 and asks the player to guess the number. If the guess is higher than the random number, the program prints "Lower", and vice versa. The player only has 5 tries, and when they run out, the player gets a message and the game ends. If the player guesses correctly, they are congratulated and the game ends. However, sometimes when the number is guessed correctly, the program doesn't print the congratulatory message and I can't figure out why...
```
import random
print("\tWelcome to 'Guess My Number'!:")
print("\nI'm thinking of a number between 1 and 100.")
print("Guess carefully, you only have 5 tries!.\n")
#sets initial values
the_number = random.randint(1,3)
guess = int(input("Take a guess: "))
tries = 1
guesses = 4
#guessing loop
while guess != the_number:
if guess > the_number:
print("Lower...")
elif guesses <= 0:
print("Sorry, you're out of guesses! Try again...")
break
elif guess < the_number:
print("Higher...")
guess = int(input("Take a guess: "))
tries += 1
guesses -= 1
if guess == the_number:
print("You guessed it! The number was", the_number)
print("And it only took you", tries, "tries!\n")
``` | 2016/11/17 | [
"https://Stackoverflow.com/questions/40664786",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6912990/"
] | To answer your original question about the lack of congratulatory message for correct number, end the code with input(), to ensure it does not terminate before displaying the last message.
Order of calculation:
1. give input guess
2. reduce guesses (starting at 5), increase tries (starting at 1)
3. immediate break if guesses == 0
4. evaluate guess (lower, higher or equal, which would end while loop)
```python
import random
print("\tWelcome to 'Guess My Number'!:")
print("\nI'm thinking of a number between 1 and 3.")
print("Guess carefully, you only have 5 tries!.\n")
#sets initial values
the_number = random.randint(1,3)
guess = int(input("Take a guess: "))
tries = 1
guesses = 5
#guessing loop
while guess != the_number:
tries += 1
guesses -= 1
if guesses == 0:
print("Sorry, you're out of guesses! Try again...")
break
elif guess > the_number:
print("Lower...")
elif guess < the_number:
print("Higher...")
guess = int(input("Take a guess: "))
if guess == the_number:
print("You guessed it! The number was", the_number)
print("And it only took you", tries, "tries!\n")
input()
``` | Assuming everything else works, un-indent the final check. You can't check `guess == the_number` inside the loop, since the loop only runs while they aren't equal.
```
#guessing loop
while guess != the_number:
# do logic
# outside guessing loop
if guesses > 0:
print("You guessed it! The number was", the_number)
print("And it only took you", tries, "tries!\n")
``` | 1,760 |
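A design note on both answers: bugs like this are easier to catch when the comparison is separated from the input loop. A sketch (the function name is made up) that keeps the hint logic pure and testable:

```python
def evaluate_guess(guess, the_number):
    """Return the hint the game should print for a single guess."""
    if guess > the_number:
        return "Lower..."
    if guess < the_number:
        return "Higher..."
    return "Correct"

# Exercising every branch without playing the game:
print(evaluate_guess(3, 2))  # Lower...
print(evaluate_guess(1, 2))  # Higher...
print(evaluate_guess(2, 2))  # Correct
```

The main loop then only handles input() and the guess counter, so the win message can never be skipped by a mis-indented check.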
11,055,921 | I am using mongoexport to export mongodb data which also has Image data in Binary format.
Export is done in csv format.
I tried to read the image data from the csv file into Python and tried to store it as an image file in .jpg format on disk.
But it seems that the data is corrupt and the image is not getting stored.
Has anybody come across such a situation or resolved a similar thing?
Thanks, | 2012/06/15 | [
"https://Stackoverflow.com/questions/11055921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459413/"
] | One thing to watch out for is an arbitrary 2MB BSON Object size limit in several of 10gen's implementations. You might have to denormalize your image data and store it across multiple objects. | Depending how you stored the data, it may be prefixed with 4 bytes of size. Are the corrupt exports 4 bytes/GridFS chunk longer than you'd expect? | 1,762 |
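One more thing worth checking before blaming the data: mongoexport typically serializes binary fields as Base64 text (the exact framing depends on the mongoexport version), so the CSV cell must be decoded before being written to disk. A minimal sketch, assuming the cell really is plain Base64 (the field and file names here are made up):

```python
import base64

def save_image(b64_cell, path):
    # The CSV cell holds Base64 text, not raw bytes; decode it first
    raw = base64.b64decode(b64_cell)
    # Binary mode ("wb") matters, especially on Windows
    with open(path, "wb") as f:
        f.write(raw)

# Round trip with fake "JPEG" bytes standing in for a real image cell:
cell = base64.b64encode(b"\xff\xd8\xff\xe0fake-jpeg").decode("ascii")
save_image(cell, "out.jpg")
```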
58,851,954 | I'm trying to do the obeythetestinggoat tutorial and can't set up my geckodriver.
I'm working in Win10, 64-bit.
My pip freeze shows:
Django==1.7, selenium==3.141.0, urllib3==1.25.7
I downloaded the geckodriver (geckodriver-v0.26.0-win64). When I try to get the geckodriver version (via `$ geckodriver --version`) it stops and shows me an 'application error'.
I think the error is in the environment variables (I tried putting the file in locations where the variables are set (windows/system32 or python/scripts), but nothing works).
I also tried this solution (putting the file in a folder that is on the PATH) on another computer and it works. | 2019/11/14 | [
"https://Stackoverflow.com/questions/58851954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7663963/"
] | Vim 8 ( and a number of Vim emulations )
I'd start with
```
<h1>This is h0</h1>
<h2>This is h0</h2>
<h3>This is h0</h3>
<h4>This is h0</h4>
<h5>This is h0</h5>
<h6>This is h0</h6>
```
then on the top 0 of h0. I'd block highlight with `CTRL-V`
go down to the bottom 0 of the h6 tag with `5j`
then type `g` and then `CTRL-A`
and it will be what you want | With my [UnconditionalPaste plugin](http://www.vim.org/scripts/script.php?script_id=3355), you just need to yank the first `<h1>This is h1</h1>` line, and then paste 5 times with `5gPp`, which pastes with all decimal numbers incremented by 1. This also is repeatable via `.`, so you could have also pasted just once and then repeat that 4 times. There are also mapping variants that decrement, increment just one number, and many more mappings around paste variations. | 1,763 |
82,607 | I get DNS records from a Python program, using [DNS
Python](http://www.dnspython.org/)
I can get various DNSSEC-related records:
```
>>> import dns.resolver
>>> myresolver = dns.resolver.Resolver()
>>> myresolver.use_edns(1, 0, 1400)
>>> print myresolver.query('sources.org', 'DNSKEY')
<dns.resolver.Answer object at 0xb78ed78c>
>>> print myresolver.query('ripe.net', 'NSEC')
<dns.resolver.Answer object at 0x8271c0c>
```
But no RRSIG records:
```
>>> print myresolver.query('sources.org', 'RRSIG')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.5/site-packages/dns/resolver.py", line 664, in query
answer = Answer(qname, rdtype, rdclass, response)
File "/usr/lib/python2.5/site-packages/dns/resolver.py", line 121, in __init__
raise NoAnswer
```
I tried several signed domains like absolight.fr or ripe.net.
Trying with dig, I see that there are indeed RRSIG records.
Checking with tcpdump, I can see that DNS Python sends the correct
query and receives correct replies (here, eight records):
```
16:09:39.342532 IP 192.134.4.69.53381 > 192.134.4.162.53: 22330+ [1au] RRSIG? sources.org. (40)
16:09:39.343229 IP 192.134.4.162.53 > 192.134.4.69.53381: 22330 8/5/6 RRSIG[|domain]
```
DNS Python 1.6.0 -
Python 2.5.2 (r252:60911, Aug 8 2008, 09:22:44)
[GCC 4.3.1] on linux2 | 2008/09/17 | [
"https://Stackoverflow.com/questions/82607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15625/"
] | You probably mean RRSIG ANY (otherwise, the order is wrong, the class needs to be after the type)
```
>>> print myresolver.query('sources.org', 'RRSIG', 'ANY')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.5/site-packages/dns/resolver.py", line 664, in query
answer = Answer(qname, rdtype, rdclass, response)
File "/usr/lib/python2.5/site-packages/dns/resolver.py", line 121, in __init__
raise NoAnswer
dns.resolver.NoAnswer
``` | If you try this, what happens?
```
print myresolver.query('sources.org', 'ANY', 'RRSIG')
``` | 1,772 |
45,368,847 | Very related to this post, but I don't have the privilege to comment there, so I had to make a new post. [Deploy a simple VS2017 Django app to Azure - server error](https://stackoverflow.com/questions/43506691/deploy-a-simple-vs2017-django-app-to-azure-server-error)
I followed Silencer's tutorial there and I am getting this error from \LogFiles\wfastcgi.log:
```
2017-07-28 08:28:57.746719: Activating virtualenv with D:\home\site\wwwroot\env\Scripts\python.exe
2017-07-28 08:28:57.777987: Error occurred while reading WSGI handler:
Traceback (most recent call last):
File "D:\home\python360x64\wfastcgi.py", line 791, in main
env, handler = read_wsgi_handler(response.physical_path)
File "D:\home\python360x64\wfastcgi.py", line 633, in read_wsgi_handler
handler = get_wsgi_handler(os.getenv("WSGI_HANDLER"))
File "D:\home\python360x64\wfastcgi.py", line 605, in get_wsgi_handler
handler = handler()
File ".\ptvs_virtualenv_proxy.py", line 99, in get_virtualenv_handler
execfile(activate_this, dict(__file__=activate_this))
File ".\ptvs_virtualenv_proxy.py", line 27, in execfile
code = f.read()
File "D:\Repos\azure-python-siteextensions\source_packages\python.3.6.0\tools\Lib\encodings\cp1252.py", line 23, in decode
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2: character maps to <undefined>
```
I have installed python360x64 as an extension in azure portal, I am using this <https://github.com/Azure/azure-sdk-for-python/blob/master/examples/AzureResourceViewer/ptvs_virtualenv_proxy.py>
And my web.config:
```
<configuration>
<system.diagnostics>
<trace>
<listeners>
<add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
<filter type="" />
</add>
</listeners>
</trace>
</system.diagnostics>
<appSettings>
<add key="WSGI_ALT_VIRTUALENV_HANDLER" value="django.core.wsgi.get_wsgi_application()" />
<add key="WSGI_ALT_VIRTUALENV_ACTIVATE_THIS" value="D:\home\site\wwwroot\env\Scripts\python.exe" />
<add key="WSGI_HANDLER" value="ptvs_virtualenv_proxy.get_virtualenv_handler()" />
<add key="PYTHONPATH" value="D:\home\site\wwwroot" />
<add key="DJANGO_SETTINGS_MODULE" value="DjangoWebProject.settings" />
<add key="WSGI_LOG" value="D:\home\LogFiles\wfastcgi.log"/>
</appSettings>
<system.web>
<compilation debug="true" targetFramework="4.0" />
</system.web>
<system.webServer>
<modules runAllManagedModulesForAllRequests="true" />
<handlers>
<add name="Python FastCGI" path="handler.fcgi" verb="*" modules="FastCgiModule" scriptProcessor="D:\home\python360x64\python.exe|D:\home\python360x64\wfastcgi.py" resourceType="Unspecified" requireAccess="Script" />
</handlers>
<rewrite>
<rules>
<rule name="Static Files" stopProcessing="true">
<conditions>
<add input="true" pattern="false" />
</conditions>
</rule>
<rule name="Configure Python" stopProcessing="true">
<match url="(.*)" ignoreCase="false" />
<conditions>
<add input="{REQUEST_URI}" pattern="^/static/.*" ignoreCase="true" negate="true" />
</conditions>
<action type="Rewrite" url="handler.fcgi/{R:1}" appendQueryString="true" />
</rule>
</rules>
</rewrite>
</system.webServer>
</configuration>
```
My /env/ python version is python360x64.
Any help appreciated! | 2017/07/28 | [
"https://Stackoverflow.com/questions/45368847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6335827/"
] | I had the same problem and finally it was fixed by changing this line in the web.config:
```
<add key="WSGI_HANDLER" value="ptvs_virtualenv_proxy.get_virtualenv_handler()" />
```
to this:
```
<add key="WSGI_HANDLER" value="myProject.wsgi.application" />
```
myProject is the name of my Django project, so you should put your project's name there. | I get the same errors in Python 3.4. For Python 2.7 there is an activate\_this.py script, which can help in some way.
Just put the activate\_this.py from a Python 2.7 virtual environment in the .\env\Scripts folder and change the path in web.config to point to activate\_this.py.
It seems to work. I am just not sure which version of python I am using now as 2.7 is still present on the system. | 1,779 |
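The traceback in the question actually points at a third detail: `WSGI_ALT_VIRTUALENV_ACTIVATE_THIS` is set to `python.exe`, so the proxy tries to `execfile` a binary, and decoding an .exe as `cp1252` is exactly what raises the `UnicodeDecodeError`. If the proxy route is kept, that key should point at an `activate_this.py` script instead. A hedged sketch of the relevant `appSettings` (paths assume the layout from the question):

```xml
<appSettings>
  <add key="WSGI_ALT_VIRTUALENV_HANDLER" value="django.core.wsgi.get_wsgi_application()" />
  <!-- must be the activate_this.py script, not the interpreter binary -->
  <add key="WSGI_ALT_VIRTUALENV_ACTIVATE_THIS" value="D:\home\site\wwwroot\env\Scripts\activate_this.py" />
  <add key="WSGI_HANDLER" value="ptvs_virtualenv_proxy.get_virtualenv_handler()" />
</appSettings>
```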
31,096,631 | I am currently writing a script that installs my software-under-test then automatically runs my smoke tests using py.test. If a failure occurs during any of these tests, I would like to tell my software to not publish the software to the build servers. This is basically how it goes in pseudo-code:
```
def install_build_and_test():
# some python code installs some_build
install_my_build(some_build)
# then I want to test my build
subprocess.Popen(["py.test", "smoke_test_suite.py"])
# test_failures = ???
# If any failures occurred during testing, do not publish build
if test_failures is True:
print "Build will not publish because there were errors in your logs"
if test_failures is False:
publish_build(some_build)
```
My question here is how do I use pytest failures to tell my install\_build\_and\_test code to not publish some\_build? | 2015/06/28 | [
"https://Stackoverflow.com/questions/31096631",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4621806/"
] | Approach #1
===========
This is, I think, the road you were heading down. Basically, just treat the py.test run as a black-box process and use its exit code to determine whether there were any test failures (i.e. a non-zero exit code).
```
exit_code = subprocess.Popen(["py.test", "smoke_test_suite.py"]).wait()
test_failures = bool(exit_code)
```
Approach #2
===========
Another even cleaner way is to [run py.test in python directly](https://pytest.org/latest/usage.html#calling-pytest-from-python-code).
```
import pytest
exit_code = pytest.main("smoke_test_suite.py")
test_failures = bool(exit_code)
``` | py.test must return a non-zero exit code if the tests fail. The simplest way to handle that would be using [`subprocess.check_call()`](https://docs.python.org/2/library/subprocess.html#subprocess.check_call):
```
try:
subprocess.check_call(["py.test", "smoke_test_suite.py"])
except subprocess.CalledProcessError:
print "Smoke tests have failed, not publishing"
else:
print "Smoke tests have passed, publishing"
# ...
``` | 1,781 |
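Both answers rely on the same convention: py.test exits with 0 when every test passes and non-zero otherwise. The exit-code-to-boolean mapping can be demonstrated with any child process (a throwaway `python -c` stands in for py.test here):

```python
import subprocess
import sys

# A child process that "fails" the way a test run with failures would
code = subprocess.call([sys.executable, "-c", "import sys; sys.exit(1)"])
test_failures = bool(code)
print(test_failures)  # True

# And one that "passes"
ok_code = subprocess.call([sys.executable, "-c", "import sys; sys.exit(0)"])
print(bool(ok_code))  # False
```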
41,724,259 | Happy new year 2017!
Hello everybody!
I have some issues when I try to deploy my docker image in a BlueMix container (where `cf ic run = docker run`)
I can't access the container from the web even though the image is running well inside.
I pinged the bound address:
```
ping 169.46.18.91
PING 169.46.18.91 (169.46.18.91): 56 data bytes
64 bytes from 169.46.18.91: icmp_seq=0 ttl=48 time=124.247 ms
64 bytes from 169.46.18.91: icmp_seq=1 ttl=48 time=122.701 ms
```
The image was working locally, but now that I gave it an IP and hosted it on the Bluemix container service, an issue appears while the image is setting up after the `cf ic -v run` command.
Here are the logs of the command:
```
cf ic -v run -p 3000 --name bootingtest 1ed1b527771b
DEMANDE : [2017-01-18T10:32:31+01:00]
POST /UAALoginServerWAR/oauth/token HTTP/1.1
Host: login.ng.bluemix.net
Accept: application/json
Authorization: [DONNEES PRIVEES MASQUEES]
Connection: close
Content-Type: application/x-www-form-urlencoded
User-Agent: go-cli 6.22.2+a95e24c / darwin
grant_type=refresh_token&refresh_token=eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIwNzJlYTFmNy00NGRlLTRmYmYtODUxOS1lNmU0NmU2MTk1Y2ItciIsInN1YiI6ImZkMWVmM2Q3LTI2OTQtNDQ4Ni1iNjY2LWRmNTVjY2M4MzVmOCIsInNjb3BlIjpbIm9wZW5pZCIsInVhYS51c2VyIiwiY2xvdWRfY29udHJvbGxlci5yZWFkIiwicGFzc3dvcmQud3JpdGUiLCJjbG91ZF9jb250cm9sbGVyLndyaXRlIl0sImlhdCI6MTQ4NDczMTE3MSwiZXhwIjoxNDg3MzIzMTcxLCJjaWQiOiJjZiIsImNsaWVudF9pZCI6ImNmIiwiaXNzIjoiaHR0cHM6Ly91YWEubmcuYmx1ZW1peC5uZXQvb2F1dGgvdG9rZW4iLCJ6aWQiOiJ1YWEiLCJncmFudF90eXBlIjoicGFzc3dvcmQiLCJ1c2VyX25hbWUiOiJlbW1hbnVlbC5zb2xvbUBmci5pYm0uY29tIiwib3JpZ2luIjoidWFhIiwidXNlcl9pZCI6ImZkMWVmM2Q3LTI2OTQtNDQ4Ni1iNjY2LWRmNTVjY2M4MzVmOCIsInJldl9zaWciOiI2MWNkZjM4MiIsImF1ZCI6WyJjZiIsIm9wZW5pZCIsInVhYSIsImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCJdfQ._gxevCN9cCYX3Fw_FUEYvxFsRhHqfOT9KhjZFiHcNao&scope=
REPONSE : [2017-01-18T10:32:32+01:00]
HTTP/1.1 200 OK
Connection: close
Transfer-Encoding: chunked
Cache-Control: no-cache, no-store, max-age=0, must-revalidate,no-store
Content-Security-Policy: default-src 'self' www.ibm.com 'unsafe-inline';
Content-Type: application/json;charset=UTF-8
Date: Wed, 18 Jan 2017 09:32:31 GMT
Expires: 0
Pragma: no-cache,no-cache
Server: Apache-Coyote/1.1
Strict-Transport-Security: max-age=2592000 ; includeSubDomains
X-Backside-Transport: OK OK,OK OK
X-Client-Ip: 91.151.65.169
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Global-Transaction-Id: 1804077409
X-Powered-By: Servlet/3.1
X-Vcap-Request-Id: e683d47d-28aa-43c1-6356-d5818dfd86f1
X-Xss-Protection: 1; mode=block
6f6
{"access_token":"[DONNEES PRIVEES MASQUEES]","token_type":"[DONNEES PRIVEES MASQUEES]","refresh_token":"[DONNEES PRIVEES MASQUEES]","expires_in":1209599,"scope":"cloud_controller.read password.write cloud_controller.write openid uaa.user","jti":"edcd9c51-4521-4f49-bf03-def030e81626"}
0
a9dc3ad4-1a34-4848-9b16-8d1410b79a06
```
So is there a way to set up the connection from a "close" state to a "waiting for incoming connections" state when I'm running or building the image?
Something like an option `cf ic (docker) run -p 3000 --accept_all imageid cmd` (I didn't see it in the --help menu),
or maybe you saw something wrong elsewhere?
I thought about logging into the container with `docker exec -it ID /bin/bash`, but I don't know a bash command to accept all incoming connections... (and moreover I think it's the bash of the VM, not the container itself)
Thank you for your answers and have a good day!
Emmanuel
---
**Other infos**:
Dockerfile
```
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install python2.7
RUN apt-get -y install python-pip
RUN pip install Flask
RUN pip install ibmiotf
RUN pip install requests
RUN pip install flask-socketio
RUN pip install cloudant
ENV PORT=12345
EXPOSE 12345
ADD ./SIARA /opt/SIARA/
WORKDIR /opt/SIARA/
CMD sleep 80 && python testGUI.py
```
Flask server port mapping and running:
```
if __name__ == '__main__':
# Bind to PORT if defined, otherwise default to 5000.
port = int(os.environ.get('PORT', 5000))
socketio.run(app, host='0.0.0.0', port=port)
```
---
**Clues**
I had these warning messages when I tested my image locally; they weren't responsible for any crash, but maybe now that it's on the cloud this kind of error is responsible for the unsuccessful connection attempts from the web?
tl;dr: apparently SSL connections are not available since (apparently) my version of Python (2.7) needs an update (??)
But isn't SSL only supposed to be relevant in the case of https:// connections?
```
cf ic logs -ft guiplay
2017-01-19T09:17:38.870006264Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:334: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
32017-01-19T09:17:38.870062551Z SNIMissingWarning
�2017-01-19T09:17:38.870081733Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:132: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
92017-01-19T09:17:38.870089026Z InsecurePlatformWarning
�2017-01-19T09:17:39.145906849Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:132: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
92017-01-19T09:17:39.145950355Z InsecurePlatformWarning
�2017-01-19T09:17:39.186165706Z WebSocket transport not available. Install eventlet or gevent and gevent-websocket for improved performance.
Y2017-01-19T09:17:39.192990810Z * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
``` | 2017/01/18 | [
"https://Stackoverflow.com/questions/41724259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7365430/"
] | Hmm - I see that you published port 3000 (the -p 3000 parameter in the run command), but the default port would be 5000. In the dockerfile, you switched that to 12345, so that's presumably what you're actually listening on there. Guessing that's the reason you want to open all ports?
Docker only exposes the ports that you tell it to - adding an additional -p 5000 for that default, or -p 12345 according to that Dockerfile, or both should allow you to connect to the app in those cases. Or, if you want to just publish all the ports that are EXPOSEd via the Dockerfile (in this case, that would be 12345), use the -P parameter.
More info: running in the cloud, there is additional security that your container is only accessible via the ports you want published. Within a space (in the private ips from other containers in the same space, or definitely from within the container itself), you should still be able to access those ports. From external, though, only the ports you have published should be accessible. I don't see a way to effectively publish \* (and, that seems like a rather questionable practice from a security standpoint) | Looks like Bluemix single container service is a bit touchy, it was hard to reach from web until I added a "scalable" container which asks for the required HTTP port.
I think the problem was this http port wasn't exposed, but now problem is solved the way I said above. | 1,782 |
28,478,279 | Hi, I have this sample path "\\10.81.67.162", which is a remote server (Windows OS).
I want to be able to transfer files (local) to the remote server using paramiko in python.
I can make it work if the server is in linux.
This is my sample code
```
import paramiko
import base64
username = 'username'
password = 'password'
host = "10.81.67.162"
port = 22
transport = paramiko.Transport((host,port))
transport.connect(username = username, password = password)
stfp = paramiko.SFTPClient.from_transport(transport)
```
But I'm having this error in Windows:
```
Traceback (most recent call last):
File "ssh.py", line 9, in <module>
transport = paramiko.Transport((host,port))
File "build\bdist.win32\egg\paramiko\transport.py", line 289, in __init__
File "C:\Python27\lib\socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 10061] No connection could be made because the target machi
ne actively refused it
```
Python version 2.7
Paramiko version 1.7.5
Thanks! | 2015/02/12 | [
"https://Stackoverflow.com/questions/28478279",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3728094/"
] | If you are asking about method arguments and method-internal variables, the answer is no. These variables are allocated on the stack and are different for each thread.
EDIT:
Issues may happen when passing shared (among several threads) objects that are not thread-safe. For example, your method `foo()` accepts a parameter of type `MyClass`:
```
foo(MyClass param) {}
```
Somewhere inside `foo()` you call `param.bar()`, which operates (directly or indirectly) on a non-thread-safe member variable. In this case you will get a race condition.
(thanks to @lexicore)
The thread synchronization issues are, however, relevant for class fields. | There shouldn't be any problems with your code since you don't manipulate the `parameters` (e.g. `adding/removing` stuff from the maps).
Of course, there is the assumption that those maps are not related (e.g. `sharing same resources`) **AND** that nowhere else in your program will you manipulate those objects while this method is running.
Yes, my explanation may give the readers some headache, but from experience working with `map` and `multi-threaded programming`, there could be tons of other stuff that could go wrong.
So a word of advice try to make your program as `thread-safe` as possible, even though you are very sure that nothing would go wrong. | 1,783 |
54,675,259 | I am answering the Euler project questions in python and I don't know how to multiply a list by itself
I can get a list within the range though | 2019/02/13 | [
"https://Stackoverflow.com/questions/54675259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11051957/"
] | Seems like this should do it:
```
SET Row1 = CASE
WHEN Row1 = 'From1' THEN 'To1'
WHEN Row1 = 'From2' THEN 'To2'
etc
END
``` | Are you simply looking for a `case` expression?
```
UPDATE TOP (@batchsize) Table1
SET Row1 = (CASE table1.Row1
WHEN 'From1' THEN 'To1'
WHEN 'From2' THEN 'To2'
WHEN 'From3' THEN 'To3'
WHEN 'From4' THEN 'To4'
WHEN 'From5' THEN 'To5'
END)
FROM (SELECT TOP (@batchsize) Id
FROM Table1
ORDER BY TimeStamp DESC
) tto
WHERE Table1.Row1 in ('From1', 'From2', 'From3', 'From4', 'From5') AND
Table1.Id = tto.Id;
``` | 1,784 |
69,067,530 | I installed several packages based upon <https://pbpython.com/pdf-reports.htm> to create reports. However, I get the following error messages:
```
Traceback (most recent call last):
File "C:\histdata\test02.py", line 10, in <module>
from weasyprint import HTML
File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\weasyprint\__init__.py", line 322, in <module>
from .css import preprocess_stylesheet # noqa isort:skip
File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\weasyprint\css\__init__.py", line 27, in <module>
from . import computed_values, counters, media_queries
File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\weasyprint\css\computed_values.py", line 16, in <module>
from ..text.ffi import ffi, pango, units_to_double
File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\weasyprint\text\ffi.py", line 380, in <module>
gobject = _dlopen(
File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\weasyprint\text\ffi.py", line 377, in _dlopen
return ffi.dlopen(names[0]) # pragma: no cover
File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\cffi\api.py", line 150, in dlopen
lib, function_cache = _make_ffi_library(self, name, flags)
File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\cffi\api.py", line 832, in _make_ffi_library
backendlib = _load_backend_lib(backend, libname, flags)
File "C:\Users\AquaTrader\AppData\Local\Programs\Python\Python38\lib\site-packages\cffi\api.py", line 827, in _load_backend_lib
raise OSError(msg)
OSError: cannot load library 'gobject-2.0-0': error 0x7e. Additionally, ctypes.util.find_library() did not manage to locate a library called 'gobject-2.0-0'
```
Any suggestions? Thanks in advance. (Please note that there is a similar issue on github which tells the individual to install GTK3.) Is this correct? | 2021/09/05 | [
"https://Stackoverflow.com/questions/69067530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6090578/"
] | The error means that the `gobject-2.0.0` library, which is part of GTK3+, cannot be found. Did you follow the installation instructions (<https://doc.courtbouillon.org/weasyprint/stable/first_steps.html>), which include installation of GTK3+? If no, do that. If yes, then the problem is, that the GTK3+ DLLs are not where Python is looking for them. For this, you need to add the directory containing the DLLs (e.g. `C:\Program Files\GTK3-Runtime Win64\bin` on Windows) to your PATH environment variable. That directory contains the relevant `libgobject-2.0-0.dll` library.
For Python 3.8+ and weasyprint 54+ you can manually set the path to your GTK3+ library with the environment variable `WEASYPRINT_DLL_DIRECTORIES` ([documentation](https://doc.courtbouillon.org/weasyprint/v54.0b1/first_steps.html#missing-library)). | As @mad said, you need the GTK3 library to have the `libgobject-2.0-0` DLL. In GitHub Actions, for example, you might be interested in using the [tschoonj/GTK-for-Windows](https://github.com/tschoonj/GTK-for-Windows-Runtime-Environment-Installer) repository:
```sh
# Download GTK3 resources
git clone -b 2022-01-04 https://github.com/tschoonj/GTK-for-Windows-Runtime-Environment-Installer GTK
cd GTK
# Add it to classic Windows install PATH
$Env:PATH += ";$PWD\gtk-nsis-pack\bin"
# Add it to GitHub Actions PATH
# echo "$PWD\gtk-nsis-pack\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
``` | 1,785 |
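When that environment variable is set from Python rather than the shell, it has to be in the environment before `weasyprint` is imported (a sketch; the GTK runtime path shown is just an example install location):

```python
import os

# Must happen before `import weasyprint`; the directory below is an
# illustrative GTK3 runtime location, not a guaranteed one.
os.environ["WEASYPRINT_DLL_DIRECTORIES"] = r"C:\Program Files\GTK3-Runtime Win64\bin"
print(os.environ["WEASYPRINT_DLL_DIRECTORIES"])
```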
16,819,938 | I have a class which would be a container for a number of variables of different types. The collection is finite and not very large so I didn't use a dictionary. Is there a way to automate, or shorten the creation of variables based on whether or not they are requested (specified as True/False) in the constructor?
Here is what I have for example:
```
class test:
def __init__(self,a=False,b=False,c=False):
if a: self.a = {}
if b: self.b = 34
if c: self.c = "generic string"
```
For any of a,b,c that are true in the constructor they will be created in the object.
I have a collection of standard variables (a,b,c,d..) that some objects will have and some objects won't. The number of combinations is too large to create separate classes, but the number of variables isn't enough to have a dictionary for them in each class.
Is there any way in python to do something like this:
```
class test:
def __init__(self,*args):
default_values = {a:{},b:34,c:"generic string"}
for item in args:
if item: self.arg = default_values[arg]
```
Maybe there is a whole other way to do this?
EDIT:
To clarify, this is a class which represents different types of bounding boxes on a 2D surface. Depending on the function of the box, it can have any of: frame coordinates, internal cross coordinates, id, population statistics (attached to that box), and some other cached values for easy calculation.
I don't want to have each object as a dictionary because there are methods attached to it which allow it to export and modify its internal data and interact with other objects of the same type (similar to how strings interact with + - .join, etc.). I also don't want to have a dictionary inside each object because the call to that variable is inelegant:
```
print foo.info["a"]
```
versus
```
print foo.a
```
Thanks to ballsdotball I've come up with a solution:
```
class test:
def __init__(self, a=False, b=False,c =False):
default_values = {"a":{},"b":34,"c":"generic string"}
for k, v in default_values.iteritems():
if eval(k): setattr(self,k,v)
``` | 2013/05/29 | [
"https://Stackoverflow.com/questions/16819938",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2433420/"
] | Maybe something like:
```
def __init__(self,*args,**kwargs):
    default_values = {"a":{},"b":34,"c":"generic string"}
    for k,v in kwargs.iteritems():
        try:
            if v is not False:
setattr(self,k,default_values[k])
except Exception, e:
print "Argument has no default value.",e
```
But to be honest I would just put the default values in with the init arguments instead of having to test for them like that.
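A variant of the same idea that avoids the `eval`/exception dance entirely, using a class-level defaults table (a sketch; the names, and the callable trick used for the mutable `{}` default, are illustrative):

```python
class Test(object):
    DEFAULTS = {"a": dict, "b": 34, "c": "generic string"}

    def __init__(self, **flags):
        for name, default in self.DEFAULTS.items():
            if flags.get(name):
                # Call factories (like dict) so each instance gets a fresh mutable value
                value = default() if callable(default) else default
                setattr(self, name, value)

t = Test(a=True, c=True)
print(t.a, t.c, hasattr(t, "b"))
```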
\*Edited a couple times for syntax. | You can subclass `dict` (if you aren't using positional arguments):
```
class Test(dict):
def your_method(self):
return self['foo'] * 4
```
You can also override `__getattr__` and `__setattr__` if the `self['foo']` syntax bothers you:
```
class Test(dict):
    def __getattr__(self, key):
        return self[key]
    def __setattr__(self, key, value):
        self[key] = value
def your_method(self):
return self.foo * 4
``` | 1,786 |
50,271,354 | Consider the following python snippet (I am running Python 3)
```
name = "Sammy"
def greet():
name = 'johny'
def hello():
print('hello ' + name) # gets 'name' from the enclosing 'greet'
hello()
greet()
```
This produces the output `hello johny` as expected
However,
```
x = 50
def func1():
x = 20
def func2():
print("x is ", x) # Generates error here
x = 2
print("Changed the local x to ",x)
func2()
func1()
print("x is still ",x)
```
generates an `UnboundLocalError: local variable 'x' referenced before assignment`.
Why does the first snippet work, whereas the second doesn't? | 2018/05/10 | [
"https://Stackoverflow.com/questions/50271354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5157905/"
] | The error is actually caused (indirectly) by the following line, i.e. `x = 2`. Try commenting out that line and you'll see that the function works.
The fact that there is an assignment to a variable named `x` makes `x` local to the function at *compile time*, however, at *execution time* the first reference to `x` fails because, at the time that the `print()` statement is executed, it does not exist yet *in the current scope*.
Correct it by using [`nonlocal`](https://docs.python.org/3/reference/simple_stmts.html#the-nonlocal-statement) in `func2()`:
```
def func2():
nonlocal x
print("x is ", x)
x = 2
print("Changed the local x to ",x)
```
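The fix can be checked directly: with `nonlocal`, the inner function first reads the enclosing `x`, then rebinds it, and the enclosing scope sees the change (a self-contained sketch mirroring the question's shape):

```python
def func1():
    seen = []
    x = 20
    def func2():
        nonlocal x
        seen.append(x)   # reading before the assignment is now legal
        x = 2
        seen.append(x)
    func2()
    seen.append(x)       # the enclosing x really was rebound by func2
    return seen

print(func1())
```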
The reason that the first function (`greet()`) works is because it's OK to read variables in an outer scope, however, you can not assign to them unless you specify that the variable exists in an outer scope (with `nonlocal` or `global`).
You can assign to a variable of the same name, however, that would create a new local variable, not the variable in the outer scope. So this also works:
```
def func1():
x = 20
def func2():
x = 2
print('Inner x is', x)
func2()
print('Outer x is', x)
```
Here `x` is assigned to before being referenced. This creates a new variable named `x` in the scope of function `func2()` which shadows the `x` defined in `func1()`. | Within a given scope, *a name can refer to only one variable*. Your variable cannot be global at some point and local later on, or vice versa.
For that reason, if `x` is ever to be declared in a scope, Python will assume that you are referring to the local variable everywhere in that scope, unless you explicitly state otherwise.
This is why your first function, `greet`, works. The variable `name` unambiguously comes from the closure. However, in `func2` the variable `x` is assigned in the scope, and thus you cannot reference the `x` from the closure unless you explicitly state otherwise with [`nonlocal`](https://docs.python.org/3/reference/simple_stmts.html#the-nonlocal-statement).
The following errors might enlighten us on this.
A variable cannot become global after use
-----------------------------------------
```
def func1():
x = 20
def func2():
print("x is ", x)
global x
print("Changed the local x to ",x)
func2()
```
This raises a `SyntaxError: name 'x' is used prior to global declaration`. This means that the closure's `x` cannot be used and then the global one.
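That the rejection happens at compile time, before the function is ever called, can be verified by compiling the source by hand (a sketch):

```python
bad_src = (
    "def func2():\n"
    "    print(x)\n"
    "    global x\n"
)

error = None
try:
    compile(bad_src, "<demo>", "exec")  # never executed, only compiled
except SyntaxError as exc:
    error = exc

print(error)
```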
A global variable cannot become local
-------------------------------------
Here is another case using `global` at the top of `func2`.
```
def func1():
x = 20
def func2():
global x
print("x is ", x)
x = 2
print("Changed the local x to ",x)
func2()
```
This code was executed without error, but notice that the assignment to `x` updated the global variable; it did not make `x` become local again.
55,717,203 | I wrote this small function:
```
def sets():
set1 = random.sample(range(1, 50), 10)
set2 = random.sample(range(1, 50), 10)
return(set1,set2)
sets()
```
The output of this function looks like this:
```
([24, 29, 43, 42, 45, 28, 26, 3, 8, 21],
[22, 37, 38, 44, 25, 42, 29, 7, 35, 9])
```
I want to plot this in a two way Venn diagram. I know how to plot the NUMBERS of overlap between the sets using the matplotlib, i.e. using [this](https://python-graph-gallery.com/170-basic-venn-diagram-with-2-groups/) exact code; however I want to plot the ACTUAL VALUES in the plot instead.
i.e. the overlap between the two should read 29, 42, as these are the two items in common, and not the number 2, which would represent the count of numbers that overlap.
Would someone know how to do this? | 2019/04/16 | [
"https://Stackoverflow.com/questions/55717203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8407951/"
] | A possible solution is to output the labels instead of the set size. With the [matplotlib\_venn](https://pypi.org/project/matplotlib-venn/) package, you can do something like this:
```
import matplotlib.pyplot as plt
from matplotlib_venn import venn2
import random
set1 = set(random.sample(range(1, 50), 10))
set2 = set(random.sample(range(1, 50), 10))
venn = venn2([set1,set2], ('Group A', 'Group B'))
venn.get_label_by_id('100').set_text('\n'.join(map(str,set1-set2)))
venn.get_label_by_id('110').set_text('\n'.join(map(str,set1&set2)))
venn.get_label_by_id('010').set_text('\n'.join(map(str,set2-set1)))
plt.axis('on')
plt.show()
```
We're accessing the labels by a binary ID, which denotes the sets.
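The set arithmetic behind those three labels can be checked on the question's own sample output (a sketch using the listed values):

```python
set1 = {24, 29, 43, 42, 45, 28, 26, 3, 8, 21}
set2 = {22, 37, 38, 44, 25, 42, 29, 7, 35, 9}

only_a = set1 - set2   # region id '100'
shared = set1 & set2   # region id '110'
only_b = set2 - set1   # region id '010'

print(sorted(shared))
```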
[](https://i.stack.imgur.com/PopoC.png) | The default behaviour of the venn2 package is to print the size of the overlap of the two sets. Here's the line of the source code where those sizes are added to the Venn diagram plot: <https://github.com/konstantint/matplotlib-venn/blob/master/matplotlib_venn/_venn2.py#L247>
To make this print the overlapping numbers you'll have to change the **compute\_venn2\_subsets(a,b)** function in this file. Replace the returned argument of **compute\_venn2\_subsets(a,b)** with:
```
([val for val in a if val not in b], [val for val in a if val in b], [val for val in b if val not in a])
```
instead of the set sizes that it's returning right now. If you only want to print the overlapping values, then make **compute\_venn2\_subsets(a,b)** return
```
("", [val for val in a if val in b], "")
``` | 1,788 |
63,416,534 | I've recently tried to create a simple bot in Discord with Python code.
I'm just testing the first feature: DMing a user when he joins the server.
Here is my code:
```
import os
import discord
from dotenv import load_dotenv
load_dotenv() #load .env files
TOKEN = os.getenv('DISCORD_TOKEN')
GUILD = os.getenv('DISCORD_GUILD')
client = discord.Client()
@client.event
async def on_ready():
guild = discord.utils.get(client.guilds, name=GUILD)
print(
f'{client.user} has connected to the following guild:\n'
f'{guild.name}(id: {guild.id})'
) #debug
members = '\n - '.join([member.name for member in guild.members])
print(f'Guild Members:\n - {members}') #debug
@client.event
async def on_member_join(member):
await member.creat_dm()
await member.dm_channel.send(
f'Hi {member.name}, welcome to my Discord Server!'
)
client.run(TOKEN)
```
```
Ignoring exception in on_member_join
Traceback (most recent call last):
File "/home/andre/.local/lib/python3.8/site-packages/discord/client.py", line 312, in _run_event
await coro(*args, **kwargs)
File "/home/andre/repos/github/discord_project/bot.py", line 30, in on_member_join
await member.creat_dm()
AttributeError: 'Member' object has no attribute 'creat_dm'
```
Can anyone help me with this annoying bug?
I've seen articles that show `member.create_dm()` being used | 2020/08/14 | [
"https://Stackoverflow.com/questions/63416534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13461081/"
] | So you need to edit your code slightly.
Store the original image values in temporary variables for a while:
```
originalBlue = image[i][j].rgbtBlue;
originalRed = image[i][j].rgbtRed;
originalGreen = image[i][j].rgbtGreen;
```
The result of each of these formulas may not be an integer, so use floats and round them to the nearest integer:
```
sepiaRed = round(.393 * originalRed + .769 * originalGreen + .189 * originalBlue);
sepiaGreen = round(.349 * originalRed + .686 * originalGreen + .168 * originalBlue);
sepiaBlue = round(.272 * originalRed + .534 * originalGreen + .131 * originalBlue);
if (sepiaRed > 255)
{
sepiaRed = 255;
}
if (sepiaGreen > 255)
{
sepiaGreen = 255;
}
if (sepiaBlue > 255)
{
sepiaBlue = 255;
}
```
Now store the values back into the original pixel:
```
image[i][j].rgbtBlue = sepiaBlue;
image[i][j].rgbtRed = sepiaRed;
image[i][j].rgbtGreen = sepiaGreen;
```
**Declare all variables outside the for loop:**
```
float sepiaRed;
float sepiaBlue;
float sepiaGreen;
int originalRed;
int originalBlue;
int originalGreen;
```
I hope this helps. | You need to use "saturation math".
For near white colors, your intermediate values (e.g. `sepiared`) can exceed 255.
255 (0xFF) is the maximum value that can fit in an `unsigned char`
For example, if `sepiared` were 256 (0x100), when it gets put into `rgbtRed`, only the rightmost 8 bits will be retained and the value will be *truncated* to 0. So, instead of a very bright value [near white], you'll end up with a very dark value [near black].
To fix this, add:
```
if (sepiared > 255)
sepiared = 255;
```
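The clamping rule itself is language-agnostic; the same saturation behavior, sketched in a few lines of Python for comparison:

```python
def clamp(value, lo=0, hi=255):
    """Saturate value into [lo, hi] instead of letting it wrap around."""
    return max(lo, min(hi, value))

print(clamp(300), clamp(-5), clamp(128))
```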
---
Also, note with the ordering of your `for` loops, it is very cache inefficient.
And, it's wasteful [and can be slow] to use `image[i][j].whatever` everywhere. Better to use a pointer to the current pixel.
Anyway, here's an updated version of your code with these changes:
```
void
sepia(int height, int width, RGBTRIPLE image[height][width])
{
RGBTRIPLE *pix;
for (int i = 0; i < height; i++) {
pix = &image[i][0];
for (int j = 0; j < width; j++, pix++) {
int sepiared = pix->rgbtRed * .393 +
pix->rgbtGreen * .769 +
pix->rgbtBlue * .189;
int sepiagreen = pix->rgbtRed * .349 +
pix->rgbtGreen * .686 +
pix->rgbtBlue * .168;
int sepiablue = pix->rgbtRed * .272 +
pix->rgbtGreen * .534 +
pix->rgbtBlue * .131;
if (sepiared > 255)
sepiared = 255;
if (sepiagreen > 255)
sepiagreen = 255;
if (sepiablue > 255)
sepiablue = 255;
pix->rgbtRed = sepiared;
pix->rgbtGreen = sepiagreen;
pix->rgbtBlue = sepiablue;
}
}
}
```
---
Also, note that it can be a bit slow to use floating point math on pixel images. In this case, it's faster/better to use scaled integer math.
Here's a version that does that:
```
void
sepia(int height, int width, RGBTRIPLE image[height][width])
{
RGBTRIPLE *pix;
for (int i = 0; i < height; i++) {
pix = &image[i][0];
for (int j = 0; j < width; j++, pix++) {
int sepiared = pix->rgbtRed * 393 +
pix->rgbtGreen * 769 +
pix->rgbtBlue * 189;
int sepiagreen = pix->rgbtRed * 349 +
pix->rgbtGreen * 686 +
pix->rgbtBlue * 168;
int sepiablue = pix->rgbtRed * 272 +
pix->rgbtGreen * 534 +
pix->rgbtBlue * 131;
sepiared /= 1000;
sepiagreen /= 1000;
sepiablue /= 1000;
if (sepiared > 255)
sepiared = 255;
if (sepiagreen > 255)
sepiagreen = 255;
if (sepiablue > 255)
sepiablue = 255;
pix->rgbtRed = sepiared;
pix->rgbtGreen = sepiagreen;
pix->rgbtBlue = sepiablue;
}
}
}
``` | 1,789 |
18,296,394 | I've got a list of instances of a particular class Foo, that has a field bar:
```
foo[1..n].bar
```
I'd like to "convert" this to just a list of bar items, so that I have `bar[1..n]`
Sorry for the 1..n notation - I'm just trying to indicate I have an arbitrarily long list.
Be gentle - I'm new to python. | 2013/08/18 | [
"https://Stackoverflow.com/questions/18296394",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8478/"
] | Use a list comprehension
```
bar = [ i.bar for i in foo ]
```
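Applied to the question's setup, with a made-up `Foo` class standing in for the real objects (a sketch):

```python
class Foo(object):
    def __init__(self, bar):
        self.bar = bar

foo = [Foo(10), Foo(20), Foo(30)]   # stand-in for the question's list of instances
bar = [f.bar for f in foo]
print(bar)
```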
Also, list indices in python start from 0, so a list of N elements would have items from `0` to `n - 1`. | >
> Be gentle - I'm new to python.
>
>
>
Okay.
>
> I've got a list of items:
>
>
>
> ```
> foo[1..n].bar
>
> ```
>
>
How in the heck is that a list? A list looks like this:
```
[1, 2, 3]
```
How does `foo[1..n].bar` fit that format? Like this?
```
foo[1, 2, 3].bar
```
That's nonsensical.
>
> I'd like to "convert" this to just a list of bar items, so that I have bar[1..n]
>
>
>
Once again your converted list has to have the format:
```
[obj1, obj2, obj3]
```
Now the question is what is the conversion factor that transforms your starting list:
```
[obj1, obj2, obj3 ]
```
into your transformed list:
```
[obj4, obj5, obj6]
```
????
>
> can you please put your actual list – Anshuman Dwibhashi no I
> actually can't. They are not trivial objects
>
>
>
Well, then post an example with trivial objects.
I sense that you know something about programming in some language, but that this is your first foray into a new language. The bottom line is: you have to learn the basic syntax in any language before you can speak intelligently about it. Get a beginning python book and start reading. | 1,794 |
41,778,173 | I have been using turtle package in python idle. Now I have switched to using Jupyter notebook.
How can I make turtle draw inline instead of opening a separate graphics window? I am totally clueless about this. Any pointers and advice will be highly appreciated. | 2017/01/21 | [
"https://Stackoverflow.com/questions/41778173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3668042/"
] | I found the following library that has a Turtle implementation working in Jupyter notebooks: <https://github.com/takluyver/mobilechelonian> | It seems you can get turtle module to work, if you run the Jupyter Notebook cell containing the code twice. Not sure why it works, but it does! | 1,795 |
1,584,864 | Say that we have a multilayered iterable with some strings at the "final" level, yes strings are iterable, but I think that you get my meaning:
```
['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch')),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
...
]
```
Is there any convenient way that I could implement a search that would give me the indices of the matching strings? I would like something that would act something like this (where the above list is `data`):
```
>>> grep('javascript', data)
```
and it would return [ (2,1,1), (2,2,0), (2,2,1), (2,2,2) ] perhaps. Maybe I'm missing a comparable solution that returns nothing of the sort but can help me find some strings within a multi-layered list of iterables of iterables of .... strings.
I wrote a little bit but it was seeming juvenile and inelegant so I thought I would ask here. I guess that I could just keep nesting the exception the way I started here to the number of levels that the function would then support, but I was hoping to get something neat, abstract, pythonic.
```
import re
def rgrep(s, data):
''' given a iterable of strings or an iterable of iterables of strings,
returns the index/indices of strings that contain the search string.
Args::
s - the string that you are searching for
data - the iterable of strings or iterable of iterables of strings
'''
results = []
expr = re.compile(s)
for item in data:
try:
match = expr.search(item)
if match != None:
results.append( data.index(item) )
except TypeError:
for t in item:
try:
m = expr.search(t)
if m != None:
                        results.append( (data.index(item), item.index(t)) )
except TypeError:
''' you can only go 2 deep! '''
pass
return results
``` | 2009/10/18 | [
"https://Stackoverflow.com/questions/1584864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/177293/"
] | Here is a grep that uses recursion to search the data structure.
Note that good data structures lead the way to elegant solutions.
Bad data structures make you bend over backwards to accommodate.
This feels to me like one of those cases where a bad data structure is obstructing
rather than helping you.
Having a simple data structure with a more uniform structure
(instead of using this grep) might be worth investigating.
```
#!/usr/bin/env python
data=['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch',['find','java deep','down'])),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
]
def grep(astr,data,prefix=[]):
result=[]
for idx,elt in enumerate(data):
if isinstance(elt,basestring):
if astr in elt:
result.append(tuple(prefix+[idx]))
else:
result.extend(grep(astr,elt,prefix+[idx]))
return result
def pick(data,idx):
if idx:
return pick(data[idx[0]],idx[1:])
else:
return data
idxs=grep('java',data)
print(idxs)
for idx in idxs:
print('data[%s] = %s'%(idx,pick(data,idx)))
``` | To get the position use `enumerate()`
```
>>> data = [('foo', 'bar', 'frrr', 'baz'), ('foo/bar', 'baz/foo')]
>>>
>>> for l1, v1 in enumerate(data):
... for l2, v2 in enumerate(v1):
... if 'f' in v2:
... print l1, l2, v2
...
0 0 foo
1 0 foo/bar
1 1 baz/foo
```
In this example I am using a simple match (`'foo' in bar`), but you would probably use a regex for the job.
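If the nesting depth is arbitrary, the same `enumerate()` idea can be made recursive. Here is a rough sketch in modern Python (the function name is just illustrative):

```python
import re

def rgrep(pattern, data, prefix=()):
    """Recursively walk nested iterables, yielding an index tuple for
    every string that matches ``pattern``, at any nesting depth."""
    expr = re.compile(pattern) if isinstance(pattern, str) else pattern
    for i, item in enumerate(data):
        if isinstance(item, str):
            if expr.search(item):
                yield prefix + (i,)
        else:
            yield from rgrep(expr, item, prefix + (i,))

data = ['something',
        ('JavaScript+Mako',
         ('js+mako', 'javascript+mako'))]
print(list(rgrep('javascript', data)))  # [(1, 1, 1)]
```

Because it yields tuples lazily, you can also stop at the first hit with `next()` instead of building the whole list.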
Obviously, `enumerate()` can support more than 2 levels of nesting, as in your edited post. | 1,800 |
36,831,274 | I want to connect my Django web app database to my postgresql database I have on my Pythonanywhere paid account. Before coding anything, I just wanted to get everything talking to each other. This is the settings.py DATABASE section from my django app. I'm running Python 3.5 and Django 1.9.
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': '[myDatabaseName]',
'USER': '[myUsername]',
'PASSWORD': '[myPassword]',
'HOST': 'xxxxxxxx-xxx.postgres.pythonanywhere-services.com',
'PORT': '10130',
}
}
```
The HOST and PORT were both provided by the pythonanywhere.com site, under the DATABASE tab for Postgres. I did create my database, username, and password on the postgres console.
I then created a checkedb.py script I found that would check if the connection with the postgres database works.
```
from django.db import connections
from django.db.utils import OperationalError
db_conn = connections['default']
try:
c = db_conn.cursor()
except OperationalError:
connected = False
else:
connected = True
```
This is the error I receive after running this code.
```
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/django/conf/__init__.py", line 38, in _setup
settings_module = os.environ[ENVIRONMENT_VARIABLE]
File "/usr/lib/python3.4/os.py", line 633, in __getitem__
raise KeyError(key) from None
KeyError: 'DJANGO_SETTINGS_MODULE'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/giraldez/golf/golf/dbcheck.py", line 2, in <module>
db_conn = connections['default']
File "/usr/local/lib/python3.4/dist-packages/django/db/utils.py", line 196, in __getitem__
self.ensure_defaults(alias)
File "/usr/local/lib/python3.4/dist-packages/django/db/utils.py", line 170, in ensure_defaults
conn = self.databases[alias]
File "/usr/local/lib/python3.4/dist-packages/django/utils/functional.py", line 49, in __get__
res = instance.__dict__[self.func.__name__] = self.func(instance)
File "/usr/local/lib/python3.4/dist-packages/django/db/utils.py", line 153, in databases
self._databases = settings.DATABASES
File "/usr/local/lib/python3.4/dist-packages/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/usr/local/lib/python3.4/dist-packages/django/conf/__init__.py", line 47, in _setup
% (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting DATABASES, but settings are not configured. You must either define the environment variable D
JANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
```
The directory for my project looks like this
```
golf/
---golf/
------__init.py__
------dbcheck.py
------settings.py
------urls.py
------wsgi.py
---media/
---static/
---manage.py
``` | 2016/04/25 | [
"https://Stackoverflow.com/questions/36831274",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3314523/"
] | You need to setup django first if you are using it as a standalone script. Would have been easier to try with `./manage.py shell`. but if you want to test with a standalone script, here goes:
```
import sys,os
if __name__ == '__main__': # pragma nocover
# Setup environ
sys.path.append(os.getcwd())
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "main.settings_dev")
import django
django.setup()
    from django.db import connections
    from django.db.utils import OperationalError
    db_conn = connections['default']
try:
c = db_conn.cursor()
except OperationalError:
connected = False
else:
connected = True
``` | The error you are getting is because you need to properly initialize the django environment before you can write custom scripts against it.
The easiest way to solve this is to run a Python shell that already has the Django configuration loaded; you can do this with `python manage.py shell`.
Once this shell has loaded, enter your code and it should tell you if the connection is valid or not. | 1,803 |
56,652,022 | I am working with an Altera DE1-SoC board where I am reading data from a sensor using a C program. The data is being read continually, in a while loop and written to a text file. I want to read this data using a python program and display the data.
The problem is that I am not sure how to avoid collision during the read/write from the file as these need to happen simultaneously. I was thinking of creating a mutex, but I am not sure how to implement it so the two different program languages can work with it.
Is there a simple way to do this? Thanks. | 2019/06/18 | [
"https://Stackoverflow.com/questions/56652022",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6649616/"
] | Loop the *TableDefs* collection.
For each *TableDef*, loop the *Fields* collection.
For each *Field*, check the property *Type* (= 101, as I recall) or *IsComplex* = True.
IsComplex is also True for *Multi-Value* fields, but if you don't use these, you should be fine. | Here is an example in VBA. It prints messages about tables with an Attachment-type field to the Immediate window (open the VBA editor with `Alt` + `F11`, then press `Ctrl` + `G`).
```vb
Public Sub subTest()
Dim db As DAO.Database
Dim td As DAO.TableDef
Dim fld As DAO.Field
Dim boolIsAttachmentFieldPresent As Boolean
Set db = CurrentDb()
For Each td In db.TableDefs
If Left(td.Name, 4) <> "MSys" Then
'Debug.Print "Contents of: " & td.Name
boolIsAttachmentFieldPresent = False
For Each fld In td.Fields
'Debug.Print fld.Name & " of type " & fld.Type
If fld.Type = 101 Then
boolIsAttachmentFieldPresent = True
End If
Next fld
If boolIsAttachmentFieldPresent Then
Debug.Print "Table " & td.Name & " contains attachment field"
End If
End If
Next td
End Sub
```
All as @Gustav described. | 1,804 |
60,494,341 | I have a large csv data file; a sample of the data is below.
```
name year value
China 1997 481970
Japan 1997 8491480
Germany 1997 4678022
China 1998 589759
Japan 1998 7912546
Germany 1998 5426582
```
After several attempts with no success, I would like to interpolate my data to monthly values, and then change the format of the data to match the example below:
```
date China Japan Germany
1997-01-31 40164.17 707623.33 389835.17
1997-02-28 80328.33 1415246.67 779670.33
1997-03-31 120492.50 2122870 1169505.50
1997-04-30 160656.67 2830493.33 1559340.67
1997-05-31 200820.83 3538116.67 1949175.83
. . . .
. . . .
. . . .
1997-12-31 481970 8491480 4678022
1998-01-31 49146.58 659378.83 452215.17
1998-02-28 98293.17 1318757.67 904430.33
1998-03-30 147439.75 1978136.5 1356645.5
1998-04-30 196586.33 2637515.33 1808860.67
1998-05-31 245732.97 3296894.17 2261075.83
. . . .
. . . .
. . . .
1998-12-31 589759 7912546 5426582
```
Someone suggested [How to pivot a dataframe](https://stackoverflow.com/questions/47152691/how-to-pivot-a-dataframe), though it proved hard for me to reach the desired results. Maybe I'm not that good at Python.
I would like to do it in R.
Thoughts? | 2020/03/02 | [
"https://Stackoverflow.com/questions/60494341",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12807398/"
] | Assuming the input shown reproducibly in the Note at the end convert it to a zoo object `z` which, by specifying `split=`, will also convert it to wide form at the same time. Then expand it using `merge` and use linear interpolation with `na.approx`. Alternately replace `na.approx` with `na.spline`. Finally convert the time index to `Date` class. The result is a zoo object `m`. If you need a data frame use `fortify.zoo(m)`.
```
library(zoo)
z <- read.zoo(DF, split = 1, index = 2, FUN = as.numeric)
m <- na.approx(merge(z, zoo(, c(kronecker(time(z), 0:11/12, "+")))))
time(m) <- as.Date(as.yearmon(time(m)), frac = 1)
m
```
giving:
```
China Germany Japan
1997-01-31 481970.0 4678022 8491480
1997-02-28 490952.4 4740402 8443236
1997-03-31 499934.8 4802782 8394991
1997-04-30 508917.2 4865162 8346747
1997-05-31 517899.7 4927542 8298502
1997-06-30 526882.1 4989922 8250257
1997-07-31 535864.5 5052302 8202013
1997-08-31 544846.9 5114682 8153769
1997-09-30 553829.3 5177062 8105524
1997-10-31 562811.8 5239442 8057280
1997-11-30 571794.2 5301822 8009035
1997-12-31 580776.6 5364202 7960790
1998-01-31 589759.0 5426582 7912546
```
Note
----
```
Lines <- "name year value
China 1997 481970
Japan 1997 8491480
Germany 1997 4678022
China 1998 589759
Japan 1998 7912546
Germany 1998 5426582"
DF <- read.table(text = Lines, header = TRUE, as.is = TRUE)
``` | An option using `data.table`:
```
DT[, date := as.IDate(paste0(year, "-12-31"))][,
c("y0", "y1") := .(value, shift(value, -1L, fill=value[.N])), name]
longDT <- DT[, {
eom <- seq(min(date)+1L, max(date)+1L, by="1 month") - 1L
v <- unlist(mapply(function(a, d) a + (0:11) * d, y0, (y1 - y0)/12, SIMPLIFY=FALSE))
.(eom, v=v[seq_along(eom)])
}, name]
dcast(longDT, eom ~ name, sum, value.var="v")
```
output:
```
eom China Germany Japan
1: 1996-12-31 40164.17 389835.2 707623.3
2: 1997-01-31 76981.32 747184.1 1356278.1
3: 1997-02-28 113798.48 1104533.0 2004932.8
4: 1997-03-31 150615.63 1461881.9 2653587.5
5: 1997-04-30 187432.78 1819230.8 3302242.2
6: 1997-05-31 224249.93 2176579.7 3950896.9
7: 1997-06-30 261067.09 2533928.6 4599551.7
8: 1997-07-31 297884.24 2891277.5 5248206.4
9: 1997-08-31 334701.39 3248626.4 5896861.1
10: 1997-09-30 371518.54 3605975.3 6545515.8
11: 1997-10-31 408335.70 3963324.2 7194170.6
12: 1997-11-30 445152.85 4320673.1 7842825.3
13: 1997-12-31 481970.00 4678022.0 8491480.0
14: 1998-01-31 490952.42 4740402.0 8443235.5
15: 1998-02-28 499934.83 4802782.0 8394991.0
16: 1998-03-31 508917.25 4865162.0 8346746.5
17: 1998-04-30 517899.67 4927542.0 8298502.0
18: 1998-05-31 526882.08 4989922.0 8250257.5
19: 1998-06-30 535864.50 5052302.0 8202013.0
20: 1998-07-31 544846.92 5114682.0 8153768.5
21: 1998-08-31 553829.33 5177062.0 8105524.0
22: 1998-09-30 562811.75 5239442.0 8057279.5
23: 1998-10-31 571794.17 5301822.0 8009035.0
24: 1998-11-30 580776.58 5364202.0 7960790.5
25: 1998-12-31 589759.00 5426582.0 7912546.0
eom China Germany Japan
```
data:
```
library(data.table)
DT <- fread("name year value
China 1996 40164.17
Japan 1996 707623.33
Germany 1996 389835.17
China 1997 481970
Japan 1997 8491480
Germany 1997 4678022
China 1998 589759
Japan 1998 7912546
Germany 1998 5426582")
```
I have taken the liberty to add in the data for 1996. | 1,806 |
49,687,860 | After upgrading PyCharm to 2018.1 and Python to 3.6.5, PyCharm reports "unresolved reference 'join'". The previous version of PyCharm didn't show any warning for the line below:
```
from os.path import join, expanduser
```
May I know why?
(I used python 3.6.? before)
I tried almost everything I could find, such as deleting and recreating the interpreter, invalidating caches and restarting, and deleting and recreating the virtualenv... how do I fix this?
(I can run my program without any error.) | 2018/04/06 | [
"https://Stackoverflow.com/questions/49687860",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8335451/"
] | Sadly, it seems that PyCharm will try to evaluate the path to an existing file/folder, which in some cases will not exist and thus create this warning.
It's not very useful when you are building a path for something that's supposed to be created, because obviously it will not exist yet, but PyCharm will still complain.
You could try clicking on File > Invalidate Caches > Invalidate and Restart. This worked for me.
[edit] It will come back tho, not much else to do. | Check that PyCharm is using the correct interpreter. | 1,807 |
63,404,899 | I'm trying to write a highly modular Python logging system (using the logging module) and include information from the trace module in the log message.
For example, I want to be able to write a line of code like:
```
my_logger.log_message(MyLogFilter, "this is a message")
```
and have it include the trace of where the "log\_message" call was made, instead of the actual logger call itself.
I almost have the following code working except for the fact that the trace information is from the `logging.debug()` call rather than the `my_logger.log_message()` one.
```
class MyLogFilter(logging.Filter):
def __init__(self):
self.extra = {"error_code": 999}
self.level = "debug"
def filter(self, record):
for key in self.extra.keys():
setattr(record, key, self.extra[key])
class myLogger(object):
def __init__(self):
fid = logging.FileHandler("test.log")
        formatter = logging.Formatter('%(pathname)s:%(lineno)d, %(error_code)d, %(message)s')
fid.setFormatter(formatter)
self.my_logger = logging.getLogger(name="test")
self.my_logger.setLevel(logging.DEBUG)
self.my_logger.addHandler(fid)
def log_message(self, lfilter, message):
xfilter = lfilter()
self.my_logger.addFilter(xfilter)
        log_funct = getattr(self.my_logger, xfilter.level)
log_funct(message)
if __name__ == "__main__":
logger = myLogger()
logger.log_message(MyLogFilter, "debugging")
```
This is a lot of trouble to go through in order to make a simple `logging.debug` call but in reality, I will have a list of many different versions of `MyLogFilter` at different logging levels that contain different values of the "error\_code" attribute and I'm trying to make the `log_message()` call as short and sweet as possible because it will be repeated numerous times.
I would appreciate any information about how to do what I want to, or if I'm completely off on the wrong track and if that's the case, what I should be doing instead.
I would like to stick to the internal python modules of "logging" and "trace" if that's possible instead of using any external solutions. | 2020/08/14 | [
"https://Stackoverflow.com/questions/63404899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3014653/"
] | >
> or if I'm completely off on the wrong track and if that's the case, what I should be doing instead.
>
>
>
My strong suggestion is that you view logging as a solved problem and avoid reinventing the wheel.
If you need more than the standard library's `logging` module provides, it's probably something like [structlog](https://www.structlog.org/en/stable/) (`pip install structlog`)
Structlog will give you:
* data binding
* cloud native structured logging
* pipelines
* ...and more
It will handle most local and cloud use cases.
Below is one common configuration that will output colorized logging to a .log file, to stdout, and can be extended further to log to eg AWS CloudWatch.
Notice there is an included processor: `StackInfoRenderer` -- this will include stack information to all logging calls with a 'truthy' value for stack\_info (this is also in stdlib's logging btw). If you only want stack info for exceptions, then you'd want to do something like exc\_info=True for your logging calls.
**main.py**
```py
from structlog import get_logger
from logging_config import configure_local_logging
configure_local_logging()
logger = get_logger()
logger.info("Some random info")
logger.debug("Debugging info with stack", stack_info=True)
try:
assert 'foo'=='bar'
except Exception as e:
logger.error("Error info with an exc", exc_info=e)
```
**logging\_config.py**
```py
import logging
import logging.config
import structlog
def configure_local_logging(filename=__name__):
"""Provides a structlog colorized console and file renderer for logging in eg ING tickets"""
timestamper = structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M:%S")
pre_chain = [
structlog.stdlib.add_log_level,
timestamper,
]
logging.config.dictConfig({
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"plain": {
"()": structlog.stdlib.ProcessorFormatter,
"processor": structlog.dev.ConsoleRenderer(colors=False),
"foreign_pre_chain": pre_chain,
},
"colored": {
"()": structlog.stdlib.ProcessorFormatter,
"processor": structlog.dev.ConsoleRenderer(colors=True),
"foreign_pre_chain": pre_chain,
},
},
"handlers": {
"default": {
"level": "DEBUG",
"class": "logging.StreamHandler",
"formatter": "colored",
},
"file": {
"level": "DEBUG",
"class": "logging.handlers.WatchedFileHandler",
"filename": filename + ".log",
"formatter": "plain",
},
},
"loggers": {
"": {
"handlers": ["default", "file"],
"level": "DEBUG",
"propagate": True,
},
}
})
structlog.configure_once(
processors=[
structlog.stdlib.add_log_level,
structlog.stdlib.PositionalArgumentsFormatter(),
timestamper,
structlog.processors.StackInfoRenderer(),
structlog.processors.format_exc_info,
structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
],
context_class=dict,
logger_factory=structlog.stdlib.LoggerFactory(),
wrapper_class=structlog.stdlib.BoundLogger,
cache_logger_on_first_use=True,
)
```
Structlog can do quite a bit more than this. I suggest you check it out. | It turns out the missing piece to the puzzle is using the "traceback" module rather than the "trace" one. It's simple enough to parse the output of traceback to pull out the source filename and line number of the ".log\_message()" call.
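For anyone curious, here is a minimal sketch of that idea (the helper name and the stack offset are illustrative; assumes Python 3):

```python
import traceback

def caller_info(depth=2):
    """Return (filename, lineno) for the frame ``depth`` levels above
    this call, e.g. the line that invoked log_message() rather than
    the logging call inside it."""
    frame = traceback.extract_stack()[-depth - 1]
    return frame.filename, frame.lineno

def log_message(message):
    filename, lineno = caller_info()
    print("%s:%s, %s" % (filename, lineno, message))
```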
If my logging needs become any more complicated then I'll definitely look into struct\_log. Thank you for that information as I'd never heard about it before. | 1,808 |
51,074,335 | Want to find the delimiter in the text file.
The text looks:
```
ID; Name
1; John Mak
2; David H
4; Herry
```
The file contains tabs along with the delimiter.
I tried the following, [by referring to this answer](https://stackoverflow.com/questions/21407993/find-delimiter-in-txt-to-convert-to-csv-using-python):
```
with open(filename, 'r') as f1:
dialect = csv.Sniffer().sniff(f1.read(1024), "\t")
print 'Delimiter:', dialect.delimiter
```
The result shows: `Delimiter:`
Expected result: `Delimiter: ;` | 2018/06/28 | [
"https://Stackoverflow.com/questions/51074335",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4268241/"
] | `sniff` can conclude with only one single character as the delimiter. Since your CSV file contains two characters as the delimiter, `sniff` will simply pick one of them. But since you also pass in the optional second argument to `sniff`, it will only pick what's contained in that value as a possible delimiter, which in your case, is `'\t'` (which is not visible from your `print` output).
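For instance, with the sample data from the question (a quick sketch, not the asker's actual file), restricting the candidate set to `';'` makes `sniff` report the semicolon:

```python
import csv

sample = "ID; Name\n1; John Mak\n2; David H\n4; Herry\n"
# only ';' is offered as a candidate, so sniff settles on it
dialect = csv.Sniffer().sniff(sample, delimiters=';')
print(dialect.delimiter)  # ;
```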
From [`sniff`'s documentation](https://docs.python.org/2/library/csv.html#csv.Sniffer.sniff):
>
> If the optional *delimiters* parameter is given, it is interpreted as a
> string containing possible valid delimiter characters.
>
>
> | Sniffing is not guaranteed to work.
Here is one approach that will work with any kind of delimiter.
You start with what you assume is the most common delimiter `;` if that fails, then you try others until you manage to parse the row.
```
import csv
with open('sample.csv') as f:
reader = csv.reader(f, delimiter=';')
for row in reader:
try:
a,b = row
except ValueError:
try:
a,b = row[0].split(None, 1)
except ValueError:
a,b = row[0].split('\t', 1)
print('{} - {}'.format(a.strip(), b.strip()))
```
You can play around with this at [this replt.it link](https://repl.it/repls/KindQuizzicalMultiprocessing), play with the `sample.csv` file if you want to try out different delimiters.
You can combine sniffing with this to catch any odd delimiters that are not known to you. | 1,809 |
40,523,328 | I have code below for a simple test of `sympy.solve`:
```
#!/usr/bin/python
from sympy import *
x = Symbol('x', real=True)
#expr = sympify('exp(1 - 10*x) - 15')
expr = exp(1 - x) - 15
print "Expressiong:", expr
out = solve(expr)
for item in out:
print "Answer:", item
expr = exp(1 - 10*x) - 15
print expr
out = solve(expr)
for item in out:
print "Answer:", item
```
output is as follows:
```
Expressiong: exp(-x + 1) - 15
Answer: -log(15) + 1
exp(-10*x + 1) - 15
Answer: log(15**(9/10)*exp(1/10)/15)
```
The equation `exp(1 - x) = 15` is solved correctly (`x = -15log(15) + 1`).
But when I change `x` to `10*x`, the result is weird.
1. Why would there be a lot of complex answers if I initialize the symbol `x` without `real=True`?
2. Even with `real=True` when initializing the symbol `x`, the answer still is not correct. Comparing to the first equation, the result should be `-3/2*log(15) + 1/10`. Did I write the equation wrong?
Thanks in advance. | 2016/11/10 | [
"https://Stackoverflow.com/questions/40523328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2298014/"
] | I can also confirm that the `solve` output for the equation `exp(1 - 10*x) - 15 == 0` appears unecessarily complicated. I would suggest for univariate equations to first consider `sympy.solveset`. For this example, it gives the following nicely formatted solutions.
```
import sympy as sp
sp.init_printing(pretty_print=True)
x = sp.symbols('x')
sp.solveset(sp.exp(1 - 10*x) - 15,x)
```
[](https://i.stack.imgur.com/z2Tbt.png)
Note that there are complex roots due to the exponential function being multi-valued (in complex domain). If you want to restrict the domain of the solution to reals, `solveset` has the convenient option `domain` for this purpose.
```
sp.solveset(sp.exp(1 - 10*x) - 15,x, domain = sp.S.Reals)
```
[](https://i.stack.imgur.com/aeyZS.png) | `solve` gives real and complex roots if symbols allow. An equation like `exp(2*x)-4` can be though of as `y**2 - 4` with `y = exp(x)` and `y` (thus `x`) will have two solutions. There are 10 solutions if the 2 is replaced with 10. (But there are actually many more solutions besides as `solveset` indicates.)
You based your expectation of the 2nd case on a misstatement of the solution for the first case which was actually `-log(15) + 1`; the second case correctly gives 1/10th that value. | 1,810 |
53,289,402 | I have a windows setup file (.exe), which is used to install a piece of software. This is a third-party executable. During installation, it expects certain values and has a UI.
I want to run this setup .exe silently without any manual intervention (even for providing the parameter values).
After spending some time googling about the approach, I feel powershell should be able to help me with my requirements.
Can anyone suggest whether powershell would be the right tool for this, or does a better tool exist for this requirement?
Can python be used to implement this requirement?
Please note: Since this is a third party executable, I don't have the names of the parameters for which values must be provided to the UI during installation
Thank you | 2018/11/13 | [
"https://Stackoverflow.com/questions/53289402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1773169/"
] | ***Deployment***: Note that it is not always possible to run a setup.exe silently with full control of parameters and with reliable silent running. It depends on how the installer was designed. In these cases I normally resort to repackaging - some more detail below on this.
Some general tips for dealing with deployment:
1. ***Software Library Tip***: Maybe try to look up the software to see if others have dealt with it for silent installation and deployment: **<https://www.itninja.com/software>**
2. ***Extract Files***: Is this an embedded MSI (Windows Installer) file or a legacy style `setup.exe`? Maybe try to extract the files first: **[Programmatically extract contents of InstallShield setup.exe](https://stackoverflow.com/questions/8681252/programmatically-extract-contents-of-installshield-setup-exe/8694205#8694205)** (Installshield setup.exe files). More elaborate details:
* [How to run an installation in /silent mode with adjusted settings](https://stackoverflow.com/questions/52327442/how-to-run-an-installation-in-silent-mode-with-adjusted-settings/52338626#52338626) (extraction of non-Installshield setup.exe files - for example Advanced Installer or WiX setup.exe files)
* [Extract MSI from EXE](https://stackoverflow.com/questions/1547809/extract-msi-from-exe/24987512#24987512)
3. ***Setup.exe***: Just adding for completeness. You can try **`setup.exe /?`** or **`setup.exe /help`** or similar at the command line to check for embedded help in the exe.
---
***MSI Transforms***: If you discover and embedded MSI file in the setup.exe, then you can customize the installation parameters in a standardized way. details here: **[How to make better use of MSI files](https://stackoverflow.com/questions/458857/how-to-make-better-use-of-msi-files/1055861#1055861)**. Light weight customization is by **command line**, heavy weight customization via **transforms**.
***Legacy Setup.exe***: Legacy **`setup.exe`** are often created with [Inno Setup](http://www.jrsoftware.org/isinfo.htm), [NSIS](http://nsis.sourceforge.net/), or a [few other non-MSI setup authoring tools](http://www.installsite.org/pages/en/tt_nonmsi.htm). Each with their own quirks for command line. Here is an old source for some samples: <http://unattended.sourceforge.net/installers.php>.
***Repackaging***: Corporate users often repackage such legacy setup.exe files and turn them into **MSI** or **App-V** packages (or the brand new **MSIX** format). On the topic of repackaging and also a piece on PowerShell and the availability of [Windows Installer PowerShell Modules](https://github.com/heaths/psmsi): **[How can I use powershell to run through an installer?](https://stackoverflow.com/questions/46221983/how-can-i-use-powershell-to-run-through-an-installer/46224987#46224987)**.
---
***Some Further Links***:
* [How to create windows installer](https://stackoverflow.com/questions/49624070/how-to-create-windows-installer/49632260#49632260)
* [System Administrator deployment tools, including repackaging tools](http://www.installsite.org/pages/en/msi/admins.htm) | You can also try creating a shortcut to the exe and adding (one at a time) common help parameters in the shortcut target and see if one gives you a help dialog. Some common parameters are
* `/?`
* `/help`
* `-help`
* `--help`
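If you want to script that trial and error, here is a rough Python sketch (purely illustrative; `probe_installer` is a made-up helper, and a GUI installer may just open its window instead of printing help):

```python
import subprocess

COMMON_HELP_FLAGS = ["/?", "/help", "-help", "--help"]

def probe_installer(exe_path, timeout=30):
    """Try each common help flag and return the ones that produced any
    output before the timeout (a crude heuristic, nothing more)."""
    responsive = []
    for flag in COMMON_HELP_FLAGS:
        try:
            result = subprocess.run([exe_path, flag], timeout=timeout,
                                    capture_output=True, text=True)
            if result.stdout or result.stderr:
                responsive.append(flag)
        except (subprocess.TimeoutExpired, OSError):
            pass
    return responsive
```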
This also depends on the developer implementing a help parameter, but most installer builders default to implementing, so more often than not you will get something. Also, try an internet search for "SOFTWARE NAME silent install". Quiet often the developer has some documentation on their web site. But, if it's a really small developer or freeware or the like, you may not find much. | 1,811 |
62,515,497 | I have a directory with quite some files. I have `n` search patterns and would like to list all files that match `m` of those.
Example: From the files below, list the ones that contain at least *two* of `str1`, `str2`, `str3` and `str4`.
```sh
$ ls -l dir/
total 16
-rw-r--r--. 1 me me 10 Jun 22 14:22 a
-rw-r--r--. 1 me me 5 Jun 22 14:22 b
-rw-r--r--. 1 me me 10 Jun 22 14:22 c
-rw-r--r--. 1 me me 9 Jun 22 14:22 d
-rw-r--r--. 1 me me 10 Jun 22 14:22 e
$ cat dir/a
str1
str2
$ cat dir/b
str2
$ cat dir/c
str2
str3
$ cat dir/d
str
str4
$ cat dir/e
str2
str4
```
I managed to achieve this with a rather ugly `for` loop on `find` results that spawns `n` `grep` processes for each file, which obviously is super inefficient and would take ages on directories with a lot of files:
```sh
for f in $(find dir/ -type f); do
c=0
grep -qs 'str1' $f && let c++
grep -qs 'str2' $f && let c++
grep -qs 'str3' $f && let c++
grep -qs 'str4' $f && let c++
[[ $c -ge 2 ]] && echo $f
done
```
I am quite sure I could achieve this in a far better performing way, but I am not sure how to tackle it. From what I understand from the man page (i.e. on `-e` and `-m`) this is not possible with `grep` alone.
What would be the right tool to use? Is this possible with `awk`?
Bonus: By using `find` I can define the files to search more precisely (i.e. `-prune` certain sub directories or only search files with `-iname '*.txt'`), which I would like to do with other solutions, too.
---
UPDATE
======
Some statistics about the performance of different implementations below.
---
### `find` + `awk`
(Script from [this](https://stackoverflow.com/a/62516128/2656118) answer)
```
real 0m0,006s
user 0m0,002s
sys 0m0,004s
```
---
### `python`
(I'm a `python` noob, please advise if this could be optimized):
```py
import os
patterns = []
patterns = ["str1", "str2", "str3", "str4"]
for root, dirs, files in os.walk("dir"):
for file in files:
c = int(0)
filepath = os.path.join(root, file)
        with open(filepath, 'r') as input:
            content = input.read()
        # read the file once; re-iterating the same file handle for each
        # pattern would resume where the previous search stopped
        for pattern in patterns:
            if pattern in content:
                c += 1
if ( c >= 2 ):
print(filepath)
```
```
real 0m0,025s
user 0m0,019s
sys 0m0,006s
```
---
### `c++`
(Script from [this](https://stackoverflow.com/a/62519890/2656118) answer)
```
real 0m0,002s
user 0m0,001s
sys 0m0,001s
``` | 2020/06/22 | [
"https://Stackoverflow.com/questions/62515497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2656118/"
] | ```
$ cat reg.txt
str1
str2
str3
str4
```
```
$ cat prog.awk
# reads regexps from the first input file
# parameterized by `m'
# requires gawk or mawk for `nextfile'
FNR == NR {
reg[NR] = $0
next
}
FNR == 1 {
for (i in reg)
tst[i]
cnt = 0
}
{
for (i in tst) {
if ($0 ~ reg[i]) {
if (++cnt == m) {
print FILENAME
nextfile
}
delete tst[i]
}
}
}
```
```
$ find dir -type f -exec awk -v m=2 -f prog.awk reg.txt {} +
dir/a
dir/c
``` | Here's an option using `awk` since you tagged it with that too:
```
find dir -type f -exec \
awk '/str1|str2|str3|str4/{c++} END{if(c>=2) print FILENAME;}' {} \;
```
It will however count duplicates, so a file containing
```
str1
str1
```
will be listed. | 1,812 |
44,282,257 | I am new to Python.
I have a Scrapy project, using a conda virtual environment, in which I have written a pipeline class like this:
```
from cassandra.cqlengine import connection
from cassandra.cqlengine.management import sync_table, create_keyspace_network_topology
from recentnews.cassandra.model.NewsPaperDataModel import NewspaperDataModel
from recentnews.common.Constants import DEFAULT_KEYSPACE
class RecentNewsPipeline(object):
def __init__(self):
connection.setup(["192.168.99.100"], DEFAULT_KEYSPACE, protocol_version=3, port=9042)
create_keyspace_network_topology(DEFAULT_KEYSPACE, {'DC1': 2})
sync_table(NewspaperDataModel)
def process_item(self, item, spider):
NewspaperDataModel.create(
title=item.title,
url=item.url,
domain=item.domain
)
return item
```
When I run the scrapy crawler like `scrapy crawl author`, it gives me this error:
```
(news) (C:\Miniconda2\envs\news) E:\Shoshi\Python Projects\recentnews-scrapy\recentnews>scrapy crawl author
2017-05-31 15:56:29 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: recentnews)
2017-05-31 15:56:29 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'recentnews.spiders', 'SPIDER_MODULES': ['recentnews.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'recentnews'}
2017-05-31 15:56:29 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2017-05-31 15:56:30 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-05-31 15:56:30 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
Unhandled error in Deferred:
2017-05-31 15:56:30 [twisted] CRITICAL: Unhandled error in Deferred:
2017-05-31 15:56:30 [twisted] CRITICAL:
Traceback (most recent call last):
File "C:\Miniconda2\envs\news\lib\site-packages\twisted\internet\defer.py", line 1301, in _inlineCallbacks
result = g.send(result)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\crawler.py", line 95, in crawl
six.reraise(*exc_info)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\crawler.py", line 77, in crawl
self.engine = self._create_engine()
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\crawler.py", line 102, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\core\engine.py", line 70, in __init__
self.scraper = Scraper(crawler)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\core\scraper.py", line 71, in __init__
self.itemproc = itemproc_cls.from_crawler(crawler)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\middleware.py", line 58, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\middleware.py", line 34, in from_settings
mwcls = load_object(clspath)
File "C:\Miniconda2\envs\news\lib\site-packages\scrapy\utils\misc.py", line 44, in load_object
mod = import_module(module)
File "C:\Miniconda2\envs\news\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "E:\Shoshi\Python Projects\recentnews-scrapy\recentnews\recentnews\pipelines.py", line 7, in <module>
from cassandra.cqlengine import connection
ImportError: No module named cqlengine
```
I am using a conda virtual environment.
But when I run this code from the Python command line, it works fine with no error:
```
(news) (C:\Miniconda2\envs\news) E:\Shoshi\Python Projects\recentnews-scrapy\recentnews>python
Python 2.7.13 |Continuum Analytics, Inc.| (default, May 11 2017, 13:17:26) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> from cassandra.cqlengine import connection
>>> from cassandra.cqlengine.management import sync_table, create_keyspace_network_topology
>>> from recentnews.cassandra.model.NewsPaperDataModel import NewspaperDataModel
>>> from recentnews.common.Constants import DEFAULT_KEYSPACE
>>> connection.setup(["192.168.99.100"], DEFAULT_KEYSPACE, protocol_version=3, port=9042)
>>> create_keyspace_network_topology(DEFAULT_KEYSPACE, {'DC1': 2})
C:\Miniconda2\envs\news\lib\site-packages\cassandra\cqlengine\management.py:545: UserWarning: CQLENG_ALLOW_SCHEMA_MANAGEMENT environment variable is not set. Future versions of this package will require this variable to enable management functions.
warnings.warn(msg)
>>> sync_table(NewspaperDataModel)
......
```
You can see that `from cassandra.cqlengine import connection` is imported perfectly.
What am I missing? Why is this code not working when I run it using `scrapy crawl author`? | 2017/05/31 | [
"https://Stackoverflow.com/questions/44282257",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1427144/"
] | So it appeared that [there was a folder named `recentnews/cassandra/`](https://stackoverflow.com/questions/44282257/importerror-no-module-named-cqlengine-but-worked-on-python-command?noredirect=1#comment75573040_44282257) in the OP's scrapy project (namespace `recentnews.cassandra`).
When scrapy imports the item pipeline class `recentnews.pipelines.RecentNewsPipeline`, `importlib`'s handling of `from cassandra.cqlengine import connection` (at the beginning of `recentnews/pipeline.py`) found the local `recentnews.cassandra` module before the virtualenv-installed `cassandra` package.
One way to check which module is being imported is to add `import cassandra; print(cassandra.__file__)` before the `import` statement that fails. | When you create a virtual environment, by default the user-installed packages are not copied. You would therefore have to run `pip install casandra` (or whatever the package is called) in your virtual environment. That will probably fix this problem. | 1,815 |
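As a concrete sketch of that check, using the standard-library `json` module as a stand-in (since `cassandra` may not be importable everywhere), you can print where a module is actually loaded from:

```python
import importlib

# Print where a module is actually loaded from; a path inside your own
# project tree (instead of site-packages) reveals the name shadowing.
mod = importlib.import_module("json")  # stand-in for "cassandra"
print(mod.__file__)
```

If the printed path points into your project directory rather than into the virtual environment's `site-packages`, a local package is shadowing the installed one.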
50,877,817 | The first column corresponds to a single process, and the second column lists the components that go into the process. I want a loop that can examine all the processes and determine which other processes share the same individual components. Ultimately, I want a loop to find the processes in which 50% or more of the components match 50% or more of another process's components.
For example, process 1 has 4 components in common with process 2, so more than 50% of their components match, and I would want a function to identify this process pairing. The same goes for processes 1 and 3.
```
Process Comp.
1 511
1 233
1 712
1 606
1 4223
1 123
1 456
2 511
2 233
2 606
2 4223
2 222
2 309
2 708
3 309
3 412
3 299
3 511
3 712
3 222
3 708
```
I feel like I could use a network library for this in Python, or maybe run it in MATLAB with an iterative function, but I need to do it in Excel, and I am new to coding in Excel, so any help would be appreciated! | 2018/06/15 | [
"https://Stackoverflow.com/questions/50877817",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9946677/"
] | Here is what I ended up doing:
```
// Get the tile's cartesian center.
var cartesian = new Cesium.Cartesian3(1525116.05769, -4463608.36127, 4278734.88048);
// Get the tile's cartographic center.
var cartographic = Cesium.Cartographic.fromCartesian(cartesian);
// Rotate the model.
model.rotation.x = -cartographic.latitude;
model.rotation.z = cartographic.longitude + Math.PI / 2;
``` | Just convert "gltfUpAxis" to "Z" would work fine. Or you can try "Y" too.
```
"asset": {
"gltfUpAxis": "Z",
"version": "1.0"
},
``` | 1,816 |
67,022,905 | I have a simple text file that has groups of key:value pairs with a blank row between each group of key:values. The number of key:value pairs can vary from group to group. Sample data and my code so far.
```
key1: value1
key2: value2
key3: value3
key1: value4
key2: value5
key3: value6
```
The code is close to what I am looking for; the part that is missing is that when it gets to a blank line, I need it to close out the JSON object and start a new one for the next group.
```
#!/usr/bin/python
import json
f = open("sample.txt", "r")
content = f.read()
splitcontent = content.splitlines()
d = []
for v in splitcontent:
l = v.split('\n')
print(l)
if l == ['']:
continue
d.append(dict(s.split(': ',1) for s in l))
with open("dump.json", 'w') as file:
file.write((json.dumps(d, indent=4, sort_keys= False)))
```
I tried to use `l == ['']` to end the JSON, and it does skip the blank line, but it just continues, which is expected but not what I need.
Thanks for the help and if you recognize the code above an extra thank you. | 2021/04/09 | [
"https://Stackoverflow.com/questions/67022905",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3123307/"
] | Using [`Array.from()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/from) and [`Array.reduce()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce), this could be done as follows:
```
Array.from(map.entries()).reduce((a, b) => a[1] < b[1] ? b : a)[0];
``` | Here is one approach
* Convert it to an array of key/value pairs
* Sort the array by the value
* Extract the second item of the first pair
Like so
```
let map: Map<string, number> = new Map();
map.set("a", 12);
map.set("b", 124);
map.set("c", 14);
map.set("d", 155);
const key = Array.from(map).sort((a, b) => (a[1] > b[1] ? -1 : 1))[0][0];
console.log(key);
``` | 1,817 |
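Since most of this collection is Python, it may be worth noting that the max-by-value idea in the two answers above carries over directly to a plain Python dict; this is just an illustrative sketch, not part of either answer:

```python
scores = {"a": 12, "b": 124, "c": 14, "d": 155}

# max() over the items, keyed on the value, yields the (key, value) pair
# with the largest value; [0] then extracts the key.
best_key = max(scores.items(), key=lambda kv: kv[1])[0]
print(best_key)  # d
```

This avoids the full sort: `max()` is a single linear scan, just like the `reduce`-based variant.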
2,319,495 | I need to keep a large number of Windows XP machines running the same version of Python, with an assortment of modules, one of which is python-win32. I thought about installing Python on a network drive that is mounted by all the client machines, and just adjusting the path on the clients. Python starts up fine from the network, but when importing win32com I get a pop-up error saying:
>
> The procedure entry point ?PyWinObject\_AsHANDLE@@YAHPAU\_object@@PAPAXH@Z could not be located in the dynamic link library pywintypes24.dll
>
>
>
after dismissing the message dialog I get in the console:
>
> ImportError: DLL load failed: The specified procedure could not be found.
>
>
>
I searched the Python directory for pywintypes24.dll and it is present in "Lib\site-packages\pywin32_system32".
What am I missing and is there another way in which I can install Python + Python-Win32 + additional module once and have them running on many machines? I don't have access to the Microsoft systems management tools, so I need to be a bit more low-tech than that. | 2010/02/23 | [
"https://Stackoverflow.com/questions/2319495",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18308/"
] | On every machine you basically have to run `pywin32_postinstall.py -install` once. Assuming your Python installation on the network is `N:\Python26`, run the following command on every client:
```
N:\Python26\python.exe N:\Python26\Scripts\pywin32_postinstall.py -install
```
Another important thing is `Good Luck!`. The reason is that you might need to do this as `admin`. In my case such setup worked for all but one computer. I still did not figure out why. | You could use [batch files running at boot](http://isg.ee.ethz.ch/tools/realmen/index.en.html) to
* Mount the network share (`net use \\server\share`)
* Copy the Python and packages installers from the network share to a local folder
* Check version of the msi installer against the installed version
* If different, uninstall Python and all version dependent packages
* Reinstall all packages
This would be pretty much a roll your own central management system for that software. | 1,818 |
9,776,351 | This question describes my conclusion after researching available options for creating a headless Chrome instance in Python and asks for confirmation or resources that describe a 'better way'.
From what I've seen it seems that the quickest way to get started with a headless instance of Chrome in a Python application is to use CEF (<http://code.google.com/p/chromiumembedded/>) with CEFPython (<http://code.google.com/p/cefpython/>). CEFPython seems premature though, so using it would likely mean further customization before I'm able to load a headless Chrome instance that loads web pages (and required files), resolves a completed DOM and then lets me run arbitrary JS against it from Python.
Have I missed any other projects that are more mature or would make this easier for me? | 2012/03/19 | [
"https://Stackoverflow.com/questions/9776351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193601/"
] | Any reason you haven't considered Selenium with the Chrome Driver?
<http://code.google.com/p/selenium/wiki/ChromeDriver>
<http://code.google.com/p/selenium/wiki/PythonBindings> | [casperjs](http://casperjs.org/) is a headless webkit, but it wouldn't give you python bindings that I know of; it seems command-line oriented, but that doesn't mean you couldn't run it from python in such a way that satisfies what you are after. When you run casperjs, you provide a path to the javascript you want to execute; so you would need to emit that from Python.
But all that aside, I bring up casperjs because it seems to satisfy the lightweight, headless requirement very nicely. | 1,821 |
27,916,891 | I wrote some python code like:
```
if platform == 'ios':
do_something()
elif platform == 'android':
do_something_else()
```
And this piece of code was strongly criticized by one of my colleagues. He asked why I don't have an else block to handle the remaining cases.
In my opinion, all I need to do is deal with these two situations, which is why I did not write an else block. For any other case, I mean for the program to simply pass through this check and continue.
Is an else block necessary (per some PEP rule) for every if/elif block?
If it is morally compulsory, should I write my code as:
```
if platform == 'ios':
do_something()
if platform == 'android':
do_something_else()
```
But I still feel my first block is just OK. Am I wrong? | 2015/01/13 | [
"https://Stackoverflow.com/questions/27916891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3480199/"
] | `else` is not required from a syntax point of view and is not enforced by `PEP8`. If you intended to do nothing when `platform` is not `ios` or `android`, then this is perfectly OK.
Alternatively, you can have a mapping "platform > function", something along these lines:
```
mapping = {'ios': do_something, 'android': do_something_else}
f = mapping.get(platform)
if f:
f()
``` | It depends on your code but in this case either way would've been fine. There was a logical reason that your code needed to be that way and that's fine. You do not have to follow the rules all the time, you have to be able to try different stuff all the time. | 1,831 |
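If a no-op default is acceptable, the lookup and the call in the mapping answer above can even be collapsed into one expression with `dict.get`; the two handler functions below are stand-ins for illustration:

```python
def do_something():
    return "ios path"

def do_something_else():
    return "android path"

mapping = {'ios': do_something, 'android': do_something_else}

def handle(platform):
    # Unknown platforms fall through to a no-op that returns None.
    return mapping.get(platform, lambda: None)()

print(handle('ios'))      # ios path
print(handle('windows'))  # None
```

Whether the silent fall-through is a feature or a bug is exactly the design question the original poster and his colleague disagree on; the dict form at least makes the default explicit.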
27,955,947 | We are trying to write an automated test for our iOS app using the Appium python client.
We want to imitate a swipe event on an element, but none of the APIs from appium.webdriver.common.touch_action seem to behave the way we want.
Basically, we want to break the swipe down into three events (KEYDOWN, MOVE, KEYUP).
The flow goes as below
1. Find the element.
2. Hold it, swipe it from point A to B and Hold it there. (KEYDOWN and MOVE)
3. Do something.
4. Do something more.
5. Release the element. (KEYUP)
* How can we achieve this on iOS?
We have it working on Android using monkeyrunner. It works as below
```
X=50
Y=50
hr = MonkeyRunner.waitForConnection(timeout = 60, deviceId = dev_2)
hr.touch(X, Y,MonkeyDevice.DOWN)
for i in range(1, 13):
hr.touch(X, Y + 20*i, hr.MOVE)
time.sleep(0.1)
MonkeyRunner.sleep(2)
// Do something
hr.touch(X, Y, MonkeyDevice.UP)
```
Thanks! | 2015/01/15 | [
"https://Stackoverflow.com/questions/27955947",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/631679/"
] | Your 'data' contains 4 contours. Each contour has one point that was drawn on the image. What you need is 1 contour with 4 points. Push all your points into data[0].
On a side note, you don't need to call drawContours() in a loop. If you provide a negative contour index (third parameter), then all contours will be drawn.
```
vector<vector<Point> > data(1);
data[0].push_back(Point(0,0));
data[0].push_back(Point(0,120));
data[0].push_back(Point(180,100));
data[0].push_back(Point(70,0));
drawContours(input, data, -1, Scalar(0,255,0), 10, LINE_8);
``` | If you have only 4 points, I suggest you to use cv::Rectangle. If you can have a lot of points, you have to write a function using [cv::Line](http://docs.opencv.org/2.4.2/modules/core/doc/drawing_functions.html#line). | 1,832 |
49,981,741 | I am writing a python application and trying to manage the code in a structure.
The directory structure that I have is something like the following:-
```
package/
A/
__init__.py
base.py
B/
__init__.py
base.py
app.py
__init__.py
```
so I have a line in `A/__init__.py` that says
```
from .base import *
```
No problem there, but when I put the same line in `B/__init__.py`
```
from .base import *
```
I get an error
```
E0402: Attempted relative import beyond top-level package.
```
Aren't the two supposed to be identical? What exactly am I doing wrong here?
I am using Python 3.6, and I ran the application from the terminal with
```
> python app.py
```
Thanks
UPDATE:
Sorry, the error is from somewhere else.
In A/base.py I have
```
class ClassA():
...
```
In B/base.py I have
```
from ..A import ClassA
class ClassB(ClassA):
...
```
The error came from the import statement in B/base.py
```
from ..A import ClassA
```
UPDATE #2
@JOHN_16 app.py is as follows:
```
from A import ClassA
from B import ClassB
if __name__ == "__main__":
...
```
Also updated the directory to include an empty `__init__.py` as suggested. | 2018/04/23 | [
"https://Stackoverflow.com/questions/49981741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7877397/"
] | This occurred because you have two packages: *A* and *B*. Package *B* can't get access to the content of package *A* via a relative import, because it can't move outside the top-level package. In your case, both packages are top-level.
You need to reorganize your project, for example like this:
```
.
├── TL
│ ├── A
│ │ ├── __init__.py
│ │ └── base.py
│ ├── B
│ │ ├── __init__.py
│ │ └── base.py
│ └── __init__.py
└── app.py
```
and change the content of your `app.py` to use the package TL:
```
from TL.A import ClassA
from TL.B import ClassB
if __name__ == "__main__":
``` | My problem was forgetting `__init__.py` in my top level directory. This allowed me to use relative imports for folders in that directory. | 1,833 |
62,209,746 | Still fairly new to Python.
I was wondering what would be a good way of detecting which output a Python program should produce.
As an example, if you were to make a speed/distance/time calculator and only 2 of the 3 inputs were ever given, how would you detect which was the missing input, and therefore the output? I can think of some fairly crude ways, but I was wondering if there is anything else, should more complex tasks come into play.
I guess something like:
```
def sdf(speed=0, distance=0, time=0):
# detect which parameter has no input / equals 0
# calculate result
# return result
sdf(speed=10, distance=2)
```
Any ideas? | 2020/06/05 | [
"https://Stackoverflow.com/questions/62209746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11098113/"
] | Python allows you to change types of variables on the fly. Since you are working with integers and `0` could be a useful value in your calculations, your default 'not present' value should be `None`:
```
def sdf(speed=None, time=None, distance=None):
if speed is None:
return calculate_speed(time, distance), time, distance
if time is None:
return speed, calculate_time(speed, distance), distance
if distance is None:
return speed, time, calculate_distance(speed, time)
# All paramters have been set! Maybe check if all three are correct
return speed, time, distance
speed, time, distance = sdf(speed=1, distance=2)
```
This way you don't have to find out what happened afterwards. This function will give you all three values, given you gave it at least 2 out of the 3.
If your program flow allows multiple values to be `None`, your `calculate_XY` functions should throw an exception if they detect it. So in this case:
```
def calculate_distance(speed, time)
return speed * time
```
It will throw an unsupported-operand exception (`TypeError`), so there is no need to clutter your code with useless asserts.
If you really don't know how many parameters will be set, do something like this:
```
try:
retval = sdf(None, None, x)
except TypeError as e:
print(e)
handle_exception(e)
```
Also, just a heads up: the `is` operator in Python checks whether two names refer to the same object, not whether their values are equal. Since names assigned `None` are just 'pointers to the global `None` object' (a simplification), checking whether a value is `None` with `is` is preferred. However, be aware of this:
```
a = b = list()
a is b
True
# a and b are 'pointers' to the same list object
a = list()
b = list()
a is b
False
a == b
True
# a and b contain 2 different list objects, but their contents are identical
```
Just be aware that to compare values use `==` and to check if they are the same object, use `is`.
HTH | You should use multiple functions and call the one needed.
```
def CalculateTravelTime(distance, speed)
def CalculateTravelSpeed(distance, time)
def CalculateTravelDistance(speed, time)
``` | 1,834 |
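Putting the pieces of the `None`-default answer above together, a complete runnable sketch might look like this. The three `calculate_*` helpers were only named, not defined, in that answer, so the obvious formulas are filled in here as an assumption:

```python
def calculate_speed(time, distance):
    return distance / time

def calculate_time(speed, distance):
    return distance / speed

def calculate_distance(speed, time):
    return speed * time

def sdf(speed=None, time=None, distance=None):
    # Whichever argument is left at None is the one to compute.
    if speed is None:
        return calculate_speed(time, distance), time, distance
    if time is None:
        return speed, calculate_time(speed, distance), distance
    if distance is None:
        return speed, time, calculate_distance(speed, time)
    return speed, time, distance

print(sdf(speed=10, distance=2))  # (10, 0.2, 2)
```

If two or more arguments are left as `None`, the division inside the helpers raises a `TypeError`, which matches the error-handling strategy described above.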
48,788,169 | I was doing cs231n assignment 2 and encountered this problem.
I'm using tensorflow-gpu 1.5.0
Code is as follows:
```
# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets set automatically based on the batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
# define model
def complex_model(X,y,is_training):
pass
y_out = complex_model(X,y,is_training)
# Now we're going to feed a random batch into the model
# and make sure the output is the right size
x = np.random.randn(64, 32, 32,3)
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
print(ans.shape)
print(np.array_equal(ans.shape, np.array([64, 10])))
```
Complete traceback
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-97f0b6c5a72e> in <module>()
6 tf.global_variables_initializer().run()
7
----> 8 ans = sess.run(y_out,feed_dict={X:x,is_training:True})
9 get_ipython().run_line_magic('timeit', 'sess.run(y_out,feed_dict={X:x,is_training:True})')
10 print(ans.shape)
c:\users\kasper\appdata\local\programs\python\python36\lib\site- packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
893 try:
894 result = self._run(None, fetches, feed_dict, options_ptr,
--> 895 run_metadata_ptr)
896 if run_metadata:
897 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
c:\users\kasper\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1111 # Create a fetch handler to take care of the structure of fetches.
1112 fetch_handler = _FetchHandler(
-> 1113 self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
1114
1115 # Run request and get response.
c:\users\kasper\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\client\session.py in __init__(self, graph, fetches, feeds, feed_handles)
419 with graph.as_default():
--> 420 self._fetch_mapper = _FetchMapper.for_fetch(fetches)
421 self._fetches = []
422 self._targets = []
c:\users\kasper\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\client\session.py in for_fetch(fetch)
235 if fetch is None:
236 raise TypeError('Fetch argument %r has invalid type %r' %
--> 237 (fetch, type(fetch)))
238 elif isinstance(fetch, (list, tuple)):
239 # NOTE(touts): This is also the code path for namedtuples.
TypeError: Fetch argument None has invalid type <class 'NoneType'>
```
I saw that similar questions have been asked on this site before, but those don't seem to solve mine.
Any help would be appreciated, thanks! | 2018/02/14 | [
"https://Stackoverflow.com/questions/48788169",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5856554/"
] | The problem is that the `y_out` argument to `sess.run()` is `None`, whereas it must be a `tf.Tensor` (or tensor-like object, such as a `tf.Variable`) or a `tf.Operation`.
In your example, `y_out` is defined by the following code:
```
# define model
def complex_model(X,y,is_training):
pass
y_out = complex_model(X,y,is_training)
```
`complex_model()` doesn't return a value, so `y_out = complex_model(...)` will set `y_out` to `None`. I'm not sure if this function is representative of your real code, but it's possible that your real `complex_model()` function is also missing a `return` statement. | I believe that **mrry** is right.
If you take a second look at the notebook [Assignment 2 - Tensorflow.ipynb](https://github.com/BedirYilmaz/cs231-stanford/blob/master/assignment2/TensorFlow.ipynb), you will notice the description cell reads as follows:
>
> Training a specific model
>
>
> In this section, we're going to specify a model for you to construct.
> The goal here isn't to get good performance (that'll be next), but
> instead to get comfortable with understanding the TensorFlow
> documentation and configuring your own model.
>
>
> Using the code provided above as guidance, and using the following
> TensorFlow documentation, specify a model with the following
> architecture:
>
>
>
> ```
> 7x7 Convolutional Layer with 32 filters and stride of 1
> ReLU Activation Layer
> Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
> 2x2 Max Pooling layer with a stride of 2
> Affine layer with 1024 output units
> ReLU Activation Layer
> Affine layer from 1024 input units to 10 outputs
>
> ```
>
>
Which is **asking you to define a model** inside the function
```
# define model
def complex_model(X,y,is_training):
pass
```
Just like they did in
```
def simple_model(X,y):
# define our weights (e.g. init_two_layer_convnet)
# setup variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
W1 = tf.get_variable("W1", shape=[5408, 10])
b1 = tf.get_variable("b1", shape=[10])
# define our graph (e.g. two_layer_convnet)
a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1
h1 = tf.nn.relu(a1)
h1_flat = tf.reshape(h1,[-1,5408])
y_out = tf.matmul(h1_flat,W1) + b1
return y_out
```
Hope this helps! | 1,837 |
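The `None`-return behaviour described in the first answer is easy to demonstrate in isolation, independent of TensorFlow: a function whose body is just `pass` (or that otherwise falls off the end without a `return`) evaluates to `None`:

```python
def complex_model(X, y, is_training):
    pass  # falls off the end with no return statement

y_out = complex_model(None, None, True)
print(y_out is None)  # True
```

Passing that `None` on to `sess.run()` is exactly what triggers the "Fetch argument None has invalid type" error in the traceback above.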
11,100,380 | I've been studying tkinter in Python 3 and find it very hard to find good documentation and answers online. To help others struggling with the same problems, I decided to post a solution to a simple problem for which there seems to be no documentation online.
Problem: Create a wizard-like program that presents the user with a series of windows, where the user can move between the windows by clicking Next and Back buttons.
The solution is:
* Create one root window.
* Create as many frames as you have windows to present to the user. Attach all frames to the root window.
* Populate each frame with all the widgets it needs.
* When all the frames have been populated, hide each frame with the `grid_forget()` method but leave the first frame unhidden so that it becomes the visible one. All the child widgets on the frame will be hidden with the frame.
* When the user clicks on Next or Back buttons on a window, call a subroutine that hides other frames (with `grid_forget()`) and makes the one that is needed visible (with `grid()`).
* When you want the program to end, use the `destroy` method of the root window.
So you will be creating a single window and showing different frames on it.
(By the way, the best place to start studying tkinter is: <http://www.tkdocs.com/tutorial/index.html>)
Here is a sample implementation in Python3. It has 3 simple windows, each with a text label and two buttons to navigate through different windows.
```
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# Creates three "windows" that the user can navigate through using Back and Next - buttons.
import tkinter
import tkinter.ttk
def create_widgets_in_first_frame():
# Create the label for the frame
first_window_label = tkinter.ttk.Label(first_frame, text='Window 1')
first_window_label.grid(column=0, row=0, pady=10, padx=10, sticky=(tkinter.N))
# Create the button for the frame
first_window_quit_button = tkinter.Button(first_frame, text = "Quit", command = quit_program)
first_window_quit_button.grid(column=0, row=1, pady=10, sticky=(tkinter.N))
first_window_next_button = tkinter.Button(first_frame, text = "Next", command = call_second_frame_on_top)
first_window_next_button.grid(column=1, row=1, pady=10, sticky=(tkinter.N))
def create_widgets_in_second_frame():
# Create the label for the frame
second_window_label = tkinter.ttk.Label(second_frame, text='Window 2')
second_window_label.grid(column=0, row=0, pady=10, padx=10, sticky=(tkinter.N))
# Create the button for the frame
second_window_back_button = tkinter.Button(second_frame, text = "Back", command = call_first_frame_on_top)
second_window_back_button.grid(column=0, row=1, pady=10, sticky=(tkinter.N))
second_window_next_button = tkinter.Button(second_frame, text = "Next", command = call_third_frame_on_top)
second_window_next_button.grid(column=1, row=1, pady=10, sticky=(tkinter.N))
def create_widgets_in_third_frame():
# Create the label for the frame
third_window_label = tkinter.ttk.Label(third_frame, text='Window 3')
third_window_label.grid(column=0, row=0, pady=10, padx=10, sticky=(tkinter.N))
# Create the button for the frame
third_window_back_button = tkinter.Button(third_frame, text = "Back", command = call_second_frame_on_top)
third_window_back_button.grid(column=0, row=1, pady=10, sticky=(tkinter.N))
third_window_quit_button = tkinter.Button(third_frame, text = "Quit", command = quit_program)
third_window_quit_button.grid(column=1, row=1, pady=10, sticky=(tkinter.N))
def call_first_frame_on_top():
# This function can be called only from the second window.
# Hide the second window and show the first window.
second_frame.grid_forget()
first_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
def call_second_frame_on_top():
# This function can be called from the first and third windows.
# Hide the first and third windows and show the second window.
first_frame.grid_forget()
third_frame.grid_forget()
second_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
def call_third_frame_on_top():
# This function can only be called from the second window.
# Hide the second window and show the third window.
second_frame.grid_forget()
third_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
def quit_program():
root_window.destroy()
###############################
# Main program starts here :) #
###############################
# Create the root GUI window.
root_window = tkinter.Tk()
# Define window size
window_width = 200
window_heigth = 100
# Create frames inside the root window to hold other GUI elements. All frames must be created in the main program, otherwise they are not accessible in functions.
first_frame=tkinter.ttk.Frame(root_window, width=window_width, height=window_heigth)
first_frame['borderwidth'] = 2
first_frame['relief'] = 'sunken'
first_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
second_frame=tkinter.ttk.Frame(root_window, width=window_width, height=window_heigth)
second_frame['borderwidth'] = 2
second_frame['relief'] = 'sunken'
second_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
third_frame=tkinter.ttk.Frame(root_window, width=window_width, height=window_heigth)
third_frame['borderwidth'] = 2
third_frame['relief'] = 'sunken'
third_frame.grid(column=0, row=0, padx=20, pady=5, sticky=(tkinter.W, tkinter.N, tkinter.E))
# Create all widgets to all frames
create_widgets_in_third_frame()
create_widgets_in_second_frame()
create_widgets_in_first_frame()
# Hide all frames in reverse order, but leave first frame visible (unhidden).
third_frame.grid_forget()
second_frame.grid_forget()
# Start tkinter event - loop
root_window.mainloop()
``` | 2012/06/19 | [
"https://Stackoverflow.com/questions/11100380",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1466253/"
] | As you've taken the liberty to post an answer as a question, I'd like to post a comment as an answer and suggest that perhaps you should contribute this to TkDocs (click their [About tab](http://www.tkdocs.com/about.html) and they talk about contributing to the site).
I think it'd be better if that site were improved with more examples than to turn this site into a cookbook. I think you can also contribute to the [Active State recipes](http://code.activestate.com/recipes/), and they seem to be the carriers of the torch for Tcl/Tk, so Tkinter stuff makes a lot of sense there too. | Thanks for your work - I used it as inspiration for this example that, while extremely light in terms of content, is a cool way to make an arbitrary number of windows that you can switch between. You could move the location of the next and back buttons, turn them into arrows, whatever you want.
```
from tkinter import *
master=Tk()
class makeframe(object):
def __init__(self,i):
self.i=i
self.frame=Frame(master)
self.nextbutton=Button(self.frame,text='next',command=self.next)
self.nextbutton.grid(column=2,row=0)
self.backbutton=Button(self.frame,text='back',command=self.back)
self.backbutton.grid(column=0,row=0)
self.label=Label(self.frame,text='%i'%(self.i+1)).grid(column=1,row=0)
def next(self):
self.frame.grid_forget()
p[self.i+1].frame.grid()
def back(self):
self.frame.grid_forget()
p[self.i-1].frame.grid()
n=7
p=[0]*n
for i in range(n):
p[i]=makeframe(i)
p[0].frame.grid()
p[0].backbutton.config(state=DISABLED)
p[-1].nextbutton.config(state=DISABLED)
``` | 1,838 |
3,974,211 | I saw a JavaScript implementation of SHA-256.
I want to ask if it is safe (pros/cons, whatever) to use the SHA-256 algorithm (a JavaScript implementation, or maybe Python's standard modules) as a password generator:
I remember one password, put it in followed (etc.) by the website address, and use the generated text as the password for that website.
I repeat the process every time I need a password,
and the same for other websites. | 2010/10/20 | [
"https://Stackoverflow.com/questions/3974211",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/481170/"
] | I think you are describing the approach used by [SuperGenPass](http://supergenpass.com/):
Take a master password (same for every site), concatenate it with the site's domain name, and then hash the thing.
Yes, SHA-256 would be secure for that, likely more secure than what SuperGenPass uses. However, you will end up with very long passwords, too long for many sites to accept, and also not guaranteed to contain numbers, letters, and special characters at the same time, which some sites require.
Also, the general problem remains that if somehow (not by breaking the algorithm, but by other means) your master password does get leaked, all your passwords are belong to us.
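A minimal sketch of that scheme with Python's standard `hashlib` (my own illustration, not SuperGenPass's actual algorithm; the length and character-class problems mentioned above are deliberately ignored):

```python
import hashlib

def site_password(master, domain):
    # Deterministic per-site password: SHA-256 of master password + domain.
    return hashlib.sha256((master + domain).encode("utf-8")).hexdigest()

# Same inputs always give the same password; different domains give different ones.
print(site_password("my master secret", "example.com"))
```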
Completely random passwords are most secure (if we ignore the problem of storing them securely somewhere). | SHA-256 generates *very* long strings. You're better off using `random.choice()` with a string a fixed number of times. | 1,839 |
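A hedged sketch of the `random.choice()` idea from the second answer above, swapping in `secrets.choice`, since the plain `random` module is not designed for security-sensitive values:

```python
import secrets
import string

def random_password(length=16):
    # Pick `length` characters independently from letters, digits, and punctuation.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```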
52,236,797 | I built Python 3.7 on my Raspberry Pi Zero in an attempt to upgrade from Python 3.5.3.
The build was successful. I ran into "module not found" for smbus and switched that to smbus2; now when I import gpiozero I get "module not found". My DungeonCube.py program was working fine under Python 3.5.3, but now Python 3.7 seems to have trouble finding gpiozero.
This is what I did to test:
```
python3
Python 3.7.0 (default, Sept 7 2018, 14:22:04)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import gpiozero
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'gpiozero'
>>>
```
Does anyone know how to get Python 3.7 to see the gpiozero module? | 2018/09/08 | [
"https://Stackoverflow.com/questions/52236797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10333299/"
] | I had the same problem: I realized that I had used `pip3` to install the package, but I was trying to use it with `python`, which invokes Python 2. When I tried with `python3`, it worked just fine. | Did you download the gpiozero module onto the Raspberry Pi? It does not come preinstalled with Python.
You could try `sudo python3 -m pip install gpiozero` (note the `-m`; `python3 pip install ...` on its own will not run). If that doesn't work, replace `python3` with `python`. @GarryOsborne | 1,840
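When two interpreters disagree like this, it helps to see exactly which Python is running and where it searches for modules. A small diagnostic sketch (my own, not from either answer):

```python
import sys

# Which interpreter is this, and where does it look for installed packages?
print(sys.executable)           # path of the running interpreter
print(sys.version.split()[0])   # e.g. "3.7.0"
for path in sys.path:
    print(path)                 # gpiozero must be importable from one of these
```

Run it with both `python` and `python3`; if the printed paths differ, a package installed by one pip can be invisible to the other interpreter.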
15,976,639 | First of all: Please keep in mind that I'm very much a beginner at programming.
I'm trying to write a simple program in Python that will replace the consonants in a string with consonant+"o"+consonant. For example "b" would be replaced with "bob" and "d" would be replaced with "dod" (so the word "python" would be changed to "popytothohonon").
To do this I created a dictionary, that contained the pairs b:bob,c:coc,d:dod etc. Then I used the replace() command to read through the word and replace the consonants with their translation in the dictionary. The full piece of code looks like this:
```
def replacer(text):
consonant='bcdfghjklmnpqrstvwxz'
lexicon={}
for x in range(0,len(consonant)):
lexicon[x]=(consonant[x]),(consonant[x]+'o'+consonant[x])
for i,j in lexicon.items():
text=(text.replace(i,j))
return text
```
Now, when I try to call this function I get the following error:
```
Traceback (most recent call last):
File "D:\x\x.py", line 37, in <module>
print(replacer("python"))
File "D:\x\x.py", line 17, in replacer
text=(text.replace(i,j))
TypeError: Can't convert 'int' object to str implicitly
```
But I'm not using any ints! There's got to be something wrong with the dictionary, because everything works when I make it "by hand" like this:
```
list={'b':'bob', 'c':'coc', 'd':'dod', 'f':'fof', 'g':'gog', 'h':'hoh'......}
```
But when I print the "non-hand-made" dictionary everything seems to be in order:
```
{0: ('b', 'bob'), 1: ('c', 'coc'), 2: ('d', 'dod'), 3: ('f', 'fof')........
```
What am I doing wrong? | 2013/04/12 | [
"https://Stackoverflow.com/questions/15976639",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2275161/"
] | `lexicon` is a dictionary with integers as keys and tuples as values. When you iterate over its items, you're getting tuples of the form `(integer, tuple)`. You're then passing that integer and tuple to `text.replace` as `i` and `j`, which is why it's complaining. Perhaps you meant:
```
for i,j in lexicon.values():
...
```
For this simple replacement, `str.replace` is fine, but for more complicated replacements, the code will probably be more robust (and possibly execute faster!) if you [use `re.sub` instead](https://stackoverflow.com/a/15324369/748858).
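For instance, a one-pass `re.sub` version of the consonant doubling (a sketch of mine, not part of the original answer):

```python
import re

CONSONANTS = 'bcdfghjklmnpqrstvwxz'

def replacer(text):
    # Each consonant c becomes c + 'o' + c in a single pass over the string.
    return re.sub('[%s]' % CONSONANTS, lambda m: m.group(0) + 'o' + m.group(0), text)

print(replacer('python'))  # popytothohonon
```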
---
Also, as pointed out in the comments, for this case, a better data structure would be to use a `list`:
```
lexicon = [ (x,'{0}o{0}'.format(x)) for x in chars ]
```
Now you can build your dict from this list if you really want:
```
lexicon = dict(enumerate(lexicon))
```
but there's probably no need. And in this case, you'd iterate over `lexicon` directly:
```
for i,j in lexicon:
...
```
If you're only going to do this once, you could even do it lazily without ever materializing the list by using a generator expression:
```
lexicon = ( (x,'{0}o{0}'.format(x)) for x in chars )
``` | no ... your keys in the handmade version are strings ... your keys in the other version are ints ... ints have no replace method | 1,841
59,457,595 | I am taking the data science course from Udemy. After running the code to show the iris data set, it does not show. Instead, it downloads a data file.
I am running the following code:
```py
from IPython.display import HTML
HTML('<iframe src=http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data></iframe>')
```
Is the code correct? Could you please help me show the iris dataset in Python using an iframe?
link to the course: <https://www.udemy.com/course/introduction-to-data-science-using-python/learn/lecture/9387344#questions> | 2019/12/23 | [
"https://Stackoverflow.com/questions/59457595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4998064/"
] | If plot is not in the first line in the file, you could do this:
```
sed '1,/plot/!s/plot//'
```
If it can be on the first line, I see no other way but to loop it:
```
sed ':a;/plot/!{n;ba;};:b;n;s///;bb'
``` | In case you are ok with an `awk` solution, could you please try the following.
```
awk '/plot/ && ++count==1{print;next} !/plot/' Input_file
```
***Explanation:*** Adding explanation for above code.
```
awk ' ##Starting awk program from here.
/plot/ && ++count==1{ ##Checking condition if string plot is present and variable count value is 1 then do following.
print ##Printing the current line.
next ##next will skip all further statements from here.
} ##Closing BLOCK for above condition.
!/plot/ ##Checking condition if string plot is NOT present then do print of that line.
' Input_file ##Mentioning Input_file name here.
```
***NOTE:*** In case you want to save output into Input\_file itself then append `> temp && mv temp Input_file` to above code. | 1,846 |
32,879,614 | I was following the instructions [here](https://people.csail.mit.edu/hubert/pyaudio/compilation.html) and I'm having trouble getting the installation to work. Basically, the first part works fine. I downloaded portaudio, followed the instructions, and it all seemed to work.
However, when I tried `python3 setup.py install`, I got an error. The error came from the /src/\_portaudiomodule.c file, and it said that "The file Python.h could not be found". I don't really understand what's going on, because there was no Python.h file when I extracted the PyAudio archive. I don't know where the Python.h file was supposed to come from.
I'm kind of a noob to unix systems so I could have easily made a mistake somewhere. I've been trying to solve this for hours and I've had no luck so far. Thanks in advance for your help! | 2015/10/01 | [
"https://Stackoverflow.com/questions/32879614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047641/"
] | To install the latest version of pyaudio using conda:
```
source activate -your environment name-
pip install pyaudio
```
You may run into the following error when installing from pip:
```
src/_portaudiomodule.c:29:23: fatal error: portaudio.h: No such file or directory
#include "portaudio.h"
compilation terminated.
error: command 'gcc' failed with exit status 1
```
That is because you don't have the PortAudio development package installed. Install it with:
```
sudo apt-get install portaudio19-dev
``` | I was able to get it installed with [anaconda](https://www.continuum.io/downloads), using [this package](https://anaconda.org/bokeh/pyaudio).
Follow the install instructions for Linux [here](https://www.continuum.io/downloads#_unix), then do:
```
conda install -c bokeh pyaudio=0.2.7
``` | 1,847 |
23,507,902 | I use gsutil to transfer files from a Windows machine to Google Cloud Storage.
I have not used it for more than 6 months and now when I try it I get:
Failure: invalid\_grant
From researching this I suspect the access token is no longer valid as it has not been used for 6 months, and I need a refresh token?
I cannot seem to find how to get and use this.
thanks
Running `gsutil -DD config` produces the following output:
```
C:\Python27>python c:/gsutil/gsutil -DD config
DEBUG:boto:path=/pub/gsutil.tar.gz
DEBUG:boto:auth_path=/pub/gsutil.tar.gz
DEBUG:boto:Method: HEAD
DEBUG:boto:Path: /pub/gsutil.tar.gz
DEBUG:boto:Data:
DEBUG:boto:Headers: {}
DEBUG:boto:Host: storage.googleapis.com
DEBUG:boto:Params: {}
DEBUG:boto:establishing HTTPS connection: host=storage.googleapis.com, kwargs={'timeout': 70}
DEBUG:boto:Token: None
DEBUG:oauth2_client:GetAccessToken: checking cache for key *******************************
DEBUG:oauth2_client:FileSystemTokenCache.GetToken: key=******************************* not present (cache_file= c:\users\admini~1\appdata\local\temp\2\oauth2_client-tokencache._.ea******************************)
DEBUG:oauth2_client:GetAccessToken: token from cache: None
DEBUG:oauth2_client:GetAccessToken: fetching fresh access token...
INFO:oauth2client.client:Refreshing access_token connect: (accounts.google.com, 443)
send: 'POST /o/oauth2/token HTTP/1.1\r\nHost: accounts.google.com\r\nContent-Length: 177\r\ncontent-type: application/x- www-form-urlencoded\r\naccept-encoding: gzip, deflate\r\nuser-agent: Python-httplib2/0.7.7 (gzip)\r\n\r\nclient_secret=******************&grant_type=refresh_token&refresh_token=****************************************&client_ id=****************.apps.googleusercontent.com' reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Content-Type: application/json; charset=utf-8 header: Cache-Control: no-cache, no-store, max-age=0, must-revalidate header: Pragma: no-cache header: Expires: Fri, 01 Jan 1990 00:00:00 GMT header: Date: Thu, 08 May 2014 02:02:21 GMT header: Content-Disposition: attachment; filename="json.txt"; filename*=UTF-8''json.txt header: Content-Encoding: gzip header: X-Content-Type-Options: nosniff header: X-Frame-Options: SAMEORIGIN
header: X-XSS-Protection: 1; mode=block header: Server: GSE header: Alternate-Protocol: 443:quic header: Transfer-Encoding: chunked
INFO:oauth2client.client:Failed to retrieve access token: { "error" : "invalid_grant" }
Traceback (most recent call last):
File "c:/gsutil/gsutil", line 83, in <module> gslib.__main__.main() File "c:\gsutil\gslib_main_.py", line 151, in main command_runner.RunNamedCommand('ver', ['-l'])
File "c:\gsutil\gslib\command_runner.py", line 95, in RunNamedCommand self._MaybeCheckForAndOfferSoftwareUpdate(command_name, debug)):
File "c:\gsutil\gslib\command_runner.py", line 181, in _MaybeCheckForAndOfferSoftwareUpdate cur_ver = LookUpGsutilVersion(suri_builder.StorageUri(GSUTIL_PUB_TARBALL))
File "c:\gsutil\gslib\util.py", line 299, in LookUpGsutilVersion obj = uri.get_key(False)
File "c:\gsutil\third_party\boto\boto\storage_uri.py", line 342, in get_key generation=self.generation)
File "c:\gsutil\third_party\boto\boto\gs\bucket.py", line 102, in get_key query_args_l=query_args_l)
File "c:\gsutil\third_party\boto\boto\s3\bucket.py", line 176, in _get_key_internal query_args=query_args)
File "c:\gsutil\third_party\boto\boto\s3\connection.py", line 547, in make_request retry_handler=retry_handler
File "c:\gsutil\third_party\boto\boto\connection.py", line 947, in make_request retry_handler=retry_handler)
File "c:\gsutil\third_party\boto\boto\connection.py", line 838, in _mexe request.authorize(connection=self)
File "c:\gsutil\third_party\boto\boto\connection.py", line 377, in authorize connection._auth_handler.add_auth(self, *********)
File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_plugin.py", line 22, in add_auth self.oauth2_client.GetAuthorizationHeader()
File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_client.py", line 338, in GetAuthorizationHeader return 'Bearer %s' % self.GetAccessToken().token
File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_client.py", line 309, in GetAccessToken access_token = self.FetchAccessToken()
File "c:\gsutil\gslib\third_party\oauth2_plugin\oauth2_client.py", line 435, in FetchAccessToken credentials.refresh(http)
File "c:\gsutil\third_party\google-api-python-client\oauth2client\client.py", line 516, in refresh self._refresh(http.request)
File "c:\gsutil\third_party\google-api-python-client\oauth2client\client.py", line 653, in _refresh self._do_refresh_request(http_request)
File "c:\gsutil\third_party\google-api-python-client\oauth2client\client.py", line 710, in _do_refresh_request raise AccessTokenRefreshError(error_msg) oauth2client.client.AccessTokenRefreshError: invalid_grant
``` | 2014/05/07 | [
"https://Stackoverflow.com/questions/23507902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3610488/"
] | You can ask gsutil to configure itself. Go to the directory with gsutil and run this:
```
c:\gsutil> python gsutil config
```
Gsutil will lead you through the steps of setting up your credentials.
That said, access tokens only normally last about a half hour. It's more likely that the previously-configured refresh token was revoked for some reason. Alternately, you can only request new tokens at a certain rate. It's possible your account has been requesting many, many refresh tokens for some reason and has been temporarily rate limited by the access service. | Brandon Yarbrough gave me suggestions which solved this problem. He suspected that the .boto file was corrupted and suggested I delete it and run gsutil config again. I did this and it solved the problem. | 1,857 |
32,775,258 | Trying to write `to_csv` with the following code:
```
file_name = time.strftime("Box_Office_Data_%Y/%m/%d_%H:%M.csv")
allFilms.to_csv(file_name)
```
But am getting the following error:
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-36-aa2d6e13e9af> in <module>()
9
10 file_name = time.strftime("Box_Office_Data_%Y/%m/%d_%H:%M.csv")
---> 11 allFilms.to_csv(file_name)
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/core/frame.py in to_csv(self, path_or_buf, sep, na_rep, float_format, columns, header, index, index_label, mode, encoding, quoting, quotechar, line_terminator, chunksize, tupleize_cols, date_format, doublequote, escapechar, decimal, **kwds)
1187 escapechar=escapechar,
1188 decimal=decimal)
-> 1189 formatter.save()
1190
1191 if path_or_buf is None:
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/core/format.py in save(self)
1440 else:
1441 f = com._get_handle(self.path_or_buf, self.mode,
-> 1442 encoding=self.encoding)
1443 close = True
1444
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/core/common.py in _get_handle(path, mode, encoding, compression)
2827 f = open(path, mode, encoding=encoding)
2828 else:
-> 2829 f = open(path, mode, errors='replace')
2830 else:
2831 f = open(path, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'Box_Office_Data_2015/09/24_22:11.csv'
```
Since I'm writing to a csv, why would it be searching for a file/directory that is not yet created?
Anyone's help would be greatly appreciated :) | 2015/09/25 | [
"https://Stackoverflow.com/questions/32775258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5314975/"
] | The error is clear -
```
FileNotFoundError: [Errno 2] No such file or directory: 'Box_Office_Data_2015/09/24_22:11.csv'
```
If you get this error when trying to do `.to_csv()`, it means that the directory in which you are trying to save the file does not exist. So in your case, the directory `Box_Office_Data_2015/09/` does not exist. It seems like you actually meant `Box_Office_Data_2015/09/24_22:11.csv` to be a filename (with no directory); that isn't possible, because `09/` would be considered a directory.
A simple solution would be to use something other than `/` in between the year/month/day . Example -
```
file_name = time.strftime("Box_Office_Data_%Y_%m_%d_%H:%M.csv")
allFilms.to_csv(file_name)
``` | In your code `file_name = time.strftime("Box_Office_Data_%Y/%m/%d_%H:%M.csv")`.
File name was like this `Box_Office_Data_2015/09/24_22:11.csv`, which means a path to a file.
Try to replace the `/` with something like `_`.
Try this:
`file_name = time.strftime("Box_Office_Data_%Y_%m_%d_%H:%M.csv")` | 1,859 |
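If the date-based subfolders were actually intended, a third option (my own sketch; neither answer suggests it) is to create the missing directories before saving. Note the `:` is also swapped for `-`, since colons are not valid in filenames on every platform:

```python
import os
import tempfile
import time

base = tempfile.mkdtemp()  # stand-in for wherever the CSVs should live
file_name = os.path.join(base, time.strftime("Box_Office_Data_%Y/%m/%d_%H-%M.csv"))
os.makedirs(os.path.dirname(file_name), exist_ok=True)  # create the Year/Month dirs
# allFilms.to_csv(file_name)  # would now find the directory
print(file_name)
```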
56,507,997 | I did `!pip install tree` in a Google Colab notebook. It showed that `Pillow in /usr/local/lib/python3.6/dist-packages (from tree) (4.3.0)`. But when I use `!tree`, the notebook reminded me that `bin/bash: tree: command not found`. How can I solve it?
I tried several times but all failed.
It showed:
```
Collecting tree
Downloading https://files.pythonhosted.org/packages/29/3f/63cbed2909786f0e5ac30a4ae5791ad597c6b5fec7167e161c55bba511ce/Tree-0.2.4.tar.gz
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from tree) (4.3.0)
Collecting svgwrite (from tree)
Downloading https://files.pythonhosted.org/packages/87/ce/3259f75aebb12d8c7dd9e8c479ad4968db5ed18e03f24ee4f6be9d9aed23/svgwrite-1.2.1-py2.py3-none-any.whl (66kB)
|████████████████████████████████| 71kB 23.9MB/s
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from tree) (41.0.1)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from tree) (7.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from Pillow->tree) (0.46)
Requirement already satisfied: pyparsing>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from svgwrite->tree) (2.4.0)
Building wheels for collected packages: tree
Building wheel for tree (setup.py) ... done
Stored in directory: /root/.cache/pip/wheels/c7/08/aa/42261411808c634cd1d0e9fe6cde5e78bf47c2c8028f3930af
Successfully built tree
Installing collected packages: svgwrite, tree
Successfully installed svgwrite-1.2.1 tree-0.2.4
!pip install tree
!tree
```
I expect it shows the structure of the files in the directory. | 2019/06/08 | [
"https://Stackoverflow.com/questions/56507997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11618792/"
] | You seem to have confused pip with the local package manager?
`!apt-get install tree` does what you want:
```
.
└── sample_data
├── anscombe.json
├── california_housing_test.csv
├── california_housing_train.csv
├── mnist_test.csv
├── mnist_train_small.csv
└── README.md
1 directory, 6 files
``` | I think you have installed the wrong `tree` with pip: <https://pypi.org/project/Tree/> is an unrelated Python package, not the directory-listing tool.
On a Mac, the right way to install the real `tree` is `brew install tree`; on Debian / Ubuntu Linux / Mint it is:
```
sudo apt-get install tree
```
(`sudo apt install tree` also works on recent releases.) | 1,860
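If installing system packages is not possible at all, a tiny pure-Python stand-in for `tree` (my own sketch, not from either answer) covers simple cases:

```python
import os

def print_tree(root, prefix=""):
    # Recursively print a tree-style listing, similar to the `tree` command.
    entries = sorted(os.listdir(root))
    for i, name in enumerate(entries):
        last = (i == len(entries) - 1)
        print(prefix + ("└── " if last else "├── ") + name)
        path = os.path.join(root, name)
        if os.path.isdir(path):
            print_tree(path, prefix + ("    " if last else "│   "))

print_tree(".")
```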
68,160,205 | Before I describe the problem, here is a basic run-down of the overall process to give you a clearer picture. Additionally, I am a novice at PHP:
1. I have a WordPress website that uses CPanel as its web hosting software
2. The WordPress website has a form (made by UFB) that has the user upload an image
3. The image gets directed to the upload folder (/uploads) by using `image_upload.php`
4. The image is then downloaded onto a computer, and a program is run which generates numbers about the picture(the number generator program is in python)
5. After the numbers are generated, it calls on `report.php` and `template.xlsm`
6. Report.php gets those generated numbers and then puts them into their designated places on the xlsm file
7. The xlsm file is then converted into a pdf, which is then emailed to the user that submitted the picture.
I inherited all of this code from someone else who wanted me to help them on this project. Here is my problem:
*I don't understand how the PHP files are being called. I have python code ready to run the number generator online, however, I can't do this without figuring how the PHP files are being called.*
I understand what the PHP files do, I just don't understand how they are being called. I tried doing a `-grep` search for both `image_upload.php` and `report.php`, but I come up empty. There aren't any other PHP files that seem to do an `include(xyz.php)`, which is supposed to be how PHP files are called. I don't understand what calls `image_upload.php` to get the pictures moved into the /uploads folder. I also don't understand what calls `report.php` to make it run. I tried looking in `functions.php`, where most of the other PHP files are called, but `report.php` and `image_upload.php` aren't.
Please help me! If any clarification is needed, just comment, and I will try to provide any help I can! | 2021/06/28 | [
"https://Stackoverflow.com/questions/68160205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16332203/"
] | Set the [Request.URL](https://pkg.go.dev/net/http#Request.URL) to an [opaque URL](https://pkg.go.dev/net/url#URL.Opaque). The opaque URL is written to the request line as is.
```
request := &http.Request{
URL: &url.URL{Opaque: "http://127.0.0.1:10019/system?action=add_servers"},
Body: requestBody, //io.ReadCloser containing the body
Method: http.MethodPost,
ContentLength: int64(len(postBody)),
Header: make(http.Header),
Proto: "HTTP/1.1",
ProtoMajor: 1,
ProtoMinor: 1,
}
```
The [http.NewRequest](https://pkg.go.dev/net/http#NewRequest) and [http.NewRequestWithContext](https://pkg.go.dev/net/http#NewRequestWithContext) functions are the preferred way to create a request value. Set Request.URL to the opaque URL after creating the request with one of these functions:
```
u := "http://127.0.0.1:10019/system?action=add_servers"
request, err := http.NewRequest("POST", u, requestBody)
if err != nil {
// handle error
}
request.URL = &url.URL{Opaque: u}
res, err := http.DefaultClient.Do(request)
``` | What is the value of the URL variable?
I think you can define the URL variable using a specific host:
```
var url = "http://127.0.0.1:10019/system?action=add_servers"
```
In case your path is dynamic from another variable, you can use `fmt.Sprintf`, like below
```
// assume the path comes from another variable (it already starts with "/")
var path = "/system?action=add_servers"
url = fmt.Sprintf("http://127.0.0.1:10019%s", path)
``` | 1,863 |
45,948,854 | I have this situation :
* *File1* named **source.txt**
* *File2* named **destination.txt**
**source.txt** contains these strings:
```
MSISDN=213471001120
MSISDN=213471001121
MSISDN=213471001122
```
I want to see **destination.txt** contains these cases:
MSISDN=213471001120 **only** for First execution of python code
MSISDN=213471001121 **only** for second execution of python code
MSISDN=213471001122 **only** for third execution of python code
I have this code:
```
F1 = open("source.txt", "r")
txt = F1.read(19)
#print txt
F2 = open("destination.txt", "w")
F2.write(txt)
F3=open("source.txt", "w")
for ligne in F1:
if ligne==txt:
F3.write("")
break
F1.close()
F2.close()
F3.close()
```
The **source.txt** file is empty after the first execution of the code.
Thanks in advance. | 2017/08/29 | [
"https://Stackoverflow.com/questions/45948854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6056999/"
] | You have to read the whole file, before writing again, because mode `w` empties the file:
```
with open('source.txt') as lines:
lines = list(lines)
with open('destination.txt', 'w') as first:
first.write(lines[0])
with open('source.txt', 'w') as other:
other.writelines(lines[1:])
``` | You're gonna need an external file to store the state of "how many times have I run before"
```
with open('source.txt', 'r') as source, open('counter.txt', 'r') as counter, open('destination.txt', 'w') as destination:
num_to_read = int(counter.readline().strip())
    for _ in range(num_to_read):
        line_to_write = source.readline()
    destination.write(line_to_write)  # write only the final line read, not every line
with open('counter.txt', 'w') as counter:
    counter.write(str(num_to_read + 1))  # write() needs a str, not an int
```
I've changed your calls to `open` to use context managers so you don't need to call `close` at the end.
I haven't run this code, so there might be some bugs. In particular, the case of `counter.txt` not existing isn't handled. I'll leave that up to you. | 1,864 |
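A self-contained version of the first answer's approach, wrapped in a function so it is easy to test: each call moves one line from the source file into the destination file (file names as in the question):

```python
def pop_first_line(source_path, destination_path):
    # Overwrite the destination with the first line of the source,
    # then rewrite the source without that line.
    with open(source_path) as f:
        lines = f.readlines()
    if not lines:
        return None
    with open(destination_path, "w") as out:
        out.write(lines[0])
    with open(source_path, "w") as f:
        f.writelines(lines[1:])
    return lines[0]

# pop_first_line("source.txt", "destination.txt")
```

Each call hands out the next `MSISDN=...` line and shrinks the source file by one line, which matches the three-run behaviour the question asks for.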
61,877,065 | I am trying to implement Okapi BM25 in python. While I have seen some tutorials how to do it, it seems I am stuck in the process.
So I have a collection of documents (which has columns 'id' and 'text') and queries (which has columns 'id' and 'text'). I have done the pre-processing steps and I have my documents and queries as lists:
```
documents = list(train_docs['text']) #put the documents text to list
queries = list(train_queries_all['text']) #put the queries text to list
```
Then for BM25 I do this:
```
pip install rank_bm25
```
#calculate BM25
```
from rank_bm25 import BM25Okapi
bm25 = BM25Okapi(documents)
```
#compute the score
`bm_score = BM25Okapi.get_scores(documents, query=queries)`
But it wouldn't work.
---
Then I tried to do this:
```
import math
import numpy as np
from multiprocessing import Pool, cpu_count
```
`nd = len(documents) # corpus_size = 3612` (I am not sure if this is necessary)
```
class BM25:
def __init__(self, documents, tokenizer=None):
self.corpus_size = len(documents)
self.avgdl = 0
self.doc_freqs = []
self.idf = {}
self.doc_len = []
self.tokenizer = tokenizer
if tokenizer:
documents = self._tokenize_corpus(documents)
nd = self._initialize(documents)
self._calc_idf(nd)
def _initialize(self, documents):
nd = {} # word -> number of documents with word
num_doc = 0
for document in documents:
self.doc_len.append(len(document))
num_doc += len(document)
frequencies = {}
for word in document:
if word not in frequencies:
frequencies[word] = 0
frequencies[word] += 1
self.doc_freqs.append(frequencies)
for word, freq in frequencies.items():
if word not in nd:
nd[word] = 0
nd[word] += 1
self.avgdl = num_doc / self.corpus_size
return nd
def _tokenize_corpus(self, documents):
pool = Pool(cpu_count())
tokenized_corpus = pool.map(self.tokenizer, documents)
return tokenized_corpus
def _calc_idf(self, nd):
raise NotImplementedError()
def get_scores(self, queries):
raise NotImplementedError()
def get_top_n(self, queries, documents, n=5):
assert self.corpus_size == len(documents), "The documents given don't match the index corpus!"
scores = self.get_scores(queries)
top_n = np.argsort(scores)[::-1][:n]
return [documents[i] for i in top_n]
class BM25T(BM25):
def __init__(self, documents, k1=1.5, b=0.75, delta=1):
# Algorithm specific parameters
self.k1 = k1
self.b = b
self.delta = delta
super().__init__(documents)
def _calc_idf(self, nd):
for word, freq in nd.items():
idf = math.log((self.corpus_size + 1) / freq)
self.idf[word] = idf
def get_scores(self, queries):
score = np.zeros(self.corpus_size)
doc_len = np.array(self.doc_len)
for q in queries:
q_freq = np.array([(doc.get(q) or 0) for doc in self.doc_freqs])
score += (self.idf.get(q) or 0) * (self.delta + (q_freq * (self.k1 + 1)) /
(self.k1 * (1 - self.b + self.b * doc_len / self.avgdl) + q_freq))
return score
```
and then I try to get the scores:
```
score = BM25.get_scores(self=documents, queries)
```
But I get this message:
score = BM25.get\_scores(self=documents, queries)
SyntaxError: positional argument follows keyword argument
---
Does anyone have an idea why there is this error? Thank you in advance. | 2020/05/18 | [
"https://Stackoverflow.com/questions/61877065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13498967/"
] | `**kwargs` expects arguments to be passed by keyword, not by position. Once you do that, you can access the individual kwargs like you would in any other dictionary:
```
class Student:
def __init__(self, **kwargs):
self.name = kwargs.get('name')
self.age = kwargs.get('age')
self.salary = kwargs.get('salary')
def show_name(self):
print("Name is : " + self.name)
def show_age(self):
print("Age is : " + str(self.age))
def show_salary(self):
print(f"Salary of {self.name} is : " + str(self.salary))
st = Student(name='John', age=25, salary=15000)
st2 = Student(name='Doe', age=25, salary=1500000)
st.show_salary()
st2.show_salary()
```
If you want to pass these arguments by position, you should use `*args` instead. | **kwargs** is created as a dictionary inside the scope of the function. You need to pass keyword arguments, which become the keys in that dictionary. (Try uncommenting the print statement below.)
```
class Student:
def __init__(self, **kwargs):
#print(kwargs)
self.name = kwargs["name"]
self.age = kwargs["age"]
self.salary = kwargs["salary"]
def show_name(self):
print("Name is : " + self.name)
def show_age(self):
print("Age is : " + str(self.age))
def show_salary(self):
print(f"Salary of {self.name} is : " + str(self.salary))
st = Student(name = 'John',age = 25, salary = 15000)
st2 = Student(name = 'Doe',age = 25,salary = 1500000)
st.show_salary()
st2.show_salary()
``` | 1,866 |
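Both answers come down to the same two rules: keyword arguments at the call site populate the `kwargs` dict, and a positional argument may never follow a keyword argument. A minimal sketch (a toy function, not the asker's BM25 class) exercising both:

```python
def describe(**kwargs):
    # dict.get returns None (or a supplied default) for missing keys
    return "%s, age %s" % (kwargs.get('name'), kwargs.get('age', 0))

print(describe(name='John'))         # John, age 0
print(describe(name='Doe', age=25))  # Doe, age 25
# describe(name='Doe', 25) would raise the asker's error:
# SyntaxError: positional argument follows keyword argument
```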
59,596,957 | First, let me say: I know I shouldn't be iterating over a dataframe per:
[How to iterate over rows - Don't!](https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas/55557758#55557758)
[How to iterate over rows...](https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas)
etc.
However, for my application I don't think I have a better option, although I am relatively new to python & pandas and may simply lack the knowledge. However, with my iteration, as I am iterating over rows, I need to access an adjacent row's data, which I can't figure out how to do with vectorization or list comprehension.
Which leaves me with iteration. I have seen several posts on iterrows() and itertuples() which will work. Before I found out about these though, i tried:
```
for i in workingDF.index:
if i==0:
list2Add = ['NaN']
compareItem = workingDF.at[0,'name']
else:
if (workingDF.at[i,'name'] != compareItem):
list2Add.append('NaN')
compareItem = workingDF.at[i,'name']
else:
currentValue = workingDF.at[i,'value']
yesterdayValue = workingDF.at[(i-1),'value']
r = currentValue - yesterdayValue
list2Add.append(r)
```
Anyway, my naive code seemed to work fine/as intended (so far).
So the question is: Is there some inherent reason not to use "for i in workingDF.index" in favor of the standard iterrows() and itertuples? (Presumably there must be since those are the "recommended" methods...)
Thanks in advance.
Jim
EDIT:
An example was requested. In this example each row contains a name, testNumber, and score. The example code creates a new column labelled "change" which represents the change of the current score compared to the most recent prior score. Example code:
```
import pandas as pd
def createDF():
# list of name, testNo, score
nme2 = ["bob", "bob", "bob", "bob", "jim", "jim", "jim" ,"jim" ,"ed" ,"ed" ,"ed" ,"ed"]
tstNo2 = [1,2,3,4,1,2,3,4,1,2,3,4]
scr2 = [82, 81, 80, 79,93,94,95,98,78,85,90,92]
# dictionary of lists
dict = {'name': nme2, 'TestNo': tstNo2, 'score': scr2}
workingDF = pd.DataFrame(dict)
return workingDF
def addChangeColumn(workingDF):
"""
returns a Dataframe object with an added column named
"change" which represents the change in score compared to
most recent prior test result
"""
for i in workingDF.index:
if i==0:
list2Add = ['NaN']
compareItem = workingDF.at[0,'name']
else:
if (workingDF.at[i,'name'] != compareItem):
list2Add.append('NaN')
compareItem = workingDF.at[i,'name']
else:
currentScore = workingDF.at[i,'score']
yesterdayScore = workingDF.at[(i-1),'score']
r = currentScore - yesterdayScore
list2Add.append(r)
modifiedDF = pd.concat([workingDF, pd.Series(list2Add, name ='change')], axis=1)
return(modifiedDF)
if __name__ == '__main__':
myDF = createDF()
print('myDF is:')
print(myDF)
print()
newDF = addChangeColumn(myDF)
print('newDF is:')
print(newDF)
```
Example Output:
```
myDF is:
name TestNo score
0 bob 1 82
1 bob 2 81
2 bob 3 80
3 bob 4 79
4 jim 1 93
5 jim 2 94
6 jim 3 95
7 jim 4 98
8 ed 1 78
9 ed 2 85
10 ed 3 90
11 ed 4 92
newDF is:
name TestNo score change
0 bob 1 82 NaN
1 bob 2 81 -1
2 bob 3 80 -1
3 bob 4 79 -1
4 jim 1 93 NaN
5 jim 2 94 1
6 jim 3 95 1
7 jim 4 98 3
8 ed 1 78 NaN
9 ed 2 85 7
10 ed 3 90 5
11 ed 4 92 2
```
Thank you. | 2020/01/05 | [
"https://Stackoverflow.com/questions/59596957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/554517/"
] | In short, the answer is the performance benefit of using iterrows. This [post](https://engineering.upside.com/a-beginners-guide-to-optimizing-pandas-code-for-speed-c09ef2c6a4d6) could better explain the differences between the various options. | My problem is that I wanted to create a new column which was the difference of a value in the current row and a value in a prior row without using iteration.
I think the more "panda-esque" way of doing this (without iteration) would be to use dataframe.shift() to create a new column which contains the prior rows data shifted into the current row so all necessary data is available in the current row. | 1,868 |
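The `shift()` idea can be tried on a cut-down version of the question's frame (toy data below, not the full example):

```python
import pandas as pd

df = pd.DataFrame({
    'name':  ['bob', 'bob', 'jim', 'jim'],
    'score': [82, 81, 93, 94],
})
# shift(1) within each name group moves the prior row's score onto the
# current row, so "change" becomes a plain vectorized subtraction
df['change'] = df['score'] - df.groupby('name')['score'].shift(1)
print(df['change'].tolist())  # [nan, -1.0, nan, 1.0]
```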
36,190,757 | I am trying to use the One Million Song Dataset, for this i had to install python tables, numpy, cython, hdf5, numexpr, and so.
Yesterday i managed to install all i needed, and after having some troubles with hdf5, i downloaded the precompiled binary packages and saved them in my /bin folder, and the respective libraries in /lib , after that i tested this python script : `http://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt`
and it worked fine, to be clear the way i made it work was to first run the script and start installing the needed dependencies, but today i restarted my laptop and it didn't work, now it throws me this error on the console :
```
python2.7 script.py
```
returns :
```
import numpy as np # get it at: http://numpy.scipy.org/
from . import random
from .mtrand import *
ImportError: /home/francisco/.local/lib/python2.7/site-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
```
seems to me that there is a missing variable in such file, my guess is that the script is looking for the numpy library in the wrong place, since i made so many failed installations maybe i broke something and it only worked out because it was loaded in the temporal memory of the computer.
I tried installing Anaconda: I created a new environment and installed the packages with the Anaconda package manager, and even though listing all packages returns:
```
# packages in environment at /home/francisco/anaconda2/envs/Music:
#
biopython 1.66 np110py27_0
cython 0.23.4 <pip>
hdf5 1.8.15.1 2
mkl 11.3.1 0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
openssl 1.0.2g 0
pillow 3.1.1 <pip>
pip 8.1.1 py27_0
pytables 3.2.2 np110py27_1
python 2.7.11 0
python-ldap 2.4.25 <pip>
readline 6.2 2
reportlab 3.3.0 <pip>
requirements 0.1 <pip>
setuptools 20.3 py27_0
sqlite 3.9.2 0
tables 3.2.2 <pip>
tk 8.5.18 0
wheel 0.29.0 py27_0
zlib 1.2.8 0
```
i still get the same error. I really need help and don't know what else to try. Thanks. | 2016/03/23 | [
"https://Stackoverflow.com/questions/36190757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2869143/"
] | I had the same problem, probably you have installed numpy without Anaconda, so there is a conflict because of this, which numpy to use: that one installed with pip or with conda. When I removed non-Anaconda numpy, error gone.
```
pip uninstall numpy
``` | First remove `numpy` from `/usr/local/lib/python2.7/dist-packages/numpy-1.11.0-py2.7-linux-x86_64.egg`
and then use the following command
`sudo pip install numpy scipy`
I had solved this error in my case.
59,227,170 | i run a python program using `beautifulsoup` and `requests` to scrape embedded videos URL , but to download theses videos i need to bypass a ads popups and `javascript` reload only then the `m3u8` files start to appear in the network traffic;
so i need to simulate the clicks to get to the `javascript` reload (if there's a method better than selenium, trying to reduce script dependencies) and then when the `m3u8`files appear i need to get their url. | 2019/12/07 | [
"https://Stackoverflow.com/questions/59227170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12496538/"
] | There is no rule against using selenium side by side with beautifulsoup and requests. You can use selenium to bypass the clicks, popups and ads, and use beautifulsoup and requests to download the videos after the urls have appeared. You can redirect selenium to different urls using the results you get from running a `requests.get()` or similar, or you could resort to using **[scrapy](https://scrapy.org/)** (a full-blown scraping framework); with a couple of third-party plugins to handle the JavaScript and ads you should be able to get those videos in no time. | >
> i run a python program using beautifulsoup and requests to scrape embedded videos URL , but to download theses videos i need to bypass a ads popups and javascript reload only then the m3u8 files start to appear in the network traffic;
>
>
>
> so i need to simulate the clicks to get to the javascript reload (if there's a method better than selenium, trying to reduce script dependencies) and then when the m3u8 files appear i need to get their url
Can I have your code please?
24,853,027 | I have installed Django 1.6.5 with PIP and Python 2.7.8 from the website.
I ran `django-admin.py startproject test123`, switched to `test123` directory, and ran the command `python manage.py runserver`, then i get this:
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 261, in fetch_command
commands = get_commands()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/management/__init__.py", line 107, in get_commands
apps = settings.INSTALLED_APPS
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 50, in _setup
self._configure_logging()
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/conf/__init__.py", line 72, in _configure_logging
from django.utils.log import DEFAULT_LOGGING
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/log.py", line 7, in <module>
from django.views.debug import ExceptionReporter, get_exception_reporter_filter
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/views/debug.py", line 10, in <module>
from django.http import (HttpResponse, HttpResponseServerError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/__init__.py", line 2, in <module>
from django.http.request import (HttpRequest, QueryDict, UnreadablePostError,
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/http/request.py", line 11, in <module>
from django.core import signing
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/core/signing.py", line 45, in <module>
from django.utils.crypto import constant_time_compare, salted_hmac
File "/Library/Python/2.7/site-packages/Django-1.6.5-py2.7.egg/django/utils/crypto.py", line 6, in <module>
import hmac
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hmac.py", line 8, in <module>
from operator import _compare_digest as compare_digest
ImportError: cannot import name _compare_digest
```
Found out that operator is a standard Python library. Why can't it import it?
P.S. I did try it in the command line, I can import the operator module, but I get an error on this statement: `from operator import _compare_digest as compare_digest` | 2014/07/20 | [
"https://Stackoverflow.com/questions/24853027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/637619/"
] | Followed this SO answer:
[Uninstall python.org version of python2.7 in favor of default OS X python2.7](https://stackoverflow.com/questions/13538586/uninstall-python-org-version-of-python2-7-in-favor-of-default-os-x-python2-7)
Then changed my `.bash_profile` Python path to `/usr/lib/python` for the default OSX python path.
Uninstalled Django and MySQL-Python:
```
sudo pip uninstall django
sudo pip uninstall MySQL-Python
```
And then again reinstalled everything, but with `MySQL-Python` being the first and second Django.
After these steps, everything is working. | You most likely have another file named `operator.py` on your `PYTHONPATH` (probably in the current working directory), which shadows the standard library `operator` module.
Remove or rename the file. | 1,880 |
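A quick way to confirm or rule out that kind of shadowing is to ask Python where the module it imported actually lives; a sketch:

```python
import operator

# With nothing shadowing it, this path points into the standard library;
# a rogue ./operator.py in the working directory would show up here instead
print(operator.__file__)
```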
45,718,546 | In python 3, you can now open a file safely using the `with` clause like this:
```
with open("stuff.txt") as f:
data = f.read()
```
Using this method, I don't need to worry about closing the connection
I was wondering if I could do the same for the multiprocessing. For example, my current code looks like:
```
pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
pool.starmap(function,list)
pool.close()
pool.join()
```
Is there any way I could use a with clause to simplify this? | 2017/08/16 | [
"https://Stackoverflow.com/questions/45718546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2208112/"
] | ```
with multiprocessing.Pool( ... ) as pool:
pool.starmap( ... )
```
<https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool>
>
> New in version 3.3: Pool objects now support the context management protocol – see Context Manager Types. **enter**() returns the pool object, and **exit**() calls terminate().
>
>
>
You can see an example at the bottom of the `Pool` section. | Although it's more than what the OP asked, if you want something that will work for both Python 2 and Python 3, you can use:
```py
# For python 2/3 compatibility, define pool context manager
# to support the 'with' statement in Python 2
if sys.version_info[0] == 2:
from contextlib import contextmanager
@contextmanager
def multiprocessing_context(*args, **kwargs):
pool = multiprocessing.Pool(*args, **kwargs)
yield pool
pool.terminate()
else:
multiprocessing_context = multiprocessing.Pool
```
After that, you can use multiprocessing the regular Python 3 way, regardless of which version of Python you are using. For example:
```
def _function_to_run_for_each(x):
return x.lower()
with multiprocessing_context(processes=3) as pool:
results = pool.map(_function_to_run_for_each, ['Bob', 'Sue', 'Tim'])
print(results)
```
Now, this will work in Python 2 or Python 3. | 1,885 |
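For completeness, a runnable Python 3 sketch of the context-manager form (a toy `square` job standing in for the asker's `function` and `list`):

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    # __exit__ calls terminate(), so no explicit close()/join() is needed
    with Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```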
48,512,269 | Hi guys I am trying to read from subprocess.PIPE without blocking the main process. I have found this code:
```
import sys
from subprocess import PIPE, Popen
from threading import Thread
try:
from Queue import Queue, Empty
except ImportError:
from queue import Queue, Empty # python 3.x
ON_POSIX = 'posix' in sys.builtin_module_names
def enqueue_output(out, queue):
for line in iter(out.readline, b''):
queue.put(line)
out.close()
p = Popen(['myprogram.exe'], stdout=PIPE, bufsize=1, close_fds=ON_POSIX)
q = Queue()
t = Thread(target=enqueue_output, args=(p.stdout, q))
t.daemon = True # thread dies with the program
t.start()
# ... do other things here
# read line without blocking
try: line = q.get_nowait() # or q.get(timeout=.1)
except Empty:
print('no output yet')
else: # got line
# ... do something with line
```
The code does not return anything. I am using Python 3 on Windows.
Do you have any ideas what might be the problem? | 2018/01/30 | [
"https://Stackoverflow.com/questions/48512269",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7395188/"
] | I was away from my project for a long time but finally I manged to solve the issue.
```
from subprocess import PIPE, Popen
from threading import Thread

p = Popen(['myprogram.exe'], stdout=PIPE)

# define the worker before handing it to Thread, otherwise the
# name 'results' is not yet bound when Thread(...) runs
def results():
    a = p.stdout.readline()

t = Thread(target=results)
t.daemon = True
t.start()
```
Maybe this is not exactly the right way to do it but it is working for me. I am only posting it because I personally believe that whoever asks a question should post the solution when they have found it. | On a unix environment you can simply make the stdout/stderr/stdin file descriptors nonblocking like so:
```
import os, fcntl
from subprocess import Popen, PIPE
def nonblock(stream):
fcntl.fcntl(stream, fcntl.F_SETFL, fcntl.fcntl(stream, fcntl.F_GETFL) | os.O_NONBLOCK)
proc = Popen("for ((;;)) { date; sleep 1; }", shell=True, stdout=PIPE, stderr=PIPE, universal_newlines=True,
executable='/bin/bash')
nonblock(proc.stdout)
while True:
for line in proc.stdout.readlines():
print(line, end="")
``` | 1,886 |
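The queue half of the question's recipe can be exercised without any subprocess, which makes the non-blocking behaviour easier to see (a toy producer thread stands in for the pipe reader):

```python
import queue
import threading

q = queue.Queue()
threading.Thread(target=lambda: q.put(b'hello\n'), daemon=True).start()

try:
    # get(timeout=...) waits briefly; get_nowait() raises Empty at once
    # if the producer has not filled the queue yet
    line = q.get(timeout=1)
except queue.Empty:
    line = None
print(line)  # b'hello\n'
```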
28,371,555 | I have written this script to test a single ip address for probing specific user names on smtp servers for a pentest. I am trying now to port this script to run the same tests, but to a range of ip addresses instead of a single one. Can anyone shed some light as to how that can be achieved?
```
#!/usr/bin/python
import socket
import sys
users= []
for line in sys.stdin:
line = line.strip()
if line != '':
users.append(line)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((sys.argv[1], 25))
fp = s.makefile('rwb')
fp.readline()
fp.write('HELO test.example.com\r\n')
fp.flush()
fp.readline()
for user in users:
fp.write('VRFY %s\r\n\ ' % user)
fp.flush()
print '%s: %s' % (user, fp.readline().strip())
fp.write('QUIT\r\n')
fp.flush()
s.close()
``` | 2015/02/06 | [
"https://Stackoverflow.com/questions/28371555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4283164/"
] | I would implement this by turning your code as it stands into a function to probe a single host, taking the host name/ip as an argument. Then, loop over your list of hosts (either from the command line, a file, interactive querying of a user, or wherever) and make a call to your single host probe for each host in the loop. | Ok, so here is what I have done to get this going.
The solution is not elegant at all, but it does the trick. I could not spend more time trying to find a solution purely in Python, so after reading the answer from bmhkim above (thanks for the tips) I decided to write a bash script that iterates over a range of IP addresses and calls my Python script for each one.
```
#!/bin/bash
for ip in $(seq 1 254); do
python smtp-probe.py 192.168.1.$ip <users.txt
done
```
I have had some problems with the output, since it was giving me the servers' responses to my probing attempts but not the actual IP addresses that were sending those responses, so I have adapted the original script to this:
```
#!/usr/bin/python
import socket
import sys
users= []
for line in sys.stdin:
line = line.strip()
if line != '':
users.append(line)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((sys.argv[1], 25))
print sys.argv[1] #Notice the printing of the script arguments/ip addresses for my output
fp = s.makefile('rwb')
fp.readline()
fp.write('HELO test.example.com\r\n')
fp.flush()
fp.readline()
for user in users:
fp.write('VRFY %s\r\n\ ' % user)
fp.flush()
print '%s: %s' % (user, fp.readline().strip())
fp.write('QUIT\r\n')
fp.flush()
s.close()
```
Like I said above, that is a tricky way out, I know, but I am not a programmer, so that is the way out I was able to find (*if you have a way purely in Python to do it I would very much like to see it*). I will definitely re-visit this issue once I have a bit more time, and I will keep studying Python until I get this right.
Thanks all for the support to my question!! | 1,887 |
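For the "purely in Python" part, the stdlib `ipaddress` module can stand in for the bash `seq` loop (a sketch; the /24 below mirrors the 192.168.1.x range from the bash script):

```python
import ipaddress

network = ipaddress.ip_network('192.168.1.0/24')
# hosts() yields the usable addresses, .1 through .254
hosts = [str(ip) for ip in network.hosts()]
print(hosts[0], hosts[-1], len(hosts))  # 192.168.1.1 192.168.1.254 254
```

Each address could then be passed to the single-host probe function suggested in the other answer.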
57,331,667 | I'm using `poetry` library to manage project dependencies, so when I use
`docker build --tag=helloworld .`
I got this error
```
[AttributeError]
'NoneType' object has no attribute 'group'
```
Installing breaks on `umongo (2.1.0)` package
Here is my `pyproject.toml` file
```
[tool.poetry.dependencies]
python = "^3.7.0"
asyncio = "^3.4"
aiohttp = "^3.4"
motor = "^2.0"
umongo = "^2.0"
pyyaml = "^3.13"
[tool.poetry.dev-dependencies]
pytest = "^3.4"
black = {version = "^18.3-alpha.0",allows-prereleases = true}
mypy = "^0.650.0"
wemake-python-styleguide = "^0.5.1"
pytest-mock = "^1.10"
pytest-asyncio = "^0.9.0"
pytest-aiohttp = "^0.3.0"
```
And `poetry.lock`
<https://pastebin.com/kUjAKJHM>
Dockerfile:
```
FROM python:3.7.1-alpine
RUN mkdir -p /opt/project/todo_api
RUN pip --no-cache-dir install poetry
COPY ./pyproject.toml /opt/project
COPY poetry.lock /opt/project
RUN cd /opt/project && poetry install --no-dev
COPY ./todo_api /opt/project/todo_api
COPY ./todo_api.yml /opt/project/todo_api.yml
WORKDIR /opt/project
ENTRYPOINT poetry run python -m aiohttp.web todo_api.main:main
``` | 2019/08/02 | [
"https://Stackoverflow.com/questions/57331667",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6804296/"
] | The following works for me:
```
FROM python:3.7.1-alpine
WORKDIR /opt/project
RUN pip install --upgrade pip && pip --no-cache-dir install poetry
COPY ./pyproject.toml .
RUN poetry install --no-dev
```
with pyproject.toml:
```
[tool.poetry]
name = "57331667"
version = "0.0.1"
authors = ["skufler <skufler@email.com>"]
[tool.poetry.dependencies]
python = "^3.7.0"
asyncio = "^3.4"
aiohttp = "^3.4"
motor = "^2.0"
umongo = "^2.0"
pyyaml = "^3.13"
[tool.poetry.dev-dependencies]
pytest = "^3.4"
black = {version = "^18.3-alpha.0",allows-prereleases = true}
mypy = "^0.650.0"
wemake-python-styleguide = "^0.5.1"
pytest-mock = "^1.10"
pytest-asyncio = "^0.9.0"
pytest-aiohttp = "^0.3.0"
```
Then:
```sh
docker build --tag=57331667 --file=./Dockerfile .
```
results:
```sh
...
Creating virtualenv 57331667-py3.7 in /root/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...
Writing lock file
Package operations: 15 installs, 0 updates, 0 removals
- Installing idna (2.8)
- Installing multidict (4.5.2)
- Installing six (1.12.0)
- Installing async-timeout (3.0.1)
- Installing attrs (18.2.0)
- Installing chardet (3.0.4)
- Installing marshmallow (2.19.5)
- Installing pymongo (3.8.0)
- Installing python-dateutil (2.8.0)
- Installing yarl (1.3.0)
- Installing aiohttp (3.5.4)
- Installing asyncio (3.4.3)
- Installing motor (2.0.0)
- Installing pyyaml (3.13)
- Installing umongo (2.1.0)
Removing intermediate container c6a9c7652b5c
---> 89354562cf16
Successfully built 89354562cf16
Successfully tagged 57331667:latest
``` | If you want to install it with pip3 in production, here's how the latest version of Poetry (late 2021) can export a requirements.txt file:
```sh
# Production with no development dependencies
poetry export --no-interaction --no-ansi --without-hashes --format requirements.txt --output ./requirements.prod.txt
# For development, including development dependencies
poetry export --no-interaction --no-ansi --without-hashes --format requirements.txt --dev --output ./requirements.dev.txt
``` | 1,889 |
48,301,318 | I have a Python script where I import `datadog` module. When I run `python datadog.py`, it fails with `ImportError: cannot import name statsd`. The script starts with following lines:
```
import os
import mysql.connector
from time import time
from datadog import statsd
```
Actual error messages are following:
```
$ python /mnt/datadog.py
Traceback (most recent call last):
File "/mnt/datadog.py", line 5, in <module>
from datadog import statsd
File "/mnt/datadog.py", line 5, in <module>
from datadog import statsd
ImportError: cannot import name statsd
```
But when I'm in Python shell (started by `python` command), I can successfully run `from datadog import statsd`. What's the difference here?
By the way, I have proper Python packages installed in my computer:
```
$ pip freeze | egrep 'datadog|mysql'
datadog==0.17.0
mysql-connector==2.1.6
$ python --version
Python 2.7.5
``` | 2018/01/17 | [
"https://Stackoverflow.com/questions/48301318",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8495751/"
] | The problem is that your script is named `datadog.py`. So when it imports the module `datadog`, it imports itself. | First install statsd by
```
pip install statsd
```
then do
```
import statsd
``` | 1,891 |
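The same kind of name collision can be diagnosed up front with `importlib`, which reports the file that would satisfy an import (a stdlib name is used below, since `datadog` may not be installed):

```python
import importlib.util

# If origin pointed at your own script rather than site-packages or the
# stdlib, a local file would be shadowing the installed package
spec = importlib.util.find_spec('email')
print(spec.origin)
```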
69,970,902 | s =[(1, 2), (2, 3), (3, 4), (1, 3)]
Output should be:
1 2
2 3
3 4
1 3
#in python only
**"WITHOUT USING FOR LOOP"**
In the code below
```
ns=[[4, 4], [5, 4], [3, 3]]
for x in ns:
n=x[0]
m=x[1]
f=list(range(1,n+1))
l=list(range(2,n+1))
permut = itertools.permutations(f, 2)
permut=list(permut)
s=list(filter(lambda x: x[1]==x[0]+1 , permut))
#print(s)
m=m-len(s)
#print(m)
t=list(filter(lambda x: x[1]==x[0]+2 , permut))
#print(t)
for y in range(0,m):
s.append(t.pop(0))
print(*s, sep = "\n")
``` | 2021/11/15 | [
"https://Stackoverflow.com/questions/69970902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17415916/"
] | I had this issue last night and tried with PHP 7.3 and 7.4; in the end I just used the latest PHP 8.1 and this issue went away. | You could try going to `illuminate/log/Logger.php` and adding `use Monolog\Logger as Monolog;` at the beginning of the file. After that, change the constructor from this:
```
/**
* Create a new log writer instance.
*
* @param \Psr\Log\LoggerInterface $logger
* @param \Illuminate\Contracts\Events\Dispatcher|null $dispatcher
* @return void
*/
public function __construct(LoggerInterface $logger, Dispatcher $dispatcher = null)
{
$this->logger = $logger;
$this->dispatcher = $dispatcher;
}
```
to this:
```
/**
* Create a new log writer instance.
*
* @param \Monolog\Logger $logger
* @param \Illuminate\Contracts\Events\Dispatcher|null $dispatcher
* @return void
*/
public function __construct(Monolog $logger, Dispatcher $dispatcher = null)
{
$this->logger = $logger;
$this->dispatcher = $dispatcher;
}
``` | 1,892 |
14,142,144 | I have a custom field located in my `/app/models.py` . My question is...
What is the best practice here. Should I have a separate file i.e. `customField.py` and import to the `models.py`, or should it be all in the same `models.py` file?
```
class HibernateBooleanField(models.BooleanField):
__metaclass__ = models.SubfieldBase
def get_internal_type(self):
return "HibernateBooleanField"
def db_type(self):
return 'bit(1)'
def to_python(self, value):
if value in (True, False): return value
if value in ('t', 'True', '1', '\x01'): return True
if value in ('f', 'False', '0', '\x00'): return False
def get_db_prep_value(self, value, *args, **kwargs):
return 0x01 if value else 0x00
``` | 2013/01/03 | [
"https://Stackoverflow.com/questions/14142144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/578822/"
] | If you're on Oracle 11g you can use the DBMS\_PARALLEL\_EXECUTE package to run your procedure in multiple threads. [Find out more](http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_parallel_ex.htm#CHDIJACH).
If you're on an earlier version you can implement DIY parallelism using a technique from Tom Kyte. The Hungry DBA provides [a good explanation on his blog here](http://hungrydba.blogspot.co.uk/2007/12/tom-kytes-do-it-yourself-parallelism.html). | Sounds like you need a set of queries using the MySql `LIMIT` clause to implement paging (e.g. a query would get the first 1000, another would get the second 1000 etc..).
You could form these queries and submit as `Callables` to an [Executor service](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/Executor.html) with a set number of threads. The `Executor` will manage the threads. I suspect it may be more efficient to both query and write your records within each `Callable`, but this is an assumption that would likely require testing. | 1,893 |
6,367,014 | In my `settings.py`, I have the following:
```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending e-mail.
EMAIL_HOST = 'localhost'
# Port for sending e-mail.
EMAIL_PORT = 1025
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
```
My email code:
```
from django.core.mail import EmailMessage
email = EmailMessage('Hello', 'World', to=['user@gmail.com'])
email.send()
```
Of course, if I setup a debugging server via `python -m smtpd -n -c DebuggingServer localhost:1025`, I can see the email in my terminal.
However, how do I actually send the email not to the debugging server but to user@gmail.com?
After reading your answers, let me get something straight:
1. Can't you use localhost(simple ubuntu pc) to send e-mails?
2. I thought in django 1.3 `send_mail()` is somewhat deprecated and `EmailMessage.send()` is used instead? | 2011/06/16 | [
"https://Stackoverflow.com/questions/6367014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749477/"
] | I had actually done this from Django a while back. Open up a legitimate GMail account & enter the credentials here. Here's my code -
```
from email import Encoders
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email.MIMEMultipart import MIMEMultipart
import smtplib
import os
from django.conf import settings
def sendmail(to, subject, text, attach=[], mtype='html'):
ok = True
gmail_user = settings.EMAIL_HOST_USER
gmail_pwd = settings.EMAIL_HOST_PASSWORD
msg = MIMEMultipart('alternative')
msg['From'] = gmail_user
msg['To'] = to
msg['Cc'] = 'you@gmail.com'
msg['Subject'] = subject
msg.attach(MIMEText(text, mtype))
for a in attach:
part = MIMEBase('application', 'octet-stream')
        part.set_payload(open(a, 'rb').read())
Encoders.encode_base64(part)
part.add_header('Content-Disposition','attachment; filename="%s"' % os.path.basename(a))
msg.attach(part)
try:
        mailServer = smtplib.SMTP("smtp.gmail.com", 587)
mailServer.ehlo()
mailServer.starttls()
mailServer.ehlo()
mailServer.login(gmail_user, gmail_pwd)
mailServer.sendmail(gmail_user, [to,msg['Cc']], msg.as_string())
mailServer.close()
except:
ok = False
return ok
``` | The format below worked for me:
>
> EMAIL\_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
>
>
> EMAIL\_USE\_TLS = True
>
> EMAIL\_HOST = 'mail.xxxxxxx.xxx'
>
>
> EMAIL\_PORT = 465
>
>
> EMAIL\_HOST\_USER = 'support@xxxxx.xxx'
>
>
> EMAIL\_HOST\_PASSWORD = 'xxxxxxx'
>
>
> | 1,894 |
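The message-building half of the snippets above can be tried without any SMTP server, using the modern import paths for the same MIME classes (placeholder addresses, a sketch only):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart('alternative')
msg['From'] = 'me@example.com'
msg['To'] = 'user@gmail.com'
msg['Subject'] = 'Hello'
msg.attach(MIMEText('World', 'plain'))
# msg.as_string() is what sendmail() would eventually transmit
print(msg['Subject'], msg.is_multipart())  # Hello True
```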
58,414,393 | Say i have a dataframe as shown below
```
Stock open high low close Avg
0 SBIN 255.85 256.00 255.80 255.90 Nan
1 HDFC 1222.25 1222.45 1220.45 1220.45 Nan
2 SBIN 255.95 255.95 255.85 255.85 Nan
3 HDFC 1222.00 1222.50 1221.70 1221.95 Nan
4 SBIN 255.80 256.00 255.80 255.95 255.9
5 HDFC 1221.90 1222.00 1221.55 1221.70 1221.366667
6 SBIN 255.85 255.95 255.70 255.80 255.8666667
7 HDFC 1222.00 1222.00 1221.45 1221.95 1221.866667
8 SBIN 255.80 255.95 255.70 255.80 255.85
9 HDFC 1222.00 1222.00 1221.25 1221.90 1221.85
10 SBIN 255.75 255.80 255.60 255.65 255.75
11 HDFC 1222.00 1222.00 1221.40 1221.80 1221.88333333
```
here is the link of the df i took screenshot off. <https://photos.app.goo.gl/kTZERCerFzggDfHEA>
Now the average column is the average of the previous three closes for that particular stock. How can i achieve this in python pandas? | 2019/10/16 | [
"https://Stackoverflow.com/questions/58414393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10559306/"
] | You can use `groupby` and `rolling`
```
df['Avg'] = df.groupby('Stock', as_index=False)['close'].rolling(3).mean().reset_index(0,drop=True)
df
Out[1]:
Stock open high low close Avg
0 SBIN 255.85 256.00 255.80 255.90 NaN
1 HDFC 1222.25 1222.45 1220.45 1220.45 NaN
2 SBIN 255.95 255.95 255.85 255.85 NaN
3 HDFC 1222.00 1222.50 1221.70 1221.95 NaN
4 SBIN 255.80 256.00 255.80 255.95 255.900000
5 HDFC 1221.90 1222.00 1221.55 1221.70 1221.366667
6 SBIN 255.85 255.95 255.70 255.80 255.866667
7 HDFC 1222.00 1222.00 1221.45 1221.95 1221.866667
8 SBIN 255.80 255.95 255.70 255.80 255.850000
9 HDFC 1222.00 1222.00 1221.25 1221.90 1221.850000
10 SBIN 255.75 255.80 255.60 255.65 255.750000
11 HDFC 1222.00 1222.00 1221.40 1221.80 1221.883333
``` | As I understood from your df, you are trying to calculate something like a moving average metric.
To do this you can simply use for iteration:
```
for i in range(0, df.shape[0] - 2):
df.loc[df.index[i + 2], 'AVG'] = np.round(((df.iloc[i, 1] + df.iloc[i + 1, 1] + df.iloc[i + 2, 1]) / 3), 1)
```
In the `df.loc`/`df.iloc` calls you specify the columns on which you want to calculate the metric; here I used my own.
Also (if you want more specific metric) you can try to calculate an average using rolling method:
```
df['AVG'] = df.iloc[:, 1].rolling(window = 3).mean()
```
Instead of 1 you must use the index of the column on which you want to calculate the metric. In the `window` argument you specify the number of previous rows over which to calculate it.
Hope this will help! | 1,904 |
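For readers without pandas at hand, the per-stock trailing three-close average that both answers compute can be mimicked with the standard library alone (the closes below are lifted from the sample data; this is an illustration, not a replacement for the pandas answers):

```python
from collections import defaultdict, deque

# (stock, close) pairs in row order, taken from the sample frame
rows = [("SBIN", 255.90), ("HDFC", 1220.45), ("SBIN", 255.85),
        ("HDFC", 1221.95), ("SBIN", 255.95), ("HDFC", 1221.70)]

windows = defaultdict(lambda: deque(maxlen=3))  # trailing closes per stock
avgs = []
for stock, close in rows:
    w = windows[stock]
    w.append(close)
    # no value until three closes have been seen for this stock
    avgs.append(round(sum(w) / 3, 6) if len(w) == 3 else None)

print(avgs)  # [None, None, None, None, 255.9, 1221.366667]
```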
14,568,370 | I have the following code
```
# logging
from twisted.python import log
import sys
# MIME Multipart handling
import email
import email.mime.application
import uuid
# IMAP Connection
from twisted.mail import imap4
from twisted.internet import protocol
#SMTP Sending
import os.path
from OpenSSL.SSL import SSLv3_METHOD
from twisted.internet import ssl
from twisted.mail.smtp import ESMTPSenderFactory
from twisted.internet.ssl import ClientContextFactory
from twisted.internet.defer import Deferred
from twisted.internet import reactor
#class AccountsManager(object):
def connectToIMAPServer(imap_server, username, password):
factory = IMAP4ClientFactory(username, password, login_insecure = True)
host, port = imap_server.split(":")
# connect to reactor
if port == '993':
reactor.connectSSL(host, int(port), factory, ssl.ClientContextFactory())
else:
if not port:
port = 143
reactor.connectTCP(host, int(port), factory)
d = factory.deferred
d.addCallback(lambda r: factory.proto)
return d
class IMAP4Client(imap4.IMAP4Client):
"""
A client with callbacks for greeting messages from an IMAP server.
"""
greetDeferred = None
def serverGreeting(self, caps):
self.serverCapabilities = caps
if self.greetDeferred is not None:
d, self.greetDeferred = self.greetDeferred, None
d.callback(self)
class IMAP4ClientFactory(protocol.ClientFactory):
usedUp = False
protocol = IMAP4Client
def __init__(self, username, password, mailbox = "INBOX", login_insecure = False):
self.ctx = ssl.ClientContextFactory()
self.username = username
self.password = password
self.mailbox = mailbox
self.login_insecure = login_insecure
self.deferred = Deferred()
def buildProtocol(self, addr):
"""
Initiate the protocol instance. Since we are building a simple IMAP
client, we don't bother checking what capabilities the server has. We
just add all the authenticators twisted.mail has. Note: Gmail no
longer uses any of the methods below, it's been using XOAUTH since
2010.
"""
assert not self.usedUp
self.usedUp = True
p = self.protocol(self.ctx)
p.factory = self
p.greetDeferred = self.deferred
p.registerAuthenticator(imap4.PLAINAuthenticator(self.username))
p.registerAuthenticator(imap4.LOGINAuthenticator(self.username))
p.registerAuthenticator(imap4.CramMD5ClientAuthenticator(self.username))
self.deferred.addCallback(self.GreetingCallback)
self.deferred.addErrback(self.GreetingErrback)
self.proto = p
return p
def GreetingCallback(self, result):
print "Secure Login"
auth_d = self.proto.authenticate(self.password)
auth_d.addCallback(self.AuthenticationCallback)
auth_d.addErrback(self.AuthenticationErrback)
return auth_d # attach it to the main deferred
def GreetingErrback(self, error):
log.err(error)
self.CloseConnection()
return error
def AuthenticationCallback(self, result):
print "Selecting Mailbox"
d = self.proto.examine(self.mailbox)
return d
def AuthenticationErrback(self, failure):
if self.login_insecure:
failure.trap(imap4.NoSupportedAuthentication)
return self.InsecureLogin()
else:
return error
def InsecureLogin(self):
print "Insecure Login"
d = self.proto.login(self.username, self.password)
d.addCallback(self.AuthenticationCallback)
return d
def CloseConnection(self):
self.proto.transport.loseConnection()
def clientConnectionFailed(self, connector, reason):
d, self.deferred = self.deferred, None
d.errback(reason)
class MailServer(object):
"Manages a server"
size = 0
used_space = 0
def __init__(self, smtp_server, imap_server, username, password):
self.smtp_server, self.smtp_port = smtp_server.split(":")
self.imap_server, self.imap_port = imap_server.split(":")
self.username = username
self.password = password
self.imap_connection = IMAP4ClientFactory(username, password)
def upload_data(self, data):
"""
Uploads data to email server returns deferred that will return with the imap uid
"""
# Create a text/plain message
id = str(uuid.uuid4()).upper()
msg = email.mime.Multipart.MIMEMultipart()
msg['Subject'] = 'GMA ID: %s' % id
msg['From'] = self.email_address
msg['To'] = self.email_address
# The main body is just another attachment
body = email.mime.Text.MIMEText("GMA ID: %s" % (self.uuid_id))
msg.attach(body)
att = email.mime.application.MIMEApplication(data,_subtype="raw")
att.add_header('Content-Disposition','attachment',filename = os.path.basename(self.filename))
msg.attach(att)
# Create a context factory which only allows SSLv3 and does not verify
# the peer's certificate.
contextFactory = ClientContextFactory()
contextFactory.method = SSLv3_METHOD
d = Deferred()
mime_obj = StringIO(str(msg))
senderFactory = ESMTPSenderFactory(
self.username,
self.password,
self.email_address,
self.email_address,
mime_obj,
d,
contextFactory=contextFactory)
d.addCallback(lambda r: self.email_sent(id, int(self.parts)) )
d.addErrback(self.email_error)
reactor.connectTCP(self.smtp_server, self.smtp_port, senderFactory)
d.addCallback(self.upload_success, *args, **kw)
d.addErrback(self.upload_error, 1)
return d
def upload_success(self, result):
print "upload was succesful!"
def upload_error(self, result):
print "upload error"
def download_data(self, uid):
"""
Downloads data from the email server returns a deferred that will return with the data
"""
print "uid"
if __name__ == "__main__":
log.startLogging(sys.stdout)
d = connectToIMAPServer("imap.gmail.com:993", "username", "password")
def f(s):
print s
d.addCallback(lambda r: f("These are fired before the auth and examine callbacks, why?"))
d.addCallback(lambda r: f("These are fired before the auth and examine callbacks, why?"))
reactor.run()
```
The class is supposed to handle logging in and selecting a mailbox and nicely return an IMAP proto ready to use. However, the two callbacks at the bottom are fired before the other ones. I get why: the callbacks are added before the other ones because buildProtocol hasn't been called yet. So what is the best way to handle this? Just have a dummy callback added in `__init__` that "holds" the first spot? | 2013/01/28 | [
"https://Stackoverflow.com/questions/14568370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/410368/"
] | ```
from twisted.internet.endpoints import TCP4ClientEndpoint
d = TCP4ClientEndpoint(reactor, host, int(port)).connect(factory)
```
and
```
d.addCallback(lambda r: factory.deferred)
```
instead of
```
d = factory.deferred
```
in `connectToIMAPServer` should do it - your `factory.deferred` will be returned only after the protocol is ready. ([Twisted Documentation on writing clients](http://twistedmatrix.com/documents/11.1.0/core/howto/clients.html)) | I eventually reworked the code and just managed the deferred's callback or errback internally.
Updated code:
```
# logging
from twisted.python import log
import sys
# MIME Multipart handling
import email
import email.mime.application
import uuid
# IMAP Connection
from twisted.mail import imap4
from twisted.internet import protocol
#SMTP Sending
import os.path
from OpenSSL.SSL import SSLv3_METHOD
from twisted.internet import ssl
from twisted.mail.smtp import ESMTPSenderFactory
from twisted.internet.ssl import ClientContextFactory
from twisted.internet.defer import Deferred
from twisted.internet import reactor
#class AccountsManager(object):
def connectToIMAPServer(imap_server, username, password):
factory = IMAP4ClientFactory(username, password, login_insecure = True)
host, port = imap_server.split(":")
# connect to reactor
if port == '993':
reactor.connectSSL(host, int(port), factory, ssl.ClientContextFactory())
else:
if not port:
port = 143
reactor.connectTCP(host, int(port), factory)
return factory.deferred
class IMAP4Client(imap4.IMAP4Client):
"""
A client with callbacks for greeting messages from an IMAP server.
"""
greetDeferred = None
def serverGreeting(self, caps):
self.serverCapabilities = caps
if self.greetDeferred is not None:
d, self.greetDeferred = self.greetDeferred, None
d.callback(self)
class IMAP4ClientFactory(protocol.ClientFactory):
usedUp = False
protocol = IMAP4Client
def __init__(self, username, password, mailbox = "INBOX", login_insecure = False):
self.ctx = ssl.ClientContextFactory()
self.username = username
self.password = password
self.mailbox = mailbox
self.login_insecure = login_insecure
# called when the protocol is all set up or there is an error setting it up
self.deferred = Deferred()
def buildProtocol(self, addr):
"""
Initiate the protocol instance. Since we are building a simple IMAP
client, we don't bother checking what capabilities the server has. We
just add all the authenticators twisted.mail has. Note: Gmail no
longer uses any of the methods below, it's been using XOAUTH since
2010.
"""
assert not self.usedUp
self.usedUp = True
p = self.protocol(self.ctx)
p.factory = self
# deferred for when the IMAP Greeting is done
p.greetDeferred = Deferred()
p.registerAuthenticator(imap4.PLAINAuthenticator(self.username))
p.registerAuthenticator(imap4.LOGINAuthenticator(self.username))
p.registerAuthenticator(imap4.CramMD5ClientAuthenticator(self.username))
p.greetDeferred.addCallback(self.GreetingCallback)
p.greetDeferred.addErrback(self.GreetingErrback)
self.proto = p
return p
def GreetingCallback(self, result):
log.msg("Succesfully sent IMAP Greeting.")
auth_d = self.proto.authenticate(self.password)
auth_d.addCallback(self.AuthenticationCallback)
auth_d.addErrback(self.AuthenticationErrback)
return auth_d
def GreetingErrback(self, error):
log.msg("Problem sending greeting")
log.err(error)
self.CloseConnection()
self.deferred.errback(error)
def AuthenticationCallback(self, result):
log.msg("Authenticated")
log.msg("Selecting Mailbox")
d = self.proto.examine(self.mailbox)
d.addCallback(self.MailboxSelectCallback)
d.addErrback(self.MailboxSelectErrback)
return d
def AuthenticationErrback(self, failure):
log.msg("Unable to authenticate securly")
if self.login_insecure:
log.msg("Trying to login insecurly")
failure.trap(imap4.NoSupportedAuthentication)
return self.InsecureLogin()
else:
log.err(failure)
self.deferred.errback(failure)
def InsecureLogin(self):
log.msg("Logging in insecurly")
d = self.proto.login(self.username, self.password)
d.addCallback(self.AuthenticationCallback)
return d
def MailboxSelectCallback(self, result):
# connected and protocol set up
log.msg("IMAP4 protocol setup")
self.deferred.callback(self.proto)
def MailboxSelectErrback(self, error):
log.msg("Cannot select mailbox %s" % self.mailbox)
log.err(error)
self.deferred.errback(error)
def CloseConnection(self):
self.proto.transport.loseConnection()
def clientConnectionFailed(self, connector, reason):
log.msg("Connecting was lost")
log.err(reason)
d, self.deferred = self.deferred, None
d.errback(reason)
class MailServer(object):
"Manages a server"
size = 0
used_space = 0
def __init__(self, smtp_server, imap_server, username, password):
self.smtp_server, self.smtp_port = smtp_server.split(":")
self.imap_server, self.imap_port = imap_server.split(":")
self.username = username
self.password = password
self.imap_connection = IMAP4ClientFactory(username, password)
def upload_data(self, data):
"""
Uploads data to email server returns deferred that will return with the imap uid
"""
# Create a text/plain message
id = str(uuid.uuid4()).upper()
msg = email.mime.Multipart.MIMEMultipart()
msg['Subject'] = 'GMA ID: %s' % id
msg['From'] = self.email_address
msg['To'] = self.email_address
# The main body is just another attachment
body = email.mime.Text.MIMEText("GMA ID: %s" % (self.uuid_id))
msg.attach(body)
att = email.mime.application.MIMEApplication(data,_subtype="raw")
att.add_header('Content-Disposition','attachment',filename = os.path.basename(self.filename))
msg.attach(att)
# Create a context factory which only allows SSLv3 and does not verify
# the peer's certificate.
contextFactory = ClientContextFactory()
contextFactory.method = SSLv3_METHOD
d = Deferred()
mime_obj = StringIO(str(msg))
senderFactory = ESMTPSenderFactory(
self.username,
self.password,
self.email_address,
self.email_address,
mime_obj,
d,
contextFactory=contextFactory)
d.addCallback(lambda r: self.email_sent(id, int(self.parts)) )
d.addErrback(self.email_error)
reactor.connectTCP(self.smtp_server, self.smtp_port, senderFactory)
d.addCallback(self.upload_success, *args, **kw)
d.addErrback(self.upload_error, 1)
return d
def upload_success(self, result):
print "upload was succesful!"
def upload_error(self, result):
print "upload error"
def download_data(self, uid):
"""
Downloads data from the email server returns a deferred that will return with the data
"""
print "uid"
if __name__ == "__main__":
log.startLogging(sys.stdout)
d = connectToIMAPServer("imap.gmail.com:993", "email", "password")
def f(s):
print s
d.addCallback(lambda r: f("These are fired before the auth and examine callbacks, why?"))
d.addCallback(lambda r: f("These are fired before the auth and examine callbacks, why?"))
reactor.run()
``` | 1,905 |
35,266,464 | I was trying to build this example:
<https://www.linuxvoice.com/build-a-web-browser-with-20-lines-of-python/>
I'll just repost it here for completeness:
```
from PyQt5.QtCore import QUrl
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWebKitWidgets import QWebView
import sys
app = QApplication(sys.argv)
view = QWebView()
view.show()
view.setUrl(QUrl(“http://linuxvoice.com”))
app.exec()
```
I also used the indications here to install pyqt5
<https://askubuntu.com/questions/612314/how-to-install-pyqt-for-python3-in-ubunt-14-10>
and also installed Qt5. So I should have everything they say in the linuxvoice tutorial.
When I want to run this with python 2.7, it says:
```
File "brows.py", line 9 SyntaxError: Non-ASCII character '\xe2' in file brows.py on line 9, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
```
and with Python3:
```
File "brows.py", line 9
view.setUrl(QUrl(“http://linuxvoice.com”))
SyntaxError: invalid character in identifier
```
Did anyone manage to make this work? | 2016/02/08 | [
"https://Stackoverflow.com/questions/35266464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5016028/"
] | SO here's the actual answer. I had the same issue and discovered it very fast.
`view.setUrl(QUrl(“http://linuxvoice.com”))`
Notice the quotes their code uses; compare them to normal quotes:
Normal: ""
Theirs: “”
Basically, they're using typographic (non-ASCII) quotes that Python can't parse. A really sneaky way to trip up copy-pasters.
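For illustration (not part of the original answer): the offending characters are the Unicode code points U+201C/U+201D, and a pasted snippet can be normalized with a plain `str.replace` before running it:

```python
# The pasted line uses curly quotes (U+201C / U+201D), which the
# Python parser rejects; swap them for plain ASCII double quotes.
bad = 'view.setUrl(QUrl(\u201chttp://linuxvoice.com\u201d))'
fixed = bad.replace('\u201c', '"').replace('\u201d', '"')
print(fixed)  # view.setUrl(QUrl("http://linuxvoice.com"))
```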
Either way this code doesn't work anymore because in the most recent version of PyQt5, `QtWebKitWidgets` was removed. | You've got a stray byte somewhere in your code. It's popped up on StackOverflow previously and there's a good method for finding it: [Python "SyntaxError: Non-ASCII character '\xe2' in file"](https://stackoverflow.com/questions/21639275/python-syntaxerror-non-ascii-character-xe2-in-file). | 1,906 |
43,322,201 | I have a Flask application using Python 3. Sometimes it creates a daemon process to run a script, and then I want to kill the daemon on timeout (using `signal.SIGINT`).
However, some processes created by `os.system` (for example, `os.system('git clone xxx')`) are still running after the daemon was killed.
So what should I do? Thanks all! | 2017/04/10 | [
"https://Stackoverflow.com/questions/43322201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7844505/"
] | In order to be able to kill a process you need its process id (usually referred to as a pid). `os.system` doesn't give you that, simply returning the value of the subprocess's return code.
The newer `subprocess` module gives you much more control, at the expense of somewhat more complexity. In particular it allows you to wait for the process to finish, with a timeout if required, and gives you access to the subprocess's pid. While I am not an expert in its use, this seems to
work. Note that this code needs Python 3.3 or better to use the `timeout` argument to the `Popen.wait` call.
```
import subprocess
process = subprocess.Popen(['git', 'clone', 'https://github.com/username/reponame'])
try:
print('Running in process', process.pid)
process.wait(timeout=10)
except subprocess.TimeoutExpired:
print('Timed out - killing', process.pid)
process.kill()
print("Done")
``` | The following command on the command line will show you all the running instances of python.
```
$ ps aux | grep -i python
username 6488 0.0 0.0 2434840 712 s003 R+ 1:41PM 0:00.00 python
```
The first number, `6488`, is the PID, process identifier. Look through the output of the command on your machine to find the PID of the process you want to kill.
You can run another command to kill the correct process.
```
$ kill 6488
```
You might need to use `sudo` with this command. Be careful though, you don't want to kill the wrong thing or bad stuff could happen! | 1,907 |
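Since the question asks about doing this from Python, the shell `kill` above has a direct stdlib counterpart, `os.kill` (a POSIX-oriented sketch; `SIGTERM` is the polite default, `SIGKILL` the forceful fallback):

```python
import os
import signal
import subprocess
import sys

# Spawn a throwaway child to stand in for the long-running process.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

os.kill(child.pid, signal.SIGTERM)  # equivalent of `kill <pid>` in the shell
child.wait()                        # reap it; returncode is -15 on POSIX
```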
40,821,604 | I want to write to an element in a nested list named `foo`, but the nesting depth and indexes are only known at runtime, in a (non-nested!) list variable named `indexes`.
Examples:
If `indexes` is `[4]`, I want `foo[4]`.
If `indexes` is `[4,7]`, I want `foo[4][7]`.
If `indexes` is `[4,7,3]`, I want `foo[4][7][3]`.
What I could think of is to put together the command string (`"foo[4][7][3]"` in the last example), then call `eval`.
That would be `eval("foo["+']['.join([str(n) for n in indexes])+']')`.
That works, and is short enough, but I was hoping for a simpler, more pythonic way.
Does anyone know any alternatives?
Is there a way to *not* treat the empty list separately? As in:
If `indexes` is `[]`, I want the whole `foo`.
This needs an `if` in the `eval` solution.
EDIT: I need to **write to**, not read from, the element. I don't think either of the two existing answers, nor the answers to the indicated duplicate, can be used for writing. Apologies for misleading. | 2016/11/26 | [
"https://Stackoverflow.com/questions/40821604",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/174365/"
] | You can use the [`reduce()` function](https://docs.python.org/2/library/functions.html#reduce):
```
from functools import reduce # Python 3 forward compatibility
import operator
def access(lst, indexes):
return reduce(operator.getitem, indexes, lst)
```
You *could* use `list.__getitem__` instead of [`operator.getitem`](https://docs.python.org/2/library/operator.html#operator.getitem), but then you limit the application to list objects only (it wouldn't work for tuples or dictionaries, the latter with keys rather than integer indices).
This repeatedly applies the indices, in order, to the previous result (starting with the original list). This meets all your criteria, including the empty-list case:
```
>>> foo = ['a', 'b', 'c', 'd', ['foo', 'bar', 'baz', 'spam', 'eggs', 'ham', 'monty', ['alpha', 'beta', 'delta', 'gamma']]]
>>> access(foo, [])
['a', 'b', 'c', 'd', ['foo', 'bar', 'baz', 'spam', 'eggs', 'ham', 'monty', ['alpha', 'beta', 'delta', 'gamma']]]
>>> access(foo, [4])
['foo', 'bar', 'baz', 'spam', 'eggs', 'ham', 'monty', ['alpha', 'beta', 'delta', 'gamma']]
>>> access(foo, [4, 7])
['alpha', 'beta', 'delta', 'gamma']
>>> access(foo, [4, 7, 3])
'gamma'
```
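Because `operator.getitem` is type-agnostic, the same helper also walks mixed dict/tuple/list nesting; a self-contained restatement (the helper is repeated so the snippet runs on its own):

```python
from functools import reduce
import operator

def access(obj, indexes):
    return reduce(operator.getitem, indexes, obj)

# Dict keys, list indices and tuple indices can be mixed freely.
mixed = {"a": [10, {"b": (1, 2, 3)}]}
print(access(mixed, ["a", 1, "b", 2]))  # 3
print(access(mixed, []) is mixed)       # True -- empty path returns the whole object
```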
If you needed to *assign* to the last element, keep the *last* index aside to then assign to that:
```
if indexes:
target = access(foo, indexes[:-1])
target[indexes[-1]] = new_value
else:
foo = new_value
``` | You can set `item` to `foo`, then proceed through the indexes list to access deeper nested elements:
```
def access(foo, indexes):
item = foo
for index in indexes:
item = item[index]
return item
``` | 1,908 |
22,864,305 | I'm a very new Python user (2.7) and have been working my way through the Learn Python The Hard Way course. Up to chap 37, I decided to read through some other learning materials, go over the basics again, and do exercises there. I have been reading through this:
<http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/ifstatements.html>
and I just did this:
>
> 3.1.4.1. Graduate Exercise
>
> Write a program, graduate.py, that prompts students for how many credits they have. Print whether of not they have enough credits for graduation. (At Loyola University Chicago 120 credits are needed for graduation.)
>
>
>
and this is my code for that:
```
print "How many credits do you currently have: "
credits = raw_input("> ")
if credits >= 120:
print "You have graduated!"
else:
print "Sorry not enough credits"
```
but no matter what number I input it just gives "Sorry not enough credits" as the answer every time. Why? I have tried moving some things around and making it > instead of >= but nothing worked. I'm sure this is something stupidly simple I'm missing but I can't figure it out.
I've done several other if statement exercises in the LPTHW course similar to this and never had a problem. | 2014/04/04 | [
"https://Stackoverflow.com/questions/22864305",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3286810/"
] | [`raw_input()`](https://docs.python.org/2.7/library/functions.html#raw_input) returns a string:
```
>>> credits = raw_input("> ")
> 150
>>> type(credits)
<type 'str'>
```
You need to cast it to `int`:
```
credits = int(raw_input("> "))
``` | In your code, at the if statement you are comparing a `str` type with an `int` type, so it is not working as you expected. Cast `credits` to `int`:
```
print "How many credits do you currently have: "
credits = raw_input("> ")
credits = int(credits)
if credits >= 120:
print "You have graduated!"
else:
print "Sorry not enough credits"
``` | 1,909 |
11,958,728 | python 2.6.8
```
s= '''
foo
bar
baz
'''
>>>re.findall(r'^\S*',s,re.MULTILINE)
['', 'foo', 'bar', 'baz', '']
>>>ptrn = re.compile(r'^\S*',re.MULTILINE)
>>>ptrn.findall(s)
['', 'foo', 'bar', 'baz', '']
>>>ptrn.findall(s,re.MULTILINE)
['baz', '']
```
Why is there a difference between using MULTILINE flag in findall? | 2012/08/14 | [
"https://Stackoverflow.com/questions/11958728",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1293804/"
] | When calling the `findall()` method on a regex object, the second parameter is not the `flags` argument (because that has already been used when compiling the regex) but the `pos` argument, telling the regex engine at which point in the string to start matching.
`re.MULTILINE` is just an integer (that happens to be `8`).
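To make that concrete (Python 3 syntax, same behaviour in 2.6): passing `re.MULTILINE` as `findall`'s second positional argument is read as `pos=8`, so matching simply starts at index 8 of the string, which in the question's `s` lands just before `baz`:

```python
import re

s = '\nfoo\nbar\nbaz\n'
ptrn = re.compile(r'^\S*', re.MULTILINE)

print(int(re.MULTILINE))              # 8 -- just an int under the hood
print(ptrn.findall(s))                # ['', 'foo', 'bar', 'baz', '']
print(ptrn.findall(s, re.MULTILINE))  # ['baz', ''] -- pos=8 skips foo/bar
```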
See [the docs](http://docs.python.org/library/re.html#re.RegexObject.findall). | Because the `findall` method of the compiled object `ptrn` doesn't take the MULTILINE parameter. It takes a `position` argument.
See here: <http://docs.python.org/library/re.html#re.RegexObject.findall>
The MULTILINE specifier is only used when you call `re.compile()`. The resulting `ptrn` object already 'knows' that it is `MULTILINE`. | 1,912
45,110,802 | I am working on using an ElasticSearch database to store data I am pulling from online. However, when I try to index the data in the database I receive an error.
Here is my code for creating and indexing the data:
```
es = Elasticsearch()
es.index(index='weather', doc_type='data', body=doc)
```
However when I run this program, the second of those lines causes an error, here is the complete traceback:
```
Traceback (most recent call last):
File "weatherScraper.py", line 79, in <module>
main()
File "weatherScraper.py", line 73, in main
es.index(index='weather', doc_type='data', body=doc)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 73, in _wrapped
return func(*args, params=params, **kwargs)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 298, in index
_make_path(index, doc_type, id), params=params, body=body)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/transport.py", line 312, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 128, in perform_request
self._raise_error(response.status, raw_data)
File "/home/alec/.local/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 125, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.AuthenticationException: TransportError(401, u'security_exception', u'missing authentication token for REST request [/weather/data]')
``` | 2017/07/14 | [
"https://Stackoverflow.com/questions/45110802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6848466/"
] | 'missing authentication token' means you need to authenticate before you can talk to this Elasticsearch instance. To index documents, the user must have write access. You can include a username and password in a URL like this: <http://user:password@hostname:port>
For example, in a shell:
```
export ES_ENDPOINT="http://usernameWithWriteAccess:password@localhost:9200"
```
Then in python:
```
es = Elasticsearch(os.environ['ES_ENDPOINT'])
``` | Also, if you do it from Postman tool, for example:
Go to the Authorization tab, choose Basic Auth, and enter the username and password you received by running elasticsearch-setup-passwords.bat.
[Screenshot of the Postman Authorization tab](https://i.stack.imgur.com/pUSlW.png) | 1,913
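For reference, the Basic Auth header that Postman (or the `user:password@host` URL form above) produces is just base64 of `user:password`; a stdlib sketch with placeholder credentials:

```python
import base64

user, password = "elastic", "changeme"   # placeholders, not real credentials
token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
headers = {"Authorization": "Basic " + token}
print(headers["Authorization"])  # Basic ZWxhc3RpYzpjaGFuZ2VtZQ==
```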
59,777,244 | I am a very new programmer and wanted to try out the AIY voice kit that uses Google Assistant API. I have a step-by-step guide that pretty much tells me how to set it up but now when it's up and running the guide tells me to run "assistant\_library\_demo.py" which is to make sure that the google assistant understands you. I have already done a mic check and sound check which worked perfectly.
However, when I run it I only get
```
pi@raspberrypi:~/AIY-projects-python $ src/examples/voice/assistant_library_demo.py
Segmentation fault
```
I've tried Google but only found out that a segmentation fault means the program crashed. I am not sure what to do from here since I am not very good at programming. Please let me know if I need to give any more information.
This is the code that I'm trying to run
```
import logging
import platform
import sys
from google.assistant.library.event import EventType
from aiy.assistant import auth_helpers
from aiy.assistant.library import Assistant
from aiy.board import Board, Led
def process_event(led, event):
logging.info(event)
if event.type == EventType.ON_START_FINISHED:
led.state = Led.BEACON_DARK # Ready.
logging.info('Say "OK, Google" then speak, or press Ctrl+C to quit...')
elif event.type == EventType.ON_CONVERSATION_TURN_STARTED:
led.state = Led.ON # Listening.
elif event.type == EventType.ON_END_OF_UTTERANCE:
led.state = Led.PULSE_QUICK # Thinking.
elif (event.type == EventType.ON_CONVERSATION_TURN_FINISHED
or event.type == EventType.ON_CONVERSATION_TURN_TIMEOUT
or event.type == EventType.ON_NO_RESPONSE):
led.state = Led.BEACON_DARK
elif event.type == EventType.ON_ASSISTANT_ERROR and event.args and event.args['is_fatal']:
sys.exit(1)
def main():
logging.basicConfig(level=logging.INFO)
credentials = auth_helpers.get_assistant_credentials()
with Board() as board, Assistant(credentials) as assistant:
for event in assistant.start():
process_event(board.led, event)
if __name__ == '__main__':
main()
``` | 2020/01/16 | [
"https://Stackoverflow.com/questions/59777244",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12727174/"
] | You have to return the value of `Generate()` to the Button.
```
<asp:Button OnClientClick="return Generate()"
<script>
var test = 2;
function Generate() {
if (test === 1)
return true;
else
return false;
}
</script>
``` | Your problem lies in `OnClientClick="Generate();" OnClick="Button2_Click"`.
You're assigning two inline click events here, so they'll both trigger independently.
You have to handle the `Button2_Click` function from inside `Generate`.
One way you might do this is to call `Button2_Click` in the else condition:
```
if (count > 0) {
alert('You must complete all of the Actions fields first!');
return false;
} else {
//All Action fields complete, continue to code behind...
Button2_Click();
}
``` | 1,919 |
40,245,703 | I'm using python to hit a foreman API to gather some facts about all the hosts that foreman knows about. Unfortunately, there is not *get-all-hosts-facts* (or something similar) in the v1 foreman API, so I'm having to loop through all the hosts and get the information. Doing so has lead me to an annoying problem. Each call to a given host return a JSON object like so:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
```
This is totally fine, the issue arises when I append the next host's information. I then get a json file that looks something like this:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}{
"host2.com": {
"apt_update_last_success": "1452703454",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
```
Here's the code that's doing this:
```
for i in hosts_data:
log.info("Gathering host facts for host: {}".format(i['host']['name']))
try:
facts = requests.get(foreman_host+api+"hosts/{}/facts".format(i['host']['id']), auth=(username, password))
if hosts.status_code != 200:
log.error("Unable to connect to Foreman! Got retcode '{}' and error message '{}'"
.format(hosts.status_code, hosts.text))
sys.exit(1)
except requests.exceptions.RequestException as e:
log.error(e)
facts_data = json.loads(facts.text)
log.debug(facts_data)
with open(results_file, 'a') as f:
f.write(json.dumps(facts_data, sort_keys=True, indent=4))
```
Here's what I need the file to look like:
```
{
"host1.com": {
"apt_update_last_success": "1452187711",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
},
"host2.com": {
"apt_update_last_success": "1452703454",
"architecture": "amd64",
"augeasversion": "1.2.0",
"bios_release_date": "06/03/2015",
"bios_vendor": "Dell Inc."
}
}
``` | 2016/10/25 | [
"https://Stackoverflow.com/questions/40245703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2872525/"
] | It would be better to assemble all of your data into one dict and then write it all out one time, instead of each time in the loop.
```
d = {}
for i in hosts_data:
log.info("Gathering host facts for host: {}".format(i['host']['name']))
try:
facts = requests.get(foreman_host+api+"hosts/{}/facts".format(i['host']['id']), auth=(username, password))
if hosts.status_code != 200:
log.error("Unable to connect to Foreman! Got retcode '{}' and error message '{}'"
.format(hosts.status_code, hosts.text))
sys.exit(1)
except requests.exceptions.RequestException as e:
log.error(e)
facts_data = json.loads(facts.text)
log.debug(facts_data)
d.update(facts_data) #add to dict
# write everything at the end
with open(results_file, 'a') as f:
f.write(json.dumps(d, sort_keys=True, indent=4))
``` | Instead of writing json inside the loop, insert the data into a `dict` with the correct structure. Then write that dict to json when the loop is finished.
This assumes your dataset fits into memory. | 1,920
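The core of both answers, merging each per-host object into one dict and serializing once, fits in a few stdlib-only lines (hostnames shortened for the sketch):

```python
import json

# Two per-host responses, as they arrive from the API one at a time
host1 = json.loads('{"host1.com": {"architecture": "amd64"}}')
host2 = json.loads('{"host2.com": {"architecture": "amd64"}}')

merged = {}
for facts in (host1, host2):
    merged.update(facts)  # one top-level key per host, no back-to-back objects

out = json.dumps(merged, sort_keys=True, indent=4)
print(out)  # a single valid JSON object containing both hosts
```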
66,528,149 | I'm trying to deploy a Django application on Google App Engine. I followed the instructions given [here](https://cloud.google.com/python/django/appengine#macos-64-bit). The only problem is that when I execute the command `gcloud app deploy` I then get the error:
>
> ERROR: (gcloud.app.deploy) NOT\_FOUND: Unable to retrieve P4SA: [service-290226755412@gcp-gae-service.iam.gserviceaccount.com] from GAIA. Could be GAIA propagation delay or request from deleted apps.
>
>
>
I didn't find anything online and can't get through.
Thanks in advance! | 2021/03/08 | [
"https://Stackoverflow.com/questions/66528149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13922881/"
] | Same problem with Node: I **just waited and tried the command again** later, and that worked for me (in the same shell, no steps in between). | I had the same issue when deploying a Java application to an App Engine.
Enabling the 'Cloud Build API' under the APIs & Services section in the Google console resolved the issue for me. | 1,923 |
4,042,995 | >
> **Possible Duplicate:**
>
> [What is the equivalent of the C# “using” block in IronPython?](https://stackoverflow.com/questions/1757296/what-is-the-equivalent-of-the-c-using-block-in-ironpython)
>
>
>
I'm writing some IronPython using some disposable .NET objects, and wondering whether there is a nice "pythonic" way of doing this. Currently I have a bunch of finally statements (and I suppose there should be checks for None in each of them too - or will the variable not even exist if the constructor fails?)
```
def Save(self):
    filename = "record.txt"
    data = "{0}:{1}".format(self.Level, self.Name)
    isf = IsolatedStorageFile.GetUserStoreForApplication()
    try:
        isfs = IsolatedStorageFileStream(filename, FileMode.Create, isf)
        try:
            sw = StreamWriter(isfs)
            try:
                sw.Write(data)
            finally:
                sw.Dispose()
        finally:
            isfs.Dispose()
    finally:
        isf.Dispose()
``` | 2010/10/28 | [
"https://Stackoverflow.com/questions/4042995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7532/"
] | Python 2.6 introduced the `with` statement, which provides for automatic cleanup of objects when execution leaves the `with` block. I don't know if the IronPython libraries support it, but it would be a natural fit.
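A pure-Python illustration of why `with` replaces the nested try/finally chains. The `Disposable` class below is a hypothetical stand-in for a .NET `IDisposable` object; in IronPython the real objects supply `Dispose` themselves:

```python
class Disposable:
    """Hypothetical stand-in for a .NET IDisposable object."""
    def __init__(self, name, log):
        self.name, self.log = name, log

    def Dispose(self):
        self.log.append("disposed " + self.name)

    # Context-manager protocol: __exit__ runs even if the body raises,
    # which is exactly what the nested try/finally blocks guarantee.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.Dispose()
        return False  # do not swallow exceptions

log = []
with Disposable("outer", log), Disposable("inner", log):
    log.append("writing")
print(log)  # cleanup runs inner-first, mirroring the finally chain
```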
Dup question with authoritative answer: [What is the equivalent of the C# "using" block in IronPython?](https://stackoverflow.com/questions/1757296/what-is-the-equivalent-of-the-c-using-block-in-ironpython) | If I understand correctly, it looks like the equivalent is the [`with`](http://docs.python.org/reference/compound_stmts.html#with) statement. If your classes define context managers, they will be called automatically after the with block. | 1,933 |
54,560,326 | So I'm fairly new to Python and coding in general, and I decided to make a text-based trivia game as a sort of test. I've coded everything for the first question, code which I will repeat for each question. My problem is specifically on lines 10-11. The intended function is to add one to the current score, then print the scoregained variable, which uses format to tell you the score. But it doesn't work: the variable still prints fine, but the score variable isn't added to, remaining at zero.
```
TRIVIA = input('TRIVIA: press enter to start')
strike = int('3')
strikesleft = ('strikes left: {} ').format(strike)
score = int('0')
scoregained = ('Your score is {}' ).format(score)
Q1 = input('What is the diameter of the earth? ')
if Q1 == ('7917.5'):
    print('correct!')
    input()
    score = score+1
    print(scoregained)
    input()
``` | 2019/02/06 | [
"https://Stackoverflow.com/questions/54560326",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11024647/"
] | `scoregained` isn't a function, it is a variable you assign but do not update. This would be a great place for a function, which you can reuse whenever you want to print the score. For example:
```
def print_score(score):
    print('Your score is {}'.format(score))
```
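The underlying gotcha in isolation: `str.format` builds the string once, at assignment time, so a later change to `score` never reaches the already-built string. A minimal reproduction, separate from the game code:

```python
score = 0
scoregained = 'Your score is {}'.format(score)  # formatted once, right here

score = score + 1   # updates the int, not the string built earlier
print(scoregained)  # still prints: Your score is 0

# Re-formatting after the update picks up the new value.
print('Your score is {}'.format(score))  # prints: Your score is 1
```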
You can reuse this function anytime you wish to print the score. | I'd probably use something like:
```
def score_stats(score):
    print('Your score is {}'.format(score))

input('TRIVIA: press enter to start')
score, strike = 0, 3
strikesleft = 'strikes left: {}'.format(strike)
score_stats(score)
Q1 = input('What is the diameter of the earth?')
if Q1 == '7917.5':
    print('correct!')
    score += 1
    score_stats(score)
else:
    print('incorrect!')
    score_stats(score)
Q2...
```
---
Output:
```
TRIVIA: press enter to start
Your score is 0
What is the diameter of the earth? 7917.5
correct!
Your score is 1
``` | 1,938 |