qid (int64, 46k to 74.7M) | question (string, 54 to 37.8k chars) | date (string, 10 chars) | metadata (sequence of 3 URLs) | response_j (string, 29 to 22k chars) | response_k (string, 26 to 13.4k chars) | __index_level_0__ (int64, 0 to 17.8k)
---|---|---|---|---|---|---
69,647,562 | Why does initializing the array `arr` work when it is done as a list comprehension (I think that is what the following example is --not sure), but not when each array location is initialized individually?
For example, this works:
(a)
```
arr=[]
arr=[0 for i in range(5)]
```
but (b),
```
arr=[]
arr[0]=0
arr[1]=0
```
etc, doesn't.
Isn't the `arr=[0 for i in range(5)]` instruction essentially doing what is done in (b) above in one fell swoop?
I realize that array sizes need to be predefined (or allocated). So, I can understand something like
`arr= [0]*5`
or using numpy,
`arr = np.empty(10, dtype=object)`
would work.
However, I don't see how (a) preallocates the array dimension "ahead of time". How does python interpret (a) vs. (b) above? | 2021/10/20 | [
"https://Stackoverflow.com/questions/69647562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7242713/"
] | Firstly, there is no point in declaring a variable if you rebind it later anyway:
```
arr = [] # <-- this line is entirely pointless
arr = [0 for i in range(5)]
```
Secondly, the two expressions
```
[0 for i in range(5)]
[0] * 5
```
**create** a new `list` object, whereas
```
arr[0] = 0
```
mutates an existing one, namely it wants to **re**assign the first element of `arr`. Since this doesn't exist, you will see an error. You could do instead:
```
arr = []
arr.append(0)
arr.append(0)
```
to fill an initially empty `list` incrementally.
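For reference, the failing element-wise assignment raises:
```
arr = []
arr[0] = 0
# IndexError: list assignment index out of range
```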
Note that a Python list is not an `Array` in, let's say, the Java sense that it has a predefined size. It is more like an `ArrayList`. | It doesn't pre-allocate. It's basically just appending in a loop, just in a nicer form (syntactic sugar).
Why doesn't it pre-allocate? Because to pre-allocate, we would need to know the length of the iterable, which may be a generator that the length check would use up. Also, a comprehension can have an `if` clause, limiting what eventually gets into the list. (See also generator comprehensions, which create generators: no pre-allocation there because they are lazily evaluated.)
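A quick illustration of why the length cannot be known up front:
```
gen = (i for i in range(10) if i % 2)  # length unknown until consumed
lst = [x for x in gen]                 # built by appending: [1, 3, 5, 7, 9]
```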
---
Let's take a look at documentation:
<https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions>
>
> A list comprehension consists of brackets containing an expression followed by a for clause, then zero or more for or if clauses. The result will be a new list resulting from evaluating the expression in the context of the for and if clauses which follow it. For example, this listcomp combines the elements of two lists if they are not equal:
>
>
>
```
>>> [(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
```
>
> and it’s equivalent to:
>
>
>
```
>>> combs = []
>>> for x in [1,2,3]:
... for y in [3,1,4]:
... if x != y:
... combs.append((x, y))
...
>>> combs
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
```
See? Equivalent to appending, not to a pre-allocated (`n*[0]`) list. | 16,537 |
15,213,428 | Recently I was going through the "Using Python App Engine with Google Cloud SQL" tutorial on the Google Developers Academy website. However, I got stuck on the first part of the exercise, "Building an application with a local MySQL instance": I could not connect the sample code (main.py) to my local MySQL instance. I wonder if anyone has found a solution to this problem.
It would be great if you could share with me how you set up your MySQL and configured it so that the GAE's sandbox would be able to access the MySQL-python connector. | 2013/03/04 | [
"https://Stackoverflow.com/questions/15213428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2130139/"
] | The answer is to use `add_action('edit_link','save_data')` and `add_option('name_of_option')` instead of `add_post_meta`; view the full results here: [MetaBox Links](https://gist.github.com/davidchase/df9adeb1e03b88691899) | After some experiments, I figured out how to save data from a custom metabox in the link manager into the db as a post meta key/value (wp\_postmeta).
If someone needs, here is a working example:
```
add_action( 'add_meta_boxes', 'add_link_date' );
function add_link_date()
{
add_meta_box( 'link-date-meta-box', 'Link Date', 'link_date', 'link', 'normal', 'high' );
}
function link_date( $link )
{
$values = get_post_custom( $link->link_id );
$date = isset( $values['link_date'] ) ? esc_attr( $values['link_date'][0] ) : '';
wp_nonce_field( plugin_basename( __FILE__ ), 'link_date_nonce' );
?>
<p>
<label for="link_date_text">Link Date</label>
<input type="text" name="link_date_text" id="link_date_text" value="<?php echo $date; ?>" />
</p>
<?php
}
add_action( 'edit_link', 'myplugin_save_postdata' );
function myplugin_save_postdata( ) {
if ( defined( 'DOING_AUTOSAVE' ) && DOING_AUTOSAVE )
return;
if ( !isset( $_POST['link_date_nonce'] ) || !wp_verify_nonce( $_POST['link_date_nonce'], plugin_basename( __FILE__ ) ) )
return;
$link_id = $_POST['link_id'];
$linkDate = sanitize_text_field( $_POST['link_date_text'] );
add_post_meta( $link_id, 'link_date', $linkDate, true ) or
update_post_meta( $link_id, 'link_date', $linkDate );
}
``` | 16,539 |
50,876,292 | Given its link, I'd like to capture an online video (say from YouTube) for further processing **without downloading it on the disk**. What I mean by this is that I'd like to load it directly to memory whenever possible. According to these links:
<http://answers.opencv.org/question/24012/reading-video-stream-from-ip-camera-in-opencv-java/#24013>
<http://answers.opencv.org/question/24154/how-to-using-opencv-api-get-web-video-stream/#24156>
<http://answers.opencv.org/question/133/how-do-i-access-an-ip-camera/>
<https://pypi.org/project/pafy/>
it should be doable. My attempt looks like this:
```
import cv2
import pafy
vid = pafy.new("https://www.youtube.com/watch?v=QuELiw8tbx8")
vid_cap = cv2.VideoCapture()
vid_cap.open(vid.getbest(preftype="webm").url)
```
However it fails with an error
```
(python:12925): GLib-GObject-CRITICAL **: 14:48:56.168: g_object_set: assertion 'G_IS_OBJECT (object)' failed
False
```
How can I achieve my goal using python? | 2018/06/15 | [
"https://Stackoverflow.com/questions/50876292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4671908/"
] | You can achieve this by using `youtube-dl` and `ffmpeg`:
* Install the latest version of [`youtube-dl`](https://rg3.github.io/youtube-dl/download.html).
* Then do `sudo pip install --upgrade youtube_dl`
* Build `ffmpeg` with HTTPS support. You can do this by [turning on the `--enable-gnutls` option](https://askubuntu.com/a/650617/486771).
Once the installations are complete, it's time to test `youtube-dl` in the terminal. We'll be using [this youtube video](https://www.youtube.com/watch?v=HECa3bAFAYk) for testing.
First we get the list of formats available for this video:
```
youtube-dl --list-formats https://www.youtube.com/watch?v=HECa3bAFAYk
```
Select a `format code` of your choice. I want the **144p** resolution so I select **160**.
[](https://i.stack.imgur.com/VausY.png)
Next we get the video url for our format of choice by:
```
youtube-dl --format 160 --get-url https://www.youtube.com/watch?v=HECa3bAFAYk
```
>
> <https://r3---sn-4g5e6nz7.googlevideo.com/videoplayback?clen=184077&aitags=133%2C134%2C160%2C242%2C243%2C278&fvip=3&requiressl=yes&signature=5D21FFD906226C7680B26ACEF996B78B6A31F7C9.31B1115DB13F096AA5968DB2838E22A0D6A2EDCB&source=youtube&mn=sn-4g5e6nz7%2Csn-h0jeen7y&xtags=tx%3D9486108&itag=160&mime=video%2Fmp4&mt=1529091799&ms=au%2Conr&ei=XxckW-73GNCogQfqrryQAg&expire=1529113535&mm=31%2C26&c=WEB&keepalive=yes&id=o-AJExEG49WtIUkrF7OikaaGBCfKntDl75xCoO5_9cL-eP&ip=95.91.202.147&sparams=aitags%2Cclen%2Cdur%2Cei%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Ckeepalive%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Crequiressl%2Csource%2Cxtags%2Cexpire&key=yt6&lmt=1526699176943888&dur=25.375&pl=22&gir=yes&mv=m&initcwndbps=1155000&ipbits=0&ratebypass=yes>
>
>
>
Finally we can play this video url in either `ffplay` or `vlc`. But instead of copying and pasting, we can do this in one command:
```
ffplay -i $(youtube-dl --format 160 --get-url https://www.youtube.com/watch?v=HECa3bAFAYk)
```
Now that we have confirmed that `youtube-dl` and `ffmpeg` works, we can write a Python script to process the frames in OpenCV. See [this link](https://github.com/rg3/youtube-dl/blob/master/README.md#embedding-youtube-dl) for more Python options.
```
import cv2
import numpy as np
import youtube_dl
if __name__ == '__main__':
    video_url = 'https://www.youtube.com/watch?v=HECa3bAFAYk'
ydl_opts = {}
# create youtube-dl object
ydl = youtube_dl.YoutubeDL(ydl_opts)
# set video url, extract video information
info_dict = ydl.extract_info(video_url, download=False)
# get video formats available
formats = info_dict.get('formats',None)
for f in formats:
# I want the lowest resolution, so I set resolution as 144p
if f.get('format_note',None) == '144p':
#get the video url
url = f.get('url',None)
# open url with opencv
cap = cv2.VideoCapture(url)
# check if url was opened
if not cap.isOpened():
print('video not opened')
exit(-1)
while True:
# read frame
ret, frame = cap.read()
# check if frame is empty
if not ret:
break
# display frame
cv2.imshow('frame', frame)
if cv2.waitKey(30)&0xFF == ord('q'):
break
# release VideoCapture
cap.release()
cv2.destroyAllWindows()
``` | Using pafy you can have a more elegant solution:
```
import cv2
import pafy
url = "https://www.youtube.com/watch?v=NKpuX_yzdYs"
video = pafy.new(url)
best = video.getbest(preftype="mp4")
capture = cv2.VideoCapture()
capture.open(best.url)
success,image = capture.read()
while success:
cv2.imshow('frame', image)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
success,image = capture.read()
cv2.destroyAllWindows()
capture.release()
``` | 16,540 |
27,321,523 | I have a Raspberry Pi that I use as a multi-purpose 24/7 device for DLNA, CIFS, VPN etc. Now I bought a TellStick, that is a USB device that can send 433MHz radio commands to wireless power switches, dimmers etc. The manufacturer offers sources and tools for linux, which is really great, btw.
Using a special command (named tdtool) I can send commands to my power switches, e.g.
```
tdtool --on 1
```
This switches on device 1. This works very well and stably, so I want to get away from shell commands in order to make the handling easier. My idea is to set up a very simple web server that only needs to be able to receive GET or POST requests and trigger some action like running the command "tdtool --off 3". So the web server does not even need to serve pages, it just needs to listen to requests.
I want to create a HTTP-based solution because that would allow me to use my smartphone as a remote control. There is an Android app named "Tasker" that is awesome on its own, but it also allows sending customized HTTP requests based on certain conditions, so that I could make my lights go bright when I come home (and Tasker recognizes a connection to my WIFI network or similar).
As the Raspberry is not the most powerful piece of hardware, I'd like to keep things as simple as possible. Basically, I need this:
A HTTP get request comes in, for example:
```
/switch?device=1&action=on
```
According to this request, the server should translate that *somehow* into this:
```
tdtool --on 1
```
I am sure that I would find a way to build something like that with Apache and PHP, but I think that would be somewhat overdressed in my case. What would you recommend? Is there some cool python magic that could make this happen? Or some fancy mini webserver with a CGI script? Any thoughts and code samples are greatly appreciated, thanks in advance! | 2014/12/05 | [
"https://Stackoverflow.com/questions/27321523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1901272/"
] | While your question is too "opinion-like", there's an almost instant solution:
[nginx - How to run a shell script on every request?](https://stackoverflow.com/questions/22891148/nginx-how-to-run-a-shell-script-on-every-request)
But since you're talking about an R-Pi, you may find Python's builtin [CGIHTTPServer](https://docs.python.org/2/library/cgihttpserver.html) (Python 2) or [http.server](https://docs.python.org/3/library/http.server.html) (Python 3) modules more suitable for the task of executing a shell command.
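For example, a minimal sketch with Python 3's builtin `http.server` (the port and the exact `tdtool` invocation are illustrative, and there is no authentication, so keep it on a trusted LAN):
```
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class SwitchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == '/switch':
            params = parse_qs(url.query)
            device = params['device'][0]
            action = params['action'][0]  # e.g. "on" or "off"
            subprocess.call(['tdtool', '--' + action, device])
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

HTTPServer(('', 8080), SwitchHandler).serve_forever()
```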
| Here is a full & working RealLife™ Perl example
----------------------------------------------
...using [Dancer](https://metacpan.org/pod/Dancer)
```
# cpan Dancer
$ dancer -a MyApp
$ cd MyApp
$ cat ./lib/MyApp.pm # needs to be edited, see below
$ bin/app.pl
```
Now you can call the URL
```
http://127.0.0.1:3000/switch?device=1&action=on
```
`$cmd` will now be executed.
The `./lib/MyApp.pm` :
```
package MyApp;
use Dancer ':syntax';
our $VERSION = '0.1';
get '/switch' => sub {
my $var = params;
my $device = $var->{device};
my $action = "--" . $var->{action};
# building custom system command
my $cmd = "tdtool $action $device";
# running the command
`$cmd`;
return "$cmd\nexecuted\n";
};
true;
```
Here is another full & working RealLife™ example using [php](/questions/tagged/php "show questions tagged 'php'")
--------------------------------------------------------------------------------------------------------------
```
<?php
header("HTTP/1.1 200 OK");
if (isset($_REQUEST['action'], $_REQUEST['device'])) {
$device = $_REQUEST['device'];
$action = '--' . $_REQUEST['action'];
$cmd = "tdtool $action $device";
system("$cmd");
echo "Command<br>$cmd<br>executed...";
}
?>
```
The url is :
```
http://127.0.0.1/switch.php?device=1&action=on
```
This requires an `HTTP server` binding on port 80 and the script `switch.php` to be at the top of your `DocumentRoot` (for this example). | 16,545 |
73,479,698 | I am trying to build a Docker image but when I build it, I get the error message : 'E: Unable to locate package libxcb-util1'.
Here is my Dockerfile :
```
`# $DEL_BEGIN`
FROM python:3.9.7-buster
WORKDIR /prod
COPY design_interface design_interface
COPY requirements.txt requirements.txt
COPY setup.py setup.py
RUN pip install --upgrade pip
RUN apt-get update && apt install --assume-yes apt-utils
RUN apt-get update && pip install .
RUN apt-get update && pip install opencv-python
RUN apt-get update && apt install --assume-yes libxcb-icccm4
RUN apt-get update && apt install --assume-yes libxcb-image0
RUN apt-get update && apt-get install libxcb-util1
CMD ["python3","design_interface/MainWindow.py"]
`# $DEL_END`
```
The error message is :
```
Step 11/12 : RUN apt-get update && apt-get install libxcb-util1
---> Running in a6000629d968
Hit:1 http://deb.debian.org/debian buster InRelease
Hit:2 http://security.debian.org/debian-security buster/updates InRelease
Hit:3 http://deb.debian.org/debian buster-updates InRelease
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package libxcb-util1
The command '/bin/sh -c apt-get update && apt-get install libxcb-util1' returned a non-zero code: 100
```
What is more difficult to understand is why the command line 'sudo apt-get update && apt-get install libxcb-util1' works well on my computer but does not work when building the Docker image with the Dockerfile. | 2022/08/24 | [
"https://Stackoverflow.com/questions/73479698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19739078/"
] | Updating gradle solves the problem.
There are different ways to update the gradle, as explained in their official website: <https://gradle.org/install/>
*Assuming that you are a windows user*:
Downloading binary files of gradle and extracting the folder to the directory "c:/gradle" is enough.
* Download binary files of gradle for the next version
* Update path of `gradle` in PATH variable in your computer
[](https://i.stack.imgur.com/mSaAz.png)
* You should define another variable as follows:
[](https://i.stack.imgur.com/8iqxp.png)
* The variable name should be the same, and its value should be the same as the one you use. While creating your cordova environment, this url is used.
* Save and restart your IDE's or CLI windows
* Test it out with `gradle -v` and you should see the active version of gradle on your system.
* Be prepared for new problems in your cordova environment, because you have an upgraded gradle now | To fix the issue, I've reverted to `cordova-android` version `9.1.0`. I've no idea, as of now, why `cordova-android` version `10` points to a `gradle` which, as of now, isn't possible to download... | 16,546 |
66,797,173 | I am using the transformers pipeline to perform sentiment analysis on sample texts from 6 different languages. I tested the code in my local Jupyterhub and it worked fine. But when I wrap it in a flask application and create a docker image out of it, the execution hangs at the pipeline inference line and it takes forever to return the sentiment scores.
* mac os catalina 10.15.7 (no GPU)
* Python version : 3.8
* Transformers package : 4.4.2
* torch version : 1.6.0
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
results = classifier(["We are very happy to show you the Transformers library.", "We hope you don't hate it."])
print([i['score'] for i in results])
```
The above code works fine in Jupyter notebook and it has provided me the expected result
```
[0.7495927810668945,0.2365245819091797]
```
So now if I create a docker image with the flask wrapper, it gets stuck at the `results = classifier([input_data])` line and the execution runs forever.
My folder structure is as follows:
```
- src
|-- app
|--main.py
|-- Dockerfile
|-- requirements.txt
```
I used the below `Dockerfile` to create the image
```
FROM tiangolo/uwsgi-nginx-flask:python3.8
COPY ./requirements.txt /requirements.txt
COPY ./app /app
WORKDIR /app
RUN pip install -r /requirements.txt
RUN echo "uwsgi_read_timeout 1200s;" > /etc/nginx/conf.d/custom_timeout.conf
```
And my `requirements.txt` file is as follows:
```
pandas==1.1.5
transformers==4.4.2
torch==1.6.0
```
My `main.py` script looks like this:
```
from flask import Flask, json, request, jsonify
import traceback
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
app = Flask(__name__)
app.config["JSON_SORT_KEYS"] = False
model_name = 'nlptown/bert-base-multilingual-uncased-sentiment'
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
@app.route("/")
def hello():
return "Model: Sentiment pipeline test"
@app.route("/predict", methods=['POST'])
def predict():
json_request = request.get_json(silent=True)
input_list = [i['text'] for i in json_request["input_data"]]
results = nlp(input_list) ########## Getting stuck here
for result in results:
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
score_list = [round(i['score'], 4) for i in results]
return jsonify(score_list)
if __name__ == "__main__":
app.run(host='0.0.0.0', debug=False, port=80)
```
My input payload is of the form
```
{"input_data" : [{"text" : "We are very happy to show you the Transformers library."},
{"text" : "We hope you don't hate it."}]}
```
I tried looking into the transformers github issues but couldn't find a relevant one. The execution works fine even when using the flask development server, but it runs forever when I wrap it up and create a docker image. I am not sure if I am missing any additional dependency to be included while creating the docker image.
Thanks. | 2021/03/25 | [
"https://Stackoverflow.com/questions/66797173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10422855/"
] | I was having a similar issue. It seems that starting the app somehow pollutes the memory of transformers models. Probably something to do with how Flask does threading, but no idea why. What fixed it for me was doing the things that are causing trouble (loading the models) in a different thread.
```
import threading
def preload_models():
"LOAD MODELS"
return 0
def start_app():
app = create_app()
register_handlers(app)
preloading = threading.Thread(target=preload_models)
preloading.start()
preloading.join()
return app
```
First reply here. I would be really glad if this helps. | Flask uses port 5000. In creating a docker image, it's important to make sure that the port is set up this way. Replace the last line with the following:
```
app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 5000)))
```
Also be sure to `import os` at the top.
Lastly, in `Dockerfile`, add
```
EXPOSE 5000
CMD ["python", "./main.py"]
``` | 16,547 |
3,887,393 | I'm hacking a quick and dirty python script to generate some reports as static html files.
What would be a good module to easily build static html files outside the context of a web application?
My goals are simplicity (the HTML will not be very complex) and ease of use (I don't want to write a lot of code just to output some html tags).
I found two alternatives on my first Google search:
* markup.py - <http://markup.sourceforge.net/>
* HTML.py - <http://www.decalage.info/en/python/html>
Also, I feel that using a templating engine would be overkill, but if you disagree please say so and why.
Any other recommendation? | 2010/10/08 | [
"https://Stackoverflow.com/questions/3887393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2954/"
] | Maybe you could try [Markdown](http://www.freewisdom.org/projects/python-markdown/) instead, and convert it to HTML on the fly?
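For instance, a minimal sketch with the `markdown` package (the input string is just an illustration):
```
import markdown

report_text = "# Report\n\nAll *checks* passed."
html = markdown.markdown(report_text)  # returns an HTML fragment as a string
print(html)  # <h1>Report</h1>\n<p>All <em>checks</em> passed.</p>
```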
| You don't necessarily need something complex - for instance, here's a ~150 line library to generate HTML in a functional manner:
<http://github.com/Yelp/PushmasterApp/blob/master/pushmaster/taglib.py>
(Full disclosure, I work with the person who originally wrote that version, and I also use it myself.) | 16,548 |
6,699,201 | What would I have to do to make a Python application I am writing open up a web page in the default browser? It doesn't need to be told what the webpage is or anything, it'll be opening one that I've already chosen.
I found some documentation [here](http://docs.python.org/library/webbrowser.html) but I always get a syntax error from the symbols in the URL. Any help? | 2011/07/14 | [
"https://Stackoverflow.com/questions/6699201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/841843/"
] | The URL needs to be in a string.
```
import webbrowser

webbrowser.open('http://www.google.com/')
``` | Have a look at the `webbrowser` module. | 16,557 |
64,082,288 | I masked a *sorted* 1-D numpy array using the method below (which follows a solution proposed [here](https://stackoverflow.com/questions/64076440/accessing-a-large-numpy-array-while-preserving-its-order)):
```
def get_from_sorted(sorted,idx):
mask = np.zeros(sorted.shape, bool)
mask[idx] = True
return sorted[mask]
```
The python method returns the array after masking on the indexes `idx`. For example, if sorted=`np.array([0.1,0.2,0.3,0.4,0.5])` and idx=`np.array([4,0,1])`, then the method `get_from_sorted` should return `np.array([0.1,0.2,0.5])` (note that the order in the original array is preserved).
Question: I need to get the mapping between the indices of the items in the array after masking and those in the original list. In the example above, such a mapping is
```
0 -> 0
1 -> 1
2 -> 4
```
because 0.1, 0.2, and 0.5 are at the 0th, 1st, and 4th positions in `sorted`.
How can I program this mapping efficiently?
**Requirement on efficiency:** Efficiency is the key in my problem solving. Here, both "idx" and "sorted" is a 1-D array of 1 million elements, and idx is a 1-D array of about 0.5 million elements (taken from an image processing application). Thus, checking the elements of the masked array one by one, or in a vectorized fashion, against the original array, for example, using np.where, would not perform well in my case. Ideally, there should be a relatively simply mathematical relation between the indices in the masked array and the original sorted array. Any idea? | 2020/09/26 | [
"https://Stackoverflow.com/questions/64082288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/815653/"
] | I believe your goal is as follows.
* Your question has the following 2 questions.
1. You want to know the method for creating new Google Document including the text data.
2. You want to know the method for adding more text data to the existing Google Document.
* You want to achieve this using Drive API with googleapis for Node.js.
* You have already been able to get and put the file using Drive API.
Answer for question 1:
----------------------
In this answer, new Google Document is created by including the text data using Drive API.
### Modification points:
* In this case, it is required to convert the text to the stream type.
* When the text is converted to Google Document, `mimeType` is required to be included in `fileMetadata`.
When the above points are reflected in your script, it becomes as follows.
### Modified script:
From:
```
var content = "Content to be written in file"
var fileMetadata = {
name: filename,
parents: [rootFolderId]
};
var media = {
mimeType: 'application/vnd.google-apps.document',
body: content // In the form of string
};
```
To:
```
const stream = require("stream");
var filename = "sample filename"; // Please set the filename of created Google Document.
var rootFolderId = "root"; // Please set the folder ID.
var content = "Content to be written in file";
var bufferStream = new stream.PassThrough();
bufferStream.end(Uint8Array.from(Buffer.from(content, "binary")));
var fileMetadata = {
name: filename,
parents: [rootFolderId],
mimeType: "application/vnd.google-apps.document",
};
var media = {
mimeType: "text/plain", // <--- Added
body: bufferStream
};
```
* In this case, `stream` module is used.
Answer for question 2:
----------------------
In this answer, more text data is added to the existing Google Document using Drive API.
### Modification points:
* In this case, it is required to do the following flow.
1. Retrieve all the text data from the existing Google Document.
2. Add more text data to the retrieved texts.
3. Update the existing Google Document using the updated text data.
+ In this case, the method of "Files: update" in Drive API is used.
The sample script is as follows.
### Sample script:
```
const documentId = "###"; // Please set the Google Document ID of the existing Google Document.
drive.files.export(
{
fileId: documentId,
mimeType: "text/plain",
},
{ responseType: "stream" },
(err, { data }) => {
if (err) {
console.log(err);
return;
}
let buf = [];
data.on("data", (e) => buf.push(e));
data.on("end", () => {
const stream = require("stream");
const content = "\n" + "Added text data"; // Here, the text data is added to the existing text in Document.
buf.push(Buffer.from(content, "binary"));
const bufferStream = new stream.PassThrough();
bufferStream.end(Uint8Array.from(Buffer.concat(buf)));
var media = {
body: bufferStream,
};
drive.files.update(
{
fileId: documentId,
resource: {},
media: media,
fields: "id",
},
function (err, file) {
if (err) {
console.error(err);
return;
}
console.log(file.data.id);
}
);
});
}
);
```
* In this sample script, I used `const content = "\n" + "Added text data";` for adding more text data. If you don't want to insert the line break for this, please remove `"\n"`.
Note:
-----
* In order to add more text data, I think that you can also use Docs API. But in your goal, Drive API is used. So I proposed the method for using Drive API.
References:
-----------
* [Files: create](https://developers.google.com/drive/api/v3/reference/files/create)
* [Class: stream.PassThrough](https://nodejs.org/api/stream.html#stream_class_stream_passthrough)
* [Files: update](https://developers.google.com/drive/api/v3/reference/files/update) | From the [Media Uploads example](https://github.com/googleapis/google-api-nodejs-client#media-uploads) for `googleapis@60.0.1`, you can create a Google Document with a given title and content inside a given folder with
```
const drive = google.drive({ version: 'v3', auth });
const filename = '<filename>';
const parentFolderId = '<parent-folder-id>';
const content = '<file-content>';
const requestBody = {
name: filename,
parents: [parentFolderId],
mimeType: 'application/vnd.google-apps.document',
};
const media = {
mimeType: 'text/plain',
body: content,
};
await drive.files.create({
requestBody,
media,
fields: 'id',
});
```
To perform modifications on the document, best use the [Docs API](https://developers.google.com/docs/api/quickstart/nodejs). It offers fine control over the document modifications.
If you're looking for a **simple solution to update a Google Document's content using the Drive API**, a slightly coarser approach to using the Docs API is
```
drive = google.drive({ version: 'v3', auth });
const fileId = '<file-id>';
const newContent = '<new content>';
const media = {
mimeType: 'text/plain',
body: newContent,
};
await drive.files.update({
fileId,
media,
});
```
For **appending text to a document using the Drive API**, you can use something along the lines of
```
const drive = google.drive({ version: 'v3', auth });
const fileId = '<file-id>';
const contentToAppend = '<new content>';
const { data: prevContent } = await drive.files.export({
fileId,
mimeType: 'text/plain',
});
const newContent = prevContent + contentToAppend;
const media = {
mimeType: 'text/plain',
body: newContent,
};
await drive.files.update({
fileId,
media,
});
``` | 16,558 |
57,358,927 | I would like to use the twilight or twilight\_shifted colormap in my 2.7 python build, but it seems to be python 3 only? Is there some way to manually add it? | 2019/08/05 | [
"https://Stackoverflow.com/questions/57358927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1608765/"
] | `twilight` was added in matplotlib v3.0, which is python 3 only. But we can find where it was added in the source code and re-engineer it.
In the code below, you just need to grab the data used for `twilight` from the matplotlib source on github, by following this [link](https://github.com/matplotlib/matplotlib/blob/f2116d82dfd6b82fe178230766d95ea9ac2b0c8c/lib/matplotlib/_cm_listed.py#L1288).
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
_twilight_data = [ # data too long for stack overflow. get it from here:
# https://github.com/matplotlib/matplotlib/blob/f2116d82dfd6b82fe178230766d95ea9ac2b0c8c/lib/matplotlib/_cm_listed.py#L1288
]
_twilight_shifted_data = (_twilight_data[len(_twilight_data)//2:] +
_twilight_data[:len(_twilight_data)//2])
_twilight_shifted_data.reverse()
cmaps = {}
for (name, data) in (('twilight', _twilight_data),
('twilight_shifted', _twilight_shifted_data)):
cmaps[name] = colors.ListedColormap(data, name=name)
# generate reversed colormap
name = name + '_r'
cmaps[name] = colors.ListedColormap(list(reversed(data)), name=name)
fig, ax = plt.subplots()
p = ax.pcolormesh(np.arange(25).reshape(5, 5), cmap=cmaps['twilight'])
fig.colorbar(p, ax=ax)
plt.show()
```
That creates a dict with `twilight`, `twilight_r`, `twilight_shifted` and `twilight_shifted_r` colormaps.
The script also produces this test image:
[](https://i.stack.imgur.com/a4XPo.png) | You can create a new custom colormap as shown in this [tutorial](https://matplotlib.org/3.1.0/tutorials/colors/colormap-manipulation.html).
The data for the "twilight" and "twilight\_shifted" colormaps is [here](https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/_cm_listed.py). | 16,559 |
65,579,018 | **What I intend to do :**
I have an excel file with Voltage and Current data which I would like to extract from a specific sheet say 'IV\_RAW'. The values are only from 4th row and are in columns D and E.
Lets say the values look like this:
| V(voltage) | I(Current) |
| --- | --- |
| 47 | 1 |
| 46 | 2 |
| 45 | 3 |
| 0 | 4 |
| -0.1 | 5 |
| -10 | 5 |
Now, I just want to take out only the values starting with a voltage (V) of 45, and it **shouldn't take negative voltages**. The corresponding current (I) values also need to be taken out. This has to be done for multiple excel files, so starting from a particular row number cannot be done; instead, the voltage values should be the criterion.
**What I know:**
I know only how to take out the entire set of values using openxyl:
```
loc = ("path")
wb = load_workbook("Data") #thefilename
ws = wb["IV_raw"] #theactiveworksheet
#to extract the voltage and current data:
for row in ws.iter_rows(min_row=1, max_col=3, max_row=2, values_only=True):
print(row)
```
***I am a new coder and new to python, so it will be really helpful if you guys could help. If there is a simplified version with `pandas`, it will be really great.
Thank you in advance*** | 2021/01/05 | [
"https://Stackoverflow.com/questions/65579018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14944185/"
] | The following uses `pandas` which you should definitly take a look at. with `sheet_name` you set the sheet\_name, `header` is the row index of the header (starting at 0, so Row 4 -> 3), `usecols` defines the columns using A1 notation.
The last line filters the dataframe. If I understand correctly, you want the voltages from 45 down to 0 and no negative values; that's what the example does, and `df` is your resulting dataframe.
```
import pandas as pd
file_loc = "path.xlsx"
df = pd.read_excel(file_loc,
sheet_name = 'IV_raw',
header = 3,
usecols = "D:E")
df = df[(df['V(voltage)'] >= 0) & (df['V(voltage)'] <= 45)]
``` | you can try this,
```
import openpyxl
tWorkbook = openpyxl.load_workbook("YOUR_FILEPATH")
tDataBase = tWorkbook["IV_raw"]  # select the sheet by name rather than relying on the active one
# cell references for the first data row: voltage in column D, current in column E
voltageVal = "D4"
currentVal = "E4"
V = tDataBase[voltageVal].value
I = tDataBase[currentVal].value
``` | 16,561 |
26,513,125 | I have some django view handler functions which are structured like this
```
def view1(request):
# Check for authorization
if not isAuthorized(request):
return HttpResponse('Foo error', status=401)
return HttpResponse('view1 data')
def view2(request):
# Check for authorization
if not isAuthorized(request):
return HttpResponse('Foo error', status=401)
return HttpResponse('view2 data')
def view3(request):
# Check for authorization
if not isAuthorized(request):
return HttpResponse('Foo error', status=401)
return HttpResponse('view3 data')
```
I want to make this part :
```
# Check for authorization
if not isAuthorized(request):
return HttpResponse('Foo error', status=401)
```
some sort of one-liner, so that I do not have to repeat it in each view
In C this would have been a macro, but I've no clue how to achieve something similar in python
The check authorization function part is an example, it can be any check which has nothing to do with user authorization in particular
[Edit]
<https://stackoverflow.com/users/2337736/peter-deglopper> mentions decorators ...
To elaborate
I have a web API that can take either POST or GET
```
# return either GET or POST dict whichever exists
def getParams(request):
if request.method == 'GET':
return request.GET
return request.POST
```
The views do this :
```
def someAPI(request):
dct = getParams(request)
if not isValid(dct):
return HttpResponse('Bad request', status=401)
```
How could I achieve this with a decorator? I have that getParams() function in between... | 2014/10/22 | [
"https://Stackoverflow.com/questions/26513125",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20392/"
] | The line `int ans = tmp->next;` appears to be the source of the problem. This is attempting to take the `next` pointer in the node, convert it to an `int`, and return it. What you (almost certainly) want is to retrieve the data from the node and return that, with something like `int ans = tmp->num;`.
Of course, that's not saying the code is perfect otherwise (e.g., it seems to lack any attempt at checking for, not to mention dealing with, errors), but at least with that change, it stands some chance of working correctly under some (ideal) circumstances. | First, you are trying to delete the `tmp` node, but the top node still exists, and the value has to be returned as ans or top->next or, in this situation, top->num. Why do you initialize node `tmp` in the function when node `tmp` is a parameter? Why should node \* &top be in the function parameters instead of `tmp`?
value = top->num doesn't fix the problem, because he wants the pointer from the top of the linked list, not the random node passed in through the function parameters. To fix this problem, `Node * tmp` should equal top, and then value should be equal to tmp->num. Otherwise, all other problems have been fixed.
**//EDIT**
Ignore everything before //edit because all that is questions about his question that I now already know. I have compiled this code and it completely worked for me.
```
struct Node
{
int data;
Node *next;
};
// Pops the last node and returns its value.
// Assumes the list has at least two nodes, so `head` itself stays valid.
int pop(Node *head)
{
    // walk to the second-to-last node
    while(head->next->next != NULL)
    {
        head = head->next;
    }
    Node *tmp = head->next;   // the last node
    int value = tmp->data;
    head->next = NULL;        // detach it so the list no longer points at freed memory
    delete tmp;               // free it (the original `new Node` here only leaked memory)
    return value;
}
```
Compiled code link - <http://ideone.com/7EgBhf> | 16,564 |
56,439,798 | I have a camera running on an rtsp link. I want to write python code to check if the camera is live or dead, similar to using curl to check whether an http url is working or not. What similar command can one use to check an rtsp url's status?
I have tried using openRTSP in the terminal, and I want to use it as a python script:
openRTSP rtsp://test\_url\_here | 2019/06/04 | [
"https://Stackoverflow.com/questions/56439798",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6006820/"
] | You can call FFMPEG to extract a snapshot. If it succeeds, the stream is accessible.
Test this functionality (extracting a snapshot from rtsp) with <https://videonow.live/broadcast-ip-camera-or-stream/> per the tutorial at <https://broadcastlivevideo.com/publish-ip-camera-stream-to-website/>.
Command to extract should be something like:
```
/usr/bin/ffmpeg -y -rtsp_transport tcp -i rtsp://test_url_here -frames:v 1 snapshot.png
```
Then check if snapshot file was created and is not empty.
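A sketch of that check in Python (the helper name and timeout are illustrative, and `ffmpeg` is assumed to be on the PATH):
```
import os
import subprocess

def rtsp_is_alive(url, snapshot="snapshot.png", timeout=15):
    # try to grab a single frame; a dead stream makes ffmpeg fail or time out
    cmd = ["ffmpeg", "-y", "-rtsp_transport", "tcp", "-i", url,
           "-frames:v", "1", snapshot]
    try:
        subprocess.run(cmd, timeout=timeout, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return False
    # the stream counts as alive only if a non-empty snapshot was written
    return os.path.exists(snapshot) and os.path.getsize(snapshot) > 0
```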
You can find the exact code for this functionality in this free open source WP plugin <https://wordpress.org/plugins/videowhisper-live-streaming-integration/>. | You can use the `opencv_python` module to play the rtsp stream.
Sample codes:
```
import cv2
cap=cv2.VideoCapture("rtsp://admin:admin123@test_url_here")
while True:
    ret, frame = cap.read()
    if not ret:  # no frame returned: the stream is dead or has ended
        break
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
``` | 16,567 |
27,466,862 | There's something wrong with my OSX system and python that no amount of googling has fixed. I've uninstalled all traces of python except the system python package with OSX that I'm not supposed to uninstall, and then started afresh with a new python from python.org, and installed pip.
Now...not sure if this particular behavior below is part of the issue, but it seems strange to me:
I ran python twice. Once with sudo and once without. With sudo, I can't access pip. What's going on?
```
$ sudo /Library/Frameworks/Python.framework/Versions/2.7/bin/python
Python 2.7.9 (v2.7.9:648dcafa7e5f, Dec 10 2014, 10:10:46)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pip
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pip
```
However...
```
$ /Library/Frameworks/Python.framework/Versions/2.7/bin/python
Python 2.7.9 (v2.7.9:648dcafa7e5f, Dec 10 2014, 10:10:46)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pip
>>>
```
I've already referred to: [sudo python runs old python version](https://stackoverflow.com/questions/15441440/sudo-python-runs-old-python-version)
I have nothing in my .bash\_profile, or anything in any other profiles.
All I've done is the following:
```
export PYTHONPATH=/lib/python2.7/site-packages/
```
`ls $PYTHONPATH` returns:
```
_markerlib pip pkg_resources.pyc setuptools-8.0.1.dist-info virtualenv.pyc
easy_install.py pip-1.5.6.dist-info setuptools virtualenv-1.11.6.dist-info virtualenv_support
easy_install.pyc pkg_resources.py setuptools-7.0.dist-info virtualenv.py
```
`which pip` returns:
```
/bin/pip
``` | 2014/12/14 | [
"https://Stackoverflow.com/questions/27466862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1119779/"
] | `sudo` overrides your `export`. It's the same Python (as you can easily tell from the version information it prints) but it runs with a different (system default) `PYTHONPATH`.
This is one of the jobs of `sudo`; it sanitizes the environment to safe defaults. You may be able to tweak this, but the real question is, what are you trying to accomplish? If you need to run as `root` with a particular environment, set up a `virtualenv` and/or write a wrapper script which sets things up before dispatching Python.
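For example, depending on your `sudoers` policy (the `env_reset`/`env_keep` settings), one of these may carry the variable through; a sketch, not guaranteed on every setup:
```
sudo -E /Library/Frameworks/Python.framework/Versions/2.7/bin/python
sudo PYTHONPATH=/lib/python2.7/site-packages/ /Library/Frameworks/Python.framework/Versions/2.7/bin/python
```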
| What do you get when you compare the output of `which pip` and `sudo which pip`?
On my system I get different outputs. If you do, I'm not sure how to fix that, but you could try to force the sudo'd python to look in the correct directory:
```
import sys
sys.path.insert(0, '/lib/python2.7/site-packages/')
import pip
``` | 16,568 |
21,783,840 | I have a CSV file that has numerous data points included in each row, despite belonging to the same column. Something similar to this:
```
A, B, C, X, Y, Z
```
Now, what I would like to do is to reformat the file such that the resulting CSV is:
```
A, B, C
X, Y, Z
```
I'm not too sure how to go about this / express it in a pythonic way. One idea (which I wasn't sure how to code) was to create a for loop similar to
```
For elements in file:
new_csv = []
counter = 0
max_counter = 3
##I'm using english because I don't know how to express it in python syntax
take first element in CSV and add 1 to counter
append first element to new_csv
is counter < max_counter?
if yes: go to next element
if no: append /n to new csv then go to next element
```
This probably is a terrible way to do it, but I'm struggling to find another way. How would I express this in the python syntax? Any ideas? | 2014/02/14 | [
"https://Stackoverflow.com/questions/21783840",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2179795/"
] | Just write two rows to an output [`csv.writer()` object](http://docs.python.org/2/library/csv.html#csv.writer):
```
with open(inputfilename, 'rb') as infh, open(outputfilename, 'wb') as outfh:
reader = csv.reader(infh)
writer = csv.writer(outfh)
for row in reader:
writer.writerows([row[:3], row[3:]])
```
Each row the `csv.reader()` object produces is a list object, so all we have to do is use the [Python slice notation](https://stackoverflow.com/questions/509211/pythons-slice-notation) to slice off a new list object of the first 3 elements, and another list object for the remaining 3 elements. | You could probably use python's [CSV module](http://docs.python.org/2/library/csv.html)
Example:
```
#!/usr/bin/env python
import csv
with open("input.csv", "r") as input_file, open("output.csv", "w+") as output_file:
    input_csv, output_csv = csv.reader(input_file), csv.writer(output_file)
    for row in input_csv:
        output_csv.writerows([row[:len(row)/2], row[len(row)/2:]])
```
Which given the input
```
a,b,c,d,e,f
A,B,C,D,E,F,G
1,2,3,4,5,6
```
will output
```
a,b,c
d,e,f
A,B,C
D,E,F,G
1,2,3
4,5,6
``` | 16,569 |
74,304,917 | I'm having trouble trying to find the parameters of a gaussian curve fit.
The site <https://mycurvefit.com/> provides a good answer fairly quickly. However, my implementation with python's curve\_fit(), from the scipy.optimize library, is not providing good results (even when inputting the answers).
For instance, the equation I'm trying to fit is the following:
```
def gauss_func(x, a, b, c):
return a * np.exp(-(x-b)**2/(2*c**2))
```
With input points:
```
x_main = np.array([19.748, 39.611, 59.465])
y_main = np.array([0.438160379, 0.008706677, 0.000160106])
```
where I want to find the parameters **a**, **b** and **c**.
From the mycurvefit website, I get the answers:
a = 4821416
b = -154.0293
c = 30.51661
Which fit the given points nicely. But when I try to run with curve\_fit():
```
poptMain, pcovMain = curve_fit(gauss_func, x_main, y_main, p0=(1, -1, 1),sigma=np.array([1,1,1]))
```
I get the **"RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 800."** error.
What I tried:
* Changing the maxfev to other values, such as 5000, 10000, 100000 (no effect).
* Replacing the initial guess p0 to values closer to the mycurvefit answer (no effect) and common values such as [1, 1, 1], [1, 0, 1], etc (no effect).
Even when inputting the answer, it still won't find the parameters! I have used this same code before with other similar cases, and it worked nicely. But this time it's not converging at all. What could I do to solve this? | 2022/11/03 | [
"https://Stackoverflow.com/questions/74304917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14703689/"
] | One way to do it is using window functions. The first one ([**`lag`**](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.lag.html#pyspark.sql.functions.lag)) marks the row if it is different from the previous one. The second ([**`sum`**](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.sum.html#pyspark.sql.functions.sum)) marks all "Block" rows for previously marked rows. Lastly, the marked rows and the helper (`_flag`) column are dropped.
Input:
```py
from pyspark.sql import functions as F, Window as W
df = spark.createDataFrame(
[(1, 'A'),
(1, 'C'),
(1, 'C'),
(3, 'A'),
(3, 'B')],
['ID', 'Block'])
```
Script:
```py
w1 = W.partitionBy('Block').orderBy('ID')
w2 = W.partitionBy('Block')
grp = F.when(F.lag('ID').over(w1) != F.col('ID'), 1).otherwise(0)
df = df.withColumn('_flag', F.sum(grp).over(w2) == 0) \
.filter('_flag').drop('_flag')
df.show()
# +---+-----+
# | ID|Block|
# +---+-----+
# | 3| B|
# | 1| C|
# | 1| C|
# +---+-----+
``` | Use window functions: get ranks per group of blocks and throw away any rows that rank higher than 1. Code below:
```
(df.withColumn('index', row_number().over(Window.partitionBy().orderBy('ID','Block')))#create an index to reorder after comps
.withColumn('BlockRank', rank().over(Window.partitionBy('Block').orderBy('ID'))).orderBy('index')#Rank per Block
.where(col('BlockRank')==1)
.drop('index','BlockRank')
).show()
+---+-----+
| ID|Block|
+---+-----+
| 1| A|
| 1| C|
| 1| C|
| 3| B|
+---+-----+
``` | 16,572 |
63,574,704 | I have the following `Dockerfile`:
```
# beginning of the Dockerfile ...
ARG SIGNAL_ID
CMD python ./my_repo/my_main.py --signal_id $SIGNAL_ID
```
I also have a `docker-compose.yml` with all the needed information for the service
```
version: '3'
services:
my_app:
build: .
# additional info ...
```
How would I run two services with different `SIGNAL_ID`'s based on the same Docker files?
I have tried the following, however I am unsure how to run them as both of them would be named `my_app`:
```
docker-compose build --build-arg SIGNAL_ID=5
docker-compose build --build-arg SIGNAL_ID=6
```
Maybe there is a way to pass `SIGNAL_ID` not via the build-arg but directly to the `Dockerfile` during `docker-compose up`?
something similar to `docker run -e SIGNAL_ID=5`?
---
You could have two different `docker-compose.yml`'s, but considering that the only difference between them would be `SIGNAL_ID`, I would like to avoid it. | 2020/08/25 | [
"https://Stackoverflow.com/questions/63574704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9253013/"
] | You can convert an integral value to its decimal representation with [`std::to_string`](https://en.cppreference.com/w/cpp/string/basic_string/to_string):
```
std::string const dec = std::to_string(num);
```
If you have a character array, say `char a[4]`, you can copy the data there element-wise:
```
for (std::size_t i = 0; std::begin(a) + i < std::end(a) && i < dec.size(); ++i) {
a[i] = dec[i] - '0';
}
```
---
Edit: See [Konrad Rudolph's answer](https://stackoverflow.com/a/63574802/430766) for a simpler (and presumably faster) way of doing this. | A way is decomposing the number into hundreds, tens and units... modulo can help, and log10 will be useful too:
This is going to be a nice workaround if you aren't allowed to convert to a string.
Here is an example:
```
#include <cmath>  // for ceil, log10 and pow

int value = 256;
int myArray[3];
// number of decimal digits (note: exact powers of ten, e.g. 1000, need special care)
auto m = static_cast<int>(ceil(log10(value)));
for (int i = 0; i < m; ++i)
{
    // peel off the i-th digit from the right and store it left-to-right
    myArray[m - 1 - i] = static_cast<int>(value / pow(10, i)) % 10;
}
``` | 16,573 |
7,151,776 | *Edit: Let me try to reword and improve my question. The old version is attached at the bottom.*
What I am looking for is a way to express and use free functions in a type-generic way. Examples:
```
abs(x) # maps to x.__abs__()
next(x) # maps to x.__next__() at least in Python 3
-x # maps to x.__neg__()
```
In these cases the functions have been designed in a way that allows users with user-defined types to customize their behaviour by delegating the work to a non-static method call. This is nice. It allows us to write functions that don't really care about the exact parameter types as long as they "feel" like objects that model a certain concept.
Counter examples: Functions that can't be easily used generically:
```
math.exp # only for reals
cmath.exp # takes complex numbers
```
Suppose, I want to write a generic function that applies exp on a list of number-like objects. What exp function should I use? How do I select the correct one?
```
def listexp(lst):
return [math.exp(x) for x in lst]
```
Obviously, this won't work for lists of complex numbers even though there is an exp for complex numbers (in cmath). And it also won't work for any user-defined number-like type which might offer its own special exp function.
So, what I'm looking for is a way to deal with this on both sides -- ideally without special casing a lot of things. As a writer of some generic function that does not care about the exact types of parameters I want to use the correct mathematical functions that is specific to the types involved without having to deal with this explicitly. As a writer of a user-defined type, I would like to expose special mathematical functions that have been augmented to deal with additional data stored in those objects (similar to the imaginary part of complex numbers).
What is the preferred pattern/protocol/idiom for doing that? I did not yet test `numpy`. But I downloaded its source code. As far as I know, it offers a sin function for arrays. Unfortunately, I haven't found its implementation yet in the source code. But it would be interesting to see how they managed to pick the right sin function for the right type of numbers the array currently stores.
In C++ I would have relied on function overloading and ADL (argument-dependent lookup). With C++ being statically typed, it should come as no surprise that this (name lookup, overload resolution) is handled completely at compile-time. I suppose, I could emulate this at runtime with Python and the reflective tools Python has to offer. But I also know that trying to import a coding style into another language might be a bad idea and not very idiomatic in the new language. So, if you have a different idea for an approach, I'm all ears.
I guess, somewhere at some point I need to manually do some type-dependent dispatching in an extensible way. Maybe write a module "tgmath" (type generic math) that comes with support for real and complex numbers as well as allowing others to register their types and special-case functions... Opinions? What do the Python masters say about this?
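A minimal sketch of what such a type-dispatching "tgmath" could look like, using `functools.singledispatch` (added in Python 3.4, well after this question was written, but it implements exactly this idea):
```
from functools import singledispatch
import math
import cmath

@singledispatch
def exp(x):        # default implementation: real numbers
    return math.exp(x)

@exp.register(complex)
def _(x):          # specialization selected by the argument's type
    return cmath.exp(x)

print(exp(1.0))    # 2.718281828459045
print(exp(1j))     # (0.5403023058681398+0.8414709848078965j)
```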
TIA
Edit: Apparently, I'm not the only one who is interested in generic functions and type-dependent overloading. There is [PEP 3124](http://www.python.org/dev/peps/pep-3124/) but it is in draft state since 4 years ago.
---
*Old version of the question:*
I have a strong background in Java and C++ and just recently started learning Python. What I'm wondering about is: How do we extend mathematical functions (at least their names) so they work on other user-defined types? Do these kinds of functions offer any kind of extension point/hook I can leverage (similar to the iterator protocol where `next(obj)` actually delegates to `obj.__next__`, etc) ?
In C++ I would have simply overloaded the function with the new parameter type and have the compiler figure out which of the functions was meant using the argument expressions' static types. But since Python is a very dynamic language there is no such thing as overloading. What is the preferred Python way of doing this?
Also, when I write custom functions, I would like to avoid long chains of
```
if isinstance(arg,someClass):
suchandsuch
elif ...
```
What are the patterns I could use to make the code look prettier and more Pythonish?
I guess, I'm basically trying to deal with the lack of function overloading in Python. At least in C++ overloading and argument-dependent lookup is an important part of good C++ style.
Is it possible to make
```
x = udt(something) # object of user-defined type that represents a number
y = sin(x) # how do I make this invoke custom type-specific code for sin?
t = abs(x) # works because abs delegates to __abs__() which I defined.
```
work? I know I could make sin a non-static method of the class. But then I lose genericity because for every other kind of number-like object it's `sin(x)` and not `x.sin()`.
Adding a `__float__` method is not acceptable since I keep additional information in the object such as derivatives for "automatic differentiation".
TIA
Edit: If you're curious about what the code looks like, check [this](http://ideone.com/H6z60) out. In an ideal world I would be able to use sin/cos/sqrt in a type-generic way. I consider these functions part of the objects interface even if they are "free functions". In `__somefunction` I did not qualify the functions with `math.` nor `__main__.`. It just works because I manually fall back on `math.sin` (etc) in my custom functions via the decorator. But I consider this to be an ugly hack. | 2011/08/22 | [
"https://Stackoverflow.com/questions/7151776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/172531/"
] | you can do this, but it works backwards. you implement `__float__()` in your new type and then `sin()` will work with your class.
in other words, you don't adapt sine to work on other types; you adapt those types so that they work with sine.
this is better because it forces consistency. if there is no obvious mapping from your object to a float then there probably isn't a reasonable interpretation of `sin()` for that type.
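a minimal sketch of that idea (the class here is purely illustrative):
```
import math

class Celsius:
    def __init__(self, degrees):
        self.degrees = degrees

    def __float__(self):
        # math.sin()/math.exp()/... convert their argument via float()
        return float(self.degrees)

print(math.sin(Celsius(0.5)))  # 0.479425538604203
```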
[sorry if i missed the "\_\_float\_\_ won't work" part earlier; perhaps you added that in response to this? anyway, for convincing proof that what you want isn't possible, python has the cmath library to add `sin()` etc for complex numbers...] | Typically the answer to questions like this is "you don't" or "use duck typing". Can you provide a little more detail about what you want to do? Have you looked at the remainder of the protocol methods for numeric types?
<http://docs.python.org/reference/datamodel.html#emulating-numeric-types> | 16,575 |
66,583,626 | In plotly I can create a histogram as e.g. [in this example code from the documentation](https://plotly.com/python/histograms/):
```
import plotly.express as px
df = px.data.tips()
fig = px.histogram(df, x="total_bill")
fig.show()
```
which results in:
[](https://i.stack.imgur.com/17o4f.png)
My question is how do I get the data values of the histogram? From what I can think of, this question should be equivalent to how do I access the values of a trace? (google did not help with either)
I could use numpy to redo the histogram:
```
import numpy as np
np.histogram(df.total_bill)
```
But this will not always result in the same buckets, plus it redoes all the sometimes-expensive computation that goes into creating a histogram.
[](https://i.stack.imgur.com/fZqTh.png) | 2021/03/11 | [
"https://Stackoverflow.com/questions/66583626",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/508907/"
] | My understanding of your question is that you would like to get the exact intervals and counts displayed in the histogram. For a smaller subset of `px.data.tips()`, it looks like this:
[](https://i.stack.imgur.com/8ksQb.png)
And reading off the chart those values would be:
```
counts = [2, 4, 3, 1]
bins = [5, 15, 25, 35, 45]
```
There's no *direct* way to do this, but that doesn't mean it's impossible. At least if you're willing to use the awesome `fig.full_figure_for_development()` and a *little* numpy.
### Code highlights *(complete snippet at the very end)*
```
xbins = f.data[0].xbins
plotbins = list(np.arange(start=xbins['start'], stop=xbins['end']+xbins['size'], step=xbins['size']))
counts, bins = np.histogram(list(f.data[0].x), bins=plotbins)
```
### Output:
```
[2 4 3 1] [ 5 15 25 35 45]
```
### All the details:
What I'm guessing you would *like* to be able to do is this:
Run:
```
fig.data[0].count
```
And get:
```
[2, 4, 3, 1]
```
But the closest you'll get is this:
Run:
```
fig.data[0].x
```
And get:
```
[15.53, 10.07, 12.6 , 32.83, 35.83, 29.03, 27.18, 22.67, 17.82,
18.78]
```
And those are just the raw values from the input `df['total_bill'].tail(10)`. So DerekO is right in that the rest is handled by JavaScript. But [`fig.full_figure_for_development()`](https://community.plotly.com/t/announcing-plotly-py-4-10-date-axis-improvements-a-faster-px-imshow-and-full-figures-for-development/44685) will:
>
> [...] return a new go.Figure object, prepopulated with the same values
> you provided, as well as all the default values computed by Plotly.js,
> to allow you to learn more about what attributes control every detail
> of your figure and how you can customize them.
>
>
>
So running `f = fig.full_figure_for_development(warn=False)`, and then:
```
f.data[0].xbins
```
Will give you:
```
histogram.XBins({
'end': 45, 'size': 10, 'start': 5
})
```
And now you know enough to get the same values in your figure with a little numpy:
### Complete code:
```
import plotly.express as px
import numpy as np
df = px.data.tips()
df = df.tail(10)
fig = px.histogram(df, x="total_bill")
f = fig.full_figure_for_development(warn=False)
xbins = f.data[0].xbins
plotbins = list(np.arange(start=xbins['start'], stop=xbins['end']+xbins['size'], step=xbins['size']))
counts, bins = np.histogram(list(f.data[0].x), bins=plotbins)
print(counts, bins)
``` | In the same Plotly Histogram documentation, there's a section called [Accessing the counts yaxis values](https://plotly.com/python/histograms/#accessing-the-counts-yaxis-values), and it explains that the y values are calculated by the JavaScript in the browser when the figure renders so you can't access it in the figure object (for example, through `fig.layout` or `fig.data`, which you might try for other types of charts)
They recommend calculating the counts and bins yourself using `np.histogram`, then passing these values to `px.bar` to ensure that your histogram matches the buckets as you intend. | 16,584 |
14,251,877 | I worked out code that makes sense to me but not to Python, since I'm new to Python.
Check my code here:
```
checksum_algos = ['md5','sha1']
for filename in ["%smanifest-%s.txt" % (prefix for prefix in ['', 'tag'], a for a in checksum_algos)]:
f = os.path.join(self.path, filename)
if isfile(f):
yield f
```
My intention is to search for filenames in a list like:
`['manifest-md5.txt','tagmanifest-md5.txt','manifest-sha1.txt','tagmanifest-sha1.txt']`
but I have a `syntax` problem implementing it.
Thanks for any help. | 2013/01/10 | [
"https://Stackoverflow.com/questions/14251877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921082/"
] | You're overthinking it.
```
for filename in ("%smanifest-%s.txt" % (prefix, a)
for prefix in ['', 'tag'] for a in checksum_algos):
``` | Or you need [`itertools.product()`](http://docs.python.org/2/library/itertools.html#itertools.product):
```
>>> import itertools
>>> [i for i in itertools.product(('', 'tag'), ('sha', 'md5'))]
[('', 'sha'), ('', 'md5'), ('tag', 'sha'), ('tag', 'md5')]
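>>> # building the filenames the question wants from each (prefix, algo) pair:
>>> ['%smanifest-%s.txt' % (p, a)
...  for p, a in itertools.product(('', 'tag'), ('md5', 'sha1'))]
['manifest-md5.txt', 'manifest-sha1.txt', 'tagmanifest-md5.txt', 'tagmanifest-sha1.txt']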
``` | 16,587 |
62,601,766 | I am trying to use SIFT for feature detection with Python, but it is no longer part of OpenCV **or** OpenCV contrib.
With OpenCV opencv-contrib-python (both versions 4.2.0.34, the latest as of this question), I get:
```
>>> import cv2
>>> cv2.SIFT_create()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'cv2.cv2' has no attribute 'SIFT_create'
>>> cv2.xfeatures2d.SIFT_create()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv_contrib\modules\xfeatures2d\src\sift.cpp:1210:
error: (-213:The function/feature is not implemented) This algorithm is patented and is excluded in
this configuration; Set OPENCV_ENABLE_NONFREE CMake option and rebuild the library in function
'cv::xfeatures2d::SIFT::create'
```
Every related answer I have found has suggested using contrib or an older version, but neither of these work anymore.
Is it easier to build it from source to get SIFT back as the error indicates, or to use an alternative? How would I do either of these? All I need is some way to do feature detection, preferably scale-invariant.
[This question](https://stackoverflow.com/questions/10157806/are-there-any-fast-alternatives-to-surf-and-sift-for-scale-invariant-feature-ext) mentions SIFT alternatives but is very outdated (best answers are around 8 years old). What can we do now in 2020?
EDIT Showing OpenCV 3 not working:
Trying to install OpenCV 3:
```
>>> pip install opencv-python==3
ERROR: Could not find a version that satisfies the requirement opencv-python==3
(from versions: 3.4.2.16, 3.4.2.17, 3.4.3.18, 3.4.4.19, 3.4.5.20, 3.4.6.27,
3.4.7.28, 3.4.8.29, 3.4.9.31, 3.4.9.33, 4.0.0.21, 4.0.1.23, 4.0.1.24, 4.1.0.25,
4.1.1.26, 4.1.2.30, 4.2.0.32, 4.2.0.34)
ERROR: No matching distribution found for opencv-python==3
>>> pip install opencv-python==3.4.2.16
```
Then in Python:
```
>>> import cv2
>>> print(cv2.__version__)
3.4.2
>>> cv2.SIFT_create()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'cv2.cv2' has no attribute 'SIFT_create'
``` | 2020/06/26 | [
"https://Stackoverflow.com/questions/62601766",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8605685/"
] | The patent for SIFT expired in March 2020, but OpenCV may not yet have been updated to move SIFT into the free, open-source build.
See this issue: <https://github.com/skvark/opencv-python/issues/126>
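If rebuilding is not an option, one patent-free alternative that ships with stock OpenCV is ORB; a minimal sketch, assuming a hypothetical grayscale image file `img.png`:
```
import cv2

img = cv2.imread('img.png', cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create()                     # free detector, no contrib needed
kp, des = orb.detectAndCompute(img, None)  # keypoints and binary descriptors
```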
To rebuild with the non-free components:
```
git clone --recursive https://github.com/skvark/opencv-python.git
cd opencv-python
export CMAKE_ARGS="-DOPENCV_ENABLE_NONFREE=ON"
python setup.py bdist_wheel
``` | From [the issue](https://github.com/skvark/opencv-python/issues/126):
to rebuild with the non-free components:
```
git clone --recursive https://github.com/skvark/opencv-python.git
cd opencv-python
export CMAKE_ARGS="-DOPENCV_ENABLE_NONFREE=ON"
python setup.py bdist_wheel
``` | 16,589 |
46,016,131 | I have a list of tuples `data`:
```
data =[(array([[2, 1, 3]]), array([1])),
(array([[2, 1, 2]]), array([1])),
(array([[4, 4, 4]]), array([0])),
(array([[4, 1, 1]]), array([0])),
(array([[4, 4, 3]]), array([0]))]
```
For simplicity's sake, this list here only has 5 tuples.
When I run the following code, it seems I am able to unpack each tuple on
each iteration:
```
for x, y in data:
    print(x, y)
```
output:
```
[[2 1 3]] [1]
[[2 1 2]] [1]
[[4 4 4]] [0]
[[4 1 1]] [0]
[[4 4 3]] [0]
```
This also works:
```
for x, y in data[:2]:
    print(x, y)
```
output:
```
[[2 1 3]] [1]
[[2 1 2]] [1]
```
However, when I take only a single tuple from the list:
```
for x, y in data[0]:
    print(x, y)
```
output:
```
ValueError                                Traceback (most recent call last)
<ipython-input-185-1eed1fccdb3a> in <module>()
----> 1 for x, y in data[0]:
      2     print(x, y)

ValueError: not enough values to unpack (expected 2, got 1)
```
I'm confused about how tuples are being unpacked in the earlier cases, and why the last case fails to unpack them.
Thank you. | 2017/09/02 | [
"https://Stackoverflow.com/questions/46016131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6802252/"
] | In the first two cases you're looping through a `list` of tuples, so each iteration yields one `(x, y)` pair to unpack; in the last one you're iterating over a single `tuple`, whose elements are single arrays.
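A quick sketch of the difference, using the same dummy arrays:
```
from numpy import array

data = [(array([[2, 1, 3]]), array([1])),
        (array([[2, 1, 2]]), array([1]))]

x, y = data[0]        # unpacks the single tuple directly, no loop needed
for item in data[0]:  # iterating the tuple yields one array per step,
    print(item)       # so `for x, y in data[0]` has nothing to unpack
```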
Not sure what you want to achieve, but instead of `data[0]`, `data[:1]` would work. | If your data looks like this:
```
data =[([[2, 1, 3]], [1]),
([[2, 1, 2]], [1]),
       ([[4, 4, 4]], [0]),
([[4, 1, 1]], [0]),
([[4, 4, 3]], [0])]
for [a], b in data:
print a, b
```
Output:
```
[2, 1, 3] [1]
[2, 1, 2] [1]
[4, 4, 4] [0]
[4, 1, 1] [0]
[4, 4, 3] [0]
``` | 16,590 |
1,239,538 | I've been trying to use [suds](https://fedorahosted.org/suds/wiki) for Python to call a SOAP WSDL. I just need to call the service programmatically and write the output XML document. However suds automatically parses this data into it's own pythonic data format. I've been looking through [the examples](https://fedorahosted.org/suds/wiki/Documentation) and [the documentation](http://jortel.fedorapeople.org/suds/doc/), but I can't seem to find a way to return the XML document that the SOAP service gives me.
Is there an easy way to do this I'm overlooking? Is there an easier way to do this in Python than suds? | 2009/08/06 | [
"https://Stackoverflow.com/questions/1239538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/54283/"
] | At this early stage in suds development, the easiest way to get to the raw XML content is not what one would expect.
The examples on the site show us something like this:
```
client = Client(url)
result = client.service.Invoke(subm)
```
However, the result is a pre-parsed object that is great for access by Python, but not for XML document access. Fortunately, the Client object still stores the original SOAP message it received.
```
result = client.last_received()
print result
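# (as an alternative, later suds releases also accept Client(url, retxml=True),
#  which makes every service call return the raw XML instead of parsed objects)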
```
Will give you the actual SOAP message received back. | You could take a look at a library such as [soaplib](http://wiki.github.com/jkp/soaplib): it's a really nice way to consume (and serve) SOAP web services in Python. The latest version has some code to generate Python bindings either dynamically (at runtime) or statically (by running a script against some WSDL).
[disclaimer: I'm the maintainer of the project! - I didn't write the bulk of it though] | 16,591 |
65,514,398 | I have a radar chart. Need to change the grid from circle-form to pentagon-form. Currently, I have this output:
[](https://i.stack.imgur.com/mDLeM.jpg)
Whereas I expect smth like this:
[](https://i.stack.imgur.com/45Hft.png)
Here's the info about the system I have: Windows 10 (64-bit); Python 3.8.0 (32-bit); matplotlib 3.3.3.
I've seen this question here: [How to make a polygon radar (spider) chart in python](https://stackoverflow.com/questions/52910187/how-to-make-a-polygon-radar-spider-chart-in-python), but it doesn't resolve my problem.
It doesn't work for me, and I can't understand why. I can simply copy the code, but the result is that the outer border changes to pentagon form while the inner gridlines remain circular. Yet it works for other people!
The code of the program is below:
```py
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, RegularPolygon
from matplotlib.path import Path
from matplotlib.projections.polar import PolarAxes
from matplotlib.projections import register_projection
from matplotlib.spines import Spine
from matplotlib.transforms import Affine2D
def radar_factory(num_vars, frame='circle'):
"""Create a radar chart with `num_vars` axes.
This function creates a RadarAxes projection and registers it.
Parameters
----------
num_vars : int
Number of variables for radar chart.
frame : {'circle' | 'polygon'}
Shape of frame surrounding axes.
"""
# calculate evenly-spaced axis angles
theta = np.linspace(0, 2*np.pi, num_vars, endpoint=False)
class RadarAxes(PolarAxes):
name = 'radar'
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# rotate plot such that the first axis is at the top
self.set_theta_zero_location('N')
def fill(self, *args, closed=True, **kwargs):
"""Override fill so that line is closed by default"""
return super().fill(closed=closed, *args, **kwargs)
def plot(self, *args, **kwargs):
"""Override plot so that line is closed by default"""
lines = super().plot(*args, **kwargs)
for line in lines:
self._close_line(line)
def _close_line(self, line):
x, y = line.get_data()
# FIXME: markers at x[0], y[0] get doubled-up
if x[0] != x[-1]:
x = np.concatenate((x, [x[0]]))
y = np.concatenate((y, [y[0]]))
line.set_data(x, y)
def set_varlabels(self, labels):
self.set_thetagrids(np.degrees(theta), labels)
def _gen_axes_patch(self):
# The Axes patch must be centered at (0.5, 0.5) and of radius 0.5
# in axes coordinates.
if frame == 'circle':
return Circle((0.5, 0.5), 0.5)
elif frame == 'polygon':
return RegularPolygon((0.5, 0.5), num_vars, radius=0.5, edgecolor="k")
else:
raise ValueError("unknown value for 'frame': %s" % frame)
def draw(self, renderer):
""" Draw. If frame is polygon, make gridlines polygon-shaped """
if frame == 'polygon':
gridlines = self.yaxis.get_gridlines()
for gl in gridlines:
gl.get_path()._interpolation_steps = num_vars
super().draw(renderer)
def _gen_axes_spines(self):
if frame == 'circle':
return super()._gen_axes_spines()
elif frame == 'polygon':
# spine_type must be 'left'/'right'/'top'/'bottom'/'circle'.
spine = Spine(axes=self,
spine_type='circle',
path=Path.unit_regular_polygon(num_vars))
# unit_regular_polygon gives a polygon of radius 1 centered at
# (0, 0) but we want a polygon of radius 0.5 centered at (0.5,
# 0.5) in axes coordinates.
spine.set_transform(Affine2D().scale(.5).translate(.5, .5)
+ self.transAxes)
return {'polar': spine}
else:
raise ValueError("unknown value for 'frame': %s" % frame)
register_projection(RadarAxes)
return theta
data = [['O1', 'O2', 'O3', 'O4', 'O5'],
('Title', [
[4, 3.5, 4, 2, 3,],
[1.07, 5.95, 2.04, 1.05, 0.00,],
]
)]
N = len(data[0])
theta = radar_factory(N, frame='polygon') # polygon !!!
spoke_labels = data.pop(0)
title, case_data = data[0]
fig, ax = plt.subplots(figsize=(5, 5), subplot_kw=dict(projection='radar'))
fig.subplots_adjust(top=0.85, bottom=0.05)
ax.set_rgrids([0, 1, 2.0, 3.0, 4.0, 5.0, 6])
ax.set_title(title, position=(0.5, 1.1), ha='center')
for d in case_data:
line = ax.plot(theta, d)
ax.fill(theta, d, alpha=0.25)
ax.set_varlabels(spoke_labels)
plt.show()
``` | 2020/12/30 | [
"https://Stackoverflow.com/questions/65514398",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14903741/"
] | I don't have enough reputation to add a comment so I'll put this down as an answer.
An update to the code that serves as a workaround and works on Matplotlib > 3.5 was recently added to issue 19981 by prohde.
You can check it here:
<https://github.com/matplotlib/matplotlib/issues/19981> | As mentioned by @joao-neves, the docs will be updated in this [PR](https://github.com/matplotlib/matplotlib/pull/22458/files). The working code for your example would be:
```py
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, RegularPolygon
from matplotlib.path import Path
from matplotlib.projections.polar import PolarAxes
from matplotlib.projections import register_projection
from matplotlib.spines import Spine
from matplotlib.transforms import Affine2D
def radar_factory(num_vars, frame='circle'):
"""Create a radar chart with `num_vars` axes.
This function creates a RadarAxes projection and registers it.
Parameters
----------
num_vars : int
Number of variables for radar chart.
frame : {'circle' | 'polygon'}
Shape of frame surrounding axes.
"""
# calculate evenly-spaced axis angles
theta = np.linspace(0, 2*np.pi, num_vars, endpoint=False)
class RadarTransform(PolarAxes.PolarTransform):
def transform_path_non_affine(self, path):
# Paths with non-unit interpolation steps correspond to gridlines,
# in which case we force interpolation (to defeat PolarTransform's
# autoconversion to circular arcs).
if path._interpolation_steps > 1:
path = path.interpolated(num_vars)
return Path(self.transform(path.vertices), path.codes)
class RadarAxes(PolarAxes):
name = 'radar'
PolarTransform = RadarTransform
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# rotate plot such that the first axis is at the top
self.set_theta_zero_location('N')
def fill(self, *args, closed=True, **kwargs):
"""Override fill so that line is closed by default"""
return super().fill(closed=closed, *args, **kwargs)
def plot(self, *args, **kwargs):
"""Override plot so that line is closed by default"""
lines = super().plot(*args, **kwargs)
for line in lines:
self._close_line(line)
def _close_line(self, line):
x, y = line.get_data()
# FIXME: markers at x[0], y[0] get doubled-up
if x[0] != x[-1]:
x = np.concatenate((x, [x[0]]))
y = np.concatenate((y, [y[0]]))
line.set_data(x, y)
def set_varlabels(self, labels):
self.set_thetagrids(np.degrees(theta), labels)
def _gen_axes_patch(self):
# The Axes patch must be centered at (0.5, 0.5) and of radius 0.5
# in axes coordinates.
if frame == 'circle':
return Circle((0.5, 0.5), 0.5)
elif frame == 'polygon':
return RegularPolygon((0.5, 0.5), num_vars, radius=0.5, edgecolor="k")
else:
raise ValueError("unknown value for 'frame': %s" % frame)
def draw(self, renderer):
""" Draw. If frame is polygon, make gridlines polygon-shaped """
if frame == 'polygon':
gridlines = self.yaxis.get_gridlines()
for gl in gridlines:
gl.get_path()._interpolation_steps = num_vars
super().draw(renderer)
def _gen_axes_spines(self):
if frame == 'circle':
return super()._gen_axes_spines()
elif frame == 'polygon':
# spine_type must be 'left'/'right'/'top'/'bottom'/'circle'.
spine = Spine(axes=self,
spine_type='circle',
path=Path.unit_regular_polygon(num_vars))
# unit_regular_polygon gives a polygon of radius 1 centered at
# (0, 0) but we want a polygon of radius 0.5 centered at (0.5,
# 0.5) in axes coordinates.
spine.set_transform(Affine2D().scale(.5).translate(.5, .5)
+ self.transAxes)
return {'polar': spine}
else:
raise ValueError("unknown value for 'frame': %s" % frame)
register_projection(RadarAxes)
return theta
data = [['O1', 'O2', 'O3', 'O4', 'O5'],
('Title', [
[4, 3.5, 4, 2, 3,],
[1.07, 5.95, 2.04, 1.05, 0.00,],
]
)]
N = len(data[0])
theta = radar_factory(N, frame='polygon')
spoke_labels = data.pop(0)
title, case_data = data[0]
fig, ax = plt.subplots(figsize=(5, 5), subplot_kw=dict(projection='radar'))
fig.subplots_adjust(top=0.85, bottom=0.05)
ax.set_rgrids([0, 1, 2.0, 3.0, 4.0, 5.0, 6])
ax.set_title(title, position=(0.5, 1.1), ha='center')
for d in case_data:
line = ax.plot(theta, d)
ax.fill(theta, d, alpha=0.25, label='_nolegend_')
ax.set_varlabels(spoke_labels)
plt.show()
```
[](https://i.stack.imgur.com/UdHJ3m.png) | 16,592 |
6,259,623 | >
> **Possible Duplicate:**
>
> [How does Python compare string and int?](https://stackoverflow.com/questions/3270680/how-does-python-compare-string-and-int)
>
>
>
An intern was just asking me to help debug code that looked something like this:
```
widths = [image.width for image in images]
widths.append(374)
width = max(widths)
```
...when the first line should have been:
```
widths = [int(image.width) for image in images]
```
Thus, the code was choosing the string '364' rather than the integer 374. How on earth does Python compare a string and an integer? I could understand comparing a single character (if Python had a `char` datatype) to an integer, but I don't see any straightforward way to compare a string of characters to an integer. | 2011/06/07 | [
"https://Stackoverflow.com/questions/6259623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] | Python 2.x compares *every* built-in type to every other. From the [docs](http://docs.python.org/library/stdtypes.html#comparisons):
>
> Objects of different types, except different numeric types and different string types, never compare equal; such objects are ordered consistently but arbitrarily (so that sorting a heterogeneous array yields a consistent result).
>
>
>
This "arbitrary order" in CPython is actually sorted by type name.
In Python 3.x, you will get a `TypeError` if you try to compare a string to an integer. | When comparing values of incompatible types in Python 2.x, the ordering will be arbitrary but consistent. This is to allow you to put values of different types in a sorted collection.
In CPython 2.x any string will always be higher than any integer, but as I said that's arbitrary. The actual ordering does not matter, it is just important that the ordering is consistent (i.e. you won't get a case where e.g. `x > y` and `y > z`, but `z > x`). | 16,593 |
27,914,648 | I am using geopy to geocode some addresses and I want to catch the timeout errors and print them out so I can do some quality control on the input. I am putting the geocode request in a try/except block, but it's not working. Any ideas on what I need to do?
Here is my code:
```
try:
location = geolocator.geocode(my_address)
except ValueError as error_message:
print("Error: geocode failed on input %s with message %s"%(a, error_message))
```
I get the following exception:
```
File "/usr/local/lib/python2.7/site-packages/geopy/geocoders/base.py", line 158, in _call_geocoder
raise GeocoderTimedOut('Service timed out')
geopy.exc.GeocoderTimedOut: Service timed out
```
Thank you in advance! | 2015/01/13 | [
"https://Stackoverflow.com/questions/27914648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1860317/"
] | Try this:
```
from geopy.geocoders import Nominatim
from geopy.exc import GeocoderTimedOut
my_address = '1600 Pennsylvania Avenue NW Washington, DC 20500'
geolocator = Nominatim()
try:
location = geolocator.geocode(my_address)
print(location.latitude, location.longitude)
except GeocoderTimedOut as e:
print("Error: geocode failed on input %s with message %s"%(my_address, e.message))
```
You can also consider increasing the timeout on the geocode call you are making to your geolocator. In my example it would be something like:
```
location = geolocator.geocode(my_address, timeout=10)
```
or
```
location = geolocator.geocode(my_address, timeout=None)
``` | You may be experiencing this problem because you tried to request this address multiple times and they temporarily blocked you or slowed you down because of their [usage policy](https://operations.osmfoundation.org/policies/nominatim/). It states no more requests than one per second and that you should cache your results. I ran into this problem and you have a couple solutions. If you don't want to change your code much you can get a Google API key that you can use for something like 2500 requests/day for free or you can cache your results. Because I was already using DynamoDB on AWS for my problem I went ahead and just created a table that I cache my results in. [Here is the gist of my code.](https://gist.github.com/tylerjw/81fb59b53da6c2de1f295f3f0e22d75c) | 16,596 |
51,963,377 | I am trying to write a discriminator that evaluates patches of an image.
Therefore I generate 32x32 non-overlapping patches from the input and then concatenate them on a new axis.
The reason I am using a time-distributed layer is that, at the end, the discriminator should evaluate the whole image as true or fake. Thus, I am trying to perform a forward pass on each patch individually and then average the discriminator output across the patches with the lambda layer:
```
def my_average(x):
x = K.mean(x, axis=1)
return x
def my_average_shape(input_shape):
shape = list(input_shape)
del shape[1]
return tuple(shape)
def defineD(input_shape):
a = Input(shape=(256, 256, 1))
cropping_list = []
n_patches = 256/32
for x in range(256/32):
for y in range(256/32):
cropping_list += [
K.expand_dims(
Cropping2D((( x * 32, 256 - (x+1) * 32), ( y * 32, 256 - (y+1) * 32)))(a)
, axis=1)
]
x = Concatenate(1)(cropping_list)
x = TimeDistributed(Conv2D(4 * 8, 3, padding='same'))(x) #
x = TimeDistributed(MaxPooling2D())(x)
x = TimeDistributed(LeakyReLU())(x) # 16
x = TimeDistributed(Conv2D(4 * 16, 3, padding='same'))(x)
x = TimeDistributed(MaxPooling2D())(x)
x = TimeDistributed(LeakyReLU())(x) # 8
x = TimeDistributed(Conv2D(4 * 32, 3, padding='same'))(x)
x = TimeDistributed(MaxPooling2D())(x)
x = TimeDistributed(LeakyReLU())(x) # 4
x = TimeDistributed(Flatten())(x)
x = TimeDistributed(Dense(2, activation='sigmoid'))(x)
x = Lambda(my_average, my_average_shape)(x)
return keras.models.Model(inputs=a, outputs=x)
```
For some reason I get the following error:
```
File "testing.py", line 41, in <module>
defineD((256,256,1) )
File "testing.py", line 38, in defineD
return keras.models.Model(inputs=a, outputs=x)
File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 93, in __init__
self._init_graph_network(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 237, in _init_graph_network
self.inputs, self.outputs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1353, in _map_graph_network
tensor_index=tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1340, in build_map
node_index, tensor_index)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 1312, in build_map
node = layer._inbound_nodes[node_index]
AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
``` | 2018/08/22 | [
"https://Stackoverflow.com/questions/51963377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4880918/"
] | You need to put your cropping operations in a function and then use that function in a `Lambda` layer:
```
def my_cropping(a):
cropping_list = []
n_patches = 256/32
for x in range(256//32):
for y in range(256//32):
cropping_list += [
K.expand_dims(
Cropping2D((( x * 32, 256 - (x+1) * 32), ( y * 32, 256 - (y+1) * 32)))(a)
, axis=1)
]
return cropping_list
```
To use it:
```
cropping_list = Lambda(my_cropping)(a)
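# cropping_list is now a list of Keras tensors, so the rest of the model
# can proceed exactly as in the question, e.g.:
# x = Concatenate(1)(cropping_list)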
``` | I ran into the same issue, and it was indeed solved by wrapping a Lambda layer around the tensor as @today proposed.
Thanks for that hint, it pointed me in the right direction. In my case I wanted to concatenate a vector with a square image by turning the vector into a diagonal matrix. It worked with the following snippet:
```
import tensorflow as tf
from keras.layers import Lambda

def diagonalize(vector):
    diagonalized = tf.matrix_diag(vector)  # make a diagonal matrix from the vector
    out_singlechan = tf.expand_dims(diagonalized, -1)  # append 1 channel to match the multichannel image dims
return out_singlechan
lstm_out = Lambda(diagonalize, output_shape=(self.img_shape[0],self.img_shape[1],1))(lstm_out)
``` | 16,598 |
820,671 | I want to create an object in python that has a few attributes and I want to protect myself from accidentally using the wrong attribute name. The code is as follows:
```
class MyClass( object ) :
m = None # my attribute
__slots__ = ( "m" ) # ensure that object has no _m etc
a = MyClass() # create one
a.m = "?" # here is a PROBLEM
```
But after running this simple code, I get a very strange error:
```
Traceback (most recent call last):
File "test.py", line 8, in <module>
a.m = "?"
AttributeError: 'test' object attribute 'm' is read-only
```
Is there any wise programmer who can spare a bit of their time and enlighten me about "read-only" errors? | 2009/05/04 | [
"https://Stackoverflow.com/questions/820671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69882/"
] | When you declare instance variables using `__slots__`, Python creates a [descriptor object](https://docs.python.org/2/howto/descriptor.html) as a class variable with the same name. In your case, this descriptor is overwritten by the class variable `m` that you are defining at the following line:
```
m = None # my attribute
```
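You can see the descriptor that `__slots__` creates with a quick interactive check:
```
>>> class C(object):
...     __slots__ = ("m",)
...
>>> C.m
<member 'm' of 'C' objects>
```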
Here is what you need to do: Do not define a class variable called `m`, and initialize the instance variable `m` in the `__init__` method.
```
class MyClass(object):
__slots__ = ("m",)
def __init__(self):
self.m = None
a = MyClass()
a.m = "?"
```
As a side note, tuples with single elements need a comma after the element. Both work in your code because `__slots__` accepts a single string or an iterable/sequence of strings. In general, to define a tuple containing the element `1`, use `(1,)` or `1,` and not `(1)`. | `__slots__` works with instance variables, whereas what you have there is a class variable. This is how you should be doing it:
```
class MyClass( object ) :
__slots__ = ( "m", )
def __init__(self):
self.m = None
a = MyClass()
a.m = "?" # No error
``` | 16,599 |
30,252,726 | I am generating a PDF from an HTML template with Python's `pisa.CreatePDF` API.
It works well with small HTML, but with huge HTML it takes a lot of time. Is there any alternative? | 2015/05/15 | [
"https://Stackoverflow.com/questions/30252726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2373367/"
] | I made a few changes to the HTML, which made pisa.createPDF run fast for me.
I was using HTML of almost **2 MB**, containing a single table with more than **10,000 rows**. So I broke it into multiple tables and tried again. It surprised me: initially, with a single table, it took almost **40 minutes (2590 seconds)** to generate the **PDF**, while with multiple tables it took only **80 seconds**. | You can try [pdfkit](https://pypi.python.org/pypi/pdfkit):
```
import pdfkit
pdfkit.from_file('test.html', 'out.pdf')
```
Also see [this question](https://stackoverflow.com/q/23359083/3489230) which describes solutions using PyQt. | 16,606 |
51,271,225 | `header` output:
```
array(['Subject_ID', 'tube_label', 'sample_#', 'Relabel',
'sample_ID','cortisol_value', 'Group'], dtype='<U14')
```
`body` output:
```
array([['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
       ['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC'],], dtype=object)
```
Then I run:
```
testing = np.concatenate((header, body), axis=0)
```
```none
ValueError                                Traceback (most recent call last)
<ipython-input-302-efb002602b4b> in <module>()
1 # Merge names and the rest of the data in np array
2
----> 3 testing = np.concatenate((header, body), axis=0)
ValueError: all the input arrays must have same number of dimensions
```
Might someone be able to troubleshoot this?
I have tried different commands to merge the two (including stack) and am getting the same error. The dimensions (columns) do seem to be the same though. | 2018/07/10 | [
"https://Stackoverflow.com/questions/51271225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10029062/"
] | You need to align array dimensions first. You are currently trying to combine 1-dimensional and 2-dimensional arrays. After alignment, you can use [`numpy.vstack`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html).
Note `np.array([A]).shape` returns `(1, 7)`, while `B.shape` returns `(2, 7)`. A more efficient alternative would be to use `A[None, :]`.
Also note your array will become of dtype `object`, as this will accept arbitrary / mixed types.
```
A = np.array(['Subject_ID', 'tube_label', 'sample_#', 'Relabel',
'sample_ID','cortisol_value', 'Group'], dtype='<U14')
B = np.array([['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC'],], dtype=object)
res = np.vstack((np.array([A]), B))
print(res)
array([['Subject_ID', 'tube_label', 'sample_#', 'Relabel', 'sample_ID',
'cortisol_value', 'Group'],
['STM002', '170714_STM002_1', 1, 1, 1, 1.98, 'HC'],
['STM002', '170714_STM002_2', 2, 2, 2, 2.44, 'HC']], dtype=object)
Look at numpy.vstack and hstack, as well as the axis argument in np.append. Here it looks like you want vstack (i.e. the output array keeps the same 7 columns, with the header stacked as an extra row above the 2 data rows). You can also look into numpy.reshape, to change the shape of the input arrays so you can concatenate them. | 16,607 |
67,044,398 | To get the absolute path on my laptop I type:
```
import os
print(os.getcwd())
```
It gives me the path, no problem. But when I create a document "ayoub.txt" in that absolute path and call this document with:
```
file = open("ayoub.txt")
# I get an error:
# Traceback (most recent call last):
#   File "C:\Users\HPPRO~1\AppData\Local\Temp\tempCodeRunnerFile.python", line 4, in <module>
#     file = open("ayoub.txt")
# FileNotFoundError: [Errno 2] No such file or directory: 'ayoub.txt'
# PS C:\Users\HP PRO>
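# (sketch of the cause: open() resolves a bare filename against the current
#  working directory, which is not the folder holding ayoub.txt; opening an
#  explicit path works, e.g. with a hypothetical folder:
#  file = open(os.path.join("C:/some/folder", "ayoub.txt")) -- needs `import os`)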
``` | 2021/04/11 | [
"https://Stackoverflow.com/questions/67044398",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14819475/"
] | I ran your code with some dummy data, and the `cast<String>` for `categories` works for me. However, you have not added the cast to `skills` and `otherLanguages`. Have you checked the line number of the error? If the problem is definitely with `categories`, could you please add some sample data to the question? | Try replacing `List<String> categories, skills, otherLanguages;` with dynamic and remove the casting:
`List<dynamic> categories, skills, otherLanguages;` | 16,610 |
43,716,699 | ```
python manage.py runserver
Performing system checks...
Unhandled exception in thread started by <function wrapper at 0x03BBC1F0>
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\django\utils\autoreload.py", line 227, in wrapper
fn(*args, **kwargs)
File "C:\Python27\lib\site-packages\django\core\management\commands\runserver.py", line 125, in inner_run
self.check(display_num_errors=True)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 359, in check
include_deployment_checks=include_deployment_checks,
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 346, in _run_checks
return checks.run_checks(**kwargs)
File "C:\Python27\lib\site-packages\django\core\checks\registry.py", line 81, in run_checks
new_errors = check(app_configs=app_configs)
File "C:\Python27\lib\site-packages\django\core\checks\urls.py", line 16, in check_url_config
return check_resolver(resolver)
File "C:\Python27\lib\site-packages\django\core\checks\urls.py", line 26, in check_resolver
return check_method()
File "C:\Python27\lib\site-packages\django\urls\resolvers.py", line 254, in check
for pattern in self.url_patterns:
File "C:\Python27\lib\site-packages\django\utils\functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Python27\lib\site-packages\django\urls\resolvers.py", line 405, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Python27\lib\site-packages\django\utils\functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Python27\lib\site-packages\django\urls\resolvers.py", line 398, in urlconf_module
return import_module(self.urlconf_name)
File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "C:\Users\Kaidi\Desktop\CM2\CM\CM\urls.py", line 18, in <module>
from mysite import views
File "C:\Users\Kaidi\Desktop\CM2\CM\mysite\views.py", line 2, in <module>
from rest_framework import viewsets, permissions, status
File "C:\Python27\lib\site-packages\rest_framework\viewsets.py", line 26, in <module>
from rest_framework import generics, mixins, views
File "C:\Python27\lib\site-packages\rest_framework\generics.py", line 10, in <module>
from rest_framework import mixins, views
File "C:\Python27\lib\site-packages\rest_framework\views.py", line 98, in <module>
class APIView(View):
File "C:\Python27\lib\site-packages\rest_framework\views.py", line 103, in APIView
authentication_classes = api_settings.DEFAULT_AUTHENTICATION_CLASSES
File "C:\Python27\lib\site-packages\rest_framework\settings.py", line 220, in __getattr__
val = perform_import(val, attr)
File "C:\Python27\lib\site-packages\rest_framework\settings.py", line 165, in perform_import
return [import_from_string(item, setting_name) for item in val]
File "C:\Python27\lib\site-packages\rest_framework\settings.py", line 177, in import_from_string
module = import_module(module_path)
File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "C:\Python27\lib\site-packages\rest_framework_jwt\authentication.py", line 1, in <module>
import jwt
File "C:\Python27\lib\site-packages\jwt\__init__.py", line 17, in <module>
from .jwk import (
File "C:\Python27\lib\site-packages\jwt\jwk.py", line 60
def is_sign_key(self) -> bool:
^
SyntaxError: invalid syntax
``` | 2017/05/01 | [
"https://Stackoverflow.com/questions/43716699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7946534/"
] | You seem to have installed the JWT package, which is only compatible with Python 3.4+. The rest-framework-jwt app is trying to import that rather than PyJWT which is compatible with 2.7.
Remove that installation with `pip uninstall jwt`. Once removed you'll want to install PyJWT like so:
```
pip install PyJWT
``` | No need to uninstall jwt. Just upgrade your PyJWT:
```
pip install PyJWT --upgrade
``` | 16,611 |
1,507,091 | I'm trying to enforce a time limit on queries in python MySQLDB. I have a situation where I have no control over the queries, but need to ensure that they do not run over a set time limit. I've tried using signal.SIGALRM to interrupt the call to execute, but this does not seem to work. The signal gets sent, but does not get caught until after the call to execute finishes.
I wrote a test case to prove this behavior:
```
#!/usr/local/bin/python2.6
import time
import signal
from somewhere import get_dbc
class Timeout(Exception):
""" Time Exceded """
def _alarm_handler(*args):
raise Timeout
dbc = get_dbc()
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(1)
try:
print "START: ", time.time()
dbc.execute("SELECT SLEEP(10)")
except Timeout:
print "TIMEOUT!", time.time()'
```
The "SELECT SLEEP(10)" is simulating a slow query, but I do see the same behavior with an actual slow query.
The Result:
```
START: 1254440686.69
TIMEOUT! 1254440696.69
```
As you can see, it's sleeping for 10 seconds then I get the Timeout Exception.
Questions:
1. Why do I not get the signal until after execute finishes?
2. Is there another reliable way to limit query execution time? | 2009/10/01 | [
"https://Stackoverflow.com/questions/1507091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678/"
] | [@nosklo's twisted-based solution](https://stackoverflow.com/a/1507370/8053001) is elegant and workable, but if you want to avoid the dependency on twisted, the task is still doable, e.g.:
```
import multiprocessing
def query_with_timeout(dbc, timeout, query, *a, **k):
conn1, conn2 = multiprocessing.Pipe(False)
subproc = multiprocessing.Process(target=do_query,
args=(dbc, query, conn2)+a,
kwargs=k)
subproc.start()
subproc.join(timeout)
if conn1.poll():
return conn1.recv()
subproc.terminate()
raise TimeoutError("Query %r ran for >%r" % (query, timeout))
def do_query(dbc, query, conn, *a, **k):
cu = dbc.cursor()
cu.execute(query, *a, **k)
    conn.send(cu.fetchall())  # send the rows back through the pipe so the parent's recv() gets them
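# usage sketch (dbc being any DB-API connection):
#   rows = query_with_timeout(dbc, 5.0, "SELECT SLEEP(10)")
# raises TimeoutError after ~5 seconds instead of blocking for the full 10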
``` | Use [adbapi](http://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.html). It allows you to do a db call asynchronously.
```
from twisted.internet import reactor
from twisted.enterprise import adbapi
def bogusQuery():
return dbpool.runQuery("SELECT SLEEP(10)")
def printResult(l):
# function that would be called if it didn't time out
for item in l:
print item
def handle_timeout():
# function that will be called when it timeout
reactor.stop()
dbpool = adbapi.ConnectionPool("MySQLdb", user="me", password="myself", host="localhost", database="async")
bogusQuery().addCallback(printResult)
reactor.callLater(4, handle_timeout)
reactor.run()
``` | 16,612 |
17,213,455 | Im kind of new to python. Im trying to remove the first sentence from a string using the full stop as the delimiter. Is split the right method to be using in this instance? Im not getting the desired result...
```
def get_summary(self):
if self.description:
s2 = self.description.split('.', 1)[1]
return s2
else:
return None
``` | 2013/06/20 | [
"https://Stackoverflow.com/questions/17213455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2342568/"
] | You can use [`String.split`](http://docs.oracle.com/javase/6/docs/api/java/lang/String.html#split%28java.lang.String%29):
```
String cmd = "command atr1 art22 atr333 art4444";
String[] parts = cmd.split(" ");
```
The split method permits using a regular expression. This is useful for example if the amount of whitespace varies:
```
String cmd = "command atr1 art22 atr333 art4444";
String[] parts = cmd.split(" +"); // split by spans of one or more spaces
``` | Here a few options, sorted from easy/annoying-in-the-end to powerful/hard-to-learn
* "your command pattern".split( " " ) gives you an array of strings
* [`java.util.Scanner`](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/Scanner.html) lets you take out one token after the other, and it has some handy helpers for parsing like `nextInt()` or `nextFloat()`
* a command line parser library, like [commons cli](http://commons.apache.org/proper/commons-cli/). those are a bit of work to learn, but they have the upside of solving some other problems you will be facing shortly :)
p.s. to generally find more help on the internet the search term you are looking for is "java parsing command line arguments", thats pretty much what you're trying to do, in case you didn't know :) | 16,622 |
44,486,483 | So I've begun working on this little translator program that translates English to German with an input. However, when I enter more than one word I get the words I've entered, followed by the correct translation.
This is what I have so far:
```
data = [input()]
dictionary = {'i':'ich', 'am':'bin', 'a':'ein', 'student':'schueler', 'of the':'der', 'german':'deutschen', 'language': 'sprache'}
from itertools import takewhile
def find_suffix(s):
return ''.join(takewhile(str.isalpha, s[::-1]))[::-1]
for d in data:
sfx = find_suffix(d)
print (d.replace(sfx, dictionary.get(sfx, sfx)))
```
I'm trying to get the following output:
```
"i am a student of the german sprache"
```
as opposed to:
```
"ich bin ein schueler der deutschen spracher"
```
I'm quite new to python so any help would be greatly appreciated | 2017/06/11 | [
"https://Stackoverflow.com/questions/44486483",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8144709/"
] | Changing your code to this should provide a first step to what you're looking for.
```
data = raw_input()
dictionary = {'i':'ich', 'am':'bin', 'a':'ein', 'student':'schueler', 'of':'der', 'german':'deutschen', 'language': 'sprache'}
from itertools import takewhile
def find_suffix(s):
return ''.join(takewhile(str.isalpha, s[::-1]))[::-1]
for d in data.split():
sfx = find_suffix(d)
print (d.replace(sfx, dictionary.get(sfx,''))),
```
What you have right now does not take every separate word into consideration as data is not a list of words as you intended but a list holding one string, the input you provided. Try print-debugging your snippet to see what I am talking about.
Notice that with such logic, corner cases appear in your project. Taking each word and translating it to its German counterpart rules out dictionary entries longer than one word, such as `'of the':'der'`. For demo purposes I chose to keep a dictionary with keys of length 1, so the above key:value pair becomes `'of':'der'`, which is not correct, as German grammar is a little more complicated than that.
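If you do want multi-word keys like `'of the'`, one minimal longest-match sketch (illustrative only; real grammar needs far more than word lookup):
```
def translate(text, dictionary):
    words = text.split()
    out, i = [], 0
    while i < len(words):
        two = ' '.join(words[i:i+2])  # try the two-word phrase first
        if two in dictionary:
            out.append(dictionary[two]); i += 2
        elif words[i] in dictionary:
            out.append(dictionary[words[i]]); i += 1
        else:
            out.append(words[i]); i += 1  # leave unknown words untouched
    return ' '.join(out)

print(translate('i am a student of the german language',
                {'i': 'ich', 'am': 'bin', 'a': 'ein', 'student': 'schueler',
                 'of the': 'der', 'german': 'deutschen', 'language': 'sprache'}))
# -> ich bin ein schueler der deutschen sprache
```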
You now have more problems than what you started with, which is what toy projects are for. If I was you, I'd look into how open source projects deal with such cases and try to see what fits. Good luck with your project. | ```
data = raw_input().split()  # split the input sentence into words
dictionary = {'i':'ich', 'am':'bin', 'a':'ein', 'student':'schueler', 'of the':'der', 'german':'deutschen', 'language': 'sprache'}
for word in data:
if word in dictionary:
print dictionary[word],
```
Explanation:
for every word in your input, if that word is present in your dictionary,
it will print the value associated with that word; the trailing comma (,) suppresses the newline. | 16,625 |
63,826,975 | I get the following error when I want to import matplotlib.pyplot on the Visual Studio's jupyter-notebook.
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
----> 1 import matplotlib.pyplot as plt
~/miniconda3/envs/firstSteps/lib/python3.8/site-packages/matplotlib/__init__.py in
903 # triggering resolution of _auto_backend_sentinel.
904 rcParamsDefault = _rc_params_in_file(
--> 905 cbook._get_data_path("matplotlibrc"),
906 # Strip leading comment.
907 transform=lambda line: line[1:] if line.startswith("#") else line,
~/.local/lib/python3.8/site-packages/matplotlib/cbook/__init__.py in _get_data_path(*args)
AttributeError: module 'matplotlib' has no attribute 'get_data_path'
```
But I don't have this error if I try the same code on the navigator's jupyter-notebook.
So I don't understand why I get this error since both notebook are running under the same kernel which have the matplotlib 3.3.1 version installed on.
I would be grateful if someone can give me any enlightenment. :) | 2020/09/10 | [
"https://Stackoverflow.com/questions/63826975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12409079/"
] | If you want all Gold customers, then `Customers` should be the first table in the `LEFT JOIN`. There is also no need for a subquery on `customers`. However, MS Access does want one on `Transactions`:
```
SELECT c.CustId, NZ(SUM(t.Value)) AS Total
FROM Customers as c LEFT JOIN
(SELECT t.*
FROM Transactions as t
WHERE t.xDate BETWEEN #2020/01/03# AND #2020/01/04#
) as t
ON t.CustId = c.CustId
WHERE c.CustType = 'Gold'
GROUP BY c.CustId;
``` | ***Edit:*** Simplified query
```
SELECT Customers.CustID, Sum(Transactions.tValue) AS Total
FROM Customers LEFT JOIN Transactions ON Customers.CustID = Transactions.CustID
WHERE (Transactions.xDate BETWEEN #2020/01/03# AND #2020/01/04#) AND (Customers.CustType='Gold')
GROUP BY Customers.CustID;
```
You can sum total of union query result group by customer id. try below
I assume your `CustID` field is `Number` data type. If it is string data type then you need to change `DLookup()` function criteria part like `DLookup("CustType","Customers","CustID='" & t.CustID & "'")`
```
SELECT ut.CustID, Sum(ut.Total) AS Total
FROM (SELECT c.CustId, 0 as Total, c.CustType
FROM Customers AS c
GROUP BY c.CustId,c.CustType
UNION
SELECT t.CustId, SUM(t.tValue) AS Total, DLookup("CustType","Customers","CustID=" & t.CustID ) as CustType
FROM Transactions AS t
WHERE t.xDate BETWEEN #2020/01/03# AND #2020/01/04# GROUP BY t.CustId) AS ut GROUP BY ut.CustID, ut.CustType
HAVING (((ut.CustType)='gold'));
``` | 16,627 |
34,783,867 | I have two pandas series like following.
```
bulk_order_id
Out[283]:
3 523
Name: order_id, dtype: object
```
and
```
cluster_6_loc
Out[285]:
3 Cluster 3
Name: Clusters, dtype: object
```
Now I want a new series which would look like this.
```
Cluster 3 523
```
I am doing following in python
```
cluster_final = pd.Series()
for i in range(len(cluster_6_loc)):
cluster_final.append(pd.Series(bulk_order_id.values[i], index =
cluster_6_loc.iloc[i]))
```
Which gives me an error saying
```
TypeError: Index(...) must be called with a collection of some kind, 'Cluster 3' was passed
``` | 2016/01/14 | [
"https://Stackoverflow.com/questions/34783867",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2927983/"
] | You could pass to `pd.Series` values of `luster_6_loc` as index and values of `bulk_order_id` as values:
```
bulk_order_id = pd.Series(523, index=[3])
cluster_6_loc= pd.Series('Cluster 3', index=[3])
cluster_final = pd.Series(bulk_order_id.values, cluster_6_loc.values)
In [149]: cluster_final
Out[149]:
Cluster 3 523
dtype: int64
```
**EDIT**
It's strange but it seems that `append` to `Series` doesn't work correctly (at least in version `0.17.1`):
```
s = pd.Series()
In [199]: s.append(pd.Series(1, index=[0]))
Out[199]:
0 1
dtype: int64
In [200]: s
Out[200]: Series([], dtype: float64)
```
Btw for your case your could do `set_value`:
```
cluster_final = pd.Series()
for i in range(len(cluster_6_loc)):
cluster_final.set_value(cluster_6_loc.iloc[i], bulk_order_id.values[i])
In [209]: cluster_final
Out[209]:
Cluster 3 523
dtype: int64
``` | Not sure whether I'm understanding your question correctly, but what's wrong with `pd.concat()` ([see docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html)):
```
s1 = pd.Series(data=['523'], index=[3])
3 523
dtype: object
s2 = pd.Series(data=['Cluster 3'], index=[3])
3 Cluster 3
dtype: object
```
and using `pd.concat()`, which would also work for several values:
```
pd.concat([s1, s2], axis=1)
0 1
3 523 Cluster 3
```
resulting in a `DataFrame` which is what you'll probably need anyway when combining `Series` with several values. You can move any of the `values` to the `index` using `.set_index()`, or add `.squeeze()` to get a `Series` instead.
So `pd.concat([s1, s2], axis=1).set_index(1)` gives:
```
0
1
Cluster 3 523
``` | 16,630 |
67,434,998 | I'm new to python / pandas. I've got multiple csv files in a directory. I want to remove duplicates in all the files and save new files to another directory.
Below is what I've tried:
```
import pandas as pd
import glob
list_files = (glob.glob("directory path/*.csv"))
for file in list_files:
df = pd.read_csv(file)
df_new = df.drop_duplicates()
df_new.to_csv(file)
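# (a sketch of the needed changes, with a hypothetical output folder `out_dir`:
#  df_new.to_csv(os.path.join(out_dir, os.path.basename(file)), index=False)
#  writes into the other directory and drops the extra index column; needs `import os`)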
```
This code runs but doesn't yield the expected results. A couple of issues:
1. Files are overwritten in the existing directory.
2. An additional index column is added, which is not required.
What changes need to be made to the code to get the same set of files, with the same file names, without duplicate rows, in another directory? | 2021/05/07 | [
"https://Stackoverflow.com/questions/67434998",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15862144/"
] | The problem is the `{}` wrapping around your file: pandas thinks the first level of the JSON holds the columns, and thus it uses just 'Browser History' as a column. You can use this code to solve your problem:
```
import pandas as pd
df = pd.DataFrame(json.load(open('BrowserHistory.json', encoding='cp850'))['Browser History'])
print(df)
``` | Because your objects are in a list at the second level down of your JSON, you can't read it directly into a dataframe using `read_json`. Instead, you could read the json into a variable, and then create the dataframe from that:
```py
import pandas as pd
import json
f = open("BrowserHistory.json")
js = json.load(f)
df = pd.DataFrame(js['Browser History'])
df
# favicon_url page_transition ... client_id time_usec
# 0 https://www.google.com/favicon.ico LINK ... cliendid 1620386529857946
# 1 https://www.google.com/favicon.ico LINK ... cliendid 1620386514845201
# 2 https://www.google.com/favicon.ico LINK ... cliendid 1620386499014063
# 3 https://ssl.gstatic.com/ui/v1/icons/mail/rfr/g... LINK ... cliendid 1620386492788783
```
Note you may need to specify the file encoding on the `open` call e.g.
```py
f = open("BrowserHistory.json", encoding="utf8")
``` | 16,633 |
52,949,128 | I'm doing a project that involves analyzing WhatsApp log data.
After preprocessing the log file I have a table that looks like this:
```
DD/MM/YY | hh:mm | name | text |
```
Using a chat with a friend of mine, I could build a graph of the number of texts per month and the mean number of words per month, but I have some problems:
* If we didn't exchange texts in a month, the algorithm doesn't count that month; in the graph I want to see that month with 0 messages.
* Is there a better way to handle dates and times in Python? Using them as strings isn't intuitive, but I didn't find anything useful online.
[this is the GitLab page of my project.](https://gitlab.com/GiuseppeMinardi/whatsgraph)
```
def wapp_split(line):
splitted = line.split(',')
Data['date'].append(splitted[0])
splitted = splitted[1].split(' - ')
Data['time'].append(splitted[0])
splitted = splitted[1].split(':')
Data['name'].append(splitted[0])
Data['msg'].append(splitted[1][0:-1])
def wapp_parsing(file):
with open(file) as f:
data = f.readlines()
for line in data:
if (line[17:].find(':')!= -1):
if (line[0] in numbers) and (line[1]in numbers):
prev = line[0:35]
wapp_split(line)
else:
line = prev + line
wapp_split(line)
```
Those are the main functions of the script. The WhatsApp log is formatted like so:
```
DD/MM/YY, hh:mm - Name Surname: This is a text sent using WhatsApp
```
The parsing function just takes the file and sends each line to the split function. The `if` checks in the parsing function simply prevent messages generated by WhatsApp itself, rather than by the people in the chat, from being parsed. | 2018/10/23 | [
"https://Stackoverflow.com/questions/52949128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8425613/"
] | Suppose that the table you have is a .csv file that looks like this (call it msgs.csv):
```
date;time;name;text
22/10/2018;11:30;Maria;Hello how are you
23/10/2018;11:30;Justin;Check this
23/10/2018;11:31;Justin;link
22/11/2018;11:30;Maria;Hello how are you
23/11/2018;11:30;Justin;Check this
23/12/2018;11:31;Justin;link
22/12/2018;11:30;Maria;Hello how are you
23/12/2018;11:30;Justin;Check this
23/01/2019;11:31;Justin;link
23/04/2019;11:30;Justin;Check this
23/07/2019;11:31;Justin;link
```
Now you can use pandas to import this csv in a table format that will recognise both date and time as a timestamp object and then for your calculations you can group the data by month.
```
import pandas as pd
dateparse = lambda x: pd.datetime.strptime(x, '%d/%m/%Y %H:%M')
df = pd.read_csv('msgs.csv', delimiter=';', parse_dates=[['date', 'time']], date_parser=dateparse)
per = df.date_time.dt.to_period("M")
g = df.groupby(per)
for i in g:
print('#######')
print('year: {year} ; month: {month} ; number of messages: {n_msgs}'
.format(year=i[0].year, month=i[0].month, n_msgs=len(i[1])))
```
EDIT - no information about specific month = 0 messages:
========================================================
In order to get 0 for the months in which no messages were sent, you can do it like this (it looks better than the above, too):
```
import pandas as pd
dateparse = lambda x: pd.datetime.strptime(x, '%d/%m/%Y %H:%M')
df = pd.read_csv('msgs.csv', delimiter=';', parse_dates=[['date', 'time']], date_parser=dateparse)
# create date range from oldest message to newest message
dates = pd.date_range(*(pd.to_datetime([df.date_time.min(), df.date_time.max()]) + pd.offsets.MonthEnd()), freq='M')
for i in dates:
df_aux = df[(df.date_time.dt.month == i.month) & (df.date_time.dt.year == i.year)]
print('year: {year} ; month: {month} ; number of messages: {n_msgs}'
.format(year=i.year, month=i.month, n_msgs=len(df_aux)))
```
EDIT 2: parse logs into a pandas dataframe:
===========================================
```
df = pd.DataFrame({'logs':['DD/MM/YY, hh:mm - Name Surname: This is a text sent using WhatsApp',
'DD/MM/YY, hh:mm - Name Surname: This is a text sent using WhatsApp']})
pat = re.compile("(?P<date>.*?), (?P<time>.*?) - (?P<name>.*?): (?P<message>.*)")
df_parsed = df.logs.str.extractall(pat)
``` | It's best to convert the strings into datetime objects
```
from datetime import datetime
datetime_object = datetime.strptime('22/10/18', '%d/%m/%y')
```
When converting from a string, remember to use the correct seperators, ie "-" or "/" to match the string, and the letters in the format template on the right hand side of the function to parse with the date string too. Full details on the meaning of the letters can be found at [Python strptime() Method](https://www.tutorialspoint.com/python/time_strptime.htm) | 16,634 |
6,324,412 | After answering a question here on SO about finding a city in a
user-supplied question, I started thinking about the *best* way to
search for a string in a text when you have a limited data-set like this one.
`in` and `find` match against a substring, which is not wanted. Regular
expressions using "word boundaries" work but are quite slow. The
"punctuation" approach seems to be a candidate, but there are a lot of
punctuation [characters](http://en.wikipedia.org/wiki/Punctuation) that can appear both in the question as well as
some in the name of a city (i.e. the period in "St. Louis").
Regexps are probably the best general-purpose solution, but I'm
curious if this can be solved using some other technique.
**The task is to:**
Find a city in the US in a user supplied text in the English language
regardless of case.
My code heavily inspired by <http://www.python.org/doc/essays/list2str/>
```
#!/usr/bin/env python
import time
import re
def timing(f, n):
print f.__name__,
r = range(n)
t1 = time.clock()
for i in r:
f(); f(); f(); f(); f(); f(); f(); f(); f(); f()
t2 = time.clock()
print round(t2-t1, 6)
def f0():
'''broken since it finds sub-strings, e.g.
city "Erie" is found in "series"'''
Q = question.upper()
for c in cities:
c = c.upper()
if c in Q:
pass
def f1():
'''slow, but working'''
for c in cities:
re.search('\\b%s\\b' % c, question, re.IGNORECASE)
def f2():
'''broken, same problem as f0()'''
Q = question.upper()
for c in cities:
c = c.upper()
if Q.find(c) > 0:
pass
def f3():
'''remove all punctuation, and then search for " str " '''
Q = question.upper()
punct = ['.', ',', '(', ')', '"', '\n', ' ', ' ', ' ']
for p in punct:
Q = Q.replace(p, ' ')
for c in cities:
c = ' ' + c.upper() + ' '
for p in punct:
c = c.replace(p, ' ')
if c in Q:
pass
with open('cities') as fd:
cities = [line.strip() for line in fd]
with open('question') as fd:
question = fd.readlines()[0]
testfuncs = f0, f1, f2, f3
for f in testfuncs:
print f
timing(f, 20)
```
On my old dodgy laptop, I get the following results
```
<function f0 at 0xb7730bc4>
f0 0.14
<function f1 at 0xb7730f7c>
f1 10.4
<function f2 at 0xb7730f44>
f2 0.15
<function f3 at 0xb7738684>
f3 0.61
```
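One more technique that might be worth timing, as a sketch: pre-compile a single alternation over all the city names (escaped, so punctuation like the period in "St. Louis" is safe) and scan the question once instead of once per city:
```
city_re = re.compile(r'\b(?:%s)\b' % '|'.join(re.escape(c) for c in cities),
                     re.IGNORECASE)

def f4():
    '''single pre-compiled regex, one pass over the question'''
    city_re.search(question)
```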
If someone would like to have a go on my testdata, it can be found
[here](http://pastebin.com/VMAuVG9n) | 2011/06/12 | [
"https://Stackoverflow.com/questions/6324412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/297323/"
] | In the **Language** menu, select your corresponding language. For example, **H** and then **html**. | 1. Check if you have saved the documents as .HTML and not as .txt
2. In the menu, choose Settings > Style Configurator...
and in the list in the left pane select html; check if the colors for different tags are shown in the color blocks. If yes, choose a font and then save and exit.
3. Check whether it is working only after you save the document as .html. | 16,636 |
63,781,794 | I got this error message when I was installing python-binance.
The error message is in the link below; please check:
<https://docs.google.com/document/d/1VE0Ux_ji9RoK0NIrPD3BSbs60sTaxThk3boxsvh051c/edit>
Does anyone know how to fix it? | 2020/09/07 | [
"https://Stackoverflow.com/questions/63781794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14236836/"
] | You're trying to install [`email` from PyPI](https://pypi.org/project/email/) which is a very old outdated Python2-only package.
`email` is now [a module in the stdlib](https://docs.python.org/3/library/email.html). You don't need to install it, it must always be available. Just import and use. | You might have outdated setuptools, try:
```
pip install --upgrade setuptools
```
Then continue trying to install the module you want.
Usually these kinds of problems can be solved by googling the error: in this case you should try searching with "python setup.py egg\_info".
Also, try to give a more descriptive title for your problems in the future. "Python installing package with pip failed" is too broad. | 16,646 |
16,375,251 | This is part of a project I am working on for work.
I want to automate a Sharepoint site, specifically to pull data out of a database that I and my coworkers only have front-end access to.
I FINALLY managed to get mechanize (in python) to accomplish this using Python-NTLM, and by patching part of its source code to fix a recurring error.
Now, I am at what I would hope is my final roadblock: Part of the form I need to submit seems to be output of a JavaScript function :| and lo and behold... Mechanize does not support javascript. I don't want to emulate the javascript functionality myself in python because I would ideally like a reusable solution...
So, does **anyone** know how I could evaluate the javascript on the local html I download from sharepoint? I just want to run the javascript somehow (to complete the loading of the page), but without a browser.
I have already looked into selenium, but it's pretty slow for the amount of work I need to get done... I am currently looking into PyV8 to *try* and evaluate the javascript myself... but surely there must be an app or library (or **anything**) that can do this?? | 2013/05/04 | [
"https://Stackoverflow.com/questions/16375251",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/629404/"
] | Well, in the end I came down to the following possible solutions:
* **Run Chrome headless** and collect the html output (thanks to koenp for the link!)
* **Run PhantomJS**, a headless browser with a javascript api
* **Run HTMLUnit**; same thing but for Java
* **Use Ghost.py**, a **python-based** headless browser (that I haven't seen suggested anyyyywhere for some reason!); a sketch follows this list
* Write a DOM-based javascript interpreter based on Pyv8 (Google v8 javascript engine) and add this to my current "half-solution" with mechanize.
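A minimal sketch of the Ghost.py idea (treat the API names here, `Ghost`, `open` and `evaluate`, as assumptions taken from its README at the time, and the URL as a placeholder):
```
# sketch only -- Ghost.py API assumed, not verified against a specific version
from ghost import Ghost

ghost = Ghost()
page, resources = ghost.open('http://sharepoint.example/form')  # hypothetical URL
result, resources = ghost.evaluate('document.title')            # run JS in the loaded page
```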
For now, I have decided to either use Ghost.py or my own modification of the PySide/PyQT Webkit (how Ghost works) to evaluate the javascript, as apparently they can run quite fast if you optimize them to not download images and disable the GUI.
Hopefully others will find this list useful! | Well, you will need something that understands both the DOM and Javascript, so that comes down to a headless browser of some sort. Maybe you can take a look at the [selenium webdriver](http://docs.seleniumhq.org/docs/03_webdriver.jsp), but I guess you already did that. I don't think there is an easy way of doing this without running the stuff in an actual browser engine. | 16,648 |
59,591,862 | Essentially I'm trying to do something that is stated here [Changing variables in multiple Python instances](https://stackoverflow.com/questions/9302789/changing-variables-in-multiple-python-instances)
but in java.
I want to reset a variable in all instances of a certain class so something like:
```
public class NewClass{
int variable = 1;
}
```
then:
```
NewClass one = new NewClass();
NewClass two = new NewClass();
NewClass three = new NewClass();
NewClass.variable = 2;
System.out.println(one.variable);
System.out.println(two.variable);
System.out.println(three.variable);
```
output would be:
```
2
2
2
```
is there a way to do that? | 2020/01/04 | [
"https://Stackoverflow.com/questions/59591862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12652681/"
] | This is probably far from a Todo but it'll give you some clarification of what to do.
```js
const box = document.querySelector('.box');
let inputTodo = document.getElementById('inputTodo');
const inputTodoHandler = (event) => {
if(event.which == 13 || event.keyCode == 13) {
addTodo(event.target.value);
event.target.value = '';
return false;
}
}
const addTodo = (todo) => {
const p = document.createElement('p');
p.textContent = todo;
box.appendChild( p );
}
inputTodo.addEventListener('keydown', inputTodoHandler );
```
```css
* {
margin: 0;
background-color: rgb(27, 27, 27);
font-family: 'Indie Flower', cursive;
}
h1 {
font-size: 5.5vw;
color: rgb(241, 240, 240);
display: flex;
justify-content: center;
margin-top: 50px;
letter-spacing: 1px;
}
.main {
margin-left: 100px;
margin-right: 100px;
margin-top: 50px;
display: flex;
flex-direction: row;
justify-content: space-evenly;
font-size: 1vw;
color: rgb(241, 240, 240);
letter-spacing: 2px;
}
.left {
display: flex;
flex-direction: column;
}
.left h2 {
padding-bottom: 50px;
}
.left form {
border: 4px solid rgb(102, 181, 255);
border-radius: 5px;
}
.left form input {
height: 30px;
width: 100%;
color: rgb(241, 240, 240);
font-size: 24px;
letter-spacing: 1px;
border: none;
}
.box {
border: 5px solid black;
border-radius: 7px;
background-color: rgb(255, 234, 176);
width: 600px;
height: 68vh;
}
```
```html
<!DOCTYPE html>
<html lang="en">
<head>
<link rel="stylesheet" href="style.css">
<link rel="text/javascript" href="javascript.js">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<link href="https://fonts.googleapis.com/css?family=Indie+Flower&display=swap" rel="stylesheet">
<title>My to do list</title>
</head>
<body>
<header>
<h1>To do list</h1>
</header>
<section class="main">
<div class="left">
<h2>Please Enter your things to do here..</h2>
<form action="" onsubmit="return false;">
<input type="text" id="inputTodo">
</form>
</div>
<div class="box">
</div>
</section>
</body>
</html>
``` | Use a database system in your website, using SQL or any server that stores the info in the cloud, so you can store/edit/delete and access the data in your desired location. | 16,649 |
17,960,696 | I was trying to install a package using easy\_install; errors happened while "processing dependencies", and it looks like it cannot locate a package. Here's the error I got:
---
```
Processing dependencies for python-pack==1.5.0beta2
Searching for python-pack==1.5.0beta2
Reading http://pypi.python.org/simple/python-pack/
Couldn't find index page for 'python-pack' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading http://pypi.python.org/simple/
No local packages or download links found for python-pack==1.5.0beta2
Best match: None
```
---
The package to be installed is actually for Ubuntu, and my system is Debian. But I didn't expect errors at this stage.
Could anyone please help me out?
Thanks,
Zhihui | 2013/07/31 | [
"https://Stackoverflow.com/questions/17960696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2636377/"
] | It turned out that the jar files in ~/.m2/repository had been corrupted. This issue was solved by deleting everything in the repository and doing a:
>
> mvn clean install
>
>
>
All the classes can be resolved now. | The answer is more likely that you need to add the dependency to your pom.xml file:
```
<dependency>
<groupId>io.dropwizard</groupId>
<artifactId>dropwizard-hibernate</artifactId>
<version>${dropwizard.version}</version>
</dependency>
``` | 16,653 |
33,324,083 | I am having trouble learning to plot a function in python. For example I want to create a graph with these two functions:
```
y=10x
y=5x+20
```
The only way I found was to use the following code
```
import matplotlib.pyplot as plt
plt.plot([points go here], [points go here])
plt.plot([points go here], [points go here])
plt.ylabel('some numbers')
plt.show()
```
and to manually enter data points, but I have some tougher problems coming up so that would be really difficult.
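(For what it's worth, a minimal sketch of generating the points from the formulas instead of typing them by hand, assuming numpy is available:)
```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)   # 100 evenly spaced sample points
plt.plot(x, 10 * x)           # y = 10x
plt.plot(x, 5 * x + 20)       # y = 5x + 20
plt.ylabel('some numbers')
plt.show()
```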
Is there a way to just put in what function I need a plot for and have python create the graph for me? | 2015/10/24 | [
"https://Stackoverflow.com/questions/33324083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5484597/"
] | There are quite a lot of answers here which explain that, but let me give you another one.
A string is interned into the String literal pool only in two situations: when a class is loaded and the String was a literal or compile time constant. Otherwise only when you call `.intern()` on a String. Then a copy of this string is listed in the pool and returned. All other string creations will not be interned. String concatenation (`+`) is producing new instances as long as it is not a compile time constant expression\*.
First of all: never ever use it. If you do not understand it you should not use it. Use `.equals()`. Interning strings for the sake of comparison might be slower than you think and unnecessarily filling the hashtable. Especially for strings with highly different content.
1. s3 is a string literal from the constant pool and therefore interned. s4 is a expression not producing an interned constant.
2. when you intern s4 it has the same content as s3 and is therefore the same instance.
3. same as s4, expression not a constant
4. if you intern s1+s2 you get the instance of s3, but s4 is still not s3
5. if you intern s4 it is the same instance as s3
Some more questions:
```
System.out.println(s3 == s3.intern()); // is true
System.out.println(s4 == s4.intern()); // is false
System.out.println(s1 == "abc"); // is true
System.out.println(s1 == new String("abc")); // is false
```
\* Compile time constants can be expressions with literals on both sides of the concatenation (like `"a" + "bc"`) but also final String variables initialized from constants or literals:
```
final String a = "a";
final String b = "b";
final String ab = a + b;
final String ab2 = "a" + b;
final String ab3 = "a" + new String("b");
System.out.println("ab == ab2 should be true: " + (ab == ab2));
System.out.println("a+b == ab should be true: " + (a+b == ab));
System.out.println("ab == ab3 should be false: " + (ab == ab3));
``` | One thing you have to know is that Strings are objects in Java. The variables s1 - s4 do not point directly to the text you stored. Each is simply a pointer which says where to find the text within your RAM.
1. It is false because you compare the pointers, not the actual text. The text is the same, but these two Strings are 2 completely different objects, which means they have different pointers. Try printing s1 and s2 on the console and you will see.
2. It's true, because Java does some optimizing concerning Strings. If the JVM detects that two different Strings share the same text, they will be put in something called the "String Literal Pool". Since s3 and s4 share the same text they will also share the same slot in the "String Literal Pool". The intern()-method gets the reference to the String in the Literal Pool.
3. Same as 1. You compare two pointers. Not the text-content.
4. As far as I know added values do not get stored in the pool
5. Same as 2. They contain the same text, so they get stored in the String Literal Pool and therefore share the same slot. | 16,654 |
46,132,556 | When I try to install python3-tk for python3.5 on ubuntu 16.04 I get the following error, what should I do?
python3-tk : Depends: python3 (< 3.5) but 3.5.1-3 is to be installed | 2017/09/09 | [
"https://Stackoverflow.com/questions/46132556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3604079/"
] | Activity transitions are always expensive, and we should switch from one activity to another only when we are switching context. A `fragment` is a portion of UI in an activity. The same fragment can be used with multiple activities. Just like an activity, a fragment has its own lifecycle, and `setContentView(int layoutResID)` can be set to a different layout in the fragment's `onCreate`.
This [link](https://stackoverflow.com/questions/20306091/dilemma-when-to-use-fragments-vs-activities) explains more on when to use activity or fragment.
[Android developer guide on Fragments](https://developer.android.com/guide/components/fragments.html)
[Code path tutorial](https://guides.codepath.com/android/Bottom-Navigation-Views) on bottom navigation views. | Please refer to :-
<https://github.com/waleedsarwar86/BottomNavigationDemo>
and complete explanation in
<http://waleedsarwar.com/posts/2016-05-21-three-tabs-bottom-navigation/>
You will get a running code with the explanation here. | 16,656 |
27,713,681 | ```
10:01:36 adcli
10:01:36 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 adcli
10:01:37 runma
10:01:37 runma
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 roots
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 adcli
10:01:37 adcli
10:01:37 sshd[
10:01:37 sshd[
```
Here is my approach (I know it's not complete, but):
```
from collections import Counter, defaultdict

# tally how many times each process name appears at each timestamp
counts = defaultdict(Counter)
with open("hex01.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) == 2:
            timestamp, process = parts
            counts[timestamp][process.rstrip('[')] += 1  # 'sshd[' -> 'sshd'
for timestamp in sorted(counts):
    c = counts[timestamp]
    print timestamp, c["sshd"], c["adcli"], c["roots"], sum(c.values())
```
This is the sample input data I am trying to process in Python. I know it's reasonably easy to get the output using bash, but I am learning Python and I find it reasonably tough to get the desired output. Any help will be appreciated; I don't even need perfect code, the algorithm and the appropriate Python libraries are enough. The output should be
a timestamp, the per-process counts, and a total,
ex: `10:01:37 10,2,1,13` - meaning 10 sshd, 2 adcli and 1 roots at that timestamp in the above log file | 2014/12/30 | [
"https://Stackoverflow.com/questions/27713681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4406840/"
] | As Respawned alluded to, there is no easy answer that will work in all cases. That being said, here are two approaches which seem to work fairly well. Both have upsides and downsides.
Approach 1
==========
Internally, the `getTextContent` method uses what's called an `EvaluatorPreprocessor` to parse the PDF operators and maintain the graphic state. So what we can do is implement a custom `EvaluatorPreprocessor`, overwrite the `preprocessCommand` method, and use it to add the current text color to the graphic state. Once this is in place, anytime a new text chunk is created, we can add a color attribute, and set it to the current color state.
The downsides to this approach are:
1. Requires modifying the PDFJS source code. It also depends heavily on
the current implementation of PDFJS, and could break if this is
changed.
2. It will fail in cases where the text is used as a path to be filled with an image. In some PDF creators (such as Photoshop), the way it creates colored text is, it first creates a clipping path from all the given text characters, and then paints a solid image over the path. So the only way to deduce the fill-color is by reading the pixel values from the image, which would require painting it to a canvas. Even hooking into `paintChar` won't be of much help here, since the fill color will only emerge at a later time.
The upside is, it's fairly robust and works irrespective of the page background. It also does not require rendering anything to canvas, so it can be done entirely in the background thread.
**Code**
All the modifications are made in the `core/evaluator.js` file.
First you must define the custom evaluator, after the [EvaluatorPreprocessor definition](https://github.com/mozilla/pdf.js/blob/eac168f3cc4df2e8f3f7790c6ca60426b96dcc54/src/core/evaluator.js#L2242).
```
var CustomEvaluatorPreprocessor = (function() {
function CustomEvaluatorPreprocessor(stream, xref, stateManager, resources) {
EvaluatorPreprocessor.call(this, stream, xref, stateManager);
this.resources = resources;
this.xref = xref;
// set initial color state
var state = this.stateManager.state;
state.textRenderingMode = TextRenderingMode.FILL;
state.fillColorSpace = ColorSpace.singletons.gray;
state.fillColor = [0,0,0];
}
CustomEvaluatorPreprocessor.prototype = Object.create(EvaluatorPreprocessor.prototype);
CustomEvaluatorPreprocessor.prototype.preprocessCommand = function(fn, args) {
EvaluatorPreprocessor.prototype.preprocessCommand.call(this, fn, args);
var state = this.stateManager.state;
switch(fn) {
case OPS.setFillColorSpace:
state.fillColorSpace = ColorSpace.parse(args[0], this.xref, this.resources);
break;
case OPS.setFillColor:
var cs = state.fillColorSpace;
state.fillColor = cs.getRgb(args, 0);
break;
case OPS.setFillGray:
state.fillColorSpace = ColorSpace.singletons.gray;
state.fillColor = ColorSpace.singletons.gray.getRgb(args, 0);
break;
case OPS.setFillCMYKColor:
state.fillColorSpace = ColorSpace.singletons.cmyk;
state.fillColor = ColorSpace.singletons.cmyk.getRgb(args, 0);
break;
case OPS.setFillRGBColor:
state.fillColorSpace = ColorSpace.singletons.rgb;
state.fillColor = ColorSpace.singletons.rgb.getRgb(args, 0);
break;
}
};
return CustomEvaluatorPreprocessor;
})();
```
Next, you need to modify the [getTextContent method](https://github.com/mozilla/pdf.js/blob/eac168f3cc4df2e8f3f7790c6ca60426b96dcc54/src/core/evaluator.js#L908) to use the new evaluator:
```
var preprocessor = new CustomEvaluatorPreprocessor(stream, xref, stateManager, resources);
```
And lastly, in the [newTextChunk](https://github.com/mozilla/pdf.js/blob/eac168f3cc4df2e8f3f7790c6ca60426b96dcc54/src/core/evaluator.js#L922) method, add a color attribute:
```
color: stateManager.state.fillColor
```
Approach 2
==========
Another approach would be to extract the text bounding boxes via `getTextContent`, render the page, and for each text, get the pixel values which reside within its bounds, and take that to be the fill color.
The downsides to this approach are:
1. The computed text bounding boxes are not always correct, and in some cases may even be off completely (eg: rotated text). If the bounding box does not cover at least partially the actual text on canvas, then this method will fail. We can recover from complete failures, by checking that the text pixels have a color variance greater than a threshold. The rationale being, if bounding box is completely background, it will have little variance, in which case we can fallback to a default text color (or maybe even the color of k nearest-neighbors).
2. The method assumes the text is darker than the background. Otherwise, the background could be mistaken for the fill color. This won't be a problem in most cases, as most docs have white backgrounds.
The upside is, it's simple, and does not require messing with the PDFJS source code. Also, it will work in cases where the text is used as a clipping path and filled with an image. Though this can become hazy when you have complex image fills, in which case the choice of text color becomes ambiguous.
**Demo**
<http://jsfiddle.net/x2rajt5g/>
Sample PDF's to test:
* <https://www.dropbox.com/s/0t5vtu6qqsdm1d4/color-test.pdf?dl=1>
* <https://www.dropbox.com/s/cq0067u80o79o7x/testTextColour.pdf?dl=1>
**Code**
```
function parseColors(canvasImgData, texts) {
var data = canvasImgData.data,
width = canvasImgData.width,
height = canvasImgData.height,
defaultColor = [0, 0, 0],
minVariance = 20;
texts.forEach(function (t) {
var left = Math.floor(t.transform[4]),
w = Math.round(t.width),
h = Math.round(t.height),
bottom = Math.round(height - t.transform[5]),
top = bottom - h,
start = (left + (top * width)) * 4,
color = [],
best = Infinity,
stat = new ImageStats();
for (var i, v, row = 0; row < h; row++) {
i = start + (row * width * 4);
for (var col = 0; col < w; col++) {
if ((v = data[i] + data[i + 1] + data[i + 2]) < best) { // the darker the "better"
best = v;
color[0] = data[i];
color[1] = data[i + 1];
color[2] = data[i + 2];
}
stat.addPixel(data[i], data[i+1], data[i+2]);
i += 4;
}
}
var stdDev = stat.getStdDev();
t.color = stdDev < minVariance ? defaultColor : color;
});
}
function ImageStats() {
this.pixelCount = 0;
this.pixels = [];
this.rgb = [];
this.mean = 0;
this.stdDev = 0;
}
ImageStats.prototype = {
addPixel: function (r, g, b) {
if (!this.rgb.length) {
this.rgb[0] = r;
this.rgb[1] = g;
this.rgb[2] = b;
} else {
this.rgb[0] += r;
this.rgb[1] += g;
this.rgb[2] += b;
}
this.pixelCount++;
this.pixels.push([r,g,b]);
},
getStdDev: function() {
var mean = [
this.rgb[0] / this.pixelCount,
this.rgb[1] / this.pixelCount,
this.rgb[2] / this.pixelCount
];
var diff = [0,0,0];
this.pixels.forEach(function(p) {
diff[0] += Math.pow(mean[0] - p[0], 2);
diff[1] += Math.pow(mean[1] - p[1], 2);
diff[2] += Math.pow(mean[2] - p[2], 2);
});
diff[0] = Math.sqrt(diff[0] / this.pixelCount);
diff[1] = Math.sqrt(diff[1] / this.pixelCount);
diff[2] = Math.sqrt(diff[2] / this.pixelCount);
return diff[0] + diff[1] + diff[2];
}
};
``` | This question is actually extremely hard if you want to do it to perfection... or it can be relatively easy if you can live with solutions that work only some of the time.
First of all, realize that `getTextContent` is intended for searchable text extraction and that's all it's intended to do.
It's been suggested in the comments above that you use `page.getOperatorList()`, but that's basically re-implementing the whole PDF drawing model in your code... which is basically silly because the largest chunk of PDFJS does exactly that... except not for the purpose of text extraction but for the purpose of rendering to canvas. So what you want to do is to hack [canvas.js](https://github.com/mozilla/pdf.js/blob/master/src/display/canvas.js) so that instead of just setting its internal knobs it also does some callbacks to your code. Alas, if you go this way, you won't be able to use stock PDFJS, and I rather doubt that your goal of color extraction will be seen as very useful for PDFJS' main purpose, so your changes are likely not going to get accepted upstream, so you'll likely have to maintain your own fork of PDFJS.
After this dire warning, what you'd need to minimally change are the functions where PDFJS has parsed the PDF color operators and sets its own canvas painting color. That happens around line 1566 (of canvas.js) in [function setFillColorN](https://github.com/mozilla/pdf.js/blob/e7cddcce283c5e76bc042747b5588a6d250c25e1/src/display/canvas.js#L1664). You'll also need to hook the text render... which is rather a character renderer at canvas.js level, namely [CanvasGraphics\_paintChar](https://github.com/mozilla/pdf.js/blob/e7cddcce283c5e76bc042747b5588a6d250c25e1/src/display/canvas.js#L1350) around line 1270. With these two hooked, you'll get a stream of callbacks for color changes interspersed between character drawing sequences. So you can reconstruct the color of character sequences reasonably easy from this.. in the simple color cases.
And now I'm getting to the really ugly part: the fact that PDF has an extremely complex color model. First there are two colors for drawing anything, including text: a fill color and stroke (outline) color. So far not too scary, but the color is an index in a ColorSpace... of which there are several, RGB being only one possibility. Then there's also alpha and compositing modes, so the layers (of various alphas) can result in a different final color depending on the compositing mode. And the PDFJS has not a single place where it accumulates color from layers.. it simply [over]paints them as they come. So if you only extract the fill color changes and ignore alpha, compositing etc.. it will work but not for complex documents.
Hope this helps. | 16,661 |
55,482,197 | I am starting to learn the Django framework, so I need to install the latest python, pip, virtualenv and django packages on my Mac.
I tried to do it with brew, but I got some strange behavior.
First, python3 was installed not in /usr/bin/ but in the /Library/Frameworks/Python.framework directory:
```
$ which python
/usr/bin/python
$ which python3
/Library/Frameworks/Python.framework/Versions/3.7/bin/python3
```
This is strange to me, because every tutorial talks about /usr/bin/python37 and says nothing about /Library/Frameworks/Python.framework.
Is this okay?
After that I made `sudo pip3 install virtualenv` and got this answer:
```
The directory '/Users/user/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/user/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
```
Okay, I uninstalled and reinstalled with sudo's -H flag:
```
Installing collected packages: virtualenv
Successfully installed virtualenv-16.4.3
```
But when I try to make a virtual environment, I get:
```
$ virtualenv venv
-bash: /usr/local/bin/virtualenv: No such file or directory
```
Checking virtualenv location:
```
$ which virtualenv
/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv
```
Why /Library/Frameworks/Python.framework/?
And why does it search for virtualenv in /usr/local/bin/virtualenv?
Is coding on macOS always so painful? | 2019/04/02 | [
"https://Stackoverflow.com/questions/55482197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11301741/"
] | Instead of using brew you can simply use "venv".
To create a virtual environment you can run -->
```
python3 -m venv environment_name
```
Example: if you want to create a virtual environment for Django with the name django\_env:
```
python3 -m venv django_env
```
"-m" flag checks for sys.path and executes main module.
**Activation of Virtual Environment :**
```
source django_env/bin/activate
```
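With the environment active, packages install into it rather than into the system Python; for example (assuming you want Django here):
```
(django_env) $ pip install django
```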
**Deactivation :**
```
deactivate
``` | ### Python3 Virtualenv Setup
Requirements:
* Python3
* Pip3
```sh
$ brew install python3 #upgrade
```
Pip3 is installed with Python3
**Installation**
To install virtualenv via pip run:
```sh
$ pip3 install virtualenv
```
**Usage**
Creation of virtualenv:
```sh
$ virtualenv -p python3 <desired-path>
```
Activate the virtualenv:
```sh
$ source <desired-path>/bin/activate
```
Deactivate the virtualenv:
```sh
$ deactivate
```
---
You can see more about the `Homebrew` on the [official page](https://brew.sh). | 16,664 |
71,020,555 | Like in other programming languages (Python or JS), when we create a REST API, specifically a POST, we accept some JSON object as the request body.
EX:
url: .../employee (Post)
request body: {option: {filter: "suman"}}
In Python or JS we can just do request\_body.option.filter and get the data
How can I achieve the same with Java?
Do I need to create a class for the request\_body and for option, and make an instance object request\_body? | 2022/02/07 | [
"https://Stackoverflow.com/questions/71020555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9661967/"
] | What about this?
```
table1 %>%
left_join(cbind(table2, n = 1)) %>%
group_by(Col1, Col2, Col3) %>%
mutate(n = sum(n, na.rm = TRUE))
```
and we will see
```
Col1 Col2 Col3 n
<chr> <chr> <chr> <dbl>
1 Al F C 1
2 Al UF UC 1
3 Al P < 0
4 Cu F C 0
5 Cu UF UC 0
6 Cu P < 0
7 Pb F C 1
8 Pb UF UC 1
9 Pb P < 1
``` | **1)** Append an n=1 column to table2 and an n=0 column to table 1 and then sum n by group.
```
table2 %>%
mutate(n = 1L) %>%
bind_rows(table1 %>% mutate(n = 0L)) %>%
group_by(Col1, Col2, Col3) %>%
summarize(n = sum(n), .groups = "drop")
```
giving:
```
# A tibble: 10 x 4
Col1 Col2 Col3 n
<chr> <chr> <chr> <int>
1 Al F C 1
2 Al P < 0
3 Al UF UC 1
4 Cu F < 1
5 Cu F C 0
6 Cu P < 0
7 Cu UF UC 0
8 Pb F C 1
9 Pb P < 1
10 Pb UF UC 1
```
**2)** This variation gives the same result.
```
list(table1, table2) %>%
bind_rows(.id = "id") %>%
group_by(Col1, Col2, Col3) %>%
summarize(n = sum(id == 2L), .groups = "drop")
```
**3)** This is a data.table only solution.
```
rbindlist(list(table1, table2), idcol = TRUE)[,
.(n = sum(.id == 2L)), by = .(Col1, Col2, Col3)]
```
**4)** This is a base R solution.
```
both <- rbind(transform(table1, n = 0), transform(table2, n = 1))
aggregate(n ~., both, sum)
```
**5)** This uses SQL.
```
library(sqldf)
sqldf("with both as (
select *, 0 as n from table1
union all
select *, 1 as n from table2
)
select Col1, Col2, Col3, sum(n) as n
from both
group by Col1, Col2, Col3
")
``` | 16,667 |
17,818,502 | Consider this sample python code. It reads from stdin and writes to a file.
```
import sys
arg1 = sys.argv[1]
f = open(arg1,'w')
f.write('<html><head><title></title></head><body>')
for line in sys.stdin:
f.write("<p>")
f.write(line)
f.write("</p>")
f.write("</body></html>")
f.close()
```
Suppose I want to modify this same program to write to stdout instead. Then, I'll have to replace each instance of `f.write()` with `sys.stdout.write()`. But that would be too tedious. I want to know if there is a way to specify `f` as an alias for `sys.stdout`, so that `f.write()` is treated as `sys.stdout.write()`. | 2013/07/23 | [
"https://Stackoverflow.com/questions/17818502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1199882/"
] | Just do
```
>>> import sys
>>> f = sys.stdout
>>> f.write('abc')
abc
```
Now you just need to do `f = sys.stdout` instead of `f = open(fileName)`. (And remove `f.close()`)
**Also**, please consider using the following syntax for files.
```
with open(fileName, 'r') as f:
# Do Something
```
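A small sketch tying both ideas together: write the output logic against any file-like object, then pass it either a real file or `sys.stdout` (the names here are illustrative):
```
import sys

def render(out):
    # works with anything that has a write() method
    out.write('<p>hello</p>\n')

with open('out.html', 'w') as f:
    render(f)        # write to a file
render(sys.stdout)   # write to stdout, same code
```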
The file automatically gets closed for you this way. | Yes, in python, you can alias every class / function / method etc. Just assign the value you want to use to another variable:
```
import sys
f = sys.stdout
```
Now every method you call on `f` will get called on `sys.stdout`. You can do this with whatever you like, for example also with `i = sys.stdin` etc. | 16,669 |
18,263,733 | I am new to Python and Django; I am creating the first tutorial app.
I created the project using the following command:
```
C:\Python27\Scripts\django-admin.py startproject mysite
```
After that, it successfully created the project directory.
But when I run `python manage.py runserver` I get the error `not recognized as an internal or external command`. | 2013/08/15 | [
"https://Stackoverflow.com/questions/18263733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2582761/"
] | You just need to `cd` into mysite from there.
Use `cd mysite` from the command line. Then run `python manage.py runserver` and the dev server will start up in the current browser window (or a new one if there isn't a current one).
To visualize this for you:
```
current_dir/          <-- you're here now
    mysite/           <-- use cd mysite to get to here!
        manage.py     <-- and use this
        mysite/
            __init__.py
            urls.py
            settings.py
            etc.
```
current\_dir is where you initially created the project.
**Pro tip**: you always have to come back to this exact dir to use manage.py, so if you get that error again while you're making the polls app, you are probably just in the wrong directory. | You need to go to the directory that the app you created resides in, then run the command `manage.py runserver` on Windows or `python manage.py runserver` in a Unix terminal.
It is typical to create a separate directory for your Django projects. A typical directory would be:
```
C:\DjangoProjects\
```
You would then put the location of `django-admin.py` on your `PATH` in your command shell and run the startproject command, and the new project would be created in the current directory that you are in. If you have already created the project, you could also just cut and paste it to a different directory; that way your Django projects are not in the same directory as your Python / Django source code.
Either way, in the end go to the directory for the app you created, so:
```
C:\DjangoProjects\mysite\
```
and from that directory run the `manage.py runserver` command and this will start the `app` running on your local machine. | 16,675 |
27,692,051 | **Is there any way to disable the syntax highlighting in SublimeREPL-tabs when a script is running?**
Please see this question for context: [Red lines coming up after strings in SublimeREPL (python)?](https://stackoverflow.com/q/25693151/1426065)
For example, when python-scripts run in Sublime REPL, apostrophes (') in the output-text get highlighted as syntax.
Because of this, the last part of the line is highlighted as if the string **(which in fact is text-output and not actual code)** was not closed properly.
This is what the output looks like:

The highlighting is useful when Sublime REPL is running the interactive python shell, but when it should just run a script, I would like to get the text output without highlighting, like in any command-line interface.
Of course I could just run the scripts in the commandline, but it would be nice to keep all work focused in just one program.
Maybe there are settings for the different kinds of SublimeREPL environments (interactive, run from script, etc.) that could change this behaviour?
Thanks for any help! :) | 2014/12/29 | [
"https://Stackoverflow.com/questions/27692051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4183985/"
] | Go to
Sublime Text > Preferences > Package Settings > SublimeREPL > Settings - User
(If your 'Settings - User' is empty, first copy in the contents of 'Settings - Default')
under "repl\_view\_settings": add:
```
,
"syntax": "Packages/Text/Plain text.tmLanguage"
```
so mine is now:
```
// standard sublime view settings that will be overwritten on each repl view
// this has to be customized as a whole dictionary
"repl_view_settings": {
"translate_tabs_to_spaces": false,
"auto_indent": false,
"smart_indent": false,
"spell_check": false,
"indent_subsequent_lines": false,
"detect_indentation": false,
"auto_complete": true,
"line_numbers": false,
"gutter": false,
"syntax": "Packages/Text/Plain text.tmLanguage"
},
``` | As @joe.dawley wrote in the comments to the original question there is a way to manually disable syntax highlighting in SublimeREPL by using the go to anything-command **(Ctrl + Shift + P)** and enter **"sspl"** to set the syntax to plain text. | 16,676 |
44,549,369 | I am trying to calculate the Kullback-Leibler divergence from Gaussian#1 to Gaussian#2
I have the mean and the standard deviation for both Gaussians
I tried this code from <http://www.cs.cmu.edu/~chanwook/MySoftware/rm1_Spk-by-Spk_MLLR/rm1_PNCC_MLLR_1/rm1/python/sphinx/divergence.py>
```
def gau_kl(pm, pv, qm, qv):
"""
Kullback-Leibler divergence from Gaussian pm,pv to Gaussian qm,qv.
Also computes KL divergence from a single Gaussian pm,pv to a set
of Gaussians qm,qv.
Diagonal covariances are assumed. Divergence is expressed in nats.
"""
if (len(qm.shape) == 2):
axis = 1
else:
axis = 0
# Determinants of diagonal covariances pv, qv
dpv = pv.prod()
dqv = qv.prod(axis)
# Inverse of diagonal covariance qv
iqv = 1./qv
# Difference between means pm, qm
diff = qm - pm
return (0.5 *
(numpy.log(dqv / dpv) # log |\Sigma_q| / |\Sigma_p|
+ (iqv * pv).sum(axis) # + tr(\Sigma_q^{-1} * \Sigma_p)
+ (diff * iqv * diff).sum(axis) # + (\mu_q-\mu_p)^T\Sigma_q^{-1}(\mu_q-\mu_p)
- len(pm))) # - N
```
I use the mean and the standard deviation as input, but the last line of the code `(len(pm))` causes an error because the mean is one number and I don't understand the len function here.
Note: the two sets (i.e., Gaussians) are not equal; that's why I couldn't use scipy.stats.entropy. | 2017/06/14 | [
"https://Stackoverflow.com/questions/44549369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7879074/"
] | The following function computes the KL-Divergence between any two multivariate normal distributions (no need for the covariance matrices to be diagonal) (where numpy is imported as np)
```
def kl_mvn(m0, S0, m1, S1):
"""
Kullback-Liebler divergence from Gaussian pm,pv to Gaussian qm,qv.
Also computes KL divergence from a single Gaussian pm,pv to a set
of Gaussians qm,qv.
From wikipedia
KL( (m0, S0) || (m1, S1))
= .5 * ( tr(S1^{-1} S0) + log |S1|/|S0| +
(m1 - m0)^T S1^{-1} (m1 - m0) - N )
"""
    # store the inverse covariance of S1 and the difference between means
N = m0.shape[0]
iS1 = np.linalg.inv(S1)
diff = m1 - m0
# kl is made of three terms
tr_term = np.trace(iS1 @ S0)
det_term = np.log(np.linalg.det(S1)/np.linalg.det(S0)) #np.sum(np.log(S1)) - np.sum(np.log(S0))
    quad_term = diff.T @ iS1 @ diff  #np.sum( (diff*diff) * iS1, axis=1)
#print(tr_term,det_term,quad_term)
return .5 * (tr_term + det_term + quad_term - N)
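
# quick sanity check (shapes assumed: m is a 1-D mean vector, S a covariance
# matrix): the divergence of a Gaussian from itself should be 0, e.g.
#   kl_mvn(np.zeros(2), np.eye(2), np.zeros(2), np.eye(2))  ->  0.0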
``` | If you are still interested ...
That function expects the diagonal entries of the covariance matrices of the multivariate Gaussians, not standard deviations as you mention. If your inputs are univariate Gaussians, then both `pv` and `qv` are vectors of length 1 holding the variances of the corresponding Gaussians.
Besides, `len(pm)` corresponds to the dimension of the mean vectors. It is indeed **k** in the *Multivariate normal distributions* section [here](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence). For univariate Gaussians, **k** is 1, for bivariate ones **k** is 2, and so on. | 16,677 |
55,537,213 | I'm following Adrian Rosebrock's tutorial on recognising digits on an RPi, so no tesseract or whatever:
<https://www.pyimagesearch.com/2017/02/13/recognizing-digits-with-opencv-and-python/>
But it doesn't recognise decimal points, so I've been trying really hard to create a part that would help to do that. I think I've gotten close, but I'm not sure what I've done wrong.
This is my image after preprocessing
[](https://i.stack.imgur.com/fGfOw.png)
and this is what happens after the attempted recognising part
[](https://i.stack.imgur.com/1tCQi.png)
As you can see, I'm doing something wrong somewhere. I already tried tuning param1 and param2 in HoughCircles.
More examples:
[](https://i.stack.imgur.com/iAFAP.png)
[](https://i.stack.imgur.com/gUjUF.png)
Can anyone guide me on what I should do? I'm really lost here
================================================================
The images i'm using
[](https://i.stack.imgur.com/byjzJ.jpg)
[](https://i.stack.imgur.com/cZFZQ.jpg)
The code I'm using
```
from imutils.perspective import four_point_transform
from imutils import contours
import imutils
import cv2
import numpy
DIGITS_LOOKUP = {
# Old Library
#(1, 1, 1, 0, 1, 1, 1): 0, # same as new 8
(0, 0, 1, 0, 0, 1, 0): 1,
(1, 0, 1, 1, 1, 1, 0): 2,
(1, 0, 1, 1, 0, 1, 1): 3,
(0, 1, 1, 1, 0, 1, 0): 4,
(1, 1, 0, 1, 0, 1, 1): 5,
#(1, 1, 0, 1, 1, 1, 1): 6,
(1, 0, 1, 0, 0, 1, 0): 7,
(1, 1, 1, 1, 1, 1, 1): 8,
(1, 1, 1, 1, 0, 1, 1): 9,
# New Digital Library
(0, 0, 1, 1, 1, 0, 1): 0,
(1, 0, 1, 0, 0, 1, 1): 2,
(0, 0, 1, 1, 0, 1, 1): 4,
(0, 0, 0, 0, 0, 1, 1): 4,
(1, 1, 0, 0, 0, 1, 1): 5,
(1, 1, 0, 1, 1, 0, 1): 5,
(1, 0, 0, 0, 0, 1, 1): 5,
(1, 1, 1, 0, 0, 0, 0): 7,
(1, 1, 0, 1, 1, 1, 1): 8,
(1, 1, 1, 0, 1, 1, 1): 8
}
image = cv2.imread("10.jpg")
image = imutils.resize(image, height=100)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(blurred, 120, 255, 1)
cv2.imshow("1", edged)
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
displayCnt = None
for c in cnts:
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.02 * peri, True)
if len(approx) == 4:
displayCnt = approx
break
warped = four_point_transform(gray, displayCnt.reshape(4, 2))
output = four_point_transform(image, displayCnt.reshape(4, 2))
thresh = cv2.threshold(warped, 0, 255,
cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
cv2.imshow("2", thresh)
print(thresh.shape)
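# try to detect the decimal point as a small circle on the grayscale,
# perspective-corrected display; dp, minDist, param1/param2 and the radius
# bounds are the knobs being tuned here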
circles = cv2.HoughCircles(warped, cv2.HOUGH_GRADIENT, 7, 14, param1=0.1, param2=20, minRadius=3, maxRadius=7)
# ensure at least some circles were found
if circles is not None:
circles = numpy.round(circles[0, :]).astype("int")
for (x, y, r) in circles:
cv2.circle(output, (x, y), r, (0, 255, 0), 4)
cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
# show the output image
cv2.imshow("test", output)
cv2.waitKey(0)
``` | 2019/04/05 | [
"https://Stackoverflow.com/questions/55537213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2810806/"
] | If JSON is used to exchange data, it *must* use UTF-8 encoding (see [RFC8259](https://www.rfc-editor.org/rfc/rfc8259)). UTF-16 and UTF-32 encodings are no longer allowed. So it is not necessary to escape the degree character. And I strongly recommend against escaping unnecessarily.
*Correct and recommended*
```
{
"units": "°C"
}
```
Of course, you must apply a proper UTF-8 encoding.
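For example, in Python the standard json module handles this for you (a small sketch):
```
import json

print(json.dumps({"units": "°C"}, ensure_ascii=False))  # {"units": "°C"}
print(json.dumps({"units": "°C"}))                      # {"units": "\u00b0C"}
```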
If JSON is used in a closed ecosystem, you can use other text encodings (though I would recommend against it unless you have a very good reason). If you need to escape the degree character in your non-UTF-8 encoding, the correct escaping sequence is `\u00b0`.
*Possible but not recommended*
```
{
"units": "\u00b0C"
}
```
Your second approach is incorrect under all circumstances.
*Incorrect*
```
{
"units":"c2b0"
}
```
It is also incorrect to use something like "\xc2\xb0". This is the escaping used in C/C++ source code. It is also used by debuggers to display strings. In JSON, it is always invalid.
*Incorrect as well*
```
{
"units":"\xc2\xb0"
}
``` | JSON uses unicode to be encoded, but it is specified that you can use `\uxxxx` escape codes to represent characters that don't map into your computer native environment, so it's perfectly valid to include such escape sequences and use only plain ascii encoding to transfer JSON serialized data. | 16,678 |
69,045,992 | So I am trying to install and import pynput in VSCode but it's showing me an error every time I try. I used VSCode's built-in terminal to install it using pip and typed the following:
`pip install pynput` but this error is shown : `Fatal error in launcher: Unable to create process using '"c:\users\vicks\appdata\local\programs\python\python38-32\python.exe" "C:\Users\vicks\AppData\Local\Programs\Python\Python38-32\Scripts\pip.exe" install pynput': The system cannot find the file specified`
After receiving the following error, I tried using CMD to install it but the same error is shown. I also tried using `python pip install pynput` and it shows `Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.` even though I have python 3.9.7 and I have selected it as my interpreter in VSCode and I have IDLE(Python 64 bit) installed. How may I resolve the following error? Any help regarding the same is appreciated
Thanks in advance :) | 2021/09/03 | [
"https://Stackoverflow.com/questions/69045992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16225182/"
] | you need to set a default lang in case there is no preferredLanguages or error occurred like this
```
static String lang ='';
List? languages = [];
languages = await Devicelocale.preferredLanguages;
if(languages?.isNotEmpty ==true){
lang = languages[0] ?? "en";
}else{
lang = "en";
}
``` | You should add the bang `!` at the end `languages[0]!` to remove the nullability. | 16,679 |
51,775,370 | I'm running Airflow on a clustered environment running on two AWS EC2-Instances. One for master and one for the worker. The worker node though periodically throws this error when running "$airflow worker":
```
[2018-08-09 16:15:43,553] {jobs.py:2574} WARNING - The recorded hostname ip-1.2.3.4 does not match this instance's hostname ip-1.2.3.4.eco.tanonprod.comanyname.io
Traceback (most recent call last):
File "/usr/bin/airflow", line 27, in <module>
args.func(args)
File "/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py", line 387, in run
run_job.run()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 198, in run
self._execute()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 2527, in _execute
self.heartbeat()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 182, in heartbeat
self.heartbeat_callback(session=session)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 50, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 2575, in heartbeat_callback
raise AirflowException("Hostname of job runner does not match")
airflow.exceptions.AirflowException: Hostname of job runner does not match
[2018-08-09 16:15:43,671] {celery_executor.py:54} ERROR - Command 'airflow run arl_source_emr_test_dag runEmrStep2WaiterTask 2018-08-07T00:00:00 --local -sd /var/lib/airflow/dags/arl_source_emr_test_dag.py' returned non-zero exit status 1.
[2018-08-09 16:15:43,681: ERROR/ForkPoolWorker-30] Task airflow.executors.celery_executor.execute_command[875a4da9-582e-4c10-92aa-5407f3b46d5f] raised unexpected: AirflowException('Celery command failed',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 52, in execute_command
subprocess.check_call(command, shell=True)
File "/usr/lib64/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'airflow run arl_source_emr_test_dag runEmrStep2WaiterTask 2018-08-07T00:00:00 --local -sd /var/lib/airflow/dags/arl_source_emr_test_dag.py' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/dist-packages/celery/app/trace.py", line 382, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python3.6/dist-packages/celery/app/trace.py", line 641, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 55, in execute_command
raise AirflowException('Celery command failed')
airflow.exceptions.AirflowException: Celery command failed
```
When this error occurs the task is marked as failed on Airflow and thus fails my DAG when nothing actually went wrong in the task.
I'm using Redis as my queue and postgreSQL as my meta-database. Both are external as AWS services. I'm running all of this on my company environment which is why the full name of the server is `ip-1.2.3.4.eco.tanonprod.comanyname.io`. It looks like it wants this full name somewhere but I have no idea where I need to fix this value so that it's getting `ip-1.2.3.4.eco.tanonprod.comanyname.io` instead of just `ip-1.2.3.4`.
**The really weird thing about this issue is that it doesn't always happen.** It seems to just randomly happen every once in a while when I run the DAG. It's also occurring on all of my DAGs sporadically so it's not just one DAG. I find it strange though how it's sporadic because that means other task runs are handling the IP address for whatever this is just fine.
**Note:** I've changed the real IP address to 1.2.3.4 for privacy reasons.
**Answer:**
<https://github.com/apache/incubator-airflow/pull/2484>
This is exactly the problem I am having and other Airflow users on AWS EC2-Instances are experiencing it as well. | 2018/08/09 | [
"https://Stackoverflow.com/questions/51775370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3299397/"
] | The hostname is set when the task instance runs, and is set to `self.hostname = socket.getfqdn()`, where socket is the python package `import socket`.
The comparison that triggers this error is:
```
fqdn = socket.getfqdn()
if fqdn != ti.hostname:
logging.warning("The recorded hostname {ti.hostname} "
"does not match this instance's hostname "
"{fqdn}".format(**locals()))
raise AirflowException("Hostname of job runner does not match")
```
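A quick sketch for checking what both calls report on a given box (the output values shown are illustrative):
```
import socket

print(socket.gethostname())  # e.g. ip-1-2-3-4
print(socket.getfqdn())      # e.g. ip-1-2-3-4.eco.tanonprod.companyname.io
```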
It seems like the hostname on the EC2 instance is changing on you while the worker is running. Perhaps try manually setting the hostname as described here <https://forums.aws.amazon.com/thread.jspa?threadID=246906> and see if that sticks. | I had a similar problem on my Mac. Setting `hostname_callable = socket:gethostname` in `airflow.cfg` fixed it. | 16,682 |
55,337,221 | I'm trying to connect to another computer in the local network via Python (subprocess module) with these commands from CMD.exe
* `net use \\\\ip\C$ password /user:username`
* `copy D:\file.txt \\ip\C$`
Then in Python it looks like below.
But when I try the second command, I get:
>
> "FileNotFoundError: [WinError 2]"
>
>
>
Have you met the same problem?
Is there any way to fix it?
```
import subprocess as sp
code = sp.call(r'net use \\<ip>\C$ <pass> /user:<username>')
print(code)
sp.call(r'copy D:\file.txt \\<ip>\C$')
``` | 2019/03/25 | [
"https://Stackoverflow.com/questions/55337221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5299639/"
] | The issue is that `copy` is a built-in, not a real command in Windows.
Those Windows messages are awful, but `"FileNotFoundError: [WinError 2]"` doesn't mean one of source & destination files can't be accessed (if `copy` failed, you'd get a normal Windows message with explicit file names).
Here, it means that the *command* could not be accessed.
So you'd need to add `shell=True` to your subprocess call to gain access to built-ins.
But don't do that (security issues, non-portability), use `shutil.copy` instead.
As an aside, use `check_call` instead of `call` for your first command: if `net use` fails, the rest will fail too, and it's better to have an early failure.
To sum it up, here's what I would do:
```
import shutil
import subprocess as sp
sp.check_call(['net','use',r'\\<ip>\C$','password','/user:<username>'])
shutil.copy(r'D:\file.txt', r'\\<ip>\C$')
``` | You need to make sure you have the right to add a file.
I tested successfully after I corrected the shared directory's rights. | 16,685 |
50,777,013 | I am just a beginner with TensorFlow and am trying to install it with CPU support only.
Initially, I downloaded and installed Python 3.5.2 version from <https://www.python.org/downloads/release/python-352/>
After successful installation, I ran the command `pip3 install --upgrade tensorflow` which installed tensorflow-1.8.0.
To test the installation I just ran the following commands:
```
> python
> import tensorflow as tf
```
But this gave me an **error**:
>
> ImportError: Could not find 'msvcp140.dll'. TensorFlow requires that
> this DLL be installed in a directory that is named in your %PATH%
> environment variable. You may install this DLL by downloading Visual
> C++ 2015 Redistributable Update 3 from this URL:
> <https://www.microsoft.com/en-us/download/details.aspx?id=53587>
>
>
>
I searched for this issue and found link to an issue <https://github.com/tensorflow/tensorflow/issues/17393>.
According to the above, I tried running the command
```
pip install tensorflow==1.5
```
But this didn't solve my problem.
I even tried downloading ***msvcp140.dll*** and manually copying it under the ***C:\Windows\SysWOW64*** folder and reinstalling python and tensorflow.
How do I fix this problem?
Thanks in advance. | 2018/06/09 | [
"https://Stackoverflow.com/questions/50777013",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8141116/"
] | I copied ***msvcp140.dll*** to path ***C:\Users\PCName\AppData\Local\Programs\Python\Python35***
and it worked for me.
I also switched back to tensorflow 1.8 from 1.5. | You can download the package from the url <https://www.microsoft.com/en-us/download/details.aspx?id=53587> and install it. This will solve the issue. | 16,686 |
57,473,982 | In VS Code, for some reason, I cannot run any Python code because VS Code puts in `python` instead of `py` in cmd.
It shows this:
>
> [Running] python -u "c:\Users..."
>
>
>
but it is supposed to show this:
>
> [Running] py -u "c:\Users\
>
>
>
I have tried searching online for how to fix it; searching the error message:
**'python' is not recognized as an internal or external command,operable program or batch file.**
only comes up with useless answers.
```
import pygame
pygame.init()
screen = pygame.display.set_mode((360,360))
```
what is outputted:
>
> [Running] python -u "c:\Users..."
>
>
>
As you can see, it inputs the wrong command and I have no idea how to fix it.
The expected input:
>
> [Running] py -u "c:\Users..."
>
>
> | 2019/08/13 | [
"https://Stackoverflow.com/questions/57473982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11921290/"
] | Well, you can change the interpreter that Code uses by pressing `Ctrl+Shift+P` and then searching for `Python: Select Interpreter`; this should help when it comes to running the code in the IDE. If that doesn't work you could just try to use the built-in terminal in Code to run the code manually with the `py` command. | In VS Code debug mode I have launch.json as follows, and then I can easily debug the code with breakpoints:
```
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
    }]
}
``` | 16,689 |
20,047,117 | I have my code as below.
```
def test():
print num1
print num
num += 10
if __name__ == '__main__':
num = 0
num1 = 3
test()
```
When executing the above python code I get the following output.
```
3
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "<stdin>", line 2, in test
UnboundLocalError: local variable 'num' referenced before assignment
```
I do not know why `num` in particular is not available in the `test` method. It is very strange to me, and I have not faced this before.
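(For context, a minimal sketch of the usual fix — the augmented assignment is what makes `num` local unless it is declared global:)

```
def test():
    global num     # use the module-level binding for num
    print num1     # plain reads work even without a declaration
    print num
    num += 10      # this assignment is what made num local before
```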
Note: I am using python 2.7. | 2013/11/18 | [
"https://Stackoverflow.com/questions/20047117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1939389/"
] | If you do not want a direct child selector, just add a parent reference for the nested elements.
This will make it work.
You can add the below:
```
.red .blue h1 {
color: blue;
}
```
**[WORKING DEMO](http://jsfiddle.net/N7FcB/1/)**
To force your div to render the color blue, you just need to add the element reference that you are using before the class.
**For instance,**
```
div.blue h1 {
color: blue;
}
```
**[WORKING DEMO - 2](http://jsfiddle.net/N7FcB/4/)**
In both cases, it will work. | Or maybe like this:
```
.red > h1 {
color: red;
}
.blue h1 {
color: blue;
}
```
[fiddle](http://jsfiddle.net/sxVcL/3/).
This works 100%. | 16,690
42,742,519 | I am new to programming and was trying to create a program in python that creates a staircase with size based on the user input. The program should appear as shown below:

This is the code I have so far:
```
steps = int(input('How many steps? '))
print('__')
for i in range(steps):
n = steps+i
print(' '*(n-2) + '|_')
print('_'*n + '|')
```
This is what my output is:

Note - The horizontal lines are underscores '\_' and the vertical lines are the pipe character '|'. | 2017/03/12 | [
"https://Stackoverflow.com/questions/42742519",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7696748/"
] | To get the correct number of steps, you have to change the for loop to:
```
for i in range(steps-1):
```
This is because you want to print the `|_`'s one less time than there are steps; your "top" step `__` already counts as one step.
The whole thing (changed some other things to make the formatting better):
```
steps = int(input('How many steps? '))
print('__')
for i in range(steps-1):
print(' '*(2+2*i) + '|_')
print('_'*2*steps + '|')
```
Output:
[](https://i.imgur.com/vvCcI0h.png) | It is simpler to consider that `n` is the current step and given the step size (2) then you just need `2n` for your placement:
```
steps = 5
print('__')
for n in range(1, steps):
print(' '*n*2 + '|_')
print('_'*steps*2 + '|')
```
Output:
```
__
|_
|_
|_
|_
__________|
```
You can abstract this to take any step size, e.g.:
```
steps = 5
size = 4
print('_'*size)
for n in range(1, steps):
print(' '*n*size + '|' + '_'*(size-1))
print('_'*steps*size + '|')
```
Output:
```
____
|___
|___
|___
|___
____________________|
``` | 16,700 |
22,814,973 | I'm working on a Python application that requires database connections. I developed my application with sqlite3, but it started showing the error "the database is locked", so I decided to use a MySQL database instead, and it works well with no errors.
The only problem is that I would need to ask every user of my application to install a MySQL server on their PC (AppServ, for example).
So can I make MySQL work like sqlite3, as part of a Python library, so that I can produce a Python script that can be converted into an exe file by PyInstaller, with no need for users to install a MySQL server?
**update:**
After reviewing the code I found an opened connection that was not closed correctly; everything now works fine with sqlite3. Thank you, everybody. | 2014/04/02 | [
"https://Stackoverflow.com/questions/22814973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2980054/"
] | It depends (more "depends" in the answer).
If you need to share the data between the users of your application, you need a MySQL database server set up somewhere, and your application would need access to it. The performance can really depend on the network and on how heavily the application would use the database. The application itself would only need to know how to "speak" with the database server - via a Python MySQL driver, like `MySQLdb` or `pymysql`.
If you don't need to share the data between users, then `sqlite` may be an option. Or maybe not - it depends on what you want to store there, what for, and what you need to do with the data.
So, more questions than answers, probably it was more suitable for a comment. At least, think about what I've said.
Also see:
* <https://stackoverflow.com/questions/1009438/which-database-should-i-use-for-my-desktop-application>
* [Python Desktop Application Database](https://stackoverflow.com/questions/15271764/python-desktop-application-database)
* [Python Framework for Desktop Database Application](https://stackoverflow.com/questions/3255665/python-framework-for-desktop-database-application)
Hope that helps. | If your application is a stand-alone system such that each user maintains their own private database, then you have no alternative but to install MySQL on each system that runs the application. ~~You cannot bundle MySQL into your application such that it does not require a separate installation.~~
There is an embedded version of MySQL that you can build into your application (**thanks, Carsten, in the comments, for pointing this out**). More information is here: <http://mysql-python.blogspot.com/>. It may take some effort to get this working (on Windows you apparently need to build it from source code) and will take some more work to get it packaged up when you generate your executable, but this might be a MySQL solution for you.
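Relatedly, "database is locked" problems with SQLite are often just connection handling, as the next paragraph describes; a minimal sqlite3 sketch of the open-late/close-early pattern (the table and column names are made up):

```
import sqlite3

def add_user(db_path, name):
    con = sqlite3.connect(db_path, timeout=5.0)  # wait up to 5s for a lock
    try:
        with con:  # commits on success, rolls back on error
            con.execute("INSERT INTO users(name) VALUES (?)", (name,))
    finally:
        con.close()  # always release the connection promptly
```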
I've just finished updating a web application using SQLite which had begun reporting Database is locked errors as the usage scaled up. By rewriting the database code with care I was able to produce a system that can handle moderate to heavy usage (in the context of a 15 person company) reliably still using SQLite -- you have to be careful to keep your connections around for the minimum time necessary and always call .close() on them. If your application is really single-user you should have no problem supporting it using SQLite -- and that's doubly true if it's single-threaded. | 16,704 |
51,583,196 | I am learning Python/Django and developing a website, but I am struggling with a URL pattern.
I am sharing my code for the URL pattern; I don't understand where I am going wrong.
url.py
```
urlpatterns = [
url(r'^$',views.IndexView.as_view(),name='index'),
# /music/id/
url(r'^picture/(?P<pk>[0-9]+)$',views.DetailView.as_view(),name='detail'),
#for PictureDetail view
url(r'^detail/(?P<pk>[0-9]+)/(?P<alb_title>[\w%20+A-Za-z]+)/(?P<song_title>[\w%20+A-Za-z]+)$', views.PicturedetailView.as_view(), name='picturedetail'),
]
```
My Detail.html:-
```
<ul>
{% for picture in album.picture_set.all %}
<div class="col-sm-4 col-lg-2">
<div class="thumbnail">
<a href="{% url 'music:picturedetail' pk=picture.pk alb_title=picture.album.album_title song_title=picture.song_title %}">
<img src="{{ picture.file_type.url }}" class="img-responsive">
</a>
<div class="caption">
<h6>{{picture.song_title}}</h6>
</div>
</div>
</div>
{% endfor %}
</ul>
{% endblock %}
```
I am passing three parameters: one is an id and the other two are strings. I also updated my HTML href pattern, but I am getting the below error:
```
Reverse for 'picturedetail' with keyword arguments '{'pk': 3, 'alb_title': 'Beautiful River', 'song_title': 'River'}' not found. 1 pattern(s) tried: ['music/detail/(?P<pk>[0-9]+)/(?P<alb_title>[\\w%20+A-Za-z]+)/(?P<song_title>[\\w%20+A-Za-z]+)$']
```
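(For what it's worth, inside a character class `%20` is just the three characters `%`, `2` and `0`, so the pattern above never matches the literal space in "Beautiful River"; a hedged sketch of a pattern that allows spaces:)

```
url(r'^detail/(?P<pk>[0-9]+)/(?P<alb_title>[\w ]+)/(?P<song_title>[\w ]+)$',
    views.PicturedetailView.as_view(), name='picturedetail'),
```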
Thank you in Advance | 2018/07/29 | [
"https://Stackoverflow.com/questions/51583196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6246189/"
] | [](https://i.stack.imgur.com/xtW0A.png)
You can try this way:
```
floatingActionButton: FloatingActionButton(
onPressed: () {},
backgroundColor: Colors.red,
elevation: 0,
child: Container(
decoration: BoxDecoration(
color: Colors.transparent,
borderRadius: BorderRadius.all(
Radius.circular(100),
),
boxShadow: [
BoxShadow(
color: Colors.purple.withOpacity(0.3),
spreadRadius: 7,
blurRadius: 7,
offset: Offset(3, 5),
),
],
),
),
),
``` |
```
floatingActionButton: FloatingActionButton(
    onPressed: (){},
    backgroundColor: Color(0xf0004451),
    elevation: 10,
child: Container(
padding: const EdgeInsets.all(14.0),
decoration: BoxDecoration(
color: Colors.transparent,
borderRadius: BorderRadius.all(
Radius.circular(60),
),
boxShadow: [
BoxShadow(
color: Color(0xffE1E8EB).withOpacity(0.35),
spreadRadius: 8,
blurRadius: 8,
offset: const Offset(1, 1),
),
],
),
child: const Icon(Icons.add,
color: Color(0xffE1E8EB),
size: 18,
shadows: [Shadow(
color: Color(0xffE1E8EB),
offset: Offset(0.2, 0.5),
blurRadius: 5.0,
)],
),
),
),
``` | 16,705 |
15,832,700 | I am a newbie in Python and I am trying to launch a Python script with a module written in C. I am getting a "Segmentation fault (core dumped)" error when I try to launch the Python script.
Here is the C code:
```
// input_device.c
#include "Python.h"
#include "input.h"
static PyObject* input_device_open(PyObject* self, PyObject* id)
{
int fd, nr;
PyObject* pyfd;
if (!PyInt_Check(id))
return NULL;
nr = (int)PyInt_AsLong(id);
fd = device_open(nr, 0);
if (fd == -1)
return NULL;
pyfd = PyInt_FromLong(fd);
Py_INCREF(pyfd);
return pyfd;
}
static PyMethodDef module_methods[] =
{
{ "device_open", (PyCFunction)input_device_open, METH_VARARGS, "..." },
{ NULL, NULL, 0, NULL }
};
PyMODINIT_FUNC initinput_device(void)
{
Py_InitModule4("input_device", module_methods, "wrapper methods", 0, PYTHON_API_VERSION);
}
```
and the python script:
```
from input_device import device_open
device_open(1)
```
Could someone take a look and point me in the right direction as to what I am doing wrong? Thanks in advance. | 2013/04/05 | [
"https://Stackoverflow.com/questions/15832700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1904234/"
] | Is it legitimate to return `NULL` without setting an exception, or making sure that one has been set by a function you have called? I thought that `NULL` was a signal that Python could go look for an exception to raise for the user.
I am not sure that the `Py_INCREF(pyfd);` is necessary; doesn't the object already have a refcount of 1 upon creation? | Your function receives a tuple of arguments. You need to extract the integer from the tuple:
```
static PyObject* input_device_open(PyObject* self, PyObject* args)
{
int fd, nr;
PyObject* pyfd;
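    /* METH_VARARGS delivers the Python-level arguments as a tuple; unpack the int from it */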
if (!PyArg_ParseTuple(args, "i", &nr))
return NULL;
``` | 16,707 |
54,366,675 | I have a list like below:
```
[3,2,4,5]
```
and I want a list like below:
```
[['1','2','3'],['1','2'],['1','2','3','4'],['1','2','3','4','5']]
```
I mean, I want a list that is created from the counts in another list,
with each element converted to a string.
How can I write this in Python?
I tried this code:
```
for i,val in enumerate(list_mf_input):
self.eachMembership.append([])
for value in range(1, val+1):
self.eachMembership.append([value])
``` | 2019/01/25 | [
"https://Stackoverflow.com/questions/54366675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6655937/"
] | You can use `range` with `list`, and list comprehension:
```
>>> a = [3, 2, 4, 5]
>>> [list(range(1, x+1)) for x in a]
[[1, 2, 3], [1, 2], [1, 2, 3, 4], [1, 2, 3, 4, 5]]
```
And to make all strings, add `map` with `str`:
```
>>>[list(map(str, range(1, x+1))) for x in a]
[['1', '2', '3'], ['1', '2'], ['1', '2', '3', '4'], ['1', '2', '3', '4', '5']]
``` | Try this code. I tried to make it as easy as possible:
```
lol=[3,2,4,5]
ans=[]
temp=[]
for i in lol:
for j in range(1,i+1):
temp.append(j)
ans.append(temp)
temp=[]
print(ans)
```
Hope it helps | 16,708 |
42,183,476 | Can someone please help me with the python equivalent of the curl command:
python equivalent of `curl -X POST -F "name=blahblah" -F "file=@blahblah.jpg"`
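(For reference, one possible `requests` sketch of that command — the URL is a made-up placeholder, since the curl line omits it:)

```
import requests

# multipart/form-data POST, equivalent to the two -F fields
with open("blahblah.jpg", "rb") as f:
    resp = requests.post(
        "http://example.com/upload",   # hypothetical endpoint
        data={"name": "blahblah"},     # plain form field
        files={"file": f},             # file field, like file=@blahblah.jpg
    )
print(resp.status_code)
```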
I would like to use the Python requests module, but I am not clear on the options to use. | 2017/02/12 | [
"https://Stackoverflow.com/questions/42183476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7552038/"
] | It depends on how you are going to use this reference.
1) There is no straight way to get component DOM reference within template:
```
import {Directive, Input, ElementRef, EventEmitter, Output, OnInit} from '@angular/core';
@Directive({selector: '[element]', exportAs: 'element'})
export class NgElementRef implements OnInit
{
@Output()
public elementChange:EventEmitter<any> = new EventEmitter<any>();
public elementRef:ElementRef;
constructor(elementRef:ElementRef)
{
this.elementRef = elementRef;
this.elementChange.next(undefined);
}
@Input()
public get element():any
{
return this.elementRef.nativeElement;
}
public set element(value:any)
{
}
ngOnInit():void
{
this.elementChange.next(this.elementRef.nativeElement);
}
}
```
Usage:
```
<my-comp [(element)]="var2"></my-comp>
<p>{{var2}}</p>
<!--or-->
<my-comp element #var2="element"></my-comp>
<p>{{var2.element}}</p>
```
2) You can get this reference in component that owns template with `@ViewChild('var2', {read: ElementRef})`. | As of Angular 8, the following provides access to the ElementRef and native element.
```
/**
* Export the ElementRef of the selected element for use with template references.
*
* @example
* <button mat-button #button="appElementRef" appElementRef></button>
*/
@Directive({
selector: '[appElementRef]',
exportAs: 'appElementRef'
})
export class ElementRefDirective<T> extends ElementRef<T> {
constructor(elementRef: ElementRef<T>) {
super(elementRef.nativeElement);
}
}
``` | 16,709 |
43,192,626 | I'm new to pandas & numpy. I'm running a simple program
```
labels = ['a','b','c','d','e']
s = Series(randn(5),index=labels)
print(s)
```
getting the following error
```
  s = Series(randn(5),index=labels)
  File "C:\Python27\lib\site-packages\pandas\core\series.py", line 243, in __init__
    raise_cast_failure=True)
  File "C:\Python27\lib\site-packages\pandas\core\series.py", line 2950, in _sanitize_array
    raise Exception('Data must be 1-dimensional')
Exception: Data must be 1-dimensional
```
Any idea what the issue could be? I'm trying this using Eclipse, not an IPython notebook. | 2017/04/03 | [
"https://Stackoverflow.com/questions/43192626",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1385691/"
] | I suspect you have your imports wrong.
If you add this to your code:
```
from pandas import Series
from numpy.random import randn
labels = ['a','b','c','d','e']
s = Series(randn(5),index=labels)
print(s)
a 0.895322
b 0.949709
c -0.502680
d -0.511937
e -1.550810
dtype: float64
```
It runs fine.
That said, and as pointed out by @jezrael, it's better practice to import the modules rather than pollute the namespace.
Your code should look like this instead.
***solution***
```
import pandas as pd
import numpy as np
labels = ['a','b','c','d','e']
s = pd.Series(np.random.randn(5),index=labels)
print(s)
``` | It seems you need [`numpy.random.rand`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.rand.html) for random `floats` or [`numpy.random.randint`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randint.html) for random `integers`:
```
import pandas as pd
import numpy as np
np.random.seed(100)
labels = ['a','b','c','d','e']
s = pd.Series(np.random.randn(5),index=labels)
print(s)
a -1.749765
b 0.342680
c 1.153036
d -0.252436
e 0.981321
dtype: float64
```
---
```
np.random.seed(100)
labels = ['a','b','c','d','e']
s = pd.Series(np.random.randint(10, size=5),index=labels)
print(s)
a 8
b 8
c 3
d 7
e 7
dtype: int32
``` | 16,710 |
21,729,196 | I've got the following dictionary:
```py
d = {
'A': {
'param': {
'1': {
'req': True,
},
'2': {
'req': True,
},
},
},
'B': {
'param': {
'3': {
'req': True,
},
'4': {
'req': False,
},
},
},
}
```
I want to have a generator which will give me, for each first-level key, the required parameters.
```py
req = {}
for key in d:
req[key] = (p for p in d[key]['param'] if d[key]['param'][p].get('req', False))
```
So here, for each key in `d`, I get parameter `p` only if `req` is `True`.
However, when I try to use my generator, it raises a `KeyError` exception:
```py
>>> req
{'A': <generator object <genexpr> at 0x27b8960>,
'B': <generator object <genexpr> at 0x27b8910>}
>>> for elem in req['A']:
... print elem
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-6-a96226f95cce> in <module>()
----> 1 for elem in req['A']:
2 print elem
3
<ipython-input-4-1732088ccbdb> in <genexpr>((p,))
1 for key in d:
----> 2 req[key] = (p for p in d[key]['param'] if d[key]['param'][p].get('req', False))
3
KeyError: '1'
``` | 2014/02/12 | [
"https://Stackoverflow.com/questions/21729196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2047097/"
] | The generator expressions you assign to `req[key]` binds on the `key` variable. But `key` changes from 'A' to 'B' in the loop. When you iterate over the first generator expression, it will evaluate `key` to 'B' in its `if` condition, even though `key` was 'A' when you created it.
The conventional way to bind to a variable's value and not its reference, is to wrap the expression in a lambda with a default value, and then call it immediately.
```
for key in d:
req[key] = (lambda key=key: (p for p in d[key]['param'] if d[key]['param'][p].get('req', False)))()
```
Result:
```
1
2
``` | This is because upon execution of the generator, the *latest* value of `key` is used.
Suppose the `for key in d:` iterates over the keys in the order `'A', 'B'`, the 1st generator is supposed to work with `key = 'A'`, but due to closure issues, it uses the item with `'B'` as key. And this has no `'1'` sub-entry.
Even worse, the `key` variable in the generator has two different values: the `for p in d[key]['param']` part uses the "correct" value, while the `if d[key]['param'][p].get('req', False)` uses the "closure value", which is the last one. | 16,711 |
30,871,488 | The familiar pythonic slicing conventions of `myList[-1:][0]` and `myList[-1]` are not available for Mongoengine listFields because it does not support negative indices. Is there an elegant way to get the last element of a list?
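(For reference, a hedged workaround sketch — both patterns assume a standard mongoengine QuerySet; the original error follows below:)

```
# index from the front using the count (extra query, but no negative index)
last = qs[qs.count() - 1]

# or flip the ordering and take the first document
last = qs.order_by('-id').first()
```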
Error verbiage for posterity:
>
> `IndexError: Cursor instances do not support negative indices`
>
>
> | 2015/06/16 | [
"https://Stackoverflow.com/questions/30871488",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2355364/"
] | You need to use [$location.path()](https://docs.angularjs.org/api/ng/service/$location)
```
// given url http://blo.c/news
var path = $location.path();
// => "/news"
```
If you are using HTML5 mode you must ensure [$locationProvider.html5Mode(true)](https://docs.angularjs.org/guide/$location#html5-mode) is set so `$location` works properly.
If you are not using HTML5 mode (which is the case here); then you'll need to drop to [traditional javascript](https://developer.mozilla.org/en-US/docs/Web/API/Window/location) to get the URL, since you are not using Angular routing in the first place:
```
// given url http://blo.c/news
var path = window.location.pathname;
// => "/news"
```
You might choose to inject [$window](https://docs.angularjs.org/api/ng/service/$window) instead of using `window` directly, this is only a thin wrapper over the native `window` object but facilitates testing. | Use the `$location.path` function to get the url. To get what's after the url, use `split`
```
$location.path().split('/')[1]
``` | 16,712 |
54,701,639 | I have a python operator in my DAG. The python callable function is returning a bool value. But, when I run the DAG, I get the below error.
>
> TypeError: 'bool' object is not callable
>
>
>
I modified the function to return nothing but then again I keep getting the below error
>
> ERROR - 'NoneType' object is not callable
>
>
>
Below is my DAG:
```
def check_poke(threshold,sleep_interval):
flag=snowflake_poke(1000,10).poke()
#print(flag)
return flag
dependency = PythonOperator(
task_id='poke_check',
#python_callable=check_poke(129600,600),
provide_context=True,
python_callable=check_poke(129600,600),
dag=dag)
end = BatchEndOperator(
queue=QUEUE,
dag=dag)
start.set_downstream(dependency)
dependency.set_downstream(end)
```
I am not able to figure out what I am missing. Can someone help me out with this? I am fairly new to Airflow.
I edited the python operator in the dag as below
```
dependency = PythonOperator(
task_id='poke_check',
provide_context=True,
python_callable=check_poke(129600,600),
dag=dag)
```
But now, I get a different error.
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/airflow/models.py", line 1245, in run
result = task_copy.execute(context=context)
File "/usr/local/lib/python2.7/dist-packages/airflow/operators/python_operator.py", line 66, in execute
return_value = self.python_callable(*self.op_args, **self.op_kwargs)
TypeError: () takes no arguments (25 given)
[2019-02-15 05:30:25,375] {models.py:1298} INFO - Marking task as UP_FOR_RETRY
[2019-02-15 05:30:25,393] {models.py:1327} ERROR - () takes no arguments (25 given)
``` | 2019/02/15 | [
"https://Stackoverflow.com/questions/54701639",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4017926/"
] | The argument name gives it away. You are passing the result of a call rather than a callable.
```
python_callable=check_poke(129600,600)
```
The second error states that the callable is called with 25 arguments (with `provide_context=True`, Airflow passes the task context to the callable as keyword arguments). So a bare `lambda:` won't work. The following would work, but silently ignoring 25 arguments is really questionable.
```
python_callable=lambda *args, **kwargs: check_poke(129600,600)
``` | Agree with **@Dan D.** for the issue; but it's perplexing why his solution didn't work (it certainly works in `python` *shell*)
See if this finds you any luck (its just verbose variant of **@Dan D.**'s solution)
```
from typing import Callable
# your original check_poke function
def check_poke(arg_1: int, arg_2: int) -> bool:
# do something
# somehow returns a bool
return arg_1 < arg_2
# a function that returns a callable, that in turn invokes check_poke
# with the supplied params
def check_poke_wrapper_creator(arg_1: int, arg_2: int) -> Callable[[], bool]:
def check_poke_wrapper() -> bool:
return check_poke(arg_1=arg_1, arg_2=arg_2)
return check_poke_wrapper
..
# usage
python_callable=check_poke_wrapper_creator(129600, 600)
``` | 16,715 |
29,829,470 | I'm trying to get range-rings on my map, with the position of the image above the user's location, but the map doesn't appear when I test it and the user's location doesn't seem to show up on the map. I don't know what went wrong, I followed a tutorial on a website.
This is the code:
```html
<!DOCTYPE html>
<html>
<head>
<title>Radar</title>
<meta http-equiv="refresh" content="300">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script>
<style>
#map-canvas {
height: 700px;
}
#logo {
position: fixed;
z-index: 99;
top: 8%;
left: 3%;
opacity: 0.9;
}
#legenda {
position: absolute;
z-index: 98;
top: 87%;
left: 82%;
opacity: 1.0;
height: 50px;
}
</style>
</head>
<body style="overflow: hidden;">
<div id="map-canvas"></div>
<script src="http://code.jquery.com/jquery-latest.js"></script>
<script src="http://maps.googleapis.com/maps/api/js?sensor=false"></script>
<script type="text/javascript">
// [START region_initialization]
// This example creates a custom overlay called USGSOverlay, containing
// a U.S. Geological Survey (USGS) image of the relevant area on the map.
// Set the custom overlay object's prototype to a new instance
// of OverlayView. In effect, this will subclass the overlay class.
// Note that we set the prototype to an instance, rather than the
// parent class itself, because we do not wish to modify the parent class.
var overlay;
USGSOverlay.prototype = new google.maps.OverlayView();
// Initialize the map and the custom overlay.
function initialize() {
if (navigator.geolocation) {
navigator.geolocation.getCurrentPosition(success);
} else {
error('Geo Location is not supported');
}
function success(position) {
var lat = position.coords.latitude;
var long = position.coords.longitude;
}
var coords = new google.maps.LatLng(position.coords.latitude, position.coords.longitude);
var styles =
[ { "featureType": "administrative", "elementType": "geometry.fill", "stylers": [ { "lightness": -88 }, { "saturation": -100 }, { "visibility": "on" } ] },{ "featureType": "administrative.country", "stylers": [ { "visibility": "on" }, { "weight": 1.3 }, { "lightness": 100 } ] },{ "featureType": "landscape", "stylers": [ { "saturation": -100 }, { "lightness": -81 } ] },{ "featureType": "poi", "stylers": [ { "visibility": "off" } ] },{ "featureType": "road.local", "stylers": [ { "visibility": "off" } ] },{ "featureType": "road.arterial", "elementType": "geometry", "stylers": [ { "visibility": "off" } ] },{ "featureType": "road.highway.controlled_access", "stylers": [ { "visibility": "simplified" } ] },{ "featureType": "road.highway.controlled_access", "elementType": "geometry.fill", "stylers": [ { "visibility": "simplified" }, { "saturation": -100 }, { "lightness": 100 }, { "weight": 1.3 } ] },{ "featureType": "road.highway", "stylers": [ { "visibility": "simplified" } ] },{ "featureType": "administrative.locality", "elementType": "labels", "stylers": [ { "lightness": 100 }, { "visibility": "simplified" } ] },{ "featureType": "road.highway", "elementType": "geometry.fill", "stylers": [ { "visibility": "simplified" }, { "lightness": 100 } ] },{ "featureType": "administrative.province", "elementType": "geometry.stroke", "stylers": [ { "lightness": 100 }, { "saturation": -100 } ] },{ "featureType": "administrative.locality", "elementType": "labels.icon", "stylers": [ { "visibility": "off" } ] },{ "featureType": "road", "elementType": "geometry", "stylers": [ { "lightness": -65 }, { "saturation": 1 }, { "hue": "#0000ff" } ] },{ "featureType": "water", "stylers": [ { "saturation": -53 }, { "lightness": -36 }, { "hue": "#00f6ff" } ] },{ "featureType": "landscape", "stylers": [ { "lightness": -39 } ] },{ } ]
// Create a new StyledMapType object, passing it the array of styles,
// as well as the name to be displayed on the map type control.
var styledMap = new google.maps.StyledMapType(styles,
{name: "Styled Map"});
var mapOptions = {
zoom: 7,
center: coords,
panControl: false,
zoomControl: false,
mapTypeControl: false,
streetViewControl: false,
mapTypeIds: [google.maps.MapTypeId.ROADMAP, 'map_style']
};
var map = new google.maps.Map(document.getElementById("map-canvas"), mapOptions);
map.mapTypes.set('map_style', styledMap);
map.setMapTypeId('map_style');
var icon = {
url: 'met.nl.eu.org/klanten/python/marker.png'
};
var marker = new google.maps.Marker({
position: coords,
map: map,
icon: icon
});
// Query and plot the data once the map is ready
google.maps.event.addListenerOnce(map, 'idle', function() {
$.ajax({
dataType: "json",
url: "http://met.nl.eu.org/klanten/python/get_lightnings.php"
}).done(function(data) {
data.d.forEach(function(lightning) {
var image = 'http://met.nl.eu.org/klanten/python/strike.png'
new google.maps.Marker({
position: new google.maps.LatLng(lightning.lat, lightning.lon),
map: map,
icon: image });
});
});
});
var swBound = new google.maps.LatLng(48.895311, 0.000000);
var neBound = new google.maps.LatLng(55.973607, 10.856428);
var bounds = new google.maps.LatLngBounds(swBound, neBound);
// The photograph is courtesy of MetNL.
var srcImage = 'v2.0/historie/28-06-11/00.png';
// The custom USGSOverlay object contains the USGS image,
// the bounds of the image, and a reference to the map.
overlay = new USGSOverlay(bounds, srcImage, map);
}
// [END region_initialization]
// [START region_constructor]
/** @constructor */
function USGSOverlay(bounds, image, map) {
// Initialize all properties.
this.bounds_ = bounds;
this.image_ = image;
this.map_ = map;
// Define a property to hold the image's div. We'll
// actually create this div upon receipt of the onAdd()
// method so we'll leave it null for now.
this.div_ = null;
// Explicitly call setMap on this overlay.
this.setMap(map);
}
// [END region_constructor]
// [START region_attachment]
/**
* onAdd is called when the map's panes are ready and the overlay has been
* added to the map.
*/
USGSOverlay.prototype.onAdd = function() {
var div = document.createElement('div');
div.style.borderStyle = 'none';
div.style.borderWidth = '0px';
div.style.position = 'absolute';
// Create the img element and attach it to the div.
var img = document.createElement('img');
img.src = this.image_;
img.style.width = '100%';
img.style.height = '100%';
img.style.position = 'absolute';
div.appendChild(img);
this.div_ = div;
// Add the element to the "overlayLayer" pane.
var panes = this.getPanes();
panes.overlayLayer.appendChild(div);
};
// [END region_attachment]
// [START region_drawing]
USGSOverlay.prototype.draw = function() {
// We use the south-west and north-east
// coordinates of the overlay to peg it to the correct position and size.
// To do this, we need to retrieve the projection from the overlay.
var overlayProjection = this.getProjection();
// Retrieve the south-west and north-east coordinates of this overlay
// in LatLngs and convert them to pixel coordinates.
// We'll use these coordinates to resize the div.
var sw = overlayProjection.fromLatLngToDivPixel(this.bounds_.getSouthWest());
var ne = overlayProjection.fromLatLngToDivPixel(this.bounds_.getNorthEast());
// Resize the image's div to fit the indicated dimensions.
var div = this.div_;
div.style.left = sw.x + 'px';
div.style.top = ne.y + 'px';
div.style.width = (ne.x - sw.x) + 'px';
div.style.height = (sw.y - ne.y) + 'px';
};
// [END region_drawing]
// [START region_removal]
// The onRemove() method will be called automatically from the API if
// we ever set the overlay's map property to 'null'.
USGSOverlay.prototype.onRemove = function() {
this.div_.parentNode.removeChild(this.div_);
this.div_ = null;
};
// [END region_removal]
google.maps.event.addDomListener(window, 'load', initialize);
</script>
<img src="http://met.nl.eu.org/NL_nl/iframe/logo.png" id="logo"/>
<img src="http://met.nl.eu.org/klanten/python/legenda.png" id="legenda"/>
</body>
</html>
```
What went wrong with my code?
**Edit:** I now know the fault occurs in this portion of the code: `var coords = new google.maps.LatLng(position.coords.latitude, position.coords.longitude);`
It gives a reference error saying "position is not defined", while earlier in the code I've written this: `function success(position) { var lat = position.coords.latitude; var long = position.coords.longitude; }` | 2015/04/23 | [
"https://Stackoverflow.com/questions/29829470",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3832981/"
] | geolocation runs asynchronously.
You may either create the map/marker when it returns a result or define a default-coordinate and update map/marker when it returns a result.
The 2nd approach is preferable, because you wouldn't get a map at all when geolocation fails.
A simple implementation using a MVCObject, which makes it easy to 1. access the value and 2. observe changes(I've removed the irrelevant parts):
```
function initialize() {
//define a default-position
var coords = new google.maps.MVCObject();
coords.set('latlng', new google.maps.LatLng(52.370215, 4.895167));
if (navigator.geolocation) {
navigator.geolocation.getCurrentPosition(success);
}
//set new value for coords
function success(position) {
coords.set('latlng',
new google.maps.LatLng(position.coords.latitude,
position.coords.longitude));
}
var mapOptions = {
zoom: 7,
center: coords.get('latlng')
};
var map = new google.maps.Map(document.getElementById("map-canvas"),
mapOptions);
var marker = new google.maps.Marker({
position: coords.get('latlng'),
map: map
});
//observe the latlng-property of coords,
//and update marker and map-center when it changes
google.maps.event.addListenerOnce(coords, 'latlng_changed', function () {
var latlng = this.get('latlng');
map.setCenter(latlng);
marker.setPosition(latlng)
});
}
google.maps.event.addDomListener(window, 'load', initialize);
```
Demo: [**http://jsfiddle.net/doktormolle/rttvLsLs/**](http://jsfiddle.net/doktormolle/rttvLsLs/) | I think you should include your Google Api key.
Try to add the script below :
```
<script async defer src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap"
type="text/javascript"></script>
``` | 16,717 |
53,405,006 | I am trying to set up a Dockerfile for my project and am unsure how to set a JAVA\_HOME within the container.
```
FROM python:3.6
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Define environment variable
ENV NAME Netflow
# Run netflow.py
CMD ["python", "netflow.py"]
```
In the requirements.txt I have...
```
numpy
pandas
kafka
pyspark
log
```
My netflow.py file is...
```
import pandas, math, re, log
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
conf = SparkConf().setAppName("building a warehouse")
sc = SparkContext(conf=conf)
df=pandas.read_csv(r'TestDataSet.csv')
```
The output in the terminal after trying to run it is....
```
JAVA_HOME is not set
Traceback (most recent call last):
File "netflow.py", line 7, in <module>
sc = SparkContext(conf=conf)
File "/usr/local/lib/python3.6/site-packages/pyspark/context.py", line 115, in __init__
SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
File "/usr/local/lib/python3.6/site-packages/pyspark/context.py", line 298, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway(conf)
File "/usr/local/lib/python3.6/site-packages/pyspark/java_gateway.py", line 94, in launch_gateway
raise Exception("Java gateway process exited before sending its port number")
```
I have been looking for a solution but none have worked so far.
I have tried
```
ENV JAVA_HOME /Library/Java/JavaVirtualMachines/openjdk-11.jdk/Contents/Home
```
and I have tried using a separate command
```
docker run -e "JAVA_HOME=/Library/Java/JavaVirtualMachines/openjdk-11.jdk/Contents/Home" project env
```
I am still getting the same error | 2018/11/21 | [
"https://Stackoverflow.com/questions/53405006",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6465715/"
] | You need to actually install Java inside your container, but I would suggest rather finding a Pyspark docker image, or adding Python to the Openjdk images so that you don't need to mess with too many environment variables
More specifically, `JAVA_HOME=/Library/Java/JavaVirtualMachines` is a only available as a path to your Mac, and shouldn't be expected to work inside a Linux container
However, it's not clear why you need Pyspark when numpy is the only thing actually reading your data | To set environment variables, you can declare them in your dockerfile like so:
```
ENV JAVA_HOME="foo"
```
or
```
ENV JAVA_HOME foo
```
In fact, you already set an environment variable in the example you posted.
See [documentation](https://docs.docker.com/engine/reference/builder/#env) for more details. | 16,718 |
49,564,238 | I have the below piece of Python code, which I am using to get the component names of a JIRA issue. Some issues have a single value in the component field and some have multiple values. My issue is that the component field can have values with different names, e.g. "R ABC 1.1", "R Aiapara 2.3A1(Active)", etc. I don't want to do it the way I am doing in the code below. Is there any way I can extract only the numeric value from the component? From the component "R ABC 1.1" I need 1.1, and for the second component "R Aiapara 2.3A1(Active)" I need 2.3, so that I would not need to depend on the name of the component.
```
for version in issue["fields"]["components"]:
cacheData = json.dumps(version)
jsonToPython = json.loads(cacheData)
if jsonToPython['name'][:10] == "R Aiapara ":
allModules.append(jsonToPython["name"][10:])
print allModules
```
Below is the output I am getting
```
Retrieving list of issues
Processing SPTN-2
[u'1.6']
Processing SPTN-1
[u'1.5']
[u'1.5', u'1.6']
``` | 2018/03/29 | [
"https://Stackoverflow.com/questions/49564238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1513848/"
] | Using regex:
```
import re
s1 = "R ABC 4.4"
s2 = "R Ciapara 4.4A1(Active)"
print(re.findall(r"\d+\.\d+", s1))
print(re.findall(r"\d+\.\d+", s2))
```
**Output:**
```
['4.4']
['4.4']
``` | I feel like I am not quite understanding your question, so I will try to answer as best I can, but feel free to correct me if I get anything wrong.
This function will get all the numbers from the string in a list:
```
def getNumber(string):
numbers = ".0123456789"
result = []
isNumber = False
for i in string:
if (i in numbers and isNumber):
result[-1] += i
        elif i in numbers:  # was `elif i in result:` in the original post, which never matches anything
result+= [i]
isNumber = True
else:
isNumber = False
return result
```
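With that one-character fix in place, a quick check against the question's examples (note that the stray `'1'` from `'A1'` is also collected):

```
>>> getNumber("R ABC 1.1")
['1.1']
>>> getNumber("R Aiapara 2.3A1(Active)")
['2.3', '1']
```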
However, if you want all the characters after the first number, then you will want this function. It will return everything after the first number, and False if there isn't a number there.
```
def getNumber(string):
numbers = ".0123456789"
    for i, char in enumerate(string):
if char in numbers:
return string[i:]
return False
```
Now, if you want everything between the first and last numbers, then try this one instead:
```
def getNumber(string):
numbers = ".0123456789"
result = string
for i,char in enumerate(string):
if char in numbers:
result = result[i:]
break
for i in range(len(result)-1, 0, -1):
if result[i] in numbers:
result = result[:i+1]
break
return result
```
Hope this helps :) | 16,719 |
28,570,268 | My file contains this format: [{"a":1, "c":4},{"b":2, "d":5}], and I want to read this file into a list in Python. The list items should be {"a":1, "c":4} and {"b":2, "d":5}. I tried reading it into a string and then typecasting it into a list, but that does not help: it reads the content character by character. | 2015/02/17 | [
"https://Stackoverflow.com/questions/28570268",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4394027/"
] | You can "convert" a string that contains a list to an actual list like this
```
>>> import ast
>>> ast.literal_eval('[{"a":1, "c":4},{"b":2, "d":5}]')
[{'a': 1, 'c': 4}, {'b': 2, 'd': 5}]
```
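Since the file content is also valid JSON, the `json` module works equally well here (a minimal sketch; the filename is made up):

```
import json

with open("data.txt") as f:
    items = json.load(f)
# items == [{'a': 1, 'c': 4}, {'b': 2, 'd': 5}]
```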
You can of course sub out the literal string for the data you read from the file. | Another, dirtier option is this (it will produce a list of strings):
```
a = str('[{"a":1, "c":4},{"b":2, "d":5}]')
b = list()
for i in a.replace('[','').replace(']','').split(sep='},'):
b.append(i+'}')
b[len(b)-1] = b[len(b)-1].replace('}}','}')
for i in b:
    print(i)
# prints:
# {"a":1, "c":4}
# {"b":2, "d":5}
```
Since the `ast` approach Tim proposed earlier goes deeper than the first level, it will actually convert the underlying strings into dictionaries. So instead of a list of strings you will get a list of dictionaries. I am not sure if that's what you want to get. | 16,720
3,172,236 | I am writing a piece of code which will extract words from running text. This text can contain delimiters like \r, \n, etc.
I want to discard all these delimiters and only extract full words. How can I do this with Python? Is there any library available for crunching text in Python? | 2010/07/03 | [
"https://Stackoverflow.com/questions/3172236",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348663/"
] | Assuming your definition of "word" agrees with that of the regular expression module (`re`), that is, letters, digits and underscores, it's easy:
```
import re
fullwords = re.findall(r'\w+', thetext)
```
where `thetext` is the string in question (e.g., coming from an `f.read()` of a file object `f` open for reading, if that's where you get your text from).
If you define words differently (e.g. you want to include apostrophes so for example "it's" will be considered "one word"), it isn't much harder -- just use as the first argument of `findall` the appropriate pattern, e.g. `r"[\w']+"` for the apostrophe case.
If you need to be very, *very* sophisticated (e.g., deal with languages that use no breaks between words), then the problem suddenly becomes much harder and you'll need some third-party package like [nltk](http://www.nltk.org/). | Assuming your delimiters are whitespace characters (like space, `\r` and `\n`), then basic [`str.split()`](http://docs.python.org/library/stdtypes.html#str.split) does what you want:
```
>>> "asdf\nfoo\r\nbar too\tbaz".split()
['asdf', 'foo', 'bar', 'too', 'baz']
``` | 16,722 |
74,200,925 | I'm new to Python and am having problems multiplying the numbers inside each element and then adding the results together to get a total value.
Example of what I'm trying to do:
```
list = {'area1': [395.0, 212.0], 'area2': [165.0, 110.0]}
'area1': [395.0 * 212.0], 'area2': [165.0 * 110.0]
'area1': [83740], 'area2': [18150]
total value = 101890
```
Main.py:
```
def cubicMeterCalculator():
floorAreaList = {}
print("Example of how this would look like 'area1 395 212' 'area2 165 110'")
n = int(input("\nHow many walls? "))
for i in range(n):
print("\nEnter name of the wall first and 'Space' to separate the name and numbers before hitting enter.")
name, *lengths = input().split(" ")
l_lengths = list(map(float,lengths))
floorAreaList[name] = l_lengths
print(floorAreaList)
total = sum(float, floorAreaList)
print(total)
``` | 2022/10/25 | [
"https://Stackoverflow.com/questions/74200925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20248393/"
] | You can use a generator expression to multiply the pairs of values in your dictionary, then `sum` the output of that:
```py
lst = {'area1': [395.0, 212.0], 'area2': [165.0, 110.0]}
total = sum(v[0]*v[1] for v in lst.values())
# 101890.0
``` | You can find the area using a list comprehension.
Iterate through `lst.values()` -> `dict_values([[395.0, 212.0], [165.0, 110.0]])` and multiply the elements. Finally, use `sum` to find the total.
```
lst = {'area1': [395.0, 212.0], 'area2': [165.0, 110.0]}
area = sum([i[0]*i[1] for i in lst.values()])
# 101890.0
```
As a solution with `map` + `sum`:
```
sum(map(lambda x: x[0]*x[1], lst.values()))
``` | 16,723 |
15,497,896 | I am very new to programming and am converting a fortran90 code into python 2.7. I have done fairly well with it so far but have hit a difficult spot. I need to write this subroutine in Python but I don't understand the fortran notation and can't find any information on what the python equivalent of the Read(1,\*) lines would be.
Any help at all would be greatly appreciated.
```
SUBROUTINE ReadBCoutput(filenameBC,count,timeArray,MbolArray,uArray,gArray,rArray,iArray,zArray)
! read Bruzual & Charlot (2003) stellar population synthesis models into arrays
CHARACTER*500,INTENT(IN):: filenameBC
INTEGER,INTENT(OUT):: count
REAL,DIMENSION(:),ALLOCATABLE,INTENT(OUT):: timeArray,MbolArray,uArray,gArray,rArray,iArray,zArray
REAL:: logTime,Mbol,g,uMg,gMr,gMi,gMz
REAL,DIMENSION(:),ALLOCATABLE:: timeArrayLocal,MbolArrayLocal,uArrayLocal,gArrayLocal,rArrayLocal,iArrayLocal,zArrayLocal
! open file and read off unnecessary 29 lines of comments
OPEN(1,FILE=TRIM(filenameBC),RECL=2000)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
READ(1,*)
! now read arrays
count=0
ALLOCATE(timeArray(count))
ALLOCATE(MbolArray(count))
ALLOCATE(uArray(count))
ALLOCATE(gArray(count))
ALLOCATE(rArray(count))
ALLOCATE(iArray(count))
ALLOCATE(zArray(count))
IOEnd=0
DO WHILE(IOEnd>-1)
READ(1,*,IOSTAT=IOEnd) logTime,Mbol,g,uMg,gMr,gMi,gMz
!print*,'filename is',filenameBC
IF (IOEnd>-1) THEN ! not at end of file yet
! add new element to list
count=count+1
ALLOCATE(timeArrayLocal(count-1))
ALLOCATE(MbolArrayLocal(count-1))
ALLOCATE(uArrayLocal(count-1))
ALLOCATE(gArrayLocal(count-1))
ALLOCATE(rArrayLocal(count-1))
ALLOCATE(iArrayLocal(count-1))
ALLOCATE(zArrayLocal(count-1))
DO countInside=1,count-1
timeArrayLocal(countInside)=timeArray(countInside)
MbolArrayLocal(countInside)=MbolArray(countInside)
uArrayLocal(countInside)=uArray(countInside)
gArrayLocal(countInside)=gArray(countInside)
rArrayLocal(countInside)=rArray(countInside)
iArrayLocal(countInside)=iArray(countInside)
zArrayLocal(countInside)=zArray(countInside)
END DO
DEALLOCATE(timeArray)
DEALLOCATE(MbolArray)
DEALLOCATE(uArray)
DEALLOCATE(gArray)
DEALLOCATE(rArray)
DEALLOCATE(iArray)
DEALLOCATE(zArray)
ALLOCATE(timeArray(count))
ALLOCATE(MbolArray(count))
ALLOCATE(uArray(count))
ALLOCATE(gArray(count))
ALLOCATE(rArray(count))
ALLOCATE(iArray(count))
ALLOCATE(zArray(count))
DO countInside=1,count-1
timeArray(countInside)=timeArrayLocal(countInside)
MbolArray(countInside)=MbolArrayLocal(countInside)
uArray(countInside)=uArrayLocal(countInside)
gArray(countInside)=gArrayLocal(countInside)
rArray(countInside)=rArrayLocal(countInside)
iArray(countInside)=iArrayLocal(countInside)
zArray(countInside)=zArrayLocal(countInside)
END DO
timeArray(count)=10**logTime
MbolArray(count)=Mbol
gArray(count)=g
uArray(count)=uMg+g
rArray(count)=g-gMr
iArray(count)=g-gMi
zArray(count)=g-gMz
DEALLOCATE(uArrayLocal)
DEALLOCATE(gArrayLocal)
DEALLOCATE(rArrayLocal)
DEALLOCATE(iArrayLocal)
DEALLOCATE(zArrayLocal)
DEALLOCATE(MbolArrayLocal)
DEALLOCATE(timeArrayLocal)
END IF
END DO
CLOSE(1)
END SUBROUTINE ReadBCoutput
```
I don't expect anyone to convert the whole thing for me - I would just like to be clear on what this is actually doing and what is/isn't necessary to do in Python. I'm capable of searching on my own but I'm kind of blown away by what to look for here.
Thanks so much! | 2013/03/19 | [
"https://Stackoverflow.com/questions/15497896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | In fortran, `open(1,FILE=TRIM(filenameBC),RECL=2000)` opens the file with name `filenameBC`. The `TRIM` part is unnecessary as the fortran runtime library will do that for you (it's python equivalent is `filenameBC.rstrip()`). The `RECL=2000` part here is also a little fishy. I don't think that it does anything here -- Actually, I think that using it is undefined behavior since you're file should be connected for `"sequential"` access. According to the [fortran77 standard](http://www.fortran.com/F77_std/rjcnf0001-sh-12.html#sh-12.8.1) section 12.10.1,
>
> RECL = rl
>
>
> rl is an integer expression whose value must be positive. It specifies the length of each record in a file being connected for direct access. If the file is being connected for formatted input/output, the length is the number of characters. If the file is being connected for unformatted input/output, the length is measured in processor-dependent units. For an existing file, the value of rl must be included in the set of allowed record lengths for the file ( 12.2.2). For a new file, the processor creates the file with a set of allowed record lengths that includes the specified value. **This specifier must be given when a file is being connected for direct access; otherwise, it must be omitted.**
>
>
>
This may have changed in a newer revision of the standard -- If so, I believe that it specifies the *maximum* line length.
Fortran filehandles are simply integers. So, whereas in python you would say:
```
filehandle = open('file')
line = filehandle.readline() #or better, `next(filehandle)` :)
```
In fortran this is roughly the same as:
```
integer filehandle
filehandle = 1 ! pick an integer any positive one
! will do that we haven't used already,
! but it's best to avoid 0,5,6 as those
! usually mean `stderr`,`stdin` and `stdout`.
open(filehandle,file='file')
read(filehandle,*) line
```
the `*` basically gets you to read a single line from the file.
---
Note that this fortran code is a little buggy and HUGELY inefficient. For example the check `IF (IOEnd>-1) THEN` succeeds under any condition that isn't an end of file (e.g. strange errors will be masked similar to a bare `except` in python). In python, you can just pack this information into a list and grow the list dynamically -- python will handle all of the re-allocation that you need to do. At the end, you may choose to convert the list to a numpy `ndarray`.
In pseudo-python code, this translates roughly to:
```
data_list = []
with open(filenameBC.rstrip()) as fin:
for _ in range(29): #throw away first 29 lines (I think I counted right ...)
next(fin)
for line in fin:
        data_list.append([float(x) for x in line.split()])  # split the line into whitespace-separated fields
timeArray,MbolArray,uArray,gArray,rArray,iArray,zArray = zip(*data_list)
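    # (an aside, assuming the 7 columns really are plain floats:
    #  numpy can do the skipping and parsing in one call)
    #   import numpy as np
    #   data = np.loadtxt(filenameBC.rstrip(), skiprows=29)
    #   logTime, Mbol, g, uMg, gMr, gMi, gMz = data.T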
``` | `READ(1,*)` is reading .... something out of your file and not storing it, i.e. just throwing it away. All those `READ(1,*)` statements are just a way of scrolling through the file until you get to the data you actually need. (Not the most compact way to code this, by the way. Whoever wrote this FORTRAN code may have been very smart in many respects but was not a terribly good programmer. Or maybe they were in a big hurry.) A python equivalent would just be
```
>>> infile.readline()
```
Note that FORTRAN can read data in *as* integers, floats, what have you, but python is just going to read everything as text, and then you are going to have to cast it to whatever numerical form you need.
However, if you want to look at [NumPy](http://docs.scipy.org/doc/numpy/reference/routines.io.html), it has a couple of routines that can read data as numbers: `loadtxt` and `genfromtxt`. Maybe a few others too, but those are the ones I have found most helpful. | 16,728
23,566,970 | I have been using argparse in a program I am writing however it doesnt seem to create the stated output file.
My code is:
```
parser.add_argument("-o", "--output", action='store', dest='output', help="Directs the output to a name of your choice")
with open(output, 'w') as output_file:
output_file.write("%s\n" % item)
```
I have also tried:
```
parser.add_argument("-o", "--output", action='store', type=argparse.FileType('w'), dest='output', help="Directs the output to a name of your choice")
output_file.write("%s\n" % item)
```
The error that occurs is :
```
output_file.write("%s\n" % item)
NameError: name 'output_file' is not defined
```
Can someone please explain why I am having this error occuring and how I could solve it?
All my code:
```
from __future__ import print_function
from collections import defaultdict
from itertools import groupby
import argparse #imports the argparse module so it can be used
from itertools import izip
#print = print_function
parser = argparse.ArgumentParser() #simplifys the wording of using argparse as stated in the python tutorial
parser.add_argument("-r1", type=str, action='store', dest='input1', help="input the forward read file") # allows input of the forward read
parser.add_argument("-r2", type=str, action='store', dest='input2', help="input the reverse read file") # allows input of the reverse read
parser.add_argument("-v", "--verbose", action="store_true", help=" Increases the output, only needs to be used to provide feedback to Tom for debugging")
parser.add_argument("-n", action="count", default=0, help="Allows for up to 5 mismatches, however this will reduce accuracy of matching and cause mismatches. Default is 0")
#parser.add_argument("-o", "--output", action='store', type=argparse.FileType('w'), dest='output', help="Directs the output to a name of your choice")
parser.add_argument("-fastq", action="store_true", help=" States your input as fastq format")
parser.add_argument("-fasta", action="store_true", help=" States your input as fasta format")
parser.add_argument("-o", "--output", action='store', dest='output', help="Directs the output to a name of your choice")
args = parser.parse_args()
def class_chars(chrs):
if 'N' in chrs:
return 'unknown'
elif chrs[0] == chrs[1]:
return 'match'
else:
return 'not_match'
with open(output, 'w') as output_file:
s1 = 'aaaaaaaaaaN123bbbbbbbbbbQccc'
s2 = 'aaaaaaaaaaN456bbbbbbbbbbPccc'
n = 0
consec_matches = []
chars = defaultdict(int)
for k, group in groupby(zip(s1, s2), class_chars):
elems = len(list(group))
chars[k] += elems
if k == 'match':
consec_matches.append((n, n+elems-1))
n += elems
print (chars)
print (consec_matches)
print ([x for x in consec_matches if x[1]-x[0] >= 9])
list = [x for x in consec_matches if x[1]-x[0] >= 9]
flatten_list= [x for y in list for x in y]
print (flatten_list)
matching=[y[1] for y in list for x in y if x ==0 ]
print (matching)
magic = lambda matching: int(''.join(str(i) for i in matching)) # Generator exp.
print (magic(matching))
s2_l = s2[magic(matching):]
line3=s1+s2_l
print (line3)
if line3:
output_file.write("%s\n" % item)
``` | 2014/05/09 | [
"https://Stackoverflow.com/questions/23566970",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3616869/"
] | You are missing the bit where the arguments are actually parsed:
```
parser.add_argument("-o", "--output", help="Directs the output to a name of your choice")
args = parser.parse_args()
with open(args.output, 'w') as output_file:
output_file.write("%s\n" % item)
```
parser.parse\_args() will give you an object from which you can access the arguments by name using the long option name bar the dashes. | When I run your script I get:
```
Traceback (most recent call last):
File "stack23566970.py", line 31, in <module>
with open(output, 'w') as output_file:
NameError: name 'output' is not defined
```
There's no place in your script that does `output = ...`.
We can correct that with:
```
with open(args.output, 'w') as output_file:
```
`argparse` returns values as attributes of the `args` object.
Now I get:
```
Traceback (most recent call last):
File "stack23566970.py", line 62, in <module>
output_file.write("%s\n" % item)
NameError: name 'item' is not defined
```
Again, there's no `item = ...` line.
What is `item` supposed to be? | 16,729 |
50,447,751 | I'm trying to retrieve last month's media posts from an Instagram Business profile I manage, by using `'since'` and `'until'`, but it doesn't seem to work properly as the API returns posts which are out of the time range I selected.
I'm using the following string to call the API:
```
business_profile_id/media?fields=timestamp&since=2018-04-01&until=2018-04-30
```
while the Python snippet would be this (using the same [`__init__.py`](https://github.com/mobolic/facebook-sdk/blob/master/facebook/__init__.py) script from the facebook-python-sdk):
```
import facebook
graph = facebook.GraphAPI(access_token)
profile = graph.get_object(user)
posts = graph.get_connections(profile['id'], 'media?fields=caption,permalink,timestamp&since=2018-04-01&until=2018-04-30')
```
where `get_connections` is
```
def get_connections(self, id, connection_name, **args):
"""Fetches the connections for given object."""
return self.request(
"{0}/{1}/{2}".format(self.version, id, connection_name), args)
```
and `request` is
```
def request(
        self, path, args=None, post_args=None, files=None, method=None):
    """Fetches the given path in the Graph API.

    We translate args to a valid query string. If post_args is
    given, we send a POST request to the given path with the given
    arguments.
    """
    if args is None:
        args = dict()
    if post_args is not None:
        method = "POST"

    # Add `access_token` to post_args or args if it has not already been
    # included.
    if self.access_token:
        # If post_args exists, we assume that args either does not exists
        # or it does not need `access_token`.
        if post_args and "access_token" not in post_args:
            post_args["access_token"] = self.access_token
        elif "access_token" not in args:
            args["access_token"] = self.access_token

    try:
        response = self.session.request(
            method or "GET",
            FACEBOOK_GRAPH_URL + path,
            timeout=self.timeout,
            params=args,
            data=post_args,
            proxies=self.proxies,
            files=files)
    except requests.HTTPError as e:
        response = json.loads(e.read())
        raise GraphAPIError(response)

    headers = response.headers
    if 'json' in headers['content-type']:
        result = response.json()
    elif 'image/' in headers['content-type']:
        mimetype = headers['content-type']
        result = {"data": response.content,
                  "mime-type": mimetype,
                  "url": response.url}
    elif "access_token" in parse_qs(response.text):
        query_str = parse_qs(response.text)
        if "access_token" in query_str:
            result = {"access_token": query_str["access_token"][0]}
            if "expires" in query_str:
                result["expires"] = query_str["expires"][0]
        else:
            raise GraphAPIError(response.json())
    else:
        raise GraphAPIError('Maintype was not text, image, or querystring')

    if result and isinstance(result, dict) and result.get("error"):
        raise GraphAPIError(result)
    return result
```
Basically I'd like to get posts for a certain period and then get insights for each one.
Has anyone encountered this problem before? | 2018/05/21 | [
"https://Stackoverflow.com/questions/50447751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8542692/"
] | Unfortunately the `since` and `until` parameters are not supported on this endpoint; it only supports cursor-based pagination. The only way to do what I wish to do is to load each page of results individually using the `before` and `after` cursors provided in the API response. | For your task, I would recommend not using the InstagramAPI library. I will show you a simple solution using the [instabot](https://github.com/instagrambot/instabot) library. For pip installation of this library, use this command:
`pip install instabot`
Use the following python code to get the media within the specified date range.
```
import datetime
from instabot import Bot
bot = Bot()
bot.login(username="YOUR USERNAME", password="YOUR PASSWORD")
def get_media_posts(start_date, end_date):
    all_posts = bot.get_your_medias()
    filtered_posts = []
    for post in all_posts:
        post_info = bot.get_media_info(post)  # the media info for the post
        post_timestamp = post_info[0].get('taken_at')  # get the timestamp of the post
        post_date = datetime.datetime.fromtimestamp(post_timestamp).date()  # convert timestamp to date
        if post_date >= start_date and post_date <= end_date:
            filtered_posts.append(post)  # or you can also use: filtered_posts.append(post_info)
    return filtered_posts
```
This will return you a list of all the posts within the specified date and you can use the `bot.get_media_info(post)` to see what is inside every post.
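For example, a minimal usage sketch (the date values below are just placeholders):

```
start = datetime.date(2018, 4, 1)
end = datetime.date(2018, 4, 30)
april_posts = get_media_posts(start, end)
print(len(april_posts), "posts found between", start, "and", end)
```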
NOTE: start\_date and end\_date should be in date() (and not in datetime) format according to this code but you can compare with whatever datetime function you want :) | 16,732 |
72,470,453 | ```
import os
import sys, getopt
import signal
import time
from edge_impulse_linux.audio import AudioImpulseRunner

DEFAULT_THRESHOLD = 0.60
my_threshold = DEFAULT_THRESHOLD

runner = None

def signal_handler(sig, frame):
    print('Interrupted')
    if (runner):
        runner.stop()
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)

def help():
    print('python classify.py <path_to_model.eim> <audio_device_ID, optional>')

def my_function(label, score):
    print('')

def main(argv):
    try:
        opts, args = getopt.getopt(argv, "h", ["--help"])
    except getopt.GetoptError:
        help()
        sys.exit(2)

    for opt, arg in opts:
        if opt in ('-h', '--help'):
            help()
            sys.exit()

    if len(args) == 0:
        help()
        sys.exit(2)

    model = args[0]
    dir_path = os.path.dirname(os.path.realpath(__file__))
    modelfile = os.path.join(dir_path, model)

    with AudioImpulseRunner(modelfile) as runner:
        try:
            model_info = runner.init()
            labels = model_info['model_parameters']['labels']
            print('Loaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"')

            # Let the library choose an audio interface suitable for this model,
            # or pass a device ID parameter to manually select a specific audio interface
            selected_device_id = None
            if len(args) >= 2:
                selected_device_id = int(args[1])
                print("Device ID " + str(selected_device_id) + " has been provided as an argument.")

            for res, audio in runner.classifier(device_id=selected_device_id):
                print('Result (%d ms.) ' % (res['timing']['dsp'] + res['timing']['classification']), end='')
                for label in labels:
                    score = res['result']['classification'][label]
                    print('%s: %.2f\t' % (label, score), end='')
                print('', flush=True)
                if score > my_threshold:
                    my_function(label, score)
                    print('Yes', flush=True)
                    if label == "Hey Bmo":
                        my_function(label, score)
        finally:
            if (runner):
                runner.stop()

if __name__ == '__main__':
    main(sys.argv[1:])
```
I'm trying to make the threshold check a specific label, since there are multiple labels, with:
```
print('Yes', flush=True)
if label == "Hey Bmo":
    my_function(label, score)
```
This doesn't work. This is my first time messing around with Python, so please excuse my lacklustre code attempt.
terminal readout:
```
result (11 ms.) Hey Bmo: 0.02 Noise: 0.94 Unknown: 0.04
result (17 ms.) Hey Bmo: 0.90 Noise: 0.10 Unknown: 0.15
result (07 ms.) Hey Bmo: 0.05 Noise: 0.80 Unknown: 0.20
result (19 ms.) Hey Bmo: 0.10 Noise: 0.40 Unknown: 0.01
result (14 ms.) Hey Bmo: 0.01 Noise: 0.50 Unknown: 0.5
```
In conclusion: when "Hey Bmo" reaches 0.60 it should trigger my threshold, but right now it checks Unknown's threshold. | 2022/06/02 | [
"https://Stackoverflow.com/questions/72470453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19253158/"
] | [`some`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/some) short-circuits after finding the first match so it doesn't necessarily have to iterate over the whole array of objects. And it also returns a boolean which satisfies your use-case.
```js
const query1 = ['empid','Name'];
const arr1 = [{'keyName':'empid' ,'keyValue':'2'}];
const query2 = ['empid','Name'];
const arr2 = [{'keyName':'empid2' ,'keyValue':'five'}];
const query3 = ['empid','Name', 'test'];
const arr3 = [{'keyName':'test2' ,'keyValue':'five'},{'keyName':'test' ,'keyValue':'five'}];
function found(arr, query) {
  return arr.some(obj => {
    return query.includes(obj.keyName);
  });
}
console.log(found(arr1, query1));
console.log(found(arr2, query2));
console.log(found(arr3, query3));
``` | Use `_.isEqual(object, other);`
It may help you. | 16,733 |
37,277,206 | Currently, while using `babel-plugin-react-intl`, a separate json file is created for every component with 'id', 'description' and 'defaultMessage'. What I need is for only a single json file to be created, containing a single object with each 'id' as the key and 'defaultMessage' as the value.
Present situation:
`ComponentA.json`
```
[
  {
    "id": "addEmoticonA",
    "description": "Add emoticon",
    "defaultMessage": "Insert Emoticon"
  },
  {
    "id": "addPhotoA",
    "description": "Add photo",
    "defaultMessage": "Insert photo"
  }
]
```
`ComponentB.json`
```
[
  {
    "id": "addEmoticonB",
    "description": "Add emoji",
    "defaultMessage": "Insert Emoji"
  },
  {
    "id": "addPhotoB",
    "description": "Add picture",
    "defaultMessage": "Insert picture"
  }
]
```
What I need for translation.
`final.json`
```
{
  "addEmoticonA": "Insert Emoticon",
  "addPhotoA": "Insert photo",
  "addEmoticonB": "Insert Emoji",
  "addPhotoB": "Insert picture"
}
```
Is there any way to accomplish this task, whether with a Python script or anything else, i.e. to make a single json file from different json files, or to directly make a single json file using babel-plugin-react-intl? | 2016/05/17 | [
"https://Stackoverflow.com/questions/37277206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5380918/"
] | There is a [translations manager](https://github.com/GertjanReynaert/react-intl-translations-manager) that will do this.
Or for a custom option see below
---
The script below, which is based on this [script](https://github.com/emmenko/redux-react-router-async-example/blob/master/scripts/i18nToXliff.js), goes through the translation messages created by
`babel-plugin-react-intl` and creates js files that contain all messages from all components in the json format.
```js
import fs from 'fs'
import { sync as globSync } from 'glob'
import { sync as mkdirpSync } from 'mkdirp'
import * as i18n from '../lib/i18n'

const MESSAGES_PATTERN = './_translations/**/*.json'
const LANG_DIR = './_translations/lang/'

// Ensure output folder exists
mkdirpSync(LANG_DIR)

// Aggregates the default messages that were extracted from the example app's
// React components via the React Intl Babel plugin. An error will be thrown if
// there are messages in different components that use the same `id`. The result
// is a flat collection of `id: message` pairs for the app's default locale.
let defaultMessages = globSync(MESSAGES_PATTERN)
  .map(filename => fs.readFileSync(filename, 'utf8'))
  .map(file => JSON.parse(file))
  .reduce((collection, descriptors) => {
    descriptors.forEach(({
      id, defaultMessage, description
    }) => {
      if (collection.hasOwnProperty(id))
        throw new Error(`Duplicate message id: ${id}`)
      collection[id] = {
        defaultMessage, description
      }
    })
    return collection
  }, {})

// Sort keys by name
const messageKeys = Object.keys(defaultMessages)
messageKeys.sort()
defaultMessages = messageKeys.reduce((acc, key) => {
  acc[key] = defaultMessages[key]
  return acc
}, {})

// Build the JSON document for the available languages
i18n.en = messageKeys.reduce((acc, key) => {
  acc[key] = defaultMessages[key].defaultMessage
  return acc
}, {})

Object.keys(i18n).forEach(lang => {
  const langDoc = i18n[lang]
  const units = Object.keys(defaultMessages).map((id) => [id, defaultMessages[id]]).reduce((collection, [id]) => {
    collection[id] = langDoc[id] || '';
    return collection;
  }, {});
  fs.writeFileSync(`${LANG_DIR}${lang}.json`, JSON.stringify(units, null, 2))
})
``` | You can use [babel-plugin-react-intl-extractor](https://github.com/Bolid1/babel-plugin-react-intl-extractor) to aggregate your translations in a single file. It also recompiles the translation files automatically on each change of your messages. | 16,734
33,111,338 | I am trying to find out the sum of multiples of two numbers using Python. I have done it already. I just want to solve it using lambda functions.
Without lambda code
```
def sumMultiples(num, limit):
    sum = 0
    for i in xrange(num, limit, num):
        sum += i
    return sum

def sum(limit):
    return (sumMultiples(3, limit) +
            sumMultiples(5, limit) -
            sumMultiples(15, limit))

print sum(1000)
``` | 2015/10/13 | [
"https://Stackoverflow.com/questions/33111338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5442186/"
] | Try this code:
```
a = input("enter first number\n")
b = input("enter second number\n")
limit = []
limit.append(a)
limit.append(b)
natNo = range(1, 1000)
xyz = []
for i in limit:
    xyz += filter(lambda x: x == i or x % i == 0, natNo)
seen = {}  # plain dict used for de-duplication (avoids shadowing the built-in `set`)
map(seen.__setitem__, xyz, [])
nums = seen.keys()
print "the multiples of the given numbers are: " + str(nums)
c = reduce(lambda x, y: x + y, nums)
print "the sum of the multiples of the given numbers is " + str(c)
``` | ```
limit = 1000
a=lambda num, limit: sum([i for i in xrange(num, limit, num)])
print a(3,limit)+a(5,limit)-a(15,limit)
``` | 16,735 |
42,562,876 | I'm trying to read the frames of an .mov file using OpenCV 3.2 (from the [menpo conda channel](https://anaconda.org/menpo/opencv3)). I'm using Python 3.5.3 through Anaconda on an Ubuntu 16.04 64-bit setup.
Problem is, I get the following error message from OpenCV when it hits the `cap.read()` call, and the loop immediately breaks and catches the `if num == 0` conditional.
Here's the entirety of the code I'm running:
```
import cv2
import numpy as np
import sys

f = sys.argv[1]
cap = cv2.VideoCapture(f)

frames = []
num = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret: break

    gframe = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(gframe)

    num += 1
    if num % 100 == 0:
        frames.append(gframe)
    if num % 1000 == 0:
        print(num)

if num == 0:
    print("Something went wrong: no frames found.")
    exit(0)

cap.release()
```
````
user@ubuntu:/data$ python read.py movie.mov
Unable to stop the stream: Inappropriate ioctl for device
Something went wrong: no frames found.
user@ubuntu:/data$
````
I've found a couple of other StackOverflow questions on this topic, but they don't quite translate to my exact circumstance:
* [This question](https://stackoverflow.com/questions/41200201/opencv-unable-to-stop-the-stream-inappropriate-ioctl-for-device) proposes rebuilding OpenCV (also uses Python 2). That's not an option for me, as I'm trying to do this with Anaconda.
* These two questions ([here](http://answers.opencv.org/question/99659/videocapture-problem-inappropriate-ioctl-for-device/) and [here](http://answers.opencv.org/question/117110/netcat-stream-on-devstdin-not-working-with-opencv-310-dev-on-ubuntu-1604/)) on the OpenCV forums were left without any satisfactory answers.
* [This one](https://stackoverflow.com/questions/1605195/inappropriate-ioctl-for-device) has a lively discussion and a thorough answer, but it's specific to perl.
To that third point--there are quite a few other questions here that have the quote `inappropriate ioctl for device` but it's hard to see if any of them is directly relevant to this problem.
As a partial aside: I've installed this exact same opencv3 conda package on my macOS machine, and the code I've pasted here works just fine and on exactly the same .mov file I've tried on the Ubuntu machine.
Any ideas? | 2017/03/02 | [
"https://Stackoverflow.com/questions/42562876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13604/"
] | Solved the problem by getting around it entirely.
Used the [opencv-feedstock](https://github.com/conda-forge/opencv-feedstock) recipe of OpenCV for conda-forge. In the `recipe` folder is the build information for conda. I modified the `build.sh` file, changing the following options:
```
-DBUILD_PNG=1
-DBUILD_JPEG=1
-DWITH_CUDA=1
-DWITH_FFMPEG=1
```
ran `conda build --numpy 1.12 recipe` from the parent directory (had to specify the NumPy version, as the build script requirement lists `numpy x.x` which means you have to provide the version at runtime), and waited.
Took **forever** (seriously, a couple hours on a very powerful machine--the time sink is CUDA), but the build eventually completed successfully.
Then it was just a matter of installing the locally-built bz2 archive (`conda install --use-local opencv`). No more weird ioctl error messages, and the above script worked just fine. | I faced the same problem with Anaconda private env & Python 3.5 on Ubuntu 16.04.
Initially installed Opencv3 using
```
conda install -c menpo opencv3
```
Solution:
1. Remove Opencv3 `conda remove opencv3`
2. Install Opencv3 using `pip install opencv-contrib-python`
If the problem still persists:
1. Uninstall OpenCV
2. Install dependencies
`sudo apt-get install ffmpeg`
`sudo apt-get install libavcodec-dev libavformat-dev libavdevice-dev`
3. Reinstall Opencv using step 2. | 16,736 |
19,609,456 | Is there any way of creating a simple Java (or C, C++, Python) program that prints 3 (outputs 3) when given input=6, and gives output=6 when given input=3, without using "if conditions"? | 2013/10/26 | [
"https://Stackoverflow.com/questions/19609456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2829141/"
] | Assuming you're happy for it to produce other outputs on inputs that aren't 6 or 3, then you can just compute 9-x. | You can always just use a switch-case statement. Also, if you only want those two answers, you could also take the input as an int and do 9-[your int] and print that answer. | 16,737 |
63,067,003 | I'm a beginner in python but I need to fix this small mistake. I tried different ways to fix it by changing the indentation. Maybe I'm overlooking something? The error is attached. Any help is much appreciated! Thank you
```
if(pretrained_weights):
    model.load_weights(pretrained_weights)
    print('*************Using pretrained weights****************')
        return model
```
---
```
return model
^
IndentationError: unexpected indent
``` | 2020/07/24 | [
"https://Stackoverflow.com/questions/63067003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13986513/"
] | Here you go.
```
if(pretrained_weights):
    model.load_weights(pretrained_weights)
    print('*************Using pretrained weights****************')
return model
``` | My solution:
```
if(pretrained_weights):
    model.load_weights(pretrained_weights)
    print('*************Using pretrained weights****************')
return model
``` | 16,742 |
21,721,558 | I am working on setting up the pyramid framework on python3.3 virtual env.
For the database connection I use MySQL Connector/Python (SQLAlchemy).
I came across the following problem:
When I try to select records from the database I get the following:
`[Wed Feb 12 09:20:34.373204 2014] [:error] [pid 29351] [remote 127.0.0.1:55376] File "/home/xxx/wsgi/env/lib/python3.3/site-packages/mysql_connector_python-1.1.5-py3.3.egg/mysql/connector/pooling.py", line 29, in <module>
[Wed Feb 12 09:20:34.373237 2014] [:error] [pid 29351] [remote 127.0.0.1:55376]`
`ImportError: No module named queue`
It can't find the module queue, but this works just fine:
```
~/wsgi/env$ bin/python3.3
Python 3.3.2+ (default, Oct 9 2013, 14:50:09)
[GCC 4.8.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import queue
>>>
```
Where is the problem? There is no symlink in `wsgi/env/lib/python3.3/` for `queue.py`, and even if I add one manually, the error still appears.
**EDIT 1:**
When I use the pserve command to launch a web server, everything is ok, but with apache2, the error above happens.
Apache config:
```
# Use only 1 Python sub-interpreter. Multiple sub-interpreters
# play badly with C extensions. See
# http://stackoverflow.com/a/10558360/209039
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
WSGIDaemonProcess pyramid user=user group=staff threads=4 \
    python-path=/home/user/wsgi/env/lib/python3.3/site-packages
WSGIScriptAlias /app /home/user/wsgi/env/pyramid.wsgi

<Directory /home/user/wsgi/env>
    WSGIProcessGroup pyramid
    # Order allow,deny
    Require all granted
</Directory>
``` | 2014/02/12 | [
"https://Stackoverflow.com/questions/21721558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2304257/"
] | Somehow, my computer was one year behind the actual time.
I adjusted to the correct time and the time zone.
I closed and reopened Google Chrome. Problem was fixed. | The problem basically occurs on older versions of the OS, e.g. Windows XP with SP2. The SHA-2 algorithm has been used to generate the SSL certificates, which is not supported by older versions of the OS.
There are two possible solutions:
1. Upgrade the OS: use another OS or upgrade the existing one (with SP3); or
2. Generate new SSL certificates with the SHA-1 algorithm. | 16,744
71,875,058 | I have a sample spark df as below:
```
df = spark.createDataFrame([[1, 'a', 'b', 'c'],
                            [1, 'b', 'c', 'b'],
                            [1, 'b', 'a', 'b'],
                            [2, 'c', 'a', 'a'],
                            [3, 'b', 'b', 'a']], ['id', 'field1', 'field2', 'field3'])
```
What I need next is to provide multiple aggregations to show a summary of the a, b, c values for each field. I have a working but tedious process, shown below:
```
agg_table = (
    df
    .groupBy('id')
    .agg(
        # field1
        sum(when(col('field1') == 'a',1).otherwise(0)).alias('field1_a_count')
        ,sum(when(col('field1') == 'b',1).otherwise(0)).alias('field1_b_count')
        ,sum(when(col('field1') == 'c',1).otherwise(0)).alias('field1_c_count')
        # field2
        ,sum(when(col('field2') == 'a',1).otherwise(0)).alias('field2_a_count')
        ,sum(when(col('field2') == 'b',1).otherwise(0)).alias('field2_b_count')
        ,sum(when(col('field2') == 'c',1).otherwise(0)).alias('field2_c_count')
        # field3
        ,sum(when(col('field3') == 'a',1).otherwise(0)).alias('field3_a_count')
        ,sum(when(col('field3') == 'b',1).otherwise(0)).alias('field3_b_count')
        ,sum(when(col('field3') == 'c',1).otherwise(0)).alias('field3_c_count')
    ))
```
What I am expecting to get is this:
```
agg_table = (['id':'1','2','3'],
['field1_a_count':1,0,0],
['field1_b_count':2,0,1],
['field1_c_count':0, 1, 0],
['field2_a_count':1,1,0],
['field2_b_count':1,0,1],
['field2_c_count':1,0,0],
['field3_a_count':0,1,1],
['field3_b_count':2,0,0],
['field3_c_count':1,0,0])
```
It is just fine if I only really have 3 fields, but I have 30 fields with varying/custom names. Maybe somebody can help me with the repetitive task of coding the aggregated sum per field. I tried playing around with a suggestion from:
<https://danvatterott.com/blog/2018/09/06/python-aggregate-udfs-in-pyspark/>
I can make it work if I only pull one column and one value, but I get varying errors; one of them is:
```
AnalysisException: cannot resolve '`value`' given input columns: ['field1','field2','field3']
```
One last line I tried is using:
```
validated_cols = ['field1','field2','field3']
df.select(validated_cols).groupBy('id').agg(collect_list($'field1_a_count',$'field1_b_count',$'field1_c_count', ...
$'field30_c_count')).show()
Output: SyntaxError: invalid syntax
```
I tried with pivot too, but from searches so far, it says it is only good for one column. I tried this for multiple columns:
```
df.withColumn("p", concat($"p1", $"p2"))
  .groupBy("a", "b")
  .pivot("p")
  .agg(...)
```
I still get a syntax error.
Another link I tried: <https://danvatterott.com/blog/2019/02/05/complex-aggregations-in-pyspark/>
I also tried the exprs approach: `exprs1 = {x: "sum" for x in df.columns if x != 'id'}`
Any suggestions will be appreciated. Thanks | 2022/04/14 | [
"https://Stackoverflow.com/questions/71875058",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16762881/"
] | Let me answer your question in two steps. First, you are wondering if it is possible to avoid hard coding all your aggregations in your attempt to compute all your aggregations. It is. I would do it like this:
```py
from pyspark.sql import functions as f

# let's assume that this is known, but we could compute it as well
values = ['a', 'b', 'c']
# All the columns except the id
cols = [c for c in df.columns if c != 'id']

def count_values(column, value):
    return f.sum(f.when(f.col(column) == value, 1).otherwise(0))\
            .alias(f"{column}_{value}_count")

# And this gives you the result of your hard coded aggregations:
df\
    .groupBy('id')\
    .agg(*[count_values(c, value) for c in cols for value in values])\
    .show()
```
But that is not what you expect, right? You are trying to compute some kind of pivot on the `id` column. To do this, I would not use the previous result, but just work the data differently. I would start by replacing all the columns of the dataframe but `id` (which is renamed to `x`) with an array of values of the form `{column_name}_{value}_count`, and I would explode that array. From there, we just need to compute a simple pivot on the former `id` column renamed `x`, grouped by the values contained in the exploded array.
```py
df\
    .select(f.col('id').alias('x'), f.explode(
        f.array(
            [f.concat_ws('_', f.lit(c), f.col(c), f.lit('count')).alias(c)
             for c in cols]
        )
    ).alias('id'))\
    .groupBy('id')\
    .pivot('x')\
    .count()\
    .na.fill(0)\
    .orderBy('id')\
    .show()
```
which yields:
```none
+--------------+---+---+---+
| id| 1| 2| 3|
+--------------+---+---+---+
|field1_a_count| 1| 0| 0|
|field1_b_count| 2| 0| 1|
|field1_c_count| 0| 1| 0|
|field2_a_count| 1| 1| 0|
|field2_b_count| 1| 0| 1|
|field2_c_count| 1| 0| 0|
|field3_a_count| 0| 1| 1|
|field3_b_count| 2| 0| 0|
|field3_c_count| 1| 0| 0|
+--------------+---+---+---+
``` | ### update
based on discussion in the comments, I think this question is a case of an [X-Y problem](https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem). The task at hand is something that is seen very frequently in the world of Data Engineering and ETL development: how to partition and then quantify good and bad records.
In the case where the data is being prepared to load to a data warehouse / hadoop ecosystem, the usual pattern is to take the raw input and load it to a dataframe, then apply transformations & validations that partition the data into "The Good, The Bad, and The Ugly":
* The first— and hopefully largest— partition contains records that are successfully transformed and which pass validation. These will go on to be persisted in durable storage and certified to be used for analytics.
* The second partition contains records that were successfully transformed but which failed during QA. The QA rules should include checks for illegal nulls, string pattern matching (like phone number format), etc...
* The third partition is for records that are rejected early in the process because they failed on a transformation step. Examples include fields that contain non-number values that are cast to numeric types, text fields that exceed the maximum length, or strings that contain control characters that are not supported by the database.
The goal **should not** be to generate counts for each of these 3 classifications across every column and for every row. Trying to do that is counterproductive. Why? Because when a transformation step or QA check fails for a given record, that entire record should be rejected immediately and sent to a separate output stream to be analyzed later. Each row in the data set should be treated as just that: a single record. It isn't possible for a single field to fail and still have the complete record pass, which makes metrics at this granularity unnecessary. What action will you take knowing that 100 rows passed on the "address" field? For valid records, all that matters is the total number that passed for every column. Otherwise, it wouldn't be a valid record.
With that said, remember that the goal is to build a usable and cleansed data set; analyzing the rejected records is a secondary task and can be done offline.
It is common practice to add a field to the rejected data to indicate which column caused the failure. That makes it easy to troubleshoot any malformed data, so there is really no need to generate counts across all columns, even for bad records. Instead, just review the rejected data after the main job finishes, and address the problems. Continue doing that iteratively until the number of rejected records is below whatever threshold you think is reasonable, and then continue to monitor it going forward.
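To make the pattern concrete, here is a minimal PySpark sketch of that partitioning step, applied to the question's `df` (the single QA rule on `field1` and the `reject_reason` column are illustrative assumptions, not part of the original post):

```py
from pyspark.sql import functions as f

# hypothetical QA rule: 'field1' must be one of the known codes
valid_codes = ['a', 'b', 'c']
checked = df.withColumn(
    'reject_reason',
    f.when(~f.col('field1').isin(valid_codes), f.lit('field1')))  # null means the record passed

good = checked.filter(f.col('reject_reason').isNull()).drop('reject_reason')  # load to the warehouse
bad = checked.filter(f.col('reject_reason').isNotNull())                      # persist separately, review offline

print(good.count(), bad.count())  # one total per partition, not one per column
```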
---
#### Old answer
This is a sign of a design flaw in the data. Whatever the "field1", "field2", etc... columns actually represent, it appears they are all related, in the sense that the values quantify some attribute (maybe each one is a count for a specific merchandise ID, or the number of people with a certain property...). The problem is that these fields are being added as individual columns on a fact table1, which then needs to be aggregated, resulting in the situation that you're facing.
A better design would be to collapse those "field1", "field2", etc... columns into a single code field that can be used as the `GROUP BY` field when doing the aggregation. You might want to consider creating a separate table to do this if the existing one has many other columns and making this change would alter the grain in a way that might cause other problems.
---
1: it's usually a big red flag to have a table with a bunch of enumerated columns with the same name and purpose. I've even seen cases where someone has created tables with "spare" columns for when they want to add more attributes later. Not good. | 16,754 |
49,007,215 | I want to get the occurrence of characters in a string. I have this code:
```
string = "Foo Fighters"
def conteo(string):
    copia = ''
    for i in string:
        if i not in copia:
            copia = copia + i
    conteo = [0]*len(copia)
    for i in string:
        if i in copia:
            conteo[copia.index(i)] = conteo[copia.index(i)] + 1
    out = ['0']*2*len(copia)
    for i in range(len(copia)):
        out[2*i] = copia[i]
        out[2*i + 1] = conteo[i]
    return (out)
```
And I want to return something like: `['f', 2, 'o', 2, '', 1, 'i', 1, 'g', 1, 'h', 1, 't', 1, 'e', 1, 'r', 1, 's', 1]`
How can I do it, without using a Python library?
Thank you | 2018/02/27 | [
"https://Stackoverflow.com/questions/49007215",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7590594/"
] | Depending on why you want this information, one method could be to use a `Counter`:
```
from collections import Counter
print(Counter("Foo Fighters"))
```
Of course, to create exactly the same output as requested, use itertools as well:
```
from collections import Counter
from itertools import chain
c = Counter("Foo Fighters")
output = list(chain.from_iterable(c.items()))
# ['F', 2, 'o', 2, ' ', 1, 'i', 1, 'g', 1, 'h', 1, 't', 1, 'e', 1, 'r', 1, 's', 1]
``` | It's not clear whether you want a critique of your current attempt or a pythonic solution. Below is one way where the output is a dictionary.
```
from collections import Counter
mystr = "Foo Fighters"
c = Counter(mystr)
```
**Result**
```
Counter({' ': 1,
'F': 2,
'e': 1,
'g': 1,
'h': 1,
'i': 1,
'o': 2,
'r': 1,
's': 1,
't': 1})
```
**Output as list**
I purposely do not combine the tuples in this list, as it's a good idea to maintain structure until absolutely necessary. It's a trivial task to combine these into one list of strings.
```
list(c.items())
# [('F', 2),
# ('o', 2),
# (' ', 1),
# ('i', 1),
# ('g', 1),
# ('h', 1),
# ('t', 1),
# ('e', 1),
# ('r', 1),
# ('s', 1)]
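
# Flattening into one alternating list, as mentioned above
# (itertools.chain is one way to do it):
from itertools import chain
list(chain.from_iterable(c.items()))
# ['F', 2, 'o', 2, ' ', 1, 'i', 1, 'g', 1, 'h', 1, 't', 1, 'e', 1, 'r', 1, 's', 1]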
``` | 16,755 |
35,782,575 | I am using a python package called kRPC that requires a basic boilerplate of setup code to use in any given instance, so here's my question:
Once I create a generic *'kRPCboilerplate.py'*, where can I place it inside my Python27 directory so that I can simply type,
```
import kRPCboilerplate
```
at the beginning of all my files?
---
I want to install my custom Python file to my Python directory so that I don't have to copy and paste the file into every new project folder I make.
I understand that,
```
import boilerplate
```
will import *'boilerplate.py'*, but only if *'boilerplate.py'* is set in the root directory **relative** to the Python file that imports it.
The program I am creating will not be distributed, so there is no need to make a module installer, which is above the scope of my abilities. I simply want to copy and paste *'kRPCboilerplate.py'* to the proper directory so that I can use **Import** without ever having to specify a path or copy and paste the imported file into the relative directory. | 2016/03/03 | [
"https://Stackoverflow.com/questions/35782575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5910286/"
] | Your module root directory is 'Python27\Lib', where Python27 is your main Python folder which includes the Python executable file. You can drag and drop the .py files in there and import them without any complications! | Bit late to reply, but the safest approach is to set a special environment variable called PYTHONPATH, which adds a search location where Python looks for libraries:
e.g. in a Linux terminal:
`export PYTHONPATH=$PYTHONPATH:/path/to/file`
note it is only the path to the file, not the filename.
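A quick way to verify the effect from Python itself (`/path/to/file` is the placeholder directory from above):

```
import sys
print('/path/to/file' in sys.path)  # True once PYTHONPATH includes it
import kRPCboilerplate              # should now work from any project folder
```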
If you want a more permanent solution you can add
`export PYTHONPATH=$PYTHONPATH:/path/to/file`
to your ~/.bashrc or ~/.profile file
In Windows the environment variables are set in the control panel.
Not sure about OS X | 16,757
50,070,398 | I am new to TensorFlow. When I use `import tensorflow.contrib.learn.python.learn` to use the DNNClassifier, it gives me an error: `module object has no attribute python`
Python version 3.4
Tensorflow 1.7.0 | 2018/04/27 | [
"https://Stackoverflow.com/questions/50070398",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5878765/"
] | You can use `transition-delay` combined with sass loops and completely avoid javascript:
```
@for $i from 0 through 3
  .mobile-container.active li:nth-child(#{$i})
    transition-delay: 330ms + (100ms * $i) !important
```
Check this [fork](https://codepen.io/anon/pen/aGBPXL) of your codepen. | You can use jquery plugin <https://github.com/morr/jquery.appear/> to track elements when they appear and provide data animations based on it.
E.g. you can give your element an attribute data-animated="fadeIn" and the plugin will do the rest. | 16,758
17,903,144 | I am new to Python and I am supposed to create a game where the input can only be in the range 1 to 3 (player 1, 2, 3), and the output should be an error if the user inputs more than 3, or an error if the input is a string.
```
def makeTurn(player0):
    ChoosePlayer = (raw_input("Who do you want to ask? (1-3)"))
    if ChoosePlayer > 4:
        print "Sorry! Error! Please Try Again!"
        ChoosePlayer = (raw_input("Who do you want to ask? (1-3)"))
    if ChoosePlayer.isdigit() == False:
        print "Sorry! Integers Only"
        ChoosePlayer = (raw_input("Who do you want to ask? (1-3)"))
    else:
        print "player 0 has chosen player " + ChoosePlayer + "!"
        ChooseCard = raw_input("What rank are you seeking from player " + ChoosePlayer + "?")
```
I was doing it like this, but it seems like there is a problem with my code: if the input is 1, it still says "error please try again". I'm so confused! | 2013/07/27 | [
"https://Stackoverflow.com/questions/17903144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2626540/"
] | `raw_input` returns a string. Thus, you're trying to do `"1" > 4`. You need to convert it to an integer by using [`int`](http://docs.python.org/2/library/functions.html#int)
If you want to catch whether the input is a number, do:
```
while True:
    try:
        ChoosePlayer = int(raw_input(...))
        break
    except ValueError:
        print ("Numbers only please!")
```
Just note that now it's an integer, your concatenation below will fail. Here, you should use [`.format()`](http://docs.python.org/2/library/stdtypes.html#str.format)
```
print "player 0 has chosen player {}!".format(ChoosePlayer)
``` | You have to cast your value to int using method [`int()`](http://docs.python.org/2/library/functions.html#int):
```
def makeTurn(player0):
    ChoosePlayer = (raw_input("Who do you want to ask? (1-3)"))
    if ChoosePlayer.isdigit() == False:  # check for digits first, so int() below cannot crash
        print "Sorry! Integers Only"
        ChoosePlayer = (raw_input("Who do you want to ask? (1-3)"))
    if int(ChoosePlayer) not in [1, 2, 3]:
        print "Sorry! Error! Please Try Again!"
        ChoosePlayer = (raw_input("Who do you want to ask? (1-3)"))
    else:
        print "player 0 has chosen player " + ChoosePlayer + "!"
        ChooseCard = raw_input("What rank are you seeking from player " + ChoosePlayer + "?")
``` | 16,760 |
54,530,138 | I am stuck on why my code doesn't count the number of vowels (case-insensitively) and print a sentence reporting the number of vowels found in the word 'and'.
```
import sys

vowels = sys.argv[1]
count = 0
for vowel in vowels:
    if (vowel == 'a' or vowel == 'e' or vowel == 'i' or vowel == 'o' or vowel == 'u' or
            vowel == 'A' or vowel == 'E' or vowel == 'I' or vowel == 'O' or vowel == 'U'):
        count += 1
if count == 0:
    print('There are 0 vowels in '.format(count))
elif count < 2:
    print('There is 1 vowel in '.format(count))
else:
    print('There are {} vowels'.format(count, vowels))
```
In my terminal:
**user$ python** vowel\_counter.py and
There are 0 vowels in
There are 0 vowels in | 2019/02/05 | [
"https://Stackoverflow.com/questions/54530138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | sys.argv is a list of the running arguments, where the first element is always your running file. Therefore, you do not iterate over the text but rather over the arguments ['vowel_counter.py', 'and'].
You should do something like this:
```
vowels=sys.argv[1]
``` | The following will take care of single or multiple arguments passed in the command line. Like `python vowel_count.py foo` and `python vowel_count.py foo bar`
```
$ cat vowel_count.py
import sys

args = sys.argv[1:]
print(args)

count = 0
for arg in args:  # handling multiple commandline args
    for char in arg:
        if char.lower() in ['a', 'e', 'i', 'o', 'u']:
            count += 1

print("The {} contains {} vowels".format(' '.join(args), count))
``` | 16,763 |
59,134,194 | This is a part of the HTML code from the [following page](https://orange.e-sim.org/battle.html?id=5377):
```
<div>
<div class="sidebar-labeled-information">
<span>
Economic skill:
</span>
<span>
10.646
</span>
</div>
<div class="sidebar-labeled-information">
<span>
Strength:
</span>
<span>
2336
</span>
</div>
<div class="sidebar-labeled-information">
<span>
Location:
</span>
<div>
<a href="region.html?id=454">
Little Karoo
<div class="xflagsSmall xflagsSmall-Argentina">
</div>
</a>
</div>
</div>
<div class="sidebar-labeled-information">
<span>
Citizenship:
</span>
<div>
<div class="xflagsSmall xflagsSmall-Poland">
</div>
<small>
<a href="pendingCitizenshipApplications.html">
change
</a>
</small>
</div>
</div>
</div>
```
I want to extract `region.html?id=454` from it. I don't know how to narrow the search down to `<a href="region.html?id=454">`, since there are a lot of `<a href=>` tags.
Here is the python code:
```
session=session()
r = session.get('https://orange.e-sim.org/battle.html?id=5377',headers=headers,verify=False)
soup = BeautifulSoup(r.text, 'html.parser')
div = soup.find_all('div',attrs={'class':'sidebar-labeled-information'})
```
And the output of this code is:
```
[<div class="sidebar-labeled-information" id="levelMission">
<span>Level:</span> <span>15</span>
</div>, <div class="sidebar-labeled-information" id="currRankText">
<span>Rank:</span>
<span>Colonel</span>
</div>, <div class="sidebar-labeled-information">
<span>Economic skill:</span>
<span>10.646</span>
</div>, <div class="sidebar-labeled-information">
<span>Strength:</span>
<span>2336</span>
</div>, <div class="sidebar-labeled-information">
<span>Location:</span>
<div>
<a href="region.html?id=454">Little Karoo<div class="xflagsSmall xflagsSmall-Argentina"></div>
</a>
</div>
</div>, <div class="sidebar-labeled-information">
<span>Citizenship:</span>
<div>
<div class="xflagsSmall xflagsSmall-Poland"></div>
<small><a href="pendingCitizenshipApplications.html">change</a>
</small>
</div>
</div>]
```
But my desired output is `region.html?id=454`.
The page which I'm trying to search in is located [here](https://orange.e-sim.org/battle.html?id=5377), but you need to have an account to view the page. | 2019/12/02 | [
"https://Stackoverflow.com/questions/59134194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12460618/"
] | You need to change the format string a little and pass `width` as a keyword argument to the `format()` method:
```
width = 6
with open(out_file, 'a') as file:
    file.write("{:{width}}{:{width}}\n".format('a', 'b', width=width))
```
Contents of file afterwards:
```none
a b
``` | It's a bit ugly but you can do this. Using `{{}}` you can type a literal curly brace, and by that, you can format your format string with a variable width.
```
width = 6
format_str = "{{:{}}}{{:{}}}\n".format(width, width)  # This builds the string "{:6}{:6}", i.e. a format with a variable width.
with open(out_file, 'a') as file:
    file.write(format_str.format('a', 'b'))
```
EDIT: If you want to apply this kind of variable-width pattern to any pattern used in your code, you could use this function:
```
import re
def variable_width_pattern(source_pattern, width):
    regex = r"\{(.*?)\}"
    matches = re.findall(regex, source_pattern)
    args = ["{{:{}}}".format(width) for x in range(len(matches))]
    return source_pattern.format(*args)
``` | 16,766 |
54,722,251 | I am trying to connect to a mysql database (hosted on media temple) with my python script (ran locally) but I am receiving an error when I run it.
The error is:
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mysql/connector/connection_cext.py", line 179, in _open_connection
self._cmysql.connect(**cnx_kwargs)
_mysql_connector.MySQLInterfaceError: Bad handshake
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/Charlie/Documents/python/myscript/mysql_insert.py", line 8, in <module>
port="3306"
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mysql/connector/__init__.py", line 172, in connect
return CMySQLConnection(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mysql/connector/connection_cext.py", line 78, in __init__
self.connect(**kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mysql/connector/abstracts.py", line 735, in connect
self._open_connection()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/mysql/connector/connection_cext.py", line 182, in _open_connection
sqlstate=exc.sqlstate)
mysql.connector.errors.OperationalError: 1043 (08S01): Bad handshake
```
Here is the code from the script
```
import mysql.connector
mydb = mysql.connector.connect(
    host="external-db.s157688.gridserver.com",
    user="myusername",
    passwd="mypassword",
    database="mydatabase",
    port="3306"
)
mycursor = mydb.cursor()
sql = "INSERT INTO test (post_id, title) VALUES (%s, %s)"
val = [
    ('Peter', 'Lowstreet 4'),
    ('Amy', 'Apple st 652'),
    ('Hannah', 'Mountain 21'),
    ('Michael', 'Valley 345'),
    ('Sandy', 'Ocean blvd 2'),
    ('Betty', 'Green Grass 1'),
    ('Richard', 'Sky st 331'),
    ('Susan', 'One way 98'),
    ('Vicky', 'Yellow Garden 2'),
    ('Ben', 'Park Lane 38'),
    ('William', 'Central st 954'),
    ('Chuck', 'Main Road 989'),
    ('Viola', 'Sideway 1633')
]
mycursor.executemany(sql, val)
mydb.commit()
print(mycursor.rowcount, "was inserted.")
```
I tried to google it but did not find any solutions, could anybody help out? | 2019/02/16 | [
"https://Stackoverflow.com/questions/54722251",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1320977/"
] | Make sure you've installed mysql-connector and not mysql-connector-python. To make sure of this, just run the following commands: `pip3 uninstall mysql-connector-python` and then `pip3 install mysql-connector` | Make sure you have given the correct port number | 16,769
15,912,804 | Standard python [distutils provides a '--user' option](http://docs.python.org/2/install/index.html#alternate-installation-the-user-scheme) which lets me install a package as a limited user, like this:
```
python setup.py install --user
```
Is there an equivalent for **easy\_install** and **pip**? | 2013/04/09 | [
"https://Stackoverflow.com/questions/15912804",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/376587/"
] | For `pip`, see [User Installs](http://www.pip-installer.org/en/latest/cookbook.html#user-installs) for details, but basically, it's just what you'd expect:
```
pip install --user Foo
```
It's a bit trickier for `easy_install`. As Ned Deily points out, if you can rely on `distribute` rather than `setuptools`, and 0.6.11 or later, you can just use `--user` the same as `pip`. But if you need to work with `setuptools`, or older `distribute`… see [Custom Installation Locations](http://peak.telecommunity.com/DevCenter/EasyInstall#custom-installation-locations) for details (and note that it explains how to create and set up user site packages, not just how to install there, because it needs to be able to work with Python 2.5 and earlier, which didn't do this by default). But hopefully, you're only using `easy_install` for the handful of packages that aren't `pip`-able, so that isn't a big deal.
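In either case, a quick way to see where user-scheme installs land is via the `site` module (a minimal sketch; the output paths vary by platform):

```
import site
print(site.getusersitepackages())  # e.g. ~/.local/lib/pythonX.Y/site-packages on Linux
print(site.ENABLE_USER_SITE)       # False means the user site directory is disabled
```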
However, it's at least worth considering whether [`virtualenv`](http://www.virtualenv.org) is a better fit for whatever you're trying to accomplish than a user site directory. `pip` and `virtualenv` work together very nicely, as [the docs](http://www.pip-installer.org/en/latest/installing.html#using-virtualenv) explain. | From the easy\_install docs
<http://peak.telecommunity.com/DevCenter/EasyInstall#downloading-and-installing-a-package>
>
> --install-dir=DIR, -d DIR Set the installation directory. It is up to you to ensure that this directory is on sys.path at runtime, and to
> use pkg\_resources.require() to enable the installed package(s) that
> you need.
>
>
> (New in 0.4a2) If this option is not directly specified on the command
> line or in a distutils configuration file, the distutils default
> installation location is used. Normally, this would be the
> site-packages directory, but if you are using distutils configuration
> files, setting things like prefix or install\_lib, then those settings
> are taken into account when computing the default installation
> directory, as is the --prefix option.
>
>
> --prefix=DIR (New in 0.6a10) Use the specified directory as a base for computing the default installation and script directories. On Windows,
> the resulting default directories will be prefix\Lib\site-packages
> and prefix\Scripts, while on other platforms the defaults will be
> prefix/lib/python2.X/site-packages (with the appropriate version
> substituted) for libraries and prefix/bin for scripts.
>
>
> Note that the --prefix option only sets the default installation and
> script directories, and does not override the ones set on the command
> line or in a configuration file.
>
>
>
You can also specify them using a ~/.pydistutils.cfg file
<http://peak.telecommunity.com/DevCenter/EasyInstall#mac-os-x-user-installation>
Before installing EasyInstall/setuptools, just create a ~/.pydistutils.cfg file with the following contents (or add this to the existing contents):
>
> [install]
> install_lib = ~/Library/Python/$py_version_short/site-packages
> install_scripts = ~/bin
>
> This will tell the distutils and EasyInstall to always install
> packages in your personal site-packages directory, and scripts to
> ~/bin. (Note: do not replace $py\_version\_short with an actual Python
> version in the configuration file! The distutils will substitute the
> correct value at runtime, so that the above configuration file should
> work correctly no matter what Python version you use, now or in the
> future.)
>
>
> Once you have done this, you can follow the normal installation
> instructions and use easy\_install without any other special options or
> steps.
>
>
> (Note, however, that ~/bin is not in the default PATH, so you may have
> to refer to scripts by their full location. You may want to modify
> your shell startup script (likely .bashrc or .profile) or your
> ~/.MacOSX/environment.plist to include ~/bin in your PATH.
>
>
> | 16,770 |
48,791,900 | I'm using the python api to upload apks, mapping files and release note texts.
See <https://developers.google.com/resources/api-libraries/documentation/androidpublisher/v2/python/latest/androidpublisher_v2.edits.html>
I'm using the `apks().upload()`, `deobfuscationfiles().upload()` and `apklistings().update()` APIs to upload the new apks, mapping files and changelogs, respectively.
Then I call `tracks().update()` to assign the uploaded apks to the *production* track.
Finally I call `commit()` to finalize the edit.
This, however, **immediately** publishes the new apks.
What I want is for the release manager to have the final confirmation on the new release. So there should be a manual review/publish step like in a manual release.
Is this possible using the API? Am I using the wrong API calls?
One way of course would be to upload on the beta/alpha track and then have the release manager move it to production. But is there another way? | 2018/02/14 | [
"https://Stackoverflow.com/questions/48791900",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/261009/"
] | You need to use the new version of Google Play Developer API v3, where you can now set "status" (completed, draft, stopped, inProgress) for Edits.tracks.
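For illustration, a minimal sketch with the v3 Python client (the `service` object, `edit_id`, `version_code`, and package name are assumed to come from your existing upload flow; `status='draft'` is what keeps the release unpublished until someone promotes it in the console):

```
body = {
    'releases': [{
        'versionCodes': [str(version_code)],
        'status': 'draft',  # the release manager publishes it manually later
    }]
}
service.edits().tracks().update(
    editId=edit_id, packageName='com.example.app',
    track='production', body=body).execute()
```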
<https://developers.google.com/resources/api-libraries/documentation/androidpublisher/v3/python/latest/androidpublisher_v3.edits.tracks.html> | It isn't possible to have a manual review step at the moment. | 16,772 |
23,768,865 | I am trying to list some of the software installed on a PC by using:
```
Get-WmiObject -Class Win32_Product |
Select-Object -Property name,version |
Where-Object {$_.name -like '*Java*'}
```
It works, but when I added more names in `Where-Object` it gave me no results and no error.
```
Get-WmiObject -Class Win32_Product |
Select-Object -Property name,version |
Where-Object {$_.name -like '*Java*','*python*','*adobe*','*access*'}
```
Why does it only work with one name? | 2014/05/20 | [
"https://Stackoverflow.com/questions/23768865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3303155/"
] | I don't think `-like` will take an array on the right hand side. Try a regex instead:
```
Where-Object {$_.name -match 'Java|python|adobe|access'}
``` | The -Like operator takes a string argument (not a string array), so whatever you give it will get cast as [string]. If you cast the arguments you gave it to a string:
```
[string]('*Java*','*python*','*adobe*','*access*')
```
you get:
```
*Java* *python* *adobe* *access*
```
and that's what you're trying to match against (and you don't have any file names that look like that).
The easiest way to do this is switch to the -match operator, and an alternating regex:
```
Where-Object { $_.name -match 'Java|python|adobe|access' }
``` | 16,773 |
60,250,462 | I have a large list that contains usernames (about 60,000 strings). Each username represents a submission. Some users have made only one submission, i.e. they are **"one-time users"**, so their username appears only once in this list. Others have made multiple submissions (**returning users**), so their username can appear many times in this list. I want to count how many of these one-time users there are and get some stats based on that. Here are the variables I'm currently grabbing:
```py
import time
start_time = time.time()
users = ["UserA", "UserB", "UserC", "UserA", "UserA", "UserA", "UserB", "UserB", "UserD"] # ...just a sample, this goes up to ~60,000 elements
print(f"1: got users list. Time elapsed: {time.time() - start_time}")
one_time_users = [user for user in users if users.count(user) == 1]
print(f"2: got one-time users list. Time elapsed: {time.time() - start_time}")
returning_users = [user for user in users if users.count(user) != 1]
print(f"3: got returning users list. Time elapsed: {time.time() - start_time}")
frequencies = [users.count(user) for user in set(users)]
print(f"4: calculated frequencies list. Time elapsed: {time.time() - start_time}")
sorted_frequencies = sorted(frequencies, reverse=True) # Descending order, largest first
print(f"5: got sorted frequencies list. Time elapsed: {time.time() - start_time}")
top_ten_frequencies_sum = sum(sorted_frequencies[:10])
print(f"6: got top 10 frequencies sum. Time elapsed: {time.time() - start_time}")
top_ten_frequencies_percentage = round(((top_ten_frequencies_sum / len(users)) * 100), 2)
print(f"7: got top 10 frequencies percentage. Time elapsed: {time.time() - start_time}")
average_submissions_per_user = round(len(users) / len(set(users)), 2)
print(f"8: got average submissions per user. Time elapsed: {time.time() - start_time}")
```
This operation is **very slow.** Here is my output:
```
1: got users list. Time elapsed: 0.41695237159729004
2: got one-time users list. Time elapsed: 48.26731848716736
3: got returning users list. Time elapsed: 101.88410639762878
4: calculated frequencies list. Time elapsed: 104.39784860610962
5: got sorted frequencies list. Time elapsed: 104.39850783348083
6: got top 10 frequencies sum. Time elapsed: 104.39853930473328
7: got top 10 frequencies percentage. Time elapsed: 104.39856457710266
8: got average submissions per user. Time elapsed: 104.4005241394043
```
As you can see the list comprehensions are taking the most time. Can someone explain to me:
1. Why it's so slow in terms of time complexity.
2. Whether [collections.Counter()](https://docs.python.org/3/library/collections.html#collections.Counter) will be a better choice and how best to apply it here.
Thank you! | 2020/02/16 | [
"https://Stackoverflow.com/questions/60250462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5900486/"
] | You can improve this by using a [Counter](https://docs.python.org/2/library/collections.html#collections.Counter). In `2.`, for each element you iterate over the whole list, and you do this multiple times for the same user if a user occurs more than once.
Note that when you use `users.count(user)` you iterate over the whole list of users to count how many times the user occurs. This means quadratic complexity with respect to the length of the list.
Using a counter, you can solve it in linear complexity.
Also, in `4.` you are iterating and counting again, while you could just remove the ones you just computed from the whole users.
Example.
```
>>> one_time_users = {user for user,cnt in Counter(users).items() if cnt == 1}
>>> one_time_users
{'UserC', 'UserD'}
>>> returning_users = set(users) - one_time_users
>>> returning_users
{'UserB', 'UserA'}
```
or more directly
```
one_time_users, returning_users = [], []
for user, cnt in Counter(users).items():
    if cnt == 1:
        one_time_users.append(user)
    else:
        returning_users.append(user)
```
Here is a comparison of `l.count(el)` vs `Counter(l)`.
```
>>> l = random.choices(range(500), k=60000)
>>> timeit.timeit('[el for el in l if l.count(el) == 1]',setup='from __main__ import l',number=1)
71.70409394335002
>>> timeit.timeit('[el for el,cnt in Counter(l).items() if cnt == 1]',setup='from __main__ import l, Counter',number=1)
0.005492186639457941
``` | As mentioned in your own comment, Counter is significantly faster here. You can see from your own timing that creating a set of the results takes around 10ms to complete (#8->#9), which is roughly the time Counter will take as well.
With Counter you look at each of the N elements once, and then at each unique element (at most N) once.
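For illustration, a minimal single-pass sketch applying this to the stats from the question (variable names are my own):

```
from collections import Counter

users = ["UserA", "UserB", "UserC", "UserA", "UserA", "UserA", "UserB", "UserB", "UserD"]
counts = Counter(users)                                # one pass over the list: O(N)
one_time = sum(1 for c in counts.values() if c == 1)
top_ten_share = sum(c for _, c in counts.most_common(10)) / len(users) * 100
print(one_time, round(top_ten_share, 2))
```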
When you use `.count()` you iterate through the entire list (a fast implementation, but still the entire list). You do this for every element, so you look at each of N elements N times.
Every time your list gets 1000x bigger you require 1000x the time for the Counter method, but 1000000x for .count versions. | 16,774 |
56,206,422 | Try to pass the dictionary into the function to print them out, but it throws error: most\_courses() takes 0 positional arguments but 1 was given
```
def most_courses(**diction):
    for key, value in diction.items():
        print("{} {}".format(key, value))
most_courses({'Andrew Chalkley': ['jQuery Basics', 'Node.js Basics'],'Kenneth Love': ['Python Basics', 'Python Collections']})
```
I have used \*\*kwargs, but why can't Python unpack the dictionary? | 2019/05/19 | [
"https://Stackoverflow.com/questions/56206422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11523169/"
] | When you pass your dict as a param, you can either do it as you wrote:
```
most_courses({'Andrew Chalkley': ...
```
in this case `most_courses` should accept a "positional" param. That's why it raises: `most_courses() takes 0 positional arguments but 1 was given`.
You gave it 1 positional param, while `most_courses` (which looks like: `most_courses(**d)`) isn't expecting any.
You should either do:
```
most_courses(**{'Andrew Chalkley': ['jQuery Basics', 'Node.js Basics'],'Kenneth Love': ['Python Basics', 'Python Collections']})
```
OR change the signature of your method:
```
def most_courses(diction):
    for key, value in diction.items():
        print("{} {}".format(key, value))
``` | There is no reason to use `**` here. You want to pass a dict and have it processed as a dict. Just use a standard argument.
```
def most_courses(diction):
``` | 16,775 |
61,853,196 | I have a python file that contains these elements:
```
startaddress = 768
length = 64
subChId = 6
protection = 1
bitrate = 64
```
and I want to convert them to a single dictionary string like this:
```
{"startaddress":"768","length":"64","subChId":"6","protection":"1","bitrate":"64"}
```
so I can read the values individually using json.loads().
How can I do that? | 2020/05/17 | [
"https://Stackoverflow.com/questions/61853196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3150586/"
] | Let a package manager like [brew](https://brew.sh/) do the work for you:
```sh
brew install deno
```
Easy to install, easy to upgrade.
Check the [official repo](https://github.com/denoland/deno_install) for all the installation options. | To find the installation options, use the official documentation: <https://deno.land/#installation>.
For macOS the following installation options are available.
**01.Using Shell**
```
curl -fsSL https://deno.land/x/install/install.sh | sh
```
**02.Using [Homebrew](https://brew.sh/)**
```
brew install deno
```
**03.Using Cargo (macOS)**
```
cargo install deno
``` | 16,777 |