qid (int64) | question (string) | date (string) | metadata (sequence) | response_j (string) | response_k (string) | __index_level_0__ (int64)
---|---|---|---|---|---|---|
58,117,763 | First of all, sorry for any newbie mistakes that I've made. But I couldn't figure out and couldn't find a source specifically for [deeppavlov (NER)](http://docs.deeppavlov.ai/en/master/features/models/ner.html) library. I'm trying to train ner\_ontonotes\_bert\_mult as described [here](http://docs.deeppavlov.ai/en/master/features/models/ner.html#train-and-use-the-model). I guess it can be trained from its checkpoint to make it recognize some specific patterns like;
```
"Round 23/22; 24,9 x 12,2 x 12,3"
```
as
```
[[['Round', '23/22', ';', '24,9 x 12,2 x 12,3']], [['B-PRODUCT', 'I-PRODUCT', 'B-QUANTITY']]]
```
My questions are (before I dig into details):
1. ~~Is it possible?~~ And I realized I can't use samples like " Round 23/22; 24,9 x 12,2 x 12,3 ". I need them to be in full sentences.
2. Where can I find more info about it specifically related to deeppavlov's model(s)?
3. How can I train pre-trained deeppavlov model to recognize my custom patterns?
I don't even understand if it is possible but I've decided to give it a go and prepared 3 `.txt` files as `"train.txt"`, `"test.txt"` and `"validation.txt"` as [described in the deeppavlov web page](http://docs.deeppavlov.ai/en/master/features/models/ner.html#training-data). And I put them under the folder `'~/.deeppavlov/downloads/ontonotes/ner_ontonotes_bert_mult'`. My dataset looks like this:
```
Round B-PRODUCT
23/22 I-PRODUCT
24,9 x 12,2 x 12,3 B-QUANTITY
Ring B-PRODUCT
HDFAA I-PRODUCT
12,7 x 10 B-QUANTITY
```
and so on... This is the code I am using to train it:
```
import os
# Force tensorflow to use CPU instead of GPU.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
from deeppavlov import configs, train_model
from deeppavlov.core.commands.utils import parse_config
config_dict = parse_config(configs.ner.ner_ontonotes_bert_mult)
print(config_dict['dataset_reader']['data_path'])
from deeppavlov import configs, train_model
ner_model = train_model(configs.ner.ner_ontonotes_bert_mult)
```
But I am getting this error:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3] rhs shape= [37]
[[{{node save/Assign_280}}]]
```
Full traceback:
```
2019-09-26 15:50:27.63 ERROR in 'deeppavlov.core.common.params'['params'] at line 110: Exception in <class 'deeppavlov.models.bert.bert_ner.BertNerModel'>
Traceback (most recent call last):
File "/home/custom_user/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
return fn(*args)
File "/home/custom_user/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/custom_user/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3] rhs shape= [37]
[[{{node save/Assign_280}}]]
```
**UPDATE 2:**
=============
And I realized I can't use samples like " Round 23/22; 24,9 x 12,2 x 12,3 ". I need them to be in full sentences.
**UPDATE:**
===========
It seems like this is happening due to my dataset. My custom dataset only has 3 tags (`B-PRODUCT`, `I-PRODUCT` and `B-QUANTITY`) but the pre-trained model has 37 of them. All available tags can be found [here](http://docs.deeppavlov.ai/en/master/features/models/ner.html#multilingual-bert-zero-shot-transfer) under the sentence `"The list of available tags and their descriptions are presented below."`: 18 main tags (36 with the `B` and `I` prefixes), plus the `O` tag ("O" means the absence of an entity). **All 37 tags need to be present in the dataset.** I was able to get past that error by adding dummy sentences tagged with the missing tags. This is a terrible workaround since I'm willingly disrupting my own dataset. I'm still looking for a 'logical' way to train...
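For illustration, one of those dummy padding sentences looks like this in the same format (the tag names below are just examples of OntoNotes tags that were missing from my real data):
```
John B-PERSON
flew O
to O
Paris B-GPE
. O
```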
PS: Now I am getting this error.
```
Traceback (most recent call last):
File "/home/custom_user/.PyCharm2019.2/config/scratches/scratch_9.py", line 13, in <module>
ner_model = train_model(configs.ner.ner_ontonotes_bert_mult)
File "/home/custom_user/.local/lib/python3.6/site-packages/deeppavlov/__init__.py", line 31, in train_model
train_evaluate_model_from_config(config, download=download, recursive=recursive)
File "/home/custom_user/.local/lib/python3.6/site-packages/deeppavlov/core/commands/train.py", line 121, in train_evaluate_model_from_config
trainer.train(iterator)
File "/home/custom_user/.local/lib/python3.6/site-packages/deeppavlov/core/trainers/nn_trainer.py", line 294, in train
self.train_on_batches(iterator)
File "/home/custom_user/.local/lib/python3.6/site-packages/deeppavlov/core/trainers/nn_trainer.py", line 234, in train_on_batches
self._validate(iterator)
File "/home/custom_user/.local/lib/python3.6/site-packages/deeppavlov/core/trainers/nn_trainer.py", line 150, in _validate
metrics = list(report['metrics'].items())
AttributeError: 'NoneType' object has no attribute 'items'
``` | 2019/09/26 | [
"https://Stackoverflow.com/questions/58117763",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10183880/"
] | There are at least two problems here:
1. instead of `validation.txt` there should be a `valid.txt` file;
2. you are trying to retrain a model that was pretrained on a different dataset with a different set of tags, which is not necessary.
To train your model from scratch you can do something like:
```py
import json
from deeppavlov import configs, build_model, train_model
with configs.ner.ner_ontonotes_bert_mult.open(encoding='utf8') as f:
ner_config = json.load(f)
ner_config['dataset_reader']['data_path'] = '~/my_data_dir/' # directory with train.txt, valid.txt and test.txt files
ner_config['metadata']['variables']['NER_PATH'] = '~/where_to_save_the_model/'
ner_config['metadata']['download'] = [ner_config['metadata']['download'][-1]] # do not download the pretrained ontonotes model
ner_model = train_model(ner_config, download=True)
```
You can also pass texts that you tokenized beforehand:
```py
ner_model([['Round', '23/22', ';', '24,9 x 12,2 x 12,3']])
``` | I tried deeppavlov training and successfully trained the 'ner' model.
I also got the same error at first while training, then I overcame it by researching more about it.
Things to know before training:
-> you can find the 'ner\_ontonotes\_bert\_multi.json' config file link in the deeppavlov docs, which gives the dataset path, pretrained model path, dataset\_reader and chain pipe to train
-> there is a pretrained model in the directory mentioned in the config; by default 'C:/users/{user\_name}/.deeppavlov/' is the root directory and pretrained models are stored in the 'models' subdirectory
-> when you start training, the already trained model will be modified, which means training just tries to improve the pre-trained model
so to train and build your own model (from scratch), simply delete the 'models' subdirectory from the '.deeppavlov' path and execute the training. A minimal sketch of that cleanup step (the exact path is an assumption based on the defaults mentioned above):
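```
import shutil
from pathlib import Path

# assumed default DeepPavlov root directory; adjust for your OS and user
models_dir = Path.home() / '.deeppavlov' / 'models'
if models_dir.exists():
    shutil.rmtree(models_dir)  # delete pretrained models so training builds a fresh one
``` | 4,466 |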
33,114,202 | I'm currently testing docker on a Debian 8.2 server and I'm seeking help from more experienced people.
I've followed the official documentation to install docker (<http://docs.docker.com/installation/debian/>) and I'm now trying docker compose (<https://docs.docker.com/compose/>).
I've installed compose using pip as described here on the official documentation ("pip install -U docker-compose")
Running "docker-compose" gives me the help screen, but "docker-compose up" doesn't work and gives me a lot of errors.
Any idea on how I can make this to work?
Am I missing something? A pre-requisite maybe?
```
root@server:~/dockerfiles/compose-test# docker-compose up
Traceback (most recent call last):
File "/usr/local/bin/docker-compose", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/compose/cli/main.py", line 39, in main
command.sys_dispatch()
File "/usr/local/lib/python2.7/dist-packages/compose/cli/docopt_command.py", line 21, in sys_dispatch
self.dispatch(sys.argv[1:], None)
File "/usr/local/lib/python2.7/dist-packages/compose/cli/command.py", line 27, in dispatch
super(Command, self).dispatch(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/compose/cli/docopt_command.py", line 24, in dispatch
self.perform_command(*self.parse(argv, global_options))
File "/usr/local/lib/python2.7/dist-packages/compose/cli/command.py", line 57, in perform_command
verbose=options.get('--verbose'))
File "/usr/local/lib/python2.7/dist-packages/compose/cli/command.py", line 73, in get_project
config_details = config.find(self.base_dir, config_path)
File "/usr/local/lib/python2.7/dist-packages/compose/config.py", line 107, in find
return ConfigDetails(load_yaml(filename), os.path.dirname(filename), filename)
File "/usr/local/lib/python2.7/dist-packages/compose/config.py", line 558, in load_yaml
return yaml.safe_load(fh)
File "/usr/local/lib/python2.7/dist-packages/yaml/__init__.py", line 93, in safe_load
return load(stream, SafeLoader)
File "/usr/local/lib/python2.7/dist-packages/yaml/__init__.py", line 71, in load
return loader.get_single_data()
File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 37, in get_single_data
node = self.get_single_node()
File "/usr/local/lib/python2.7/dist-packages/yaml/composer.py", line 36, in get_single_node
document = self.compose_document()
File "/usr/local/lib/python2.7/dist-packages/yaml/composer.py", line 55, in compose_document
node = self.compose_node(None, None)
File "/usr/local/lib/python2.7/dist-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/local/lib/python2.7/dist-packages/yaml/composer.py", line 127, in compose_mapping_node
while not self.check_event(MappingEndEvent):
File "/usr/local/lib/python2.7/dist-packages/yaml/parser.py", line 98, in check_event
self.current_event = self.state()
File "/usr/local/lib/python2.7/dist-packages/yaml/parser.py", line 428, in parse_block_mapping_key
if self.check_token(KeyToken):
File "/usr/local/lib/python2.7/dist-packages/yaml/scanner.py", line 116, in check_token
self.fetch_more_tokens()
File "/usr/local/lib/python2.7/dist-packages/yaml/scanner.py", line 220, in fetch_more_tokens
return self.fetch_value()
File "/usr/local/lib/python2.7/dist-packages/yaml/scanner.py", line 580, in fetch_value
self.get_mark())
yaml.scanner.ScannerError: mapping values are not allowed here
in "./docker-compose.yml", line 3, column 8
root@server:~/dockerfiles/compose-test#
```
I'm running docker 1.8.2 and compose 1.4.2 | 2015/10/13 | [
"https://Stackoverflow.com/questions/33114202",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5243755/"
] | You are on the right track. The first approach just needs two things:
* a dot at the beginning to make it [context-specific](http://doc.scrapy.org/en/latest/topics/selectors.html#working-with-relative-xpaths)
* `text()` at the end
Fixed version:
```
selector.xpath('.//div[@class="score unvoted"]/text()').extract()
```
And, FYI, you can make the second option work too by using the [`::text` pseudo-element](http://doc.scrapy.org/en/latest/topics/selectors.html#id1):
```
response.css('div.score.unvoted::text').extract()
``` | this should work -
```
selector.xpath('//div[contains(@class, "score unvoted")]/text()').extract()
``` | 4,467 |
59,711,699 | When I run my python scrapy project, it shows the error `no module named 'requests'`
So I type `pip install requests`
and the terminal shows this information:
```
Requirement already satisfied: requests in ./Library/Python/2.7/lib/python/site-packages (2.22.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in ./Library/Python/2.7/lib/python/site-packages (from requests) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in ./Library/Python/2.7/lib/python/site-packages (from requests) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in ./Library/Python/2.7/lib/python/site-packages (from requests) (1.25.7)
Requirement already satisfied: certifi>=2017.4.17 in ./Library/Python/2.7/lib/python/site-packages (from requests) (2019.11.28)
```
Typing the command `pip list`, I can see `requests 2.22.0`.
I type the command `python --version` to check the python version:
```
python 2.7.16
```
Finally, I run my scrapy project again and still see the same error `no module named 'requests'`.
I have no idea how to fix the error now, any help would be appreciated. Thanks. | 2020/01/13 | [
"https://Stackoverflow.com/questions/59711699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6902961/"
] | Install `python3` and `pip3` and then `pip3 install requests`.
If you are on Ubuntu, `python3` is installed by default;
you should first install `pip3` with `apt install python3-pip` and then run `pip3 install requests`. | If you are using two different versions of Python, that would explain why you can't use your module.
To install the module on Python 3, try:
```
pip3 install requests
```
And make sure you are using the correct version. A quick way to see which interpreter a script actually runs under (a generic sanity check, not specific to scrapy):
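```
import sys
print(sys.executable)  # path of the interpreter that is actually running
print(sys.version)     # its version, e.g. 2.7.16 vs 3.x
``` | 4,468 |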
70,192,924 | I have a discord bot running on a python script, and its token is stored in a `.txt` file. If I read from the file using:
```
with open('Stored Discord Token.txt') as storedToken:
TOKEN = storedToken.readlines()
```
I can get the discord bot token. The problem is that the discord bot token looks like this:
`[' <token> ']`
This causes an error when trying to run the script, and the bot fails to connect, as it is an invalid token:
```
discord.errors.LoginFailure: Improper token has been passed.
```
How do I remove the square brackets, `'`s and spaces from the list containing the token?
---
**TL;DR**: How to remove `[`, `]`, `'`, and `spaces` from a single item list? | 2021/12/02 | [
"https://Stackoverflow.com/questions/70192924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12966704/"
] | First of all, `read()` will just return the whole file contents as a string, so you could use `TOKEN = storedToken.read()`.
Lists in Python can be accessed using `[index]`, so to access the first line in the file you can do `TOKEN = storedToken.readlines()[0]`. If, say, you wanted to access the `n`th line you could do `storedToken.readlines()[n]`, where `n` is an `int`. Putting this together, a minimal sketch (the `.strip()` here is an extra assumption, to drop the surrounding whitespace mentioned in the question):
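```
with open('Stored Discord Token.txt') as storedToken:
    TOKEN = storedToken.read().strip()  # whole file as one string, outer whitespace removed
``` | As @Brian suggested, slicing out the substring solves the problem. If we simply add one line of code, like this: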
```
with open('Stored Discord Token.txt') as file:
fileContents = file.readlines()
TOKEN = fileContents[-1]
```
we remove the `[`, `]`, and `'` characters, and can now successfully pass that string as the token. | 4,471 |
56,009,890 | Authors of an xml document did not include all the text inside an element that will be converted to a hyperlink. I would like to process or pre-process the xml to include the necessary text. I find this hard to describe but a simple example should show what I'm attempting.
I'm using XSLT 2.0. I already do regular expression processing for various situations but can't figure this out.
I know how to do this with perl/python regular expression but I can't figure out how to approach this with XSLT.
Here is a 'very' simplified xml from an author, in which they left out the ' (Sheet 3)' from the glink element:
```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<root>
<para>
Go look at figure <glink refid="1">Figure 22</glink> (Sheet 3). Then go do something else.
</para>
</root>
```
Here is what I'd like it to convert to where the ' (Sheet 3)' is now inside the glink tag:
```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<root>
<para>
Go look at figure <glink refid="1">Figure 22 (Sheet 3)</glink>. Then go do something else.
</para>
</root>
```
The case when this conversion should happen is when there is a glink element followed by (this regular expression):
```
\s\(Sheet \d\)
```
I currently have 2 XSLTs. The first pre-processes the XML to convert a number of other situations (using regular expressions/xsl:analyze-string). The second converts the pre-processed XML to HTML; it has a template that handles glink elements and turns them into hyperlinks, but the hyperlink should include the Sheet information.
I would assume that it is easier to pre-process this first and leave the 2nd XSLT alone, but I always appreciate better ways.
Thank you for your time. | 2019/05/06 | [
"https://Stackoverflow.com/questions/56009890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/107690/"
] | The existing answer has the right approach but I would sharpen the regular expression pattern and the match patterns:
```
<xsl:param name="pattern" as="xs:string">\s\(Sheet \d\)</xsl:param>
<xsl:variable name="pattern2" as="xs:string" select="'^' || $pattern"/>
<xsl:variable name="pattern3" as="xs:string" select="'^(' || $pattern || ')(.*)'"/>
<xsl:template match="glink[@refid][following-sibling::node()[1][self::text()[matches(., $pattern2)]]]">
<xsl:copy>
<xsl:apply-templates select="@*"/>
<xsl:value-of select=". || replace(following-sibling::node()[1], $pattern3, '$1', 's')"/>
</xsl:copy>
</xsl:template>
<xsl:template match="text()[preceding-sibling::node()[1][self::glink[@refid]]][matches(., $pattern2)]">
<xsl:value-of select="replace(., $pattern3, '$2', 's')"/>
</xsl:template>
```
<https://xsltfiddle.liberty-development.net/bFN1y9z/1>
Otherwise I think the matches and replacements happen for more than a `glink` followed (directly?) by that pattern, as you can see in <https://xsltfiddle.liberty-development.net/bFN1y9z/2>.
The code I posted uses XPath 3.1's `||` string concatenation operator but if an XSLT 2 processor is the target that could of course be replaced with a normal `concat` function call. | You can use these two templates in combination with the *Identity template*:
```
<xsl:template match="glink">
<xsl:copy>
<xsl:copy-of select="@*|text()" />
<xsl:text> </xsl:text>
<xsl:value-of select="normalize-space(replace(following::text()[1],'\s(\(Sheet \d\)).*',' $1'))" />
</xsl:copy>
</xsl:template>
<xsl:template match="text()[preceding-sibling::glink]">
<xsl:value-of select="normalize-space(replace(.,'\s\(Sheet \d\)(.*)',' $1'))" />
</xsl:template>
```
The first one includes the `(Sheet 3)` string into `glink` and the second one excludes `(Sheet 3)` from the following `text()` node.
**The result is:**
```
<root>
<para>
Go look at figure <glink refid="1">Figure 22 (Sheet 3)</glink>. Then go do something else.</para>
</root>
``` | 4,473 |
64,026,529 | I'm trying to accomplish some basic image processing. Here is my algorithm:
Take the red value of the n-th pixel, the green value of the (n+1)-th and the blue value of the (n+2)-th pixel in each row, and create a new image from these values.
[Illustration of the algorithm](https://i.stack.imgur.com/uFwT2.png)
Here is my example code in python :
```
import glob
import ntpath
import time
from multiprocessing.pool import ThreadPool as Pool
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
images = glob.glob('model/*.png')
pool_size = 17
def worker(image_file):
try:
new_image = np.zeros((2400, 1280, 3), dtype=np.uint8)
image_name = ntpath.basename(image_file)
print(f'Processing [{image_name}]')
image = Image.open(image_file)
data = np.asarray(image)
for i in range(0, 2399):
for j in range(0, 1279):
pix_x = j * 3 + 1
red = data[i, pix_x - 1][0]
green = data[i, pix_x][1]
blue = data[i, pix_x + 1][2]
new_image[i, j] = [red, green, blue]
im = Image.fromarray(new_image)
im.save(f'export/{image_name}')
except:
print('error with item')
pool = Pool(pool_size)
for image_file in images:
pool.apply_async(worker, (image_file,))
pool.close()
pool.join()
```
My input and output images are in RGB format. My code is taking 5 seconds for every image. I'm open to any ideas for optimizing this task.
Here is example input and output images :
[Input Image](https://i.stack.imgur.com/84jvb.png) [ 3840 x 2400 ]
[Output Image](https://i.stack.imgur.com/kGn4L.png) [ 1280 x 2400 ]
"https://Stackoverflow.com/questions/64026529",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14326860/"
] | Here is an approach:
```
import cv2
import numpy as np
# Load input image
im = cv2.imread('input.png')
# Calculate new first layer - it is every 3rd pixel of the first layer of im
n1 = im[:, ::3, 0]
# Calculate new second layer - it is every 3rd pixel of the second layer of im, starting with an offset of 1 pixel
n2 = im[:, 1::3, 1]
# Calculate new third layer - it is every 3rd pixel of the third layer of im, starting with an offset of 2 pixels
n3 = im[:, 2::3, 2]
# Now stack the three new layers to make a new output image
res = np.dstack((n1,n2,n3))
``` | As far as I understood from the question, you want to shift the pixel values of each channel of the input image in the output image. So, here is my approach.
```
im = cv2.cvtColor(cv2.imread('my_image.jpg'), cv2.COLOR_BGR2RGB)
im = np.pad(im, [(3, 3),(3,3),(0,0)], mode='constant', constant_values=0) # Add padding for enabling the shifting process later
r= im[:,:,0]
g= im[:,:,1]
g = np.delete(g,np.s_[-1],axis=1) # remove the last column
temp_pad = np.zeros(shape=(g.shape[0],1)) # removed part
g = np.concatenate((temp_pad,g),axis=1) # put the removed part back
b = im[:,:,2]
b = np.delete(b,np.s_[-2::],axis=1) # remove the last columns
temp_pad = np.zeros(shape=(b.shape[0],2)) # removed parts
b = np.concatenate((temp_pad,b),axis=1) # put the removed parts back
new_im = np.dstack((r,g,b)) # Merge the channels
new_im = new_im[3:-3,3:-3,:]/np.amax(new_im)#*255 # Remove the padding
```
Basically, I achieved the shifting by padding and merging the green and blue channels. Let me know if this is what you are looking for. Good luck with it :) | 4,476 |
62,281,696 | I have been doing some googling but I can't really find a good python3 solution to my problem. Given the following HTML code, how do I extract 2019, 0.7 and 4.50% using python3?
```
<td rowspan='2' style='vertical-align:middle'>2019</td><td rowspan='2' style='vertical-align:middle;font-weight:bold;'>4.50%</td><td rowspan='2' style='vertical-align:middle;font-weight:bold;'>SGD 0.7</td> <td>SGD0.2 </td>
``` | 2020/06/09 | [
"https://Stackoverflow.com/questions/62281696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3702643/"
] | A solution using [`BeautifulSoup`](https://www.crummy.com/software/BeautifulSoup/bs4/doc/):
```
from bs4 import BeautifulSoup
txt = '''<td rowspan='2' style='vertical-align:middle'>2019</td><td rowspan='2' style='vertical-align:middle;font-weight:bold;'>4.50%</td><td rowspan='2' style='vertical-align:middle;font-weight:bold;'>SGD 0.7</td> <td>SGD0.2 </td>'''
soup = BeautifulSoup(txt, 'html.parser')
info_1, info_2, info_3, *_ = soup.select('td')
info_1 = info_1.get_text(strip=True)
info_2 = info_2.get_text(strip=True)
info_3 = info_3.get_text(strip=True).split()[-1]
print(info_1, info_2, info_3)
```
Prints:
```
2019 4.50% 0.7
``` | I think this might be helpful even if it does not exactly answer your question:
```
from html.parser import HTMLParser
class MyHTMLParser(HTMLParser):
def handle_data(self, data):
print(data)
parser = MyHTMLParser()
parser.feed("<Your HTML here>")
```
For your particular case this will return:
```
2019
4.50%
SGD 0.7
SGD0.2
``` | 4,477 |
58,736,295 | I have the following `boto3` draft script
```py
#!/usr/bin/env python3
import boto3
client = boto3.client('athena')
BUCKETS='buckets.txt'
DATABASE='some_db'
QUERY_STR="""CREATE EXTERNAL TABLE IF NOT EXISTS some_db.{}(
BucketOwner STRING,
Bucket STRING,
RequestDateTime STRING,
RemoteIP STRING,
Requester STRING,
RequestID STRING,
Operation STRING,
Key STRING,
RequestURI_operation STRING,
RequestURI_key STRING,
RequestURI_httpProtoversion STRING,
HTTPstatus STRING,
ErrorCode STRING,
BytesSent BIGINT,
ObjectSize BIGINT,
TotalTime STRING,
TurnAroundTime STRING,
Referrer STRING,
UserAgent STRING,
VersionId STRING,
HostId STRING,
SigV STRING,
CipherSuite STRING,
AuthType STRING,
EndPoint STRING,
TLSVersion STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1', 'input.regex' = '([^ ]*) ([^ ]*) \\[(.*?)\\] ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) \\\"([^ ]*) ([^ ]*) (- |[^ ]*)\\\" (-|[0-9]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\") ([^ ]*)(?: ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*))?.*$' )
LOCATION 's3://my-bucket/{}'"""
with open(BUCKETS, 'r') as f:
lines = f.readlines()
for line in lines:
query_string = QUERY_STR.format(line, line)
response = client.create_named_query(
Name=line,
Database=DATABASE,
QueryString=QUERY_STR
)
print(response)
```
When executed, all responses come back with status code `200`.
Why am I not able to see the corresponding tables that should have been created?
Shouldn't I be able to (at least) see somewhere those queries stored?
**update1**: I am now trying to actually create the tables via the above queries as follows:
```py
for line in lines:
query_string = QUERY_STR.format(DATABASE, line[:-1].replace('-', '_'), line[:-1])
try:
response1 = client.start_query_execution(
QueryString=query_string,
WorkGroup=WORKGROUP,
QueryExecutionContext={
'Database': DATABASE
},
ResultConfiguration={
'OutputLocation': OUTPUT_BUCKET,
},
)
query_execution_id = response1['ResponseMetadata']['RequestId']
print(query_execution_id)
except Exception as e1:
print(query_string)
raise(e1)
```
Once again, the script does output some query ids (no error seems to take place), nonetheless no table is created.
I have also followed the advice of @John Rotenstein and initialised my `boto3` client as follows:
```
client = boto3.client('athena', region_name='us-east-1')
``` | 2019/11/06 | [
"https://Stackoverflow.com/questions/58736295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2409793/"
] | First of all, `response` simply tells you that your request has been successfully submitted. The method `create_named_query()` creates a snippet of your query, which can then be seen/accessed in the AWS Athena console in the **Saved Queries** tab.
[Screenshot of the Saved Queries tab](https://i.stack.imgur.com/3rZ5I.png)
It seems to me that you want to create a table using `boto3`. If that is the case, you need to use the [`start_query_execution()`](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena.html#Athena.Client.start_query_execution) method.
>
> Runs the SQL query statements contained in the Query . Requires you to have access to the workgroup in which the query ran.
>
>
>
Having response 200 out of `start_query_execution` doesn't guarantee that your query will get executed successfully. As I understand, this method does some simple pre-execution checks to validate the syntax of the query. However, there are other things that could fail your query at run time, for example if you try to create a table in a database that doesn't exist, or if you try to create a table definition in a database to which you don't have access.
Here is an example where I used your query string, formatted with some random name for the table.
[Screenshot of the example query run](https://i.stack.imgur.com/mRHFb.png)
I got response 200 and got some value in `response1['ResponseMetadata']['RequestId']`. However, since I don't have `some_db` in the AWS Glue catalog, this query failed at run time and thus no table was created.
Here is how you can track query execution within boto3
```py
import time
response1 = client.start_query_execution(
QueryString=query_string,
WorkGroup=WORKGROUP,
QueryExecutionContext={
'Database': DATABASE
},
ResultConfiguration={
'OutputLocation': OUTPUT_BUCKET,
},
)
query_execution_id = response1['QueryExecutionId']  # the execution id is returned directly by start_query_execution
while True:
time.sleep(1)
response_2 = client.get_query_execution(
QueryExecutionId=query_execution_id
)
    query_status = response_2['QueryExecution']['Status']['State']  # e.g. QUEUED, RUNNING, SUCCEEDED, FAILED
print(query_status)
    if query_status not in ["QUEUED", "RUNNING"]:  # SUCCEEDED, FAILED and CANCELLED are terminal states
break
``` | To reproduce your situation, I did the following:
* In the Athena console, I ran:
```sql
CREATE DATABASE foo
```
* In the Athena console, I selected `foo` in the Database drop-down
* To start things simple, I ran this Python code:
```py
import boto3
athena_client = boto3.client('athena', region_name='ap-southeast-2') # Change as necessary
QUERY_STR="""
CREATE EXTERNAL TABLE IF NOT EXISTS foo.bar(id INT)
LOCATION 's3://my-bucket/input-files/'
"""
response = athena_client.start_query_execution(
QueryString=QUERY_STR,
QueryExecutionContext={'Database': 'foo'},
ResultConfiguration={'OutputLocation': 's3://my-bucket/athena-out/'}
)
```
* I then went to the Athena console, did a refresh, and confirmed that the `bar` table was created
**Suggestion:** Try the above to confirm that it works for you, too!
I then ran your code, using the `start_query_execution` version of your code (shown in your second code block). I had to make some changes:
* I didn't have a `buckets.txt` file, so I just provided a list of names
* Your code doesn't show the content of `OUTPUT_BUCKET`, so I used `s3://my-bucket/athena-output/` (Does that match the format that *you* used?)
* Your code uses `QUERY_STR.format(DATABASE...` but there was no `{}` in the `QUERY_STR` where the database name would be inserted, so I removed `DATABASE` as an input to the format variable
* I did *not* provide a value for `WORKGROUP`
**It all ran fine**, creating multiple tables.
So, check the above bullet-points to see if it caused a problem for you (such as replacing the Database name in the `format()` statement). | 4,478 |
32,954,110 | I have the following string in python:
```
foo = 'a_b_c'
```
How do I split the string into 2 parts, `'a_b'` and `'c'`? I.e., I want to split at the second `'_'`.
`str.split('_')` splits into 3 parts: `'a'`, `'b'` and `'c'`. | 2015/10/05 | [
"https://Stackoverflow.com/questions/32954110",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/308827/"
] | Use the [`str.rsplit()` method](https://docs.python.org/2/library/stdtypes.html#str.rsplit) with a limit:
```
part1, part2 = foo.rsplit('_', 1)
```
`str.rsplit()` splits from the right-hand-side, and the limit (second argument) tells it to only split once.
Alternatively, use [`str.rpartition()`](https://docs.python.org/2/library/stdtypes.html#str.rpartition):
```
part1, delimiter, part2 = foo.rpartition('_')
```
This includes the delimiter as a return value.
Demo:
```
>>> foo = 'a_b_c'
>>> foo.rsplit('_', 1)
['a_b', 'c']
>>> foo.rpartition('_')
('a_b', '_', 'c')
``` | ```
import re
x = "a_b_c"
print re.split(r"_(?!.*_)",x)
```
You can do it with `re`. Here, using a negative lookahead, we split on the `_` that is not followed by any other `_`. For the example above this prints:
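```
['a_b', 'c']
``` | 4,479 |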
19,390,828 | I am working on a tree-based program in python. I need to rewrite this function using recursion and eliminate all of these for-loops:
Example of my function:
```
def items_on_level(full_tree, level):
for key0, value0 in full_tree.items():
for key1, value1 in value0.items():
for key2, value2 in value1.items():
for key3, value3 in value2.items():
print(key3)
```
Input:
- level - level of my recursion tree
- full\_tree - dict with parents and children
```
{<Category: test>: {<Category: dkddk>: {}, <Category: test2>: {<Category: test3>: {}, <Category: test5>: {<Category: kfpokpok>: {}}}}
```
The function should return all the objects on the given level.
Help! Thanks! | 2013/10/15 | [
"https://Stackoverflow.com/questions/19390828",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2863834/"
] | ```
import itertools

def itemsOnLevel(root, level):
if not level:
return list(root.keys())
else:
return list(itertools.chain.from_iterable([itemsOnLevel(v, level-1) for k,v in root.items()]))
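
# Example with a plain-dict tree shaped like the one in the question:
# tree = {'test': {'dkddk': {}, 'test2': {'test3': {}, 'test5': {'kfpokpok': {}}}}}
# itemsOnLevel(tree, 1)  ->  ['dkddk', 'test2']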
``` | ```
itemsOnLevel = lambda r, l: (
lambda f, r, l: f (f, r, l) ) (
lambda f, r, l: [_ for _ in r.keys () ] if not l else
[i for k in r.values () for i in f (f, k, l - 1) ], r, l)
``` | 4,480 |
45,650,904 | I am using celery to do a long-time task. The task will create a subprocess using `subprocess.Popen`. To make the task abortable, I write the code below:
```
from celery.contrib import abortable
@task(bind=True, base=abortable.AbortableTask)
def my_task(self, *args):
p = subprocess.Popen([...])
while True:
try:
p.wait(1)
except subprocess.TimeoutExpired:
if self.is_aborted():
p.terminate()
return
else:
break
# Other codes...
```
I try it in my console and it works well. But when I decide to close the worker by pressing `Ctrl+C`, the program prints out `'worker: Warm shutdown (MainProcess)'` and blocked for a long time, which is not what I expect to be. **It seems that task abortion doesn't happen when a worker is about to shut down.**
From the documentation I know that **if I want to abort a task, I should manually instantiate a `AbortableAsyncResult` using a task id and call its `.abort()` method.** But I can find nowhere to place this code, because it requires the ids of all running tasks, which I have no approach to access.
So, how to invoke `.abort()` for all running tasks when workers are about to shut down? Or is there any alternative?
I am using celery 4.1.0 with python 3.6.2. | 2017/08/12 | [
"https://Stackoverflow.com/questions/45650904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3278171/"
] | You are passing `Int` though the actual type required is `CustomSegmentedControl`. To simply solve this problem just create the `IBOutlet` for your `CustomSegmentedControl` and pass it as parameter to `Button_CustomSegmentValueChanged` method.
```
func SwipedRight(swipe : UISwipeGestureRecognizer){
if currentSelectedView == 1 {
customSegmentOutlet.selectedSegmentIndex = 0
Button_CustomSegmentValueChanged(customSegmentOutlet)
//LoadLoginView()
}
}
``` | Assuming your `CustomSegmentedControl` is a subclass of `UISegmentedControl`, I have modified few lines of code
```
@IBAction func button_CustomSegmentValueChanged(_ sender: UISegmentedControl?) {
// guard sender for nil before use
}
```
and when calling this
```
func swipedRight(swipe : UISwipeGestureRecognizer){
if currentSelectedView == 1 {
let seg = UISegmentedControl()
seg.selectedSegmentIndex = 0
Button_CustomSegmentValueChanged(seg)
}
}
```
if `CustomSegmentedControl` is not subclass of `UISegmentedControl`, change it so. | 4,485 |
1,081,698 | I have a problem upgrading python from 2.4 to 2.6:
I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum").
So what is the right way to migrate/install modules to python2.6? | 2009/07/04 | [
"https://Stackoverflow.com/questions/1081698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/133068/"
] | They are not broken, they are simply not installed. The solution to that is to install them under 2.6. But first we should see if you really should do that...
Yes, Python will when installed replace the python command to the version installed (unless you run it with --alt-install). You don't exactly state what your problem is, so I'm going to guess. Your problem is that many local commands using Python now fail, because they get executed with Python 2.6, and not with Python 2.4. Is that correct?
If that is so, then simply delete /usr/local/bin/python, and make sure /usr/bin/python is a symbolic link to /usr/bin/python2.4. Then you would have to type python2.6 to run python2,6, but that's OK. That's the best way to do it. Then you only need to install the packages **you** need in 2.6.
But if my guess is wrong, and you really need to install all those packages under 2.6, then don't worry too much. First of all, install setuptools. It includes an easy\_install script, and you can then install modules with
```
easy_install <modulename>
```
It will download the module from pypi.python.org and install it. And it will also install any module that is a dependency. easy\_install can install any module that is using distutils as an installer, and not many don't. This will make installing 90% of those modules a breeze.
If the module has a C-component, it will compile it, and then you need the library headers too, and that will be more work, and all you can do there is install them the standard CentOS way.
You shouldn't use symbolic links between versions, because libraries are generally for a particular version. For 2.4 and 2.6 I think the .pyc files are compatible (but I'm not 100% sure), so that may work, but any module that uses C *will* break. And other versions of Python will have incompatible .pyc files as well. And I'm sure that most Python people are not going to help you if you do it. ;-)
In general, I try to keep the system python "clean", i.e. I don't install anything there that isn't installed with the packaging tools. Instead I use virtualenv or buildout to let every application have its own python path where its dependencies live. So every single project I have basically has its own set of libraries. It gets easier that way.
1. If the modules will run under Python 2.6, you can simply create symbolic links to them from the 2.6 site-packages directory to the 2.4 site-packages directory.
2. If they will not run under 2.6, then you may need to re-compile them against 2.6, or install up-to-date versions of them. Just make sure you are using 2.6 when calling `"python setup.py"`
...
You may want to post this on serverfault.com, if you run into additional challenges. | 4,487 |
46,121,057 | I'm new to bash and was tasked with scripting a check for a compliance process.
From bash (or if python is better), I need to script an ssh connection from within the host running the script.
For example:
ssh -l testaccount localhost
But I need to run this 52 times so that it is trapped by an IPS.
When running this string I am prompted for a password and I have to hit enter in order to make the script complete.
Is there a way to include a password or carriage return to act as manual intervention so that I do not have to hit enter each time?
Here's a sample of what I was able to get working, but it only sequenced 30 attempts:
```
#!/bin/bash
i=0
while [ $i -lt 52 ]
do
echo | ssh -l testaccount localhost&
i=$[$i+1]
done
``` | 2017/09/08 | [
"https://Stackoverflow.com/questions/46121057",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8581142/"
] | In contrast to CSS, JS and HTML files which can be [gzipped using dispatcher](https://docs.adobe.com/content/docs/en/dispatcher/disp-config.html), images can be compressed only by reducing quality or resizing them.
It is quite a common case for AEM projects and there are a couple of options to do it; some of them come out-of-the-box and do not even require programming:
* You can extend `DAM Update Asset` with [CreateWebEnabledImageProcess](https://docs.adobe.com/docs/en/aem/6-3/develop/ref/javadoc/com/day/cq/dam/core/process/CreateWebEnabledImageProcess.html) Workflow Process Step. It allows you to generate new image rendition with parameters like size, quality, mime-type. Depending on workflow launcher configuration, this rendition can be generated during creation or modification of assets. You can also trigger the workflow to be run on chosen or all assets.
* In case the `CreateWebEnabledImageProcess` configuration is not sufficient for your requirements, you can implement your own Workflow Process Step and generate the proper rendition programmatically, using for example [ImageHelper](https://docs.adobe.com/docs/en/aem/6-3/develop/ref/javadoc/com/day/cq/commons/ImageHelper.html#saveLayer(com.day.image.Layer,%20java.lang.String,%20double,%20Node,%20java.lang.String,%20boolean)) or some Java framework for image transformation. That might also be needed if you want to generate the compressed images *on the fly*: for example, instead of generating a rendition for each uploaded image, you can implement a servlet attached to proper selectors and image extensions (e.g. `imageName.mobile.png`) which returns the compressed image.
* Eventually, **integration with ImageMagick is possible**, [Adobe documentation](https://docs.adobe.com/docs/en/aem/6-3/develop/extending/assets/best-practices-for-imagemagick.html) describes how it can be achieved using `CommandLineProcess` Workflow Process Step. However, you need to be aware of security vulnerabilities related to this mentioned in the documentation.
It is also worth to mention that if your client needs more advanced solutions for images transformation in the future, then [integration with Dynamic Media](https://docs.adobe.com/docs/en/aem/6-3/administer/content/dynamic-media/image-presets.html) can also be considered as a possibility, however, this is the most costly solution. | AEM offers options for "image optimisation" but this is a broad topic so there is no "magic" switch you can turn to "optimise" your images. It all boils down to the amount of kilo- or megabytes that are transferred from AEM to the users browser.
The size of an asset is influenced by two things:
1. Asset dimension (width and height).
2. Compression.
The biggest gains can be achieved by simply reducing the asset's dimensions. AEM does that already. If you have a look at your asset's renditions you will notice that there is not just the so-called *original* rendition but several other renditions with different dimensions.
```
MyImage.jpg
└── jcr:content
└── renditions/
├── cq5dam.thumbnail.140.100.png
├── cq5dam.thumbnail.319.319.png
├── cq5dam.thumbnail.48.48.png
└── original
```
The numbers in the renditions name are the width and height of the rendition. So there is a version of `MyImage.jpg` that has a width of 140px and a height of 100px and so on.
This is all done by the `DAM Update Asset` workflow when the image is uploaded and can be modified to generate more renditions with different dimensions.
But generating images with different dimensions is only half of the story. AEM has to select the rendition with the right dimension at the right moment. This is commonly referred to as "responsive images". The AEM image component does not support "responsive" images out of the box and there are several ways to implement this feature.
The gist of it is that your image component has to contain a list of URLs for different sized renditions. When the page is rendered client side JavaScript determines which rendition is the best for current screen size and adds the URL to the `img` tags `src` attribute.
I would recommend that you have a look at the fairly new AEM Core components which are not included with AEM. Those core components contain an image component that supports responsive images. You can read more about those here:
1. [AEM Core Components Image Component (GitHub)](https://github.com/Adobe-Marketing-Cloud/aem-core-wcm-components/tree/master/content/src/content/jcr_root/apps/core/wcm/components/image/v1/image)
2. [AEM Core Components Documentation](https://docs.adobe.com/docs/en/aem/6-3/develop/components/core-components.html)
Usually, components like that will not use "static" renditions that were already generated by the *DAM Update Asset* workflow but will rely on an Adaptive Image Servlet. This servlet basically gets the asset path and the target width and will return the asset in the requested width. To avoid doing this over and over you should allow the Dispatcher to cache the resulting image.
Those are just the basic things you can do. There are a lot of other things that can be done but all of them with less and less gains in terms of "optimisation". | 4,490 |
60,751,007 | I am trying to build a simple dictionary of all US English vs UK English differences for a web application I am working on.
Is there a non-hacky way to build a dictionary where both the value and the key can be looked up in
python as efficiently as possible? I'd prefer not to loop through the dict by values for the US spelling. For example:
```
baz = {'foo': 'bar'}
# baz['foo'] => 'bar'
# baz['bar'] => 'foo'
``` | 2020/03/19 | [
"https://Stackoverflow.com/questions/60751007",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/872097/"
] | You have a raw `@JoinColumn` in `RolePrivilege`, change it, so that the name of the column is configured: `@JoinColumn(name = "roleId")`.
Also you're saving `RolePrivilege`, but the changes are not cascading, change the mapping to:
```
@ManyToOne(cascade = CascadeType.ALL)
```
P.S.: Prefer `List`s over `Set`s in -to-many mapping for [performance reasons](https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch20.html#performance-collections-mostefficentinverse). | Firstly, do not return String(wrap it to class for example to `RolePriviligueResponse` with `String status` as response body), secondly you dont need `@ResponseBody` annotation, your `@PostMapping` annotation already has it, third - dont use `Integer` for ID, better use `Long` type.
And you did not provide the name of `@JoinColumn(name="roleId")` | 4,495 |
14,657,433 | How do I calculate a correlation matrix in python? I have an n-dimensional vector in which each element has 5 dimensions. For example my vector looks like
```
[
[0.1, .32, .2, 0.4, 0.8],
[.23, .18, .56, .61, .12],
[.9, .3, .6, .5, .3],
[.34, .75, .91, .19, .21]
]
```
In this case the dimension of the vector is 4 and each element of this vector has 5 dimensions. What is the easiest way to construct the matrix?
Thanks | 2013/02/02 | [
"https://Stackoverflow.com/questions/14657433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1964587/"
] | Using [numpy](http://www.numpy.org/), you could use [np.corrcoef](http://docs.scipy.org/doc/numpy/reference/generated/numpy.corrcoef.html):
```
In [88]: import numpy as np
In [89]: np.corrcoef([[0.1, .32, .2, 0.4, 0.8], [.23, .18, .56, .61, .12], [.9, .3, .6, .5, .3], [.34, .75, .91, .19, .21]])
Out[89]:
array([[ 1. , -0.35153114, -0.74736506, -0.48917666],
[-0.35153114, 1. , 0.23810227, 0.15958285],
[-0.74736506, 0.23810227, 1. , -0.03960706],
[-0.48917666, 0.15958285, -0.03960706, 1. ]])
Here is a [pretty good example](http://www.tradinggeeks.net/2015/08/calculating-correlation-in-python/) of calculating a correlation matrix from multiple time series using Python. The included source code calculates a correlation matrix for a set of Forex currency pairs using Pandas, NumPy, and matplotlib to produce a graph of correlations.
Sample data is a set of historical data files, and the output is a single correlation matrix and a plot. The code is very well documented. The core of that approach boils down to pandas' `DataFrame.corr()` — a minimal sketch with made-up numbers (the column names and values below are just for illustration):
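```
import pandas as pd

# hypothetical price series; in the linked example these come from historical data files
df = pd.DataFrame({'EURUSD': [1.10, 1.12, 1.11, 1.13],
                   'GBPUSD': [1.30, 1.31, 1.29, 1.33]})
print(df.corr())  # pairwise correlation matrix of the columns
``` | 4,496 |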
6,213,336 | I'm reading lines from a file to then work with them. Each line is composed solely by float numbers.
I have pretty much everything sorted up to convert the lines into arrays.
I basically do (pseudopython code)
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, buts seems a bit counterintuitive and antipythonic, I wanted to know if there is a better way to handle the inputs from a file to have at the end an array full of floats. | 2011/06/02 | [
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] | Quick answer:
```
arrays = []
for line in open(your_file): # no need to use readlines if you don't want to store them
# use a list comprehension to build your array on the fly
    new_array = np.array([float(i) for i in line.split(' ')])
arrays.append(new_array)
```
If you often process this kind of data, the csv module will help.
```
import csv
arrays = []
# declare the format of you csv file and Python will turn line into
# lists for you
parser = csv.reader(open(your_file), delimiter=' ')
for l in parser:
    arrays.append(np.array([float(i) for i in l]))
```
If you feel wild, you can even make this completely declarative:
```
import csv
parser = csv.reader(open(your_file), delimiter=' ')
make_array = lambda row: np.array([float(i) for i in row])
arrays = [make_array(row) for row in parser]
```
And if you really want your colleagues to hate you, you can make a one-liner (NOT PYTHONIC AT ALL :-):
```
arrays = [np.array([float(i) for i in r]) for r in csv.reader(open(your_file), delimiter=' ')]
```
Stripping all the boilerplate and flexibility, you can end up with a clean and quite readable one-liner. I wouldn't use it because I like the refactoring potential of using `csv`, but it can be good enough. It's a grey zone here, so I wouldn't say it's Pythonic, but it's definitely handy.
```
arrays = [np.array([float(i) for i in l.split()]) for l in open(your_file)]
``` | How about the following:
```
import numpy as np
arrays = []
for line in open('data.txt'):
arrays.append(np.array([float(val) for val in line.rstrip('\n').split(' ') if val != '']))
``` | 4,501 |
47,057,572 | I tried to use pytesseract:
```
import pytesseract
from PIL import Image
pytesseract.pytesseract.tesseract_cmd = 'C:\\Python27\\scripts\\pytesseract.exe'
im = Image.open('Download.png')
print pytesseract.image_to_string(im)
```
But I got this error:
```
Traceback (most recent call last):
File "C:/Python27/ocr.py", line 11, in <module>
print pytesseract.image_to_string(im)
File "C:\Python27\lib\site-packages\pytesseract\pytesseract.py", line
125, in image_to_string
raise TesseractError(status, errors)
TesseractError: (2, u'Usage: python pytesseract.py [-l lang] input_file')
```
What is wrong? | 2017/11/01 | [
"https://Stackoverflow.com/questions/47057572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8500407/"
] | You need to install tesseract using windows installer available [here](https://github.com/UB-Mannheim/tesseract/wiki). Then you should install the python wrapper as:
```
pip install pytesseract
```
Then you should also set the tesseract path in your script after importing pytesseract library as below (Please do not forget that installation path might be modified in your case!):
```
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files (x86)\Tesseract-OCR\tesseract.exe'
```
Note: It is tested on Anaconda3, Anaconda2, Py3 and Py2 without any issues. | I think there is something wrong with your path 'C:\Python27\scripts\pytesseract.exe', This seems to point to the pytessaract.py code (hence the error has pytessaract.py on it - the exact error is mentioned in the main function of pytessaract.py which runs only if **name** == "**main**" ).
The path must actually point to tessaract.exe, downloaded separately. Look at the 3rd point under installation in the link (<https://pypi.python.org/pypi/pytesseract>).
This has to be done because pytesseract is only a python wrapper around the tessaract program, so it calls tessaract.exe on your local machine for doing the actual ocr work. | 4,511 |
65,040,971 | I setup a new Debian 10 (Buster) instance on AWS EC2, and was able to install a pip3 package that depended on netifaces, but when I came back to it the next day the package is breaking reporting an error in netifaces. If I try to run pip3 install netifaces I get the same error:
```
~$ pip3 install netifaces
Collecting netifaces
Using cached https://files.pythonhosted.org/packages/0d/18/fd6e9c71a35b67a73160ec80a49da63d1eed2d2055054cc2995714949132/netifaces-0.10.9.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 20, in <module>
from setuptools.dist import Distribution, Feature
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 35, in <module>
from setuptools.depends import Require
File "/usr/lib/python3/dist-packages/setuptools/depends.py", line 7, in <module>
from .py33compat import Bytecode
File "/usr/lib/python3/dist-packages/setuptools/py33compat.py", line 55, in <module>
unescape = getattr(html, 'unescape', html_parser.HTMLParser().unescape)
AttributeError: 'HTMLParser' object has no attribute 'unescape'
``` | 2020/11/27 | [
"https://Stackoverflow.com/questions/65040971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/439005/"
] | `HTMLParser().unescape` was removed in Python 3.9. Compare [the code in Python 3.8](https://github.com/python/cpython/blob/v3.8.0/Lib/html/parser.py#L466) vs [Python 3.9](https://github.com/python/cpython/blob/v3.9.0/Lib/html/parser.py).
The error seems to be a bug in `setuptools`. Try to upgrade `setuptools` (e.g. `pip3 install --upgrade setuptools`), or use Python 3.8. | I was facing this issue in PyCharm 2018. Apart from upgrading `setuptools` as mentioned above, I also had to upgrade to `PyCharm 2020.3.4` to solve this issue. Related bug on the PyCharm issue tracker: <https://youtrack.jetbrains.com/issue/PY-39579>
Hope this helps someone avoid spending hours trying to debug this. | 4,512 |
8,778,865 | I was writing a program in python
```
import sys
def func(N, M):
if N == M:
return 0.00
else:
if M == 0:
return pow(2, N+1) - 2.00
else :
return 1.00 + (0.5)*func(N, M+1) + 0.5*func(N, 0)
def main(*args):
test_cases = int(raw_input())
while test_cases:
string = raw_input()
a = string.split(" ")
N = int(a[0])
M = int(a[1])
test_cases = test_cases -1
result = func(N, M)
print("%.2f" % round(result, 2))
if __name__ == '__main__':
sys.setrecursionlimit(1500)
sys.exit(main(*sys.argv))
```
It gives the same answer for N = 1000, M = 1 and N = 1000, M = 2.
On searching I found that the float limit is exceeded above 10^400. My question is how to overcome it. | 2012/01/08 | [
"https://Stackoverflow.com/questions/8778865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1032610/"
] | Floats in Python are IEEE doubles: they are not unlimited precision. But if your computation only needs integers, then just use integers: they are unlimited precision. Unfortunately, I think your computation does not stay within the integers.
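For this particular recursion you could also stay exact with the standard library's `fractions.Fraction` — a sketch mirroring the question's `func` (every intermediate value here is a rational with a power-of-two denominator, so nothing is rounded):
```
from fractions import Fraction

def func(N, M):  # exact-arithmetic version of the question's recursion
    if N == M:
        return Fraction(0)
    if M == 0:
        return Fraction(2) ** (N + 1) - 2
    return 1 + Fraction(1, 2) * func(N, M + 1) + Fraction(1, 2) * func(N, 0)
```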
There are third-party packages built on GMP that provide arbitrary-precision floats: <https://www.google.com/search?q=python%20gmp> | Consider using an arbitrary precision floating-point library, for example the [bigfloat](http://packages.python.org/bigfloat/) package, or [mpmath](http://code.google.com/p/mpmath/). | 4,517 |
63,002,403 | Is this as expected? I thought in Python, variables are pointers to objects in memory. If I modify the python list that a variable points to once, the memory reference changes. But if I modify it again, the memory reference is the same?
```
>>> id(mylist)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'mylist' is not defined
>>> mylist = [0]
>>> id(mylist)
4417893152
>>> mylist = [0, 1]
>>> id(mylist)
4418202992 # ID changes
>>> mylist.append(3)
>>> mylist
[0, 1, 3]
>>> id(mylist)
4418202992 # ID stays the same
>>> mylist.append(4)
>>> mylist
[0, 1, 3, 4]
>>> id(mylist)
4418202992 # ID stays the same
>>>
``` | 2020/07/20 | [
"https://Stackoverflow.com/questions/63002403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3552698/"
] | You are correct in that the memory references should change. Take a careful look at the memory addresses: they're not identical.
Edit: Regarding your edit, the memory address only changes on reassignment of the variable. The memory of the variable stays the same if you mutate the list. | Take a look at the ids you provided. They are completely different: 4338643744 != 4338953744. Look at the first 5 digits: 43386 != 43389. Everything is working as expected because the memory reference is changing properly. | 4,519 |
73,648,264 | I have a json like this but much longer:
```
[
{
"id": "123",
"name": "home network configuration",
"description": "home utilities",
"definedRanges": [
{
"id": "6500b67e",
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
"state": "UNALLOCATED"
}
]
},
{
"id": "456",
"name": "lab network configuration",
"description": "lab experiments",
"definedRanges": [
{
"id": "1209b90d",
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
"state": "ALLOCATED"
},
{
"id": "99e08ca4",
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
"state": "UNALLOCATED"
}
]
}
]
```
I'd like to query with jq and obtain the following:
```
[
{
"name": "home network configuration"
"definedRanges": [
{
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
}
]
},
{
"name": "lab network configuration",
"definedRanges": [
{
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
},
{
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
}
]
}
]
```
or even this:
```
[
{
"name": "home network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
}
]
```
So far I was able to extract the network name at the first level with:
```
.[] | {name}
```
I could also extract the definedRanges with:
```
.[].definedRanges[] | {name,beginIPv4Address,endIPv4Address}
```
But I can't figure out how to merge the two with jq.
I solved the problem with a very simple python script (7 lines of code) but now I'd like to understand how to do the same with jq, out of curiosity. | 2022/09/08 | [
"https://Stackoverflow.com/questions/73648264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1066865/"
] | Well, you were close. Here's how you put those together:
```
map({name, definedRanges: .definedRanges | map({name, beginIPv4Address, endIPv4Address})})
```
[Online demo](https://jqplay.org/s/tHkRlLUV3K-) | Here's my shot at solutions to produce one or the other desired output:
```
map(
{ name }
+ (.definedRanges[] | {
"definedRanges.name": .name,
"definedRanges.beginIPv4Address": .beginIPv4Address,
"definedRanges.endIPv4Address": .endIPv4Address
}))
```
Output:
```json
[
{
"name": "home network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200"
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200"
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200"
}
]
```
Producing the first kind of output is even simpler (IMHO it reads a bit more straightforwardly):
```
map({
name,
definedRanges: .definedRanges | map({ name, beginIPv4Address, endIPv4Address })
})
```
Output:
```json
[
{
"name": "home network configuration",
"definedRanges": [
{
"name": "100-2001",
"beginIPv4Address": "192.168.090.101",
"endIPv4Address": "192.168.090.201"
}
]
},
{
"name": "lab network configuration",
"definedRanges": [
{
"name": "100-2002",
"beginIPv4Address": "192.168.090.102",
"endIPv4Address": "192.168.090.202"
},
{
"name": "100-2003",
"beginIPv4Address": "192.168.090.103",
"endIPv4Address": "192.168.090.203"
}
]
}
]
``` | 4,525 |
295,028 | I have a very tricky situation (by my standards) on hand. I have a script that needs to read a script variable name from [ConfigParser](https://docs.python.org/2/library/configparser.html). For example, I need to read
```
self.post.id
```
from a .cfg file and use it as a variable in the script. How do I achieve this?
I suppose I was unclear in my query. The .cfg file looks something like:
```
[head]
test: me
some variable : self.post.id
```
This self.post.id is to be replaced at the run time, taking values from the script. | 2008/11/17 | [
"https://Stackoverflow.com/questions/295028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2220518/"
] | test.ini:
```
[head]
var: self.post.id
```
python:
```
import ConfigParser

class Test:
    def __init__(self):
        self.post = TestPost(5)

    def getPost(self):
        config = ConfigParser.ConfigParser()
        config.read('/path/to/test.ini')
        newvar = config.get('head', 'var')
        print eval(newvar)

class TestPost:
    def __init__(self, id):
        self.id = id

test = Test()
test.getPost() # prints 5
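
# A safer variant of the same idea (my addition, not part of the original
# answer): strip the leading "self." and resolve the rest of the dotted
# path with operator.attrgetter (Python 2.6+) instead of eval.
import operator
dotted = 'self.post.id'                                   # the value read from the config
print operator.attrgetter(dotted.split('.', 1)[1])(test)  # prints 5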
``` | This is a bit silly.
You have a dynamic language, distributed in source form.
You're trying to make what amounts to a change to the source. Which is easy-to-read, plain text Python.
Why not just change the Python source and stop messing about with a configuration file?
It's a lot easier to have a block of code like this
```
# Change this for some reason or another
x = self.post.id # Standard Configuration
# x = self.post.somethingElse # Another Configuration
# x = self.post.yetAnotherCase # A third configuration
```
it's just as complex to change this as it is to change a configuration file. And your Python program is simpler and more clear. | 4,527 |
45,533,019 | I was making a permutation script with python and I looked for how to make a multidimensional array, but the only way I could find was `array3 = [ [ "" for i in range(12) ] for j in range(4) ]`. Is there any way I can define it as multidimensional without fixing its size? I also found that it's possible to make it like `array = [[]]`, but I can't find a way to put anything inside.
I'm trying to put letters and words inside the array, so I think I can't use numpy.
For the other problem, the index out of range, I'm trying this:
```
array = [ ["a","b","c","d","e","f"],["7","8","9","0","11","12"]]
array2 = [ ["1","2","3","4","5","6"],["g","h","i","j","k","l"]]
array3 = [ [ "" for i in range(12) ] for j in range(4) ]
i,j = 0,0
print(array[0][0] + array2[0][1])
for k in range(3):
for l in range(2):
for m in range(4):
for n in range(7):
if j > 5:
j = 0
i += 1
print(m,n,k,l,i,j)
array3[m][n] =array[k][l] + array2[i][j]
j += 1
print(array3)
```
I was trying to put the first multidimensional array and the second together with a permutation algorithm but it says that the index is out of range...
What i want it to print is: a1, a2, a3, a4, a5, a6, ag, ah, aj, ak, al, b1, b2... | 2017/08/06 | [
"https://Stackoverflow.com/questions/45533019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7848729/"
] | Use `itertools.chain.from_iterable` and `itertools.product`:
```
from itertools import chain, product

for first, second in product(chain.from_iterable(array),
                             chain.from_iterable(array2)):
    print("{}{}".format(first, second))
```
Since `array` and `array2` are lists and not arbitrary iterables, you can shorten this by using `chain` itself with argument unpacking:
```
for first, second in product(chain(*array), chain(*array2)):
print("{}{}".format(first, second))
```
---
`array3` can be created as a flat list:
```
array3 = ["{}{}".format(first, second) for first, second in product(chain(*array), chain(array2))]
```
or as a nested list:
```
array3 = [["{}{}".format(first, second) for second in chain(*array2)] for first in chain(*array)]
``` | If I understand you correctly, you'll need four levels of nesting:
```
array = [["a","b","c","d","e","f"],["7","8","9","0","11","12"]]
array2 = [["1","2","3","4","5","6"],["g","h","i","j","k","l"]]
for row in array: # iterate "rows"
for cell in row: # iterate "cells" in a specific "row"
for row_2 in array2:
for cell_2 in row_2:
print '{}{}'.format(cell, cell_2)
```
which will give you:
```
a1
a2
a3
a4
a5
a6
ag
ah
ai
aj
ak
al
b1
b2
b3
b4
b5
...
``` | 4,528 |
40,032,276 | I have a dataframe similar to:
```
id | date | value
--- | ---------- | ------
1 | 2016-01-07 | 13.90
1 | 2016-01-16 | 14.50
2 | 2016-01-09 | 10.50
2 | 2016-01-28 | 5.50
3 | 2016-01-05 | 1.50
```
I am trying to keep the most recent values for each id, like this:
```
id | date | value
--- | ---------- | ------
1 | 2016-01-16 | 14.50
2 | 2016-01-28 | 5.50
3 | 2016-01-05 | 1.50
```
I have tried sorting by date descending and then dropping duplicates:
```
new_df = df.orderBy(df.date.desc()).dropDuplicates(['id'])
```
My questions are: will `dropDuplicates()` keep the first duplicate value that it finds? And is there a better way to accomplish what I want to do? By the way, I'm using python.
Thank you. | 2016/10/13 | [
"https://Stackoverflow.com/questions/40032276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4641956/"
] | The window operator as suggested works very well to solve this problem:
```
from datetime import date
rdd = sc.parallelize([
[1, date(2016, 1, 7), 13.90],
[1, date(2016, 1, 16), 14.50],
[2, date(2016, 1, 9), 10.50],
[2, date(2016, 1, 28), 5.50],
[3, date(2016, 1, 5), 1.50]
])
df = rdd.toDF(['id','date','price'])
df.show()
+---+----------+-----+
| id| date|price|
+---+----------+-----+
| 1|2016-01-07| 13.9|
| 1|2016-01-16| 14.5|
| 2|2016-01-09| 10.5|
| 2|2016-01-28| 5.5|
| 3|2016-01-05| 1.5|
+---+----------+-----+
df.registerTempTable("entries")  # replaced by createOrReplaceTempView in Spark 2.0
output = sqlContext.sql('''
SELECT
*
FROM (
SELECT
*,
dense_rank() OVER (PARTITION BY id ORDER BY date DESC) AS rank
FROM entries
) vo WHERE rank = 1
''');
output.show();
+---+----------+-----+----+
| id| date|price|rank|
+---+----------+-----+----+
| 1|2016-01-16| 14.5| 1|
| 2|2016-01-28| 5.5| 1|
| 3|2016-01-05| 1.5| 1|
+---+----------+-----+----+
``` | If you have items with the same date then you will get duplicates with the dense\_rank. You should use row\_number:
```py
from pyspark.sql.window import Window
from datetime import date
import pyspark.sql.functions as F
rdd = spark.sparkContext.parallelize([
[1, date(2016, 1, 7), 13.90],
[1, date(2016, 1, 7), 10.0 ], # I added this row to show the effect of duplicate
[1, date(2016, 1, 16), 14.50],
[2, date(2016, 1, 9), 10.50],
[2, date(2016, 1, 28), 5.50],
[3, date(2016, 1, 5), 1.50]]
)
df = rdd.toDF(['id','date','price'])
df.show(10)
+---+----------+-----+
| id| date|price|
+---+----------+-----+
| 1|2016-01-07| 13.9|
| 1|2016-01-07| 10.0|
| 1|2016-01-16| 14.5|
| 2|2016-01-09| 10.5|
| 2|2016-01-28| 5.5|
| 3|2016-01-05| 1.5|
+---+----------+-----+
# row_number
df.withColumn("row_number",F.row_number().over(Window.partitionBy(df.id).orderBy(df.date))).filter(F.col("row_number")==1).show()
+---+----------+-----+----------+
| id| date|price|row_number|
+---+----------+-----+----------+
| 3|2016-01-05| 1.5| 1|
| 1|2016-01-07| 13.9| 1|
| 2|2016-01-09| 10.5| 1|
+---+----------+-----+----------+
# dense_rank
df.withColumn("dense_rank",F.dense_rank().over(Window.partitionBy(df.id).orderBy(df.date))).filter(F.col("dense_rank")==1).show()
+---+----------+-----+----------+
| id| date|price|dense_rank|
+---+----------+-----+----------+
| 3|2016-01-05| 1.5| 1|
| 1|2016-01-07| 13.9| 1|
| 1|2016-01-07| 10.0| 1|
| 2|2016-01-09| 10.5| 1|
+---+----------+-----+----------+
``` | 4,529 |
36,341,553 | Lets say I want to use `gcc` from the command line in order to compile a C extension of Python. I'd structure the call something like this:
```
gcc -o applesauce.pyd -I C:/Python35/include -L C:/Python35/libs -l python35 applesauce.c
```
I noticed that the `-I`, `-L`, and `-l` options are absolutely necessary, or else you will get an error that looks something like [this](https://stackoverflow.com/questions/6985109/how-to-compile-c-code-from-cython-with-gcc). These commands tell gcc where to look for the headers (`-I`), where to look for the static libraries (`-L`), and which static library to actually use (`python35`, which actually translates to `libpython35.a`).
Now, this is obviously really easy to get the `libs` and `include` directories if its your machine, as they never change if you don't want them to. However, I was writing a program that calls `gcc` from the command line, that *other people will be using*. The line where this call occurs looks something like this:
```
from subprocess import call
import os
import sys

filename = 'applesauce.c'
include_directory = os.path.join(sys.exec_prefix, 'include')
libs_directory = os.path.join(sys.exec_prefix, 'libs')
call(['gcc', ..., '-I', include_directory, '-L', libs_directory, ...])
```
### However, others will have different platforms and different Python installation structures, so just joining the paths won't always work.
Instead, I need a solution *from within Python* that will reliably return the `include` and `libs` directories.
Edit:
-----
I looked at the module `distutils.ccompiler`, and found many useful functions that would in part use distutils, but make it customizable for me to make my compiler entirely cross platform. The only thing is, I need to pass it the include and runtime libraries...
Edit 2:
-------
I looked at `distutils.sysconfig` and I am able to reliably return the 'include' directory, including all the header files. I still have no idea how to get the runtime library.
The `distutils.ccompiler` docs are [here](https://docs.python.org/3.5/distutils/apiref.html#distutils.ccompiler.CCompiler)
The program that needs this functionality is named [Cyther](https://pypi.python.org/pypi/Cyther) | 2016/03/31 | [
"https://Stackoverflow.com/questions/36341553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3689198/"
] | The easiest way to compile extensions is to use `distutils`, [like this](https://docs.python.org/2/distutils/examples.html):
```
from distutils.core import setup
from distutils.extension import Extension
setup(name='foobar',
version='1.0',
ext_modules=[Extension('foo', ['foo.c'])],
)
```
Keep in mind that compared to unix/linux, [compiling extensions on ms-windows is not straightforward](https://blog.ionelmc.ro/2014/12/21/compiling-python-extensions-on-windows/) because different versions of the ms compiler are tightly coupled with the corresponding C library. So on ms-windows you have to compile extensions for Python x.y with the same compiler that Python x.y was compiled with. This means using old and unsupported tools for e.g. Python 2.7, 3.3 and 3.4. The situation is changing ([part 1](http://stevedower.id.au/blog/building-for-python-3-5/), [part 2](http://stevedower.id.au/blog/building-for-python-3-5-part-two/)) with Python 3.5.
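For the narrower goal of locating the directories from within Python, here is a sketch using the standard `sysconfig` module (the paths in the comments vary by platform and install, so treat them as assumptions):

```
import sysconfig

include_dir = sysconfig.get_path('include')    # e.g. .../include/python3.5m
lib_dir = sysconfig.get_config_var('LIBDIR')   # may be None on ms-windows
print(include_dir, lib_dir)
```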
**Edit:**
Having looked into the problem more, I doubt that what you want is 100% achievable. Given that the tools needed for compiling and linking are by definition outside of Python, and that they are not even necessarily the tools that Python was compiled with, the information in `sys` and `sysconfig` is not guaranteed to be an accurate representation of the system that Python is actually installed on. E.g. most ms-windows machines will not have developer tools installed. And even on POSIX platforms there can be a difference between the installed C compiler and the compiler that built Python, especially when it is installed as a binary package. | If you look at the source of [`build_ext.py`](http://svn.python.org/projects/python/trunk/Lib/distutils/command/build_ext.py) from `distutils`, in the method `finalize_options`, you will find the code used on different platforms to locate libs. | 4,532
65,365,486 | I'm trying to locate an element using python selenium, and have the html below:
```
<input class="form-control" type="text" placeholder="University Search">
```
I couldn't locate the element where I need to type.
```
from selenium import webdriver
import time
driver = webdriver.Chrome(executable_path=r"D:\Python\Lib\site-packages\selenium\chromedriver.exe")
driver.get('https://www.topuniversities.com/university-rankings/university-subject-rankings/2020/engineering-technology')
#<input class="form-control" type="text" placeholder="University Search">
text_area = driver.find_element_by_name('University Search')
text_area.send_keys("oxford university")
``` | 2020/12/19 | [
"https://Stackoverflow.com/questions/65365486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11405347/"
] | You are attempting to use [find\_element\_by\_name](https://selenium-python.readthedocs.io/locating-elements.html#locating-by-name) and yet this element has no `name` attribute defined. You need to look for the element with the specific `placeholder` attribute you are interested in - you can use [find\_element\_by\_xpath](https://selenium-python.readthedocs.io/locating-elements.html#locating-by-xpath) for this:
```
text_area = driver.find_element_by_xpath("//input[@placeholder='University Search']")
```
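If the search box is rendered after the initial page load, an explicit wait before locating it may help (a sketch; the 10-second timeout is an assumption):

```
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

text_area = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, "//input[@placeholder='University Search']"))
)
text_area.send_keys("oxford university")
```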
Also aside: When I open my browser, I don't see an element with "University Search" in the placeholder, only a search bar with "Site Search" -- but this might be a regional and/or browser difference. | Very close, try the XPath when all else fails:
```
text_area = driver.find_element_by_xpath("//*[@id='qs-rankings']/thead/tr[3]/td[2]/div/input")
```
You can copy the full/relative XPath to clipboard if you're inspecting the webpage's html. | 4,533 |
10,264,460 | I am looking for some words in a file in python. After I find each word I need to read the next two words from the file. I've looked for some solution but I could not find reading just the next words.
```
# offsetFile - file pointer
# searchTerms - list of words
for line in offsetFile:
for word in searchTerms:
if word in line:
# here get the next two terms after the word
```
Thank you for your time.
Update: Only the first appearance is necessary. Actually only one appearance of the word is possible in this case.
file:
```
accept 42 2820 access 183 3145 accid 1 4589 algebra 153 16272 algem 4 17439 algol 202 6530
```
word: ['access', 'algebra']
Searching the file when I encounter 'access' and 'algebra', I need the values of 183 3145 and 153 16272 respectively. | 2012/04/22 | [
"https://Stackoverflow.com/questions/10264460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/776614/"
] | An easy way to deal with this is to read the file using a generator that yields one word at a time from the file.
```
def words(fileobj):
    for line in fileobj:
        for word in line.split():
            yield word
```
Then to find the word you're interested in and read the next two words:
```
with open("offsetfile.txt") as wordfile:
wordgen = words(wordfile)
for word in wordgen:
if word in searchterms: # searchterms should be a set() to make this fast
break
else:
word = None # makes sure word is None if the word wasn't found
foundwords = [word, next(wordgen, None), next(wordgen, None)]
```
Now `foundwords[0]` is the word you found, `foundwords[1]` is the word after that, and `foundwords[2]` is the second word after it. If there aren't enough words, then one or more elements of the list will be `None`.
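The same "grab the next N words" step can also be written with `itertools.islice` (a sketch; N=2 matches the question):

```
from itertools import islice

following = list(islice(wordgen, 2))  # up to two words after the match
```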
It is a little more complex if you want to *force* this to match only within one line, but usually you can get away with considering the file as just a sequence of words. | If you need to retrieve only the first two words, just do it:
```
offsetFile.readline().split()[:2]
``` | 4,536 |
67,167,886 | I have installed `TensorFlow` on an M1 (**ARM**) Mac according to [these instructions](https://github.com/apple/tensorflow_macos/issues/153). Everything works fine.
However, model training is happening on the `CPU`. How do I switch training to the `GPU`?
```
In: tensorflow.config.list_physical_devices()
Out: [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]
```
In the documentation of [Apple's TensorFlow distribution](https://github.com/apple/tensorflow_macos) I found the following slightly confusing [paragraph](https://github.com/apple/tensorflow_macos#additional-information):
>
> It is not necessary to make any changes to your existing TensorFlow scripts to use ML Compute as a backend for TensorFlow and TensorFlow Addons. There is an optional `mlcompute.set_mlc_device(device_name='any')` API for ML Compute device selection. The default value for device\_name is 'any', which means ML Compute will select the best available device on your system, including multiple GPUs on multi-GPU configurations. Other available options are `CPU` and `GPU`. Please note that in eager mode, ML Compute will use the CPU. For example, to choose the CPU device, you may do the following:
>
>
>
```
# Import mlcompute module to use the optional set_mlc_device API for device selection with ML Compute.
from tensorflow.python.compiler.mlcompute import mlcompute
# Select CPU device.
mlcompute.set_mlc_device(device_name='cpu') # Available options are 'cpu', 'gpu', and 'any'.
```
So I try to run:
```
from tensorflow.python.compiler.mlcompute import mlcompute
mlcompute.set_mlc_device(device_name='gpu')
```
and get:
```
WARNING:tensorflow: Eager mode uses the CPU. Switching to the CPU.
```
At this point I am stuck. How can I train `keras` models on the GPU to my MacBook Air?
TensorFlow version: `2.4.0-rc0` | 2021/04/19 | [
"https://Stackoverflow.com/questions/67167886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/626537/"
] | Update
------
The [tensorflow\_macos tf 2.4](https://github.com/apple/tensorflow_macos) repository has been archived by the owner. For `tf 2.5`, refer to [here](https://developer.apple.com/metal/tensorflow-plugin/).
---
It's probably not useful to disable eager execution entirely; disabling it only for `tf.function`s is enough. Try this and check your GPU usage; the warning message can be [misleading](https://github.com/apple/tensorflow_macos/issues/71#issuecomment-748444580).
```
import tensorflow as tf
tf.config.run_functions_eagerly(False)
```
---
The current release of [Mac-optimized TensorFlow](https://github.com/apple/tensorflow_macos/releases/tag/v0.1alpha3) has several issues that are not yet fixed (`TensorFlow 2.4rc0`). Eager mode is the default behavior in `TensorFlow 2.x`, and that is unchanged in the [TensorFlow-MacOS](https://github.com/apple/tensorflow_macos/releases/tag/v0.1alpha3) fork. But unlike the official build, this optimized version forcibly uses the **CPU** for eager mode, as they state [here](https://github.com/apple/tensorflow_macos#device-selection-optional):
>
> ... in eager mode, **ML Compute** will use the **CPU**.
>
>
>
That's why, even when we explicitly set `device_name='gpu'`, it switches back to the CPU while eager mode is still on.
```
from tensorflow.python.compiler.mlcompute import mlcompute
mlcompute.set_mlc_device(device_name='gpu')
WARNING:tensorflow: Eager mode uses the CPU. Switching to the CPU.
```
[Disabling the eager mode](https://stackoverflow.com/a/66768341/9215780) may let the program utilize the GPU, but it's not the general behavior and can lead to such [puzzling performance on both CPU/GPU](https://github.com/apple/tensorflow_macos/issues/88). For now, the most appropriate approach is to choose `device_name='any'`, so that ML Compute will query the available devices on the system and select the best device(s) for training the network. | Try turning off eager execution
via the following:
```
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
```
Let me know if it works. | 4,539 |
59,520,376 | I came across a scenario where I need to run a function in parallel for a list of values in python. I learnt that `executor.map` from `concurrent.futures` will do the job, and I was able to parallelize the function using the syntax `executor.map(func, [values])`.
But now I have come across the same scenario (i.e. the function has to run in parallel), except the function signature is different from the previous one, as given below.
```
def func(search_id,**kwargs):
# somecode
return list
container = []
with concurrent.futures.ProcessPoolExecutor() as executor:
container.extend(executor.map(func, (searchid,sitesearch=site),[list of sites]))
```
I don't know how to achieve the above. Can someone guide me please? | 2019/12/29 | [
"https://Stackoverflow.com/questions/59520376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9452253/"
] | If you have an iterable of `sites` that you want to `map` over while passing the same `search_term` and `pages` arguments to each call, you can pass multiple iterables to `executor.map` (it calls the function with one element taken from each iterable) and use `itertools.repeat` for the arguments that stay constant:
```
def func(site, search_term, pages):
...
from itertools import repeat

executor.map(func, sites, repeat(search_term), repeat(pages))
``` | Here is a useful way of using kwargs in `executor.map`: wrap the call in a small module-level function that expands the kwargs (a plain `lambda` works with `ThreadPoolExecutor`, but `ProcessPoolExecutor` needs a picklable callable):
```py
from concurrent.futures import ProcessPoolExecutor
def func(arg1, arg2, ...):
....
items = [
{ 'arg1': 0, 'arg2': 3 },
{ 'arg1': 1, 'arg2': 4 },
{ 'arg1': 2, 'arg2': 5 }
]
def func_kwargs(kwargs):
    # module-level wrapper so ProcessPoolExecutor can pickle it
    return func(**kwargs)

with ProcessPoolExecutor() as executor:
    result = executor.map(func_kwargs, items)
```
I found this also useful when using Pandas DataFrames by creating the items with `to_dict` by typing `items = df.to_dict(orient='records')` or loading data from JSON files. | 4,540 |
1,713,015 | I installed Yahoo BOSS (it's a Python installation that allows you to use their search features). I followed everything perfectly. However, when I run the example to confirm that it works, I get this:
```
$ python ex3.py
Traceback (most recent call last):
File "ex3.py", line 16, in ?
from yos.yql import db
File "/usr/lib/python2.4/site-packages/yos/yql/db.py", line 44, in ?
from yos.crawl import rest
File "/usr/lib/python2.4/site-packages/yos/crawl/rest.py", line 13, in ?
import xml2dict
File "/usr/lib/python2.4/site-packages/yos/crawl/xml2dict.py", line 6, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
```
Is there any way to fix this? I did exactly as stated in the documentation and it was installed on a fresh box.
People have suggested that Python 2.5 should be used, but everything currently uses Python 2.4. What should I do to get this Yahoo BOSS to work?
```
Python 2.4.3 (#1, Sep 3 2009, 15:37:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
``` | 2009/11/11 | [
"https://Stackoverflow.com/questions/1713015",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/179736/"
] | Use Python 2.5 or above: xml.etree.ElementTree was added in 2.5.
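If you must stay on Python 2.4, a common fallback import pattern is (a sketch; it assumes the effbot package mentioned in the other answer is installed):

```
try:
    import xml.etree.ElementTree as ET    # stdlib, Python 2.5+
except ImportError:
    import elementtree.ElementTree as ET  # effbot package on Python 2.4
```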
<http://docs.python.org/library/xml.etree.elementtree.html> | A Google search reveals that you need to install the [effbot elementtree](http://effbot.org/downloads/#elementtree) Python module. | 4,541
34,804,612 | So I'm writing a login system with python and I would like to know if I can search a text document for the username you put in, then have it output the line it was found on and search a password document. If it matches the password that you put in with the string on that line, then it prints that you logged in. Any and all help is appreciated. In my previous code I have it search line one, and if it doesn't find the string it adds one to the line number and repeats until it finds it; then it checks the password file at the same line.
```
def checkuser(user,line): # scan the username file for the username
    ulines = u.readlines(line)
    if user != ulines:
        line = line + 1
        checkuser(user)
    elif ulines == user:
        password(user)
``` | 2016/01/15 | [
"https://Stackoverflow.com/questions/34804612",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5382035/"
] | A function to get the line number. You can use this however you want:
```
def getLineNumber(fileName, searchString):
    with open(fileName) as f:
        for i, line in enumerate(f, start=1):
            if searchString in line:
                return i
    raise Exception('string not found')
``` | A more Pythonic way for your answer:
```
with open(filename) as f:
    line_no = next((num for num, line in enumerate(f, start=1) if 'searchstring' in line), None)
print line_no  # None if the string was not found
``` | 4,542 |
45,049,312 | How do I CONSOLIDATE the following using a python COMPREHENSION?
**FROM (list of dicts)**
```
[
{'server':'serv1','os':'Linux','archive':'/my/folder1'}
,{'server':'serv2','os':'Linux','archive':'/my/folder1'}
,{'server':'serv3','os':'Linux','archive':'/my/folder2'}
,{'server':'serv4','os':'AIX','archive':'/my/folder1'}
,{'server':'serv5','os':'AIX','archive':'/my/folder1'}
]
```
**TO (list of dicts with tuple as key and list of 'server#'s as value)**
```
[
{('Linux','/my/folder1'):['serv1','serv2']}
,{('Linux','/my/folder2'):['serv3']}
,{('AIX','/my/folder1'):['serv4','serv5']}
]
``` | 2017/07/12 | [
"https://Stackoverflow.com/questions/45049312",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2288573/"
] | The need to set default values in your dictionary and to accumulate several values under the same key makes a dict comprehension a bit clumsy; I'd prefer something like this:
A [`defaultdict`](https://docs.python.org/3/library/collections.html#collections.defaultdict) may help:
```
from collections import defaultdict
lst = [
{'server':'serv1','os':'Linux','archive':'/my/folder1'},
{'server':'serv2','os':'Linux','archive':'/my/folder1'},
{'server':'serv3','os':'Linux','archive':'/my/folder2'},
{'server':'serv4','os':'AIX','archive':'/my/folder1'},
{'server':'serv5','os':'AIX','archive':'/my/folder1'}
]
dct = defaultdict(list)
for d in lst:
key = d['os'], d['archive']
dct[key].append(d['server'])
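
# dct now holds (my annotation, not part of the original answer):
# {('Linux', '/my/folder1'): ['serv1', 'serv2'],
#  ('Linux', '/my/folder2'): ['serv3'],
#  ('AIX', '/my/folder1'): ['serv4', 'serv5']}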
```
If you prefer to have a standard dictionary in the end (actually, I do not really see a good reason for that), you could use [`dict.setdefault`](https://docs.python.org/3/library/stdtypes.html?highlight=setdefault#dict.setdefault) in order to create an empty list where the key does not yet exist:
```
dct = {}
for d in lst:
key = d['os'], d['archive']
dct.setdefault(key, []).append(d['server'])
```
the [documentation on `defaultdict` (vs. `setdefault`)](https://docs.python.org/3/library/collections.html#defaultdict-examples):
>
> This technique is simpler and faster than an equivalent technique
> using dict.setdefault()
>
>
> | It's difficult to achieve with list comprehension because of the accumulation effect. However, it's possible using `itertools.groupby` on the list sorted by your keys (use the same `key` function for both sorting and grouping).
Then extract the server info in a list comprehension, keyed by the group key. Pass the resulting (group key, server list) pairs to a dictionary comprehension and here you go.
```
import itertools
lst = [
{'server':'serv1','os':'Linux','archive':'/my/folder1'}
,{'server':'serv2','os':'Linux','archive':'/my/folder1'}
,{'server':'serv3','os':'Linux','archive':'/my/folder2'}
,{'server':'serv4','os':'AIX','archive':'/my/folder1'}
,{'server':'serv5','os':'AIX','archive':'/my/folder1'}
]
sortfunc = lambda x : (x['os'],x['archive'])
result = {k:[x['server'] for x in v] for k,v in itertools.groupby(sorted(lst,key=sortfunc),key = sortfunc)}
print(result)
```
I get:
```
{('Linux', '/my/folder1'): ['serv1', 'serv2'], ('AIX', '/my/folder1'): ['serv4', 'serv5'], ('Linux', '/my/folder2'): ['serv3']}
```
Keep in mind that being writable in one line doesn't make it more efficient; the `defaultdict` approach doesn't require sorting, for instance. | 4,543
1,091,756 | >
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)? | 2009/07/07 | [
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] | The book [Expert Python Programming](http://www.packtpub.com/expert-python-programming/book) has a related discussion in
Chapter 4: Choosing Good Names: "Building the Namespace Tree" and "Splitting the Code".
My crude one-line summary: collecting related classes into one module (source file), and related modules into one package, helps with code maintenance. | There is no specific convention for this - do whatever makes your code the most readable and maintainable. | 4,544
45,043,961 | I'm getting the following error when trying to run
`$ bazel build object_detection/...`
And I'm getting ~20 of the same error (1 for each time it attempts to build that). I think it's something with the way I need to configure bazel to recognize the py\_proto\_library, but I don't know where, or how I would do this.
`/src/github.com/tensorflow/tensorflow_models/object_detection/protos/BUILD:325:1: name 'py_proto_library' is not defined (did you mean 'cc_proto_library'?).`
I also think it could be an issue with the fact that initially I had installed the cpp version of tensorflow, and then I built it for python. | 2017/07/11 | [
"https://Stackoverflow.com/questions/45043961",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2544449/"
] | The solution ended up being running this command, like the instructions say:
```
$ protoc object_detection/protos/*.proto --python_out=.
```
and then running this command, like the instructions say:
```
$ export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
``` | Are you using `load` in the `BUILD` file you're building?
`load("@protobuf//:protobuf.bzl", "py_proto_library")`?
The error seems to indicate the symbol `py_proto_library` isn't loaded into skylark. | 4,554 |
68,630,769 | I have a list of strings in python and want to run a recursive grep on each string in the list. I am using the following code:
```
import subprocess as sp

for python_file in python_files:
    out = sp.getoutput("grep -r python_file . | wc -l")
    print(out)
```
The output I am getting is the grep of the literal string "python\_file". What mistake am I committing, and what should I do to correct this? | 2021/08/03 | [
"https://Stackoverflow.com/questions/68630769",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16582585/"
] | Your code has several issues. The immediate answer to what you seem to be asking was given in a comment, but there are more things to fix here.
If you want to pass in a variable instead of a static string, you have to use some sort of string interpolation.
`grep` already knows how to report how many lines matched; use `grep -c`. Or just ask Python to count the number of output lines. Trimming off the pipe to `wc -l` allows you to also avoid invoking a shell, which is a good thing; see also [Actual meaning of `shell=True` in subprocess](https://stackoverflow.com/questions/3172470/actual-meaning-of-shell-true-in-subprocess).
`grep` already knows how to search for multiple expressions. Try passing in the whole list as an input file with `grep -f -`.
```py
import subprocess as sp
out = sp.check_output(
["grep", "-r", "-f", "-", "."],
input="\n".join(python_files), text=True)
print(len(out.splitlines()))
```
If you want to speed up your processing and the patterns are all static strings, try also adding the `-F` option to `grep`.
Of course, all of this is relatively easy to do natively in Python, too. You should easily be able to find examples with `os.walk()`. | Your intent isn't totally clear from the way you've written your question, but the first argument to `grep` is the **pattern** (`python_file` in your example), and the second is the **file(s)** `.` in your example
You could write this in native Python or just use grep directly, which is probably easier than using both!
`grep` args
* `--count` will report just the number of matching lines
* `--file` *Read one or more newline separated patterns from file.* (manpage)
```sh
grep --count --file patterns.txt -r .
```
```py
import re
from pathlib import Path

for pattern in patterns:
    count = 0
    for path_file in Path(".").rglob("*"):  # recursive, like grep -r
        if not path_file.is_file():
            continue
        with open(path_file) as fh:
            for line in fh:
                if re.search(pattern, line):  # grep matches anywhere in the line
                    count += 1
    print(count)
```
NOTE that the behavior in your question would get a separate count for each pattern, while you may really want a single count. | 4,555
62,295,148 | I am new to lambda functions. I am trying to get the sum of elements in a list, but facing this issue repeatedly.
[screenshot of the code and error](https://i.stack.imgur.com/9k7gu.png)
When following up with tutorials online([Tutorial-link](https://realpython.com/python-lambda/)). The following code is working fine for them. But, I am facing the same problem.
[screenshot of the tutorial example](https://i.stack.imgur.com/A6syb.png)
**Can someone help me understand why this is happening?** | 2020/06/10 | [
"https://Stackoverflow.com/questions/62295148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2663091/"
] | The [findOne](https://mongodb.github.io/node-mongodb-native/3.6/api/Collection.html#findOne) function in the node.js driver has a slightly different definition than the one in the shell.
Try
```
db.collection("inventory").findOne({ _id: 1 }, {projection:{ "size": 0 }}, (err, result) => {
``` | Maybe you should try this:
`db.collection("inventory").findOne({ _id: 1 }).select("-size").exec((err, result) => { /* body of the fn */ });` | 4,556
67,047,424 | In this code, I am trying to insert a code block using [react-quilljs](https://github.com/gtgalone/react-quilljs#usage)
```
import React, { useState } from 'react';
import hljs from 'highlight.js';
import { useQuill } from 'react-quilljs';
import 'quill/dist/quill.snow.css'; // Add css for snow theme
export default () => {
hljs.configure({
languages: ['javascript', 'ruby', 'python', 'rust'],
});
const theme = 'snow';
const modules = {
toolbar: [['code-block']],
syntax: {
highlight: (text) => hljs.highlightAuto(text).value,
},
};
const placeholder = 'Compose an epic...';
const formats = ['code-block'];
const { quill, quillRef } = useQuill({
theme,
modules,
formats,
placeholder,
});
const [content, setContent] = useState('');
React.useEffect(() => {
if (quill) {
quill.on('text-change', () => {
setContent(quill.root.innerHTML);
});
}
}, [quill]);
const submitHandler = (e) => {};
return (
<div style={{ width: 500, height: 300 }}>
<div ref={quillRef} />
<form onSubmit={submitHandler}>
<button type='submit'>Submit</button>
</form>
{quill && (
<div
className='ql-editor'
dangerouslySetInnerHTML={{ __html: content }}
/>
)}
</div>
);
};
```
Using the above code, I get the following preview of the editor's content
[screenshot of the editor preview](https://i.stack.imgur.com/7tvmX.png)
There are two problems with this:
1. There is no code syntax highlighting, as I want to achieve this using the `highlihgt.js` package, inside the code block inside the editor, and
2. The code block is not displayed (with the black background and highlighting syntax when it's working) in the previewing div outside the editor.
How can I fix these two issues? | 2021/04/11 | [
"https://Stackoverflow.com/questions/67047424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10134463/"
] | Your code *is* getting marked up by `highlight.js` with CSS classes:
```
<span class="hljs-keyword">const</span>
```
You are not seeing the impact of those CSS classes because you don't have a stylesheet loaded to handle them. You need to choose the theme that you want from [the available styles](https://highlightjs.org/static/demo/) and import the corresponding stylesheet.
```
import 'highlight.js/styles/darcula.css';
``` | Look at the CSS in editor mode. It depends on two class names, `ql-snow` and `ql-editor`.
You can fix this issue by wrapping the preview in one more div with className `ql-snow`:
```
<div className='ql-snow'>
  <div className='ql-editor' dangerouslySetInnerHTML={{ __html: content }} />
</div>
```
This should work. | 4,557 |
10,509,293 | >
> **Possible Duplicate:**
>
> [Why aren't Python's superclass **init** methods automatically invoked?](https://stackoverflow.com/questions/3782827/why-arent-pythons-superclass-init-methods-automatically-invoked)
>
>
>
For example:
```
class Pet(object):
def __init__(self, name, species):
self.name = name
self.species = species
def getName(self):
return self.name
def getSpecies(self):
return self.species
def __str__(self):
return "%s is a %s" % (self.name, self.species)
class Dog(Pet):
def __init__(self, name, chases_cats):
Pet.__init__(self, name, "Dog")
self.chases_cats = chases_cats
def chasesCats(self):
return self.chases_cats
```
As you can see Dog inherits from pet. I understand the code perfectly fine. But why must we call the init for pet in the Dog class? Why isn't just calling it as in the first line of the dog class enough ( class Dog(Pet) )? It seems to only create messier code. It kind of kills the point of inheritance in Python to me. | 2012/05/09 | [
"https://Stackoverflow.com/questions/10509293",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1241100/"
] | Having the language force the super class to initialize before or after means that you lose functionality. A subclass may depend on a superclass's initialization to be run first, or vice versa.
In addition, it wouldn't have any way of knowing what arguments to pass -- subclasses decide what values are passed to an initializer, which gives more flexibility over automatically passing all arguments.
An alternative method of initializing the super-class is by using [super](http://docs.python.org/library/functions.html#super), though this will actually initialize the super-class of `self` dynamically, by looking it up in the object's `__mro__` (method resolution order)
In python 2:
```
super(Dog, self).__init__(name, "Dog")
```
In python 3, you can further reduce the syntax, and avoid repeating yourself:
```
super().__init__(name, "Dog")
```
**Edit**:
Since you're not actually using the `name` argument passed to dog, you can further reduce some syntax, by accepting arbitrary keyword and positional arguments in the `Dog` initializer, and passing them up the initialization chain:
```
class Pet(object):
    def __init__(self, name, species, *args, **kwargs):
        self.name = name
        self.species = species

class Dog(Pet):
    def __init__(self, chases_cats, *args, **kwargs):
        Pet.__init__(self, *args, species="Dog", **kwargs)
        self.chases_cats = chases_cats
```
In your scenario, the initializer might be simple enough to not need this, but it's useful to know for when you have more arguments, and/or a deeper class hierarchy. | You can make species a class attribute like this
```
class Pet(object):
    species = None

    def __init__(self, name):
        self.name = name

    def getName(self):
        return self.name

    def getSpecies(self):
        return self.species

    def __str__(self):
        return "%s is a %s" % (self.name, self.species)

class Dog(Pet):
    species = "Dog"

    def __init__(self, name, chases_cats):
        Pet.__init__(self, name)
        self.chases_cats = chases_cats

    def chasesCats(self):
        return self.chases_cats
``` | 4,559 |
67,377,043 | What is the reason why I cannot access a specific line number in the already split string?
```
a = "ABCDEFGHIJKLJMNOPRSTCUFSC"
barcode = "2"
import textwrap
prazno = textwrap.fill(a,width=5)
podeli = prazno.splitlines()
```
Here the output is correct:
```
print(podeli)
ABCDE
FGHIJ
KLJMN
OPRST
CUFSC
```
However, when I want to split one of the lines, e.g. podeli[2], by 3 characters, python just ignores it and gives the same output, as if the split of podeli[2] (line 2) had not occurred.
```
if barcode[0] == '1':
podeli[1] += ' MATA'
elif barcode[0] == '2':
podeli[1] += ' MATA'
for podeli[2] in podeli:
textwrap.fill(podeli[2], width=3)
podeli[2].splitlines()
podeli[2] += ' MATA'
```
The expected output would be:
```
ABCDE MATA
FGH MATA
IJ
KLJMN
OPRST
CUFSC
```
Is there a way to split the line by a certain length and its order number?
Thank you, guys! | 2021/05/03 | [
"https://Stackoverflow.com/questions/67377043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15827381/"
] | You can solve your immediate problem by rebuilding the list, but I fear you have a more general problem that you haven't told us.
```
if barcode[0] == '1':
podeli[1] += ' MATA'
elif barcode[0] == '2':
podeli[1] += ' MATA'
line2 = textwrap.fill(podeli[2], width=3).splitlines()
podeli = podeli[0:2] + line2 + podeli[3:]
podeli[2] += ' MATA'
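
# A generalized helper along the same lines (my addition, not part of the
# original answer): replace line i of podeli with its width-sized chunks.
def split_line(podeli, i, width=3):
    return podeli[:i] + textwrap.fill(podeli[i], width=width).splitlines() + podeli[i+1:]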
``` | Try this, it should split a string you specified by a width of 3:
```
n = 3
[((podeli[2])[i:i+n]) for i in range(0, len(podeli[2]), n)]
``` | 4,560 |
40,882,899 | I am using python+bs4+pyside in the code ,please look the part of the code below:
```
#coding:gb2312
import urllib2
import sys
import urllib
import urlparse
import random
import time
from datetime import datetime, timedelta
import socket
from bs4 import BeautifulSoup
import lxml.html
from PySide.QtGui import *
from PySide.QtCore import *
from PySide.QtWebKit import *
def download(self, url, headers, proxy, num_retries, data=None):
print 'Downloading:', url
request = urllib2.Request(url, data, headers or {})
opener = self.opener or urllib2.build_opener()
if proxy:
proxy_params = {urlparse.urlparse(url).scheme: proxy}
opener.add_handler(urllib2.ProxyHandler(proxy_params))
try:
response = opener.open(request)
html = response.read()
code = response.code
except Exception as e:
print 'Download error:', str(e)
html = ''
if hasattr(e, 'code'):
code = e.code
if num_retries > 0 and 500 <= code < 600:
# retry 5XX HTTP errors
return self._get(url, headers, proxy, num_retries-1, data)
else:
code = None
return {'html': html, 'code': code}
def crawling_hdf(openfile):
filename = open(openfile,'r')
namelist = filename.readlines()
app = QApplication(sys.argv)
for name in namelist:
url = "http://so.haodf.com/index/search?type=doctor&kw="+ urllib.quote(name)
#get doctor's home page
D = Downloader(delay=DEFAULT_DELAY, user_agent=DEFAULT_AGENT, proxies=None, num_retries=DEFAULT_RETRIES, cache=None)
html = D(url)
soup = BeautifulSoup(html)
tr = soup.find(attrs={'class':'docInfo'})
td = tr.find(attrs={'class':'docName font_16'}).get('href')
print td
#get doctor's detail information page
loadPage_bs4(td)
filename.close()
if __name__ == '__main__':
crawling_hdf("name_list.txt")
```
After I run the program, a warning message is shown:
**Warning (from warnings module):
File "C:\Python27\lib\site-packages\bs4\dammit.py", line 231
"Some characters could not be decoded, and were "
UnicodeWarning: Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.**
I have used ***print str(html)*** and found that all the Chinese text in the tags is garbled.
I have tried the "decode or encode" and "gzip" solutions found on this website, but they don't work in my case.
Thank you very much for your help! | 2016/11/30 | [
"https://Stackoverflow.com/questions/40882899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7229399/"
] | Try changing your foreach loop to this:
```
foreach ($list as $key => $value) {
echo $value->title." || ";
echo $value->link." ";
echo nl2br("\n");
}
```
Hope this works for you. | What you did was loop over the keys and values separately and then try to get the values from the keys of the stdClass; what you need to do is loop over the decoded items directly. I also used `json_decode($json_str, true)` to get the result as an array instead of an stdClass.
```
$json_str = '[{"title":"root","link":"one"},{"title":"branch","link":"two"},{"title":"leaf","link":"three"}]';
$json_decoded = json_decode($json_str, true);
foreach($json_decoded as $object)
{
echo $object['title'];
echo $object['link'];
}
``` | 4,561 |
66,012,040 | Running this DAG in Airflow gives the error "Task exited with return code Negsignal.SIGABRT".
I am not sure what I have done wrong:
```
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
from airflow.utils.dates import days_ago
SNOWFLAKE_CONN_ID = 'snowflake_conn'
# TODO: should be able to rely on connection's schema, but currently param required by S3ToSnowflakeTransfer
# SNOWFLAKE_SCHEMA = 'schema_name'
#SNOWFLAKE_STAGE = 'stage_name'
SNOWFLAKE_WAREHOUSE = 'SF_TUTS_WH'
SNOWFLAKE_DATABASE = 'KAFKA_DB'
SNOWFLAKE_ROLE = 'sysadmin'
SNOWFLAKE_SAMPLE_TABLE = 'sample_table'
CREATE_TABLE_SQL_STRING = (
f"CREATE OR REPLACE TRANSIENT TABLE {SNOWFLAKE_SAMPLE_TABLE} (name VARCHAR(250), id INT);"
)
SQL_INSERT_STATEMENT = f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"
SQL_LIST = [SQL_INSERT_STATEMENT % {"id": n} for n in range(0, 10)]
default_args = {
'owner': 'airflow',
}
dag = DAG(
'example_snowflake',
default_args=default_args,
start_date=days_ago(2),
tags=['example'],
)
snowflake_op_sql_str = SnowflakeOperator(
task_id='snowflake_op_sql_str',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=CREATE_TABLE_SQL_STRING,
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_with_params = SnowflakeOperator(
task_id='snowflake_op_with_params',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=SQL_INSERT_STATEMENT,
parameters={"id": 56},
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_sql_list = SnowflakeOperator(
task_id='snowflake_op_sql_list', dag=dag, snowflake_conn_id=SNOWFLAKE_CONN_ID, sql=SQL_LIST
)
snowflake_op_sql_str >> [
snowflake_op_with_params,
snowflake_op_sql_list,]
```
**The Airflow task logs are below:**
```
Reading local file: /Users/aashayjain/airflow/logs/snowflake_test/snowflake_op_with_params/2021-02-02T13:51:18.229233+00:00/1.log
[2021-02-02 19:21:38,880] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:1017} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,887] {taskinstance.py:1018} INFO - Starting attempt 1 of 1
[2021-02-02 19:21:38,887] {taskinstance.py:1019} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,892] {taskinstance.py:1038} INFO - Executing <Task(SnowflakeOperator): snowflake_op_with_params> on 2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,895] {standard_task_runner.py:51} INFO - Started process 16510 to run task
[2021-02-02 19:21:38,901] {standard_task_runner.py:75} INFO - Running: ['airflow', 'tasks', 'run', 'snowflake_test', 'snowflake_op_with_params', '2021-02-02T13:51:18.229233+00:00', '--job-id', '7', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/snowflake_test.py', '--cfg-path', '/var/folders/6h/1pzt4pbx6h32h6p5v503wws00000gp/T/tmp1w61m38s']
[2021-02-02 19:21:38,903] {standard_task_runner.py:76} INFO - Job 7: Subtask snowflake_op_with_params
[2021-02-02 19:21:38,933] {logging_mixin.py:103} INFO - Running <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [running]> on host 1.0.0.127.in-addr.arpa
[2021-02-02 19:21:38,954] {taskinstance.py:1232} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=snowflake_test
AIRFLOW_CTX_TASK_ID=snowflake_op_with_params
AIRFLOW_CTX_EXECUTION_DATE=2021-02-02T13:51:18.229233+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,955] {snowflake.py:119} INFO - Executing: INSERT INTO TEST_TABLE VALUES ('name', %(id)s)
[2021-02-02 19:21:38,961] {base.py:74} INFO - Using connection to: id: snowflake_conn. Host: uva00063.us-east-1.snowflakecomputing.com, Port: None, Schema: , Login: aashay, Password: XXXXXXXX, extra: XXXXXXXX
[2021-02-02 19:21:38,963] {connection.py:218} INFO - Snowflake Connector for Python Version: 2.3.7, Python Version: 3.7.3, Platform: Darwin-19.5.0-x86_64-i386-64bit
[2021-02-02 19:21:38,964] {connection.py:769} INFO - This connection is in OCSP Fail Open Mode. TLS Certificates would be checked for validity and revocation status. Any other Certificate Revocation related exceptions or OCSP Responder failures would be disregarded in favor of connectivity.
[2021-02-02 19:21:38,964] {connection.py:785} INFO - Setting use_openssl_only mode to False
[2021-02-02 19:21:38,996] {local_task_job.py:118} INFO - Task exited with return code Negsignal.SIGABRT
```
apache-airflow==2.0.0
python 3.7.3
Looking forward to your help with this. Let me know if I need to provide any more details about the code or Airflow. | 2021/02/02 | [
"https://Stackoverflow.com/questions/66012040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10110307/"
] | I ran into the same error on Mac OSX - which looks like the OS you are using based on the local file path.
Adding the following to my Airflow *scheduler* session fixed the problem:
```
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
```
The cause is described in the [Airflow docs](https://airflow.apache.org/blog/airflow-1.10.10/#running-airflow-on-macos):
>
> This error occurs because of added security to restrict
> multiprocessing & multithreading in Mac OS High Sierra and above.
>
>
>
The doc refers to version 1.10.x - but based on local test environments this also resolves the error for v2.x.x.
**kaxil** comment in the following issue actually guided me to the relevant Airflow page:
<https://github.com/apache/airflow/issues/12808#issuecomment-738854764> | You are executing:
```
snowflake_op_with_params = SnowflakeOperator(
task_id='snowflake_op_with_params',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=SQL_INSERT_STATEMENT,
parameters={"id": 56},
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
```
This tries to run the `sql` in `SQL_INSERT_STATEMENT`.
So it executes:
```
f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"
```
which gives:
```
INSERT INTO sample_table VALUES ('name', %(id)s)
```
As shown in your own log:
```
[2021-02-02 19:21:38,955] {snowflake.py:119} INFO - Executing: INSERT INTO TEST_TABLE VALUES ('name', %(id)s)
```
This is not a valid SQL statement.
I can't really tell what SQL you wanted to execute. Based on `SQL_LIST`, I can assume that `%(id)s` is supposed to be an id of integer type. | 4,564
32,103,424 | Coming from [Python recursively appending list function](https://stackoverflow.com/questions/32102420/python-recursively-appending-list-function)
Trying to recursively get a list of permissions associated with a file structure.
I have this function:
```
def get_child_perms(self, folder, request, perm_list):
    # Folder contains other folders
    if folder.get_children():
        # For every sub-folder
        return [self.get_child_perms(subfolder, request, perm_list) for subfolder in folder.get_children()]
    return folder.has_read_permission(request)
```
That returns all the results except the folders that contain other folders.
```
folder <- Missing (allowed)
subfolder <- Missing (restricted)
subsubfolder <- Get this (restricted)
files
```
Output from function would be
[True, False, False]
another case would be, where A = allowed, R = restricted
```
folder A
subfolder A
subsubfolder R
files
files
subfolder R
files
subfolder A
subsubfolder A
files
files
subfolder A
files
files
```
Output would be
[True,True,False,False,True,True,True] | 2015/08/19 | [
"https://Stackoverflow.com/questions/32103424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4941891/"
] | The basic issue occurs you are only returning the `folder permission` , when folder does not have any children , when it has children, you are not including the `folder.has_read_permission(request)` in your return result , which is most probably causing you issue. You need to do -
```
def get_child_perms(self, folder, request, perm_list):
# Folder contains other folders
if folder.get_children():
# For every sub-folder
return [folder.has_read_permission(request)] + [self.get_child_perms(subfolder, request, perm_list) for subfolder in folder.get_children()]
return [folder.has_read_permission(request)]
```
This should result in (not tested) -
```
[folderperm [subfolderperm [subsubfolderperm]]
```
--- | why not [os.walk](https://docs.python.org/3/library/os.html?highlight=walk#os.walk)
>
> When topdown is True, the caller can modify the dirnames list in-place
> (perhaps using del or slice assignment), and walk() will only recurse
> into the subdirectories whose names remain in dirnames; this can be
> used to prune the search, impose a specific order of visiting, or even
> to inform walk() about directories the caller creates or renames
> before it resumes walk() again. Modifying dirnames when topdown is
> False is ineffective, because in bottom-up mode the directories in
> dirnames are generated before dirpath itself is generated.
>
>
>
For example, you can build a generator (lazy list) that yields only non-restricted directories:
```
for (dirpath, dirnames, filenames) in os.walk("top_path"):
    if restricted(dirpath):
        dirnames[:] = []  # prune in place; rebinding or deleting the local name would not stop recursion
        continue
    yield (dirpath, tuple(filenames))
``` | 4,569 |
26,643,903 | I'm trying to use python requests to PUT a .pmml model to a local openscoring server.
This works (from directory containing DecisionTreeIris.pmml):
```
curl -X PUT --data-binary @DecisionTreeIris.pmml -H "Content-type: text/xml" http://localhost:8080/openscoring/model/DecisionTreeIris
```
This doesn't:
```
import requests
file = '/Users/weitzenfeld/IntelliJProjects/openscoring/openscoring-server/etc/DecisionTreeIris.pmml'
r = requests.put('http://localhost:8080/openscoring/model/DecisionTreeIris', files={'file': open(file, 'rb')})
r.text
```
returns:
```
u'<html>\n<head>\n<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>\n<title>Error 415 </title>\n</head>\n<body>\n<h2>HTTP ERROR: 415</h2>\n<p>Problem accessing /openscoring/model/DecisionTreeIris. Reason:\n<pre> Unsupported Media Type</pre></p>\n<hr /><i><small>Powered by Jetty://</small></i>\n</body>\n</html>\n'
```
I also tried:
```
r = requests.put('http://localhost:8080/openscoring/model/DecisionTreeIris', files={'file': open(file, 'rb')}, headers={'Content-type': 'text/xml', 'Accept': 'text/xml'})
r.text
```
which returns:
```
u'<html>\n<head>\n<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>\n<title>Error 406 </title>\n</head>\n<body>\n<h2>HTTP ERROR: 406</h2>\n<p>Problem accessing /openscoring/model/DecisionTreeIris. Reason:\n<pre> Not Acceptable</pre></p>\n<hr /><i><small>Powered by Jetty://</small></i>\n</body>\n</html>\n'
```
Note that my python attempt is the same as in the accepted answer to this question: [Using Python to PUT PMML](https://stackoverflow.com/questions/24320021/using-python-to-put-pmml).
Also, someone with >1500 rep should consider making an 'openscoring' tag. | 2014/10/30 | [
"https://Stackoverflow.com/questions/26643903",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/829332/"
] | You are trying to store Components in the same variable. This is wrong:
```
var enemy1Clone : Transform = Instantiate(enemy1, transform.position, transform.rotation);
enemy1Clone.GetComponent("enemyscript");
enemy1Clone.GetComponent(Animator);
```
`enemy1Clone` is of type `Transform`, don't try putting `enemyscript` or `Animator` Components inside of it.
You don't need those two `GetComponent` calls, so just discard them.
---
I have completely changed the logic for your game. It works now.
**Player.js**
```
function Awake()
{
GUIcombo.text = "COMBO: ";
GUIlives.text = "LIVES: 3";
}
function Start ()
{
anim = GetComponent(Animator);
}
function Update ()
{
if(lives <= 0)
Destroy(Player);
Player.transform.position.x = -4.325;
if(!jumped)
anim.SetFloat("hf",0.0);
if(Input.GetButtonDown("Fire1") && !jumped)
{
jumpup();
jumped = true;
anim.SetFloat("hf",1);
}
}
function OnCollisionEnter2D(coll: Collision2D)
{
var g : GameObject = coll.gameObject;
if(g.CompareTag("ground"))
{
anim.SetFloat("hf",0.0);
jumped=false;
combo = 0;
GUIcombo.text = "COMBO: " + combo;
}
else if(g.CompareTag("enemy") && jumped)
{
// Notify the Enemy to die.
g.SendMessage("die");
jumpup();
jumped=true;
combo += 1;
GUIcombo.text = "COMBO: " + combo;
}
else if(g.CompareTag("enemy") && !jumped)
{
lives -=1;
GUIlives.text = "LIVES: " + lives;
}
}
function slam()
{
Player.rigidbody2D.AddForce(new Vector2(0,-3000), ForceMode2D.Force);
}
function glide()
{
Player.rigidbody2D.AddForce(Vector2(0,600), ForceMode2D.Force);
}
function jumpup()
{
Player.transform.Translate(Vector3(Input.GetAxis("Vertical") * speed * Time.deltaTime, 0, 0));
Player.rigidbody2D.velocity = Vector2(0,10);
if(jumplevel2)
Player.rigidbody2D.velocity = Vector2(0,13);
if(jumplevel3)
Player.rigidbody2D.velocity = Vector2(0,16);
}
```
**Enemy.js**
```
#pragma strict
var enemy : GameObject;
var speed : float = 1.0;
var anim : Animator;
var isDead : boolean = false;
function Start()
{
anim = GetComponent(Animator);
anim.SetFloat("EnemyDie", 0);
enemy.transform.position.x = 8.325;
enemy.transform.position.y = -1.2;
}
function Update()
{
if(!isDead)
{
enemy.transform.Translate(Vector3(Input.GetAxis("Horizontal") * speed * Time.deltaTime, 0, 0));
enemy.rigidbody2D.velocity = Vector2(-5, 0);
}
}
// Call here when enemy should die.
function die()
{
if(!isDead)
{
isDead = true;
anim.SetFloat("EnemyDie", 1);
yield WaitForSeconds(0.5f);
Destroy(gameObject);
}
}
``` | If the **Animator** is not initialized, you have to re-order the components: put the **Animator** component right below the **Transform** component and you should be fine. | 4,570
56,496,458 | The [logging docs](https://docs.python.org/2/library/logging.html) don't mention what the default logger obtained from [`basicConfig`](https://docs.python.org/2/library/logging.html#logging.basicConfig) writes to: stdout or stderr.
What is the default behavior? | 2019/06/07 | [
"https://Stackoverflow.com/questions/56496458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1804173/"
] | If no `filename` argument is passed to [logging.basicConfig](https://docs.python.org/2/library/logging.html#logging.basicConfig) it will configure a `StreamHandler`. If a `stream` argument is passed to `logging.basicConfig` it will pass this on to `StreamHandler`; otherwise `StreamHandler` defaults to using `sys.stderr`, as can be seen from the [StreamHandler docs](https://docs.python.org/2/library/logging.handlers.html#logging.StreamHandler)
>
> class logging.StreamHandler(stream=None)
>
>
> Returns a new instance of the StreamHandler class. If stream is specified, the instance will use it for logging output; otherwise, sys.stderr will be used.
>
>
>
and the [source code](https://github.com/python/cpython/blob/2bfc2dc214445550521074f428245b502d215eac/Lib/logging/__init__.py#L827):
```
class StreamHandler(Handler):
"""
A handler class which writes logging records, appropriately formatted,
to a stream. Note that this class does not close the stream, as
sys.stdout or sys.stderr may be used.
"""
def __init__(self, stream=None):
"""
Initialize the handler.
If stream is not specified, sys.stderr is used.
"""
Handler.__init__(self)
if stream is None:
stream = sys.stderr
self.stream = stream
``` | Apparently the default is stderr.
A quick check: Using a minimal example
```py
import logging
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
logger.info("test")
```
and running it with `python test.py 1> /tmp/log_stdout 2> /tmp/log_stderr` results in an empty stdout file, but a non-empty stderr file.
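If you want the output on stdout instead, `basicConfig` accepts an explicit stream (a small sketch using only the standard library):
```py
import logging
import sys

logging.basicConfig(level=logging.INFO, stream=sys.stdout)
logging.getLogger(__name__).info("this now goes to stdout")
``` | 4,571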
9,222,129 | I have three python functions:
```
def decorator_function(func)
    def wrapper(..)
        return func(*args, **kwargs)
    return wrapper

def plain_func(...)

@decorator_func
def wrapped_func(....)
```
inside a module A.
Now I want to get all the functions inside this module A, for which I do:
```
for fname, func in inspect.getmembers(A, inspect.isfunction):
# My code
```
The problem here is that the value of func is not what I want it to be.
It would be `decorator_function`, `plain_func` and `wrapper` (instead of `wrapped_func`).
How can I make sure that `wrapped_func` is returned instead of `wrapper`? | 2012/02/10 | [
"https://Stackoverflow.com/questions/9222129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334909/"
] | If all you want is to keep the original function name visible from "the outside" I think that you can do that with
```
@functools.wraps
```
applied to the wrapper function inside your decorator.
Here's the example from the standard docs:
```
>>> from functools import wraps
>>> def my_decorator(f):
... @wraps(f)
... def wrapper(*args, **kwds):
... print('Calling decorated function')
... return f(*args, **kwds)
... return wrapper
...
>>> @my_decorator
... def example():
... """Docstring"""
... print('Called example function')
...
>>> example()
Calling decorated function
Called example function
>>> example.__name__
'example'
>>> example.__doc__
'Docstring'
``` | You can access the pre-decorated function with (in Python 3 the attribute is spelled `__closure__` rather than `func_closure`):
```
wrapped_func.func_closure[0].cell_contents()
```
For example,
```
def decorator_function(func):
def wrapper(*args, **kwargs):
print('Bar')
return func(*args, **kwargs)
return wrapper
@decorator_function
def wrapped_func():
print('Foo')
wrapped_func.func_closure[0].cell_contents()
```
prints
```
Foo # Note, `Bar` was not also printed
```
But really, if you know you want to access the pre-decorated function, then it would be a whole lot cleaner to define
```
def wrapped_func():
print('Foo')
deco_wrapped_func = decorator_function(wrapped_func)
```
So `wrapped_func` will be the pre-decorated function, and
`deco_wrapped_func` will be the decorated version. | 4,572 |
13,400,876 | >
> **Possible Duplicate:**
>
> [Python’s most efficient way to choose longest string in list?](https://stackoverflow.com/questions/873327/pythons-most-efficient-way-to-choose-longest-string-in-list)
>
>
>
I have a list L
```
L = [[1,2,3],[5,7],[1,3],[77]]
```
I want to return the length of the longest sublist without needing to loop through them, in this case 3 because [1,2,3] is length 3 and it is the longest of the four sublists. I tried len(max(L)) but this doesn't do what I want. Any way to do this or is a loop my only way? | 2012/11/15 | [
"https://Stackoverflow.com/questions/13400876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1538364/"
] | `max(L,key=len)` will give you the object with the longest length (`[1,2,3]` in your example) -- To actually get the length (if that's all you care about), you can do `len(max(L,key=len))` which is a bit ugly -- I'd break it up onto 2 lines. Or you can use the version supplied by ecatamur.
All of these answers have loops -- in my case, the loops are *implicit*, which usually means they'll be executed in optimized native machine code. If you think about it, how could you know which element is the longest without looking at each one?
---
Finally, note that `key=function` isn't a feature that is specific to `max`. A lot of the python builtins (`max`,`min`,`sorted`,`itertools.groupby`,...) use this particular keyword argument. It's definitely worth investing a little time to understand how it works and what it typically does. | Try a comprehension:
```
max(len(l) for l in L)
``` | 4,573 |
50,506,478 | I am Gaurav and I am learning programming. I was reading about regular expressions in Dive Into Python 3, so I thought I'd try something myself. I wrote this code in Eclipse but I got a lot of errors. Can anyone please help me?
```
import re
def add_shtner(add):
return re.sub(r"\bROAD\b","RD",add)
print(add_shtner("100,BROAD ROAD"))
# a code to check valid roman no.
ptn=r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3}$)"
def romancheck(num):
num=num.upper()
if re.search(ptn,num):
return "VALID"
else:
return "INVALID"
print(romancheck("MMMLXXVIII"))
print(romancheck("MMMLxvviii"))
mul_line_str='''adding third argument
re.VERBOSE in re.search()
will ignore whitespace
and comments'''
print(re.search("re.search()will",mul_line_str,re.VERBOSE))
print(re.search("re.search() will",mul_line_str,re.VERBOSE))
print(re.search("ignore",mul_line_str,re.VERBOSE))
ptn='''
^ #beginning of the string
M{0,3} #thousands-0 to 3 M's
(CM|CD|D?C{0,3} #hundreds
(XC|XL|L?XXX) #tens
(IX|IV|V?III) #ones
$ #end of the string
'''
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
def romanCheck(num):
num=num.upper()
if re.search(ptn,num,re.VERBOSE):
return "VALID"
else:
return "INVALID"
print(romanCheck("mmCLXXXIV"))
print(romanCheck("MMMCCLXXXiv"))
```
I wrote this code and I ran it, but I got this:
```
100,BROAD RD
VALID
INVALID
None
None
<_sre.SRE_Match object; span=(120, 126), match='ignore'>
Traceback (most recent call last):
File "G:\pydev\xyz\rts\regular_expressions.py", line 46, in <module>
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 182, in search
return _compile(pattern, flags).search(string)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 768, in _parse
source.tell() - start)
sre_constants.error: missing ), unterminated subpattern at position 113 (line 4, column 6)
```
What are these errors? Can anyone help me?
I have understood all the output, but I am not able to understand these errors. | 2018/05/24 | [
"https://Stackoverflow.com/questions/50506478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5925439/"
] | This is supported from version `@angular/cli@6.1.0-beta.2`.
---
**Update** (November 2019):
**[The `fileReplacements` solution does not work with
Angular CLI 8.0.0 anymore](https://github.com/angular/angular-cli/issues/14599#issuecomment-527131237)**
Change `angular.json` to set your production `index.html` instead:
```
"architect": {
  "build": {
    "options": {
      "index": "src/index.html", //non-prod
    },
    "configurations": {
      "production": {
        "index": { //prod index replacement
          "input": "src/index.prod.html",
          "output": "index.html"
        }
      }
    }
  }
}
``` | I'm using Angular 8 and I don't know why it doesn't work on my side.
But I can specify the location of the `index.html` for any build configuration (and it looks like for `serve` too): <https://stackoverflow.com/a/57274333/3473303> | 4,574
63,997,745 | ```py
string = "This is a test string. It has 44 characters." #Line1
for i in range(len(string) // 10): #Line2
result= string[10 * i:10 * i + 10] #Line3
print(result) #Line4
```
I want to understand the above code so that I can achieve the same thing using C#
According to my understanding, in Line2:
len(string) counts the length of the above string, which is 44; dividing by 10 returns 4; [range](https://www.w3schools.com/python/trypython.asp?filename=demo_ref_range)(4) returns 0, 1, 2, 3, so the `for` loop will run 4 times to print `result`.
I confirmed how Line3 works by adding below statements in the python code:
```py
print(string[0:10])
print(string[10:20])
print(string[20:30])
print(string[30:40])
```
The output of both were:
```
This is a
test strin
g. It has
44 charact
```
**I tried the below code in C# to achieve the same which didn't print anything:**
```cs
string str = "This is a test string. It has 44 characters.";
foreach (int i in Enumerable.Range(0, str.Length / 10))
{
string result = str[(10*i)..(10*i+10)];
Console.WriteLine(result);
}
``` | 2020/09/21 | [
"https://Stackoverflow.com/questions/63997745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14311349/"
] | You can do:
```
colSums(df) != 0
A B C
TRUE TRUE FALSE
``` | Maybe `apply()` can be useful:
```
#Data
df = data.frame("A" = c("TRUE","FALSE","FALSE","FALSE"),
"B" = c("FALSE","TRUE","TRUE","FALSE"),
"C" = c("FALSE","FALSE","FALSE","FALSE"))
#Apply
apply(df,2,function(x) any(x=='TRUE'))
```
Output:
```
A B C
TRUE TRUE FALSE
```
Or setting logical values:
```
#Data 2
df = data.frame("A" = c(TRUE,FALSE,FALSE,FALSE),
"B" = c(FALSE,TRUE,TRUE,FALSE),
"C" = c(FALSE,FALSE,FALSE,FALSE))
#Apply
apply(df,2,function(x) any(x==TRUE))
```
Output:
```
A B C
TRUE TRUE FALSE
``` | 4,583 |
1,621,521 | Is there a program which I can run like this:
```
py2py.py < orig.py > smaller.py
```
Where orig.py contains python source code with comments and doc strings, and smaller.py contains identical, runnable source code but without the comments and doc strings?
Code which originally looked like this:
```
#/usr/bin/python
"""Do something
blah blah...
"""
# Beware the frubnitz!
def foo(it):
"""Foo it!"""
print it # hmm?
```
Would then look like this:
```
def foo(it):
print it
``` | 2009/10/25 | [
"https://Stackoverflow.com/questions/1621521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/196255/"
] | [This Python minifier](https://pypi.python.org/pypi/pyminifier) looks like it does what you need. | I recommend [minipy](https://github.com/gareth-rees/minipy). The most compelling reason is that it does proper analysis of the source code's abstract syntax tree, so the minified code is much more accurate. I've found that the better-known [pyminifier](https://pypi.python.org/pypi/pyminifier) tends to generate code with undefined symbol errors, misinterpreted tuples, etc. I also got a few percent better compression results with minipy. A minor benefit of minipy is that it's less than half the code size of pyminifier. It's also easier to manage and integrate into a build pipeline because it's a single standalone python file.
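If you only need the core idea rather than a full minifier, a small sketch (assuming Python 3.9+ for `ast.unparse`; an illustration, not either of the tools above) that strips docstrings and, as a side effect of round-tripping through the AST, comments as well:
```
import ast
import sys

def strip_docstrings(source):
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.FunctionDef,
                             ast.AsyncFunctionDef, ast.ClassDef)):
            body = node.body
            # A docstring is a bare string constant as the first statement.
            if (body and isinstance(body[0], ast.Expr)
                    and isinstance(body[0].value, ast.Constant)
                    and isinstance(body[0].value.value, str)):
                node.body = body[1:]
            if not node.body:
                node.body = [ast.Pass()]  # keep the block syntactically valid
    # Comments never make it into the AST, so unparsing drops them too.
    return ast.unparse(tree)

if __name__ == "__main__":
    sys.stdout.write(strip_docstrings(sys.stdin.read()) + "\n")

# usage: python py2py.py < orig.py > smaller.py
``` | 4,586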
29,853,907 | I was working with django-1.8.1 and everything was good, but when I tried to run my server again with the command below, I got some errors.
Command: `python manage.py runserver`
Errors that appeared on the command line:
```
> Traceback (most recent call last):
File "C:\Python34\lib\wsgiref\handlers.py", line 137, in run
self.result = application(self.environ, self.start_response)
File "C:\Python34\lib\site-packages\django\contrib\staticfiles\handlers.py", l
ine 63, in __call__
return self.application(environ, start_response)
File "C:\Python34\lib\site-packages\django\core\handlers\wsgi.py", line 170, i
n __call__
self.load_middleware()
File "C:\Python34\lib\site-packages\django\core\handlers\base.py", line 50, in
load_middleware
mw_class = import_string(middleware_path)
File "C:\Python34\lib\site-packages\django\utils\module_loading.py", line 24,
in import_string
six.reraise(ImportError, ImportError(msg), sys.exc_info()[2])
File "C:\Python34\lib\site-packages\django\utils\six.py", line 658, in reraise
>
> raise value.with_traceback(tb) File "C:\Python34\lib\site-packages\django\utils\module_loading.py", line
> 21, in import_string
> module_path, class_name = dotted_path.rsplit('.', 1) ImportError: polls doesn't look like a module path [24/Apr/2015 21:50:25]"GET /
> HTTP/1.1" 500 59
```
Error that appeared in the browser:
>
> A server error occurred. Please contact the administrator.
>
>
>
Also, I searched the web for this issue and found some suggestions, but they did not work for me.
This is my settings.py:
```
"""
Django settings for havij project.
Generated by 'django-admin startproject' using Django 1.8.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '9va4(dd_pf#!(2-efc-ipiz@7fgb8y!d(=5gdmie0cces*9lih'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'polls' ,
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'polls',
)
ROOT_URLCONF = 'havij.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'havij.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_URL = '/static/'
```
And here is the urls.py:
```
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
# Examples:
# url(r'^$', 'havij.views.home', name='home'),
# url(r'^blog/', include('blog.urls')),
url(r'^admin/', include(admin.site.urls)),
]
```
Thanks for your answers. | 2015/04/24 | [
"https://Stackoverflow.com/questions/29853907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4701816/"
] | Looks like a mixed solution to me.
Typically `TcpListener listener = new TcpListener(8888);` waits for connections on the given port.
Then, when it accepts a connection from a client, it establishes the connection on a different socket (`Socket socket = listener.AcceptSocket();`) so that the listening port keeps awaiting other connections (other clients).
Then we can read from the client that connected to the server through a stream: `Stream stream = new NetworkStream(socket);`.
`TcpClient` is used to connect to such a server, and it should be used in the client application, not in the server one. | The term client has two definitions. At the application level you have a client application and a server application. The client is the master and the server is the slave. At the socket level, both the client application and the server application have a client (also called a socket).
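To make the two roles concrete, a minimal Python sketch of the same idea (illustrative only):
```
import socket

# Server side: the listening socket stays bound to the port, while each
# accepted connection gets its own socket.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8888))
server.listen(5)                 # keep awaiting new clients
conn, addr = server.accept()     # a new socket per accepted client
data = conn.recv(1024)           # read what the client sent
conn.close()

# Client side (run in a separate process): connects to the server address.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8888))
client.sendall(b"hello")
client.close()
```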
The server socket listens at the loopback address 127.0.0.1 (or IPAny), while the client socket connects to the server's IP address. | 4,587
16,251,016 | Hi all, I want a web-based GUI testing tool. I found that dogtail is written in Python, but I did not find any good tutorials or examples to go further. Please guide me: is dogtail suitable, or is there something better than this in Python? If so, please share docs and an example.
My requirement:
A DVR continuously shows live video on a tile (4 x 4) layout; the GUI is web based (Mozilla). I should be able to swap videos, check the log, and compare the actual result with what is presented. | 2013/04/27 | [
"https://Stackoverflow.com/questions/16251016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2017556/"
] | [Selenium](https://pypi.python.org/pypi/selenium) is designed exactly for this: it allows you to control the browser in Python and check that things are as expected (e.g. check if a specific element exists, submit a form, etc.).
There are [some more examples in the documentation](http://selenium-python.readthedocs.org/en/latest/getting-started.html).
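A minimal hedged sketch of what such a check can look like (the URL and element id are hypothetical placeholders):
```
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/dvr")            # hypothetical DVR page
tile = driver.find_element_by_id("video-tile")  # hypothetical element id
assert tile.is_displayed()                      # fails if the tile is hidden
driver.quit()
```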
[Project Sikuli](http://www.sikuli.org/) is a similar tool, but is more general than just web browsers. | Selenium provides a Python interface rather than just recording your mouse movements; see <http://selenium-python.readthedocs.org/en/latest/api.html>
If you need to check your video frames, you can record them locally and OCR the frames, looking for some expected text or timecode. | 4,588
52,928,809 | Hi I am a beginner in python coding!
This is my code:
```
while True:
try:
x=raw_input("Please enter a word: ")
break
except ValueError:
print( "Sorry it is not a word. try again")
```
The main aim of this code is to check the input: if the input is a string then it's OK, but when the input is an integer it should be an error. My problem is that the code accepts input in integer format too, and I don't get the error message. Can you help me find the mistake? | 2018/10/22 | [
"https://Stackoverflow.com/questions/52928809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10540371/"
] | Try using KVO, it works for me:
```
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
[[AVAudioSession sharedInstance] setActive:YES error:nil];
[[AVAudioSession sharedInstance] addObserver:self forKeyPath:@"outputVolume" options:0 context:nil];
return YES;
}
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
NSLog(@"Volumechanged");
}
```
Note that you need the line `[[AVAudioSession sharedInstance] setActive:YES error:nil];` (if it is omitted, you will not get notifications).
Also import AVFoundation.
As stated here:
<https://developer.apple.com/documentation/avfoundation/avaudiosession/1616533-outputvolume?language=objc>
**"You can observe changes to the value of this property by using Key-value observing."** | Apparently, my code worked OK. The problem was with my laptop: it had some volume issue, so when I was changing the volume on the simulator, the volume didn't really change. When I switched to a real device, everything worked properly. | 4,591
57,745,554 | I am trying to write a script in Python so I can find, in one second, the COM number of the USB serial adapter I have plugged into my laptop.
What I need is to isolate the COMx port so I can display the result and open putty with that specific port. Can you help me with that?
Until now I have written a script in batch/PowerShell and I am getting this information, but I haven't been able to separate out the text of the COMx port so I can call the putty program with the serial parameter.
I have also been able to find the port via Python, but I can't isolate it from the string.
```
import re # Used for regular expressions (unused)
import os # To check that the path of the files defined in the config file exist (unused)
import sys # To leave the script if (unused)
import numpy as np
from infi.devicemanager import DeviceManager
dm = DeviceManager()
dm.root.rescan()
devs = dm.all_devices
print ('Size of Devs: ',len(devs))
print ('Type of Devs: ',type(devs))
myarray = ([])
myarray =np.array(devs)
print ('Type of thing: ',type(myarray))
match = '<USB Serial Port (COM6)>'  # custom match; ideally this would be "USB Serial Port"
i=0
#print (myarray, '\n')
while i != len(devs):
if match == myarray[i]:
print ('Found it!')
break
print ('array: ',i," : ", myarray[i])
i = i+1
print ('array 49: ', myarray[49])  # here I was checking what the "element" inside the array looks like
print ('match : ', match)  # and how it differs from what I submitted
print ('end')
```
I was expecting `if match == myarray[i]` to match the two elements, but for some reason it doesn't. It tells me that those two are not the same.
Thank you for any help in advance!
=== UPDATE ===
Full script can be found here
<https://github.com/elessargr/k9-serial> | 2019/09/01 | [
"https://Stackoverflow.com/questions/57745554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4964403/"
] | This is a follow-up answer from @MacrosG.
I tried a minimal example with properties from Device:
```
from infi.devicemanager import DeviceManager
dm = DeviceManager()
dm.root.rescan()
devs = dm.all_devices
print ('Size of Devs: ',len(devs))
for d in devs:
if "USB" in d.description :
print(d.description)
``` | If Python says the strings are not the same I dare say it's quite likely they are not.
You can compare with:
```
if "USB Serial Port" in devs[i]:
```
Then you should be able to find not a complete letter-by-letter match, but one that contains a USB port.
There is no need to use numpy, `devs` is already a list and hence iterable. | 4,592 |
55,007,820 | I have the following problem: I'm making a script for an oVirt server to automatically delete virtual machines, which includes unregistering them from the DNS. But for some very specific virtual machines there are multiple FQDNs for one IP address, for example:
```
myfirstfqdn.com IN A 10.10.10.10
mysecondfqdn.com IN A 10.10.10.10
```
I've tried to do it with socket in Python but it returns only one answer; I've also tried Python with dnspython but I failed.
The goal is to count the number of type A records on the DNS server.
Does anyone have an idea how to do something like this? | 2019/03/05 | [
"https://Stackoverflow.com/questions/55007820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8919169/"
] | That's outright impossible. If I am in the right mood, I could add an entry to my DNS server pointing to your IP address. Generally, you cannot find it out (except for some hints in some protocols like http(s)). | Given a zone file in the above format, you could do something like...
```
from collections import defaultdict
zone_file = """myfirstfqdn.com IN A 10.10.10.10
mysecondfqdn.com IN A 10.10.10.10"""
# Build mapping for lookups
ip_fqdn_mapping = defaultdict(list)
for record in zone_file.split("\n"):
fqdn, record_class, record_type, ip_address = record.split()
ip_fqdn_mapping[ip_address].append(fqdn)
# Lookup
ip_address_to_lookup = "10.10.10.10"
fqdns = ip_fqdn_mapping[ip_address_to_lookup]
print(fqdns)
```
Note: Using socket can be done like so - [Python lookup hostname from IP with 1 second timeout](https://stackoverflow.com/questions/2575760/python-lookup-hostname-from-ip-with-1-second-timeout)
However, this does require that the DNS server you are querying has correctly configured PTR reverse records.
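For reference, a hedged sketch of that socket-based reverse lookup (it returns at most one primary hostname plus aliases, which is why it cannot enumerate every A record pointing at an address):
```
import socket

try:
    hostname, aliases, addresses = socket.gethostbyaddr("10.10.10.10")
    print(hostname, aliases)
except socket.herror:
    print("no PTR record configured for this address")
```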
<https://www.cloudns.net/wiki/article/40/> | 4,595 |
52,588,535 | I was trying to pass some arguments via PyCharm when I noticed that it behaves differently than my console. When I pass arguments with no spaces in them everything works fine, but when my arguments contain spaces the behavior diverges.
```
def main():
"""
Main function
"""
for i, arg in enumerate(sys.argv):
print('Arg#{}: {}'.format(i, arg))
```
If I run the same function:
```
python3 argumnents_tester.py 'argument 1' argument2
```
Run in **PyCharm**:
>
> Arg#0: /home/gorfanidis/PycharmProjects/test1/argparse\_test.py
>
> Arg#1: 'argument
>
> Arg#2: 1'
>
> Arg#3: argument2
>
>
>
Run in **Console**:
>
> Arg#0: argparse\_test.py
>
> Arg#1: argument 1
>
> Arg#2: argument2
>
>
>
So, PyCharm tends to ignore quotes altogether and splits the arguments using the spaces regardless of any quotes. Also, arguments with quotes are treated differently than the same arguments without quotes.
The question is why this is happening and, at a practical level, how I am supposed to pass an argument that contains spaces using PyCharm, for example?
I am using Ubuntu 16.04 by the way. | 2018/10/01 | [
"https://Stackoverflow.com/questions/52588535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584765/"
] | You can use [`String.prototype.trim()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/trim):
```js
const title1 = "DisneyLand-Paris";
const title2 = "DisneyLand-Paris ";
const trimmed = title2.trim();
console.log(title1 === title2); // false
console.log(title1 === trimmed); // true
```
So you could do:
```js
// Trim and compare both strings, and report the result.
function compareTitles(title1, title2) {
if (title1.trim() === title2.trim()) {
console.log("It's the same titles!");
}
}
compareTitles("DisneyLand-Paris", "DisneyLand-Paris "); // It's the same titles!
compareTitles("DisneyLand-Paris ", "DisneyLand-Paris"); // It's the same titles!
``` | You need to use the `trim()` method:
```
const string1 = "DisneyLand-Paris";
const string2 = "DisneyLand-Paris ";
let string3 = "";  // use `let`: a `const` binding cannot be reassigned below
if(string1 === string2.trim()){
string3 = `it's the same titles`; // template literal used because the string contains an apostrophe
console.log(string3);
}
``` | 4,596 |
11,329,212 | I have started to look into Python and am trying to grasp new things in little chunks. The latest goal I set for myself was to read a tab-separated file of floats into memory, compare values in the list, and print the values if the difference was as large as the user specified. I have written the following code for it so far:
```
#! /usr/bin/env python
value = raw_input('Please enter a mass difference:')
fh = open ( "values" );
x = []
for line in fh.readlines():
y = [float for float in line.split()]
x.append(y)
fh.close()
for i in range(0,len(x)-1):
for j in range(i,len(x)):
if x[j][0] - x[i][0] == value:
print x[i][0],x[j][0]
```
The interpreter complains that I am not allowed to subtract strings from strings (logically), but my question is: why are they strings? Shouldn't the nested list be a list of floats, since I use `float for float`?
Literal error:
```
TypeError: unsupported operand type(s) for -: 'str' and 'str'
```
I would greatly appreciate it if someone could tell me where my reasoning goes wrong ;) | 2012/07/04 | [
"https://Stackoverflow.com/questions/11329212",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1093485/"
] | Try this in place of your list comprehension:
```
y = [float(i) for i in line.split()]
```
*Explanation*:
The data you read from the file are strings; to convert them to other types you need to cast them. So in your case you want to cast your values to float via `float()`, which you tried, but not quite correctly. This should give you the results you were looking for.
If you have other values to convert, this syntax will work:
```
float_val = float(string_val)
```
assuming that `string_val` contains valid characters for a float, it will convert, otherwise you'll get an exception.
```
>>> float('3.5')
3.5
>>> float('apple')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for float(): apple
``` | The list comprehension isn't doing what you think it's doing. It's simply assigning each string to the variable `float`, and returning it. Instead you actually want to use another name and call float on it:
```
y = [float(x) for x in line.split()]
``` | 4,597 |
46,824,700 | With Rebol pick I can only get one element:
```
list: [1 2 3 4 5 6 7 8 9]
pick list 3
```
In python one can get a whole sub-list with
```
list[3:7]
``` | 2017/10/19 | [
"https://Stackoverflow.com/questions/46824700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310291/"
] | * [**AT**](http://www.rebol.com/docs/words/wat.html) can seek to a position in a list.
* [**COPY**](http://www.rebol.com/r3/docs/functions/copy.html) will copy from a position to the end of the list, by default
* the **/PART** refinement of COPY lets you add a limit to copying
Passing an integer to /PART assumes how many things you want to copy:
```
>> list: [1 2 3 4 5 6 7 8 9]
>> copy/part (at list 3) 5
== [3 4 5 6 7]
```
If you provide a series *position* to be the end, then it will copy *up to* that point, so you'd have to be past it if your range means to be inclusive.
```
>> copy/part (at list 3) (next at list 7)
== [3 4 5 6 7]
```
There have been some proposals for range dialects; I can't find any offhand. Simple code to give an idea:
```
range: func [list [series!] spec [block!] /local start end] [
if not parse spec [
set start integer! '.. set end integer!
][
do make error! "Bad range spec, expected e.g. [3 .. 7]"
]
copy/part (at list start) (next at list end)
]
>> list: [1 2 3 4 5 6 7 8 9]
>> range list [3 .. 7]
== [3 4 5 6 7]
``` | ```
>> list: [1 2 3 4 5 6 7 8 9]
== [1 2 3 4 5 6 7 8 9]
>> copy/part skip list 2 5
== [3 4 5 6 7]
```
So, you can skip to the right location in the list, and then copy as many consecutive members as you need.
If you want an equivalent function, you can write your own. | 4,599 |
20,383,924 | What I am trying to do is access the traffic meter data on my local netgear router. It's easy enough to login to it and click on the link, but ideally I would like a little app that sits down in the system tray (windows) that I can check whenever I want to see what my network traffic is.
I'm using python to try to access the router's web page, but I've run into some snags. I originally tried modified a script that would reboot the router (found here <https://github.com/ncw/router-rebooter/blob/master/router_rebooter.py>) but it just serves up the raw html and I need it after the onload javascript functions have run. This type of thing is described in many posts about web scraping and people suggested using selenium.
I tried selenium and have run into two problems. First, it actually opens the browser window, which is not what I want. Second, it skips the stuff I put in to pass the HTTP authentication and pops up the login window anyway. Here is the code:
```
from selenium import webdriver
baseAddress = '192.168.1.1'
baseURL = 'http://%(user)s:%(pwd)s@%(host)s/traffic_meter.htm'
username = 'admin'
pwd = 'thisisnotmyrealpassword'
url = baseURL % {
'user': username,
'pwd': pwd,
'host': baseAddress
}
profile = webdriver.FirefoxProfile()
profile.set_preference('network.http.phishy-userpass-length', 255)
driver = webdriver.Firefox(firefox_profile=profile)
driver.get(url)
```
So, my question is, what is the best way to accomplish what I want without having it launch a visible web browser window?
**Update:**
Okay, I tried sircapsalot's suggestion and modified the script to this:
```
from selenium import webdriver
from contextlib import closing
url = 'http://admin:notmyrealpassword@192.168.1.1/start.htm'
with closing(webdriver.Remote(desired_capabilities = webdriver.DesiredCapabilities.HTMLUNIT)) as driver:
driver.get(url)
print(driver.page_source)
```
This fixes the web browser being loaded, but it failed the authentication. Any suggestions? | 2013/12/04 | [
"https://Stackoverflow.com/questions/20383924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1777330/"
] | Okay, I found the solution and it was way easier than I thought. I did try John1024's suggestion and was able to download the proper webpage from the router using wget. However I didn't like the fact that wget saved the result to a file, which I would then have to open and parse.
I ended up going back to the original reboot\_router.py script I had attempted to modify unsuccessfully the first time. My problem was I was trying to make it too complicated. This is the final script I ended up using:
```
import urllib2
user = 'admin'
pwd = 'notmyrealpassword'
host = '192.168.1.1'
url = 'http://' + host + '/traffic_meter_2nd.htm'
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, host, user, pwd)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
response = opener.open(url)
stuff = response.read()
response.close()
print stuff
```
This prints out the entire traffic meter webpage from my router, with its proper values loaded. I can then take this and parse the values out of it. The nice thing about this is it has no external dependencies like selenium, wget or other libraries that need to be installed. Clean is good.
Thank you, everyone, for your suggestions. I wouldn't have gotten to this answer without them. | The web interface for my Netgear router (WNDR3700) is also filled with javascript. Yours may differ but I have found that my scripts can get all the info they need without javascript.
The first step is finding the correct URL. Using FireFox, I went to the traffic page and then used "This Frame -> Show only this frame" to discover that the URL for the traffic page on my router is:
```
http://my_router_address/traffic.htm
```
After finding this URL, no web browswer and no javascript is needed. I can, for example, capture this page with `wget`:
```
wget http://my_router_address/traffic.htm
```
Using a text editor on the resulting traffic.htm file, I see that the traffic data is available in a lengthy block that starts:
```
var traffic_today_time="1486:37";
var traffic_today_up="1,959";
var traffic_today_down="1,945";
var traffic_today_total="3,904";
. . . .
```
Thus, the `traffic.htm` file can be easily captured and parsed with the scripting language of your choice. No javascript ever needs to be executed.
UPDATE: I have a `~/.netrc` file with a line in it like:
```
machine my_router_address login someloginname password somepassword
```
Before `wget` downloads from the router, it retrieves the login info from this file. This has security advantages. If one runs `wget http://name@password...`, then the password is viewable to all on your machine via the process list (`ps a`). Using `.netrc`, this never happens. Restrictive permissions can be set on `.netrc`, e.g. readable only by user (`chmod 400 ~/.netrc`). | 4,600 |
29,516,084 | Getting the following error when trying to install Pandas (0.16.0), which is in my requirements.txt file, on AWS Elastic Beanstalk EC2 instance:
```
building 'pandas.msgpack' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -D__LITTLE_ENDIAN__=1 -Ipandas/src/klib -Ipandas/src -I/opt/python/run/venv/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c pandas/msgpack.cpp -o build/temp.linux-x86_64-2.7/pandas/msgpack.o
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
```
I'm running on `64bit Amazon Linux 2015.03 v1.3.0 running Python 2.7` and previously ran into this same error on a t1.micro instance, which was resolved when I changed to an m3.medium; but I'm now running an m3.xlarge, so it can't be a memory issue.
I have also ensured that gcc is installed as a package in `.ebextensions/00_gcc.config`:
```
packages:
yum:
gcc: []
gcc-c++: []
``` | 2015/04/08 | [
"https://Stackoverflow.com/questions/29516084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1177562/"
] | For pandas to compile on Elastic Beanstalk, make sure you have both packages, `gcc-c++` *and* `python-devel`:
```
packages:
yum:
gcc-c++: []
python-devel: []
``` | Install `python-dev`
```
sudo apt-get install python-dev
```
For `python3`
```
sudo apt-get install python3-dev
``` | 4,601 |
68,341,995 | I have a Python file that basically runs a main() function which scrapes data given a keyword.
I am trying to do multiprocessing, so my main function in my Jupyter Notebook looks like:
```
p1 = multiprocessing.Process(target=main, args=['cosmetics'])
p2 = multiprocessing.Process(target=main, args=['airpod pro case'])
if __name__ == '__main__':
p1.start()
p2.start()
```
Now this runs perfectly: it scrapes data given a keyword, saves it to a csv file (in this case, 'cosmetics.csv' and 'airpod pro case.csv'), then calls another function to go through each column that contains a url from the saved csv file.
However, I wanted to run this on my terminal/cmd and I changed the above code to:
```
def multi_process(item_1, item_2):
p1 = multiprocessing.Process(target=main, args=[item_1])
p2 = multiprocessing.Process(target=main, args=[item_2])
#p3 = multiprocessing.Process(target=main, args=['hair dryer'])
if __name__ == '__main__':
p1.start()
p2.start()
#p3.start()
item_1 = sys.argv[1]
item_2 = sys.argv[2]
multi_process(item_1, item_2)
```
Then I saved the file as a .py file and ran this line on my terminal/cmd:
```
> python3 /Users/Name/Desktop/DE/Scrape.py "cosmetics" "airpod pro case"
FileNotFoundError: [Errno 2] File b'/Users/Name/Desktop/DE/cosmetics 1.csv' does not exist: b'/Users/Name/Desktop/DE/cosmetics 1.csv'
FileNotFoundError: [Errno 2] File b'/Users/Name/Desktop/DE/airpod pro case 1.csv' does not exist: b'/Users/Name/Desktop/DE/airpod pro case 1.csv'
```
and I get this error saying that it can't find the csv files, which led me to check the folder and find out that the csv files are not being saved, although they should be.
Does running a Python file on terminal/cmd not allow saving csv files?
I can't seem to figure out what the problem is. | 2021/07/12 | [
"https://Stackoverflow.com/questions/68341995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16404222/"
] | No, it's not that running the module from a terminal won't save files; that's actually the normal way to run Python modules. Check whether your main() is actually writing the CSV file or not. | The only question I saw in here was `Does running a Python file on terminal/cmd not allow saving csv files?` The answer is no; it's typically fine to save csv files from the command line:
```
open("output.csv","w") as f:
f.write("this,is\na,csv")
``` | 4,605 |
5,436,227 | I have a PayPal sandbox area.
My language is Python (Django) and I use django-paypal.
The IPN test on my server works, but
when someone tries to buy something, after the PayPal process in the sandbox I don't receive the signal, and I don't see the transaction in my paypal_ipn table.
So the problem is that I don't receive the signal.
This is my signal code in models.py
```
from paypal.standard.ipn.signals import payment_was_successful
def show_me_the_money(sender, **kwargs):
code = sender.item_number
type, number_product, pagamento_corso_id = code.split('_')
obj = get_object_or_404(PagamentoCorso, int(pagamento_corso_id))
obj.pagamento = True
obj.save()
payment_was_successful.connect(show_me_the_money)
```
Please help me, because it's been 7 days... and I'm very frustrated! :-) | 2011/03/25 | [
"https://Stackoverflow.com/questions/5436227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/677220/"
] | I think you can use the PayPal test tools to send fake notifications to your notify URL. That might make it easier to debug.
<http://www.friendly-stranger.com/pictures/paypal.jpg>
What you can also do is run the tests that come with django-paypal. It should be something like
```
python manage.py test paypal
```
You can also take a look at the test code and create your own tests to debug your problem. | I had a similar problem. In my case, I could see from access logs that PayPal *is* accessing my payment notification URL, and the requests are 200 OK, but no signals were triggered on the Django side.
If you still can't figure it out post your url configuration and the view that passes the IPN form to the template. | I had a similar problem. In my case, I could see from access logs that PayPal *is* accessing my payment notification URL, and the requests are 200 OK, but no signals triggered on Django side.
Turned out that payments had "Pending" status instead of "Completed" (and I wasn't listening on `paypal.standard.ipn.signals.payment_was_flagged` signal).
The reason my payments were flagged, was incorrect `settings.PAYPAL_RECEIVER_EMAIL` and incorrect `paypal_dict["business"]` email addresses. Exact same issue [as for this guy](http://jayngo.blogspot.com/2011/04/using-django-paypal.html). | 4,607 |
12,138,298 | I am trying to use `with open()` with Python 2.6 and it is giving an error (syntax error), while it works fine with Python 2.7.3.
Am I missing something, or some import, to make my program work?
Any help would be appreciated.
Br
My code is here:
```
def compare_some_text_of_a_file(self, exportfileTransferFolder, exportfileCheckFilesFolder) :
flag = 0
error = ""
with open("check_files/"+exportfileCheckFilesFolder+".txt") as f1,open("transfer-out/"+exportfileTransferFolder) as f2:
if f1.read().strip() in f2.read():
print ""
else:
flag = 1
error = exportfileCheckFilesFolder
error = "Data of file " + error + " do not match with exported data\n"
if flag == 1:
raise AssertionError(error)
``` | 2012/08/27 | [
"https://Stackoverflow.com/questions/12138298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/864924/"
] | The `with open()` statement is supported in Python 2.6; you must have a different error.
See [PEP 343](http://www.python.org/dev/peps/pep-0343/) and the python [File Objects documentation](http://docs.python.org/library/stdtypes.html#file-objects) for the details.
Quick demo:
```
Python 2.6.8 (unknown, Apr 19 2012, 01:24:00)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> with open('/tmp/test/a.txt') as f:
... print f.readline()
...
foo
>>>
```
You are trying to use the `with` statement with multiple context managers though, which was only [added in Python 2.7](http://docs.python.org/reference/compound_stmts.html#grammar-token-with_stmt):
>
> Changed in version 2.7: Support for multiple context expressions.
>
>
>
Use nested statements instead in 2.6:
```
with open("check_files/"+exportfileCheckFilesFolder+".txt") as f1:
with open("transfer-out/"+exportfileTransferFolder) as f2:
# f1 and f2 are now both open.
``` | The `with open()` syntax is supported by Python 2.6. On Python 2.4 it is not supported and gives a syntax error. If you need to support Python 2.4, I would suggest something like:
```
def readfile(filename, mode='r'):
    f = open(filename, mode)
    try:
        for line in f:
            yield line  # yield each line, not the file object
    finally:
        f.close()  # runs on normal exhaustion and on errors alike
for line in readfile(myfile):
print line
``` | 4,610 |
63,805,737 | I am trying to implement the following code, but it is throwing an error:
TypeError
```
Traceback (most recent call last)
<ipython-input-29-6ed67c712ed4> in <module>()
28 return model
29
---> 30 model = nvidia_model()
31 print(model.summary())
32 # dead relu problem: when a node in the network essentially dies and only feeds value of 0 to the following nodes.
5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
776 for kwarg in kwargs:
777 if kwarg not in allowed_kwargs:
--> 778 raise TypeError(error_message, kwarg)
779
780
TypeError: ('Keyword argument not understood:', 'subsample')
```
Defining Nvidia Model
=====================
```
def nvidia_model():
model= Sequential()
model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(100, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(50, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(1)) # output the predicted steering angle
optimizer = Adam(lr=1e-4)
model.compile(loss='mse', optimizer=optimizer)
return model
model = nvidia_model()
print(model.summary())
``` | 2020/09/09 | [
"https://Stackoverflow.com/questions/63805737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14206313/"
] | I was stuck on the same error. Here's what I found:
`Convolution2D` is a Keras v1 API (docs: <https://faroit.com/keras-docs/1.2.2/layers/convolutional/>)
`Conv2D` is the newer v2 API (docs: <https://faroit.com/keras-docs/2.0.5/layers/convolutional/>).
I got it working with the following changes:
1. Changed `import Convolution2D` to `import Conv2D`
2. Changed
```
model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(64, 3, 3, activation='relu'))
```
to
```
model.add(Conv2D(24, 5, 2, input_shape = (66, 200, 3), activation = 'relu'))
model.add(Conv2D(36, 5, 2, activation = 'relu'))
model.add(Conv2D(48, 5, 2, activation = 'relu'))
model.add(Conv2D(64, 3, activation = 'relu'))
model.add(Conv2D(64, 3, activation = 'relu'))
``` | Try using pooling instead of subsample. Usually that will work. | 4,613 |
12,521,189 | I'm familiar with programming but new to python:
>
> mem = [0] * memloadsize
>
>
>
What does the '[0]' represent?
Is it a built-in array? | 2012/09/20 | [
"https://Stackoverflow.com/questions/12521189",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1493578/"
] | The `[` and `]` characters, in this context, are used to construct array literals:
```
>>> []
[]
>>> [0]
[0]
>>> ['a', 'b', 'c']
['a', 'b', 'c']
```
Multiplying arrays is idiomatic, and generates an array gotten by repeating the elements in the array by the given factor:
```
>>> ['a'] * 4
['a', 'a', 'a', 'a']
>>> [0] * 9
[0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> [0, 1, 2] * 2
[0, 1, 2, 0, 1, 2]
```
Note that `[` and `]` are also used to index into an existing array. In that case, `[0]` accesses the first element in the array:
```
>>> a = ['first', 'second', 'third']
>>> a[0]
'first'
>>> a[2]
'third'
>>>
``` | It just means a one-element `list` containing just a 0. Multiplying by `memloadsize` gives you a `list` of `memloadsize` zeros. | 4,618
6,522,281 | I've got a program that downloads part01, then part02 etc of a rar file split across the internet.
My program downloads part01 first, then part02 and so on.
After some tests, I found out that using, for example, UnRAR2 for Python, I can extract the first part of the file (an .avi file) contained in the archive and I'm able to play it for the first minutes. When I add another file it extracts a bit more, and so on. What I wonder is: is it possible to make it extract single files WHILE downloading them?
I'd need it to start extracting part01 without having to wait for it to finish downloading... is that possible?
Thank you very much!
Matteo | 2011/06/29 | [
"https://Stackoverflow.com/questions/6522281",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/774236/"
] | You are talking about an .avi file inside the rar archives. Are you sure the archives are actually compressed? [Video files released by the warez scene do not use compression:](http://en.wikipedia.org/wiki/Standard_%28warez%29#Packaging)
>
> Ripped movies are still packaged due to the large filesize, but compression is disallowed and the RAR format is used only as a container. Because of this, modern playback software can easily play a release directly from the packaged files, and even stream it as the release is downloaded (if the network is fast enough).
>
>
>
(I'm thinking VLC, BSPlayer, KMPlayer, Dziobas Rar Player, rarfilesource, rarfs,...)
You can check for the compression as follows:
* Open the first .rar archive in WinRAR. (name.part01.rar or name.rar for *old-style volume names*)
* Click the info button.
If *Version to extract* indicates 2.0, then the archive uses no compression (unless you have decade-old rars). You can see that *Total size* and *Packed size* will be equal.
>
> is it possible to make it extract
> single files WHILE downloading them?
>
>
>
**Yes.** When no compression is used, you can write your own program to extract the files. (I know of someone who wrote a script to directly download the movie from external rar files; but it's not public and I don't have it.) Because you mentioned Python I suggest you take a look at [rarfile 2.2](http://pypi.python.org/pypi/rarfile/2.2) by Marko Kreen like the author of [pyarrfs](http://pypi.python.org/pypi/pyarrfs/0.5.0) did. The archive is just the file chopped up with headers (rar blocks) added. It will be a copy operation that you need to pause until the next archive is downloaded.
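For the uncompressed case, a hedged sketch with the rarfile module (the file names are hypothetical; this only works when the archive is store-only, i.e. uses no compression):
```
import rarfile

rf = rarfile.RarFile("movie.part01.rar")  # first volume; hypothetical name
member = rf.infolist()[0]                 # the .avi stored in the volumes
src = rf.open(member)                     # file-like stream over the data
dst = open("movie.avi", "wb")
while True:
    chunk = src.read(64 * 1024)
    if not chunk:
        break
    dst.write(chunk)                      # pause here until the next part
src.close()
dst.close()
```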
I strongly believe it is also possible for compressed files. Your approach here will be different because you must use *unrar* to extract the compressed files. I have to add that there is also [a free RARv3 implementation](http://news.slashdot.org/story/11/05/11/0324246/Unarchiver-Provides-LGPL-RARv3-Extraction-Tool) to extract rars implemented in The Unarchiver.
I think this parameter for (un)rar will make it possible:
>
>
> ```
> -vp Pause before each volume
>
> By default RAR asks for confirmation before creating
> or unpacking next volume only for removable disks.
> This switch forces RAR to ask such confirmation always.
> It can be useful if disk space is limited and you wish
> to copy each volume to another media immediately after
> creation.
>
> ```
>
>
It will give you the possibility to pause the extraction until the next archive is downloaded.
>
> I believe that this won't work if the rar was created with the 'solid' option enabled.
>
>
>
When the solid option is used for rars, all packed files are treated as [one big file stream](http://www.win-rar.com/solidarchive.html). This should not cause any problems if you always start from the first file even if it doesn't contain the file you want to extract.
I also think it will work with passworded archives. | I highly doubt it. By the nature of compression (from my understanding), every bit is needed to uncompress it. It seems that the source you are downloading from has intentionally broken the avi into pieces before compression, but by the time you apply compression, whatever you compressed is now one atomic unit. So they kindly broke the whole avi into parts, but each part is still an atomic unit.
But I'm not an expert in compression.
The only test I can currently think of is something like: `curl <http://example.com/Part01> | unrar`. | 4,621 |
57,604,719 | I was trying to install Thrift (0.11.0) on my system (macOS 10.14.5), for which I downloaded and extracted the tar file. Then I ran the following commands:
```
./bootstrap.sh
./configure
make
make install
```
But **make install** threw the following error:
```
error: could not create '/usr/lib/python2.7/site-packages': Operation not permitted
```
then I also tried manually creating site-package inside /usr/lib/python2.7 but still the error message was same.
I have also tried **sudo** while running **make install**, but it didn't help much. | 2019/08/22 | [
"https://Stackoverflow.com/questions/57604719",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6729549/"
] | 1. Open Thrift's subfolder `lib/py/` and modify the Makefile as follows: change
PY\_PREFIX=/usr
to
PY\_PREFIX = /Users/amy/python
2. Run `sudo make install`. | I faced the same problem trying to install Thrift on Mac OS.
I found a separate guide for installing thrift on Mac OS, I tried it and it finally worked successfully:
1- Download the Boost library from [boost.org](http://www.boost.org/), untar, and compile with:
```
./bootstrap.sh
sudo ./b2 threading=multi address-model=64 variant=release stage install
```
2- Download [libevent](http://monkey.org/~provos/libevent), untar and compile with
```
./configure --prefix=/usr/local
make
sudo make install
```
3- Download the latest version of [Apache Thrift](https://thrift.apache.org/download), untar and compile with
```
./configure --prefix=/usr/local/ --with-boost=/usr/local --with-libevent=/usr/local
```
Try it and let me know your results.
*Reference: [Apache Thrift - OS X Install](https://thrift.apache.org/docs/install/os_x)* | 4,624 |
17,577,403 | I am trying to write an HTML parser in Python that takes as its input a URL or list of URLs and outputs specific data about each of those URLs in the format:
URL: data1: data2
The data points can be found at the exact same HTML node in each of the URLs. They are consistently between the same starting tags and ending tags. If anyone out there would like to help an amateur Python programmer get the job done, it would be greatly appreciated. Extra points if you can come up with a way to output the information that can be easily copied and pasted into an Excel document for subsequent data analysis!
For example, let's say I would like to output the view count for a particular YouTube video. For the URL <http://www.youtube.com/watch?v=QOdW1OuZ1U0>, the view count is around 3.6 million. For all YouTube videos, this number is found in the following format within the page's source:
```
<span class="watch-view-count ">
3,595,057
</span>
```
Fortunately, these exact tags are found only once on a particular YouTube video's page. These starting and ending tags can be inputted into the program or built-in and modified when necessary. The output of the program would be:
<http://www.youtube.com/watch?v=QOdW1OuZ1U0>: 3,595,057 (or 3595057). | 2013/07/10 | [
"https://Stackoverflow.com/questions/17577403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2479631/"
] | ```
import urllib2
from bs4 import BeautifulSoup
url = 'http://www.youtube.com/watch?v=QOdW1OuZ1U0'
f = urllib2.urlopen(url)
data = f.read()
soup = BeautifulSoup(data)
span = soup.find('span', attrs={'class':'watch-view-count'})
print '{}:{}'.format(url, span.text)
```
If you do not want to use `BeautifulSoup`, you can use `re`:
```
import urllib2
import re
url = 'http://www.youtube.com/watch?v=QOdW1OuZ1U0'
f = urllib2.urlopen(url)
data = f.read()
pattern = re.compile('<span class="watch-view-count.*?([\d,]+).*?</span>', re.DOTALL)
r = pattern.search(data)
print '{}:{}'.format(url, r.group(1))
```
As for the outputs, I think you can store them in a csv file. | I prefer `HTMLParser` over `re` for this type of task. However, `HTMLParser` can be a bit tricky. I use module-level objects to store data... I'm sure this is the wrong way of doing it, but it's worked on several projects for me in the past.
```
import urllib2
from HTMLParser import HTMLParser
import csv
position = []
results = [""]
class hp(HTMLParser):
def handle_starttag(self, tag, attrs):
if tag == 'span' and ('class', 'watch-view-count ') in attrs:
position.append('bingo')
def handle_endtag(self, tag):
if tag == 'span' and 'bingo' in position:
position.remove('bingo')
def handle_data(self, data):
if 'bingo' in position:
results[0] += " " + data.strip() + " "
my_pages = ["http://www.youtube.com/watch?v=QOdW1OuZ1U0"]
data = []
for url in my_pages:
response = urllib2.urlopen(url)
page = str(response.read())
parser = hp()
parser.feed(page)
data.append(results[0])
# reinitialize module-level accumulator objects
position = []
results = [""]
index = 0
with open('/path/to/test.csv', 'wb') as f:
writer = csv.writer(f)
header = ['url', 'output']
writer.writerow(header)
for d in data:
row = [my_pages[index], data[index]]
writer.writerow(row)
index += 1
```
Then just open /path/to/test.csv in Excel | 4,625 |
55,700,995 | I have a directory with ~250 .txt files in it. Each of these files has a title like this:
`Abraham Lincoln [December 01, 1862].txt`
`George Washington [October 25, 1790].txt`
etc...
However, these are terrible file names for reading into Python, and I want to iterate over all of them to change them to a more suitable format.
I've tried similar things for changing single variables that are shared across many files. But I can't wrap my head around how I should iterate over these files and change the formatting of their names while still keeping the same information.
The ideal output would be something like
`1862_12_01_abraham_lincoln.txt`
`1790_10_25_george_washington.txt`
etc... | 2019/04/16 | [
"https://Stackoverflow.com/questions/55700995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10587373/"
] | Please try the straightforward (tedious) bash script:
```
#!/bin/bash
declare -A map=(["January"]="01" ["February"]="02" ["March"]="03" ["April"]="04" ["May"]="05" ["June"]="06" ["July"]="07" ["August"]="08" ["September"]="09" ["October"]="10" ["November"]="11" ["December"]="12")
pat='^([^[]+) \[([A-Za-z]+) ([0-9]+), ([0-9]+)]\.txt$'
for i in *.txt; do
if [[ $i =~ $pat ]]; then
newname="$(printf "%s_%s_%s_%s.txt" "${BASH_REMATCH[4]}" "${map["${BASH_REMATCH[2]}"]}" "${BASH_REMATCH[3]}" "$(tr 'A-Z ' 'a-z_' <<< "${BASH_REMATCH[1]}")")"
mv -- "$i" "$newname"
fi
done
``` | I like to pull the filename apart, then put it back together.
Also, GNU date can parse out the date, which is simpler than using `sed` or a big `case` statement to convert "October" to "10".
```
#! /usr/bin/bash
if [ "$1" == "" ] || [ "$1" == "--help" ]; then
echo "Give a filename like \"Abraham Lincoln [December 01, 1862].txt\" as an argument"
exit 2
fi
filename="$1"
# remove the brackets
filename=`echo "$filename" | sed -e 's/[\[]//g;s/\]//g'`
# cut out the name
namepart=`echo "$filename" | awk '{ print $1" "$2 }'`
# cut out the date
datepart=`echo "$filename" | awk '{ print $3" "$4" "$5 }' | sed -e 's/\.txt//'`
# format up the date (relies on GNU date)
datepart=`date --date="$datepart" +"%Y_%m_%d"`
# put it back together with underscores, in lower case
final=`echo "$namepart $datepart.txt" | tr '[A-Z]' '[a-z]' | sed -e 's/ /_/g'`
echo mv \"$1\" \"$final\"
```
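As an addition alongside the shell versions above, the rename is also compact in pure Python; a minimal sketch (assuming every matching filename follows the bracketed-date pattern from the question, and operating on the current directory):

```
import os
import re
from datetime import datetime

pattern = re.compile(r'^(.+) \[(.+)\]\.txt$')

for name in os.listdir('.'):
    m = pattern.match(name)
    if not m:
        continue
    person, datestr = m.groups()
    date = datetime.strptime(datestr, '%B %d, %Y')   # e.g. "December 01, 1862"
    new_name = '{}_{}.txt'.format(date.strftime('%Y_%m_%d'),
                                  person.lower().replace(' ', '_'))
    os.rename(name, new_name)
```

Here `datetime.strptime` handles the month-name parsing that GNU date does in the shell script.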
EDIT: converted to BASH, from Bourne shell. | 4,626 |
70,627,862 | How can I build a nested Python dictionary based on a **number** of levels, like this?
How can the recursive function be written, taking the number of levels as an int?
```
{
"aggs": {
"cat_level_1": {
"terms": {
"field": "cat_level_1"
},
"aggs": {
"cat_level_2": {
"terms": {
"field": "cat_level_2"
},
"aggs": {
"cat_level_3": {
"terms": {
"field": "cat_level_3"
}
}
}
}
}
}
}
}
``` | 2022/01/07 | [
"https://Stackoverflow.com/questions/70627862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8881622/"
] | We can build the dictionary from the deepest level, wrapping the previously built dictionary on each level.
```py
def dict_with_depth_of(n):
return {
"aggs": {
f"cat_level_{n}": {
"terms": {
"field": f"cat_level_{n}"
}
}
}
}
def nested_dict(n):
d = None
for i in reversed(range(1, n + 1)):
new_d = dict_with_depth_of(i)
if d is not None:
new_d["aggs"][f"cat_level_{i}"] = d
d = new_d
return d
print(nested_dict(3))
```
Output:
```
{'aggs': {'cat_level_1': {'aggs': {'cat_level_2': {'aggs': {'cat_level_3': {'terms': {'field': 'cat_level_3'}}}}}}}}
``` | This is the right answer
```
def dict_with_depth_of(n):
return {
"aggs": {
f"cat_level_{n}": {
"terms": {
"field": f"cat_level_{n}"
}
}
}
}
def nested_dict(n):
d = None
for i in reversed(range(1, n + 1)):
new_d = dict_with_depth_of(i)
if d is not None:
new_d["aggs"][f"cat_level_{i}"].update(d)
        d = new_d
    return d
``` | 4,628 |
32,194,643 | So I am trying to use jinja2 for a simple HTML template, but I keep getting this error when I call `render()`:
```
Warning: IronPythonEvaluator.EvaluateIronPythonScript operation failed.
Traceback (most recent call last):
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\loaders.py", line 125, in load
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 551, in compile
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 470, in _parse
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\parser.py", line 31, in __init__
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 501, in _tokenize
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 494, in preprocess
File "<string>", line 21, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 812, in get_template
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 786, in _load_template
UnicodeEncodeError: ('unknown', '\x00', 0, 1, '')
```
Now I understand that jinja2 only works with UTF-8, so I am forcing my Python code to that encoding:
```
#-*- coding: utf-8 -*-
import clr
import sys
pyt_path = r'C:\Program Files (x86)\IronPython 2.7\Lib'
sys.path.append(pyt_path)
pyt_path1 = r'C:\Program Files (x86)\IronPython 2.7\Lib\MarkupSafe'
sys.path.append(pyt_path1)
pyt_path2 = r'C:\Program Files (x86)\IronPython 2.7\Lib\jinja2'
sys.path.append(pyt_path2)
#The inputs to this node will be stored as a list in the IN variable.
dataEnteringNode = IN
from jinja2 import Environment, FileSystemLoader
j2_env = Environment(loader=FileSystemLoader(r'C:\Users\ksobon\Documents\Visual Studio 2015\Projects\WebSite2'), trim_blocks=True)
temp = j2_env.get_template('HTMLPage2.html')
OUT = temp.render(svgWidth = r'500')
```
Here's the template HTML file that I am trying to use:
```
<!DOCTYPE html>
<meta charset="utf-8">
<style>
.chart rect {
fill: steelblue;
stroke: white;
}
</style>
<svg class="chart"></svg>
<script src="http://d3js.org/d3.v3.min.js"></script>
<script>
var data = [
{ name: "Locke", value: 42 },
{ name: "Reyes", value: 8 },
{ name: "Ford", value: 15 },
{ name: "Jarrah", value: 16 },
{ name: "Shephard", value: 23 },
{ name: "Kwon", value: 42 }
];
var w = {{svgWidth}},
h = 400;
var x = d3.scale.linear()
.domain([0, 1])
.range([0, w]);
var y = d3.scale.linear()
.domain([0, 100])
.rangeRound([0, h]);
var chart = d3.select("body")
.append("svg:svg")
.attr("class", "chart")
.attr("width", w * data.length - 1)
.attr("height", h);
chart.selectAll("rect")
.data(data)
.enter().append("svg:rect")
.attr("x", function (d, i) { return x(i) - .5; })
.attr("y", function (d) { return h - y(d.value) - .5; })
.attr("width", w)
.attr("height", function (d) { return y(d.value); });
</script>
```
Any ideas what I am doing wrong? I am on IronPython 2.7. | 2015/08/25 | [
"https://Stackoverflow.com/questions/32194643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3263488/"
] | The button should be of custom type to set an image. Verify your code once again. Set the button to custom type in the Storyboard. | I can see you are using an IBOutlet for a button; this means you are using a button from a nib, so you should change the background image from the Attributes Inspector on the right side of Xcode.
Otherwise, if you still need to change it programmatically, try this:
```
[self.color1 setBackgroundImage:[UIImage imageNamed:@"57_bg_selected.png"] forState:UIControlStateHighlighted];
```
Please note: you should include the extension of your image (.jpg or .png) in the file name. | 4,629 |
72,596,436 | I've read about and understand [floating point round-off issues](https://docs.python.org/3/tutorial/floatingpoint.html) such as:
```
>>> sum([0.1] * 10) == 1.0
False
>>> 1.1 + 2.2 == 3.3
False
>>> sin(radians(45)) == sqrt(2) / 2
False
```
I also know how to work around these issues with [math.isclose()](https://docs.python.org/3/library/math.html#math.isclose) and [cmath.isclose()](https://docs.python.org/3/library/cmath.html#cmath.isclose).
The question is how to apply those workarounds to Python's match/case statement. I would like this to work:
```
match 1.1 + 2.2:
case 3.3:
print('hit!') # currently, this doesn't match
``` | 2022/06/12 | [
"https://Stackoverflow.com/questions/72596436",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/424499/"
] | The key to the solution is to build a wrapper that overrides the `__eq__` method and replaces it with an approximate match:
```
import cmath
class Approximately(complex):
def __new__(cls, x, /, **kwargs):
result = complex.__new__(cls, x)
result.kwargs = kwargs
return result
def __eq__(self, other):
try:
            return cmath.isclose(self, other, **self.kwargs)
except TypeError:
return NotImplemented
```
It creates approximate equality tests for both float values and complex values:
```
>>> Approximately(1.1 + 2.2) == 3.3
True
>>> Approximately(1.1 + 2.2, abs_tol=0.2) == 3.4
True
>>> Approximately(1.1j + 2.2j) == 0.0 + 3.3j
True
```
Here is how to use it in a match/case statement:
```
from math import sin, radians

for x in [sum([0.1] * 10), 1.1 + 2.2, sin(radians(45))]:
match Approximately(x):
case 1.0:
print(x, 'sums to about 1.0')
case 3.3:
print(x, 'sums to about 3.3')
case 0.7071067811865475:
print(x, 'is close to sqrt(2) / 2')
case _:
print('Mismatch')
```
This outputs:
```
0.9999999999999999 sums to about 1.0
3.3000000000000003 sums to about 3.3
0.7071067811865475 is close to sqrt(2) / 2
``` | Raymond's answer is very fancy and ergonomic, but it seems like a lot of magic for something that could be much simpler. A more minimal version would be to capture the calculated value and explicitly check whether the values are "close", e.g.:
```
import math
match 1.1 + 2.2:
case x if math.isclose(x, 3.3):
print(f"{x} is close to 3.3")
case x:
print(f"{x} wasn't close)
```
I'd also suggest only using `cmath.isclose()` where/when you actually need it; using appropriate types lets you ensure your code is doing what you expect.
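To make that distinction concrete, a tiny illustration (my addition, not part of the original answer):

```
import cmath
import math

print(cmath.isclose(1.1j + 2.2j, 3.3j))  # True -- cmath accepts complex values
print(math.isclose(1.1 + 2.2, 3.3))      # True -- math works on plain floats
# math.isclose(1.1j + 2.2j, 3.3j) would raise
# TypeError: can't convert complex to float
```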
The earlier `match` example is just the minimum code used to demonstrate the matching and, as pointed out in the comments, could be more easily implemented using a traditional `if` statement. At the risk of derailing the original question, this is a somewhat more complete example:
```
from dataclasses import dataclass
@dataclass
class Square:
size: float
@dataclass
class Rectangle:
width: float
height: float
def classify(obj: Square | Rectangle) -> str:
match obj:
case Square(size=x) if math.isclose(x, 1):
return "~unit square"
case Square(size=x):
return f"square, size={x}"
case Rectangle(width=w, height=h) if math.isclose(w, h):
return "~square rectangle"
case Rectangle(width=w, height=h):
return f"rectangle, width={w}, height={h}"
almost_one = 1 + 1e-10
print(classify(Square(almost_one)))
print(classify(Rectangle(1, almost_one)))
print(classify(Rectangle(1, 2)))
```
Not sure if I'd actually use a `match` statement here, but it is hopefully more representative! | 4,631 |
5,684,010 | I want to change this string
`<p><b> hello world </b></p>. I am playing <b> python </b>`
to:
`<bold><bold>hello world </bold></bold>, I am playing <bold> python </bold>`
I used:
```
import re
pattern = re.compile(r'\<p>(.*?)\</p>|\<b>(.*?)\</b>')
print re.sub(pattern, r'<bold>\1</bold>', "<p><b>hello world</b></p>. I am playing <b> python</b>")
```
It does not output what I want; it fails with the error: *unmatched group*
It works in this case:
```
re.sub(pattern, r'<bold>\1</bold>', "<p>hello world</p>. I am playing <p> python</p>")
```
`<bold> hello world </bold>`. I am playing `<bold> python</bold>` | 2011/04/16 | [
"https://Stackoverflow.com/questions/5684010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/293487/"
] | If you choose not to use regex, then it's as simple as this:
```
d = {'<p>':'<bold>','</p>':'</bold>','<b>':'<bold>','</b>':'</bold>'}
s = '<p><b> hello world </b></p>. I am playing <b> python </b>'
for k,v in d.items():
s = s.replace(k,v)
``` | The problem is that the first group is the one within `<p></p>` and the second group is within `<b></b>` in the regexp. However, in your substitution you are referring to the first group, and if the match was `<b></b>`, there wasn't one. I offer a couple of solutions.
First,
```
>>> pattern = re.compile(r'<(p|b)>(.*?)</\1>')
>>> print re.sub(pattern, r'<bold>\2</bold>',
"<p><b>hello world</b></p>. I am playing <b> python</b>")
<bold><b>hello world</b></bold>. I am playing <bold> python</bold>
```
will match a given pair of tags. However, as you can see, it would have to be used twice on the string because when it matched the `<p></p>` tags, it skipped over the nested `<b></b>` tags.
Here's the option that I would go with:
```
>>> pattern = re.compile(r'<(/?)[pb]>')
>>> print re.sub(pattern, r'<\1bold>',
"<p><b>hello world</b></p>. I am playing <b> python</b>")
<bold><bold>hello world</bold></bold>. I am playing <bold> python</bold>
``` | 4,632 |
5,931,386 | I help to maintain a Python package called nxt-python. It uses metaclasses to define the methods of a control object. Here's the metaclass code that defines the available functions:
```
class _Meta(type):
'Metaclass which adds one method for each telegram opcode'
def __init__(cls, name, bases, dict):
super(_Meta, cls).__init__(name, bases, dict)
for opcode in OPCODES:
poll_func, parse_func = OPCODES[opcode]
m = _make_poller(opcode, poll_func, parse_func)
setattr(cls, poll_func.__name__, m)
```
I want to be able to add a different docstring to each of these methods that it adds. m is a method returned by \_make\_poller(). Any ideas? Is there some way to work around the Python restriction on changing docstrings?
"https://Stackoverflow.com/questions/5931386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/495504/"
] | For plain functions:
```
def f(): # for demonstration
pass
f.__doc__ = "Docstring!"
help(f)
```
This works in both Python 2 and Python 3, on functions with and without docstrings defined. You can also do `+=`. Note that it is `__doc__` and not `__docs__`.
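Applied to the question's metaclass, a minimal sketch (assuming `_make_poller` returns a plain function, so its `__doc__` can be set directly before it becomes a class attribute; `OPCODES` and `_make_poller` are from the question):

```
class _Meta(type):
    'Metaclass which adds one documented method per telegram opcode'
    def __init__(cls, name, bases, dict):
        super(_Meta, cls).__init__(name, bases, dict)
        for opcode in OPCODES:
            poll_func, parse_func = OPCODES[opcode]
            m = _make_poller(opcode, poll_func, parse_func)
            # m is still a plain function here, so this just works:
            m.__doc__ = 'Poll opcode %r via %s().' % (opcode, poll_func.__name__)
            setattr(cls, poll_func.__name__, m)
```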
For methods, you need to use the `__func__` attribute of the method:
```
class MyClass(object):
def myMethod(self):
pass
MyClass.myMethod.__func__.__doc__ = "A really cool method"
``` | You may also use [setattr](http://docs.python.org/library/functions.html#setattr) on the class/function object and set the docstring.
```
setattr(foo,'__doc__',"""My Doc string""")
``` | 4,634 |
41,252,289 | What's the syntax for *exclusive* `min` and `max` arguments to the Redis `zcount` command in Python (redis-py)? It's not covered in the [documentation](https://redis-py.readthedocs.io/en/latest/).
Would it be:
```
minimum = time.time() - 2000
maximum = time.time()
my_server.zcount(sorted_set, '(' + str(minimum), maximum)
``` | 2016/12/20 | [
"https://Stackoverflow.com/questions/41252289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4936905/"
] | The [unit tests](https://github.com/andymccurdy/redis-py/blob/eae07e76b6524e6772be60169549766e7826419a/tests/test_commands.py) gives some examples:
```
def test_zcount(self, r):
r.zadd('a', a1=1, a2=2, a3=3)
assert r.zcount('a', '-inf', '+inf') == 3
assert r.zcount('a', 1, 2) == 2
assert r.zcount('a', 10, 20) == 0
```
This can help… | What about using `sys.float_info.epsilon`? This is the difference between 1.0 and the smallest representable float greater than 1.0:
```
import sys

minimum = time.time() - 2000
maximum = time.time()
my_server.zcount(sorted_set, minimum + sys.float_info.epsilon, maximum)
```
Or, with `-` for maximum:
```
minimum = time.time() - 2000
maximum = time.time()
my_server.zcount(sorted_set, minimum, maximum - sys.float_info.epsilon)
``` | 4,635 |
35,422,156 | This is the code that I am working with:
```
import pandas as pd
z = pd.Series(data = [1,2,3,4,5,6,7], index = xrange(1,8))
array = []
for i in range(1,8):
array.append(z[i]*2)
print array
```
It does exactly what I tell it to, because I can't figure out how to do a simple iteration. This is the printed output:
```
[2]
[2, 4]
[2, 4, 6]
[2, 4, 6, 8]
[2, 4, 6, 8, 10]
[2, 4, 6, 8, 10, 12]
[2, 4, 6, 8, 10, 12, 14]
```
What I want is for Python to use the updated value in the array, so the desired output would be:
```
[2]
[2, 4]
[2, 4, 8]
[2, 4, 8, 16]
[2, 4, 8, 16, 32]
[2, 4, 8, 16, 32, 64]
[2, 4, 8, 16, 32, 64, 128]
```
Thank you for your help.
**Edit**
The example I first used was too simple, so please answer using the example code below:
```
import pandas as pd
sample = pd.Series(data = [ -3.2 , 30.66, 7.71, 9.87], index = range(0,4))
testarray = []
for i in range(0,4):
testarray.append(100000*(1+sample.values[i]/100))
print testarray
```
This produces
```
[96800.0, 130660.0, 107710.0, 109870.0]
```
When the desired numbers are:
96800
126478.88
136230.4016
149676.3423
So instead of using 100000, I want it to use 96800 for the second iteration, and so on. Thank you! | 2016/02/16 | [
"https://Stackoverflow.com/questions/35422156",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5020060/"
] | You want to use the last value of the accumulation list in the expression.
For example:
```
data = [ -3.2 , 30.66, 7.71, 9.87] # the input list for `sample`
In [686]: test=[100000]
In [687]: for i in range(4):
test.append(test[-1]*(1+data[i]/100))
In [688]: test
Out[688]: [100000, 96800.0, 126478.88, 136230.401648, 149676.3422906576]
```
I could start this with `test=[]`, but then I'd have to test whether the list was empty, and use `100000` instead of `test[-1]`. So putting the `100000` in the list to start with is logically simpler.
Another option is to maintain a temporary variable that is updated each iteration:
```
In [689]: mult=100000
In [690]: test=[]
In [691]: for i in range(4):
test.append(mult*(1+data[i]/100))
mult=test[-1]
```
Or
```
mult *= (1+data[i]/100)
test.append(mult)
```
But since this is `pandas` I could also do the calculation with one vectorized call. The `numpy` array equivalent is:
```
In [697]: data_arr=np.array(data)
In [698]: np.cumprod(1+data_arr/100)
Out[698]: array([ 0.968 , 1.2647888 , 1.36230402, 1.49676342])
```
`cumprod` is the cumulative product (like the more common cumulative sum).
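Since `sample` is already a pandas Series, the same cumulative product works without dropping to a raw numpy array; a small sketch (my addition, output abbreviated):

```
In [700]: 100000 * (1 + sample / 100).cumprod()
Out[700]:
0     96800.000000
1    126478.880000
2    136230.401648
3    149676.342291
dtype: float64
```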
---
Your first example can be produced with:
```
In [709]: np.cumprod([2 for _ in range(7)])
Out[709]: array([ 2, 4, 8, 16, 32, 64, 128])
In [710]: np.cumprod(np.ones(7,int)*2)
Out[710]: array([ 2, 4, 8, 16, 32, 64, 128])
``` | I think that what you're trying to do is to compute powers of two, instead of multiplications of two:
```
array.append(2**z[i])
``` | 4,636 |
20,993,084 | I am new to Python and have been stuck on an issue; please help me solve it. The requirement: I have created a SQLite database, created a table, and inserted values into it, but I don't know how to display that data from the database in a table view in Python. Please help me out with this... thanks in advance.
```
db_con = sqlite3.Connection
db_name = "./patientData.db"
createDb = sqlite3.connect(db_name)
queryCurs = createDb.cursor()
queryCurs.execute('''CREATE TABLE IF NOT EXISTS PATIENT
(NAME TEXT NOT NULL, ID INTEGER PRIMARY KEY, AGE INTEGER NOT NULL, GENDER TEXT NOT NULL , EYE_TYPE TEXT NOT NULL)''')
pName = self.patientEdit.text()
pId =self.patientidEdit.text()
#pId1 = int(pId)
pAge = self.ageEdit.text()
#pAge1 = int(pAge)
pGender = self.patientgend.text()
pEye_type = self.eyeTypeEdit.text()
queryCurs.execute('''INSERT INTO PATIENT(NAME,ID,AGE, GENDER,EYE_TYPE) VALUES(?, ?, ?, ?, ?)''',(pName, pId, pAge, pGender, pEye_type))
print ('Inserted row')
createDb.commit()
```
Now how can I display the data in a table view / list view? Any example code is also helpful. | 2014/01/08 | [
"https://Stackoverflow.com/questions/20993084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440020/"
] | This is a short, albeit complete example on how to achieve the expected result.
The trick is to define a [QSqlQueryModel](http://srinikom.github.io/pyside-docs/PySide/QtSql/QSqlQueryModel.html?highlight=qsqlquerymodel#PySide.QtSql.QSqlQueryModel) and pass it to a [QTableView](http://srinikom.github.io/pyside-docs/PySide/QtGui/QTableView.html); in this way you use the PyQt4 SQL modules instead of `sqlite3` module, and the table can loop the query result automatically.
```
from PyQt4.QtSql import QSqlQueryModel,QSqlDatabase,QSqlQuery
from PyQt4.QtGui import QTableView,QApplication
import sys
app = QApplication(sys.argv)
db = QSqlDatabase.addDatabase("QSQLITE")
db.setDatabaseName("patientData.db")
db.open()
projectModel = QSqlQueryModel()
projectModel.setQuery("select * from patient",db)
projectView = QTableView()
projectView.setModel(projectModel)
projectView.show()
app.exec_()
``` | I know this question was asked a long time ago and the accepted answer is for PyQt4. Because the API has changed a bit in PyQt5, I hope my answer can help someone using PyQt5.
```
from PyQt5 import QtWidgets, QtSql
# connect to postgresql
db = QtSql.QSqlDatabase.addDatabase("QPSQL")
db.setHostName(**)
db.setDatabaseName(**)
db.setPort(**) # int
db.setUserName(**)
db.setPassword(**)
# create tableview
tableView = QtWidgets.QTableView()
# create sqlquery
query = QtSql.QSqlQuery()
result = query.exec_("""select * from "table" """)
if result:
model = QtSql.QSqlTableModel(db=db)
model.setQuery(query)
tableView.setModel(model)
tableView.show()
``` | 4,637 |
62,713,159 | I have a csv file with names (last\_name, name, age) and I want to convert all age attributes into integers. This works, but I guess there is a more Pythonic way to do it? I tried to do it with a list comprehension, but it didn't quite work as I wanted.
```
import csv
with open("names.csv") as names_file:
head , *names = csv.reader(names_file)
names = [line for line in names]
for i in range(len(names)):
names[i][2] = int(names[i][2])
```
Thanks in advance. | 2020/07/03 | [
"https://Stackoverflow.com/questions/62713159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13607895/"
] | Best way to handle this:
```
import pandas as pd
df = pd.read_csv('names.csv')
df["age"] = pd.to_numeric(df["age"])
```
If you want a list just do this:
```py
list_ = df['age'].to_list()
print(list_)
``` | I would do:
```
names = [['A', 'B', '20'], ['C', 'D', '30'], ['E', 'F', '40']] # sample data
for i in names:
i[2] = int(i[2])
print(names)
```
Which is more succinct than:
```
for i in range(len(names)):
names[i][2] = int(names[i][2])
```
If you have to use list-comprehension then you might do:
```
names = [['A', 'B', '20'], ['C', 'D', '30'], ['E', 'F', '40']]
names = [[int(j) if inx==2 else j for inx, j in enumerate(i)] for i in names]
print(names)
```
Output:
```
[['A', 'B', 20], ['C', 'D', 30], ['E', 'F', 40]]
```
Note that this harnesses nesting and, as such, is less desirable than the above solution in terms of readability. | 4,638 |
33,982,834 | I have been trying to convert some code into a try statement but I can't seem to get anything working.
Here is my code in pseudo code:
```
start
run function
check for user input ('Would you like to test another variable? (y/n) ')
if: yes ('y') restart from top
elif: no ('n') exit program (loop is at end of program)
else: return an error saying that the input is invalid.
```
And here is my code (which works) in Python 3.4:
```
run = True
while run == True:
spuriousCorrelate(directory)
cont = True
while cont == True:
choice = input('Would you like to test another variable? (y/n) ')
if choice == 'y':
cont = False
elif choice == 'n':
run = False
cont = False
else:
print('This is not a valid answer please try again.')
run = True
cont = True
```
Now what is the proper way for me to convert this into a try statement or to neaten my code somewhat?
This isn't a copy of the referenced post, as I am trying to manage two nested statements rather than only get the correct answer. | 2015/11/29 | [
"https://Stackoverflow.com/questions/33982834",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5552713/"
] | If you want to make your code neater, you should consider having
`while run:`
instead of
`while run == True:`
and also remove the last two lines, because setting `run` and `cont` to `True` again isn't necessary (their value didn't change).
Furthermore, I think that a `try - except` block would be useful in the case of an integer input, for example:
```
num = input("Please enter an integer: ")
try:
num = int(num)
except ValueError:
print("Error,", num, "is not a number.")
```
In your case, though, I think it's better to stick with `if - elif - else` blocks. | >
> Ok so as a general case I will try to avoid try...except blocks
>
>
>
Don't do this. Use the right tool for the job.
Use `raise` to signal that your code can't (or shouldn't) deal with the scenario.
Use `try-except` to process that *signal*.
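A minimal illustration of that division of labor (a hypothetical `read_age` helper, not from the original answer):

```
def read_age(text):
    age = int(text)                 # raises ValueError for non-numeric input
    if age < 0:
        raise ValueError('age must be non-negative')   # signal: cannot deal with this
    return age

try:
    age = read_age(input('Age? '))
except ValueError as exc:           # process the signal
    print('Invalid input:', exc)
```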
>
> Now what is the proper way for me to convert this into a try statement?
>
>
>
Don't convert.
You don't have anything that `raise`s in your code, so there is no point in `try-except`.
>
> What is the proper way to neaten my code somewhat?
>
>
>
Get rid of your flag variables (`run`, `cont`). You have `break`, use it!
This is the preferred way of implementing `do-while`, as the Python docs say; unfortunately, I cannot find it to link right now.
If someone finds it, feel free to edit my answer to include it.
```
def main():
while True: # while user wants to test variables
spuriousCorrelate(directory) # or whatever your program is doing
while True: # while not received valid answer
choice = input('Would you like to test another variable? (y/n) ')
if choice == 'y':
break # let's test next variable
elif choice == 'n':
return # no more testing, exit whole program
else:
print('This is not a valid answer please try again.')
``` | 4,640 |
14,286,480 | I tagged python and perl in this only because that's what I've used thus far. If anyone knows a better way to go about this I'd certainly be willing to try it out. Anyway, my problem:
I need to create an input file for a gene prediction program that follows the following format:
```
seq1 5 15
seq1 20 34
seq2 50 48
seq2 45 36
seq3 17 20
```
Where seq# is the gene ID and the numbers to the right are the positions of exons within an open reading frame. Now, I have this information in a .gff3 file that has a lot of other information. I can open this with Excel and easily delete the columns with non-relevant data. Here's how it's arranged now:
```
PITG_00002 . gene 2 397 . + . ID=g.1;Name=ORF%
PITG_00002 . mRNA 2 397 . + . ID=m.1;
**PITG_00002** . exon **2 397** . + . ID=m.1.exon1;
PITG_00002 . CDS 2 397 . + . ID=cds.m.1;
PITG_00004 . gene 1 1275 . + . ID=g.3;Name=ORF%20g
PITG_00004 . mRNA 1 1275 . + . ID=m.3;
**PITG_00004** . exon **1 1275** . + . ID=m.3.exon1;P
PITG_00004 . CDS 1 1275 . + . ID=cds.m.3;P
PITG_00004 . gene 1397 1969 . + . ID=g.4;Name=
PITG_00004 . mRNA 1397 1969 . + . ID=m.4;
**PITG_00004** . exon **1397 1969** . + . ID=m.4.exon1;
PITG_00004 . CDS 1397 1969 . + . ID=cds.m.4;
```
---
So I need only the data that is in bold. For example,
```
PITG_0002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
Any help you could give would be greatly appreciated, thanks!
Edit: Well I messed up the formatting. Anything that is between the \*\*'s is what I need lol. | 2013/01/11 | [
"https://Stackoverflow.com/questions/14286480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1784467/"
] | In Unix:
```
grep <file.gff3 " exon " |
sed "s/^\([^ ]+\) +[.] +exon +\([0-9]+\) \([0-9]+\).*$/\1 \2 \3/"
``` | For pedestrians:
(this is Python)
```
with open(data_file) as f:
for line in f:
tokens = line.split()
if len(tokens) > 3 and tokens[2] == 'exon':
print tokens[0], tokens[3], tokens[4]
```
which prints
```
PITG_00002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
``` | 4,641 |
6,254,713 | How can I create a numpy matrix with its elements being a function of its indices?
For example, a multiplication table: `a[i,j] = i*j`
An un-numpy and un-Pythonic way would be to create an array of zeros and then loop through it.
There is no doubt that there is a better way to do this, without a loop.
However, even better would be to create the matrix straight-away. | 2011/06/06 | [
"https://Stackoverflow.com/questions/6254713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/349043/"
] | Here's one way to do that:
```
>>> indices = numpy.indices((5, 5))
>>> a = indices[0] * indices[1]
>>> a
array([[ 0, 0, 0, 0, 0],
[ 0, 1, 2, 3, 4],
[ 0, 2, 4, 6, 8],
[ 0, 3, 6, 9, 12],
[ 0, 4, 8, 12, 16]])
```
To further explain, `numpy.indices((5, 5))` generates two arrays containing the x and y indices of a 5x5 array like so:
```
>>> numpy.indices((5, 5))
array([[[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4]],
[[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]]])
```
When you multiply these two arrays, numpy multiplies the value of the two arrays at each position and returns the result. | I'm away from my python at the moment, but does this one work?
```
from numpy import array  # import needed for this to run

array( [ [ i*j for j in xrange(5)] for i in xrange(5)] )
``` | 4,646 |
6,975,808 | I like to store my data after a longish Python program as dictionaries in a new script.
I write something like this (an old example):
```
file = open(p['results']+'asa_contacts.py','w')
print>>file, \
'''
\'''
This file stores the contact residues according to changes in ASA
as a dictionary
\'''
d = {}
'''
```
followed by a lot of faffing around entering the dictionary code as a string:
```
print>>file, 'd[\'%s\'] = {}' %st
```
I was wondering if there was a module which did this automatically as it would save me a lot of time.
Thank you
Edit: it may be useful to know that these dictionaries are usually several layers deep like this one I'm using today:
```
d[ratio][bound][charge] = a_list
``` | 2011/08/07 | [
"https://Stackoverflow.com/questions/6975808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/299000/"
] | Unless there's a specific reason that you need source code -- and I suspect there isn't, you just want to serialize and deserialize data from disk -- a better option would be Python's [`pickle` module](http://docs.python.org/library/pickle.html). | [Lossy's suggestion](https://stackoverflow.com/questions/6975808/outputting-data-as-dictionaries-in-a-new-python-file/6975843#6975843) of `repr` happens to work, but `repr` isn't *specifically* designed for serialization. I think it would be slightly more robust to use a tool designed for that purpose; and since you want something that's human-readable and -editable, `json` is the obvious choice.
```
>>> import json
>>> animals = {'a':'aardwolf', 'b':'beluga', 'c':'civet', 'd':'dik-dik',
'e':'echidna', 'f':'fennec', 'g':'goa', 'h':'hyrax',
'i':'impala', 'j':'javelina', 'k':'kudu', 'l':'lemur',
'm':'macaque', 'n':'nutria', 'o':'orca', 'p':'peccary',
'q':'quagga', 'r':'reebok', 's':'serval', 't':'tenrec',
'u':'urial', 'v':'vole', 'w':'wallaroo', 'x':'xenurine',
'y':'yapok', 'z':'zoologist'}
>>> s = json.dumps(animals)
>>> s[:60] + '...'
'{"a": "aardwolf", "c": "civet", "b": "beluga", "e": "echidna...'
>>> animals = json.loads(s)
>>> animals['w']
u'wallaroo'
``` | 4,653 |
48,765,260 | Object-oriented Python: I want to create a static method to convert hours, minutes, and seconds into seconds.
I create a class called Duration:
```
class Duration:
def __init__(self, hours, minutes, seconds):
self.hours = hours
self.minutes = minutes
self.seconds = seconds
```
I then create a variable named duration1 and give it the numbers:
```
duration1 = Duration(29, 7, 10)
```
I have a method called info which checks if any of the numbers are less than 10 and, if so, adds a "0" in front of it; I later convert the values back into ints.
```
def info(self):
if self.hours < 10:
self.hours = "0" + str(self.hours)
elif self.minutes < 10:
self.minutes = "0" + str(self.minutes)
elif self.seconds < 10:
self.seconds = "0" + str(self.seconds)
info_string = str(self.hours) + "-" + str(self.minutes) + "-" + str(self.seconds)
self.hours = int(self.hours)
self.minutes = int(self.minutes)
self.seconds = int(self.seconds)
return info_string
```
And now I want to create a static\_method to convert these values into seconds and return the value so I can call it with duration1 (at least I think that's how I should do it...?)
```
@staticmethod
def static_method(self):
hour_to_seconds = self.hours * 3600
minutes_to_seconds = self.minutes * 60
converted_to_seconds = hours_to_seconds + minutes_to_seconds \
+ self.seconds
return converted_to_seconds
duration1.static_method(duration1.info())
```
I guess my question is: how would I use a static method with this code to change the numbers? I should also tell you this is a school assignment, so I have to use a static method to solve the problem. Any help is greatly appreciated!
The error message says this:
```
hour_to_seconds = self.hours * 3600
```
AttributeError: 'str' object has no attribute 'hours' | 2018/02/13 | [
"https://Stackoverflow.com/questions/48765260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6715237/"
] | You're almost there.
Instead of passing the result of `duration1.info()` to `duration1.static_method()`, you simply pass the whole object:
```
duration1 = Duration(29, 7, 10)
print(duration1.info()) # 29-07-10
print(duration1.static_method(duration1)) # 104830
```
And since it's a static method, you could just as well call it from the class instead of the instance (which is the reason why one wants a static method - you don't have to create an instance):
```
print(Duration.static_method(duration1))
```
By the way, as a convention `self` should not be used as the parameter name if you're using it in a static method, but it works anyway, because - as I said - it's just a convention. It's not as if `self` is a "magic" variable.
But you should consider renaming the parameter.
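A sketch of what that renaming might look like (with the `hours_to_seconds` naming made consistent, a typo noted in the next paragraph):

```
class Duration:
    # ... __init__ and info as in the question ...

    @staticmethod
    def static_method(duration):
        hours_to_seconds = duration.hours * 3600
        minutes_to_seconds = duration.minutes * 60
        return hours_to_seconds + minutes_to_seconds + duration.seconds
```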
By the way, you have a typo in `static_method`: at one point you name the variable `hour_to_seconds` but then access `hours_to_seconds` - you need to decide on one. | A static method is *explicitly* a function which is tied to a class rather than an instance. In Python, `self` represents an instance of a class.
These are not simpatico.
You can make your code work by passing `duration1` to your static method and changing the name of its accepted parameter from `self` to `duration` or similar. | 4,655 |
33,432,426 | I am trying to import the `requests` module, but I got this error:
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
``` | 2015/10/30 | [
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] | `import queue` is **lowercase** `q` in Python 3.
Change `Q` to `q` and it will be fine.
(See code in <https://stackoverflow.com/a/29688081/632951> for smart switching.) | It's because of the Python version. In Python 2.x it's `import Queue as queue`; in Python 3, by contrast, it's `import queue`. If you want it to work in both environments, you may use the snippet below, as mentioned [here](https://stackoverflow.com/questions/46363871/no-module-named-queue?noredirect=1&lq=1):
```
try:
import queue
except ImportError:
import Queue as queue
``` | 4,656 |
57,229,074 | I am trying to import an existing database into my **Django** project, so I run `python manage.py migrate --fake-initial`, but I get this error:
```
operations to perform:
Apply all migrations: ExcursionsManagerApp, GeneralApp, InvoicesManagerApp, OperationsManagerApp, PaymentsManagerApp, RatesMan
agerApp, ReportsManagerApp, ReservationsManagerApp, UsersManagerApp, admin, auth, authtoken, contenttypes, sessions
Running migrations:
Applying contenttypes.0001_initial... FAKED
Applying auth.0001_initial... FAKED
Applying contenttypes.0002_remove_content_type_name... OK
Applying GeneralApp.0001_initial...Traceback (most recent call last):
File "/Users/hugovillalobos/Documents/Code/IntellibookWebProject/IntellibookWebVenv/lib/python3.6/site-packages/django/db/back
ends/utils.py", line 83, in _execute
return self.cursor.execute(sql)
psycopg2.ProgrammingError: relation "GeneralApp_airport" already exists
```
Of course, all the tables already exist in the database; that is why I use `--fake-initial`, which is supposed to **fake** the creation of database objects.
Why is `migrate` attempting to create the table `GeneralApp_airport` instead of faking it? | 2019/07/27 | [
"https://Stackoverflow.com/questions/57229074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6843153/"
] | Just by running the following [command](https://docs.djangoproject.com/en/3.1/ref/django-admin/#cmdoption-migrate-fake) would solve the problem:
```
python manage.py migrate --fake
``` | Here are steps you can follow to make migrations work with an existing database. First, empty the Django migrations table in the database:
```sql
delete from django_migrations
```
1. Remove migrations from your migrations folder for the app
```sh
rm -rf <app>/migrations/
```
2. Reset the migration for builtin apps(like admin)
```sh
python manage.py migrate --fake
```
3. Create initial migration for each and every app
```sh
python manage.py makemigrations <app>
```
4. The final step is to create fake initial migrations
```sh
python manage.py migrate --fake-initial
``` | 4,666 |
26,327,497 | **WHAT IS WORKING**
I'm following the [uWSGI documentation to run Django](http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html#test-your-django-project). I'm testing that it's all working before starting to use nginx. I succeeded in running manage.py and loading the webpage in my browser:
```
python manage.py runserver 0.0.0.0:8000
```

**WHAT IS NOT WORKING**
The problem comes when I try to run it using uwsgi:
```
uwsgi --http :8000 --module metrics.wsgi
```
I can [run it without errors](http://pastebin.com/aBjSMt0b), but when I try to load it in my browser I get [AppRegistryNotReady error](http://pastebin.com/sWTKZnv3) from uwsgi. Any idea about what could be the reason? This is my wsgi.py file:
```
import os, sys, site, django.core.handlers.wsgi
SITE_DIR = '/home/ubuntu/web/metrics.com/app/'
site.addsitedir(SITE_DIR)
sys.path.append(SITE_DIR)
os.environ['DJANGO_SETTINGS_MODULE'] = 'metrics.settings'
application = django.core.handlers.wsgi.WSGIHandler()
```
My project structure:
```
/ubuntu
/www
/metrics.com
/app # here's my manage.py file
metrics/ # here's my wsgi.py and settigs.py files
``` | 2014/10/12 | [
"https://Stackoverflow.com/questions/26327497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2010764/"
] | **SOLUTION:**
An incorrect configuration in wsgi.py was making uWSGI unable to call the application. I solved it using this wsgi.py:
```
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "metrics.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```
And running uwsgi like this:
```
uwsgi --http :8000 --chdir /home/ubuntu/web/metrics.com/app --module gamemetrics.wsgi
```
**Edit**: using --chdir we set the base directory to use for --module.
**Edit 2**: In some cases, this can fix NGINX error: upstream prematurely closed connection while reading response header from upstream | 1):
Highly recommend run uwsgi in emperor mode.
```
/usr/bin/uwsgi --emperor /etc/uwsgi --pidfile /var/run/uwsgi.pid --daemonize /var/log/uwsgi.log
```
2) Example wsgi.py for your project:
```
import os
import sys
ADD_PATH = ['/home/ubuntu/web/metrics.com/app/',]
for item in ADD_PATH:
sys.path.insert (0, item)
os.environ['PYTHON_EGG_CACHE']= '/tmp/your-project-eggs'
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```
3) Example uwsgi config for the project (put this in /etc/uwsgi; see item 1)
------- project.yaml -------
```
uwsgi:
print: Project Configuration Started
socket: /var/tmp/metrics_uwsgi.sock
pythonpath: /home/ubuntu/web/metrics.com
env: DJANGO_SETTINGS_MODULE=app.settings
module: app.wsgi
chdir: /home/ubuntu/web/metrics.com/app
daemonize: /home/ubuntu/web/metrics.com/log/uwsgi.log
pidfile: /var/run/metrics_uwsgi.pid
max-requests: 5000
buffer-size: 32768
harakiri: 30
reload-mercy: 8
master: 1
no-orphans: 1
touch-reload: /home/ubuntu/web/metrics.com/log/uwsgi
post-buffering: 8192
```
---
4) Include in your nginx config
```
location /
{
uwsgi_pass unix:///var/tmp/metrics_uwsgi.sock;
include uwsgi_params;
uwsgi_buffers 8 128k;
}
``` | 4,668 |
46,895,876 | I have a script `wordcount.py`.
I used setuptools to create an entry point, named `wordcount`, so now I can call the command from anywhere in the system.
I am trying to execute it via spark-submit (command: `spark-submit wordcount`) but it is failing with the following error:
`Error: Cannot load main class from JAR file:/usr/local/bin/wordcount
Run with --help for usage help or --verbose for debug output`
However the exact same command works fine when I provide the path to the python script (command: `spark-submit /home/ubuntu/wordcount.py`)
Content of wordcount.py
```
import sys
from operator import add
from pyspark.sql import SparkSession
def main(args=None):
if len(sys.argv) != 2:
print("Usage: wordcount <file>", file=sys.stderr)
exit(-1)
spark = SparkSession\
.builder\
.appName("PythonWordCount")\
.getOrCreate()
lines = spark.read.text(sys.argv[1]).rdd.map(lambda r: r[0])
counts = lines.flatMap(lambda x: x.split(' ')) \
.map(lambda x: (x, 1)) \
.reduceByKey(add)
output = counts.collect()
for (word, count) in output:
print("%s: %i" % (word, count))
spark.stop()
if __name__ == "__main__":
main()
```
Do you know if there is a way to bypass this?
Thanks a lot in advance. | 2017/10/23 | [
"https://Stackoverflow.com/questions/46895876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7963251/"
] | This is pretty similar to Jaap's comment, but a little more spelled out and uses the row names explicitly:
```
mat = as.matrix(dat[, 2:5])
row.names(mat) = dat$MUN
mat = rbind(mat, colSums(mat[c("Angra dos Reis (RJ)", "Areal (RJ)"), ], na.rm = T))
row.names(mat)[nrow(mat)] = "X"
mat
# X1990 X1991 X1992 X1993
# Angra dos Reis (RJ) 11 10 10 10
# Aperibé (RJ) NA NA NA NA
# Araruama (RJ) 12040 14589 14231 14231
# Areal (RJ) NA NA NA 3
# Armação dos Búzios (RJ) NA NA NA NA
# X 11 10 10 13
```
The result is a `matrix`, you can convert it back to a data frame if needed:
```
dat_result = data.frame(MUN = row.names(mat), mat, row.names = NULL)
```
I dislike the format of your data as a data frame. I would either convert it to a matrix (as above) or convert it to long format with, e.g., `tidyr::gather(dat, key = year, value = value, -MUN)` and work with it "by group" using `data.table` or `dplyr`.
---
Using this data:
```
dat = read.table(text = " MUN X1990 X1991 X1992 X1993
1 'Angra dos Reis (RJ)' 11 10 10 10
2 'Aperibé (RJ)' NA NA NA NA
3 'Araruama (RJ)' 12040 14589 14231 14231
4 'Areal (RJ)' NA NA NA 3
5 'Armação dos Búzios (RJ)' NA NA NA NA", header= T)
``` | A solution can be using sqldf package. If the name of the data frame is `df`, you can do it likes the following:
```
library(sqldf)
result <- sqldf("SELECT * FROM df UNION
SELECT 'X', SUM(X1990), SUM(X1991), SUM(X1992), SUM(X1993) FROM df
WHERE MUN IN ('Angra dos Reis (RJ)', 'Areal (RJ)')")
``` | 4,669 |
4,119,054 | I'm looking for a finance library in Python which offers a method similar to MATLAB's [portalloc](http://www.mathworks.com/help/toolbox/finance/portalloc.html). It is used to optimize a portfolio. | 2010/11/07 | [
"https://Stackoverflow.com/questions/4119054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/409701/"
] | If you know linear algebra, there is a simple function for solving the optimization problem which any library should support. Unfortunately, it's been so long since I researched it that I can't tell you the formula or a library that supports it, but a little research should reveal it. The main point is that any linear algebra library should do.
Update:
Here's a quote from a post I found.
>
> Some research says that "mean variance portfolio optimization" can
> give good results. I discussed this in a message
>
>
> To implement this approach, a needed input is the covariance matrix of
> returns, which requires historical stock prices, which one can obtain
> using "Python quote grabber" <http://www.openvest.org/Databases/ovpyq> .
>
>
> For expected returns -- hmmm. One of the papers I cited found that
> assuming equal expected returns of all stocks can give reasonable
> results.
>
>
> Then one needs a "quadratic programming" solver, which appears to be
> handled by the CVXOPT Python package.
>
>
> If someone implements the approach in Python, I'd be happy to hear
> about it.
>
>
> There is a "backtest" package in R (open source stats package callable
> from Python) <http://cran.r-project.org/web/packages/backtest/index.html>
> "for exploring portfolio-based hypotheses about financial instruments
> (stocks, bonds, swaps, options, et cetera)."
>
>
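Since the quote names CVXOPT, here is a rough sketch of the minimum-variance flavor of that quadratic program (my own illustration; `returns` is a hypothetical T x N array of historical returns):

```
import numpy as np
from cvxopt import matrix, solvers

cov = np.cov(returns.T)          # N x N covariance matrix of asset returns
n = cov.shape[0]

P = matrix(cov)                  # minimize (1/2) w' P w
q = matrix(np.zeros(n))
G = matrix(-np.eye(n))           # long-only constraint: -w <= 0
h = matrix(np.zeros(n))
A = matrix(np.ones((1, n)))      # budget constraint: sum(w) == 1
b = matrix(1.0)

weights = solvers.qp(P, q, G, h, A, b)['x']
```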
> | Maybe you could use this [library](http://www.downv.com/Mac/download-Python-statlib-10009752.htm) (statlib) or this [one](http://www.downv.com/Mac/download-Mystic-10004716.htm) (Mystic) to help you. | 4,674 |
71,929,367 | Please don't delete this. This is so simple, but I've been banging my head trying to make this work for the last two hours.
As displayed, I want to import the Python file module\_dir into module\_sub\_dir.py, but it's giving errors. All `__init__.py` files are empty. [](https://i.stack.imgur.com/MYBST.png)
```
from .. import module_dir
```
This doesn't work either, as it gives the error:
```
ImportError: attempted relative import with no known parent package
``` | 2022/04/19 | [
"https://Stackoverflow.com/questions/71929367",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11268322/"
] | Add `sys.path.append("..")` at the very beginning; then the import machinery will be able to reach files in the parent directory of the CWD. But I recommend that you not use such an ugly solution in your projects. | Update, as I have figured out the issue myself.
So the issue is how VS Code runs a .py file individually, as opposed to running the entire package like PyCharm does. It works in PyCharm. To make it work in VS Code, `sys.path.append('..')` works. | 4,678 |
58,455,061 | Suppose a Python dictionary is like
D = {'a':1,'a':2}
Can I get those 2 values with the same key?
I want to write a function so I can get a dictionary like the above. | 2019/10/18 | [
"https://Stackoverflow.com/questions/58455061",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5961544/"
] | Dictionary keys in Python are unique. Python will resolve `D = {'a':1,'a':2}` as `D = {'a': 2}`
You can effectively store multiple values under the same key by storing a list under that key. In your case,
```
D = {'a': [1, 2]}
```
This would allow you to access the elements of 'a' by using
```
D['a'][elementIdx] # D['a'][0] = 1
``` | You cannot. I set up an identical dictionary, and when attempting to print the key `'a'`, I received the secondary value, i.e., `2`. Keys are meant to be unique.
You could try something like:
```
x = {}
for i in range(2):
x[f"a{i}"] = i
```
Which would output key values like `a0, a1, etc.` | 4,679 |
1,062,803 | Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
... if ab.count(i) == 1:
... c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks! | 2009/06/30 | [
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] | at the very least use a list comprehension:
```
[x for x in a + b if (a + b).count(x) == 1]
```
otherwise use the [set](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset) class:
```
list(set(a).symmetric_difference(set(b)))
```
there is also a more compact form:
```
list(set(a) ^ set(b))
``` | Items in b that aren't in a, if you need to preserve order or duplicates in b:
```
>>> a = [1, 2, 3]
>>> b = [1, 2, 3, 4, 4, 5]
>>> a_set = set(a)
>>> [x for x in b if x not in a_set]
[4, 4, 5]
```
Items in b that aren't in a, not preserving order, and not preserving duplicates in b:
```
>>> list(set(b) - set(a))
[4, 5]
``` | 4,680 |
70,992,422 | When running xlwings 0.26.1 (latest for Anaconda 3.83) or 0.10.0 (used for compatibility reasons) with the latest version of `Office 365 Excel`, I get an error after moving a sheet when running `app.quit()`:
```
import xlwings as xw
import pythoncom
pythoncom.CoInitialize()
app = xw.apps.add()
app.display_alerts = False
app.screen_updating = False
wbSource = app.books.open('pathSourceTemp')
wsSource = wbSource.sheets['sourceSheet']
wbDestination = app.books.open('pathDestinationTemp')
wsDestination = None
#Grabs first sheet in destination
wsDestination = wbDestination.sheets[0]
#Copy sheet "before" destination sheet (which should be 1 sheet after the destination sheet)
wsSource.api.Copy(Before=wsDestination.api)
wbDestination.save()
#Close workbooks and app
wbDestination.close()
wbSource.close()
app.screen_updating = True
app.quit()
```
The final line causes Excel to throw an error that I have to click out of for the process to continue. | 2022/02/04 | [
"https://Stackoverflow.com/questions/70992422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7952542/"
] | I found the problem
If `mcr.microsoft.com/windows/nanoserver:1809` image is used then arguments should be used in %arg% format.
If `mcr.microsoft.com/dotnet/framework/sdk:4.8` image is used then arguments should be used in $env:arg format.
It is confusing and I haven't found where it is documented. | This looks like it might be a bug.
When you have a build arg with the same name as an already existing environment variable, Docker will use the already set environment variable instead of the build arg.
The framework image you use already has an environment variable called DOTNET\_VERSION, so you can't access the build arg value.
The solution is to name your build arguments something else. I've added a suffix \_ARG here
```
ARG DOTNET_VERSION_ARG=net48
ARG CONFIGURATION=Release
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build-env
ARG DOTNET_VERSION_ARG
ARG CONFIGURATION
RUN echo .Net version: $env:DOTNET_VERSION_ARG
FROM mcr.microsoft.com/windows/nanoserver:1809
ARG DOTNET_VERSION_ARG
RUN echo .Net version: $env:DOTNET_VERSION_ARG
```
My experiments were on Linux where I used this Dockerfile
```
FROM mcr.microsoft.com/dotnet/aspnet:6.0
ARG DOTNET_VERSION=no-arg
ARG DOTNET_VERSION_ARG=arg
RUN echo DOTNET_VERSION=$DOTNET_VERSION - DOTNET_VERSION_ARG=$DOTNET_VERSION_ARG
ENV DOTNET_VERSION=$DOTNET_VERSION_ARG
RUN echo DOTNET_VERSION=$DOTNET_VERSION
```
and got this output
```
DOTNET_VERSION=6.0.1 - DOTNET_VERSION_ARG=arg
DOTNET_VERSION=arg
```
So if you have an ENV statement, you can set the environment variable to the value from the build argument. | 4,690 |
53,092,936 | I need some help with an assignment for Python.
The task is to convert a .csv file to a dictionary and make some changes. The problem is that the .csv file only has 1 column, but 3 rows.
The .csv file looks like this in Excel:
```
A B
1.male Bob West
2.female Hannah South
3.male Bruce North
```
So everything is in column A.
So far, my code looks like this:
```
import csv
reader = csv.reader(open("filename.csv"))
d = {}
for row in reader:
    d[row[0]] = row[0:]
print(d)
```
And the output
```
{'\ufeffmale Bob West': ['\ufeffmale Bob West'], 'female Hannah South':
['female Hannah South'], 'male Bruce North': ['male Bruce North']}
```
but I want
```
{1 : Bob West, 2 : Hannah South, 3 : Bruce North}
```
The male/female should be replaced with the ID (1, 2, 3), and I don't know how to work around the single-column issue.
Thanks in advance. | 2018/10/31 | [
"https://Stackoverflow.com/questions/53092936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10470153/"
] | You can use a dict comprehension and enumerate the `csv` reader object:
```
import csv
reader = csv.reader(open("filename.csv"))
x = {num+1:name[0].split(" ",1)[-1].rstrip() for (num, name) in enumerate(reader)}
print(x)
# output,
{1: 'Bob West', 2: 'Hannah South', 3: 'Bruce North'}
```
Or you can do it without the `csv` module, simply by reading the file directly:
```
with open("filename.csv", 'r') as t:
    next(t)  # skip the first line
    x = {num+1: name.split(" ", 1)[-1].strip() for (num, name) in enumerate(t)}
print(x)
# output,
{1: 'Bob West', 2: 'Hannah South', 3: 'Bruce North'}
``` | This should work for the given input:
data.csv:
=========
```
1.male Bob West,
2.female Hannah South,
3.male Bruce North,
```
Code:
=====
```
import csv

reader = csv.reader(open("data.csv"))
d = {}
for row in reader:
    splitted = row[0].split('.')
    # print splitted[0]
    # print ' '.join(splitted[1].split(' ')[1:])
    d[splitted[0]] = ' '.join(splitted[1].split(' ')[1:])
print(d)
```
Output
======
```
{'1': 'Bob West', '3': 'Bruce North', '2': 'Hannah South'}
``` | 4,691 |
13,788,688 | I'm using Python code to get data from my server. However, I keep getting a "u" prefix on each key in the JSON
as follows:
```
"{u'BD': 271, u'PS': 48, u'00': 177, u'CA': 5, u'DE': 15, u'FR': 18, u'UM': 45, u'KR': 6, u'IL': 22181, u'GB': 15}"
```
My python code is as follows:
```
import json

ans = ...  # select something from the database
json.dumps(ans)
```
Does anyone know how to avoid it? | 2012/12/09 | [
"https://Stackoverflow.com/questions/13788688",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1432779/"
] | The `u''` means the value is a Unicode string literal. Everything is working as intended; you don't need to get rid of those.
JSON is a standard that supports Unicode values natively, and thus the `json` module accepts unicode strings when converting a Python value to JSON:
```
>>> import json
>>> ans={u'BD': 271, u'PS': 48, u'00': 177, u'CA': 5, u'DE': 15, u'FR': 18, u'UM': 45, u'KR': 6, u'IL': 22181, u'GB': 15}
>>> json.dumps(ans)
'{"BD": 271, "PS": 48, "00": 177, "IL": 22181, "UM": 45, "KR": 6, "CA": 5, "DE": 15, "FR": 18, "GB": 15}'
``` | I think something got mixed up here. The result you've posted looks like a Python representation of a dict. To be precise: json.dumps returns a string, so its result should be enclosed in quotes, like this:
```
>>> import json
>>> json.dumps({'foo': 'bar'})
'{"foo": "bar"}'
``` | 4,698 |
32,540,092 | I have a jinja2 template designed to print out the IP addresses of EC2 instances (tagged region: au):
```
{% for host in groups['tag_region_au'] %}
```
My problem is that I can't for the life of me work out how to include only hosts that exist in one group and NOT in another (each host may be in two or more groups). For example, in Python the following works:
```
( (a in list) and ( a not in list2) )
```
However the following does not:
```
{% for (host in groups['tag_region_au']) and (host not in groups['tag_state_live']) %}
```
Any idea how I can include only hosts that exist in one group and do not exist in another group? | 2015/09/12 | [
"https://Stackoverflow.com/questions/32540092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/264897/"
] | You can use the built-in `group_names` variable in this case. `group_names` is a list of all groups that the current host is a member of.
My `hosts` file:
```
[tag_region_au]
host1
host2
host3
[tag_state_live]
host2
host3
host4
```
My template file `test.j2`:
```
{% for host in groups['tag_region_au'] %}
{% if hostvars[host]['group_names']|length == 1 %}
{{ host }} - {{ hostvars[host]['group_names'] }}
{% endif %}
{% endfor %}
```
`hostvars` is a dict whose keys are Ansible hostnames and whose values are dicts mapping variable names to values. `length` is a Jinja filter that returns the number of items in a list.
Result:
```
host1 - ['tag_region_au']
```
If you change `==` to `>`, the result is:
```
host2 - ['tag_region_au', 'tag_state_live']
host3 - ['tag_region_au', 'tag_state_live']
```
Update:
=======
To find hosts that are in group A and not in group B, you can use the `difference` filter. The syntax is `{{ list1 | difference(list2) }}`.
Here is example template:
```
{% for host in groups['tag_region_au']|difference(groups['tag_state_live']) %}
{{ host }}
{% endfor %}
```
Result is: `host1` | A really clean way to do it if you don't mind starting a new play for the templating stuff is to use a group expression in the play target (which is exactly what they're for). For example:
```
- hosts: tag_region_au:!tag_state_live
  tasks:
    - template: (bla)
```
Then in your template, you'd reference the `play_hosts` var to get at the list of filtered hosts. | 4,699 |
7,549,403 | How can I tell Python to scan the current directory for a file called "filenames.txt" and, if that file isn't there, to extract it from a zip file called "files.zip"? I know how to work with zipfile; I just don't know how to scan the current directory for that file and use if/then logic with it. | 2011/09/25 | [
"https://Stackoverflow.com/questions/7549403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/715578/"
] | ```
import os, zipfile

# Check whether the file is already in the current directory
if 'filenames.txt' in os.listdir('.'):
    print 'file is in current dir'
else:
    # Otherwise pull it out of the zip archive
    zf = zipfile.ZipFile('files.zip')
    zf.extract('filenames.txt')
``` | From the documentation
```none
$ pydoc os.path.exists
Help on function exists in os.path:
os.path.exists = exists(path)
Test whether a path exists. Returns False for broken symbolic links
``` | 4,700 |
11,209,646 | Is there a python module that will do a waterfall plot like MATLAB does? I googled 'numpy waterfall', 'scipy waterfall', and 'matplotlib waterfall', but did not find anything. | 2012/06/26 | [
"https://Stackoverflow.com/questions/11209646",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/908924/"
] | You can do a waterfall in matplotlib using the [PolyCollection](https://matplotlib.org/api/collections_api.html?highlight=polycollection#matplotlib.collections.PolyCollection) class. See this specific [example](https://matplotlib.org/examples/mplot3d/polys3d_demo.html) for more details on how to build a waterfall with this class.
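A minimal sketch in that spirit (the data, colors, and axis limits below are illustrative choices, not taken from the linked example):
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PolyCollection
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 -- registers the '3d' projection

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

xs = np.linspace(0.0, 10.0, 100)
zs = [0.0, 1.0, 2.0, 3.0]            # one polygon ("slice") per z value
verts = []
for z in zs:
    ys = np.random.rand(len(xs))
    ys[0], ys[-1] = 0, 0             # pin both ends so each polygon closes at the baseline
    verts.append(list(zip(xs, ys)))

poly = PolyCollection(verts, facecolors=['r', 'g', 'b', 'y'])
poly.set_alpha(0.7)
ax.add_collection3d(poly, zs=zs, zdir='y')

ax.set_xlim3d(0, 10)
ax.set_ylim3d(-1, 4)
ax.set_zlim3d(0, 1)
plt.show()
```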
Also, you might find this [blog post](http://austringer.net/wp/index.php/2011/05/20/plotting-a-dolphin-biosonar-click-train/) useful, since the author shows that you might obtain some 'visual bug' in some specific situation (depending on the view angle chosen).
Below is an example of a waterfall made with matplotlib (image from the blog post):
[](https://i.stack.imgur.com/sqRcC.png)
(source: [austringer.net](http://austringer.net/images/biosonar/wfall_demo.png)) | Have a look at [mplot3d](http://matplotlib.sourceforge.net/mpl_toolkits/mplot3d/tutorial.html#wireframe-plots):
```
# copied from
# http://matplotlib.sourceforge.net/mpl_examples/mplot3d/wire3d_demo.py
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
plt.show()
```

I don't know how to get results as nice as MATLAB's.
---
If you want more, you may also have a look at [MayaVi](http://matplotlib.sourceforge.net/mpl_toolkits/mplot3d/faq.html#how-is-mplot3d-different-from-mayavi): <http://mayavi.sourceforge.net/> | 4,702 |
61,575,311 | I am trying to open data from a CSV file in the Visual Studio Code terminal and receive:
```
Traceback (most recent call last):
  File "/home/jubal/ CrashCourse Python Notes/Chapter 16 CC/Downloading Date/csv format/highs_lows.py", line 7, in <module>
    with open(filename) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'sitka_weather_2018_full.csv'
```
Here is the program highs_lows.py; it is saved in the same folder as sitka_weather_2018_full.csv:
```
import csv
filename = 'sitka_weather_2018_full.csv'
with open(filename) as f:
    reader = csv.reader(f)
    header_row = next(reader)

    highs = []
    for row in reader:
        highs.append(row[8])

    print(highs)
```
My laptop runs Linux Mint 19.2 Cinnamon. I am able to run this program just fine in a Jupyter notebook, but when I convert it into a Python program and run it in the VS Code terminal, this problem occurs. I'm new to programming, so any help would be great. Thanks for your time! | 2020/05/03 | [
"https://Stackoverflow.com/questions/61575311",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10457351/"
] | So the question says that `N` can be up to 10000, but your code assumes that it is no bigger than 100. | Regarding the TLE (time limit exceeded), you are almost there. Try to replace the
```
for (int j = i + 1; j <= i + (p - 1); j++)
    tmp += skill[i] - skill[j];
```
loop with a constant-time expression. Hint: when the most skillful player leaves the window, by how much does the training time for the remaining players decrease? | 4,707 |
64,397,933 | I have a simple webpage that uses the Okta web API:
```
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.0/css/bootstrap.min.css" integrity="sha384-9gVQ4dYFwwWSjIDZnLEWnxCjeSWFphJiwGPXr1jddIhOegiu1FwO5qRGvFXOdJZ4" crossorigin="anonymous" />
    <title>Simple Web Page</title>
    <style>
      h1 {
        margin: 2em 0;
      }
    </style>
    <!-- widget stuff here -->
    <script src="https://ok1static.oktacdn.com/assets/js/sdk/okta-signin-widget/2.16.0/js/okta-sign-in.min.js" type="text/javascript"></script>
    <link href="https://ok1static.oktacdn.com/assets/js/sdk/okta-signin-widget/2.16.0/css/okta-sign-in.min.css" type="text/css" rel="stylesheet" />
    <link href="https://ok1static.oktacdn.com/assets/js/sdk/okta-signin-widget/2.16.0/css/okta-theme.css" type="text/css" rel="stylesheet" />
  </head>
  <body>
    <div class="container">
      <h1 class="text-center">Test</h1>
      <div id="messageBox" class="jumbotron">
        You are not logged in.
      </div>
      <!-- where the sign-in form will be displayed -->
      <div id="okta-login-container"></div>
    </div>
    <script type="text/javascript">
      var oktaSignIn = new OktaSignIn({
        baseUrl: "{{ https://dev-8490637.okta.com }}",
        clientId: "{{ 0oa97ptccRHXCE3kN5d5 }}",
        authParams: {
          issuer: "default",
          responseType: ["token", "id_token"],
          display: "page",
        },
      });
      if (oktaSignIn.token.hasTokensInUrl()) {
        oktaSignIn.token.parseTokensFromUrl(
          // If we get here, the user just logged in.
          function success(res) {
            var accessToken = res[0];
            var idToken = res[1];
            oktaSignIn.tokenManager.add("accessToken", accessToken);
            oktaSignIn.tokenManager.add("idToken", idToken);
            window.location.hash = "";
            document.getElementById("messageBox").innerHTML = "Hello, " + idToken.claims.email + "! You just logged in! :)";
          },
          function error(err) {
            console.error(err);
          }
        );
      } else {
        oktaSignIn.session.get(function (res) {
          // If we get here, the user is already signed in.
          if (res.status === "ACTIVE") {
            document.getElementById("messageBox").innerHTML = "Hello, " + res.login + "! You are *still* logged in! :)";
            return;
          }
          oktaSignIn.renderEl(
            { el: "#okta-login-container" },
            function success(res) {},
            function error(err) {
              console.error(err);
            }
          );
        });
      }
    </script>
  </body>
</html>
```
I have Python installed on my laptop.
When I open the Visual Studio Code terminal for the project that contains my index.html file, I use these commands:
```none
cd C:\Users\marta\test (directory where my code is)
python -m http.server 8080 (serve on port 8080)
```
I checked whether my firewall/anti-virus was blocking it, and it isn't.
The problem: when I load http://localhost:8080/, it keeps showing me this error:
 | 2020/10/17 | [
"https://Stackoverflow.com/questions/64397933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14465519/"
] | If you have created a custom mass update script, you would have created parameters accessed like:
```
runtime.getCurrentScript().getParameter({name:'custscript....'});
```
If so, and you are then triggering the workflow via:
```
workflow.initiate({
    recordType: 'customer', ...
```
then you might do something like:
```
/**
 * @NApiVersion 2.x
 * @NScriptType MassUpdateScript
 */
define(["N/runtime", "N/workflow"], function (runtime, workflow) {
    function each(params) {
        workflow.initiate({
            workflowId: 'customworkflow_target_id',
            recordType: params.type,
            recordId: params.id,
            defaultValues: {
                custworkflow_field_1: runtime.getCurrentScript().getParameter({ name: 'custscript_field_1' })
                // and so on. Of course you'll probably dereference runtime.getCurrentScript()
                // if you have multiple parameters.
                // You'll have to define a workflow field for every value you want to pass;
                // custworkflow_field_1 is the id of the workflow field, and custscript_field_1
                // is the id of the corresponding script parameter.
                // For sanity's sake I recommend giving your script parameters ids similar
                // to the workflow field ids.
            }
        });
    }
    exports.each = each;
});
Create parameters in your script; then, when calling the custom action script in the workflow, pass the workflow field values to the script parameters. | 4,708 |
31,361,482 | I'm porting a basic Python script and creating a similarly basic Flask application. I have a file consisting of a bunch of functions that I'd like to access within my Flask application.
Here's what I have so far for my views:
```
from flask import render_template
from app import app

def getRankingList():
    return 'hey everyone!'

@app.route("/")
@app.route("/index")
def index():
    rankingsList = getRankingList()
    return render_template('index.html', rankingsList=rankingsList)

if __name__ == '__main__':
    app.run(debug=True)
```
Ideally, I'd have access to all of the functions from my original script and make use of them within my `getRankingList()` function. I've googled around and can't seem to sort out how to do this, however.
Any ideas? | 2015/07/11 | [
"https://Stackoverflow.com/questions/31361482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1316501/"
] | Simply put another Python script file (for example `helpers.py`) in the same directory as your main Flask .py file.
Then at the top of your main Flask file, you can do `import helpers`, which lets you access any function in helpers by prefixing it with `helpers.` (for example `helpers.exampleFunction()`).
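A minimal sketch of that layout (the file names and the helper function are illustrative, not from the original post):
```
# helpers.py -- sits in the same directory as the main Flask file
def exampleFunction():
    return 'hey everyone!'
```
```
# app.py -- a hypothetical main Flask file
from flask import Flask
import helpers

app = Flask(__name__)

@app.route("/")
def index():
    # Access the helper through the module namespace
    return helpers.exampleFunction()
```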
Or you can do `from helpers import exampleFunction` and use `exampleFunction()` directly in your code. Or `from helpers import *` to import and use all the functions directly in your code. | Just import your file as usual and use functions from it:
```
# foo.py
def bar():
    return 'hey everyone!'
```
And in the main file:
```
# main.py
from flask import render_template
from app import app
from foo import bar

def getRankingList():
    return 'hey everyone!'

@app.route("/")
@app.route("/index")
def index():
    rankingsList = getRankingList()
    baz = bar()  # Function from your foo.py
    return render_template('index.html', rankingsList=rankingsList)

if __name__ == '__main__':
    app.run(debug=True)
``` | 4,709 |
33,775,658 | So I have this operation in Python, `x = int(v, base=2)`, which takes `v` as a binary string. What would be the inverse operation to that?
For example, given `1101000110111111011001100001` it would return `219936353`, so I want to get this binary string from the `219936353` number.
Thanks | 2015/11/18 | [
"https://Stackoverflow.com/questions/33775658",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3770881/"
] | Try out the bin() function.
```
bin(yourNumber)[2:]
```
will give you a string containing the bits of your number. | ```
num = 219936353
print("{:b}".format(num))
--output:--
1101000110111111011001100001
```
The other solutions are all wrong:
```
num = 1
string = bin(1)
result = int(string, 10)
print(result)
--output:--
Traceback (most recent call last):
File "1.py", line 4, in <module>
result = int(string, 10)
ValueError: invalid literal for int() with base 10: '0b1'
```
You would have to do this:
```
num = 1
string = bin(1)
result = int(string[2:], 2)
print(result) #=> 1
``` | 4,710 |