qid | question | date | metadata | response_j | response_k |
---|---|---|---|---|---|
57,948,945 | I have a very large square matrix of order around 570,000 x 570,000 and I want to raise it to the power of 2 (i.e., square it).
The data is in JSON format, cast to an associative array of associative arrays (a dict inside a dict in Python).
Let's say I want to represent this matrix:
```
[ [0, 0, 0],
[1, 0, 5],
[2, 0, 0] ]
```
In json it's stored like:
```
{"3": {"1": 2}, "2": {"1": 1, "3": 5}}
```
Here, for example, `"3": {"1": 2}` means the number in the 3rd row and 1st column is 2.
I want the output in the same JSON format, but with the matrix raised to the power of 2 (matrix multiplication).
The programming language isn't important; I want to calculate it the fastest way (in less than 2 days, if possible).
So I tried to use NumPy in Python (`numpy.linalg.matrix_power`), but it seems that it doesn't work with my nested, unsorted dict format.
I wrote simple Python code to do that, but I estimated it would take 18 days to finish:
```
jsonFileName = "file.json"

def matrix_power(arr):
    result = {}
    for x1, subarray in arr.items():
        print("doing item:", x1)
        for y1, value1 in subarray.items():
            for x2, subarray2 in arr.items():
                if y1 != x2:
                    continue
                for y2, value2 in subarray2.items():
                    partSum = value1 * value2
                    result[x1][y2] = result.setdefault(x1, {}).setdefault(y2, 0) + partSum
    return result

import json
with open(jsonFileName, 'r') as reader:
    jsonFile = reader.read()
print("reading is successful")
jsonArr = json.loads(jsonFile)
print("matrix is in array form")
matrix = matrix_power(jsonArr)
print("Well Done! matrix is powered by 2 now")
output = json.dumps(matrix)
print("result is in json format")
writer = open("output.json", 'w+')
writer.write(output)
writer.close()
print("Task is done! you can close this window now")
```
Here, X1,Y1 are the row and column of an element of the first matrix, which is then multiplied by the corresponding element of the second matrix (X2,Y2). | 2019/09/15 | [
"https://Stackoverflow.com/questions/57948945",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10530951/"
] | Numpy is not the problem; you need to input the data in a format that numpy can understand. But since your matrix is really big, it probably won't fit in memory, so it's probably a good idea to use a sparse matrix (`scipy.sparse.csr_matrix`):
```
import scipy.sparse

# `data` is the dict-of-dicts parsed from the JSON file
m = scipy.sparse.csr_matrix((
    [v for row in data.values() for v in row.values()],            # non-zero values
    (
        [int(row_n) for row_n, row in data.items() for v in row],  # row indices
        [int(column) for row in data.values() for column in row]   # column indices
    )
))
```
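For instance, with the toy matrix from the question (a minimal sketch, assuming the 1-based JSON keys should map to 0-based matrix indices, hence the `- 1`, and that the shape is known up front):
```
import json
import scipy.sparse

data = json.loads('{"3": {"1": 2}, "2": {"1": 1, "3": 5}}')

# Build parallel (value, row, column) lists; the JSON keys are 1-based strings.
values = [v for row in data.values() for v in row.values()]
rows = [int(r) - 1 for r, row in data.items() for _ in row]
cols = [int(c) - 1 for row in data.values() for c in row]

m = scipy.sparse.csr_matrix((values, (rows, cols)), shape=(3, 3))
print(m.toarray())  # [[0 0 0], [1 0 5], [2 0 0]] -- matches the question's matrix
```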
Then it's just a matter of doing:
```
m**2
``` | >
> I don't know how it can hold csr\_matrix format but not in dictionary. d.update gives MemoryError after some time
>
>
>
Here's a variant which doesn't construct the whole output dictionary and JSON string in memory, but prints the individual rows directly to the output file; this should need considerably less memory.
```
#!/usr/bin/env python3
…
import json
import sys

sys.stdout = open("output.json", 'w')
delim = '{'
end = m.indptr[0]
for row in range(m.shape[0]):
    start = end
    end = m.indptr[row+1]
    if end > start:  # if row not empty
        print(delim, '"'+str(1+row)+'":',
              json.dumps(dict(zip([str(1+i) for i in m.indices[start:end]], m.data[start:end])), default=int)
              )
        delim = ','
print('}')
``` |
57,948,945 | I have a very large square matrix of order around 570,000 x 570,000 and I want to raise it to the power of 2 (i.e., square it).
The data is in JSON format, cast to an associative array of associative arrays (a dict inside a dict in Python).
Let's say I want to represent this matrix:
```
[ [0, 0, 0],
[1, 0, 5],
[2, 0, 0] ]
```
In json it's stored like:
```
{"3": {"1": 2}, "2": {"1": 1, "3": 5}}
```
Here, for example, `"3": {"1": 2}` means the number in the 3rd row and 1st column is 2.
I want the output in the same JSON format, but with the matrix raised to the power of 2 (matrix multiplication).
The programming language isn't important; I want to calculate it the fastest way (in less than 2 days, if possible).
So I tried to use NumPy in Python (`numpy.linalg.matrix_power`), but it seems that it doesn't work with my nested, unsorted dict format.
I wrote simple Python code to do that, but I estimated it would take 18 days to finish:
```
jsonFileName = "file.json"

def matrix_power(arr):
    result = {}
    for x1, subarray in arr.items():
        print("doing item:", x1)
        for y1, value1 in subarray.items():
            for x2, subarray2 in arr.items():
                if y1 != x2:
                    continue
                for y2, value2 in subarray2.items():
                    partSum = value1 * value2
                    result[x1][y2] = result.setdefault(x1, {}).setdefault(y2, 0) + partSum
    return result

import json
with open(jsonFileName, 'r') as reader:
    jsonFile = reader.read()
print("reading is successful")
jsonArr = json.loads(jsonFile)
print("matrix is in array form")
matrix = matrix_power(jsonArr)
print("Well Done! matrix is powered by 2 now")
output = json.dumps(matrix)
print("result is in json format")
writer = open("output.json", 'w+')
writer.write(output)
writer.close()
print("Task is done! you can close this window now")
```
Here, X1,Y1 are the row and column of an element of the first matrix, which is then multiplied by the corresponding element of the second matrix (X2,Y2). | 2019/09/15 | [
"https://Stackoverflow.com/questions/57948945",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10530951/"
] | >
> now I have to somehow translate csr\_matrix back to json serializable
>
>
>
Here's one way to do that, using the attributes **data**, **indices**, **indptr** - `m` is the *csr\_matrix*:
```
d = {}
end = m.indptr[0]
for row in range(m.shape[0]):
    start = end
    end = m.indptr[row+1]
    if end > start:  # if row not empty
        d.update({str(1+row): dict(zip([str(1+i) for i in m.indices[start:end]], m.data[start:end]))})
output = json.dumps(d, default=int)
``` | >
> I don't know how it can hold csr\_matrix format but not in dictionary. d.update gives MemoryError after some time
>
>
>
Here's a variant which doesn't construct the whole output dictionary and JSON string in memory, but prints the individual rows directly to the output file; this should need considerably less memory.
```
#!/usr/bin/env python3
…
import json
import sys

sys.stdout = open("output.json", 'w')
delim = '{'
end = m.indptr[0]
for row in range(m.shape[0]):
    start = end
    end = m.indptr[row+1]
    if end > start:  # if row not empty
        print(delim, '"'+str(1+row)+'":',
              json.dumps(dict(zip([str(1+i) for i in m.indices[start:end]], m.data[start:end])), default=int)
              )
        delim = ','
print('}')
``` |
744,894 | I want to pull certain comments from my py files that give context to translations, rather than manually editing the .pot file. Basically I want to go from this python file:
```
# For Translators: some useful info about the sentence below
_("Some string blah blah")
```
to this pot file:
```
# For Translators: some useful info about the sentence below
#: something.py:1
msgid "Some string blah blah"
msgstr ""
``` | 2009/04/13 | [
"https://Stackoverflow.com/questions/744894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/55565/"
] | After much pissing about I found the best way to do this:
```
#. Translators:
# Blah blah blah
_("String")
```
Then search for comments with a . like so:
```
xgettext --language=Python --keyword=_ --add-comments=. --output=test.pot *.py
``` | I was going to suggest the `compiler` module, but it ignores comments:
f.py:
```
# For Translators: some useful info about the sentence below
_("Some string blah blah")
```
..and the compiler module:
```
>>> import compiler
>>> m = compiler.parseFile("f.py")
>>> m
Module(None, Stmt([Discard(CallFunc(Name('_'), [Const('Some string blah blah')], None, None))]))
```
The [AST](http://www.python.org/doc/2.5.2/lib/module-compiler.ast.html) module in Python 2.6 seems to do the same.
Not sure if it's possible, but if you use triple-quoted strings instead..
```
"""For Translators: some useful info about the sentence below"""
_("Some string blah blah")
```
..you can reliably parse the Python file with the compiler module:
```
>>> m = compiler.parseFile("f.py")
>>> m
Module('For Translators: some useful info about the sentence below', Stmt([Discard(CallFunc(Name('_'), [Const('Some string blah blah')], None, None))]))
```
I made an attempt at writing a more complete script to extract docstrings - it's incomplete, but seems to grab most docstrings: <http://pastie.org/446156> (or on [github.com/dbr/so\_scripts](http://github.com/dbr/so_scripts/tree/0bd66a21695a390cfa45f9ee26d7bed4eac10e5c/parse_py))
The other, much simpler, option would be to use regular expressions, for example:
```
f = """# For Translators: some useful info about the sentence below
_("Some string blah blah")
""".split("\n")
import re
for i, line in enumerate(f):
m = re.findall("\S*# (For Translators: .*)$", line)
if len(m) > 0 and i != len(f):
print "Line Number:", i+1
print "Message:", m
print "Line:", f[i + 1]
```
..outputs:
```
Line Number: 1
Message: ['For Translators: some useful info about the sentence below']
Line: _("Some string blah blah")
```
Not sure how the `.pot` file is generated, so I can't be any help at all with that part.. |
72,029,157 | I am reading a book and trying to practice this code snippet:
```py
>>> from lis import parse
>>> parse('1.5')
1.5
```
Then I followed the guide at <https://github.com/adamhaney/lispy#getting-started>. My PC is Windows 11 Pro x64.
```
C:\Users\donhu>python -V
Python 3.10.4
C:\Users\donhu>pip -V
pip 22.0.4 from C:\Program Files\Python310\lib\site-packages\pip (python 3.10)
C:\Users\donhu>pip install lispy
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement lispy (from versions: none)
ERROR: No matching distribution found for lispy
C:\Users\donhu>
```
[![enter image description here](https://i.stack.imgur.com/s0Iir.png)](https://i.stack.imgur.com/s0Iir.png)
I also tried installing it with Anaconda, but without success.
[![enter image description here](https://i.stack.imgur.com/kaXpk.png)](https://i.stack.imgur.com/kaXpk.png)
How to fix? | 2022/04/27 | [
"https://Stackoverflow.com/questions/72029157",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3728901/"
] | You should use [`map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) and [`filter()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter).
```js
const input = [
  {
    title: "QA",
    rows: [
      {
        risk: "P1",
        Title: "Server down",
      },
      {
        risk: "P3",
        Title: "Permission issue",
      },
    ],
  },
  {
    title: "Prod",
    rows: [
      {
        risk: "P5",
        Title: "Console log errors fix",
      },
      {
        risk: "P1",
        Title: "Server is in hung state",
      },
    ],
  },
];
const output = input.map((obj) => ({
  ...obj,
  rows: obj.rows.filter((row) => row.risk === "P1"),
}));
console.log(output);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
### Explanation
You want to return one object for each of the original object values, i.e. a `1:1` mapping, so your primary operation is `map()`.
Then you want to return the same object, except that the property `rows` should only contain the rows with `risk === "P1"`, so you need to `filter()` the rows and create a new object (you should treat objects as [immutable](https://en.wikipedia.org/wiki/Immutable_object)) with that updated `rows` property. | First, your original array needs an opening `[`. Instead of `Array#forEach`, use `Array#map`. `.forEach` does not return any result, but can allow you to modify the original array; `.map`, on the other hand, creates a new array.
```js
const input = [{ "title": "QA", "rows": [ { "risk": "P1", "Title": "Server down", }, { "risk": "P3", "Title": "Permission issue", } ] }, { "title": "Prod", "rows": [ { "risk": "P5", "Title": "Console log errors fix", }, { "risk": "P1", "Title": "Server is in hung state", } ] } ],
filter = "P1",
output = input.map(
({rows,...rest}) =>
({...rest, rows: rows.filter(({risk}) => risk === filter)})
);
console.log( output );
```
If your aim was to modify the original array, however, then make the following modification to your original code:
```js
const input = [{ "title": "QA", "rows": [ { "risk": "P1", "Title": "Server down", }, { "risk": "P3", "Title": "Permission issue", } ] }, { "title": "Prod", "rows": [ { "risk": "P5", "Title": "Console log errors fix", }, { "risk": "P1", "Title": "Server is in hung state", } ] } ],
filter = "P1";
input.forEach((element,index) => {
input[index] = {...element, rows: element.rows.filter( x => x.risk === filter )}
});
console.log( input );
``` |
40,427,547 | I am looking for a conditional statement in python to look for certain information in a specified column and put the results in a new column.
Here is an example of my dataset:
```
OBJECTID CODE_LITH
1 M4,BO
2 M4,BO
3 M4,BO
4 M1,HP-M7,HP-M1
```
and what I want as results:
```
OBJECTID CODE_LITH M4 M1
1 M4,BO 1 0
2 M4,BO 1 0
3 M4,BO 1 0
4 M1,HP-M7,HP-M1 0 1
```
What I have done so far:
```
import pandas as pd
import numpy as np
lookup = ['M4']
df.loc[df['CODE_LITH'].str.isin(lookup),'M4'] = 1
df.loc[~df['CODE_LITH'].str.isin(lookup),'M4'] = 0
```
Since there are multiple variables per row in "CODE\_LITH", it seems the script is not able to find only "M4"; it can find "M4,BO" and puts 1 or 0 in the new column.
I have also tried:
```
if ('M4') in df['CODE_LITH']:
    df['M4'] = 0
else:
    df['M4'] = 1
```
With the same results.
Thanks for your help.
PS. The dataframe contains about 2.6 millions rows and I need to do this operation for 30-50 variables. | 2016/11/04 | [
"https://Stackoverflow.com/questions/40427547",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6146748/"
] | Use simple `[row][col]` access to your double pointer. It is more readable, and you avoid indexing errors like the one in your code.
```
#include<stdio.h>
#include<stdlib.h>

int main(void) {
    int **tab;
    int ligne;
    int col;
    printf("saisir le nbre de lignes volous\n");
    scanf("%d", &ligne);
    printf("saisir le nbre de colonnes volous\n");
    scanf("%d", &col);
    tab = malloc(ligne*sizeof(int*));
    if (tab != NULL)
    {
        int i, j;
        for (i=0 ; i < ligne; i++)
        {
            tab[i] = malloc(col*sizeof(int));
            if (tab[i] == NULL)
            {
                fprintf(stderr, "Malloc failed\n");
                return 1;
            }
        }
        int k=0;
        for (i = 0; i < ligne; i++) {
            for (j = 0; j < col; j++) {
                tab[i][j] = k++;
            }
        }
        for (i = 0; i < ligne; i++) {
            for (j = 0; j < col; j++) {
                printf("%d\t", tab[i][j]);
            }
            free(tab[i]);
            printf("\n");
        }
    }
    free(tab);
    return 0;
}
``` | ```
int main(void) {
    int ligne;
    int col;
    printf("saisir le nbre de lignes volous\n");
    scanf("%d", &ligne);
    printf("saisir le nbre de colonnes volous\n");
    scanf("%d", &col);
    int tableSize = ligne * (col*sizeof(int));
    int * table = (int*) malloc(tableSize);
    int i, j;
    for (i=0 ; i < ligne; i++) {
        for (j = 0; j < col; j++) {
            *(table + i*col + j) = 0; /* element (i,j) lives at offset i*col + j */
        }
    }
    for (i = 0; i < ligne; i++) {
        for (j = 0; j < col; j++) {
            printf("%d\t", *(table + i*col + j));
        }
        printf("\n");
    }
    free(table);
    return 0;
}
``` |
40,427,547 | I am looking for a conditional statement in python to look for certain information in a specified column and put the results in a new column.
Here is an example of my dataset:
```
OBJECTID CODE_LITH
1 M4,BO
2 M4,BO
3 M4,BO
4 M1,HP-M7,HP-M1
```
and what I want as results:
```
OBJECTID CODE_LITH M4 M1
1 M4,BO 1 0
2 M4,BO 1 0
3 M4,BO 1 0
4 M1,HP-M7,HP-M1 0 1
```
What I have done so far:
```
import pandas as pd
import numpy as np
lookup = ['M4']
df.loc[df['CODE_LITH'].str.isin(lookup),'M4'] = 1
df.loc[~df['CODE_LITH'].str.isin(lookup),'M4'] = 0
```
Since there are multiple variables per row in "CODE\_LITH", it seems the script is not able to find only "M4"; it can find "M4,BO" and puts 1 or 0 in the new column.
I have also tried:
```
if ('M4') in df['CODE_LITH']:
    df['M4'] = 0
else:
    df['M4'] = 1
```
With the same results.
Thanks for your help.
PS. The dataframe contains about 2.6 millions rows and I need to do this operation for 30-50 variables. | 2016/11/04 | [
"https://Stackoverflow.com/questions/40427547",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6146748/"
] | Use simple `[row][col]` access to your double pointer. It is more readable, and you avoid indexing errors like the one in your code.
```
#include<stdio.h>
#include<stdlib.h>

int main(void) {
    int **tab;
    int ligne;
    int col;
    printf("saisir le nbre de lignes volous\n");
    scanf("%d", &ligne);
    printf("saisir le nbre de colonnes volous\n");
    scanf("%d", &col);
    tab = malloc(ligne*sizeof(int*));
    if (tab != NULL)
    {
        int i, j;
        for (i=0 ; i < ligne; i++)
        {
            tab[i] = malloc(col*sizeof(int));
            if (tab[i] == NULL)
            {
                fprintf(stderr, "Malloc failed\n");
                return 1;
            }
        }
        int k=0;
        for (i = 0; i < ligne; i++) {
            for (j = 0; j < col; j++) {
                tab[i][j] = k++;
            }
        }
        for (i = 0; i < ligne; i++) {
            for (j = 0; j < col; j++) {
                printf("%d\t", tab[i][j]);
            }
            free(tab[i]);
            printf("\n");
        }
    }
    free(tab);
    return 0;
}
``` | Here, I made some changes and added some comments explaining them:
```
#include<stdio.h>
#include<stdlib.h>

int main(void) {
    int **tab = NULL;
    int ligne = 0;
    int col = 0;
    char buffer[128] = {0};
    printf("saisir le nbre de lignes volous\n");
    // to avoid leaving \n in buffer after you enter the first value
    // you should also check the return value of fgets
    // and quit the program if it returns NULL
    // in general it is good practice to check return values
    // of all run-time functions.
    if (fgets(buffer,sizeof(buffer),stdin)==NULL) {
        return 1;
    }
    ligne = atoi(buffer);
    printf("saisir le nbre de colonnes volous\n");
    if (fgets(buffer,sizeof(buffer),stdin) == NULL) {
        return 1;
    }
    col = atoi(buffer);
    tab = malloc(ligne*sizeof(int*)); // do not cast malloc
    int i, j;
    // use tab[i] and tab[i][j] syntax, it is easier to read
    for (i=0 ; i < ligne; i++) {
        tab[i] = malloc(col*sizeof(int));
    }
    for (i = 0; i < ligne; i++) {
        for (j = 0; j < col; j++) {
            tab[i][j] = 0;
        }
    }
    for (i = 0; i < ligne; i++) {
        for (j = 0; j < col; j++) {
            printf("%d\t", tab[i][j]);
        }
        printf("\n");
    }
    // before you free tab, you need to free all lines
    for (i=0 ; i < ligne; i++) {
        free(tab[i]);
    }
    free(tab);
    return 0;
}
``` |
40,427,547 | I am looking for a conditional statement in python to look for certain information in a specified column and put the results in a new column.
Here is an example of my dataset:
```
OBJECTID CODE_LITH
1 M4,BO
2 M4,BO
3 M4,BO
4 M1,HP-M7,HP-M1
```
and what I want as results:
```
OBJECTID CODE_LITH M4 M1
1 M4,BO 1 0
2 M4,BO 1 0
3 M4,BO 1 0
4 M1,HP-M7,HP-M1 0 1
```
What I have done so far:
```
import pandas as pd
import numpy as np
lookup = ['M4']
df.loc[df['CODE_LITH'].str.isin(lookup),'M4'] = 1
df.loc[~df['CODE_LITH'].str.isin(lookup),'M4'] = 0
```
Since there are multiple variables per row in "CODE\_LITH", it seems the script is not able to find only "M4"; it can find "M4,BO" and puts 1 or 0 in the new column.
I have also tried:
```
if ('M4') in df['CODE_LITH']:
    df['M4'] = 0
else:
    df['M4'] = 1
```
With the same results.
Thanks for your help.
PS. The dataframe contains about 2.6 millions rows and I need to do this operation for 30-50 variables. | 2016/11/04 | [
"https://Stackoverflow.com/questions/40427547",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6146748/"
] | Use simple `[row][col]` access to your double pointer. It is more readable, and you avoid indexing errors like the one in your code.
```
#include<stdio.h>
#include<stdlib.h>

int main(void) {
    int **tab;
    int ligne;
    int col;
    printf("saisir le nbre de lignes volous\n");
    scanf("%d", &ligne);
    printf("saisir le nbre de colonnes volous\n");
    scanf("%d", &col);
    tab = malloc(ligne*sizeof(int*));
    if (tab != NULL)
    {
        int i, j;
        for (i=0 ; i < ligne; i++)
        {
            tab[i] = malloc(col*sizeof(int));
            if (tab[i] == NULL)
            {
                fprintf(stderr, "Malloc failed\n");
                return 1;
            }
        }
        int k=0;
        for (i = 0; i < ligne; i++) {
            for (j = 0; j < col; j++) {
                tab[i][j] = k++;
            }
        }
        for (i = 0; i < ligne; i++) {
            for (j = 0; j < col; j++) {
                printf("%d\t", tab[i][j]);
            }
            free(tab[i]);
            printf("\n");
        }
    }
    free(tab);
    return 0;
}
``` | As you have allocated your arrays (the one-dimensional parts), your array can be addressed as `tab[i][j]`, and never as you do in
```
for (i = 0; i < ligne; i++) {
    for (j = 0; j < col; j++) {
        **(tab + i+ j) = 0; /* <--- this is an error */
    }
}
```
As you see, `tab + i + j` is a pointer offset by `i` (the `ligne` number) plus `j` (the `col` number), but the two offsets don't measure the same thing (a column step is one cell, while a row step is one whole line). You'd better write `tab[i][j]`, as `tab[i]` is a pointer (allocated with `malloc(3)`) that points to a single-dimensional array (and the different pointers `tab[0]`, `tab[1]`, ... `tab[n]` don't have to be correlated with each other, as they come from distinct `malloc()` calls). If you don't like the bracket notation, then you should write the equivalent
```
*(*(mat + i) + j) /* equivalent to mat[i][j] */
```
and never the notation you use in your code.
```
**(tab + i + j) /* equivalent to *tab[i + j] */
``` |
25,826,977 | I am currently taking a GIS programming class. The directions for using GDAL and ogr to manipulate the data are written for a Windows PC. I am currently working on a Mac. I am hoping to get some insight on how to translate the .bat code to .sh code. Thanks!!
Windows .bat code:
```
cd /d c:\data\PhiladelphiaBaseLayers
set ogr2ogrPath="c:\program files\QGIS Dufour\bin\ogr2ogr.exe"
for %%X in (*.shp) do %ogr2ogrPath% -skipfailures -clipsrc c:\data\PhiladelphiaBaseLayers\clipFeature\city_limits.shp c:\data\PhiladelphiaBaseLayers\clipped\%%X c:\data\PhiladelphiaBaseLayers\%%X
for %%X in (*.shp) do %ogr2ogrPath% -skipfailures -s_srs EPSG:4326 -t_srs EPSG:3857 c:\data\PhiladelphiaBaseLayers\clippedAndProjected\%%X c:\data\PhiladelphiaBaseLayers\clipped\%%X
```
My Mac .sh code:
```
cd ~/Desktop/PhiladelphiaBaseLayers
set ogr2ogrPath="/Applications/QGIS.app/Contents/Resources/python/plugins/processing/algs/gdal/ogr2ogr.py" \
for shpfile in *.shp
do $org2ogrPath$ -skipfailures -clipsrc \
~/Desktop/PhiladelphiaBaseLayers/clipFeature/city_limits.shp \
~/Desktop/PhiladelphiaBaseLayers/clipped/"shpfile2""shpfile" \
~/Desktop/PhiladelphiaBaseLayers/"shpfile2""shpfile"
for shpfile in *.shp
do $ogr2ogrPath$ -skipfailures -s_srs EPSG:4326 -t_srs EPSG:3857 \
~/Desktop/PhiladelphiaBaseLayers/clipped/"shpfile2""shpfile"
done
``` | 2014/09/13 | [
"https://Stackoverflow.com/questions/25826977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4038027/"
] | I think it's telling you that the `false` in your code will never be reached, because the `true` causes the first part of the expression to be returned.
You can simplify it to:
```dart
onClick.listen((e) => fonixMenu.hidden = !fonixMenu.hidden);
``` | I think what you actually wanted to do was
```dart
void main() {
  ....
  var menuToggle = querySelector('#lines')
    ..onClick.listen((e) => fonixMenu.hidden = fonixMenu.hidden == true ? false : fonixMenu.hidden = true);
  // note the 2nd `=`: the check must be a comparison (`==`), not an assignment
  ....
}
```
but Danny's solution is more elegant when you ensure that `fonixMenu.hidden` is never `null` because
```dart
var x = null;
var Y = !x
```
causes an exception:
```dart
type 'Null' is not a subtype of type 'bool' of 'boolean expression'.
```
A simple null-safe variant
```dart
var menuToggle = querySelector('#lines')
  ..onClick.listen((e) => fonixMenu.hidden = fonixMenu.hidden == true ? false : true);
``` |
23,827,284 | I'm new to programming in languages more suited to the web, but I have programmed in VBA for Excel.
What I would like to do is:
1. pass a list (in python) to a casper.js script.
2. Inside the casperjs script I would like to iterate over the python object (a list of search terms)
3. In the casper script I would like to query google for search terms
4. Once queried I would like to store the results of these queries in an array, which I concatenate together while iterating over the python object.
5. Then once I have searched for all the search-terms and found results I would like to return the RESULTS array to python, so I can further manipulate the data.
**QUESTION --> I'm not sure how to write the python function to pass an object to casper.**
**QUESTION --> I'm also not sure how to write the casper function to pass a JavaScript object back to python.**
Here is my python code.
```
import os
import subprocess
scriptType = 'casperScript.js'
APP_ROOT = os.path.dirname(os.path.realpath(__file__))
PHANTOM = r'\casperjs\bin\casperjs'  # raw string so the backslashes are kept literally
SCRIPT = os.path.join(APP_ROOT, 'test.js')
params = [PHANTOM, SCRIPT]
subprocess.check_output(params)
```
js CODE
```
var casper = require('casper').create();
casper.start('http://google.com/', function() {
  this.echo(this.getTitle());
});
casper.run();
``` | 2014/05/23 | [
"https://Stackoverflow.com/questions/23827284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3098818/"
] | ```
$("#show a").click(function(e) {
e.preventDefault();
$("#info, #hide").show();
$("#show").hide();
});
$("#hide a").click(function(e) {
e.preventDefault();
$("#info, #hide").hide();
$("#show").show();
});
``` | Use this to show/hide the "Details" div:
<http://api.jquery.com/toggle/>
Also, you could use just one span to display the "Show/Hide" link, changing the text accordingly when you click to toggle. |
23,827,284 | I'm new to programming in languages more suited to the web, but I have programmed in VBA for Excel.
What I would like to do is:
1. pass a list (in python) to a casper.js script.
2. Inside the casperjs script I would like to iterate over the python object (a list of search terms)
3. In the casper script I would like to query google for search terms
4. Once queried I would like to store the results of these queries in an array, which I concatenate together while iterating over the python object.
5. Then once I have searched for all the search-terms and found results I would like to return the RESULTS array to python, so I can further manipulate the data.
**QUESTION --> I'm not sure how to write the python function to pass an object to casper.**
**QUESTION --> I'm also not sure how to write the casper function to pass a JavaScript object back to python.**
Here is my python code.
```
import os
import subprocess
scriptType = 'casperScript.js'
APP_ROOT = os.path.dirname(os.path.realpath(__file__))
PHANTOM = r'\casperjs\bin\casperjs'  # raw string so the backslashes are kept literally
SCRIPT = os.path.join(APP_ROOT, 'test.js')
params = [PHANTOM, SCRIPT]
subprocess.check_output(params)
```
js CODE
```
var casper = require('casper').create();
casper.start('http://google.com/', function() {
  this.echo(this.getTitle());
});
casper.run();
``` | 2014/05/23 | [
"https://Stackoverflow.com/questions/23827284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3098818/"
] | You do not need two spans for toggling the content; using two spans for this makes the code more complicated. Try this:
**Html**
```
<span id="show" class="" style="text-decoration: underline">
  <a href="#" class="fill-div" style="width: 100px;">
    Show details
  </a>
</span>
<div id="info" style="display:none; margin-left:2em">
  <i>
    Details shown here
  </i>
</div>
```
**Js:**
```
$('#show').click(function() {
  $('#info').toggle();
  var visible = $('#info').is(":visible");
  if (visible)
    $('a', this).html('Hide details');
  else
    $('a', this).html('Show details');
});
```
**[Working Demo](http://jsfiddle.net/APA2S/1785/)**
**update:**
**[Demo for multiple](http://jsfiddle.net/APA2S/1786/)** | Use this to show/hide the "Details" div:
<http://api.jquery.com/toggle/>
Also, you could use just one span to display the "Show/Hide" link, changing the text accordingly when you click to toggle. |
23,827,284 | I'm new to programming in languages more suited to the web, but I have programmed in VBA for Excel.
What I would like to do is:
1. pass a list (in python) to a casper.js script.
2. Inside the casperjs script I would like to iterate over the python object (a list of search terms)
3. In the casper script I would like to query google for search terms
4. Once queried I would like to store the results of these queries in an array, which I concatenate together while iterating over the python object.
5. Then once I have searched for all the search-terms and found results I would like to return the RESULTS array to python, so I can further manipulate the data.
**QUESTION --> I'm not sure how to write the python function to pass an object to casper.**
**QUESTION --> I'm also not sure how to write the casper function to pass a JavaScript object back to python.**
Here is my python code.
```
import os
import subprocess
scriptType = 'casperScript.js'
APP_ROOT = os.path.dirname(os.path.realpath(__file__))
PHANTOM = r'\casperjs\bin\casperjs'  # raw string so the backslashes are kept literally
SCRIPT = os.path.join(APP_ROOT, 'test.js')
params = [PHANTOM, SCRIPT]
subprocess.check_output(params)
```
js CODE
```
var casper = require('casper').create();
casper.start('http://google.com/', function() {
  this.echo(this.getTitle());
});
casper.run();
``` | 2014/05/23 | [
"https://Stackoverflow.com/questions/23827284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3098818/"
] | ```
$("#show a").click(function(e) {
e.preventDefault();
$("#info, #hide").show();
$("#show").hide();
});
$("#hide a").click(function(e) {
e.preventDefault();
$("#info, #hide").hide();
$("#show").show();
});
``` | A breeze with jQuery. Try my example on [**jsfiddle**](http://jsfiddle.net/H8gN8/).
```
// Gets executed when document and markup is fully loaded
$(document).ready(function() {
  // Bind a click event on the element with id 'toggle'
  $('#toggle').click(function() {
    // Change text accordingly
    if ($('#text').html() == 'Show details') {
      $('#text').html('Hide details');
    } else {
      $('#text').html('Show details');
    }
    // Show / Hide info bar
    $('#info').toggle();
  });
});
``` |
23,827,284 | I'm new to programming in languages more suited to the web, but I have programmed in VBA for Excel.
What I would like to do is:
1. pass a list (in python) to a casper.js script.
2. Inside the casperjs script I would like to iterate over the python object (a list of search terms)
3. In the casper script I would like to query google for search terms
4. Once queried I would like to store the results of these queries in an array, which I concatenate together while iterating over the python object.
5. Then once I have searched for all the search-terms and found results I would like to return the RESULTS array to python, so I can further manipulate the data.
**QUESTION --> I'm not sure how to write the python function to pass an object to casper.**
**QUESTION --> I'm also not sure how to write the casper function to pass a JavaScript object back to python.**
Here is my python code.
```
import os
import subprocess
scriptType = 'casperScript.js'
APP_ROOT = os.path.dirname(os.path.realpath(__file__))
PHANTOM = r'\casperjs\bin\casperjs'  # raw string so the backslashes are kept literally
SCRIPT = os.path.join(APP_ROOT, 'test.js')
params = [PHANTOM, SCRIPT]
subprocess.check_output(params)
```
js CODE
```
var casper = require('casper').create();
casper.start('http://google.com/', function() {
  this.echo(this.getTitle());
});
casper.run();
``` | 2014/05/23 | [
"https://Stackoverflow.com/questions/23827284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3098818/"
] | You do not need two spans for toggling the content; using two spans for this makes the code more complicated. Try this:
**Html**
```
<span id="show" class="" style="text-decoration: underline">
  <a href="#" class="fill-div" style="width: 100px;">
    Show details
  </a>
</span>
<div id="info" style="display:none; margin-left:2em">
  <i>
    Details shown here
  </i>
</div>
```
**Js:**
```
$('#show').click(function() {
  $('#info').toggle();
  var visible = $('#info').is(":visible");
  if (visible)
    $('a', this).html('Hide details');
  else
    $('a', this).html('Show details');
});
```
**[Working Demo](http://jsfiddle.net/APA2S/1785/)**
**update:**
**[Demo for multiple](http://jsfiddle.net/APA2S/1786/)** | A breeze with jQuery. Try my example on [**jsfiddle**](http://jsfiddle.net/H8gN8/).
```
// Gets executed when document and markup is fully loaded
$(document).ready(function() {
  // Bind a click event on the element with id 'toggle'
  $('#toggle').click(function() {
    // Change text accordingly
    if ($('#text').html() == 'Show details') {
      $('#text').html('Hide details');
    } else {
      $('#text').html('Show details');
    }
    // Show / Hide info bar
    $('#info').toggle();
  });
});
``` |
23,827,284 | I'm new to programming in languages more suited to the web, but I have programmed in VBA for Excel.
What I would like to do is:
1. pass a list (in python) to a casper.js script.
2. Inside the casperjs script I would like to iterate over the python object (a list of search terms)
3. In the casper script I would like to query google for search terms
4. Once queried I would like to store the results of these queries in an array, which I concatenate together while iterating over the python object.
5. Then once I have searched for all the search-terms and found results I would like to return the RESULTS array to python, so I can further manipulate the data.
**QUESTION --> I'm not sure how to write the python function to pass an object to casper.**
**QUESTION --> I'm also not sure how to write the casper function to pass a JavaScript object back to python.**
Here is my python code.
```
import os
import subprocess
scriptType = 'casperScript.js'
APP_ROOT = os.path.dirname(os.path.realpath(__file__))
PHANTOM = r'\casperjs\bin\casperjs'  # raw string so the backslashes are kept literally
SCRIPT = os.path.join(APP_ROOT, 'test.js')
params = [PHANTOM, SCRIPT]
subprocess.check_output(params)
```
js CODE
```
var casper = require('casper').create();
casper.start('http://google.com/', function() {
  this.echo(this.getTitle());
});
casper.run();
``` | 2014/05/23 | [
"https://Stackoverflow.com/questions/23827284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3098818/"
] | You do not need two spans for toggling the content; using two spans for this makes the code more complicated. Try this:
**Html**
```
<span id="show" class="" style="text-decoration: underline">
  <a href="#" class="fill-div" style="width: 100px;">
    Show details
  </a>
</span>
<div id="info" style="display:none; margin-left:2em">
  <i>
    Details shown here
  </i>
</div>
```
**Js:**
```
$('#show').click(function() {
  $('#info').toggle();
  var visible = $('#info').is(":visible");
  if (visible)
    $('a', this).html('Hide details');
  else
    $('a', this).html('Show details');
});
```
**[Working Demo](http://jsfiddle.net/APA2S/1785/)**
**update:**
**[Demo for multiple](http://jsfiddle.net/APA2S/1786/)** | ```
$("#show a").click(function(e) {
e.preventDefault();
$("#info, #hide").show();
$("#show").hide();
});
$("#hide a").click(function(e) {
e.preventDefault();
$("#info, #hide").hide();
$("#show").show();
});
``` |
23,827,284 | I'm new to programming in languages more suited to the web, but I have programmed in VBA for Excel.
What I would like to do is:
1. pass a list (in python) to a casper.js script.
2. Inside the casperjs script I would like to iterate over the python object (a list of search terms)
3. In the casper script I would like to query google for search terms
4. Once queried I would like to store the results of these queries in an array, which I concatenate together while iterating over the python object.
5. Then once I have searched for all the search-terms and found results I would like to return the RESULTS array to python, so I can further manipulate the data.
**QUESTION --> I'm not sure how to write the python function to pass an object to casper.**
**QUESTION --> I'm also not sure how to write the casper function to pass a JavaScript object back to python.**
Here is my python code.
```
import os
import subprocess
scriptType = 'casperScript.js'
APP_ROOT = os.path.dirname(os.path.realpath(__file__))
PHANTOM = r'\casperjs\bin\casperjs'  # raw string so the backslashes are kept literally
SCRIPT = os.path.join(APP_ROOT, 'test.js')
params = [PHANTOM, SCRIPT]
subprocess.check_output(params)
```
js CODE
```
var casper = require('casper').create();
casper.start('http://google.com/', function() {
  this.echo(this.getTitle());
});
casper.run();
``` | 2014/05/23 | [
"https://Stackoverflow.com/questions/23827284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3098818/"
] | ```
$("#show a").click(function(e) {
e.preventDefault();
$("#info, #hide").show();
$("#show").hide();
});
$("#hide a").click(function(e) {
e.preventDefault();
$("#info, #hide").hide();
$("#show").show();
});
``` | You can do something like this:
<http://jsfiddle.net/Mp8p2/>
You don't need nearly as much HTML:
```
<span class="toggle">Show Details</span>
<div id="info">
  Details shown here
</div>
```
along with just `display:none` in the css and the following jQuery:
```
$(".toggle").click(function() {
$('#info').slideToggle();
if($(this).html() == "Show Details") {
$(this).empty().text("Hide Details");
}
else {
$(this).html("Show Details");
}
});
``` |
23,827,284 | I'm new to programming in languages more suited to the web, but I have programmed in VBA for Excel.
What I would like to do is:
1. pass a list (in python) to a casper.js script.
2. Inside the casperjs script I would like to iterate over the python object (a list of search terms)
3. In the casper script I would like to query google for search terms
4. Once queried I would like to store the results of these queries in an array, which I concatenate together while iterating over the python object.
5. Then once I have searched for all the search-terms and found results I would like to return the RESULTS array to python, so I can further manipulate the data.
**QUESTION --> I'm not sure how to write the python function to pass an object to casper.**
**QUESTION --> I'm also not sure how to write the casper function to pass a JavaScript object back to python.**
Here is my python code.
```
import os
import subprocess
scriptType = 'casperScript.js'
APP_ROOT = os.path.dirname(os.path.realpath(__file__))
PHANTOM = r'\casperjs\bin\casperjs'  # raw string so the backslashes are kept literally
SCRIPT = os.path.join(APP_ROOT, 'test.js')
params = [PHANTOM, SCRIPT]
subprocess.check_output(params)
```
js CODE
```
var casper = require('casper').create();
casper.start('http://google.com/', function() {
  this.echo(this.getTitle());
});
casper.run();
``` | 2014/05/23 | [
"https://Stackoverflow.com/questions/23827284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3098818/"
] | You do not need two spans for toggling the content; using two spans for this makes the code more complicated. Try this:
**Html**
```
<span id="show" class="" style="text-decoration: underline">
  <a href="#" class="fill-div" style="width: 100px;">
    Show details
  </a>
</span>
<div id="info" style="display:none; margin-left:2em">
  <i>
    Details shown here
  </i>
</div>
```
**Js:**
```
$('#show').click(function() {
  $('#info').toggle();
  var visible = $('#info').is(":visible");
  if (visible)
    $('a', this).html('Hide details');
  else
    $('a', this).html('Show details');
});
```
**[Working Demo](http://jsfiddle.net/APA2S/1785/)**
**update:**
**[Demo for multiple](http://jsfiddle.net/APA2S/1786/)** | You can do something like this:
<http://jsfiddle.net/Mp8p2/>
You don't need nearly as much HTML:
```
<span class="toggle">Show Details</span>
<div id="info">
  Details shown here
</div>
```
along with just `display:none` in the css and the following jQuery:
```
$(".toggle").click(function() {
$('#info').slideToggle();
if($(this).html() == "Show Details") {
$(this).empty().text("Hide Details");
}
else {
$(this).html("Show Details");
}
});
``` |
50,709,365 | I start with the following tabular data (let's say test results by version):
```
Item Result Version
0 TO OK V1
1 T1 NOK V1
2 T2 OK V1
3 T3 NOK V1
4 TO OK V2
5 T1 OK V2
6 T2 NOK V2
7 T3 NOK V2
```
```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:
```
V2
OK NOK
V1 OK T0 T2
NOK T1 T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.
I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues? | 2018/06/05 | [
"https://Stackoverflow.com/questions/50709365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9899968/"
] | I was having this issue with Cocoapods. The solution was to clean the build folder, re-install all pods, and then rebuild the app. The issue resolved itself that way. | In the project pane on the LHS, for your build products, don't select them in the list for Target membership in the RHS pane. |
50,709,365 | I start with the following tabular data (let's say test results by version):
```
Item Result Version
0 TO OK V1
1 T1 NOK V1
2 T2 OK V1
3 T3 NOK V1
4 TO OK V2
5 T1 OK V2
6 T2 NOK V2
7 T3 NOK V2
```
```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:
```
V2
OK NOK
V1 OK T0 T2
NOK T1 T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.
I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues? | 2018/06/05 | [
"https://Stackoverflow.com/questions/50709365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9899968/"
] | I was facing the same issue: below was the error
>
> Cycle in dependencies between targets 'Pods-MyAppName' and 'RxCocoa';
> building could produce unreliable results. This usually can be
> resolved by moving the target's Headers build phase before Compile
> Sources. Cycle path: Pods-MyAppName → RxCocoa → Pods-MyAppName
>
>
>
I solved it using the below steps:
1) Go to the target RxCocoa in the Pods-MyAppName project.
2) Go to Build Phases.
3) Drag the Headers phase and move it above the Compile Sources build phase.
This fixed my issue.
Hope it helps! | >
> `cd ios && rm -rf Pods && pod install && cd ..`
>
>
>
This worked for me. |
50,709,365 | I start with the following tabular data (let's say test results by version):
```
Item Result Version
0 TO OK V1
1 T1 NOK V1
2 T2 OK V1
3 T3 NOK V1
4 TO OK V2
5 T1 OK V2
6 T2 NOK V2
7 T3 NOK V2
```
```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:
```
V2
OK NOK
V1 OK T0 T2
NOK T1 T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.
I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues? | 2018/06/05 | [
"https://Stackoverflow.com/questions/50709365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9899968/"
] | **Only for Expo**
1. `expo prebuild --clean`
2. `cd ios`
3. `pod install`
4. `yarn ios` | In the project pane on the LHS, for your build products, don't select them in the list for Target membership in the RHS pane. |
50,709,365 | I start with the following tabular data (let's say test results by version):
```
Item Result Version
0 TO OK V1
1 T1 NOK V1
2 T2 OK V1
3 T3 NOK V1
4 TO OK V2
5 T1 OK V2
6 T2 NOK V2
7 T3 NOK V2
```
```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:
```
V2
OK NOK
V1 OK T0 T2
NOK T1 T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.
I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues? | 2018/06/05 | [
"https://Stackoverflow.com/questions/50709365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9899968/"
] | In fact, you only need to pay attention to Xcode's prompt `This usually can be resolved by moving the target's Headers build phase before Compile Sources`, and then do exactly that.
When I encountered this problem, Xcode prompted me:
```rb
:-1: Cycle inside XXXX; building could produce unreliable results. This usually can be resolved by moving the target's Headers build phase before Compile Sources.
Cycle details:
→ Target 'XXXX': LinkStoryboards
○ Target 'XXXX: Ditto Path/XXXX-Swift.h /Path/XXXX-Swift.h
○ Target 'XXXX has compile command for Swift source files
○ That command depends on command in Target 'XXXX: script phase “Run Script”
```
I only did one thing and solved the problem perfectly:
Select the `Target`, then under `Build Phases` move the `Run Script` phase to the front of `Compile Sources`.
Run: it compiled successfully.
The principle is simple: just change the compilation order.
[![image 1](https://i.stack.imgur.com/RCyTG.jpg)](https://i.stack.imgur.com/RCyTG.jpg)
Xcode 10.2 &
Swift 5 | Had this problem for a while as well. Added the following block at the end of the podfile (this will also get rid of some warnings):
```
post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.deployment_target = ios_version
    target.build_configurations.each do |config|
      config.build_settings['IOS_DEPLOYMENT_TARGET'] = 'iOS ' + ios_version
    end
  end
end
```
I also moved `use_frameworks!` to the root of the `Podfile`. |
50,709,365 | I start with the following tabular data (let's say test results by version):
```
Item Result Version
0 TO OK V1
1 T1 NOK V1
2 T2 OK V1
3 T3 NOK V1
4 TO OK V2
5 T1 OK V2
6 T2 NOK V2
7 T3 NOK V2
```
```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:
```
V2
OK NOK
V1 OK T0 T2
NOK T1 T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.
I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues? | 2018/06/05 | [
"https://Stackoverflow.com/questions/50709365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9899968/"
] | I was facing the same issue: below was the error
>
> Cycle in dependencies between targets 'Pods-MyAppName' and 'RxCocoa';
> building could produce unreliable results. This usually can be
> resolved by moving the target's Headers build phase before Compile
> Sources. Cycle path: Pods-MyAppName → RxCocoa → Pods-MyAppName
>
>
>
I solved it using the below steps:
1) Go to the target RxCocoa in the Pods-MyAppName project.
2) Go to Build Phases.
3) Drag the Headers phase and move it above the Compile Sources build phase.
This fixed my issue.
Hope it helps! | The best temporary fix I've found (until you resolve the root problem) is to make a simple change to the source code (even something as trivial as adding a new line) and then try again. I find that making even a whitespace change is enough to allow it to build again. However, this is definitely temporary, as it will likely happen again after a few tries.
This is better than cleaning the build folder (which I found to also be temporary) since it's much quicker to make a whitespace change and rebuild, than to clean the build folder and rebuild (especially on larger projects). |
50,709,365 | I start with the following tabular data (let's say test results by version):
```
Item Result Version
0 TO OK V1
1 T1 NOK V1
2 T2 OK V1
3 T3 NOK V1
4 TO OK V2
5 T1 OK V2
6 T2 NOK V2
7 T3 NOK V2
```
```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:
```
V2
OK NOK
V1 OK T0 T2
NOK T1 T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.
I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues? | 2018/06/05 | [
"https://Stackoverflow.com/questions/50709365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9899968/"
] | I had a similar issue with a mixed interaction between **Swift, Objective-C and CoreData**: in my project (written in Swift) I made use of Core Data's autogenerated Swift classes as well.
But at one point I needed an Objective-C class with public properties (defined in its header counterpart) referring to the Core Data entities.
```
#import "ProjectName-Swift.h" // this is to import the swift entities into ObjC
@interface myObjCClass : NSObject
@property (nonatomic) MyCoreDataClass*myEntity;
@end
```
As soon as I changed the Core Data model, Xcode tried to rebuild the classes and I ran into the indicated cycle build error.
After an initial moment of despair, as I did not have any compile header phases in my project to change the order of, I found out that the solution was quite simple:
In `myObjCClass.h` I removed the shared Swift header import statement and replaced it with a `@class` directive:
```
@class MyCoreDataClass; // tell the compiler I will import the class definition somewhere else
// the rest stays the same
@interface myObjCClass : NSObject
@property (nonatomic) MyCoreDataClass*myEntity;
@end
```
and I moved the `#import "ProjectName-Swift.h"` statement into the `myObjCClass.m` class definition file.
```
#import "myObjCClass.h"
#import "ProjectName-Swift.h"
@implementation myObjCClass
@end
```
And it built, no worries. | I was facing the same issue: below was the error
>
> Cycle in dependencies between targets 'Pods-MyAppName' and 'RxCocoa';
> building could produce unreliable results. This usually can be
> resolved by moving the target's Headers build phase before Compile
> Sources. Cycle path: Pods-MyAppName → RxCocoa → Pods-MyAppName
>
>
>
I solved it using the below steps:
1) Go to the target RxCocoa in the Pods-MyAppName project.
2) Go to Build Phases.
3) Drag the Headers phase and move it above the Compile Sources build phase.
This fixed my issue.
Hope it helps! |
50,709,365 | I start with the following tabular data (let's say test results by version):
```
Item Result Version
0 TO OK V1
1 T1 NOK V1
2 T2 OK V1
3 T3 NOK V1
4 TO OK V2
5 T1 OK V2
6 T2 NOK V2
7 T3 NOK V2
```
```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:
```
V2
OK NOK
V1 OK T0 T2
NOK T1 T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.
I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues? | 2018/06/05 | [
"https://Stackoverflow.com/questions/50709365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9899968/"
] | I was finally able to resolve this by moving the `Embed App Extensions` script in `Build Phases` of the main Target to the last position. | I was facing the same issue: below was the error
>
> Cycle in dependencies between targets 'Pods-MyAppName' and 'RxCocoa';
> building could produce unreliable results. This usually can be
> resolved by moving the target's Headers build phase before Compile
> Sources. Cycle path: Pods-MyAppName → RxCocoa → Pods-MyAppName
>
>
>
I solved it using the below steps:
1) Go to the target RxCocoa in the Pods-MyAppName project.
2) Go to Build Phases.
3) Drag the Headers phase and move it above the Compile Sources build phase.
This fixed my issue.
Hope it helps! |
50,709,365 | I start with the following tabular data (let's say test results by version):
```
Item Result Version
0 TO OK V1
1 T1 NOK V1
2 T2 OK V1
3 T3 NOK V1
4 TO OK V2
5 T1 OK V2
6 T2 NOK V2
7 T3 NOK V2
```
```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:
```
V2
OK NOK
V1 OK T0 T2
NOK T1 T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.
I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues? | 2018/06/05 | [
"https://Stackoverflow.com/questions/50709365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9899968/"
] | Xcode 10's new build system detects dependency cycles in your build and provides diagnostics to help you resolve them. Fixing these dependency cycles improves the reliability of your build, so that the correct products are produced consistently (cycles are a possible cause of needing to delete your derived data). It also improves your incremental build times, as cycles in the build cause something in your build graph to always be out-of-date on each build, making the build re-do work unnecessarily every time you build.
There is documentation on resolving some common types of dependency cycles in Xcode Help: <https://help.apple.com/xcode/mac/current/#/dev621201fb0>
That said, this cycle diagnostic looks a little odd. It sounds like you were able to resolve it by re-arranging your build phases, but I don't think the diagnostic really explained the problem. If you wouldn't mind, a bug report about improving this diagnostic for this particular case would be very much appreciated. You can file one at <https://bugreport.apple.com>. Please include all details about your project that you think might be relevant; a sample project that reproduces the issue is ideal, but if you can't attach that, the diagnostic and some idea of the project structure is still helpful. | In the project pane on the LHS, for your build products, don't select them in the list for Target membership in the RHS pane. |
50,709,365 | I start with the following tabular data (let's say test results by version):
```
Item Result Version
0 TO OK V1
1 T1 NOK V1
2 T2 OK V1
3 T3 NOK V1
4 TO OK V2
5 T1 OK V2
6 T2 NOK V2
7 T3 NOK V2
```
```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:
```
V2
OK NOK
V1 OK T0 T2
NOK T1 T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.
I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues? | 2018/06/05 | [
"https://Stackoverflow.com/questions/50709365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9899968/"
] | For anybody having an issue with the Xcode 10 build system, follow these steps to fix it:
>
> 1. In Xcode, go to File->Project/Workspace settings.
> 2. Change the build system to Legacy Build system.
>
>
>
It will resolve the build issue with the new Xcode.
If you want to work with the new build system, then you can find the troubleshooting help [from this apple Xcode help page](https://help.apple.com/xcode/mac/current/#/dev621201fb0). | Same issue on `Version 10.0 beta 3 (10L201y)` and I wanted to have the *New Build System*.
The problem was that I had disabled `Enable Modules (C and Objective-C)`
in `Build Settings -> Apple Clang - Language - Modules`.
After enabling it (setting it to YES), the error went away.
50,709,365 | I start with the following tabular data (let's say test results by version):
```
Item Result Version
0 TO OK V1
1 T1 NOK V1
2 T2 OK V1
3 T3 NOK V1
4 TO OK V2
5 T1 OK V2
6 T2 NOK V2
7 T3 NOK V2
```
```
df=p.DataFrame({'Item': ['TO','T1','T2','T3','TO','T1','T2','T3'],
'Version': ['V1','V1','V1','V1','V2','V2','V2','V2'],
'Result' : ['OK','NOK','OK','NOK','OK','OK','NOK','NOK']})
```
and I try to build the following report:
```
V2
OK NOK
V1 OK T0 T2
NOK T1 T3
```
(`T0` above should be a Python **set** resulting from aggregation with the `set` function), and I would like to have MultiIndexes on both rows and columns.
I can't figure out how to put the same "Result" column in **both** the row and column MultiIndexes. Any clues? | 2018/06/05 | [
"https://Stackoverflow.com/questions/50709365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9899968/"
] | For anybody having an issue with the Xcode 10 build system, follow these steps to fix it:
>
> 1. In Xcode, go to File->Project/Workspace settings.
> 2. Change the build system to Legacy Build system.
>
>
>
It will resolve the build issue with the new Xcode.
If you want to work with the new build system, then you can find the troubleshooting help [from this apple Xcode help page](https://help.apple.com/xcode/mac/current/#/dev621201fb0). | Xcode 10.2.1/Unit Test Target. My unit test target is independent of the host target to improve building time. Solved it by unchecking `Find Implicit Dependencies` in the `Scheme` - `Build` options, as I specify all dependencies in `Build Settings` - `Compile Sources`. |
60,311,148 | I'm trying to pip install a package in an AWS Lambda function.
The method recommended by Amazon is to create a zipped deployment package that includes the dependencies and python function all together (as described in [AWS Lambda Deployment Package in Python](https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html)). However, this results in not being able to edit the Lambda function using inline code editing within the AWS Lambda GUI.
So instead, I would like to pip install the package within the AWS Lambda function itself. In AWS Lambda, the filesystem is read-only apart from the /tmp/ directory, so I am trying to pip install to the /tmp/ directory. The function is only called once daily, so I don't mind the few extra seconds required to re-install the package every time the function runs.
**My attempt**
```
def lambda_handler(event, context):
# pip install dependencies
print('begin lambda handler')
import subprocess
import sys
subprocess.call('pip install cryptography -t /tmp/ --no-cache-dir'.split())
from cryptography.fernet import Fernet
pwd_encrypted = b'gAAAAABeTcT0OXH96ib7TD5-sTII6jMfUXPhMpwWRCF0315rWp4C0yav1XAPIn7prfkkA4tltYiWFAJ22bwuaj0z1CKaGl8vTgNd695SDl25HnLwu1xTzaQ='
key = b'fP-7YR1hUeVW4KmFmly4JdgotD6qjR52g11RQms6Llo='
cipher_suite = Fernet(key)
result = cipher_suite.decrypt(pwd_encrypted).decode('utf-8')
print(result)
print('end lambda handler')
```
However, this results in the error
>
> [ERROR] ModuleNotFoundError: No module named 'cryptography'
>
>
>
I have also tried replacing the *subprocess* call with the following, as recommended in [this stackoverflow answer](https://stackoverflow.com/a/50255019/11918892)
```
cmd = sys.executable+' -m pip install cryptography -t dependencies --no-cache-dir'
subprocess.check_call(cmd.split())
```
However, this results in the error
>
> OSError: [Errno 30] Read-only file system: '/var/task/dependencies'
>
>
> | 2020/02/20 | [
"https://Stackoverflow.com/questions/60311148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11918892/"
] | I solved this with a one-line adjustment to the original attempt. You just need to add /tmp/ to sys.path so that Python knows to search /tmp/ for the module. All you need to do is add the line `sys.path.insert(1, '/tmp/')`.
**Solution**
```
import os
import sys
import subprocess
# pip install custom package to /tmp/ and add to path
subprocess.call('pip install cryptography -t /tmp/ --no-cache-dir'.split(), stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
sys.path.insert(1, '/tmp/')
from cryptography.fernet import Fernet
def lambda_handler(event, context):
# pip install dependencies
pwd_encrypted = b'gAAAAABeTcT0OXH96ib7TD5-sTII6jMfUXPhMpwWRCF0315rWp4C0yav1XAPIn7prfkkA4tltYiWFAJ22bwuaj0z1CKaGl8vTgNd695SDl25HnLwu1xTzaQ='
key = b'fP-7YR1hUeVW4KmFmly4JdgotD6qjR52g11RQms6Llo='
cipher_suite = Fernet(key)
result = cipher_suite.decrypt(pwd_encrypted).decode('utf-8')
print(result)
```
**Output**
>
> Hello stackoverflow!
>
>
>
Note - as @JohnRotenstein mentioned in the comments, the preferred method to add Python packages is to package dependencies in an [AWS Lambda Layer](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html). My solution just shows that it is possible to pip install packages directly in an AWS Lambda function. | For some reason subprocess.call() was returning a FileNotFound error when I was trying to `pip3.8 install <package> -t <install-directory>`. I solved this by using os.system() instead of subprocess.call(), and I specified the path of pip directly:
`os.system('/var/lang/bin/pip3.8 install <package> -t <install-directory>')`.
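A hedged sketch of how this fits into a handler (the package name `requests` and the `/tmp/` target here are illustrative assumptions, not taken from the answer):
```
import os
import sys

# /tmp/ is the only writable directory in the Lambda filesystem
os.system('/var/lang/bin/pip3.8 install requests -t /tmp/')
sys.path.insert(1, '/tmp/')  # make the freshly installed package importable

import requests  # substitute whatever package you actually need
``` |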
32,779,333 | I am trying to start learning about writing encryption algorithms, so while using python I am trying to manipulate data down to the binary level, so I can add bits to the end of the data as well as manipulate it to obscure the data.
I am not new to programming; I am actually a programmer, but I am relatively new to python, which is why I am struggling a bit.
Can anyone show me the best way to manipulate, in python, a string down to the binary level (or recommend how I should approach this)? I have looked at a number of questions:
[Convert string to binary in python](https://stackoverflow.com/questions/18815820/convert-string-to-binary-in-python)
[Manipulating binary data in Python](https://stackoverflow.com/questions/3059301/manipulating-binary-data-in-python)
[Convert binary to ASCII and vice versa](https://stackoverflow.com/questions/7396849/convert-binary-to-ascii-and-vice-versa-python)
But none of these are what I am looking for, and I do not know enough python to be able to pick out what I need. Can someone please assist me with details (if you use a function, please explain what it does, e.g. ord())? | 2015/09/25 | [
"https://Stackoverflow.com/questions/32779333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1779617/"
] | Set the color to UIColor.clearColor() | Use clear color for the scrollView background
```
self.scrollView.backgroundColor = UIColor.clearColor()
```
You don't need to set the background color for the view again once you have set the color with a pattern image. If you set the background color again, the pattern image will be removed. |
32,779,333 | I am trying to start learning about writing encryption algorithms, so while using python I am trying to manipulate data down to the binary level, so I can add bits to the end of the data as well as manipulate it to obscure the data.
I am not new to programming; I am actually a programmer, but I am relatively new to python, which is why I am struggling a bit.
Can anyone show me the best way to manipulate, in python, a string down to the binary level (or recommend how I should approach this)? I have looked at a number of questions:
[Convert string to binary in python](https://stackoverflow.com/questions/18815820/convert-string-to-binary-in-python)
[Manipulating binary data in Python](https://stackoverflow.com/questions/3059301/manipulating-binary-data-in-python)
[Convert binary to ASCII and vice versa](https://stackoverflow.com/questions/7396849/convert-binary-to-ascii-and-vice-versa-python)
But none of these are what I am looking for, and I do not know enough python to be able to pick out what I need. Can someone please assist me with details (if you use a function, please explain what it does, e.g. ord())? | 2015/09/25 | [
"https://Stackoverflow.com/questions/32779333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1779617/"
] | Set the color to UIColor.clearColor() | As a variant, you could use a `UIWebView` and set the transparency in the HTML code.
To implement a UIWebView with a transparent background all you need to do is:
1. Set the `UIWebView`'s backgroundColor property to `[UIColor clearColor]`.
2. Use the `UIWebView`'s content in the HTML.
3. Set the `UIWebView`'s opaque property to `NO`.
PS: extracted from [Using transparent background color in UIView by HTML code](https://stackoverflow.com/questions/5076213/using-transparent-background-color-in-uiview-by-html-code) |
32,779,333 | I am trying to start learning about writing encryption algorithms, so while using python I am trying to manipulate data down to the binary level, so I can add bits to the end of the data as well as manipulate it to obscure the data.
I am not new to programming; I am actually a programmer, but I am relatively new to python, which is why I am struggling a bit.
Can anyone show me the best way to manipulate, in python, a string down to the binary level (or recommend how I should approach this)? I have looked at a number of questions:
[Convert string to binary in python](https://stackoverflow.com/questions/18815820/convert-string-to-binary-in-python)
[Manipulating binary data in Python](https://stackoverflow.com/questions/3059301/manipulating-binary-data-in-python)
[Convert binary to ASCII and vice versa](https://stackoverflow.com/questions/7396849/convert-binary-to-ascii-and-vice-versa-python)
But none of these are what I am looking for, and I do not know enough python to be able to pick out what I need. Can someone please assist me with details (if you use a function, please explain what it does, e.g. ord())? | 2015/09/25 | [
"https://Stackoverflow.com/questions/32779333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1779617/"
] | Set the color to UIColor.clearColor() | I've faced the same problem (same tutorial); here's how I fixed it:
1. Add in the xibSetup method:
view.backgroundColor = UIColor(white: 0.9, alpha: 1.0) // or any color
2. Creation in a UIViewController:
let noDataView = NoDataUIView(frame: self.view.frame)
view.addSubview(noDataView) |
64,727,574 | I am new to python. I am writing code to count the frequency of numbers in a list.
However, I get a KeyError. How can I automatically check if a key does not exist and return a default value? My code is below:
```
arr = [1,1,2,3,2,1]
freq={}
for i in arr:
freq[i] += freq[i] + 1
``` | 2020/11/07 | [
"https://Stackoverflow.com/questions/64727574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14595676/"
] | Yes, you can leverage the `get` method of a dictionary. You can simply do:
```
arr=[1,1,2,3,2,1]
freq={}
for i in arr:
freq[i] = freq.get(i,0)+1
```
Please Google basic questions like this before asking on Stack Overflow. | You want the dictionary's `get` method.
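A minimal runnable sketch of that suggestion (it mirrors the `get`-based loop in the accepted answer above):
```
arr = [1, 1, 2, 3, 2, 1]
freq = {}
for i in arr:
    # get(i, 0) falls back to 0 when i is missing, avoiding the KeyError
    freq[i] = freq.get(i, 0) + 1
print(freq)  # {1: 3, 2: 2, 3: 1}
``` |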
64,727,574 | I am new to python. I am writing code to count the frequency of numbers in a list.
However, I get a KeyError. How can I automatically check if a key does not exist and return a default value? My code is below:
```
arr = [1,1,2,3,2,1]
freq={}
for i in arr:
freq[i] += freq[i] + 1
``` | 2020/11/07 | [
"https://Stackoverflow.com/questions/64727574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14595676/"
] | Yes, you can leverage the `get` method of a dictionary. You can simply do:
```
arr=[1,1,2,3,2,1]
freq={}
for i in arr:
freq[i] = freq.get(i,0)+1
```
Please Google basic questions like this before asking on Stack Overflow. | In the python library, there is a `defaultdict` which can help you here:
[Documentation for the defaultdict](https://docs.python.org/3.8/library/collections.html#collections.defaultdict)
```
import collections
arr = [1, 1, 2, 3, 2, 1]
freq = collections.defaultdict(int) # The `int` makes the default value 0
for i in arr:
freq[i] += freq[i] + 1
```
When you try to access a `defaultdict` with a key that is not yet present, it calls the function that you supplied when creating it; calling `int()` without any arguments gives you zero.
Sidenote: it seems that you want to count the occurrences of numbers in your array, in which case you'd want to change the update to:
```
for i in arr:
freq[i] += 1
```
Also, the [Counter](https://docs.python.org/3.8/library/collections.html#collections.Counter) object might be of interest for you.
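A quick hedged sketch of that alternative (standard library only; the expected output is shown as a comment):
```
from collections import Counter

arr = [1, 1, 2, 3, 2, 1]
freq = Counter(arr)  # counts each distinct element in one pass
print(freq)  # Counter({1: 3, 2: 2, 3: 1})
``` |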
12,920,856 | I have a text file that consists of millions of vectors like this:
```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important. In the above file, I counted the first vector 3 times. The same pattern is repeated again after the 5th line; their counts should be kept separate.
I want to count how many times each vector line is repeated and add that count to the output text file like this:
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and the last 2 lines are the same. So the output should look like this:
```
V1 count 3
V3
V4
V1 count 2
```
Although the first and last patterns are the same, they are counted differently because they appear in a different order.
I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help | 2012/10/16 | [
"https://Stackoverflow.com/questions/12920856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] | If order doesn't matter
=======================
If you really want to do this in python (as opposed to the `sort filepath | uniq -c` as Jean suggests), then I would do this:
```
import collections
with open('path/to/file') as f:
counts = collections.Counter(f)
outfile = open('path/to/outfile', 'w')
for line,occ in counts.iteritems():
outfile.write("%s repeat %d\n" %(line.strip(), occ))
outfile.close()
```
If order matters
================
If order matters (if entry `i` appears before entry `j` in the input file, then entry `i` must appear before entry `j` in the output file), then what you need is a modified run-length encoder. Note, however, that if you have the following input file:
```
v1
v1
v1
v2
v2
v3
v1
```
then, your output file will look like this:
```
v1 repeat 3
v2 repeat 2
v3
v1
```

```
with open('infilepath') as infile:
outfile = open('outfilepath', 'w')
curr = infile.readline().strip()
count = 1
for line in infile:
if line.strip() == curr:
count += 1
else:
outfile.write(curr)
if count-1:
outfile.write(" repeat %d\n" %count)
else:
outfile.write("\n")
curr = line.strip()
count = 1
outfile.write(curr)
if count-1:
outfile.write(" repeat %d\n" %count)
outfile.close()
```
Of course, `uniq -c infilepath > outfilepath` will do the same
Hope this helps | If it all fits into memory, then you could do:
```
from collections import Counter
with open('vectors') as fin:
counts = Counter(fin)
```
Or, if large, then you can use sqlite3:
```
import sqlite3
db = sqlite3.connect('/some/path/some/file.db')
db.execute('create table vector (vector)')
with open('vectors.txt') as fin:
db.executemany('insert into vector values(?)', fin)
db.commit()
for row in db.execute('select vector, count(*) as freq from vector group by vector'):
print row # do something suitable here
```
If the vectors are always contiguous:
```
from itertools import groupby
with open('vector') as fin:
for vector, vals in groupby(fin):
print '{} {}repeat'.format(vector, sum(1 for _ in vals))
``` |
12,920,856 | I have a text file that consists of millions of vectors like this:
```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important. In the above file, I counted the first vector 3 times. The same pattern is repeated again after the 5th line; their counts should be kept separate.
I want to count how many times each vector line is repeated and add that count to the output text file like this:
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and the last 2 lines are the same. So the output should look like this:
```
V1 count 3
V3
V4
V1 count 2
```
Although the first and last patterns are the same, they are counted differently because they appear in a different order.
I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help | 2012/10/16 | [
"https://Stackoverflow.com/questions/12920856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] | Just run this at the command prompt:
```
sort text.txt | uniq -c > output.txt
```
Remove `sort` if you want to preserve ordering (this will only count consecutive unique lines):
```
uniq -c text.txt > output.txt
```
Or this will give the required precise output (solution suggested by ikegami):
```
uniq -c text.txt \
| perl -ple's/^\s*(\d+) //; $_ .= " repeat$1" if $1 > 1;' \
> output.txt
``` | Assuming python 2.7, a less memory-intensive solution
```
from collections import Counter
with open("some_file.txt") as f:
cnt = Counter(f)
print cnt
``` |
12,920,856 | I have a text file that consists of millions of vectors like this:
```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important. In the above file, I counted the first vector 3 times. The same pattern is repeated again after the 5th line; their counts should be kept separate.
I want to count how many times each vector line is repeated and add that count to the output text file like this:
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and the last 2 lines are the same. So the output should look like this:
```
V1 count 3
V3
V4
V1 count 2
```
Although the first and last patterns are the same, they are counted differently because they appear in a different order.
I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help | 2012/10/16 | [
"https://Stackoverflow.com/questions/12920856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] | If order doesn't matter
=======================
If you really want to do this in python (as opposed to the `sort filepath | uniq -c` as Jean suggests), then I would do this:
```
import collections
with open('path/to/file') as f:
counts = collections.Counter(f)
outfile = open('path/to/outfile', 'w')
for line,occ in counts.iteritems():
outfile.write("%s repeat %d\n" %(line.strip(), occ))
outfile.close()
```
If order matters
================
If order matters (if entry `i` appears before entry `j` in the input file, then entry `i` must appear before entry `j` in the output file), then what you need is a modified run-length encoder. Note, however, that if you have the following input file:
```
v1
v1
v1
v2
v2
v3
v1
```
then, your output file will look like this:
```
v1 repeat 3
v2 repeat 2
v3
v1
```

```
with open('infilepath') as infile:
outfile = open('outfilepath', 'w')
curr = infile.readline().strip()
count = 1
for line in infile:
if line.strip() == curr:
count += 1
else:
outfile.write(curr)
if count-1:
outfile.write(" repeat %d\n" %count)
else:
outfile.write("\n")
curr = line.strip()
count = 1
outfile.write(curr)
if count-1:
outfile.write(" repeat %d\n" %count)
outfile.close()
```
Of course, `uniq -c infilepath > outfilepath` will do the same
Hope this helps | Assuming python 2.7, a less memory-intensive solution
```
from collections import Counter
with open("some_file.txt") as f:
cnt = Counter(f)
print cnt
``` |
12,920,856 | I have a text file that consists of millions of vectors like this:
```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important. In the above file, I counted the first vector 3 times. The same pattern is repeated again after the 5th line; their counts should be kept separate.
I want to count how many times each vector line is repeated and add that count to the output text file like this:
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and the last 2 lines are the same. So the output should look like this:
```
V1 count 3
V3
V4
V1 count 2
```
Although the first and last patterns are the same, they are counted differently because they appear in a different order.
I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help | 2012/10/16 | [
"https://Stackoverflow.com/questions/12920856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] | ```
perl -nle'
if ($c && $_ ne $last) {
print $c > 1 ? "$last repeat$c" : "$last";
$c = 0;
}
$last = $_;
++$c;
END {
if ($c) {
print $c > 1 ? "$last repeat$c" : "$last";
}
}
' file
```
(You can put that all on one line, or leave it as is.)
Output:
```
V1 repeat3
V3
V4
V1 repeat2
```
This solution is avg case O(N) CPU and O(1) memory. It and inspectorG4dget's are the only two of the seven existing answers that give the output in the format and order you requested. | Assuming python 2.7, a less memory-intensive solution
```
from collections import Counter
with open("some_file.txt") as f:
cnt = Counter(f)
print cnt
``` |
12,920,856 | I have a text file that consists of millions of vectors like this:
```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important. In the above file, I counted the first vector 3 times. The same pattern is repeated again after the 5th line; their counts should be kept separate.
I want to count how many times each vector line is repeated and add that count to the output text file like this:
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and the last 2 lines are the same. So the output should look like this:
```
V1 count 3
V3
V4
V1 count 2
```
Although the first and last patterns are the same, they are counted differently because they appear in a different order.
I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help | 2012/10/16 | [
"https://Stackoverflow.com/questions/12920856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] | Just run this at the command prompt:
```
sort text.txt | uniq -c > output.txt
```
Remove `sort` if you want to preserve ordering (this will only count consecutive unique lines):
```
uniq -c text.txt > output.txt
```
Or this will give the required precise output (solution suggested by ikegami):
```
uniq -c text.txt \
| perl -ple's/^\s*(\d+) //; $_ .= " repeat$1" if $1 > 1;' \
> output.txt
``` | ```
vectors = {}
for vector in open("vect.txt").readlines():
vectors[vector] = vectors.setdefault(vector, 0) + 1
print vectors
``` |
12,920,856 | I have a text file that consists of millions of vectors like this:
```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important. In the above file, I counted the first vector 3 times. The same pattern is repeated again after the 5th line; their counts should be kept separate.
I want to count how many times each vector line is repeated and add that count to the output text file like this:
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and the last 2 lines are the same. So the output should look like this:
```
V1 count 3
V3
V4
V1 count 2
```
Although the first and last patterns are the same, they are counted differently because they appear in a different order.
I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help | 2012/10/16 | [
"https://Stackoverflow.com/questions/12920856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] | If order doesn't matter
=======================
If you really want to do this in python (as opposed to the `sort filepath | uniq -c` as Jean suggests), then I would do this:
```
import collections
with open('path/to/file') as f:
counts = collections.Counter(f)
outfile = open('path/to/outfile', 'w')
for line,occ in counts.iteritems():
outfile.write("%s repeat %d\n" %(line.strip(), occ))
outfile.close()
```
If order matters
================
If order matters (if entry `i` appears before entry `j` in the input file, then entry `i` must appear before entry `j` in the output file), then what you need is a modified run-length encoder. Note, however, that if you have the following input file:
```
v1
v1
v1
v2
v2
v3
v1
```
then, your output file will look like this:
```
v1 repeat 3
v2 repeat 2
v3
v1
```

```
with open('infilepath') as infile:
outfile = open('outfilepath', 'w')
curr = infile.readline().strip()
count = 1
for line in infile:
if line.strip() == curr:
count += 1
else:
outfile.write(curr)
if count-1:
outfile.write(" repeat %d\n" %count)
else:
outfile.write("\n")
curr = line.strip()
count = 1
outfile.write(curr)
if count-1:
outfile.write(" repeat %d\n" %count)
outfile.close()
```
Of course, `uniq -c infilepath > outfilepath` will do the same
Hope this helps | I don't think you can do this in less than O(n^2) ... (I could be wrong)
One way would be (in python):
```
with open("some_file_with_vectors") as f:
data = f.read()
counts = dict([(line,data.count(line)) for line in data.splitlines()])
print counts
#if you want to save to a file
with open("output.txt", "w") as f:
for key in counts:
print >> f, key ,"=",counts[key]
``` |
12,920,856 | I have a text file that consists of millions of vectors like this:
```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important. In the above file, I counted the first vector 3 times. The same pattern is repeated again after the 5th line; their counts should be kept separate.
I want to count how many times each vector line is repeated and add that count to the output text file like this:
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and the last 2 lines are the same. So the output should look like this:
```
V1 count 3
V3
V4
V1 count 2
```
Although the first and last patterns are the same, they are counted differently because they appear in a different order.
I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help | 2012/10/16 | [
"https://Stackoverflow.com/questions/12920856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] | ```
perl -nle'
if ($c && $_ ne $last) {
print $c > 1 ? "$last repeat$c" : "$last";
$c = 0;
}
$last = $_;
++$c;
END {
if ($c) {
print $c > 1 ? "$last repeat$c" : "$last";
}
}
' file
```
(You can put that all on one line, or leave it as is.)
Output:
```
V1 repeat3
V3
V4
V1 repeat2
```
This solution is avg case O(N) CPU and O(1) memory. It and inspectorG4dget's are the only two of the seven existing answers that give the output in the format and order you requested. | If it all fits into memory, then you could do:
```
from collections import Counter
with open('vectors') as fin:
counts = Counter(fin)
```
Or, if large, then you can use sqlite3:
```
import sqlite3
db = sqlite3.connect('/some/path/some/file.db')
db.execute('create table vector (vector)')
with open('vectors.txt') as fin:
db.executemany('insert into vector values(?)', fin)
db.commit()
for row in db.execute('select vector, count(*) as freq from vector group by vector'):
print row # do something suitable here
```
If the vectors are always contiguous:
```
from itertools import groupby
with open('vector') as fin:
for vector, vals in groupby(fin):
print '{} {}repeat'.format(vector, sum(1 for _ in vals))
``` |
12,920,856 | I have a text file that consists of millions of vectors like this:
```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important. In the above file, I counted the first vector 3 times. The same pattern is repeated again after the 5th line; their counts should be kept separate.
I want to count how many times each vector line is repeated and add that count to the output text file like this:
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and the last 2 lines are the same. So the output should look like this:
```
V1 count 3
V3
V4
V1 count 2
```
Although the first and last patterns are the same, they are counted differently because they appear in a different order.
I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help | 2012/10/16 | [
"https://Stackoverflow.com/questions/12920856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] | ```
perl -nle'
if ($c && $_ ne $last) {
print $c > 1 ? "$last repeat$c" : "$last";
$c = 0;
}
$last = $_;
++$c;
END {
if ($c) {
print $c > 1 ? "$last repeat$c" : "$last";
}
}
' file
```
(You can put that all on one line, or leave it as is.)
Output:
```
V1 repeat3
V3
V4
V1 repeat2
```
This solution is avg case O(N) CPU and O(1) memory. It and inspectorG4dget's are the only two of the seven existing answers that give the output in the format and order you requested. | ```
vectors = {}
for vector in open("vect.txt").readlines():
vectors[vector] = vectors.setdefault(vector, 0) + 1
print vectors
``` |
12,920,856 | I have a text file that consists of millions of vectors like this:
```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important. In the above file, I counted the first vector 3 times. The same pattern is repeated again after the 5th line; their counts should be kept separate.
I want to count how many times each vector line is repeated and add that count to the output text file like this:
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and the last 2 lines are the same. So the output should look like this:
```
V1 count 3
V3
V4
V1 count 2
```
Although the first and last patterns are the same, they are counted differently because they appear in a different order.
I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help | 2012/10/16 | [
"https://Stackoverflow.com/questions/12920856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] | If order doesn't matter
=======================
If you really want to do this in python (as opposed to the `sort filepath | uniq -c` as Jean suggests), then I would do this:
```
import collections
with open('path/to/file') as f:
counts = collections.Counter(f)
outfile = open('path/to/outfile', 'w')
for line,occ in counts.iteritems():
outfile.write("%s repeat %d\n" %(line.strip(), occ))
outfile.close()
```
If order matters
================
If order matters (if entry `i` appears before entry `j` in the input file, then entry `i` must appear before entry `j` in the output file), then what you need is a modified run-length encoder. Note, however, that if you have the following input file:
```
v1
v1
v1
v2
v2
v3
v1
```
then, your output file will look like this:
```
v1 repeat 3
v2 repeat 2
v3
v1
```

```
with open('infilepath') as infile:
outfile = open('outfilepath', 'w')
curr = infile.readline().strip()
count = 1
for line in infile:
if line.strip() == curr:
count += 1
else:
outfile.write(curr)
if count-1:
outfile.write(" repeat %d\n" %count)
else:
outfile.write("\n")
curr = line.strip()
count = 1
outfile.write(curr)
if count-1:
outfile.write(" repeat %d\n" %count)
outfile.close()
```
Of course, `uniq -c infilepath > outfilepath` will do the same
Hope this helps | ```
vectors = {}
for vector in open("vect.txt").readlines():
vectors[vector] = vectors.setdefault(vector, 0) + 1
print vectors
``` |
12,920,856 | I have a text file that consists of millions of vectors like this:
```
V1
V1
V1
V3
V4
V1
V1
```
Note: ORDER is important. In the above file, I counted the first vector 3 times. The same pattern is repeated again after the 5th line; their counts should be kept separate.
I want to count how many times each vector line is repeated and add that count to the output text file like this:
In the above 7 vectors, the first 3 lines are the same, the next 2 are different, and the last 2 lines are the same. So the output should look like this:
```
V1 count 3
V3
V4
V1 count 2
```
Although the first and last patterns are the same, they are counted differently because they appear in a different order.
I can use `python` or `perl`. I know the maximum length of the rows (vectors), but how do I compare each row (vector) with the others? Please help | 2012/10/16 | [
"https://Stackoverflow.com/questions/12920856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] | Just run this at the command prompt:
```
sort text.txt | uniq -c > output.txt
```
Remove `sort` if you want to preserve ordering (this will only count consecutive unique lines):
```
uniq -c text.txt > output.txt
```
Or this will give the required precise output (solution suggested by ikegami):
```
uniq -c text.txt \
| perl -ple's/^\s*(\d+) //; $_ .= " repeat$1" if $1 > 1;' \
> output.txt
``` | I don't think you can do this in less than O(n^2) ... (I could be wrong)
One way would be (in python):
```
with open("some_file_with_vectors") as f:
data = f.read()
counts = dict([(line,data.count(line)) for line in data.splitlines()])
print counts
#if you want to save to a file
with open("output.txt", "w") as f:
for key in counts:
print >> f, key ,"=",counts[key]
``` |
51,341,157 | ```
CREATE OR REPLACE FUNCTION CLEAN_STRING(in_str varchar) returns varchar
AS
$$
def strip_slashes(in_str):
while in_str.endswith("\\") or in_str.endswith("/"):
in_str = in_str[:-1]
in_str = in_str.replace("\\", "/")
return in_str
clean_str = strip_slashes(in_str)
return clean_str
$$
LANGUAGE plpythonu ;
```
This gives me an `IndentationError`. However, if I remove the backslashes, it works fine. How can I handle backslashes inside plpythonu? | 2018/07/14 | [
"https://Stackoverflow.com/questions/51341157",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3601228/"
] | Your functions are asynchronous and asynchronous functions need some way of indicating when they are finished. Typically this is done with a callback or promise. Without that there is no way to know when they are finished. If they returned a promise, you might do something like this:
```js
var fun1 = function() {
console.log('Started fun1');
return new Promise(resolve => {
setTimeout(() => {
console.log('Finished fun1');
resolve(true)
}, 2000)
})
}
var fun2 = function() {
console.log('Started fun2');
return new Promise(resolve => {
setTimeout(() => {
console.log('Finished fun2');
resolve(true)
}, 2000)
})
}
function fun3(arr) {
let p = Promise.resolve()
for (var i = 0; i < arr.length; i++){
p = p.then(arr[i]);
}
return p
}
fun3([fun1, fun2]);
```
You can write the loop a little more succinctly with `reduce()`:
```
function fun3(arr){
return arr.reduce((a, c) => a.then(c), Promise.resolve())
}
```
If you can use `async/await` the last function would be a little easier to read. Since the `async` function returns a promise you can also easily know when the whole thing is done:
```
async function fun3(arr){
for(var i=0;i<arr.length;i++)
await arr[i]();
}
fun3([fun1,fun2])
.then(() => console.log("finished"))
```
Of course if it's possible to just have simple functions that are not asynchronous, a lot of these problems go away…you could just run then in a timer outside the functions. | **You can add the next function *inside* the `setTimeout` callback.**
For example,
```js
var fun1=function(){
console.log('Started fun1');
setTimeout(()=>{
console.log('Finished fun1');
fun2(); // Start the next timeout.
},2000)
}
var fun2=function(){
console.log('Started fun2');
setTimeout(()=>{
console.log('Finished fun2');
},2000)
}
// This should output 'Started fun1', delay,
// output 'Finished fun1' and then 'Started fun2',
// delay, and then finally output 'Finished fun2'.
fun1();
``` |
58,484,745 | Let's say that the python objects below are **locked**: we just cannot change the code; all we can do is write right after it. I know it's awful, but let's say that we are forced to work with this.
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
# let's say that there are 99 of them
```
**How to print the name of each and every one of them (99) without repetition?**
From my noob perspective, the ideal way to resolve this case without repetition is to use the same logic that we have with strings.
Because name => **Name+index**,
it can be really easy to iterate over them.
So, some code working along the lines of the totally fictive example below:
```
for i in range (1,100):
print(Name+f"{i:02d}")
```
```
for i in range (1,100):
string_v_of_obj = "Name" + str(f"{i:02d}")
print(func_transform_string_to_code(string_v_of_obj))
```
Maybe something like this is possible:
```
for python_object in script_objects:
if Name in python_object:
print(python_object)
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58484745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11613897/"
] | This could do the trick:
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
vars = locals().copy()
for i in vars:
if 'Name' in i:
print((i, eval(i)))
```
alternative in one line:
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
print([(i, eval(i)) for i in locals().copy() if "Name" in i])
``` | You can access the global variables through `globals()` or, if you want the local variables, through `locals()`. They are stored in a `dict`. So
```
for i in range (1,100):
print(locals()[f"Name{i:02d}"])
```
should do what you want. |
58,484,745 | Let's say that the python objects below are **locked**: we just cannot change the code; all we can do is write right after it. I know it's awful, but let's say that we are forced to work with this.
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
# let's say that there are 99 of them
```
**How to print the name of each and every one of them (99) without repetition?**
From my noob perspective, the ideal way to resolve this case without repetition is to use the same logic that we have with strings.
Because name => **Name+index**,
it can be really easy to iterate over them.
So, some code working along the lines of the totally fictive example below:
```
for i in range (1,100):
print(Name+f"{i:02d}")
```
```
for i in range (1,100):
string_v_of_obj = "Name" + str(f"{i:02d}")
print(func_transform_string_to_code(string_v_of_obj))
```
Maybe something like this is possible:
```
for python_object in script_objects:
if Name in python_object:
print(python_object)
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58484745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11613897/"
] | use `eval()`
```
>>> Name01 = "Dorian"
>>> Name02 = "Tom"
>>> Name04 = "Jerry"
>>> Name03 = "Jessica"
>>> for i in range(1, 100):
... print(eval('Name%02d'%i))
...
Dorian
Tom
Jessica
Jerry
```
In case you are using Python 3.6+, you can go with an f-string:
```
f"Name{i:02d}"
``` | You can access the global variables through `globals()` or, if you want the local variables, through `locals()`. They are stored in a `dict`. So
```
for i in range (1,100):
print(locals()[f"Name{i:02d}"])
```
should do what you want. |
58,484,745 | Let's say that the python objects below are **locked**: we just cannot change the code; all we can do is write right after it. I know it's awful, but let's say that we are forced to work with this.
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
# let's say that there are 99 of them
```
**How to print the name of each and every one of them (99) without repetition?**
From my noob perspective, the ideal way to resolve this case without repetition is to use the same logic that we have with strings.
Because name => **Name+index**,
it can be really easy to iterate over them.
So, some code working along the lines of the totally fictive example below:
```
for i in range (1,100):
print(Name+f"{i:02d}")
```
```
for i in range (1,100):
string_v_of_obj = "Name" + str(f"{i:02d}")
print(func_transform_string_to_code(string_v_of_obj))
```
Maybe something like this is possible:
```
for python_object in script_objects:
if Name in python_object:
print(python_object)
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58484745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11613897/"
] | This could do the trick:
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
vars = locals().copy()
for i in vars:
if 'Name' in i:
print((i, eval(i)))
```
alternative in one line:
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
print([(i, eval(i)) for i in locals().copy() if "Name" in i])
``` | You can use exec:
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
for i in range(1,5):
num = f"{i:02d}" # python3.6+
num = "{0:02d}".format(i) # python 3.x
num = "%02d"%i # python 2
exec('print(Name'+num+')')
``` |
58,484,745 | Let's say that the python objects below are **locked**: we just cannot change the code; all we can do is write right after it. I know it's awful, but let's say that we are forced to work with this.
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
# let's say that there are 99 of them
```
**How to print the name of each and every one of them (99) without repetition?**
From my noob perspective, the ideal way to resolve this case without repetition is to use the same logic that we have with strings.
Because name => **Name+index**,
it can be really easy to iterate over them.
So, some code working along the lines of the totally fictive example below:
```
for i in range (1,100):
print(Name+f"{i:02d}")
```
```
for i in range (1,100):
string_v_of_obj = "Name" + str(f"{i:02d}")
print(func_transform_string_to_code(string_v_of_obj))
```
Maybe something like this is possible:
```
for python_object in script_objects:
if Name in python_object:
print(python_object)
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58484745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11613897/"
] | This could do the trick:
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
vars = locals().copy()
for i in vars:
if 'Name' in i:
print((i, eval(i)))
```
alternative in one line:
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
print([(i, eval(i)) for i in locals().copy() if "Name" in i])
``` | Using this code, as per requirement, you will just print names without printing duplicates:
```
Name01 = "Dorian"
Name02 = "Tom"
Name03 = "Tom"
Name04 = "Jerry"
Name05 = "Jessica"
Name06 = "Jessica"
vars = locals().copy()
names = []
for variable in vars:
# This will make sure that we are only
# using variables having 'Name' in their names
if 'Name' in variable:
# Getting the variable's value
value = eval(variable)
# If the value is not a duplicate
if value not in names:
# Append it to our final names list
names.append(value)
for name in names:
print (name)
```
Output
```
Dorian
Tom
Jessica
Jerry
```
Explanation
>
> The locals() function returns a dictionary containing the variables
> defined in the local namespace. Calling locals() in the global
> namespace is the same as calling globals() and returns a dictionary
> representing the global namespace of the module.
>
>
> |
58,484,745 | Let's say that the python objects below are **locked**: we just cannot change the code; all we can do is write right after it. I know it's awful, but let's say that we are forced to work with this.
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
# let's say that there are 99 of them
```
**How to print the name of each and every one of them (99) without repetition?**
From my noob perspective, the ideal way to resolve this case without repetition is to use the same logic that we have with strings.
Because name => **Name+index**,
it can be really easy to iterate over them.
So, some code working along the lines of the totally fictive example below:
```
for i in range (1,100):
print(Name+f"{i:02d}")
```
```
for i in range (1,100):
string_v_of_obj = "Name" + str(f"{i:02d}")
print(func_transform_string_to_code(string_v_of_obj))
```
Maybe something like this is possible:
```
for python_object in script_objects:
if Name in python_object:
print(python_object)
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58484745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11613897/"
] | This could do the trick:
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
vars = locals().copy()
for i in vars:
if 'Name' in i:
print((i, eval(i)))
```
alternative in one line:
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
print([(i, eval(i)) for i in locals().copy() if "Name" in i])
``` | use `eval()`
```
>>> Name01 = "Dorian"
>>> Name02 = "Tom"
>>> Name04 = "Jerry"
>>> Name03 = "Jessica"
>>> for i in range(1, 100):
... print(eval('Name%02d'%i))
...
Dorian
Tom
Jessica
Jerry
```
In case you are using Python 3.6+, you can go with an f-string:
```
f"Name{i:02d}"
``` |
58,484,745 | Let's say that the python objects below are **locked**: we just cannot change the code; all we can do is write right after it. I know it's awful, but let's say that we are forced to work with this.
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
# let's say that there are 99 of them
```
**How to print the name of each and every one of them (99) without repetition?**
From my noob perspective, the ideal way to resolve this case without repetition is to use the same logic that we have with strings.
Because name => **Name+index**,
it can be really easy to iterate over them.
So, some code working along the lines of the totally fictive example below:
```
for i in range (1,100):
print(Name+f"{i:02d}")
```
```
for i in range (1,100):
string_v_of_obj = "Name" + str(f"{i:02d}")
print(func_transform_string_to_code(string_v_of_obj))
```
Maybe something like this is possible:
```
for python_object in script_objects:
if Name in python_object:
print(python_object)
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58484745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11613897/"
] | use `eval()`
```
>>> Name01 = "Dorian"
>>> Name02 = "Tom"
>>> Name04 = "Jerry"
>>> Name03 = "Jessica"
>>> for i in range(1, 100):
... print(eval('Name%02d'%i))
...
Dorian
Tom
Jessica
Jerry
```
In case you are using Python 3.6+, you can go with an f-string:
```
f"Name{i:02d}"
``` | You can use exec:
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
for i in range(1,5):
num = f"{i:02d}" # python3.6+
num = "{0:02d}".format(i) # python 3.x
num = "%02d"%i # python 2
exec('print(Name'+num+')')
``` |
58,484,745 | Let's say that the python objects below are **locked**: we just cannot change the code; all we can do is write right after it. I know it's awful, but let's say that we are forced to work with this.
```
Name01 = "Dorian"
Name02 = "Tom"
Name04 = "Jerry"
Name03 = "Jessica"
# let's say that there are 99 of them
```
**How to print the name of each and every one of them (99) without repetition?**
From my noob perspective, the ideal way to resolve this case without repetition is to use the same logic that we have with strings.
Because name => **Name+index**,
it can be really easy to iterate over them.
So, some code working along the lines of the totally fictive example below:
```
for i in range (1,100):
print(Name+f"{i:02d}")
```
```
for i in range (1,100):
string_v_of_obj = "Name" + str(f"{i:02d}")
print(func_transform_string_to_code(string_v_of_obj))
```
Maybe something like this is possible:
```
for python_object in script_objects:
if Name in python_object:
print(python_object)
``` | 2019/10/21 | [
"https://Stackoverflow.com/questions/58484745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11613897/"
] | use `eval()`
```
>>> Name01 = "Dorian"
>>> Name02 = "Tom"
>>> Name04 = "Jerry"
>>> Name03 = "Jessica"
>>> for i in range(1, 100):
... print(eval('Name%02d'%i))
...
Dorian
Tom
Jessica
Jerry
```
In case you are using Python 3.6+, you can go with an f-string:
```
f"Name{i:02d}"
``` | Using this code, as per requirement, you will just print names without printing duplicates:
```
Name01 = "Dorian"
Name02 = "Tom"
Name03 = "Tom"
Name04 = "Jerry"
Name05 = "Jessica"
Name06 = "Jessica"
vars = locals().copy()
names = []
for variable in vars:
# This will make sure that we are only
# using variables having 'Name' in their names
if 'Name' in variable:
# Getting the variable's value
value = eval(variable)
# If the value is not a duplicate
if value not in names:
# Append it to our final names list
names.append(value)
for name in names:
print (name)
```
Output
```
Dorian
Tom
Jessica
Jerry
```
Explanation
>
> The locals() function returns a dictionary containing the variables
> defined in the local namespace. Calling locals() in the global
> namespace is the same as calling globals() and returns a dictionary
> representing the global namespace of the module.
>
>
> |
27,627,440 | I am trying to use the [python-user-agents](https://github.com/selwin/python-user-agents/blob/master/user_agents/parsers.py). I keep running into a number of bugs within the library itself.
First it referred to a `from ua_parser import user_agent_parser` that it never defined. So after banging my head, I looked online to see what that might be and found that `ua_parser` is yet another library that this project was using. So I downloaded `ua_parser`. But now I am getting this error:
```
TypeError: parse_device() got an unexpected keyword argument 'model'
```
Sure enough, `ua_parser` has a model variable that the python-user-agents library is not expecting. Has anyone done a better job with this library? Whoever wrote it clearly did a terrible job. But it seems to be the only thing out there that I could find. Any help fixing it to work well? I am looking to use it to identify whether a browser's device is mobile, touch-capable, or a tablet, as in: `user_agent.is_mobile` or `user_agent.is_touch_capable` or `user_agent.is_tablet` | 2014/12/23 | [
"https://Stackoverflow.com/questions/27627440",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2187407/"
] | If you look at the readme from the github link, it tells you what to install and how to use the lib:
You need pyyaml and ua-parser:
```
pip install pyyaml ua-parser user-agents
```
A working example:
```
In [1]: from user_agents import parse
In [2]: ua_string = 'Mozilla/5.0 (iPhone; CPU iPhone OS 5_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B179 Safari/7534.48.3'
In [3]: user_agent = parse(ua_string)
In [4]: user_agent.is_mobile
Out[4]: True
In [5]: user_agent.is_touch_capable
Out[5]: True
In [6]: user_agent.is_tablet
Out[6]: False
``` | Actually, the new version of ua-parser is incompatible with this, so you have to install the older version: `pip install ua-parser==0.3.6` |
21,214,531 | Howdy: somewhat of a python/programming newbie. I am trying to find each time a certain word starts a new sentence and replace it, which in this case is good old "Bob", replaced with "John". I am using a dictionary and the `.replace()` method to do the replacing - replacing the dictionary key with the associated value. Here is my code:
```
start_replacements = {'. Bob': '. John',
'! Bob': '! John',
'? Bob': '? John',
'\nBob': '\nJohn',
}
def search_and_replace(start_word, replacement):
with open('start_words.txt', 'r+') as article:
read_article = article.read()
replaced = read_article.replace(start_word, replacement)
article.seek(0)
article.write(replaced)
def main():
for start_word, replacement in start_replacements.iteritems():
search_and_replace(start_word, replacement)
if __name__ == '__main__':
main()
```
You will see in the dictionary that I have 4 ways of finding "Bob" at the beginning of a sentence, but I am not sure how to find "Bob" at the very beginning of a text file without using regex's `^`. I would prefer to avoid using regex to keep this script simpler. Is this possible?
EDIT: Contents of "start_words.txt" before running the script:
```
Bob is at the beginning of the file. Bob after period! Bob after exclamation? Bob after question.
Bob after newline.
```
Content after running script:
```
Bob is at the beginning of the file. John after period! John after exclamation? John after question.
John after newline.
```
EDIT: **Explanation for not wanting regex**: I would prefer to stick with the dictionary because it is going to grow each week with new words and phrases added. In this instance it is just "Bob". The dictionary will probably grow into the hundreds. I am not hell-bent on not using regex, but as a relative newbie, I was trying to find out if there was another way that I don't know about.
EDIT: The 3rd comment below by @tripleee is a great suggestion and works for what I want to do. Thanks a bunch.
Apologies, it was not my intention to cause some downvotes for myself and within the answer. All help has been appreciated. | 2014/01/19 | [
"https://Stackoverflow.com/questions/21214531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2680443/"
] | You have to adjust either the data you are working with or the algorithm to account for this special case.
For example, you may decorate the beginning of your data with some value and add a corresponding replacement to your dictionary.
```
f_begin_deco = '\0\0\0' # Sequence that won't be in data.
start_replacements = { f_begin_deco + 'Bob': f_begin_deco + 'John' }
# In your search_and_replace function.
read_article = f_begin_deco + article.read()
replaced = read_article.replace(start_word, replacement)
replaced = replaced[len(f_begin_deco):] # Remove beginning of file decoration.
```
Also, you may want to explore the [context manager protocol](http://docs.python.org/2/library/stdtypes.html#typecontextmanager) to create more elegant code for the data decoration.
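A minimal sketch of that idea (the `MarkedText` class name is made up for illustration; it reuses the sentinel trick from the snippet above):
```
class MarkedText(object):
    # Prepends a sentinel on entry so the start of the file looks like
    # any other sentence boundary, then strips it again on exit.
    def __init__(self, text, marker='\0\0\0'):
        self.marker = marker
        self.text = marker + text

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.text = self.text[len(self.marker):]
        return False  # never swallow exceptions

with MarkedText(read_article) as m:
    m.text = m.text.replace(m.marker + 'Bob', m.marker + 'John')
# m.text now holds the replaced text with the sentinel stripped
```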
An alternative approach is to change your search-and-replace algorithm to account for the special case.
```
start_replacements = { 'Bob': 'John' }
# In your search_and_replace function.
if read_article.startswith(start_word):
read_article = replacement + read_article[len(start_word):]
``` | Question to your question: why don't you want to use regex?
```
>>> import re
>>> x = "! Bob is a foo bar"
>>> re.sub('^[!?.\\n\\s]*Bob','John', x)
'John is a foo bar'
>>> x[:2]+re.sub('^[!?.\\n\\s]*Bob','John', x)
'! John is a foo bar'
```
Here's my attempt to do it without regex:
```
>>> x = "! Bob is a foo bar"
>>> first = ['!','?','.','\n']
>>> x = x.split()
>>> x[1] ="John" if x[1] == "Bob" and x[0] in first else x[1]
>>> x
['!', 'John', 'is', 'a', 'foo', 'bar']
>>> " ".join(x)
'! John is a foo bar'
```
As @falsetru noted:
```
>>> x = "\n Bob is a foo bar"
>>> x = x.split()
>>> x[1] ="John" if x[1] == "Bob" and x[0] in first else x[1]
>>> " ".join(x)
'Bob is a foo bar'
```
Possibly the ugliest way to work around `str.split()` removing the `\n` is:
```
>>> x = "\n Bob is a foo bar"
>>> y = x.split()
>>> y[1] ="John" if y[1] == "Bob" and y[0] in first else y[1]
>>> y
['Bob', 'is', 'a', 'foo', 'bar']
>>> if x.split()[0] == "\n":
... y.insert(0,'\n')
...
>>> " ".join(y)
'Bob is a foo bar'
>>> y
['Bob', 'is', 'a', 'foo', 'bar']
>>> if x[0] == "\n":
... y.insert(0,'\n')
...
>>> " ".join(y)
'\n Bob is a foo bar'
```
I should stop appending to my answer; otherwise I'll just be encouraging the OP to use a nonsensical solution to a problem that regex resolves easily. |
21,214,531 | Howdy: somewhat of a python/programming newbie. I am trying to find each time a certain word starts a new sentence and replace it, which in this case is good old "Bob", replaced with "John". I am using a dictionary and the `.replace()` method to do the replacing - replacing the dictionary key with the associated value. Here is my code:
```
start_replacements = {'. Bob': '. John',
'! Bob': '! John',
'? Bob': '? John',
'\nBob': '\nJohn',
}
def search_and_replace(start_word, replacement):
with open('start_words.txt', 'r+') as article:
read_article = article.read()
replaced = read_article.replace(start_word, replacement)
article.seek(0)
article.write(replaced)
def main():
for start_word, replacement in start_replacements.iteritems():
search_and_replace(start_word, replacement)
if __name__ == '__main__':
main()
```
You will see in the dictionary that I have 4 ways of finding "Bob" at the beginning of a sentence, but I am not sure how to find "Bob" at the very beginning of a text file without using regex's `^`. I would prefer to avoid using regex to keep this script simpler. Is this possible?
EDIT: Contents of "start_words.txt" before running the script:
```
Bob is at the beginning of the file. Bob after period! Bob after exclamation? Bob after question.
Bob after newline.
```
Content after running script:
```
Bob is at the beginning of the file. John after period! John after exclamation? John after question.
John after newline.
```
EDIT: **Explanation for not wanting regex**: I would prefer to stick with the dictionary because it is going to grow each week with new words and phrases added. In this instance it is just "Bob". The dictionary will probably grow into the hundreds. I am not hell-bent on not using regex, but as a relative newbie, I was trying to find out if there was another way that I don't know about.
EDIT: The 3rd comment below by @tripleee is a great suggestion and works for what I want to do. Thanks a bunch.
Apologies, it was not my intention to cause some downvotes for myself and within the answer. All help has been appreciated. | 2014/01/19 | [
"https://Stackoverflow.com/questions/21214531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2680443/"
] | You can use regular expression (with the dictionary). This does not require iterating dictionary entries.
```
import re
nonspaces = re.compile(r'\S+') # To extract the first word
def search_and_replace(filepath, replacement):
def replace_sentence(match):
def replace_name(match):
name = match.group()
return replacement.get(name, name)
return nonspaces.sub(replace_name, match.group(), count=1)
# count=1: to change only the first word.
with open(filepath, 'r+') as f:
replaced = re.sub('[^.!?]+', replace_sentence, f.read())
f.seek(0)
f.write(replaced)
f.truncate() # truncate, or leftover text would remain whenever a name shrinks
start_replacement = {
'Bob': 'John',
'Sam': 'Jack',
'Tom': 'Kevin',
}
search_and_replace('start_words.txt', start_replacement)
```
Explanation about regular expressions used.
* `[^.!?]`: Match any character that is not `.`, `!` nor `?`. Used to extract sentences.
```
>>> re.findall('[^.!?]+', 'Bob is at the beginning. Bob after period!')
['Bob is at the beginning', ' Bob after period']
```
* `\S`: Match any non-space character. Used to extract the first word (possibly a name):
```
>>> re.search(r'\S+', 'Bob is at the beginning').group()
'Bob'
>>> re.search(r'\S+', ' Tom after period!').group()
'Tom'
>>> re.sub(r'\S+', 'John', ' Bob and Tom.')
' John John John'
>>> re.sub(r'\S+', 'John', ' Bob and Tom.', count=1)
' John and Tom.'
```
See [`re` module documentation](http://docs.python.org/3/library/re.html) and [Regular Expression HOWTO](http://docs.python.org/3/howto/regex.html). | Question to your question: why don't you want to use regex?
```
>>> import re
>>> x = "! Bob is a foo bar"
>>> re.sub('^[!?.\\n\\s]*Bob','John', x)
'John is a foo bar'
>>> x[:2]+re.sub('^[!?.\\n\\s]*Bob','John', x)
'! John is a foo bar'
```
Here's my attempt to do it without regex:
```
>>> x = "! Bob is a foo bar"
>>> first = ['!','?','.','\n']
>>> x = x.split()
>>> x[1] ="John" if x[1] == "Bob" and x[0] in first else x[1]
>>> x
['!', 'John', 'is', 'a', 'foo', 'bar']
>>> " ".join(x)
'! John is a foo bar'
```
As @falsetru noted:
```
>>> x = "\n Bob is a foo bar"
>>> x = x.split()
>>> x[1] ="John" if x[1] == "Bob" and x[0] in first else x[1]
>>> " ".join(x)
'Bob is a foo bar'
```
Possibly the ugliest way to work around `str.split()` removing the `\n` is this:
```
>>> x = "\n Bob is a foo bar"
>>> y = x.split()
>>> y[1] ="John" if y[1] == "Bob" and y[0] in first else y[1]
>>> y
['Bob', 'is', 'a', 'foo', 'bar']
>>> if x.split()[0] == "\n":
... y.insert(0,'\n')
...
>>> " ".join(y)
'Bob is a foo bar'
>>> y
['Bob', 'is', 'a', 'foo', 'bar']
>>> if x[0] == "\n":
... y.insert(0,'\n')
...
>>> " ".join(y)
'\n Bob is a foo bar'
```
I should stop appending to my answer; otherwise I'll just be encouraging the OP to use a nonsensical solution to a problem that regex resolves easily. |
21,214,531 | Howdy: I'm somewhat of a Python/programming newbie. I am trying to find each time a certain word starts a new sentence and replace it; in this case it is good old "Bob", replaced with "John". I am using a dictionary and the `.replace()` method to do the replacing - replacing each dictionary key with the associated value. Here is my code:
```
start_replacements = {'. Bob': '. John',
'! Bob': '! John',
'? Bob': '? John',
'\nBob': '\nJohn',
}
def search_and_replace(start_word, replacement):
with open('start_words.txt', 'r+') as article:
read_article = article.read()
replaced = read_article.replace(start_word, replacement)
article.seek(0)
article.write(replaced)
def main():
for start_word, replacement in start_replacements.iteritems():
search_and_replace(start_word, replacement)
if __name__ == '__main__':
main()
```
You will see in the dictionary that I have 4 ways of finding "Bob" at the beginning of a sentence, but I am not sure how to find "Bob" at the very beginning of a text file, without using regex's `^`. I would prefer to avoid using regex to keep this script simpler. Is this possible?
EDIT: Contents of "start\_words.txt" before running the script:
```
Bob is at the beginning of the file. Bob after period! Bob after exclamation? Bob after question.
Bob after newline.
```
Contents after running the script:
```
Bob is at the beginning of the file. John after period! John after exclamation? John after question.
John after newline.
```
EDIT: **Explanation for not wanting regex**: I would prefer to stick with the dictionary because it is going to grow each week with new words and phrases added. In this instance it is just "Bob"; the dictionary will probably grow into the hundreds. I am not hell-bent on not using regex, but as a relative newbie, I was trying to find out if there was another way that I don't know about.
EDIT: The 3rd comment below by @tripleee is a great suggestion and works for what I want to do. Thanks a bunch.
Apologies, it was not my intention to cause some downvotes for myself and within the answer. All help has been appreciated. | 2014/01/19 | [
"https://Stackoverflow.com/questions/21214531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2680443/"
] | You can use a regular expression (with the dictionary). This does not require iterating over dictionary entries.
```
import re
nonspaces = re.compile(r'\S+') # To extract the first word
def search_and_replace(filepath, replacement):
def replace_sentence(match):
def replace_name(match):
name = match.group()
return replacement.get(name, name)
return nonspaces.sub(replace_name, match.group(), count=1)
# count=1: to change only the first word.
with open(filepath, 'r+') as f:
replaced = re.sub('[^.!?]+', replace_sentence, f.read())
f.seek(0)
f.write(replaced)
f.truncate() # NOTE: If name shrinks, unwanted string remains.
start_replacement = {
'Bob': 'John',
'Sam': 'Jack',
'Tom': 'Kevin',
}
search_and_replace('start_words.txt', start_replacement)
```
Explanation of the regular expressions used.
* `[^.!?]`: Match any character that is not `.`, `!` nor `?`. Used to extract sentences.
```
>>> re.findall('[^.!?]+', 'Bob is at the beginning. Bob after period!')
['Bob is at the beginning', ' Bob after period']
```
* `\S`: Match any non-space character. Used to extract the first word (possibly a name):
```
>>> re.search(r'\S+', 'Bob is at the beginning').group()
'Bob'
>>> re.search(r'\S+', ' Tom after period!').group()
'Tom'
>>> re.sub(r'\S+', 'John', ' Bob and Tom.')
' John John John'
>>> re.sub(r'\S+', 'John', ' Bob and Tom.', count=1)
' John and Tom.'
```
See the [`re` module documentation](http://docs.python.org/3/library/re.html) and the [Regular Expression HOWTO](http://docs.python.org/3/howto/regex.html). | You have to adjust either the data you are working with or the algorithm to account for this special case.
For example, you may decorate the beginning of your data with some value and add a corresponding replacement to your dictionary.
```
f_begin_deco = '\0\0\0' # Sequence that won't be in data.
start_replacements = { f_begin_deco + 'Bob': f_begin_deco + 'John' }
# In your search_and_replace function.
read_article = f_begin_deco + article.read()
replaced = read_article.replace(start_word, replacement)
replaced = replaced[len(f_begin_deco):] # Remove beginning of file decoration.
```
Also, you may want to explore the [context manager protocol](http://docs.python.org/2/library/stdtypes.html#typecontextmanager) to create more elegant code for the data decoration, as sketched below.
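A rough sketch of how that might look (the helper below is hypothetical, not from the original answer):

```
from contextlib import contextmanager

@contextmanager
def decorated(box, marker='\0\0\0'):
    # box is a one-element list holding the text; decorate on entry, strip on exit
    box[0] = marker + box[0]
    try:
        yield box
    finally:
        box[0] = box[0][len(marker):]

box = ['Bob starts the file. Bob again.']
with decorated(box) as b:
    b[0] = b[0].replace('\0\0\0Bob', '\0\0\0John')
print(box[0])  # John starts the file. Bob again.
```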
An alternative approach is to change your search-and-replace algorithm to make it account for the special case.
```
start_replacements = { 'Bob': 'John' }
# In your search_and_replace function.
if read_article.startswith(start_word):
read_article = replacement + read_article[len(start_word):]
``` |
61,680,684 | I am having trouble with a problem in Python. I am making a tic-tac-toe game; I have created a function that takes in a list of lists containing the state of the game, such as [[0,0,0],[0,0,0],[0,0,0]], and outputs a similar list replacing 0, 1, 2 by "-", "X", "O" respectively, as such -
```
def display_board(b):
for r in range(0, 3):
for c in range(0, 3):
if b[r][c] == 1:
b[r][c] = 'X'
elif b[r][c] == 2:
b[r][c] = 'O'
else:
b[r][c] = '-'
return b
```
I am calling it as disp = display\_board(b), where b contains the board state as mentioned above. The function returns the needed value correctly; however, the value of b also changes to match disp. Also, if there is another variable assigned earlier such that test = b, the value of test changes too.
I have tried different compilers on different computers, and the problem persists.
Following is my full code, and I am grateful in advance for any help.
```
def move_input(p):
x = str(input("player {} enter your move <row,col> - ".format(p)))
while True:
l = x.split(",")
if len(x) != 3:
x = str(input("ERROR: INVALID INPUT\nplease enter correct input row and col with comma in between <row,"
"col> - "))
elif not (l[0].isdigit() and l[1].isdigit()):
x = str(input("ERROR: INVALID INPUT\nplease enter correct input row and col with comma in between <row,"
"col> - "))
else:
x[0] -= 1
x[1] -= 1
return x
def display_board(a):
b = a
for r in range(0, 3):
for c in range(0, 3):
if b[r][c] == 1:
b[r][c] = 'X'
elif b[r][c] == 2:
b[r][c] = 'O'
else:
b[r][c] = '-'
return b
def game():
g = [[0 for x in range(0, 3)] for x in range(0, 3)]
print("Hi there! welcome to the game of tic tac toe...")
while True:
# Here is the problem
print(g)
disp = display_board(g) # the value of g changes after this
print(g)
print("\n{}\n{}\n{}".format(disp[0], disp[1], disp[2]))
for player in range(1, 3):
if (0 in g[0]) or (0 in g[1]) or (0 in g[2]):
i = move_input(player)
row = i[0]
col = i[1]
g[row][col] = player
else:
print("DRAW! good job both players")
return 0
game()
``` | 2020/05/08 | [
"https://Stackoverflow.com/questions/61680684",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13498818/"
] | I know I'm writing very late, but I hope it helps other people who are looking for the same thing. It helped me, especially with passing parameters to the database connection, where a variable is assigned in the WHERE clause to filter the information that is needed; all from the URL:
<https://developers.google.com/datastudio/connector/data-source-parameters#set_url_parameters> | Expanding on Yeisson's answer.
Report parameters are passed via query parameter `params`.
The value is a URL-encoded JSON object with all the report parameters that you want to set. So parameter values such as
```json
{
"ds0.includeToday": true,
"ds0.units": "Metric",
"ds1.countries": ["Canada", "Mexico"],
"ds1.labelName": "Population"
}
```
would be encoded this way (JavaScript):
```js
const json = JSON.stringify({
"ds0.includeToday": true,
"ds0.units": "Metric",
"ds1.countries": ["Canada", "Mexico"],
"ds1.labelName": "Population"
})
// "{\"ds0.includeToday\":true,\"ds0.units\":\"Metric\",\"ds1.countries\":[\"Canada\",\"Mexico\"],\"ds1.labelName\":\"Population\"}"
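// equivalently, pass the variable itself: encodeURIComponent(json); the literal below just repeats that JSON string inline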
const encodedParams = encodeURIComponent("{\"ds0.includeToday\":true,\"ds0.units\":\"Metric\",\"ds1.countries\":[\"Canada\",\"Mexico\"],\"ds1.labelName\":\"Population\"}")
// "%7B%22ds0.includeToday%22%3Atrue%2C%22ds0.units%22%3A%22Metric%22%2C%22ds1.countries%22%3A%5B%22Canada%22%2C%22Mexico%22%5D%2C%22ds1.labelName%22%3A%22Population%22%7D"
```
and then passed to the report like this:
`https://datastudio.google.com/reporting/REPORT_ID/page/PAGE_ID?params=%7B%22ds0.includeToday%22%3Atrue%2C%22ds0.units%22%3A%22Metric%22%2C%22ds1.countries%22%3A%5B%22Canada%22%2C%22Mexico%22%5D%2C%22ds1.labelName%22%3A%22Population%22%7D` |
56,576,400 | I wanted to create a mapping between two arrays. But in Python, doing this resulted in a mapping with **the last element getting picked**.
```
array_1 = [0,0,0,1,2,3]
array_2 = [4,4,5,6,8,7]
mapping = dict(zip(array_1, array_2))
print(mapping)
```
The mapping resulted in `{0: 5, 1: 6, 2: 8, 3: 7}`
How do I pick the most frequently occurring element, in this case `4` for key `0`? | 2019/06/13 | [
"https://Stackoverflow.com/questions/56576400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11309609/"
] | You can create a dictionary mapping each key to a list of values. Then you can go over the lists of values in this dictionary and update each value to be the most frequent item in its list, using [Counter.most\_common](https://docs.python.org/3/library/collections.html#collections.Counter.most_common)
```
from collections import defaultdict, Counter
array_1 = [0,0,0,1,2,3]
array_2 = [4,4,5,6,8,7]
mapping = defaultdict(list)
#Create the mapping with a list of values
for key, value in zip(array_1, array_2):
mapping[key].append(value)
print(mapping)
#defaultdict(<class 'list'>, {0: [4, 4, 5], 1: [6], 2: [8], 3: [7]})
res = defaultdict(int)
#Iterate over mapping and chose the most frequent element in the list, and make it the value
for key, value in mapping.items():
#The most frequent element will be the first element of Counter.most_common
res[key] = Counter(value).most_common(1)[0][0]
print(dict(res))
```
The output will be
```
{0: 4, 1: 6, 2: 8, 3: 7}
``` | You can count the frequencies of all (key, value) pairs using `Counter` and then sort them by key and count, so that for each key the most frequent pair comes last and wins when the dict is built:
```
from collections import Counter
array_1 = [0,0,0,1,2,3]
array_2 = [4,4,5,6,8,7]
c = Counter(zip(array_1, array_2))
# sort by (key, -count) with reverse=True: within each key the most frequent
# pair sorts last, so it overwrites the others when the dict is built
dict(i for i, _ in sorted(c.items(), key=lambda x: (x[0][0], -x[1]), reverse=True))
# {3: 7, 2: 8, 1: 6, 0: 4}
``` |
73,956,255 | Hi, I am running this Python code to reduce repeated multi-line patterns to singletons; however, I am doing this on extremely large files of 200,000+ lines.
Here is my current code:
```
import sys
import re
with open('largefile.txt', 'r+') as file:
string = file.read()
string = re.sub(r"((?:^.*\n)+)(?=\1)", "", string, flags=re.MULTILINE)
file.seek(0)
file.write(string)
file.truncate()
```
The problem is that re.sub() is taking ages (10m+) on my large files. Is it possible to speed this up in any way?
Example input file:
```
hello
mister
hello
mister
goomba
bananas
goomba
bananas
chocolate
hello
mister
```
Example output:
```
hello
mister
goomba
bananas
chocolate
hello
mister
```
These patterns can be bigger than 2 lines as well. | 2022/10/05 | [
"https://Stackoverflow.com/questions/73956255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20154432/"
] | Regexps are compact here, but will never be speedy. For one thing, you have an inherently line-based problem, but regexps are inherently character-based. The regexp engine has to deduce, over & over & over again, where "lines" are by searching for newline characters, one at a time. More fundamentally, everything here is brute-force character-at-a-time search, remembering nothing from one phase to the next.
So here's an alternative. Split the giant string into a list of lines, just once at the start. Then that work never needs to be done again. And then build a dict, mapping a line to a list of the indices at which that line appears. That takes linear time. Then, given a line, we don't have to search for it at all: the list of indices tells us at once every place it appears.
Worst-case time *can* still be poor, but I expect it will be at least a hundred times faster on "typical" inputs.
```py
def dedup(s):
from collections import defaultdict
lines = s.splitlines(keepends=True)
line2ix = defaultdict(list)
for i, line in enumerate(lines):
line2ix[line].append(i)
out = []
n = len(lines)
i = 0
while i < n:
line = lines[i]
# Look for longest adjacent match between i:j and j:j+(j-i).
# j must be > i, and j+(j-i) <= n so that j <= (n+i)/2.
maxj = (n + i) // 2
searching = True
for j in reversed(line2ix[line]):
if j > maxj:
continue
if j <= i:
break
# Lines at i and j match.
if all(lines[i + k] == lines[j + k]
for k in range(1, j - i)):
searching = False
break
if searching:
out.append(line)
i += 1
else: # skip the repeated block at i:j
i = j
return "".join(out)
```
EDIT
----
This incorporates Kelly's idea of incrementally updating `line2ix` using a `deque` so that the candidates looked at are always in `range(i+1, maxj+1)`. Then the innermost loop doesn't need to check for those conditions.
It's a mixed bag, losing a little when there are very few duplicates, because in such cases the `line2ix` sequences are very short (or even singletons for unique lines).
Here's timing for a case where it really pays off: a file containing about 30,000 lines of Python code. Many lines are unique, but a few kinds of lines are very common (for example, the empty `"\n"` line). Cutting the work in the innermost loop can pay for those common lines. `dedup_nuts` was picked for the name because this level of micro-optimization is, well, nuts ;-)
```none
71.67997950001154 dedup_original
48.948923900024965 dedup_blhsing
2.204853900009766 dedup_Tim
9.623824400012381 dedup_Kelly
1.0341253000078723 dedup_blhsingTimKelly
0.8434303000103682 dedup_nuts
```
And the code:
```py
def dedup_nuts(s):
from array import array
from collections import deque
encode = {}
decode = []
lines = array('L')
for line in s.splitlines(keepends=True):
if (code := encode.get(line)) is None:
code = encode[line] = len(encode)
decode.append(line)
lines.append(code)
del encode
line2ix = [deque() for line in lines]
view = memoryview(lines)
out = []
n = len(lines)
i = 0
last_maxj = -1
while i < n:
maxj = (n + i) // 2
for j in range(last_maxj + 1, maxj + 1):
line2ix[lines[j]].appendleft(j)
last_maxj = maxj
line = lines[i]
js = line2ix[line]
assert js[-1] == i, (i, n, js)
js.pop()
for j in js:
#assert i < j <= maxj
if view[i : j] == view[j : j + j - i]:
for k in range(i + 1, j):
js = line2ix[lines[k]]
assert js[-1] == k, (i, k, js)
js.pop()
i = j
break
else:
out.append(line)
i += 1
#assert all(not d for d in line2ix)
return "".join(map(decode.__getitem__, out))
```
Some key invariants are checked by asserts there, but the expensive ones are commented out for speed. Season to taste. | Nesting a quantifier within a quantifier is expensive and in this case unnecessary.
You can use the following regex without nesting instead:
```
string = re.sub(r"(^.*\n)(?=\1)", "", string, flags=re.M | re.S)
```
In the following test it more than cuts the time in half compared to your approach:
<https://replit.com/@blhsing/HugeTrivialExperiment> |
73,956,255 | Hi, I am running this Python code to reduce repeated multi-line patterns to singletons; however, I am doing this on extremely large files of 200,000+ lines.
Here is my current code:
```
import sys
import re
with open('largefile.txt', 'r+') as file:
string = file.read()
string = re.sub(r"((?:^.*\n)+)(?=\1)", "", string, flags=re.MULTILINE)
file.seek(0)
file.write(string)
file.truncate()
```
The problem is that re.sub() is taking ages (10m+) on my large files. Is it possible to speed this up in any way?
Example input file:
```
hello
mister
hello
mister
goomba
bananas
goomba
bananas
chocolate
hello
mister
```
Example output:
```
hello
mister
goomba
bananas
chocolate
hello
mister
```
These patterns can be bigger than 2 lines as well. | 2022/10/05 | [
"https://Stackoverflow.com/questions/73956255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20154432/"
] | @TimPeters' line-based comparison approach is good but wastes time in repeated comparisons of the same lines. @KellyBundy's encoding idea is good but wastes time in the overhead of a regex engine and text encoding.
A more efficient approach would be to adopt @KellyBundy's encoding idea in @TimPeters' algorithm, but instead of encoding lines into characters, encode them into an `array.array` of 32-bit integers to avoid the overhead of text encoding, and then use a `memoryview` of the `array` for quick slice-based comparisons:
```
from array import array
def dedup_blhsingTimKelly2(s):
encode = {}
decode = []
lines = s.splitlines(keepends=True)
n = len(lines)
for line in lines:
if line not in encode:
encode[line] = len(decode)
decode.append(line)
lines = array('L', map(encode.get, lines))
del encode
line2ix = [[] for _ in range(n)]
for i, line in enumerate(lines):
line2ix[line].append(i)
view = memoryview(lines)
out = []
i = 0
while i < n:
line = lines[i]
maxj = (n + i) // 2
searching = True
for j in reversed(line2ix[line]):
if j > maxj:
continue
if j <= i:
break
if view[i: j] == view[j: j + j - i]:
searching = False
break
if searching:
out.append(decode[line])
i += 1
else:
i = j
return "".join(out)
```
A run of @KellyBundy's benchmark code with this approach added, originally named `dedup_blhsingTimKelly`, now amended with Tim and Kelly's comments and named `dedup_blhsingTimKelly2`:
```
2.6650364249944687 dedup_original
1.3109814710041974 dedup_blhsing
0.5598453340062406 dedup_Tim
0.9783012029947713 dedup_Kelly
0.24442325498966966 dedup_blhsingTimKelly
0.21991234300367068 dedup_blhsingTimKelly2
```
[Try it online!](https://tio.run/##3Vffb9s2EH7XX0G4DyZrVbFdFCiEqn3agGLZXpo9OZ5BS2ebMkUKIt0k2Pa3Z0dStqXESYomGNAGCOI7Hr/7@V3M@sZutHr7vm5ub0VV68aSBqL9J64KXe0lYxuh1tGq0RWxogJhSXsSpHDCm4bf7A@8EPS5lhJyK7Qy@9MCVnwnbSFyG0UB3ZCMzIbDpNRCUfo6BJDkGy1yMDTYJNzkQiwkWAuNick2m4xZTIaXasgYWemGLIhQLvg10HdjNo8c6h70JKRHeTcejxmLIgwLQyt29UI3Yi0Ul9SwNCL404DdNQgNidktaTOg9FP6V/L6UrERo5@yywkbxGSAvwi4knxtMjT9/c/zi8/nn//4pYe9lBuDnh@GDrgPo5J/nPGXHuiFqA6AUihwiZvE1FJYL9ItQA2qMNlFswN2sJuKa7Ts9INKYWw4dwUVsTdzZQW1q6DhFqhHbJ11gGbu7zzhtXNERQDRO@s6O/eCwo8SVAvgVQJVY//paiMkOiIfiOpDu0vuwkzMD/pX5FzrrQ9Ramy3wZErSp6DsqTiNt@QJdgrAEVEWhLsPCnTckTLN4IlHZCSVDu8ugTy0aXq7YIV@ZBhuEYTu@EW7VCkaiTY2fR4v@LXJXF6MiJ44@yMTA9nBniTb7DPaOBKfjhwIZd@TOErTjEUtFe@Tll9eVZo/NF76h@4n1wrK1QH@3ADoxX37ZcN8G1Pi2X0w4IpipB9qF5yF5JLSdsmYLLbOcn2TSm9fM9Xm@n2SMhJjOhvsFDsfmTdav3KpYEnYseIDlf6aDhv@wl08bF@ImSUkclBBegoxRqYraixz4AtqQEHvCBLqfOtr0pa3kHISNml7WAQtgv6PcVy5OVvIOXN9MBOULku3Ez//a@XC2jlliPfyt4TXPJsaNnq1Wm3YP5EaRu47Jz2Kxd0YQpb7BBbv4ZBd7/G@7j95qfD82GMo1TTgJqswYZNYhhr05atxzu7aDab313lis1fbiF9FXCFbiqodHPjhG4Fe9vqO1fTj78V0NgVZiZSUnqme6lECVPyHJ7/TxQOw9bW4EkqvwhTnyZqK38rUZ/HyrylYjhjjzE2P8HTWT5vZ/R7SPooV37krw8/EUcxg0mfp63mZ@TqsxhK/UPhhTiZbxra4eVzidl9qDzGSGd3fCkkjzw@vrBuQV3qR/T2f7hHd8jRaqdy45ncff3E/Q0ZH98acbcf8elF@oB6GsF1je9B9Oa9zsZzbGjkOrJyBQ/KSdqObY2vNEtXaOIGPFx1Afe/IbxtJ6KPcuya8xbeqlTyalnwlDjImOBSWkKTTY79Cw6xMKtksVC8gsWCdSJht7f/AQ "Python 3.8 (pre-release) – Try It Online") | Nesting a quantifier within a quantifier is expensive and in this case unnecessary.
You can use the following regex without nesting instead:
```
string = re.sub(r"(^.*\n)(?=\1)", "", string, flags=re.M | re.S)
```
In the following test it more than cuts the time in half compared to your approach:
<https://replit.com/@blhsing/HugeTrivialExperiment> |
73,956,255 | Hi, I am running this Python code to reduce repeated multi-line patterns to singletons; however, I am doing this on extremely large files of 200,000+ lines.
Here is my current code:
```
import sys
import re
with open('largefile.txt', 'r+') as file:
string = file.read()
string = re.sub(r"((?:^.*\n)+)(?=\1)", "", string, flags=re.MULTILINE)
file.seek(0)
file.write(string)
file.truncate()
```
The problem is that re.sub() is taking ages (10m+) on my large files. Is it possible to speed this up in any way?
Example input file:
```
hello
mister
hello
mister
goomba
bananas
goomba
bananas
chocolate
hello
mister
```
Example output:
```
hello
mister
goomba
bananas
chocolate
hello
mister
```
These patterns can be bigger than 2 lines as well. | 2022/10/05 | [
"https://Stackoverflow.com/questions/73956255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20154432/"
] | Regexps are compact here, but will never be speedy. For one thing, you have an inherently line-based problem, but regexps are inherently character-based. The regexp engine has to deduce, over & over & over again, where "lines" are by searching for newline characters, one at a time. More fundamentally, everything here is brute-force character-at-a-time search, remembering nothing from one phase to the next.
So here's an alternative. Split the giant string into a list of lines, just once at the start. Then that work never needs to be done again. And then build a dict, mapping a line to a list of the indices at which that line appears. That takes linear time. Then, given a line, we don't have to search for it at all: the list of indices tells us at once every place it appears.
Worst-case time *can* still be poor, but I expect it will be at least a hundred times faster on "typical" inputs.
```py
def dedup(s):
from collections import defaultdict
lines = s.splitlines(keepends=True)
line2ix = defaultdict(list)
for i, line in enumerate(lines):
line2ix[line].append(i)
out = []
n = len(lines)
i = 0
while i < n:
line = lines[i]
# Look for longest adjacent match between i:j and j:j+(j-i).
# j must be > i, and j+(j-i) <= n so that j <= (n+i)/2.
maxj = (n + i) // 2
searching = True
for j in reversed(line2ix[line]):
if j > maxj:
continue
if j <= i:
break
# Lines at i and j match.
if all(lines[i + k] == lines[j + k]
for k in range(1, j - i)):
searching = False
break
if searching:
out.append(line)
i += 1
else: # skip the repeated block at i:j
i = j
return "".join(out)
```
EDIT
----
This incorporates Kelly's idea of incrementally updating `line2ix` using a `deque` so that the candidates looked at are always in `range(i+1, maxj+1)`. Then the innermost loop doesn't need to check for those conditions.
It's a mixed bag, losing a little when there are very few duplicates, because in such cases the `line2ix` sequences are very short (or even singletons for unique lines).
Here's timing for a case where it really pays off: a file containing about 30,000 lines of Python code. Many lines are unique, but a few kinds of lines are very common (for example, the empty `"\n"` line). Cutting the work in the innermost loop can pay for those common lines. `dedup_nuts` was picked for the name because this level of micro-optimization is, well, nuts ;-)
```none
71.67997950001154 dedup_original
48.948923900024965 dedup_blhsing
2.204853900009766 dedup_Tim
9.623824400012381 dedup_Kelly
1.0341253000078723 dedup_blhsingTimKelly
0.8434303000103682 dedup_nuts
```
And the code:
```py
def dedup_nuts(s):
from array import array
from collections import deque
encode = {}
decode = []
lines = array('L')
for line in s.splitlines(keepends=True):
if (code := encode.get(line)) is None:
code = encode[line] = len(encode)
decode.append(line)
lines.append(code)
del encode
line2ix = [deque() for line in lines]
view = memoryview(lines)
out = []
n = len(lines)
i = 0
last_maxj = -1
while i < n:
maxj = (n + i) // 2
for j in range(last_maxj + 1, maxj + 1):
line2ix[lines[j]].appendleft(j)
last_maxj = maxj
line = lines[i]
js = line2ix[line]
assert js[-1] == i, (i, n, js)
js.pop()
for j in js:
#assert i < j <= maxj
if view[i : j] == view[j : j + j - i]:
for k in range(i + 1, j):
js = line2ix[lines[k]]
assert js[-1] == k, (i, k, js)
js.pop()
i = j
break
else:
out.append(line)
i += 1
#assert all(not d for d in line2ix)
return "".join(map(decode.__getitem__, out))
```
Some key invariants are checked by asserts there, but the expensive ones are commented out for speed. Season to taste. | Another idea: You're talking about "200,000+ lines", so we can encode each unique line as one of the 1,114,112 possible characters and simplify the regex to `r"(.+)(?=\1)"`. And after the deduplication, decode the characters back to lines.
```
def dedup(s):
encode = {}
decode = {}
lines = s.split('\n')
for line in lines:
if line not in encode:
c = chr(len(encode))
encode[line] = c
decode[c] = line
s = ''.join(map(encode.get, lines))
s = re.sub(r"(.+)(?=\1)", "", s, flags=re.S)
return '\n'.join(map(decode.get, s))
```
A little benchmark based on blhsing's but with some repeating lines (times in seconds):
```
2.5934535119995417 dedup_original
1.2498892020012136 dedup_blhsing
0.5043159520009795 dedup_Tim
0.9235864399997809 dedup_Kelly
```
I built a pool of 50 lines of 10 random letters, then joined 5000 random lines from that pool.
The two fastest with 10,000 lines instead:
```
2.0905018440007552 dedup_Tim
3.220036650000111 dedup_Kelly
```
Code ([Try it online!](https://tio.run/##hVXBjts2EL3rKwbuYclY0dobBAiMKDklQNBtL9mevI5BSyObMkUKItVskPbbt0PSlqXdJhFgWOS8eTN8nBm139zB6Fdv2u7xUTat6Rx0mJzfhC5Nc15Z10m9T6rONOBkg9LByRJXSRIRFnJYX11ltZGasReRJCsORhZoWcRkwhZSbhU6h51N4ZgvFzyFq3t9xTlUpoMtSO0T2CN7veCbxLOeSf@XMrC8XiwWnCdJiRWUWPbt1nRyL7VQzPJVAvR06PqOqDGz/Y51M8ber75kL@41n3P2Pr9f8lkKM/oRYaXE3uYE/eOv27tPt5/@/DDh3qmDpcg/po68P2aFfzz484T0TjYDYdC6MEph4aTR9iw4oUWvXCkLUt0DldToFbKZbZV0YcmOiC3q0uZ3XY98wN3IB0KOKJiS1kW7V16mAeb1R9032AmHLDCeshoRrf3/JhOtD8RkJDG98yWwCQtNrwr1iSBsSdpahLevB6koELwFPaX2Tt5hLTfD/m9wa8wxpKgM1YV1IMpaFKgdNMIVB9ih@4qoQa5qoBKBelXPWf1S8mxEUkPTk@sO4Z0/asBFFLzNKV1rwB2EIxwtmZ5Lfn1z8W/EQw1@H@ZAHtfXcDPYLIquOFBBEMBLPhh8ynWoZ/ybyh1LNpFvJGuQpyLwuxBpavBPYbSTesQ9eFC28jl@16E4TnZJxlAsdEQZTx/Vy55SCqXY6RLosMcN5OdLqcP6WazTSY@Xzl2mxP6ShOLPMxur9VEoi7/InTIaXKZsVG/nCvT58elBYJ7DcthCCrQiDexRtnTPSFfSIhV4CTtlimNQZVU/YcihHvf3bBbHEMWddO7vqNS3oXdRF6b0hfz937Aucbp@0rIszL6hCc8dGFCrsQjBoo2L/ek5p2oUxFkcOuabLtr5VJG4GUvPYyfGmOW62Jw6MBjHs7cR7Yk226OLo8KeInjcZfhlP5mnn/lYUH/0C3tMIbJ75qTqdWHDyBoP9HQ6hNPL@EzH95HgQ0vTk9wDzXqxoRtKvMSVVzBuLlebKGJLXxLHKoL4ao@uPoPpB@nVeTxPWC7X4KPFbyJTotmVYgWeMgUapzvs8uXlQmJAOmmVbbdaNLjd8lEm/PHxPw "Python 3.8 (pre-release) – Try It Online")):
```
import re
import random
import string
from timeit import timeit
strings = [''.join((*random.choices(string.ascii_letters, k=10), '\n')) for _ in range(50)]
s = ''.join(random.choices(strings, k=5000))
def dedup_original(s):
return re.sub(r"((?:^.*\n)+)(?=\1)", "", s, flags=re.MULTILINE)
def dedup_blhsing(s):
return re.sub(r"(^.*\n)(?=\1)", "", s, flags=re.M | re.S)
def dedup_Tim(s):
from collections import defaultdict
lines = s.splitlines(keepends=True)
line2ix = defaultdict(list)
for i, line in enumerate(lines):
line2ix[line].append(i)
out = []
n = len(lines)
i = 0
while i < n:
line = lines[i]
# Look for longest adjacent match between i:j and j:j+(j-i).
# j must be > i, and j+(j-i) <= n so that j <= (n+i)/2.
maxj = (n + i) // 2
searching = True
for j in reversed(line2ix[line]):
if j > maxj:
continue
if j <= i:
break
# Lines at i and j match.
if all(lines[i + k] == lines[j + k]
for k in range(1, j - i)):
searching = False
break
if searching:
out.append(line)
i += 1
else: # skip the repeated block at i:j
i = j
return "".join(out)
def dedup_Kelly(s):
encode = {}
decode = {}
lines = s.split('\n')
for line in lines:
if line not in encode:
c = chr(len(encode))
encode[line] = c
decode[c] = line
s = ''.join(map(encode.get, lines))
s = re.sub(r"(.+)(?=\1)", "", s, flags=re.S)
return '\n'.join(map(decode.get, s))
funcs = dedup_original, dedup_blhsing, dedup_Tim, dedup_Kelly
expect = funcs[0](s)
for f in funcs[1:]:
print(f(s) == expect)
for _ in range(3):
for f in funcs:
t = timeit(lambda: f(s), number=1)
print(t, f.__name__)
print()
``` |
73,956,255 | Hi, I am running this Python code to reduce repeated multi-line patterns to singletons; however, I am doing this on extremely large files of 200,000+ lines.
Here is my current code:
```
import sys
import re
with open('largefile.txt', 'r+') as file:
string = file.read()
string = re.sub(r"((?:^.*\n)+)(?=\1)", "", string, flags=re.MULTILINE)
file.seek(0)
file.write(string)
file.truncate()
```
The problem is that re.sub() is taking ages (10m+) on my large files. Is it possible to speed this up in any way?
Example input file:
```
hello
mister
hello
mister
goomba
bananas
goomba
bananas
chocolate
hello
mister
```
Example output:
```
hello
mister
goomba
bananas
chocolate
hello
mister
```
These patterns can be bigger than 2 lines as well. | 2022/10/05 | [
"https://Stackoverflow.com/questions/73956255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20154432/"
] | Regexps are compact here, but will never be speedy. For one thing, you have an inherently line-based problem, but regexps are inherently character-based. The regexp engine has to deduce, over & over & over again, where "lines" are by searching for newline characters, one at a time. More fundamentally, everything here is brute-force character-at-a-time search, remembering nothing from one phase to the next.
So here's an alternative. Split the giant string into a list of lines, just once at the start. Then that work never needs to be done again. And then build a dict, mapping a line to a list of the indices at which that line appears. That takes linear time. Then, given a line, we don't have to search for it at all: the list of indices tells us at once every place it appears.
Worst-case time *can* still be poor, but I expect it will be at least a hundred times faster on "typical" inputs.
```py
def dedup(s):
from collections import defaultdict
lines = s.splitlines(keepends=True)
line2ix = defaultdict(list)
for i, line in enumerate(lines):
line2ix[line].append(i)
out = []
n = len(lines)
i = 0
while i < n:
line = lines[i]
# Look for longest adjacent match between i:j and j:j+(j-i).
# j must be > i, and j+(j-i) <= n so that j <= (n+i)/2.
maxj = (n + i) // 2
searching = True
for j in reversed(line2ix[line]):
if j > maxj:
continue
if j <= i:
break
# Lines at i and j match.
if all(lines[i + k] == lines[j + k]
for k in range(1, j - i)):
searching = False
break
if searching:
out.append(line)
i += 1
else: # skip the repeated block at i:j
i = j
return "".join(out)
```
EDIT
----
This incorporates Kelly's idea of incrementally updating `line2ix` using a `deque` so that the candidates looked at are always in `range(i+1, maxj+1)`. Then the innermost loop doesn't need to check for those conditions.
It's a mixed bag, losing a little when there are very few duplicates, because in such cases the `line2ix` sequences are very short (or even singletons for unique lines).
Here's timing for a case where it really pays off: a file containing about 30,000 lines of Python code. Many lines are unique, but a few kinds of lines are very common (for example, the empty `"\n"` line). Cutting the work in the innermost loop can pay for those common lines. `dedup_nuts` was picked for the name because this level of micro-optimization is, well, nuts ;-)
```none
71.67997950001154 dedup_original
48.948923900024965 dedup_blhsing
2.204853900009766 dedup_Tim
9.623824400012381 dedup_Kelly
1.0341253000078723 dedup_blhsingTimKelly
0.8434303000103682 dedup_nuts
```
And the code:
```py
def dedup_nuts(s):
from array import array
from collections import deque
encode = {}
decode = []
lines = array('L')
for line in s.splitlines(keepends=True):
if (code := encode.get(line)) is None:
code = encode[line] = len(encode)
decode.append(line)
lines.append(code)
del encode
line2ix = [deque() for line in lines]
view = memoryview(lines)
out = []
n = len(lines)
i = 0
last_maxj = -1
while i < n:
maxj = (n + i) // 2
for j in range(last_maxj + 1, maxj + 1):
line2ix[lines[j]].appendleft(j)
last_maxj = maxj
line = lines[i]
js = line2ix[line]
assert js[-1] == i, (i, n, js)
js.pop()
for j in js:
#assert i < j <= maxj
if view[i : j] == view[j : j + j - i]:
for k in range(i + 1, j):
js = line2ix[lines[k]]
assert js[-1] == k, (i, k, js)
js.pop()
i = j
break
else:
out.append(line)
i += 1
#assert all(not d for d in line2ix)
return "".join(map(decode.__getitem__, out))
```
Some key invariants are checked by asserts there, but the expensive ones are commented out for speed. Season to taste. | @TimPeters' line-based comparison approach is good but wastes time in repeated comparisons of the same lines. @KellyBundy's encoding idea is good but wastes time in the overhead of a regex engine and text encoding.
A more efficient approach would be to adopt @KellyBundy's encoding idea in @TimPeters' algorithm, but instead of encoding lines into characters, encode them into an `array.array` of 32-bit integers to avoid the overhead of text encoding, and then use a `memoryview` of the `array` for quick slice-based comparisons:
```
from array import array
def dedup_blhsingTimKelly2(s):
encode = {}
decode = []
lines = s.splitlines(keepends=True)
n = len(lines)
for line in lines:
if line not in encode:
encode[line] = len(decode)
decode.append(line)
lines = array('L', map(encode.get, lines))
del encode
line2ix = [[] for _ in range(n)]
for i, line in enumerate(lines):
line2ix[line].append(i)
view = memoryview(lines)
out = []
i = 0
while i < n:
line = lines[i]
maxj = (n + i) // 2
searching = True
for j in reversed(line2ix[line]):
if j > maxj:
continue
if j <= i:
break
if view[i: j] == view[j: j + j - i]:
searching = False
break
if searching:
out.append(decode[line])
i += 1
else:
i = j
return "".join(out)
```
A run of @KellyBundy's benchmark code with this approach added, originally named `dedup_blhsingTimKelly`, now amended with Tim and Kelly's comments and named `dedup_blhsingTimKelly2`:
```
2.6650364249944687 dedup_original
1.3109814710041974 dedup_blhsing
0.5598453340062406 dedup_Tim
0.9783012029947713 dedup_Kelly
0.24442325498966966 dedup_blhsingTimKelly
0.21991234300367068 dedup_blhsingTimKelly2
```
[Try it online!](https://tio.run/##3Vffb9s2EH7XX0G4DyZrVbFdFCiEqn3agGLZXpo9OZ5BS2ebMkUKIt0k2Pa3Z0dStqXESYomGNAGCOI7Hr/7@V3M@sZutHr7vm5ub0VV68aSBqL9J64KXe0lYxuh1tGq0RWxogJhSXsSpHDCm4bf7A@8EPS5lhJyK7Qy@9MCVnwnbSFyG0UB3ZCMzIbDpNRCUfo6BJDkGy1yMDTYJNzkQiwkWAuNick2m4xZTIaXasgYWemGLIhQLvg10HdjNo8c6h70JKRHeTcejxmLIgwLQyt29UI3Yi0Ul9SwNCL404DdNQgNidktaTOg9FP6V/L6UrERo5@yywkbxGSAvwi4knxtMjT9/c/zi8/nn//4pYe9lBuDnh@GDrgPo5J/nPGXHuiFqA6AUihwiZvE1FJYL9ItQA2qMNlFswN2sJuKa7Ts9INKYWw4dwUVsTdzZQW1q6DhFqhHbJ11gGbu7zzhtXNERQDRO@s6O/eCwo8SVAvgVQJVY//paiMkOiIfiOpDu0vuwkzMD/pX5FzrrQ9Ramy3wZErSp6DsqTiNt@QJdgrAEVEWhLsPCnTckTLN4IlHZCSVDu8ugTy0aXq7YIV@ZBhuEYTu@EW7VCkaiTY2fR4v@LXJXF6MiJ44@yMTA9nBniTb7DPaOBKfjhwIZd@TOErTjEUtFe@Tll9eVZo/NF76h@4n1wrK1QH@3ADoxX37ZcN8G1Pi2X0w4IpipB9qF5yF5JLSdsmYLLbOcn2TSm9fM9Xm@n2SMhJjOhvsFDsfmTdav3KpYEnYseIDlf6aDhv@wl08bF@ImSUkclBBegoxRqYraixz4AtqQEHvCBLqfOtr0pa3kHISNml7WAQtgv6PcVy5OVvIOXN9MBOULku3Ez//a@XC2jlliPfyt4TXPJsaNnq1Wm3YP5EaRu47Jz2Kxd0YQpb7BBbv4ZBd7/G@7j95qfD82GMo1TTgJqswYZNYhhr05atxzu7aDab313lis1fbiF9FXCFbiqodHPjhG4Fe9vqO1fTj78V0NgVZiZSUnqme6lECVPyHJ7/TxQOw9bW4EkqvwhTnyZqK38rUZ/HyrylYjhjjzE2P8HTWT5vZ/R7SPooV37krw8/EUcxg0mfp63mZ@TqsxhK/UPhhTiZbxra4eVzidl9qDzGSGd3fCkkjzw@vrBuQV3qR/T2f7hHd8jRaqdy45ncff3E/Q0ZH98acbcf8elF@oB6GsF1je9B9Oa9zsZzbGjkOrJyBQ/KSdqObY2vNEtXaOIGPFx1Afe/IbxtJ6KPcuya8xbeqlTyalnwlDjImOBSWkKTTY79Cw6xMKtksVC8gsWCdSJht7f/AQ "Python 3.8 (pre-release) – Try It Online") |
73,956,255 | Hi, I am running this Python code to reduce repeated multi-line patterns to singletons; however, I am doing this on extremely large files of 200,000+ lines.
Here is my current code:
```
import sys
import re
with open('largefile.txt', 'r+') as file:
string = file.read()
string = re.sub(r"((?:^.*\n)+)(?=\1)", "", string, flags=re.MULTILINE)
file.seek(0)
file.write(string)
file.truncate()
```
The problem is that re.sub() is taking ages (10m+) on my large files. Is it possible to speed this up in any way?
Example input file:
```
hello
mister
hello
mister
goomba
bananas
goomba
bananas
chocolate
hello
mister
```
Example output:
```
hello
mister
goomba
bananas
chocolate
hello
mister
```
These patterns can be bigger than 2 lines as well. | 2022/10/05 | [
"https://Stackoverflow.com/questions/73956255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20154432/"
] | @TimPeters' line-based comparison approach is good but wastes time in repeated comparisons of the same lines. @KellyBundy's encoding idea is good but wastes time in the overhead of a regex engine and text encoding.
A more efficient approach would be to adopt @KellyBundy's encoding idea in @TimPeters' algorithm, but instead of encoding lines into characters, encode them into an `array.array` of 32-bit integers to avoid the overhead of text encoding, and then use a `memoryview` of the `array` for quick slice-based comparisons:
```
from array import array
def dedup_blhsingTimKelly2(s):
encode = {}
decode = []
lines = s.splitlines(keepends=True)
n = len(lines)
for line in lines:
if line not in encode:
encode[line] = len(decode)
decode.append(line)
lines = array('L', map(encode.get, lines))
del encode
line2ix = [[] for _ in range(n)]
for i, line in enumerate(lines):
line2ix[line].append(i)
view = memoryview(lines)
out = []
i = 0
while i < n:
line = lines[i]
maxj = (n + i) // 2
searching = True
for j in reversed(line2ix[line]):
if j > maxj:
continue
if j <= i:
break
if view[i: j] == view[j: j + j - i]:
searching = False
break
if searching:
out.append(decode[line])
i += 1
else:
i = j
return "".join(out)
```
A run of @KellyBundy's benchmark code with this approach added, originally named `dedup_blhsingTimKelly`, now amended with Tim and Kelly's comments and named `dedup_blhsingTimKelly2`:
```
2.6650364249944687 dedup_original
1.3109814710041974 dedup_blhsing
0.5598453340062406 dedup_Tim
0.9783012029947713 dedup_Kelly
0.24442325498966966 dedup_blhsingTimKelly
0.21991234300367068 dedup_blhsingTimKelly2
```
[Try it online!](https://tio.run/##3Vffb9s2EH7XX0G4DyZrVbFdFCiEqn3agGLZXpo9OZ5BS2ebMkUKIt0k2Pa3Z0dStqXESYomGNAGCOI7Hr/7@V3M@sZutHr7vm5ub0VV68aSBqL9J64KXe0lYxuh1tGq0RWxogJhSXsSpHDCm4bf7A@8EPS5lhJyK7Qy@9MCVnwnbSFyG0UB3ZCMzIbDpNRCUfo6BJDkGy1yMDTYJNzkQiwkWAuNick2m4xZTIaXasgYWemGLIhQLvg10HdjNo8c6h70JKRHeTcejxmLIgwLQyt29UI3Yi0Ul9SwNCL404DdNQgNidktaTOg9FP6V/L6UrERo5@yywkbxGSAvwi4knxtMjT9/c/zi8/nn//4pYe9lBuDnh@GDrgPo5J/nPGXHuiFqA6AUihwiZvE1FJYL9ItQA2qMNlFswN2sJuKa7Ts9INKYWw4dwUVsTdzZQW1q6DhFqhHbJ11gGbu7zzhtXNERQDRO@s6O/eCwo8SVAvgVQJVY//paiMkOiIfiOpDu0vuwkzMD/pX5FzrrQ9Ramy3wZErSp6DsqTiNt@QJdgrAEVEWhLsPCnTckTLN4IlHZCSVDu8ugTy0aXq7YIV@ZBhuEYTu@EW7VCkaiTY2fR4v@LXJXF6MiJ44@yMTA9nBniTb7DPaOBKfjhwIZd@TOErTjEUtFe@Tll9eVZo/NF76h@4n1wrK1QH@3ADoxX37ZcN8G1Pi2X0w4IpipB9qF5yF5JLSdsmYLLbOcn2TSm9fM9Xm@n2SMhJjOhvsFDsfmTdav3KpYEnYseIDlf6aDhv@wl08bF@ImSUkclBBegoxRqYraixz4AtqQEHvCBLqfOtr0pa3kHISNml7WAQtgv6PcVy5OVvIOXN9MBOULku3Ez//a@XC2jlliPfyt4TXPJsaNnq1Wm3YP5EaRu47Jz2Kxd0YQpb7BBbv4ZBd7/G@7j95qfD82GMo1TTgJqswYZNYhhr05atxzu7aDab313lis1fbiF9FXCFbiqodHPjhG4Fe9vqO1fTj78V0NgVZiZSUnqme6lECVPyHJ7/TxQOw9bW4EkqvwhTnyZqK38rUZ/HyrylYjhjjzE2P8HTWT5vZ/R7SPooV37krw8/EUcxg0mfp63mZ@TqsxhK/UPhhTiZbxra4eVzidl9qDzGSGd3fCkkjzw@vrBuQV3qR/T2f7hHd8jRaqdy45ncff3E/Q0ZH98acbcf8elF@oB6GsF1je9B9Oa9zsZzbGjkOrJyBQ/KSdqObY2vNEtXaOIGPFx1Afe/IbxtJ6KPcuya8xbeqlTyalnwlDjImOBSWkKTTY79Cw6xMKtksVC8gsWCdSJht7f/AQ "Python 3.8 (pre-release) – Try It Online") | Another idea: You're talking about "200,000+ lines", so we can encode each unique line as one of the 1,114,112 possible characters and simplify the regex to `r"(.+)(?=\1)"`. And after the deduplication, decode the characters back to lines.
```
def dedup(s):
encode = {}
decode = {}
lines = s.split('\n')
for line in lines:
if line not in encode:
c = chr(len(encode))
encode[line] = c
decode[c] = line
s = ''.join(map(encode.get, lines))
s = re.sub(r"(.+)(?=\1)", "", s, flags=re.S)
return '\n'.join(map(decode.get, s))
```
A little benchmark based on blhsing's but with some repeating lines (times in seconds):
```
2.5934535119995417 dedup_original
1.2498892020012136 dedup_blhsing
0.5043159520009795 dedup_Tim
0.9235864399997809 dedup_Kelly
```
I built a pool of 50 lines of 10 random letters, then joined 5000 random lines from that pool.
The two fastest with 10,000 lines instead:
```
2.0905018440007552 dedup_Tim
3.220036650000111 dedup_Kelly
```
Code ([Try it online!](https://tio.run/##hVXBjts2EL3rKwbuYclY0dobBAiMKDklQNBtL9mevI5BSyObMkUKItVskPbbt0PSlqXdJhFgWOS8eTN8nBm139zB6Fdv2u7xUTat6Rx0mJzfhC5Nc15Z10m9T6rONOBkg9LByRJXSRIRFnJYX11ltZGasReRJCsORhZoWcRkwhZSbhU6h51N4ZgvFzyFq3t9xTlUpoMtSO0T2CN7veCbxLOeSf@XMrC8XiwWnCdJiRWUWPbt1nRyL7VQzPJVAvR06PqOqDGz/Y51M8ber75kL@41n3P2Pr9f8lkKM/oRYaXE3uYE/eOv27tPt5/@/DDh3qmDpcg/po68P2aFfzz484T0TjYDYdC6MEph4aTR9iw4oUWvXCkLUt0DldToFbKZbZV0YcmOiC3q0uZ3XY98wN3IB0KOKJiS1kW7V16mAeb1R9032AmHLDCeshoRrf3/JhOtD8RkJDG98yWwCQtNrwr1iSBsSdpahLevB6koELwFPaX2Tt5hLTfD/m9wa8wxpKgM1YV1IMpaFKgdNMIVB9ih@4qoQa5qoBKBelXPWf1S8mxEUkPTk@sO4Z0/asBFFLzNKV1rwB2EIxwtmZ5Lfn1z8W/EQw1@H@ZAHtfXcDPYLIquOFBBEMBLPhh8ynWoZ/ybyh1LNpFvJGuQpyLwuxBpavBPYbSTesQ9eFC28jl@16E4TnZJxlAsdEQZTx/Vy55SCqXY6RLosMcN5OdLqcP6WazTSY@Xzl2mxP6ShOLPMxur9VEoi7/InTIaXKZsVG/nCvT58elBYJ7DcthCCrQiDexRtnTPSFfSIhV4CTtlimNQZVU/YcihHvf3bBbHEMWddO7vqNS3oXdRF6b0hfz937Aucbp@0rIszL6hCc8dGFCrsQjBoo2L/ek5p2oUxFkcOuabLtr5VJG4GUvPYyfGmOW62Jw6MBjHs7cR7Yk226OLo8KeInjcZfhlP5mnn/lYUH/0C3tMIbJ75qTqdWHDyBoP9HQ6hNPL@EzH95HgQ0vTk9wDzXqxoRtKvMSVVzBuLlebKGJLXxLHKoL4ao@uPoPpB@nVeTxPWC7X4KPFbyJTotmVYgWeMgUapzvs8uXlQmJAOmmVbbdaNLjd8lEm/PHxPw "Python 3.8 (pre-release) – Try It Online")):
```
import re
import random
import string
from timeit import timeit
strings = [''.join((*random.choices(string.ascii_letters, k=10), '\n')) for _ in range(50)]
s = ''.join(random.choices(strings, k=5000))
def dedup_original(s):
return re.sub(r"((?:^.*\n)+)(?=\1)", "", s, flags=re.MULTILINE)
def dedup_blhsing(s):
return re.sub(r"(^.*\n)(?=\1)", "", s, flags=re.M | re.S)
def dedup_Tim(s):
from collections import defaultdict
lines = s.splitlines(keepends=True)
line2ix = defaultdict(list)
for i, line in enumerate(lines):
line2ix[line].append(i)
out = []
n = len(lines)
i = 0
while i < n:
line = lines[i]
# Look for longest adjacent match between i:j and j:j+(j-i).
# j must be > i, and j+(j-i) <= n so that j <= (n+i)/2.
maxj = (n + i) // 2
searching = True
for j in reversed(line2ix[line]):
if j > maxj:
continue
if j <= i:
break
# Lines at i and j match.
if all(lines[i + k] == lines[j + k]
for k in range(1, j - i)):
searching = False
break
if searching:
out.append(line)
i += 1
else: # skip the repeated block at i:j
i = j
return "".join(out)
def dedup_Kelly(s):
encode = {}
decode = {}
lines = s.split('\n')
for line in lines:
if line not in encode:
c = chr(len(encode))
encode[line] = c
decode[c] = line
s = ''.join(map(encode.get, lines))
s = re.sub(r"(.+)(?=\1)", "", s, flags=re.S)
return '\n'.join(map(decode.get, s))
funcs = dedup_original, dedup_blhsing, dedup_Tim, dedup_Kelly
expect = funcs[0](s)
for f in funcs[1:]:
print(f(s) == expect)
for _ in range(3):
for f in funcs:
t = timeit(lambda: f(s), number=1)
print(t, f.__name__)
print()
``` |
53,569,407 | Is it possible to conditionally replace parts of strings in MySQL?
Introduction to a problem: Users in my database stored articles (table called "table", column "value", each row = one article) with wrong links to images. I'd like to repair all of them at once. To do that, I have to replace all of the addresses in "href" links that are followed by images, i.e.,
`<a href="link1"><img src="link2"></a>`
should by replaced by
`<a href="link2"><img src="link2"></a>`
My idea is to search for each "href" tag, and if the tag is followed by an "img", then I'd like to obtain "link2" from the image and use it to replace "link1".
I know how to do it in Bash or Python, but I do not have enough experience with MySQL.
To be specific, my table contains references to images like
```
<a href="www.a.cz/b/c"><img class="image image-thumbnail " src="www.d.cz/e/f.jpg" ...
```
I'd like to replace the first adress (href) by the image link. To get
```
<a href="www.d.cz/e/f.jpg"><img class="image image-thumbnail " src="www.d.cz/e/f.jpg" ...
```
Is it possible to make a query (queries?) like
```
UPDATE `table`
SET value = REPLACE(value, 'www.a.cz/b/c', 'XXX')
WHERE `value` LIKE '%www.a.cz/b/c%'
```
where XXX differs every time and its value is obtained from the database? Moreover, "www.a.cz/b/c" varies.
To make things complicated, not all of the images have the "href" link and not all of the links refer to images. There are three possibilities:
1. "href" followed by "img" -> replace
2. "href" not followed by "img" -> keep original link (probably a link to another page)
3. "img" without "href" -> do nothing (there is no wrong link to replace)
Of course, some of the images may have a correct link. In this case it may also be replaced (original and new will be the same).
Database info from phpMyAdmin
>
> Software: MariaDB
>
>
> Software version: 10.1.32-MariaDB - Source distribution
>
>
> Protocol version: 10
>
>
> Server charset: UTF-8 Unicode (utf8)
>
>
> Apache
>
>
> Database client version: libmysql - 5.6.15
>
>
> PHP extension: mysqli
>
>
>
Thank you in advance | 2018/12/01 | [
"https://Stackoverflow.com/questions/53569407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10731133/"
] | ```
SELECT
regexp_replace(
value,
'^<a href="([^"]+)"><img class="([^"]+)" src="([^"]+)"(.*)$',
'<a href="\\3"><img class="\\2" src="\\3"\\4'
)
FROM
yourTable
```
The replacement only happens if the pattern is matched.
* `^` at the start means `start of the string`
* `([^"]+)` means `one of more characters, excluding "`
* `(.*)` means zero or more of any character
* `$` at the end means `end of the string`
The replacement takes the 3rd "pattern enclosed in braces" (back-reference) and puts it where the 1st "pattern enclosed in braces" (back-reference) was.
The 2nd, 3rd and 4th back-references are replaced with themselves *(no change)*.
<https://dbfiddle.uk/?rdbms=mariadb_10.2&fiddle=96aef2214f844a1466772f41415617e5>
If you have strings that don't ***exactly*** match the pattern, it will do nothing. Extra spaces will trip it up, for example.
In which case you need to work out a new regular expression that always matches all of the strings you want to work on. Then you can use the `\\n` back-references to make replacements.
For example, the following deals with extra spaces in the `href` tag...
```
SELECT
regexp_replace(
value,
'^<a[ ]+href[ ]*=[ ]*"([^"]+)"><img class="([^"]+)" src="([^"]+)"(.*)$',
'<a href="\\3"><img class="\\2" src="\\3"\\4'
)
FROM
yourTable
```
***EDIT:***
Following comments clarifying that these are actually snippets from the MIDDLE of the string...
<https://dbfiddle.uk/?rdbms=mariadb_10.2&fiddle=48ce1cc3df5bf4d3d140025b662072a7>
```
UPDATE
yourTable
SET
value = REGEXP_REPLACE(
value,
'<a href="([^"]+)"><img class="([^"]+)" src="([^"]+)"',
'<a href="\\3"><img class="\\2" src="\\3"'
)
WHERE
value REGEXP '<a href="([^"]+)"><img class="([^"]+)" src="([^"]+)"'
```
*(Though I prefer the syntax `RLIKE`, it's functionally identical.)*
This will also find and replace that pattern multiple times. You're not clear on whether that's desired or possible. | Solved, thanks to @MatBailie, but I had to modify his answer. The final query, including the update, is:
```
UPDATE `table`
SET value = REGEXP_REPLACE(value, '(.*)<a href="([^"]+)"><img class="([^"]+)" src="([^"]+)"(.*)', '\\1<a href="\\4"><img class="\\3" src="\\4"\\5')
```
A wildcard (.\*) had to be put at the beginning of the search because the link is embedded in an article (long text); consequently, the capture-group numbers used in the replacement are shifted up by one. |
64,950,799 | I am trying to group the indexes of the customers based on the following condition with Python.
If the database contains the same contact number or email, the result should return the indexes of the tuples grouped together in a sub-list.
For a given database:
```
data = [
("Customer1","contactA", "emailA"),
("CustomerX","contactA", "emailX"),
("CustomerZ","contactZ", "emailW"),
("CustomerY","contactY", "emailX"),
]
```
The above example shows that Customer1 and CustomerX shares the same contact number, and CustomerX and CustomerY shares the same email, hence Customer1, CustomerX and CustomerY are the same customer.
Hence the result is `[[0, 1, 3], [2]]` | 2020/11/22 | [
"https://Stackoverflow.com/questions/64950799",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14091382/"
] | You could build a graph where you connect elements with a common email or with a common contact and then find [connected components](https://en.wikipedia.org/wiki/Component_(graph_theory)) (e.g., by using a [bfs](https://en.wikipedia.org/wiki/Breadth-first_search) visit).
In this case I'm using the [networkx](https://networkx.org/documentation/stable//index.html) library to build a graph and find connected components.
```
>>> import itertools
>>> import networkx as nx
>>> from collections import defaultdict
>>> contacts = defaultdict(list)
>>> emails = defaultdict(list)
>>> for idx, (name, contact, email) in enumerate(data):
... contacts[contact].append(idx)
... emails[email].append(idx)
...
>>> g = nx.Graph()
>>> for common_attr in itertools.chain(contacts.values(), emails.values()):
... g.add_edges_from(itertools.combinations(common_attr,2))
...
>>> list(nx.connected_components(g))
[{0, 1, 3}, {2}]
``` | You could do this:
```
my_contact_dict = {}
my_email_dict = {}
my_list = []
for pos, cust in enumerate(data):
contact_group = my_contact_dict.get(cust[1], set()) # returns empty set if not in dict
email_group = my_email_dict.get(cust[2], set()) #
contact_group.add (pos)
email_group.add (pos)
contact_group.update (email_group) # Share info between the two groups
email_group.update (contact_group) #
for member in contact_group:
my_contact_dict[data[member][1]] = contact_group
for member in email_group:
my_email_dict[data[member][2]] = email_group
result = {tuple(x) for x in my_contact_dict.values()}
print (result)
```
**Testing it out:**
```
data = [
("Customer1","contactA", "emailA"),
("CustomerX","contactA", "emailX"),
("CustomerZ","contactZ", "emailW"),
("CustomerY","contactY", "emailX"),
]
```
gives:
```
{(2,), (0, 1, 3)}
```
And:
```
data = [
("Customer1","contactA", "emailA"),
("CustomerX","contactA", "emailX"),
("CustomerZ","contactZ", "emailW"),
("CustomerY","contactY", "emailX"),
("CustomerW","contactZ", "emailA"),
]
```
gives:
```
{(0, 1, 2, 3, 4)}
``` |
64,950,799 | I am trying to group the indexes of the customers based on the following condition with Python.
If the database contains the same contact number or email, the result should return the indexes of the tuples grouped together in a sub-list.
For a given database:
```
data = [
("Customer1","contactA", "emailA"),
("CustomerX","contactA", "emailX"),
("CustomerZ","contactZ", "emailW"),
("CustomerY","contactY", "emailX"),
]
```
The above example shows that Customer1 and CustomerX shares the same contact number, and CustomerX and CustomerY shares the same email, hence Customer1, CustomerX and CustomerY are the same customer.
Hence the result is `[[0, 1, 3], [2]]` | 2020/11/22 | [
"https://Stackoverflow.com/questions/64950799",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14091382/"
] | You could build a graph where you connect elements with a common email or with a common contact and then find [connected components](https://en.wikipedia.org/wiki/Component_(graph_theory)) (e.g., by using a [bfs](https://en.wikipedia.org/wiki/Breadth-first_search) visit).
In this case I'm using the [networkx](https://networkx.org/documentation/stable//index.html) library to build a graph and find connected components.
```
>>> import itertools
>>> import networkx as nx
>>> from collections import defaultdict
>>> contacts = defaultdict(list)
>>> emails = defaultdict(list)
>>> for idx, (name, contact, email) in enumerate(data):
... contacts[contact].append(idx)
... emails[email].append(idx)
...
>>> g = nx.Graph()
>>> for common_attr in itertools.chain(contacts.values(), emails.values()):
... g.add_edges_from(itertools.combinations(common_attr,2))
...
>>> list(nx.connected_components(g))
[{0, 1, 3}, {2}]
``` | I usually approach these kinds of problems with the pandas package, as it makes handling of (large) datasets especially easy.
```
import pandas as pd
data = [
("Customer1","contactA", "emailA"),
("CustomerX","contactA", "emailX"),
("CustomerZ","contactZ", "emailW"),
("CustomerY","contactY", "emailX"),
("CustomerB","contactY", "emailZ"),
("CustomerC","contactB", "emailZ"),
("CustomerD","contactZ", "emailD"),
("CustomerE","contactO", "emailO"),
("CustomerF","contactF", "emailF")
]
df = pd.DataFrame(data, columns=["Customer", "Contact", "Email"])
#unique contact information
unique_contact = df.Contact.unique()
result=[]
for contact in unique_contact:
#all entries with this contact
contact_df = df[df.Contact == contact]
#unique email adresses of these contacts
contact_df_email = contact_df.Email.unique()
matches1 = df.Contact == contact #where contact is the same
matches2 = df.Email.isin(contact_df_email) #where email is the same as in any of the identical contacts
#index values of entries that share contact OR email information
result.append(df[matches1 | matches2].index.values.tolist())
#credit: https://stackoverflow.com/a/56567367
def over(coll):
# gather the lists that do overlap
overlapping = [x for x in coll if any(x_element in [y for k in coll if k != x for y in k] for x_element in x)]
# flatten and get unique
overlapping = sorted(list(set([z for x in overlapping for z in x])))
# get the rest
non_overlapping = [x for x in coll if all(y not in overlapping for y in x)]
return [overlapping]+non_overlapping
print(over(result))
```
An answer found [here](https://stackoverflow.com/a/56567367) was especially helpful in solving this, and as can be seen in my example, this can be extended to more complex customer structures. For the input data provided in your question, the output is
>
> [[0, 1, 3], [2]]
>
>
> |
27,773,111 | I'm new to cocos2d-x. I'm trying to set up cocos2d-x for Android, and I exactly followed the [video](https://www.youtube.com/watch?v=2LI1IrRp_0w&index=2&list=PLRtjMdoYXLf4od_bOKN3WjAPr7snPXzoe) tutorial below.
I failed at the steps in the terminal (the `python setup.py` command result is not as expected).
For example, when I begin the setup in the terminal, I get the following error.
```
->Please enter the path of NDK_ROOT (or press Enter to skip):/Users/apple/Documents/Development/Cosos2d-x/android-ndk-r9d
->Error: "/Users/apple/Documents/Development/Cosos2d-x/android-ndk-r9d " is not a valid path of NDK_ROOT. Ignoring it.
->Check environment variable ANDROID_SDK_ROOT
->Search for environment variable ANDROID_SDK_ROOT...
->ANDROID_SDK_ROOT not found
->Search for command android in system...
->Command android not found
```
The same error as above happens for the setup paths ANDROID\_SDK\_ROOT and ANT\_ROOT.
How can I fix the problem? Thanks for getting me out of this.
I'm working on
* Mac OS 10.9.5
* android-ndk-r9d
* apache-ant-1.9.4
* cocos2d-x-3.3
* adt-bundle-mac-x86\_64-20140321 | 2015/01/05 | [
"https://Stackoverflow.com/questions/27773111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2219111/"
] | Did you copy the path into the terminal?
If so, try deleting the trailing whitespace; that will solve the problem. | The Cocos script uses `os.path.join($your_path, $some_extra_file)`, so you have to add a slash `/` at the end:
>
> /Users/apple/Documents/Development/Cosos2d-x/android-ndk-r9d/
>
>
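For illustration, a minimal Python sketch (the paths are hypothetical) of why a stray trailing character makes the path check fail:

```
import os.path

ndk = "/Users/apple/android-ndk-r9d "   # note the trailing space from copy-paste
print(os.path.exists(ndk))              # False: the space is part of the path
print(os.path.exists(ndk.rstrip()))     # True, provided the directory exists
print(os.path.join(ndk, "ndk-build"))   # the stray space survives the join too
```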
> |
38,361,916 | I am trying to insert the following list of dictionaries, named `posts`, into Mongo, and got a `BulkWriteError: batch op errors occurred` error which I don't know how to fix.
`posts:`
```
[{'#AUTHID': 'fffafe151f07a30a0ede2038a897b680',
'Records': [
{'DATE': '07/22/09 05:54 PM',
'STATUS': 'Is flying back friday night in time to move the rest of his stuff then go to work the next morning... great.'},
......
{'DATE': '07/19/09 04:39 PM', 'STATUS': 'is stealing his net by the lake'}]},
{'#AUTHID': 'fffafe151f07a30a0ede2038a897b680',
'Records': [
{'DATE': '07/22/09 05:54 PM',
'STATUS': 'Is flying back friday night in time to move the rest of his stuff then go to work the next morning... great.'},
{'DATE': '07/19/09 04:39 PM', 'STATUS': 'is stealing his net by the lake'},
....
```
The code I used:
```
collection = db.posts
collection.insert_many(p for p in posts)
```
But then I got an error that says `BulkWriteError: batch op errors occurred` and only managed to import the first dictionary (corresponding to the first `#AUTHID`).
I found a link that describes a similar situation, but it doesn't explain much about why this happens or how to solve this issue. It's under **Why does PyMongo add an `_id` field to all of my documents?** at the following link:
<https://github.com/mongodb/mongo-python-driver/blob/master/doc/faq.rst#id25> | 2016/07/13 | [
"https://Stackoverflow.com/questions/38361916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6200575/"
] | Not too late to answer here; you're almost there. I am not sure if the [FAQ](https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.insert_many) has been updated, but please read it properly:
>
> when calling `insert_many()` with a list of references to a **single** document raises BulkWriteError
>
>
>
Note that it says **single**, or in other words, **same instance**. The example in the FAQ shows how to produce the error with the same instance. You can check whether two items are the same by using `id()` to display the memory address. In fact, I can see the content of your documents is the same. Most probably (but not necessarily) it is the same instance.
```
print id(posts[0])
print id(posts[1])
```
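If they print the same address, you can make the items distinct before inserting. A sketch of my own (not from the original answer), using `copy.deepcopy`:

```
import copy

# Give insert_many() a distinct dict instance per document,
# so PyMongo can attach a separate _id to each one.
distinct_posts = [copy.deepcopy(p) for p in posts]
collection.insert_many(distinct_posts)
```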
If any of the dicts share the same instance, then something went wrong while preparing the `posts` variable. Just make sure all list items are distinct instances, because you are inserting (many) different documents! | [Here is the output](http://i.stack.imgur.com/SIZQQ.png)
In this output, the records that are in the list are stored.
```
from pymongo import MongoClient
client = MongoClient('localhost', 27017)
db = client['post']
posts = [{'#AUTHID': 'fffafe151f07a30a0ede2038a897b680',
'Records': [
{'DATE': '07/22/09 05:54 PM',
'STATUS': 'Is flying back friday night in time to move the rest of his stuff then go to work the next morning... great.'},
{'DATE': '07/19/09 04:39 PM', 'STATUS': 'is stealing his net by the lake'}]},
{'#AUTHID': 'fffafe151f07a30a0ede2038a897b680',
'Records': [
{'DATE': '07/22/09 05:54 PM',
'STATUS': 'Is flying back friday night in time to move the rest of his stuff then go to work the next morning... great.'},
{'DATE': '07/19/09 04:39 PM', 'STATUS': 'is stealing his net by the lake'}]}]
result = db.posti.insert_many(p for p in posts)
``` |
18,388,050 | I have a large amount of data of this type:
```
array(14) {
["ap_id"]=>
string(5) "22755"
["user_id"]=>
string(4) "8872"
["exam_type"]=>
string(32) "PV Technical Sales Certification"
["cert_no"]=>
string(12) "PVTS081112-2"
["explevel"]=>
string(1) "0"
["public_state"]=>
string(2) "NY"
["public_zip"]=>
string(5) "11790"
["email"]=>
string(19) "ivorabey@zeroeh.com"
["full_name"]=>
string(15) "Ivor Abeysekera"
["org_name"]=>
string(21) "Zero Energy Homes LLC"
["org_website"]=>
string(14) "www.zeroeh.com"
["city"]=>
string(11) "Stony Brook"
["state"]=>
string(2) "NY"
["zip"]=>
string(5) "11790"
}
```
I wrote a for loop in python which reads through the file, creating a dictionary for each array and storing elements like so:
```
a = 0
data = [{}]
with open( "mess.txt" ) as messy:
lines = messy.readlines()
for i in range( 1, len(lines) ):
line = lines[i]
if "public_state" in line:
data[a]['state'] = lines[i + 1]
elif "public_zip" in line:
data[a]['zip'] = lines[i + 1]
elif "email" in line:
data[a]['email'] = lines[i + 1]
elif "full_name" in line:
data[a]['contact'] = lines[i + 1]
elif "org_name" in line:
data[a]['name'] = lines[i + 1]
elif "org_website" in line:
data[a]['website'] = lines[i + 1]
elif "city" in line:
data[a]['city'] = lines[i + 1]
elif "}" in line:
a += 1
data.append({})
```
I know my code is terrible, but I am fairly new to Python. As you can see, the bulk of my project is complete. What's left is to strip away the code tags from the actual data. For example, I need `string(15) "Ivor Abeysekera"` to become `Ivor Abeysekera"`.
After some research, I considered `.lstrip()`, but since the preceding text is always different, I got stuck.
Does anyone have a clever way of solving this problem? Cheers!
Edit: I am using Python 2.7 on Windows 7. | 2013/08/22 | [
"https://Stackoverflow.com/questions/18388050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2646265/"
] | Depending on how the code tags are formatted, you could split the line on `"` then pick out the second element.
```
s = 'string(15) "Ivor Abeysekera"'
temp = s.split('"')[1]
# temp is 'Ivor Abeysekera'
```
Note that this will get rid of the trailing `"`; if you need it, you can always just add it back on. In your example this would look like:
```
data[a]['state'] = lines[i + 1].split('"')[1]
# etc. for each call of lines[i + 1]
```
Because you are calling it so much (regardless of what answer you use) you should probably turn it into a function:
```
def prepare_data(line_to_fix):
return line_to_fix.split('"')[1]
# later on...
data[a]['state'] = prepare_data(lines[i + 1])
```
This will give you some more flexibility. | **BAD SOLUTION Based on current question**
but to answer your question just use
```
info_string = lines[i + 1]
value_str = info_string.split(" ",1)[-1].strip(" \"")
```
**BETTER SOLUTION**
Do you have access to the PHP generating that? If you do, just do `echo json_encode($data);` instead of using `var_dump`.
If you instead have it output JSON, the JSON output will look like
```
{"variable":"value","variable2","value2"}
```
you can then read it in like
```
import json
import requests
json_str = requests.get("http://url.com/json_dump").text # or however you get the original text
data = json.loads(json_str)
print data
``` |
18,388,050 | I have a large amount of data of this type:
```
array(14) {
["ap_id"]=>
string(5) "22755"
["user_id"]=>
string(4) "8872"
["exam_type"]=>
string(32) "PV Technical Sales Certification"
["cert_no"]=>
string(12) "PVTS081112-2"
["explevel"]=>
string(1) "0"
["public_state"]=>
string(2) "NY"
["public_zip"]=>
string(5) "11790"
["email"]=>
string(19) "ivorabey@zeroeh.com"
["full_name"]=>
string(15) "Ivor Abeysekera"
["org_name"]=>
string(21) "Zero Energy Homes LLC"
["org_website"]=>
string(14) "www.zeroeh.com"
["city"]=>
string(11) "Stony Brook"
["state"]=>
string(2) "NY"
["zip"]=>
string(5) "11790"
}
```
I wrote a for loop in python which reads through the file, creating a dictionary for each array and storing elements like so:
```
a = 0
data = [{}]
with open( "mess.txt" ) as messy:
lines = messy.readlines()
for i in range( 1, len(lines) ):
line = lines[i]
if "public_state" in line:
data[a]['state'] = lines[i + 1]
elif "public_zip" in line:
data[a]['zip'] = lines[i + 1]
elif "email" in line:
data[a]['email'] = lines[i + 1]
elif "full_name" in line:
data[a]['contact'] = lines[i + 1]
elif "org_name" in line:
data[a]['name'] = lines[i + 1]
elif "org_website" in line:
data[a]['website'] = lines[i + 1]
elif "city" in line:
data[a]['city'] = lines[i + 1]
elif "}" in line:
a += 1
data.append({})
```
I know my code is terrible, but I am fairly new to Python. As you can see, the bulk of my project is complete. What's left is to strip away the code tags from the actual data. For example, I need `string(15) "Ivor Abeysekera"` to become `Ivor Abeysekera"`.
After some research, I considered `.lstrip()`, but since the preceding text is always different, I got stuck.
Does anyone have a clever way of solving this problem? Cheers!
Edit: I am using Python 2.7 on Windows 7. | 2013/08/22 | [
"https://Stackoverflow.com/questions/18388050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2646265/"
] | **BAD SOLUTION Based on current question**
but to answer your question just use
```
info_string = lines[i + 1]
value_str = info_string.split(" ",1)[-1].strip(" \"")
```
**BETTER SOLUTION**
Do you have access to the PHP generating that? If you do, just do `echo json_encode($data);` instead of using `var_dump`.
If you instead have it output JSON, the JSON output will look like
```
{"variable":"value","variable2","value2"}
```
you can then read it in like
```
import json
import requests
json_str = requests.get("http://url.com/json_dump").text # or however you get the original text
data = json.loads(json_str)
print data
``` | You should use regular expressions (regex) for this:
<http://docs.python.org/2/library/re.html>
What you intend to do can be easily done with the following code:
```
# Import the library
import re
# This is a string just to demonstrate
a = 'string(32) "PV Technical Sales Certification"'
# Create the regex
p = re.compile('[^"]+"(.*)"$')
# Find a match
m = p.match(a)
# Your result will be now in s
s = m.group(1)
```
Hope this helps! |
18,388,050 | I have a large amount of data of this type:
```
array(14) {
["ap_id"]=>
string(5) "22755"
["user_id"]=>
string(4) "8872"
["exam_type"]=>
string(32) "PV Technical Sales Certification"
["cert_no"]=>
string(12) "PVTS081112-2"
["explevel"]=>
string(1) "0"
["public_state"]=>
string(2) "NY"
["public_zip"]=>
string(5) "11790"
["email"]=>
string(19) "ivorabey@zeroeh.com"
["full_name"]=>
string(15) "Ivor Abeysekera"
["org_name"]=>
string(21) "Zero Energy Homes LLC"
["org_website"]=>
string(14) "www.zeroeh.com"
["city"]=>
string(11) "Stony Brook"
["state"]=>
string(2) "NY"
["zip"]=>
string(5) "11790"
}
```
I wrote a for loop in python which reads through the file, creating a dictionary for each array and storing elements like so:
```
a = 0
data = [{}]
with open( "mess.txt" ) as messy:
lines = messy.readlines()
for i in range( 1, len(lines) ):
line = lines[i]
if "public_state" in line:
data[a]['state'] = lines[i + 1]
elif "public_zip" in line:
data[a]['zip'] = lines[i + 1]
elif "email" in line:
data[a]['email'] = lines[i + 1]
elif "full_name" in line:
data[a]['contact'] = lines[i + 1]
elif "org_name" in line:
data[a]['name'] = lines[i + 1]
elif "org_website" in line:
data[a]['website'] = lines[i + 1]
elif "city" in line:
data[a]['city'] = lines[i + 1]
elif "}" in line:
a += 1
data.append({})
```
I know my code is terrible, but I am fairly new to Python. As you can see, the bulk of my project is complete. What's left is to strip away the code tags from the actual data. For example, I need `string(15) "Ivor Abeysekera"` to become `Ivor Abeysekera"`.
After some research, I considered `.lstrip()`, but since the preceding text is always different, I got stuck.
Does anyone have a clever way of solving this problem? Cheers!
Edit: I am using Python 2.7 on Windows 7. | 2013/08/22 | [
"https://Stackoverflow.com/questions/18388050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2646265/"
] | **BAD SOLUTION Based on current question**
but to answer your question just use
```
info_string = lines[i + 1]
value_str = info_string.split(" ",1)[-1].strip(" \"")
```
**BETTER SOLUTION**
Do you have access to the PHP generating that? If you do, just do `echo json_encode($data);` instead of using `var_dump`.
If you instead have it output JSON, the JSON output will look like
```
{"variable":"value","variable2","value2"}
```
you can then read it in like
```
import json
import requests
json_str = requests.get("http://url.com/json_dump").text # or however you get the original text
data = json.loads(json_str)
print data
``` | You can do this statefully by looping across all the lines and keeping track of where you are in a block:
```
# Map field names to dict keys
fields = {
'public_state': 'state',
'public_zip': 'zip',
'email': 'email',
'full_name': 'contact',
'org_name': 'name',
'org_website': 'website',
'city': 'city',
}
data = []
current = {}
key = None
with open( "mess.txt" ) as messy:
    for line in messy:
line = line.lstrip()
if line.startswith('}'):
data.append(current)
current = {}
elif line.startswith('['):
keyname = line.split('"')[1]
key = fields.get(keyname)
elif key is not None:
            # Get everything between the first and last quotes on the line
value = line.split('"', 1)[1].rsplit('"', 1)[0]
current[key] = value
```
This avoids having to keep track of your position in the file, and also means that you could work across enormous data files (if you process the dictionary after each record) without having to load the whole thing into memory at once. In fact, let's restructure that as a generator that processes blocks of data at a time and yields dicts for you to work with:
```
fields = {
'public_state': 'state',
'public_zip': 'zip',
'email': 'email',
'full_name': 'contact',
'org_name': 'name',
'org_website': 'website',
'city': 'city',
}
def dict_maker(fileobj):
current = {}
key = None
for line in fileobj:
line = line.lstrip()
if line.startswith('}'):
yield current
current = {}
elif line.startswith('['):
keyname = line.split('"')[1]
key = fields.get(keyname)
elif key is not None:
            # Get everything between the first and last quotes on the line
value = line.split('"', 1)[1].rsplit('"', 1)[0]
current[key] = value
with open("mess.txt") as messy:
for d in dict_maker(messy):
print d
```
That makes your main loop tiny and understandable: you loop across the potentially enormous set of dicts, one at a time, and do something with them. It totally separates the act of making the dictionaries from the act of consuming them. And since the generator is stateful, and only processes one line at a time, you could pass in anything that looks like a file: a list of strings, the output of a web request, input from another program writing to `sys.stdin`, or whatever. |
18,388,050 | I have a large amount of data of this type:
```
array(14) {
["ap_id"]=>
string(5) "22755"
["user_id"]=>
string(4) "8872"
["exam_type"]=>
string(32) "PV Technical Sales Certification"
["cert_no"]=>
string(12) "PVTS081112-2"
["explevel"]=>
string(1) "0"
["public_state"]=>
string(2) "NY"
["public_zip"]=>
string(5) "11790"
["email"]=>
string(19) "ivorabey@zeroeh.com"
["full_name"]=>
string(15) "Ivor Abeysekera"
["org_name"]=>
string(21) "Zero Energy Homes LLC"
["org_website"]=>
string(14) "www.zeroeh.com"
["city"]=>
string(11) "Stony Brook"
["state"]=>
string(2) "NY"
["zip"]=>
string(5) "11790"
}
```
I wrote a for loop in python which reads through the file, creating a dictionary for each array and storing elements like so:
```
a = 0
data = [{}]
with open( "mess.txt" ) as messy:
lines = messy.readlines()
for i in range( 1, len(lines) ):
line = lines[i]
if "public_state" in line:
data[a]['state'] = lines[i + 1]
elif "public_zip" in line:
data[a]['zip'] = lines[i + 1]
elif "email" in line:
data[a]['email'] = lines[i + 1]
elif "full_name" in line:
data[a]['contact'] = lines[i + 1]
elif "org_name" in line:
data[a]['name'] = lines[i + 1]
elif "org_website" in line:
data[a]['website'] = lines[i + 1]
elif "city" in line:
data[a]['city'] = lines[i + 1]
elif "}" in line:
a += 1
data.append({})
```
I know my code is terrible, but I am fairly new to Python. As you can see, the bulk of my project is complete. What's left is to strip away the code tags from the actual data. For example, I need `string(15) "Ivor Abeysekera"` to become `Ivor Abeysekera"`.
After some research, I considered `.lstrip()`, but since the preceding text is always different, I got stuck.
Does anyone have a clever way of solving this problem? Cheers!
Edit: I am using Python 2.7 on Windows 7. | 2013/08/22 | [
"https://Stackoverflow.com/questions/18388050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2646265/"
] | Depending on how the code tags are formatted, you could split the line on `"` then pick out the second element.
```
s = 'string(15) "Ivor Abeysekera"'
temp = s.split('"')[1]
# temp is 'Ivor Abeysekera'
```
Note that this will get rid of the trailing `"`; if you need it, you can always just add it back on. In your example this would look like:
```
data[a]['state'] = lines[i + 1].split('"')[1]
# etc. for each call of lines[i + 1]
```
Because you are calling it so much (regardless of what answer you use) you should probably turn it into a function:
```
def prepare_data(line_to_fix):
return line_to_fix.split('"')[1]
# later on...
data[a]['state'] = prepare_data(lines[i + 1])
```
This will give you some more flexibility. | You should use regular expressions (regex) for this:
<http://docs.python.org/2/library/re.html>
What you intend to do can be easily done with the following code:
```
# Import the library
import re
# This is a string just to demonstrate
a = 'string(32) "PV Technical Sales Certification"'
# Create the regex
p = re.compile('[^"]+"(.*)"$')
# Find a match
m = p.match(a)
# Your result will be now in s
s = m.group(1)
```
Hope this helps! |
18,388,050 | I have a large amount of data of this type:
```
array(14) {
["ap_id"]=>
string(5) "22755"
["user_id"]=>
string(4) "8872"
["exam_type"]=>
string(32) "PV Technical Sales Certification"
["cert_no"]=>
string(12) "PVTS081112-2"
["explevel"]=>
string(1) "0"
["public_state"]=>
string(2) "NY"
["public_zip"]=>
string(5) "11790"
["email"]=>
string(19) "ivorabey@zeroeh.com"
["full_name"]=>
string(15) "Ivor Abeysekera"
["org_name"]=>
string(21) "Zero Energy Homes LLC"
["org_website"]=>
string(14) "www.zeroeh.com"
["city"]=>
string(11) "Stony Brook"
["state"]=>
string(2) "NY"
["zip"]=>
string(5) "11790"
}
```
I wrote a for loop in python which reads through the file, creating a dictionary for each array and storing elements like so:
```
a = 0
data = [{}]
with open( "mess.txt" ) as messy:
lines = messy.readlines()
for i in range( 1, len(lines) ):
line = lines[i]
if "public_state" in line:
data[a]['state'] = lines[i + 1]
elif "public_zip" in line:
data[a]['zip'] = lines[i + 1]
elif "email" in line:
data[a]['email'] = lines[i + 1]
elif "full_name" in line:
data[a]['contact'] = lines[i + 1]
elif "org_name" in line:
data[a]['name'] = lines[i + 1]
elif "org_website" in line:
data[a]['website'] = lines[i + 1]
elif "city" in line:
data[a]['city'] = lines[i + 1]
elif "}" in line:
a += 1
data.append({})
```
I know my code is terrible, but I am fairly new to Python. As you can see, the bulk of my project is complete. What's left is to strip away the code tags from the actual data. For example, I need `string(15) "Ivor Abeysekera"` to become `Ivor Abeysekera"`.
After some research, I considered `.lstrip()`, but since the preceding text is always different, I got stuck.
Does anyone have a clever way of solving this problem? Cheers!
Edit: I am using Python 2.7 on Windows 7. | 2013/08/22 | [
"https://Stackoverflow.com/questions/18388050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2646265/"
] | Depending on how the code tags are formatted, you could split the line on `"` then pick out the second element.
```
s = 'string(15) "Ivor Abeysekera"'
temp = s.split('"')[1]
# temp is 'Ivor Abeysekera'
```
Note that this will get rid of the trailing `"`; if you need it, you can always just add it back on. In your example this would look like:
```
data[a]['state'] = lines[i + 1].split('"')[1]
# etc. for each call of lines[i + 1]
```
Because you are calling it so much (regardless of what answer you use) you should probably turn it into a function:
```
def prepare_data(line_to_fix):
return line_to_fix.split('"')[1]
# latter on...
data[a]['state'] = prepare_data(lines[i + 1])
```
This will give you some more flexibility. | You can do this statefully by looping across all the lines and keeping track of where you are in a block:
```
# Map field names to dict keys
fields = {
'public_state': 'state',
'public_zip': 'zip',
'email': 'email',
'full_name': 'contact',
'org_name': 'name',
'org_website': 'website',
'city': 'city',
}
data = []
current = {}
key = None
with open( "mess.txt" ) as messy:
    for line in messy:
line = line.lstrip()
if line.startswith('}'):
data.append(current)
current = {}
elif line.startswith('['):
keyname = line.split('"')[1]
key = fields.get(keyname)
elif key is not None:
            # Get everything between the first and last quotes on the line
value = line.split('"', 1)[1].rsplit('"', 1)[0]
current[key] = value
```
This avoids having to keep track of your position in the file, and also means that you could work across enormous data files (if you process the dictionary after each record) without having to load the whole thing into memory at once. In fact, let's restructure that as a generator that processes blocks of data at a time and yields dicts for you to work with:
```
fields = {
'public_state': 'state',
'public_zip': 'zip',
'email': 'email',
'full_name': 'contact',
'org_name': 'name',
'org_website': 'website',
'city': 'city',
}
def dict_maker(fileobj):
current = {}
key = None
for line in fileobj:
line = line.lstrip()
if line.startswith('}'):
yield current
current = {}
elif line.startswith('['):
keyname = line.split('"')[1]
key = fields.get(keyname)
elif key is not None:
            # Get everything between the first and last quotes on the line
value = line.split('"', 1)[1].rsplit('"', 1)[0]
current[key] = value
with open("mess.txt") as messy:
for d in dict_maker(messy):
print d
```
That makes your main loop tiny and understandable: you loop across the potentially enormous set of dicts, one at a time, and do something with them. It totally separates the act of making the dictionaries from the act of consuming them. And since the generator is stateful, and only processes one line at a time, you could pass in anything that looks like a file: a list of strings, the output of a web request, input from another program writing to `sys.stdin`, or whatever. |
18,388,050 | I have a large amount of data of this type:
```
array(14) {
["ap_id"]=>
string(5) "22755"
["user_id"]=>
string(4) "8872"
["exam_type"]=>
string(32) "PV Technical Sales Certification"
["cert_no"]=>
string(12) "PVTS081112-2"
["explevel"]=>
string(1) "0"
["public_state"]=>
string(2) "NY"
["public_zip"]=>
string(5) "11790"
["email"]=>
string(19) "ivorabey@zeroeh.com"
["full_name"]=>
string(15) "Ivor Abeysekera"
["org_name"]=>
string(21) "Zero Energy Homes LLC"
["org_website"]=>
string(14) "www.zeroeh.com"
["city"]=>
string(11) "Stony Brook"
["state"]=>
string(2) "NY"
["zip"]=>
string(5) "11790"
}
```
I wrote a for loop in python which reads through the file, creating a dictionary for each array and storing elements like so:
```
a = 0
data = [{}]
with open( "mess.txt" ) as messy:
lines = messy.readlines()
for i in range( 1, len(lines) ):
line = lines[i]
if "public_state" in line:
data[a]['state'] = lines[i + 1]
elif "public_zip" in line:
data[a]['zip'] = lines[i + 1]
elif "email" in line:
data[a]['email'] = lines[i + 1]
elif "full_name" in line:
data[a]['contact'] = lines[i + 1]
elif "org_name" in line:
data[a]['name'] = lines[i + 1]
elif "org_website" in line:
data[a]['website'] = lines[i + 1]
elif "city" in line:
data[a]['city'] = lines[i + 1]
elif "}" in line:
a += 1
data.append({})
```
I know my code is terrible, but I am fairly new to Python. As you can see, the bulk of my project is complete. What's left is to strip away the code tags from the actual data. For example, I need `string(15) "Ivor Abeysekera"` to become `Ivor Abeysekera"`.
After some research, I considered `.lstrip()`, but since the preceding text is always different, I got stuck.
Does anyone have a clever way of solving this problem? Cheers!
Edit: I am using Python 2.7 on Windows 7. | 2013/08/22 | [
"https://Stackoverflow.com/questions/18388050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2646265/"
] | You should use regular expressions (regex) for this:
<http://docs.python.org/2/library/re.html>
What you intend to do can be easily done with the following code:
```
# Import the library
import re
# This is a string just to demonstrate
a = 'string(32) "PV Technical Sales Certification"'
# Create the regex
p = re.compile('[^"]+"(.*)"$')
# Find a match
m = p.match(a)
# Your result will be now in s
s = m.group(1)
```
Hope this helps! | You can do this statefully by looping across all the lines and keeping track of where you are in a block:
```
# Map field names to dict keys
fields = {
'public_state': 'state',
'public_zip': 'zip',
'email': 'email',
'full_name': 'contact',
'org_name': 'name',
'org_website': 'website',
'city': 'city',
}
data = []
current = {}
key = None
with open( "mess.txt" ) as messy:
    for line in messy:
line = line.lstrip()
if line.startswith('}'):
data.append(current)
current = {}
elif line.startswith('['):
keyname = line.split('"')[1]
key = fields.get(keyname)
elif key is not None:
            # Get everything between the first and last quotes on the line
value = line.split('"', 1)[1].rsplit('"', 1)[0]
current[key] = value
```
This avoids having to keep track of your position in the file, and also means that you could work across enormous data files (if you process the dictionary after each record) without having to load the whole thing into memory at once. In fact, let's restructure that as a generator that processes blocks of data at a time and yields dicts for you to work with:
```
fields = {
'public_state': 'state',
'public_zip': 'zip',
'email': 'email',
'full_name': 'contact',
'org_name': 'name',
'org_website': 'website',
'city': 'city',
}
def dict_maker(fileobj):
current = {}
key = None
for line in fileobj:
line = line.lstrip()
if line.startswith('}'):
yield current
current = {}
elif line.startswith('['):
keyname = line.split('"')[1]
key = fields.get(keyname)
elif key is not None:
            # Get everything between the first and last quotes on the line
value = line.split('"', 1)[1].rsplit('"', 1)[0]
current[key] = value
with open("mess.txt") as messy:
for d in dict_maker(messy):
print d
```
That makes your main loop tiny and understandable: you loop across the potentially enormous set of dicts, one at a time, and do something with them. It totally separates the act of making the dictionaries from the act of consuming them. And since the generator is stateful, and only processes one line at a time, you could pass in anything that looks like a file: a list of strings, the output of a web request, input from another program writing to `sys.stdin`, or whatever. |
64,154,088 | I am a Python coder and got stuck on the question "How to check input in a textbox of tkinter python". The problem is that this code gives no output:
```
def start(event):
a = main.get(1.0,END)
if a == 'ver':
print('.....')
main = Text(root)
main.pack()
root.bind('<Return>',start)
``` | 2020/10/01 | [
"https://Stackoverflow.com/questions/64154088",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14225987/"
] | We can do this with the `get()` method:
```
from tkinter import *
a=Tk()
def check():
print(x.get('1.0',END)[:-1])
x=Text(a)
b=Button(a,text='Check',command=check)
x.pack()
b.pack()
a.mainloop()
``` | You should write something like
```
def start(event):
t = var.get()
if t == 'something':
pass
var = StringVar()
e = Entry(master, textvariable=var)
e.pack()
e.bind('<Return>', start)
``` |
64,154,088 | I am a Python coder and got stuck on the question "How to check input in a textbox of tkinter python". The problem is that this code gives no output:
```
def start(event):
a = main.get(1.0,END)
if a == 'ver':
print('.....')
main = Text(root)
main.pack()
root.bind('<Return>',start)
``` | 2020/10/01 | [
"https://Stackoverflow.com/questions/64154088",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14225987/"
] | The text widget guarantees that there is always a trailing newline. When you do `get(1.0,END)` you're getting that trailing newline even if the user doesn't enter a newline.
If you want to get exactly what the user entered, use `get("1.0", "end-1c")`. That will get all of the characters up to the end, minus one character.
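Applied to the question's code, that would look like this (a sketch reusing the `main` widget from the question):

```
def start(event):
    a = main.get("1.0", "end-1c")  # exactly what the user typed, no trailing newline
    if a == 'ver':
        print('.....')
```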
Note: text indexes are strings of the form *line.character*. Your code is using the floating point value `1.0` which is incorrect. It works fine in some cases, but it won't work in all. You should always use a string rather than a float for text indexes. | You should write something like
```
def start(event):
t = var.get()
if t == 'something':
pass
var = StringVar()
e = Entry(master, textvariable=var)
e.pack()
e.bind('<Return>', start)
``` |
64,154,088 | I am a Python coder and got stuck on the question "How to check input in a textbox of tkinter python". The problem is that this code gives no output:
```
def start(event):
a = main.get(1.0,END)
if a == 'ver':
print('.....')
main = Text(root)
main.pack()
root.bind('<Return>',start)
``` | 2020/10/01 | [
"https://Stackoverflow.com/questions/64154088",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14225987/"
] | ```
from tkinter import *
from threading import Thread
root = Tk()
def check():
    while True:
a = main.get(1.0,END)[:-1]
if a == 'ver':
print('.....')
main = Text(root)
main.pack()
Thread(target=check).start()
root.mainloop()
``` | You should write something like
```
def start(event):
t = var.get()
if t == 'something':
pass
var = StringVar()
e = Entry(master, textvariable=var)
e.pack()
e.bind('<Return>', start)
``` |
52,113,890 | I needed to extend the User model to add things like address, score, more user\_types, etc. There are 2 possible ways to achieve that: extend the User model, or create a new model connected to the target User with a `OneToOneField`. I decided to go with a new model because it seemed easier and it is recommended in [this](https://stackoverflow.com/questions/44109/extending-the-user-model-with-custom-fields-in-django) stack overflow question. But now I cannot create a Serializer without a nested profile field, which is moreover undocumented, because the default rest\_framework documentation generator cannot generate documentation for nested serializers.
My `UserSerializer` looks like this:
```
class UserSerializer(serializers.ModelSerializer):
# This creates a nested profile field
profile = ProfileSerializer(required=True)
def create(self, validated_data):
profile_data = validated_data.pop('profile')
        user = User.objects.create_user(**validated_data)
        profile, created = Profile.objects.update_or_create(user=user, defaults=profile_data)
return user
class Meta:
model = User
fields = ('id', 'username', 'email', 'password', 'buckelists', 'profile')
read_only_fields = ('id',)
extra_kwargs = {'password':{'write_only': True}}
```
This Serializer takes the following JSON format:
```
{
'name': ...,
'email': ...,
'password': ...,
'profile': {
'address': ...,
'score': ...,
'user_type': ...,
'achievements': ...,
'country': ...,
'trusted': ...,
}
```
This looks weird, and the documentation generated with `rest_framework.documentation.include_docs_urls` shows just the following:
```
{
'username': ...,
'email': ...,
'password': ...,
'field': ...,
}
```
So it's not clear what should be included in the profile field. I'd like to create a Serializer that would accept the following format:
```
{
'name': ...,
'email': ...,
'password': ...,
'address': ...,
'score': ...,
'user_type': ...,
'achievements': ...,
'country': ...,
'trusted': ...,
}
```
Is it possible without creating a custom Serializer from scratch? Or at least, is it possible to generate documentation for nested serializers?
PS: I use python3.6 and Django 2.1
EDIT:
Here is a relevant part of my models.py:
```
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
trusted = models.BooleanField(default=False)
address = models.CharField(max_length=100, default="")
COUNTRIES = (
('CZ', 'Czech Republic'),
('EN', 'England'),
)
country = models.CharField(max_length=2, choices=COUNTRIES, default="CZ")
score = models.BigIntegerField(default=0)
achievements = models.ManyToManyField(Achievement, blank=True)
USER_TYPES = (
('N', 'Normal'),
('C', 'Contributor'),
('A', 'Admin'),
)
user_type = models.CharField(max_length=1, choices=USER_TYPES, default='N')
@receiver(post_save, sender=settings.AUTH_USER_MODEL)
def create_auth_token(sender, instance=None, created=False, **kwargs):
if created:
Token.objects.create(user=instance)
@receiver(post_save, sender=User)
def create_user_profile(sender, instance, created=False, **kwargs):
if created:
profile, created = Profile.objects.get_or_create(user=instance)
profile.save()
```
EDIT:
Mohammad Ali's answer solves this for GET, but I'd also like to use the POST, UPDATE and PATCH methods. I have found that I have to use the `source` parameter, but this is relative to the serializer and I don't know how to reference the profile without having a profile field. | 2018/08/31 | [
"https://Stackoverflow.com/questions/52113890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4981456/"
] | Take it easy: you can create the Profile object right in the create function.
```
class UserSerializer(serializers.ModelSerializer):
trusted = serializers.BooleanField()
address = serializers.CharField()
class Meta:
model = User
fields = ('username', 'email', 'password', 'trusted', 'address',)
def create(self, validated_data):
user = User.objects.create(username=validated_data['username'], email=validated_data['email'])
user.set_password(validated_data['password'])
user.save()
        profile = Profile(user=user, trusted=validated_data['trusted'], address=validated_data['address'])
profile.save()
        return user  # create() should return the created instance
```
It is just a brief implementation of your scenario. You can fill in the rest. | Please read the documentation for Serializers: [Django REST FRAMEWORK](http://www.django-rest-framework.org/api-guide/relations/)
-- use `related_name`:
```
user = models.OneToOneField(User, on_delete=models.CASCADE, related_name="user_profile") # models
class ProfileSerializer(serializers.ModelSerializer):
user = serializers.PrimaryKeyRelatedField(queryset=User.objects.all(), required=False)
class Meta:
model = Profile
fields = '__all__'
class UserSerializer(serializers.ModelSerializer):
user_profile = ProfileSerializer(required=True)
class Meta:
model = User
fields = '__all__'
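
# --- A sketch of my own (not from the original answer): flat fields for the
# edit's POST/PATCH question, read through the OneToOneField's related_name
# via `source`. Writes still need a custom create()/update() that pops the
# nested 'user_profile' data out of validated_data.
class FlatUserSerializer(serializers.ModelSerializer):
    address = serializers.CharField(source='user_profile.address')
    trusted = serializers.BooleanField(source='user_profile.trusted')

    class Meta:
        model = User
        fields = ('id', 'username', 'email', 'address', 'trusted')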
``` |
27,701,573 | I got the error message: *{DetachedInstanceError} Parent instance is not bound to a session; lazy load operation of attribute 'owner' cannot proceed*
My python code:
```
car_obj = my_query_function() # get a Car object
owner_name = car_obj.owner.name # here generate error!
```
My model:
```
class Person(EntityClass):
attributes = ['id', 'name']
name = sa.Column(sa.String(250))
class Car(EntityClass):
attributes = ['id', 'brand', 'color', 'purchase_time', 'owner_id']
brand = sa.Column(sa.String(250))
color = sa.Column(sa.String(250))
purchase_time = sa.Column(sa.String(250))
owner_id = sa.Column(DBKeyType, sa.ForeignKey(Person.__tablename__ + '.id'), nullable=False)
owner = relationship('Person', cascade='all, delete-orphan', backref=backref('car', cascade='delete'), single_parent=True)
```
Does this have something to do with the lazy-loading relationship setting between Car and Person (many-to-one association)? How can I fix the relationship? Thanks in advance. | 2014/12/30 | [
"https://Stackoverflow.com/questions/27701573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3778914/"
] | I traced the docs and made it work by adding `lazy='subquery'`
```
owner = relationship('Person', lazy='subquery', cascade='all, delete-orphan', backref=backref('car', cascade='delete'), single_parent=True)
```
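With `lazy='subquery'`, the owner is loaded eagerly together with the Car, so the attribute access from the question works even after the session is closed. A quick sketch (my illustration, reusing `my_query_function()` from the question):

```
car_obj = my_query_function()    # session may be closed afterwards
owner_name = car_obj.owner.name  # already loaded: no lazy load, no DetachedInstanceError
```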
<http://docs.sqlalchemy.org/en/rel_0_9/orm/join_conditions.html> | Made it work by adding `joinedload_all()` in `session.query(Car).options()`, for example:
```
from sqlalchemy.orm import joinedload_all

cars = session.query(Car).options(joinedload_all('*')).all()
session.close()
for car in cars:
"do your struff"
```
good luck |
29,956,181 | I am a newbie in this field, and I am trying to solve a problem (not really sure if it is actually possible) where I want to print some information plus some input from the user on the display.
The following works fine:
```
>>> print (" Hello " + input("tellmeyourname: "))
tellmeyourname: dfsdf
Hello dfsdf
```
However, if I want to assign the user's input to a variable, I can't:
```
>>> print (" Hello ", name = input("tellmeyourname: "))
tellmeyourname: mike
Traceback (most recent call last):
File "<pyshell#47>", line 1, in <module>
print (" Hello ", name = input("tellmeyourname: "))
TypeError: 'name' is an invalid keyword argument for this function
```
I have researched here and in other python documentation, and tried with `%s` etc. to solve it, without result. I don't want to use it in two lines (first assigning the variable `name = input("tellmeyourname:")` and then printing).
Is this possible? | 2015/04/29 | [
"https://Stackoverflow.com/questions/29956181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4848506/"
] | Starting from Python 3.8, this will become possible using an [assignment expression](https://www.python.org/dev/peps/pep-0572/):
```
print("Your name is: " + (name := input("Tell me your name: ")))
print("Your name is still: " + name)
```
Though 'possible' is not the same as 'advisable'...
---
But in Python <3.8: you can't. Instead, separate your code into two statements:
```
name = input("Tell me your name: ")
print("Your name is: " + name)
```
If you often find yourself wanting to use two lines like this, you could make it into a function:
```
def input_and_print(question):
s = input("{} ".format(question))
print("You entered: {}".format(s))
input_and_print("What is your name?")
```
Additionally, you could have the function return the input `s`. | **No, this is not possible.** Well, except for something like
```
x=input("tell me:");print("blah %s"%(x,));
```
but that's not really one line ... it just looks like it |
34,300,908 | I've been creating a webapp (just for learning purposes) using Python Django, and have no intention of deploying it. However, is there a way to let someone else try the web application, or more precisely: is it possible to somehow test the webapp on another computer? I tried to send the source code (the whole folder) to another computer, installed a virtual environment, activated it, and tried to run the server. However, I always get `RuntimeError: maximum recursion depth exceeded in cmp`. Is there any other way around it? | 2015/12/15 | [
"https://Stackoverflow.com/questions/34300908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3799968/"
] | You can use ngrok -- <https://ngrok.com/> -- to create a public URL to your local server for testing, and then give that URL to people so they can try your webapp. | You can also use [Localtunnel](https://localtunnel.me) to easily share a web service on your local development without deploying the code in the server.
Install localtunnel:
```
npm install -g localtunnel
```
Start a webserver on some local port (e.g. <http://localhost:8000>) and use the command line interface to request a tunnel to your local server:
```
lt --port 8000
```
You will receive a URL, for example <https://xyz.localtunnel.me>, that you can share with anyone for as long as your local instance of lt remains active. Any requests will be routed to your local service at the specified port. |
56,364,756 | My log files have some multiline bytestrings in them, like
[2019-05-25 19:16:31] b'logstring\r\n\r\nmore log'
After I try to extract the original multiline string, how do I convert that to a real string
using Python 3?
As a simplified example, after reading the log file and stripping the time, I end up with a variable that has the type str and has the b' prefix, as a string.
```
# note: b'' is inside the str (taken from log)
tmp = "b'logstring\r\n\r\nmore log'"
# convert here
print(tmp)
```
I'm looking for a way to tell python that the content needs to get decoded. But str doesn't allow decoding.
The result I'd like to see from the print command is
```
logstring
more log
```
UPDATE: The "eval" function will produce this result, but this would execute the code, so it's not safe.
```
# note: b'' is inside the str (taken from log)
tmp = "b'logstring\r\n\r\nmore log'"
tmp = eval(tmp)
print(tmp)
```
Is there a better way? | 2019/05/29 | [
"https://Stackoverflow.com/questions/56364756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2335020/"
] | You can use regex:
```
import re
tmp = "b'logstring\r\n\r\nmore log'"
r = re.compile(r"b'(.+)'", re.DOTALL|re.MULTILINE)
result = r.sub(r"\1", tmp)
print(result) # logstring\r\n\r\nmore log
```
You could use this for the entire file or line by line but you may need to slightly change this code to meet your needs.
**- Edit -**
If you want to remove duplicate newlines (as your desired output shows) you can do it like this:
```
import re
tmp = "b'logstring\r\n\r\nmore log'"
binary_regex = re.compile(r"b'(.+)'", re.DOTALL|re.MULTILINE)
newline_regex = re.compile(r"(\r\n)+", re.DOTALL|re.MULTILINE)
# Make sure to do the compiles outside of any loops you have
result = binary_regex.sub(r"\1", tmp) # Remove the b''
result = newline_regex.sub(r"\r\n", result) # Remove duplicate new lines
print(result)
```
**Output:**
```
logstring
more log
``` | It seems that you can lock down the eval function so that it can't run functions and python builtins. You do this by passing a dictionary of allowed global and local functions.
By mapping all builtins to None you can block the execution of regular python commands. With that in place, using eval to evaluate the string content is safe.
Source [this article](http://lybniz2.sourceforge.net/safeeval.html)
```
# note: b'' is inside the str (taken from log)
tmp = "b'logstring\r\n\r\nmore log'"
tmp = eval(tmp, {'__builtins__': None}, {})
print(tmp)
```
And in the comments above @juanpa.arrivillaga offered another solution, that also solves the eval security problem:
```
import ast
tmp = "b'logstring\r\n\r\nmore log'"
tmp = ast.literal_eval(tmp)
print(tmp)
``` |
6,467,407 | I'm using Jython from within Java; so I have a Java setup similar to below:
```
String scriptname="com/blah/myscript.py"
PythonInterpreter interpreter = new PythonInterpreter(null, new PySystemState());
InputStream is = this.getClass().getClassLoader().getResourceAsStream(scriptname);
interpreter.execfile(is);
```
And this will (for instance) run the script below:
```
# myscript.py:
import sys
if __name__=="__main__":
print "hello"
print sys.argv
```
How do I pass in 'commandline' arguments using this method?
(I want to be able to write my Jython scripts so that I can also run them on the commandline with 'python script arg1 arg2'). | 2011/06/24 | [
"https://Stackoverflow.com/questions/6467407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184456/"
] | I'm using Jython 2.5.2 and `runScript` didn't exist, so I had to replace it with `execfile`. Aside from that difference, I also needed to set `argv` in the state object before creating the `PythonInterpreter` object:
```
String scriptname = "myscript.py";
PySystemState state = new PySystemState();
state.argv.append (new PyString ("arg1"));
state.argv.append (new PyString ("arg2"));
PythonInterpreter interpreter = new PythonInterpreter(null, state);
InputStream is = Tester.class.getClassLoader().getResourceAsStream(scriptname);
interpreter.execfile (is);
```
The `argv` list in the state object initially has a length of 1, with an empty string in it, so the preceding code results in the output:
```
hello
['', 'arg1', 'arg2']
```
If you need `argv[0]` to be the actual script name, you'd need to create the state like this:
```
PySystemState state = new PySystemState();
state.argv.clear ();
state.argv.append (new PyString (scriptname));
state.argv.append (new PyString ("arg1"));
state.argv.append (new PyString ("arg2"));
```
Then the output is:
```
hello
['myscript.py', 'arg1', 'arg2']
``` | For those people for whom the above solution does not work, try the below. This works for me on Jython version 2.7.0.
```
String[] params = {"get_AD_accounts.py","-server", "http://xxxxx:8080","-verbose", "-logLevel", "CRITICAL"};
```
The above replicates the command below, i.e. each argument and its value is a separate element in the params array.
***jython get\_AD\_accounts.py -logLevel CRITICAL -server <http://xxxxxx:8080> -verbose***
```
PythonInterpreter.initialize(System.getProperties(), System.getProperties(), params);
PySystemState state = new PySystemState();
InputStream is = new FileInputStream("C:\\projectfolder\\get_AD_accounts.py");
PythonInterpreter interp = new PythonInterpreter(null, state);
interp.execfile(is);
``` |
16,640,624 | I am outputting
```
parec -d "name"
```
You don't need to know this command, just know that as soon as you press enter, it outputs binary data representing audio.
My goal is to read this with python in real time, i.e. start it and have it in a variable "data" I can read from with something like
```
data = p.stdout.read()
```
What I tried:
```
p = subprocess.Popen(['parec','-d','"name"'],stdout=subprocess.PIPE,shell=True)
while True:
data = p.stdout.read()
```
But this results in no data being received.
```
parec -d "name" > result.raw
```
is readable with an audio-programme and contains exactly the necessary data. So what is the command for python? | 2013/05/19 | [
"https://Stackoverflow.com/questions/16640624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2348735/"
] | There are multiple problems and they are not simple (unless the version of the ascensor script is outdated). The first issue is fairly simple, and illustrates the initial problem - some of the documentation doesn't match the code. In particular, the case doesn't match.
For example, you have `childType: 'section'` (lowercase c) but you need `ChildType: 'section'` (uppercase C). This is true of all the settings.
After fixing this the fun begins! The various arrays you specified follow the documentation, but the ascensor script in the jsfiddle is attempting to parse strings. `"hello1 | hello2 |..."` versus `['hello1', 'hello2'...]` (similar issue for the AscensorMap). Perhaps the ascensor script reference refers to an outdated version, or the plug-in has changed and the docs haven't been updated. | The reverse of the current answer is now true.
Using the latest version of Ascensor (1.8.0 (2014-02-23)), you have to specify the property names in lower case.
e.g. change `ChildType: 'section'` to `childType: 'section'`.
The examples all around the net are unfortunately using older versions. |
57,361,849 | I'm doing some dockerized code in Python (3.5) and flask (1.1.1) working against a CouchDB database (2.3.1) using the cloudant python extension (2.12.0) which seems to be the most up to date library to work against CouchDB.
I'm trying to fetch and use a view from the database, but it is not working. I can fetch documents, and work with the database normally, but I can't use the view.
I've added a print statement for the object that should hold the design document at the program start, and I see that the document shows as having no views (or anything at all) AND the CouchDB log shows NO requests for the design document being made.
I also tried to both get the design document and use the view via curl using the same URL and username/password, and both actions work successfully.
Here's sample code that fails:
```py
from flask import Flask, render_template, request, g
from cloudant.client import CouchDB
from cloudant.view import View
from cloudant.design_document import DesignDocument
import requests
application = Flask(__name__)
application.config.from_pyfile("config.py")
couch = CouchDB(application.config['COUCHDB_USER'], application.config['COUCHDB_PASSWORD'], url=application.config['COUCHDB_SERVER'], connect=True, auto_renew=True)
database = couch[application.config['COUCHDB_DATABASE']]
views = DesignDocument(database, '_design/vistas')
print(views)
print(views.list_views())
@application.route("/", methods=['GET', 'POST'])
def index():
    titulos = []
    for pelicula in View(views, 'titulos_peliculas'):
titulos.append({ "id": pelicula['id'], "titulo": pelicula['key'] })
return render_template('menu.html',titulos=titulos)
```
In that code, the print of the design document (views) returns:
```
{'lists': {}, 'indexes': {}, 'views': {}, 'shows': {}, '_id': '_design/vistas'}
```
With empty views, as shown. And the CouchDB log only shows the login to the database and getting the DB info:
```
couchdb:5984 172.23.0.4 undefined POST /_session 200 ok 69
couchdb:5984 172.23.0.4 vmb_web HEAD //peliculas 200 ok 232
```
No other queries at all.
No errors in the app log either. Even when I call the routed use of the views:
```
[pid: 21|app: 0|req: 1/1] 172.23.0.1 () {52 vars in 1225 bytes} [Mon Aug 5 15:03:24 2019] POST / => generated 1148 bytes in 56 msecs (HTTP/1.1 200) 2 headers in 81 bytes (1 switches on core 0)
```
And, as I said, I can get, and use the document:
```
curl http://vmb_web:password@127.0.0.1:999/peliculas/_design/vistas
```
```
{"_id":"_design/vistas","_rev":"1-e8108d41a6627ea61b9a89a637f574eb","language":"javascript","views":{"peliculas":{"map":"function(doc) { if (doc.schema == 'pelicula') { emit(doc.titulo, null); for(i=0;i<doc.titulos_alt.length;i++) { emit(doc.titulos_alt[i],null); } for(i=0;i<doc.directores.length;i++) { emit(doc.directores[i].nombre,null); } for(i=0;i<doc.actores.length;i++) { emit(doc.actores[i].nombre,null); } for(i=0;i<doc.escritores.length;i++) { emit(doc.escritores[i].nombre,null); } for(i=0;i<doc.etiquetas.length;i++) { emit(doc.etiquetas[i],null); } } }"},"titulos_peliculas":{"map":"function(doc) { if ((doc.schema == 'pelicula') && (doc.titulo)) { emit(doc.titulo, null); } }"},"archivos_peliculas":{"map":"function(doc) { if ((doc.schema == 'pelicula') && (doc.titulo)) { emit(doc.titulo, doc.archivo); } }"},"titulo_rev":{"map":"function(doc) { if ((doc.schema == 'pelicula') && (doc.titulo)) { emit(doc.titulo, doc._rev); } }"}}}
``` | 2019/08/05 | [
"https://Stackoverflow.com/questions/57361849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6879212/"
] | I'm answering my own question, in case someone in the future stumbles upon this. I got the answer from Esteban Laver on the python-cloudant GitHub, and it is what @chrisinmtown mentions in a response up there.
I was failing to call fetch() on the design document before using it.
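A minimal sketch of both fixes (my reconstruction from this answer, using the objects from the question):

```
views = DesignDocument(database, '_design/vistas')
views.fetch()               # actually load the design document from CouchDB
print(views.list_views())   # now lists the real views

# or let the database helper fetch the design doc and build the view result:
for row in database.get_view_result('_design/vistas', 'titulos_peliculas'):
    print(row['id'], row['key'])
```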
Another good suggestion was to use the get\_view\_result helper method for the database object which takes care of fetching the design document and instantiating the View object from the selected view all at once. | I believe the code posted above creates a new DesignDocument object, and does not search for an existing DesignDocument. After creating that object, it looks like you need to call its fetch() method and **then** check its views property. HTH.
p.s. promoting my comment to an answer, hope that's cool in SO land these days :) |
6,539,472 | I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1. On my Linux Ubuntu machine, I have Python 3.1.2, but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame.
```
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
```
Python 2.6.5 imports it without fuss, however,
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
>>>
```
Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2?
Thanks for any pointers. (--,) | 2011/06/30 | [
"https://Stackoverflow.com/questions/6539472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/777225/"
] | I hate to re-open an old post, but I had the hardest time installing pygame with a version of python that was not Ubuntu's default build. So I created this tutorial/how-to:
[Install python3.1 and pygame1.9.1 in Ubuntu](https://sites.google.com/site/cslappe1/knowledge-base-and-how-to-s/installpython31andpygame191inubuntu)
I hope this helps the next unfortunate soul to try this. | Just use the command below to install pygame for Python 3. I could install pygame correctly on Ubuntu 16.04 with Python 3.5.2.
`pip3 install pygame` |
6,539,472 | I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1. On my Linux Ubuntu machine, I have Python 3.1.2, but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame.
```
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
```
Python 2.6.5 imports it without fuss, however,
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
>>>
```
Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2?
Thanks for any pointers. (--,) | 2011/06/30 | [
"https://Stackoverflow.com/questions/6539472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/777225/"
] | I hate to re-open an old post, but I had the hardest time installing pygame with a version of python that was not Ubuntu's default build. So I created this tutorial/how-to:
[Install python3.1 and pygame1.9.1 in Ubuntu](https://sites.google.com/site/cslappe1/knowledge-base-and-how-to-s/installpython31andpygame191inubuntu)
I hope this helps the next unfortunate soul to try this. | I installed **pygame for python3** quite easily using the `pip3` (*a tool for installing and managing Python packages*) command on **Ubuntu 16.04.7 LTS**.
1. Open a terminal and install *pip3*, type `sudo apt install python3-pip`
2. Now use it to install *pygame* for **python3**, type `pip3 install pygame`
That's it! Import the library and confirm that everything works:
```
# I'll try it out using the python 3 interpreter.
python3 --version
Python 3.5.2
robert@robert-E7212:~/Source/Python/python_crash_course/alien_invasion$ python3
Python 3.5.2 (default, Oct 7 2020, 17:19:02)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
# No errors, pygame was imported successfully!
>>> import pygame
pygame 2.0.0 (SDL 2.0.12, python 3.5.2)
Hello from the pygame community. https://www.pygame.org/contribute.html
>>>
``` |
6,539,472 | I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1. On my Linux Ubuntu machine, I have Python 3.1.2, but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame.
```
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
```
Python 2.6.5 imports it without fuss, however,
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
>>>
```
Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2?
Thanks for any pointers. (--,) | 2011/06/30 | [
"https://Stackoverflow.com/questions/6539472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/777225/"
] | PyGame on Python 3 remains experimental, but these steps worked for me on Ubuntu 11.10:
```
sudo apt-get install mercurial python3-dev libjpeg-dev libpng12-dev libportmidi-dev libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev libsdl1.2-dev libsmpeg-dev libx11-dev ttf-freefont libavformat-dev libswscale-dev
hg clone -u 01b2cb16dc17 https://bitbucket.org/pygame/pygame
cd pygame
python3 config.py
2to3 setup.py -w
python3 setup.py build
sudo python3 setup.py install
```
(You may remove the `-u 01b2cb16dc17` to try the latest version; `01b2cb16dc17` worked for me.) | I installed **pygame for python3** quite easily using the `pip3` (*a tool for installing and managing Python packages*) command on **Ubuntu 16.04.7 LTS**.
1. Open a terminal and install *pip3*, type `sudo apt install python3-pip`
2. Now use it to install *pygame* for **python3**, type `pip3 install pygame`
That's it! Import the library and confirm that everything works:
```
# I'll try it out using the python 3 interpreter.
python3 --version
Python 3.5.2
robert@robert-E7212:~/Source/Python/python_crash_course/alien_invasion$ python3
Python 3.5.2 (default, Oct 7 2020, 17:19:02)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
# No errors, pygame was imported successfully!
>>> import pygame
pygame 2.0.0 (SDL 2.0.12, python 3.5.2)
Hello from the pygame community. https://www.pygame.org/contribute.html
>>>
``` |
6,539,472 | I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1 . In my Linux Ubuntu machine, I have Python 3.1.2 but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame.
```
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
```
Python 2.6.5 imports it without fuss, however,
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
>>>
```
Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2?
Thanks for any pointers. (--,) | 2011/06/30 | [
"https://Stackoverflow.com/questions/6539472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/777225/"
] | PyGame on Python 3 remains experimental, but these steps worked for me on Ubuntu 11.10:
```
sudo apt-get install mercurial python3-dev libjpeg-dev libpng12-dev libportmidi-dev libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev libsdl1.2-dev libsmpeg-dev libx11-dev ttf-freefont libavformat-dev libswscale-dev
hg clone -u 01b2cb16dc17 https://bitbucket.org/pygame/pygame
cd pygame
python3 config.py
2to3 setup.py -w
python3 setup.py build
sudo python3 setup.py install
```
(You may remove the `-u 01b2cb16dc17` to try the latest version; `01b2cb16dc17` worked for me.) | I followed @Søren's method, but without the -u number.
The only complication was a few compilation errors at the last step, all due to syntax and Unicode differences between Python 2 and Python 3. With a little checking of the web documentation, it took only a few minutes with a text editor to modify the following files (all paths are relative to the pygame directory created during the download):
```
gedit build/lib.linux-x86_64-3.2/pygame/colordict.py
gedit build/lib.linux-x86_64-3.2/pygame/tests/test-utils/png.py
gedit build/lib.linux-x86_64-3.2/pygame/examples/movieplayer.py
```
The line numbers from the compiler error messages are great for showing you where to start. The things to look out for are:
```
1 remove all references to u"xxxx" colours
2 use Python3 syntax for exceptions
3 change all print commands to Python3 equivalents
```
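For instance, the required edits look roughly like this (a hypothetical before/after fragment, not the actual pygame source):
```
# Python 2 style (as found in the pre-release sources):
#     colour = u"red"
#     except IOError, e:
#         print "failed:", e
# Python 3 equivalents:
colour = "red"                # the u"..." prefix is dropped
try:
    raise IOError("example")
except IOError as e:          # the comma syntax becomes "as"
    print("failed:", e)       # print is a function now
```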
Then re-issue the final compilation command:
```
sudo python3 setup.py install
```
If you miss one or two or get it wrong, just keep going round the loop editing and re-compiling till it works.
BTW I deliberately did not give details of the compiler messages, because I expect they will depend on the current build you download. The files I needed to change were for version '1.9.2pre' downloaded as of the date on this post. |
6,539,472 | I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1 . In my Linux Ubuntu machine, I have Python 3.1.2 but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame.
```
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
```
Python 2.6.5 imports it without fuss, however,
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
>>>
```
Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2?
Thanks for any pointers. (--,) | 2011/06/30 | [
"https://Stackoverflow.com/questions/6539472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/777225/"
] | I hate to re-open an old post, but I had the hardest time installing pygame with a version of Python that was not Ubuntu's default build. So I created this tutorial/how-to:
[Install python3.1 and pygame1.9.1 in Ubuntu](https://sites.google.com/site/cslappe1/knowledge-base-and-how-to-s/installpython31andpygame191inubuntu)
I hope this helps the next unfortunate soul who tries this. | I followed @Søren's method, but without the -u number.
The only complication was a few compilation errors at the last step, all due to syntax and Unicode differences between Python 2 and Python 3. With a little checking of the web documentation, it took only a few minutes with a text editor to modify the following files (all paths are relative to the pygame directory created during the download):
```
gedit build/lib.linux-x86_64-3.2/pygame/colordict.py
gedit build/lib.linux-x86_64-3.2/pygame/tests/test-utils/png.py
gedit build/lib.linux-x86_64-3.2/pygame/examples/movieplayer.py
```
The line numbers from the compiler error messages are great for showing you where to start. The things to look out for are:
```
1 remove all references to u"xxxx" colours
2 use Python3 syntax for exceptions
3 change all print commands to Python3 equivalents
```
Then re-issue the final compilation command:
```
sudo python3 setup.py install
```
If you miss one or two or get it wrong, just keep going round the loop editing and re-compiling till it works.
BTW I deliberately did not give details of the compiler messages, because I expect they will depend on the current build you download. The files I needed to change were for version '1.9.2pre' downloaded as of the date on this post. |
6,539,472 | I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1 . In my Linux Ubuntu machine, I have Python 3.1.2 but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame.
```
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
```
Python 2.6.5 imports it without fuss, however,
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
>>>
```
Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2?
Thanks for any pointers. (--,) | 2011/06/30 | [
"https://Stackoverflow.com/questions/6539472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/777225/"
] | I hate to re-open an old post, but I had the hardest time installing pygame with a version of Python that was not Ubuntu's default build. So I created this tutorial/how-to:
[Install python3.1 and pygame1.9.1 in Ubuntu](https://sites.google.com/site/cslappe1/knowledge-base-and-how-to-s/installpython31andpygame191inubuntu)
I hope this helps the next unfortunate soul who tries this. | It's because installing the `python-pygame` package installs it for the default version of Python on your system, 2.6.5 in this case. You should download the pygame package and use setup.py to install it in 3.1.2. |
6,539,472 | I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1 . In my Linux Ubuntu machine, I have Python 3.1.2 but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame.
```
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
```
Python 2.6.5 imports it without fuss, however,
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
>>>
```
Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2?
Thanks for any pointers. (--,) | 2011/06/30 | [
"https://Stackoverflow.com/questions/6539472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/777225/"
] | PyGame on Python 3 remains experimental, but these steps worked for me on Ubuntu 11.10:
```
sudo apt-get install mercurial python3-dev libjpeg-dev libpng12-dev libportmidi-dev libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev libsdl1.2-dev libsmpeg-dev libx11-dev ttf-freefont libavformat-dev libswscale-dev
hg clone -u 01b2cb16dc17 https://bitbucket.org/pygame/pygame
cd pygame
python3 config.py
2to3 setup.py -w
python3 setup.py build
sudo python3 setup.py install
```
(You may remove the `-u 01b2cb16dc17` to try the latest version; `01b2cb16dc17` worked for me.) | The python-pygame package is only compiled for python2.6 and python2.7 where I am. You'll have to install it again, possibly from a python3 branch of the source. |
6,539,472 | I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1 . In my Linux Ubuntu machine, I have Python 3.1.2 but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame.
```
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
```
Python 2.6.5 imports it without fuss, however,
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
>>>
```
Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2?
Thanks for any pointers. (--,) | 2011/06/30 | [
"https://Stackoverflow.com/questions/6539472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/777225/"
] | I installed **pygame for python3** quite easily using the `pip3` (*a tool for installing and managing Python packages*) command on **Ubuntu 16.04.7 LTS**.
1. Open a terminal and install *pip3*, type `sudo apt install python3-pip`
2. Now use it to install *pygame* for **python3**, type `pip3 install pygame`
That's it! Import the library and confirm that everything works:
```
# I'll try it out using the python 3 interpreter.
python3 --version
Python 3.5.2
robert@robert-E7212:~/Source/Python/python_crash_course/alien_invasion$ python3
Python 3.5.2 (default, Oct 7 2020, 17:19:02)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
# No errors, pygame was imported successfully!
>>> import pygame
pygame 2.0.0 (SDL 2.0.12, python 3.5.2)
Hello from the pygame community. https://www.pygame.org/contribute.html
>>>
``` | It's because installing the `python-pygame` package installs it for the default version of Python on your system, 2.6.5 in this case. You should download the pygame package and use setup.py to install it in 3.1.2. |
6,539,472 | I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1 . In my Linux Ubuntu machine, I have Python 3.1.2 but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame.
```
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
```
Python 2.6.5 imports it without fuss, however,
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
>>>
```
Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2?
Thanks for any pointers. (--,) | 2011/06/30 | [
"https://Stackoverflow.com/questions/6539472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/777225/"
] | I installed **pygame for python3** quite easily using the `pip3` (*a tool for installing and managing Python packages*) command on **Ubuntu 16.04.7 LTS**.
1. Open a terminal and install *pip3*, type `sudo apt install python3-pip`
2. Now use it to install *pygame* for **python3**, type `pip3 install pygame`
That's it! Import the library and confirm that everything works:
```
# I'll try it out using the python 3 interpreter.
python3 --version
Python 3.5.2
robert@robert-E7212:~/Source/Python/python_crash_course/alien_invasion$ python3
Python 3.5.2 (default, Oct 7 2020, 17:19:02)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
# No errors, pygame was imported successfully!
>>> import pygame
pygame 2.0.0 (SDL 2.0.12, python 3.5.2)
Hello from the pygame community. https://www.pygame.org/contribute.html
>>>
``` | I followed @Søren's method, but without the -u number.
The only complication was a few compilation errors at the last step, all due to syntax and Unicode differences between Python 2 and Python 3. With a little checking of the web documentation, it took only a few minutes with a text editor to modify the following files (all paths are relative to the pygame directory created during the download):
```
gedit build/lib.linux-x86_64-3.2/pygame/colordict.py
gedit build/lib.linux-x86_64-3.2/pygame/tests/test-utils/png.py
gedit build/lib.linux-x86_64-3.2/pygame/examples/movieplayer.py
```
The line numbers from the compiler error messages are great for showing you where to start. The things to look out for are:
```
1 remove all references to u"xxxx" colours
2 use Python3 syntax for exceptions
3 change all print commands to Python3 equivalents
```
Then re-issue the final compilation command:
```
sudo python3 setup.py install
```
If you miss one or two or get it wrong, just keep going round the loop editing and re-compiling till it works.
BTW I deliberately did not give details of the compiler messages, because I expect they will depend on the current build you download. The files I needed to change were for version '1.9.2pre' downloaded as of the date on this post. |
6,539,472 | I'm reading the book *Introduction to Computer Science Using Python and Pygame* by Paul Craven (note: legally available for free online). In the book, he uses a combination of Python 3.1.3 and Pygame 1.9.1 . In my Linux Ubuntu machine, I have Python 3.1.2 but even after I sudo apt-get installed python-pygame (version 1.9.1), Python 3.1.2 can't import pygame.
```
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
```
Python 2.6.5 imports it without fuss, however,
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygame
>>>
```
Are you aware of any issues for Linux/Ubuntu's Python 3.1.2 (Prof. Craven used Windows in his book)? How come Pygame 1.9.1 worked for Python 3.1.3 but not for 3.1.2?
Thanks for any pointers. (--,) | 2011/06/30 | [
"https://Stackoverflow.com/questions/6539472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/777225/"
] | I installed **pygame for python3** quite easily using the `pip3` (*a tool for installing and managing Python packages*) command on **Ubuntu 16.04.7 LTS**.
1. Open a terminal and install *pip3*, type `sudo apt install python3-pip`
2. Now use it to install *pygame* for **python3**, type `pip3 install pygame`
That's it! Import the library and confirm that everything works:
```
# I'll try it out using the python 3 interpreter.
python3 --version
Python 3.5.2
robert@robert-E7212:~/Source/Python/python_crash_course/alien_invasion$ python3
Python 3.5.2 (default, Oct 7 2020, 17:19:02)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
# No errors, pygame was imported successfully!
>>> import pygame
pygame 2.0.0 (SDL 2.0.12, python 3.5.2)
Hello from the pygame community. https://www.pygame.org/contribute.html
>>>
``` | Just use the command below to install pygame for Python 3. I could install pygame correctly on Ubuntu 16.04 with Python 3.5.2.
pip3 install pygame |
42,349,191 | This is a typical use case for FEM/FVM equation systems, so is perhaps of broader interest. From a triangular mesh à la
[![enter image description here](https://i.stack.imgur.com/RS6MJ.png)](https://i.stack.imgur.com/RS6MJ.png)
I would like to create a `scipy.sparse.csr_matrix`. The matrix rows/columns represent values at the nodes of the mesh. The matrix has entries on the main diagonal and wherever two nodes are connected by an edge.
Here's an MWE that first builds a node->edge->cells relationship and then builds the matrix:
```
import numpy
import meshzoo
from scipy import sparse
nx = 1600
ny = 1000
verts, cells = meshzoo.rectangle(0.0, 1.61, 0.0, 1.0, nx, ny)
n = len(verts)
nds = cells.T
nodes_edge_cells = numpy.stack([nds[[1, 2]], nds[[2, 0]],nds[[0, 1]]], axis=1)
# assign values to each edge (per cell)
alpha = numpy.random.rand(3, len(cells))
vals = numpy.array([
[alpha**2, -alpha],
[-alpha, alpha**2],
])
# Build I, J, V entries for COO matrix
I = []
J = []
V = []
#
V.append(vals[0][0])
V.append(vals[0][1])
V.append(vals[1][0])
V.append(vals[1][1])
#
I.append(nodes_edge_cells[0])
I.append(nodes_edge_cells[0])
I.append(nodes_edge_cells[1])
I.append(nodes_edge_cells[1])
#
J.append(nodes_edge_cells[0])
J.append(nodes_edge_cells[1])
J.append(nodes_edge_cells[0])
J.append(nodes_edge_cells[1])
# Create suitable data for coo_matrix
I = numpy.concatenate(I).flat
J = numpy.concatenate(J).flat
V = numpy.concatenate(V).flat
matrix = sparse.coo_matrix((V, (I, J)), shape=(n, n))
matrix = matrix.tocsr()
```
With
```
python -m cProfile -o profile.prof main.py
snakeviz profile.prof
```
one can create and view a profile of the above:
[![enter image description here](https://i.stack.imgur.com/TmNtl.png)](https://i.stack.imgur.com/TmNtl.png)
The method `tocsr()` takes the lion's share of the runtime here, and this remains true when the construction of `alpha` is more complex. Consequently, I'm looking for ways to speed this up.
What I've already found:
* Due to the structure of the data, the values on the diagonal of the matrix can be summed up in advance, i.e.,
```
V.append(vals[0, 0, 0] + vals[1, 1, 2])
I.append(nodes_edge_cells[0, 0]) # == nodes_edge_cells[1, 2]
J.append(nodes_edge_cells[0, 0]) # == nodes_edge_cells[1, 2]
```
This makes `I`, `J`, `V` shorter and thus speeds up `tocsr`.
* Right now, edges are "per cell". I could identify equal edges with each other using `numpy.unique`, effectively saving about half of `I`, `J`, `V`. However, I found that this too takes some time. (Not surprising.)
One other thought I had was that I could replace the diagonal `V`, `I`, `J` by a simple `numpy.add.at` if there was a `csr_matrix`-like data structure where the main diagonal is kept separately. I know that this exists in some other software packages, but I couldn't find it in scipy. Correct?
Perhaps there's a sensible way to construct CSR directly? | 2017/02/20 | [
"https://Stackoverflow.com/questions/42349191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/353337/"
] | I would try creating the csr structure directly, especially if you are resorting to `np.unique` since this gives you sorted keys, which is half the job done.
I'm assuming you are at the point where you have `i, j` sorted lexicographically and overlapping `v` summed using `np.add.at` on the optional `inverse` output of `np.unique`.
Then `v` and `j` are already in csr format. All that's left to do is creating the `indptr` which you simply get by `np.searchsorted(i, np.arange(M+1))` where `M` is the column length. You can pass these directly to the `sparse.csr_matrix` constructor.
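(A tiny toy illustration of that `indptr` construction, with made-up numbers:)
```
import numpy as np

i = np.array([0, 0, 2, 2, 2, 3])   # sorted row indices of the nonzeros
M = 4                              # number of rows
indptr = np.searchsorted(i, np.arange(M + 1))
print(indptr)                      # [0 2 2 5 6] -> row k owns data[indptr[k]:indptr[k+1]]
```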
Ok, let code speak:
```
import numpy as np
from scipy import sparse
from timeit import timeit
def tocsr(I, J, E, N):
n = len(I)
K = np.empty((n,), dtype=np.int64)
K.view(np.int32).reshape(n, 2).T[...] = J, I
S = np.argsort(K)
KS = K[S]
steps = np.flatnonzero(np.r_[1, np.diff(KS)])
ED = np.add.reduceat(E[S], steps)
JD, ID = KS[steps].view(np.int32).reshape(-1, 2).T
ID = np.searchsorted(ID, np.arange(N+1))
return sparse.csr_matrix((ED, np.array(JD, dtype=int), ID), (N, N))
def viacoo(I, J, E, N):
return sparse.coo_matrix((E, (I, J)), (N, N)).tocsr()
#testing and timing
# correctness
N = 1000
A = np.random.random((N, N)) < 0.001
I, J = np.where(A)
E = np.random.random((2, len(I)))
D = np.zeros((2,) + A.shape)
D[:, I, J] = E
D2 = tocsr(np.r_[I, I], np.r_[J, J], E.ravel(), N).A
print('correct:', np.allclose(D.sum(axis=0), D2))
# speed
N = 100000
K = 10
I, J = np.random.randint(0, N, (2, K*N))
E = np.random.random((2 * len(I),))
I, J, E = np.r_[I, I, J, J], np.r_[J, J, I, I], np.r_[E, E]
print('N:', N, ' -- nnz (with duplicates):', len(E))
print('direct: ', timeit('f(a,b,c,d)', number=10, globals={'f': tocsr, 'a': I, 'b': J, 'c': E, 'd': N}), 'secs for 10 iterations')
print('via coo:', timeit('f(a,b,c,d)', number=10, globals={'f': viacoo, 'a': I, 'b': J, 'c': E, 'd': N}), 'secs for 10 iterations')
```
Prints:
```
correct: True
N: 100000 -- nnz (with duplicates): 4000000
direct: 7.702431229001377 secs for 10 iterations
via coo: 41.813509466010146 secs for 10 iterations
```
Speedup: 5x | So, in the end this turned out to be the difference between COO's and CSR's `sum_duplicates` (just like @hpaulj suspected). Thanks to the efforts of everyone involved here (particularly @paul-panzer), [a PR](https://github.com/scipy/scipy/pull/7078) is underway to give `tocsr` a tremendous speedup.
SciPy's `tocsr` does a `lexsort` on `(I, J)`, so it helps to organize the indices in such a way that `(I, J)` comes out fairly sorted already.
For `nx=4`, `ny=2` in the above example, `I` and `J` are
```
[1 6 3 5 2 7 5 5 7 4 5 6 0 2 2 0 1 2 1 6 3 5 2 7 5 5 7 4 5 6 0 2 2 0 1 2 5 5 7 4 5 6 0 2 2 0 1 2 1 6 3 5 2 7 5 5 7 4 5 6 0 2 2 0 1 2 1 6 3 5 2 7]
[1 6 3 5 2 7 5 5 7 4 5 6 0 2 2 0 1 2 5 5 7 4 5 6 0 2 2 0 1 2 1 6 3 5 2 7 1 6 3 5 2 7 5 5 7 4 5 6 0 2 2 0 1 2 5 5 7 4 5 6 0 2 2 0 1 2 1 6 3 5 2 7]
```
First sorting each row of `cells`, then sorting the rows by their first column, like
```
cells = numpy.sort(cells, axis=1)
cells = cells[cells[:, 0].argsort()]
```
produces
```
[1 4 2 5 3 6 5 5 5 6 7 7 0 0 1 2 2 2 1 4 2 5 3 6 5 5 5 6 7 7 0 0 1 2 2 2 5 5 5 6 7 7 0 0 1 2 2 2 1 4 2 5 3 6 5 5 5 6 7 7 0 0 1 2 2 2 1 4 2 5 3 6]
[1 4 2 5 3 6 5 5 5 6 7 7 0 0 1 2 2 2 5 5 5 6 7 7 0 0 1 2 2 2 1 4 2 5 3 6 1 4 2 5 3 6 5 5 5 6 7 7 0 0 1 2 2 2 5 5 5 6 7 7 0 0 1 2 2 2 1 4 2 5 3 6]
```
For the numbers in the original post, sorting cuts the runtime down from about 40 seconds to 8 seconds.
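If one wants to go further and renumber the nodes for locality, SciPy ships a bandwidth-reducing permutation; a minimal sketch (hypothetical usage, applied to the already-assembled `matrix` from the question):
```
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Permutation that clusters the nonzeros of the structurally symmetric
# matrix around the diagonal, i.e. a bandwidth-reducing renumbering.
perm = reverse_cuthill_mckee(matrix, symmetric_mode=True)
matrix_rcm = matrix[perm, :][:, perm]
```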
Perhaps an even better ordering can be achieved if the nodes are numbered more appropriately in the first place; I'm thinking of [Cuthill-McKee](https://en.wikipedia.org/wiki/Cuthill%E2%80%93McKee_algorithm) and [friends](https://en.wikipedia.org/wiki/Minimum_degree_algorithm), as sketched above. |
69,276,976 | I've tried the way I was instructed and moved the CSV I was given into the same folder where my Jupyter Notebook is located. It still isn't reading it. I'm also trying to convert it into a dataframe and get it to 'describe'. I'll post the code and the errors below. Please help! Thank you in advance!
```
import pandas as pd
fish = pd.read_csv('c:\\Users\\M\anaconda3\\Scripts\\Fish')
fish2 = pd.DataFrame(fish)
fish2.to_csv('fishdata.csv')
fish2.describe()
```
```
OSError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_11624/2289113011.py in <module>
----> 1 fish = pd.read_csv('c:\\Users\\M\anaconda3\\Scripts\\Fish')
~\anaconda3\lib\site-packages\pandas\util\_decorators.py in wrapper(*args, **kwargs)
309 stacklevel=stacklevel,
310 )
--> 311 return func(*args, **kwargs)
312
313 return wrapper
~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
584 kwds.update(kwds_defaults)
585
--> 586 return _read(filepath_or_buffer, kwds)
587
588
~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py in _read(filepath_or_buffer, kwds)
480
481 # Create the parser.
--> 482 parser = TextFileReader(filepath_or_buffer, **kwds)
483
484 if chunksize or iterator:
~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py in __init__(self, f, engine, **kwds)
809 self.options["has_index_names"] = kwds["has_index_names"]
810
--> 811 self._engine = self._make_engine(self.engine)
812
813 def close(self):
~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py in _make_engine(self, engine)
1038 )
1039 # error: Too many arguments for "ParserBase"
-> 1040 return mapping[engine](self.f, **self.options) # type: ignore[call-arg]
1041
1042 def _failover_to_python(self):
~\anaconda3\lib\site-packages\pandas\io\parsers\c_parser_wrapper.py in __init__(self, src, **kwds)
49
50 # open handles
---> 51 self._open_handles(src, kwds)
52 assert self.handles is not None
53
~\anaconda3\lib\site-packages\pandas\io\parsers\base_parser.py in _open_handles(self, src, kwds)
220 Let the readers open IOHandles after they are done with their potential raises.
221 """
--> 222 self.handles = get_handle(
223 src,
224 "r",
~\anaconda3\lib\site-packages\pandas\io\common.py in get_handle(path_or_buf, mode, encoding, compression, memory_map, is_text, errors, storage_options)
699 if ioargs.encoding and "b" not in ioargs.mode:
700 # Encoding
--> 701 handle = open(
702 handle,
703 ioargs.mode,
OSError: [Errno 22] Invalid argument: 'c:\\Users\\M\x07naconda3\\Scripts\\Fish'
``` | 2021/09/22 | [
"https://Stackoverflow.com/questions/69276976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Deno does not currently support "classic" workers.
1. From [Worker() - Web APIs | MDN](https://developer.mozilla.org/en-US/docs/Web/API/Worker/Worker):
>
> `type`: A [`DOMString`](https://developer.mozilla.org/en-US/docs/Web/API/DOMString) specifying the type of worker to create. The value can be `classic` or `module`. If not specified, the default used is `classic`.
>
>
>
2. From [Workers | Manual | Deno](https://deno.land/manual/runtime/workers):
>
> Currently Deno supports only `module` type workers; thus it's essential to pass the `type: "module"` option when creating a new worker.
>
>
>
For your use case you might be able to use a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs). e.g.:
```
new Worker(
`data:text/javascript;base64,${btoa(
`console.log("hello world"); self.close();`
)}`,
{ type: "module" }
);
``` | The information provided in [mfulton26's answer](https://stackoverflow.com/a/69292184/438273) is right, but you don't need a data URL: you simply need to add `{ type: "module" }` to your worker instantiation options. Deno even supports TypeScript as the source for your worker:
`blob-worker.ts`:
```ts
const workerModuleSource = `
const version: string = Deno.version.deno;
console.log(\`Hello from Deno v\${version}\`);
self.close();
`;
const blob = new Blob(
[workerModuleSource],
{type: 'application/typescript'},
);
const objUrl = URL.createObjectURL(blob);
const worker = new Worker(objUrl, {
deno: true,
type: 'module',
});
URL.revokeObjectURL(objUrl);
```
```
$ deno run --unstable blob-worker.ts
Hello from Deno v1.14.1
``` |
56,452,581 | I have almost the same problem as this one:
[How to make a continuous alphabetic list python (from a-z then from aa, ab, ac etc)](https://stackoverflow.com/questions/29351492/how-to-make-a-continuous-alphabetic-list-python-from-a-z-then-from-aa-ab-ac-e)
But I am building a list in a GUI like Excel, where the vertical header should contain the letters ...aa,ab,ac....dg,dh,di...
To do it, I have to assign every place in my list to a certain letter. It is probably impossible with yield.
I mean, say I have 100 cells and I want to name them all differently.
Cell 1 should be "A", Cell 2 should be "B".... Cell 27 should be "AA" and so on. You probably know it from Excel.
I could do it manually, but it is going to take a lot of time.
Well, I tried to play a little with this code underneath, but without success.
I know that there should be a loop somewhere, but I have no idea where.
```
from string import ascii_lowercase
import itertools
def iter_all_strings():
for size in itertools.count(1):
for s in itertools.product(ascii_lowercase, repeat=size):
yield "".join(s)
for s in iter_all_strings():
print(s)
if s == 'bb':
break
```
The loop
"for s in iter\_all\_strings():"
counts until the break. I would say my loop for iterating over my cells should go here, but there's just no place for it. | 2019/06/04 | [
"https://Stackoverflow.com/questions/56452581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11329096/"
] | Another alternative, if you want to dive deeper (create up to ~18,000 columns):
```
from string import ascii_lowercase
letters = list(ascii_lowercase)
num_cols = 100
excel_cols = []
for i in range(0, num_cols - 1):
n = i//26
m = n//26
i-=n*26
n-=m*26
col = letters[m-1]+letters[n-1]+letters[i] if m>0 else letters[n-1]+letters[i] if n>0 else letters[i]
excel_cols.append(col)
``` | Try this code. It works by pretending that all Excel column names have two characters, but the first "character" may be the null string. I get the `product` to accept the null string as a "character" by using a list of characters rather than a string.
```
from string import ascii_lowercase
import itertools
first_char = [''] + list(ascii_lowercase)
def iter_excel_columns():
for char1, char2 in itertools.product(first_char, ascii_lowercase):
yield char1 + char2
for s in iter_excel_columns():
print(s)
if s == 'bb':
break
```
This gives the printout that you apparently want:
```
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
aa
ab
ac
ad
ae
af
ag
ah
ai
aj
ak
al
am
an
ao
ap
aq
ar
as
at
au
av
aw
ax
ay
az
ba
bb
``` |
56,452,581 | I have almost the same problem as this one:
[How to make a continuous alphabetic list python (from a-z then from aa, ab, ac etc)](https://stackoverflow.com/questions/29351492/how-to-make-a-continuous-alphabetic-list-python-from-a-z-then-from-aa-ab-ac-e)
But I am building a list in a GUI like Excel, where the vertical header should contain the letters ...aa,ab,ac....dg,dh,di...
To do it, I have to assign every place in my list to a certain letter. It is probably impossible with yield.
I mean, say I have 100 cells and I want to name them all differently.
Cell 1 should be "A", Cell 2 should be "B".... Cell 27 should be "AA" and so on. You probably know it from Excel.
I could do it manually, but it is going to take a lot of time.
Well, I tried to play a little with this code underneath, but without success.
I know that there should be a loop somewhere, but I have no idea where.
```
from string import ascii_lowercase
import itertools
def iter_all_strings():
for size in itertools.count(1):
for s in itertools.product(ascii_lowercase, repeat=size):
yield "".join(s)
for s in iter_all_strings():
print(s)
if s == 'bb':
break
```
The loop
"for s in iter\_all\_strings():"
counts until the break. I would say my loop for iterating over my cells should go here, but there's just no place for it. | 2019/06/04 | [
"https://Stackoverflow.com/questions/56452581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11329096/"
] | Another alternative, if you want to dive deeper (create up to ~18,000 columns):
```
from string import ascii_lowercase
letters = list(ascii_lowercase)
num_cols = 100
excel_cols = []
for i in range(0, num_cols - 1):
n = i//26
m = n//26
i-=n*26
n-=m*26
col = letters[m-1]+letters[n-1]+letters[i] if m>0 else letters[n-1]+letters[i] if n>0 else letters[i]
excel_cols.append(col)
``` | Here is another way to approach the problem. It also lets you specify the number of columns you want to generate, works for any "two character" columns, and would also work if you changed the allowed letters for some reason:
```
from string import ascii_lowercase
letters = list(ascii_lowercase)
num_cols = 100
excel_cols = []
for i in range(0, num_cols - 1):
    col = ""
    if i // len(letters) > 0:  # integer division, so this also runs on Python 3
        col = str(letters[i // len(letters) - 1])
    col += letters[i % len(letters)]
    excel_cols.append(col)
print(excel_cols)
#output: ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'aa', 'ab', 'ac', 'ad', 'ae', 'af', 'ag', 'ah', 'ai', 'aj', 'ak', 'al', 'am', 'an', 'ao', 'ap', 'aq', 'ar', 'as', 'at', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'ba', 'bb', 'bc', 'bd', 'be', 'bf', 'bg', 'bh', 'bi', 'bj', 'bk', 'bl', 'bm', 'bn', 'bo', 'bp', 'bq', 'br', 'bs', 'bt', 'bu', 'bv', 'bw', 'bx', 'by', 'bz', 'ca', 'cb', 'cc', 'cd', 'ce', 'cf', 'cg', 'ch', 'ci', 'cj', 'ck', 'cl', 'cm', 'cn', 'co', 'cp', 'cq', 'cr', 'cs', 'ct', 'cu']
```
If you wanted to work the exercise of going past two character column names, here's a teaser to get you started. The actual implementation is left as an exercise for the reader:
```
def get_loops_needed(num_cols):
    loops_needed = 0
    temp = num_cols
    while True:
        temp = temp // len(letters)  # integer division; float division would never reach 0
        loops_needed += 1
        if temp == 0:
            break
    return loops_needed
``` |
64,834,395 | I use Linux.
Node.js had no problem until I upgraded my system (sudo apt upgrade).
Now when I try to install nodejs it says python-minimal is not installable.
Then I realized this was caused by updating Python from python2.7.17 to python2.7.18, after which python-minimal is no longer required; but now I can't install nodejs because it asks for python-minimal.
Can anyone help me?
The problem occurs when I want to install nodejs 15 or 14; when I install nodejs 12 or lower there is no problem.
This is what it says after I run `sudo apt install nodejs`:
```
sudo apt upgrade nodejs
[sudo] password for julian:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
libnode72 : Conflicts: nodejs-legacy
nodejs : Depends: python-minimal but it is not installable
E: Broken packages
``` | 2020/11/14 | [
"https://Stackoverflow.com/questions/64834395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14535629/"
] | jq does not have an `eval` function for evaluating arbitrary jq expressions, but it does provide functions that can be used to achieve much the same effect, the key idea being that certain JSON values can be used to specify query operations.
In your case, you would have to translate the jq query into a suitable jq operation, such as:
```
jq --argjson a '["Header","Tenant"]' '
getpath(paths|select( .[- ($a|length) :]== $a))
' test.json
```
Extending jq's JSON-based query language
----------------------------------------
More interestingly, you could write your own `eval`, e.g.
```
jq --argjson a '[[], "Header","Tenant"]' '
def eval($expr):
if $expr == [] then .
else $expr[0] as $op
| if $op == [] then .[] | eval($expr[1:])
else getpath([$op]) | eval($expr[1:])
end
end;
eval($a)
' test.json
```
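For intuition, here is a hypothetical Python analogue of the jq `eval` above (an `[]` element plays the role of the `.[]` iteration step; the sample data mirrors the Header/Tenant structure used throughout):
```
def eval_path(value, expr):
    """Walk `value` along `expr`; an [] element means 'iterate over the array'."""
    if not expr:
        yield value
    elif expr[0] == []:
        for item in value:
            yield from eval_path(item, expr[1:])
    else:
        yield from eval_path(value[expr[0]], expr[1:])

data = [{"Header": {"Tenant": "Tenant1"}}, {"Header": {"Tenant": "Tenant2"}}]
print(list(eval_path(data, [[], "Header", "Tenant"])))  # ['Tenant1', 'Tenant2']
```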
With eval.jq as a module
------------------------
If the above def of `eval` were put in a file, say ~/jq/eval.jq, then you could simply write:
```
jq -L ~/jq --argjson a '[[], "Header","Tenant"]' '
include "eval";
eval($a)' test.json
```
Or you could specify the search path in the jq program:
```
jq --argjson a '[[], "Header","Tenant"]' '
include "eval" { "search": "~/jq" };
eval($a)' input.json
```
Or you could use `import` ... | **TLDR;** The following code does the job:
```
$ a=".[].Header.Tenant"; jq -f <(echo "[$a]") test.json
[
"Tenant1",
"Tenant2"
]
```
One can also add to or modify the filter in the jq call, if needed:
```
$ a=".[].Header.Tenant"; jq -f <(echo "[$a]|length") test.json
2
```
**Longer explanation**
My ultimate goal was to figure out how I can define the lowest common denominator jq filter in a variable and use it when calling jq, plus add additional parameters if necessary. If you have a really complex jq filter spanning multiple lines that you call frequently, you probably want to template it somehow and use that template when calling jq.
While *peak* demonstrated how it can be done, I think it is overengineering for this simple task.
However, using process substitution combined with jq's `-f` option to read a filter from a file does solve my problem.
61,081,016 | After following the official RTD installation tutorial for Ubuntu 18, I managed to do everything (even webhooks) until the point of building, for a project called **test**, where I get the following error:
>
> python3.6 -mvirtualenv /home/myuser/readthedocs.org/user\_builds/test/envs/latest
>
>
>
Followed by:
>
> There must be only one argument: DEST\_DIR (you gave /home/gcsuser/readthedocs.org/user\_builds/test/envs/latest)
> Usage: virtualenv.py [OPTIONS] DEST\_DIR
>
>
>
[![The output when building](https://i.stack.imgur.com/wSLZi.jpg)](https://i.stack.imgur.com/wSLZi.jpg)
I haven't changed the DEST\_DIR (nor do I know where to change it).
The debug.log states exactly what is shown in the image.
Here is the full log after clicking the "Build" button:
[`Full log in here`](https://pastebin.com/QQz6Rn47)
If I run the command "python3.6 -mvirtualenv /home/myuser/readthedocs.org/user\_builds/myprojecto/envs/latest" in the command line the result is the following:
>
> (venv) myuser@lxgcsrtd01:~/readthedocs.org$ python3.6 -mvirtualenv /home/myuser/readthedocs.org/user\_builds/myprojecto/envs/latest
>
>
> Using real prefix '/usr'
>
>
> Path not in prefix '/home/myuser/readthedocs.org/venv/include/python3.6m' '/usr'
>
>
> New python executable in /home/myuser/readthedocs.org/user\_builds/myprojecto/envs/latest/bin/python3.6
>
>
> Not overwriting existing python scrip /home/myuser/readthedocs.org/user\_builds/myprojecto/envs/latest/bin/python (you must use /home/myuser/readthedocs.org/user\_builds/myprojecto/envs/latest/bin/python3.6)
>
>
> Installing setuptools, pip, wheel... done. (venv)
>
>
> myuser@lxgcsrtd01:~/readthedocs.org$
>
>
>
It looks like, as the error states, there are two arguments, yet I can't find any space in between the arguments, and if I copy-paste the command to the shell it doesn't throw any errors. | 2020/04/07 | [
"https://Stackoverflow.com/questions/61081016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2236386/"
] | Your running job #3 has only 4 tasks (screenshot #2); that's why you see 4 executors. Spark doesn't need 6 executors to complete 4 tasks.
Each executor (screenshot #3) has 5 cores and what looks like 14GB memory ((14GB - 300MB) \* 0.6 ~ 7.8GB). See [Spark memory management](https://spark.apache.org/docs/latest/configuration.html#memory-management).
Each executor executes a single task, which means it uses only one core out of the 5 allocated, hence the low CPU usage. (In Spark, an executor with X cores can process X tasks in parallel. It can NOT process one task on X cores.) | You have only 2 nodes with 16 vCores each, for a total of 32 vCores, which you can see in your Yarn UI.
Now when you submit your job, you are requesting Yarn to create 6 containers (executors) with 5 vCores each, but on a single node you can have at most 2 executors given the 5-vCore requirement (10 vCores used to create 2 executors on a single worker node, out of the 16 vCores available).
You will end up getting at most 4 executors anyway. One executor can't span multiple worker nodes. |
21,579,459 | I am just starting on Python from a PHP background. I was wondering if there is a more elegant way of assigning a variable the result of an "if ... in" statement?
I currently do
```
is_holiday = False
if now_date in holidays:
is_holiday = True
```
To me it looks like an unnecessary number of lines of code. Or is this the absolute minimum that meets Python style guides? | 2014/02/05 | [
"https://Stackoverflow.com/questions/21579459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/912588/"
] | ```
is_holiday = now_date in holidays
``` | Use [Conditional expressions](http://docs.python.org/2/reference/expressions.html#conditional-expressions):
`is_holiday = True if now_date in holidays else False`
or just `is_holiday = now_date in holidays`. |
17,502,704 | I am trying to use the tempfile module. (<http://docs.python.org/2.7/library/tempfile.html>)
I am looking for a temporary file that I could open several times to get several streams to read it.
```
tmp = ...
stream1 = # get a stream for the temp file
stream2 = # get another stream for the temp file
```
I have tried several functions (TemporaryFile, NamedTemporaryFile, SpooledTemporaryFile) and tried using the fileno method, but I could not achieve what I am looking for.
Any ideas? Or should I just make my own class?
Thanks
UPDATE
------
I get an error trying to open the file with its name...
```
In [2]: t = tempfile.NamedTemporaryFile()
In [3]: t.write('abcdef'*1000000)
In [4]: t.name
Out[4]: 'c:\\users\\mike\\appdata\\local\\temp\\tmpczggbt'
In [5]: f = open(t.name)
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
<ipython-input-6-03b9332531d2> in <module>()
----> 1 f = open(t.name)
IOError: [Errno 13] Permission denied: 'c:\\users\\mike\\appdata\\local\\temp\\tmpczggbt'
``` | 2013/07/06 | [
"https://Stackoverflow.com/questions/17502704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1232891/"
] | File objects (be they temporary or otherwise) cannot be read multiple times without re-positioning the file position back to the start.
Your options are:
* To reopen the file multiple times, creating multiple file objects for the same file.
* To rewind the file object before each read.
To reopen the file, use a `NamedTemporaryFile` and use a regular `open()` call to re-open the same filename several times. You probably will want to pass `delete=False` to the constructor, especially on Windows, to be able to do this.
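A minimal sketch of the reopen approach (illustrative only; with `delete=False` the cleanup is your job):
```
import os
import tempfile

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"abcdef")
tmp.flush()                      # make sure the bytes are on disk before reopening

stream1 = open(tmp.name, "rb")   # independent stream #1
stream2 = open(tmp.name, "rb")   # independent stream #2, with its own file position
print(stream1.read(3), stream2.read(3))

stream1.close()
stream2.close()
tmp.close()
os.unlink(tmp.name)              # delete=False means you remove the file yourself
```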
To rewind, call `.seek(0)` on the file object. | You could use [`tempfile.mkstemp()`](http://docs.python.org/2.7/library/tempfile.html#tempfile.mkstemp). From the documentation:
>
> Creates a temporary file in the most secure manner possible. There are no race conditions in the file’s creation, assuming that the platform properly implements the os.O\_EXCL flag for os.open(). The file is readable and writable only by the creating user ID. If the platform uses permission bits to indicate whether a file is executable, the file is executable by no one. The file descriptor is not inherited by child processes.
>
>
> Unlike TemporaryFile(), the user of mkstemp() is responsible for deleting the temporary file when done with it.
>
>
>
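Putting that together, a minimal sketch (names are illustrative):
```
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)             # reopen by name instead of using the raw descriptor

with open(path, "w") as f:
    f.write("abcdef")

stream1 = open(path)     # each open() gets its own independent file position
stream2 = open(path)
print(stream1.read(), stream2.read())
stream1.close()
stream2.close()

os.unlink(path)          # mkstemp does not clean up after you
```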
You can then use the built-in `open()` function to open that file several times, as the sketch above shows. Remember to delete the file when you are done, as this is not done automatically. |