qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
43,966,631 | I'm trying to scrape some information from a website; bear in mind that I'm new to Python.
My current code is this:
```
from lxml import html
import requests
page1 = requests.get('snip')
page2 = requests.get('snip')
page3 = requests.get('snip')
page4 = requests.get('snip')
tree = html.fromstring(page1.content)
```
I need to extract the number (currently 37) from here:
```
<div class='count col-xs-4'>
  <p><strong>37</strong> <br class='hidden-md hidden-lg'/>followers</p>
</div>
```
However, I'm not too sure how to do it. Can anyone help me with this? | 2017/05/14 | [
"https://Stackoverflow.com/questions/43966631",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5495583/"
] | You can do it using the `@Indexed` annotation's `expireAfterSeconds` attribute over a field whose type is `Date`. Roughly:
```
@Document
public class SomeEntity {
    String id;

    @Field
    @Indexed(name="someDateFieldIndex", expireAfterSeconds=3600)
    Date someDateField;

    // rest of code here
}
```
Or by manipulating a `MongoTemplate`:
```
mongoTemplate
    .indexOps(SomeEntity.class)
    .ensureIndex(new Index().on("someDateField", Sort.Direction.ASC).expire(3600));
``` | >
> Thanks, but does the whole document get expired and deleted, or just the field?
>
According to the MongoDB Documentation <https://docs.mongodb.com/manual/core/index-ttl/>
The TTL index is used to remove documents from a collection.
So the entire document will be deleted, not just the indexed field.
NB: the index has to be on a `Date` field, otherwise the TTL will not be applied.
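For illustration, a hedged sketch of creating the equivalent TTL index directly from Python with `pymongo` (the database, collection, and field names here are assumptions):

```
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient()
coll = client.mydb.some_entities  # hypothetical database/collection

# TTL index: MongoDB removes a document roughly 3600s after someDateField's value
coll.create_index("someDateField", expireAfterSeconds=3600)

coll.insert_one({"someDateField": datetime.now(timezone.utc)})
```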
regards |
61,370,118 | I'm currently building a project in which, for every user, I need to save a list of strings (URLs of articles they read).
I'm using python + flask + SQLalchemy with sqlite.
I understand that sqlite doesn't support arrays, so I was thinking about switching over to a different database instead of sqlite, one which supports arrays.
I would like to know what you would do: which database supports arrays, or is there a better way to solve this? | 2020/04/22 | [
"https://Stackoverflow.com/questions/61370118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10486010/"
] | You can serialize it in some way. An option is to simply call `str` on the array, and then handle it in some way when you need to convert it back.
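A minimal sketch of that `str` round-trip, using `ast.literal_eval` (my assumption; the answer doesn't name it) to convert back:

```py
import ast

lst = [1, 2, 3]
serialized = str(lst)                    # "[1, 2, 3]"
restored = ast.literal_eval(serialized)  # back to [1, 2, 3]
```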
Another option is to use the `json` module:
```py
import json
lst = [1,2,3]
serialized = json.dumps(lst)
print(serialized) # '[1, 2, 3]'
print(type(serialized)) # <class 'str'>
deserialized = json.loads(serialized)
print(deserialized) # [1, 2, 3]
print(type(deserialized)) # <class 'list'>
```
But as ggorlen's link suggests, you should try to come up with a better option than storing the array directly. | MySQL and SQLite both have support for JSON columns. You can store an array in a JSON column. In terms of best practice and performance, putting arrays in a database is debatable, of course.
ORM
```
from sqlalchemy import Column, Integer, JSON

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    list_of_items = Column(JSON, nullable=False)
```
Adding an Array to your DB.
```
parent_one = Parent(list_of_items=['item1', 'item2'])
session.add(parent_one)
session.commit()
```
Now when you fetch the record of `parent_one`, you will see its `list_of_items` column is a Python list.
This can also be done with the SQLAlchemy Core language.
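A hedged sketch of that Core-language variant (the engine URL and table setup are my assumptions, not from the answer):

```
import sqlalchemy as sa

engine = sa.create_engine("sqlite:///app.db")  # hypothetical engine URL
metadata = sa.MetaData()
parent = sa.Table(
    "parent", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("list_of_items", sa.JSON, nullable=False),
)
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(sa.insert(parent).values(list_of_items=["item1", "item2"]))
    row = conn.execute(sa.select(parent)).first()
    print(row.list_of_items)  # ['item1', 'item2']
```
|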
53,365,328 | Hello guys, I'm doing a simple command-line version of the minesweeper game in Python 3 and I'm having some trouble with the code.
The idea is to build a minesweeper game with a 2D array representing a 9x9 grid where the bombs are placed randomly.
This is what I did so far:
```
import random

#Function to create the 9x9 grid
def initialize():
    grid=9
    matrix=[[' ' for i in range(grid)] for i in range(grid)]
    return matrix

#Function to print the grid
def printMat( matrix ):
    print(' 0 1 2 3 4 5 6 7 8')
    for i in range( 9 ):
        print(i,end=' ')
        for j in range( 9 ):
            print('[',matrix[i][j],']',sep = " ",end=' ')
        print('\n')

#Function to place the bombs
def place_bomb(bomb):
    bombs = 10
    while bombs != 0:
        bomb[random.randint(0,8)][random.randint(0,8)] = "*"
        bombs-=1
    return bomb

#Function to calculate the surrounding bombs
def surrounding(grid, lin, col):
    size = len(grid)
    surrounding = []
    for i in range(-1, 2):
        for j in range(-1, 2):
            if i == 0 and j == 0:
                continue
            elif -1 < (lin + i) < size and -1 < (col + j) < size:
                surrounding+=lin + i, col + j
    return surrounding

#Function to verify if the player found a bomb and show the updated grid
#Not finished yet
def step(grid, lin, col):
    if grid[lin][col] == '* ':
        print("bomb")
        #return bomb=bomb-1

#Function to verify if the player found all the bombs
#If all the bombs were found then return True else return False
#Not finished yet
def status():
    pass

def game():
    game_active=True
    grid=initialize()
    printMat(grid)
    while game_active==True:
        lin = int(input('Choose a line :'))
        col = int(input('Choose a column:'))
        c=place_bomb(grid)
        printMat(c)
        f=surrounding(c,lin,col)
        printMat(f)

game()
```
It returns an error from the printMat function:
```
Traceback (most recent call last):
line 52, in <module>
game()
line 50, in game
printMat(f)
line 13, in printMat
print('[',matrix[i][j],']',sep = " ",end=' ')
TypeError: 'int' object is not subscriptable
```
How can I solve this?
Any help or tips to build the game would be appreciated. | 2018/11/18 | [
"https://Stackoverflow.com/questions/53365328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10668703/"
] | In a nutshell, it happens because `each()` just fires commands simultaneously against all elements. In your case you probably need to go this way: `element.all(bars).getCssValue('width').then(array => {/*do what you want with the returned array here*/})`
**\* Edited \***
What you want to do is
```
element.all(bars).getCssValue('width').then(array => {
    // test that at least one element is not "0px"
    let elementHasWidth = array.some(elem => elem !== "0px");
    expect(elementHasWidth).toBe(true);
})
``` | Try using this code before your error statement:
```
browser.wait(function() {
    return element.all(bars).isPresent().then(function(result) {
        return result;
    });
}, 5000);
``` |
53,365,328 | Hello guys, I'm doing a simple command-line version of the minesweeper game in Python 3 and I'm having some trouble with the code.
The idea is to build a minesweeper game with a 2D array representing a 9x9 grid where the bombs are placed randomly.
This is what I did so far:
```
import random

#Function to create the 9x9 grid
def initialize():
    grid=9
    matrix=[[' ' for i in range(grid)] for i in range(grid)]
    return matrix

#Function to print the grid
def printMat( matrix ):
    print(' 0 1 2 3 4 5 6 7 8')
    for i in range( 9 ):
        print(i,end=' ')
        for j in range( 9 ):
            print('[',matrix[i][j],']',sep = " ",end=' ')
        print('\n')

#Function to place the bombs
def place_bomb(bomb):
    bombs = 10
    while bombs != 0:
        bomb[random.randint(0,8)][random.randint(0,8)] = "*"
        bombs-=1
    return bomb

#Function to calculate the surrounding bombs
def surrounding(grid, lin, col):
    size = len(grid)
    surrounding = []
    for i in range(-1, 2):
        for j in range(-1, 2):
            if i == 0 and j == 0:
                continue
            elif -1 < (lin + i) < size and -1 < (col + j) < size:
                surrounding+=lin + i, col + j
    return surrounding

#Function to verify if the player found a bomb and show the updated grid
#Not finished yet
def step(grid, lin, col):
    if grid[lin][col] == '* ':
        print("bomb")
        #return bomb=bomb-1

#Function to verify if the player found all the bombs
#If all the bombs were found then return True else return False
#Not finished yet
def status():
    pass

def game():
    game_active=True
    grid=initialize()
    printMat(grid)
    while game_active==True:
        lin = int(input('Choose a line :'))
        col = int(input('Choose a column:'))
        c=place_bomb(grid)
        printMat(c)
        f=surrounding(c,lin,col)
        printMat(f)

game()
```
It returns an error from the printMat function:
```
Traceback (most recent call last):
line 52, in <module>
game()
line 50, in game
printMat(f)
line 13, in printMat
print('[',matrix[i][j],']',sep = " ",end=' ')
TypeError: 'int' object is not subscriptable
```
How can I solve this?
Any help or tips to build the game would be appreciated. | 2018/11/18 | [
"https://Stackoverflow.com/questions/53365328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10668703/"
] | New problems like this with existing code are likely due to Protractor and Selenium evolving from Web Driver Control Flow and WebDriverJS Promise Manager to native promises. You used to be able to write code that looked like synchronous code and under the hood the toolkits would convert it to asynchronous code that waited for the DOM to load and JavaScript on the page to run. Going forward you [need to convert your code](https://www.protractortest.org/#/async-await) to explicitly use `async/await` or promises. (See the reasoning behind the Selenium change [here](https://github.com/SeleniumHQ/selenium/issues/2969).)
Unfortunately a lot of old (pre 2019) examples, even in the Protractor documentation, are written in synchronous style, so you should be careful about following them. Even more unfortunately, [you cannot mix `async` code with Control Flow code](https://github.com/SeleniumHQ/selenium/issues/3037), and if you try, all the Control Flow will be disabled, and probably some of your tests that relied on it will start failing.
By the way, what is the value of `bars`? A Protractor `by` object works differently than a native WebDriver locator and I'm not sure they are reusable. Try using `by.name('bars')` or whatever instead of `bars`.
Your case is tricky because of all the promises involved. `element.getCssValue` returns a promise. Since you are trying to get a `true` or `false` value out of this, I suggest using a reducer.
```
let nonZero = element.all(by.name('bars')).reduce((acc, elem) => {
    return acc || elem.getCssValue('width').then( (width) => width > 0 );
}, false);
```
In a more complicated situation, you could use `all().each()` but you have to be careful to ensure that nothing you do inside `each` affects the DOM, because once it does, it potentially invalidates the rest of the array.
If you are potentially modifying the page with your ultimate action, then, as ugly as it may seem, you need to loop over finding the elements:
```
for (var i = 0; true; i++) {
    let list = element.all(by.css('.items li'));
    if (i >= await list.count())
        break;
    list.get(i).click();
}
``` | Try using this code before your error statement:
```
browser.wait(function() {
    return element.all(bars).isPresent().then(function(result) {
        return result;
    });
}, 5000);
``` |
53,365,328 | Hello guys, I'm doing a simple command-line version of the minesweeper game in Python 3 and I'm having some trouble with the code.
The idea is to build a minesweeper game with a 2D array representing a 9x9 grid where the bombs are placed randomly.
This is what I did so far:
```
import random

#Function to create the 9x9 grid
def initialize():
    grid=9
    matrix=[[' ' for i in range(grid)] for i in range(grid)]
    return matrix

#Function to print the grid
def printMat( matrix ):
    print(' 0 1 2 3 4 5 6 7 8')
    for i in range( 9 ):
        print(i,end=' ')
        for j in range( 9 ):
            print('[',matrix[i][j],']',sep = " ",end=' ')
        print('\n')

#Function to place the bombs
def place_bomb(bomb):
    bombs = 10
    while bombs != 0:
        bomb[random.randint(0,8)][random.randint(0,8)] = "*"
        bombs-=1
    return bomb

#Function to calculate the surrounding bombs
def surrounding(grid, lin, col):
    size = len(grid)
    surrounding = []
    for i in range(-1, 2):
        for j in range(-1, 2):
            if i == 0 and j == 0:
                continue
            elif -1 < (lin + i) < size and -1 < (col + j) < size:
                surrounding+=lin + i, col + j
    return surrounding

#Function to verify if the player found a bomb and show the updated grid
#Not finished yet
def step(grid, lin, col):
    if grid[lin][col] == '* ':
        print("bomb")
        #return bomb=bomb-1

#Function to verify if the player found all the bombs
#If all the bombs were found then return True else return False
#Not finished yet
def status():
    pass

def game():
    game_active=True
    grid=initialize()
    printMat(grid)
    while game_active==True:
        lin = int(input('Choose a line :'))
        col = int(input('Choose a column:'))
        c=place_bomb(grid)
        printMat(c)
        f=surrounding(c,lin,col)
        printMat(f)

game()
```
It returns an error from the printMat function:
```
Traceback (most recent call last):
line 52, in <module>
game()
line 50, in game
printMat(f)
line 13, in printMat
print('[',matrix[i][j],']',sep = " ",end=' ')
TypeError: 'int' object is not subscriptable
```
How can I solve this?
Any help or tips to build the game would be appreciated. | 2018/11/18 | [
"https://Stackoverflow.com/questions/53365328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10668703/"
] | In a nutshell, it happens because `each()` just fires commands simultaneously against all elements. In your case you probably need to go this way: `element.all(bars).getCssValue('width').then(array => {/*do what you want with the returned array here*/})`
**\* Edited \***
What you want to do is
```
element.all(bars).getCssValue('width').then(array => {
    // test that at least one element is not "0px"
    let elementHasWidth = array.some(elem => elem !== "0px");
    expect(elementHasWidth).toBe(true);
})
``` | Thanks to everyone who helped. The solution was to retry if it resulted in an error. Due to the real-time nature of the element, the element reference was changing before it had gotten the CSS width. |
53,365,328 | Hello guys, I'm doing a simple command-line version of the minesweeper game in Python 3 and I'm having some trouble with the code.
The idea is to build a minesweeper game with a 2D array representing a 9x9 grid where the bombs are placed randomly.
This is what I did so far:
```
import random

#Function to create the 9x9 grid
def initialize():
    grid=9
    matrix=[[' ' for i in range(grid)] for i in range(grid)]
    return matrix

#Function to print the grid
def printMat( matrix ):
    print(' 0 1 2 3 4 5 6 7 8')
    for i in range( 9 ):
        print(i,end=' ')
        for j in range( 9 ):
            print('[',matrix[i][j],']',sep = " ",end=' ')
        print('\n')

#Function to place the bombs
def place_bomb(bomb):
    bombs = 10
    while bombs != 0:
        bomb[random.randint(0,8)][random.randint(0,8)] = "*"
        bombs-=1
    return bomb

#Function to calculate the surrounding bombs
def surrounding(grid, lin, col):
    size = len(grid)
    surrounding = []
    for i in range(-1, 2):
        for j in range(-1, 2):
            if i == 0 and j == 0:
                continue
            elif -1 < (lin + i) < size and -1 < (col + j) < size:
                surrounding+=lin + i, col + j
    return surrounding

#Function to verify if the player found a bomb and show the updated grid
#Not finished yet
def step(grid, lin, col):
    if grid[lin][col] == '* ':
        print("bomb")
        #return bomb=bomb-1

#Function to verify if the player found all the bombs
#If all the bombs were found then return True else return False
#Not finished yet
def status():
    pass

def game():
    game_active=True
    grid=initialize()
    printMat(grid)
    while game_active==True:
        lin = int(input('Choose a line :'))
        col = int(input('Choose a column:'))
        c=place_bomb(grid)
        printMat(c)
        f=surrounding(c,lin,col)
        printMat(f)

game()
```
It returns an error from the printMat function:
```
Traceback (most recent call last):
line 52, in <module>
game()
line 50, in game
printMat(f)
line 13, in printMat
print('[',matrix[i][j],']',sep = " ",end=' ')
TypeError: 'int' object is not subscriptable
```
How can I solve this?
Any help or tips to build the game would be appreciated. | 2018/11/18 | [
"https://Stackoverflow.com/questions/53365328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10668703/"
] | New problems like this with existing code are likely due to Protractor and Selenium evolving from Web Driver Control Flow and WebDriverJS Promise Manager to native promises. You used to be able to write code that looked like synchronous code and under the hood the toolkits would convert it to asynchronous code that waited for the DOM to load and JavaScript on the page to run. Going forward you [need to convert your code](https://www.protractortest.org/#/async-await) to explicitly use `async/await` or promises. (See the reasoning behind the Selenium change [here](https://github.com/SeleniumHQ/selenium/issues/2969).)
Unfortunately a lot of old (pre 2019) examples, even in the Protractor documentation, are written in synchronous style, so you should be careful about following them. Even more unfortunately, [you cannot mix `async` code with Control Flow code](https://github.com/SeleniumHQ/selenium/issues/3037), and if you try, all the Control Flow will be disabled, and probably some of your tests that relied on it will start failing.
By the way, what is the value of `bars`? A Protractor `by` object works differently than a native WebDriver locator and I'm not sure they are reusable. Try using `by.name('bars')` or whatever instead of `bars`.
Your case is tricky because of all the promises involved. `element.getCssValue` returns a promise. Since you are trying to get a `true` or `false` value out of this, I suggest using a reducer.
```
let nonZero = element.all(by.name('bars')).reduce((acc, elem) => {
    return acc || elem.getCssValue('width').then( (width) => width > 0 );
}, false);
```
In a more complicated situation, you could use `all().each()` but you have to be careful to ensure that nothing you do inside `each` affects the DOM, because once it does, it potentially invalidates the rest of the array.
If you are potentially modifying the page with your ultimate action, then, as ugly as it may seem, you need to loop over finding the elements:
```
for (var i = 0; true; i++) {
    let list = element.all(by.css('.items li'));
    if (i >= await list.count())
        break;
    list.get(i).click();
}
``` | Thanks to everyone who helped. The solution was to retry if it resulted in an error. Due to the real-time nature of the element, the element reference was changing before it had gotten the CSS width. |
56,900,523 | I read Python's grammar file (<https://docs.python.org/3/reference/grammar.html>) and I don't understand what TFPDEF and FPDEF mean. I hope you can help me :)
![enter image description here](https://i.stack.imgur.com/lqvKT.png) | 2019/07/05 | [
"https://Stackoverflow.com/questions/56900523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11729018/"
] | Save functionality only saves the query editor content and doesn't save anything within the results pane. I believe that using the `pack` and `make_set` operators can generate a very close output to what you are looking for (if I understood it correctly). See below an example for grouping by client_OS and an internal breakdown by name and count:
```
customEvents
| summarize count() by name, client_OS
| extend p = pack('name', name, 'count', count_)
| summarize names = make_set(p) by client_OS
``` | I believe that the closest to what you expect can be achieved this way:
```
exceptions
| summarize entries = make_list(pack_all()) by operation_Name
```
Scalar function `pack_all` creates an object from all available columns, and the aggregation function `make_list` returns a JSON array of all the values provided as its arguments.
**Edit:** The `makelist` function is deprecated, use `make_list` instead. |
64,606,582 | I have a network load balancer which has my MQTT instance as a target. I have hosted an MQTT server on my instance, and I have a Python script which sends traffic to my MQTT instance via the network load balancer. It currently works on TCP. Now I want to make the connection between the network load balancer and my client secure, so I have added a listener which listens on port 8883 and I have attached ACM certificates to it. Now my issue is how do I get the client certificate which I will add to my Python script? Any help would be appreciated. | 2020/10/30 | [
"https://Stackoverflow.com/questions/64606582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14399164/"
] | If you generate a certificate via the public ACM CA, you are not able to export the certificate; you will only be able to associate it with managed AWS resources such as elastic load balancers, CloudFront, VPNs and API Gateway.
If you want to be able to control the physical certificate your choices are limited to the below:
* [AWS ACM Private CA](https://aws.amazon.com/certificate-manager/private-certificate-authority/), this is an extended part of the ACM service but can be expensive.
* A free SSL signing service such as [certbot](https://certbot.eff.org/), these will need to be rotated more frequently than your standard CA.
* Purchase an SSL through a trusted certificate authority | >
> Now my issue is how do I get the client certificate which I will add to my Python script?
>
You **can't export** your certificate out of ACM. If you require to have control of a private key from your SSL cert, you need to get it from a **third party**, not from ACM.
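Once you do have a third-party certificate and key on disk, a hedged sketch of wiring them into a Python MQTT client with `paho-mqtt` (all paths and the hostname are hypothetical placeholders):

```
import paho.mqtt.client as mqtt

client = mqtt.Client()
# CA bundle, client certificate, and key paths are hypothetical placeholders
client.tls_set(
    ca_certs="/etc/ssl/certs/ca-bundle.crt",
    certfile="/path/to/client.crt",
    keyfile="/path/to/client.key",
)
client.connect("mqtt.example.com", 8883)  # the TLS listener port
```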
ACM certs can only be used on **load balancers, API Gateway and CloudFront** distributions. If you want to use an SSL cert somewhere else, you must get it from a third party. A popular choice is <https://letsencrypt.org/>. |
39,054,391 | I've been able to build v8 successfully under **Windows 8.1** and **Visual Studio 2015 Update 2** with the build chain based on `GN` [Building with GN](https://github.com/v8/v8/wiki/Building-with-GN) but I can't find how to build the v8 library. Basically I want to enable `component=shared_library` but I can't find how to do it...
Examples such as *helloworld.exe* are compiling and are working.
Here is how I built it so far:
```
set DEPOT_TOOLS_WIN_TOOLCHAIN=0
mkdir depot_tools
cd depot_tools
curl -O https://storage.googleapis.com/chrome-infra/depot_tools.zip
cmake -E tar xf "depot_tools.zip" --format=zip
SET PATH=%CD%;%CD%\python276_bin;%PATH%
cd ..
gclient config https://chromium.googlesource.com/v8/v8
set GYP_MSVS_VERSION=2015
gclient sync
cd v8
python tools/dev/v8gen.py x64.release
ninja -C out.gn/x64.release
```
I've also tried the old-fashioned way using `gyp` without success (it does not compile):
```
SET DEPOT_TOOLS_WIN_TOOLCHAIN=0
mkdir depot_tools
cd depot_tools
curl -O https://storage.googleapis.com/chrome-infra/depot_tools.zip
cmake -E tar xf "depot_tools.zip" --format=zip
SET PATH=%CD%;%CD%\python276_bin;%PATH%
cd ..
gclient config https://chromium.googlesource.com/v8/v8
set GYP_MSVS_VERSION=2015
set GYP_GENERATORS=ninja
gclient sync
cd v8
python gypfiles\gyp_v8 -Dtarget_arch=x64 -Dcomponent=shared_library
ninja -C out\Release_x64 all
```
Any advice? | 2016/08/20 | [
"https://Stackoverflow.com/questions/39054391",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5683706/"
] | I ran into the same problem; the solution is to edit the GN arguments.
after:
`python tools/dev/v8gen.py x64.release`
just edit the file *out.gn/x64.release/args.gn* and add the line:
`is_component_build = true`
and after building you should find a v8.dll in your *out.gn/x64.release* folder. | I produced it with gyp. The setting was not `set GYP_GENERATORS=ninja` but `GYP_GENERATORS=msvs`. Running the generated .sln file left some errors at build time, but I eventually got a successful build after some browsing on the Internet. |
62,680,583 | I found a similar post and educational link (see below) but I'm not sure how to apply the answer logic from that post to my line of code.
My dataset looks like this
```
PT_FIN PT_DISCH_DISPO
1 HOME/SELF CARE OP
2 PARENT HOME/SELF CARE IP
3 Admitted as inpatient
4 LEFT WITHOUT TREATMENT
....
```
In total, there are over 100,000 records and below is a value count of the response options for the `PT_DISCH_DISPO` variable:
```
HOME/SELF CARE OP 201253
PARENT HOME/SELF CARE IP 15313
Admitted as inpatient 13721
LEFT WITHOUT TREATMENT 4633
TRANS TO OTHER FACILITY OP 608
ERROR REGISTRATION 227
LEFT AGAINST MED ADVICE OP 181
DSCHRG/TRNSFR PSYCH IP 134
EXPIRED OTHER 64
EXPIRED NO AUTOPSY NEEDED 61
DISCH/TRANS to INPT REHAB FAC 33
TRANS TO ACUTE CARE HOSP IP 30
NO SHOW 27
EXPIRED W/ AUTOPSY 25
OTHER IP 24
RELATIVE HOME IP 19
TRANS TO ADULT FACILITY OP 13
LEFT AGAINST MED ADVICE IP 12
TRANSFERRED TO SNF IP 11
3RD PARTY PAYOR REQUEST FOR TRANS 10
EXPIRED/CORONER'S CASE 6
FOSTER HOME IP 5
HOME UNDER HOME HEALTH CARE IP 4
ORGANIZED GROUP HOME IP 4
REHABILITATION CENTER IP 3
DISCHARGED TO SNF OP 2
Against Medical Advice 2
CORRECTIONAL FACILITY IP 2
TRANS TO INTERMED CARE FAC IP 1
Expired other 1
DEAD ON ARRIVAL 1
```
It's a pretty simple line of code:
```
#removing strings from dataset "~" denotes do not keep
TAT_v2 = TAT_v1[~TAT_v1.PT_DISCH_DISPO.str.contains("error|no show|left without treatment", case=False)]
```
It runs fine when I pull data from, say, 2019 onward, but when I expand my dataframe to include data from 2018 and run this line of code I get the following error:
```
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\generic.py in __invert__(self)
1540 def __invert__(self):
1541 try:
-> 1542 arr = operator.inv(com.values_from_object(self))
1543 return self.__array_wrap__(arr)
1544 except Exception:
TypeError: bad operand type for unary ~: 'NoneType'
```
Can someone please educate me as to why this error is occurring and how I can learn to fix it? Am I receiving this error because I have null values within my `PT_DISCH_DISPO` variable?
Many thanks!
[python bad operand type for unary -: 'NoneType'](https://stackoverflow.com/questions/39659023/python-bad-operand-type-for-unary-nonetype)
<https://realpython.com/null-in-python/> | 2020/07/01 | [
"https://Stackoverflow.com/questions/62680583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10687615/"
] | Ah, all I had to do was add `na=False` to my line of code:
```
TAT_v2 = TAT_v1[~TAT_v1.PT_DISCH_DISPO.str.contains("error|no show", na=False, case=False)]
``` | As far as I understand your pandas code, the line `TAT_v1.PT_DISCH_DISPO.str.contains("error|no show|left without treatment", case=False)` somehow returns None.
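A hedged sketch (with a made-up series) of how null values make that mask non-boolean, which is what trips the unary `~`, and how `na=False` avoids it:

```
import pandas as pd

s = pd.Series(["ERROR REGISTRATION", None, "NO SHOW", "HOME/SELF CARE OP"])

bad = s.str.contains("error|no show", case=False)
print(bad.dtype)  # object: the null row stays missing, so ~bad can fail

good = s.str.contains("error|no show", case=False, na=False)
print(s[~good].tolist())  # [None, 'HOME/SELF CARE OP']
```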
Investigating the [contains](https://docs.python.org/3.4/library/operator.html#operator.__contains__) documentation may help. |
61,605,339 | Dataframe:
```
id name count
123 ssdf 1
123 wasdfasdf 1
234 ewdd 0
345 efdfg 2
345 efg 1
345 gfdfg 1
456 gfdfg 1
456 ffg 1
567 vd 3
```
Here there are `id` and `count` columns; I need the ids where every count is 1.
If an id has counts 1,1,1 then I need that id in the result list.
If the id has 0,1,1 then never mind.
If the id has 0,3,0 then never mind.
Expected result:
```
list_1 = ['123','456']
```
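A hedged sketch of one way to produce this result with pandas, rebuilding the sample frame above:

```
import pandas as pd

df = pd.DataFrame({
    "id": ["123", "123", "234", "345", "345", "345", "456", "456", "567"],
    "count": [1, 1, 0, 2, 1, 1, 1, 1, 3],
})

def ids_all_ones(frame):
    # keep only the ids whose every count equals 1
    ok = frame.groupby("id")["count"].apply(lambda s: (s == 1).all())
    return ok[ok].index.tolist()

print(ids_all_ones(df))  # ['123', '456']
```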
Also, can I please have the answer as a function in Python? | 2020/05/05 | [
"https://Stackoverflow.com/questions/61605339",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8753186/"
] | In the Dropdown, add a value property to the select and use the id, like `value={this.state.selectedValue}`.
So you will have that value in ev.target.value.
Then, in your parent, you can do something like: `companiesData.filter(company => company.id === ev.target.value)`. And you have the info there.
And of course set the selectedValue (using hooks or normal setState). | Another option (if you don't want to do filtering) is to simply send `e.target` to your handler instead of `e.target.value`.
In your handler, retrieve the info you need like this:
```js
const parentHandler = target => {
    const targetOptions = target.options;
    const selectedValue = target.value;
    console.log("selected value", selectedValue);
    console.log("all html options array", targetOptions);
    console.log("selected option html", targetOptions[target.selectedIndex]);
    console.log(
        "selected option name",
        targetOptions[target.selectedIndex].getAttribute("name")
    );
};
```
see a [demo here](https://codesandbox.io/s/select-option-pass-extra-value-fix-wpbck?file=/src/App.js) |
42,392,600 | I'm working with Django 1.6.5 and Python 2.7.
I have an import feature in my app and I get this error:
```
OSError: [Errno 18] Invalid cross-device link
```
I have a problem with this part of the code:
```
os.rename(db_temp, settings.DATABASES['bookmat']['NAME'])
```
Code in settings:
```
'bookmat': {
    'ENGINE': 'django.db.backends.sqlite3',
    'NAME': '/my_projects/book/db/bookmat.sqlite3',
},
``` | 2017/02/22 | [
"https://Stackoverflow.com/questions/42392600",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7456529/"
] | `os.rename` only works if source and destination are on the same file system. You should use `shutil.move` instead. | `rename` only works when the source and target names are on the same file system. You probably have different mounts. Otherwise you get that error. You can implement the same effect with a copy and a delete. |
42,861,301 | I'm new to Python. Kindly don't be harsh on me if this question looks novice. I created a Matrix like this:
```
Matrix = np.matrix([[0,0,0,1], [0,0,1,1], [1,0,0,1], [1,1,0,0],[0,0,1,1]])
```
Now, when I try to access an element of the Matrix, like this:
```
import numpy as np
print(Matrix[0][3])
```
I get the following error:
>
> IndexError: index 1 is out of bounds for axis 0 with size 1
>
>
>
I have been through all the related posts on stackoverflow but haven't found a solution so far. | 2017/03/17 | [
"https://Stackoverflow.com/questions/42861301",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5348132/"
] | Yes, this is possible if you create a sub-class of FilterStore similarly to PriorityQueue. | Here is an example that worked for me
```
import simpy
import bisect
class PriorityFilterStore(simpy.FilterStore):
    def _do_put(self, event):
        if len(self.items) < self._capacity:
            bisect.insort(self.items, event.item)
            event.succeed()

    def _do_get(self, event):
        for i in range(len(self.items)):
            item = self.items[i]
            if event.filter(item):
                del self.items[i]
                event.succeed(item)
                break
        return True
``` |
42,068,069 | Unable to install a Python module on an Ubuntu machine. When I try to install the Paramiko module, I receive the following error:
```
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 70, in <module>
import packaging.version
ImportError: No module named packaging.version
``` | 2017/02/06 | [
"https://Stackoverflow.com/questions/42068069",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2964666/"
] | Finally I found a solution; it worked after cleaning up and re-installing pip:
```
sudo -i
apt-get purge -y python-pip
wget https://bootstrap.pypa.io/get-pip.py
python ./get-pip.py
apt-get install python-pip
``` | Run
```
pip install packaging
```
Add `sudo` if it is necessary. That should resolve the issue. |
42,068,069 | Unable to install a Python module on an Ubuntu machine. When I try to install the Paramiko module, I receive the following error:
```
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 70, in <module>
import packaging.version
ImportError: No module named packaging.version
``` | 2017/02/06 | [
"https://Stackoverflow.com/questions/42068069",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2964666/"
] | Finally I found a solution; it worked after cleaning up and re-installing pip:
```
sudo -i
apt-get purge -y python-pip
wget https://bootstrap.pypa.io/get-pip.py
python ./get-pip.py
apt-get install python-pip
``` | There might be a problem with your pip installation; try:
```
sudo pip install --upgrade pip
``` |
54,363,239 | I am trying to install the pip package management system on my Mac but I am getting an error.
When I run the command `sudo easy_install pip`, it gives me the following error:
```
Searching for pip
Reading https://pypi.python.org/simple/pip/
Download error on https://pypi.python.org/simple/pip/: [SSL:
TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590) -- Some packages may not be found!
Couldn't find index page for 'pip' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.python.org/simple/ Download error on https://pypi.python.org/simple/: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590)
-- Some packages may not be found!
No local packages or download links found for pip
error: Could not find suitable distribution for Requirement.parse('pip')
```
Can anyone help me to understand this? | 2019/01/25 | [
"https://Stackoverflow.com/questions/54363239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9689847/"
] | The easy solution is to download and install the latest version of Python 3. Just grab the dmg from the website <https://www.python.org/downloads/release/python-370/>. Verify it was installed:
```
$ python3 --version
Python 3.7.0
```
Pip is automatically included in this version, so update it. Note: use pip3 for Python 3.x.
```
$ pip3 install --upgrade pip
Successfully uninstalled pip-10.0.1
Successfully installed pip-18.0
``` | This worked for me:
Make sure you installed Python:
```
$ python3 --version
```
Then install pip
```
$ python3 -m pip install --user --upgrade pip
$ python3 -m pip --version
```
For more, check here:
<https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/> |
71,106,080 | Would anyone happen to know why I keep getting the error below in my for loop? Any insight would be greatly appreciated and helpful.
```
the_count = [1, 2, 3, 4, 5]
for number in the_count:
print(f"This is count {number}")
NameError Traceback (most recent call last)
<ipython-input-3-e3d6b461f13e> in <module>
----> 1 print(f"This is count {number}")
NameError: name 'number' is not defined
``` | 2022/02/14 | [
"https://Stackoverflow.com/questions/71106080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16931630/"
] | Well, it should give an indentation error, but it's giving a name error:
```py
the_count = [1, 2, 3, 4, 5]
for number in the_count:
    print(f"This is count {number}")
```
This is how your code should be. | Here are a few ways you can print the loop:
```
# Option 1
the_count = [1, 2, 3, 4, 5]
for x in the_count:
    print('This is count:' + str(x))

# Option 2
the_count = [1, 2, 3, 4, 5]
for x in the_count:
    print(f"This is count {x}")
``` |
71,106,080 | Would anyone happen to know why I keep getting the error below in my for loop? Any insight would be greatly appreciated and helpful.
```
the_count = [1, 2, 3, 4, 5]
for number in the_count:
print(f"This is count {number}")
NameError Traceback (most recent call last)
<ipython-input-3-e3d6b461f13e> in <module>
----> 1 print(f"This is count {number}")
NameError: name 'number' is not defined
``` | 2022/02/14 | [
"https://Stackoverflow.com/questions/71106080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16931630/"
] | Are you running this line by line in a terminal or as a .py file? This code works perfectly if you run it as a Python file:
```
the_count = [1, 2, 3, 4, 5]
for number in the_count:
    print(f"This is count {number}")
```
Which outputs:
```
This is count 1
This is count 2
This is count 3
This is count 4
This is count 5
```
You may run into errors with the for loop if you run this code line by line, since the for loop would be executed fully prior to even reading the print statement. | Here are a few ways you can print the loop:
```
# Option 1
the_count = [1, 2, 3, 4, 5]
for x in the_count:
    print('This is count:' + str(x))

# Option 2
the_count = [1, 2, 3, 4, 5]
for x in the_count:
    print(f"This is count {x}")
``` |
67,313,858 | I am trying to do semantic search with Elasticsearch using tensorflow_hub, but I get `RequestError: RequestError(400, 'search_phase_execution_exception', 'runtime error')`. From the search_phase_execution_exception I suppose it has to do with corrupted data ([per this stack question](https://stackoverflow.com/questions/30073759/searchphaseexecutionexceptionfailed-to-execute-phase-query-all-shards-failed)). My document structure looks like this:
```
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  },
  "mappings": {
    "dynamic": "true",
    "_source": {
      "enabled": "true"
    },
    "properties": {
      "id": {
        "type": "keyword"
      },
      "title": {
        "type": "text"
      },
      "abstract": {
        "type": "text"
      },
      "abs_emb": {
        "type": "dense_vector",
        "dims": 512
      },
      "timestamp": {
        "type": "date"
      }
    }
  }
}
```
And I create a document using `elasticsearch.indices.create`.
```
es.indices.create(index=index, body='my_document_structure')
res = es.indices.delete(index=index, ignore=[404])
for i in range(100):
    doc = {
        'timestamp': datetime.datetime.utcnow(),
        'id': id[i],
        'title': title[0][i],
        'abstract': abstract[0][i],
        'abs_emb': tf_hub_KerasLayer([abstract[0][i]])[0]
    }
    res = es.index(index=index, body=doc)
```
For my semantic search I use this code:
```
query = "graphene"
query_vector = list(embed([query])[0])

script_query = {
    "script_score": {
        "query": {"match_all": {}},
        "script": {
            "source": "cosineSimilarity(params.query_vector, doc['abs_emb']) + 1.0",
            "params": {"query_vector": query_vector}
        }
    }
}

response = es.search(
    index=index,
    body={
        "size": 5,
        "query": script_query,
        "_source": {"includes": ["title", "abstract"]}
    }
)
```
I know there are some similar questions on Stack Overflow and Elasticsearch, but I couldn't find a solution for me. My guess is that the document structure is wrong but I can't figure out what exactly. I used the search query code from [this](https://github.com/jtibshirani/text-embeddings/blob/blog/src/main.py) repo. The full error message is too long and doesn't seem to contain much information, so I share only the last part of it.
```
~/untitled/elastic/venv/lib/python3.9/site-packages/elasticsearch/connection/base.py in
_raise_error(self, status_code, raw_data)
320 logger.warning("Undecodable raw error response from server: %s", err)
321
--> 322 raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
323 status_code, error_message, additional_info
324 )
RequestError: RequestError(400, 'search_phase_execution_exception', 'runtime error')
```
And here is the error from the Elasticsearch server:
```
[2021-04-29T12:43:07,797][WARN ][o.e.c.r.a.DiskThresholdMonitor]
[asmac.local] high disk watermark [90%] exceeded on
[w7lUacguTZWH9xc_lyd0kg][asmac.local][/Users/username/elasticsearch-
7.12.0/data/nodes/0] free: 17.2gb[7.4%], shards will be relocated
away from this node; currently relocating away shards totalling [0]
bytes; the node is expected to continue to exceed the high disk
watermark when these relocations are complete
``` | 2021/04/29 | [
"https://Stackoverflow.com/questions/67313858",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6687545/"
] | I think you're hitting the [following issue](https://www.elastic.co/guide/en/elasticsearch/reference/7.12/breaking-changes-7.6.html#_update_to_vector_function_signatures) and you should update your query to this:
```
script_query = {
    "script_score": {
        "query": {"match_all": {}},
        "script": {
            "source": "cosineSimilarity(params.query_vector, 'abs_emb') + 1.0",
            "params": {"query_vector": query_vector}
        }
    }
}
```
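And, relevant to the float-vs-double note just below, a hedged one-liner (reusing the question's `embed` call) to force plain Python floats:

```
query_vector = [float(x) for x in embed([query])[0]]
```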
Also make sure that `query_vector` [contains floats and not doubles](https://www.elastic.co/guide/en/elasticsearch/reference/7.12/breaking-changes-7.4.html#_use_float_instead_of_double_for_query_vectors) | In my case the error was "Caused by: java.lang.ClassCastException: class org.elasticsearch.index.fielddata.ScriptDocValues$Doubles cannot be cast to class org.elasticsearch.xpack.vectors.query.VectorScriptDocValues$DenseVectorScriptDocValues"
My mistake was that I removed the ES index (the one that had the "type":"dense_vector" field) before starting to ingest content.
Because of that, ES did not use the correct type for indexing dense vectors: they were stored as useless lists of doubles. In this sense the ES index was 'corrupted': all 'script_score' queries returned 400. |
64,858,958 | I'd like to run a script which does some setup and then opens up a shell with that environment. So rather than doing
```
$ python
>>> # do some setup
>>> # start doing what I really came here to do
```
I want to do
```
$ python my_script.py
>>> # start doing what I really came here to do
``` | 2020/11/16 | [
"https://Stackoverflow.com/questions/64858958",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4391249/"
] | Run your script with the `-i` argument:
```
python -i my_script.py
```
This will execute `my_script.py` and drop to an interactive shell afterwards. | You can do something like this:
```py
import code
variables = {"test": True}
shell = code.InteractiveConsole(variables)
shell.interact()
```
Now it will open a Python shell and you can access the `test` variable directly.
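A hedged variant of the same idea using `code.interact` with a banner (the names here are illustrative):

```py
import code

# run some setup, then drop into a REPL that can see these names
data = {"rows": [1, 2, 3]}
code.interact(banner="setup done; `data` is available", local={"data": data})
```
|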
31,709,072 | I am trying to use `os.system` (soon to be replaced with `subprocess`) to call a shell script (which runs a process as a daemon)
```
os.system('/path/to/shell_script.sh')
```
The shell script looks like:
```
nohup /path/to/program &
```
If I execute this shell script in my local environment, I have to hit enter before being returned to the console as the shell script is running a process as a daemon. If I do the above command in python, I also have to hit enter before being returned to the console.
However, if I do this in a python program, it just hangs forever.
How can I get the python program to resume execution after calling a shell script that runs a process as a daemon? | 2015/07/29 | [
"https://Stackoverflow.com/questions/31709072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1391717/"
] | From [here](http://tldp.org/LDP/abs/html/x9644.html) -
>
> Within a script, running a command in the background with an ampersand (&)
> may cause the script to hang until ENTER is hit. This seems to occur with
> commands that write to stdout.
>
You should try redirecting your output to some file, for example `/dev/null` if you really do not need the output:
```
nohup /path/to/program > /dev/null &
``` | Why don't you try using a separate thread?
Wrap up your process into something like
```
from threading import Thread

def run(my_arg):
    my_process(my_arg)  # my_process is your long-running call

thread = Thread(target=run, args=(my_arg,))
thread.start()
```
Check out join and lock for more control over the thread execution:
<https://docs.python.org/2/library/threading.html> |
65,871,852 | Hey guys, I need some help. I am trying to download videos from this site <https://ttdownloader.com/dl.php?v=YTo0OntzOjk6IndhdGVybWFyayI7YjowO3M6NzoidmlkZW9JZCI7czoxOToiNjkxMjEwNzYyNzY1MjY5NzM1MCI7czozOiJ1aWQiO3M6MzI6Ijk0MTdiOWE3NWU2MmE3MDQ1NjZhYzk0MzJjMThlY2VlIjtzOjQ6InRpbWUiO2k6MTYxMTQ5NzE1ODt9> using Python.
This is the code I have tried:
```
import requests
url ='''https://ttdownloader.com/dl.php?v=YTo0OntzOjk6IndhdGVybWFyayI7YjowO3M6NzoidmlkZW9JZCI7czoxOToiNjkxMjEwNzYyNzY1MjY5NzM1MCI7czozOiJ1aWQiO3M6MzI6Ijk0MTdiOWE3NWU2MmE3MDQ1NjZhYzk0MzJjMThlY2VlIjtzOjQ6InRpbWUiO2k6MTYxMTQ5NzE1ODt9'''
page = requests.get(url)
with open('output.mp4', 'wb') as file:
    file.write(page.content)
```
But it doesn't work as expected; when I check `page.content` all I see is `b''`. | 2021/01/24 | [
"https://Stackoverflow.com/questions/65871852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15041837/"
] | ❌ The link that you are using is **NOT** an HTML page.
❌ Therefore it doesn't return anything as HTML.
✅ Your link is a media link.
✅ Therefore you must stream it and download it. Something like this:
```
import requests
url = '/your/valid/ttdownloader/url'
with requests.get(url, stream=True) as r:
    with open('output.mp4', 'wb') as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)
```
***NOTE:***
The link that you posted in the question is now invalid.
Please try the above code with a newly generated link. | You should use [`request.urlretrieve`](https://docs.python.org/3/library/urllib.request.html#urllib.request.urlretrieve) to directly save the URL to a file:
```
from urllib import request
url ='''https://ttdownloader.com/dl.php?v=YTo0OntzOjk6IndhdGVybWFyayI7YjowO3M6NzoidmlkZW9JZCI7czoxOToiNjkxMjEwNzYyNzY1MjY5NzM1MCI7czozOiJ1aWQiO3M6MzI6Ijk0MTdiOWE3NWU2MmE3MDQ1NjZhYzk0MzJjMThlY2VlIjtzOjQ6InRpbWUiO2k6MTYxMTQ5NzE1ODt9'''
request.urlretrieve(url, 'output.mp4')
```
However, this code gave me a `urllib.error.HTTPError: HTTP Error 403: Forbidden` error. It appears that this link is not publicly available without authentication. |
53,007,246 | I am using `selenium webdriver` on Chrome; `python 3` on Windows 10.
I want to scrape some reports from a database. I search with a company ID and a year, the results are a list of links formatted in a specific way: something like `year_companyID_seeminglyRandomDateAndDoctype.extension`, e.g. `2018_2330_20020713F04.pdf`. I want to get all pdfs of a certain doctype. I can grab all links for a certain doctype using `webdriver.find_elements_by_partial_link_text('F04')` or all of that extension with `'.pdf'` instead of `'F04'`, but I cannot figure out a way to check for both at once. First I tried something like
```
links = webdriver.find_elements_by_partial_link_text('F04')
for l in links:
    if l.find('.pdf') == -1:
        continue
    else:
        #do some stuff
```
But unfortunately, the links are WebElements:
```
print(links[0])
>> <selenium.webdriver.remote.webelement.WebElement (session="78494f3527260607202e68f6d93668fe", element="0.8703868381417961-1")>
print(links[0].get_attribute('href'))
>> javascript:readfile2("F","2330","2015_2330_20160607F04.pdf")
```
so the conditional in the `for` loop above fails.
I see that I could probably access the necessary information in whatever that object is, but I would prefer to do the checks first when getting the links. **Is there any way to check multiple conditions in the `webdriver.find_elements_by_*` methods?** | 2018/10/26 | [
"https://Stackoverflow.com/questions/53007246",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10202292/"
] | You can try to use the below code:
```
links = [link.get_attribute('href') for link in webdriver.find_elements_by_partial_link_text('F04') if link.get_attribute('href').endswith('.pdf")')]
```
You can also try XPath as below
```
links = webdriver.find_elements_by_xpath('//a[contains(., "F04") and contains(@href, ".pdf")]')
``` | Andersson's approach seems to work with a slight correction:
`if link.get_attribute('href').endswith('.pdf')]` rather than `if link.get_attribute('href').endswith('.pdf")')]`, i.e. delete the extra `")`.
3,419,400 | Where can I find some real world typo statistics?
I'm trying to match people's input text to internal objects, and people tend to make spelling mistakes.
There are 2 kinds of mistakes:
1. `typos` - "Helllo" instead of "Hello" / "Satudray" instead of "Saturday" etc.
2. `Spelling` - "Shikago" instead of "Chicago"
I use [Damerau-Levenshtein distance](http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) for the typos and [Double Metaphone](http://en.wikipedia.org/wiki/Double_Metaphone) for spelling (Python implementations [here](http://mwh.geek.nz/2009/04/26/python-damerau-levenshtein-distance/) and [here](http://atomboy.isa-geek.com/plone/Members/acoil/programing/double-metaphone/metaphone.py/view)).
I want to focus on the Damerau-Levenshtein (or simply `edit-distance`). The textbook implementations always use '1' for the weight of deletions, insertions substitutions and transpositions. While this is simple and allows for nice algorithms it doesn't match "reality" / "real-world probabilities".
Examples:
* I'm sure the likelihood of "Helllo" ("Hello") is greater than "Helzlo", yet they are both 1 edit distance away.
* "Gello" is closer than "Qello" to "Hello" on a QWERTY keyboard.
* Unicode transliterations: What is the "real" distance between "München" and "Munchen"?
What should the "real world" weights be for deletions, insertions, substitutions, and transpositions?
Even [Norvig's very cool spell corrector](http://norvig.com/spell-correct.html) uses non-weighted edit distance.
BTW- I'm sure the weights need to be functions and not simple floats (per the above
examples)...
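(For concreteness, a minimal sketch of edit distance with function-valued weights; the adjacency table below is an illustrative assumption, not learned data:)

```
def weighted_edit_distance(a, b, sub_cost, del_cost, ins_cost):
    # classic Levenshtein DP, but with caller-supplied cost functions
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + del_cost(a[i - 1])
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + ins_cost(b[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + del_cost(a[i - 1]),
                d[i][j - 1] + ins_cost(b[j - 1]),
                d[i - 1][j - 1] + (0.0 if a[i - 1] == b[j - 1]
                                   else sub_cost(a[i - 1], b[j - 1])),
            )
    return d[m][n]

# illustrative assumption: substituting adjacent QWERTY keys costs less
adjacent = {("g", "h"), ("h", "g"), ("q", "w"), ("w", "q")}
sub = lambda x, y: 0.5 if (x, y) in adjacent else 1.0
one = lambda c: 1.0
print(weighted_edit_distance("hello", "gello", sub, one, one))  # 0.5
print(weighted_edit_distance("hello", "qello", sub, one, one))  # 1.0
```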
I can adjust the algorithm, but where can I "learn" these weights? I don't have access to [Google-scale data](http://www.google.com/jobs/britney.html)...
Should I just guess them?
**EDIT - trying to answer user questions:**
* My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance).
* I'm developing an NLP Travel Search engine, so my dictionary contains ~25K destinations (expected to grow to 100K), Time Expressions ~200 (expected 1K), People expressions ~100 (expected 300), Money Expressions ~100 (expected 500), "glue logic words" ("from", "beautiful", "apartment") ~2K (expected 10K) and so on...
* Usage of the edit distance is different for each of the above word-groups. I try to "auto-correct when obvious", e.g. 1 edit distance away from only 1 other word in the dictionary. I have **many** other hand-tuned rules, e.g. Double Metaphone fix which is not more than 2 edit distance away from a dictionary word with a length > 4... The list of rules continues to grow as I learn from real world input.
* "How many pairs of dictionary entries are within your threshold?": well, that depends on the "fancy weighting system" and on real world (future) input, doesn't it? Anyway, I have extensive unit tests so that every change I make to the system only makes it better (based on past inputs, of course). Most sub-6 letter words are within 1 edit distance from a word that is 1 edit distance away from another dictionary entry.
* Today when there are 2 dictionary entries at the same distance from the input I try to apply various statistics to better guess which the user meant (e.g. Paris, France is more likely to show up in my search than Pārīz, Iran).
* The cost of choosing a wrong word is returning semi-random (often ridiculous) results to the end-user and potentially losing a customer. The cost of not understanding is slightly less expensive: the user will be asked to rephrase.
* Is the cost of complexity worth it? Yes, I'm sure it is. You would not believe the amount of typos people throw at the system and expect it to understand, and I could sure use the boost in [Precision and Recall](http://en.wikipedia.org/wiki/Precision_and_recall). | 2010/08/05 | [
"https://Stackoverflow.com/questions/3419400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | I would advise you to check the [trigram algorithm](http://www.postgresql.org/docs/8.3/interactive/pgtrgm.html). In my opinion it works better for finding typos than the edit distance algorithm. It should work faster as well, and if you keep the dictionary in a Postgres database you can make use of an index.
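A hedged sketch of trigram similarity in plain Python, approximating what `pg_trgm` computes:

```
def trigrams(word):
    padded = f"  {word.lower()} "        # pg_trgm-style padding
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)   # Jaccard over trigram sets

# 'Thursday' scores higher than 'Tuesday' against the typo 'Tursday'
print(similarity("Thursday", "Tursday"), similarity("Tuesday", "Tursday"))
```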
You may find the Stack Overflow [topic](https://stackoverflow.com/questions/307291/how-does-the-google-did-you-mean-algorithm-work) about Google's "Did you mean" useful. | Some questions for you, to help you determine whether you should be asking your "where do I find real-world weights" question:
Have you actually measured the effectiveness of the uniform weighting implementation? How?
How many different "internal objects" do you have -- i.e. what is the size of your dictionary?
How are you actually using the edit distance e.g. John/Joan, Marmaduke/Marmeduke, Featherstonehaugh/Featherstonhaugh: is that "all 1 error" or is it 25% / 11.1% / 5.9% difference? What threshold are you using?
How many pairs of dictionary entries are within your threshold (e.g. John vs Joan, Joan vs Juan, etc)? If you introduced a fancy weighting system, how many pairs of dictionary entries would migrate (a) from inside the threshold to outside (b) vice versa?
What do you do if both John and Juan are in your dictionary and the user types Joan?
What are the penalties/costs of (1) choosing the wrong dictionary word (not the one that the user meant) (2) failing to recognise the user's input?
Will introducing a complicated weighting system actually reduce the probabilities of the above two error types by sufficient margin to make the complication and slower speed worthwhile?
BTW, how do you know what keyboard the user was using?
Update:
"""My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance)."""
Yes, Thursday -> Tursday by omitting an "h", but Tuesday -> Tursday by substituting "r" instead of "e". *E and R are next to each other on qwERty and azERty keyboards.* Every "real person" can easily **guess** that Thursday is more likely than Tuesday. Even if statistics as well as guesses point to Thursday being more likely than Tuesday (perhaps omitting h will cost 0.5 and e->r will cost 0.75), will the difference (perhaps 0.25) be significant enough to always pick Thursday? Can/will your system ask "Did you mean Tuesday?" or does/will it just plough ahead with Thursday? |
3,419,400 | Where can I find some real world typo statistics?
I'm trying to match people's input text to internal objects, and people tend to make spelling mistakes.
There are 2 kinds of mistakes:
1. `typos` - "Helllo" instead of "Hello" / "Satudray" instead of "Saturday" etc.
2. `Spelling` - "Shikago" instead of "Chicago"
I use [Damerau-Levenshtein distance](http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) for the typos and [Double Metaphone](http://en.wikipedia.org/wiki/Double_Metaphone) for spelling (Python implementations [here](http://mwh.geek.nz/2009/04/26/python-damerau-levenshtein-distance/) and [here](http://atomboy.isa-geek.com/plone/Members/acoil/programing/double-metaphone/metaphone.py/view)).
I want to focus on the Damerau-Levenshtein (or simply `edit-distance`). The textbook implementations always use '1' for the weight of deletions, insertions substitutions and transpositions. While this is simple and allows for nice algorithms it doesn't match "reality" / "real-world probabilities".
Examples:
* I'm sure the likelihood of "Helllo" ("Hello") is greater than "Helzlo", yet they are both 1 edit distance away.
* "Gello" is closer than "Qello" to "Hello" on a QWERTY keyboard.
* Unicode transliterations: What is the "real" distance between "München" and "Munchen"?
What should the "real world" weights be for deletions, insertions, substitutions, and transpositions?
Even [Norvig's very cool spell corrector](http://norvig.com/spell-correct.html) uses non-weighted edit distance.
BTW- I'm sure the weights need to be functions and not simple floats (per the above
examples)...
I can adjust the algorithm, but where can I "learn" these weights? I don't have access to [Google-scale data](http://www.google.com/jobs/britney.html)...
Should I just guess them?
**EDIT - trying to answer user questions:**
* My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance).
* I'm developing an NLP Travel Search engine, so my dictionary contains ~25K destinations (expected to grow to 100K), Time Expressions ~200 (expected 1K), People expressions ~100 (expected 300), Money Expressions ~100 (expected 500), "glue logic words" ("from", "beautiful", "apartment") ~2K (expected 10K) and so on...
* Usage of the edit distance is different for each of the above word-groups. I try to "auto-correct when obvious", e.g. 1 edit distance away from only 1 other word in the dictionary. I have **many** other hand-tuned rules, e.g. Double Metaphone fix which is not more than 2 edit distance away from a dictionary word with a length > 4... The list of rules continues to grow as I learn from real world input.
* "How many pairs of dictionary entries are within your threshold?": well, that depends on the "fancy weighting system" and on real world (future) input, doesn't it? Anyway, I have extensive unit tests so that every change I make to the system only makes it better (based on past inputs, of course). Most sub-6 letter words are within 1 edit distance from a word that is 1 edit distance away from another dictionary entry.
* Today when there are 2 dictionary entries at the same distance from the input I try to apply various statistics to better guess which the user meant (e.g. Paris, France is more likely to show up in my search than Pārīz, Iran).
* The cost of choosing a wrong word is returning semi-random (often ridiculous) results to the end-user and potentially losing a customer. The cost of not understanding is slightly less expensive: the user will be asked to rephrase.
* Is the cost of complexity worth it? Yes, I'm sure it is. You would not believe the amount of typos people throw at the system and expect it to understand, and I could sure use the boost in [Precision and Recall](http://en.wikipedia.org/wiki/Precision_and_recall). | 2010/08/05 | [
"https://Stackoverflow.com/questions/3419400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | I would advise you to check the [trigram algorithm](http://www.postgresql.org/docs/8.3/interactive/pgtrgm.html). In my opinion it works better for finding typos than the edit distance algorithm. It should work faster as well, and if you keep the dictionary in a PostgreSQL database you can make use of an index.
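To make the idea concrete, here is a tiny pure-Python sketch of trigram similarity; pg_trgm does essentially this (plus indexing) on the database side:
```
def trigrams(word):
    # Pad the word so boundaries also produce trigrams, as pg_trgm does.
    padded = '  ' + word.lower() + ' '
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

print(similarity('Tursday', 'Thursday'))  # higher...
print(similarity('Tursday', 'Tuesday'))   # ...than this
```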
You may find this Stack Overflow [topic](https://stackoverflow.com/questions/307291/how-does-the-google-did-you-mean-algorithm-work) about Google's "Did you mean" useful. | If the research is your interest, I think continuing with that algorithm and trying to find decent weights would be fruitful.
I can't help you with typo stats, but I think you should also play with Python's difflib; specifically, the ratio() method of SequenceMatcher. It uses an algorithm which the docs <http://docs.python.org/library/difflib.html> claim is well suited to matches that 'look right', and it may be useful to augment or test what you're doing.
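A quick sketch of what that looks like on the "Tursday" example from the question (the scores are approximate):
```
from difflib import SequenceMatcher

def ratio(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(ratio('Tursday', 'Thursday'))  # ~0.93
print(ratio('Tursday', 'Tuesday'))   # ~0.86, so Thursday wins, as a human would guess
```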
For Python programmers just looking for typos, it is a good place to start. One of my coworkers has used both Levenshtein edit distance and SequenceMatcher's ratio() and got much better results from ratio(). |
3,419,400 | Where can I find some real world typo statistics?
I'm trying to match people's input text to internal objects, and people tend to make spelling mistakes.
There are 2 kinds of mistakes:
1. `typos` - "Helllo" instead of "Hello" / "Satudray" instead of "Saturday" etc.
2. `Spelling` - "Shikago" instead of "Chicago"
I use [Damerau-Levenshtein distance](http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) for the typos and [Double Metaphone](http://en.wikipedia.org/wiki/Double_Metaphone) for spelling (Python implementations [here](http://mwh.geek.nz/2009/04/26/python-damerau-levenshtein-distance/) and [here](http://atomboy.isa-geek.com/plone/Members/acoil/programing/double-metaphone/metaphone.py/view)).
I want to focus on the Damerau-Levenshtein (or simply `edit-distance`). The textbook implementations always use '1' for the weight of deletions, insertions, substitutions and transpositions. While this is simple and allows for nice algorithms, it doesn't match "reality" / "real-world probabilities".
Examples:
* I'm sure the likelihood of "Helllo" ("Hello") is greater than "Helzlo", yet they are both 1 edit distance away.
* "Gello" is closer than "Qello" to "Hello" on a QWERTY keyboard.
* Unicode transliterations: What is the "real" distance between "München" and "Munchen"?
What should the "real world" weights be for deletions, insertions, substitutions, and transpositions?
Even [Norvig's very cool spell corrector](http://norvig.com/spell-correct.html) uses non-weighted edit distance.
BTW - I'm sure the weights need to be functions and not simple floats (per the above examples)...
I can adjust the algorithm, but where can I "learn" these weights? I don't have access to [Google-scale data](http://www.google.com/jobs/britney.html)...
Should I just guess them?
**EDIT - trying to answer user questions:**
* My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance).
* I'm developing an NLP Travel Search engine, so my dictionary contains ~25K destinations (expected to grow to 100K), Time Expressions ~200 (expected 1K), People expressions ~100 (expected 300), Money Expressions ~100 (expected 500), "glue logic words" ("from", "beautiful", "apartment") ~2K (expected 10K) and so on...
* Usage of the edit distance is different for each of the above word-groups. I try to "auto-correct when obvious", e.g. 1 edit distance away from only 1 other word in the dictionary. I have **many** other hand-tuned rules, e.g. Double Metaphone fix which is not more than 2 edit distance away from a dictionary word with a length > 4... The list of rules continues to grow as I learn from real world input.
* "How many pairs of dictionary entries are within your threshold?": well, that depends on the "fancy weighting system" and on real world (future) input, doesn't it? Anyway, I have extensive unit tests so that every change I make to the system only makes it better (based on past inputs, of course). Most sub-6 letter words are within 1 edit distance from a word that is 1 edit distance away from another dictionary entry.
* Today when there are 2 dictionary entries at the same distance from the input I try to apply various statistics to better guess which the user meant (e.g. Paris, France is more likely to show up in my search than Pārīz, Iran).
* The cost of choosing a wrong word is returning semi-random (often ridiculous) results to the end-user and potentially losing a customer. The cost of not understanding is slightly less expensive: the user will be asked to rephrase.
* Is the cost of complexity worth it? Yes, I'm sure it is. You would not believe the amount of typos people throw at the system and expect it to understand, and I could sure use the boost in [Precision and Recall](http://en.wikipedia.org/wiki/Precision_and_recall). | 2010/08/05 | [
"https://Stackoverflow.com/questions/3419400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | A possible source for real-world typo statistics would be **Wikipedia's complete edit history**:
<http://download.wikimedia.org/>
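If you did extract (typo, correction) pairs from those dumps, a minimal sketch of turning them into per-character substitution counts could look like this (the pair list is made up for illustration):
```
from collections import Counter

# Hypothetical (typo, correction) pairs mined from the edit history.
pairs = [('Tursday', 'Thursday'), ('Helllo', 'Hello'), ('Gello', 'Hello')]

substitutions = Counter()
for typo, fix in pairs:
    if len(typo) == len(fix):          # same length: count substitutions only
        for a, b in zip(typo, fix):
            if a != b:
                substitutions[(a, b)] += 1

print(substitutions.most_common())     # [(('G', 'H'), 1)] for the pairs above
```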
Also, you might be interested in AWB's RegExTypoFix:
<http://en.wikipedia.org/wiki/Wikipedia:AWB/T> | I would advise you to check the [trigram algorithm](http://www.postgresql.org/docs/8.3/interactive/pgtrgm.html). In my opinion it works better for finding typos than the edit distance algorithm. It should work faster as well, and if you keep the dictionary in a PostgreSQL database you can make use of an index.
You may find this Stack Overflow [topic](https://stackoverflow.com/questions/307291/how-does-the-google-did-you-mean-algorithm-work) about Google's "Did you mean" useful. |
3,419,400 | Where can I find some real world typo statistics?
I'm trying to match people's input text to internal objects, and people tend to make spelling mistakes.
There are 2 kinds of mistakes:
1. `typos` - "Helllo" instead of "Hello" / "Satudray" instead of "Saturday" etc.
2. `Spelling` - "Shikago" instead of "Chicago"
I use [Damerau-Levenshtein distance](http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) for the typos and [Double Metaphone](http://en.wikipedia.org/wiki/Double_Metaphone) for spelling (Python implementations [here](http://mwh.geek.nz/2009/04/26/python-damerau-levenshtein-distance/) and [here](http://atomboy.isa-geek.com/plone/Members/acoil/programing/double-metaphone/metaphone.py/view)).
I want to focus on the Damerau-Levenshtein (or simply `edit-distance`). The textbook implementations always use '1' for the weight of deletions, insertions, substitutions and transpositions. While this is simple and allows for nice algorithms, it doesn't match "reality" / "real-world probabilities".
Examples:
* I'm sure the likelihood of "Helllo" ("Hello") is greater than "Helzlo", yet they are both 1 edit distance away.
* "Gello" is closer than "Qello" to "Hello" on a QWERTY keyboard.
* Unicode transliterations: What is the "real" distance between "München" and "Munchen"?
What should the "real world" weights be for deletions, insertions, substitutions, and transpositions?
Even [Norvig's very cool spell corrector](http://norvig.com/spell-correct.html) uses non-weighted edit distance.
BTW - I'm sure the weights need to be functions and not simple floats (per the above examples)...
I can adjust the algorithm, but where can I "learn" these weights? I don't have access to [Google-scale data](http://www.google.com/jobs/britney.html)...
Should I just guess them?
**EDIT - trying to answer user questions:**
* My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance).
* I'm developing an NLP Travel Search engine, so my dictionary contains ~25K destinations (expected to grow to 100K), Time Expressions ~200 (expected 1K), People expressions ~100 (expected 300), Money Expressions ~100 (expected 500), "glue logic words" ("from", "beautiful", "apartment") ~2K (expected 10K) and so on...
* Usage of the edit distance is different for each of the above word-groups. I try to "auto-correct when obvious", e.g. 1 edit distance away from only 1 other word in the dictionary. I have **many** other hand-tuned rules, e.g. Double Metaphone fix which is not more than 2 edit distance away from a dictionary word with a length > 4... The list of rules continues to grow as I learn from real world input.
* "How many pairs of dictionary entries are within your threshold?": well, that depends on the "fancy weighting system" and on real world (future) input, doesn't it? Anyway, I have extensive unit tests so that every change I make to the system only makes it better (based on past inputs, of course). Most sub-6 letter words are within 1 edit distance from a word that is 1 edit distance away from another dictionary entry.
* Today when there are 2 dictionary entries at the same distance from the input I try to apply various statistics to better guess which the user meant (e.g. Paris, France is more likely to show up in my search than Pārīz, Iran).
* The cost of choosing a wrong word is returning semi-random (often ridiculous) results to the end-user and potentially losing a customer. The cost of not understanding is slightly less expensive: the user will be asked to rephrase.
* Is the cost of complexity worth it? Yes, I'm sure it is. You would not believe the amount of typos people throw at the system and expect it to understand, and I could sure use the boost in [Precision and Recall](http://en.wikipedia.org/wiki/Precision_and_recall). | 2010/08/05 | [
"https://Stackoverflow.com/questions/3419400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | If the research is your interest, I think continuing with that algorithm and trying to find decent weights would be fruitful.
I can't help you with typo stats, but I think you should also play with Python's difflib; specifically, the ratio() method of SequenceMatcher. It uses an algorithm which the docs <http://docs.python.org/library/difflib.html> claim is well suited to matches that 'look right', and it may be useful to augment or test what you're doing.
For Python programmers just looking for typos, it is a good place to start. One of my coworkers has used both Levenshtein edit distance and SequenceMatcher's ratio() and got much better results from ratio(). | Some questions for you, to help you determine whether you should be asking your "where do I find real-world weights" question:
Have you actually measured the effectiveness of the uniform weighting implementation? How?
How many different "internal objects" do you have -- i.e. what is the size of your dictionary?
How are you actually using the edit distance e.g. John/Joan, Marmaduke/Marmeduke, Featherstonehaugh/Featherstonhaugh: is that "all 1 error" or is it 25% / 11.1% / 5.9% difference? What threshold are you using?
How many pairs of dictionary entries are within your threshold (e.g. John vs Joan, Joan vs Juan, etc)? If you introduced a fancy weighting system, how many pairs of dictionary entries would migrate (a) from inside the threshold to outside (b) vice versa?
What do you do if both John and Juan are in your dictionary and the user types Joan?
What are the penalties/costs of (1) choosing the wrong dictionary word (not the one that the user meant) (2) failing to recognise the user's input?
Will introducing a complicated weighting system actually reduce the probabilities of the above two error types by sufficient margin to make the complication and slower speed worthwhile?
BTW, how do you know what keyboard the user was using?
Update:
"""My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance)."""
Yes, Thursday -> Tursday by omitting an "h", but Tuesday -> Tursday by substituting "r" instead of "e". *E and R are next to each other on qwERty and azERty keyboards.* Every "real person" can easily **guess** that Thursday is more likely than Tuesday. Even if statistics as well as guesses point to Thursday being more likely than Tuesday (perhaps omitting h will cost 0.5 and e->r will cost 0.75), will the difference (perhaps 0.25) be significant enough to always pick Thursday? Can/will your system ask "Did you mean Tuesday?" or does/will it just plough ahead with Thursday? |
3,419,400 | Where can I find some real world typo statistics?
I'm trying to match people's input text to internal objects, and people tend to make spelling mistakes.
There are 2 kinds of mistakes:
1. `typos` - "Helllo" instead of "Hello" / "Satudray" instead of "Saturday" etc.
2. `Spelling` - "Shikago" instead of "Chicago"
I use [Damerau-Levenshtein distance](http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) for the typos and [Double Metaphone](http://en.wikipedia.org/wiki/Double_Metaphone) for spelling (Python implementations [here](http://mwh.geek.nz/2009/04/26/python-damerau-levenshtein-distance/) and [here](http://atomboy.isa-geek.com/plone/Members/acoil/programing/double-metaphone/metaphone.py/view)).
I want to focus on the Damerau-Levenshtein (or simply `edit-distance`). The textbook implementations always use '1' for the weight of deletions, insertions, substitutions and transpositions. While this is simple and allows for nice algorithms, it doesn't match "reality" / "real-world probabilities".
Examples:
* I'm sure the likelihood of "Helllo" ("Hello") is greater than "Helzlo", yet they are both 1 edit distance away.
* "Gello" is closer than "Qello" to "Hello" on a QWERTY keyboard.
* Unicode transliterations: What is the "real" distance between "München" and "Munchen"?
What should the "real world" weights be for deletions, insertions, substitutions, and transpositions?
Even [Norvig's very cool spell corrector](http://norvig.com/spell-correct.html) uses non-weighted edit distance.
BTW - I'm sure the weights need to be functions and not simple floats (per the above examples)...
I can adjust the algorithm, but where can I "learn" these weights? I don't have access to [Google-scale data](http://www.google.com/jobs/britney.html)...
Should I just guess them?
**EDIT - trying to answer user questions:**
* My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance).
* I'm developing an NLP Travel Search engine, so my dictionary contains ~25K destinations (expected to grow to 100K), Time Expressions ~200 (expected 1K), People expressions ~100 (expected 300), Money Expressions ~100 (expected 500), "glue logic words" ("from", "beautiful", "apartment") ~2K (expected 10K) and so on...
* Usage of the edit distance is different for each of the above word-groups. I try to "auto-correct when obvious", e.g. 1 edit distance away from only 1 other word in the dictionary. I have **many** other hand-tuned rules, e.g. Double Metaphone fix which is not more than 2 edit distance away from a dictionary word with a length > 4... The list of rules continues to grow as I learn from real world input.
* "How many pairs of dictionary entries are within your threshold?": well, that depends on the "fancy weighting system" and on real world (future) input, doesn't it? Anyway, I have extensive unit tests so that every change I make to the system only makes it better (based on past inputs, of course). Most sub-6 letter words are within 1 edit distance from a word that is 1 edit distance away from another dictionary entry.
* Today when there are 2 dictionary entries at the same distance from the input I try to apply various statistics to better guess which the user meant (e.g. Paris, France is more likely to show up in my search than Pārīz, Iran).
* The cost of choosing a wrong word is returning semi-random (often ridiculous) results to the end-user and potentially losing a customer. The cost of not understanding is slightly less expensive: the user will be asked to rephrase.
* Is the cost of complexity worth it? Yes, I'm sure it is. You would not believe the amount of typos people throw at the system and expect it to understand, and I could sure use the boost in [Precision and Recall](http://en.wikipedia.org/wiki/Precision_and_recall). | 2010/08/05 | [
"https://Stackoverflow.com/questions/3419400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | A possible source for real-world typo statistics would be **Wikipedia's complete edit history**:
<http://download.wikimedia.org/>
Also, you might be interested in AWB's RegExTypoFix:
<http://en.wikipedia.org/wiki/Wikipedia:AWB/T> | Some questions for you, to help you determine whether you should be asking your "where do I find real-world weights" question:
Have you actually measured the effectiveness of the uniform weighting implementation? How?
How many different "internal objects" do you have -- i.e. what is the size of your dictionary?
How are you actually using the edit distance e.g. John/Joan, Marmaduke/Marmeduke, Featherstonehaugh/Featherstonhaugh: is that "all 1 error" or is it 25% / 11.1% / 5.9% difference? What threshold are you using?
How many pairs of dictionary entries are within your threshold (e.g. John vs Joan, Joan vs Juan, etc)? If you introduced a fancy weighting system, how many pairs of dictionary entries would migrate (a) from inside the threshold to outside (b) vice versa?
What do you do if both John and Juan are in your dictionary and the user types Joan?
What are the penalties/costs of (1) choosing the wrong dictionary word (not the one that the user meant) (2) failing to recognise the user's input?
Will introducing a complicated weighting system actually reduce the probabilities of the above two error types by sufficient margin to make the complication and slower speed worthwhile?
BTW, how do you know what keyboard the user was using?
Update:
"""My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance)."""
Yes, Thursday -> Tursday by omitting an "h", but Tuesday -> Tursday by substituting "r" instead of "e". *E and R are next to each other on qwERty and azERty keyboards.* Every "real person" can easily **guess** that Thursday is more likely than Tuesday. Even if statistics as well as guesses point to Thursday being more likely than Tuesday (perhaps omitting h will cost 0.5 and e->r will cost 0.75), will the difference (perhaps 0.25) be significant enough to always pick Thursday? Can/will your system ask "Did you mean Tuesday?" or does/will it just plough ahead with Thursday? |
3,419,400 | Where can I find some real world typo statistics?
I'm trying to match people's input text to internal objects, and people tend to make spelling mistakes.
There are 2 kinds of mistakes:
1. `typos` - "Helllo" instead of "Hello" / "Satudray" instead of "Saturday" etc.
2. `Spelling` - "Shikago" instead of "Chicago"
I use [Damerau-Levenshtein distance](http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) for the typos and [Double Metaphone](http://en.wikipedia.org/wiki/Double_Metaphone) for spelling (Python implementations [here](http://mwh.geek.nz/2009/04/26/python-damerau-levenshtein-distance/) and [here](http://atomboy.isa-geek.com/plone/Members/acoil/programing/double-metaphone/metaphone.py/view)).
I want to focus on the Damerau-Levenshtein (or simply `edit-distance`). The textbook implementations always use '1' for the weight of deletions, insertions, substitutions and transpositions. While this is simple and allows for nice algorithms, it doesn't match "reality" / "real-world probabilities".
Examples:
* I'm sure the likelihood of "Helllo" ("Hello") is greater than "Helzlo", yet they are both 1 edit distance away.
* "Gello" is closer than "Qello" to "Hello" on a QWERTY keyboard.
* Unicode transliterations: What is the "real" distance between "München" and "Munchen"?
What should the "real world" weights be for deletions, insertions, substitutions, and transpositions?
Even [Norvig's very cool spell corrector](http://norvig.com/spell-correct.html) uses non-weighted edit distance.
BTW - I'm sure the weights need to be functions and not simple floats (per the above examples)...
I can adjust the algorithm, but where can I "learn" these weights? I don't have access to [Google-scale data](http://www.google.com/jobs/britney.html)...
Should I just guess them?
**EDIT - trying to answer user questions:**
* My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance).
* I'm developing an NLP Travel Search engine, so my dictionary contains ~25K destinations (expected to grow to 100K), Time Expressions ~200 (expected 1K), People expressions ~100 (expected 300), Money Expressions ~100 (expected 500), "glue logic words" ("from", "beautiful", "apartment") ~2K (expected 10K) and so on...
* Usage of the edit distance is different for each of the above word-groups. I try to "auto-correct when obvious", e.g. 1 edit distance away from only 1 other word in the dictionary. I have **many** other hand-tuned rules, e.g. Double Metaphone fix which is not more than 2 edit distance away from a dictionary word with a length > 4... The list of rules continues to grow as I learn from real world input.
* "How many pairs of dictionary entries are within your threshold?": well, that depends on the "fancy weighting system" and on real world (future) input, doesn't it? Anyway, I have extensive unit tests so that every change I make to the system only makes it better (based on past inputs, of course). Most sub-6 letter words are within 1 edit distance from a word that is 1 edit distance away from another dictionary entry.
* Today when there are 2 dictionary entries at the same distance from the input I try to apply various statistics to better guess which the user meant (e.g. Paris, France is more likely to show up in my search than Pārīz, Iran).
* The cost of choosing a wrong word is returning semi-random (often ridiculous) results to the end-user and potentially losing a customer. The cost of not understanding is slightly less expensive: the user will be asked to rephrase.
* Is the cost of complexity worth it? Yes, I'm sure it is. You would not believe the amount of typos people throw at the system and expect it to understand, and I could sure use the boost in [Precision and Recall](http://en.wikipedia.org/wiki/Precision_and_recall). | 2010/08/05 | [
"https://Stackoverflow.com/questions/3419400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | [Probability Scoring for Spelling Correction](http://publication.wilsonwong.me/paper/233281639.pdf) by Church and Gale might help. In that paper, the authors model typos as a noisy channel between the author and the computer. The appendix has tables for typos seen in a corpus of Associated Press publications. There is a table for each of the following kinds of typos:
* deletion
* insertion
* substitution
* transposition
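A minimal sketch of how such a count table could be turned into costs for a weighted edit distance; the counts below are invented for illustration, not taken from the paper:
```
import math

# Hypothetical slice of a substitution table: counts[(x, y)] = times
# the user typed x when they meant y.
counts = {('e', 'r'): 30, ('e', 'z'): 1, ('u', 'i'): 12}
total = sum(counts.values())

def substitution_cost(typed, meant):
    # Rarer confusions get a higher cost; add-one smoothing for unseen pairs.
    p = (counts.get((typed, meant), 0) + 1) / (total + 1)
    return -math.log(p)

print(substitution_cost('e', 'r') < substitution_cost('e', 'z'))  # True
```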
For example, examining the insertion table, we can see that *l* was incorrectly inserted after *l* 128 times (the highest number in that column). Using these tables, you can generate the probabilities you're looking for. | Some questions for you, to help you determine whether you should be asking your "where do I find real-world weights" question:
Have you actually measured the effectiveness of the uniform weighting implementation? How?
How many different "internal objects" do you have -- i.e. what is the size of your dictionary?
How are you actually using the edit distance e.g. John/Joan, Marmaduke/Marmeduke, Featherstonehaugh/Featherstonhaugh: is that "all 1 error" or is it 25% / 11.1% / 5.9% difference? What threshold are you using?
How many pairs of dictionary entries are within your threshold (e.g. John vs Joan, Joan vs Juan, etc)? If you introduced a fancy weighting system, how many pairs of dictionary entries would migrate (a) from inside the threshold to outside (b) vice versa?
What do you do if both John and Juan are in your dictionary and the user types Joan?
What are the penalties/costs of (1) choosing the wrong dictionary word (not the one that the user meant) (2) failing to recognise the user's input?
Will introducing a complicated weighting system actually reduce the probabilities of the above two error types by sufficient margin to make the complication and slower speed worthwhile?
BTW, how do you know what keyboard the user was using?
Update:
"""My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance)."""
Yes, Thursday -> Tursday by omitting an "h", but Tuesday -> Tursday by substituting "r" instead of "e". *E and R are next to each other on qwERty and azERty keyboards.* Every "real person" can easily **guess** that Thursday is more likely than Tuesday. Even if statistics as well as guesses point to Thursday being more likely than Tuesday (perhaps omitting h will cost 0.5 and e->r will cost 0.75), will the difference (perhaps 0.25) be significant enough to always pick Thursday? Can/will your system ask "Did you mean Tuesday?" or does/will it just plough ahead with Thursday? |
3,419,400 | Where can I find some real world typo statistics?
I'm trying to match people's input text to internal objects, and people tend to make spelling mistakes.
There are 2 kinds of mistakes:
1. `typos` - "Helllo" instead of "Hello" / "Satudray" instead of "Saturday" etc.
2. `Spelling` - "Shikago" instead of "Chicago"
I use [Damerau-Levenshtein distance](http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) for the typos and [Double Metaphone](http://en.wikipedia.org/wiki/Double_Metaphone) for spelling (Python implementations [here](http://mwh.geek.nz/2009/04/26/python-damerau-levenshtein-distance/) and [here](http://atomboy.isa-geek.com/plone/Members/acoil/programing/double-metaphone/metaphone.py/view)).
I want to focus on the Damerau-Levenshtein (or simply `edit-distance`). The textbook implementations always use '1' for the weight of deletions, insertions, substitutions and transpositions. While this is simple and allows for nice algorithms, it doesn't match "reality" / "real-world probabilities".
Examples:
* I'm sure the likelihood of "Helllo" ("Hello") is greater than "Helzlo", yet they are both 1 edit distance away.
* "Gello" is closer than "Qello" to "Hello" on a QWERTY keyboard.
* Unicode transliterations: What is the "real" distance between "München" and "Munchen"?
What should the "real world" weights be for deletions, insertions, substitutions, and transpositions?
Even [Norvig's very cool spell corrector](http://norvig.com/spell-correct.html) uses non-weighted edit distance.
BTW - I'm sure the weights need to be functions and not simple floats (per the above examples)...
I can adjust the algorithm, but where can I "learn" these weights? I don't have access to [Google-scale data](http://www.google.com/jobs/britney.html)...
Should I just guess them?
**EDIT - trying to answer user questions:**
* My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance).
* I'm developing an NLP Travel Search engine, so my dictionary contains ~25K destinations (expected to grow to 100K), Time Expressions ~200 (expected 1K), People expressions ~100 (expected 300), Money Expressions ~100 (expected 500), "glue logic words" ("from", "beautiful", "apartment") ~2K (expected 10K) and so on...
* Usage of the edit distance is different for each of the above word-groups. I try to "auto-correct when obvious", e.g. 1 edit distance away from only 1 other word in the dictionary. I have **many** other hand-tuned rules, e.g. Double Metaphone fix which is not more than 2 edit distance away from a dictionary word with a length > 4... The list of rules continues to grow as I learn from real world input.
* "How many pairs of dictionary entries are within your threshold?": well, that depends on the "fancy weighting system" and on real world (future) input, doesn't it? Anyway, I have extensive unit tests so that every change I make to the system only makes it better (based on past inputs, of course). Most sub-6 letter words are within 1 edit distance from a word that is 1 edit distance away from another dictionary entry.
* Today when there are 2 dictionary entries at the same distance from the input I try to apply various statistics to better guess which the user meant (e.g. Paris, France is more likely to show up in my search than Pārīz, Iran).
* The cost of choosing a wrong word is returning semi-random (often ridiculous) results to the end-user and potentially losing a customer. The cost of not understanding is slightly less expensive: the user will be asked to rephrase.
* Is the cost of complexity worth it? Yes, I'm sure it is. You would not believe the amount of typos people throw at the system and expect it to understand, and I could sure use the boost in [Precision and Recall](http://en.wikipedia.org/wiki/Precision_and_recall). | 2010/08/05 | [
"https://Stackoverflow.com/questions/3419400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | A possible source for real-world typo statistics would be **Wikipedia's complete edit history**:
<http://download.wikimedia.org/>
Also, you might be interested in AWB's RegExTypoFix:
<http://en.wikipedia.org/wiki/Wikipedia:AWB/T> | If the research is your interest, I think continuing with that algorithm and trying to find decent weights would be fruitful.
I can't help you with typo stats, but I think you should also play with Python's difflib; specifically, the ratio() method of SequenceMatcher. It uses an algorithm which the docs <http://docs.python.org/library/difflib.html> claim is well suited to matches that 'look right', and it may be useful to augment or test what you're doing.
For Python programmers just looking for typos, it is a good place to start. One of my coworkers has used both Levenshtein edit distance and SequenceMatcher's ratio() and got much better results from ratio(). |
3,419,400 | Where can I find some real world typo statistics?
I'm trying to match people's input text to internal objects, and people tend to make spelling mistakes.
There are 2 kinds of mistakes:
1. `typos` - "Helllo" instead of "Hello" / "Satudray" instead of "Saturday" etc.
2. `Spelling` - "Shikago" instead of "Chicago"
I use [Damerau-Levenshtein distance](http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) for the typos and [Double Metaphone](http://en.wikipedia.org/wiki/Double_Metaphone) for spelling (Python implementations [here](http://mwh.geek.nz/2009/04/26/python-damerau-levenshtein-distance/) and [here](http://atomboy.isa-geek.com/plone/Members/acoil/programing/double-metaphone/metaphone.py/view)).
I want to focus on the Damerau-Levenshtein (or simply `edit-distance`). The textbook implementations always use '1' for the weight of deletions, insertions, substitutions and transpositions. While this is simple and allows for nice algorithms, it doesn't match "reality" / "real-world probabilities".
Examples:
* I'm sure the likelihood of "Helllo" ("Hello") is greater than "Helzlo", yet they are both 1 edit distance away.
* "Gello" is closer than "Qello" to "Hello" on a QWERTY keyboard.
* Unicode transliterations: What is the "real" distance between "München" and "Munchen"?
What should the "real world" weights be for deletions, insertions, substitutions, and transpositions?
Even [Norvig's very cool spell corrector](http://norvig.com/spell-correct.html) uses non-weighted edit distance.
BTW - I'm sure the weights need to be functions and not simple floats (per the above examples)...
I can adjust the algorithm, but where can I "learn" these weights? I don't have access to [Google-scale data](http://www.google.com/jobs/britney.html)...
Should I just guess them?
**EDIT - trying to answer user questions:**
* My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance).
* I'm developing an NLP Travel Search engine, so my dictionary contains ~25K destinations (expected to grow to 100K), Time Expressions ~200 (expected 1K), People expressions ~100 (expected 300), Money Expressions ~100 (expected 500), "glue logic words" ("from", "beautiful", "apartment") ~2K (expected 10K) and so on...
* Usage of the edit distance is different for each of the above word-groups. I try to "auto-correct when obvious", e.g. 1 edit distance away from only 1 other word in the dictionary. I have **many** other hand-tuned rules, e.g. Double Metaphone fix which is not more than 2 edit distance away from a dictionary word with a length > 4... The list of rules continues to grow as I learn from real world input.
* "How many pairs of dictionary entries are within your threshold?": well, that depends on the "fancy weighting system" and on real world (future) input, doesn't it? Anyway, I have extensive unit tests so that every change I make to the system only makes it better (based on past inputs, of course). Most sub-6 letter words are within 1 edit distance from a word that is 1 edit distance away from another dictionary entry.
* Today when there are 2 dictionary entries at the same distance from the input I try to apply various statistics to better guess which the user meant (e.g. Paris, France is more likely to show up in my search than Pārīz, Iran).
* The cost of choosing a wrong word is returning semi-random (often ridiculous) results to the end-user and potentially losing a customer. The cost of not understanding is slightly less expensive: the user will be asked to rephrase.
* Is the cost of complexity worth it? Yes, I'm sure it is. You would not believe the amount of typos people throw at the system and expect it to understand, and I could sure use the boost in [Precision and Recall](http://en.wikipedia.org/wiki/Precision_and_recall). | 2010/08/05 | [
"https://Stackoverflow.com/questions/3419400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | [Probability Scoring for Spelling Correction](http://publication.wilsonwong.me/paper/233281639.pdf) by Church and Gale might help. In that paper, the authors model typos as a noisy channel between the author and the computer. The appendix has tables for typos seen in a corpus of Associated Press publications. There is a table for each of the following kinds of typos:
* deletion
* insertion
* substitution
* transposition
For example, examining the insertion table, we can see that *l* was incorrectly inserted after *l* 128 times (the highest number in that column). Using these tables, you can generate the probabilities you're looking for. | If the research is your interest, I think continuing with that algorithm and trying to find decent weights would be fruitful.
I can't help you with typo stats, but I think you should also play with Python's difflib; specifically, the ratio() method of SequenceMatcher. It uses an algorithm which the docs <http://docs.python.org/library/difflib.html> claim is well suited to matches that 'look right', and it may be useful to augment or test what you're doing.
For Python programmers just looking for typos, it is a good place to start. One of my coworkers has used both Levenshtein edit distance and SequenceMatcher's ratio() and got much better results from ratio(). |
3,419,400 | Where can I find some real world typo statistics?
I'm trying to match people's input text to internal objects, and people tend to make spelling mistakes.
There are 2 kinds of mistakes:
1. `typos` - "Helllo" instead of "Hello" / "Satudray" instead of "Saturday" etc.
2. `Spelling` - "Shikago" instead of "Chicago"
I use [Damerau-Levenshtein distance](http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) for the typos and [Double Metaphone](http://en.wikipedia.org/wiki/Double_Metaphone) for spelling (Python implementations [here](http://mwh.geek.nz/2009/04/26/python-damerau-levenshtein-distance/) and [here](http://atomboy.isa-geek.com/plone/Members/acoil/programing/double-metaphone/metaphone.py/view)).
I want to focus on the Damerau-Levenshtein (or simply `edit-distance`). The textbook implementations always use '1' for the weight of deletions, insertions, substitutions and transpositions. While this is simple and allows for nice algorithms, it doesn't match "reality" / "real-world probabilities".
Examples:
* I'm sure the likelihood of "Helllo" ("Hello") is greater than "Helzlo", yet they are both 1 edit distance away.
* "Gello" is closer than "Qello" to "Hello" on a QWERTY keyboard.
* Unicode transliterations: What is the "real" distance between "München" and "Munchen"?
What should the "real world" weights be for deletions, insertions, substitutions, and transpositions?
Even [Norvig's very cool spell corrector](http://norvig.com/spell-correct.html) uses non-weighted edit distance.
BTW - I'm sure the weights need to be functions and not simple floats (per the above examples)...
I can adjust the algorithm, but where can I "learn" these weights? I don't have access to [Google-scale data](http://www.google.com/jobs/britney.html)...
Should I just guess them?
**EDIT - trying to answer user questions:**
* My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance).
* I'm developing an NLP Travel Search engine, so my dictionary contains ~25K destinations (expected to grow to 100K), Time Expressions ~200 (expected 1K), People expressions ~100 (expected 300), Money Expressions ~100 (expected 500), "glue logic words" ("from", "beautiful", "apartment") ~2K (expected 10K) and so on...
* Usage of the edit distance is different for each of the above word-groups. I try to "auto-correct when obvious", e.g. 1 edit distance away from only 1 other word in the dictionary. I have **many** other hand-tuned rules, e.g. Double Metaphone fix which is not more than 2 edit distance away from a dictionary word with a length > 4... The list of rules continues to grow as I learn from real world input.
* "How many pairs of dictionary entries are within your threshold?": well, that depends on the "fancy weighting system" and on real world (future) input, doesn't it? Anyway, I have extensive unit tests so that every change I make to the system only makes it better (based on past inputs, of course). Most sub-6 letter words are within 1 edit distance from a word that is 1 edit distance away from another dictionary entry.
* Today when there are 2 dictionary entries at the same distance from the input I try to apply various statistics to better guess which the user meant (e.g. Paris, France is more likely to show up in my search than Pārīz, Iran).
* The cost of choosing a wrong word is returning semi-random (often ridiculous) results to the end-user and potentially losing a customer. The cost of not understanding is slightly less expensive: the user will be asked to rephrase.
* Is the cost of complexity worth it? Yes, I'm sure it is. You would not believe the amount of typos people throw at the system and expect it to understand, and I could sure use the boost in [Precision and Recall](http://en.wikipedia.org/wiki/Precision_and_recall). | 2010/08/05 | [
"https://Stackoverflow.com/questions/3419400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | A possible source for real-world typo statistics would be **Wikipedia's complete edit history**:
<http://download.wikimedia.org/>
Also, you might be interested in AWB's RegExTypoFix:
<http://en.wikipedia.org/wiki/Wikipedia:AWB/T> | [Probability Scoring for Spelling Correction](http://publication.wilsonwong.me/paper/233281639.pdf) by Church and Gale might help. In that paper, the authors model typos as a noisy channel between the author and the computer. The appendix has tables for typos seen in a corpus of Associated Press publications. There is a table for each of the following kinds of typos:
* deletion
* insertion
* substitution
* transposition
For example, examining the insertion table, we can see that *l* was incorrectly inserted after *l* 128 times (the highest number in that column). Using these tables, you can generate the probabilities you're looking for. |
27,438,448 | While developing QPython scripts, it would be much easier to develop them on a PC and only transfer them to the Android device for testing.
What is the easiest way to do such a transfer? (Development might require doing this very frequently.)
Thanks in advance for all your feedback.
tfv | 2014/12/12 | [
"https://Stackoverflow.com/questions/27438448",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4352930/"
] | In Linux with ADB installed, you can use the following in a terminal:
`adb push pcprojectfolder /sdcard/com.hipipal.qpyplus/projects/androidprojectfolder`
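To copy results back the other way, `adb pull` mirrors the syntax (the paths are just the same example folders in reverse):
`adb pull /sdcard/com.hipipal.qpyplus/projects/androidprojectfolder pcprojectfolder`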
You do need to manually end the task on the phone before pushing though. | There is an FTP service in the settings section which can help you transfer files between the Android device and a PC. |
27,438,448 | While developing QPython scripts, it would be much easier to develop them on a PC and only transfer them to the Android device for testing.
What is the easiest way to do such a transfer? (Development might require doing this very frequently.)
Thanks in advance for all your feedback.
tfv | 2014/12/12 | [
"https://Stackoverflow.com/questions/27438448",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4352930/"
] | In Linux with ADB installed, you can use the following in a terminal:
`adb push pcprojectfolder /sdcard/com.hipipal.qpyplus/projects/androidprojectfolder`
You do need to manually end the task on the phone before pushing though. | QPy for Android has built-in FTP. Just enter the local address it provides into an FTP program (like bareFTP for Ubuntu) |
11,862,679 | I have the following Python code:
```
#This program converts the speeds 60 KPH
#through 130 KPH (in 10 kph increments)
#to MPH
#Global constants
START = 60
END = 131
INCREMENT = 10
CONVERSION_FACTOR = 0.6214
def main():
#Print the table headings
print('KPH\t\tMPH')
print('----------------')
#Print the speeds
for kph in range(START, END, INCREMENT):
mph = kph * CONVERSION_FACTOR
print(kph, '\t\t', format(mph, '.1f'))
#Call the main function
main()
```
Running this code I get the following result:
```
KPH MPH
----------------
60 37.3
70 43.5
80 49.7
90 55.9
100 62.1
110 68.4
120 74.6
130 80.8
```
How can I right-align the second column so that my results are shown properly? | 2012/08/08 | [
"https://Stackoverflow.com/questions/11862679",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/881739/"
] | Use the [Format Specification Mini-Language](http://docs.python.org/library/string.html#formatspec)
```
"{:>10.3}".format(12.34)
```
Result (using `_` for spaces):
```
______12.3
``` | You could use `printf`-style formatting to specify the width too.
```
>>> print('%10.2f' % 1.23456)
      1.23
```
In your example you could use:
```
print('%-10i%.1f' % (kph, mph))
``` |
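For completeness, a sketch of the same table loop using the format-spec syntax from the other answer (the field widths are a guess at what looks right):
```
print('KPH\t\tMPH')
print('----------------')
for kph in range(START, END, INCREMENT):
    mph = kph * CONVERSION_FACTOR
    # '<8' left-aligns KPH in 8 columns, '>8.1f' right-aligns MPH in 8
    print('{:<8}{:>8.1f}'.format(kph, mph))
```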
57,447,363 | I installed Python 3.7 via the Windows Store. I have selected this runtime environment in VScode, and VScode insists on nagging me about installing the pylint addon (or another linting addon). It attempts to run this command in a Git Bash terminal (MINGW64):
```
$ C:/Users/rjamd/AppData/Local/Microsoft/WindowsApps/python.exe -m pip install -U pylint --user
```
...which is denied permission to run:
```
bash: C:/Users/rjamd/AppData/Local/Microsoft/WindowsApps/python.exe:
Permission denied
```
Seems like they didn't really think about the distribution method screwing with permissions!
I should not be getting *any* permissions issues, because this is attempting to run in my local user directory, which I have full permissions on.
Full permissions are granted to:
* SYSTEM (group)
* rjamd (my user)
* Administrators (group)
Everything appears to be in order.
Update:
* Python will not run at all in the Git bash terminal, but it runs in CMD (without running CMD as admin).
* Git bash cannot find any other instances of Python using `which --all python`. It prints the same executable path twice in a row. I have 3 conda environments, and two different paths for the new Python environment ("AppData" and "Program Files"), all of which show up in VScode's environment selection menu. | 2019/08/11 | [
"https://Stackoverflow.com/questions/57447363",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1710956/"
] | I got a solution by disabling the execution aliases for Python in the Windows settings. <https://github.com/gmacario/spaceappschallenge-2019/issues/6>. It's under "Manage app execution aliases". Thanks. | (Microsoft employee and CPython core developer here).
This is most likely a bug in Git Bash. They are probably trying to read some information from the executable in a way that doesn't handle reparse points correctly (and may not be able to handle Windows symlinks either, in that case).
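One quick way to narrow it down is to run the same executable from cmd.exe (the path is the one from the question); if it prints a version there, the file itself is fine and the problem is in how Git Bash inspects it:
```
C:\Users\rjamd\AppData\Local\Microsoft\WindowsApps\python.exe --version
```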
Launching the executable under `AppData` from anywhere else in Windows should be okay, though if you can't launch it with PowerShell or cmd.exe, please use the Windows Feedback tool to report a bug for the team. |
63,750,405 | I keep getting "can only concatenate str (not list) to str" but I'm not sure why I'm getting this error. I'm fairly new to Python so any help would be appreciated.
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
main()
``` | 2020/09/05 | [
"https://Stackoverflow.com/questions/63750405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14224247/"
] | Use zip
```py
animals = ["cow", "pig", "horse", "chick", "sheep"]
sounds = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal, sound in zip(animals, sounds):
print(verseFor(animal, sound))
``` | You can use this:
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
for animal,sound in zip(["cow", "pig", "horse", "chick", "sheep"],["moo", "oink", "neigh", "cluck", "bahh"]):
print(verseFor(animal, sound))
main()
``` |
63,750,405 | I keep getting "can only concatenate str (not list) to str" but I'm not sure why I'm getting this error. I'm fairly new to Python so any help would be appreciated.
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
main()
``` | 2020/09/05 | [
"https://Stackoverflow.com/questions/63750405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14224247/"
] | Use zip
```py
animals = ["cow", "pig", "horse", "chick", "sheep"]
sounds = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal, sound in zip(animals, sounds):
print(verseFor(animal, sound))
``` | The answer to the actual question is that you need to iterate over both lists together using `zip`.
However, separately, it bothers me that it prints "a oink". Here is a version that will print "an oink" -- the `a` function returns a word preceded by the appropriate indefinite article ("a" or "an").
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!"
def a(thing):
if thing[0] in 'aeiou':
return f'an {thing}'
else:
return f'a {thing}'
def verseFor(animal, sound):
an_animal = a(animal)
a_sound = a(sound)
lyrics = f"""{oldMacdonald()}
And on his farm he had {an_animal}, Ee-igh, Ee-igh, Oh!
With {a_sound}, {sound} here and {a_sound}, {sound} there.
Here {a_sound}, there {a_sound}, everywhere {a_sound}, {sound}.
{oldMacdonald()}
"""
return lyrics
def main():
sounds = ["moo", "oink", "neigh", "cluck", "bahh"]
animals = ["cow", "pig", "horse", "chick", "sheep"]
for animal, sound in zip(animals, sounds):
print(verseFor(animal, sound))
main()
``` |
63,750,405 | I keep getting "can only concatenate str (not list) to str" but I'm not sure why I'm getting this error. I'm fairly new to Python so any help would be appreciated.
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
main()
``` | 2020/09/05 | [
"https://Stackoverflow.com/questions/63750405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14224247/"
] | Use zip
```py
animals = ["cow", "pig", "horse", "chick", "sheep"]
sounds = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal, sound in zip(animals, sounds):
print(verseFor(animal, sound))
``` | ```
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
```
The problem is with the sound list. Here is a quick fix:
```
def main():
    animals = [ ["cow", "moo"], ["pig", "oink"], ["sheep", "bahh"] ]
for animal in animals:
print(verseFor(animal[0], animal[1]))
```
Or you can use this method:
```
def main():
animals = [
        {
            "name" : "cow",
            "sound": "moo"
},
{
"name" : "pig",
"sound": "haha"
},
{
"name" : "dog",
"sound": "lol"
}
]
for animal in animals:
        print(verseFor(animal["name"], animal["sound"]))
``` |
63,750,405 | I keep getting "can only concatenate str (not list) to str" but I'm not sure why I'm getting this error. I'm fairly new to Python so any help would be appreciated.
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
main()
``` | 2020/09/05 | [
"https://Stackoverflow.com/questions/63750405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14224247/"
] | Use zip
```py
animals = ["cow", "pig", "horse", "chick", "sheep"]
sounds = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal, sound in zip(animals, sounds):
print(verseFor(animal, sound))
``` | As @Janith said, you must concatenate strings to strings. So to achieve what you are trying to do, you have to rewrite it like this:
```
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound[0] + ", " + sound[1] + " here and a " + sound[2] + ", " \
"" + sound[3] + ".\nHere a " + sound[4] + ", there a " + sound[5] + ", " \
"everywhere a " + sound[6] + ", " + sound[7] + "\n" + oldMacdonald()
```
But there is an additional issue: you have only five animals with their 5 corresponding sounds, yet you're placing 8 sounds in the lyrics! So we are placing at least 3 extra "sounds" in the lyrics.
You may want to check one of the good Python tutorials on the web, like the one on the official Python website: <https://docs.python.org/3/tutorial/>
63,750,405 | I keep getting "can only concatenate str (not list) to str" but I'm not sure why I'm getting this error. I'm fairly new to Python so any help would be appreciated.
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
main()
``` | 2020/09/05 | [
"https://Stackoverflow.com/questions/63750405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14224247/"
] | So basically, in this block of code
```py
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
```
`sound` is a list and `animal` is iterating over a list, i.e. `animal` takes single elements of the list: `cow` in the 1st iteration, `pig` in the 2nd, `horse` in the 3rd, and so on.
But you are passing `sound` as a whole list, not single elements of it, in `verseFor`.
So you have to iterate over both lists to pass each animal and its sound element by element. As mentioned, you can use `zip` like this.
```py
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
animal = ["cow", "pig", "horse", "chick", "sheep"]
for ani, sou in zip(animal, sound):
print(verseFor(ani, sou))
```
Now you are looping over the elements of sound and animal together. If you look at the output of `zip`, you get this.
```
list(zip(animal,sound))
>>>[('cow', 'moo'),
('pig', 'oink'),
('horse', 'neigh'),
('chick', 'cluck'),
('sheep', 'bahh')]
```
So basically, in the first iteration of the code I provided, we are passing `cow` as `ani` and `moo` as `sou`; then in the next iteration `pig` and `oink` respectively, and so on. | Use zip
```py
animals = ["cow", "pig", "horse", "chick", "sheep"]
sounds = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal, sound in zip(animals, sounds):
print(verseFor(animal, sound))
``` |
63,750,405 | I keep getting "can only concatenate str (not list) to str" but I'm not sure why I'm getting this error. I'm fairly new to Python so any help would be appreciated.
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
main()
``` | 2020/09/05 | [
"https://Stackoverflow.com/questions/63750405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14224247/"
] | So basically, in this block of code:
```py
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
```
`sound` is a list, and `animal` iterates over a list, i.e. `animal` takes single elements of the list: `cow` in the 1st iteration, `pig` in the 2nd, `horse` in the 3rd, and so on.
But you are passing `sound` as a whole list, not single elements of it, to `verseFor`.
So you have to iterate over both lists to pass an animal and its sound element by element. You can use `zip` like this:
```py
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
animal = ["cow", "pig", "horse", "chick", "sheep"]
for ani, sou in zip(animal, sound):
print(verseFor(ani, sou))
```
Now you are looping over the elements of the sounds and animals in parallel. If you look at the output of `zip`, you get this:
```
list(zip(animal,sound))
>>>[('cow', 'moo'),
('pig', 'oink'),
('horse', 'neigh'),
('chick', 'cluck'),
('sheep', 'bahh')]
```
So in the first iteration of the code I provided, we pass `cow` as `ani` and `moo` as `sou`; in the next iteration `pig` and `oink`, and so on. | You can use this:
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
for animal,sound in zip(["cow", "pig", "horse", "chick", "sheep"],["moo", "oink", "neigh", "cluck", "bahh"]):
print(verseFor(animal, sound))
main()
``` |
63,750,405 | I keep getting "can only concatenate str (not list) to str" but I'm not sure why I'm getting this error. I'm fairly new to Python, so any help would be appreciated.
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
main()
``` | 2020/09/05 | [
"https://Stackoverflow.com/questions/63750405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14224247/"
] | So basically, in this block of code:
```py
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
```
`sound` is a list, and `animal` iterates over a list, i.e. `animal` takes single elements of the list: `cow` in the 1st iteration, `pig` in the 2nd, `horse` in the 3rd, and so on.
But you are passing `sound` as a whole list, not single elements of it, to `verseFor`.
So you have to iterate over both lists to pass an animal and its sound element by element. You can use `zip` like this:
```py
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
animal = ["cow", "pig", "horse", "chick", "sheep"]
for ani, sou in zip(animal, sound):
print(verseFor(ani, sou))
```
Now you are looping over the elements of the sounds and animals in parallel. If you look at the output of `zip`, you get this:
```
list(zip(animal,sound))
>>>[('cow', 'moo'),
('pig', 'oink'),
('horse', 'neigh'),
('chick', 'cluck'),
('sheep', 'bahh')]
```
So in the first iteration of the code I provided, we pass `cow` as `ani` and `moo` as `sou`; in the next iteration `pig` and `oink`, and so on. | The answer to the actual question is that you need to iterate over both lists together using `zip`.
However, separately, it bothers me that it prints "a oink". Here is a version that will print "an oink" -- the `a` function returns a word preceded by the appropriate indefinite article ("a" or "an").
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!"
def a(thing):
if thing[0] in 'aeiou':
return f'an {thing}'
else:
return f'a {thing}'
def verseFor(animal, sound):
an_animal = a(animal)
a_sound = a(sound)
lyrics = f"""{oldMacdonald()}
And on his farm he had {an_animal}, Ee-igh, Ee-igh, Oh!
With {a_sound}, {sound} here and {a_sound}, {sound} there.
Here {a_sound}, there {a_sound}, everywhere {a_sound}, {sound}.
{oldMacdonald()}
"""
return lyrics
def main():
sounds = ["moo", "oink", "neigh", "cluck", "bahh"]
animals = ["cow", "pig", "horse", "chick", "sheep"]
for animal, sound in zip(animals, sounds):
print(verseFor(animal, sound))
main()
``` |
63,750,405 | I keep getting "can only concatenate str (not list) to str" but I'm not sure why I'm getting this error. I'm fairly new to Python, so any help would be appreciated.
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
main()
``` | 2020/09/05 | [
"https://Stackoverflow.com/questions/63750405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14224247/"
] | So basically, in this block of code:
```py
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
```
`sound` is a list, and `animal` iterates over a list, i.e. `animal` takes single elements of the list: `cow` in the 1st iteration, `pig` in the 2nd, `horse` in the 3rd, and so on.
But you are passing `sound` as a whole list, not single elements of it, to `verseFor`.
So you have to iterate over both lists to pass an animal and its sound element by element. You can use `zip` like this:
```py
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
animal = ["cow", "pig", "horse", "chick", "sheep"]
for ani, sou in zip(animal, sound):
print(verseFor(ani, sou))
```
Now you are looping over the elements of the sounds and animals in parallel. If you look at the output of `zip`, you get this:
```
list(zip(animal,sound))
>>>[('cow', 'moo'),
('pig', 'oink'),
('horse', 'neigh'),
('chick', 'cluck'),
('sheep', 'bahh')]
```
So in the first iteration of the code I provided, we pass `cow` as `ani` and `moo` as `sou`; in the next iteration `pig` and `oink`, and so on. | ```
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
```
The problem is with the `sound` list: you pass the whole list instead of a single sound. Here is a quick fix:
```
def main():
    animals = [ ["cow", "moo"], ["pig", "oink"], ["sheep", "bahh"] ]
for animal in animals:
print(verseFor(animal[0], animal[1]))
```
or you can use this method:
```
def main():
animals = [
{
"name" : "cow",
"sound": "moe"
},
{
"name" : "pig",
"sound": "haha"
},
{
"name" : "dog",
"sound": "lol"
}
]
for animal in animals:
        print(verseFor(animal["name"], animal["sound"]))
``` |
63,750,405 | I keep getting "can only concatenate str (not list) to str" but I'm not sure why I'm getting this error. I'm fairly new to Python, so any help would be appreciated.
```
def oldMacdonald():
return "Old MacDonald had a farm, Ee-igh, Ee-igh, Oh!\n"
def verseFor(animal, sound):
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound + ", " + sound + " here and a " + sound + ", " \
"" + sound + ".\nHere a " + sound + ", there a " + sound + ", " \
"everywhere a " + sound + ", " + sound + "\n" + oldMacdonald()
return lyrics
def main():
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
main()
``` | 2020/09/05 | [
"https://Stackoverflow.com/questions/63750405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14224247/"
] | So basically, in this block of code:
```py
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
for animal in ["cow", "pig", "horse", "chick", "sheep"]:
print(verseFor(animal, sound))
```
`sound` is a list, and `animal` iterates over a list, i.e. `animal` takes single elements of the list: `cow` in the 1st iteration, `pig` in the 2nd, `horse` in the 3rd, and so on.
But you are passing `sound` as a whole list, not single elements of it, to `verseFor`.
So you have to iterate over both lists to pass an animal and its sound element by element. You can use `zip` like this:
```py
sound = ["moo", "oink", "neigh", "cluck", "bahh"]
animal = ["cow", "pig", "horse", "chick", "sheep"]
for ani, sou in zip(animal, sound):
print(verseFor(ani, sou))
```
Now you are looping over the elements of the sounds and animals in parallel. If you look at the output of `zip`, you get this:
```
list(zip(animal,sound))
>>>[('cow', 'moo'),
('pig', 'oink'),
('horse', 'neigh'),
('chick', 'cluck'),
('sheep', 'bahh')]
```
So in the first iteration of the code I provided, we pass `cow` as `ani` and `moo` as `sou`; in the next iteration `pig` and `oink`, and so on. | As @Janith said, you must concatenate strings to strings. So to achieve what you are trying to do, you have to rewrite it like this:
```
lyrics = oldMacdonald() + "And on his farm he had a " + animal + ", Ee-igh, Ee-igh, Oh!\n" \
"With a " + sound[0] + ", " + sound[1] + " here and a " + sound[2] + ", " \
"" + sound[3] + ".\nHere a " + sound[4] + ", there a " + sound[5] + ", " \
"everywhere a " + sound[6] + ", " + sound[7] + "\n" + oldMacdonald()
```
But there is an additional issue: you have only five animals with five corresponding sounds, yet the lyrics above index eight sound slots, so `sound[5]` through `sound[7]` will raise an `IndexError`.
You may want to work through one of the good Python tutorials on the web, like the official one: <https://docs.python.org/3/tutorial/>
57,965,649 | I'm trying to use Snakemake with a Docker image, but am having trouble with the Docker volume. Unfortunately, there are no details on how to use 'singularity-args' to do this.
My Snakemake file is:
```
rule all:
input:
'a/file3.txt'
rule step1:
output:
touch('a/file1.txt')
rule step2:
input:
rules.step1.output[0]
output:
'a/file2.txt'
params:
text = 'this is a test',
path = '/data/file2.txt'
singularity:
"docker://XXX/test"
shell:
"python test.py {params.text} {params.path}"
rule step3:
input:
rules.step2.output[0]
output:
touch('a/file3.txt')
```
The Docker image is basically a Python script that writes a string to a file (for testing purposes). I'm trying to mount my home directory to the container's /data directory. With Docker, I'm able to mount a volume using '-v'.
What is the correct way of doing this with Snakemake?
I've tried the following commands (on MacOS and Ubuntu 18.04) and both have failed.
```
snakemake -s pipeline.py --use-singularity --singularity-args “-B /home/XXX/snakemake/a:/data”
snakemake -s pipeline.py --use-singularity --singularity-args “-B /home/XXX/snakemake/a”
```
The error message is:
```
No rule to produce /home/XXX/snakemake/a:/data” (if you use input functions make sure that they don't raise unexpected exceptions).
```
Am I missing a step?
Thanks in advance! | 2019/09/16 | [
"https://Stackoverflow.com/questions/57965649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6693924/"
] | Just a trivial check... In your command lines you have curly ("smart") double quotes (`“`) instead of the straight ones (`"`), e.g.:
```
snakemake -s pipeline.py --use-singularity --singularity-args “-B /home/XXX/snakemake/a”
```
Maybe you are copying and pasting from a text editor that uses curly quotes? I would use straight quotes, as the other kind will not be interpreted correctly by the shell. | I was able to get it working on Ubuntu 18.04 with the following command:
```
SINGULARITY_BINDPATH=“/home/XXX/snakemake/a:/data”; snakemake -s pipeline.py --latency-wait 10 --use-singularity
```
Unfortunately I wasn't able to get the `--singularity-args` flag to work. Regardless of using `--bind` or `-B`, I got the error "No rule to produce /Users/XXX/Devel/snakemake/a:/data".
I’m using Snakemake 5.6.0 inside a Python3 virtual environment.
Also, on a side note, I don't believe the macOS Singularity binary works. It had issues with Snakemake.
This work-around is good enough for now.
**UPDATE**
While this solution worked, the real solution (typo) was provided by @dariober. |
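Since `SINGULARITY_BINDPATH` is what ultimately worked, the same binding can also be set from inside the Snakefile itself. A minimal sketch (Snakefiles are Python; this assumes Snakemake launches `singularity` as a child process that inherits the environment):
```py
import os

# Bind the host directory into the container at /data for every
# singularity-backed rule, equivalent to exporting the variable in the shell.
os.environ["SINGULARITY_BINDPATH"] = "/home/XXX/snakemake/a:/data"
```
That way the workflow does not depend on the variable being exported beforehand.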
63,845,765 | I'm a newbie in Python. I have the following list of dictionaries:
```
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]
```
And I want to get a result like:
```
l = [{'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 4, 'source_id': 200}]
```
I understand the first step is sorting the list:
```
sorted_sources_list = sorted(l, key=lambda source: source['id'])
```
But I don't know how to drop the duplicates that have the greater id. | 2020/09/11 | [
"https://Stackoverflow.com/questions/63845765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11149563/"
] | Finally, I found the answer in the [Laravel 8.x upgrade guide](https://laravel.com/docs/8.x/upgrade). I wrote the controller name with its full namespace instead of importing it.
```
Route::prefix('admin')->namespace('Admin')->group(static function() {
Route::middleware('auth')->group(static function () {
//...
Route::resource('profile', '\App\Http\Controllers\Admin\ProfileController');
});
});
``` | I also faced the same issue with the latest Laravel 7 version. Here is how I solved it:
First, import the controller at the top of the routes file:
[enter image description here](https://i.stack.imgur.com/hCpo4.png)
use `\App\Http\Controllers\Admin\ProfileController`
Then register the route using the full class name, like this:
```
Route::resource('profile', '\App\Http\Controllers\Admin\ProfileController');
```
This will automatically create routes for all the resource methods defined in the ProfileController class. See the attached image for an example using TodoController.
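Coming back to the Python question itself, a minimal sketch that keeps the smallest `id` per `source_id`, building on the `sorted` call already shown (assuming Python 3.7+, where dicts preserve insertion order):
```py
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]

best = {}
for item in sorted(l, key=lambda source: source['id']):
    # After sorting by id, the first item seen per source_id is the one
    # with the smallest id; later duplicates are ignored by setdefault.
    best.setdefault(item['source_id'], item)

result = list(best.values())
print(result)
# [{'id': 1, 'source_id': 100}, {'id': 3, 'source_id': 1234}, {'id': 4, 'source_id': 200}]
```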
63,845,765 | I'm a newbie in Python. I have the following list of dictionaries:
```
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]
```
And I want to get a result like:
```
l = [{'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 4, 'source_id': 200}]
```
I understand the first step is sorting the list:
```
sorted_sources_list = sorted(l, key=lambda source: source['id'])
```
But I don't know how to drop the duplicates that have the greater id. | 2020/09/11 | [
"https://Stackoverflow.com/questions/63845765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11149563/"
] | Finally, I found the answer in the [Laravel 8.x upgrade guide](https://laravel.com/docs/8.x/upgrade). I wrote the controller name with its full namespace instead of importing it.
```
Route::prefix('admin')->namespace('Admin')->group(static function() {
Route::middleware('auth')->group(static function () {
//...
Route::resource('profile', '\App\Http\Controllers\Admin\ProfileController');
});
});
``` | Make sure you followed the upgrade guide. There have been quite a few things that changed from v7 to v8.
In `App/Providers/RouteServiceProvider.php`, add `$namespace`:
```
class RouteServiceProvider extends ServiceProvider
{
/**
* This namespace is applied to your controller routes.
*
* In addition, it is set as the URL generator's root namespace.
*
* @var string
*/
protected $namespace = 'App\Http\Controllers';
}
```
You can find more answers here: <https://stackoverflow.com/a/63808132/799176>
63,845,765 | I'm a newbie in Python. I have the following list of dictionaries:
```
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]
```
And I want to get a result like:
```
l = [{'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 4, 'source_id': 200}]
```
I understand the first step is sorting the list:
```
sorted_sources_list = sorted(l, key=lambda source: source['id'])
```
But I don't know how to drop the duplicates that have the greater id. | 2020/09/11 | [
"https://Stackoverflow.com/questions/63845765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11149563/"
] | ```
Route::resource('invoice','\App\Http\Controllers\InvoiceController');
``` | In Laravel 8, first you have to import the controller, like:
```
use App\Http\Controllers\ProfileController;
```
And then use the resource:
```
Route::resource('profile', 'App\Http\Controllers\ProfileController');
```
Or, if you use the imported class reference:
```
Route::resource('profile', ProfileController::class);
```
63,845,765 | I'm a newbie in Python. I have the following list of dictionaries:
```
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]
```
And I want to get a result like:
```
l = [{'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 4, 'source_id': 200}]
```
I understand the first step is sorting the list:
```
sorted_sources_list = sorted(l, key=lambda source: source['id'])
```
But I don't know how to drop the duplicates that have the greater id. | 2020/09/11 | [
"https://Stackoverflow.com/questions/63845765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11149563/"
] | Finally, I found the answer in the [Laravel 8.x upgrade guide](https://laravel.com/docs/8.x/upgrade). I wrote the controller name with its full namespace instead of importing it.
```
Route::prefix('admin')->namespace('Admin')->group(static function() {
Route::middleware('auth')->group(static function () {
//...
Route::resource('profile', '\App\Http\Controllers\Admin\ProfileController');
});
});
``` | In Laravel 8, first you have to import the controller, like:
```
use App\Http\Controllers\ProfileController;
```
And then use the resource:
```
Route::resource('profile', 'App\Http\Controllers\ProfileController');
```
Or, if you use the imported class reference:
```
Route::resource('profile', ProfileController::class);
```
63,845,765 | I'm a newbie in Python. I have the following list of dictionaries:
```
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]
```
And I want to get a result like:
```
l = [{'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 4, 'source_id': 200}]
```
I understand the first step is sorting the list:
```
sorted_sources_list = sorted(l, key=lambda source: source['id'])
```
But I don't know how to drop the duplicates that have the greater id. | 2020/09/11 | [
"https://Stackoverflow.com/questions/63845765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11149563/"
] | Make sure you followed the upgrade guide. There have been quite a few things that changed from v7 to v8.
In `App/Providers/RouteServiceProvider.php`, add `$namespace`:
```
class RouteServiceProvider extends ServiceProvider
{
/**
* This namespace is applied to your controller routes.
*
* In addition, it is set as the URL generator's root namespace.
*
* @var string
*/
protected $namespace = 'App\Http\Controllers';
}
```
You can find more answers here: <https://stackoverflow.com/a/63808132/799176> | The reason this is an issue is that Laravel 8 removed the default namespace from the RouteServiceProvider.
If you want to use the `ProfileController::class` functionality, you need to reset `protected $namespace` to null in the RouteServiceProvider to match the base code.
Otherwise it will prepend `$namespace` to whichever class you pass.
So change:
```
protected $namespace = null;
```
Then remove the `->namespace('Admin')` call from your routes file.
63,845,765 | I'm a newbie in Python. I have the following list of dictionaries:
```
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]
```
And I want to get a result like:
```
l = [{'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 4, 'source_id': 200}]
```
I understand the first step is sorting the list:
```
sorted_sources_list = sorted(l, key=lambda source: source['id'])
```
But I don't know how to drop the duplicates that have the greater id. | 2020/09/11 | [
"https://Stackoverflow.com/questions/63845765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11149563/"
] | Run the following step to clear the route cache:
```php
php artisan route:clear
``` | Make sure you followed the upgrade guide. There have been quite a few things that changed from v7 to v8.
In `App/Providers/RouteServiceProvider.php`, add `$namespace`:
```
class RouteServiceProvider extends ServiceProvider
{
/**
* This namespace is applied to your controller routes.
*
* In addition, it is set as the URL generator's root namespace.
*
* @var string
*/
protected $namespace = 'App\Http\Controllers';
}
```
You can find more answers here: <https://stackoverflow.com/a/63808132/799176>
63,845,765 | I'm a newbie in Python. I have the following list of dictionaries:
```
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]
```
And I want to get a result like:
```
l = [{'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 4, 'source_id': 200}]
```
I understand the first step is sorting the list:
```
sorted_sources_list = sorted(l, key=lambda source: source['id'])
```
But I don't know how to drop the duplicates that have the greater id. | 2020/09/11 | [
"https://Stackoverflow.com/questions/63845765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11149563/"
] | I also faced the same issue with the latest Laravel 7 version. Here is how I solved it:
First, import the controller at the top of the routes file:
[enter image description here](https://i.stack.imgur.com/hCpo4.png)
use `\App\Http\Controllers\Admin\ProfileController`
Then register the route using the full class name, like this:
```
Route::resource('profile', '\App\Http\Controllers\Admin\ProfileController');
```
This will automatically create routes for all the resource methods defined in the ProfileController class. See the attached image for an example using TodoController. | The reason this is an issue is that Laravel 8 removed the default namespace from the RouteServiceProvider.
If you want to use the `ProfileController::class` functionality, you need to reset `protected $namespace` to null in the RouteServiceProvider to match the base code.
Otherwise it will prepend `$namespace` to whichever class you pass.
So change:
```
protected $namespace = null;
```
Then remove the `->namespace('Admin')` call from your routes file.
63,845,765 | I'm a newbie in Python. I have the following list of dictionaries:
```
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]
```
And I want to get a result like:
```
l = [{'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 4, 'source_id': 200}]
```
I understand the first step is sorting the list:
```
sorted_sources_list = sorted(l, key=lambda source: source['id'])
```
But I don't know how to drop the duplicates that have the greater id. | 2020/09/11 | [
"https://Stackoverflow.com/questions/63845765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11149563/"
] | Finally, I found the answer in the [Laravel 8.x upgrade guide](https://laravel.com/docs/8.x/upgrade). I wrote the controller name with its full namespace instead of importing it.
```
Route::prefix('admin')->namespace('Admin')->group(static function() {
Route::middleware('auth')->group(static function () {
//...
Route::resource('profile', '\App\Http\Controllers\Admin\ProfileController');
});
});
``` | Running `php artisan route:list`, I was having the same problem.
>
> Target class [App\Http\Controllers\App\Http\Controllers\CourseController] does not exist.
>
>
>
In my case, what worked was:
Changing the resource from `Route::resource('courses', CourseController::class);` to `Route::resource('courses', 'CourseController');`
I didn't change any namespace, and I'm using Laravel 7.2.
63,845,765 | I'm a newbie in Python. I have the following list of dictionaries:
```
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]
```
And I want to get a result like:
```
l = [{'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 4, 'source_id': 200}]
```
I understand the first step is sorting the list:
```
sorted_sources_list = sorted(l, key=lambda source: source['id'])
```
But I don't know how to drop the duplicates that have the greater id. | 2020/09/11 | [
"https://Stackoverflow.com/questions/63845765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11149563/"
] | The reason this is an issue is that Laravel 8 removed the default namespace from the RouteServiceProvider.
If you want to use the `ProfileController::class` functionality, you need to reset `protected $namespace` to null in the RouteServiceProvider to match the base code.
Otherwise it will prepend `$namespace` to whichever class you pass.
So change:
```
protected $namespace = null;
```
Then remove the `->namespace('Admin')` call from your routes file. | Please use the code below. It seems that you are using the class without importing it, so first you need to import ProfileController in the web.php file.
```
use App\Http\Controllers\Admin\ProfileController; // import the ProfileController class
Route::prefix('admin')->namespace('Admin')->group(static function() {
Route::middleware('auth')->group(static function () {
//...
Route::resource('profile', ProfileController::class);
});
});
``` |
63,845,765 | I'm a newbie in Python. I have the following list of dictionaries:
```
l = [{'id': 2, 'source_id': 100},
     {'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 5, 'source_id': 200},
     {'id': 4, 'source_id': 200}]
```
And I want to get a result like:
```
l = [{'id': 1, 'source_id': 100},
     {'id': 3, 'source_id': 1234},
     {'id': 4, 'source_id': 200}]
```
I understand the first step is sorting the list:
```
sorted_sources_list = sorted(l, key=lambda source: source['id'])
```
But I don't know how to drop the duplicates that have the greater id. | 2020/09/11 | [
"https://Stackoverflow.com/questions/63845765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11149563/"
] | Finally, I found the answer in the [Laravel 8.x upgrade guide](https://laravel.com/docs/8.x/upgrade). I wrote the controller name with its full namespace instead of importing it.
```
Route::prefix('admin')->namespace('Admin')->group(static function() {
Route::middleware('auth')->group(static function () {
//...
Route::resource('profile', '\App\Http\Controllers\Admin\ProfileController');
});
});
``` | ```
Route::resource('invoice','\App\Http\Controllers\InvoiceController');
``` |
70,098,080 | Hey guys, I am trying to deploy my project on the Rinkeby chain using Infura, but I am getting a ValueError.
Here is my traceback:
```
INFO: Could not find files for the given pattern(s).
Brownie v1.17.1 - Python development framework for Ethereum
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\__main__.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\run.py", line 44, in main
network.connect(CONFIG.argv["network"])
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\main.py", line 40, in connect
web3.connect(host, active.get("timeout", 30))
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\web3.py", line 52, in connect
uri = _expand_environment_vars(uri)
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\web3.py", line 183, in _expand_environment_vars
raise ValueError(f"Unable to expand environment variable in host setting: '{uri}'")
ValueError: Unable to expand environment variable in host setting: 'https://rinkeby.infura.io/v3/$WEB3_INFURA_PROJECT_ID'
```
Here is my deploy.py code:
```
from brownie import accounts, config, SimpleStorage, network
import os
def deploy_simple_storage():
account = get_account()
simple_storage = SimpleStorage.deploy({"from": account})
stored_value = simple_storage.retrieve()
print(stored_value)
transaction = simple_storage.store(15, {"from": account})
transaction.wait(1)
updated_stored_value = simple_storage.retrieve()
print(updated_stored_value)
def get_account():
if network.show_active() == "development":
return accounts[0]
else:
return accounts.add(config["wallets"]["from_key"])
def main():
deploy_simple_storage()
```
I have very little experience in coding. I think the problem is related to .env, but I don't know what I should do now. FYI, I am using Windows and following this course:
<https://www.youtube.com/watch?v=M576WGiDBdQ>
I'm stuck at 4:48:00. | 2021/11/24 | [
"https://Stackoverflow.com/questions/70098080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14382180/"
] | It appears your environment variables are not set correctly; in this case it looks like it's your WEB3\_INFURA\_PROJECT\_ID.
You can fix it by setting the variable in your `.env` file and adding `dotenv: .env` to your `brownie-config.yaml`.
In `brownie-config.yaml`:
`dotenv: .env`
In `.env`:
`export WEB3_INFURA_PROJECT_ID=YOUR_PROJECT_ID_HERE`
Remember to save these files.
Additionally, you should be on at least Brownie version v1.14.6. You can find out what version you're on with:
`brownie --version` | I've had this error for days and have seen it posted a lot online.
I was simply one directory up from where I needed to be. I followed the Patrick Collins tutorial very closely, with the addition of setting up a virtual env, so maybe others are simply having the same problem I had.
I was originally in the directory "\demos\web3 brownie", which contained a folder called "brownie" and my Python virtual env files.
The script is now running when I'm in the directory "\demos\web3 brownie\brownie", which contains the brownie build, script, and contract directories.
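To rule out the environment side quickly, a minimal check (standard library only), run from the same shell and directory you launch Brownie from; the traceback above shows Brownie trying to expand `$WEB3_INFURA_PROJECT_ID` from the environment:
```py
import os

# Brownie expands $WEB3_INFURA_PROJECT_ID from the environment,
# so it must be visible to the process that runs it.
project_id = os.getenv("WEB3_INFURA_PROJECT_ID")
if project_id is None:
    print("WEB3_INFURA_PROJECT_ID is not set in this environment")
else:
    print(f"WEB3_INFURA_PROJECT_ID is set ({len(project_id)} characters)")
```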
70,098,080 | Hey guys, I am trying to deploy my project on the Rinkeby chain using Infura, but I am getting a ValueError.
Here is my traceback:
```
INFO: Could not find files for the given pattern(s).
Brownie v1.17.1 - Python development framework for Ethereum
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\__main__.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\run.py", line 44, in main
network.connect(CONFIG.argv["network"])
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\main.py", line 40, in connect
web3.connect(host, active.get("timeout", 30))
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\web3.py", line 52, in connect
uri = _expand_environment_vars(uri)
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\web3.py", line 183, in _expand_environment_vars
raise ValueError(f"Unable to expand environment variable in host setting: '{uri}'")
ValueError: Unable to expand environment variable in host setting: 'https://rinkeby.infura.io/v3/$WEB3_INFURA_PROJECT_ID'
```
Here is my deploy.py code:
```
from brownie import accounts, config, SimpleStorage, network
import os
def deploy_simple_storage():
account = get_account()
simple_storage = SimpleStorage.deploy({"from": account})
stored_value = simple_storage.retrieve()
print(stored_value)
transaction = simple_storage.store(15, {"from": account})
transaction.wait(1)
updated_stored_value = simple_storage.retrieve()
print(updated_stored_value)
def get_account():
if network.show_active() == "development":
return accounts[0]
else:
return accounts.add(config["wallets"]["from_key"])
def main():
deploy_simple_storage()
```
I have very little experience in coding. I think the problem is related to .env, but I don't know what I should do now. FYI, I am using Windows and following this course:
<https://www.youtube.com/watch?v=M576WGiDBdQ>
I'm stuck at 4:48:00. | 2021/11/24 | [
"https://Stackoverflow.com/questions/70098080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14382180/"
] | It appears your environment variables are not set correctly; in this case it looks like it's your WEB3\_INFURA\_PROJECT\_ID.
You can fix it by setting the variable in your `.env` file and adding `dotenv: .env` to your `brownie-config.yaml`.
In `brownie-config.yaml`:
`dotenv: .env`
In `.env`:
`export WEB3_INFURA_PROJECT_ID=YOUR_PROJECT_ID_HERE`
Remember to save these files.
Additionally, you should be on at least Brownie version v1.14.6. You can find out what version you're on with:
`brownie --version` | I followed the same course and was stuck too. I just put my Infura project ID as an environment variable on my system (Windows 10), not in the .env file.
<https://www.twilio.com/blog/2017/01/how-to-set-environment-variables.html> |
70,098,080 | Hey guys, I am trying to deploy my project on the Rinkeby chain using Infura, but I am getting a ValueError.
Here is my traceback:
```
INFO: Could not find files for the given pattern(s).
Brownie v1.17.1 - Python development framework for Ethereum
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\__main__.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\run.py", line 44, in main
network.connect(CONFIG.argv["network"])
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\main.py", line 40, in connect
web3.connect(host, active.get("timeout", 30))
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\web3.py", line 52, in connect
uri = _expand_environment_vars(uri)
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\web3.py", line 183, in _expand_environment_vars
raise ValueError(f"Unable to expand environment variable in host setting: '{uri}'")
ValueError: Unable to expand environment variable in host setting: 'https://rinkeby.infura.io/v3/$WEB3_INFURA_PROJECT_ID'
```
Here is my deploy.py code:
```
from brownie import accounts, config, SimpleStorage, network
import os
def deploy_simple_storage():
account = get_account()
simple_storage = SimpleStorage.deploy({"from": account})
stored_value = simple_storage.retrieve()
print(stored_value)
transaction = simple_storage.store(15, {"from": account})
transaction.wait(1)
updated_stored_value = simple_storage.retrieve()
print(updated_stored_value)
def get_account():
if network.show_active() == "development":
return accounts[0]
else:
return accounts.add(config["wallets"]["from_key"])
def main():
deploy_simple_storage()
```
I have very little experience in coding. I think the problem is related to .env, but I don't know what I should do now. FYI, I am using Windows and following this course:
<https://www.youtube.com/watch?v=M576WGiDBdQ>
I'm stuck at 4:48:00. | 2021/11/24 | [
"https://Stackoverflow.com/questions/70098080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14382180/"
] | It appears your environment variables are not set correctly; in this case it looks like it's your WEB3\_INFURA\_PROJECT\_ID.
You can fix it by setting the variable in your `.env` file and adding `dotenv: .env` to your `brownie-config.yaml`.
In `brownie-config.yaml`:
`dotenv: .env`
In `.env`:
`export WEB3_INFURA_PROJECT_ID=YOUR_PROJECT_ID_HERE`
Remember to save these files.
Additionally, you should be on at least Brownie version v1.14.6. You can find out what version you're on with:
`brownie --version` | I had the same issue (macOS); I looked at another YouTube video about Brownie deployment and noticed that **"network"** needs to be defined at import.
This line of code above the `from brownie import` did the trick in my `deploy.py`:
```py
import brownie.network as network
``` |
70,098,080 | Hey guys, I am trying to deploy my project on the Rinkeby chain using Infura, but I am getting a ValueError.
Here is my traceback:
```
INFO: Could not find files for the given pattern(s).
Brownie v1.17.1 - Python development framework for Ethereum
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\__main__.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\run.py", line 44, in main
network.connect(CONFIG.argv["network"])
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\main.py", line 40, in connect
web3.connect(host, active.get("timeout", 30))
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\web3.py", line 52, in connect
uri = _expand_environment_vars(uri)
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\web3.py", line 183, in _expand_environment_vars
raise ValueError(f"Unable to expand environment variable in host setting: '{uri}'")
ValueError: Unable to expand environment variable in host setting: 'https://rinkeby.infura.io/v3/$WEB3_INFURA_PROJECT_ID'
```
Here is my deploy.py code:
```
from brownie import accounts, config, SimpleStorage, network
import os
def deploy_simple_storage():
account = get_account()
simple_storage = SimpleStorage.deploy({"from": account})
stored_value = simple_storage.retrieve()
print(stored_value)
transaction = simple_storage.store(15, {"from": account})
transaction.wait(1)
updated_stored_value = simple_storage.retrieve()
print(updated_stored_value)
def get_account():
if network.show_active() == "development":
return accounts[0]
else:
return accounts.add(config["wallets"]["from_key"])
def main():
deploy_simple_storage()
```
I have very little experience in coding. I think the problem is related to .env, but I don't know what I should do now. FYI, I am using Windows and following this course:
<https://www.youtube.com/watch?v=M576WGiDBdQ>
I'm stuck at 4:48:00. | 2021/11/24 | [
"https://Stackoverflow.com/questions/70098080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14382180/"
] | I had the same issue (macOS); I looked at another YouTube video about Brownie deployment and noticed that **"network"** needs to be defined at import.
This line of code above the `from brownie import` did the trick in my `deploy.py`:
```py
import brownie.network as network
``` | I've had this error for days and have seen it posted a lot online.
I was simply one directory up from where I needed to be. I followed the Patrick Collins tutorial very closely, with the addition of setting up a virtual env, so maybe others are simply having the same problem I had.
I was originally in the directory "\demos\web3 brownie", which contained a folder called "brownie" and my Python virtual env files.
The script is now running when I'm in the directory "\demos\web3 brownie\brownie", which contains the brownie build, script, and contract directories.
70,098,080 | Hey guys, I am trying to deploy my project on the Rinkeby chain using Infura, but I am getting a ValueError.
Here is my traceback:
```
INFO: Could not find files for the given pattern(s).
Brownie v1.17.1 - Python development framework for Ethereum
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\__main__.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\run.py", line 44, in main
network.connect(CONFIG.argv["network"])
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\main.py", line 40, in connect
web3.connect(host, active.get("timeout", 30))
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\web3.py", line 52, in connect
uri = _expand_environment_vars(uri)
File "c:\users\allow\appdata\local\programs\python\python39\lib\site-packages\brownie\network\web3.py", line 183, in _expand_environment_vars
raise ValueError(f"Unable to expand environment variable in host setting: '{uri}'")
ValueError: Unable to expand environment variable in host setting: 'https://rinkeby.infura.io/v3/$WEB3_INFURA_PROJECT_ID'
```
Here is my deploy.py code:
```
from brownie import accounts, config, SimpleStorage, network
import os
def deploy_simple_storage():
account = get_account()
simple_storage = SimpleStorage.deploy({"from": account})
stored_value = simple_storage.retrieve()
print(stored_value)
transaction = simple_storage.store(15, {"from": account})
transaction.wait(1)
updated_stored_value = simple_storage.retrieve()
print(updated_stored_value)
def get_account():
if network.show_active() == "development":
return accounts[0]
else:
return accounts.add(config["wallets"]["from_key"])
def main():
deploy_simple_storage()
```
I have very little experience in coding. I think the problem is related to .env, but I don't know what I should do now. FYI, I am using Windows and following this course:
<https://www.youtube.com/watch?v=M576WGiDBdQ>
I'm stuck at 4:48:00. | 2021/11/24 | [
"https://Stackoverflow.com/questions/70098080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14382180/"
] | I had the same issue (macOS); I looked at another YouTube video about Brownie deployment and noticed that **"network"** needs to be defined at import.
This line of code above the `from brownie import` did the trick in my `deploy.py`:
```py
import brownie.network as network
``` | I followed the same course and was stuck too. I just put my Infura project ID as an environment variable on my system (Windows 10), not in the .env file.
<https://www.twilio.com/blog/2017/01/how-to-set-environment-variables.html> |
60,364,629 | I am trying to extract `abc.tar.gz`
```
import tarfile
tar = tarfile.open('abc.tar.gz')
tar.extractall()
```
**This raises an error:**
```
EOFError Traceback (most recent call last)
<ipython-input-7-a3ea34501364> in <module>()
1 import tarfile
2 tar = tarfile.open('/content/drive/My Drive/Colab Notebooks/abc.tar.gz')
----> 3 tar.extractall()
7 frames
/usr/lib/python3.6/gzip.py in read(self, size)
480 break
481 if buf == b"":
--> 482 raise EOFError("Compressed file ended before the "
483 "end-of-stream marker was reached")
484
EOFError: Compressed file ended before the end-of-stream marker was reached
``` | 2020/02/23 | [
"https://Stackoverflow.com/questions/60364629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8229399/"
] | You can use
```sh
!tar xvzf abc.tar.gz
``` | The current way is:
```
!tar -xf abc.tar.gz
``` |
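For what it's worth, the `EOFError` in the question usually means the `.tar.gz` itself is truncated (for example an interrupted download), in which case `tar` will complain as well; a minimal sketch that surfaces this explicitly:
```py
import tarfile

try:
    with tarfile.open('abc.tar.gz', 'r:gz') as tar:
        tar.extractall()
except EOFError:
    # The gzip stream ended before its end-of-stream marker:
    # re-download or re-create abc.tar.gz and try again.
    print("abc.tar.gz appears truncated or corrupt")
```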
60,364,629 | I am trying to extract `abc.tar.gz`
```
import tarfile
tar = tarfile.open('abc.tar.gz')
tar.extractall()
```
**This raises an error:**
```
EOFError Traceback (most recent call last)
<ipython-input-7-a3ea34501364> in <module>()
1 import tarfile
2 tar = tarfile.open('/content/drive/My Drive/Colab Notebooks/abc.tar.gz')
----> 3 tar.extractall()
7 frames
/usr/lib/python3.6/gzip.py in read(self, size)
480 break
481 if buf == b"":
--> 482 raise EOFError("Compressed file ended before the "
483 "end-of-stream marker was reached")
484
EOFError: Compressed file ended before the end-of-stream marker was reached
``` | 2020/02/23 | [
"https://Stackoverflow.com/questions/60364629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8229399/"
] | You can use
```sh
!tar xvzf abc.tar.gz
``` | Weird thing, but `-xf` doesn't work anymore unless you remove the hyphen. So the new command looks something like this:
```sh
!tar xf filename.tar.gz
``` |
60,364,629 | I am trying to extract `abc.tar.gz`
```
import tarfile
tar = tarfile.open('abc.tar.gz')
tar.extractall()
```
**This raises an error:**
```
EOFError Traceback (most recent call last)
<ipython-input-7-a3ea34501364> in <module>()
1 import tarfile
2 tar = tarfile.open('/content/drive/My Drive/Colab Notebooks/abc.tar.gz')
----> 3 tar.extractall()
7 frames
/usr/lib/python3.6/gzip.py in read(self, size)
480 break
481 if buf == b"":
--> 482 raise EOFError("Compressed file ended before the "
483 "end-of-stream marker was reached")
484
EOFError: Compressed file ended before the end-of-stream marker was reached
``` | 2020/02/23 | [
"https://Stackoverflow.com/questions/60364629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8229399/"
] | The current way is:
```
!tar -xf abc.tar.gz
``` | Weird thing, but `-xf` doesn't work anymore unless you remove the hyphen. So the new command looks something like this:
```sh
!tar xf filename.tar.gz
``` |
42,300,956 | So I have an array like so: `['test', 'testtwo']`.
I wish to be able to use that as an index for a dictionary, like so: `myDict['test']['testtwo']`.
Is this possible in Python? Sorry for the short explanation.
EDIT:
```
exampleDict = {
'test': {
'testtwo': [
'',
''
]
}
}
```
And when doing some stuff in Python I end up with the array `['test', 'testtwo']` and then need to use that to access `exampleDict['test']['testtwo']`. The keys change, though, and the number of keys in the array changes as well. | 2017/02/17 | [
"https://Stackoverflow.com/questions/42300956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5577674/"
] | You need to configure the ListenerContainerFactory appropriately:
```
@Bean
public JmsListenerContainerFactory<?> myFactory(ConnectionFactory connectionFactory,
DefaultJmsListenerContainerFactoryConfigurer configurer) {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setPubSubDomain(true);
factory.setSubscriptionDurable(true);
factory.setClientId("jmsDemo");
// This provides all boot's default to this factory, including the message converter
configurer.configure(factory, connectionFactory);
// You could still override some of Boot's default if necessary.
return factory;
}
```
The interesting part is here:
```
factory.setSubscriptionDurable(true);
factory.setClientId("jmsDemo");
```
Now when you enter the ActiveMQ WebConsole you should see this:
[![ActiveMQ WebConsole showing one Durable Topic Subscriber](https://i.stack.imgur.com/gdjXg.png)](https://i.stack.imgur.com/gdjXg.png) | You also need to configure the listener container factory to create a container for [durable subscriptions](http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/config/AbstractJmsListenerContainerFactory.html#setSubscriptionDurable-java.lang.Boolean-). |
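Returning to the Python question at the top, a minimal sketch for indexing a nested dict with a list of keys of any length (assuming every key in the list exists at its level):
```py
from functools import reduce
import operator

exampleDict = {'test': {'testtwo': ['', '']}}
keys = ['test', 'testtwo']

# reduce walks the dict one key at a time, i.e. it computes
# operator.getitem(operator.getitem(exampleDict, 'test'), 'testtwo')
value = reduce(operator.getitem, keys, exampleDict)
print(value)  # ['', '']
```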
42,300,956 | So I have an array like so: `['test', 'testtwo']`.
I wish to be able to use that as an index for a dictionary, like so: `myDict['test']['testtwo']`.
Is this possible in Python? Sorry for the short explanation.
EDIT:
```
exampleDict = {
'test': {
'testtwo': [
'',
''
]
}
}
```
And when doing some stuff in Python I end up with the array `['test', 'testtwo']` and then need to use that to access `exampleDict['test']['testtwo']`. The keys change, though, and the number of keys in the array changes as well. | 2017/02/17 | [
"https://Stackoverflow.com/questions/42300956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5577674/"
] | In the answer marked as correct above, the code:
```
factory.setPubSubDomain(true);
factory.setSubscriptionDurable(true);
factory.setClientId("jmsDemo");
```
must come after
```
configurer.configure(factory, connectionFactory);
```
or you will lose those settings. | You also need to configure the listener container factory to create a container for [durable subscriptions](http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/config/AbstractJmsListenerContainerFactory.html#setSubscriptionDurable-java.lang.Boolean-). |
42,300,956 | So I have an array like so: `['test', 'testtwo']`.
I wish to be able to use that as an index for a dictionary, like so: `myDict['test']['testtwo']`.
Is this possible in Python? Sorry for the short explanation.
EDIT:
```
exampleDict = {
'test': {
'testtwo': [
'',
''
]
}
}
```
And when doing some stuff in Python I end up with the array `['test', 'testtwo']` and then need to use that to access `exampleDict['test']['testtwo']`. The keys change, though, and the number of keys in the array changes as well. | 2017/02/17 | [
"https://Stackoverflow.com/questions/42300956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5577674/"
] | You need to configure the ListenerContainerFactory appropriately:
```
@Bean
public JmsListenerContainerFactory<?> myFactory(ConnectionFactory connectionFactory,
DefaultJmsListenerContainerFactoryConfigurer configurer) {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setPubSubDomain(true);
factory.setSubscriptionDurable(true);
factory.setClientId("jmsDemo");
// This provides all boot's default to this factory, including the message converter
configurer.configure(factory, connectionFactory);
// You could still override some of Boot's default if necessary.
return factory;
}
```
The interesting part is here:
```
factory.setSubscriptionDurable(true);
factory.setClientId("jmsDemo");
```
Now when you enter the ActiveMQ WebConsole you should see this:
[![ActiveMQ WebConsole showing one Durable Topic Subscriber](https://i.stack.imgur.com/gdjXg.png)](https://i.stack.imgur.com/gdjXg.png) | In the answer marked as correct above, the code:
```
factory.setPubSubDomain(true);
factory.setSubscriptionDurable(true);
factory.setClientId("jmsDemo");
```
must come after
```
configurer.configure(factory, connectionFactory);
```
or you will lose those settings. |
20,586,314 | I want to install pandas on Mac OS X 10.7 with Python 2.7 via "sudo pip install pandas". Some of the output I'm getting:
```
Frameworks/Python.framework/Versions/2.7/include/python2.7 -c pandas/index.c -o build/temp.macosx-10.7-intel-2.7/pandas/index.o
In file included from pandas/index.c:317:
pandas/src/numpy_helper.h: In function ‘infer_type’:
pandas/src/numpy_helper.h:32: error: ‘PyDatetimeArrType_Type’ undeclared (first use in this function)
pandas/src/numpy_helper.h:32: error: (Each undeclared identifier is reported only once
pandas/src/numpy_helper.h:32: error: for each function it appears in.)
pandas/src/numpy_helper.h: In function ‘get_datetime64_value’:
pandas/src/numpy_helper.h:53: error: ‘PyDatetimeScalarObject’ undeclared (first use in this function)
pandas/src/numpy_helper.h:53: error: expected expression before ‘)’ token
pandas/src/numpy_helper.h: In function ‘is_datetime64_object’:
pandas/src/numpy_helper.h:84: error: ‘PyDatetimeArrType_Type’ undeclared (first use in this function)
pandas/src/numpy_helper.h: In function ‘is_timedelta64_object’:
pandas/src/numpy_helper.h:89: error: ‘PyTimedeltaArrType_Type’ undeclared (first use in this function)
```
And I get this:
```
lipo: can't open input file: /var/tmp//ccaW8SVp.out (No such file or directory)
error: command 'llvm-gcc-4.2' failed with exit status 1
```
All the dependencies (NumPy, python-dateutil, pytz) are working fine. I also have the latest version of the Command Line Tools, as suggested here: [Why am I getting the error: command 'llvm-gcc-4.2' failed with exit status 1](https://stackoverflow.com/questions/9398142/why-am-i-getting-the-error-command-llvm-gcc-4-2-failed-with-exit-status-1) and here: [gcc-4.2 failed with exit status 1](https://stackoverflow.com/questions/8473066/gcc-4-2-failed-with-exit-status-1)
I think my question is strongly related to this (unhelpful) post: [Issues with installing pandas in python in MAC](https://stackoverflow.com/questions/19237501/issues-with-installing-pandas-in-python-in-mac)
And now I'm out of options and really have no idea what to do :-( except asking for help... So thanks in advance! | 2013/12/14 | [
"https://Stackoverflow.com/questions/20586314",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3102330/"
] | It takes a little while to set up, but I've found [MacPorts](http://www.macports.org) to be extremely useful in installing third-party modules. In my experience it has many more packages available than Homebrew, and the dependencies are installed with a lot less fuss and bother than trying to do it yourself. Check out the [Quick Start](http://www.macports.org/install.php) for how to install.
Once it's installed, restart your Terminal session, and check your `PATH` (`echo $PATH`) to ensure that `/opt/local/bin` is before `/usr/bin` and `/Library/Frameworks/Python.framework/Versions/2.7/bin`. Then, you can run `sudo port install py27-pandas` to get Python and all of the dependencies installed.
The drawback of installing MacPorts is that any packages you've installed using the python.org version of Python won't be available. The easiest way around this is to reinstall them using MacPorts, or if they're not available as a port, using the MacPorts version of `pip`.
Good luck! | Update: I finally managed to get it installed (with the help of others I should add...!).
I am not sure if I remember everything I did correctly and extensively, but here are some pointers:
I think the point is that the numpy version you have should be >= 1.6 in order for pandas to be able to install properly.
However the default Python installation provided by Apple only allows for numpy up to version 1.5.1.
It might be, BTW, that there are more or other reasons why pandas doesn't like the Mac OS X-provided Python, but the point remains that you need another Python version.
So, I downloaded and installed a Python 2.7.6 version from Python.org.
Then you can "pip install --upgrade numpy" and it will be numpy 1.8.
>
> NOTE
> If '$ which python' gives you the new Python.org Python 2.7.6 version, you might want to run '$ python -m pip install ...', because the first (shebang) line of the pip script contains the default '/usr/bin/python', so plain pip will install whatever you're installing for the "old" default Python version.
>
>
>
NOTE that you should also upgrade/reinstall scipy, if you use it, so that the numpy and scipy versions correspond to one another.
I hope this is of use to anyone...!
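A quick way to see which NumPy the active interpreter actually picks up (and therefore which copy pandas will build against); a minimal check:
```py
import numpy

print(numpy.__version__)  # pandas needs >= 1.6 here
print(numpy.__file__)     # reveals whether the Apple-provided copy or a
                          # newer site-packages copy is being imported
```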
20,586,314 | I want to install pandas on Mac OS X 10.7 with Python 2.7 via "sudo pip install pandas". Some of the output I'm getting:
```
Frameworks/Python.framework/Versions/2.7/include/python2.7 -c pandas/index.c -o build/temp.macosx-10.7-intel-2.7/pandas/index.o
In file included from pandas/index.c:317:
pandas/src/numpy_helper.h: In function ‘infer_type’:
pandas/src/numpy_helper.h:32: error: ‘PyDatetimeArrType_Type’ undeclared (first use in this function)
pandas/src/numpy_helper.h:32: error: (Each undeclared identifier is reported only once
pandas/src/numpy_helper.h:32: error: for each function it appears in.)
pandas/src/numpy_helper.h: In function ‘get_datetime64_value’:
pandas/src/numpy_helper.h:53: error: ‘PyDatetimeScalarObject’ undeclared (first use in this function)
pandas/src/numpy_helper.h:53: error: expected expression before ‘)’ token
pandas/src/numpy_helper.h: In function ‘is_datetime64_object’:
pandas/src/numpy_helper.h:84: error: ‘PyDatetimeArrType_Type’ undeclared (first use in this function)
pandas/src/numpy_helper.h: In function ‘is_timedelta64_object’:
pandas/src/numpy_helper.h:89: error: ‘PyTimedeltaArrType_Type’ undeclared (first use in this function)
```
And I get this:
```
lipo: can't open input file: /var/tmp//ccaW8SVp.out (No such file or directory)
error: command 'llvm-gcc-4.2' failed with exit status 1
```
All the dependencies (NumPy, python-dateutil, pytz) are working fine. I also have the latest version of the Command Line Tools, as suggested here: [Why am I getting the error: command 'llvm-gcc-4.2' failed with exit status 1](https://stackoverflow.com/questions/9398142/why-am-i-getting-the-error-command-llvm-gcc-4-2-failed-with-exit-status-1) and here: [gcc-4.2 failed with exit status 1](https://stackoverflow.com/questions/8473066/gcc-4-2-failed-with-exit-status-1)
I think my question is strongly related to this (unhelpful) post: [Issues with installing pandas in python in MAC](https://stackoverflow.com/questions/19237501/issues-with-installing-pandas-in-python-in-mac)
And now I'm out of options and really have no idea what to do :-( except asking for help... So thanks in advance! | 2013/12/14 | [
"https://Stackoverflow.com/questions/20586314",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3102330/"
] | For me, the following fixed it:
```
sudo rm -rf /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/
sudo pip install --upgrade numpy
sudo pip install pandas
```
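To confirm which numpy your interpreter actually picks up after this (a quick check, not part of the original fix):

```
import numpy
print numpy.__version__
print numpy.__file__   # should now point under /Library/Python/2.7/site-packages
```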
I noticed that Pandas was trying to link against an older version of numpy under `Frameworks`, rather than the newer one under `/Library/Python/2.7/site-packages`. | It takes a little while to set up, but I've found [MacPorts](http://www.macports.org) to be extremely useful in installing third-party modules. In my experience it has many more packages available than Homebrew, and the dependencies are installed with a lot less fuss and bother than trying to do it yourself. Check out the [Quick Start](http://www.macports.org/install.php) for how to install.
Once it's installed, restart your Terminal session, and check your `PATH` (`echo $PATH`) to ensure that `/opt/local/bin` is before `/usr/bin` and `/Library/Frameworks/Python.framework/Versions/2.7/bin`. Then, you can run `sudo port install py27-pandas` to get Python and all of the dependencies installed.
The drawback of installing MacPorts is that any packages you've installed using the python.org version of Python won't be available. The easiest way around this is to reinstall them using MacPorts, or if they're not available as a port, using the MacPorts version of `pip`.
Good luck! |
20,586,314 | I want to install pandas on Mac OS X 10.7 with Python 2.7 using "sudo pip install pandas". Some of the output I'm getting:
```
Frameworks/Python.framework/Versions/2.7/include/python2.7 -c pandas/index.c -o build/temp.macosx-10.7-intel-2.7/pandas/index.o
In file included from pandas/index.c:317:
pandas/src/numpy_helper.h: In function ‘infer_type’:
pandas/src/numpy_helper.h:32: error: ‘PyDatetimeArrType_Type’ undeclared (first use in this function)
pandas/src/numpy_helper.h:32: error: (Each undeclared identifier is reported only once
pandas/src/numpy_helper.h:32: error: for each function it appears in.)
pandas/src/numpy_helper.h: In function ‘get_datetime64_value’:
pandas/src/numpy_helper.h:53: error: ‘PyDatetimeScalarObject’ undeclared (first use in this function)
pandas/src/numpy_helper.h:53: error: expected expression before ‘)’ token
pandas/src/numpy_helper.h: In function ‘is_datetime64_object’:
pandas/src/numpy_helper.h:84: error: ‘PyDatetimeArrType_Type’ undeclared (first use in this function)
pandas/src/numpy_helper.h: In function ‘is_timedelta64_object’:
pandas/src/numpy_helper.h:89: error: ‘PyTimedeltaArrType_Type’ undeclared (first use in this function)
```
And I get this:
```
lipo: can't open input file: /var/tmp//ccaW8SVp.out (No such file or directory)
error: command 'llvm-gcc-4.2' failed with exit status 1
```
All the dependencies (NumPy, python-dateutil, pytz) are working fine. I also have the latest version of the Command Line Tools, as suggested here: [Why am I getting the error: command 'llvm-gcc-4.2' failed with exit status 1](https://stackoverflow.com/questions/9398142/why-am-i-getting-the-error-command-llvm-gcc-4-2-failed-with-exit-status-1) and here [gcc-4.2 failed with exit status 1](https://stackoverflow.com/questions/8473066/gcc-4-2-failed-with-exit-status-1)
I think my question is strongly related to this (not helping) post: [Issues with installing pandas in python in MAC](https://stackoverflow.com/questions/19237501/issues-with-installing-pandas-in-python-in-mac)
And now I'm out of options and have really no idea what to do :-( Except asking for help... So thanks in advance! | 2013/12/14 | [
"https://Stackoverflow.com/questions/20586314",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3102330/"
] | It takes a little while to set up, but I've found [MacPorts](http://www.macports.org) to be extremely useful in installing third-party modules. In my experience it has many more packages available than Homebrew, and the dependencies are installed with a lot less fuss and bother than trying to do it yourself. Check out the [Quick Start](http://www.macports.org/install.php) for how to install.
Once it's installed, restart your Terminal session, and check your `PATH` (`echo $PATH`) to ensure that `/opt/local/bin` is before `/usr/bin` and `/Library/Frameworks/Python.framework/Versions/2.7/bin`. Then, you can run `sudo port install py27-pandas` to get Python and all of the dependencies installed.
The drawback of installing MacPorts is that any packages you've installed using the python.org version of Python won't be available. The easiest way around this is to reinstall them using MacPorts, or if they're not available as a port, using the MacPorts version of `pip`.
Good luck! | It's very strange.
It seems the default Python 2.7 on OS X cannot install pandas well.
I just installed Python 3.3 and then it worked:
```
pythonbrew switch Python-3.3.1
pip install pandas
``` |
20,586,314 | I want to install pandas on Mac OS X 10.7 with Python 2.7 using "sudo pip install pandas". Some of the output I'm getting:
```
Frameworks/Python.framework/Versions/2.7/include/python2.7 -c pandas/index.c -o build/temp.macosx-10.7-intel-2.7/pandas/index.o
In file included from pandas/index.c:317:
pandas/src/numpy_helper.h: In function ‘infer_type’:
pandas/src/numpy_helper.h:32: error: ‘PyDatetimeArrType_Type’ undeclared (first use in this function)
pandas/src/numpy_helper.h:32: error: (Each undeclared identifier is reported only once
pandas/src/numpy_helper.h:32: error: for each function it appears in.)
pandas/src/numpy_helper.h: In function ‘get_datetime64_value’:
pandas/src/numpy_helper.h:53: error: ‘PyDatetimeScalarObject’ undeclared (first use in this function)
pandas/src/numpy_helper.h:53: error: expected expression before ‘)’ token
pandas/src/numpy_helper.h: In function ‘is_datetime64_object’:
pandas/src/numpy_helper.h:84: error: ‘PyDatetimeArrType_Type’ undeclared (first use in this function)
pandas/src/numpy_helper.h: In function ‘is_timedelta64_object’:
pandas/src/numpy_helper.h:89: error: ‘PyTimedeltaArrType_Type’ undeclared (first use in this function)
```
And I get this:
```
lipo: can't open input file: /var/tmp//ccaW8SVp.out (No such file or directory)
error: command 'llvm-gcc-4.2' failed with exit status 1
```
All the dependencies (NumPy, python-dateutil, pytz) are working fine. I also have the latest version of the Command Line Tools, as suggested here: [Why am I getting the error: command 'llvm-gcc-4.2' failed with exit status 1](https://stackoverflow.com/questions/9398142/why-am-i-getting-the-error-command-llvm-gcc-4-2-failed-with-exit-status-1) and here [gcc-4.2 failed with exit status 1](https://stackoverflow.com/questions/8473066/gcc-4-2-failed-with-exit-status-1)
I think my question is strongly related to this (not helping) post: [Issues with installing pandas in python in MAC](https://stackoverflow.com/questions/19237501/issues-with-installing-pandas-in-python-in-mac)
And now I'm out of options and have really no idea what to do :-( Except asking for help... So thanks in advance! | 2013/12/14 | [
"https://Stackoverflow.com/questions/20586314",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3102330/"
] | For me, the following fixed it:
```
sudo rm -rf /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/
sudo pip install --upgrade numpy
sudo pip install pandas
```
I noticed that Pandas was trying to link against an older version of numpy under `Frameworks`, rather than the newer one under `/Library/Python/2.7/site-packages`. | Update: I finally managed to get it installed (with the help of others I should add...!).
I am not sure if I remember everything I did correctly and extensively, but here are some pointers:
I think the point is that the numpy version you have should be >= 1.6 in order for pandas to be able to install properly.
However the default Python installation provided by Apple only allows for numpy up to version 1.5.1.
It might be, BTW, that there are more or other reasons why pandas doesn't like the Mac OS X-provided Python, but the point remains that you need another Python version.
So, I downloaded and installed a Python 2.7.6 version from Python.org.
Then you can "pip install --upgrade numpy" and it will be numpy 1.8.
>
> NOTE
> If '$ which python' gives you the new Python.org Python 2.7.6 version, you might want to say '$ python pip install ...' as the first line of pip contains the default '/usr/bin/python' so that will install whatever it is you're installing for the "old" default Python version.
>
>
>
NOTE that you should also upgrade/reinstall scipy, if you use it, so the numpy and scipy version correspond to one another.
I hope this is of any avail to anyone...! |
20,586,314 | I want to install pandas on Mac OS X 10.7 with Python 2.7 using "sudo pip install pandas". Some of the output I'm getting:
```
Frameworks/Python.framework/Versions/2.7/include/python2.7 -c pandas/index.c -o build/temp.macosx-10.7-intel-2.7/pandas/index.o
In file included from pandas/index.c:317:
pandas/src/numpy_helper.h: In function ‘infer_type’:
pandas/src/numpy_helper.h:32: error: ‘PyDatetimeArrType_Type’ undeclared (first use in this function)
pandas/src/numpy_helper.h:32: error: (Each undeclared identifier is reported only once
pandas/src/numpy_helper.h:32: error: for each function it appears in.)
pandas/src/numpy_helper.h: In function ‘get_datetime64_value’:
pandas/src/numpy_helper.h:53: error: ‘PyDatetimeScalarObject’ undeclared (first use in this function)
pandas/src/numpy_helper.h:53: error: expected expression before ‘)’ token
pandas/src/numpy_helper.h: In function ‘is_datetime64_object’:
pandas/src/numpy_helper.h:84: error: ‘PyDatetimeArrType_Type’ undeclared (first use in this function)
pandas/src/numpy_helper.h: In function ‘is_timedelta64_object’:
pandas/src/numpy_helper.h:89: error: ‘PyTimedeltaArrType_Type’ undeclared (first use in this function)
```
And I get this:
```
lipo: can't open input file: /var/tmp//ccaW8SVp.out (No such file or directory)
error: command 'llvm-gcc-4.2' failed with exit status 1
```
All the dependencies (NumPy, python-dateutil, pytz) are working fine. I also have the latest version of the Command Line Tools, as suggested here: [Why am I getting the error: command 'llvm-gcc-4.2' failed with exit status 1](https://stackoverflow.com/questions/9398142/why-am-i-getting-the-error-command-llvm-gcc-4-2-failed-with-exit-status-1) and here [gcc-4.2 failed with exit status 1](https://stackoverflow.com/questions/8473066/gcc-4-2-failed-with-exit-status-1)
I think my question is strongly related to this (not helping) post: [Issues with installing pandas in python in MAC](https://stackoverflow.com/questions/19237501/issues-with-installing-pandas-in-python-in-mac)
And now I'm out of options and have really no idea what to do :-( Except asking for help... So thanks in advance! | 2013/12/14 | [
"https://Stackoverflow.com/questions/20586314",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3102330/"
] | For me, the following fixed it:
```
sudo rm -rf /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/
sudo pip install --upgrade numpy
sudo pip install pandas
```
I noticed that Pandas was trying to link against an older version of numpy under `Frameworks`, rather than the newer one under `/Library/Python/2.7/site-packages`. | It's very strange.
It seems the default Python 2.7 on OS X cannot install pandas well.
I just installed Python 3.3 and then it worked:
```
pythonbrew switch Python-3.3.1
pip install pandas
``` |
32,469,144 | I'm new to Python, and while trying to write a script that will keep asking questions about the user until the script gets FALSE,
I decided to check the script; of course it gave me a syntax error that told me the mistake was on the fifth line, `a`.
Now on that line I tried to change the old value of a to a new value.
Sadly, I can't understand the mistake that I made. Can someone please check it and explain to me what went wrong?
```
#!/usr/bin/python
print "Hello, I'm wilfred and I'm an Artificial Intelligence\n"
a=str(raw_input("Do you want to be my friend? \n"))
if a=="yes":
a=str(raw_input("Yey ! my first friend,what is your name?\n"))
if a==str :
print "Nice name man!"
elif a==int :
print "bye!"
elif a=="no":
print "Well, nice to meet you anway, good bye now \n"
``` | 2015/09/09 | [
"https://Stackoverflow.com/questions/32469144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5298838/"
] | Your line
```
a=str(raw_input("Yey ! my first friend,what is your name?\n")
```
* Indent this line so it is inside the 'if' statement
* Add a ')' at the end of this line | A general structure for this kind of repeating loop is
```
while True:
    a=str(raw_input(...))
    if a=="whatever": break
    # other responses to a
```
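Filled in with prompts adapted from the question, that pattern might look like this (just a sketch, in the question's Python 2 style):

```
while True:
    a = raw_input("Do you want to be my friend? \n")
    if a == "no":
        print "Well, nice to meet you anyway, good bye now \n"
        break
    name = raw_input("Yey ! my first friend, what is your name?\n")
    print "Nice name man!"
    # loops back and asks again
``` |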
32,469,144 | I'm new to Python, and while trying to write a script that will keep asking questions about the user until the script gets FALSE,
I decided to check the script; of course it gave me a syntax error that told me the mistake was on the fifth line, `a`.
Now on that line I tried to change the old value of a to a new value.
Sadly, I can't understand the mistake that I made. Can someone please check it and explain to me what went wrong?
```
#!/usr/bin/python
print "Hello, I'm wilfred and I'm an Artificial Intelligence\n"
a=str(raw_input("Do you want to be my friend? \n"))
if a=="yes":
a=str(raw_input("Yey ! my first friend,what is your name?\n"))
if a==str :
print "Nice name man!"
elif a==int :
print "bye!"
elif a=="no":
print "Well, nice to meet you anway, good bye now \n"
``` | 2015/09/09 | [
"https://Stackoverflow.com/questions/32469144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5298838/"
] | Your line
```
a=str(raw_input("Yey ! my first friend,what is your name?\n")
```
* Indent this line so it is inside the 'if' statement
* Add a ')' at the end of this line | You just need to indent the line. Your code should work fine. Keep learning python. It's awesome!!!!
```
#!/usr/bin/python
print "Hello, I'm wilfred and I'm an Artificial Intelligence\n"
a=str(raw_input("Do you want to be my friend? \n"))
if a=="yes":
a=str(raw_input("Yey ! my first friend,what is your name?\n"))
if a==str :
print "Nice name man!"
elif a==int :
print "bye!"
elif a=="no":
print "Well, nice to meet you anway, good bye now \n"
```
To further help with the test cases, I changed your string and int tests for you. "==" test is for value btw.
```
#!/usr/bin/python
print "Hello, I'm wilfred and I'm an Artificial Intelligence\n"
a=str(raw_input("Do you want to be my friend? \n"))
if a=="yes":
a=str(raw_input("Yey ! my first friend,what is your name?\n"))
if a.isalpha() :
print "Nice name man!"
elif a.isdigit() :
print "bye!"
elif a=="no":
print "Well, nice to meet you anway, good bye now \n"
```
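In case those string-test methods are unfamiliar, here is how they behave (a quick check):

```
print "Bob".isalpha()    # True  - every character is a letter
print "1234".isdigit()   # True  - every character is a digit
print "Bob42".isalpha()  # False - mixed letters and digits
``` |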
32,469,144 | I'm new to Python, and while trying to write a script that will keep asking questions about the user until the script gets FALSE,
I decided to check the script; of course it gave me a syntax error that told me the mistake was on the fifth line, `a`.
Now on that line I tried to change the old value of a to a new value.
Sadly, I can't understand the mistake that I made. Can someone please check it and explain to me what went wrong?
```
#!/usr/bin/python
print "Hello, I'm wilfred and I'm an Artificial Intelligence\n"
a=str(raw_input("Do you want to be my friend? \n"))
if a=="yes":
a=str(raw_input("Yey ! my first friend,what is your name?\n"))
if a==str :
print "Nice name man!"
elif a==int :
print "bye!"
elif a=="no":
print "Well, nice to meet you anway, good bye now \n"
``` | 2015/09/09 | [
"https://Stackoverflow.com/questions/32469144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5298838/"
] | You just need to indent the line. Your code should work fine. Keep learning python. It's awesome!!!!
```
#!/usr/bin/python
print "Hello, I'm wilfred and I'm an Artificial Intelligence\n"
a=str(raw_input("Do you want to be my friend? \n"))
if a=="yes":
a=str(raw_input("Yey ! my first friend,what is your name?\n"))
if a==str :
print "Nice name man!"
elif a==int :
print "bye!"
elif a=="no":
print "Well, nice to meet you anway, good bye now \n"
```
To further help with the test cases, I changed your string and int tests for you. "==" test is for value btw.
```
#!/usr/bin/python
print "Hello, I'm wilfred and I'm an Artificial Intelligence\n"
a=str(raw_input("Do you want to be my friend? \n"))
if a=="yes":
a=str(raw_input("Yey ! my first friend,what is your name?\n"))
if a.isalpha() :
print "Nice name man!"
elif a.isdigit() :
print "bye!"
elif a=="no":
print "Well, nice to meet you anway, good bye now \n"
``` | A general structure for this kind of repeating loop is
```
while True:
    a=str(raw_input(...))
    if a=="whatever": break
    # other responses to a
``` |
71,024,979 | I have a question with probably a well-known answer. However, I couldn't articulate it well enough to find answers on Google.
Let's say you are using the developer interface of the Chrome browser (press F12). If you click on the Network tab and go to any website, a lot of files will be queried there, for example images, stylesheets and JSON responses.
I want to parse these JSON responses using Python now.
Thanks in advance! | 2022/02/07 | [
"https://Stackoverflow.com/questions/71024979",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10083891/"
] | You can save the network requests to a `.har` file (JSON format) and analyze that.
In your network tools panel, there is a download button to export as HAR format.
[![export HAR button](https://i.stack.imgur.com/2B0Gz.png)](https://i.stack.imgur.com/2B0Gz.png)
```py
import json
with open('myrequests.har') as f:
    network_data = json.load(f)
print(network_data)
```
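From there, pulling out just the JSON responses might look something like this (a sketch; the key layout follows the HAR 1.2 format, and `myrequests.har` is the file exported above):

```py
import json

with open('myrequests.har') as f:
    har = json.load(f)

# Each entry is one request/response pair captured in the network panel
for entry in har['log']['entries']:
    content = entry['response']['content']
    # Keep only responses that declare a JSON mime type and carry a body
    # (bodies can also be base64-encoded; that case is skipped here)
    if 'json' in content.get('mimeType', '') and 'text' in content:
        data = json.loads(content['text'])
        print(entry['request']['url'], type(data))
```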
Or, as [Jack Deeth answered](https://stackoverflow.com/a/71025044/5747944) you can make the requests using Python instead of your browser and get the response JSON data that way.
Though, this can sometimes be difficult depending on the website and nature of the request(s) (for example, needing to login and/or figuring out how to get all the proper arguments to make the request) | I use `requests` to get the data, and it comes back as a Python dictionary:
```py
import requests
r = requests.get("url/spotted/with/devtools")
r.json()["keys_observed_in_devtools"]
``` |
71,024,979 | I have a question with probably a well-known answer. However, I couldn't articulate it well enough to find answers on Google.
Let's say you are using the developer interface of the Chrome browser (press F12). If you click on the Network tab and go to any website, a lot of files will be queried there, for example images, stylesheets and JSON responses.
I want to parse these JSON responses using Python now.
Thanks in advance! | 2022/02/07 | [
"https://Stackoverflow.com/questions/71024979",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10083891/"
] | I use `requests` to get the data, and it comes back as a Python dictionary:
```py
import requests
r = requests.get("url/spotted/with/devtools")
r.json()["keys_observed_in_devtools"]
``` | Perhaps you can try using Selenium.
Maybe the answers on this [question](https://stackoverflow.com/questions/52633697/selenium-python-how-to-capture-network-traffics-response) can help you. |
71,024,979 | I have a question with probably a well-known answer. However, I couldn't articulate it well enough to find answers on Google.
Let's say you are using the developer interface of the Chrome browser (press F12). If you click on the Network tab and go to any website, a lot of files will be queried there, for example images, stylesheets and JSON responses.
I want to parse these JSON responses using Python now.
Thanks in advance! | 2022/02/07 | [
"https://Stackoverflow.com/questions/71024979",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10083891/"
] | You can save the network requests to a `.har` file (JSON format) and analyze that.
In your network tools panel, there is a download button to export as HAR format.
[![export HAR button](https://i.stack.imgur.com/2B0Gz.png)](https://i.stack.imgur.com/2B0Gz.png)
```py
import json
with open('myrequests.har') as f:
    network_data = json.load(f)
print(network_data)
```
Or, as [Jack Deeth answered](https://stackoverflow.com/a/71025044/5747944) you can make the requests using Python instead of your browser and get the response JSON data that way.
Though, this can sometimes be difficult depending on the website and nature of the request(s) (for example, needing to login and/or figuring out how to get all the proper arguments to make the request) | Perhaps you can try using Selenium.
Maybe the answers on this [question](https://stackoverflow.com/questions/52633697/selenium-python-how-to-capture-network-traffics-response) can help you. |
58,606,333 | I've installed Kivy on my laptop using an installer from GitHub and everything was OK; I always run my code from CMD. I use PyCharm to code, but I had never tried to run my code through it until moments ago. In PyCharm's configuration, under Python interpreter, I changed to "python 3.5(untitled)". Now my code runs in PyCharm, but I get the following error in CMD:
C:\Users\ASUS\Desktop>kivytest.py
Traceback (most recent call last):
File "C:\Users\ASUS\Desktop\kivytest.py", line 1, in
from kivy.app import App
ModuleNotFoundError: No module named 'kivy'
I really need CMD, so what should I do now?! | 2019/10/29 | [
"https://Stackoverflow.com/questions/58606333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11746957/"
] | You can do a window count of records of the same customer that have `status = 'Sold'` in a `case` expression:
```
select
    t.*,
    case when sum( (status = 'Sold')::int ) over(partition by customer) >= 1
         then 'Yes'
         else 'No'
    end
from mytable t
```
NB: note that this does not magically create new records (as shown in your sample data). This query gives you as many records in the resultset as there are in the table, with an additional column that indicates whether each customer has at least one sold item in the table.
Here is [a demo provided by VBokšić](https://dbfiddle.uk/?rdbms=postgres_12&fiddle=f37f3dc483d68dbba95f7ff0b55c0169) (thanks). | Another option is to use `bool_or()` as a window function. If you can live with a `boolean` column rather than a varchar with Yes/No, this makes the expression even simpler:
```
select productid, customer, status,
       bool_or(status = 'Sold') over (partition by customer) as sold_at_least_one
from mytable;
```
Online example: <https://rextester.com/NDN54253> |
41,535,284 | I have looked everywhere and tried many suggested solutions, still without the required result: **to run a python file from my LAMP server**. I cannot seem to integrate all the pieces of the puzzle ... Complicating the story is that many solutions assume an old Apache version (<2.4), and the config files have changed significantly since then. No more httpd.conf! So this [executing-a-python-script-in-apache2](https://stackoverflow.com/questions/9145517/executing-a-python-script-in-apache2) does not help. Also, the Python version being 3.x complicates matters.
**specs:**
* linux Kubuntu, apache 2.4, python 3.5
* apache is running
* website files are in root/var/www/html/, I have sudo access to this folder.
* apache2 cgi module enabled: a2enmod cgi
* the python 3.5 path is /usr/bin/env python3
* the python script, simplest of scripts, has been made executable
```
#!/usr/bin/env python3
print ("Content-type: text/html\n")
print ("Hello world!")
```
Let's boil it down to the simplest case: I would like to have Apache interpret the spark.py script and spit out the HTML: "Hello world!"
**Questions:**
* is the script file correct as is?
* which config files do I need to change and what do I need to add to these config files?
I know that, for security reasons, you should not have Apache run scripts in your root dir.
"https://Stackoverflow.com/questions/41535284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4718423/"
] | The python documentation for modwsgi seems to fit what you are asking for. The following webpage has a really simple example and the necessary configuration for a python3-apache2 setup.
<http://modwsgi.readthedocs.io/en/develop/user-guides/quick-configuration-guide.html>
You will need to install the mod\_wsgi for the configuration to work. Take note of the different "\_" underscore and "-" dash character used in apt and pip3.
```
$ sudo apt install apache2-dev libapache2-mod-wsgi-py3
$ sudo pip3 install mod_wsgi
```
libapache2-mod-wsgi-py3 and mod\_wsgi seem to be the same thing. However, my test deployment only works after installing mod\_wsgi. It could be a configuration issue. The following are the details of the configuration I have tested on Ubuntu 16.04.2.
Application file /home/user/wsgi\_sample/hello.wsgi:
```
def application(environ, start_response):
    status = '200 OK'
    output = b'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
```
Apache2 configuration /etc/apache2/sites-available/000-test.conf
```
<VirtualHost *:80>
    ServerName testmachine
    <Directory /home/user/wsgi_sample>
        Order allow,deny
        Allow from all
        Require all granted
    </Directory>
    WSGIScriptAlias /hello /home/user/wsgi_sample/hello.wsgi
</VirtualHost>
```
Enable the site in apache2.
```
sudo a2ensite 000-test.conf
```
Open your browser to `testmachine/hello`.
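If you prefer to test from Python instead of a browser, something like this should work once the site is enabled — assuming `testmachine` resolves on your machine (e.g. via an `/etc/hosts` entry; that part depends on your setup):

```
import urllib.request

# Fetch the endpoint mapped by WSGIScriptAlias above
response = urllib.request.urlopen("http://testmachine/hello")
print(response.read())  # expected: b'Hello World!'
```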
wsgi may also be deployed on Apache2 using Passenger. It demands a slightly longer configuration. Ask a new question if passenger/python3 is desired. | Yes, your minimal code seems correct. The Apache config information is answered here
<https://stackoverflow.com/a/57531411/4084546> |
34,637,098 | I need some samples from the distribution p(x) = x\*exp(-a\*x), x > 0,
but in Python, only the exponential distribution is available for sampling. How can I draw from this distribution in Python? Thanks!!!
"https://Stackoverflow.com/questions/34637098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5753057/"
] | Severin Pappadeux is correct in the sense that you can use `random.gammavariate(alpha, beta)` in this case. He, however, does not explain the necessary parameters to end up with your distribution.
Based on the Gamma distribution and your desired distribution, it is not hard to see that in `random.gammavariate(alpha, beta)`, we need to set `alpha = 2` and `beta = a ** -1`. Note that `p(x) = x*exp(-a*x)` is not normalized; its normalized form is `a**2 * x * exp(-a*x)`, which is exactly the Gamma(2, 1/a) density. So the gamma variate is already a sample from your distribution, and no extra scaling is needed (the `a ** -2` is the normalizing constant of the density, not a factor to apply to the samples).
Full code:
```
import random
def distr(a):
    # Gamma(shape=2, scale=1/a) has density a**2 * x * exp(-a*x),
    # i.e. the normalized form of p(x)
    return random.gammavariate(2, 1.0 / a)
``` | This is, I believe, a gamma distribution, link <https://en.wikipedia.org/wiki/Gamma_distribution>
You could use function `random.gammavariate(alpha, beta)` from [here](https://docs.python.org/3.0/library/random.html)
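For this particular density the parameters would be `alpha = 2` (the shape) and `beta = 1/a` (the scale). A quick sanity check — the value of `a` below is just an example:

```
import random

a = 1.5  # example rate parameter
samples = [random.gammavariate(2, 1.0 / a) for _ in range(100000)]
# Gamma(2, 1/a) has mean 2/a, so this should print something close to 1.333
print(sum(samples) / len(samples))
``` |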
53,637,976 | This reddit thread will explain the project a little and give context into what I'm trying to accomplish. Essentially, I'm comparing pdf's to see if there are duplicates by using something other than their name. This is because all of the pdf's in my dataset have unique names, but could still be the same thing, content wise. Seek and destroy the duplicates. I'm still in the seek part of this project.
<https://www.reddit.com/r/Python/comments/a337qv/finding_unique_pdfs_in_a_folder_need_info_on_how/>
I decided to try to compare file sizes first to see what I would find before I tried anything else. I figured that would be a simple first approach, but for some reason the os library doesn't like the way that I'm feeding it the string of the file path. I've tried manipulating it any way I can to get it to work, but no dice.
I went through and confirmed that the file exists within the folder I'm trying to get into.
So here is kind of the workflow for how I am imagining this to work. I pulled a query of the possible drawings that could contain duplicates and saved it as an Excel file. I saved that file as a CSV file so if I fuck up the sheet I don't fuck up my data. Plus I knew a quick way to turn CSV file columns into lists using pandas.
[Here is a screen shot of the csv file](https://imgur.com/a/hklFmMW)
In the code below you can see that I made a list that has each of those columns. All of the data within the lists are strings. The description isn't very critical, but the "found in" and "name" columns (when concatenated) make up the file path and file name. I made a new list of the combined string that is the file path + the file so that I could get its size to compare.
I'm really hoping there is a way I can manipulate that path in either the csv file or in python with the lists.
Here is the error that comes up when I run the code.
<https://imgur.com/a/d9P7g0e>
C:\Users\27659\Desktop\PycharmProjects\sort\_unique\_profiles\venv\Scripts\python.exe "C:/Users/27659/Desktop/PycharmProjects/sort\_unique\_profiles/Compare and mark files.py"
```
same item in list
Traceback (most recent call last):
File "C:/Users/27659/Desktop/PycharmProjects/sort_unique_profiles/Compare and mark files.py", line 19, in <module>
if os.path.getsize(full_file_list[i]) == os.path.getsize(full_file_list[k]) and i == k:
File "C:\Users\27659\AppData\Local\Programs\Python\Python37-32\lib\genericpath.py", line 50, in getsize
return os.stat(filename).st_size
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\AW\\85-7535-01.pdf'
Process finished with exit code 1
```
Here is the code
```
import pandas as pd
import os
main_df = pd.read_csv("C://Users//27659//Desktop//Unique Wood Profiles.csv")
drawing_number_list = main_df['Name'].tolist()
description_list = main_df['Description'].tolist()
found_in_list = main_df['Found In'].tolist()
full_file_list = ['placeholder']
for i in range(0, len(drawing_number_list)):
    full_file_list.append(found_in_list[i] + drawing_number_list[i] + '.pdf')
del full_file_list[0]
for i in range(0, len(full_file_list)):
    for k in range(0, len(full_file_list)):
        if os.path.getsize(full_file_list[i]) == os.path.getsize(full_file_list[k]) and i == k:
            print('same item in list')
        elif os.path.getsize(full_file_list[i]) == os.path.getsize(full_file_list[i]):
            print('oh bugger')
        else:
            print('thumbs up')
```
**EDIT:**
```
All files: ['C:\\WS\\WS\\Aspire\\CA & AW\\7205\\AW\\85-7534-01.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\AW\\85-7535-01.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\AW\\85-7394-01.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\AW\\85-7740-01.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\AW\\85-7886-01.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\AW\\85-7821-01.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\AW\\85-7392-01.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\Bow, Bay & Garden\\P0035400.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\CA\\45-05151.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\CA\\45-05154.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\CA\\45-05149.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\Grilles\\60-4623-02.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\Grilles\\85-7723-01.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\Mull, Stack & Trim\\08-W8092.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\7205\\Mull, Stack & Trim\\85-7611-01.pdf', 'C:\\WS\\WS\\Aspire\\CA & AW\\POCKET 7206\\CA\\85-8287-01.pdf', 'C:\\WS\\WS\\Aspire\\DH\\7105\\DH\\60-12966-01.pdf', 'C:\\WS\\WS\\Aspire\\DH\\7105\\DH\\85-8444-01.pdf', 'C:\\WS\\WS\\Aspire\\DH\\7105\\DH\\60-12931-01.pdf', 'C:\\WS\\WS\\Aspire\\DH\\7105\\DH\\60-12986-01.pdf', 'C:\\WS\\WS\\Aspire\\DH\\7105\\DH\\60-12967-01.pdf', 'C:\\WS\\WS\\Aspire\\DH\\7105\\DH\\60-12675-01.pdf', 'C:\\WS\\WS\\Aspire\\DS\\7305\\Window Std Unit\\85-8047-01.pdf', 'C:\\WS\\WS\\Aspire\\DS\\7305\\Window Std Unit\\85-7604-01.pdf', 'C:\\WS\\WS\\Aspire\\HGD\\1.75 In HGD IS_7502\\1W\\08-0003-W2403.pdf', 'C:\\WS\\WS\\Aspire\\HGD\\1.75 In HGD IS_7502\\1W\\08-1504-02.pdf', 'C:\\WS\\WS\\Aspire\\HGD\\1.75 In HGD IS_7502\\DS Tran Insert\\60-0239-01.pdf', 'C:\\WS\\WS\\Aspire\\HGD\\1.75 In HGD IS_7502\\Grilles\\60-0057-01.pdf', 'C:\\WS\\WS\\Aspire\\HGD\\1.75 In HGD IS_7502\\Grilles\\60-0036-01.pdf', 'C:\\WS\\WS\\Aspire\\HGD\\1.75 In HGD IS_7502\\Mull, Stack, & Trim\\45-03943.pdf', 'C:\\WS\\WS\\Aspire\\HGD\\1.75 In HGD IS_7502\\Tech Data & Mull C:\\60-2922-01.pdf', 'C:\\WS\\WS\\Aspire\\HGD\\1.75 In HGD IS_7502\\Tech Data & Mull C:\\60-3346-01.pdf', 'C:\\WS\\WS\\Aspire\\SGD\\1W\\70-6439-01.pdf', 'C:\\WS\\WS\\Aspire\\SGD\\1W\\85-7624-01.pdf', 'C:\\WS\\WS\\Aspire\\SGD\\1W\\70-6656-01.pdf', 'C:\\WS\\WS\\Aspire\\SGD\\1W\\70-6374-01.pdf', 'C:\\WS\\WS\\Aspire\\SGD\\1W\\60-10478-01.pdf', 'C:\\WS\\WS\\Aspire\\SGD\\1W\\70-6604-01.pdf', 'C:\\WS\\WS\\Aspire\\SGD\\2W\\70-6655-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA - French_6206\\60-6264-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA - Mull, Stack & Trim_6206\\60-7136-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA - Mull, Stack & Trim_6206\\60-5946-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA - Sash Set Pict, Tran, Fixed_6206\\60-5473-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA - Sash Set Pict, Tran, Fixed_6206\\60-5504-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA - Sash Set Pict, Tran, Fixed_6206\\60-6290-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA - Sash Set Pict, Tran, Fixed_6206\\60-5503-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA - Sash Set Pict, Tran, Fixed_6206\\60-5474-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA - Screens & Storms_6206\\60-6031-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA - Screens & Storms_6206\\60-6033-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA_6206-files\\60-5491-01.pdf', 'C:\\WS\\WS\\Collections\\CA & AW\\CA_6206\\CA_6206-files\\60-5588-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6502\\Mull, Stack & Trim\\08-W5035X.pdf', 
'C:\\WS\\WS\\Collections\\HGD\\6503\\1Wide\\70-1889-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6503\\1Wide\\70-1737-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6503\\1Wide\\70-2159-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6504\\IS - 1Wide\\70-3317-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6504\\IS-Sidelite\\70-4158-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6505\\IS - 1Wide\\85-1802-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6505\\IS - 1Wide\\85-1803-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6505\\IS - 1Wide\\85-1800-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6505\\IS - 1Wide\\85-1805-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6505\\IS - 1Wide\\85-1804-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6505\\IS - 1Wide\\85-1810-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6505\\IS - 1Wide\\85-1811-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6505\\IS - 3Wide Triple\\85-1812-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6505\\IS - DS Sidelite Boxed\\85-1813-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6605\\OS - 1Wide\\85-1806-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6605\\OS - 1Wide\\85-1808-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6605\\OS - 2Wide\\85-1809-01.pdf', 'C:\\WS\\WS\\Collections\\HGD\\6605\\OS - DS Sidelite Boxed\\85-1814-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-5654-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-6547-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-6582-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-6846-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-6543-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-6548-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-5850-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-6542-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-1684-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-5865-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Dbl-Hung\\60-6605-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Mull, Stack & Trim\\60-5588-02.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Sash Set Pict, Tran, Fixed\\60-6837-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Sash Set Pict, Tran, Fixed\\60-6839-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Sash Set Pict, Tran, Fixed\\60-6849-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - DH_6107\\Sash Set Pict, Tran, Fixed\\60-7100-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - SH_6106\\SH_6106\\60-6557-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - SH_6106\\SH_6106\\60-6552-01.pdf', 'C:\\WS\\WS\\Collections\\Hung Window\\Collections - SH_6106\\SH_6106\\60-6551-01.pdf', 'C:\\WS\\WS\\Collections\\SGD\\Collections - SGD_6708\\2Wide\\70-0337-01.pdf', 'C:\\WS\\WS\\Collections\\SGD\\Collections - SGD_6708\\2Wide\\60-0851-01.pdf', 'C:\\WS\\WS\\Collections\\SGD\\Collections - SGD_6708\\3Wide\\60-4013-01.pdf', 'C:\\WS\\WS\\Collections\\SGD\\Collections - SGD_6708\\4Wide\\70-0507-01.pdf', 'C:\\WS\\WS\\Contemporary\\FRENCH CA\\8218\\85-7906-01.pdf', 'C:\\WS\\WS\\Contemporary\\FRENCH CA\\8218\\85-10134-01.pdf', 'C:\\WS\\WS\\Contemporary\\FRENCH CA\\8218\\85-10821-02.pdf', 'C:\\WS\\WS\\Contemporary\\FRENCH CA\\8218\\85-8084-01.pdf', 
'C:\\WS\\WS\\Contemporary\\HGD\\2.25 IS-CTMP 8517\\1W\\85-10889-01.pdf', 'C:\\WS\\WS\\Contemporary\\HGD\\2.25 IS-CTMP 8517\\1W\\85-2654-01.pdf', 'C:\\WS\\WS\\Contemporary\\HGD\\2.25 IS-CTMP 8517\\1W\\85-10320-01.pdf', 'C:\\WS\\WS\\Contemporary\\HGD\\2.25 IS-CTMP 8517\\2W\\85-11892-02.pdf', 'C:\\WS\\WS\\Contemporary\\HGD\\2.25 IS-CTMP 8517\\Mull, Stack & Trim\\85-12005-01.pdf', 'C:\\WS\\WS\\Contemporary\\HGD\\2.25 IS-CTMP 8517\\Mull, Stack & Trim\\85-12006-01.pdf', 'C:\\WS\\WS\\Contemporary\\HGD\\2.25 IS-CTMP 8517\\Mull, Stack & Trim\\85-12007-01.pdf', 'C:\\WS\\WS\\Contemporary\\HGD\\2.25 IS-CTMP 8517\\Mull, Stack & Trim\\85-12008-01.pdf', 'C:\\WS\\WS\\Contemporary\\HGD\\2.25 OS-CTMP 8617\\1W\\08-0006-C2403.pdf', 'C:\\WS\\WS\\Contemporary\\HGD\\OS Bi-fold 8623\\2W\\45-03268.pdf', 'C:\\WS\\WS\\Contemporary\\HGD\\OS Bi-fold 8623\\2W\\45-04681.pdf', 'C:\\WS\\WS\\Contemporary\\PUSH OUT\\8217\\PUSH OUT FRENCH CA\\85-8182-01.pdf', 'C:\\WS\\WS\\Contemporary\\PUSH OUT\\8217\\PUSH OUT FRENCH CA\\85-8186-01.pdf', 'C:\\WS\\WS\\Contemporary\\PUSH OUT\\8217\\PUSH OUT FRENCH CA\\85-8059-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Multi Slide 8720\\Pocket\\Bipart\\10W\\45-01468.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Multi Slide 8720\\Pocket\\Bipart\\10W\\85-8748-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Multi Slide 8720\\Pocket\\Bipart\\10W\\85-11969-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Multi Slide 8720\\Pocket\\Bipart\\10W\\85-11138-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Multi Slide 8720\\Pocket\\Bipart\\10W\\85-7951-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Multi Slide 8720\\Pocket\\Bipart\\10W\\85-8941-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Multi Slide 8720\\Pocket\\Bipart\\10W\\85-12079-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Multi Slide 8720\\Pocket\\Single Dir\\3W\\45-01465.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Multi Slide 8720\\Stacked\\Bipart\\10W\\45-01460.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Sliding Patio 8717\\1W\\60-10452-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Sliding Patio 8717\\1W\\08-2877-03.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Sliding Patio 8717\\1W\\70-0360-02.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Sliding Patio 8717\\1W\\85-11143-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Sliding Patio 8717\\1W\\60-10454-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Sliding Patio 8717\\1W\\60-10450-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\1.75 Contemp Sliding Patio 8717\\1W\\85-5529-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\2.25 Contemp Lift and Slide 8718\\Pocket\\Bipart\\10W\\85-8746-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\2.25 Contemp Lift and Slide 8718\\Pocket\\Bipart\\10W\\85-8394-01.pdf', 'C:\\WS\\WS\\Contemporary\\SGD\\2.25 Contemp Lift and Slide 8718\\Stacked\\Bipart\\10W\\45-01477.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & AW\\8201\\AW\\311419.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & AW\\8201\\AW\\08-W3004.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & AW\\8201\\Bow, Bay & Garden\\09-W8096.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & AW\\8201\\Bow, Bay & Garden\\70-3810-01.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & AW\\8201\\CA\\311421.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & AW\\8201\\CA\\311420.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & AW\\8210\\AW\\60-6005-01.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & AW\\8210\\AW\\60-6004-01.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & AW\\8210\\AW\\60-3267.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & 
AW\\8210\\AW\\70-0794-01.pdf', 'C:\\WS\\WS\\Custom Shield\\CA & AW\\8210\\AW\\70-0793-01.pdf', 'C:\\WS\\WS\\Custom Shield\\HGD\\851\\DS Tran Insert\\09-W5026-01.pdf', 'C:\\WS\\WS\\Custom Shield\\HGD\\851\\DS Tran Insert\\P0125800.pdf', 'C:\\WS\\WS\\Custom Shield\\HGD\\861\\DS Tran Insert\\60-2009-01.pdf', 'C:\\WS\\WS\\Custom Shield\\HGD\\861\\DS Tran Sim Sash\\60-1698-01.pdf', 'C:\\WS\\WS\\Custom Shield\\HGD\\861\\DS Tran Std Unit\\60-1694-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Cottage, Oriel, Sim Tran\\60-3483-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Cottage, Oriel, Sim Tran\\60-3258-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Cottage, Oriel, Sim Tran\\08-1005-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Cottage, Oriel, Sim Tran\\60-0638-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Cottage, Oriel, Sim Tran\\08-2039-02-03.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Cottage, Oriel, Sim Tran\\08-2041-02.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Cottage, Oriel, Sim Tran\\08-2038-02.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Cottage, Oriel, Sim Tran\\08-2049-02.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Cottage, Oriel, Sim Tran\\60-0642-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Sash Replacement\\60-2455-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Sash Replacement\\60-2458-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Sash Replacement\\08-1906-02.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Sash Replacement\\60-2453-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Sash Replacement\\60-2459-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Sash Replacement\\60-3516-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Sash Set Pic, Tran, Fixed\\08-2148-02.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8103\\Sash Set Pic, Tran, Fixed\\08-2147-02.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8108\\DH\\60-9949-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8108\\DH\\60-9858-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8108\\DH\\60-9017-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8108\\DH\\80-0581-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8108\\DH\\08-1897-02.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8108\\DH\\08-1025-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8108\\DH\\08-1892-02.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8108\\DH\\60-9021-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8108\\Sash Set Pic, Tran, Fixed\\08-2108-02.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8108\\Sash Set Pic, Tran, Fixed\\08-2106-02.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8112\\DH\\60-2487-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Hung Window\\8112\\DH\\60-2493-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Slide Window\\8403\\Double\\60-9350-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Slide Window\\8403\\Double\\60-9359-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Slide Window\\8403\\Double\\60-9353-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Slide Window\\8403\\Double\\60-8003-01.pdf', 'C:\\WS\\WS\\Custom Shield\\Slide Window\\8403\\Double\\60-4026-01.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\CA & AW\\1210\\Grilles\\60-5031-03.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\CA & AW\\1210\\Mull, Stack & Trim\\85-8215-01.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\CA & AW\\1210\\Mull, Stack & Trim\\85-7445-01.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\CA & AW\\1210\\Sash Set Pict, Tran, Fixed\\85-8207-01.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\CA & 
AW\\1210\\Screen-Storms Retractable Standard Interior\\85-9583-01.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\DS\\1310\\Window Std Unit\\45-05175.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\DS\\1310\\Window Std Unit\\45-05173.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\DS\\1310\\Window Std Unit\\45-05171.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\DS\\1310\\Window Std Unit\\85-8275-01.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\French CA\\1212\\French CA\\45-02011.pdf', 'C:\\WS\\WS\\LifeGuard IG Premium\\French CA\\1212\\French CA\\85-8084-02.pdf', 'C:\\WS\\WS\\LifeGuard IG WS\\CA & AW\\1201\\AW\\09-w3002.pdf', 'C:\\WS\\WS\\LifeGuard IG WS\\CA & AW\\1201\\CA\\08-W2036.pdf', 'C:\\WS\\WS\\LifeGuard IG WS\\CA & AW\\1204\\CA\\60-12350-01.pdf', 'C:\\WS\\WS\\LifeGuard IG WS\\SGD\\SGD French_1702\\2W\\08-2930-03.pdf', 'C:\\WS\\WS\\LifeGuard WS\\CA & AW\\1206\\Sash Set Pic, Tran, Fixed\\60-10771-01.pdf', 'C:\\WS\\WS\\LifeGuard WS\\DS\\1302\\Window Std Unit\\85-5506-01.pdf', 'C:\\WS\\WS\\Mark Haven Alum Clad\\HGD\\8501\\DS Tran Std Unit\\100413.pdf', 'C:\\WS\\WS\\Mark Haven Alum Clad\\HGD\\8501\\DS Tran Std Unit\\100727.pdf', 'C:\\WS\\WS\\Mark Haven Alum Clad\\HGD\\8501\\Install, Replacement Catalog\\102379.pdf', 'C:\\WS\\WS\\Mark Haven Alum Clad\\HGD\\8501\\Mull, Stack & Trim\\80-0716-01.pdf', 'C:\\WS\\WS\\Mark Haven Alum Clad\\HGD\\8501\\Mull, Stack & Trim\\80-0714-01.pdf', 'C:\\WS\\WS\\Mark Haven Alum Clad\\HGD\\8501\\Mull, Stack & Trim\\80-0715-01.pdf', 'C:\\WS\\WS\\Mark Haven Alum Clad\\HGD\\8501\\Mull, Stack & Trim\\80-0718-01.pdf', 'C:\\WS\\WS\\Mark Haven Alum Clad\\HGD\\8509\\1W\\08-w5020.pdf', 'C:\\WS\\WS\\Mark Haven Alum Clad\\HGD\\8601\\DS Tran Std Unit\\100413.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-8398-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-8702-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-8701-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-7154-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-9136-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-7509-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-8428-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-7276-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-7787-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-7523-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-7196-02.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Pic\\1108 Single Glazed\\85-7329-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Tran\\1110 Single Glazed\\85-8031-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Tran\\1110 Single Glazed\\85-8042-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH Tran\\1110 Single Glazed\\85-8384-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\45-04721.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-12152-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-12154-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-12152-02.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-12109-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-10647-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-7318-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-7206-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-7206-02.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-9116-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-7441-01.pdf', 'C:\\WS\\WS\\Premium Coastal\\DH\\1104 IG\\85-10977-01.pdf', 
'C:\\WS\\WS\\Premium Coastal\\DH\\1105 Single Glazed\\45-00775.pdf', 'C:\\WS\\WS\\Premium\\CA & AW\\8211\\AW\\85-8437-01.pdf', 'C:\\WS\\WS\\Premium\\CA & AW\\8211\\EB, Segment\\85-8434-01.pdf', 'C:\\WS\\WS\\Premium\\CA & AW\\8211\\EB, Segment\\85-8354-01.pdf', 'C:\\WS\\WS\\Premium\\CA & AW\\8211\\Special Shapes\\85-7622-01.pdf', 'C:\\WS\\WS\\Premium\\DH Pic\\POCKET PIC 8119\\60-12888-01.pdf', 'C:\\WS\\WS\\Premium\\DH\\8109\\DH\\85-7614-01.pdf', 'C:\\WS\\WS\\Premium\\DH\\8109\\DH\\85-7366-01.pdf', 'C:\\WS\\WS\\Premium\\DH\\8109\\DH\\85-9320-01.pdf', 'C:\\WS\\WS\\Premium\\DH\\8109\\DH\\85-7626-01.pdf', 'C:\\WS\\WS\\Premium\\DH\\8109\\DH\\85-7612-01.pdf', 'C:\\WS\\WS\\Premium\\DH\\8109\\DH\\85-7210-01.pdf', 'C:\\WS\\WS\\Premium\\DH\\8109\\Prod-Mod\\85-9010-01.pdf', 'C:\\WS\\WS\\Premium\\DH\\8109\\Prod-Mod\\85-9001-01.pdf', 'C:\\WS\\WS\\Premium\\DH\\8120\\DH\\85-9220-01.pdf', 'C:\\WS\\WS\\Premium\\DS\\8306\\Corner Unit\\85-10624-01.pdf', 'C:\\WS\\WS\\Premium\\DS\\8307\\Premium DS - Window Std Unit_8307\\85-10692-01.pdf', 'C:\\WS\\WS\\Premium\\DS\\8307\\Premium DS - Window Std Unit_8307\\85-8046-01.pdf', 'C:\\WS\\WS\\Premium\\DS\\POCKET 8309\\Premium POCKET DS - Grilles_8309\\85-11915-01.pdf', 'C:\\WS\\WS\\Premium\\DS\\POCKET 8309\\Premium POCKET DS - Grilles_8309\\85-8588-01.pdf', 'C:\\WS\\WS\\Premium\\DS\\POCKET 8309\\Premium POCKET DS - Grilles_8309\\85-8598-01.pdf', 'C:\\WS\\WS\\Premium\\FRENCH CA\\8214\\French Casement\\45-03962.pdf', 'C:\\WS\\WS\\Premium\\FRENCH CA\\8214\\French Casement\\45-03935-01.pdf', 'C:\\WS\\WS\\Premium\\SGD\\Telescoping 8706\\2W\\08-2878-03.pdf', 'C:\\WS\\WS\\Premium\\SGD\\Telescoping 8706\\2W\\08-2876-03.pdf', 'C:\\WS\\WS\\Premium\\SGD\\Telescoping 8706\\2W\\08-2997-03.pdf', 'C:\\WS\\WS\\ProShield\\CA & AW\\7201\\AW\\85-1099-01.pdf', 'C:\\WS\\WS\\ProShield\\CA & AW\\7201\\AW\\85-1097-01.pdf', 'C:\\WS\\WS\\ProShield\\CA & AW\\7201\\CA\\08-3193-05.pdf', 'C:\\WS\\WS\\ProShield\\CA & AW\\7201\\Mull, Stack & Trim\\60-5893-01.pdf', 'C:\\WS\\WS\\ProShield\\DS\\7301\\Std Unit\\85-2118-01.pdf', 'C:\\WS\\WS\\ProShield\\DS\\733\\CA Pict, Tran, Sim Tran Std Unit\\511458A.pdf', 'C:\\WS\\WS\\ProShield\\Hung Window\\DH ProTilt_7101\\Double\\60-12634-01.pdf', 'C:\\WS\\WS\\ProShield\\Hung Window\\DH ProTilt_7101\\Double\\60-10723-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3756-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3757-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3759-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3761-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3762-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3764-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3765-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3766-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3767-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3751-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3752-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3753-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3754-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\70-3755-01.pdf', 'C:\\WS\\WS\\Sequel\\CA & AW\\8207\\Bow, Bay & Garden\\08-1316-02.pdf', 'C:\\WS\\WS\\Signature\\CA & AW\\8205\\AW\\85-7575-01.pdf', 'C:\\WS\\WS\\Signature\\CA & AW\\8205\\AW\\85-7576-01.pdf', 'C:\\WS\\WS\\Signature\\Dbl-Hung\\8109\\DH_8109\\45-05060.pdf', 
'C:\\WS\\WS\\Signature\\Dbl-Hung\\8109\\DH_8109\\85-8502-01.pdf', 'C:\\WS\\WS\\Signature\\Dbl-Hung\\8120\\DH\\85-12096-01.pdf', 'C:\\WS\\WS\\Signature\\Dbl-Hung\\8122\\DH\\45-13499.pdf', 'C:\\WS\\WS\\Signature\\Dbl-Hung\\8122\\DH\\45-11955.pdf', 'C:\\WS\\WS\\Signature\\Dbl-Hung\\8122\\DH\\45-11973.pdf', 'C:\\WS\\WS\\Signature\\Dbl-Hung\\8122\\DH\\45-13364.pdf', 'C:\\WS\\WS\\Signature\\DH Pic\\8123\\Pic\\45-13412.pdf', 'C:\\WS\\WS\\Signature\\DH Pic\\8123\\Pic\\45-14407.pdf', 'C:\\WS\\WS\\Signature\\DH Pic\\8123\\Pic\\45-14408.pdf', 'C:\\WS\\WS\\Signature\\DS\\Signature - DS_8306\\Signature DS - Grilles_8306\\85-11041-01.pdf', 'C:\\WS\\WS\\Signature\\DS\\Signature - DS_8306\\Signature DS - Grilles_8306\\85-8594-01.pdf', 'C:\\WS\\WS\\Signature\\DS\\Signature - DS_8306\\Signature DS - Grilles_8306\\85-11044-01.pdf', 'C:\\WS\\WS\\Signature\\DS\\Signature - DS_8306\\Signature DS - Grilles_8306\\85-8596-01.pdf', 'C:\\WS\\WS\\Signature\\DS\\Signature - DS_8306\\Signature DS - Mull, Stack & Trim_8306\\85-9937-01.pdf', 'C:\\WS\\WS\\Signature\\DS\\Signature - DS_8306\\Signature DS - Window Std Unit_8306\\85-9472-01.pdf', 'C:\\WS\\WS\\Visions 1000\\DS\\936\\Mull, Stack & Trim\\70-1575-01.pdf', 'C:\\WS\\WS\\Wood\\CA & AW\\6200\\Bow, Bay & Garden\\310243.pdf', 'C:\\WS\\WS\\Wood\\CA & AW\\6200\\French CA\\08-W2045.pdf', 'C:\\WS\\WS\\Wood\\CA & AW\\6200\\French CA\\08-W2044X.pdf', 'C:\\WS\\WS\\Wood\\CA & AW\\6200\\Mull, Stack & Trim\\09-0024B-W0186.pdf', 'C:\\WS\\WS\\Wood\\CA & AW\\6204\\AW\\70-1068-01.pdf', 'C:\\WS\\WS\\Wood\\CA & AW\\6204\\AW\\60-3075-01.pdf', 'C:\\WS\\WS\\Wood\\DS\\630\\Window Std Unit\\60-11335-01.pdf', 'C:\\WS\\WS\\Wood\\DS\\6301\\Mull, Stack & Trim\\09-0125-W0896.pdf', 'C:\\WS\\WS\\Wood\\DS\\6301\\Mull, Stack & Trim\\502728.pdf', 'C:\\WS\\WS\\Wood\\HGD\\1.75 In HGD IS_651\\EB, Segment\\08-1788-02.pdf', 'C:\\WS\\WS\\Wood\\HGD\\1.75 In HGD IS_6510\\1Wide\\45-09154.pdf', 'C:\\WS\\WS\\Wood\\Hung Window\\DH_610\\DH\\60-0757-01.pdf', 'C:\\WS\\WS\\Wood\\Hung Window\\DH_610\\DH\\60-12630-01.pdf', 'C:\\WS\\WS\\Wood\\Hung Window\\DH_610\\DH\\60-11717-01.pdf', 'C:\\WS\\WS\\Wood\\Hung Window\\DH_610\\Sash Set Pict, Tran, Fixed\\08-2185-02.pdf', 'C:\\WS\\WS\\Wood\\Hung Window\\DH_610\\Sash Set Pict, Tran, Fixed\\08-2478-02.pdf', 'C:\\WS\\WS\\Wood\\Hung Window\\DH_611\\Sash Replacement\\60-2456-01.pdf', 'C:\\WS\\WS\\Wood\\Hung Window\\DH_611\\Sash Replacement\\60-2457-01.pdf', 'C:\\WS\\WS\\Wood\\Hung Window\\DH_611\\Sash Replacement\\60-2454-01.pdf', 'C:\\WS\\WS\\Wood\\Hung Window\\DH_611\\Sash Replacement\\60-2460-01.pdf', 'C:\\WS\\WS\\Wood\\SGD\\Narrow Stile - 2009 - Omni Sill_6714\\Mull, Stack & Trim\\85-4951-01.pdf', 'C:\\WS\\WS\\Wood\\SGD\\Wide Stile - 2009 - Omni Sill_6715\\1Wide\\60-1096-02.pdf', 'C:\\WS\\WS\\Wood\\SGD\\Wide Stile - 2009 - Omni Sill_6715\\1Wide\\60-1094-01.pdf', 'C:\\WS\\WS\\Wood\\SGD\\Wide Stile - 2009 - Omni Sill_6715\\1Wide\\60-1092-01.pdf', 'C:\\WS\\WS\\Wood\\Slide Window\\Dbl-Triple Slide Window_6401\\Double\\60-8006-01.pdf', 'C:\\WS\\WS\\WS - EDS - IS\\6801\\Mull, Stack & Trim\\102773.pdf', 'C:\\WS\\WS\\WS - EDS - IS\\6801\\Mull, Stack & Trim\\103350.pdf', 'C:\\WS\\WS\\WS - EDS - IS\\6801\\Mull, Stack & Trim\\80-4123-01.pdf', 'C:\\WS\\WS\\WS - EDS - IS\\6801\\Mull, Stack & Trim\\80-4123-02.pdf', 'C:\\WS\\WS\\WS - EDS - IS\\6801\\Panel, Smooth FG\\80-2122-01.pdf', 'C:\\WS\\WS\\WS - EDS - IS\\6801\\Panel, Smooth FG\\80-1981-01.pdf', 'C:\\WS\\WS\\WS - EDS - IS\\6801\\Panel, Smooth FG\\80-1839-01.pdf', 'C:\\WS\\WS\\WS - EDS - IS\\6801\\Panel, Smooth 
Steel\\80-2762-01.pdf', 'C:\\WS\\WS\\WS - EDS - OS\\6901\\1W\\102829.pdf', 'C:\\WS\\WS\\WS - EDS - OS\\6901\\1W\\80-2087-01.pdf', 'C:\\WS\\WS\\WS - EDS - OS\\7901\\PNL\\80-1336-01.pdf', 'C:\\WS\\WS\\WS Alum Clad\\CA & AW\\8200\\EB, Segment\\08-2053-02.pdf', 'C:\\WS\\WS\\WS Alum Clad\\CA & AW\\8204\\Grilles\\70-1947-01.pdf', 'C:\\WS\\WS\\WS Alum Clad\\DH Window\\810\\DH\\85-4965-01.pdf', 'C:\\WS\\WS\\WS Alum Clad\\SGD\\Pocket Door 1.75 Panel 8712\\2L or 2R Single Direction\\85-3313-01.pdf', 'C:\\WS\\WS\\WS Alum Clad\\SGD\\Pocket Door 1.75 Panel 8712\\2L or 2R Single Direction\\85-2440-01.pdf', 'C:\\WS\\WS\\WS Alum Clad\\SGD\\Pocket Door 1.75 Panel 8712\\2L2R 4W Biparting\\85-3331-04.pdf', 'C:\\WS\\WS\\WS Alum Clad\\SGD\\Pocket Door 2.25 Panel 8713\\2L or 2R Single Direction\\85-3779-03.pdf', 'C:\\WS\\WS\\WS HR175\\SH_6101\\Single\\60-12643-01.pdf', 'C:\\WS\\WS\\WS HR175\\SH_6101\\Single\\60-12638-01.pdf', 'C:\\WS\\WS\\WS HR175\\SH_6101\\Single\\60-12602-01.pdf', 'C:\\WS\\WS\\WS HR175\\SH_6101\\Single\\60-12601-01.pdf', 'C:\\WS\\WS\\WS HR175\\SH_6101\\Single\\60-12644-01.pdf', 'C:\\WS\\WS\\WS HR175\\SH_6101\\Single\\60-12636-01.pdf']
Existent files: ['C:\\WS\\WS\\Aspire\\CA & AW\\7205\\AW\\85-7534-01.pdf']
Grouped files: [['C:\\WS\\WS\\Aspire\\CA & AW\\7205\\AW\\85-7534-01.pdf']]
``` | 2018/12/05 | [
"https://Stackoverflow.com/questions/53637976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6640090/"
] | You really should be able to make that question shorter. Why not write some functions? Then you could identify a certain function as the problem and your question would be about that function only.
Maybe this helps?
```
def get_filenames_list(csv_filename):
import os
import pandas
# Read table from CSV file
table = pandas.read_csv(csv_filename)
# Create list of all files from table
filenames = []
for name, path in zip(table["Name"], table["Found In"]):
filename = "{}.pdf".format(os.path.join(path, name))
filenames.append(filename)
return filenames
def remove_nonexistent_files(filenames):
import os
# Remove nonexistent files from list of filenames
filenames = list(filter(os.path.isfile, filenames))
return filenames
def group_identical_files(filenames, buffer_size=1024):
# Remove duplicates from list of filenames
filenames = list(set(filenames))
groups = []
while len(filenames) > 0:
a_filename = filenames.pop()
group = [a_filename]
        for b_filename in list(filenames):  # iterate over a copy; matches are removed from the original list below
with open(a_filename, "rb") as a_handle:
with open(b_filename, "rb") as b_handle:
identical = False
while True:
a_buffer = a_handle.read(buffer_size)
b_buffer = b_handle.read(buffer_size)
if a_buffer != b_buffer:
break
if len(a_buffer) == 0 and len(b_buffer) == 0:
# EOF is reached for both files
identical = True
break
if identical:
group.append(b_filename)
filenames.remove(b_filename)
groups.append(group)
return groups
if __name__ == "__main__":
files = get_filenames_list("C://Users//27659//Desktop//Unique Wood Profiles.csv")
print("All files:", files)
existent_files = remove_nonexistent_files(files)
print("Existent files:", existent_files)
grouped_files = group_identical_files(existent_files)
print("Grouped files:", grouped_files)
```
You might have to adjust the path to your CSV file (which I've taken from your code above), but otherwise you can just run that script and it will show you
* `All files`: Contains all filenames taken from the CSV file.
* `Existent files`: So you can check if the script succeeded in putting the filenames together. (If this list is empty, you'll know that either these files don't exist, or the prior step did not work.)
* `Grouped files`: The result where each list contains the filenames of identical files. If a list only has one element, then no other file is identical to that one. | I broke this down into the simplest version to get to the fundamental problem: why does one path from my list work while all the others don't? What I discovered through pure luck is that when I go to the path location and open the file from inside my PDM vault, Python's os module will then recognize it as a path. Really strange to me. If anyone has an explanation as to why that would be, I would appreciate it.
I talked to our IT department, and they said it might have something to do with me using a C: path instead of the server path. They weren't sure, though; they don't know very much about how PDM is structured.
Here is the code I wrote to create a simple go/no-go list:
```
import pandas as pd
import os
#creates a dataframe in python of the information in my csv file
main_df = pd.read_csv("C://Users//27659//Desktop//Unique Wood Profiles.csv")
# Turns the columns of the dataframe into lists
drawing_number_list = main_df['Name'].tolist()
description_list = main_df['Description'].tolist()
found_in_list = main_df['Found In'].tolist()
# I've always done my empty lists this way. I never took the time to learn about empty lists, and this only adds one more line of code.
full_file_list = ['placeholder']
# fill the list with the concatenated strings from the other list. puts together the full path to the file.
for i in range(0, len(drawing_number_list)):
full_file_list.append(found_in_list[i] + drawing_number_list[i] + '.pdf')
# Index 0 of the list is still the string 'placeholder'; this deletes it.
del full_file_list[0]
# Some boolean logic that will tell me if the path is something the computer can use. This also puts the printout in a nice format that can be copied and pasted from the print window.
for i in range(0, len(full_file_list)):
if os.path.exists(full_file_list[i]):
print('1\t' + full_file_list[i])
else:
print('0\t' + full_file_list[i])
```
It's very similar to Jayjayyy's code. With the printout this way I can copy and paste it into a txt file and go through each one, opening and closing the file that the path goes to. Since I don't have that many files in my list, I'm going to do it manually. If I had more, you would have to get the path from the actual server. This project is a one-off check for me, so it's not that important to have the server path. I would recommend that if someone has the same problem in the future, they take the time to get the correct path straight from the server, and the permissions that go with it. It will save you a lot of headache.
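As a follow-up, a small variation of the loop above can write the go/no-go list straight to a text file instead of copy-pasting from the print window. This is just a hedged sketch; the output filename `go_no_go.txt` is my own choice, and it assumes `full_file_list` was built as in the script above.
```
import os

# Minimal sketch: write the go/no-go flags to a file instead of printing.
with open('go_no_go.txt', 'w') as out:
    for path in full_file_list:
        flag = '1' if os.path.exists(path) else '0'
        out.write(flag + '\t' + path + '\n')
``` |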
16,333,707 | Please refer to my Fiddle [here](http://jsfiddle.net/yvonnezoe/q524G/2/). I have a table with dynamic rows, each containing a checkbox with variable ID `id="monitor_'+rowID+'"` where rowID is a global variable declared without the keyword `var` inside the function `addText()`. I tried to use `.each(function(){});` to loop through the rows, but it doesn't seem to work.
My intention is to see if the checkbox in each row is checked. If it is, a function `monitoring` will be called.
Here is the function:
```
function monitoring() {
$('#test').append("checked");
fbType = $('td.nth(2)', $(this)).text();
fbNum = $('td.nth(3)', $(this)).text();
$('#test').append(fbType + fbNum);
$.post('/request', {
inputText: fbNum,
key_pressed: fbType.toString()
}).done(function (reply) {
if (reply == "on") {
$('#test').append("on ");
} else {
$('#test').append("off ");
}
});
}
```
You can add rows by selecting an option and adding a number, then pressing the "Add" button. The first `.append("checked")` is to make sure I'm calling the function, but the weird thing is that it only displays once even if I have a few checkboxes and I checked them all.
Another question, I can't display the following:
```
fbType = $('td.nth(2)', $(this)).html();
fbNum = $('td.nth(3)', $(this)).html();
```
Why is that? It is crucial because, as you can see, I will be posting this data to a Python function...
Please tell me what's wrong with my code and how I should do it. | 2013/05/02 | [
"https://Stackoverflow.com/questions/16333707",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2297702/"
] | You could pass each matched row into your loop, as in the snippet below.
I've also changed the nth selector in your attempt to get the values:
in your example you used a jQuery method as if it were a CSS selector.
```
$('#monitor').click(function () {
$('#status_table tr [id^="monitor_"]:checked').each(function () {
monitoring($(this).parents('tr'));
});
});
function monitoring($row) {
$('#test').append("checked");
fbType = $row.find('td:nth-child(2)').html();
fbNum = $row.find('td:nth-child(3)').html();
$('#test').append(fbType + fbNum);
}
``` | You have an algorithm problem here:
```
$('#monitor').click(function () {
$('#status_table tr').each(function () {
$check = $('#status_table tr #monitor_' + rowID)
if ($check.is(':checked')) {
monitoring();
}
});
});
```
You are using rowID which, as you said, is a global variable; by the time .each() runs it holds the value from the last row added (if there are 3 rows, rowID = 3), so every iteration checks the same checkbox.
Can you try to explain what you want to do with these lines:
```
fbType = $('td.nth(2)', $(this)).html();
fbNum = $('td.nth(3)', $(this)).html();
``` |
10,050,078 | I want to write a function that returns a string value, but I have been stuck on this problem for days and cannot resolve it on my own, so I need your advice and hints. I was using the JH SHA-3 (2010 candidate) hash function. Here is the code:
This is updated code, but I still don't get the expected value when calling this function from Python; the returned value is "9976864". Any more help?
```
#include <stdio.h>
#include "jh_ansi_opt32.h"
#include <time.h>
#include <stdlib.h>
char* jh(char *input)
{
BitSequence output[512];
char *iData;
char* msg;
int dInt;
msg= (char*)malloc(sizeof(output));
if(!msg){
return 1;
}
memset(output,0,sizeof(output));
iData = input;
printf("[+] data is %s\n", iData);
dInt = strlen(iData);
BitSequence data[dInt];
memset(data,0, sizeof(data));
strncpy(data,iData,dInt);
DataLength dLen =dInt;
HashJh(512, data,dLen,output);
//printf("\n[+] resulted hash is ");
int k;
for (k=0;k<sizeof(output);k++){
msg[k]= output[k];
}
if (msg) return msg;
free(msg);
return 0;
}
```
And the python one is:
```
from ctypes import *
d = CDLL('jh.dll')
a=d.jh('this is message by hash jh function')
print a
```
This is updated code, but I still don't get the expected value. The returned value when I call it from Python is an integer like "9968784". Any more help would be appreciated, thanks. | 2012/04/06 | [
"https://Stackoverflow.com/questions/10050078",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1309539/"
] | >
>
> ```
> if (!(BitSequence *)malloc(sizeof(output)))
> exit(EXIT_FAILURE);
>
> ```
>
>
That doesn't do anything. Second, you're incrementing `msg` and **then** returning it. Third, you never seem to dereference `msg`, you're only incrementing it. | Get rid of the `malloc` code, your `output`, `data`, and `dLen` arrays/variables will be allocated on the stack.
`msg` is a `char*`, not a `char`. It is also uninitialized.
If you want to return a string, you need to allocate that using `malloc` and fill it in somehow. Return a pointer to that for your string. |
10,050,078 | I want to write a function that returns a string value, but I have been stuck on this problem for days and cannot resolve it on my own, so I need your advice and hints. I was using the JH SHA-3 (2010 candidate) hash function. Here is the code:
This is updated code, but I still don't get the expected value when calling this function from Python; the returned value is "9976864". Any more help?
```
#include <stdio.h>
#include "jh_ansi_opt32.h"
#include <time.h>
#include <stdlib.h>
char* jh(char *input)
{
BitSequence output[512];
char *iData;
char* msg;
int dInt;
msg= (char*)malloc(sizeof(output));
if(!msg){
return 1;
}
memset(output,0,sizeof(output));
iData = input;
printf("[+] data is %s\n", iData);
dInt = strlen(iData);
BitSequence data[dInt];
memset(data,0, sizeof(data));
strncpy(data,iData,dInt);
DataLength dLen =dInt;
HashJh(512, data,dLen,output);
//printf("\n[+] resulted hash is ");
int k;
for (k=0;k<sizeof(output);k++){
msg[k]= output[k];
}
if (msg) return msg;
free(msg);
return 0;
}
```
And the python one is:
```
from ctypes import *
d = CDLL('jh.dll')
a=d.jh('this is message by hash jh function')
print a
```
This is updated code, but I still don't get the expected value. The returned value when I call it from Python is an integer like "9968784". Any more help would be appreciated, thanks. | 2012/04/06 | [
"https://Stackoverflow.com/questions/10050078",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1309539/"
] | >
>
> ```
> if (!(BitSequence *)malloc(sizeof(output)))
> exit(EXIT_FAILURE);
>
> ```
>
>
That doesn't do anything. Second, you're incrementing `msg` and **then** returning it. Third, you never seem to dereference `msg`, you're only incrementing it. | This either loses the pointer returned by malloc, or only works when there is no more memory:
```
if (!(BitSequence *)malloc(sizeof(output)))
exit(EXIT_FAILURE);
```
Then this does the same:
```
if ((BitSequence *) malloc(sizeof(data)) == NULL)
exit(EXIT_FAILURE);
```
Is that what you need? I'd normally say that is a bug. |
10,050,078 | I want to write a function that returns a string value, but I have been stuck on this problem for days and cannot resolve it on my own, so I need your advice and hints. I was using the JH SHA-3 (2010 candidate) hash function. Here is the code:
This is updated code, but I still don't get the expected value when calling this function from Python; the returned value is "9976864". Any more help?
```
#include <stdio.h>
#include "jh_ansi_opt32.h"
#include <time.h>
#include <stdlib.h>
char* jh(char *input)
{
BitSequence output[512];
char *iData;
char* msg;
int dInt;
msg= (char*)malloc(sizeof(output));
if(!msg){
return 1;
}
memset(output,0,sizeof(output));
iData = input;
printf("[+] data is %s\n", iData);
dInt = strlen(iData);
BitSequence data[dInt];
memset(data,0, sizeof(data));
strncpy(data,iData,dInt);
DataLength dLen =dInt;
HashJh(512, data,dLen,output);
//printf("\n[+] resulted hash is ");
int k;
for (k=0;k<sizeof(output);k++){
msg[k]= output[k];
}
if (msg) return msg;
free(msg);
return 0;
}
```
And the python one is:
```
from ctypes import *
d = CDLL('jh.dll')
a=d.jh('this is message by hash jh function')
print a
```
This is updated code, but I still don't get the expected value. The returned value when I call it from Python is an integer like "9968784". Any more help would be appreciated, thanks. | 2012/04/06 | [
"https://Stackoverflow.com/questions/10050078",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1309539/"
] | >
>
> ```
> if (!(BitSequence *)malloc(sizeof(output)))
> exit(EXIT_FAILURE);
>
> ```
>
>
That doesn't do anything. Second, you're incrementing `msg` and **then** returning it. Third, you never seem to dereference `msg`, you're only incrementing it. | Regarding the return value issue in Python, `ctypes` defaults to expecting an integer return value. Tell it the return type ([docs](http://docs.python.org/library/ctypes.html#return-types)):
```
>>> from ctypes import *
>>> d = CDLL('jh.dll')
>>> d.jh('abc')
39727048
>>> d.jh.restype=c_char_p
>>> d.jh('abc')
'abc'
```
I faked your DLL and just returned a string. The number you get is the integer value of the pointer address returned.
Note, as others mention, you will leak memory by returning a malloc'ed pointer with no way to free it.
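One hedged way around the leak is to let Python own the buffer and have the C side fill it in. The `jh_buf` function below is hypothetical (the DLL would have to be changed to export something like `int jh_buf(const char *input, char *out, int out_len)`), but the ctypes side would then look roughly like this:
```
from ctypes import CDLL, create_string_buffer, c_char_p, c_int

d = CDLL('jh.dll')
out = create_string_buffer(512)  # Python owns (and frees) this memory
# hypothetical export: int jh_buf(const char *input, char *out, int out_len)
d.jh_buf.argtypes = [c_char_p, c_char_p, c_int]
d.jh_buf.restype = c_int
d.jh_buf(b'this is a message', out, len(out))
print(out.raw)  # filled in by the C side
``` |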
26,365,149 | I have an NGINX server with uWSGI and Python with PyZMQ (installed via sudo pip install pyzmq).
I'm trying to create a socket from a ZMQ context, but I always catch an exception.
```
import zmq
import os
import sys
from cgi import parse_qs, escape
sys.path.append('/usr/share/nginx/www/application')
os.environ['PYTHON_EGG_CACHE'] = '/usr/share/nginx/www/.python-egg'
def application(environ, start_response):
ctx = zmq.Context()
try:
message = 'Everything OK'
s = ctx.socket(zmq.REQ)
except Exception as e:
message = "Exception({0}): {1}".format(e.errno, e.strerror)
pass
response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(message)))]
start_response('200 OK', response_headers);
return [message]
```
It raised the exception:
```
Exception(14): Bad address
```
If I comment out the line
```
s = ctx.socket(zmq.REQ)
```
then everything is OK.
I searched the internet, but nobody seems to have the same problem.
Please, do you have any idea what I am doing wrong?
**Edit:**
I wrote a simple Python script that works, and I get the response from recv:
```
import zmq
import os
import sys
print 'Create zeromq instance...'
ctx = zmq.Context()
print 'Create socket ...'
try:
s = ctx.socket(zmq.REQ)
except Exception as e:
print "Exception({0}): {1}".format(e.errno, e.strerror)
sys.exit()
s.connect('tcp://localhost:5555')
s.send('fTtt;')
message = s.recv()
print message
```
It seems to be a problem with running PyZMQ under uWSGI, but why? | 2014/10/14 | [
"https://Stackoverflow.com/questions/26365149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2878139/"
] | OK, it seems to be the same error as in [ZeroMQ + Django & uwsgi issues](https://stackoverflow.com/questions/16071125/zeromq-django-uwsgi-issues):
>
> The problem arises because the zmq context creates a background thread, and this thread is not inherited after fork().
>
>
>
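In practice this means the context has to be created after uWSGI forks its workers, for example lazily inside the WSGI callable. A minimal sketch (the endpoint address is an assumption):
```
import zmq

_ctx = None  # created lazily, i.e. in the worker process after the fork

def get_context():
    global _ctx
    if _ctx is None:
        _ctx = zmq.Context()
    return _ctx

def application(environ, start_response):
    s = get_context().socket(zmq.REQ)
    s.connect('tcp://localhost:5555')  # assumed endpoint
    s.send(b'ping')
    message = s.recv()
    s.close()
    start_response('200 OK', [('Content-type', 'text/plain')])
    return [message]
```
Alternatively, uWSGI's `lazy-apps` option loads the application after forking, which avoids the problem without code changes.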
I tried NGINX with a simple Python WSGI app and it works. | Presumably you intend to use the socket somewhere below the HTTP response? Because at the moment you're not connecting or binding on anything, which makes the "Bad address" exception fairly strange. Try creating a minimal example without the cgi, nginx, cache, response stuff, just the ZMQ context creation and the socket creation and see if you raise the same exception. If so, it appears there's something wonky in your binding or your library. Make sure everything is installed correctly and all your versions are compatible. |
61,099,889 | I have a list as follows.
```
mylist = [["person1,person2,person3", ["item1", "item2", "item3", "item4", "item5", "item6"]],
["person4,person5,person3", ["item10", "item11", "item12"]]]
```
I want to make `person-item` list from `mylist` as follows.
```
myoutput = [["person1", [["item1", "item2", "item3", "item4", "item5", "item6"]]],
["person2", [["item1", "item2", "item3", "item4", "item5", "item6"]]],
["person3", [["item1", "item2", "item3", "item4", "item5", "item6"], ["item10", "item11", "item12"]]],
["person4", [["item10", "item11", "item12"]]]
["person5", [["item10", "item11", "item12"]]]]
```
This is how I currently do this.
```
mylist = [["person1,person2,person3", ["item1", "item2", "item3", "item4", "item5", "item6"]], ["person4,person5,person3", ["item10", "item11", "item12"]]]
person_list = []
for item in mylist:
person_list.extend(item[0].split(","))
person_list = list(set(person_list))
print(person_list)
myoutput = []
for person in person_list:
mytemp = []
for item in mylist:
if person in item[0]:
mytemp.append(item[1])
myoutput.append(tuple((person, mytemp)))
print(myoutput)
```
However, this is very inefficient since I have a huge dataset to process. In other words, my current program is O(n) and O(n²). I am looking for ways to do this efficiently in Python.
I am happy to provide more details if needed. | 2020/04/08 | [
"https://Stackoverflow.com/questions/61099889",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10704050/"
] | You may use a `dict` keyed by person; that is more efficient than re-collecting the previous values each time. With `defaultdict`, if the key is not present it will be created with the type you specified, here a `list`, and then you just append your item list to it:
```
myoutput = defaultdict(list) # from collections import defaultdict
for item in mylist:
for person in item[0].split(","):
myoutput[person].append(item[1])
```
This gives
```
{'person1': [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']],
'person2': [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']],
'person3': [['item1', 'item2', 'item3', 'item4', 'item5', 'item6'], ['item10', 'item11', 'item12']],
'person4': [['item10', 'item11', 'item12']], 'person5': [['item10', 'item11', 'item12']]}
```
To get back to your structure, if you really need it:
```
myoutput = [[k, v] for k, v in myoutput.items()]
[['person1', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']]],
['person2', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']]],
['person3', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6'], ['item10', 'item11', 'item12']]],
['person4', [['item10', 'item11', 'item12']]], ['person5', [['item10', 'item11', 'item12']]]]
``` | You can use `dict.setdefault` to initialize each person with an empty list and keep appending incoming lists to the respective persons:
```
output = {}
for persons, items in mylist:
for person in persons.split(','):
output.setdefault(person, []).append(items)
```
so that:
```
list(map(list, output.items()))
```
returns:
```
[['person1', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']]],
['person2', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']]],
['person3',
[['item1', 'item2', 'item3', 'item4', 'item5', 'item6'],
['item10', 'item11', 'item12']]],
['person4', [['item10', 'item11', 'item12']]],
['person5', [['item10', 'item11', 'item12']]]]
``` |
61,099,889 | I have a list as follows.
```
mylist = [["person1,person2,person3", ["item1", "item2", "item3", "item4", "item5", "item6"]],
["person4,person5,person3", ["item10", "item11", "item12"]]]
```
I want to make `person-item` list from `mylist` as follows.
```
myoutput = [["person1", [["item1", "item2", "item3", "item4", "item5", "item6"]]],
["person2", [["item1", "item2", "item3", "item4", "item5", "item6"]]],
["person3", [["item1", "item2", "item3", "item4", "item5", "item6"], ["item10", "item11", "item12"]]],
["person4", [["item10", "item11", "item12"]]]
["person5", [["item10", "item11", "item12"]]]]
```
This is how I currently do this.
```
mylist = [["person1,person2,person3", ["item1", "item2", "item3", "item4", "item5", "item6"]], ["person4,person5,person3", ["item10", "item11", "item12"]]]
person_list = []
for item in mylist:
person_list.extend(item[0].split(","))
person_list = list(set(person_list))
print(person_list)
myoutput = []
for person in person_list:
mytemp = []
for item in mylist:
if person in item[0]:
mytemp.append(item[1])
myoutput.append(tuple((person, mytemp)))
print(myoutput)
```
However, this is very inefficient since I have a huge dataset to process. In other words, my current program is O(n) and O(n²). I am looking for ways to do this efficiently in Python.
I am happy to provide more details if needed. | 2020/04/08 | [
"https://Stackoverflow.com/questions/61099889",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10704050/"
] | You may use a `dict` keyed by person; that is more efficient than re-collecting the previous values each time. With `defaultdict`, if the key is not present it will be created with the type you specified, here a `list`, and then you just append your item list to it:
```
myoutput = defaultdict(list) # from collections import defaultdict
for item in mylist:
for person in item[0].split(","):
myoutput[person].append(item[1])
```
This gives
```
{'person1': [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']],
'person2': [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']],
'person3': [['item1', 'item2', 'item3', 'item4', 'item5', 'item6'], ['item10', 'item11', 'item12']],
'person4': [['item10', 'item11', 'item12']], 'person5': [['item10', 'item11', 'item12']]}
```
To get back to your structure, if you really need it:
```
myoutput = [[k, v] for k, v in myoutput.items()]
[['person1', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']]],
['person2', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']]],
['person3', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6'], ['item10', 'item11', 'item12']]],
['person4', [['item10', 'item11', 'item12']]], ['person5', [['item10', 'item11', 'item12']]]]
``` | ```
from collections import defaultdict
mylist = [["person1,person2,person3", ["item1", "item2", "item3", "item4", "item5", "item6"]],
["person4,person5,person3", ["item10", "item11", "item12"]]]
d = defaultdict(list)
for item in mylist:
for name in item[0].split(','):
d[name].append(item[1])
l = [[k, v] for k, v in d.items()]
for item in l:
print(item)
```
Prints:
```
['person1', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']]]
['person2', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6']]]
['person3', [['item1', 'item2', 'item3', 'item4', 'item5', 'item6'], ['item10', 'item11', 'item12']]]
['person4', [['item10', 'item11', 'item12']]]
['person5', [['item10', 'item11', 'item12']]]
``` |
45,566,799 | I am loading some data into Python from an external json file, which for the most part works fine. However occasionally some of the data contains escaped characters, which I need to retain.
### raw input file:
```
{
"version": 1,
"query": "occasionally I \"need\" to escape \"double\" quotes"
}
```
### loading it into python:
```
import json
with open('input_file', 'r') as f:
file = json.load(f)
```
Edit
----
Apologies, I should be clearer. What I am trying to do is something like the following:
```
'{}'.format(file['query'])
```
Using json.dumps
----------------
```
actual_query = '"datadog.agent.up".over("role:dns").by("host").last(1).count_by_status()'
json.dumps(actual_query)
'"\\"datadog.agent.up\\".over(\\"role:dns\\").by(\\"host\\").last(1).count_by_status()"'
``` | 2017/08/08 | [
"https://Stackoverflow.com/questions/45566799",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4666648/"
] | This is exactly what you should be expecting, and I'm not sure why it isn't what you want. Remember that `print` displays the contents of a string rather than its escaped source representation, e.g. `print('\"')` outputs `"`.
Using your example, you can see how you would get the escape characters back when outputting your results:
```
import json
a = r"""{
"version": 1,
"query": "occasionally I \"need\" to escape \"double\" quotes"
}"""
j = json.loads(a)
print j
print json.dumps(j)
```
which gives me:
```
{u'query': u'occasionally I "need" to escape "double" quotes', u'version': 1}
{"query": "occasionally I \"need\" to escape \"double\" quotes", "version": 1}
```
(if you'll excuse the python2)
---
In response to your edit:
`'{}'.format(file['query']) == file['query']` returns `True` - you're formatting a string object as a string. As I have suggested, using
```
json.dumps(file['query'])
```
returns
```
"occasionally I \"need\" to escape \"double\" quotes"
```
which by the way is the string:
```
'"occasionally I \\"need\\" to escape \\"double\\" quotes"'
```
this is the case also for your 'actual query':
```
query = '"\\"datadog.agent.up\\".over(\\"role:dns\\").by(\\"host\\").last(1).count_by_status()"'
```
gives
```
print json.dumps(query)
# "\"datadog.agent.up\".over(\"role:dns\").by(\"host\").last(1).count_by_status()"
with open('myfile.txt', 'w') as f:
f.write(json.dumps(query))
# file contents:
# "\"datadog.agent.up\".over(\"role:dns\").by(\"host\").last(1).count_by_status()"
```
Double `\\`:
See, this is why you need to be explicit about what you're actually trying to do.
A trick for doubling `\` is to wrap the value in a `repr()`:
eg:
```
print repr(json.dumps(query))[1:-1] # to remove the ' from the beginning and end
# "\\"datadog.agent.up\\".over(\\"role:dns\\").by(\\"host\\").last(1).count_by_status()"
with open('myfile.txt', 'w') as f:
f.write(repr(json.dumps(actual_query))[1:-1])
# file:
# "\\"datadog.agent.up\\".over(\\"role:dns\\").by(\\"host\\").last(1).count_by_status()"
```
you could also do a `.replace(r'\', r'\\')` on it | When I run your program, the json I get looks a little different.
You have single quotes around the second line in your output. I don't get that.
Anyway. While the single quotes solve the escape problem, it is not valid Json. Valid Json needs double quotes. Single quotes are just a string delimiter in Python.
Replace the last line in your code with
`print(json.dumps(file))`
And proper json is returned.
```
{
    "query": "occasionally I \"need\" to escape \"double\" quotes",
    "version": 1
}
```
Regards,
Melle |
72,047,357 | Can someone please help me with drawing random shapes in random colors using Python's turtle module? | 2022/04/28 | [
"https://Stackoverflow.com/questions/72047357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18978036/"
] | Sharing your attempt first is appreciated. Here is a simple program that chooses a random colour and shape from a list and draws it. You can add more colours in the list or more functions to draw different shapes.
Feel free to ask for clarifications in the comments below...
```
import random
color_list = ["red" , "green" , "blue"]
shape_list = ["square" , "circle" , "triangle"]
myTurtle = turtle.Turtle()
myTurtle.hideturtle()
myTurtle.color(random.choice(color_list))
def drawSquare():
for i in range(4):
myTurtle.forward(100)
myTurtle.left(90)
def drawCircle():
myTurtle.circle(100)
def drawTriangle():
for i in range(3):
myTurtle.forward(100)
myTurtle.left(120)
random_shape = random.choice(shape_list)
if random_shape == "square":
drawSquare()
elif random_shape == "circle":
drawCircle()
elif random_shape == "triangle":
drawTriangle()
``` | Here is some very straight-forward code for random moves and colors:
```
import turtle
import random
pat = turtle.Turtle()
turtle.Screen().bgcolor('black')
colors = ["red", "yellow", "blue"]
for i in range(500):
pat.color(random.choice(colors))
x = random.randint(1,4)
if x == 1:
pat.forward(30)
elif x == 2:
pat.back(30)
elif x == 3:
pat.right(30)
elif x == 4:
pat.left(30)
```
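For completeness, here is a compact sketch combining the two answers: a fresh random colour for every random shape. The shape names and sizes are my own choices:
```
import random
import turtle

t = turtle.Turtle()
colors = ["red", "yellow", "blue", "green"]

def square(size):
    for _ in range(4):
        t.forward(size)
        t.left(90)

def triangle(size):
    for _ in range(3):
        t.forward(size)
        t.left(120)

def circle(size):
    t.circle(size / 2)

shapes = [square, triangle, circle]

for _ in range(5):
    t.color(random.choice(colors))  # new random colour each time
    random.choice(shapes)(60)       # new random shape each time

turtle.done()
``` |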
63,311,273 | I'm having a world of issues performing a rolling join of two dataframes in PySpark (and Python in general). I am looking to join two PySpark dataframes together by their ID and the closest date backwards (meaning the date in the second dataframe cannot be greater than the one in the first).
Table\_1:
[![+-----+------------+-------+
| ID | Date | Value |
+-----+------------+-------+
| A1 | 01-15-2020 | 5 |
| A2 | 01-20-2020 | 10 |
| A3 | 02-21-2020 | 12 |
| A1 | 02-21-2020 | 6 |
+-----+------------+-------+](https://i.stack.imgur.com/qpmYp.png)](https://i.stack.imgur.com/qpmYp.png)
Table\_2:
[![enter image description here](https://i.stack.imgur.com/RvSrV.png)](https://i.stack.imgur.com/RvSrV.png)
Desired Result:
[![ID Date Value Value 2
A1 01-15-2020 5 5
A2 01-20-2020 10 12
A3 02-21-2020 12 14
A1 01-21-2020 6 3](https://i.stack.imgur.com/gD7zy.png)](https://i.stack.imgur.com/gD7zy.png)
In essence, I understand an SQL Query can do the trick where I can do spark.sql("query") So anything else. I've tried several things which aren't working in a spark context. Thanks! | 2020/08/08 | [
"https://Stackoverflow.com/questions/63311273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12886251/"
] | Here is my attempt.
First, I determine the `Date_2` which meets your condition. After that, I join the second dataframe again and get the `Value_2`:
```
from pyspark.sql.functions import monotonically_increasing_id, unix_timestamp, max
df3 = df1.withColumn('newId', monotonically_increasing_id()) \
.join(df2, 'ID', 'left') \
.where(unix_timestamp('Date', 'M/dd/yy') >= unix_timestamp('Date_2', 'M/dd/yy')) \
.groupBy(*df1.columns, 'newId') \
.agg(max('Date_2').alias('Date_2'))
df3.orderBy('newId').show(20, False)
+---+-------+-----+-----+-------+
|ID |Date |Value|newId|Date_2 |
+---+-------+-----+-----+-------+
|A1 |1/15/20|5 |0 |1/12/20|
|A2 |1/20/20|10 |1 |1/11/20|
|A3 |2/21/20|12 |2 |1/31/20|
|A1 |1/21/20|6 |3 |1/16/20|
+---+-------+-----+-----+-------+
df3.join(df2, ['ID', 'Date_2'], 'left') \
.orderBy('newId') \
.drop('Date_2', 'newId') \
.show(20, False)
+---+-------+-----+-------+
|ID |Date |Value|Value_2|
+---+-------+-----+-------+
|A1 |1/15/20|5 |5 |
|A2 |1/20/20|10 |12 |
|A3 |2/21/20|12 |14 |
|A1 |1/21/20|6 |3 |
+---+-------+-----+-------+
``` | I would prefer to solve this problem using Window.
You need to join both datasets using the id and date (>=); then you need to know how many days of difference there are, so you can use dense\_rank to keep only the closest date.
```
from pyspark.sql.functions import col, datediff, dense_rank
from pyspark.sql.window import Window
from datetime import date
df1 = (
spark
.createDataFrame(
[
("A1",date(2020, 1, 15), 5),
("A2",date(2020, 1, 20), 10),
("A3",date(2020, 2, 21), 12),
("A1",date(2020, 1, 21), 6),
],
["id_1","date_1","value_1"]
)
)
df2 = (
spark
.createDataFrame(
[
("A1",date(2020, 1, 10), 1),
("A1",date(2020, 1, 12), 5),
("A1",date(2020, 1, 16), 3),
("A2",date(2020, 1, 25), 20),
("A2",date(2020, 1, 1), 12),
("A3",date(2020, 1, 31), 14),
("A3",date(2020, 1, 30), 12)
],
["id_2","date_2","value_2"]
)
)
winSpec = Window.partitionBy("value_1").orderBy("date_difference")
df3 = (
df1
.join(df2, [df1.id_1==df2.id_2,df1.date_1>=df2.date_2])
.withColumn("date_difference", datediff("date_1","date_2"))
.withColumn("dr", dense_rank().over(winSpec))
.where("dr=1")
.select(
col("id_1").alias("id"),
col("date_1").alias("date"),
col("value_1"),
col("value_2")
)
)
+---+----------+-------+-------+
|id |date |value_1|value_2|
+---+----------+-------+-------+
|A1 |2020-01-21|6 |3 |
|A1 |2020-01-15|5 |5 |
|A2 |2020-01-20|10 |12 |
|A3 |2020-02-21|12 |14 |
+---+----------+-------+-------+
```
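One caveat worth noting: `partitionBy("value_1")` only behaves correctly if `value_1` is unique per row of the first dataframe. If duplicates are possible, a safer sketch partitions on a generated row id instead:
```
from pyspark.sql.functions import monotonically_increasing_id
from pyspark.sql.window import Window

# give every row of df1 a stable identity before the join
df1 = df1.withColumn("row_id", monotonically_increasing_id())
winSpec = Window.partitionBy("row_id").orderBy("date_difference")
# the join / dense_rank / filter steps stay the same, keyed on row_id
``` |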
63,311,273 | I'm having a world of issues performing a rolling join of two dataframes in PySpark (and Python in general). I am looking to join two PySpark dataframes together by their ID and the closest date backwards (meaning the date in the second dataframe cannot be greater than the one in the first).
Table\_1:
[![+-----+------------+-------+
| ID | Date | Value |
+-----+------------+-------+
| A1 | 01-15-2020 | 5 |
| A2 | 01-20-2020 | 10 |
| A3 | 02-21-2020 | 12 |
| A1 | 02-21-2020 | 6 |
+-----+------------+-------+](https://i.stack.imgur.com/qpmYp.png)](https://i.stack.imgur.com/qpmYp.png)
Table\_2:
[![enter image description here](https://i.stack.imgur.com/RvSrV.png)](https://i.stack.imgur.com/RvSrV.png)
Desired Result:
[![ID Date Value Value 2
A1 01-15-2020 5 5
A2 01-20-2020 10 12
A3 02-21-2020 12 14
A1 01-21-2020 6 3](https://i.stack.imgur.com/gD7zy.png)](https://i.stack.imgur.com/gD7zy.png)
In essence, I understand an SQL Query can do the trick where I can do spark.sql("query") So anything else. I've tried several things which aren't working in a spark context. Thanks! | 2020/08/08 | [
"https://Stackoverflow.com/questions/63311273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12886251/"
] | ```
df1=spark.createDataFrame([('A1','1/15/2020',5),
('A2','1/20/2020',10),
('A3','2/21/2020',12),
('A1','1/21/2020',6)],
['ID1','Date1','Value1'])
df2=spark.createDataFrame([('A1','1/10/2020',1),
('A1','1/12/2020',5),
('A1','1/16/2020',3),
('A2','1/25/2020',20),
('A2','1/1/2020',12),
('A3','1/31/2020',14),
('A3','1/30/2020',12)],['ID2','Date2','Value2'])
df2=df1.join(df2,df1.ID1==df2.ID2) \
.withColumn("distance",datediff(to_date(df1.Date1,'MM/dd/yyyy'),\
to_date(df2.Date2,'MM/dd/yyyy'))).filter("distance>0")
df2.groupBy(df2.ID1,df2.Date1,df2.Value1)\
.agg(min(df2.distance).alias('distance')).join(df2, ['ID1','Date1','distance'])\
.select(df2.ID1,df2.Date1,df2.Value1,df2.Value2).orderBy('ID1','Date1').show()
``` | I would prefer to solve this problem using Window.
You need to join both datasets using the id and date (>=); then you need to know how many days of difference there are, so you can use dense\_rank to keep only the closest date.
```
from pyspark.sql.functions import col, datediff, dense_rank
from pyspark.sql.window import Window
from datetime import date
df1 = (
spark
.createDataFrame(
[
("A1",date(2020, 1, 15), 5),
("A2",date(2020, 1, 20), 10),
("A3",date(2020, 2, 21), 12),
("A1",date(2020, 1, 21), 6),
],
["id_1","date_1","value_1"]
)
)
df2 = (
spark
.createDataFrame(
[
("A1",date(2020, 1, 10), 1),
("A1",date(2020, 1, 12), 5),
("A1",date(2020, 1, 16), 3),
("A2",date(2020, 1, 25), 20),
("A2",date(2020, 1, 1), 12),
("A3",date(2020, 1, 31), 14),
("A3",date(2020, 1, 30), 12)
],
["id_2","date_2","value_2"]
)
)
winSpec = Window.partitionBy("value_1").orderBy("date_difference")
df3 = (
df1
.join(df2, [df1.id_1==df2.id_2,df1.date_1>=df2.date_2])
.withColumn("date_difference", datediff("date_1","date_2"))
.withColumn("dr", dense_rank().over(winSpec))
.where("dr=1")
.select(
col("id_1").alias("id"),
col("date_1").alias("date"),
col("value_1"),
col("value_2")
)
)
+---+----------+-------+-------+
|id |date |value_1|value_2|
+---+----------+-------+-------+
|A1 |2020-01-21|6 |3 |
|A1 |2020-01-15|5 |5 |
|A2 |2020-01-20|10 |12 |
|A3 |2020-02-21|12 |14 |
+---+----------+-------+-------+
``` |
62,805,513 | I'm quite new to Python; the thing I'm trying to do is get data from a website and add part of the webpage to a pandas dataframe.
This is the code I have so far, but I'm getting an error when adding data to the DataFrame.
**The Code I got:**
```
url = 'https://oldschool.runescape.wiki/w/Module:Exchange/Anglerfish/Data'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
price_data = soup.find_all('span', class_='s1')
df = pd.DataFrame()
for data in price_data:
a = pd.DataFrame(data.text.split(":")[0],data.text.split(":")[1])
df.append(a)
print(df)
```
**The Error I'm Getting:**
```
ValueError Traceback (most recent call last)
<ipython-input-33-963d51917cf2> in <module>()
10
11 for data in price_data:
---> 12 a = pd.DataFrame(data.text.split(":")[0],data.text.split(":")[1])
13 df.append(a)
14
/usr/local/lib/python3.6/dist-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
507 )
508 else:
--> 509 raise ValueError("DataFrame constructor not properly called!")
510
511 NDFrame.__init__(self, mgr, fastpath=True)
ValueError: DataFrame constructor not properly called!
``` | 2020/07/09 | [
"https://Stackoverflow.com/questions/62805513",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2743956/"
] | It seems that the data structure you get from `data.text.split(":")[0],data.text.split(":")[1]` does not suit what the function `pd.DataFrame()` expects. First take a look at the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) of the function to fully understand what it expects and how to properly pass data to it. You can either pass a **dictionary** with the column names and the values (arrays must be of equal length, or an index should be specified), or **lists/arrays** as [YOBEN\_S](https://stackoverflow.com/users/7964527/yoben-s) proposed, for example:
```
a = pd.DataFrame({'Column_1': [data.text.split(":")[0]],   # wrap the scalars in lists
                  'Column_2': [data.text.split(":")[1]]})  # (or pass an index)
```
Since you are dealing with `html` data, you should try a different approach using `pandas.read_html()` which can be read [here](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.read_html.html) for more information | Fix your code by
```
pd.DataFrame([[data.text.split(":")[0],data.text.split(":")[1]]])
```
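One more detail from the original loop: `DataFrame.append` returns a new frame rather than mutating in place, so the result has to be assigned back. A minimal sketch (note that in pandas 2.x, where `append` was removed, `pd.concat` is the replacement):
```
import pandas as pd

df = pd.DataFrame(columns=['timestamp', 'price'])
for data in price_data:  # price_data as scraped above
    row = pd.DataFrame([data.text.split(":")], columns=['timestamp', 'price'])
    df = df.append(row, ignore_index=True)  # append is not in-place
``` |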
62,805,513 | I'm quite new to Python; the thing I'm trying to do is get data from a website and add part of the webpage to a pandas dataframe.
This is the code I have so far, but I'm getting an error when adding data to the DataFrame.
**The Code I got:**
```
url = 'https://oldschool.runescape.wiki/w/Module:Exchange/Anglerfish/Data'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
price_data = soup.find_all('span', class_='s1')
df = pd.DataFrame()
for data in price_data:
a = pd.DataFrame(data.text.split(":")[0],data.text.split(":")[1])
df.append(a)
print(df)
```
**The Error I'm Getting:**
```
ValueError Traceback (most recent call last)
<ipython-input-33-963d51917cf2> in <module>()
10
11 for data in price_data:
---> 12 a = pd.DataFrame(data.text.split(":")[0],data.text.split(":")[1])
13 df.append(a)
14
/usr/local/lib/python3.6/dist-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
507 )
508 else:
--> 509 raise ValueError("DataFrame constructor not properly called!")
510
511 NDFrame.__init__(self, mgr, fastpath=True)
ValueError: DataFrame constructor not properly called!
``` | 2020/07/09 | [
"https://Stackoverflow.com/questions/62805513",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2743956/"
] | I did some more research, the best way for me to do it was:
```
#get data from marketwatch
url = 'https://oldschool.runescape.wiki/w/Module:Exchange/Anglerfish/Data'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
price_data = soup.find_all('span', class_='s1')
df = pd.DataFrame(columns=['timestamp', 'price'])
for data in price_data:
df = df.append({'timestamp': data.text.split(":")[0], 'price': data.text.split(":")[1]}, ignore_index=True)
print(df)
``` | Fix your code by
```
pd.DataFrame([[data.text.split(":")[0],data.text.split(":")[1]]])
``` |
45,424,454 | I just started looking into Java's [CompletableFuture](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html) and I'm a little bit confused about whether this is truly asynchronous (i.e. running on one thread concurrently) or spans multiple threads (parallel).
For example, suppose I'd like to make 1000 different service calls. Suppose further that each service call can be made asynchronously. When using CompletableFuture, will the JVM make 1000 separate threads (assuming the JVM allows for this many threads), or execute all of these requests in one thread? Or is it doing a bit of both? Using some threads to execute those requests asynchronously?
What I want to do is something like this (in Python):
<https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html>
**is there a way to execute multiple requests/operations on the same thread in Java asynchronously?** | 2017/07/31 | [
"https://Stackoverflow.com/questions/45424454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2804747/"
] | As is explained in the [javadoc](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html)
>
> All async methods without an explicit Executor argument are performed
> using the ForkJoinPool.commonPool() (unless it does not support a
> parallelism level of at least two, in which case, a new Thread is
> created to run each task).
>
>
>
So a threadpool is used, either implicitly (unless you have a single core machine in which case the threads aren't pooled) or explicitly. In your case you would get to control the amount of threads used by using an explicit `Executor` (e.g. [ThreadPoolExecutor](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html)) with the amount of threads you want (most likely a lot less than 1000).
The calls cannot share a single thread (the calling thread), as Java doesn't have the [capability](https://stackoverflow.com/questions/16539245/java-equivalent-of-c-sharp-async-await) for what people these days understand by asynchronous due to the popular async/await paradigm (i.e. the fictional "truly asynchronous" term - synchronous vs. asynchronous is independent of threads, but asynchronicity can be implemented with threads, as it is done in `CompletableFuture`).
With the [HttpClient](https://www.baeldung.com/java-9-http-client) introduced in Java 11 (as incubator module in 9), it's possible to perform asynchronous requests with a threadpool using `CompletableFuture`s in the "standard" way. If you're looking to minimize the thread count, you'll have to switch to reactive programming and use something like [Spring's WebClient](https://www.baeldung.com/spring-5-webclient) for the requests. | If you schedule your computation without specifying a thread-pool, the `fork-join` common pool would be used, otherwise you can specify your own `Executor` to `supplyAsync` and choose a size that fits your need. |
45,424,454 | I just started looking into Java's [CompletableFuture](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html) and I'm a little bit confused about whether this is truly asynchronous (i.e. running on one thread concurrently) or spans multiple threads (parallel).
For example, suppose I'd like to make 1000 different service calls. Suppose further that each service call can be made asynchronously. When using CompletableFuture, will the JVM make 1000 separate threads (assuming the JVM allows for this many threads), or execute all of these requests in one thread? Or is it doing a bit of both? Using some threads to execute those requests asynchronously?
What I want to do is something like this (in Python):
<https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html>
**is there a way to execute multiple requests/operations on the same thread in Java asynchronously?** | 2017/07/31 | [
"https://Stackoverflow.com/questions/45424454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2804747/"
] | As is explained in the [javadoc](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html)
>
> All async methods without an explicit Executor argument are performed
> using the ForkJoinPool.commonPool() (unless it does not support a
> parallelism level of at least two, in which case, a new Thread is
> created to run each task).
>
>
>
So a threadpool is used, either implicitly (unless you have a single core machine in which case the threads aren't pooled) or explicitly. In your case you would get to control the amount of threads used by using an explicit `Executor` (e.g. [ThreadPoolExecutor](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html)) with the amount of threads you want (most likely a lot less than 1000).
The calls cannot share a single thread (the calling thread), as Java doesn't have the [capability](https://stackoverflow.com/questions/16539245/java-equivalent-of-c-sharp-async-await) for what people these days understand by asynchronous due to the popular async/await paradigm (i.e. the fictional "truly asynchronous" term - synchronous vs. asynchronous is independent of threads, but asynchronicity can be implemented with threads, as it is done in `CompletableFuture`).
With the [HttpClient](https://www.baeldung.com/java-9-http-client) introduced in Java 11 (as incubator module in 9), it's possible to perform asynchronous requests with a threadpool using `CompletableFuture`s in the "standard" way. If you're looking to minimize the thread count, you'll have to switch to reactive programming and use something like [Spring's WebClient](https://www.baeldung.com/spring-5-webclient) for the requests. | Try to use `CompletableFuture.thenCompose()`
to run multiple futures sequentially in an async manner. |
45,424,454 | I just started looking into Java's [CompletableFuture](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html) and I'm a little bit confused about whether this is truly asynchronous (i.e. running on one thread concurrently) or spans multiple threads (parallel).
For example, suppose I'd like to make 1000 different service calls. Suppose further that each service call can be made asynchronously. When using CompletableFuture, will the JVM make 1000 separate threads (assuming the JVM allows for this many threads), or execute all of these requests in one thread? Or is it doing a bit of both? Using some threads to execute those requests asynchronously?
What I want to do is something like this (in Python):
<https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html>
**is there a way to execute multiple requests/operations on the same thread in Java asynchronously?** | 2017/07/31 | [
"https://Stackoverflow.com/questions/45424454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2804747/"
] | If you schedule your computation without specifying a thread-pool, the `fork-join` common pool would be used, otherwise you can specify your own `Executor` to `supplyAsync` and choose a size that fits your need. | Try to use `CompletableFuture.thenCompose()`
to run multiple futures sequentially in an async manner. |
42,894,950 | I currently have an HTML file and a python file. The python file uses YELP's API and returns JSON data. How do I display that data onto my webpage through HTML? Is there a function like `document.getElementById("id").innerHTML = JSONDATA` from JavaScript?
Please let me know if you need any more details; this is my first time posting and my first time using an API/making a website. I understand the JSON data is not going to look nice, but I will put it into a dictionary and sort it later; right now I am just wondering how to display data from a Python file in an HTML file. Also, feel free to link any helpful tutorials.
I found the following Node.js code, as it was suggested to use JavaScript instead. Where in this would I put my tokens/secrets? And then how would I call it from my HTML file? Thank you.
```
/* require the modules needed */
var oauthSignature = require('oauth-signature');
var n = require('nonce')();
var request = require('request');
var qs = require('querystring');
var _ = require('lodash');
/* Function for yelp call
* ------------------------
* set_parameters: object with params to search
* callback: callback(error, response, body)
*/
var request_yelp = function(set_parameters, callback) {
/* The type of request */
var httpMethod = 'GET';
/* The url we are using for the request */
var url = 'http://api.yelp.com/v2/search';
/* We can setup default parameters here */
var default_parameters = {
location: 'San+Francisco',
sort: '2'
};
/* We set the require parameters here */
var required_parameters = {
oauth_consumer_key : process.env.oauth_consumer_key,
oauth_token : process.env.oauth_token,
oauth_nonce : n(),
oauth_timestamp : n().toString().substr(0,10),
oauth_signature_method : 'HMAC-SHA1',
oauth_version : '1.0'
};
/* We combine all the parameters in order of importance */
var parameters = _.assign(default_parameters, set_parameters, required_parameters);
/* We set our secrets here */
var consumerSecret = process.env.consumerSecret;
var tokenSecret = process.env.tokenSecret;
/* Then we call Yelp's Oauth 1.0a server, and it returns a signature */
/* Note: This signature is only good for 300 seconds after the oauth_timestamp */
var signature = oauthSignature.generate(httpMethod, url, parameters, consumerSecret, tokenSecret, { encodeSignature: false});
/* We add the signature to the list of paramters */
parameters.oauth_signature = signature;
/* Then we turn the paramters object, to a query string */
var paramURL = qs.stringify(parameters);
/* Add the query string to the url */
var apiURL = url+'?'+paramURL;
/* Then we use request to send make the API Request */
request(apiURL, function(error, response, body){
return callback(error, response, body);
});
};
``` | 2017/03/20 | [
"https://Stackoverflow.com/questions/42894950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7737479/"
] | I had a similar situation: I had to show the IAM users of an AWS account in an HTML page. I used the AWS boto3 Python client to grab all IAM users and write a JSON file. Then, from the HTML file, I read that JSON file and showed all users in a table.
**Here is the Python code `IAM.PY`:**
```
import boto3
import os
import subprocess
import json
iam_client = boto3.client('iam')  # note: unused below; the listing shells out to the AWS CLI instead
def list_user_cli():
list_cmd = "aws iam list-users"
output = subprocess.check_output(list_cmd, shell = True)
output = str(output.decode('ascii'))
return output
def write_json_file(filename, data):
try:
with open(filename, "w") as f:
f.writelines(data)
print(filename + " has been created.")
except Exception as e:
print(str(e))
if __name__ == "__main__":
filename = "iam.json"
data = list_user_cli()
write_json_file(filename, data)
```
**Here is the HTML file `IAM.HTML`:**
```
<!DOCTYPE html>
<html>
<head>
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<title>IAM User List</title>
<style type="text/css">
body{
margin: 20px;
}
</style>
</head>
<body>
<div class="container">
<table class="table table-responsive table-hover table-bordered">
<thead>
<tr>
<th>User ID</th>
<th>User Name</th>
<th>Path</th>
<th>Create Date</th>
<th>Arn</th>
</tr>
</thead>
<tbody id="iam_tbody">
</tbody>
</table>
</div>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.0/jquery.min.js"></script>
<script type="text/javascript">
$(document).ready(function(){
$.ajax({
method: "GET",
url: "http://localhost/iam/iam.json",
}).done(function(response){
user_list = response.Users;
for(i = 0; i<user_list.length; i++){
tr = "<tr>";
tr += "<td>";
tr += user_list[i]["UserId"];
tr += "</td>";
tr += "<td>";
tr += user_list[i]["UserName"];
tr += "</td>";
tr += "<td>";
tr += user_list[i]["Path"];
tr += "</td>";
tr += "<td>";
tr += user_list[i]["CreateDate"];
tr += "</td>";
tr += "<td>";
tr += user_list[i]["Arn"];
tr += "</td>";
tr += "<tr>";
$("#iam_tbody").append(tr);
}
});
});
</script>
</body>
</html>
```
**Output**
[![IAM DEMO OUTPUT](https://i.stack.imgur.com/GPKPq.jpg)](https://i.stack.imgur.com/GPKPq.jpg) | You can use Jquery Ajax to call your API, include [**Jquery**](https://code.jquery.com/) in your html file.
```
$.ajax({
method: "GET",
url: "api_url",
}).done(function( response ) {
$('#divId').append(response);
});
```
In Your Html File
```
<div id="divId"></div>
```
[**Jquery Ajax Documentation**](http://api.jquery.com/jquery.ajax/) |
37,891,954 | I'm playing with the reuters-example dataset and it runs fine (my model is trained). I read about how to save a model, so I could load it later to use again. But how do I use this saved model to predict a new text? Do I use `models.predict()`?
Do I have to prepare this text in a special way?
I tried it with
```
import keras.preprocessing.text
text = np.array(['this is just some random, stupid text'])
print(text.shape)
tk = keras.preprocessing.text.Tokenizer(
nb_words=2000,
filters=keras.preprocessing.text.base_filter(),
lower=True,
split=" ")
tk.fit_on_texts(text)
pred = tk.texts_to_sequences(text)
print(pred)
model.predict(pred)
```
But I always get
```
(1L,)
[[2, 4, 1, 6, 5, 7, 3]]
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-83-42d744d811fb> in <module>()
7 print(pred)
8
----> 9 model.predict(pred)
C:\Users\bkey\Anaconda2\lib\site-packages\keras\models.pyc in predict(self, x, batch_size, verbose)
457 if self.model is None:
458 self.build()
--> 459 return self.model.predict(x, batch_size=batch_size, verbose=verbose)
460
461 def predict_on_batch(self, x):
C:\Users\bkey\Anaconda2\lib\site-packages\keras\engine\training.pyc in predict(self, x, batch_size, verbose)
1132 x = standardize_input_data(x, self.input_names,
1133 self.internal_input_shapes,
-> 1134 check_batch_dim=False)
1135 if self.stateful:
1136 if x[0].shape[0] > batch_size and x[0].shape[0] % batch_size != 0:
C:\Users\bkey\Anaconda2\lib\site-packages\keras\engine\training.pyc in standardize_input_data(data, names, shapes, check_batch_dim, exception_prefix)
79 for i in range(len(names)):
80 array = arrays[i]
---> 81 if len(array.shape) == 1:
82 array = np.expand_dims(array, 1)
83 arrays[i] = array
AttributeError: 'list' object has no attribute 'shape'
```
Do you have any recommendations as to how to make predictions with a trained model? | 2016/06/18 | [
"https://Stackoverflow.com/questions/37891954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5931447/"
] | You can just "call" your model with an array of the correct shape:
```
model(np.array([[6.7, 3.3, 5.7, 2.5]]))
```
Full example:
```
from sklearn.datasets import load_iris
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
import numpy as np
X, y = load_iris(return_X_y=True)
model = Sequential([
Dense(16, activation='relu'),
Dense(32, activation='relu'),
Dense(1)])
model.compile(loss='mean_absolute_error', optimizer='adam')
history = model.fit(X, y, epochs=10, verbose=0)
print(model(np.array([[6.7, 3.3, 5.7, 2.5]])))
```
```
<tf.Tensor: shape=(1, 1), dtype=float64, numpy=array([[1.92517677]])>
``` | I trained a neural network in Keras to perform non-linear regression on some data. This is part of my code for testing on new data, using a previously saved model configuration and weights.
```
import joblib
import numpy as np
from keras.models import Sequential
from sklearn.preprocessing import StandardScaler

fname = r"C:\Users\tauseef\Desktop\keras\tutorials\BestWeights.hdf5"
modelConfig = joblib.load('modelConfig.pkl')
recreatedModel = Sequential.from_config(modelConfig)
recreatedModel.load_weights(fname)
unseenTestData = np.genfromtxt(r"C:\Users\tauseef\Desktop\keras\arrayOf100Rows257Columns.txt",delimiter=" ")
X_test = unseenTestData
standard_scalerX = StandardScaler()  # note: ideally reuse the scaler fitted on the training data
standard_scalerX.fit(X_test)
X_test_std = standard_scalerX.transform(X_test)
X_test_std = X_test_std.astype('float32')
unseenData_predictions = recreatedModel.predict(X_test_std)
```
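Coming back to the original traceback: `model.predict` expects a NumPy array, not a plain Python list, and the new text must be encoded with the same tokenizer that was fitted on the training corpus, then padded to the model's input length. A minimal sketch, where `tk` is assumed to be the training-time tokenizer and `maxlen` the input length used in training:
```
from keras.preprocessing.sequence import pad_sequences

# tk must be the tokenizer fitted on the *training* corpus, not on the new text
seq = tk.texts_to_sequences(['this is just some random, stupid text'])
x = pad_sequences(seq, maxlen=maxlen)  # pad/truncate to the training input length
pred = model.predict(x)                # pad_sequences returns a 2-D numpy array
print(pred)
``` |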