qid | question | date | metadata | response_j | response_k | __index_level_0__
---|---|---|---|---|---|---|
70,168,761 | I am trying to click on a checkbox. Below is the HTML code:
```
<div class="mb-1 p-3 termsCheck">
<input class="form-check-input float-end" type="checkbox" value="" id="flexCheckDefault" required=""> <label class="form-check-label float-end" for="flexCheckDefault"><span>
Agree to Terms & Conditions </span> / <span> ุฃูุงูู ุนูู ุงูุดุฑูุท ูุงูุฃุญูุงู
</span> </label>
</div>
```
I am using the below code to click on it.
```
check = driver.find_element(By.CSS_SELECTOR, '#flexCheckDefault')
check.click()
```
I am getting this error
```
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element is not clickable at point (477, 1222)
```
full error:
```
driver.find_element(By.XPATH, "//label[@for='flexCheckDefault']").click()
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py", line 81, in click
self._execute(Command.CLICK_ELEMENT)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py", line 710, in _execute
return self._parent.execute(command, params)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 424, in execute
self.error_handler.check_response(response)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 247, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element is not clickable at point (292, 1317)
(Session info: chrome=91.0.4472.101)
Stacktrace:
#0 0x556910005919 <unknown
```
Can someone please help me with this?
When I use the below code, I get this error:
```
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//label[@for='flexCheckDefault']"))).click()
```
Error
```
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//label[@for='flexCheckDefault']"))).click()
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py", line 81, in click
self._execute(Command.CLICK_ELEMENT)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py", line 710, in _execute
return self._parent.execute(command, params)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 424, in execute
self.error_handler.check_response(response)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 247, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element is not clickable at point (292, 467)
(Session info: chrome=91.0.4472.101)
Stacktrace:
#0 0x563f45805919 <unknown>
``` | 2021/11/30 | [
"https://Stackoverflow.com/questions/70168761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13572791/"
] | `max_func` should return a function that takes an argument (`x`), applies it to `f` and `g` and then return the maximal value:
```
def max_func(f, g):
def mf(x):
return max(f(x), g(x))
return mf
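# e.g. max_func(lambda x: x, lambda x: x * x)(3) returns max(3, 9) == 9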
``` | In your code, you are only calling `f(x)` in both the if and else statements.
You can try:
```
def max_func(f, g):
    def new_function(x: int) -> int:
        if f(x) > g(x):
            return f(x)
        else:
            return g(x)
    return new_function
``` | 3,507 |
41,950,021 | I'm learning Python and working on exercises. One of them is to code a voting system to select the best player among the 23 players of a match, using lists.
I'm using `Python3`.
My code:
```
players= [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
vote = 0
cont = 0
while(vote >= 0 and vote <23):
vote = input('Enter the name of the player you wish to vote for')
if (0 < vote <=24):
players[vote +1] += 1;cont +=1
else:
print('Invalid vote, try again')
```
I get
>
> TypeError: '<=' not supported between instances of 'str' and 'int'
>
>
>
But I don't have any strings here, all variables are integers. | 2017/01/31 | [
"https://Stackoverflow.com/questions/41950021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7493136/"
] | Change
```
vote = input('Enter the name of the player you wish to vote for')
```
to
```
vote = int(input('Enter the name of the player you wish to vote for'))
```
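A quick illustration of the failure and the fix (an illustrative interpreter session):
```
>>> vote = '5'            # input() always returns a str in Python 3
>>> 0 < vote <= 24
TypeError: '<' not supported between instances of 'int' and 'str'
>>> 0 < int(vote) <= 24
True
```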
You are getting the input from the console as a string, so you must cast that input string to an `int` object in order to do numerical operations. | When you use the `input` function, it automatically returns a string. You need to go:
```
vote = int(input('Enter the name of the player you wish to vote for'))
```
which turns the input into a int type value | 3,510 |
41,908,655 | I need to create a dataframe containing tuples from a series of dataframe arrays. What I need is the following:
I have dataframes `a` and `b`:
```
a = pd.DataFrame(np.array([[1, 2],[3, 4]]), columns=['one', 'two'])
b = pd.DataFrame(np.array([[5, 6],[7, 8]]), columns=['one', 'two'])
a:
one two
0 1 2
1 3 4
b:
one two
0 5 6
1 7 8
```
I want to create a dataframe `a_b` in which each element is a tuple formed from the corresponding elements in a and b, i.e.
```
a_b = pd.DataFrame([[(1, 5), (2, 6)],[(3, 7), (4, 8)]], columns=['one', 'two'])
a_b:
one two
0 (1, 5) (2, 6)
1 (3, 7) (4, 8)
```
Ideally I would like to do this with an arbitrary number of dataframes.
I was hoping there was a more elegant way than using a for loop.
I'm using Python 3. | 2017/01/28 | [
"https://Stackoverflow.com/questions/41908655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6316272/"
] | You can use `numpy.rec.fromarrays((a.values, b.values)).tolist()`:
```
In [34]: pd.DataFrame(np.rec.fromarrays((a.values, b.values)).tolist(),
columns=a.columns,
index=a.index)
Out[34]:
one two
0 (1, 5) (2, 6)
1 (3, 7) (4, 8)
```
merging three DF's:
```
In [36]: pd.DataFrame(np.rec.fromarrays((a.values, b.values, a.values)).tolist(),
columns=a.columns,
index=a.index)
Out[36]:
one two
0 (1, 5, 1) (2, 6, 2)
1 (3, 7, 3) (4, 8, 4)
```
**UPDATE:**
>
> suppose you don't know in advance the number of dataframes, how would
> you do?
>
>
>
```
In [60]: dfs = [a,b,a]
In [62]: tuple_of_dfs = (x.values for x in dfs)
In [63]: pd.DataFrame(np.rec.fromarrays(tuple_of_dfs).tolist(), columns=a.columns, index=a.index)
Out[63]:
one two
0 (1, 5, 1) (2, 6, 2)
1 (3, 7, 3) (4, 8, 4)
``` | You could use `zip` over the columns of `a` and `b` (on Python 3, wrap it in `list`, since `zip` returns an iterator):
```
In [31]: pd.DataFrame({x: list(zip(a[x], b[x])) for x in a.columns})
Out[31]:
one two
0 (1, 5) (2, 6)
1 (3, 7) (4, 8)
``` | 3,515 |
68,767,995 | I have a string like this:
```
txt = ''' lea_po () {
val : 96.9;
wh : "CP D ";
related : DD;
leak () {
va : 0.008;
when : " SI";
in : V;
}
**stagestat (" E I P", "I 2 ") {
data : " H H/L - R : - - - : L - - , \
- : - - - : L - - , \
- - - R : - - : - - , \
- - - : - - : - - L, \
- - - ~R : - - - : N P N ";
}**
...
'''
```
I would like to only extract the bold part, which starts with `stagestat` and ends with the curly bracket `"}"`.
I tried to use a Python regex to search for it, but it fails.
```
re.search("^stagestat.*}$",txt)
```
Can someone give some comments on this? | 2021/08/13 | [
"https://Stackoverflow.com/questions/68767995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13699970/"
] | * Make sure you have `Unicode True` at the start of your script.
* Make sure you save your script as UTF-8 BOM or UTF-16LE BOM.
* !include nsDialogs before commctrl.nsh
* Make sure you are using the Unicode version of all plug-ins.
Characters that "look Chinese" are a symptom of treating ASCII as UTF-16. Strings cut off after the first character are a symptom of treating UTF-16 as ASCII (not a Unicode plug-in?).
This works for me:
```
Unicode True
RequestExecutionLevel user
!include nsDialogs.nsh
!define _COMMCTRL_NSH_VERBOSE 3
!include commctrl.nsh
Section
SectionEnd
Page custom nsDialogsPage
Var lv
Function msgboxtext
Pop $0
SendMessage $lv ${LVM_GETSELECTIONMARK} 0 0 $8
StrCpy $9 1
${If} $8 = -1
Return
${EndIf}
System::Call '*(&t${NSIS_MAX_STRLEN})p.r3'
System::Call "*(i, i, i, i, i, p, i, i, i) p (0, 0, $9, 0, 0, r3, ${NSIS_MAX_STRLEN}) .r1"
System::Call "User32::SendMessage(p, i, p, p) p ($lv, ${LVM_GETITEMTEXT}, $8, r1)"
System::Call "*$3(&t${NSIS_MAX_STRLEN} .r4)"
System::Free $1
System::Free $3
MessageBox mb_ok $4
FunctionEnd
Function nsDialogsPage
nsDialogs::Create 1018
Pop $0
${NSD_CreateButton} 0 0 100% 12u "Get text"
Pop $9
${NSD_OnClick} $9 msgboxtext
nsDialogs::CreateControl /NOUNLOAD ${WC_LISTVIEW} ${__NSD_ListView_STYLE}|${WS_TABSTOP} ${__NSD_ListView_EXSTYLE} 0 13u 100% -13u ""
Pop $lv
${NSD_LV_InsertColumn} $lv 0 100 "column 0"
${NSD_LV_InsertColumn} $lv 1 70 "์๋๋ฌธ"
${NSD_LV_InsertItem} $lv 0 '์๋๋ฌธ'
${NSD_LV_SetItemText} $lv 0 1 '์๋๋ฌธ ์๋๋ฌธ'
${NSD_LV_InsertItem} $lv 1 '2 ์๋๋ฌธ'
${NSD_LV_SetItemText} $lv 1 1 '2.2 ์๋๋ฌธ ์๋๋ฌธ'
nsDialogs::Show
FunctionEnd
] | Have you tried adding the command `Unicode True` at the top of the script? | 3,516 |
47,344,625 | Content of my Python file:
```
class myclass(object):
def __init__(self):
pass
def myfun(self):
pass
print ("hello world")
```
Output on executing file
```
hello world
```
Query
```
Since I did not create an object of the class, how is it still able to print "hello world"?
``` | 2017/11/17 | [
"https://Stackoverflow.com/questions/47344625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3812837/"
] | The class body executes at class definition time, and that's how the language is designed.
From section [9.3.1 Class Definition](https://docs.python.org/3/tutorial/classes.html#class-definition-syntax) syntax:
>
> In practice, the statements inside a class definition will usually be function definitions, but other statements are allowed, and sometimes useful.
>
>
>
That is simply how the execution model works in Python, so there's not much more to say about it.
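A minimal demonstration that statements in a class body run at class definition time, with no instance ever created:
```
class Demo(object):
    print("class body runs now")   # executes as soon as the class statement is evaluated

# Running this module prints:
# class body runs now
```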
>
> as per my understanding...anything class can not run until we call it by creating a object
>
>
>
Simply a misunderstanding. This applies for `def`, i.e. function blocks, but not for class blocks. | It will get called; that is how Python works.
Your code will always produce the output
**hello world**
```
class myclass(object):
def __init__(self):
pass
def myfun(self):
print("hello world")
pass
```
If you want to avoid it, you have to put the print statement inside the method. | 3,517 |
54,005,909 | I am attempting to create a contract bridge match point scoring system. In the list below the 1st, 3rd, etc. numbers are the pair numbers (players) and the 2nd, 4th etc. numbers are the scores achieved by each pair. So pair 2 scored 430, pair 3 scored 420 and so on.
I want to loop through the list and score as follows:
for each other pair's score that a pair beats, it receives 2 points; for each tie, 1 point; and for each score it doesn't beat, 0 points. The loop then continues and compares each pair's score in the same way. In the example below, pair 2 gets 7 points (beating 3 other pairs and tying with 1), pair 7 gets 0 points, and pair 6 gets 12 points, beating every other pair.
My list (generated from an elasticsearch json object) is:
```
['2', '430', '3', '420', '4', '460', '5', '400', '7', '0', '1', '430', '6', '480']
```
The python code I have tried (after multiple variations) is:
```
nsp_mp = 0
ewp_mp = 0
ns_list = []
for row in arr["hits"]["hits"]:
nsp = row["_source"]["nsp"]
nsscore = row["_source"]["nsscore"]
ns_list.append(nsp)
ns_list.append(nsscore)
print(ns_list)
x = ns_list[1]
for i in range(6): #number of competing pairs
if x > ns_list[1::2][i]:
nsp_mp = nsp_mp + 2
elif x == ns_list[1::2][i]:
nsp_mp = nsp_mp
else:
nsp_mp = nsp_mp + 1
print(nsp_mp)
```
which produces:
```
['2', '430', '3', '420', '4', '460', '5', '400', '7', '0', '1', '430', '6', '480']
7
```
which as per calculation above is correct. But when I try to execute a loop it does not return the correct results.
Maybe the approach is wrong. What is the correct way to do this?
The elasticsearch json object is:
```
arr = {'took': 0, 'timed_out': False, '_shards': {'total': 5, 'successful': 5, 'skipped': 0, 'failed': 0}, 'hits': {'total': 7, 'max_score': 1.0, 'hits': [{'_index': 'match', '_type': 'score', '_id': 'L_L122cBjpp4O0gQG0qd', '_score': 1.0, '_source': {'tournament_id': 1, 'board_number': '1', 'nsp': '2', 'ewp': '9', 'contract': '3NT', 'by': 'S', 'tricks': '10', 'nsscore': '430', 'ewscore': '0', 'timestamp': '2018-12-23T16:45:32.896151'}}, {'_index': 'match', '_type': 'score', '_id': 'MPL122cBjpp4O0gQHEog', '_score': 1.0, '_source': {'tournament_id': 1, 'board_number': '1', 'nsp': '3', 'ewp': '10', 'contract': '4S', 'by': 'N', 'tricks': '10', 'nsscore': '420', 'ewscore': '0', 'timestamp': '2018-12-23T16:45:33.027631'}}, {'_index': 'match', '_type': 'score', '_id': 'MfL122cBjpp4O0gQHEqk', '_score': 1.0, '_source': {'tournament_id': 1, 'board_number': '1', 'nsp': '4', 'ewp': '11', 'contract': '3NT', 'by': 'N', 'tricks': '11', 'nsscore': '460', 'ewscore': '0', 'timestamp': '2018-12-23T16:45:33.158060'}}, {'_index': 'match', '_type': 'score', '_id': 'MvL122cBjpp4O0gQHUoj', '_score': 1.0, '_source': {'tournament_id': 1, 'board_number': '1', 'nsp': '5', 'ewp': '12', 'contract': '3NT', 'by': 'S', 'tricks': '10', 'nsscore': '400', 'ewscore': '0', 'timestamp': '2018-12-23T16:45:33.285460'}}, {'_index': 'match', '_type': 'score', '_id': 'NPL122cBjpp4O0gQHkof', '_score': 1.0, '_source': {'tournament_id': 1, 'board_number': '1', 'nsp': '7', 'ewp': '14', 'contract': '3NT', 'by': 'S', 'tricks': '8', 'nsscore': '0', 'ewscore': '50', 'timestamp': '2018-12-23T16:45:33.538710'}}, {'_index': 'match', '_type': 'score', '_id': 'LvL122cBjpp4O0gQGkqt', '_score': 1.0, '_source': {'tournament_id': 1, 'board_number': '1', 'nsp': '1', 'ewp': '8', 'contract': '3NT', 'by': 'N', 'tricks': '10', 'nsscore': '430', 'ewscore': '0', 'timestamp': '2018-12-23T16:45:32.405998'}}, {'_index': 'match', '_type': 'score', '_id': 'M_L122cBjpp4O0gQHUqg', '_score': 1.0, '_source': {'tournament_id': 1, 'board_number': '1', 'nsp': '6', 'ewp': '13', 'contract': '4S', 'by': 'S', 'tricks': '11', 'nsscore': '480', 'ewscore': '0', 'timestamp': '2018-12-23T16:45:33.411104'}}]}}
``` | 2019/01/02 | [
"https://Stackoverflow.com/questions/54005909",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1903663/"
] | A list appears to be a poor data structure for this; I think you are making everything worse by flattening your elasticsearch object.
>
> Note there are a few minor mistakes in listings below - to make sure
> I'm not solving someone's homework for free. I also realize this is
> not the most efficient way of doing so.
>
>
>
Try with dicts:
1) convert elasticsearch json you have to a dict with a better structure:
```
scores = {}
for row in arr["hits"]["hits"]:
nsp = row["_source"]["nsp"]
nsscore = row["_source"]["nsscore"]
scores[nsp] = nsscore
```
This will give you something like this:
```
{'1': '430',
'2': '430',
'3': '420',
'4': '460',
'5': '400',
'6': '480',
'7': '0'}
```
2) write a function to calculate pair score:
```
def calculate_score(pair, scores):
score = 0
for p in scores:
if p == pair:
continue
if scores[p] < scores[pair]:
score += 2 # win
elif scores[p] == scores[pair]:
score += 1
return score
```
This should give you something like this:
```
In [13]: calculate_score('1', scores)
Out[13]: 7
In [14]: calculate_score('7', scores)
Out[14]: 0
```
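Putting the pieces together for the third step (a possible sketch; it reuses the `scores` dict and `calculate_score` from above):
```
all_points = {pair: calculate_score(pair, scores) for pair in scores}
# e.g. {'1': 7, '2': 7, '3': 4, '4': 10, '5': 2, '6': 12, '7': 0}
```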
3) Loop over all pairs, calculating scores, as sketched above. | The main problem with your code is that the loop is one iteration short: you have 7 entries. You should also convert the numbers to `int` so that the comparison is correct. In your code, ties get 0 points.
Instead of having a list with flattened pairs, you should use tuple pairs.
```
ns_list = []
for row in arr["hits"]["hits"]:
nsp = int(row["_source"]["nsp"])
nsscore = int(row["_source"]["nsscore"])
ns_list.append((nsp, nsscore))
print(ns_list)
x = ns_list[0][1]
nsp_mp = 0
for nsp, nsscore in ns_list[1:]:  # skip the first pair itself when scoring it
if x > nsscore:
nsp_mp += 2
elif x == nsscore:
nsp_mp += 1
print(nsp_mp)
``` | 3,518 |
50,159,438 | I tried reproducing some of the examples described [here](https://www.boost.org/doc/libs/1_63_0/libs/python/doc/html/numpy/tutorial/ndarray.html), but I experience the following problem with the code below, which was written by just copy-pasting relevant parts of the linked page.
```
#include <boost/python.hpp>
#include <boost/python/numpy.hpp>
#include <iostream>
using namespace std;
namespace p = boost::python;
namespace np = boost::python::numpy;
np::ndarray test()
{
int data[] = {1,2,3,4,5};
p::tuple shape = p::make_tuple(5);
p::tuple stride = p::make_tuple(sizeof(int));
p::object own;
np::dtype dt = np::dtype::get_builtin<int>();
np::ndarray array = np::from_data(data, dt, shape,stride,own);
std::cout << "Selective multidimensional array :: "<<std::endl
<< p::extract<char const *>(p::str(array)) << std::endl ;
return array;
}
BOOST_PYTHON_MODULE(test_module)
{
using namespace boost::python;
// Initialize numpy
Py_Initialize();
boost::python::numpy::initialize();
def("test", test);
}
```
When I compile as a shared library and load the module in python,
```
import test_module as test
print(test.test())
```
it seems that the `ndarray` gets created properly by the C++ code, but the version that python receives is rubbish; the arrays that get printed are:
```
[1 2 3 4 5]
[2121031184 32554 2130927769 32554 0]
```
What could be the cause of such a difference? | 2018/05/03 | [
"https://Stackoverflow.com/questions/50159438",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7141288/"
] | This week I had the same issue. To solve my problem I used dynamic memory:
```
np::ndarray test(){
int *data = (int*)malloc(sizeof(int) * 5);  // cast needed in C++
for (int i=0; i < 5; ++i){
data[i] = i + 1;
}
p::tuple shape = p::make_tuple(5);
p::tuple stride = p::make_tuple(sizeof(int));
p::object own;
np::dtype dt = np::dtype::get_builtin<int>();
np::ndarray array = np::from_data(data, dt, shape, stride, own);
return array;
}
```
I think the difference according to this answer: <https://stackoverflow.com/a/36322044/4637693> is:
>
> The difference between declaring an array as
>
>
>
> ```
> int array[n];
>
> ```
>
> and
>
>
>
> ```
> int* array = malloc(n * sizeof(int));
>
> ```
>
> In the first version, you are declaring an object with automatic storage duration. This means that the array lives only as long as the function that calls it exists. In the second version, you are getting memory with dynamic storage duration, which means that it will exist until it is explicitly deallocated with free.
>
>
>
I will take more time in the next weeks to see if this works for a matrix too.
**EDIT**
Or you can use a dynamic structure from boost like list:
```
np::ndarray test(){
boost::python::list my_list;
for (int i=0; i < 5; ++i){
my_list.append(i + 1);
}
np::ndarray array = np::from_object(my_list);
return array;
}
```
This work also for a Matrix for example:
```
np::ndarray test(){
//This will use a list of tuples
boost::python::list my_list;
for (int i=0; i < 5; ++i){
my_list.append(boost::python::make_tuple(i + 1, i, i-1));
}
//Just convert the list to a NumPy array.
np::ndarray array = np::from_object(my_list);
return array;
}
```
I assume (for the moment) that by using the boost functions you will be able to avoid the memory conflicts. | Creating a new reference to the array before returning it solved the problem. The good news is that `np::ndarray` has a [`copy()`](https://github.com/boostorg/python/blob/4f6d547c0af4c400dc5d059ccd847426ff21852f/src/numpy/ndarray.cpp#L177) method that achieves exactly the same thing. Thus, you should add
```
np::ndarray new_array = array.copy();
```
before the return statement | 3,521 |
70,042,756 | I'm running Python code to calculate the distance between certain coordinates. The original data looks like:
```
a = np.array([[1,40,70],[2,41,71],[3,42,73]]) #id, latitude, longitude
```
and I want to get the distance between every pair; the result should look like:
```
[1, 2, 100(km)]
[1, 3, 200(km)].
[2, 1, 100(km)]
[2, 3, 300(km)]
[3, 1, 200(km)]
[3, 2, 300(km)]
```
*The result should contain pair (m,n) and pair (n,m).*
The actual data has 39,000 columns, so I have a great demand for code efficiency. Currently I'm using a double loop, which is really stupid:
```
line = 0
result = np.zeros((6,3))
for i in a:
    for j in a:
        if i[0] == j[0]:
            continue  # skip pairing a point with itself
        dis = getDistance(i[1], i[2], j[1], j[2])  # the function I made to calculate the distance between two coordinates
        result[line] = [i[0], j[0], dis]
        line += 1
```
Can anyone help me to improve the code? | 2021/11/20 | [
"https://Stackoverflow.com/questions/70042756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17454717/"
] | 1- run :
```
php artisan make:middleware AccountType
```
2- Add it to the *routeMiddleware* array in your kernel file by opening `app/Http/Kernel.php`:
```
'accType' => \App\Http\Middleware\AccountType::class,
```
3- Edit `AccountType` file:
```
public function handle($request, Closure $next)
{
// If user account type is profile allow to next or else block the request
if (Auth::user() && Auth::user()->account_type == 'profile') {
return $next($request);
}else{
abort(403, 'Unauthorized action.');
}
}
```
4- Apply the middleware to your route:
```
Route::get('/profile', ['middleware' => 'accType', function () {
//
}]);
If you want to have a multi-authentication system with different logic, it's better to implement multiple guards and define them for your desired models:
```
[...]
'guards' => [
[...]
'admin' => [
'driver' => 'session',
'provider' => 'admins',
],
'writer' => [
'driver' => 'session',
'provider' => 'writers',
],
],
[...]
[...]
'providers' => [
[...]
'admins' => [
'driver' => 'eloquent',
'model' => App\BusinessDashboard::class,
],
'writers' => [
'driver' => 'eloquent',
'model' => App\ProfileDashboard::class,
],
],
[...]
```
You can find a complete guide in the article linked below:
[Multiple authentication guards in Laravel](https://pusher.com/tutorials/multiple-authentication-guards-laravel/) | 3,522 |
47,845,358 | Given a string, find the rank of the string amongst its permutations sorted lexicographically.
Note that the characters might be repeated. If the characters are repeated, we need to look at the rank in unique permutations.
Look at the example for more details.
Input : 'aba'
Output : 2
The order permutations with letters 'a', 'a', and 'b' :
aab
aba
baa
I was able to solve for unique characters but not with repeated characters. Can someone help me code this in python?
Thanks | 2017/12/16 | [
"https://Stackoverflow.com/questions/47845358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7549959/"
] | ```
long long int pow_mod(long long int a, long long int b)
{
    // fast modular exponentiation: computes a^b mod 1000003
    long long MOD = 1000003;
    if (a == 1)
        return 1;
    long long int x = 1, y = a;
    while (b > 0)
    {
        if (b % 2)
        {
            x = (x * y) % MOD;
        }
        y = (y * y) % MOD;
        b = b >> 1;
    }
    return x;
}
int Solution::findRank(string A) {
    long long ans = 0;
    long long mod = 1000003;
    long long arr[300];
    long long n = A.length();
    long long fact[n];
    fact[0] = 1;
    // precompute factorials modulo mod
    for (int i = 1; i < n; i++)
    {
        fact[i] = ((fact[i - 1] % mod) * (i % mod)) % mod;
    }
    // count the occurrences of each character
    for (long long i = 0; i < 300; i++)
        arr[i] = 0;
    for (long long i = 0; i < n; i++)
    {
        arr[A[i]]++;
    }
    for (long long i = 0; i < n; i++)
    {
        // cnt = how many of the remaining characters are smaller than A[i]
        long long cnt = 0;
        long long di = 1;
        for (long long j = (A[i] - 1); j >= 0; j--)
        {
            cnt += arr[j];
        }
        // di = product of factorials of the remaining character counts;
        // dividing by it collapses permutations of repeated characters
        for (int j = 0; j < 300; j++)
        {
            di = (di % mod * fact[arr[j]] % mod) % mod;
        }
        // modular inverse of di via Fermat's little theorem (mod is prime)
        long long a = pow_mod(di, (mod - 2)) % mod;
        ans = (ans + ((cnt * fact[n - i - 1]) % mod * a) % mod) % mod;
        arr[A[i]]--;
    }
    ++ans;  // convert the 0-based count of smaller permutations to a 1-based rank
    return ans % mod;
}
``` | You could generate the unique permutations, sort them, and find the 1-based position of your original string:
```
from itertools import permutations

def permutation_index(s):
    # set() drops duplicate permutations; +1 makes the rank 1-based
    return sorted(set(''.join(x) for x in permutations(s))).index(s) + 1
``` | 3,527 |
20,048,001 | ```
f = open('file.txt')
print f.read()
```
That was pretty straightforward, wasn't it? That works because Python knows how to read and write `.txt` files. How do these formats even work? I wish to build a Python program to read at least the major formats of documents (including PDFs), spreadsheets and presentations.
Now please don't tell me, "Go ahead and use PDFMiner!"; "Use IronPython for reading `.doc`!".
I want to understand for myself how the format magic happens. I want to know how formats work, so that I can build my own "any format reader". I don't want solutions for reading various formats; I want to know the theory behind it.
Any links to such resources, or help regarding reading multiple formats is greatly appreciated.
-Thank you | 2013/11/18 | [
"https://Stackoverflow.com/questions/20048001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2672265/"
] | No, you have completely misunderstood what your code is doing. Python doesn't "know" how to read .txt files, because there is no "format" here. It is just opening a plain file and printing out the bytes it finds there.
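A tiny illustration of the point: real formats announce themselves with structure such as magic bytes, which a parser must understand (a sketch; the file name is hypothetical):
```
with open('file.pdf', 'rb') as f:
    magic = f.read(4)
print(magic == b'%PDF')  # every PDF file starts with the bytes %PDF
```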
Something like PDF or DOC is completely different. The bytes by themselves do not mean anything: they are the representation of a file format that is understood by the relevant application. In order to "read" them in a way that understands what they contain, you need to build a parser that does the same thing as the original application. That's what those libraries you refer to do (although IronPython is an implementation of Python in .NET, and has nothing to do with reading doc files). | You will need to look up the specification for each format you want to deal with, [this](http://www.idpf.org/epub/301/spec/epub-overview.html) is the specification for ePub for example, it's a bit broad but you get the idea, then you need figure out yourself how you want to deal with it. | 3,528 |
13,536,350 | On using this regular expression in python :
`pathstring = '<span class="titletext">(.*)</span>'`
`pathFinderTitle = re.compile(pathstring)`
My output is:
```
Govt has nothing to do with former CAG official RP Singh:
Sibal</span></a></h2></div><div class="esc-lead-article-source-wrapper">
<table class="al-attribution single-line-height" cellspacing="0" cellpadding="0">
<tbody><tr><td class="al-attribution-cell source-cell">
<span class='al-attribution-source'>Times of India</span></td>
<td class="al-attribution-cell timestamp-cell">
<span class='dash-separator'> - </span>
<span class='al-attribution-timestamp'>‎46 minutes ago‎
```
The match should have stopped at the first `</span>`.
Please suggest what's wrong here. | 2012/11/23 | [
"https://Stackoverflow.com/questions/13536350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/733583/"
] | `.*` is a *greedy* match of any characters; it is going to consume as many characters as possible. Instead, use the non-greedy version `.*?`, as in
```
pathstring = '<span class="titletext">(.*?)</span>'
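# A quick illustration of the difference on a toy string (hypothetical):
#   re.search('<a>(.*)</a>',  '<a>x</a><a>y</a>').group(1)  ->  'x</a><a>y'
#   re.search('<a>(.*?)</a>', '<a>x</a><a>y</a>').group(1)  ->  'x'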
``` | `.*` will match `</span>` so it keeps on going until the last one.
The best answer is: Don't parse html with regular expressions. Use the [lxml](http://lxml.de/installation.html) library (or something similar).
```
from lxml import html
html_string = '<blah>'
tree = html.fromstring(html_string)
titles = tree.xpath("//span[@class='titletext']")
for title in titles:
print title.text
```
Using a proper xml/html parser will save you massive amounts of time and trouble. If you roll your own parser, you'll have to cater for malformed tags, comments, and myriad other things. Don't reinvent the wheel. | 3,530 |
32,878,530 | I have a Python/Django project that I manage using PyCharm. Everything was working perfectly under Mac OSX Yosemite. This morning I upgraded to the final release version of El Capitan, now I cannot run the project. The error I get is:
>
> Error loading MySQLdb module: No module named MySQLdb
>
>
>
I've tried all the suggestions from past questions, but none has resolved it. The most common suggestion seems to be:
>
> sudo pip install MySQL-python
>
>
>
When I do that, I get:
>
> Requirement already satisfied (use --upgrade to upgrade): MySQL-python in /Library/Python/2.7/site-packages
>
>
>
None of the other suggestions help either. I can try reinstalling MySQL, but I don't think it's the MySQL that's broken. This might have something to do with permissions. Can anyone please help get me going again? | 2015/10/01 | [
"https://Stackoverflow.com/questions/32878530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1402166/"
] | Happened to me as well. I removed the package, installed mysql using [Homebrew](http://brew.sh/) and then reinstall the package.
```
pip uninstall MySQL-python
brew install mysql
pip install MySQL-python
```
If you run into any issues with brew, be sure to check their [troubleshooting page on El Capitan](https://github.com/Homebrew/homebrew/blob/master/share/doc/homebrew/El_Capitan_and_Homebrew.md). | View this post : [MySQL Improperly Configured Reason: unsafe use of relative path](https://stackoverflow.com/questions/31343299/mysql-improperly-configured-reason-unsafe-use-of-relative-path)
And if you have updated Xcode, open 1 time Xcode for agreement the licence. | 3,535 |
62,804,653 | I see that all of the examples per the documentation use some form of a simple web application (For example, Flask in Python). Is it possible to use cloud run as a non web application? For example, deploy cloud run to use a python script and then use GCP Scheduler to invoke cloud run every hour to run that script? Basically my thinking for this is to avoid having to deploy and pay for Compute Engine, and only pay for when the cloud run container is invoked via the scheduler. | 2020/07/08 | [
"https://Stackoverflow.com/questions/62804653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11287314/"
] | It's mandatory to answer to HTTP request. It's the contract of Cloud Run
* Stateless (no volume attached to the container)
* Answer to HTTP request
However, if you already have a python script, it's easy to wrap it in a flask webserver. Let's say, you have something like this (I assume that the file name is `main.py` -> *important for the Dockerfile at the end*)
```
import ....
var = todo(...)
connect = connect(...)
connect(var)
```
1. Firstly, wrap it in a function like this
```
import ....
def my_function(request):
var = todo(...)
connect = connect(...)
connect(var)
return 'ok',200
```
2. Secondly, add a flask server
```
from flask import Flask, request
import os
import ....
app = Flask(__name__)
@app.route('/')
def my_function():  # Flask passes no arguments to view functions; use flask.request if needed
var = todo(...)
connect = connect(...)
connect(var)
return 'ok',200
if __name__ == "__main__":
app.run(host='0.0.0.0',port=int(os.environ.get('PORT',8080)))
```
3. Add flask in your `requirements.txt`
4. Build a standard container, here an example of Dockerfile
```
FROM python:3-alpine
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV PORT 8080
CMD [ "python", "main.py" ]
```
5. Build (with Cloud Build for example) and deploy the service on Cloud Run
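For step 5, a sketch of the commands (hypothetical; `PROJECT_ID`, the service name and the region are placeholders to adjust):
```
gcloud builds submit --tag gcr.io/PROJECT_ID/my-batch-job
gcloud run deploy my-batch-job --image gcr.io/PROJECT_ID/my-batch-job \
    --platform managed --region us-central1
```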
Now you have a URL that you can call with Cloud Scheduler.
*Be careful, the max request duration is, for now, limited to 15 minutes (soon 4x more) and limited to 2vCPU and 2Gb of memory (again, soon more).* | It depends what is being installed in the container image, as there is no requirement that one would have to install a web-server. For [example](https://github.com/syslogic/cloudbuild-android), with such an image I can build Android applications, triggered whenever a repository changes (file excludes recommend) ...and likely could even run a head-less Android emulator for Gradle test tasks and publish test results to Pub/Sub (at least while the test-suite wouldn't run for too long). I mean, one has to understand the possibilities of Cloud Build to understand what Cloud Run can do. | 3,536 |
26,268,268 | I have the following string in a separate .txt file:
```
L#$KJ#()JSEFS(DF)(SD*F
#KJ$H#K$JH@#K$JHD)
SF SDFLKJ#{P@$OJ{SDPFODS{PFO{
#K$HK#JHSFHD(*SHF)SF{HP
#L$H@โ#$Hโ@#L$KH#โ@L$K
#~L$KJ#:$SD)FJ)S(DJF)(S
#$KJH#$
SDLKFJD(FJ)SDJFSDLFKS
~L#$KJ:@LK$#J$
LSJDF(S*JDF(*SJDF(*J(DSF*J
```
I have to take every element by column position and output how many times in a column an element occurs. For example, Position: 0 or Column 1 ( S: 20.0% #: 50.0% L: 20.0% ~: 10.0% )
I have written this script in Python, using NumPy to create an array of arrays by line, but am getting "TypeError: list indices must be integers, not tuple".
Here is the script when I try to print the first column:
```
import numpy as np
from sys import argv
script, filename = argv
target = open(filename, 'r')
y = []
for x in range(0, 10):
y.append(np.array(list(target.readline())))
print y[:,1]
```
What am I doing wrong? | 2014/10/08 | [
"https://Stackoverflow.com/questions/26268268",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3643019/"
] | From <https://docs.recurly.com/api/recurlyjs/jsonp_endpoints>
```
$.ajax({
dataType: 'jsonp',
url: 'https://{subdomain}.recurly.com/jsonp/{subdomain}/plans/{plan_code}',
data: {
currency: 'USD',
},
success: function (data) {
// do stuff
},
}
``` | You should *not* use the V2 API from the browser. Doing so risks exposing your private API key. If someone has your API key they can make calls charging customers, modifying subscriptions, causing all sorts of problems.
Look at the JSONP endpoints that Byaxy linked to. | 3,538 |
14,389,513 | I need to parse html table of the following structure:
```
<table class="table1" width="620" cellspacing="0" cellpadding="0" border="0">
<tbody>
<tr width="620">
<th width="620">Smth1</th>
...
</tr>
<tr bgcolor="ffffff" width="620">
<td width="620">Smth2</td>
...
</tr>
<tr bgcolor="E4E4E4" width="620">
<td width="620">Smth3</td>
...
</tr>
<tr bgcolor="ffffff" width="620">
<td width="620">Smth4</td>
...
</tr>
</tbody>
</table>
```
Python code:
```
r = requests.post(url,data)
html = lxml.html.document_fromstring(r.text)
rows = html.xpath(xpath1)[0].findall("tr")
#Getting Xpath with FireBug
data = list()
for row in rows:
data.append([c.text for c in row.getchildren()])
```
But I get this on the third line:
```
IndexError: list index out of range
```
The task is to form a Python dict from this. The number of rows could vary.
**UPD.**
Changed the way I'm getting html code to avoid possible problems with requests lib. Now it's a simple url:
```
html = lxml.html.parse(test_url)
```
This proves everyting is Ok with html:
```
lxml.html.open_in_browser(html)
```
But still the same problem:
```
rows = html.xpath(xpath1)[0].findall('tr')
data = list()
for row in rows:
data.append([c.text for c in row.getchildren()])
```
Here is the xpath1:
```
'/html/body/table/tbody/tr[5]/td/table/tbody/tr/td[2]/table/tbody/tr/td/center/table'
```
**UPD2.** It was found experimentally that the xpath fails on:
```
xpath1 = '/html/body/table/tbody'
print html.xpath(xpath1)
#print returns []
```
If xpath1 is shorter, then it seeem to work well and returns `[<Element table at 0x2cbadb0>]` for `xpath1 = '/html/body/table'` | 2013/01/17 | [
"https://Stackoverflow.com/questions/14389513",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1988698/"
] | You didn't include the XPath, so I'm not sure what you're trying to do, but if I understood correctly, this should work
```
xpath1 = "tbody/tr"
r = requests.post(url,data)
html = lxml.html.fromstring(r.text)
rows = html.xpath(xpath1)
data = list()
for row in rows:
data.append([c.text for c in row.getchildren()])
```
This is making a list of one-item lists though, like this:
```
[['Smth1'], ['Smth2'], ['Smth3'], ['Smth4']]
```
To have a simple list of the values, you can use this code
```
xpath1 = "tbody/tr/*/text()"
r = requests.post(url,data)
html = lxml.html.fromstring(r.text)
data = html.xpath(xpath1)
```
This is all assuming that r.text is exactly what you posted up there. | Your `.xpath(xpath1)` XPath expression failed to find any elements. Check that expression for errors. | 3,539 |
39,448,121 | I'm trying to upload a file using ftp in python, but I get an error saying:
```
ftplib.error_perm: 550 Filename invalid
```
when I run the following code:
```
ftp = FTP('xxx.xxx.x.xxx', 'MY_FTP', '')
ftp.cwd("/incoming")
file = open('c:\Automation\FTP_Files\MaxErrors1.json', 'rb')
ftp.storbinary('STOR c:\Automation\FTP_Files\MaxErrors1.json', file)
ftp.close()
```
I've checked that the file exists in the location I specified, does anyone know what might be causing the issue? | 2016/09/12 | [
"https://Stackoverflow.com/questions/39448121",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6434729/"
] | The problem is that on the server, the path `c:\Automation\FTP_Files\MaxErrors1.json` is not valid. Instead try just doing:
```
ftp.storbinary('STOR MaxErrors1.json', file)
``` | The argument to STOR needs to be the destination file name, not the source path. You should just do `ftp.storbinary('STOR MaxErrors1.json', file)`. | 3,540 |
20,457,271 | So I'm new at Python and I would like some help in making code where:
When an input is typed, if the input has a minimum of three characters that match any one thing on a list, it would replace the input with the text in the list that matches the criteria.
Example:
Jan
-Scans List-
-Finds Janice-
-Replaces and gives output as Janice Instead of Jan-
Janice
So far
```
getname = []
for word in args
room.usernames.get(args,word)
```
`room.usernames` is the list and `args` is the input.
Error: list item has no attribute `.get`
There is a module used, ch.py, located at <http://pastebin.com/5BLZ0UA0> | 2013/12/08 | [
"https://Stackoverflow.com/questions/20457271",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3065432/"
] | You will need to:
* make a replacement words dictionary
* get some input
* do sanity checking to make sure it fits your parameters
* split it into a `list`
* loop through your new `list` and replace each word with its replacement in your `dict`, if it is in there
I won't write all of this for you. But here's a tip for how to do that last part: use the `get` method of dicts - it allows you to provide a "fallback" in case the word is not found in the `dict`. So just fall back to the word itself.
```
replacement_words = {'jan':'janice','foo':'bar'}
my_list = ['jan','is','cool']
[replacement_words.get(word,word) for word in my_list]
Out[41]: ['janice', 'is', 'cool']
``` | You Could try this
```
getname = []
for word in args
"%s"% (room.usernames).get(args,word)
``` | 3,542 |
50,641,831 | I am running a playbook against a RHEL 7.4 image which sits behind a proxy. SELINUX and the firewall have been disabled. I am using Ansible 2.5.3
Here is the task
```
- name: Add Docker repository.
get_url:
url: "{{ docker_yum_repo_url }}"
dest: '/etc/yum.repos.d/docker-{{ docker_edition }}.repo'
owner: root
group: root
mode: 0644
use_proxy: yes
```
And the error
```
fatal: [10.40.12.136]: FAILED! => changed=false
invocation:
module_args:
attributes: null
backup: null
checksum: ''
client_cert: null
client_key: null
content: null
delimiter: null
dest: /etc/yum.repos.d/docker-ce.repo
directory_mode: null
follow: false
force: false
force_basic_auth: false
group: root
headers: null
http_agent: ansible-httpget
mode: 420
owner: root
regexp: null
remote_src: null
selevel: null
serole: null
setype: null
seuser: null
sha256sum: ''
src: null
timeout: 10
tmp_dest: null
unsafe_writes: null
url: https://download.docker.com/linux/centos/docker-ce.repo
url_password: null
url_username: null
use_proxy: true
validate_certs: true
msg: 'Failed to connect to download.docker.com at port 443: [Errno 113] No route to host'
```
If I SSH into the server I can wget or curl the file with no issue at all, and as you can see the module has been configured to use my proxy.
UPDATE: It does appear to be trying to use my proxy
```
<10.47.69.136> EXEC /bin/sh -c 'http_proxy=http://192.168.1.240:8080 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1527849642.94-177395007646298/get_url.py && sleep 0'
```
thanks. | 2018/06/01 | [
"https://Stackoverflow.com/questions/50641831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1768233/"
] | Build your app with sourcemaps, then use a tool like [`source-map-explorer`](https://www.npmjs.com/package/source-map-explorer) that details the size of every part of library that you're using.
Some tips with common libraries :
* If you're using RxJS 5.5 or higher, use the `.pipe(operator1(), operator2())` syntax, it significantly reduces RxJS size when you don't use many different operators
* If you're using moment.js be sure to import only the locales you're using
* If you're using Angular Material, be sure you don't import unused modules
* If you're using bootstrap and SASS, import only the needed parts of `.scss` files
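For instance, the pipeable-operator style from the first tip above (a sketch assuming RxJS 6-style imports; the operators are illustrative):
```
import { of } from 'rxjs';
import { filter, map } from 'rxjs/operators';

of(1, 2, 3).pipe(
  filter(n => n % 2 === 1),  // keep odd values only
  map(n => n * 10)
).subscribe(v => console.log(v)); // logs 10, then 30
```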
If you didn't already do it, divide your app into lazy modules; it will reduce the size of the initial chunk. | I had a similar issue with large prod bundles and fixed it with this command:
```
npm run build
```
I had been using
`npm run ng build --prod`
but the `--prod` flag doesn't take effect that way, and I was left with a vendor.bundle.js of 2.9 MB:
```
chunk {inline} inline.bundle.js, inline.bundle.js.map (inline) 3.89 kB [entry] [rendered]
chunk {main} main.bundle.js, main.bundle.js.map (main) 122 kB [initial] [rendered]
chunk {polyfills} polyfills.bundle.js, polyfills.bundle.js.map (polyfills) 349 kB [initial] [rendered]
chunk {styles} styles.bundle.js, styles.bundle.js.map (styles) 16.6 kB [initial] [rendered]
chunk {vendor} vendor.bundle.js, vendor.bundle.js.map (vendor) 2.96 MB [initial] [rendered]
```
With just npm run build (no ng) the whole dist folder is 400kb
```
chunk {0} polyfills.67d5a7eec41ba0d20dc4.bundle.js (polyfills) 101 kB [initial] [rendered]
chunk {1} main.3874ea54d23e70290fa8.bundle.js (main) 327 kB [initial] [rendered]
chunk {2} styles.0aad5eda327b47b9e9fa.bundle.css (styles) 1.56 kB [initial] [rendered]
chunk {3} inline.318b50c57b4eba3d437b.bundle.js (inline) 796 bytes [entry] [rendered]
``` | 3,543 |
49,520,600 | I'm creating a program in Python where I need to ask the user for their team name and team members.
I need to have the team names in a list and have the team members embedded within each team name. Any help?
# Creating Teams
print("Welcome to the program")
print("======================================================")
Teams = []
# Looping to find out the four different team names and players.
for x in range(0, 4) :
# Asking for the Team name and the team players names
TeamName = input("What is the name of the team?")
Player1 = input("What is the first name of the player in your team?")
Player2 = input("What is the second name of the player in your team?")
Player3 = input("What is the third name of the player in your team?")
Player4 = input("What is the last name of the player in your team?")
Teams.extend([TeamName, Player1, Player2, Player3, Player4])
TeamPlayers = int(input("What Team do you want to see? (0-4)"))
print([Teams, Player1, Player2, Player3, Player4])
``` | 2018/03/27 | [
"https://Stackoverflow.com/questions/49520600",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9560138/"
] | `equalsIgnoreCase` just returns a boolean, which indicates whether two strings are equal when case is ignored.
What you need is
```
String aLower = a.toLowerCase();
String bLower = b.toLowerCase();
if(aLower.startsWith(bLower))
{
// do something
}
```
That way you just convert both strings to lower case and compare them afterwards. | Try this:
```
System.out.println((a.toLowerCase()).startsWith(b.toLowerCase()));
``` | 3,546 |
38,064,044 | I am implementing a simple DSL. I have the following input string:
```
txt = 'Hi, my name is <<name>>. I was born in <<city>>.'
```
And I have the following data:
```
{
'name': 'John',
'city': 'Paris',
'more': 'xxx',
'data': 'yyy',
...
}
```
I need to implement the following function:
```
def tokenize(txt):
...
return fmt, vars
```
Where I get:
```
fmt = 'Hi, my name is {name}. I was born in {city}.'
vars = ['name', 'city']
```
That is, `fmt` can be passed to the [str.format()](https://docs.python.org/2/library/stdtypes.html#str.format) function, and `vars` is a list of the detected tokens (so that I can perform lookup in the data, which can be more complex than what I described, since it can be split in several namespaces)
After this, processing the format would be simple:
```
def expand(fmt, vars, data):
params = get_params(vars, data)
return fmt.format(params)
```
Where `get_params` is performing simple lookup of the data, and returning something like:
```
params = {
'name': 'John',
'city': 'Paris',
}
```
My question is:
How can I implement tokenize? How can I detect the tokens, knowing that the delitimers are `<<` and `>>`? Should I go for regexes, or is there an easier path?
This is something similar to what `pystache`, or even `.format` itself, are doing, but I would like a light-weight implementation. Robustness is not very critical at this stage. | 2016/06/27 | [
"https://Stackoverflow.com/questions/38064044",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/647991/"
] | Yes, this is a perfect target for regexp. Find the begin/end quotation marks, replace them with braces, and extract the symbol names into a list. Do you have a solid description of legal symbols? You'll want a search such as
```
/\<\<([a-zA-Z]+[a-zA-Z0-9_]*)\>\>/
```
For classical variable names (note that this excludes leading underscores). Are you familiar enough with regexps to take it from here? | ```
import re
def tokenize(text):
found_variables = []
def replace_and_capture(match):
found_variables.append(match.group(1))
return "{{{}}}".format(match.group(1))
return re.sub(r'<<([^>]+)>>', replace_and_capture, text), found_variables
fmt, vars = tokenize('Hi, my name is <<name>>. I was born in <<city>>.')
print(fmt)
print(vars)
# Output:
# Hi, my name is {name}. I was born in {city}.
# ['name', 'city']
``` | 3,547 |
66,565,415 | Basically I'm new to Python programming and I've just learnt about user input. I'm trying to make an if statement that checks if the input is a str or int and then prints a message. If anyone knows how, then please comment it, thanks.
```
print("How old are you?")
dog = input()
final = int(dog) + int(100)
if dog == str:
print("No")
else:
print("\nHeres you name + 100!")
print(f"{final}")
print("Thanks for playing!")
```
By the way, the if statement that I put doesn't work. I don't know if I need to put an exception so it responds with a message rather than an error. | 2021/03/10 | [
"https://Stackoverflow.com/questions/66565415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15051671/"
] | Re how to check types, you can do like this:
```py
my_string = "hello"
if isinstance(my_string, str):
print("I'm a string")
else:
print("I'm not")
# outputs: "I'm a string"
```
However, as commented, the `input` method always returns a value of type str. So if you want to ensure that the value consists only of numerical characters, you can use the string method `isdigit()`:
```py
my_string = "asd"
my_string.isdigit() # returns False
my_string = "123"
my_string.isdigit() # returns True
``` | I would use the `type` function to check:
```
print("How old are you?")
dog = input()
final = int(dog) + int(100)
#use type method to check the type of the variable
if type(dog) == str:
print("No")
else:
print("\nHeres you name + 100!")
print(f"{final}")
print("Thanks for playing!")
```
Another thing you can do is set prevent the user from entering anything but an `int` and force a proper variable, something like:
```
while True:
dog = input('How old are you? ')
try:
dog = int(dog)
break
except ValueError:
print('No')
final = dog + 100
print("\nHeres you name + 100!")
print(f"{final}")
print("Thanks for playing!")
```
Also, don't waste processing power by converting an `int` to an `int`: `int(100)` is the same as `100`. | 3,548 |
57,899,891 | I read this (from [here](https://docs.python.org/3.5/reference/datamodel.html#object.__hash__)):
>
> User-defined classes have `__eq__()` and `__hash__()` methods by default; with them, all objects compare unequal (except with themselves) and `x.__hash__()` returns an appropriate value such that x == y implies both that x is y and hash(x) == hash(y).
>
>
>
And I wish to know if the `__eq__()` method by default is defined like:
```
def __eq__(self, other):
return hash(self) == hash(other)
``` | 2019/09/12 | [
"https://Stackoverflow.com/questions/57899891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3530284/"
] | No, it's more like:
```
def __eq__(self, other):
return self is other
```
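Hash equality does not imply object equality. A CPython illustration (`hash(-1)` is reserved internally, so `-1` and `-2` share a hash value):
```
>>> hash(-1) == hash(-2)
True
>>> -1 == -2
False
```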
You can't use `hash()` because it's possible for different objects to have the same hash value. | You can read the following reference: <https://eev.ee/blog/2012/03/24/python-faq-equality/>
In the default method, when you just compare two objects without overriding `__eq__`, it checks whether they are the same object, more like the following:
```
def __eq__(self, other):
return self is other
``` | 3,549 |
12,757,832 | I began learning Python a short time ago, and I've hit my first problem. Here's the code:
```
fh = open('/usr/share/dict/words')
for line in fh.readlines():
print(line, end='')
```
When I execute it in the Terminal (OS X), it reports an `invalid syntax` error where the `end` equals sign is placed. What's the problem here? I didn't find a solution...
I installed Python 3.3.0 from this [page](http://python.org/download), Python 3.3.0 Mac OS X 64-bit/32-bit x86-64/i386 Installer
Sorry for so nooby questions :( | 2012/10/06 | [
"https://Stackoverflow.com/questions/12757832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/764638/"
] | Apparently because the `WebView` had editable content the events were being consumed by a child responder.
The workaround was to listen to `WebViewDidChangeNotification` when this happens. | Here is what I think is occurring
NSFontManager apparently sends the messages along the responder chain and not to the delegate.
I would set my CustomWebViewController to be the delegate of the window. | 3,552 |
5,919,318 | I want to nest my custom validators within my schema, like this:
```
MySchema(Schema):
class MyValidator(validators.FancyValidator):
def _to_python(self, value, state):
...
class MyOtherValidator(validators.FancyValidator):
def _to_python(self, value, state):
...
field_foo = All(validators.NotEmpty(),
MyValidator())
field_bar = All(validators.NotEmpty(),
MyOtherValidator())
```
However, it seems to think that `MyValidator` and `MyOtherValidator` are fields because its `form_errors` contain:
```
{
'MyValidator': 'Missing value',
'MyOtherValidator': 'Missing value'
}
```
If I don't nest them, they seem fine. What am I missing? | 2011/05/07 | [
"https://Stackoverflow.com/questions/5919318",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/315709/"
] | My advice is: you should move the `FancyValidator` subclass definitions to global scope, either (1) in the same module just before the definition of the `Schema` in which they're used, or (2) in a separate module for greater reuse. Unless there is a specific reason why nesting is required, a few extra class names in the module namespace shouldn't hurt.
For example:
```
from formencode import All, FancyValidator, Schema, validators
class MyValidator(FancyValidator):
def _to_python(self, value, state):
return value + '_foo'
class MyOtherValidator(FancyValidator):
def _to_python(self, value, state):
return value + '_bar'
class MySchema(Schema):
field_foo = All(validators.NotEmpty(), MyValidator())
field_bar = All(validators.NotEmpty(), MyOtherValidator())
print MySchema().to_python({'field_foo': 'x', 'field_bar': 'y'}, None)
```
Result:
```
{'field_foo': 'x_foo', 'field_bar': 'y_bar'}
``` | In python, anything that you define in your class is a field.
Be it static variables, variables set with self, methods or classes.
If you want to "hide" the classes, or make them "less public", as is probably the right wording in the python dictionary, you should start their names with \_\_.
```
>>> class X:
...     class A: pass
...     class __B: pass
...     c = 0
...     __d = 1
...
>>> dir(X())
['A', '_X__B', '_X__d', '__doc__', '__module__', 'c']
```
As you can see, there is a loophole for this kind of privacy, but most automatic tools will recognize that you are trying to make something private. | 3,553 |
50,294,263 | I'm trying to write a simple multi-threaded Python script:
```
from multiprocessing.dummy import Pool as ThreadPool
def resize_img_folder_multithreaded(img_fldr_src,img_fldr_dst,max_num_of_thread):
images = glob.glob(img_fldr_src+'/*.'+img_file_extension)
pool = ThreadPool(max_num_of_thread)
pool.starmap(resize_img,zip(images,itertools.repeat(img_fldr_dst)))
# close the pool and wait for the work to finish
pool.close()
pool.join()
def resize_img(img_path_src,img_fldr_dest):
#print("about to resize image=",img_path_src)
image = io.imread(img_path_src)
image = transform.resize(image, [300,300])
io.imsave(os.path.join(img_fldr_dest,os.path.basename(img_path_src)),image)
label = img_path_src[:-4] + '.xml'
if copyLabels is True and os.path.exists(label) is True :
copyfile(label,os.path.join(img_fldr_dest,os.path.basename(label)))
```
Setting the argument `max_num_of_thread` to any number in [1...10]
doesn't improve my run time at all (for 60 images it stays around 30 sec), and with `max_num_of_thread`=10 my PC gets stuck.
My question is: what is the bottleneck in my code, and why can't I see any improvement?
Some data about my PC:
```
python -V
Python 3.6.4 :: Anaconda, Inc.
cat /proc/cpuinfo | grep 'processor' | wc -l
4
cat /proc/meminfo
MemTotal: 8075960 kB
MemFree: 3943796 kB
MemAvailable: 4560308 kB
cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=17.10
``` | 2018/05/11 | [
"https://Stackoverflow.com/questions/50294263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3134181/"
] | Blame the GIL.
Python has this mechanism called the GIL, global interpreter lock. It is basically a mutex that prevents native threads from executing Python bytecodes at once. This must be done since Python's (at least, CPython) memory management is not thread-safe.
In other words, the GIL will prevent you from running multiple threads at the same time. Essentially, you're running one thread at a time. Multi-threading, in the sense of exploiting multiple CPU cores, is more like an illusion in Python.
Fortunately, there is a way to solve this problem. it's a bit more expensive resource-wise though. You can utilize multiprocessing instead. Python has excellent support for this through the `multiprocessing` module. This way, you will be able to achieve parallelism[1].
You might ask why multiprocessing isn't affected by the GIL limitations. The answer is pretty simple. Each new process of your program has a different instance (I think there's a better word for this) of the Python interpreter. This means that each process has its own GIL. So, the processes are not managed by the GIL, but by the OS itself. This provides you with parallelism[2].
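A minimal sketch of the switch, mirroring the question's code (the resize body and the glob pattern are elided/illustrative; only the pool type changes):
```
from multiprocessing import Pool  # instead of multiprocessing.dummy
import glob
import itertools

def resize_img(img_path_src, img_fldr_dest):
    ...  # same CPU-bound resize work as in the question

def resize_img_folder_multiprocess(img_fldr_src, img_fldr_dst, max_procs):
    images = glob.glob(img_fldr_src + '/*.jpg')
    with Pool(max_procs) as pool:  # one worker process per pool slot
        pool.starmap(resize_img, zip(images, itertools.repeat(img_fldr_dst)))
```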
---
### References
* [1] <https://softwareengineering.stackexchange.com/questions/186889/why-was-python-written-with-the-gil>
* [2] <https://jeffknupp.com/blog/2013/06/30/pythons-hardest-problem-revisited/> | You should only use multiprocessing with the number of cpu cores you have available. You are also not using a Queue, so the pool of resources are doing the same work. You need to add a queue to your code.
[Filling a queue and managing multiprocessing in python](https://stackoverflow.com/questions/17241663/filling-a-queue-and-managing-multiprocessing-in-python) | 3,554 |
35,814,637 | I am trying to call a Torch 7 program from within a Python cgi-bin script.
If I run the following Python script from the command line:
```
# -*- coding: utf-8 -*-
from subprocess import call
call (['th', 'sample.lua', 'cv/lm_lstm_epoch3.54_0.9324.t7', '-gpuid', '-1', '-primetext', '"ืืืจ ืืืืื ืืืจืคืงื ืืื ืืืฉ ืืื ืืื ืฉื ืืืจ"', '-temperature', '1.0', '-length', '1000'])
```
This works fine and I get the following output
>
> ubuntu@ip-172-31-45-110:/usr/lib/cgi-bin$ python test2.py
> creating an lstm...
>
>
> seeding with "ืืืจ ืืืืื ืืืจืคืงื ืืื ืืืฉ ืืื ืืื ืฉื ืืืจ"
> -----------------------------------------------------
>
>
> "ืืืจ ืืืืื ืืืจืคืงื ืืื ืืืฉ ืืื ืืื ืฉื ืืืจ"ืื ืืงืจืง ืืื ืื ืฉืื ืืืื ืืืืืืื ืขืืื ืืืชืืืื ื ืืืื ืื ืื ืกืืื ืืื ืจ ืืงืืฉื ืืืืคื ืืช ืืฉืชื ืืืืช ืืจื ืืืื ืืืืืื ืืื ืืื ืืืื ืื ืฉื ืื ืืืืืชื ืืืืื ืืืืชืืืชืื ืืฉืจ ืืืจื ืื ืืฉืจ ืขืฉื ืื ืื ืืืืช ืงืืืืืืื ื ืงืจื ืืื ืืืืฆืืื ืืื ื ืืงืื ืื ืืืจ ืืื ืื ืชืืืฆืืืืืจ ืขื ืืื ืจืืฉืื ืืืืืื ืืจื ื] ื ืืืืืืืืื ืืืืงืืกื ืื ืชืืืืื ืืื ืืืื ืืืืคืจื ืืืืฉืจืขืชื ืืืืชื ืื ืื ื"ื ืืื ืงืืืจ ืืคืืืืฉืื ืืฉืืืจื ืฉื ืืงืื ืืจืืฉืืืืกืืคื ืืืจื ืืืืื ืืืืช ืืืงืืฉ ืืืืืืืืขืืืจืื ืืืจืื ื ืืขืืืืจ ืืืฉืจืื ืื ืืืื ืืืืืืืฉืชื ืืืจืื ืืฉืงืื ืืืจ ืืื ืื ืฉืืขืื ืฉื ื ืฉืขืืจืืืขืืื ื"ื ืจืื ืื ืื ืืจ ืืืฉืจืืืืืจ ืื ื ืคืฉ ืืืจ ืงืื ืฉื ื ืชืจืฆืืช
>
>
>
However if I try calling Torch 7 from a cgi-bin script
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import subprocess
import os, sys
import cgi, cgitb
print "Content-Type: text/html"
print
print "<TITLE>CGI script output</TITLE>"
output = ""
output = subprocess.call (['th', 'sample.lua', 'cv/lm_lstm_epoch3.54_0.9324.t7', '-gpuid', '-1', '-primetext', '"ืืืจ ืืืืื ืืืจืคืงื ืืื ืืืฉ ืืื ืืื ืฉื ืืืจ"', '-temperature', '1.0', '-length', '1000'])
print output
```
Then I get no output at all on the web page; there is also no error in the Apache error log.
Thanks | 2016/03/05 | [
"https://Stackoverflow.com/questions/35814637",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6001121/"
] | You want to use [check\_output](https://docs.python.org/3/library/subprocess.html#subprocess.check_output) to store the output of the command you are executing. [call](https://docs.python.org/3/library/subprocess.html#subprocess.call) will not do this. Call will only give you the return code of what you are executing.
```
subprocess.check_output(['th', 'sample.lua', 'cv/lm_lstm_epoch3.54_0.9324.t7', '-gpuid', '-1', '-primetext', '"ืืืจ ืืืืื ืืืจืคืงื ืืื ืืืฉ ืืื ืืื ืฉื ืืืจ"', '-temperature', '1.0', '-length', '1000'])
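# Note: check_output returns the child's stdout as a byte string; to get it
# onto the web page from the CGI script, assign the result and print it,
# e.g. (a sketch): output = subprocess.check_output([...]); print output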
``` | Maybe you could try [lutorpy](https://github.com/imodpasteur/lutorpy); then you can run the Torch code directly from Python. You can use require("sample") to import your sample.lua module, and then run the inner function just like you run a Python function. | 3,557 |
53,728,124 | I'm working on a dataset with values that need to be rounded to either a lower or an upper bound.
E.g., if I want the upper bound to be **9** and the lower to be **3**, and we have numbers like -
```
[ 7.453511737983394,
8.10917072790058,
6.2377799380575,
5.225853201122676,
4.067932296134156 ]
```
and we want to list to be rounded to either 3 or 9 like -
```
[ 9,
9,
9,
3,
3 ]
```
I know we can do that the good old-fashioned way: iterating over the array, finding the differences, and picking the bound that is closest.
my-approach-code:
```
for i, v in enumerate(the_list):
    three = abs(3 - v)
    nine = abs(9 - v)
    if three < nine:
        the_list[i] = 3
    else:
        the_list[i] = 9
```
I'm wondering if there is a *quick and dirty* way which is inbuilt in python like:
```
hey_bound = round_the_num(number, bound_1, bound_2)
```
I know that we can use `my-approach-code`, but I'm pretty sure this has been implemented somewhere in a much better way. I tried to find it but had no luck, and here we are.
any guesses or direct links for solution to this will be amazing. | 2018/12/11 | [
"https://Stackoverflow.com/questions/53728124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7618421/"
] | **EDIT:**
The best approach, in my opinion, is using numpy (to avoid "manual" looping) with a simple calculation of the difference arrays between `the_list` and the two bounds (no expensive multiplication here), and then conditionally adding the one or the other, depending on which is smaller:
```
import numpy as np
the_list = np.array([ 7.453511737983394,
8.10917072790058,
6.2377799380575,
5.225853201122676,
4.067932296134156 ])
dhi = 9 - the_list
dlo = 3 - the_list
idx = dhi + dlo < 0
the_rounded = the_list + np.where(idx, dhi, dlo)
# array([9., 9., 9., 3., 3.])
```
---
I'd apply the round function to the offset-free, normalized list, then scale back and add the offset afterwards:
```
import numpy as np
the_list = np.array([ 7.453511737983394,
8.10917072790058,
6.2377799380575,
5.225853201122676,
4.067932296134156 ])
hi = 9
lo = 3
dlt = hi - lo
the_rounded = np.round((the_list - lo)/dlt) * dlt + lo
# [9. 9. 9. 3. 3.]
``` | A one-liner list comprehension using the built-in `min` function, with the key argument modified to compare by absolute difference:
```
upper_lower_bound_list=[3,9]
myNumberlist=[ 7.453511737983394,
8.10917072790058,
6.2377799380575,
5.225853201122676,
4.067932296134156 ]
```
List comprehension
```
[min(upper_lower_bound_list, key=lambda x:abs(x-myNumber)) for myNumber in myNumberlist]
```
**Output**
```
[9, 9, 9, 3, 3]
``` | 3,558 |
67,374,632 | I want to take two lists and want to count the values that appear in both but considering **the same position**.
```
a = [1, 2, 3, 4, 5]
b = [4, 5, 3, 6, 5]
returnMatches(a, b)
```
would return `2`, for instance.
```
a = [1, 2, 3, 4, 5]
b = [1, 2, 3, 3, 5]
returnMatches(a, b)
```
would return `4`, for instance.
Is there a pythonic one line option or do I really need to iterate over both lists?
Thanks | 2021/05/03 | [
"https://Stackoverflow.com/questions/67374632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5152497/"
] | Try
```py
output = sum([x == y for x, y in zip(a, b)])
```
I am not sure if it needs further clarification. You might want to look into "list comprehension" and `zip()` if you are not familiar with them. | ```py
returnMatches = lambda a, b: sum([x == y for x, y in zip(a, b)])
```
This is the same as:
```py
def returnMatches(a, b):
matches = 0
for x, y in zip(a, b):
        if x == y:
            matches += 1
return matches
```
Take a look at `zip()` for more information on that builtin | 3,568 |
74,575,794 | Consider the following simple code:
```
import re
def my_match(s):
if re.match("^[a-zA-Z]+", s):
return True
else:
return False
```
Is there a way to collapse this in a single `return` statement? In `C` we could do for example:
```
return match("^[a-zA-Z]+", s) ? true : false;
```
Is there something similar in python? | 2022/11/25 | [
"https://Stackoverflow.com/questions/74575794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/738154/"
] | Python also supports this, although the syntaxes is a little different than most languages.
```
import re
def my_match(s):
return True if re.match("^[a-zA-Z]+", s) else False
```
In general, the Python syntax is `val_when_true if cond else val_when_false`, compared to the `cond ? val_when_true : val_when_false` you see in other languages.
In your particular case though, you can just write:
```
import re
def my_match(s):
return bool(re.match("^[a-zA-Z]+", s))
A more general solution would be to use the following line. It explicitly checks against `None` rather than relying on truthiness. In this case an empty match is impossible anyway, but the explicit check is clearer.
```py
return re.match("^[a-zA-Z]+", s) is not None
``` | 3,570 |
17,217,225 | Consider:
```
>>> a = {'foo': {'bar': 3}}
>>> b = {'foo': {'bar': 3}}
>>> a == b
True
```
According to the python doc, [you can indeed use](http://docs.python.org/2/library/stdtypes.html#dict) the `==` operator on dictionaries.
What is actually happening here? Is Python recursively checking each element of the dictionaries to ensure equality? Is it making sure the keys are identically matched, and the values are also identically matched?
Is there documentation that specifies exactly what `==` on a dictionary means? Or whether I have to implement my own version of checking for equality?
(If the `==` operator works, why aren't dicts hashable? That is, why can't I create a set() of dicts, or use a dict as a dictionary key?) | 2013/06/20 | [
"https://Stackoverflow.com/questions/17217225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3788/"
] | Python is recursively checking each element of the dictionaries to ensure equality. See the [C `dict_equal()` implementation](http://hg.python.org/cpython/file/6f535c725b27/Objects/dictobject.c#l1839), which checks each and every key and value (provided the dictionaries are the same length); if dictionary `b` has the same key, then a `PyObject_RichCompareBool` tests if the values match too; this is essentially a recursive call.
Dictionaries are not hashable because their [`__hash__` attribute is set to `None`](http://docs.python.org/2/reference/datamodel.html#object.__hash__), and most of all they are *mutable*, which is disallowed when used as a dictionary key.
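To make this concrete (a quick interpreter session; standard CPython behaviour):

```
>>> key = {'a': 1}
>>> {key: 'value'}
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'dict'
```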
If you were to use a dictionary as a key and then change that key through an existing reference, the key would no longer slot to the same position in the hash table. Using another, equal dictionary (be it equal to the unchanged or the changed dictionary) to try and retrieve the value would then no longer work, because the wrong slot would be picked or the key would no longer compare equal. | The dictionaries are equal if they have the same keys and the same values for each corresponding key.
See some examples:
```
dict(a=1, b=2) == dict(a=2, b=1)
False
dict(a=1, b=2) == dict(a=1, b=2, c=0)
False
dict(a=1, b=2) == dict(b=2, a=1)
True
``` | 3,575 |
54,930,121 | I was just wondering, is there any way to convert IUPAC or common molecular names to SMILES? I want to do this without having to manually convert every single one utilizing online systems. Any input would be much appreciated!
For background, I am currently working with python and RDkit, so I wasn't sure if RDkit could do this and I was just unaware. My current data is in the csv format.
Thank you! | 2019/02/28 | [
"https://Stackoverflow.com/questions/54930121",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11131756/"
] | OPSIN (<https://opsin.ch.cam.ac.uk/>) is another solution for name2structure conversion.
It can be used by installing the CLI, or via <https://github.com/gorgitko/molminer>
(OPSIN is used by the RDKit KNIME nodes also) | The accepted answer uses the [Chemical Identifier Resolver](https://cactus.nci.nih.gov/chemical/structure), but for some reason the website seems to be buggy for me and the API seems to be messed up.
So another way to convert SMILES to an IUPAC name is with the PubChem Python API, which works if your SMILES is in their database,
e.g.
```
#!/usr/bin/env python
import sys
import pubchempy as pcp
smiles = str(sys.argv[1])
print(smiles)
s= pcp.get_compounds(smiles,'smiles')
print(s[0].iupac_name)
``` | 3,578 |
46,289,914 | I have Python code as follows.
jv\_list is populated from a result set retrieved from a DB query.
```
jv_list = list(result.get_points())
print(jv_list)
```
I am printing jv\_list and it gives me the output below.
```
[{u'in': u'19834', u'length-bytes': u'79923888', u'run-time': u'1h50m43.489993955s', u'time': u'2017-09-08T21:20:39.846582783Z'}]
```
How can I get the division result, which is the second value divided by the first, i.e. `79923888 / 19834`? | 2017/09/19 | [
"https://Stackoverflow.com/questions/46289914",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3597746/"
] | Try this sequence of commands
```
>>> X = [{u'in': u'19834', u'length-bytes': u'79923888', u'run-time': u'1h50m43.489993955s', u'time': u'2017-09-08T21:20:39.846582783Z'}]
>>> for x in X:
... x['answer'] = float(x['length-bytes'])/float(x['in'])
...
>>> X
[{'in': '19834', 'length-bytes': '79923888', 'run-time': '1h50m43.489993955s', 'time': '2017-09-08T21:20:39.846582783Z', 'answer': 4029.64041544822}]
``` | You can't reliably do that, dictionaries are unordered.
With this data structure, you will need to address the elements via their keys,
that is `float(jv_list[0]['length-bytes']) / float(jv_list[0]['in'])` (the values are strings, so convert them first) for each pair of elements you want to divide by each other. | 3,588 |
30,702,519 | I am trying to run a piece of Python code in a Bash script, so I wanted to understand the difference between:
```
#!/bin/bash
#your bash code
python -c "
#your py code
"
```
vs
```
python - <<DOC
#your py code
DOC
```
I checked the web but couldn't piece together the bits around the topic. Do you think one is better than the other?
If you wanted to return a value from the Python code block to your Bash script, is a heredoc the only way? | 2015/06/08 | [
"https://Stackoverflow.com/questions/30702519",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3598271/"
] | The main flaw of using a here document is that the script's standard input will be the here document. So if you have a script which wants to process its standard input, `python -c` is pretty much your only option.
On the other hand, using `python -c '...'` ties up the single-quote for the shell's needs, so you can only use double-quoted strings in your Python script; using double-quotes instead to protect the script from the shell introduces additional problems (strings in double-quotes undergo various substitutions, whereas single-quoted strings are literal in the shell).
As an aside, notice that you probably want to single-quote the here-doc delimiter, too, otherwise the Python script is subject to similar substitutions.
```
python - <<'____HERE'
print("""Look, we can have double quotes!""")
print('And single quotes! And `back ticks`!')
print("$(and what looks to the shell like process substitutions and $variables!)")
____HERE
```
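To get a value computed by the Python block back into the shell, command substitution works with either form (a minimal sketch):

```
result=$(python - <<'____HERE'
print(21 * 2)
____HERE
)
echo "Python said: $result"
```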
As an alternative, escaping the delimiter works identically, if you prefer that (`python - <<\____HERE`). | If you prefer to use `python -c '...'` without having to escape double quotes, you can first load the code into a bash variable using a here-document:
```
read -r -d '' CMD << '--END'
print ("'quoted'")
--END
python -c "$CMD"
```
The python code is loaded verbatim into the CMD variable and there's no need to escape double quotes. | 3,595 |
52,810,422 | I am going through the book "Malware Data Science: Attack Detection and Attribution". In chapter one I use the pefile Python module to check the AddressOfEntryPoint.
I found that the sample ircbot.exe's AddressOfEntryPoint is 0xCC00FFEE when I do pe.dump\_info(). This value is quite large and looks wrong.
[ircbot.exe's OPTIONAL Header](https://i.stack.imgur.com/X5ez7.png)
md5: 17fa7ec63b129f171511a9f96f90d0d6
How do I fix this AddressOfEntryPoint? | 2018/10/15 | [
"https://Stackoverflow.com/questions/52810422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5742815/"
Try this:
```
select * from yourtable where yourcolumn like '%''%'
``` | I hope this solves your problem
% **'** % -> Finds any values that have " **'** " in any position
By executing the query below you will get the rows which have ' (single quote) in them:
```
SELECT * FROM TABLE_NAME WHERE COLUMN_NAME LIKE "%'%"
```
I have executed the same query: [enter image description here](https://i.stack.imgur.com/vQR6q.jpg) | 3,600 |
15,417,574 | For Python/pandas I find that df.to\_csv(fname) works at a speed of ~1 million rows per minute. I can sometimes improve performance by a factor of 7 like this:
```
def df2csv(df,fname,myformats=[],sep=','):
"""
# function is faster than to_csv
# 7 times faster for numbers if formats are specified,
# 2 times faster for strings.
# Note - be careful. It doesn't add quotes and doesn't check
# for quotes or separators inside elements
# We've seen output time going down from 45 min to 6 min
# on a simple numeric 4-col dataframe with 45 million rows.
"""
if len(df.columns) <= 0:
return
Nd = len(df.columns)
Nd_1 = Nd - 1
formats = myformats[:] # take a copy to modify it
Nf = len(formats)
# make sure we have formats for all columns
if Nf < Nd:
for ii in range(Nf,Nd):
coltype = df[df.columns[ii]].dtype
ff = '%s'
if coltype == np.int64:
ff = '%d'
elif coltype == np.float64:
ff = '%f'
formats.append(ff)
fh=open(fname,'w')
fh.write(','.join(df.columns) + '\n')
for row in df.itertuples(index=False):
ss = ''
for ii in xrange(Nd):
ss += formats[ii] % row[ii]
if ii < Nd_1:
ss += sep
fh.write(ss+'\n')
fh.close()
aa=DataFrame({'A':range(1000000)})
aa['B'] = aa.A + 1.0
aa['C'] = aa.A + 2.0
aa['D'] = aa.A + 3.0
timeit -r1 -n1 aa.to_csv('junk1') # 52.9 sec
timeit -r1 -n1 df2csv(aa,'junk3',myformats=['%d','%.1f','%.1f','%.1f']) # 7.5 sec
```
Note: the increase in performance depends on dtypes, but it is always true (at least in my tests) that to\_csv() performs much slower than non-optimized Python.
If I have a 45 million rows csv file, then:
```
aa = read_csv(infile) # 1.5 min
aa.to_csv(outfile) # 45 min
df2csv(aa,...) # ~6 min
```
Questions:
```
What are the ways to make the output even faster?
What's wrong with to_csv() ? Why is it soooo slow ?
```
Note: my tests were done using pandas 0.9.1 on a local drive on a Linux server. | 2013/03/14 | [
"https://Stackoverflow.com/questions/15417574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1442475/"
] | Lev. Pandas has rewritten `to_csv` to make a big improvement in native speed. The process is now i/o bound, and accounts for many subtle dtype issues and quote cases. Here are our performance results vs. 0.10.1 (in the upcoming 0.11 release). These are in `ms`; a lower ratio is better.
```
Results:
t_head t_baseline ratio
name
frame_to_csv2 (100k) rows 190.5260 2244.4260 0.0849
write_csv_standard (10k rows) 38.1940 234.2570 0.1630
frame_to_csv_mixed (10k rows, mixed) 369.0670 1123.0412 0.3286
frame_to_csv (3k rows, wide) 112.2720 226.7549 0.4951
```
So throughput for a single dtype (e.g. floats), not too wide, is about 20M rows/min; here is your example from above.
```
In [12]: df = pd.DataFrame({'A' : np.array(np.arange(45000000),dtype='float64')})
In [13]: df['B'] = df['A'] + 1.0
In [14]: df['C'] = df['A'] + 2.0
In [15]: df['D'] = df['A'] + 2.0
In [16]: %timeit -n 1 -r 1 df.to_csv('test.csv')
1 loops, best of 1: 119 s per loop
``` | Your `df2csv` function is very nice, except it makes a lot of assumptions and doesn't work for the general case.
If it works for you, that's good, but be aware that it is not a general solution. CSV can contain commas, so what happens if there is this tuple to be written? `('a,b','c')`
The Python `csv` module would quote that value so that no confusion arises, and would escape quotes if quotes are present in any of the values. Of course, generating something that works in all cases is much slower. But I suppose you only have a bunch of numbers.
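For instance, a quick sketch of that quoting behaviour:

```
import csv, sys

w = csv.writer(sys.stdout)
w.writerow(('a,b', 'c'))   # prints: "a,b",c  -- the embedded comma is quoted
```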
You could try this and see if it is faster:
```
#data is a tuple containing tuples
for row in data:
for col in xrange(len(row)):
f.write('%d' % row[col])
if col < len(row)-1:
f.write(',')
f.write('\n')
```
I don't know if that would be faster. If not it's because too many system calls are done, so you might use `StringIO` instead of direct output and then dump it to a real file every once in a while. | 3,605 |
34,336,040 | I am trying to extract the ranking text number from this link [link example: kaggle user ranking no1](https://www.kaggle.com/titericz). More clearly, in an image:
[](https://i.stack.imgur.com/sClUu.png)
I am using the following code:
```
def get_single_item_data(item_url):
sourceCode = requests.get(item_url)
plainText = sourceCode.text
soup = BeautifulSoup(plainText)
for item_name in soup.findAll('h4',{'data-bind':"text: rankingText"}):
print(item_name.string)
item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
```
The result is `None`. The problem is that `soup.findAll('h4',{'data-bind':"text: rankingText"})` outputs:
`[<h4 data-bind="text: rankingText"></h4>]`
but in the html of the link when inspecting this is like:
`<h4 data-bind="text: rankingText">1st</h4>`. It can be seen in the image:
[](https://i.stack.imgur.com/8i76M.png)
It's clear that the text is missing. How can I get around that?
Edit:
Printing the `soup` variable in the terminal I can see that this value exists:
[](https://i.stack.imgur.com/BFyuz.png)
So there should be a way to access it through `soup`.
Edit 2: I tried unsuccessfully to use the most voted answer from this [stackoverflow question](https://stackoverflow.com/questions/24118337/fetch-data-of-variables-inside-script-tag-in-python-or-content-added-from-js). Could be a solution around there. | 2015/12/17 | [
"https://Stackoverflow.com/questions/34336040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4157666/"
] | The data is databound using javascript, as the "data-bind" attribute suggests.
However, if you download the page with e.g. `wget`, you'll see that the rankingText value is actually there inside this script element on initial load:
```
<script type="text/javascript"
profile: {
...
"ranking": 96,
"rankingText": "96th",
"highestRanking": 3,
"highestRankingText": "3rd",
...
```
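In practice you can pull that value straight out of the downloaded HTML (a minimal sketch, reusing `item_url` and `requests` from the question, and assuming the field still appears verbatim in the page source):

```
import re
import requests

html = requests.get(item_url).text
# grab the "rankingText" field embedded in the page's script block
m = re.search(r'"rankingText"\s*:\s*"([^"]+)"', html)
if m:
    print(m.group(1))
```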
So you could use that instead. | This could be because of dynamic data filling.
Some JavaScript code fills this tag after the page loads. Thus, if you fetch the HTML using requests, it is not filled yet.
```
<h4 data-bind="text: rankingText"></h4>
```
Please take a look at [Selenium web driver](http://www.seleniumhq.org/projects/webdriver/). Using this driver you can fetch the complete page, with JS running as normal. | 3,614 |
46,465,389 | I have a Python string like this:
```
input_str = "2548,0.8987,0.8987,0.1548"
```
I want to remove the sub-string at the end after the last comma, including the comma itself.
The output string should look like this;
```
output_str = "2548,0.8987,0.8987"
```
I am using python v3.6 | 2017/09/28 | [
"https://Stackoverflow.com/questions/46465389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] | With `split` and `join`
=======================
```
','.join(input_str.split(',')[:-1])
```
### Explanation
```
# Split string by the commas
>>> input_str.split(',')
['2548', '0.8987', '0.8987', '0.1548']
# Take all but last part
>>> input_str.split(',')[:-1]
['2548', '0.8987', '0.8987']
# Join the parts with commas
>>> ','.join(input_str.split(',')[:-1])
'2548,0.8987,0.8987'
```
---
With `rsplit`
=============
```
input_str.rsplit(',', maxsplit=1)[0]
```
---
With `re`
=========
```
re.sub(r',[^,]*$', '', input_str)
```
If you are going to use it multiple times, make sure to compile the regex:
```
LAST_ELEMENT_REGEX = re.compile(r',[^,]*$')
LAST_ELEMENT_REGEX.sub('', input_str)
``` | There's [the split function](https://www.tutorialspoint.com/python/string_split.htm) for Python:
```
print input_str.split(',')
```
Will return :
```
['2548', '0.8987', '0.8987', '0.1548']
```
But to split only once, at the last comma, [rsplit is here for that](https://docs.python.org/2/library/stdtypes.html#str.rsplit):
```
str = '123,456,789'
print str.rsplit(',', 1)
```
Will return :
```
['123,456','789']
``` | 3,620 |
71,310,217 | My goal is to use the GPU (GeForce GTX 850M). I have tried following this guide (<https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/>). TensorFlow 2.8 is installed, and Keras as well. But when I execute the test code, the output is as below. It seems the code does not or cannot use the GPU. What went wrong?
```
(my_tf) PS C:\WINDOWS\system32> python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
2022-03-01 23:30:30.626829: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
```
※ Below are all the installed packages (Name/Version/Build/Channel) at my C:\Anaconda3
```
_ipyw_jlab_nb_ext_conf 0.1.0 py38_0
_py-xgboost-mutex 2.0 cpu_0
absl-py 0.15.0 pyhd3eb1b0_0
anaconda-client 1.9.0 py38haa95532_0
anaconda-navigator 2.1.2 py38haa95532_0
anyio 3.5.0 py38haa95532_0
argon2-cffi 21.3.0 pyhd3eb1b0_0
argon2-cffi-bindings 21.2.0 py38h2bbff1b_0
astunparse 1.6.3 pypi_0 pypi
attrs 21.4.0 pyhd3eb1b0_0
babel 2.9.1 pyhd3eb1b0_0
backcall 0.2.0 pyhd3eb1b0_0
backports 1.1 pyhd3eb1b0_0
backports.functools_lru_cache 1.6.4 pyhd3eb1b0_0
backports.tempfile 1.0 pyhd3eb1b0_1
backports.weakref 1.0.post1 py_1
beautifulsoup4 4.10.0 pyh06a4308_0
blas 1.0 mkl
bleach 4.1.0 pyhd3eb1b0_0
bottleneck 1.3.2 py38h2a96729_1
brotli 1.0.9 ha925a31_2
brotlipy 0.7.0 py38h2bbff1b_1003
bs4 0.0.1 pypi_0 pypi
bzip2 1.0.8 he774522_0
ca-certificates 2021.10.26 haa95532_4
cachetools 5.0.0 pypi_0 pypi
certifi 2021.10.8 py38haa95532_2
cffi 1.15.0 py38h2bbff1b_1
chardet 4.0.0 py38haa95532_1003
charset-normalizer 2.0.4 pyhd3eb1b0_0
chime 0.6.0 pypi_0 pypi
click 8.0.3 pyhd3eb1b0_0
clyent 1.2.2 py38_1
colorama 0.4.4 pyhd3eb1b0_0
conda 4.11.0 py38haa95532_0
conda-build 3.21.7 py38haa95532_1
conda-content-trust 0.1.1 pyhd3eb1b0_0
conda-env 2.6.0 haa95532_1
conda-package-handling 1.7.3 py38h8cc25b3_1
conda-repo-cli 1.0.4 pyhd3eb1b0_0
conda-token 0.3.0 pyhd3eb1b0_0
conda-verify 3.4.2 py_1
console_shortcut 0.1.1 4
cryptography 3.4.8 py38h71e12ea_0
cupy 7.0.0 pypi_0 pypi
cycler 0.11.0 pyhd3eb1b0_0
datetime 4.3 pypi_0 pypi
debugpy 1.5.1 py38hd77b12b_0
decorator 5.1.1 pyhd3eb1b0_0
defusedxml 0.7.1 pyhd3eb1b0_0
descartes 1.1.0 pyhd3eb1b0_4
entrypoints 0.3 py38_0
fastrlock 0.8 pypi_0 pypi
filelock 3.4.2 pyhd3eb1b0_0
flatbuffers 2.0 pypi_0 pypi
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.10.4 hd328e21_0
future 0.18.2 py38_1
gast 0.5.3 pyhd3eb1b0_0
glob2 0.7 pyhd3eb1b0_0
google-auth 2.6.0 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
graphviz 2.38 hfd603c8_2
grpcio 1.44.0 pypi_0 pypi
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha925a31_3
idna 3.3 pyhd3eb1b0_0
imap-tools 0.34.0 pypi_0 pypi
importlib-metadata 4.11.1 pypi_0 pypi
importlib_metadata 4.8.2 hd3eb1b0_0
intel-openmp 2021.4.0 haa95532_3556
ipykernel 6.4.1 py38haa95532_1
ipython 7.31.1 py38haa95532_0
ipython_genutils 0.2.0 pyhd3eb1b0_1
ipywidgets 7.6.5 pyhd3eb1b0_1
jedi 0.18.1 py38haa95532_1
jinja2 2.11.3 pyhd3eb1b0_0
joblib 1.1.0 pyhd3eb1b0_0
jpeg 9d h2bbff1b_0
json5 0.9.6 pyhd3eb1b0_0
jsonschema 3.2.0 pyhd3eb1b0_2
jupyter_client 7.1.2 pyhd3eb1b0_0
jupyter_core 4.9.1 py38haa95532_0
jupyter_server 1.13.5 pyhd3eb1b0_0
jupyterlab 3.2.9 pyhd3eb1b0_0
jupyterlab_pygments 0.1.2 py_0
jupyterlab_server 2.10.3 pyhd3eb1b0_1
jupyterlab_widgets 1.0.0 pyhd3eb1b0_1
keras 2.8.0 pypi_0 pypi
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.3.2 py38hd77b12b_0
libarchive 3.4.2 h5e25573_0
libclang 13.0.0 pypi_0 pypi
libiconv 1.15 h1df5818_7
liblief 0.10.1 ha925a31_0
libpng 1.6.37 h2a8f88b_0
libtiff 4.2.0 hd0e1b90_0
libwebp 1.2.2 h2bbff1b_0
libxgboost 1.5.0 hd77b12b_1
libxml2 2.9.12 h0ad7f3c_0
lz4-c 1.9.3 h2bbff1b_1
markdown 3.3.6 pypi_0 pypi
markupsafe 2.0.1 py38h2bbff1b_0
matplotlib 3.5.1 py38haa95532_0
matplotlib-base 3.5.1 py38hd77b12b_0
matplotlib-inline 0.1.2 pyhd3eb1b0_2
menuinst 1.4.18 py38h59b6b97_0
mistune 0.8.4 py38he774522_1000
mizani 0.7.3 pyhd8ed1ab_0 conda-forge
mkl 2021.4.0 haa95532_640
mkl-service 2.4.0 py38h2bbff1b_0
mkl_fft 1.3.1 py38h277e83a_0
mkl_random 1.2.2 py38hf11a4ad_0
mouseinfo 0.1.3 pypi_0 pypi
multitasking 0.0.10 pypi_0 pypi
munkres 1.1.4 py_0
navigator-updater 0.2.1 py38_1
nbclassic 0.3.5 pyhd3eb1b0_0
nbclient 0.5.11 pyhd3eb1b0_0
nbconvert 6.1.0 py38haa95532_0
nbformat 5.1.3 pyhd3eb1b0_0
nest-asyncio 1.5.1 pyhd3eb1b0_0
notebook 6.4.8 py38haa95532_0
numexpr 2.8.1 py38hb80d3ca_0
numpy 1.22.2 pypi_0 pypi
numpy-base 1.21.5 py38hc2deb75_0
oauthlib 3.2.0 pypi_0 pypi
olefile 0.46 pyhd3eb1b0_0
openssl 1.1.1m h2bbff1b_0
opt-einsum 3.3.0 pypi_0 pypi
packaging 21.3 pyhd3eb1b0_0
palettable 3.3.0 pyhd3eb1b0_0
pandas 1.4.1 py38hd77b12b_0
pandas-datareader 0.10.0 pypi_0 pypi
pandocfilters 1.5.0 pyhd3eb1b0_0
parso 0.8.3 pyhd3eb1b0_0
patsy 0.5.2 py38haa95532_1
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 8.4.0 py38hd45dc43_0
pip 22.0.3 pypi_0 pypi
pkginfo 1.8.2 pyhd3eb1b0_0
plotnine 0.8.0 pyhd8ed1ab_0 conda-forge
powershell_shortcut 0.0.1 3
prometheus_client 0.13.1 pyhd3eb1b0_0
prompt-toolkit 3.0.20 pyhd3eb1b0_0
protobuf 3.19.4 pypi_0 pypi
psutil 5.8.0 py38h2bbff1b_1
py-lief 0.10.1 py38ha925a31_0
py-xgboost 1.5.0 py38haa95532_1
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pyautogui 0.9.52 pypi_0 pypi
pybithumb 1.0.21 pypi_0 pypi
pycosat 0.6.3 py38h2bbff1b_0
pycparser 2.21 pyhd3eb1b0_0
pygetwindow 0.0.9 pypi_0 pypi
pygments 2.11.2 pyhd3eb1b0_0
pyjwt 2.1.0 py38haa95532_0
pykorbit 0.1.10 pypi_0 pypi
pymsgbox 1.0.9 pypi_0 pypi
pyopenssl 19.1.0 py38_0
pyparsing 3.0.4 pyhd3eb1b0_0
pyperclip 1.8.1 pypi_0 pypi
pyqt 5.9.2 py38hd77b12b_6
pyqt5 5.15.4 pypi_0 pypi
pyqt5-qt5 5.15.2 pypi_0 pypi
pyqt5-sip 12.9.0 pypi_0 pypi
pyqtchart 5.15.4 pypi_0 pypi
pyqtchart-qt5 5.15.2 pypi_0 pypi
pyrect 0.1.4 pypi_0 pypi
pyrsistent 0.18.0 py38h196d8e1_0
pyscreeze 0.1.26 pypi_0 pypi
pysocks 1.7.1 py38haa95532_0
python 3.8.12 h6244533_0
python-dateutil 2.8.2 pyhd3eb1b0_0
python-graphviz 0.16 pyhd3eb1b0_1
python-libarchive-c 2.9 pyhd3eb1b0_1
pytweening 1.0.3 pypi_0 pypi
pytz 2021.3 pyhd3eb1b0_0
pyupbit 0.2.21 pypi_0 pypi
pywin32 302 py38h827c3e9_1
pywinauto 0.6.8 pypi_0 pypi
pywinpty 2.0.2 py38h5da7b33_0
pyyaml 6.0 py38h2bbff1b_1
pyzmq 22.3.0 py38hd77b12b_2
qt 5.9.7 vc14h73c81de_0
qtpy 1.11.2 pyhd3eb1b0_0
requests 2.27.1 pyhd3eb1b0_0
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.8 pypi_0 pypi
ruamel_yaml 0.15.100 py38h2bbff1b_0
scikit-learn 1.0.2 py38hf11a4ad_1
scipy 1.7.3 py38h0a974cb_0
selenium 3.141.0 pypi_0 pypi
send2trash 1.8.0 pyhd3eb1b0_1
setuptools 58.0.4 py38haa95532_0
sip 4.19.13 py38hd77b12b_0
six 1.16.0 pyhd3eb1b0_1
sniffio 1.2.0 py38haa95532_1
soupsieve 2.3.1 pyhd3eb1b0_0
sqlite 3.37.2 h2bbff1b_0
statsmodels 0.13.0 py38h2bbff1b_0
tensorboard 2.8.0 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tensorflow 2.8.0 pypi_0 pypi
tensorflow-io-gcs-filesystem 0.24.0 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
terminado 0.13.1 py38haa95532_0
testpath 0.5.0 pyhd3eb1b0_0
tf-estimator-nightly 2.8.0.dev2021122109 pypi_0 pypi
threadpoolctl 2.2.0 pyh0d69192_0
tk 8.6.11 h2bbff1b_0
tornado 6.1 py38h2bbff1b_0
tqdm 4.62.3 pyhd3eb1b0_1
traitlets 5.1.1 pyhd3eb1b0_0
typing-extensions 3.10.0.2 hd3eb1b0_0
typing_extensions 3.10.0.2 pyh06a4308_0
ujson 4.0.2 py38hd77b12b_0
urllib3 1.26.8 pyhd3eb1b0_0
utils 1.0.1 pypi_0 pypi
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
wcwidth 0.2.5 pyhd3eb1b0_0
webencodings 0.5.1 py38_1
websocket-client 0.58.0 py38haa95532_4
websockets 9.1 pypi_0 pypi
wheel 0.37.1 pyhd3eb1b0_0
widgetsnbextension 3.5.2 py38haa95532_0
win_inet_pton 1.1.0 py38haa95532_0
wincertstore 0.2 py38haa95532_2
winpty 0.4.3 4
xgboost 1.5.0 py38haa95532_1
xz 5.2.5 h62dcd97_0
yaml 0.2.5 he774522_0
yfinance 0.1.66 pypi_0 pypi
zipp 3.7.0 pyhd3eb1b0_0
zlib 1.2.11 h8cc25b3_4
zstd 1.4.9 h19a0ad4_0
``` | 2022/03/01 | [
"https://Stackoverflow.com/questions/71310217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5554839/"
] | The Anaconda guide looks outdated, because since TensorFlow 2.0, tensorflow and tensorflow-gpu have been merged into one.
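As a quick check (a small sketch using the standard TensorFlow API), you can ask TensorFlow what it actually sees:

```
import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))  # empty list => no usable GPU
print(tf.test.is_built_with_cuda())            # False => this build has no CUDA support
```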
If you are using TensorFlow 2, I would recommend this guide, <https://www.tensorflow.org/install/gpu>, which has worked for me (do not download the latest cuDNN SDK version, which is buggy; use the one indicated in the tutorial). | In this guide, <https://www.tensorflow.org/install/source_windows>, there is a version configuration table. It helped. | 3,627 |
9,430,644 | ```
class test:
def __init__(self):
test_dict = {'1': 'one', '2': 'two'}
def test_function(self):
print self.test_dict
if __name__ == '__main__':
t = test()
print t.test_dict
```
Error:
```
AttributeError: test instance has no attribute 'test_dict'
```
Also, if I execute `t.test_function()` instead of `print t.test_dict`, the error occurs too:
```
AttributeError: test instance has no attribute 'test_dict'
```
Why? I have defined test\_dict in the `__init__` function, so it should be initialized for each instance. Why does Python tell me it cannot find the dict? | 2012/02/24 | [
"https://Stackoverflow.com/questions/9430644",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1031769/"
] | You forgot `self`.
Change this:
```
def __init__(self):
test_dict = {'1': 'one', '2': 'two'}
```
with:
```
def __init__(self):
self.test_dict = {'1': 'one', '2': 'two'}
```
`self` is your *instance* inside the methods of your class. That is not because `self` is a special keyword, but because `self` is the word usually chosen for a method's first argument.
If you want to know more about `self`, there's a good answer [here](https://stackoverflow.com/a/625098/1132524).
At the end notice that you got an `AttributeError` when you tried to call
```
t.test_dict
```
because the `test_dict` attribute was not defined. | You made an error in your `__init__`. This:
```
def __init__(self):
test_dict = {'1': 'one', '2': 'two'}
```
Should be:
```
def __init__(self):
self.test_dict = {'1': 'one', '2': 'two'}
``` | 3,628 |
13,174,127 | I want to be able to capture the value of an HTML attribute with a Python regexp.
Currently I use:
```
re.compile( r'=(["\'].*?["\'])', re.IGNORECASE | re.DOTALL )
```
My problem is that I want the regular expression to "remember" whether the attribute started with a single or a double quote.
I found the bug in my current approach with the following attribute
```
href="javascript:foo('bar')"
```
my regex catches
```
"javascript:foo('
``` | 2012/11/01 | [
"https://Stackoverflow.com/questions/13174127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1616042/"
] | You can capture the first quote and then use a backreference:
```
r'=((["\']).*?\2)'
```
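For example, a quick sketch applying it to the attribute from the question:

```
import re

pattern = re.compile(r'=((["\']).*?\2)', re.IGNORECASE | re.DOTALL)
print(pattern.findall('href="javascript:foo(\'bar\')"'))
# [('"javascript:foo(\'bar\')"', '"')]
```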
However, regular expressions are [not the proper approach to parsing HTML](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454). You should consider using a DOM parser instead. | The following would be more efficient in theory:
```
regex = r'"[^"]*"|\'[^\']*\''
```
For reference, here's Jeffrey Friedl's [expression](http://regex.info/dlisting.cgi?ed=3&id=36481) for HTML tags (from the owl book):
```
< # Opening "<"
( # Any amount of . . .
"[^"]*" # double-quoted string,
| # or . . .
'[^']*' # single-quoted string,
| # or . . .
[^'">] # "other stuff"
)* #
> # Closing ">"
``` | 3,630 |
21,067,443 | I can use the following code to turn a string into a variable and then call functions of the library that was previously imported.
```
>>> import sys
>>> x = 'sys'
>>> globals()[x]
<module 'sys' (built-in)>
>>> globals()[x].__doc__
```
Without first importing the module, I have the string-to-variable lookup, but I can't use the same `globals()[var]` syntax with `import`:
```
>>> y = 'os'
>>> globals()[y]
<module 'os' from '/usr/lib/python2.7/os.pyc'>
>>> import globals()[y]
File "<stdin>", line 1
import globals()[y]
^
SyntaxError: invalid syntax
>>> z = globals()[y]
>>> import z
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named z
```
**Is it possible to take a string input and import the library that has the same name as the string input?** If so, how?
@ndpu and @paulobu have answered that `__import__()` allows accessing a library as a variable from a string. But is there a problem with using the same variable name rather than an alternate one for the library? E.g.:
```
>>> x = 'sys'
>>> sys = __import__(x)
``` | 2014/01/11 | [
"https://Stackoverflow.com/questions/21067443",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610569/"
] | Most Python coders prefer using [`importlib.import_module`](http://docs.python.org/2.7/library/importlib.html#importlib.import_module) instead of `__import__`:
```
>>> from importlib import import_module
>>> mod = raw_input(":")
:sys
>>> sys = import_module(mod)
>>> sys
<module 'sys' (built-in)>
>>> sys.version_info # Just to demonstrate
sys.version_info(major=2, minor=7, micro=5, releaselevel='final', serial=0)
>>>
```
You can read about the preference of `importlib.import_module` over `__import__` [here](http://docs.python.org/2.7/library/functions.html#__import__). | You are looking for [`__import__`](http://docs.python.org/2/library/functions.html#__import__) built-in function:
```
__import__(globals()[y])
```
Basic usage:
```
>>>math = __import__('math')
>>>print math.e
2.718281828459045
```
You can also look into `importlib.import_module` as suggested in another answer and in the `__import__`'s documentation. | 3,631 |
46,963,157 | I'm trying to implement an efficient way of creating a frequency table in Python, with a rather large numpy input array of `~30 million` entries. Currently I am using a `for-loop`, but it's taking far too long.
The input is an ordered `numpy array` of the form
```
Y = np.array([4, 4, 4, 6, 6, 7, 8, 9, 9, 9..... etc])
```
And I would like to have an output of the form:
```
Z = {4:3, 5:0, 6:2, 7:1,8:1,9:3..... etc} (as any data type)
```
Currently I am using the following implementation:
```
Z = pd.Series(index = np.arange(Y.min(), Y.max()))
for i in range(Y.min(), Y.max()):
Z[i] = (Y == i).sum()
```
Is there a quicker way of doing this or a way without `iterating` through a loop? Thanks for helping, and sorry if this has been asked before! | 2017/10/26 | [
"https://Stackoverflow.com/questions/46963157",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8840045/"
] | You can simply do this using Counter from collections module. Please see the below code i ran for your test case.
```
import numpy as np
from collections import Counter
Y = np.array([4, 4, 4, 6, 6, 7, 8, 9, 9, 9,10,5,5,5])
print(Counter(Y))
```
It gave the following output
```
Counter({4: 3, 9: 3, 5: 3, 6: 2, 7: 1, 8: 1, 10: 1})
```
you can easily use this object for further. I hope this helps. | I think numpy.unique is your solution.
<https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.unique.html>
```
import numpy as np
t = np.random.randint(0, 1000, 100000000)
print(np.unique(t, return_counts=True))
```
This takes ~4 seconds for me.
The collections.Counter approach takes ~10 seconds.
But the numpy.unique returns the frequencies in an array and the collections.Counter returns a dictionary. It's up to convenience.
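If you want a dict like in the question, a small sketch bridging the two:

```
values, counts = np.unique(t, return_counts=True)
freq = dict(zip(values.tolist(), counts.tolist()))
```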
Edit. I cannot comment on other posts so I'll write here that @lomereiters solution is lightning fast (linear) and should be the accepted one. | 3,634 |
36,433,011 | At this moment I have a game that drops falling colored blocks (*obstacles*) from the top of the screen, and the objective is for the player to dodge said (*obstacles*) by moving either left or right.
I currently have it set up so that every time the user runs the script, the blocks will be a different color. The problem is, they will **only** be that color for the duration of gameplay, and for the color to be different, the user has to exit and re-run the script.
The code I have for this:
```
col1 = randint(1, 255)
col2 = randint(1, 255)
col3 = randint(1, 255)
block_color = (col1, col2, col3)
```
Once the script is executed, a random color is defined by the three randints above, and its applied later in the script.
I'm looking for advice on how I might be able to change the color of **every single block that falls**.
So, for example, one block falls and its randcolor is red, and then the second block falls and its randcolor is blue, etc.
I imagine it would function along the lines of defining 3 random integers every time a block falls and applying those three RGB values to the new block. I just cannot figure out how to actually write that in Python.
Any help would be greatly appreciated. Thank you. | 2016/04/05 | [
"https://Stackoverflow.com/questions/36433011",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3777680/"
] | Add relative positioning to the container and absolute positioning to the icon
```
#menui {
left: 0;
position:absolute;
}
.smalltop {
background-color: #FFF;
list-style-type: none;
margin: 0 auto;
position:relative;
}
```
**[jsFiddle example](https://jsfiddle.net/j08691/pzLbruhx/1/)** | Just give property `float:left` to `#menui`, and see the result | 3,636 |
71,775,713 | I'm trying to do some interesting integration problems for my Calculus I students under Anaconda Python 3.8.5 and SymPy version 1.9.
Question 1 is:
integrate(sin(m \* x) \* cos(n \* x), x)
[](https://i.stack.imgur.com/lOzzI.png)
where x is the integration variable and m and n are two real constants that are unequal and non-complementary (m != -n).
Question 2 is: integrate((a \*\* 2 - x \*\* 2) \*\* (1/2), x)
[](https://i.stack.imgur.com/XCTsK.png)
where for my Calculus I students we have to assume that |a| > |x|; otherwise they won't even be able to interpret the results.
The following solution works for question 1:
```
m, n = symbols("m n", real=True, nonzero=True)
integrate(sin(m * x)* cos(n * x), x).args[2][0]
```
[](https://i.stack.imgur.com/7hcEI.png)
but for question 2 it obviously gives me more than my Calculus I students can understand:
[](https://i.stack.imgur.com/bWKWD.png)
whereas I only want:
[](https://i.stack.imgur.com/xBuub.png)
instead. Since I already know in question 1 that m != n and m != -n, and in question 2 that |a| > |x|, is there a way to tell SymPy this so that I don't have to dig through the Piecewise stuff (or interpret the complex-range solutions) and can get the answer directly? Thanks. | 2022/04/07 | [
"https://Stackoverflow.com/questions/71775713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8438488/"
] | If you want to say that `m` and `n` are not equal (and get the answer you gave) make one odd and one even:
```
>>> var('m',odd=True)
m
>>> var('n',even=True)
n
>>> integrate(sin((m) * x)* cos(n * x), x)
-m*cos(m*x)*cos(n*x)/(m**2 - n**2) - n*sin(m*x)*sin(n*x)/(m**2 - n**2)
```
(You get an interesting result if you just make `m,n,p` positive and let `m=n+p`, do the integral, and replace `p` with `m - n` and simplify. Haven't been able to investigate, though.)
If you want `x` > `a`, let's integrate from `a` to `x` with real variables:
```
>>> var('a x',positive=1)
(a, x)
>>> integrate((a ** 2 - x ** 2) ** (S.Half), (x,a,x))
a**2*asin(x/a)/2 - pi*a**2/4 + x*sqrt(a**2 - x**2)/2
```
If you want to get rid of the constant you can do
```
>>> _.as_independent(x)[1] + Symbol("C")
C + a**2*asin(x/a)/2 + x*sqrt(a**2 - x**2)/2
```
This will only change sign if `x < a < 0`, I believe. | First of all, what version of SymPy are you using? You can verify that with:
```py
import sympy as sp
print(sp.__version__)
```
If you are using older versions, maybe the solver is having trouble. The following solution has been tested on SymPy 1.9 and 1.10.1.
```py
# define a, x as ordinary symbols with no assumptions. Sadly, it is not
# possible to make assumptions similar to |a| > |x|.
a, x = symbols("a, x")
# integrate the expression
res = integrate(sqrt(a**2 - x**2), x)
print(type(res))
# Piecewise
```
[](https://i.stack.imgur.com/tpatB.png)
The result is a `Piecewise` object. As you can see, SymPy computed the complete solution. You can then extract the interested piece with the following command:
```py
res.args[1][0]
```
Here, `res.args[1]` extracts the second piece, which is a tuple, `(expr, condition)`. With `res.args[1][0]` we extract the expression from the second piece. | 3,638 |
7,774,740 | This is an extension question of [PHP pass in $this to function outside class](https://stackoverflow.com/questions/7774444/php-pass-in-this-to-function-outside-class)
And I believe this is what I'm looking for but it's in python not php: [Programmatically determining amount of parameters a function requires - Python](https://stackoverflow.com/questions/741950/programatically-determining-amount-of-parameters-a-function-requires-python)
Let's say I have a function like this:
```
function client_func($cls, $arg){ }
```
and when I'm ready to call this function I might do something like this in pseudo code:
```
if function's first parameter equals '$cls', then call client_func(instanceof class, $arg)
else call client_func($arg)
```
So basically, is there a way to lookahead to a function and see what parameter values are required before calling the function?
I guess this would be like `debug_backtrace()`, but the other way around.
`func_get_args()` can only be called from within a function which doesn't help me here.
Any thoughts? | 2011/10/14 | [
"https://Stackoverflow.com/questions/7774740",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/266763/"
] | Use [Reflection](http://php.net/book.reflection), especially [ReflectionFunction](http://php.net/class.reflectionfunction) in your case.
```
$fct = new ReflectionFunction('client_func');
echo $fct->getNumberOfRequiredParameters();
```
As far as I can see you will find [getParameters()](http://php.net/reflectionfunctionabstract.getparameters) useful too | Only way is with reflection by going to <http://us3.php.net/manual/en/book.reflection.php>
```
class foo {
function bar ($arg1, $arg2) {
// ...
}
}
$method = new ReflectionMethod('foo', 'bar');
$num = $method->getNumberOfParameters();
``` | 3,639 |
60,973,894 | [Open Street Map (pyproj). How to solve syntax issue?](https://stackoverflow.com/questions/59596835/open-street-map-pyproj-how-to-solve-syntax-issue)
has a similar question and the answers there did not help me.
I am using the helper class below a few hundred times and my console gets flooded with warnings:
```
/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/crs/crs.py:53: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
return _prepare_from_string(" ".join(pjargs))
```
<https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6>
When I try to follow the hint by using:
```
return transform(Proj('epsg:4326'), Proj('epsg:3857'), lon,lat)
```
**I get some (inf,inf) results in cases where the original code worked. What is the proper way to avoid the syntax error but get the same results?**
* <https://gis.stackexchange.com/questions/164043/how-to-create-a-projection-from-a-crs-string-using-pyproj>
shows the old syntax but no code example for a compatible new statement.
<https://github.com/pyproj4/pyproj/issues/224> states:
```
*What is the preferred way of loading EPSG CRSes now?
use "EPSG:XXXX" in source_crs or target_crs arguments of proj_create_crs_to_crs() when creating a transformation, or as argument of proj_create() to instanciate a CRS object*
```
**What does this mean as a code example?**
```
from pyproj import Proj, transform
class Projection:
@staticmethod
def wgsToXy(lon,lat):
return transform(Proj(init='epsg:4326'), Proj(init='epsg:3857'), lon,lat)
@staticmethod
def pointToXy(point):
xy=point.split(",")
return Projection.wgsToXy(float(xy[0]),float(xy[1]))
``` | 2020/04/01 | [
"https://Stackoverflow.com/questions/60973894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1497139/"
] | In order to keep using the old syntax (feeding the transformer `(Lon,Lat)` pairs) you can use the `always_xy=True` parameter when creating the transformer object:
```py
from pyproj import Transformer
transformer = Transformer.from_crs(4326, 3857, always_xy=True)
points = [
(6.783333, 51.233333), # Dusseldorf
(-122.416389, 37.7775) # San Francisco
]
for pt in transformer.itransform(points):
print(pt)
```
Output
```
(755117.1754412088, 6662671.876828446)
(-13627330.088231295, 4548041.532457043)
``` | This is my current guess for the fix:
```
#e4326=Proj(init='epsg:4326')
e4326=CRS('EPSG:4326')
#e3857=Proj(init='epsg:3857')
e3857=CRS('EPSG:3857')
```
**Projection helper class**
```
from pyproj import Proj, CRS,transform
class Projection:
'''
helper to project lat/lon values to map
'''
#e4326=Proj(init='epsg:4326')
e4326=CRS('EPSG:4326')
#e3857=Proj(init='epsg:3857')
e3857=CRS('EPSG:3857')
@staticmethod
def wgsToXy(lon,lat):
t1=transform(Projection.e4326,Projection.e3857, lon,lat)
#t2=transform(Proj('epsg:4326'), Proj('epsg:3857'), lon,lat)
return t1
@staticmethod
def pointToXy(point):
xy=point.split(",")
return Projection.wgsToXy(float(xy[0]),float(xy[1]))
``` | 3,640 |
66,514,262 | I want to plot graphs of my CSV file data.
Now I want epoch on the x axis, with the "acc" and "val\_acc" columns plotted on the y axis. I tried the following code, but it gives a blank graph.
```
x = []
y = []
with open('trainSelfVGG.csv','r') as csvfile:
plots = csv.reader(csvfile, delimiter=',')
for row in plots:
x.append('epoch')
y.append('acc')
plt.plot(x,y, label='Loaded from file!')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Accuracy VS Val_acc')
plt.legend()
plt.show()
```
I am new to Python, please help. The data of the CSV file looks like this:
```
epoch  acc          loss         lr        val_acc      val_loss
0 0.712187529 0.923782527 5.00E-05 0.734799922 0.865529358
1 0.746874988 0.845359206 5.00E-05 0.733945608 0.870365739
2 0.739687502 0.853801966 5.00E-05 0.734799922 0.869380653
3 0.734375 0.872551799 5.00E-05 0.734799922 0.818775356
4 0.735000014 0.817328095 5.00E-05 0.744980752 0.782691181
5 0.738125026 0.813450873 5.00E-05 0.743200898 0.756890059
6 0.749842465 0.769637883 5.00E-05 0.746404648 0.761445224
7 0.740312517 0.779146731 5.00E-05 0.750605166 0.74676168
8 0.745937526 0.77233541 5.00E-05 0.738217294 0.754457355
9 0.760239422 0.717389286 5.00E-05 0.756656706 0.719709456
10 0.758437514 0.727203131 5.00E-05 0.753880084 0.766058266
11 0.756562471 0.718854547 5.00E-05 0.764060915 0.699205279
12 0.751874983 0.735785842 5.00E-05 0.76099956 0.711962938
13 0.762187481 0.709208548 5.00E-05 0.762850642 0.701643765
14 0.766250014 0.689858377 5.00E-05 0.771037996 0.698576272
15 0.791562498 0.642151952 5.00E-05 0.775665641 0.674562693
16 0.773750007 0.672213078 5.00E-05 0.77153641 0.683691561
17 0.785312474 0.657182395 5.00E-05 0.778015077 0.670122385
18 0.770951509 0.685499191 5.00E-05 0.774384141 0.670817852
19 0.777812481 0.673273861 5.00E-05 0.785134554 0.652816713
20 0.80250001 0.626691639 5.00E-05 0.783141136 0.66740793
21 0.787500024 0.64432466 5.00E-05 0.788053513 0.651966989
22 0.7890625 0.621332884 5.00E-05 0.775096118 0.663884819
23 0.787500024 0.637105942 5.00E-05 0.785775304 0.657734036
24 0.794580996 0.616357446 5.00E-05 0.771749973 0.670413017
25 0.803717732 0.599221408 5.00E-05 0.788195908 0.64291203
26 0.811874986 0.587966204 5.00E-05 0.791186094 0.653984845
27 0.804062486 0.591458261 5.00E-05 0.792538822 0.642165542
28 0.797187507 0.602103412 5.00E-05 0.78812474 0.635053933
29 0.807187498 0.595692158 5.00E-05 0.77474016 0.661368072
30 0.811909258 0.577990949 5.00E-05 0.774526536 0.668637931
31 0.820625007 0.546454251 5.00E-05 0.783212304 0.650670886
32 0.82593751 0.53596288 5.00E-05 0.778655827 0.651631236
33 0.805608094 0.582103312 5.00E-05 0.792823553 0.635468125
34 0.822621286 0.555304945 5.00E-05 0.783924222 0.647240341
35 0.823125005 0.551530778 5.00E-05 0.783141136 0.662788212
``` | 2021/03/07 | [
"https://Stackoverflow.com/questions/66514262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12673562/"
] | You are correct, a string cannot be returned as an array via the `return` statement. When you pass or return an array, only a pointer to its first element is used. Hence the function `get_name()` returns a pointer to the array defined locally with automatic storage (aka *on the stack*). This is incorrect as this array is discarded as soon as it goes out of scope, ie: when the function returns.
There are several ways for `get_name()` to provide the name to its caller:
* you can pass a destination array and its length: and let the function fill the name in the array, carefully avoiding to write beyond the end of the array but making sure it has a null terminator:
```
char *get_name(char *dest, size_t size, int num) {
if (num == 1) {
snprintf(dest, size, "Jake Peralta");
} else {
snprintf(dest, size, "John Doe");
}
// return the destination pointer for convenience.
return dest;
}
int main() {
char name[30];
int num = 1;
get_name(name, sizeof name, num);
printf("%s\n", name);
return 0;
}
```
* you can allocate memory in `get_name()` and return a pointer to the allocated array where you copy the string. It will be the caller's responsibility to free this object with `free()` when it is no longer used.
```
char *get_name(int num) {
if (num == 1) {
return strdup("Jake Peralta");
} else {
return strdup("John Doe");
}
}
int main() {
int num = 1;
char *name = get_name(num);
printf("%s\n", name);
free(name);
return 0;
}
```
* you can return a constant string, but you can only do this if all names are known at compile time.
```
const char *get_name(int num) {
if (num == 1) {
"Jake Peralta";
} else {
"John Doe";
}
}
int main() {
int num = 1;
const char *name = get_name(num);
printf("%s\n", name);
return 0;
}
You are returning the address of `real_name` from the `get_name` function, which goes out of scope when the function returns. Instead, allocate the string's memory on the heap and return its address. Also, the caller will need to free the heap-allocated string to avoid memory leaks. | 3,641 |
27,088,984 | I am learning Python, so this may be a simple question. I am creating a list of cars and their details as below:
```
car_specs = [("1. Ford Fiesta - Studio", ["3", "54mpg", "Manual", "ยฃ9,995"]),
("2. Ford Focous - Studio", ["5", "48mpg", "Manual", "ยฃ17,295"]),
("3. Vauxhall Corsa STING", ["3", "53mpg", "Manual", "ยฃ8,995"]),
("4. VW Golf - S", ["5", "88mpg", "Manual", "ยฃ17,175"])
]
```
I have then created a part for adding another car as follows:
```
new_name = input("What is the name of the new car?")
new_doors = input("How many doors does it have?")
new_efficency = input("What is the fuel efficency of the new car?")
new_gearbox = input("What type of gearbox?")
new_price = input("How much does the new car cost?")
car_specs.insert(len(car_specs), (new_name[new_doors, new_efficency, new_gearbox, new_price]))
```
It isn't working though and comes up with this error:
```
Would you like to add a new car?(Y/N)Y
What is the name of the new car?test
How many doors does it have?123456
What is the fuel efficency of the new car?23456
What type of gearbox?234567
How much does the new car cost?234567
Traceback (most recent call last):
File "/Users/JagoStrong-Wright/Documents/School Work/Computer Science/car list.py", line 35, in <module>
car_specs.insert(len(car_specs), (new_name[new_doors, new_efficency, new_gearbox, new_price]))
TypeError: string indices must be integers
>>>
```
Anyone's help would be greatly appreciated, thanks. | 2014/11/23 | [
"https://Stackoverflow.com/questions/27088984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4284218/"
] | Just append the tuple to the list making sure to separate new\_name from the list with a `,`:
```
new_name = input("What is the name of the new car?")
new_doors = input("How many doors does it have?")
new_efficency = input("What is the fuel efficency of the new car?")
new_gearbox = input("What type of gearbox?")
new_price = input("How much does the new car cost?")
car_specs.append(("{}. {}".format(len(car_specs) + 1,new_name),[new_doors, new_efficency, new_gearbox, new_price]))
```
I would use a dict to store the data instead:
```
car_specs = {'2. Ford Focous - Studio': ['5', '48mpg', 'Manual', '\xc2\xa317,295'], '1. Ford Fiesta - Studio': ['3', '54mpg', 'Manual', '\xc2\xa39,995'], '3. Vauxhall Corsa STING': ['3', '53mpg', 'Manual', '\xc2\xa38,995'], '4. VW Golf - S': ['5', '88mpg', 'Manual', '\xc2\xa317,175']}
```
Then add new cars using:
```
car_specs["{}. {}".format(len(car_specs)+1,new_name)] = [new_doors, new_efficency, new_gearbox, new_price]
``` | You are not setting the first element go your tuple correctly. You are appending the name to the length of car specs as you expect.
Also new\_name is as string, when you do new\_name[x] your asking python for the x+1th character in that string.
```
new_name = input("What is the name of the new car?")
new_doors = input("How many doors does it have?")
new_efficency = input("What is the fuel efficency of the new car?")
new_gearbox = input("What type of gearbox?")
new_price = input("How much does the new car cost?")
car_specs.append((str(len(car_specs) + 1) + '. - ' + new_name, [new_doors, new_efficency, new_gearbox, new_price]))
``` | 3,643 |
41,474,163 | I was doing Singly-Linked List implementation and I remember Linus Torvalds talking about it [here](https://youtu.be/o8NPllzkFhE?t=890).
In a singly-linked list in order to remove a node we should have access to the previous node and then change the node it is currently pointing to.
Like this
[](https://i.stack.imgur.com/rVCdE.png)
So any way we should have access to the previous node.
But Linus Torvalds removed the special case by using the idea of address in C. So that head also has the 'previous thing' which is the address of head which points to head. So he has used C's feature of pointer and address to remove special case.
The normal code with special case
[](https://i.stack.imgur.com/gvF0j.png)
The code with special case becoming normal case
[](https://i.stack.imgur.com/7XBSb.png)
I think this kind of special-case removal in a singly linked list cannot be done in Python, because we don't have the concept of pointers (and hence I will not be able to go one step before head). Am I right? | 2017/01/04 | [
"https://Stackoverflow.com/questions/41474163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1597944/"
] | Sure you can do this in Python. What he's saying is that you have some data structure that represents the list itself and points to the head of the list, and you manipulate that just as you would the pointer in a list item when you're dealing with the first list item.
Now Python is not C so the implementation would be different, but the principle applies. The list itself is not the same object as its first item, and list items should not have the same methods as the list as a whole, so it makes sense to use separate kinds of objects for them.
Both of them can, however, use an attribute of the same name (e.g. `next`) to point to the next item. So when you iterate through the list, and you are at the first item, the "previous" item is the list itself, and you are manipulating its `next` attribute if you need to remove the first item.
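A minimal sketch of that idea (the class and method names here are my own invention, not from any library):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.next = None  # the list object itself plays the role of the head's "previous thing"

    def remove(self, value):
        prev = self  # start at the list object, not at the first node
        while prev.next is not None:
            if prev.next.value == value:
                prev.next = prev.next.next  # same code path for the head and interior nodes
                return True
            prev = prev.next
        return False
```

With this layout there is no `if node is head` special case: deleting the first item is just another assignment to somebody's `next` attribute.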
In the real world, of course, you would never write your own Python linked list class except as an exercise. The built-in `list` is more efficient. | You cannot use Linus's specific trick in Python, because, as you well know, Python does not have pointers (as such) or an address-of operator. You can still, however, eliminate a special case for the list head by giving the list a dummy head node. You can do that as an inherent part of the design of your list, or you can do it on the fly just by creating an extra node and making it refer to the first data-bearing node as its next node. Either way, all the nodes you might want to delete are then interior nodes, not special cases. | 3,644 |
34,522,741 | A request comes to tornado's GET handler of a web app.
From the `GET` function, a `blocking_task` function is called. This `blocking_task` function has `@run_on_executor` decorator.
But this execution fails.
Could you please help with this? It seems that the motor db is not able to execute in the thread.
```
import time
from concurrent.futures import ThreadPoolExecutor
from tornado import gen, web
from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop
import argparse
from common.config import APIConfig
import sys
import os
import motor
parser = argparse.ArgumentParser()
parser.add_argument("-c", "--config-file", dest='config_file',
help="Config file location")
args = parser.parse_args()
CONF = APIConfig().parse(args.config_file)
client = motor.MotorClient(CONF.mongo_url)
db = client[CONF.mongo_dbname]
class Handler(web.RequestHandler):
executor = ThreadPoolExecutor(10)
def initialize(self):
""" Prepares the database for the entire class """
self.db = self.settings["db"]
@gen.coroutine
def get(self):
self.blocking_task()
@run_on_executor
def blocking_task(self):
mongo_dict = self.db.test_cases.find_one({"name": "Ping"})
if __name__ == "__main__":
app = web.Application([
(r"/", Handler),
],
db=db,
debug=CONF.api_debug_on,
)
app.listen(8888)
IOLoop.current().start()
> ERROR:tornado.application:Exception in callback <functools.partial
> object at 0x7f72dfbe48e8> Traceback (most recent call last): File
> "/usr/local/lib/python2.7/dist-packages/tornado-4.3-py2.7-linux-x86_64.egg/tornado/ioloop.py",
> line 600, in _run_callback
> ret = callback() File "/usr/local/lib/python2.7/dist-packages/tornado-4.3-py2.7-linux-x86_64.egg/tornado/stack_context.py",
> line 275, in null_wrapper
> return fn(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/motor-0.5-py2.7.egg/motor/frameworks/tornado.py",
> line 231, in callback
> child_gr.switch(future.result()) error: cannot switch to a different thread
```
Could you please help with this? | 2015/12/30 | [
"https://Stackoverflow.com/questions/34522741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5722595/"
] | Finally, the following code works. Thank you @kwarunek.
Also added parameters to the callback function.
```
import time
from concurrent.futures import ThreadPoolExecutor
from tornado import gen, web
from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop
import argparse
from common.config import APIConfig
import sys
import os
import motor
parser = argparse.ArgumentParser()
parser.add_argument("-c", "--config-file", dest='config_file',
help="Config file location")
args = parser.parse_args()
CONF = APIConfig().parse(args.config_file)
client = motor.MotorClient(CONF.mongo_url)
db = client[CONF.mongo_dbname]
class Handler(web.RequestHandler):
executor = ThreadPoolExecutor(10)
def initialize(self):
""" Prepares the database for the entire class """
self.db = self.settings["db"]
@gen.coroutine
def get(self):
self.blocking_task("Ping", "Void-R")
@run_on_executor
def blocking_task(self, name, status):
IOLoop.instance().add_callback(callback=lambda: self.some_update(name, status))
@gen.coroutine
def some_update(self, name, status):
mongo_dict = yield self.db.test_cases.find_one({"name": name})
self.db.test_cases.update({ "name": name }, { "$set": { "status" : status } } )
if __name__ == "__main__":
app = web.Application([
(r"/", Handler),
],
db=db,
debug=CONF.api_debug_on,
)
app.listen(8888)
IOLoop.current().start()
``` | From [docs](http://www.tornadoweb.org/en/stable/concurrent.html#tornado.concurrent.run_on_executor)
>
> IOLoop and executor to be used are determined by the io\_loop and
> executor attributes of self. To use different attributes, pass keyword
> arguments to the decorator
>
>
>
You have to provide an initialized ThreadPoolExecutor:
```
import time
from concurrent.futures import ThreadPoolExecutor
from tornado import gen, web
from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop
class Handler(web.RequestHandler):
executor = ThreadPoolExecutor(10)
@gen.coroutine
def get(self):
self.blocking_task()
@run_on_executor
def blocking_task(self):
time.sleep(10)
if __name__ == "__main__":
app = web.Application([
(r"/", Handler),
])
app.listen(8888)
IOLoop.current().start()
```
By default `run_on_executor` searches for the thread pool in the `executor` attribute, unless you pass another one explicitly, e.g.
```
_thread_pool = ThreadPoolExecutor(10)
@run_on_executor(executor='_thread_pool')
def blocking_task(self):
pass
```
**edit**
Basically, an IOLoop should be used in a single-threaded environment (you can run a separate IOLoop on each thread, but that is not your case). To communicate with the IOLoop you should use [add\_callback](http://www.tornadoweb.org/en/stable/ioloop.html#tornado.ioloop.IOLoop.add_callback), which is the only thread-safe function.
You can use like:
```
@run_on_executor
def blocking_task(self):
IOLoop.instance().add_callback(some_update)
@gen.coroutine
def some_update():
db.test_cases.update({ "name": "abc" }, { "$set": { "status" : "xyz" } } )
```
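For comparison, a minimal sketch of the thread-free variant (reusing the `db` handle and handler shape from the question; the `"done"` status value is just illustrative):

```python
class Handler(web.RequestHandler):
    @gen.coroutine
    def get(self):
        # motor returns Futures, so the query can be yielded directly on the IOLoop thread
        doc = yield self.db.test_cases.find_one({"name": "Ping"})
        yield self.db.test_cases.update({"name": "Ping"},
                                        {"$set": {"status": "done"}})
        self.write({"found": doc is not None})
```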
But do you really need threading at all? What is the purpose of a separate thread if you schedule the update on the main (IOLoop) thread anyway? | 3,649
61,257,025 | I'm new to Python and tkinter, and I'm trying to create a tool which loops over a directory every 5 seconds to list all the files.
In my code the filenames in the list appear only after I interrupt the loop.
My goal is to have a start button that kicks off the endless loop listing the files, and a stop button that stops the loop.
```
from tkinter import filedialog
import tkinter as tk
import time
import os
global dateiListe
def browse_button():
global pfad
global dateiname
dateiname = filedialog.askdirectory()
pfad.set(dateiname)
if len(dateiname) > 0:
print( len(dateiname) )
btn_schleifeStart['state'] = tk.NORMAL
else:
print( len(dateiname) )
btn_schleifeStart['state'] = tk.DISABLED
def start_schleife():
    btn_ordnerWählen['state'] = tk.DISABLED
btn_schleifeStart['state'] = tk.DISABLED
while True:
dateiListe = []
for datei in os.listdir(dateiname):
if datei.lower().endswith(('.png', '.jpg', '.jpeg')):
listBox.insert(1, datei)
listBox.insert(2, datei)
print(datei)
time.sleep(5)
root = tk.Tk()
root.geometry("500x400")
pfad = tk.StringVar()
btn_ordnerWählen = tk.Button(text="Ordner wählen", command=browse_button)
btn_schleifeStart = tk.Button(text="Start", command=start_schleife,state=tk.DISABLED)
txt_pfad = tk.Label(master=root,textvariable=pfad, fg="blue")
listBox = tk.Listbox(root)
btn_ordnerWählen.grid(row=0, column=0, sticky="sw")
txt_pfad.grid(row=1, column=0)
btn_schleifeStart.grid(row=3, column=0)
listBox.grid(row=4, column=0)
root.mainloop()
``` | 2020/04/16 | [
"https://Stackoverflow.com/questions/61257025",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10474530/"
] | Unary means one, so what they are talking about is a constructor with a single parameter. The standard name for such a thing is a [conversion constructor](https://stackoverflow.com/questions/15077466/what-is-a-converting-constructor-in-c-what-is-it-for). | Unary refers to one or singular, so a 'Unary constructor' ideally refers to a constructor with a single parameter. | 3,652 |
45,628,813 | Previously I was working without unittests and I had this structure for my project:
```
-main.py
-folderFunctions:
-functionA.py
```
Just using init file in folderFunctions, and importing
```
from folderFunctions import functionA
```
everything was working good.
Now I have also unittests wihch I organized in this way:
```
-main.py
-folderFunctions:
-functionA.py
-folderTest:
-testFunctionA.py
```
So I had to add (in order to run testFunctionA.py) in both functionA.py and testFunctionA.py these 2 lines to import the path:
```
myPath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, myPath + '../..')
```
In this way the test works properly.
But it is ugly to me and I guess also not very pythonic.
Is there a way to make it more elegant? | 2017/08/11 | [
"https://Stackoverflow.com/questions/45628813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5178905/"
] | If you want your library/application to grow and remain easy to package, I highly recommend separating source code from test code, because test code shouldn't be packaged in binary distributions (egg or wheel).
You can follow this tree structure:
```
+-- src/
| +-- main.py
| \-- folder_functions/ # <- Python package
| +-- __init__.py
| \-- function_a.py
\-- tests/
\-- folder_functions/
+-- __init__.py
\-- test_function_a.py
```
Note: according to [PEP8](https://www.python.org/dev/peps/pep-0008/), Python package and module names should be in "snake case" (lowercase + underscores).
The **src** directory can be avoided if you have (and you should have) a main package.
As explained in other comments, the **setup.py** file should stand next to the **src** and **tests** folders (root level).
Read the [Python Packaging User Guide](https://packaging.python.org)
**edit**
The next step is to create a **setup.py**, for instance:
```
from setuptools import find_packages
from setuptools import setup
setup(
name='Your-App',
version='0.1.0',
author='Your Name',
author_email='your@email',
url='URL of your project home page',
description="one line description",
long_description='long description ',
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: Python Software Foundation License',
'Operating System :: MacOS :: MacOS X',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
'Programming Language :: Python',
'Topic :: Software Development',
],
platforms=["Linux", "Windows", "OS X"],
license="MIT License",
keywords="keywords",
packages=find_packages("src"),
package_dir={'': 'src'},
entry_points={
'console_scripts': [
'cmd_name = main:main',
],
})
```
Once your project is configured, you can create a virtualenv and install your application inside:
```
virtualenv your-app
source your-app/bin/activate
pip install -e .
```
You can run your tests with the standard `unittest` module.
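For example, with the tree above (note the `__init__.py` files in the test packages), the standard runner can discover everything in one command:

```
python -m unittest discover -s tests
```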
To import your module in your **test\_function\_a.py**, just proceed as usual:
```
from folder_functions import function_a
``` | The more elegant way is `from folderFunctions.folderTest import testFunctionA` and make sure that you have an `__init__.py` file in the `folderTest` directory. You may also look at this [question](https://stackoverflow.com/questions/8953844/import-module-from-subfolder) | 3,653 |
49,054,768 | I'm going to optimize three variables: `x`, `alpha` and `R`.
`X` is a one-dimensional vector, `alpha` is a two-dimensional array and `R` is a scalar value. How can I maximize this function?
I wrote the code below:
```
#from scipy.optimize import least_squares
from scipy.optimize import minimize
import numpy as np
sentences_lengths =[6, 3]
length_constraint=5
sentences_idx=[0, 1]
sentences_scores=[.1,.2]
damping=1
pairwise_idx=[(0,0),(0,1),(1,0),(1,1)]
overlap_matrix=[[0,.01],[.02,0]]
def func(x, R, alpha, sign=1.0):
""" Objective function """
return sign*(sum(x[i] * sentences_scores[i] for i in sentences_idx) - damping * R * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx))
x0=np.array([1,0])
R0=.1
alpha0=np.array([1,0,0,0])
def func_deriv(x, R, alpha, sign=1.0):
""" Derivative of objective function """
#Partial derivative to x
dfdX = sign*(sum(sentences_scores[i] for i in sentences_idx))
#Partial derivative to R
dfdR= sign*(- damping * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx))
#Partial derivative to alpha
dfdAlpha= sign*(- damping * R * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx))
return [ dfdX, dfdR, dfdAlpha]
cons = ({'type': 'ineq',
## Constraints: one constraint for the size + consistency constraints
#sum(x[i] * sentences_lengths[i] for i in sentences_idx) <= length_constraint
'fun' : lambda x: length_constraint - sum(x[i] * sentences_lengths[i] for i in sentences_idx) ,
'jac' : lambda x: [-sum(sentences_lengths[i] for i in sentences_idx), 0, 0]}
,{'type': 'ineq',
#alpha[i][j] - x[i] <= 0
'fun' : lambda x: [x[i]-alpha[i][j] for i,j in pairwise_idx],
'jac' : lambda x: [1.0, 0.0, -1.0]}
,{'type': 'ineq',
#alpha[i][j] - x[j] <= 0
'fun' : lambda x: [x[j]-alpha[i][j] for i,j in pairwise_idx],
'jac' : lambda x: [1.0, 0.0, -1.0]}
,{'type': 'ineq',
#x[i] + x[j] - alpha[i][j] <= 1
'fun' : lambda x: [1+alpha[i][j]-x[i]-x[j] for i,j in pairwise_idx],
'jac' : lambda x: [-1.0-1.0, 0.0, 1.0]})
res = minimize(func, (x0,R0,alpha0)
, args=(sentences_lengths
,length_constraint
,sentences_idx
,sentences_scores
,damping
,pairwise_idx
,overlap_matrix,)
, jac=func_deriv
, constraints=cons
, method='SLSQP'
, options={'disp': True})
```
I get this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-6-a1a91fdf2d13> in <module>()
55 , constraints=cons
56 , method='SLSQP'
---> 57 , options={'disp': True})
58
59 #res = least_squares(fun, (x,R,alpha), jac=jac, bounds=bounds, args=(sentences_scores, damping,overlap_matrix), verbose=1)
/usr/local/lib/python3.5/dist-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
456 elif meth == 'slsqp':
457 return _minimize_slsqp(fun, x0, args, jac, bounds,
--> 458 constraints, callback=callback, **options)
459 elif meth == 'dogleg':
460 return _minimize_dogleg(fun, x0, args, jac, hess,
/usr/local/lib/python3.5/dist-packages/scipy/optimize/slsqp.py in _minimize_slsqp(func, x0, args, jac, bounds, constraints, maxiter, ftol, iprint, disp, eps, callback, **unknown_options)
305
306 # Transform x0 into an array.
--> 307 x = asfarray(x0).flatten()
308
309 # Set the parameters that SLSQP will need
/usr/local/lib/python3.5/dist-packages/numpy/lib/type_check.py in asfarray(a, dtype)
102 if not issubclass(dtype, _nx.inexact):
103 dtype = _nx.float_
--> 104 return asarray(a, dtype=dtype)
105
106
/usr/local/lib/python3.5/dist-packages/numpy/core/numeric.py in asarray(a, dtype, order)
529
530 """
--> 531 return array(a, dtype, copy=False, order=order)
532
533
ValueError: setting an array element with a sequence.
``` | 2018/03/01 | [
"https://Stackoverflow.com/questions/49054768",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2742177/"
] | I found the solution.
```
from scipy.optimize import least_squares
from scipy.optimize import minimize
import numpy as np
def func(x_f, *args, sign=1.0):
""" Objective function """
sentences_lengths, length_constraint, sentences_idx, sentences_scores, damping, pairwise_idx, overlap_matrix\
, x_ini_size, R0_size, alpha0_shape = args
x=(x_f[:x_ini_size])
R=x_f[x_ini_size:x_ini_size+R0_size]
alpha=(x_f[x_ini_size+R0_size:].reshape(alpha0_shape))
return sign*(sum((x[i]) * sentences_scores[i] for i in sentences_idx) - damping * R * sum((alpha[i][j]) * overlap_matrix[i][j] for i,j in pairwise_idx))
def func_deriv(x, R, alpha, sign=1.0):
""" Derivative of objective function """
#Partial derivative to x
dfdX = sign*(sum(sentences_scores[i] for i in sentences_idx))
#Partial derivative to R
dfdR= sign*(- damping * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx))
#Partial derivative to alpha
dfdAlpha= sign*(- damping * R * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx))
return [ dfdX, dfdR, dfdAlpha]
"""print(list(x_ini))
a = np.array([list(x_ini),list(R0),list(alpha0)])
print(a)
ccc=[x_ini,R0,alpha0]
print(x_ini)
print(list(ccc))
x0=np.concatenate([x_ini,R0,alpha0])
print(x0.flatten())"""
"""
pairwise_idx-------->>>
array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
overlap_matrix----------->>
array([[ 0. , 0.01],
[ 0.02, 0. ]])
alpha0--->>>
array([[1, 0],
[0, 0]])
"""
sentences_lengths =[6, 3]
length_constraint=5
sentences_idx=[0, 1]
sentences_scores=[.1,.2]
damping=1.0
pairwise_idx=np.array([[0, 0],[0, 1],[1, 0],[1, 1]])
overlap_matrix=np.array([[0,.01],[.02,0]])
x_ini=np.array([0,0])
R0=np.array([.1])
alpha0=np.array([[0,0],[0,0]])
x_ini_size = x_ini.size
R0_size = R0.size
alpha0_shape = alpha0.shape
x0 = np.concatenate([x_ini, R0, alpha0.flatten()])
#x1bnds = [int(s) for s in range(0,2)]
#x1bnds=np.array([0,1])
#x1bnds=np.array([0,2], dtype=int)
#x1bnds = ((0,0),(1,1))
#x1bnds =np.arange(0,2, 1)
x1bnds = (0, 1)
x2bnds = (0, 1)
Rbnds = (0, 1)
alpha1bnds= (0, 1)
alpha2bnds= (0, 1)
alpha3bnds= (0, 1)
alpha4bnds= (0, 1)
bnds = (x1bnds, x2bnds, Rbnds, alpha1bnds, alpha2bnds, alpha3bnds, alpha4bnds)
#x=x_f[:x_ini_size]
#alpha=x_f[x_ini_size+R0_size:].reshape(alpha0_shape)
"""cons = ({'type': 'ineq',
## Constraints: one constraint for the size + consistency constraints
#sum(x[i] * sentences_lengths[i] for i in sentences_idx) <= length_constraint
'fun' : lambda x_f: np.array([length_constraint - sum(x_f[:x_ini_size][i] * sentences_lengths[i] for i in sentences_idx)])
, 'jac' : lambda x_f: np.array([-sum(sentences_lengths[i] for i in sentences_idx), 0, 0])}
,{'type': 'ineq',
#alpha[i][j] - x[i] <= 0
'fun' : lambda x_f: np.array([x_f[:x_ini_size][i]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx])
, 'jac' : lambda x_f: np.array([1.0, 0.0, -1.0])}
,{'type': 'ineq',
#alpha[i][j] - x[j] <= 0
'fun' : lambda x_f: np.array([x_f[:x_ini_size][j]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx])
, 'jac' : lambda x_f: np.array([1.0, 0.0, -1.0])}
,{'type': 'ineq',
#x[i] + x[j] - alpha[i][j] <= 1
'fun' : lambda x_f: np.array([1+x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j]-x_f[:x_ini_size][i]-x_f[:x_ini_size][j] for i,j in pairwise_idx])
, 'jac' : lambda x_f: np.array([-1.0-1.0, 0.0, 1.0])})
"""
cons = ({'type': 'ineq',
## Constraints: one constraint for the size + consistency constraints
#sum(x[i] * sentences_lengths[i] for i in sentences_idx) <= length_constraint
'fun' : lambda x_f: np.array([length_constraint - sum(x_f[:x_ini_size][i] * sentences_lengths[i] for i in sentences_idx)])
}
,{'type': 'ineq',
#alpha[i][j] - x[i] <= 0
'fun' : lambda x_f: np.array([(x_f[:x_ini_size][i])-(x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j]) for i,j in pairwise_idx])
}
,{'type': 'ineq',
#alpha[i][j] - x[j] <= 0
'fun' : lambda x_f: np.array([(x_f[:x_ini_size][j])-(x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j]) for i,j in pairwise_idx])
}
,{'type': 'ineq',
#x[i] + x[j] - alpha[i][j] <= 1
'fun' : lambda x_f: np.array([1+(x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j])-(x_f[:x_ini_size][i])-(x_f[:x_ini_size][j]) for i,j in pairwise_idx])
}
,{'type':'eq'
,'fun': lambda x_f : np.array([(x_f[:x_ini_size][i]-int(x_f[:x_ini_size][i]))
for i in sentences_idx])})
res = minimize(func
, x0
, args=(sentences_lengths
, length_constraint
, sentences_idx
, sentences_scores
, damping, pairwise_idx
, overlap_matrix
, x_ini_size
, R0_size
, alpha0_shape)
, method='SLSQP'
#, jac=func_deriv
, constraints=cons
, bounds=bnds
, options={'disp': True})
#res = least_squares(fun, (x,R,alpha), jac=jac, bounds=bounds, args=(sentences_scores, damping,overlap_matrix), verbose=1)
print(res)
```
The result is:
```
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.0
Iterations: 1
Function evaluations: 9
Gradient evaluations: 1
fun: 0.0
jac: array([ 0.1 , 0.2 , 0. , 0. , -0.001, -0.002, 0. ])
message: 'Optimization terminated successfully.'
nfev: 9
nit: 1
njev: 1
status: 0
success: True
x: array([ 0. , 0. , 0.1, 0. , 0. , 0. , 0. ])
```
The result is the same as the initial values. Is it not wonderful? | I was able to do this task:
```
from scipy.optimize import least_squares
from scipy.optimize import minimize
import numpy as np
def func(x_f, *args, sign=1.0):
""" Objective function """
sentences_lengths, length_constraint, sentences_idx, sentences_scores, damping, pairwise_idx, overlap_matrix\
, x_ini_size, R0_size, alpha0_shape = args
x=x_f[:x_ini_size]
R=x_f[x_ini_size:x_ini_size+R0_size]
alpha=x_f[x_ini_size+R0_size:].reshape(alpha0_shape)
return sign*(sum(x[i] * sentences_scores[i] for i in sentences_idx) - damping * R * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx))
def func_deriv(x, R, alpha, sign=1.0):
""" Derivative of objective function """
#Partial derivative to x
dfdX = sign*(sum(sentences_scores[i] for i in sentences_idx))
#Partial derivative to R
dfdR= sign*(- damping * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx))
#Partial derivative to alpha
dfdAlpha= sign*(- damping * R * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx))
return [ dfdX, dfdR, dfdAlpha]
"""print(list(x_ini))
a = np.array([list(x_ini),list(R0),list(alpha0)])
print(a)
ccc=[x_ini,R0,alpha0]
print(x_ini)
print(list(ccc))
x0=np.concatenate([x_ini,R0,alpha0])
print(x0.flatten())"""
"""
pairwise_idx-------->>>
array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
overlap_matrix----------->>
array([[ 0. , 0.01],
[ 0.02, 0. ]])
alpha0--->>>
array([[1, 0],
[0, 0]])
"""
sentences_lengths =[6, 3]
length_constraint=5
sentences_idx=[0, 1]
sentences_scores=[.1,.2]
damping=1.0
pairwise_idx=np.array([[0, 0],[0, 1],[1, 0],[1, 1]])
overlap_matrix=np.array([[0,.01],[.02,0]])
x_ini=np.array([1,0])
R0=np.array([.1])
alpha0=np.array([[1,0],[0,0]])
x_ini_size = x_ini.size
R0_size = R0.size
alpha0_shape = alpha0.shape
x0 = np.concatenate([x_ini, R0, alpha0.flatten()])
#x1bnds = [int(s) for s in range(0,2)]
#x1bnds=np.array([0,1])
#x1bnds=np.array([0,2], dtype=int)
#x1bnds = ((0,0),(1,1))
x1bnds =np.arange(0,2, 1)
x2bnds = (0, 1)
Rbnds = (0, 1)
alpha1bnds= [0, 1]
alpha2bnds= [0, 1]
alpha3bnds= [0, 1]
alpha4bnds= np.array([0,2], dtype=int)
bnds = (x1bnds, x2bnds, Rbnds, alpha1bnds, alpha2bnds, alpha3bnds, alpha4bnds)
#x=x_f[:x_ini_size]
#alpha=x_f[x_ini_size+R0_size:].reshape(alpha0_shape)
"""cons = ({'type': 'ineq',
## Constraints: one constraint for the size + consistency constraints
#sum(x[i] * sentences_lengths[i] for i in sentences_idx) <= length_constraint
'fun' : lambda x_f: np.array([length_constraint - sum(x_f[:x_ini_size][i] * sentences_lengths[i] for i in sentences_idx)])
, 'jac' : lambda x_f: np.array([-sum(sentences_lengths[i] for i in sentences_idx), 0, 0])}
,{'type': 'ineq',
#alpha[i][j] - x[i] <= 0
'fun' : lambda x_f: np.array([x_f[:x_ini_size][i]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx])
, 'jac' : lambda x_f: np.array([1.0, 0.0, -1.0])}
,{'type': 'ineq',
#alpha[i][j] - x[j] <= 0
'fun' : lambda x_f: np.array([x_f[:x_ini_size][j]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx])
, 'jac' : lambda x_f: np.array([1.0, 0.0, -1.0])}
,{'type': 'ineq',
#x[i] + x[j] - alpha[i][j] <= 1
'fun' : lambda x_f: np.array([1+x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j]-x_f[:x_ini_size][i]-x_f[:x_ini_size][j] for i,j in pairwise_idx])
, 'jac' : lambda x_f: np.array([-1.0-1.0, 0.0, 1.0])})
"""
cons = ({'type': 'ineq',
## Constraints: one constraint for the size + consistency constraints
#sum(x[i] * sentences_lengths[i] for i in sentences_idx) <= length_constraint
'fun' : lambda x_f: np.array([length_constraint - sum(x_f[:x_ini_size][i] * sentences_lengths[i] for i in sentences_idx)])
}
,{'type': 'ineq',
#alpha[i][j] - x[i] <= 0
'fun' : lambda x_f: np.array([x_f[:x_ini_size][i]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx])
}
,{'type': 'ineq',
#alpha[i][j] - x[j] <= 0
'fun' : lambda x_f: np.array([x_f[:x_ini_size][j]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx])
}
,{'type': 'ineq',
#x[i] + x[j] - alpha[i][j] <= 1
'fun' : lambda x_f: np.array([1+x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j]-x_f[:x_ini_size][i]-x_f[:x_ini_size][j] for i,j in pairwise_idx])
})
res = minimize(func
, x0
, args=(sentences_lengths
, length_constraint
, sentences_idx
, sentences_scores
, damping, pairwise_idx
, overlap_matrix
, x_ini_size
, R0_size
, alpha0_shape)
, method='SLSQP'
#, jac=func_deriv
, constraints=cons
, bounds=bnds
, options={'disp': True})
#res = least_squares(fun, (x,R,alpha), jac=jac, bounds=bounds, args=(sentences_scores, damping,overlap_matrix), verbose=1)
print(res)
``` | 3,654 |
34,586,114 | In Django, the convention is to put all of your static files (i.e css, js) specific to your app into a folder called **static**. So the structure would look like this:
```
mysite/
manage.py
mysite/ --> (settings.py, etc)
myapp/ --> (models.py, views.py, etc)
static/
```
In `mysite/settings.py` I have:
```
STATIC_ROOT = 'staticfiles'
```
So when I run the command:
```
python manage.py collectstatic
```
It creates a folder called `staticfiles` at the root level (so same directory as `myapp/`)
What's the point of this? Isn't it just creating a copy of all my static files? | 2016/01/04 | [
"https://Stackoverflow.com/questions/34586114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Collect static files from multiple apps into a single path
----------------------------------------------------------
Well, a single Django *project* may use several *apps*, so while here you only have one `myapp`, there may actually be `myapp1`, `myapp2`, etc.
By copying them from inside the individual apps into a single folder, you can point your frontend web server (e.g. nginx) to that single folder `STATIC_ROOT` and serve static files from a single location, rather than configure your web server to serve static files from multiple paths.
Persistent URLs with [ManifestStaticFilesStorage](https://docs.djangoproject.com/en/3.2/ref/contrib/staticfiles/#manifeststaticfilesstorage)
--------------------------------------------------------------------------------------------------------------------------------------------
A note about the MD5 hash being appended to the filename for versioning: It's not part of the default behavior of `collectstatic`, as `settings.STATICFILES_STORAGE` defaults to `StaticFilesStorage` (which doesn't do that)
The MD5 hash will kick in e.g. if you set it to use `ManifestStaticFilesStorage`, which adds that behavior.
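A minimal sketch of opting in (settings style for the Django 3.x series the docs above refer to; the paths are assumptions about your layout):

```python
# settings.py  (assumes the usual `import os` / BASE_DIR boilerplate at the top)
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')  # where collectstatic copies files
# opt in to hashed filenames (e.g. app.055c1e7f.css) for cache-busting
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
```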
>
> The purpose of this storage is to keep serving the old files in case
> some pages still refer to those files, e.g. because they are cached by
> you or a 3rd party proxy server. Additionally, itโs very helpful if
> you want to apply far future Expires headers to the deployed files to
> speed up the load time for subsequent page visits.
>
>
> | It's useful when there are multiple django apps within the site.
`collectstatic` will then collect static files from all the apps in one place - so that it could be served up in a production environment. | 3,655 |
60,418,192 | Julia newbe here, transitioning from python.
So, I want to build what in Python I would call list, made of lists made of lists. In my case, it's a 1000 long list whose element is a list of 3 lists.
Until now, I have done it this way:
```
BIG_LIST = collect(Array{Int64,1}[[],[],[]] for i in 1:1000)
```
This served my purpose when all three innermost lists were made of integers.
Now I need 2 of them to hold integers, while the third holds floats.
Is this possible? How do I do it?
If you could also explain how to properly initialize these objects, that would be great. I am aware that collect may not be the best choice here.
Note that the length of the 3 inner lists is the same among the 3, but can vary during the process. | 2020/02/26 | [
"https://Stackoverflow.com/questions/60418192",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10139617/"
] | First, if you know that intermediate lists always have 3 elements, you'll probably be better off using [`Tuple` types](https://docs.julialang.org/en/v1/manual/types/#Tuple-Types-1) for those. And tuples can specify independently the types of their elements. So something like this might suit your purposes:
```
julia> l = [(Int64[], Int64[], Float64[]) for _ in 1:10]
10-element Array{Tuple{Array{Int64,1},Array{Int64,1},Array{Float64,1}},1}:
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
julia> push!(l[1][3], 5)
1-element Array{Float64,1}:
5.0
julia> l
10-element Array{Tuple{Array{Int64,1},Array{Int64,1},Array{Float64,1}},1}:
([], [], [5.0])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
([], [], [])
```
A few details to note here, that might be of interest to you:
* Empty but typed lists can be constructed using `T[]`, where `T` is the element type.
* `collect(f(i) for i in 1:n)` is essentially equivalent to a simple comprehension (like you're used to in python): `[f(i) for i in 1:n]`. Note that since variable `i` plays no role here, you can replace it with a `_` placeholder so that it more immediately appears to the reader that you're essentially creating a collection of similar objects (but not identical, in the sense that they don't share the same underlying memory; modifying one won't affect the others).
* I don't know of any better way to initialize such a collection and I wouldn't think that using `collect`(or a comprehension) is a bad idea here. For collections of identical objects, [`fill`](https://docs.julialang.org/en/v1/base/arrays/#Base.fill) provides a useful shortcut, but it wouldn't apply here because all sub-lists would be linked.
---
Now, if all inner sublists have the same length, you might want to switch to a slightly different data structure: a vector of vectors of tuples:
```
julia> l2 = [Tuple{Int64,Int64,Float64}[] for _ in 1:10]
10-element Array{Array{Tuple{Int64,Int64,Float64},1},1}:
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
julia> push!(l2[2], (1,2,pi))
1-element Array{Tuple{Int64,Int64,Float64},1}:
(1, 2, 3.141592653589793)
julia> l2
10-element Array{Array{Tuple{Int64,Int64,Float64},1},1}:
[]
[(1, 2, 3.141592653589793)]
[]
[]
[]
[]
[]
[]
[]
[]
``` | Francois has given you a great answer. I just wanted to raise one other possibility. It sounds like your data has a fairly complicated, but specific, structure. For example, the fact that your outer list has 1000 elements, and your inner list always has 3 lists...
Sometimes in these situations it can be more intuitive to just build your own type(s), and write a couple of accessor functions. That way you don't end up doing things like `mylist[3][2][6]` and forgetting which index refers to which dimension of your data. For example:
```
struct MyInnerType
field1::Vector{Int}
field2::Vector{Int}
field3::Vector{Float64}
end
struct MyOuterType
x::Vector{MyInnerType}
function MyOuterType(x::Vector{MyInnerType})
length(x) != 1000 && error("This vector should always have length of 1000")
new(x)
end
end
```
I'm guessing here, but perhaps accessor functions like this would be useful for, e.g. `field3`:
```
get_field3(y::MyInnerType, i::Int)::Float64 = y.field3[i]
get_field3(z::MyOuterType, iouter::Int, iinner::Int)::Float64 = get_field3(z.x[iouter], iinner)
```
Remember that there is no performance penalty to using your own types in Julia.
One other thing, I've included all type information in my functions above for clarity, but this is not actually necessary for getting maximum performance either. | 3,661 |
44,651,760 | When I run `python manage.py migrate` on my Django project, I get the following error:
```
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line
utility.execute()
File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle
executor.loader.check_consistent_history(connection)
File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history
connection.alias,
django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'.
```
I have a user model like below:
```
class User(AbstractUser):
place = models.CharField(max_length=64, null=True, blank=True)
address = models.CharField(max_length=128, null=True, blank=True)
```
How can I solve this problem? | 2017/06/20 | [
"https://Stackoverflow.com/questions/44651760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3186922/"
] | Since you are using a custom User model, you can do 4 steps:
1. Comment out django.contrib.admin in your INSTALLED\_APPS settings
>
>
> ```
> INSTALLED_APPS = [
> ...
> #'django.contrib.admin',
> ...
> ]
>
> ```
>
>
2. Comment out admin path in urls.py
>
>
> ```
> urlpatterns = [
> ...
> #path('admin/', admin.site.urls)
> ...
> ]
>
> ```
>
>
3. Then run
>
>
> ```
> python manage.py migrate
>
> ```
>
>
4. **When done, uncomment all back** | If you set **AUTH\_USER\_MODE**L in **settings.py** like this:
```
AUTH_USER_MODEL = 'custom_user_app_name.User'
```
you should comment this line out before running the **makemigrations** and **migrate** commands. Then you can uncomment this line again. | 3,662
54,235,347 | I am implementing a GUI in Python/Flask.
The way Flask is designed, the localhost URL along with the port number has to be opened "manually".
Is there a way to automate it so that, upon running the code, the browser (localhost) is opened automatically?
I tried using the webbrowser package, but it opens the webpage only after the session is killed.
I also looked at the following posts but they are going over my head.
[Shell script opening flask powered webpage opens two windows](https://stackoverflow.com/questions/28056360/shell-script-opening-flask-powered-webpage-opens-two-windows)
[python webbrowser.open(url)](https://stackoverflow.com/questions/2634235/python-webbrowser-openurl)
The problem occurs when HTML pages are rendered based on user inputs.
Thanks in advance.
```
import webbrowser
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
if __name__ == "__main__":
webbrowser.open_new('http://127.0.0.1:2000/')
app.run(port=2000)
``` | 2019/01/17 | [
"https://Stackoverflow.com/questions/54235347",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9557881/"
] | Use a Timer to start a new thread that opens the web browser.
```
import webbrowser
from threading import Timer
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
def open_browser():
    webbrowser.open_new("http://127.0.0.1:2000")
if __name__ == "__main__":
Timer(1, open_browser).start()
app.run(port=2000)
``` | **I'd suggest the following improvement to allow for loading of the browser when in debug mode:**
*Inspired by [this answer](https://stackoverflow.com/a/9476701/10521959), will only load the browser on the first run...*
```
import os
import webbrowser

def main():
# The reloader has not yet run - open the browser
if not os.environ.get("WERKZEUG_RUN_MAIN"):
webbrowser.open_new('http://127.0.0.1:2000/')
# Otherwise, continue as normal
app.run(host="127.0.0.1", port=2000)
if __name__ == '__main__':
main()
```
<https://stackoverflow.com/a/9476701/10521959> | 3,672 |
42,506,954 | I'm calling `curl` from a Perl script to POST a file:
```perl
my $cookie = 'Cookie: _appwebSessionId_=' . $sessionid;
my $reply = `curl -s
-H "Content-type:application/x-www-form-urlencoded"
-H "$cookie"
--data \@portports.txt
http://$ipaddr/remote_api.esp`;
```
I want to use the Python [requests](http://docs.python-requests.org/en/master/) module instead. I've tried the following Python code:
```py
files = {'file': ('portports.txt', open('portports.txt', 'rb'))}
headers = {
'Content-type' : 'application/x-www-form-urlencoded',
'Cookie' : '_appwebSessionId_=%s' % sessionid
}
r = requests.post('http://%s/remote_api.esp' % ip, headers=headers, files=files)
print(r.text)
```
But I always get the response "ERROR no data found in request." How can I fix this? | 2017/02/28 | [
"https://Stackoverflow.com/questions/42506954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4153606/"
] | The `files` parameter encodes your file as a multipart message, which is not what you want. Use the `data` parameter instead:
```
import requests
url = 'http://www.example.com/'
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
cookies = {'_appwebSessionId_': '1234'}
with open('foo', 'rb') as file:
response = requests.post(url, headers=headers, data=file, cookies=cookies)
print(response.text)
```
This generates a request like:
```none
POST / HTTP/1.1
Connection: keep-alive
Accept: */*
Accept-Encoding: gzip, deflate
Host: www.example.com
User-Agent: python-requests/2.13.0
Content-Length: 15
Content-Type: application/x-www-form-urlencoded
Cookie: _appwebSessionId_=1234
content of foo
```
Note that in both this version and in your original `curl` command, the file must already be URL encoded. | First UTF-8 decode your URL.
Put the headers and the file contents into a single JSON-serializable object, let's say all\_data.
Now your code should look like this:
```python
all_data = {
    'file': {
        # a file object is not JSON serializable, so read and decode the contents first
        'portports.txt': open('portports.txt', 'rb').read().decode('utf-8')
    },
    'headers': {
        'Content-type': 'application/x-www-form-urlencoded',
        'Cookie': '_appwebSessionId_=%s' % sessionid
    }
}
all_data = json.dumps(all_data)  # requires `import json`
requests.post(url, data=all_data)
``` | 3,673 |
57,812,562 | I want to see the full trace of the code till a particular point
so i do
```
...
import traceback
traceback.print_stack()
...
```
Then it will show
```
File ".venv/lib/python3.7/site-packages/django/db/models/query.py", line 144, in __iter__
return compiler.results_iter(tuple_expected=True, chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File ".venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1052, in results_iter
results = self.execute_sql(MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size)
File ".venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1100, in execute_sql
cursor.execute(sql, params)
File ".venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 110, in execute
extra={'duration': duration, 'sql': sql, 'params': params}
File "/usr/lib64/python3.7/logging/__init__.py", line 1371, in debug
self._log(DEBUG, msg, args, **kwargs)
File "/usr/lib64/python3.7/logging/__init__.py", line 1519, in _log
self.handle(record)
File "/usr/lib64/python3.7/logging/__init__.py", line 1528, in handle
if (not self.disabled) and self.filter(record):
File "/usr/lib64/python3.7/logging/__init__.py", line 762, in filter
result = f.filter(record)
File "basic_django/settings.py", line 402, in filter
traceback.print_stack()
```
How to make this output more colorful using pygments.
Generally to colorize a json string in python i do
```
from pygments import highlight
from pygments.lexers import JsonLexer
from pygments.formatters import TerminalTrueColorFormatter
json_str = '{ "name":"John" }'
print(highlight(json_str, JsonLexer(), TerminalTrueColorFormatter()))
```
Similarly how to do that with `traceback.print_stack()`
**Answer I Used based on Alexander Huszagh**
1) we have to use `Python3TracebackLexer`
2) we have to use `traceback.format_stack()` which gives a `list` and then concatenate them as a `string` using `''.join(traceback.format_stack())`.
```
import traceback
import pygments
from pygments.lexers import Python3TracebackLexer
from pygments.formatters import TerminalTrueColorFormatter
traceback_color = pygments.highlight(''.join(traceback.format_stack()),Python3TracebackLexer(),TerminalTrueColorFormatter(style='trac')) # trac or rainbow_dash i prefer
print(traceback_color)
``` | 2019/09/05 | [
"https://Stackoverflow.com/questions/57812562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2897115/"
] | Pygments lists the available [lexers](http://pygments.org/docs/lexers/). You can do this with Python3TracebackLexer.
```
from pygments import highlight
from pygments.lexers import Python3TracebackLexer
from pygments.formatters import TerminalTrueColorFormatter
err_str = '''
File ".venv/lib/python3.7/site-packages/django/db/models/query.py", line 144, in __iter__
return compiler.results_iter(tuple_expected=True, chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File ".venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1052, in results_iter
results = self.execute_sql(MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size)
File ".venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1100, in execute_sql
cursor.execute(sql, params)
File ".venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 110, in execute
extra={'duration': duration, 'sql': sql, 'params': params}
File "/usr/lib64/python3.7/logging/__init__.py", line 1371, in debug
self._log(DEBUG, msg, args, **kwargs)
File "/usr/lib64/python3.7/logging/__init__.py", line 1519, in _log
self.handle(record)
File "/usr/lib64/python3.7/logging/__init__.py", line 1528, in handle
if (not self.disabled) and self.filter(record):
File "/usr/lib64/python3.7/logging/__init__.py", line 762, in filter
result = f.filter(record)
File "basic_django/settings.py", line 402, in filter
traceback.print_stack()
'''
print(highlight(err_str, Python3TracebackLexer(), TerminalTrueColorFormatter()))
```
In order to get `err_str`, replace `print_stack` with `format_stack` and join the resulting list of lines, as follows:
```
import traceback as tb

def colorize_traceback(err_str):
    return highlight(err_str, Python3TracebackLexer(), TerminalTrueColorFormatter())

try:
    ... # Some logic
except Exception: # Or a more narrow exception
    # format_stack() returns a list of strings, so join it before highlighting
    print(colorize_traceback(''.join(tb.format_stack())))
``` | Alternatively, use the [rich](https://github.com/willmcgugan/rich) library.
[With just two lines of code](https://rich.readthedocs.io/en/latest/traceback.html), it will prettify your tracebacks... and then some!
```py
from rich.traceback import install
install()
```
How does it look afterwards? Take a gander:
[](https://i.stack.imgur.com/RbV8i.png)
And the beauty of it? [It supports Pygment themes](https://rich.readthedocs.io/en/latest/reference/traceback.html#rich.traceback.install)! | 3,674 |
40,081,601 | I have an instance of Django deployed on Heroku as follows. Procfile:
```
web: python manage.py collectstatic --noinput ; gunicorn MY_APP.wsgi --log-file -
worker: celery -A MY_APP worker
beat: celery -A MY_APP beat
```
This instance can receive 2000-4000 requests per minute, and sometimes it is too much.
I know I should change the communications... but can I change something in the configuration to get a 10-30% gain in server efficiency? | 2016/10/17 | [
"https://Stackoverflow.com/questions/40081601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1073310/"
] | The first thing that springs to mind is to check out connection pooling and/or persistent database connections. Depending on how much database access your app is using, this could significantly increase the number of RPM your app is able to handle.
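As a concrete sketch, persistent connections (available since Django 1.6) are a one-line settings change; the engine and database name below are assumptions:

```python
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # hypothetical backend
        'NAME': 'mydb',                                      # hypothetical database
        'CONN_MAX_AGE': 600,  # keep connections open up to 10 minutes (default 0 closes per request)
    }
}
```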
Check out [this StackOverflow question](https://stackoverflow.com/questions/1125504/django-persistent-database-connection) for some good ideas, in particular the following answers:
* [persistent connections, available since Django 1.6](https://stackoverflow.com/a/19438241/3231557)
* [PgBouncer, a lightweight connection pooler for PostgreSQL](https://stackoverflow.com/a/1698102/3231557) | The whole point of Heroku is that you can dynamically scale your app. You can spin up new web workers with `heroku ps:scale web+1` for example. | 3,675 |
6,261,459 | Here is a test case I've created for a problem I found out.
For some reason the dict() 'l' in B() does not seem to hold the correct value. See the output below on my Linux 11.04 Ubuntu, python 2.7.1+.
```
class A():
name = None
b = None
def __init__(self, name, bname, cname, dname):
self.name = name
print "A: name", name
self.b = B(name, bname, cname, dname)
print "A self.b:", self.b
class B():
name = None
l = dict()
c = None
def __init__(self, name, bname, cname, dname):
self.aname = name
self.name = bname
print " B: name", bname
self.c = C(bname, cname, dname)
self.l["bb"] = self.c
print " B self:", self
print " B self.c:", self.c
print " B self.l[bb]:", self.l["bb"], "<<< OK >>>"
def dump(self):
print " A: name", self.aname
print " B: name", self.name
for i in self.l:
print " B: i=", i, "self.l[i]", self.l[i], "<<< ERROR >>>"
class C():
name = None
l = dict()
d = None
def __init__(self, bname, cname, dname):
self.bname = bname
self.cname = cname
print " B: name", bname
print " C: name", cname
print " C self:", self
def dump(self):
print " B name:", self.bname
print " C name:", self.cname
a1 = A("a1", "b1", "c1", "d1")
a2 = A("a2", "b2", "c2", "d2")
a3 = A("a3", "b3", "c3", "d3")
a1.b.dump()
a1.b.c.dump()
a2.b.dump()
a2.b.c.dump()
a3.b.dump()
a3.b.c.dump()
```
Output on my machine:
```
$ python bedntest.py
A: name a1
B: name b1
B: name b1
C: name c1
C self: <__main__.C instance at 0xb76f3a6c>
B self: <__main__.B instance at 0xb76f388c>
B self.c: <__main__.C instance at 0xb76f3a6c>
B self.l[bb]: <__main__.C instance at 0xb76f3a6c> <<< OK >>>
A self.b: <__main__.B instance at 0xb76f388c>
A: name a2
B: name b2
B: name b2
C: name c2
C self: <__main__.C instance at 0xb76f3acc>
B self: <__main__.B instance at 0xb76f3aac>
B self.c: <__main__.C instance at 0xb76f3acc>
B self.l[bb]: <__main__.C instance at 0xb76f3acc> <<< OK >>>
A self.b: <__main__.B instance at 0xb76f3aac>
A: name a3
B: name b3
B: name b3
C: name c3
C self: <__main__.C instance at 0xb76f3b2c>
B self: <__main__.B instance at 0xb76f3b0c>
B self.c: <__main__.C instance at 0xb76f3b2c>
B self.l[bb]: <__main__.C instance at 0xb76f3b2c> <<< OK >>>
A self.b: <__main__.B instance at 0xb76f3b0c>
A: name a1
B: name b1
B: i= bb self.l[i] <__main__.C instance at 0xb76f3b2c> <<< ERROR >>>
B name: b1
C name: c1
A: name a2
B: name b2
B: i= bb self.l[i] <__main__.C instance at 0xb76f3b2c> <<< ERROR >>>
B name: b2
C name: c2
A: name a3
B: name b3
B: i= bb self.l[i] <__main__.C instance at 0xb76f3b2c> <<< ERROR >>>
B name: b3
C name: c3
```
To my understanding, the lines above:
```
B: i= bb self.l[i] <__main__.C instance at 0xb76f3b2c> <<< ERROR >>>
```
should all hold a unique instance of C(), as seen at initialization time - not the last instance that was created (see <<< OK >>> lines).
What happened here? | 2011/06/07 | [
"https://Stackoverflow.com/questions/6261459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | What happened is that you created a class attribute. Create an instance attribute instead by assigning it in `__init__()`. | It looks like you are trying to "declare" instance attributes at the class level. Class attributes have their own specific uses in Python, and it is wrong to declare them there if you never intend to use them as class attributes.
```
class A():
name = None # Don't do this
b = None # Don't do this
def __init__(self, name, bname, cname, dname):
self.name = name
print "A: name", name
self.b = B(name, bname, cname, dname)
print "A self.b:", self.b
```
In `class B` you have created a class attribute `l`. Since the instance doesn't have it's own attribute `l` it uses the class's attribute.
You could just write your class B like this instead
```
class B():
def __init__(self, name, bname, cname, dname):
self.aname = name
self.name = bname
self.l = dict()
print " B: name", bname
self.c = C(bname, cname, dname)
self.l["bb"] = self.c
print " B self:", self
print " B self.c:", self.c
print " B self.l[bb]:", self.l["bb"], "<<< OK >>>"
...
``` | 3,676 |
30,969,533 | I have a task to draw a potential graph with 3 variables: x, y, and z. I don't think we can draw the function U(x, y, z) directly with matplotlib, so what I'm planning to do is draw cross-sectional plots of x-y and y-z. I believe this is enough because the function U(x, y, z) has periodic behavior.
I'm quite new to Python, so could you recommend where to start or which method I can use for this?
Thank you. | 2015/06/21 | [
"https://Stackoverflow.com/questions/30969533",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4358807/"
] | String literals in SQL are denoted by single quotes (`'`). Without them, a string would be treated as an object name. Here, you generate a where clause `title = Test`. Both sides are interpreted as column names, and the query fails since there's no column `Test`.
To solve this, you could surround `Test` by quotes:
```
String query = "SELECT * FROM "+ GROUPS +" WHERE "+ TITLE_GROUPS + " = '" + title + "'";
``` | Change your WHERE clause to be...
```
...
title = 'test'
```
The way it is written it is looking for a column named Test. | 3,677 |
7,360,654 | I am trying to *generate* self signed SSL certificates using Python, so that it is platform independent. My target is the \*.pem format.
I found [this script](http://sunh11373.blogspot.com/2007/04/python-utility-for-converting.html) that generates certificates, but no information how to self-sign them. | 2011/09/09 | [
"https://Stackoverflow.com/questions/7360654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/722291/"
] | The script you've linked doesn't create a self-signed certificate; it only creates a request.
To create a self-signed certificate you could use [`openssl`](https://www.openssl.org/docs/faq.html#USER4); it is available on all major OSes.
```
$ openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095
```
If you'd like to do it using [M2Crypto](https://pypi.python.org/pypi/M2Crypto) then take a look at [`X509TestCase.test_mkcert()` method](https://gitlab.com/m2crypto/m2crypto/blob/17f7ca77afa75cedaa60bf3db767119adba4a2ec/tests/test_x509.py#L237). | You could use the openssl method that J.F. Sebastian stated from within Python.
Import the OS lib and call the command like this:
```
os.system("openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095")
```
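For reference, a sketch of the same call through `subprocess`, which avoids shell quoting; the `-subj` value is a hypothetical placeholder so openssl does not prompt interactively:

```python
import subprocess

subprocess.check_call([
    "openssl", "req", "-new", "-x509",
    "-key", "privkey.pem", "-out", "cacert.pem",
    "-days", "1095",
    "-subj", "/CN=localhost",  # hypothetical subject; skips the interactive questions
])
```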
If it requires user interaction, it might work if you run it via subprocess pipe and allow for raw input to answer any prompts. | 3,678 |
47,441,401 | I have an 8\*4 numpy array with floats (myarray) and would like to transform it into a dictionary of dataframes (and eventually concatenate it into one dataframe) with pandas in python. I'm coming across the error "ValueError: DataFrame constructor not properly called!" though. Here is the way I attempt it:
```
mydict={}
for i, y in enumerate(np.arange(2015,2055,5)):
for j, s in enumerate(['Winter', 'Spring', 'Summer', 'Fall']):
mydict[(y,s)]=pd.DataFrame(myarray[i,j])
mydict
```
Any ideas? Thanks!
As requested, some sample data:
```
array([[ 29064908.33333333, 33971366.66666667, 37603508.33333331,
37105916.66666667],
[ 25424991.66666666, 30156625. , 32103324.99999999,
31705075. ],
[ 26972666.66666666, 28182699.99999995, 30614324.99999999,
29673008.33333333],
[ 26923466.66666666, 27573075. , 28308725. ,
27834291.66666666],
[ 26015216.66666666, 28709191.66666666, 30807833.33333334,
27183991.66666684],
[ 25711475. , 32861633.33333332, 35784916.66666666,
28748891.66666666],
[ 26267299.99999999, 35030583.33333331, 37863808.33333329,
29931858.33333332],
[ 28871674.99999998, 38477549.99999999, 40171374.99999999,
33853750. ]])
```
and expected output:
```
2015 2020 2025 2030 2035 2040 2045 2050
Winter 2.9e+07 2.5e+07 2.6e+07 2.6e+07 2.6e+07 2.5e+07 2.6e+07 2.8e+07
Spring 3.3e+07 3.0e+07 2.8e+07 2.7e+07 2.8e+07 3.2e+07 3.5e+07 3.8e+07
Summer 3.7e+07 3.2e+07 3.0e+07 2.8e+07 3.0e+07 3.5e+07 3.7e+07 4.0e+07
Fall 3.7e+07 3.1e+07 2.9e+07 2.7e+07 2.7e+07 2.8e+07 2.9e+07 3.3e+07
``` | 2017/11/22 | [
"https://Stackoverflow.com/questions/47441401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8938572/"
] | You are trying to cast a list of 2 numbers to int. [int only takes a number or a string as its argument](https://docs.python.org/3/library/functions.html#int).
What you want is to [map](http://book.pythontips.com/en/latest/map_filter.html#map) the int function to each item in the list.
```
>>> w, h = map(int, input().split(" "))
5 10
>>> w
5
>>> h
10
``` | `int(...)` constructs an integer which cannot be unpacked to a tuple `W, H`. What you probably want is
```
W, H = (int(x) for x in input().split(" "))
``` | 3,679 |
40,853,556 | I have a list of tuples in Python containing 3-dimensional data, where each tuple is in the form: (x, y, z, data\_value), i.e., I have data values at each (x, y, z) coordinate. I would like to make a 3D discrete heatmap plot where the colors represent the value of data\_values in my list of tuples. Here, I give an example of such a heatmap for a 2D dataset where I have a list of (x, y, data\_value) tuples:
```
import matplotlib.pyplot as plt
from matplotlib import colors
import numpy as np
from random import randint
# x and y coordinates
x = np.array(range(10))
y = np.array(range(10,15))
data = np.zeros((len(y),len(x)))
# Generate some discrete data (1, 2 or 3) for each (x, y) pair
for i,yy in enumerate(y):
for j, xx in enumerate(x):
data[i,j] = randint(1,3)
# Map 1, 2 and 3 to 'Red', 'Green' qnd 'Blue', respectively
colormap = colors.ListedColormap(['Red', 'Green', 'Blue'])
colorbar_ticklabels = ['1', '2', '3']
# Use matshow to create a heatmap
fig, ax = plt.subplots()
ms = ax.matshow(data, cmap = colormap, vmin=data.min() - 0.5, vmax=data.max() + 0.5, origin = 'lower')
# x and y axis ticks
ax.set_xticklabels([str(xx) for xx in x])
ax.set_yticklabels([str(yy) for yy in y])
ax.xaxis.tick_bottom()
# Put the x- qnd y-axis ticks at the middle of each cell
ax.set_xticks(np.arange(data.shape[1]), minor = False)
ax.set_yticks(np.arange(data.shape[0]), minor = False)
# Set custom ticks and ticklabels for color bar
cbar = fig.colorbar(ms,ticks = np.arange(np.min(data),np.max(data)+1))
cbar.ax.set_yticklabels(colorbar_ticklabels)
plt.show()
```
This generates a plot like this:
[](https://i.stack.imgur.com/rwH59.jpg)
How can I make a similar plot in 3D space (i.e., having a z-axis), if my data has a third dimension? For example, if
```
# x and y and z coordinates
x = np.array(range(10))
y = np.array(range(10,15))
z = np.array(range(15,20))
data = np.zeros((len(y),len(x), len(y)))
# Generate some random discrete data (1, 2 or 3) for each (x, y, z) triplet.
# Am I defining i, j and k correctly here?
for i,yy in enumerate(y):
for j, xx in enumerate(x):
for k, zz in enumerate(z):
data[i,j, k] = randint(1,3)
```
It sounds like [plot\_surface in mplot3d](http://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html) should be able to do this, but z in the input of this function is essentially the value of the data at the (x, y) coordinate, i.e., (x, y, z = data_value), which is different from what I have, i.e., (x, y, z, data_value). | 2016/11/28 | [
"https://Stackoverflow.com/questions/40853556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3076813/"
] | ### New answer:
It seems we really want to have a 3D Tetris game here ;-)
So here is a way to plot cubes of different color to fill the space given by the arrays `(x,y,z)`.
```
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm
import matplotlib.colorbar
import matplotlib.colors
def cuboid_data(center, size=(1,1,1)):
# code taken from
# http://stackoverflow.com/questions/30715083/python-plotting-a-wireframe-3d-cuboid?noredirect=1&lq=1
# suppose axis direction: x: to left; y: to inside; z: to upper
# get the (left, outside, bottom) point
o = [a - b / 2 for a, b in zip(center, size)]
# get the length, width, and height
l, w, h = size
x = [[o[0], o[0] + l, o[0] + l, o[0], o[0]], # x coordinate of points in bottom surface
[o[0], o[0] + l, o[0] + l, o[0], o[0]], # x coordinate of points in upper surface
[o[0], o[0] + l, o[0] + l, o[0], o[0]], # x coordinate of points in outside surface
[o[0], o[0] + l, o[0] + l, o[0], o[0]]] # x coordinate of points in inside surface
y = [[o[1], o[1], o[1] + w, o[1] + w, o[1]], # y coordinate of points in bottom surface
[o[1], o[1], o[1] + w, o[1] + w, o[1]], # y coordinate of points in upper surface
[o[1], o[1], o[1], o[1], o[1]], # y coordinate of points in outside surface
[o[1] + w, o[1] + w, o[1] + w, o[1] + w, o[1] + w]] # y coordinate of points in inside surface
z = [[o[2], o[2], o[2], o[2], o[2]], # z coordinate of points in bottom surface
[o[2] + h, o[2] + h, o[2] + h, o[2] + h, o[2] + h], # z coordinate of points in upper surface
[o[2], o[2], o[2] + h, o[2] + h, o[2]], # z coordinate of points in outside surface
[o[2], o[2], o[2] + h, o[2] + h, o[2]]] # z coordinate of points in inside surface
return x, y, z
def plotCubeAt(pos=(0,0,0), c="b", alpha=0.1, ax=None):
    # Plot a single cube element at position pos
    if ax is not None:
        X, Y, Z = cuboid_data( (pos[0], pos[1], pos[2]) )
        # pass alpha through instead of hard-coding 0.1, so the argument takes effect
        ax.plot_surface(X, Y, Z, color=c, rstride=1, cstride=1, alpha=alpha)
def plotMatrix(ax, x, y, z, data, cmap="jet", cax=None, alpha=0.1):
# plot a Matrix
norm = matplotlib.colors.Normalize(vmin=data.min(), vmax=data.max())
colors = lambda i,j,k : matplotlib.cm.ScalarMappable(norm=norm,cmap = cmap).to_rgba(data[i,j,k])
for i, xi in enumerate(x):
for j, yi in enumerate(y):
for k, zi, in enumerate(z):
plotCubeAt(pos=(xi, yi, zi), c=colors(i,j,k), alpha=alpha, ax=ax)
if cax !=None:
cbar = matplotlib.colorbar.ColorbarBase(cax, cmap=cmap,
norm=norm,
orientation='vertical')
cbar.set_ticks(np.unique(data))
# set the colorbar transparent as well
cbar.solids.set(alpha=alpha)
if __name__ == '__main__':
# x and y and z coordinates
x = np.array(range(10))
y = np.array(range(10,15))
z = np.array(range(15,20))
data_value = np.random.randint(1,4, size=(len(x), len(y), len(z)) )
print data_value.shape
fig = plt.figure(figsize=(10,4))
ax = fig.add_axes([0.1, 0.1, 0.7, 0.8], projection='3d')
ax_cb = fig.add_axes([0.8, 0.3, 0.05, 0.45])
ax.set_aspect('equal')
plotMatrix(ax, x, y, z, data_value, cmap="jet", cax = ax_cb)
plt.savefig(__file__+".png")
plt.show()
```
[](https://i.stack.imgur.com/mVsjM.png)
I find it really hard to see anything here, but that may be a question of taste, and it now hopefully also answers the question.
---
### Original Answer:
*It seems I misunderstood the question. Therefore the following does not answer the question. For the moment I leave it here, to keep the comments below available for others.*
I think [`plot_surface`](http://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#surface-plots) is fine for the specified task.
Essentially you would plot a surface with the shape given by your points `X,Y,Z` in 3D and colorize it using the values from `data_values` as shown in the code below.
```
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.gca(projection='3d')
# as plot_surface needs 2D arrays as input
x = np.arange(10)
y = np.array(range(10,15))
# we make a meshgrid from the x,y data
X, Y = np.meshgrid(x, y)
Z = np.sin(np.sqrt(X**2 + Y**2))
# data_value shall be represented by color
data_value = np.random.rand(len(y), len(x))
# map the data to rgba values from a colormap
colors = cm.ScalarMappable(cmap = "viridis").to_rgba(data_value)
# plot_surface with points X,Y,Z and data_value as colors
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, facecolors=colors,
linewidth=0, antialiased=True)
plt.show()
```
[](https://i.stack.imgur.com/oqFab.png) | I've updated the code above to be compatible with newer versions of matplotlib.
```py
import numpy as np
import matplotlib.cm
import matplotlib.colorbar
import matplotlib.colors  # explicit imports: Normalize and ScalarMappable are used below
import matplotlib.pyplot as plt
from matplotlib import cm
viridis = cm.get_cmap('plasma', 8) #Our color map
def cuboid_data(center, size=(1,1,1)):
# code taken from
# http://stackoverflow.com/questions/30715083/python-plotting-a-wireframe-3d-cuboid?noredirect=1&lq=1
# suppose axis direction: x: to left; y: to inside; z: to upper
# get the (left, outside, bottom) point
o = [a - b / 2 for a, b in zip(center, size)]
# get the length, width, and height
l, w, h = size
x = np.array([[o[0], o[0] + l, o[0] + l, o[0], o[0]], # x coordinate of points in bottom surface
[o[0], o[0] + l, o[0] + l, o[0], o[0]], # x coordinate of points in upper surface
[o[0], o[0] + l, o[0] + l, o[0], o[0]], # x coordinate of points in outside surface
[o[0], o[0] + l, o[0] + l, o[0], o[0]]]) # x coordinate of points in inside surface
y = np.array([[o[1], o[1], o[1] + w, o[1] + w, o[1]], # y coordinate of points in bottom surface
[o[1], o[1], o[1] + w, o[1] + w, o[1]], # y coordinate of points in upper surface
[o[1], o[1], o[1], o[1], o[1]], # y coordinate of points in outside surface
[o[1] + w, o[1] + w, o[1] + w, o[1] + w, o[1] + w]]) # y coordinate of points in inside surface
z = np.array([[o[2], o[2], o[2], o[2], o[2]], # z coordinate of points in bottom surface
[o[2] + h, o[2] + h, o[2] + h, o[2] + h, o[2] + h], # z coordinate of points in upper surface
[o[2], o[2], o[2] + h, o[2] + h, o[2]], # z coordinate of points in outside surface
[o[2], o[2], o[2] + h, o[2] + h, o[2]]]) # z coordinate of points in inside surface
return x, y, z
def plotCubeAt(pos=(0,0,0), c="b", alpha=0.1, ax=None):
    # Plot a single cube element at position pos
    if ax is not None:
        X, Y, Z = cuboid_data( (pos[0], pos[1], pos[2]) )
        # pass alpha through instead of hard-coding 0.1, so the argument takes effect
        ax.plot_surface(X, Y, Z, color=c, rstride=1, cstride=1, alpha=alpha)
def plotMatrix(ax, x, y, z, data, cmap=viridis, cax=None, alpha=0.1):
# plot a Matrix
norm = matplotlib.colors.Normalize(vmin=data.min(), vmax=data.max())
colors = lambda i,j,k : matplotlib.cm.ScalarMappable(norm=norm,cmap = cmap).to_rgba(data[i,j,k])
for i, xi in enumerate(x):
for j, yi in enumerate(y):
for k, zi, in enumerate(z):
plotCubeAt(pos=(xi, yi, zi), c=colors(i,j,k), alpha=alpha, ax=ax)
if cax !=None:
cbar = matplotlib.colorbar.ColorbarBase(cax, cmap=cmap,
norm=norm,
orientation='vertical')
cbar.set_ticks(np.unique(data))
# set the colorbar transparent as well
cbar.solids.set(alpha=alpha)
if __name__ == '__main__':
# x and y and z coordinates
x = np.array(range(10))
y = np.array(range(10,15))
z = np.array(range(15,20))
data_value = np.random.randint(1,4, size=(len(x), len(y), len(z)) )
print(data_value.shape)
fig = plt.figure(figsize=(10,4))
ax = fig.add_axes([0.1, 0.1, 0.7, 0.8], projection='3d')
ax_cb = fig.add_axes([0.8, 0.3, 0.05, 0.45])
ax.set_aspect('auto')
plotMatrix(ax, x, y, z, data_value, cmap=viridis, cax = ax_cb)
plt.savefig(__file__+".png")
plt.show()
``` | 3,680 |
13,855,056 | I have a list of numbers, let's say `[1091, 2053, 4099, 4909, 5023, 9011]`. Here every number has it's permutation in a list too. Now i want to group these permutations of each other, so the list becomes `[[1091, 9011], [2053, 5023], [4099, 4909]]`. I know how to use [`groupby`](http://docs.python.org/2/library/itertools.html#itertools.groupby) and [`permutations`](http://docs.python.org/2/library/itertools.html#itertools.permutations), but have no idea, what should be they key for `groupby` or how should i solve the problem some other way.
Note: the numbers should be exact permutations, 112 and 121 count, but 112 and 122 don't.
How to group permutations of a number in a list? | 2012/12/13 | [
"https://Stackoverflow.com/questions/13855056",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/596361/"
] | ```
import itertools as it
a = [1091, 2053, 4099, 4909, 5023, 9011]
sort_string = lambda x: sorted(str(x))  # key: the digits of x, sorted - identical for all permutations
[[int(x) for x in v] for k,v in it.groupby(sorted(a, key=sort_string), key=sort_string)]
# [[1091, 9011], [2053, 5023], [4099, 4909]]
``` | You can use `collections.Counter` to represent each number as a tuple of `integer, total_occurrences` and then store all the data in instances in a dictionary:
```
from collections import Counter, defaultdict
dest = defaultdict(list)
data = [1091, 2053, 4099, 4909, 5023, 9011]
data = ((Counter([int(x) for x in str(datum)]), datum) for datum in data)
for numbers, value in data:
numbers = tuple(sorted(numbers.items()))
dest[numbers].append(value)
print dest.values()
# [[1091, 9011], [2053, 5023], [4099, 4909]]
``` | 3,681 |
14,087,547 | **Conclusion:** It's impossible to override or disable Python's built-in escape sequence processing, such that you can skip using the raw prefix specifier. I dug into Python's internals to figure this out. So if anyone tries designing objects that work on complex strings (like regex) as part of some kind of framework, make sure to specify in the docstrings that string arguments to the object's `__init__()` **MUST** include the `r` prefix!
**Original question:** I am finding it a bit difficult to force Python to not "change" anything about a user-inputted string, which may contain among other things, regex or escaped hexadecimal sequences. I've already tried various combinations of raw strings, `.encode('string-escape')` (and its decode counterpart), but I can't find the right approach.
Given an escaped, hexadecimal representation of the Documentation IPv6 address `2001:0db8:85a3:0000:0000:8a2e:0370:7334`, using `.encode()`, this small script (called `x.py`):
```
#!/usr/bin/env python
class foo(object):
__slots__ = ("_bar",)
def __init__(self, input):
if input is not None:
self._bar = input.encode('string-escape')
else:
self._bar = "qux?"
def _get_bar(self): return self._bar
bar = property(_get_bar)
#
x = foo("\x20\x01\x0d\xb8\x85\xa3\x00\x00\x00\x00\x8a\x2e\x03\x70\x73\x34")
print x.bar
```
Will yield the following output when executed:
```
$ ./x.py
\x01\r\xb8\x85\xa3\x00\x00\x00\x00\x8a.\x03ps4
```
Note the `\x20` got converted to an ASCII space character, along with a few others. This is basically correct due to Python processing the escaped hex sequences and converting them to their printable ASCII values.
This can be solved if the initializer to `foo()` was treated as a raw string (and the `.encode()` call removed), like this:
```
x = foo(r"\x20\x01\x0d\xb8\x85\xa3\x00\x00\x00\x00\x8a\x2e\x03\x70\x73\x34")
```
However, my end goal is to create a kind of framework that can be used and I want to hide these kinds of "implementation details" from the end user. If they called `foo()` with the above IPv6 address in escaped hexadecimal form (without the raw specifier) and immediately print it back out, they should get back *exactly* what they put in w/o knowing or using the raw specifier. So I need to find a way to have `foo`'s `__init__()` do whatever processing is necessary to enable that.
**Edit:** Per [this SO question](https://stackoverflow.com/a/647787), it seems it's a defect of Python, in that it **always** performs some kind of escape sequence processing. There does not appear to be any kind of facility to completely turn off escape sequence processing, even temporarily. Sucks. I guess I am going to have to research subclassing `str` to create something like `rawstr` that intelligently determines what escape sequences Python processed in a string, and convert them back to their original format. This is not going to be fun...
**Edit2:** Another example, given the sample regex below:
```
"^.{0}\xcb\x00\x71[\x00-\xff]"
```
If I assign this to a var or pass it to a function **without** using the raw specifier, the `\x71` gets converted to the letter `q`. Even if I add `.encode('string-escape')` or `.replace('\\', '\\\\')`, the escape sequences **are still processed**, thus resulting in this output:
```
"^.{0}\xcb\x00q[\x00-\xff]"
```
How can I stop this, again, without using the raw specifier? Is there some way to "turn off" the escape sequence processing or "revert" it after the fact so that the `q` turns back into `\x71`? Is there a way to process the string and escape the backslashes **before** the escape sequence processing happens? | 2012/12/30 | [
"https://Stackoverflow.com/questions/14087547",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/482691/"
] | I think you have an understandable confusion about the difference between Python string literals (source code representation), Python string objects in memory, and how those objects can be printed (in what format they can be represented in the output).
If you read some bytes from a file into a bytestring you can write them back as is.
`r""` exists only in source code there is no such thing at runtime i.e., `r"\x"` and `"\\x"` are equal, they may even be the exact same string object in memory.
To see that input is not corrupted, you could print each byte as an integer:
```
print " ".join(map(ord, raw_input("input something")))
```
Or just echo as is (there could be a difference but it is unrelated to your `"string-escape"` issue):
```
print raw_input("input something")
```
---
Identity function:
```
def identity(obj):
return obj
```
If you **do nothing** to the string then your users will receive **the exact same object** back. You can provide examples in the docs of what you consider a concise, readable way to represent input strings as Python literals. If you find it confusing to work with binary strings such as `"\x20\x01"` then you could accept an ascii hex-representation instead: `"2001"` (you could use binascii.hexlify/unhexlify to convert one to another).
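For example, a round-trip through `binascii` (a minimal sketch in Python 2, matching the rest of this answer; the bytes are taken from the question):
```
import binascii

raw = "\x20\x01\x0d\xb8"                    # the bytestring as it exists in memory
hex_repr = binascii.hexlify(raw)            # '20010db8' - easy to read, type and store
assert binascii.unhexlify(hex_repr) == raw  # converts back losslessly
```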
---
The regex case is more complex because there are two languages:
1. Escape sequences are interpreted by Python according to its string literal syntax
2. The regex engine interprets the string object as a regex pattern that also has its own escape sequences | I think you will have to go the join route.
Here's an example:
```
>>> m = {chr(c): '\\x{0}'.format(hex(c)[2:].zfill(2)) for c in xrange(0,256)}
>>>
>>> x = "\x20\x01\x0d\xb8\x85\xa3\x00\x00\x00\x00\x8a\x2e\x03\x70\x73\x34"
>>> print ''.join(map(m.get, x))
\x20\x01\x0d\xb8\x85\xa3\x00\x00\x00\x00\x8a\x2e\x03\x70\x73\x34
```
---
I'm not entirely sure *why* you need that though. If your code needs to interact with other pieces of code, I'd suggest that you agree on a defined format, and stick to it. | 3,683 |
45,625,042 | I have a master python script that automates configuring nodes in parallel in a distributed system setup in our lab.
I run multiple instances of kickstart.py and it configures all nodes in parallel. How do I create a log handler such that each instance of kickstart.py configures each node separately in parallel and each instance logs into a different log file? I want to use the python logging module. Any help is appreciated. Thanks | 2017/08/10 | [
"https://Stackoverflow.com/questions/45625042",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8448115/"
] | A solution using `st_distance` from the `sf` package. `my_df_final` is the final output.
```
# Load packages
library(tidyverse)
library(sp)
library(sf)
# Create ID for my_df_1 and my_df_2 based on row id
# This step is not required, just help me to better distinguish each point
my_df_1 <- my_df_1 %>% mutate(ID1 = row.names(.))
my_df_2 <- my_df_2 %>% mutate(ID2 = row.names(.))
# Create spatial point data frame
my_df_1_sp <- my_df_1
coordinates(my_df_1_sp) <- ~START_LONG + START_LAT
my_df_2_sp <- my_df_2
coordinates(my_df_2_sp) <- ~longitude + latitude
# Convert to simple feature
my_df_1_sf <- st_as_sf(my_df_1_sp)
my_df_2_sf <- st_as_sf(my_df_2_sp)
# Set projection based on the epsg code
st_crs(my_df_1_sf) <- 4326
st_crs(my_df_2_sf) <- 4326
# Calculate the distance
m_dist <- st_distance(my_df_1_sf, my_df_2_sf)
# Filter for the nearest
near_index <- apply(m_dist, 1, order)[1, ]
# Based on the index in near_index to select the rows in my_df_2
# Combine with my_df_1
my_df_final <- cbind(my_df_1, my_df_2[near_index, ])
``` | Based on this [answer](https://stackoverflow.com/questions/31668163/geographic-distance-between-2-lists-of-lat-lon-coordinates) you could do
```
library(geosphere)
mat <- distm(my_df_1[2:1], my_df_2[2:1], fun = distVincentyEllipsoid)
cbind(my_df_1, my_df_2[max.col(-mat),])
```
Which gives:
```
# START_LAT START_LONG ID1 latitude longitude depth_top ID2
#10 -33.15000 163.0000 1175 -31.8482 173.2424 1303 144
#10.1 -35.60000 165.1833 528 -31.8482 173.2424 1303 144
#10.2 -34.08333 162.8833 1328 -31.8482 173.2424 1303 144
#10.3 -34.13333 162.5833 870 -31.8482 173.2424 1303 144
#10.4 -34.31667 162.7667 672 -31.8482 173.2424 1303 144
#6 -47.38333 148.9833 707 -44.6570 174.6950 555 1481
#6.1 -47.53333 148.6667 506 -44.6570 174.6950 555 1481
#10.5 -34.08333 162.9000 981 -31.8482 173.2424 1303 144
#6.2 -47.38333 148.9833 756 -44.6570 174.6950 555 1481
#6.3 -47.15000 148.7167 210 -44.6570 174.6950 555 1481
``` | 3,684 |
58,351,041 | I am trying to install a virtualenv in windows 10 using a step process I found on some website. The steps are as follows, but only care about 1-4 for now:
1. Run Windows Power Shell as Administrator
2. pip install virtualenv
3. pip install virtualenvwrapper-win
4. mkvirtualenv โC:\Users\username\Documents\Virtualenvโ
5. cd Test
6. Set-ExecutionPolicy AllSigned | Press Y and Enter
7. Set-ExecutionPolicy RemoteSigned | Press Y and Enter
8. .\Scripts\activate
9. deactivate
Steps 1-3 work fine, but when I try step four I get the following response:
PS C:\WINDOWS\system32> mkvirtualenv 'C:\Users\username\Documents\Virtualenv'
Using base prefix 'c:\users\username\appdata\local\programs\python\python37-32'
New python executable in C:\Users\DANIEL~1\DOCUME~1\VIRTUA~1\Scripts\python.exe
Installing setuptools, pip, wheel...
done.
The filename, directory name, or volume label syntax is incorrect.
The filename, directory name, or volume label syntax is incorrect.
The filename, directory name, or volume label syntax is incorrect.
The cd step following right afterwards does not work either. I am pretty new to python/programming in general, so I might be missing some basic things.
running step 5 gives the following error message:
cd : Cannot find path 'C:\WINDOWS\system32\Virtualenv' because it does not exist.
At line:1 char:1
+ cd Virtualenv
+ ~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (C:\WINDOWS\system32\Virtualenv:String) [Set-Location], ItemNotFoundExce
ption
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand
How do I fix this? Thanks in advance. | 2019/10/12 | [
"https://Stackoverflow.com/questions/58351041",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12204393/"
] | The compiler changes the name of the local function, preventing you from calling it using its original name from the debugger. See [this question](https://stackoverflow.com/questions/45337983/why-local-functions-generate-il-different-from-anonymous-methods-and-lambda-expr) for examples. What you can do is temporarily modify the code to save a reference to the local function in a delegate variable. After recompiling, you can invoke the function through the delegate variable from Quick Watch or the Immediate Window. In your case, add this code to the beginning of the method:
```
Func<string,Task> f = ResetPasswordLocal;
```
Now you can invoke `f` in Quick Watch. | I'll have to say that I haven't tried it and will not bother to do so because there's a lot more to local functions than you think and I would put it very low in terms of priority for the debugger.
Try putting your code in [sharplab.io](https://sharplab.io/) and see what it takes to make that local function. | 3,685 |
53,539,612 | I have a big `tab separated` file like this:
```
chr1 9507728 9517729 0 chr1 9507728 9517729 5S_rRNA
chr1 9537731 9544392 0 chr1 9537731 9547732 5S_rRNA
chr1 9497727 9507728 0 chr1 9497727 9507728 5S_rRNA
chr1 9517729 9527730 0 chr1 9517729 9527730 5S_rRNA
chr8 1118560 1118591 1 chr8 1112435 1122474 AK128400
chr8 1118591 1121351 0 chr8 1112435 1122474 AK128400
chr8 1121351 1121382 1 chr8 1112435 1122474 AK128400
chr8 1132513 1142552 0 chr8 1132513 1142552 AK128400
chr19 53436277 53446295 0 chr19 53436277 53446295 AK128361
chr19 53456313 53465410 0 chr19 53456313 53466331 AK128361
chr19 53465410 53465441 1 chr19 53456313 53466331 AK128361
chr19 53466331 53476349 0 chr19 53466331 53476349 AK128361
```
According to the last column there are 3 groups and every group has 4 rows. Based on the value of the 4th column I want to get the average of the 1st row of every group, the 2nd row of every group, the 3rd row of every group and the 4th row of every group. So the expected output would have 4 rows (since there are 4 rows per group) and 2 columns. The 1st column is an ID, which in this example would hold 1, 2, 3 and 4. The 2nd column would be the average values calculated as described above.
`expected output`:
```
1 0.33
2 0
3 0.66
4 0
```
I am trying to do that in python 2.7 using the following command:
```
file = open('myfile.txt', 'r')
average = []
for i in file:
ave = i[3]/3
average.append(ave)
```
This returns only one number, which is wrong. Do you know how to fix it to get the expected output? | 2018/11/29 | [
"https://Stackoverflow.com/questions/53539612",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10657934/"
] | If there is no common data between different clients/orgs, there is no point in having a shared channel between them. Taking care of permissions over data will complicate your network setup. It would be better to abstract that detail out of the network design.
You should have one org corresponding to each client. In each org there will be a single channel which all the peers in that org will use to communicate. | I think you could encrypt every client's data by passing the transient key to chaincode, and just manage the keys; this may be lightweight and feasible for your scenario. | 3,686 |
22,650,001 | I have a Django application running on [Dotcloud](http://dotcloud.com/ "Dotcloud"). I have tried to add [Logentries](http://logentries.com/ "Logentries") logging which works in normal usage for my site, but causes my cron jobs to fail with this error -
```
Traceback (most recent call last):
  File "/home/dotcloud/current/my_project/manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
    utility.execute()
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 261, in fetch_command
    commands = get_commands()
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 107, in get_commands
    apps = settings.INSTALLED_APPS
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__
    self._setup(name)
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/conf/__init__.py", line 50, in _setup
    self._configure_logging()
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/conf/__init__.py", line 80, in _configure_logging
    logging_config_func(self.LOGGING)
  File "/usr/lib/python2.7/logging/config.py", line 777, in dictConfig
    dictConfigClass(config).configure()
  File "/usr/lib/python2.7/logging/config.py", line 575, in configure
    '%r: %s' % (name, e))
ValueError: Unable to configure handler 'logentries_handler': expected string or buffer
```
This is one of the scripts being run from cron -
```
#!/bin/bash
echo "Loading definitions - check /var/log/supervisor/availsserver.log for results"
. /etc/profile
/home/dotcloud/env/bin/python /home/dotcloud/current/my_project/manage.py load_definitions
```
These are my settings for Logentries -
```
'logentries_handler': {
    'token': os.getenv("LOGENTRIES_TOKEN"),
    'class': 'logentries.LogentriesHandler'
}
....
'logentries': {
    'handlers': ['logentries_handler'],
    'level': 'INFO',
},
```
The LOGENTRIES\_TOKEN is there when I do `dotcloud env list`.
**This is a summary of the symptoms -**
- Logentries logging works from the site in normal usage.
- If I manually run the script - `dotcloud run www ~/current/scripts/load_definitions.sh` it works.
- If I remove the Logentries settings from my `settings.py` the cron jobs work.
- The cron jobs fail if Logentries entries are in my `settings.py`
I have spent hours trying to find a solution. Can anyone help? | 2014/03/26 | [
"https://Stackoverflow.com/questions/22650001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3458323/"
] | This is what you want:
```
List<Card> cards = IntStream.rangeClosed(1, 4)
.boxed()
.flatMap(value ->
IntStream.rangeClosed(1, 13)
.mapToObj(suit -> new Card(value, suit))
)
.collect(Collectors.toList());
```
Points to note:
* you have to box the ints because `flatMap` on primitives doesn't have any
type-conversion overloads like `flatMapToObj` (doesn't exist);
* assign to `List` rather than `ArrayList` as the `Collectors` methods make no
guarantee as to the specific type they return (as it happens it currently is
`ArrayList`, but you can't rely on that);
* use `rangeClosed` for this kind of situation. | Another way to get what you want (based on Maurice Naftalin's answer):
```
List<Card> cards = IntStream.rangeClosed(1, 4)
.mapToObj(value -> IntStream.rangeClosed(1, 13)
.mapToObj(suit -> new Card(value, suit))
)
.flatMap(Function.identity())
.collect(Collectors.toList())
;
```
Additional points to note:
* you have to map the int values to `Stream<Card>` streams, then flatMap said streams via Function.identity(), since flatMapToObj does not exist, yet said operation is translatable to a map to `Stream`, then an identity flatMap. | 3,687 |
51,634,841 | I have a little project in python to do. I have to parse 4 arguments in my program.
so the commands are:
-i (store the source\_file)
-d (store the destination\_file)
-a (store the a folder named: i386, x64\_86 or all )
-p (store the folder named: Linux, Windows or all)
The folder Linux has 2 folders in it: i386 and x64\_86; the folder windows has those 2 folders too.
My script has to copy the folders as I tell it; there are 9 combinations, for example:
Python exemple.py -i -d -a i386 -p windows
So in this example I have to copy just the folder windows containing just the folder i386.
To copy the files I use shutil.copytree(source\_file, destination, ignore=ignore\_patterns(.....)).
I manage to access the input and the output (args.input, args.output), but for arch and platform I have to access the choices and I don't know how.
Any idea please ?
```
pars = argparse.ArgumentParser(prog='copy dirs script')
a1 = pars.add_argument("-i", "--input", required=True, nargs="?",
help="the source dirctory is /""X:/.......")
a2 = pars.add_argument("-o", "--output", required=True, nargs="?",
help="the destination dirctory is the curently working dirctory")
pars.add_argument("-a", "--arch", choices=["all", "i386", "x86_64"], required=True,
help="Targeted check architecture: 32b, 64b, All")
pars.add_argument("-p", "--platform", choices=["all", "windows", "linux"], required=True,
help="Targeted check platform: Windows, Linux, All")
```
Any idea please ? | 2018/08/01 | [
"https://Stackoverflow.com/questions/51634841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10123257/"
] | After peeking into [the PHP source code](https://github.com/php/php-src/blob/master/ext/gd/gd.c#L2287), to have some insights about the "[imagecreatefromstring](http://php.net/manual/en/function.imagecreatefromstring.php)" function, I've discovered that it handles only the following image formats:
* JPEG
* PNG
* GIF
* WBM
* GD2
* BMP
* WEBP
PHP recognizes the format of the image contained in the argument of the "imagecreatefromstring" function by checking the image signature, as explained [here](https://oroboro.com/image-format-magic-bytes/).
When an unknown signature is detected, the warning "Data is not in a recognized format" is raised.
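To illustrate the idea, here is a small Python sketch of signature-based detection (the magic bytes are the standard ones; this is only an illustration, not the actual PHP implementation):
```
# Leading "magic bytes" mapped to image formats, similar in spirit to the
# checks performed by imagecreatefromstring.
SIGNATURES = {
    b"\xff\xd8\xff": "JPEG",
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"GIF8": "GIF",
    b"BM": "BMP",
}

def sniff(data):
    for magic, fmt in SIGNATURES.items():
        if data.startswith(magic):
            return fmt
    return None  # unknown signature -> "Data is not in a recognized format"
```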
Therefore, the only plausible explanation for the error that you are experiencing is that **your PPTX file contains an image that is not in one of the above formats**.
You can view the format of the images inside your PPTX file by changing its extension from ".pptx" to ".zip" and then opening it.
You should see something like this:
```
Archive: sample.pptx
Length Date Time Name
--------- ---------- ----- ----
5207 1980-01-01 00:00 [Content_Types].xml
...
6979 1980-01-01 00:00 ppt/media/image1.jpeg
6528 1980-01-01 00:00 ppt/media/image2.jpeg
178037 1980-01-01 00:00 ppt/media/image3.jpeg
229685 1980-01-01 00:00 ppt/media/image4.jpeg
164476 1980-01-01 00:00 ppt/media/image5.jpeg
6802 1980-01-01 00:00 ppt/media/image6.png
19012 1980-01-01 00:00 ppt/media/image7.png
32146 1980-01-01 00:00 ppt/media/image8.png
...
--------- -------
795623 74 files
```
As you can see, my **sample.pptx** file contains some images in JPEG and PNG format.
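If you prefer not to rename the file, the same inspection can be done programmatically, for example with Python's `zipfile` module (a small sketch; the filename is a placeholder):
```
import zipfile

# A PPTX file is just a ZIP archive; list the embedded media to see their formats.
with zipfile.ZipFile("sample.pptx") as pptx:
    for name in pptx.namelist():
        if name.startswith("ppt/media/"):
            print(name)  # e.g. ppt/media/image1.jpeg
```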
Maybe your sample file contains some slides with images in a vector format (WMF or EMF); it's unclear to me (since I didn't find any reference in [the docs](https://media.readthedocs.org/pdf/phppowerpoint/latest/phppowerpoint.pdf)) if those formats are supported or not.
You should also try other PPTX files, just to make sure that the problem is not related to a specific one (you can find some under "[test/resources/files](https://github.com/PHPOffice/PHPPresentation/tree/develop/tests/resources/files)").
I've searched for a list of the supported image formats for PowerPoint files, but I haven't been able to find a precise response.
The only relevant links that I've found are the following:
* [ECMA 376 Open Office XML 1st Edition - Image Part](https://c-rex.net/projects/samples/ooxml/e1/Part1/OOXML_P1_Fundamentals_Image_topic_ID0EGXDO.html#topic_ID0EGXDO)
* [Office Implementation Information for ISO/IEC 29500
Standards Support](https://interoperability.blob.core.windows.net/files/MS-OI29500/[MS-OI29500].pdf) (2.1.32 Part 1 Section 15.2.14, Image Part, pages 57/58)
* [Images in Open XML documents](https://blogs.msdn.microsoft.com/dmahugh/2006/12/10/images-in-open-xml-documents/) (read the comments at the end of the page)
* [Question on OpenXML Developer Forum](http://openxmldeveloper.org/discussions/formats/f/15/p/418/944.aspx#944)
This means that the presence in the PPTX file of an image in the TIFF or PICT (QuickDraw) format could also lead to the error under consideration. | Save your PPTX again in PPT 2007 format in OpenOffice or MS PowerPoint. It's a format issue. You are opening a very recent PPT format with 2007. | 3,688 |
56,581,577 | Python says that TrackerMedianFlow\_create() is no longer an attribute of cv2.
I've looked here but it's not the same: [OpenCV, How to pass parameters into cv2.TrackerMedianFlow\_create function?](https://stackoverflow.com/questions/47723349/opencv-how-to-pass-parameters-into-cv2-trackermedianflow-create-function)
I've asked on several discord servers without success.
I've copied this code directly from my textbook with ctrl + c so it should be exact.
```
import cv2
import numpy as np
cap = cv2.VideoCapture("../data/traffic.mp4")
_, frame = cap.read()
bbox = cv2.selectROI(frame, False, True)
cv2.destroyAllWindows()
tracker = cv2.TrackerMedianFlow_create()
status_tracker = tracker.init(frame, bbox)
fps = 0
while True:
status_cap, frame = cap.read()
if not status_cap:
break
if status_tracker:
timer = cv2.getTickCount()
status_tracker, bbox = tracker.update(frame)
if status_tracker:
x, y, w, h = [int(i) for i in bbox]
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 15)
fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer);
cv2.putText(frame, "FPS: %.0f" % fps, (0, 80), cv2.FONT_HERSHEY_SIMPLEX, 3.5, (0, 0, 0), 8);
else:
cv2.putText(frame, "Tracking failure detected", (0, 80), cv2.FONT_HERSHEY_SIMPLEX, 3.5, (0,0,255), 8)
cv2.imshow("MedianFlow tracker", frame)
k = cv2.waitKey(1)
if k == 27:
break
cv2.destroyAllWindows()
```
My line that causes the problem is:
```
tracker = cv2.TrackerMedianFlow_create()
```
Up until there the code runs.
```
Traceback (most recent call last):
File "D:/Documents/E-Books/Comp Vision/opencv3computervisionwithpythoncookbook_ebook/OpenCV3ComputerVisionwithPythonCookbook_Code/Chapter04/myPart5.py", line 11, in <module>
tracker = cv2.TrackerMedianFlow_create()
AttributeError: module 'cv2.cv2' has no attribute 'TrackerMedianFlow_create'
```
I expected it to work without an error. | 2019/06/13 | [
"https://Stackoverflow.com/questions/56581577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7174600/"
] | For OpenCV 4.5.1 users (requires the opencv-contrib-python package):
```
import cv2
cv2.legacy_TrackerMedianFlow()
``` | `TrackerMedianFlow` is a [module within the opencv-contrib package](https://github.com/opencv/opencv_contrib/tree/master/modules/tracking/src), and does not come standard with the official OpenCV distribution. You will need to install the opencv-contrib package to access `TrackerMedianFlow_create()`
Per the [documentation](https://pypi.org/project/opencv-contrib-python/), you should uninstall the package without the additional modules and proceed to reinstall opencv with the additional modules you need.
```
pip uninstall opencv-python
pip install opencv-contrib-python
``` | 3,689 |
55,643,507 | I am getting this valid error while preprocessing some data:
```
9:46:56.323 PM default_model Function execution took 6008 ms, finished with status: 'crash'
9:46:56.322 PM default_model Traceback (most recent call last):
File "/user_code/main.py", line 31, in default_model
train, endog, exog, _, _, rawDf = preprocess(ledger, apps)
File "/user_code/Wrangling.py", line 73, in preprocess
raise InsufficientTimespanError(args=(appDf, locDf))
```
That's occurring here:
```
async def default_model(request):
request_json = request.get_json()
if not request_json:
return '{"error": "empty body." }'
if 'transaction_id' in request_json:
transaction_id = request_json['transaction_id']
apps = [] # array of apps whose predictions we want, or uempty for all
if 'apps' in request_json:
apps = request_json['apps']
modelUrl = None
if 'files' in request_json:
try:
files = request_json['files']
modelUrl = getModelFromFiles(files)
except:
return package(transaction_id, error="no model to execute")
else:
return package(transaction_id, error="no model to execute")
if 'ledger' in request_json:
ledger = request_json['ledger']
try:
train, endog, exog, _, _, rawDf = preprocess(ledger, apps)
# ...
except InsufficientTimespanError as err:
return package(transaction_id, error=err.message, appDf=err.args[0], locDf=err.args[1])
```
And preprocess is correctly throwing my custom error:
```
def preprocess(ledger, apps=[]):
"""
convert ledger from the server, which comes in as an array of csv entries.
normalize/resample timeseries, returning dataframes
"""
appDf, locDf = splitLedger(ledger)
if len(appDf) < 3 or len(locDf) < 3:
raise InsufficientDataError(args=(appDf, locDf))
endog = appDf['app_id'].unique().tolist()
exog = locDf['location_id'].unique().tolist()
rawDf = normalize(appDf, locDf)
trainDf = cutoff(rawDf.copy(), apps)
rawDf = cutoff(rawDf.copy(), apps, trim=False)
# TODO - uncomment when on realish data
if len(trainDf) < 2 * WEEKS:
raise InsufficientTimespanError(args=(appDf, locDf))
```
The thing is, it is in a `try`/`except` block precisely because I want to trap the error and return a payload with the error, rather than crashing with a 500 error. But it's crashing on my custom error, in the try block, anyway. Right on that line calling `preprocess`.
This must be a failure on my part to conform to proper python code. But I'm not sure what I am doing wrong. The environment is python 3.7
Here's where that error is defined, in Wrangling.py:
```
class WranglingError(Exception):
"""Base class for other exceptions"""
pass
class InsufficientDataError(WranglingError):
"""insufficient data to make a prediction"""
def __init__(self, message='insufficient data to make a prediction', args=None):
super().__init__(message)
self.message = message
self.args = args
class InsufficientTimespanError(WranglingError):
"""insufficient timespan to make a prediction"""
def __init__(self, message='insufficient timespan to make a prediction', args=None):
super().__init__(message)
self.message = message
self.args = args
```
And here is how main.py declares (imports) it:
```
from Wrangling import preprocess, InsufficientDataError, InsufficientTimespanError, DataNotNormal, InappropriateValueToPredict
``` | 2019/04/12 | [
"https://Stackoverflow.com/questions/55643507",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/732570/"
] | Your `preprocess` function is declared `async`. This means the code in it isn't actually run where you call `preprocess`, but instead when it is eventually `await`ed or passed to a main loop (like `asyncio.run`). Because the place where it is run is no longer in the try block in `default_model`, the exception is not caught.
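A minimal illustration of the behaviour, with hypothetical names unrelated to the question's code:
```
import asyncio

async def boom():
    raise ValueError("raised when the coroutine runs, not when it is called")

try:
    coro = boom()   # only creates a coroutine object; the body has not run yet
except ValueError:
    print("never reached - calling an async function raises nothing")

asyncio.run(coro)   # the ValueError propagates from here instead
```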
You could fix this in a few ways:
* make `preprocess` not async
* make `default_model` async too, and `await` on `preprocess`. | Do the line numbers in the error match up with the line numbers in your code? If not is it possible that you are seeing the error from a version of the code before you added the try...except? | 3,692 |
54,836,440 | There is a chance this is still a problem and the Pyinstaller and/or Folium people have no interest in fixing it, but I'll post it again here in case someone out there has discovered a workaround.
I have a program that creates maps, geocodes etc and recently added the folium package to create some interactive maps in html format. I always compile my code using pyinstaller so that others at my company can just use the executable rather than running the python code. If I run my code in an IDE, it loads, runs and performs exactly as expected. However, when I attempt to compile while I have `import folium` somewhere in my script, I get an error when trying to run the executable that pyinstaller creates.
The error text reads something like this:
```
Traceback (most recent call last):
File "analysisSuite.py", line 58, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
exec(bytecode, module.__dict__)
File "site-packages\folium\__init__.py", line 8, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
exec(bytecode, module.__dict__)
File "site-packages\branca\__init__.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
exec(bytecode, module.__dict__)
File "site-packages\branca\colormap.py", line 29, in <module>
File "site-packages\pkg_resources\__init__.py", line 1143, in resource_stream
File "site-packages\pkg_resources\__init__.py", line 1390, in get_resource_stream
File "site-packages\pkg_resources\__init__.py", line 1393, in get_resource_string
File "site-packages\pkg_resources\__init__.py", line 1469, in _get
File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 479, in get_data
with open(path, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\natha\\AppData\\Local\\Temp\\_MEI309082\\branca\\_cnames.json'
[30956] Failed to execute script analysisSuite
```
I am still relatively new to Python, so trying to decipher what the issue is by this text is pretty overwhelming. I have no idea if there is a workaround, where I just need to edit a file, add a file or add some parameter to pyinstaller, but perhaps someone else out there can read this and has an idea of what could be causing this problem. Thanks in advance to anyone that has suggestions.
EDIT: The problem seems to be with branca, which is a dependency of folium. It looks for that \_cnames.json file which is in the site-packages\branca folder but either doesn't get copied as it should or perhaps I need to somehow identify in my script where it should look for those files and then just manually copy them into a folder that I choose.
ADDITIONAL UPDATE: I've been testing and testing and have determined the heart of the problem. When you run your exe, it gets unpacked in a temp folder. One of the modules within `branca` is `colormap.py`. In the `colormap` file, there are essentially three lines that keep `branca` from loading correctly.
```
resource_package = __name__
resource_path_schemes = '/_schemes.json'
resource_path_cnames = '/_cnames.json'
```
So, when the executable gets unpacked in this temp folder and branca tries to load up, because of these above lines, it expects these two files to also be in this temp folder. Of course, they won't be, because they're being told to always and only be in the folder where the colormap module lives. The key here is making the path reference relative, so that it doesn't look in the temp folder, and also dynamic, so that wherever you have your executable, as long as those json files are present in some folder it "knows" about, you'll be good. Now I just need to figure out how to do that. | 2019/02/22 | [
"https://Stackoverflow.com/questions/54836440",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9431874/"
] | I had the same problem. Pyinstaller could not work with the Python Folium package. I could not get your cx\_Freeze solution to work due to issues with Python 3.7 and cx\_Freeze but with a day of stress I found a Pyinstaller solution which I am sharing with the community.
Firstly you have to edit these 3 files:
1. \folium\folium.py
2. \folium\raster\_layers.py
3. \branca\element.py
Makes the following changes, commenting out the existing ENV line and replacing with the code below:
```
#ENV = Environment(loader=PackageLoader('folium', 'templates'))
import os, sys
from jinja2 import FileSystemLoader
if getattr(sys, 'frozen', False):
# we are running in a bundle
templatedir = sys._MEIPASS
else:
# we are running in a normal Python environment
templatedir = os.path.dirname(os.path.abspath(__file__))
ENV = Environment(loader=FileSystemLoader(templatedir + '\\templates'))
```
Create this spec file in your root folder, obviously your pathex and project name will be different:
```
# -*- mode: python -*-
block_cipher = None
a = Analysis(['time_punch_map.py'],
pathex=['C:\\Users\\XXXX\\PycharmProjects\\TimePunchMap'],
binaries=[],
datas=[
(".\\venv\\Lib\\site-packages\\branca\\*.json","branca"),
(".\\venv\\Lib\\site-packages\\branca\\templates","templates"),
(".\\venv\\Lib\\site-packages\\folium\\templates","templates"),
],
hiddenimports=[],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='time_punch_map',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
runtime_tmpdir=None,
console=True )
```
Finally generate the single exe with this command from the terminal:
```
pyinstaller time_punch_map.spec
``` | I could not get this to work using pyinstaller. I had to instead use cx\_Freeze.
`pip install cx_Freeze`
cx\_Freeze requires that a setup.py file is created, typically in the same folder as the main script that is being converted to an exe. My setup.py file looks like this:
```
import sys
from cx_Freeze import setup, Executable
import os.path
PYTHON_INSTALL_DIR = os.path.dirname(os.path.dirname(os.__file__))
os.environ['TCL_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tcl8.6')
os.environ['TK_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tk8.6')
# Dependencies are automatically detected, but it might need fine tuning.
build_exe_options = {"packages": ["pkg_resources","asyncio","os","pandas","numpy","idna","folium","branca","jinja2","matplotlib"]}
# GUI applications require a different base on Windows (the default is for a
# console application).
base = None
if sys.platform == "win32":
base = "Win32GUI"
options = {
'build_exe': {
'include_files':[
os.path.join(PYTHON_INSTALL_DIR, 'DLLs', 'tk86t.dll'),
os.path.join(PYTHON_INSTALL_DIR, 'DLLs', 'tcl86t.dll'),
# 'C:\\Users\\natha\\AppData\\Local\\Programs\\Python\\Python36-32\\Lib\\site-packages\\branca\\_cnames.json',
# 'C:\\Users\\natha\\AppData\\Local\\Programs\\Python\\Python36-32\\Lib\\site-packages\\branca\\_schemes.json'
],
},
}
setup( name = "MyProgram",
version = "0.1",
description = "MyProgram that I created",
options = {"build_exe": build_exe_options},
executables = [Executable("myProgram.py", base=base)])
```
Notice I had to add various folium dependencies to the "packages" dictionary, such as branca, asyncio and pkg\_resources. Also, I did independent updates for asyncio, pkg\_resources and even setuptools using pip - for example:
`pip install --upgrade setuptools`
Once those were in place, I would open a command prompt from the directory where my setup.py file is saved and just type `python setup.py build`
Once this runs, I have a new folder in my directory called `build` and inside of that is another folder, inside of which is my exe, which ran perfectly. Hope this helps someone else that may encounter this problem. | 3,693 |
20,364,207 | Hey, I've been trying to add Python 3.3 to Windows PowerShell by replacing 27 with 33 in the path.
I tried to post a screenshot but turns out I need 10 rep so I'll just copy and paste what I've attempted:
```
[Enviroment]::SetEnviromentVariable("Path", "$env:Path;C:\Python33", "User")
```
>
```
[Enviroment]::SetEnviromentVariable("Path", "$env:Path;C:\Python33")
```
>
```
[Enviroment]::SetEnviromentVariable("Path", "$env:Path;C:\Python33\python.exe", "User")
```
>
```
[Enviroment]::SetEnviromentVariable("Path", "$env:Path;C:\Python33;C:\Python33\Scripts", "User")
```
>
```
[Enviroment]::SetEnviromentVariable("Path", "$env:Path;C:\Python33\", "User")
```
The path to the folder where python.exe resides is: C:\Python33
Somewhere I'm doing something wrong but am not sure where.
Help a fellow out with his foray into programming?
Thanks. | 2013/12/03 | [
"https://Stackoverflow.com/questions/20364207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3063721/"
] | Python 3.3 comes with PyLauncher (py.exe), which is installed in the C:\Windows directory (already on the path) and enables any installed Python to be executed via command line as follows:
```
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.
PS C:\> py
Python 3.3.3 (v3.3.3:c3896275c0f6, Nov 18 2013, 21:19:30) [MSC v.1600 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> ^Z
PS C:\> py -2
Python 2.7.6 (default, Nov 10 2013, 19:24:18) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> ^Z
PS C:\> py -3
Python 3.3.3 (v3.3.3:c3896275c0f6, Nov 18 2013, 21:19:30) [MSC v.1600 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
Note that the default Python if both 2.X and 3.X are installed is 2.X (3.X in later versions of Python), but this can be overridden with the `-3` switch or the default changed by setting the `PY_PYTHON` environment variable.
Also, if you install Python 3.3 last and register extensions, PyLauncher will be the default program for .py files and adding a special `#!` comment to the top of a script will specify the version of Python to use for the script. This allows you to have Python 2 and Python 3 files on the desktop and just double-click them to run the correct version of Python for that script.
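For example, a script whose first line is one of PyLauncher's virtual shebangs should always be run with Python 2, regardless of the default (a minimal illustration):
```
#! python2
# hello.py - the launcher reads the line above and picks a Python 2.x interpreter
print "hello from Python 2"
```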
See [Python Launcher for Windows](http://docs.python.org/3/using/windows.html?highlight=launcher#python-launcher-for-windows) in the [Python 3 docs](http://docs.python.org/3). | The windows environment variable `path` is searched left to right. If the path to the 2.7 binaries is still in the variable, it will never find the 3.3 binaries, whose path you are appending to the end of the path variable.
Also, you are not adding the path to PowerShell. The windows python binaries are what PowerShell considers legacy executables. What you are doing is telling the OS where executable binaries are. PowerShell knows how to use that info to execute those binaries without an absolute path.
to do what you are looking to do in Powershell, try something like this
```
$env:Path = ((($env:Path -split ";") | Where-Object { $_ -notlike "*Python*"}) -join ";") + ";C:\Python33"
```
To make it persist, do this next
```
[Environment]::SetEnvironmentVariable("Path",$env:Path, "User")
``` | 3,696 |
48,272,511 | I have CLI tool I need open ([indy](https://github.com/hyperledger/indy-node/blob/stable/getting-started.md)), and then execute some commands.
So I want to write a bash script to do this for me. Using python as example it might look like:
```
#!/bin/bash
python
print ("hello world")
```
But of course all this does is open python; it doesn't enter the commands. How would I make this work?
My development environment is Windows, and the run time environment will be a linux docker container.
Edit: It looks like this approach will work for what I'm actually doing, it seems like Python doesn't like it though. Any clues why? | 2018/01/16 | [
"https://Stackoverflow.com/questions/48272511",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1068446/"
] | You forgot to add minutes to `setMinutes`:
```
function getUserTimeZoneDateTime() {
var currentUtcDateTime = moment.utc().toDate();
var mod_start = new Date(currentUtcDateTime.setMinutes(currentUtcDateTime.getMinutes() + GlobalValues.OffsetMinutesFromUTC - currentUtcDateTime.getTimezoneOffset()));
var currentUserDateTime= moment(mod_start).format('MM/DD/YYYY h:mm A');
return currentUserDateTime;
};
```
The value you were passing in the parentheses was wrong; the difference is always an integer number of hours.
You can also use easier way: `moment(obj).utcOffset(OffsetMinutesFromUTC);` to set offset:
```
function getUserTimeZoneDateTime() {
var currentUtcDateTime = moment.utc().toDate();
return moment(currentUtcDateTime).utcOffset(GlobalValues.OffsetMinutesFromUTC - currentUtcDateTime.getTimezoneOffset()).format('MM/DD/YYYY h:mm A');
};
``` | ```
var now = new Date().getTime();
```
This gets the time and stores it in the variable, here called `now`.
It should get the time wherever the user is.
Hope this helps! | 3,697 |
73,064,635 | I was trying to capture a video in kivy/android using camera4kivy, but it seems that this function won't work. I tried capturing video with location, subdir and filename (\*\*kwargs), but still nothing happened.
```
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.image import Image
from camera4kivy.preview import Preview
class CamApp(App):
def build(self):
self.cam = Preview()
self.cam.connect_camera(enable_analyze_pixels=True)
self.cam.select_camera('1')
box1 = BoxLayout()
box1.add_widget(self.cam)
try:
self.cam.capture_video(location = 'shared', subdir='myapp', name='myvid')
except Exception as e: print(e)
return box1
def on_stop(self):
self.cam.disconnect_camera()
return super().on_stop()
if __name__ == '__main__':
CamApp().run()
```
```
07-21 16:17:14.405 28320 29758 I python : JVM exception occurred: Attempt to invoke virtual method 'void androidx.camera.core.VideoCapture.startRecording(androidx.camera.core.VideoCapture$OutputFileOptions, java.util.concurrent.Executor, androidx.camera.core.VideoCapture$OnVideoSavedCallback)' on a null object reference java.lang.NullPointerException
07-21 16:17:14.406 28320 28320 I python : Traceback (most recent call last):
07-21 16:17:14.406 28320 28320 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/android/runnable.py", line 38, in run
07-21 16:17:14.407 28320 28320 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/camera4kivy/preview_camerax.py", line 289, in do_select_camera
07-21 16:17:14.407 28320 28320 I python :   File "jnius/jnius_export_class.pxi", line 857, in jnius.jnius.JavaMethod.__call__
07-21 16:17:14.407 28320 28320 I python :   File "jnius/jnius_export_class.pxi", line 954, in jnius.jnius.JavaMethod.call_method
07-21 16:17:14.407 28320 28320 I python :   File "jnius/jnius_utils.pxi", line 91, in jnius.jnius.check_exception
07-21 16:17:14.407 28320 28320 I python : jnius.jnius.JavaException: JVM exception occurred: Attempt to invoke virtual method 'void androidx.camera.lifecycle.ProcessCameraProvider.unbindAll()' on a null object reference java.lang.NullPointerException
07-21 16:17:14.408 28320 29758 I python : [WARNING] [Base ] Unknown provider
07-21 16:17:14.408 28320 29758 I python : [INFO ] [Base ] Start application main loop
07-21 16:17:14.411 28320 29758 I python : [INFO ] [Base ] Leaving application in progress...
07-21 16:17:14.412 28320 29758 I python : Traceback (most recent call last):
07-21 16:17:14.412 28320 29758 I python :   File "/home/testapp/.buildozer/android/app/main.py", line 31, in <module>
07-21 16:17:14.412 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/app.py", line 955, in run
07-21 16:17:14.412 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/base.py", line 574, in runTouchApp
07-21 16:17:14.413 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/base.py", line 339, in mainloop
07-21 16:17:14.413 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/base.py", line 391, in idle
07-21 16:17:14.413 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/clock.py", line 783, in tick_draw
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 662, in kivy._clock.CyClockBase._process_events_before_frame
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 708, in kivy._clock.CyClockBase._process_events_before_frame
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 704, in kivy._clock.CyClockBase._process_events_before_frame
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 218, in kivy._clock.ClockEvent.tick
07-21 16:17:14.414 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/uix/anchorlayout.py", line 122, in do_layout
07-21 16:17:14.415 28320 29758 I python :   File "kivy/properties.pyx", line 520, in kivy.properties.Property.__set__
07-21 16:17:14.415 28320 29758 I python :   File "kivy/properties.pyx", line 1478, in kivy.properties.ReferenceListProperty.set
07-21 16:17:14.415 28320 29758 I python :   File "kivy/properties.pyx", line 606, in kivy.properties.Property._dispatch
07-21 16:17:14.415 28320 29758 I python :   File "kivy/_event.pyx", line 1307, in kivy._event.EventObservers.dispatch
07-21 16:17:14.416 28320 29758 I python :   File "kivy/_event.pyx", line 1213, in kivy._event.EventObservers._dispatch
07-21 16:17:14.416 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/camera4kivy/preview_camerax.py", line 159, in on_size
07-21 16:17:14.416 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/camera4kivy/preview_camerax.py", line 217, in stop_capture_video
07-21 16:17:14.416 28320 29758 I python :   File "jnius/jnius_export_class.pxi", line 857, in jnius.jnius.JavaMethod.__call__
07-21 16:17:14.417 28320 29758 I python :   File "jnius/jnius_export_class.pxi", line 954, in jnius.jnius.JavaMethod.call_method
07-21 16:17:14.417 28320 29758 I python :   File "jnius/jnius_utils.pxi", line 91, in jnius.jnius.check_exception
07-21 16:17:14.417 28320 29758 I python : jnius.jnius.JavaException: JVM exception occurred: Attempt to invoke virtual method 'void androidx.camera.core.VideoCapture.stopRecording()' on a null object reference java.lang.NullPointerException
07-21 16:17:14.417 28320 29758 I python : Python for android ended.
07-21 16:17:14.540 28320 29758 F com.moria.test: mutex.cc:340] destroying mutex with owner or contenders. Owner:29737
07-21 16:17:14.541 28320 29737 F com.moria.test: debugger_interface.cc:356] Check failed: removed_it == removed_entries.end()
``` | 2022/07/21 | [
"https://Stackoverflow.com/questions/73064635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19499522/"
] | You can use `Map` collection:
```
new Map(fooArr.map(i => [i.name, i.surname]));
```
As [mdn says about `Map` collection](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map):
>
> The Map object holds key-value pairs and remembers the original
> insertion order of the keys. Any value (both objects and primitive
> values) may be used as either a key or a value.
>
>
>
An example:
```js
let fooArr = [
{ name: 'name 1', surname: 'surname 1' },
{ name: 'name 2', surname: 'surname 2' }
];
let result = new Map(fooArr.map(i => [i.name, i.surname]));
console.log(JSON.stringify([...result]));
```
As an alternative, you can use `Set` or just create a simple object. Objects hold `key-value` pairs too. Let me show an example:
```js
let fooArr = [
{ name: 'foo', surname: 'bar' },
{ name: 'hello', surname: 'world' }
];
let object = fooArr.reduce(
(obj, item) => Object.assign(obj, { [item.name]: item.surname }), {});
console.log(object)
``` | To add to the previous answer: this React guide explains rendering lists and how to set keys for the items in them: <https://reactjs.org/docs/lists-and-keys.html#keys>
I would recommend making `<div>`s for each result, or a `<table>` with `<tr>` and `<td>` to store the individual items. Give each div or row a key and it becomes a lot easier to work with afterwards. | 3,698 |
53,900,909 | I'm new to Python. Why is this code not printing the top 50 films?
```
#!/usr/bin/python3
import requests
from bs4 import BeautifulSoup
import warnings
warnings.filterwarnings("ignore", category=UserWarning, module='bs4')
# website
url = "https://www.imdb.com/search/title?release_date="
year = input("Enter you're fav year for movie display: ")
output = url+year
# extracting the info from website
soup = BeautifulSoup(output, "lxml")
# Display the top 50 films
i = 1
movieList = soup.find_all('div', attrs={'class': 'lister-item mode-advanced'})
for x in movieList:
div = x.find('div', attrs={'class': 'lister-item-content'})
print(str(i) + '.')
header = x.findChild('h', attrs={'class': 'lister-item-header'})
print('Movie: ' + str(header[0].findChild('a'))
[0].contents[0].encode('utf-8').decode('ascii', 'ignore')) # and can someone tell me what this is.. because I'm following some guide and I didn't understand this line.
i += 1
```
My current output is empty; I can't see anything on the terminal.
```
0/50 [00:00<?, ?it/s]1.
Traceback (most recent call last):
  File "movie_recom.py", line 26, in <module>
    print('Movie: ' + str((header[0].findChild('a'))
TypeError: 'NoneType' object is not subscriptable
```
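(A hedged aside: the `TypeError` happens because `findChild('h')` matches no tag on the page, so `header` is `None` and `header[0]` fails; the listing titles live in an `h3` tag, e.g. `x.findChild('h3', attrs={'class': 'lister-item-header'})`, which is also what the fix further below uses.)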
I need this output:
```
Most Popular Feature Films Released 2018-01-01:
1. Movie: Avengers: Infinity War
2. Movie: Venom
3. Movie: A Quiet Place
4. Movie: Black Panther
5. Movie: I Feel Pretty
6. Movie: Deadpool 2
7. Movie: Ready Player One
8. Movie: Super Troopers 2
9. Movie: Rampage
10. Movie: Den of Thieves
```
and so on until 50.
Thanks in advance. | 2018/12/23 | [
"https://Stackoverflow.com/questions/53900909",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10222187/"
] | You haven't issued a request yet; once you do, you can parse the response content.
This should get the full list:
```
r = requests.get(output)
soup = BeautifulSoup(r.text, "lxml")
# Display the top 50 films
movieList = soup.find_all('div', attrs={'class': 'lister-item mode-advanced'})
for n, x in enumerate(movieList, 1):
div = x.find('div', attrs={'class': 'lister-item-content'})
print(str(n)+'.', div.find('a', href=True).text)
```
will return:
```
1. Aquaman
2. Mowgli: Legend of the Jungle
3. Spider-Man: Into the Spider-Verse
...
50. The Rookie
``` | Thanks guys for the help, but I already solved it.
```
from tqdm import tqdm

i = 1
movieList = soup.find_all('div', attrs={'class': 'lister-item mode-advanced'})
for x in tqdm(movieList):
div = x.find('div', attrs={'class': 'lister-item-content'})
# print(str(i) + '.')
header = x.findChild('h3', attrs={'class': 'lister-item-header'})
print(str(i) + '.' + header.findChild('a').text)
i += 1
``` | 3,701 |
3,413,144 | I am using Selenium RC to do some testing now, and the driver I use is Python.
But now I am facing a problem: every time Selenium RC runs and opens a URL, it opens 2 windows, one for logging and the other for showing the HTML content. And I can't close them both from the script.
Here is my script:
```
#!/usr/bin/env python
#-*-coding:utf-8-*-
from selenium import selenium
def main():
sel = selenium('localhost', 4444, '*firefox', 'http://www.sina.com.cn/')
sel.start()
try:
sel.open('http://www.sina.com.cn/')
except Exception, e:
print e
else:
print sel.get_title()
sel.close()
sel.stop()
if __name__ == '__main__':
main()
```
It is very easy to understand. What I really want is to close all windows that selenium opens. I've tried close() and stop(), but neither of them works. | 2010/08/05 | [
"https://Stackoverflow.com/questions/3413144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/411728/"
] | I've fixed this problem.
It happened because I had installed firefox-bin, not firefox.
Now that I've removed firefox-bin and installed firefox, it works.
stop() will close all windows that selenium opened.
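For illustration, here is a minimal, hedged sketch based on the script in the question, using `try`/`finally` so `stop()` runs (and both RC windows get closed) even if `open()` raises:

```
from selenium import selenium  # the old Selenium RC client, as in the question

def main():
    sel = selenium('localhost', 4444, '*firefox', 'http://www.sina.com.cn/')
    sel.start()
    try:
        sel.open('http://www.sina.com.cn/')
        print sel.get_title()
    finally:
        # stop() shuts down the browser session and closes the RC windows
        sel.stop()

if __name__ == '__main__':
    main()
```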
Thank you for the reminder, [AutomatedTester](https://stackoverflow.com/users/108827/automatedtester) | I may suggest issuing a system command from Python to close the Firefox windows
Bussiere | 3,702 |
11,479,955 | I started learning Python today from the tutorial on the official site.
When reading about **filter(function, sequence)** I thought of making a function that returns whether a number is prime, to use it with the filter.
```
notDividedBy = [2,3,4,5,6,7,8,9]
def prime(num):
"""True if num is prime, false otherwise"""
copy = notDividedBy[:]
check = True
if num in copy:
copy.remove(num)
for x in copy:
if num % x == 0:
check = False
break
return check
```
The above code works in the shell.
My question is: since I feel this is a working but not the most elegant solution, can anyone transform this code into something more Pythonic (better structure? fewer lines?)
I believe it would help me better understand the basics of the language.
The thing is, don't use any imports or anything, just simple stuff. | 2012/07/14 | [
"https://Stackoverflow.com/questions/11479955",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/596298/"
] | Creating many many copies of lists is not a particularly efficient way of doing things. Instead use the `xrange()` (Python 2.x) or `range()` (Python 3) iterator. Here's one (naive) way you could implement a primality test:
```
from math import sqrt
def isPrime(n):
if n < 2: return False
if n == 2: return True
if not n % 2: return False #test if n is even
    #we've already removed all the even numbers, no need to test for 2
#we only need to test up to sqrt(n), because any composite numbers can be
# factored into 2 values, at least one of which is < sqrt(n)
for i in xrange(3, int(sqrt(n)) + 1, 2):
if not n % i:
return False
return True
``` | One thing off the bat: if you are going to implement prime testing in this fashion, there's no reason to use an auxiliary array
```
def prime(num):
"""True if num is prime, false otherwise"""
check = True
#if num in copy:
# copy.remove(num)
    for x in range(2, num):  # check every candidate divisor below num
if num % x == 0:
check = False
break
return check
``` | 3,705 |
25,204,021 | I'm using Continuum's Anaconda Spyder for python.
All of a sudden it's giving me this error, although it's supposed to be free:
```
Vendor: Continuum Analytics, Inc.
Package: mkl
Message: trial mode EXPIRED 14 days ago
You cannot run mkl without a license any longer.
A license can be purchased it at: http://continuum.io
We are sorry for any inconveniences.
SHUTTING DOWN PYTHON INTERPRETER
```
Any idea what the problem might be? I'm using it on Windows 64-bit and had been using it fine for months. I just got back to it after some weeks away, and now it says the trial expired, but as far as I can remember there never was a license. | 2014/08/08 | [
"https://Stackoverflow.com/questions/25204021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/961627/"
] | There is a free trial that starts when you `conda install mkl`. If you want to remove it, use `conda remove --features mkl`. | The MKL optimizations are not free: <https://store.continuum.io/cshop/mkl-optimizations/>.
There is a trial period, but after that it costs you money. Interesting that you used it for a while. Maybe it's an issue with license checking, or there was no mechanism to actually check for a license. When you install the package with `conda install mkl` there is no mention of a license. Maybe something was introduced with a recent version of anaconda? | 3,710 |
26,152,787 | I'm just starting to learn Python and I need to solve this problem, but I'm stuck. We've been given a function (lSegInt) to find the intersections of lines. What I need to do is format the data properly in order to pass it through this function to find how many times two polylines intersect.
Here's the data:
```
pt1 = (1,1)
pt2 = (5,1)
pt3 = (5,5)
pt4 = (1,5)
pt5 = (2,2)
pt6 = (2,3)
pt7 = (4,6)
pt8 = (6,3)
pt9 = (3,1)
pt10 = (1,4)
pt11 = (3,6)
pt12 = (4,3)
pt13 = (7,4)
l5 = [[pt1, pt5, pt6, pt7, pt8, pt9]]
l6 = [[pt10, pt11, pt12, pt13]]
```
Here's my code:
```
def split(a):
lines = []
for i in range(len(a[0]) - 1):
line = []
for j in (i,i+1):
line.append(a[0][j])
lines.append(line)
return lines
sl5 = split(l5)
sl6 = split(l6) + split(l6)
```
This is where I'm stuck. I need to find out how many times the polylines intersect. I wanted to use a zipped for loop over sl5 and sl6, but that wouldn't check every segment of one list against every segment of the other, and the lists are different lengths.
```
while i < len(sl5):
for x, in a,:
z = 1
fresults.append(lSegInt(x[0],x[1],sl6[0][0],sl6[1][0]))
fresults.append(lSegInt(x[0],x[1],sl6[1][0],sl6[1][1]))
fresults.append(lSegInt(x[0],x[1],sl6[2][0],sl6[2][1]))
i = i + 1
print fresults
```
Function:
```
def lSegInt(s1, s2, t1, t2):
'''Function to check the intersection of two line segments. Returns
None if no intersection, or a coordinate indicating the intersection.
An implementation from the NCGIA core curriculum. s1 and s2 are points
(e.g.: 2-item tuples) marking the beginning and end of segment s. t1
and t2 are points marking the beginning and end of segment t. Each point
has an x and y coordinate: (1, 3).
Variables are named following linear formula: y = a + bx.'''
if s1[0] != s2[0]: # if s is not vertical
b1 = (s2[1] - s1[1]) / float(s2[0] - s1[0])
if t1[0] != t2[0]: # if t is not vertical
b2 = (t2[1] - t1[1]) / float(t2[0] - t1[0])
a1 = s1[1] - (b1 * s1[0])
a2 = t1[1] - (b2 * t1[0])
if b1 == b2: # if lines are parallel (slopes match)
return(None)
xi = -(a1-a2)/float(b1-b2) # solve for intersection point
yi = a1 + (b1 * xi)
else:
xi = t1[0]
a1 = s1[1] - (b1 * s1[0])
yi = a1 + (b1 * xi)
else:
xi = s1[0]
if t1[0] != t2[0]: # if t is not vertical
b2 = (t2[1] - t1[1]) / float(t2[0] - t1[0])
a2 = t1[1] - (b2 * t1[0])
yi = a2 + (b2 * xi)
else:
return(None)
# Here is the actual intersection test!
if (s1[0]-xi)*(xi-s2[0]) >= 0 and \
(s1[1]-yi)*(yi-s2[1]) >= 0 and \
(t1[0]-xi)*(xi-t2[0]) >= 0 and \
(t1[1]-yi)*(yi-t2[1]) >= 0:
return((float(xi), float(yi))) # Return the intersection point.
else:
return(None)
```
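(For reference, here is a hedged, untested sketch of the all-pairs check I think I need, reusing the `split` and `lSegInt` functions above; the helper name is just illustrative:)

```
from itertools import product

def count_intersections(segments_a, segments_b):
    hits = 0
    # compare every segment of one polyline against every segment of the other
    for (s1, s2), (t1, t2) in product(segments_a, segments_b):
        if lSegInt(s1, s2, t1, t2) is not None:
            hits += 1
    return hits

print count_intersections(split(l5), split(l6))
```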
Any help on this is greatly appreciated. Sorry for the wall of text. | 2014/10/02 | [
"https://Stackoverflow.com/questions/26152787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4099447/"
] | There are a couple of flaws with your CSS code but the biggest one causing the display issue is:
```
.box2 {
    float: middle;
}
```
There is no `float: middle;` property. You need to either set them all to `float:left;` (or `float:right;`) or use an entirely different approach. (like using `display: table-cell;`) | <http://jsfiddle.net/rishabh66/kLfb2wet/> .
Add `float: left;` to all the divs
to horizontally align the boxes inside the parent div. | 3,713 |
40,521,707 | I am trying to perform normalization over an array obtained from a csv file. My code won't work because I haven't provided the custom file.
I am getting this error message:
```
x = np.myarray
```
**AttributeError: 'module' object has no attribute 'myarray'**
As I am new to Python, can anyone please help me get a normalized matrix when the matrix is read from a csv file?
```
import numpy as np
import csv
with open('csvk.csv', 'rb') as f:
reader = csv.reader(f)
data_as_list = list(reader)
print data_as_list
myarray = np.asarray(data_as_list)
print myarray
x = np.myarray
x_normed = x / x.max(axis=0)
print x_normed
``` | 2016/11/10 | [
"https://Stackoverflow.com/questions/40521707",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6302830/"
] | The answer is simple: it's the maximum that field can hold.
>
> MySQL retrieves and displays TIME values in 'HH:MM:SS' format (or
> 'HHH:MM:SS' format for large hours values). TIME values may range from
> '-838:59:59' to '838:59:59'. The hours part may be so large because
> the TIME type can be used not only to represent a time of day (which
> must be less than 24 hours), but also elapsed time or a time interval
> between two events (which may be much greater than 24 hours, or even
> negative).
>
>
>
You would probably be better off just using an int field (where the value is stored as seconds difference from the starting time). It's a common practice to have a field that stores seconds elapsed since epoch rather than a datetime field anyway. Else you would need to switch to a [datetime](http://dev.mysql.com/doc/refman/5.7/en/datetime.html) field.
<http://dev.mysql.com/doc/refman/5.7/en/time.html> | You are using **strtotime()**
strtotime() - Parse English textual datetimes into Unix timestamps:
Eg:
```
echo(strtotime("3 October 2005"));
output as 1128312000
```
**HERE TRY THIS**
```
$epoch_time_out_user =strtotime($_POST['timeout'])- (300*60);
$dt = new DateTime("@$epoch_time_out_user");
$time_out_user = $dt->format('H:i:s');
//now $time_out_user has time
// now you can use **$time_out_user** in insert query
// do same for **$time_in_user**
//FYI: the value (300*60) depends on the timezone
``` | 3,715 |
70,640,586 | I have a shared library `libcustum.so` in a non-standard folder, and a python package where I use `ctypes.cdll.LoadLibrary("libcustom.so")`.
How can I set the `libcustum.so` path at build time (something similar to rpath)?
```
env LD_LIBRARY_PATH=/path/to/custum/lib python3 -c "import mypackage"
```
works fine, but I don't want to use a global `LD_LIBRARY_PATH`, and I don't want to set the library path at run time.
```
python3 -c "import mypackage"
```
results in:
```
OSError: libcustum.so: cannot open shared object file: No such file or directory
``` | 2022/01/09 | [
"https://Stackoverflow.com/questions/70640586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5940776/"
] | This might be somewhat related to [this question](https://stackoverflow.com/questions/32998502/python-importerror-no-module-named-crypto-publickey-rsa).
/e:
Ok, since you are using [this firebase package](https://pypi.org/project/firebase/), I can hopefully help you out.
First of all, it's the package's fault that it isn't running. While it depends on many external packages, it has none of them defined.
This is what I had to do in a clean virtual environment just to be able to do `from firebase import Firebase`:
```
pip install sseclient python_jwt gcloud pycryptodome requests-toolbelt
```
[Here](https://pastebin.com/yksdudUc) is the requirements.txt I ended up with in the clean environment. Notice, that this was only for importing a single class from the package. There still might be other dependencies hidden somewhere, waiting to throw an exception.
I encourage you to give feedback to the developer about this issue. Even better, fix this yourself and open a Pull Request. You might help others with the same issue. | I found a workaround for this. I simply used another module to read from the firebase db. Instead of using `firebase` I used `firebase_admin` as mentioned in the [firebase documentation](https://firebase.google.com/docs/database/admin/start#python). `firebase_admin` doesn't use Crypto so there's no more problem from this point of view. However I had to change a little bit how I retrieve and write data. | 3,716 |
66,355,390 | I'm trying to make a flask pipeline which receives data from a python file and sends the data to react, which displays it.
I am currently stuck trying to receive the data in flask after sending it via POST to the URL: `localhost:5000/weather-data`
The data is being posted with this Code:
```
dummy_data = {'data': str(msg.payload.decode('iso-8859-1')),
'timestamp': datetime.datetime.now().isoformat()}
response = requests.post(url, data=dummy_data)
print(response.text)
```
The print result is:
```
{"data": "{\"region\": \"Jokkmokk\", \"temp_now\": 8.91, \"weather_now\": \"bewölkt\", \"humidity\": 50, \"wind\": 24}",
"timestamp": "2021-02-24T17:23:15.347058"}
```
Which is all right, but then I try to receive the data and return it on the flask side with this code:
```
from flask import Flask, request
app = Flask(__name__)
@app.route('/')
def test():
return 'HelloWorld'
@app.route('/weather-data', methods=['POST', 'GET'])
def weather_data():
try:
data = request.form.to_dict()
print(data)
return data
except Exception as e:
print(e)
if __name__ == '__main__':
app.run(host='127.0.0.1', debug=True, port=5000)
```
This runs through normally and my print(data) gives the exact same dictionary back, but if I take a look at `localhost:5000/weather-data` I only see empty curly braces `{}`.
As a test, I tried to return the data without receiving it first, using this code:
```
@app.route('/weather-data', methods=['POST', 'GET'])
def weather_data():
    return {"data": "{\"region\": \"Fishermans City\", \"temp_now\": 6.87, \"weather_now\": \"st\\u00fcrmisch\", \"humidity\": 52, \"wind\": 58}",
"timestamp": "2021-02-23T18:32:49.120861"}
```
Like this it worked perfectly and showed the data on the website.
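(A hedged aside for future readers: a GET to `/weather-data` re-runs the handler with an empty form, which would explain the empty braces on reload. Below is a minimal sketch that caches the last POSTed payload; the `last_payload` name is illustrative.)

```
from flask import Flask, request

app = Flask(__name__)
last_payload = {}  # illustrative module-level cache

@app.route('/weather-data', methods=['POST', 'GET'])
def weather_data():
    global last_payload
    if request.method == 'POST':
        # store the submitted form so later GETs can still return it
        last_payload = request.form.to_dict()
    return last_payload
```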
**Edit:**
This may be a stupid question for some of you, but since I am kinda new to this I wanted to ask: is it possible that the data is on the page, but when I reload the page it gets overwritten with empty curly braces?
If so, is there a way to keep the data on the page until I make another POST with new data? | 2021/02/24 | [
"https://Stackoverflow.com/questions/66355390",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15275815/"
] | If the files requested are big, consider using spawn instead of exec.
```
const http = require('http');
const exec = require('child_process').exec;
const DOWNLOAD_DIR = './downloads/';
const generate_width_and_height = function() {
const random = Math.floor((Math.random() * 100) + 200);
console.log(random);
return random
}
const create_file_url = function() {
return "http://placekitten.com/" + generate_width_and_height() + "/" + generate_width_and_height()
}
const oneHundredTwentyEightElementsArray = Array.from(Array(128), (_,x) => x); // 128 elements, to match the name
const oneHundredTwentyEightUrlsArray = oneHundredTwentyEightElementsArray.map( _ => create_file_url())
const download_file_wget = function(file_url, file_number) {
// extract the file name
const file_name = "file_" + file_number
// compose the wget command
const wget = 'wget -P ' + DOWNLOAD_DIR + ' ' + file_url;
    // execute wget using child_process' exec function
const child = exec(wget, function(err, stdout, stderr) {
if (err) throw err;
else console.log(file_name + ' downloaded to ' + DOWNLOAD_DIR);
});
};
for (let index = 0; index < oneHundredTwentyEightElementsArray.length; index++) {
const url = oneHundredTwentyEightUrlsArray[index];
download_file_wget(url, index)
}
``` | You can use [Javascript Promises](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) to download multiple files with node and wget.
First wrap your inner code in a promise:
```js
const downloadFile = (url) => {
return new Promise((resolve) => {
console.log(`wget ${url} --no-check-certificate`)
exec(`wget ${url} --no-check-certificate`, function(err, stdout, stderr) {
if (err) {
        console.log('ERR', err, url)
        resolve(0) // settle the promise even on failure so Promise.all() can finish
} else {
console.log('SUCCESS ' + url);
resolve(1)
}
});
})
}
```
Then use Promise.all to process all the downloads asynchronously:
```
const files = [
'http://placekitten.com/10/10',
'http://placekitten.com/10/10'
// etc
]; // semicolon needed: otherwise the '(' on the next line is parsed as a function call
(async () => {
await Promise.all(files.map(url => downloadFile(url)))
})()
``` | 3,717 |
2,399,812 | Is there a way to create a 'kiosk mode' in wxpython under Windows (98 - 7) where the application disables you from breaking out of the app using Windows keys, alt-tab, alt-f4, and ctrl+alt+delete? | 2010/03/08 | [
"https://Stackoverflow.com/questions/2399812",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/204535/"
] | If an application could do that it would make a great denial-of-service attack on the machine.
In particular Ctrl+Alt+Delete is the [Secure Attention Sequence](http://technet.microsoft.com/en-us/library/cc780332(WS.10).aspx). Microsoft goes to great lengths to ensure that when the user hits those keys, they switch to a secure desktop, so they can be confident that the logon box is the *real* Windows logon and not a counterfeit.
What you need to look at isn't functions that your application can call, but System Administration options that allow an Administrator to configure a machine for limited use. These exist, but it's more a question for Super User than for Stack Overflow.
This should get you started
<http://msdn.microsoft.com/en-us/library/aa372139(VS.85).aspx> | This cannot be done with wxPython alone.
You need to install a low-level keyboard hook with C/C++ (or the equivalent ctypes calls) for the
Windows keys, Alt-Tab, and Alt-F4;
but Ctrl-Alt-Del, I don't think so, for Windows XP and above. | 3,718 |
28,911,296 | I want to convert a namedtuple to a dict with Python:
I have:
```
CommentInfo(stt=1, gid=12, uid=222)
```
Now I want:
```
{"stt":1,"gid":12,"uid":222}
```
Please help me! Thanks very much! | 2015/03/07 | [
"https://Stackoverflow.com/questions/28911296",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3214584/"
] | You need to use the `_asdict()` method to convert the named tuple into a dictionary.
**Example:**
```
>>> CommentInfo = namedtuple('CommentInfo', ["stt", "gid", "uid"])
>>> x = CommentInfo(stt=1,gid=12,uid=222)
>>> x._asdict()
OrderedDict([('stt', 1), ('gid', 12), ('uid', 222)])
``` | namedtuples have a `._asdict()` method to convert them to an OrderedDict, so if you have an instance in a variable `comment` you can use `comment._asdict()` (wrap it in `dict(...)` if you need a plain dict) | 3,719 |
48,842,401 | I'm using python 3.6 and selenium 3.8.1, Chrome browser to simulate users entering an order. The app we use has a particularly frustrating implementation for automation - a loading modal will pop up whenever a filter for a product is loading, but it does not truly cover elements underneath it. Additionally, load time fluctuates wildly, but with an upper bound. If I don't use excessive sleep statements, selenium will either start clicking wildly before the correct objects are loaded or clicks on the element but, of course, hits the loading modal. (Fun side note, the loading modal only fills the screen view, so selenium is also able to interact with items below the fold. :P)
To get around this:
```
def kill_evil_loading_modal(self):
# i pause for a second to make sure the loader has a chance to pop
time.sleep(1)
# pulling locator type and from another file: ("id","locator_id")
loading_modal = ProductsLocators.loading_modal_selector
# call a function that returns true/false for object if exists
check_for_evil = self.is_element_exist(*loading_modal)
while check_for_evil == True:
check_for_evil = self.is_element_exist(*loading_modal)
```
This works great! Where I had a ton of evil time.sleep(x) statements to avoid the loading modal, I'm now catching it and waiting until it's gone to move forward.
If I only had to deal with that two or three times, I would move on. Sadly, this loading modal hits after *every click* ... so this is what my main script looks like now:
```
new_quote02_obj.edit_quote_job(**data)
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.click_product_dropdown()
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.click_product_dropdown_link()
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.select_category_dropdown(scenario_data['category_name'])
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.select_market_dropdown(scenario_data['local_sales_market'])
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.add_products_job(scenario_data['product_list_original'])
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.click_done_btn()
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.next_btn_page()
```
How can I refactor to stay DRY? | 2018/02/17 | [
"https://Stackoverflow.com/questions/48842401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2252481/"
] | If you want to wait until modal disappeared and avoid using `time.sleep()` you can try [ExplicitWait](http://selenium-python.readthedocs.io/waits.html#explicit-waits):
```
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait as wait
wait(driver, 10).until_not(EC.visibility_of_element_located(("id", "locator_id")))
```
or
```
wait(driver, 10).until(EC.invisibility_of_element_located(("id", "locator_id")))
```
This should allow you to wait up to 10 seconds (increase the delay if needed) until the element with the specified selector (`"id", "locator_id"`) becomes invisible.
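Building on that, here is a hedged sketch of folding the wait into a decorator for the question's page-object methods, so each call doesn't need an explicit `kill_evil_loading_modal()`; it assumes `self.driver` and the `ProductsLocators.loading_modal_selector` locator tuple from the question, and reuses the `wait`/`EC` imports above:

```
from functools import wraps

def waits_out_loading_modal(method):
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        result = method(self, *args, **kwargs)
        # after the action, block until the loading modal disappears
        wait(self.driver, 20).until(
            EC.invisibility_of_element_located(ProductsLocators.loading_modal_selector))
        return result
    return wrapper
```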
If modal appears after each click you can implement your own click method, like
```
def click_n_wait(by, value, timeout=10):
wait(driver, timeout).until(EC.element_to_be_clickable((by, value))).click()
wait(driver, timeout).until(EC.invisibility_of_element_located(("id", "locator_id")))
```
and use it as
```
click_n_wait("id", "button_id")
``` | As you mentioned in your question, *a loading modal will pop up whenever a filter for a product is loading*. Irrespective of whether the loader *covers elements underneath it* or not, you can simply `wait` for the next intended element you want to interact with. Following this approach you can completely get rid of the `kill_evil_loading_modal()` function, which looks to me like overhead. As a replacement for the `kill_evil_loading_modal()` function, you have to invoke the [**WebDriverWait()**](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.wait.html#module-selenium.webdriver.support.wait) method along with the proper [**expected_conditions**](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.expected_conditions.html#module-selenium.webdriver.support.expected_conditions) as required, as follows:
```
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# other code
WebDriverWait(driver, 2).until(EC.element_to_be_clickable((By.XPATH, "xpath_of_element_A"))).click()
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, "xpath_of_element_B"))).click()
WebDriverWait(driver, 3).until(EC.element_to_be_clickable((By.XPATH, "xpath_of_element_C"))).click()
``` | 3,720 |
67,499,322 | I am trying to control my browser using Python. What I need is to give commands in the terminal that act on the browser, like opening it, searching for something (and scrolling the browser), and closing it.
Currently I am done with opening and closing the browser. | 2021/05/12 | [
"https://Stackoverflow.com/questions/67499322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15904381/"
] | A conflict might occur when both of you are changing the same file in the same branch (or while pulling a different branch into your local branch). In such cases, sometimes git will be able to automatically merge the changes when you try to pull your friend's commit. But if the changes are mostly on the same or nearby lines, the automatic merge will fail and you need to merge the changes manually and then commit.
You could use `git mergetool` to resolve any merge conflict (after defining a mergetool of your choice). | It depends on whether you are working on the same git branch or not. Even if you are working on the same branch, if you modify different files you won't get conflicts. You will only get conflicts if you both change the same file. | 3,721 |
48,028,274 | This is code to find a product: print the product of all the numbers in this array, modulo 10^9+7.
```
n=int(input())
answer=1
b=10**9
array_1=[]
for i in range(n):
array_1.append(int(input()))
for j in range(n):
answer=(answer*array_1[j])% (b+7)
print(answer)
```
This is my Python 3 code. It works properly in a Jupyter notebook, but on
Python 3.5.2 it shows an error for input 5 followed by 1 2 3 4 5:
```
Execution failed.
ValueError: invalid literal for int() with base 10 : '1 2 3 4 5'
Stack Trace:
Traceback (most recent call last):
File "/hackerearth/PYTHON3_46/s_ad.py3", line 16, in
array_1.append(int(input()))
ValueError: invalid literal for int() with base 10: '1 2 3 4 5'
```
Please can someone help me solve this error, as I am a newbie in Python? | 2017/12/29 | [
"https://Stackoverflow.com/questions/48028274",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7756559/"
] | >
> `array_1.append(int(input()))`: by using this I am trying to get an array of ints, taking values one by one from user input
>
>
>
But it looks like you are entering the numbers one after the other as a single string with each number separated by a space. In that case, you should use split to get the individual numbers:
```
array_of_ints = [int(num) for num in input().split()]
array_1 += array_of_ints
```
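For completeness, a hedged end-to-end sketch of that approach applied to the question's task (read `n`, then one space-separated line, and print the product modulo 10^9+7; the variable names are illustrative):

```
n = int(input())                           # e.g. 5
nums = [int(v) for v in input().split()]   # e.g. "1 2 3 4 5"
MOD = 10 ** 9 + 7

answer = 1
for v in nums[:n]:
    answer = (answer * v) % MOD
print(answer)                              # prints 120 for the sample input
```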
If you are trying to input the numbers one by one, your code seems correct. You just need to make sure you actually do enter the numbers one by one.
i.e. 1 then press Enter, 2 then press Enter, and so on... | I'm not completely sure what you are trying to achieve here, but just based on looking at it, your code would not accept any of the inputs after `1 2`
If you are running this from a terminal, there should be a new line between each input, i.e.
```
./your_program.py
4
4
3
2
1
``` | 3,722 |
34,207,898 | I am having an issue selecting data from a pandas DataFrame with between_time. When the start and end dates of the query span two different days, the result is empty. I am using pandas 0.17.1 (Python 2.7).
I have the following data frame:
```
import pandas as pd
from pandas import Timestamp

mydf = pd.DataFrame.from_dict({'azi': {Timestamp('2015-05-12 00:00:14.348000'): 109.801,
Timestamp('2015-05-12 00:00:36.125000'): 109.994,
Timestamp('2015-05-12 00:00:57.599000'): 109.60299999999999,
Timestamp('2015-05-12 00:01:14.576000'): 100.2},
'ele': {Timestamp('2015-05-12 00:00:14.348000'): 180.001,
Timestamp('2015-05-12 00:00:36.125000'): 179.999,
Timestamp('2015-05-12 00:00:57.599000'): 179.999,
Timestamp('2015-05-12 00:01:14.576000'): 180.001}})
```
Which results in:
```
azi ele
2015-05-12 00:00:14.348 109.801 180.001
2015-05-12 00:00:36.125 109.994 179.999
2015-05-12 00:00:57.599 109.603 179.999
2015-05-12 00:01:14.576 100.200 180.001
```
The following query **fails**:
```
mydf['azi'].between_time(datetime(2015, 5, 11, 23, 59, 59, 850000), datetime(2015, 5, 12, 0, 1, 59, 850000))
```
resulting in:
```
Series([], Name: azi, dtype: float64)
```
However the following query **works**:
```
mydf2['azi'].between_time(datetime(2015, 5, 11, 0, 0, 0, 0), datetime(2015, 5, 12, 0, 1, 59, 850000))
```
with the right answer:
```
2015-05-12 00:00:14.348 109.801
2015-05-12 00:00:36.125 109.994
2015-05-12 00:00:57.599 109.603
2015-05-12 00:01:14.576 100.200
Name: azi, dtype: float64
```
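(A hedged note from further digging: `between_time` appears to compare only the time-of-day component of the index rather than full datetimes, which would explain the cross-midnight behaviour. An untested sketch of slicing the DatetimeIndex by actual datetimes instead:)

```
from datetime import datetime

start = datetime(2015, 5, 11, 23, 59, 59, 850000)
stop = datetime(2015, 5, 12, 0, 1, 59, 850000)

# label-based slicing on a sorted DatetimeIndex uses the full timestamps
print(mydf['azi'].sort_index().loc[start:stop])
```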
**Questions**:
1. Am I missing something in the functionality of the function, or is this a real bug?
2. Is there a workaround for this? The background is that I really need to process data in 1-minute chunks whose limits do not always coincide with 00:00:00 | 2015/12/10 | [
"https://Stackoverflow.com/questions/34207898",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5665206/"
] | From the [documentation](http://search.cpan.org/~peco/Email-Send-SMTP-Gmail-0.1.1/lib/Email/Send/SMTP/Gmail.pm): put commas between the email addresses.
>
> send(-to=>'', [-subject=>'', -cc=>'', -bcc=>'', -replyto=>'', -body=>'', -attachments=>''])
>
> It composes and sends the email in one shot
>
>
> to, cc, bcc: **comma separated email addresses**
>
> attachments: comma separated files with full path
>
>
>
```
$mail->send(-to=>'a@gmail.com,b@gmail.com,c@gmail.com,...'
``` | Simply add all recipients as a comma-separated list:
```
(-to=>'pqr@gmail.com,rec2@gmail.com' ...
``` | 3,723 |
66,604,878 | I am new to CI/CD and GitLab. I have a CI/CD script to test, build and deploy, and I use 2 branches and 2 EC2 instances. My goal is to have a light, non-redundant script that builds and deploys my changes depending on the branch.
Currently my script looks like the one below, but after reading the GitLab docs I saw many conditional keywords like `rules`, and I'm really lost about how I can use conditionals in my script to optimise it.
Is there a way to use a condition to run some script when there is a merge from one branch or from another? Thanks in advance!
```
#image: alpine
image: "python:3.7"
before_script:
- python --version
stages:
- test
- build_staging
- build_prod
- deploy_staging
- deploy_prod
test:
stage: test
script:
- pip install -r requirements.txt
- pytest Flask_server/test_app.py
only:
refs:
- develop
build_staging:
stage: build_staging
image: node
before_script:
- npm install -g npm
- hash -d npm
- nodejs -v
- npm -v
script:
- cd client
- npm install
- npm update
- npm run build:staging
artifacts:
paths:
- client/dist/
expire_in: 30 minutes
only:
refs:
- develop
build_prod:
stage: build_prod
image: node
before_script:
- npm install -g npm
- hash -d npm
- nodejs -v
- npm -v
script:
- cd client
- npm install
- npm update
- npm run build
artifacts:
paths:
- client/dist/
expire_in: 30 minutes
only:
refs:
- master
deploy_staging:
stage: deploy_staging
image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest # gitlab image for awc cli commands
before_script:
- apt-get update
# - apt-get -y install python3-pip
# - apt-get --assume-yes install awscli
- apt-get --assume-yes install -y shellcheck
script:
- shellcheck .ci/deploy_aws_STAGING.sh
- chmod +x .ci/deploy_aws_STAGING.sh
- .ci/deploy_aws_STAGING.sh
- aws s3 cp client/dist/ s3://......./ --recursive
only:
refs:
- develop
deploy_prod:
stage: deploy_prod
image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest # gitlab image for awc cli commands
before_script:
- apt-get update
# - apt-get -y install python3-pip
# - apt-get --assume-yes install awscli
- apt-get --assume-yes install -y shellcheck
script:
- shellcheck .ci/deploy_aws_PROD.sh
- chmod +x .ci/deploy_aws_PROD.sh
- .ci/deploy_aws_PROD.sh
- aws s3 cp client/dist/ s3://........../ --recursive
only:
refs:
- master
``` | 2021/03/12 | [
"https://Stackoverflow.com/questions/66604878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15306690/"
] | The first is creating an anonymous subclass of `MyRunnable`.
The second is creating an anonymous subclass of `Thread`, which requires that `MyRunnable` is instantiable; and `MyRunnable` wouldn't actually then be used at all, because it's not invoked in the `run()` method you're defining in the `Thread` subclass.
There is no reason to subclass `Thread`, and presumably you want some special behavior from your `MyRunnable` base class (although the only thing that would provide special behavior that would actually be run is the constructor).
Use the first way. | You can also use a lambda expression to start a thread.
```
Thread myRunnableThread3 = new Thread(()-> {
System.out.println(Thread.currentThread().getName());
System.out.println("myRunnableThread3!");},"MyThread");
myRunnableThread3.start();
```
Prints
```
MyThread
myRunnableThread3!
``` | 3,724 |
20,115,972 | I tried to use Ambari to manage the installation and maintenance of the Hadoop cluster.
After I started the Ambari server, I used the web page to set up the Hadoop cluster.
But at the 3rd step (Confirm Hosts), the error shown below appears.
I checked the log at /var/log/ambari-server and found:
>
> INFO:root:BootStrapping hosts ['qiao'] using /usr/lib/python2.6/site-packages/ambari_server cluster primary OS: redhat6 with user 'root' sshKey File /var/run/ambari-server/bootstrap/1/sshKey password File null using tmp dir /var/run/ambari-server/bootstrap/1 ambari: master; server_port: 8080; ambari version: 1.4.1.25
>
>
> INFO:root:Executing parallel bootstrap
>
>
> ERROR:root:ERROR: Bootstrap of host qiao fails because previous action finished with non-zero exit code (1)
>
>
> INFO:root:Finished parallel bootstrap
>
>
| 2013/11/21 | [
"https://Stackoverflow.com/questions/20115972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2951132/"
] | Do you provide the ssh RSA private key, or paste it?
Also, from the machine you are installing from, make sure you can ssh to all hosts without typing a password.
If you still get the same error, try:
ambari-server reset
ambari-server setup | Please restart the Ambari server:
**ambari-server restart**
and then try accessing Ambari again.
It should work. | 3,725 |
67,219,194 | So recently I've been doing a project where, as an optimisation, I want to use numpy arrays instead of the built-in Python lists. It would be a 2d array with fixed length along both axes. I also want to maximise cache use so that the code is as fast as it can be. However, when playing with the id(var) function I got unexpected results:
code:
```
a = numpy.ascontiguousarray([1,2,3,4,5,6,7,8,9], dtype=numpy.int32)
for var in a:
print(hex(id(var)))
```
returned:
```
0x1aaba10d8f0
0x1aaba1f33d0
0x1aaba10d8f0
0x1aaba1f33d0
0x1aaba10d8f0
0x1aaba1f33d0
0x1aaba10d8f0
0x1aaba1f33d0
0x1aaba10d8f0
```
which to me is super weird, because that would mean 2 variables are located in the same memory block (is that even a thing?). Anyway: is it me not understanding this correctly?
As a side question: can the original task of building a 2d array be achieved with a less expensive method? Numpy arrays come with many functions I do not need. I only need 2 things:
1. to be able to reverse it, normally done with the [::-1] syntax
2. check if one == other efficiently
Thanks in advance for all the help :-) | 2021/04/22 | [
"https://Stackoverflow.com/questions/67219194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13757444/"
] | `id(var)` does not work the way you think it does. Indeed, `id(var)` returns a unique ID for the specified object `var`, but `var` is not a cell of `a`. **`var` is a Python object referencing a cell of `a`**. Note that `a` does not contain such objects, as it would be too inefficient (and the data would not be contiguous as requested). The reason why you see duplicated IDs is that the previous `var` object has been recycled. | The kinds of arrays that you really want are unclear, nor is the purpose. But talk of contiguous (or continuous) storage and caching suggests that you aren't clear about how Python works.
First, Python is object oriented, all the way down. Integers, strings, lists are all objects of some class, with associated methods, and attributes. For builtin classes we have little say about the storage.
Let's make a small list:
```
In [89]: alist = [1,2,3,1000,1001,1000,'foobar']
In [90]: alist
Out[90]: [1, 2, 3, 1000, 1001, 1000, 'foobar']
```
A list has a data buffer that stores references (pointers if you will) to objects elsewhere in memory. The `id` may give some idea of where, but it shouldn't be understood as a 'pointer' in the `c` language sense.
For this list:
```
In [91]: [id(i) for i in alist]
Out[91]:
[9784896,
9784928,
9784960,
140300786887792,
140300786888080,
140300786887792,
140300786115632]
```
1,2,3 have small id values because Python has initialized small integers (up to 256) at the start. So all uses will have that unique id.
```
In [92]: id(2)
Out[92]: 9784928
```
Within the list creation `1000` appears to be unique, but not so outside of that context.
```
In [93]: id(1001)
Out[93]: 140300786888592
```
Looks like the string is cached as well - but that's just the interpreter's choice, and we shouldn't count on it.
```
In [94]: id('foobar')
Out[94]: 140300786115632
```
The reverse list is a new list, with its own pointer array. But the references are the same:
```
In [95]: rlist = alist[::-1]
In [96]: rlist
Out[96]: ['foobar', 1000, 1001, 1000, 3, 2, 1]
In [97]: rlist[5],id(rlist[5])
Out[97]: (2, 9784928)
```
Indexing actions like `[::-1]` should just depend on the number of items in the list. They don't depend on where the values actually point to. Same for other copies. Even appending to the array is relatively time independent (it maintains growth space in the data buffer). Actually working with the objects in the list may depend on where they are stored in memory, but we have little say about that.
A "2d" list is actually a list with list elements; nested lists. The sublists are stored elsewhere in memory, just like strings and numbers. In that sense the nested lists are not contiguous.
So what about arrays?
```
In [101]: x = np.arange(12)
In [102]: x
Out[102]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
In [104]: x.__array_interface__
Out[104]:
{'data': (57148880, False),
'strides': None, # default (8,)
'descr': [('', '<i8')],
'typestr': '<i8',
'shape': (12,),
'version': 3}
In [105]: x.nbytes # 12*8 bytes
Out[105]: 96
```
`x` is an `ndarray` object, with attributes like `shape`, `strides` and `dtype`, plus a data buffer. In this case the buffer is a `c` array 96 bytes long, at 57148880. We can't use that number directly, but I find it useful when comparing the `__array_interface__` dict across arrays. A `view` in particular will have the same, or a related, value.
```
In [106]: x.reshape(3,4)
Out[106]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
In [107]: x.reshape(3,4).__array_interface__['data']
Out[107]: (57148880, False)
In [108]: x.reshape(3,4)[1,:].__array_interface__['data']
Out[108]: (57148912, False) # 32 bytes later
```
The array data buffer has actual values, not references. Here, with `int` dtype, each 8 bytes is interpreted as an 'int64' value.
Your `id` iteration effectively asks for a list, `[x[i] for i in range(n)]`. An element of an array has to be "unboxed", and is a new object, type `np.int64`. While not an array, it does have a lot of properties in common with a 1 element array.
```
In [110]: x[4].__array_interface__
Out[110]:
{'data': (57106480, False),
...
'shape': (),....}
```
That `data` value is unrelated to `x`'s.
As long as you use `numpy` methods on existing arrays, speeds are good, often 10x better than equivalent list methods. But if you start with a list, it takes time to make an array. And treating the array like a list is slow.
And the reverse of `x`?
```
In [111]: x[::-1].__array_interface__
Out[111]:
{'data': (57148968, False),
'strides': (-8,),
'descr': [('', '<i8')],
'typestr': '<i8',
'shape': (12,),
'version': 3}
```
It's a new array, but with a different `strides` (-8,), and `data` points to the end of the buffer, `880+96-8`. | 3,727 |
3,312,436 | Running GNU Emacs 22.2.1 on Ubuntu 9.04.
When editing python code in emacs, if a docstring contains an apostrophe, emacs highlights all following code as a comment, until another apostrophe is used. Really annoying!
In other words, if I have a docstring like this:
```
''' This docstring has an apostrophe ' '''
```
Then all following code is highlighted as a comment. Comments are highlighted as code.
I can escape the docstring to avoid this, like this:
```
''' This docstring has an escaped apostrophe \' '''
```
Then highlighting is fine, but it looks funny and unnecessary to other devs on my team, and I get made fun of for using emacs since "it can't handle apostrophes". ;)
So, anyone know how to make emacs behave better in this regard?
Thanks,
Josh | 2010/07/22 | [
"https://Stackoverflow.com/questions/3312436",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/316963/"
] | This appears to work correctly in GNU Emacs 23.2.1. If it's not practical to upgrade, you might be able to copy `python.el` out of the Emacs 23 source code, or perhaps just the relevant pieces of it (python-quote-syntax, python-font-lock-syntactic-keywords, and the code that uses the latter, I think - I'm not much of an Elisp hacker).
Unfortunately savannah.gnu.org's bzr browser isn't working just now, so I can't point you directly at the code; you'll have to download it. See <http://www.gnu.org/software/emacs/> | It may be an emacs bug, but it could also be on purpose. If you insert doctests in your docstrings, as I often do to explain an API, I could even wish to have full python syntax highlighting inside docstrings.
But it's probably a bug... (probably the emacs syntax highlighter only handles single and double quotes and ignores triple-single and triple-double ones). If so, you should use triple double quotes instead of triple single quotes as in your example (as far as I know most users use triple double quotes for docstrings), and you won't have the problem. | 3,728 |
48,080,359 | I am new to python. I want to find the max value of col2 with respect to the values 'men', 'women' and 'people' in col1 of the list. For example, `['men', 12, '1946-Truman.txt']`, `['women', 7, '1946-Truman.txt']` and `['people', 49, '1946-Truman.txt']` contain the max values of col2 for men, women and people.
One possible solution is to convert this list of tuples to three separate arrays for men, women and people and then find the max value in each of the arrays. But I want a better solution.
**Data:**
```
[['men', 2, '1945-Truman.txt']
['women', 2, '1945-Truman.txt']
['people', 10, '1945-Truman.txt']
['men', 12, '1946-Truman.txt']
['women', 7, '1946-Truman.txt']
['people', 49, '1946-Truman.txt']
['men', 7, '1947-Truman.txt']
['women', 2, '1947-Truman.txt']
['people', 12, '1947-Truman.txt']
['men', 4, '1948-Truman.txt']
['women', 1, '1948-Truman.txt']
['people', 22, '1948-Truman.txt']
['men', 2, '1949-Truman.txt']
['women', 1, '1949-Truman.txt']
['people', 15, '1949-Truman.txt']
['men', 6, '1950-Truman.txt']
['women', 2, '1950-Truman.txt']
['people', 15, '1950-Truman.txt']
['men', 8, '1951-Truman.txt']
['women', 2, '1951-Truman.txt']
['people', 9, '1951-Truman.txt']
['men', 3, '1953-Eisenhower.txt']
['women', 0, '1953-Eisenhower.txt']
['people', 17, '1953-Eisenhower.txt']]
```
Thanks in advance. | 2018/01/03 | [
"https://Stackoverflow.com/questions/48080359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6181928/"
] | You may use `pandas` packages.
By defining the data frame :
```
import pandas as pd
df = pd.DataFrame([['men', 2, '1945-Truman.txt'],
['women', 2, '1945-Truman.txt'],
['people', 10, '1945-Truman.txt'],
['men', 12, '1946-Truman.txt'],
['women', 7, '1946-Truman.txt'],
['people', 49, '1946-Truman.txt'],
['men', 7, '1947-Truman.txt'],
['women', 2, '1947-Truman.txt'],
['people', 12, '1947-Truman.txt'],
['men', 4, '1948-Truman.txt'],
['women', 1, '1948-Truman.txt'],
['people', 22, '1948-Truman.txt'],
['men', 2, '1949-Truman.txt'],
['women', 1, '1949-Truman.txt'],
['people', 15, '1949-Truman.txt'],
['men', 6, '1950-Truman.txt'],
['women', 2, '1950-Truman.txt'],
['people', 15, '1950-Truman.txt'],
['men', 8, '1951-Truman.txt'],
['women', 2, '1951-Truman.txt'],
['people', 9, '1951-Truman.txt'],
['men', 3, '1953-Eisenhower.txt'],
['women', 0, '1953-Eisenhower.txt'],
['people', 17, '1953-Eisenhower.txt']])
```
Then
```
df.groupby([0], sort=False)[1].max()
```
return
```
0
men 12
women 7
people 49
Name: 1, dtype: int64
```
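If you also need the matching file name (the third column), a hedged variant that keeps whole rows via `idxmax` (untested sketch):

```
# select the rows holding each group's maximum of column 1
df.loc[df.groupby(0, sort=False)[1].idxmax()]
```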
Is that what you want? | You can use `itertools.groupby`:
```
import itertools
new_data = [(a, list(b)) for a, b in itertools.groupby(sorted(data, key=lambda x:x[0]), key=lambda x:x[0])]
new_final_data = [max(b, key=lambda x:x[1]) for a, b in new_data]
```
Output:
```
[['men', 12, '1946-Truman.txt'], ['people', 49, '1946-Truman.txt'], ['women', 7, '1946-Truman.txt']]
```
Or, a dictionary with each key the type of individual:
```
new_final_data = {a:max(b, key=lambda x:x[1]) for a, b in new_data}
```
Output:
```
{'women': ['women', 7, '1946-Truman.txt'], 'men': ['men', 12, '1946-Truman.txt'], 'people': ['people', 49, '1946-Truman.txt']}
``` | 3,729 |
42,776,454 | I have multiple series of "start" and "stop" times in a set of data, and would like to see if a particular set of dates/times does or does not fall between a given set of "start/stop" times. I'm using pandas in Python, and I've tried having the data as dataframes or as time series; I haven't gotten either to work. I've been using this bit of code:
```
print (start1 < test1[0:LenS] < stop1).any()
```
(note that the "test1[0:LenS]" is just to make sure test1 is the same length as start1 and stop1) and I get this error:
```
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
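(A hedged note: Python expands `start1 < test1[0:LenS] < stop1` into `(start1 < test1[0:LenS]) and (test1[0:LenS] < stop1)`, and it is the implicit `and` on a Series that raises this error. An untested sketch of the element-wise form:)

```
mask = (start1 < test1[0:LenS]) & (test1[0:LenS] < stop1)
print(mask.any())
```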
Since I'm trying to use .any(), I'm pretty confused. Thanks for your help! | 2017/03/14 | [
"https://Stackoverflow.com/questions/42776454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7697187/"
] | Put `saveFileDialog1.ShowDialog();` inside some button event handler which lets the user save the document. Double-click on the `SaveFileDialog` icon in your Visual Studio designer window as well to add the FileOk event handler and within event handler, put your code like this:
```
private void saveFileDialog1_FileOk(object sender, CancelEventArgs e)
{
var doc = DocX.Create(saveFileDialog1.FileName);
doc.InsertParagraph("This is my first paragraph");
doc.Save();
}
```
Hope it helps! | To do this:
```
private void btn_approve_Click(object sender, EventArgs e)
{
saveFileDialog1.Title = "Save As";
saveFileDialog1.Filter = "DocX|*.docx";
if (saveFileDialog1.ShowDialog() == DialogResult.OK)
{
var doc = DocX.Create(saveFileDialog1.FileName);
doc.InsertParagraph("This is my first paragraph");
doc.Save();
}
}
``` | 3,739 |
41,467,654 | I have a python script on a machine.
I can run it from both an **ssh connection** and the **console** of the machine.
Because the script changes some IP config stuff, I want to disconnect the ssh session before changing the IP, so that the ssh session won't hang and will be closed properly before the IP changes.
**So**: is there a way in Python to check whether the script is run from ssh, and if so, to close the ssh session? | 2017/01/04 | [
"https://Stackoverflow.com/questions/41467654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1662033/"
] | You can add header info as a dict in the 4th argument. As far as I know it is not possible to embed it in the BODY.
```
import http.client
BODY = "***filecontents***"
conn = http.client.HTTPConnection("127.0.0.1", 5000)
conn.connect()
conn.request("PUT", "/file", BODY, {"someheadername":"someheadervalues",
"someotherheadername":"someotherheadervalues"})
``` | The command:
```
conn.request("PUT", "/file", BODY)
```
Is overloaded as below as well, so it's pretty straightforward :)
```
conn.request("PUT", "url", payload, headers)
``` | 3,740 |
57,717,100 | By "comparable", I mean "able to mutually perform the comparison operations `>`, `<`, `>=`, `<=`, `==`, and `!=` without raising a `TypeError`". There are a number of different classes for which this property does hold:
```py
1 < 2.5 # int and float
2 < decimal.Decimal(4) # int and Decimal
"alice" < "bob" # str and str
(1, 2) < (3, 4) # tuple and tuple
```
and for which it doesn't:
```py
1 < "2" # int and str
1.5 < "2.5" # float and str
```
even when it seems like it really ought to:
```py
datetime.date(2018, 9, 25) < datetime.datetime(2019, 1, 31) # date and datetime
[1, 2] < (3, 4) # list and tuple
```
[As demonstrated in this similar question](https://stackoverflow.com/questions/29457135/in-python-how-to-know-whether-objects-can-be-compared), you can obviously check this for two unknown-typed objects `a` and `b` by using the traditional python approach of "ask forgiveness, not permission" and using a `try`/`except` block:
```py
try:
a < b
# do something
except TypeError:
# do something else
```
but [catching exceptions is expensive](https://stackoverflow.com/questions/2522005/cost-of-exception-handlers-in-python), and I expect the second branch to be taken sufficiently frequently for that to matter, so I'd like to catch this in an `if`/`else` statement instead. How would I do that? | 2019/08/29 | [
"https://Stackoverflow.com/questions/57717100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2648811/"
] | Since it is impossible to know beforehand whether a comparison operation can be performed on two specific types of operands until you actually perform such an operation, the closest thing you can do to achieving the desired behavior of avoiding having to catch a `TypeError` is to cache the known combinations of the operator and the types of the left and right operands that have already caused a `TypeError` before. You can do this by creating a class with such a cache and wrapper methods that do such a validation before proceeding with the comparisons:
```
from operator import gt, lt, ge, le
def validate_operation(op):
def wrapper(cls, a, b):
# the signature can also be just (type(a), type(b)) if you don't care about op
signature = op, type(a), type(b)
if signature not in cls.incomparables:
try:
return op(a, b)
except TypeError:
cls.incomparables.add(signature)
else:
print('Exception avoided for {}'.format(signature)) # for debug only
return wrapper
class compare:
incomparables = set()
for op in gt, lt, ge, le:
setattr(compare, op.__name__, classmethod(validate_operation(op)))
```
so that:
```
import datetime
print(compare.gt(1, 2.0))
print(compare.gt(1, "a"))
print(compare.gt(2, 'b'))
print(compare.lt(datetime.date(2018, 9, 25), datetime.datetime(2019, 1, 31)))
print(compare.lt(datetime.date(2019, 9, 25), datetime.datetime(2020, 1, 31)))
```
would output:
```
False
None
Exception avoided for (<built-in function gt>, <class 'int'>, <class 'str'>)
None
None
Exception avoided for (<built-in function lt>, <class 'datetime.date'>, <class 'datetime.datetime'>)
None
```
and so that you can use an `if` statement instead of an exception handler to validate a comparison:
```
result = compare.gt(obj1, obj2)
if result is None:
# handle the fact that we cannot perform the > operation on obj1 and obj2
elif result:
# obj1 is greater than obj2
else:
# obj1 is not greater than obj2
```
And here are some timing statistics:
```
from timeit import timeit
print(timeit('''try:
1 > 1
except TypeError:
pass''', globals=globals()))
print(timeit('''try:
1 > "a"
except TypeError:
pass''', globals=globals()))
print(timeit('compare.gt(1, "a")', globals=globals()))
```
This outputs, on my machine:
```
0.047088712933431365
0.7171912713398885
0.46406612257995117
```
As you can see, the cached comparison validation does save you around 1/3 of time when the comparison throws an exception, but is around 10 times slower when it doesn't, so this caching mechanism makes sense only if you anticipate that the vast majority of your comparisons are going to throw an exception. | What you could do is use `isinstance` before the comparison, and deal with the exceptions yourself.
```
if(isinstance(date_1,datetime) != isinstance(date_2,datetime)):
    #deal with the exception
    pass
``` | 3,741 |